Dataset schema (column, type, observed range):

  id              string, length 9 to 16
  submitter       string, length 3 to 64
  authors         string, length 5 to 6.63k
  title           string, length 7 to 245
  comments        string, length 1 to 482
  journal-ref     string, length 4 to 382
  doi             string, length 9 to 151
  report-no       string, 984 distinct values
  categories      string, length 5 to 108
  license         string, 9 distinct values
  abstract        string, length 83 to 3.41k
  versions        list, length 1 to 20
  update_date     timestamp[s], 2007-05-23 to 2025-04-11
  authors_parsed  sequence, length 1 to 427
  prompt          string, length 166 to 3.49k
  label           string, 2 distinct values
  prob            float64, 0.5 to 0.98
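The records below follow this schema, one field per line. As a minimal sketch, assuming the rows are exported to a JSON Lines file named arxiv_rows.jsonl (a hypothetical path, not part of the dataset card), the column statistics above could be reproduced with pandas:

```python
import pandas as pd

# Load the exported records; the file name is an assumption, not part of the dataset card.
df = pd.read_json("arxiv_rows.jsonl", lines=True)

# Reproduce the string-length ranges reported in the schema.
for col in ["id", "submitter", "authors", "title", "abstract", "prompt"]:
    lengths = df[col].dropna().str.len()
    print(f"{col:10s} length {lengths.min()} to {lengths.max()}")

# Distinct values and numeric range for the label and prob columns.
print("label values:", sorted(df["label"].unique()))
print("prob range:", df["prob"].min(), "to", df["prob"].max())
```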
1104.3538
Henrique Araujo
H. M. Araujo, D. Yu. Akimov, E. J. Barnes, V. A. Belov, A. Bewick, A. A. Burenkov, V. Chepel, A. Currie, L. DeViveiros, B. Edwards, C. Ghag, A. Hollingsworth, M. Horn, G. E. Kalmus, A. S. Kobyakin, A. G. Kovalenko, V. N. Lebedenko, A. Lindote, M. I. Lopes, R. Luscher, P. Majewski, A. StJ. Murphy, F. Neves, S. M. Paling, J. Pinto da Cunha, R. Preece, J. J. Quenby, L. Reichhart, P. R. Scovell, C. Silva, V. N. Solovov, N. J. T. Smith, P. F. Smith, V. N. Stekhanov, T. J. Sumner, C. Thorne, R. J. Walker
Radioactivity Backgrounds in ZEPLIN-III
12 pages, 5 figures
null
10.1016/j.astropartphys.2011.11.001
null
physics.ins-det hep-ex
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine electron and nuclear recoil backgrounds from radioactivity in the ZEPLIN-III dark matter experiment at Boulby. The rate of low-energy electron recoils in the liquid xenon WIMP target is 0.75$\pm$0.05 events/kg/day/keV, which represents a 20-fold improvement over the rate observed during the first science run. Energy and spatial distributions agree with those predicted by component-level Monte Carlo simulations propagating the effects of the radiological contamination measured for materials employed in the experiment. Neutron elastic scattering is predicted to yield 3.05$\pm$0.5 nuclear recoils with energy 5-50 keV per year, which translates to an expectation of 0.4 events in a 1-year dataset in anti-coincidence with the veto detector for realistic signal acceptance. Less obvious background sources are discussed, especially in the context of future experiments. These include contamination of scintillation pulses with Cherenkov light from Compton electrons and from $\beta$ activity internal to photomultipliers, which can increase the size and lower the apparent time constant of the scintillation response. Another challenge is posed by multiple-scatter $\gamma$-rays with one or more vertices in regions that yield no ionisation. If the discrimination power achieved in the first run can be replicated, ZEPLIN-III should reach a sensitivity of $\sim 1 \times 10^{-8}$ pb$\cdot$year to the scalar WIMP-nucleon elastic cross-section, as originally conceived.
[ { "version": "v1", "created": "Mon, 18 Apr 2011 16:45:26 GMT" }, { "version": "v2", "created": "Fri, 12 Aug 2011 13:19:49 GMT" } ]
2015-05-27T00:00:00
[ [ "Araujo", "H. M.", "" ], [ "Akimov", "D. Yu.", "" ], [ "Barnes", "E. J.", "" ], [ "Belov", "V. A.", "" ], [ "Bewick", "A.", "" ], [ "Burenkov", "A. A.", "" ], [ "Currie", "V. Chepel. A.", "" ], [ "DeViveiros", "L.", "" ], [ "Edwards", "B.", "" ], [ "Ghag", "C.", "" ], [ "Hollingsworth", "A.", "" ], [ "Horn", "M.", "" ], [ "Kalmus", "G. E.", "" ], [ "Kobyakin", "A. S.", "" ], [ "Kovalenko", "A. G.", "" ], [ "Lebedenko", "V. N.", "" ], [ "Lindote", "A.", "" ], [ "Lopes", "M. I.", "" ], [ "Luscher", "R.", "" ], [ "Majewski", "P.", "" ], [ "Neves", "A. StJ. Murphy. F.", "" ], [ "Paling", "S. M.", "" ], [ "da Cunha", "J. Pinto", "" ], [ "Preece", "R.", "" ], [ "Quenby", "J. J.", "" ], [ "Reichhart", "L.", "" ], [ "Scovell", "P. R.", "" ], [ "Silva", "C.", "" ], [ "Solovov", "V. N.", "" ], [ "Smith", "N. J. T.", "" ], [ "Smith", "P. F.", "" ], [ "Stekhanov", "V. N.", "" ], [ "Sumner", "T. J.", "" ], [ "Thorne", "C.", "" ], [ "Walker", "R. J.", "" ] ]
TITLE: Radioactivity Backgrounds in ZEPLIN-III ABSTRACT: We examine electron and nuclear recoil backgrounds from radioactivity in the ZEPLIN-III dark matter experiment at Boulby. The rate of low-energy electron recoils in the liquid xenon WIMP target is 0.75$\pm$0.05 events/kg/day/keV, which represents a 20-fold improvement over the rate observed during the first science run. Energy and spatial distributions agree with those predicted by component-level Monte Carlo simulations propagating the effects of the radiological contamination measured for materials employed in the experiment. Neutron elastic scattering is predicted to yield 3.05$\pm$0.5 nuclear recoils with energy 5-50 keV per year, which translates to an expectation of 0.4 events in a 1-year dataset in anti-coincidence with the veto detector for realistic signal acceptance. Less obvious background sources are discussed, especially in the context of future experiments. These include contamination of scintillation pulses with Cherenkov light from Compton electrons and from $\beta$ activity internal to photomultipliers, which can increase the size and lower the apparent time constant of the scintillation response. Another challenge is posed by multiple-scatter $\gamma$-rays with one or more vertices in regions that yield no ionisation. If the discrimination power achieved in the first run can be replicated, ZEPLIN-III should reach a sensitivity of $\sim 1 \times 10^{-8}$ pb$\cdot$year to the scalar WIMP-nucleon elastic cross-section, as originally conceived.
no_new_dataset
0.93276
1104.4512
Vassilis Kekatos
Pedro A. Forero, Vassilis Kekatos, Georgios B. Giannakis
Robust Clustering Using Outlier-Sparsity Regularization
Submitted to IEEE Trans. on PAMI
null
10.1109/TSP.2012.2196696
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Notwithstanding the popularity of conventional clustering algorithms such as K-means and probabilistic clustering, their clustering results are sensitive to the presence of outliers in the data. Even a few outliers can compromise the ability of these algorithms to identify meaningful hidden structures rendering their outcome unreliable. This paper develops robust clustering algorithms that not only aim to cluster the data, but also to identify the outliers. The novel approaches rely on the infrequent presence of outliers in the data which translates to sparsity in a judiciously chosen domain. Capitalizing on the sparsity in the outlier domain, outlier-aware robust K-means and probabilistic clustering approaches are proposed. Their novelty lies on identifying outliers while effecting sparsity in the outlier domain through carefully chosen regularization. A block coordinate descent approach is developed to obtain iterative algorithms with convergence guarantees and small excess computational complexity with respect to their non-robust counterparts. Kernelized versions of the robust clustering algorithms are also developed to efficiently handle high-dimensional data, identify nonlinearly separable clusters, or even cluster objects that are not represented by vectors. Numerical tests on both synthetic and real datasets validate the performance and applicability of the novel algorithms.
[ { "version": "v1", "created": "Fri, 22 Apr 2011 22:01:14 GMT" } ]
2015-05-27T00:00:00
[ [ "Forero", "Pedro A.", "" ], [ "Kekatos", "Vassilis", "" ], [ "Giannakis", "Georgios B.", "" ] ]
TITLE: Robust Clustering Using Outlier-Sparsity Regularization ABSTRACT: Notwithstanding the popularity of conventional clustering algorithms such as K-means and probabilistic clustering, their clustering results are sensitive to the presence of outliers in the data. Even a few outliers can compromise the ability of these algorithms to identify meaningful hidden structures rendering their outcome unreliable. This paper develops robust clustering algorithms that not only aim to cluster the data, but also to identify the outliers. The novel approaches rely on the infrequent presence of outliers in the data which translates to sparsity in a judiciously chosen domain. Capitalizing on the sparsity in the outlier domain, outlier-aware robust K-means and probabilistic clustering approaches are proposed. Their novelty lies on identifying outliers while effecting sparsity in the outlier domain through carefully chosen regularization. A block coordinate descent approach is developed to obtain iterative algorithms with convergence guarantees and small excess computational complexity with respect to their non-robust counterparts. Kernelized versions of the robust clustering algorithms are also developed to efficiently handle high-dimensional data, identify nonlinearly separable clusters, or even cluster objects that are not represented by vectors. Numerical tests on both synthetic and real datasets validate the performance and applicability of the novel algorithms.
no_new_dataset
0.951729
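The record above describes outlier-aware K-means with a group-sparsity penalty on per-point outlier vectors, solved by block coordinate descent. The paper's own algorithm is not reproduced here; the following is a minimal NumPy sketch of that general idea, with the regularization weight lam, the update order, and the convergence criterion chosen purely for illustration.

```python
import numpy as np

def robust_kmeans(X, k, lam, n_iter=50, seed=0):
    # Outlier-aware K-means sketch: each point i gets an outlier vector o_i,
    # penalized by lam * ||o_i||_2, so most o_i stay exactly zero.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)]
    O = np.zeros_like(X)
    for _ in range(n_iter):
        # assign each "cleaned" point to its nearest center
        R = X - O
        dists = ((R[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # update outlier vectors by group soft-thresholding of the residuals
        resid = X - centers[labels]
        norms = np.linalg.norm(resid, axis=1, keepdims=True)
        shrink = np.clip(1 - lam / np.maximum(norms, 1e-12), 0, None)
        O = shrink * resid
        # update centers from the cleaned data
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = (X[mask] - O[mask]).mean(0)
    outliers = np.linalg.norm(O, axis=1) > 0
    return labels, centers, outliers
```

In this sketch lam controls how many points are declared outliers; letting lam grow without bound forces every o_i to zero and recovers ordinary K-means.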
1406.5141
Adrian Melott
A.L. Melott (Kansas), I.G. Usoskin (Oulu), G.A. Kovaltsov (St. Petersburg), and C.M. Laird (Kansas)
Has the Earth been exposed to numerous supernovae within the last 300 kyr?
8 pages, 1 figure. To be published in the International Journal of Astrobiology
International Journal of Astrobiology 14, 375-378 (2015)
10.1017/S1473550414000512
null
astro-ph.EP astro-ph.HE astro-ph.SR physics.geo-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Firestone (2014) asserted evidence for numerous (23) nearby (d<300 pc) supernovae within the Middle and Late Pleistocene. If true, this would have strong implications for the irradiation of the Earth; at this rate, mass extinction level events due to supernovae would be more frequent than 100 Myr. However, there are numerous errors in the application of past research. The paper overestimates likely nitrate and 14C production from moderately nearby supernovae by about four orders of magnitude. Moreover, the results are based on wrongly selected (obsolete) nitrate and 14C datasets. The use of correct and up-to-date datasets does not confirm the claimed results. The claims in the paper are invalidated.
[ { "version": "v1", "created": "Thu, 19 Jun 2014 18:36:33 GMT" }, { "version": "v2", "created": "Thu, 25 Sep 2014 14:21:05 GMT" } ]
2015-05-27T00:00:00
[ [ "Melott", "A. L.", "", "Kansas" ], [ "Usoskin", "I. G.", "", "Oulu" ], [ "Kovaltsov", "G. A", "", "St.\n Petersburg" ], [ "Laird", "C. M.", "", "Kansas" ] ]
TITLE: Has the Earth been exposed to numerous supernovae within the last 300 kyr? ABSTRACT: Firestone (2014) asserted evidence for numerous (23) nearby (d<300 pc) supernovae within the Middle and Late Pleistocene. If true, this would have strong implications for the irradiation of the Earth; at this rate, mass extinction level events due to supernovae would be more frequent than 100 Myr. However, there are numerous errors in the application of past research. The paper overestimates likely nitrate and 14C production from moderately nearby supernovae by about four orders of magnitude. Moreover, the results are based on wrongly selected (obsolete) nitrate and 14C datasets. The use of correct and up-to-date datasets does not confirm the claimed results. The claims in the paper are invalidated.
no_new_dataset
0.942612
1505.06751
Ayad Ghany Ismaeel
Ayad Ghany Ismaeel, Raghad Zuhair Yousif
Novel Mining of Cancer via Mutation in Tumor Protein P53 using Quick Propagation Network
6 pages, 9 figures, 2 tables
International Journal of Computer Science and Electronics Engineering IJCSEE, Volume 3, Issue 2, 2015
null
null
cs.CE
http://creativecommons.org/licenses/by/3.0/
Multiple databases contain datasets of the TP53 gene and its tumor protein P53, which is believed to be involved in over 50% of human cancer cases. These databases are rich, as their datasets cover all mutations that cause diseases (cancers), but they lack an efficient mining method that can classify and diagnose a patient's mutations and then predict that patient's cancer. This paper proposes a novel mining of cancer via mutations, since no previous mining method offers a friendly, effective and flexible way to predict or diagnose cancers using the whole common database of the TP53 gene (tumor protein P53) as the dataset while selecting a minimum number of fields for training and testing the quick propagation algorithm that supports this mining method. Simulating the quick propagation network on the training dataset gives a Correlation of 0.9999, an R-squared of 0.9998 and a mean Absolute Relative Error of 0.0029, while training on all datasets (train, test and validation) gives a Correlation of 0.9993, an R-squared of 0.9987 and a mean Absolute Relative Error of 0.0057.
[ { "version": "v1", "created": "Wed, 20 May 2015 15:46:45 GMT" } ]
2015-05-27T00:00:00
[ [ "Ismaeel", "Ayad Ghany", "" ], [ "Yousif", "Raghad Zuhair", "" ] ]
TITLE: Novel Mining of Cancer via Mutation in Tumor Protein P53 using Quick Propagation Network ABSTRACT: Multiple databases contain datasets of the TP53 gene and its tumor protein P53, which is believed to be involved in over 50% of human cancer cases. These databases are rich, as their datasets cover all mutations that cause diseases (cancers), but they lack an efficient mining method that can classify and diagnose a patient's mutations and then predict that patient's cancer. This paper proposes a novel mining of cancer via mutations, since no previous mining method offers a friendly, effective and flexible way to predict or diagnose cancers using the whole common database of the TP53 gene (tumor protein P53) as the dataset while selecting a minimum number of fields for training and testing the quick propagation algorithm that supports this mining method. Simulating the quick propagation network on the training dataset gives a Correlation of 0.9999, an R-squared of 0.9998 and a mean Absolute Relative Error of 0.0029, while training on all datasets (train, test and validation) gives a Correlation of 0.9993, an R-squared of 0.9987 and a mean Absolute Relative Error of 0.0057.
no_new_dataset
0.946051
1505.06800
Baochang Zhang
Lei Wang, Baochang Zhang
Boosting-like Deep Learning For Pedestrian Detection
9 pages,7 figures
null
null
null
cs.CV cs.LG cs.NE
http://creativecommons.org/licenses/by-nc-sa/3.0/
This paper proposes a boosting-like deep learning (BDL) framework for pedestrian detection. Due to overtraining on the limited training samples, overfitting is a major problem of deep learning. We incorporate a boosting-like technique into deep learning to weigh the training samples, and thus prevent overtraining in the iterative process. We give the theoretical details of the derivation of our algorithm, and report experimental results on open data sets showing that BDL achieves better and more stable performance than the state of the art. Our approach achieves 15.85% and 3.81% reductions in the average miss rate compared with ACF and JointDeep on the largest Caltech benchmark dataset, respectively.
[ { "version": "v1", "created": "Tue, 26 May 2015 03:52:52 GMT" } ]
2015-05-27T00:00:00
[ [ "Wang", "Lei", "" ], [ "Zhang", "Baochang", "" ] ]
TITLE: Boosting-like Deep Learning For Pedestrian Detection ABSTRACT: This paper proposes a boosting-like deep learning (BDL) framework for pedestrian detection. Due to overtraining on the limited training samples, overfitting is a major problem of deep learning. We incorporate a boosting-like technique into deep learning to weigh the training samples, and thus prevent overtraining in the iterative process. We give the theoretical details of the derivation of our algorithm, and report experimental results on open data sets showing that BDL achieves better and more stable performance than the state of the art. Our approach achieves 15.85% and 3.81% reductions in the average miss rate compared with ACF and JointDeep on the largest Caltech benchmark dataset, respectively.
no_new_dataset
0.948917
1505.06812
Purushottam Kar
Harikrishna Narasimhan and Purushottam Kar and Prateek Jain
Optimizing Non-decomposable Performance Measures: A Tale of Two Classes
To appear in proceedings of the 32nd International Conference on Machine Learning (ICML 2015)
Journal of Machine Learning Research, W&CP 37 (2015)
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern classification problems frequently present mild to severe label imbalance as well as specific requirements on classification characteristics, and require optimizing performance measures that are non-decomposable over the dataset, such as F-measure. Such measures have spurred much interest and pose specific challenges to learning algorithms since their non-additive nature precludes a direct application of well-studied large scale optimization methods such as stochastic gradient descent. In this paper we reveal that for two large families of performance measures that can be expressed as functions of true positive/negative rates, it is indeed possible to implement point stochastic updates. The families we consider are concave and pseudo-linear functions of TPR, TNR which cover several popularly used performance measures such as F-measure, G-mean and H-mean. Our core contribution is an adaptive linearization scheme for these families, using which we develop optimization techniques that enable truly point-based stochastic updates. For concave performance measures we propose SPADE, a stochastic primal dual solver; for pseudo-linear measures we propose STAMP, a stochastic alternate maximization procedure. Both methods have crisp convergence guarantees, demonstrate significant speedups over existing methods - often by an order of magnitude or more, and give similar or more accurate predictions on test data.
[ { "version": "v1", "created": "Tue, 26 May 2015 05:59:33 GMT" } ]
2015-05-27T00:00:00
[ [ "Narasimhan", "Harikrishna", "" ], [ "Kar", "Purushottam", "" ], [ "Jain", "Prateek", "" ] ]
TITLE: Optimizing Non-decomposable Performance Measures: A Tale of Two Classes ABSTRACT: Modern classification problems frequently present mild to severe label imbalance as well as specific requirements on classification characteristics, and require optimizing performance measures that are non-decomposable over the dataset, such as F-measure. Such measures have spurred much interest and pose specific challenges to learning algorithms since their non-additive nature precludes a direct application of well-studied large scale optimization methods such as stochastic gradient descent. In this paper we reveal that for two large families of performance measures that can be expressed as functions of true positive/negative rates, it is indeed possible to implement point stochastic updates. The families we consider are concave and pseudo-linear functions of TPR, TNR which cover several popularly used performance measures such as F-measure, G-mean and H-mean. Our core contribution is an adaptive linearization scheme for these families, using which we develop optimization techniques that enable truly point-based stochastic updates. For concave performance measures we propose SPADE, a stochastic primal dual solver; for pseudo-linear measures we propose STAMP, a stochastic alternate maximization procedure. Both methods have crisp convergence guarantees, demonstrate significant speedups over existing methods - often by an order of magnitude or more, and give similar or more accurate predictions on test data.
no_new_dataset
0.941115
1505.06814
Francesco Palmieri A. N.
Francesco A. N. Palmieri and Amedeo Buonanno
Discrete Independent Component Analysis (DICA) with Belief Propagation
Submitted for publication (May 2015)
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply belief propagation to a Bayesian bipartite graph composed of discrete independent hidden variables and discrete visible variables. The network is the Discrete counterpart of Independent Component Analysis (DICA) and it is manipulated in a factor graph form for inference and learning. A full set of simulations is reported for character images from the MNIST dataset. The results show that the factorial code implemented by the sources contributes to build a good generative model for the data that can be used in various inference modes.
[ { "version": "v1", "created": "Tue, 26 May 2015 06:02:05 GMT" } ]
2015-05-27T00:00:00
[ [ "Palmieri", "Francesco A. N.", "" ], [ "Buonanno", "Amedeo", "" ] ]
TITLE: Discrete Independent Component Analysis (DICA) with Belief Propagation ABSTRACT: We apply belief propagation to a Bayesian bipartite graph composed of discrete independent hidden variables and discrete visible variables. The network is the Discrete counterpart of Independent Component Analysis (DICA) and it is manipulated in a factor graph form for inference and learning. A full set of simulations is reported for character images from the MNIST dataset. The results show that the factorial code implemented by the sources contributes to build a good generative model for the data that can be used in various inference modes.
no_new_dataset
0.947381
1505.06907
Andreas Gr\"unauer
Andreas Gr\"unauer and Markus Vincze
Using Dimension Reduction to Improve the Classification of High-dimensional Data
Presented at OAGM Workshop, 2015 (arXiv:1505.01065)
null
null
OAGM/2015/09
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we show that the classification performance of high-dimensional structural MRI data with only a small set of training examples is improved by the usage of dimension reduction methods. We assessed two different dimension reduction variants: feature selection by ANOVA F-test and feature transformation by PCA. On the reduced datasets, we applied common learning algorithms using 5-fold cross-validation. Training, tuning of the hyperparameters, as well as the performance evaluation of the classifiers was conducted using two different performance measures: Accuracy, and Receiver Operating Characteristic curve (AUC). Our hypothesis is supported by experimental results.
[ { "version": "v1", "created": "Tue, 26 May 2015 11:33:04 GMT" } ]
2015-05-27T00:00:00
[ [ "Grünauer", "Andreas", "" ], [ "Vincze", "Markus", "" ] ]
TITLE: Using Dimension Reduction to Improve the Classification of High-dimensional Data ABSTRACT: In this work we show that the classification performance of high-dimensional structural MRI data with only a small set of training examples is improved by the usage of dimension reduction methods. We assessed two different dimension reduction variants: feature selection by ANOVA F-test and feature transformation by PCA. On the reduced datasets, we applied common learning algorithms using 5-fold cross-validation. Training, tuning of the hyperparameters, as well as the performance evaluation of the classifiers was conducted using two different performance measures: Accuracy, and Receiver Operating Characteristic curve (AUC). Our hypothesis is supported by experimental results.
no_new_dataset
0.953013
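The record above compares ANOVA F-test feature selection and PCA before classification, evaluated with 5-fold cross-validation on accuracy and AUC. A minimal scikit-learn sketch of that kind of pipeline is shown below; the synthetic data, component counts, and logistic-regression classifier are illustrative assumptions, not the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for high-dimensional MRI features: few samples, many features.
X, y = make_classification(n_samples=60, n_features=5000, n_informative=20, random_state=0)

pipelines = {
    "anova": Pipeline([("scale", StandardScaler()),
                       ("select", SelectKBest(f_classif, k=100)),
                       ("clf", LogisticRegression(max_iter=1000))]),
    "pca": Pipeline([("scale", StandardScaler()),
                     ("reduce", PCA(n_components=20)),
                     ("clf", LogisticRegression(max_iter=1000))]),
}

# 5-fold cross-validation with the two performance measures used in the record.
for name, pipe in pipelines.items():
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: accuracy={acc:.3f}, AUC={auc:.3f}")
```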
1505.06915
Jean-Philippe Vert
K\'evin Vervier (CBIO), Pierre Mah\'e, Maud Tournoud, Jean-Baptiste Veyrieras, Jean-Philippe Vert (CBIO)
Large-scale Machine Learning for Metagenomics Sequence Classification
null
null
null
null
q-bio.QM cs.CE cs.LG q-bio.GN stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metagenomics characterizes the taxonomic diversity of microbial communities by sequencing DNA directly from an environmental sample. One of the main challenges in metagenomics data analysis is the binning step, where each sequenced read is assigned to a taxonomic clade. Due to the large volume of metagenomics datasets, binning methods need fast and accurate algorithms that can operate with reasonable computing requirements. While standard alignment-based methods provide state-of-the-art performance, compositional approaches that assign a taxonomic class to a DNA read based on the k-mers it contains have the potential to provide faster solutions. In this work, we investigate the potential of modern, large-scale machine learning implementations for taxonomic assignment of next-generation sequencing reads based on their k-mer profiles. We show that machine learning-based compositional approaches benefit from increasing the number of fragments sampled from reference genomes to tune their parameters, up to a coverage of about 10, and from increasing the k-mer size to about 12. Tuning these models involves training a machine learning model on about 10^8 samples in 10^7 dimensions, which is out of reach of standard software but can be done efficiently with modern implementations for large-scale machine learning. The resulting models are competitive in terms of accuracy with well-established alignment tools for problems involving a small to moderate number of candidate species, and for reasonable amounts of sequencing errors. We show, however, that compositional approaches are still limited in their ability to deal with problems involving a greater number of species, and are more sensitive to sequencing errors. We finally confirm that compositional approaches achieve faster prediction times, with a gain of 3 to 15 times with respect to the BWA-MEM short read mapper, depending on the number of candidate species and the level of sequencing noise.
[ { "version": "v1", "created": "Tue, 26 May 2015 12:02:04 GMT" } ]
2015-05-27T00:00:00
[ [ "Vervier", "Kévin", "", "CBIO" ], [ "Mahé", "Pierre", "", "CBIO" ], [ "Tournoud", "Maud", "", "CBIO" ], [ "Veyrieras", "Jean-Baptiste", "", "CBIO" ], [ "Vert", "Jean-Philippe", "", "CBIO" ] ]
TITLE: Large-scale Machine Learning for Metagenomics Sequence Classification ABSTRACT: Metagenomics characterizes the taxonomic diversity of microbial communities by sequencing DNA directly from an environmental sample. One of the main challenges in metagenomics data analysis is the binning step, where each sequenced read is assigned to a taxonomic clade. Due to the large volume of metagenomics datasets, binning methods need fast and accurate algorithms that can operate with reasonable computing requirements. While standard alignment-based methods provide state-of-the-art performance, compositional approaches that assign a taxonomic class to a DNA read based on the k-mers it contains have the potential to provide faster solutions. In this work, we investigate the potential of modern, large-scale machine learning implementations for taxonomic assignment of next-generation sequencing reads based on their k-mer profiles. We show that machine learning-based compositional approaches benefit from increasing the number of fragments sampled from reference genomes to tune their parameters, up to a coverage of about 10, and from increasing the k-mer size to about 12. Tuning these models involves training a machine learning model on about 10^8 samples in 10^7 dimensions, which is out of reach of standard software but can be done efficiently with modern implementations for large-scale machine learning. The resulting models are competitive in terms of accuracy with well-established alignment tools for problems involving a small to moderate number of candidate species, and for reasonable amounts of sequencing errors. We show, however, that compositional approaches are still limited in their ability to deal with problems involving a greater number of species, and are more sensitive to sequencing errors. We finally confirm that compositional approaches achieve faster prediction times, with a gain of 3 to 15 times with respect to the BWA-MEM short read mapper, depending on the number of candidate species and the level of sequencing noise.
no_new_dataset
0.948394
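The record above trains large-scale linear models on k-mer profiles of sequencing reads. The sketch below is not the authors' pipeline; it only illustrates the general recipe they describe, with k=12 character n-grams hashed into a fixed-size feature space and an out-of-core linear classifier, using toy random data and arbitrarily chosen sizes.

```python
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Hash 12-mers of each read into 2**20 dimensions (the record works at ~10^7 dims).
vectorizer = HashingVectorizer(analyzer="char", ngram_range=(12, 12),
                               n_features=2**20, alternate_sign=False)
clf = SGDClassifier(alpha=1e-6, random_state=0)  # linear SVM trained by SGD

def batches(reads, labels, size=1000):
    for start in range(0, len(reads), size):
        yield reads[start:start + size], labels[start:start + size]

# Toy stand-in data: random DNA fragments for two "species".
rng = np.random.default_rng(0)
reads = ["".join(rng.choice(list("ACGT"), 200)) for _ in range(2000)]
labels = rng.integers(0, 2, size=2000)

classes = np.unique(labels)
for batch_reads, batch_labels in batches(reads, labels):
    X = vectorizer.transform(batch_reads)               # sparse hashed k-mer counts
    clf.partial_fit(X, batch_labels, classes=classes)   # out-of-core update
```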
1505.06918
Roman Lutz
Roman Lutz
Fantasy Football Prediction
class project, 7 pages (1 source, 1 appendix)
null
null
null
cs.LG
http://creativecommons.org/licenses/by/3.0/
The ubiquity of professional sports, and specifically the NFL, has led to an increase in the popularity of Fantasy Football. Users have many tools at their disposal: statistics, predictions, rankings of experts and even recommendations of peers. There are issues with all of these, though. Especially since many people pay money to play, the prediction tools should be enhanced as they provide unbiased and easy-to-use assistance for users. This paper provides and discusses approaches to predict Fantasy Football scores of Quarterbacks with relatively limited data. In addition to that, it includes several suggestions on how the data could be enhanced to achieve better results. The dataset consists only of game data from the last six NFL seasons. I used two different methods to predict the Fantasy Football scores of NFL players: Support Vector Regression (SVR) and Neural Networks. The results of both are promising given the limited data that was used.
[ { "version": "v1", "created": "Tue, 26 May 2015 12:14:56 GMT" } ]
2015-05-27T00:00:00
[ [ "Lutz", "Roman", "" ] ]
TITLE: Fantasy Football Prediction ABSTRACT: The ubiquity of professional sports, and specifically the NFL, has led to an increase in the popularity of Fantasy Football. Users have many tools at their disposal: statistics, predictions, rankings of experts and even recommendations of peers. There are issues with all of these, though. Especially since many people pay money to play, the prediction tools should be enhanced as they provide unbiased and easy-to-use assistance for users. This paper provides and discusses approaches to predict Fantasy Football scores of Quarterbacks with relatively limited data. In addition to that, it includes several suggestions on how the data could be enhanced to achieve better results. The dataset consists only of game data from the last six NFL seasons. I used two different methods to predict the Fantasy Football scores of NFL players: Support Vector Regression (SVR) and Neural Networks. The results of both are promising given the limited data that was used.
no_new_dataset
0.950503
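The record above predicts quarterback fantasy scores from past game data using Support Vector Regression and neural networks. As a minimal illustration of the SVR half only, the sketch below uses made-up per-game features and synthetic scores; the feature choice and hyperparameters are assumptions, not the paper's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Toy per-game features for one quarterback: [pass_yards, pass_tds, interceptions,
# rush_yards] averaged over the previous games; target is the fantasy score.
rng = np.random.default_rng(0)
X = rng.normal([250, 1.5, 0.8, 15], [60, 1.0, 0.7, 10], size=(96, 4))
y = 0.04 * X[:, 0] + 4 * X[:, 1] - 2 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 2, 96)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
model.fit(X_train, y_train)
print("R^2 on held-out games:", round(model.score(X_test, y_test), 3))
```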
1505.06999
Luis Ortiz
Joshua Belanich and Luis E. Ortiz
Some Open Problems in Optimal AdaBoost and Decision Stumps
4 pages, rejected from COLT15 Open Problems May 19, 2015 (submitted April 21, 2015; original 3 pages in COLT-conference format)
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The significance of the study of the theoretical and practical properties of AdaBoost is unquestionable, given its simplicity, wide practical use, and effectiveness on real-world datasets. Here we present a few open problems regarding the behavior of "Optimal AdaBoost," a term coined by Rudin, Daubechies, and Schapire in 2004 to label the simple version of the standard AdaBoost algorithm in which the weak learner that AdaBoost uses always outputs the weak classifier with lowest weighted error among the respective hypothesis class of weak classifiers implicit in the weak learner. We concentrate on the standard, "vanilla" version of Optimal AdaBoost for binary classification that results from using an exponential-loss upper bound on the misclassification training error. We present two types of open problems. One deals with general weak hypotheses. The other deals with the particular case of decision stumps, as often and commonly used in practice. Answers to the open problems can have immediate significant impact to (1) cementing previously established results on asymptotic convergence properties of Optimal AdaBoost, for finite datasets, which in turn can be the start to any convergence-rate analysis; (2) understanding the weak-hypotheses class of effective decision stumps generated from data, which we have empirically observed to be significantly smaller than the typically obtained class, as well as the effect on the weak learner's running time and previously established improved bounds on the generalization performance of Optimal AdaBoost classifiers; and (3) shedding some light on the "self control" that AdaBoost tends to exhibit in practice.
[ { "version": "v1", "created": "Tue, 26 May 2015 15:18:33 GMT" } ]
2015-05-27T00:00:00
[ [ "Belanich", "Joshua", "" ], [ "Ortiz", "Luis E.", "" ] ]
TITLE: Some Open Problems in Optimal AdaBoost and Decision Stumps ABSTRACT: The significance of the study of the theoretical and practical properties of AdaBoost is unquestionable, given its simplicity, wide practical use, and effectiveness on real-world datasets. Here we present a few open problems regarding the behavior of "Optimal AdaBoost," a term coined by Rudin, Daubechies, and Schapire in 2004 to label the simple version of the standard AdaBoost algorithm in which the weak learner that AdaBoost uses always outputs the weak classifier with lowest weighted error among the respective hypothesis class of weak classifiers implicit in the weak learner. We concentrate on the standard, "vanilla" version of Optimal AdaBoost for binary classification that results from using an exponential-loss upper bound on the misclassification training error. We present two types of open problems. One deals with general weak hypotheses. The other deals with the particular case of decision stumps, as often and commonly used in practice. Answers to the open problems can have immediate significant impact to (1) cementing previously established results on asymptotic convergence properties of Optimal AdaBoost, for finite datasets, which in turn can be the start to any convergence-rate analysis; (2) understanding the weak-hypotheses class of effective decision stumps generated from data, which we have empirically observed to be significantly smaller than the typically obtained class, as well as the effect on the weak learner's running time and previously established improved bounds on the generalization performance of Optimal AdaBoost classifiers; and (3) shedding some light on the "self control" that AdaBoost tends to exhibit in practice.
no_new_dataset
0.94366
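The open problems above concern Optimal AdaBoost with decision stumps, where the weak learner returns the minimum-weighted-error stump at every round. For reference, a compact NumPy sketch of that loop (exhaustive stump search over thresholds, exponential-loss weight updates) is given below; it follows the standard algorithm, not any result specific to the paper.

```python
import numpy as np

def best_stump(X, y, w):
    """Exhaustive search for the decision stump with lowest weighted error.
    y must be in {-1, +1}; returns (feature, threshold, polarity, error)."""
    n, d = X.shape
    best = (0, 0.0, 1, np.inf)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, polarity, err)
    return best

def optimal_adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)             # example weights
    ensemble = []                       # list of (alpha, feature, threshold, polarity)
    for _ in range(rounds):
        j, thr, pol, err = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)  # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1) for a, j, t, p in ensemble)
    return np.sign(score)
```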
1407.3859
Jeremy Kepner
Jeremy Kepner, Christian Anderson, William Arcand, David Bestor, Bill Bergeron, Chansup Byun, Matthew Hubbell, Peter Michaleas, Julie Mullen, David O'Gwynn, Andrew Prout, Albert Reuther, Antonio Rosa, Charles Yee (MIT)
D4M 2.0 Schema: A General Purpose High Performance Schema for the Accumulo Database
6 pages; IEEE HPEC 2013
null
10.1109/HPEC.2013.6670318
null
cs.DB astro-ph.IM cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-traditional, relaxed consistency, triple store databases are the backbone of many web companies (e.g., Google Big Table, Amazon Dynamo, and Facebook Cassandra). The Apache Accumulo database is a high performance open source relaxed consistency database that is widely used for government applications. Obtaining the full benefits of Accumulo requires using novel schemas. The Dynamic Distributed Dimensional Data Model (D4M)[http://d4m.mit.edu] provides a uniform mathematical framework based on associative arrays that encompasses both traditional (i.e., SQL) and non-traditional databases. For non-traditional databases D4M naturally leads to a general purpose schema that can be used to fully index and rapidly query every unique string in a dataset. The D4M 2.0 Schema has been applied with little or no customization to cyber, bioinformatics, scientific citation, free text, and social media data. The D4M 2.0 Schema is simple, requires minimal parsing, and achieves the highest published Accumulo ingest rates. The benefits of the D4M 2.0 Schema are independent of the D4M interface. Any interface to Accumulo can achieve these benefits by using the D4M 2.0 Schema
[ { "version": "v1", "created": "Tue, 15 Jul 2014 01:54:45 GMT" } ]
2015-05-26T00:00:00
[ [ "Kepner", "Jeremy", "", "MIT" ], [ "Anderson", "Christian", "", "MIT" ], [ "Arcand", "William", "", "MIT" ], [ "Bestor", "David", "", "MIT" ], [ "Bergeron", "Bill", "", "MIT" ], [ "Byun", "Chansup", "", "MIT" ], [ "Hubbell", "Matthew", "", "MIT" ], [ "Michaleas", "Peter", "", "MIT" ], [ "Mullen", "Julie", "", "MIT" ], [ "O'Gwynn", "David", "", "MIT" ], [ "Prout", "Andrew", "", "MIT" ], [ "Reuther", "Albert", "", "MIT" ], [ "Rosa", "Antonio", "", "MIT" ], [ "Yee", "Charles", "", "MIT" ] ]
TITLE: D4M 2.0 Schema: A General Purpose High Performance Schema for the Accumulo Database ABSTRACT: Non-traditional, relaxed consistency, triple store databases are the backbone of many web companies (e.g., Google Big Table, Amazon Dynamo, and Facebook Cassandra). The Apache Accumulo database is a high performance open source relaxed consistency database that is widely used for government applications. Obtaining the full benefits of Accumulo requires using novel schemas. The Dynamic Distributed Dimensional Data Model (D4M)[http://d4m.mit.edu] provides a uniform mathematical framework based on associative arrays that encompasses both traditional (i.e., SQL) and non-traditional databases. For non-traditional databases D4M naturally leads to a general purpose schema that can be used to fully index and rapidly query every unique string in a dataset. The D4M 2.0 Schema has been applied with little or no customization to cyber, bioinformatics, scientific citation, free text, and social media data. The D4M 2.0 Schema is simple, requires minimal parsing, and achieves the highest published Accumulo ingest rates. The benefits of the D4M 2.0 Schema are independent of the D4M interface. Any interface to Accumulo can achieve these benefits by using the D4M 2.0 Schema
no_new_dataset
0.949153
1412.1740
Matthew Kusner
Matt J. Kusner, Nicholas I. Kolkin, Stephen Tyree, Kilian Q. Weinberger
Image Data Compression for Covariance and Histogram Descriptors
null
null
null
null
stat.ML cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Covariance and histogram image descriptors provide an effective way to capture information about images. Both excel when used in combination with special purpose distance metrics. For covariance descriptors these metrics measure the distance along the non-Euclidean Riemannian manifold of symmetric positive definite matrices. For histogram descriptors the Earth Mover's distance measures the optimal transport between two histograms. Although more precise, these distance metrics are very expensive to compute, making them impractical in many applications, even for data sets of only a few thousand examples. In this paper we present two methods to compress the size of covariance and histogram datasets with only marginal increases in test error for k-nearest neighbor classification. Specifically, we show that we can reduce data sets to 16% and in some cases as little as 2% of their original size, while approximately matching the test error of kNN classification on the full training set. In fact, because the compressed set is learned in a supervised fashion, it sometimes even outperforms the full data set, while requiring only a fraction of the space and drastically reducing test-time computation.
[ { "version": "v1", "created": "Thu, 4 Dec 2014 17:22:22 GMT" }, { "version": "v2", "created": "Sat, 23 May 2015 17:07:59 GMT" } ]
2015-05-26T00:00:00
[ [ "Kusner", "Matt J.", "" ], [ "Kolkin", "Nicholas I.", "" ], [ "Tyree", "Stephen", "" ], [ "Weinberger", "Kilian Q.", "" ] ]
TITLE: Image Data Compression for Covariance and Histogram Descriptors ABSTRACT: Covariance and histogram image descriptors provide an effective way to capture information about images. Both excel when used in combination with special purpose distance metrics. For covariance descriptors these metrics measure the distance along the non-Euclidean Riemannian manifold of symmetric positive definite matrices. For histogram descriptors the Earth Mover's distance measures the optimal transport between two histograms. Although more precise, these distance metrics are very expensive to compute, making them impractical in many applications, even for data sets of only a few thousand examples. In this paper we present two methods to compress the size of covariance and histogram datasets with only marginal increases in test error for k-nearest neighbor classification. Specifically, we show that we can reduce data sets to 16% and in some cases as little as 2% of their original size, while approximately matching the test error of kNN classification on the full training set. In fact, because the compressed set is learned in a supervised fashion, it sometimes even outperforms the full data set, while requiring only a fraction of the space and drastically reducing test-time computation.
no_new_dataset
0.950227
1504.06580
Cicero dos Santos
Cicero Nogueira dos Santos, Bing Xiang, Bowen Zhou
Classifying Relations by Ranking with Convolutional Neural Networks
Accepted as a long paper in the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015)
null
null
null
cs.CL cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Relation classification is an important semantic processing task for which state-of-the-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CR-CNN, we outperform the state-of-the-art for this dataset and achieve an F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than a CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals.
[ { "version": "v1", "created": "Fri, 24 Apr 2015 17:50:33 GMT" }, { "version": "v2", "created": "Sun, 24 May 2015 13:58:05 GMT" } ]
2015-05-26T00:00:00
[ [ "Santos", "Cicero Nogueira dos", "" ], [ "Xiang", "Bing", "" ], [ "Zhou", "Bowen", "" ] ]
TITLE: Classifying Relations by Ranking with Convolutional Neural Networks ABSTRACT: Relation classification is an important semantic processing task for which state-of-the-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CR-CNN, we outperform the state-of-the-art for this dataset and achieve an F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than a CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals.
no_new_dataset
0.946646
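The record above replaces the softmax with a pairwise ranking objective that scores the correct relation class against a competing wrong class and can leave the artificial class Other without a representation. The NumPy sketch below shows a generic ranking loss of that shape; the margins m_pos and m_neg, the scaling gamma, and the handling of the Other class are illustrative placeholders, not the paper's exact formulation or values.

```python
import numpy as np

def pairwise_ranking_loss(scores, y, gamma=2.0, m_pos=2.5, m_neg=0.5, other=None):
    """scores: (n_examples, n_classes) class scores; y: true class indices.
    Penalizes a low score for the true class and a high score for the most
    competitive wrong class; examples labeled `other` only get the negative term."""
    n = scores.shape[0]
    masked = scores.copy()
    masked[np.arange(n), y] = -np.inf            # exclude the true class
    s_neg = masked.max(axis=1)                   # best competing wrong class
    s_pos = scores[np.arange(n), y]
    pos_term = np.log1p(np.exp(gamma * (m_pos - s_pos)))
    neg_term = np.log1p(np.exp(gamma * (m_neg + s_neg)))
    if other is not None:                        # no positive term for the "Other" class
        pos_term = np.where(y == other, 0.0, pos_term)
    return (pos_term + neg_term).mean()

# Toy usage: 3 examples, 4 relation classes, class 3 playing the role of "Other".
scores = np.array([[1.2, -0.3, 0.1, 0.0],
                   [0.2,  2.0, -1.0, 0.3],
                   [-0.5, 0.1, 0.4, 0.9]])
print(pairwise_ranking_loss(scores, np.array([0, 1, 3]), other=3))
```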
1505.06249
Alexandre Drouin
Alexandre Drouin, S\'ebastien Gigu\`ere, Maxime D\'eraspe, Fran\c{c}ois Laviolette, Mario Marchand, Jacques Corbeil
Greedy Biomarker Discovery in the Genome with Applications to Antimicrobial Resistance
Peer-reviewed and accepted for an oral presentation in the Greed is Great workshop at the International Conference on Machine Learning, Lille, France, 2015
null
null
null
q-bio.GN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Set Covering Machine (SCM) is a greedy learning algorithm that produces sparse classifiers. We extend the SCM for datasets that contain a huge number of features. The whole genetic material of living organisms is an example of such a case, where the number of features exceeds 10^7. Three human pathogens were used to evaluate the performance of the SCM at predicting antimicrobial resistance. Our results show that the SCM compares favorably in terms of sparsity and accuracy against L1 and L2 regularized Support Vector Machines and CART decision trees. Moreover, the SCM was the only algorithm that could consider the full feature space. For all other algorithms, the latter had to be filtered as a preprocessing step.
[ { "version": "v1", "created": "Fri, 22 May 2015 23:29:40 GMT" } ]
2015-05-26T00:00:00
[ [ "Drouin", "Alexandre", "" ], [ "Giguère", "Sébastien", "" ], [ "Déraspe", "Maxime", "" ], [ "Laviolette", "François", "" ], [ "Marchand", "Mario", "" ], [ "Corbeil", "Jacques", "" ] ]
TITLE: Greedy Biomarker Discovery in the Genome with Applications to Antimicrobial Resistance ABSTRACT: The Set Covering Machine (SCM) is a greedy learning algorithm that produces sparse classifiers. We extend the SCM for datasets that contain a huge number of features. The whole genetic material of living organisms is an example of such a case, where the number of features exceeds 10^7. Three human pathogens were used to evaluate the performance of the SCM at predicting antimicrobial resistance. Our results show that the SCM compares favorably in terms of sparsity and accuracy against L1 and L2 regularized Support Vector Machines and CART decision trees. Moreover, the SCM was the only algorithm that could consider the full feature space. For all other algorithms, the latter had to be filtered as a preprocessing step.
no_new_dataset
0.952309
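The record above extends the Set Covering Machine, a greedy learner that builds a sparse conjunction of boolean rules (here, presence or absence of genomic features). The sketch below is a simplified greedy conjunction learner in that spirit, not the authors' extended SCM; the penalty parameter p, the stopping rule, and the boolean-matrix input are assumptions for illustration.

```python
import numpy as np

def greedy_conjunction(X, y, max_rules=10, p=1.0):
    """X: boolean matrix (n_samples, n_features); y: 0/1 labels.
    Greedily picks features whose value must be 1 for a positive prediction,
    scoring each candidate by (negatives newly excluded) - p * (positives lost)."""
    X = X.astype(bool)
    covered_neg = np.zeros(len(y), dtype=bool)    # negatives already ruled out
    alive_pos = y == 1                            # positives still predicted positive
    rules = []
    for _ in range(max_rules):
        excludes = ~X                             # rule "feature j must be 1" rejects rows with 0
        gain = (excludes & ~covered_neg[:, None] & (y == 0)[:, None]).sum(0) \
             - p * (excludes & alive_pos[:, None]).sum(0)
        j = int(gain.argmax())
        if gain[j] <= 0:
            break
        rules.append(j)
        covered_neg |= excludes[:, j] & (y == 0)
        alive_pos &= ~excludes[:, j]
    return rules

def predict(rules, X):
    # Predict positive only if every selected feature is present.
    return X[:, rules].all(axis=1).astype(int) if rules else np.ones(len(X), dtype=int)
```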
1505.06250
George Toderici
Balakrishnan Varadarajan and George Toderici and Sudheendra Vijayanarasimhan and Apostol Natsev
Efficient Large Scale Video Classification
null
null
null
null
cs.CV cs.MM cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video classification has advanced tremendously over the recent years. A large part of the improvements in video classification had to do with the work done by the image classification community and the use of deep convolutional networks (CNNs) which produce competitive results with hand-crafted motion features. These networks were adapted to use video frames in various ways and have yielded state of the art classification results. We present two methods that build on this work, and scale it up to work with millions of videos and hundreds of thousands of classes while maintaining a low computational cost. In the context of large scale video processing, training CNNs on video frames is extremely time consuming, due to the large number of frames involved. We propose to avoid this problem by training CNNs on either YouTube thumbnails or Flickr images, and then using these networks' outputs as features for other higher level classifiers. We discuss the challenges of achieving this and propose two models for frame-level and video-level classification. The first is a highly efficient mixture of experts while the latter is based on long short term memory neural networks. We present results on the Sports-1M video dataset (1 million videos, 487 classes) and on a new dataset which has 12 million videos and 150,000 labels.
[ { "version": "v1", "created": "Fri, 22 May 2015 23:45:32 GMT" } ]
2015-05-26T00:00:00
[ [ "Varadarajan", "Balakrishnan", "" ], [ "Toderici", "George", "" ], [ "Vijayanarasimhan", "Sudheendra", "" ], [ "Natsev", "Apostol", "" ] ]
TITLE: Efficient Large Scale Video Classification ABSTRACT: Video classification has advanced tremendously over the recent years. A large part of the improvements in video classification had to do with the work done by the image classification community and the use of deep convolutional networks (CNNs) which produce competitive results with hand-crafted motion features. These networks were adapted to use video frames in various ways and have yielded state of the art classification results. We present two methods that build on this work, and scale it up to work with millions of videos and hundreds of thousands of classes while maintaining a low computational cost. In the context of large scale video processing, training CNNs on video frames is extremely time consuming, due to the large number of frames involved. We propose to avoid this problem by training CNNs on either YouTube thumbnails or Flickr images, and then using these networks' outputs as features for other higher level classifiers. We discuss the challenges of achieving this and propose two models for frame-level and video-level classification. The first is a highly efficient mixture of experts while the latter is based on long short term memory neural networks. We present results on the Sports-1M video dataset (1 million videos, 487 classes) and on a new dataset which has 12 million videos and 150,000 labels.
new_dataset
0.962427
1505.06485
Syama Sundar Rangapuram
Syama Sundar Rangapuram and Matthias Hein
Constrained 1-Spectral Clustering
Long version of paper accepted at AISTATS 2012
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An important form of prior information in clustering comes in form of cannot-link and must-link constraints. We present a generalization of the popular spectral clustering technique which integrates such constraints. Motivated by the recently proposed $1$-spectral clustering for the unconstrained problem, our method is based on a tight relaxation of the constrained normalized cut into a continuous optimization problem. Opposite to all other methods which have been suggested for constrained spectral clustering, we can always guarantee to satisfy all constraints. Moreover, our soft formulation allows to optimize a trade-off between normalized cut and the number of violated constraints. An efficient implementation is provided which scales to large datasets. We outperform consistently all other proposed methods in the experiments.
[ { "version": "v1", "created": "Sun, 24 May 2015 21:25:44 GMT" } ]
2015-05-26T00:00:00
[ [ "Rangapuram", "Syama Sundar", "" ], [ "Hein", "Matthias", "" ] ]
TITLE: Constrained 1-Spectral Clustering ABSTRACT: An important form of prior information in clustering comes in form of cannot-link and must-link constraints. We present a generalization of the popular spectral clustering technique which integrates such constraints. Motivated by the recently proposed $1$-spectral clustering for the unconstrained problem, our method is based on a tight relaxation of the constrained normalized cut into a continuous optimization problem. Opposite to all other methods which have been suggested for constrained spectral clustering, we can always guarantee to satisfy all constraints. Moreover, our soft formulation allows to optimize a trade-off between normalized cut and the number of violated constraints. An efficient implementation is provided which scales to large datasets. We outperform consistently all other proposed methods in the experiments.
no_new_dataset
0.944689
1505.06531
Tsu-Wei Chen
Tsu-Wei Chen, Meena Abdelmaseeh, Daniel Stashuk
Affine and Regional Dynamic Time Warping
null
null
null
null
cs.CV cs.CE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pointwise matches between two time series are of great importance in time series analysis, and dynamic time warping (DTW) is known to provide generally reasonable matches. There are situations where time series alignment should be invariant to scaling and offset in amplitude or where local regions of the considered time series should be strongly reflected in pointwise matches. Two different variants of DTW, affine DTW (ADTW) and regional DTW (RDTW), are proposed to handle scaling and offset in amplitude and provide regional emphasis respectively. Furthermore, ADTW and RDTW can be combined in two different ways to generate alignments that incorporate advantages from both methods, where the affine model can be applied either globally to the entire time series or locally to each region. The proposed alignment methods outperform DTW on specific simulated datasets, and one-nearest-neighbor classifiers using their associated difference measures are competitive with the difference measures associated with state-of-the-art alignment methods on real datasets.
[ { "version": "v1", "created": "Mon, 25 May 2015 03:23:31 GMT" } ]
2015-05-26T00:00:00
[ [ "Chen", "Tsu-Wei", "" ], [ "Abdelmaseeh", "Meena", "" ], [ "Stashuk", "Daniel", "" ] ]
TITLE: Affine and Regional Dynamic Time Warping ABSTRACT: Pointwise matches between two time series are of great importance in time series analysis, and dynamic time warping (DTW) is known to provide generally reasonable matches. There are situations where time series alignment should be invariant to scaling and offset in amplitude or where local regions of the considered time series should be strongly reflected in pointwise matches. Two different variants of DTW, affine DTW (ADTW) and regional DTW (RDTW), are proposed to handle scaling and offset in amplitude and provide regional emphasis respectively. Furthermore, ADTW and RDTW can be combined in two different ways to generate alignments that incorporate advantages from both methods, where the affine model can be applied either globally to the entire time series or locally to each region. The proposed alignment methods outperform DTW on specific simulated datasets, and one-nearest-neighbor classifiers using their associated difference measures are competitive with the difference measures associated with state-of-the-art alignment methods on real datasets.
no_new_dataset
0.953188
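The record above builds on dynamic time warping, modifying it with affine amplitude transformations and regional weighting. Those variants are not reproduced here; the sketch below is only the classic DTW dynamic program they start from, in plain NumPy.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping between 1-D sequences a and b,
    using squared pointwise differences and unit step costs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two sequences that differ by a time shift align with small DTW cost.
t = np.linspace(0, 2 * np.pi, 60)
print(dtw_distance(np.sin(t), np.sin(t + 0.5)))
```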
1505.06532
Ali Jahanian
Ali Jahanian, S. V. N. Vishwanathan, Jan P. Allebach
Colors $-$Messengers of Concepts: Visual Design Mining for Learning Color Semantics
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the concept of color semantics by modeling a dataset of magazine cover designs, evaluating the model via crowdsourcing, and demonstrating several prototypes that facilitate color-related design tasks. We investigate a probabilistic generative modeling framework that expresses semantic concepts as a combination of color and word distributions $-$color-word topics. We adopt an extension to Latent Dirichlet Allocation (LDA) topic modeling called LDA-dual to infer a set of color-word topics over a corpus of 2,654 magazine covers spanning 71 distinct titles and 12 genres. While LDA models text documents as distributions over word topics, we model magazine covers as distributions over color-word topics. The results of our crowdsourced experiments confirm that the model is able to successfully discover the associations between colors and linguistic concepts. Finally, we demonstrate several simple prototypes that apply the learned model to color palette recommendation, design example retrieval, image retrieval, image color selection, and image recoloring.
[ { "version": "v1", "created": "Mon, 25 May 2015 03:34:46 GMT" } ]
2015-05-26T00:00:00
[ [ "Jahanian", "Ali", "" ], [ "Vishwanathan", "S. V. N.", "" ], [ "Allebach", "Jan P.", "" ] ]
TITLE: Colors $-$Messengers of Concepts: Visual Design Mining for Learning Color Semantics ABSTRACT: This paper studies the concept of color semantics by modeling a dataset of magazine cover designs, evaluating the model via crowdsourcing, and demonstrating several prototypes that facilitate color-related design tasks. We investigate a probabilistic generative modeling framework that expresses semantic concepts as a combination of color and word distributions $-$color-word topics. We adopt an extension to Latent Dirichlet Allocation (LDA) topic modeling called LDA-dual to infer a set of color-word topics over a corpus of 2,654 magazine covers spanning 71 distinct titles and 12 genres. While LDA models text documents as distributions over word topics, we model magazine covers as distributions over color-word topics. The results of our crowdsourced experiments confirm that the model is able to successfully discover the associations between colors and linguistic concepts. Finally, we demonstrate several simple prototypes that apply the learned model to color palette recommendation, design example retrieval, image retrieval, image color selection, and image recoloring.
no_new_dataset
0.943138
1505.06538
J Massey Cashore
J. Massey Cashore, Xiaoting Zhao, Alexander A. Alemi, Yujia Liu, Peter I. Frazier
Clustering via Content-Augmented Stochastic Blockmodels
null
null
null
null
stat.ML cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much of the data being created on the web contains interactions between users and items. Stochastic blockmodels, and other methods for community detection and clustering of bipartite graphs, can infer latent user communities and latent item clusters from this interaction data. These methods, however, typically ignore the items' contents and the information they provide about item clusters, despite the tendency of items in the same latent cluster to share commonalities in content. We introduce content-augmented stochastic blockmodels (CASB), which use item content together with user-item interaction data to enhance the user communities and item clusters learned. Comparisons to several state-of-the-art benchmark methods, on datasets arising from scientists interacting with scientific articles, show that content-augmented stochastic blockmodels provide highly accurate clusters with respect to metrics representative of the underlying community structure.
[ { "version": "v1", "created": "Mon, 25 May 2015 04:19:12 GMT" } ]
2015-05-26T00:00:00
[ [ "Cashore", "J. Massey", "" ], [ "Zhao", "Xiaoting", "" ], [ "Alemi", "Alexander A.", "" ], [ "Liu", "Yujia", "" ], [ "Frazier", "Peter I.", "" ] ]
TITLE: Clustering via Content-Augmented Stochastic Blockmodels ABSTRACT: Much of the data being created on the web contains interactions between users and items. Stochastic blockmodels, and other methods for community detection and clustering of bipartite graphs, can infer latent user communities and latent item clusters from this interaction data. These methods, however, typically ignore the items' contents and the information they provide about item clusters, despite the tendency of items in the same latent cluster to share commonalities in content. We introduce content-augmented stochastic blockmodels (CASB), which use item content together with user-item interaction data to enhance the user communities and item clusters learned. Comparisons to several state-of-the-art benchmark methods, on datasets arising from scientists interacting with scientific articles, show that content-augmented stochastic blockmodels provide highly accurate clusters with respect to metrics representative of the underlying community structure.
no_new_dataset
0.946794
1505.05921
Katherine Driggs-Campbell
Katherine Driggs-Campbell and Ruzena Bajcsy
Identifying Modes of Intent from Driver Behaviors in Dynamic Environments
Submitted to ITSC 2015
null
null
null
cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In light of growing attention to intelligent vehicle systems, we propose developing a driver model that uses a hybrid system formulation to capture the intent of the driver. This model aims to capture human driving behavior in a way that can be utilized by semi- and fully autonomous systems in heterogeneous environments. We consider a discrete set of high-level goals or intent modes that is designed to encompass the decision-making process of the human. A driver model is derived using a dataset of lane changes collected in a realistic driving simulator, in which the driver actively labels data to give us insight into her intent. By building the labeled dataset, we are able to utilize classification tools to build the driver model using features based on her perception of the environment, and achieve high accuracy in identifying driver intent. Multiple algorithms are presented and compared on the dataset, and a comparison of the varying behaviors between drivers is drawn. Using this modeling methodology, we present a model that can be used to assess driver behaviors and to develop human-inspired safety metrics that can be utilized in intelligent vehicular systems.
[ { "version": "v1", "created": "Thu, 21 May 2015 23:19:09 GMT" } ]
2015-05-25T00:00:00
[ [ "Driggs-Campbell", "Katherine", "" ], [ "Bajcsy", "Ruzena", "" ] ]
TITLE: Identifying Modes of Intent from Driver Behaviors in Dynamic Environments ABSTRACT: In light of growing attention to intelligent vehicle systems, we propose developing a driver model that uses a hybrid system formulation to capture the intent of the driver. This model aims to capture human driving behavior in a way that can be utilized by semi- and fully autonomous systems in heterogeneous environments. We consider a discrete set of high-level goals or intent modes that is designed to encompass the decision-making process of the human. A driver model is derived using a dataset of lane changes collected in a realistic driving simulator, in which the driver actively labels data to give us insight into her intent. By building the labeled dataset, we are able to utilize classification tools to build the driver model using features based on her perception of the environment, and achieve high accuracy in identifying driver intent. Multiple algorithms are presented and compared on the dataset, and a comparison of the varying behaviors between drivers is drawn. Using this modeling methodology, we present a model that can be used to assess driver behaviors and to develop human-inspired safety metrics that can be utilized in intelligent vehicular systems.
new_dataset
0.9598
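The driver-intent record above builds its model with standard classification tools on driver-labeled simulator features. As a minimal sketch of that step, the code below trains a classifier on synthetic stand-in features; the feature names, label construction, and the random-forest choice are hypothetical, since the actual dataset and algorithms are not reproduced in the record.

```python
# Minimal sketch: classify discrete driver-intent modes from environment
# features, mimicking the "classification tools on a labeled dataset" step.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: gap to lead car, relative speed, adjacent-lane gap.
X = rng.normal(size=(n, 3))
# Hypothetical intent labels: 0 = keep lane, 1 = change left, 2 = change right.
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n) > 0.8).astype(int) \
    + (X[:, 1] - 0.5 * X[:, 2] > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```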
1505.05957
Tianmin Shu
Tianmin Shu, Dan Xie, Brandon Rothrock, Sinisa Todorovic, Song-Chun Zhu
Joint Inference of Groups, Events and Human Roles in Aerial Videos
CVPR 2015 Oral Presentation
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of drones, aerial video analysis is becoming increasingly important; yet, it has received scant attention in the literature. This paper addresses a new problem of parsing low-resolution aerial videos of large spatial areas, in terms of 1) grouping, 2) recognizing events and 3) assigning roles to people engaged in events. We propose a novel framework aimed at conducting joint inference of the above tasks, as reasoning about each in isolation typically fails in our setting. Given noisy tracklets of people and detections of large objects and scene surfaces (e.g., building, grass), we use a spatiotemporal AND-OR graph to drive our joint inference, using Markov Chain Monte Carlo and dynamic programming. We also introduce a new formalism of spatiotemporal templates characterizing latent sub-events. For evaluation, we have collected and released a new aerial videos dataset using a hex-rotor flying over picnic areas rich with group events. Our results demonstrate that we successfully address the above inference tasks under challenging conditions.
[ { "version": "v1", "created": "Fri, 22 May 2015 05:59:18 GMT" } ]
2015-05-25T00:00:00
[ [ "Shu", "Tianmin", "" ], [ "Xie", "Dan", "" ], [ "Rothrock", "Brandon", "" ], [ "Todorovic", "Sinisa", "" ], [ "Zhu", "Song-Chun", "" ] ]
TITLE: Joint Inference of Groups, Events and Human Roles in Aerial Videos ABSTRACT: With the advent of drones, aerial video analysis is becoming increasingly important; yet, it has received scant attention in the literature. This paper addresses a new problem of parsing low-resolution aerial videos of large spatial areas, in terms of 1) grouping, 2) recognizing events and 3) assigning roles to people engaged in events. We propose a novel framework aimed at conducting joint inference of the above tasks, as reasoning about each in isolation typically fails in our setting. Given noisy tracklets of people and detections of large objects and scene surfaces (e.g., building, grass), we use a spatiotemporal AND-OR graph to drive our joint inference, using Markov Chain Monte Carlo and dynamic programming. We also introduce a new formalism of spatiotemporal templates characterizing latent sub-events. For evaluation, we have collected and released a new aerial videos dataset using a hex-rotor flying over picnic areas rich with group events. Our results demonstrate that we successfully address the above inference tasks under challenging conditions.
new_dataset
0.954942
1505.06169
Emma Strubell
Emma Strubell, Luke Vilnis, Kate Silverstein, Andrew McCallum
Learning Dynamic Feature Selection for Fast Sequential Prediction
Appears in The 53rd Annual Meeting of the Association for Computational Linguistics, Beijing, China, July 2015
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates which are ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequence. Our approach is simpler and better suited to NLP than other related cascade methods. We present experiments in left-to-right part-of-speech tagging, named entity recognition, and transition-based dependency parsing. On the typical benchmarking datasets we can preserve POS tagging accuracy above 97% and parsing LAS above 88.5% both with over a five-fold reduction in run-time, and NER F1 above 88 with more than 2x increase in speed.
[ { "version": "v1", "created": "Fri, 22 May 2015 18:28:21 GMT" } ]
2015-05-25T00:00:00
[ [ "Strubell", "Emma", "" ], [ "Vilnis", "Luke", "" ], [ "Silverstein", "Kate", "" ], [ "McCallum", "Andrew", "" ] ]
TITLE: Learning Dynamic Feature Selection for Fast Sequential Prediction ABSTRACT: We present paired learning and inference algorithms for significantly reducing computation and increasing speed of the vector dot products in the classifiers that are at the heart of many NLP components. This is accomplished by partitioning the features into a sequence of templates which are ordered such that high confidence can often be reached using only a small fraction of all features. Parameter estimation is arranged to maximize accuracy and early confidence in this sequence. Our approach is simpler and better suited to NLP than other related cascade methods. We present experiments in left-to-right part-of-speech tagging, named entity recognition, and transition-based dependency parsing. On the typical benchmarking datasets we can preserve POS tagging accuracy above 97% and parsing LAS above 88.5% both with over a five-fold reduction in run-time, and NER F1 above 88 with more than 2x increase in speed.
no_new_dataset
0.945751
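The dynamic-feature-selection record above orders feature templates so that inference can stop once the prediction is confident. A minimal sketch of that inference loop is given below: template scores are accumulated and evaluation stops when the margin between the two best classes clears a threshold. The templates, weights, and threshold are illustrative assumptions, not the paper's learned parameters.

```python
# Minimal sketch: staged linear scoring with early stopping on the margin
# between the best and second-best class.
import numpy as np

def staged_predict(template_feats, template_weights, margin_threshold=2.0):
    """template_feats: list of feature vectors, one per template (cheapest first).
    template_weights: list of (n_classes, n_feats) weight matrices."""
    n_classes = template_weights[0].shape[0]
    scores = np.zeros(n_classes)
    for t, (x, W) in enumerate(zip(template_feats, template_weights)):
        scores += W @ x                       # add this template's contribution
        top2 = np.partition(scores, -2)[-2:]  # second-largest and largest accumulated scores
        if top2[1] - top2[0] >= margin_threshold:
            return int(np.argmax(scores)), t + 1   # early exit: confident enough
    return int(np.argmax(scores)), len(template_feats)

rng = np.random.default_rng(0)
feats = [rng.normal(size=4) for _ in range(5)]           # 5 templates, 4 features each
weights = [rng.normal(size=(3, 4)) for _ in range(5)]    # 3 classes
label, used = staged_predict(feats, weights)
print(f"predicted class {label} using {used} of {len(feats)} templates")
```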
1401.4489
Devansh Arpit
Devansh Arpit, Ifeoma Nwogu, Gaurav Srivastava, Venu Govindaraju
An Analysis of Random Projections in Cancelable Biometrics
null
null
null
null
cs.CV cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With increasing concerns about security, the need for highly secure physical biometrics-based authentication systems utilizing \emph{cancelable biometric} technologies is on the rise. Because the problem of cancelable template generation deals with the trade-off between template security and matching performance, many state-of-the-art algorithms successful in generating high quality cancelable biometrics all have random projection as one of their early processing steps. This paper therefore presents a formal analysis of why random projection is an essential step in cancelable biometrics. By formally defining the notion of an \textit{Independent Subspace Structure} for datasets, it can be shown that random projection preserves the subspace structure of data vectors generated from a union of independent linear subspaces. The bound on the minimum number of random vectors required for this to hold is also derived and is shown to depend logarithmically on the number of data samples, not only in independent subspaces but in disjoint subspace settings as well. The theoretical analysis presented is supported in detail with empirical results on real-world face recognition datasets.
[ { "version": "v1", "created": "Fri, 17 Jan 2014 23:21:56 GMT" }, { "version": "v2", "created": "Wed, 5 Feb 2014 02:57:25 GMT" }, { "version": "v3", "created": "Fri, 14 Nov 2014 02:38:09 GMT" } ]
2015-05-22T00:00:00
[ [ "Arpit", "Devansh", "" ], [ "Nwogu", "Ifeoma", "" ], [ "Srivastava", "Gaurav", "" ], [ "Govindaraju", "Venu", "" ] ]
TITLE: An Analysis of Random Projections in Cancelable Biometrics ABSTRACT: With increasing concerns about security, the need for highly secure physical biometrics-based authentication systems utilizing \emph{cancelable biometric} technologies is on the rise. Because the problem of cancelable template generation deals with the trade-off between template security and matching performance, many state-of-the-art algorithms successful in generating high quality cancelable biometrics all have random projection as one of their early processing steps. This paper therefore presents a formal analysis of why random projection is an essential step in cancelable biometrics. By formally defining the notion of an \textit{Independent Subspace Structure} for datasets, it can be shown that random projection preserves the subspace structure of data vectors generated from a union of independent linear subspaces. The bound on the minimum number of random vectors required for this to hold is also derived and is shown to depend logarithmically on the number of data samples, not only in independent subspaces but in disjoint subspace settings as well. The theoretical analysis presented is supported in detail with empirical results on real-world face recognition datasets.
no_new_dataset
0.948251
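The random-projection record above argues that Gaussian random projection preserves the independent subspace structure of the data. The sketch below illustrates that empirically on synthetic points drawn from two independent subspaces, checking ranks and principal angles before and after projection; the dimensions and the rank/angle checks are illustrative choices, not the paper's experiments.

```python
# Minimal sketch: project data drawn from two independent linear subspaces
# with a Gaussian random matrix and check that the subspace structure survives.
import numpy as np

rng = np.random.default_rng(0)
d, k, n, m = 200, 5, 50, 40      # ambient dim, subspace dim, points per subspace, projected dim

# Two independent k-dimensional subspaces of R^d, with n points drawn from each.
bases = [np.linalg.qr(rng.normal(size=(d, k)))[0] for _ in range(2)]
data = [B @ rng.normal(size=(k, n)) for B in bases]

P = rng.normal(size=(m, d)) / np.sqrt(m)     # Gaussian random projection
proj = [P @ X for X in data]

for i, (X, Y) in enumerate(zip(data, proj)):
    print(f"subspace {i}: rank before projection = {np.linalg.matrix_rank(X)}, "
          f"rank after = {np.linalg.matrix_rank(Y)}")

# Cosine of the smallest principal angle between the two projected subspaces;
# a value below 1 means the projected subspaces still share no common direction.
U0 = np.linalg.svd(proj[0], full_matrices=False)[0][:, :k]
U1 = np.linalg.svd(proj[1], full_matrices=False)[0][:, :k]
print("largest cosine of principal angles after projection:",
      np.linalg.svd(U0.T @ U1, compute_uv=False).max())
```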
1407.7073
Weinan Zhang
Weinan Zhang, Shuai Yuan, Jun Wang, Xuehua Shen
Real-Time Bidding Benchmarking with iPinYou Dataset
UCL Technical Report 2014
null
null
null
cs.GT cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Being an emerging paradigm for display advertising, Real-Time Bidding (RTB) drives the focus of the bidding strategy from context to users' interest by computing a bid for each impression in real time. The data mining work and particularly the bidding strategy development becomes crucial in this performance-driven business. However, researchers in the computational advertising area have been suffering from a lack of publicly available benchmark datasets, which are essential to compare different algorithms and systems. Fortunately, a leading Chinese advertising technology company iPinYou decided to release the dataset used in its global RTB algorithm competition in 2013. The dataset includes logs of ad auctions, bids, impressions, clicks, and final conversions. These logs reflect the market environment as well as form a complete path of users' responses from advertisers' perspective. This dataset directly supports the experiments of some important research problems such as bid optimisation and CTR estimation. To the best of our knowledge, this is the first publicly available dataset on RTB display advertising. Thus, it is valuable for reproducible research and for understanding the whole RTB ecosystem. In this paper, we first provide the detailed statistical analysis of this dataset. Then we introduce the research problem of bid optimisation in RTB and the simple yet comprehensive evaluation protocol. Besides, a series of benchmark experiments are also conducted, including both click-through rate (CTR) estimation and bid optimisation.
[ { "version": "v1", "created": "Fri, 25 Jul 2014 23:20:29 GMT" }, { "version": "v2", "created": "Fri, 1 Aug 2014 11:22:17 GMT" }, { "version": "v3", "created": "Thu, 21 May 2015 18:20:30 GMT" } ]
2015-05-22T00:00:00
[ [ "Zhang", "Weinan", "" ], [ "Yuan", "Shuai", "" ], [ "Wang", "Jun", "" ], [ "Shen", "Xuehua", "" ] ]
TITLE: Real-Time Bidding Benchmarking with iPinYou Dataset ABSTRACT: Being an emerging paradigm for display advertising, Real-Time Bidding (RTB) drives the focus of the bidding strategy from context to users' interest by computing a bid for each impression in real time. The data mining work and particularly the bidding strategy development becomes crucial in this performance-driven business. However, researchers in the computational advertising area have been suffering from a lack of publicly available benchmark datasets, which are essential to compare different algorithms and systems. Fortunately, a leading Chinese advertising technology company iPinYou decided to release the dataset used in its global RTB algorithm competition in 2013. The dataset includes logs of ad auctions, bids, impressions, clicks, and final conversions. These logs reflect the market environment as well as form a complete path of users' responses from advertisers' perspective. This dataset directly supports the experiments of some important research problems such as bid optimisation and CTR estimation. To the best of our knowledge, this is the first publicly available dataset on RTB display advertising. Thus, it is valuable for reproducible research and for understanding the whole RTB ecosystem. In this paper, we first provide the detailed statistical analysis of this dataset. Then we introduce the research problem of bid optimisation in RTB and the simple yet comprehensive evaluation protocol. Besides, a series of benchmark experiments are also conducted, including both click-through rate (CTR) estimation and bid optimisation.
new_dataset
0.96793
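One of the benchmark tasks in the iPinYou record above is CTR estimation. The sketch below shows the usual baseline for that task, hashed "field=value" indicator features fed to logistic regression, on a tiny synthetic impression log; the field names and log contents are invented for illustration and do not follow the benchmark's actual schema or protocol.

```python
# Minimal sketch: hashed one-hot features + logistic regression, the common
# CTR-estimation baseline used on RTB impression logs.
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Tiny synthetic impression log with hypothetical categorical fields.
impressions = [
    {"region": "beijing",   "ad_slot": "banner_300x250", "hour": "20", "os": "ios"},
    {"region": "shanghai",  "ad_slot": "banner_728x90",  "hour": "09", "os": "android"},
    {"region": "beijing",   "ad_slot": "banner_300x250", "hour": "21", "os": "android"},
    {"region": "guangzhou", "ad_slot": "banner_160x600", "hour": "13", "os": "ios"},
] * 50
clicks = [1, 0, 1, 0] * 50

# Hash "field=value" indicator features into a fixed-width sparse vector.
hasher = FeatureHasher(n_features=2**12, input_type="dict")
X = hasher.transform({f"{k}={v}": 1 for k, v in row.items()} for row in impressions)

model = LogisticRegression(max_iter=1000).fit(X, clicks)
print("train AUC:", roc_auc_score(clicks, model.predict_proba(X)[:, 1]))
```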
1502.01602
Donn Morrison
V\'aclav Bel\'ak, Afra Mashhadi, Alessandra Sala, Donn Morrison
Phantom cascades: The effect of hidden nodes on information diffusion
Preprint submitted to Elsevier Computer Communications
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on information diffusion generally assumes complete knowledge of the underlying network. However, in the presence of factors such as increasing privacy awareness, restrictions on application programming interfaces (APIs) and sampling strategies, this assumption rarely holds in the real world which in turn leads to an underestimation of the size of information cascades. In this work we study the effect of hidden network structure on information diffusion processes. We characterise information cascades through activation paths traversing visible and hidden parts of the network. We quantify diffusion estimation error while varying the amount of hidden structure in five empirical and synthetic network datasets and demonstrate the effect of topological properties on this error. Finally, we suggest practical recommendations for practitioners and propose a model to predict the cascade size with minimal information regarding the underlying network.
[ { "version": "v1", "created": "Thu, 5 Feb 2015 15:13:33 GMT" }, { "version": "v2", "created": "Thu, 21 May 2015 13:34:14 GMT" } ]
2015-05-22T00:00:00
[ [ "Belák", "Václav", "" ], [ "Mashhadi", "Afra", "" ], [ "Sala", "Alessandra", "" ], [ "Morrison", "Donn", "" ] ]
TITLE: Phantom cascades: The effect of hidden nodes on information diffusion ABSTRACT: Research on information diffusion generally assumes complete knowledge of the underlying network. However, in the presence of factors such as increasing privacy awareness, restrictions on application programming interfaces (APIs) and sampling strategies, this assumption rarely holds in the real world which in turn leads to an underestimation of the size of information cascades. In this work we study the effect of hidden network structure on information diffusion processes. We characterise information cascades through activation paths traversing visible and hidden parts of the network. We quantify diffusion estimation error while varying the amount of hidden structure in five empirical and synthetic network datasets and demonstrate the effect of topological properties on this error. Finally, we suggest practical recommendations for practitioners and propose a model to predict the cascade size with minimal information regarding the underlying network.
no_new_dataset
0.951233
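The phantom-cascades record above measures how hidden nodes cause cascade sizes to be underestimated. The sketch below reproduces only the simplest version of that effect: it runs an independent-cascade simulation on a random graph and recounts the same cascade with a fraction of nodes hidden. The graph model, spreading probability, and uniform hiding rule are assumptions; the paper's activation-path analysis is not implemented here.

```python
# Minimal sketch: observed vs. true cascade size when a fraction of nodes is hidden.
import random
import networkx as nx

def independent_cascade(G, seed_nodes, p, rng):
    """Return the set of activated nodes under a simple independent-cascade model."""
    active, frontier = set(seed_nodes), list(seed_nodes)
    while frontier:
        nxt = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

rng = random.Random(0)
G = nx.erdos_renyi_graph(n=2000, p=0.003, seed=0)
seeds = rng.sample(list(G.nodes), 5)

true_cascade = independent_cascade(G, seeds, p=0.15, rng=rng)

hidden_fraction = 0.3
hidden = set(rng.sample(list(G.nodes), int(hidden_fraction * G.number_of_nodes())))
observed = {v for v in true_cascade if v not in hidden}

print(f"true cascade size: {len(true_cascade)}, "
      f"observed with {hidden_fraction:.0%} of nodes hidden: {len(observed)}")
```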
1503.00756
Marco Stronati
Konstantinos Chatzikokolakis, Catuscia Palamidessi, Marco Stronati
Constructing elastic distinguishability metrics for location privacy
null
null
10.1515/popets-2015-0023
null
cs.CR
http://creativecommons.org/licenses/by/3.0/
With the increasing popularity of hand-held devices, location-based applications and services have access to accurate and real-time location information, raising serious privacy concerns for their users. The recently introduced notion of geo-indistinguishability tries to address this problem by adapting the well-known concept of differential privacy to the area of location-based systems. Although geo-indistinguishability presents various appealing aspects, it has the problem of treating space in a uniform way, imposing the addition of the same amount of noise everywhere on the map. In this paper we propose a novel elastic distinguishability metric that warps the geometrical distance, capturing the different degrees of density of each area. As a consequence, the obtained mechanism adapts the level of noise while achieving the same degree of privacy everywhere. We also show how such an elastic metric can easily incorporate the concept of a "geographic fence" that is commonly employed to protect the highly recurrent locations of a user, such as his home or work. We perform an extensive evaluation of our technique by building an elastic metric for Paris' wide metropolitan area, using semantic information from the OpenStreetMap database. We compare the resulting mechanism against the Planar Laplace mechanism satisfying standard geo-indistinguishability, using two real-world datasets from the Gowalla and Brightkite location-based social networks. The results show that the elastic mechanism adapts well to the semantics of each area, adjusting the noise as we move outside the city center, hence offering better overall privacy.
[ { "version": "v1", "created": "Mon, 2 Mar 2015 21:32:11 GMT" }, { "version": "v2", "created": "Thu, 21 May 2015 09:39:47 GMT" } ]
2015-05-22T00:00:00
[ [ "Chatzikokolakis", "Konstantinos", "" ], [ "Palamidessi", "Catuscia", "" ], [ "Stronati", "Marco", "" ] ]
TITLE: Constructing elastic distinguishability metrics for location privacy ABSTRACT: With the increasing popularity of hand-held devices, location-based applications and services have access to accurate and real-time location information, raising serious privacy concerns for their users. The recently introduced notion of geo-indistinguishability tries to address this problem by adapting the well-known concept of differential privacy to the area of location-based systems. Although geo-indistinguishability presents various appealing aspects, it has the problem of treating space in a uniform way, imposing the addition of the same amount of noise everywhere on the map. In this paper we propose a novel elastic distinguishability metric that warps the geometrical distance, capturing the different degrees of density of each area. As a consequence, the obtained mechanism adapts the level of noise while achieving the same degree of privacy everywhere. We also show how such an elastic metric can easily incorporate the concept of a "geographic fence" that is commonly employed to protect the highly recurrent locations of a user, such as his home or work. We perform an extensive evaluation of our technique by building an elastic metric for Paris' wide metropolitan area, using semantic information from the OpenStreetMap database. We compare the resulting mechanism against the Planar Laplace mechanism satisfying standard geo-indistinguishability, using two real-world datasets from the Gowalla and Brightkite location-based social networks. The results show that the elastic mechanism adapts well to the semantics of each area, adjusting the noise as we move outside the city center, hence offering better overall privacy.
no_new_dataset
0.949248
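The elastic-metric record above uses the Planar Laplace mechanism of standard geo-indistinguishability as its baseline. That baseline is well documented and easy to sketch: draw a uniform angle and a radius from the inverse CDF that involves the -1 branch of the Lambert W function, then displace the true location. The code below implements only this baseline (not the elastic mechanism) and treats coordinates as planar; the epsilon value is an arbitrary example.

```python
# Minimal sketch: the Planar Laplace mechanism used as the geo-indistinguishability baseline.
import numpy as np
from scipy.special import lambertw

def planar_laplace(x, y, epsilon, rng):
    """Perturb a planar location with noise calibrated to epsilon (per unit of distance)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    p = rng.uniform(0.0, 1.0)
    # Inverse CDF of the radial distribution, via the -1 branch of Lambert W.
    r = -(lambertw((p - 1.0) / np.e, k=-1).real + 1.0) / epsilon
    return x + r * np.cos(theta), y + r * np.sin(theta)

rng = np.random.default_rng(0)
true_loc = (0.0, 0.0)
for _ in range(5):
    rx, ry = planar_laplace(*true_loc, epsilon=0.1, rng=rng)
    print(f"reported location: ({rx:7.2f}, {ry:7.2f})")
```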
1505.05667
Xipeng Qiu
Chenxi Zhu, Xipeng Qiu, Xinchi Chen, Xuanjing Huang
A Re-ranking Model for Dependency Parser with Recursive Convolutional Neural Network
null
null
null
null
cs.CL cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we address the problem of modeling all the nodes (words or phrases) in a dependency tree with dense representations. We propose a recursive convolutional neural network (RCNN) architecture to capture syntactic and compositional-semantic representations of phrases and words in a dependency tree. Unlike the original recursive neural network, we introduce convolution and pooling layers, which can model a variety of compositions through the feature maps and select the most informative compositions through the pooling layers. Based on RCNN, we use a discriminative model to re-rank a $k$-best list of candidate dependency parsing trees. The experiments show that RCNN is very effective at improving state-of-the-art dependency parsing on both English and Chinese datasets.
[ { "version": "v1", "created": "Thu, 21 May 2015 10:23:10 GMT" } ]
2015-05-22T00:00:00
[ [ "Zhu", "Chenxi", "" ], [ "Qiu", "Xipeng", "" ], [ "Chen", "Xinchi", "" ], [ "Huang", "Xuanjing", "" ] ]
TITLE: A Re-ranking Model for Dependency Parser with Recursive Convolutional Neural Network ABSTRACT: In this work, we address the problem of modeling all the nodes (words or phrases) in a dependency tree with dense representations. We propose a recursive convolutional neural network (RCNN) architecture to capture syntactic and compositional-semantic representations of phrases and words in a dependency tree. Unlike the original recursive neural network, we introduce convolution and pooling layers, which can model a variety of compositions through the feature maps and select the most informative compositions through the pooling layers. Based on RCNN, we use a discriminative model to re-rank a $k$-best list of candidate dependency parsing trees. The experiments show that RCNN is very effective at improving state-of-the-art dependency parsing on both English and Chinese datasets.
no_new_dataset
0.950595
1505.05723
Indre Zliobaite
Indre Zliobaite
On the relation between accuracy and fairness in binary classification
Accepted for presentation to the 2nd workshop on Fairness, Accountability, and Transparency in Machine Learning (http://www.fatml.org/)
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our study revisits the problem of accuracy-fairness tradeoff in binary classification. We argue that comparison of non-discriminatory classifiers needs to account for different rates of positive predictions, otherwise conclusions about performance may be misleading, because accuracy and discrimination of naive baselines on the same dataset vary with different rates of positive predictions. We provide methodological recommendations for sound comparison of non-discriminatory classifiers, and present a brief theoretical and empirical analysis of tradeoffs between accuracy and non-discrimination.
[ { "version": "v1", "created": "Thu, 21 May 2015 13:20:06 GMT" } ]
2015-05-22T00:00:00
[ [ "Zliobaite", "Indre", "" ] ]
TITLE: On the relation between accuracy and fairness in binary classification ABSTRACT: Our study revisits the problem of accuracy-fairness tradeoff in binary classification. We argue that comparison of non-discriminatory classifiers needs to account for different rates of positive predictions, otherwise conclusions about performance may be misleading, because accuracy and discrimination of naive baselines on the same dataset vary with different rates of positive predictions. We provide methodological recommendations for sound comparison of non-discriminatory classifiers, and present a brief theoretical and empirical analysis of tradeoffs between accuracy and non-discrimination.
no_new_dataset
0.944331
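The accuracy-fairness record above argues that classifiers should be compared at matched rates of positive predictions. The sketch below computes the two quantities that comparison needs, accuracy and a demographic-parity-style discrimination gap, at several fixed positive rates on synthetic scores; the data-generating process and threshold rule are assumptions for illustration.

```python
# Minimal sketch: accuracy and discrimination of a score-based classifier,
# evaluated at a fixed rate of positive predictions.
import numpy as np

def evaluate_at_positive_rate(scores, y_true, group, positive_rate):
    """Threshold scores so that `positive_rate` of instances are predicted positive,
    then report accuracy and the gap in positive rates between the two groups."""
    threshold = np.quantile(scores, 1.0 - positive_rate)
    y_pred = (scores >= threshold).astype(int)
    accuracy = float(np.mean(y_pred == y_true))
    discrimination = float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))
    return accuracy, discrimination

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)              # protected attribute (0/1)
y_true = rng.binomial(1, 0.3 + 0.1 * group)     # base rates differ by group
scores = 0.6 * y_true + 0.1 * group + rng.normal(scale=0.3, size=n)

for rate in (0.2, 0.35, 0.5):
    acc, disc = evaluate_at_positive_rate(scores, y_true, group, rate)
    print(f"positive rate {rate:.2f}: accuracy {acc:.3f}, discrimination {disc:.3f}")
```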
1505.05753
Mario Fritz
Iaroslav Shcherbatyi, Andreas Bulling, Mario Fritz
GazeDPM: Early Integration of Gaze Information in Deformable Part Models
null
null
null
null
cs.CV cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An increasing number of works explore collaborative human-computer systems in which human gaze is used to enhance computer vision systems. For object detection these efforts were so far restricted to late integration approaches that have inherent limitations, such as increased precision without increase in recall. We propose an early integration approach in a deformable part model, which constitutes a joint formulation over gaze and visual data. We show that our GazeDPM method improves over the state-of-the-art DPM baseline by 4% and a recent method for gaze-supported object detection by 3% on the public POET dataset. Our approach additionally provides introspection of the learnt models, can reveal salient image structures, and allows us to investigate the interplay between gaze attracting and repelling areas, the importance of view-specific models, as well as viewers' personal biases in gaze patterns. We finally study important practical aspects of our approach, such as the impact of using saliency maps instead of real fixations, the impact of the number of fixations, as well as robustness to gaze estimation error.
[ { "version": "v1", "created": "Thu, 21 May 2015 14:39:51 GMT" } ]
2015-05-22T00:00:00
[ [ "Shcherbatyi", "Iaroslav", "" ], [ "Bulling", "Andreas", "" ], [ "Fritz", "Mario", "" ] ]
TITLE: GazeDPM: Early Integration of Gaze Information in Deformable Part Models ABSTRACT: An increasing number of works explore collaborative human-computer systems in which human gaze is used to enhance computer vision systems. For object detection these efforts were so far restricted to late integration approaches that have inherent limitations, such as increased precision without increase in recall. We propose an early integration approach in a deformable part model, which constitutes a joint formulation over gaze and visual data. We show that our GazeDPM method improves over the state-of-the-art DPM baseline by 4% and a recent method for gaze-supported object detection by 3% on the public POET dataset. Our approach additionally provides introspection of the learnt models, can reveal salient image structures, and allows us to investigate the interplay between gaze attracting and repelling areas, the importance of view-specific models, as well as viewers' personal biases in gaze patterns. We finally study important practical aspects of our approach, such as the impact of using saliency maps instead of real fixations, the impact of the number of fixations, as well as robustness to gaze estimation error.
no_new_dataset
0.946695
1409.4450
Giovanni Luca Ciampaglia
Giovanni Luca Ciampaglia, Alessandro Flammini, Filippo Menczer
The production of information in the attention economy
14 pages, 3 figures, 1 table
null
10.1038/srep09452
Giovanni Luca Ciampaglia, Alessandro Flammini & Filippo Menczer Scientific Reports 5, Article number: 9452 (2015)
physics.soc-ph cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online traces of human activity offer novel opportunities to study the dynamics of complex knowledge exchange networks, and in particular how the relationship between demand and supply of information is mediated by competition for our limited individual attention. The emergent patterns of collective attention determine what new information is generated and consumed. Can we measure the relationship between demand and supply for new information about a topic? Here we propose a normalization method to compare attention burst statistics across topics that have a heterogeneous distribution of attention. Through analysis of a massive dataset on traffic to Wikipedia, we find that the production of new knowledge is associated with significant shifts of collective attention, which we take as a proxy for its demand. What we observe is consistent with a scenario in which the allocation of attention toward a topic stimulates the demand for information about it, and in turn the supply of further novel information. Our attempt to quantify demand and supply of information, and our finding about their temporal ordering, may lead to the development of the fundamental laws of the attention economy, and a better understanding of the social exchange of knowledge in online and offline information networks.
[ { "version": "v1", "created": "Mon, 15 Sep 2014 21:13:35 GMT" } ]
2015-05-21T00:00:00
[ [ "Ciampaglia", "Giovanni Luca", "" ], [ "Flammini", "Alessandro", "" ], [ "Menczer", "Filippo", "" ] ]
TITLE: The production of information in the attention economy ABSTRACT: Online traces of human activity offer novel opportunities to study the dynamics of complex knowledge exchange networks, and in particular how the relationship between demand and supply of information is mediated by competition for our limited individual attention. The emergent patterns of collective attention determine what new information is generated and consumed. Can we measure the relationship between demand and supply for new information about a topic? Here we propose a normalization method to compare attention burst statistics across topics that have a heterogeneous distribution of attention. Through analysis of a massive dataset on traffic to Wikipedia, we find that the production of new knowledge is associated with significant shifts of collective attention, which we take as a proxy for its demand. What we observe is consistent with a scenario in which the allocation of attention toward a topic stimulates the demand for information about it, and in turn the supply of further novel information. Our attempt to quantify demand and supply of information, and our finding about their temporal ordering, may lead to the development of the fundamental laws of the attention economy, and a better understanding of the social exchange of knowledge in online and offline information networks.
no_new_dataset
0.936981
1502.04623
Ivo Danihelka
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
DRAW: A Recurrent Neural Network For Image Generation
null
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
[ { "version": "v1", "created": "Mon, 16 Feb 2015 16:48:56 GMT" }, { "version": "v2", "created": "Wed, 20 May 2015 15:29:42 GMT" } ]
2015-05-21T00:00:00
[ [ "Gregor", "Karol", "" ], [ "Danihelka", "Ivo", "" ], [ "Graves", "Alex", "" ], [ "Rezende", "Danilo Jimenez", "" ], [ "Wierstra", "Daan", "" ] ]
TITLE: DRAW: A Recurrent Neural Network For Image Generation ABSTRACT: This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
no_new_dataset
0.953057
1504.06755
Pingmei Xu
Pingmei Xu, Krista A Ehinger, Yinda Zhang, Adam Finkelstein, Sanjeev R. Kulkarni, Jianxiong Xiao
TurkerGaze: Crowdsourcing Saliency with Webcam based Eye Tracking
9 pages, 14 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional eye tracking requires specialized hardware, which means collecting gaze data from many observers is expensive, tedious and slow. Therefore, existing saliency prediction datasets are orders of magnitude smaller than typical datasets for other vision recognition tasks. The small size of these datasets limits the potential for training data-intensive algorithms, and causes overfitting in benchmark evaluation. To address this deficiency, this paper introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical Turk (AMTurk). By a combination of careful algorithm and gaming protocol design, our system obtains eye tracking data for saliency prediction comparable to data gathered in a traditional lab setting, with relatively lower cost and less effort on the part of the researchers. Using this tool, we build a saliency dataset for a large number of natural images. We will open-source our tool and provide a web server where researchers can upload their images to get eye tracking results from AMTurk.
[ { "version": "v1", "created": "Sat, 25 Apr 2015 19:26:47 GMT" }, { "version": "v2", "created": "Wed, 20 May 2015 18:51:23 GMT" } ]
2015-05-21T00:00:00
[ [ "Xu", "Pingmei", "" ], [ "Ehinger", "Krista A", "" ], [ "Zhang", "Yinda", "" ], [ "Finkelstein", "Adam", "" ], [ "Kulkarni", "Sanjeev R.", "" ], [ "Xiao", "Jianxiong", "" ] ]
TITLE: TurkerGaze: Crowdsourcing Saliency with Webcam based Eye Tracking ABSTRACT: Traditional eye tracking requires specialized hardware, which means collecting gaze data from many observers is expensive, tedious and slow. Therefore, existing saliency prediction datasets are orders of magnitude smaller than typical datasets for other vision recognition tasks. The small size of these datasets limits the potential for training data-intensive algorithms, and causes overfitting in benchmark evaluation. To address this deficiency, this paper introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical Turk (AMTurk). By a combination of careful algorithm and gaming protocol design, our system obtains eye tracking data for saliency prediction comparable to data gathered in a traditional lab setting, with relatively lower cost and less effort on the part of the researchers. Using this tool, we build a saliency dataset for a large number of natural images. We will open-source our tool and provide a web server where researchers can upload their images to get eye tracking results from AMTurk.
new_dataset
0.947137
1505.05208
Raajen Patel
Raajen Patel, Thomas A. Goldstein, Eva L. Dyer, Azalia Mirhoseini, and Richard G. Baraniuk
oASIS: Adaptive Column Sampling for Kernel Matrix Approximation
null
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Kernel matrices (e.g. Gram or similarity matrices) are essential for many state-of-the-art approaches to classification, clustering, and dimensionality reduction. For large datasets, the cost of forming and factoring such kernel matrices becomes intractable. To address this challenge, we introduce a new adaptive sampling algorithm called Accelerated Sequential Incoherence Selection (oASIS) that samples columns without explicitly computing the entire kernel matrix. We provide conditions under which oASIS is guaranteed to exactly recover the kernel matrix with an optimal number of columns selected. Numerical experiments on both synthetic and real-world datasets demonstrate that oASIS achieves performance comparable to state-of-the-art adaptive sampling methods at a fraction of the computational cost. The low runtime complexity of oASIS and its low memory footprint enable the solution of large problems that are simply intractable using other adaptive methods.
[ { "version": "v1", "created": "Tue, 19 May 2015 23:12:01 GMT" } ]
2015-05-21T00:00:00
[ [ "Patel", "Raajen", "" ], [ "Goldstein", "Thomas A.", "" ], [ "Dyer", "Eva L.", "" ], [ "Mirhoseini", "Azalia", "" ], [ "Baraniuk", "Richard G.", "" ] ]
TITLE: oASIS: Adaptive Column Sampling for Kernel Matrix Approximation ABSTRACT: Kernel matrices (e.g. Gram or similarity matrices) are essential for many state-of-the-art approaches to classification, clustering, and dimensionality reduction. For large datasets, the cost of forming and factoring such kernel matrices becomes intractable. To address this challenge, we introduce a new adaptive sampling algorithm called Accelerated Sequential Incoherence Selection (oASIS) that samples columns without explicitly computing the entire kernel matrix. We provide conditions under which oASIS is guaranteed to exactly recover the kernel matrix with an optimal number of columns selected. Numerical experiments on both synthetic and real-world datasets demonstrate that oASIS achieves performance comparable to state-of-the-art adaptive sampling methods at a fraction of the computational cost. The low runtime complexity of oASIS and its low memory footprint enable the solution of large problems that are simply intractable using other adaptive methods.
no_new_dataset
0.944485
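The oASIS record above approximates a kernel matrix from adaptively sampled columns; its exact selection criterion is not reproduced here. The sketch below shows the generic Nyström approximation such methods build on, using a greedy diagonal-residual rule as a simple stand-in for the adaptive criterion; the RBF kernel, data, and number of columns are illustrative, and the chosen columns are recomputed each round for clarity rather than updated incrementally as oASIS does.

```python
# Minimal sketch: Nystrom approximation of an RBF kernel from a small set of
# greedily chosen columns (a stand-in for the oASIS selection criterion).
import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_greedy(X, n_cols, gamma=0.5):
    n = X.shape[0]
    diag = np.ones(n)                      # RBF kernel has a unit diagonal
    chosen = [0]
    for _ in range(n_cols - 1):
        C = rbf(X, X[chosen], gamma)       # only the chosen columns are computed
        W = C[chosen, :]
        approx_diag = np.einsum("ij,jk,ik->i", C, np.linalg.pinv(W), C)
        residual = diag - approx_diag
        residual[chosen] = -np.inf         # never pick the same column twice
        chosen.append(int(np.argmax(residual)))
    C = rbf(X, X[chosen], gamma)
    W = C[chosen, :]
    return C @ np.linalg.pinv(W) @ C.T     # Nystrom approximation of the full kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
K = rbf(X, X)
K_hat = nystrom_greedy(X, n_cols=30)
print("relative Frobenius error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```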
1505.05211
Souvik Bhattacherjee
Souvik Bhattacherjee and Amit Chavan and Silu Huang and Amol Deshpande and Aditya Parameswaran
Principles of Dataset Versioning: Exploring the Recreation/Storage Tradeoff
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The relative ease of collaborative data science and analysis has led to a proliferation of many thousands or millions of $versions$ of the same datasets in many scientific and commercial domains, acquired or constructed at various stages of data analysis across many users, and often over long periods of time. Managing, storing, and recreating these dataset versions is a non-trivial task. The fundamental challenge here is the $storage-recreation\;trade-off$: the more storage we use, the faster it is to recreate or retrieve versions, while the less storage we use, the slower it is to recreate or retrieve versions. Despite the fundamental nature of this problem, there has been surprisingly little work on it. In this paper, we study this trade-off in a principled manner: we formulate six problems under various settings, trading off these quantities in various ways, demonstrate that most of the problems are intractable, and propose a suite of inexpensive heuristics drawing on techniques from the delay-constrained scheduling and spanning tree literature to solve these problems. We have built a prototype version management system that aims to serve as a foundation to our DATAHUB system for facilitating collaborative data science. We demonstrate, via extensive experiments, that our proposed heuristics provide efficient solutions in practical dataset versioning scenarios.
[ { "version": "v1", "created": "Tue, 19 May 2015 23:45:05 GMT" } ]
2015-05-21T00:00:00
[ [ "Bhattacherjee", "Souvik", "" ], [ "Chavan", "Amit", "" ], [ "Huang", "Silu", "" ], [ "Deshpande", "Amol", "" ], [ "Parameswaran", "Aditya", "" ] ]
TITLE: Principles of Dataset Versioning: Exploring the Recreation/Storage Tradeoff ABSTRACT: The relative ease of collaborative data science and analysis has led to a proliferation of many thousands or millions of $versions$ of the same datasets in many scientific and commercial domains, acquired or constructed at various stages of data analysis across many users, and often over long periods of time. Managing, storing, and recreating these dataset versions is a non-trivial task. The fundamental challenge here is the $storage-recreation\;trade-off$: the more storage we use, the faster it is to recreate or retrieve versions, while the less storage we use, the slower it is to recreate or retrieve versions. Despite the fundamental nature of this problem, there has been surprisingly little work on it. In this paper, we study this trade-off in a principled manner: we formulate six problems under various settings, trading off these quantities in various ways, demonstrate that most of the problems are intractable, and propose a suite of inexpensive heuristics drawing on techniques from the delay-constrained scheduling and spanning tree literature to solve these problems. We have built a prototype version management system that aims to serve as a foundation to our DATAHUB system for facilitating collaborative data science. We demonstrate, via extensive experiments, that our proposed heuristics provide efficient solutions in practical dataset versioning scenarios.
no_new_dataset
0.944382
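The dataset-versioning record above frames storage versus recreation as a graph problem over versions connected by deltas. A minimal sketch of one end of that trade-off is given below: a shortest-path tree from the materialized root minimizes recreation cost, and the resulting storage is the sum of the chosen delta sizes. The toy version graph and weights are invented; a minimum spanning tree over the same graph would instead favor the storage end of the trade-off.

```python
# Minimal sketch: pick how each version is reached from a materialized root via
# deltas, reporting total storage and per-version recreation cost.
import heapq

# edges[(u, v)] = delta size (cost to store v as a delta of u); "root" is
# materialized in full at the given storage cost.
full_size = {"root": 100}
edges = {("root", "v1"): 10, ("root", "v2"): 40, ("v1", "v2"): 8,
         ("v1", "v3"): 12, ("v2", "v3"): 5}

def shortest_path_tree(root, edges):
    """Dijkstra over the version graph; returns parent pointers and recreation costs."""
    adj = {}
    for (u, v), w in edges.items():
        adj.setdefault(u, []).append((v, w))
    dist, parent = {root: 0}, {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return parent, dist

parent, recreation = shortest_path_tree("root", edges)
storage = full_size["root"] + sum(edges[(parent[v], v)] for v in parent if parent[v] is not None)
print("parent pointers:", parent)
print("total storage:", storage)
print("recreation cost per version:", recreation)
```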
1505.05225
Lihua Guo
Guo Lihua, Li Fudi
Image aesthetic evaluation using paralleled deep convolution neural network
7 pages, 6 figures, 9 tables
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image aesthetic evaluation has attracted much attention in recent years. Image aesthetic evaluation methods depend heavily on effective aesthetic features. Traditional methods always extract hand-crafted features. However, these hand-crafted features are typically designed to suit particular datasets, and extracting them requires special design. Rather than extracting hand-crafted features, this paper first adopts automatic learning of aesthetic features based on a deep convolutional neural network (DCNN). For a given training dataset, a DCNN architecture with high complexity may suffer from over-fitting, while a DCNN architecture with low complexity would not extract effective features efficiently. For these reasons, we further propose a paralleled deep convolutional neural network (PDCNN) with multi-level structures that automatically adapts to the training dataset. Experimental results show that the proposed PDCNN architecture achieves better performance than other traditional methods.
[ { "version": "v1", "created": "Wed, 20 May 2015 02:03:23 GMT" } ]
2015-05-21T00:00:00
[ [ "Lihua", "Guo", "" ], [ "Fudi", "Li", "" ] ]
TITLE: Image aesthetic evaluation using paralleled deep convolution neural network ABSTRACT: Image aesthetic evaluation has attracted much attention in recent years. Image aesthetic evaluation methods depend heavily on effective aesthetic features. Traditional methods always extract hand-crafted features. However, these hand-crafted features are typically designed to suit particular datasets, and extracting them requires special design. Rather than extracting hand-crafted features, this paper first adopts automatic learning of aesthetic features based on a deep convolutional neural network (DCNN). For a given training dataset, a DCNN architecture with high complexity may suffer from over-fitting, while a DCNN architecture with low complexity would not extract effective features efficiently. For these reasons, we further propose a paralleled deep convolutional neural network (PDCNN) with multi-level structures that automatically adapts to the training dataset. Experimental results show that the proposed PDCNN architecture achieves better performance than other traditional methods.
no_new_dataset
0.951549
1505.05232
Songfan Yang
Songfan Yang, Deva Ramanan
Multi-scale recognition with DAG-CNNs
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore multi-scale convolutional neural nets (CNNs) for image classification. Contemporary approaches extract features from a single output layer. By extracting features from multiple layers, one can simultaneously reason about high, mid, and low-level features during classification. The resulting multi-scale architecture can itself be seen as a feed-forward model that is structured as a directed acyclic graph (DAG-CNNs). We use DAG-CNNs to learn a set of multiscale features that can be effectively shared between coarse and fine-grained classification tasks. While fine-tuning such models helps performance, we show that even "off-the-shelf" multiscale features perform quite well. We present extensive analysis and demonstrate state-of-the-art classification performance on three standard scene benchmarks (SUN397, MIT67, and Scene15). In terms of the heavily benchmarked MIT67 and Scene15 datasets, our results reduce the lowest previously-reported error by 23.9% and 9.5%, respectively.
[ { "version": "v1", "created": "Wed, 20 May 2015 02:52:07 GMT" } ]
2015-05-21T00:00:00
[ [ "Yang", "Songfan", "" ], [ "Ramanan", "Deva", "" ] ]
TITLE: Multi-scale recognition with DAG-CNNs ABSTRACT: We explore multi-scale convolutional neural nets (CNNs) for image classification. Contemporary approaches extract features from a single output layer. By extracting features from multiple layers, one can simultaneously reason about high, mid, and low-level features during classification. The resulting multi-scale architecture can itself be seen as a feed-forward model that is structured as a directed acyclic graph (DAG-CNNs). We use DAG-CNNs to learn a set of multiscale features that can be effectively shared between coarse and fine-grained classification tasks. While fine-tuning such models helps performance, we show that even "off-the-shelf" multiscale features perform quite well. We present extensive analysis and demonstrate state-of-the-art classification performance on three standard scene benchmarks (SUN397, MIT67, and Scene15). In terms of the heavily benchmarked MIT67 and Scene15 datasets, our results reduce the lowest previously-reported error by 23.9% and 9.5%, respectively.
no_new_dataset
0.947817
1505.05322
Ai Munandar Tb
Azhari SN and Tb. Ai Munandar
Unsupervised Neural Network-Naive Bayes Model for Grouping Data Regional Development Results
6 pages
International Journal of Computer Applications, Volume 104, No 15, October 2014
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the development quadrant plays an important role in assessing the development achievement of a district in terms of sectoral gross regional domestic product (GDP). The development quadrant is typically determined using Klassen rules based on sectoral GDP. This study aims to provide a new approach to clustering regional development quadrants using clustering techniques. Clustering is performed based on the average growth and contribution of a district's development compared with the average growth and contribution of the province's development, using data from one year compared against the data of another year. Testing of the clustering model is performed on datasets from two provinces, namely Banten (as testing data) and Central Java (as training data), to assess the accuracy of the proposed classification model. The proposed model combines two learning methods: an unsupervised method (Self-Organizing Map / SOM-NN) and a supervised method (Naive Bayes). The SOM-NN method is used as a learning engine to generate the class targets of the training data that are then used by the Naive Bayes learner. The results show that the clustering accuracy of the model is 98.1%, while comparing the model's clustering results against manual analysis shows a smaller accuracy for the Klassen typology, i.e., 29.63%. In addition, the clustering results of the proposed model are influenced by the number and diversity of the datasets used.
[ { "version": "v1", "created": "Wed, 20 May 2015 11:22:18 GMT" } ]
2015-05-21T00:00:00
[ [ "SN", "Azhari", "" ], [ "Munandar", "Tb. Ai", "" ] ]
TITLE: Unsupervised Neural Network-Naive Bayes Model for Grouping Data Regional Development Results ABSTRACT: Determining the development quadrant plays an important role in assessing the development achievement of a district in terms of sectoral gross regional domestic product (GDP). The development quadrant is typically determined using Klassen rules based on sectoral GDP. This study aims to provide a new approach to clustering regional development quadrants using clustering techniques. Clustering is performed based on the average growth and contribution of a district's development compared with the average growth and contribution of the province's development, using data from one year compared against the data of another year. Testing of the clustering model is performed on datasets from two provinces, namely Banten (as testing data) and Central Java (as training data), to assess the accuracy of the proposed classification model. The proposed model combines two learning methods: an unsupervised method (Self-Organizing Map / SOM-NN) and a supervised method (Naive Bayes). The SOM-NN method is used as a learning engine to generate the class targets of the training data that are then used by the Naive Bayes learner. The results show that the clustering accuracy of the model is 98.1%, while comparing the model's clustering results against manual analysis shows a smaller accuracy for the Klassen typology, i.e., 29.63%. In addition, the clustering results of the proposed model are influenced by the number and diversity of the datasets used.
no_new_dataset
0.954351
1505.05338
Poorna Dasgupta
Poorna Banerjee Dasgupta
Algorithmic Analysis of Edge Ranking and Profiling for MTF Determination of an Imaging System
3 pages, Published with International Journal of Computer Trends and Technology (IJCTT), Volume-23 Number-1, 2015
International Journal of Computer Trends and Technology (IJCTT) V23(1):46-48, May 2015
10.14445/22312803/IJCTT-V23P110
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Edge detection is one of the principal techniques for detecting discontinuities in the gray levels of image pixels. The Modulation Transfer Function (MTF) is one of the main criteria for assessing imaging quality and is a parameter frequently used for measuring the sharpness of an imaging system. In order to determine the MTF, it is essential to determine the best edge from the target image so that an edge profile can be developed and then the line spread function, and hence the MTF, can be computed accordingly. For regular image sizes, the human visual system is adept enough to identify suitable edges from the image. But for huge image datasets, such as those obtained from satellites, the image size may run to a few gigabytes; in such a case, manual inspection of images to determine the most suitable edge is not feasible and hence edge profiling tasks have to be automated. This paper presents a novel, yet simple, algorithm for edge ranking and detection from image datasets for MTF computation, which is ideal for automation on vectorised graphics processing units.
[ { "version": "v1", "created": "Wed, 20 May 2015 12:12:48 GMT" } ]
2015-05-21T00:00:00
[ [ "Dasgupta", "Poorna Banerjee", "" ] ]
TITLE: Algorithmic Analysis of Edge Ranking and Profiling for MTF Determination of an Imaging System ABSTRACT: Edge detection is one of the principal techniques for detecting discontinuities in the gray levels of image pixels. The Modulation Transfer Function (MTF) is one of the main criteria for assessing imaging quality and is a parameter frequently used for measuring the sharpness of an imaging system. In order to determine the MTF, it is essential to determine the best edge from the target image so that an edge profile can be developed and then the line spread function, and hence the MTF, can be computed accordingly. For regular image sizes, the human visual system is adept enough to identify suitable edges from the image. But for huge image datasets, such as those obtained from satellites, the image size may run to a few gigabytes; in such a case, manual inspection of images to determine the most suitable edge is not feasible and hence edge profiling tasks have to be automated. This paper presents a novel, yet simple, algorithm for edge ranking and detection from image datasets for MTF computation, which is ideal for automation on vectorised graphics processing units.
no_new_dataset
0.947817
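The edge-ranking record above feeds a standard MTF pipeline: pick the best edge, build the edge spread function, differentiate it into a line spread function, and Fourier-transform that into the MTF. The sketch below runs that pipeline on a synthetic blurred step edge, with edge ranking reduced to picking the row with the strongest gradient; the blur model, noise level, and normalization are illustrative assumptions rather than the paper's algorithm.

```python
# Minimal sketch: rank candidate edge rows by gradient strength, then compute
# ESF -> LSF -> MTF for the chosen row of a synthetic blurred step edge.
import numpy as np

# Synthetic image: a vertical step edge blurred by a Gaussian (the "imaging system").
width, height, sigma = 128, 32, 2.0
x = np.arange(width)
step = (x >= width // 2).astype(float)
kx = np.arange(-15, 16)
kernel = np.exp(-kx**2 / (2 * sigma**2))
kernel /= kernel.sum()
blurred_row = np.convolve(step, kernel, mode="same")
image = np.tile(blurred_row, (height, 1)) + 0.01 * np.random.default_rng(0).normal(size=(height, width))

# "Edge ranking" reduced to its simplest form: score each row by its strongest gradient.
gradients = np.abs(np.diff(image, axis=1))
best_row = int(np.argmax(gradients.max(axis=1)))

esf = image[best_row]                 # edge spread function (single profile)
lsf = np.diff(esf)                    # line spread function
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]                         # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(lsf.size)     # spatial frequency in cycles/pixel

print("best edge row:", best_row)
print("MTF at the lowest spatial frequencies:", np.round(mtf[:5], 3))
cutoff = freqs[np.argmax(mtf < 0.5)] if np.any(mtf < 0.5) else None
print("first spatial frequency where MTF drops below 0.5:", cutoff)
```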
1505.05354
Lianwen Jin
Weixin Yang, Lianwen Jin, Dacheng Tao, Zecheng Xie, Ziyong Feng
DropSample: A New Training Method to Enhance Deep Convolutional Neural Networks for Large-Scale Unconstrained Handwritten Chinese Character Recognition
18 pages, 8 figures, 5 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by the theory of Leitner's learning box from the field of psychology, we propose DropSample, a new method for training deep convolutional neural networks (DCNNs), and apply it to large-scale online handwritten Chinese character recognition (HCCR). According to the principle of DropSample, each training sample is associated with a quota function that is dynamically adjusted on the basis of the classification confidence given by the DCNN softmax output. After a learning iteration, samples with low confidence will have a higher probability of being selected as training data in the next iteration; in contrast, well-trained and well-recognized samples with very high confidence will have a lower probability of being involved in the next training iteration and can be gradually eliminated. As a result, the learning process becomes more efficient as it progresses. Furthermore, we investigate the use of domain-specific knowledge to enhance the performance of DCNN by adding a domain knowledge layer before the traditional CNN. By adopting DropSample together with different types of domain-specific knowledge, the accuracy of HCCR can be improved efficiently. Experiments on the CASIA-OLHWDB 1.0, CASIA-OLHWDB 1.1, and ICDAR 2013 online HCCR competition datasets yield outstanding recognition rates of 97.33%, 97.06%, and 97.51% respectively, all of which are significantly better than the previous best results reported in the literature.
[ { "version": "v1", "created": "Wed, 20 May 2015 13:08:57 GMT" } ]
2015-05-21T00:00:00
[ [ "Yang", "Weixin", "" ], [ "Jin", "Lianwen", "" ], [ "Tao", "Dacheng", "" ], [ "Xie", "Zecheng", "" ], [ "Feng", "Ziyong", "" ] ]
TITLE: DropSample: A New Training Method to Enhance Deep Convolutional Neural Networks for Large-Scale Unconstrained Handwritten Chinese Character Recognition ABSTRACT: Inspired by the theory of Leitner's learning box from the field of psychology, we propose DropSample, a new method for training deep convolutional neural networks (DCNNs), and apply it to large-scale online handwritten Chinese character recognition (HCCR). According to the principle of DropSample, each training sample is associated with a quota function that is dynamically adjusted on the basis of the classification confidence given by the DCNN softmax output. After a learning iteration, samples with low confidence will have a higher probability of being selected as training data in the next iteration; in contrast, well-trained and well-recognized samples with very high confidence will have a lower probability of being involved in the next training iteration and can be gradually eliminated. As a result, the learning process becomes more efficient as it progresses. Furthermore, we investigate the use of domain-specific knowledge to enhance the performance of the DCNN by adding a domain knowledge layer before the traditional CNN. By adopting DropSample together with different types of domain-specific knowledge, the accuracy of HCCR can be improved efficiently. Experiments on the CASIA-OLHWDB 1.0, CASIA-OLHWDB 1.1, and ICDAR 2013 online HCCR competition datasets yield outstanding recognition rates of 97.33%, 97.06%, and 97.51%, respectively, all of which are significantly better than the previous best results reported in the literature.
no_new_dataset
0.948585
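DropSample, as summarised in the record above, keys each training sample's selection probability to the network's softmax confidence on it. The sketch below is a minimal Python illustration of that sampling idea under my own simplifying assumptions (one scalar confidence per sample and an ad hoc quota update with a small floor); it is not the authors' exact quota function.

import numpy as np

rng = np.random.default_rng(0)

def update_quotas(quotas, confidences, floor=0.05):
    """Toy quota update: low-confidence samples keep high quotas,
    confidently recognised samples decay toward a small floor."""
    quotas = 0.5 * quotas + 0.5 * (1.0 - confidences)
    return np.maximum(quotas, floor)

def sample_minibatch(quotas, batch_size):
    """Draw a minibatch with probability proportional to each quota."""
    p = quotas / quotas.sum()
    return rng.choice(quotas.size, size=batch_size, replace=False, p=p)

# toy usage with 10 samples and stand-in softmax confidences
quotas = np.ones(10)
confidences = rng.uniform(size=10)
quotas = update_quotas(quotas, confidences)
print(sample_minibatch(quotas, batch_size=4))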
1010.1015
Keith Wiley
Keith Wiley, Andrew Connolly, Jeff Gardner, Simon Krughof, Magdalena Balazinska, Bill Howe, YongChul Kwon and YingYi Bu
Astronomy in the Cloud: Using MapReduce for Image Coaddition
31 pages, 11 figures, 2 tables
null
10.1086/658877
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computation challenges such as anomaly detection and classification, and moving object tracking. Since such studies benefit from the highest quality data, methods such as image coaddition (stacking) will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources or transient objects, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, e.g., Amazon's EC2. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
[ { "version": "v1", "created": "Tue, 5 Oct 2010 20:35:53 GMT" } ]
2015-05-20T00:00:00
[ [ "Wiley", "Keith", "" ], [ "Connolly", "Andrew", "" ], [ "Gardner", "Jeff", "" ], [ "Krughof", "Simon", "" ], [ "Balazinska", "Magdalena", "" ], [ "Howe", "Bill", "" ], [ "Kwon", "YongChul", "" ], [ "Bu", "YingYi", "" ] ]
TITLE: Astronomy in the Cloud: Using MapReduce for Image Coaddition ABSTRACT: In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computation challenges such as anomaly detection and classification, and moving object tracking. Since such studies benefit from the highest quality data, methods such as image coaddition (stacking) will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources or transient objects, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, e.g., Amazon's EC2. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
no_new_dataset
0.938857
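The coaddition task described above maps naturally onto MapReduce: the map step emits per-pixel contributions keyed by sky position, and the reduce step averages them. Below is a framework-free Python sketch of that structure only; a real Hadoop job would use streaming or the Java API and would handle WCS resampling, masking and weighting, none of which is shown, and the dict-based image format is assumed purely for illustration.

from collections import defaultdict

def map_image(image):
    """Map step: emit (pixel_key, (value, weight)) for every pixel of one
    input image, assumed already resampled to a common sky grid."""
    for (x, y), value in image["pixels"].items():
        yield (x, y), (value, image.get("weight", 1.0))

def reduce_pixels(pairs):
    """Reduce step: weighted mean of all contributions per pixel."""
    acc = defaultdict(lambda: [0.0, 0.0])   # key -> [sum(w*v), sum(w)]
    for key, (value, weight) in pairs:
        acc[key][0] += weight * value
        acc[key][1] += weight
    return {key: s / w for key, (s, w) in acc.items() if w > 0}

# toy usage: coadd two tiny "images" on a shared grid
images = [
    {"pixels": {(0, 0): 1.0, (0, 1): 2.0}, "weight": 1.0},
    {"pixels": {(0, 0): 3.0, (0, 1): 2.0}, "weight": 1.0},
]
pairs = [kv for img in images for kv in map_image(img)]
print(reduce_pixels(pairs))   # {(0, 0): 2.0, (0, 1): 2.0}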
1011.6268
Marija Mitrovic
Marija Mitrovi\'c, Georgios Paltoglou and Bosiljka Tadi\'c
Quantitative Analysis of Bloggers Collective Behavior Powered by Emotions
null
null
10.1088/1742-5468/2011/02/P02005
0143821
physics.soc-ph cond-mat.stat-mech cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale data resulting from users' online interactions provide the ultimate source of information for studying emergent social phenomena on the Web. From individual actions of users to observable collective behaviors, different mechanisms involving emotions expressed in the posted text play a role. Here we combine approaches of statistical physics with machine-learning methods of text analysis to study the emergence of emotional behavior among Web users. Mapping the high-resolution data from digg.com onto a bipartite network of users and their comments on posted stories, we identify user communities centered around certain popular posts and determine the emotional content of the related comments with the emotion classifier developed for this type of text. Applied over different time periods, this framework reveals strong correlations between the excess of negative emotions and the evolution of communities. We observe avalanches of emotional comments exhibiting significant self-organized critical behavior and temporal correlations. To explore the robustness of these critical states, we design a network automaton model with realistic network connections and several control parameters, which can be inferred from the dataset. Dissemination of emotions by a small fraction of very active users appears to critically tune the collective states.
[ { "version": "v1", "created": "Mon, 29 Nov 2010 15:36:39 GMT" } ]
2015-05-20T00:00:00
[ [ "Mitrović", "Marija", "" ], [ "Paltoglou", "Georgios", "" ], [ "Tadić", "Bosiljka", "" ] ]
TITLE: Quantitative Analysis of Bloggers Collective Behavior Powered by Emotions ABSTRACT: Large-scale data resulting from users' online interactions provide the ultimate source of information for studying emergent social phenomena on the Web. From individual actions of users to observable collective behaviors, different mechanisms involving emotions expressed in the posted text play a role. Here we combine approaches of statistical physics with machine-learning methods of text analysis to study the emergence of emotional behavior among Web users. Mapping the high-resolution data from digg.com onto a bipartite network of users and their comments on posted stories, we identify user communities centered around certain popular posts and determine the emotional content of the related comments with the emotion classifier developed for this type of text. Applied over different time periods, this framework reveals strong correlations between the excess of negative emotions and the evolution of communities. We observe avalanches of emotional comments exhibiting significant self-organized critical behavior and temporal correlations. To explore the robustness of these critical states, we design a network automaton model with realistic network connections and several control parameters, which can be inferred from the dataset. Dissemination of emotions by a small fraction of very active users appears to critically tune the collective states.
no_new_dataset
0.940079
1012.1184
Lei Zhang Dr.
Weisheng Dong, Lei Zhang, Guangming Shi, Xiaolin Wu
Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization
35 pages. This paper is under review in IEEE TIP
null
10.1109/TIP.2011.2108306
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes much to the development of l1-norm optimization techniques and to the fact that natural images are intrinsically sparse in some domain. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that the contents can vary significantly across different images or different patches in a single image, we propose to learn various sets of bases from a pre-collected dataset of example image patches; then, for a given patch to be processed, one set of bases is adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models is learned from the dataset of example image patches. The AR models best fitted to a given patch are adaptively selected to regularize the local image structures. Second, the non-local self-similarity of the image is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.
[ { "version": "v1", "created": "Mon, 6 Dec 2010 14:37:14 GMT" } ]
2015-05-20T00:00:00
[ [ "Dong", "Weisheng", "" ], [ "Zhang", "Lei", "" ], [ "Shi", "Guangming", "" ], [ "Wu", "Xiaolin", "" ] ]
TITLE: Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization ABSTRACT: As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes much to the development of l1-norm optimization techniques and to the fact that natural images are intrinsically sparse in some domain. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that the contents can vary significantly across different images or different patches in a single image, we propose to learn various sets of bases from a pre-collected dataset of example image patches; then, for a given patch to be processed, one set of bases is adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models is learned from the dataset of example image patches. The AR models best fitted to a given patch are adaptively selected to regularize the local image structures. Second, the non-local self-similarity of the image is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.
no_new_dataset
0.948632
1401.4529
Bal\'azs Hidasi
Bal\'azs Hidasi, Domonkos Tikk
General factorization framework for context-aware recommendations
The final publication is available at Springer via http://dx.doi.org/10.1007/s10618-015-0417-y. Data Mining and Knowledge Discovery, 2015
null
10.1007/s10618-015-0417-y
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Context-aware recommendation algorithms focus on refining recommendations by considering additional information available to the system. This topic has gained a lot of attention recently. Among others, several factorization methods were proposed to solve the problem, although most of them assume explicit feedback, which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, preference modeling under context is less explored due to the lack of tools allowing for easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models and the importance of proper modeling largely increase. In this paper we propose a General Factorization Framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, be it explicit or implicit feedback based. Its scaling properties make it usable under real-life circumstances as well. We demonstrate the framework's potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real-life dataset. We show in our experiments -- performed on five real-life, implicit feedback datasets -- that proper preference modelling significantly increases recommendation accuracy, and previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant with the Multidimensional Dataspace Model, one of the most extensive data models of context-enriched data. Extended GFF allows the seamless incorporation of information into the fac[truncated]
[ { "version": "v1", "created": "Sat, 18 Jan 2014 11:13:26 GMT" }, { "version": "v2", "created": "Tue, 19 May 2015 11:50:22 GMT" } ]
2015-05-20T00:00:00
[ [ "Hidasi", "Balázs", "" ], [ "Tikk", "Domonkos", "" ] ]
TITLE: General factorization framework for context-aware recommendations ABSTRACT: Context-aware recommendation algorithms focus on refining recommendations by considering additional information available to the system. This topic has gained a lot of attention recently. Among others, several factorization methods were proposed to solve the problem, although most of them assume explicit feedback, which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, preference modeling under context is less explored due to the lack of tools allowing for easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models and the importance of proper modeling largely increase. In this paper we propose a General Factorization Framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, be it explicit or implicit feedback based. Its scaling properties make it usable under real-life circumstances as well. We demonstrate the framework's potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real-life dataset. We show in our experiments -- performed on five real-life, implicit feedback datasets -- that proper preference modelling significantly increases recommendation accuracy, and previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant with the Multidimensional Dataspace Model, one of the most extensive data models of context-enriched data. Extended GFF allows the seamless incorporation of information into the fac[truncated]
no_new_dataset
0.947672
1408.6865
Paul Dauncey
P. D. Dauncey, M. Kenzie, N. Wardle and G. J. Davies
Handling uncertainties in background shapes: the discrete profiling method
Accepted by J.Inst
null
10.1088/1748-0221/10/04/P04015
null
physics.data-an hep-ex
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A common problem in data analysis is that the functional form, as well as the parameter values, of the underlying model which should describe a dataset is not known a priori. In these cases some extra uncertainty must be assigned to the extracted parameters of interest due to lack of exact knowledge of the functional form of the model. A method for assigning an appropriate error is presented. The method is based on considering the choice of functional form as a discrete nuisance parameter which is profiled in an analogous way to continuous nuisance parameters. The bias and coverage of this method are shown to be good when applied to a realistic example.
[ { "version": "v1", "created": "Thu, 28 Aug 2014 21:31:13 GMT" }, { "version": "v2", "created": "Mon, 22 Dec 2014 17:05:11 GMT" }, { "version": "v3", "created": "Wed, 24 Dec 2014 15:13:18 GMT" }, { "version": "v4", "created": "Fri, 20 Feb 2015 10:55:12 GMT" }, { "version": "v5", "created": "Tue, 28 Apr 2015 12:58:45 GMT" } ]
2015-05-20T00:00:00
[ [ "Dauncey", "P. D.", "" ], [ "Kenzie", "M.", "" ], [ "Wardle", "N.", "" ], [ "Davies", "G. J.", "" ] ]
TITLE: Handling uncertainties in background shapes: the discrete profiling method ABSTRACT: A common problem in data analysis is that the functional form, as well as the parameter values, of the underlying model which should describe a dataset is not known a priori. In these cases some extra uncertainty must be assigned to the extracted parameters of interest due to lack of exact knowledge of the functional form of the model. A method for assigning an appropriate error is presented. The method is based on considering the choice of functional form as a discrete nuisance parameter which is profiled in an analogous way to continuous nuisance parameters. The bias and coverage of this method are shown to be good when applied to a realistic example.
no_new_dataset
0.947186
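The discrete profiling method above treats the choice of background function as a discrete nuisance parameter: each candidate form is fitted, a correction depending on its number of free parameters is applied, and the envelope over candidates is profiled. The sketch below is only a loose least-squares caricature of that idea; the candidate set, the unit errors and the chi-square-plus-2k penalty are my assumptions, not the published prescription.

import numpy as np

def profile_discrete(x, y, candidates):
    """For each candidate (name, n_params, fit_fn), fit by least squares and
    return (score, name) minimising chi2 + 2 * n_params (assumed penalty)."""
    best = None
    for name, n_params, fit_fn in candidates:
        pred = fit_fn(x, y)
        chi2 = np.sum((y - pred) ** 2)      # unit errors assumed
        score = chi2 + 2 * n_params
        if best is None or score < best[0]:
            best = (score, name)
    return best

def poly_fit(deg):
    return lambda x, y: np.polyval(np.polyfit(x, y, deg), x)

# toy usage: smoothly falling background with noise
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.exp(-3.0 * x) + rng.normal(0.0, 0.02, x.size)
candidates = [("poly1", 2, poly_fit(1)),
              ("poly3", 4, poly_fit(3)),
              ("poly5", 6, poly_fit(5))]
print(profile_discrete(x, y, candidates))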
1501.07320
Arun Tejasvi Chaganty
Volodymyr Kuleshov and Arun Tejasvi Chaganty and Percy Liang
Tensor Factorization via Matrix Factorization
Appearing in Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS) 2015, San Diego, CA, USA. JMLR: W&CP volume 38
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tensor factorization arises in many machine learning applications, such as knowledge base modeling and parameter estimation in latent variable models. However, numerical methods for tensor factorization have not reached the level of maturity of matrix factorization methods. In this paper, we propose a new method for CP tensor factorization that uses random projections to reduce the problem to simultaneous matrix diagonalization. Our method is conceptually simple and also applies to non-orthogonal and asymmetric tensors of arbitrary order. We prove that a small number of random projections essentially preserves the spectral information in the tensor, allowing us to remove the dependence on the eigengap that plagued earlier tensor-to-matrix reductions. Experimentally, our method outperforms existing tensor factorization methods on both simulated data and two real datasets.
[ { "version": "v1", "created": "Thu, 29 Jan 2015 01:01:34 GMT" }, { "version": "v2", "created": "Mon, 18 May 2015 21:53:34 GMT" } ]
2015-05-20T00:00:00
[ [ "Kuleshov", "Volodymyr", "" ], [ "Chaganty", "Arun Tejasvi", "" ], [ "Liang", "Percy", "" ] ]
TITLE: Tensor Factorization via Matrix Factorization ABSTRACT: Tensor factorization arises in many machine learning applications, such as knowledge base modeling and parameter estimation in latent variable models. However, numerical methods for tensor factorization have not reached the level of maturity of matrix factorization methods. In this paper, we propose a new method for CP tensor factorization that uses random projections to reduce the problem to simultaneous matrix diagonalization. Our method is conceptually simple and also applies to non-orthogonal and asymmetric tensors of arbitrary order. We prove that a small number of random projections essentially preserves the spectral information in the tensor, allowing us to remove the dependence on the eigengap that plagued earlier tensor-to-matrix reductions. Experimentally, our method outperforms existing tensor factorization methods on both simulated data and two real datasets.
no_new_dataset
0.950134
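The reduction described above contracts the tensor along random directions so that the CP factors can be read off from (joint) matrix diagonalization. The sketch below shows only the classical two-projection special case for an exactly rank-d, d x d x d tensor with generic factors; the full method in the paper uses many projections and joint diagonalization, which is not reproduced here.

import numpy as np

rng = np.random.default_rng(2)

def cp_factors_two_projections(T):
    """Contract the third mode of T along two random vectors and
    eigendecompose M1 @ inv(M2); for T = sum_r a_r (x) b_r (x) c_r with
    invertible factor matrices, the eigenvectors recover the a_r
    up to permutation and scale."""
    d = T.shape[2]
    w1, w2 = rng.normal(size=d), rng.normal(size=d)
    M1 = np.einsum('ijk,k->ij', T, w1)
    M2 = np.einsum('ijk,k->ij', T, w2)
    vals, vecs = np.linalg.eig(M1 @ np.linalg.inv(M2))
    return vecs.real

# toy usage: build a rank-3 tensor from known factors and recover mode-1 factors
d = 3
A, B, C = (rng.normal(size=(d, d)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
A_hat = cp_factors_two_projections(T)
corr = np.abs(np.corrcoef(A_hat.T, A.T)[:d, d:])
print(np.round(corr.max(axis=1), 3))   # values near 1.0 indicate recovery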
1505.04803
Yong Jae Lee
Yong Jae Lee and Kristen Grauman
Predicting Important Objects for Egocentric Video Summarization
Published in the International Journal of Computer Vision (IJCV), January 2015
null
10.1007/s11263-014-0794-5
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video---such as the nearness to hands, gaze, and frequency of occurrence---and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.
[ { "version": "v1", "created": "Mon, 18 May 2015 20:07:20 GMT" } ]
2015-05-20T00:00:00
[ [ "Lee", "Yong Jae", "" ], [ "Grauman", "Kristen", "" ] ]
TITLE: Predicting Important Objects for Egocentric Video Summarization ABSTRACT: We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video---such as the nearness to hands, gaze, and frequency of occurrence---and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. We adjust the compactness of the final summary given either an importance selection criterion or a length budget; for the latter, we design an efficient dynamic programming solution that accounts for importance, visual uniqueness, and temporal displacement. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.
no_new_dataset
0.955068
1505.04873
Lior Talker
Lior Talker and Yael Moses and Ilan Shimshoni
Have a Look at What I See
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a method for guiding a photographer to rotate her/his smartphone camera to obtain an image that overlaps with another image of the same scene. The other image is taken by another photographer from a different viewpoint. Our method is applicable even when the images do not have overlapping fields of view. Straightforward applications of our method include sharing attention to regions of interest for social purposes, or adding missing images to improve structure-from-motion results. Our solution uses additional images of the scene, which are often available since many people use their smartphone cameras regularly. These images may be available online from other photographers who are present at the scene. Our method avoids 3D scene reconstruction; it relies instead on a new representation that consists of the spatial orders of the scene points on two axes, x and y. This representation allows a sequence of points to be chosen efficiently and projected onto the photographers' images, using epipolar point transfer. Overlaying these epipolar lines on the live preview of the camera produces a convenient interface to guide the user. The method was tested on challenging datasets of images and succeeded in guiding a photographer from one view to a non-overlapping destination view.
[ { "version": "v1", "created": "Tue, 19 May 2015 04:49:46 GMT" } ]
2015-05-20T00:00:00
[ [ "Talker", "Lior", "" ], [ "Moses", "Yael", "" ], [ "Shimshoni", "Ilan", "" ] ]
TITLE: Have a Look at What I See ABSTRACT: We propose a method for guiding a photographer to rotate her/his smartphone camera to obtain an image that overlaps with another image of the same scene. The other image is taken by another photographer from a different viewpoint. Our method is applicable even when the images do not have overlapping fields of view. Straightforward applications of our method include sharing attention to regions of interest for social purposes, or adding missing images to improve structure-from-motion results. Our solution uses additional images of the scene, which are often available since many people use their smartphone cameras regularly. These images may be available online from other photographers who are present at the scene. Our method avoids 3D scene reconstruction; it relies instead on a new representation that consists of the spatial orders of the scene points on two axes, x and y. This representation allows a sequence of points to be chosen efficiently and projected onto the photographers' images, using epipolar point transfer. Overlaying these epipolar lines on the live preview of the camera produces a convenient interface to guide the user. The method was tested on challenging datasets of images and succeeded in guiding a photographer from one view to a non-overlapping destination view.
no_new_dataset
0.942454
1505.04880
Nasser Ghadiri
Amin Beiranvand and Nasser Ghadiri
ADQUEX: Adaptive Processing of Federated Queries over Linked Data based on Tuple Routing
null
null
null
null
cs.DC cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the distribution of linked data across the web, methods that process federated queries in a distributed fashion are more attractive to users and have gained in popularity. In distributed processing of federated queries, we need methods and procedures to execute the query in an optimal manner. Most existing methods perform the optimization task based on some statistical information, whereas the query processor does not have precise statistics about the data sources' properties, since the sources are autonomous. When precise statistics are not available, the likelihood of wrong estimations increases greatly, which may lead to inefficient execution of the query at runtime. Another problem of the existing methods is that, in the optimization phase, they assume that the runtime conditions of query execution are stable, while the environment in which federated queries are executed over linked data is dynamic and unpredictable. Given these two problems, there is great potential for exploiting federated query processing techniques in an adaptive manner. In this paper, an adaptive method is proposed for processing federated queries over linked data, based on the concept of routing the tuples. The proposed method, named ADQUEX, is able to execute the query effectively without any prior statistical information. This method can change the query execution plan at runtime so that fewer intermediate results are produced. It can also adapt the execution plan to new situations if unpredicted network latencies arise. Extensive evaluation of our method by running real queries over well-known linked datasets shows very good results, especially for complex queries.
[ { "version": "v1", "created": "Tue, 19 May 2015 05:51:27 GMT" } ]
2015-05-20T00:00:00
[ [ "Beiranvand", "Amin", "" ], [ "Ghadiri", "Nasser", "" ] ]
TITLE: ADQUEX: Adaptive Processing of Federated Queries over Linked Data based on Tuple Routing ABSTRACT: Due to the distribution of linked data across the web, methods that process federated queries in a distributed fashion are more attractive to users and have gained in popularity. In distributed processing of federated queries, we need methods and procedures to execute the query in an optimal manner. Most existing methods perform the optimization task based on some statistical information, whereas the query processor does not have precise statistics about the data sources' properties, since the sources are autonomous. When precise statistics are not available, the likelihood of wrong estimations increases greatly, which may lead to inefficient execution of the query at runtime. Another problem of the existing methods is that, in the optimization phase, they assume that the runtime conditions of query execution are stable, while the environment in which federated queries are executed over linked data is dynamic and unpredictable. Given these two problems, there is great potential for exploiting federated query processing techniques in an adaptive manner. In this paper, an adaptive method is proposed for processing federated queries over linked data, based on the concept of routing the tuples. The proposed method, named ADQUEX, is able to execute the query effectively without any prior statistical information. This method can change the query execution plan at runtime so that fewer intermediate results are produced. It can also adapt the execution plan to new situations if unpredicted network latencies arise. Extensive evaluation of our method by running real queries over well-known linked datasets shows very good results, especially for complex queries.
no_new_dataset
0.937897
1505.04922
Lianwen Jin
Weixin Yang, Lianwen Jin, Manfei Liu
Character-level Chinese Writer Identification using Path Signature Feature, DropStroke and Deep CNN
5 pages, 4 figures, 2 tables. Manuscript is accepted to appear in ICDAR 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing online writer-identification systems require that the text content is supplied in advance and rely on separately designed features and classifiers. The identifications are based on lines of text, entire paragraphs, or entire documents; however, these materials are not always available. In this paper, we introduce a path-signature feature to an end-to-end text-independent writer-identification system with a deep convolutional neural network (DCNN). Because deep models require a considerable amount of data to achieve good performance, we propose a data-augmentation method named DropStroke to enrich personal handwriting. Experiments were conducted on online handwritten Chinese characters from the CASIA-OLHWDB1.0 dataset, which consists of 3,866 classes from 420 writers. For each writer, we used only 200 samples for training and the remaining 3,666 for testing. The results reveal that the path-signature feature is useful for writer identification, and the proposed DropStroke technique enhances the generalization and significantly improves performance.
[ { "version": "v1", "created": "Tue, 19 May 2015 09:25:46 GMT" } ]
2015-05-20T00:00:00
[ [ "Yang", "Weixin", "" ], [ "Jin", "Lianwen", "" ], [ "Liu", "Manfei", "" ] ]
TITLE: Character-level Chinese Writer Identification using Path Signature Feature, DropStroke and Deep CNN ABSTRACT: Most existing online writer-identification systems require that the text content is supplied in advance and rely on separately designed features and classifiers. The identifications are based on lines of text, entire paragraphs, or entire documents; however, these materials are not always available. In this paper, we introduce a path-signature feature to an end-to-end text-independent writer-identification system with a deep convolutional neural network (DCNN). Because deep models require a considerable amount of data to achieve good performance, we propose a data-augmentation method named DropStroke to enrich personal handwriting. Experiments were conducted on online handwritten Chinese characters from the CASIA-OLHWDB1.0 dataset, which consists of 3,866 classes from 420 writers. For each writer, we used only 200 samples for training and the remaining 3,666 for testing. The results reveal that the path-signature feature is useful for writer identification, and the proposed DropStroke technique enhances the generalization and significantly improves performance.
no_new_dataset
0.948632
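DropStroke, as described above, augments an online character by randomly omitting strokes. Here is a minimal Python version of that idea under my own assumptions (a character is a list of strokes, each stroke a list of (x, y) points, at least one stroke is always kept, and the drop probability is arbitrary).

import random

def drop_stroke(character, p_drop=0.2, rng=random.Random(0)):
    """Return a copy of an online character with each stroke independently
    dropped with probability p_drop, always keeping at least one stroke."""
    kept = [stroke for stroke in character if rng.random() > p_drop]
    if not kept:                       # never return an empty character
        kept = [rng.choice(character)]
    return kept

# toy usage: a 3-stroke "character"
char = [[(0, 0), (1, 0)], [(0, 1), (1, 1)], [(0, 0), (0, 1)]]
augmented = [drop_stroke(char) for _ in range(5)]
print([len(a) for a in augmented])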
1505.04925
Lianwen Jin
Zhuoyao Zhong, Lianwen Jin, Zecheng Xie
High Performance Offline Handwritten Chinese Character Recognition Using GoogLeNet and Directional Feature Maps
5 pages, 4 figures, 2 tables. Manuscript is accepted to appear in ICDAR 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following their great success in solving many computer vision problems, convolutional neural networks (CNNs) have in recent years provided a new end-to-end approach to handwritten Chinese character recognition (HCCR) with very promising results. However, the CNNs proposed so far for HCCR were neither deep enough nor slim enough. We show in this paper that a deeper architecture can benefit HCCR considerably, achieving higher performance while being designed with fewer parameters. We also show that traditional feature extraction methods, such as Gabor or gradient feature maps, are still useful for enhancing the performance of a CNN. We design a streamlined version of GoogLeNet [13], which was originally proposed for image classification with a very deep architecture, for HCCR (denoted HCCR-GoogLeNet). The HCCR-GoogLeNet we used is 19 layers deep but involves only 7.26 million parameters. Experiments were conducted using the ICDAR 2013 offline HCCR competition dataset. We show that, with the proper incorporation of traditional directional feature maps, the proposed single and ensemble HCCR-GoogLeNet models achieve new state-of-the-art recognition accuracies of 96.35% and 96.74%, respectively, outperforming the previous best result by a significant margin.
[ { "version": "v1", "created": "Tue, 19 May 2015 09:32:54 GMT" } ]
2015-05-20T00:00:00
[ [ "Zhong", "Zhuoyao", "" ], [ "Jin", "Lianwen", "" ], [ "Xie", "Zecheng", "" ] ]
TITLE: High Performance Offline Handwritten Chinese Character Recognition Using GoogLeNet and Directional Feature Maps ABSTRACT: Following their great success in solving many computer vision problems, convolutional neural networks (CNNs) have in recent years provided a new end-to-end approach to handwritten Chinese character recognition (HCCR) with very promising results. However, the CNNs proposed so far for HCCR were neither deep enough nor slim enough. We show in this paper that a deeper architecture can benefit HCCR considerably, achieving higher performance while being designed with fewer parameters. We also show that traditional feature extraction methods, such as Gabor or gradient feature maps, are still useful for enhancing the performance of a CNN. We design a streamlined version of GoogLeNet [13], which was originally proposed for image classification with a very deep architecture, for HCCR (denoted HCCR-GoogLeNet). The HCCR-GoogLeNet we used is 19 layers deep but involves only 7.26 million parameters. Experiments were conducted using the ICDAR 2013 offline HCCR competition dataset. We show that, with the proper incorporation of traditional directional feature maps, the proposed single and ensemble HCCR-GoogLeNet models achieve new state-of-the-art recognition accuracies of 96.35% and 96.74%, respectively, outperforming the previous best result by a significant margin.
no_new_dataset
0.947381
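Directional feature maps of the kind mentioned above are commonly built by splitting the image gradient into a fixed number of orientation planes. The NumPy sketch below shows one standard recipe (8 directions with linear weighting between neighbouring bins); it illustrates the general technique, not the exact maps used by the authors.

import numpy as np

def directional_feature_maps(img, n_dirs=8):
    """Decompose image gradients into n_dirs orientation planes; each pixel's
    gradient magnitude is shared linearly between its two nearest bins."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)   # angle in [0, 2*pi)
    pos = ang / (2 * np.pi) * n_dirs              # continuous bin index
    lo = np.floor(pos).astype(int) % n_dirs
    hi = (lo + 1) % n_dirs
    w_hi = pos - np.floor(pos)
    maps = np.zeros((n_dirs,) + img.shape)
    for d in range(n_dirs):
        maps[d] += np.where(lo == d, (1 - w_hi) * mag, 0.0)
        maps[d] += np.where(hi == d, w_hi * mag, 0.0)
    return maps

# toy usage on a random 32x32 "character" image
maps = directional_feature_maps(np.random.default_rng(0).random((32, 32)))
print(maps.shape)   # (8, 32, 32)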
1505.05136
Nicolas Turenne
Nicolas Turenne
A Table-Binning Approach for Visualizing the Past
null
null
null
null
cs.IR cs.DB cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large amounts of data are available due to low-cost and high-capacity data storage equipment. We propose a data exploration/visualization method for tabular, multi-dimensional, time-varying datasets to present selected items in their global context. The approach is simple and uses a rank-based visualization and a pattern-matching functionality based on temporal profiles. Ranking categories can be specified in a flexible way and are used instead of actual values (value reduction into bins), which are then plotted over time in an unevenly quantized representation. Patterns that emerge are matched against a set of eight predefined temporal profiles. A graphical summarization of large-scale temporal data is proposed; its applicability is tested qualitatively on about eight data sets, and the approach is compared to classic line plots and the SAX representation.
[ { "version": "v1", "created": "Mon, 27 Apr 2015 11:15:39 GMT" } ]
2015-05-20T00:00:00
[ [ "Turenne", "Nicolas", "" ] ]
TITLE: A Table-Binning Approach for Visualizing the Past ABSTRACT: Large amounts of data are available due to low-cost and high-capacity data storage equipment. We propose a data exploration/visualization method for tabular, multi-dimensional, time-varying datasets to present selected items in their global context. The approach is simple and uses a rank-based visualization and a pattern-matching functionality based on temporal profiles. Ranking categories can be specified in a flexible way and are used instead of actual values (value reduction into bins), which are then plotted over time in an unevenly quantized representation. Patterns that emerge are matched against a set of eight predefined temporal profiles. A graphical summarization of large-scale temporal data is proposed; its applicability is tested qualitatively on about eight data sets, and the approach is compared to classic line plots and the SAX representation.
no_new_dataset
0.948058
1005.5227
Michael Schreiber
Michael Schreiber
Twenty Hirsch index variants and other indicators giving more or less preference to highly cited papers
19 pages, including 6 tables and 3 figures accepted for publication in Annalen der Physik Berlin
Ann. Phys. (Berlin) 522, 536-554 (2010)
10.1002/andp.201000046
null
physics.soc-ph cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Hirsch index or h-index is widely used to quantify the impact of an individual's scientific research output, determining the highest number h of a scientist's papers that received at least h citations. Several variants of the index have been proposed in order to give more or less preference to highly cited papers. I analyse the citation records of 26 physicists discussing various suggestions, in particular A, e, f, g, h(2), h_w, h_T, \hbar, m, {\pi}, R, s, t, w, and maxprod. The total number of all and of all cited publications as well as the highest and the average number of citations are also compared. Advantages and disadvantages of these indices and indicators are discussed. Correlation coefficients are determined quantifying which indices and indicators yield similar and which yield more deviating rankings of the 26 datasets. For 6 datasets the determination of the indices and indicators is visualized.
[ { "version": "v1", "created": "Fri, 28 May 2010 07:08:09 GMT" } ]
2015-05-19T00:00:00
[ [ "Schreiber", "Michael", "" ] ]
TITLE: Twenty Hirsch index variants and other indicators giving more or less preference to highly cited papers ABSTRACT: The Hirsch index or h-index is widely used to quantify the impact of an individual's scientific research output, determining the highest number h of a scientist's papers that received at least h citations. Several variants of the index have been proposed in order to give more or less preference to highly cited papers. I analyse the citation records of 26 physicists discussing various suggestions, in particular A, e, f, g, h(2), h_w, h_T, \hbar, m, {\pi}, R, s, t, w, and maxprod. The total number of all and of all cited publications as well as the highest and the average number of citations are also compared. Advantages and disadvantages of these indices and indicators are discussed. Correlation coefficients are determined quantifying which indices and indicators yield similar and which yield more deviating rankings of the 26 datasets. For 6 datasets the determination of the indices and indicators is visualized.
no_new_dataset
0.952042
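Since the record above concerns the h-index and its variants, a compact reference implementation of the basic h-index may help fix the definition; the variants discussed in the paper (g, A, R, h(2) and so on) follow similar patterns but are not shown here.

def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# toy usage
print(h_index([10, 8, 5, 4, 3]))   # 4
print(h_index([25, 8, 5, 3, 3]))   # 3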
1006.4828
Vamsi Kundeti
Vamsi Kundeti and Sanguthevar Rajasekaran and Hieu Dinh
An Efficient Algorithm For Chinese Postman Walk on Bi-directed de Bruijn Graphs
null
null
10.1007/978-3-642-17458-2_16
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sequence assembly from short reads is an important problem in biology. It is known that solving the sequence assembly problem exactly on a bi-directed de Bruijn graph or a string graph is intractable. However, finding a Shortest Double-stranded DNA string (SDDNA) containing all the k-long words in the reads seems to be a good heuristic to get close to the original genome. This problem is equivalent to finding a cyclic Chinese Postman (CP) walk on the underlying un-weighted bi-directed de Bruijn graph built from the reads. The Chinese Postman walk Problem (CPP) is solved by reducing it to a general bi-directed flow on this graph, which runs in O(|E|^2 log^2(|V|)) time. In this paper we show that the cyclic CPP on bi-directed graphs can be solved without reducing it to bi-directed flow. We present a Theta(p(|V| + |E|) log(|V|) + (d_max p)^3) time algorithm to solve the cyclic CPP on a weighted bi-directed de Bruijn graph, where p = max{|{v : d_in(v) - d_out(v) > 0}|, |{v : d_in(v) - d_out(v) < 0}|} and d_max = max_v |d_in(v) - d_out(v)|. Our algorithm performs asymptotically better than the bi-directed flow algorithm when the number of imbalanced nodes p is much smaller than the number of nodes in the bi-directed graph. From our experimental results on various datasets, we have noticed that the value of p/|V| lies between 0.08% and 0.13% with 95% probability.
[ { "version": "v1", "created": "Thu, 24 Jun 2010 16:35:47 GMT" } ]
2015-05-19T00:00:00
[ [ "Kundeti", "Vamsi", "" ], [ "Rajasekaran", "Sanguthevar", "" ], [ "Dinh", "Hieu", "" ] ]
TITLE: An Efficient Algorithm For Chinese Postman Walk on Bi-directed de Bruijn Graphs ABSTRACT: Sequence assembly from short reads is an important problem in biology. It is known that solving the sequence assembly problem exactly on a bi-directed de Bruijn graph or a string graph is intractable. However, finding a Shortest Double-stranded DNA string (SDDNA) containing all the k-long words in the reads seems to be a good heuristic to get close to the original genome. This problem is equivalent to finding a cyclic Chinese Postman (CP) walk on the underlying un-weighted bi-directed de Bruijn graph built from the reads. The Chinese Postman walk Problem (CPP) is solved by reducing it to a general bi-directed flow on this graph, which runs in O(|E|^2 log^2(|V|)) time. In this paper we show that the cyclic CPP on bi-directed graphs can be solved without reducing it to bi-directed flow. We present a Theta(p(|V| + |E|) log(|V|) + (d_max p)^3) time algorithm to solve the cyclic CPP on a weighted bi-directed de Bruijn graph, where p = max{|{v : d_in(v) - d_out(v) > 0}|, |{v : d_in(v) - d_out(v) < 0}|} and d_max = max_v |d_in(v) - d_out(v)|. Our algorithm performs asymptotically better than the bi-directed flow algorithm when the number of imbalanced nodes p is much smaller than the number of nodes in the bi-directed graph. From our experimental results on various datasets, we have noticed that the value of p/|V| lies between 0.08% and 0.13% with 95% probability.
no_new_dataset
0.952838
1008.3907
Michael Murphy
J. K. Webb, J. A. King, M. T. Murphy, V. V. Flambaum, R. F. Carswell, M. B. Bainbridge
Indications of a spatial variation of the fine structure constant
5 pages, 5 figures, published on 31st October 2011 in Physical Review Letters
Phys. Rev. Lett., 107, 191101, 2011
10.1103/PhysRevLett.107.191101
null
astro-ph.CO gr-qc hep-ph hep-th nucl-th physics.atom-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We previously reported Keck telescope observations suggesting a smaller value of the fine structure constant, alpha, at high redshift. New Very Large Telescope (VLT) data, probing a different direction in the universe, shows an inverse evolution; alpha increases at high redshift. Although the pattern could be due to as yet undetected systematic effects, with the systematics as presently understood the combined dataset fits a spatial dipole, significant at the 4.2-sigma level, in the direction right ascension 17.5 +/- 0.9 hours, declination -58 +/- 9 degrees. The independent VLT and Keck samples give consistent dipole directions and amplitudes, as do high and low redshift samples. A search for systematics, using observations duplicated at both telescopes, reveals none so far which emulate this result.
[ { "version": "v1", "created": "Mon, 23 Aug 2010 20:00:12 GMT" }, { "version": "v2", "created": "Tue, 1 Nov 2011 03:06:27 GMT" } ]
2015-05-19T00:00:00
[ [ "Webb", "J. K.", "" ], [ "King", "J. A.", "" ], [ "Murphy", "M. T.", "" ], [ "Flambaum", "V. V.", "" ], [ "Carswell", "R. F.", "" ], [ "Bainbridge", "M. B.", "" ] ]
TITLE: Indications of a spatial variation of the fine structure constant ABSTRACT: We previously reported Keck telescope observations suggesting a smaller value of the fine structure constant, alpha, at high redshift. New Very Large Telescope (VLT) data, probing a different direction in the universe, shows an inverse evolution; alpha increases at high redshift. Although the pattern could be due to as yet undetected systematic effects, with the systematics as presently understood the combined dataset fits a spatial dipole, significant at the 4.2-sigma level, in the direction right ascension 17.5 +/- 0.9 hours, declination -58 +/- 9 degrees. The independent VLT and Keck samples give consistent dipole directions and amplitudes, as do high and low redshift samples. A search for systematics, using observations duplicated at both telescopes, reveals none so far which emulate this result.
no_new_dataset
0.930268
1008.3982
Peng Wang
Peng Wang, Ting Lei, Chi Ho Yeung and Bing-hong Wang
Heterogenous Human Dynamics in Intra and Inter-day Time Scale
null
null
10.1209/0295-5075/94/18005
null
physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study two large data sets containing information on two different human behaviors: blog-posting and wiki-revising. In both cases, the interevent time distributions decay as power laws at both the individual and the population level. In contrast to previous studies, we put emphasis on time scales and obtain heterogeneous decay exponents in the intra- and inter-day ranges for the same dataset. Moreover, we observe opposite trends of the exponents in relation to individual $Activity$. Further investigation shows that the presence of intra-day activities masks the correlation between consecutive inter-day activities and leads to an underestimate of $Memory$, which explains the contradictory results in recent empirical studies. Removing the data in the intra-day range reveals the high values of $Memory$ and leads us to convergent results between wiki-revising and blog-posting.
[ { "version": "v1", "created": "Tue, 24 Aug 2010 08:18:47 GMT" } ]
2015-05-19T00:00:00
[ [ "Wang", "Peng", "" ], [ "Lei", "Ting", "" ], [ "Yeung", "Chi Ho", "" ], [ "Wang", "Bing-hong", "" ] ]
TITLE: Heterogenous Human Dynamics in Intra and Inter-day Time Scale ABSTRACT: In this paper, we study two large data sets containing information on two different human behaviors: blog-posting and wiki-revising. In both cases, the interevent time distributions decay as power laws at both the individual and the population level. In contrast to previous studies, we put emphasis on time scales and obtain heterogeneous decay exponents in the intra- and inter-day ranges for the same dataset. Moreover, we observe opposite trends of the exponents in relation to individual $Activity$. Further investigation shows that the presence of intra-day activities masks the correlation between consecutive inter-day activities and leads to an underestimate of $Memory$, which explains the contradictory results in recent empirical studies. Removing the data in the intra-day range reveals the high values of $Memory$ and leads us to convergent results between wiki-revising and blog-posting.
no_new_dataset
0.94079
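The study above characterises interevent time distributions by power-law decay exponents on intra- and inter-day scales. As a generic illustration only (not the authors' estimator), the sketch below computes interevent times from event timestamps and estimates the tail exponent with the standard continuous maximum-likelihood formula alpha = 1 + n / sum(ln(x / x_min)); the choice of x_min is assumed, not optimised.

import numpy as np

def interevent_times(timestamps):
    """Differences between consecutive (sorted) event times."""
    t = np.sort(np.asarray(timestamps, dtype=float))
    return np.diff(t)

def powerlaw_alpha(x, x_min):
    """Continuous MLE for the exponent of P(x) ~ x^(-alpha), x >= x_min."""
    tail = np.asarray(x, dtype=float)
    tail = tail[tail >= x_min]
    return 1.0 + tail.size / np.sum(np.log(tail / x_min))

# toy usage: synthetic event times whose gaps are Pareto-distributed (alpha = 2.5)
rng = np.random.default_rng(0)
timestamps = np.cumsum(rng.pareto(1.5, size=5000) + 1.0)
print(round(powerlaw_alpha(interevent_times(timestamps), x_min=1.0), 2))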
1404.1653
Zi-Ke Zhang Dr.
Lu Yu, Chuang Liu, Zi-Ke Zhang
Multi-Linear Interactive Matrix Factorization
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recommender systems, which can significantly help users find items of interest in the information era, have attracted increasing attention from both the scientific and applied communities. One of the most widely applied recommendation methods is Matrix Factorization (MF). However, most MF-based approaches focus on the user-item rating matrix while ignoring ingredients that may have a significant influence on users' preferences for items. In this paper, we propose a multi-linear interactive MF algorithm (MLIMF) to model the interactions between the users and each event associated with their final decisions. Our model considers not only the user-item rating information but also pairwise interactions based on some empirically supported factors. In addition, we compared the proposed model with three other typical methods: user-based collaborative filtering (UCF), item-based collaborative filtering (ICF) and regularized MF (RMF). Experimental results on two real-world datasets, \emph{MovieLens} 1M and \emph{MovieLens} 100k, show that our method performs much better than the other three methods in recommendation accuracy. This work may shed some light on the in-depth understanding of modeling users' online behaviors and the consequent decisions.
[ { "version": "v1", "created": "Mon, 7 Apr 2014 04:39:14 GMT" }, { "version": "v2", "created": "Mon, 18 May 2015 06:05:02 GMT" } ]
2015-05-19T00:00:00
[ [ "Yu", "Lu", "" ], [ "Liu", "Chuang", "" ], [ "Zhang", "Zi-Ke", "" ] ]
TITLE: Multi-Linear Interactive Matrix Factorization ABSTRACT: Recommender systems, which can significantly help users find items of interest in the information era, have attracted increasing attention from both the scientific and applied communities. One of the most widely applied recommendation methods is Matrix Factorization (MF). However, most MF-based approaches focus on the user-item rating matrix while ignoring ingredients that may have a significant influence on users' preferences for items. In this paper, we propose a multi-linear interactive MF algorithm (MLIMF) to model the interactions between the users and each event associated with their final decisions. Our model considers not only the user-item rating information but also pairwise interactions based on some empirically supported factors. In addition, we compared the proposed model with three other typical methods: user-based collaborative filtering (UCF), item-based collaborative filtering (ICF) and regularized MF (RMF). Experimental results on two real-world datasets, \emph{MovieLens} 1M and \emph{MovieLens} 100k, show that our method performs much better than the other three methods in recommendation accuracy. This work may shed some light on the in-depth understanding of modeling users' online behaviors and the consequent decisions.
no_new_dataset
0.946941
1407.2433
Peter Foster
Peter Foster, Simon Dixon, Anssi Klapuri
Identifying Cover Songs Using Information-Theoretic Measures of Similarity
13 pages, 5 figures, 4 tables. v3: Accepted version
IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 23 no. 6, pp. 993-1005, 2015
10.1109/TASLP.2015.2416655
null
cs.IR cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates methods for quantifying similarity between audio signals, specifically for the task of cover song detection. We consider an information-theoretic approach, where we compute pairwise measures of predictability between time series. We compare discrete-valued approaches operating on quantised audio features, to continuous-valued approaches. In the discrete case, we propose a method for computing the normalised compression distance, where we account for correlation between time series. In the continuous case, we propose to compute information-based measures of similarity as statistics of the prediction error between time series. We evaluate our methods on two cover song identification tasks using a data set comprised of 300 Jazz standards and using the Million Song Dataset. For both datasets, we observe that continuous-valued approaches outperform discrete-valued approaches. We consider approaches to estimating the normalised compression distance (NCD) based on string compression and prediction, where we observe that our proposed normalised compression distance with alignment (NCDA) improves average performance over NCD, for sequential compression algorithms. Finally, we demonstrate that continuous-valued distances may be combined to improve performance with respect to baseline approaches. Using a large-scale filter-and-refine approach, we demonstrate state-of-the-art performance for cover song identification using the Million Song Dataset.
[ { "version": "v1", "created": "Wed, 9 Jul 2014 11:04:15 GMT" }, { "version": "v2", "created": "Thu, 10 Jul 2014 00:53:47 GMT" }, { "version": "v3", "created": "Sun, 17 May 2015 15:53:43 GMT" } ]
2015-05-19T00:00:00
[ [ "Foster", "Peter", "" ], [ "Dixon", "Simon", "" ], [ "Klapuri", "Anssi", "" ] ]
TITLE: Identifying Cover Songs Using Information-Theoretic Measures of Similarity ABSTRACT: This paper investigates methods for quantifying similarity between audio signals, specifically for the task of cover song detection. We consider an information-theoretic approach, where we compute pairwise measures of predictability between time series. We compare discrete-valued approaches operating on quantised audio features, to continuous-valued approaches. In the discrete case, we propose a method for computing the normalised compression distance, where we account for correlation between time series. In the continuous case, we propose to compute information-based measures of similarity as statistics of the prediction error between time series. We evaluate our methods on two cover song identification tasks using a data set comprised of 300 Jazz standards and using the Million Song Dataset. For both datasets, we observe that continuous-valued approaches outperform discrete-valued approaches. We consider approaches to estimating the normalised compression distance (NCD) based on string compression and prediction, where we observe that our proposed normalised compression distance with alignment (NCDA) improves average performance over NCD, for sequential compression algorithms. Finally, we demonstrate that continuous-valued distances may be combined to improve performance with respect to baseline approaches. Using a large-scale filter-and-refine approach, we demonstrate state-of-the-art performance for cover song identification using the Million Song Dataset.
no_new_dataset
0.939526
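Illustrative aside for the normalised compression distance described in the abstract above: a minimal Python sketch using zlib as the compressor and byte strings standing in for quantised audio-feature sequences. The NCDA alignment step and the authors' feature extraction are not reproduced, and the toy sequences are hypothetical.

import zlib

def ncd(x: bytes, y: bytes) -> float:
    # Normalised compression distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy quantised feature sequences (symbols 0-255) standing in for two tracks.
a = bytes([1, 2, 3, 4] * 64)
b = bytes([1, 2, 3, 5] * 64)
print(ncd(a, a))   # close to 0: a sequence compared with itself
print(ncd(a, b))   # larger: related but distinct sequences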
1501.07645
Sachin Talathi
Sachin S. Talathi
Hyper-parameter optimization of Deep Convolutional Networks for object recognition
4 pages, 1 figure, 3 tables, Submitted to ICIP 2015
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, sequential model-based optimization (SMBO) has emerged as a promising hyper-parameter optimization strategy in machine learning. In this work, we investigate SMBO to identify architecture hyper-parameters of deep convolutional networks (DCNs) for object recognition. We propose a simple SMBO strategy that starts from a set of random initial DCN architectures to generate new architectures, which, when trained, perform well on a given dataset. Using the proposed SMBO strategy we are able to identify a number of DCN architectures that produce results comparable to state-of-the-art results on object recognition benchmarks.
[ { "version": "v1", "created": "Fri, 30 Jan 2015 02:08:51 GMT" }, { "version": "v2", "created": "Sun, 17 May 2015 03:32:22 GMT" } ]
2015-05-19T00:00:00
[ [ "Talathi", "Sachin S.", "" ] ]
TITLE: Hyper-parameter optimization of Deep Convolutional Networks for object recognition ABSTRACT: Recently, sequential model-based optimization (SMBO) has emerged as a promising hyper-parameter optimization strategy in machine learning. In this work, we investigate SMBO to identify architecture hyper-parameters of deep convolutional networks (DCNs) for object recognition. We propose a simple SMBO strategy that starts from a set of random initial DCN architectures to generate new architectures, which, when trained, perform well on a given dataset. Using the proposed SMBO strategy we are able to identify a number of DCN architectures that produce results comparable to state-of-the-art results on object recognition benchmarks.
no_new_dataset
0.954052
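Illustrative aside for the SMBO loop sketched in the abstract above: a generic sequential model-based optimisation skeleton in Python with a random-forest surrogate (scikit-learn) over a toy hyper-parameter space. The objective function, the search space and the surrogate choice are assumptions; the paper's actual DCN training and SMBO variant are not reproduced.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def sample_config():
    # Toy "architecture" hyper-parameters: depth, width, log10 learning rate.
    return [int(rng.integers(2, 8)), int(rng.integers(16, 256)), float(rng.uniform(-4, -1))]

def validation_error(cfg):
    # Placeholder objective standing in for "train a DCN, return validation error".
    depth, width, log_lr = cfg
    return (depth - 5) ** 2 + (np.log2(width) - 6) ** 2 + (log_lr + 2.5) ** 2

X = [sample_config() for _ in range(10)]          # random initial architectures
y = [validation_error(c) for c in X]
for _ in range(20):                               # SMBO iterations
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    candidates = [sample_config() for _ in range(200)]
    best = min(candidates, key=lambda c: surrogate.predict([c])[0])
    X.append(best)
    y.append(validation_error(best))
print("best configuration found:", X[int(np.argmin(y))])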
1504.02531
Zhimin Gao
Zhimin Gao, Lei Wang, Luping Zhou, Jianjia Zhang
HEp-2 Cell Image Classification with Deep Convolutional Neural Networks
32 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient Human Epithelial-2 (HEp-2) cell image classification can facilitate the diagnosis of many autoimmune diseases. This paper presents an automatic framework for this classification task, by utilizing the deep convolutional neural networks (CNNs) which have recently attracted intensive attention in visual recognition. This paper elaborates the important components of this framework, discusses multiple key factors that impact the efficiency of training a deep CNN, and systematically compares this framework with the well-established image classification models in the literature. Experiments on benchmark datasets show that i) the proposed framework can effectively outperform existing models by properly applying data augmentation; ii) our CNN-based framework demonstrates excellent adaptability across different datasets, which is highly desirable for classification under varying laboratory settings. Our system is ranked high in the cell image classification competition hosted by ICPR 2014.
[ { "version": "v1", "created": "Fri, 10 Apr 2015 01:58:17 GMT" }, { "version": "v2", "created": "Mon, 18 May 2015 01:12:12 GMT" } ]
2015-05-19T00:00:00
[ [ "Gao", "Zhimin", "" ], [ "Wang", "Lei", "" ], [ "Zhou", "Luping", "" ], [ "Zhang", "Jianjia", "" ] ]
TITLE: HEp-2 Cell Image Classification with Deep Convolutional Neural Networks ABSTRACT: Efficient Human Epithelial-2 (HEp-2) cell image classification can facilitate the diagnosis of many autoimmune diseases. This paper presents an automatic framework for this classification task, by utilizing the deep convolutional neural networks (CNNs) which have recently attracted intensive attention in visual recognition. This paper elaborates the important components of this framework, discusses multiple key factors that impact the efficiency of training a deep CNN, and systematically compares this framework with the well-established image classification models in the literature. Experiments on benchmark datasets show that i) the proposed framework can effectively outperform existing models by properly applying data augmentation; ii) our CNN-based framework demonstrates excellent adaptability across different datasets, which is highly desirable for classification under varying laboratory settings. Our system is ranked high in the cell image classification competition hosted by ICPR 2014.
no_new_dataset
0.952397
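Illustrative aside on the data-augmentation factor highlighted in the abstract above: a minimal numpy sketch of label-preserving flips and 90-degree rotations for cell-image patches. The patch size and the specific augmentations are assumptions, not the paper's recipe.

import numpy as np

def augment(image, rng):
    # Random label-preserving flips and a 90-degree rotation of an HxW patch.
    out = image
    if rng.random() < 0.5:
        out = np.fliplr(out)
    if rng.random() < 0.5:
        out = np.flipud(out)
    return np.rot90(out, k=int(rng.integers(0, 4))).copy()

rng = np.random.default_rng(0)
patch = rng.random((78, 78))                      # hypothetical HEp-2 cell patch
batch = np.stack([augment(patch, rng) for _ in range(8)])
print(batch.shape)                                # (8, 78, 78)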
1504.06236
Arman Shokrollahi
Amir Sheikhahmadi, Mohammad A. Nematbakhsh, and Arman Shokrollahi
Improving detection of influential nodes in complex networks
17 pages, 8 figures, 5 tables; accepted for publication in Physica A: Statistical Mechanics and its Applications
null
10.1016/j.physa.2015.04.035
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently an increasing amount of research has been devoted to the question of how the most influential nodes (seeds) can be found effectively in a complex network. A number of measures have been proposed for this purpose; for instance, the high-degree centrality measure reflects the importance of the network topology and has reasonable runtime performance in finding a set of nodes with the highest degree, but such nodes do not have a satisfactory dissemination potential in the network due to having many common neighbors ($\mbox{CN}^{(1)}$) and common neighbors of neighbors ($\mbox{CN}^{(2)}$). This flaw holds for other measures as well. In this paper, we compare the high-degree centrality measure with other well-known measures using ten datasets in order to find the proportion of common seeds in the seed sets obtained by them. We therefore propose an improved high-degree centrality measure (named DegreeDistance) and refine it to enhance accuracy in two phases, FIDD and SIDD, by putting a threshold on the number of common neighbors between already-selected seed nodes and a non-seed node under investigation as a seed candidate, and by considering the influence score of seed nodes, directly or through their common neighbors, over the non-seed node. To evaluate the accuracy and runtime performance of DegreeDistance, FIDD, and SIDD, they are applied to eight large-scale networks, and it turns out that SIDD dramatically outperforms other well-known measures and is comparatively more accurate in identifying the most influential nodes.
[ { "version": "v1", "created": "Thu, 23 Apr 2015 16:01:35 GMT" }, { "version": "v2", "created": "Fri, 24 Apr 2015 08:10:49 GMT" } ]
2015-05-19T00:00:00
[ [ "Sheikhahmadi", "Amir", "" ], [ "Nematbakhsh", "Mohammad A.", "" ], [ "Shokrollahi", "Arman", "" ] ]
TITLE: Improving detection of influential nodes in complex networks ABSTRACT: Recently an increasing amount of research has been devoted to the question of how the most influential nodes (seeds) can be found effectively in a complex network. A number of measures have been proposed for this purpose; for instance, the high-degree centrality measure reflects the importance of the network topology and has reasonable runtime performance in finding a set of nodes with the highest degree, but such nodes do not have a satisfactory dissemination potential in the network due to having many common neighbors ($\mbox{CN}^{(1)}$) and common neighbors of neighbors ($\mbox{CN}^{(2)}$). This flaw holds for other measures as well. In this paper, we compare the high-degree centrality measure with other well-known measures using ten datasets in order to find the proportion of common seeds in the seed sets obtained by them. We therefore propose an improved high-degree centrality measure (named DegreeDistance) and refine it to enhance accuracy in two phases, FIDD and SIDD, by putting a threshold on the number of common neighbors between already-selected seed nodes and a non-seed node under investigation as a seed candidate, and by considering the influence score of seed nodes, directly or through their common neighbors, over the non-seed node. To evaluate the accuracy and runtime performance of DegreeDistance, FIDD, and SIDD, they are applied to eight large-scale networks, and it turns out that SIDD dramatically outperforms other well-known measures and is comparatively more accurate in identifying the most influential nodes.
no_new_dataset
0.951997
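Illustrative aside for the common-neighbour thresholding idea in the abstract above: a Python/networkx sketch of greedy high-degree seed selection that skips candidates sharing too many common neighbours with already-chosen seeds. This captures only the core intuition, not the actual DegreeDistance/FIDD/SIDD measures; the test graph and the threshold value are hypothetical.

import networkx as nx

def select_seeds(G, k, max_common=2):
    # Greedy high-degree selection; reject candidates that share more than
    # max_common common neighbours with any already-selected seed.
    seeds = []
    for node in sorted(G.nodes, key=G.degree, reverse=True):
        if all(sum(1 for _ in nx.common_neighbors(G, node, s)) <= max_common for s in seeds):
            seeds.append(node)
        if len(seeds) == k:
            break
    return seeds

G = nx.barabasi_albert_graph(1000, 3, seed=0)     # hypothetical test network
print(select_seeds(G, 10))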
1505.04243
Paul Grigas
Robert M. Freund, Paul Grigas, Rahul Mazumder
A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives
null
null
null
null
math.ST cs.LG math.OC stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we analyze boosting algorithms in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FS$_\varepsilon$) and least squares boosting (LS-Boost($\varepsilon$)), can be viewed as subgradient descent to minimize the loss function defined as the maximum absolute correlation between the features and residuals. We also propose a modification of FS$_\varepsilon$ that yields an algorithm for the Lasso, and that may be easily extended to an algorithm that computes the Lasso path for different values of the regularization parameter. Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function. We derive novel, comprehensive computational guarantees for several boosting algorithms in linear regression (including LS-Boost($\varepsilon$) and FS$_\varepsilon$) by using techniques of modern first-order methods in convex optimization. Our computational guarantees inform us about the statistical properties of boosting algorithms. In particular they provide, for the first time, a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm with a prespecified learning rate for a fixed but arbitrary number of iterations, for any dataset.
[ { "version": "v1", "created": "Sat, 16 May 2015 04:23:08 GMT" } ]
2015-05-19T00:00:00
[ [ "Freund", "Robert M.", "" ], [ "Grigas", "Paul", "" ], [ "Mazumder", "Rahul", "" ] ]
TITLE: A New Perspective on Boosting in Linear Regression via Subgradient Optimization and Relatives ABSTRACT: In this paper we analyze boosting algorithms in linear regression from a new perspective: that of modern first-order methods in convex optimization. We show that classic boosting algorithms in linear regression, namely the incremental forward stagewise algorithm (FS$_\varepsilon$) and least squares boosting (LS-Boost($\varepsilon$)), can be viewed as subgradient descent to minimize the loss function defined as the maximum absolute correlation between the features and residuals. We also propose a modification of FS$_\varepsilon$ that yields an algorithm for the Lasso, and that may be easily extended to an algorithm that computes the Lasso path for different values of the regularization parameter. Furthermore, we show that these new algorithms for the Lasso may also be interpreted as the same master algorithm (subgradient descent), applied to a regularized version of the maximum absolute correlation loss function. We derive novel, comprehensive computational guarantees for several boosting algorithms in linear regression (including LS-Boost($\varepsilon$) and FS$_\varepsilon$) by using techniques of modern first-order methods in convex optimization. Our computational guarantees inform us about the statistical properties of boosting algorithms. In particular they provide, for the first time, a precise theoretical description of the amount of data-fidelity and regularization imparted by running a boosting algorithm with a prespecified learning rate for a fixed but arbitrary number of iterations, for any dataset.
no_new_dataset
0.944536
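Illustrative aside for the boosting-as-subgradient-descent view in the abstract above: a numpy sketch of the classic LS-Boost(eps) loop, which at each step regresses the current residual on the single most correlated feature and takes a small step eps. It shows the algorithm being analysed, not the paper's theoretical guarantees; data and parameters are toy values.

import numpy as np

def ls_boost(X, y, eps=0.1, n_iter=500):
    # LS-Boost(eps): repeatedly fit the residual on the most correlated feature
    # and move the corresponding coefficient by a step shrunk by eps.
    n, p = X.shape
    beta, r = np.zeros(p), y.astype(float).copy()
    for _ in range(n_iter):
        corr = X.T @ r                            # inner products with the residual
        j = int(np.argmax(np.abs(corr)))
        step = corr[j] / (X[:, j] @ X[:, j])      # univariate least-squares coefficient
        beta[j] += eps * step
        r -= eps * step * X[:, j]
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(200)
print(np.round(ls_boost(X, y), 2))                # large weights on features 0 and 3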
1505.04286
Harry Commin PhD
Harry Commin
Robust Real-time Extraction of Fiducial Facial Feature Points using Haar-like Features
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we explore methods of robustly extracting fiducial facial feature points - an important process for numerous facial image processing tasks. We consider various methods to first detect face, then facial features and finally salient facial feature points. Colour-based models are analysed and their overall unsuitability for this task is summarised. The bulk of the report is then dedicated to proposing a learning-based method centred on the Viola-Jones algorithm. The specific difficulties and considerations relating to feature point detection are laid out in this context and a novel approach is established to address these issues. On a sequence of clear and unobstructed face images, our proposed system achieves average detection rates of over 90%. Then, using a more varied sample dataset, we identify some possible areas for future development of our system.
[ { "version": "v1", "created": "Sat, 16 May 2015 15:41:21 GMT" } ]
2015-05-19T00:00:00
[ [ "Commin", "Harry", "" ] ]
TITLE: Robust Real-time Extraction of Fiducial Facial Feature Points using Haar-like Features ABSTRACT: In this paper, we explore methods of robustly extracting fiducial facial feature points - an important process for numerous facial image processing tasks. We consider various methods to first detect face, then facial features and finally salient facial feature points. Colour-based models are analysed and their overall unsuitability for this task is summarised. The bulk of the report is then dedicated to proposing a learning-based method centred on the Viola-Jones algorithm. The specific difficulties and considerations relating to feature point detection are laid out in this context and a novel approach is established to address these issues. On a sequence of clear and unobstructed face images, our proposed system achieves average detection rates of over 90%. Then, using a more varied sample dataset, we identify some possible areas for future development of our system.
no_new_dataset
0.945651
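Illustrative aside for the Viola-Jones/Haar-feature pipeline discussed in the abstract above: a standard OpenCV cascade-detection sketch (face first, then eyes inside each face region). The input file name is hypothetical, and the paper's feature-point-specific training and post-processing are not reproduced.

import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("face.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Detect faces first, then search for eyes only inside each detected face.
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    roi = gray[y:y + h, x:x + w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
        cv2.rectangle(img, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
cv2.imwrite("face_annotated.jpg", img)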
1505.04366
Seunghoon Hong
Hyeonwoo Noh, Seunghoon Hong, Bohyung Han
Learning Deconvolution Network for Semantic Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained with no external data through ensemble with the fully convolutional network.
[ { "version": "v1", "created": "Sun, 17 May 2015 07:33:28 GMT" } ]
2015-05-19T00:00:00
[ [ "Noh", "Hyeonwoo", "" ], [ "Hong", "Seunghoon", "" ], [ "Han", "Bohyung", "" ] ]
TITLE: Learning Deconvolution Network for Semantic Segmentation ABSTRACT: We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5%) among the methods trained with no external data through ensemble with the fully convolutional network.
no_new_dataset
0.951459
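Illustrative aside for the unpooling/deconvolution idea in the abstract above: a PyTorch sketch of one convolution/pooling stage paired with its mirror unpooling/transposed-convolution stage, showing how pooling switch indices restore spatial detail. Channel counts, the 21-class output and the input size are assumptions; the full VGG-16-based network of the paper is not reproduced.

import torch
import torch.nn as nn

conv = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
pool = nn.MaxPool2d(2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(2, stride=2)
deconv = nn.Sequential(nn.ConvTranspose2d(64, 21, 3, padding=1), nn.ReLU())

x = torch.randn(1, 3, 224, 224)
feat = conv(x)
pooled, indices = pool(feat)          # keep switch locations for unpooling
unpooled = unpool(pooled, indices)    # place activations back at the switches
score_map = deconv(unpooled)          # per-pixel class scores (e.g. 21 classes)
print(score_map.shape)                # torch.Size([1, 21, 224, 224])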
1505.04427
Zhenzhong Lan
Zhenzhong Lan, Dezhong Yao, Ming Lin, Shoou-I Yu, Alexander Hauptmann
The Best of Both Worlds: Combining Data-independent and Data-driven Approaches for Action Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the success of data-driven convolutional neural networks (CNNs) in object recognition on static images, researchers are working hard towards developing CNN equivalents for learning video features. However, learning video features globally has proven to be quite a challenge due to its high dimensionality, the lack of labelled data and the difficulty in processing large-scale video data. Therefore, we propose to leverage effective techniques from both data-driven and data-independent approaches to improve action recognition systems. Our contribution is three-fold. First, we propose a two-stream Stacked Convolutional Independent Subspace Analysis (ConvISA) architecture to show that unsupervised learning methods can significantly boost the performance of traditional local features extracted from data-independent models. Second, we demonstrate that by learning on video volumes detected by Improved Dense Trajectory (IDT), we can seamlessly combine our novel local descriptors with hand-crafted descriptors. Thus we can utilize available feature enhancing techniques developed for hand-crafted descriptors. Finally, similar to the multi-class classification framework in CNNs, we propose a training-free re-ranking technique that exploits the relationship among action classes to improve the overall performance. Our experimental results on four benchmark action recognition datasets show significantly improved performance.
[ { "version": "v1", "created": "Sun, 17 May 2015 17:54:38 GMT" } ]
2015-05-19T00:00:00
[ [ "Lan", "Zhenzhong", "" ], [ "Yao", "Dezhong", "" ], [ "Lin", "Ming", "" ], [ "Yu", "Shoou-I", "" ], [ "Hauptmann", "Alexander", "" ] ]
TITLE: The Best of Both Worlds: Combining Data-independent and Data-driven Approaches for Action Recognition ABSTRACT: Motivated by the success of data-driven convolutional neural networks (CNNs) in object recognition on static images, researchers are working hard towards developing CNN equivalents for learning video features. However, learning video features globally has proven to be quite a challenge due to its high dimensionality, the lack of labelled data and the difficulty in processing large-scale video data. Therefore, we propose to leverage effective techniques from both data-driven and data-independent approaches to improve action recognition systems. Our contribution is three-fold. First, we propose a two-stream Stacked Convolutional Independent Subspace Analysis (ConvISA) architecture to show that unsupervised learning methods can significantly boost the performance of traditional local features extracted from data-independent models. Second, we demonstrate that by learning on video volumes detected by Improved Dense Trajectory (IDT), we can seamlessly combine our novel local descriptors with hand-crafted descriptors. Thus we can utilize available feature enhancing techniques developed for hand-crafted descriptors. Finally, similar to the multi-class classification framework in CNNs, we propose a training-free re-ranking technique that exploits the relationship among action classes to improve the overall performance. Our experimental results on four benchmark action recognition datasets show significantly improved performance.
no_new_dataset
0.948728
1505.04474
Saurabh Gupta
Saurabh Gupta and Jitendra Malik
Visual Semantic Role Labeling
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we introduce the problem of Visual Semantic Role Labeling: given an image we want to detect people doing actions and localize the objects of interaction. Classical approaches to action recognition either study the task of action classification at the image or video clip level or at best produce a bounding box around the person doing the action. We believe such an output is inadequate and a complete understanding can only come when we are able to associate objects in the scene to the different semantic roles of the action. To enable progress towards this goal, we annotate a dataset of 16K people instances in 10K images with actions they are doing and associate objects in the scene with different semantic roles for each action. Finally, we provide a set of baseline algorithms for this task and analyze error modes providing directions for future work.
[ { "version": "v1", "created": "Sun, 17 May 2015 23:21:35 GMT" } ]
2015-05-19T00:00:00
[ [ "Gupta", "Saurabh", "" ], [ "Malik", "Jitendra", "" ] ]
TITLE: Visual Semantic Role Labeling ABSTRACT: In this paper we introduce the problem of Visual Semantic Role Labeling: given an image we want to detect people doing actions and localize the objects of interaction. Classical approaches to action recognition either study the task of action classification at the image or video clip level or at best produce a bounding box around the person doing the action. We believe such an output is inadequate and a complete understanding can only come when we are able to associate objects in the scene to the different semantic roles of the action. To enable progress towards this goal, we annotate a dataset of 16K people instances in 10K images with actions they are doing and associate objects in the scene with different semantic roles for each action. Finally, we provide a set of baseline algorithms for this task and analyze error modes providing directions for future work.
no_new_dataset
0.927166
1505.04636
Alex Smola J
Mu Li, Dave G. Andersen, Alexander J. Smola
Graph Partitioning via Parallel Submodular Approximation to Accelerate Distributed Machine Learning
null
null
null
null
cs.DC cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed computing excels at processing large scale data, but the communication cost for synchronizing the shared parameters may slow down the overall performance. Fortunately, the interactions between parameters and data in many problems are sparse, which admits an efficient partition that reduces the communication overhead. In this paper, we formulate data placement as a graph partitioning problem. We propose a distributed partitioning algorithm with theoretical guarantees, provide a highly efficient implementation, and demonstrate its promising results on both text datasets and social networks. We show that the proposed algorithm leads to a 1.6x speedup of a state-of-the-art distributed machine learning system by eliminating 90\% of the network communication.
[ { "version": "v1", "created": "Mon, 18 May 2015 13:43:46 GMT" } ]
2015-05-19T00:00:00
[ [ "Li", "Mu", "" ], [ "Andersen", "Dave G.", "" ], [ "Smola", "Alexander J.", "" ] ]
TITLE: Graph Partitioning via Parallel Submodular Approximation to Accelerate Distributed Machine Learning ABSTRACT: Distributed computing excels at processing large scale data, but the communication cost for synchronizing the shared parameters may slow down the overall performance. Fortunately, the interactions between parameters and data in many problems are sparse, which admits an efficient partition that reduces the communication overhead. In this paper, we formulate data placement as a graph partitioning problem. We propose a distributed partitioning algorithm with theoretical guarantees, provide a highly efficient implementation, and demonstrate its promising results on both text datasets and social networks. We show that the proposed algorithm leads to a 1.6x speedup of a state-of-the-art distributed machine learning system by eliminating 90\% of the network communication.
no_new_dataset
0.945601
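Illustrative aside for the data-placement-as-graph-partitioning formulation in the abstract above: a simple greedy balanced partitioner in Python/networkx that favours parts already containing a node's neighbours. This is only a stand-in for the idea of cutting few edges under balance constraints; it does not implement the paper's parallel submodular approximation, and the graph and parameters are hypothetical.

import networkx as nx

def greedy_partition(G, k):
    # Assign nodes (highest degree first) to the part with the most
    # already-placed neighbours, subject to a per-part capacity.
    cap = (G.number_of_nodes() + k - 1) // k
    parts = {i: set() for i in range(k)}
    assign = {}
    for node in sorted(G.nodes, key=G.degree, reverse=True):
        candidates = [i for i in range(k) if len(parts[i]) < cap]
        best = max(candidates,
                   key=lambda i: sum(1 for nb in G[node] if assign.get(nb) == i)
                                 - 0.01 * len(parts[i]))
        parts[best].add(node)
        assign[node] = best
    cut = sum(1 for u, v in G.edges if assign[u] != assign[v])
    return parts, cut

G = nx.powerlaw_cluster_graph(2000, 3, 0.1, seed=0)
parts, cut = greedy_partition(G, 4)
print([len(p) for p in parts.values()], "edge cut:", cut)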
1002.0157
Valerio Lucarini
Valerio Lucarini, Klaus Fraedrich, Francesco Ragone
New Results on the Thermodynamical Properties of the Climate System
56 pages, 6 Figure, 4 Tables. Final version with additional theoretical discussion, improved analysis of IPCC models. Accepted in J. Atmos. Sci
null
10.1175/2011JAS3713.1
null
physics.ao-ph astro-ph.EP cond-mat.stat-mech physics.class-ph physics.comp-ph physics.flu-dyn physics.geo-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we exploit two equivalent formulations of the average rate of material entropy production in a planetary system to propose an approximate splitting between contributions due to vertical and eminently horizontal processes. Our approach is based only upon 2D radiative fields at the surface and at the top of atmosphere of a general planetary body. Using 2D fields at the top of atmosphere alone, we derive lower bounds to the rate of material entropy production and to the intensity of the Lorenz energy cycle. By introducing a measure of the efficiency of the planetary system with respect to horizontal thermodynamical processes, we provide insight on a previous intuition on the possibility of defining a baroclinic heat engine extracting work from the meridional heat flux. The approximate formula of the material entropy production is verified and used for studying the global thermodynamic properties of climate models (CMs) included in the PCMDI/CMIP3 dataset in pre-industrial climate conditions. It is found that about 90% of the material entropy production is due to vertical processes such as convection, whereas the large scale meridional heat transport contributes only about 10%. The total material entropy production is typically 55 mW K^-1 m^-2, with discrepancies of the order of 5% and CMs' baroclinic efficiencies are clustered around 0.055. When looking at the variability and co-variability of the considered thermodynamical quantities, the agreement among CMs is worse, suggesting that the description of feedbacks is more uncertain.
[ { "version": "v1", "created": "Sun, 31 Jan 2010 20:51:03 GMT" }, { "version": "v2", "created": "Mon, 15 Feb 2010 15:37:11 GMT" }, { "version": "v3", "created": "Wed, 3 Nov 2010 13:45:23 GMT" }, { "version": "v4", "created": "Tue, 7 Jun 2011 06:57:00 GMT" } ]
2015-05-18T00:00:00
[ [ "Lucarini", "Valerio", "" ], [ "Fraedrich", "Klaus", "" ], [ "Ragone", "Francesco", "" ] ]
TITLE: New Results on the Thermodynamical Properties of the Climate System ABSTRACT: In this paper we exploit two equivalent formulations of the average rate of material entropy production in a planetary system to propose an approximate splitting between contributions due to vertical and eminently horizontal processes. Our approach is based only upon 2D radiative fields at the surface and at the top of atmosphere of a general planetary body. Using 2D fields at the top of atmosphere alone, we derive lower bounds to the rate of material entropy production and to the intensity of the Lorenz energy cycle. By introducing a measure of the efficiency of the planetary system with respect to horizontal thermodynamical processes, we provide insight on a previous intuition on the possibility of defining a baroclinic heat engine extracting work from the meridional heat flux. The approximate formula of the material entropy production is verified and used for studying the global thermodynamic properties of climate models (CMs) included in the PCMDI/CMIP3 dataset in pre-industrial climate conditions. It is found that about 90% of the material entropy production is due to vertical processes such as convection, whereas the large scale meridional heat transport contributes only about 10%. The total material entropy production is typically 55 mW K^-1 m^-2, with discrepancies of the order of 5% and CMs' baroclinic efficiencies are clustered around 0.055. When looking at the variability and co-variability of the considered thermodynamical quantities, the agreement among CMs is worse, suggesting that the description of feedbacks is more uncertain.
no_new_dataset
0.948965
1003.0368
Ana M Mancho
Carolina Mendoza, Ana M Mancho
The hidden geometry of ocean flows
4 pages, 4 figures
Physical Review Letters 105 (3) 038501 (2010)
10.1103/PhysRevLett.105.038501
null
nlin.CD physics.ao-ph physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new global Lagrangian descriptor that is applied to flows with general time dependence (altimetric datasets). It succeeds in detecting simultaneously, with great accuracy, invariant manifolds, hyperbolic and non-hyperbolic flow regions.
[ { "version": "v1", "created": "Mon, 1 Mar 2010 15:05:17 GMT" }, { "version": "v2", "created": "Thu, 15 Jul 2010 15:03:00 GMT" } ]
2015-05-18T00:00:00
[ [ "Mendoza", "Carolina", "" ], [ "Mancho", "Ana M", "" ] ]
TITLE: The hidden geometry of ocean flows ABSTRACT: We introduce a new global Lagrangian descriptor that is applied to flows with general time dependence (altimetric datasets). It succeeds in detecting simultaneously, with great accuracy, invariant manifolds, hyperbolic and non-hyperbolic flow regions.
no_new_dataset
0.952086
1003.0533
Luca Sorriso-Valvo
V. Carbone, R. Marino, L. Sorriso-Valvo, A. Noullez, R. Bruno
Scaling Laws of Turbulence and Heating of Fast Solar Wind: The Role of Density Fluctuations
null
Phys. Rev. Lett. 103, 061102, 2009
10.1103/PhysRevLett.103.061102
null
physics.space-ph physics.flu-dyn physics.plasm-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incompressible and isotropic magnetohydrodynamic turbulence in plasmas can be described by an exact relation for the energy flux through the scales. This Yaglom-like scaling law has been recently observed in the solar wind above the solar poles, as sampled by the Ulysses spacecraft, where the turbulence is in an Alfv\'enic state. An analogous phenomenological scaling law, suitably modified to take into account compressible fluctuations, is observed more frequently in the same dataset. Large scale density fluctuations, despite their low amplitude, thus play a crucial role in the basic scaling properties of turbulence. The turbulent cascade rate in the compressive case can moreover supply the energy dissipation needed to account for the local heating of the non-adiabatic solar wind.
[ { "version": "v1", "created": "Tue, 2 Mar 2010 09:23:20 GMT" } ]
2015-05-18T00:00:00
[ [ "Carbone", "V.", "" ], [ "Marino", "R.", "" ], [ "Sorriso-Valvo", "L.", "" ], [ "Noullez", "A.", "" ], [ "Bruno", "R.", "" ] ]
TITLE: Scaling Laws of Turbulence and Heating of Fast Solar Wind: The Role of Density Fluctuations ABSTRACT: Incompressible and isotropic magnetohydrodynamic turbulence in plasmas can be described by an exact relation for the energy flux through the scales. This Yaglom-like scaling law has been recently observed in the solar wind above the solar poles, as sampled by the Ulysses spacecraft, where the turbulence is in an Alfv\'enic state. An analogous phenomenological scaling law, suitably modified to take into account compressible fluctuations, is observed more frequently in the same dataset. Large scale density fluctuations, despite their low amplitude, thus play a crucial role in the basic scaling properties of turbulence. The turbulent cascade rate in the compressive case can moreover supply the energy dissipation needed to account for the local heating of the non-adiabatic solar wind.
no_new_dataset
0.949949
1003.1931
Zi-Ke Zhang Mr.
Zi-Ke Zhang, Chuang Liu
Hypergraph model of social tagging networks
7 pages,7 figures, 32 references
J. Stat. Mech. (2010) P100005
10.1088/1742-5468/2010/10/P10005
null
physics.soc-ph cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The past few years have witnessed the great success of a new family of paradigms, so-called folksonomy, which allows users to freely associate tags to resources and efficiently manage them. In order to uncover the underlying structures and user behaviors in folksonomy, in this paper, we propose an evolutionary hypergraph model to explain the emerging statistical properties. The present model introduces a novel mechanism whereby one can not only assign tags to resources, but also retrieve resources via collaborative tags. We then compare the model with a real-world dataset: \emph{Del.icio.us}. Indeed, the present model shows considerable agreement with the empirical data in the following aspects: power-law hyperdegree distributions, negative correlation between clustering coefficients and hyperdegrees, and small average distances. Furthermore, the model indicates that most tagging behaviors are motivated by labeling tags to resources, and tags play a significant role in effectively retrieving interesting resources and making acquaintance with congenial friends. The proposed model may shed some light on the in-depth understanding of the structure and function of folksonomy.
[ { "version": "v1", "created": "Tue, 9 Mar 2010 17:03:41 GMT" } ]
2015-05-18T00:00:00
[ [ "Zhang", "Zi-Ke", "" ], [ "Liu", "Chuang", "" ] ]
TITLE: Hypergraph model of social tagging networks ABSTRACT: The past few years have witnessed the great success of a new family of paradigms, so-called folksonomy, which allows users to freely associate tags to resources and efficiently manage them. In order to uncover the underlying structures and user behaviors in folksonomy, in this paper, we propose an evolutionary hypergraph model to explain the emerging statistical properties. The present model introduces a novel mechanism whereby one can not only assign tags to resources, but also retrieve resources via collaborative tags. We then compare the model with a real-world dataset: \emph{Del.icio.us}. Indeed, the present model shows considerable agreement with the empirical data in the following aspects: power-law hyperdegree distributions, negative correlation between clustering coefficients and hyperdegrees, and small average distances. Furthermore, the model indicates that most tagging behaviors are motivated by labeling tags to resources, and tags play a significant role in effectively retrieving interesting resources and making acquaintance with congenial friends. The proposed model may shed some light on the in-depth understanding of the structure and function of folksonomy.
no_new_dataset
0.928797
1306.1461
Bob Sturm
Bob L. Sturm
The GTZAN dataset: Its contents, its faults, their effects on evaluation, and its future use
29 pages, 7 figures, 6 tables, 128 references
null
10.1080/09298215.2014.894533
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN, but to use it with consideration of its contents.
[ { "version": "v1", "created": "Thu, 6 Jun 2013 16:30:44 GMT" }, { "version": "v2", "created": "Fri, 7 Jun 2013 16:57:39 GMT" } ]
2015-05-18T00:00:00
[ [ "Sturm", "Bob L.", "" ] ]
TITLE: The GTZAN dataset: Its contents, its faults, their effects on evaluation, and its future use ABSTRACT: The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN, but to use it with consideration of its contents.
no_new_dataset
0.919931
1505.03358
Miriam Redi
Rossano Schifanella, Miriam Redi, Luca Aiello
An Image is Worth More than a Thousand Favorites: Surfacing the Hidden Beauty of Flickr Pictures
ICWSM 2015
null
null
null
cs.SI cs.CV cs.CY cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamics of attention in social media tend to obey power laws. Attention concentrates on a relatively small number of popular items, neglecting the vast majority of content produced by the crowd. Although popularity can be an indication of the perceived value of an item within its community, previous research has hinted at the fact that popularity is distinct from intrinsic quality. As a result, content with low visibility but high quality lurks in the tail of the popularity distribution. This phenomenon can be particularly evident in the case of photo-sharing communities, where valuable photographers who are not highly engaged in online social interactions contribute with high-quality pictures that remain unseen. We propose to use a computer vision method to surface beautiful pictures from the immense pool of near-zero-popularity items, and we test it on a large dataset of creative-commons photos on Flickr. By gathering a large crowdsourced ground truth of aesthetics scores for Flickr images, we show that our method retrieves photos whose median perceived beauty score is equal to that of the most popular ones, and whose average is lower by only 1.5%.
[ { "version": "v1", "created": "Wed, 13 May 2015 12:40:24 GMT" }, { "version": "v2", "created": "Fri, 15 May 2015 10:05:34 GMT" } ]
2015-05-18T00:00:00
[ [ "Schifanella", "Rossano", "" ], [ "Redi", "Miriam", "" ], [ "Aiello", "Luca", "" ] ]
TITLE: An Image is Worth More than a Thousand Favorites: Surfacing the Hidden Beauty of Flickr Pictures ABSTRACT: The dynamics of attention in social media tend to obey power laws. Attention concentrates on a relatively small number of popular items, neglecting the vast majority of content produced by the crowd. Although popularity can be an indication of the perceived value of an item within its community, previous research has hinted at the fact that popularity is distinct from intrinsic quality. As a result, content with low visibility but high quality lurks in the tail of the popularity distribution. This phenomenon can be particularly evident in the case of photo-sharing communities, where valuable photographers who are not highly engaged in online social interactions contribute with high-quality pictures that remain unseen. We propose to use a computer vision method to surface beautiful pictures from the immense pool of near-zero-popularity items, and we test it on a large dataset of creative-commons photos on Flickr. By gathering a large crowdsourced ground truth of aesthetics scores for Flickr images, we show that our method retrieves photos whose median perceived beauty score is equal to that of the most popular ones, and whose average is lower by only 1.5%.
no_new_dataset
0.93233
1505.03873
Kevin Tang
Kevin Tang and Manohar Paluri and Li Fei-Fei and Rob Fergus and Lubomir Bourdev
Improving Image Classification with Location Context
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the widespread availability of cellphones and cameras that have GPS capabilities, it is common for images being uploaded to the Internet today to have GPS coordinates associated with them. In addition to research that tries to predict GPS coordinates from visual features, this also opens up the door to problems that are conditioned on the availability of GPS coordinates. In this work, we tackle the problem of performing image classification with location context, in which we are given the GPS coordinates for images in both the train and test phases. We explore different ways of encoding and extracting features from the GPS coordinates, and show how to naturally incorporate these features into a Convolutional Neural Network (CNN), the current state-of-the-art for most image classification and recognition problems. We also show how it is possible to simultaneously learn the optimal pooling radii for a subset of our features within the CNN framework. To evaluate our model and to help promote research in this area, we identify a set of location-sensitive concepts and annotate a subset of the Yahoo Flickr Creative Commons 100M dataset that has GPS coordinates with these concepts, which we make publicly available. By leveraging location context, we are able to achieve almost a 7% gain in mean average precision.
[ { "version": "v1", "created": "Thu, 14 May 2015 20:13:34 GMT" } ]
2015-05-18T00:00:00
[ [ "Tang", "Kevin", "" ], [ "Paluri", "Manohar", "" ], [ "Fei-Fei", "Li", "" ], [ "Fergus", "Rob", "" ], [ "Bourdev", "Lubomir", "" ] ]
TITLE: Improving Image Classification with Location Context ABSTRACT: With the widespread availability of cellphones and cameras that have GPS capabilities, it is common for images being uploaded to the Internet today to have GPS coordinates associated with them. In addition to research that tries to predict GPS coordinates from visual features, this also opens up the door to problems that are conditioned on the availability of GPS coordinates. In this work, we tackle the problem of performing image classification with location context, in which we are given the GPS coordinates for images in both the train and test phases. We explore different ways of encoding and extracting features from the GPS coordinates, and show how to naturally incorporate these features into a Convolutional Neural Network (CNN), the current state-of-the-art for most image classification and recognition problems. We also show how it is possible to simultaneously learn the optimal pooling radii for a subset of our features within the CNN framework. To evaluate our model and to help promote research in this area, we identify a set of location-sensitive concepts and annotate a subset of the Yahoo Flickr Creative Commons 100M dataset that has GPS coordinates with these concepts, which we make publicly available. By leveraging location context, we are able to achieve almost a 7% gain in mean average precision.
no_new_dataset
0.946794
1503.03957
Prabhjot Singh
Prabhjot Singh, Sumit Dhawan, Shubham Agarwal, Narina Thakur
Implementation of an efficient Fuzzy Logic based Information Retrieval System
arXiv admin note: substantial text overlap with http://ntz-develop.blogspot.in/ , http://www.micsymposium.org/mics2012/submissions/mics2012_submission_8.pdf , http://www.slideshare.net/JeffreyStricklandPhD/predictive-modeling-and-analytics-selectchapters-41304405 by other authors
null
null
null
cs.IR
http://creativecommons.org/licenses/by/3.0/
This paper presents the implementation of an efficient Information Retrieval (IR) system that computes the similarity between a dataset and a query using Fuzzy Logic. The TREC dataset has been used for this purpose. The dataset is parsed to generate a keyword index, which is used for the similarity comparison with the user query. Each query is assigned a score value based on its fuzzy similarity with the index keywords. The relevant documents are retrieved based on the score value. The performance and accuracy of the proposed fuzzy similarity model are compared with the Cosine similarity model using Precision-Recall curves. The results demonstrate the superiority of the Fuzzy Similarity based IR system.
[ { "version": "v1", "created": "Fri, 13 Mar 2015 05:21:02 GMT" } ]
2015-05-15T00:00:00
[ [ "Singh", "Prabhjot", "" ], [ "Dhawan", "Sumit", "" ], [ "Agarwal", "Shubham", "" ], [ "Thakur", "Narina", "" ] ]
TITLE: Implementation of an efficient Fuzzy Logic based Information Retrieval System ABSTRACT: This paper presents the implementation of an efficient Information Retrieval (IR) system that computes the similarity between a dataset and a query using Fuzzy Logic. The TREC dataset has been used for this purpose. The dataset is parsed to generate a keyword index, which is used for the similarity comparison with the user query. Each query is assigned a score value based on its fuzzy similarity with the index keywords. The relevant documents are retrieved based on the score value. The performance and accuracy of the proposed fuzzy similarity model are compared with the Cosine similarity model using Precision-Recall curves. The results demonstrate the superiority of the Fuzzy Similarity based IR system.
no_new_dataset
0.949482
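Illustrative aside for the fuzzy query-to-index scoring described in the abstract above: a minimal Python sketch that scores documents by the best fuzzy match (difflib ratio) of each query term against the document's index terms. The membership function, threshold and toy index are assumptions; the paper's TREC pipeline and precision-recall evaluation are not reproduced.

import difflib

def fuzzy_score(query, document_terms, threshold=0.7):
    # Sum, over query terms, the best fuzzy similarity against the index terms,
    # counting only matches above the membership threshold.
    score = 0.0
    for q in query.lower().split():
        best = max((difflib.SequenceMatcher(None, q, t).ratio() for t in document_terms),
                   default=0.0)
        if best >= threshold:
            score += best
    return score

index = {
    "doc1": ["information", "retrieval", "fuzzy", "logic"],
    "doc2": ["image", "classification", "network"],
}
query = "fuzzy informaton retreival"              # misspellings matched fuzzily
ranked = sorted(index, key=lambda d: fuzzy_score(query, index[d]), reverse=True)
print(ranked)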
1504.05457
Yury Kashnitsky
Yury Kashnitsky, Sergei O. Kuznetsov
Graphlet-based lazy associative graph classification
This paper has been withdrawn by the author due to the incomplete set of necessary definitions and experiments
null
null
null
cs.DS cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper addresses the graph classification problem and introduces a modification of the lazy associative classification method to efficiently handle intersections of graphs. Graph intersections are approximated with all common subgraphs up to a fixed size similarly to what is done with graphlet kernels. We explain the idea of the algorithm with a toy example and describe our experiments with a predictive toxicology dataset.
[ { "version": "v1", "created": "Tue, 21 Apr 2015 15:12:45 GMT" }, { "version": "v2", "created": "Wed, 13 May 2015 20:46:47 GMT" } ]
2015-05-15T00:00:00
[ [ "Kashnitsky", "Yury", "" ], [ "Kuznetsov", "Sergei O.", "" ] ]
TITLE: Graphlet-based lazy associative graph classification ABSTRACT: The paper addresses the graph classification problem and introduces a modification of the lazy associative classification method to efficiently handle intersections of graphs. Graph intersections are approximated with all common subgraphs up to a fixed size similarly to what is done with graphlet kernels. We explain the idea of the algorithm with a toy example and describe our experiments with a predictive toxicology dataset.
no_new_dataset
0.940953
1505.03560
Sebastiano Stramaglia
Ibai Diez, Asier Erramuzpe, Inaki Escudero, Beatriz Mateos, Alberto Cabrera, Daniele Marinazzo, Ernesto J. Sanz-Arigita, Sebastiano Stramaglia and Jesus M. Cortes
Information flow between resting state networks
47 pages, 5 figures, 4 tables, 3 supplementary figures. Accepted for publication in Brain Connectivity in its current form
null
null
null
q-bio.NC physics.data-an q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The resting brain dynamics self-organizes into a finite number of correlated patterns known as resting state networks (RSNs). It is well known that techniques like independent component analysis can separate the brain activity at rest to provide such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this aim, we propose here a novel method to compute the information flow (IF) between different RSNs from resting state magnetic resonance imaging. After haemodynamic response function blind deconvolution of all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce dimensionality in each RSN to next compute IF (estimated here in terms of Transfer Entropy) between the different RSNs by systematically increasing k (the number of principal components used in the calculation). When k = 1, this method is equivalent to computing IF using the average of all voxel activities in each RSN. For k greater than one our method calculates the k-multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension-dependent, increasing from k =1 (i.e., the average voxels activity) up to a maximum occurring at k =5 to finally decay to zero for k greater than 10. This suggests that a small number of components (close to 5) is sufficient to describe the IF pattern between RSNs. Our method - addressing differences in IF between RSNs for any generic data - can be used for group comparison in health or disease. To illustrate this, we have calculated the interRSNs IF in a dataset of Alzheimer's Disease (AD) to find that the most significant differences between AD and controls occurred for k =2, in addition to AD showing increased IF w.r.t. controls.
[ { "version": "v1", "created": "Wed, 13 May 2015 21:45:44 GMT" } ]
2015-05-15T00:00:00
[ [ "Diez", "Ibai", "" ], [ "Erramuzpe", "Asier", "" ], [ "Escudero", "Inaki", "" ], [ "Mateos", "Beatriz", "" ], [ "Cabrera", "Alberto", "" ], [ "Marinazzo", "Daniele", "" ], [ "Sanz-Arigita", "Ernesto J.", "" ], [ "Stramaglia", "Sebastiano", "" ], [ "Cortes", "Jesus M.", "" ] ]
TITLE: Information flow between resting state networks ABSTRACT: The resting brain dynamics self-organizes into a finite number of correlated patterns known as resting state networks (RSNs). It is well known that techniques like independent component analysis can separate the brain activity at rest to provide such RSNs, but the specific pattern of interaction between RSNs is not yet fully understood. To this aim, we propose here a novel method to compute the information flow (IF) between different RSNs from resting state magnetic resonance imaging. After haemodynamic response function blind deconvolution of all voxel signals, and under the hypothesis that RSNs define regions of interest, our method first uses principal component analysis to reduce dimensionality in each RSN to next compute IF (estimated here in terms of Transfer Entropy) between the different RSNs by systematically increasing k (the number of principal components used in the calculation). When k = 1, this method is equivalent to computing IF using the average of all voxel activities in each RSN. For k greater than one our method calculates the k-multivariate IF between the different RSNs. We find that the average IF among RSNs is dimension-dependent, increasing from k =1 (i.e., the average voxels activity) up to a maximum occurring at k =5 to finally decay to zero for k greater than 10. This suggests that a small number of components (close to 5) is sufficient to describe the IF pattern between RSNs. Our method - addressing differences in IF between RSNs for any generic data - can be used for group comparison in health or disease. To illustrate this, we have calculated the interRSNs IF in a dataset of Alzheimer's Disease (AD) to find that the most significant differences between AD and controls occurred for k =2, in addition to AD showing increased IF w.r.t. controls.
no_new_dataset
0.947381
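Illustrative aside for the transfer-entropy estimate referred to in the abstract above: a numpy sketch of the linear-Gaussian transfer entropy between two 1-D series, written as half the log-ratio of conditional residual variances. The per-RSN PCA reduction, the haemodynamic deconvolution and the multivariate (k > 1) case are not reproduced; the driven pair of series is synthetic.

import numpy as np

def gaussian_te(x, y, lag=1):
    # TE(x -> y) = 0.5 * log( Var[y_t | y_past] / Var[y_t | y_past, x_past] ).
    yt = y[lag:]
    ypast = y[:-lag].reshape(-1, 1)
    xpast = x[:-lag].reshape(-1, 1)

    def resid_var(target, regressors):
        A = np.hstack([regressors, np.ones((len(target), 1))])
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ coef)

    v_self = resid_var(yt, ypast)
    v_full = resid_var(yt, np.hstack([ypast, xpast]))
    return 0.5 * np.log(v_self / v_full)

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):                          # y is driven by x with one lag
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(gaussian_te(x, y), gaussian_te(y, x))       # the first value should be larger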
1505.03581
Ali Borji
Ali Borji, Laurent Itti
CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Saliency modeling has been an active research area in computer vision for about two decades. Existing state-of-the-art models perform very well in predicting where people look in natural scenes. There is, however, the risk that these models may have been overfitting themselves to available small-scale biased datasets, thus trapping the progress in a local minimum. To gain a deeper insight regarding current issues in saliency modeling and to better gauge progress, we recorded eye movements of 120 observers while they freely viewed a large number of naturalistic and artificial images. Our stimuli include 4000 images; 200 from each of 20 categories covering different types of scenes such as Cartoons, Art, Objects, Low resolution images, Indoor, Outdoor, Jumbled, Random, and Line drawings. We analyze some basic properties of this dataset and compare some successful models. We believe that our dataset opens new challenges for the next generation of saliency models and helps conduct behavioral studies on bottom-up visual attention.
[ { "version": "v1", "created": "Thu, 14 May 2015 00:34:43 GMT" } ]
2015-05-15T00:00:00
[ [ "Borji", "Ali", "" ], [ "Itti", "Laurent", "" ] ]
TITLE: CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research ABSTRACT: Saliency modeling has been an active research area in computer vision for about two decades. Existing state-of-the-art models perform very well in predicting where people look in natural scenes. There is, however, the risk that these models may have been overfitting themselves to available small-scale biased datasets, thus trapping the progress in a local minimum. To gain a deeper insight regarding current issues in saliency modeling and to better gauge progress, we recorded eye movements of 120 observers while they freely viewed a large number of naturalistic and artificial images. Our stimuli include 4000 images; 200 from each of 20 categories covering different types of scenes such as Cartoons, Art, Objects, Low resolution images, Indoor, Outdoor, Jumbled, Random, and Line drawings. We analyze some basic properties of this dataset and compare some successful models. We believe that our dataset opens new challenges for the next generation of saliency models and helps conduct behavioral studies on bottom-up visual attention.
new_dataset
0.961929
0909.3893
Cesar Hidalgo
Cesar A. Hidalgo, Nicholas Blumm, Albert-Laszlo Barabasi, Nicholas Christakis
A dynamic network approach for the study of human phenotypes
28 pages (double space), 6 figures
null
10.1371/journal.pcbi.1000353
null
physics.bio-ph physics.data-an physics.med-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of networks to integrate different genetic, proteomic, and metabolic datasets has been proposed as a viable path toward elucidating the origins of specific diseases. Here we introduce a new phenotypic database summarizing correlations obtained from the disease history of more than 30 million patients in a Phenotypic Disease Network (PDN). We present evidence that the structure of the PDN is relevant to the understanding of illness progression by showing that (1) patients develop diseases close in the network to those they already have; (2) the progression of disease along the links of the network is different for patients of different genders and ethnicities; (3) patients diagnosed with diseases which are more highly connected in the PDN tend to die sooner than those affected by less connected diseases; and (4) diseases that tend to be preceded by others in the PDN tend to be more connected than diseases that precede other illnesses, and are associated with higher degrees of mortality. Our findings show that disease progression can be represented and studied using network methods, offering the potential to enhance our understanding of the origin and evolution of human diseases. The dataset introduced here, released concurrently with this publication, represents the largest relational phenotypic resource publicly available to the research community.
[ { "version": "v1", "created": "Tue, 22 Sep 2009 02:25:42 GMT" } ]
2015-05-14T00:00:00
[ [ "Hidalgo", "Cesar A.", "" ], [ "Blumm", "Nicholas", "" ], [ "Barabasi", "Albert-Laszlo", "" ], [ "Christakis", "Nicholas", "" ] ]
TITLE: A dynamic network approach for the study of human phenotypes ABSTRACT: The use of networks to integrate different genetic, proteomic, and metabolic datasets has been proposed as a viable path toward elucidating the origins of specific diseases. Here we introduce a new phenotypic database summarizing correlations obtained from the disease history of more than 30 million patients in a Phenotypic Disease Network (PDN). We present evidence that the structure of the PDN is relevant to the understanding of illness progression by showing that (1) patients develop diseases close in the network to those they already have; (2) the progression of disease along the links of the network is different for patients of different genders and ethnicities; (3) patients diagnosed with diseases which are more highly connected in the PDN tend to die sooner than those affected by less connected diseases; and (4) diseases that tend to be preceded by others in the PDN tend to be more connected than diseases that precede other illnesses, and are associated with higher degrees of mortality. Our findings show that disease progression can be represented and studied using network methods, offering the potential to enhance our understanding of the origin and evolution of human diseases. The dataset introduced here, released concurrently with this publication, represents the largest relational phenotypic resource publicly available to the research community.
new_dataset
0.960398
0910.0767
Serena Bradde
M. Bailly-Bechet, S. Bradde, A. Braunstein, A. Flaxman, L. Foini, R. Zecchina
Clustering with shallow trees
11 pages, 7 figures
J. Stat. Mech. (2009) P12010
10.1088/1742-5468/2009/12/P12010
null
cond-mat.dis-nn cs.DS q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new method for hierarchical clustering based on the optimisation of a cost function over trees of limited depth, and we derive a message-passing method that allows it to be solved efficiently. The method and algorithm can be interpreted as a natural interpolation between two well-known approaches, namely single linkage and the recently presented Affinity Propagation. We use this general scheme to analyze three biological/medical structured datasets (human population based on genetic information, proteins based on sequences and verbal autopsies) and show that the interpolation technique provides new insight.
[ { "version": "v1", "created": "Mon, 5 Oct 2009 14:13:25 GMT" }, { "version": "v2", "created": "Mon, 23 Nov 2009 15:44:29 GMT" } ]
2015-05-14T00:00:00
[ [ "Bailly-Bechet", "M.", "" ], [ "Bradde", "S.", "" ], [ "Braunstein", "A.", "" ], [ "Flaxman", "A.", "" ], [ "Foini", "L.", "" ], [ "Zecchina", "R.", "" ] ]
TITLE: Clustering with shallow trees ABSTRACT: We propose a new method for hierarchical clustering based on the optimisation of a cost function over trees of limited depth, and we derive a message-passing method that allows it to be solved efficiently. The method and algorithm can be interpreted as a natural interpolation between two well-known approaches, namely single linkage and the recently presented Affinity Propagation. We use this general scheme to analyze three biological/medical structured datasets (human population based on genetic information, proteins based on sequences and verbal autopsies) and show that the interpolation technique provides new insight.
no_new_dataset
0.948537
0912.2826
Floriana Gargiulo
Floriana Gargiulo, Sonia Ternes, Sylvie Huet, Guillaume Deffuant
An iterative approach for generating statistically realistic populations of households
16 pages, 11 figures
null
10.1371/journal.pone.0008828
null
cs.MA cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Many different simulation frameworks, across different topics, need realistic datasets to initialize and calibrate the system. A precise reproduction of initial states is extremely important to obtain reliable forecasts from the model. Methodology/Principal Findings: This paper proposes an algorithm to create an artificial population in which individuals are described by their age and are gathered in households respecting a variety of statistical constraints (distribution of household types, sizes, age of household head, difference of age between partners and among parents and children). Such a population is often the initial state of microsimulation or (agent) individual-based models. Getting a realistic distribution of households is often very important, because this distribution has an impact on the demographic evolution. Usual techniques from the microsimulation approach cross different sources of aggregated data to generate individuals. In our case, the number of combinations of different households (types, sizes, ages of participants) makes it computationally difficult to use such methods directly. Hence we developed a specific algorithm to make the problem more easily tractable. Conclusions/Significance: We generate the populations of two pilot municipalities in the Auvergne region (France) to illustrate the approach. The generated populations show good agreement with the available statistical datasets (not used for the generation) and are obtained in a reasonable computational time.
[ { "version": "v1", "created": "Tue, 15 Dec 2009 09:28:33 GMT" } ]
2015-05-14T00:00:00
[ [ "Gargiulo", "Floriana", "" ], [ "Ternes", "Sonia", "" ], [ "Huet", "Sylvie", "" ], [ "Deffuant", "Guillaume", "" ] ]
TITLE: An iterative approach for generating statistically realistic populations of households ABSTRACT: Background: Many different simulation frameworks, across different topics, need realistic datasets to initialize and calibrate the system. A precise reproduction of initial states is extremely important to obtain reliable forecasts from the model. Methodology/Principal Findings: This paper proposes an algorithm to create an artificial population in which individuals are described by their age and are gathered in households respecting a variety of statistical constraints (distribution of household types, sizes, age of household head, difference of age between partners and among parents and children). Such a population is often the initial state of microsimulation or (agent) individual-based models. Getting a realistic distribution of households is often very important, because this distribution has an impact on the demographic evolution. Usual techniques from the microsimulation approach cross different sources of aggregated data to generate individuals. In our case, the number of combinations of different households (types, sizes, ages of participants) makes it computationally difficult to use such methods directly. Hence we developed a specific algorithm to make the problem more easily tractable. Conclusions/Significance: We generate the populations of two pilot municipalities in the Auvergne region (France) to illustrate the approach. The generated populations show good agreement with the available statistical datasets (not used for the generation) and are obtained in a reasonable computational time.
no_new_dataset
0.949529
1503.05214
Tarek Elgamal
Tarek Elgamal, Mohamed Hefeeda
Analysis of PCA Algorithms in Distributed Environments
null
null
null
null
cs.DC cs.LG cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical machine learning algorithms often face scalability bottlenecks when they are applied to large-scale data. Such algorithms were designed to work with small data that is assumed to fit in the memory of one machine. In this report, we analyze different methods for computing an important machine learning algorithm, namely Principal Component Analysis (PCA), and we comment on its limitations in supporting large datasets. The methods are analyzed and compared across two important metrics: time complexity and communication complexity. We consider the worst-case scenarios for both metrics, and we identify the software libraries that implement each method. The analysis in this report helps researchers and engineers in (i) understanding the main bottlenecks for scalability in different PCA algorithms, (ii) choosing the most appropriate method and software library for a given application and data set characteristics, and (iii) designing new scalable PCA algorithms.
[ { "version": "v1", "created": "Tue, 17 Mar 2015 20:38:15 GMT" }, { "version": "v2", "created": "Wed, 13 May 2015 12:05:02 GMT" } ]
2015-05-14T00:00:00
[ [ "Elgamal", "Tarek", "" ], [ "Hefeeda", "Mohamed", "" ] ]
TITLE: Analysis of PCA Algorithms in Distributed Environments ABSTRACT: Classical machine learning algorithms often face scalability bottlenecks when they are applied to large-scale data. Such algorithms were designed to work with small data that is assumed to fit in the memory of one machine. In this report, we analyze different methods for computing an important machine learning algorithm, namely Principal Component Analysis (PCA), and we comment on its limitations in supporting large datasets. The methods are analyzed and compared across two important metrics: time complexity and communication complexity. We consider the worst-case scenarios for both metrics, and we identify the software libraries that implement each method. The analysis in this report helps researchers and engineers in (i) understanding the main bottlenecks for scalability in different PCA algorithms, (ii) choosing the most appropriate method and software library for a given application and data set characteristics, and (iii) designing new scalable PCA algorithms.
no_new_dataset
0.952926
1505.01919
Puja Gupta
Puja Gupta
Characterization of Performance Anomalies in Hadoop
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the huge variety of data and equally large-scale systems, there is no unique execution setting for these systems that can guarantee the best performance for each query. In this project, we tried to study the impact of different execution settings on the execution time of workloads by varying them one at a time. Using the data from these experiments, a decision tree was built where each internal node represents an execution parameter, each branch represents the value chosen for the parameter, and each leaf node represents a range for execution time in minutes. The attribute in the decision tree to split the dataset on is selected based on the maximum information gain or lowest entropy. Once the tree is trained with the training samples, it can be used to get an approximate range for the expected execution time. When the actual execution time differs from this expected value, a performance anomaly can be detected. For a test dataset with 400 samples, 99% of samples had an actual execution time in the range predicted by the decision tree. Also, by analyzing the constructed tree, one can get an idea of which configuration gives better performance for a given workload. Initial experiments suggest that the impact an execution parameter can have on the target attribute (here, execution time) can be related to the distance of that feature node from the root of the constructed decision tree. From initial results, the percent change in the values of the target attribute across various values of a feature node close to the root is 6 times larger than when that same feature node is far from the root node. This observation will depend on how well the decision tree was trained and may not be true for every case.
[ { "version": "v1", "created": "Fri, 8 May 2015 03:24:33 GMT" }, { "version": "v2", "created": "Wed, 13 May 2015 03:50:26 GMT" } ]
2015-05-14T00:00:00
[ [ "Gupta", "Puja", "" ] ]
TITLE: Characterization of Performance Anomalies in Hadoop ABSTRACT: With the huge variety of data and equally large-scale systems, there is no unique execution setting for these systems that can guarantee the best performance for each query. In this project, we tried to study the impact of different execution settings on the execution time of workloads by varying them one at a time. Using the data from these experiments, a decision tree was built where each internal node represents an execution parameter, each branch represents the value chosen for the parameter, and each leaf node represents a range for execution time in minutes. The attribute in the decision tree to split the dataset on is selected based on the maximum information gain or lowest entropy. Once the tree is trained with the training samples, it can be used to get an approximate range for the expected execution time. When the actual execution time differs from this expected value, a performance anomaly can be detected. For a test dataset with 400 samples, 99% of samples had an actual execution time in the range predicted by the decision tree. Also, by analyzing the constructed tree, one can get an idea of which configuration gives better performance for a given workload. Initial experiments suggest that the impact an execution parameter can have on the target attribute (here, execution time) can be related to the distance of that feature node from the root of the constructed decision tree. From initial results, the percent change in the values of the target attribute across various values of a feature node close to the root is 6 times larger than when that same feature node is far from the root node. This observation will depend on how well the decision tree was trained and may not be true for every case.
no_new_dataset
0.955361
1505.02212
Yakir Reshef
Yakir A. Reshef, David N. Reshef, Pardis C. Sabeti, Michael M. Mitzenmacher
Equitability, interval estimation, and statistical power
Yakir A. Reshef and David N. Reshef are co-first authors, Pardis C. Sabeti and Michael M. Mitzenmacher are co-last authors. This paper, together with arXiv:1505.02212, subsumes arXiv:1408.4908
null
null
null
math.ST cs.LG q-bio.QM stat.ME stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For analysis of a high-dimensional dataset, a common approach is to test a null hypothesis of statistical independence on all variable pairs using a non-parametric measure of dependence. However, because this approach attempts to identify any non-trivial relationship no matter how weak, it often identifies too many relationships to be useful. What is needed is a way of identifying a smaller set of relationships that merit detailed further analysis. Here we formally present and characterize equitability, a property of measures of dependence that aims to overcome this challenge. Notionally, an equitable statistic is a statistic that, given some measure of noise, assigns similar scores to equally noisy relationships of different types [Reshef et al. 2011]. We begin by formalizing this idea via a new object called the interpretable interval, which functions as an interval estimate of the amount of noise in a relationship of unknown type. We define an equitable statistic as one with small interpretable intervals. We then draw on the equivalence of interval estimation and hypothesis testing to show that under moderate assumptions an equitable statistic is one that yields well powered tests for distinguishing not only between trivial and non-trivial relationships of all kinds but also between non-trivial relationships of different strengths. This means that equitability allows us to specify a threshold relationship strength $x_0$ and to search for relationships of all kinds with strength greater than $x_0$. Thus, equitability can be thought of as a strengthening of power against independence that enables fruitful analysis of data sets with a small number of strong, interesting relationships and a large number of weaker ones. We conclude with a demonstration of how our two equivalent characterizations of equitability can be used to evaluate the equitability of a statistic in practice.
[ { "version": "v1", "created": "Sat, 9 May 2015 00:31:23 GMT" }, { "version": "v2", "created": "Tue, 12 May 2015 20:05:17 GMT" } ]
2015-05-14T00:00:00
[ [ "Reshef", "Yakir A.", "" ], [ "Reshef", "David N.", "" ], [ "Sabeti", "Pardis C.", "" ], [ "Mitzenmacher", "Michael M.", "" ] ]
TITLE: Equitability, interval estimation, and statistical power ABSTRACT: For analysis of a high-dimensional dataset, a common approach is to test a null hypothesis of statistical independence on all variable pairs using a non-parametric measure of dependence. However, because this approach attempts to identify any non-trivial relationship no matter how weak, it often identifies too many relationships to be useful. What is needed is a way of identifying a smaller set of relationships that merit detailed further analysis. Here we formally present and characterize equitability, a property of measures of dependence that aims to overcome this challenge. Notionally, an equitable statistic is a statistic that, given some measure of noise, assigns similar scores to equally noisy relationships of different types [Reshef et al. 2011]. We begin by formalizing this idea via a new object called the interpretable interval, which functions as an interval estimate of the amount of noise in a relationship of unknown type. We define an equitable statistic as one with small interpretable intervals. We then draw on the equivalence of interval estimation and hypothesis testing to show that under moderate assumptions an equitable statistic is one that yields well powered tests for distinguishing not only between trivial and non-trivial relationships of all kinds but also between non-trivial relationships of different strengths. This means that equitability allows us to specify a threshold relationship strength $x_0$ and to search for relationships of all kinds with strength greater than $x_0$. Thus, equitability can be thought of as a strengthening of power against independence that enables fruitful analysis of data sets with a small number of strong, interesting relationships and a large number of weaker ones. We conclude with a demonstration of how our two equivalent characterizations of equitability can be used to evaluate the equitability of a statistic in practice.
no_new_dataset
0.938407
1505.02867
Charles Mathy
Charles Mathy, Nate Derbinsky, Jos\'e Bento, Jonathan Rosenthal and Jonathan Yedidia
The Boundary Forest Algorithm for Online Supervised and Unsupervised Learning
7 pages, 4 figs, 1 page supp. info
Proc. of the 29th AAAI Conference on Artificial Intelligence (AAAI), 2864-2870. Austin, TX, USA. (2015)
null
null
cs.LG cs.DS cs.IR stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a new instance-based learning algorithm called the Boundary Forest (BF) algorithm, which can be used for supervised and unsupervised learning. The algorithm builds a forest of trees whose nodes store previously seen examples. It can be shown data points one at a time and updates itself incrementally, hence it is naturally online. Few instance-based algorithms have this property while being simultaneously fast, which the BF is. This is crucial for applications where one needs to respond to input data in real time. The number of children of each node is not set beforehand but obtained from the training procedure, which makes the algorithm very flexible with regard to what data manifolds it can learn. We test its generalization performance and speed on a range of benchmark datasets and detail in which settings it outperforms the state of the art. Empirically we find that training time scales as O(DNlog(N)) and testing as O(Dlog(N)), where D is the dimensionality and N the amount of data.
[ { "version": "v1", "created": "Tue, 12 May 2015 03:45:11 GMT" } ]
2015-05-14T00:00:00
[ [ "Mathy", "Charles", "" ], [ "Derbinsky", "Nate", "" ], [ "Bento", "José", "" ], [ "Rosenthal", "Jonathan", "" ], [ "Yedidia", "Jonathan", "" ] ]
TITLE: The Boundary Forest Algorithm for Online Supervised and Unsupervised Learning ABSTRACT: We describe a new instance-based learning algorithm called the Boundary Forest (BF) algorithm, which can be used for supervised and unsupervised learning. The algorithm builds a forest of trees whose nodes store previously seen examples. It can be shown data points one at a time and updates itself incrementally, hence it is naturally online. Few instance-based algorithms have this property while being simultaneously fast, which the BF is. This is crucial for applications where one needs to respond to input data in real time. The number of children of each node is not set beforehand but obtained from the training procedure, which makes the algorithm very flexible with regard to what data manifolds it can learn. We test its generalization performance and speed on a range of benchmark datasets and detail in which settings it outperforms the state of the art. Empirically we find that training time scales as O(DNlog(N)) and testing as O(Dlog(N)), where D is the dimensionality and N the amount of data.
no_new_dataset
0.946151
1505.02973
Konstantinos Tserpes
Evangelos Psomakelis, Konstantinos Tserpes, Dimosthenis Anagnostopoulos, Theodora Varvarigou
Comparing methods for Twitter Sentiment Analysis
5 pages, 1 figure, 6th Conference on Knowledge Discovery and Information Retrieval 2014, Rome, Italy
null
null
null
cs.CL cs.IR cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work extends the set of works which deal with the popular problem of sentiment analysis in Twitter. It investigates the most popular document ("tweet") representation methods which feed sentiment evaluation mechanisms. In particular, we study the bag-of-words, n-grams and n-gram graphs approaches and for each of them we evaluate the performance of a lexicon-based and 7 learning-based classification algorithms (namely SVM, Na\"ive Bayesian Networks, Logistic Regression, Multilayer Perceptrons, Best-First Trees, Functional Trees and C4.5) as well as their combinations, using a set of 4451 manually annotated tweets. The results demonstrate the superiority of learning-based methods and in particular of n-gram graphs approaches for predicting the sentiment of tweets. They also show that the combinatory approach has impressive effects on n-grams, raising the confidence up to 83.15% on the 5-Grams, using majority vote and a balanced dataset (equal number of positive, negative and neutral tweets for training). In the n-gram graph cases the improvement was small to none, reaching 94.52% on the 4-gram graphs, using Orthodromic distance and a threshold of 0.001.
[ { "version": "v1", "created": "Tue, 12 May 2015 12:05:19 GMT" } ]
2015-05-14T00:00:00
[ [ "Psomakelis", "Evangelos", "" ], [ "Tserpes", "Konstantinos", "" ], [ "Anagnostopoulos", "Dimosthenis", "" ], [ "Varvarigou", "Theodora", "" ] ]
TITLE: Comparing methods for Twitter Sentiment Analysis ABSTRACT: This work extends the set of works which deal with the popular problem of sentiment analysis in Twitter. It investigates the most popular document ("tweet") representation methods which feed sentiment evaluation mechanisms. In particular, we study the bag-of-words, n-grams and n-gram graphs approaches and for each of them we evaluate the performance of a lexicon-based and 7 learning-based classification algorithms (namely SVM, Na\"ive Bayesian Networks, Logistic Regression, Multilayer Perceptrons, Best-First Trees, Functional Trees and C4.5) as well as their combinations, using a set of 4451 manually annotated tweets. The results demonstrate the superiority of learning-based methods and in particular of n-gram graphs approaches for predicting the sentiment of tweets. They also show that the combinatory approach has impressive effects on n-grams, raising the confidence up to 83.15% on the 5-Grams, using majority vote and a balanced dataset (equal number of positive, negative and neutral tweets for training). In the n-gram graph cases the improvement was small to none, reaching 94.52% on the 4-gram graphs, using Orthodromic distance and a threshold of 0.001.
no_new_dataset
0.946001
1505.03229
Ikuro Sato
Ikuro Sato, Hiroki Nishimura, Kensuke Yokoi
APAC: Augmented PAttern Classification with Neural Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep neural networks have been exhibiting splendid accuracies in many visual pattern classification problems. Many of the state-of-the-art methods employ a technique known as data augmentation at the training stage. This paper addresses the issue of the decision rule for classifiers trained with augmented data. Our method is named APAC: the Augmented PAttern Classification, which is a way of classification using the optimal decision rule for augmented data learning. Discussion of methods of data augmentation is not our primary focus. We show clear evidence that APAC gives far better generalization performance than the traditional way of class prediction in several experiments. Our convolutional neural network model with APAC achieved state-of-the-art accuracy on the MNIST dataset among non-ensemble classifiers. Even our multilayer perceptron model beats some of the convolutional models with recently invented stochastic regularization techniques on the CIFAR-10 dataset.
[ { "version": "v1", "created": "Wed, 13 May 2015 03:33:29 GMT" } ]
2015-05-14T00:00:00
[ [ "Sato", "Ikuro", "" ], [ "Nishimura", "Hiroki", "" ], [ "Yokoi", "Kensuke", "" ] ]
TITLE: APAC: Augmented PAttern Classification with Neural Networks ABSTRACT: Deep neural networks have been exhibiting splendid accuracies in many visual pattern classification problems. Many of the state-of-the-art methods employ a technique known as data augmentation at the training stage. This paper addresses the issue of the decision rule for classifiers trained with augmented data. Our method is named APAC: the Augmented PAttern Classification, which is a way of classification using the optimal decision rule for augmented data learning. Discussion of methods of data augmentation is not our primary focus. We show clear evidence that APAC gives far better generalization performance than the traditional way of class prediction in several experiments. Our convolutional neural network model with APAC achieved state-of-the-art accuracy on the MNIST dataset among non-ensemble classifiers. Even our multilayer perceptron model beats some of the convolutional models with recently invented stochastic regularization techniques on the CIFAR-10 dataset.
no_new_dataset
0.947672
1505.03236
Jensi
R. Jensi and G. Wiselin Jiji
Hybrid data clustering approach using K-Means and Flower Pollination Algorithm
11 pages, Journal. Advanced Computational Intelligence: An International Journal (ACII), Vol.2, No.2, April 2015
null
null
null
cs.LG cs.IR cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data clustering is a technique for grouping a set of objects into a known number of groups. Several approaches are widely applied to data clustering so that objects within the clusters are similar and objects in different clusters are far away from each other. K-Means is one of the most familiar center-based clustering algorithms, since its implementation is very easy and it converges quickly. However, the K-Means algorithm is sensitive to initialization and hence can get trapped in local optima. The Flower Pollination Algorithm (FPA) is a global optimization technique that avoids getting trapped in a local optimum solution. In this paper, a novel hybrid data clustering approach using the Flower Pollination Algorithm and K-Means (FPAKM) is proposed. The results of the proposed algorithm are compared with K-Means and FPA on eight datasets. From the experimental results, FPAKM is better than FPA and K-Means.
[ { "version": "v1", "created": "Wed, 13 May 2015 04:24:50 GMT" } ]
2015-05-14T00:00:00
[ [ "Jensi", "R.", "" ], [ "Jiji", "G. Wiselin", "" ] ]
TITLE: Hybrid data clustering approach using K-Means and Flower Pollination Algorithm ABSTRACT: Data clustering is a technique for grouping a set of objects into a known number of groups. Several approaches are widely applied to data clustering so that objects within the clusters are similar and objects in different clusters are far away from each other. K-Means is one of the most familiar center-based clustering algorithms, since its implementation is very easy and it converges quickly. However, the K-Means algorithm is sensitive to initialization and hence can get trapped in local optima. The Flower Pollination Algorithm (FPA) is a global optimization technique that avoids getting trapped in a local optimum solution. In this paper, a novel hybrid data clustering approach using the Flower Pollination Algorithm and K-Means (FPAKM) is proposed. The results of the proposed algorithm are compared with K-Means and FPA on eight datasets. From the experimental results, FPAKM is better than FPA and K-Means.
no_new_dataset
0.951006
1505.03329
Jelena Milosevic
Jelena Milosevic, Alberto Ferrante, Miroslaw Malek
A general practitioner or a specialist for your infected smartphone?
2 pages poster, IEEE Symposium on Security and Privacy, San Jose, USA, May 2015
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the explosive growth in the number of mobile devices, mobile malware is spreading rapidly as well, and the number of encountered malware families is increasing. Existing solutions, which are mainly based on one malware detector running on the phone or in the cloud, are no longer effective. The main problem lies in the fact that it might be impossible to create a single mobile malware detector that would be able to detect different malware families with high accuracy, while being at the same time lightweight enough not to drain the battery quickly and fast enough to give detection results promptly. The proposed approach to mobile malware detection is analogous to the general practitioner versus specialist approach to dealing with a medical problem. Similarly to a general practitioner who, based on indicative symptoms, identifies potential illnesses and sends the patient to an appropriate specialist, our detection system distinguishes among symptoms representing different malware families and, once the symptoms are detected, triggers specific analyses. A system monitoring application operates in the same way as a general practitioner. It is able to distinguish between different symptoms and trigger appropriate detection mechanisms. As an analogy to different specialists, an ensemble of detectors, each specifically trained for a particular malware family, is used. The main challenge of the approach is to define representative symptoms of different malware families and train detectors accordingly. The main goal of the poster is to foster discussion on the most representative symptoms of different malware families and to discuss initial results in this area obtained using the Malware Genome project dataset.
[ { "version": "v1", "created": "Wed, 13 May 2015 11:09:52 GMT" } ]
2015-05-14T00:00:00
[ [ "Milosevic", "Jelena", "" ], [ "Ferrante", "Alberto", "" ], [ "Malek", "Miroslaw", "" ] ]
TITLE: A general practitioner or a specialist for your infected smartphone? ABSTRACT: With the explosive growth in the number of mobile devices, mobile malware is spreading rapidly as well, and the number of encountered malware families is increasing. Existing solutions, which are mainly based on one malware detector running on the phone or in the cloud, are no longer effective. The main problem lies in the fact that it might be impossible to create a single mobile malware detector that would be able to detect different malware families with high accuracy, while being at the same time lightweight enough not to drain the battery quickly and fast enough to give detection results promptly. The proposed approach to mobile malware detection is analogous to the general practitioner versus specialist approach to dealing with a medical problem. Similarly to a general practitioner who, based on indicative symptoms, identifies potential illnesses and sends the patient to an appropriate specialist, our detection system distinguishes among symptoms representing different malware families and, once the symptoms are detected, triggers specific analyses. A system monitoring application operates in the same way as a general practitioner. It is able to distinguish between different symptoms and trigger appropriate detection mechanisms. As an analogy to different specialists, an ensemble of detectors, each specifically trained for a particular malware family, is used. The main challenge of the approach is to define representative symptoms of different malware families and train detectors accordingly. The main goal of the poster is to foster discussion on the most representative symptoms of different malware families and to discuss initial results in this area obtained using the Malware Genome project dataset.
no_new_dataset
0.922412
0811.2802
Perivolaropoulos Leandros
L. Perivolaropoulos and A. Shafieloo
Bright High z SnIa: A Challenge for LCDM?
8 pages, 7 figures. Extended analysis, added two figures, results unchanged. The data and the mathematica v.5.2 files used for the production of the figures may be downloaded from http://leandros.physics.uoi.gr/bnd.zip. Accepted in Physical Review D (to appear)
Phys.Rev.D79:123502,2009
10.1103/PhysRevD.79.123502
null
astro-ph gr-qc physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has recently been pointed out by Kowalski et al. (arxiv:0804.4142) that there is `an unexpected brightness of the SnIa data at z>1'. We quantify this statement by constructing a new statistic which is applicable directly on the Type Ia Supernova (SnIa) distance moduli. This statistic is designed to pick up systematic brightness trends of SnIa datapoints with respect to a best fit cosmological model at high redshifts. It is based on binning the normalized differences between the SnIa distance moduli and the corresponding best fit values in the context of a specific cosmological model (e.g. LCDM). We then focus on the highest redshift bin and extend its size towards lower redshifts until the Binned Normalized Difference (BND) changes sign (crosses 0) at a redshift z_c (bin size N_c). The bin size N_c of this crossing (the statistical variable) is then compared with the corresponding crossing bin size N_{mc} for Monte Carlo data realizations based on the best fit model. We find that the crossing bin size N_c obtained from the Union08 and Gold06 data with respect to the best fit LCDM model is anomalously large compared to N_{mc} of the corresponding Monte Carlo datasets obtained from the best fit LCDM in each case. In particular, only 2.2% of the Monte Carlo LCDM datasets are consistent with the Gold06 value of N_c while the corresponding probability for the Union08 value of N_c is 5.3%. Thus, according to this statistic, the probability that the high redshift brightness bias of the Union08 and Gold06 datasets is realized in the context of a (w_0,w_1)=(-1,0) model (LCDM cosmology) is less than 6%. The corresponding realization probability in the context of a (w_0,w_1)=(-1.4,2) model is more than 30% for both the Union08 and the Gold06 datasets.
[ { "version": "v1", "created": "Mon, 17 Nov 2008 21:03:50 GMT" }, { "version": "v2", "created": "Mon, 1 Jun 2009 15:24:13 GMT" } ]
2015-05-13T00:00:00
[ [ "Perivolaropoulos", "L.", "" ], [ "Shafieloo", "A.", "" ] ]
TITLE: Bright High z SnIa: A Challenge for LCDM? ABSTRACT: It has recently been pointed out by Kowalski et al. (arxiv:0804.4142) that there is `an unexpected brightness of the SnIa data at z>1'. We quantify this statement by constructing a new statistic which is applicable directly on the Type Ia Supernova (SnIa) distance moduli. This statistic is designed to pick up systematic brightness trends of SnIa datapoints with respect to a best fit cosmological model at high redshifts. It is based on binning the normalized differences between the SnIa distance moduli and the corresponding best fit values in the context of a specific cosmological model (e.g. LCDM). We then focus on the highest redshift bin and extend its size towards lower redshifts until the Binned Normalized Difference (BND) changes sign (crosses 0) at a redshift z_c (bin size N_c). The bin size N_c of this crossing (the statistical variable) is then compared with the corresponding crossing bin size N_{mc} for Monte Carlo data realizations based on the best fit model. We find that the crossing bin size N_c obtained from the Union08 and Gold06 data with respect to the best fit LCDM model is anomalously large compared to N_{mc} of the corresponding Monte Carlo datasets obtained from the best fit LCDM in each case. In particular, only 2.2% of the Monte Carlo LCDM datasets are consistent with the Gold06 value of N_c while the corresponding probability for the Union08 value of N_c is 5.3%. Thus, according to this statistic, the probability that the high redshift brightness bias of the Union08 and Gold06 datasets is realized in the context of a (w_0,w_1)=(-1,0) model (LCDM cosmology) is less than 6%. The corresponding realization probability in the context of a (w_0,w_1)=(-1.4,2) model is more than 30% for both the Union08 and the Gold06 datasets.
no_new_dataset
0.956145
0812.0743
Qiang Li
Qiang Li, Yan He, Jing-ping Jiang
A Novel Clustering Algorithm Based on Quantum Games
19 pages, 5 figures, 5 tables
2009 J. Phys. A: Math. Theor. 42 445303
10.1088/1751-8113/42/44/445303
null
cs.LG cs.AI cs.CV cs.GT cs.MA cs.NE quant-ph
http://creativecommons.org/licenses/by-nc-sa/3.0/
Enormous successes have been achieved by quantum algorithms during the last decade. In this paper, we combine the quantum game with the problem of data clustering, and then develop a quantum-game-based clustering algorithm, in which data points in a dataset are considered as players who can make decisions and implement quantum strategies in quantum games. After each round of a quantum game, each player's expected payoff is calculated. Later, he uses a link-removing-and-rewiring (LRR) function to change his neighbors and adjust the strength of links connecting to them in order to maximize his payoff. Further, algorithms are discussed and analyzed in two cases of strategies, two payoff matrices and two LRR functions. Consequently, the simulation results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the clustering algorithms have fast rates of convergence. Moreover, the comparison with other algorithms also provides an indication of the effectiveness of the proposed approach.
[ { "version": "v1", "created": "Wed, 3 Dec 2008 15:46:03 GMT" }, { "version": "v2", "created": "Sat, 10 Oct 2009 09:10:36 GMT" } ]
2015-05-13T00:00:00
[ [ "Li", "Qiang", "" ], [ "He", "Yan", "" ], [ "Jiang", "Jing-ping", "" ] ]
TITLE: A Novel Clustering Algorithm Based on Quantum Games ABSTRACT: Enormous successes have been achieved by quantum algorithms during the last decade. In this paper, we combine the quantum game with the problem of data clustering, and then develop a quantum-game-based clustering algorithm, in which data points in a dataset are considered as players who can make decisions and implement quantum strategies in quantum games. After each round of a quantum game, each player's expected payoff is calculated. Later, he uses a link-removing-and-rewiring (LRR) function to change his neighbors and adjust the strength of links connecting to them in order to maximize his payoff. Further, algorithms are discussed and analyzed in two cases of strategies, two payoff matrices and two LRR functions. Consequently, the simulation results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the clustering algorithms have fast rates of convergence. Moreover, the comparison with other algorithms also provides an indication of the effectiveness of the proposed approach.
no_new_dataset
0.950595
0902.4478
David Pretty
D. G. Pretty and B. D. Blackwell
A data mining algorithm for automated characterisation of fluctuations in multichannel timeseries
14 pages, 8 figures
null
10.1016/j.cpc.2009.05.003
null
physics.data-an physics.plasm-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a data mining technique for the analysis of multichannel oscillatory timeseries data and show an application using poloidal arrays of magnetic sensors installed in the H-1 heliac. The procedure is highly automated, and scales well to large datasets. The timeseries data is split into short time segments to provide time resolution, and each segment is represented by a singular value decomposition (SVD). By comparing power spectra of the temporal singular vectors, singular values are grouped into subsets which define fluctuation structures. Thresholds for the normalised energy of the fluctuation structure and the normalised entropy of the SVD are used to filter the dataset. We assume that distinct classes of fluctuations are localised in the space of phase differences between each pair of nearest neighbour channels. An expectation maximisation clustering algorithm is used to locate the distinct classes of fluctuations, and a cluster tree mapping is used to visualise the results.
[ { "version": "v1", "created": "Wed, 25 Feb 2009 22:48:09 GMT" } ]
2015-05-13T00:00:00
[ [ "Pretty", "D. G.", "" ], [ "Blackwell", "B. D.", "" ] ]
TITLE: A data mining algorithm for automated characterisation of fluctuations in multichannel timeseries ABSTRACT: We present a data mining technique for the analysis of multichannel oscillatory timeseries data and show an application using poloidal arrays of magnetic sensors installed in the H-1 heliac. The procedure is highly automated, and scales well to large datasets. The timeseries data is split into short time segments to provide time resolution, and each segment is represented by a singular value decomposition (SVD). By comparing power spectra of the temporal singular vectors, singular values are grouped into subsets which define fluctuation structures. Thresholds for the normalised energy of the fluctuation structure and the normalised entropy of the SVD are used to filter the dataset. We assume that distinct classes of fluctuations are localised in the space of phase differences between each pair of nearest neighbour channels. An expectation maximisation clustering algorithm is used to locate the distinct classes of fluctuations, and a cluster tree mapping is used to visualise the results.
no_new_dataset
0.949902
0906.2914
Michal Zerola
Michal Zerola, J\'er\^ome Lauret, Roman Bart\'ak and Michal \v{S}umbera
Efficient Multi-site Data Movement Using Constraint Programming for Data Hungry Science
To appear in proceedings of Computing in High Energy and Nuclear Physics 2009
null
10.1088/1742-6596/219/6/062069
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the past decade, HENP experiments have been heading towards a distributed computing model in an effort to concurrently process tasks over enormous data sets that have been increasing in size as a function of time. In order to optimize all available resources (geographically spread) and minimize the processing time, it is also necessary to face the question of efficient data transfers and placements. A key question is whether the time penalty for moving the data to the computational resources is worth the presumed gain. Moving towards truly distributed task scheduling, we present a technique based on a Constraint Programming (CP) approach. The CP technique schedules data transfers from multiple resources, considering all available paths of diverse characteristics (capacity, sharing and storage), with minimum user waiting time as the objective. We introduce a model for planning data transfers to a single destination (data transfer) as well as its extension for an optimal data set spreading strategy (data placement). Several enhancements to a solver of the CP model will be shown, leading to a faster schedule computation time using symmetry breaking, branch cutting, well-studied principles from the job-shop scheduling field and several heuristics. Finally, we will present the design and implementation of a corner-stone application aimed at moving datasets according to the schedule. Results will include a comparison of performance and trade-offs between CP techniques and a Peer-2-Peer model from a simulation framework, as well as a real-case scenario taken from practical usage of a CP scheduler.
[ { "version": "v1", "created": "Tue, 16 Jun 2009 12:33:25 GMT" }, { "version": "v2", "created": "Thu, 18 Jun 2009 19:20:05 GMT" } ]
2015-05-13T00:00:00
[ [ "Zerola", "Michal", "" ], [ "Lauret", "Jérôme", "" ], [ "Barták", "Roman", "" ], [ "Šumbera", "Michal", "" ] ]
TITLE: Efficient Multi-site Data Movement Using Constraint Programming for Data Hungry Science ABSTRACT: For the past decade, HENP experiments have been heading towards a distributed computing model in an effort to concurrently process tasks over enormous data sets that have been increasing in size as a function of time. In order to optimize all available resources (geographically spread) and minimize the processing time, it is also necessary to face the question of efficient data transfers and placements. A key question is whether the time penalty for moving the data to the computational resources is worth the presumed gain. Moving towards truly distributed task scheduling, we present a technique based on a Constraint Programming (CP) approach. The CP technique schedules data transfers from multiple resources, considering all available paths of diverse characteristics (capacity, sharing and storage), with minimum user waiting time as the objective. We introduce a model for planning data transfers to a single destination (data transfer) as well as its extension for an optimal data set spreading strategy (data placement). Several enhancements to a solver of the CP model will be shown, leading to a faster schedule computation time using symmetry breaking, branch cutting, well-studied principles from the job-shop scheduling field and several heuristics. Finally, we will present the design and implementation of a corner-stone application aimed at moving datasets according to the schedule. Results will include a comparison of performance and trade-offs between CP techniques and a Peer-2-Peer model from a simulation framework, as well as a real-case scenario taken from practical usage of a CP scheduler.
no_new_dataset
0.946695
1412.6071
Benjamin Graham
Benjamin Graham
Fractional Max-Pooling
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolutional networks almost always incorporate some form of spatial pooling, and very often it is alpha times alpha max-pooling with alpha=2. Max-pooling acts on the hidden layers of the network, reducing their size by an integer multiplicative factor alpha. The amazing by-product of discarding 75% of your data is that you build into the network a degree of invariance with respect to translations and elastic distortions. However, if you simply alternate convolutional layers with max-pooling layers, performance is limited due to the rapid reduction in spatial size, and the disjoint nature of the pooling regions. We have formulated a fractional version of max-pooling where alpha is allowed to take non-integer values. Our version of max-pooling is stochastic as there are lots of different ways of constructing suitable pooling regions. We find that our form of fractional max-pooling reduces overfitting on a variety of datasets: for instance, we improve on the state of the art for CIFAR-100 without even using dropout.
[ { "version": "v1", "created": "Thu, 18 Dec 2014 20:45:11 GMT" }, { "version": "v2", "created": "Mon, 22 Dec 2014 11:06:35 GMT" }, { "version": "v3", "created": "Mon, 2 Mar 2015 20:06:22 GMT" }, { "version": "v4", "created": "Tue, 12 May 2015 06:36:11 GMT" } ]
2015-05-13T00:00:00
[ [ "Graham", "Benjamin", "" ] ]
TITLE: Fractional Max-Pooling ABSTRACT: Convolutional networks almost always incorporate some form of spatial pooling, and very often it is alpha times alpha max-pooling with alpha=2. Max-pooling acts on the hidden layers of the network, reducing their size by an integer multiplicative factor alpha. The amazing by-product of discarding 75% of your data is that you build into the network a degree of invariance with respect to translations and elastic distortions. However, if you simply alternate convolutional layers with max-pooling layers, performance is limited due to the rapid reduction in spatial size, and the disjoint nature of the pooling regions. We have formulated a fractional version of max-pooling where alpha is allowed to take non-integer values. Our version of max-pooling is stochastic as there are lots of different ways of constructing suitable pooling regions. We find that our form of fractional max-pooling reduces overfitting on a variety of datasets: for instance, we improve on the state of the art for CIFAR-100 without even using dropout.
no_new_dataset
0.947088
1412.6857
Tyng-Luh Liu
Jyh-Jing Hwang and Tyng-Luh Liu
Contour Detection Using Cost-Sensitive Convolutional Neural Networks
9 pages, 3 figures
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of contour detection via per-pixel classification of edge points. To facilitate the process, the proposed approach leverages DenseNet, an efficient implementation of multiscale convolutional neural networks (CNNs), to extract an informative feature vector for each pixel and uses an SVM classifier to accomplish contour detection. The main challenge lies in adapting a pre-trained per-image CNN model to yield per-pixel image features. We propose to build on the DenseNet architecture to achieve pixelwise fine-tuning and then consider a cost-sensitive strategy to further improve the learning with a small dataset of edge and non-edge image patches. In the contour detection experiments, we look into the effectiveness of combining per-pixel features from different CNN layers and obtain performance comparable to the state of the art on BSDS500.
[ { "version": "v1", "created": "Mon, 22 Dec 2014 01:16:50 GMT" }, { "version": "v2", "created": "Wed, 24 Dec 2014 14:37:27 GMT" }, { "version": "v3", "created": "Thu, 15 Jan 2015 15:01:16 GMT" }, { "version": "v4", "created": "Sat, 28 Feb 2015 07:37:54 GMT" }, { "version": "v5", "created": "Tue, 12 May 2015 08:42:42 GMT" } ]
2015-05-13T00:00:00
[ [ "Hwang", "Jyh-Jing", "" ], [ "Liu", "Tyng-Luh", "" ] ]
TITLE: Contour Detection Using Cost-Sensitive Convolutional Neural Networks ABSTRACT: We address the problem of contour detection via per-pixel classification of edge points. To facilitate the process, the proposed approach leverages DenseNet, an efficient implementation of multiscale convolutional neural networks (CNNs), to extract an informative feature vector for each pixel and uses an SVM classifier to accomplish contour detection. The main challenge lies in adapting a pre-trained per-image CNN model to yield per-pixel image features. We propose to build on the DenseNet architecture to achieve pixelwise fine-tuning and then consider a cost-sensitive strategy to further improve the learning with a small dataset of edge and non-edge image patches. In the contour detection experiments, we look into the effectiveness of combining per-pixel features from different CNN layers and obtain performance comparable to the state of the art on BSDS500.
no_new_dataset
0.946349
1504.06603
Dmytro Mishkin
Dmytro Mishkin and Jiri Matas and Michal Perdoch and Karel Lenc
WxBS: Wide Baseline Stereo Generalizations
Descriptor and detector evaluation expanded
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have presented a new problem -- the wide multiple baseline stereo (WxBS) -- which considers matching of images that simultaneously differ in more than one image acquisition factor such as viewpoint, illumination, sensor type or where object appearance changes significantly, e.g. over time. A new dataset with the ground truth for evaluation of matching algorithms has been introduced and will be made public. We have extensively tested a large set of popular and recent detectors and descriptors and show that the combination of RootSIFT and HalfRootSIFT as descriptors with MSER and Hessian-Affine detectors works best for many different nuisance factors. We show that simple adaptive thresholding improves Hessian-Affine, DoG, MSER (and possibly other) detectors and allows them to be used on infrared and low contrast images. A novel matching algorithm for addressing the WxBS problem has been introduced. We have shown experimentally that the WxBS-M matcher dominates the state-of-the-art methods on both the new and existing datasets.
[ { "version": "v1", "created": "Fri, 24 Apr 2015 19:19:04 GMT" }, { "version": "v2", "created": "Tue, 12 May 2015 14:42:53 GMT" } ]
2015-05-13T00:00:00
[ [ "Mishkin", "Dmytro", "" ], [ "Matas", "Jiri", "" ], [ "Perdoch", "Michal", "" ], [ "Lenc", "Karel", "" ] ]
TITLE: WxBS: Wide Baseline Stereo Generalizations ABSTRACT: We have presented a new problem -- the wide multiple baseline stereo (WxBS) -- which considers matching of images that simultaneously differ in more than one image acquisition factor such as viewpoint, illumination, sensor type or where object appearance changes significantly, e.g. over time. A new dataset with the ground truth for evaluation of matching algorithms has been introduced and will be made public. We have extensively tested a large set of popular and recent detectors and descriptors and show that the combination of RootSIFT and HalfRootSIFT as descriptors with MSER and Hessian-Affine detectors works best for many different nuisance factors. We show that simple adaptive thresholding improves Hessian-Affine, DoG, MSER (and possibly other) detectors and allows them to be used on infrared and low contrast images. A novel matching algorithm for addressing the WxBS problem has been introduced. We have shown experimentally that the WxBS-M matcher dominates the state-of-the-art methods on both the new and existing datasets.
new_dataset
0.958226
1505.02982
Baoguang Shi
Baoguang Shi, Cong Yao, Chengquan Zhang, Xiaowei Guo, Feiyue Huang, Xiang Bai
Automatic Script Identification in the Wild
5 pages, 7 figures, submitted to ICDAR 2015
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid increase of transnational communication and cooperation, people frequently encounter multilingual scenarios in various situations. In this paper, we are concerned with a relatively new problem: script identification at word or line levels in natural scenes. A large-scale dataset with a great quantity of natural images and 10 types of widely used languages is constructed and released. In response to the challenges in script identification in real-world scenarios, a deep learning based algorithm is proposed. The experiments on the proposed dataset demonstrate that our algorithm achieves superior performance, compared with conventional image classification methods, such as the original CNN architecture and LLC.
[ { "version": "v1", "created": "Tue, 12 May 2015 12:38:30 GMT" } ]
2015-05-13T00:00:00
[ [ "Shi", "Baoguang", "" ], [ "Yao", "Cong", "" ], [ "Zhang", "Chengquan", "" ], [ "Guo", "Xiaowei", "" ], [ "Huang", "Feiyue", "" ], [ "Bai", "Xiang", "" ] ]
TITLE: Automatic Script Identification in the Wild ABSTRACT: With the rapid increase of transnational communication and cooperation, people frequently encounter multilingual scenarios in various situations. In this paper, we are concerned with a relatively new problem: script identification at word or line levels in natural scenes. A large-scale dataset with a great quantity of natural images and 10 types of widely used languages is constructed and released. In response to the challenges in script identification in real-world scenarios, a deep learning based algorithm is proposed. The experiments on the proposed dataset demonstrate that our algorithm achieves superior performance, compared with conventional image classification methods, such as the original CNN architecture and LLC.
new_dataset
0.955899
1505.03008
Lovro \v{S}ubelj
Dalibor Fiala, Lovro \v{S}ubelj, Slavko \v{Z}itnik, Marko Bajec
Do PageRank-based author rankings outperform simple citation counts?
28 pages, 5 figures, 6 tables
J. Infometr. 9(2), 334-348 (2015)
10.1016/j.joi.2015.02.008
null
cs.DL cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The basic indicators of a researcher's productivity and impact are still the number of publications and their citation counts. These metrics are clear, straightforward, and easy to obtain. When a ranking of scholars is needed, for instance in grant, award, or promotion procedures, their use is the fastest and cheapest way of prioritizing some scientists over others. However, due to their nature, there is a danger of oversimplifying scientific achievements. Therefore, many other indicators have been proposed including the usage of the PageRank algorithm known for the ranking of webpages and its modifications suited to citation networks. Nevertheless, this recursive method is computationally expensive and even if it has the advantage of favouring prestige over popularity, its application should be well justified, particularly when compared to the standard citation counts. In this study, we analyze three large datasets of computer science papers in the categories of artificial intelligence, software engineering, and theory and methods and apply 12 different ranking methods to the citation networks of authors. We compare the resulting rankings with self-compiled lists of outstanding researchers selected as frequent editorial board members of prestigious journals in the field and conclude that there is no evidence of PageRank-based methods outperforming simple citation counts.
[ { "version": "v1", "created": "Tue, 12 May 2015 13:59:30 GMT" } ]
2015-05-13T00:00:00
[ [ "Fiala", "Dalibor", "" ], [ "Šubelj", "Lovro", "" ], [ "Žitnik", "Slavko", "" ], [ "Bajec", "Marko", "" ] ]
TITLE: Do PageRank-based author rankings outperform simple citation counts? ABSTRACT: The basic indicators of a researcher's productivity and impact are still the number of publications and their citation counts. These metrics are clear, straightforward, and easy to obtain. When a ranking of scholars is needed, for instance in grant, award, or promotion procedures, their use is the fastest and cheapest way of prioritizing some scientists over others. However, due to their nature, there is a danger of oversimplifying scientific achievements. Therefore, many other indicators have been proposed including the usage of the PageRank algorithm known for the ranking of webpages and its modifications suited to citation networks. Nevertheless, this recursive method is computationally expensive and even if it has the advantage of favouring prestige over popularity, its application should be well justified, particularly when compared to the standard citation counts. In this study, we analyze three large datasets of computer science papers in the categories of artificial intelligence, software engineering, and theory and methods and apply 12 different ranking methods to the citation networks of authors. We compare the resulting rankings with self-compiled lists of outstanding researchers selected as frequent editorial board members of prestigious journals in the field and conclude that there is no evidence of PageRank-based methods outperforming simple citation counts.
no_new_dataset
0.941277
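As an illustrative aside to the record above (not part of the dataset), a minimal sketch of the comparison it describes, PageRank versus raw citation counts on an author citation network, can be put together with networkx; the toy graph, author names, and damping factor below are hypothetical.

import networkx as nx

# Edge (a, b) means "author a cites author b"; names and edges are made up.
edges = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
         ("carol", "dave"), ("dave", "carol"), ("eve", "carol")]
G = nx.DiGraph(edges)

citation_counts = dict(G.in_degree())           # simple citation counts
pagerank_scores = nx.pagerank(G, alpha=0.85)    # recursive, prestige-weighted

by_citations = sorted(citation_counts, key=citation_counts.get, reverse=True)
by_pagerank = sorted(pagerank_scores, key=pagerank_scores.get, reverse=True)
print("ranking by citation counts:", by_citations)
print("ranking by PageRank:       ", by_pagerank)
# Agreement between the two rankings could then be quantified with a rank
# correlation such as Kendall's tau, as is common in bibliometric comparisons.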
1505.03093
Jose Luis Alves
Manuel Pinheiro and J.L. Alves
A new Level-set based Protocol for Accurate Bone Segmentation from CT Imaging
11 pages, 24 figures
null
null
null
physics.med-ph cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, a medical image segmentation pipeline for accurate bone segmentation from CT imaging is proposed. It is a two-step methodology, with a pre-segmentation step and a segmentation refinement step. First, the user performs a rough segmentation of the desired region of interest. Next, a fully automatic refinement step is applied to the pre-segmented data. The automatic segmentation refinement is composed of several sub-steps, namely image deconvolution, image cropping, and interpolation. The user-defined pre-segmentation is then refined over the deconvolved, cropped, and up-sampled version of the image. The algorithm is applied to the segmentation of CT images of a composite femur bone, reconstructed with different reconstruction protocols. Segmentation outcomes are validated against a gold-standard model obtained with a coordinate measuring machine Nikon Metris LK V20 with a digital line scanner LC60-D that guarantees an accuracy of 28 $\mu m$. High sub-pixel-accuracy models were obtained for all tested datasets. The algorithm is able to produce high-quality segmentations of the composite femur regardless of the surface meshing strategy used.
[ { "version": "v1", "created": "Tue, 12 May 2015 17:33:44 GMT" } ]
2015-05-13T00:00:00
[ [ "Pinheiro", "Manuel", "" ], [ "Alves", "J. L.", "" ] ]
TITLE: A new Level-set based Protocol for Accurate Bone Segmentation from CT Imaging ABSTRACT: In this work, a medical image segmentation pipeline for accurate bone segmentation from CT imaging is proposed. It is a two-step methodology, with a pre-segmentation step and a segmentation refinement step. First, the user performs a rough segmentation of the desired region of interest. Next, a fully automatic refinement step is applied to the pre-segmented data. The automatic segmentation refinement is composed of several sub-steps, namely image deconvolution, image cropping, and interpolation. The user-defined pre-segmentation is then refined over the deconvolved, cropped, and up-sampled version of the image. The algorithm is applied to the segmentation of CT images of a composite femur bone, reconstructed with different reconstruction protocols. Segmentation outcomes are validated against a gold-standard model obtained with a coordinate measuring machine Nikon Metris LK V20 with a digital line scanner LC60-D that guarantees an accuracy of 28 $\mu m$. High sub-pixel-accuracy models were obtained for all tested datasets. The algorithm is able to produce high-quality segmentations of the composite femur regardless of the surface meshing strategy used.
no_new_dataset
0.949106
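As an illustrative aside to the record above (not part of the dataset), a minimal sketch of a crop / deconvolve / up-sample / re-threshold refinement of a rough pre-segmentation is given below; the Gaussian PSF, Wiener-style FFT deconvolution, zoom factor, and threshold are stand-ins chosen for the sketch, not the paper's actual level-set protocol.

import numpy as np
from scipy import ndimage

def refine(ct_slice, rough_mask, sigma_psf=1.0, zoom=4, k=0.01, thr=0.5):
    # 1) crop to the bounding box of the rough, user-defined pre-segmentation
    rows, cols = np.nonzero(rough_mask)
    crop = ct_slice[rows.min():rows.max() + 1, cols.min():cols.max() + 1].astype(float)

    # 2) Wiener-style FFT deconvolution with an assumed Gaussian PSF
    psf = np.zeros_like(crop)
    psf[0, 0] = 1.0
    psf = ndimage.gaussian_filter(psf, sigma_psf, mode="wrap")   # PSF centred at the origin
    H = np.fft.fft2(psf)
    deconv = np.real(np.fft.ifft2(np.fft.fft2(crop) * np.conj(H) / (np.abs(H) ** 2 + k)))

    # 3) up-sample by cubic interpolation, then 4) re-threshold the result
    up = ndimage.zoom(deconv, zoom, order=3)
    up = (up - up.min()) / (up.max() - up.min() + 1e-12)
    return up > thr                                              # refined binary mask

ct = np.random.rand(64, 64)                                      # stand-in CT slice
rough = np.zeros((64, 64), dtype=bool)
rough[20:40, 20:40] = True                                       # rough pre-segmentation
refined = refine(ct, rough)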
1408.6221
Amir Gholami
Amir Gholami, Andreas Mang, George Biros
An inverse problem formulation for parameter estimation of a reaction diffusion model of low grade gliomas
J. Mat. Bio. (2015)
null
10.1007/s00285-015-0888-x
null
math.NA physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a numerical scheme for solving a parameter estimation problem for a model of low-grade glioma growth. Our goal is to estimate the spatial distribution of tumor concentration, as well as the magnitude of anisotropic tumor diffusion. We use a constrained optimization formulation with a reaction-diffusion model that results in a system of nonlinear partial differential equations (PDEs). In our formulation, we estimate the parameters using partially observed, noisy tumor concentration data at two different time instances, along with white matter fiber directions derived from diffusion tensor imaging (DTI). The optimization problem is solved with a Gauss-Newton reduced space algorithm. We present the formulation and outline the numerical algorithms for solving the resulting equations. We test the method using a synthetic dataset and compute the reconstruction error for different noise levels and detection thresholds for monofocal and multifocal test cases.
[ { "version": "v1", "created": "Tue, 26 Aug 2014 19:49:31 GMT" }, { "version": "v2", "created": "Wed, 27 Aug 2014 19:14:52 GMT" }, { "version": "v3", "created": "Mon, 11 May 2015 18:13:50 GMT" } ]
2015-05-12T00:00:00
[ [ "Gholami", "Amir", "" ], [ "Mang", "Andreas", "" ], [ "Biros", "George", "" ] ]
TITLE: An inverse problem formulation for parameter estimation of a reaction diffusion model of low grade gliomas ABSTRACT: We present a numerical scheme for solving a parameter estimation problem for a model of low-grade glioma growth. Our goal is to estimate the spatial distribution of tumor concentration, as well as the magnitude of anisotropic tumor diffusion. We use a constrained optimization formulation with a reaction-diffusion model that results in a system of nonlinear partial differential equations (PDEs). In our formulation, we estimate the parameters using partially observed, noisy tumor concentration data at two different time instances, along with white matter fiber directions derived from diffusion tensor imaging (DTI). The optimization problem is solved with a Gauss-Newton reduced space algorithm. We present the formulation and outline the numerical algorithms for solving the resulting equations. We test the method using a synthetic dataset and compute the reconstruction error for different noise levels and detection thresholds for monofocal and multifocal test cases.
no_new_dataset
0.947332
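As an illustrative aside to the record above (not part of the dataset), a minimal sketch of Gauss-Newton parameter estimation for a toy 1-D reaction-diffusion (Fisher-KPP) model is given below; the grid, time stepping, noise level, and true parameters are hypothetical, and the paper's actual problem is a 3-D, DTI-informed, PDE-constrained formulation solved with a reduced-space Gauss-Newton method.

import numpy as np

n, nt, dx, dt = 100, 200, 1.0, 0.1
x = np.arange(n) * dx
c0 = np.exp(-0.5 * ((x - 50.0) / 3.0) ** 2)        # initial tumour concentration

def forward(D, rho):
    """Explicit-Euler solve of c_t = D c_xx + rho c (1 - c) with periodic BCs."""
    c = c0.copy()
    for _ in range(nt):
        lap = (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx ** 2
        c = c + dt * (D * lap + rho * c * (1.0 - c))
    return c

rng = np.random.default_rng(0)
data = forward(0.5, 0.2) + 0.01 * rng.standard_normal(n)   # noisy "observations"

theta = np.array([0.2, 0.05])                      # initial guess for (D, rho)
h = 1e-4
for _ in range(10):
    r = forward(*theta) - data                     # residual at the current iterate
    J = np.column_stack([                          # finite-difference Jacobian
        (forward(theta[0] + h, theta[1]) - forward(*theta)) / h,
        (forward(theta[0], theta[1] + h) - forward(*theta)) / h])
    step = np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), -J.T @ r)
    theta = theta + 0.8 * step                     # damped Gauss-Newton update
print("estimated (D, rho):", theta)                # should approach the true (0.5, 0.2)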