id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1402.1389 | Yarin Gal | Yarin Gal, Mark van der Wilk, Carl E. Rasmussen | Distributed Variational Inference in Sparse Gaussian Process Regression
and Latent Variable Models | 9 pages, 8 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian processes (GPs) are a powerful tool for probabilistic inference over
functions. They have been applied to both regression and non-linear
dimensionality reduction, and offer desirable properties such as uncertainty
estimates, robustness to over-fitting, and principled ways for tuning
hyper-parameters. However the scalability of these models to big datasets
remains an active topic of research. We introduce a novel re-parametrisation of
variational inference for sparse GP regression and latent variable models that
allows for an efficient distributed algorithm. This is done by exploiting the
decoupling of the data given the inducing points to re-formulate the evidence
lower bound in a Map-Reduce setting. We show that the inference scales well
with data and computational resources, while preserving a balanced distribution
of the load among the nodes. We further demonstrate the utility in scaling
Gaussian processes to big data. We show that GP performance improves with
increasing amounts of data in regression (on flight data with 2 million
records) and latent variable modelling (on MNIST). The results show that GPs
perform better than many common models often used for big data.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2014 16:08:40 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Sep 2014 21:16:47 GMT"
}
] | 2014-10-01T00:00:00 | [
[
"Gal",
"Yarin",
""
],
[
"van der Wilk",
"Mark",
""
],
[
"Rasmussen",
"Carl E.",
""
]
] | TITLE: Distributed Variational Inference in Sparse Gaussian Process Regression
and Latent Variable Models
ABSTRACT: Gaussian processes (GPs) are a powerful tool for probabilistic inference over
functions. They have been applied to both regression and non-linear
dimensionality reduction, and offer desirable properties such as uncertainty
estimates, robustness to over-fitting, and principled ways for tuning
hyper-parameters. However the scalability of these models to big datasets
remains an active topic of research. We introduce a novel re-parametrisation of
variational inference for sparse GP regression and latent variable models that
allows for an efficient distributed algorithm. This is done by exploiting the
decoupling of the data given the inducing points to re-formulate the evidence
lower bound in a Map-Reduce setting. We show that the inference scales well
with data and computational resources, while preserving a balanced distribution
of the load among the nodes. We further demonstrate the utility in scaling
Gaussian processes to big data. We show that GP performance improves with
increasing amounts of data in regression (on flight data with 2 million
records) and latent variable modelling (on MNIST). The results show that GPs
perform better than many common models often used for big data.
| no_new_dataset | 0.945651 |
1402.1774 | Ali Makhdoumi | Ali Makhdoumi, Salman Salamatian, Nadia Fawaz, Muriel Medard | From the Information Bottleneck to the Privacy Funnel | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus on the privacy-utility trade-off encountered by users who wish to
disclose some information to an analyst, that is correlated with their private
data, in the hope of receiving some utility. We rely on a general privacy
statistical inference framework, under which data is transformed before it is
disclosed, according to a probabilistic privacy mapping. We show that when the
log-loss is introduced in this framework in both the privacy metric and the
distortion metric, the privacy leakage and the utility constraint can be
reduced to the mutual information between private data and disclosed data, and
between non-private data and disclosed data respectively. We justify the
relevance and generality of the privacy metric under the log-loss by proving
that the inference threat under any bounded cost function can be upper-bounded
by an explicit function of the mutual information between private data and
disclosed data. We then show that the privacy-utility tradeoff under the
log-loss can be cast as the non-convex Privacy Funnel optimization, and we
leverage its connection to the Information Bottleneck, to provide a greedy
algorithm that is locally optimal. We evaluate its performance on the US census
dataset.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2014 21:23:10 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Feb 2014 20:54:18 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Feb 2014 01:38:04 GMT"
},
{
"version": "v4",
"created": "Sun, 11 May 2014 21:27:18 GMT"
},
{
"version": "v5",
"created": "Tue, 30 Sep 2014 03:28:10 GMT"
}
] | 2014-10-01T00:00:00 | [
[
"Makhdoumi",
"Ali",
""
],
[
"Salamatian",
"Salman",
""
],
[
"Fawaz",
"Nadia",
""
],
[
"Medard",
"Muriel",
""
]
] | TITLE: From the Information Bottleneck to the Privacy Funnel
ABSTRACT: We focus on the privacy-utility trade-off encountered by users who wish to
disclose some information to an analyst, that is correlated with their private
data, in the hope of receiving some utility. We rely on a general privacy
statistical inference framework, under which data is transformed before it is
disclosed, according to a probabilistic privacy mapping. We show that when the
log-loss is introduced in this framework in both the privacy metric and the
distortion metric, the privacy leakage and the utility constraint can be
reduced to the mutual information between private data and disclosed data, and
between non-private data and disclosed data respectively. We justify the
relevance and generality of the privacy metric under the log-loss by proving
that the inference threat under any bounded cost function can be upper-bounded
by an explicit function of the mutual information between private data and
disclosed data. We then show that the privacy-utility tradeoff under the
log-loss can be cast as the non-convex Privacy Funnel optimization, and we
leverage its connection to the Information Bottleneck, to provide a greedy
algorithm that is locally optimal. We evaluate its performance on the US census
dataset.
| no_new_dataset | 0.943712 |
1402.5792 | Seyed Mostafa Kia | Seyed Mostafa Kia, Hossein Rahmani, Reza Mortezaei, Mohsen Ebrahimi
Moghaddam, Amer Namazi | A Novel Scheme for Intelligent Recognition of Pornographic Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Harmful contents are rising in internet day by day and this motivates the
essence of more research in fast and reliable obscene and immoral material
filtering. Pornographic image recognition is an important component in each
filtering system. In this paper, a new approach for detecting pornographic
images is introduced. In this approach, two new features are suggested. These
two features in combination with other simple traditional features provide
decent difference between porn and non-porn images. In addition, we applied
fuzzy integral based information fusion to combine MLP (Multi-Layer Perceptron)
and NF (Neuro-Fuzzy) outputs. To test the proposed method, performance of
system was evaluated over 18354 download images from internet. The attained
precision was 93% in TP and 8% in FP on training dataset, and 87% and 5.5% on
test dataset. Achieved results verify the performance of proposed system versus
other related works.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2014 11:15:04 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jun 2014 14:04:13 GMT"
},
{
"version": "v3",
"created": "Mon, 29 Sep 2014 22:15:26 GMT"
}
] | 2014-10-01T00:00:00 | [
[
"Kia",
"Seyed Mostafa",
""
],
[
"Rahmani",
"Hossein",
""
],
[
"Mortezaei",
"Reza",
""
],
[
"Moghaddam",
"Mohsen Ebrahimi",
""
],
[
"Namazi",
"Amer",
""
]
] | TITLE: A Novel Scheme for Intelligent Recognition of Pornographic Images
ABSTRACT: Harmful contents are rising in internet day by day and this motivates the
essence of more research in fast and reliable obscene and immoral material
filtering. Pornographic image recognition is an important component in each
filtering system. In this paper, a new approach for detecting pornographic
images is introduced. In this approach, two new features are suggested. These
two features in combination with other simple traditional features provide
decent difference between porn and non-porn images. In addition, we applied
fuzzy integral based information fusion to combine MLP (Multi-Layer Perceptron)
and NF (Neuro-Fuzzy) outputs. To test the proposed method, performance of
system was evaluated over 18354 download images from internet. The attained
precision was 93% in TP and 8% in FP on training dataset, and 87% and 5.5% on
test dataset. Achieved results verify the performance of proposed system versus
other related works.
| no_new_dataset | 0.952838 |
1409.7311 | Matthijs van Leeuwen | Matthijs van Leeuwen and Antti Ukkonen | Estimating the pattern frequency spectrum inside the browser | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a browser application for estimating the number of frequent
patterns, in particular itemsets, as well as the pattern frequency spectrum.
The pattern frequency spectrum is defined as the function that shows for every
value of the frequency threshold $\sigma$ the number of patterns that are
frequent in a given dataset. Our demo implements a recent algorithm proposed by
the authors for finding the spectrum. The demo is 100% JavaScript, and runs in
all modern browsers. We observe that modern JavaScript engines can deliver
performance that makes it viable to run non-trivial data analysis algorithms in
browser applications.
| [
{
"version": "v1",
"created": "Thu, 25 Sep 2014 16:08:14 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Sep 2014 15:22:51 GMT"
}
] | 2014-10-01T00:00:00 | [
[
"van Leeuwen",
"Matthijs",
""
],
[
"Ukkonen",
"Antti",
""
]
] | TITLE: Estimating the pattern frequency spectrum inside the browser
ABSTRACT: We present a browser application for estimating the number of frequent
patterns, in particular itemsets, as well as the pattern frequency spectrum.
The pattern frequency spectrum is defined as the function that shows for every
value of the frequency threshold $\sigma$ the number of patterns that are
frequent in a given dataset. Our demo implements a recent algorithm proposed by
the authors for finding the spectrum. The demo is 100% JavaScript, and runs in
all modern browsers. We observe that modern JavaScript engines can deliver
performance that makes it viable to run non-trivial data analysis algorithms in
browser applications.
| no_new_dataset | 0.942771 |
1409.8276 | Beyza Ermis Ms | Beyza Ermis, A. Taylan Cemgil | A Bayesian Tensor Factorization Model via Variational Inference for Link
Prediction | arXiv admin note: substantial text overlap with arXiv:1409.8083 | null | null | null | cs.LG cs.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic approaches for tensor factorization aim to extract meaningful
structure from incomplete data by postulating low rank constraints. Recently,
variational Bayesian (VB) inference techniques have successfully been applied
to large scale models. This paper presents full Bayesian inference via VB on
both single and coupled tensor factorization models. Our method can be run even
for very large models and is easily implemented. It exhibits better prediction
performance than existing approaches based on maximum likelihood on several
real-world datasets for missing link prediction problem.
| [
{
"version": "v1",
"created": "Mon, 29 Sep 2014 12:29:21 GMT"
}
] | 2014-10-01T00:00:00 | [
[
"Ermis",
"Beyza",
""
],
[
"Cemgil",
"A. Taylan",
""
]
] | TITLE: A Bayesian Tensor Factorization Model via Variational Inference for Link
Prediction
ABSTRACT: Probabilistic approaches for tensor factorization aim to extract meaningful
structure from incomplete data by postulating low rank constraints. Recently,
variational Bayesian (VB) inference techniques have successfully been applied
to large scale models. This paper presents full Bayesian inference via VB on
both single and coupled tensor factorization models. Our method can be run even
for very large models and is easily implemented. It exhibits better prediction
performance than existing approaches based on maximum likelihood on several
real-world datasets for missing link prediction problem.
| no_new_dataset | 0.950595 |
1402.6926 | Peter Foster | Peter Foster, Matthias Mauch and Simon Dixon | Sequential Complexity as a Descriptor for Musical Similarity | 13 pages, 9 figures, 8 tables. Accepted version | IEEE/ACM Transactions on Audio, Speech, and Language Processing,
vol. 22 no. 12, pp. 1965-1977, 2014 | 10.1109/TASLP.2014.2357676 | null | cs.IR cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose string compressibility as a descriptor of temporal structure in
audio, for the purpose of determining musical similarity. Our descriptors are
based on computing track-wise compression rates of quantised audio features,
using multiple temporal resolutions and quantisation granularities. To verify
that our descriptors capture musically relevant information, we incorporate our
descriptors into similarity rating prediction and song year prediction tasks.
We base our evaluation on a dataset of 15500 track excerpts of Western popular
music, for which we obtain 7800 web-sourced pairwise similarity ratings. To
assess the agreement among similarity ratings, we perform an evaluation under
controlled conditions, obtaining a rank correlation of 0.33 between intersected
sets of ratings. Combined with bag-of-features descriptors, we obtain
performance gains of 31.1% and 10.9% for similarity rating prediction and song
year prediction. For both tasks, analysis of selected descriptors reveals that
representing features at multiple time scales benefits prediction accuracy.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2014 14:51:48 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Feb 2014 15:14:37 GMT"
},
{
"version": "v3",
"created": "Sun, 28 Sep 2014 23:33:44 GMT"
}
] | 2014-09-30T00:00:00 | [
[
"Foster",
"Peter",
""
],
[
"Mauch",
"Matthias",
""
],
[
"Dixon",
"Simon",
""
]
] | TITLE: Sequential Complexity as a Descriptor for Musical Similarity
ABSTRACT: We propose string compressibility as a descriptor of temporal structure in
audio, for the purpose of determining musical similarity. Our descriptors are
based on computing track-wise compression rates of quantised audio features,
using multiple temporal resolutions and quantisation granularities. To verify
that our descriptors capture musically relevant information, we incorporate our
descriptors into similarity rating prediction and song year prediction tasks.
We base our evaluation on a dataset of 15500 track excerpts of Western popular
music, for which we obtain 7800 web-sourced pairwise similarity ratings. To
assess the agreement among similarity ratings, we perform an evaluation under
controlled conditions, obtaining a rank correlation of 0.33 between intersected
sets of ratings. Combined with bag-of-features descriptors, we obtain
performance gains of 31.1% and 10.9% for similarity rating prediction and song
year prediction. For both tasks, analysis of selected descriptors reveals that
representing features at multiple time scales benefits prediction accuracy.
| new_dataset | 0.86799 |
1409.1458 | Martin Jaggi | Martin Jaggi, Virginia Smith, Martin Tak\'a\v{c}, Jonathan Terhorst,
Sanjay Krishnan, Thomas Hofmann, Michael I. Jordan | Communication-Efficient Distributed Dual Coordinate Ascent | NIPS 2014 version, including proofs. Published in Advances in Neural
Information Processing Systems 27 (NIPS 2014) | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Communication remains the most significant bottleneck in the performance of
distributed optimization algorithms for large-scale machine learning. In this
paper, we propose a communication-efficient framework, CoCoA, that uses local
computation in a primal-dual setting to dramatically reduce the amount of
necessary communication. We provide a strong convergence rate analysis for this
class of algorithms, as well as experiments on real-world distributed datasets
with implementations in Spark. In our experiments, we find that as compared to
state-of-the-art mini-batch versions of SGD and SDCA algorithms, CoCoA
converges to the same .001-accurate solution quality on average 25x as quickly.
| [
{
"version": "v1",
"created": "Thu, 4 Sep 2014 14:59:35 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Sep 2014 16:07:32 GMT"
}
] | 2014-09-30T00:00:00 | [
[
"Jaggi",
"Martin",
""
],
[
"Smith",
"Virginia",
""
],
[
"Takáč",
"Martin",
""
],
[
"Terhorst",
"Jonathan",
""
],
[
"Krishnan",
"Sanjay",
""
],
[
"Hofmann",
"Thomas",
""
],
[
"Jordan",
"Michael I.",
""
]
] | TITLE: Communication-Efficient Distributed Dual Coordinate Ascent
ABSTRACT: Communication remains the most significant bottleneck in the performance of
distributed optimization algorithms for large-scale machine learning. In this
paper, we propose a communication-efficient framework, CoCoA, that uses local
computation in a primal-dual setting to dramatically reduce the amount of
necessary communication. We provide a strong convergence rate analysis for this
class of algorithms, as well as experiments on real-world distributed datasets
with implementations in Spark. In our experiments, we find that as compared to
state-of-the-art mini-batch versions of SGD and SDCA algorithms, CoCoA
converges to the same .001-accurate solution quality on average 25x as quickly.
| no_new_dataset | 0.948965 |
1409.5443 | Vasileios Kolias | Vasilis Kolias, Ioannis Anagnostopoulos, Eleftherios Kayafas | Exploratory Analysis of a Terabyte Scale Web Corpus | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a preliminary analysis over the largest publicly
accessible web dataset: the Common Crawl Corpus. We measure nine web
characteristics from two levels of granularity using MapReduce and we comment
on the initial observations over a fraction of it. To the best of our knowledge
two of the characteristics, the language distribution and the HTML version of
pages have not been analyzed in previous work, while the specific dataset has
been only analyzed on page level.
| [
{
"version": "v1",
"created": "Thu, 18 Sep 2014 20:00:52 GMT"
},
{
"version": "v2",
"created": "Sat, 27 Sep 2014 08:23:24 GMT"
}
] | 2014-09-30T00:00:00 | [
[
"Kolias",
"Vasilis",
""
],
[
"Anagnostopoulos",
"Ioannis",
""
],
[
"Kayafas",
"Eleftherios",
""
]
] | TITLE: Exploratory Analysis of a Terabyte Scale Web Corpus
ABSTRACT: In this paper we present a preliminary analysis over the largest publicly
accessible web dataset: the Common Crawl Corpus. We measure nine web
characteristics from two levels of granularity using MapReduce and we comment
on the initial observations over a fraction of it. To the best of our knowledge
two of the characteristics, the language distribution and the HTML version of
pages have not been analyzed in previous work, while the specific dataset has
been only analyzed on page level.
| no_new_dataset | 0.938576 |
1409.7963 | Arjun Jain | Arjun Jain, Jonathan Tompson, Yann LeCun and Christoph Bregler | MoDeep: A Deep Learning Framework Using Motion Features for Human Pose
Estimation | null | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose a novel and efficient method for articulated human
pose estimation in videos using a convolutional network architecture, which
incorporates both color and motion features. We propose a new human body pose
dataset, FLIC-motion, that extends the FLIC dataset with additional motion
features. We apply our architecture to this dataset and report significantly
better performance than current state-of-the-art pose detection systems.
| [
{
"version": "v1",
"created": "Sun, 28 Sep 2014 21:32:15 GMT"
}
] | 2014-09-30T00:00:00 | [
[
"Jain",
"Arjun",
""
],
[
"Tompson",
"Jonathan",
""
],
[
"LeCun",
"Yann",
""
],
[
"Bregler",
"Christoph",
""
]
] | TITLE: MoDeep: A Deep Learning Framework Using Motion Features for Human Pose
Estimation
ABSTRACT: In this work, we propose a novel and efficient method for articulated human
pose estimation in videos using a convolutional network architecture, which
incorporates both color and motion features. We propose a new human body pose
dataset, FLIC-motion, that extends the FLIC dataset with additional motion
features. We apply our architecture to this dataset and report significantly
better performance than current state-of-the-art pose detection systems.
| new_dataset | 0.952618 |
1409.8028 | Christoph Fuchs | Daniel Raumer, Christoph Fuchs, Georg Groh | Reaching Consensus Among Mobile Agents: A Distributed Protocol for the
Detection of Social Situations | 16 pages, 4 figures, 1 table | null | null | null | cs.SI cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Physical social encounters are governed by a set of socio-psychological
behavioral rules with a high degree of uniform validity. Past research has
shown how these rules or the resulting properties of the encounters (e.g. the
geometry of interaction) can be used for algorithmic detection of social
interaction. In this paper, we present a distributed protocol to gain a common
understanding of the existing social situations among agents.
Our approach allows a group of agents to combine their subjective assessment
of an ongoing social situation. Based on perceived social cues obtained from
raw data signals, they reach a consensus about the existence, parameters, and
participants of a social situation. We evaluate our protocol using two
real-world datasets with social interaction information and additional
synthetic data generated by our social-aware mobility model.
| [
{
"version": "v1",
"created": "Mon, 29 Sep 2014 08:45:51 GMT"
}
] | 2014-09-30T00:00:00 | [
[
"Raumer",
"Daniel",
""
],
[
"Fuchs",
"Christoph",
""
],
[
"Groh",
"Georg",
""
]
] | TITLE: Reaching Consensus Among Mobile Agents: A Distributed Protocol for the
Detection of Social Situations
ABSTRACT: Physical social encounters are governed by a set of socio-psychological
behavioral rules with a high degree of uniform validity. Past research has
shown how these rules or the resulting properties of the encounters (e.g. the
geometry of interaction) can be used for algorithmic detection of social
interaction. In this paper, we present a distributed protocol to gain a common
understanding of the existing social situations among agents.
Our approach allows a group of agents to combine their subjective assessment
of an ongoing social situation. Based on perceived social cues obtained from
raw data signals, they reach a consensus about the existence, parameters, and
participants of a social situation. We evaluate our protocol using two
real-world datasets with social interaction information and additional
synthetic data generated by our social-aware mobility model.
| no_new_dataset | 0.95018 |
1409.8152 | Yelena Mejova | Yelena Mejova, Amy X. Zhang, Nicholas Diakopoulos, Carlos Castillo | Controversy and Sentiment in Online News | Computation+Journalism Symposium 2014 | null | null | null | cs.CY cs.CL | http://creativecommons.org/licenses/by/3.0/ | How do news sources tackle controversial issues? In this work, we take a
data-driven approach to understand how controversy interplays with emotional
expression and biased language in the news. We begin by introducing a new
dataset of controversial and non-controversial terms collected using
crowdsourcing. Then, focusing on 15 major U.S. news outlets, we compare
millions of articles discussing controversial and non-controversial issues over
a span of 7 months. We find that in general, when it comes to controversial
issues, the use of negative affect and biased language is prevalent, while the
use of strong emotion is tempered. We also observe many differences across news
sources. Using these findings, we show that we can indicate to what extent an
issue is controversial, by comparing it with other issues in terms of how they
are portrayed across different media.
| [
{
"version": "v1",
"created": "Mon, 29 Sep 2014 15:23:50 GMT"
}
] | 2014-09-30T00:00:00 | [
[
"Mejova",
"Yelena",
""
],
[
"Zhang",
"Amy X.",
""
],
[
"Diakopoulos",
"Nicholas",
""
],
[
"Castillo",
"Carlos",
""
]
] | TITLE: Controversy and Sentiment in Online News
ABSTRACT: How do news sources tackle controversial issues? In this work, we take a
data-driven approach to understand how controversy interplays with emotional
expression and biased language in the news. We begin by introducing a new
dataset of controversial and non-controversial terms collected using
crowdsourcing. Then, focusing on 15 major U.S. news outlets, we compare
millions of articles discussing controversial and non-controversial issues over
a span of 7 months. We find that in general, when it comes to controversial
issues, the use of negative affect and biased language is prevalent, while the
use of strong emotion is tempered. We also observe many differences across news
sources. Using these findings, we show that we can indicate to what extent an
issue is controversial, by comparing it with other issues in terms of how they
are portrayed across different media.
| new_dataset | 0.953405 |
1409.8191 | Djallel Bouneffouf | Robin Allesiardo, Raphael Feraud and Djallel Bouneffouf | A Neural Networks Committee for the Contextual Bandit Problem | 21st International Conference on Neural Information Processing | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new contextual bandit algorithm, NeuralBandit, which
does not need hypothesis on stationarity of contexts and rewards. Several
neural networks are trained to modelize the value of rewards knowing the
context. Two variants, based on multi-experts approach, are proposed to choose
online the parameters of multi-layer perceptrons. The proposed algorithms are
successfully tested on a large dataset with and without stationarity of
rewards.
| [
{
"version": "v1",
"created": "Mon, 29 Sep 2014 17:08:21 GMT"
}
] | 2014-09-30T00:00:00 | [
[
"Allesiardo",
"Robin",
""
],
[
"Feraud",
"Raphael",
""
],
[
"Bouneffouf",
"Djallel",
""
]
] | TITLE: A Neural Networks Committee for the Contextual Bandit Problem
ABSTRACT: This paper presents a new contextual bandit algorithm, NeuralBandit, which
does not need hypothesis on stationarity of contexts and rewards. Several
neural networks are trained to modelize the value of rewards knowing the
context. Two variants, based on multi-experts approach, are proposed to choose
online the parameters of multi-layer perceptrons. The proposed algorithms are
successfully tested on a large dataset with and without stationarity of
rewards.
| no_new_dataset | 0.944638 |
1409.8202 | Matteo De Felice | Matteo De Felice, Marcello Petitta, Paolo M. Ruti | Short-Term Predictability of Photovoltaic Production over Italy | Submitted to Renewable Energy | null | null | null | cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Photovoltaic (PV) power production increased drastically in Europe throughout
the last years. About the 6% of electricity in Italy comes from PV and for an
efficient management of the power grid an accurate and reliable forecasting of
production would be needed. Starting from a dataset of electricity production
of 65 Italian solar plants for the years 2011-2012 we investigate the
possibility to forecast daily production from one to ten days of lead time
without using on site measurements. Our study is divided in two parts: an
assessment of the predictability of meteorological variables using weather
forecasts and an analysis on the application of data-driven modelling in
predicting solar power production. We calibrate a SVM model using available
observations and then we force the same model with the predicted variables from
weather forecasts with a lead time from one to ten days. As expected, solar
power production is strongly influenced by cloudiness and clear sky, in fact we
observe that while during summer we obtain a general error under the 10%
(slightly lower in south Italy), during winter the error is abundantly above
the 20%.
| [
{
"version": "v1",
"created": "Mon, 29 Sep 2014 17:24:29 GMT"
}
] | 2014-09-30T00:00:00 | [
[
"De Felice",
"Matteo",
""
],
[
"Petitta",
"Marcello",
""
],
[
"Ruti",
"Paolo M.",
""
]
] | TITLE: Short-Term Predictability of Photovoltaic Production over Italy
ABSTRACT: Photovoltaic (PV) power production increased drastically in Europe throughout
the last years. About the 6% of electricity in Italy comes from PV and for an
efficient management of the power grid an accurate and reliable forecasting of
production would be needed. Starting from a dataset of electricity production
of 65 Italian solar plants for the years 2011-2012 we investigate the
possibility to forecast daily production from one to ten days of lead time
without using on site measurements. Our study is divided in two parts: an
assessment of the predictability of meteorological variables using weather
forecasts and an analysis on the application of data-driven modelling in
predicting solar power production. We calibrate a SVM model using available
observations and then we force the same model with the predicted variables from
weather forecasts with a lead time from one to ten days. As expected, solar
power production is strongly influenced by cloudiness and clear sky, in fact we
observe that while during summer we obtain a general error under the 10%
(slightly lower in south Italy), during winter the error is abundantly above
the 20%.
| no_new_dataset | 0.940953 |
1405.5737 | Arif Mahmood | Arif Mahmood and Ajmal S. Mian | Semi-supervised Spectral Clustering for Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a Classification Via Clustering (CVC) algorithm which enables
existing clustering methods to be efficiently employed in classification
problems. In CVC, training and test data are co-clustered and class-cluster
distributions are used to find the label of the test data. To determine an
efficient number of clusters, a Semi-supervised Hierarchical Clustering (SHC)
algorithm is proposed. Clusters are obtained by hierarchically applying two-way
NCut by using signs of the Fiedler vector of the normalized graph Laplacian. To
this end, a Direct Fiedler Vector Computation algorithm is proposed. The graph
cut is based on the data structure and does not consider labels. Labels are
used only to define the stopping criterion for graph cut. We propose clustering
to be performed on the Grassmannian manifolds facilitating the formation of
spectral ensembles. The proposed algorithm outperformed state-of-the-art
image-set classification algorithms on five standard datasets.
| [
{
"version": "v1",
"created": "Thu, 22 May 2014 13:05:27 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Sep 2014 04:01:41 GMT"
}
] | 2014-09-29T00:00:00 | [
[
"Mahmood",
"Arif",
""
],
[
"Mian",
"Ajmal S.",
""
]
] | TITLE: Semi-supervised Spectral Clustering for Classification
ABSTRACT: We propose a Classification Via Clustering (CVC) algorithm which enables
existing clustering methods to be efficiently employed in classification
problems. In CVC, training and test data are co-clustered and class-cluster
distributions are used to find the label of the test data. To determine an
efficient number of clusters, a Semi-supervised Hierarchical Clustering (SHC)
algorithm is proposed. Clusters are obtained by hierarchically applying two-way
NCut by using signs of the Fiedler vector of the normalized graph Laplacian. To
this end, a Direct Fiedler Vector Computation algorithm is proposed. The graph
cut is based on the data structure and does not consider labels. Labels are
used only to define the stopping criterion for graph cut. We propose clustering
to be performed on the Grassmannian manifolds facilitating the formation of
spectral ensembles. The proposed algorithm outperformed state-of-the-art
image-set classification algorithms on five standard datasets.
| no_new_dataset | 0.951142 |
1409.7458 | Jiantao Jiao | Jiantao Jiao, Kartik Venkat, Yanjun Han, Tsachy Weissman | Beyond Maximum Likelihood: from Theory to Practice | null | null | null | null | stat.ME cs.DS cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maximum likelihood is the most widely used statistical estimation technique.
Recent work by the authors introduced a general methodology for the
construction of estimators for functionals in parametric models, and
demonstrated improvements - both in theory and in practice - over the maximum
likelihood estimator (MLE), particularly in high dimensional scenarios
involving parameter dimension comparable to or larger than the number of
samples. This approach to estimation, building on results from approximation
theory, is shown to yield minimax rate-optimal estimators for a wide class of
functionals, implementable with modest computational requirements. In a
nutshell, a message of this recent work is that, for a wide class of
functionals, the performance of these essentially optimal estimators with $n$
samples is comparable to that of the MLE with $n \ln n$ samples.
In the present paper, we highlight the applicability of the aforementioned
methodology to statistical problems beyond functional estimation, and show that
it can yield substantial gains. For example, we demonstrate that for learning
tree-structured graphical models, our approach achieves a significant reduction
of the required data size compared with the classical Chow--Liu algorithm,
which is an implementation of the MLE, to achieve the same accuracy. The key
step in improving the Chow--Liu algorithm is to replace the empirical mutual
information with the estimator for mutual information proposed by the authors.
Further, applying the same replacement approach to classical Bayesian network
classification, the resulting classifiers uniformly outperform the previous
classifiers on 26 widely used datasets.
| [
{
"version": "v1",
"created": "Fri, 26 Sep 2014 01:45:34 GMT"
}
] | 2014-09-29T00:00:00 | [
[
"Jiao",
"Jiantao",
""
],
[
"Venkat",
"Kartik",
""
],
[
"Han",
"Yanjun",
""
],
[
"Weissman",
"Tsachy",
""
]
] | TITLE: Beyond Maximum Likelihood: from Theory to Practice
ABSTRACT: Maximum likelihood is the most widely used statistical estimation technique.
Recent work by the authors introduced a general methodology for the
construction of estimators for functionals in parametric models, and
demonstrated improvements - both in theory and in practice - over the maximum
likelihood estimator (MLE), particularly in high dimensional scenarios
involving parameter dimension comparable to or larger than the number of
samples. This approach to estimation, building on results from approximation
theory, is shown to yield minimax rate-optimal estimators for a wide class of
functionals, implementable with modest computational requirements. In a
nutshell, a message of this recent work is that, for a wide class of
functionals, the performance of these essentially optimal estimators with $n$
samples is comparable to that of the MLE with $n \ln n$ samples.
In the present paper, we highlight the applicability of the aforementioned
methodology to statistical problems beyond functional estimation, and show that
it can yield substantial gains. For example, we demonstrate that for learning
tree-structured graphical models, our approach achieves a significant reduction
of the required data size compared with the classical Chow--Liu algorithm,
which is an implementation of the MLE, to achieve the same accuracy. The key
step in improving the Chow--Liu algorithm is to replace the empirical mutual
information with the estimator for mutual information proposed by the authors.
Further, applying the same replacement approach to classical Bayesian network
classification, the resulting classifiers uniformly outperform the previous
classifiers on 26 widely used datasets.
| no_new_dataset | 0.9462 |
1409.6805 | Siting Ren | Siting Ren, Sheng Gao | Improving Cross-domain Recommendation through Probabilistic
Cluster-level Latent Factor Model--Extended Version | null | null | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-domain recommendation has been proposed to transfer user behavior
pattern by pooling together the rating data from multiple domains to alleviate
the sparsity problem appearing in single rating domains. However, previous
models only assume that multiple domains share a latent common rating pattern
based on the user-item co-clustering. To capture diversities among different
domains, we propose a novel Probabilistic Cluster-level Latent Factor (PCLF)
model to improve the cross-domain recommendation performance. Experiments on
several real world datasets demonstrate that our proposed model outperforms the
state-of-the-art methods for the cross-domain recommendation task.
| [
{
"version": "v1",
"created": "Wed, 24 Sep 2014 02:55:31 GMT"
}
] | 2014-09-26T00:00:00 | [
[
"Ren",
"Siting",
""
],
[
"Gao",
"Sheng",
""
]
] | TITLE: Improving Cross-domain Recommendation through Probabilistic
Cluster-level Latent Factor Model--Extended Version
ABSTRACT: Cross-domain recommendation has been proposed to transfer user behavior
pattern by pooling together the rating data from multiple domains to alleviate
the sparsity problem appearing in single rating domains. However, previous
models only assume that multiple domains share a latent common rating pattern
based on the user-item co-clustering. To capture diversities among different
domains, we propose a novel Probabilistic Cluster-level Latent Factor (PCLF)
model to improve the cross-domain recommendation performance. Experiments on
several real world datasets demonstrate that our proposed model outperforms the
state-of-the-art methods for the cross-domain recommendation task.
| no_new_dataset | 0.951142 |
1409.7307 | YuGei Gan | Yufei Gan, Tong Zhuo, Chu He | Image Classification with A Deep Network Model based on Compressive
Sensing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To simplify the parameter of the deep learning network, a cascaded
compressive sensing model "CSNet" is implemented for image classification.
Firstly, we use cascaded compressive sensing network to learn feature from the
data. Secondly, CSNet generates the feature by binary hashing and block-wise
histograms. Finally, a linear SVM classifier is used to classify these
features. The experiments on the MNIST dataset indicate that higher
classification accuracy can be obtained by this algorithm.
| [
{
"version": "v1",
"created": "Thu, 25 Sep 2014 15:52:05 GMT"
}
] | 2014-09-26T00:00:00 | [
[
"Gan",
"Yufei",
""
],
[
"Zhuo",
"Tong",
""
],
[
"He",
"Chu",
""
]
] | TITLE: Image Classification with A Deep Network Model based on Compressive
Sensing
ABSTRACT: To simplify the parameter of the deep learning network, a cascaded
compressive sensing model "CSNet" is implemented for image classification.
Firstly, we use cascaded compressive sensing network to learn feature from the
data. Secondly, CSNet generates the feature by binary hashing and block-wise
histograms. Finally, a linear SVM classifier is used to classify these
features. The experiments on the MNIST dataset indicate that higher
classification accuracy can be obtained by this algorithm.
| no_new_dataset | 0.951188 |
1409.7313 | YuGei Gan | Yufei Gan, Teng Yang, Chu He | A Deep Graph Embedding Network Model for Face Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new deep learning network "GENet", it combines
the multi-layer network architec- ture and graph embedding framework. Firstly,
we use simplest unsupervised learning PCA/LDA as first layer to generate the
low- level feature. Secondly, many cascaded dimensionality reduction layers
based on graph embedding framework are applied to GENet. Finally, a linear SVM
classifier is used to classify dimension-reduced features. The experiments
indicate that higher classification accuracy can be obtained by this algorithm
on the CMU-PIE, ORL, Extended Yale B dataset.
| [
{
"version": "v1",
"created": "Thu, 25 Sep 2014 16:14:18 GMT"
}
] | 2014-09-26T00:00:00 | [
[
"Gan",
"Yufei",
""
],
[
"Yang",
"Teng",
""
],
[
"He",
"Chu",
""
]
] | TITLE: A Deep Graph Embedding Network Model for Face Recognition
ABSTRACT: In this paper, we propose a new deep learning network "GENet", it combines
the multi-layer network architec- ture and graph embedding framework. Firstly,
we use simplest unsupervised learning PCA/LDA as first layer to generate the
low- level feature. Secondly, many cascaded dimensionality reduction layers
based on graph embedding framework are applied to GENet. Finally, a linear SVM
classifier is used to classify dimension-reduced features. The experiments
indicate that higher classification accuracy can be obtained by this algorithm
on the CMU-PIE, ORL, Extended Yale B dataset.
| no_new_dataset | 0.948917 |
1307.7220 | Anmer Daskin | Anmer Daskin, Ananth Grama, Sabre Kais | Multiple Network Alignment on Quantum Computers | null | null | 10.1007/s11128-014-0818-7 | null | quant-ph cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comparative analyses of graph structured datasets underly diverse problems.
Examples of these problems include identification of conserved functional
components (biochemical interactions) across species, structural similarity of
large biomolecules, and recurring patterns of interactions in social networks.
A large class of such analyses methods quantify the topological similarity of
nodes across networks. The resulting correspondence of nodes across networks,
also called node alignment, can be used to identify invariant subgraphs across
the input graphs.
Given $k$ graphs as input, alignment algorithms use topological information
to assign a similarity score to each $k$-tuple of nodes, with elements (nodes)
drawn from each of the input graphs. Nodes are considered similar if their
neighbors are also similar. An alternate, equivalent view of these network
alignment algorithms is to consider the Kronecker product of the input graphs,
and to identify high-ranked nodes in the Kronecker product graph. Conventional
methods such as PageRank and HITS (Hypertext Induced Topic Selection) can be
used for this purpose. These methods typically require computation of the
principal eigenvector of a suitably modified Kronecker product matrix of the
input graphs. We adopt this alternate view of the problem to address the
problem of multiple network alignment. Using the phase estimation algorithm, we
show that the multiple network alignment problem can be efficiently solved on
quantum computers. We characterize the accuracy and performance of our method,
and show that it can deliver exponential speedups over conventional
(non-quantum) methods.
| [
{
"version": "v1",
"created": "Sat, 27 Jul 2013 06:37:49 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Aug 2014 14:00:59 GMT"
}
] | 2014-09-25T00:00:00 | [
[
"Daskin",
"Anmer",
""
],
[
"Grama",
"Ananth",
""
],
[
"Kais",
"Sabre",
""
]
] | TITLE: Multiple Network Alignment on Quantum Computers
ABSTRACT: Comparative analyses of graph structured datasets underly diverse problems.
Examples of these problems include identification of conserved functional
components (biochemical interactions) across species, structural similarity of
large biomolecules, and recurring patterns of interactions in social networks.
A large class of such analyses methods quantify the topological similarity of
nodes across networks. The resulting correspondence of nodes across networks,
also called node alignment, can be used to identify invariant subgraphs across
the input graphs.
Given $k$ graphs as input, alignment algorithms use topological information
to assign a similarity score to each $k$-tuple of nodes, with elements (nodes)
drawn from each of the input graphs. Nodes are considered similar if their
neighbors are also similar. An alternate, equivalent view of these network
alignment algorithms is to consider the Kronecker product of the input graphs,
and to identify high-ranked nodes in the Kronecker product graph. Conventional
methods such as PageRank and HITS (Hypertext Induced Topic Selection) can be
used for this purpose. These methods typically require computation of the
principal eigenvector of a suitably modified Kronecker product matrix of the
input graphs. We adopt this alternate view of the problem to address the
problem of multiple network alignment. Using the phase estimation algorithm, we
show that the multiple network alignment problem can be efficiently solved on
quantum computers. We characterize the accuracy and performance of our method,
and show that it can deliver exponential speedups over conventional
(non-quantum) methods.
| no_new_dataset | 0.943608 |
1402.6956 | Michael Marino | EXO-200 Collaboration: J.B. Albert, D.J. Auty, P.S. Barbeau, E.
Beauchamp, D. Beck, V. Belov, C. Benitez-Medina, J. Bonatt, M. Breidenbach,
T. Brunner, A. Burenkov, G.F. Cao, C. Chambers, J. Chaves, B. Cleveland, M.
Coon, A. Craycraft, T. Daniels, M. Danilov, S.J. Daugherty, C.G. Davis, J.
Davis, R. DeVoe, S. Delaquis, T. Didberidze, A. Dolgolenko, M.J. Dolinski, M.
Dunford, W. Fairbank Jr., J. Farine, W. Feldmeier, P. Fierlinger, D.
Fudenberg, G. Giroux, R. Gornea, K. Graham, G. Gratta, C. Hall, S. Herrin, M.
Hughes, M.J. Jewell, X.S. Jiang, A. Johnson, T.N. Johnson, S. Johnston, A.
Karelin, L.J. Kaufman, R. Killick, T. Koffas, S. Kravitz, A. Kuchenkov, K.S.
Kumar, D.S. Leonard, F. Leonard, C. Licciardi, Y.H. Lin, R. MacLellan, M.G.
Marino, B. Mong, D. Moore, R. Nelson, A. Odian, I. Ostrovskiy, C. Ouellet, A.
Piepke, A. Pocar, C.Y. Prescott, A. Rivas, P.C. Rowson, M.P. Rozo, J.J.
Russell, A. Schubert, D. Sinclair, S. Slutsky, E. Smith, V. Stekhanov, M.
Tarka, T. Tolba, D. Tosi, K. Twelker, P. Vogel, J.-L. Vuilleumier, A. Waite,
J. Walton, T. Walton, M. Weber, L.J. Wen, U. Wichoski, J.D. Wright, L. Yang,
Y.-R. Yen, O.Ya. Zeldovich, Y.B. Zhao | Search for Majorana neutrinos with the first two years of EXO-200 data | 9 pages, 6 figures | Nature 510, 229-234 (12 June 2014) | 10.1038/nature13432 | null | nucl-ex hep-ex physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many extensions of the Standard Model of particle physics suggest that
neutrinos should be Majorana-type fermions, but this assumption is difficult to
confirm. Observation of neutrinoless double-beta decay ($0\nu \beta \beta$), a
spontaneous transition that may occur in several candidate nuclei, would verify
the Majorana nature of the neutrino and constrain the absolute scale of the
neutrino mass spectrum. Recent searches carried out with $^{76}$Ge (GERDA
experiment) and $^{136}$Xe (KamLAND-Zen and EXO-200 experiments) have
established the lifetime of this decay to be longer than $10^{25}$ yr,
corresponding to a limit on the neutrino mass of 0.2-0.4 eV. Here we report new
results from EXO-200 based on 100 kg$\cdot$yr of $^{136}$Xe exposure,
representing an almost fourfold increase from our earlier published datasets.
We have improved the detector resolution at the $^{136}$Xe double-beta-decay
Q-value to $\sigma$/E = 1.53% and revised the data analysis. The obtained
half-life sensitivity is $1.9\cdot10^{25}$ yr, an improvement by a factor of
2.7 compared to previous EXO-200 results. We find no statistically significant
evidence for $0\nu \beta \beta$ decay and set a half-life limit of
$1.1\cdot10^{25}$ yr at 90% CL. The high sensitivity holds promise for further
running of the EXO-200 detector and future $0\nu \beta \beta$ decay searches
with nEXO.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2014 16:24:12 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Jun 2014 21:13:11 GMT"
}
] | 2014-09-25T00:00:00 | [
[
"200 Collaboration",
"",
""
],
[
"Albert",
"J. B.",
""
],
[
"Auty",
"D. J.",
""
],
[
"Barbeau",
"P. S.",
""
],
[
"Beauchamp",
"E.",
""
],
[
"Beck",
"D.",
""
],
[
"Belov",
"V.",
""
],
[
"Benitez-Medina",
"C.",
""
],
[
"Bonatt",
"J.",
""
],
[
"Breidenbach",
"M.",
""
],
[
"Brunner",
"T.",
""
],
[
"Burenkov",
"A.",
""
],
[
"Cao",
"G. F.",
""
],
[
"Chambers",
"C.",
""
],
[
"Chaves",
"J.",
""
],
[
"Cleveland",
"B.",
""
],
[
"Coon",
"M.",
""
],
[
"Craycraft",
"A.",
""
],
[
"Daniels",
"T.",
""
],
[
"Danilov",
"M.",
""
],
[
"Daugherty",
"S. J.",
""
],
[
"Davis",
"C. G.",
""
],
[
"Davis",
"J.",
""
],
[
"DeVoe",
"R.",
""
],
[
"Delaquis",
"S.",
""
],
[
"Didberidze",
"T.",
""
],
[
"Dolgolenko",
"A.",
""
],
[
"Dolinski",
"M. J.",
""
],
[
"Dunford",
"M.",
""
],
[
"Fairbank",
"W.",
"Jr."
],
[
"Farine",
"J.",
""
],
[
"Feldmeier",
"W.",
""
],
[
"Fierlinger",
"P.",
""
],
[
"Fudenberg",
"D.",
""
],
[
"Giroux",
"G.",
""
],
[
"Gornea",
"R.",
""
],
[
"Graham",
"K.",
""
],
[
"Gratta",
"G.",
""
],
[
"Hall",
"C.",
""
],
[
"Herrin",
"S.",
""
],
[
"Hughes",
"M.",
""
],
[
"Jewell",
"M. J.",
""
],
[
"Jiang",
"X. S.",
""
],
[
"Johnson",
"A.",
""
],
[
"Johnson",
"T. N.",
""
],
[
"Johnston",
"S.",
""
],
[
"Karelin",
"A.",
""
],
[
"Kaufman",
"L. J.",
""
],
[
"Killick",
"R.",
""
],
[
"Koffas",
"T.",
""
],
[
"Kravitz",
"S.",
""
],
[
"Kuchenkov",
"A.",
""
],
[
"Kumar",
"K. S.",
""
],
[
"Leonard",
"D. S.",
""
],
[
"Leonard",
"F.",
""
],
[
"Licciardi",
"C.",
""
],
[
"Lin",
"Y. H.",
""
],
[
"MacLellan",
"R.",
""
],
[
"Marino",
"M. G.",
""
],
[
"Mong",
"B.",
""
],
[
"Moore",
"D.",
""
],
[
"Nelson",
"R.",
""
],
[
"Odian",
"A.",
""
],
[
"Ostrovskiy",
"I.",
""
],
[
"Ouellet",
"C.",
""
],
[
"Piepke",
"A.",
""
],
[
"Pocar",
"A.",
""
],
[
"Prescott",
"C. Y.",
""
],
[
"Rivas",
"A.",
""
],
[
"Rowson",
"P. C.",
""
],
[
"Rozo",
"M. P.",
""
],
[
"Russell",
"J. J.",
""
],
[
"Schubert",
"A.",
""
],
[
"Sinclair",
"D.",
""
],
[
"Slutsky",
"S.",
""
],
[
"Smith",
"E.",
""
],
[
"Stekhanov",
"V.",
""
],
[
"Tarka",
"M.",
""
],
[
"Tolba",
"T.",
""
],
[
"Tosi",
"D.",
""
],
[
"Twelker",
"K.",
""
],
[
"Vogel",
"P.",
""
],
[
"Vuilleumier",
"J. -L.",
""
],
[
"Waite",
"A.",
""
],
[
"Walton",
"J.",
""
],
[
"Walton",
"T.",
""
],
[
"Weber",
"M.",
""
],
[
"Wen",
"L. J.",
""
],
[
"Wichoski",
"U.",
""
],
[
"Wright",
"J. D.",
""
],
[
"Yang",
"L.",
""
],
[
"Yen",
"Y. -R.",
""
],
[
"Zeldovich",
"O. Ya.",
""
],
[
"Zhao",
"Y. B.",
""
]
] | TITLE: Search for Majorana neutrinos with the first two years of EXO-200 data
ABSTRACT: Many extensions of the Standard Model of particle physics suggest that
neutrinos should be Majorana-type fermions, but this assumption is difficult to
confirm. Observation of neutrinoless double-beta decay ($0\nu \beta \beta$), a
spontaneous transition that may occur in several candidate nuclei, would verify
the Majorana nature of the neutrino and constrain the absolute scale of the
neutrino mass spectrum. Recent searches carried out with $^{76}$Ge (GERDA
experiment) and $^{136}$Xe (KamLAND-Zen and EXO-200 experiments) have
established the lifetime of this decay to be longer than $10^{25}$ yr,
corresponding to a limit on the neutrino mass of 0.2-0.4 eV. Here we report new
results from EXO-200 based on 100 kg$\cdot$yr of $^{136}$Xe exposure,
representing an almost fourfold increase from our earlier published datasets.
We have improved the detector resolution at the $^{136}$Xe double-beta-decay
Q-value to $\sigma$/E = 1.53% and revised the data analysis. The obtained
half-life sensitivity is $1.9\cdot10^{25}$ yr, an improvement by a factor of
2.7 compared to previous EXO-200 results. We find no statistically significant
evidence for $0\nu \beta \beta$ decay and set a half-life limit of
$1.1\cdot10^{25}$ yr at 90% CL. The high sensitivity holds promise for further
running of the EXO-200 detector and future $0\nu \beta \beta$ decay searches
with nEXO.
| no_new_dataset | 0.941223 |
1406.5052 | Uldis Boj\=ars | Uldis Boj\=ars and Ren\=ars Liepi\c{n}\v{s} | The State of Open Data in Latvia: 2014 | keywords: Open Data, Open Government Data, PSI, Latvia | Baltic J. Modern Computing, Vol. 2 (2014), No. 3, 160-170 | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines the state of Open Data in Latvia at the middle of 2014.
The study is divided into two parts: (i) a survey of open data situation and
(ii) an overview of available open data sets. The first part examines the
general open data climate in Latvia according to the guidelines of the OKFN
Open Data Index making the results comparable to those of other participants of
this index. The second part examines datasets made available on the Latvia Open
Data community catalogue, the only open data catalogue available in Latvia at
the moment. We conclude that Latvia public sector open data mostly fulfil the
basic criteria (e.g., data is available) of the Open Data Index but fail on
more advanced criteria: the majority of data considered in the study are not
published in machine-readable form, are not available for bulk download and
none of the data sources have open license statements.
| [
{
"version": "v1",
"created": "Thu, 19 Jun 2014 14:08:03 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Sep 2014 10:52:45 GMT"
}
] | 2014-09-25T00:00:00 | [
[
"Bojārs",
"Uldis",
""
],
[
"Liepiņš",
"Renārs",
""
]
] | TITLE: The State of Open Data in Latvia: 2014
ABSTRACT: This paper examines the state of Open Data in Latvia at the middle of 2014.
The study is divided into two parts: (i) a survey of open data situation and
(ii) an overview of available open data sets. The first part examines the
general open data climate in Latvia according to the guidelines of the OKFN
Open Data Index making the results comparable to those of other participants of
this index. The second part examines datasets made available on the Latvia Open
Data community catalogue, the only open data catalogue available in Latvia at
the moment. We conclude that Latvia public sector open data mostly fulfil the
basic criteria (e.g., data is available) of the Open Data Index but fail on
more advanced criteria: the majority of data considered in the study are not
published in machine-readable form, are not available for bulk download and
none of the data sources have open license statements.
| no_new_dataset | 0.951908 |
1108.4674 | Laurent Duval | Sergi Ventosa, Sylvain Le Roy, Ir\`ene Huard, Antonio Pica, H\'erald
Rabeson, Patrice Ricarte and Laurent Duval | Adaptive multiple subtraction with wavelet-based complex unary Wiener
filters | 18 pages, 10 color figures | Geophysics, Nov.-Dec. 2012, vol. 77, issue 6, pages V183-V192 | 10.1190/geo2011-0318.1 | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adaptive subtraction is a key element in predictive multiple-suppression
methods. It minimizes misalignments and amplitude differences between modeled
and actual multiples, and thus reduces multiple contamination in the dataset
after subtraction. Due to the high cross-correlation between their waveform,
the main challenge resides in attenuating multiples without distorting
primaries. As they overlap on a wide frequency range, we split this wide-band
problem into a set of more tractable narrow-band filter designs, using a 1D
complex wavelet frame. This decomposition enables a single-pass adaptive
subtraction via complex, single-sample (unary) Wiener filters, consistently
estimated on overlapping windows in a complex wavelet transformed domain. Each
unary filter compensates amplitude differences within its frequency support,
and can correct small and large misalignment errors through phase and integer
delay corrections. This approach greatly simplifies the matching filter
estimation and, despite its simplicity, narrows the gap between 1D and standard
adaptive 2D methods on field data.
| [
{
"version": "v1",
"created": "Tue, 23 Aug 2011 19:14:42 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Aug 2011 19:54:34 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Apr 2012 18:01:25 GMT"
},
{
"version": "v4",
"created": "Thu, 31 May 2012 20:42:15 GMT"
},
{
"version": "v5",
"created": "Wed, 27 Jun 2012 19:03:25 GMT"
},
{
"version": "v6",
"created": "Mon, 29 Apr 2013 20:02:50 GMT"
}
] | 2014-09-24T00:00:00 | [
[
"Ventosa",
"Sergi",
""
],
[
"Roy",
"Sylvain Le",
""
],
[
"Huard",
"Irène",
""
],
[
"Pica",
"Antonio",
""
],
[
"Rabeson",
"Hérald",
""
],
[
"Ricarte",
"Patrice",
""
],
[
"Duval",
"Laurent",
""
]
] | TITLE: Adaptive multiple subtraction with wavelet-based complex unary Wiener
filters
ABSTRACT: Adaptive subtraction is a key element in predictive multiple-suppression
methods. It minimizes misalignments and amplitude differences between modeled
and actual multiples, and thus reduces multiple contamination in the dataset
after subtraction. Due to the high cross-correlation between their waveforms,
the main challenge resides in attenuating multiples without distorting
primaries. As they overlap on a wide frequency range, we split this wide-band
problem into a set of more tractable narrow-band filter designs, using a 1D
complex wavelet frame. This decomposition enables a single-pass adaptive
subtraction via complex, single-sample (unary) Wiener filters, consistently
estimated on overlapping windows in a complex wavelet transformed domain. Each
unary filter compensates amplitude differences within its frequency support,
and can correct small and large misalignment errors through phase and integer
delay corrections. This approach greatly simplifies the matching filter
estimation and, despite its simplicity, narrows the gap between 1D and standard
adaptive 2D methods on field data.
| no_new_dataset | 0.946001 |
1405.2362 | Yan Fang | Yan Fang, Matthew J. Cotter, Donald M. Chiarulli, Steven P. Levitan | Image Segmentation Using Frequency Locking of Coupled Oscillators | 7 pages, 14 figures, the 51th Design Automation Conference 2014, Work
in Progress Poster Session | null | 10.1109/CNNA.2014.6888657 | null | cs.CV q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synchronization of coupled oscillators is observed at multiple levels of
neural systems, and has been shown to play an important function in visual
perception. We propose a computing system based on locally coupled oscillator
networks for image segmentation. The system can serve as the preprocessing
front-end of an image processing pipeline where the common frequencies of
clusters of oscillators reflect the segmentation results. To demonstrate the
feasibility of our design, the system is simulated and tested on a human face
image dataset and its performance is compared with traditional intensity
threshold based algorithms. Our system shows both better performance and higher
noise tolerance than traditional methods.
| [
{
"version": "v1",
"created": "Fri, 9 May 2014 21:53:05 GMT"
}
] | 2014-09-24T00:00:00 | [
[
"Fang",
"Yan",
""
],
[
"Cotter",
"Matthew J.",
""
],
[
"Chiarulli",
"Donald M.",
""
],
[
"Levitan",
"Steven P.",
""
]
] | TITLE: Image Segmentation Using Frequency Locking of Coupled Oscillators
ABSTRACT: Synchronization of coupled oscillators is observed at multiple levels of
neural systems, and has been shown to play an important function in visual
perception. We propose a computing system based on locally coupled oscillator
networks for image segmentation. The system can serve as the preprocessing
front-end of an image processing pipeline where the common frequencies of
clusters of oscillators reflect the segmentation results. To demonstrate the
feasibility of our design, the system is simulated and tested on a human face
image dataset and its performance is compared with traditional intensity
threshold based algorithms. Our system shows both better performance and higher
noise tolerance than traditional methods.
| no_new_dataset | 0.952442 |
1408.2003 | Bo Han | Bo Han, Bo He, Rui Nian, Mengmeng Ma, Shujing Zhang, Minghui Li and
Amaury Lendasse | LARSEN-ELM: Selective Ensemble of Extreme Learning Machines using LARS
for Blended Data | Accepted for publication in Neurocomputing, 01/19/2014 | Neurocomputing, 2014, Elsevier. Manuscript ID: NEUCOM-D-13-01029 | 10.1016/j.neucom.2014.01.069 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extreme learning machine (ELM) as a neural network algorithm has shown its
good performance, such as fast speed and simple structure, but weak
robustness is an unavoidable defect of the original ELM for blended data. We
present a new machine learning framework called LARSEN-ELM for overcoming this
problem. In our paper, we would like to show two key steps in LARSEN-ELM. In
the first step, preprocessing, we select the input variables highly related to
the output using least angle regression (LARS). In the second step, training,
we employ Genetic Algorithm (GA) based selective ensemble and the original ELM. In
the experiments, we apply a sum of two sines and four datasets from the UCI
repository to verify the robustness of our approach. The experimental results
show that, compared with the original ELM and other methods such as OP-ELM,
GASEN-ELM and LSBoost, LARSEN-ELM significantly improves robustness performance
while keeping a relatively high speed.
| [
{
"version": "v1",
"created": "Sat, 9 Aug 2014 01:31:02 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Aug 2014 02:54:54 GMT"
}
] | 2014-09-24T00:00:00 | [
[
"Han",
"Bo",
""
],
[
"He",
"Bo",
""
],
[
"Nian",
"Rui",
""
],
[
"Ma",
"Mengmeng",
""
],
[
"Zhang",
"Shujing",
""
],
[
"Li",
"Minghui",
""
],
[
"Lendasse",
"Amaury",
""
]
] | TITLE: LARSEN-ELM: Selective Ensemble of Extreme Learning Machines using LARS
for Blended Data
ABSTRACT: Extreme learning machine (ELM) as a neural network algorithm has shown its
good performance, such as fast speed and simple structure, but weak
robustness is an unavoidable defect of the original ELM for blended data. We
present a new machine learning framework called LARSEN-ELM for overcoming this
problem. In our paper, we would like to show two key steps in LARSEN-ELM. In
the first step, preprocessing, we select the input variables highly related to
the output using least angle regression (LARS). In the second step, training,
we employ Genetic Algorithm (GA) based selective ensemble and the original ELM. In
the experiments, we apply a sum of two sines and four datasets from the UCI
repository to verify the robustness of our approach. The experimental results
show that, compared with the original ELM and other methods such as OP-ELM,
GASEN-ELM and LSBoost, LARSEN-ELM significantly improves robustness performance
while keeping a relatively high speed.
| no_new_dataset | 0.948965 |
1408.2004 | Bo Han | Bo Han, Bo He, Mengmeng Ma, Tingting Sun, Tianhong Yan, Amaury
Lendasse | RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning
Machines for Robustness Improvement | Accepted for publication in Mathematical Problems in Engineering,
09/22/2014 | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extreme learning machine (ELM) as an emerging branch of shallow networks has
shown its excellent generalization and fast learning speed. However, for
blended data, the robustness of ELM is weak because the weights and biases of its
hidden nodes are set randomly. Moreover, the noisy data exert a negative
effect. To solve this problem, a new framework called RMSE-ELM is proposed in
this paper. It is a two-layer recursive model. In the first layer, the
framework trains lots of ELMs in different groups concurrently, then employs
selective ensemble to pick out an optimal set of ELMs in each group, which can
be merged into a large group of ELMs called the candidate pool. In the second
layer, selective ensemble is recursively used on the candidate pool to acquire the
final ensemble. In the experiments, we apply UCI blended datasets to confirm
the robustness of our new approach in two key aspects (mean square error and
standard deviation). The space complexity of our method is increased to some
degree, but the results have shown that RMSE-ELM significantly improves
robustness with only slightly increased computational time compared with representative
methods (ELM, OP-ELM, GASEN-ELM, GASEN-BP and E-GASEN). It becomes a potential
framework to solve the robustness issue of ELM for high-dimensional blended data in
the future.
| [
{
"version": "v1",
"created": "Sat, 9 Aug 2014 01:36:03 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Aug 2014 02:35:11 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Sep 2014 07:48:35 GMT"
}
] | 2014-09-24T00:00:00 | [
[
"Han",
"Bo",
""
],
[
"He",
"Bo",
""
],
[
"Ma",
"Mengmeng",
""
],
[
"Sun",
"Tingting",
""
],
[
"Yan",
"Tianhong",
""
],
[
"Lendasse",
"Amaury",
""
]
] | TITLE: RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning
Machines for Robustness Improvement
ABSTRACT: Extreme learning machine (ELM) as an emerging branch of shallow networks has
shown its excellent generalization and fast learning speed. However, for
blended data, the robustness of ELM is weak because the weights and biases of its
hidden nodes are set randomly. Moreover, the noisy data exert a negative
effect. To solve this problem, a new framework called RMSE-ELM is proposed in
this paper. It is a two-layer recursive model. In the first layer, the
framework trains lots of ELMs in different groups concurrently, then employs
selective ensemble to pick out an optimal set of ELMs in each group, which can
be merged into a large group of ELMs called the candidate pool. In the second
layer, selective ensemble is recursively used on the candidate pool to acquire the
final ensemble. In the experiments, we apply UCI blended datasets to confirm
the robustness of our new approach in two key aspects (mean square error and
standard deviation). The space complexity of our method is increased to some
degree, but the results have shown that RMSE-ELM significantly improves
robustness with only slightly increased computational time compared with representative
methods (ELM, OP-ELM, GASEN-ELM, GASEN-BP and E-GASEN). It becomes a potential
framework to solve the robustness issue of ELM for high-dimensional blended data in
the future.
| no_new_dataset | 0.946892 |
1407.1610 | Pulkit Agrawal | Pulkit Agrawal, Ross Girshick, Jitendra Malik | Analyzing the Performance of Multilayer Neural Networks for Object
Recognition | Published in European Conference on Computer Vision 2014 (ECCV-2014) | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last two years, convolutional neural networks (CNNs) have achieved an
impressive suite of results on standard recognition datasets and tasks.
CNN-based features seem poised to quickly replace engineered representations,
such as SIFT and HOG. However, compared to SIFT and HOG, we understand much
less about the nature of the features learned by large CNNs. In this paper, we
experimentally probe several aspects of CNN feature learning in an attempt to
help practitioners gain useful, evidence-backed intuitions about how to apply
CNNs to computer vision problems.
| [
{
"version": "v1",
"created": "Mon, 7 Jul 2014 08:00:57 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Sep 2014 17:49:01 GMT"
}
] | 2014-09-23T00:00:00 | [
[
"Agrawal",
"Pulkit",
""
],
[
"Girshick",
"Ross",
""
],
[
"Malik",
"Jitendra",
""
]
] | TITLE: Analyzing the Performance of Multilayer Neural Networks for Object
Recognition
ABSTRACT: In the last two years, convolutional neural networks (CNNs) have achieved an
impressive suite of results on standard recognition datasets and tasks.
CNN-based features seem poised to quickly replace engineered representations,
such as SIFT and HOG. However, compared to SIFT and HOG, we understand much
less about the nature of the features learned by large CNNs. In this paper, we
experimentally probe several aspects of CNN feature learning in an attempt to
help practitioners gain useful, evidence-backed intuitions about how to apply
CNNs to computer vision problems.
| no_new_dataset | 0.95096 |
1408.3809 | Hossein Rahmani | Hossein Rahmani, Arif Mahmood, Du Q. Huynh, Ajmal Mian | HOPC: Histogram of Oriented Principal Components of 3D Pointclouds for
Action Recognition | ECCV 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing techniques for 3D action recognition are sensitive to viewpoint
variations because they extract features from depth images which change
significantly with viewpoint. In contrast, we directly process the pointclouds
and propose a new technique for action recognition which is more robust to
noise, action speed and viewpoint variations. Our technique consists of a novel
descriptor and keypoint detection algorithm. The proposed descriptor is
extracted at a point by encoding the Histogram of Oriented Principal Components
(HOPC) within an adaptive spatio-temporal support volume around that point.
Based on this descriptor, we present a novel method to detect Spatio-Temporal
Key-Points (STKPs) in 3D pointcloud sequences. Experimental results show that
the proposed descriptor and STKP detector outperform state-of-the-art
algorithms on three benchmark human activity datasets. We also introduce a new
multiview public dataset and show the robustness of our proposed method to
viewpoint variations.
| [
{
"version": "v1",
"created": "Sun, 17 Aug 2014 10:34:47 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Sep 2014 02:49:32 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Sep 2014 01:46:55 GMT"
},
{
"version": "v4",
"created": "Mon, 22 Sep 2014 06:50:28 GMT"
}
] | 2014-09-23T00:00:00 | [
[
"Rahmani",
"Hossein",
""
],
[
"Mahmood",
"Arif",
""
],
[
"Huynh",
"Du Q.",
""
],
[
"Mian",
"Ajmal",
""
]
] | TITLE: HOPC: Histogram of Oriented Principal Components of 3D Pointclouds for
Action Recognition
ABSTRACT: Existing techniques for 3D action recognition are sensitive to viewpoint
variations because they extract features from depth images which change
significantly with viewpoint. In contrast, we directly process the pointclouds
and propose a new technique for action recognition which is more robust to
noise, action speed and viewpoint variations. Our technique consists of a novel
descriptor and keypoint detection algorithm. The proposed descriptor is
extracted at a point by encoding the Histogram of Oriented Principal Components
(HOPC) within an adaptive spatio-temporal support volume around that point.
Based on this descriptor, we present a novel method to detect Spatio-Temporal
Key-Points (STKPs) in 3D pointcloud sequences. Experimental results show that
the proposed descriptor and STKP detector outperform state-of-the-art
algorithms on three benchmark human activity datasets. We also introduce a new
multiview public dataset and show the robustness of our proposed method to
viewpoint variations.
| new_dataset | 0.959269 |
1408.3810 | Hossein Rahmani | Hossein Rahmani, Arif Mahmood, Du Huynh, Ajmal Mian | Action Classification with Locality-constrained Linear Coding | ICPR 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an action classification algorithm which uses Locality-constrained
Linear Coding (LLC) to capture discriminative information of human body
variations in each spatiotemporal subsequence of a video sequence. Our proposed
method divides the input video into equally spaced overlapping spatiotemporal
subsequences, each of which is decomposed into blocks and then cells. We use
the Histogram of Oriented Gradient (HOG3D) feature to encode the information in
each cell. We justify the use of LLC for encoding the block descriptor by
demonstrating its superiority over Sparse Coding (SC). Our sequence descriptor
is obtained via a logistic regression classifier with L2 regularization. We
evaluate and compare our algorithm with ten state-of-the-art algorithms on five
benchmark datasets. Experimental results show that, on average, our algorithm
gives better accuracy than these ten algorithms.
| [
{
"version": "v1",
"created": "Sun, 17 Aug 2014 10:46:45 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Sep 2014 06:54:34 GMT"
}
] | 2014-09-23T00:00:00 | [
[
"Rahmani",
"Hossein",
""
],
[
"Mahmood",
"Arif",
""
],
[
"Huynh",
"Du",
""
],
[
"Mian",
"Ajmal",
""
]
] | TITLE: Action Classification with Locality-constrained Linear Coding
ABSTRACT: We propose an action classification algorithm which uses Locality-constrained
Linear Coding (LLC) to capture discriminative information of human body
variations in each spatiotemporal subsequence of a video sequence. Our proposed
method divides the input video into equally spaced overlapping spatiotemporal
subsequences, each of which is decomposed into blocks and then cells. We use
the Histogram of Oriented Gradient (HOG3D) feature to encode the information in
each cell. We justify the use of LLC for encoding the block descriptor by
demonstrating its superiority over Sparse Coding (SC). Our sequence descriptor
is obtained via a logistic regression classifier with L2 regularization. We
evaluate and compare our algorithm with ten state-of-the-art algorithms on five
benchmark datasets. Experimental results show that, on average, our algorithm
gives better accuracy than these ten algorithms.
| no_new_dataset | 0.952574 |
1409.6070 | Benjamin Graham | Benjamin Graham | Spatially-sparse convolutional neural networks | 13 pages | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks (CNNs) perform well on problems such as
handwriting recognition and image classification. However, the performance of
the networks is often limited by budget and time constraints, particularly when
trying to train deep networks.
Motivated by the problem of online handwriting recognition, we developed a
CNN for processing spatially-sparse inputs; a character drawn with a one-pixel
wide pen on a high resolution grid looks like a sparse matrix. Taking advantage
of the sparsity allowed us to train and test large, deep CNNs more efficiently.
On the CASIA-OLHWDB1.1 dataset containing 3755 character classes we get a test
error of 3.82%.
Although pictures are not sparse, they can be thought of as sparse by adding
padding. Applying a deep convolutional network using sparsity has resulted in a
substantial reduction in test error on the CIFAR small picture datasets: 6.28%
on CIFAR-10 and 24.30% for CIFAR-100.
| [
{
"version": "v1",
"created": "Mon, 22 Sep 2014 02:39:27 GMT"
}
] | 2014-09-23T00:00:00 | [
[
"Graham",
"Benjamin",
""
]
] | TITLE: Spatially-sparse convolutional neural networks
ABSTRACT: Convolutional neural networks (CNNs) perform well on problems such as
handwriting recognition and image classification. However, the performance of
the networks is often limited by budget and time constraints, particularly when
trying to train deep networks.
Motivated by the problem of online handwriting recognition, we developed a
CNN for processing spatially-sparse inputs; a character drawn with a one-pixel
wide pen on a high resolution grid looks like a sparse matrix. Taking advantage
of the sparsity allowed us more efficiently to train and test large, deep CNNs.
On the CASIA-OLHWDB1.1 dataset containing 3755 character classes we get a test
error of 3.82%.
Although pictures are not sparse, they can be thought of as sparse by adding
padding. Applying a deep convolutional network using sparsity has resulted in a
substantial reduction in test error on the CIFAR small picture datasets: 6.28%
on CIFAR-10 and 24.30% for CIFAR-100.
| no_new_dataset | 0.950041 |
1311.2789 | Stian Soiland-Reyes | Kristina M. Hettne, Harish Dharuri, Jun Zhao, Katherine Wolstencroft,
Khalid Belhajjame, Stian Soiland-Reyes, Eleni Mina, Mark Thompson, Don
Cruickshank, Lourdes Verdes-Montenegro, Julian Garrido, David de Roure, Oscar
Corcho, Graham Klyne, Reinout van Schouwen, Peter A. C. 't Hoen, Sean
Bechhofer, Carole Goble, Marco Roos | Structuring research methods and data with the Research Object model:
genomics workflows as a case study | 35 pages, 10 figures, 1 table. Submitted to Journal of Biomedical
Semantics on 2013-05-13, resubmitted after reviews 2013-11-09, 2014-06-27.
Accepted in principle 2014-07-29. Published: 2014-09-18
http://www.jbiomedsem.com/content/5/1/41. Research Object homepage:
http://www.researchobject.org/ | null | 10.1186/2041-1480-5-41 | uk-ac-man-scw:212837 | q-bio.GN cs.DL | http://creativecommons.org/licenses/by/3.0/ | One of the main challenges for biomedical research lies in the
computer-assisted integrative study of large and increasingly complex
combinations of data in order to understand molecular mechanisms. The
preservation of the materials and methods of such computational experiments
with clear annotations is essential for understanding an experiment, and this
is increasingly recognized in the bioinformatics community. Our assumption is
that offering means of digital, structured aggregation and annotation of the
objects of an experiment will provide necessary meta-data for a scientist to
understand and recreate the results of an experiment. To support this we
explored a model for the semantic description of a workflow-centric Research
Object (RO), where an RO is defined as a resource that aggregates other
resources, e.g., datasets, software, spreadsheets, text, etc. We applied this
model to a case study where we analysed human metabolite variation by
workflows.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2013 14:23:33 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Aug 2014 13:28:07 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Sep 2014 10:37:56 GMT"
}
] | 2014-09-22T00:00:00 | [
[
"Hettne",
"Kristina M.",
""
],
[
"Dharuri",
"Harish",
""
],
[
"Zhao",
"Jun",
""
],
[
"Wolstencroft",
"Katherine",
""
],
[
"Belhajjame",
"Khalid",
""
],
[
"Soiland-Reyes",
"Stian",
""
],
[
"Mina",
"Eleni",
""
],
[
"Thompson",
"Mark",
""
],
[
"Cruickshank",
"Don",
""
],
[
"Verdes-Montenegro",
"Lourdes",
""
],
[
"Garrido",
"Julian",
""
],
[
"de Roure",
"David",
""
],
[
"Corcho",
"Oscar",
""
],
[
"Klyne",
"Graham",
""
],
[
"van Schouwen",
"Reinout",
""
],
[
"Hoen",
"Peter A. C. 't",
""
],
[
"Bechhofer",
"Sean",
""
],
[
"Goble",
"Carole",
""
],
[
"Roos",
"Marco",
""
]
] | TITLE: Structuring research methods and data with the Research Object model:
genomics workflows as a case study
ABSTRACT: One of the main challenges for biomedical research lies in the
computer-assisted integrative study of large and increasingly complex
combinations of data in order to understand molecular mechanisms. The
preservation of the materials and methods of such computational experiments
with clear annotations is essential for understanding an experiment, and this
is increasingly recognized in the bioinformatics community. Our assumption is
that offering means of digital, structured aggregation and annotation of the
objects of an experiment will provide necessary meta-data for a scientist to
understand and recreate the results of an experiment. To support this we
explored a model for the semantic description of a workflow-centric Research
Object (RO), where an RO is defined as a resource that aggregates other
resources, e.g., datasets, software, spreadsheets, text, etc. We applied this
model to a case study where we analysed human metabolite variation by
workflows.
| no_new_dataset | 0.947284 |
1409.5512 | Liangyue Li | Liangyue Li, Hanghang Tong, Nan Cao, Kate Ehrlich, Yu-Ru Lin, Norbou
Buchler | Replacing the Irreplaceable: Fast Algorithms for Team Member
Recommendation | Initially submitted to KDD 2014 | null | null | null | cs.SI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the problem of Team Member Replacement: given a team
of people embedded in a social network working on the same task, find a good
candidate who can fit in the team after one team member becomes unavailable. We
conjecture that a good team member replacement should have good skill matching
as well as good structure matching. We formulate this problem using the concept
of graph kernel. To tackle the computational challenges, we propose a family of
fast algorithms by (a) designing effective pruning strategies, and (b)
exploring the smoothness between the existing and the new team structures. We
conduct extensive experimental evaluations on real world datasets to
demonstrate the effectiveness and efficiency. Our algorithms (a) perform
significantly better than the alternative choices in terms of both precision
and recall; and (b) scale sub-linearly.
| [
{
"version": "v1",
"created": "Fri, 19 Sep 2014 04:05:16 GMT"
}
] | 2014-09-22T00:00:00 | [
[
"Li",
"Liangyue",
""
],
[
"Tong",
"Hanghang",
""
],
[
"Cao",
"Nan",
""
],
[
"Ehrlich",
"Kate",
""
],
[
"Lin",
"Yu-Ru",
""
],
[
"Buchler",
"Norbou",
""
]
] | TITLE: Replacing the Irreplaceable: Fast Algorithms for Team Member
Recommendation
ABSTRACT: In this paper, we study the problem of Team Member Replacement: given a team
of people embedded in a social network working on the same task, find a good
candidate who can fit in the team after one team member becomes unavailable. We
conjecture that a good team member replacement should have good skill matching
as well as good structure matching. We formulate this problem using the concept
of graph kernel. To tackle the computational challenges, we propose a family of
fast algorithms by (a) designing effective pruning strategies, and (b)
exploring the smoothness between the existing and the new team structures. We
conduct extensive experimental evaluations on real world datasets to
demonstrate the effectiveness and efficiency. Our algorithms (a) perform
significantly better than the alternative choices in terms of both precision
and recall; and (b) scale sub-linearly.
| no_new_dataset | 0.951459 |
1310.1525 | Yang Yang | Yang Yang and Yuxiao Dong and Nitesh V. Chawla | Microscopic Evolution of Social Networks by Triad Position Profile | 12 pages, 13 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Disentangling the mechanisms underlying the social network evolution is one
of social science's unsolved puzzles. Preferential attachment is a powerful
mechanism explaining social network dynamics, yet it is not able to explain all
scaling-laws in social networks. Recent advances in understanding social
network dynamics demonstrate that several scaling-laws in social networks
follow as natural consequences of triadic closure. Macroscopic comparisons
between them are discussed empirically in many works. However, the network
evolution drives not only the emergence of macroscopic scaling but also the
microscopic behaviors. Here we exploit two fundamental aspects of the network
microscopic evolution: the individual influence evolution and the process of
link formation. First we develop a novel framework for the microscopic
evolution, where the mechanisms of preferential attachment and triadic closure
are well balanced. Then on four real-world datasets we apply our approach for
two microscopic problems: node's prominence prediction and link prediction,
where our method yields significant predictive improvement over baseline
solutions. Finally to be rigorous and comprehensive, we further observe that
our framework has a stronger generalization capacity across different kinds of
social networks for two microscopic prediction problems. We unveil the
significant factors with a greater degree of precision than has heretofore been
possible, and shed new light on networks evolution.
| [
{
"version": "v1",
"created": "Sun, 6 Oct 2013 01:17:13 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Oct 2013 03:54:19 GMT"
},
{
"version": "v3",
"created": "Thu, 18 Sep 2014 13:15:46 GMT"
}
] | 2014-09-19T00:00:00 | [
[
"Yang",
"Yang",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Chawla",
"Nitesh V.",
""
]
] | TITLE: Microscopic Evolution of Social Networks by Triad Position Profile
ABSTRACT: Disentangling the mechanisms underlying the social network evolution is one
of social science's unsolved puzzles. Preferential attachment is a powerful
mechanism explaining social network dynamics, yet it is not able to explain all
scaling-laws in social networks. Recent advances in understanding social
network dynamics demonstrate that several scaling-laws in social networks
follow as natural consequences of triadic closure. Macroscopic comparisons
between them are discussed empirically in many works. However, the network
evolution drives not only the emergence of macroscopic scaling but also the
microscopic behaviors. Here we exploit two fundamental aspects of the network
microscopic evolution: the individual influence evolution and the process of
link formation. First we develop a novel framework for the microscopic
evolution, where the mechanisms of preferential attachment and triadic closure
are well balanced. Then on four real-world datasets we apply our approach for
two microscopic problems: node's prominence prediction and link prediction,
where our method yields significant predictive improvement over baseline
solutions. Finally to be rigorous and comprehensive, we further observe that
our framework has a stronger generalization capacity across different kinds of
social networks for two microscopic prediction problems. We unveil the
significant factors with a greater degree of precision than has heretofore been
possible, and shed new light on networks evolution.
| no_new_dataset | 0.950273 |
1310.6288 | Hao Zhang | Hao Zhang and Liqing Zhang | Spatial-Spectral Boosting Analysis for Stroke Patients' Motor Imagery
EEG in Rehabilitation Training | 10 pages,3 figures | null | 10.3233/978-1-61499-419-0-537 | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current studies about motor imagery based rehabilitation training systems for
stroke subjects lack an appropriate analytic method that can achieve
considerable classification accuracy, detect gradual changes
of imagery patterns during the rehabilitation process, and uncover potential
mechanisms of motor function recovery. In this study, we propose an adaptive
boosting algorithm based on the cortex plasticity and spectral band shifts.
This approach models the usually predetermined spatial-spectral configurations
in EEG study into variable preconditions, and introduces a new heuristic of
stochastic gradient boost for training base learners under these preconditions.
We compare our proposed algorithm with commonly used methods on datasets
collected from 2 months' clinical experiments. The simulation results
demonstrate the effectiveness of the method in detecting the variations of
stroke patients' EEG patterns. By chronologically reorganizing the weight
parameters of the learned additive model, we verify the spatial compensatory
mechanism on impaired cortex and detect the changes of accentuation bands in
spectral domain, which may contribute important prior knowledge for
rehabilitation practice.
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2013 16:43:59 GMT"
}
] | 2014-09-19T00:00:00 | [
[
"Zhang",
"Hao",
""
],
[
"Zhang",
"Liqing",
""
]
] | TITLE: Spatial-Spectral Boosting Analysis for Stroke Patients' Motor Imagery
EEG in Rehabilitation Training
ABSTRACT: Current studies about motor imagery based rehabilitation training systems for
stroke subjects lack an appropriate analytic method that can achieve
considerable classification accuracy, detect gradual changes
of imagery patterns during the rehabilitation process, and uncover potential
mechanisms of motor function recovery. In this study, we propose an adaptive
boosting algorithm based on the cortex plasticity and spectral band shifts.
This approach models the usually predetermined spatial-spectral configurations
in EEG study into variable preconditions, and introduces a new heuristic of
stochastic gradient boost for training base learners under these preconditions.
We compare our proposed algorithm with commonly used methods on datasets
collected from 2 months' clinical experiments. The simulation results
demonstrate the effectiveness of the method in detecting the variations of
stroke patients' EEG patterns. By chronologically reorganizing the weight
parameters of the learned additive model, we verify the spatial compensatory
mechanism on impaired cortex and detect the changes of accentuation bands in
spectral domain, which may contribute important prior knowledge for
rehabilitation practice.
| no_new_dataset | 0.948917 |
1405.6874 | Sebastian Deorowicz | Szymon Grabowski, Sebastian Deorowicz, {\L}ukasz Roguski | Disk-based genome sequencing data compression | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: High-coverage sequencing data have significant, yet hard to
exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA
stream of large datasets, since the redundancy between overlapping reads cannot
be easily captured in the (relatively small) main memory. More interesting
solutions for this problem are disk-based~(Yanovsky, 2011; Cox et al., 2012),
where the better of these two, from Cox~{\it et al.}~(2012), is based on the
Burrows--Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0
Gb human genome sequencing collection with almost 45-fold coverage.
Results: We propose ORCOM (Overlapping Reads COmpression with Minimizers), a
compression algorithm dedicated to sequencing reads (DNA only). Our method
makes use of a conceptually simple and easily parallelizable idea of
minimizers, to obtain 0.317 bits per base as the compression ratio, allowing to
fit the 134.0 Gb dataset into only 5.31 GB of space.
Availability: http://sun.aei.polsl.pl/orcom under a free license.
| [
{
"version": "v1",
"created": "Tue, 27 May 2014 11:34:35 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Sep 2014 17:41:36 GMT"
}
] | 2014-09-19T00:00:00 | [
[
"Grabowski",
"Szymon",
""
],
[
"Deorowicz",
"Sebastian",
""
],
[
"Roguski",
"Łukasz",
""
]
] | TITLE: Disk-based genome sequencing data compression
ABSTRACT: Motivation: High-coverage sequencing data have significant, yet hard to
exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA
stream of large datasets, since the redundancy between overlapping reads cannot
be easily captured in the (relatively small) main memory. More interesting
solutions for this problem are disk-based~(Yanovsky, 2011; Cox et al., 2012),
where the better of these two, from Cox~{\it et al.}~(2012), is based on the
Burrows--Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0
Gb human genome sequencing collection with almost 45-fold coverage.
Results: We propose ORCOM (Overlapping Reads COmpression with Minimizers), a
compression algorithm dedicated to sequencing reads (DNA only). Our method
makes use of a conceptually simple and easily parallelizable idea of
minimizers, to obtain 0.317 bits per base as the compression ratio, allowing us to
fit the 134.0 Gb dataset into only 5.31 GB of space.
Availability: http://sun.aei.polsl.pl/orcom under a free license.
| no_new_dataset | 0.943295 |
1409.5165 | Michael Bloodgood | Michael Bloodgood and K. Vijay-Shanker | A Method for Stopping Active Learning Based on Stabilizing Predictions
and the Need for User-Adjustable Stopping | 9 pages, 3 figures, 5 tables; appeared in Proceedings of the
Thirteenth Conference on Computational Natural Language Learning
(CoNLL-2009), June 2009 | In Proceedings of the Thirteenth Conference on Computational
Natural Language Learning (CoNLL-2009), pages 39-47, Boulder, Colorado, June
2009. Association for Computational Linguistics | null | null | cs.LG cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A survey of existing methods for stopping active learning (AL) reveals the
need for methods that are: more widely applicable; more aggressive in saving
annotations; and more stable across changing datasets. A new method for
stopping AL based on stabilizing predictions is presented that addresses these
needs. Furthermore, stopping methods are required to handle a broad range of
different annotation/performance tradeoff valuations. Despite this, the
existing body of work is dominated by conservative methods with little (if any)
attention paid to providing users with control over the behavior of stopping
methods. The proposed method is shown to fill a gap in the level of
aggressiveness available for stopping AL and supports providing users with
control over stopping behavior.
| [
{
"version": "v1",
"created": "Wed, 17 Sep 2014 23:28:59 GMT"
}
] | 2014-09-19T00:00:00 | [
[
"Bloodgood",
"Michael",
""
],
[
"Vijay-Shanker",
"K.",
""
]
] | TITLE: A Method for Stopping Active Learning Based on Stabilizing Predictions
and the Need for User-Adjustable Stopping
ABSTRACT: A survey of existing methods for stopping active learning (AL) reveals the
need for methods that are: more widely applicable; more aggressive in saving
annotations; and more stable across changing datasets. A new method for
stopping AL based on stabilizing predictions is presented that addresses these
needs. Furthermore, stopping methods are required to handle a broad range of
different annotation/performance tradeoff valuations. Despite this, the
existing body of work is dominated by conservative methods with little (if any)
attention paid to providing users with control over the behavior of stopping
methods. The proposed method is shown to fill a gap in the level of
aggressiveness available for stopping AL and supports providing users with
control over stopping behavior.
| no_new_dataset | 0.94887 |
1406.6811 | Fumin Shen | Fumin Shen, Chunhua Shen and Heng Tao Shen | Face Image Classification by Pooling Raw Features | 12 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a very simple, efficient yet surprisingly effective feature
extraction method for face recognition (about 20 lines of Matlab code), which
is mainly inspired by spatial pyramid pooling in generic image classification.
We show that features formed by simply pooling local patches over a multi-level
pyramid, coupled with a linear classifier, can significantly outperform most
recent face recognition methods. The simplicity of our feature extraction
procedure is demonstrated by the fact that no learning is involved (except PCA
whitening). We show that multi-level spatial pooling and dense extraction of
multi-scale patches play critical roles in face image classification. The
extracted facial features can capture strong structural information of
individual faces with no label information being used. We also find that,
pre-processing on local image patches such as contrast normalization can have
an important impact on the classification accuracy. In particular, on the
challenging face recognition datasets of FERET and LFW-a, our method improves
previous best results by more than 10% and 20%, respectively.
| [
{
"version": "v1",
"created": "Thu, 26 Jun 2014 08:56:55 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Sep 2014 06:40:29 GMT"
}
] | 2014-09-18T00:00:00 | [
[
"Shen",
"Fumin",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Shen",
"Heng Tao",
""
]
] | TITLE: Face Image Classification by Pooling Raw Features
ABSTRACT: We propose a very simple, efficient yet surprisingly effective feature
extraction method for face recognition (about 20 lines of Matlab code), which
is mainly inspired by spatial pyramid pooling in generic image classification.
We show that features formed by simply pooling local patches over a multi-level
pyramid, coupled with a linear classifier, can significantly outperform most
recent face recognition methods. The simplicity of our feature extraction
procedure is demonstrated by the fact that no learning is involved (except PCA
whitening). We show that multi-level spatial pooling and dense extraction of
multi-scale patches play critical roles in face image classification. The
extracted facial features can capture strong structural information of
individual faces with no label information being used. We also find that,
pre-processing on local image patches such as contrast normalization can have
an important impact on the classification accuracy. In particular, on the
challenging face recognition datasets of FERET and LFW-a, our method improves
previous best results by more than 10% and 20%, respectively.
| no_new_dataset | 0.948775 |
1409.4936 | Anthony Bagnall Dr | Anthony Bagnall and Reda Younsi | Ensembles of Random Sphere Cover Classifiers | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose and evaluate alternative ensemble schemes for a new instance based
learning classifier, the Randomised Sphere Cover (RSC) classifier. RSC fuses
instances into spheres, then bases classification on distance to spheres rather
than distance to instances. The randomised nature of RSC makes it ideal for use
in ensembles. We propose two ensemble methods tailored to the RSC classifier;
$\alpha \beta$RSE, an ensemble based on instance resampling and $\alpha$RSSE, a
subspace ensemble. We compare $\alpha \beta$RSE and $\alpha$RSSE to tree based
ensembles on a set of UCI datasets and demonstrate that RSC ensembles perform
significantly better than some of these ensembles, and not significantly worse
than the others. We demonstrate via a case study on six gene expression data
sets that $\alpha$RSSE can outperform other subspace ensemble methods on high
dimensional data when used in conjunction with an attribute filter. Finally, we
perform a set of Bias/Variance decomposition experiments to analyse the source
of improvement in comparison to a base classifier.
| [
{
"version": "v1",
"created": "Wed, 17 Sep 2014 10:18:34 GMT"
}
] | 2014-09-18T00:00:00 | [
[
"Bagnall",
"Anthony",
""
],
[
"Younsi",
"Reda",
""
]
] | TITLE: Ensembles of Random Sphere Cover Classifiers
ABSTRACT: We propose and evaluate alternative ensemble schemes for a new instance based
learning classifier, the Randomised Sphere Cover (RSC) classifier. RSC fuses
instances into spheres, then bases classification on distance to spheres rather
than distance to instances. The randomised nature of RSC makes it ideal for use
in ensembles. We propose two ensemble methods tailored to the RSC classifier;
$\alpha \beta$RSE, an ensemble based on instance resampling and $\alpha$RSSE, a
subspace ensemble. We compare $\alpha \beta$RSE and $\alpha$RSSE to tree based
ensembles on a set of UCI datasets and demonstrate that RSC ensembles perform
significantly better than some of these ensembles, and not significantly worse
than the others. We demonstrate via a case study on six gene expression data
sets that $\alpha$RSSE can outperform other subspace ensemble methods on high
dimensional data when used in conjunction with an attribute filter. Finally, we
perform a set of Bias/Variance decomposition experiments to analyse the source
of improvement in comparison to a base classifier.
| no_new_dataset | 0.951639 |
1409.5020 | Matteo Rucco | Matteo Rucco, Lorenzo Falsetti, Damir Herman, Tanya Petrossian,
Emanuela Merelli, Cinzia Nitti and Aldo Salvi | Using Topological Data Analysis for diagnosis pulmonary embolism | 18 pages, 5 figures, 6 tables. arXiv admin note: text overlap with
arXiv:cs/0308031 by other authors without attribution | null | null | null | physics.med-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pulmonary Embolism (PE) is a common and potentially lethal condition. Most
patients die within the first few hours from the event. Despite diagnostic
advances, delays and underdiagnosis in PE are common. To increase the diagnostic
performance in PE, current diagnostic work-up of patients with suspected acute
pulmonary embolism usually starts with the assessment of clinical pretest
probability using plasma d-Dimer measurement and clinical prediction rules. The
most validated and widely used clinical decision rules are the Wells and Geneva
Revised scores. We aimed to develop a new clinical prediction rule (CPR) for PE
based on topological data analysis and artificial neural network. Filter or
wrapper methods for feature reduction cannot be applied to our dataset: the
application of these algorithms can only be performed on datasets without
missing data. Instead, we applied Topological data analysis (TDA) to overcome
the hurdle of processing datasets with null values (missing data). A topological
network was developed using the Iris software (Ayasdi, Inc., Palo Alto). The PE
patient topology identified two areas in the pathological group and hence two
distinct clusters of PE patient populations. Additionally, the topological
network detected several sub-groups among healthy patients that likely are
affected with non-PE diseases. TDA was further utilized to identify key
features which are best associated as diagnostic factors for PE and used this
information to define the input space for a back-propagation artificial neural
network (BP-ANN). It is shown that the area under curve (AUC) of BP-ANN is
greater than the AUCs of the scores (Wells and revised Geneva) used among
physicians. The results demonstrate topological data analysis and the BP-ANN,
when used in combination, can produce better predictive models than Wells or
revised Geneva scores system for the analyzed cohort
| [
{
"version": "v1",
"created": "Wed, 17 Sep 2014 15:08:15 GMT"
}
] | 2014-09-18T00:00:00 | [
[
"Rucco",
"Matteo",
""
],
[
"Falsetti",
"Lorenzo",
""
],
[
"Herman",
"Damir",
""
],
[
"Petrossian",
"Tanya",
""
],
[
"Merelli",
"Emanuela",
""
],
[
"Nitti",
"Cinzia",
""
],
[
"Salvi",
"Aldo",
""
]
] | TITLE: Using Topological Data Analysis for diagnosis pulmonary embolism
ABSTRACT: Pulmonary Embolism (PE) is a common and potentially lethal condition. Most
patients die within the first few hours from the event. Despite diagnostic
advances, delays and underdiagnosis in PE are common. To increase the diagnostic
performance in PE, current diagnostic work-up of patients with suspected acute
pulmonary embolism usually starts with the assessment of clinical pretest
probability using plasma d-Dimer measurement and clinical prediction rules. The
most validated and widely used clinical decision rules are the Wells and Geneva
Revised scores. We aimed to develop a new clinical prediction rule (CPR) for PE
based on topological data analysis and artificial neural network. Filter or
wrapper methods for feature reduction cannot be applied to our dataset: the
application of these algorithms can only be performed on datasets without
missing data. Instead, we applied Topological data analysis (TDA) to overcome
the hurdle of processing datasets with null values (missing data). A topological
network was developed using the Iris software (Ayasdi, Inc., Palo Alto). The PE
patient topology identified two areas in the pathological group and hence two
distinct clusters of PE patient populations. Additionally, the topological
network detected several sub-groups among healthy patients that likely are
affected with non-PE diseases. TDA was further utilized to identify key
features which are best associated as diagnostic factors for PE and used this
information to define the input space for a back-propagation artificial neural
network (BP-ANN). It is shown that the area under the curve (AUC) of the BP-ANN is
greater than the AUCs of the scores (Wells and revised Geneva) used among
physicians. The results demonstrate that topological data analysis and the BP-ANN,
when used in combination, can produce better predictive models than the Wells or
revised Geneva scoring systems for the analyzed cohort.
| no_new_dataset | 0.958847 |
1409.5034 | Faraz Zaidi | Faraz Zaidi, Chris Muelder, Arnaud Sallaberry | Analysis and Visualization of Dynamic Networks | Book chapter | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This chapter provides an overview of the different techniques and methods
that exist for the analysis and visualization of dynamic networks. Basic
definitions and formal notations are discussed and important references are
cited.
A major reason for the popularity of the field of dynamic networks is its
applicability in a number of diverse fields. The field of dynamic networks is
in its infancy and there are so many avenues that need to be explored. From
developing network generation models to developing temporal metrics and
measures, from structural analysis to visual analysis, there is room for
further exploration in almost every dimension where dynamic networks are
studied. Recently, with the availability of dynamic data from various fields,
the empirical study and experimentation with real data sets has also helped
maturate the field. Furthermore, researchers have started to develop
foundations and theories based on these datasets which in turn has resulted
lots of activity among research communities.
| [
{
"version": "v1",
"created": "Wed, 17 Sep 2014 15:40:01 GMT"
}
] | 2014-09-18T00:00:00 | [
[
"Zaidi",
"Faraz",
""
],
[
"Muelder",
"Chris",
""
],
[
"Sallaberry",
"Arnaud",
""
]
] | TITLE: Analysis and Visualization of Dynamic Networks
ABSTRACT: This chapter provides an overview of the different techniques and methods
that exist for the analysis and visualization of dynamic networks. Basic
definitions and formal notations are discussed and important references are
cited.
A major reason for the popularity of the field of dynamic networks is its
applicability in a number of diverse fields. The field of dynamic networks is
in its infancy and there are so many avenues that need to be explored. From
developing network generation models to developing temporal metrics and
measures, from structural analysis to visual analysis, there is room for
further exploration in almost every dimension where dynamic networks are
studied. Recently, with the availability of dynamic data from various fields,
the empirical study and experimentation with real data sets has also helped
maturate the field. Furthermore, researchers have started to develop
foundations and theories based on these datasets which in turn has resulted
lots of activity among research communities.
| no_new_dataset | 0.941708 |
1310.2963 | Michael Szell | Paolo Santi, Giovanni Resta, Michael Szell, Stanislav Sobolevsky,
Steven Strogatz, Carlo Ratti | Quantifying the benefits of vehicle pooling with shareability networks | Main text: 6 pages, 3 figures, SI: 24 pages | PNAS 111(37), 13290-13294 (2014) | 10.1073/pnas.1403657111 | null | physics.soc-ph cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Taxi services are a vital part of urban transportation, and a considerable
contributor to traffic congestion and air pollution causing substantial adverse
effects on human health. Sharing taxi trips is a possible way of reducing the
negative impact of taxi services on cities, but this comes at the expense of
passenger discomfort quantifiable in terms of a longer travel time. Due to
computational challenges, taxi sharing has traditionally been approached on
small scales, such as within airport perimeters, or with dynamical ad-hoc
heuristics. However, a mathematical framework for the systematic understanding
of the tradeoff between collective benefits of sharing and individual passenger
discomfort is lacking. Here we introduce the notion of shareability network
which allows us to model the collective benefits of sharing as a function of
passenger inconvenience, and to efficiently compute optimal sharing strategies
on massive datasets. We apply this framework to a dataset of millions of taxi
trips taken in New York City, showing that with increasing but still relatively
low passenger discomfort, cumulative trip length can be cut by 40% or more.
This benefit comes with reductions in service cost, emissions, and with split
fares, hinting towards a wide passenger acceptance of such a shared service.
Simulation of a realistic online system demonstrates the feasibility of a
shareable taxi service in New York City. Shareability as a function of trip
density saturates fast, suggesting effectiveness of the taxi sharing system
also in cities with much sparser taxi fleets or when willingness to share is
low.
| [
{
"version": "v1",
"created": "Thu, 10 Oct 2013 20:56:56 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Sep 2014 19:48:38 GMT"
}
] | 2014-09-17T00:00:00 | [
[
"Santi",
"Paolo",
""
],
[
"Resta",
"Giovanni",
""
],
[
"Szell",
"Michael",
""
],
[
"Sobolevsky",
"Stanislav",
""
],
[
"Strogatz",
"Steven",
""
],
[
"Ratti",
"Carlo",
""
]
] | TITLE: Quantifying the benefits of vehicle pooling with shareability networks
ABSTRACT: Taxi services are a vital part of urban transportation, and a considerable
contributor to traffic congestion and air pollution causing substantial adverse
effects on human health. Sharing taxi trips is a possible way of reducing the
negative impact of taxi services on cities, but this comes at the expense of
passenger discomfort quantifiable in terms of a longer travel time. Due to
computational challenges, taxi sharing has traditionally been approached on
small scales, such as within airport perimeters, or with dynamical ad-hoc
heuristics. However, a mathematical framework for the systematic understanding
of the tradeoff between collective benefits of sharing and individual passenger
discomfort is lacking. Here we introduce the notion of shareability network
which allows us to model the collective benefits of sharing as a function of
passenger inconvenience, and to efficiently compute optimal sharing strategies
on massive datasets. We apply this framework to a dataset of millions of taxi
trips taken in New York City, showing that with increasing but still relatively
low passenger discomfort, cumulative trip length can be cut by 40% or more.
This benefit comes with reductions in service cost, emissions, and with split
fares, hinting towards a wide passenger acceptance of such a shared service.
Simulation of a realistic online system demonstrates the feasibility of a
shareable taxi service in New York City. Shareability as a function of trip
density saturates fast, suggesting effectiveness of the taxi sharing system
also in cities with much sparser taxi fleets or when willingness to share is
low.
| no_new_dataset | 0.898143 |
1401.7047 | Jonathan Tu | Jonathan H. Tu, Clarence W. Rowley, J. Nathan Kutz, and Jessica K.
Shang | Toward compressed DMD: spectral analysis of fluid flows using
sub-Nyquist-rate PIV data | null | Exp. Fluids 55(9):1805 (2014) | 10.1007/s00348-014-1805-6 | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic mode decomposition (DMD) is a powerful and increasingly popular tool
for performing spectral analysis of fluid flows. However, it requires data that
satisfy the Nyquist-Shannon sampling criterion. In many fluid flow experiments,
such data are impossible to capture. We propose a new approach that combines
ideas from DMD and compressed sensing. Given a vector-valued signal, we take
measurements randomly in time (at a sub-Nyquist rate) and project the data onto
a low-dimensional subspace. We then use compressed sensing to identify the
dominant frequencies in the signal and their corresponding modes. We
demonstrate this method using two examples, analyzing both an artificially
constructed test dataset and particle image velocimetry data collected from the
flow past a cylinder. In each case, our method correctly identifies the
characteristic frequencies and oscillatory modes dominating the signal, proving
the proposed method to be a capable tool for spectral analysis using
sub-Nyquist-rate sampling.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2014 23:30:17 GMT"
}
] | 2014-09-17T00:00:00 | [
[
"Tu",
"Jonathan H.",
""
],
[
"Rowley",
"Clarence W.",
""
],
[
"Kutz",
"J. Nathan",
""
],
[
"Shang",
"Jessica K.",
""
]
] | TITLE: Toward compressed DMD: spectral analysis of fluid flows using
sub-Nyquist-rate PIV data
ABSTRACT: Dynamic mode decomposition (DMD) is a powerful and increasingly popular tool
for performing spectral analysis of fluid flows. However, it requires data that
satisfy the Nyquist-Shannon sampling criterion. In many fluid flow experiments,
such data are impossible to capture. We propose a new approach that combines
ideas from DMD and compressed sensing. Given a vector-valued signal, we take
measurements randomly in time (at a sub-Nyquist rate) and project the data onto
a low-dimensional subspace. We then use compressed sensing to identify the
dominant frequencies in the signal and their corresponding modes. We
demonstrate this method using two examples, analyzing both an artificially
constructed test dataset and particle image velocimetry data collected from the
flow past a cylinder. In each case, our method correctly identifies the
characteristic frequencies and oscillatory modes dominating the signal, proving
the proposed method to be a capable tool for spectral analysis using
sub-Nyquist-rate sampling.
| new_dataset | 0.958731 |
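
As background for the record above, the following is a minimal sketch of
standard (fully sampled) dynamic mode decomposition via the SVD on a synthetic
signal; the compressed-sensing, sub-Nyquist sampling step that is the paper's
actual contribution is not reproduced here, and the test signal and truncation
rank are assumptions made for illustration.

# Minimal standard DMD sketch via the SVD on a fully sampled synthetic signal.
import numpy as np

def dmd(snapshots, dt, r=None):
    """snapshots: (n_space, n_time) array of equally spaced samples."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:                       # optional truncation rank
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(Atilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W   # exact DMD modes
    freqs = np.log(eigvals).imag / (2 * np.pi * dt)  # oscillation frequencies (Hz)
    return freqs, modes

# Synthetic test signal: two travelling waves at 1.3 Hz and 4.1 Hz, rank 4.
t = np.arange(0, 10, 0.01)
x = np.linspace(0, 1, 64)[:, None]
data = (np.exp(2j * np.pi * 2 * x) * np.exp(2j * np.pi * 1.3 * t)).real \
     + (np.exp(2j * np.pi * 5 * x) * np.exp(2j * np.pi * 4.1 * t)).real
freqs, _ = dmd(data, dt=0.01, r=4)
print(sorted(round(abs(f), 2) for f in freqs))   # expect ~1.3 and ~4.1 (in +/- pairs)
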
1407.6705 | Reza Azad | Reza Azad, Hamid Reza Shayegh, Hamed Amiri | A Robust and Efficient Method for Improving Accuracy of License Plate
Characters Recognition | This paper has been withdrawn by the author due to a crucial sign
error in equation 1 and some mistake | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | License Plate Recognition (LPR) plays an important role in traffic
monitoring and parking management. A robust and efficient method for enhancing
the accuracy of license plate character recognition, based on a K Nearest
Neighbours (K-NN) classifier, is presented in this paper. The system first
prepares a contour form of the extracted character, then the angle and distance
feature information about the character is extracted, and finally a K-NN
classifier is used for character recognition. Angle and distance features of a
character have been computed based on the distribution of points on the bitmap
image of the character. In the K-NN method, the Euclidean distance between the
testing point and the reference points is calculated in order to find the k
nearest neighbours. We evaluated our method on an available dataset that
contains 1200 samples. Using 70% of the samples for training, we tested our
method on all samples and obtained a 99% correct recognition rate. Further, we
achieved an average accuracy of 99.41% using a three-strategy validation
technique on the 1200-sample dataset.
| [
{
"version": "v1",
"created": "Thu, 24 Jul 2014 09:26:01 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Sep 2014 07:28:21 GMT"
}
] | 2014-09-17T00:00:00 | [
[
"Azad",
"Reza",
""
],
[
"Shayegh",
"Hamid Reza",
""
],
[
"Amiri",
"Hamed",
""
]
] | TITLE: A Robust and Efficient Method for Improving Accuracy of License Plate
Characters Recognition
ABSTRACT: License Plate Recognition (LPR) plays an important role in traffic
monitoring and parking management. A robust and efficient method for enhancing
the accuracy of license plate character recognition, based on a K Nearest
Neighbours (K-NN) classifier, is presented in this paper. The system first
prepares a contour form of the extracted character, then the angle and distance
feature information about the character is extracted, and finally a K-NN
classifier is used for character recognition. Angle and distance features of a
character have been computed based on the distribution of points on the bitmap
image of the character. In the K-NN method, the Euclidean distance between the
testing point and the reference points is calculated in order to find the k
nearest neighbours. We evaluated our method on an available dataset that
contains 1200 samples. Using 70% of the samples for training, we tested our
method on all samples and obtained a 99% correct recognition rate. Further, we
achieved an average accuracy of 99.41% using a three-strategy validation
technique on the 1200-sample dataset.
| no_new_dataset | 0.956472 |
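
The classification stage described in the record above can be sketched with
scikit-learn as follows; the feature vectors are random stand-ins for the
angle/distance contour features, and the class count, feature dimension and
70% training split are assumptions chosen only to mirror the description.

# Hedged sketch of K-NN character recognition on stand-in feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 10, 120, 16   # ~1200 samples, as above
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# 70% of the samples for training, mirroring the evaluation protocol above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0,
                                          stratify=y)
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(X_tr, y_tr)
print("accuracy:", knn.score(X_te, y_te))
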
1409.4403 | Jianguo Liu | Lei Hou, Xue Pan, Qiang Guo, Jian-Guo Liu | Memory effect of the online user preference | 22 pages, 5 figures, Scientific Reports 2014 Accepted | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mechanism of the online user preference evolution is of great
significance for understanding the online user behaviors and improving the
quality of online services. Since users are allowed to rate objects in many
online systems, ratings can well reflect the users' preference. With two
benchmark datasets from online systems, we uncover the memory effect in users'
selecting behavior which is the sequence of qualities of selected objects and
the rating behavior which is the sequence of ratings delivered by each user.
Furthermore, the memory duration is presented to describe the length of a
memory, which exhibits a power-law distribution, i.e., the probability of the
occurrence of long-duration memories is much higher than in the random case,
which follows an exponential distribution. We present a preference model in
which a Markovian process is utilized to describe the users' selecting
behavior, and the rating behavior depends on the selecting behavior. With only
one parameter for each of the user's selecting and rating behavior, the
preference model could regenerate any duration distribution ranging from the
power-law form (strong memory) to the exponential form (weak memory).
| [
{
"version": "v1",
"created": "Sat, 13 Sep 2014 11:55:08 GMT"
}
] | 2014-09-17T00:00:00 | [
[
"Hou",
"Lei",
""
],
[
"Pan",
"Xue",
""
],
[
"Guo",
"Qiang",
""
],
[
"Liu",
"Jian-Guo",
""
]
] | TITLE: Memory effect of the online user preference
ABSTRACT: The mechanism of the online user preference evolution is of great
significance for understanding the online user behaviors and improving the
quality of online services. Since users are allowed to rate objects in many
online systems, ratings can well reflect the users' preference. With two
benchmark datasets from online systems, we uncover the memory effect in users'
selecting behavior which is the sequence of qualities of selected objects and
the rating behavior which is the sequence of ratings delivered by each user.
Furthermore, the memory duration is presented to describe the length of a
memory, which exhibits a power-law distribution, i.e., the probability of the
occurrence of long-duration memories is much higher than in the random case,
which follows an exponential distribution. We present a preference model in
which a Markovian process is utilized to describe the users' selecting
behavior, and the rating behavior depends on the selecting behavior. With only
one parameter for each of the user's selecting and rating behavior, the
preference model could regenerate any duration distribution ranging from the
power-law form (strong memory) to the exponential form (weak memory).
| no_new_dataset | 0.956513 |
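
One simple way to operationalise the "memory duration" notion from the record
above is sketched below: a duration is taken to be the length of a run of
consecutive ratings on the same side of the user's mean rating, computed here
on synthetic ratings. This definition and the data are assumptions for
illustration, not the paper's exact construction.

# Rough illustration of measuring memory durations in a rating sequence.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=2000)            # synthetic 1-5 star ratings

def run_lengths(signs):
    lengths, current = [], 1
    for a, b in zip(signs[:-1], signs[1:]):
        if a == b:
            current += 1
        else:
            lengths.append(current)
            current = 1
    lengths.append(current)
    return lengths

signs = ratings >= ratings.mean()                  # above / below the mean
durations = run_lengths(list(signs))
hist = Counter(durations)
for d in sorted(hist):
    print(f"duration {d}: {hist[d] / len(durations):.3f}")
# For i.i.d. ratings this distribution decays exponentially; the paper reports
# a much heavier (power-law) tail on real data, i.e. a memory effect.
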
1409.4438 | Manoj Gulati | Manoj Gulati, Shobha Sundar Ram, Amarjeet Singh | An In Depth Study into Using EMI Signatures for Appliance Identification | null | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Energy conservation is a key factor towards long term energy sustainability.
Real-time end user energy feedback, using disaggregated electric load
composition, can play a pivotal role in motivating consumers towards energy
conservation. Recent works have explored using high frequency conducted
electromagnetic interference (EMI) on power lines as a single point sensing
parameter for monitoring common home appliances. However, key questions
regarding the reliability and feasibility of using EMI signatures for
non-intrusive load monitoring over multiple appliances across different sensing
paradigms remain unanswered. This work presents some of the key challenges
towards using EMI as a unique and time invariant feature for load
disaggregation. In-depth empirical evaluations of a large number of appliances
in different sensing configurations are carried out, in both laboratory and
real world settings. Insights into the effects of external parameters such as
line impedance, background noise and appliance coupling on the EMI behavior of
an appliance are realized through simulations and measurements. A generic
approach for simulating the EMI behavior of an appliance that can then be used
to do a detailed analysis of real world phenomenology is presented. The
simulation approach is validated with EMI data from a router. Our EMI dataset -
High Frequency EMI Dataset (HFED) is also released.
| [
{
"version": "v1",
"created": "Mon, 15 Sep 2014 20:19:46 GMT"
}
] | 2014-09-17T00:00:00 | [
[
"Gulati",
"Manoj",
""
],
[
"Ram",
"Shobha Sundar",
""
],
[
"Singh",
"Amarjeet",
""
]
] | TITLE: An In Depth Study into Using EMI Signatures for Appliance Identification
ABSTRACT: Energy conservation is a key factor towards long term energy sustainability.
Real-time end user energy feedback, using disaggregated electric load
composition, can play a pivotal role in motivating consumers towards energy
conservation. Recent works have explored using high frequency conducted
electromagnetic interference (EMI) on power lines as a single point sensing
parameter for monitoring common home appliances. However, key questions
regarding the reliability and feasibility of using EMI signatures for
non-intrusive load monitoring over multiple appliances across different sensing
paradigms remain unanswered. This work presents some of the key challenges
towards using EMI as a unique and time invariant feature for load
disaggregation. In-depth empirical evaluations of a large number of appliances
in different sensing configurations are carried out, in both laboratory and
real world settings. Insights into the effects of external parameters such as
line impedance, background noise and appliance coupling on the EMI behavior of
an appliance are realized through simulations and measurements. A generic
approach for simulating the EMI behavior of an appliance that can then be used
to do a detailed analysis of real world phenomenology is presented. The
simulation approach is validated with EMI data from a router. Our EMI dataset -
High Frequency EMI Dataset (HFED) is also released.
| new_dataset | 0.76207 |
1409.4481 | Aniket Bera | Aniket Bera, David Wolinski, Julien Pettr\'e, Dinesh Manocha | Real-time Crowd Tracking using Parameter Optimized Mixture of Motion
Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel, real-time algorithm to track the trajectory of each
pedestrian in moderately dense crowded scenes. Our formulation is based on an
adaptive particle-filtering scheme that uses a combination of various
multi-agent heterogeneous pedestrian simulation models. We automatically
compute the optimal parameters for each of these different models based on
prior tracked data and use the best model as motion prior for our
particle-filter based tracking algorithm. We also use our "mixture of motion
models" for adaptive particle selection and accelerate the performance of the
online tracking algorithm. The motion model parameter estimation is formulated
as an optimization problem, and we use an approach that solves this
combinatorial optimization problem in a model independent manner and hence
scalable to any multi-agent pedestrian motion model. We evaluate the
performance of our approach on different crowd video datasets and highlight the
improvement in accuracy over homogeneous motion models and a baseline
mean-shift based tracker. In practice, our formulation can compute trajectories
of tens of pedestrians on a multi-core desktop CPU in real time and offer
higher accuracy as compared to prior real time pedestrian tracking algorithms.
| [
{
"version": "v1",
"created": "Tue, 16 Sep 2014 01:36:52 GMT"
}
] | 2014-09-17T00:00:00 | [
[
"Bera",
"Aniket",
""
],
[
"Wolinski",
"David",
""
],
[
"Pettré",
"Julien",
""
],
[
"Manocha",
"Dinesh",
""
]
] | TITLE: Real-time Crowd Tracking using Parameter Optimized Mixture of Motion
Models
ABSTRACT: We present a novel, real-time algorithm to track the trajectory of each
pedestrian in moderately dense crowded scenes. Our formulation is based on an
adaptive particle-filtering scheme that uses a combination of various
multi-agent heterogeneous pedestrian simulation models. We automatically
compute the optimal parameters for each of these different models based on
prior tracked data and use the best model as motion prior for our
particle-filter based tracking algorithm. We also use our "mixture of motion
models" for adaptive particle selection and accelerate the performance of the
online tracking algorithm. The motion model parameter estimation is formulated
as an optimization problem, and we use an approach that solves this
combinatorial optimization problem in a model independent manner and hence
scalable to any multi-agent pedestrian motion model. We evaluate the
performance of our approach on different crowd video datasets and highlight the
improvement in accuracy over homogeneous motion models and a baseline
mean-shift based tracker. In practice, our formulation can compute trajectories
of tens of pedestrians on a multi-core desktop CPU in real time and offer
higher accuracy as compared to prior real time pedestrian tracking algorithms.
| no_new_dataset | 0.948965 |
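
A bare-bones version of the particle-filtering skeleton referenced in the
record above is sketched here for a single pedestrian with a constant-velocity
motion prior; the mixture of heterogeneous crowd-simulation models and the
online parameter optimisation that the paper proposes are omitted, and all
noise levels are assumed values.

# Minimal bootstrap particle filter for one pedestrian (constant velocity).
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_steps, dt = 500, 30, 1.0
true_state = np.array([0.0, 0.0, 1.0, 0.5])        # x, y, vx, vy

particles = rng.normal(true_state, 0.5, size=(n_particles, 4))

for _ in range(n_steps):
    true_state[:2] += true_state[2:] * dt           # ground-truth motion
    obs = true_state[:2] + rng.normal(0, 0.3, 2)    # noisy position detection

    particles[:, :2] += particles[:, 2:] * dt       # motion prior (predict)
    particles += rng.normal(0, 0.1, particles.shape)

    d2 = np.sum((particles[:, :2] - obs) ** 2, axis=1)
    weights = np.exp(-d2 / (2 * 0.3 ** 2))          # likelihood of observation
    weights /= weights.sum()

    idx = rng.choice(n_particles, n_particles, p=weights)   # resample
    particles = particles[idx]

estimate = particles[:, :2].mean(axis=0)
print("true position:", true_state[:2], "estimate:", estimate)
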
1409.4507 | Awny Sayed | Awny Sayed and Amal Almaqrashi | Scalable and Efficient Self-Join Processing technique in RDF data | 8-pages, 5-figures, International Journal of Computer Science Issues
(IJCSI), Volume 11, Issue 2. April 2014 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient management of RDF data plays an important role in successfully
understanding and quickly querying data. Although current approaches to
indexing RDF triples, such as property tables and vertical partitioning, have
solved many issues, they still suffer from poor performance on complex
self-join queries and when inserting data into the same table. As an
improvement, in this paper we propose an alternative solution that facilitates
flexibility and efficiency in such queries and tries to reduce the number of
self-joins as much as possible; this solution is based on the idea of
"Recursive Mapping of Twin Tables". The main goal of our Recursive Mapping of
Twin Tables (RMTT) approach is to divide the main RDF triple table into two
tables that have the same structure as the RDF triple table and to insert the
RDF data recursively. Our experiments compared the performance of join queries
under the vertically partitioned approach and the RMTT approach using very
large RDF data, such as the DBLP and DBpedia datasets. Our experimental results
for a number of complex queries show that our approach is highly scalable
compared with the RDF-3X approach and that RMTT reduces the number of
self-joins, especially in complex queries, by a factor of 3-4 compared with
RDF-3X.
| [
{
"version": "v1",
"created": "Tue, 16 Sep 2014 05:21:06 GMT"
}
] | 2014-09-17T00:00:00 | [
[
"Sayed",
"Awny",
""
],
[
"Almaqrashi",
"Amal",
""
]
] | TITLE: Scalable and Efficient Self-Join Processing technique in RDF data
ABSTRACT: Efficient management of RDF data plays an important role in successfully
understanding and quickly querying data. Although current approaches to
indexing RDF triples, such as property tables and vertical partitioning, have
solved many issues, they still suffer from poor performance on complex
self-join queries and when inserting data into the same table. As an
improvement, in this paper we propose an alternative solution that facilitates
flexibility and efficiency in such queries and tries to reduce the number of
self-joins as much as possible; this solution is based on the idea of
"Recursive Mapping of Twin Tables". The main goal of our Recursive Mapping of
Twin Tables (RMTT) approach is to divide the main RDF triple table into two
tables that have the same structure as the RDF triple table and to insert the
RDF data recursively. Our experiments compared the performance of join queries
under the vertically partitioned approach and the RMTT approach using very
large RDF data, such as the DBLP and DBpedia datasets. Our experimental results
for a number of complex queries show that our approach is highly scalable
compared with the RDF-3X approach and that RMTT reduces the number of
self-joins, especially in complex queries, by a factor of 3-4 compared with
RDF-3X.
| no_new_dataset | 0.946547 |
1409.4698 | Charmgil Hong | Charmgil Hong, Iyad Batal, Milos Hauskrecht | A Mixtures-of-Experts Framework for Multi-Label Classification | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a novel probabilistic approach for multi-label classification that
is based on the mixtures-of-experts architecture combined with recently
introduced conditional tree-structured Bayesian networks. Our approach captures
different input-output relations from multi-label data using the efficient
tree-structured classifiers, while the mixtures-of-experts architecture aims to
compensate for the tree-structured restrictions and build a more accurate
model. We develop and present algorithms for learning the model from data and
for performing multi-label predictions on future data instances. Experiments on
multiple benchmark datasets demonstrate that our approach achieves highly
competitive results and outperforms the existing state-of-the-art multi-label
classification methods.
| [
{
"version": "v1",
"created": "Tue, 16 Sep 2014 16:52:14 GMT"
}
] | 2014-09-17T00:00:00 | [
[
"Hong",
"Charmgil",
""
],
[
"Batal",
"Iyad",
""
],
[
"Hauskrecht",
"Milos",
""
]
] | TITLE: A Mixtures-of-Experts Framework for Multi-Label Classification
ABSTRACT: We develop a novel probabilistic approach for multi-label classification that
is based on the mixtures-of-experts architecture combined with recently
introduced conditional tree-structured Bayesian networks. Our approach captures
different input-output relations from multi-label data using the efficient
tree-structured classifiers, while the mixtures-of-experts architecture aims to
compensate for the tree-structured restrictions and build a more accurate
model. We develop and present algorithms for learning the model from data and
for performing multi-label predictions on future data instances. Experiments on
multiple benchmark datasets demonstrate that our approach achieves highly
competitive results and outperforms the existing state-of-the-art multi-label
classification methods.
| no_new_dataset | 0.94887 |
1309.1737 | Gene Katsevich | Gene Katsevich, Alexander Katsevich, Amit Singer | Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem | null | null | null | null | math.NA cs.NA physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a
sample of randomly-oriented copies of a molecule. The problem of single
particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy
2D projection images taken at unknown directions to reconstruct the 3D
structure of the molecule. In some situations, the molecule under examination
exhibits structural variability, which poses a fundamental challenge in SPR.
The heterogeneity problem is the task of mapping the space of conformational
states of a molecule. It has been previously suggested that the leading
eigenvectors of the covariance matrix of the 3D molecules can be used to solve
the heterogeneity problem. Estimating the covariance matrix is challenging,
since only projections of the molecules are observed, but not the molecules
themselves. In this paper, we formulate a general problem of covariance
estimation from noisy projections of samples. This problem has intimate
connections with matrix completion problems and high-dimensional principal
component analysis. We propose an estimator and prove its consistency. When
there are finitely many heterogeneity classes, the spectrum of the estimated
covariance matrix reveals the number of classes. The estimator can be found as
the solution to a certain linear system. In the cryo-EM case, the linear
operator to be inverted, which we term the projection covariance transform, is
an important object in covariance estimation for tomographic problems involving
structural variation. Inverting it involves applying a filter akin to the ramp
filter in tomography. We design a basis in which this linear operator is sparse
and thus can be tractably inverted despite its large size. We demonstrate via
numerical experiments on synthetic datasets the robustness of our algorithm to
high levels of noise.
| [
{
"version": "v1",
"created": "Tue, 3 Sep 2013 02:23:53 GMT"
},
{
"version": "v2",
"created": "Sat, 24 May 2014 00:34:30 GMT"
},
{
"version": "v3",
"created": "Fri, 12 Sep 2014 20:29:22 GMT"
}
] | 2014-09-16T00:00:00 | [
[
"Katsevich",
"Gene",
""
],
[
"Katsevich",
"Alexander",
""
],
[
"Singer",
"Amit",
""
]
] | TITLE: Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem
ABSTRACT: In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a
sample of randomly-oriented copies of a molecule. The problem of single
particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy
2D projection images taken at unknown directions to reconstruct the 3D
structure of the molecule. In some situations, the molecule under examination
exhibits structural variability, which poses a fundamental challenge in SPR.
The heterogeneity problem is the task of mapping the space of conformational
states of a molecule. It has been previously suggested that the leading
eigenvectors of the covariance matrix of the 3D molecules can be used to solve
the heterogeneity problem. Estimating the covariance matrix is challenging,
since only projections of the molecules are observed, but not the molecules
themselves. In this paper, we formulate a general problem of covariance
estimation from noisy projections of samples. This problem has intimate
connections with matrix completion problems and high-dimensional principal
component analysis. We propose an estimator and prove its consistency. When
there are finitely many heterogeneity classes, the spectrum of the estimated
covariance matrix reveals the number of classes. The estimator can be found as
the solution to a certain linear system. In the cryo-EM case, the linear
operator to be inverted, which we term the projection covariance transform, is
an important object in covariance estimation for tomographic problems involving
structural variation. Inverting it involves applying a filter akin to the ramp
filter in tomography. We design a basis in which this linear operator is sparse
and thus can be tractably inverted despite its large size. We demonstrate via
numerical experiments on synthetic datasets the robustness of our algorithm to
high levels of noise.
| no_new_dataset | 0.952882 |
1409.3867 | Vishwakarma Singh | Vishwakarma Singh and Ambuj K. Singh | Nearest Keyword Set Search in Multi-dimensional Datasets | Accepted as Full Research Paper to ICDE 2014, Chicago, IL, USA | null | null | null | cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Keyword-based search in text-rich multi-dimensional datasets facilitates many
novel applications and tools. In this paper, we consider objects that are
tagged with keywords and are embedded in a vector space. For these datasets, we
study queries that ask for the tightest groups of points satisfying a given set
of keywords. We propose a novel method called ProMiSH (Projection and Multi
Scale Hashing) that uses random projection and hash-based index structures, and
achieves high scalability and speedup. We present an exact and an approximate
version of the algorithm. Our empirical studies, both on real and synthetic
datasets, show that ProMiSH has a speedup of more than four orders over
state-of-the-art tree-based techniques. Our scalability tests on datasets of
sizes up to 10 million and dimensions up to 100 for queries having up to 9
keywords show that ProMiSH scales linearly with the dataset size, the dataset
dimension, the query size, and the result size.
| [
{
"version": "v1",
"created": "Fri, 12 Sep 2014 21:12:16 GMT"
}
] | 2014-09-16T00:00:00 | [
[
"Singh",
"Vishwakarma",
""
],
[
"Singh",
"Ambuj K.",
""
]
] | TITLE: Nearest Keyword Set Search in Multi-dimensional Datasets
ABSTRACT: Keyword-based search in text-rich multi-dimensional datasets facilitates many
novel applications and tools. In this paper, we consider objects that are
tagged with keywords and are embedded in a vector space. For these datasets, we
study queries that ask for the tightest groups of points satisfying a given set
of keywords. We propose a novel method called ProMiSH (Projection and Multi
Scale Hashing) that uses random projection and hash-based index structures, and
achieves high scalability and speedup. We present an exact and an approximate
version of the algorithm. Our empirical studies, both on real and synthetic
datasets, show that ProMiSH has a speedup of more than four orders over
state-of-the-art tree-based techniques. Our scalability tests on datasets of
sizes up to 10 million and dimensions up to 100 for queries having up to 9
keywords show that ProMiSH scales linearly with the dataset size, the dataset
dimension, the query size, and the result size.
| no_new_dataset | 0.945248 |
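
The "projection and hashing" ingredient of the record above can be sketched as
follows: points are projected onto one random direction and binned at a chosen
scale, and only points whose bins collide are examined together as candidates.
The real ProMiSH index uses many projections and scales plus keyword inverted
lists; the data, bin width and query below are assumptions for illustration.

# Sketch of random-projection bucketing for candidate generation.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
points = rng.uniform(0, 100, size=(10_000, 8))        # 8-d dataset points
keywords = rng.integers(0, 50, size=10_000)           # one keyword tag per point

def hash_buckets(points, bin_width, rng):
    direction = rng.normal(size=points.shape[1])
    direction /= np.linalg.norm(direction)
    keys = np.floor(points @ direction / bin_width).astype(int)
    buckets = defaultdict(list)
    for i, k in enumerate(keys):
        buckets[k].append(i)
    return buckets

query = {3, 7, 19}                                    # keywords wanted close together
for ids in hash_buckets(points, bin_width=5.0, rng=rng).values():
    present = {keywords[i] for i in ids}
    if query <= present:                              # all query keywords in one bucket
        print("candidate bucket of size", len(ids))
        break
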
1409.3881 | Michael Bloodgood | Michael Bloodgood and K. Vijay-Shanker | An Approach to Reducing Annotation Costs for BioNLP | 2 pages, 1 figure, 5 tables; appeared in Proceedings of the Workshop
on Current Trends in Biomedical Natural Language Processing at ACL
(Association for Computational Linguistics) 2008 | In Proceedings of the Workshop on Current Trends in Biomedical
Natural Language Processing, pages 104-105, Columbus, Ohio, June 2008.
Association for Computational Linguistics | null | null | cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a broad range of BioNLP tasks for which active learning (AL) can
significantly reduce annotation costs and a specific AL algorithm we have
developed is particularly effective in reducing annotation costs for these
tasks. We have previously developed an AL algorithm called ClosestInitPA that
works best with tasks that have the following characteristics: redundancy in
training material, burdensome annotation costs, Support Vector Machines (SVMs)
work well for the task, and imbalanced datasets (i.e. when set up as a binary
classification problem, one class is substantially rarer than the other). Many
BioNLP tasks have these characteristics and thus our AL algorithm is a natural
approach to apply to BioNLP tasks.
| [
{
"version": "v1",
"created": "Fri, 12 Sep 2014 22:40:38 GMT"
}
] | 2014-09-16T00:00:00 | [
[
"Bloodgood",
"Michael",
""
],
[
"Vijay-Shanker",
"K.",
""
]
] | TITLE: An Approach to Reducing Annotation Costs for BioNLP
ABSTRACT: There is a broad range of BioNLP tasks for which active learning (AL) can
significantly reduce annotation costs and a specific AL algorithm we have
developed is particularly effective in reducing annotation costs for these
tasks. We have previously developed an AL algorithm called ClosestInitPA that
works best with tasks that have the following characteristics: redundancy in
training material, burdensome annotation costs, Support Vector Machines (SVMs)
work well for the task, and imbalanced datasets (i.e. when set up as a binary
classification problem, one class is substantially rarer than the other). Many
BioNLP tasks have these characteristics and thus our AL algorithm is a natural
approach to apply to BioNLP tasks.
| no_new_dataset | 0.947721 |
1409.4044 | Alain Tapp | Alain Tapp | A new approach in machine learning | Preliminary report | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this technical report we present a novel approach to machine learning.
Once the new framework is presented, we will provide a simple and yet very
powerful learning algorithm, which will be benchmarked on various datasets.
The framework we propose is based on boolean circuits; more specifically, the
classifiers produced by our algorithm have that form. Using bits and boolean
gates instead of real numbers and multiplication enables the learning algorithm
and the classifier to use very efficient boolean vector operations. This
enables both the learning algorithm and the classifier to be extremely
efficient. The accuracy of the classifiers we obtain with our framework
compares very favorably with that of classifiers produced by conventional
techniques, both in terms of efficiency and accuracy.
| [
{
"version": "v1",
"created": "Sun, 14 Sep 2014 10:25:23 GMT"
}
] | 2014-09-16T00:00:00 | [
[
"Tapp",
"Alain",
""
]
] | TITLE: A new approach in machine learning
ABSTRACT: In this technical report we present a novel approach to machine learning.
Once the new framework is presented, we will provide a simple and yet very
powerful learning algorithm, which will be benchmarked on various datasets.
The framework we propose is based on boolean circuits; more specifically, the
classifiers produced by our algorithm have that form. Using bits and boolean
gates instead of real numbers and multiplication enables the learning algorithm
and the classifier to use very efficient boolean vector operations. This
enables both the learning algorithm and the classifier to be extremely
efficient. The accuracy of the classifiers we obtain with our framework
compares very favorably with that of classifiers produced by conventional
techniques, both in terms of efficiency and accuracy.
| no_new_dataset | 0.953794 |
1409.4155 | Sicheng Xiong | Sicheng Xiong, R\'omer Rosales, Yuanli Pei, Xiaoli Z. Fern | Active Metric Learning from Relative Comparisons | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work focuses on active learning of distance metrics from relative
comparison information. A relative comparison specifies, for a data point
triplet $(x_i,x_j,x_k)$, that instance $x_i$ is more similar to $x_j$ than to
$x_k$. Such constraints, when available, have been shown to be useful toward
defining appropriate distance metrics. In real-world applications, acquiring
constraints often require considerable human effort. This motivates us to study
how to select and query the most useful relative comparisons to achieve
effective metric learning with minimum user effort. Given an underlying class
concept that is employed by the user to provide such constraints, we present an
information-theoretic criterion that selects the triplet whose answer leads to
the highest expected gain in information about the classes of a set of
examples. Directly applying the proposed criterion requires examining $O(n^3)$
triplets with $n$ instances, which is prohibitive even for datasets of moderate
size. We show that a randomized selection strategy can be used to reduce the
selection pool from $O(n^3)$ to $O(n)$, allowing us to scale up to larger-size
problems. Experiments show that the proposed method consistently outperforms
two baseline policies.
| [
{
"version": "v1",
"created": "Mon, 15 Sep 2014 04:37:46 GMT"
}
] | 2014-09-16T00:00:00 | [
[
"Xiong",
"Sicheng",
""
],
[
"Rosales",
"Rómer",
""
],
[
"Pei",
"Yuanli",
""
],
[
"Fern",
"Xiaoli Z.",
""
]
] | TITLE: Active Metric Learning from Relative Comparisons
ABSTRACT: This work focuses on active learning of distance metrics from relative
comparison information. A relative comparison specifies, for a data point
triplet $(x_i,x_j,x_k)$, that instance $x_i$ is more similar to $x_j$ than to
$x_k$. Such constraints, when available, have been shown to be useful toward
defining appropriate distance metrics. In real-world applications, acquiring
constraints often require considerable human effort. This motivates us to study
how to select and query the most useful relative comparisons to achieve
effective metric learning with minimum user effort. Given an underlying class
concept that is employed by the user to provide such constraints, we present an
information-theoretic criterion that selects the triplet whose answer leads to
the highest expected gain in information about the classes of a set of
examples. Directly applying the proposed criterion requires examining $O(n^3)$
triplets with $n$ instances, which is prohibitive even for datasets of moderate
size. We show that a randomized selection strategy can be used to reduce the
selection pool from $O(n^3)$ to $O(n)$, allowing us to scale up to larger-size
problems. Experiments show that the proposed method consistently outperforms
two baseline policies.
| no_new_dataset | 0.94428 |
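
The randomized pool-reduction step mentioned in the record above can be
sketched like this: draw O(n) random triplets rather than scoring all O(n^3)
of them, then query the highest-scoring one. The selection score below is a
simple placeholder for the paper's information-theoretic criterion, and the
data are synthetic.

# Randomized triplet pool for active selection of relative comparisons.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))                     # unlabeled instances
n = len(X)

def random_triplets(n, n_samples, rng):
    t = rng.integers(0, n, size=(n_samples, 3))
    ok = (t[:, 0] != t[:, 1]) & (t[:, 0] != t[:, 2]) & (t[:, 1] != t[:, 2])
    return t[ok]

def placeholder_score(i, j, k):
    # Stand-in for the expected information gain of asking
    # "is x_i closer to x_j than to x_k?"; prefers ambiguous triplets.
    dij = np.linalg.norm(X[i] - X[j])
    dik = np.linalg.norm(X[i] - X[k])
    return -abs(dij - dik)                        # most ambiguous = highest score

pool = random_triplets(n, n_samples=5 * n, rng=rng)   # O(n) pool instead of O(n^3)
scores = [placeholder_score(i, j, k) for i, j, k in pool]
best = pool[int(np.argmax(scores))]
print("query the oracle about triplet", tuple(best))
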
1210.5092 | Hans J. Haubold | A. Niu, M. Ochiai, H.J. Haubold, T. Doi | The United Nations Human Space Technology Initiative (HSTI): Science
Activities | 8 pages. arXiv admin note: text overlap with arXiv:1210.4797 | 63rd International Astronautical Congress, Naples, Italy, 2012,
IAC-12-A2.5.11 | null | null | physics.pop-ph physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The United Nations Human Space Technology Initiative (HSTI) aims at promoting
international cooperation in human spaceflight and space exploration-related
activities; creating awareness among countries on the benefits of utilizing
human space technology and its applications; and building capacity in
microgravity education and research. HSTI has been conducting various
scientific activities to promote microgravity education and research. The
primary science activity is called 'Zero-gravity Instrument Distribution
Project', in which one-axis clinostats will be distributed worldwide. The
distribution project will provide unique opportunities for students and
researchers to observe the growth of indigenous plants in their countries in a
simulated microgravity condition and is expected to create a huge dataset of
plant species with their responses to gravity.
| [
{
"version": "v1",
"created": "Thu, 18 Oct 2012 11:14:20 GMT"
}
] | 2014-09-15T00:00:00 | [
[
"Niu",
"A.",
""
],
[
"Ochiai",
"M.",
""
],
[
"Haubold",
"H. J.",
""
],
[
"Doi",
"T.",
""
]
] | TITLE: The United Nations Human Space Technology Initiative (HSTI): Science
Activities
ABSTRACT: The United Nations Human Space Technology Initiative (HSTI) aims at promoting
international cooperation in human spaceflight and space exploration-related
activities; creating awareness among countries on the benefits of utilizing
human space technology and its applications; and building capacity in
microgravity education and research. HSTI has been conducting various
scientific activities to promote microgravity education and research. The
primary science activity is called 'Zero-gravity Instrument Distribution
Project', in which one-axis clinostats will be distributed worldwide. The
distribution project will provide unique opportunities for students and
researchers to observe the growth of indigenous plants in their countries in a
simulated microgravity condition and is expected to create a huge dataset of
plant species with their responses to gravity.
| no_new_dataset | 0.739258 |
1409.3206 | Petko Georgiev | Petko Georgiev, Nicholas D. Lane, Kiran K. Rachuri, Cecilia Mascolo | DSP.Ear: Leveraging Co-Processor Support for Continuous Audio Sensing on
Smartphones | 15 pages, 12th ACM Conference on Embedded Network Sensor Systems
(SenSys '14) | null | null | null | cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapidly growing adoption of sensor-enabled smartphones has greatly fueled
the proliferation of applications that use phone sensors to monitor user
behavior. A central sensor among these is the microphone which enables, for
instance, the detection of valence in speech, or the identification of
speakers. Deploying multiple of these applications on a mobile device to
continuously monitor the audio environment allows for the acquisition of a
diverse range of sound-related contextual inferences. However, the cumulative
processing burden critically impacts the phone battery.
To address this problem, we propose DSP.Ear - an integrated sensing system
that takes advantage of the latest low-power DSP co-processor technology in
commodity mobile devices to enable the continuous and simultaneous operation of
multiple established algorithms that perform complex audio inferences. The
system extracts emotions from voice, estimates the number of people in a room,
identifies the speakers, and detects commonly found ambient sounds, while
critically incurring little overhead to the device battery. This is achieved
through a series of pipeline optimizations that allow the computation to remain
largely on the DSP. Through detailed evaluation of our prototype implementation
we show that, by exploiting a smartphone's co-processor, DSP.Ear achieves a 3
to 7 times increase in the battery lifetime compared to a solution that uses
only the phone's main processor. In addition, DSP.Ear is 2 to 3 times more
power efficient than a naive DSP solution without optimizations. We further
analyze a large-scale dataset from 1320 Android users to show that in about
80-90% of the daily usage instances DSP.Ear is able to sustain a full day of
operation (even in the presence of other smartphone workloads) with a single
battery charge.
| [
{
"version": "v1",
"created": "Wed, 10 Sep 2014 19:30:58 GMT"
}
] | 2014-09-15T00:00:00 | [
[
"Georgiev",
"Petko",
""
],
[
"Lane",
"Nicholas D.",
""
],
[
"Rachuri",
"Kiran K.",
""
],
[
"Mascolo",
"Cecilia",
""
]
] | TITLE: DSP.Ear: Leveraging Co-Processor Support for Continuous Audio Sensing on
Smartphones
ABSTRACT: The rapidly growing adoption of sensor-enabled smartphones has greatly fueled
the proliferation of applications that use phone sensors to monitor user
behavior. A central sensor among these is the microphone which enables, for
instance, the detection of valence in speech, or the identification of
speakers. Deploying multiple of these applications on a mobile device to
continuously monitor the audio environment allows for the acquisition of a
diverse range of sound-related contextual inferences. However, the cumulative
processing burden critically impacts the phone battery.
To address this problem, we propose DSP.Ear - an integrated sensing system
that takes advantage of the latest low-power DSP co-processor technology in
commodity mobile devices to enable the continuous and simultaneous operation of
multiple established algorithms that perform complex audio inferences. The
system extracts emotions from voice, estimates the number of people in a room,
identifies the speakers, and detects commonly found ambient sounds, while
critically incurring little overhead to the device battery. This is achieved
through a series of pipeline optimizations that allow the computation to remain
largely on the DSP. Through detailed evaluation of our prototype implementation
we show that, by exploiting a smartphone's co-processor, DSP.Ear achieves a 3
to 7 times increase in the battery lifetime compared to a solution that uses
only the phone's main processor. In addition, DSP.Ear is 2 to 3 times more
power efficient than a naive DSP solution without optimizations. We further
analyze a large-scale dataset from 1320 Android users to show that in about
80-90% of the daily usage instances DSP.Ear is able to sustain a full day of
operation (even in the presence of other smartphone workloads) with a single
battery charge.
| no_new_dataset | 0.929376 |
1409.3446 | Haimonti Dutta | Haimonti Dutta and Ashwin Srinivasan | Consensus-Based Modelling using Distributed Feature Construction | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A particularly successful role for Inductive Logic Programming (ILP) is as a
tool for discovering useful relational features for subsequent use in a
predictive model. Conceptually, the case for using ILP to construct relational
features rests on treating these features as functions, the automated discovery
of which necessarily requires some form of first-order learning. Practically,
there are now several reports in the literature that suggest that augmenting
any existing features with ILP-discovered relational features can substantially
improve the predictive power of a model. While the approach is straightforward
enough, much still needs to be done to scale it up to explore more fully the
space of possible features that can be constructed by an ILP system. This is in
principle, infinite and in practice, extremely large. Applications have been
confined to heuristic or random selections from this space. In this paper, we
address this computational difficulty by allowing features to be constructed in
a distributed manner. That is, there is a network of computational units, each
of which employs an ILP engine to construct some small number of features and
then builds a (local) model. We then employ a consensus-based algorithm, in
which neighboring nodes share information to update local models. For a
category of models (those with convex loss functions), it can be shown that the
algorithm will result in all nodes converging to a consensus model. In
practice, it may be slow to achieve this convergence. Nevertheless, our results
on synthetic and real datasets suggest that in a relatively short time the
"best" node in the network reaches a model whose predictive accuracy is
comparable to that obtained using more computational effort in a
non-distributed setting (the best node is identified as the one whose weights
converge first).
| [
{
"version": "v1",
"created": "Thu, 11 Sep 2014 14:11:02 GMT"
}
] | 2014-09-12T00:00:00 | [
[
"Dutta",
"Haimonti",
""
],
[
"Srinivasan",
"Ashwin",
""
]
] | TITLE: Consensus-Based Modelling using Distributed Feature Construction
ABSTRACT: A particularly successful role for Inductive Logic Programming (ILP) is as a
tool for discovering useful relational features for subsequent use in a
predictive model. Conceptually, the case for using ILP to construct relational
features rests on treating these features as functions, the automated discovery
of which necessarily requires some form of first-order learning. Practically,
there are now several reports in the literature that suggest that augmenting
any existing features with ILP-discovered relational features can substantially
improve the predictive power of a model. While the approach is straightforward
enough, much still needs to be done to scale it up to explore more fully the
space of possible features that can be constructed by an ILP system. This is in
principle, infinite and in practice, extremely large. Applications have been
confined to heuristic or random selections from this space. In this paper, we
address this computational difficulty by allowing features to be constructed in
a distributed manner. That is, there is a network of computational units, each
of which employs an ILP engine to construct some small number of features and
then builds a (local) model. We then employ a consensus-based algorithm, in
which neighboring nodes share information to update local models. For a
category of models (those with convex loss functions), it can be shown that the
algorithm will result in all nodes converging to a consensus model. In
practice, it may be slow to achieve this convergence. Nevertheless, our results
on synthetic and real datasets suggest that in a relatively short time the
"best" node in the network reaches a model whose predictive accuracy is
comparable to that obtained using more computational effort in a
non-distributed setting (the best node is identified as the one whose weights
converge first).
| no_new_dataset | 0.94699 |
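
The consensus step described in the record above can be illustrated with a toy
neighbour-averaging iteration over a small ring network; the ILP-based
relational feature construction and the local model training that precede it
in the paper are replaced here by random weight vectors.

# Toy consensus: each node averages its weight vector with its neighbours.
import numpy as np

rng = np.random.default_rng(5)
n_nodes, dim = 6, 4
# Ring topology: node i talks to i-1 and i+1.
neighbours = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
weights = rng.normal(size=(n_nodes, dim))          # locally trained models

for step in range(50):
    new = np.empty_like(weights)
    for i in range(n_nodes):
        group = [i] + neighbours[i]
        new[i] = weights[group].mean(axis=0)        # average with neighbours
    weights = new

print("spread after consensus:", np.ptp(weights, axis=0))   # ~0 when converged
print("consensus model:", weights[0])
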
1401.5794 | Stefanos Leontsinis Mr. | T. Alexopoulos and S. Leontsinis | Benford's Law and the Universe | 6 pages, 7 figures | null | 10.1007/s12036-014-9303-z | null | physics.pop-ph astro-ph.GA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Benford's law predicts the occurrence of the $n^{\mathrm{th}}$ digit of
numbers in datasets originating from various sources of the world, ranging from
financial data to atomic spectra. It is intriguing that although many features
of Benford's law have been proven and analysed, it is still not fully
mathematically understood. In this paper we investigate the distances of
galaxies and stars by comparing the first, second and third significant digit
probabilities with Benford's predictions. It is found that the distances of
galaxies follow reasonably well the first digit law and the star distances
agree very well with the first, second and third significant digit.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 15:34:39 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Sep 2014 14:59:19 GMT"
}
] | 2014-09-11T00:00:00 | [
[
"Alexopoulos",
"T.",
""
],
[
"Leontsinis",
"S.",
""
]
] | TITLE: Benford's Law and the Universe
ABSTRACT: Benford's law predicts the occurrence of the $n^{\mathrm{th}}$ digit of
numbers in datasets originating from various sources of the world, ranging from
financial data to atomic spectra. It is intriguing that although many features
of Benford's law have been proven and analysed, it is still not fully
mathematically understood. In this paper we investigate the distances of
galaxies and stars by comparing the first, second and third significant digit
probabilities with Benford's predictions. It is found that the distances of
galaxies follow reasonably well the first digit law and the star distances
agree very well with the first, second and third significant digit.
| no_new_dataset | 0.949482 |
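
A quick way to reproduce the kind of first-digit comparison described in the
record above is sketched below on a synthetic, scale-spanning sample; a real
catalogue of galaxy or star distances would be substituted where the values
are generated.

# Compare empirical first-digit frequencies with Benford's law.
import numpy as np

def first_digits(values):
    return np.floor(values / 10 ** np.floor(np.log10(values))).astype(int)

rng = np.random.default_rng(6)
values = 10 ** rng.uniform(0, 6, size=100_000)     # log-uniform positive numbers
digits = first_digits(values)

observed = np.array([(digits == d).mean() for d in range(1, 10)])
benford = np.log10(1 + 1 / np.arange(1, 10))
for d, o, b in zip(range(1, 10), observed, benford):
    print(f"digit {d}: observed {o:.3f}  Benford {b:.3f}")
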
1409.2905 | Sunsern Cheamanunkul | Sunsern Cheamanunkul, Evan Ettinger and Yoav Freund | Non-Convex Boosting Overcomes Random Label Noise | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sensitivity of Adaboost to random label noise is a well-studied problem.
LogitBoost, BrownBoost and RobustBoost are boosting algorithms claimed to be
less sensitive to noise than AdaBoost. We present the results of experiments
evaluating these algorithms on both synthetic and real datasets. We compare the
performance on each of datasets when the labels are corrupted by different
levels of independent label noise. In presence of random label noise, we found
that BrownBoost and RobustBoost perform significantly better than AdaBoost and
LogitBoost, while the difference between each pair of algorithms is
insignificant. We provide an explanation for the difference based on the margin
distributions of the algorithms.
| [
{
"version": "v1",
"created": "Tue, 9 Sep 2014 21:36:47 GMT"
}
] | 2014-09-11T00:00:00 | [
[
"Cheamanunkul",
"Sunsern",
""
],
[
"Ettinger",
"Evan",
""
],
[
"Freund",
"Yoav",
""
]
] | TITLE: Non-Convex Boosting Overcomes Random Label Noise
ABSTRACT: The sensitivity of Adaboost to random label noise is a well-studied problem.
LogitBoost, BrownBoost and RobustBoost are boosting algorithms claimed to be
less sensitive to noise than AdaBoost. We present the results of experiments
evaluating these algorithms on both synthetic and real datasets. We compare the
performance on each of datasets when the labels are corrupted by different
levels of independent label noise. In presence of random label noise, we found
that BrownBoost and RobustBoost perform significantly better than AdaBoost and
LogitBoost, while the difference between each pair of algorithms is
insignificant. We provide an explanation for the difference based on the margin
distributions of the algorithms.
| no_new_dataset | 0.954605 |
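
An experiment in the spirit of the comparison above can be sketched with
scikit-learn by flipping a fraction of training labels and measuring how
AdaBoost degrades; BrownBoost and RobustBoost have no scikit-learn
implementation, so only the AdaBoost side is shown, and the synthetic data
and noise levels are assumptions.

# Measure AdaBoost test accuracy under increasing independent label noise.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

rng = np.random.default_rng(7)
for noise in (0.0, 0.1, 0.2, 0.3):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise          # independent label noise
    y_noisy[flip] = 1 - y_noisy[flip]
    clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_noisy)
    print(f"label noise {noise:.0%}: test accuracy {clf.score(X_te, y_te):.3f}")
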
1211.1513 | Naresh Manwani | Naresh Manwani, P. S. Sastry | K-Plane Regression | null | null | 10.1016/j.ins.2014.08.058 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel algorithm for piecewise linear regression
which can learn continuous as well as discontinuous piecewise linear functions.
The main idea is to repeatedly partition the data and learn a linear model in
each partition. While a simple algorithm incorporating this idea does not work
well, an interesting modification results in a good algorithm. The proposed
algorithm is similar in spirit to $k$-means clustering algorithm. We show that
our algorithm can also be viewed as an EM algorithm for maximum likelihood
estimation of parameters under a reasonable probability model. We empirically
demonstrate the effectiveness of our approach by comparing its performance with
state-of-the-art regression learning algorithms on some real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 7 Nov 2012 10:57:38 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Mar 2013 09:00:24 GMT"
}
] | 2014-09-10T00:00:00 | [
[
"Manwani",
"Naresh",
""
],
[
"Sastry",
"P. S.",
""
]
] | TITLE: K-Plane Regression
ABSTRACT: In this paper, we present a novel algorithm for piecewise linear regression
which can learn continuous as well as discontinuous piecewise linear functions.
The main idea is to repeatedly partition the data and learn a linear model in
each partition. While a simple algorithm incorporating this idea does not work
well, an interesting modification results in a good algorithm. The proposed
algorithm is similar in spirit to $k$-means clustering algorithm. We show that
our algorithm can also be viewed as an EM algorithm for maximum likelihood
estimation of parameters under a reasonable probability model. We empirically
demonstrate the effectiveness of our approach by comparing its performance with
state-of-the-art regression learning algorithms on some real-world datasets.
| no_new_dataset | 0.943867 |
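
The alternating partition-and-refit idea in the record above can be sketched
as follows on synthetic two-piece data; the paper's refined modification and
its EM interpretation go beyond this simple loop.

# K-plane regression skeleton: assign points to their best plane, refit, repeat.
import numpy as np

rng = np.random.default_rng(8)
# Synthetic piecewise-linear data: two different linear pieces.
X = rng.uniform(-1, 1, size=(600, 1))
y = np.where(X[:, 0] < 0, 3 * X[:, 0] + 1, -2 * X[:, 0] + 1) + rng.normal(0, 0.05, 600)

K = 2
Xb = np.hstack([X, np.ones((len(X), 1))])            # add bias column
coefs = rng.normal(size=(K, Xb.shape[1]))            # K hyperplanes

for _ in range(20):
    preds = Xb @ coefs.T                              # (n, K) predictions
    assign = np.argmin((preds - y[:, None]) ** 2, axis=1)
    for k in range(K):
        mask = assign == k
        if mask.sum() >= Xb.shape[1]:                 # enough points to refit
            coefs[k], *_ = np.linalg.lstsq(Xb[mask], y[mask], rcond=None)

print("recovered planes (slope, intercept):")
print(np.round(coefs, 2))    # roughly (3, 1) and (-2, 1), up to local optima
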
1403.1840 | Yunchao Gong | Yunchao Gong and Liwei Wang and Ruiqi Guo and Svetlana Lazebnik | Multi-scale Orderless Pooling of Deep Convolutional Activation Features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks (CNN) have shown their promise as a
universal representation for recognition. However, global CNN activations lack
geometric invariance, which limits their robustness for classification and
matching of highly variable scenes. To improve the invariance of CNN
activations without degrading their discriminative power, this paper presents a
simple but effective scheme called multi-scale orderless pooling (MOP-CNN).
This scheme extracts CNN activations for local patches at multiple scale
levels, performs orderless VLAD pooling of these activations at each level
separately, and concatenates the result. The resulting MOP-CNN representation
can be used as a generic feature for either supervised or unsupervised
recognition tasks, from image classification to instance-level retrieval; it
consistently outperforms global CNN activations without requiring any joint
training of prediction layers for a particular target dataset. In absolute
terms, it achieves state-of-the-art results on the challenging SUN397 and MIT
Indoor Scenes classification datasets, and competitive results on
ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2014 19:03:15 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Jul 2014 17:38:52 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Sep 2014 22:03:21 GMT"
}
] | 2014-09-10T00:00:00 | [
[
"Gong",
"Yunchao",
""
],
[
"Wang",
"Liwei",
""
],
[
"Guo",
"Ruiqi",
""
],
[
"Lazebnik",
"Svetlana",
""
]
] | TITLE: Multi-scale Orderless Pooling of Deep Convolutional Activation Features
ABSTRACT: Deep convolutional neural networks (CNN) have shown their promise as a
universal representation for recognition. However, global CNN activations lack
geometric invariance, which limits their robustness for classification and
matching of highly variable scenes. To improve the invariance of CNN
activations without degrading their discriminative power, this paper presents a
simple but effective scheme called multi-scale orderless pooling (MOP-CNN).
This scheme extracts CNN activations for local patches at multiple scale
levels, performs orderless VLAD pooling of these activations at each level
separately, and concatenates the result. The resulting MOP-CNN representation
can be used as a generic feature for either supervised or unsupervised
recognition tasks, from image classification to instance-level retrieval; it
consistently outperforms global CNN activations without requiring any joint
training of prediction layers for a particular target dataset. In absolute
terms, it achieves state-of-the-art results on the challenging SUN397 and MIT
Indoor Scenes classification datasets, and competitive results on
ILSVRC2012/2013 classification and INRIA Holidays retrieval datasets.
| no_new_dataset | 0.953923 |
1409.2800 | Toufiq Parag | Toufiq Parag | Enforcing Label and Intensity Consistency for IR Target Detection | First appeared in OTCBVS 2011 \cite{parag11otcbvs}. This manuscript
presents updated results and an extension | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study formulates the IR target detection as a binary classification
problem of each pixel. Each pixel is associated with a label which indicates
whether it is a target or background pixel. The optimal label set for all the
pixels of an image maximizes the a posteriori distribution of label configuration
given the pixel intensities. The posterior probability is factored into (or
proportional to) a conditional likelihood of the intensity values and a prior
probability of the label configuration. Each of these two probabilities is
computed assuming a Markov Random Field (MRF) on both pixel intensities and
their labels. In particular, this study enforces neighborhood dependency on
both intensity values, by a Simultaneous Auto Regressive (SAR) model, and on
labels, by an Auto-Logistic model. The parameters of these MRF models are
learned from labeled examples. During testing, an MRF inference technique,
namely Iterated Conditional Mode (ICM), produces the optimal label for each
pixel. The detection performance is further improved by incorporating temporal
information through background subtraction. High performances on benchmark
datasets demonstrate effectiveness of this method for IR target detection.
| [
{
"version": "v1",
"created": "Tue, 9 Sep 2014 16:20:08 GMT"
}
] | 2014-09-10T00:00:00 | [
[
"Parag",
"Toufiq",
""
]
] | TITLE: Enforcing Label and Intensity Consistency for IR Target Detection
ABSTRACT: This study formulates the IR target detection as a binary classification
problem of each pixel. Each pixel is associated with a label which indicates
whether it is a target or background pixel. The optimal label set for all the
pixels of an image maximizes the a posteriori distribution of label configuration
given the pixel intensities. The posterior probability is factored into (or
proportional to) a conditional likelihood of the intensity values and a prior
probability of the label configuration. Each of these two probabilities is
computed assuming a Markov Random Field (MRF) on both pixel intensities and
their labels. In particular, this study enforces neighborhood dependency on
both intensity values, by a Simultaneous Auto Regressive (SAR) model, and on
labels, by an Auto-Logistic model. The parameters of these MRF models are
learned from labeled examples. During testing, an MRF inference technique,
namely Iterated Conditional Mode (ICM), produces the optimal label for each
pixel. The detection performance is further improved by incorporating temporal
information through background subtraction. High performances on benchmark
datasets demonstrate effectiveness of this method for IR target detection.
| no_new_dataset | 0.949763 |
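
The ICM inference step named in the record above can be sketched for binary
pixel labelling as below; the SAR intensity model, the learned Auto-Logistic
parameters and the background subtraction used in the paper are replaced by a
hand-set Gaussian data term and a simple 4-neighbourhood smoothness term.

# Compact ICM sketch: flip each pixel label to the value with lower local energy.
import numpy as np

rng = np.random.default_rng(9)
H, W = 40, 40
truth = np.zeros((H, W), dtype=int)
truth[15:25, 18:28] = 1                               # a bright square "target"
image = truth * 1.0 + rng.normal(0, 0.4, (H, W))       # noisy intensities

beta = 1.5                                            # label-smoothness weight
labels = (image > 0.5).astype(int)                    # initial guess

def local_energy(lab, i, j, value):
    data = (image[i, j] - value) ** 2 / (2 * 0.4 ** 2)  # Gaussian data term
    nb = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    smooth = sum(value != lab[a, b] for a, b in nb
                 if 0 <= a < H and 0 <= b < W)
    return data + beta * smooth

for _ in range(5):                                    # ICM sweeps
    for i in range(H):
        for j in range(W):
            labels[i, j] = min((0, 1), key=lambda v: local_energy(labels, i, j, v))

print("pixel agreement with ground truth:", (labels == truth).mean())
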
1409.1320 | Wei Ping | Wei Ping, Qiang Liu, Alexander Ihler | Marginal Structured SVM with Hidden Variables | Accepted by the 31st International Conference on Machine Learning
(ICML 2014). 12 pages version with supplement | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose the marginal structured SVM (MSSVM) for structured
prediction with hidden variables. MSSVM properly accounts for the uncertainty
of hidden variables, and can significantly outperform the previously proposed
latent structured SVM (LSSVM; Yu & Joachims (2009)) and other state-of-art
methods, especially when that uncertainty is large. Our method also results in
a smoother objective function, making gradient-based optimization of MSSVMs
converge significantly faster than for LSSVMs. We also show that our method
consistently outperforms hidden conditional random fields (HCRFs; Quattoni et
al. (2007)) on both simulated and real-world datasets. Furthermore, we propose
a unified framework that includes both our and several other existing methods
as special cases, and provides insights into the comparison of different models
in practice.
| [
{
"version": "v1",
"created": "Thu, 4 Sep 2014 05:06:34 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Sep 2014 21:13:36 GMT"
}
] | 2014-09-09T00:00:00 | [
[
"Ping",
"Wei",
""
],
[
"Liu",
"Qiang",
""
],
[
"Ihler",
"Alexander",
""
]
] | TITLE: Marginal Structured SVM with Hidden Variables
ABSTRACT: In this work, we propose the marginal structured SVM (MSSVM) for structured
prediction with hidden variables. MSSVM properly accounts for the uncertainty
of hidden variables, and can significantly outperform the previously proposed
latent structured SVM (LSSVM; Yu & Joachims (2009)) and other state-of-art
methods, especially when that uncertainty is large. Our method also results in
a smoother objective function, making gradient-based optimization of MSSVMs
converge significantly faster than for LSSVMs. We also show that our method
consistently outperforms hidden conditional random fields (HCRFs; Quattoni et
al. (2007)) on both simulated and real-world datasets. Furthermore, we propose
a unified framework that includes both our and several other existing methods
as special cases, and provides insights into the comparison of different models
in practice.
| no_new_dataset | 0.948822 |
1409.2002 | Saba Babakhani | Saba Babakhani, Niloofar Mozaffari and Ali Hamzeh | A Martingale Approach to Detect Peak of News in Social Network | null | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | Nowadays, social media such as Twitter, Memetracker and blogs have become
powerful tools to propagate information. They facilitate the quick dissemination
of information such as news articles, blog posts, and users' interests and
thoughts at large scale. Providing strong means of analyzing social network
structure and how information diffuses through it is essential. Many recent
studies emphasize modeling information diffusion and its patterns to gain useful
knowledge. In this paper, we propose a statistical approach to detect, online,
the peak points of news spreading over social networks, which to the best of our
knowledge has never been investigated before. The proposed model uses a
martingale approach to detect when a news item has reached the peak of its
popularity. Experimental results on real datasets show the good performance of
our approach in detecting these peak points online.
| [
{
"version": "v1",
"created": "Sat, 6 Sep 2014 10:30:20 GMT"
}
] | 2014-09-09T00:00:00 | [
[
"Babakhani",
"Saba",
""
],
[
"Mozaffari",
"Niloofar",
""
],
[
"Hamzeh",
"Ali",
""
]
] | TITLE: A Martingale Approach to Detect Peak of News in Social Network
ABSTRACT: Nowadays, social media such as Twitter, Memetracker and blogs have become
powerful tools to propagate information. They facilitate the quick dissemination
of information such as news articles, blog posts, and users' interests and
thoughts at large scale. Providing strong means of analyzing social network
structure and how information diffuses through it is essential. Many recent
studies emphasize modeling information diffusion and its patterns to gain useful
knowledge. In this paper, we propose a statistical approach to detect, online,
the peak points of news spreading over social networks, which to the best of our
knowledge has never been investigated before. The proposed model uses a
martingale approach to detect when a news item has reached the peak of its
popularity. Experimental results on real datasets show the good performance of
our approach in detecting these peak points online.
| no_new_dataset | 0.947866 |
1409.2287 | Andreas Damianou Mr | Andreas C. Damianou, Michalis K. Titsias, Neil D. Lawrence | Variational Inference for Uncertainty on the Inputs of Gaussian Process
Models | 51 pages (of which 10 is Appendix), 19 figures | null | null | null | stat.ML cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Gaussian process latent variable model (GP-LVM) provides a flexible
approach for non-linear dimensionality reduction that has been widely applied.
However, the current approach for training GP-LVMs is based on maximum
likelihood, where the latent projection variables are maximized over rather
than integrated out. In this paper we present a Bayesian method for training
GP-LVMs by introducing a non-standard variational inference framework that
allows us to approximately integrate out the latent variables and subsequently
train a GP-LVM by maximizing an analytic lower bound on the exact marginal
likelihood. We apply this method for learning a GP-LVM from iid observations
and for learning non-linear dynamical systems where the observations are
temporally correlated. We show that a benefit of the variational Bayesian
procedure is its robustness to overfitting and its ability to automatically
select the dimensionality of the nonlinear latent space. The resulting
framework is generic, flexible and easy to extend for other purposes, such as
Gaussian process regression with uncertain inputs and semi-supervised Gaussian
processes. We demonstrate our method on synthetic data and standard machine
learning benchmarks, as well as challenging real world datasets, including high
resolution video data.
| [
{
"version": "v1",
"created": "Mon, 8 Sep 2014 10:47:23 GMT"
}
] | 2014-09-09T00:00:00 | [
[
"Damianou",
"Andreas C.",
""
],
[
"Titsias",
"Michalis K.",
""
],
[
"Lawrence",
"Neil D.",
""
]
] | TITLE: Variational Inference for Uncertainty on the Inputs of Gaussian Process
Models
ABSTRACT: The Gaussian process latent variable model (GP-LVM) provides a flexible
approach for non-linear dimensionality reduction that has been widely applied.
However, the current approach for training GP-LVMs is based on maximum
likelihood, where the latent projection variables are maximized over rather
than integrated out. In this paper we present a Bayesian method for training
GP-LVMs by introducing a non-standard variational inference framework that
allows us to approximately integrate out the latent variables and subsequently
train a GP-LVM by maximizing an analytic lower bound on the exact marginal
likelihood. We apply this method for learning a GP-LVM from iid observations
and for learning non-linear dynamical systems where the observations are
temporally correlated. We show that a benefit of the variational Bayesian
procedure is its robustness to overfitting and its ability to automatically
select the dimensionality of the nonlinear latent space. The resulting
framework is generic, flexible and easy to extend for other purposes, such as
Gaussian process regression with uncertain inputs and semi-supervised Gaussian
processes. We demonstrate our method on synthetic data and standard machine
learning benchmarks, as well as challenging real world datasets, including high
resolution video data.
| no_new_dataset | 0.948775 |
1409.2450 | Robert West | Robert West, Hristo S. Paskov, Jure Leskovec, Christopher Potts | Exploiting Social Network Structure for Person-to-Person Sentiment
Analysis | null | null | null | null | cs.SI cs.CL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person-to-person evaluations are prevalent in all kinds of discourse and
important for establishing reputations, building social bonds, and shaping
public opinion. Such evaluations can be analyzed separately using signed social
networks and textual sentiment analysis, but this misses the rich interactions
between language and social context. To capture such interactions, we develop a
model that predicts individual A's opinion of individual B by synthesizing
information from the signed social network in which A and B are embedded with
sentiment analysis of the evaluative texts relating A to B. We prove that this
problem is NP-hard but can be relaxed to an efficiently solvable hinge-loss
Markov random field, and we show that this implementation outperforms text-only
and network-only versions in two very different datasets involving
community-level decision-making: the Wikipedia Requests for Adminship corpus
and the Convote U.S. Congressional speech corpus.
| [
{
"version": "v1",
"created": "Mon, 8 Sep 2014 18:14:16 GMT"
}
] | 2014-09-09T00:00:00 | [
[
"West",
"Robert",
""
],
[
"Paskov",
"Hristo S.",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Potts",
"Christopher",
""
]
] | TITLE: Exploiting Social Network Structure for Person-to-Person Sentiment
Analysis
ABSTRACT: Person-to-person evaluations are prevalent in all kinds of discourse and
important for establishing reputations, building social bonds, and shaping
public opinion. Such evaluations can be analyzed separately using signed social
networks and textual sentiment analysis, but this misses the rich interactions
between language and social context. To capture such interactions, we develop a
model that predicts individual A's opinion of individual B by synthesizing
information from the signed social network in which A and B are embedded with
sentiment analysis of the evaluative texts relating A to B. We prove that this
problem is NP-hard but can be relaxed to an efficiently solvable hinge-loss
Markov random field, and we show that this implementation outperforms text-only
and network-only versions in two very different datasets involving
community-level decision-making: the Wikipedia Requests for Adminship corpus
and the Convote U.S. Congressional speech corpus.
| no_new_dataset | 0.947332 |
1301.2774 | Jafar Muhammadi | Jafar Muhammadi, Hamid Reza Rabiee and Abbas Hosseini | Crowd Labeling: a survey | Under consideration for publication in Knowledge and Information
Systems | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been a burst in the number of research projects on human
computation via crowdsourcing. Multiple-choice (or labeling) questions are a
common type of problem solved by this approach. As an
application, crowd labeling is applied to find true labels for large machine
learning datasets. Since crowds are not necessarily experts, the labels they
provide are rather noisy and erroneous. This challenge is usually resolved by
collecting multiple labels for each sample, and then aggregating them to
estimate the true label. Although the mechanism leads to high-quality labels,
it is not actually cost-effective. As a result, efforts are currently made to
maximize the accuracy in estimating true labels, while fixing the number of
acquired labels.
This paper surveys methods to aggregate redundant crowd labels in order to
estimate unknown true labels. It presents a unified statistical latent model
where the differences among popular methods in the field correspond to
different choices for the parameters of the model. Afterwards, algorithms to
make inference on these models will be surveyed. Moreover, adaptive methods
which iteratively collect labels based on the previously collected labels and
estimated models will be discussed. In addition, this paper compares the
distinguished methods, and provides guidelines for future work required to
address the current open issues.
| [
{
"version": "v1",
"created": "Sun, 13 Jan 2013 14:12:53 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2014 05:59:49 GMT"
},
{
"version": "v3",
"created": "Wed, 3 Sep 2014 06:37:23 GMT"
}
] | 2014-09-04T00:00:00 | [
[
"Muhammadi",
"Jafar",
""
],
[
"Rabiee",
"Hamid Reza",
""
],
[
"Hosseini",
"Abbas",
""
]
] | TITLE: Crowd Labeling: a survey
ABSTRACT: Recently, there has been a burst in the number of research projects on human
computation via crowdsourcing. Multiple-choice (or labeling) questions are a
common type of problem solved by this approach. As an
application, crowd labeling is applied to find true labels for large machine
learning datasets. Since crowds are not necessarily experts, the labels they
provide are rather noisy and erroneous. This challenge is usually resolved by
collecting multiple labels for each sample, and then aggregating them to
estimate the true label. Although the mechanism leads to high-quality labels,
it is not actually cost-effective. As a result, efforts are currently made to
maximize the accuracy in estimating true labels, while fixing the number of
acquired labels.
This paper surveys methods to aggregate redundant crowd labels in order to
estimate unknown true labels. It presents a unified statistical latent model
where the differences among popular methods in the field correspond to
different choices for the parameters of the model. Afterwards, algorithms to
make inference on these models will be surveyed. Moreover, adaptive methods
which iteratively collect labels based on the previously collected labels and
estimated models will be discussed. In addition, this paper compares the
distinguished methods, and provides guidelines for future work required to
address the current open issues.
| no_new_dataset | 0.947672 |
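The survey record above concerns aggregating redundant crowd labels to estimate unknown true labels. As a minimal, hypothetical illustration (plain majority voting only, not the statistical latent models the survey covers), the following Python sketch aggregates several noisy labels per item; the items and worker answers are invented for the example.

from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate redundant crowd labels: for each item, keep the most common label."""
    aggregated = {}
    for item, labels in labels_per_item.items():
        counts = Counter(labels)
        aggregated[item] = counts.most_common(1)[0][0]  # ties broken arbitrarily
    return aggregated

# Hypothetical crowd answers: three workers labelled each item.
crowd_labels = {
    "q1": ["A", "A", "B"],
    "q2": ["C", "C", "C"],
    "q3": ["B", "A", "B"],
}
print(majority_vote(crowd_labels))  # {'q1': 'A', 'q2': 'C', 'q3': 'B'}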
1405.7718 | Sajan Goud Lingala | Sajan Goud Lingala, Edward DiBella, Mathews Jacob | Deformation corrected compressed sensing (DC-CS): a novel framework for
accelerated dynamic MRI | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel deformation corrected compressed sensing (DC-CS) framework
to recover dynamic magnetic resonance images from undersampled measurements. We
introduce a generalized formulation that is capable of handling a wide class of
sparsity/compactness priors on the deformation corrected dynamic signal. In
this work, we consider example compactness priors such as sparsity in temporal
Fourier domain, sparsity in temporal finite difference domain, and nuclear norm
penalty to exploit low rank structure. Using variable splitting, we decouple
the complex optimization problem to simpler and well understood sub problems;
the resulting algorithm alternates between simple steps of shrinkage based
denoising, deformable registration, and a quadratic optimization step.
Additionally, we employ efficient continuation strategies to minimize the risk
of convergence to local minima. The proposed formulation contrasts with
existing DC-CS schemes that are customized for free breathing cardiac cine
applications, and other schemes that rely on fully sampled reference frames or
navigator signals to estimate the deformation parameters. The efficient
decoupling enabled by the proposed scheme allows its application to a wide
range of applications including contrast enhanced dynamic MRI. Through
experiments on numerical phantom and in vivo myocardial perfusion MRI datasets,
we demonstrate the utility of the proposed DC-CS scheme in providing robust
reconstructions with reduced motion artifacts over classical compressed sensing
schemes that utilize the compact priors on the original deformation
un-corrected signal.
| [
{
"version": "v1",
"created": "Thu, 29 May 2014 20:36:39 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Sep 2014 23:03:52 GMT"
}
] | 2014-09-04T00:00:00 | [
[
"Lingala",
"Sajan Goud",
""
],
[
"DiBella",
"Edward",
""
],
[
"Jacob",
"Mathews",
""
]
] | TITLE: Deformation corrected compressed sensing (DC-CS): a novel framework for
accelerated dynamic MRI
ABSTRACT: We propose a novel deformation corrected compressed sensing (DC-CS) framework
to recover dynamic magnetic resonance images from undersampled measurements. We
introduce a generalized formulation that is capable of handling a wide class of
sparsity/compactness priors on the deformation corrected dynamic signal. In
this work, we consider example compactness priors such as sparsity in temporal
Fourier domain, sparsity in temporal finite difference domain, and nuclear norm
penalty to exploit low rank structure. Using variable splitting, we decouple
the complex optimization problem to simpler and well understood sub problems;
the resulting algorithm alternates between simple steps of shrinkage based
denoising, deformable registration, and a quadratic optimization step.
Additionally, we employ efficient continuation strategies to minimize the risk
of convergence to local minima. The proposed formulation contrasts with
existing DC-CS schemes that are customized for free breathing cardiac cine
applications, and other schemes that rely on fully sampled reference frames or
navigator signals to estimate the deformation parameters. The efficient
decoupling enabled by the proposed scheme allows its application to a wide
range of applications including contrast enhanced dynamic MRI. Through
experiments on numerical phantom and in vivo myocardial perfusion MRI datasets,
we demonstrate the utility of the proposed DC-CS scheme in providing robust
reconstructions with reduced motion artifacts over classical compressed sensing
schemes that utilize the compact priors on the original deformation
un-corrected signal.
| no_new_dataset | 0.949106 |
1409.0908 | Anh Tran | Anh Tran, Jinyan Guan, Thanima Pilantanakitti, Paul Cohen | Action Recognition in the Frequency Domain | Keywords: Artificial Intelligence, Computer Vision, Action
Recognition | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we describe a simple strategy for mitigating variability in
temporal data series by shifting focus onto long-term, frequency domain
features that are less susceptible to variability. We apply this method to the
human action recognition task and demonstrate how working in the frequency
domain can yield good recognition features for commonly used optical flow and
articulated pose features, which are highly sensitive to small differences in
motion, viewpoint, dynamic backgrounds, occlusion and other sources of
variability. We show how these frequency-based features can be used in
combination with a simple forest classifier to achieve good and robust results
on the popular KTH Actions dataset.
| [
{
"version": "v1",
"created": "Tue, 2 Sep 2014 22:34:29 GMT"
}
] | 2014-09-04T00:00:00 | [
[
"Tran",
"Anh",
""
],
[
"Guan",
"Jinyan",
""
],
[
"Pilantanakitti",
"Thanima",
""
],
[
"Cohen",
"Paul",
""
]
] | TITLE: Action Recognition in the Frequency Domain
ABSTRACT: In this paper, we describe a simple strategy for mitigating variability in
temporal data series by shifting focus onto long-term, frequency domain
features that are less susceptible to variability. We apply this method to the
human action recognition task and demonstrate how working in the frequency
domain can yield good recognition features for commonly used optical flow and
articulated pose features, which are highly sensitive to small differences in
motion, viewpoint, dynamic backgrounds, occlusion and other sources of
variability. We show how these frequency-based features can be used in
combination with a simple forest classifier to achieve good and robust results
on the popular KTH Actions dataset.
| no_new_dataset | 0.956472 |
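The record above describes building action-recognition features in the frequency domain and feeding them to a forest classifier. The sketch below is an assumption-laden illustration of that idea rather than the authors' pipeline: it keeps the magnitudes of the first few DFT coefficients of each motion channel and trains a scikit-learn random forest on synthetic stand-in sequences.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frequency_features(sequence, n_coeffs=8):
    """Magnitudes of the first few DFT coefficients of each feature dimension."""
    spectrum = np.abs(np.fft.rfft(sequence, axis=0))[:n_coeffs]  # (n_coeffs, dims)
    return spectrum.flatten()

# Synthetic stand-in data: 100 sequences of 64 frames x 4 motion channels, 2 classes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=100)
sequences = [np.sin(np.linspace(0, (2 + 3 * y) * np.pi, 64))[:, None]
             + 0.1 * rng.standard_normal((64, 4)) for y in labels]
X = np.array([frequency_features(s) for s in sequences])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))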
1409.0923 | Ahmad Hassanat | Ahmad Basheer Hassanat | Dimensionality Invariant Similarity Measure | (ISSN: 1545-1003). http://www.jofamericanscience.org | J Am Sci 2014;10(8):221-226 | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new similarity measure to be used for general tasks
including supervised learning, which is represented by the K-nearest neighbor
classifier (KNN). The proposed similarity measure is invariant to large
differences in some dimensions in the feature space. The proposed metric is
proved mathematically to be a metric. To test its viability for different
applications, the KNN used the proposed metric for classifying test examples
chosen from a number of real datasets. Compared to some other well known
metrics, the experimental results show that the proposed metric is a promising
distance measure for the KNN classifier with strong potential for a wide range
of applications.
| [
{
"version": "v1",
"created": "Tue, 2 Sep 2014 23:45:29 GMT"
}
] | 2014-09-04T00:00:00 | [
[
"Hassanat",
"Ahmad Basheer",
""
]
] | TITLE: Dimensionality Invariant Similarity Measure
ABSTRACT: This paper presents a new similarity measure to be used for general tasks
including supervised learning, which is represented by the K-nearest neighbor
classifier (KNN). The proposed similarity measure is invariant to large
differences in some dimensions in the feature space. The proposed metric is
proved mathematically to be a metric. To test its viability for different
applications, the KNN used the proposed metric for classifying test examples
chosen from a number of real datasets. Compared to some other well known
metrics, the experimental results show that the proposed metric is a promising
distance measure for the KNN classifier with strong potential for a wide range
of applications.
| no_new_dataset | 0.953275 |
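The record above evaluates a new distance measure inside a K-nearest-neighbour classifier, but the abstract does not spell out its formula. The sketch below shows only the plumbing: how a custom, per-dimension bounded dissimilarity (an illustrative stand-in, not the paper's metric) can be plugged into scikit-learn's KNN and cross-validated on a standard dataset.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def bounded_ratio_distance(x, y):
    """Illustrative per-dimension dissimilarity bounded in [0, 1); NOT the paper's metric.
    Each dimension contributes 1 - (1 + min) / (1 + max) after shifting values to be >= 0."""
    lo, hi = np.minimum(x, y), np.maximum(x, y)
    shift = np.minimum(lo, 0.0)          # make both values non-negative per dimension
    return np.sum(1.0 - (1.0 + lo - shift) / (1.0 + hi - shift))

knn = KNeighborsClassifier(n_neighbors=3, metric=bounded_ratio_distance)
scores = cross_val_score(knn, *load_iris(return_X_y=True), cv=5)
print("accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))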
1409.1057 | Uwe Aickelin | Alexandros Ladas, Jonathan M. Garibaldi, Rodrigo Scarpel and Uwe
Aickelin | Augmented Neural Networks for Modelling Consumer Indebtness | Proceedings of the 2014 World Congress on Computational Intelligence
(WCCI 2014), pp. 3086-3093, 2014 | null | null | null | cs.CE cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consumer Debt has risen to be an important problem of modern societies,
generating a lot of research in order to understand the nature of consumer
indebtness, whose modelling has so far been carried out by statistical
models. In this work we show that Computational Intelligence can offer a more
holistic approach that is more suitable for the complex relationships an
indebtness dataset has and Linear Regression cannot uncover. In particular, as
our results show, Neural Networks achieve the best performance in modelling
consumer indebtness, especially when they manage to incorporate the significant
and experimentally verified results of the Data Mining process in the model,
exploiting the flexibility Neural Networks offer in designing their topology.
This novel method forms an elaborate framework to model Consumer indebtness
that can be extended to any other real world application.
| [
{
"version": "v1",
"created": "Wed, 3 Sep 2014 12:23:50 GMT"
}
] | 2014-09-04T00:00:00 | [
[
"Ladas",
"Alexandros",
""
],
[
"Garibaldi",
"Jonathan M.",
""
],
[
"Scarpel",
"Rodrigo",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: Augmented Neural Networks for Modelling Consumer Indebtness
ABSTRACT: Consumer Debt has risen to be an important problem of modern societies,
generating a lot of research in order to understand the nature of consumer
indebtness, whose modelling has so far been carried out by statistical
models. In this work we show that Computational Intelligence can offer a more
holistic approach that is more suitable for the complex relationships an
indebtness dataset has and Linear Regression cannot uncover. In particular, as
our results show, Neural Networks achieve the best performance in modelling
consumer indebtness, especially when they manage to incorporate the significant
and experimentally verified results of the Data Mining process in the model,
exploiting the flexibility Neural Networks offer in designing their topology.
This novel method forms an elaborate framework to model Consumer indebtness
that can be extended to any other real world application.
| no_new_dataset | 0.942507 |
1409.1199 | Stephen Plaza PhD | Stephen M. Plaza | Focused Proofreading: Efficiently Extracting Connectomes from Segmented
EM Images | null | null | null | null | q-bio.QM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying complex neural circuitry from electron microscopic (EM) images
may help unlock the mysteries of the brain. However, identifying this circuitry
requires time-consuming, manual tracing (proofreading) due to the size and
intricacy of these image datasets, thus limiting state-of-the-art analysis to
very small brain regions. Potential avenues to improve scalability include
automatic image segmentation and crowd sourcing, but current efforts have had
limited success. In this paper, we propose a new strategy, focused
proofreading, that works with automatic segmentation and aims to limit
proofreading to the regions of a dataset that are most impactful to the
resulting circuit. We then introduce a novel workflow, which exploits
biological information such as synapses, and apply it to a large dataset in the
fly optic lobe. With our techniques, we achieve significant tracing speedups of
3-5x without sacrificing the quality of the resulting circuit. Furthermore, our
methodology makes the task of proofreading much more accessible and hence
potentially enhances the effectiveness of crowd sourcing.
| [
{
"version": "v1",
"created": "Wed, 3 Sep 2014 19:14:13 GMT"
}
] | 2014-09-04T00:00:00 | [
[
"Plaza",
"Stephen M.",
""
]
] | TITLE: Focused Proofreading: Efficiently Extracting Connectomes from Segmented
EM Images
ABSTRACT: Identifying complex neural circuitry from electron microscopic (EM) images
may help unlock the mysteries of the brain. However, identifying this circuitry
requires time-consuming, manual tracing (proofreading) due to the size and
intricacy of these image datasets, thus limiting state-of-the-art analysis to
very small brain regions. Potential avenues to improve scalability include
automatic image segmentation and crowd sourcing, but current efforts have had
limited success. In this paper, we propose a new strategy, focused
proofreading, that works with automatic segmentation and aims to limit
proofreading to the regions of a dataset that are most impactful to the
resulting circuit. We then introduce a novel workflow, which exploits
biological information such as synapses, and apply it to a large dataset in the
fly optic lobe. With our techniques, we achieve significant tracing speedups of
3-5x without sacrificing the quality of the resulting circuit. Furthermore, our
methodology makes the task of proofreading much more accessible and hence
potentially enhances the effectiveness of crowd sourcing.
| no_new_dataset | 0.951323 |
1311.1780 | KyungHyun Cho | Caglar Gulcehre, Kyunghyun Cho, Razvan Pascanu and Yoshua Bengio | Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks | ECML/PKDD 2014 | null | null | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose and investigate a novel nonlinear unit, called $L_p$
unit, for deep neural networks. The proposed $L_p$ unit receives signals from
several projections of a subset of units in the layer below and computes a
normalized $L_p$ norm. We notice two interesting interpretations of the $L_p$
unit. First, the proposed unit can be understood as a generalization of a
number of conventional pooling operators such as average, root-mean-square and
max pooling widely used in, for instance, convolutional neural networks (CNN),
HMAX models and neocognitrons. Furthermore, the $L_p$ unit is, to a certain
degree, similar to the recently proposed maxout unit (Goodfellow et al., 2013)
which achieved the state-of-the-art object recognition results on a number of
benchmark datasets. Secondly, we provide a geometrical interpretation of the
activation function based on which we argue that the $L_p$ unit is more
efficient at representing complex, nonlinear separating boundaries. Each $L_p$
unit defines a superelliptic boundary, with its exact shape defined by the
order $p$. We claim that this makes it possible to model arbitrarily shaped,
curved boundaries more efficiently by combining a few $L_p$ units of different
orders. This insight justifies the need for learning different orders for each
unit in the model. We empirically evaluate the proposed $L_p$ units on a number
of datasets and show that multilayer perceptrons (MLP) consisting of the $L_p$
units achieve the state-of-the-art results on a number of benchmark datasets.
Furthermore, we evaluate the proposed $L_p$ unit on the recently proposed deep
recurrent neural networks (RNN).
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2013 18:30:37 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Nov 2013 03:32:43 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Nov 2013 18:32:42 GMT"
},
{
"version": "v4",
"created": "Wed, 29 Jan 2014 22:55:24 GMT"
},
{
"version": "v5",
"created": "Sat, 1 Feb 2014 18:17:38 GMT"
},
{
"version": "v6",
"created": "Fri, 7 Feb 2014 18:55:42 GMT"
},
{
"version": "v7",
"created": "Tue, 2 Sep 2014 00:53:40 GMT"
}
] | 2014-09-03T00:00:00 | [
[
"Gulcehre",
"Caglar",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Pascanu",
"Razvan",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks
ABSTRACT: In this paper we propose and investigate a novel nonlinear unit, called $L_p$
unit, for deep neural networks. The proposed $L_p$ unit receives signals from
several projections of a subset of units in the layer below and computes a
normalized $L_p$ norm. We notice two interesting interpretations of the $L_p$
unit. First, the proposed unit can be understood as a generalization of a
number of conventional pooling operators such as average, root-mean-square and
max pooling widely used in, for instance, convolutional neural networks (CNN),
HMAX models and neocognitrons. Furthermore, the $L_p$ unit is, to a certain
degree, similar to the recently proposed maxout unit (Goodfellow et al., 2013)
which achieved the state-of-the-art object recognition results on a number of
benchmark datasets. Secondly, we provide a geometrical interpretation of the
activation function based on which we argue that the $L_p$ unit is more
efficient at representing complex, nonlinear separating boundaries. Each $L_p$
unit defines a superelliptic boundary, with its exact shape defined by the
order $p$. We claim that this makes it possible to model arbitrarily shaped,
curved boundaries more efficiently by combining a few $L_p$ units of different
orders. This insight justifies the need for learning different orders for each
unit in the model. We empirically evaluate the proposed $L_p$ units on a number
of datasets and show that multilayer perceptrons (MLP) consisting of the $L_p$
units achieve the state-of-the-art results on a number of benchmark datasets.
Furthermore, we evaluate the proposed $L_p$ unit on the recently proposed deep
recurrent neural networks (RNN).
| no_new_dataset | 0.952309 |
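The record above defines the $L_p$ unit as a normalized $L_p$ norm of several projections of units in the layer below. The following NumPy sketch implements a single forward pass under assumed shapes and normalization (a mean over the projections); learning the order p and the projection weights is omitted.

import numpy as np

def lp_unit(x, W, b, p):
    """Forward pass of one L_p unit: a normalized L_p norm of several linear
    projections of the input. p = 1, 2 and large p recover mean-of-magnitudes,
    RMS and (approximately) max pooling."""
    z = W @ x + b                       # projections feeding the unit, shape (k,)
    return (np.mean(np.abs(z) ** p)) ** (1.0 / p)

rng = np.random.default_rng(0)
x = rng.standard_normal(5)              # input from the layer below
W, b = rng.standard_normal((4, 5)), rng.standard_normal(4)

for p in (1.0, 2.0, 8.0):               # larger p approaches max pooling
    print(f"p={p}: {lp_unit(x, W, b, p):.4f}")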
1409.0602 | Zhu Shizhan | Shizhan Zhu, Cheng Li, Chen Change Loy, and Xiaoou Tang | Transferring Landmark Annotations for Cross-Dataset Face Alignment | Shizhan Zhu and Cheng Li share equal contributions | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | Dataset bias is a well-known problem in the object recognition domain. This
issue, nonetheless, is rarely explored in face alignment research. In this
study, we show that the dataset plays an integral part in face alignment
performance. Specifically, owing to face alignment dataset bias, training on
one database and testing on another or unseen domain would lead to poor
performance. Creating an unbiased dataset through combining various existing
databases, however, is non-trivial as one has to exhaustively re-label the
landmarks for standardisation. In this work, we propose a simple and yet
effective method to bridge the disparate annotation spaces between databases,
making dataset fusion possible. We show extensive results on combining various
popular databases (LFW, AFLW, LFPW, HELEN) for improved cross-dataset and
unseen data alignment.
| [
{
"version": "v1",
"created": "Tue, 2 Sep 2014 03:36:55 GMT"
}
] | 2014-09-03T00:00:00 | [
[
"Zhu",
"Shizhan",
""
],
[
"Li",
"Cheng",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Transferring Landmark Annotations for Cross-Dataset Face Alignment
ABSTRACT: Dataset bias is a well-known problem in the object recognition domain. This
issue, nonetheless, is rarely explored in face alignment research. In this
study, we show that the dataset plays an integral part in face alignment
performance. Specifically, owing to face alignment dataset bias, training on
one database and testing on another or unseen domain would lead to poor
performance. Creating an unbiased dataset through combining various existing
databases, however, is non-trivial as one has to exhaustively re-label the
landmarks for standardisation. In this work, we propose a simple and yet
effective method to bridge the disparate annotation spaces between databases,
making dataset fusion possible. We show extensive results on combining various
popular databases (LFW, AFLW, LFPW, HELEN) for improved cross-dataset and
unseen data alignment.
| no_new_dataset | 0.95275 |
1409.0612 | Ying Long | Ying Long, Zhenjiang Shen | Population spatialization and synthesis with open data | 14 pages | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individuals together with their locations & attributes are essential to feed
micro-level applied urban models (for example, spatial micro-simulation and
agent-based modeling) for policy evaluation. Existed studies on population
spatialization and population synthesis are generally separated. In developing
countries like China, population distribution in a fine scale, as the input for
population synthesis, is not universally available. With the open-government
initiatives in China and the emerging Web 2.0 techniques, more and more open
data are becoming achievable. In this paper, we propose an automatic process
using open data for population spatialization and synthesis. Specifically, the
road network in OpenStreetMap is used to identify and delineate parcel
geometries, while crowd-sourced POIs are gathered to infer urban parcels with a
vector cellular automata model. Housing-related online Check-in records are
then applied to distinguish residential parcels from all of the identified
urban parcels. Finally the published census data, in which the sub-district
level of attributes distribution and relationships are available, is used for
synthesizing population attributes with a previously developed tool Agenter
(Long and Shen, 2013). The results are validated with ground truth
manually-prepared dataset by planners from Beijing Institute of City Planning.
| [
{
"version": "v1",
"created": "Tue, 2 Sep 2014 06:32:39 GMT"
}
] | 2014-09-03T00:00:00 | [
[
"Long",
"Ying",
""
],
[
"Shen",
"Zhenjiang",
""
]
] | TITLE: Population spatialization and synthesis with open data
ABSTRACT: Individuals together with their locations & attributes are essential to feed
micro-level applied urban models (for example, spatial micro-simulation and
agent-based modeling) for policy evaluation. Existed studies on population
spatialization and population synthesis are generally separated. In developing
countries like China, population distribution in a fine scale, as the input for
population synthesis, is not universally available. With the open-government
initiatives in China and the emerging Web 2.0 techniques, more and more open
data are becoming achievable. In this paper, we propose an automatic process
using open data for population spatialization and synthesis. Specifically, the
road network in OpenStreetMap is used to identify and delineate parcel
geometries, while crowd-sourced POIs are gathered to infer urban parcels with a
vector cellular automata model. Housing-related online Check-in records are
then applied to distinguish residential parcels from all of the identified
urban parcels. Finally the published census data, in which the sub-district
level of attributes distribution and relationships are available, is used for
synthesizing population attributes with a previously developed tool Agenter
(Long and Shen, 2013). The results are validated with ground truth
manually-prepared dataset by planners from Beijing Institute of City Planning.
| no_new_dataset | 0.959611 |
1409.0651 | Koninika Pal | Koninika Pal, Sebastian Michel | An LSH Index for Computing Kendall's Tau over Top-k Lists | 6 pages, 8 subfigures, presented in Seventeenth International
Workshop on the Web and Databases (WebDB 2014) co-located with ACM SIGMOD2014 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of similarity search within a set of top-k lists
under the Kendall's Tau distance function. This distance describes how related
two rankings are in terms of concordantly and discordantly ordered items. As
top-k lists are usually very short compared to the global domain of possible
items to be ranked, creating an inverted index to look up overlapping lists is
possible but does not capture the similarity measure tightly enough. In this
work, we investigate locality sensitive hashing schemes for the Kendall's Tau
distance and evaluate the proposed methods using two real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 2 Sep 2014 10:07:27 GMT"
}
] | 2014-09-03T00:00:00 | [
[
"Pal",
"Koninika",
""
],
[
"Michel",
"Sebastian",
""
]
] | TITLE: An LSH Index for Computing Kendall's Tau over Top-k Lists
ABSTRACT: We consider the problem of similarity search within a set of top-k lists
under the Kendall's Tau distance function. This distance describes how related
two rankings are in terms of concordantly and discordantly ordered items. As
top-k lists are usually very short compared to the global domain of possible
items to be ranked, creating an inverted index to look up overlapping lists is
possible but does not capture the similarity measure tightly enough. In this
work, we investigate locality sensitive hashing schemes for the Kendall's Tau
distance and evaluate the proposed methods using two real-world datasets.
| no_new_dataset | 0.950411 |
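The record above builds an LSH index for the Kendall's Tau distance between top-k lists. The sketch below computes only the target quantity, the number of discordant pairs between two short lists, restricted (as a simplifying assumption) to items present in both lists; the hashing scheme itself is not reproduced.

from itertools import combinations

def kendall_tau_distance(list_a, list_b):
    """Number of discordant pairs among items common to both top-k lists
    (a simplification: items missing from either list are ignored)."""
    common = [x for x in list_a if x in set(list_b)]
    rank_b = {item: i for i, item in enumerate(list_b)}
    discordant = 0
    for x, y in combinations(common, 2):        # x precedes y in list_a
        if rank_b[x] > rank_b[y]:               # ...but follows y in list_b
            discordant += 1
    return discordant

top_k_1 = ["a", "b", "c", "d", "e"]
top_k_2 = ["b", "a", "c", "e", "f"]
print(kendall_tau_distance(top_k_1, top_k_2))   # 1 discordant pair: (a, b)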
1409.0763 | Uwe Aickelin | Qi Chen, Amanda Whitbrook, Uwe Aickelin and Chris Roadknight | Data classification using the Dempster-Shafer method | Journal of Experimental & Theoretical Artificial Intelligence,
ahead-of-print, 2014 | null | 10.1080/0952813X.2014.886301 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, the Dempster-Shafer method is employed as the theoretical
basis for creating data classification systems. Testing is carried out using
three popular (multiple attribute) benchmark datasets that have two, three and
four classes. In each case, a subset of the available data is used for training
to establish thresholds, limits or likelihoods of class membership for each
attribute, and hence create mass functions that establish probability of class
membership for each attribute of the test data. Classification of each data
item is achieved by combination of these probabilities via Dempster's Rule of
Combination. Results for the first two datasets show extremely high
classification accuracy that is competitive with other popular methods. The
third dataset is non-numerical and difficult to classify, but good results can
be achieved provided the system and mass functions are designed carefully and
the right attributes are chosen for combination. In all cases the
Dempster-Shafer method provides comparable performance to other more popular
algorithms, but the overhead of generating accurate mass functions increases
the complexity with the addition of new attributes. Overall, the results
suggest that the D-S approach provides a suitable framework for the design of
classification systems and that automating the mass function design and
calculation would increase the viability of the algorithm for complex
classification problems.
| [
{
"version": "v1",
"created": "Tue, 2 Sep 2014 15:49:40 GMT"
}
] | 2014-09-03T00:00:00 | [
[
"Chen",
"Qi",
""
],
[
"Whitbrook",
"Amanda",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Roadknight",
"Chris",
""
]
] | TITLE: Data classification using the Dempster-Shafer method
ABSTRACT: In this paper, the Dempster-Shafer method is employed as the theoretical
basis for creating data classification systems. Testing is carried out using
three popular (multiple attribute) benchmark datasets that have two, three and
four classes. In each case, a subset of the available data is used for training
to establish thresholds, limits or likelihoods of class membership for each
attribute, and hence create mass functions that establish probability of class
membership for each attribute of the test data. Classification of each data
item is achieved by combination of these probabilities via Dempster's Rule of
Combination. Results for the first two datasets show extremely high
classification accuracy that is competitive with other popular methods. The
third dataset is non-numerical and difficult to classify, but good results can
be achieved provided the system and mass functions are designed carefully and
the right attributes are chosen for combination. In all cases the
Dempster-Shafer method provides comparable performance to other more popular
algorithms, but the overhead of generating accurate mass functions increases
the complexity with the addition of new attributes. Overall, the results
suggest that the D-S approach provides a suitable framework for the design of
classification systems and that automating the mass function design and
calculation would increase the viability of the algorithm for complex
classification problems.
| no_new_dataset | 0.952042 |
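The record above classifies data by combining per-attribute mass functions with Dempster's Rule of Combination. The following sketch implements that combination rule for two mass functions defined over sets of classes; the mass values are hypothetical, and the mass-function design step described in the paper is not shown.

from itertools import product

def dempster_combine(m1, m2):
    """Dempster's Rule of Combination for two mass functions given as
    {frozenset_of_classes: mass}. Conflicting mass (empty intersections) is
    discarded and the remaining mass renormalised."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Hypothetical evidence from two attributes about classes {A, B}.
m_attr1 = {frozenset({"A"}): 0.7, frozenset({"A", "B"}): 0.3}
m_attr2 = {frozenset({"B"}): 0.4, frozenset({"A", "B"}): 0.6}
print(dempster_combine(m_attr1, m_attr2))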
1409.0791 | Jian Yang | Jian Yang, Liqiu Meng | Feature Selection in Conditional Random Fields for Map Matching of GPS
Trajectories | null | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Map matching of the GPS trajectory serves the purpose of recovering the
original route on a road network from a sequence of noisy GPS observations. It
is a fundamental technique to many Location Based Services. However, map
matching of a low sampling rate on urban road network is still a challenging
task. In this paper, the characteristics of Conditional Random Fields with
regard to inducing many contextual features and feature selection are explored
for the map matching of the GPS trajectories at a low sampling rate.
Experiments on a taxi trajectory dataset show that our method may achieve
competitive results while successfully reducing model complexity for
computation-limited applications.
| [
{
"version": "v1",
"created": "Tue, 2 Sep 2014 16:52:53 GMT"
}
] | 2014-09-03T00:00:00 | [
[
"Yang",
"Jian",
""
],
[
"Meng",
"Liqiu",
""
]
] | TITLE: Feature Selection in Conditional Random Fields for Map Matching of GPS
Trajectories
ABSTRACT: Map matching of the GPS trajectory serves the purpose of recovering the
original route on a road network from a sequence of noisy GPS observations. It
is a fundamental technique to many Location Based Services. However, map
matching of a low sampling rate on urban road network is still a challenging
task. In this paper, the characteristics of Conditional Random Fields with
regard to inducing many contextual features and feature selection are explored
for the map matching of the GPS trajectories at a low sampling rate.
Experiments on a taxi trajectory dataset show that our method may achieve
competitive results while successfully reducing model complexity for
computation-limited applications.
| no_new_dataset | 0.951142 |
1409.0798 | Aditya Parameswaran | Anant Bhardwaj, Souvik Bhattacherjee, Amit Chavan, Amol Deshpande,
Aaron J. Elmore, Samuel Madden, Aditya G. Parameswaran | DataHub: Collaborative Data Science & Dataset Version Management at
Scale | 7 pages | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relational databases have limited support for data collaboration, where teams
collaboratively curate and analyze large datasets. Inspired by software version
control systems like git, we propose (a) a dataset version control system,
giving users the ability to create, branch, merge, difference and search large,
divergent collections of datasets, and (b) a platform, DataHub, that gives
users the ability to perform collaborative data analysis building on this
version control system. We outline the challenges in providing dataset version
control at scale.
| [
{
"version": "v1",
"created": "Tue, 2 Sep 2014 17:16:47 GMT"
}
] | 2014-09-03T00:00:00 | [
[
"Bhardwaj",
"Anant",
""
],
[
"Bhattacherjee",
"Souvik",
""
],
[
"Chavan",
"Amit",
""
],
[
"Deshpande",
"Amol",
""
],
[
"Elmore",
"Aaron J.",
""
],
[
"Madden",
"Samuel",
""
],
[
"Parameswaran",
"Aditya G.",
""
]
] | TITLE: DataHub: Collaborative Data Science & Dataset Version Management at
Scale
ABSTRACT: Relational databases have limited support for data collaboration, where teams
collaboratively curate and analyze large datasets. Inspired by software version
control systems like git, we propose (a) a dataset version control system,
giving users the ability to create, branch, merge, difference and search large,
divergent collections of datasets, and (b) a platform, DataHub, that gives
users the ability to perform collaborative data analysis building on this
version control system. We outline the challenges in providing dataset version
control at scale.
| no_new_dataset | 0.937038 |
1204.6535 | Sandeep Gupta | Sandeep Gupta | Citations, Sequence Alignments, Contagion, and Semantics: On Acyclic
Structures and their Randomness | null | null | null | null | cs.DM cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Datasets from several domains, such as life-sciences, semantic web, machine
learning, natural language processing, etc. are naturally structured as acyclic
graphs. These datasets, particularly those in bio-informatics and computational
epidemiology, have grown tremendously over the last decade or so. Increasingly,
as a consequence, there is a need to build and evaluate various strategies for
processing acyclic structured graphs. Most of the proposed research models the
real world acyclic structures as random graphs, i.e., they are generated by
randomly selecting a subset of edges from all possible edges. Unfortunately the
graphs thus generated have predictable and degenerate structures, i.e., the
resulting graphs will always have almost the same degree distribution and very
short paths.
Specifically, we show that if $O(n \log n \log n)$ edges are added to a
binary tree of $n$ nodes then with probability more than $O(1/(\log n)^{1/n})$
the depth of all but $O({\log \log n} ^{\log \log n})$ vertices of the dag
collapses to 1. Experiments show that irregularity, as measured by distribution
of length of random walks from root to leaves, is also predictable and small.
The degree distribution and random walk length properties of real world graphs
from these domains are significantly different from random graphs of similar
vertex and edge size.
| [
{
"version": "v1",
"created": "Mon, 30 Apr 2012 02:19:26 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Jul 2012 16:02:09 GMT"
},
{
"version": "v3",
"created": "Fri, 26 Oct 2012 10:11:36 GMT"
},
{
"version": "v4",
"created": "Tue, 20 Nov 2012 07:07:48 GMT"
},
{
"version": "v5",
"created": "Thu, 17 Jan 2013 19:41:26 GMT"
},
{
"version": "v6",
"created": "Sun, 31 Aug 2014 03:30:09 GMT"
}
] | 2014-09-02T00:00:00 | [
[
"Gupta",
"Sandeep",
""
]
] | TITLE: Citations, Sequence Alignments, Contagion, and Semantics: On Acyclic
Structures and their Randomness
ABSTRACT: Datasets from several domains, such as life-sciences, semantic web, machine
learning, natural language processing, etc. are naturally structured as acyclic
graphs. These datasets, particularly those in bio-informatics and computational
epidemiology, have grown tremendously over the last decade or so. Increasingly,
as a consequence, there is a need to build and evaluate various strategies for
processing acyclic structured graphs. Most of the proposed research models the
real world acyclic structures as random graphs, i.e., they are generated by
randomly selecting a subset of edges from all possible edges. Unfortunately the
graphs thus generated have predictable and degenerate structures, i.e., the
resulting graphs will always have almost the same degree distribution and very
short paths.
Specifically, we show that if $O(n \log n \log n)$ edges are added to a
binary tree of $n$ nodes then with probability more than $O(1/(\log n)^{1/n})$
the depth of all but $O({\log \log n} ^{\log \log n})$ vertices of the dag
collapses to 1. Experiments show that irregularity, as measured by distribution
of length of random walks from root to leaves, is also predictable and small.
The degree distribution and random walk length properties of real world graphs
from these domains are significantly different from random graphs of similar
vertex and edge size.
| no_new_dataset | 0.948394 |
1306.3874 | KyungHyun Cho | Kyunghyun Cho and Xi Chen | Classifying and Visualizing Motion Capture Sequences using Deep Neural
Networks | VISAPP 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The gesture recognition using motion capture data and depth sensors has
recently drawn more attention in vision recognition. Currently most systems
only classify datasets with a couple of dozen different actions. Moreover,
feature extraction from the data is often computationally complex. In this paper,
we propose a novel system to recognize the actions from skeleton data with
simple, but effective, features using deep neural networks. Features are
extracted for each frame based on the relative positions of joints (PO),
temporal differences (TD), and normalized trajectories of motion (NT). Given
these features a hybrid multi-layer perceptron is trained, which simultaneously
classifies and reconstructs the input data. We use a deep autoencoder to visualize
learnt features, and the experiments show that deep neural networks can capture
more discriminative information than, for instance, principal component
analysis can. We test our system on a public database with 65 classes and more
than 2,000 motion sequences. We obtain an accuracy above 95% which is, to our
knowledge, the state of the art result for such a large dataset.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2013 14:26:52 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Sep 2014 16:03:02 GMT"
}
] | 2014-09-02T00:00:00 | [
[
"Cho",
"Kyunghyun",
""
],
[
"Chen",
"Xi",
""
]
] | TITLE: Classifying and Visualizing Motion Capture Sequences using Deep Neural
Networks
ABSTRACT: The gesture recognition using motion capture data and depth sensors has
recently drawn more attention in vision recognition. Currently most systems
only classify datasets with a couple of dozen different actions. Moreover,
feature extraction from the data is often computationally complex. In this paper,
we propose a novel system to recognize the actions from skeleton data with
simple, but effective, features using deep neural networks. Features are
extracted for each frame based on the relative positions of joints (PO),
temporal differences (TD), and normalized trajectories of motion (NT). Given
these features a hybrid multi-layer perceptron is trained, which simultaneously
classifies and reconstructs the input data. We use a deep autoencoder to visualize
learnt features, and the experiments show that deep neural networks can capture
more discriminative information than, for instance, principal component
analysis can. We test our system on a public database with 65 classes and more
than 2,000 motion sequences. We obtain an accuracy above 95% which is, to our
knowledge, the state of the art result for such a large dataset.
| no_new_dataset | 0.947817 |
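The record above extracts per-frame skeleton features from relative joint positions (PO) and temporal differences (TD). The NumPy sketch below computes simple versions of these two feature blocks under assumed conventions (choice of reference joint, first-order differences); the normalized trajectories (NT) and the neural network itself are omitted.

import numpy as np

def frame_features(skeleton, root_index=0):
    """Per-frame features from a skeleton sequence of shape (frames, joints, 3):
    PO = joint positions relative to a reference joint, TD = temporal differences."""
    po = skeleton - skeleton[:, root_index:root_index + 1, :]    # relative positions
    td = np.diff(skeleton, axis=0, prepend=skeleton[:1])         # frame-to-frame motion
    return np.concatenate([po.reshape(len(skeleton), -1),
                           td.reshape(len(skeleton), -1)], axis=1)

rng = np.random.default_rng(0)
sequence = rng.standard_normal((120, 20, 3))    # 120 frames, 20 joints, 3-D coordinates
features = frame_features(sequence)
print(features.shape)                           # (120, 120): 60 PO + 60 TD values per frame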
1402.3967 | Salvatore Pascale | Salvatore Pascale and Valerio Lucarini and Xue Feng and Amilcare
Porporato and Shabeh ul Hasson | Analysis of rainfall seasonality from observations and climate models | 35 pages, 15 figures | null | 10.1007/s00382-014-2278-2 | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two new indicators of rainfall seasonality based on information entropy, the
relative entropy (RE) and the dimensionless seasonality index (DSI), together
with the mean annual rainfall, are evaluated on a global scale for recently
updated precipitation gridded datasets and for historical simulations from
coupled atmosphere-ocean general circulation models. The RE provides a measure
of the number of wet months and, for precipitation regimes featuring one
maximum in the monthly rain distribution, it is related to the duration of the
wet season. The DSI combines the rainfall intensity with its degree of
seasonality and it is an indicator of the extent of the global monsoon region.
We show that the RE and the DSI are fairly independent of the time resolution
of the precipitation data, thereby allowing objective metrics for model
intercomparison and ranking.
Regions with different precipitation regimes are classified and characterized
in terms of RE and DSI. Comparison of different land observational datasets
reveals substantial difference in their local representation of seasonality. It
is shown that two-dimensional maps of RE provide an easy way to compare
rainfall seasonality from various datasets and to determine areas of interest.
CMIP5 models consistently overestimate the RE over tropical Latin America and
underestimate it in Western Africa and East Asia. It is demonstrated that
positive RE biases in a GCM are associated with simulated monthly precipitation
fractions which are too large during the wet months and too small in the months
preceding the wet season; negative biases are instead due to an excess of
rainfall during the dry months.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2014 11:32:26 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Sep 2014 15:12:14 GMT"
}
] | 2014-09-02T00:00:00 | [
[
"Pascale",
"Salvatore",
""
],
[
"Lucarini",
"Valerio",
""
],
[
"Feng",
"Xue",
""
],
[
"Porporato",
"Amilcare",
""
],
[
"Hasson",
"Shabeh ul",
""
]
] | TITLE: Analysis of rainfall seasonality from observations and climate models
ABSTRACT: Two new indicators of rainfall seasonality based on information entropy, the
relative entropy (RE) and the dimensionless seasonality index (DSI), together
with the mean annual rainfall, are evaluated on a global scale for recently
updated precipitation gridded datasets and for historical simulations from
coupled atmosphere-ocean general circulation models. The RE provides a measure
of the number of wet months and, for precipitation regimes featuring one
maximum in the monthly rain distribution, it is related to the duration of the
wet season. The DSI combines the rainfall intensity with its degree of
seasonality and it is an indicator of the extent of the global monsoon region.
We show that the RE and the DSI are fairly independent of the time resolution
of the precipitation data, thereby allowing objective metrics for model
intercomparison and ranking.
Regions with different precipitation regimes are classified and characterized
in terms of RE and DSI. Comparison of different land observational datasets
reveals substantial difference in their local representation of seasonality. It
is shown that two-dimensional maps of RE provide an easy way to compare
rainfall seasonality from various datasets and to determine areas of interest.
CMIP5 models consistently overestimate the RE over tropical Latin America and
underestimate it in Western Africa and East Asia. It is demonstrated that
positive RE biases in a GCM are associated with simulated monthly precipitation
fractions which are too large during the wet months and too small in the months
preceding the wet season; negative biases are instead due to an excess of
rainfall during the dry months.
| no_new_dataset | 0.943608 |
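The record above uses a relative-entropy indicator of rainfall seasonality. The sketch below computes one common entropy-based formulation, the divergence of monthly rainfall fractions from a uniform distribution over the 12 months; treating this as the paper's exact RE definition is an assumption, and the DSI and monsoon-extent analysis are not reproduced.

import numpy as np

def relative_entropy(monthly_rainfall):
    """Relative entropy (in bits) of monthly rainfall fractions with respect to a
    uniform distribution over 12 months; 0 = perfectly uniform rainfall, larger
    values = rain concentrated in fewer (wetter) months."""
    p = np.asarray(monthly_rainfall, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                        # months with zero rain contribute nothing
    return float(np.sum(p * np.log2(12 * p)))

uniform = [100.0] * 12                  # rain spread evenly over the year
monsoonal = [5, 5, 5, 10, 40, 300, 400, 350, 150, 20, 10, 5]
print(relative_entropy(uniform))        # ~0.0
print(relative_entropy(monsoonal))      # clearly larger: strongly seasonal regime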
1408.5571 | Michael (Micky) Fire | Michael Fire, Thomas Chesney, and Yuval Elovici | Quantitative Analysis of Genealogy Using Digitised Family Trees | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driven by the popularity of television shows such as Who Do You Think You
Are? many millions of users have uploaded their family tree to web projects
such as WikiTree. Analysis of this corpus enables us to investigate genealogy
computationally. The study of heritage in the social sciences has led to an
increased understanding of ancestry and descent but such efforts are hampered
by difficult to access data. Genealogical research is typically a tedious
process involving trawling through sources such as birth and death
certificates, wills, letters and land deeds. Decades of research have developed
and examined hypotheses on population sex ratios, marriage trends, fertility,
lifespan, and the frequency of twins and triplets. These can now be tested on
vast datasets containing many billions of entries using machine learning tools.
Here we survey the use of genealogy data mining using family trees dating back
centuries and featuring profiles on nearly 7 million individuals based in over
160 countries. These data are not typically created by trained genealogists and
so we verify them with reference to third party censuses. We present results on
a range of aspects of population dynamics. Our approach extends the boundaries
of genealogy inquiry to precise measurement of underlying human phenomena.
| [
{
"version": "v1",
"created": "Sun, 24 Aug 2014 07:11:20 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Aug 2014 18:26:23 GMT"
}
] | 2014-09-02T00:00:00 | [
[
"Fire",
"Michael",
""
],
[
"Chesney",
"Thomas",
""
],
[
"Elovici",
"Yuval",
""
]
] | TITLE: Quantitative Analysis of Genealogy Using Digitised Family Trees
ABSTRACT: Driven by the popularity of television shows such as Who Do You Think You
Are? many millions of users have uploaded their family tree to web projects
such as WikiTree. Analysis of this corpus enables us to investigate genealogy
computationally. The study of heritage in the social sciences has led to an
increased understanding of ancestry and descent but such efforts are hampered
by difficult to access data. Genealogical research is typically a tedious
process involving trawling through sources such as birth and death
certificates, wills, letters and land deeds. Decades of research have developed
and examined hypotheses on population sex ratios, marriage trends, fertility,
lifespan, and the frequency of twins and triplets. These can now be tested on
vast datasets containing many billions of entries using machine learning tools.
Here we survey the use of genealogy data mining using family trees dating back
centuries and featuring profiles on nearly 7 million individuals based in over
160 countries. These data are not typically created by trained genealogists and
so we verify them with reference to third party censuses. We present results on
a range of aspects of population dynamics. Our approach extends the boundaries
of genealogy inquiry to precise measurement of underlying human phenomena.
| no_new_dataset | 0.935051 |
1409.0347 | Chao Li | Chao Li and Lili Guo and Andrzej Cichocki | Multi-tensor Completion for Estimating Missing Values in Video Data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many tensor-based data completion methods aim to solve image and video
in-painting problems, but all of these methods were developed for a single
dataset. In most real applications, we can usually obtain more than one
dataset reflecting the same phenomenon, and all the datasets are mutually related in
some sense. This raises the question of whether such relationships can improve
the performance of data completion. In this paper, we propose a novel
and efficient method that exploits the relationship among datasets for
multi-video data completion. Numerical results show that the proposed method
significantly improves the performance of video in-painting, particularly in the
case of very high missing percentages.
| [
{
"version": "v1",
"created": "Mon, 1 Sep 2014 09:46:52 GMT"
}
] | 2014-09-02T00:00:00 | [
[
"Li",
"Chao",
""
],
[
"Guo",
"Lili",
""
],
[
"Cichocki",
"Andrzej",
""
]
] | TITLE: Multi-tensor Completion for Estimating Missing Values in Video Data
ABSTRACT: Many tensor-based data completion methods aim to solve image and video
in-painting problems, but all of these methods were developed for a single
dataset. In most real applications, we can usually obtain more than one
dataset reflecting the same phenomenon, and all the datasets are mutually related in
some sense. This raises the question of whether such relationships can improve
the performance of data completion. In this paper, we propose a novel
and efficient method that exploits the relationship among datasets for
multi-video data completion. Numerical results show that the proposed method
significantly improves the performance of video in-painting, particularly in the
case of very high missing percentages.
| no_new_dataset | 0.954816 |
1408.7071 | Zhenzhong Lan | Zhenzhong Lan, Xuanchong Li, Alexandar G. Hauptmann | Temporal Extension of Scale Pyramid and Spatial Pyramid Matching for
Action Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Historically, researchers in the field have spent a great deal of effort to
create image representations that have scale invariance and retain spatial
location information. This paper proposes to encode equivalent temporal
characteristics in video representations for action recognition. To achieve
temporal scale invariance, we develop a method called temporal scale pyramid
(TSP). To encode temporal information, we present and compare two methods
called temporal extension descriptor (TED) and temporal division pyramid (TDP).
Our purpose is to suggest solutions for matching complex actions that have
large variation in velocity and appearance, which is missing from most current
action representations. The experimental results on four benchmark datasets,
UCF50, HMDB51, Hollywood2 and Olympic Sports, support our approach and
significantly outperform state-of-the-art methods. Most noticeably, we achieve
65.0% mean accuracy and 68.2% mean average precision on the challenging HMDB51
and Hollywood2 datasets which constitutes an absolute improvement over the
state-of-the-art by 7.8% and 3.9%, respectively.
| [
{
"version": "v1",
"created": "Fri, 29 Aug 2014 17:05:29 GMT"
}
] | 2014-09-01T00:00:00 | [
[
"Lan",
"Zhenzhong",
""
],
[
"Li",
"Xuanchong",
""
],
[
"Hauptmann",
"Alexandar G.",
""
]
] | TITLE: Temporal Extension of Scale Pyramid and Spatial Pyramid Matching for
Action Recognition
ABSTRACT: Historically, researchers in the field have spent a great deal of effort to
create image representations that have scale invariance and retain spatial
location information. This paper proposes to encode equivalent temporal
characteristics in video representations for action recognition. To achieve
temporal scale invariance, we develop a method called temporal scale pyramid
(TSP). To encode temporal information, we present and compare two methods
called temporal extension descriptor (TED) and temporal division pyramid (TDP).
Our purpose is to suggest solutions for matching complex actions that have
large variation in velocity and appearance, which is missing from most current
action representations. The experimental results on four benchmark datasets,
UCF50, HMDB51, Hollywood2 and Olympic Sports, support our approach and
significantly outperform state-of-the-art methods. Most noticeably, we achieve
65.0% mean accuracy and 68.2% mean average precision on the challenging HMDB51
and Hollywood2 datasets which constitutes an absolute improvement over the
state-of-the-art by 7.8% and 3.9%, respectively.
| no_new_dataset | 0.953362 |
1408.6691 | Luca Matteis | Luca Matteis | VoID-graph: Visualize Linked Datasets on the Web | null | null | null | null | cs.DB cs.HC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The Linked Open Data (LOD) cloud diagram is a picture that helps us grasp the
contents and the links of globally available data sets. Such a diagram has been a
powerful dissemination method for the Linked Data movement, allowing people to
glance at the size and structure of this distributed, interconnected database.
However, generating such an image for third-party datasets can be quite a complex
task as it requires the installation and understanding of a variety of tools
which are not easy to set up. In this paper we present VoID-graph
(http://lmatteis.github.io/void-graph/), a standalone web-tool that, given a
VoID description, can visualize a diagram similar to the LOD cloud. It is novel
because the diagram is autonomously shaped from VoID descriptions directly
within a Web-browser, which doesn't require any server cooperation. This makes
it not only easy to use, as no installation or configuration is required, but
also makes it more sustainable, as it is built using Open Web standards such as
JavaScript and SVG.
| [
{
"version": "v1",
"created": "Thu, 28 Aug 2014 12:01:51 GMT"
}
] | 2014-08-29T00:00:00 | [
[
"Matteis",
"Luca",
""
]
] | TITLE: VoID-graph: Visualize Linked Datasets on the Web
ABSTRACT: The Linked Open Data (LOD) cloud diagram is a picture that helps us grasp the
contents and the links of globally available data sets. Such a diagram has been a
powerful dissemination method for the Linked Data movement, allowing people to
glance at the size and structure of this distributed, interconnected database.
However, generating such an image for third-party datasets can be quite a complex
task as it requires the installation and understanding of a variety of tools
which are not easy to set up. In this paper we present VoID-graph
(http://lmatteis.github.io/void-graph/), a standalone web-tool that, given a
VoID description, can visualize a diagram similar to the LOD cloud. It is novel
because the diagram is autonomously shaped from VoID descriptions directly
within a Web-browser, which doesn't require any server cooperation. This makes
it not only easy to use, as no installation or configuration is required, but
also makes it more sustainable, as it is built using Open Web standards such as
JavaScript and SVG.
| no_new_dataset | 0.940243 |
1408.6779 | Walter Hopkins | Walter Hopkins (ATLAS Collaboration) | ATLAS upgrades for the next decades | null | null | null | ATL-UPGRADE-PROC-2014-003 | physics.ins-det hep-ex | http://creativecommons.org/licenses/by/3.0/ | After the successful LHC operation at the center-of-mass energies of 7 and 8
TeV in 2010-2012, plans are actively advancing for a series of upgrades of the
accelerator, culminating roughly ten years from now in the high-luminosity LHC
(HL-LHC) project, delivering of the order of five times the LHC nominal
instantaneous luminosity along with luminosity leveling. The final goal is to
extend the dataset from a few hundred fb$^{-1}$ to 3000 fb$^{-1}$ by around
2035 for ATLAS and CMS. In parallel, the experiments need to be kept in lockstep
with the accelerator to accommodate running beyond the nominal luminosity this
decade. Current planning in ATLAS envisions significant upgrades to the
detector during the consolidation of the LHC to reach full LHC energy and
further upgrades. The challenge of coping with the HL-LHC instantaneous and
integrated luminosity, along with the associated radiation levels, requires
further major changes to the ATLAS detector. The designs are developing rapidly
for a new all-silicon tracker, significant upgrades of the calorimeter and muon
systems, as well as improved triggers and data acquisition. This report
summarizes various improvements to the ATLAS detector required to cope with the
anticipated evolution of the LHC luminosity during this decade and the next.
| [
{
"version": "v1",
"created": "Thu, 28 Aug 2014 17:04:20 GMT"
}
] | 2014-08-29T00:00:00 | [
[
"Hopkins",
"Walter",
"",
"ATLAS Collaboration"
]
] | TITLE: ATLAS upgrades for the next decades
ABSTRACT: After the successful LHC operation at the center-of-mass energies of 7 and 8
TeV in 2010-2012, plans are actively advancing for a series of upgrades of the
accelerator, culminating roughly ten years from now in the high-luminosity LHC
(HL-LHC) project, delivering of the order of five times the LHC nominal
instantaneous luminosity along with luminosity leveling. The final goal is to
extend the dataset from a few hundred fb$^{-1}$ to 3000 fb$^{-1}$ by around
2035 for ATLAS and CMS. In parallel, the experiments need to be kept in lockstep
with the accelerator to accommodate running beyond the nominal luminosity this
decade. Current planning in ATLAS envisions significant upgrades to the
detector during the consolidation of the LHC to reach full LHC energy and
further upgrades. The challenge of coping with the HL-LHC instantaneous and
integrated luminosity, along with the associated radiation levels, requires
further major changes to the ATLAS detector. The designs are developing rapidly
for a new all-silicon tracker, significant upgrades of the calorimeter and muon
systems, as well as improved triggers and data acquisition. This report
summarizes various improvements to the ATLAS detector required to cope with the
anticipated evolution of the LHC luminosity during this decade and the next.
| no_new_dataset | 0.939471 |
1207.1206 | Fariba Karimi Ms. | Fariba Karimi, Petter Holme | Threshold model of cascades in temporal networks | 7 pages, 5 figures, 2 tables | Physica A: Statistical Mechanics and its Applications.392.16
(2013): 3476-3483 | 10.1016/j.physa.2013.03.050 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Threshold models try to explain the consequences of social influence like the
spread of fads and opinions. Along with models of epidemics, they constitute a
major theoretical framework of social spreading processes. In threshold models
on static networks, an individual changes her state if a certain fraction of
her neighbors has done the same. When there are strong correlations in the
temporal aspects of contact patterns, it is useful to represent the system as a
temporal network. In such a system, not only contacts but also the time of the
contacts are represented explicitly. There is a consensus that bursty temporal
patterns slow down disease spreading. However, as we will see, this is not a
universal truth for threshold models. In this work, we propose an extension of
Watts' classic threshold model to temporal networks. We do this by assuming
that an agent is influenced by contacts which lie a certain time into the past.
I.e., the individuals are affected by contacts within a time window. In
addition to thresholds as the fraction of contacts, we also investigate the
number of contacts within the time window as a basis for influence. To
elucidate the model's behavior, we run the model on real and randomized
empirical contact datasets.
| [
{
"version": "v1",
"created": "Thu, 5 Jul 2012 09:49:51 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Sep 2012 15:19:39 GMT"
}
] | 2014-08-27T00:00:00 | [
[
"Karimi",
"Fariba",
""
],
[
"Holme",
"Petter",
""
]
] | TITLE: Threshold model of cascades in temporal networks
ABSTRACT: Threshold models try to explain the consequences of social influence like the
spread of fads and opinions. Along with models of epidemics, they constitute a
major theoretical framework of social spreading processes. In threshold models
on static networks, an individual changes her state if a certain fraction of
her neighbors has done the same. When there are strong correlations in the
temporal aspects of contact patterns, it is useful to represent the system as a
temporal network. In such a system, not only contacts but also the time of the
contacts are represented explicitly. There is a consensus that bursty temporal
patterns slow down disease spreading. However, as we will see, this is not a
universal truth for threshold models. In this work, we propose an extension of
Watts' classic threshold model to temporal networks. We do this by assuming
that an agent is influenced by contacts which lie a certain time into the past.
I.e., the individuals are affected by contacts within a time window. In
addition to thresholds as the fraction of contacts, we also investigate the
number of contacts within the time window as a basis for influence. To
elucidate the model's behavior, we run the model on real and randomized
empirical contact datasets.
| no_new_dataset | 0.945248 |
1403.0315 | Conrad Sanderson | Johanna Carvajal, Chris McCool, Conrad Sanderson | Summarisation of Short-Term and Long-Term Videos using Texture and
Colour | IEEE Winter Conference on Applications of Computer Vision (WACV),
2014 | null | 10.1109/WACV.2014.6836025 | null | cs.CV stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel approach to video summarisation that makes use of a
Bag-of-visual-Textures (BoT) approach. Two systems are proposed, one based
solely on the BoT approach and another which exploits both colour information
and BoT features. On 50 short-term videos from the Open Video Project we show
that our BoT and fusion systems both achieve state-of-the-art performance,
obtaining an average F-measure of 0.83 and 0.86 respectively, a relative
improvement of 9% and 13% when compared to the previous state-of-the-art. When
applied to a new underwater surveillance dataset containing 33 long-term
videos, the proposed system reduces the amount of footage by a factor of 27,
with only minor degradation in the information content. This order of magnitude
reduction in video data represents significant savings in terms of time and
potential labour cost when manually reviewing such footage.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2014 05:19:10 GMT"
}
] | 2014-08-27T00:00:00 | [
[
"Carvajal",
"Johanna",
""
],
[
"McCool",
"Chris",
""
],
[
"Sanderson",
"Conrad",
""
]
] | TITLE: Summarisation of Short-Term and Long-Term Videos using Texture and
Colour
ABSTRACT: We present a novel approach to video summarisation that makes use of a
Bag-of-visual-Textures (BoT) approach. Two systems are proposed, one based
solely on the BoT approach and another which exploits both colour information
and BoT features. On 50 short-term videos from the Open Video Project we show
that our BoT and fusion systems both achieve state-of-the-art performance,
obtaining an average F-measure of 0.83 and 0.86 respectively, a relative
improvement of 9% and 13% when compared to the previous state-of-the-art. When
applied to a new underwater surveillance dataset containing 33 long-term
videos, the proposed system reduces the amount of footage by a factor of 27,
with only minor degradation in the information content. This order of magnitude
reduction in video data represents significant savings in terms of time and
potential labour cost when manually reviewing such footage.
| new_dataset | 0.955068 |
1403.0320 | Conrad Sanderson | Shaokang Chen, Arnold Wiliem, Conrad Sanderson, Brian C. Lovell | Matching Image Sets via Adaptive Multi Convex Hull | IEEE Winter Conference on Applications of Computer Vision (WACV),
2014 | null | 10.1109/WACV.2014.6835985 | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional nearest points methods use all the samples in an image set to
construct a single convex or affine hull model for classification. However,
strong artificial features and noisy data may be generated from combinations of
training samples when significant intra-class variations and/or noise occur in
the image set. Existing multi-model approaches extract local models by
clustering each image set individually only once, with fixed clusters used for
matching with various image sets. This may not be optimal for discrimination,
as undesirable environmental conditions (e.g., illumination and pose variations)
may result in the two closest clusters representing different characteristics
of an object (e.g., frontal face being compared to non-frontal face). To address
the above problem, we propose a novel approach to enhance nearest points based
methods by integrating affine/convex hull classification with an adapted
multi-model approach. We first extract multiple local convex hulls from a query
image set via maximum margin clustering to diminish the artificial variations
and constrain the noise in local convex hulls. We then propose adaptive
reference clustering (ARC) to constrain the clustering of each gallery image
set by forcing the clusters to have resemblance to the clusters in the query
image set. By applying ARC, noisy clusters in the query set can be discarded.
Experiments on Honda, MoBo and ETH-80 datasets show that the proposed method
outperforms single model approaches and other recent techniques, such as Sparse
Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant
Analysis.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2014 06:19:45 GMT"
}
] | 2014-08-27T00:00:00 | [
[
"Chen",
"Shaokang",
""
],
[
"Wiliem",
"Arnold",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Matching Image Sets via Adaptive Multi Convex Hull
ABSTRACT: Traditional nearest points methods use all the samples in an image set to
construct a single convex or affine hull model for classification. However,
strong artificial features and noisy data may be generated from combinations of
training samples when significant intra-class variations and/or noise occur in
the image set. Existing multi-model approaches extract local models by
clustering each image set individually only once, with fixed clusters used for
matching with various image sets. This may not be optimal for discrimination,
as undesirable environmental conditions (e.g., illumination and pose variations)
may result in the two closest clusters representing different characteristics
of an object (e.g., frontal face being compared to non-frontal face). To address
the above problem, we propose a novel approach to enhance nearest points based
methods by integrating affine/convex hull classification with an adapted
multi-model approach. We first extract multiple local convex hulls from a query
image set via maximum margin clustering to diminish the artificial variations
and constrain the noise in local convex hulls. We then propose adaptive
reference clustering (ARC) to constrain the clustering of each gallery image
set by forcing the clusters to have resemblance to the clusters in the query
image set. By applying ARC, noisy clusters in the query set can be discarded.
Experiments on Honda, MoBo and ETH-80 datasets show that the proposed method
outperforms single model approaches and other recent techniques, such as Sparse
Approximated Nearest Points, Mutual Subspace Method and Manifold Discriminant
Analysis.
| no_new_dataset | 0.949248 |
1404.0333 | Jose Javier Ramasco | Maxime Lenormand, Miguel Picornell, Oliva G. Cantu-Ros, Antonia
Tugores, Thomas Louail, Ricardo Herranz, Marc Barthelemy, Enrique
Frias-Martinez and Jose J. Ramasco | Cross-checking different sources of mobility information | 11 pages, 9 figures, 1 appendix with 7 figures | PLoS ONE 9, e105184 (2014) | 10.1371/journal.pone.0105184 | null | physics.soc-ph cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The pervasive use of new mobile devices has allowed a better characterization
in space and time of human concentrations and mobility in general. Besides its
theoretical interest, describing mobility is of great importance for a number
of practical applications ranging from the forecast of disease spreading to the
design of new spaces in urban environments. While classical data sources, such
as surveys or census, have a limited level of geographical resolution (e.g.,
districts, municipalities, counties are typically used) or are restricted to
generic workdays or weekends, the data coming from mobile devices can be
precisely located both in time and space. Most previous works have used a
single data source to study human mobility patterns. Here we perform instead a
cross-check analysis by comparing results obtained with data collected from
three different sources: Twitter, census and cell phones. The analysis is
focused on the urban areas of Barcelona and Madrid, for which data of the three
types is available. We assess the correlation between the datasets on different
aspects: the spatial distribution of people concentration, the temporal
evolution of people density and the mobility patterns of individuals. Our
results show that the three data sources are providing comparable information.
Even though the representativeness of Twitter geolocated data is lower than
that of mobile phone and census data, the correlations between the population
density profiles and mobility patterns detected by the three datasets are close
to one in a grid with cells of 2x2 and 1x1 square kilometers. This level of
correlation supports the feasibility of interchanging the three data sources at
the spatio-temporal scales considered.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2014 18:05:12 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Aug 2014 14:18:50 GMT"
}
] | 2014-08-27T00:00:00 | [
[
"Lenormand",
"Maxime",
""
],
[
"Picornell",
"Miguel",
""
],
[
"Cantu-Ros",
"Oliva G.",
""
],
[
"Tugores",
"Antonia",
""
],
[
"Louail",
"Thomas",
""
],
[
"Herranz",
"Ricardo",
""
],
[
"Barthelemy",
"Marc",
""
],
[
"Frias-Martinez",
"Enrique",
""
],
[
"Ramasco",
"Jose J.",
""
]
] | TITLE: Cross-checking different sources of mobility information
ABSTRACT: The pervasive use of new mobile devices has allowed a better characterization
in space and time of human concentrations and mobility in general. Besides its
theoretical interest, describing mobility is of great importance for a number
of practical applications ranging from the forecast of disease spreading to the
design of new spaces in urban environments. While classical data sources, such
as surveys or census, have a limited level of geographical resolution (e.g.,
districts, municipalities, counties are typically used) or are restricted to
generic workdays or weekends, the data coming from mobile devices can be
precisely located both in time and space. Most previous works have used a
single data source to study human mobility patterns. Here we perform instead a
cross-check analysis by comparing results obtained with data collected from
three different sources: Twitter, census and cell phones. The analysis is
focused on the urban areas of Barcelona and Madrid, for which data of the three
types is available. We assess the correlation between the datasets on different
aspects: the spatial distribution of people concentration, the temporal
evolution of people density and the mobility patterns of individuals. Our
results show that the three data sources are providing comparable information.
Even though the representativeness of Twitter geolocated data is lower than
that of mobile phone and census data, the correlations between the population
density profiles and mobility patterns detected by the three datasets are close
to one in a grid with cells of 2x2 and 1x1 square kilometers. This level of
correlation supports the feasibility of interchanging the three data sources at
the spatio-temporal scales considered.
| no_new_dataset | 0.939526 |
1407.6125 | Nicolo Colombo | Nicol\`o Colombo and Nikos Vlassis | Spectral Sequence Motif Discovery | 20 pages, 3 figures, 1 table | null | null | null | q-bio.QM cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequence discovery tools play a central role in several fields of
computational biology. In the framework of Transcription Factor binding
studies, motif finding algorithms of increasingly high performance are required
to process the big datasets produced by new high-throughput sequencing
technologies. Most existing algorithms are computationally demanding and often
cannot support the large size of new experimental data. We present a new motif
discovery algorithm that is built on a recent machine learning technique,
referred to as Method of Moments. Based on spectral decompositions, this method
is robust under model misspecification and is not prone to locally optimal
solutions. We obtain an algorithm that is extremely fast and designed for the
analysis of big sequencing data. In a few minutes, we can process datasets of
hundreds of thousands of sequences and extract motif profiles that match those
computed by various state-of-the-art algorithms.
| [
{
"version": "v1",
"created": "Wed, 23 Jul 2014 08:07:50 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Aug 2014 18:33:45 GMT"
}
] | 2014-08-27T00:00:00 | [
[
"Colombo",
"Nicolò",
""
],
[
"Vlassis",
"Nikos",
""
]
] | TITLE: Spectral Sequence Motif Discovery
ABSTRACT: Sequence discovery tools play a central role in several fields of
computational biology. In the framework of Transcription Factor binding
studies, motif finding algorithms of increasingly high performance are required
to process the big datasets produced by new high-throughput sequencing
technologies. Most existing algorithms are computationally demanding and often
cannot support the large size of new experimental data. We present a new motif
discovery algorithm that is built on a recent machine learning technique,
referred to as Method of Moments. Based on spectral decompositions, this method
is robust under model misspecification and is not prone to locally optimal
solutions. We obtain an algorithm that is extremely fast and designed for the
analysis of big sequencing data. In a few minutes, we can process datasets of
hundreds of thousands of sequences and extract motif profiles that match those
computed by various state-of-the-art algorithms.
| no_new_dataset | 0.943138 |
1408.0467 | Yasuo Tabei | Yoshimasa Takabatake, Yasuo Tabei, Hiroshi Sakamoto | Online Pattern Matching for String Edit Distance with Moves | This paper has been accepted to the 21st edition of the International
Symposium on String Processing and Information Retrieval (SPIRE2014) | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Edit distance with moves (EDM) is a string-to-string distance measure that
includes substring moves in addition to ordinal editing operations to turn one
string to the other. Although optimizing EDM is intractable, it has many
applications especially in error detections. Edit sensitive parsing (ESP) is an
efficient parsing algorithm that guarantees an upper bound of parsing
discrepancies between different appearances of the same substrings in a string.
ESP can be used for computing an approximate EDM as the L1 distance between
characteristic vectors built by node labels in parsing trees. However, ESP is
not applicable to streaming text data where the whole text is unknown in
advance. We present an online ESP (OESP) that enables online pattern
matching for EDM. OESP builds a parse tree for a streaming text and computes
the L1 distance between characteristic vectors in an online manner. For the
space-efficient computation of EDM, OESP directly encodes the parse tree into a
succinct representation by leveraging the idea behind recent results of a
dynamic succinct tree. We experimentally test OESP on the ability to compute
EDM in an online manner on benchmark datasets, and we show OESP's efficiency.
| [
{
"version": "v1",
"created": "Sun, 3 Aug 2014 07:48:52 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Aug 2014 05:56:42 GMT"
}
] | 2014-08-27T00:00:00 | [
[
"Takabatake",
"Yoshimasa",
""
],
[
"Tabei",
"Yasuo",
""
],
[
"Sakamoto",
"Hiroshi",
""
]
] | TITLE: Online Pattern Matching for String Edit Distance with Moves
ABSTRACT: Edit distance with moves (EDM) is a string-to-string distance measure that
includes substring moves in addition to ordinal editing operations to turn one
string to the other. Although optimizing EDM is intractable, it has many
applications especially in error detections. Edit sensitive parsing (ESP) is an
efficient parsing algorithm that guarantees an upper bound of parsing
discrepancies between different appearances of the same substrings in a string.
ESP can be used for computing an approximate EDM as the L1 distance between
characteristic vectors built by node labels in parsing trees. However, ESP is
not applicable to streaming text data where the whole text is unknown in
advance. We present an online ESP (OESP) that enables online pattern
matching for EDM. OESP builds a parse tree for a streaming text and computes
the L1 distance between characteristic vectors in an online manner. For the
space-efficient computation of EDM, OESP directly encodes the parse tree into a
succinct representation by leveraging the idea behind recent results of a
dynamic succinct tree. We experimentally test OESP on the ability to compute
EDM in an online manner on benchmark datasets, and we show OESP's efficiency.
| no_new_dataset | 0.941439 |
1408.5601 | Jianwei Yang | Jianwei Yang, Zhen Lei, Stan Z. Li | Learn Convolutional Neural Network for Face Anti-Spoofing | 8 pages, 9 figures, 7 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Though having achieved some progress, the hand-crafted texture features,
e.g., LBP [23], LBP-TOP [11] are still unable to capture the most
discriminative cues between genuine and fake faces. In this paper, instead of
designing feature by ourselves, we rely on the deep convolutional neural
network (CNN) to learn features of high discriminative ability in a supervised
manner. Combined with some data pre-processing, the face anti-spoofing
performance improves drastically. In the experiments, over 70% relative
decrease of Half Total Error Rate (HTER) is achieved on two challenging
datasets, CASIA [36] and REPLAY-ATTACK [7] compared with the state-of-the-art.
Meanwhile, the experimental results from inter-tests between two datasets
indicate that CNN can obtain features with better generalization ability. Moreover,
the nets trained using combined data from two datasets have fewer biases between
two datasets.
| [
{
"version": "v1",
"created": "Sun, 24 Aug 2014 13:08:19 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Aug 2014 02:45:55 GMT"
}
] | 2014-08-27T00:00:00 | [
[
"Yang",
"Jianwei",
""
],
[
"Lei",
"Zhen",
""
],
[
"Li",
"Stan Z.",
""
]
] | TITLE: Learn Convolutional Neural Network for Face Anti-Spoofing
ABSTRACT: Though having achieved some progress, the hand-crafted texture features,
e.g., LBP [23], LBP-TOP [11] are still unable to capture the most
discriminative cues between genuine and fake faces. In this paper, instead of
designing feature by ourselves, we rely on the deep convolutional neural
network (CNN) to learn features of high discriminative ability in a supervised
manner. Combined with some data pre-processing, the face anti-spoofing
performance improves drastically. In the experiments, over 70% relative
decrease of Half Total Error Rate (HTER) is achieved on two challenging
datasets, CASIA [36] and REPLAY-ATTACK [7] compared with the state-of-the-art.
Meanwhile, the experimental results from inter-tests between two datasets
indicate that CNN can obtain features with better generalization ability. Moreover,
the nets trained using combined data from two datasets have fewer biases between
two datasets.
| no_new_dataset | 0.948058 |
1202.3936 | Didier Sornette | Ryohei Hisano and Didier Sornette | On the distribution of time-to-proof of mathematical conjectures | 10 pages + 6 figures | The Mathematical Intelligencer 35 (4), 10-17 (2013) (pp.1-18) | 10.1007/s00283-013-9383-7 | null | physics.soc-ph math-ph math.MP physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What is the productivity of Science? Can we measure an evolution of the
production of mathematicians over history? Can we predict the waiting time till
the proof of a challenging conjecture such as the P-versus-NP problem?
Motivated by these questions, we revisit a suggestion published recently and
debated in the "New Scientist" that the historical distribution of
time-to-proof's, i.e., of waiting times between formulation of a mathematical
conjecture and its proof, can be quantified and gives meaningful insights in
the future development of still open conjectures. We find however evidence that
the mathematical process of creation is too much non-stationary, with too
little data and constraints, to allow for a meaningful conclusion. In
particular, the approximate unsteady exponential growth of human population,
and arguably that of mathematicians, essentially hides the true distribution.
Another issue is the incompleteness of the dataset available. In conclusion we
cannot really reject the simplest model of an exponential rate of conjecture
proof with a rate of 0.01/year for the dataset that we have studied,
translating into an average waiting time to proof of 100 years. We hope that
the presented methodology, combining the mathematics of recurrent processes,
linking proved and still open conjectures, with different empirical
constraints, will be useful for other similar investigations probing the
productivity associated with mankind growth and creativity.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2012 15:31:58 GMT"
}
] | 2014-08-26T00:00:00 | [
[
"Hisano",
"Ryohei",
""
],
[
"Sornette",
"Didier",
""
]
] | TITLE: On the distribution of time-to-proof of mathematical conjectures
ABSTRACT: What is the productivity of Science? Can we measure an evolution of the
production of mathematicians over history? Can we predict the waiting time till
the proof of a challenging conjecture such as the P-versus-NP problem?
Motivated by these questions, we revisit a suggestion published recently and
debated in the "New Scientist" that the historical distribution of
time-to-proof's, i.e., of waiting times between formulation of a mathematical
conjecture and its proof, can be quantified and gives meaningful insights in
the future development of still open conjectures. We find however evidence that
the mathematical process of creation is too much non-stationary, with too
little data and constraints, to allow for a meaningful conclusion. In
particular, the approximate unsteady exponential growth of human population,
and arguably that of mathematicians, essentially hides the true distribution.
Another issue is the incompleteness of the dataset available. In conclusion we
cannot really reject the simplest model of an exponential rate of conjecture
proof with a rate of 0.01/year for the dataset that we have studied,
translating into an average waiting time to proof of 100 years. We hope that
the presented methodology, combining the mathematics of recurrent processes,
linking proved and still open conjectures, with different empirical
constraints, will be useful for other similar investigations probing the
productivity associated with mankind growth and creativity.
| no_new_dataset | 0.930774 |
1207.2043 | Timothy Dubois | Alex Skvortsov, Milan Jamriska and Timothy C DuBois | Tracer dispersion in the turbulent convective layer | 4 pages, 2 figures, 1 table | J. Atmos. Sci., 70, 4112-4121 (2013) | 10.1175/JAS-D-12-0268.1 | null | nlin.CD physics.ao-ph physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experimental results for passive tracer dispersion in the turbulent surface
layer under convective conditions are presented. In this case, the dispersion
of tracer particles is determined by the interplay of two mechanisms: buoyancy
and advection. In the atmospheric surface layer under stable stratification the
buoyancy mechanism dominates when the distance from the ground is greater than
the Monin-Obukhov length, resulting in a different exponent in the scaling law
of relative separation of lagrangian particles (deviation from the celebrated
Richardson's law). This conclusion is supported by our extensive atmospheric
observations. Exit-time statistics are derived from our experimental dataset,
which demonstrates a significant difference between tracer dispersion in the
convective and neutrally stratified surface layers.
| [
{
"version": "v1",
"created": "Mon, 9 Jul 2012 13:47:13 GMT"
}
] | 2014-08-26T00:00:00 | [
[
"Skvortsov",
"Alex",
""
],
[
"Jamriska",
"Milan",
""
],
[
"DuBois",
"Timothy C",
""
]
] | TITLE: Tracer dispersion in the turbulent convective layer
ABSTRACT: Experimental results for passive tracer dispersion in the turbulent surface
layer under convective conditions are presented. In this case, the dispersion
of tracer particles is determined by the interplay of two mechanisms: buoyancy
and advection. In the atmospheric surface layer under stable stratification the
buoyancy mechanism dominates when the distance from the ground is greater than
the Monin-Obukhov length, resulting in a different exponent in the scaling law
of relative separation of lagrangian particles (deviation from the celebrated
Richardson's law). This conclusion is supported by our extensive atmospheric
observations. Exit-time statistics are derived from our experimental dataset,
which demonstrates a significant difference between tracer dispersion in the
convective and neutrally stratified surface layers.
| new_dataset | 0.80329 |
1403.2048 | Andrzej Cichocki | Andrzej Cichocki | Era of Big Data Processing: A New Approach via Tensor Networks and
Tensor Decompositions | Part of this work was presented on the International Workshop on
Smart Info-Media Systems in Asia,(invited talk - SISA-2013) Sept.30--Oct.2,
2013, Nagoya, JAPAN | null | null | null | cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many problems in computational neuroscience, neuroinformatics, pattern/image
recognition, signal processing and machine learning generate massive amounts of
multidimensional data with multiple aspects and high dimensionality. Tensors
(i.e., multi-way arrays) provide often a natural and compact representation for
such massive multidimensional data via suitable low-rank approximations. Big
data analytics require novel technologies to efficiently process huge datasets
within tolerable elapsed times. Such a new emerging technology for
multidimensional big data is a multiway analysis via tensor networks (TNs) and
tensor decompositions (TDs) which represent tensors by sets of factor
(component) matrices and lower-order (core) tensors. Dynamic tensor analysis
allows us to discover meaningful hidden structures of complex data and to
perform generalizations by capturing multi-linear and multi-aspect
relationships. We will discuss some fundamental TN models, their mathematical
and graphical descriptions and associated learning algorithms for large-scale
TDs and TNs, with many potential applications including: Anomaly detection,
feature extraction, classification, cluster analysis, data fusion and
integration, pattern recognition, predictive modeling, regression, time series
analysis and multiway component analysis.
Keywords: Large-scale HOSVD, Tensor decompositions, CPD, Tucker models,
Hierarchical Tucker (HT) decomposition, low-rank tensor approximations (LRA),
Tensorization/Quantization, tensor train (TT/QTT) - Matrix Product States
(MPS), Matrix Product Operator (MPO), DMRG, Strong Kronecker Product (SKP).
| [
{
"version": "v1",
"created": "Sun, 9 Mar 2014 10:45:18 GMT"
},
{
"version": "v2",
"created": "Sat, 3 May 2014 13:41:52 GMT"
},
{
"version": "v3",
"created": "Fri, 6 Jun 2014 08:53:57 GMT"
},
{
"version": "v4",
"created": "Sun, 24 Aug 2014 11:04:32 GMT"
}
] | 2014-08-26T00:00:00 | [
[
"Cichocki",
"Andrzej",
""
]
] | TITLE: Era of Big Data Processing: A New Approach via Tensor Networks and
Tensor Decompositions
ABSTRACT: Many problems in computational neuroscience, neuroinformatics, pattern/image
recognition, signal processing and machine learning generate massive amounts of
multidimensional data with multiple aspects and high dimensionality. Tensors
(i.e., multi-way arrays) provide often a natural and compact representation for
such massive multidimensional data via suitable low-rank approximations. Big
data analytics require novel technologies to efficiently process huge datasets
within tolerable elapsed times. Such a new emerging technology for
multidimensional big data is a multiway analysis via tensor networks (TNs) and
tensor decompositions (TDs) which represent tensors by sets of factor
(component) matrices and lower-order (core) tensors. Dynamic tensor analysis
allows us to discover meaningful hidden structures of complex data and to
perform generalizations by capturing multi-linear and multi-aspect
relationships. We will discuss some fundamental TN models, their mathematical
and graphical descriptions and associated learning algorithms for large-scale
TDs and TNs, with many potential applications including: Anomaly detection,
feature extraction, classification, cluster analysis, data fusion and
integration, pattern recognition, predictive modeling, regression, time series
analysis and multiway component analysis.
Keywords: Large-scale HOSVD, Tensor decompositions, CPD, Tucker models,
Hierarchical Tucker (HT) decomposition, low-rank tensor approximations (LRA),
Tensorization/Quantization, tensor train (TT/QTT) - Matrix Product States
(MPS), Matrix Product Operator (MPO), DMRG, Strong Kronecker Product (SKP).
| no_new_dataset | 0.947137 |
1403.3785 | Rosario N. Mantegna | Ming-Xia Li, Vasyl Palchykov, Zhi-Qiang Jiang, Kimmo Kaski, Janos
Kert\'esz, Salvatore Miccich\`e, Michele Tumminello, Wei-Xing Zhou and
Rosario N. Mantegna | Statistically validated mobile communication networks: Evolution of
motifs in European and Chinese data | 19 pages, 8 figures, 5 tables | New J. Phys. 16 (2014) 083038 | 10.1088/1367-2630/16/8/083038 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Big data open up unprecedented opportunities to investigate complex systems
including the society. In particular, communication data serve as major sources
for computational social sciences but they have to be cleaned and filtered as
they may contain spurious information due to recording errors as well as
interactions, like commercial and marketing activities, not directly related to
the social network. The network constructed from communication data can only be
considered as a proxy for the network of social relationships. Here we apply a
systematic method, based on multiple hypothesis testing, to statistically
validate the links and then construct the corresponding Bonferroni network,
generalized to the directed case. We study two large datasets of mobile phone
records, one from Europe and the other from China. For both datasets we compare
the raw data networks with the corresponding Bonferroni networks and point out
significant differences in the structures and in the basic network measures. We
show evidence that the Bonferroni network provides a better proxy for the
network of social interactions than the original one. By using the filtered
networks we investigated the statistics and temporal evolution of small
directed 3-motifs and conclude that closed communication triads have a
formation time-scale, which is quite fast and typically intraday. We also find
that open communication triads preferentially evolve to other open triads with
a higher fraction of reciprocated calls. These stylized facts were observed for
both datasets.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2014 10:42:14 GMT"
}
] | 2014-08-26T00:00:00 | [
[
"Li",
"Ming-Xia",
""
],
[
"Palchykov",
"Vasyl",
""
],
[
"Jiang",
"Zhi-Qiang",
""
],
[
"Kaski",
"Kimmo",
""
],
[
"Kertész",
"Janos",
""
],
[
"Miccichè",
"Salvatore",
""
],
[
"Tumminello",
"Michele",
""
],
[
"Zhou",
"Wei-Xing",
""
],
[
"Mantegna",
"Rosario N.",
""
]
] | TITLE: Statistically validated mobile communication networks: Evolution of
motifs in European and Chinese data
ABSTRACT: Big data open up unprecedented opportunities to investigate complex systems
including the society. In particular, communication data serve as major sources
for computational social sciences but they have to be cleaned and filtered as
they may contain spurious information due to recording errors as well as
interactions, like commercial and marketing activities, not directly related to
the social network. The network constructed from communication data can only be
considered as a proxy for the network of social relationships. Here we apply a
systematic method, based on multiple hypothesis testing, to statistically
validate the links and then construct the corresponding Bonferroni network,
generalized to the directed case. We study two large datasets of mobile phone
records, one from Europe and the other from China. For both datasets we compare
the raw data networks with the corresponding Bonferroni networks and point out
significant differences in the structures and in the basic network measures. We
show evidence that the Bonferroni network provides a better proxy for the
network of social interactions than the original one. By using the filtered
networks we investigated the statistics and temporal evolution of small
directed 3-motifs and conclude that closed communication triads have a
formation time-scale, which is quite fast and typically intraday. We also find
that open communication triads preferentially evolve to other open triads with
a higher fraction of reciprocated calls. These stylized facts were observed for
both datasets.
| no_new_dataset | 0.947527 |
1408.5530 | Dan He | Dan He, Zhanyong Wang, Laxmi Parida, Eleazar Eskin | IPED2: Inheritance Path based Pedigree Reconstruction Algorithm for
Complicated Pedigrees | 9 pages | null | null | null | cs.DS q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstruction of family trees, or pedigree reconstruction, for a group of
individuals is a fundamental problem in genetics. The problem is known to be
NP-hard even for datasets known to only contain siblings. Some recent methods
have been developed to accurately and efficiently reconstruct pedigrees. These
methods, however, still consider relatively simple pedigrees, for example, they
are not able to handle half-sibling situations where a pair of individuals only
share one parent. In this work, we propose an efficient method, IPED2, based on
our previous work, which specifically targets reconstruction of complicated
pedigrees that include half-siblings. We note that the presence of
half-siblings makes the reconstruction problem significantly more challenging
which is why previous methods exclude the possibility of half-siblings. We
proposed a novel model as well as an efficient graph algorithm and experiments
show that our algorithm achieves relatively accurate reconstruction. To our
knowledge, this is the first method that is able to handle pedigree
reconstruction based on genotype data only when half-siblings exist in any
generation of the pedigree.
| [
{
"version": "v1",
"created": "Sat, 23 Aug 2014 21:01:50 GMT"
}
] | 2014-08-26T00:00:00 | [
[
"He",
"Dan",
""
],
[
"Wang",
"Zhanyong",
""
],
[
"Parida",
"Laxmi",
""
],
[
"Eskin",
"Eleazar",
""
]
] | TITLE: IPED2: Inheritance Path based Pedigree Reconstruction Algorithm for
Complicated Pedigrees
ABSTRACT: Reconstruction of family trees, or pedigree reconstruction, for a group of
individuals is a fundamental problem in genetics. The problem is known to be
NP-hard even for datasets known to only contain siblings. Some recent methods
have been developed to accurately and efficiently reconstruct pedigrees. These
methods, however, still consider relatively simple pedigrees, for example, they
are not able to handle half-sibling situations where a pair of individuals only
share one parent. In this work, we propose an efficient method, IPED2, based on
our previous work, which specifically targets reconstruction of complicated
pedigrees that include half-siblings. We note that the presence of
half-siblings makes the reconstruction problem significantly more challenging
which is why previous methods exclude the possibility of half-siblings. We
proposed a novel model as well as an efficient graph algorithm and experiments
show that our algorithm achieves relatively accurate reconstruction. To our
knowledge, this is the first method that is able to handle pedigree
reconstruction based on genotype data only when half-siblings exist in any
generation of the pedigree.
| no_new_dataset | 0.950411 |
1408.5539 | Mehmet Kuzu | Mehmet Kuzu, Mohammad Saiful Islam, Murat Kantarcioglu | A Distributed Framework for Scalable Search over Encrypted Documents | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, huge amounts of documents are increasingly transferred to the remote
servers due to the appealing features of cloud computing. On the other hand,
privacy and security of the sensitive information in untrusted cloud
environment is a big concern. To alleviate such concerns, encryption of
sensitive data before its transfer to the cloud has become an important risk
mitigation option. Encrypted storage provides protection at the expense of a
significant increase in the data management complexity. For effective
management, it is critical to provide efficient selective document retrieval
capability on the encrypted collection. In fact, a considerable number of
searchable symmetric encryption schemes have been designed in the literature to
achieve this task. However, with the emergence of big data everywhere,
available approaches are insufficient to address some crucial real-world
problems such as scalability.
In this study, we focus on practical aspects of a secure keyword search
mechanism over encrypted data on a real cloud infrastructure. First, we propose
a provably secure distributed index along with a parallelizable retrieval
technique that can easily scale to big data. Second, we integrate authorization
into the search scheme to limit the information leakage in multi-user setting
where users are allowed to access only particular documents. Third, we offer
efficient updates on the distributed secure index. In addition, we conduct
extensive empirical analysis on a real dataset to illustrate the efficiency of
the proposed practical techniques.
| [
{
"version": "v1",
"created": "Sun, 24 Aug 2014 00:38:11 GMT"
}
] | 2014-08-26T00:00:00 | [
[
"Kuzu",
"Mehmet",
""
],
[
"Islam",
"Mohammad Saiful",
""
],
[
"Kantarcioglu",
"Murat",
""
]
] | TITLE: A Distributed Framework for Scalable Search over Encrypted Documents
ABSTRACT: Nowadays, huge amounts of documents are increasingly transferred to the remote
servers due to the appealing features of cloud computing. On the other hand,
privacy and security of the sensitive information in untrusted cloud
environment is a big concern. To alleviate such concerns, encryption of
sensitive data before its transfer to the cloud has become an important risk
mitigation option. Encrypted storage provides protection at the expense of a
significant increase in the data management complexity. For effective
management, it is critical to provide efficient selective document retrieval
capability on the encrypted collection. In fact, a considerable number of
searchable symmetric encryption schemes have been designed in the literature to
achieve this task. However, with the emergence of big data everywhere,
available approaches are insufficient to address some crucial real-world
problems such as scalability.
In this study, we focus on practical aspects of a secure keyword search
mechanism over encrypted data on a real cloud infrastructure. First, we propose
a provably secure distributed index along with a parallelizable retrieval
technique that can easily scale to big data. Second, we integrate authorization
into the search scheme to limit the information leakage in multi-user setting
where users are allowed to access only particular documents. Third, we offer
efficient updates on the distributed secure index. In addition, we conduct
extensive empirical analysis on a real dataset to illustrate the efficiency of
the proposed practical techniques.
| no_new_dataset | 0.943034 |
1408.5573 | Paolo Missier | Matias Garcia-Constantino and Paolo Missier and Phil Blythe and Amy
Weihong Guo | Measuring the impact of cognitive distractions on driving performance
using time series analysis | IEEE ITS conference, 2014 | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using current sensing technology, a wealth of data on driving sessions is
potentially available through a combination of vehicle sensors and drivers'
physiology sensors (heart rate, breathing rate, skin temperature, etc.). Our
hypothesis is that it should be possible to exploit the combination of time
series produced by such multiple sensors during a driving session, in order to
(i) learn models of normal driving behaviour, and (ii) use such models to
detect important and potentially dangerous deviations from the norm in
real-time, and thus enable the generation of appropriate alerts. Crucially, we
believe that such models and interventions should and can be personalised and
tailor-made for each individual driver. As an initial step towards this goal,
in this paper we present techniques for assessing the impact of cognitive
distraction on drivers, based on simple time series analysis. We have tested
our method on a rich dataset of driving sessions, carried out in a professional
simulator, involving a panel of volunteer drivers. Each session included a
different type of cognitive distraction, and resulted in multiple time series
from a variety of on-board sensors as well as sensors worn by the driver.
Crucially, each driver also recorded an initial session with no distractions.
In our model, such an initial session provides the baseline time series that make
it possible to quantitatively assess driver performance under distraction
conditions.
| [
{
"version": "v1",
"created": "Sun, 24 Aug 2014 07:36:21 GMT"
}
] | 2014-08-26T00:00:00 | [
[
"Garcia-Constantino",
"Matias",
""
],
[
"Missier",
"Paolo",
""
],
[
"Guo",
"Phil Blytheand Amy Weihong",
""
]
] | TITLE: Measuring the impact of cognitive distractions on driving performance
using time series analysis
ABSTRACT: Using current sensing technology, a wealth of data on driving sessions is
potentially available through a combination of vehicle sensors and drivers'
physiology sensors (heart rate, breathing rate, skin temperature, etc.). Our
hypothesis is that it should be possible to exploit the combination of time
series produced by such multiple sensors during a driving session, in order to
(i) learn models of normal driving behaviour, and (ii) use such models to
detect important and potentially dangerous deviations from the norm in
real-time, and thus enable the generation of appropriate alerts. Crucially, we
believe that such models and interventions should and can be personalised and
tailor-made for each individual driver. As an initial step towards this goal,
in this paper we present techniques for assessing the impact of cognitive
distraction on drivers, based on simple time series analysis. We have tested
our method on a rich dataset of driving sessions, carried out in a professional
simulator, involving a panel of volunteer drivers. Each session included a
different type of cognitive distraction, and resulted in multiple time series
from a variety of on-board sensors as well as sensors worn by the driver.
Crucially, each driver also recorded an initial session with no distractions.
In our model, such an initial session provides the baseline time series that make
it possible to quantitatively assess driver performance under distraction
conditions.
| no_new_dataset | 0.937669 |
1408.5777 | Saba Ahsan | Saba Ahsan, Varun Singh and J\"org Ott | Characterizing Internet Video for Large-scale Active Measurements | 15 pages, 18 figures | null | null | null | cs.MM cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The availability of high definition video content on the web has brought
about a significant change in the characteristics of Internet video, but not
many studies on characterizing video have been done after this change. Video
characteristics such as video length, format, target bit rate, and resolution
provide valuable input to design Adaptive Bit Rate (ABR) algorithms, sizing
playout buffers in Dynamic Adaptive HTTP streaming (DASH) players, model the
variability in video frame sizes, etc. This paper presents datasets collected
in 2013 and 2014 that contain over 130,000 videos from YouTube's most viewed
(or most popular) video charts in 58 countries. We describe the basic
characteristics of the videos on YouTube for each category, format, video
length, file size, and data rate variation, observing that video length and
file size fit a log normal distribution. We show that three minutes of a video
suffice to represent its instant data rate fluctuation and that we can infer
data rate characteristics of different video resolutions from a single given
one. Based on our findings, we design active measurements for measuring the
performance of Internet video.
| [
{
"version": "v1",
"created": "Thu, 7 Aug 2014 16:38:25 GMT"
}
] | 2014-08-26T00:00:00 | [
[
"Ahsan",
"Saba",
""
],
[
"Singh",
"Varun",
""
],
[
"Ott",
"Jörg",
""
]
] | TITLE: Characterizing Internet Video for Large-scale Active Measurements
ABSTRACT: The availability of high definition video content on the web has brought
about a significant change in the characteristics of Internet video, but not
many studies on characterizing video have been done after this change. Video
characteristics such as video length, format, target bit rate, and resolution
provide valuable input to design Adaptive Bit Rate (ABR) algorithms, sizing
playout buffers in Dynamic Adaptive HTTP streaming (DASH) players, model the
variability in video frame sizes, etc. This paper presents datasets collected
in 2013 and 2014 that contain over 130,000 videos from YouTube's most viewed
(or most popular) video charts in 58 countries. We describe the basic
characteristics of the videos on YouTube for each category, format, video
length, file size, and data rate variation, observing that video length and
file size fit a log normal distribution. We show that three minutes of a video
suffice to represent its instant data rate fluctuation and that we can infer
data rate characteristics of different video resolutions from a single given
one. Based on our findings, we design active measurements for measuring the
performance of Internet video.
| new_dataset | 0.839273 |