Column schema: id (string, 9-16 chars); submitter (string, 3-64 chars, nullable); authors (string, 5-6.63k chars); title (string, 7-245 chars); comments (string, 1-482 chars, nullable); journal-ref (string, 4-382 chars, nullable); doi (string, 9-151 chars, nullable); report-no (string, 984 distinct values); categories (string, 5-108 chars); license (string, 9 distinct values); abstract (string, 83-3.41k chars); versions (list, 1-20 entries); update_date (timestamp[s], 2007-05-23 to 2025-04-11); authors_parsed (sequence, 1-427 entries); prompt (string, 166-3.49k chars); label (string, 2 classes); prob (float64, 0.5-0.98).

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
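Rows following this schema can be loaded and filtered programmatically. The sketch below is minimal and illustrative: the file name `arxiv_dataset_labels.jsonl` is a placeholder rather than an artifact referenced by this dump, and it assumes one JSON object per line keyed by the column names above.

```python
import pandas as pd

# Placeholder file name; one JSON object per line, keyed by the columns above.
df = pd.read_json("arxiv_dataset_labels.jsonl", lines=True)

# Distribution of the classification label and its confidence scores.
print(df["label"].value_counts())
print(df["prob"].describe())

# Rows predicted, with high confidence, to introduce a new dataset.
new_ds = df[(df["label"] == "new_dataset") & (df["prob"] >= 0.9)]
print(new_ds[["id", "title", "prob"]].head())
```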
1402.3926 | Hideitsu Hino | Toshiyuki Kato, Hideitsu Hino, and Noboru Murata | Sparse Coding Approach for Multi-Frame Image Super Resolution | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An image super-resolution method from multiple observations of low-resolution
images is proposed. The method is based on sub-pixel accuracy block matching
for estimating relative displacements of observed images, and sparse signal
representation for estimating the corresponding high-resolution image. Relative
displacements of small patches of observed low-resolution images are accurately
estimated by a computationally efficient block matching method. Since the
estimated displacements are also regarded as a warping component of the image
degradation process, the matching results are directly utilized to generate a
low-resolution dictionary for sparse image representation. The matching scores
of the block matching are used to select a subset of low-resolution patches for
reconstructing a high-resolution patch, that is, an adaptive selection of
informative low-resolution images is realized. When there is only one
low-resolution image, the proposed method works as a single-frame
super-resolution method. The proposed method is shown to perform comparably to
or better than conventional single- and multi-frame super-resolution methods
through experiments using various real-world datasets.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2014 08:23:35 GMT"
}
] | 2014-02-18T00:00:00 | [
[
"Kato",
"Toshiyuki",
""
],
[
"Hino",
"Hideitsu",
""
],
[
"Murata",
"Noboru",
""
]
] | TITLE: Sparse Coding Approach for Multi-Frame Image Super Resolution
ABSTRACT: An image super-resolution method from multiple observations of low-resolution
images is proposed. The method is based on sub-pixel accuracy block matching
for estimating relative displacements of observed images, and sparse signal
representation for estimating the corresponding high-resolution image. Relative
displacements of small patches of observed low-resolution images are accurately
estimated by a computationally efficient block matching method. Since the
estimated displacements are also regarded as a warping component of the image
degradation process, the matching results are directly utilized to generate a
low-resolution dictionary for sparse image representation. The matching scores
of the block matching are used to select a subset of low-resolution patches for
reconstructing a high-resolution patch, that is, an adaptive selection of
informative low-resolution images is realized. When there is only one
low-resolution image, the proposed method works as a single-frame
super-resolution method. The proposed method is shown to perform comparably to
or better than conventional single- and multi-frame super-resolution methods
through experiments using various real-world datasets.
| no_new_dataset | 0.949623 |
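The record above couples a low-resolution and a high-resolution dictionary through a shared sparse code. The sketch below shows only that generic coupled-dictionary sparse-coding step using scikit-learn's OMP; the block-matching construction of the dictionaries (the paper's actual contribution) is not reproduced, and the array shapes and names are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def reconstruct_hr_patch(D_low, D_high, y_low, n_nonzero=5):
    """Generic coupled-dictionary sparse coding step (illustrative only).
    D_low  : (low_dim, n_atoms)  low-resolution dictionary, one atom per column
    D_high : (high_dim, n_atoms) coupled high-resolution dictionary
    y_low  : (low_dim,)          observed low-resolution patch
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D_low, y_low)          # sparse code: y_low ~ D_low @ code
    code = omp.coef_
    return D_high @ code           # transfer the same code to the HR dictionary
```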
1402.4033 | Erheng Zhong | Erheng Zhong, Evan Wei Xiang, Wei Fan, Nathan Nan Liu, Qiang Yang | Friendship Prediction in Composite Social Networks | 10 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Friendship prediction is an important task in social network analysis (SNA).
It can help users identify friends and improve their level of activity. Most
previous approaches predict users' friendship based on their historical
records, such as their existing friendship, social interactions, etc. However,
in reality, most users have limited friends in a single network, and the data
can be very sparse. The sparsity problem causes existing methods to overfit the
rare observations and suffer from serious performance degradation. This is
particularly true when a new social network just starts to form. We observe
that many of today's social networks are composite in nature, where people are
often engaged in multiple networks. In addition, users' friendships are always
correlated, for example, they are both friends on Facebook and Google+. Thus,
by considering those overlapping users as the bridge, the friendship knowledge
in other networks can help predict their friendships in the current network.
This can be achieved by exploiting the knowledge in different networks in a
collective manner. However, as each individual network has its own properties
that can be incompatible and inconsistent with other networks, the naive
merging of all networks into a single one may not work well. The proposed
solution is to extract the common behaviors between different networks via a
hierarchical Bayesian model. It captures the common knowledge across networks,
while avoiding negative impacts due to network differences. Empirical studies
demonstrate that the proposed approach improves the mean average precision of
friendship prediction over state-of-the-art baselines on nine real-world social
networking datasets significantly.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2014 15:36:38 GMT"
}
] | 2014-02-18T00:00:00 | [
[
"Zhong",
"Erheng",
""
],
[
"Xiang",
"Evan Wei",
""
],
[
"Fan",
"Wei",
""
],
[
"Liu",
"Nathan Nan",
""
],
[
"Yang",
"Qiang",
""
]
] | TITLE: Friendship Prediction in Composite Social Networks
ABSTRACT: Friendship prediction is an important task in social network analysis (SNA).
It can help users identify friends and improve their level of activity. Most
previous approaches predict users' friendship based on their historical
records, such as their existing friendship, social interactions, etc. However,
in reality, most users have limited friends in a single network, and the data
can be very sparse. The sparsity problem causes existing methods to overfit the
rare observations and suffer from serious performance degradation. This is
particularly true when a new social network just starts to form. We observe
that many of today's social networks are composite in nature, where people are
often engaged in multiple networks. In addition, users' friendships are always
correlated, for example, they are both friends on Facebook and Google+. Thus,
by considering those overlapping users as the bridge, the friendship knowledge
in other networks can help predict their friendships in the current network.
This can be achieved by exploiting the knowledge in different networks in a
collective manner. However, as each individual network has its own properties
that can be incompatible and inconsistent with other networks, the naive
merging of all networks into a single one may not work well. The proposed
solution is to extract the common behaviors between different networks via a
hierarchical Bayesian model. It captures the common knowledge across networks,
while avoiding negative impacts due to network differences. Empirical studies
demonstrate that the proposed approach improves the mean average precision of
friendship prediction over state-of-the-art baselines on nine real-world social
networking datasets significantly.
| no_new_dataset | 0.938407 |
1402.4084 | Edward Moroshko | Edward Moroshko, Koby Crammer | Selective Sampling with Drift | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently there has been much work on selective sampling, an online active
learning setting, in which algorithms work in rounds. On each round an
algorithm receives an input and makes a prediction. Then, it can decide whether
to query a label, and if so to update its model, otherwise the input is
discarded. Most of this work is focused on the stationary case, where it is
assumed that there is a fixed target model, and the performance of the
algorithm is compared to a fixed model. However, in many real-world
applications, such as spam prediction, the best target function may drift over
time, or have shifts from time to time. We develop a novel selective sampling
algorithm for the drifting setting, analyze it under no assumptions on the
mechanism generating the sequence of instances, and derive new mistake bounds
that depend on the amount of drift in the problem. Simulations on synthetic and
real-world datasets demonstrate the superiority of our algorithms as a
selective sampling algorithm in the drifting setting.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2014 17:53:57 GMT"
}
] | 2014-02-18T00:00:00 | [
[
"Moroshko",
"Edward",
""
],
[
"Crammer",
"Koby",
""
]
] | TITLE: Selective Sampling with Drift
ABSTRACT: Recently there has been much work on selective sampling, an online active
learning setting, in which algorithms work in rounds. On each round an
algorithm receives an input and makes a prediction. Then, it can decide whether
to query a label, and if so to update its model, otherwise the input is
discarded. Most of this work is focused on the stationary case, where it is
assumed that there is a fixed target model, and the performance of the
algorithm is compared to a fixed model. However, in many real-world
applications, such as spam prediction, the best target function may drift over
time, or have shifts from time to time. We develop a novel selective sampling
algorithm for the drifting setting, analyze it under no assumptions on the
mechanism generating the sequence of instances, and derive new mistake bounds
that depend on the amount of drift in the problem. Simulations on synthetic and
real-world datasets demonstrate the superiority of our algorithms as a
selective sampling algorithm in the drifting setting.
| no_new_dataset | 0.946843 |
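The record above describes the generic online selective-sampling protocol: predict, decide whether to query the label, and update only when a label is obtained. The sketch below is a standard margin-based selective sampler for a linear model, offered only to make that protocol concrete; it is not the paper's drift-aware algorithm or its mistake-bound analysis, and the parameter names and defaults are assumptions.

```python
import numpy as np

def selective_sampling(stream, dim, b=1.0, lr=0.1, rng=None):
    """Margin-based selective sampling sketch for a linear classifier: query the
    label with probability b / (b + |margin|) and update only on queried rounds
    where a mistake was made. `stream` yields (x, y) pairs with y in {-1, +1}."""
    rng = rng if rng is not None else np.random.default_rng()
    w = np.zeros(dim)
    n_queries = 0
    for x, y in stream:
        x = np.asarray(x, dtype=float)
        margin = float(w @ x)
        y_hat = 1 if margin >= 0 else -1
        if rng.random() < b / (b + abs(margin)):   # query decision
            n_queries += 1
            if y_hat != y:
                w = w + lr * y * x                  # perceptron-style update
    return w, n_queries
```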
1304.5299 | Anoop Korattikara | Anoop Korattikara, Yutian Chen, Max Welling | Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget | v4 - version accepted by ICML2014 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can we make Bayesian posterior MCMC sampling more efficient when faced with
very large datasets? We argue that computing the likelihood for N datapoints in
the Metropolis-Hastings (MH) test to reach a single binary decision is
computationally inefficient. We introduce an approximate MH rule based on a
sequential hypothesis test that allows us to accept or reject samples with high
confidence using only a fraction of the data required for the exact MH rule.
While this method introduces an asymptotic bias, we show that this bias can be
controlled and is more than offset by a decrease in variance due to our ability
to draw more samples per unit of time.
| [
{
"version": "v1",
"created": "Fri, 19 Apr 2013 02:51:52 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Apr 2013 21:13:59 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Jul 2013 18:05:53 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Feb 2014 07:42:15 GMT"
}
] | 2014-02-17T00:00:00 | [
[
"Korattikara",
"Anoop",
""
],
[
"Chen",
"Yutian",
""
],
[
"Welling",
"Max",
""
]
] | TITLE: Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget
ABSTRACT: Can we make Bayesian posterior MCMC sampling more efficient when faced with
very large datasets? We argue that computing the likelihood for N datapoints in
the Metropolis-Hastings (MH) test to reach a single binary decision is
computationally inefficient. We introduce an approximate MH rule based on a
sequential hypothesis test that allows us to accept or reject samples with high
confidence using only a fraction of the data required for the exact MH rule.
While this method introduces an asymptotic bias, we show that this bias can be
controlled and is more than offset by a decrease in variance due to our ability
to draw more samples per unit of time.
| no_new_dataset | 0.951459 |
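The record above replaces the exact Metropolis-Hastings test with a sequential hypothesis test over mini-batches of per-datapoint log-likelihood ratios. The sketch below captures that idea with a simple one-sample t-test; it is an illustrative approximation in the spirit of the paper, not the authors' exact procedure, and the function name, arguments, and defaults are assumptions.

```python
import numpy as np
from scipy import stats

def approx_mh_accept(loglik_ratios, per_point_threshold, batch=100, eps=0.05, rng=None):
    """Sequential approximate MH accept/reject (illustrative sketch).
    loglik_ratios       : per-datapoint log p(x_i|theta') - log p(x_i|theta)
    per_point_threshold : (log u minus prior/proposal correction) / N, the value
                          the mean log-likelihood ratio is compared against
    Decide early once a t-test says the mean is on one side of the threshold
    with confidence 1 - eps; otherwise keep adding data, falling back to the
    exact decision on the full dataset.
    """
    rng = rng if rng is not None else np.random.default_rng()
    order = rng.permutation(len(loglik_ratios))
    seen = []
    for start in range(0, len(order), batch):
        seen.extend(loglik_ratios[i] for i in order[start:start + batch])
        diffs = np.asarray(seen) - per_point_threshold
        if len(seen) == len(order):
            return bool(diffs.mean() > 0)        # exact decision on the full data
        _, p = stats.ttest_1samp(diffs, 0.0)
        if p < eps:                              # confident early decision
            return bool(diffs.mean() > 0)
    return bool(diffs.mean() > 0)
```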
1402.1783 | Jason J Corso | Caiming Xiong, David Johnson, Jason J. Corso | Active Clustering with Model-Based Uncertainty Reduction | 14 pages, 8 figures, submitted to TPAMI (second version just fixes a
missing reference and format) | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised clustering seeks to augment traditional clustering methods by
incorporating side information provided via human expertise in order to
increase the semantic meaningfulness of the resulting clusters. However, most
current methods are \emph{passive} in the sense that the side information is
provided beforehand and selected randomly. This may require a large number of
constraints, some of which could be redundant, unnecessary, or even detrimental
to the clustering results. Thus in order to scale such semi-supervised
algorithms to larger problems it is desirable to pursue an \emph{active}
clustering method---i.e. an algorithm that maximizes the effectiveness of the
available human labor by only requesting human input where it will have the
greatest impact. Here, we propose a novel online framework for active
semi-supervised spectral clustering that selects pairwise constraints as
clustering proceeds, based on the principle of uncertainty reduction. Using a
first-order Taylor expansion, we decompose the expected uncertainty reduction
problem into a gradient and a step-scale, computed via an application of matrix
perturbation theory and cluster-assignment entropy, respectively. The resulting
model is used to estimate the uncertainty reduction potential of each sample in
the dataset. We then present the human user with pairwise queries with respect
to only the best candidate sample. We evaluate our method using three different
image datasets (faces, leaves and dogs), a set of common UCI machine learning
datasets and a gene dataset. The results validate our decomposition formulation
and show that our method is consistently superior to existing state-of-the-art
techniques, as well as being robust to noise and to unknown numbers of
clusters.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2014 22:13:03 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Feb 2014 02:53:32 GMT"
}
] | 2014-02-17T00:00:00 | [
[
"Xiong",
"Caiming",
""
],
[
"Johnson",
"David",
""
],
[
"Corso",
"Jason J.",
""
]
] | TITLE: Active Clustering with Model-Based Uncertainty Reduction
ABSTRACT: Semi-supervised clustering seeks to augment traditional clustering methods by
incorporating side information provided via human expertise in order to
increase the semantic meaningfulness of the resulting clusters. However, most
current methods are \emph{passive} in the sense that the side information is
provided beforehand and selected randomly. This may require a large number of
constraints, some of which could be redundant, unnecessary, or even detrimental
to the clustering results. Thus in order to scale such semi-supervised
algorithms to larger problems it is desirable to pursue an \emph{active}
clustering method---i.e. an algorithm that maximizes the effectiveness of the
available human labor by only requesting human input where it will have the
greatest impact. Here, we propose a novel online framework for active
semi-supervised spectral clustering that selects pairwise constraints as
clustering proceeds, based on the principle of uncertainty reduction. Using a
first-order Taylor expansion, we decompose the expected uncertainty reduction
problem into a gradient and a step-scale, computed via an application of matrix
perturbation theory and cluster-assignment entropy, respectively. The resulting
model is used to estimate the uncertainty reduction potential of each sample in
the dataset. We then present the human user with pairwise queries with respect
to only the best candidate sample. We evaluate our method using three different
image datasets (faces, leaves and dogs), a set of common UCI machine learning
datasets and a gene dataset. The results validate our decomposition formulation
and show that our method is consistently superior to existing state-of-the-art
techniques, as well as being robust to noise and to unknown numbers of
clusters.
| no_new_dataset | 0.942454 |
1402.3371 | Andrea Ballatore | Andrea Ballatore, Michela Bertolotto, David C. Wilson | An evaluative baseline for geo-semantic relatedness and similarity | GeoInformatica 2014 | null | 10.1007/s10707-013-0197-8 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In geographic information science and semantics, the computation of semantic
similarity is widely recognised as key to supporting a vast number of tasks in
information integration and retrieval. By contrast, the role of geo-semantic
relatedness has been largely ignored. In natural language processing, semantic
relatedness is often confused with the more specific semantic similarity. In
this article, we discuss a notion of geo-semantic relatedness based on Lehrer's
semantic fields, and we compare it with geo-semantic similarity. We then
describe and validate the Geo Relatedness and Similarity Dataset (GeReSiD), a
new open dataset designed to evaluate computational measures of geo-semantic
relatedness and similarity. This dataset is larger than existing datasets of
this kind, and includes 97 geographic terms combined into 50 term pairs rated
by 203 human subjects. GeReSiD is available online and can be used as an
evaluation baseline to determine empirically to what degree a given
computational model approximates geo-semantic relatedness and similarity.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2014 06:06:47 GMT"
}
] | 2014-02-17T00:00:00 | [
[
"Ballatore",
"Andrea",
""
],
[
"Bertolotto",
"Michela",
""
],
[
"Wilson",
"David C.",
""
]
] | TITLE: An evaluative baseline for geo-semantic relatedness and similarity
ABSTRACT: In geographic information science and semantics, the computation of semantic
similarity is widely recognised as key to supporting a vast number of tasks in
information integration and retrieval. By contrast, the role of geo-semantic
relatedness has been largely ignored. In natural language processing, semantic
relatedness is often confused with the more specific semantic similarity. In
this article, we discuss a notion of geo-semantic relatedness based on Lehrer's
semantic fields, and we compare it with geo-semantic similarity. We then
describe and validate the Geo Relatedness and Similarity Dataset (GeReSiD), a
new open dataset designed to evaluate computational measures of geo-semantic
relatedness and similarity. This dataset is larger than existing datasets of
this kind, and includes 97 geographic terms combined into 50 term pairs rated
by 203 human subjects. GeReSiD is available online and can be used as an
evaluation baseline to determine empirically to what degree a given
computational model approximates geo-semantic relatedness and similarity.
| new_dataset | 0.950273 |
1402.3499 | Ariel Cintron-Arias | Ariel Cintron-Arias | To Go Viral | null | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical models are validated against empirical data, while examining
potential indicators for an online video that went viral. We revisit some
concepts of infectious disease modeling (e.g. reproductive number) and we
comment on the role of model parameters that interplay in the spread of
innovations. The dataset employed here provides strong evidence that the number
of online views is governed by exponential growth patterns, explaining a common
feature of viral videos.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2014 15:35:25 GMT"
}
] | 2014-02-17T00:00:00 | [
[
"Cintron-Arias",
"Ariel",
""
]
] | TITLE: To Go Viral
ABSTRACT: Mathematical models are validated against empirical data, while examining
potential indicators for an online video that went viral. We revisit some
concepts of infectious disease modeling (e.g. reproductive number) and we
comment on the role of model parameters that interplay in the spread of
innovations. The dataset employed here provides strong evidence that the number
of online views is governed by exponential growth patterns, explaining a common
feature of viral videos.
| new_dataset | 0.564294 |
1402.3010 | Eray Ozkural | Eray Özkural, Cevdet Aykanat | 1-D and 2-D Parallel Algorithms for All-Pairs Similarity Problem | null | null | null | null | cs.IR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The all-pairs similarity problem asks to find all vector pairs in a set of
vectors the similarities of which surpass a given similarity threshold, and it
is a computational kernel in data mining and information retrieval for several
tasks. We investigate the parallelization of a recent fast sequential
algorithm. We propose effective 1-D and 2-D data distribution strategies that
preserve the essential optimizations in the fast algorithm. 1-D parallel
algorithms distribute either dimensions or vectors, whereas the 2-D parallel
algorithm distributes data both ways. Additional contributions to the 1-D
vertical distribution include a local pruning strategy to reduce the number of
candidates, a recursive pruning algorithm, and block processing to reduce
imbalance. The parallel algorithms were programmed in OCaml which affords much
convenience. Our experiments indicate that the performance depends on the
dataset, therefore a variety of parallelizations is useful.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2014 00:14:33 GMT"
}
] | 2014-02-14T00:00:00 | [
[
"Özkural",
"Eray",
""
],
[
"Aykanat",
"Cevdet",
""
]
] | TITLE: 1-D and 2-D Parallel Algorithms for All-Pairs Similarity Problem
ABSTRACT: The all-pairs similarity problem asks to find all vector pairs in a set of
vectors the similarities of which surpass a given similarity threshold, and it
is a computational kernel in data mining and information retrieval for several
tasks. We investigate the parallelization of a recent fast sequential
algorithm. We propose effective 1-D and 2-D data distribution strategies that
preserve the essential optimizations in the fast algorithm. 1-D parallel
algorithms distribute either dimensions or vectors, whereas the 2-D parallel
algorithm distributes data both ways. Additional contributions to the 1-D
vertical distribution include a local pruning strategy to reduce the number of
candidates, a recursive pruning algorithm, and block processing to reduce
imbalance. The parallel algorithms were programmed in OCaml which affords much
convenience. Our experiments indicate that the performance depends on the
dataset, therefore a variety of parallelizations is useful.
| no_new_dataset | 0.945349 |
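To make the problem statement in the record above concrete, the sketch below is the brute-force sequential baseline: report every vector pair whose cosine similarity exceeds a threshold. It deliberately omits the paper's 1-D/2-D data distribution and pruning strategies, and the function name and default threshold are assumptions.

```python
import numpy as np

def all_pairs_above(vectors, threshold=0.8):
    """Brute-force baseline for the all-pairs similarity problem: return index
    pairs (i, j), i < j, whose cosine similarity exceeds `threshold`. Assumes
    threshold > 0 so the zeroed lower triangle is never selected."""
    X = np.asarray(vectors, dtype=float)
    X = X / np.clip(np.linalg.norm(X, axis=1, keepdims=True), 1e-12, None)
    sims = np.triu(X @ X.T, k=1)          # keep each pair once, drop the diagonal
    i, j = np.where(sims > threshold)
    return list(zip(i.tolist(), j.tolist()))
```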
1402.3261 | Didier Henrion | Jan Heller, Didier Henrion (LAAS, CTU/FEE), Tomas Pajdla | Hand-Eye and Robot-World Calibration by Global Polynomial Optimization | null | null | null | null | cs.CV math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The need to relate measurements made by a camera to a different known
coordinate system arises in many engineering applications. Historically, it
appeared for the first time in connection with cameras mounted on robotic
systems. This problem is commonly known as hand-eye calibration. In this paper,
we present several formulations of hand-eye calibration that lead to
multivariate polynomial optimization problems. We show that the method of
convex linear matrix inequality (LMI) relaxations can be used to effectively
solve these problems and to obtain globally optimal solutions. Further, we show
that the same approach can be used for the simultaneous hand-eye and
robot-world calibration. Finally, we validate the proposed solutions using both
synthetic and real datasets.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2014 19:17:01 GMT"
}
] | 2014-02-14T00:00:00 | [
[
"Heller",
"Jan",
"",
"LAAS, CTU/FEE"
],
[
"Henrion",
"Didier",
"",
"LAAS, CTU/FEE"
],
[
"Pajdla",
"Tomas",
""
]
] | TITLE: Hand-Eye and Robot-World Calibration by Global Polynomial Optimization
ABSTRACT: The need to relate measurements made by a camera to a different known
coordinate system arises in many engineering applications. Historically, it
appeared for the first time in connection with cameras mounted on robotic
systems. This problem is commonly known as hand-eye calibration. In this paper,
we present several formulations of hand-eye calibration that lead to
multivariate polynomial optimization problems. We show that the method of
convex linear matrix inequality (LMI) relaxations can be used to effectively
solve these problems and to obtain globally optimal solutions. Further, we show
that the same approach can be used for the simultaneous hand-eye and
robot-world calibration. Finally, we validate the proposed solutions using both
synthetic and real datasets.
| no_new_dataset | 0.947478 |
1210.1766 | Jun Zhu | Jun Zhu, Ning Chen, and Eric P. Xing | Bayesian Inference with Posterior Regularization and applications to
Infinite Latent SVMs | 49 pages, 11 figures | null | null | null | cs.LG cs.AI stat.ME stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing Bayesian models, especially nonparametric Bayesian methods, rely on
specially conceived priors to incorporate domain knowledge for discovering
improved latent representations. While priors can affect posterior
distributions through Bayes' rule, imposing posterior regularization is
arguably more direct and in some cases more natural and general. In this paper,
we present regularized Bayesian inference (RegBayes), a novel computational
framework that performs posterior inference with a regularization term on the
desired post-data posterior distribution under an information theoretical
formulation. RegBayes is more flexible than the procedure that elicits expert
knowledge via priors, and it covers both directed Bayesian networks and
undirected Markov networks whose Bayesian formulation results in hybrid chain
graph models. When the regularization is induced from a linear operator on the
posterior distributions, such as the expectation operator, we present a general
convex-analysis theorem to characterize the solution of RegBayes. Furthermore,
we present two concrete examples of RegBayes, infinite latent support vector
machines (iLSVM) and multi-task infinite latent support vector machines
(MT-iLSVM), which explore the large-margin idea in combination with a
nonparametric Bayesian model for discovering predictive latent features for
classification and multi-task learning, respectively. We present efficient
inference methods and report empirical studies on several benchmark datasets,
which appear to demonstrate the merits inherited from both large-margin
learning and Bayesian nonparametrics. Such results were not available until
now, and contribute to push forward the interface between these two important
subfields, which have been largely treated as isolated in the community.
| [
{
"version": "v1",
"created": "Fri, 5 Oct 2012 14:10:20 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Apr 2013 09:33:44 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Feb 2014 06:31:12 GMT"
}
] | 2014-02-13T00:00:00 | [
[
"Zhu",
"Jun",
""
],
[
"Chen",
"Ning",
""
],
[
"Xing",
"Eric P.",
""
]
] | TITLE: Bayesian Inference with Posterior Regularization and applications to
Infinite Latent SVMs
ABSTRACT: Existing Bayesian models, especially nonparametric Bayesian methods, rely on
specially conceived priors to incorporate domain knowledge for discovering
improved latent representations. While priors can affect posterior
distributions through Bayes' rule, imposing posterior regularization is
arguably more direct and in some cases more natural and general. In this paper,
we present regularized Bayesian inference (RegBayes), a novel computational
framework that performs posterior inference with a regularization term on the
desired post-data posterior distribution under an information theoretical
formulation. RegBayes is more flexible than the procedure that elicits expert
knowledge via priors, and it covers both directed Bayesian networks and
undirected Markov networks whose Bayesian formulation results in hybrid chain
graph models. When the regularization is induced from a linear operator on the
posterior distributions, such as the expectation operator, we present a general
convex-analysis theorem to characterize the solution of RegBayes. Furthermore,
we present two concrete examples of RegBayes, infinite latent support vector
machines (iLSVM) and multi-task infinite latent support vector machines
(MT-iLSVM), which explore the large-margin idea in combination with a
nonparametric Bayesian model for discovering predictive latent features for
classification and multi-task learning, respectively. We present efficient
inference methods and report empirical studies on several benchmark datasets,
which appear to demonstrate the merits inherited from both large-margin
learning and Bayesian nonparametrics. Such results were not available until
now, and contribute to push forward the interface between these two important
subfields, which have been largely treated as isolated in the community.
| no_new_dataset | 0.948442 |
1309.7750 | Stefanos Ougiaroglou | Stefanos Ougiaroglou, Georgios Evangelidis, Dimitris A. Dervos | An Extensive Experimental Study on the Cluster-based Reference Set
Reduction for speeding-up the k-NN Classifier | Proceeding of International Conference on Integrated Information
(IC-InInfo 2011), pp. 12-15, Kos island, Greece, 2011 | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The k-Nearest Neighbor (k-NN) classification algorithm is one of the most
widely-used lazy classifiers because of its simplicity and ease of
implementation. It is considered to be an effective classifier and has many
applications. However, its major drawback is that when sequential search is
used to find the neighbors, it involves high computational cost. Speeding-up
k-NN search is still an active research field. Hwang and Cho have recently
proposed an adaptive cluster-based method for fast Nearest Neighbor searching.
The effectiveness of this method is based on the adjustment of three
parameters. However, the authors evaluated their method by setting specific
parameter values and using only one dataset. In this paper, an extensive
experimental study of this method is presented. The results, which are based on
five real life datasets, illustrate that if the parameters of the method are
carefully defined, one can achieve even better classification performance.
| [
{
"version": "v1",
"created": "Mon, 30 Sep 2013 08:24:14 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Feb 2014 22:46:36 GMT"
}
] | 2014-02-13T00:00:00 | [
[
"Ougiaroglou",
"Stefanos",
""
],
[
"Evangelidis",
"Georgios",
""
],
[
"Dervos",
"Dimitris A.",
""
]
] | TITLE: An Extensive Experimental Study on the Cluster-based Reference Set
Reduction for speeding-up the k-NN Classifier
ABSTRACT: The k-Nearest Neighbor (k-NN) classification algorithm is one of the most
widely-used lazy classifiers because of its simplicity and ease of
implementation. It is considered to be an effective classifier and has many
applications. However, its major drawback is that when sequential search is
used to find the neighbors, it involves high computational cost. Speeding-up
k-NN search is still an active research field. Hwang and Cho have recently
proposed an adaptive cluster-based method for fast Nearest Neighbor searching.
The effectiveness of this method is based on the adjustment of three
parameters. However, the authors evaluated their method by setting specific
parameter values and using only one dataset. In this paper, an extensive
experimental study of this method is presented. The results, which are based on
five real life datasets, illustrate that if the parameters of the method are
carefully defined, one can achieve even better classification performance.
| no_new_dataset | 0.949949 |
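The record above concerns speeding up k-NN by restricting the search to a cluster-based reference set. The sketch below illustrates that general idea (cluster the training set, then search neighbours only inside the query's nearest cluster); it is not the specific adaptive method of Hwang and Cho that the paper evaluates, and the class name and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

class ClusterReducedKNN:
    """Cluster-based reference-set reduction for k-NN (illustrative sketch)."""

    def __init__(self, n_clusters=10, k=3):
        self.n_clusters, self.k = n_clusters, k

    def fit(self, X, y):
        self.X, self.y = np.asarray(X, dtype=float), np.asarray(y)
        self.km = KMeans(n_clusters=self.n_clusters, n_init=10).fit(self.X)
        return self

    def predict_one(self, x):
        x = np.asarray(x, dtype=float)
        c = self.km.predict(x.reshape(1, -1))[0]
        members = np.where(self.km.labels_ == c)[0]          # reduced reference set
        dists = np.linalg.norm(self.X[members] - x, axis=1)
        nearest = members[np.argsort(dists)[: self.k]]
        labels, counts = np.unique(self.y[nearest], return_counts=True)
        return labels[np.argmax(counts)]
```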
1402.2807 | Rui Zhou | Rui Zhou, Chengfei Liu, Jeffrey Xu Yu, Weifa Liang and Yanchun Zhang | Efficient Truss Maintenance in Evolving Networks | null | null | null | null | cs.DB cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Truss was proposed to study social network data represented by graphs. A
k-truss of a graph is a cohesive subgraph, in which each edge is contained in
at least k-2 triangles within the subgraph. While truss has been demonstrated
as superior to model the close relationship in social networks and efficient
algorithms for finding trusses have been extensively studied, very little
attention has been paid to truss maintenance. However, most social networks are
evolving networks. It may be infeasible to recompute trusses from scratch from
time to time in order to find the up-to-date $k$-trusses in the evolving
networks. In this paper, we discuss how to maintain trusses in a graph with
dynamic updates. We first discuss a set of properties on maintaining trusses,
then propose algorithms on maintaining trusses on edge deletions and
insertions, finally, we discuss truss index maintenance. We test the proposed
techniques on real datasets. The experiment results show the promise of our
work.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2014 12:57:06 GMT"
}
] | 2014-02-13T00:00:00 | [
[
"Zhou",
"Rui",
""
],
[
"Liu",
"Chengfei",
""
],
[
"Yu",
"Jeffrey Xu",
""
],
[
"Liang",
"Weifa",
""
],
[
"Zhang",
"Yanchun",
""
]
] | TITLE: Efficient Truss Maintenance in Evolving Networks
ABSTRACT: Truss was proposed to study social network data represented by graphs. A
k-truss of a graph is a cohesive subgraph, in which each edge is contained in
at least k-2 triangles within the subgraph. While truss has been demonstrated
as superior to model the close relationship in social networks and efficient
algorithms for finding trusses have been extensively studied, very little
attention has been paid to truss maintenance. However, most social networks are
evolving networks. It may be infeasible to recompute trusses from scratch from
time to time in order to find the up-to-date $k$-trusses in the evolving
networks. In this paper, we discuss how to maintain trusses in a graph with
dynamic updates. We first discuss a set of properties on maintaining trusses,
then propose algorithms on maintaining trusses on edge deletions and
insertions, finally, we discuss truss index maintenance. We test the proposed
techniques on real datasets. The experiment results show the promise of our
work.
| no_new_dataset | 0.950686 |
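The record above builds on the k-truss definition: a subgraph in which every edge lies in at least k-2 triangles. The sketch below is the from-scratch peeling computation that the paper's maintenance algorithms avoid repeating after each edge insertion or deletion; the function name is an assumption, and NetworkX also provides `nx.k_truss`.

```python
import networkx as nx

def k_truss(G, k):
    """From-scratch k-truss sketch: repeatedly delete edges contained in fewer
    than k-2 triangles until none remain, then drop isolated nodes."""
    H = G.copy()
    changed = True
    while changed:
        changed = False
        for u, v in list(H.edges()):
            support = len(set(H[u]) & set(H[v]))   # common neighbours = triangles on (u, v)
            if support < k - 2:
                H.remove_edge(u, v)
                changed = True
    H.remove_nodes_from([n for n in list(H) if H.degree(n) == 0])
    return H
```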
1402.2826 | Aniket Bera | Aniket Bera and Dinesh Manocha | Realtime Multilevel Crowd Tracking using Reciprocal Velocity Obstacles | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel, realtime algorithm to compute the trajectory of each
pedestrian in moderately dense crowd scenes. Our formulation is based on an
adaptive particle filtering scheme that uses a multi-agent motion model based
on velocity-obstacles, and takes into account local interactions as well as
physical and personal constraints of each pedestrian. Our method dynamically
changes the number of particles allocated to each pedestrian based on different
confidence metrics. Additionally, we use a new high-definition crowd video
dataset, which is used to evaluate the performance of different pedestrian
tracking algorithms. This dataset consists of videos of indoor and outdoor
scenes, recorded at different locations with 30-80 pedestrians. We highlight
the performance benefits of our algorithm over prior techniques using this
dataset. In practice, our algorithm can compute trajectories of tens of
pedestrians on a multi-core desktop CPU at interactive rates (27-30 frames per
second). To the best of our knowledge, our approach is 4-5 times faster than
prior methods, which provide similar accuracy.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2014 15:49:53 GMT"
}
] | 2014-02-13T00:00:00 | [
[
"Bera",
"Aniket",
""
],
[
"Manocha",
"Dinesh",
""
]
] | TITLE: Realtime Multilevel Crowd Tracking using Reciprocal Velocity Obstacles
ABSTRACT: We present a novel, realtime algorithm to compute the trajectory of each
pedestrian in moderately dense crowd scenes. Our formulation is based on an
adaptive particle filtering scheme that uses a multi-agent motion model based
on velocity-obstacles, and takes into account local interactions as well as
physical and personal constraints of each pedestrian. Our method dynamically
changes the number of particles allocated to each pedestrian based on different
confidence metrics. Additionally, we use a new high-definition crowd video
dataset, which is used to evaluate the performance of different pedestrian
tracking algorithms. This dataset consists of videos of indoor and outdoor
scenes, recorded at different locations with 30-80 pedestrians. We highlight
the performance benefits of our algorithm over prior techniques using this
dataset. In practice, our algorithm can compute trajectories of tens of
pedestrians on a multi-core desktop CPU at interactive rates (27-30 frames per
second). To the best of our knowledge, our approach is 4-5 times faster than
prior methods, which provide similar accuracy.
| new_dataset | 0.958499 |
1402.2941 | Zohaib Khan | Zohaib Khan, Faisal Shafait, Yiqun Hu, Ajmal Mian | Multispectral Palmprint Encoding and Recognition | Preliminary version of this manuscript was published in ICCV 2011. Z.
Khan A. Mian and Y. Hu, "Contour Code: Robust and Efficient Multispectral
Palmprint Encoding for Human Recognition", International Conference on
Computer Vision, 2011. MATLAB Code available:
https://sites.google.com/site/zohaibnet/Home/codes | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Palmprints are emerging as a new entity in multi-modal biometrics for human
identification and verification. Multispectral palmprint images captured in the
visible and infrared spectrum not only contain the wrinkles and ridge structure
of a palm, but also the underlying pattern of veins; making them a highly
discriminating biometric identifier. In this paper, we propose a feature
encoding scheme for robust and highly accurate representation and matching of
multispectral palmprints. To facilitate compact storage of the feature, we
design a binary hash table structure that allows for efficient matching in
large databases. Comprehensive experiments for both identification and
verification scenarios are performed on two public datasets -- one captured
with a contact-based sensor (PolyU dataset), and the other with a contact-free
sensor (CASIA dataset). Recognition results in various experimental setups show
that the proposed method consistently outperforms existing state-of-the-art
methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA)
are the lowest reported in the literature on both datasets and clearly indicate the
viability of palmprint as a reliable and promising biometric. All source codes
are publicly available.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2014 06:35:51 GMT"
}
] | 2014-02-13T00:00:00 | [
[
"Khan",
"Zohaib",
""
],
[
"Shafait",
"Faisal",
""
],
[
"Hu",
"Yiqun",
""
],
[
"Mian",
"Ajmal",
""
]
] | TITLE: Multispectral Palmprint Encoding and Recognition
ABSTRACT: Palmprints are emerging as a new entity in multi-modal biometrics for human
identification and verification. Multispectral palmprint images captured in the
visible and infrared spectrum not only contain the wrinkles and ridge structure
of a palm, but also the underlying pattern of veins; making them a highly
discriminating biometric identifier. In this paper, we propose a feature
encoding scheme for robust and highly accurate representation and matching of
multispectral palmprints. To facilitate compact storage of the feature, we
design a binary hash table structure that allows for efficient matching in
large databases. Comprehensive experiments for both identification and
verification scenarios are performed on two public datasets -- one captured
with a contact-based sensor (PolyU dataset), and the other with a contact-free
sensor (CASIA dataset). Recognition results in various experimental setups show
that the proposed method consistently outperforms existing state-of-the-art
methods. Error rates achieved by our method (0.003% on PolyU and 0.2% on CASIA)
are the lowest reported in the literature on both datasets and clearly indicate the
viability of palmprint as a reliable and promising biometric. All source codes
are publicly available.
| no_new_dataset | 0.9357 |
1307.4048 | Pavan Kumar D S | D. S. Pavan Kumar, N. Vishnu Prasad, Vikas Joshi, S. Umesh | Modified SPLICE and its Extension to Non-Stereo Data for Noise Robust
Speech Recognition | Submitted to Automatic Speech Recognition and Understanding (ASRU)
2013 Workshop | null | 10.1109/ASRU.2013.6707725 | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a modification to the training process of the popular SPLICE
algorithm has been proposed for noise robust speech recognition. The
modification is based on feature correlations, and enables this stereo-based
algorithm to improve the performance in all noise conditions, especially in
unseen cases. Further, the modified framework is extended to work for
non-stereo datasets where clean and noisy training utterances, but not stereo
counterparts, are required. Finally, an MLLR-based computationally efficient
run-time noise adaptation method in SPLICE framework has been proposed. The
modified SPLICE shows 8.6% absolute improvement over SPLICE in Test C of
Aurora-2 database, and 2.93% overall. Non-stereo method shows 10.37% and 6.93%
absolute improvements over Aurora-2 and Aurora-4 baseline models respectively.
Run-time adaptation shows 9.89% absolute improvement in modified framework as
compared to SPLICE for Test C, and 4.96% overall w.r.t. standard MLLR
adaptation on HMMs.
| [
{
"version": "v1",
"created": "Mon, 15 Jul 2013 18:39:10 GMT"
}
] | 2014-02-12T00:00:00 | [
[
"Kumar",
"D. S. Pavan",
""
],
[
"Prasad",
"N. Vishnu",
""
],
[
"Joshi",
"Vikas",
""
],
[
"Umesh",
"S.",
""
]
] | TITLE: Modified SPLICE and its Extension to Non-Stereo Data for Noise Robust
Speech Recognition
ABSTRACT: In this paper, a modification to the training process of the popular SPLICE
algorithm has been proposed for noise robust speech recognition. The
modification is based on feature correlations, and enables this stereo-based
algorithm to improve the performance in all noise conditions, especially in
unseen cases. Further, the modified framework is extended to work for
non-stereo datasets where clean and noisy training utterances, but not stereo
counterparts, are required. Finally, an MLLR-based computationally efficient
run-time noise adaptation method in SPLICE framework has been proposed. The
modified SPLICE shows 8.6% absolute improvement over SPLICE in Test C of
Aurora-2 database, and 2.93% overall. Non-stereo method shows 10.37% and 6.93%
absolute improvements over Aurora-2 and Aurora-4 baseline models respectively.
Run-time adaptation shows 9.89% absolute improvement in modified framework as
compared to SPLICE for Test C, and 4.96% overall w.r.t. standard MLLR
adaptation on HMMs.
| no_new_dataset | 0.950595 |
1402.2300 | Aaron Karper | Aaron Karper | Feature and Variable Selection in Classification | Part of master seminar in document analysis held by Marcus
Eichenberger-Liwicki | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/publicdomain/ | The amount of information in the form of features and variables available
to machine learning algorithms is ever increasing. This can lead to classifiers
that are prone to overfitting in high dimensions; high-dimensional models do
not lend themselves to interpretable results, and the CPU and memory resources
necessary to run on high-dimensional datasets severely limit the applications of
the approaches. Variable and feature selection aim to remedy this by finding a
subset of features that in some way captures the information provided best. In
this paper we present the general methodology and highlight some specific
approaches.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2014 21:05:58 GMT"
}
] | 2014-02-12T00:00:00 | [
[
"Karper",
"Aaron",
""
]
] | TITLE: Feature and Variable Selection in Classification
ABSTRACT: The amount of information in the form of features and variables available
to machine learning algorithms is ever increasing. This can lead to classifiers
that are prone to overfitting in high dimensions; high-dimensional models do
not lend themselves to interpretable results, and the CPU and memory resources
necessary to run on high-dimensional datasets severely limit the applications of
the approaches. Variable and feature selection aim to remedy this by finding a
subset of features that in some way captures the information provided best. In
this paper we present the general methodology and highlight some specific
approaches.
| no_new_dataset | 0.952794 |
1402.2363 | Ashish Shingade ANS | Ashish Shingade and Archana Ghotkar | Animation of 3D Human Model Using Markerless Motion Capture Applied To
Sports | null | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markerless motion capture is an active research area in 3D virtualization. In
the proposed work, we present a system for markerless motion capture for 3D human
character animation; the paper also presents a survey on motion and skeleton tracking
techniques that have been developed or are under development. The paper proposes a
method to transform the motion of a performer to a 3D human character (model),
so that the 3D human character performs similar movements as the performer in
real time. In the proposed work, human model data will be captured by a Kinect
camera, and the processed data will be applied to the 3D human model for animation. The 3D
human model is created using open-source software (MakeHuman). An anticipated
dataset for sport activity is considered as input, which can be applied to any
HCI application.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2014 04:05:12 GMT"
}
] | 2014-02-12T00:00:00 | [
[
"Shingade",
"Ashish",
""
],
[
"Ghotkar",
"Archana",
""
]
] | TITLE: Animation of 3D Human Model Using Markerless Motion Capture Applied To
Sports
ABSTRACT: Markerless motion capture is an active research area in 3D virtualization. In
the proposed work, we present a system for markerless motion capture for 3D human
character animation; the paper also presents a survey on motion and skeleton tracking
techniques that have been developed or are under development. The paper proposes a
method to transform the motion of a performer to a 3D human character (model),
so that the 3D human character performs similar movements as the performer in
real time. In the proposed work, human model data will be captured by a Kinect
camera, and the processed data will be applied to the 3D human model for animation. The 3D
human model is created using open-source software (MakeHuman). An anticipated
dataset for sport activity is considered as input, which can be applied to any
HCI application.
| no_new_dataset | 0.934634 |
1402.2606 | Dibyendu Mukherjee | Dibyendu Mukherjee | A Fast Two Pass Multi-Value Segmentation Algorithm based on Connected
Component Analysis | 9 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Connected component analysis (CCA) has been heavily used to label binary
images and classify segments. However, it has not been well-exploited to
segment multi-valued natural images. This work proposes a novel multi-value
segmentation algorithm that utilizes CCA to segment color images. A user
defined distance measure is incorporated in the proposed modified CCA to
identify and segment similar image regions. The raw output of the algorithm
consists of distinctly labelled segmented regions. The proposed algorithm has a
unique design architecture that provides several benefits: 1) it can be used to
segment any multi-channel multi-valued image; 2) the distance
measure/segmentation criteria can be application-specific and 3) an absolute
linear-time implementation allows easy extension for real-time video
segmentation. Experimental demonstrations of the aforesaid benefits are
presented along with the comparison results on multiple datasets with current
benchmark algorithms. A number of possible application areas are also
identified and results on real-time video segmentation has been presented to
show the promise of the proposed method.
| [
{
"version": "v1",
"created": "Tue, 11 Feb 2014 19:27:05 GMT"
}
] | 2014-02-12T00:00:00 | [
[
"Mukherjee",
"Dibyendu",
""
]
] | TITLE: A Fast Two Pass Multi-Value Segmentation Algorithm based on Connected
Component Analysis
ABSTRACT: Connected component analysis (CCA) has been heavily used to label binary
images and classify segments. However, it has not been well-exploited to
segment multi-valued natural images. This work proposes a novel multi-value
segmentation algorithm that utilizes CCA to segment color images. A user
defined distance measure is incorporated in the proposed modified CCA to
identify and segment similar image regions. The raw output of the algorithm
consists of distinctly labelled segmented regions. The proposed algorithm has a
unique design architecture that provides several benefits: 1) it can be used to
segment any multi-channel multi-valued image; 2) the distance
measure/segmentation criteria can be application-specific and 3) an absolute
linear-time implementation allows easy extension for real-time video
segmentation. Experimental demonstrations of the aforesaid benefits are
presented along with the comparison results on multiple datasets with current
benchmark algorithms. A number of possible application areas are also
identified and results on real-time video segmentation has been presented to
show the promise of the proposed method.
| no_new_dataset | 0.945851 |
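The record above extends connected component analysis to multi-valued images by merging neighbouring pixels under a user-defined distance measure. The sketch below is a simplified single-pass union-find version of that idea (merge 4-adjacent pixels whose colour distance is below a tolerance, then relabel); it is not the paper's two-pass algorithm, and the function name and tolerance are assumptions.

```python
import numpy as np

def segment_by_color(img, tol=20.0):
    """Union-find segmentation sketch: merge 4-adjacent pixels whose Euclidean
    colour distance is below `tol`, then return a compact label image."""
    h, w = img.shape[:2]
    parent = np.arange(h * w)

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    img = img.astype(float)
    for y in range(h):
        for x in range(w):
            if x + 1 < w and np.linalg.norm(img[y, x] - img[y, x + 1]) < tol:
                union(y * w + x, y * w + x + 1)
            if y + 1 < h and np.linalg.norm(img[y, x] - img[y + 1, x]) < tol:
                union(y * w + x, (y + 1) * w + x)

    roots = np.array([find(i) for i in range(h * w)])
    _, labels = np.unique(roots, return_inverse=True)
    return labels.reshape(h, w)
```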
1311.7676 | Song Gao | Song Gao, Linna Li, Wenwen Li, Krzysztof Janowicz, Yue Zhang | Constructing Gazetteers from Volunteered Big Geo-Data Based on Hadoop | 45 pages, 10 figures | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional gazetteers are built and maintained by authoritative mapping
agencies. In the age of Big Data, it is possible to construct gazetteers in a
data-driven approach by mining rich volunteered geographic information (VGI)
from the Web. In this research, we build a scalable distributed platform and a
high-performance geoprocessing workflow based on the Hadoop ecosystem to
harvest crowd-sourced gazetteer entries. Using experiments based on geotagged
datasets in Flickr, we find that the MapReduce-based workflow running on the
spatially enabled Hadoop cluster can reduce the processing time compared with
traditional desktop-based operations by an order of magnitude. We demonstrate
how to use such a novel spatial-computing infrastructure to facilitate
gazetteer research. In addition, we introduce a provenance-based trust model
for quality assurance. This work offers new insights on enriching future
gazetteers with the use of Hadoop clusters, and makes contributions in
connecting GIS to the cloud computing environment for the next frontier of Big
Geo-Data analytics.
| [
{
"version": "v1",
"created": "Fri, 29 Nov 2013 19:52:42 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Feb 2014 07:11:22 GMT"
}
] | 2014-02-10T00:00:00 | [
[
"Gao",
"Song",
""
],
[
"Li",
"Linna",
""
],
[
"Li",
"Wenwen",
""
],
[
"Janowicz",
"Krzysztof",
""
],
[
"Zhang",
"Yue",
""
]
] | TITLE: Constructing Gazetteers from Volunteered Big Geo-Data Based on Hadoop
ABSTRACT: Traditional gazetteers are built and maintained by authoritative mapping
agencies. In the age of Big Data, it is possible to construct gazetteers in a
data-driven approach by mining rich volunteered geographic information (VGI)
from the Web. In this research, we build a scalable distributed platform and a
high-performance geoprocessing workflow based on the Hadoop ecosystem to
harvest crowd-sourced gazetteer entries. Using experiments based on geotagged
datasets in Flickr, we find that the MapReduce-based workflow running on the
spatially enabled Hadoop cluster can reduce the processing time compared with
traditional desktop-based operations by an order of magnitude. We demonstrate
how to use such a novel spatial-computing infrastructure to facilitate
gazetteer research. In addition, we introduce a provenance-based trust model
for quality assurance. This work offers new insights on enriching future
gazetteers with the use of Hadoop clusters, and makes contributions in
connecting GIS to the cloud computing environment for the next frontier of Big
Geo-Data analytics.
| no_new_dataset | 0.946695 |
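The record above harvests gazetteer entries with a MapReduce workflow on Hadoop. The sketch below is a toy stand-in written in the Hadoop Streaming style (a mapper and reducer reading tab-separated lines); the input layout "photo_id, lat, lon, place_tag" is a hypothetical assumption, not the paper's actual Flickr schema or geoprocessing workflow.

```python
import sys
from itertools import groupby

def mapper(lines):
    """Emit (place_tag, 1) for every geotagged record (assumed 4-field layout)."""
    for line in lines:
        parts = line.rstrip("\n").split("\t")
        if len(parts) == 4 and parts[3]:
            yield parts[3], 1

def reducer(pairs):
    """Sum counts per place tag (Hadoop would deliver the pairs pre-sorted)."""
    for tag, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield tag, sum(count for _, count in group)

if __name__ == "__main__":
    # Local simulation of the streaming job: pipe records through stdin.
    for tag, count in reducer(mapper(sys.stdin)):
        sys.stdout.write(f"{tag}\t{count}\n")
```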
1402.1546 | Weiwei Sun | Renchu Song, Weiwei Sun, Baihua Zheng, Yu Zheng | PRESS: A Novel Framework of Trajectory Compression in Road Networks | 27 pages, 17 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Location data becomes more and more important. In this paper, we focus on the
trajectory data, and propose a new framework, namely PRESS (Paralleled
Road-Network-Based Trajectory Compression), to effectively compress trajectory
data under road network constraints. Different from existing work, PRESS
proposes a novel representation for trajectories to separate the spatial
representation of a trajectory from the temporal representation, and proposes a
Hybrid Spatial Compression (HSC) algorithm and error Bounded Temporal
Compression (BTC) algorithm to compress the spatial and temporal information of
trajectories respectively. PRESS also supports common spatial-temporal queries
without fully decompressing the data. Through an extensive experimental study
on real trajectory dataset, PRESS significantly outperforms existing approaches
in terms of saving storage cost of trajectory data with bounded errors.
| [
{
"version": "v1",
"created": "Fri, 7 Feb 2014 03:29:08 GMT"
}
] | 2014-02-10T00:00:00 | [
[
"Song",
"Renchu",
""
],
[
"Sun",
"Weiwei",
""
],
[
"Zheng",
"Baihua",
""
],
[
"Zheng",
"Yu",
""
]
] | TITLE: PRESS: A Novel Framework of Trajectory Compression in Road Networks
ABSTRACT: Location data becomes more and more important. In this paper, we focus on the
trajectory data, and propose a new framework, namely PRESS (Paralleled
Road-Network-Based Trajectory Compression), to effectively compress trajectory
data under road network constraints. Different from existing work, PRESS
proposes a novel representation for trajectories to separate the spatial
representation of a trajectory from the temporal representation, and proposes a
Hybrid Spatial Compression (HSC) algorithm and error Bounded Temporal
Compression (BTC) algorithm to compress the spatial and temporal information of
trajectories respectively. PRESS also supports common spatial-temporal queries
without fully decompressing the data. Through an extensive experimental study
on real trajectory dataset, PRESS significantly outperforms existing approaches
in terms of saving storage cost of trajectory data with bounded errors.
| no_new_dataset | 0.944791 |
1402.0914 | Scott Linderman | Scott W. Linderman and Ryan P. Adams | Discovering Latent Network Structure in Point Process Data | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Networks play a central role in modern data analysis, enabling us to reason
about systems by studying the relationships between their parts. Most often in
network analysis, the edges are given. However, in many systems it is difficult
or impossible to measure the network directly. Examples of latent networks
include economic interactions linking financial instruments and patterns of
reciprocity in gang violence. In these cases, we are limited to noisy
observations of events associated with each node. To enable analysis of these
implicit networks, we develop a probabilistic model that combines
mutually-exciting point processes with random graph models. We show how the
Poisson superposition principle enables an elegant auxiliary variable
formulation and a fully-Bayesian, parallel inference algorithm. We evaluate
this new model empirically on several datasets.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 23:48:23 GMT"
}
] | 2014-02-06T00:00:00 | [
[
"Linderman",
"Scott W.",
""
],
[
"Adams",
"Ryan P.",
""
]
] | TITLE: Discovering Latent Network Structure in Point Process Data
ABSTRACT: Networks play a central role in modern data analysis, enabling us to reason
about systems by studying the relationships between their parts. Most often in
network analysis, the edges are given. However, in many systems it is difficult
or impossible to measure the network directly. Examples of latent networks
include economic interactions linking financial instruments and patterns of
reciprocity in gang violence. In these cases, we are limited to noisy
observations of events associated with each node. To enable analysis of these
implicit networks, we develop a probabilistic model that combines
mutually-exciting point processes with random graph models. We show how the
Poisson superposition principle enables an elegant auxiliary variable
formulation and a fully-Bayesian, parallel inference algorithm. We evaluate
this new model empirically on several datasets.
| no_new_dataset | 0.94887 |
1402.0595 | Mojtaba Seyedhosseini | Mojtaba Seyedhosseini and Tolga Tasdizen | Scene Labeling with Contextual Hierarchical Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene labeling is the problem of assigning an object label to each pixel. It
unifies the image segmentation and object recognition problems. The importance
of using contextual information in scene labeling frameworks has been widely
realized in the field. We propose a contextual framework, called contextual
hierarchical model (CHM), which learns contextual information in a hierarchical
framework for scene labeling. At each level of the hierarchy, a classifier is
trained based on downsampled input images and outputs of previous levels. Our
model then incorporates the resulting multi-resolution contextual information
into a classifier to segment the input image at original resolution. This
training strategy allows for optimization of a joint posterior probability at
multiple resolutions through the hierarchy. Contextual hierarchical model is
purely based on the input image patches and does not make use of any fragments
or shape examples. Hence, it is applicable to a variety of problems such as
object segmentation and edge detection. We demonstrate that CHM outperforms
the state-of-the-art on the Stanford background and Weizmann horse datasets. It
also outperforms state-of-the-art edge detection methods on the NYU depth
dataset and achieves state-of-the-art performance on the Berkeley segmentation
dataset (BSDS 500).
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 02:10:01 GMT"
}
] | 2014-02-05T00:00:00 | [
[
"Seyedhosseini",
"Mojtaba",
""
],
[
"Tasdizen",
"Tolga",
""
]
] | TITLE: Scene Labeling with Contextual Hierarchical Models
ABSTRACT: Scene labeling is the problem of assigning an object label to each pixel. It
unifies the image segmentation and object recognition problems. The importance
of using contextual information in scene labeling frameworks has been widely
realized in the field. We propose a contextual framework, called contextual
hierarchical model (CHM), which learns contextual information in a hierarchical
framework for scene labeling. At each level of the hierarchy, a classifier is
trained based on downsampled input images and outputs of previous levels. Our
model then incorporates the resulting multi-resolution contextual information
into a classifier to segment the input image at original resolution. This
training strategy allows for optimization of a joint posterior probability at
multiple resolutions through the hierarchy. Contextual hierarchical model is
purely based on the input image patches and does not make use of any fragments
or shape examples. Hence, it is applicable to a variety of problems such as
object segmentation and edge detection. We demonstrate that CHM outperforms
the state-of-the-art on the Stanford background and Weizmann horse datasets. It
also outperforms state-of-the-art edge detection methods on the NYU depth
dataset and achieves state-of-the-art performance on the Berkeley segmentation
dataset (BSDS 500).
| no_new_dataset | 0.952574 |
1309.3256 | Rachel Ward | Abhinav Nellore and Rachel Ward | Recovery guarantees for exemplar-based clustering | 24 pages, 4 figures | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a certain class of distributions, we prove that the linear programming
relaxation of $k$-medoids clustering---a variant of $k$-means clustering where
means are replaced by exemplars from within the dataset---distinguishes points
drawn from nonoverlapping balls with high probability once the number of points
drawn and the separation distance between any two balls are sufficiently large.
Our results hold in the nontrivial regime where the separation distance is
small enough that points drawn from different balls may be closer to each other
than points drawn from the same ball; in this case, clustering by thresholding
pairwise distances between points can fail. We also exhibit numerical evidence
of high-probability recovery in a substantially more permissive regime.
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2013 19:38:18 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Feb 2014 03:56:31 GMT"
}
] | 2014-02-04T00:00:00 | [
[
"Nellore",
"Abhinav",
""
],
[
"Ward",
"Rachel",
""
]
] | TITLE: Recovery guarantees for exemplar-based clustering
ABSTRACT: For a certain class of distributions, we prove that the linear programming
relaxation of $k$-medoids clustering---a variant of $k$-means clustering where
means are replaced by exemplars from within the dataset---distinguishes points
drawn from nonoverlapping balls with high probability once the number of points
drawn and the separation distance between any two balls are sufficiently large.
Our results hold in the nontrivial regime where the separation distance is
small enough that points drawn from different balls may be closer to each other
than points drawn from the same ball; in this case, clustering by thresholding
pairwise distances between points can fail. We also exhibit numerical evidence
of high-probability recovery in a substantially more permissive regime.
| no_new_dataset | 0.952397 |
1311.2663 | Shandian Zhe | Shandian Zhe and Yuan Qi and Youngja Park and Ian Molloy and Suresh
Chari | DinTucker: Scaling up Gaussian process models on multidimensional arrays
with billions of elements | null | null | null | null | cs.LG cs.DC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Infinite Tucker Decomposition (InfTucker) and random function prior models,
as nonparametric Bayesian models on infinite exchangeable arrays, are more
powerful models than widely-used multilinear factorization methods including
Tucker and PARAFAC decomposition, (partly) due to their capability of modeling
nonlinear relationships between array elements. Despite their great predictive
performance and sound theoretical foundations, they cannot handle massive data
due to a prohibitively high training time. To overcome this limitation, we
present Distributed Infinite Tucker (DINTUCKER), a large-scale nonlinear tensor
decomposition algorithm on MAPREDUCE. While maintaining the predictive accuracy
of InfTucker, it is scalable on massive data. DINTUCKER is based on a new
hierarchical Bayesian model that enables local training of InfTucker on
subarrays and information integration from all local training results. We use
distributed stochastic gradient descent, coupled with variational inference, to
train this model. We apply DINTUCKER to multidimensional arrays with billions
of elements from applications in the "Read the Web" project (Carlson et al.,
2010) and in information security and compare it with the state-of-the-art
large-scale tensor decomposition method, GigaTensor. On both datasets,
DINTUCKER achieves significantly higher prediction accuracy with less
computational time.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2013 02:36:03 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Nov 2013 23:50:57 GMT"
},
{
"version": "v3",
"created": "Sun, 15 Dec 2013 13:56:18 GMT"
},
{
"version": "v4",
"created": "Thu, 23 Jan 2014 05:49:44 GMT"
},
{
"version": "v5",
"created": "Sat, 1 Feb 2014 14:35:04 GMT"
}
] | 2014-02-04T00:00:00 | [
[
"Zhe",
"Shandian",
""
],
[
"Qi",
"Yuan",
""
],
[
"Park",
"Youngja",
""
],
[
"Molloy",
"Ian",
""
],
[
"Chari",
"Suresh",
""
]
] | TITLE: DinTucker: Scaling up Gaussian process models on multidimensional arrays
with billions of elements
ABSTRACT: Infinite Tucker Decomposition (InfTucker) and random function prior models,
as nonparametric Bayesian models on infinite exchangeable arrays, are more
powerful models than widely-used multilinear factorization methods including
Tucker and PARAFAC decomposition, (partly) due to their capability of modeling
nonlinear relationships between array elements. Despite their great predictive
performance and sound theoretical foundations, they cannot handle massive data
due to a prohibitively high training time. To overcome this limitation, we
present Distributed Infinite Tucker (DINTUCKER), a large-scale nonlinear tensor
decomposition algorithm on MAPREDUCE. While maintaining the predictive accuracy
of InfTucker, it is scalable on massive data. DINTUCKER is based on a new
hierarchical Bayesian model that enables local training of InfTucker on
subarrays and information integration from all local training results. We use
distributed stochastic gradient descent, coupled with variational inference, to
train this model. We apply DINTUCKER to multidimensional arrays with billions
of elements from applications in the "Read the Web" project (Carlson et al.,
2010) and in information security and compare it with the state-of-the-art
large-scale tensor decomposition method, GigaTensor. On both datasets,
DINTUCKER achieves significantly higher prediction accuracy with less
computational time.
| no_new_dataset | 0.945197 |
1312.6169 | C\'edric Lagnier | C\'edric Lagnier, Simon Bourigault, Sylvain Lamprier, Ludovic Denoyer
and Patrick Gallinari | Learning Information Spread in Content Networks | 4 pages | null | null | null | cs.LG cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a model for predicting the diffusion of content information on
social media. While propagation is usually modeled on discrete graph structures,
we introduce here a continuous diffusion model, where nodes in a diffusion
cascade are projected onto a latent space with the property that their
proximity in this space reflects the temporal diffusion process. We focus on
the task of predicting contaminated users for an initial information source and
provide preliminary results on different datasets.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 22:49:01 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Feb 2014 20:36:57 GMT"
}
] | 2014-02-04T00:00:00 | [
[
"Lagnier",
"Cédric",
""
],
[
"Bourigault",
"Simon",
""
],
[
"Lamprier",
"Sylvain",
""
],
[
"Denoyer",
"Ludovic",
""
],
[
"Gallinari",
"Patrick",
""
]
] | TITLE: Learning Information Spread in Content Networks
ABSTRACT: We introduce a model for predicting the diffusion of content information on
social media. While propagation is usually modeled on discrete graph structures,
we introduce here a continuous diffusion model, where nodes in a diffusion
cascade are projected onto a latent space with the property that their
proximity in this space reflects the temporal diffusion process. We focus on
the task of predicting contaminated users for an initial information source and
provide preliminary results on different datasets.
| no_new_dataset | 0.9549 |
1401.2288 | Hemant Kumar Aggarwal | Hemant Kumar Aggarwal and Angshul Majumdar | Extension of Sparse Randomized Kaczmarz Algorithm for Multiple
Measurement Vectors | null | null | null | null | cs.NA cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Kaczmarz algorithm is popular for iteratively solving an overdetermined
system of linear equations. The traditional Kaczmarz algorithm can approximate
the solution in a few sweeps through the equations, but a randomized version of
the Kaczmarz algorithm was shown to converge exponentially and independently of
the number of equations. Recently, an algorithm for finding a sparse solution
to a linear system of equations has been proposed based on a weighted
randomized Kaczmarz algorithm. These algorithms solve the single measurement
vector problem; however, there are applications where multiple measurements are
available. In this work, the objective is to solve a multiple measurement
vector problem with common sparse support by modifying the randomized Kaczmarz
algorithm. We have also modeled the problem of face recognition from video as
the multiple measurement vector problem and solved it using our proposed
technique. We have compared the proposed algorithm with the state-of-the-art
spectral projected gradient algorithm for multiple measurement vectors on both
real and synthetic datasets. The Monte Carlo simulations confirm that our
proposed algorithm has better recovery and convergence rates than the MMV
version of the spectral projected gradient algorithm under fairness
constraints.
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2014 11:24:35 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Jan 2014 10:05:15 GMT"
},
{
"version": "v3",
"created": "Sun, 2 Feb 2014 08:13:58 GMT"
}
] | 2014-02-04T00:00:00 | [
[
"Aggarwal",
"Hemant Kumar",
""
],
[
"Majumdar",
"Angshul",
""
]
] | TITLE: Extension of Sparse Randomized Kaczmarz Algorithm for Multiple
Measurement Vectors
ABSTRACT: The Kaczmarz algorithm is popular for iteratively solving an overdetermined
system of linear equations. The traditional Kaczmarz algorithm can approximate
the solution in a few sweeps through the equations, but a randomized version of
the Kaczmarz algorithm was shown to converge exponentially and independently of
the number of equations. Recently, an algorithm for finding a sparse solution
to a linear system of equations has been proposed based on a weighted
randomized Kaczmarz algorithm. These algorithms solve the single measurement
vector problem; however, there are applications where multiple measurements are
available. In this work, the objective is to solve a multiple measurement
vector problem with common sparse support by modifying the randomized Kaczmarz
algorithm. We have also modeled the problem of face recognition from video as
the multiple measurement vector problem and solved it using our proposed
technique. We have compared the proposed algorithm with the state-of-the-art
spectral projected gradient algorithm for multiple measurement vectors on both
real and synthetic datasets. The Monte Carlo simulations confirm that our
proposed algorithm has better recovery and convergence rates than the MMV
version of the spectral projected gradient algorithm under fairness
constraints.
| no_new_dataset | 0.944842 |
1401.4307 | Khalid Belhajjame | Khalid Belhajjame and Jun Zhao and Daniel Garijo and Kristina Hettne
and Raul Palma and \'Oscar Corcho and Jos\'e-Manuel G\'omez-P\'erez and Sean
Bechhofer and Graham Klyne and Carole Goble | The Research Object Suite of Ontologies: Sharing and Exchanging Research
Data and Methods on the Open Web | 20 pages | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research in life sciences is increasingly being conducted in a digital and
online environment. In particular, life scientists have been pioneers in
embracing new computational tools to conduct their investigations. To support
the sharing of digital objects produced during such research investigations, we
have witnessed in the last few years the emergence of specialized repositories,
e.g., DataVerse and FigShare. Such repositories provide users with the means to
share and publish datasets that were used or generated in research
investigations. While these repositories have proven their usefulness,
interpreting and reusing evidence for most research results is a challenging
task. Additional contextual descriptions are needed to understand how those
results were generated and/or the circumstances under which they were
concluded. Because of this, scientists are calling for models that go beyond
the publication of datasets to systematically capture the life cycle of
scientific investigations and provide a single entry point to access the
information about the hypothesis investigated, the datasets used, the
experiments carried out, the results of the experiments, the people involved in
the research, etc. In this paper we present the Research Object (RO) suite of
ontologies, which provide a structured container to encapsulate research data
and methods along with essential metadata descriptions. Research Objects are
portable units that enable the sharing, preservation, interpretation and reuse
of research investigation results. The ontologies we present have been designed
in the light of requirements that we gathered from life scientists. They have
been built upon existing popular vocabularies to facilitate interoperability.
Furthermore, we have developed tools to support the creation and sharing of
Research Objects, thereby promoting and facilitating their adoption.
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2014 11:07:52 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Feb 2014 10:27:19 GMT"
}
] | 2014-02-04T00:00:00 | [
[
"Belhajjame",
"Khalid",
""
],
[
"Zhao",
"Jun",
""
],
[
"Garijo",
"Daniel",
""
],
[
"Hettne",
"Kristina",
""
],
[
"Palma",
"Raul",
""
],
[
"Corcho",
"Óscar",
""
],
[
"Gómez-Pérez",
"José-Manuel",
""
],
[
"Bechhofer",
"Sean",
""
],
[
"Klyne",
"Graham",
""
],
[
"Goble",
"Carole",
""
]
] | TITLE: The Research Object Suite of Ontologies: Sharing and Exchanging Research
Data and Methods on the Open Web
ABSTRACT: Research in life sciences is increasingly being conducted in a digital and
online environment. In particular, life scientists have been pioneers in
embracing new computational tools to conduct their investigations. To support
the sharing of digital objects produced during such research investigations, we
have witnessed in the last few years the emergence of specialized repositories,
e.g., DataVerse and FigShare. Such repositories provide users with the means to
share and publish datasets that were used or generated in research
investigations. While these repositories have proven their usefulness,
interpreting and reusing evidence for most research results is a challenging
task. Additional contextual descriptions are needed to understand how those
results were generated and/or the circumstances under which they were
concluded. Because of this, scientists are calling for models that go beyond
the publication of datasets to systematically capture the life cycle of
scientific investigations and provide a single entry point to access the
information about the hypothesis investigated, the datasets used, the
experiments carried out, the results of the experiments, the people involved in
the research, etc. In this paper we present the Research Object (RO) suite of
ontologies, which provide a structured container to encapsulate research data
and methods along with essential metadata descriptions. Research Objects are
portable units that enable the sharing, preservation, interpretation and reuse
of research investigation results. The ontologies we present have been designed
in the light of requirements that we gathered from life scientists. They have
been built upon existing popular vocabularies to facilitate interoperability.
Furthermore, we have developed tools to support the creation and sharing of
Research Objects, thereby promoting and facilitating their adoption.
| no_new_dataset | 0.944177 |
1402.0238 | Vincent Labatut | Burcu Kantarc{\i}, Vincent Labatut | Classification of Complex Networks Based on Topological Properties | null | 3rd Conference on Social Computing and its Applications, Karlsruhe
: Germany (2013) | 10.1109/CGC.2013.54 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex networks are a powerful modeling tool, allowing the study of
countless real-world systems. They have been used in very different domains
such as computer science, biology, sociology, management, etc. Authors have
been trying to characterize them using various measures such as degree
distribution, transitivity or average distance. Their goal is to detect certain
properties such as the small-world or scale-free properties. Previous works
have shown some of these properties are present in many different systems,
while others are characteristic of certain types of systems only. However, each
one of these studies generally focuses on a very small number of topological
measures and networks. In this work, we aim at using a more systematic
approach. We first constitute a dataset of 152 publicly available networks,
spanning over 7 different domains. We then process 14 different topological
measures to characterize them in the most possible complete way. Finally, we
apply standard data mining tools to analyze these data. A cluster analysis
reveals it is possible to obtain two significantly distinct clusters of
networks, corresponding roughly to a bisection of the domains modeled by the
networks. On these data, the most discriminant measures are density,
modularity, average degree and transitivity, and to a lesser extent, closeness
and edge-betweenness centralities.
| [
{
"version": "v1",
"created": "Sun, 2 Feb 2014 19:48:52 GMT"
}
] | 2014-02-04T00:00:00 | [
[
"Kantarcı",
"Burcu",
""
],
[
"Labatut",
"Vincent",
""
]
] | TITLE: Classification of Complex Networks Based on Topological Properties
ABSTRACT: Complex networks are a powerful modeling tool, allowing the study of
countless real-world systems. They have been used in very different domains
such as computer science, biology, sociology, management, etc. Authors have
been trying to characterize them using various measures such as degree
distribution, transitivity or average distance. Their goal is to detect certain
properties such as the small-world or scale-free properties. Previous works
have shown some of these properties are present in many different systems,
while others are characteristic of certain types of systems only. However, each
one of these studies generally focuses on a very small number of topological
measures and networks. In this work, we aim at using a more systematic
approach. We first constitute a dataset of 152 publicly available networks,
spanning over 7 different domains. We then process 14 different topological
measures to characterize them in the most possible complete way. Finally, we
apply standard data mining tools to analyze these data. A cluster analysis
reveals it is possible to obtain two significantly distinct clusters of
networks, corresponding roughly to a bisection of the domains modeled by the
networks. On these data, the most discriminant measures are density,
modularity, average degree and transitivity, and to a lesser extent, closeness
and edge-betweenness centralities.
| no_new_dataset | 0.937268 |
1402.0459 | Haoyang (Hubert) Duan | Hubert Haoyang Duan | Applying Supervised Learning Algorithms and a New Feature Selection
Method to Predict Coronary Artery Disease | This is a Master of Science in Mathematics thesis under the
supervision of Dr. Vladimir Pestov and Dr. George Wells submitted on January
31, 2014 at the University of Ottawa; 102 pages and 15 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From a fresh data science perspective, this thesis discusses the prediction
of coronary artery disease based on genetic variations at the DNA base pair
level, called Single-Nucleotide Polymorphisms (SNPs), collected from the
Ontario Heart Genomics Study (OHGS).
First, the thesis explains two commonly used supervised learning algorithms,
the k-Nearest Neighbour (k-NN) and Random Forest classifiers, and includes a
complete proof that the k-NN classifier is universally consistent in any finite
dimensional normed vector space. Second, the thesis introduces two
dimensionality reduction steps, Random Projections, a known feature extraction
technique based on the Johnson-Lindenstrauss lemma, and a new method termed
Mass Transportation Distance (MTD) Feature Selection for discrete domains.
Then, this thesis compares the performance of Random Projections with the k-NN
classifier against MTD Feature Selection and Random Forest, for predicting
artery disease based on accuracy, the F-Measure, and area under the Receiver
Operating Characteristic (ROC) curve.
The comparative results demonstrate that MTD Feature Selection with Random
Forest is vastly superior to Random Projections and k-NN. The Random Forest
classifier is able to obtain an accuracy of 0.6660 and an area under the ROC
curve of 0.8562 on the OHGS genetic dataset, when 3335 SNPs are selected by MTD
Feature Selection for classification. This area is considerably better than the
previous high score of 0.608 obtained by Davies et al. in 2010 on the same
dataset.
| [
{
"version": "v1",
"created": "Mon, 3 Feb 2014 18:47:41 GMT"
}
] | 2014-02-04T00:00:00 | [
[
"Duan",
"Hubert Haoyang",
""
]
] | TITLE: Applying Supervised Learning Algorithms and a New Feature Selection
Method to Predict Coronary Artery Disease
ABSTRACT: From a fresh data science perspective, this thesis discusses the prediction
of coronary artery disease based on genetic variations at the DNA base pair
level, called Single-Nucleotide Polymorphisms (SNPs), collected from the
Ontario Heart Genomics Study (OHGS).
First, the thesis explains two commonly used supervised learning algorithms,
the k-Nearest Neighbour (k-NN) and Random Forest classifiers, and includes a
complete proof that the k-NN classifier is universally consistent in any finite
dimensional normed vector space. Second, the thesis introduces two
dimensionality reduction steps, Random Projections, a known feature extraction
technique based on the Johnson-Lindenstrauss lemma, and a new method termed
Mass Transportation Distance (MTD) Feature Selection for discrete domains.
Then, this thesis compares the performance of Random Projections with the k-NN
classifier against MTD Feature Selection and Random Forest, for predicting
artery disease based on accuracy, the F-Measure, and area under the Receiver
Operating Characteristic (ROC) curve.
The comparative results demonstrate that MTD Feature Selection with Random
Forest is vastly superior to Random Projections and k-NN. The Random Forest
classifier is able to obtain an accuracy of 0.6660 and an area under the ROC
curve of 0.8562 on the OHGS genetic dataset, when 3335 SNPs are selected by MTD
Feature Selection for classification. This area is considerably better than the
previous high score of 0.608 obtained by Davies et al. in 2010 on the same
dataset.
| no_new_dataset | 0.951504 |
1310.4822 | Hugo Jair Escalante | Hugo Jair Escalante, Isabelle Guyon, Vassilis Athitsos, Pat
Jangyodsuk, Jun Wan | Principal motion components for gesture recognition using a
single-example | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces principal motion components (PMC), a new method for
one-shot gesture recognition. In the considered scenario a single
training-video is available for each gesture to be recognized, which limits the
application of traditional techniques (e.g., HMMs). In PMC, a 2D map of motion
energy is obtained for each pair of consecutive frames in a video. Motion maps
associated with a video are processed to obtain a PCA model, which is used for
recognition under a reconstruction-error approach. The main benefits of the
proposed approach are its simplicity, ease of implementation, competitive
performance and efficiency. We report experimental results in one-shot gesture
recognition using the ChaLearn Gesture Dataset; a benchmark comprising more
than 50,000 gestures, recorded as both RGB and depth video with a Kinect
camera. Results obtained with PMC are competitive with alternative methods
proposed for the same data set.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2013 19:52:50 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Jan 2014 12:04:41 GMT"
}
] | 2014-02-03T00:00:00 | [
[
"Escalante",
"Hugo Jair",
""
],
[
"Guyon",
"Isabelle",
""
],
[
"Athitsos",
"Vassilis",
""
],
[
"Jangyodsuk",
"Pat",
""
],
[
"Wan",
"Jun",
""
]
] | TITLE: Principal motion components for gesture recognition using a
single-example
ABSTRACT: This paper introduces principal motion components (PMC), a new method for
one-shot gesture recognition. In the considered scenario a single
training-video is available for each gesture to be recognized, which limits the
application of traditional techniques (e.g., HMMs). In PMC, a 2D map of motion
energy is obtained for each pair of consecutive frames in a video. Motion maps
associated with a video are processed to obtain a PCA model, which is used for
recognition under a reconstruction-error approach. The main benefits of the
proposed approach are its simplicity, ease of implementation, competitive
performance and efficiency. We report experimental results in one-shot gesture
recognition using the ChaLearn Gesture Dataset; a benchmark comprising more
than 50,000 gestures, recorded as both RGB and depth video with a Kinect
camera. Results obtained with PMC are competitive with alternative methods
proposed for the same data set.
| no_new_dataset | 0.832645 |
1401.7727 | Benjamin Rubinstein | Battista Biggio and Igino Corona and Blaine Nelson and Benjamin I. P.
Rubinstein and Davide Maiorca and Giorgio Fumera and Giorgio Giacinto and
Fabio Roli | Security Evaluation of Support Vector Machines in Adversarial
Environments | 47 pages, 9 figures; chapter accepted into book 'Support Vector
Machine Applications' | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Support Vector Machines (SVMs) are among the most popular classification
techniques adopted in security applications like malware detection, intrusion
detection, and spam filtering. However, if SVMs are to be incorporated in
real-world security systems, they must be able to cope with attack patterns
that can either mislead the learning algorithm (poisoning), evade detection
(evasion), or gain information about their internal parameters (privacy
breaches). The main contributions of this chapter are twofold. First, we
introduce a formal general framework for the empirical evaluation of the
security of machine-learning systems. Second, according to our framework, we
demonstrate the feasibility of evasion, poisoning and privacy attacks against
SVMs in real-world security problems. For each attack technique, we evaluate
its impact and discuss whether (and how) it can be countered through an
adversary-aware design of SVMs. Our experiments are easily reproducible thanks
to open-source code that we have made available, together with all the employed
datasets, on a public repository.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2014 03:37:18 GMT"
}
] | 2014-01-31T00:00:00 | [
[
"Biggio",
"Battista",
""
],
[
"Corona",
"Igino",
""
],
[
"Nelson",
"Blaine",
""
],
[
"Rubinstein",
"Benjamin I. P.",
""
],
[
"Maiorca",
"Davide",
""
],
[
"Fumera",
"Giorgio",
""
],
[
"Giacinto",
"Giorgio",
""
],
[
"Roli",
"and Fabio",
""
]
] | TITLE: Security Evaluation of Support Vector Machines in Adversarial
Environments
ABSTRACT: Support Vector Machines (SVMs) are among the most popular classification
techniques adopted in security applications like malware detection, intrusion
detection, and spam filtering. However, if SVMs are to be incorporated in
real-world security systems, they must be able to cope with attack patterns
that can either mislead the learning algorithm (poisoning), evade detection
(evasion), or gain information about their internal parameters (privacy
breaches). The main contributions of this chapter are twofold. First, we
introduce a formal general framework for the empirical evaluation of the
security of machine-learning systems. Second, according to our framework, we
demonstrate the feasibility of evasion, poisoning and privacy attacks against
SVMs in real-world security problems. For each attack technique, we evaluate
its impact and discuss whether (and how) it can be countered through an
adversary-aware design of SVMs. Our experiments are easily reproducible thanks
to open-source code that we have made available, together with all the employed
datasets, on a public repository.
| no_new_dataset | 0.94887 |
1401.7837 | Andrea Andrisani | Antonio Dumas, Andrea Andrisani, Maurizio Bonnici, Mauro Madonia,
Michele Trancossi | A new correlation between solar energy radiation and some atmospheric
parameters | 23 pages, 3 figures, 4 tables | null | null | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The energy balance for an atmospheric layer near the soil is evaluated. By
integrating it over the whole day, a linear relationship is obtained between
the global daily solar radiation incident on a horizontal surface and the
product of the clear-sky sunshine hours and the maximum temperature variation
during the day. The results show an accuracy comparable with some well
recognized solar energy models, such as the Ångström-Prescott one, at least for
the Mediterranean climatic area. Validation of the result has been performed
using older datasets which are almost contemporary with, and relative to the
same sites as, the ones used for comparison.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2014 13:27:50 GMT"
}
] | 2014-01-31T00:00:00 | [
[
"Dumas",
"Antonio",
""
],
[
"Andrisani",
"Andrea",
""
],
[
"Bonnici",
"Maurizio",
""
],
[
"Madonia",
"Mauro",
""
],
[
"Trancossi",
"Michele",
""
]
] | TITLE: A new correlation between solar energy radiation and some atmospheric
parameters
ABSTRACT: The energy balance for an atmospheric layer near the soil is evaluated. By
integrating it over the whole day, a linear relationship is obtained between
the global daily solar radiation incident on a horizontal surface and the
product of the clear-sky sunshine hours and the maximum temperature variation
during the day. The results show an accuracy comparable with some well
recognized solar energy models, such as the Ångström-Prescott one, at least for
the Mediterranean climatic area. Validation of the result has been performed
using older datasets which are almost contemporary with, and relative to the
same sites as, the ones used for comparison.
| no_new_dataset | 0.93744 |
1310.6775 | Linas Vepstas PhD | Linas Vepstas | Durkheim Project Data Analysis Report | 43 pages, to appear as appendix of primary science publication
Poulin, et al "Predicting the risk of suicide by analyzing the text of
clinical notes" | null | 10.1371/journal.pone.0085733.s001 | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report describes the suicidality prediction models created under the
DARPA DCAPS program in association with the Durkheim Project
[http://durkheimproject.org/]. The models were built primarily from
unstructured text (free-format clinician notes) for several hundred patient
records obtained from the Veterans Health Administration (VHA). The models were
constructed using a genetic programming algorithm applied to bag-of-words and
bag-of-phrases datasets. The influence of additional structured data was
explored but was found to be minor. Given the small dataset size,
classification between cohorts was high fidelity (98%). Cross-validation
suggests these models are reasonably predictive, with an accuracy of 50% to 69%
on five rotating folds, with ensemble averages of 58% to 67%. One particularly
noteworthy result is that word-pairs can dramatically improve classification
accuracy; but this is the case only when one of the words in the pair is
already known to have a high predictive value. By contrast, the set of all
possible word-pairs does not improve on a simple bag-of-words model.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2013 21:10:53 GMT"
}
] | 2014-01-30T00:00:00 | [
[
"Vepstas",
"Linas",
""
]
] | TITLE: Durkheim Project Data Analysis Report
ABSTRACT: This report describes the suicidality prediction models created under the
DARPA DCAPS program in association with the Durkheim Project
[http://durkheimproject.org/]. The models were built primarily from
unstructured text (free-format clinician notes) for several hundred patient
records obtained from the Veterans Health Administration (VHA). The models were
constructed using a genetic programming algorithm applied to bag-of-words and
bag-of-phrases datasets. The influence of additional structured data was
explored but was found to be minor. Given the small dataset size,
classification between cohorts was high fidelity (98%). Cross-validation
suggests these models are reasonably predictive, with an accuracy of 50% to 69%
on five rotating folds, with ensemble averages of 58% to 67%. One particularly
noteworthy result is that word-pairs can dramatically improve classification
accuracy; but this is the case only when one of the words in the pair is
already known to have a high predictive value. By contrast, the set of all
possible word-pairs does not improve on a simple bag-of-words model.
| no_new_dataset | 0.946547 |
1401.1974 | Vu Nguyen | Vu Nguyen, Dinh Phung, XuanLong Nguyen, Svetha Venkatesh, Hung Hai Bui | Bayesian Nonparametric Multilevel Clustering with Group-Level Contexts | Full version of ICML 2014 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a Bayesian nonparametric framework for multilevel clustering which
utilizes group-level context information to simultaneously discover
low-dimensional structures of the group contents and partitions groups into
clusters. Using the Dirichlet process as the building block, our model
constructs a product base-measure with a nested structure to accommodate
content and context observations at multiple levels. The proposed model
possesses properties that link the nested Dirichlet processes (nDP) and the
Dirichlet process mixture models (DPM) in an interesting way: integrating out
all contents results in the DPM over contexts, whereas integrating out
group-specific contexts results in the nDP mixture over content variables. We
provide a Polya-urn view of the model and an efficient collapsed Gibbs
inference procedure. Extensive experiments on real-world datasets demonstrate
the advantage of utilizing context information via our model in both text and
image domains.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2014 12:08:07 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jan 2014 06:28:03 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Jan 2014 08:13:58 GMT"
},
{
"version": "v4",
"created": "Wed, 29 Jan 2014 01:54:57 GMT"
}
] | 2014-01-30T00:00:00 | [
[
"Nguyen",
"Vu",
""
],
[
"Phung",
"Dinh",
""
],
[
"Nguyen",
"XuanLong",
""
],
[
"Venkatesh",
"Svetha",
""
],
[
"Bui",
"Hung Hai",
""
]
] | TITLE: Bayesian Nonparametric Multilevel Clustering with Group-Level Contexts
ABSTRACT: We present a Bayesian nonparametric framework for multilevel clustering which
utilizes group-level context information to simultaneously discover
low-dimensional structures of the group contents and partitions groups into
clusters. Using the Dirichlet process as the building block, our model
constructs a product base-measure with a nested structure to accommodate
content and context observations at multiple levels. The proposed model
possesses properties that link the nested Dirichlet processes (nDP) and the
Dirichlet process mixture models (DPM) in an interesting way: integrating out
all contents results in the DPM over contexts, whereas integrating out
group-specific contexts results in the nDP mixture over content variables. We
provide a Polya-urn view of the model and an efficient collapsed Gibbs
inference procedure. Extensive experiments on real-world datasets demonstrate
the advantage of utilizing context information via our model in both text and
image domains.
| no_new_dataset | 0.953319 |
1207.7253 | S\'ebastien Gigu\`ere | S\'ebastien Gigu\`ere, Mario Marchand, Fran\c{c}ois Laviolette,
Alexandre Drouin and Jacques Corbeil | Learning a peptide-protein binding affinity predictor with kernel ridge
regression | 22 pages, 4 figures, 5 tables | BMC Bioinformatics 2013, 14:82 | 10.1186/1471-2105-14-82 | null | q-bio.QM cs.LG q-bio.BM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a specialized string kernel for small bio-molecules, peptides and
pseudo-sequences of binding interfaces. The kernel incorporates
physico-chemical properties of amino acids and elegantly generalizes eight
kernels, such as the Oligo, the Weighted Degree, the Blended Spectrum, and the
Radial Basis Function. We provide a low complexity dynamic programming
algorithm for the exact computation of the kernel and a linear time algorithm
for its approximation. Combined with kernel ridge regression and SupCK, a
novel binding pocket kernel, the proposed kernel yields biologically relevant
and good prediction accuracy on the PepX database. For the first time, a
machine learning predictor is capable of accurately predicting the binding
affinity of any peptide to any protein. The method was also applied to both
single-target and pan-specific Major Histocompatibility Complex class II
benchmark datasets and three Quantitative Structure Affinity Model benchmark
datasets.
On all benchmarks, our method significantly (p-value < 0.057) outperforms the
current state-of-the-art methods at predicting peptide-protein binding
affinities. The proposed approach is flexible and can be applied to predict any
quantitative biological activity. The method should be of value to a large
segment of the research community with the potential to accelerate
peptide-based drug and vaccine development.
| [
{
"version": "v1",
"created": "Tue, 31 Jul 2012 14:11:31 GMT"
}
] | 2014-01-29T00:00:00 | [
[
"Giguère",
"Sébastien",
""
],
[
"Marchand",
"Mario",
""
],
[
"Laviolette",
"François",
""
],
[
"Drouin",
"Alexandre",
""
],
[
"Corbeil",
"Jacques",
""
]
] | TITLE: Learning a peptide-protein binding affinity predictor with kernel ridge
regression
ABSTRACT: We propose a specialized string kernel for small bio-molecules, peptides and
pseudo-sequences of binding interfaces. The kernel incorporates
physico-chemical properties of amino acids and elegantly generalizes eight
kernels, such as the Oligo, the Weighted Degree, the Blended Spectrum, and the
Radial Basis Function. We provide a low complexity dynamic programming
algorithm for the exact computation of the kernel and a linear time algorithm
for its approximation. Combined with kernel ridge regression and SupCK, a
novel binding pocket kernel, the proposed kernel yields biologically relevant
and good prediction accuracy on the PepX database. For the first time, a
machine learning predictor is capable of accurately predicting the binding
affinity of any peptide to any protein. The method was also applied to both
single-target and pan-specific Major Histocompatibility Complex class II
benchmark datasets and three Quantitative Structure Affinity Model benchmark
datasets.
On all benchmarks, our method significantly (p-value < 0.057) outperforms the
current state-of-the-art methods at predicting peptide-protein binding
affinities. The proposed approach is flexible and can be applied to predict any
quantitative biological activity. The method should be of value to a large
segment of the research community with the potential to accelerate
peptide-based drug and vaccine development.
| no_new_dataset | 0.946001 |
1212.0695 | Emanuele Frandi | Emanuele Frandi, Ricardo Nanculef, Maria Grazia Gasparo, Stefano Lodi,
Claudio Sartori | Training Support Vector Machines Using Frank-Wolfe Optimization Methods | null | International Journal on Pattern Recognition and Artificial
Intelligence, 27(3), 2013 | 10.1142/S0218001413600033 | null | cs.LG cs.CV math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training a Support Vector Machine (SVM) requires the solution of a quadratic
programming problem (QP) whose computational complexity becomes prohibitively
expensive for large scale datasets. Traditional optimization methods cannot be
directly applied in these cases, mainly due to memory restrictions.
By adopting a slightly different objective function and under mild conditions
on the kernel used within the model, efficient algorithms to train SVMs have
been devised under the name of Core Vector Machines (CVMs). This framework
exploits the equivalence of the resulting learning problem with the task of
building a Minimal Enclosing Ball (MEB) problem in a feature space, where data
is implicitly embedded by a kernel function.
In this paper, we improve on the CVM approach by proposing two novel methods
to build SVMs based on the Frank-Wolfe algorithm, recently revisited as a fast
method to approximate the solution of a MEB problem. In contrast to CVMs, our
algorithms do not require computing the solutions of a sequence of
increasingly complex QPs and are defined by using only analytic optimization
steps. Experiments on a large collection of datasets show that our methods
scale better than CVMs in most cases, sometimes at the price of a slightly
lower accuracy. Like CVMs, the proposed methods can be easily extended to machine
learning problems other than binary classification. However, effective
classifiers are also obtained using kernels which do not satisfy the condition
required by CVMs and can thus be used for a wider set of problems.
| [
{
"version": "v1",
"created": "Tue, 4 Dec 2012 12:05:31 GMT"
}
] | 2014-01-29T00:00:00 | [
[
"Frandi",
"Emanuele",
""
],
[
"Nanculef",
"Ricardo",
""
],
[
"Gasparo",
"Maria Grazia",
""
],
[
"Lodi",
"Stefano",
""
],
[
"Sartori",
"Claudio",
""
]
] | TITLE: Training Support Vector Machines Using Frank-Wolfe Optimization Methods
ABSTRACT: Training a Support Vector Machine (SVM) requires the solution of a quadratic
programming problem (QP) whose computational complexity becomes prohibitively
expensive for large scale datasets. Traditional optimization methods cannot be
directly applied in these cases, mainly due to memory restrictions.
By adopting a slightly different objective function and under mild conditions
on the kernel used within the model, efficient algorithms to train SVMs have
been devised under the name of Core Vector Machines (CVMs). This framework
exploits the equivalence of the resulting learning problem with the task of
building a Minimal Enclosing Ball (MEB) problem in a feature space, where data
is implicitly embedded by a kernel function.
In this paper, we improve on the CVM approach by proposing two novel methods
to build SVMs based on the Frank-Wolfe algorithm, recently revisited as a fast
method to approximate the solution of a MEB problem. In contrast to CVMs, our
algorithms do not require computing the solutions of a sequence of
increasingly complex QPs and are defined by using only analytic optimization
steps. Experiments on a large collection of datasets show that our methods
scale better than CVMs in most cases, sometimes at the price of a slightly
lower accuracy. Like CVMs, the proposed methods can be easily extended to machine
learning problems other than binary classification. However, effective
classifiers are also obtained using kernels which do not satisfy the condition
required by CVMs and can thus be used for a wider set of problems.
| no_new_dataset | 0.944125 |
1312.6597 | Luis Marujo | Luis Marujo, Anatole Gershman, Jaime Carbonell, David Martins de
Matos, Jo\~ao P. Neto | Co-Multistage of Multiple Classifiers for Imbalanced Multiclass Learning | Preliminary version of the paper | null | null | null | cs.LG cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose two stochastic architectural models (CMC and CMC-M)
with two layers of classifiers applicable to datasets with one and multiple
skewed classes. This distinction becomes important when the datasets have a
large number of classes. Therefore, we present a novel solution to imbalanced
multiclass learning with several skewed majority classes, which improves
minority classes identification. This fact is particularly important for text
classification tasks, such as event detection. Our models combined with
pre-processing sampling techniques improved the classification results on six
well-known datasets. Finally, we have also introduced a new metric, SG-Mean, to
overcome the multiplication-by-zero limitation of G-Mean.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2013 16:52:56 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Jan 2014 23:09:17 GMT"
}
] | 2014-01-28T00:00:00 | [
[
"Marujo",
"Luis",
""
],
[
"Gershman",
"Anatole",
""
],
[
"Carbonell",
"Jaime",
""
],
[
"de Matos",
"David Martins",
""
],
[
"Neto",
"João P.",
""
]
] | TITLE: Co-Multistage of Multiple Classifiers for Imbalanced Multiclass Learning
ABSTRACT: In this work, we propose two stochastic architectural models (CMC and CMC-M)
with two layers of classifiers applicable to datasets with one and multiple
skewed classes. This distinction becomes important when the datasets have a
large number of classes. Therefore, we present a novel solution to imbalanced
multiclass learning with several skewed majority classes, which improves
minority classes identification. This fact is particularly important for text
classification tasks, such as event detection. Our models combined with
pre-processing sampling techniques improved the classification results on six
well-known datasets. Finally, we have also introduced a new metric, SG-Mean, to
overcome the multiplication-by-zero limitation of G-Mean.
| no_new_dataset | 0.952131 |
1401.6484 | Kiran Sree Pokkuluri Prof | Pokkuluri Kiran Sree, Inampudi Ramesh Babu | Identification of Protein Coding Regions in Genomic DNA Using
Unsupervised FMACA Based Pattern Classifier | arXiv admin note: text overlap with arXiv:1312.2642 | IJCSNS International Journal of Computer Science and Network
Security, VOL.8 No.1, January 2008,305-310 | null | null | cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genes carry the instructions for making proteins that are found in a cell as
a specific sequence of nucleotides that are found in DNA molecules. But, the
regions of these genes that code for proteins may occupy only a small region of
the sequence. Identifying the coding regions plays a vital role in
understanding these genes. In this paper we propose an unsupervised Fuzzy
Multiple Attractor Cellular Automata (FMACA) based pattern classifier to
identify the coding region of a DNA sequence. We propose a distinct K-Means
algorithm for designing the FMACA classifier, which is simple, efficient, and
produces a more accurate classifier than has previously been obtained for a
range of different sequence lengths. Experimental results confirm the
scalability of the proposed unsupervised FMACA-based classifier to handle large
volumes of data irrespective of the number of classes, tuples and attributes.
Good classification accuracy has been established.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2014 01:48:14 GMT"
}
] | 2014-01-28T00:00:00 | [
[
"Sree",
"Pokkuluri Kiran",
""
],
[
"Babu",
"Inampudi Ramesh",
""
]
] | TITLE: Identification of Protein Coding Regions in Genomic DNA Using
Unsupervised FMACA Based Pattern Classifier
ABSTRACT: Genes carry the instructions for making proteins that are found in a cell as
a specific sequence of nucleotides that are found in DNA molecules. But, the
regions of these genes that code for proteins may occupy only a small region of
the sequence. Identifying the coding regions plays a vital role in
understanding these genes. In this paper we propose an unsupervised Fuzzy
Multiple Attractor Cellular Automata (FMACA) based pattern classifier to
identify the coding region of a DNA sequence. We propose a distinct K-Means
algorithm for designing the FMACA classifier, which is simple, efficient, and
produces a more accurate classifier than has previously been obtained for a
range of different sequence lengths. Experimental results confirm the
scalability of the proposed unsupervised FMACA-based classifier to handle large
volumes of data irrespective of the number of classes, tuples and attributes.
Good classification accuracy has been established.
| no_new_dataset | 0.954732 |
1401.6571 | Shibamouli Lahiri | Shibamouli Lahiri, Sagnik Ray Choudhury, Cornelia Caragea | Keyword and Keyphrase Extraction Using Centrality Measures on
Collocation Networks | 11 pages | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Keyword and keyphrase extraction is an important problem in natural language
processing, with applications ranging from summarization to semantic search to
document clustering. Graph-based approaches to keyword and keyphrase extraction
avoid the problem of acquiring a large in-domain training corpus by applying
variants of PageRank algorithm on a network of words. Although graph-based
approaches are knowledge-lean and easily adoptable in online systems, it
remains largely open whether they can benefit from centrality measures other
than PageRank. In this paper, we experiment with an array of centrality
measures on word and noun phrase collocation networks, and analyze their
performance on four benchmark datasets. Not only are there centrality measures
that perform as well as or better than PageRank, but they are much simpler
(e.g., degree, strength, and neighborhood size). Furthermore, centrality-based
methods give results that are competitive with and, in some cases, better than
two strong unsupervised baselines.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2014 19:05:45 GMT"
}
] | 2014-01-28T00:00:00 | [
[
"Lahiri",
"Shibamouli",
""
],
[
"Choudhury",
"Sagnik Ray",
""
],
[
"Caragea",
"Cornelia",
""
]
] | TITLE: Keyword and Keyphrase Extraction Using Centrality Measures on
Collocation Networks
ABSTRACT: Keyword and keyphrase extraction is an important problem in natural language
processing, with applications ranging from summarization to semantic search to
document clustering. Graph-based approaches to keyword and keyphrase extraction
avoid the problem of acquiring a large in-domain training corpus by applying
variants of the PageRank algorithm on a network of words. Although graph-based
approaches are knowledge-lean and easily adoptable in online systems, it
remains largely open whether they can benefit from centrality measures other
than PageRank. In this paper, we experiment with an array of centrality
measures on word and noun phrase collocation networks, and analyze their
performance on four benchmark datasets. Not only are there centrality measures
that perform as well as or better than PageRank, but they are much simpler
(e.g., degree, strength, and neighborhood size). Furthermore, centrality-based
methods give results that are competitive with and, in some cases, better than
two strong unsupervised baselines.
| no_new_dataset | 0.948965 |
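As an aside on the record above: a minimal sketch of ranking keywords by degree centrality on a word collocation (co-occurrence) network, one of the simple measures the abstract mentions. The sliding-window construction, tokenisation, and use of networkx are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: degree centrality over a word co-occurrence network.
import networkx as nx

def keywords_by_centrality(tokens, window=2, top_k=5):
    graph = nx.Graph()
    # Link words that co-occur within a sliding window (a collocation network).
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if tokens[i] != tokens[j]:
                graph.add_edge(tokens[i], tokens[j])
    scores = nx.degree_centrality(graph)        # simpler than PageRank
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

tokens = ("graph based keyword extraction ranks words by centrality in a word "
          "graph built from word co-occurrence within a small window").split()
print(keywords_by_centrality(tokens))
```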
1401.6597 | Sadi Seker E | Sadi Evren Seker, Y. Unal, Z. Erdem, and H. Erdinc Kocer | Ensembled Correlation Between Liver Analysis Outputs | null | International Journal of Biology and Biomedical Engineering, ISSN:
1998-4510, Volume 8, pp. 1-5, 2014 | null | null | stat.ML cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data mining techniques on the biological analysis are spreading for most of
the areas, including health care and medical information. We have applied data
mining techniques such as KNN, SVM, MLP and decision trees over a unique
dataset collected from 16,380 analysis results over one year. Furthermore, we
have also used meta-classifiers to investigate the increased correlation rate
between the liver disorder and the liver analysis outputs. The results show
that there is a correlation among ALT, AST, Bilirubin Direct and Bilirubin
Total, down to a 15% error rate. Also, the correlation coefficient is up to
94%. This makes it possible to predict the analysis results from one another,
or to apply disease patterns over the linear correlation of the parameters.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2014 23:52:37 GMT"
}
] | 2014-01-28T00:00:00 | [
[
"Seker",
"Sadi Evren",
""
],
[
"Unal",
"Y.",
""
],
[
"Erdem",
"Z.",
""
],
[
"Kocer",
"H. Erdinc",
""
]
] | TITLE: Ensembled Correlation Between Liver Analysis Outputs
ABSTRACT: Data mining techniques on the biological analysis are spreading for most of
the areas, including health care and medical information. We have applied data
mining techniques such as KNN, SVM, MLP and decision trees over a unique
dataset collected from 16,380 analysis results over one year. Furthermore, we
have also used meta-classifiers to investigate the increased correlation rate
between the liver disorder and the liver analysis outputs. The results show
that there is a correlation among ALT, AST, Bilirubin Direct and Bilirubin
Total, down to a 15% error rate. Also, the correlation coefficient is up to
94%. This makes it possible to predict the analysis results from one another,
or to apply disease patterns over the linear correlation of the parameters.
| new_dataset | 0.947284 |
1401.6891 | Gabriela Csurka | Gabriela Csurka and Julien Ah-Pine and St\'ephane Clinchant | Unsupervised Visual and Textual Information Fusion in Multimedia
Retrieval - A Graph-based Point of View | An extended version of the paper: Visual and Textual Information
Fusion in Multimedia Retrieval using Semantic Filtering and Graph based
Methods, by J. Ah-Pine, G. Csurka and S. Clinchant, submitted to ACM
Transactions on Information Systems | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimedia collections are more than ever growing in size and diversity.
Effective multimedia retrieval systems are thus critical to access these
datasets from the end-user perspective and in a scalable way. We are interested
in repositories of image/text multimedia objects and we study multimodal
information fusion techniques in the context of content based multimedia
information retrieval. We focus on graph based methods which have proven to
provide state-of-the-art performance. We particularly examine two such
methods: cross-media similarities and random walk based scores. From a
theoretical viewpoint, we propose a unifying graph based framework which
encompasses the two aforementioned approaches. Our proposal allows us to
highlight the core features one should consider when using a graph based
technique for the combination of visual and textual information. We compare
cross-media and random walk based results using three different real-world
datasets. From a practical standpoint, our extended empirical analysis allows us
to provide insights and guidelines about the use of graph based methods for
multimodal information fusion in content based multimedia information
retrieval.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2014 15:29:14 GMT"
}
] | 2014-01-28T00:00:00 | [
[
"Csurka",
"Gabriela",
""
],
[
"Ah-Pine",
"Julien",
""
],
[
"Clinchant",
"Stéphane",
""
]
] | TITLE: Unsupervised Visual and Textual Information Fusion in Multimedia
Retrieval - A Graph-based Point of View
ABSTRACT: Multimedia collections are more than ever growing in size and diversity.
Effective multimedia retrieval systems are thus critical to access these
datasets from the end-user perspective and in a scalable way. We are interested
in repositories of image/text multimedia objects and we study multimodal
information fusion techniques in the context of content based multimedia
information retrieval. We focus on graph based methods which have proven to
provide state-of-the-art performance. We particularly examine two such
methods: cross-media similarities and random walk based scores. From a
theoretical viewpoint, we propose a unifying graph based framework which
encompasses the two aforementioned approaches. Our proposal allows us to
highlight the core features one should consider when using a graph based
technique for the combination of visual and textual information. We compare
cross-media and random walk based results using three different real-world
datasets. From a practical standpoint, our extended empirical analysis allows us
to provide insights and guidelines about the use of graph based methods for
multimodal information fusion in content based multimedia information
retrieval.
| no_new_dataset | 0.9455 |
1401.6911 | Adrian Brown | Adrian J. Brown, Thomas J. Cudahy, Malcolm R. Walter | Hydrothermal alteration at the Panorama Formation, North Pole Dome,
Pilbara Craton, Western Australia | 29 pages, 9 figures, 2 tables | Precambrian Research (2006) 151, 211-223 | 10.1016/j.precamres.2006.08.014 | null | astro-ph.EP physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An airborne hyperspectral remote sensing dataset was obtained of the North
Pole Dome region of the Pilbara Craton in October 2002. It has been analyzed
for indications of hydrothermal minerals. Here we report on the identification
and mapping of hydrothermal minerals in the 3.459 Ga Panorama Formation and
surrounding strata. The spatial distribution of a pattern of subvertical
pyrophyllite rich veins connected to a pyrophyllite rich palaeohorizontal layer
is interpreted to represent the base of an acid-sulfate epithermal system that
is unconformably overlain by the stromatolitic 3.42 Ga Strelley Pool Chert.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2014 20:51:20 GMT"
}
] | 2014-01-28T00:00:00 | [
[
"Brown",
"Adrian J.",
""
],
[
"Cudahy",
"Thomas J.",
""
],
[
"Walter",
"Malcolm R.",
""
]
] | TITLE: Hydrothermal alteration at the Panorama Formation, North Pole Dome,
Pilbara Craton, Western Australia
ABSTRACT: An airborne hyperspectral remote sensing dataset was obtained of the North
Pole Dome region of the Pilbara Craton in October 2002. It has been analyzed
for indications of hydrothermal minerals. Here we report on the identification
and mapping of hydrothermal minerals in the 3.459 Ga Panorama Formation and
surrounding strata. The spatial distribution of a pattern of subvertical
pyrophyllite rich veins connected to a pyrophyllite rich palaeohorizontal layer
is interpreted to represent the base of an acid-sulfate epithermal system that
is unconformably overlain by the stromatolitic 3.42 Ga Strelley Pool Chert.
| no_new_dataset | 0.924552 |
1401.6984 | Yajie Miao | Yajie Miao | Kaldi+PDNN: Building DNN-based ASR Systems with Kaldi and PDNN | unpublished manuscript | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The Kaldi toolkit is becoming popular for constructing automated speech
recognition (ASR) systems. Meanwhile, in recent years, deep neural networks
(DNNs) have shown state-of-the-art performance on various ASR tasks. This
document describes our open-source recipes to implement fully-fledged DNN
acoustic modeling using Kaldi and PDNN. PDNN is a lightweight deep learning
toolkit developed under the Theano environment. Using these recipes, we can
build up multiple systems including DNN hybrid systems, convolutional neural
network (CNN) systems and bottleneck feature systems. These recipes are
directly based on the Kaldi Switchboard 110-hour setup. However, adapting them
to new datasets is easy to achieve.
| [
{
"version": "v1",
"created": "Mon, 27 Jan 2014 19:55:34 GMT"
}
] | 2014-01-28T00:00:00 | [
[
"Miao",
"Yajie",
""
]
] | TITLE: Kaldi+PDNN: Building DNN-based ASR Systems with Kaldi and PDNN
ABSTRACT: The Kaldi toolkit is becoming popular for constructing automated speech
recognition (ASR) systems. Meanwhile, in recent years, deep neural networks
(DNNs) have shown state-of-the-art performance on various ASR tasks. This
document describes our open-source recipes to implement fully-fledged DNN
acoustic modeling using Kaldi and PDNN. PDNN is a lightweight deep learning
toolkit developed under the Theano environment. Using these recipes, we can
build up multiple systems including DNN hybrid systems, convolutional neural
network (CNN) systems and bottleneck feature systems. These recipes are
directly based on the Kaldi Switchboard 110-hour setup. However, adapting them
to new datasets is easy to achieve.
| no_new_dataset | 0.945851 |
1401.6404 | Ankit Sharma | Ankit Sharma, Jaideep Srivastava and Abhishek Chandra | Predicting Multi-actor collaborations using Hypergraphs | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social networks are now ubiquitous and most of them contain interactions
involving multiple actors (groups) like author collaborations, teams or emails
in an organization, etc. Hypergraphs are natural structures to effectively
capture multi-actor interactions which conventional dyadic graphs fail to
capture. In this work the problem of predicting collaborations is addressed
while modeling the collaboration network as a hypergraph network. The problem
of predicting future multi-actor collaborations is mapped to the hyperedge
prediction problem. Given that higher order edge prediction is an inherently
hard problem, in this work we restrict ourselves to the task of predicting
edges (collaborations) that have already been observed in the past. We propose
a novel use of hyperincidence temporal tensors to capture time-varying
hypergraphs and provide a tensor decomposition based prediction algorithm. We
quantitatively compare the performance of the hypergraph based approach with
the conventional dyadic graph based approach. Our hypothesis that hypergraphs
preserve the information that simple graphs destroy is corroborated by
experiments using the author collaboration network from the DBLP dataset. Our
results demonstrate the strength of the hypergraph based approach in predicting
higher order collaborations (size>4), which is very difficult using the dyadic
graph based approach. Moreover, while predicting collaborations of size>2,
hypergraphs in most cases provide better results, with an average increase of
approx. 45% in F-Score for different sizes = {3,4,5,6,7}.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2014 17:10:16 GMT"
}
] | 2014-01-27T00:00:00 | [
[
"Sharma",
"Ankit",
""
],
[
"Srivastava",
"Jaideep",
""
],
[
"Chandra",
"Abhishek",
""
]
] | TITLE: Predicting Multi-actor collaborations using Hypergraphs
ABSTRACT: Social networks are now ubiquitous and most of them contain interactions
involving multiple actors (groups) like author collaborations, teams or emails
in an organization, etc. Hypergraphs are natural structures to effectively
capture multi-actor interactions which conventional dyadic graphs fail to
capture. In this work the problem of predicting collaborations is addressed
while modeling the collaboration network as a hypergraph network. The problem
of predicting future multi-actor collaborations is mapped to the hyperedge
prediction problem. Given that higher order edge prediction is an inherently
hard problem, in this work we restrict ourselves to the task of predicting
edges (collaborations) that have already been observed in the past. We propose
a novel use of hyperincidence temporal tensors to capture time-varying
hypergraphs and provide a tensor decomposition based prediction algorithm. We
quantitatively compare the performance of the hypergraph based approach with
the conventional dyadic graph based approach. Our hypothesis that hypergraphs
preserve the information that simple graphs destroy is corroborated by
experiments using the author collaboration network from the DBLP dataset. Our
results demonstrate the strength of the hypergraph based approach in predicting
higher order collaborations (size>4), which is very difficult using the dyadic
graph based approach. Moreover, while predicting collaborations of size>2,
hypergraphs in most cases provide better results, with an average increase of
approx. 45% in F-Score for different sizes = {3,4,5,6,7}.
| no_new_dataset | 0.949716 |
1401.6124 | Fabricio de Franca Olivetti | Fabricio Olivetti de Franca | Iterative Universal Hash Function Generator for Minhashing | 6 pages, 4 tables, 1 algorithm | null | null | null | cs.LG cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Minhashing is a technique used to estimate the Jaccard Index between two sets
by exploiting the probability of collision in a random permutation. In order to
speed up the computation, a random permutation can be approximated by using a
universal hash function such as the $h_{a,b}$ function proposed by Carter and
Wegman. A better estimate of the Jaccard Index can be achieved by using many of
these hash functions, created at random. In this paper a new iterative
procedure to generate a set of $h_{a,b}$ functions is devised that eliminates
the need for a list of random values and avoids the multiplication operation
during the calculation. The properties of the generated hash functions remain
those of a universal hash function family. This is possible due to the random
nature of feature occurrence in sparse datasets. Results show that the
uniformity of hashing the features is maintained while obtaining a speedup of
up to $1.38$ compared to the traditional approach.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 19:03:38 GMT"
}
] | 2014-01-25T00:00:00 | [
[
"de Franca",
"Fabricio Olivetti",
""
]
] | TITLE: Iterative Universal Hash Function Generator for Minhashing
ABSTRACT: Minhashing is a technique used to estimate the Jaccard Index between two sets
by exploiting the probability of collision in a random permutation. In order to
speed up the computation, a random permutation can be approximated by using a
universal hash function such as the $h_{a,b}$ function proposed by Carter and
Wegman. A better estimate of the Jaccard Index can be achieved by using many of
these hash functions, created at random. In this paper a new iterative
procedure to generate a set of $h_{a,b}$ functions is devised that eliminates
the need for a list of random values and avoids the multiplication operation
during the calculation. The properties of the generated hash functions remain
those of a universal hash function family. This is possible due to the random
nature of feature occurrence in sparse datasets. Results show that the
uniformity of hashing the features is maintained while obtaining a speedup of
up to $1.38$ compared to the traditional approach.
| no_new_dataset | 0.945701 |
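For context on the record above, a minimal sketch of the conventional MinHash baseline with Carter-Wegman style $h_{a,b}(x) = (ax + b) \bmod p$ hash functions, i.e. the approach the paper improves on (a stored list of random $(a, b)$ pairs and one multiplication per hash). The prime, the number of hash functions, and the toy sets are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch of the conventional MinHash baseline with h_{a,b} hashing;
# the paper above replaces the random (a, b) list with an iterative generator.
import random

P = 2_147_483_647  # a Mersenne prime larger than any element id used below

def make_hashes(k, seed=0):
    rng = random.Random(seed)
    return [(rng.randrange(1, P), rng.randrange(0, P)) for _ in range(k)]

def minhash_signature(elements, hashes):
    # For each h_{a,b}, keep the minimum hash value over the set's elements.
    return [min((a * x + b) % P for x in elements) for a, b in hashes]

def estimated_jaccard(sig1, sig2):
    # Fraction of hash functions on which the two signatures agree.
    return sum(s1 == s2 for s1, s2 in zip(sig1, sig2)) / len(sig1)

hashes = make_hashes(256)
s1, s2 = {1, 2, 3, 4, 5, 6}, {3, 4, 5, 6, 7, 8}
print(estimated_jaccard(minhash_signature(s1, hashes),
                        minhash_signature(s2, hashes)))  # true Jaccard is 0.5
```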
1401.5814 | Johannes Schneider | Johannes Schneider and Michail Vlachos | On Randomly Projected Hierarchical Clustering with Guarantees | This version contains the conference paper "On Randomly Projected
Hierarchical Clustering with Guarantees'', SIAM International Conference on
Data Mining (SDM), 2014 and, additionally, proofs omitted in the conference
version | null | null | null | cs.IR cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical clustering (HC) algorithms are generally limited to small data
instances due to their runtime costs. Here we mitigate this shortcoming and
explore fast HC algorithms based on random projections for single (SLC) and
average (ALC) linkage clustering as well as for the minimum spanning tree
problem (MST). We present a thorough adaptive analysis of our algorithms that
improve prior work from $O(N^2)$ by up to a factor of $N/(\log N)^2$ for a
dataset of $N$ points in Euclidean space. The algorithms maintain, with
arbitrary high probability, the outcome of hierarchical clustering as well as
the worst-case running-time guarantees. We also present parameter-free
instances of our algorithms.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2014 22:01:05 GMT"
}
] | 2014-01-24T00:00:00 | [
[
"Schneider",
"Johannes",
""
],
[
"Vlachos",
"Michail",
""
]
] | TITLE: On Randomly Projected Hierarchical Clustering with Guarantees
ABSTRACT: Hierarchical clustering (HC) algorithms are generally limited to small data
instances due to their runtime costs. Here we mitigate this shortcoming and
explore fast HC algorithms based on random projections for single (SLC) and
average (ALC) linkage clustering as well as for the minimum spanning tree
problem (MST). We present a thorough adaptive analysis of our algorithms that
improve prior work from $O(N^2)$ by up to a factor of $N/(\log N)^2$ for a
dataset of $N$ points in Euclidean space. The algorithms maintain, with
arbitrary high probability, the outcome of hierarchical clustering as well as
the worst-case running-time guarantees. We also present parameter-free
instances of our algorithms.
| no_new_dataset | 0.949201 |
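A toy sketch of the intuition behind the record above: 1-D random projections can propose candidate close pairs for single-linkage clustering or MST construction instead of examining all $O(N^2)$ pairs. This shows only the core idea under illustrative assumptions; the paper's actual algorithms, parameters, and probabilistic guarantees are not reproduced here.

```python
# Illustrative sketch only: random projections propose candidate close pairs.
import numpy as np

def candidate_edges(points, n_projections=8, seed=0):
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    candidates = set()
    for _ in range(n_projections):
        direction = rng.normal(size=d)
        order = np.argsort(points @ direction)  # sort points along a random line
        # Points adjacent after projection are likely close in the original space.
        candidates.update(zip(order[:-1].tolist(), order[1:].tolist()))
    return candidates

data_rng = np.random.default_rng(1)
points = np.vstack([data_rng.normal(0.0, 1.0, (20, 5)),
                    data_rng.normal(6.0, 1.0, (20, 5))])
edges = candidate_edges(points)
print(len(edges), "candidate pairs instead of", 40 * 39 // 2)
```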
1301.1218 | Matteo Riondato | Matteo Riondato and Fabio Vandin | Finding the True Frequent Itemsets | 13 pages, Extended version of work appeared in SIAM International
Conference on Data Mining, 2014 | null | null | null | cs.LG cs.DB cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Frequent Itemsets (FIs) mining is a fundamental primitive in data mining. It
requires identifying all itemsets appearing in at least a fraction $\theta$ of
a transactional dataset $\mathcal{D}$. Often though, the ultimate goal of
mining $\mathcal{D}$ is not an analysis of the dataset \emph{per se}, but the
understanding of the underlying process that generated it. Specifically, in
many applications $\mathcal{D}$ is a collection of samples obtained from an
unknown probability distribution $\pi$ on transactions, and by extracting the
FIs in $\mathcal{D}$ one attempts to infer itemsets that are frequently (i.e.,
with probability at least $\theta$) generated by $\pi$, which we call the True
Frequent Itemsets (TFIs). Due to the inherently stochastic nature of the
generative process, the set of FIs is only a rough approximation of the set of
TFIs, as it often contains a huge number of \emph{false positives}, i.e.,
spurious itemsets that are not among the TFIs. In this work we design and
analyze an algorithm to identify a threshold $\hat{\theta}$ such that the
collection of itemsets with frequency at least $\hat{\theta}$ in $\mathcal{D}$
contains only TFIs with probability at least $1-\delta$, for some
user-specified $\delta$. Our method uses results from statistical learning
theory involving the (empirical) VC-dimension of the problem at hand. This
allows us to identify almost all the TFIs without including any false positive.
We also experimentally compare our method with the direct mining of
$\mathcal{D}$ at frequency $\theta$ and with techniques based on widely-used
standard bounds (i.e., the Chernoff bounds) of the binomial distribution, and
show that our algorithm outperforms these methods and achieves even better
results than what is guaranteed by the theoretical analysis.
| [
{
"version": "v1",
"created": "Mon, 7 Jan 2013 15:04:43 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Apr 2013 12:54:12 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Jan 2014 16:38:44 GMT"
}
] | 2014-01-23T00:00:00 | [
[
"Riondato",
"Matteo",
""
],
[
"Vandin",
"Fabio",
""
]
] | TITLE: Finding the True Frequent Itemsets
ABSTRACT: Frequent Itemsets (FIs) mining is a fundamental primitive in data mining. It
requires identifying all itemsets appearing in at least a fraction $\theta$ of
a transactional dataset $\mathcal{D}$. Often though, the ultimate goal of
mining $\mathcal{D}$ is not an analysis of the dataset \emph{per se}, but the
understanding of the underlying process that generated it. Specifically, in
many applications $\mathcal{D}$ is a collection of samples obtained from an
unknown probability distribution $\pi$ on transactions, and by extracting the
FIs in $\mathcal{D}$ one attempts to infer itemsets that are frequently (i.e.,
with probability at least $\theta$) generated by $\pi$, which we call the True
Frequent Itemsets (TFIs). Due to the inherently stochastic nature of the
generative process, the set of FIs is only a rough approximation of the set of
TFIs, as it often contains a huge number of \emph{false positives}, i.e.,
spurious itemsets that are not among the TFIs. In this work we design and
analyze an algorithm to identify a threshold $\hat{\theta}$ such that the
collection of itemsets with frequency at least $\hat{\theta}$ in $\mathcal{D}$
contains only TFIs with probability at least $1-\delta$, for some
user-specified $\delta$. Our method uses results from statistical learning
theory involving the (empirical) VC-dimension of the problem at hand. This
allows us to identify almost all the TFIs without including any false positive.
We also experimentally compare our method with the direct mining of
$\mathcal{D}$ at frequency $\theta$ and with techniques based on widely-used
standard bounds (i.e., the Chernoff bounds) of the binomial distribution, and
show that our algorithm outperforms these methods and achieves even better
results than what is guaranteed by the theoretical analysis.
| no_new_dataset | 0.933188 |
1401.5632 | Manoj Krishnaswamy | Manoj Krishnaswamy, G. Hemantha Kumar | Enhancing Template Security of Face Biometrics by Using Edge Detection
and Hashing | 11 pages, 13 figures, Journal. arXiv admin note: text overlap with
arXiv:1307.7474 by other authors | International Journal of Information Processing, 7(4), 11-20,
2013, ISSN : 0973-8215 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we address the issues of using edge detection techniques on
facial images to produce cancellable biometric templates and a novel method for
template verification against tampering. With increasing use of biometrics,
there is a real threat for the conventional systems using face databases, which
store images of users in raw and unaltered form. If compromised not only it is
irrevocable, but can be misused for cross-matching across different databases.
So it is desirable to generate and store revocable templates for the same user
in different applications to prevent cross-matching and to enhance security,
while maintaining privacy and ethics. By comparing different edge detection
methods it has been observed that the edge detection based on the Roberts Cross
operator performs consistently well across multiple face datasets, in which the
face images have been taken under a variety of conditions. We have proposed a
novel scheme using hashing, for extra verification, in order to harden the
security of the stored biometric templates.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2014 11:50:08 GMT"
}
] | 2014-01-23T00:00:00 | [
[
"Krishnaswamy",
"Manoj",
""
],
[
"Kumar",
"G. Hemantha",
""
]
] | TITLE: Enhancing Template Security of Face Biometrics by Using Edge Detection
and Hashing
ABSTRACT: In this paper we address the issues of using edge detection techniques on
facial images to produce cancellable biometric templates and a novel method for
template verification against tampering. With increasing use of biometrics,
there is a real threat to conventional systems using face databases, which
store images of users in raw and unaltered form. If compromised, not only is
this irrevocable, but the data can be misused for cross-matching across
different databases.
So it is desirable to generate and store revocable templates for the same user
in different applications to prevent cross-matching and to enhance security,
while maintaining privacy and ethics. By comparing different edge detection
methods it has been observed that the edge detection based on the Roberts Cross
operator performs consistently well across multiple face datasets, in which the
face images have been taken under a variety of conditions. We have proposed a
novel scheme using hashing, for extra verification, in order to harden the
security of the stored biometric templates.
| no_new_dataset | 0.942612 |
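To illustrate the operator named in the record above: a sketch of the Roberts Cross edge magnitude plus a digest of the resulting template that could later be checked for tampering. The 2x2 kernels are the standard Roberts Cross definition; the quantisation step and the use of SHA-256 are assumptions for illustration, not the paper's scheme.

```python
# Illustrative sketch: Roberts Cross edge magnitude plus a template digest.
import hashlib
import numpy as np
from scipy.signal import convolve2d

def roberts_cross(image):
    gx = convolve2d(image, np.array([[1, 0], [0, -1]]), mode="valid")
    gy = convolve2d(image, np.array([[0, 1], [-1, 0]]), mode="valid")
    return np.hypot(gx, gy)  # gradient magnitude, i.e. the edge template

def template_digest(template):
    # Storing a digest next to the template lets later tampering be detected.
    quantised = np.round(template, 3).astype(np.float32)
    return hashlib.sha256(quantised.tobytes()).hexdigest()

face = np.random.default_rng(0).random((64, 64))  # stand-in for a face image
edges = roberts_cross(face)
print(edges.shape, template_digest(edges)[:16])
```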
1401.5644 | Issam Sahmoudi issam sahmoudi | Issam Sahmoudi and Hanane Froud and Abdelmonaime Lachkar | A new keyphrases extraction method based on suffix tree data structure
for arabic documents clustering | 17 pages, 3 figures | International Journal of Database Management Systems ( IJDMS )
Vol.5, No.6, December 2013 | 10.5121/ijdms.2013.5602 | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Document Clustering is a branch of a larger area of scientific study known as
data mining, which is an unsupervised classification used to find structure in
a collection of unlabeled data. The useful information in the documents can be
accompanied by a large amount of noise words when using Full Text
Representation, which negatively affects the result of the clustering process.
There is therefore a great need to eliminate the noise words and keep just the
useful information in order to enhance the quality of the clustering results.
This problem occurs, to a different degree, for any language such as English,
European languages, Hindi, Chinese, and Arabic. To overcome this problem, in
this paper we propose a new and efficient Keyphrases extraction method based
on the Suffix Tree data structure (KpST); the extracted Keyphrases are then
used in the clustering process instead of Full Text Representation. The
proposed method for Keyphrases extraction is language independent and
therefore may be applied to any language. In this investigation, we are
interested in dealing with the Arabic language, which is one of the most
complex languages. To evaluate our method, we conduct an experimental study on
Arabic documents using the most popular clustering approach of hierarchical
algorithms: the Agglomerative Hierarchical algorithm with seven linkage
techniques and a variety of distance functions and similarity measures to
perform the Arabic Document Clustering task. The obtained results show that
our method for extracting Keyphrases increases the quality of the clustering
results. We also propose to study the effect of using stemming for the testing
dataset, clustering it with the same document clustering techniques and
similarity/distance measures.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2014 12:36:38 GMT"
}
] | 2014-01-23T00:00:00 | [
[
"Sahmoudi",
"Issam",
""
],
[
"Froud",
"Hanane",
""
],
[
"Lachkar",
"Abdelmonaime",
""
]
] | TITLE: A new keyphrases extraction method based on suffix tree data structure
for arabic documents clustering
ABSTRACT: Document Clustering is a branch of a larger area of scientific study known as
data mining, which is an unsupervised classification used to find structure in
a collection of unlabeled data. The useful information in the documents can be
accompanied by a large amount of noise words when using Full Text
Representation, which negatively affects the result of the clustering process.
There is therefore a great need to eliminate the noise words and keep just the
useful information in order to enhance the quality of the clustering results.
This problem occurs, to a different degree, for any language such as English,
European languages, Hindi, Chinese, and Arabic. To overcome this problem, in
this paper we propose a new and efficient Keyphrases extraction method based
on the Suffix Tree data structure (KpST); the extracted Keyphrases are then
used in the clustering process instead of Full Text Representation. The
proposed method for Keyphrases extraction is language independent and
therefore may be applied to any language. In this investigation, we are
interested in dealing with the Arabic language, which is one of the most
complex languages. To evaluate our method, we conduct an experimental study on
Arabic documents using the most popular clustering approach of hierarchical
algorithms: the Agglomerative Hierarchical algorithm with seven linkage
techniques and a variety of distance functions and similarity measures to
perform the Arabic Document Clustering task. The obtained results show that
our method for extracting Keyphrases increases the quality of the clustering
results. We also propose to study the effect of using stemming for the testing
dataset, clustering it with the same document clustering techniques and
similarity/distance measures.
| no_new_dataset | 0.954137 |
1401.5389 | Sajib Dasgupta | Sajib Dasgupta, Vincent Ng | Which Clustering Do You Want? Inducing Your Ideal Clustering with
Minimal Feedback | null | Journal Of Artificial Intelligence Research, Volume 39, pages
581-632, 2010 | 10.1613/jair.3003 | null | cs.IR cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While traditional research on text clustering has largely focused on grouping
documents by topic, it is conceivable that a user may want to cluster documents
along other dimensions, such as the author's mood, gender, age, or sentiment.
Without knowing the user's intention, a clustering algorithm will only group
documents along the most prominent dimension, which may not be the one the user
desires. To address the problem of clustering documents along the user-desired
dimension, previous work has focused on learning a similarity metric from data
manually annotated with the user's intention or having a human construct a
feature space in an interactive manner during the clustering process. With the
goal of reducing reliance on human knowledge for fine-tuning the similarity
function or selecting the relevant features required by these approaches, we
propose a novel active clustering algorithm, which allows a user to easily
select the dimension along which she wants to cluster the documents by
inspecting only a small number of words. We demonstrate the viability of our
algorithm on a variety of commonly-used sentiment datasets.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:56:03 GMT"
}
] | 2014-01-22T00:00:00 | [
[
"Dasgupta",
"Sajib",
""
],
[
"Ng",
"Vincent",
""
]
] | TITLE: Which Clustering Do You Want? Inducing Your Ideal Clustering with
Minimal Feedback
ABSTRACT: While traditional research on text clustering has largely focused on grouping
documents by topic, it is conceivable that a user may want to cluster documents
along other dimensions, such as the author's mood, gender, age, or sentiment.
Without knowing the user's intention, a clustering algorithm will only group
documents along the most prominent dimension, which may not be the one the user
desires. To address the problem of clustering documents along the user-desired
dimension, previous work has focused on learning a similarity metric from data
manually annotated with the user's intention or having a human construct a
feature space in an interactive manner during the clustering process. With the
goal of reducing reliance on human knowledge for fine-tuning the similarity
function or selecting the relevant features required by these approaches, we
propose a novel active clustering algorithm, which allows a user to easily
select the dimension along which she wants to cluster the documents by
inspecting only a small number of words. We demonstrate the viability of our
algorithm on a variety of commonly-used sentiment datasets.
| no_new_dataset | 0.948394 |
1401.5407 | Thanuka Wickramarathne | J Xu, TL Wickramarathne, EK Grey, K Steinhaeuser, R Keller, J Drake, N
Chawla and DM Lodge | Patterns of Ship-borne Species Spread: A Clustering Approach for Risk
Assessment and Management of Non-indigenous Species Spread | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The spread of non-indigenous species (NIS) through the global shipping
network (GSN) has enormous ecological and economic cost throughout the world.
Previous attempts at quantifying NIS invasions have mostly taken "bottom-up"
approaches that eventually require the use of multiple simplifying assumptions
due to insufficiency and/or uncertainty of available data. By modeling implicit
species exchanges via a graph abstraction that we refer to as the Species Flow
Network (SFN), a different approach that exploits the power of network science
methods in extracting knowledge from largely incomplete data is presented.
Here, coarse-grained species flow dynamics are studied via a graph clustering
approach that decomposes the SFN to clusters of ports and inter-cluster
connections. With this decomposition of ports in place, NIS flow among clusters
can be very efficiently reduced by enforcing NIS management on a few chosen
inter-cluster connections. Furthermore, efficient NIS management strategies for
species exchanges within a cluster (often difficult due to higher rates of
travel and pathways) are then derived in conjunction with ecological and
environmental
aspects that govern the species establishment. The benefits of the presented
approach include robustness to data uncertainties, implicit incorporation of
"stepping-stone" spread of invasive species, and decoupling of species spread
and establishment risk estimation. Our analysis of a multi-year (1997--2006)
GSN dataset using the presented approach shows the existence of a few large
clusters of ports with higher intra-cluster species flow that are fairly stable
over time. Furthermore, detailed investigations were carried out on vessel
types, ports, and inter-cluster connections. Finally, our observations are
discussed in the context of known NIS invasions and future research directions
are also presented.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2014 18:13:57 GMT"
}
] | 2014-01-22T00:00:00 | [
[
"Xu",
"J",
""
],
[
"Wickramarathne",
"TL",
""
],
[
"Grey",
"EK",
""
],
[
"Steinhaeuser",
"K",
""
],
[
"Keller",
"R",
""
],
[
"Drake",
"J",
""
],
[
"Chawla",
"N",
""
],
[
"Lodge",
"DM",
""
]
] | TITLE: Patterns of Ship-borne Species Spread: A Clustering Approach for Risk
Assessment and Management of Non-indigenous Species Spread
ABSTRACT: The spread of non-indigenous species (NIS) through the global shipping
network (GSN) has enormous ecological and economic cost throughout the world.
Previous attempts at quantifying NIS invasions have mostly taken "bottom-up"
approaches that eventually require the use of multiple simplifying assumptions
due to insufficiency and/or uncertainty of available data. By modeling implicit
species exchanges via a graph abstraction that we refer to as the Species Flow
Network (SFN), a different approach that exploits the power of network science
methods in extracting knowledge from largely incomplete data is presented.
Here, coarse-grained species flow dynamics are studied via a graph clustering
approach that decomposes the SFN to clusters of ports and inter-cluster
connections. With this decomposition of ports in place, NIS flow among clusters
can be very efficiently reduced by enforcing NIS management on a few chosen
inter-cluster connections. Furthermore, efficient NIS management strategies for
species exchanges within a cluster (often difficult due to higher rates of
travel and pathways) are then derived in conjunction with ecological and
environmental
aspects that govern the species establishment. The benefits of the presented
approach include robustness to data uncertainties, implicit incorporation of
"stepping-stone" spread of invasive species, and decoupling of species spread
and establishment risk estimation. Our analysis of a multi-year (1997--2006)
GSN dataset using the presented approach shows the existence of a few large
clusters of ports with higher intra-cluster species flow that are fairly stable
over time. Furthermore, detailed investigations were carried out on vessel
types, ports, and inter-cluster connections. Finally, our observations are
discussed in the context of known NIS invasions and future research directions
are also presented.
| no_new_dataset | 0.946892 |
1401.4447 | Abdul Kadir | Abdul Kadir, Lukito Edi Nugroho, Adhi Susanto, Paulus Insap Santosa | Leaf Classification Using Shape, Color, and Texture Features | 6 pages, International Journal of Computer Trends and Technology-
July to Aug Issue 2011 | null | null | null | cs.CV cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several methods to identify plants have been proposed by several researchers.
Commonly, the methods did not capture color information, because color was not
recognized as an important aspect to the identification. In this research,
shape and vein, color, and texture features were incorporated to classify a
leaf. In this case, a neural network called Probabilistic Neural network (PNN)
was used as a classifier. The experimental result shows that the method for
classification gives an average accuracy of 93.75% when tested on the Flavia
dataset, which contains 32 kinds of plant leaves. This means that the method
gives better performance compared to the original work.
| [
{
"version": "v1",
"created": "Wed, 20 Nov 2013 07:55:40 GMT"
}
] | 2014-01-20T00:00:00 | [
[
"Kadir",
"Abdul",
""
],
[
"Nugroho",
"Lukito Edi",
""
],
[
"Susanto",
"Adhi",
""
],
[
"Santosa",
"Paulus Insap",
""
]
] | TITLE: Leaf Classification Using Shape, Color, and Texture Features
ABSTRACT: Several methods to identify plants have been proposed by several researchers.
Commonly, the methods did not capture color information, because color was not
recognized as an important aspect to the identification. In this research,
shape and vein, color, and texture features were incorporated to classify a
leaf. In this case, a neural network called Probabilistic Neural network (PNN)
was used as a classifier. The experimental result shows that the method for
classification gives an average accuracy of 93.75% when tested on the Flavia
dataset, which contains 32 kinds of plant leaves. This means that the method
gives better performance compared to the original work.
| no_new_dataset | 0.954605 |
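As a pointer for the record above, a minimal sketch of a Probabilistic Neural Network viewed as a Parzen-window classifier over handcrafted leaf features. The Gaussian width, the three toy features, and the tiny synthetic data are assumptions for illustration only, not the paper's setup.

```python
# Illustrative sketch of a PNN (Parzen-window) classifier over leaf features.
import numpy as np

def pnn_predict(x, train_X, train_y, sigma=0.5):
    scores = {}
    for label in np.unique(train_y):
        samples = train_X[train_y == label]
        # Mean Gaussian kernel response of this class's pattern units.
        sq_dists = np.sum((samples - x) ** 2, axis=1)
        scores[label] = np.mean(np.exp(-sq_dists / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy feature vectors, e.g. [aspect ratio, mean hue, texture energy].
train_X = np.array([[1.2, 0.30, 0.80], [1.1, 0.28, 0.90],
                    [2.5, 0.55, 0.20], [2.4, 0.60, 0.30]])
train_y = np.array([0, 0, 1, 1])
print(pnn_predict(np.array([2.3, 0.50, 0.25]), train_X, train_y))  # -> 1
```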
1401.3830 | Henrik Reif Andersen | Henrik Reif Andersen, Tarik Hadzic, David Pisinger | Interactive Cost Configuration Over Decision Diagrams | null | Journal Of Artificial Intelligence Research, Volume 37, pages
99-139, 2010 | 10.1613/jair.2905 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many AI domains such as product configuration, a user should interactively
specify a solution that must satisfy a set of constraints. In such scenarios,
offline compilation of feasible solutions into a tractable representation is an
important approach to delivering efficient backtrack-free user interaction
online. In particular, binary decision diagrams (BDDs) have been successfully
used as a compilation target for product and service configuration. In this
paper we discuss how to extend BDD-based configuration to scenarios involving
cost functions which express user preferences.
We first show that an efficient, robust and easy to implement extension is
possible if the cost function is additive, and feasible solutions are
represented using multi-valued decision diagrams (MDDs). We also discuss the
effect on MDD size if the cost function is non-additive or if it is encoded
explicitly into MDD. We then discuss interactive configuration in the presence
of multiple cost functions. We prove that even in its simplest form,
multiple-cost configuration is NP-hard in the input MDD. However, for solving
two-cost configuration we develop a pseudo-polynomial scheme and a fully
polynomial approximation scheme. The applicability of our approach is
demonstrated through experiments over real-world configuration models and
product-catalogue datasets. Response times are generally within a fraction of a
second even for very large instances.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:48:15 GMT"
}
] | 2014-01-17T00:00:00 | [
[
"Andersen",
"Henrik Reif",
""
],
[
"Hadzic",
"Tarik",
""
],
[
"Pisinger",
"David",
""
]
] | TITLE: Interactive Cost Configuration Over Decision Diagrams
ABSTRACT: In many AI domains such as product configuration, a user should interactively
specify a solution that must satisfy a set of constraints. In such scenarios,
offline compilation of feasible solutions into a tractable representation is an
important approach to delivering efficient backtrack-free user interaction
online. In particular, binary decision diagrams (BDDs) have been successfully
used as a compilation target for product and service configuration. In this
paper we discuss how to extend BDD-based configuration to scenarios involving
cost functions which express user preferences.
We first show that an efficient, robust and easy to implement extension is
possible if the cost function is additive, and feasible solutions are
represented using multi-valued decision diagrams (MDDs). We also discuss the
effect on MDD size if the cost function is non-additive or if it is encoded
explicitly into MDD. We then discuss interactive configuration in the presence
of multiple cost functions. We prove that even in its simplest form,
multiple-cost configuration is NP-hard in the input MDD. However, for solving
two-cost configuration we develop a pseudo-polynomial scheme and a fully
polynomial approximation scheme. The applicability of our approach is
demonstrated through experiments over real-world configuration models and
product-catalogue datasets. Response times are generally within a fraction of a
second even for very large instances.
| no_new_dataset | 0.943764 |
1401.3836 | Liyue Zhao | Liyue Zhao, Yu Zhang and Gita Sukthankar | An Active Learning Approach for Jointly Estimating Worker Performance
and Annotation Reliability with Crowdsourced Data | 10 pages | null | null | null | cs.LG cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowdsourcing platforms offer a practical solution to the problem of
affordably annotating large datasets for training supervised classifiers.
Unfortunately, poor worker performance frequently threatens to compromise
annotation reliability, and requesting multiple labels for every instance can
lead to large cost increases without guaranteeing good results. Minimizing the
required training samples using an active learning selection procedure reduces
the labeling requirement but can jeopardize classifier training by focusing on
erroneous annotations. This paper presents an active learning approach in which
worker performance, task difficulty, and annotation reliability are jointly
estimated and used to compute the risk function guiding the sample selection
procedure. We demonstrate that the proposed approach, which employs active
learning with Bayesian networks, significantly improves training accuracy and
correctly ranks the expertise of unknown labelers in the presence of annotation
noise.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:51:19 GMT"
}
] | 2014-01-17T00:00:00 | [
[
"Zhao",
"Liyue",
""
],
[
"Zhang",
"Yu",
""
],
[
"Sukthankar",
"Gita",
""
]
] | TITLE: An Active Learning Approach for Jointly Estimating Worker Performance
and Annotation Reliability with Crowdsourced Data
ABSTRACT: Crowdsourcing platforms offer a practical solution to the problem of
affordably annotating large datasets for training supervised classifiers.
Unfortunately, poor worker performance frequently threatens to compromise
annotation reliability, and requesting multiple labels for every instance can
lead to large cost increases without guaranteeing good results. Minimizing the
required training samples using an active learning selection procedure reduces
the labeling requirement but can jeopardize classifier training by focusing on
erroneous annotations. This paper presents an active learning approach in which
worker performance, task difficulty, and annotation reliability are jointly
estimated and used to compute the risk function guiding the sample selection
procedure. We demonstrate that the proposed approach, which employs active
learning with Bayesian networks, significantly improves training accuracy and
correctly ranks the expertise of unknown labelers in the presence of annotation
noise.
| no_new_dataset | 0.951863 |
1401.3851 | Jing Xu | Jing Xu, Christian R. Shelton | Intrusion Detection using Continuous Time Bayesian Networks | null | Journal Of Artificial Intelligence Research, Volume 39, pages
745-774, 2010 | 10.1613/jair.3050 | null | cs.AI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intrusion detection systems (IDSs) fall into two high-level categories:
network-based systems (NIDS) that monitor network behaviors, and host-based
systems (HIDS) that monitor system calls. In this work, we present a general
technique for both systems. We use anomaly detection, which identifies patterns
not conforming to a historic norm. In both types of systems, the rates of
change vary dramatically over time (due to burstiness) and over components (due
to service difference). To efficiently model such systems, we use continuous
time Bayesian networks (CTBNs) and avoid specifying a fixed update interval
common to discrete-time models. We build generative models from the normal
training data, and abnormal behaviors are flagged based on their likelihood
under this norm. For NIDS, we construct a hierarchical CTBN model for the
network packet traces and use Rao-Blackwellized particle filtering to learn the
parameters. We illustrate the power of our method through experiments on
detecting real worms and identifying hosts on two publicly available network
traces, the MAWI dataset and the LBNL dataset. For HIDS, we develop a novel
learning method to deal with the finite resolution of system log file time
stamps, without losing the benefits of our continuous time model. We
demonstrate the method by detecting intrusions in the DARPA 1998 BSM dataset.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:59:06 GMT"
}
] | 2014-01-17T00:00:00 | [
[
"Xu",
"Jing",
""
],
[
"Shelton",
"Christian R.",
""
]
] | TITLE: Intrusion Detection using Continuous Time Bayesian Networks
ABSTRACT: Intrusion detection systems (IDSs) fall into two high-level categories:
network-based systems (NIDS) that monitor network behaviors, and host-based
systems (HIDS) that monitor system calls. In this work, we present a general
technique for both systems. We use anomaly detection, which identifies patterns
not conforming to a historic norm. In both types of systems, the rates of
change vary dramatically over time (due to burstiness) and over components (due
to service difference). To efficiently model such systems, we use continuous
time Bayesian networks (CTBNs) and avoid specifying a fixed update interval
common to discrete-time models. We build generative models from the normal
training data, and abnormal behaviors are flagged based on their likelihood
under this norm. For NIDS, we construct a hierarchical CTBN model for the
network packet traces and use Rao-Blackwellized particle filtering to learn the
parameters. We illustrate the power of our method through experiments on
detecting real worms and identifying hosts on two publicly available network
traces, the MAWI dataset and the LBNL dataset. For HIDS, we develop a novel
learning method to deal with the finite resolution of system log file time
stamps, without losing the benefits of our continuous time model. We
demonstrate the method by detecting intrusions in the DARPA 1998 BSM dataset.
| no_new_dataset | 0.949201 |
1401.3862 | Yonghong Wang | Yonghong Wang, Chung-Wei Hang, Munindar P. Singh | A Probabilistic Approach for Maintaining Trust Based on Evidence | null | Journal Of Artificial Intelligence Research, Volume 40, pages
221-267, 2011 | 10.1613/jair.3108 | null | cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Leading agent-based trust models address two important needs. First, they
show how an agent may estimate the trustworthiness of another agent based on
prior interactions. Second, they show how agents may share their knowledge in
order to cooperatively assess the trustworthiness of others. However, in
real-life settings, information relevant to trust is usually obtained
piecemeal, not all at once. Unfortunately, the problem of maintaining trust has
drawn little attention. Existing approaches handle trust updates in a
heuristic, not a principled, manner. This paper builds on a formal model that
considers probability and certainty as two dimensions of trust. It proposes a
mechanism with which an agent can update the amount of trust it places in
other agents on an ongoing basis. This paper shows via simulation that the
proposed approach (a) provides accurate estimates of the trustworthiness of
agents that change behavior frequently; and (b) captures the dynamic behavior
of the agents. This paper includes an evaluation based on a real dataset drawn
from Amazon Marketplace, a leading e-commerce site.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:04:29 GMT"
}
] | 2014-01-17T00:00:00 | [
[
"Wang",
"Yonghong",
""
],
[
"Hang",
"Chung-Wei",
""
],
[
"Singh",
"Munindar P.",
""
]
] | TITLE: A Probabilistic Approach for Maintaining Trust Based on Evidence
ABSTRACT: Leading agent-based trust models address two important needs. First, they
show how an agent may estimate the trustworthiness of another agent based on
prior interactions. Second, they show how agents may share their knowledge in
order to cooperatively assess the trustworthiness of others. However, in
real-life settings, information relevant to trust is usually obtained
piecemeal, not all at once. Unfortunately, the problem of maintaining trust has
drawn little attention. Existing approaches handle trust updates in a
heuristic, not a principled, manner. This paper builds on a formal model that
considers probability and certainty as two dimensions of trust. It proposes a
mechanism with which an agent can update the amount of trust it places in
other agents on an ongoing basis. This paper shows via simulation that the
proposed approach (a) provides accurate estimates of the trustworthiness of
agents that change behavior frequently; and (b) captures the dynamic behavior
of the agents. This paper includes an evaluation based on a real dataset drawn
from Amazon Marketplace, a leading e-commerce site.
| no_new_dataset | 0.944074 |
1401.3881 | Mustafa Bilgic | Mustafa Bilgic, Lise Getoor | Value of Information Lattice: Exploiting Probabilistic Independence for
Effective Feature Subset Acquisition | null | Journal Of Artificial Intelligence Research, Volume 41, pages
69-95, 2011 | 10.1613/jair.3200 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the cost-sensitive feature acquisition problem, where
misclassifying an instance is costly but the expected misclassification cost
can be reduced by acquiring the values of the missing features. Because
acquiring the features is costly as well, the objective is to acquire the right
set of features so that the sum of the feature acquisition cost and
misclassification cost is minimized. We describe the Value of Information
Lattice (VOILA), an optimal and efficient feature subset acquisition framework.
Unlike the common practice, which is to acquire features greedily, VOILA can
reason with subsets of features. VOILA efficiently searches the space of
possible feature subsets by discovering and exploiting conditional independence
properties between the features and it reuses probabilistic inference
computations to further speed up the process. Through empirical evaluation on
five medical datasets, we show that the greedy strategy is often reluctant to
acquire features, as it cannot forecast the benefit of acquiring multiple
features in combination.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:12:42 GMT"
}
] | 2014-01-17T00:00:00 | [
[
"Bilgic",
"Mustafa",
""
],
[
"Getoor",
"Lise",
""
]
] | TITLE: Value of Information Lattice: Exploiting Probabilistic Independence for
Effective Feature Subset Acquisition
ABSTRACT: We address the cost-sensitive feature acquisition problem, where
misclassifying an instance is costly but the expected misclassification cost
can be reduced by acquiring the values of the missing features. Because
acquiring the features is costly as well, the objective is to acquire the right
set of features so that the sum of the feature acquisition cost and
misclassification cost is minimized. We describe the Value of Information
Lattice (VOILA), an optimal and efficient feature subset acquisition framework.
Unlike the common practice, which is to acquire features greedily, VOILA can
reason with subsets of features. VOILA efficiently searches the space of
possible feature subsets by discovering and exploiting conditional independence
properties between the features and it reuses probabilistic inference
computations to further speed up the process. Through empirical evaluation on
five medical datasets, we show that the greedy strategy is often reluctant to
acquire features, as it cannot forecast the benefit of acquiring multiple
features in combination.
| no_new_dataset | 0.946001 |
1401.4128 | Charles-Henri Cappelaere | Charles-Henri Cappelaere, R. Dubois, P. Roussel, G. Dreyfus | Towards the selection of patients requiring ICD implantation by
automatic classification from Holter monitoring indices | Computing in Cardiology, Saragosse : Espagne (2013) | null | null | null | cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of this study is to optimize the selection of prophylactic
cardioverter defibrillator implantation candidates. Currently, the main
criterion for implantation is a low Left Ventricular Ejection Fraction (LVEF)
whose specificity is relatively poor. We designed two classifiers aimed to
predict, from long term ECG recordings (Holter), whether a low-LVEF patient is
likely or not to undergo ventricular arrhythmia in the next six months. One
classifier is a single hidden layer neural network whose variables are the most
relevant features extracted from Holter recordings, and the other classifier
has a structure that capitalizes on the physiological decomposition of the
arrhythmogenic factors into three disjoint groups: the myocardial substrate,
the triggers and the autonomic nervous system (ANS). In this ad hoc network,
the features were assigned to each group; one neural network classifier per
group was designed and its complexity was optimized. The outputs of the
classifiers were fed to a single neuron that provided the required probability
estimate. The latter was thresholded for final discrimination. A dataset
composed of 186 pre-implantation 30-mn Holter recordings of patients equipped
with an implantable cardioverter defibrillator (ICD) in primary prevention was
used in order to design and test this classifier. 44 out of 186 patients
underwent at least one treated ventricular arrhythmia during the six-month
follow-up period. Performances of the designed classifier were evaluated using
a cross-test strategy that consists in splitting the database into several
combinations of a training set and a test set. The average arrhythmia
prediction performances of the ad-hoc classifier are NPV = 77% $\pm$ 13% and
PPV = 31% $\pm$ 19% (Negative Predictive Value $\pm$ std, Positive Predictive
Value $\pm$ std). According to our study, improving prophylactic
ICD-implantation candidate selection by automatic classification from ECG
features may be possible, but the availability of a sizable dataset appears to
be essential to decrease the number of False Negatives.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 18:54:43 GMT"
}
] | 2014-01-17T00:00:00 | [
[
"Cappelaere",
"Charles-Henri",
""
],
[
"Dubois",
"R.",
""
],
[
"Roussel",
"P.",
""
],
[
"Dreyfus",
"G.",
""
]
] | TITLE: Towards the selection of patients requiring ICD implantation by
automatic classification from Holter monitoring indices
ABSTRACT: The purpose of this study is to optimize the selection of prophylactic
cardioverter defibrillator implantation candidates. Currently, the main
criterion for implantation is a low Left Ventricular Ejection Fraction (LVEF)
whose specificity is relatively poor. We designed two classifiers aimed to
predict, from long term ECG recordings (Holter), whether a low-LVEF patient is
likely or not to undergo ventricular arrhythmia in the next six months. One
classifier is a single hidden layer neural network whose variables are the most
relevant features extracted from Holter recordings, and the other classifier
has a structure that capitalizes on the physiological decomposition of the
arrhythmogenic factors into three disjoint groups: the myocardial substrate,
the triggers and the autonomic nervous system (ANS). In this ad hoc network,
the features were assigned to each group; one neural network classifier per
group was designed and its complexity was optimized. The outputs of the
classifiers were fed to a single neuron that provided the required probability
estimate. The latter was thresholded for final discrimination. A dataset
composed of 186 pre-implantation 30-min Holter recordings of patients equipped
with an implantable cardioverter defibrillator (ICD) in primary prevention was
used in order to design and test this classifier. 44 out of 186 patients
underwent at least one treated ventricular arrhythmia during the six-month
follow-up period. Performances of the designed classifier were evaluated using
a cross-test strategy that consists in splitting the database into several
combinations of a training set and a test set. The average arrhythmia
prediction performances of the ad-hoc classifier are NPV = 77% $\pm$ 13% and
PPV = 31% $\pm$ 19% (Negative Predictive Value $\pm$ std, Positive Predictive
Value $\pm$ std). According to our study, improving prophylactic
ICD-implantation candidate selection by automatic classification from ECG
features may be possible, but the availability of a sizable dataset appears to
be essential to decrease the number of False Negatives.
| no_new_dataset | 0.942612 |
1401.3390 | Mahdi Pakdaman Naeini | Mahdi Pakdaman Naeini, Gregory F. Cooper, Milos Hauskrecht | Binary Classifier Calibration: Non-parametric approach | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate calibration of probabilistic predictive models learned is critical
for many practical prediction and decision-making tasks. There are two main
categories of methods for building calibrated classifiers. One approach is to
develop methods for learning probabilistic models that are well-calibrated, ab
initio. The other approach is to use some post-processing methods for
transforming the output of a classifier to be well calibrated, as for example
histogram binning, Platt scaling, and isotonic regression. One advantage of the
post-processing approach is that it can be applied to any existing
probabilistic classification model that was constructed using any
machine-learning method.
In this paper, we first introduce two measures for evaluating how well a
classifier is calibrated. We prove three theorems showing that using a simple
histogram binning post-processing method, it is possible to make a classifier
be well calibrated while retaining its discrimination capability. Also, by
casting the histogram binning method as a density-based non-parametric binary
classifier, we can extend it using two simple non-parametric density estimation
methods. We demonstrate the performance of the proposed calibration methods on
synthetic and real datasets. Experimental results show that the proposed
methods either outperform or are comparable to existing calibration methods.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2014 23:52:16 GMT"
}
] | 2014-01-16T00:00:00 | [
[
"Naeini",
"Mahdi Pakdaman",
""
],
[
"Cooper",
"Gregory F.",
""
],
[
"Hauskrecht",
"Milos",
""
]
] | TITLE: Binary Classifier Calibration: Non-parametric approach
ABSTRACT: Accurate calibration of probabilistic predictive models learned is critical
for many practical prediction and decision-making tasks. There are two main
categories of methods for building calibrated classifiers. One approach is to
develop methods for learning probabilistic models that are well-calibrated, ab
initio. The other approach is to use some post-processing methods for
transforming the output of a classifier to be well calibrated, as for example
histogram binning, Platt scaling, and isotonic regression. One advantage of the
post-processing approach is that it can be applied to any existing
probabilistic classification model that was constructed using any
machine-learning method.
In this paper, we first introduce two measures for evaluating how well a
classifier is calibrated. We prove three theorems showing that using a simple
histogram binning post-processing method, it is possible to make a classifier
be well calibrated while retaining its discrimination capability. Also, by
casting the histogram binning method as a density-based non-parametric binary
classifier, we can extend it using two simple non-parametric density estimation
methods. We demonstrate the performance of the proposed calibration methods on
synthetic and real datasets. Experimental results show that the proposed
methods either outperform or are comparable to existing calibration methods.
| no_new_dataset | 0.946349 |
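As a point of reference for the calibration record above: a minimal NumPy
sketch of equal-frequency histogram binning, the post-processing method the
abstract builds on. This is an illustrative sketch, not the authors'
implementation; the bin count, the empty-bin fallback and all names are
assumptions.

```python
import numpy as np

def fit_histogram_binning(scores, labels, n_bins=10):
    """Fit equal-frequency bins on held-out scores; each bin stores the
    empirical fraction of positives among the scores that fall into it."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    # interior bin edges at the empirical quantiles of the held-out scores
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))[1:-1]
    bins = np.digitize(scores, edges)  # bin index in 0 .. n_bins-1
    bin_prob = np.array([
        labels[bins == b].mean() if np.any(bins == b) else labels.mean()
        for b in range(n_bins)
    ])
    return edges, bin_prob

def calibrate(scores, edges, bin_prob):
    """Map raw classifier scores to calibrated probabilities."""
    return bin_prob[np.digitize(np.asarray(scores, dtype=float), edges)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.uniform(size=2000)                          # pretend classifier scores
    y = (rng.uniform(size=2000) < raw ** 2).astype(float)  # miscalibrated ground truth
    edges, probs = fit_histogram_binning(raw, y)
    print(calibrate([0.1, 0.5, 0.9], edges, probs))
```

Fitting uses held-out classifier scores and labels; calibrate() then maps new
scores to the empirical positive rate of their bin.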
1401.3413 | Avneesh Saluja | Avneesh Saluja, Mahdi Pakdaman, Dongzhen Piao, Ankur P. Parikh | Infinite Mixed Membership Matrix Factorization | For ICDM 2013 Workshop Proceedings | null | null | null | cs.LG cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rating and recommendation systems have become a popular application area for
applying a suite of machine learning techniques. Current approaches rely
primarily on probabilistic interpretations and extensions of matrix
factorization, which factorizes a user-item ratings matrix into latent user and
item vectors. Most of these methods fail to model significant variations in
item ratings from otherwise similar users, a phenomenon known as the "Napoleon
Dynamite" effect. Recent efforts have addressed this problem by adding a
contextual bias term to the rating, which captures the mood under which a user
rates an item or the context in which an item is rated by a user. In this work,
we extend this model in a nonparametric sense by learning the optimal number of
moods or contexts from the data, and derive Gibbs sampling inference procedures
for our model. We evaluate our approach on the MovieLens 1M dataset, and show
significant improvements over the optimal parametric baseline, more than twice
the improvements previously encountered for this task. We also extract and
evaluate a DBLP dataset, wherein we predict the number of papers co-authored by
two authors, and present improvements over the parametric baseline on this
alternative domain as well.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 02:39:15 GMT"
}
] | 2014-01-16T00:00:00 | [
[
"Saluja",
"Avneesh",
""
],
[
"Pakdaman",
"Mahdi",
""
],
[
"Piao",
"Dongzhen",
""
],
[
"Parikh",
"Ankur P.",
""
]
] | TITLE: Infinite Mixed Membership Matrix Factorization
ABSTRACT: Rating and recommendation systems have become a popular application area for
applying a suite of machine learning techniques. Current approaches rely
primarily on probabilistic interpretations and extensions of matrix
factorization, which factorizes a user-item ratings matrix into latent user and
item vectors. Most of these methods fail to model significant variations in
item ratings from otherwise similar users, a phenomenon known as the "Napoleon
Dynamite" effect. Recent efforts have addressed this problem by adding a
contextual bias term to the rating, which captures the mood under which a user
rates an item or the context in which an item is rated by a user. In this work,
we extend this model in a nonparametric sense by learning the optimal number of
moods or contexts from the data, and derive Gibbs sampling inference procedures
for our model. We evaluate our approach on the MovieLens 1M dataset, and show
significant improvements over the optimal parametric baseline, more than twice
the improvements previously encountered for this task. We also extract and
evaluate a DBLP dataset, wherein we predict the number of papers co-authored by
two authors, and present improvements over the parametric baseline on this
alternative domain as well.
| no_new_dataset | 0.944125 |
1401.3447 | Saher Esmeir | Saher Esmeir, Shaul Markovitch | Anytime Induction of Low-cost, Low-error Classifiers: a Sampling-based
Approach | null | Journal Of Artificial Intelligence Research, Volume 33, pages
1-31, 2008 | 10.1613/jair.2602 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning techniques are gaining prevalence in the production of a
wide range of classifiers for complex real-world applications with nonuniform
testing and misclassification costs. The increasing complexity of these
applications poses a real challenge to resource management during learning and
classification. In this work we introduce ACT (anytime cost-sensitive tree
learner), a novel framework for operating in such complex environments. ACT is
an anytime algorithm that allows learning time to be increased in return for
lower classification costs. It builds a tree top-down and exploits additional
time resources to obtain better estimations for the utility of the different
candidate splits. Using sampling techniques, ACT approximates the cost of the
subtree under each candidate split and favors the one with a minimal cost. As a
stochastic algorithm, ACT is expected to be able to escape local minima, into
which greedy methods may be trapped. Experiments with a variety of datasets
were conducted to compare ACT to the state-of-the-art cost-sensitive tree
learners. The results show that for the majority of domains ACT produces
significantly less costly trees. ACT also exhibits good anytime behavior with
diminishing returns.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:09:07 GMT"
}
] | 2014-01-16T00:00:00 | [
[
"Esmeir",
"Saher",
""
],
[
"Markovitch",
"Shaul",
""
]
] | TITLE: Anytime Induction of Low-cost, Low-error Classifiers: a Sampling-based
Approach
ABSTRACT: Machine learning techniques are gaining prevalence in the production of a
wide range of classifiers for complex real-world applications with nonuniform
testing and misclassification costs. The increasing complexity of these
applications poses a real challenge to resource management during learning and
classification. In this work we introduce ACT (anytime cost-sensitive tree
learner), a novel framework for operating in such complex environments. ACT is
an anytime algorithm that allows learning time to be increased in return for
lower classification costs. It builds a tree top-down and exploits additional
time resources to obtain better estimations for the utility of the different
candidate splits. Using sampling techniques, ACT approximates the cost of the
subtree under each candidate split and favors the one with a minimal cost. As a
stochastic algorithm, ACT is expected to be able to escape local minima, into
which greedy methods may be trapped. Experiments with a variety of datasets
were conducted to compare ACT to the state-of-the-art cost-sensitive tree
learners. The results show that for the majority of domains ACT produces
significantly less costly trees. ACT also exhibits good anytime behavior with
diminishing returns.
| no_new_dataset | 0.946498 |
1401.3474 | Andreas Krause | Andreas Krause, Carlos Guestrin | Optimal Value of Information in Graphical Models | null | Journal Of Artificial Intelligence Research, Volume 35, pages
557-591, 2009 | 10.1613/jair.2737 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real-world decision making tasks require us to choose among several
expensive observations. In a sensor network, for example, it is important to
select the subset of sensors that is expected to provide the strongest
reduction in uncertainty. In medical decision making tasks, one needs to select
which tests to administer before deciding on the most effective treatment. It
has been general practice to use heuristic-guided procedures for selecting
observations. In this paper, we present the first efficient optimal algorithms
for selecting observations for a class of probabilistic graphical models. For
example, our algorithms allow to optimally label hidden variables in Hidden
Markov Models (HMMs). We provide results for both selecting the optimal subset
of observations, and for obtaining an optimal conditional observation plan.
Furthermore we prove a surprising result: In most graphical models tasks, if
one designs an efficient algorithm for chain graphs, such as HMMs, this
procedure can be generalized to polytree graphical models. We prove that the
optimization of the value of information is $NP^{PP}$-hard even for polytrees. It also
follows from our results that just computing decision theoretic value of
information objective functions, which are commonly used in practice, is a
#P-complete problem even on Naive Bayes models (a simple special case of
polytrees).
In addition, we consider several extensions, such as using our algorithms for
scheduling observation selection for multiple sensors. We demonstrate the
effectiveness of our approach on several real-world datasets, including a
prototype sensor network deployment for energy conservation in buildings.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:30:52 GMT"
}
] | 2014-01-16T00:00:00 | [
[
"Krause",
"Andreas",
""
],
[
"Guestrin",
"Carlos",
""
]
] | TITLE: Optimal Value of Information in Graphical Models
ABSTRACT: Many real-world decision making tasks require us to choose among several
expensive observations. In a sensor network, for example, it is important to
select the subset of sensors that is expected to provide the strongest
reduction in uncertainty. In medical decision making tasks, one needs to select
which tests to administer before deciding on the most effective treatment. It
has been general practice to use heuristic-guided procedures for selecting
observations. In this paper, we present the first efficient optimal algorithms
for selecting observations for a class of probabilistic graphical models. For
example, our algorithms allow to optimally label hidden variables in Hidden
Markov Models (HMMs). We provide results for both selecting the optimal subset
of observations, and for obtaining an optimal conditional observation plan.
Furthermore we prove a surprising result: In most graphical models tasks, if
one designs an efficient algorithm for chain graphs, such as HMMs, this
procedure can be generalized to polytree graphical models. We prove that the
optimization of the value of information is $NP^{PP}$-hard even for polytrees. It also
follows from our results that just computing decision theoretic value of
information objective functions, which are commonly used in practice, is a
#P-complete problem even on Naive Bayes models (a simple special case of
polytrees).
In addition, we consider several extensions, such as using our algorithms for
scheduling observation selection for multiple sensors. We demonstrate the
effectiveness of our approach on several real-world datasets, including a
prototype sensor network deployment for energy conservation in buildings.
| no_new_dataset | 0.9463 |
1401.3510 | Saurabh Varshney Mr. | Saurabh Varshney and Jyoti Bajpai | Improving Performance Of English-Hindi Cross Language Information
Retrieval Using Transliteration Of Query Terms | International Journal on Natural Language Computing (IJNLC) Vol. 2,
No.6, December 2013 http://airccse.org/journal/ijnlc/index.html | null | 10.5121/ijnlc.2013.2604 | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main issue in Cross Language Information Retrieval (CLIR) is the poor
performance of retrieval in terms of average precision when compared to
monolingual retrieval performance. The main reasons behind poor performance of
CLIR are mismatching of query terms, lexical ambiguity and un-translated query
terms. The existing problems of CLIR need to be addressed in order to
increase the performance of the CLIR system. In this paper, we attempt to
solve this problem by proposing an algorithm for improving the performance of
an English-Hindi CLIR system. We generate all possible combinations of the
Hindi translated query using transliteration of the English query terms and
choose the best query among them for document retrieval. The experiment is
performed on the FIRE 2010 (Forum of Information Retrieval Evaluation)
datasets. The experimental results show that the proposed approach gives
better performance for the English-Hindi CLIR system, helps in overcoming the
existing problems, and outperforms the existing English-Hindi CLIR system in
terms of average precision.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 08:07:08 GMT"
}
] | 2014-01-16T00:00:00 | [
[
"Varshney",
"Saurabh",
""
],
[
"Bajpai",
"Jyoti",
""
]
] | TITLE: Improving Performance Of English-Hindi Cross Language Information
Retrieval Using Transliteration Of Query Terms
ABSTRACT: The main issue in Cross Language Information Retrieval (CLIR) is the poor
performance of retrieval in terms of average precision when compared to
monolingual retrieval performance. The main reasons behind poor performance of
CLIR are mismatching of query terms, lexical ambiguity and un-translated query
terms. The existing problems of CLIR need to be addressed in order to
increase the performance of the CLIR system. In this paper, we attempt to
solve this problem by proposing an algorithm for improving the performance of
an English-Hindi CLIR system. We generate all possible combinations of the
Hindi translated query using transliteration of the English query terms and
choose the best query among them for document retrieval. The experiment is
performed on the FIRE 2010 (Forum of Information Retrieval Evaluation)
datasets. The experimental results show that the proposed approach gives
better performance for the English-Hindi CLIR system, helps in overcoming the
existing problems, and outperforms the existing English-Hindi CLIR system in
terms of average precision.
| no_new_dataset | 0.94801 |
1401.2912 | Ragesh Jaiswal | Anup Bhattacharya, Ragesh Jaiswal, Nir Ailon | A tight lower bound instance for k-means++ in constant dimension | To appear in TAMC 2014. arXiv admin note: text overlap with
arXiv:1306.4207 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The k-means++ seeding algorithm is one of the most popular algorithms that is
used for finding the initial $k$ centers when using the k-means heuristic. The
algorithm is a simple sampling procedure and can be described as follows: Pick
the first center randomly from the given points. For $i > 1$, pick a point to
be the $i^{th}$ center with probability proportional to the square of the
Euclidean distance of this point to the closest of the $(i-1)$ previously chosen
centers.
The k-means++ seeding algorithm is not only simple and fast but also gives an
$O(\log{k})$ approximation in expectation as shown by Arthur and Vassilvitskii.
There are datasets on which this seeding algorithm gives an approximation
factor of $\Omega(\log{k})$ in expectation. However, it is not clear from these
results if the algorithm achieves a good approximation factor with reasonably
high probability (say $1/poly(k)$). Brunsch and R\"{o}glin gave a dataset where
the k-means++ seeding algorithm achieves an $O(\log{k})$ approximation ratio
with probability that is exponentially small in $k$. However, this and all
other known lower-bound examples are high dimensional. So, an open problem was
to understand the behavior of the algorithm on low dimensional datasets. In
this work, we give a simple two dimensional dataset on which the seeding
algorithm achieves an $O(\log{k})$ approximation ratio with probability
exponentially small in $k$. This solves open problems posed by Mahajan et al.
and by Brunsch and R\"{o}glin.
| [
{
"version": "v1",
"created": "Mon, 13 Jan 2014 16:57:57 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jan 2014 04:06:30 GMT"
}
] | 2014-01-15T00:00:00 | [
[
"Bhattacharya",
"Anup",
""
],
[
"Jaiswal",
"Ragesh",
""
],
[
"Ailon",
"Nir",
""
]
] | TITLE: A tight lower bound instance for k-means++ in constant dimension
ABSTRACT: The k-means++ seeding algorithm is one of the most popular algorithms that is
used for finding the initial $k$ centers when using the k-means heuristic. The
algorithm is a simple sampling procedure and can be described as follows: Pick
the first center randomly from the given points. For $i > 1$, pick a point to
be the $i^{th}$ center with probability proportional to the square of the
Euclidean distance of this point to the closest of the $(i-1)$ previously chosen
centers.
The k-means++ seeding algorithm is not only simple and fast but also gives an
$O(\log{k})$ approximation in expectation as shown by Arthur and Vassilvitskii.
There are datasets on which this seeding algorithm gives an approximation
factor of $\Omega(\log{k})$ in expectation. However, it is not clear from these
results if the algorithm achieves a good approximation factor with reasonably
high probability (say $1/poly(k)$). Brunsch and R\"{o}glin gave a dataset where
the k-means++ seeding algorithm achieves an $O(\log{k})$ approximation ratio
with probability that is exponentially small in $k$. However, this and all
other known lower-bound examples are high dimensional. So, an open problem was
to understand the behavior of the algorithm on low dimensional datasets. In
this work, we give a simple two dimensional dataset on which the seeding
algorithm achieves an $O(\log{k})$ approximation ratio with probability
exponentially small in $k$. This solves open problems posed by Mahajan et al.
and by Brunsch and R\"{o}glin.
| no_new_dataset | 0.937612 |
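As a point of reference for the k-means++ record above: the seeding procedure
its abstract describes can be sketched in a few lines of NumPy. This is an
illustrative sketch of standard D^2 seeding, not code from the paper; the
function name, the toy data and the seed handling are assumptions.

```python
import numpy as np

def kmeanspp_seed(points, k, seed=None):
    """D^2 seeding: return k initial centers chosen as the abstract describes."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    centers = [points[rng.integers(n)]]            # first center: uniform at random
    for _ in range(1, k):
        diff = points[:, None, :] - np.asarray(centers)[None, :, :]
        d2 = (diff ** 2).sum(axis=-1).min(axis=1)  # squared distance to closest chosen center
        probs = d2 / d2.sum()                      # sample proportional to D^2
        centers.append(points[rng.choice(n, p=probs)])
    return np.asarray(centers)

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(300, 2))
    print(kmeanspp_seed(X, k=5, seed=1))
```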
1401.3056 | Yujian Pan | Yujian Pan and Xiang Li | Power of individuals -- Controlling centrality of temporal networks | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal networks are such networks where nodes and interactions may appear
and disappear at various time scales. Given the evidence of the ubiquity of
temporal networks in our economy, nature and society, it is urgent and
significant to focus on the structural controllability of temporal networks,
which is still an untouched topic. We develop graphic tools to study the
structural controllability of temporal networks, identifying the intrinsic
mechanism behind the ability of individuals to control a dynamic and
large-scale temporal
network. Classifying temporal trees of a temporal network into different types,
we give (both upper and lower) analytical bounds of the controlling centrality,
which are verified by numerical simulations of both artificial and empirical
temporal networks. We find that the scale-free distribution of a node's
controlling centrality is virtually independent of the time scale and the type
of dataset, indicating the inherent heterogeneity and robustness of the
controlling
centrality of temporal networks.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2014 03:02:20 GMT"
}
] | 2014-01-15T00:00:00 | [
[
"Pan",
"Yujian",
""
],
[
"Li",
"Xiang",
""
]
] | TITLE: Power of individuals -- Controlling centrality of temporal networks
ABSTRACT: Temporal networks are such networks where nodes and interactions may appear
and disappear at various time scales. Given the evidence of the ubiquity of
temporal networks in our economy, nature and society, it is urgent and
significant to focus on the structural controllability of temporal networks,
which is still an untouched topic. We develop graphic tools to study the
structural controllability of temporal networks, identifying the intrinsic
mechanism behind the ability of individuals to control a dynamic and
large-scale temporal
network. Classifying temporal trees of a temporal network into different types,
we give (both upper and lower) analytical bounds of the controlling centrality,
which are verified by numerical simulations of both artificial and empirical
temporal networks. We find that the scale-free distribution of a node's
controlling centrality is virtually independent of the time scale and the type
of dataset, indicating the inherent heterogeneity and robustness of the
controlling
centrality of temporal networks.
| no_new_dataset | 0.945751 |
1401.3126 | Matteo Zignani | Matteo Zignani and Christian Quadri and Sabrina Gaitto and Gian Paolo
Rossi | Exploiting all phone media? A multidimensional network analysis of phone
users' sociality | 8 pages, 1 figure | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growing awareness that human communications and social interactions are
assuming a stratified structure, due to the availability of multiple
techno-communication channels, including online social networks, mobile phone
calls, short messages (SMS) and e-mails, has recently led to the study of
multidimensional networks, as a step beyond classical Social Network
Analysis. A few papers have been dedicated to developing the theoretical
framework to deal with such multiplex networks and to analyzing some examples
of
multidimensional social networks. In this context we perform the first study of
the multiplex mobile social network, gathered from the records of both call and
text message activities of millions of users of a large mobile phone operator
over a period of 12 weeks. While social networks constructed from mobile phone
datasets have drawn great attention in recent years, so far studies have dealt
with text message and call data separately, providing a very partial view of
people's sociality expressed on the phone. Here we analyze how the call and
the text message dimensions overlap, showing how much information about links
and nodes could be lost by accounting for only a single layer, and how users
adopt different
media channels to interact with their neighborhood.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2014 10:27:10 GMT"
}
] | 2014-01-15T00:00:00 | [
[
"Zignani",
"Matteo",
""
],
[
"Quadri",
"Christian",
""
],
[
"Gaitto",
"Sabrina",
""
],
[
"Rossi",
"Gian Paolo",
""
]
] | TITLE: Exploiting all phone media? A multidimensional network analysis of phone
users' sociality
ABSTRACT: The growing awareness that human communications and social interactions are
assuming a stratified structure, due to the availability of multiple
techno-communication channels, including online social networks, mobile phone
calls, short messages (SMS) and e-mails, has recently led to the study of
multidimensional networks, as a step beyond classical Social Network
Analysis. A few papers have been dedicated to developing the theoretical
framework to deal with such multiplex networks and to analyzing some examples
of
multidimensional social networks. In this context we perform the first study of
the multiplex mobile social network, gathered from the records of both call and
text message activities of millions of users of a large mobile phone operator
over a period of 12 weeks. While social networks constructed from mobile phone
datasets have drawn great attention in recent years, so far studies have dealt
with text message and call data separately, providing a very partial view of
people's sociality expressed on the phone. Here we analyze how the call and
the text message dimensions overlap, showing how much information about links
and nodes could be lost by accounting for only a single layer, and how users
adopt different
media channels to interact with their neighborhood.
| no_new_dataset | 0.813905 |
1401.3222 | Alexander V. Mantzaris Dr | Alexander V. Mantzaris | Uncovering nodes that spread information between communities in social
networks | null | null | null | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | From many datasets gathered in online social networks, well defined community
structures have been observed. A large number of users participate in these
networks and the size of the resulting graphs poses computational challenges.
There is a particular demand for identifying the nodes responsible for
information flow between communities; for example, in temporal Twitter networks
edges between communities play a key role in propagating spikes of activity
when the connectivity between communities is sparse and few edges exist between
different clusters of nodes. The new algorithm proposed here is aimed at
revealing these key connections by measuring a node's vicinity to nodes of
another community. We look at the nodes which have edges in more than one
community and the locality of nodes around them which influence the information
received and broadcasted to them. The method relies on independent random walks
of a chosen fixed number of steps, originating from nodes with edges in more
than one community. For the large networks that we have in mind, existing
measures such as betweenness centrality are difficult to compute, even with
recent methods that approximate the large number of operations required. We
therefore design an algorithm that scales up to the demand of current big data
requirements and has the ability to harness parallel processing capabilities.
The new algorithm is illustrated on synthetic data, where results can be judged
carefully, and also on real, large-scale Twitter activity data, where new
insights can be gained.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2014 15:30:27 GMT"
}
] | 2014-01-15T00:00:00 | [
[
"Mantzaris",
"Alexander V.",
""
]
] | TITLE: Uncovering nodes that spread information between communities in social
networks
ABSTRACT: From many datasets gathered in online social networks, well defined community
structures have been observed. A large number of users participate in these
networks and the size of the resulting graphs poses computational challenges.
There is a particular demand for identifying the nodes responsible for
information flow between communities; for example, in temporal Twitter networks
edges between communities play a key role in propagating spikes of activity
when the connectivity between communities is sparse and few edges exist between
different clusters of nodes. The new algorithm proposed here is aimed at
revealing these key connections by measuring a node's vicinity to nodes of
another community. We look at the nodes which have edges in more than one
community and the locality of nodes around them which influence the information
received and broadcasted to them. The method relies on independent random walks
of a chosen fixed number of steps, originating from nodes with edges in more
than one community. For the large networks that we have in mind, existing
measures such as betweenness centrality are difficult to compute, even with
recent methods that approximate the large number of operations required. We
therefore design an algorithm that scales up to the demand of current big data
requirements and has the ability to harness parallel processing capabilities.
The new algorithm is illustrated on synthetic data, where results can be judged
carefully, and also on real, large-scale Twitter activity data, where new
insights can be gained.
| no_new_dataset | 0.942981 |
1401.3258 | Jeremy Kun | Rajmonda Caceres, Kevin Carter, Jeremy Kun | A Boosting Approach to Learning Graph Representations | null | null | null | null | cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning the right graph representation from noisy, multisource data has
garnered significant interest in recent years. A central tenet of this problem
is relational learning. Here the objective is to incorporate the partial
information each data source gives us in a way that captures the true
underlying relationships. To address this challenge, we present a general,
boosting-inspired framework for combining weak evidence of entity associations
into a robust similarity metric. We explore the extent to which different
quality measurements yield graph representations that are suitable for
community detection. We then present empirical results on both synthetic and
real datasets demonstrating the utility of this framework. Our framework leads
to suitable global graph representations from quality measurements local to
each edge. Finally, we discuss future extensions and theoretical considerations
of learning useful graph representations from weak feedback in general
application settings.
| [
{
"version": "v1",
"created": "Tue, 14 Jan 2014 17:07:01 GMT"
}
] | 2014-01-15T00:00:00 | [
[
"Caceres",
"Rajmonda",
""
],
[
"Carter",
"Kevin",
""
],
[
"Kun",
"Jeremy",
""
]
] | TITLE: A Boosting Approach to Learning Graph Representations
ABSTRACT: Learning the right graph representation from noisy, multisource data has
garnered significant interest in recent years. A central tenet of this problem
is relational learning. Here the objective is to incorporate the partial
information each data source gives us in a way that captures the true
underlying relationships. To address this challenge, we present a general,
boosting-inspired framework for combining weak evidence of entity associations
into a robust similarity metric. We explore the extent to which different
quality measurements yield graph representations that are suitable for
community detection. We then present empirical results on both synthetic and
real datasets demonstrating the utility of this framework. Our framework leads
to suitable global graph representations from quality measurements local to
each edge. Finally, we discuss future extensions and theoretical considerations
of learning useful graph representations from weak feedback in general
application settings.
| no_new_dataset | 0.944995 |
1401.2504 | Tao Xiong | Yukun Bao, Tao Xiong, Zhongyi Hu | Multi-Step-Ahead Time Series Prediction using Multiple-Output Support
Vector Regression | 26 pages | null | 10.1016/j.neucom.2013.09.010 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate time series prediction over long future horizons is challenging and
of great interest to both practitioners and academics. As a well-known
intelligent algorithm, the standard formulation of Support Vector Regression
(SVR) could be taken for multi-step-ahead time series prediction, only relying
either on iterated strategy or direct strategy. This study proposes a novel
multiple-step-ahead time series prediction approach which employs
multiple-output support vector regression (M-SVR) with multiple-input
multiple-output (MIMO) prediction strategy. In addition, the rank of three
leading prediction strategies with SVR is comparatively examined, providing
practical implications on the selection of the prediction strategy for
multi-step-ahead forecasting while taking SVR as modeling technique. The
proposed approach is validated with the simulated and real datasets. The
quantitative and comprehensive assessments are performed on the basis of the
prediction accuracy and computational cost. The results indicate that: 1) the
M-SVR using MIMO strategy achieves the most accurate forecasts with accredited
computational load, 2) the standard SVR using direct strategy achieves the
second most accurate forecasts, but with the most expensive computational cost,
and 3) the standard SVR using iterated strategy is the worst in terms of
prediction accuracy, but with the least computational cost.
| [
{
"version": "v1",
"created": "Sat, 11 Jan 2014 06:14:53 GMT"
}
] | 2014-01-14T00:00:00 | [
[
"Bao",
"Yukun",
""
],
[
"Xiong",
"Tao",
""
],
[
"Hu",
"Zhongyi",
""
]
] | TITLE: Multi-Step-Ahead Time Series Prediction using Multiple-Output Support
Vector Regression
ABSTRACT: Accurate time series prediction over long future horizons is challenging and
of great interest to both practitioners and academics. As a well-known
intelligent algorithm, the standard formulation of Support Vector Regression
(SVR) could be taken for multi-step-ahead time series prediction, only relying
either on iterated strategy or direct strategy. This study proposes a novel
multiple-step-ahead time series prediction approach which employs
multiple-output support vector regression (M-SVR) with multiple-input
multiple-output (MIMO) prediction strategy. In addition, the rank of three
leading prediction strategies with SVR is comparatively examined, providing
practical implications on the selection of the prediction strategy for
multi-step-ahead forecasting while taking SVR as modeling technique. The
proposed approach is validated with the simulated and real datasets. The
quantitative and comprehensive assessments are performed on the basis of the
prediction accuracy and computational cost. The results indicate that: 1) the
M-SVR using MIMO strategy achieves the most accurate forecasts with accredited
computational load, 2) the standard SVR using direct strategy achieves the
second most accurate forecasts, but with the most expensive computational cost,
and 3) the standard SVR using iterated strategy is the worst in terms of
prediction accuracy, but with the least computational cost.
| no_new_dataset | 0.953232 |
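To make the strategies named in the record above concrete, a small
scikit-learn sketch of the iterated and direct multi-step-ahead strategies
with a standard SVR is given below; the MIMO strategy with multiple-output SVR
(M-SVR) is not reproduced here. The lag count, kernel settings and toy series
are illustrative assumptions, not the study's experimental setup.

```python
import numpy as np
from sklearn.svm import SVR

def iterated_forecast(series, n_lags, horizon):
    """Iterated strategy: one 1-step-ahead SVR whose predictions are fed back."""
    series = np.asarray(series, dtype=float)
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
    window = list(series[-n_lags:])
    preds = []
    for _ in range(horizon):
        yhat = model.predict(np.array(window[-n_lags:]).reshape(1, -1))[0]
        preds.append(yhat)
        window.append(yhat)   # predicted value becomes an input for the next step
    return np.array(preds)

def direct_forecast(series, n_lags, horizon):
    """Direct strategy: a separate SVR trained for each horizon h = 1 .. H."""
    series = np.asarray(series, dtype=float)
    preds = []
    for h in range(1, horizon + 1):
        X = np.array([series[i:i + n_lags]
                      for i in range(len(series) - n_lags - h + 1)])
        y = series[n_lags + h - 1:]
        model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
        preds.append(model.predict(series[-n_lags:].reshape(1, -1))[0])
    return np.array(preds)

if __name__ == "__main__":
    t = np.linspace(0.0, 20.0, 400)
    s = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
    print(iterated_forecast(s, n_lags=10, horizon=5))
    print(direct_forecast(s, n_lags=10, horizon=5))
```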
1401.2688 | Kiran Sree Pokkuluri Prof | Pokkuluri Kiran Sree, Inamupudi Ramesh Babu, SSSN Usha Devi N | PSMACA: An Automated Protein Structure Prediction Using MACA (Multiple
Attractor Cellular Automata) | 6 pages. arXiv admin note: substantial text overlap with
arXiv:1310.4342, arXiv:1310.4495 | Journal of Bioinformatics and Intelligent Control Vol 2, pp
211--215, 2013 | 10.1166/jbic.2013.1052 | null | cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein Structure Prediction from amino acid sequences has gained
remarkable attention in recent years. Even though there are some prediction
techniques addressing this problem, the approximate accuracy in predicting the
protein structure is close to 75%. An automated procedure was evolved with
MACA (Multiple Attractor Cellular Automata) for predicting the structure of
the protein. Most of the existing approaches are sequential, classify the
input into four major classes, and are designed for similar sequences. PSMACA
is designed to identify ten classes from sequences that share twilight-zone
similarity and identity with the training sequences. This method also predicts
three states (helix, strand, and coil) for the structure. Our comprehensive
design considers 10 feature selection methods and 4 classifiers to develop
MACA (Multiple Attractor Cellular Automata) based classifiers that are built
for each of the ten classes. We have tested the proposed classifier on
twilight-zone and 1-high-similarity benchmark datasets; comparison with over
three dozen modern competing predictors shows that PSMACA provides the best
overall accuracy, which ranges between 77% and 88.7% depending on the dataset.
| [
{
"version": "v1",
"created": "Mon, 13 Jan 2014 00:38:52 GMT"
}
] | 2014-01-14T00:00:00 | [
[
"Sree",
"Pokkuluri Kiran",
""
],
[
"Babu",
"Inamupudi Ramesh",
""
],
[
"N",
"SSSN Usha Devi",
""
]
] | TITLE: PSMACA: An Automated Protein Structure Prediction Using MACA (Multiple
Attractor Cellular Automata)
ABSTRACT: Protein Structure Prediction from amino acid sequences has gained
remarkable attention in recent years. Even though there are some prediction
techniques addressing this problem, the approximate accuracy in predicting the
protein structure is close to 75%. An automated procedure was evolved with
MACA (Multiple Attractor Cellular Automata) for predicting the structure of
the protein. Most of the existing approaches are sequential, classify the
input into four major classes, and are designed for similar sequences. PSMACA
is designed to identify ten classes from sequences that share twilight-zone
similarity and identity with the training sequences. This method also predicts
three states (helix, strand, and coil) for the structure. Our comprehensive
design considers 10 feature selection methods and 4 classifiers to develop
MACA (Multiple Attractor Cellular Automata) based classifiers that are built
for each of the ten classes. We have tested the proposed classifier on
twilight-zone and 1-high-similarity benchmark datasets; comparison with over
three dozen modern competing predictors shows that PSMACA provides the best
overall accuracy, which ranges between 77% and 88.7% depending on the dataset.
| no_new_dataset | 0.951097 |
1401.2955 | Mahdi Pakdaman Naeini | Mahdi Pakdaman Naeini, Gregory F. Cooper, Milos Hauskrecht | Binary Classifier Calibration: Bayesian Non-Parametric Approach | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A set of probabilistic predictions is well calibrated if the events that are
predicted to occur with probability p do in fact occur about p fraction of the
time. Well calibrated predictions are particularly important when machine
learning models are used in decision analysis. This paper presents two new
non-parametric methods for calibrating outputs of binary classification models:
a method based on the Bayes optimal selection and a method based on the
Bayesian model averaging. The advantage of these methods is that they are
independent of the algorithm used to learn a predictive model, and they can be
applied in a post-processing step, after the model is learned. This makes them
applicable to a wide variety of machine learning models and methods. These
calibration methods, as well as other methods, are tested on a variety of
datasets in terms of both discrimination and calibration performance. The
results show the methods either outperform or are comparable in performance to
the state-of-the-art calibration methods.
| [
{
"version": "v1",
"created": "Mon, 13 Jan 2014 19:04:13 GMT"
}
] | 2014-01-14T00:00:00 | [
[
"Naeini",
"Mahdi Pakdaman",
""
],
[
"Cooper",
"Gregory F.",
""
],
[
"Hauskrecht",
"Milos",
""
]
] | TITLE: Binary Classifier Calibration: Bayesian Non-Parametric Approach
ABSTRACT: A set of probabilistic predictions is well calibrated if the events that are
predicted to occur with probability p do in fact occur about p fraction of the
time. Well calibrated predictions are particularly important when machine
learning models are used in decision analysis. This paper presents two new
non-parametric methods for calibrating outputs of binary classification models:
a method based on the Bayes optimal selection and a method based on the
Bayesian model averaging. The advantage of these methods is that they are
independent of the algorithm used to learn a predictive model, and they can be
applied in a post-processing step, after the model is learned. This makes them
applicable to a wide variety of machine learning models and methods. These
calibration methods, as well as other methods, are tested on a variety of
datasets in terms of both discrimination and calibration performance. The
results show the methods either outperform or are comparable in performance to
the state-of-the-art calibration methods.
| no_new_dataset | 0.953013 |
1310.4495 | Kiran Sree Pokkuluri Prof | Pokkuluri Kiran Sree, Inampudi Ramesh Babu and SSSN Usha Devi Nedunuri | Multiple Attractor Cellular Automata (MACA) for Addressing Major
Problems in Bioinformatics | arXiv admin note: text overlap with arXiv:1310.4342 | Review of Bioinformatics and Biometrics (RBB) Volume 2 Issue 3,
September 2013 | null | null | cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | CA has grown into a potential classifier for addressing major problems
in bioinformatics. Many bioinformatics problems, such as predicting the
protein coding region, finding the promoter region, and predicting the
structure of a protein, can be addressed through Cellular Automata. Even
though there are some prediction techniques addressing these problems, the
approximate accuracy level is quite low. An automated procedure was proposed
with MACA (Multiple Attractor Cellular Automata) which can address all these
problems. The genetic algorithm is also used to find rules with good fitness
values. Extensive experiments are conducted for reporting the accuracy of the
proposed tool. The average accuracy of MACA when tested with the ENCODE,
BG570, HMR195, Fickett and Tongue, and ASP67 datasets is 78%.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2013 15:01:19 GMT"
}
] | 2014-01-13T00:00:00 | [
[
"Sree",
"Pokkuluri Kiran",
""
],
[
"Babu",
"Inampudi Ramesh",
""
],
[
"Nedunuri",
"SSSN Usha Devi",
""
]
] | TITLE: Multiple Attractor Cellular Automata (MACA) for Addressing Major
Problems in Bioinformatics
ABSTRACT: CA has grown into a potential classifier for addressing major
problems in bioinformatics. Many bioinformatics problems, such as predicting
the protein coding region, finding the promoter region, and predicting the
structure of a protein, can be addressed through Cellular Automata. Even
though there are some prediction techniques addressing these problems, the
approximate accuracy level is quite low. An automated procedure was proposed
with MACA (Multiple Attractor Cellular Automata) which can address all these
problems. The genetic algorithm is also used to find rules with good fitness
values. Extensive experiments are conducted for reporting the accuracy of the
proposed tool. The average accuracy of MACA when tested with the ENCODE,
BG570, HMR195, Fickett and Tongue, and ASP67 datasets is 78%.
| no_new_dataset | 0.953057 |
1401.2258 | Benjamin Roth | Benjamin Roth | Assessing Wikipedia-Based Cross-Language Retrieval Models | 74 pages; MSc thesis at Saarland University | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work compares concept models for cross-language retrieval: First, we
adapt probabilistic Latent Semantic Analysis (pLSA) for multilingual documents.
Experiments with different weighting schemes show that a weighting method
favoring documents of similar length in both language sides gives best results.
Considering that both monolingual and multilingual Latent Dirichlet Allocation
(LDA) behave alike when applied for such documents, we use a training corpus
built on Wikipedia where all documents are length-normalized and obtain
improvements over previously reported scores for LDA. Another focus of our work
is on model combination. To this end, we include Explicit Semantic Analysis
(ESA) in the experiments. We observe that ESA is not competitive with LDA in a
query based retrieval task on CLEF 2000 data. The combination of machine
translation with concept models increased performance by 21.1% map in
comparison to machine translation alone. Machine translation relies on parallel
corpora, which may not be available for many language pairs. We further explore
how much cross-lingual information can be carried over by a specific
information source in Wikipedia, namely linked text. The best results are
obtained using a language modeling approach, entirely without information from
parallel corpora. The need for smoothing raises interesting questions on
soundness and efficiency. Link models capture only a certain kind of
information and suggest weighting schemes to emphasize particular words. For a
combined model, another interesting question is therefore how to integrate
different weighting schemes. Using a very simple combination scheme, we obtain
results that compare favorably to previously reported results on the CLEF 2000
dataset.
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2014 08:50:54 GMT"
}
] | 2014-01-13T00:00:00 | [
[
"Roth",
"Benjamin",
""
]
] | TITLE: Assessing Wikipedia-Based Cross-Language Retrieval Models
ABSTRACT: This work compares concept models for cross-language retrieval: First, we
adapt probabilistic Latent Semantic Analysis (pLSA) for multilingual documents.
Experiments with different weighting schemes show that a weighting method
favoring documents of similar length in both language sides gives best results.
Considering that both monolingual and multilingual Latent Dirichlet Allocation
(LDA) behave alike when applied for such documents, we use a training corpus
built on Wikipedia where all documents are length-normalized and obtain
improvements over previously reported scores for LDA. Another focus of our work
is on model combination. To this end, we include Explicit Semantic Analysis
(ESA) in the experiments. We observe that ESA is not competitive with LDA in a
query based retrieval task on CLEF 2000 data. The combination of machine
translation with concept models increased performance by 21.1% map in
comparison to machine translation alone. Machine translation relies on parallel
corpora, which may not be available for many language pairs. We further explore
how much cross-lingual information can be carried over by a specific
information source in Wikipedia, namely linked text. The best results are
obtained using a language modeling approach, entirely without information from
parallel corpora. The need for smoothing raises interesting questions on
soundness and efficiency. Link models capture only a certain kind of
information and suggest weighting schemes to emphasize particular words. For a
combined model, another interesting question is therefore how to integrate
different weighting schemes. Using a very simple combination scheme, we obtain
results that compare favorably to previously reported results on the CLEF 2000
dataset.
| no_new_dataset | 0.956634 |
1306.0186 | Iain Murray | Benigno Uria, Iain Murray, Hugo Larochelle | RNADE: The real-valued neural autoregressive density-estimator | 12 pages, 3 figures, 3 tables, 2 algorithms. Merges the published
paper and supplementary material into one document | Advances in Neural Information Processing Systems 26:2175-2183,
2013 | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce RNADE, a new model for joint density estimation of real-valued
vectors. Our model calculates the density of a datapoint as the product of
one-dimensional conditionals modeled using mixture density networks with shared
parameters. RNADE learns a distributed representation of the data, while having
a tractable expression for the calculation of densities. A tractable likelihood
allows direct comparison with other methods and training by standard
gradient-based optimizers. We compare the performance of RNADE on several
datasets of heterogeneous and perceptual data, finding it outperforms mixture
models in all but one case.
| [
{
"version": "v1",
"created": "Sun, 2 Jun 2013 09:37:53 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jan 2014 11:14:27 GMT"
}
] | 2014-01-10T00:00:00 | [
[
"Uria",
"Benigno",
""
],
[
"Murray",
"Iain",
""
],
[
"Larochelle",
"Hugo",
""
]
] | TITLE: RNADE: The real-valued neural autoregressive density-estimator
ABSTRACT: We introduce RNADE, a new model for joint density estimation of real-valued
vectors. Our model calculates the density of a datapoint as the product of
one-dimensional conditionals modeled using mixture density networks with shared
parameters. RNADE learns a distributed representation of the data, while having
a tractable expression for the calculation of densities. A tractable likelihood
allows direct comparison with other methods and training by standard
gradient-based optimizers. We compare the performance of RNADE on several
datasets of heterogeneous and perceptual data, finding it outperforms mixture
models in all but one case.
| no_new_dataset | 0.944689 |
1401.1307 | Jiping Xiong | Jiping Xiong, Qinghua Tang, and Jian Zhao | 1-bit Compressive Data Gathering for Wireless Sensor Networks | null | null | null | null | cs.NI | http://creativecommons.org/licenses/by/3.0/ | Compressive sensing (CS) has been widely used for the data gathering in
wireless sensor networks in recent years for the purpose of reducing the
communication overhead. In this paper, we first show that with a simple
modification, 1-bit compressive sensing can also be used for data gathering in
wireless
sensor networks to further reduce the communication overhead. We also propose a
novel blind 1-bit CS reconstruction algorithm which outperforms other state of
the art blind 1-bit CS reconstruction algorithms. Experimental results on real
sensor datasets demonstrate the efficiency of our method.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2014 08:32:15 GMT"
}
] | 2014-01-08T00:00:00 | [
[
"Xiong",
"Jiping",
""
],
[
"Tang",
"Qinghua",
""
],
[
"Zhao",
"Jian",
""
]
] | TITLE: 1-bit Compressive Data Gathering for Wireless Sensor Networks
ABSTRACT: Compressive sensing (CS) has been widely used for the data gathering in
wireless sensor networks in recent years for the purpose of reducing the
communication overhead. In this paper, we first show that with a simple
modification, 1-bit compressive sensing can also be used for data gathering in
wireless
sensor networks to further reduce the communication overhead. We also propose a
novel blind 1-bit CS reconstruction algorithm which outperforms other state of
the art blind 1-bit CS reconstruction algorithms. Experimental results on real
sensor datasets demonstrate the efficiency of our method.
| no_new_dataset | 0.9549 |
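For the 1-bit compressive sensing record above, a generic sketch of the 1-bit
measurement model y = sign(Phi x) and a standard binary iterative hard
thresholding (BIHT) recovery loop. This is not the authors' blind
reconstruction algorithm; the step size, sparsity level and toy data are
illustrative assumptions.

```python
import numpy as np

def biht(y, Phi, sparsity, n_iter=200):
    """Binary iterative hard thresholding for y = sign(Phi x), ||x||_2 = 1."""
    m, n = Phi.shape
    x = np.zeros(n)
    step = 1.0 / m
    for _ in range(n_iter):
        # move towards sign consistency, then enforce sparsity and unit norm
        g = x + step * (Phi.T @ (y - np.sign(Phi @ x)))
        keep = np.argsort(np.abs(g))[-sparsity:]
        x = np.zeros(n)
        x[keep] = g[keep]
        x /= np.linalg.norm(x) + 1e-12
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 200, 600, 8
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
    x_true /= np.linalg.norm(x_true)
    Phi = rng.normal(size=(m, n))
    y = np.sign(Phi @ x_true)      # the 1-bit measurements a sensor node would send
    x_hat = biht(y, Phi, sparsity=k)
    print(np.linalg.norm(x_hat - x_true))
```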
1401.1489 | Romain H\'erault | John Komar and Romain H\'erault and Ludovic Seifert | Key point selection and clustering of swimmer coordination through
Sparse Fisher-EM | Presented at ECML/PKDD 2013 Workshop on Machine Learning and Data
Mining for Sports Analytics (MLSA2013) | null | null | null | stat.ML cs.CV cs.LG physics.data-an stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To answer the existence of optimal swimmer learning/teaching strategies, this
work introduces a two-level clustering in order to analyze temporal dynamics of
motor learning in breaststroke swimming. Each level has been performed through
Sparse Fisher-EM, an unsupervised framework which can be applied efficiently on
large and correlated datasets. The induced sparsity selects key points of the
coordination phase without any prior knowledge.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2014 20:16:05 GMT"
}
] | 2014-01-08T00:00:00 | [
[
"Komar",
"John",
""
],
[
"Hérault",
"Romain",
""
],
[
"Seifert",
"Ludovic",
""
]
] | TITLE: Key point selection and clustering of swimmer coordination through
Sparse Fisher-EM
ABSTRACT: To answer the existence of optimal swimmer learning/teaching strategies, this
work introduces a two-level clustering in order to analyze temporal dynamics of
motor learning in breaststroke swimming. Each level has been performed through
Sparse Fisher-EM, an unsupervised framework which can be applied efficiently on
large and correlated datasets. The induced sparsity selects key points of the
coordination phase without any prior knowledge.
| no_new_dataset | 0.946498 |
1311.4276 | Michael (Micky) Fire | Michael Fire and Yuval Elovici | Data Mining of Online Genealogy Datasets for Revealing Lifespan Patterns
in Human Population | null | null | null | null | cs.SI q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online genealogy datasets contain extensive information about millions of
people and their past and present family connections. This vast amount of data
can assist in identifying various patterns in human population. In this study,
we present methods and algorithms which can assist in identifying variations in
lifespan distributions of human population in the past centuries, in detecting
social and genetic features which correlate with human lifespan, and in
constructing predictive models of human lifespan based on various features
which can easily be extracted from genealogy datasets.
We have evaluated the presented methods and algorithms on a large online
genealogy dataset with over a million profiles and over 9 million connections,
all of which were collected from the WikiTree website. Our findings indicate
that significant but small positive correlations exist between the parents'
lifespan and their children's lifespan. Additionally, we found slightly higher
and significant correlations between the lifespans of spouses. We also
discovered a very small positive and significant correlation between longevity
and reproductive success in males, and a small and significant negative
correlation between longevity and reproductive success in females. Moreover,
our machine learning algorithms presented better than random classification
results in predicting which people who outlive the age of 50 will also outlive
the age of 80.
We believe that this study will be the first of many studies which utilize
the wealth of data on human populations, existing in online genealogy datasets,
to better understand factors which influence human lifespan. Understanding
these factors can assist scientists in providing solutions for successful
aging.
| [
{
"version": "v1",
"created": "Mon, 18 Nov 2013 06:23:25 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Jan 2014 10:21:06 GMT"
}
] | 2014-01-07T00:00:00 | [
[
"Fire",
"Michael",
""
],
[
"Elovici",
"Yuval",
""
]
] | TITLE: Data Mining of Online Genealogy Datasets for Revealing Lifespan Patterns
in Human Population
ABSTRACT: Online genealogy datasets contain extensive information about millions of
people and their past and present family connections. This vast amount of data
can assist in identifying various patterns in human population. In this study,
we present methods and algorithms which can assist in identifying variations in
lifespan distributions of human population in the past centuries, in detecting
social and genetic features which correlate with human lifespan, and in
constructing predictive models of human lifespan based on various features
which can easily be extracted from genealogy datasets.
We have evaluated the presented methods and algorithms on a large online
genealogy dataset with over a million profiles and over 9 million connections,
all of which were collected from the WikiTree website. Our findings indicate
that significant but small positive correlations exist between the parents'
lifespan and their children's lifespan. Additionally, we found slightly higher
and significant correlations between the lifespans of spouses. We also
discovered a very small positive and significant correlation between longevity
and reproductive success in males, and a small and significant negative
correlation between longevity and reproductive success in females. Moreover,
our machine learning algorithms presented better than random classification
results in predicting which people who outlive the age of 50 will also outlive
the age of 80.
We believe that this study will be the first of many studies which utilize
the wealth of data on human populations, existing in online genealogy datasets,
to better understand factors which influence human lifespan. Understanding
these factors can assist scientists in providing solutions for successful
aging.
| no_new_dataset | 0.553596 |
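The genealogy record above reports small positive correlations between parents' and children's lifespans. The toy sketch below shows the shape of such an analysis on a synthetic table; the column names and the simulated heritable component are hypothetical and unrelated to the actual WikiTree-derived data.

```python
# Toy sketch of a lifespan-correlation analysis on synthetic data. Column
# names ("parent_lifespan", "child_lifespan") are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 5000
parent = rng.normal(70, 12, n).clip(30, 105)
# Simulate a small positive heritable component plus noise.
child = 0.15 * parent + rng.normal(60, 12, n)
df = pd.DataFrame({"parent_lifespan": parent, "child_lifespan": child})

r, p = pearsonr(df["parent_lifespan"], df["child_lifespan"])
print(f"Pearson r = {r:.3f} (p = {p:.2e})")  # expect a small positive r
```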
1401.0778 | Hua-Wei Shen | Hua-Wei Shen, Dashun Wang, Chaoming Song, Albert-L\'aszl\'o Barab\'asi | Modeling and Predicting Popularity Dynamics via Reinforced Poisson
Processes | 8 pages, 5 figures; 3 tables | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An ability to predict the popularity dynamics of individual items within a
complex evolving system has important implications in an array of areas. Here
we propose a generative probabilistic framework using a reinforced Poisson
process to model explicitly the process through which individual items gain
their popularity. This model distinguishes itself from existing models via its
capability of modeling the arrival process of popularity and its remarkable
power at predicting the popularity of individual items. It possesses the
flexibility of applying Bayesian treatment to further improve the predictive
power using a conjugate prior. Extensive experiments on a longitudinal citation
dataset demonstrate that this model consistently outperforms existing
popularity prediction methods.
| [
{
"version": "v1",
"created": "Sat, 4 Jan 2014 05:53:18 GMT"
}
] | 2014-01-07T00:00:00 | [
[
"Shen",
"Hua-Wei",
""
],
[
"Wang",
"Dashun",
""
],
[
"Song",
"Chaoming",
""
],
[
"Barabási",
"Albert-László",
""
]
] | TITLE: Modeling and Predicting Popularity Dynamics via Reinforced Poisson
Processes
ABSTRACT: An ability to predict the popularity dynamics of individual items within a
complex evolving system has important implications in an array of areas. Here
we propose a generative probabilistic framework using a reinforced Poisson
process to model explicitly the process through which individual items gain
their popularity. This model distinguishes itself from existing models via its
capability of modeling the arrival process of popularity and its remarkable
power at predicting the popularity of individual items. It possesses the
flexibility of applying Bayesian treatment to further improve the predictive
power using a conjugate prior. Extensive experiments on a longitudinal citation
dataset demonstrate that this model consistently outperforms existing
popularity prediction methods.
| no_new_dataset | 0.944638 |
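The record above models popularity with a reinforced Poisson process, whose rate multiplies a relaxation function by the number of events accumulated so far. The sketch below simulates such a process with Ogata thinning; the exponential relaxation f(t) = exp(-t/tau) and all parameter values are assumptions for illustration, and the paper's Bayesian prediction step is not shown.

```python
# Sketch: simulate a reinforced Poisson process with intensity
#   lambda(t) = c * f(t) * (m + N(t)),
# where N(t) is the number of events so far, using Ogata thinning.
import numpy as np

def simulate_rpp(c=0.8, m=5.0, tau=3.0, horizon=20.0, seed=0):
    """Return event times of a reinforced Poisson process on [0, horizon]."""
    rng = np.random.default_rng(seed)
    f = lambda t: np.exp(-t / tau)              # decreasing relaxation function
    t, events = 0.0, []
    while True:
        # Dominating rate: valid until the next event because f is decreasing
        # and the count stays fixed between events.
        lam_bar = c * f(t) * (m + len(events))
        cand = t + rng.exponential(1.0 / lam_bar)
        if cand >= horizon:
            break
        if rng.random() < f(cand) / f(t):        # thinning acceptance
            events.append(cand)                  # accepted event reinforces the rate
        t = cand
    return np.array(events)

if __name__ == "__main__":
    for c in (0.4, 0.8):
        print(f"c = {c}: {simulate_rpp(c=c).size} events (e.g. citations) by the horizon")
```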
1401.0794 | Taraka Rama Kasicheyanula | Taraka Rama, Lars Borin | Properties of phoneme N-grams across the world's language families | null | null | null | null | cs.CL stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we investigate the properties of phoneme N-grams across half
of the world's languages. We investigate if the sizes of three different N-gram
distributions of the world's language families obey a power law. Further, the
N-gram distributions of language families parallel the sizes of the families,
which seem to obey a power law distribution. The correlation between N-gram
distributions and language family sizes improves with increasing values of N.
We applied statistical tests, originally given by physicists, to test the
hypothesis of power law fit to twelve different datasets. The study also raises
some new questions about the use of N-gram distributions in linguistic
research, which we answer by running a statistical test.
| [
{
"version": "v1",
"created": "Sat, 4 Jan 2014 09:50:55 GMT"
}
] | 2014-01-07T00:00:00 | [
[
"Rama",
"Taraka",
""
],
[
"Borin",
"Lars",
""
]
] | TITLE: Properties of phoneme N-grams across the world's language families
ABSTRACT: In this article, we investigate the properties of phoneme N-grams across half
of the world's languages. We investigate if the sizes of three different N-gram
distributions of the world's language families obey a power law. Further, the
N-gram distributions of language families parallel the sizes of the families,
which seem to obey a power law distribution. The correlation between N-gram
distributions and language family sizes improves with increasing values of N.
We applied statistical tests, originally given by physicists, to test the
hypothesis of power law fit to twelve different datasets. The study also raises
some new questions about the use of N-gram distributions in linguistic
research, which we answer by running a statistical test.
| no_new_dataset | 0.953535 |
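The record above tests whether N-gram distribution sizes follow a power law. A minimal sketch of the standard continuous maximum-likelihood estimator for a power-law exponent (Clauset-Shalizi-Newman style) is given below; it illustrates the kind of fit involved but does not reproduce the paper's exact statistical tests.

```python
# Sketch: maximum-likelihood estimate of a power-law exponent for data
# x >= xmin (continuous estimator), with a crude standard error.
import numpy as np

def powerlaw_alpha_mle(x, xmin):
    """alpha_hat = 1 + n / sum(ln(x_i / xmin)) for x_i >= xmin."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    n = x.size
    alpha = 1.0 + n / np.log(x / xmin).sum()
    std_err = (alpha - 1.0) / np.sqrt(n)
    return alpha, std_err

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Draw from a Pareto (power-law) distribution with true alpha = 2.5.
    true_alpha, xmin = 2.5, 1.0
    x = xmin * (1.0 - rng.random(10_000)) ** (-1.0 / (true_alpha - 1.0))
    alpha, se = powerlaw_alpha_mle(x, xmin)
    print(f"alpha_hat = {alpha:.3f} +/- {se:.3f}")
```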
1401.0864 | Maryam Khademi | Mingming Fan, Maryam Khademi | Predicting a Business Star in Yelp from Its Reviews Text Alone | 5 pages, 6 figures, 2 tables | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Yelp online reviews are invaluable source of information for users to choose
where to visit or what to eat among numerous available options. But due to
overwhelming number of reviews, it is almost impossible for users to go through
all reviews and find the information they are looking for. To provide a
business overview, one solution is to give the business a 1-5 star(s). This
rating can be subjective and biased toward users' personality. In this paper, we
predict a business rating based on user-generated review texts alone. This not
only provides an overview of plentiful long review texts but also cancels out
subjectivity. Selecting the restaurant category from Yelp Dataset Challenge, we
use a combination of three feature generation methods as well as four machine
learning models to find the best prediction result. Our approach is to create
bag of words from the top frequent words in all raw text reviews, or top
frequent words/adjectives from results of Part-of-Speech analysis. Our results
show Root Mean Square Error (RMSE) of 0.6 for the combination of Linear
Regression with either of the top frequent words from raw data or top frequent
adjectives after Part-of-Speech (POS).
| [
{
"version": "v1",
"created": "Sun, 5 Jan 2014 03:29:05 GMT"
}
] | 2014-01-07T00:00:00 | [
[
"Fan",
"Mingming",
""
],
[
"Khademi",
"Maryam",
""
]
] | TITLE: Predicting a Business Star in Yelp from Its Reviews Text Alone
ABSTRACT: Yelp online reviews are invaluable source of information for users to choose
where to visit or what to eat among numerous available options. But due to
overwhelming number of reviews, it is almost impossible for users to go through
all reviews and find the information they are looking for. To provide a
business overview, one solution is to give the business a 1-5 star(s). This
rating can be subjective and biased toward users' personality. In this paper, we
predict a business rating based on user-generated review texts alone. This not
only provides an overview of plentiful long review texts but also cancels out
subjectivity. Selecting the restaurant category from Yelp Dataset Challenge, we
use a combination of three feature generation methods as well as four machine
learning models to find the best prediction result. Our approach is to create
bag of words from the top frequent words in all raw text reviews, or top
frequent words/adjectives from results of Part-of-Speech analysis. Our results
show Root Mean Square Error (RMSE) of 0.6 for the combination of Linear
Regression with either of the top frequent words from raw data or top frequent
adjectives after Part-of-Speech (POS).
| no_new_dataset | 0.951188 |
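The record above predicts Yelp star ratings from review text with bag-of-words features and linear regression, evaluated by RMSE. The sketch below is a minimal version of that pipeline on a few made-up reviews; the inline texts stand in for the Yelp Dataset Challenge data and the part-of-speech filtering step is omitted.

```python
# Minimal "top frequent words -> linear regression -> RMSE" pipeline. The tiny
# inline reviews are placeholders, not the Yelp Dataset Challenge data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

reviews = [
    "great food and friendly staff", "terrible service, cold food",
    "amazing tacos, will come back", "mediocre burger but nice patio",
    "worst experience ever, rude waiter", "delicious pasta and great wine",
    "okay pizza, slow kitchen", "fantastic brunch, lovely atmosphere",
]
stars = np.array([5.0, 1.0, 5.0, 3.0, 1.0, 5.0, 3.0, 5.0])

# Bag of words restricted to the most frequent terms (here capped at 50).
vec = CountVectorizer(max_features=50)
X = vec.fit_transform(reviews)

X_tr, X_te, y_tr, y_te = train_test_split(X, stars, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE on held-out reviews: {rmse:.2f}")
```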
1401.0987 | Chi Jin | Chi Jin, Ziteng Wang, Junliang Huang, Yiqiao Zhong, Liwei Wang | Differentially Private Data Releasing for Smooth Queries with Synthetic
Database Output | null | null | null | null | cs.DB stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider accurately answering smooth queries while preserving differential
privacy. A query is said to be $K$-smooth if it is specified by a function
defined on $[-1,1]^d$ whose partial derivatives up to order $K$ are all
bounded. We develop an $\epsilon$-differentially private mechanism for the
class of $K$-smooth queries. The major advantage of the algorithm is that it
outputs a synthetic database. In real applications, a synthetic database output
is appealing. Our mechanism achieves an accuracy of $O
(n^{-\frac{K}{2d+K}}/\epsilon )$, and runs in polynomial time. We also
generalize the mechanism to preserve $(\epsilon, \delta)$-differential privacy
with slightly improved accuracy. Extensive experiments on benchmark datasets
demonstrate that the mechanisms have good accuracy and are efficient.
| [
{
"version": "v1",
"created": "Mon, 6 Jan 2014 05:12:01 GMT"
}
] | 2014-01-07T00:00:00 | [
[
"Jin",
"Chi",
""
],
[
"Wang",
"Ziteng",
""
],
[
"Huang",
"Junliang",
""
],
[
"Zhong",
"Yiqiao",
""
],
[
"Wang",
"Liwei",
""
]
] | TITLE: Differentially Private Data Releasing for Smooth Queries with Synthetic
Database Output
ABSTRACT: We consider accurately answering smooth queries while preserving differential
privacy. A query is said to be $K$-smooth if it is specified by a function
defined on $[-1,1]^d$ whose partial derivatives up to order $K$ are all
bounded. We develop an $\epsilon$-differentially private mechanism for the
class of $K$-smooth queries. The major advantage of the algorithm is that it
outputs a synthetic database. In real applications, a synthetic database output
is appealing. Our mechanism achieves an accuracy of $O
(n^{-\frac{K}{2d+K}}/\epsilon )$, and runs in polynomial time. We also
generalize the mechanism to preserve $(\epsilon, \delta)$-differential privacy
with slightly improved accuracy. Extensive experiments on benchmark datasets
demonstrate that the mechanisms have good accuracy and are efficient.
| no_new_dataset | 0.945248 |
1401.1191 | Juri Ranieri | Zichong Chen and Juri Ranieri and Runwei Zhang and Martin Vetterli | DASS: Distributed Adaptive Sparse Sensing | Submitted to IEEE Transactions on Wireless Communications | null | null | null | cs.IT cs.NI math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wireless sensor networks are often designed to perform two tasks: sensing a
physical field and transmitting the data to end-users. A crucial aspect of the
design of a WSN is the minimization of the overall energy consumption. Previous
researchers aim at optimizing the energy spent for the communication, while
mostly ignoring the energy cost due to sensing. Recently, it has been shown
that considering the sensing energy cost can be beneficial for further
improving the overall energy efficiency. More precisely, sparse sensing
techniques were proposed to reduce the amount of collected samples and recover
the missing data by using data statistics. While the majority of these
techniques use fixed or random sampling patterns, we propose to adaptively
learn the signal model from the measurements and use the model to schedule when
and where to sample the physical field. The proposed method requires minimal
on-board computation, no inter-node communications and still achieves appealing
reconstruction performance. With experiments on real-world datasets, we
demonstrate significant improvements over both traditional sensing schemes and
the state-of-the-art sparse sensing schemes, particularly when the measured
data is characterized by a strong intra-sensor (temporal) or inter-sensors
(spatial) correlation.
| [
{
"version": "v1",
"created": "Thu, 7 Nov 2013 10:40:47 GMT"
}
] | 2014-01-07T00:00:00 | [
[
"Chen",
"Zichong",
""
],
[
"Ranieri",
"Juri",
""
],
[
"Zhang",
"Runwei",
""
],
[
"Vetterli",
"Martin",
""
]
] | TITLE: DASS: Distributed Adaptive Sparse Sensing
ABSTRACT: Wireless sensor networks are often designed to perform two tasks: sensing a
physical field and transmitting the data to end-users. A crucial aspect of the
design of a WSN is the minimization of the overall energy consumption. Previous
researchers aim at optimizing the energy spent for the communication, while
mostly ignoring the energy cost due to sensing. Recently, it has been shown
that considering the sensing energy cost can be beneficial for further
improving the overall energy efficiency. More precisely, sparse sensing
techniques were proposed to reduce the amount of collected samples and recover
the missing data by using data statistics. While the majority of these
techniques use fixed or random sampling patterns, we propose to adaptively
learn the signal model from the measurements and use the model to schedule when
and where to sample the physical field. The proposed method requires minimal
on-board computation, no inter-node communications and still achieves appealing
reconstruction performance. With experiments on real-world datasets, we
demonstrate significant improvements over both traditional sensing schemes and
the state-of-the-art sparse sensing schemes, particularly when the measured
data is characterized by a strong intra-sensor (temporal) or inter-sensors
(spatial) correlation.
| no_new_dataset | 0.945901 |
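The record above recovers unsampled sensor readings from a signal model learned from past measurements. The sketch below illustrates that general idea with a PCA basis fitted to historical snapshots and least squares on the observed entries; it is a generic stand-in, not the DASS scheduling algorithm, and all sizes are arbitrary.

```python
# Sketch of model-based sparse sensing: learn a low-dimensional basis from
# historical sensor snapshots, then reconstruct a partially sampled snapshot
# by least squares on the observed entries only. Generic PCA stand-in.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_history, rank = 60, 300, 4

# Synthetic correlated field: a few smooth spatial modes plus noise.
modes = np.stack([np.sin((k + 1) * np.linspace(0, np.pi, n_sensors)) for k in range(rank)])
history = rng.standard_normal((n_history, rank)) @ modes \
    + 0.05 * rng.standard_normal((n_history, n_sensors))

# Learn a rank-r basis from history (PCA via SVD on centred data).
mean = history.mean(axis=0)
_, _, Vt = np.linalg.svd(history - mean, full_matrices=False)
B = Vt[:rank].T                          # (n_sensors, rank) basis

# New snapshot, but only 20% of the sensors actually sample it.
x_true = rng.standard_normal(rank) @ modes + 0.05 * rng.standard_normal(n_sensors)
observed = rng.choice(n_sensors, size=n_sensors // 5, replace=False)

# Least-squares fit of the basis coefficients on the observed entries.
coeff, *_ = np.linalg.lstsq(B[observed], x_true[observed] - mean[observed], rcond=None)
x_hat = mean + B @ coeff

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```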
1401.0561 | Janne Lindqvist | Michael Sherman, Gradeigh Clark, Yulong Yang, Shridatt Sugrim, Arttu
Modig, Janne Lindqvist, Antti Oulasvirta, Teemu Roos | User-Generated Free-Form Gestures for Authentication: Security and
Memorability | null | null | null | null | cs.CR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the security and memorability of free-form multitouch
gestures for mobile authentication. Towards this end, we collected a dataset
with a generate-test-retest paradigm where participants (N=63) generated
free-form gestures, repeated them, and were later retested for memory. Half of
the participants decided to generate one-finger gestures, and the other half
generated multi-finger gestures. Although there has been recent work on
template-based gestures, there are yet no metrics to analyze security of either
template or free-form gestures. For example, entropy-based metrics used for
text-based passwords are not suitable for capturing the security and
memorability of free-form gestures. Hence, we modify a recently proposed metric
for analyzing information capacity of continuous full-body movements for this
purpose. Our metric computed estimated mutual information in repeated sets of
gestures. Surprisingly, one-finger gestures had higher average mutual
information. Gestures with many hard angles and turns had the highest mutual
information. The best-remembered gestures included signatures and simple
angular shapes. We also implemented a multitouch recognizer to evaluate the
practicality of free-form gestures in a real authentication system and how they
perform against shoulder surfing attacks. We conclude the paper with strategies
for generating secure and memorable free-form gestures, which present a robust
method for mobile authentication.
| [
{
"version": "v1",
"created": "Thu, 2 Jan 2014 23:15:27 GMT"
}
] | 2014-01-06T00:00:00 | [
[
"Sherman",
"Michael",
""
],
[
"Clark",
"Gradeigh",
""
],
[
"Yang",
"Yulong",
""
],
[
"Sugrim",
"Shridatt",
""
],
[
"Modig",
"Arttu",
""
],
[
"Lindqvist",
"Janne",
""
],
[
"Oulasvirta",
"Antti",
""
],
[
"Roos",
"Teemu",
""
]
] | TITLE: User-Generated Free-Form Gestures for Authentication: Security and
Memorability
ABSTRACT: This paper studies the security and memorability of free-form multitouch
gestures for mobile authentication. Towards this end, we collected a dataset
with a generate-test-retest paradigm where participants (N=63) generated
free-form gestures, repeated them, and were later retested for memory. Half of
the participants decided to generate one-finger gestures, and the other half
generated multi-finger gestures. Although there has been recent work on
template-based gestures, there are yet no metrics to analyze security of either
template or free-form gestures. For example, entropy-based metrics used for
text-based passwords are not suitable for capturing the security and
memorability of free-form gestures. Hence, we modify a recently proposed metric
for analyzing information capacity of continuous full-body movements for this
purpose. Our metric computed estimated mutual information in repeated sets of
gestures. Surprisingly, one-finger gestures had higher average mutual
information. Gestures with many hard angles and turns had the highest mutual
information. The best-remembered gestures included signatures and simple
angular shapes. We also implemented a multitouch recognizer to evaluate the
practicality of free-form gestures in a real authentication system and how they
perform against shoulder surfing attacks. We conclude the paper with strategies
for generating secure and memorable free-form gestures, which present a robust
method for mobile authentication.
| new_dataset | 0.96793 |
1310.5288 | Andrew Wilson | Andrew Gordon Wilson, Elad Gilboa, Arye Nehorai, John P. Cunningham | GPatt: Fast Multidimensional Pattern Extrapolation with Gaussian
Processes | 13 Pages, 9 Figures, 1 Table. Submitted for publication | null | null | null | stat.ML cs.AI cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian processes are typically used for smoothing and interpolation on
small datasets. We introduce a new Bayesian nonparametric framework -- GPatt --
enabling automatic pattern extrapolation with Gaussian processes on large
multidimensional datasets. GPatt unifies and extends highly expressive kernels
and fast exact inference techniques. Without human intervention -- no hand
crafting of kernel features, and no sophisticated initialisation procedures --
we show that GPatt can solve large scale pattern extrapolation, inpainting, and
kernel discovery problems, including a problem with 383400 training points. We
find that GPatt significantly outperforms popular alternative scalable Gaussian
process methods in speed and accuracy. Moreover, we discover profound
differences between each of these methods, suggesting expressive kernels,
nonparametric representations, and exact inference are useful for modelling
large scale multidimensional patterns.
| [
{
"version": "v1",
"created": "Sun, 20 Oct 2013 01:26:45 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Oct 2013 16:58:35 GMT"
},
{
"version": "v3",
"created": "Tue, 31 Dec 2013 14:10:34 GMT"
}
] | 2014-01-03T00:00:00 | [
[
"Wilson",
"Andrew Gordon",
""
],
[
"Gilboa",
"Elad",
""
],
[
"Nehorai",
"Arye",
""
],
[
"Cunningham",
"John P.",
""
]
] | TITLE: GPatt: Fast Multidimensional Pattern Extrapolation with Gaussian
Processes
ABSTRACT: Gaussian processes are typically used for smoothing and interpolation on
small datasets. We introduce a new Bayesian nonparametric framework -- GPatt --
enabling automatic pattern extrapolation with Gaussian processes on large
multidimensional datasets. GPatt unifies and extends highly expressive kernels
and fast exact inference techniques. Without human intervention -- no hand
crafting of kernel features, and no sophisticated initialisation procedures --
we show that GPatt can solve large scale pattern extrapolation, inpainting, and
kernel discovery problems, including a problem with 383400 training points. We
find that GPatt significantly outperforms popular alternative scalable Gaussian
process methods in speed and accuracy. Moreover, we discover profound
differences between each of these methods, suggesting expressive kernels,
nonparametric representations, and exact inference are useful for modelling
large scale multidimensional patterns.
| no_new_dataset | 0.941761 |
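The record above extends Gaussian-process regression to large multidimensional pattern extrapolation. For orientation, the sketch below is the plain O(n^3) exact GP regression baseline with an RBF kernel that such methods are built to scale beyond; it is not GPatt's expressive-kernel inference, and all hyperparameters are fixed by hand.

```python
# Tiny exact Gaussian-process regression with an RBF kernel in plain NumPy.
import numpy as np

def rbf(a, b, lengthscale=0.5, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 6, 25))
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(25)
x_test = np.linspace(0, 6, 200)

noise = 0.1 ** 2
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
K_s = rbf(x_test, x_train)

alpha = np.linalg.solve(K, y_train)
mean = K_s @ alpha                               # posterior mean
cov = rbf(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0, None))    # posterior standard deviation

print("max predictive std:", float(std.max()))
```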
1311.0536 | Nikos Bikakis | Nikos Bikakis, Chrisa Tsinaraki, Ioannis Stavrakantonakis, Nektarios
Gioldasis, Stavros Christodoulakis | The SPARQL2XQuery Interoperability Framework. Utilizing Schema Mapping,
Schema Transformation and Query Translation to Integrate XML and the Semantic
Web | To appear in World Wide Web Journal (WWWJ), Springer 2013 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Web of Data is an open environment consisting of a great number of large
inter-linked RDF datasets from various domains. In this environment,
organizations and companies adopt the Linked Data practices utilizing Semantic
Web (SW) technologies, in order to publish their data and offer SPARQL
endpoints (i.e., SPARQL-based search services). On the other hand, the dominant
standard for information exchange in the Web today is XML. The SW and XML
worlds and their developed infrastructures are based on different data models,
semantics and query languages. Thus, it is crucial to develop interoperability
mechanisms that allow the Web of Data users to access XML datasets, using
SPARQL, from their own working environments. It is unrealistic to expect that
all the existing legacy data (e.g., Relational, XML, etc.) will be transformed
into SW data. Therefore, publishing legacy data as Linked Data and providing
SPARQL endpoints over them has become a major research challenge. In this
direction, we introduce the SPARQL2XQuery Framework which creates an
interoperable environment, where SPARQL queries are automatically translated to
XQuery queries, in order to access XML data across the Web. The SPARQL2XQuery
Framework provides a mapping model for the expression of OWL-RDF/S to XML
Schema mappings as well as a method for SPARQL to XQuery translation. To this
end, our Framework supports both manual and automatic mapping specification
between ontologies and XML Schemas. In the automatic mapping specification
scenario, the SPARQL2XQuery exploits the XS2OWL component which transforms XML
Schemas into OWL ontologies. Finally, extensive experiments have been conducted
in order to evaluate the schema transformation, mapping generation, query
translation and query evaluation efficiency, using both real and synthetic
datasets.
| [
{
"version": "v1",
"created": "Sun, 3 Nov 2013 21:57:48 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Dec 2013 00:20:14 GMT"
},
{
"version": "v3",
"created": "Thu, 2 Jan 2014 02:53:19 GMT"
}
] | 2014-01-03T00:00:00 | [
[
"Bikakis",
"Nikos",
""
],
[
"Tsinaraki",
"Chrisa",
""
],
[
"Stavrakantonakis",
"Ioannis",
""
],
[
"Gioldasis",
"Nektarios",
""
],
[
"Christodoulakis",
"Stavros",
""
]
] | TITLE: The SPARQL2XQuery Interoperability Framework. Utilizing Schema Mapping,
Schema Transformation and Query Translation to Integrate XML and the Semantic
Web
ABSTRACT: The Web of Data is an open environment consisting of a great number of large
inter-linked RDF datasets from various domains. In this environment,
organizations and companies adopt the Linked Data practices utilizing Semantic
Web (SW) technologies, in order to publish their data and offer SPARQL
endpoints (i.e., SPARQL-based search services). On the other hand, the dominant
standard for information exchange in the Web today is XML. The SW and XML
worlds and their developed infrastructures are based on different data models,
semantics and query languages. Thus, it is crucial to develop interoperability
mechanisms that allow the Web of Data users to access XML datasets, using
SPARQL, from their own working environments. It is unrealistic to expect that
all the existing legacy data (e.g., Relational, XML, etc.) will be transformed
into SW data. Therefore, publishing legacy data as Linked Data and providing
SPARQL endpoints over them has become a major research challenge. In this
direction, we introduce the SPARQL2XQuery Framework which creates an
interoperable environment, where SPARQL queries are automatically translated to
XQuery queries, in order to access XML data across the Web. The SPARQL2XQuery
Framework provides a mapping model for the expression of OWL-RDF/S to XML
Schema mappings as well as a method for SPARQL to XQuery translation. To this
end, our Framework supports both manual and automatic mapping specification
between ontologies and XML Schemas. In the automatic mapping specification
scenario, the SPARQL2XQuery exploits the XS2OWL component which transforms XML
Schemas into OWL ontologies. Finally, extensive experiments have been conducted
in order to evaluate the schema transformation, mapping generation, query
translation and query evaluation efficiency, using both real and synthetic
datasets.
| no_new_dataset | 0.942348 |
1312.6158 | Mohammad Pezeshki | Mohammad Ali Keyvanrad, Mohammad Pezeshki, and Mohammad Ali
Homayounpour | Deep Belief Networks for Image Denoising | ICLR 2014 Conference track | null | null | null | cs.LG cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Belief Networks which are hierarchical generative models are effective
tools for feature representation and extraction. Furthermore, DBNs can be used
in numerous aspects of Machine Learning such as image denoising. In this paper,
we propose a novel method for image denoising which relies on the DBNs' ability
in feature representation. This work is based upon learning of the noise
behavior. Generally, features which are extracted using DBNs are presented as
the values of the last layer nodes. We train a DBN in such a way that the network
totally distinguishes between nodes presenting noise and nodes presenting image
content in the last layer of the DBN, i.e. the nodes in the last layer of the trained
DBN are divided into two distinct groups of nodes. After detecting the nodes
which are presenting the noise, we are able to make the noise nodes inactive
and reconstruct a noiseless image. In section 4 we explore the results of
applying this method on the MNIST dataset of handwritten digits which is
corrupted with additive white Gaussian noise (AWGN). A reduction of 65.9% in
average mean square error (MSE) was achieved when the proposed method was used
for the reconstruction of the noisy images.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:56:38 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jan 2014 17:04:35 GMT"
}
] | 2014-01-03T00:00:00 | [
[
"Keyvanrad",
"Mohammad Ali",
""
],
[
"Pezeshki",
"Mohammad",
""
],
[
"Homayounpour",
"Mohammad Ali",
""
]
] | TITLE: Deep Belief Networks for Image Denoising
ABSTRACT: Deep Belief Networks which are hierarchical generative models are effective
tools for feature representation and extraction. Furthermore, DBNs can be used
in numerous aspects of Machine Learning such as image denoising. In this paper,
we propose a novel method for image denoising which relies on the DBNs' ability
in feature representation. This work is based upon learning of the noise
behavior. Generally, features which are extracted using DBNs are presented as
the values of the last layer nodes. We train a DBN a way that the network
totally distinguishes between nodes presenting noise and nodes presenting image
content in the last later of DBN, i.e. the nodes in the last layer of trained
DBN are divided into two distinct groups of nodes. After detecting the nodes
which are presenting the noise, we are able to make the noise nodes inactive
and reconstruct a noiseless image. In section 4 we explore the results of
applying this method on the MNIST dataset of handwritten digits which is
corrupted with additive white Gaussian noise (AWGN). A reduction of 65.9% in
average mean square error (MSE) was achieved when the proposed method was used
for the reconstruction of the noisy images.
| no_new_dataset | 0.945651 |
1401.0104 | Tao Xiong | Yukun Bao, Tao Xiong, Zhongyi Hu | PSO-MISMO Modeling Strategy for Multi-Step-Ahead Time Series Prediction | 14 pages. IEEE Transactions on Cybernetics. 2013 | null | 10.1109/TCYB.2013.2265084 | null | cs.AI cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-step-ahead time series prediction is one of the most challenging
research topics in the field of time series modeling and prediction, and is
continually under research. Recently, the multiple-input several
multiple-outputs (MISMO) modeling strategy has been proposed as a promising
alternative for multi-step-ahead time series prediction, exhibiting advantages
compared with the two currently dominating strategies, the iterated and the
direct strategies. Built on the established MISMO strategy, this study proposes
a particle swarm optimization (PSO)-based MISMO modeling strategy, which is
capable of determining the number of sub-models in a self-adaptive mode, with
varying prediction horizons. Rather than deriving crisp divides with equal-sized
prediction horizons from the established MISMO, the proposed PSO-MISMO
strategy, implemented with neural networks, employs a heuristic to create
flexible divides with varying sizes of prediction horizons and to generate
corresponding sub-models, providing considerable flexibility in model
construction, which has been validated with simulated and real datasets.
| [
{
"version": "v1",
"created": "Tue, 31 Dec 2013 07:09:02 GMT"
}
] | 2014-01-03T00:00:00 | [
[
"Bao",
"Yukun",
""
],
[
"Xiong",
"Tao",
""
],
[
"Hu",
"Zhongyi",
""
]
] | TITLE: PSO-MISMO Modeling Strategy for Multi-Step-Ahead Time Series Prediction
ABSTRACT: Multi-step-ahead time series prediction is one of the most challenging
research topics in the field of time series modeling and prediction, and is
continually under research. Recently, the multiple-input several
multiple-outputs (MISMO) modeling strategy has been proposed as a promising
alternative for multi-step-ahead time series prediction, exhibiting advantages
compared with the two currently dominating strategies, the iterated and the
direct strategies. Built on the established MISMO strategy, this study proposes
a particle swarm optimization (PSO)-based MISMO modeling strategy, which is
capable of determining the number of sub-models in a self-adaptive mode, with
varying prediction horizons. Rather than deriving crisp divides with equal-sized
prediction horizons from the established MISMO, the proposed PSO-MISMO
strategy, implemented with neural networks, employs a heuristic to create
flexible divides with varying sizes of prediction horizons and to generate
corresponding sub-models, providing considerable flexibility in model
construction, which has been validated with simulated and real datasets.
| no_new_dataset | 0.947817 |
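The record above builds on the MISMO strategy, which splits an H-step-ahead horizon into several multi-output sub-models. The sketch below shows that decomposition with equal-sized blocks and a plain linear regressor; the paper's PSO search instead chooses flexible block sizes and uses neural networks.

```python
# Sketch of the MISMO idea: split an H-step horizon into contiguous blocks and
# fit one multi-output sub-model per block. Blocks here are equal-sized and the
# regressor is linear, unlike the PSO-chosen, neural-network sub-models above.
import numpy as np
from sklearn.linear_model import LinearRegression

def make_supervised(series, n_lags, horizon):
    X, Y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        Y.append(series[t:t + horizon])
    return np.array(X), np.array(Y)

rng = np.random.default_rng(0)
series = np.sin(np.arange(500) * 0.1) + 0.05 * rng.standard_normal(500)

n_lags, horizon, n_blocks = 12, 8, 4
X, Y = make_supervised(series, n_lags, horizon)
X_tr, Y_tr, X_te, Y_te = X[:-50], Y[:-50], X[-50:], Y[-50:]

# One multi-output sub-model per horizon block (MISMO decomposition).
blocks = np.array_split(np.arange(horizon), n_blocks)
models = [LinearRegression().fit(X_tr, Y_tr[:, b]) for b in blocks]

Y_hat = np.hstack([m.predict(X_te) for m in models])
rmse = np.sqrt(np.mean((Y_hat - Y_te) ** 2))
print(f"multi-step RMSE over an {horizon}-step horizon: {rmse:.3f}")
```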
1401.0116 | Dinesh Govindaraj | Dinesh Govindaraj, Raman Sankaran, Sreedal Menon, Chiranjib
Bhattacharyya | Controlled Sparsity Kernel Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple Kernel Learning(MKL) on Support Vector Machines(SVMs) has been a
popular front of research in recent times due to its success in application
problems like Object Categorization. This success is due to the fact that MKL
has the ability to choose from a variety of feature kernels to identify the
optimal kernel combination. But the initial formulation of MKL was only able to
select the best of the features and misses out many other informative kernels
presented. To overcome this, the Lp norm based formulation was proposed by
Kloft et. al. This formulation is capable of choosing a non-sparse set of
kernels through a control parameter p. Unfortunately, the parameter p does not
have a direct meaning to the number of kernels selected. We have observed that
stricter control over the number of kernels selected gives us an edge over
these techniques in terms of accuracy of classification and also helps us to
fine tune the algorithms to the time requirements at hand. In this work, we
propose a Controlled Sparsity Kernel Learning (CSKL) formulation that can
strictly control the number of kernels which we wish to select. The CSKL
formulation introduces a parameter t which directly corresponds to the number
of kernels selected. It is important to note that a search in t space is finite
and fast as compared to p. We have also provided an efficient Reduced Gradient
Descent based algorithm to solve the CSKL formulation, which is proven to
converge. Through our experiments on the Caltech101 Object Categorization
dataset, we have also shown that one can achieve better accuracies than the
previous formulations through the right choice of t.
| [
{
"version": "v1",
"created": "Tue, 31 Dec 2013 09:13:09 GMT"
}
] | 2014-01-03T00:00:00 | [
[
"Govindaraj",
"Dinesh",
""
],
[
"Sankaran",
"Raman",
""
],
[
"Menon",
"Sreedal",
""
],
[
"Bhattacharyya",
"Chiranjib",
""
]
] | TITLE: Controlled Sparsity Kernel Learning
ABSTRACT: Multiple Kernel Learning(MKL) on Support Vector Machines(SVMs) has been a
popular front of research in recent times due to its success in application
problems like Object Categorization. This success is due to the fact that MKL
has the ability to choose from a variety of feature kernels to identify the
optimal kernel combination. But the initial formulation of MKL was only able to
select the best of the features and misses out many other informative kernels
presented. To overcome this, the Lp norm based formulation was proposed by
Kloft et. al. This formulation is capable of choosing a non-sparse set of
kernels through a control parameter p. Unfortunately, the parameter p does not
have a direct meaning to the number of kernels selected. We have observed that
stricter control over the number of kernels selected gives us an edge over
these techniques in terms of accuracy of classification and also helps us to
fine tune the algorithms to the time requirements at hand. In this work, we
propose a Controlled Sparsity Kernel Learning (CSKL) formulation that can
strictly control the number of kernels which we wish to select. The CSKL
formulation introduces a parameter t which directly corresponds to the number
of kernels selected. It is important to note that a search in t space is finite
and fast as compared to p. We have also provided an efficient Reduced Gradient
Descent based algorithm to solve the CSKL formulation, which is proven to
converge. Through our experiments on the Caltech101 Object Categorization
dataset, we have also shown that one can achieve better accuracies than the
previous formulations through the right choice of t.
| no_new_dataset | 0.948251 |
1205.0651 | Gaurav Pandey | Ambedkar Dukkipati, Gaurav Pandey, Debarghya Ghoshdastidar, Paramita
Koley, D. M. V. Satya Sriram | Generative Maximum Entropy Learning for Multiclass Classification | null | null | null | null | cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maximum entropy approach to classification is very well studied in applied
statistics and machine learning, and almost all the methods that exist in the
literature are discriminative in nature. In this paper, we introduce a maximum
entropy classification method with feature selection for large dimensional data
such as text datasets that is generative in nature. To tackle the curse of
dimensionality of large data sets, we employ conditional independence
assumption (Naive Bayes) and we perform feature selection simultaneously, by
enforcing a `maximum discrimination' between estimated class conditional
densities. For two class problems, in the proposed method, we use Jeffreys
($J$) divergence to discriminate the class conditional densities. To extend our
method to the multi-class case, we propose a completely new approach by
considering a multi-distribution divergence: we replace Jeffreys divergence by
Jensen-Shannon ($JS$) divergence to discriminate conditional densities of
multiple classes. In order to reduce computational complexity, we employ a
modified Jensen-Shannon divergence ($JS_{GM}$), based on AM-GM inequality. We
show that the resulting divergence is a natural generalization of Jeffreys
divergence to a multiple distributions case. As far as the theoretical
justifications are concerned we show that when one intends to select the best
features in a generative maximum entropy approach, maximum discrimination using
$J-$divergence emerges naturally in binary classification. Performance and
comparative study of the proposed algorithms have been demonstrated on large
dimensional text and gene expression datasets that show our methods scale up
very well with large dimensional datasets.
| [
{
"version": "v1",
"created": "Thu, 3 May 2012 08:49:01 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Jun 2012 09:38:47 GMT"
},
{
"version": "v3",
"created": "Mon, 30 Dec 2013 08:27:53 GMT"
}
] | 2013-12-31T00:00:00 | [
[
"Dukkipati",
"Ambedkar",
""
],
[
"Pandey",
"Gaurav",
""
],
[
"Ghoshdastidar",
"Debarghya",
""
],
[
"Koley",
"Paramita",
""
],
[
"Sriram",
"D. M. V. Satya",
""
]
] | TITLE: Generative Maximum Entropy Learning for Multiclass Classification
ABSTRACT: Maximum entropy approach to classification is very well studied in applied
statistics and machine learning, and almost all the methods that exist in the
literature are discriminative in nature. In this paper, we introduce a maximum
entropy classification method with feature selection for large dimensional data
such as text datasets that is generative in nature. To tackle the curse of
dimensionality of large data sets, we employ conditional independence
assumption (Naive Bayes) and we perform feature selection simultaneously, by
enforcing a `maximum discrimination' between estimated class conditional
densities. For two class problems, in the proposed method, we use Jeffreys
($J$) divergence to discriminate the class conditional densities. To extend our
method to the multi-class case, we propose a completely new approach by
considering a multi-distribution divergence: we replace Jeffreys divergence by
Jensen-Shannon ($JS$) divergence to discriminate conditional densities of
multiple classes. In order to reduce computational complexity, we employ a
modified Jensen-Shannon divergence ($JS_{GM}$), based on AM-GM inequality. We
show that the resulting divergence is a natural generalization of Jeffreys
divergence to a multiple distributions case. As far as the theoretical
justifications are concerned we show that when one intends to select the best
features in a generative maximum entropy approach, maximum discrimination using
$J-$divergence emerges naturally in binary classification. Performance and
comparative study of the proposed algorithms have been demonstrated on large
dimensional text and gene expression datasets that show our methods scale up
very well with large dimensional datasets.
| no_new_dataset | 0.945096 |
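The record above selects features by maximising the discrimination between class-conditional densities using Jeffreys and Jensen-Shannon divergences. The sketch below scores features of toy discrete class distributions with per-feature Jeffreys and uniform-weight JS contributions; the AM-GM-modified JS divergence and the Naive Bayes coupling from the paper are not reproduced.

```python
# Sketch: rank features by how strongly they separate class-conditional
# distributions, via Jeffreys (two classes) and Jensen-Shannon (many classes).
import numpy as np

def jeffreys_per_feature(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return (p - q) * np.log(p / q)           # sums to KL(p||q) + KL(q||p)

def js_per_feature(P, eps=1e-12):
    """P: (n_classes, n_features); rows are class-conditional distributions."""
    P = P + eps
    m = P.mean(axis=0)
    h = lambda x: -x * np.log(x)
    return h(m) - h(P).mean(axis=0)          # sums to the JS divergence

rng = np.random.default_rng(0)
n_features = 20
P = rng.dirichlet(np.ones(n_features), size=3)    # 3 toy class distributions
P[0, :3] *= 5                                     # make 3 features discriminative
P[0] /= P[0].sum()

scores = js_per_feature(P)
top = np.argsort(scores)[::-1][:5]
print("top features by JS contribution:", top)
print("two-class Jeffreys divergence:", float(jeffreys_per_feature(P[0], P[1]).sum()))
```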
1306.2866 | Antoine Isaac | Shenghui Wang, Antoine Isaac, Valentine Charles, Rob Koopman, Anthi
Agoropoulou, and Titia van der Werf | Hierarchical structuring of Cultural Heritage objects within large
aggregations | The paper has been published in the proceedings of the TPDL
conference, see http://tpdl2013.info. For the final version see
http://link.springer.com/chapter/10.1007%2F978-3-642-40501-3_25 | null | 10.1007/978-3-642-40501-3_25 | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Huge amounts of cultural content have been digitised and are available
through digital libraries and aggregators like Europeana.eu. However, it is not
easy for a user to have an overall picture of what is available nor to find
related objects. We propose a method for hier- archically structuring cultural
objects at different similarity levels. We describe a fast, scalable clustering
algorithm with an automated field selection method for finding semantic
clusters. We report a qualitative evaluation on the cluster categories based on
records from the UK and a quantitative one on the results from the complete
Europeana dataset.
| [
{
"version": "v1",
"created": "Wed, 12 Jun 2013 15:40:48 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Dec 2013 22:44:49 GMT"
}
] | 2013-12-31T00:00:00 | [
[
"Wang",
"Shenghui",
""
],
[
"Isaac",
"Antoine",
""
],
[
"Charles",
"Valentine",
""
],
[
"Koopman",
"Rob",
""
],
[
"Agoropoulou",
"Anthi",
""
],
[
"van der Werf",
"Titia",
""
]
] | TITLE: Hierarchical structuring of Cultural Heritage objects within large
aggregations
ABSTRACT: Huge amounts of cultural content have been digitised and are available
through digital libraries and aggregators like Europeana.eu. However, it is not
easy for a user to have an overall picture of what is available nor to find
related objects. We propose a method for hier- archically structuring cultural
objects at different similarity levels. We describe a fast, scalable clustering
algorithm with an automated field selection method for finding semantic
clusters. We report a qualitative evaluation on the cluster categories based on
records from the UK and a quantitative one on the results from the complete
Europeana dataset.
| no_new_dataset | 0.952926 |
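The record above clusters cultural-heritage objects at two levels of similarity. The sketch below shows a generic two-level scheme (coarse k-means, then a finer k-means inside each coarse cluster) on random toy vectors; the record features and the automated field-selection step of the paper are not modelled.

```python
# Sketch of two-level ("hierarchical") clustering: a coarse k-means pass, then
# a finer k-means inside each coarse cluster, on random toy vectors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 8)) for c in range(4)])

coarse = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

labels = np.empty(len(X), dtype=object)
for c in np.unique(coarse):
    idx = np.where(coarse == c)[0]
    fine = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[idx])
    for i, f in zip(idx, fine):
        labels[i] = f"{c}.{f}"               # hierarchical label, e.g. "2.1"

print("example hierarchical labels:", labels[:5])
```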
1312.7511 | Shraddha Shinde | Shraddha S. Shinde and Prof. Anagha P. Khedkar | A Novel Scheme for Generating Secure Face Templates Using BDA | 07 pages,IJASCSE | null | null | null | cs.CV cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In identity management system, frequently used biometric recognition system
needs awareness towards the issue of protecting the biometric template as far as a more
reliable solution is concerned. In view of this, a biometric template
protection algorithm should satisfy the basic requirements, viz. security,
discriminability and cancelability. As no single template protection method is
capable of satisfying these requirements, a novel scheme for face template
generation and protection is proposed. The novel scheme is proposed to provide
security and accuracy in new user enrolment and authentication process. This
novel scheme takes advantage of both the hybrid approach and the binary
discriminant analysis algorithm. This algorithm is designed on the basis of
random projection, binary discriminant analysis and fuzzy commitment scheme.
Publicly available benchmark face databases (FERET, FRGC, CMU-PIE) and other
datasets are used for evaluation. The proposed novel scheme enhances the
discriminability and recognition accuracy in terms of matching score of the
face images for each stage and provides high security against potential attacks
namely brute force and smart attacks. In this paper, we discuss results viz.
averages matching score, computation time and security for hybrid approach and
novel approach.
| [
{
"version": "v1",
"created": "Sun, 29 Dec 2013 09:31:01 GMT"
}
] | 2013-12-31T00:00:00 | [
[
"Shinde",
"Shraddha S.",
""
],
[
"Khedkar",
"Prof. Anagha P.",
""
]
] | TITLE: A Novel Scheme for Generating Secure Face Templates Using BDA
ABSTRACT: In identity management system, frequently used biometric recognition system
needs awareness towards the issue of protecting the biometric template as far as a more
reliable solution is concerned. In view of this, a biometric template
protection algorithm should satisfy the basic requirements, viz. security,
discriminability and cancelability. As no single template protection method is
capable of satisfying these requirements, a novel scheme for face template
generation and protection is proposed. The novel scheme is proposed to provide
security and accuracy in new user enrolment and authentication process. This
novel scheme takes advantage of both the hybrid approach and the binary
discriminant analysis algorithm. This algorithm is designed on the basis of
random projection, binary discriminant analysis and fuzzy commitment scheme.
Publicly available benchmark face databases (FERET, FRGC, CMU-PIE) and other
datasets are used for evaluation. The proposed novel scheme enhances the
discriminability and recognition accuracy in terms of matching score of the
face images for each stage and provides high security against potential attacks
namely brute force and smart attacks. In this paper, we discuss results viz.
averages matching score, computation time and security for hybrid approach and
novel approach.
| no_new_dataset | 0.947088 |
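The record above combines random projection, binary discriminant analysis and a fuzzy commitment scheme for secure face templates. The toy below illustrates only the random-projection-plus-binarisation stage with a user-specific key; the BDA and fuzzy-commitment steps are omitted and this is not a secure implementation.

```python
# Toy illustration of the random-projection + binarisation stage of a
# cancelable biometric template. NOT a secure implementation.
import numpy as np

def make_template(feature_vec, user_key, out_bits=256):
    """Project a real-valued face feature vector with a user-specific random
    matrix (seeded by user_key) and binarise by sign."""
    rng = np.random.default_rng(user_key)
    R = rng.standard_normal((out_bits, feature_vec.size)) / np.sqrt(feature_vec.size)
    return (R @ feature_vec > 0).astype(np.uint8)

rng = np.random.default_rng(7)
enrol = rng.standard_normal(512)                   # stand-in face features
probe_same = enrol + 0.2 * rng.standard_normal(512)
probe_other = rng.standard_normal(512)

t_enrol = make_template(enrol, user_key=1234)
hd_same = np.mean(t_enrol != make_template(probe_same, user_key=1234))
hd_other = np.mean(t_enrol != make_template(probe_other, user_key=1234))
print(f"Hamming distance, same subject: {hd_same:.2f}; different subject: {hd_other:.2f}")
```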
1312.7570 | Stefan Mathe | Stefan Mathe, Cristian Sminchisescu | Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for
Visual Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Systems based on bag-of-words models from image features collected at maxima
of sparse interest point operators have been used successfully for both
computer visual object and action recognition tasks. While the sparse,
interest-point based approach to recognition is not inconsistent with visual
processing in biological systems that operate in `saccade and fixate' regimes,
the methodology and emphasis in the human and the computer vision communities
remains sharply distinct. Here, we make three contributions aiming to bridge
this gap. First, we complement existing state-of-the art large scale dynamic
computer vision annotated datasets like Hollywood-2 and UCF Sports with human
eye movements collected under the ecological constraints of the visual action
recognition task. To our knowledge these are the first large human eye tracking
datasets to be collected and made publicly available for video,
vision.imar.ro/eyetracking (497,107 frames, each viewed by 16 subjects), unique
in terms of their (a) large scale and computer vision relevance, (b) dynamic,
video stimuli, (c) task control, as opposed to free-viewing. Second, we
introduce novel sequential consistency and alignment measures, which underline
the remarkable stability of patterns of visual search among subjects. Third, we
leverage the significant amount of collected data in order to pursue studies
and build automatic, end-to-end trainable computer vision systems based on
human eye movements. Our studies not only shed light on the differences between
computer vision spatio-temporal interest point image sampling strategies and
the human fixations, as well as their impact for visual recognition
performance, but also demonstrate that human fixations can be accurately
predicted, and when used in an end-to-end automatic system, leveraging some of
the advanced computer vision practice, can lead to state of the art results.
| [
{
"version": "v1",
"created": "Sun, 29 Dec 2013 18:49:04 GMT"
}
] | 2013-12-31T00:00:00 | [
[
"Mathe",
"Stefan",
""
],
[
"Sminchisescu",
"Cristian",
""
]
] | TITLE: Actions in the Eye: Dynamic Gaze Datasets and Learnt Saliency Models for
Visual Recognition
ABSTRACT: Systems based on bag-of-words models from image features collected at maxima
of sparse interest point operators have been used successfully for both
computer visual object and action recognition tasks. While the sparse,
interest-point based approach to recognition is not inconsistent with visual
processing in biological systems that operate in `saccade and fixate' regimes,
the methodology and emphasis in the human and the computer vision communities
remains sharply distinct. Here, we make three contributions aiming to bridge
this gap. First, we complement existing state-of-the art large scale dynamic
computer vision annotated datasets like Hollywood-2 and UCF Sports with human
eye movements collected under the ecological constraints of the visual action
recognition task. To our knowledge these are the first large human eye tracking
datasets to be collected and made publicly available for video,
vision.imar.ro/eyetracking (497,107 frames, each viewed by 16 subjects), unique
in terms of their (a) large scale and computer vision relevance, (b) dynamic,
video stimuli, (c) task control, as opposed to free-viewing. Second, we
introduce novel sequential consistency and alignment measures, which underline
the remarkable stability of patterns of visual search among subjects. Third, we
leverage the significant amount of collected data in order to pursue studies
and build automatic, end-to-end trainable computer vision systems based on
human eye movements. Our studies not only shed light on the differences between
computer vision spatio-temporal interest point image sampling strategies and
the human fixations, as well as their impact for visual recognition
performance, but also demonstrate that human fixations can be accurately
predicted, and when used in an end-to-end automatic system, leveraging some of
the advanced computer vision practice, can lead to state of the art results.
| no_new_dataset | 0.909667 |
1312.2877 | Mohammad H. Alomari | Mohammad H. Alomari, Aya Samaha, Khaled AlKamha | Automated Classification of L/R Hand Movement EEG Signals using Advanced
Feature Extraction and Machine Learning | 6 pages, 4 figures | International Journal of Advanced Computer Science and
Applications (ijacsa) 07/2013; 4(6):207-212 | 10.14569/IJACSA.2013.040628 | null | cs.NE cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose an automated computer platform for the purpose of
classifying Electroencephalography (EEG) signals associated with left and right
hand movements using a hybrid system that uses advanced feature extraction
techniques and machine learning algorithms. It is known that EEG represents the
brain activity by the electrical voltage fluctuations along the scalp, and
Brain-Computer Interface (BCI) is a device that enables the use of the brain
neural activity to communicate with others or to control machines, artificial
limbs, or robots without direct physical movements. In our research work, we
aspired to find the best feature extraction method that enables the
differentiation between left and right executed fist movements through various
classification algorithms. The EEG dataset used in this research was created
and contributed to PhysioNet by the developers of the BCI2000 instrumentation
system. Data was preprocessed using the EEGLAB MATLAB toolbox and artifacts
removal was done using AAR. Data was epoched on the basis of Event-Related (De)
Synchronization (ERD/ERS) and movement-related cortical potentials (MRCP)
features. Mu/beta rhythms were isolated for the ERD/ERS analysis and delta
rhythms were isolated for the MRCP analysis. The Independent Component Analysis
(ICA) spatial filter was applied on related channels for noise reduction and
isolation of both artifactually and neutrally generated EEG sources. The final
feature vector included the ERD, ERS, and MRCP features in addition to the
mean, power and energy of the activations of the resulting independent
components of the epoched feature datasets. The datasets were inputted into two
machine-learning algorithms: Neural Networks (NNs) and Support Vector Machines
(SVMs). Intensive experiments were carried out and optimum classification
performances of 89.8 and 97.1 were obtained using NN and SVM, respectively.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2013 17:04:18 GMT"
}
] | 2013-12-30T00:00:00 | [
[
"Alomari",
"Mohammad H.",
""
],
[
"Samaha",
"Aya",
""
],
[
"AlKamha",
"Khaled",
""
]
] | TITLE: Automated Classification of L/R Hand Movement EEG Signals using Advanced
Feature Extraction and Machine Learning
ABSTRACT: In this paper, we propose an automated computer platform for the purpose of
classifying Electroencephalography (EEG) signals associated with left and right
hand movements using a hybrid system that uses advanced feature extraction
techniques and machine learning algorithms. It is known that EEG represents the
brain activity by the electrical voltage fluctuations along the scalp, and
Brain-Computer Interface (BCI) is a device that enables the use of the brain
neural activity to communicate with others or to control machines, artificial
limbs, or robots without direct physical movements. In our research work, we
aspired to find the best feature extraction method that enables the
differentiation between left and right executed fist movements through various
classification algorithms. The EEG dataset used in this research was created
and contributed to PhysioNet by the developers of the BCI2000 instrumentation
system. Data was preprocessed using the EEGLAB MATLAB toolbox and artifacts
removal was done using AAR. Data was epoched on the basis of Event-Related (De)
Synchronization (ERD/ERS) and movement-related cortical potentials (MRCP)
features. Mu/beta rhythms were isolated for the ERD/ERS analysis and delta
rhythms were isolated for the MRCP analysis. The Independent Component Analysis
(ICA) spatial filter was applied on related channels for noise reduction and
isolation of both artifactually and neutrally generated EEG sources. The final
feature vector included the ERD, ERS, and MRCP features in addition to the
mean, power and energy of the activations of the resulting independent
components of the epoched feature datasets. The datasets were inputted into two
machine-learning algorithms: Neural Networks (NNs) and Support Vector Machines
(SVMs). Intensive experiments were carried out and optimum classification
performances of 89.8 and 97.1 were obtained using NN and SVM, respectively.
| no_new_dataset | 0.953405 |
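The record above classifies left/right hand-movement EEG using ERD/ERS features in the mu and beta bands plus MRCP features, with NN and SVM classifiers. The sketch below covers only the band-power-plus-SVM portion on synthetic multichannel signals; AAR artifact removal, ICA and MRCP features are omitted, and the data is not the PhysioNet/BCI2000 recordings.

```python
# Sketch: band-pass synthetic multichannel signals into mu (8-12 Hz) and beta
# (13-30 Hz) bands, use log band power per channel as features, classify with
# an SVM. Preprocessing steps from the record above are omitted.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

fs, n_trials, n_channels, n_samples = 160, 120, 4, 480
rng = np.random.default_rng(0)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

# Synthetic trials: class 1 gets extra 10 Hz (mu-band) power on channel 0.
t = np.arange(n_samples) / fs
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)
X_raw[y == 1, 0] += 1.5 * np.sin(2 * np.pi * 10 * t)

feats = []
for lo, hi in [(8, 12), (13, 30)]:                     # mu and beta bands
    band = bandpass(X_raw, lo, hi, fs)
    feats.append(np.log(np.mean(band ** 2, axis=-1)))  # log band power
X = np.concatenate(feats, axis=1)                      # (trials, channels * bands)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```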
1312.6506 | Prateek Singhal | Prateek Singhal, Aditya Deshpande, N Dinesh Reddy and K Madhava
Krishna | Top Down Approach to Multiple Plane Detection | 6 pages, 22 figures, ICPR conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detecting multiple planes in images is a challenging problem, but one with
many applications. Recent work such as J-Linkage and Ordered Residual Kernels
have focussed on developing a domain independent approach to detect multiple
structures. These multiple structure detection methods are then used for
estimating multiple homographies given feature matches between two images.
Features participating in the multiple homographies detected, provide us the
multiple scene planes. We show that these methods provide locally optimal
results and fail to merge detected planar patches to the true scene planes.
These methods use only residues obtained on applying homography of one plane to
another as cue for merging. In this paper, we develop additional cues such as
local consistency of planes, local normals, texture etc. to perform better
classification and merging. We formulate the classification as an MRF problem
and use the TRWS message passing algorithm to solve non-metric energy terms and
complex sparse graph structure. We show results on a challenging dataset common
in robotics navigation scenarios where our method shows accuracy of more than
85 percent on average while being close to or the same as the actual number of scene
planes.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2013 10:09:12 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Dec 2013 04:35:01 GMT"
}
] | 2013-12-30T00:00:00 | [
[
"Singhal",
"Prateek",
""
],
[
"Deshpande",
"Aditya",
""
],
[
"Reddy",
"N Dinesh",
""
],
[
"Krishna",
"K Madhava",
""
]
] | TITLE: Top Down Approach to Multiple Plane Detection
ABSTRACT: Detecting multiple planes in images is a challenging problem, but one with
many applications. Recent work such as J-Linkage and Ordered Residual Kernels
have focussed on developing a domain independent approach to detect multiple
structures. These multiple structure detection methods are then used for
estimating multiple homographies given feature matches between two images.
Features participating in the multiple homographies detected, provide us the
multiple scene planes. We show that these methods provide locally optimal
results and fail to merge detected planar patches to the true scene planes.
These methods use only residues obtained on applying homography of one plane to
another as cue for merging. In this paper, we develop additional cues such as
local consistency of planes, local normals, texture etc. to perform better
classification and merging . We formulate the classification as an MRF problem
and use TRWS message passing algorithm to solve non metric energy terms and
complex sparse graph structure. We show results on challenging dataset common
in robotics navigation scenarios where our method shows accuracy of more than
85 percent on average while being close or same as the actual number of scene
planes.
| no_new_dataset | 0.950411 |
1312.6948 | Sourish Dasgupta | Sourish Dasgupta, Rupali KaPatel, Ankur Padia, Kushal Shah | Description Logics based Formalization of Wh-Queries | Natural Language Query Processing, Representation | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of Natural Language Query Formalization (NLQF) is to translate a
given user query in natural language (NL) into a formal language so that the
semantic interpretation has equivalence with the NL interpretation.
Formalization of NL queries enables logic based reasoning during information
retrieval, database query, question-answering, etc. Formalization also helps in
Web query normalization and indexing, query intent analysis, etc. In this paper
we are proposing a Description Logics based formal methodology for wh-query
intent (also called desire) identification and corresponding formal
translation. We evaluated the scalability of our proposed formalism using
Microsoft Encarta 98 query dataset and OWL-S TC v.4.0 dataset.
| [
{
"version": "v1",
"created": "Wed, 25 Dec 2013 09:23:49 GMT"
}
] | 2013-12-30T00:00:00 | [
[
"Dasgupta",
"Sourish",
""
],
[
"KaPatel",
"Rupali",
""
],
[
"Padia",
"Ankur",
""
],
[
"Shah",
"Kushal",
""
]
] | TITLE: Description Logics based Formalization of Wh-Queries
ABSTRACT: The problem of Natural Language Query Formalization (NLQF) is to translate a
given user query in natural language (NL) into a formal language so that the
semantic interpretation has equivalence with the NL interpretation.
Formalization of NL queries enables logic based reasoning during information
retrieval, database query, question-answering, etc. Formalization also helps in
Web query normalization and indexing, query intent analysis, etc. In this paper
we are proposing a Description Logics based formal methodology for wh-query
intent (also called desire) identification and corresponding formal
translation. We evaluated the scalability of our proposed formalism using
Microsoft Encarta 98 query dataset and OWL-S TC v.4.0 dataset.
| no_new_dataset | 0.951774 |
1312.7085 | Peng Lu | Peng Lu, Xujun Peng, Xinshan Zhu, Xiaojie Wang | Finding More Relevance: Propagating Similarity on Markov Random Field
for Image Retrieval | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To effectively retrieve objects from large corpus with high accuracy is a
challenge task. In this paper, we propose a method that propagates visual
feature level similarities on a Markov random field (MRF) to obtain a high
level correspondence in image space for image pairs. The proposed
correspondence between image pair reflects not only the similarity of low-level
visual features but also the relations built through other images in the
database and it can be easily integrated into the existing
bag-of-visual-words(BoW) based systems to reduce the missing rate. We evaluate
our method on the standard Oxford-5K, Oxford-105K and Paris-6K dataset. The
experiment results show that the proposed method significantly improves the
retrieval accuracy on three datasets and exceeds the current state-of-the-art
retrieval performance.
| [
{
"version": "v1",
"created": "Thu, 26 Dec 2013 10:55:14 GMT"
}
] | 2013-12-30T00:00:00 | [
[
"Lu",
"Peng",
""
],
[
"Peng",
"Xujun",
""
],
[
"Zhu",
"Xinshan",
""
],
[
"Wang",
"Xiaojie",
""
]
] | TITLE: Finding More Relevance: Propagating Similarity on Markov Random Field
for Image Retrieval
ABSTRACT: To effectively retrieve objects from large corpus with high accuracy is a
challenge task. In this paper, we propose a method that propagates visual
feature level similarities on a Markov random field (MRF) to obtain a high
level correspondence in image space for image pairs. The proposed
correspondence between image pair reflects not only the similarity of low-level
visual features but also the relations built through other images in the
database and it can be easily integrated into the existing
bag-of-visual-words(BoW) based systems to reduce the missing rate. We evaluate
our method on the standard Oxford-5K, Oxford-105K and Paris-6K dataset. The
experiment results show that the proposed method significantly improves the
retrieval accuracy on three datasets and exceeds the current state-of-the-art
retrieval performance.
| no_new_dataset | 0.955152 |
1302.4888 | Yue Shi | Yue Shi, Martha Larson, Alan Hanjalic | Exploiting Social Tags for Cross-Domain Collaborative Filtering | Manuscript under review | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most challenging problems in recommender systems based on the
collaborative filtering (CF) concept is data sparseness, i.e., limited user
preference data is available for making recommendations. Cross-domain
collaborative filtering (CDCF) has been studied as an effective mechanism to
alleviate data sparseness of one domain using the knowledge about user
preferences from other domains. A key question to be answered in the context of
CDCF is what common characteristics can be deployed to link different domains
for effective knowledge transfer. In this paper, we assess the usefulness of
user-contributed (social) tags in this respect. We do so by means of the
Generalized Tag-induced Cross-domain Collaborative Filtering (GTagCDCF)
approach that we propose in this paper and that we developed based on the
general collective matrix factorization framework. Assessment is done by a
series of experiments, using publicly available CF datasets that represent
three cross-domain cases, i.e., two two-domain cases and one three-domain case.
A comparative analysis on two-domain cases involving GTagCDCF and several
state-of-the-art CDCF approaches indicates the increased benefit of using
social tags as representatives of explicit links between domains for CDCF as
compared to the implicit links deployed by the existing CDCF methods. In
addition, we show that users from different domains can already benefit from
GTagCDCF if they only share a few common tags. Finally, we use the three-domain
case to validate the robustness of GTagCDCF with respect to the scale of
datasets and the varying number of domains.
| [
{
"version": "v1",
"created": "Wed, 20 Feb 2013 12:37:33 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Dec 2013 16:03:11 GMT"
}
] | 2013-12-25T00:00:00 | [
[
"Shi",
"Yue",
""
],
[
"Larson",
"Martha",
""
],
[
"Hanjalic",
"Alan",
""
]
] | TITLE: Exploiting Social Tags for Cross-Domain Collaborative Filtering
ABSTRACT: One of the most challenging problems in recommender systems based on the
collaborative filtering (CF) concept is data sparseness, i.e., limited user
preference data is available for making recommendations. Cross-domain
collaborative filtering (CDCF) has been studied as an effective mechanism to
alleviate data sparseness of one domain using the knowledge about user
preferences from other domains. A key question to be answered in the context of
CDCF is what common characteristics can be deployed to link different domains
for effective knowledge transfer. In this paper, we assess the usefulness of
user-contributed (social) tags in this respect. We do so by means of the
Generalized Tag-induced Cross-domain Collaborative Filtering (GTagCDCF)
approach that we propose in this paper and that we developed based on the
general collective matrix factorization framework. Assessment is done by a
series of experiments, using publicly available CF datasets that represent
three cross-domain cases, i.e., two two-domain cases and one three-domain case.
A comparative analysis on two-domain cases involving GTagCDCF and several
state-of-the-art CDCF approaches indicates the increased benefit of using
social tags as representatives of explicit links between domains for CDCF as
compared to the implicit links deployed by the existing CDCF methods. In
addition, we show that users from different domains can already benefit from
GTagCDCF if they only share a few common tags. Finally, we use the three-domain
case to validate the robustness of GTagCDCF with respect to the scale of
datasets and the varying number of domains.
| no_new_dataset | 0.944791 |
1312.6723 | Bruce Berriman | G. Bruce Berriman, Ewa Deelman, John Good, Gideon Juve, Jamie Kinney,
Ann Merrihew, and Mats Rynge | Creating A Galactic Plane Atlas With Amazon Web Services | 7 pages, 1 table, 2 figures. Submitted to IEEE Special Edition on
Computing in Astronomy | null | null | null | astro-ph.IM cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes by example how astronomers can use cloud-computing
resources offered by Amazon Web Services (AWS) to create new datasets at scale.
We have created from existing surveys an atlas of the Galactic Plane at 16
wavelengths from 1 {\mu}m to 24 {\mu}m with pixels co-registered at spatial
sampling of 1 arcsec. We explain how open source tools support management and
operation of a virtual cluster on AWS platforms to process data at scale, and
describe the technical issues that users will need to consider, such as
optimization of resources, resource costs, and management of virtual machine
instances.
| [
{
"version": "v1",
"created": "Tue, 24 Dec 2013 00:10:27 GMT"
}
] | 2013-12-25T00:00:00 | [
[
"Berriman",
"G. Bruce",
""
],
[
"Deelman",
"Ewa",
""
],
[
"Good",
"John",
""
],
[
"Juve",
"Gideon",
""
],
[
"Kinney",
"Jamie",
""
],
[
"Merrihew",
"Ann",
""
],
[
"Rynge",
"Mats",
""
]
] | TITLE: Creating A Galactic Plane Atlas With Amazon Web Services
ABSTRACT: This paper describes by example how astronomers can use cloud-computing
resources offered by Amazon Web Services (AWS) to create new datasets at scale.
We have created from existing surveys an atlas of the Galactic Plane at 16
wavelengths from 1 {\mu}m to 24 {\mu}m with pixels co-registered at spatial
sampling of 1 arcsec. We explain how open source tools support management and
operation of a virtual cluster on AWS platforms to process data at scale, and
describe the technical issues that users will need to consider, such as
optimization of resources, resource costs, and management of virtual machine
instances.
| no_new_dataset | 0.915053 |
1312.6807 | Feng Xia | Fengqi Li, Chuang Yu, Nanhai Yang, Feng Xia, Guangming Li, Fatemeh
Kaveh-Yazdy | Iterative Nearest Neighborhood Oversampling in Semisupervised Learning
from Imbalanced Data | null | The Scientific World Journal, Volume 2013, Article ID 875450, 2013 | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transductive graph-based semi-supervised learning methods usually build an
undirected graph utilizing both labeled and unlabeled samples as vertices.
Those methods propagate label information of labeled samples to neighbors
through their edges in order to get the predicted labels of unlabeled samples.
Most popular semi-supervised learning approaches are sensitive to initial label
distribution happened in imbalanced labeled datasets. The class boundary will
be severely skewed by the majority classes in an imbalanced classification. In
this paper, we proposed a simple and effective approach to alleviate the
unfavorable influence of imbalance problem by iteratively selecting a few
unlabeled samples and adding them into the minority classes to form a balanced
labeled dataset for the learning methods afterwards. The experiments on UCI
datasets and MNIST handwritten digits dataset showed that the proposed approach
outperforms other existing state-of-art methods.
| [
{
"version": "v1",
"created": "Tue, 24 Dec 2013 12:24:30 GMT"
}
] | 2013-12-25T00:00:00 | [
[
"Li",
"Fengqi",
""
],
[
"Yu",
"Chuang",
""
],
[
"Yang",
"Nanhai",
""
],
[
"Xia",
"Feng",
""
],
[
"Li",
"Guangming",
""
],
[
"Kaveh-Yazdy",
"Fatemeh",
""
]
] | TITLE: Iterative Nearest Neighborhood Oversampling in Semisupervised Learning
from Imbalanced Data
ABSTRACT: Transductive graph-based semi-supervised learning methods usually build an
undirected graph utilizing both labeled and unlabeled samples as vertices.
Those methods propagate label information of labeled samples to neighbors
through their edges in order to get the predicted labels of unlabeled samples.
Most popular semi-supervised learning approaches are sensitive to initial label
distribution happened in imbalanced labeled datasets. The class boundary will
be severely skewed by the majority classes in an imbalanced classification. In
this paper, we proposed a simple and effective approach to alleviate the
unfavorable influence of imbalance problem by iteratively selecting a few
unlabeled samples and adding them into the minority classes to form a balanced
labeled dataset for the learning methods afterwards. The experiments on UCI
datasets and MNIST handwritten digits dataset showed that the proposed approach
outperforms other existing state-of-art methods.
| no_new_dataset | 0.948442 |
1307.3811 | Weifeng Liu | Weifeng Liu, Dacheng Tao, Jun Cheng, and Yuanyan Tang | Multiview Hessian Discriminative Sparse Coding for Image Annotation | 35 pages | Computer vision and image understanding,118(2014) 50-60 | null | null | cs.MM cs.CV cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse coding represents a signal sparsely by using an overcomplete
dictionary, and obtains promising performance in practical computer vision
applications, especially for signal restoration tasks such as image denoising
and image inpainting. In recent years, many discriminative sparse coding
algorithms have been developed for classification problems, but they cannot
naturally handle visual data represented by multiview features. In addition,
existing sparse coding algorithms use graph Laplacian to model the local
geometry of the data distribution. It has been identified that Laplacian
regularization biases the solution towards a constant function which possibly
leads to poor extrapolating power. In this paper, we present multiview Hessian
discriminative sparse coding (mHDSC) which seamlessly integrates Hessian
regularization with discriminative sparse coding for multiview learning
problems. In particular, mHDSC exploits Hessian regularization to steer the
solution which varies smoothly along geodesics in the manifold, and treats the
label information as an additional view of feature for incorporating the
discriminative power for image annotation. We conduct extensive experiments on
PASCAL VOC'07 dataset and demonstrate the effectiveness of mHDSC for image
annotation.
| [
{
"version": "v1",
"created": "Mon, 15 Jul 2013 03:14:05 GMT"
}
] | 2013-12-24T00:00:00 | [
[
"Liu",
"Weifeng",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Cheng",
"Jun",
""
],
[
"Tang",
"Yuanyan",
""
]
] | TITLE: Multiview Hessian Discriminative Sparse Coding for Image Annotation
ABSTRACT: Sparse coding represents a signal sparsely by using an overcomplete
dictionary, and obtains promising performance in practical computer vision
applications, especially for signal restoration tasks such as image denoising
and image inpainting. In recent years, many discriminative sparse coding
algorithms have been developed for classification problems, but they cannot
naturally handle visual data represented by multiview features. In addition,
existing sparse coding algorithms use graph Laplacian to model the local
geometry of the data distribution. It has been identified that Laplacian
regularization biases the solution towards a constant function which possibly
leads to poor extrapolating power. In this paper, we present multiview Hessian
discriminative sparse coding (mHDSC) which seamlessly integrates Hessian
regularization with discriminative sparse coding for multiview learning
problems. In particular, mHDSC exploits Hessian regularization to steer the
solution which varies smoothly along geodesics in the manifold, and treats the
label information as an additional view of feature for incorporating the
discriminative power for image annotation. We conduct extensive experiments on
PASCAL VOC'07 dataset and demonstrate the effectiveness of mHDSC for image
annotation.
| no_new_dataset | 0.945147 |