id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1210.5338 | Cyril Furtlehner | Cyril Furtlehner, Yufei Han, Jean-Marc Lasgouttes and Victorin Martin | Pairwise MRF Calibration by Perturbation of the Bethe Reference Point | 54 pages, 8 figures. Section 5 and refs added in V2 | null | null | Inria RR-8059 | cond-mat.dis-nn cond-mat.stat-mech cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate different ways of generating approximate solutions to the
pairwise Markov random field (MRF) selection problem. We focus mainly on the
inverse Ising problem, but discuss also the somewhat related inverse Gaussian
problem because both types of MRF are suitable for inference tasks with the
belief propagation algorithm (BP) under certain conditions. Our approach
consists in taking a Bethe mean-field solution obtained with a maximum
spanning tree (MST) of pairwise mutual information, referred to as the
\emph{Bethe reference point}, for further perturbation procedures. We consider
three different ways following this idea: in the first one, we select and
calibrate iteratively the optimal links to be added starting from the Bethe
reference point; the second one is based on the observation that the natural
gradient can be computed analytically at the Bethe point; in the third one,
assuming no local field and using low temperature expansion we develop a dual
loop joint model based on a well chosen fundamental cycle basis. We indeed
identify a subclass of planar models, which we refer to as \emph{Bethe-dual
graph models}, having possibly many loops, but characterized by a singly
connected dual factor graph, for which the partition function and the linear
response can be computed exactly in $O(N)$ and $O(N^2)$ operations, respectively,
thanks to a dual weight propagation (DWP) message passing procedure that we set
up. When restricted to this subclass of models, the inverse Ising problem, being
convex, becomes tractable at any temperature. Experimental tests on various
datasets with refined $L_0$ or $L_1$ regularization procedures indicate that
these approaches may be competitive and useful alternatives to existing ones.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 08:08:55 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Feb 2013 17:32:44 GMT"
}
] | 2013-02-04T00:00:00 | [
[
"Furtlehner",
"Cyril",
""
],
[
"Han",
"Yufei",
""
],
[
"Lasgouttes",
"Jean-Marc",
""
],
[
"Martin",
"Victorin",
""
]
] | TITLE: Pairwise MRF Calibration by Perturbation of the Bethe Reference Point
ABSTRACT: We investigate different ways of generating approximate solutions to the
pairwise Markov random field (MRF) selection problem. We focus mainly on the
inverse Ising problem, but discuss also the somewhat related inverse Gaussian
problem because both types of MRF are suitable for inference tasks with the
belief propagation algorithm (BP) under certain conditions. Our approach
consists in taking a Bethe mean-field solution obtained with a maximum
spanning tree (MST) of pairwise mutual information, referred to as the
\emph{Bethe reference point}, for further perturbation procedures. We consider
three different ways following this idea: in the first one, we select and
calibrate iteratively the optimal links to be added starting from the Bethe
reference point; the second one is based on the observation that the natural
gradient can be computed analytically at the Bethe point; in the third one,
assuming no local field and using low temperature expansion we develop a dual
loop joint model based on a well chosen fundamental cycle basis. We indeed
identify a subclass of planar models, which we refer to as \emph{Bethe-dual
graph models}, having possibly many loops, but characterized by a singly
connected dual factor graph, for which the partition function and the linear
response can be computed exactly in $O(N)$ and $O(N^2)$ operations, respectively,
thanks to a dual weight propagation (DWP) message passing procedure that we set
up. When restricted to this subclass of models, the inverse Ising problem, being
convex, becomes tractable at any temperature. Experimental tests on various
datasets with refined $L_0$ or $L_1$ regularization procedures indicate that
these approaches may be competitive and useful alternatives to existing ones.
|
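The record above starts from a maximum spanning tree of pairwise mutual information (the "Bethe reference point") before any perturbation. Below is a minimal sketch of that first step only, assuming binary +/-1 samples; the toy data, variable names, and the use of networkx are my own illustrative choices, not the paper's code.

```python
import numpy as np
import networkx as nx

def pairwise_mutual_information(samples):
    """Empirical mutual information between all pairs of +/-1 variables.

    samples: (n_samples, n_vars) array with entries in {-1, +1}.
    """
    n, d = samples.shape
    s01 = (samples + 1) // 2                      # map spins to {0, 1}
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            joint = np.histogram2d(s01[:, i], s01[:, j], bins=2)[0] / n
            pi, pj = joint.sum(axis=1), joint.sum(axis=0)
            nz = joint > 0
            mi[i, j] = mi[j, i] = np.sum(
                joint[nz] * np.log(joint[nz] / np.outer(pi, pj)[nz]))
    return mi

def bethe_reference_tree(samples):
    """Maximum spanning tree of the pairwise mutual-information graph."""
    mi = pairwise_mutual_information(samples)
    d = mi.shape[0]
    g = nx.Graph()
    g.add_weighted_edges_from(
        (i, j, mi[i, j]) for i in range(d) for j in range(i + 1, d))
    return nx.maximum_spanning_tree(g, weight="weight")

rng = np.random.default_rng(0)
base = rng.choice([-1, 1], size=(200, 1))         # one hidden "driver" spin
samples = base * rng.choice([1, 1, 1, -1], size=(200, 6))
print(sorted(bethe_reference_tree(samples).edges()))
```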
1111.5534 | Lazaros Gallos | Lazaros K. Gallos, Diego Rybski, Fredrik Liljeros, Shlomo Havlin,
Hernan A. Makse | How people interact in evolving online affiliation networks | 10 pages, 8 figures | Phys. Rev. X 2, 031014 (2012) | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of human interactions is of central importance for understanding
the behavior of individuals, groups and societies. Here, we observe the
formation and evolution of networks by monitoring the addition of all new links
and we analyze quantitatively the tendencies used to create ties in these
evolving online affiliation networks. We first show that an accurate estimation
of these probabilistic tendencies can only be achieved by following the time
evolution of the network. For example, actions that are attributed to the usual
friend of a friend mechanism through a static snapshot of the network are
overestimated by a factor of two. A detailed analysis of the dynamic network
evolution shows that half of those triangles were generated through other
mechanisms, in spite of the characteristic static pattern. We start by
characterizing every single link when the tie was established in the network.
This allows us to describe the probabilistic tendencies of tie formation and
extract sociological conclusions as follows. The tendencies to add new links
differ significantly from what we would expect if they were not affected by the
individuals' structural position in the network, i.e., from random link
formation. We also find significant differences in behavioral traits among
individuals according to their degree of activity, gender, age, popularity and
other attributes. For instance, in the particular datasets analyzed here, we
find that women reciprocate connections three times as much as men and this
difference increases with age. Men tend to connect with the most popular people
more often than women across all ages. On the other hand, triangular ties
tendencies are similar and independent of gender. Our findings can be useful to
build models of realistic social network structures and discover the underlying
laws that govern establishment of ties in evolving social networks.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2011 16:04:06 GMT"
}
] | 2013-02-01T00:00:00 | [
[
"Gallos",
"Lazaros K.",
""
],
[
"Rybski",
"Diego",
""
],
[
"Liljeros",
"Fredrik",
""
],
[
"Havlin",
"Shlomo",
""
],
[
"Makse",
"Hernan A.",
""
]
] | TITLE: How people interact in evolving online affiliation networks
ABSTRACT: The study of human interactions is of central importance for understanding
the behavior of individuals, groups and societies. Here, we observe the
formation and evolution of networks by monitoring the addition of all new links
and we analyze quantitatively the tendencies used to create ties in these
evolving online affiliation networks. We first show that an accurate estimation
of these probabilistic tendencies can only be achieved by following the time
evolution of the network. For example, actions that are attributed to the usual
friend of a friend mechanism through a static snapshot of the network are
overestimated by a factor of two. A detailed analysis of the dynamic network
evolution shows that half of those triangles were generated through other
mechanisms, in spite of the characteristic static pattern. We start by
characterizing every single link when the tie was established in the network.
This allows us to describe the probabilistic tendencies of tie formation and
extract sociological conclusions as follows. The tendencies to add new links
differ significantly from what we would expect if they were not affected by the
individuals' structural position in the network, i.e., from random link
formation. We also find significant differences in behavioral traits among
individuals according to their degree of activity, gender, age, popularity and
other attributes. For instance, in the particular datasets analyzed here, we
find that women reciprocate connections three times as much as men and this
difference increases with age. Men tend to connect with the most popular people
more often than women across all ages. On the other hand, triangular ties
tendencies are similar and independent of gender. Our findings can be useful to
build models of realistic social network structures and discover the underlying
laws that govern establishment of ties in evolving social networks.
|
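The abstract above argues that static snapshots overestimate friend-of-a-friend link creation, because an edge sitting in a triangle of the final network may have been created when its endpoints had no common neighbour. A tiny sketch of that dynamic-versus-static distinction on a made-up, time-ordered edge list (the data and the use of networkx are assumptions):

```python
import networkx as nx

# time-ordered edge list (made up); replay it to judge closures at creation time
edges_in_order = [(1, 2), (2, 3), (1, 3), (4, 5), (3, 4), (1, 4)]

g = nx.Graph()
dynamic_closures = 0
for u, v in edges_in_order:
    if g.has_node(u) and g.has_node(v) and set(g[u]) & set(g[v]):
        dynamic_closures += 1          # endpoints shared a neighbour *before* linking
    g.add_edge(u, v)

# static view: edges that sit in at least one triangle of the final snapshot
static_closures = sum(1 for u, v in g.edges() if set(g[u]) & set(g[v]))
print(f"closures at creation time: {dynamic_closures}, "
      f"edges in triangles of the static snapshot: {static_closures}")
```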
1301.7363 | John S. Breese | John S. Breese, David Heckerman, Carl Kadie | Empirical Analysis of Predictive Algorithms for Collaborative Filtering | Appears in Proceedings of the Fourteenth Conference on Uncertainty in
Artificial Intelligence (UAI1998) | null | null | UAI-P-1998-PG-43-52 | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative filtering or recommender systems use a database about user
preferences to predict additional topics or products a new user might like. In
this paper we describe several algorithms designed for this task, including
techniques based on correlation coefficients, vector-based similarity
calculations, and statistical Bayesian methods. We compare the predictive
accuracy of the various methods in a set of representative problem domains. We
use two basic classes of evaluation metrics. The first characterizes accuracy
over a set of individual predictions in terms of average absolute deviation.
The second estimates the utility of a ranked list of suggested items. This
metric uses an estimate of the probability that a user will see a
recommendation in an ordered list. Experiments were run for datasets associated
with 3 application areas, 4 experimental protocols, and the 2 evaluation
metrics for the various algorithms. Results indicate that for a wide range of
conditions, Bayesian networks with decision trees at each node and correlation
methods outperform Bayesian-clustering and vector-similarity methods. Between
correlation and Bayesian networks, the preferred method depends on the nature
of the dataset, nature of the application (ranked versus one-by-one
presentation), and the availability of votes with which to make predictions.
Other considerations include the size of database, speed of predictions, and
learning time.
| [
{
"version": "v1",
"created": "Wed, 30 Jan 2013 15:02:44 GMT"
}
] | 2013-02-01T00:00:00 | [
[
"Breese",
"John S.",
""
],
[
"Heckerman",
"David",
""
],
[
"Kadie",
"Carl",
""
]
] | TITLE: Empirical Analysis of Predictive Algorithms for Collaborative Filtering
ABSTRACT: Collaborative filtering or recommender systems use a database about user
preferences to predict additional topics or products a new user might like. In
this paper we describe several algorithms designed for this task, including
techniques based on correlation coefficients, vector-based similarity
calculations, and statistical Bayesian methods. We compare the predictive
accuracy of the various methods in a set of representative problem domains. We
use two basic classes of evaluation metrics. The first characterizes accuracy
over a set of individual predictions in terms of average absolute deviation.
The second estimates the utility of a ranked list of suggested items. This
metric uses an estimate of the probability that a user will see a
recommendation in an ordered list. Experiments were run for datasets associated
with 3 application areas, 4 experimental protocols, and the 2 evaluation
metrics for the various algorithms. Results indicate that for a wide range of
conditions, Bayesian networks with decision trees at each node and correlation
methods outperform Bayesian-clustering and vector-similarity methods. Between
correlation and Bayesian networks, the preferred method depends on the nature
of the dataset, nature of the application (ranked versus one-by-one
presentation), and the availability of votes with which to make predictions.
Other considerations include the size of database, speed of predictions, and
learning time.
|
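As an illustration of the correlation-based class of algorithms compared above, here is a minimal memory-based sketch: predict the active user's vote on an item as that user's mean vote plus a Pearson-weighted average of other users' deviations. The toy vote matrix and function names are mine, not the paper's experimental protocol.

```python
import numpy as np

def predict_vote(votes, active, item):
    """votes: (n_users, n_items) array with np.nan for missing votes."""
    means = np.array([np.nanmean(row) for row in votes])
    num = den = 0.0
    for u in range(votes.shape[0]):
        if u == active or np.isnan(votes[u, item]):
            continue
        common = ~np.isnan(votes[active]) & ~np.isnan(votes[u])
        if common.sum() < 2:
            continue
        a = votes[active, common] - means[active]
        b = votes[u, common] - means[u]
        norm = np.sqrt((a * a).sum() * (b * b).sum())
        if norm == 0:
            continue
        w = (a * b).sum() / norm                  # Pearson correlation weight
        num += w * (votes[u, item] - means[u])
        den += abs(w)
    return means[active] if den == 0 else means[active] + num / den

votes = np.array([[5, 3, np.nan, 1],
                  [4, np.nan, 4, 1],
                  [1, 1, 2, 5],
                  [1, np.nan, np.nan, 4]], dtype=float)
print(predict_vote(votes, active=0, item=2))      # roughly 3.6 on this toy matrix
```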
0903.4960 | Michael Schreiber | Michael Schreiber | A Case Study of the Modified Hirsch Index hm Accounting for Multiple
Co-authors | 29 pages, including 2 tables, 3 figures with 7 plots altogether,
accepted for publication in J. Am. Soc. Inf. Sci. Techn. vol. 60 (5) 2009 | J. Am. Soc. Inf. Sci. Techn. 60, 1274-1282 (2009) | 10.1002/asi.21057 | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | J. E. Hirsch (2005) introduced the h-index to quantify an individual's
scientific research output by the largest number h of a scientist's papers
that received at least h citations. This so-called Hirsch index can be easily
modified to take multiple co-authorship into account by counting the papers
fractionally according to (the inverse of) the number of authors. I have worked
out 26 empirical cases of physicists to illustrate the effect of this
modification. Although the correlation between the original and the modified
Hirsch index is relatively strong, the arrangement of the datasets is
significantly different depending on whether they are put into order according
to the values of either the original or the modified index.
| [
{
"version": "v1",
"created": "Sat, 28 Mar 2009 10:01:42 GMT"
}
] | 2013-01-31T00:00:00 | [
[
"Schreiber",
"Michael",
""
]
] | TITLE: A Case Study of the Modified Hirsch Index hm Accounting for Multiple
Co-authors
ABSTRACT: J. E. Hirsch (2005) introduced the h-index to quantify an individual's
scientific research output by the largest number h of a scientist's papers
that received at least h citations. This so-called Hirsch index can be easily
modified to take multiple co-authorship into account by counting the papers
fractionally according to (the inverse of) the number of authors. I have worked
out 26 empirical cases of physicists to illustrate the effect of this
modification. Although the correlation between the original and the modified
Hirsch index is relatively strong, the arrangement of the datasets is
significantly different depending on whether they are put into order according
to the values of either the original or the modified index.
|
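A small worked sketch of the fractional counting described above: each paper contributes 1/(number of co-authors) to the effective rank, and the modified index h_m is the largest effective rank still covered by the corresponding paper's citation count. This follows my reading of the abstract; the paper spells out the exact rules and tie handling.

```python
def h_index(citations):
    """Ordinary Hirsch index: largest rank r with at least r citations."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def hm_index(papers):
    """papers: list of (citations, n_authors), e.g. [(30, 3), (12, 1), ...]."""
    papers = sorted(papers, key=lambda p: p[0], reverse=True)
    r_eff, hm = 0.0, 0.0
    for citations, n_authors in papers:
        r_eff += 1.0 / n_authors                  # fractional contribution to the rank
        if citations >= r_eff:
            hm = r_eff
        else:
            break
    return hm

papers = [(30, 3), (12, 2), (7, 1), (5, 4), (2, 2)]
print(h_index([c for c, _ in papers]))            # ordinary h-index: 4
print(hm_index(papers))                           # fractionalized counterpart: about 2.08
```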
1202.3861 | Michael Schreiber | Michael Schreiber | Inconsistencies of Recently Proposed Citation Impact Indicators and how
to Avoid Them | 14 pages, 9 figures, accepted by Journal of the American Society for
Information Science and Technology. Final version with slightly changed
figures, new scoring rule, extended discussion | J. Am. Soc. Inf. Sci. Techn. 63(10), 2062-2073, (2012) | null | null | stat.AP cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is shown that under certain circumstances, in particular for small datasets,
the recently proposed citation impact indicators I3(6PR) and R(6,k) behave
inconsistently when additional papers or citations are taken into
consideration. Three simple examples are presented, in which the indicators
fluctuate strongly and the ranking of scientists in the evaluated group is
sometimes completely mixed up by minor changes in the data base. The erratic
behavior is traced to the specific way in which weights are attributed to the
six percentile rank classes, specifically for the tied papers. For 100
percentile rank classes the effects will be less serious. For the 6 classes it
is demonstrated that a different way of assigning weights avoids these
problems, although the non-linearity of the weights for the different
percentile rank classes can still lead to (much less frequent) changes in the
ranking. This behavior is not undesired, because it can be used to correct for
differences in citation behavior in different fields. Remaining deviations from
the theoretical value R(6,k) = 1.91 can be avoided by a new scoring rule, the
fractional scoring. Previously proposed consistency criteria are amended by
another property of strict independence which a performance indicator should
aim at.
| [
{
"version": "v1",
"created": "Fri, 17 Feb 2012 10:05:04 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Apr 2012 08:33:52 GMT"
}
] | 2013-01-31T00:00:00 | [
[
"Schreiber",
"Michael",
""
]
] | TITLE: Inconsistencies of Recently Proposed Citation Impact Indicators and how
to Avoid Them
ABSTRACT: It is shown that under certain circumstances, in particular for small datasets,
the recently proposed citation impact indicators I3(6PR) and R(6,k) behave
inconsistently when additional papers or citations are taken into
consideration. Three simple examples are presented, in which the indicators
fluctuate strongly and the ranking of scientists in the evaluated group is
sometimes completely mixed up by minor changes in the data base. The erratic
behavior is traced to the specific way in which weights are attributed to the
six percentile rank classes, specifically for the tied papers. For 100
percentile rank classes the effects will be less serious. For the 6 classes it
is demonstrated that a different way of assigning weights avoids these
problems, although the non-linearity of the weights for the different
percentile rank classes can still lead to (much less frequent) changes in the
ranking. This behavior is not undesired, because it can be used to correct for
differences in citation behavior in different fields. Remaining deviations from
the theoretical value R(6,k) = 1.91 can be avoided by a new scoring rule, the
fractional scoring. Previously proposed consistency criteria are amended by
another property of strict independence which a performance indicator should
aim at.
|
1202.4605 | Andreas Raue | Andreas Raue, Clemens Kreutz, Fabian Joachim Theis, Jens Timmer | Joining Forces of Bayesian and Frequentist Methodology: A Study for
Inference in the Presence of Non-Identifiability | Article to appear in Phil. Trans. Roy. Soc. A | Phil. Trans. R. Soc. A. 371, 20110544, 2013 | 10.1098/rsta.2011.0544 | null | physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Increasingly complex applications involve large datasets in combination with
non-linear and high dimensional mathematical models. In this context,
statistical inference is a challenging issue that calls for pragmatic
approaches that take advantage of both Bayesian and frequentist methods. The
elegance of Bayesian methodology is founded in the propagation of information
content provided by experimental data and prior assumptions to the posterior
probability distribution of model predictions. However, for complex
applications experimental data and prior assumptions potentially constrain the
posterior probability distribution insufficiently. In these situations Bayesian
Markov chain Monte Carlo sampling can be infeasible. From a frequentist point
of view insufficient experimental data and prior assumptions can be interpreted
as non-identifiability. The profile likelihood approach offers to detect and to
resolve non-identifiability by experimental design iteratively. Therefore, it
allows one to better constrain the posterior probability distribution until
Markov chain Monte Carlo sampling can be used securely. Using an application
from cell biology we compare both methods and show that a successive
application of both methods facilitates a realistic assessment of uncertainty
in model predictions.
| [
{
"version": "v1",
"created": "Tue, 21 Feb 2012 11:44:06 GMT"
}
] | 2013-01-31T00:00:00 | [
[
"Raue",
"Andreas",
""
],
[
"Kreutz",
"Clemens",
""
],
[
"Theis",
"Fabian Joachim",
""
],
[
"Timmer",
"Jens",
""
]
] | TITLE: Joining Forces of Bayesian and Frequentist Methodology: A Study for
Inference in the Presence of Non-Identifiability
ABSTRACT: Increasingly complex applications involve large datasets in combination with
non-linear and high dimensional mathematical models. In this context,
statistical inference is a challenging issue that calls for pragmatic
approaches that take advantage of both Bayesian and frequentist methods. The
elegance of Bayesian methodology is founded in the propagation of information
content provided by experimental data and prior assumptions to the posterior
probability distribution of model predictions. However, for complex
applications experimental data and prior assumptions potentially constrain the
posterior probability distribution insufficiently. In these situations Bayesian
Markov chain Monte Carlo sampling can be infeasible. From a frequentist point
of view insufficient experimental data and prior assumptions can be interpreted
as non-identifiability. The profile likelihood approach offers to detect and to
resolve non-identifiability by experimental design iteratively. Therefore, it
allows one to better constrain the posterior probability distribution until
Markov chain Monte Carlo sampling can be used securely. Using an application
from cell biology we compare both methods and show that a successive
application of both methods facilitates a realistic assessment of uncertainty
in model predictions.
|
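A minimal sketch of the profile-likelihood diagnostic discussed above, on a deliberately non-identifiable toy model y = a*b*x + noise: profiling out b leaves the likelihood essentially flat in a, which is exactly the signature the approach is meant to detect. The model, noise level, and parameter grid are my own choices, not the paper's cell-biology application.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
x = np.linspace(0, 1, 30)
y = 2.0 * 1.5 * x + 0.05 * rng.standard_normal(x.size)   # only the product a*b = 3 is identifiable

def neg_log_likelihood(a, b, sigma=0.05):
    r = y - a * b * x
    return 0.5 * np.sum((r / sigma) ** 2)

def profile(a):
    """Minimize over the nuisance parameter b for a fixed a."""
    return minimize_scalar(lambda b: neg_log_likelihood(a, b),
                           bounds=(0.01, 100), method="bounded").fun

for a in (0.5, 1.0, 2.0, 4.0):
    # a nearly constant profile across a signals structural non-identifiability
    print(f"a = {a:>4}: profiled -log L = {profile(a):.3f}")
```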
1211.2756 | Anton Korobeynikov | Sergey I. Nikolenko, Anton I. Korobeynikov and Max A. Alekseyev | BayesHammer: Bayesian clustering for error correction in single-cell
sequencing | null | BMC Genomics 14(Suppl 1) (2013), pp. S7 | 10.1186/1471-2164-14-S1-S7 | null | q-bio.QM cs.CE cs.DS q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Error correction of sequenced reads remains a difficult task, especially in
single-cell sequencing projects with extremely non-uniform coverage. While
existing error correction tools designed for standard (multi-cell) sequencing
data usually come up short in single-cell sequencing projects, algorithms
actually used for single-cell error correction have been so far very
simplistic.
We introduce several novel algorithms based on Hamming graphs and Bayesian
subclustering in our new error correction tool BayesHammer. While BayesHammer
was designed for single-cell sequencing, we demonstrate that it also improves
on existing error correction tools for multi-cell sequencing data while working
much faster on real-life datasets. We benchmark BayesHammer on both $k$-mer
counts and actual assembly results with the SPAdes genome assembler.
| [
{
"version": "v1",
"created": "Mon, 12 Nov 2012 19:52:34 GMT"
}
] | 2013-01-31T00:00:00 | [
[
"Nikolenko",
"Sergey I.",
""
],
[
"Korobeynikov",
"Anton I.",
""
],
[
"Alekseyev",
"Max A.",
""
]
] | TITLE: BayesHammer: Bayesian clustering for error correction in single-cell
sequencing
ABSTRACT: Error correction of sequenced reads remains a difficult task, especially in
single-cell sequencing projects with extremely non-uniform coverage. While
existing error correction tools designed for standard (multi-cell) sequencing
data usually come up short in single-cell sequencing projects, algorithms
actually used for single-cell error correction have been so far very
simplistic.
We introduce several novel algorithms based on Hamming graphs and Bayesian
subclustering in our new error correction tool BayesHammer. While BayesHammer
was designed for single-cell sequencing, we demonstrate that it also improves
on existing error correction tools for multi-cell sequencing data while working
much faster on real-life datasets. We benchmark BayesHammer on both $k$-mer
counts and actual assembly results with the SPAdes genome assembler.
|
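A minimal sketch of the Hamming-graph step mentioned above: k-mers drawn from reads become nodes, nodes are connected when their Hamming distance is at most a small threshold, and the connected components are then candidates for the (Bayesian) subclustering the abstract refers to. The toy reads, k, and threshold are assumptions.

```python
from itertools import combinations
import networkx as nx

def kmers(read, k):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def hamming(a, b):
    return sum(c1 != c2 for c1, c2 in zip(a, b))

reads = ["ACGTACGT", "ACGTACGA", "TTGCACGT"]   # the second read carries one error
k = 5
nodes = {km for r in reads for km in kmers(r, k)}

g = nx.Graph()
g.add_nodes_from(nodes)
g.add_edges_from((a, b) for a, b in combinations(nodes, 2) if hamming(a, b) <= 1)

for component in nx.connected_components(g):
    print(sorted(component))                    # each component groups near-identical k-mers
```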
0910.5260 | Sewoong Oh | Raghunandan H. Keshavan, Sewoong Oh | A Gradient Descent Algorithm on the Grassman Manifold for Matrix
Completion | 26 pages, 15 figures | null | 10.1016/j.trc.2012.12.007 | null | cs.NA cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of reconstructing a low-rank matrix from a small
subset of its entries. In this paper, we describe the implementation of an
efficient algorithm called OptSpace, based on singular value decomposition
followed by local manifold optimization, for solving the low-rank matrix
completion problem. It has been shown that if the number of revealed entries is
large enough, the output of singular value decomposition gives a good estimate
for the original matrix, so that local optimization reconstructs the correct
matrix with high probability. We present numerical results which show that this
algorithm can reconstruct the low rank matrix exactly from a very small subset
of its entries. We further study the robustness of the algorithm with respect
to noise, and its performance on actual collaborative filtering datasets.
| [
{
"version": "v1",
"created": "Tue, 27 Oct 2009 22:19:31 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Nov 2009 23:35:13 GMT"
}
] | 2013-01-30T00:00:00 | [
[
"Keshavan",
"Raghunandan H.",
""
],
[
"Oh",
"Sewoong",
""
]
] | TITLE: A Gradient Descent Algorithm on the Grassman Manifold for Matrix
Completion
ABSTRACT: We consider the problem of reconstructing a low-rank matrix from a small
subset of its entries. In this paper, we describe the implementation of an
efficient algorithm called OptSpace, based on singular value decomposition
followed by local manifold optimization, for solving the low-rank matrix
completion problem. It has been shown that if the number of revealed entries is
large enough, the output of singular value decomposition gives a good estimate
for the original matrix, so that local optimization reconstructs the correct
matrix with high probability. We present numerical results which show that this
algorithm can reconstruct the low rank matrix exactly from a very small subset
of its entries. We further study the robustness of the algorithm with respect
to noise, and its performance on actual collaborative filtering datasets.
|
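The abstract above describes singular value decomposition followed by local manifold optimization. Below is a sketch of only the spectral initialization half (zero-fill the unobserved entries, rescale by the sampling rate, truncate the SVD); the Grassmann-manifold refinement that OptSpace then performs is omitted, and all names and sizes are illustrative.

```python
import numpy as np

def spectral_estimate(M_obs, mask, rank):
    """M_obs: matrix with observed entries (zeros elsewhere); mask: 1 where observed."""
    p = mask.mean()                               # fraction of revealed entries
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r truncation

rng = np.random.default_rng(1)
true = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 30))   # rank-3 ground truth
mask = (rng.random(true.shape) < 0.5).astype(float)                  # reveal half the entries
est = spectral_estimate(true * mask, mask, rank=3)
rel_err = np.linalg.norm(est - true) / np.linalg.norm(true)
print(f"relative error of the spectral estimate: {rel_err:.2f}")
```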
1205.4377 | Kirill Trapeznikov | Kirill Trapeznikov, Venkatesh Saligrama, David Castanon | Multi-Stage Classifier Design | null | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many classification systems, sensing modalities have different acquisition
costs. It is often {\it unnecessary} to use every modality to classify a
majority of examples. We study a multi-stage system in a prediction time cost
reduction setting, where the full data is available for training, but for a
test example, measurements in a new modality can be acquired at each stage for
an additional cost. We seek decision rules to reduce the average measurement
acquisition cost. We formulate an empirical risk minimization problem (ERM) for
a multi-stage reject classifier, wherein the stage $k$ classifier either
classifies a sample using only the measurements acquired so far or rejects it
to the next stage where more attributes can be acquired for a cost. To solve
the ERM problem, we show that the optimal reject classifier at each stage is a
combination of two binary classifiers, one biased towards positive examples and
the other biased towards negative examples. We use this parameterization to
construct stage-by-stage global surrogate risk, develop an iterative algorithm
in the boosting framework and present convergence and generalization results.
We test our work on synthetic, medical and explosives detection datasets. Our
results demonstrate that substantial cost reduction without a significant
sacrifice in accuracy is achievable.
| [
{
"version": "v1",
"created": "Sun, 20 May 2012 03:15:13 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Jan 2013 16:54:30 GMT"
}
] | 2013-01-30T00:00:00 | [
[
"Trapeznikov",
"Kirill",
""
],
[
"Saligrama",
"Venkatesh",
""
],
[
"Castanon",
"David",
""
]
] | TITLE: Multi-Stage Classifier Design
ABSTRACT: In many classification systems, sensing modalities have different acquisition
costs. It is often {\it unnecessary} to use every modality to classify a
majority of examples. We study a multi-stage system in a prediction time cost
reduction setting, where the full data is available for training, but for a
test example, measurements in a new modality can be acquired at each stage for
an additional cost. We seek decision rules to reduce the average measurement
acquisition cost. We formulate an empirical risk minimization problem (ERM) for
a multi-stage reject classifier, wherein the stage $k$ classifier either
classifies a sample using only the measurements acquired so far or rejects it
to the next stage where more attributes can be acquired for a cost. To solve
the ERM problem, we show that the optimal reject classifier at each stage is a
combination of two binary classifiers, one biased towards positive examples and
the other biased towards negative examples. We use this parameterization to
construct stage-by-stage global surrogate risk, develop an iterative algorithm
in the boosting framework and present convergence and generalization results.
We test our work on synthetic, medical and explosives detection datasets. Our
results demonstrate that substantial cost reduction without a significant
sacrifice in accuracy is achievable.
|
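One illustrative way to realize the "two oppositely biased binary classifiers" idea from the abstract above: a stage commits to a label when its positively- and negatively-biased classifiers agree and rejects the example to the next, more feature-rich stage when they disagree. This is a hand-rolled sketch on made-up data, not the paper's boosted surrogate-risk construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_stage(X, y, bias=3.0):
    """Two classifiers with opposite class-weight biases."""
    pos = LogisticRegression(class_weight={0: 1.0, 1: bias}).fit(X, y)
    neg = LogisticRegression(class_weight={0: bias, 1: 1.0}).fit(X, y)
    return pos, neg

def stage_predict(stage, X):
    """Predictions in {0, 1}, or -1 meaning 'reject to the next stage'."""
    pos, neg = stage
    p, n = pos.predict(X), neg.predict(X)
    return np.where(p == n, p, -1)

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(500) > 0).astype(int)
cheap, full = X[:, :2], X                        # stage 1 sees only 2 of the 4 features

stage1 = fit_stage(cheap, y)
last_stage = LogisticRegression().fit(full, y)   # the last stage must always decide

d1 = stage_predict(stage1, cheap)
rejected = d1 == -1
final = d1.copy()
final[rejected] = last_stage.predict(full[rejected])
print(f"rejection rate: {rejected.mean():.2f}, accuracy: {(final == y).mean():.2f}")
```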
1301.6686 | Gregory F. Cooper | Gregory F. Cooper, Changwon Yoo | Causal Discovery from a Mixture of Experimental and Observational Data | Appears in Proceedings of the Fifteenth Conference on Uncertainty in
Artificial Intelligence (UAI1999) | null | null | UAI-P-1999-PG-116-125 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a Bayesian method for combining an arbitrary mixture of
observational and experimental data in order to learn causal Bayesian networks.
Observational data are passively observed. Experimental data, such as that
produced by randomized controlled trials, result from the experimenter
manipulating one or more variables (typically randomly) and observing the
states of other variables. The paper presents a Bayesian method for learning
the causal structure and parameters of the underlying causal process that is
generating the data, given that (1) the data contains a mixture of
observational and experimental case records, and (2) the causal process is
modeled as a causal Bayesian network. This learning method was applied using as
input various mixtures of experimental and observational data that were
generated from the ALARM causal Bayesian network. In these experiments, the
absolute and relative quantities of experimental and observational data were
varied systematically. For each of these training datasets, the learning method
was applied to predict the causal structure and to estimate the causal
parameters that exist among randomly selected pairs of nodes in ALARM that are
not confounded. The paper reports how these structure predictions and parameter
estimates compare with the true causal structures and parameters as given by
the ALARM network.
| [
{
"version": "v1",
"created": "Wed, 23 Jan 2013 15:57:22 GMT"
}
] | 2013-01-30T00:00:00 | [
[
"Cooper",
"Gregory F.",
""
],
[
"Yoo",
"Changwon",
""
]
] | TITLE: Causal Discovery from a Mixture of Experimental and Observational Data
ABSTRACT: This paper describes a Bayesian method for combining an arbitrary mixture of
observational and experimental data in order to learn causal Bayesian networks.
Observational data are passively observed. Experimental data, such as that
produced by randomized controlled trials, result from the experimenter
manipulating one or more variables (typically randomly) and observing the
states of other variables. The paper presents a Bayesian method for learning
the causal structure and parameters of the underlying causal process that is
generating the data, given that (1) the data contains a mixture of
observational and experimental case records, and (2) the causal process is
modeled as a causal Bayesian network. This learning method was applied using as
input various mixtures of experimental and observational data that were
generated from the ALARM causal Bayesian network. In these experiments, the
absolute and relative quantities of experimental and observational data were
varied systematically. For each of these training datasets, the learning method
was applied to predict the causal structure and to estimate the causal
parameters that exist among randomly selected pairs of nodes in ALARM that are
not confounded. The paper reports how these structure predictions and parameter
estimates compare with the true causal structures and parameters as given by
the ALARM network.
|
1301.6723 | Stefano Monti | Stefano Monti, Gregory F. Cooper | A Bayesian Network Classifier that Combines a Finite Mixture Model and a
Naive Bayes Model | Appears in Proceedings of the Fifteenth Conference on Uncertainty in
Artificial Intelligence (UAI1999) | null | null | UAI-P-1999-PG-447-456 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a new Bayesian network model for classification that
combines the naive-Bayes (NB) classifier and the finite-mixture (FM)
classifier. The resulting classifier aims at relaxing the strong assumptions on
which the two component models are based, in an attempt to improve on their
classification performance, both in terms of accuracy and in terms of
calibration of the estimated probabilities. The proposed classifier is obtained
by superimposing a finite mixture model on the set of feature variables of a
naive Bayes model. We present experimental results that compare the predictive
performance on real datasets of the new classifier with the predictive
performance of the NB classifier and the FM classifier.
| [
{
"version": "v1",
"created": "Wed, 23 Jan 2013 15:59:54 GMT"
}
] | 2013-01-30T00:00:00 | [
[
"Monti",
"Stefano",
""
],
[
"Cooper",
"Gregory F.",
""
]
] | TITLE: A Bayesian Network Classifier that Combines a Finite Mixture Model and a
Naive Bayes Model
ABSTRACT: In this paper we present a new Bayesian network model for classification that
combines the naive-Bayes (NB) classifier and the finite-mixture (FM)
classifier. The resulting classifier aims at relaxing the strong assumptions on
which the two component models are based, in an attempt to improve on their
classification performance, both in terms of accuracy and in terms of
calibration of the estimated probabilities. The proposed classifier is obtained
by superimposing a finite mixture model on the set of feature variables of a
naive Bayes model. We present experimental results that compare the predictive
performance on real datasets of the new classifier with the predictive
performance of the NB classifier and the FM classifier.
|
1301.6770 | Zhixiang Eddie Xu | Zhixiang (Eddie) Xu, Minmin Chen, Kilian Q. Weinberger, Fei Sha | An alternative text representation to TF-IDF and Bag-of-Words | null | null | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In text mining, information retrieval, and machine learning, text documents
are commonly represented through variants of sparse Bag of Words (sBoW) vectors
(e.g. TF-IDF). Although simple and intuitive, sBoW style representations suffer
from their inherent over-sparsity and fail to capture word-level synonymy and
polysemy. Especially when labeled data is limited (e.g. in document
classification), or the text documents are short (e.g. emails or abstracts),
many features are rarely observed within the training corpus. This leads to
overfitting and reduced generalization accuracy. In this paper we propose Dense
Cohort of Terms (dCoT), an unsupervised algorithm to learn improved sBoW
document features. dCoT explicitly models absent words by removing and
reconstructing random sub-sets of words in the unlabeled corpus. With this
approach, dCoT learns to reconstruct frequent words from co-occurring
infrequent words and maps the high dimensional sparse sBoW vectors into a
low-dimensional dense representation. We show that the feature removal can be
marginalized out and that the reconstruction can be solved for in closed-form.
We demonstrate empirically, on several benchmark datasets, that dCoT features
significantly improve the classification accuracy across several document
classification tasks.
| [
{
"version": "v1",
"created": "Mon, 28 Jan 2013 21:04:45 GMT"
}
] | 2013-01-30T00:00:00 | [
[
"Zhixiang",
"",
"",
"Eddie"
],
[
"Xu",
"",
""
],
[
"Chen",
"Minmin",
""
],
[
"Weinberger",
"Kilian Q.",
""
],
[
"Sha",
"Fei",
""
]
] | TITLE: An alternative text representation to TF-IDF and Bag-of-Words
ABSTRACT: In text mining, information retrieval, and machine learning, text documents
are commonly represented through variants of sparse Bag of Words (sBoW) vectors
(e.g. TF-IDF). Although simple and intuitive, sBoW style representations suffer
from their inherent over-sparsity and fail to capture word-level synonymy and
polysemy. Especially when labeled data is limited (e.g. in document
classification), or the text documents are short (e.g. emails or abstracts),
many features are rarely observed within the training corpus. This leads to
overfitting and reduced generalization accuracy. In this paper we propose Dense
Cohort of Terms (dCoT), an unsupervised algorithm to learn improved sBoW
document features. dCoT explicitly models absent words by removing and
reconstructing random sub-sets of words in the unlabeled corpus. With this
approach, dCoT learns to reconstruct frequent words from co-occurring
infrequent words and maps the high dimensional sparse sBoW vectors into a
low-dimensional dense representation. We show that the feature removal can be
marginalized out and that the reconstruction can be solved for in closed-form.
We demonstrate empirically, on several benchmark datasets, that dCoT features
significantly improve the classification accuracy across several document
classification tasks.
|
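The closed-form "remove words and reconstruct them" step described above is closely related to marginalized denoising. The sketch below computes generic marginalized-denoising reconstruction weights (drop each term with probability p in expectation, solve a regularized linear system) and maps sBoW vectors to a dense representation; it is my reading under that assumption, not the paper's exact dCoT equations.

```python
import numpy as np

def marginalized_denoising_weights(X, p=0.5, reg=1e-5):
    """X: (n_docs, n_terms) sBoW matrix. Returns W such that x ~ W @ corrupted(x)."""
    S = X.T @ X                                   # term-term scatter matrix
    q = 1.0 - p                                   # survival probability of a term
    Q = S * (q * q)                               # E[corrupted x corrupted^T], off-diagonal
    np.fill_diagonal(Q, np.diag(S) * q)           # diagonal terms survive with probability q
    P = S * q                                     # E[x corrupted^T]
    Q += reg * np.eye(Q.shape[0])                 # small ridge for invertibility
    return np.linalg.solve(Q, P).T                # W = P Q^{-1} (Q, P symmetric)

rng = np.random.default_rng(3)
X = (rng.random((200, 50)) < 0.1).astype(float)   # toy sparse bag-of-words counts
W = marginalized_denoising_weights(X, p=0.5)
dense_docs = np.tanh(X @ W.T)                     # dense document representation (illustrative nonlinearity)
print(dense_docs.shape)
```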
1301.6800 | Mokhov, Nikolai | C. Yoshikawa (Muons, Inc.), A. Leveling, N.V. Mokhov, J. Morgan, D.
Neuffer, S. Striganov (Fermilab) | Optimization of the Target Subsystem for the New g-2 Experiment | 4 pp. 3rd International Particle Accelerator Conference (IPAC 2012)
20-25 May 2012, New Orleans, Louisiana | null | null | FERMILAB-CONF-12-202-AD-APC | physics.acc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A precision measurement of the muon anomalous magnetic moment, $a_{\mu} =
(g-2)/2$, was previously performed at BNL with a result of 2.2 - 2.7 standard
deviations above the Standard Model (SM) theoretical calculations. The same
experimental apparatus is being planned to run in the new Muon Campus at
Fermilab, where the muon beam is expected to have less pion contamination and
the extended dataset may provide a possible $7.5\sigma$ deviation from the SM,
creating a sensitive and complementary benchmark for proposed SM extensions.
We report here on a preliminary study of the target subsystem where the
apparatus is optimized for pions that have favorable phase space to create
polarized daughter muons around the magic momentum of 3.094 GeV/c, which is
needed by the downstream g-2 muon ring.
| [
{
"version": "v1",
"created": "Mon, 28 Jan 2013 22:30:19 GMT"
}
] | 2013-01-30T00:00:00 | [
[
"Yoshikawa",
"C.",
"",
"Muons, Inc."
],
[
"Leveling",
"A.",
"",
"Fermilab"
],
[
"Mokhov",
"N. V.",
"",
"Fermilab"
],
[
"Morgan",
"J.",
"",
"Fermilab"
],
[
"Neuffer",
"D.",
"",
"Fermilab"
],
[
"Striganov",
"S.",
"",
"Fermilab"
]
] | TITLE: Optimization of the Target Subsystem for the New g-2 Experiment
ABSTRACT: A precision measurement of the muon anomalous magnetic moment, $a_{\mu} =
(g-2)/2$, was previously performed at BNL with a result of 2.2 - 2.7 standard
deviations above the Standard Model (SM) theoretical calculations. The same
experimental apparatus is being planned to run in the new Muon Campus at
Fermilab, where the muon beam is expected to have less pion contamination and
the extended dataset may provide a possible $7.5\sigma$ deviation from the SM,
creating a sensitive and complementary benchmark for proposed SM extensions.
We report here on a preliminary study of the target subsystem where the
apparatus is optimized for pions that have favorable phase space to create
polarized daughter muons around the magic momentum of 3.094 GeV/c, which is
needed by the downstream g-2 muon ring.
|
1301.6870 | Paridhi Jain | Anshu Malhotra, Luam Totti, Wagner Meira Jr., Ponnurangam Kumaraguru,
Virgilio Almeida | Studying User Footprints in Different Online Social Networks | The paper is already published in ASONAM 2012 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing popularity and usage of online social media services, people
now have accounts (sometimes several) on multiple and diverse services like
Facebook, LinkedIn, Twitter and YouTube. Publicly available information can be
used to create a digital footprint of any user using these social media
services. Generating such digital footprints can be very useful for
personalization, profile management, and detecting malicious behavior of users. A
very important application of analyzing users' online digital footprints is to
protect users from potential privacy and security risks arising from the huge
publicly available user information. We extracted information about user
identities on different social networks through Social Graph API, FriendFeed,
and Profilactic; we collated our own dataset to create the digital footprints
of the users. We used username, display name, description, location, profile
image, and number of connections to generate the digital footprints of the
user. We applied context-specific techniques (e.g. Jaro-Winkler similarity,
WordNet-based ontologies) to measure the similarity of the user profiles on
different social networks. We specifically focused on Twitter and LinkedIn. In
this paper, we present the analysis and results from applying automated
classifiers for disambiguating profiles belonging to the same user from
different social networks. UserID and Name were found to be the most
discriminative features for disambiguating user profiles. Using the most
promising set of features and similarity metrics, we achieved accuracy,
precision and recall of 98%, 99%, and 96%, respectively.
| [
{
"version": "v1",
"created": "Tue, 29 Jan 2013 09:29:54 GMT"
}
] | 2013-01-30T00:00:00 | [
[
"Malhotra",
"Anshu",
""
],
[
"Totti",
"Luam",
""
],
[
"Meira",
"Wagner",
"Jr."
],
[
"Kumaraguru",
"Ponnurangam",
""
],
[
"Almeida",
"Virgilio",
""
]
] | TITLE: Studying User Footprints in Different Online Social Networks
ABSTRACT: With the growing popularity and usage of online social media services, people
now have accounts (sometimes several) on multiple and diverse services like
Facebook, LinkedIn, Twitter and YouTube. Publicly available information can be
used to create a digital footprint of any user using these social media
services. Generating such digital footprints can be very useful for
personalization, profile management, and detecting malicious behavior of users. A
very important application of analyzing users' online digital footprints is to
protect users from potential privacy and security risks arising from the huge
publicly available user information. We extracted information about user
identities on different social networks through Social Graph API, FriendFeed,
and Profilactic; we collated our own dataset to create the digital footprints
of the users. We used username, display name, description, location, profile
image, and number of connections to generate the digital footprints of the
user. We applied context-specific techniques (e.g. Jaro-Winkler similarity,
WordNet-based ontologies) to measure the similarity of the user profiles on
different social networks. We specifically focused on Twitter and LinkedIn. In
this paper, we present the analysis and results from applying automated
classifiers for disambiguating profiles belonging to the same user from
different social networks. UserID and Name were found to be the most
discriminative features for disambiguating user profiles. Using the most
promising set of features and similarity metrics, we achieved accuracy,
precision and recall of 98%, 99%, and 96%, respectively.
|
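A minimal sketch of the profile-disambiguation setup described above: turn a candidate pair of profiles into field-wise string-similarity features and train a classifier on labeled pairs. difflib's ratio() is used here only as a stand-in for Jaro-Winkler, and the fields, toy profiles, and labels are invented for illustration.

```python
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

FIELDS = ["username", "display_name", "location"]

def sim(a, b):
    """Simple string similarity in [0, 1] (stand-in for Jaro-Winkler)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def pair_features(profile_a, profile_b):
    return [sim(profile_a[f], profile_b[f]) for f in FIELDS]

pairs = [
    ({"username": "jdoe42", "display_name": "John Doe", "location": "Delhi"},
     {"username": "j_doe42", "display_name": "John Doe", "location": "New Delhi"}, 1),
    ({"username": "jdoe42", "display_name": "John Doe", "location": "Delhi"},
     {"username": "maria88", "display_name": "Maria Silva", "location": "Lisbon"}, 0),
    ({"username": "anna_k", "display_name": "Anna K.", "location": "Berlin"},
     {"username": "annak", "display_name": "Anna Krause", "location": "Berlin"}, 1),
    ({"username": "anna_k", "display_name": "Anna K.", "location": "Berlin"},
     {"username": "bob77", "display_name": "Bob", "location": "Austin"}, 0),
]
X = [pair_features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([pair_features(pairs[0][0], pairs[0][1])]))   # P(different), P(same user)
```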
1210.0137 | Pierre Deville Pierre | Vincent D. Blondel, Markus Esch, Connie Chan, Fabrice Clerot, Pierre
Deville, Etienne Huens, Frédéric Morlot, Zbigniew Smoreda and Cezary
Ziemlicki | Data for Development: the D4D Challenge on Mobile Phone Data | 10 pages, 3 figures | null | null | null | cs.CY cs.SI physics.soc-ph stat.CO | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The Orange "Data for Development" (D4D) challenge is an open data challenge
on anonymous call patterns of Orange's mobile phone users in Ivory Coast. The
goal of the challenge is to help address society development questions in novel
ways by contributing to the socio-economic development and well-being of the
Ivory Coast population. Participants to the challenge are given access to four
mobile phone datasets and the purpose of this paper is to describe the four
datasets. The website http://www.d4d.orange.com contains more information about
the participation rules. The datasets are based on anonymized Call Detail
Records (CDR) of phone calls and SMS exchanges between five million of Orange's
customers in Ivory Coast between December 1, 2011 and April 28, 2012. The
datasets are: (a) antenna-to-antenna traffic on an hourly basis, (b) individual
trajectories for 50,000 customers for two week time windows with antenna
location information, (c) individual trajectories for 500,000 customers over
the entire observation period with sub-prefecture location information, and (d)
a sample of communication graphs for 5,000 customers.
| [
{
"version": "v1",
"created": "Sat, 29 Sep 2012 17:39:16 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Jan 2013 12:56:55 GMT"
}
] | 2013-01-29T00:00:00 | [
[
"Blondel",
"Vincent D.",
""
],
[
"Esch",
"Markus",
""
],
[
"Chan",
"Connie",
""
],
[
"Clerot",
"Fabrice",
""
],
[
"Deville",
"Pierre",
""
],
[
"Huens",
"Etienne",
""
],
[
"Morlot",
"Frédéric",
""
],
[
"Smoreda",
"Zbigniew",
""
],
[
"Ziemlicki",
"Cezary",
""
]
] | TITLE: Data for Development: the D4D Challenge on Mobile Phone Data
ABSTRACT: The Orange "Data for Development" (D4D) challenge is an open data challenge
on anonymous call patterns of Orange's mobile phone users in Ivory Coast. The
goal of the challenge is to help address society development questions in novel
ways by contributing to the socio-economic development and well-being of the
Ivory Coast population. Participants to the challenge are given access to four
mobile phone datasets and the purpose of this paper is to describe the four
datasets. The website http://www.d4d.orange.com contains more information about
the participation rules. The datasets are based on anonymized Call Detail
Records (CDR) of phone calls and SMS exchanges between five million of Orange's
customers in Ivory Coast between December 1, 2011 and April 28, 2012. The
datasets are: (a) antenna-to-antenna traffic on an hourly basis, (b) individual
trajectories for 50,000 customers for two week time windows with antenna
location information, (c) individual trajectories for 500,000 customers over
the entire observation period with sub-prefecture location information, and (d)
a sample of communication graphs for 5,000 customers.
|
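A sketch of how dataset (a) above, hourly antenna-to-antenna traffic, could be aggregated from raw anonymized CDRs with pandas; the column names and toy rows are assumptions, since the real D4D files ship with their own schema.

```python
import pandas as pd

cdr = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2011-12-01 08:05", "2011-12-01 08:40", "2011-12-01 09:10",
        "2011-12-01 08:15", "2011-12-01 09:55"]),
    "caller_antenna": [1, 1, 1, 2, 2],
    "callee_antenna": [2, 2, 3, 1, 3],
    "duration_s": [60, 120, 30, 45, 300],
})

hourly = (cdr
          .assign(hour=cdr["timestamp"].dt.strftime("%Y-%m-%d %H:00"))
          .groupby(["hour", "caller_antenna", "callee_antenna"])
          .agg(n_calls=("duration_s", "size"),
               total_duration_s=("duration_s", "sum"))
          .reset_index())
print(hourly)
```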
1301.4293 | Limin Yao | Sebastian Riedel, Limin Yao, Andrew McCallum | Latent Relation Representations for Universal Schemas | 4 pages, ICLR workshop | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional relation extraction predicts relations within some fixed and
finite target schema. Machine learning approaches to this task require either
manual annotation or, in the case of distant supervision, existing structured
sources of the same schema. The need for existing datasets can be avoided by
using a universal schema: the union of all involved schemas (surface form
predicates as in OpenIE, and relations in the schemas of pre-existing
databases). This schema has an almost unlimited set of relations (due to
surface forms), and supports integration with existing structured data (through
the relation types of existing databases). To populate a database of such
schema we present a family of matrix factorization models that predict affinity
between database tuples and relations. We show that this achieves substantially
higher accuracy than the traditional classification approach. More importantly,
by operating simultaneously on relations observed in text and in pre-existing
structured DBs such as Freebase, we are able to reason about unstructured and
structured data in mutually-supporting ways. By doing so our approach
outperforms state-of-the-art distant supervision systems.
| [
{
"version": "v1",
"created": "Fri, 18 Jan 2013 04:37:30 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Jan 2013 20:10:21 GMT"
}
] | 2013-01-29T00:00:00 | [
[
"Riedel",
"Sebastian",
""
],
[
"Yao",
"Limin",
""
],
[
"McCallum",
"Andrew",
""
]
] | TITLE: Latent Relation Representations for Universal Schemas
ABSTRACT: Traditional relation extraction predicts relations within some fixed and
finite target schema. Machine learning approaches to this task require either
manual annotation or, in the case of distant supervision, existing structured
sources of the same schema. The need for existing datasets can be avoided by
using a universal schema: the union of all involved schemas (surface form
predicates as in OpenIE, and relations in the schemas of pre-existing
databases). This schema has an almost unlimited set of relations (due to
surface forms), and supports integration with existing structured data (through
the relation types of existing databases). To populate a database of such
schema we present a family of matrix factorization models that predict affinity
between database tuples and relations. We show that this achieves substantially
higher accuracy than the traditional classification approach. More importantly,
by operating simultaneously on relations observed in text and in pre-existing
structured DBs such as Freebase, we are able to reason about unstructured and
structured data in mutually-supporting ways. By doing so our approach
outperforms state-of-the-art distant supervision systems.
|
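A plain-numpy sketch of the factorization idea described above: one embedding per entity-pair row and one per relation column, a logistic link on their dot product, and stochastic gradient updates with sampled negative relations. The dimensions, learning rate, and synthetic "observed facts" are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_pairs, n_rels, k = 50, 8, 10
observed = {(int(rng.integers(n_pairs)), int(rng.integers(n_rels))) for _ in range(200)}

P = 0.1 * rng.standard_normal((n_pairs, k))      # entity-pair embeddings
R = 0.1 * rng.standard_normal((n_rels, k))       # relation embeddings
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for epoch in range(200):
    for i, j in observed:
        j_neg = int(rng.integers(n_rels))        # sampled negative relation (may collide; fine for a sketch)
        for jj, label in ((j, 1.0), (j_neg, 0.0)):
            p_i = P[i].copy()
            g = sigmoid(p_i @ R[jj]) - label     # gradient of the logistic loss
            P[i] -= lr * g * R[jj]
            R[jj] -= lr * g * p_i

i, j = next(iter(observed))
print(f"predicted probability of one observed fact: {sigmoid(P[i] @ R[j]):.2f}")
```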
1301.5686 | Jeon-Hyung Kang | Jeon-Hyung Kang, Jun Ma, Yan Liu | Transfer Topic Modeling with Ease and Scalability | 2012 SIAM International Conference on Data Mining (SDM12) Pages:
{564-575} | null | null | null | cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing volume of short texts generated on social media sites, such as
Twitter or Facebook, creates a great demand for effective and efficient topic
modeling approaches. While latent Dirichlet allocation (LDA) can be applied, it
is not optimal due to its weakness in handling short texts with fast-changing
topics and scalability concerns. In this paper, we propose a transfer learning
approach that utilizes abundant labeled documents from other domains (such as
Yahoo! News or Wikipedia) to improve topic modeling, with better model fitting
and result interpretation. Specifically, we develop Transfer Hierarchical LDA
(thLDA) model, which incorporates the label information from other domains via
informative priors. In addition, we develop a parallel implementation of our
model for large-scale applications. We demonstrate the effectiveness of our
thLDA model on both a microblogging dataset and standard text collections
including AP and RCV1 datasets.
| [
{
"version": "v1",
"created": "Thu, 24 Jan 2013 02:02:13 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Jan 2013 18:00:19 GMT"
}
] | 2013-01-29T00:00:00 | [
[
"Kang",
"Jeon-Hyung",
""
],
[
"Ma",
"Jun",
""
],
[
"Liu",
"Yan",
""
]
] | TITLE: Transfer Topic Modeling with Ease and Scalability
ABSTRACT: The increasing volume of short texts generated on social media sites, such as
Twitter or Facebook, creates a great demand for effective and efficient topic
modeling approaches. While latent Dirichlet allocation (LDA) can be applied, it
is not optimal due to its weakness in handling short texts with fast-changing
topics and scalability concerns. In this paper, we propose a transfer learning
approach that utilizes abundant labeled documents from other domains (such as
Yahoo! News or Wikipedia) to improve topic modeling, with better model fitting
and result interpretation. Specifically, we develop Transfer Hierarchical LDA
(thLDA) model, which incorporates the label information from other domains via
informative priors. In addition, we develop a parallel implementation of our
model for large-scale applications. We demonstrate the effectiveness of our
thLDA model on both a microblogging dataset and standard text collections
including AP and RCV1 datasets.
|
1301.6553 | Thomas Couronne | Thomas Couronne, Zbigniew Smoreda, Ana-Maria Olteanu | Chatty Mobiles: Individual mobility and communication patterns | NetMob 2011, Boston | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human mobility analysis is an important issue in social sciences, and
mobility data are among the most sought-after sources of information in urban
studies, geography, transportation and territory management. In
network sciences mobility studies have become popular in the past few years,
especially using mobile phone location data. For preserving the customer
privacy, datasets furnished by telecom operators are anonymized. At the same
time, the large size of datasets often makes the task of calculating all
observed trajectories very difficult and time-consuming. One solution could be
to sample users. However, the fact of not having information about the mobile
user makes the sampling delicate. Some researchers select randomly a sample of
users from their dataset. Others try to optimize this method, for example,
taking into account only users with a certain number or frequency of locations
recorded. At first glance, the second choice seems to be more efficient:
having more individual traces makes the analysis more precise. However, the
most frequently used CDR data (Call Detail Records) have location generated
only at the moment of communication (call, SMS, data connection). Due to this
fact, users' mobility patterns cannot be precisely built upon their
communication patterns. Hence, these data have evident shortcomings both in
terms of spatial and temporal scale. In this paper we propose to estimate the
correlation between the users' communication and mobility in order to better
assess the bias of frequency based sampling. Using technical GSM network data
(including communication but also independent mobility records), we will
analyze the relationship between communication and mobility patterns.
| [
{
"version": "v1",
"created": "Mon, 28 Jan 2013 14:19:48 GMT"
}
] | 2013-01-29T00:00:00 | [
[
"Couronne",
"Thomas",
""
],
[
"Smoreda",
"Zbigniew",
""
],
[
"Olteanu",
"Ana-Maria",
""
]
] | TITLE: Chatty Mobiles: Individual mobility and communication patterns
ABSTRACT: Human mobility analysis is an important issue in social sciences, and
mobility data are among the most sought-after sources of information in urban
studies, geography, transportation and territory management. In
network sciences mobility studies have become popular in the past few years,
especially using mobile phone location data. For preserving the customer
privacy, datasets furnished by telecom operators are anonymized. At the same
time, the large size of datasets often makes the task of calculating all
observed trajectories very difficult and time-consuming. One solution could be
to sample users. However, the fact of not having information about the mobile
user makes the sampling delicate. Some researchers select randomly a sample of
users from their dataset. Others try to optimize this method, for example,
taking into account only users with a certain number or frequency of locations
recorded. At first glance, the second choice seems to be more efficient:
having more individual traces makes the analysis more precise. However, the
most frequently used CDR data (Call Detail Records) have location generated
only at the moment of communication (call, SMS, data connection). Due to this
fact, users' mobility patterns cannot be precisely built upon their
communication patterns. Hence, these data have evident shortcomings both in
terms of spatial and temporal scale. In this paper we propose to estimate the
correlation between the users' communication and mobility in order to better
assess the bias of frequency based sampling. Using technical GSM network data
(including communication but also independent mobility records), we will
analyze the relationship between communication and mobility patterns.
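As an illustration of the sampling issue discussed in this abstract, the short Python sketch below contrasts uniform random user sampling with frequency-based sampling on a synthetic CDR-like event log; the column names, thresholds and event model are assumptions made for the example, not the paper's data or method.

```python
# Illustrative sketch (not from the paper): random vs. frequency-based user
# sampling on a synthetic CDR-like event log.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_users, n_events = 1000, 20000
events = pd.DataFrame({
    "user_id": rng.integers(0, n_users, n_events),   # who generated the record
    "cell_id": rng.integers(0, 200, n_events),        # antenna seen at that moment
})

counts = events.groupby("user_id").size()

# Strategy 1: uniform random sample of users.
random_sample = rng.choice(counts.index, size=200, replace=False)

# Strategy 2: frequency-based sample, keeping only "chatty" users.
frequent_sample = counts[counts >= counts.quantile(0.8)].index

# The bias in question: chatty users are over-represented, so their observed
# mobility (distinct cells) differs from that of the whole population.
print("mean distinct cells, random  :",
      events[events.user_id.isin(random_sample)].groupby("user_id").cell_id.nunique().mean())
print("mean distinct cells, frequent:",
      events[events.user_id.isin(frequent_sample)].groupby("user_id").cell_id.nunique().mean())
```

Even with this toy generator, the frequent-user subsample covers more distinct cells per user simply because it has more records, which is the kind of bias the paper sets out to assess.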
|
1301.5943 | Lu\'is Filipe Te\'ofilo | Lu\'is Filipe Te\'ofilo, Luis Paulo Reis | Identifying Player's Strategies in No Limit Texas Hold'em Poker
through the Analysis of Individual Moves | null | null | null | null | cs.AI cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of competitive artificial Poker playing agents has proven to
be a challenge, because agents must deal with unreliable information and
deception which make it essential to model the opponents in order to achieve
good results. This paper presents a methodology to develop opponent modeling
techniques for Poker agents. The approach is based on applying clustering
algorithms to a Poker game database in order to identify player types based on
their actions. First, common game moves were identified by clustering all
players' moves. Then, player types were defined by calculating the frequency
with which the players perform each type of movement. With the given dataset, 7
different types of players were identified with each one having at least one
tactic that characterizes him. The identification of player types may improve
the overall performance of Poker agents, because it helps the agents to predict
the opponent's moves, by associating each opponent to a distinct cluster.
| [
{
"version": "v1",
"created": "Fri, 25 Jan 2013 01:49:15 GMT"
}
] | 2013-01-28T00:00:00 | [
[
"Teófilo",
"Luís Filipe",
""
],
[
"Reis",
"Luis Paulo",
""
]
] | TITLE: Identifying Player's Strategies in No Limit Texas Hold'em Poker
through the Analysis of Individual Moves
ABSTRACT: The development of competitive artificial Poker playing agents has proven to
be a challenge, because agents must deal with unreliable information and
deception which make it essential to model the opponents in order to achieve
good results. This paper presents a methodology to develop opponent modeling
techniques for Poker agents. The approach is based on applying clustering
algorithms to a Poker game database in order to identify player types based on
their actions. First, common game moves were identified by clustering all
players' moves. Then, player types were defined by calculating the frequency
with which the players perform each type of movement. With the given dataset, 7
different types of players were identified with each one having at least one
tactic that characterizes him. The identification of player types may improve
the overall performance of Poker agents, because it helps the agents to predict
the opponent's moves, by associating each opponent to a distinct cluster.
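A minimal sketch of the two-stage clustering pipeline described above, using synthetic move features; the feature set, cluster counts and use of k-means are assumptions for illustration rather than the paper's actual configuration.

```python
# Stage 1: cluster individual moves into "move types".
# Stage 2: describe each player by move-type frequencies, then cluster players.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_moves, n_players = 5000, 100
moves = rng.random((n_moves, 3))                 # one feature vector per recorded move
player_of_move = rng.integers(0, n_players, n_moves)

n_move_types = 8
move_type = KMeans(n_clusters=n_move_types, n_init=10, random_state=0).fit_predict(moves)

profiles = np.zeros((n_players, n_move_types))
for p, t in zip(player_of_move, move_type):
    profiles[p, t] += 1
profiles /= profiles.sum(axis=1, keepdims=True)  # frequencies, not raw counts

player_type = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(profiles)
print(np.bincount(player_type))                  # size of each identified player type
```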
|
1212.2142 | Arnab Chatterjee | Arnab Chatterjee, Marija Mitrovi\'c and Santo Fortunato | Universality in voting behavior: an empirical analysis | 19 pages, 10 figures, 8 tables. The elections data-sets can be
downloaded from http://becs.aalto.fi/en/research/complex_systems/elections/ | Scientific Reports 3, 1049 (2013) | null | null | physics.soc-ph cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Election data represent a precious source of information to study human
behavior at a large scale. In proportional elections with open lists, the
number of votes received by a candidate, rescaled by the average performance of
all competitors in the same party list, has the same distribution regardless of
the country and the year of the election. Here we provide the first thorough
assessment of this claim. We analyzed election datasets of 15 countries with
proportional systems. We confirm that a class of nations with similar election
rules fulfills the universality claim. Discrepancies from this trend in other
countries with open-lists elections are always associated with peculiar
differences in the election rules, which matter more than differences between
countries and historical periods. Our analysis shows that the role of parties
in the electoral performance of candidates is crucial: alternative scalings not
taking into account party affiliations lead to poor results.
| [
{
"version": "v1",
"created": "Mon, 10 Dec 2012 17:26:06 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Jan 2013 12:41:20 GMT"
}
] | 2013-01-25T00:00:00 | [
[
"Chatterjee",
"Arnab",
""
],
[
"Mitrović",
"Marija",
""
],
[
"Fortunato",
"Santo",
""
]
] | TITLE: Universality in voting behavior: an empirical analysis
ABSTRACT: Election data represent a precious source of information to study human
behavior at a large scale. In proportional elections with open lists, the
number of votes received by a candidate, rescaled by the average performance of
all competitors in the same party list, has the same distribution regardless of
the country and the year of the election. Here we provide the first thorough
assessment of this claim. We analyzed election datasets of 15 countries with
proportional systems. We confirm that a class of nations with similar election
rules fulfills the universality claim. Discrepancies from this trend in other
countries with open-lists elections are always associated with peculiar
differences in the election rules, which matter more than differences between
countries and historical periods. Our analysis shows that the role of parties
in the electoral performance of candidates is crucial: alternative scalings not
taking into account party affiliations lead to poor results.
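The rescaling at the heart of this claim is easy to state in code: divide each candidate's votes by the average votes of their party list and compare the resulting distributions across elections. The sketch below does this on synthetic lognormal vote counts, which are an assumption for the demo only.

```python
# Synthetic illustration of the v/<v> rescaling within party lists.
import numpy as np

rng = np.random.default_rng(2)

def rescaled_votes(n_lists=200, candidates_per_list=15, sigma=1.0):
    scaled = []
    for _ in range(n_lists):
        list_strength = rng.lognormal(mean=0.0, sigma=1.5)            # party-dependent scale
        votes = list_strength * rng.lognormal(0.0, sigma, candidates_per_list)
        scaled.append(votes / votes.mean())                            # v / <v> within the list
    return np.concatenate(scaled)

# Two "elections" with different party strengths but the same candidate-level noise:
# after rescaling, their distributions should be close (the universality claim).
a, b = rescaled_votes(), rescaled_votes()
for q in (0.25, 0.5, 0.75, 0.95):
    print(f"q={q:.2f}  election A: {np.quantile(a, q):.2f}  election B: {np.quantile(b, q):.2f}")
```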
|
1301.5399 | Hoda Sadat Ayatollahi Tabatabaii | Hoda S. Ayatollahi Tabatabaii, Hamid R. Rabiee, Mohammad Hossein
Rohban, Mostafa Salehi | Incorporating Betweenness Centrality in Compressive Sensing for
Congestion Detection | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new Compressive Sensing (CS) scheme for detecting
network congested links. We focus on decreasing the required number of
measurements to detect all congested links in the context of network
tomography. We have expanded the LASSO objective function by adding a new term
corresponding to the prior knowledge based on the relationship between the
congested links and the corresponding link Betweenness Centrality (BC). The
accuracy of the proposed model is verified by simulations on two real datasets.
The results demonstrate that our model outperformed the state-of-the-art CS
based method with significant improvements in terms of F-Score.
| [
{
"version": "v1",
"created": "Wed, 23 Jan 2013 04:12:08 GMT"
}
] | 2013-01-24T00:00:00 | [
[
"Tabatabaii",
"Hoda S. Ayatollahi",
""
],
[
"Rabiee",
"Hamid R.",
""
],
[
"Rohban",
"Mohammad Hossein",
""
],
[
"Salehi",
"Mostafa",
""
]
] | TITLE: Incorporating Betweenness Centrality in Compressive Sensing for
Congestion Detection
ABSTRACT: This paper presents a new Compressive Sensing (CS) scheme for detecting
network congested links. We focus on decreasing the required number of
measurements to detect all congested links in the context of network
tomography. We have expanded the LASSO objective function by adding a new term
corresponding to the prior knowledge based on the relationship between the
congested links and the corresponding link Betweenness Centrality (BC). The
accuracy of the proposed model is verified by simulations on two real datasets.
The results demonstrate that our model outperformed the state-of-the-art CS
based method with significant improvements in terms of F-Score.
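As a rough illustration (not the paper's exact formulation), the sketch below encodes the BC prior as per-link L1 penalty weights, with high-betweenness links penalized less, and solves the resulting weighted LASSO via the standard column-rescaling trick; the graph, routing matrix and weighting function are all assumptions.

```python
import numpy as np
import networkx as nx
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
G = nx.erdos_renyi_graph(30, 0.15, seed=3)
edges = list(G.edges())
bc = nx.edge_betweenness_centrality(G)
bc_vals = np.array([bc[e] if e in bc else bc[(e[1], e[0])] for e in edges])
w = 1.0 / (1.0 + bc_vals)              # high BC -> smaller weight -> weaker penalty

# Synthetic routing matrix (paths x links) and a few truly congested links.
A = (rng.random((120, len(edges))) < 0.1).astype(float)
x_true = np.zeros(len(edges))
x_true[rng.choice(len(edges), 3, replace=False)] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(120)

# Solve min ||y - Ax||^2 + alpha * sum_j w_j |x_j| by rescaling columns:
# substituting z_j = w_j * x_j turns it into an ordinary LASSO in z.
A_scaled = A / w
model = Lasso(alpha=0.01, fit_intercept=False).fit(A_scaled, y)
x_hat = model.coef_ / w
print("true congested links     :", np.flatnonzero(x_true))
print("recovered congested links:", np.flatnonzero(x_hat > 0.5))
```

Rescaling column j by 1/w_j turns a plain LASSO solver into one with per-coefficient weights w_j, which is why a stock solver suffices here.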
|
1204.4491 | Huy Nguyen | Huy Nguyen, Rong Zheng | On Budgeted Influence Maximization in Social Networks | Submitted to JSAC NS | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a budget and arbitrary cost for selecting each node, the budgeted
influence maximization (BIM) problem concerns selecting a set of seed nodes to
disseminate some information that maximizes the total number of nodes
influenced (termed as influence spread) in social networks at a total cost no
more than the budget. Our proposed seed selection algorithm for the BIM problem
guarantees an approximation ratio of (1 - 1/sqrt(e)). The seed selection
algorithm needs to calculate the influence spread of candidate seed sets, which
is known to be #P-hard. Identifying the linkage between the computation of
marginal probabilities in Bayesian networks and the influence spread, we devise
efficient heuristic algorithms for the latter problem. Experiments using both
large-scale social networks and synthetically generated networks demonstrate
superior performance of the proposed algorithm with moderate computation costs.
Moreover, synthetic datasets allow us to vary the network parameters and gain
important insights on the impact of graph structures on the performance of
different algorithms.
| [
{
"version": "v1",
"created": "Thu, 19 Apr 2012 22:50:48 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Aug 2012 05:02:19 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Jan 2013 07:01:49 GMT"
}
] | 2013-01-23T00:00:00 | [
[
"Nguyen",
"Huy",
""
],
[
"Zheng",
"Rong",
""
]
] | TITLE: On Budgeted Influence Maximization in Social Networks
ABSTRACT: Given a budget and arbitrary cost for selecting each node, the budgeted
influence maximization (BIM) problem concerns selecting a set of seed nodes to
disseminate some information that maximizes the total number of nodes
influenced (termed as influence spread) in social networks at a total cost no
more than the budget. Our proposed seed selection algorithm for the BIM problem
guarantees an approximation ratio of (1 - 1/sqrt(e)). The seed selection
algorithm needs to calculate the influence spread of candidate seed sets, which
is known to be #P-hard. Identifying the linkage between the computation of
marginal probabilities in Bayesian networks and the influence spread, we devise
efficient heuristic algorithms for the latter problem. Experiments using both
large-scale social networks and synthetically generated networks demonstrate
superior performance of the proposed algorithm with moderate computation costs.
Moreover, synthetic datasets allow us to vary the network parameters and gain
important insights on the impact of graph structures on the performance of
different algorithms.
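To make the problem concrete, here is a hedged sketch of a benefit-per-cost greedy heuristic for budgeted seed selection under the independent cascade model; the paper's algorithm and its (1 - 1/sqrt(e)) guarantee involve more than this plain greedy, and the graph, costs and propagation probability below are assumptions.

```python
import random
import networkx as nx

random.seed(4)
G = nx.erdos_renyi_graph(100, 0.05, seed=4)
cost = {v: random.uniform(1, 3) for v in G}          # arbitrary node costs
P_EDGE, BUDGET, N_SIM = 0.1, 10.0, 100

def ic_spread(seeds, n_sim=N_SIM):
    """Monte Carlo estimate of expected influence spread under the IC model."""
    total = 0
    for _ in range(n_sim):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in G.neighbors(u):
                if v not in active and random.random() < P_EDGE:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / n_sim

seeds, spent = set(), 0.0
while True:
    base = ic_spread(seeds) if seeds else 0.0
    best, best_ratio = None, 0.0
    for v in G:
        if v in seeds or spent + cost[v] > BUDGET:
            continue
        ratio = (ic_spread(seeds | {v}) - base) / cost[v]   # marginal gain per unit cost
        if ratio > best_ratio:
            best, best_ratio = v, ratio
    if best is None:
        break
    seeds.add(best)
    spent += cost[best]

print("seeds:", sorted(seeds), "cost:", round(spent, 2), "spread:", round(ic_spread(seeds), 1))
```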
|
1301.5088 | Ian Goodfellow | Ian J. Goodfellow | Piecewise Linear Multilayer Perceptrons and Dropout | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new type of hidden layer for a multilayer perceptron, and
demonstrate that it obtains the best reported performance for an MLP on the
MNIST dataset.
| [
{
"version": "v1",
"created": "Tue, 22 Jan 2013 07:10:34 GMT"
}
] | 2013-01-23T00:00:00 | [
[
"Goodfellow",
"Ian J.",
""
]
] | TITLE: Piecewise Linear Multilayer Perceptrons and Dropout
ABSTRACT: We propose a new type of hidden layer for a multilayer perceptron, and
demonstrate that it obtains the best reported performance for an MLP on the
MNIST dataset.
|
1301.5121 | Alex Averbuch | Alex Averbuch, Martin Neumann | Partitioning Graph Databases - A Quantitative Evaluation | null | null | null | null | cs.DB cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electronic data is growing at increasing rates, in both size and
connectivity: the increasing presence of, and interest in, relationships
between data. An example is the Twitter social network graph. Due to this
growth, demand is increasing for technologies that can process such data.
Currently relational databases are the predominant technology, but they are
poorly suited to processing connected data as they are optimized for
index-intensive operations. Conversely, graph databases are optimized for graph
computation. They link records by direct references, avoiding index lookups,
and enabling retrieval of adjacent elements in constant time, regardless of
graph size. However, as data volume increases these databases outgrow the
resources of one computer and data partitioning becomes necessary. We evaluate
the viability of using graph partitioning algorithms to partition graph
databases. A prototype partitioned database was developed. Three partitioning
algorithms were explored and one was implemented. Three graph datasets were used: two
real and one synthetically generated. These were partitioned in various ways
and the impact on database performance was measured. We defined one synthetic
access pattern per dataset and executed each on the partitioned datasets.
Evaluation took place in a simulation environment, ensuring repeatability and
allowing measurement of metrics like network traffic and load balance. Results
show that compared to random partitioning the partitioning algorithm reduced
traffic by 40-90%. Executing the algorithm intermittently during usage
maintained partition quality, while requiring only 1% the computation of
initial partitioning. Strong correlations were found between theoretic quality
metrics and generated network traffic under non-uniform access patterns.
| [
{
"version": "v1",
"created": "Tue, 22 Jan 2013 09:48:34 GMT"
}
] | 2013-01-23T00:00:00 | [
[
"Averbuch",
"Alex",
""
],
[
"Neumann",
"Martin",
""
]
] | TITLE: Partitioning Graph Databases - A Quantitative Evaluation
ABSTRACT: Electronic data is growing at increasing rates, in both size and
connectivity: the increasing presence of, and interest in, relationships
between data. An example is the Twitter social network graph. Due to this
growth, demand is increasing for technologies that can process such data.
Currently relational databases are the predominant technology, but they are
poorly suited to processing connected data as they are optimized for
index-intensive operations. Conversely, graph databases are optimized for graph
computation. They link records by direct references, avoiding index lookups,
and enabling retrieval of adjacent elements in constant time, regardless of
graph size. However, as data volume increases these databases outgrow the
resources of one computer and data partitioning becomes necessary. We evaluate
the viability of using graph partitioning algorithms to partition graph
databases. A prototype partitioned database was developed. Three partitioning
algorithms were explored and one was implemented. Three graph datasets were used: two
real and one synthetically generated. These were partitioned in various ways
and the impact on database performance was measured. We defined one synthetic
access pattern per dataset and executed each on the partitioned datasets.
Evaluation took place in a simulation environment, ensuring repeatability and
allowing measurement of metrics like network traffic and load balance. Results
show that compared to random partitioning the partitioning algorithm reduced
traffic by 40-90%. Executing the algorithm intermittently during usage
maintained partition quality, while requiring only 1% the computation of
initial partitioning. Strong correlations were found between theoretic quality
metrics and generated network traffic under non-uniform access patterns.
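The core measurement, counting edges that cross partition boundaries as a proxy for inter-machine traffic, can be sketched in a few lines; here greedy modularity communities stand in for the paper's dedicated partitioning algorithms, and the graph and partition count are assumptions.

```python
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

random.seed(5)
G = nx.powerlaw_cluster_graph(500, 3, 0.3, seed=5)
n_parts = 4

# Random partitioning: assign each vertex to a machine uniformly at random.
random_part = {v: random.randrange(n_parts) for v in G}

# Structure-aware partitioning: merge detected communities into n_parts buckets
# round-robin (crude, but keeps related vertices together).
communities = list(greedy_modularity_communities(G))
structured_part = {}
for i, com in enumerate(communities):
    for v in com:
        structured_part[v] = i % n_parts

def edge_cut(assignment):
    """Number of edges whose endpoints land in different partitions."""
    return sum(1 for u, v in G.edges() if assignment[u] != assignment[v])

print("cut edges, random    :", edge_cut(random_part))
print("cut edges, structured:", edge_cut(structured_part))
```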
|
1012.4506 | Taha Sochi | Taha Sochi | High Throughput Software for Powder Diffraction and its Application to
Heterogeneous Catalysis | thesis, 202 pages, 95 figures, 6 tables | null | null | null | physics.data-an hep-ex physics.chem-ph physics.comp-ph physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this thesis we investigate high throughput computational methods for
processing large quantities of data collected from synchrotrons and their
application to spectral analysis of powder diffraction data. We also present
the main product of this PhD programme, specifically a software package called 'EasyDD'
developed by the author. This software was created to meet the increasing
demand on data processing and analysis capabilities as required by modern
detectors which produce huge quantities of data. Modern detectors coupled with
the high intensity X-ray sources available at synchrotrons have led to the
situation where datasets can be collected in ever shorter time scales and in
ever larger numbers. Such large volumes of datasets pose a data processing
bottleneck which grows with current and future instrument development.
EasyDD has achieved its objectives and made significant contributions to
scientific research. It can also be used as a model for more mature attempts in
the future. EasyDD is currently in use by a number of researchers in a number
of academic and research institutions to process high-energy diffraction data.
These include data collected by different techniques such as Energy Dispersive
Diffraction, Angle Dispersive Diffraction and Computer Aided Tomography. EasyDD
has already been used in a number of published studies, and is currently in use
by the High Energy X-Ray Imaging Technology project. The software was also used
by the author to process and analyse datasets collected from synchrotron
radiation facilities. In this regard, the thesis presents novel scientific
research involving the use of EasyDD to handle large diffraction datasets in
the study of alumina-supported metal oxide catalyst bodies. These data were
collected using Tomographic Energy Dispersive Diffraction Imaging and Computer
Aided Tomography techniques.
| [
{
"version": "v1",
"created": "Mon, 20 Dec 2010 23:35:54 GMT"
}
] | 2013-01-22T00:00:00 | [
[
"Sochi",
"Taha",
""
]
] | TITLE: High Throughput Software for Powder Diffraction and its Application to
Heterogeneous Catalysis
ABSTRACT: In this thesis we investigate high throughput computational methods for
processing large quantities of data collected from synchrotrons and their
application to spectral analysis of powder diffraction data. We also present
the main product of this PhD programme, specifically a software package called 'EasyDD'
developed by the author. This software was created to meet the increasing
demand on data processing and analysis capabilities as required by modern
detectors which produce huge quantities of data. Modern detectors coupled with
the high intensity X-ray sources available at synchrotrons have led to the
situation where datasets can be collected in ever shorter time scales and in
ever larger numbers. Such large volumes of datasets pose a data processing
bottleneck which grows with current and future instrument development.
EasyDD has achieved its objectives and made significant contributions to
scientific research. It can also be used as a model for more mature attempts in
the future. EasyDD is currently in use by a number of researchers in a number
of academic and research institutions to process high-energy diffraction data.
These include data collected by different techniques such as Energy Dispersive
Diffraction, Angle Dispersive Diffraction and Computer Aided Tomography. EasyDD
has already been used in a number of published studies, and is currently in use
by the High Energy X-Ray Imaging Technology project. The software was also used
by the author to process and analyse datasets collected from synchrotron
radiation facilities. In this regard, the thesis presents novel scientific
research involving the use of EasyDD to handle large diffraction datasets in
the study of alumina-supported metal oxide catalyst bodies. These data were
collected using Tomographic Energy Dispersive Diffraction Imaging and Computer
Aided Tomography techniques.
|
1207.4417 | Jingwei Liu | Jingwei Liu, Meizhi Xu | Penalty Constraints and Kernelization of M-Estimation Based Fuzzy
C-Means | null | null | null | null | cs.CV stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A framework for M-estimation based fuzzy C-means clustering (MFCM) is
proposed using an iterative reweighted least squares (IRLS) algorithm, and
penalty-constraint and kernelization extensions of the MFCM algorithms are also
developed. Introducing penalty information into the objective functions of MFCM
algorithms, the spatially constrained fuzzy C-means (SFCM) is extended to
penalty-constrained MFCM algorithms (abbr. pMFCM). Substituting the Euclidean
distance with a kernel method, the MFCM and pMFCM algorithms are extended to
kernelized MFCM (abbr. KMFCM) and kernelized pMFCM (abbr. pKMFCM) algorithms.
The performances of the MFCM, pMFCM, KMFCM and pKMFCM algorithms are evaluated on
three tasks: pattern recognition on 10 standard data sets from the UCI Machine
Learning databases, noisy image segmentation on a synthetic image and
a magnetic resonance brain image (MRI), and image segmentation of standard
images from the Berkeley Segmentation Dataset and Benchmark. The experimental
results demonstrate the effectiveness of our proposed algorithms in pattern
recognition and image segmentation.
| [
{
"version": "v1",
"created": "Wed, 18 Jul 2012 17:20:32 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Jan 2013 10:33:02 GMT"
}
] | 2013-01-22T00:00:00 | [
[
"Liu",
"Jingwei",
""
],
[
"Xu",
"Meizhi",
""
]
] | TITLE: Penalty Constraints and Kernelization of M-Estimation Based Fuzzy
C-Means
ABSTRACT: A framework for M-estimation based fuzzy C-means clustering (MFCM) is
proposed using an iterative reweighted least squares (IRLS) algorithm, and
penalty-constraint and kernelization extensions of the MFCM algorithms are also
developed. Introducing penalty information into the objective functions of MFCM
algorithms, the spatially constrained fuzzy C-means (SFCM) is extended to
penalty-constrained MFCM algorithms (abbr. pMFCM). Substituting the Euclidean
distance with a kernel method, the MFCM and pMFCM algorithms are extended to
kernelized MFCM (abbr. KMFCM) and kernelized pMFCM (abbr. pKMFCM) algorithms.
The performances of the MFCM, pMFCM, KMFCM and pKMFCM algorithms are evaluated on
three tasks: pattern recognition on 10 standard data sets from the UCI Machine
Learning databases, noisy image segmentation on a synthetic image and
a magnetic resonance brain image (MRI), and image segmentation of standard
images from the Berkeley Segmentation Dataset and Benchmark. The experimental
results demonstrate the effectiveness of our proposed algorithms in pattern
recognition and image segmentation.
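A minimal sketch of how IRLS-style robust weights can be slotted into the standard fuzzy C-means updates; the Huber-like weight used here is an assumption for illustration and is not the paper's exact M-estimator or update rule.

```python
import numpy as np

def robust_fcm(X, c=3, m=2.0, iters=50, delta=1.5, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]        # initial centroids
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12  # (N, c)
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        # IRLS weight of a Huber-type loss: 1 near the centroid, delta/d for far
        # points, so outlying points pull the centroids less.
        w = np.minimum(1.0, delta / d)
        Umw = (U ** m) * w
        centers = (Umw.T @ X) / Umw.sum(axis=0)[:, None]
    return U, centers

data_rng = np.random.default_rng(1)
X = np.vstack([data_rng.normal(mu, 0.3, (100, 2)) for mu in (0.0, 3.0, 6.0)])
U, centers = robust_fcm(X)
print("cluster sizes:", np.bincount(U.argmax(axis=1)))
print("centers:\n", centers.round(2))
```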
|
1301.3753 | Leif Johnson | Leif Johnson and Craig Corcoran | Switched linear encoding with rectified linear autoencoders | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several recent results in machine learning have established formal
connections between autoencoders---artificial neural network models that
attempt to reproduce their inputs---and other coding models like sparse coding
and K-means. This paper explores in depth an autoencoder model that is
constructed using rectified linear activations on its hidden units. Our
analysis builds on recent results to further unify the world of sparse linear
coding models. We provide an intuitive interpretation of the behavior of these
coding models and demonstrate this intuition using small, artificial datasets
with known distributions.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 17:04:10 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Jan 2013 19:38:36 GMT"
}
] | 2013-01-22T00:00:00 | [
[
"Johnson",
"Leif",
""
],
[
"Corcoran",
"Craig",
""
]
] | TITLE: Switched linear encoding with rectified linear autoencoders
ABSTRACT: Several recent results in machine learning have established formal
connections between autoencoders---artificial neural network models that
attempt to reproduce their inputs---and other coding models like sparse coding
and K-means. This paper explores in depth an autoencoder model that is
constructed using rectified linear activations on its hidden units. Our
analysis builds on recent results to further unify the world of sparse linear
coding models. We provide an intuitive interpretation of the behavior of these
coding models and demonstrate this intuition using small, artificial datasets
with known distributions.
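For concreteness, a tiny NumPy autoencoder with rectified linear hidden units and tied weights, trained by plain gradient descent on synthetic data; the architecture, data and hyperparameters are assumptions and the sketch does not reproduce the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(512, 20)) @ rng.normal(size=(20, 50))   # data near a 20-dim subspace
X -= X.mean(axis=0)

n_hidden, lr = 30, 1e-3
W = 0.1 * rng.normal(size=(50, n_hidden))                    # tied encoder/decoder weights
b = np.zeros(n_hidden)

for step in range(2000):
    H_pre = X @ W + b
    H = np.maximum(0.0, H_pre)                               # rectified linear activations
    X_hat = H @ W.T
    R = X_hat - X                                            # reconstruction error
    # Gradients of the mean squared reconstruction loss (tied weights -> two terms).
    dH = (R @ W) * (H_pre > 0)
    dW = (X.T @ dH + R.T @ H) / len(X)
    db = dH.mean(axis=0)
    W -= lr * dW
    b -= lr * db
    if step % 500 == 0:
        print(f"step {step:4d}  mse {np.mean(R ** 2):.4f}")
```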
|
1301.3844 | Gregory F. Cooper | Gregory F. Cooper | A Bayesian Method for Causal Modeling and Discovery Under Selection | Appears in Proceedings of the Sixteenth Conference on Uncertainty in
Artificial Intelligence (UAI2000) | null | null | UAI-P-2000-PG-98-106 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a Bayesian method for learning causal networks using
samples that were selected in a non-random manner from a population of
interest. Examples of data obtained by non-random sampling include convenience
samples and case-control data in which a fixed number of samples with and
without some condition is collected; such data are not uncommon. The paper
describes a method for combining data under selection with prior beliefs in
order to derive a posterior probability for a model of the causal processes
that are generating the data in the population of interest. The priors include
beliefs about the nature of the non-random sampling procedure. Although exact
application of the method would be computationally intractable for most
realistic datasets, efficient special-case and approximation methods are
discussed. Finally, the paper describes how to combine learning under selection
with previous methods for learning from observational and experimental data
that are obtained on random samples of the population of interest. The net
result is a Bayesian methodology that supports causal modeling and discovery
from a rich mixture of different types of data.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 15:49:26 GMT"
}
] | 2013-01-18T00:00:00 | [
[
"Cooper",
"Gregory F.",
""
]
] | TITLE: A Bayesian Method for Causal Modeling and Discovery Under Selection
ABSTRACT: This paper describes a Bayesian method for learning causal networks using
samples that were selected in a non-random manner from a population of
interest. Examples of data obtained by non-random sampling include convenience
samples and case-control data in which a fixed number of samples with and
without some condition is collected; such data are not uncommon. The paper
describes a method for combining data under selection with prior beliefs in
order to derive a posterior probability for a model of the causal processes
that are generating the data in the population of interest. The priors include
beliefs about the nature of the non-random sampling procedure. Although exact
application of the method would be computationally intractable for most
realistic datasets, efficient special-case and approximation methods are
discussed. Finally, the paper describes how to combine learning under selection
with previous methods for learning from observational and experimental data
that are obtained on random samples of the population of interest. The net
result is a Bayesian methodology that supports causal modeling and discovery
from a rich mixture of different types of data.
|
1301.3856 | Nir Friedman | Nir Friedman, Daphne Koller | Being Bayesian about Network Structure | Appears in Proceedings of the Sixteenth Conference on Uncertainty in
Artificial Intelligence (UAI2000) | null | null | UAI-P-2000-PG-201-210 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many domains, we are interested in analyzing the structure of the
underlying distribution, e.g., whether one variable is a direct parent of the
other. Bayesian model-selection attempts to find the MAP model and use its
structure to answer these questions. However, when the amount of available data
is modest, there might be many models that have non-negligible posterior. Thus,
we want to compute the Bayesian posterior of a feature, i.e., the total posterior
probability of all models that contain it. In this paper, we propose a new
approach for this task. We first show how to efficiently compute a sum over the
exponential number of networks that are consistent with a fixed ordering over
network variables. This allows us to compute, for a given ordering, both the
marginal probability of the data and the posterior of a feature. We then use
this result as the basis for an algorithm that approximates the Bayesian
posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC)
method, but over orderings rather than over network structures. The space of
orderings is much smaller and more regular than the space of structures, and
has a smoother posterior `landscape'. We present empirical results on synthetic
and real-life datasets that compare our approach to full model averaging (when
possible), to MCMC over network structures, and to a non-Bayesian bootstrap
approach.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 15:50:14 GMT"
}
] | 2013-01-18T00:00:00 | [
[
"Friedman",
"Nir",
""
],
[
"Koller",
"Daphne",
""
]
] | TITLE: Being Bayesian about Network Structure
ABSTRACT: In many domains, we are interested in analyzing the structure of the
underlying distribution, e.g., whether one variable is a direct parent of the
other. Bayesian model-selection attempts to find the MAP model and use its
structure to answer these questions. However, when the amount of available data
is modest, there might be many models that have non-negligible posterior. Thus,
we want compute the Bayesian posterior of a feature, i.e., the total posterior
probability of all models that contain it. In this paper, we propose a new
approach for this task. We first show how to efficiently compute a sum over the
exponential number of networks that are consistent with a fixed ordering over
network variables. This allows us to compute, for a given ordering, both the
marginal probability of the data and the posterior of a feature. We then use
this result as the basis for an algorithm that approximates the Bayesian
posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC)
method, but over orderings rather than over network structures. The space of
orderings is much smaller and more regular than the space of structures, and
has a smoother posterior `landscape'. We present empirical results on synthetic
and real-life datasets that compare our approach to full model averaging (when
possible), to MCMC over network structures, and to a non-Bayesian bootstrap
approach.
|
1301.3884 | Dmitry Y. Pavlov | Dmitry Y. Pavlov, Heikki Mannila, Padhraic Smyth | Probabilistic Models for Query Approximation with Large Sparse Binary
Datasets | Appears in Proceedings of the Sixteenth Conference on Uncertainty in
Artificial Intelligence (UAI2000) | null | null | UAI-P-2000-PG-465-472 | cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large sparse sets of binary transaction data with millions of records and
thousands of attributes occur in various domains: customers purchasing
products, users visiting web pages, and documents containing words are just
three typical examples. Real-time query selectivity estimation (the problem of
estimating the number of rows in the data satisfying a given predicate) is an
important practical problem for such databases.
We investigate the application of probabilistic models to this problem. In
particular, we study a Markov random field (MRF) approach based on frequent
sets and maximum entropy, and compare it to the independence model and the
Chow-Liu tree model. We find that the MRF model provides substantially more
accurate probability estimates than the other methods but is more expensive
from a computational and memory viewpoint. To alleviate the computational
requirements we show how one can apply bucket elimination and clique tree
approaches to take advantage of structure in the models and in the queries. We
provide experimental results on two large real-world transaction datasets.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 15:52:06 GMT"
}
] | 2013-01-18T00:00:00 | [
[
"Pavlov",
"Dmitry Y.",
""
],
[
"Mannila",
"Heikki",
""
],
[
"Smyth",
"Padhraic",
""
]
] | TITLE: Probabilistic Models for Query Approximation with Large Sparse Binary
Datasets
ABSTRACT: Large sparse sets of binary transaction data with millions of records and
thousands of attributes occur in various domains: customers purchasing
products, users visiting web pages, and documents containing words are just
three typical examples. Real-time query selectivity estimation (the problem of
estimating the number of rows in the data satisfying a given predicate) is an
important practical problem for such databases.
We investigate the application of probabilistic models to this problem. In
particular, we study a Markov random field (MRF) approach based on frequent
sets and maximum entropy, and compare it to the independence model and the
Chow-Liu tree model. We find that the MRF model provides substantially more
accurate probability estimates than the other methods but is more expensive
from a computational and memory viewpoint. To alleviate the computational
requirements we show how one can apply bucket elimination and clique tree
approaches to take advantage of structure in the models and in the queries. We
provide experimental results on two large real-world transaction datasets.
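A toy version of the selectivity-estimation task helps fix ideas: the sketch below compares the true count of rows satisfying a conjunctive predicate with the estimate from the independence model, the simplest of the baselines mentioned above; the data generator and query are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_rows, n_attrs = 100_000, 50
# Correlated binary data: attribute j tends to co-occur with attribute j+1.
base = rng.random((n_rows, n_attrs)) < 0.05
data = base | np.roll(base, 1, axis=1)

query = [3, 4, 10]                                   # predicate: A3=1 AND A4=1 AND A10=1

true_count = np.all(data[:, query], axis=1).sum()
marginals = data.mean(axis=0)
independence_estimate = n_rows * np.prod(marginals[query])

print("true count           :", int(true_count))
print("independence estimate:", round(float(independence_estimate), 1))
```

On correlated attributes the independence estimate is badly off, which is the gap the MRF/maximum-entropy model is meant to close.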
|
1301.3891 | Marc Sebban | Marc Sebban, Richard Nock | Combining Feature and Prototype Pruning by Uncertainty Minimization | Appears in Proceedings of the Sixteenth Conference on Uncertainty in
Artificial Intelligence (UAI2000) | null | null | UAI-P-2000-PG-533-540 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus in this paper on dataset reduction techniques for use in k-nearest
neighbor classification. In such a context, feature and prototype selections
have always been independently treated by the standard storage reduction
algorithms. While this separate treatment is theoretically justified by the fact that
each subproblem is NP-hard, we assume in this paper that a joint storage
reduction is in fact more intuitive and can in practice provide better results
than two independent processes. Moreover, it avoids a lot of distance
calculations by progressively removing useless instances during the feature
pruning. While standard selection algorithms often optimize the accuracy to
discriminate the set of solutions, we use in this paper a criterion based on an
uncertainty measure within a nearest-neighbor graph. This choice comes from
recent results that have proven that accuracy is not always the suitable
criterion to optimize. In our approach, a feature or an instance is removed if
its deletion improves information of the graph. Numerous experiments are
presented in this paper and a statistical analysis shows the relevance of our
approach, and its tolerance in the presence of noise.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 15:52:33 GMT"
}
] | 2013-01-18T00:00:00 | [
[
"Sebban",
"Marc",
""
],
[
"Nock",
"Richard",
""
]
] | TITLE: Combining Feature and Prototype Pruning by Uncertainty Minimization
ABSTRACT: We focus in this paper on dataset reduction techniques for use in k-nearest
neighbor classification. In such a context, feature and prototype selections
have always been independently treated by the standard storage reduction
algorithms. While this separate treatment is theoretically justified by the fact that
each subproblem is NP-hard, we assume in this paper that a joint storage
reduction is in fact more intuitive and can in practice provide better results
than two independent processes. Moreover, it avoids a lot of distance
calculations by progressively removing useless instances during the feature
pruning. While standard selection algorithms often optimize the accuracy to
discriminate the set of solutions, we use in this paper a criterion based on an
uncertainty measure within a nearest-neighbor graph. This choice comes from
recent results that have proven that accuracy is not always the suitable
criterion to optimize. In our approach, a feature or an instance is removed if
its deletion improves information of the graph. Numerous experiments are
presented in this paper and a statistical analysis shows the relevance of our
approach, and its tolerance in the presence of noise.
|
1301.4028 | Michael Schreiber | Michael Schreiber | Do we need the g-index? | 7 pages, 3 figures accepted for publication in Journal of the
American Society for Information Science and Technology | null | null | null | physics.soc-ph cs.DL | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Using a very small sample of 8 datasets it was recently shown by De Visscher
(2011) that the g-index is very close to the square root of the total number of
citations. It was argued that there is no bibliometrically meaningful
difference. Using another somewhat larger empirical sample of 26 datasets I
show that the difference may be larger and I argue in favor of the g-index.
| [
{
"version": "v1",
"created": "Thu, 17 Jan 2013 09:45:27 GMT"
}
] | 2013-01-18T00:00:00 | [
[
"Schreiber",
"Michael",
""
]
] | TITLE: Do we need the g-index?
ABSTRACT: Using a very small sample of 8 datasets it was recently shown by De Visscher
(2011) that the g-index is very close to the square root of the total number of
citations. It was argued that there is no bibliometrically meaningful
difference. Using another somewhat larger empirical sample of 26 datasets I
show that the difference may be larger and I argue in favor of the g-index.
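The quantities being compared are simple to compute; the sketch below evaluates the g-index (the largest g such that the g most cited papers jointly have at least g^2 citations) against the square root of the total citation count for a made-up citation record.

```python
import math

# Hypothetical citation counts, for illustration only.
citations = sorted([50, 40, 18, 12, 9, 7, 6, 5, 4, 3, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0],
                   reverse=True)

def g_index(cites):
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:      # top i papers have at least i^2 citations in total
            g = i
    return g

print("g-index              :", g_index(citations))
print("sqrt(total citations):", round(math.sqrt(sum(citations)), 2))
```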
|
1301.4171 | Jason Weston | Jason Weston, Ron Weiss, Hector Yee | Affinity Weighted Embedding | null | null | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised (linear) embedding models like Wsabie and PSI have proven
successful at ranking, recommendation and annotation tasks. However, despite
being scalable to large datasets they do not take full advantage of the extra
data due to their linear nature, and typically underfit. We propose a new class
of models which aim to provide improved performance while retaining many of the
benefits of the existing class of embedding models. Our new approach works by
iteratively learning a linear embedding model where the next iteration's
features and labels are reweighted as a function of the previous iteration. We
describe several variants of the family, and give some initial results.
| [
{
"version": "v1",
"created": "Thu, 17 Jan 2013 17:46:27 GMT"
}
] | 2013-01-18T00:00:00 | [
[
"Weston",
"Jason",
""
],
[
"Weiss",
"Ron",
""
],
[
"Yee",
"Hector",
""
]
] | TITLE: Affinity Weighted Embedding
ABSTRACT: Supervised (linear) embedding models like Wsabie and PSI have proven
successful at ranking, recommendation and annotation tasks. However, despite
being scalable to large datasets they do not take full advantage of the extra
data due to their linear nature, and typically underfit. We propose a new class
of models which aim to provide improved performance while retaining many of the
benefits of the existing class of embedding models. Our new approach works by
iteratively learning a linear embedding model where the next iteration's
features and labels are reweighted as a function of the previous iteration. We
describe several variants of the family, and give some initial results.
|
1207.0166 | Claudio Gentile | Claudio Gentile and Francesco Orabona | On Multilabel Classification and Ranking with Partial Feedback | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel multilabel/ranking algorithm working in partial
information settings. The algorithm is based on 2nd-order descent methods, and
relies on upper-confidence bounds to trade-off exploration and exploitation. We
analyze this algorithm in a partial adversarial setting, where covariates can
be adversarial, but multilabel probabilities are ruled by (generalized) linear
models. We show O(T^{1/2} log T) regret bounds, which improve in several ways
on the existing results. We test the effectiveness of our upper-confidence
scheme by contrasting against full-information baselines on real-world
multilabel datasets, often obtaining comparable performance.
| [
{
"version": "v1",
"created": "Sat, 30 Jun 2012 23:07:03 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Nov 2012 16:48:22 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Jan 2013 19:19:34 GMT"
}
] | 2013-01-17T00:00:00 | [
[
"Gentile",
"Claudio",
""
],
[
"Orabona",
"Francesco",
""
]
] | TITLE: On Multilabel Classification and Ranking with Partial Feedback
ABSTRACT: We present a novel multilabel/ranking algorithm working in partial
information settings. The algorithm is based on 2nd-order descent methods, and
relies on upper-confidence bounds to trade-off exploration and exploitation. We
analyze this algorithm in a partial adversarial setting, where covariates can
be adversarial, but multilabel probabilities are ruled by (generalized) linear
models. We show O(T^{1/2} log T) regret bounds, which improve in several ways
on the existing results. We test the effectiveness of our upper-confidence
scheme by contrasting against full-information baselines on real-world
multilabel datasets, often obtaining comparable performance.
|
1301.3528 | Momiao Xiong | Momiao Xiong and Long Ma | An Efficient Sufficient Dimension Reduction Method for Identifying
Genetic Variants of Clinical Significance | null | null | null | null | q-bio.GN cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fast and cheaper next generation sequencing technologies will generate
unprecedentedly massive and highly-dimensional genomic and epigenomic variation
data. In the near future, a routine part of medical record will include the
sequenced genomes. A fundamental question is how to efficiently extract genomic
and epigenomic variants of clinical utility which will provide information for
optimal wellness and interference strategies. Traditional paradigm for
identifying variants of clinical validity is to test association of the
variants. However, significantly associated genetic variants may or may not be
useful for diagnosis and prognosis of diseases. An alternative to association
studies for finding genetic variants of predictive utility is to systematically
search variants that contain sufficient information for phenotype prediction.
To achieve this, we introduce concepts of sufficient dimension reduction and
coordinate hypothesis which project the original high dimensional data to very
low dimensional space while preserving all information on response phenotypes.
We then formulate the clinically significant genetic variant discovery problem as a
sparse SDR problem and develop algorithms that can select significant genetic
variants from up to ten million or more predictors with the aid of dividing the
SDR for the whole genome into a number of subSDR problems defined for genomic
regions. The sparse SDR is in turn formulated as a sparse optimal scoring
problem, but with a penalty which can remove row vectors from the basis matrix.
To speed up computation, we develop the modified alternating direction method
for multipliers to solve the sparse optimal scoring problem which can easily be
implemented in parallel. To illustrate its application, the proposed method is
applied to simulation data and the NHLBI's Exome Sequencing Project dataset
| [
{
"version": "v1",
"created": "Tue, 15 Jan 2013 23:19:14 GMT"
}
] | 2013-01-17T00:00:00 | [
[
"Xiong",
"Momiao",
""
],
[
"Ma",
"Long",
""
]
] | TITLE: An Efficient Sufficient Dimension Reduction Method for Identifying
Genetic Variants of Clinical Significance
ABSTRACT: Fast and cheaper next generation sequencing technologies will generate
unprecedentedly massive and highly-dimensional genomic and epigenomic variation
data. In the near future, a routine part of medical record will include the
sequenced genomes. A fundamental question is how to efficiently extract genomic
and epigenomic variants of clinical utility which will provide information for
optimal wellness and interference strategies. Traditional paradigm for
identifying variants of clinical validity is to test association of the
variants. However, significantly associated genetic variants may or may not be
useful for diagnosis and prognosis of diseases. An alternative to association
studies for finding genetic variants of predictive utility is to systematically
search variants that contain sufficient information for phenotype prediction.
To achieve this, we introduce concepts of sufficient dimension reduction and
coordinate hypothesis which project the original high dimensional data to very
low dimensional space while preserving all information on response phenotypes.
We then formulate the clinically significant genetic variant discovery problem as a
sparse SDR problem and develop algorithms that can select significant genetic
variants from up to ten million or more predictors with the aid of dividing the
SDR for the whole genome into a number of subSDR problems defined for genomic
regions. The sparse SDR is in turn formulated as a sparse optimal scoring
problem, but with a penalty which can remove row vectors from the basis matrix.
To speed up computation, we develop the modified alternating direction method
for multipliers to solve the sparse optimal scoring problem which can easily be
implemented in parallel. To illustrate its application, the proposed method is
applied to simulation data and the NHLBI's Exome Sequencing Project dataset.
|
1301.3539 | Yoonseop Kang | Yoonseop Kang and Seungjin Choi | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | 3 pages, 2 figures, ICLR2013 workshop track submission | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a graphical model for multi-view feature extraction that
automatically adapts its structure to achieve better representation of data
distribution. The proposed model, structure-adapting multi-view harmonium
(SA-MVH) has switch parameters that control the connection between hidden nodes
and input views, and learns the switch parameters during training. Numerical
experiments on synthetic and a real-world dataset demonstrate the useful
behavior of the SA-MVH, compared to existing multi-view feature extraction
methods.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 01:07:38 GMT"
}
] | 2013-01-17T00:00:00 | [
[
"Kang",
"Yoonseop",
""
],
[
"Choi",
"Seungjin",
""
]
] | TITLE: Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums
ABSTRACT: We propose a graphical model for multi-view feature extraction that
automatically adapts its structure to achieve better representation of data
distribution. The proposed model, structure-adapting multi-view harmonium
(SA-MVH) has switch parameters that control the connection between hidden nodes
and input views, and learns the switch parameters during training. Numerical
experiments on synthetic and a real-world dataset demonstrate the useful
behavior of the SA-MVH, compared to existing multi-view feature extraction
methods.
|
1301.3557 | Matthew Zeiler | Matthew D. Zeiler and Rob Fergus | Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks | 9 pages | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a simple and effective method for regularizing large
convolutional neural networks. We replace the conventional deterministic
pooling operations with a stochastic procedure, randomly picking the activation
within each pooling region according to a multinomial distribution, given by
the activities within the pooling region. The approach is hyper-parameter free
and can be combined with other regularization approaches, such as dropout and
data augmentation. We achieve state-of-the-art performance on four image
datasets, relative to other approaches that do not utilize data augmentation.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 02:12:07 GMT"
}
] | 2013-01-17T00:00:00 | [
[
"Zeiler",
"Matthew D.",
""
],
[
"Fergus",
"Rob",
""
]
] | TITLE: Stochastic Pooling for Regularization of Deep Convolutional Neural
Networks
ABSTRACT: We introduce a simple and effective method for regularizing large
convolutional neural networks. We replace the conventional deterministic
pooling operations with a stochastic procedure, randomly picking the activation
within each pooling region according to a multinomial distribution, given by
the activities within the pooling region. The approach is hyper-parameter free
and can be combined with other regularization approaches, such as dropout and
data augmentation. We achieve state-of-the-art performance on four image
datasets, relative to other approaches that do not utilize data augmentation.
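A hedged NumPy sketch of the pooling step described above: within each 2x2 region the pooled value is one of the activations, drawn with probability proportional to the (non-negative) activations themselves; the shapes and region size are assumptions for the example.

```python
import numpy as np

def stochastic_pool_2x2(acts, rng):
    """acts: non-negative activations of shape (H, W) with H, W even."""
    H, W = acts.shape
    regions = acts.reshape(H // 2, 2, W // 2, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
    sums = regions.sum(axis=1, keepdims=True)
    # Multinomial probabilities within each region; uniform if the region is all zero.
    probs = np.where(sums > 0, regions / np.where(sums == 0, 1, sums), 0.25)
    idx = np.array([rng.choice(4, p=p) for p in probs])
    pooled = regions[np.arange(len(regions)), idx]
    return pooled.reshape(H // 2, W // 2)

rng = np.random.default_rng(8)
feature_map = np.maximum(0.0, rng.normal(size=(6, 6)))   # e.g. post-ReLU activations
print(stochastic_pool_2x2(feature_map, rng))
# At test time the paper replaces sampling with a probability-weighted average
# of the activations in each region.
```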
|
1301.3744 | Tim Vines | Timothy H. Vines, Rose L. Andrew, Dan G. Bock, Michelle T. Franklin,
Kimberly J. Gilbert, Nolan C. Kane, Jean-S\'ebastien Moore, Brook T. Moyers,
S\'ebastien Renaut, Diana J. Rennison, Thor Veen, Sam Yeaman | Mandated data archiving greatly improves access to research data | null | null | 10.1096/fj.12-218164 | null | cs.DL physics.soc-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The data underlying scientific papers should be accessible to researchers
both now and in the future, but how best can we ensure that these data are
available? Here we examine the effectiveness of four approaches to data
archiving: no stated archiving policy, recommending (but not requiring)
archiving, and two versions of mandating data deposition at acceptance. We
control for differences between data types by trying to obtain data from papers
that use a single, widespread population genetic analysis, STRUCTURE. At one
extreme, we found that mandated data archiving policies that require the
inclusion of a data availability statement in the manuscript improve the odds
of finding the data online almost a thousand-fold compared to having no policy.
However, archiving rates at journals with less stringent policies were only
very slightly higher than those with no policy at all. We also assessed the effectiveness of asking
for data directly from authors and obtained over half of the requested
datasets, albeit with about 8 days delay and some disagreement with authors.
Given the long term benefits of data accessibility to the academic community,
we believe that journal based mandatory data archiving policies and mandatory
data availability statements should be more widely adopted.
| [
{
"version": "v1",
"created": "Wed, 16 Jan 2013 16:22:26 GMT"
}
] | 2013-01-17T00:00:00 | [
[
"Vines",
"Timothy H.",
""
],
[
"Andrew",
"Rose L.",
""
],
[
"Bock",
"Dan G.",
""
],
[
"Franklin",
"Michelle T.",
""
],
[
"Gilbert",
"Kimberly J.",
""
],
[
"Kane",
"Nolan C.",
""
],
[
"Moore",
"Jean-Sébastien",
""
],
[
"Moyers",
"Brook T.",
""
],
[
"Renaut",
"Sébastien",
""
],
[
"Rennison",
"Diana J.",
""
],
[
"Veen",
"Thor",
""
],
[
"Yeaman",
"Sam",
""
]
] | TITLE: Mandated data archiving greatly improves access to research data
ABSTRACT: The data underlying scientific papers should be accessible to researchers
both now and in the future, but how best can we ensure that these data are
available? Here we examine the effectiveness of four approaches to data
archiving: no stated archiving policy, recommending (but not requiring)
archiving, and two versions of mandating data deposition at acceptance. We
control for differences between data types by trying to obtain data from papers
that use a single, widespread population genetic analysis, STRUCTURE. At one
extreme, we found that mandated data archiving policies that require the
inclusion of a data availability statement in the manuscript improve the odds
of finding the data online almost a thousand-fold compared to having no policy.
However, archiving rates at journals with less stringent policies were only
very slightly higher than those with no policy at all. We also assessed the effectiveness of asking
for data directly from authors and obtained over half of the requested
datasets, albeit with about 8 days delay and some disagreement with authors.
Given the long term benefits of data accessibility to the academic community,
we believe that journal based mandatory data archiving policies and mandatory
data availability statements should be more widely adopted.
|
1301.2659 | Fabrice Rossi | Romain Guigour\`es, Marc Boull\'e, Fabrice Rossi (SAMM) | A Triclustering Approach for Time Evolving Graphs | null | Co-clustering and Applications International Conference on Data
Mining Workshop, Brussels : Belgium (2012) | 10.1109/ICDMW.2012.61 | null | cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a novel technique to track structures in time evolving
graphs. The method is based on a parameter free approach for three-dimensional
co-clustering of the source vertices, the target vertices and the time. All
these features are simultaneously segmented in order to build time segments and
clusters of vertices whose edge distributions are similar and evolve in the
same way over the time segments. The main novelty of this approach lies in the fact that
the time segments are directly inferred from the evolution of the edge
distribution between the vertices, thus not requiring the user to make an a
priori discretization. Experiments conducted on a synthetic dataset illustrate
the good behaviour of the technique, and a study of a real-life dataset shows
the potential of the proposed approach for exploratory data analysis.
| [
{
"version": "v1",
"created": "Sat, 12 Jan 2013 07:51:14 GMT"
}
] | 2013-01-15T00:00:00 | [
[
"Guigourès",
"Romain",
"",
"SAMM"
],
[
"Boullé",
"Marc",
"",
"SAMM"
],
[
"Rossi",
"Fabrice",
"",
"SAMM"
]
] | TITLE: A Triclustering Approach for Time Evolving Graphs
ABSTRACT: This paper introduces a novel technique to track structures in time evolving
graphs. The method is based on a parameter free approach for three-dimensional
co-clustering of the source vertices, the target vertices and the time. All
these features are simultaneously segmented in order to build time segments and
clusters of vertices whose edge distributions are similar and evolve in the
same way over the time segments. The main novelty of this approach lies in the fact that
the time segments are directly inferred from the evolution of the edge
distribution between the vertices, thus not requiring the user to make an a
priori discretization. Experiments conducted on a synthetic dataset illustrate
the good behaviour of the technique, and a study of a real-life dataset shows
the potential of the proposed approach for exploratory data analysis.
|
1301.2785 | Rafi Muhammad | Muhammad Rafi, Mohammad Shahid Shaikh | A comparison of SVM and RVM for Document Classification | ICoCSIM 2012, Medan Indonesia | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Document classification is the task of assigning a new unclassified document to
one of a predefined set of classes. The content-based document classification
uses the content of the document with some weighting criteria to assign it to
one of the predefined classes. It is a major task in library science,
electronic document management systems and information sciences. This paper
investigates document classification by using two different classification
techniques (1) Support Vector Machine (SVM) and (2) Relevance Vector Machine
(RVM). SVM is a supervised machine learning technique that can be used for
classification tasks. In its basic form, SVM represents the instances of the
data into space and tries to separate the distinct classes by a maximum
possible wide gap (hyper plane) that separates the classes. On the other hand
RVM uses probabilistic measure to define this separation space. RVM uses
Bayesian inference to obtain succinct solution, thus RVM uses significantly
fewer basis functions. Experimental studies on three standard text
classification datasets reveal that although RVM takes more training time, its
classification is much better than that of SVM.
| [
{
"version": "v1",
"created": "Sun, 13 Jan 2013 15:58:09 GMT"
}
] | 2013-01-15T00:00:00 | [
[
"Rafi",
"Muhammad",
""
],
[
"Shaikh",
"Mohammad Shahid",
""
]
] | TITLE: A comparison of SVM and RVM for Document Classification
ABSTRACT: Document classification is the task of assigning a new unclassified document to
one of a predefined set of classes. The content-based document classification
uses the content of the document with some weighting criteria to assign it to
one of the predefined classes. It is a major task in library science,
electronic document management systems and information sciences. This paper
investigates document classification by using two different classification
techniques (1) Support Vector Machine (SVM) and (2) Relevance Vector Machine
(RVM). SVM is a supervised machine learning technique that can be used for
classification task. In its basic form, SVM represents the instances of the
data into space and tries to separate the distinct classes by a maximum
possible wide gap (hyper plane) that separates the classes. On the other hand
RVM uses probabilistic measure to define this separation space. RVM uses
Bayesian inference to obtain succinct solution, thus RVM uses significantly
fewer basis functions. Experimental studies on three standard text
classification datasets reveal that although RVM takes more training time, its
classification is much better as compared to SVM.
|
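As a rough illustration of the SVM half of the comparison in record 1301.2785 above, the sketch below builds a content-based classifier from TF-IDF weighting and a linear SVM using scikit-learn. The toy corpus and labels are invented, and the RVM side is not shown because scikit-learn does not ship an RVM implementation.

```python
# Minimal sketch of content-based document classification with a linear SVM
# (illustrative only; the toy corpus and labels are invented, and the RVM
# half of the comparison would require a separate implementation).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

docs = ["stock markets fell sharply today",
        "the team won the championship game",
        "central bank raises interest rates",
        "star striker scores twice in final"]
labels = ["finance", "sports", "finance", "sports"]

# TF-IDF supplies the "weighting criteria" over document content.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
print(clf.predict(["bank announces new rate policy"]))
```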
1301.2283 | Tomas Kocka | Tomas Kocka, Robert Castelo | Improved learning of Bayesian networks | Appears in Proceedings of the Seventeenth Conference on Uncertainty
in Artificial Intelligence (UAI2001) | null | null | UAI-P-2001-PG-269-276 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The search space of Bayesian Network structures is usually defined as Acyclic
Directed Graphs (DAGs) and the search is done by local transformations of DAGs.
But the space of Bayesian Networks is ordered by DAG Markov model inclusion and
it is natural to consider that a good search policy should take this into
account. The first attempt to do this (Chickering 1996) used equivalence
classes of DAGs instead of DAGs themselves. This approach produces better results
but it is significantly slower. We present a compromise between these two
approaches. It uses DAGs to search the space in such a way that the ordering by
inclusion is taken into account. This is achieved by repetitive usage of local
moves within the equivalence class of DAGs. We show that this new approach
produces better results than the original DAGs approach without substantial
change in time complexity. We present empirical results, within the framework
of heuristic search and Markov Chain Monte Carlo, provided through the Alarm
dataset.
| [
{
"version": "v1",
"created": "Thu, 10 Jan 2013 16:24:32 GMT"
}
] | 2013-01-14T00:00:00 | [
[
"Kocka",
"Tomas",
""
],
[
"Castelo",
"Robert",
""
]
] | TITLE: Improved learning of Bayesian networks
ABSTRACT: The search space of Bayesian Network structures is usually defined as Acyclic
Directed Graphs (DAGs) and the search is done by local transformations of DAGs.
But the space of Bayesian Networks is ordered by DAG Markov model inclusion and
it is natural to consider that a good search policy should take this into
account. The first attempt to do this (Chickering 1996) used equivalence
classes of DAGs instead of DAGs themselves. This approach produces better results
but it is significantly slower. We present a compromise between these two
approaches. It uses DAGs to search the space in such a way that the ordering by
inclusion is taken into account. This is achieved by repetitive usage of local
moves within the equivalence class of DAGs. We show that this new approach
produces better results than the original DAGs approach without substantial
change in time complexity. We present empirical results, within the framework
of heuristic search and Markov Chain Monte Carlo, provided through the Alarm
dataset.
|
1301.2375 | Jianxin Li | Jianxin Li, Chengfei Liu, Liang Yao and Jeffrey Xu Yu | Context-based Diversification for Keyword Queries over XML Data | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While keyword query empowers ordinary users to search vast amount of data,
the ambiguity of keyword query makes it difficult to effectively answer keyword
queries, especially for short and vague keyword queries. To address this
challenging problem, in this paper we propose an approach that automatically
diversifies XML keyword search based on its different contexts in the XML data.
Given a short and vague keyword query and XML data to be searched, we first
derive keyword search candidates of the query by a classical feature
selection model. Then, we design an effective XML keyword search
diversification model to measure the quality of each candidate. After that,
three efficient algorithms are proposed to evaluate the possible generated
query candidates representing the diversified search intentions, from which we
can find and return top-$k$ qualified query candidates that are most relevant
to the given keyword query while covering a maximal number of distinct
results. Finally, a comprehensive evaluation on real and synthetic datasets
demonstrates the effectiveness of our proposed diversification model and the
efficiency of our algorithms.
| [
{
"version": "v1",
"created": "Fri, 11 Jan 2013 01:33:50 GMT"
}
] | 2013-01-14T00:00:00 | [
[
"Li",
"Jianxin",
""
],
[
"Liu",
"Chengfei",
""
],
[
"Yao",
"Liang",
""
],
[
"Yu",
"Jeffrey Xu",
""
]
] | TITLE: Context-based Diversification for Keyword Queries over XML Data
ABSTRACT: While keyword query empowers ordinary users to search vast amount of data,
the ambiguity of keyword query makes it difficult to effectively answer keyword
queries, especially for short and vague keyword queries. To address this
challenging problem, in this paper we propose an approach that automatically
diversifies XML keyword search based on its different contexts in the XML data.
Given a short and vague keyword query and XML data to be searched, we first
derive keyword search candidates of the query by a classical feature
selection model. Then, we design an effective XML keyword search
diversification model to measure the quality of each candidate. After that,
three efficient algorithms are proposed to evaluate the possible generated
query candidates representing the diversified search intentions, from which we
can find and return top-$k$ qualified query candidates that are most relevant
to the given keyword query while covering a maximal number of distinct
results. Finally, a comprehensive evaluation on real and synthetic datasets
demonstrates the effectiveness of our proposed diversification model and the
efficiency of our algorithms.
|
1301.2378 | Jianxin Li | Jianxin Li, Chengfei Liu, Liang Yao, Jeffrey Xu Yu and Rui Zhou | Query-driven Frequent Co-occurring Term Extraction over Relational Data
using MapReduce | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study how to efficiently compute \textit{frequent
co-occurring terms} (FCT) in the results of a keyword query in parallel using
the popular MapReduce framework. Taking as input a keyword query q and an
integer k, an FCT query reports the k terms that are not in q, but appear most
frequently in the results of the keyword query q over multiple joined
relations. The returned terms of FCT search can be used to do query expansion
and query refinement for traditional keyword search. Unlike single-platform
FCT search methods, our proposed approach can efficiently answer an FCT query
using the MapReduce paradigm, which runs on a parallel platform, without
pre-computing the results of the original keyword query. In this work, we
output the final FCT search results with two MapReduce jobs: the first
extracts the statistical information of the data, and the second calculates
the total frequency of each term based on the output of the first job. In
both MapReduce jobs, we balance the load of the mappers and the computation
of the reducers as much as possible.
Analytical and experimental evaluations demonstrate the efficiency and
scalability of our proposed approach using TPC-H benchmark datasets with
different sizes.
| [
{
"version": "v1",
"created": "Fri, 11 Jan 2013 01:55:10 GMT"
}
] | 2013-01-14T00:00:00 | [
[
"Li",
"Jianxin",
""
],
[
"Liu",
"Chengfei",
""
],
[
"Yao",
"Liang",
""
],
[
"Yu",
"Jeffrey Xu",
""
],
[
"Zhou",
"Rui",
""
]
] | TITLE: Query-driven Frequent Co-occurring Term Extraction over Relational Data
using MapReduce
ABSTRACT: In this paper we study how to efficiently compute \textit{frequent
co-occurring terms} (FCT) in the results of a keyword query in parallel using
the popular MapReduce framework. Taking as input a keyword query q and an
integer k, an FCT query reports the k terms that are not in q, but appear most
frequently in the results of the keyword query q over multiple joined
relations. The returned terms of FCT search can be used to do query expansion
and query refinement for traditional keyword search. Unlike single-platform
FCT search methods, our proposed approach can efficiently answer an FCT query
using the MapReduce paradigm, which runs on a parallel platform, without
pre-computing the results of the original keyword query. In this work, we
output the final FCT search results with two MapReduce jobs: the first
extracts the statistical information of the data, and the second calculates
the total frequency of each term based on the output of the first job. In
both MapReduce jobs, we balance the load of the mappers and the computation
of the reducers as much as possible.
Analytical and experimental evaluations demonstrate the efficiency and
scalability of our proposed approach using TPC-H benchmark datasets with
different sizes.
|
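To make the two-job structure of record 1301.2378 above concrete, here is a small single-process imitation of the pipeline (not the paper's MapReduce implementation): a map phase emits terms from each keyword-query result tuple, and a reduce phase aggregates frequencies and reports the top-k terms not in the query. The result tuples are invented.

```python
# Toy, single-process imitation of the two-phase FCT computation described
# above (not the paper's MapReduce code): map emits terms per result tuple,
# reduce aggregates counts and returns the top-k terms outside the query.
from collections import Counter

def map_phase(result_tuples):
    for tup in result_tuples:
        for field in tup:
            for term in str(field).lower().split():
                yield term, 1

def reduce_phase(pairs, query_terms, k):
    counts = Counter()
    for term, c in pairs:
        if term not in query_terms:
            counts[term] += c
    return counts.most_common(k)

# Hypothetical joined-result tuples for the keyword query {"smith"}.
results = [("Smith", "database systems"), ("Smith", "query processing"),
           ("Smith", "database tuning")]
print(reduce_phase(map_phase(results), {"smith"}, k=2))
```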
1301.2115 | Krikamol Muandet | Krikamol Muandet, David Balduzzi, Bernhard Sch\"olkopf | Domain Generalization via Invariant Feature Representation | The 30th International Conference on Machine Learning (ICML 2013) | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates domain generalization: How to take knowledge acquired
from an arbitrary number of related domains and apply it to previously unseen
domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based
optimization algorithm that learns an invariant transformation by minimizing
the dissimilarity across domains, whilst preserving the functional relationship
between input and output variables. A learning-theoretic analysis shows that
reducing dissimilarity improves the expected generalization ability of
classifiers on new domains, motivating the proposed algorithm. Experimental
results on synthetic and real-world datasets demonstrate that DICA successfully
learns invariant features and improves classifier performance in practice.
| [
{
"version": "v1",
"created": "Thu, 10 Jan 2013 13:29:17 GMT"
}
] | 2013-01-11T00:00:00 | [
[
"Muandet",
"Krikamol",
""
],
[
"Balduzzi",
"David",
""
],
[
"Schölkopf",
"Bernhard",
""
]
] | TITLE: Domain Generalization via Invariant Feature Representation
ABSTRACT: This paper investigates domain generalization: How to take knowledge acquired
from an arbitrary number of related domains and apply it to previously unseen
domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based
optimization algorithm that learns an invariant transformation by minimizing
the dissimilarity across domains, whilst preserving the functional relationship
between input and output variables. A learning-theoretic analysis shows that
reducing dissimilarity improves the expected generalization ability of
classifiers on new domains, motivating the proposed algorithm. Experimental
results on synthetic and real-world datasets demonstrate that DICA successfully
learns invariant features and improves classifier performance in practice.
|
1301.1722 | Andrea Montanari | Yash Deshpande and Andrea Montanari | Linear Bandits in High Dimension and Recommendation Systems | 21 pages, 4 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large number of online services provide automated recommendations to help
users to navigate through a large collection of items. New items (products,
videos, songs, advertisements) are suggested on the basis of the user's past
history and --when available-- her demographic profile. Recommendations have to
satisfy the dual goal of helping the user to explore the space of available
items, while allowing the system to probe the user's preferences.
We model this trade-off using linearly parametrized multi-armed bandits,
propose a policy and prove upper and lower bounds on the cumulative "reward"
that coincide up to constants in the data poor (high-dimensional) regime. Prior
work on linear bandits has focused on the data rich (low-dimensional) regime
and used cumulative "risk" as the figure of merit. For this data rich regime,
we provide a simple modification for our policy that achieves near-optimal risk
performance under more restrictive assumptions on the geometry of the problem.
We test (a variation of) the scheme used for establishing achievability on the
Netflix and MovieLens datasets and obtain good agreement with the qualitative
predictions of the theory we develop.
| [
{
"version": "v1",
"created": "Tue, 8 Jan 2013 23:45:06 GMT"
}
] | 2013-01-10T00:00:00 | [
[
"Deshpande",
"Yash",
""
],
[
"Montanari",
"Andrea",
""
]
] | TITLE: Linear Bandits in High Dimension and Recommendation Systems
ABSTRACT: A large number of online services provide automated recommendations to help
users to navigate through a large collection of items. New items (products,
videos, songs, advertisements) are suggested on the basis of the user's past
history and --when available-- her demographic profile. Recommendations have to
satisfy the dual goal of helping the user to explore the space of available
items, while allowing the system to probe the user's preferences.
We model this trade-off using linearly parametrized multi-armed bandits,
propose a policy and prove upper and lower bounds on the cumulative "reward"
that coincide up to constants in the data poor (high-dimensional) regime. Prior
work on linear bandits has focused on the data rich (low-dimensional) regime
and used cumulative "risk" as the figure of merit. For this data rich regime,
we provide a simple modification for our policy that achieves near-optimal risk
performance under more restrictive assumptions on the geometry of the problem.
We test (a variation of) the scheme used for establishing achievability on the
Netflix and MovieLens datasets and obtain good agreement with the qualitative
predictions of the theory we develop.
|
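For readers who want a concrete starting point for linearly parametrized bandits, the sketch below implements LinUCB, a standard baseline; it is not the policy proposed in record 1301.1722 above, and the synthetic arm features, noise level and hyperparameters are invented for illustration.

```python
# A standard LinUCB-style baseline for linearly parametrized bandits
# (illustrative only; this is NOT the policy proposed above, and the
# synthetic arm features and rewards are invented for the sketch).
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, horizon, alpha = 5, 20, 500, 1.0
arms = rng.normal(size=(n_arms, d))          # arm feature vectors
theta_true = rng.normal(size=d)              # unknown preference vector

A = np.eye(d)                                # ridge-regularized Gram matrix
b = np.zeros(d)
for t in range(horizon):
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    ucb = arms @ theta_hat + alpha * np.sqrt(np.sum((arms @ A_inv) * arms, axis=1))
    x = arms[int(np.argmax(ucb))]            # play the optimistic arm
    reward = x @ theta_true + 0.1 * rng.normal()
    A += np.outer(x, x)
    b += reward * x
print("estimated theta:", np.round(theta_hat, 2))
```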
1301.1502 | Hannah Inbarani | N. Kalaiselvi, H. Hannah Inbarani | Fuzzy Soft Set Based Classification for Gene Expression Data | 7 pages, IJSER Vol.3 Issue: 10 Oct 2012 | null | null | null | cs.AI cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classification is one of the major issues in Data Mining Research fields. The
classification problems in the medical area often classify a medical dataset
based on the result of a medical diagnosis or the description of the medical
treatment by the medical practitioner. This research work discusses the
classification process of Gene Expression data for three different cancers,
namely breast cancer, lung cancer and leukemia, with two classes: cancerous
stage and non-cancerous stage. We apply a fuzzy soft set similarity-based
classifier to enhance the accuracy of predicting the stages among cancer
genes, and the informative genes are selected using entropy filtering.
| [
{
"version": "v1",
"created": "Tue, 8 Jan 2013 11:48:49 GMT"
}
] | 2013-01-09T00:00:00 | [
[
"Kalaiselvi",
"N.",
""
],
[
"Inbarani",
"H. Hannah",
""
]
] | TITLE: Fuzzy Soft Set Based Classification for Gene Expression Data
ABSTRACT: Classification is one of the major issues in Data Mining Research fields. The
classification problems in the medical area often classify a medical dataset
based on the result of a medical diagnosis or the description of the medical
treatment by the medical practitioner. This research work discusses the
classification process of Gene Expression data for three different cancers,
namely breast cancer, lung cancer and leukemia, with two classes: cancerous
stage and non-cancerous stage. We apply a fuzzy soft set similarity-based
classifier to enhance the accuracy of predicting the stages among cancer
genes, and the informative genes are selected using entropy filtering.
|
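Record 1301.1502 above combines informative-gene filtering with a similarity-based classifier. The sketch below is only a loose stand-in: it uses scikit-learn's mutual-information filter in place of the unspecified entropy filter and a nearest-centroid rule in place of the fuzzy soft set similarity measure, on random data.

```python
# Sketch of entropy-style informative-gene filtering followed by a simple
# similarity-based classifier (illustrative; the abstract does not specify
# the exact entropy filter or similarity measure, and the data are random).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))               # 60 samples x 200 "genes"
y = rng.integers(0, 2, size=60)              # cancerous vs non-cancerous

scores = mutual_info_classif(X, y, random_state=0)
top_genes = np.argsort(scores)[-20:]         # keep the 20 most informative genes
X_sel = X[:, top_genes]

# Classify a new sample by similarity to the per-class mean profile.
centroids = {c: X_sel[y == c].mean(axis=0) for c in (0, 1)}
new = X_sel[0]
pred = min(centroids, key=lambda c: np.linalg.norm(new - centroids[c]))
print("predicted class:", pred)
```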
1212.4775 | Mario Frank | Mario Frank, Joachim M. Buhmann, David Basin | Role Mining with Probabilistic Models | accepted for publication at ACM Transactions on Information and
System Security (TISSEC) | null | null | null | cs.CR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Role mining tackles the problem of finding a role-based access control (RBAC)
configuration, given an access-control matrix assigning users to access
permissions as input. Most role mining approaches work by constructing a large
set of candidate roles and use a greedy selection strategy to iteratively pick
a small subset such that the differences between the resulting RBAC
configuration and the access control matrix are minimized. In this paper, we
advocate an alternative approach that recasts role mining as an inference
problem rather than a lossy compression problem. Instead of using combinatorial
algorithms to minimize the number of roles needed to represent the
access-control matrix, we derive probabilistic models to learn the RBAC
configuration that most likely underlies the given matrix.
Our models are generative in that they reflect the way that permissions are
assigned to users in a given RBAC configuration. We additionally model how
user-permission assignments that conflict with an RBAC configuration emerge and
we investigate the influence of constraints on role hierarchies and on the
number of assignments. In experiments with access-control matrices from
real-world enterprises, we compare our proposed models with other role mining
methods. Our results show that our probabilistic models infer roles that
generalize well to new system users for a wide variety of data, while other
models' generalization abilities depend on the dataset given.
| [
{
"version": "v1",
"created": "Wed, 19 Dec 2012 18:12:34 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Jan 2013 17:27:55 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Jan 2013 22:24:15 GMT"
}
] | 2013-01-08T00:00:00 | [
[
"Frank",
"Mario",
""
],
[
"Buhmann",
"Joachim M.",
""
],
[
"Basin",
"David",
""
]
] | TITLE: Role Mining with Probabilistic Models
ABSTRACT: Role mining tackles the problem of finding a role-based access control (RBAC)
configuration, given an access-control matrix assigning users to access
permissions as input. Most role mining approaches work by constructing a large
set of candidate roles and use a greedy selection strategy to iteratively pick
a small subset such that the differences between the resulting RBAC
configuration and the access control matrix are minimized. In this paper, we
advocate an alternative approach that recasts role mining as an inference
problem rather than a lossy compression problem. Instead of using combinatorial
algorithms to minimize the number of roles needed to represent the
access-control matrix, we derive probabilistic models to learn the RBAC
configuration that most likely underlies the given matrix.
Our models are generative in that they reflect the way that permissions are
assigned to users in a given RBAC configuration. We additionally model how
user-permission assignments that conflict with an RBAC configuration emerge and
we investigate the influence of constraints on role hierarchies and on the
number of assignments. In experiments with access-control matrices from
real-world enterprises, we compare our proposed models with other role mining
methods. Our results show that our probabilistic models infer roles that
generalize well to new system users for a wide variety of data, while other
models' generalization abilities depend on the dataset given.
|
1301.0561 | David Maxwell Chickering | David Maxwell Chickering, Christopher Meek | Finding Optimal Bayesian Networks | Appears in Proceedings of the Eighteenth Conference on Uncertainty in
Artificial Intelligence (UAI2002) | null | null | UAI-P-2002-PG-94-102 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we derive optimality results for greedy Bayesian-network
search algorithms that perform single-edge modifications at each step and use
asymptotically consistent scoring criteria. Our results extend those of Meek
(1997) and Chickering (2002), who demonstrate that in the limit of large
datasets, if the generative distribution is perfect with respect to a DAG
defined over the observable variables, such search algorithms will identify
this optimal (i.e. generative) DAG model. We relax their assumption about the
generative distribution, and assume only that this distribution satisfies the
\emph{composition property} over the observable variables, which is a more
realistic assumption for real domains. Under this assumption, we guarantee that
the search algorithms identify an \emph{inclusion-optimal} model; that is, a
model that (1) contains the generative distribution and (2) has no sub-model
that contains this distribution. In addition, we show that the composition
property is guaranteed to hold whenever the dependence relationships in the
generative distribution can be characterized by paths between singleton
elements in some generative graphical model (e.g. a DAG, a chain graph, or a
Markov network) even when the generative model includes unobserved variables,
and even when the observed data is subject to selection bias.
| [
{
"version": "v1",
"created": "Wed, 12 Dec 2012 15:55:46 GMT"
}
] | 2013-01-07T00:00:00 | [
[
"Chickering",
"David Maxwell",
""
],
[
"Meek",
"Christopher",
""
]
] | TITLE: Finding Optimal Bayesian Networks
ABSTRACT: In this paper, we derive optimality results for greedy Bayesian-network
search algorithms that perform single-edge modifications at each step and use
asymptotically consistent scoring criteria. Our results extend those of Meek
(1997) and Chickering (2002), who demonstrate that in the limit of large
datasets, if the generative distribution is perfect with respect to a DAG
defined over the observable variables, such search algorithms will identify
this optimal (i.e. generative) DAG model. We relax their assumption about the
generative distribution, and assume only that this distribution satisfies the
\emph{composition property} over the observable variables, which is a more
realistic assumption for real domains. Under this assumption, we guarantee that
the search algorithms identify an \emph{inclusion-optimal} model; that is, a
model that (1) contains the generative distribution and (2) has no sub-model
that contains this distribution. In addition, we show that the composition
property is guaranteed to hold whenever the dependence relationships in the
generative distribution can be characterized by paths between singleton
elements in some generative graphical model (e.g. a DAG, a chain graph, or a
Markov network) even when the generative model includes unobserved variables,
and even when the observed data is subject to selection bias.
|
1301.0432 | Fahad Mahmood Mr | F. Mahmood, F. Kunwar | A Self-Organizing Neural Scheme for Door Detection in Different
Environments | Page No 13-18, 7 figures, Published with International Journal of
Computer Applications (IJCA) | International Journal of Computer Applications 60(9):13-18, 2012 | 10.5120/9719-3679 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Doors are important landmarks for indoor mobile robot navigation and also
assist blind people to independently access unfamiliar buildings. Most
existing door detection algorithms are limited to familiar environments
because of restricted assumptions about color, texture and shape. In this paper
we propose a novel approach which employs feature based classification and uses
the Kohonen Self-Organizing Map (SOM) for the purpose of door detection.
Generic and stable features are used for the training of SOM that increase the
performance significantly: concavity, bottom-edge intensity profile and door
edges. To validate the robustness and generalizability of our method, we
collected a large dataset of real world door images from a variety of
environments and different lighting conditions. The algorithm achieves a
detection rate of more than 95%, which demonstrates that our door detection
method is generic and robust to variations in color, texture, occlusion,
lighting conditions, scale, and viewpoint.
| [
{
"version": "v1",
"created": "Thu, 3 Jan 2013 12:04:28 GMT"
}
] | 2013-01-04T00:00:00 | [
[
"Mahmood",
"F.",
""
],
[
"Kunwar",
"F.",
""
]
] | TITLE: A Self-Organizing Neural Scheme for Door Detection in Different
Environments
ABSTRACT: Doors are important landmarks for indoor mobile robot navigation and also
assist blind people to independently access unfamiliar buildings. Most
existing door detection algorithms are limited to familiar environments
because of restricted assumptions about color, texture and shape. In this paper
we propose a novel approach which employs feature based classification and uses
the Kohonen Self-Organizing Map (SOM) for the purpose of door detection.
Generic and stable features are used for the training of SOM that increase the
performance significantly: concavity, bottom-edge intensity profile and door
edges. To validate the robustness and generalizability of our method, we
collected a large dataset of real world door images from a variety of
environments and different lighting conditions. The algorithm achieves a
detection rate of more than 95%, which demonstrates that our door detection
method is generic and robust to variations in color, texture, occlusion,
lighting conditions, scale, and viewpoint.
|
0911.2942 | Chris Giannella | Chris Giannella, Kun Liu, Hillol Kargupta | Breaching Euclidean Distance-Preserving Data Perturbation Using Few
Known Inputs | This is a major revision accounting for journal peer-review. Changes
include: removal of known sample attack, more citations added, an empirical
comparison against the algorithm of Kaplan et al. added | Data & Knowledge Engineering 83, pages 93-110, 2013 | null | null | cs.DB cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine Euclidean distance-preserving data perturbation as a tool for
privacy-preserving data mining. Such perturbations allow many important data
mining algorithms (e.g. hierarchical and k-means clustering), with only minor
modification, to be applied to the perturbed data and produce exactly the same
results as if applied to the original data. However, the issue of how well the
privacy of the original data is preserved needs careful study. We engage in
this study by assuming the role of an attacker armed with a small set of known
original data tuples (inputs). Little work has been done examining this kind of
attack when the number of known original tuples is less than the number of data
dimensions. We focus on this important case, develop and rigorously analyze an
attack that utilizes any number of known original tuples. The approach allows
the attacker to estimate the original data tuple associated with each perturbed
tuple and calculate the probability that the estimation results in a privacy
breach. On a real 16-dimensional dataset, we show that the attacker, with 4
known original tuples, can estimate an original unknown tuple with less than 7%
error with probability exceeding 0.8.
| [
{
"version": "v1",
"created": "Mon, 16 Nov 2009 02:51:37 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Jan 2013 15:49:10 GMT"
}
] | 2013-01-03T00:00:00 | [
[
"Giannella",
"Chris",
""
],
[
"Liu",
"Kun",
""
],
[
"Kargupta",
"Hillol",
""
]
] | TITLE: Breaching Euclidean Distance-Preserving Data Perturbation Using Few
Known Inputs
ABSTRACT: We examine Euclidean distance-preserving data perturbation as a tool for
privacy-preserving data mining. Such perturbations allow many important data
mining algorithms (e.g. hierarchical and k-means clustering), with only minor
modification, to be applied to the perturbed data and produce exactly the same
results as if applied to the original data. However, the issue of how well the
privacy of the original data is preserved needs careful study. We engage in
this study by assuming the role of an attacker armed with a small set of known
original data tuples (inputs). Little work has been done examining this kind of
attack when the number of known original tuples is less than the number of data
dimensions. We focus on this important case, develop and rigorously analyze an
attack that utilizes any number of known original tuples. The approach allows
the attacker to estimate the original data tuple associated with each perturbed
tuple and calculate the probability that the estimation results in a privacy
breach. On a real 16-dimensional dataset, we show that the attacker, with 4
known original tuples, can estimate an original unknown tuple with less than 7%
error with probability exceeding 0.8.
|
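To make the setting of record 0911.2942 above concrete, the sketch below shows the perturbation model being attacked: multiplying the data by a random orthogonal matrix preserves all pairwise Euclidean distances. The attack itself is not reproduced, and the data are random.

```python
# Sketch of the Euclidean distance-preserving perturbation being attacked
# above: multiplying the data by a random orthogonal matrix leaves all
# pairwise distances unchanged (the attack itself is not reproduced here).
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))                   # 100 private 16-dimensional tuples

Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))   # random orthogonal matrix
Y = X @ Q.T                                      # perturbed (published) data

print(np.allclose(pdist(X), pdist(Y)))           # True: distances preserved
```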
1212.5841 | Andrei Zinovyev Dr. | Andrei Zinovyev and Evgeny Mirkes | Data complexity measured by principal graphs | Computers and Mathematics with Applications, in press | null | 10.1016/j.camwa.2012.12.009 | null | cs.LG cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to measure the complexity of a finite set of vectors embedded in a
multidimensional space? This is a non-trivial question which can be approached
in many different ways. Here we suggest a set of data complexity measures using
universal approximators, principal cubic complexes. Principal cubic complexes
generalise the notion of principal manifolds for datasets with non-trivial
topologies. The type of the principal cubic complex is determined by its
dimension and a grammar of elementary graph transformations. The simplest
grammar produces principal trees.
We introduce three natural types of data complexity: 1) geometric (deviation
of the data's approximator from some "idealized" configuration, such as
deviation from harmonicity); 2) structural (how many elements of a principal
graph are needed to approximate the data), and 3) construction complexity (how
many applications of elementary graph transformations are needed to construct
the principal object starting from the simplest one).
We compute these measures for several simulated and real-life data
distributions and show them in the "accuracy-complexity" plots, helping to
optimize the accuracy/complexity ratio. We discuss various issues connected
with measuring data complexity. Software for computing data complexity measures
from principal cubic complexes is provided as well.
| [
{
"version": "v1",
"created": "Sun, 23 Dec 2012 23:20:14 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Jan 2013 00:00:40 GMT"
}
] | 2013-01-03T00:00:00 | [
[
"Zinovyev",
"Andrei",
""
],
[
"Mirkes",
"Evgeny",
""
]
] | TITLE: Data complexity measured by principal graphs
ABSTRACT: How to measure the complexity of a finite set of vectors embedded in a
multidimensional space? This is a non-trivial question which can be approached
in many different ways. Here we suggest a set of data complexity measures using
universal approximators, principal cubic complexes. Principal cubic complexes
generalise the notion of principal manifolds for datasets with non-trivial
topologies. The type of the principal cubic complex is determined by its
dimension and a grammar of elementary graph transformations. The simplest
grammar produces principal trees.
We introduce three natural types of data complexity: 1) geometric (deviation
of the data's approximator from some "idealized" configuration, such as
deviation from harmonicity); 2) structural (how many elements of a principal
graph are needed to approximate the data), and 3) construction complexity (how
many applications of elementary graph transformations are needed to construct
the principal object starting from the simplest one).
We compute these measures for several simulated and real-life data
distributions and show them in the "accuracy-complexity" plots, helping to
optimize the accuracy/complexity ratio. We discuss various issues connected
with measuring data complexity. Software for computing data complexity measures
from principal cubic complexes is provided as well.
|
1212.6316 | Nathalie Villa-Vialaneix | Madalina Olteanu (SAMM), Nathalie Villa-Vialaneix (SAMM), Marie
Cottrell (SAMM) | On-line relational SOM for dissimilarity data | WSOM 2012, Santiago : Chile (2012) | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In some applications and in order to address real world situations better,
data may be more complex than simple vectors. In some examples, they can be
known through their pairwise dissimilarities only. Several variants of the Self
Organizing Map algorithm were introduced to generalize the original algorithm
to this framework. Whereas median SOM is based on a rough representation of the
prototypes, relational SOM allows representing these prototypes by a virtual
combination of all elements in the data set. However, this latter approach
suffers from two main drawbacks. First, its complexity can be large. Second,
only a batch version of this algorithm has been studied so far and it often
provides results having a bad topographic organization. In this article, an
on-line version of relational SOM is described and justified. The algorithm is
tested on several datasets, including categorical data and graphs, and compared
with the batch version and with other SOM algorithms for non vector data.
| [
{
"version": "v1",
"created": "Thu, 27 Dec 2012 07:07:06 GMT"
}
] | 2013-01-03T00:00:00 | [
[
"Olteanu",
"Madalina",
"",
"SAMM"
],
[
"Villa-Vialaneix",
"Nathalie",
"",
"SAMM"
],
[
"Cottrell",
"Marie",
"",
"SAMM"
]
] | TITLE: On-line relational SOM for dissimilarity data
ABSTRACT: In some applications and in order to address real world situations better,
data may be more complex than simple vectors. In some examples, they can be
known through their pairwise dissimilarities only. Several variants of the Self
Organizing Map algorithm were introduced to generalize the original algorithm
to this framework. Whereas median SOM is based on a rough representation of the
prototypes, relational SOM allows representing these prototypes by a virtual
combination of all elements in the data set. However, this latter approach
suffers from two main drawbacks. First, its complexity can be large. Second,
only a batch version of this algorithm has been studied so far and it often
provides results having a bad topographic organization. In this article, an
on-line version of relational SOM is described and justified. The algorithm is
tested on several datasets, including categorical data and graphs, and compared
with the batch version and with other SOM algorithms for non vector data.
|
1301.0082 | F. Ozgur Catak | F. Ozgur Catak and M. Erdal Balaban | CloudSVM : Training an SVM Classifier in Cloud Computing Systems | 13 pages | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In conventional method, distributed support vector machines (SVM) algorithms
are trained over pre-configured intranet/internet environments to find an
optimal classifier. These methods are very complicated and costly for large
datasets. Hence, we propose a method, referred to as the Cloud SVM training
mechanism (CloudSVM), in a cloud computing environment with the MapReduce
technique for distributed machine learning applications. Accordingly, (i) the
SVM algorithm is trained in distributed cloud storage servers that work
concurrently; (ii) all support vectors of every trained cloud node are
merged; and (iii) these two steps are iterated until the SVM converges to the
optimal classifier function. Large-scale data sets cannot be trained with the
SVM algorithm on a single computer. The results of this study are important
for training large-scale data sets for machine learning applications. We
prove that iterative training of the split data set in a cloud computing
environment using SVM converges to a globally optimal classifier in a finite
number of iterations.
| [
{
"version": "v1",
"created": "Tue, 1 Jan 2013 13:20:27 GMT"
}
] | 2013-01-03T00:00:00 | [
[
"Catak",
"F. Ozgur",
""
],
[
"Balaban",
"M. Erdal",
""
]
] | TITLE: CloudSVM : Training an SVM Classifier in Cloud Computing Systems
ABSTRACT: In conventional method, distributed support vector machines (SVM) algorithms
are trained over pre-configured intranet/internet environments to find an
optimal classifier. These methods are very complicated and costly for large
datasets. Hence, we propose a method, referred to as the Cloud SVM training
mechanism (CloudSVM), in a cloud computing environment with the MapReduce
technique for distributed machine learning applications. Accordingly, (i) the
SVM algorithm is trained in distributed cloud storage servers that work
concurrently; (ii) all support vectors of every trained cloud node are
merged; and (iii) these two steps are iterated until the SVM converges to the
optimal classifier function. Large-scale data sets cannot be trained with the
SVM algorithm on a single computer. The results of this study are important
for training large-scale data sets for machine learning applications. We
prove that iterative training of the split data set in a cloud computing
environment using SVM converges to a globally optimal classifier in a finite
number of iterations.
|
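The three CloudSVM steps in record 1301.0082 above can be imitated on a single machine as follows. This is an illustrative scikit-learn sketch, not the authors' MapReduce implementation, and feeding the merged support vectors back to each chunk is one plausible reading of the iteration.

```python
# Single-machine imitation of the three CloudSVM steps described above
# (not the MapReduce implementation): train SVMs on data chunks, merge the
# support vectors, and iterate until the merged support-vector set is stable.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
chunks = np.array_split(np.arange(len(X)), 4)           # stand-in for cloud nodes

sv_idx = np.array([], dtype=int)
for _ in range(10):                                     # step (iii): iterate
    merged = []
    for chunk in chunks:                                # step (i): train per node
        idx = np.union1d(chunk, sv_idx)
        svm = SVC(kernel="linear").fit(X[idx], y[idx])
        merged.append(idx[svm.support_])
    new_sv = np.unique(np.concatenate(merged))          # step (ii): merge SVs
    if np.array_equal(new_sv, sv_idx):
        break
    sv_idx = new_sv

final_svm = SVC(kernel="linear").fit(X[sv_idx], y[sv_idx])
print("support vectors kept:", len(sv_idx))
```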
1301.0289 | Aaditya Prakash | Aaditya Prakash | Reconstructing Self Organizing Maps as Spider Graphs for better visual
interpretation of large unstructured datasets | 9 pages, 8 figures | null | null | null | cs.GR stat.ML | http://creativecommons.org/licenses/by/3.0/ | Self-Organizing Maps (SOM) are popular unsupervised artificial neural network
models used to reduce dimensionality and visualize data. Visual
interpretation of Self-Organizing Maps (SOM) has been limited due to the grid
approach to data representation, which makes inter-scenario analysis
impossible. The paper proposes a new way to structure SOM. This model
reconstructs SOM to show the strength between variables as the threads of a
cobweb and to illuminate inter-scenario analysis. While radar graphs are a
very crude representation of a spider web, this model uses a more lively and
realistic cobweb representation to take into account the differences in
strength and length of threads. This model allows for visualization of highly
unstructured datasets with a large number of dimensions, common in big data
sources.
| [
{
"version": "v1",
"created": "Mon, 24 Dec 2012 17:10:28 GMT"
}
] | 2013-01-03T00:00:00 | [
[
"Prakash",
"Aaditya",
""
]
] | TITLE: Reconstructing Self Organizing Maps as Spider Graphs for better visual
interpretation of large unstructured datasets
ABSTRACT: Self-Organizing Maps (SOM) are popular unsupervised artificial neural network
models used to reduce dimensionality and visualize data. Visual
interpretation of Self-Organizing Maps (SOM) has been limited due to the grid
approach to data representation, which makes inter-scenario analysis
impossible. The paper proposes a new way to structure SOM. This model
reconstructs SOM to show the strength between variables as the threads of a
cobweb and to illuminate inter-scenario analysis. While radar graphs are a
very crude representation of a spider web, this model uses a more lively and
realistic cobweb representation to take into account the differences in
strength and length of threads. This model allows for visualization of highly
unstructured datasets with a large number of dimensions, common in big data
sources.
|
1205.4463 | Salah A. Aly | Salah A. Aly | Pilgrims Face Recognition Dataset -- HUFRD | 5 pages, 13 images, 1 table of a new HUFRD work | null | null | null | cs.CV cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we define a new pilgrims face recognition dataset, called HUFRD
dataset. The newly developed dataset presents various pilgrims' images taken from
outside the Holy Masjid El-Harram in Makkah during the 2011-2012 Hajj and Umrah
seasons. Such dataset will be used to test our developed facial recognition and
detection algorithms, as well as to assist in the missing and found recognition
system \cite{crowdsensing}.
| [
{
"version": "v1",
"created": "Sun, 20 May 2012 22:07:27 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Dec 2012 00:58:09 GMT"
}
] | 2013-01-01T00:00:00 | [
[
"Aly",
"Salah A.",
""
]
] | TITLE: Pilgrims Face Recognition Dataset -- HUFRD
ABSTRACT: In this work, we define a new pilgrims face recognition dataset, called HUFRD
dataset. The newly developed dataset presents various pilgrims' images taken from
outside the Holy Masjid El-Harram in Makkah during the 2011-2012 Hajj and Umrah
seasons. Such dataset will be used to test our developed facial recognition and
detection algorithms, as well as to assist in the missing and found recognition
system \cite{crowdsensing}.
|
1212.6659 | Raphael Pelossof | Raphael Pelossof and Zhiliang Ying | Focus of Attention for Linear Predictors | 9 pages, 4 figures. arXiv admin note: substantial text overlap with
arXiv:1105.0382 | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method to stop the evaluation of a prediction process when the
result of the full evaluation is obvious. This trait is highly desirable in
prediction tasks where a predictor evaluates all its features for every example
in large datasets. We observe that some examples are easier to classify than
others, a phenomenon which is characterized by the event when most of the
features agree on the class of an example. By stopping the feature evaluation
when encountering an easy-to-classify example, the predictor can achieve
substantial gains in computation. Our method provides a natural attention
mechanism for linear predictors where the predictor concentrates most of its
computation on hard-to-classify examples and quickly discards easy-to-classify
ones. By modifying a linear prediction algorithm such as an SVM or AdaBoost to
include our attentive method we prove that the average number of features
computed is O(sqrt(n log 1/sqrt(delta))) where n is the original number of
features, and delta is the error rate incurred due to early stopping. We
demonstrate the effectiveness of Attentive Prediction on MNIST, Real-sim,
Gisette, and synthetic datasets.
| [
{
"version": "v1",
"created": "Sat, 29 Dec 2012 20:23:48 GMT"
}
] | 2013-01-01T00:00:00 | [
[
"Pelossof",
"Raphael",
""
],
[
"Ying",
"Zhiliang",
""
]
] | TITLE: Focus of Attention for Linear Predictors
ABSTRACT: We present a method to stop the evaluation of a prediction process when the
result of the full evaluation is obvious. This trait is highly desirable in
prediction tasks where a predictor evaluates all its features for every example
in large datasets. We observe that some examples are easier to classify than
others, a phenomenon which is characterized by the event when most of the
features agree on the class of an example. By stopping the feature evaluation
when encountering an easy-to-classify example, the predictor can achieve
substantial gains in computation. Our method provides a natural attention
mechanism for linear predictors where the predictor concentrates most of its
computation on hard-to-classify examples and quickly discards easy-to-classify
ones. By modifying a linear prediction algorithm such as an SVM or AdaBoost to
include our attentive method we prove that the average number of features
computed is O(sqrt(n log 1/sqrt(delta))) where n is the original number of
features, and delta is the error rate incurred due to early stopping. We
demonstrate the effectiveness of Attentive Prediction on MNIST, Real-sim,
Gisette, and synthetic datasets.
|
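A toy version of the attention mechanism described in record 1212.6659 above: features of a linear predictor are evaluated one at a time, and evaluation stops once the partial score can no longer change sign. The stopping rule here is a deterministic simplification of the paper's analysis, and the weights and example are random.

```python
# Toy sketch of the attention mechanism described above: evaluate a linear
# predictor's features one at a time and stop early once the partial margin
# can no longer change sign (a simplification of the paper's analysis).
import numpy as np

rng = np.random.default_rng(0)
n_features = 100
w = rng.normal(size=n_features)              # weights of a trained linear model
x = np.sign(rng.normal(size=n_features))     # one example with features in {-1,+1}

bound = np.abs(w)                            # max contribution of each feature
remaining = np.cumsum(bound[::-1])[::-1]     # remaining[i] = sum_{j>=i} |w_j|

score, used = 0.0, n_features
for i in range(n_features):
    score += w[i] * x[i]
    if i + 1 < n_features and abs(score) > remaining[i + 1]:
        used = i + 1                         # easy example: the sign is decided
        break

print("prediction:", int(np.sign(score)), "features used:", used)
```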
1212.5637 | Claudio Gentile | Nicolo' Cesa-Bianchi, Claudio Gentile, Fabio Vitale, Giovanni Zappella | Random Spanning Trees and the Prediction of Weighted Graphs | Appeared in ICML 2010 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the problem of sequentially predicting the binary labels on
the nodes of an arbitrary weighted graph. We show that, under a suitable
parametrization of the problem, the optimal number of prediction mistakes can
be characterized (up to logarithmic factors) by the cutsize of a random
spanning tree of the graph. The cutsize is induced by the unknown adversarial
labeling of the graph nodes. In deriving our characterization, we obtain a
simple randomized algorithm achieving in expectation the optimal mistake bound
on any polynomially connected weighted graph. Our algorithm draws a random
spanning tree of the original graph and then predicts the nodes of this tree in
constant expected amortized time and linear space. Experiments on real-world
datasets show that our method compares well to both global (Perceptron) and
local (label propagation) methods, while being generally faster in practice.
| [
{
"version": "v1",
"created": "Fri, 21 Dec 2012 23:51:21 GMT"
}
] | 2012-12-27T00:00:00 | [
[
"Cesa-Bianchi",
"Nicolo'",
""
],
[
"Gentile",
"Claudio",
""
],
[
"Vitale",
"Fabio",
""
],
[
"Zappella",
"Giovanni",
""
]
] | TITLE: Random Spanning Trees and the Prediction of Weighted Graphs
ABSTRACT: We investigate the problem of sequentially predicting the binary labels on
the nodes of an arbitrary weighted graph. We show that, under a suitable
parametrization of the problem, the optimal number of prediction mistakes can
be characterized (up to logarithmic factors) by the cutsize of a random
spanning tree of the graph. The cutsize is induced by the unknown adversarial
labeling of the graph nodes. In deriving our characterization, we obtain a
simple randomized algorithm achieving in expectation the optimal mistake bound
on any polynomially connected weighted graph. Our algorithm draws a random
spanning tree of the original graph and then predicts the nodes of this tree in
constant expected amortized time and linear space. Experiments on real-world
datasets show that our method compares well to both global (Perceptron) and
local (label propagation) methods, while being generally faster in practice.
|
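The central quantity in record 1212.5637 above is the cutsize of a random spanning tree under the adversarial labeling. The sketch below computes it with networkx; drawing the tree as a minimum spanning tree over i.i.d. random edge weights is merely a convenient (non-uniform) sampler, and the sequential prediction algorithm itself is not reproduced.

```python
# Sketch of the key quantity above: the cutsize of a random spanning tree
# under a given node labeling. Drawing the tree as a minimum spanning tree
# over i.i.d. random edge weights is just a convenient (non-uniform) sampler.
import random
import networkx as nx

random.seed(0)
G = nx.erdos_renyi_graph(50, 0.1, seed=0)
labels = {v: random.choice([-1, +1]) for v in G.nodes()}   # adversarial labels

for u, v in G.edges():
    G[u][v]["weight"] = random.random()
tree = nx.minimum_spanning_tree(G, weight="weight")

cutsize = sum(1 for u, v in tree.edges() if labels[u] != labels[v])
print("spanning-tree cutsize:", cutsize, "of", tree.number_of_edges(), "tree edges")
```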
1212.5701 | Matthew Zeiler | Matthew D. Zeiler | ADADELTA: An Adaptive Learning Rate Method | 6 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel per-dimension learning rate method for gradient descent
called ADADELTA. The method dynamically adapts over time using only first order
information and has minimal computational overhead beyond vanilla stochastic
gradient descent. The method requires no manual tuning of a learning rate and
appears robust to noisy gradient information, different model architecture
choices, various data modalities and selection of hyperparameters. We show
promising results compared to other methods on the MNIST digit classification
task using a single machine and on a large scale voice dataset in a distributed
cluster environment.
| [
{
"version": "v1",
"created": "Sat, 22 Dec 2012 15:46:49 GMT"
}
] | 2012-12-27T00:00:00 | [
[
"Zeiler",
"Matthew D.",
""
]
] | TITLE: ADADELTA: An Adaptive Learning Rate Method
ABSTRACT: We present a novel per-dimension learning rate method for gradient descent
called ADADELTA. The method dynamically adapts over time using only first order
information and has minimal computational overhead beyond vanilla stochastic
gradient descent. The method requires no manual tuning of a learning rate and
appears robust to noisy gradient information, different model architecture
choices, various data modalities and selection of hyperparameters. We show
promising results compared to other methods on the MNIST digit classification
task using a single machine and on a large scale voice dataset in a distributed
cluster environment.
|
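For reference, the per-dimension ADADELTA update of record 1212.5701 above can be written in a few lines of NumPy; the quadratic objective and the number of steps below are toy choices made only to exercise the rule, and the rho/epsilon values follow commonly cited defaults.

```python
# A minimal NumPy sketch of the ADADELTA per-dimension update rule
# (the quadratic objective and hyperparameter values are toy choices).
import numpy as np

def adadelta_step(x, grad, state, rho=0.95, eps=1e-6):
    """Apply one ADADELTA update; `state` holds the running averages."""
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad**2
    dx = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * dx**2
    return x + dx

x = np.array([3.0, -2.0])
state = {"Eg2": np.zeros_like(x), "Edx2": np.zeros_like(x)}
for _ in range(1000):
    x = adadelta_step(x, 2.0 * x, state)     # gradient of f(x) = ||x||^2
print("x after 1000 ADADELTA steps:", np.round(x, 3))
```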
1212.6246 | Radford M. Neal | Chunyi Wang and Radford M. Neal | Gaussian Process Regression with Heteroscedastic or Non-Gaussian
Residuals | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian Process (GP) regression models typically assume that residuals are
Gaussian and have the same variance for all observations. However, applications
with input-dependent noise (heteroscedastic residuals) frequently arise in
practice, as do applications in which the residuals do not have a Gaussian
distribution. In this paper, we propose a GP Regression model with a latent
variable that serves as an additional unobserved covariate for the regression.
This model (which we call GPLC) allows for heteroscedasticity since it allows
the function to have a changing partial derivative with respect to this
unobserved covariate. With a suitable covariance function, our GPLC model can
handle (a) Gaussian residuals with input-dependent variance, or (b)
non-Gaussian residuals with input-dependent variance, or (c) Gaussian residuals
with constant variance. We compare our model, using synthetic datasets, with a
model proposed by Goldberg, Williams and Bishop (1998), which we refer to as
GPLV, which only deals with case (a), as well as a standard GP model which can
handle only case (c). Markov Chain Monte Carlo methods are developed for both
models. Experiments show that when the data is heteroscedastic, both GPLC and
GPLV give better results (smaller mean squared error and negative
log-probability density) than standard GP regression. In addition, when the
residuals are Gaussian, our GPLC model is generally nearly as good as GPLV,
while when the residuals are non-Gaussian, our GPLC model is better than GPLV.
| [
{
"version": "v1",
"created": "Wed, 26 Dec 2012 20:45:48 GMT"
}
] | 2012-12-27T00:00:00 | [
[
"Wang",
"Chunyi",
""
],
[
"Neal",
"Radford M.",
""
]
] | TITLE: Gaussian Process Regression with Heteroscedastic or Non-Gaussian
Residuals
ABSTRACT: Gaussian Process (GP) regression models typically assume that residuals are
Gaussian and have the same variance for all observations. However, applications
with input-dependent noise (heteroscedastic residuals) frequently arise in
practice, as do applications in which the residuals do not have a Gaussian
distribution. In this paper, we propose a GP Regression model with a latent
variable that serves as an additional unobserved covariate for the regression.
This model (which we call GPLC) allows for heteroscedasticity since it allows
the function to have a changing partial derivative with respect to this
unobserved covariate. With a suitable covariance function, our GPLC model can
handle (a) Gaussian residuals with input-dependent variance, or (b)
non-Gaussian residuals with input-dependent variance, or (c) Gaussian residuals
with constant variance. We compare our model, using synthetic datasets, with a
model proposed by Goldberg, Williams and Bishop (1998), which we refer to as
GPLV, which only deals with case (a), as well as a standard GP model which can
handle only case (c). Markov Chain Monte Carlo methods are developed for both
models. Experiments show that when the data is heteroscedastic, both GPLC and
GPLV give better results (smaller mean squared error and negative
log-probability density) than standard GP regression. In addition, when the
residuals are Gaussian, our GPLC model is generally nearly as good as GPLV,
while when the residuals are non-Gaussian, our GPLC model is better than GPLV.
|
1211.3089 | Yuheng Hu | Yuheng Hu, Ajita John, Fei Wang, Subbarao Kambhampati | ET-LDA: Joint Topic Modeling for Aligning Events and their Twitter
Feedback | reference error, delete for now | null | null | null | cs.SI cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During broadcast events such as the Superbowl, the U.S. Presidential and
Primary debates, etc., Twitter has become the de facto platform for crowds to
share perspectives and commentaries about them. Given an event and an
associated large-scale collection of tweets, there are two fundamental research
problems that have been receiving increasing attention in recent years. One is
to extract the topics covered by the event and the tweets; the other is to
segment the event. So far these problems have been viewed separately and
studied in isolation. In this work, we argue that these problems are in fact
inter-dependent and should be addressed together. We develop a joint Bayesian
model that performs topic modeling and event segmentation in one unified
framework. We evaluate the proposed model both quantitatively and qualitatively
on two large-scale tweet datasets associated with two events from different
domains to show that it improves significantly over baseline models.
| [
{
"version": "v1",
"created": "Tue, 13 Nov 2012 19:46:51 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Dec 2012 05:50:15 GMT"
}
] | 2012-12-24T00:00:00 | [
[
"Hu",
"Yuheng",
""
],
[
"John",
"Ajita",
""
],
[
"Wang",
"Fei",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] | TITLE: ET-LDA: Joint Topic Modeling for Aligning Events and their Twitter
Feedback
ABSTRACT: During broadcast events such as the Superbowl, the U.S. Presidential and
Primary debates, etc., Twitter has become the de facto platform for crowds to
share perspectives and commentaries about them. Given an event and an
associated large-scale collection of tweets, there are two fundamental research
problems that have been receiving increasing attention in recent years. One is
to extract the topics covered by the event and the tweets; the other is to
segment the event. So far these problems have been viewed separately and
studied in isolation. In this work, we argue that these problems are in fact
inter-dependent and should be addressed together. We develop a joint Bayesian
model that performs topic modeling and event segmentation in one unified
framework. We evaluate the proposed model both quantitatively and qualitatively
on two large-scale tweet datasets associated with two events from different
domains to show that it improves significantly over baseline models.
|
1212.5265 | Tamal Ghosh Tamal Ghosh | Tamal Ghosh, Pranab K Dan | An Effective Machine-Part Grouping Algorithm to Construct Manufacturing
Cells | null | Proceedings of Conference on Industrial Engineering (NCIE 2011) | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The machine-part cell formation problem consists of creating machine cells
and their corresponding part families with the objective of minimizing the
inter-cell and intra-cell movement while maximizing the machine utilization.
This article demonstrates a hybrid clustering approach for the cell formation
problem in cellular manufacturing that conjoins Sorenson's similarity
coefficient based method to form the production cells. Computational results
are shown over the test datasets obtained from the past literature. The
hybrid technique is shown to outperform the other methods proposed in the
literature, including powerful soft computing approaches such as genetic
algorithms and genetic programming, by exceeding their solution quality on
the test problems.
| [
{
"version": "v1",
"created": "Thu, 20 Dec 2012 15:51:13 GMT"
}
] | 2012-12-24T00:00:00 | [
[
"Ghosh",
"Tamal",
""
],
[
"Dan",
"Pranab K",
""
]
] | TITLE: An Effective Machine-Part Grouping Algorithm to Construct Manufacturing
Cells
ABSTRACT: The machine-part cell formation problem consists of creating machine cells
and their corresponding part families with the objective of minimizing the
inter-cell and intra-cell movement while maximizing the machine utilization.
This article demonstrates a hybrid clustering approach for the cell formation
problem in cellular manufacturing that conjoins Sorenson's similarity
coefficient based method to form the production cells. Computational results
are shown over the test datasets obtained from the past literature. The
hybrid technique is shown to outperform the other methods proposed in the
literature, including powerful soft computing approaches such as genetic
algorithms and genetic programming, by exceeding their solution quality on
the test problems.
|
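The Sorenson (Dice) similarity coefficient used as a building block in record 1212.5265 above can be computed directly from a machine-part incidence matrix, as in the sketch below; the incidence matrix is invented, and the paper's full hybrid clustering procedure is not reproduced.

```python
# Sketch of the Sorenson (Dice) similarity coefficient between machines,
# computed from a toy machine-part incidence matrix. Clustering machines on
# this similarity matrix is one way to start forming cells.
import numpy as np

# Rows = machines, columns = parts (1 if the machine processes the part).
A = np.array([[1, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 1, 1, 1]])

m = A.shape[0]
S = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        a = np.sum((A[i] == 1) & (A[j] == 1))       # parts shared by i and j
        b = np.sum((A[i] == 1) & (A[j] == 0))       # parts on machine i only
        c = np.sum((A[i] == 0) & (A[j] == 1))       # parts on machine j only
        S[i, j] = 2 * a / (2 * a + b + c)
print(np.round(S, 2))
```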
1212.4458 | Dan Burger | Dan Burger, Keivan G. Stassun, Joshua Pepper, Robert J. Siverd, Martin
A. Paegert, Nathan M. De Lee | Filtergraph: A Flexible Web Application for Instant Data Visualization
of Astronomy Datasets | 4 pages, 1 figure. Originally presented at the ADASS XXII Conference
in Champaign, IL on November 6, 2012. Published in the conference proceedings
by ASP Conference Series (revised to include URL of web application) | null | null | null | astro-ph.IM cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Filtergraph is a web application being developed by the Vanderbilt Initiative
in Data-intensive Astrophysics (VIDA) to flexibly handle a large variety of
astronomy datasets. While current datasets at Vanderbilt are being used to
search for eclipsing binaries and extrasolar planets, this system can be easily
reconfigured for a wide variety of data sources. The user loads a flat-file
dataset into Filtergraph which instantly generates an interactive data portal
that can be easily shared with others. From this portal, the user can
immediately generate scatter plots, histograms, and tables based on the
dataset. Key features of the portal include the ability to filter the data in
real time through user-specified criteria, the ability to select data by
dragging on the screen, and the ability to perform arithmetic operations on the
data in real time. The application is being optimized for speed in the context
of very large datasets: for instance, a plot generated from a stellar database of
3.1 million entries renders in less than 2 seconds on a standard web server
platform. This web application has been created using the Web2py web framework
based on the Python programming language. Filtergraph is freely available at
http://filtergraph.vanderbilt.edu/.
| [
{
"version": "v1",
"created": "Tue, 18 Dec 2012 19:00:06 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Dec 2012 17:04:02 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Dec 2012 17:17:00 GMT"
}
] | 2012-12-21T00:00:00 | [
[
"Burger",
"Dan",
""
],
[
"Stassun",
"Keivan G.",
""
],
[
"Pepper",
"Joshua",
""
],
[
"Siverd",
"Robert J.",
""
],
[
"Paegert",
"Martin A.",
""
],
[
"De Lee",
"Nathan M.",
""
]
] | TITLE: Filtergraph: A Flexible Web Application for Instant Data Visualization
of Astronomy Datasets
ABSTRACT: Filtergraph is a web application being developed by the Vanderbilt Initiative
in Data-intensive Astrophysics (VIDA) to flexibly handle a large variety of
astronomy datasets. While current datasets at Vanderbilt are being used to
search for eclipsing binaries and extrasolar planets, this system can be easily
reconfigured for a wide variety of data sources. The user loads a flat-file
dataset into Filtergraph which instantly generates an interactive data portal
that can be easily shared with others. From this portal, the user can
immediately generate scatter plots, histograms, and tables based on the
dataset. Key features of the portal include the ability to filter the data in
real time through user-specified criteria, the ability to select data by
dragging on the screen, and the ability to perform arithmetic operations on the
data in real time. The application is being optimized for speed in the context
of very large datasets: for instance, a plot generated from a stellar database of
3.1 million entries renders in less than 2 seconds on a standard web server
platform. This web application has been created using the Web2py web framework
based on the Python programming language. Filtergraph is freely available at
http://filtergraph.vanderbilt.edu/.
|
1212.4871 | Ramin Norousi | Ramin Norousi, Stephan Wickles, Christoph Leidig, Thomas Becker,
Volker J. Schmid, Roland Beckmann, Achim Tresch | Automatic post-picking using MAPPOS improves particle image detection
from Cryo-EM micrographs | null | null | null | null | stat.ML cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cryo-electron microscopy (cryo-EM) studies using single particle
reconstruction are extensively used to reveal structural information on
macromolecular complexes. Aiming at the highest achievable resolution, state of
the art electron microscopes automatically acquire thousands of high-quality
micrographs. Particles are detected on and boxed out from each micrograph using
fully- or semi-automated approaches. However, the obtained particles still
require laborious manual post-picking classification, which is one major
bottleneck for single particle analysis of large datasets. We introduce MAPPOS,
a supervised post-picking strategy for the classification of boxed particle
images, as an additional strategy complementing the already efficient automated
particle picking routines. MAPPOS employs machine learning techniques to train
a robust classifier from a small number of characteristic image features. In
order to accurately quantify the performance of MAPPOS we used simulated
particle and non-particle images. In addition, we verified our method by
applying it to an experimental cryo-EM dataset and comparing the results to the
manual classification of the same dataset. Comparisons between MAPPOS and
manual post-picking classification by several human experts demonstrated that
merely a few hundred sample images are sufficient for MAPPOS to classify an
entire dataset with a human-like performance. MAPPOS was shown to greatly
accelerate the throughput of large datasets by reducing the manual workload by
orders of magnitude while maintaining a reliable identification of non-particle
images.
| [
{
"version": "v1",
"created": "Wed, 19 Dec 2012 22:17:18 GMT"
}
] | 2012-12-21T00:00:00 | [
[
"Norousi",
"Ramin",
""
],
[
"Wickles",
"Stephan",
""
],
[
"Leidig",
"Christoph",
""
],
[
"Becker",
"Thomas",
""
],
[
"Schmid",
"Volker J.",
""
],
[
"Beckmann",
"Roland",
""
],
[
"Tresch",
"Achim",
""
]
] | TITLE: Automatic post-picking using MAPPOS improves particle image detection
from Cryo-EM micrographs
ABSTRACT: Cryo-electron microscopy (cryo-EM) studies using single particle
reconstruction are extensively used to reveal structural information on
macromolecular complexes. Aiming at the highest achievable resolution, state of
the art electron microscopes automatically acquire thousands of high-quality
micrographs. Particles are detected on and boxed out from each micrograph using
fully- or semi-automated approaches. However, the obtained particles still
require laborious manual post-picking classification, which is one major
bottleneck for single particle analysis of large datasets. We introduce MAPPOS,
a supervised post-picking strategy for the classification of boxed particle
images, as an additional strategy complementing the already efficient automated
particle picking routines. MAPPOS employs machine learning techniques to train
a robust classifier from a small number of characteristic image features. In
order to accurately quantify the performance of MAPPOS we used simulated
particle and non-particle images. In addition, we verified our method by
applying it to an experimental cryo-EM dataset and comparing the results to the
manual classification of the same dataset. Comparisons between MAPPOS and
manual post-picking classification by several human experts demonstrated that
merely a few hundred sample images are sufficient for MAPPOS to classify an
entire dataset with a human-like performance. MAPPOS was shown to greatly
accelerate the throughput of large datasets by reducing the manual workload by
orders of magnitude while maintaining a reliable identification of non-particle
images.
|
1212.4788 | Dominik Grimm dg | Dominik Grimm, Bastian Greshake, Stefan Kleeberger, Christoph Lippert,
Oliver Stegle, Bernhard Sch\"olkopf, Detlef Weigel and Karsten Borgwardt | easyGWAS: An integrated interspecies platform for performing genome-wide
association studies | null | null | null | null | q-bio.GN cs.CE cs.DL stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: The rapid growth in genome-wide association studies (GWAS) in
plants and animals has brought about the need for a central resource that
facilitates i) performing GWAS, ii) accessing data and results of other GWAS,
and iii) enabling all users regardless of their background to exploit the
latest statistical techniques without having to manage complex software and
computing resources.
Results: We present easyGWAS, a web platform that provides methods, tools and
dynamic visualizations to perform and analyze GWAS. In addition, easyGWAS makes
it simple to reproduce results of others, validate findings, and access larger
sample sizes through merging of public datasets.
Availability: Detailed method and data descriptions as well as tutorials are
available in the supplementary materials. easyGWAS is available at
http://easygwas.tuebingen.mpg.de/.
Contact: [email protected]
| [
{
"version": "v1",
"created": "Wed, 19 Dec 2012 18:39:06 GMT"
}
] | 2012-12-20T00:00:00 | [
[
"Grimm",
"Dominik",
""
],
[
"Greshake",
"Bastian",
""
],
[
"Kleeberger",
"Stefan",
""
],
[
"Lippert",
"Christoph",
""
],
[
"Stegle",
"Oliver",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Weigel",
"Detlef",
""
],
[
"Borgwardt",
"Karsten",
""
]
] | TITLE: easyGWAS: An integrated interspecies platform for performing genome-wide
association studies
ABSTRACT: Motivation: The rapid growth in genome-wide association studies (GWAS) in
plants and animals has brought about the need for a central resource that
facilitates i) performing GWAS, ii) accessing data and results of other GWAS,
and iii) enabling all users regardless of their background to exploit the
latest statistical techniques without having to manage complex software and
computing resources.
Results: We present easyGWAS, a web platform that provides methods, tools and
dynamic visualizations to perform and analyze GWAS. In addition, easyGWAS makes
it simple to reproduce results of others, validate findings, and access larger
sample sizes through merging of public datasets.
Availability: Detailed method and data descriptions as well as tutorials are
available in the supplementary materials. easyGWAS is available at
http://easygwas.tuebingen.mpg.de/.
Contact: [email protected]
|
1212.4347 | Bonggun Shin | Bonggun Shin, Alice Oh | Bayesian Group Nonnegative Matrix Factorization for EEG Analysis | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a generative model of a group EEG analysis, based on appropriate
kernel assumptions on EEG data. We derive the variational inference update rule
using various approximation techniques. The proposed model outperforms the
current state-of-the-art algorithms in terms of common pattern extraction. The
validity of the proposed model is tested on the BCI competition dataset.
| [
{
"version": "v1",
"created": "Tue, 18 Dec 2012 13:35:38 GMT"
}
] | 2012-12-19T00:00:00 | [
[
"Shin",
"Bonggun",
""
],
[
"Oh",
"Alice",
""
]
] | TITLE: Bayesian Group Nonnegative Matrix Factorization for EEG Analysis
ABSTRACT: We propose a generative model of a group EEG analysis, based on appropriate
kernel assumptions on EEG data. We derive the variational inference update rule
using various approximation techniques. The proposed model outperforms the
current state-of-the-art algorithms in terms of common pattern extraction. The
validity of the proposed model is tested on the BCI competition dataset.
|
1212.3938 | Aurore Laurendeau | Aurore Laurendeau (ISTerre), Fabrice Cotton (ISTerre), Luis Fabian
Bonilla | Nonstationary Stochastic Simulation of Strong Ground-Motion Time
Histories : Application to the Japanese Database | 10 pages; 15th World Conference on Earthquake Engineering, Lisbon :
Portugal (2012) | null | null | null | stat.AP physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For earthquake-resistant design, engineering seismologists employ
time-history analysis for nonlinear simulations. The nonstationary stochastic
method previously developed by Pousse et al. (2006) has been updated. This
method has the advantage of being both simple, fast and taking into account the
basic concepts of seismology (Brune's source, realistic time envelope function,
nonstationarity and ground-motion variability). Time-domain simulations are
derived from the signal spectrogram and depend on a few ground-motion parameters:
Arias intensity, significant relative duration and central frequency. These
indicators are obtained from empirical attenuation equations that relate them
to the magnitude of the event, the source-receiver distance, and the site
conditions. We improve the nonstationary stochastic method by using new
functional forms (new surface rock dataset, analysis of both intra-event and
inter-event residuals, consideration of the scaling relations and VS30), by
assessing the central frequency with S-transform and by better considering the
stress drop variability.
| [
{
"version": "v1",
"created": "Mon, 17 Dec 2012 08:47:29 GMT"
}
] | 2012-12-18T00:00:00 | [
[
"Laurendeau",
"Aurore",
"",
"ISTerre"
],
[
"Cotton",
"Fabrice",
"",
"ISTerre"
],
[
"Bonilla",
"Luis Fabian",
""
]
] | TITLE: Nonstationary Stochastic Simulation of Strong Ground-Motion Time
Histories : Application to the Japanese Database
ABSTRACT: For earthquake-resistant design, engineering seismologists employ
time-history analysis for nonlinear simulations. The nonstationary stochastic
method previously developed by Pousse et al. (2006) has been updated. This
method has the advantage of being both simple, fast and taking into account the
basic concepts of seismology (Brune's source, realistic time envelope function,
nonstationarity and ground-motion variability). Time-domain simulations are
derived from the signal spectrogram and depend on a few ground-motion parameters:
Arias intensity, significant relative duration and central frequency. These
indicators are obtained from empirical attenuation equations that relate them
to the magnitude of the event, the source-receiver distance, and the site
conditions. We improve the nonstationary stochastic method by using new
functional forms (new surface rock dataset, analysis of both intra-event and
inter-event residuals, consideration of the scaling relations and VS30), by
assessing the central frequency with S-transform and by better considering the
stress drop variability.
|
1212.3964 | Sourav Dutta | Suman K. Bera, Sourav Dutta, Ankur Narang and Souvik Bhattacherjee | Advanced Bloom Filter Based Algorithms for Efficient Approximate Data
De-Duplication in Streams | 41 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Applications involving telecommunication call data records, web pages, online
transactions, medical records, stock markets, climate warning systems, etc.,
necessitate efficient management and processing of massive, exponentially growing
amounts of data from diverse sources. De-duplication or Intelligent Compression
in streaming scenarios for approximate identification and elimination of
duplicates from such an unbounded data stream is a greater challenge given the
real-time nature of data arrival. Stable Bloom Filters (SBF) address this
problem to a certain extent.
In this work, we present several novel algorithms for the problem of
approximate detection of duplicates in data streams. We propose the Reservoir
Sampling based Bloom Filter (RSBF) combining the working principle of reservoir
sampling and Bloom Filters. We also present variants of the novel Biased
Sampling based Bloom Filter (BSBF) based on biased sampling concepts. We also
propose a randomized load balanced variant of the sampling Bloom Filter
approach to efficiently tackle the duplicate detection. In this work, we thus
provide a generic framework for de-duplication using Bloom Filters. Using
detailed theoretical analysis we prove analytical bounds on the false positive
rate, false negative rate and convergence rate of the proposed structures. We
exhibit that our models clearly outperform the existing methods. We also
demonstrate empirical analysis of the structures using real-world datasets (3
million records) and also with synthetic datasets (1 billion records) capturing
various input distributions.
| [
{
"version": "v1",
"created": "Mon, 17 Dec 2012 11:47:09 GMT"
}
] | 2012-12-18T00:00:00 | [
[
"Bera",
"Suman K.",
""
],
[
"Dutta",
"Sourav",
""
],
[
"Narang",
"Ankur",
""
],
[
"Bhattacherjee",
"Souvik",
""
]
] | TITLE: Advanced Bloom Filter Based Algorithms for Efficient Approximate Data
De-Duplication in Streams
ABSTRACT: Applications involving telecommunication call data records, web pages, online
transactions, medical records, stock markets, climate warning systems, etc.,
necessitate efficient management and processing of massive, exponentially growing
amounts of data from diverse sources. De-duplication or Intelligent Compression
in streaming scenarios for approximate identification and elimination of
duplicates from such an unbounded data stream is a greater challenge given the
real-time nature of data arrival. Stable Bloom Filters (SBF) address this
problem to a certain extent.
In this work, we present several novel algorithms for the problem of
approximate detection of duplicates in data streams. We propose the Reservoir
Sampling based Bloom Filter (RSBF) combining the working principle of reservoir
sampling and Bloom Filters. We also present variants of the novel Biased
Sampling based Bloom Filter (BSBF) based on biased sampling concepts. We also
propose a randomized load balanced variant of the sampling Bloom Filter
approach to efficiently tackle the duplicate detection. In this work, we thus
provide a generic framework for de-duplication using Bloom Filters. Using
detailed theoretical analysis we prove analytical bounds on the false positive
rate, false negative rate and convergence rate of the proposed structures. We
exhibit that our models clearly outperform the existing methods. We also
demonstrate empirical analysis of the structures using real-world datasets (3
million records) and also with synthetic datasets (1 billion records) capturing
various input distributions.
|
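The core primitive behind the RSBF/BSBF structures described above is Bloom-filter membership testing on a stream. A minimal sketch of plain Bloom-filter de-duplication follows; it is not the paper's reservoir- or biased-sampling variant, and the bit-array size and number of hash functions are assumptions.

    # Minimal sketch of approximate stream de-duplication with a plain Bloom
    # filter. The RSBF/BSBF structures of the paper add (biased) sampling on top
    # of this primitive; sizes and hash counts below are illustrative.
    import hashlib

    class BloomFilter:
        def __init__(self, n_bits=1 << 20, n_hashes=5):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = bytearray(n_bits // 8)

        def _positions(self, item):
            for k in range(self.n_hashes):
                digest = hashlib.sha1(f"{k}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.n_bits

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, item):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

    def deduplicate(stream):
        seen = BloomFilter()
        for record in stream:
            if record in seen:      # flagged as duplicate (false positives possible)
                continue
            seen.add(record)
            yield record

    print(list(deduplicate(["a", "b", "a", "c", "b"])))  # ['a', 'b', 'c']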
1212.3390 | A Majumder | Anirban Majumder and Nisheeth Shrivastava | Know Your Personalization: Learning Topic level Personalization in
Online Services | privacy, personalization | null | null | null | cs.LG cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online service platforms (OSPs), such as search engines, news-websites,
ad-providers, etc., serve highly personalized content to the user, based on
the profile extracted from his history with the OSP. Although personalization
(generally) leads to a better user experience, it also raises privacy concerns
for the user---he does not know what is present in his profile and more
importantly, what is being used to personalize content for him. In this paper,
we capture the OSP's personalization for a user in a new data structure called
the personalization vector ($\eta$), which is a weighted vector over a set of
topics, and present techniques to compute it for users of an OSP. Our approach
treats OSPs as black-boxes, and extracts $\eta$ by mining only their output,
specifically, the personalized (for a user) and vanilla (without any user
information) contents served, and the differences between these contents. We
formulate a new model called Latent Topic Personalization (LTP) that captures
the personalization vector into a learning framework and present efficient
inference algorithms for it. We do extensive experiments for search result
personalization using both data from real Google users and synthetic datasets.
Our results show high accuracy (R-pre = 84%) of LTP in finding personalized
topics. For Google data, our qualitative results show how LTP can also
identify evidence---queries for results on a topic with a high $\eta$ value
were re-ranked. Finally, we show how our approach can be used to build a new
Privacy evaluation framework focused at end-user privacy on commercial OSPs.
| [
{
"version": "v1",
"created": "Fri, 14 Dec 2012 04:12:21 GMT"
}
] | 2012-12-17T00:00:00 | [
[
"Majumder",
"Anirban",
""
],
[
"Shrivastava",
"Nisheeth",
""
]
] | TITLE: Know Your Personalization: Learning Topic level Personalization in
Online Services
ABSTRACT: Online service platforms (OSPs), such as search engines, news-websites,
ad-providers, etc., serve highly personalized content to the user, based on
the profile extracted from his history with the OSP. Although personalization
(generally) leads to a better user experience, it also raises privacy concerns
for the user---he does not know what is present in his profile and more
importantly, what is being used to personalize content for him. In this paper,
we capture the OSP's personalization for a user in a new data structure called
the personalization vector ($\eta$), which is a weighted vector over a set of
topics, and present techniques to compute it for users of an OSP. Our approach
treats OSPs as black-boxes, and extracts $\eta$ by mining only their output,
specifically, the personalized (for a user) and vanilla (without any user
information) contents served, and the differences between these contents. We
formulate a new model called Latent Topic Personalization (LTP) that captures
the personalization vector into a learning framework and present efficient
inference algorithms for it. We do extensive experiments for search result
personalization using both data from real Google users and synthetic datasets.
Our results show high accuracy (R-pre = 84%) of LTP in finding personalized
topics. For Google data, our qualitative results show how LTP can also
identify evidence---queries for results on a topic with a high $\eta$ value
were re-ranked. Finally, we show how our approach can be used to build a new
Privacy evaluation framework focused at end-user privacy on commercial OSPs.
|
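A rough, illustrative approximation of the personalization-vector idea above, not the LTP inference algorithm of the paper, is to compare the topic mix of the personalized and vanilla result lists and keep the normalized positive difference. In the sketch below the topic-labelling function is an assumed input.

    # Illustrative approximation only: eta as the normalized positive difference
    # between the topic histograms of personalized and vanilla results. The
    # `topic_of` function (mapping a result to a topic) is an assumed input;
    # the paper instead infers eta with the LTP model.
    from collections import Counter

    def topic_histogram(results, topic_of, topics):
        counts = Counter(topic_of(r) for r in results)
        total = sum(counts.values()) or 1
        return {t: counts.get(t, 0) / total for t in topics}

    def personalization_vector(personalized, vanilla, topic_of, topics):
        p = topic_histogram(personalized, topic_of, topics)
        v = topic_histogram(vanilla, topic_of, topics)
        diff = {t: max(p[t] - v[t], 0.0) for t in topics}
        norm = sum(diff.values()) or 1.0
        return {t: w / norm for t, w in diff.items()}

    topics = ["sports", "tech", "travel"]
    topic_of = lambda doc: doc["topic"]          # toy labelling for illustration
    eta = personalization_vector(
        personalized=[{"topic": "tech"}, {"topic": "tech"}, {"topic": "travel"}],
        vanilla=[{"topic": "sports"}, {"topic": "tech"}, {"topic": "travel"}],
        topic_of=topic_of, topics=topics)
    print(eta)  # weight concentrated on "tech", the over-represented topic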
1212.2981 | Celine Beauval | C\'eline Beauval (ISTerre), Hilal Tasan (ISTerre), Aurore Laurendeau
(ISTerre), Elise Delavaud, Fabrice Cotton (ISTerre), Philippe Gu\'eguen
(ISTerre), Nicolas Kuehn | On the Testing of Ground--Motion Prediction Equations against
Small--Magnitude Data | null | Bulletin of the Seismological Society of America 102, 5 (2012)
1994-2007 | 10.1785/0120110271 | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ground-motion prediction equations (GMPE) are essential in probabilistic
seismic hazard studies for estimating the ground motions generated by the
seismic sources. In low seismicity regions, only weak motions are available in
the lifetime of accelerometric networks, and the equations selected for the
probabilistic studies are usually models established from foreign data.
Although most ground-motion prediction equations have been developed for
magnitudes 5 and above, the minimum magnitude often used in probabilistic
studies in low seismicity regions is smaller. Disaggregations have shown that,
at return periods of engineering interest, magnitudes lower than 5 can be
contributing to the hazard. This paper presents the testing of several GMPEs
selected in current international and national probabilistic projects against
weak motions recorded in France (191 recordings with source-site distances up
to 300km, 3.8\leqMw\leq4.5). The method is based on the loglikelihood value
proposed by Scherbaum et al. (2009). The best fitting models (approximately
2.5\leqLLH\leq3.5) over the whole frequency range are the Cauzzi and Faccioli
(2008), Akkar and Bommer (2010) and Abrahamson and Silva (2008) models. No
significant regional variation of ground motions is highlighted, and the
magnitude scaling could be predominant in the control of ground-motion
amplitudes. Furthermore, we take advantage of a rich Japanese dataset to run
tests on randomly selected low-magnitude subsets, and check that a dataset of
~190 observations, same size as the French dataset, is large enough to obtain
stable LLH estimates. Additionally we perform the tests against larger
magnitudes (5-7) from the Japanese dataset. The ranking of models is partially
modified, indicating a magnitude scaling effect for some of the models, and
showing that extrapolating testing results obtained from low magnitude ranges
to higher magnitude ranges is not straightforward.
| [
{
"version": "v1",
"created": "Wed, 12 Dec 2012 21:01:39 GMT"
}
] | 2012-12-14T00:00:00 | [
[
"Beauval",
"Céline",
"",
"ISTerre"
],
[
"Tasan",
"Hilal",
"",
"ISTerre"
],
[
"Laurendeau",
"Aurore",
"",
"ISTerre"
],
[
"Delavaud",
"Elise",
"",
"ISTerre"
],
[
"Cotton",
"Fabrice",
"",
"ISTerre"
],
[
"Guéguen",
"Philippe",
"",
"ISTerre"
],
[
"Kuehn",
"Nicolas",
""
]
] | TITLE: On the Testing of Ground--Motion Prediction Equations against
Small--Magnitude Data
ABSTRACT: Ground-motion prediction equations (GMPE) are essential in probabilistic
seismic hazard studies for estimating the ground motions generated by the
seismic sources. In low seismicity regions, only weak motions are available in
the lifetime of accelerometric networks, and the equations selected for the
probabilistic studies are usually models established from foreign data.
Although most ground-motion prediction equations have been developed for
magnitudes 5 and above, the minimum magnitude often used in probabilistic
studies in low seismicity regions is smaller. Disaggregations have shown that,
at return periods of engineering interest, magnitudes lower than 5 can be
contributing to the hazard. This paper presents the testing of several GMPEs
selected in current international and national probabilistic projects against
weak motions recorded in France (191 recordings with source-site distances up
to 300km, 3.8\leqMw\leq4.5). The method is based on the loglikelihood value
proposed by Scherbaum et al. (2009). The best fitting models (approximately
2.5\leqLLH\leq3.5) over the whole frequency range are the Cauzzi and Faccioli
(2008), Akkar and Bommer (2010) and Abrahamson and Silva (2008) models. No
significant regional variation of ground motions is highlighted, and the
magnitude scaling could be predominant in the control of ground-motion
amplitudes. Furthermore, we take advantage of a rich Japanese dataset to run
tests on randomly selected low-magnitude subsets, and check that a dataset of
~190 observations, same size as the French dataset, is large enough to obtain
stable LLH estimates. Additionally we perform the tests against larger
magnitudes (5-7) from the Japanese dataset. The ranking of models is partially
modified, indicating a magnitude scaling effect for some of the models, and
showing that extrapolating testing results obtained from low magnitude ranges
to higher magnitude ranges is not straightforward.
|
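For reference, the log-likelihood (LLH) measure of Scherbaum et al. (2009) used above to rank GMPEs is the negative average binary log of the model's predictive density at the observations. A small sketch, assuming a normal density in log ground-motion space with the GMPE's median and sigma, follows.

    # Sketch of the sample log-likelihood (LLH) measure of Scherbaum et al. (2009):
    # LLH = -(1/N) * sum_i log2 g(x_i), with g the candidate model's predictive
    # density, here taken as a normal in log ground-motion space. The median and
    # sigma arrays are assumed to come from the GMPE being tested.
    import numpy as np
    from scipy.stats import norm

    def llh(observed_log_gm, predicted_median_log_gm, sigma_log_gm):
        dens = norm.pdf(observed_log_gm, loc=predicted_median_log_gm, scale=sigma_log_gm)
        return -np.mean(np.log2(dens))

    # Lower LLH indicates a better fit of the GMPE to the observed ground motions.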
1212.3013 | Gabriele Modena | K. Massoudi, G. Modena | Product/Brand extraction from WikiPedia | 17 pages. Manuscript first creation date: November 27, 2009. At the
time of first creation both authors were affiliated with the University of
Amsterdam (The Netherlands) | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we describe the task of extracting product and brand pages from
Wikipedia. We present an experimental environment and setup built on top of a
dataset of Wikipedia pages we collected. We introduce a method for recognition
of product pages modelled as a boolean probabilistic classification task. We
show that this approach can lead to promising results and we discuss
alternative approaches we considered.
| [
{
"version": "v1",
"created": "Wed, 12 Dec 2012 23:25:46 GMT"
}
] | 2012-12-14T00:00:00 | [
[
"Massoudi",
"K.",
""
],
[
"Modena",
"G.",
""
]
] | TITLE: Product/Brand extraction from WikiPedia
ABSTRACT: In this paper we describe the task of extracting product and brand pages from
wikipedia. We present an experimental environment and setup built on top of a
dataset of wikipedia pages we collected. We introduce a method for recognition
of product pages modelled as a boolean probabilistic classification task. We
show that this approach can lead to promising results and we discuss
alternative approaches we considered.
|
1212.3152 | Benjamin Laken | Benjamin A. Laken, Jasa \v{C}alogovi\'c, Tariq Shahbaz and Enric
Pall\'e | Examining a solar climate link in diurnal temperature ranges | 18 pages, 7 figures, 1 table | Laken B.A., J. Calogovic, T. Shahbaz, & E. Palle (2012) Examining
a solar-climate link in diurnal temperature ranges. Journal of Geophysical
Research, 117, D18112, 9PP | 10.1029/2012JD17683 | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent study has suggested a link between the surface level diurnal
temperature range (DTR) and variations in the cosmic ray (CR) flux. As the DTR
is an effective proxy for cloud cover, this result supports the notion that
widespread cloud changes may be induced by the CR flux. If confirmed, this
would have significant implications for our understanding of natural climate
forcings. Here, we perform a detailed investigation of the relationships
between DTR and solar activity (total solar irradiance and the CR flux) from
more than 60 years of NCEP/NCAR reanalysis data and observations from
meteorological station data. We find no statistically significant evidence to
suggest that the DTR is connected to either long-term solar periodicities (11
or 1.68 year) or short-term (daily-timescale) fluctuations in solar activity,
and we attribute previous reports on the contrary to an incorrect estimation of
the statistical significance of the data. If a CR-DTR relationship exists,
based on the estimated noise in DTR composites during Forbush decrease (FD)
events, the DTR response would need to be larger than 0.03{\deg}C per 1%
increase in the CR flux to be reliably detected. Compared with a much smaller
rough estimate of -0.005{\deg}C per 1% increase in the CR flux expected if
previous claims that FD events cause reductions in the cloud cover are valid,
we conclude it is not possible to detect a solar-related response in
station-based or reanalysis-based DTR datasets related to a hypothesized
CR-cloud link, as potential signals would be drowned in noise.
| [
{
"version": "v1",
"created": "Thu, 13 Dec 2012 12:42:43 GMT"
}
] | 2012-12-14T00:00:00 | [
[
"Laken",
"Benjamin A.",
""
],
[
"Čalogović",
"Jasa",
""
],
[
"Shahbaz",
"Tariq",
""
],
[
"Pallé",
"Enric",
""
]
] | TITLE: Examining a solar climate link in diurnal temperature ranges
ABSTRACT: A recent study has suggested a link between the surface level diurnal
temperature range (DTR) and variations in the cosmic ray (CR) flux. As the DTR
is an effective proxy for cloud cover, this result supports the notion that
widespread cloud changes may be induced by the CR flux. If confirmed, this
would have significant implications for our understanding of natural climate
forcings. Here, we perform a detailed investigation of the relationships
between DTR and solar activity (total solar irradiance and the CR flux) from
more than 60 years of NCEP/NCAR reanalysis data and observations from
meteorological station data. We find no statistically significant evidence to
suggest that the DTR is connected to either long-term solar periodicities (11
or 1.68 year) or short-term (daily-timescale) fluctuations in solar activity,
and we attribute previous reports on the contrary to an incorrect estimation of
the statistical significance of the data. If a CR-DTR relationship exists,
based on the estimated noise in DTR composites during Forbush decrease (FD)
events, the DTR response would need to be larger than 0.03{\deg}C per 1%
increase in the CR flux to be reliably detected. Compared with a much smaller
rough estimate of -0.005{\deg}C per 1% increase in the CR flux expected if
previous claims that FD events cause reductions in the cloud cover are valid,
we conclude it is not possible to detect a solar-related response in
station-based or reanalysis-based DTR datasets related to a hypothesized
CR-cloud link, as potential signals would be drowned in noise.
|
1212.3287 | Celine Beauval | C\'eline Beauval (ISTerre), F. Cotton (ISTerre), N. Abrahamson, N.
Theodulidis (ITSAK), E. Delavaud (ISTerre), L. Rodriguez (ISTerre), F.
Scherbaum, A. Haendel | Regional differences in subduction ground motions | 10 pages | World Conference on Earthquake Engineering, Lisbonne : Portugal
(2012) | null | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A few ground-motion prediction models have been published in recent years
for predicting ground motions produced by interface and intraslab earthquakes.
When one must carry out a probabilistic seismic hazard analysis in a region
including a subduction zone, GMPEs must be selected to feed a logic tree. In
the present study, the aim is to identify which models provide the best fit to
the dataset M6+, global or local models. The subduction regions considered are
Japan, Taiwan, Central and South America, and Greece. Most of the data comes
from the database built to develop the new BCHydro subduction global GMPE
(Abrahamson et al., submitted). We show that this model is among the
best-fitting models in all cases, followed closely by Zhao et al. (2006),
whereas the local Lin and Lee (2008) model predicts the data well in Taiwan and
also in Greece. The Scherbaum et al. (2009) LLH method proves to be efficient in
providing one number quantifying the overall fit, but additional analyses of the
between-event and within-event variabilities are mandatory, to check whether the
median prediction per event and/or the variability within an event falls within
the scatter predicted by the model.
| [
{
"version": "v1",
"created": "Thu, 13 Dec 2012 19:41:24 GMT"
}
] | 2012-12-14T00:00:00 | [
[
"Beauval",
"Céline",
"",
"ISTerre"
],
[
"Cotton",
"F.",
"",
"ISTerre"
],
[
"Abrahamson",
"N.",
"",
"ITSAK"
],
[
"Theodulidis",
"N.",
"",
"ITSAK"
],
[
"Delavaud",
"E.",
"",
"ISTerre"
],
[
"Rodriguez",
"L.",
"",
"ISTerre"
],
[
"Scherbaum",
"F.",
""
],
[
"Haendel",
"A.",
""
]
] | TITLE: Regional differences in subduction ground motions
ABSTRACT: A few ground-motion prediction models have been published in recent years
for predicting ground motions produced by interface and intraslab earthquakes.
When one must carry out a probabilistic seismic hazard analysis in a region
including a subduction zone, GMPEs must be selected to feed a logic tree. In
the present study, the aim is to identify which models provide the best fit to
the dataset M6+, global or local models. The subduction regions considered are
Japan, Taiwan, Central and South America, and Greece. Most of the data comes
from the database built to develop the new BCHydro subduction global GMPE
(Abrahamson et al., submitted). We show that this model is among the
best-fitting models in all cases, followed closely by Zhao et al. (2006),
whereas the local Lin and Lee (2008) model predicts the data well in Taiwan and
also in Greece. The Scherbaum et al. (2009) LLH method proves to be efficient in
providing one number quantifying the overall fit, but additional analyses of the
between-event and within-event variabilities are mandatory, to check whether the
median prediction per event and/or the variability within an event falls within
the scatter predicted by the model.
|
1212.2692 | Ghazali Osman | Ghazali Osman, Muhammad Suzuri Hitam and Mohd Nasir Ismail | Enhanced skin colour classifier using RGB Ratio model | 14 pages; International Journal on Soft Computing (IJSC) Vol.3, No.4,
November 2012 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skin colour detection is frequently used for searching for people, face
detection, pornographic filtering and hand tracking. The presence of skin or
non-skin in a digital image can be determined from pixel colour or pixel
texture. The main problem in skin colour detection is to represent the
skin colour distribution model that is invariant or least sensitive to changes
in illumination condition. Another problem comes from the fact that many
objects in the real world may possess almost similar skin-tone colour such as
wood, leather, skin-coloured clothing, hair and sand. Moreover, skin colour is
different between races and can be different from a person to another, even
with people of the same ethnicity. Finally, skin colour will appear a little
different when different types of camera are used to capture the object or
scene. The objective of this study is to develop a pixel-based skin colour
classifier using the RGB ratio model. The RGB ratio model is a newly proposed
method that belongs under the category of an explicitly defined skin region
model. This skin classifier was tested with the SIdb dataset and two benchmark
datasets, UChile and TDSD, to measure classifier performance. The
performance of the skin classifier was measured using the true positive (TP) and
false positive (FP) indicators. This newly proposed model was compared with the
Kovac, Saleh and Swift models. The experimental results showed that the RGB
ratio model outperformed all the other models in terms of detection rate. The
RGB ratio model is able to reduce FP detections caused by reddish object
colours, as well as to detect darkened skin and skin covered by shadow.
| [
{
"version": "v1",
"created": "Wed, 12 Dec 2012 03:01:00 GMT"
}
] | 2012-12-13T00:00:00 | [
[
"Osman",
"Ghazali",
""
],
[
"Hitam",
"Muhammad Suzuri",
""
],
[
"Ismail",
"Mohd Nasir",
""
]
] | TITLE: Enhanced skin colour classifier using RGB Ratio model
ABSTRACT: Skin colour detection is frequently used for searching for people, face
detection, pornographic filtering and hand tracking. The presence of skin or
non-skin in a digital image can be determined from pixel colour or pixel
texture. The main problem in skin colour detection is to represent the
skin colour distribution model that is invariant or least sensitive to changes
in illumination condition. Another problem comes from the fact that many
objects in the real world may possess almost similar skin-tone colour such as
wood, leather, skin-coloured clothing, hair and sand. Moreover, skin colour is
different between races and can be different from a person to another, even
with people of the same ethnicity. Finally, skin colour will appear a little
different when different types of camera are used to capture the object or
scene. The objective of this study is to develop a pixel-based skin colour
classifier using the RGB ratio model. The RGB ratio model is a newly proposed
method that belongs under the category of an explicitly defined skin region
model. This skin classifier was tested with the SIdb dataset and two benchmark
datasets, UChile and TDSD, to measure classifier performance. The
performance of the skin classifier was measured using the true positive (TP) and
false positive (FP) indicators. This newly proposed model was compared with the
Kovac, Saleh and Swift models. The experimental results showed that the RGB
ratio model outperformed all the other models in terms of detection rate. The
RGB ratio model is able to reduce FP detections caused by reddish object
colours, as well as to detect darkened skin and skin covered by shadow.
|
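The abstract above does not give the exact decision rule of the RGB ratio model, so the sketch below only illustrates the general shape of an explicitly defined, pixel-based skin rule built on RGB ratios; all thresholds are placeholders, not the paper's values.

    # Illustrative pixel-based skin rule using RGB ratios. The thresholds are
    # placeholder assumptions, not the RGB ratio model's actual parameters.
    import numpy as np

    def skin_mask(image_rgb, min_r_over_g=1.15, min_r_over_b=1.15, min_r=60):
        img = image_rgb.astype(np.float32) + 1e-6     # avoid division by zero
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        return (r / g > min_r_over_g) & (r / b > min_r_over_b) & (r > min_r)

    # Usage: mask = skin_mask(frame)  ->  boolean array, True where the pixel
    # satisfies the (assumed) skin-tone ratio constraints.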
1212.2823 | Shuran Song | Shuran Song, Jianxiong Xiao | Tracking Revisited using RGBD Camera: Baseline and Benchmark | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although there has been significant progress in the past decade, tracking is
still a very challenging computer vision task, due to problems such as
occlusion and model drift. Recently, the increased popularity of depth sensors,
e.g. Microsoft Kinect, has made it easy to obtain depth data at low cost. This
may be a game changer for tracking, since depth information can be used to
prevent model drift and handle occlusion. In this paper, we construct a
benchmark dataset of 100 RGBD videos with high diversity, including deformable
objects, various occlusion conditions and moving cameras. We propose a very
simple but strong baseline model for RGBD tracking, and present a quantitative
comparison of several state-of-the-art tracking algorithms. Experimental results
show that including depth information and reasoning about occlusion
significantly improves tracking performance. The datasets, evaluation details,
source code for the baseline algorithm, and instructions for submitting new
models will be made available online after acceptance.
| [
{
"version": "v1",
"created": "Wed, 12 Dec 2012 14:02:41 GMT"
}
] | 2012-12-13T00:00:00 | [
[
"Song",
"Shuran",
""
],
[
"Xiao",
"Jianxiong",
""
]
] | TITLE: Tracking Revisited using RGBD Camera: Baseline and Benchmark
ABSTRACT: Although there has been significant progress in the past decade, tracking is
still a very challenging computer vision task, due to problems such as
occlusion and model drift. Recently, the increased popularity of depth sensors,
e.g. Microsoft Kinect, has made it easy to obtain depth data at low cost. This
may be a game changer for tracking, since depth information can be used to
prevent model drift and handle occlusion. In this paper, we construct a
benchmark dataset of 100 RGBD videos with high diversity, including deformable
objects, various occlusion conditions and moving cameras. We propose a very
simple but strong baseline model for RGBD tracking, and present a quantitative
comparison of several state-of-the-art tracking algorithms. Experimental results
show that including depth information and reasoning about occlusion
significantly improves tracking performance. The datasets, evaluation details,
source code for the baseline algorithm, and instructions for submitting new
models will be made available online after acceptance.
|
1212.2468 | David Maxwell Chickering | David Maxwell Chickering, Christopher Meek, David Heckerman | Large-Sample Learning of Bayesian Networks is NP-Hard | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-124-133 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we provide new complexity results for algorithms that learn
discrete-variable Bayesian networks from data. Our results apply whenever the
learning algorithm uses a scoring criterion that favors the simplest model able
to represent the generative distribution exactly. Our results therefore hold
whenever the learning algorithm uses a consistent scoring criterion and is
applied to a sufficiently large dataset. We show that identifying high-scoring
structures is hard, even when we are given an independence oracle, an inference
oracle, and/or an information oracle. Our negative results also apply to the
learning of discrete-variable Bayesian networks in which each node has at most
k parents, for all k > 3.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:04:28 GMT"
}
] | 2012-12-12T00:00:00 | [
[
"Chickering",
"David Maxwell",
""
],
[
"Meek",
"Christopher",
""
],
[
"Heckerman",
"David",
""
]
] | TITLE: Large-Sample Learning of Bayesian Networks is NP-Hard
ABSTRACT: In this paper, we provide new complexity results for algorithms that learn
discrete-variable Bayesian networks from data. Our results apply whenever the
learning algorithm uses a scoring criterion that favors the simplest model able
to represent the generative distribution exactly. Our results therefore hold
whenever the learning algorithm uses a consistent scoring criterion and is
applied to a sufficiently large dataset. We show that identifying high-scoring
structures is hard, even when we are given an independence oracle, an inference
oracle, and/or an information oracle. Our negative results also apply to the
learning of discrete-variable Bayesian networks in which each node has at most
k parents, for all k > 3.
|
1212.2478 | Rong Jin | Rong Jin, Luo Si, ChengXiang Zhai | Preference-based Graphic Models for Collaborative Filtering | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-329-336 | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative filtering is a very useful general technique for exploiting the
preference patterns of a group of users to predict the utility of items to a
particular user. Previous research has studied several probabilistic graphic
models for collaborative filtering with promising results. However, while these
models have succeeded in capturing the similarity among users and items in one
way or the other, none of them has considered the fact that users with similar
interests in items can have very different rating patterns; some users tend to
assign a higher rating to all items than other users. In this paper, we propose
and study of two new graphic models that address the distinction between user
preferences and ratings. In one model, called the decoupled model, we introduce
two different variables to decouple a users preferences FROM his ratings. IN
the other, called the preference model, we model the orderings OF items
preferred BY a USER, rather than the USERs numerical ratings of items.
Empirical study over two datasets of movie ratings shows that appropriate
modeling of the distinction between user preferences and ratings improves the
performance substantially and consistently. Specifically, the proposed
decoupled model outperforms all five existing approaches that we compare with
significantly, but the preference model is not very successful. These results
suggest that explicit modeling of the underlying user preferences is very
important for collaborative filtering, but we cannot afford to ignore the
rating information completely.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:06:09 GMT"
}
] | 2012-12-12T00:00:00 | [
[
"Jin",
"Rong",
""
],
[
"Si",
"Luo",
""
],
[
"Zhai",
"ChengXiang",
""
]
] | TITLE: Preference-based Graphic Models for Collaborative Filtering
ABSTRACT: Collaborative filtering is a very useful general technique for exploiting the
preference patterns of a group of users to predict the utility of items to a
particular user. Previous research has studied several probabilistic graphic
models for collaborative filtering with promising results. However, while these
models have succeeded in capturing the similarity among users and items in one
way or the other, none of them has considered the fact that users with similar
interests in items can have very different rating patterns; some users tend to
assign a higher rating to all items than other users. In this paper, we propose
and study two new graphic models that address the distinction between user
preferences and ratings. In one model, called the decoupled model, we introduce
two different variables to decouple a user's preferences from his ratings. In
the other, called the preference model, we model the orderings of items
preferred by a user, rather than the user's numerical ratings of items.
Empirical study over two datasets of movie ratings shows that appropriate
modeling of the distinction between user preferences and ratings improves the
performance substantially and consistently. Specifically, the proposed
decoupled model outperforms all five existing approaches that we compare with
significantly, but the preference model is not very successful. These results
suggest that explicit modeling of the underlying user preferences is very
important for collaborative filtering, but we cannot afford to ignore the
rating information completely.
|
1212.2483 | Amir Globerson | Amir Globerson, Gal Chechik, Naftali Tishby | Sufficient Dimensionality Reduction with Irrelevant Statistics | Appears in Proceedings of the Nineteenth Conference on Uncertainty in
Artificial Intelligence (UAI2003) | null | null | UAI-P-2003-PG-281-288 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of finding a reduced dimensionality representation of categorical
variables while preserving their most relevant characteristics is fundamental
for the analysis of complex data. Specifically, given a co-occurrence matrix of
two variables, one often seeks a compact representation of one variable which
preserves information about the other variable. We have recently introduced
``Sufficient Dimensionality Reduction'' [GT-2003], a method that extracts
continuous reduced dimensional features whose measurements (i.e., expectation
values) capture maximal mutual information among the variables. However, such
measurements often capture information that is irrelevant for a given task.
Widely known examples are illumination conditions, which are irrelevant as
features for face recognition, writing style which is irrelevant as a feature
for content classification, and intonation which is irrelevant as a feature for
speech recognition. Such irrelevance cannot be deduced a priori, since it
depends on the details of the task, and is thus inherently ill defined in the
purely unsupervised case. Separating relevant from irrelevant features can be
achieved using additional side data that contains such irrelevant structures.
This approach was taken in [CT-2002], extending the information bottleneck
method, which uses clustering to compress the data. Here we use this
side-information framework to identify features whose measurements are
maximally informative for the original data set, but carry as little
information as possible on a side data set. In statistical terms this can be
understood as extracting statistics which are maximally sufficient for the
original dataset, while simultaneously maximally ancillary for the side
dataset. We formulate this tradeoff as a constrained optimization problem and
characterize its solutions. We then derive a gradient descent algorithm for
this problem, which is based on the Generalized Iterative Scaling method for
finding maximum entropy distributions. The method is demonstrated on synthetic
data, as well as on real face recognition datasets, and is shown to outperform
standard methods such as oriented PCA.
| [
{
"version": "v1",
"created": "Fri, 19 Oct 2012 15:05:46 GMT"
}
] | 2012-12-12T00:00:00 | [
[
"Globerson",
"Amir",
""
],
[
"Chechik",
"Gal",
""
],
[
"Tishby",
"Naftali",
""
]
] | TITLE: Sufficient Dimensionality Reduction with Irrelevant Statistics
ABSTRACT: The problem of finding a reduced dimensionality representation of categorical
variables while preserving their most relevant characteristics is fundamental
for the analysis of complex data. Specifically, given a co-occurrence matrix of
two variables, one often seeks a compact representation of one variable which
preserves information about the other variable. We have recently introduced
``Sufficient Dimensionality Reduction'' [GT-2003], a method that extracts
continuous reduced dimensional features whose measurements (i.e., expectation
values) capture maximal mutual information among the variables. However, such
measurements often capture information that is irrelevant for a given task.
Widely known examples are illumination conditions, which are irrelevant as
features for face recognition, writing style which is irrelevant as a feature
for content classification, and intonation which is irrelevant as a feature for
speech recognition. Such irrelevance cannot be deduced a priori, since it
depends on the details of the task, and is thus inherently ill defined in the
purely unsupervised case. Separating relevant from irrelevant features can be
achieved using additional side data that contains such irrelevant structures.
This approach was taken in [CT-2002], extending the information bottleneck
method, which uses clustering to compress the data. Here we use this
side-information framework to identify features whose measurements are
maximally informative for the original data set, but carry as little
information as possible on a side data set. In statistical terms this can be
understood as extracting statistics which are maximally sufficient for the
original dataset, while simultaneously maximally ancillary for the side
dataset. We formulate this tradeoff as a constrained optimization problem and
characterize its solutions. We then derive a gradient descent algorithm for
this problem, which is based on the Generalized Iterative Scaling method for
finding maximum entropy distributions. The method is demonstrated on synthetic
data, as well as on real face recognition datasets, and is shown to outperform
standard methods such as oriented PCA.
|
1212.2546 | Jonathan Masci | Jonathan Masci and Jes\'us Angulo and J\"urgen Schmidhuber | A Learning Framework for Morphological Operators using Counter-Harmonic
Mean | Submitted to ISMM'13 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel framework for learning morphological operators using
counter-harmonic mean. It combines concepts from morphology and convolutional
neural networks. A thorough experimental validation analyzes basic
morphological operators dilation and erosion, opening and closing, as well as
the much more complex top-hat transform, for which we report a real-world
application from the steel industry. Using online learning and stochastic
gradient descent, our system learns both the structuring element and the
composition of operators. It scales well to large datasets and online settings.
| [
{
"version": "v1",
"created": "Tue, 11 Dec 2012 17:29:04 GMT"
}
] | 2012-12-12T00:00:00 | [
[
"Masci",
"Jonathan",
""
],
[
"Angulo",
"Jesús",
""
],
[
"Schmidhuber",
"Jürgen",
""
]
] | TITLE: A Learning Framework for Morphological Operators using Counter-Harmonic
Mean
ABSTRACT: We present a novel framework for learning morphological operators using
counter-harmonic mean. It combines concepts from morphology and convolutional
neural networks. A thorough experimental validation analyzes basic
morphological operators dilation and erosion, opening and closing, as well as
the much more complex top-hat transform, for which we report a real-world
application from the steel industry. Using online learning and stochastic
gradient descent, our system learns both the structuring element and the
composition of operators. It scales well to large datasets and online settings.
|
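The counter-harmonic mean that the framework above builds on has a simple closed form, CHM_P(f) = (sum over a window of f^(P+1)) / (sum over the same window of f^P), where large positive P approximates gray-scale dilation and large negative P erosion. The sketch below shows the fixed filter only, not the learned operator; the window size and P are assumptions.

    # Sketch of the counter-harmonic mean (CHM) filter underlying the framework:
    # CHM_P(f) = sum_W f^(P+1) / sum_W f^P over a local window W. Large positive P
    # approximates gray-scale dilation, large negative P erosion. Window size and
    # P are illustrative assumptions; the paper learns the structuring element.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def chm_filter(image, p, size=3, eps=1e-6):
        f = image.astype(np.float64) + eps   # keep negative powers well-defined
        num = uniform_filter(f ** (p + 1), size=size)
        den = uniform_filter(f ** p, size=size)
        return num / den                     # ratio of window means = ratio of sums

    # chm_filter(img, p=10) ~ dilation-like; chm_filter(img, p=-10) ~ erosion-like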
1212.2573 | K. S. Sesh Kumar | K. S. Sesh Kumar (LIENS, INRIA Paris - Rocquencourt), Francis Bach
(LIENS, INRIA Paris - Rocquencourt) | Convex Relaxations for Learning Bounded Treewidth Decomposable Graphs | null | null | null | null | cs.LG cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of learning the structure of undirected graphical
models with bounded treewidth, within the maximum likelihood framework. This is
an NP-hard problem and most approaches consider local search techniques. In
this paper, we pose it as a combinatorial optimization problem, which is then
relaxed to a convex optimization problem that involves searching over the
forest and hyperforest polytopes with special structures, independently. A
supergradient method is used to solve the dual problem, with a run-time
complexity of $O(k^3 n^{k+2} \log n)$ for each iteration, where $n$ is the
number of variables and $k$ is a bound on the treewidth. We compare our
approach to state-of-the-art methods on synthetic datasets and classical
benchmarks, showing the gains of the novel convex approach.
| [
{
"version": "v1",
"created": "Tue, 11 Dec 2012 18:22:31 GMT"
}
] | 2012-12-12T00:00:00 | [
[
"Kumar",
"K. S. Sesh",
"",
"LIENS, INRIA Paris - Rocquencourt"
],
[
"Bach",
"Francis",
"",
"LIENS, INRIA Paris - Rocquencourt"
]
] | TITLE: Convex Relaxations for Learning Bounded Treewidth Decomposable Graphs
ABSTRACT: We consider the problem of learning the structure of undirected graphical
models with bounded treewidth, within the maximum likelihood framework. This is
an NP-hard problem and most approaches consider local search techniques. In
this paper, we pose it as a combinatorial optimization problem, which is then
relaxed to a convex optimization problem that involves searching over the
forest and hyperforest polytopes with special structures, independently. A
supergradient method is used to solve the dual problem, with a run-time
complexity of $O(k^3 n^{k+2} \log n)$ for each iteration, where $n$ is the
number of variables and $k$ is a bound on the treewidth. We compare our
approach to state-of-the-art methods on synthetic datasets and classical
benchmarks, showing the gains of the novel convex approach.
|
1212.1909 | Luay Nakhleh | Yun Yu and Luay Nakhleh | Fast Algorithms for Reconciliation under Hybridization and Incomplete
Lineage Sorting | null | null | null | null | q-bio.PE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconciling a gene tree with a species tree is an important task that reveals
much about the evolution of genes, genomes, and species, as well as about the
molecular function of genes. A wide array of computational tools have been
devised for this task under certain evolutionary events such as hybridization,
gene duplication/loss, or incomplete lineage sorting. Work on reconciling gene
trees with species phylogenies under two or more of these events has also begun
to emerge. Our group recently devised both parsimony and probabilistic
frameworks for reconciling a gene tree with a phylogenetic network, thus
allowing for the detection of hybridization in the presence of incomplete
lineage sorting. While the frameworks were general and could handle any
topology, they are computationally intensive, rendering their application to
large datasets infeasible. In this paper, we present two novel approaches to
address the computational challenges of the two frameworks that are based on
the concept of ancestral configurations. Our approaches still compute exact
solutions while improving the computational time by up to five orders of
magnitude. These substantial gains in speed scale the applicability of these
unified reconciliation frameworks to much larger data sets. We discuss how the
topological features of the gene tree and phylogenetic network may affect the
performance of the new algorithms. We have implemented the algorithms in our
PhyloNet software package, which is publicly available in open source.
| [
{
"version": "v1",
"created": "Sun, 9 Dec 2012 18:12:55 GMT"
}
] | 2012-12-11T00:00:00 | [
[
"Yu",
"Yun",
""
],
[
"Nakhleh",
"Luay",
""
]
] | TITLE: Fast Algorithms for Reconciliation under Hybridization and Incomplete
Lineage Sorting
ABSTRACT: Reconciling a gene tree with a species tree is an important task that reveals
much about the evolution of genes, genomes, and species, as well as about the
molecular function of genes. A wide array of computational tools have been
devised for this task under certain evolutionary events such as hybridization,
gene duplication/loss, or incomplete lineage sorting. Work on reconciling gene
trees with species phylogenies under two or more of these events has also begun
to emerge. Our group recently devised both parsimony and probabilistic
frameworks for reconciling a gene tree with a phylogenetic network, thus
allowing for the detection of hybridization in the presence of incomplete
lineage sorting. While the frameworks were general and could handle any
topology, they are computationally intensive, rendering their application to
large datasets infeasible. In this paper, we present two novel approaches to
address the computational challenges of the two frameworks that are based on
the concept of ancestral configurations. Our approaches still compute exact
solutions while improving the computational time by up to five orders of
magnitude. These substantial gains in speed scale the applicability of these
unified reconciliation frameworks to much larger data sets. We discuss how the
topological features of the gene tree and phylogenetic network may affect the
performance of the new algorithms. We have implemented the algorithms in our
PhyloNet software package, which is publicly available in open source.
|
1212.1936 | Nicolas Boulanger-Lewandowski | Nicolas Boulanger-Lewandowski, Yoshua Bengio and Pascal Vincent | High-dimensional sequence transduction | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the problem of transforming an input sequence into a
high-dimensional output sequence in order to transcribe polyphonic audio music
into symbolic notation. We introduce a probabilistic model based on a recurrent
neural network that is able to learn realistic output distributions given the
input and we devise an efficient algorithm to search for the global mode of
that distribution. The resulting method produces musically plausible
transcriptions even under high levels of noise and drastically outperforms
previous state-of-the-art approaches on five datasets of synthesized sounds and
real recordings, approximately halving the test error rate.
| [
{
"version": "v1",
"created": "Sun, 9 Dec 2012 23:28:02 GMT"
}
] | 2012-12-11T00:00:00 | [
[
"Boulanger-Lewandowski",
"Nicolas",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Vincent",
"Pascal",
""
]
] | TITLE: High-dimensional sequence transduction
ABSTRACT: We investigate the problem of transforming an input sequence into a
high-dimensional output sequence in order to transcribe polyphonic audio music
into symbolic notation. We introduce a probabilistic model based on a recurrent
neural network that is able to learn realistic output distributions given the
input and we devise an efficient algorithm to search for the global mode of
that distribution. The resulting method produces musically plausible
transcriptions even under high levels of noise and drastically outperforms
previous state-of-the-art approaches on five datasets of synthesized sounds and
real recordings, approximately halving the test error rate.
|
1212.1633 | Andrei Bulatov | Cong Wang and Andrei A. Bulatov | Inferring Attitude in Online Social Networks Based On Quadratic
Correlation | 18 pages, 3 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The structure of an online social network in most cases cannot be described
just by links between its members. We study online social networks, in which
members may have certain attitude, positive or negative toward each other, and
so the network consists of a mixture of both positive and negative
relationships. Our goal is to predict the sign of a given relationship based on
the evidence provided in the current snapshot of the network. More precisely,
using machine learning techniques we develop a model that after being trained
on a particular network predicts the sign of an unknown or hidden link. The
model uses relationships and influences from peers as evidence for the guess;
however, the set of peers used is not predefined but rather learned during the
training process. We use quadratic correlation between peer members to train
the predictor. The model is tested on popular online datasets such as Epinions,
Slashdot, and Wikipedia. In many cases it shows almost perfect prediction
accuracy. Moreover, our model can also be efficiently updated as the
underlying social network evolves.
| [
{
"version": "v1",
"created": "Fri, 7 Dec 2012 15:45:35 GMT"
}
] | 2012-12-10T00:00:00 | [
[
"Wang",
"Cong",
""
],
[
"Bulatov",
"Andrei A.",
""
]
] | TITLE: Inferring Attitude in Online Social Networks Based On Quadratic
Correlation
ABSTRACT: The structure of an online social network in most cases cannot be described
just by links between its members. We study online social networks, in which
members may have certain attitude, positive or negative toward each other, and
so the network consists of a mixture of both positive and negative
relationships. Our goal is to predict the sign of a given relationship based on
the evidence provided in the current snapshot of the network. More precisely,
using machine learning techniques we develop a model that after being trained
on a particular network predicts the sign of an unknown or hidden link. The
model uses relationships and influences from peers as evidence for the guess;
however, the set of peers used is not predefined but rather learned during the
training process. We use quadratic correlation between peer members to train
the predictor. The model is tested on popular online datasets such as Epinions,
Slashdot, and Wikipedia. In many cases it shows almost perfect prediction
accuracy. Moreover, our model can also be efficiently updated as the
underlying social network evolves.
|
1211.6086 | Kang Zhao | Kang Zhao, Greta Greer, Baojun Qiu, Prasenjit Mitra, Kenneth Portier,
and John Yen | Finding influential users of an online health community: a new metric
based on sentiment influence | Working paper | null | null | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What characterizes influential users in online health communities (OHCs)? We
hypothesize that (1) the emotional support received by OHC members can be
assessed from their sentiment expressed in online interactions, and (2) such
assessments can help to identify influential OHC members. Through text mining
and sentiment analysis of users' online interactions, we propose a novel metric
that directly measures a user's ability to affect the sentiment of others.
Using a dataset from an OHC, we demonstrate that this metric is highly effective
in identifying influential users. In addition, combining the metric with other
traditional measures further improves the identification of influential users.
This study can facilitate online community management and advance our
understanding of social influence in OHCs.
| [
{
"version": "v1",
"created": "Mon, 26 Nov 2012 20:37:00 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Dec 2012 23:12:05 GMT"
}
] | 2012-12-07T00:00:00 | [
[
"Zhao",
"Kang",
""
],
[
"Greer",
"Greta",
""
],
[
"Qiu",
"Baojun",
""
],
[
"Mitra",
"Prasenjit",
""
],
[
"Portier",
"Kenneth",
""
],
[
"Yen",
"John",
""
]
] | TITLE: Finding influential users of an online health community: a new metric
based on sentiment influence
ABSTRACT: What characterizes influential users in online health communities (OHCs)? We
hypothesize that (1) the emotional support received by OHC members can be
assessed from their sentiment expressed in online interactions, and (2) such
assessments can help to identify influential OHC members. Through text mining
and sentiment analysis of users' online interactions, we propose a novel metric
that directly measures a user's ability to affect the sentiment of others.
Using a dataset from an OHC, we demonstrate that this metric is highly effective
in identifying influential users. In addition, combining the metric with other
traditional measures further improves the identification of influential users.
This study can facilitate online community management and advance our
understanding of social influence in OHCs.
|
1212.0888 | Roozbeh Rajabi | Roozbeh Rajabi, Hassan Ghassemian | Unmixing of Hyperspectral Data Using Robust Statistics-based NMF | 4 pages, conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mixed pixels are present in hyperspectral images due to low spatial
resolution of hyperspectral sensors. Spectral unmixing decomposes mixed pixels
spectra into endmember spectra and abundance fractions. In this paper, the use of
robust statistics-based nonnegative matrix factorization (RNMF) for spectral
unmixing of hyperspectral data is investigated. RNMF uses a robust cost
function and an iterative updating procedure, so it is not sensitive to outliers.
This method has been applied to simulated data using the USGS spectral library
and to the AVIRIS and ROSIS datasets. Unmixing results are compared to the
traditional NMF method based on SAD and AAD measures. Results demonstrate that this method can
be used efficiently for hyperspectral unmixing purposes.
| [
{
"version": "v1",
"created": "Tue, 4 Dec 2012 21:59:35 GMT"
}
] | 2012-12-06T00:00:00 | [
[
"Rajabi",
"Roozbeh",
""
],
[
"Ghassemian",
"Hassan",
""
]
] | TITLE: Unmixing of Hyperspectral Data Using Robust Statistics-based NMF
ABSTRACT: Mixed pixels are present in hyperspectral images due to low spatial
resolution of hyperspectral sensors. Spectral unmixing decomposes mixed pixels
spectra into endmember spectra and abundance fractions. In this paper, the use of
robust statistics-based nonnegative matrix factorization (RNMF) for spectral
unmixing of hyperspectral data is investigated. RNMF uses a robust cost
function and an iterative updating procedure, so it is not sensitive to outliers.
This method has been applied to simulated data using the USGS spectral library
and to the AVIRIS and ROSIS datasets. Unmixing results are compared to the
traditional NMF method based on SAD and AAD measures. Results demonstrate that this method can
be used efficiently for hyperspectral unmixing purposes.
|
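For context on the method described in the record above, the following is a minimal sketch of plain NMF with the standard Lee-Seung multiplicative updates for the Frobenius-norm cost; it is not the robust (RNMF) variant the paper proposes, and the array shapes, names and toy data are illustrative assumptions.

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: X (pixels x bands) ~ A @ S,
    with A >= 0 (abundance fractions) and S >= 0 (endmember spectra)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    A = rng.random((n, k)) + eps
    S = rng.random((k, m)) + eps
    for _ in range(n_iter):
        S *= (A.T @ X) / (A.T @ A @ S + eps)   # update endmember spectra
        A *= (X @ S.T) / (A @ S @ S.T + eps)   # update abundance fractions
    return A, S

# Toy usage: 100 mixed pixels, 50 spectral bands, 3 endmembers (all synthetic).
X = np.random.default_rng(1).random((100, 50))
A, S = nmf(X, k=3)
print("reconstruction error:", np.linalg.norm(X - A @ S))
```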
1212.1037 | Tushar Rao Mr. | Tushar Rao (NSIT-Delhi) and Saket Srivastava (IIIT-Delhi) | Modeling Movements in Oil, Gold, Forex and Market Indices using Search
Volume Index and Twitter Sentiments | 10 pages, 4 figures, 9 Tables | null | null | IIITD-TR-2012-005 | cs.CE cs.SI q-fin.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Study of the forecasting models using large scale microblog discussions and
search behavior data can provide good insight into market movements. In this
work we collected a dataset of 2 million tweets
and search volume index (SVI from Google) for a period of June 2010 to
September 2011. We perform a study over a set of comprehensive causative
relationships and developed a unified approach to a model for various market
securities like equity (Dow Jones Industrial Average-DJIA and NASDAQ-100),
commodity markets (oil and gold) and Euro Forex rates. We also investigate the
lagged and statistically causative relations of Twitter sentiments developed
during active trading days and market inactive days in combination with the
search behavior of the public before any change in the prices/indices. Our results
show the extent of lagged significance, with a high correlation value of up to 0.82
between search volumes and the gold price in USD. We find weekly accuracy in
direction (up and down prediction) of up to 94.3% for DJIA and 90% for NASDAQ-100
with significant reduction in mean average percentage error for all the
forecasting models.
| [
{
"version": "v1",
"created": "Wed, 5 Dec 2012 14:28:40 GMT"
}
] | 2012-12-06T00:00:00 | [
[
"Rao",
"Tushar",
"",
"NSIT-Delhi"
],
[
"Srivastava",
"Saket",
"",
"IIIT-Delhi"
]
] | TITLE: Modeling Movements in Oil, Gold, Forex and Market Indices using Search
Volume Index and Twitter Sentiments
ABSTRACT: Study of the forecasting models using large scale microblog discussions and
search behavior data can provide good insight into market movements. In this
work we collected a dataset of 2 million tweets
and search volume index (SVI from Google) for a period of June 2010 to
September 2011. We perform a study over a set of comprehensive causative
relationships and developed a unified approach to a model for various market
securities like equity (Dow Jones Industrial Average-DJIA and NASDAQ-100),
commodity markets (oil and gold) and Euro Forex rates. We also investigate the
lagged and statistically causative relations of Twitter sentiments developed
during active trading days and market inactive days in combination with the
search behavior of the public before any change in the prices/indices. Our results
show the extent of lagged significance, with a high correlation value of up to 0.82
between search volumes and the gold price in USD. We find weekly accuracy in
direction (up and down prediction) of up to 94.3% for DJIA and 90% for NASDAQ-100
with significant reduction in mean average percentage error for all the
forecasting models.
|
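The record above reports lagged correlations between search volume and prices. The sketch below shows one simple way to compute such a lagged Pearson correlation with NumPy; the series are synthetic stand-ins rather than the paper's Twitter/SVI data, and the 3-day lag in the toy example is an invented assumption.

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson correlation between x at time t and y at time t + lag."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(0)
svi = rng.random(300)                                    # synthetic search-volume series
gold = np.roll(svi, 3) + 0.1 * rng.standard_normal(300)  # synthetic price lagging SVI by 3 days
best = max(range(15), key=lambda lag: lagged_corr(svi, gold, lag))
print("best lag:", best, "corr:", round(lagged_corr(svi, gold, best), 3))
```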
1212.1100 | Jim Smith Dr | J. E. Smith, P. Caleb-Solly, M. A. Tahir, D. Sannen, H. van-Brussel | Making Early Predictions of the Accuracy of Machine Learning
Applications | 35 pages, 12 figures | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The accuracy of machine learning systems is a widely studied research topic.
Established techniques such as cross-validation predict the accuracy on unseen
data of the classifier produced by applying a given learning method to a given
training data set. However, they do not predict whether incurring the cost of
obtaining more data and undergoing further training will lead to higher
accuracy. In this paper we investigate techniques for making such early
predictions. We note that when a machine learning algorithm is presented with a
training set the classifier produced, and hence its error, will depend on the
characteristics of the algorithm, on the training set's size, and also on its
specific composition. In particular we hypothesise that if a number of
classifiers are produced, and their observed error is decomposed into bias and
variance terms, then although these components may behave differently, their
behaviour may be predictable.
We test our hypothesis by building models that, given a measurement taken
from the classifier created from a limited number of samples, predict the
values that would be measured from the classifier produced when the full data
set is presented. We create separate models for bias, variance and total error.
Our models are built from the results of applying ten different machine
learning algorithms to a range of data sets, and tested with "unseen"
algorithms and datasets. We analyse the results for various numbers of initial
training samples, and total dataset sizes. Results show that our predictions
are very highly correlated with the values observed after undertaking the extra
training. Finally we consider the more complex case where an ensemble of
heterogeneous classifiers is trained, and show how we can accurately estimate
an upper bound on the accuracy achievable after further training.
| [
{
"version": "v1",
"created": "Wed, 5 Dec 2012 17:07:39 GMT"
}
] | 2012-12-06T00:00:00 | [
[
"Smith",
"J. E.",
""
],
[
"Caleb-Solly",
"P.",
""
],
[
"Tahir",
"M. A.",
""
],
[
"Sannen",
"D.",
""
],
[
"van-Brussel",
"H.",
""
]
] | TITLE: Making Early Predictions of the Accuracy of Machine Learning
Applications
ABSTRACT: The accuracy of machine learning systems is a widely studied research topic.
Established techniques such as cross-validation predict the accuracy on unseen
data of the classifier produced by applying a given learning method to a given
training data set. However, they do not predict whether incurring the cost of
obtaining more data and undergoing further training will lead to higher
accuracy. In this paper we investigate techniques for making such early
predictions. We note that when a machine learning algorithm is presented with a
training set the classifier produced, and hence its error, will depend on the
characteristics of the algorithm, on the training set's size, and also on its
specific composition. In particular we hypothesise that if a number of
classifiers are produced, and their observed error is decomposed into bias and
variance terms, then although these components may behave differently, their
behaviour may be predictable.
We test our hypothesis by building models that, given a measurement taken
from the classifier created from a limited number of samples, predict the
values that would be measured from the classifier produced when the full data
set is presented. We create separate models for bias, variance and total error.
Our models are built from the results of applying ten different machine
learning algorithms to a range of data sets, and tested with "unseen"
algorithms and datasets. We analyse the results for various numbers of initial
training samples, and total dataset sizes. Results show that our predictions
are very highly correlated with the values observed after undertaking the extra
training. Finally we consider the more complex case where an ensemble of
heterogeneous classifiers is trained, and show how we can accurately estimate
an upper bound on the accuracy achievable after further training.
|
1212.1131 | Lior Rokach | Gilad Katz, Guy Shani, Bracha Shapira, Lior Rokach | Using Wikipedia to Boost SVD Recommender Systems | null | null | null | null | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Singular Value Decomposition (SVD) has been used successfully in recent years
in the area of recommender systems. In this paper we present how this model can
be extended to consider both user ratings and information from Wikipedia. By
mapping items to Wikipedia pages and quantifying their similarity, we are able
to use this information in order to improve recommendation accuracy, especially
when the sparsity is high. Another advantage of the proposed approach is the
fact that it can be easily integrated into any other SVD implementation,
regardless of additional parameters that may have been added to it. Preliminary
experimental results on the MovieLens dataset are encouraging.
| [
{
"version": "v1",
"created": "Wed, 5 Dec 2012 19:03:39 GMT"
}
] | 2012-12-06T00:00:00 | [
[
"Katz",
"Gilad",
""
],
[
"Shani",
"Guy",
""
],
[
"Shapira",
"Bracha",
""
],
[
"Rokach",
"Lior",
""
]
] | TITLE: Using Wikipedia to Boost SVD Recommender Systems
ABSTRACT: Singular Value Decomposition (SVD) has been used successfully in recent years
in the area of recommender systems. In this paper we present how this model can
be extended to consider both user ratings and information from Wikipedia. By
mapping items to Wikipedia pages and quantifying their similarity, we are able
to use this information in order to improve recommendation accuracy, especially
when the sparsity is high. Another advantage of the proposed approach is the
fact that it can be easily integrated into any other SVD implementation,
regardless of additional parameters that may have been added to it. Preliminary
experimental results on the MovieLens dataset are encouraging.
|
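The record above extends an SVD-style recommender. Below is a generic sketch of a biased matrix factorization trained with SGD on (user, item, rating) triples, the kind of baseline such work typically builds on; the Wikipedia-based similarity term described in the record is not implemented here, and the hyperparameters and toy ratings are assumptions.

```python
import numpy as np

def train_svd(ratings, n_users, n_items, k=20, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Biased matrix factorization (Koren-style) fitted with SGD."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
    bu, bi = np.zeros(n_users), np.zeros(n_items)
    mu = np.mean([r for _, _, r in ratings])      # global mean rating
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - (mu + bu[u] + bi[i] + P[u] @ Q[i])
            bu[u] += lr * (e - reg * bu[u])
            bi[i] += lr * (e - reg * bi[i])
            P[u], Q[i] = (P[u] + lr * (e * Q[i] - reg * P[u]),
                          Q[i] + lr * (e * P[u] - reg * Q[i]))
    return mu, bu, bi, P, Q

ratings = [(0, 0, 4.0), (0, 1, 3.0), (1, 0, 5.0), (1, 2, 2.0)]   # toy triples
mu, bu, bi, P, Q = train_svd(ratings, n_users=2, n_items=3)
print(mu + bu[0] + bi[2] + P[0] @ Q[2])   # predicted rating of user 0 for item 2
```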
1212.0763 | Modou Gueye M. | Modou Gueye, Talel Abdessalem, Hubert Naacke | Dynamic recommender system : using cluster-based biases to improve the
accuracy of the predictions | 31 pages, 7 figures | null | null | null | cs.LG cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is now widely accepted that matrix factorization models provide high-quality
rating prediction in recommender systems. However, a major drawback of matrix
factorization is its static nature, which results in a progressive decline in
the accuracy of the predictions after each factorization. This is because
newly obtained ratings are not taken into account until a new factorization is
computed, which cannot be done very often because of the high
cost of matrix factorization.
In this paper, aiming at improving the accuracy of recommender systems, we
propose a cluster-based matrix factorization technique that enables online
integration of new ratings. Thus, we significantly enhance the obtained
predictions between two matrix factorizations. We use finer-grained user biases
by clustering similar items into groups, and allocating in these groups a bias
to each user. The experiments we did on large datasets demonstrated the
efficiency of our approach.
| [
{
"version": "v1",
"created": "Mon, 3 Dec 2012 13:00:27 GMT"
}
] | 2012-12-05T00:00:00 | [
[
"Gueye",
"Modou",
""
],
[
"Abdessalem",
"Talel",
""
],
[
"Naacke",
"Hubert",
""
]
] | TITLE: Dynamic recommender system : using cluster-based biases to improve the
accuracy of the predictions
ABSTRACT: It is now widely accepted that matrix factorization models provide high-quality
rating prediction in recommender systems. However, a major drawback of matrix
factorization is its static nature, which results in a progressive decline in
the accuracy of the predictions after each factorization. This is because
newly obtained ratings are not taken into account until a new factorization is
computed, which cannot be done very often because of the high
cost of matrix factorization.
In this paper, aiming at improving the accuracy of recommender systems, we
propose a cluster-based matrix factorization technique that enables online
integration of new ratings. Thus, we significantly enhance the obtained
predictions between two matrix factorizations. We use finer-grained user biases
by clustering similar items into groups, and allocating in these groups a bias
to each user. The experiments we did on large datasets demonstrated the
efficiency of our approach.
|
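The record above replaces a single per-user bias with finer-grained biases tied to item clusters. The sketch below shows what such a prediction rule can look like; the factor matrices, cluster assignment and all numbers are hypothetical, and the paper's online updating of the biases between factorizations is not shown.

```python
import numpy as np

def predict(u, i, mu, bu_c, bi, P, Q, item_cluster):
    """Rating prediction with a per-(user, item-cluster) bias: bu_c[u, c] is
    user u's bias inside item cluster c, instead of one global per-user bias."""
    c = item_cluster[i]
    return mu + bu_c[u, c] + bi[i] + P[u] @ Q[i]

# Toy setup: 2 users, 4 items grouped into 2 clusters, rank-3 factors (all hypothetical).
rng = np.random.default_rng(0)
mu, bi = 3.5, 0.1 * rng.standard_normal(4)
bu_c = 0.1 * rng.standard_normal((2, 2))          # user x item-cluster biases
P, Q = rng.standard_normal((2, 3)), rng.standard_normal((4, 3))
item_cluster = np.array([0, 0, 1, 1])
print(predict(0, 2, mu, bu_c, bi, P, Q, item_cluster))
```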
1212.0030 | Andrew Habib | Osama Khalil, Andrew Habib | Viewpoint Invariant Object Detector | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | Object Detection is the task of identifying the existence of an object class
instance and locating it within an image. Difficulties in handling high
intra-class variations constitute major obstacles to achieving high performance
on standard benchmark datasets (scale, viewpoint, lighting conditions and
orientation variations provide good examples). The suggested model aims at
providing more robustness to detecting objects suffering severe distortion due
to < 60{\deg} viewpoint changes. In addition, several model computational
bottlenecks have been resolved leading to a significant increase in the model
performance (speed and space) without compromising the resulting accuracy.
Finally, we produced two illustrative applications showing the potential of the
object detection technology being deployed in real life applications; namely
content-based image search and content-based video search.
| [
{
"version": "v1",
"created": "Fri, 30 Nov 2012 22:35:19 GMT"
}
] | 2012-12-04T00:00:00 | [
[
"Khalil",
"Osama",
""
],
[
"Habib",
"Andrew",
""
]
] | TITLE: Viewpoint Invariant Object Detector
ABSTRACT: Object Detection is the task of identifying the existence of an object class
instance and locating it within an image. Difficulties in handling high
intra-class variations constitute major obstacles to achieving high performance
on standard benchmark datasets (scale, viewpoint, lighting conditions and
orientation variations provide good examples). The suggested model aims at
providing more robustness to detecting objects suffering severe distortion due
to < 60{\deg} viewpoint changes. In addition, several model computational
bottlenecks have been resolved leading to a significant increase in the model
performance (speed and space) without compromising the resulting accuracy.
Finally, we produced two illustrative applications showing the potential of the
object detection technology being deployed in real life applications; namely
content-based image search and content-based video search.
|
1212.0087 | Nader Jelassi | Mohamed Nader Jelassi and Sadok Ben Yahia and Engelbert Mephu Nguifo | A scalable mining of frequent quadratic concepts in d-folksonomies | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Folksonomy mining is attracting the interest of the web 2.0 community since it
represents the core data of social resource sharing systems. However, a
scrutiny of the related works on mining folksonomies reveals that the time
stamp dimension has not been considered. For example, the wealth of works
dedicated to mining tri-concepts from folksonomies did not take the time
dimension into account. In this paper, we consider a folksonomy commonly
composed of triples <users, tags, resources> and we consider time as a new
dimension. We motivate our approach by highlighting the battery
of potential applications. Then, we present the foundations for mining
quadri-concepts, provide a formal definition of the problem and introduce a new
efficient algorithm, called QUADRICONS for its solution to allow for mining
folksonomies in time, i.e., d-folksonomies. We also introduce a new closure
operator that splits the induced search space into equivalence classes whose
smallest elements are the quadri-minimal generators. Experiments carried out on
large-scale real-world datasets highlight the good performance of our algorithm.
| [
{
"version": "v1",
"created": "Sat, 1 Dec 2012 09:16:35 GMT"
}
] | 2012-12-04T00:00:00 | [
[
"Jelassi",
"Mohamed Nader",
""
],
[
"Yahia",
"Sadok Ben",
""
],
[
"Nguifo",
"Engelbert Mephu",
""
]
] | TITLE: A scalable mining of frequent quadratic concepts in d-folksonomies
ABSTRACT: Folksonomy mining is attracting the interest of the web 2.0 community since it
represents the core data of social resource sharing systems. However, a
scrutiny of the related works on mining folksonomies reveals that the time
stamp dimension has not been considered. For example, the wealth of works
dedicated to mining tri-concepts from folksonomies did not take the time
dimension into account. In this paper, we consider a folksonomy commonly
composed of triples <users, tags, resources> and we consider time as a new
dimension. We motivate our approach by highlighting the battery
of potential applications. Then, we present the foundations for mining
quadri-concepts, provide a formal definition of the problem and introduce a new
efficient algorithm, called QUADRICONS for its solution to allow for mining
folksonomies in time, i.e., d-folksonomies. We also introduce a new closure
operator that splits the induced search space into equivalence classes whose
smallest elements are the quadri-minimal generators. Experiments carried out on
large-scale real-world datasets highlight the good performance of our algorithm.
|
1212.0141 | Yiye Ruan | Hemant Purohit and Yiye Ruan and David Fuhry and Srinivasan
Parthasarathy and Amit Sheth | On the Role of Social Identity and Cohesion in Characterizing Online
Social Communities | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two prevailing theories for explaining social group or community structure
are cohesion and identity. The social cohesion approach posits that social
groups arise out of an aggregation of individuals that have mutual
interpersonal attraction as they share common characteristics. These
characteristics can range from common interests to kinship ties and from social
values to ethnic backgrounds. In contrast, the social identity approach posits
that an individual is likely to join a group based on an intrinsic
self-evaluation at a cognitive or perceptual level. In other words, group
members typically share an awareness of a common category membership.
In this work we seek to understand the role of these two contrasting theories
in explaining the behavior and stability of social communities in Twitter. A
specific focal point of our work is to understand the role of these theories in
disparate contexts ranging from disaster response to socio-political activism.
We extract social identity and social cohesion features-of-interest for large
scale datasets of five real-world events and examine the effectiveness of such
features in capturing behavioral characteristics and the stability of groups.
We also propose a novel measure of social group sustainability based on the
divergence in group discussion. Our main findings are: 1) Sharing of social
identities (especially physical location) among group members has a positive
impact on group sustainability, 2) Structural cohesion (represented by high
group density and low average shortest path length) is a strong indicator of
group sustainability, and 3) Event characteristics play a role in shaping group
sustainability, as social groups in transient events behave differently from
groups in events that last longer.
| [
{
"version": "v1",
"created": "Sat, 1 Dec 2012 18:03:33 GMT"
}
] | 2012-12-04T00:00:00 | [
[
"Purohit",
"Hemant",
""
],
[
"Ruan",
"Yiye",
""
],
[
"Fuhry",
"David",
""
],
[
"Parthasarathy",
"Srinivasan",
""
],
[
"Sheth",
"Amit",
""
]
] | TITLE: On the Role of Social Identity and Cohesion in Characterizing Online
Social Communities
ABSTRACT: Two prevailing theories for explaining social group or community structure
are cohesion and identity. The social cohesion approach posits that social
groups arise out of an aggregation of individuals that have mutual
interpersonal attraction as they share common characteristics. These
characteristics can range from common interests to kinship ties and from social
values to ethnic backgrounds. In contrast, the social identity approach posits
that an individual is likely to join a group based on an intrinsic
self-evaluation at a cognitive or perceptual level. In other words, group
members typically share an awareness of a common category membership.
In this work we seek to understand the role of these two contrasting theories
in explaining the behavior and stability of social communities in Twitter. A
specific focal point of our work is to understand the role of these theories in
disparate contexts ranging from disaster response to socio-political activism.
We extract social identity and social cohesion features-of-interest for large
scale datasets of five real-world events and examine the effectiveness of such
features in capturing behavioral characteristics and the stability of groups.
We also propose a novel measure of social group sustainability based on the
divergence in group discussion. Our main findings are: 1) Sharing of social
identities (especially physical location) among group members has a positive
impact on group sustainability, 2) Structural cohesion (represented by high
group density and low average shortest path length) is a strong indicator of
group sustainability, and 3) Event characteristics play a role in shaping group
sustainability, as social groups in transient events behave differently from
groups in events that last longer.
|
1212.0146 | Yiye Ruan | Yiye Ruan and David Fuhry and Srinivasan Parthasarathy | Efficient Community Detection in Large Networks using Content and Links | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we discuss a very simple approach of combining content and link
information in graph structures for the purpose of community discovery, a
fundamental task in network analysis. Our approach hinges on the basic
intuition that many networks contain noise in the link structure and that
content information can help strengthen the community signal. This enables one
to eliminate the impact of noise (false positives and false negatives), which
is particularly prevalent in online social networks and Web-scale information
networks.
Specifically we introduce a measure of signal strength between two nodes in
the network by fusing their link strength with content similarity. Link
strength is estimated based on whether the link is likely (with high
probability) to reside within a community. Content similarity is estimated
through cosine similarity or Jaccard coefficient. We discuss a simple mechanism
for fusing content and link similarity. We then present a biased edge sampling
procedure which retains edges that are locally relevant for each graph node.
The resulting backbone graph can be clustered using standard community
discovery algorithms such as Metis and Markov clustering.
Through extensive experiments on multiple real-world datasets (Flickr,
Wikipedia and CiteSeer) with varying sizes and characteristics, we demonstrate
the effectiveness and efficiency of our methods over state-of-the-art learning
and mining approaches several of which also attempt to combine link and content
analysis for the purposes of community discovery. Specifically we always find a
qualitative benefit when combining content with link analysis. Additionally our
biased graph sampling approach realizes a quantitative benefit in that it is
typically several orders of magnitude faster than competing approaches.
| [
{
"version": "v1",
"created": "Sat, 1 Dec 2012 18:41:34 GMT"
}
] | 2012-12-04T00:00:00 | [
[
"Ruan",
"Yiye",
""
],
[
"Fuhry",
"David",
""
],
[
"Parthasarathy",
"Srinivasan",
""
]
] | TITLE: Efficient Community Detection in Large Networks using Content and Links
ABSTRACT: In this paper we discuss a very simple approach of combining content and link
information in graph structures for the purpose of community discovery, a
fundamental task in network analysis. Our approach hinges on the basic
intuition that many networks contain noise in the link structure and that
content information can help strengthen the community signal. This enables one
to eliminate the impact of noise (false positives and false negatives), which
is particularly prevalent in online social networks and Web-scale information
networks.
Specifically we introduce a measure of signal strength between two nodes in
the network by fusing their link strength with content similarity. Link
strength is estimated based on whether the link is likely (with high
probability) to reside within a community. Content similarity is estimated
through cosine similarity or Jaccard coefficient. We discuss a simple mechanism
for fusing content and link similarity. We then present a biased edge sampling
procedure which retains edges that are locally relevant for each graph node.
The resulting backbone graph can be clustered using standard community
discovery algorithms such as Metis and Markov clustering.
Through extensive experiments on multiple real-world datasets (Flickr,
Wikipedia and CiteSeer) with varying sizes and characteristics, we demonstrate
the effectiveness and efficiency of our methods over state-of-the-art learning
and mining approaches several of which also attempt to combine link and content
analysis for the purposes of community discovery. Specifically we always find a
qualitative benefit when combining content with link analysis. Additionally our
biased graph sampling approach realizes a quantitative benefit in that it is
typically several orders of magnitude faster than competing approaches.
|
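The record above fuses link structure with content similarity before clustering. A minimal sketch of one such fusion, weighting each edge by a convex combination of a unit link weight and the cosine similarity of the endpoints' feature vectors, is given below; the alpha parameter, feature vectors and edge list are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fused_edge_weights(edges, features, alpha=0.5):
    """Weight each edge by alpha * link weight + (1 - alpha) * content similarity;
    low-weight edges can then be dropped before running any standard community
    detection algorithm (e.g. Metis or Markov clustering) on the backbone graph."""
    return {(u, v): alpha * 1.0 + (1 - alpha) * cosine(features[u], features[v])
            for u, v in edges}

features = {0: np.array([1.0, 0.0, 1.0]),
            1: np.array([1.0, 0.2, 0.9]),
            2: np.array([0.0, 1.0, 0.0])}
edges = [(0, 1), (1, 2)]
print(fused_edge_weights(edges, features))
```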
1212.0317 | M HM Krishna Prasad Dr | B. Adinarayana Reddy, O. Srinivasa Rao and M. H. M. Krishna Prasad | An Improved UP-Growth High Utility Itemset Mining | (0975 8887) | International Journal of Computer Applications Volume 58, No.2,
2012, 25-28 | 10.5120/9255-3424 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient discovery of frequent itemsets in large datasets is a crucial task
of data mining. In recent years, several approaches have been proposed for
generating high utility patterns, but they raise the problem of producing a large
number of candidate itemsets for high utility itemsets, which probably degrades
mining performance in terms of speed and space. The recently proposed compact tree
structure, viz., the UP-Tree, maintains the information of transactions and
itemsets, facilitates mining and avoids scanning the original database
repeatedly. In this paper, the UP-Tree (Utility Pattern Tree) is adopted,
which scans the database only twice to obtain candidate items and manage them in an
efficient data structure. Applying the UP-Tree to UP-Growth takes more
execution time for Phase II. Hence, this paper presents a modified algorithm
aiming to reduce the execution time by effectively identifying high utility
itemsets.
| [
{
"version": "v1",
"created": "Mon, 3 Dec 2012 08:50:50 GMT"
}
] | 2012-12-04T00:00:00 | [
[
"Reddy",
"B. Adinarayana",
""
],
[
"Rao",
"O. Srinivasa",
""
],
[
"Prasad",
"M. H. M. Krishna",
""
]
] | TITLE: An Improved UP-Growth High Utility Itemset Mining
ABSTRACT: Efficient discovery of frequent itemsets in large datasets is a crucial task
of data mining. In recent years, several approaches have been proposed for
generating high utility patterns, but they raise the problem of producing a large
number of candidate itemsets for high utility itemsets, which probably degrades
mining performance in terms of speed and space. The recently proposed compact tree
structure, viz., the UP-Tree, maintains the information of transactions and
itemsets, facilitates mining and avoids scanning the original database
repeatedly. In this paper, the UP-Tree (Utility Pattern Tree) is adopted,
which scans the database only twice to obtain candidate items and manage them in an
efficient data structure. Applying the UP-Tree to UP-Growth takes more
execution time for Phase II. Hence, this paper presents a modified algorithm
aiming to reduce the execution time by effectively identifying high utility
itemsets.
|
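The record above concerns high-utility itemset mining with a UP-Tree. Before any tree is built, such methods typically compute transaction utilities and the transaction-weighted utility (TWU) of each item in order to prune unpromising items; the toy transactions, unit profits and threshold below are invented for illustration only.

```python
# Transactions: item -> purchased quantity; external utility = per-unit profit.
transactions = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"b": 2, "c": 1}]
unit_profit = {"a": 5, "b": 2, "c": 1}

def transaction_utility(t):
    return sum(q * unit_profit[i] for i, q in t.items())

# Transaction-weighted utility: sum of utilities of transactions containing the item.
twu = {}
for t in transactions:
    tu = transaction_utility(t)
    for i in t:
        twu[i] = twu.get(i, 0) + tu

min_util = 15
promising = {i for i, u in twu.items() if u >= min_util}  # items kept before tree building
print(twu, promising)
```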
1212.0402 | Khurram Soomro | Khurram Soomro, Amir Roshan Zamir and Mubarak Shah | UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild | null | null | null | CRCV-TR-12-01 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce UCF101 which is currently the largest dataset of human actions.
It consists of 101 action classes, over 13k clips and 27 hours of video data.
The database consists of realistic user uploaded videos containing camera
motion and cluttered background. Additionally, we provide baseline action
recognition results on this new dataset using standard bag of words approach
with overall performance of 44.5%. To the best of our knowledge, UCF101 is
currently the most challenging dataset of actions due to its large number of
classes, large number of clips and also unconstrained nature of such clips.
| [
{
"version": "v1",
"created": "Mon, 3 Dec 2012 14:45:31 GMT"
}
] | 2012-12-04T00:00:00 | [
[
"Soomro",
"Khurram",
""
],
[
"Zamir",
"Amir Roshan",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild
ABSTRACT: We introduce UCF101 which is currently the largest dataset of human actions.
It consists of 101 action classes, over 13k clips and 27 hours of video data.
The database consists of realistic user uploaded videos containing camera
motion and cluttered background. Additionally, we provide baseline action
recognition results on this new dataset using standard bag of words approach
with overall performance of 44.5%. To the best of our knowledge, UCF101 is
currently the most challenging dataset of actions due to its large number of
classes, large number of clips and also unconstrained nature of such clips.
|
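The record above reports a bag-of-words baseline for action recognition. The sketch below shows only the quantization step of such a pipeline: local descriptors are assigned to their nearest codeword and accumulated into a normalized histogram that would then be fed to a classifier; the codebook, descriptor dimensionality and random data are assumptions (a real pipeline would use k-means centers of HOG/HOF-style features).

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Quantize local descriptors (n x d) against a visual codebook (k x d) and
    return an L1-normalized bag-of-words histogram for one clip."""
    # Squared Euclidean distance from every descriptor to every codeword.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.standard_normal((50, 64))        # stand-in for k-means cluster centers
clip_descriptors = rng.standard_normal((500, 64))
print(bow_histogram(clip_descriptors, codebook).shape)  # (50,) feature fed to an SVM
```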
1211.3375 | Stephan Seufert | Stephan Seufert, Avishek Anand, Srikanta Bedathur, Gerhard Weikum | High-Performance Reachability Query Processing under Index Size
Restrictions | 30 pages | null | null | null | cs.DB cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a scalable and highly efficient index structure for
the reachability problem over graphs. We build on the well-known node interval
labeling scheme where the set of vertices reachable from a particular node is
compactly encoded as a collection of node identifier ranges. We impose an
explicit bound on the size of the index and flexibly assign approximate
reachability ranges to nodes of the graph such that the number of index probes
to answer a query is minimized. The resulting tunable index structure generates
a better range labeling if the space budget is increased, thus providing a
direct control over the trade off between index size and the query processing
performance. By using a fast recursive querying method in conjunction with our
index structure, we show that in practice, reachability queries can be answered
in the order of microseconds on an off-the-shelf computer - even for the case
of massive-scale real world graphs. Our claims are supported by an extensive
set of experimental results using a multitude of benchmark and real-world
web-scale graph datasets.
| [
{
"version": "v1",
"created": "Wed, 14 Nov 2012 18:28:28 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Nov 2012 16:06:19 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Nov 2012 14:13:28 GMT"
},
{
"version": "v4",
"created": "Wed, 28 Nov 2012 09:40:31 GMT"
},
{
"version": "v5",
"created": "Thu, 29 Nov 2012 21:28:22 GMT"
}
] | 2012-12-03T00:00:00 | [
[
"Seufert",
"Stephan",
""
],
[
"Anand",
"Avishek",
""
],
[
"Bedathur",
"Srikanta",
""
],
[
"Weikum",
"Gerhard",
""
]
] | TITLE: High-Performance Reachability Query Processing under Index Size
Restrictions
ABSTRACT: In this paper, we propose a scalable and highly efficient index structure for
the reachability problem over graphs. We build on the well-known node interval
labeling scheme where the set of vertices reachable from a particular node is
compactly encoded as a collection of node identifier ranges. We impose an
explicit bound on the size of the index and flexibly assign approximate
reachability ranges to nodes of the graph such that the number of index probes
to answer a query is minimized. The resulting tunable index structure generates
a better range labeling if the space budget is increased, thus providing a
direct control over the trade off between index size and the query processing
performance. By using a fast recursive querying method in conjunction with our
index structure, we show that in practice, reachability queries can be answered
in the order of microseconds on an off-the-shelf computer - even for the case
of massive-scale real world graphs. Our claims are supported by an extensive
set of experimental results using a multitude of benchmark and real-world
web-scale graph datasets.
|
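The record above builds on node interval labeling for reachability. The sketch below shows only the basic idea on a spanning tree: a post-order numbering with subtree ranges answers tree reachability by interval containment. The paper's contribution (approximate ranges under an index-size budget for general graphs, plus recursive querying) is not reproduced here, and the toy tree is an assumption.

```python
def interval_label(tree, root):
    """Post-order interval labeling: u reaches v in the tree
    iff low[u] <= post[v] <= post[u]."""
    post, low, counter = {}, {}, [0]
    def dfs(u):
        lo = counter[0]
        for v in tree.get(u, []):
            dfs(v)
        post[u] = counter[0]; counter[0] += 1
        low[u] = lo
    dfs(root)
    return low, post

def reaches(u, v, low, post):
    return low[u] <= post[v] <= post[u]

tree = {0: [1, 2], 1: [3, 4], 2: [5]}   # adjacency: node -> children
low, post = interval_label(tree, 0)
print(reaches(0, 5, low, post), reaches(1, 5, low, post))  # True False
```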
1211.6851 | Chiheb-Eddine Ben n'cir C.B.N'cir | Chiheb-Eddine Ben N'Cir and Nadia Essoussi | Classification Recouvrante Bas\'ee sur les M\'ethodes \`a Noyau | Les 43\`emes Journ\'ees de Statistique | Les 43\`emes Journ\'ees de Statistique 2011 | null | null | cs.LG stat.CO stat.ME stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Overlapping clustering problem is an important learning issue in which
clusters are not mutually exclusive and each object may belong simultaneously
to several clusters. This paper presents a kernel-based method that produces
overlapping clusters in a high-dimensional feature space, using Mercer kernel
techniques to improve the separability of input patterns. The proposed method,
called OKM-K (Overlapping $k$-means based kernel method), extends the OKM
(Overlapping $k$-means) method to produce overlapping schemes. Experiments are
performed on an overlapping dataset, and empirical results obtained with OKM-K outperform
results obtained with OKM.
| [
{
"version": "v1",
"created": "Thu, 29 Nov 2012 09:22:19 GMT"
}
] | 2012-11-30T00:00:00 | [
[
"N'Cir",
"Chiheb-Eddine Ben",
""
],
[
"Essoussi",
"Nadia",
""
]
] | TITLE: Classification Recouvrante Bas\'ee sur les M\'ethodes \`a Noyau
ABSTRACT: Overlapping clustering problem is an important learning issue in which
clusters are not mutually exclusive and each object may belong simultaneously
to several clusters. This paper presents a kernel-based method that produces
overlapping clusters in a high-dimensional feature space, using Mercer kernel
techniques to improve the separability of input patterns. The proposed method,
called OKM-K (Overlapping $k$-means based kernel method), extends the OKM
(Overlapping $k$-means) method to produce overlapping schemes. Experiments are
performed on an overlapping dataset, and empirical results obtained with OKM-K outperform
results obtained with OKM.
|
1211.6859 | Chiheb-Eddine Ben n'cir C.B.N'cir | Chiheb-Eddine Ben N'Cir and Nadia Essoussi and Patrice Bertrand | Overlapping clustering based on kernel similarity metric | Second Meeting on Statistics and Data Mining 2010 | Second Meeting on Statistics and Data Mining Second Meeting on
Statistics and Data Mining March 11-12, 2010 | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Producing overlapping schemes is a major issue in clustering. Recent proposed
overlapping methods rely on the search for an optimal covering and are based
on different metrics, such as Euclidean distance and I-divergence, used to
measure closeness between observations. In this paper, we propose the use of
another measure for overlapping clustering based on a kernel similarity metric.
We also estimate the number of overlapping clusters using the Gram matrix.
Experiments on both the Iris and EachMovie datasets show the correctness of the
estimated number of clusters and show that the measure based on the kernel
similarity metric improves the precision, recall and F-measure in overlapping
clustering.
| [
{
"version": "v1",
"created": "Thu, 29 Nov 2012 09:35:30 GMT"
}
] | 2012-11-30T00:00:00 | [
[
"N'Cir",
"Chiheb-Eddine Ben",
""
],
[
"Essoussi",
"Nadia",
""
],
[
"Bertrand",
"Patrice",
""
]
] | TITLE: Overlapping clustering based on kernel similarity metric
ABSTRACT: Producing overlapping schemes is a major issue in clustering. Recent proposed
overlapping methods rely on the search for an optimal covering and are based
on different metrics, such as Euclidean distance and I-divergence, used to
measure closeness between observations. In this paper, we propose the use of
another measure for overlapping clustering based on a kernel similarity metric.
We also estimate the number of overlapping clusters using the Gram matrix.
Experiments on both the Iris and EachMovie datasets show the correctness of the
estimated number of clusters and show that the measure based on the kernel
similarity metric improves the precision, recall and F-measure in overlapping
clustering.
|
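Both kernel-based clustering records above rely on distances induced by a Gram matrix. The sketch below computes an RBF Gram matrix and the implied squared feature-space distance ||phi(x_i) - phi(x_j)||^2 = K_ii + K_jj - 2 K_ij; the RBF kernel, gamma value and toy points are assumptions, not necessarily the kernels used in the papers.

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kernel_distance(K, i, j):
    """Squared distance between points i and j in the implicit feature space."""
    return K[i, i] + K[j, j] - 2 * K[i, j]

X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])
K = rbf_gram(X, gamma=0.5)
print(kernel_distance(K, 0, 1), kernel_distance(K, 0, 2))  # small vs. close to 2
```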
1211.2881 | Junyoung Chung | Junyoung Chung, Donghoon Lee, Youngjoo Seo, and Chang D. Yoo | Deep Attribute Networks | This paper has been withdrawn by the author due to a crucial
grammatical error | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Obtaining compact and discriminative features is one of the major challenges
in many of the real-world image classification tasks such as face verification
and object recognition. One possible approach is to represent the input image on
the basis of high-level features that carry semantic meaning which humans can
understand. In this paper, a model coined deep attribute network (DAN) is
proposed to address this issue. For an input image, the model outputs the
attributes of the input image without performing any classification. The
efficacy of the proposed model is evaluated on unconstrained face verification
and real-world object recognition tasks using the LFW and the a-PASCAL
datasets. We demonstrate the potential of deep learning for attribute-based
classification by showing comparable results with existing state-of-the-art
results. Once properly trained, the DAN is fast and does away with calculating
low-level features, which may be unreliable and computationally expensive.
| [
{
"version": "v1",
"created": "Tue, 13 Nov 2012 03:41:31 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Nov 2012 11:30:46 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Nov 2012 08:39:03 GMT"
}
] | 2012-11-29T00:00:00 | [
[
"Chung",
"Junyoung",
""
],
[
"Lee",
"Donghoon",
""
],
[
"Seo",
"Youngjoo",
""
],
[
"Yoo",
"Chang D.",
""
]
] | TITLE: Deep Attribute Networks
ABSTRACT: Obtaining compact and discriminative features is one of the major challenges
in many of the real-world image classification tasks such as face verification
and object recognition. One possible approach is to represent the input image on
the basis of high-level features that carry semantic meaning which humans can
understand. In this paper, a model coined deep attribute network (DAN) is
proposed to address this issue. For an input image, the model outputs the
attributes of the input image without performing any classification. The
efficacy of the proposed model is evaluated on unconstrained face verification
and real-world object recognition tasks using the LFW and the a-PASCAL
datasets. We demonstrate the potential of deep learning for attribute-based
classification by showing comparable results with existing state-of-the-art
results. Once properly trained, the DAN is fast and does away with calculating
low-level features, which may be unreliable and computationally expensive.
|
1208.3665 | Christian Riess | Vincent Christlein, Christian Riess, Johannes Jordan, Corinna Riess
and Elli Angelopoulou | An Evaluation of Popular Copy-Move Forgery Detection Approaches | Main paper: 14 pages, supplemental material: 12 pages, main paper
appeared in IEEE Transaction on Information Forensics and Security | IEEE Transactions on Information Forensics and Security, volume 7,
number 6, 2012, pp. 1841-1854 | 10.1109/TIFS.2012.2218597 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A copy-move forgery is created by copying and pasting content within the same
image, and potentially post-processing it. In recent years, the detection of
copy-move forgeries has become one of the most actively researched topics in
blind image forensics. A considerable number of different algorithms have been
proposed focusing on different types of postprocessed copies. In this paper, we
aim to answer which copy-move forgery detection algorithms and processing steps
(e.g., matching, filtering, outlier detection, affine transformation
estimation) perform best in various postprocessing scenarios. The focus of our
analysis is to evaluate the performance of previously proposed feature sets. We
achieve this by casting existing algorithms in a common pipeline. In this
paper, we examined the 15 most prominent feature sets. We analyzed the
detection performance on a per-image basis and on a per-pixel basis. We created
a challenging real-world copy-move dataset, and a software framework for
systematic image manipulation. Experiments show that the keypoint-based
features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and
Zernike features perform very well. These feature sets exhibit the best
robustness against various noise sources and downsampling, while reliably
identifying the copied regions.
| [
{
"version": "v1",
"created": "Fri, 17 Aug 2012 19:41:23 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Nov 2012 20:53:51 GMT"
}
] | 2012-11-27T00:00:00 | [
[
"Christlein",
"Vincent",
""
],
[
"Riess",
"Christian",
""
],
[
"Jordan",
"Johannes",
""
],
[
"Riess",
"Corinna",
""
],
[
"Angelopoulou",
"Elli",
""
]
] | TITLE: An Evaluation of Popular Copy-Move Forgery Detection Approaches
ABSTRACT: A copy-move forgery is created by copying and pasting content within the same
image, and potentially post-processing it. In recent years, the detection of
copy-move forgeries has become one of the most actively researched topics in
blind image forensics. A considerable number of different algorithms have been
proposed focusing on different types of postprocessed copies. In this paper, we
aim to answer which copy-move forgery detection algorithms and processing steps
(e.g., matching, filtering, outlier detection, affine transformation
estimation) perform best in various postprocessing scenarios. The focus of our
analysis is to evaluate the performance of previously proposed feature sets. We
achieve this by casting existing algorithms in a common pipeline. In this
paper, we examined the 15 most prominent feature sets. We analyzed the
detection performance on a per-image basis and on a per-pixel basis. We created
a challenging real-world copy-move dataset, and a software framework for
systematic image manipulation. Experiments show that the keypoint-based
features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and
Zernike features perform very well. These feature sets exhibit the best
robustness against various noise sources and downsampling, while reliably
identifying the copied regions.
|
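The record above benchmarks copy-move detectors that share a common block-matching pipeline. The toy sketch below uses raw mean-removed pixel blocks sorted lexicographically and reports identical neighbours; real detectors substitute DCT/PCA/Zernike or SIFT/SURF features, enforce a minimum spatial offset and tolerate post-processing, none of which is modeled here. All sizes and the synthetic image are assumptions.

```python
import numpy as np

def block_matches(img, block=8, max_pairs=5):
    """Naive block-based copy-move search: extract overlapping blocks, sort their
    mean-removed feature vectors lexicographically and report identical neighbours."""
    h, w = img.shape
    feats, coords = [], []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            b = img[y:y + block, x:x + block].astype(float)
            feats.append((b - b.mean()).ravel())
            coords.append((y, x))
    feats = np.array(feats)
    order = np.lexsort(feats.T[::-1])          # lexicographic sort of feature rows
    pairs = []
    for a, b in zip(order[:-1], order[1:]):
        if np.allclose(feats[a], feats[b]):
            pairs.append((coords[a], coords[b]))
        if len(pairs) >= max_pairs:
            break
    return pairs

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))
img[20:28, 20:28] = img[2:10, 2:10]            # paste an exact copy of an 8x8 patch
print(block_matches(img))
```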
1211.5625 | Sriganesh Srihari Dr | Sriganesh Srihari, Hon Wai Leong | A survey of computational methods for protein complex prediction from
protein interaction networks | 27 pages, 5 figures, 4 tables | Srihari, S., Leong, HW., J Bioinform Comput Biol 11(2): 1230002,
2013 | 10.1142/S021972001230002X | null | cs.CE q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complexes of physically interacting proteins are one of the fundamental
functional units responsible for driving key biological mechanisms within the
cell. Their identification is therefore necessary not only to understand
complex formation but also the higher level organization of the cell. With the
advent of high-throughput techniques in molecular biology, a significant amount
of physical interaction data has been cataloged from organisms such as yeast,
which has in turn fueled computational approaches to systematically mine
complexes from the network of physical interactions among proteins (PPI
network). In this survey, we review, classify and evaluate some of the key
computational methods developed to date for the identification of protein
complexes from PPI networks. We present two insightful taxonomies that reflect
how these methods have evolved over the years towards improving automated
complex prediction. We also discuss some open challenges facing accurate
reconstruction of complexes, the crucial ones being the presence of a high proportion
of errors and noise in current high-throughput datasets and some key aspects
overlooked by current complex detection methods. We hope this review will not
only help to condense the history of computational complex detection for easy
reference, but also provide valuable insights to drive further research in this
area.
| [
{
"version": "v1",
"created": "Sat, 24 Nov 2012 00:30:33 GMT"
}
] | 2012-11-27T00:00:00 | [
[
"Srihari",
"Sriganesh",
""
],
[
"Leong",
"Hon Wai",
""
]
] | TITLE: A survey of computational methods for protein complex prediction from
protein interaction networks
ABSTRACT: Complexes of physically interacting proteins are one of the fundamental
functional units responsible for driving key biological mechanisms within the
cell. Their identification is therefore necessary not only to understand
complex formation but also the higher level organization of the cell. With the
advent of high-throughput techniques in molecular biology, a significant amount
of physical interaction data has been cataloged from organisms such as yeast,
which has in turn fueled computational approaches to systematically mine
complexes from the network of physical interactions among proteins (PPI
network). In this survey, we review, classify and evaluate some of the key
computational methods developed to date for the identification of protein
complexes from PPI networks. We present two insightful taxonomies that reflect
how these methods have evolved over the years towards improving automated
complex prediction. We also discuss some open challenges facing accurate
reconstruction of complexes, the crucial ones being the presence of a high proportion
of errors and noise in current high-throughput datasets and some key aspects
overlooked by current complex detection methods. We hope this review will not
only help to condense the history of computational complex detection for easy
reference, but also provide valuable insights to drive further research in this
area.
|