id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1208.3629 | Hang Zhou | Marc Lelarge and Hang Zhou | Sublinear-Time Algorithms for Monomer-Dimer Systems on Bounded Degree
Graphs | null | null | null | null | cs.DS cs.DM math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a graph $G$, let $Z(G,\lambda)$ be the partition function of the
monomer-dimer system defined by $\sum_k m_k(G)\lambda^k$, where $m_k(G)$ is the
number of matchings of size $k$ in $G$. We consider graphs of bounded degree
and develop a sublinear-time algorithm for estimating $\log Z(G,\lambda)$ at an
arbitrary value $\lambda>0$ within additive error $\epsilon n$ with high
probability. The query complexity of our algorithm does not depend on the size
of $G$ and is polynomial in $1/\epsilon$, and we also provide a lower bound
quadratic in $1/\epsilon$ for this problem. This is the first analysis of a
sublinear-time approximation algorithm for a $\#P$-complete problem. Our
approach is based on the correlation decay of the Gibbs distribution associated
with $Z(G,\lambda)$. We show that our algorithm approximates the probability
for a vertex to be covered by a matching, sampled according to this Gibbs
distribution, in a near-optimal sublinear time. We extend our results to
approximate the average size and the entropy of such a matching within an
additive error with high probability, where again the query complexity is
polynomial in $1/\epsilon$ and the lower bound is quadratic in $1/\epsilon$.
Our algorithms are simple to implement and of practical use when dealing with
massive datasets. Our results extend to other systems where the correlation
decay is known to hold as for the independent set problem up to the critical
activity.
| [
{
"version": "v1",
"created": "Fri, 17 Aug 2012 16:11:27 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Sep 2012 21:07:44 GMT"
},
{
"version": "v3",
"created": "Mon, 22 Apr 2013 21:01:49 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Jun 2013 10:58:30 GMT"
},
{
"version": "v5",
"created": "Wed, 4 Sep 2013 07:49:39 GMT"
}
] | 2013-09-05T00:00:00 | [
[
"Lelarge",
"Marc",
""
],
[
"Zhou",
"Hang",
""
]
] | TITLE: Sublinear-Time Algorithms for Monomer-Dimer Systems on Bounded Degree
Graphs
ABSTRACT: For a graph $G$, let $Z(G,\lambda)$ be the partition function of the
monomer-dimer system defined by $\sum_k m_k(G)\lambda^k$, where $m_k(G)$ is the
number of matchings of size $k$ in $G$. We consider graphs of bounded degree
and develop a sublinear-time algorithm for estimating $\log Z(G,\lambda)$ at an
arbitrary value $\lambda>0$ within additive error $\epsilon n$ with high
probability. The query complexity of our algorithm does not depend on the size
of $G$ and is polynomial in $1/\epsilon$, and we also provide a lower bound
quadratic in $1/\epsilon$ for this problem. This is the first analysis of a
sublinear-time approximation algorithm for a $\#P$-complete problem. Our
approach is based on the correlation decay of the Gibbs distribution associated
with $Z(G,\lambda)$. We show that our algorithm approximates the probability
for a vertex to be covered by a matching, sampled according to this Gibbs
distribution, in a near-optimal sublinear time. We extend our results to
approximate the average size and the entropy of such a matching within an
additive error with high probability, where again the query complexity is
polynomial in $1/\epsilon$ and the lower bound is quadratic in $1/\epsilon$.
Our algorithms are simple to implement and of practical use when dealing with
massive datasets. Our results extend to other systems where the correlation
decay is known to hold as for the independent set problem up to the critical
activity.
| no_new_dataset | 0.942188 |
1309.1009 | Suranjan Ganguly | Ayan Seal, Suranjan Ganguly, Debotosh Bhattacharjee, Mita Nasipuri,
Dipak Kumar Basu | A Comparative Study of Human thermal face recognition based on Haar
wavelet transform (HWT) and Local Binary Pattern (LBP) | 17 pages Computational Intelligence and Neuroscience 2012 | null | null | null | cs.CV | http://creativecommons.org/licenses/publicdomain/ | Thermal infra-red (IR) images capture changes in temperature distribution over
facial muscles and blood vessels. These temperature changes can be regarded as
texture features of the images. A comparative study of face recognition methods
working in the thermal spectrum is carried out in this paper. In this study,
two local-matching methods, based on the Haar wavelet transform and Local
Binary Patterns (LBP), are analyzed. The wavelet transform is a good tool for
analyzing multi-scale, multi-directional changes in texture. Local binary
patterns (LBP) are a type of feature used for classification in computer
vision. First, each human thermal IR face image is preprocessed and the face
region is cropped from the entire image. Second, two different approaches are
used to extract features from the cropped face region. In the first approach,
the training and test images are processed with the Haar wavelet transform,
and the LL-band sub-image and the average of the LH/HL/HH-band sub-images are
created for each face image. Then a total confidence matrix is formed for each
face image by taking a weighted sum of the corresponding pixel values of the
LL band and the average band. For LBP feature extraction, each face image in
the training and test datasets is divided into 161 sub-images, each of size
8x8 pixels. For each such sub-image, LBP features are extracted and
concatenated row-wise. PCA is performed separately on each individual feature
set for dimensionality reduction. Finally, two different classifiers are used
to classify the face images: a multi-layer feed-forward neural network and a
minimum-distance classifier. The experiments have been performed on a database
created in our own laboratory and on the Terravic Facial IR Database.
| [
{
"version": "v1",
"created": "Wed, 4 Sep 2013 12:41:48 GMT"
}
] | 2013-09-05T00:00:00 | [
[
"Seal",
"Ayan",
""
],
[
"Ganguly",
"Suranjan",
""
],
[
"Bhattacharjee",
"Debotosh",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Basu",
"Dipak Kumar",
""
]
] | TITLE: A Comparative Study of Human thermal face recognition based on Haar
wavelet transform (HWT) and Local Binary Pattern (LBP)
ABSTRACT: Thermal infra-red (IR) images capture changes in temperature distribution over
facial muscles and blood vessels. These temperature changes can be regarded as
texture features of the images. A comparative study of face recognition methods
working in the thermal spectrum is carried out in this paper. In this study,
two local-matching methods, based on the Haar wavelet transform and Local
Binary Patterns (LBP), are analyzed. The wavelet transform is a good tool for
analyzing multi-scale, multi-directional changes in texture. Local binary
patterns (LBP) are a type of feature used for classification in computer
vision. First, each human thermal IR face image is preprocessed and the face
region is cropped from the entire image. Second, two different approaches are
used to extract features from the cropped face region. In the first approach,
the training and test images are processed with the Haar wavelet transform,
and the LL-band sub-image and the average of the LH/HL/HH-band sub-images are
created for each face image. Then a total confidence matrix is formed for each
face image by taking a weighted sum of the corresponding pixel values of the
LL band and the average band. For LBP feature extraction, each face image in
the training and test datasets is divided into 161 sub-images, each of size
8x8 pixels. For each such sub-image, LBP features are extracted and
concatenated row-wise. PCA is performed separately on each individual feature
set for dimensionality reduction. Finally, two different classifiers are used
to classify the face images: a multi-layer feed-forward neural network and a
minimum-distance classifier. The experiments have been performed on a database
created in our own laboratory and on the Terravic Facial IR Database.
| no_new_dataset | 0.953405 |
1304.3754 | Jonathan Ullman | Karthekeyan Chandrasekaran, Justin Thaler, Jonathan Ullman, Andrew Wan | Faster Private Release of Marginals on Small Databases | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of answering \emph{$k$-way marginal} queries on a
database $D \in (\{0,1\}^d)^n$, while preserving differential privacy. The
answer to a $k$-way marginal query is the fraction of the database's records $x
\in \{0,1\}^d$ with a given value in each of a given set of up to $k$ columns.
Marginal queries enable a rich class of statistical analyses on a dataset, and
designing efficient algorithms for privately answering marginal queries has
been identified as an important open problem in private data analysis.
For any $k$, we give a differentially private online algorithm that runs in
time $$ \min\{\exp(d^{1-\Omega(1/\sqrt{k})}), \exp(d / \log^{.99} d)\} $$ per
query and answers any (possibly superpolynomially long and adaptively chosen)
sequence of $k$-way marginal queries up to error at most $\pm .01$ on every
query, provided $n \gtrsim d^{.51} $. To the best of our knowledge, this is the
first algorithm capable of privately answering marginal queries with a
non-trivial worst-case accuracy guarantee on a database of size $\poly(d, k)$
in time $\exp(o(d))$.
Our algorithms are a variant of the private multiplicative weights algorithm
(Hardt and Rothblum, FOCS '10), but using a different low-weight representation
of the database. We derive our low-weight representation using approximations
to the OR function by low-degree polynomials with coefficients of bounded
$L_1$-norm. We also prove a strong limitation on our approach that is of
independent approximation-theoretic interest. Specifically, we show that for
any $k = o(\log d)$, any polynomial with coefficients of $L_1$-norm $poly(d)$
that pointwise approximates the $d$-variate OR function on all inputs of
Hamming weight at most $k$ must have degree $d^{1-O(1/\sqrt{k})}$.
| [
{
"version": "v1",
"created": "Sat, 13 Apr 2013 00:37:17 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Sep 2013 00:41:55 GMT"
}
] | 2013-09-04T00:00:00 | [
[
"Chandrasekaran",
"Karthekeyan",
""
],
[
"Thaler",
"Justin",
""
],
[
"Ullman",
"Jonathan",
""
],
[
"Wan",
"Andrew",
""
]
] | TITLE: Faster Private Release of Marginals on Small Databases
ABSTRACT: We study the problem of answering \emph{$k$-way marginal} queries on a
database $D \in (\{0,1\}^d)^n$, while preserving differential privacy. The
answer to a $k$-way marginal query is the fraction of the database's records $x
\in \{0,1\}^d$ with a given value in each of a given set of up to $k$ columns.
Marginal queries enable a rich class of statistical analyses on a dataset, and
designing efficient algorithms for privately answering marginal queries has
been identified as an important open problem in private data analysis.
For any $k$, we give a differentially private online algorithm that runs in
time $$ \min\{\exp(d^{1-\Omega(1/\sqrt{k})}), \exp(d / \log^{.99} d)\} $$ per
query and answers any (possibly superpolynomially long and adaptively chosen)
sequence of $k$-way marginal queries up to error at most $\pm .01$ on every
query, provided $n \gtrsim d^{.51} $. To the best of our knowledge, this is the
first algorithm capable of privately answering marginal queries with a
non-trivial worst-case accuracy guarantee on a database of size $\poly(d, k)$
in time $\exp(o(d))$.
Our algorithms are a variant of the private multiplicative weights algorithm
(Hardt and Rothblum, FOCS '10), but using a different low-weight representation
of the database. We derive our low-weight representation using approximations
to the OR function by low-degree polynomials with coefficients of bounded
$L_1$-norm. We also prove a strong limitation on our approach that is of
independent approximation-theoretic interest. Specifically, we show that for
any $k = o(\log d)$, any polynomial with coefficients of $L_1$-norm $poly(d)$
that pointwise approximates the $d$-variate OR function on all inputs of
Hamming weight at most $k$ must have degree $d^{1-O(1/\sqrt{k})}$.
| no_new_dataset | 0.940161 |
1309.0309 | Xiaojiang Peng | Xiaojiang Peng, Qiang Peng, Yu Qiao, Junzhou Chen, Mehtab Afzal | A Study on Unsupervised Dictionary Learning and Feature Encoding for
Action Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many efforts have been devoted to developing alternative methods to traditional
vector quantization in the image domain, such as sparse coding and soft-assignment.
These approaches can be split into a dictionary learning phase and a feature
encoding phase which are often closely connected. In this paper, we investigate
the effects of these phases by separating them for video-based action
classification. We compare several dictionary learning methods and feature
encoding schemes through extensive experiments on KTH and HMDB51 datasets.
Experimental results indicate that sparse coding performs consistently better
than the other encoding methods on the large, complex dataset (i.e., HMDB51), and it
is robust to different dictionaries. For the small, simple dataset (i.e., KTH) with
less variation, however, all the encoding strategies perform competitively. In
addition, we note that the strength of sophisticated encoding approaches comes
not from their corresponding dictionaries but the encoding mechanisms, and we
can just use randomly selected exemplars as dictionaries for video-based action
classification.
| [
{
"version": "v1",
"created": "Mon, 2 Sep 2013 07:06:05 GMT"
}
] | 2013-09-03T00:00:00 | [
[
"Peng",
"Xiaojiang",
""
],
[
"Peng",
"Qiang",
""
],
[
"Qiao",
"Yu",
""
],
[
"Chen",
"Junzhou",
""
],
[
"Afzal",
"Mehtab",
""
]
] | TITLE: A Study on Unsupervised Dictionary Learning and Feature Encoding for
Action Classification
ABSTRACT: Many efforts have been devoted to developing alternative methods to traditional
vector quantization in the image domain, such as sparse coding and soft-assignment.
These approaches can be split into a dictionary learning phase and a feature
encoding phase which are often closely connected. In this paper, we investigate
the effects of these phases by separating them for video-based action
classification. We compare several dictionary learning methods and feature
encoding schemes through extensive experiments on KTH and HMDB51 datasets.
Experimental results indicate that sparse coding performs consistently better
than the other encoding methods on the large, complex dataset (i.e., HMDB51), and it
is robust to different dictionaries. For the small, simple dataset (i.e., KTH) with
less variation, however, all the encoding strategies perform competitively. In
addition, we note that the strength of sophisticated encoding approaches comes
not from their corresponding dictionaries but the encoding mechanisms, and we
can just use randomly selected exemplars as dictionaries for video-based action
classification.
| no_new_dataset | 0.949248 |
1309.0337 | Neil Houlsby | Neil Houlsby, Massimiliano Ciaramita | Scalable Probabilistic Entity-Topic Modeling | null | null | null | null | stat.ML cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an LDA approach to entity disambiguation. Each topic is associated
with a Wikipedia article and topics generate either content words or entity
mentions. Training such models is challenging because of the topic and
vocabulary size, both in the millions. We tackle these problems using a novel
distributed inference and representation framework based on a parallel Gibbs
sampler guided by the Wikipedia link graph, and pipelines of MapReduce allowing
fast and memory-frugal processing of large datasets. We report state-of-the-art
performance on a public dataset.
| [
{
"version": "v1",
"created": "Mon, 2 Sep 2013 09:34:50 GMT"
}
] | 2013-09-03T00:00:00 | [
[
"Houlsby",
"Neil",
""
],
[
"Ciaramita",
"Massimiliano",
""
]
] | TITLE: Scalable Probabilistic Entity-Topic Modeling
ABSTRACT: We present an LDA approach to entity disambiguation. Each topic is associated
with a Wikipedia article and topics generate either content words or entity
mentions. Training such models is challenging because of the topic and
vocabulary size, both in the millions. We tackle these problems using a novel
distributed inference and representation framework based on a parallel Gibbs
sampler guided by the Wikipedia link graph, and pipelines of MapReduce allowing
fast and memory-frugal processing of large datasets. We report state-of-the-art
performance on a public dataset.
| no_new_dataset | 0.949856 |
1012.3115 | Mark Tschopp | M.A. Tschopp, M.F. Horstemeyer, F. Gao, X. Sun, M. Khaleel | Energetic driving force for preferential binding of self-interstitial
atoms to Fe grain boundaries over vacancies | 4 pages, 4 figures | Scripta Materialia 64 (2011) 908-911 | 10.1016/j.scriptamat.2011.01.031 | null | cond-mat.mes-hall physics.atom-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular dynamics simulations of 50 Fe grain boundaries were used to
understand their interaction with vacancies and self-interstitial atoms at all
atomic positions within 20 Angstroms of the boundary, which is important for
designing radiation-resistant polycrystalline materials. Site-to-site variation
within the boundary of both vacancy and self-interstitial formation energies is
substantial, with the majority of sites having lower formation energies than in
the bulk. Comparing the vacancy and self-interstitial atom binding energies for
each site shows that there is an energetic driving force for interstitials to
preferentially bind to grain boundary sites over vacancies. Furthermore, these
results provide a valuable dataset for quantifying uncertainty bounds for
various grain boundary types at the nanoscale, which can be propagated to
higher scale simulations of microstructure evolution.
| [
{
"version": "v1",
"created": "Tue, 14 Dec 2010 18:09:42 GMT"
}
] | 2013-09-02T00:00:00 | [
[
"Tschopp",
"M. A.",
""
],
[
"Horstemeyer",
"M. F.",
""
],
[
"Gao",
"F.",
""
],
[
"Sun",
"X.",
""
],
[
"Khaleel",
"M.",
""
]
] | TITLE: Energetic driving force for preferential binding of self-interstitial
atoms to Fe grain boundaries over vacancies
ABSTRACT: Molecular dynamics simulations of 50 Fe grain boundaries were used to
understand their interaction with vacancies and self-interstitial atoms at all
atomic positions within 20 Angstroms of the boundary, which is important for
designing radiation-resistant polycrystalline materials. Site-to-site variation
within the boundary of both vacancy and self-interstitial formation energies is
substantial, with the majority of sites having lower formation energies than in
the bulk. Comparing the vacancy and self-interstitial atom binding energies for
each site shows that there is an energetic driving force for interstitials to
preferentially bind to grain boundary sites over vacancies. Furthermore, these
results provide a valuable dataset for quantifying uncertainty bounds for
various grain boundary types at the nanoscale, which can be propagated to
higher scale simulations of microstructure evolution.
| no_new_dataset | 0.952574 |
1308.6683 | Jerome Darmont | Chantola Kit (ERIC), Marouane Hachicha (ERIC), J\'er\^ome Darmont
(ERIC) | Benchmarking Summarizability Processing in XML Warehouses with Complex
Hierarchies | 15th International Workshop on Data Warehousing and OLAP (DOLAP
2012), Maui : United States (2012) | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Business Intelligence plays an important role in decision making. Based on
data warehouses and Online Analytical Processing, a business intelligence tool
can be used to analyze complex data. Still, summarizability issues in data
warehouses cause ineffective analyses that may become critical problems to
businesses. To settle this issue, many researchers have studied and proposed
various solutions, both in relational and XML data warehouses. However, they
find difficulty in evaluating the performance of their proposals since the
available benchmarks lack complex hierarchies. In order to contribute to
summarizability analysis, this paper proposes an extension to the XML warehouse
benchmark (XWeB) with complex hierarchies. The benchmark enables us to generate
XML data warehouses with scalable complex hierarchies as well as
summarizability processing. We experimentally demonstrated that complex
hierarchies can definitely be included into a benchmark dataset, and that our
benchmark is able to compare two alternative approaches dealing with
summarizability issues.
| [
{
"version": "v1",
"created": "Fri, 30 Aug 2013 09:02:02 GMT"
}
] | 2013-09-02T00:00:00 | [
[
"Kit",
"Chantola",
"",
"ERIC"
],
[
"Hachicha",
"Marouane",
"",
"ERIC"
],
[
"Darmont",
"Jérôme",
"",
"ERIC"
]
] | TITLE: Benchmarking Summarizability Processing in XML Warehouses with Complex
Hierarchies
ABSTRACT: Business Intelligence plays an important role in decision making. Based on
data warehouses and Online Analytical Processing, a business intelligence tool
can be used to analyze complex data. Still, summarizability issues in data
warehouses cause ineffective analyses that may become critical problems to
businesses. To settle this issue, many researchers have studied and proposed
various solutions, both in relational and XML data warehouses. However, they
find difficulty in evaluating the performance of their proposals since the
available benchmarks lack complex hierarchies. In order to contribute to
summarizability analysis, this paper proposes an extension to the XML warehouse
benchmark (XWeB) with complex hierarchies. The benchmark enables us to generate
XML data warehouses with scalable complex hierarchies as well as
summarizability processing. We experimentally demonstrated that complex
hierarchies can definitely be included into a benchmark dataset, and that our
benchmark is able to compare two alternative approaches dealing with
summarizability issues.
| new_dataset | 0.675283 |
1308.6721 | Puneet Kumar | Pierre-Yves Baudin (INRIA Saclay - Ile de France), Danny Goodman,
Puneet Kumar (INRIA Saclay - Ile de France, CVN), Noura Azzabou (MIRCEN,
UPMC), Pierre G. Carlier (UPMC), Nikos Paragios (INRIA Saclay - Ile de
France, MAS, LIGM, ENPC), M. Pawan Kumar (INRIA Saclay - Ile de France, CVN) | Discriminative Parameter Estimation for Random Walks Segmentation | Medical Image Computing and Computer Assisted Intervention (2013) | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Random Walks (RW) algorithm is one of the most efficient and easy-to-use
probabilistic segmentation methods. By combining contrast terms with prior
terms, it provides accurate segmentations of medical images in a fully
automated manner. However, one of the main drawbacks of using the RW algorithm
is that its parameters have to be hand-tuned. We propose a novel discriminative
learning framework that estimates the parameters using a training dataset. The
main challenge we face is that the training samples are not fully supervised.
Specifically, they provide a hard segmentation of the images, instead of a
probabilistic segmentation. We overcome this challenge by treating the optimal
probabilistic segmentation that is compatible with the given hard
segmentation as a latent variable. This allows us to employ the latent support
vector machine formulation for parameter estimation. We show that our approach
significantly outperforms the baseline methods on a challenging dataset
consisting of real clinical 3D MRI volumes of skeletal muscles.
| [
{
"version": "v1",
"created": "Fri, 30 Aug 2013 12:13:11 GMT"
}
] | 2013-09-02T00:00:00 | [
[
"Baudin",
"Pierre-Yves",
"",
"INRIA Saclay - Ile de France"
],
[
"Goodman",
"Danny",
"",
"INRIA Saclay - Ile de France, CVN"
],
[
"Kumar",
"Puneet",
"",
"INRIA Saclay - Ile de France, CVN"
],
[
"Azzabou",
"Noura",
"",
"MIRCEN,\n UPMC"
],
[
"Carlier",
"Pierre G.",
"",
"UPMC"
],
[
"Paragios",
"Nikos",
"",
"INRIA Saclay - Ile de\n France, MAS, LIGM, ENPC"
],
[
"Kumar",
"M. Pawan",
"",
"INRIA Saclay - Ile de France, CVN"
]
] | TITLE: Discriminative Parameter Estimation for Random Walks Segmentation
ABSTRACT: The Random Walks (RW) algorithm is one of the most efficient and easy-to-use
probabilistic segmentation methods. By combining contrast terms with prior
terms, it provides accurate segmentations of medical images in a fully
automated manner. However, one of the main drawbacks of using the RW algorithm
is that its parameters have to be hand-tuned. We propose a novel discriminative
learning framework that estimates the parameters using a training dataset. The
main challenge we face is that the training samples are not fully supervised.
Specifically, they provide a hard segmentation of the images, instead of a
probabilistic segmentation. We overcome this challenge by treating the optimal
probabilistic segmentation that is compatible with the given hard
segmentation as a latent variable. This allows us to employ the latent support
vector machine formulation for parameter estimation. We show that our approach
significantly outperforms the baseline methods on a challenging dataset
consisting of real clinical 3D MRI volumes of skeletal muscles.
| no_new_dataset | 0.946941 |
1308.6181 | Jesus Cerquides | Victor Bellon and Jesus Cerquides and Ivo Grosse | Bayesian Conditional Gaussian Network Classifiers with Applications to
Mass Spectra Classification | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classifiers based on probabilistic graphical models are very effective. In
continuous domains, maximum likelihood is usually used to assess the
predictions of those classifiers. When data is scarce, this can easily lead to
overfitting. In any probabilistic setting, Bayesian averaging (BA) provides
theoretically optimal predictions and is known to be robust to overfitting. In
this work we introduce Bayesian Conditional Gaussian Network Classifiers, which
efficiently perform exact Bayesian averaging over the parameters. We evaluate
the proposed classifiers against the maximum likelihood alternatives proposed
so far over standard UCI datasets, concluding that performing BA improves the
quality of the assessed probabilities (conditional log likelihood) whilst
maintaining the error rate.
Overfitting is more likely to occur in domains where the number of data items
is small and the number of variables is large. These two conditions are met in
the realm of bioinformatics, where the early diagnosis of cancer from mass
spectra is a relevant task. We provide an application of our classification
framework to that problem, comparing it with the standard maximum likelihood
alternative, where the improvement of quality in the assessed probabilities is
confirmed.
| [
{
"version": "v1",
"created": "Wed, 28 Aug 2013 15:14:47 GMT"
}
] | 2013-08-29T00:00:00 | [
[
"Bellon",
"Victor",
""
],
[
"Cerquides",
"Jesus",
""
],
[
"Grosse",
"Ivo",
""
]
] | TITLE: Bayesian Conditional Gaussian Network Classifiers with Applications to
Mass Spectra Classification
ABSTRACT: Classifiers based on probabilistic graphical models are very effective. In
continuous domains, maximum likelihood is usually used to assess the
predictions of those classifiers. When data is scarce, this can easily lead to
overfitting. In any probabilistic setting, Bayesian averaging (BA) provides
theoretically optimal predictions and is known to be robust to overfitting. In
this work we introduce Bayesian Conditional Gaussian Network Classifiers, which
efficiently perform exact Bayesian averaging over the parameters. We evaluate
the proposed classifiers against the maximum likelihood alternatives proposed
so far over standard UCI datasets, concluding that performing BA improves the
quality of the assessed probabilities (conditional log likelihood) whilst
maintaining the error rate.
Overfitting is more likely to occur in domains where the number of data items
is small and the number of variables is large. These two conditions are met in
the realm of bioinformatics, where the early diagnosis of cancer from mass
spectra is a relevant task. We provide an application of our classification
framework to that problem, comparing it with the standard maximum likelihood
alternative, where the improvement of quality in the assessed probabilities is
confirmed.
| no_new_dataset | 0.953275 |
1308.5137 | Uwe Aickelin | Josie McCulloch, Christian Wagner, Uwe Aickelin | Measuring the Directional Distance Between Fuzzy Sets | UKCI 2013, the 13th Annual Workshop on Computational Intelligence,
Surrey University | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The measure of distance between two fuzzy sets is a fundamental tool within
fuzzy set theory. However, current distance measures within the literature do
not account for the direction of change between fuzzy sets; a useful concept in
a variety of applications, such as Computing With Words. In this paper, we
highlight this utility and introduce a distance measure which takes the
direction between sets into account. We provide details of its application for
normal and non-normal, as well as convex and non-convex fuzzy sets. We
demonstrate the new distance measure using real data from the MovieLens dataset
and establish the benefits of measuring the direction between fuzzy sets.
| [
{
"version": "v1",
"created": "Fri, 23 Aug 2013 14:31:10 GMT"
}
] | 2013-08-26T00:00:00 | [
[
"McCulloch",
"Josie",
""
],
[
"Wagner",
"Christian",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: Measuring the Directional Distance Between Fuzzy Sets
ABSTRACT: The measure of distance between two fuzzy sets is a fundamental tool within
fuzzy set theory. However, current distance measures within the literature do
not account for the direction of change between fuzzy sets; a useful concept in
a variety of applications, such as Computing With Words. In this paper, we
highlight this utility and introduce a distance measure which takes the
direction between sets into account. We provide details of its application for
normal and non-normal, as well as convex and non-convex fuzzy sets. We
demonstrate the new distance measure using real data from the MovieLens dataset
and establish the benefits of measuring the direction between fuzzy sets.
| no_new_dataset | 0.949389 |
1307.1372 | Kishore Kumar Gajula | G.Kishore Kumar and V.K.Jayaraman | Clustering of Complex Networks and Community Detection Using Group
Search Optimization | 7 pages, 2 figures | null | null | null | cs.NE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Group Search Optimizer (GSO) is one of the best algorithms and is very new in
the field of Evolutionary Computing. It is a very robust and efficient
algorithm, which is inspired by animal searching behaviour. The paper describes
an application of GSO to the clustering of networks. We have tested GSO against
five standard benchmark datasets; the GSO algorithm proves very competitive in
terms of accuracy and convergence speed.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 15:22:35 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Aug 2013 09:11:13 GMT"
}
] | 2013-08-20T00:00:00 | [
[
"Kumar",
"G. Kishore",
""
],
[
"Jayaraman",
"V. K.",
""
]
] | TITLE: Clustering of Complex Networks and Community Detection Using Group
Search Optimization
ABSTRACT: Group Search Optimizer (GSO) is one of the best algorithms and is very new in
the field of Evolutionary Computing. It is a very robust and efficient
algorithm, which is inspired by animal searching behaviour. The paper describes
an application of GSO to the clustering of networks. We have tested GSO against
five standard benchmark datasets; the GSO algorithm proves very competitive in
terms of accuracy and convergence speed.
| no_new_dataset | 0.949856 |
1308.3872 | Jian Sun | Jian Sun and Wei Chen and Junhui Deng and Jie Gao and Xianfeng Gu and
Feng Luo | A Variational Principle for Improving 2D Triangle Meshes based on
Hyperbolic Volume | null | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the problem of improving 2D triangle meshes
tessellating planar regions. We propose a new variational principle for
improving 2D triangle meshes where the energy functional is a convex function
over the angle structures whose maximizer is unique and consists only of
equilateral triangles. This energy functional is related to hyperbolic volume
of ideal 3-simplex. Even with extra constraints on the angles for embedding the
mesh into the plane and preserving the boundary, the energy functional remains
well-behaved. We devise an efficient algorithm for maximizing the energy
functional over these extra constraints. We apply our algorithm to various
datasets and compare its performance with that of CVT. The experimental results
show that our algorithm produces the meshes with both the angles and the aspect
ratios of triangles lying in tighter intervals.
| [
{
"version": "v1",
"created": "Sun, 18 Aug 2013 16:40:31 GMT"
}
] | 2013-08-20T00:00:00 | [
[
"Sun",
"Jian",
""
],
[
"Chen",
"Wei",
""
],
[
"Deng",
"Junhui",
""
],
[
"Gao",
"Jie",
""
],
[
"Gu",
"Xianfeng",
""
],
[
"Luo",
"Feng",
""
]
] | TITLE: A Variational Principle for Improving 2D Triangle Meshes based on
Hyperbolic Volume
ABSTRACT: In this paper, we consider the problem of improving 2D triangle meshes
tessellating planar regions. We propose a new variational principle for
improving 2D triangle meshes where the energy functional is a convex function
over the angle structures whose maximizer is unique and consists only of
equilateral triangles. This energy functional is related to hyperbolic volume
of ideal 3-simplex. Even with extra constraints on the angles for embedding the
mesh into the plane and preserving the boundary, the energy functional remains
well-behaved. We devise an efficient algorithm for maximizing the energy
functional over these extra constraints. We apply our algorithm to various
datasets and compare its performance with that of CVT. The experimental results
show that our algorithm produces the meshes with both the angles and the aspect
ratios of triangles lying in tighter intervals.
| no_new_dataset | 0.951006 |
1308.4038 | Carmen Delia Vega Orozco CDVO | Carmen D. Vega Orozco and Jean Golay and Mikhail Kanevski | Multifractal portrayal of the Swiss population | 17 pages, 6 figures | null | null | null | physics.soc-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fractal geometry is a fundamental approach for describing the complex
irregularities of the spatial structure of point patterns. The present research
characterizes the spatial structure of the Swiss population distribution in the
three Swiss geographical regions (Alps, Plateau and Jura) and at the entire
country level. These analyses were carried out using fractal and multifractal
measures for point patterns, which enabled the estimation of the spatial degree
of clustering of a distribution at different scales. The Swiss population
dataset is presented on a grid of points and thus it can be modelled as a
"point process" where each point is characterized by its spatial location
(geometrical support) and a number of inhabitants (measured variable). The
fractal characterization was performed by means of the box-counting dimension
and the multifractal analysis was conducted through the Renyi's generalized
dimensions and the multifractal spectrum. Results showed that the four
population patterns are all multifractals and present different clustering
behaviours. Applying multifractal and fractal methods at different geographical
regions and at different scales allowed us to quantify and describe the
dissimilarities between the four structures and their underlying processes.
This paper is the first Swiss geodemographic study applying multifractal
methods using high resolution data.
| [
{
"version": "v1",
"created": "Mon, 19 Aug 2013 14:32:00 GMT"
}
] | 2013-08-20T00:00:00 | [
[
"Orozco",
"Carmen D. Vega",
""
],
[
"Golay",
"Jean",
""
],
[
"Kanevski",
"Mikhail",
""
]
] | TITLE: Multifractal portrayal of the Swiss population
ABSTRACT: Fractal geometry is a fundamental approach for describing the complex
irregularities of the spatial structure of point patterns. The present research
characterizes the spatial structure of the Swiss population distribution in the
three Swiss geographical regions (Alps, Plateau and Jura) and at the entire
country level. These analyses were carried out using fractal and multifractal
measures for point patterns, which enabled the estimation of the spatial degree
of clustering of a distribution at different scales. The Swiss population
dataset is presented on a grid of points and thus it can be modelled as a
"point process" where each point is characterized by its spatial location
(geometrical support) and a number of inhabitants (measured variable). The
fractal characterization was performed by means of the box-counting dimension
and the multifractal analysis was conducted through the Renyi's generalized
dimensions and the multifractal spectrum. Results showed that the four
population patterns are all multifractals and present different clustering
behaviours. Applying multifractal and fractal methods at different geographical
regions and at different scales allowed us to quantify and describe the
dissimilarities between the four structures and their underlying processes.
This paper is the first Swiss geodemographic study applying multifractal
methods using high resolution data.
| no_new_dataset | 0.953492 |
1211.6687 | Nicolas Gillis | Nicolas Gillis | Robustness Analysis of Hottopixx, a Linear Programming Model for
Factoring Nonnegative Matrices | 23 pages; new numerical results; Comparison with Arora et al.;
Accepted in SIAM J. Mat. Anal. Appl | SIAM J. Matrix Anal. & Appl. 34 (3), pp. 1189-1212, 2013 | 10.1137/120900629 | null | stat.ML cs.LG cs.NA math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although nonnegative matrix factorization (NMF) is NP-hard in general, it has
been shown very recently that it is tractable under the assumption that the
input nonnegative data matrix is close to being separable (separability
requires that all columns of the input matrix belong to the cone spanned by a
small subset of these columns). Since then, several algorithms have been
designed to handle this subclass of NMF problems. In particular, Bittorf,
Recht, R\'e and Tropp (`Factoring nonnegative matrices with linear programs',
NIPS 2012) proposed a linear programming model, referred to as Hottopixx. In
this paper, we provide a new and more general robustness analysis of their
method. In particular, we design a provably more robust variant using a
post-processing strategy which allows us to deal with duplicates and near
duplicates in the dataset.
| [
{
"version": "v1",
"created": "Wed, 28 Nov 2012 18:05:56 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Dec 2012 16:06:55 GMT"
},
{
"version": "v3",
"created": "Sun, 17 Feb 2013 08:53:06 GMT"
},
{
"version": "v4",
"created": "Fri, 31 May 2013 15:06:57 GMT"
}
] | 2013-08-19T00:00:00 | [
[
"Gillis",
"Nicolas",
""
]
] | TITLE: Robustness Analysis of Hottopixx, a Linear Programming Model for
Factoring Nonnegative Matrices
ABSTRACT: Although nonnegative matrix factorization (NMF) is NP-hard in general, it has
been shown very recently that it is tractable under the assumption that the
input nonnegative data matrix is close to being separable (separability
requires that all columns of the input matrix belong to the cone spanned by a
small subset of these columns). Since then, several algorithms have been
designed to handle this subclass of NMF problems. In particular, Bittorf,
Recht, R\'e and Tropp (`Factoring nonnegative matrices with linear programs',
NIPS 2012) proposed a linear programming model, referred to as Hottopixx. In
this paper, we provide a new and more general robustness analysis of their
method. In particular, we design a provably more robust variant using a
post-processing strategy which allows us to deal with duplicates and near
duplicates in the dataset.
| no_new_dataset | 0.94256 |
1207.3270 | Anastasios Skarlatidis | Anastasios Skarlatidis, Georgios Paliouras, Alexander Artikis, George
A. Vouros | Probabilistic Event Calculus for Event Recognition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symbolic event recognition systems have been successfully applied to a
variety of application domains, extracting useful information in the form of
events, allowing experts or other systems to monitor and respond when
significant events are recognised. In a typical event recognition application,
however, these systems often have to deal with a significant amount of
uncertainty. In this paper, we address the issue of uncertainty in logic-based
event recognition by extending the Event Calculus with probabilistic reasoning.
Markov Logic Networks are a natural candidate for our logic-based formalism.
However, the temporal semantics of the Event Calculus introduce a number of
challenges for the proposed model. We show how and under what assumptions we
can overcome these problems. Additionally, we study how probabilistic modelling
changes the behaviour of the formalism, affecting its key property, the inertia
of fluents. Furthermore, we demonstrate the advantages of the probabilistic
Event Calculus through examples and experiments in the domain of activity
recognition, using a publicly available dataset for video surveillance.
| [
{
"version": "v1",
"created": "Fri, 13 Jul 2012 14:57:35 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Aug 2013 11:13:05 GMT"
}
] | 2013-08-16T00:00:00 | [
[
"Skarlatidis",
"Anastasios",
""
],
[
"Paliouras",
"Georgios",
""
],
[
"Artikis",
"Alexander",
""
],
[
"Vouros",
"George A.",
""
]
] | TITLE: Probabilistic Event Calculus for Event Recognition
ABSTRACT: Symbolic event recognition systems have been successfully applied to a
variety of application domains, extracting useful information in the form of
events, allowing experts or other systems to monitor and respond when
significant events are recognised. In a typical event recognition application,
however, these systems often have to deal with a significant amount of
uncertainty. In this paper, we address the issue of uncertainty in logic-based
event recognition by extending the Event Calculus with probabilistic reasoning.
Markov Logic Networks are a natural candidate for our logic-based formalism.
However, the temporal semantics of the Event Calculus introduce a number of
challenges for the proposed model. We show how and under what assumptions we
can overcome these problems. Additionally, we study how probabilistic modelling
changes the behaviour of the formalism, affecting its key property, the inertia
of fluents. Furthermore, we demonstrate the advantages of the probabilistic
Event Calculus through examples and experiments in the domain of activity
recognition, using a publicly available dataset for video surveillance.
| no_new_dataset | 0.945298 |
1301.6314 | Yakir Reshef | David Reshef (1), Yakir Reshef (1), Michael Mitzenmacher (2), Pardis
Sabeti (2) (1, 2 - contributed equally) | Equitability Analysis of the Maximal Information Coefficient, with
Comparisons | 22 pages, 9 figures | null | null | null | cs.LG q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A measure of dependence is said to be equitable if it gives similar scores to
equally noisy relationships of different types. Equitability is important in
data exploration when the goal is to identify a relatively small set of
strongest associations within a dataset as opposed to finding as many non-zero
associations as possible, which often are too many to sift through. Thus an
equitable statistic, such as the maximal information coefficient (MIC), can be
useful for analyzing high-dimensional data sets. Here, we explore both
equitability and the properties of MIC, and discuss several aspects of the
theory and practice of MIC. We begin by presenting an intuition behind the
equitability of MIC through the exploration of the maximization and
normalization steps in its definition. We then examine the speed and optimality
of the approximation algorithm used to compute MIC, and suggest some directions
for improving both. Finally, we demonstrate in a range of noise models and
sample sizes that MIC is more equitable than natural alternatives, such as
mutual information estimation and distance correlation.
| [
{
"version": "v1",
"created": "Sun, 27 Jan 2013 03:45:30 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Aug 2013 20:51:50 GMT"
}
] | 2013-08-16T00:00:00 | [
[
"Reshef",
"David",
""
],
[
"Reshef",
"Yakir",
""
],
[
"Mitzenmacher",
"Michael",
""
],
[
"Sabeti",
"Pardis",
""
]
] | TITLE: Equitability Analysis of the Maximal Information Coefficient, with
Comparisons
ABSTRACT: A measure of dependence is said to be equitable if it gives similar scores to
equally noisy relationships of different types. Equitability is important in
data exploration when the goal is to identify a relatively small set of
strongest associations within a dataset as opposed to finding as many non-zero
associations as possible, which often are too many to sift through. Thus an
equitable statistic, such as the maximal information coefficient (MIC), can be
useful for analyzing high-dimensional data sets. Here, we explore both
equitability and the properties of MIC, and discuss several aspects of the
theory and practice of MIC. We begin by presenting an intuition behind the
equitability of MIC through the exploration of the maximization and
normalization steps in its definition. We then examine the speed and optimality
of the approximation algorithm used to compute MIC, and suggest some directions
for improving both. Finally, we demonstrate in a range of noise models and
sample sizes that MIC is more equitable than natural alternatives, such as
mutual information estimation and distance correlation.
| no_new_dataset | 0.948298 |
1305.0596 | Taha Hasan | Taha Hassan, Fahad Javed and Naveed Arshad | An Empirical Investigation of V-I Trajectory based Load Signatures for
Non-Intrusive Load Monitoring | 11 pages, 11 figures. Under review for IEEE Transactions on Smart
Grid | null | 10.1109/TSG.2013.2271282 | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Choice of load signature or feature space is one of the most fundamental
design choices for non-intrusive load monitoring or energy disaggregation
problem. Electrical power quantities, harmonic load characteristics, canonical
transient and steady-state waveforms are some of the typical choices of load
signature or load signature basis for current research addressing appliance
classification and prediction. This paper expands and evaluates appliance load
signatures based on V-I trajectory - the mutual locus of instantaneous voltage
and current waveforms - for precision and robustness of prediction in
classification algorithms used to disaggregate residential overall energy use
and predict constituent appliance profiles. We also demonstrate the use of
variants of differential evolution as a novel strategy for selection of optimal
load models in context of energy disaggregation. A publicly available benchmark
dataset REDD is employed for evaluation purposes. Our experimental evaluations
indicate that these load signatures, in conjunction with a number of popular
classification algorithms, offer better or generally comparable overall
precision of prediction, robustness and reliability against dynamic, noisy and
highly similar load signatures with reference to electrical power quantities
and harmonic content. Herein, wave-shape features are found to be an effective
new basis of classification and prediction for semi-automated energy
disaggregation and monitoring.
| [
{
"version": "v1",
"created": "Thu, 2 May 2013 23:32:00 GMT"
}
] | 2013-08-16T00:00:00 | [
[
"Hassan",
"Taha",
""
],
[
"Javed",
"Fahad",
""
],
[
"Arshad",
"Naveed",
""
]
] | TITLE: An Empirical Investigation of V-I Trajectory based Load Signatures for
Non-Intrusive Load Monitoring
ABSTRACT: Choice of load signature or feature space is one of the most fundamental
design choices for non-intrusive load monitoring or energy disaggregation
problem. Electrical power quantities, harmonic load characteristics, canonical
transient and steady-state waveforms are some of the typical choices of load
signature or load signature basis for current research addressing appliance
classification and prediction. This paper expands and evaluates appliance load
signatures based on V-I trajectory - the mutual locus of instantaneous voltage
and current waveforms - for precision and robustness of prediction in
classification algorithms used to disaggregate residential overall energy use
and predict constituent appliance profiles. We also demonstrate the use of
variants of differential evolution as a novel strategy for selection of optimal
load models in context of energy disaggregation. A publicly available benchmark
dataset REDD is employed for evaluation purposes. Our experimental evaluations
indicate that these load signatures, in conjunction with a number of popular
classification algorithms, offer better or generally comparable overall
precision of prediction, robustness and reliability against dynamic, noisy and
highly similar load signatures with reference to electrical power quantities
and harmonic content. Herein, wave-shape features are found to be an effective
new basis of classification and prediction for semi-automated energy
disaggregation and monitoring.
| no_new_dataset | 0.949949 |
1307.7411 | Wajdi Dhifli Wajdi DHIFLI | Wajdi Dhifli, Mohamed Moussaoui, Rabie Saidi, Engelbert Mephu Nguifo | Towards an Efficient Discovery of the Topological Representative
Subgraphs | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the emergence of graph databases, the task of frequent subgraph
discovery has been extensively addressed. Although the proposed approaches in
the literature have made this task feasible, the number of discovered frequent
subgraphs is still too high to be used efficiently in any further exploration.
Feature selection for graph data is a way to reduce the high number of frequent
subgraphs based on exact or approximate structural similarity. However, current
structural similarity strategies are not efficient enough in many real-world
applications; besides, the combinatorial nature of graphs makes it
computationally very costly. In order to select a smaller yet structurally
irredundant set of subgraphs, we propose a novel approach that mines the top-k
topological representative subgraphs among the frequent ones. Our approach
allows detecting hidden structural similarities that existing approaches are
unable to detect such as the density or the diameter of the subgraph. In
addition, it can be easily extended using any user defined structural or
topological attributes depending on the sought properties. Empirical studies on
real and synthetic graph datasets show that our approach is fast and scalable.
| [
{
"version": "v1",
"created": "Sun, 28 Jul 2013 22:17:40 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Aug 2013 21:52:44 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Aug 2013 00:28:30 GMT"
}
] | 2013-08-16T00:00:00 | [
[
"Dhifli",
"Wajdi",
""
],
[
"Moussaoui",
"Mohamed",
""
],
[
"Saidi",
"Rabie",
""
],
[
"Nguifo",
"Engelbert Mephu",
""
]
] | TITLE: Towards an Efficient Discovery of the Topological Representative
Subgraphs
ABSTRACT: With the emergence of graph databases, the task of frequent subgraph
discovery has been extensively addressed. Although the proposed approaches in
the literature have made this task feasible, the number of discovered frequent
subgraphs is still too high to be used efficiently in any further exploration.
Feature selection for graph data is a way to reduce the high number of frequent
subgraphs based on exact or approximate structural similarity. However, current
structural similarity strategies are not efficient enough in many real-world
applications; besides, the combinatorial nature of graphs makes it
computationally very costly. In order to select a smaller yet structurally
irredundant set of subgraphs, we propose a novel approach that mines the top-k
topological representative subgraphs among the frequent ones. Our approach
allows detecting hidden structural similarities that existing approaches are
unable to detect such as the density or the diameter of the subgraph. In
addition, it can be easily extended using any user defined structural or
topological attributes depending on the sought properties. Empirical studies on
real and synthetic graph datasets show that our approach is fast and scalable.
| no_new_dataset | 0.946941 |
1308.3474 | Vipul Periwal | Deborah A. Striegel, Damian Wojtowicz, Teresa M. Przytycka, Vipul
Periwal | Zen and the Science of Pattern Identification: An Inquiry into Bayesian
Skepticism | 31 pages, 12 figures | null | null | null | q-bio.QM physics.data-an | http://creativecommons.org/licenses/publicdomain/ | Finding patterns in data is one of the most challenging open questions in
information science. The number of possible relationships scales
combinatorially with the size of the dataset, overwhelming the exponential
increase in availability of computational resources. Physical insights have
been instrumental in developing efficient computational heuristics. Using
quantum field theory methods and rethinking three centuries of Bayesian
inference, we formulated the problem in terms of finding landscapes of patterns
and solved this problem exactly. The generality of our calculus is illustrated
by applying it to handwritten digit images and to finding structural features
in proteins from sequence alignments without any presumptions about model
priors suited to specific datasets. Landscapes of patterns can be uncovered on
a desktop computer in minutes.
| [
{
"version": "v1",
"created": "Thu, 15 Aug 2013 18:45:35 GMT"
}
] | 2013-08-16T00:00:00 | [
[
"Striegel",
"Deborah A.",
""
],
[
"Wojtowicz",
"Damian",
""
],
[
"Przytycka",
"Teresa M.",
""
],
[
"Periwal",
"Vipul",
""
]
] | TITLE: Zen and the Science of Pattern Identification: An Inquiry into Bayesian
Skepticism
ABSTRACT: Finding patterns in data is one of the most challenging open questions in
information science. The number of possible relationships scales
combinatorially with the size of the dataset, overwhelming the exponential
increase in availability of computational resources. Physical insights have
been instrumental in developing efficient computational heuristics. Using
quantum field theory methods and rethinking three centuries of Bayesian
inference, we formulated the problem in terms of finding landscapes of patterns
and solved this problem exactly. The generality of our calculus is illustrated
by applying it to handwritten digit images and to finding structural features
in proteins from sequence alignments without any presumptions about model
priors suited to specific datasets. Landscapes of patterns can be uncovered on
a desktop computer in minutes.
| no_new_dataset | 0.944074 |
1204.0171 | Mete Ozay | Mete Ozay, Fatos T. Yarman Vural | A New Fuzzy Stacked Generalization Technique and Analysis of its
Performance | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, a new Stacked Generalization technique called Fuzzy Stacked
Generalization (FSG) is proposed to minimize the difference between N -sample
and large-sample classification error of the Nearest Neighbor classifier. The
proposed FSG employs a new hierarchical distance learning strategy to minimize
the error difference. For this purpose, we first construct an ensemble of
base-layer fuzzy k- Nearest Neighbor (k-NN) classifiers, each of which receives
a different feature set extracted from the same sample set. The fuzzy
membership values computed at the decision space of each fuzzy k-NN classifier
are concatenated to form the feature vectors of a fusion space. Finally, the
feature vectors are fed to a meta-layer classifier to learn the degree of
accuracy of the decisions of the base-layer classifiers for meta-layer
classification. Rather than the power of the individual base-layer classifiers,
diversity and cooperation of the classifiers become an important issue to
improve the overall performance of the proposed FSG. A weak base-layer
classifier may boost the overall performance more than a strong classifier, if
it is capable of recognizing the samples, which are not recognized by the rest
of the classifiers, in its own feature space. The experiments explore the type
of the collaboration among the individual classifiers required for an improved
performance of the suggested architecture. Experiments on multiple feature
real-world datasets show that the proposed FSG performs better than the state
of the art ensemble learning algorithms such as Adaboost, Random Subspace and
Rotation Forest. On the other hand, comparable performances are observed in the
experiments on single-feature multi-attribute datasets.
| [
{
"version": "v1",
"created": "Sun, 1 Apr 2012 07:16:47 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Oct 2012 19:32:21 GMT"
},
{
"version": "v3",
"created": "Tue, 30 Oct 2012 06:39:31 GMT"
},
{
"version": "v4",
"created": "Thu, 1 Nov 2012 14:53:55 GMT"
},
{
"version": "v5",
"created": "Mon, 12 Aug 2013 21:13:37 GMT"
}
] | 2013-08-14T00:00:00 | [
[
"Ozay",
"Mete",
""
],
[
"Vural",
"Fatos T. Yarman",
""
]
] | TITLE: A New Fuzzy Stacked Generalization Technique and Analysis of its
Performance
ABSTRACT: In this study, a new Stacked Generalization technique called Fuzzy Stacked
Generalization (FSG) is proposed to minimize the difference between N-sample
and large-sample classification error of the Nearest Neighbor classifier. The
proposed FSG employs a new hierarchical distance learning strategy to minimize
the error difference. For this purpose, we first construct an ensemble of
base-layer fuzzy k-Nearest Neighbor (k-NN) classifiers, each of which receives
a different feature set extracted from the same sample set. The fuzzy
membership values computed at the decision space of each fuzzy k-NN classifier
are concatenated to form the feature vectors of a fusion space. Finally, the
feature vectors are fed to a meta-layer classifier to learn the degree of
accuracy of the decisions of the base-layer classifiers for meta-layer
classification. Rather than the power of the individual base-layer classifiers,
diversity and cooperation of the classifiers become an important issue to
improve the overall performance of the proposed FSG. A weak base-layer
classifier may boost the overall performance more than a strong classifier, if
it is capable of recognizing the samples, which are not recognized by the rest
of the classifiers, in its own feature space. The experiments explore the type
of the collaboration among the individual classifiers required for an improved
performance of the suggested architecture. Experiments on multiple feature
real-world datasets show that the proposed FSG performs better than the state
of the art ensemble learning algorithms such as Adaboost, Random Subspace and
Rotation Forest. On the other hand, comparable performances are observed in the
experiments on single-feature multi-attribute datasets.
| no_new_dataset | 0.951729 |
1308.2354 | Srijith Ravikumar | Srijith Ravikumar, Kartik Talamadupula, Raju Balakrishnan, Subbarao
Kambhampati | RAProp: Ranking Tweets by Exploiting the Tweet/User/Web Ecosystem and
Inter-Tweet Agreement | 11 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing popularity of Twitter renders improved trustworthiness and
relevance assessment of tweets much more important for search. However, given
the limitations on the size of tweets, it is hard to extract measures for
ranking from the tweets' content alone. We present a novel ranking method,
called RAProp, which combines two orthogonal measures of relevance and
trustworthiness of a tweet. The first, called Feature Score, measures the
trustworthiness of the source of the tweet. This is done by extracting features
from a 3-layer Twitter ecosystem, consisting of users, tweets and the pages
referred to in the tweets. The second measure, called agreement analysis,
estimates the trustworthiness of the content of the tweet, by analyzing how and
whether the content is independently corroborated by other tweets. We view the
candidate result set of tweets as the vertices of a graph, with the edges
measuring the estimated agreement between each pair of tweets. The feature
score is propagated over this agreement graph to compute the top-k tweets that
have both trustworthy sources and independent corroboration. The evaluation of
our method on 16 million tweets from the TREC 2011 Microblog Dataset shows that
for top-30 precision we achieve 53% higher precision than the current best-performing
method on the Dataset and over 300% higher than current Twitter Search. We also present a
detailed internal empirical evaluation of RAProp in comparison to several
alternative approaches proposed by us.
| [
{
"version": "v1",
"created": "Sun, 11 Aug 2013 00:56:59 GMT"
}
] | 2013-08-13T00:00:00 | [
[
"Ravikumar",
"Srijith",
""
],
[
"Talamadupula",
"Kartik",
""
],
[
"Balakrishnan",
"Raju",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] | TITLE: RAProp: Ranking Tweets by Exploiting the Tweet/User/Web Ecosystem and
Inter-Tweet Agreement
ABSTRACT: The increasing popularity of Twitter renders improved trustworthiness and
relevance assessment of tweets much more important for search. However, given
the limitations on the size of tweets, it is hard to extract measures for
ranking from the tweets' content alone. We present a novel ranking method,
called RAProp, which combines two orthogonal measures of relevance and
trustworthiness of a tweet. The first, called Feature Score, measures the
trustworthiness of the source of the tweet. This is done by extracting features
from a 3-layer Twitter ecosystem, consisting of users, tweets and the pages
referred to in the tweets. The second measure, called agreement analysis,
estimates the trustworthiness of the content of the tweet, by analyzing how and
whether the content is independently corroborated by other tweets. We view the
candidate result set of tweets as the vertices of a graph, with the edges
measuring the estimated agreement between each pair of tweets. The feature
score is propagated over this agreement graph to compute the top-k tweets that
have both trustworthy sources and independent corroboration. The evaluation of
our method on 16 million tweets from the TREC 2011 Microblog Dataset shows that
for top-30 precision we achieve 53% higher precision than the current best-performing
method on the Dataset and over 300% higher than current Twitter Search. We also present a
detailed internal empirical evaluation of RAProp in comparison to several
alternative approaches proposed by us.
| no_new_dataset | 0.948058 |
1308.2166 | Kanat Tangwongsan | Kanat Tangwongsan, A. Pavan, and Srikanta Tirthapura | Parallel Triangle Counting in Massive Streaming Graphs | null | null | null | null | cs.DB cs.DC cs.DS cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of triangles in a graph is a fundamental metric, used in social
network analysis, link classification and recommendation, and more. Driven by
these applications and the trend that modern graph datasets are both large and
dynamic, we present the design and implementation of a fast and cache-efficient
parallel algorithm for estimating the number of triangles in a massive
undirected graph whose edges arrive as a stream. It brings together the
benefits of streaming algorithms and parallel algorithms. By building on the
streaming algorithms framework, the algorithm has a small memory footprint. By
leveraging the parallel cache-oblivious framework, it makes efficient use of
the memory hierarchy of modern multicore machines without needing to know its
specific parameters. We prove theoretical bounds on accuracy, memory access
cost, and parallel runtime complexity, as well as showing empirically that the
algorithm yields accurate results and substantial speedups compared to an
optimized sequential implementation.
(This is an expanded version of a CIKM'13 paper of the same title.)
| [
{
"version": "v1",
"created": "Fri, 9 Aug 2013 15:54:22 GMT"
}
] | 2013-08-12T00:00:00 | [
[
"Tangwongsan",
"Kanat",
""
],
[
"Pavan",
"A.",
""
],
[
"Tirthapura",
"Srikanta",
""
]
] | TITLE: Parallel Triangle Counting in Massive Streaming Graphs
ABSTRACT: The number of triangles in a graph is a fundamental metric, used in social
network analysis, link classification and recommendation, and more. Driven by
these applications and the trend that modern graph datasets are both large and
dynamic, we present the design and implementation of a fast and cache-efficient
parallel algorithm for estimating the number of triangles in a massive
undirected graph whose edges arrive as a stream. It brings together the
benefits of streaming algorithms and parallel algorithms. By building on the
streaming algorithms framework, the algorithm has a small memory footprint. By
leveraging the parallel cache-oblivious framework, it makes efficient use of
the memory hierarchy of modern multicore machines without needing to know its
specific parameters. We prove theoretical bounds on accuracy, memory access
cost, and parallel runtime complexity, as well as showing empirically that the
algorithm yields accurate results and substantial speedups compared to an
optimized sequential implementation.
(This is an expanded version of a CIKM'13 paper of the same title.)
| no_new_dataset | 0.944177 |
1301.5979 | Lijun Sun Mr | Lijun Sun, Kay W. Axhausen, Der-Horng Lee, Xianfeng Huang | Understanding metropolitan patterns of daily encounters | 7 pages, 3 figures | null | 10.1073/pnas.1306440110 | null | physics.soc-ph cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding of the mechanisms driving our daily face-to-face encounters is
still limited; the field lacks large-scale datasets describing both individual
behaviors and their collective interactions. However, here, with the help of
travel smart card data, we uncover such encounter mechanisms and structures by
constructing a time-resolved in-vehicle social encounter network on public
buses in a city (about 5 million residents). This is the first time that such a
large network of encounters has been identified and analyzed. Using a
population scale dataset, we find physical encounters display reproducible
temporal patterns, indicating that repeated encounters are regular and
identical. On an individual scale, we find that collective regularities
dominate distinct encounters' bounded nature. An individual's encounter
capability is rooted in his/her daily behavioral regularity, explaining the
emergence of "familiar strangers" in daily life. Strikingly, we find
individuals with repeated encounters are not grouped into small communities,
but become strongly connected over time, resulting in a large, but
imperceptible, small-world contact network or "structure of co-presence" across
the whole metropolitan area. Revealing the encounter pattern and identifying
this large-scale contact network are crucial to understanding the dynamics in
patterns of social acquaintances, collective human behaviors, and --
particularly -- disclosing the impact of human behavior on various
diffusion/spreading processes.
| [
{
"version": "v1",
"created": "Fri, 25 Jan 2013 08:25:14 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Feb 2013 13:16:35 GMT"
},
{
"version": "v3",
"created": "Thu, 4 Jul 2013 05:35:07 GMT"
}
] | 2013-08-07T00:00:00 | [
[
"Sun",
"Lijun",
""
],
[
"Axhausen",
"Kay W.",
""
],
[
"Lee",
"Der-Horng",
""
],
[
"Huang",
"Xianfeng",
""
]
] | TITLE: Understanding metropolitan patterns of daily encounters
ABSTRACT: Understanding of the mechanisms driving our daily face-to-face encounters is
still limited; the field lacks large-scale datasets describing both individual
behaviors and their collective interactions. However, here, with the help of
travel smart card data, we uncover such encounter mechanisms and structures by
constructing a time-resolved in-vehicle social encounter network on public
buses in a city (about 5 million residents). This is the first time that such a
large network of encounters has been identified and analyzed. Using a
population scale dataset, we find physical encounters display reproducible
temporal patterns, indicating that repeated encounters are regular and
identical. On an individual scale, we find that collective regularities
dominate distinct encounters' bounded nature. An individual's encounter
capability is rooted in his/her daily behavioral regularity, explaining the
emergence of "familiar strangers" in daily life. Strikingly, we find
individuals with repeated encounters are not grouped into small communities,
but become strongly connected over time, resulting in a large, but
imperceptible, small-world contact network or "structure of co-presence" across
the whole metropolitan area. Revealing the encounter pattern and identifying
this large-scale contact network are crucial to understanding the dynamics in
patterns of social acquaintances, collective human behaviors, and --
particularly -- disclosing the impact of human behavior on various
diffusion/spreading processes.
| no_new_dataset | 0.770119 |
1308.1118 | Guoqiong Liao | Guoqiong Liao, Yuchen Zhao, Sihong Xie, Philip S. Yu | Latent Networks Fusion based Model for Event Recommendation in Offline
Ephemeral Social Networks | Full version of ACM CIKM2013 paper | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing amount of mobile social media, offline ephemeral social
networks (OffESNs) are receiving more and more attention. Offline ephemeral
social networks (OffESNs) are networks created ad hoc at a specific
location for a specific purpose and lasting for a short period of time, relying
on mobile social media such as Radio Frequency Identification (RFID) and
Bluetooth devices. The primary purpose of people in the OffESNs is to acquire
and share information via attending prescheduled events. Event Recommendation
over this kind of network can assist attendees in selecting the
prescheduled events and organizers in resource planning. However,
because of the lack of user preference and rating information, as well as explicit
social relations, both rating based traditional recommendation methods and
social-trust based recommendation methods can no longer work well to recommend
events in the OffESNs. To address the challenges such as how to derive users'
latent preferences and social relations and how to fuse the latent information
in a unified model, we first construct two heterogeneous interaction social
networks, an event participation network and a physical proximity network.
Then, we use them to derive users' latent preferences and latent networks on
social relations, including like-minded peers, co-attendees and friends.
Finally, we propose an LNF (Latent Networks Fusion) model under a pairwise
factor graph to infer event attendance probabilities for recommendation.
Experiments on an RFID-based real conference dataset have demonstrated the
effectiveness of the proposed model compared with typical solutions.
| [
{
"version": "v1",
"created": "Mon, 5 Aug 2013 21:00:08 GMT"
}
] | 2013-08-07T00:00:00 | [
[
"Liao",
"Guoqiong",
""
],
[
"Zhao",
"Yuchen",
""
],
[
"Xie",
"Sihong",
""
],
[
"Yu",
"Philip S.",
""
]
] | TITLE: Latent Networks Fusion based Model for Event Recommendation in Offline
Ephemeral Social Networks
ABSTRACT: With the growing amount of mobile social media, offline ephemeral social
networks (OffESNs) are receiving more and more attention. Offline ephemeral
social networks (OffESNs) are networks created ad hoc at a specific
location for a specific purpose and lasting for a short period of time, relying
on mobile social media such as Radio Frequency Identification (RFID) and
Bluetooth devices. The primary purpose of people in the OffESNs is to acquire
and share information via attending prescheduled events. Event Recommendation
over this kind of network can assist attendees in selecting the
prescheduled events and organizers in resource planning. However,
because of the lack of user preference and rating information, as well as explicit
social relations, both rating based traditional recommendation methods and
social-trust based recommendation methods can no longer work well to recommend
events in the OffESNs. To address the challenges such as how to derive users'
latent preferences and social relations and how to fuse the latent information
in a unified model, we first construct two heterogeneous interaction social
networks, an event participation network and a physical proximity network.
Then, we use them to derive users' latent preferences and latent networks on
social relations, including like-minded peers, co-attendees and friends.
Finally, we propose an LNF (Latent Networks Fusion) model under a pairwise
factor graph to infer event attendance probabilities for recommendation.
Experiments on an RFID-based real conference dataset have demonstrated the
effectiveness of the proposed model compared with typical solutions.
| no_new_dataset | 0.95096 |
1308.1126 | Wang-Q Lim | H. Lakshman, W.-Q Lim, H. Schwarz, D. Marpe, G. Kutyniok, and T.
Wiegand | Image interpolation using Shearlet based iterative refinement | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes an image interpolation algorithm exploiting sparse
representation for natural images. It involves three main steps: (a) obtaining
an initial estimate of the high resolution image using linear methods like FIR
filtering, (b) promoting sparsity in a selected dictionary through iterative
thresholding, and (c) extracting high frequency information from the
approximation to refine the initial estimate. For the sparse modeling, a
shearlet dictionary is chosen to yield a multiscale directional representation.
The proposed algorithm is compared to several state-of-the-art methods to
assess its objective as well as subjective performance. Compared to the cubic
spline interpolation method, an average PSNR gain of around 0.8 dB is observed
over a dataset of 200 images.
| [
{
"version": "v1",
"created": "Mon, 5 Aug 2013 21:33:06 GMT"
}
] | 2013-08-07T00:00:00 | [
[
"Lakshman",
"H.",
""
],
[
"Lim",
"W. -Q",
""
],
[
"Schwarz",
"H.",
""
],
[
"Marpe",
"D.",
""
],
[
"Kutyniok",
"G.",
""
],
[
"Wiegand",
"T.",
""
]
] | TITLE: Image interpolation using Shearlet based iterative refinement
ABSTRACT: This paper proposes an image interpolation algorithm exploiting sparse
representation for natural images. It involves three main steps: (a) obtaining
an initial estimate of the high resolution image using linear methods like FIR
filtering, (b) promoting sparsity in a selected dictionary through iterative
thresholding, and (c) extracting high frequency information from the
approximation to refine the initial estimate. For the sparse modeling, a
shearlet dictionary is chosen to yield a multiscale directional representation.
The proposed algorithm is compared to several state-of-the-art methods to
assess its objective as well as subjective performance. Compared to the cubic
spline interpolation method, an average PSNR gain of around 0.8 dB is observed
over a dataset of 200 images.
| no_new_dataset | 0.951504 |
1308.1150 | Ali Wali | Ali Wali and Adel M. Alimi | Multimodal Approach for Video Surveillance Indexing and Retrieval | 7 pages | Journal of Intelligent Computing, Volume: 1, Issue: 4 (December
2010), Page: 165-175 | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an overview of a multimodal system for indexing and
searching video sequences by content that has been developed within the
REGIMVid project. A large part of our system has been developed as part of
TRECVideo evaluation. The MAVSIR platform provides high-level feature
extraction from audio-visual content and concept/event-based video retrieval.
We illustrate the architecture of the system as well as provide an overview of
the descriptors supported to date. Then we demonstrate the usefulness of the
toolbox in the context of feature extraction, concepts/events learning and
retrieval in large collections of video surveillance data. The results are
encouraging, as we obtain good performance on several event categories,
while for all events we have gained valuable insights and experience.
| [
{
"version": "v1",
"created": "Tue, 6 Aug 2013 01:21:35 GMT"
}
] | 2013-08-07T00:00:00 | [
[
"Wali",
"Ali",
""
],
[
"Alimi",
"Adel M.",
""
]
] | TITLE: Multimodal Approach for Video Surveillance Indexing and Retrieval
ABSTRACT: In this paper, we present an overview of a multimodal system for indexing and
searching video sequences by content that has been developed within the
REGIMVid project. A large part of our system has been developed as part of
TRECVideo evaluation. The MAVSIR platform provides high-level feature
extraction from audio-visual content and concept/event-based video retrieval.
We illustrate the architecture of the system as well as provide an overview of
the descriptors supported to date. Then we demonstrate the usefulness of the
toolbox in the context of feature extraction, concepts/events learning and
retrieval in large collections of video surveillance data. The results are
encouraging, as we obtain good performance on several event categories,
while for all events we have gained valuable insights and experience.
| no_new_dataset | 0.949342 |
1308.0701 | Meisam Booshehri | Meisam Booshehri, Abbas Malekpour, Peter Luksch, Kamran Zamanifar,
Shahdad Shariatmadari | Ontology Enrichment by Extracting Hidden Assertional Knowledge from Text | 9 pages, International Journal of Computer Science and Information
Security | IJCSIS, 11(5), 64-72 | null | null | cs.IR cs.CL | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this position paper we present a new approach for discovering some special
classes of assertional knowledge in the text by using large RDF repositories,
resulting in the extraction of new non-taxonomic ontological relations. Also we
use inductive reasoning alongside our approach to improve its performance. Then, we
prepare a case study by applying our approach to sample data and illustrate the
soundness of the proposed approach. Moreover, in our view, the current LOD
cloud is not a suitable base for our proposal in all informational domains.
Therefore, we outline some directions based on prior work to enrich datasets
of Linked Data by using web mining. The result of such enrichment can be reused
for further relation extraction and ontology enrichment from unstructured free
text documents.
| [
{
"version": "v1",
"created": "Sat, 3 Aug 2013 14:30:55 GMT"
}
] | 2013-08-06T00:00:00 | [
[
"Booshehri",
"Meisam",
""
],
[
"Malekpour",
"Abbas",
""
],
[
"Luksch",
"Peter",
""
],
[
"Zamanifar",
"Kamran",
""
],
[
"Shariatmadari",
"Shahdad",
""
]
] | TITLE: Ontology Enrichment by Extracting Hidden Assertional Knowledge from Text
ABSTRACT: In this position paper we present a new approach for discovering some special
classes of assertional knowledge in the text by using large RDF repositories,
resulting in the extraction of new non-taxonomic ontological relations. Also we
use inductive reasoning alongside our approach to improve its performance. Then, we
prepare a case study by applying our approach to sample data and illustrate the
soundness of the proposed approach. Moreover, in our view, the current LOD
cloud is not a suitable base for our proposal in all informational domains.
Therefore, we outline some directions based on prior work to enrich datasets
of Linked Data by using web mining. The result of such enrichment can be reused
for further relation extraction and ontology enrichment from unstructured free
text documents.
| no_new_dataset | 0.95096 |
1308.0749 | Sta\v{s}a Milojevi\'c | Sta\v{s}a Milojevi\'c | Accuracy of simple, initials-based methods for author name
disambiguation | In press in Journal of Informetrics | null | 10.1016/j.joi.2013.06.006 | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are a number of solutions that perform unsupervised name disambiguation
based on the similarity of bibliographic records or common co-authorship
patterns. Whether the use of these advanced methods, which are often difficult
to implement, is warranted depends on whether the accuracy of the most basic
disambiguation methods, which only use the author's last name and initials, is
sufficient for a particular purpose. We derive realistic estimates for the
accuracy of simple, initials-based methods using simulated bibliographic
datasets in which the true identities of authors are known. Based on the
simulations in five diverse disciplines we find that the first initial method
already correctly identifies 97% of authors. An alternative simple method,
which takes all initials into account, is typically two times less accurate,
except in certain datasets that can be identified by applying a simple
criterion. Finally, we introduce a new name-based method that combines the
features of first initial and all initials methods by implicitly taking into
account the last name frequency and the size of the dataset. This hybrid method
reduces the fraction of incorrectly identified authors by 10-30% over the first
initial method.
| [
{
"version": "v1",
"created": "Sat, 3 Aug 2013 21:52:12 GMT"
}
] | 2013-08-06T00:00:00 | [
[
"Milojević",
"Staša",
""
]
] | TITLE: Accuracy of simple, initials-based methods for author name
disambiguation
ABSTRACT: There are a number of solutions that perform unsupervised name disambiguation
based on the similarity of bibliographic records or common co-authorship
patterns. Whether the use of these advanced methods, which are often difficult
to implement, is warranted depends on whether the accuracy of the most basic
disambiguation methods, which only use the author's last name and initials, is
sufficient for a particular purpose. We derive realistic estimates for the
accuracy of simple, initials-based methods using simulated bibliographic
datasets in which the true identities of authors are known. Based on the
simulations in five diverse disciplines we find that the first initial method
already correctly identifies 97% of authors. An alternative simple method,
which takes all initials into account, is typically two times less accurate,
except in certain datasets that can be identified by applying a simple
criterion. Finally, we introduce a new name-based method that combines the
features of first initial and all initials methods by implicitly taking into
account the last name frequency and the size of the dataset. This hybrid method
reduces the fraction of incorrectly identified authors by 10-30% over the first
initial method.
| no_new_dataset | 0.953837 |
1301.6659 | Nima Mirbakhsh | Nima Mirbakhsh and Charles X. Ling | Clustering-Based Matrix Factorization | This paper has been withdrawn by the author due to crucial typo and
the poor grammatical text | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems are emerging technologies that nowadays can be found in
many applications such as Amazon, Netflix, and so on. These systems help users
to find relevant information, recommendations, and their preferred items.
A slight improvement in the accuracy of these recommenders can greatly affect
the quality of recommendations. Matrix Factorization is a popular method in
Recommendation Systems showing promising results in accuracy and complexity. In
this paper we propose an extension of matrix factorization which adds general
neighborhood information on the recommendation model. Users and items are
clustered into different categories to see how these categories share
preferences. We then employ these shared interests of categories in a fusion by
Biased Matrix Factorization to achieve more accurate recommendations. This is a
complement to the current neighborhood-aware matrix factorization models, which
rely on using direct neighborhood information of users and items. The proposed
model is tested on two well-known recommendation system datasets: Movielens100k
and Netflix. Our experiment shows applying the general latent features of
categories into factorized recommender models improves the accuracy of
recommendations. The current neighborhood-aware models need a great number of
neighbors to achieve good accuracies. To the best of our knowledge, the
proposed model is better than or comparable with the current neighborhood-aware
models when they consider fewer number of neighbors.
| [
{
"version": "v1",
"created": "Mon, 28 Jan 2013 20:01:57 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Feb 2013 22:16:44 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Feb 2013 01:04:55 GMT"
},
{
"version": "v4",
"created": "Thu, 1 Aug 2013 22:06:49 GMT"
}
] | 2013-08-05T00:00:00 | [
[
"Mirbakhsh",
"Nima",
""
],
[
"Ling",
"Charles X.",
""
]
] | TITLE: Clustering-Based Matrix Factorization
ABSTRACT: Recommender systems are emerging technologies that nowadays can be found in
many applications such as Amazon, Netflix, and so on. These systems help users
to find relevant information, recommendations, and their preferred items.
A slight improvement in the accuracy of these recommenders can greatly affect
the quality of recommendations. Matrix Factorization is a popular method in
Recommendation Systems showing promising results in accuracy and complexity. In
this paper we propose an extension of matrix factorization which adds general
neighborhood information on the recommendation model. Users and items are
clustered into different categories to see how these categories share
preferences. We then employ these shared interests of categories in a fusion by
Biased Matrix Factorization to achieve more accurate recommendations. This is a
complement to the current neighborhood-aware matrix factorization models, which
rely on using direct neighborhood information of users and items. The proposed
model is tested on two well-known recommendation system datasets: Movielens100k
and Netflix. Our experiment shows applying the general latent features of
categories into factorized recommender models improves the accuracy of
recommendations. The current neighborhood-aware models need a great number of
neighbors to achieve good accuracies. To the best of our knowledge, the
proposed model is better than or comparable with the current neighborhood-aware
models when they consider fewer number of neighbors.
| no_new_dataset | 0.949716 |
1202.2564 | Christoforos Anagnostopoulos Dr | David J. Hand, Christoforos Anagnostopoulos | A better Beta for the H measure of classification performance | Preprint. Keywords: supervised classification, classifier
performance, AUC, ROC curve, H measure | null | null | null | stat.ME cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The area under the ROC curve is widely used as a measure of performance of
classification rules. However, it has recently been shown that the measure is
fundamentally incoherent, in the sense that it treats the relative severities
of misclassifications differently when different classifiers are used. To
overcome this, Hand (2009) proposed the $H$ measure, which allows a given
researcher to fix the distribution of relative severities to a
classifier-independent setting on a given problem. This note extends the
discussion, and proposes a modified standard distribution for the $H$ measure,
which better matches the requirements of researchers, in particular those faced
with heavily unbalanced datasets, the $Beta(\pi_1+1,\pi_0+1)$ distribution.
[Preprint submitted at Pattern Recognition Letters]
| [
{
"version": "v1",
"created": "Sun, 12 Feb 2012 20:32:15 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Aug 2013 11:44:54 GMT"
}
] | 2013-08-02T00:00:00 | [
[
"Hand",
"David J.",
""
],
[
"Anagnostopoulos",
"Christoforos",
""
]
] | TITLE: A better Beta for the H measure of classification performance
ABSTRACT: The area under the ROC curve is widely used as a measure of performance of
classification rules. However, it has recently been shown that the measure is
fundamentally incoherent, in the sense that it treats the relative severities
of misclassifications differently when different classifiers are used. To
overcome this, Hand (2009) proposed the $H$ measure, which allows a given
researcher to fix the distribution of relative severities to a
classifier-independent setting on a given problem. This note extends the
discussion, and proposes a modified standard distribution for the $H$ measure,
the $Beta(\pi_1+1,\pi_0+1)$ distribution, which better matches the requirements of
researchers, in particular those faced with heavily unbalanced datasets.
[Preprint submitted at Pattern Recognition Letters]
| no_new_dataset | 0.949389 |
1307.7795 | Aaron Darling | Ramanuja Simha and Hagit Shatkay | Protein (Multi-)Location Prediction: Using Location Inter-Dependencies
in a Probabilistic Framework | Peer-reviewed and presented as part of the 13th Workshop on
Algorithms in Bioinformatics (WABI2013) | null | null | null | q-bio.QM cs.CE cs.LG q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowing the location of a protein within the cell is important for
understanding its function, role in biological processes, and potential use as
a drug target. Much progress has been made in developing computational methods
that predict single locations for proteins, assuming that proteins localize to
a single location. However, it has been shown that proteins localize to
multiple locations. While a few recent systems have attempted to predict
multiple locations of proteins, they typically treat locations as independent
or capture inter-dependencies by treating each locations-combination present in
the training set as an individual location-class. We present a new method and a
preliminary system we have developed that directly incorporates
inter-dependencies among locations into the multiple-location-prediction
process, using a collection of Bayesian network classifiers. We evaluate our
system on a dataset of single- and multi-localized proteins. Our results,
obtained by incorporating inter-dependencies, are significantly higher than
those obtained by classifiers that do not use inter-dependencies. The
performance of our system on multi-localized proteins is comparable to a top
performing system (YLoc+), without restricting predictions to be based only on
location-combinations present in the training set.
| [
{
"version": "v1",
"created": "Tue, 30 Jul 2013 03:19:05 GMT"
}
] | 2013-08-02T00:00:00 | [
[
"Simha",
"Ramanuja",
""
],
[
"Shatkay",
"Hagit",
""
]
] | TITLE: Protein (Multi-)Location Prediction: Using Location Inter-Dependencies
in a Probabilistic Framework
ABSTRACT: Knowing the location of a protein within the cell is important for
understanding its function, role in biological processes, and potential use as
a drug target. Much progress has been made in developing computational methods
that predict single locations for proteins, assuming that proteins localize to
a single location. However, it has been shown that proteins localize to
multiple locations. While a few recent systems have attempted to predict
multiple locations of proteins, they typically treat locations as independent
or capture inter-dependencies by treating each locations-combination present in
the training set as an individual location-class. We present a new method and a
preliminary system we have developed that directly incorporates
inter-dependencies among locations into the multiple-location-prediction
process, using a collection of Bayesian network classifiers. We evaluate our
system on a dataset of single- and multi-localized proteins. Our results,
obtained by incorporating inter-dependencies, are significantly higher than
those obtained by classifiers that do not use inter-dependencies. The
performance of our system on multi-localized proteins is comparable to a top
performing system (YLoc+), without restricting predictions to be based only on
location-combinations present in the training set.
| no_new_dataset | 0.948202 |
1308.0245 | Arian Ojeda Gonz\'alez | Arian Ojeda Gonz\'alez, Silvio Gonz\'alez, Katy Alazo, Alexander
Calzadilla | Implementing an analytical formula for calculating M(3000)F2 in the
ionosonde operated in Havana | In CD (published in Spanish, with original title: "Implementaci\'on
de una f\'ormula anal\'itica para el calculo del coeficiente M(3000)F2") | VII Taller Internacional Inform\'atica y Geociencias Geoinfo 2004 | null | 15pp | physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Determining the factor M(3000)F2 is very important for ionograms analysis
obtained from an ionosonde. M(3000)F2 is the result of the maximum usable frequency
(MUF) for a 3000 km distance divided by the critical frequency of the F2
layer (FoF2). Nowadays, the graphic method to determine M(3000)F2 is used
at the Havana station for ionogram analysis. The purpose of this work is to
implement an analytic method that allows the direct determination of M(3000)F2,
so that it can be programmed and incorporated as part of the ionogram elaboration
process at the Havana station. When a PC is used, a set of points in the ionogram can
be determined. This dataset (f; h') is used to calculate analytically the
factor M(3000)F2. Comparisons between the implemented analytic method and the
old graphic method are shown. The new method is more accurate, and the errors
in the factor M(3000)F2 are diminished.
| [
{
"version": "v1",
"created": "Thu, 1 Aug 2013 15:38:35 GMT"
}
] | 2013-08-02T00:00:00 | [
[
"González",
"Arian Ojeda",
""
],
[
"González",
"Silvio",
""
],
[
"Alazo",
"Katy",
""
],
[
"Calzadilla",
"Alexander",
""
]
] | TITLE: Implementing an analytical formula for calculating M(3000)F2 in the
ionosonde operated in Havana
ABSTRACT: Determining the factor M(3000)F2 is very important for ionograms analysis
obtained from an ionosonde. M(3000)F2 is the result of the maximum usable frequency
(MUF) for a 3000 km distance divided by the critical frequency of the F2
layer (FoF2). Nowadays, the graphic method to determine M(3000)F2 is used
at the Havana station for ionogram analysis. The purpose of this work is to
implement an analytic method that allows the direct determination of M(3000)F2,
so that it can be programmed and incorporated as part of the ionogram elaboration
process at the Havana station. When a PC is used, a set of points in the ionogram can
be determined. This dataset (f; h') is used to calculate analytically the
factor M(3000)F2. Comparisons between the implemented analytic method and the
old graphic method are shown. The new method is more accurate, and the errors
in the factor M(3000)F2 are diminished.
| no_new_dataset | 0.945851 |
1308.0273 | Qiang Qiu | Qiang Qiu, Guillermo Sapiro | Learning Robust Subspace Clustering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a low-rank transformation-learning framework to robustify subspace
clustering. Many high-dimensional data, such as face images and motion
sequences, lie in a union of low-dimensional subspaces. The subspace clustering
problem has been extensively studied in the literature to partition such
high-dimensional data into clusters corresponding to their underlying
low-dimensional subspaces. However, low-dimensional intrinsic structures are
often violated for real-world observations, as they can be corrupted by errors
or deviate from ideal models. We propose to address this by learning a linear
transformation on subspaces using matrix rank, via its convex surrogate nuclear
norm, as the optimization criterion. The learned linear transformation restores
a low-rank structure for data from the same subspace, and, at the same time,
forces a high-rank structure for data from different subspaces. In this way, we
reduce variations within the subspaces, and increase separations between the
subspaces for more accurate subspace clustering. This proposed learned robust
subspace clustering framework significantly enhances the performance of
existing subspace clustering methods. To exploit the low-rank structures of the
transformed subspaces, we further introduce a subspace clustering technique,
called Robust Sparse Subspace Clustering, which efficiently combines robust PCA
with sparse modeling. We also discuss the online learning of the
transformation, and learning of the transformation while simultaneously
reducing the data dimensionality. Extensive experiments using public datasets
are presented, showing that the proposed approach significantly outperforms
state-of-the-art subspace clustering methods.
| [
{
"version": "v1",
"created": "Thu, 1 Aug 2013 17:31:37 GMT"
}
] | 2013-08-02T00:00:00 | [
[
"Qiu",
"Qiang",
""
],
[
"Sapiro",
"Guillermo",
""
]
] | TITLE: Learning Robust Subspace Clustering
ABSTRACT: We propose a low-rank transformation-learning framework to robustify subspace
clustering. Many high-dimensional data, such as face images and motion
sequences, lie in a union of low-dimensional subspaces. The subspace clustering
problem has been extensively studied in the literature to partition such
high-dimensional data into clusters corresponding to their underlying
low-dimensional subspaces. However, low-dimensional intrinsic structures are
often violated for real-world observations, as they can be corrupted by errors
or deviate from ideal models. We propose to address this by learning a linear
transformation on subspaces using matrix rank, via its convex surrogate nuclear
norm, as the optimization criterion. The learned linear transformation restores
a low-rank structure for data from the same subspace, and, at the same time,
forces a high-rank structure for data from different subspaces. In this way, we
reduce variations within the subspaces, and increase separations between the
subspaces for more accurate subspace clustering. This proposed learned robust
subspace clustering framework significantly enhances the performance of
existing subspace clustering methods. To exploit the low-rank structures of the
transformed subspaces, we further introduce a subspace clustering technique,
called Robust Sparse Subspace Clustering, which efficiently combines robust PCA
with sparse modeling. We also discuss the online learning of the
transformation, and learning of the transformation while simultaneously
reducing the data dimensionality. Extensive experiments using public datasets
are presented, showing that the proposed approach significantly outperforms
state-of-the-art subspace clustering methods.
| no_new_dataset | 0.952353 |
1308.0275 | Qiang Qiu | Qiang Qiu, Guillermo Sapiro, Ching-Hui Chen | Domain-invariant Face Recognition using Learned Low-rank Transformation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a low-rank transformation approach to compensate for face
variations due to changes in visual domains, such as pose and illumination. The
key idea is to learn discriminative linear transformations for face images
using matrix rank as the optimization criteria. The learned linear
transformations restore a shared low-rank structure for faces from the same
subject, and, at the same time, force a high-rank structure for faces from
different subjects. In this way, among the transformed faces, we reduce
variations caused by domain changes within the classes, and increase
separations between the classes for better face recognition across domains.
Extensive experiments using public datasets are presented to demonstrate the
effectiveness of our approach for face recognition across domains. The
potential of the approach for feature extraction in generic object recognition
and coded aperture design are discussed as well.
| [
{
"version": "v1",
"created": "Thu, 1 Aug 2013 17:34:36 GMT"
}
] | 2013-08-02T00:00:00 | [
[
"Qiu",
"Qiang",
""
],
[
"Sapiro",
"Guillermo",
""
],
[
"Chen",
"Ching-Hui",
""
]
] | TITLE: Domain-invariant Face Recognition using Learned Low-rank Transformation
ABSTRACT: We present a low-rank transformation approach to compensate for face
variations due to changes in visual domains, such as pose and illumination. The
key idea is to learn discriminative linear transformations for face images
using matrix rank as the optimization criteria. The learned linear
transformations restore a shared low-rank structure for faces from the same
subject, and, at the same time, force a high-rank structure for faces from
different subjects. In this way, among the transformed faces, we reduce
variations caused by domain changes within the classes, and increase
separations between the classes for better face recognition across domains.
Extensive experiments using public datasets are presented to demonstrate the
effectiveness of our approach for face recognition across domains. The
potential of the approach for feature extraction in generic object recognition
and coded aperture design are discussed as well.
| no_new_dataset | 0.958693 |
1302.0971 | Leon Abdillah | Leon Andretti Abdillah | Validasi data dengan menggunakan objek lookup pada borland delphi 7.0 | 16 pages | MATRIK. 7 (2005) 1-16 | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing an application with some tables must concern the validation of
input (especially in the Table Child) in order to maximize accuracy and data
input validation. This is called a lookup (taking data from another dataset). There are
two ways to look up data from the Table Parent: 1) using objects (DBLookupComboBox
and DBLookupListBox), or 2) arranging the properties of the field data types (shown
by using a DBGrid). This article uses Borland Delphi software (an Inprise
product). The method is presented in 5 (five) practical steps: 1) Relational
Database Scheme, 2) Form Design, 3) Object Database Relationships Scheme, 4)
Properties and Field Type Arrangement, and 5) Procedures. The results of this
paper are: 1) the relationships built using lookup objects are valid, and 2)
Delphi Lookup Objects can be used for 1-1, 1-N, and M-N relationships.
| [
{
"version": "v1",
"created": "Tue, 5 Feb 2013 09:32:54 GMT"
}
] | 2013-08-01T00:00:00 | [
[
"Abdillah",
"Leon Andretti",
""
]
] | TITLE: Validasi data dengan menggunakan objek lookup pada borland delphi 7.0
ABSTRACT: Developing an application with some tables must concern the validation of
input (especially in the Table Child) in order to maximize accuracy and data
input validation. This is called a lookup (taking data from another dataset). There are
two ways to look up data from the Table Parent: 1) using objects (DBLookupComboBox
and DBLookupListBox), or 2) arranging the properties of the field data types (shown
by using a DBGrid). This article uses Borland Delphi software (an Inprise
product). The method is presented in 5 (five) practical steps: 1) Relational
Database Scheme, 2) Form Design, 3) Object Database Relationships Scheme, 4)
Properties and Field Type Arrangement, and 5) Procedures. The results of this
paper are: 1) the relationships built using lookup objects are valid, and 2)
Delphi Lookup Objects can be used for 1-1, 1-N, and M-N relationships.
| no_new_dataset | 0.942929 |
1307.8136 | Brian Kent | Brian P. Kent, Alessandro Rinaldo, Timothy Verstynen | DeBaCl: A Python Package for Interactive DEnsity-BAsed CLustering | 28 pages, 9 figures, for associated software see
https://github.com/CoAxLab/DeBaCl | null | null | null | stat.ME cs.LG stat.ML | http://creativecommons.org/licenses/by/3.0/ | The level set tree approach of Hartigan (1975) provides a probabilistically
based and highly interpretable encoding of the clustering behavior of a
dataset. By representing the hierarchy of data modes as a dendrogram of the
level sets of a density estimator, this approach offers many advantages for
exploratory analysis and clustering, especially for complex and
high-dimensional data. Several R packages exist for level set tree estimation,
but their practical usefulness is limited by computational inefficiency,
absence of interactive graphical capabilities and, from a theoretical
perspective, reliance on asymptotic approximations. To make it easier for
practitioners to capture the advantages of level set trees, we have written the
Python package DeBaCl for DEnsity-BAsed CLustering. In this article we
illustrate how DeBaCl's level set tree estimates can be used for difficult
clustering tasks and interactive graphical data analysis. The package is
intended to promote the practical use of level set trees through improvements
in computational efficiency and a high degree of user customization. In
addition, the flexible algorithms implemented in DeBaCl enjoy finite sample
accuracy, as demonstrated in recent literature on density clustering. Finally,
we show the level set tree framework can be easily extended to deal with
functional data.
| [
{
"version": "v1",
"created": "Tue, 30 Jul 2013 20:19:26 GMT"
}
] | 2013-08-01T00:00:00 | [
[
"Kent",
"Brian P.",
""
],
[
"Rinaldo",
"Alessandro",
""
],
[
"Verstynen",
"Timothy",
""
]
] | TITLE: DeBaCl: A Python Package for Interactive DEnsity-BAsed CLustering
ABSTRACT: The level set tree approach of Hartigan (1975) provides a probabilistically
based and highly interpretable encoding of the clustering behavior of a
dataset. By representing the hierarchy of data modes as a dendrogram of the
level sets of a density estimator, this approach offers many advantages for
exploratory analysis and clustering, especially for complex and
high-dimensional data. Several R packages exist for level set tree estimation,
but their practical usefulness is limited by computational inefficiency,
absence of interactive graphical capabilities and, from a theoretical
perspective, reliance on asymptotic approximations. To make it easier for
practitioners to capture the advantages of level set trees, we have written the
Python package DeBaCl for DEnsity-BAsed CLustering. In this article we
illustrate how DeBaCl's level set tree estimates can be used for difficult
clustering tasks and interactive graphical data analysis. The package is
intended to promote the practical use of level set trees through improvements
in computational efficiency and a high degree of user customization. In
addition, the flexible algorithms implemented in DeBaCl enjoy finite sample
accuracy, as demonstrated in recent literature on density clustering. Finally,
we show the level set tree framework can be easily extended to deal with
functional data.
| no_new_dataset | 0.93835 |
1307.8305 | Tobias Glasmachers | Tobias Glasmachers | The Planning-ahead SMO Algorithm | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sequential minimal optimization (SMO) algorithm and variants thereof are
the de facto standard method for solving large quadratic programs for support
vector machine (SVM) training. In this paper we propose a simple yet powerful
modification. The main emphasis is on an algorithm improving the SMO step size
by planning-ahead. The theoretical analysis ensures its convergence to the
optimum. Experiments involving a large number of datasets were carried out to
demonstrate the superiority of the new algorithm.
| [
{
"version": "v1",
"created": "Wed, 31 Jul 2013 12:38:20 GMT"
}
] | 2013-08-01T00:00:00 | [
[
"Glasmachers",
"Tobias",
""
]
] | TITLE: The Planning-ahead SMO Algorithm
ABSTRACT: The sequential minimal optimization (SMO) algorithm and variants thereof are
the de facto standard method for solving large quadratic programs for support
vector machine (SVM) training. In this paper we propose a simple yet powerful
modification. The main emphasis is on an algorithm improving the SMO step size
by planning-ahead. The theoretical analysis ensures its convergence to the
optimum. Experiments involving a large number of datasets were carried out to
demonstrate the superiority of the new algorithm.
| no_new_dataset | 0.949902 |
1307.8405 | Jinyun Yan | Zixuan Wang, Jinyun Yan | Who and Where: People and Location Co-Clustering | 2013 IEEE International Conference on Image Processing | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the clustering problem on images where each image
contains patches in people and location domains. We exploit the correlation
between people and location domains, and proposed a semi-supervised
co-clustering algorithm to cluster images. Our algorithm updates the
correlation links at the runtime, and produces clustering in both domains
simultaneously. We conduct experiments on a manually collected dataset and a
Flickr dataset. The result shows that such correlation improves the
clustering performance.
| [
{
"version": "v1",
"created": "Wed, 31 Jul 2013 17:53:10 GMT"
}
] | 2013-08-01T00:00:00 | [
[
"Wang",
"Zixuan",
""
],
[
"Yan",
"Jinyun",
""
]
] | TITLE: Who and Where: People and Location Co-Clustering
ABSTRACT: In this paper, we consider the clustering problem on images where each image
contains patches in people and location domains. We exploit the correlation
between people and location domains, and proposed a semi-supervised
co-clustering algorithm to cluster images. Our algorithm updates the
correlation links at the runtime, and produces clustering in both domains
simultaneously. We conduct experiments on a manually collected dataset and a
Flickr dataset. The result shows that such correlation improves the
clustering performance.
| new_dataset | 0.959421 |
1307.8430 | Bryan Conroy | Bryan R. Conroy, Jennifer M. Walz, Brian Cheung, Paul Sajda | Fast Simultaneous Training of Generalized Linear Models (FaSTGLZ) | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an efficient algorithm for simultaneously training sparse
generalized linear models across many related problems, which may arise from
bootstrapping, cross-validation and nonparametric permutation testing. Our
approach leverages the redundancies across problems to obtain significant
computational improvements relative to solving the problems sequentially by a
conventional algorithm. We demonstrate our fast simultaneous training of
generalized linear models (FaSTGLZ) algorithm on a number of real-world
datasets, and we run otherwise computationally intensive bootstrapping and
permutation test analyses that are typically necessary for obtaining
statistically rigorous classification results and meaningful interpretation.
Code is freely available at http://liinc.bme.columbia.edu/fastglz.
| [
{
"version": "v1",
"created": "Wed, 31 Jul 2013 19:18:11 GMT"
}
] | 2013-08-01T00:00:00 | [
[
"Conroy",
"Bryan R.",
""
],
[
"Walz",
"Jennifer M.",
""
],
[
"Cheung",
"Brian",
""
],
[
"Sajda",
"Paul",
""
]
] | TITLE: Fast Simultaneous Training of Generalized Linear Models (FaSTGLZ)
ABSTRACT: We present an efficient algorithm for simultaneously training sparse
generalized linear models across many related problems, which may arise from
bootstrapping, cross-validation and nonparametric permutation testing. Our
approach leverages the redundancies across problems to obtain significant
computational improvements relative to solving the problems sequentially by a
conventional algorithm. We demonstrate our fast simultaneous training of
generalized linear models (FaSTGLZ) algorithm on a number of real-world
datasets, and we run otherwise computationally intensive bootstrapping and
permutation test analyses that are typically necessary for obtaining
statistically rigorous classification results and meaningful interpretation.
Code is freely available at http://liinc.bme.columbia.edu/fastglz.
| no_new_dataset | 0.949295 |
1307.7464 | Sharath Chandra Guntuku | Sharath Chandra Guntuku, Pratik Narang, Chittaranjan Hota | Real-time Peer-to-Peer Botnet Detection Framework based on Bayesian
Regularized Neural Network | null | null | null | null | cs.NI cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past decade, the Cyberspace has seen an increasing number of attacks
coming from botnets using the Peer-to-Peer (P2P) architecture. Peer-to-Peer
botnets use a decentralized Command & Control architecture. Moreover, a large
number of such botnets already exist, and newer versions - which significantly
differ from their parent bot - are also discovered practically every year. In
this work, the authors propose and implement a novel hybrid framework for
detecting P2P botnets in live network traffic by integrating Neural Networks
with Bayesian Regularization. Bayesian Regularization helps in achieving better
generalization of the dataset, thereby enabling the detection of botnet
activity even of those bots which were never used in training the Neural
Network. Hence such a framework is suitable for detection of newer and unseen
botnets in live traffic of a network. This was verified by testing the
Framework on test data unseen to the Detection module (using untrained botnet
dataset), and the authors were successful in detecting this activity with an
accuracy of 99.2%.
| [
{
"version": "v1",
"created": "Mon, 29 Jul 2013 05:21:37 GMT"
}
] | 2013-07-30T00:00:00 | [
[
"Guntuku",
"Sharath Chandra",
""
],
[
"Narang",
"Pratik",
""
],
[
"Hota",
"Chittaranjan",
""
]
] | TITLE: Real-time Peer-to-Peer Botnet Detection Framework based on Bayesian
Regularized Neural Network
ABSTRACT: Over the past decade, the Cyberspace has seen an increasing number of attacks
coming from botnets using the Peer-to-Peer (P2P) architecture. Peer-to-Peer
botnets use a decentralized Command & Control architecture. Moreover, a large
number of such botnets already exist, and newer versions - which significantly
differ from their parent bot - are also discovered practically every year. In
this work, the authors propose and implement a novel hybrid framework for
detecting P2P botnets in live network traffic by integrating Neural Networks
with Bayesian Regularization. Bayesian Regularization helps in achieving better
generalization of the dataset, thereby enabling the detection of botnet
activity even of those bots which were never used in training the Neural
Network. Hence such a framework is suitable for detection of newer and unseen
botnets in live traffic of a network. This was verified by testing the
Framework on test data unseen to the Detection module (using untrained botnet
dataset), and the authors were successful in detecting this activity with an
accuracy of 99.2%.
| no_new_dataset | 0.951142 |
1307.6889 | Nicholas Magliocca | N. R. Magliocca (1), E. C. Ellis (1), T. Oates (2) and M. Schmill (2)
((1) Department of Geography and Environmental Systems, University of
Maryland, Baltimore County, Baltimore, Maryland, USA,(2) Department of
Computer Science and Electrical Engineering, University of Maryland,
Baltimore County, Baltimore, Maryland, USA) | Contextualizing the global relevance of local land change observations | 5 pages, 4 figures, white paper | null | null | null | stat.AP cs.CY physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To understand global changes in the Earth system, scientists must generalize
globally from observations made locally and regionally. In land change science
(LCS), local field-based observations are costly and time consuming, and
generally obtained by researchers working at disparate local and regional
case-study sites chosen for different reasons. As a result, global synthesis
efforts in LCS tend to be based on non-statistical inferences subject to
geographic biases stemming from data limitations and fragmentation. Thus, a
fundamental challenge is the production of generalized knowledge that links
evidence of the causes and consequences of local land change to global patterns
and vice versa. The GLOBE system was designed to meet this challenge. GLOBE
aims to transform global change science by enabling new scientific workflows
based on statistically robust, globally relevant integration of local and
regional observations using an online social-computational and geovisualization
system. Consistent with the goals of Digital Earth, GLOBE has the capability to
assess the global relevance of local case-study findings within the context of
over 50 global biophysical, land-use, climate, and socio-economic datasets. We
demonstrate the implementation of one such assessment - a representativeness
analysis - with a recently published meta-study of changes in swidden
agriculture in tropical forests. The analysis provides a standardized indicator
to judge the global representativeness of the trends reported in the
meta-study, and a geovisualization is presented that highlights areas for which
sampling efforts can be reduced and those in need of further study. GLOBE will
enable researchers and institutions to rapidly share, compare, and synthesize
local and regional studies within the global context, as well as contributing
to the larger goal of creating a Digital Earth.
| [
{
"version": "v1",
"created": "Thu, 25 Jul 2013 22:40:04 GMT"
}
] | 2013-07-29T00:00:00 | [
[
"Magliocca",
"N. R.",
""
],
[
"Ellis",
"E. C.",
""
],
[
"Oates",
"T.",
""
],
[
"Schmill",
"M.",
""
]
] | TITLE: Contextualizing the global relevance of local land change observations
ABSTRACT: To understand global changes in the Earth system, scientists must generalize
globally from observations made locally and regionally. In land change science
(LCS), local field-based observations are costly and time consuming, and
generally obtained by researchers working at disparate local and regional
case-study sites chosen for different reasons. As a result, global synthesis
efforts in LCS tend to be based on non-statistical inferences subject to
geographic biases stemming from data limitations and fragmentation. Thus, a
fundamental challenge is the production of generalized knowledge that links
evidence of the causes and consequences of local land change to global patterns
and vice versa. The GLOBE system was designed to meet this challenge. GLOBE
aims to transform global change science by enabling new scientific workflows
based on statistically robust, globally relevant integration of local and
regional observations using an online social-computational and geovisualization
system. Consistent with the goals of Digital Earth, GLOBE has the capability to
assess the global relevance of local case-study findings within the context of
over 50 global biophysical, land-use, climate, and socio-economic datasets. We
demonstrate the implementation of one such assessment - a representativeness
analysis - with a recently published meta-study of changes in swidden
agriculture in tropical forests. The analysis provides a standardized indicator
to judge the global representativeness of the trends reported in the
meta-study, and a geovisualization is presented that highlights areas for which
sampling efforts can be reduced and those in need of further study. GLOBE will
enable researchers and institutions to rapidly share, compare, and synthesize
local and regional studies within the global context, as well as contributing
to the larger goal of creating a Digital Earth.
| no_new_dataset | 0.948775 |
1307.6923 | Rajib Rana | Rajib Rana, Mingrui Yang, Tim Wark, Chun Tung Chou, Wen Hu | A Deterministic Construction of Projection matrix for Adaptive
Trajectory Compression | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compressive Sensing, which offers exact reconstruction of sparse signal from
a small number of measurements, has tremendous potential for trajectory
compression. In order to optimize the compression, trajectory compression
algorithms need to adapt compression ratio subject to the compressibility of
the trajectory. Intuitively, the trajectory of an object moving on a straight
road is more compressible compared to the trajectory of an object moving on
winding roads; therefore, higher compression is achievable in the former case
compared to the latter. We propose an in-situ compression technique underpinned by
the support vector regression theory, which accurately predicts the
compressibility of a trajectory given the mean speed of the object and then
apply compressive sensing to adapt the compression to the compressibility of
the trajectory. The conventional encoding and decoding process of compressive
sensing uses predefined dictionary and measurement (or projection) matrix
pairs. However, the selection of an optimal pair is nontrivial and exhaustive,
and random selection of a pair does not guarantee the best compression
performance. In this paper, we propose a deterministic and data driven
construction for the projection matrix which is obtained by applying singular
value decomposition to a sparsifying dictionary learned from the dataset. We
analyze case studies of pedestrian and animal trajectory datasets including GPS
trajectory data from 127 subjects. The experimental results suggest that the
proposed adaptive compression algorithm, incorporating the deterministic
construction of projection matrix, offers significantly better compression
performance compared to the state-of-the-art alternatives.
| [
{
"version": "v1",
"created": "Fri, 26 Jul 2013 04:59:26 GMT"
}
] | 2013-07-29T00:00:00 | [
[
"Rana",
"Rajib",
""
],
[
"Yang",
"Mingrui",
""
],
[
"Wark",
"Tim",
""
],
[
"Chou",
"Chun Tung",
""
],
[
"Hu",
"Wen",
""
]
] | TITLE: A Deterministic Construction of Projection matrix for Adaptive
Trajectory Compression
ABSTRACT: Compressive Sensing, which offers exact reconstruction of sparse signal from
a small number of measurements, has tremendous potential for trajectory
compression. In order to optimize the compression, trajectory compression
algorithms need to adapt compression ratio subject to the compressibility of
the trajectory. Intuitively, the trajectory of an object moving on a straight
road is more compressible compared to the trajectory of an object moving on
winding roads; therefore, higher compression is achievable in the former case
compared to the latter. We propose an in-situ compression technique underpinned by
the support vector regression theory, which accurately predicts the
compressibility of a trajectory given the mean speed of the object and then
apply compressive sensing to adapt the compression to the compressibility of
the trajectory. The conventional encoding and decoding process of compressive
sensing uses predefined dictionary and measurement (or projection) matrix
pairs. However, the selection of an optimal pair is nontrivial and exhaustive,
and random selection of a pair does not guarantee the best compression
performance. In this paper, we propose a deterministic and data driven
construction for the projection matrix which is obtained by applying singular
value decomposition to a sparsifying dictionary learned from the dataset. We
analyze case studies of pedestrian and animal trajectory datasets including GPS
trajectory data from 127 subjects. The experimental results suggest that the
proposed adaptive compression algorithm, incorporating the deterministic
construction of projection matrix, offers significantly better compression
performance compared to the state-of-the-art alternatives.
| no_new_dataset | 0.94887 |
1307.7035 | Nadine Rons | Nadine Rons, Lucy Amez | Impact vitality: an indicator based on citing publications in search of
excellent scientists | 12 pages | Research Evaluation, 18(3), 233-241, 2009 | 10.3152/095820209X470563 | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper contributes to the quest for an operational definition of
'research excellence' and proposes a translation of the excellence concept into
a bibliometric indicator. Starting from a textual analysis of funding program
calls aimed at individual researchers and from the challenges for an indicator
at this level in particular, a new type of indicator is proposed. The Impact
Vitality indicator [RONS & AMEZ, 2008] reflects the vitality of the impact of a
researcher's publication output, based on the change in volume over time of the
citing publications. The introduced metric is shown to possess attractive
operational characteristics and meets a number of criteria which are desirable
when comparing individual researchers. The validity of one of the possible
indicator variants is tested using a small dataset of applicants for a senior
full time Research Fellowship. Options for further research involve testing
various indicator variants on larger samples linked to different kinds of
evaluations.
| [
{
"version": "v1",
"created": "Fri, 26 Jul 2013 13:48:36 GMT"
}
] | 2013-07-29T00:00:00 | [
[
"Rons",
"Nadine",
""
],
[
"Amez",
"Lucy",
""
]
] | TITLE: Impact vitality: an indicator based on citing publications in search of
excellent scientists
ABSTRACT: This paper contributes to the quest for an operational definition of
'research excellence' and proposes a translation of the excellence concept into
a bibliometric indicator. Starting from a textual analysis of funding program
calls aimed at individual researchers and from the challenges for an indicator
at this level in particular, a new type of indicator is proposed. The Impact
Vitality indicator [RONS & AMEZ, 2008] reflects the vitality of the impact of a
researcher's publication output, based on the change in volume over time of the
citing publications. The introduced metric is shown to possess attractive
operational characteristics and meets a number of criteria which are desirable
when comparing individual researchers. The validity of one of the possible
indicator variants is tested using a small dataset of applicants for a senior
full time Research Fellowship. Options for further research involve testing
various indicator variants on larger samples linked to different kinds of
evaluations.
| no_new_dataset | 0.928409 |
1305.3082 | Jialong Han | Jialong Han, Ji-Rong Wen | Mining Frequent Neighborhood Patterns in Large Labeled Graphs | 9 pages | null | 10.1145/2505515.2505530 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the years, frequent subgraphs have been an important sort of targeted
patterns in the pattern mining literature, where most works deal with
databases holding a number of graph transactions, e.g., chemical structures of
compounds. These methods rely heavily on the downward-closure property (DCP) of
the support measure to ensure an efficient pruning of the candidate patterns.
When switching to the emerging scenario of single-graph databases such as
Google Knowledge Graph and Facebook social graph, the traditional support
measure turns out to be trivial (either 0 or 1). However, to the best of our
knowledge, all attempts to redefine a single-graph support resulted in measures
that either lose DCP, or are no longer semantically intuitive.
This paper targets mining patterns in the single-graph setting. We resolve
the "DCP-intuitiveness" dilemma by shifting the mining target from frequent
subgraphs to frequent neighborhoods. A neighborhood is a specific topological
pattern where a vertex is embedded, and the pattern is frequent if it is shared
by a large portion (above a given threshold) of vertices. We show that the new
patterns not only maintain DCP, but also have equally significant semantics as
subgraph patterns. Experiments on real-life datasets display the feasibility of
our algorithms on relatively large graphs, as well as the capability of mining
interesting knowledge that is not discovered in prior works.
| [
{
"version": "v1",
"created": "Tue, 14 May 2013 09:46:17 GMT"
}
] | 2013-07-26T00:00:00 | [
[
"Han",
"Jialong",
""
],
[
"Wen",
"Ji-Rong",
""
]
] | TITLE: Mining Frequent Neighborhood Patterns in Large Labeled Graphs
ABSTRACT: Over the years, frequent subgraphs have been an important sort of targeted
patterns in the pattern mining literature, where most works deal with
databases holding a number of graph transactions, e.g., chemical structures of
compounds. These methods rely heavily on the downward-closure property (DCP) of
the support measure to ensure an efficient pruning of the candidate patterns.
When switching to the emerging scenario of single-graph databases such as
Google Knowledge Graph and Facebook social graph, the traditional support
measure turns out to be trivial (either 0 or 1). However, to the best of our
knowledge, all attempts to redefine a single-graph support resulted in measures
that either lose DCP, or are no longer semantically intuitive.
This paper targets mining patterns in the single-graph setting. We resolve
the "DCP-intuitiveness" dilemma by shifting the mining target from frequent
subgraphs to frequent neighborhoods. A neighborhood is a specific topological
pattern where a vertex is embedded, and the pattern is frequent if it is shared
by a large portion (above a given threshold) of vertices. We show that the new
patterns not only maintain DCP, but also have equally significant semantics as
subgraph patterns. Experiments on real-life datasets display the feasibility of
our algorithms on relatively large graphs, as well as the capability of mining
interesting knowledge that is not discovered in prior works.
| no_new_dataset | 0.948775 |
1304.3285 | Colorado Reed | Colorado Reed and Zoubin Ghahramani | Scaling the Indian Buffet Process via Submodular Maximization | 13 pages, 8 figures | In ICML 2013: JMLR W&CP 28 (3): 1013-1021, 2013 | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference for latent feature models is inherently difficult as the inference
space grows exponentially with the size of the input data and number of latent
features. In this work, we use Kurihara & Welling (2008)'s
maximization-expectation framework to perform approximate MAP inference for
linear-Gaussian latent feature models with an Indian Buffet Process (IBP)
prior. This formulation yields a submodular function of the features that
corresponds to a lower bound on the model evidence. By adding a constant to
this function, we obtain a nonnegative submodular function that can be
maximized via a greedy algorithm that obtains at least a one-third
approximation to the optimal solution. Our inference method scales linearly
with the size of the input data, and we show the efficacy of our method on the
largest datasets currently analyzed using an IBP model.
| [
{
"version": "v1",
"created": "Thu, 11 Apr 2013 13:20:51 GMT"
},
{
"version": "v2",
"created": "Wed, 8 May 2013 20:15:08 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Jun 2013 14:24:58 GMT"
},
{
"version": "v4",
"created": "Wed, 24 Jul 2013 19:20:15 GMT"
}
] | 2013-07-25T00:00:00 | [
[
"Reed",
"Colorado",
""
],
[
"Ghahramani",
"Zoubin",
""
]
] | TITLE: Scaling the Indian Buffet Process via Submodular Maximization
ABSTRACT: Inference for latent feature models is inherently difficult as the inference
space grows exponentially with the size of the input data and number of latent
features. In this work, we use Kurihara & Welling (2008)'s
maximization-expectation framework to perform approximate MAP inference for
linear-Gaussian latent feature models with an Indian Buffet Process (IBP)
prior. This formulation yields a submodular function of the features that
corresponds to a lower bound on the model evidence. By adding a constant to
this function, we obtain a nonnegative submodular function that can be
maximized via a greedy algorithm that obtains at least a one-third
approximation to the optimal solution. Our inference method scales linearly
with the size of the input data, and we show the efficacy of our method on the
largest datasets currently analyzed using an IBP model.
| no_new_dataset | 0.947817 |
1307.6462 | Travis Gagie | Hector Ferrada, Travis Gagie, Tommi Hirvola, Simon J. Puglisi | AliBI: An Alignment-Based Index for Genomic Datasets | null | null | null | null | cs.DS cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With current hardware and software, a standard computer can now hold in RAM
an index for approximate pattern matching on about half a dozen human genomes.
Sequencing technologies have improved so quickly, however, that scientists will
soon demand indexes for thousands of genomes. Whereas most researchers who have
addressed this problem have proposed completely new kinds of indexes, we
recently described a simple technique that scales standard indexes to work on
more genomes. Our main idea was to filter the dataset with LZ77, build a
standard index for the filtered file, and then create a hybrid of that standard
index and an LZ77-based index. In this paper we describe how to adapt our
technique to use alignments instead of LZ77, in order to simplify and speed up both
preprocessing and random access.
| [
{
"version": "v1",
"created": "Wed, 24 Jul 2013 15:42:23 GMT"
}
] | 2013-07-25T00:00:00 | [
[
"Ferrada",
"Hector",
""
],
[
"Gagie",
"Travis",
""
],
[
"Hirvola",
"Tommi",
""
],
[
"Puglisi",
"Simon J.",
""
]
] | TITLE: AliBI: An Alignment-Based Index for Genomic Datasets
ABSTRACT: With current hardware and software, a standard computer can now hold in RAM
an index for approximate pattern matching on about half a dozen human genomes.
Sequencing technologies have improved so quickly, however, that scientists will
soon demand indexes for thousands of genomes. Whereas most researchers who have
addressed this problem have proposed completely new kinds of indexes, we
recently described a simple technique that scales standard indexes to work on
more genomes. Our main idea was to filter the dataset with LZ77, build a
standard index for the filtered file, and then create a hybrid of that standard
index and an LZ77-based index. In this paper we describe how to adapt our
technique to use alignments instead of LZ77, in order to simplify and speed up both
preprocessing and random access.
| no_new_dataset | 0.947624 |
1307.5894 | Md Mansurul Bhuiyan | Mansurul A Bhuiyan and Mohammad Al Hasan | MIRAGE: An Iterative MapReduce based FrequentSubgraph Mining Algorithm | null | null | null | null | cs.DB cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Frequent subgraph mining (FSM) is an important task for exploratory data
analysis on graph data. Over the years, many algorithms have been proposed to
solve this task. These algorithms assume that the data structure of the mining
task is small enough to fit in the main memory of a computer. However, as the
real-world graph data grows, both in size and quantity, such an assumption does
not hold any longer. To overcome this, some graph database-centric methods have
been proposed in recent years for solving FSM; however, a distributed solution
using MapReduce paradigm has not been explored extensively. Since, MapReduce is
becoming the de- facto paradigm for computation on massive data, an efficient
FSM algorithm on this paradigm is of huge demand. In this work, we propose a
frequent subgraph mining algorithm called MIRAGE which uses an iterative
MapReduce based framework. MIRAGE is complete as it returns all the frequent
subgraphs for a given user-defined support, and it is efficient as it applies
all the optimizations that the latest FSM algorithms adopt. Our experiments
with real life and large synthetic datasets validate the effectiveness of
MIRAGE for mining frequent subgraphs from large graph datasets. The source code
of MIRAGE is available from www.cs.iupui.edu/alhasan/software/
| [
{
"version": "v1",
"created": "Mon, 22 Jul 2013 21:26:00 GMT"
}
] | 2013-07-24T00:00:00 | [
[
"Bhuiyan",
"Mansurul A",
""
],
[
"Hasan",
"Mohammad Al",
""
]
] | TITLE: MIRAGE: An Iterative MapReduce based FrequentSubgraph Mining Algorithm
ABSTRACT: Frequent subgraph mining (FSM) is an important task for exploratory data
analysis on graph data. Over the years, many algorithms have been proposed to
solve this task. These algorithms assume that the data structure of the mining
task is small enough to fit in the main memory of a computer. However, as the
real-world graph data grows, both in size and quantity, such an assumption does
not hold any longer. To overcome this, some graph database-centric methods have
been proposed in recent years for solving FSM; however, a distributed solution
using MapReduce paradigm has not been explored extensively. Since, MapReduce is
becoming the de- facto paradigm for computation on massive data, an efficient
FSM algorithm on this paradigm is of huge demand. In this work, we propose a
frequent subgraph mining algorithm called MIRAGE which uses an iterative
MapReduce based framework. MIRAGE is complete as it returns all the frequent
subgraphs for a given user-defined support, and it is efficient as it applies
all the optimizations that the latest FSM algorithms adopt. Our experiments
with real life and large synthetic datasets validate the effectiveness of
MIRAGE for mining frequent subgraphs from large graph datasets. The source code
of MIRAGE is available from www.cs.iupui.edu/alhasan/software/
| no_new_dataset | 0.944587 |
1307.6023 | Najla Al-Saati | Dr. Najla Akram AL-Saati and Marwa Abd-AlKareem | The Use of Cuckoo Search in Estimating the Parameters of Software
Reliability Growth Models | null | (IJCSIS) International Journal of Computer Science and Information
Security, Vol. 11, No. 6, June 2013 | null | null | cs.AI cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work aims to investigate the reliability of software products as an
important attribute of computer programs; it helps to decide the degree of
trustworthiness a program has in accomplishing its specific functions. This is
done using the Software Reliability Growth Models (SRGMs) through the
estimation of their parameters. The parameters are estimated in this work based
on the available failure data and with the search techniques of Swarm
Intelligence, namely, the Cuckoo Search (CS) due to its efficiency,
effectiveness and robustness. A number of SRGMs is studied, and the results are
compared to Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO)
and extended ACO. Results show that CS outperformed both PSO and ACO in finding
better parameters tested using identical datasets. It was sometimes
outperformed by the extended ACO. Also in this work, the percentages of
training data to testing data are investigated to show their impact on the
results.
| [
{
"version": "v1",
"created": "Tue, 23 Jul 2013 11:22:31 GMT"
}
] | 2013-07-24T00:00:00 | [
[
"AL-Saati",
"Dr. Najla Akram",
""
],
[
"Abd-AlKareem",
"Marwa",
""
]
] | TITLE: The Use of Cuckoo Search in Estimating the Parameters of Software
Reliability Growth Models
ABSTRACT: This work aims to investigate the reliability of software products as an
important attribute of computer programs; it helps to decide the degree of
trustworthiness a program has in accomplishing its specific functions. This is
done using the Software Reliability Growth Models (SRGMs) through the
estimation of their parameters. The parameters are estimated in this work based
on the available failure data and with the search techniques of Swarm
Intelligence, namely, the Cuckoo Search (CS) due to its efficiency,
effectiveness and robustness. A number of SRGMs is studied, and the results are
compared to Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO)
and extended ACO. Results show that CS outperformed both PSO and ACO in finding
better parameters tested using identical datasets. It was sometimes
outperformed by the extended ACO. Also in this work, the percentages of
training data to testing data are investigated to show their impact on the
results.
| no_new_dataset | 0.947769 |
1307.5591 | Subra Mukherjee | Subra Mukherjee, Karen Das | A Novel Equation based Classifier for Detecting Human in Images | published with international journal of Computer Applications (IJCA) | International Journal of Computer Applications 72(6):9-16, June
2013 | 10.5120/12496-7272 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Shape based classification is one of the most challenging tasks in the field
of computer vision. Shapes play a vital role in object recognition. The basic
shapes in an image can occur in varying scale, position and orientation.
Especially when detecting humans, the task becomes more challenging owing to the
largely varying size, shape, posture and clothing of humans. So, in our work we
detect humans based on the head-shoulder shape, as it is the most unvarying part
of the human body. Here, first, a new and novel equation, named the Omega
Equation, that describes the shape of the human head-shoulder is developed, and
based on this equation a classifier is designed particularly for detecting human
presence in a scene. The classifier detects humans by analyzing some of the
discriminative features of the values of the parameters obtained from the Omega
equation. The proposed method has been tested on a variety of shape datasets,
taking into consideration the complexities of the human head-shoulder shape. In
all the experiments the proposed method demonstrated satisfactory results.
| [
{
"version": "v1",
"created": "Mon, 22 Jul 2013 05:13:03 GMT"
}
] | 2013-07-23T00:00:00 | [
[
"Mukherjee",
"Subra",
""
],
[
"Das",
"Karen",
""
]
] | TITLE: A Novel Equation based Classifier for Detecting Human in Images
ABSTRACT: Shape based classification is one of the most challenging tasks in the field
of computer vision. Shapes play a vital role in object recognition. The basic
shapes in an image can occur in varying scale, position and orientation.
Especially when detecting humans, the task becomes more challenging owing to the
largely varying size, shape, posture and clothing of humans. So, in our work we
detect humans based on the head-shoulder shape, as it is the most unvarying part
of the human body. Here, first, a new and novel equation, named the Omega
Equation, that describes the shape of the human head-shoulder is developed, and
based on this equation a classifier is designed particularly for detecting human
presence in a scene. The classifier detects humans by analyzing some of the
discriminative features of the values of the parameters obtained from the Omega
equation. The proposed method has been tested on a variety of shape datasets,
taking into consideration the complexities of the human head-shoulder shape. In
all the experiments the proposed method demonstrated satisfactory results.
| no_new_dataset | 0.951774 |
1307.5599 | Naresh Kumar Mallenahalli Prof. Dr. | M. Naresh Kumar | Performance comparison of State-of-the-art Missing Value Imputation
Algorithms on Some Benchmark Datasets | 17 pages, 6 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision making from data involves identifying a set of attributes that
contribute to effective decision making through computational intelligence. The
presence of missing values greatly influences the selection of the right set of
attributes and this degrades the classification accuracies of the
classifiers. As missing values are quite common in the data collection phase
during field experiments or clinical trials, appropriate handling would improve
the classifier performance. In this paper we present a review of recently
developed missing value imputation algorithms and compare their performance on
some benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 22 Jul 2013 06:50:21 GMT"
}
] | 2013-07-23T00:00:00 | [
[
"Kumar",
"M. Naresh",
""
]
] | TITLE: Performance comparison of State-of-the-art Missing Value Imputation
Algorithms on Some Benchmark Datasets
ABSTRACT: Decision making from data involves identifying a set of attributes that
contribute to effective decision making through computational intelligence. The
presence of missing values greatly influences the selection of the right set of
attributes and this degrades the classification accuracies of the
classifiers. As missing values are quite common in the data collection phase
during field experiments or clinical trials, appropriate handling would improve
the classifier performance. In this paper we present a review of recently
developed missing value imputation algorithms and compare their performance on
some benchmark datasets.
| no_new_dataset | 0.945851 |
1307.5702 | Lucas Paletta | Samuel F. Dodge and Lina J. Karam | Is Bottom-Up Attention Useful for Scene Recognition? | null | null | null | ISACS/2013/04 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human visual system employs a selective attention mechanism to understand
the visual world in an efficient manner. In this paper, we show how
computational models of this mechanism can be exploited for the computer vision
application of scene recognition. First, we consider saliency weighting and
saliency pruning, and provide a comparison of the performance of different
attention models in these approaches in terms of classification accuracy.
Pruning can achieve a high degree of computational savings without
significantly sacrificing classification accuracy. In saliency weighting,
however, we found that classification performance does not improve. In
addition, we present a new method to incorporate salient and non-salient
regions for improved classification accuracy. We treat the salient and
non-salient regions separately and combine them using Multiple Kernel Learning.
We evaluate our approach using the UIUC sports dataset and find that with a
small training size, our method improves upon the classification accuracy of
the baseline bag of features approach.
| [
{
"version": "v1",
"created": "Mon, 22 Jul 2013 13:38:16 GMT"
}
] | 2013-07-23T00:00:00 | [
[
"Dodge",
"Samuel F.",
""
],
[
"Karam",
"Lina J.",
""
]
] | TITLE: Is Bottom-Up Attention Useful for Scene Recognition?
ABSTRACT: The human visual system employs a selective attention mechanism to understand
the visual world in an efficient manner. In this paper, we show how
computational models of this mechanism can be exploited for the computer vision
application of scene recognition. First, we consider saliency weighting and
saliency pruning, and provide a comparison of the performance of different
attention models in these approaches in terms of classification accuracy.
Pruning can achieve a high degree of computational savings without
significantly sacrificing classification accuracy. In saliency weighting,
however, we found that classification performance does not improve. In
addition, we present a new method to incorporate salient and non-salient
regions for improved classification accuracy. We treat the salient and
non-salient regions separately and combine them using Multiple Kernel Learning.
We evaluate our approach using the UIUC sports dataset and find that with a
small training size, our method improves upon the classification accuracy of
the baseline bag of features approach.
| no_new_dataset | 0.946695 |
1303.7093 | Aravind Kota Gopalakrishna | Aravind Kota Gopalakrishna, Tanir Ozcelebi, Antonio Liotta, Johan J.
Lukkien | Relevance As a Metric for Evaluating Machine Learning Algorithms | To Appear at International Conference on Machine Learning and Data
Mining (MLDM 2013), 14 pages, 6 figures | null | 10.1007/978-3-642-39712-7_15 | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In machine learning, the choice of a learning algorithm that is suitable for
the application domain is critical. The performance metric used to compare
different algorithms must also reflect the concerns of users in the application
domain under consideration. In this work, we propose a novel probability-based
performance metric called Relevance Score for evaluating supervised learning
algorithms. We evaluate the proposed metric through empirical analysis on a
dataset gathered from an intelligent lighting pilot installation. In comparison
to the commonly used Classification Accuracy metric, the Relevance Score proves
to be more appropriate for a certain class of applications.
| [
{
"version": "v1",
"created": "Thu, 28 Mar 2013 11:01:53 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Apr 2013 19:12:06 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Apr 2013 14:26:49 GMT"
}
] | 2013-07-19T00:00:00 | [
[
"Gopalakrishna",
"Aravind Kota",
""
],
[
"Ozcelebi",
"Tanir",
""
],
[
"Liotta",
"Antonio",
""
],
[
"Lukkien",
"Johan J.",
""
]
] | TITLE: Relevance As a Metric for Evaluating Machine Learning Algorithms
ABSTRACT: In machine learning, the choice of a learning algorithm that is suitable for
the application domain is critical. The performance metric used to compare
different algorithms must also reflect the concerns of users in the application
domain under consideration. In this work, we propose a novel probability-based
performance metric called Relevance Score for evaluating supervised learning
algorithms. We evaluate the proposed metric through empirical analysis on a
dataset gathered from an intelligent lighting pilot installation. In comparison
to the commonly used Classification Accuracy metric, the Relevance Score proves
to be more appropriate for a certain class of applications.
| no_new_dataset | 0.954435 |
1307.4531 | Jakub Mikians | Jakub Mikians, L\'aszl\'o Gyarmati, Vijay Erramilli, Nikolaos
Laoutaris | Crowd-assisted Search for Price Discrimination in E-Commerce: First
results | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After years of speculation, price discrimination in e-commerce driven by the
personal information that users leave (involuntarily) online, has started
attracting the attention of privacy researchers, regulators, and the press. In
our previous work we demonstrated instances of products whose prices varied
online depending on the location and the characteristics of prospective online
buyers. In an effort to scale up our study we have turned to crowd-sourcing.
Using a browser extension we have collected the prices obtained by an initial
set of 340 test users as they surf the web for products of their interest. This
initial dataset has permitted us to identify a set of online stores where price
variation is more pronounced. We have focused on this subset, and performed a
systematic crawl of their products and logged the prices obtained from
different vantage points and browser configurations. By analyzing this dataset
we see that there exist several retailers that return prices for the same
product that vary by 10%-30% whereas there also exist isolated cases that may
vary up to a multiplicative factor, e.g., x2. To the best of our efforts we
could not attribute the observed price gaps to currency, shipping, or taxation
differences.
| [
{
"version": "v1",
"created": "Wed, 17 Jul 2013 08:24:40 GMT"
}
] | 2013-07-18T00:00:00 | [
[
"Mikians",
"Jakub",
""
],
[
"Gyarmati",
"László",
""
],
[
"Erramilli",
"Vijay",
""
],
[
"Laoutaris",
"Nikolaos",
""
]
] | TITLE: Crowd-assisted Search for Price Discrimination in E-Commerce: First
results
ABSTRACT: After years of speculation, price discrimination in e-commerce driven by the
personal information that users leave (involuntarily) online, has started
attracting the attention of privacy researchers, regulators, and the press. In
our previous work we demonstrated instances of products whose prices varied
online depending on the location and the characteristics of prospective online
buyers. In an effort to scale up our study we have turned to crowd-sourcing.
Using a browser extension we have collected the prices obtained by an initial
set of 340 test users as they surf the web for products of their interest. This
initial dataset has permitted us to identify a set of online stores where price
variation is more pronounced. We have focused on this subset, and performed a
systematic crawl of their products and logged the prices obtained from
different vantage points and browser configurations. By analyzing this dataset
we see that there exist several retailers that return prices for the same
product that vary by 10%-30% whereas there also exist isolated cases that may
vary up to a multiplicative factor, e.g., x2. To the best of our efforts we
could not attribute the observed price gaps to currency, shipping, or taxation
differences.
| new_dataset | 0.932083 |
1307.4653 | Massimiliano Pontil | Bernardino Romera-Paredes and Massimiliano Pontil | A New Convex Relaxation for Tensor Completion | null | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of learning a tensor from a set of linear measurements.
A prominent methodology for this problem is based on a generalization of trace
norm regularization, which has been used extensively for learning low rank
matrices, to the tensor setting. In this paper, we highlight some limitations
of this approach and propose an alternative convex relaxation on the Euclidean
ball. We then describe a technique to solve the associated regularization
problem, which builds upon the alternating direction method of multipliers.
Experiments on one synthetic dataset and two real datasets indicate that the
proposed method improves significantly over tensor trace norm regularization in
terms of estimation error, while remaining computationally tractable.
| [
{
"version": "v1",
"created": "Wed, 17 Jul 2013 14:38:47 GMT"
}
] | 2013-07-18T00:00:00 | [
[
"Romera-Paredes",
"Bernardino",
""
],
[
"Pontil",
"Massimiliano",
""
]
] | TITLE: A New Convex Relaxation for Tensor Completion
ABSTRACT: We study the problem of learning a tensor from a set of linear measurements.
A prominent methodology for this problem is based on a generalization of trace
norm regularization, which has been used extensively for learning low rank
matrices, to the tensor setting. In this paper, we highlight some limitations
of this approach and propose an alternative convex relaxation on the Euclidean
ball. We then describe a technique to solve the associated regularization
problem, which builds upon the alternating direction method of multipliers.
Experiments on one synthetic dataset and two real datasets indicate that the
proposed method improves significantly over tensor trace norm regularization in
terms of estimation error, while remaining computationally tractable.
| no_new_dataset | 0.947769 |
1006.0814 | Bin Jiang | Bin Jiang and Tao Jia | Zipf's Law for All the Natural Cities in the United States: A Geospatial
Perspective | 10 pages, 6 figures, 4 tables, substantially revised | International Journal of Geographical Information Science, 25(8),
2011, 1269-1281 | null | null | physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper provides a new geospatial perspective on whether or not Zipf's law
holds for all cities or for the largest cities in the United States using a
massive dataset and its computing. A major problem around this issue is how to
define cities or city boundaries. Most of the investigations of Zipf's law rely
on the demarcations of cities imposed by census data, e.g., metropolitan areas
and census-designated places. These demarcations or definitions (of cities) are
criticized for being subjective or even arbitrary. Alternative solutions to
defining cities are suggested, but they still rely on census data for their
definitions. In this paper we demarcate urban agglomerations by clustering
street nodes (including intersections and ends), forming what we call natural
cities. Based on the demarcation, we found that Zipf's law holds remarkably
well for all the natural cities (over 2-4 million in total) across the United
States. The holding of the law shows little sensitivity to the clustering
resolution used for demarcating the natural cities. This is in sharp contrast
to urban areas, as defined in the census data, for which Zipf's law does not
hold stably.
Keywords: Natural cities, power law, data-intensive geospatial computing,
scaling of geographic space
| [
{
"version": "v1",
"created": "Fri, 4 Jun 2010 09:20:01 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jul 2010 15:20:16 GMT"
}
] | 2013-07-17T00:00:00 | [
[
"Jiang",
"Bin",
""
],
[
"Jia",
"Tao",
""
]
] | TITLE: Zipf's Law for All the Natural Cities in the United States: A Geospatial
Perspective
ABSTRACT: This paper provides a new geospatial perspective on whether or not Zipf's law
holds for all cities or for the largest cities in the United States using a
massive dataset and its computing. A major problem around this issue is how to
define cities or city boundaries. Most of the investigations of Zipf's law rely
on the demarcations of cities imposed by census data, e.g., metropolitan areas
and census-designated places. These demarcations or definitions (of cities) are
criticized for being subjective or even arbitrary. Alternative solutions to
defining cities are suggested, but they still rely on census data for their
definitions. In this paper we demarcate urban agglomerations by clustering
street nodes (including intersections and ends), forming what we call natural
cities. Based on the demarcation, we found that Zipf's law holds remarkably
well for all the natural cities (over 2-4 million in total) across the United
States. The holding of the law shows little sensitivity to the clustering
resolution used for demarcating the natural cities. This is in sharp contrast
to urban areas, as defined in the census data, for which Zipf's law does not
hold stably.
Keywords: Natural cities, power law, data-intensive geospatial computing,
scaling of geographic space
| no_new_dataset | 0.953794 |
1210.2376 | Manlio De Domenico | Manlio De Domenico, Antonio Lima, Mirco Musolesi | Interdependence and Predictability of Human Mobility and Social
Interactions | 21 pages, 9 figures | null | null | null | physics.soc-ph cs.SI nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous studies have shown that human movement is predictable to a certain
extent at different geographic scales. Existing prediction techniques exploit
only the past history of the person taken into consideration as input of the
predictors. In this paper, we show that by means of multivariate nonlinear time
series prediction techniques it is possible to increase the forecasting
accuracy by considering movements of friends, people, or more in general
entities, with correlated mobility patterns (i.e., characterised by high mutual
information) as inputs. Finally, we evaluate the proposed techniques on the
Nokia Mobile Data Challenge and Cabspotting datasets.
| [
{
"version": "v1",
"created": "Mon, 8 Oct 2012 18:44:59 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Jul 2013 17:24:05 GMT"
}
] | 2013-07-17T00:00:00 | [
[
"De Domenico",
"Manlio",
""
],
[
"Lima",
"Antonio",
""
],
[
"Musolesi",
"Mirco",
""
]
] | TITLE: Interdependence and Predictability of Human Mobility and Social
Interactions
ABSTRACT: Previous studies have shown that human movement is predictable to a certain
extent at different geographic scales. Existing prediction techniques exploit
only the past history of the person taken into consideration as input of the
predictors. In this paper, we show that by means of multivariate nonlinear time
series prediction techniques it is possible to increase the forecasting
accuracy by considering movements of friends, people, or more in general
entities, with correlated mobility patterns (i.e., characterised by high mutual
information) as inputs. Finally, we evaluate the proposed techniques on the
Nokia Mobile Data Challenge and Cabspotting datasets.
| no_new_dataset | 0.946051 |
1307.4264 | Rong Zheng | Huy Nguyen and Rong Zheng | A Data-driven Study of Influences in Twitter Communities | 11 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a quantitative study of Twitter, one of the most popular
micro-blogging services, from the perspective of user influence. We crawl
several datasets from the most active communities on Twitter and obtain 20.5
million user profiles, along with 420.2 million directed relations and 105
million tweets among the users. User influence scores are obtained from
influence measurement services, Klout and PeerIndex. Our analysis reveals
interesting findings, including non-power-law influence distribution, strong
reciprocity among users in a community, the existence of homophily and
hierarchical relationships in social influences. Most importantly, we observe
that whether a user retweets a message is strongly influenced by the first of
his followees who posted that message. To capture such an effect, we propose
the first influencer (FI) information diffusion model and show through
extensive evaluation that compared to the widely adopted independent cascade
model, the FI model is more stable and more accurate in predicting influence
spreads in Twitter communities.
| [
{
"version": "v1",
"created": "Tue, 16 Jul 2013 13:07:24 GMT"
}
] | 2013-07-17T00:00:00 | [
[
"Nguyen",
"Huy",
""
],
[
"Zheng",
"Rong",
""
]
] | TITLE: A Data-driven Study of Influences in Twitter Communities
ABSTRACT: This paper presents a quantitative study of Twitter, one of the most popular
micro-blogging services, from the perspective of user influence. We crawl
several datasets from the most active communities on Twitter and obtain 20.5
million user profiles, along with 420.2 million directed relations and 105
million tweets among the users. User influence scores are obtained from
influence measurement services, Klout and PeerIndex. Our analysis reveals
interesting findings, including non-power-law influence distribution, strong
reciprocity among users in a community, the existence of homophily and
hierarchical relationships in social influences. Most importantly, we observe
that whether a user retweets a message is strongly influenced by the first of
his followees who posted that message. To capture such an effect, we propose
the first influencer (FI) information diffusion model and show through
extensive evaluation that compared to the widely adopted independent cascade
model, the FI model is more stable and more accurate in predicting influence
spreads in Twitter communities.
| no_new_dataset | 0.950549 |
1301.4083 | \c{C}a\u{g}lar G\"ul\c{c}ehre | \c{C}a\u{g}lar G\"ul\c{c}ehre and Yoshua Bengio | Knowledge Matters: Importance of Prior Information for Optimization | 37 Pages, 5 figures, 5 tables JMLR Special Topics on Representation
Learning Submission | null | null | null | cs.LG cs.CV cs.NE stat.ML | http://creativecommons.org/licenses/by-nc-sa/3.0/ | We explore the effect of introducing prior information into the intermediate
level of neural networks for a learning task on which all the state-of-the-art
machine learning algorithms tested failed to learn. We motivate our work from
the hypothesis that humans learn such intermediate concepts from other
individuals via a form of supervision or guidance using a curriculum. The
experiments we have conducted provide positive evidence in favor of this
hypothesis. In our experiments, a two-tiered MLP architecture is trained on a
dataset with 64x64 binary inputs images, each image with three sprites. The
final task is to decide whether all the sprites are the same or one of them is
different. Sprites are pentomino tetris shapes and they are placed in an image
with different locations using scaling and rotation transformations. The first
part of the two-tiered MLP is pre-trained with intermediate-level targets being
the presence of sprites at each location, while the second part takes the
output of the first part as input and predicts the final task's target binary
event. The two-tiered MLP architecture, with a few tens of thousand examples,
was able to learn the task perfectly, whereas all other algorithms (including
not only unsupervised pre-training, but also traditional algorithms like SVMs,
decision trees and boosting) perform no better than chance. We hypothesize that the
optimization difficulty involved when the intermediate pre-training is not
performed is due to the {\em composition} of two highly non-linear tasks. Our
findings are also consistent with hypotheses on cultural learning inspired by
the observations of optimization problems with deep learning, presumably
because of effective local minima.
| [
{
"version": "v1",
"created": "Thu, 17 Jan 2013 13:06:52 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Jan 2013 05:43:57 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Jan 2013 17:11:19 GMT"
},
{
"version": "v4",
"created": "Wed, 13 Mar 2013 20:13:08 GMT"
},
{
"version": "v5",
"created": "Fri, 15 Mar 2013 05:41:47 GMT"
},
{
"version": "v6",
"created": "Sat, 13 Jul 2013 16:38:36 GMT"
}
] | 2013-07-16T00:00:00 | [
[
"Gülçehre",
"Çağlar",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Knowledge Matters: Importance of Prior Information for Optimization
ABSTRACT: We explore the effect of introducing prior information into the intermediate
level of neural networks for a learning task on which all the state-of-the-art
machine learning algorithms tested failed to learn. We motivate our work from
the hypothesis that humans learn such intermediate concepts from other
individuals via a form of supervision or guidance using a curriculum. The
experiments we have conducted provide positive evidence in favor of this
hypothesis. In our experiments, a two-tiered MLP architecture is trained on a
dataset with 64x64 binary inputs images, each image with three sprites. The
final task is to decide whether all the sprites are the same or one of them is
different. Sprites are pentomino tetris shapes and they are placed in an image
with different locations using scaling and rotation transformations. The first
part of the two-tiered MLP is pre-trained with intermediate-level targets being
the presence of sprites at each location, while the second part takes the
output of the first part as input and predicts the final task's target binary
event. The two-tiered MLP architecture, with a few tens of thousand examples,
was able to learn the task perfectly, whereas all other algorithms (including
not only unsupervised pre-training, but also traditional algorithms like SVMs,
decision trees and boosting) perform no better than chance. We hypothesize that the
optimization difficulty involved when the intermediate pre-training is not
performed is due to the {\em composition} of two highly non-linear tasks. Our
findings are also consistent with hypotheses on cultural learning inspired by
the observations of optimization problems with deep learning, presumably
because of effective local minima.
| no_new_dataset | 0.947332 |
1307.3626 | Sadegh Aliakbary | Sadegh Aliakbary, Sadegh Motallebi, Jafar Habibi, Ali Movaghar | Learning an Integrated Distance Metric for Comparing Structure of
Complex Networks | null | null | null | null | cs.SI cs.AI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph comparison plays a major role in many network applications. We often
need a similarity metric for comparing networks according to their structural
properties. Various network features - such as degree distribution and
clustering coefficient - provide measurements for comparing networks from
different points of view, but a global and integrated distance metric is still
missing. In this paper, we employ distance metric learning algorithms in order
to construct an integrated distance metric for comparing structural properties
of complex networks. According to natural witnesses of network similarities
(such as network categories), the distance metric is learned by means of a
dataset of labeled real networks. To evaluate our proposed method, which is
called NetDistance, we applied it as the distance metric in
K-nearest-neighbors classification. Empirical results show that NetDistance
outperforms previous methods by at least 20 percent with respect to precision.
| [
{
"version": "v1",
"created": "Sat, 13 Jul 2013 07:53:19 GMT"
}
] | 2013-07-16T00:00:00 | [
[
"Aliakbary",
"Sadegh",
""
],
[
"Motallebi",
"Sadegh",
""
],
[
"Habibi",
"Jafar",
""
],
[
"Movaghar",
"Ali",
""
]
] | TITLE: Learning an Integrated Distance Metric for Comparing Structure of
Complex Networks
ABSTRACT: Graph comparison plays a major role in many network applications. We often
need a similarity metric for comparing networks according to their structural
properties. Various network features - such as degree distribution and
clustering coefficient - provide measurements for comparing networks from
different points of view, but a global and integrated distance metric is still
missing. In this paper, we employ distance metric learning algorithms in order
to construct an integrated distance metric for comparing structural properties
of complex networks. According to natural witnesses of network similarities
(such as network categories), the distance metric is learned by means of a
dataset of labeled real networks. To evaluate our proposed method, which is
called NetDistance, we applied it as the distance metric in
K-nearest-neighbors classification. Empirical results show that NetDistance
outperforms previous methods by at least 20 percent with respect to precision.
| no_new_dataset | 0.949716 |
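To make the approach in the record above concrete, here is a rough sketch: each network is summarized by a few structural features, the features are weighted, and a k-nearest-neighbours classifier compares networks under the resulting weighted distance. The feature set, the weights and the random-graph categories are illustrative assumptions; the actual metric-learning step of NetDistance is not reproduced.

```python
# Rough sketch: structural feature vectors per graph + weighted distance + k-NN.
import numpy as np
import networkx as nx
from sklearn.neighbors import KNeighborsClassifier

def features(g):
    degs = np.array([d for _, d in g.degree()], dtype=float)
    return np.array([degs.mean(), degs.std(),
                     nx.average_clustering(g),
                     nx.density(g)])

# Two "categories" of labeled networks acting as natural witnesses of similarity.
graphs = [nx.erdos_renyi_graph(100, 0.05, seed=i) for i in range(20)] + \
         [nx.barabasi_albert_graph(100, 3, seed=i) for i in range(20)]
labels = [0] * 20 + [1] * 20

X = np.vstack([features(g) for g in graphs])
w = np.array([1.0, 1.0, 5.0, 2.0])     # placeholder feature weights (not learned here)
Xw = X * w                             # scaling features => weighted Euclidean metric

knn = KNeighborsClassifier(n_neighbors=3).fit(Xw, labels)
print("training accuracy:", knn.score(Xw, labels))
```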
1307.3673 | Omar Alonso | Alexandros Ntoulas, Omar Alonso, Vasilis Kandylas | A Data Management Approach for Dataset Selection Using Human Computation | null | null | null | null | cs.LG cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the number of applications that use machine learning algorithms increases,
the need for labeled data useful for training such algorithms intensifies.
Getting labels typically involves employing humans to do the annotation,
which directly translates to training and working costs. Crowdsourcing
platforms have made labeling cheaper and faster, but they still involve
significant costs, especially for the cases where the potential set of
candidate data to be labeled is large. In this paper we describe a methodology
and a prototype system aiming at addressing this challenge for Web-scale
problems in an industrial setting. We discuss ideas on how to efficiently
select the data to use for training of machine learning algorithms in an
attempt to reduce cost. We show results achieving good performance with reduced
cost by carefully selecting which instances to label. Our proposed algorithm is
presented as part of a framework for managing and generating training datasets,
which includes, among other components, a human computation element.
| [
{
"version": "v1",
"created": "Sat, 13 Jul 2013 19:29:33 GMT"
}
] | 2013-07-16T00:00:00 | [
[
"Ntoulas",
"Alexandros",
""
],
[
"Alonso",
"Omar",
""
],
[
"Kandylas",
"Vasilis",
""
]
] | TITLE: A Data Management Approach for Dataset Selection Using Human Computation
ABSTRACT: As the number of applications that use machine learning algorithms increases,
the need for labeled data useful for training such algorithms intensifies.
Getting labels typically involves employing humans to do the annotation,
which directly translates to training and working costs. Crowdsourcing
platforms have made labeling cheaper and faster, but they still involve
significant costs, especially for the cases where the potential set of
candidate data to be labeled is large. In this paper we describe a methodology
and a prototype system aiming at addressing this challenge for Web-scale
problems in an industrial setting. We discuss ideas on how to efficiently
select the data to use for training of machine learning algorithms in an
attempt to reduce cost. We show results achieving good performance with reduced
cost by carefully selecting which instances to label. Our proposed algorithm is
presented as part of a framework for managing and generating training datasets,
which includes, among other components, a human computation element.
| no_new_dataset | 0.954095 |
1307.3755 | Nikesh Dattani | Lila Kari (1), Kathleen A. Hill (2), Abu Sadat Sayem (1), Nathaniel
Bryans (3), Katelyn Davis (2), Nikesh S. Dattani (4), ((1) Department of
Computer Science, University of Western Ontario, Canada, (2) Department of
Biology, University of Western Ontario, Canada, (3) Microsoft Corporation,
(4) Department of Chemistry, Oxford University, UK) | Map of Life: Measuring and Visualizing Species' Relatedness with
"Molecular Distance Maps" | 13 pages, 8 figures. Funded by: NSERC/CRSNG (Natural Science &
Engineering Research Council of Canada / Conseil de recherches en sciences
naturelles et en g\'enie du Canada), and the Oxford University Press.
Acknowledgements: Ronghai Tu, Tao Tao, Steffen Kopecki, Andre Lachance,
Jeremy McNeil, Greg Thorn, Oxford University Mathematical Institute | null | null | null | q-bio.GN cs.CV q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel combination of methods that (i) portrays quantitative
characteristics of a DNA sequence as an image, (ii) computes distances between
these images, and (iii) uses these distances to output a map wherein each
sequence is a point in a common Euclidean space. In the resulting "Molecular
Distance Map" each point signifies a DNA sequence, and the geometric distance
between any two points reflects the degree of relatedness between the
corresponding sequences and species.
Molecular Distance Maps present compelling visual representations of
relationships between species and could be used for taxonomic clarifications,
for species identification, and for studies of evolutionary history. One of the
advantages of this method is its general applicability since, as sequence
alignment is not required, the DNA sequences chosen for comparison can be
completely different regions in different genomes. In fact, this method can be
used to compare any two DNA sequences. For example, in our dataset of 3,176
mitochondrial DNA sequences, it correctly finds the mtDNA sequences most
closely related to that of the anatomically modern human (the Neanderthal, the
Denisovan, and the chimp), and it finds that the sequence most different from
it belongs to a cucumber. Furthermore, our method can be used to compare real
sequences to artificial, computer-generated, DNA sequences. For example, it is
used to determine that the distances between a Homo sapiens sapiens mtDNA and
artificial sequences of the same length and same trinucleotide frequencies can
be larger than the distance between the same human mtDNA and the mtDNA of a
fruit-fly.
We demonstrate this method's promising potential for taxonomical
clarifications by applying it to a diverse variety of cases that have been
historically controversial, such as the genus Polypterus, the family Tarsiidae,
and the vast (super)kingdom Protista.
| [
{
"version": "v1",
"created": "Sun, 14 Jul 2013 17:16:57 GMT"
}
] | 2013-07-16T00:00:00 | [
[
"Kari",
"Lila",
""
],
[
"Hill",
"Kathleen A.",
""
],
[
"Sayem",
"Abu Sadat",
""
],
[
"Bryans",
"Nathaniel",
""
],
[
"Davis",
"Katelyn",
""
],
[
"Dattani",
"Nikesh S.",
""
]
] | TITLE: Map of Life: Measuring and Visualizing Species' Relatedness with
"Molecular Distance Maps"
ABSTRACT: We propose a novel combination of methods that (i) portrays quantitative
characteristics of a DNA sequence as an image, (ii) computes distances between
these images, and (iii) uses these distances to output a map wherein each
sequence is a point in a common Euclidean space. In the resulting "Molecular
Distance Map" each point signifies a DNA sequence, and the geometric distance
between any two points reflects the degree of relatedness between the
corresponding sequences and species.
Molecular Distance Maps present compelling visual representations of
relationships between species and could be used for taxonomic clarifications,
for species identification, and for studies of evolutionary history. One of the
advantages of this method is its general applicability since, as sequence
alignment is not required, the DNA sequences chosen for comparison can be
completely different regions in different genomes. In fact, this method can be
used to compare any two DNA sequences. For example, in our dataset of 3,176
mitochondrial DNA sequences, it correctly finds the mtDNA sequences most
closely related to that of the anatomically modern human (the Neanderthal, the
Denisovan, and the chimp), and it finds that the sequence most different from
it belongs to a cucumber. Furthermore, our method can be used to compare real
sequences to artificial, computer-generated, DNA sequences. For example, it is
used to determine that the distances between a Homo sapiens sapiens mtDNA and
artificial sequences of the same length and same trinucleotide frequencies can
be larger than the distance between the same human mtDNA and the mtDNA of a
fruit-fly.
We demonstrate this method's promising potential for taxonomical
clarifications by applying it to a diverse variety of cases that have been
historically controversial, such as the genus Polypterus, the family Tarsiidae,
and the vast (super)kingdom Protista.
| new_dataset | 0.740503 |
1307.3872 | Andrea Farruggia | Andrea Farruggia, Paolo Ferragina, Antonio Frangioni, Rossano
Venturini | Bicriteria data compression | null | null | null | null | cs.IT cs.DS math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advent of massive datasets (and the consequent design of high-performing
distributed storage systems) has reignited the interest of the scientific and
engineering community towards the design of lossless data compressors which
achieve effective compression ratio and very efficient decompression speed.
Lempel-Ziv's LZ77 algorithm is the de facto choice in this scenario because of
its decompression speed and its flexibility in trading decompression speed
versus compressed-space efficiency. Each of the existing implementations offers
a trade-off between space occupancy and decompression speed, so software
engineers have to content themselves with picking the one that comes closest to
the requirements of the application in their hands. Starting from these
premises, and for the first time in the literature, we address in this paper
the problem of trading optimally, and in a principled way, the consumption of
these two resources by introducing the Bicriteria LZ77-Parsing problem, which
formalizes in a principled way what data-compressors have traditionally
approached by means of heuristics. The goal is to determine an LZ77 parsing
which minimizes the space occupancy in bits of the compressed file, provided
that the decompression time is bounded by a fixed amount (or vice-versa). This
way, the software engineer can set its space (or time) requirements and then
derive the LZ77 parsing which optimizes the decompression speed (or the space
occupancy, respectively). We solve this problem efficiently in O(n log^2 n)
time and optimal linear space within a small, additive approximation, by
proving and deploying some specific structural properties of the weighted graph
derived from the possible LZ77-parsings of the input file. The preliminary set
of experiments shows that our novel proposal dominates all the highly
engineered competitors, hence offering a win-win situation in theory and practice.
| [
{
"version": "v1",
"created": "Mon, 15 Jul 2013 10:14:56 GMT"
}
] | 2013-07-16T00:00:00 | [
[
"Farruggia",
"Andrea",
""
],
[
"Ferragina",
"Paolo",
""
],
[
"Frangioni",
"Antonio",
""
],
[
"Venturini",
"Rossano",
""
]
] | TITLE: Bicriteria data compression
ABSTRACT: The advent of massive datasets (and the consequent design of high-performing
distributed storage systems) has reignited the interest of the scientific and
engineering community towards the design of lossless data compressors which
achieve effective compression ratio and very efficient decompression speed.
Lempel-Ziv's LZ77 algorithm is the de facto choice in this scenario because of
its decompression speed and its flexibility in trading decompression speed
versus compressed-space efficiency. Each of the existing implementations offers
a trade-off between space occupancy and decompression speed, so software
engineers have to content themselves with picking the one that comes closest to
the requirements of the application in their hands. Starting from these
premises, and for the first time in the literature, we address in this paper
the problem of trading optimally, and in a principled way, the consumption of
these two resources by introducing the Bicriteria LZ77-Parsing problem, which
formalizes in a principled way what data-compressors have traditionally
approached by means of heuristics. The goal is to determine an LZ77 parsing
which minimizes the space occupancy in bits of the compressed file, provided
that the decompression time is bounded by a fixed amount (or vice-versa). This
way, the software engineer can set its space (or time) requirements and then
derive the LZ77 parsing which optimizes the decompression speed (or the space
occupancy, respectively). We solve this problem efficiently in O(n log^2 n)
time and optimal linear space within a small, additive approximation, by
proving and deploying some specific structural properties of the weighted graph
derived from the possible LZ77-parsings of the input file. The preliminary set
of experiments shows that our novel proposal dominates all the highly
engineered competitors, hence offering a win-win situation in theory and practice.
| no_new_dataset | 0.944944 |
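The optimization at the heart of the record above can be illustrated with a toy dynamic program: choose a parsing of a text into literals and copies so that the total bit cost is minimal subject to a decompression-time budget. The per-phrase bit and time costs below are invented constants, and the exhaustive memoized recursion has none of the O(n log^2 n) guarantees of the paper's algorithm.

```python
# Toy bicriteria parsing: minimise bits subject to a decompression-time budget.
from functools import lru_cache

text = "abracadabraabracadabra"
TIME_BUDGET = 40

def candidates(i):
    """Phrases starting at i: a literal, plus copies of earlier occurrences."""
    yield (1, 9, 1)                          # (length, bits, time) for a literal
    for L in range(2, len(text) - i + 1):
        if text.find(text[i:i + L]) < i:     # the phrase occurs earlier -> a copy
            yield (L, 24, 3)                 # fixed-size copy token (toy model)
        else:
            break

@lru_cache(maxsize=None)
def best(i, time_left):
    """Minimum bits to encode text[i:] within the remaining time budget."""
    if i == len(text):
        return 0
    best_bits = float("inf")
    for L, bits, t in candidates(i):
        if t <= time_left:
            best_bits = min(best_bits, bits + best(i + L, time_left - t))
    return best_bits

print("minimum bits within budget:", best(0, TIME_BUDGET))
```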
1307.3938 | Zacharias Stelzer | Z. Stelzer and A. Jackson | Extracting scaling laws from numerical dynamo models | 21 pages, 11 figures | Geophys. J. Int. (June, 2013) 193 (3): 1265-1276 | 10.1093/gji/ggt083 | null | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Earth's magnetic field is generated by processes in the electrically
conducting, liquid outer core, subsumed under the term `geodynamo'. In the last
decades, great effort has been put into the numerical simulation of core
dynamics following from the magnetohydrodynamic (MHD) equations. However, the
numerical simulations are far from Earth's core in terms of several control
parameters. Different scaling analyses found simple scaling laws for quantities
like heat transport, flow velocity, magnetic field strength and magnetic
dissipation time.
We use an extensive dataset of 116 numerical dynamo models compiled by
Christensen and co-workers to analyse these scalings from a rigorous model
selection point of view. Our method of choice is leave-one-out cross-validation
which rates models according to their predictive abilities. In contrast to
earlier results, we find that diffusive processes are not negligible for the
flow velocity and magnetic field strength in the numerical dynamos. Also the
scaling of the magnetic dissipation time turns out to be more complex than
previously suggested. Assuming that the processes relevant in the numerical
models are the same as in Earth's core, we use this scaling to estimate an
Ohmic dissipation of 3-8 TW for the core. This appears to be consistent with
recent high CMB heat flux scenarios.
| [
{
"version": "v1",
"created": "Mon, 15 Jul 2013 13:48:41 GMT"
}
] | 2013-07-16T00:00:00 | [
[
"Stelzer",
"Z.",
""
],
[
"Jackson",
"A.",
""
]
] | TITLE: Extracting scaling laws from numerical dynamo models
ABSTRACT: Earth's magnetic field is generated by processes in the electrically
conducting, liquid outer core, subsumed under the term `geodynamo'. In the last
decades, great effort has been put into the numerical simulation of core
dynamics following from the magnetohydrodynamic (MHD) equations. However, the
numerical simulations are far from Earth's core in terms of several control
parameters. Different scaling analyses found simple scaling laws for quantities
like heat transport, flow velocity, magnetic field strength and magnetic
dissipation time.
We use an extensive dataset of 116 numerical dynamo models compiled by
Christensen and co-workers to analyse these scalings from a rigorous model
selection point of view. Our method of choice is leave-one-out cross-validation
which rates models according to their predictive abilities. In contrast to
earlier results, we find that diffusive processes are not negligible for the
flow velocity and magnetic field strength in the numerical dynamos. Also the
scaling of the magnetic dissipation time turns out to be more complex than
previously suggested. Assuming that the processes relevant in the numerical
models are the same as in Earth's core, we use this scaling to estimate an
Ohmic dissipation of 3-8 TW for the core. This appears to be consistent with
recent high CMB heat flux scenarios.
| no_new_dataset | 0.949201 |
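The model-selection machinery described above, rating competing scaling laws by leave-one-out cross-validation, can be sketched as follows. The synthetic control parameters and the two candidate laws are placeholders for illustration only, not the actual dataset of 116 dynamo models.

```python
# Score candidate scaling laws by leave-one-out cross-validation in log space.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
n = 116
Ra = 10 ** rng.uniform(5, 8, n)       # Rayleigh-like control parameter (synthetic)
E = 10 ** rng.uniform(-5, -3, n)      # Ekman-like control parameter (synthetic)
velocity = 0.9 * Ra**0.4 * E**0.3 * 10 ** rng.normal(0, 0.05, n)

y = np.log10(velocity)
candidate_models = {
    "Ra only": np.log10(Ra)[:, None],
    "Ra and E": np.column_stack([np.log10(Ra), np.log10(E)]),
}

for name, X in candidate_models.items():
    scores = cross_val_score(LinearRegression(), X, y,
                             cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error")
    print(f"{name}: LOO prediction error = {-scores.mean():.4f}")
```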
1110.4198 | Alekh Agarwal | Alekh Agarwal, Olivier Chapelle, Miroslav Dudik, John Langford | A Reliable Effective Terascale Linear Learning System | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a system and a set of techniques for learning linear predictors
with convex losses on terascale datasets, with trillions of features, {The
number of features here refers to the number of non-zero entries in the data
matrix.} billions of training examples and millions of parameters in an hour
using a cluster of 1000 machines. Individually none of the component techniques
are new, but the careful synthesis required to obtain an efficient
implementation is. The result is, to the best of our knowledge, the most scalable and
efficient linear learning system reported in the literature (as of 2011 when
our experiments were conducted). We describe and thoroughly evaluate the
components of the system, showing the importance of the various design choices.
| [
{
"version": "v1",
"created": "Wed, 19 Oct 2011 07:34:19 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Feb 2012 18:31:21 GMT"
},
{
"version": "v3",
"created": "Fri, 12 Jul 2013 03:28:17 GMT"
}
] | 2013-07-15T00:00:00 | [
[
"Agarwal",
"Alekh",
""
],
[
"Chapelle",
"Olivier",
""
],
[
"Dudik",
"Miroslav",
""
],
[
"Langford",
"John",
""
]
] | TITLE: A Reliable Effective Terascale Linear Learning System
ABSTRACT: We present a system and a set of techniques for learning linear predictors
with convex losses on terascale datasets, with trillions of features, {The
number of features here refers to the number of non-zero entries in the data
matrix.} billions of training examples and millions of parameters in an hour
using a cluster of 1000 machines. Individually none of the component techniques
are new, but the careful synthesis required to obtain an efficient
implementation is. The result is, to the best of our knowledge, the most scalable and
efficient linear learning system reported in the literature (as of 2011 when
our experiments were conducted). We describe and thoroughly evaluate the
components of the system, showing the importance of the various design choices.
| no_new_dataset | 0.949995 |
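One ingredient commonly used in systems of this kind is to train local linear models on data shards and average their parameters. The sketch below shows that flavour on a single machine with scikit-learn; it does not reproduce the AllReduce/L-BFGS pipeline of the actual system, and the shard layout and model settings are assumptions.

```python
# Toy sketch of shard-local training followed by parameter averaging.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=10000, n_features=50, random_state=0)
shards = np.array_split(np.arange(len(y)), 10)   # pretend these live on 10 machines

local_models = []
for idx in shards:
    m = SGDClassifier(max_iter=5, tol=None, random_state=0)
    m.fit(X[idx], y[idx])
    local_models.append(m)

# "AllReduce"-style averaging of the local linear parameters.
avg_coef = np.mean([m.coef_ for m in local_models], axis=0)
avg_intercept = np.mean([m.intercept_ for m in local_models], axis=0)

global_model = SGDClassifier()
global_model.fit(X[:100], y[:100])               # initialise internal state
global_model.coef_, global_model.intercept_ = avg_coef, avg_intercept
print("accuracy of averaged model:", global_model.score(X, y))
```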
1307.3284 | Shuai Yuan | Shuai Yuan, Jun Wang | Sequential Selection of Correlated Ads by POMDPs | null | Proceedings of the ACM CIKM '12. 515-524 | 10.1145/2396761.2396828 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online advertising has become a key source of revenue for both web search
engines and online publishers. For them, the ability to allocate the right ads to
the right webpages is critical because any mismatched ads would not only harm web
users' satisfaction but also lower the ad income. In this paper, we study how
online publishers could optimally select ads to maximize their ad incomes over
time. The conventional offline, content-based matching between webpages and ads
is a fine start but cannot solve the problem completely because good matching
does not necessarily lead to good payoff. Moreover, with the limited display
impressions, we need to balance the need of selecting ads to learn true ad
payoffs (exploration) with that of allocating ads to generate high immediate
payoffs based on the current belief (exploitation). In this paper, we address
the problem by employing Partially observable Markov decision processes
(POMDPs) and discuss how to utilize the correlation of ads to improve the
efficiency of the exploration and increase ad incomes in the long run. Our
mathematical derivation shows that the belief states of correlated ads can be
naturally updated using a formula similar to collaborative filtering. To test
our model, a real world ad dataset from a major search engine is collected and
categorized. Experimenting on the data, we provide an analysis of the effect
of the underlying parameters, and demonstrate that our algorithms significantly
outperform other strong baselines.
| [
{
"version": "v1",
"created": "Thu, 11 Jul 2013 22:20:32 GMT"
}
] | 2013-07-15T00:00:00 | [
[
"Yuan",
"Shuai",
""
],
[
"Wang",
"Jun",
""
]
] | TITLE: Sequential Selection of Correlated Ads by POMDPs
ABSTRACT: Online advertising has become a key source of revenue for both web search
engines and online publishers. For them, the ability to allocate the right ads to
the right webpages is critical because any mismatched ads would not only harm web
users' satisfaction but also lower the ad income. In this paper, we study how
online publishers could optimally select ads to maximize their ad incomes over
time. The conventional offline, content-based matching between webpages and ads
is a fine start but cannot solve the problem completely because good matching
does not necessarily lead to good payoff. Moreover, with the limited display
impressions, we need to balance the need of selecting ads to learn true ad
payoffs (exploration) with that of allocating ads to generate high immediate
payoffs based on the current belief (exploitation). In this paper, we address
the problem by employing Partially observable Markov decision processes
(POMDPs) and discuss how to utilize the correlation of ads to improve the
efficiency of the exploration and increase ad incomes in the long run. Our
mathematical derivation shows that the belief states of correlated ads can be
naturally updated using a formula similar to collaborative filtering. To test
our model, a real world ad dataset from a major search engine is collected and
categorized. Experimenting on the data, we provide an analysis of the effect
of the underlying parameters, and demonstrate that our algorithms significantly
outperform other strong baselines.
| no_new_dataset | 0.945349 |
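The exploration/exploitation trade-off and the correlated belief update described above can be illustrated, in a much-simplified form, by a correlated Gaussian bandit with posterior sampling. The prior, the observation model and the true click rates below are invented, and this is not the POMDP formulation of the paper; it only shows how correlation lets feedback on one ad inform the beliefs about the others.

```python
# Simplified correlated-belief sketch: Gaussian bandit with posterior sampling.
import numpy as np

rng = np.random.default_rng(0)
true_ctr = np.array([0.02, 0.025, 0.018, 0.024])          # unknown ad payoffs (toy)
n_ads = len(true_ctr)

mu = np.full(n_ads, 0.02)                                  # prior means
Sigma = 1e-4 * (0.5 * np.eye(n_ads) + 0.5 * np.ones((n_ads, n_ads)))  # correlated prior
obs_var = 1e-3

for t in range(5000):
    sample = rng.multivariate_normal(mu, Sigma)            # explore via posterior sampling
    a = int(np.argmax(sample))                             # exploit the sampled belief
    reward = rng.binomial(1, true_ctr[a])

    # Gaussian belief update; the correlation propagates one ad's feedback to the
    # others, loosely in the spirit of the collaborative-filtering-like update.
    k = Sigma[:, a] / (Sigma[a, a] + obs_var)
    mu = mu + k * (reward - mu[a])
    Sigma = Sigma - np.outer(k, Sigma[a, :])

print("posterior means:", np.round(mu, 4), " best ad:", int(np.argmax(mu)))
```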
1307.2669 | Haoyang (Hubert) Duan | Hubert Haoyang Duan, Vladimir Pestov, and Varun Singla | Text Categorization via Similarity Search: An Efficient and Effective
Novel Algorithm | 12 pages, 5 tables, accepted for the 6th International Conference on
Similarity Search and Applications (SISAP 2013) | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a supervised learning algorithm for text categorization which has
brought the team of authors the 2nd place in the text categorization division
of the 2012 Cybersecurity Data Mining Competition (CDMC'2012) and a 3rd prize
overall. The algorithm is quite different from existing approaches in that it
is based on similarity search in the metric space of measure distributions on
the dictionary. At the preprocessing stage, given a labeled learning sample of
texts, we associate to every class label (document category) a point in the
space in question. Unlike what is usual in clustering, this point is not a
centroid of the category but rather an outlier, a uniform measure distribution
on a selection of domain-specific words. At the execution stage, an unlabeled
text is assigned a text category as defined by the closest labeled neighbour to
the point representing the frequency distribution of the words in the text. The
algorithm is both effective and efficient, as further confirmed by experiments
on the Reuters 21578 dataset.
| [
{
"version": "v1",
"created": "Wed, 10 Jul 2013 04:41:19 GMT"
}
] | 2013-07-11T00:00:00 | [
[
"Duan",
"Hubert Haoyang",
""
],
[
"Pestov",
"Vladimir",
""
],
[
"Singla",
"Varun",
""
]
] | TITLE: Text Categorization via Similarity Search: An Efficient and Effective
Novel Algorithm
ABSTRACT: We present a supervised learning algorithm for text categorization which has
brought the team of authors the 2nd place in the text categorization division
of the 2012 Cybersecurity Data Mining Competition (CDMC'2012) and a 3rd prize
overall. The algorithm is quite different from existing approaches in that it
is based on similarity search in the metric space of measure distributions on
the dictionary. At the preprocessing stage, given a labeled learning sample of
texts, we associate to every class label (document category) a point in the
space in question. Unlike what is usual in clustering, this point is not a
centroid of the category but rather an outlier, a uniform measure distribution
on a selection of domain-specific words. At the execution stage, an unlabeled
text is assigned a text category as defined by the closest labeled neighbour to
the point representing the frequency distribution of the words in the text. The
algorithm is both effective and efficient, as further confirmed by experiments
on the Reuters 21578 dataset.
| no_new_dataset | 0.949995 |
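The classification rule sketched in the abstract above, labelling a text by the closest class point in the space of word-frequency distributions, can be illustrated as follows. The tiny corpus is made up, the class point here is a mean distribution rather than the paper's uniform distribution over selected domain-specific words, and plain Euclidean distance stands in for the metric actually used.

```python
# Nearest-distribution text categorization on a tiny, made-up corpus.
import numpy as np
from collections import Counter

train = [("the firewall blocked the malicious packet", "security"),
         ("packet loss on the malicious port", "security"),
         ("the recipe needs flour sugar and butter", "cooking"),
         ("bake the butter and sugar dough", "cooking")]

vocab = sorted({w for text, _ in train for w in text.split()})

def freq_vector(text):
    counts = Counter(text.split())
    v = np.array([counts[w] for w in vocab], dtype=float)
    return v / v.sum() if v.sum() else v

# One representative point (distribution) per class label.
class_points = {}
for label in {l for _, l in train}:
    vecs = [freq_vector(t) for t, l in train if l == label]
    class_points[label] = np.mean(vecs, axis=0)

def classify(text):
    v = freq_vector(text)
    return min(class_points, key=lambda c: np.linalg.norm(v - class_points[c]))

print(classify("sugar butter and flour"))      # -> cooking
print(classify("blocked port and packet"))     # -> security
```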
1205.4683 | Michael Szell | Michael Szell and Stefan Thurner | How women organize social networks different from men | 8 pages, 3 figures | Scientific Reports 3, 1214 (2013) | 10.1038/srep01214 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Superpositions of social networks, such as communication, friendship, or
trade networks, are called multiplex networks, forming the structural backbone
of human societies. Novel datasets now allow quantification and exploration of
multiplex networks. Here we study gender-specific differences of a multiplex
network from a complete behavioral dataset of an online-game society of about
300,000 players. On the individual level females perform better economically
and are less risk-taking than males. Males reciprocate friendship requests from
females faster than vice versa and hesitate to reciprocate hostile actions of
females. On the network level females have more communication partners, who are
less connected than partners of males. We find a strong homophily effect for
females and higher clustering coefficients of females in trade and attack
networks. Cooperative links between males are under-represented, reflecting
competition for resources among males. These results confirm quantitatively
that females and males manage their social networks in substantially different
ways.
| [
{
"version": "v1",
"created": "Mon, 21 May 2012 18:44:27 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Jul 2013 20:17:56 GMT"
}
] | 2013-07-10T00:00:00 | [
[
"Szell",
"Michael",
""
],
[
"Thurner",
"Stefan",
""
]
] | TITLE: How women organize social networks different from men
ABSTRACT: Superpositions of social networks, such as communication, friendship, or
trade networks, are called multiplex networks, forming the structural backbone
of human societies. Novel datasets now allow quantification and exploration of
multiplex networks. Here we study gender-specific differences of a multiplex
network from a complete behavioral dataset of an online-game society of about
300,000 players. On the individual level females perform better economically
and are less risk-taking than males. Males reciprocate friendship requests from
females faster than vice versa and hesitate to reciprocate hostile actions of
females. On the network level females have more communication partners, who are
less connected than partners of males. We find a strong homophily effect for
females and higher clustering coefficients of females in trade and attack
networks. Cooperative links between males are under-represented, reflecting
competition for resources among males. These results confirm quantitatively
that females and males manage their social networks in substantially different
ways.
| new_dataset | 0.973795 |
1307.2484 | Sukanta Basu | Yao Wang, Sukanta Basu, and Lance Manuel | Coupled Mesoscale-Large-Eddy Modeling of Realistic Stable Boundary Layer
Turbulence | null | null | null | null | physics.ao-ph physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Site-specific flow and turbulence information are needed for various
practical applications, ranging from aerodynamic/aeroelastic modeling for wind
turbine design to optical diffraction calculations. Even though highly
desirable, collecting on-site meteorological measurements can be an expensive,
time-consuming, and sometimes challenging task. In this work, we propose a
coupled mesoscale-large-eddy modeling framework to synthetically generate
site-specific flow and turbulence data. The workhorses behind our framework are
a state-of-the-art, open-source atmospheric model called the Weather Research
and Forecasting (WRF) model and a tuning-free large-eddy simulation (LES)
model.
Using this coupled framework, we simulate a nighttime stable boundary layer
(SBL) case from the well-known CASES-99 field campaign. One of the unique
aspects of this work is the usage of a diverse range of observations for
characterization and validation. The coupled models reproduce certain
characteristics of observed low-level jets. They also capture various scaling
regimes of energy spectra, including the so-called spectral gap. However, the
coupled models are unable to capture the intermittent nature of the observed
surface fluxes. Lastly, we document and discuss: (i) the tremendous
spatio-temporal variabilities of observed and modeled SBL flow fields, and (ii)
the significant disagreements among different observational platforms. Based on
these results, we strongly recommend that future SBL modeling studies consider
rigorous validation exercises based on multi-sensor/multi-platform datasets.
In summary, we believe that the numerical generation of realistic SBL is not
an impossible task. Without any doubt, there remain several computational and
fundamental challenges. The present work should be viewed as a first step to
confront some of these challenges.
| [
{
"version": "v1",
"created": "Tue, 9 Jul 2013 14:55:43 GMT"
}
] | 2013-07-10T00:00:00 | [
[
"Wang",
"Yao",
""
],
[
"Basu",
"Sukanta",
""
],
[
"Manuel",
"Lance",
""
]
] | TITLE: Coupled Mesoscale-Large-Eddy Modeling of Realistic Stable Boundary Layer
Turbulence
ABSTRACT: Site-specific flow and turbulence information are needed for various
practical applications, ranging from aerodynamic/aeroelastic modeling for wind
turbine design to optical diffraction calculations. Even though highly
desirable, collecting on-site meteorological measurements can be an expensive,
time-consuming, and sometimes challenging task. In this work, we propose a
coupled mesoscale-large-eddy modeling framework to synthetically generate
site-specific flow and turbulence data. The workhorses behind our framework are
a state-of-the-art, open-source atmospheric model called the Weather Research
and Forecasting (WRF) model and a tuning-free large-eddy simulation (LES)
model.
Using this coupled framework, we simulate a nighttime stable boundary layer
(SBL) case from the well-known CASES-99 field campaign. One of the unique
aspects of this work is the usage of a diverse range of observations for
characterization and validation. The coupled models reproduce certain
characteristics of observed low-level jets. They also capture various scaling
regimes of energy spectra, including the so-called spectral gap. However, the
coupled models are unable to capture the intermittent nature of the observed
surface fluxes. Lastly, we document and discuss: (i) the tremendous
spatio-temporal variabilities of observed and modeled SBL flow fields, and (ii)
the significant disagreements among different observational platforms. Based on
these results, we strongly recommend that future SBL modeling studies consider
rigorous validation exercises based on multi-sensor/multi-platform datasets.
In summary, we believe that the numerical generation of realistic SBL is not
an impossible task. Without any doubt, there remain several computational and
fundamental challenges. The present work should be viewed as a first step to
confront some of these challenges.
| no_new_dataset | 0.949389 |
1307.1370 | Latanya Sweeney | Latanya Sweeney | Matching Known Patients to Health Records in Washington State Data | 13 pages | null | null | null | cs.CY cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The State of Washington sells patient-level health data for $50. This
publicly available dataset has virtually all hospitalizations occurring in the
State in a given year, including patient demographics, diagnoses, procedures,
attending physician, hospital, a summary of charges, and how the bill was paid.
It does not contain patient names or addresses (only ZIPs). Newspaper stories
printed in the State for the same year that contain the word "hospitalized"
often include a patient's name and residential information and explain why the
person was hospitalized, such as vehicle accident or assault. News information
uniquely and exactly matched medical records in the State database for 35 of
the 81 cases (or 43 percent) found in 2011, thereby putting names to patient
records. A news reporter verified matches by contacting patients. Employers,
financial organizations and others know the same kind of information as
reported in news stories making it just as easy for them to identify the
medical records of employees, debtors, and others.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 15:21:34 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jul 2013 23:04:48 GMT"
}
] | 2013-07-09T00:00:00 | [
[
"Sweeney",
"Latanya",
""
]
] | TITLE: Matching Known Patients to Health Records in Washington State Data
ABSTRACT: The State of Washington sells patient-level health data for $50. This
publicly available dataset has virtually all hospitalizations occurring in the
State in a given year, including patient demographics, diagnoses, procedures,
attending physician, hospital, a summary of charges, and how the bill was paid.
It does not contain patient names or addresses (only ZIPs). Newspaper stories
printed in the State for the same year that contain the word "hospitalized"
often include a patient's name and residential information and explain why the
person was hospitalized, such as vehicle accident or assault. News information
uniquely and exactly matched medical records in the State database for 35 of
the 81 cases (or 43 percent) found in 2011, thereby putting names to patient
records. A news reporter verified matches by contacting patients. Employers,
financial organizations and others know the same kind of information as
reported in news stories making it just as easy for them to identify the
medical records of employees, debtors, and others.
| no_new_dataset | 0.911574 |
1307.1769 | Lior Rokach | Lior Rokach, Alon Schclar, Ehud Itach | Ensemble Methods for Multi-label Classification | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensemble methods have been shown to be an effective tool for solving
multi-label classification tasks. In the RAndom k-labELsets (RAKEL) algorithm,
each member of the ensemble is associated with a small randomly-selected subset
of k labels. Then, a single label classifier is trained according to each
combination of elements in the subset. In this paper we adopt a similar
approach; however, instead of randomly choosing subsets, we select the minimum
required subsets of k labels that cover all labels and meet additional
constraints such as coverage of inter-label correlations. Construction of the
cover is achieved by formulating the subset selection as a minimum set covering
problem (SCP) and solving it by using approximation algorithms. Every cover
needs only to be prepared once by offline algorithms. Once prepared, a cover
may be applied to the classification of any given multi-label dataset whose
properties conform with those of the cover. The contribution of this paper is
two-fold. First, we introduce SCP as a general framework for constructing label
covers while allowing the user to incorporate cover construction constraints.
We demonstrate the effectiveness of this framework by proposing two
construction constraints whose enforcement produces covers that improve the
prediction performance of random selection. Second, we provide theoretical
bounds that quantify the probabilities of random selection to produce covers
that meet the proposed construction criteria. The experimental results indicate
that the proposed methods improve multi-label classification accuracy and
stability compared with the RAKEL algorithm and to other state-of-the-art
algorithms.
| [
{
"version": "v1",
"created": "Sat, 6 Jul 2013 10:17:44 GMT"
}
] | 2013-07-09T00:00:00 | [
[
"Rokach",
"Lior",
""
],
[
"Schclar",
"Alon",
""
],
[
"Itach",
"Ehud",
""
]
] | TITLE: Ensemble Methods for Multi-label Classification
ABSTRACT: Ensemble methods have been shown to be an effective tool for solving
multi-label classification tasks. In the RAndom k-labELsets (RAKEL) algorithm,
each member of the ensemble is associated with a small randomly-selected subset
of k labels. Then, a single label classifier is trained according to each
combination of elements in the subset. In this paper we adopt a similar
approach; however, instead of randomly choosing subsets, we select the minimum
required subsets of k labels that cover all labels and meet additional
constraints such as coverage of inter-label correlations. Construction of the
cover is achieved by formulating the subset selection as a minimum set covering
problem (SCP) and solving it by using approximation algorithms. Every cover
needs only to be prepared once by offline algorithms. Once prepared, a cover
may be applied to the classification of any given multi-label dataset whose
properties conform with those of the cover. The contribution of this paper is
two-fold. First, we introduce SCP as a general framework for constructing label
covers while allowing the user to incorporate cover construction constraints.
We demonstrate the effectiveness of this framework by proposing two
construction constraints whose enforcement produces covers that improve the
prediction performance of random selection. Second, we provide theoretical
bounds that quantify the probabilities of random selection to produce covers
that meet the proposed construction criteria. The experimental results indicate
that the proposed methods improve multi-label classification accuracy and
stability compared with the RAKEL algorithm and to other state-of-the-art
algorithms.
| no_new_dataset | 0.947284 |
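The subset-construction step described in the record above can be sketched with a plain greedy set-cover heuristic that keeps picking k-labelsets until every label is covered. The pairwise-correlation coverage constraint and the approximation guarantees discussed in the paper are omitted here; the label count and subset size are arbitrary.

```python
# Greedy set cover of k-labelsets: every label ends up in at least one subset.
from itertools import combinations

n_labels, k = 8, 3
candidate_subsets = list(combinations(range(n_labels), k))

cover, uncovered = [], set(range(n_labels))
while uncovered:
    # Greedy step: take the subset covering the most still-uncovered labels.
    best = max(candidate_subsets, key=lambda s: len(uncovered & set(s)))
    cover.append(best)
    uncovered -= set(best)

print("chosen labelsets:", cover)
# Each labelset would then get its own multi-class classifier, as in RAKEL.
```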
1307.1900 | Arindam Chaudhuri AC | Arindam Chaudhuri, Kajal De | Fuzzy Integer Linear Programming Mathematical Models for Examination
Timetable Problem | International Journal of Innovative Computing, Information and
Control (Special Issue), Volume 7, Number 5, 2011 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ETP is an NP-hard combinatorial optimization problem. It has received tremendous
research attention during the past few years given its wide use in
universities. In this paper, we develop three mathematical models for NSOU,
Kolkata, India using the FILP technique. To deal with impreciseness and
vagueness, we model various allocation variables through fuzzy numbers. The
solution to the problem is obtained using a fuzzy number ranking method. Each
feasible solution has a fuzzy number obtained from the fuzzy objective function.
The performance of the different FILP techniques is demonstrated on
experimental data generated through extensive simulation from NSOU, Kolkata,
India in terms of execution times. The proposed FILP models are compared with a
commonly used heuristic, viz. the ILP approach, on experimental data, which
gives an idea about the quality of the heuristic. The techniques are also
compared with different Artificial Intelligence based heuristics for ETP with
respect to best and mean cost as well as execution time measures on the Carter
benchmark datasets to illustrate their effectiveness. FILP takes an appreciable
amount of time to generate a satisfactory solution in comparison to other
heuristics. The formulation thus serves as a good benchmark for other
heuristics. The experimental study presented here focuses on producing a
methodology that generalizes well over a spectrum of techniques and generates
significant results for one or more datasets. The performance of the FILP model
is finally compared to the best results cited in the literature for the Carter
benchmarks to assess its potential. The problem can be further reduced by
formulating it with a smaller number of allocation variables without affecting
the optimality of the solution obtained. The FILP model for ETP can also be
adapted to solve other ETPs as well as
combinatorial optimization problems.
| [
{
"version": "v1",
"created": "Sun, 7 Jul 2013 19:09:03 GMT"
}
] | 2013-07-09T00:00:00 | [
[
"Chaudhuri",
"Arindam",
""
],
[
"De",
"Kajal",
""
]
] | TITLE: Fuzzy Integer Linear Programming Mathematical Models for Examination
Timetable Problem
ABSTRACT: ETP is an NP-hard combinatorial optimization problem. It has received tremendous
research attention during the past few years given its wide use in
universities. In this paper, we develop three mathematical models for NSOU,
Kolkata, India using the FILP technique. To deal with impreciseness and
vagueness, we model various allocation variables through fuzzy numbers. The
solution to the problem is obtained using a fuzzy number ranking method. Each
feasible solution has a fuzzy number obtained from the fuzzy objective function.
The performance of the different FILP techniques is demonstrated on
experimental data generated through extensive simulation from NSOU, Kolkata,
India in terms of execution times. The proposed FILP models are compared with a
commonly used heuristic, viz. the ILP approach, on experimental data, which
gives an idea about the quality of the heuristic. The techniques are also
compared with different Artificial Intelligence based heuristics for ETP with
respect to best and mean cost as well as execution time measures on the Carter
benchmark datasets to illustrate their effectiveness. FILP takes an appreciable
amount of time to generate a satisfactory solution in comparison to other
heuristics. The formulation thus serves as a good benchmark for other
heuristics. The experimental study presented here focuses on producing a
methodology that generalizes well over a spectrum of techniques and generates
significant results for one or more datasets. The performance of the FILP model
is finally compared to the best results cited in the literature for the Carter
benchmarks to assess its potential. The problem can be further reduced by
formulating it with a smaller number of allocation variables without affecting
the optimality of the solution obtained. The FILP model for ETP can also be
adapted to solve other ETPs as well as
combinatorial optimization problems.
| no_new_dataset | 0.946448 |
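One small piece of the approach above, ranking the fuzzy objective values of feasible solutions, can be illustrated with triangular fuzzy numbers and a centroid-based ranking. The timetables and their fuzzy costs below are hypothetical, and the actual FILP model for NSOU is not reproduced.

```python
# Ranking triangular fuzzy objective values by their centroids (defuzzified cost).
def centroid(tfn):
    """Centroid of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

# Fuzzy objective values of three hypothetical feasible timetables.
solutions = {"timetable_1": (40, 55, 70),
             "timetable_2": (35, 60, 95),
             "timetable_3": (45, 50, 58)}

best = min(solutions, key=lambda s: centroid(solutions[s]))   # minimise fuzzy cost
for name, tfn in solutions.items():
    print(name, "centroid:", round(centroid(tfn), 2))
print("preferred timetable:", best)
```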
1307.2084 | Lucas Maystre | Mohamed Kafsi, Ehsan Kazemi, Lucas Maystre, Lyudmila Yartseva,
Matthias Grossglauser, Patrick Thiran | Mitigating Epidemics through Mobile Micro-measures | Presented at NetMob 2013, Boston | null | null | null | cs.SI cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epidemics of infectious diseases are among the largest threats to the quality
of life and the economic and social well-being of developing countries. The
arsenal of measures against such epidemics is well-established, but costly and
insufficient to mitigate their impact. In this paper, we argue that mobile
technology adds a powerful weapon to this arsenal, because (a) mobile devices
endow us with the unprecedented ability to measure and model the detailed
behavioral patterns of the affected population, and (b) they enable the
delivery of personalized behavioral recommendations to individuals in real
time. We combine these two ideas and propose several strategies to generate
such recommendations from mobility patterns. The goal of each strategy is a
large reduction in infections, with a small impact on the normal course of
daily life. We evaluate these strategies over the Orange D4D dataset and show
the benefit of mobile micro-measures, even if only a fraction of the population
participates. These preliminary results demonstrate the potential of mobile
technology to complement other measures like vaccination and quarantines
against disease epidemics.
| [
{
"version": "v1",
"created": "Mon, 8 Jul 2013 13:15:12 GMT"
}
] | 2013-07-09T00:00:00 | [
[
"Kafsi",
"Mohamed",
""
],
[
"Kazemi",
"Ehsan",
""
],
[
"Maystre",
"Lucas",
""
],
[
"Yartseva",
"Lyudmila",
""
],
[
"Grossglauser",
"Matthias",
""
],
[
"Thiran",
"Patrick",
""
]
] | TITLE: Mitigating Epidemics through Mobile Micro-measures
ABSTRACT: Epidemics of infectious diseases are among the largest threats to the quality
of life and the economic and social well-being of developing countries. The
arsenal of measures against such epidemics is well-established, but costly and
insufficient to mitigate their impact. In this paper, we argue that mobile
technology adds a powerful weapon to this arsenal, because (a) mobile devices
endow us with the unprecedented ability to measure and model the detailed
behavioral patterns of the affected population, and (b) they enable the
delivery of personalized behavioral recommendations to individuals in real
time. We combine these two ideas and propose several strategies to generate
such recommendations from mobility patterns. The goal of each strategy is a
large reduction in infections, with a small impact on the normal course of
daily life. We evaluate these strategies over the Orange D4D dataset and show
the benefit of mobile micro-measures, even if only a fraction of the population
participates. These preliminary results demonstrate the potential of mobile
technology to complement other measures like vaccination and quarantines
against disease epidemics.
| no_new_dataset | 0.94699 |
1307.1542 | Christian von der Weth | Christian von der Weth and Manfred Hauswirth | DOBBS: Towards a Comprehensive Dataset to Study the Browsing Behavior of
Online Users | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The investigation of the browsing behavior of users provides useful
information to optimize web site design, web browser design, search engines
offerings, and online advertisement. This has been a topic of active research
since the Web started and a large body of work exists. However, new online
services as well as advances in Web and mobile technologies clearly changed the
meaning behind "browsing the Web" and require a fresh look at the problem and
research, specifically in respect to whether the used models are still
appropriate. Platforms such as YouTube, Netflix or last.fm have started to
replace the traditional media channels (cinema, television, radio) and media
distribution formats (CD, DVD, Blu-ray). Social networks (e.g., Facebook) and
platforms for browser games attracted whole new, particularly less tech-savvy
audiences. Furthermore, advances in mobile technologies and devices made
browsing "on-the-move" the norm and changed the user behavior as in the mobile
case browsing is often being influenced by the user's location and context in
the physical world. Commonly used datasets, such as web server access logs or
search engines transaction logs, are inherently not capable of capturing the
browsing behavior of users in all these facets. DOBBS (DERI Online Behavior
Study) is an effort to create such a dataset in a non-intrusive, completely
anonymous and privacy-preserving way. To this end, DOBBS provides a browser
add-on that users can install, which keeps track of their browsing behavior
(e.g., how much time they spend on the Web, how long they stay on a website,
how often they visit a website, how they use their browser, etc.). In this
paper, we outline the motivation behind DOBBS, describe the add-on and captured
data in detail, and present some first results to highlight the strengths of
DOBBS.
| [
{
"version": "v1",
"created": "Fri, 5 Jul 2013 08:10:11 GMT"
}
] | 2013-07-08T00:00:00 | [
[
"von der Weth",
"Christian",
""
],
[
"Hauswirth",
"Manfred",
""
]
] | TITLE: DOBBS: Towards a Comprehensive Dataset to Study the Browsing Behavior of
Online Users
ABSTRACT: The investigation of the browsing behavior of users provides useful
information to optimize web site design, web browser design, search engines
offerings, and online advertisement. This has been a topic of active research
since the Web started and a large body of work exists. However, new online
services as well as advances in Web and mobile technologies clearly changed the
meaning behind "browsing the Web" and require a fresh look at the problem and
research, specifically in respect to whether the used models are still
appropriate. Platforms such as YouTube, Netflix or last.fm have started to
replace the traditional media channels (cinema, television, radio) and media
distribution formats (CD, DVD, Blu-ray). Social networks (e.g., Facebook) and
platforms for browser games attracted whole new, particularly less tech-savvy
audiences. Furthermore, advances in mobile technologies and devices made
browsing "on-the-move" the norm and changed the user behavior as in the mobile
case browsing is often being influenced by the user's location and context in
the physical world. Commonly used datasets, such as web server access logs or
search engines transaction logs, are inherently not capable of capturing the
browsing behavior of users in all these facets. DOBBS (DERI Online Behavior
Study) is an effort to create such a dataset in a non-intrusive, completely
anonymous and privacy-preserving way. To this end, DOBBS provides a browser
add-on that users can install, which keeps track of their browsing behavior
(e.g., how much time they spend on the Web, how long they stay on a website,
how often they visit a website, how they use their browser, etc.). In this
paper, we outline the motivation behind DOBBS, describe the add-on and captured
data in detail, and present some first results to highlight the strengths of
DOBBS.
| no_new_dataset | 0.889 |
1307.1601 | Uwe Aickelin | Chris Roadknight, Uwe Aickelin, Alex Ladas, Daniele Soria, John
Scholefield and Lindy Durrant | Biomarker Clustering of Colorectal Cancer Data to Complement Clinical
Classification | Federated Conference on Computer Science and Information Systems
(FedCSIS), pp 187-191, 2012 | null | null | null | cs.LG cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. Attempts are made
to cluster this dataset and important subsets of it in an effort to
characterize the data and validate existing standards for tumour
classification. It is apparent from optimal clustering that existing tumour
classification is largely unrelated to immunological factors within a patient
and that there may be scope for re-evaluating treatment options and survival
estimates based on a combination of tumour physiology and patient
histochemistry.
| [
{
"version": "v1",
"created": "Fri, 5 Jul 2013 12:56:24 GMT"
}
] | 2013-07-08T00:00:00 | [
[
"Roadknight",
"Chris",
""
],
[
"Aickelin",
"Uwe",
""
],
[
"Ladas",
"Alex",
""
],
[
"Soria",
"Daniele",
""
],
[
"Scholefield",
"John",
""
],
[
"Durrant",
"Lindy",
""
]
] | TITLE: Biomarker Clustering of Colorectal Cancer Data to Complement Clinical
Classification
ABSTRACT: In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. Attempts are made
to cluster this dataset and important subsets of it in an effort to
characterize the data and validate existing standards for tumour
classification. It is apparent from optimal clustering that existing tumour
classification is largely unrelated to immunological factors within a patient
and that there may be scope for re-evaluating treatment options and survival
estimates based on a combination of tumour physiology and patient
histochemistry.
| new_dataset | 0.969957 |
1307.1275 | Ruifan Li | Fangxiang Feng and Ruifan Li and Xiaojie Wang | Constructing Hierarchical Image-tags Bimodal Representations for Word
Tags Alternative Choice | 6 pages, 1 figure, Presented at the Workshop on Representation
Learning, ICML 2013 | null | null | null | cs.LG cs.NE | http://creativecommons.org/licenses/by/3.0/ | This paper describes our solution to the multi-modal learning challenge of
ICML. This solution comprises constructing three-level representations in three
consecutive stages and choosing correct tag words with a data-specific
strategy. Firstly, we use typical methods to obtain level-1 representations.
Each image is represented using MPEG-7 and gist descriptors with additional
features released by the contest organizers. And the corresponding word tags
are represented by bag-of-words model with a dictionary of 4000 words.
Secondly, we learn the level-2 representations using two stacked RBMs for each
modality. Thirdly, we propose a bimodal auto-encoder to learn the
similarities/dissimilarities between the pairwise image-tags as level-3
representations. Finally, during the test phase, based on one observation of
the dataset, we come up with a data-specific strategy to choose the correct tag
words, leading to a leap in overall performance. Our final average
accuracy on the private test set is 100%, which ranks the first place in this
challenge.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 11:10:45 GMT"
}
] | 2013-07-05T00:00:00 | [
[
"Feng",
"Fangxiang",
""
],
[
"Li",
"Ruifan",
""
],
[
"Wang",
"Xiaojie",
""
]
] | TITLE: Constructing Hierarchical Image-tags Bimodal Representations for Word
Tags Alternative Choice
ABSTRACT: This paper describes our solution to the multi-modal learning challenge of
ICML. This solution comprises constructing three-level representations in three
consecutive stages and choosing correct tag words with a data-specific
strategy. Firstly, we use typical methods to obtain level-1 representations.
Each image is represented using MPEG-7 and gist descriptors with additional
features released by the contest organizers. And the corresponding word tags
are represented by bag-of-words model with a dictionary of 4000 words.
Secondly, we learn the level-2 representations using two stacked RBMs for each
modality. Thirdly, we propose a bimodal auto-encoder to learn the
similarities/dissimilarities between the pairwise image-tags as level-3
representations. Finally, during the test phase, based on one observation of
the dataset, we come up with a data-specific strategy to choose the correct tag
words, leading to a leap in overall performance. Our final average
accuracy on the private test set is 100%, which ranks the first place in this
challenge.
| no_new_dataset | 0.945045 |
1307.1387 | Uwe Aickelin | Hala Helmi, Jon M. Garibaldi and Uwe Aickelin | Examining the Classification Accuracy of TSVMs with Feature Selection
in Comparison with the GLAD Algorithm | UKCI 2011, the 11th Annual Workshop on Computational Intelligence,
Manchester, pp 7-12 | null | null | null | cs.LG cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene expression data sets are used to classify and predict patient diagnostic
categories. As we know, it is extremely difficult and expensive to obtain
labelled gene expression examples. Moreover, conventional supervised approaches
such as Support Vector Machine (SVM) algorithms cannot function properly when
labelled data (training examples) are insufficient. Therefore, in this
paper, we suggest Transductive Support Vector Machines (TSVMs) as
semi-supervised learning algorithms, learning with both labelled samples data
and unlabelled samples to perform the classification of microarray data. To
prune the superfluous genes and samples we used a feature selection method
called Recursive Feature Elimination (RFE), which is supposed to enhance the
output of classification and avoid the local optimization problem. We examined
the classification prediction accuracy of the TSVM-RFE algorithm in comparison
with the Genetic Learning Across Datasets (GLAD) algorithm, as both are
semi-supervised learning methods. Comparing these two methods, we found that
the TSVM-RFE surpassed both an SVM using RFE and GLAD.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 16:06:25 GMT"
}
] | 2013-07-05T00:00:00 | [
[
"Helmi",
"Hala",
""
],
[
"Garibaldi",
"Jon M.",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: Examining the Classification Accuracy of TSVMs with Feature Selection
in Comparison with the GLAD Algorithm
ABSTRACT: Gene expression data sets are used to classify and predict patient diagnostic
categories. As we know, it is extremely difficult and expensive to obtain gene
expression labelled examples. Moreover, conventional supervised approaches
cannot function properly when labelled data (training examples) are
insufficient using Support Vector Machines (SVM) algorithms. Therefore, in this
paper, we suggest Transductive Support Vector Machines (TSVMs) as
semi-supervised learning algorithms, learning with both labelled samples data
and unlabelled samples to perform the classification of microarray data. To
prune the superfluous genes and samples we used a feature selection method
called Recursive Feature Elimination (RFE), which is supposed to enhance the
output of classification and avoid the local optimization problem. We examined
the classification prediction accuracy of the TSVM-RFE algorithm in comparison
with the Genetic Learning Across Datasets (GLAD) algorithm, as both are
semi-supervised learning methods. Comparing these two methods, we found that
the TSVM-RFE surpassed both an SVM using RFE and GLAD.
| no_new_dataset | 0.950869 |
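As a rough illustration of the supervised SVM-RFE baseline referred to in the abstract above (scikit-learn provides no transductive SVM, so the TSVM part is not shown; the data shapes and labels below are synthetic stand-ins for gene expression profiles, not the paper's setup):

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 500))              # 60 samples, 500 synthetic "genes"
    y = (X[:, :5].sum(axis=1) > 0).astype(int)  # labels driven by 5 informative genes

    svm = LinearSVC(C=1.0, dual=False, max_iter=5000)
    selector = RFE(estimator=svm, n_features_to_select=20, step=0.1)
    selector.fit(X, y)

    keep = selector.support_                    # boolean mask of retained genes
    score = cross_val_score(svm, X[:, keep], y, cv=5).mean()
    print(f"kept {keep.sum()} genes, 5-fold accuracy = {score:.2f}")

With step=0.1, RFE drops 10% of the remaining features per iteration, mirroring the pruning role that feature selection plays in the abstract.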
1307.1391 | Uwe Aickelin | Feng Gu, Jan Feyereisl, Robert Oates, Jenna Reps, Julie Greensmith,
Uwe Aickelin | Quiet in Class: Classification, Noise and the Dendritic Cell Algorithm | Proceedings of the 10th International Conference on Artificial Immune
Systems (ICARIS 2011), LNCS Volume 6825, Cambridge, UK, pp 173-186, 2011 | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Theoretical analyses of the Dendritic Cell Algorithm (DCA) have yielded
several criticisms about its underlying structure and operation. As a result,
several alterations and fixes have been suggested in the literature to correct
for these findings. A contribution of this work is to investigate the effects
of replacing the classification stage of the DCA (which is known to be flawed)
with a traditional machine learning technique. This work goes on to question
the merits of those unique properties of the DCA that are yet to be thoroughly
analysed. If none of these properties can be found to have a benefit over
traditional approaches, then "fixing" the DCA is arguably less efficient than
simply creating a new algorithm. This work examines the dynamic filtering
property of the DCA and questions the utility of this unique feature for the
anomaly detection problem. It is found that this feature, while advantageous
for noisy, time-ordered classification, is not as useful as a traditional
static filter for processing a synthetic dataset. It is concluded that there
are still unique features of the DCA left to investigate. Areas that may be of
benefit to the Artificial Immune Systems community are suggested.
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 16:19:21 GMT"
}
] | 2013-07-05T00:00:00 | [
[
"Gu",
"Feng",
""
],
[
"Feyereisl",
"Jan",
""
],
[
"Oates",
"Robert",
""
],
[
"Reps",
"Jenna",
""
],
[
"Greensmith",
"Julie",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: Quiet in Class: Classification, Noise and the Dendritic Cell Algorithm
ABSTRACT: Theoretical analyses of the Dendritic Cell Algorithm (DCA) have yielded
several criticisms about its underlying structure and operation. As a result,
several alterations and fixes have been suggested in the literature to correct
for these findings. A contribution of this work is to investigate the effects
of replacing the classification stage of the DCA (which is known to be flawed)
with a traditional machine learning technique. This work goes on to question
the merits of those unique properties of the DCA that are yet to be thoroughly
analysed. If none of these properties can be found to have a benefit over
traditional approaches, then "fixing" the DCA is arguably less efficient than
simply creating a new algorithm. This work examines the dynamic filtering
property of the DCA and questions the utility of this unique feature for the
anomaly detection problem. It is found that this feature, while advantageous
for noisy, time-ordered classification, is not as useful as a traditional
static filter for processing a synthetic dataset. It is concluded that there
are still unique features of the DCA left to investigate. Areas that may be of
benefit to the Artificial Immune Systems community are suggested.
| no_new_dataset | 0.940735 |
1307.1417 | Marius Nicolae | Sanguthevar Rajasekaran and Marius Nicolae | An Elegant Algorithm for the Construction of Suffix Arrays | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The suffix array is a data structure that finds numerous applications in
string processing problems for both linguistic texts and biological data. It
has been introduced as a memory efficient alternative for suffix trees. The
suffix array consists of the sorted suffixes of a string. There are several
linear time suffix array construction algorithms (SACAs) known in the
literature. However, one of the fastest algorithms in practice has a worst case
run time of $O(n^2)$. The problem of designing practically and theoretically
efficient techniques remains open. In this paper we present an elegant
algorithm for suffix array construction which takes linear time with high
probability; the probability is on the space of all possible inputs. Our
algorithm is one of the simplest of the known SACAs and it opens up a new
dimension of suffix array construction that has not been explored until now.
Our algorithm is easily parallelizable. We offer parallel implementations on
various parallel models of computing. We prove a lemma on the $\ell$-mers of a
random string which might find independent applications. We also present
another algorithm that utilizes the above algorithm. This algorithm is called
RadixSA and has a worst case run time of $O(n\log{n})$. RadixSA introduces an
idea that may find independent applications as a speedup technique for other
SACAs. An empirical comparison of RadixSA with other algorithms on various
datasets reveals that our algorithm is one of the fastest algorithms to date.
The C++ source code is freely available at
http://www.engr.uconn.edu/~man09004/radixSA.zip
| [
{
"version": "v1",
"created": "Thu, 4 Jul 2013 17:10:08 GMT"
}
] | 2013-07-05T00:00:00 | [
[
"Rajasekaran",
"Sanguthevar",
""
],
[
"Nicolae",
"Marius",
""
]
] | TITLE: An Elegant Algorithm for the Construction of Suffix Arrays
ABSTRACT: The suffix array is a data structure that finds numerous applications in
string processing problems for both linguistic texts and biological data. It
has been introduced as a memory efficient alternative for suffix trees. The
suffix array consists of the sorted suffixes of a string. There are several
linear time suffix array construction algorithms (SACAs) known in the
literature. However, one of the fastest algorithms in practice has a worst case
run time of $O(n^2)$. The problem of designing practically and theoretically
efficient techniques remains open. In this paper we present an elegant
algorithm for suffix array construction which takes linear time with high
probability; the probability is on the space of all possible inputs. Our
algorithm is one of the simplest of the known SACAs and it opens up a new
dimension of suffix array construction that has not been explored until now.
Our algorithm is easily parallelizable. We offer parallel implementations on
various parallel models of computing. We prove a lemma on the $\ell$-mers of a
random string which might find independent applications. We also present
another algorithm that utilizes the above algorithm. This algorithm is called
RadixSA and has a worst case run time of $O(n\log{n})$. RadixSA introduces an
idea that may find independent applications as a speedup technique for other
SACAs. An empirical comparison of RadixSA with other algorithms on various
datasets reveals that our algorithm is one of the fastest algorithms to date.
The C++ source code is freely available at
http://www.engr.uconn.edu/~man09004/radixSA.zip
| no_new_dataset | 0.94801 |
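For orientation only, a deliberately naive suffix-array construction in Python; it sorts whole suffixes and therefore costs O(n^2 log n), unlike RadixSA or the high-probability linear-time algorithm described above:

    def naive_suffix_array(s):
        # Sort suffix start positions by comparing the suffixes themselves.
        return sorted(range(len(s)), key=lambda i: s[i:])

    text = "banana"
    sa = naive_suffix_array(text)
    print(sa)                        # [5, 3, 1, 0, 4, 2]
    print([text[i:] for i in sa])    # suffixes in lexicographic order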
1307.0915 | Yar Muhamad Mr | Yar M. Mughal, A. Krivoshei, P. Annus | Separation of cardiac and respiratory components from the electrical
bio-impedance signal using PCA and fast ICA | 4 pages, International Conference on Control, Engineering and
Information Technology (CEIT'13) | null | null | null | stat.AP physics.ins-det stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is an attempt to separate cardiac and respiratory signals from an
electrical bio-impedance (EBI) dataset. For this, two well-known algorithms,
namely Principal Component Analysis (PCA) and Independent Component Analysis
(ICA), were used to accomplish the task. The PCA and ICA methods are applied
first to reduce the dimensionality and then to attempt to separate the useful
components of the EBI, namely the cardiac and respiratory ones. The problem was
investigated under the assumption that no motion artefacts are present. To
carry out this procedure, the two-channel complex EBI measurements were
acquired using classical Kelvin-type four-electrode configurations for each
complex channel. Thus four real signals were used as inputs for the PCA and
fast ICA. The results showed that neither PCA nor ICA, nor a combination of
them, can accurately separate the components when only two complex (four
real-valued) input components are used.
| [
{
"version": "v1",
"created": "Wed, 3 Jul 2013 05:51:43 GMT"
}
] | 2013-07-04T00:00:00 | [
[
"Mughal",
"Yar M.",
""
],
[
"Krivoshei",
"A.",
""
],
[
"Annus",
"P.",
""
]
] | TITLE: Separation of cardiac and respiratory components from the electrical
bio-impedance signal using PCA and fast ICA
ABSTRACT: This paper is an attempt to separate cardiac and respiratory signals from an
electrical bio-impedance (EBI) dataset. For this, two well-known algorithms,
namely Principal Component Analysis (PCA) and Independent Component Analysis
(ICA), were used to accomplish the task. The PCA and ICA methods are applied
first to reduce the dimensionality and then to attempt to separate the useful
components of the EBI, namely the cardiac and respiratory ones. The problem was
investigated under the assumption that no motion artefacts are present. To
carry out this procedure, the two-channel complex EBI measurements were
acquired using classical Kelvin-type four-electrode configurations for each
complex channel. Thus four real signals were used as inputs for the PCA and
fast ICA. The results showed that neither PCA nor ICA, nor a combination of
them, can accurately separate the components when only two complex (four
real-valued) input components are used.
| no_new_dataset | 0.942981 |
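A minimal sketch of the processing chain described above, assuming scikit-learn and synthetic stand-ins for the four real EBI channels (a slow "respiratory" and a faster "cardiac" component mixed by a random matrix; frequencies and amplitudes are illustrative):

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    t = np.arange(0, 30, 0.01)                        # 30 s sampled at 100 Hz
    resp = np.sin(2 * np.pi * 0.25 * t)               # ~15 breaths per minute
    card = 0.3 * np.sin(2 * np.pi * 1.2 * t)          # ~72 beats per minute
    A = np.random.default_rng(1).normal(size=(4, 2))  # unknown mixing matrix
    X = np.column_stack([resp, card]) @ A.T           # four mixed "measured" channels
    X += 0.01 * np.random.default_rng(2).normal(size=X.shape)

    X_red = PCA(n_components=2).fit_transform(X)      # dimensionality reduction
    S_est = FastICA(n_components=2, random_state=0).fit_transform(X_red)
    print(S_est.shape)                                # (3000, 2) estimated sources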
1307.0596 | Om Damani | Om P. Damani | Improving Pointwise Mutual Information (PMI) by Incorporating
Significant Co-occurrence | To appear in the proceedings of 17th Conference on Computational
Natural Language Learning, CoNLL 2013 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We design a new co-occurrence based word association measure by incorporating
the concept of significant cooccurrence in the popular word association measure
Pointwise Mutual Information (PMI). By extensive experiments with a large
number of publicly available datasets we show that the newly introduced measure
performs better than other co-occurrence based measures and despite being
resource-light, compares well with the best known resource-heavy distributional
similarity and knowledge based word association measures. We investigate the
source of this performance improvement and find that of the two types of
significant co-occurrence - corpus-level and document-level, the concept of
corpus level significance combined with the use of document counts in place of
word counts is responsible for all the performance gains observed. The concept
of document level significance is not helpful for PMI adaptation.
| [
{
"version": "v1",
"created": "Tue, 2 Jul 2013 06:25:51 GMT"
}
] | 2013-07-03T00:00:00 | [
[
"Damani",
"Om P.",
""
]
] | TITLE: Improving Pointwise Mutual Information (PMI) by Incorporating
Significant Co-occurrence
ABSTRACT: We design a new co-occurrence based word association measure by incorporating
the concept of significant cooccurrence in the popular word association measure
Pointwise Mutual Information (PMI). By extensive experiments with a large
number of publicly available datasets we show that the newly introduced measure
performs better than other co-occurrence based measures and despite being
resource-light, compares well with the best known resource-heavy distributional
similarity and knowledge based word association measures. We investigate the
source of this performance improvement and find that of the two types of
significant co-occurrence - corpus-level and document-level, the concept of
corpus level significance combined with the use of document counts in place of
word counts is responsible for all the performance gains observed. The concept
of document level significance is not helpful for PMI adaptation.
| no_new_dataset | 0.947672 |
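For reference, plain document-level PMI, the quantity the paper builds on, can be computed as below; the toy corpus and words are invented for illustration:

    import math
    from collections import Counter
    from itertools import combinations

    docs = [{"coffee", "cup", "morning"}, {"coffee", "beans", "roast"},
            {"tea", "cup", "morning"}, {"tea", "leaves"}]
    N = len(docs)
    df = Counter(w for d in docs for w in d)                   # document frequency
    co = Counter(p for d in docs for p in combinations(sorted(d), 2))

    def pmi(x, y):
        pair = tuple(sorted((x, y)))
        if co[pair] == 0:
            return float("-inf")
        return math.log((co[pair] / N) / ((df[x] / N) * (df[y] / N)))

    print(round(pmi("coffee", "beans"), 3), pmi("coffee", "tea"))

The abstract attributes the gains to corpus-level significance combined with document counts in place of word counts; the sketch above uses document counts only and omits the significance test.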
1307.0747 | Uwe Aickelin | Stephanie Foan, Andrew Jackson, Ian Spendlove, Uwe Aickelin | Simulating the Dynamics of T Cell Subsets Throughout the Lifetime | Proceedings of the 10th International Conference on Artificial Immune
Systems (ICARIS 2011), LNCS Volume 6825, Cambridge, UK, pp 71-76 | null | null | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is widely accepted that the immune system undergoes age-related changes
correlating with increased disease in the elderly. T cell subsets have been
implicated. The aim of this work is firstly to implement and validate a
simulation of T regulatory cell (Treg) dynamics throughout the lifetime, based
on a model by Baltcheva. We show that our initial simulation produces an
inversion between precursor and mature Tregs at around 20 years of age, though
the output differs significantly from the original laboratory dataset.
Secondly, this report discusses development of the model to incorporate new
data from a cross-sectional study of healthy blood donors addressing balance
between Tregs and Th17 cells with novel markers for Treg. The potential for
simulation to add insight into immune aging is discussed.
| [
{
"version": "v1",
"created": "Tue, 2 Jul 2013 16:19:54 GMT"
}
] | 2013-07-03T00:00:00 | [
[
"Foan",
"Stephanie",
""
],
[
"Jackson",
"Andrew",
""
],
[
"Spendlove",
"Ian",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: Simulating the Dynamics of T Cell Subsets Throughout the Lifetime
ABSTRACT: It is widely accepted that the immune system undergoes age-related changes
correlating with increased disease in the elderly. T cell subsets have been
implicated. The aim of this work is firstly to implement and validate a
simulation of T regulatory cell (Treg) dynamics throughout the lifetime, based
on a model by Baltcheva. We show that our initial simulation produces an
inversion between precursor and mature Tregs at around 20 years of age, though
the output differs significantly from the original laboratory dataset.
Secondly, this report discusses development of the model to incorporate new
data from a cross-sectional study of healthy blood donors addressing balance
between Tregs and Th17 cells with novel markers for Treg. The potential for
simulation to add insight into immune aging is discussed.
| no_new_dataset | 0.935582 |
0812.4235 | Francesco Dinuzzo | Francesco Dinuzzo, Gianluigi Pillonetto, Giuseppe De Nicolao | Client-server multi-task learning from distributed datasets | null | null | 10.1109/TNN.2010.2095882 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A client-server architecture to simultaneously solve multiple learning tasks
from distributed datasets is described. In such an architecture, each client is
associated with an individual learning task and the associated dataset of
examples. The goal of the architecture is to perform information fusion from
multiple datasets while preserving privacy of individual data. The role of the
server is to collect data in real-time from the clients and codify the
information in a common database. The information coded in this database can be
used by all the clients to solve their individual learning task, so that each
client can exploit the informative content of all the datasets without actually
having access to private data of others. The proposed algorithmic framework,
based on regularization theory and kernel methods, uses a suitable class of
mixed effect kernels. The new method is illustrated through a simulated music
recommendation system.
| [
{
"version": "v1",
"created": "Mon, 22 Dec 2008 16:34:39 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jan 2010 15:37:43 GMT"
}
] | 2013-07-02T00:00:00 | [
[
"Dinuzzo",
"Francesco",
""
],
[
"Pillonetto",
"Gianluigi",
""
],
[
"De Nicolao",
"Giuseppe",
""
]
] | TITLE: Client-server multi-task learning from distributed datasets
ABSTRACT: A client-server architecture to simultaneously solve multiple learning tasks
from distributed datasets is described. In such an architecture, each client is
associated with an individual learning task and the associated dataset of
examples. The goal of the architecture is to perform information fusion from
multiple datasets while preserving privacy of individual data. The role of the
server is to collect data in real-time from the clients and codify the
information in a common database. The information coded in this database can be
used by all the clients to solve their individual learning task, so that each
client can exploit the informative content of all the datasets without actually
having access to private data of others. The proposed algorithmic framework,
based on regularization theory and kernel methods, uses a suitable class of
mixed effect kernels. The new method is illustrated through a simulated music
recommendation system.
| no_new_dataset | 0.939803 |
1111.4541 | Lu Dang Khoa Nguyen | Nguyen Lu Dang Khoa and Sanjay Chawla | Large Scale Spectral Clustering Using Approximate Commute Time Embedding | null | null | 10.1007/978-3-642-33492-4_4 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spectral clustering is a novel clustering method which can detect complex
shapes of data clusters. However, it requires the eigen decomposition of the
graph Laplacian matrix, which is proportional to $O(n^3)$ and thus is not
suitable for large scale systems. Recently, many methods have been proposed to
accelerate the computational time of spectral clustering. These approximate
methods usually involve sampling techniques by which a lot of information in
the original data may be lost. In this work, we propose a fast and accurate
spectral clustering approach using an approximate commute time embedding, which
is similar to the spectral embedding. The method does not require using any
sampling technique and computing any eigenvector at all. Instead it uses random
projection and a linear time solver to find the approximate embedding. The
experiments in several synthetic and real datasets show that the proposed
approach has better clustering quality and is faster than the state-of-the-art
approximate spectral clustering methods.
| [
{
"version": "v1",
"created": "Sat, 19 Nov 2011 08:39:34 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Feb 2012 04:19:56 GMT"
}
] | 2013-07-02T00:00:00 | [
[
"Khoa",
"Nguyen Lu Dang",
""
],
[
"Chawla",
"Sanjay",
""
]
] | TITLE: Large Scale Spectral Clustering Using Approximate Commute Time Embedding
ABSTRACT: Spectral clustering is a novel clustering method which can detect complex
shapes of data clusters. However, it requires the eigen decomposition of the
graph Laplacian matrix, which is proportional to $O(n^3)$ and thus is not
suitable for large scale systems. Recently, many methods have been proposed to
accelerate the computational time of spectral clustering. These approximate
methods usually involve sampling techniques by which a lot of information in
the original data may be lost. In this work, we propose a fast and accurate
spectral clustering approach using an approximate commute time embedding, which
is similar to the spectral embedding. The method does not require using any
sampling technique and computing any eigenvector at all. Instead it uses random
projection and a linear time solver to find the approximate embedding. The
experiments in several synthetic and real datasets show that the proposed
approach has better clustering quality and is faster than the state-of-the-art
approximate spectral clustering methods.
| no_new_dataset | 0.948822 |
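A condensed sketch of the general recipe discussed above (random projection of the weighted edge-incidence matrix followed by sparse linear solves with the Laplacian); the solver, the small ridge term, the omitted volume scaling, and the toy graph are placeholders rather than the exact components used in the paper:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import laplacian
    from scipy.sparse.linalg import spsolve

    def approx_commute_embedding(W, k=4, eps=1e-6, seed=0):
        n = W.shape[0]
        rows, cols = sp.triu(W, k=1).nonzero()
        w = np.asarray(W[rows, cols]).ravel()
        m = len(rows)
        # Weighted edge-incidence matrix B (m x n): row e is sqrt(w_e) * (e_i - e_j).
        data = np.concatenate([np.sqrt(w), -np.sqrt(w)])
        B = sp.csr_matrix((data, (np.r_[np.arange(m), np.arange(m)],
                                  np.r_[rows, cols])), shape=(m, n))
        L = laplacian(W).tocsc() + eps * sp.identity(n, format="csc")
        Q = np.random.default_rng(seed).choice([-1.0, 1.0], size=(k, m)) / np.sqrt(k)
        Z = np.vstack([spsolve(L, B.T @ Q[i]) for i in range(k)])   # k x n
        return Z.T                                                  # n x k embedding

    # Toy graph: two dense blocks joined by a single edge.
    A = np.zeros((20, 20))
    A[:10, :10] = 1.0; A[10:, 10:] = 1.0; A[9, 10] = A[10, 9] = 1.0
    np.fill_diagonal(A, 0.0)
    emb = approx_commute_embedding(sp.csr_matrix(A))
    print(emb.shape)    # (20, 4), ready for k-means as in spectral clustering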
1306.5390 | Tejaswi Agarwal | Tejaswi Agarwal, Saurabh Jha and B. Rajesh Kanna | P-HGRMS: A Parallel Hypergraph Based Root Mean Square Algorithm for
Image Denoising | 2 pages, 2 figures. Published as poster at the 22nd ACM International
Symposium on High Performance Parallel and Distributed Systems, HPDC 2013,
New York, USA. Won the Best Poster Award at HPDC 2013 | null | null | null | cs.DC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a parallel Salt and Pepper (SP) noise removal algorithm
in a grey level digital image based on the Hypergraph Based Root Mean Square
(HGRMS) approach. HGRMS is a generic algorithm for identifying noisy pixels in
any digital image using a two level hierarchical serial approach. However, for
SP noise removal, we reduce this algorithm to a parallel model by introducing a
cardinality matrix and an iteration factor, k, which helps us reduce the
dependencies in the existing approach. We also observe that the performance of
the serial implementation is better on smaller images, but once the threshold
is achieved in terms of image resolution, its computational complexity
increases drastically. We test P-HGRMS using standard images from the Berkeley
Segmentation dataset on NVIDIA's Compute Unified Device Architecture (CUDA) for
noise identification and attenuation. We also compare the noise removal
efficiency of the proposed algorithm using Peak Signal to Noise Ratio (PSNR) to
the existing approach. P-HGRMS maintains the noise removal efficiency and
outperforms its sequential counterpart by 6 to 18 times (6x - 18x) in
computational efficiency.
| [
{
"version": "v1",
"created": "Sun, 23 Jun 2013 09:36:08 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Jun 2013 01:32:41 GMT"
}
] | 2013-07-02T00:00:00 | [
[
"Agarwal",
"Tejaswi",
""
],
[
"Jha",
"Saurabh",
""
],
[
"Kanna",
"B. Rajesh",
""
]
] | TITLE: P-HGRMS: A Parallel Hypergraph Based Root Mean Square Algorithm for
Image Denoising
ABSTRACT: This paper presents a parallel Salt and Pepper (SP) noise removal algorithm
in a grey level digital image based on the Hypergraph Based Root Mean Square
(HGRMS) approach. HGRMS is a generic algorithm for identifying noisy pixels in
any digital image using a two level hierarchical serial approach. However, for
SP noise removal, we reduce this algorithm to a parallel model by introducing a
cardinality matrix and an iteration factor, k, which helps us reduce the
dependencies in the existing approach. We also observe that the performance of
the serial implementation is better on smaller images, but once the threshold
is achieved in terms of image resolution, its computational complexity
increases drastically. We test P-HGRMS using standard images from the Berkeley
Segmentation dataset on NVIDIA's Compute Unified Device Architecture (CUDA) for
noise identification and attenuation. We also compare the noise removal
efficiency of the proposed algorithm using Peak Signal to Noise Ratio (PSNR) to
the existing approach. P-HGRMS maintains the noise removal efficiency and
outperforms its sequential counterpart by 6 to 18 times (6x - 18x) in
computational efficiency.
| no_new_dataset | 0.94868 |
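Not the hypergraph method itself, but a minimal baseline that is handy for sanity-checking results of this kind: add salt-and-pepper noise to a smooth synthetic image, denoise with a median filter, and score with PSNR (all names, the test image, and the noise level are illustrative choices):

    import numpy as np
    from scipy.ndimage import median_filter

    def add_salt_pepper(img, amount=0.2, seed=0):
        rng = np.random.default_rng(seed)
        noisy, mask = img.copy(), rng.random(img.shape)
        noisy[mask < amount / 2] = 0            # pepper
        noisy[mask > 1 - amount / 2] = 255      # salt
        return noisy

    def psnr(ref, est):
        mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

    x = np.linspace(0, 1, 256)
    clean = (255 * np.outer(np.sin(np.pi * x), np.sin(np.pi * x))).astype(np.uint8)
    noisy = add_salt_pepper(clean)
    restored = median_filter(noisy, size=3)
    print(f"PSNR noisy = {psnr(clean, noisy):.1f} dB, "
          f"restored = {psnr(clean, restored):.1f} dB")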
1307.0129 | Roozbeh Rajabi | Roozbeh Rajabi, Hassan Ghassemian | Hyperspectral Data Unmixing Using GNMF Method and Sparseness Constraint | 4 pages, conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperspectral images contain mixed pixels due to low spatial resolution of
hyperspectral sensors. Mixed pixels are pixels containing more than one
distinct material called endmembers. The presence percentages of endmembers in
mixed pixels are called abundance fractions. Spectral unmixing problem refers
to decomposing these pixels into a set of endmembers and abundance fractions.
Due to nonnegativity constraint on abundance fractions, nonnegative matrix
factorization methods (NMF) have been widely used for solving spectral unmixing
problem. In this paper we have used graph regularized (GNMF) method with
sparseness constraint to unmix hyperspectral data. This method is applied to
simulated data using the AVIRIS Indian Pines dataset and the USGS library, and
the results are quantified based on AAD and SAD measures. Results in comparison
with other
methods show that the proposed method can unmix data more effectively.
| [
{
"version": "v1",
"created": "Sat, 29 Jun 2013 16:57:44 GMT"
}
] | 2013-07-02T00:00:00 | [
[
"Rajabi",
"Roozbeh",
""
],
[
"Ghassemian",
"Hassan",
""
]
] | TITLE: Hyperspectral Data Unmixing Using GNMF Method and Sparseness Constraint
ABSTRACT: Hyperspectral images contain mixed pixels due to low spatial resolution of
hyperspectral sensors. Mixed pixels are pixels containing more than one
distinct material called endmembers. The presence percentages of endmembers in
mixed pixels are called abundance fractions. Spectral unmixing problem refers
to decomposing these pixels into a set of endmembers and abundance fractions.
Due to nonnegativity constraint on abundance fractions, nonnegative matrix
factorization methods (NMF) have been widely used for solving spectral unmixing
problem. In this paper we have used graph regularized (GNMF) method with
sparseness constraint to unmix hyperspectral data. This method is applied to
simulated data using the AVIRIS Indian Pines dataset and the USGS library, and
the results are quantified based on AAD and SAD measures. Results in comparison
with other
methods show that the proposed method can unmix data more effectively.
| no_new_dataset | 0.950088 |
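A linear-unmixing sketch with plain NMF as a point of reference (scikit-learn ships no graph-regularized NMF, so the GNMF and sparseness terms are omitted; the endmember spectra and abundances below are synthetic):

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    bands, pixels, p = 50, 400, 3                    # spectral bands, pixels, endmembers
    E = rng.random((bands, p))                       # synthetic endmember signatures
    A = rng.dirichlet(np.ones(p), size=pixels).T     # abundances, columns sum to one
    Y = E @ A + 0.01 * rng.random((bands, pixels))   # observed mixed pixels

    model = NMF(n_components=p, init="nndsvda", max_iter=500, random_state=0)
    E_hat = model.fit_transform(Y)                   # estimated endmembers (bands x p)
    A_hat = model.components_                        # estimated abundances (p x pixels)
    A_hat = A_hat / A_hat.sum(axis=0, keepdims=True) # enforce sum-to-one post hoc
    print(E_hat.shape, A_hat.shape)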
1307.0253 | Bhavana Dalvi | Bhavana Dalvi, William W. Cohen, Jamie Callan | Exploratory Learning | 16 pages; European Conference on Machine Learning and Principles and
Practice of Knowledge Discovery in Databases, 2013 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multiclass semi-supervised learning (SSL), it is sometimes the case that
the number of classes present in the data is not known, and hence no labeled
examples are provided for some classes. In this paper we present variants of
well-known semi-supervised multiclass learning methods that are robust when the
data contains an unknown number of classes. In particular, we present an
"exploratory" extension of expectation-maximization (EM) that explores
different numbers of classes while learning. "Exploratory" SSL greatly improves
performance on three datasets in terms of F1 on the classes with seed examples
i.e., the classes which are expected to be in the data. Our Exploratory EM
algorithm also outperforms an SSL method based on non-parametric Bayesian
clustering.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2013 01:09:25 GMT"
}
] | 2013-07-02T00:00:00 | [
[
"Dalvi",
"Bhavana",
""
],
[
"Cohen",
"William W.",
""
],
[
"Callan",
"Jamie",
""
]
] | TITLE: Exploratory Learning
ABSTRACT: In multiclass semi-supervised learning (SSL), it is sometimes the case that
the number of classes present in the data is not known, and hence no labeled
examples are provided for some classes. In this paper we present variants of
well-known semi-supervised multiclass learning methods that are robust when the
data contains an unknown number of classes. In particular, we present an
"exploratory" extension of expectation-maximization (EM) that explores
different numbers of classes while learning. "Exploratory" SSL greatly improves
performance on three datasets in terms of F1 on the classes with seed examples
i.e., the classes which are expected to be in the data. Our Exploratory EM
algorithm also outperforms an SSL method based on non-parametric Bayesian
clustering.
| no_new_dataset | 0.950411 |
1307.0261 | Bhavana Dalvi | Bhavana Dalvi, William W. Cohen, and Jamie Callan | WebSets: Extracting Sets of Entities from the Web Using Unsupervised
Information Extraction | 10 pages; International Conference on Web Search and Data Mining 2012 | null | null | null | cs.LG cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe an open-domain information extraction method for extracting
concept-instance pairs from an HTML corpus. Most earlier approaches to this
problem rely on combining clusters of distributionally similar terms and
concept-instance pairs obtained with Hearst patterns. In contrast, our method
relies on a novel approach for clustering terms found in HTML tables, and then
assigning concept names to these clusters using Hearst patterns. The method can
be efficiently applied to a large corpus, and experimental results on several
datasets show that our method can accurately extract large numbers of
concept-instance pairs.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2013 02:49:08 GMT"
}
] | 2013-07-02T00:00:00 | [
[
"Dalvi",
"Bhavana",
""
],
[
"Cohen",
"William W.",
""
],
[
"Callan",
"Jamie",
""
]
] | TITLE: WebSets: Extracting Sets of Entities from the Web Using Unsupervised
Information Extraction
ABSTRACT: We describe an open-domain information extraction method for extracting
concept-instance pairs from an HTML corpus. Most earlier approaches to this
problem rely on combining clusters of distributionally similar terms and
concept-instance pairs obtained with Hearst patterns. In contrast, our method
relies on a novel approach for clustering terms found in HTML tables, and then
assigning concept names to these clusters using Hearst patterns. The method can
be efficiently applied to a large corpus, and experimental results on several
datasets show that our method can accurately extract large numbers of
concept-instance pairs.
| no_new_dataset | 0.955402 |
1307.0414 | Ian Goodfellow | Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville,
Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler,
Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li,
Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John
Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing
Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, and Yoshua Bengio | Challenges in Representation Learning: A report on three machine
learning contests | 8 pages, 2 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ICML 2013 Workshop on Challenges in Representation Learning focused on
three challenges: the black box learning challenge, the facial expression
recognition challenge, and the multimodal learning challenge. We describe the
datasets created for these challenges and summarize the results of the
competitions. We provide suggestions for organizers of future challenges and
some comments on what kind of knowledge can be gained from machine learning
competitions.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2013 15:53:22 GMT"
}
] | 2013-07-02T00:00:00 | [
[
"Goodfellow",
"Ian J.",
""
],
[
"Erhan",
"Dumitru",
""
],
[
"Carrier",
"Pierre Luc",
""
],
[
"Courville",
"Aaron",
""
],
[
"Mirza",
"Mehdi",
""
],
[
"Hamner",
"Ben",
""
],
[
"Cukierski",
"Will",
""
],
[
"Tang",
"Yichuan",
""
],
[
"Thaler",
"David",
""
],
[
"Lee",
"Dong-Hyun",
""
],
[
"Zhou",
"Yingbo",
""
],
[
"Ramaiah",
"Chetan",
""
],
[
"Feng",
"Fangxiang",
""
],
[
"Li",
"Ruifan",
""
],
[
"Wang",
"Xiaojie",
""
],
[
"Athanasakis",
"Dimitris",
""
],
[
"Shawe-Taylor",
"John",
""
],
[
"Milakov",
"Maxim",
""
],
[
"Park",
"John",
""
],
[
"Ionescu",
"Radu",
""
],
[
"Popescu",
"Marius",
""
],
[
"Grozea",
"Cristian",
""
],
[
"Bergstra",
"James",
""
],
[
"Xie",
"Jingjing",
""
],
[
"Romaszko",
"Lukasz",
""
],
[
"Xu",
"Bing",
""
],
[
"Chuang",
"Zhang",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Challenges in Representation Learning: A report on three machine
learning contests
ABSTRACT: The ICML 2013 Workshop on Challenges in Representation Learning focused on
three challenges: the black box learning challenge, the facial expression
recognition challenge, and the multimodal learning challenge. We describe the
datasets created for these challenges and summarize the results of the
competitions. We provide suggestions for organizers of future challenges and
some comments on what kind of knowledge can be gained from machine learning
competitions.
| no_new_dataset | 0.94887 |
1307.0475 | Faraz Ahmed | Faraz Ahmed, Rong Jin and Alex X. Liu | A Random Matrix Approach to Differential Privacy and Structure Preserved
Social Network Graph Publishing | null | null | null | null | cs.CR cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online social networks are being increasingly used for analyzing various
societal phenomena such as epidemiology, information dissemination, marketing
and sentiment flow. Popular analysis techniques such as clustering and
influential node analysis, require the computation of eigenvectors of the real
graph's adjacency matrix. Recent de-anonymization attacks on Netflix and AOL
datasets show that open access to such graphs poses privacy threats. Among
the various privacy preserving models, Differential privacy provides the
strongest privacy guarantees.
In this paper we propose a privacy preserving mechanism for publishing social
network graph data, which satisfies differential privacy guarantees by
utilizing a combination of theory of random matrix and that of differential
privacy. The key idea is to project each row of an adjacency matrix to a low
dimensional space using the random projection approach and then perturb the
projected matrix with random noise. We show that as compared to existing
approaches for differential private approximation of eigenvectors, our approach
is computationally efficient, preserves the utility and satisfies differential
privacy. We evaluate our approach on social network graphs of Facebook, Live
Journal and Pokec. The results show that even for high values of noise variance
sigma=1 the clustering quality given by normalized mutual information gain is
as low as 0.74. For influential node discovery, the proposed approach is able to
correctly recover 80 of the most influential nodes. We also compare our results
with an approach presented in [43], which directly perturbs the eigenvector of
the original data by a Laplacian noise. The results show that this approach
requires a large random perturbation in order to preserve the differential
privacy, which leads to a poor estimation of eigenvectors for large social
networks.
| [
{
"version": "v1",
"created": "Mon, 1 Jul 2013 18:46:28 GMT"
}
] | 2013-07-02T00:00:00 | [
[
"Ahmed",
"Faraz",
""
],
[
"Jin",
"Rong",
""
],
[
"Liu",
"Alex X.",
""
]
] | TITLE: A Random Matrix Approach to Differential Privacy and Structure Preserved
Social Network Graph Publishing
ABSTRACT: Online social networks are being increasingly used for analyzing various
societal phenomena such as epidemiology, information dissemination, marketing
and sentiment flow. Popular analysis techniques such as clustering and
influential node analysis, require the computation of eigenvectors of the real
graph's adjacency matrix. Recent de-anonymization attacks on Netflix and AOL
datasets show that open access to such graphs poses privacy threats. Among
the various privacy preserving models, Differential privacy provides the
strongest privacy guarantees.
In this paper we propose a privacy preserving mechanism for publishing social
network graph data, which satisfies differential privacy guarantees by
utilizing a combination of theory of random matrix and that of differential
privacy. The key idea is to project each row of an adjacency matrix to a low
dimensional space using the random projection approach and then perturb the
projected matrix with random noise. We show that as compared to existing
approaches for differential private approximation of eigenvectors, our approach
is computationally efficient, preserves the utility and satisfies differential
privacy. We evaluate our approach on social network graphs of Facebook, Live
Journal and Pokec. The results show that even for high values of noise variance
sigma=1 the clustering quality given by normalized mutual information gain is
as low as 0.74. For influential node discovery, the proposed approach is able to
correctly recover 80 of the most influential nodes. We also compare our results
with an approach presented in [43], which directly perturbs the eigenvector of
the original data by a Laplacian noise. The results show that this approach
requires a large random perturbation in order to preserve the differential
privacy, which leads to a poor estimation of eigenvectors for large social
networks.
| no_new_dataset | 0.947769 |
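The core two-step idea reads roughly as follows in code: project each adjacency row to a low dimension with a random matrix, then perturb with noise. This is a bare-bones numpy sketch; the noise calibration required for a formal differential-privacy guarantee, and the eigenvector computation on the released matrix, are deliberately left out:

    import numpy as np

    def project_and_perturb(adj, k=16, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        P = rng.normal(size=(adj.shape[0], k)) / np.sqrt(k)  # random projection
        low_dim = adj @ P                                    # n x k projected rows
        return low_dim + rng.normal(scale=sigma, size=low_dim.shape)

    rng = np.random.default_rng(1)
    A = (rng.random((200, 200)) < 0.05).astype(float)
    A = np.triu(A, 1); A = A + A.T                           # symmetric, no self-loops
    released = project_and_perturb(A)
    print(released.shape)                                    # (200, 16)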
1305.3250 | Cristian Popescu | Marian Popescu, Peter J. Dugan, Mohammad Pourhomayoun, Denise Risch,
Harold W. Lewis III, Christopher W. Clark | Bioacoustical Periodic Pulse Train Signal Detection and Classification
using Spectrogram Intensity Binarization and Energy Projection | ICML 2013 Workshop on Machine Learning for Bioacoustics, 2013, 6
pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The following work outlines an approach for automatic detection and
recognition of periodic pulse train signals using a multi-stage process based
on spectrogram edge detection, energy projection and classification. The method
has been implemented to automatically detect and recognize pulse train songs of
minke whales. While the long term goal of this work is to properly identify and
detect minke songs from large multi-year datasets, this effort was developed
using sounds off the coast of Massachusetts, in the Stellwagen Bank National
Marine Sanctuary. The detection methodology is presented and evaluated on 232
continuous hours of acoustic recordings and a qualitative analysis of machine
learning classifiers and their performance is described. The trained automatic
detection and classification system is applied to 120 continuous hours,
comprised of various challenges such as broadband and narrowband noises, low
SNR, and other pulse train signatures. This automatic system achieves a TPR of
63% for FPR of 0.6% (or 0.87 FP/h), at a Precision (PPV) of 84% and an F1 score
of 71%.
| [
{
"version": "v1",
"created": "Tue, 14 May 2013 18:49:52 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Jun 2013 20:09:07 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jun 2013 17:33:59 GMT"
}
] | 2013-07-01T00:00:00 | [
[
"Popescu",
"Marian",
""
],
[
"Dugan",
"Peter J.",
""
],
[
"Pourhomayoun",
"Mohammad",
""
],
[
"Risch",
"Denise",
""
],
[
"Lewis",
"Harold W.",
"III"
],
[
"Clark",
"Christopher W.",
""
]
] | TITLE: Bioacoustical Periodic Pulse Train Signal Detection and Classification
using Spectrogram Intensity Binarization and Energy Projection
ABSTRACT: The following work outlines an approach for automatic detection and
recognition of periodic pulse train signals using a multi-stage process based
on spectrogram edge detection, energy projection and classification. The method
has been implemented to automatically detect and recognize pulse train songs of
minke whales. While the long term goal of this work is to properly identify and
detect minke songs from large multi-year datasets, this effort was developed
using sounds off the coast of Massachusetts, in the Stellwagen Bank National
Marine Sanctuary. The detection methodology is presented and evaluated on 232
continuous hours of acoustic recordings and a qualitative analysis of machine
learning classifiers and their performance is described. The trained automatic
detection and classification system is applied to 120 continuous hours,
comprised of various challenges such as broadband and narrowband noises, low
SNR, and other pulse train signatures. This automatic system achieves a TPR of
63% for FPR of 0.6% (or 0.87 FP/h), at a Precision (PPV) of 84% and an F1 score
of 71%.
| no_new_dataset | 0.953362 |
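A toy version of the detection chain named in the title: spectrogram, intensity binarization, energy projection over frequency, then peak picking. The synthetic pulse train, thresholds, and window settings below are arbitrary placeholders, not the values used for the minke whale recordings:

    import numpy as np
    from scipy.signal import spectrogram, find_peaks

    fs = 2000
    t = np.arange(0, 10, 1 / fs)
    x = 0.01 * np.random.default_rng(0).normal(size=t.size)  # background noise
    for start in np.arange(1.0, 9.0, 0.8):                   # periodic pulse train
        idx = (t > start) & (t < start + 0.05)
        x[idx] += np.sin(2 * np.pi * 300 * t[idx])

    f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
    binary = Sxx > 20 * np.median(Sxx)                       # intensity binarization
    energy = binary.sum(axis=0)                              # projection over frequency
    peaks, _ = find_peaks(energy, height=1, distance=10)
    print(f"detected {len(peaks)} pulses near t = {np.round(tt[peaks], 2)} s")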
1306.6805 | Sara Hajian | Sara Hajian | Simultaneous Discrimination Prevention and Privacy Protection in Data
Publishing and Mining | PhD Thesis defended on June 10, 2013, at the Department of Computer
Engineering and Mathematics of Universitat Rovira i Virgili. Advisors: Josep
Domingo-Ferrer and Dino Pedreschi | null | null | null | cs.DB cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data mining is an increasingly important technology for extracting useful
knowledge hidden in large collections of data. There are, however, negative
social perceptions about data mining, among which potential privacy violation
and potential discrimination. Automated data collection and data mining
techniques such as classification have paved the way to making automated
decisions, like loan granting/denial, insurance premium computation. If the
training datasets are biased in what regards discriminatory attributes like
gender, race, religion, discriminatory decisions may ensue. In the first part
of this thesis, we tackle discrimination prevention in data mining and propose
new techniques applicable for direct or indirect discrimination prevention
individually or both at the same time. We discuss how to clean training
datasets and outsourced datasets in such a way that direct and/or indirect
discriminatory decision rules are converted to legitimate (non-discriminatory)
classification rules. In the second part of this thesis, we argue that privacy
and discrimination risks should be tackled together. We explore the
relationship between privacy preserving data mining and discrimination
prevention in data mining to design holistic approaches capable of addressing
both threats simultaneously during the knowledge discovery process. As part of
this effort, we have investigated for the first time the problem of
discrimination and privacy aware frequent pattern discovery, i.e. the
sanitization of the collection of patterns mined from a transaction database in
such a way that neither privacy-violating nor discriminatory inferences can be
inferred on the released patterns. Moreover, we investigate the problem of
discrimination and privacy aware data publishing, i.e. transforming the data,
instead of patterns, in order to simultaneously fulfill privacy preservation
and discrimination prevention.
| [
{
"version": "v1",
"created": "Fri, 28 Jun 2013 12:00:56 GMT"
}
] | 2013-07-01T00:00:00 | [
[
"Hajian",
"Sara",
""
]
] | TITLE: Simultaneous Discrimination Prevention and Privacy Protection in Data
Publishing and Mining
ABSTRACT: Data mining is an increasingly important technology for extracting useful
knowledge hidden in large collections of data. There are, however, negative
social perceptions about data mining, among which potential privacy violation
and potential discrimination. Automated data collection and data mining
techniques such as classification have paved the way to making automated
decisions, like loan granting/denial, insurance premium computation. If the
training datasets are biased in what regards discriminatory attributes like
gender, race, religion, discriminatory decisions may ensue. In the first part
of this thesis, we tackle discrimination prevention in data mining and propose
new techniques applicable for direct or indirect discrimination prevention
individually or both at the same time. We discuss how to clean training
datasets and outsourced datasets in such a way that direct and/or indirect
discriminatory decision rules are converted to legitimate (non-discriminatory)
classification rules. In the second part of this thesis, we argue that privacy
and discrimination risks should be tackled together. We explore the
relationship between privacy preserving data mining and discrimination
prevention in data mining to design holistic approaches capable of addressing
both threats simultaneously during the knowledge discovery process. As part of
this effort, we have investigated for the first time the problem of
discrimination and privacy aware frequent pattern discovery, i.e. the
sanitization of the collection of patterns mined from a transaction database in
such a way that neither privacy-violating nor discriminatory inferences can be
inferred on the released patterns. Moreover, we investigate the problem of
discrimination and privacy aware data publishing, i.e. transforming the data,
instead of patterns, in order to simultaneously fulfill privacy preservation
and discrimination prevention.
| no_new_dataset | 0.951459 |
1306.6842 | Dimitris Arabadjis | Dimitris Arabadjis, Fotios Giannopoulos, Constantin Papaodysseus,
Solomon Zannos, Panayiotis Rousopoulos, Michail Panagopoulos, Christopher
Blackwell | New Mathematical and Algorithmic Schemes for Pattern Classification with
Application to the Identification of Writers of Important Ancient Documents | null | Pattern Recognition, Volume 46, Issue 8, Pages 2278-2296, August
2013 | 10.1016/j.patcog.2013.01.019 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel approach is introduced for classifying curves into
proper families, according to their similarity. First, a mathematical quantity
we call plane curvature is introduced and a number of propositions are stated
and proved. Proper similarity measures of two curves are introduced and a
subsequent statistical analysis is applied. First, the efficiency of the curve
fitting process has been tested on two reference shape datasets. Next, the
methodology has been applied to the very important problem of classifying 23
Byzantine codices and 46 Ancient inscriptions to their writers, thus achieving
correct dating of their content. The inscriptions have been attributed to ten
individual hands and the Byzantine codices to four writers.
| [
{
"version": "v1",
"created": "Fri, 28 Jun 2013 13:51:18 GMT"
}
] | 2013-07-01T00:00:00 | [
[
"Arabadjis",
"Dimitris",
""
],
[
"Giannopoulos",
"Fotios",
""
],
[
"Papaodysseus",
"Constantin",
""
],
[
"Zannos",
"Solomon",
""
],
[
"Rousopoulos",
"Panayiotis",
""
],
[
"Panagopoulos",
"Michail",
""
],
[
"Blackwell",
"Christopher",
""
]
] | TITLE: New Mathematical and Algorithmic Schemes for Pattern Classification with
Application to the Identification of Writers of Important Ancient Documents
ABSTRACT: In this paper, a novel approach is introduced for classifying curves into
proper families, according to their similarity. First, a mathematical quantity
we call plane curvature is introduced and a number of propositions are stated
and proved. Proper similarity measures of two curves are introduced and a
subsequent statistical analysis is applied. First, the efficiency of the curve
fitting process has been tested on two reference shape datasets. Next, the
methodology has been applied to the very important problem of classifying 23
Byzantine codices and 46 Ancient inscriptions to their writers, thus achieving
correct dating of their content. The inscriptions have been attributed to ten
individual hands and the Byzantine codices to four writers.
| no_new_dataset | 0.949669 |
1306.6058 | Reza Farrahi Moghaddam | Reza Farrahi Moghaddam, Shaohua Chen, Rachid Hedjam, Mohamed Cheriet | A maximal-information color to gray conversion method for document
images: Toward an optimal grayscale representation for document image
binarization | 36 page, the uncompressed version is available on Synchromedia
website | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel method to convert color/multi-spectral images to gray-level images is
introduced to increase the performance of document binarization methods. The
method uses the distribution of the pixel data of the input document image in a
color space to find a transformation, called the dual transform, which balances
the amount of information on all color channels. Furthermore, in order to
reduce the intensity variations on the gray output, a color reduction
preprocessing step is applied. Then, a channel is selected as the gray value
representation of the document image based on the homogeneity criterion on the
text regions. In this way, the proposed method can provide a
luminance-independent contrast enhancement. The performance of the method is
evaluated against various images from two databases, the ICDAR'03 Robust
Reading, the KAIST and the DIBCO'09 datasets, subjectively and objectively with
promising results. The ground truth images for the images from the ICDAR'03
Robust Reading dataset have been created manually by the authors.
| [
{
"version": "v1",
"created": "Tue, 25 Jun 2013 18:41:04 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Jun 2013 12:20:20 GMT"
}
] | 2013-06-27T00:00:00 | [
[
"Moghaddam",
"Reza Farrahi",
""
],
[
"Chen",
"Shaohua",
""
],
[
"Hedjam",
"Rachid",
""
],
[
"Cheriet",
"Mohamed",
""
]
] | TITLE: A maximal-information color to gray conversion method for document
images: Toward an optimal grayscale representation for document image
binarization
ABSTRACT: A novel method to convert color/multi-spectral images to gray-level images is
introduced to increase the performance of document binarization methods. The
method uses the distribution of the pixel data of the input document image in a
color space to find a transformation, called the dual transform, which balances
the amount of information on all color channels. Furthermore, in order to
reduce the intensity variations on the gray output, a color reduction
preprocessing step is applied. Then, a channel is selected as the gray value
representation of the document image based on the homogeneity criterion on the
text regions. In this way, the proposed method can provide a
luminance-independent contrast enhancement. The performance of the method is
evaluated against various images from two databases, the ICDAR'03 Robust
Reading, the KAIST and the DIBCO'09 datasets, subjectively and objectively with
promising results. The ground truth images for the images from the ICDAR'03
Robust Reading dataset have been created manually by the authors.
| no_new_dataset | 0.950915 |
1306.3860 | Samuel R\"onnqvist | Peter Sarlin and Samuel R\"onnqvist | Cluster coloring of the Self-Organizing Map: An information
visualization perspective | Forthcoming in Proceedings of 17th International Conference
Information Visualisation (2013) | null | null | null | cs.LG cs.HC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper takes an information visualization perspective to visual
representations in the general SOM paradigm. This involves viewing SOM-based
visualizations through the eyes of Bertin's and Tufte's theories on data
graphics. The regular grid shape of the Self-Organizing Map (SOM), while being
a virtue for linking visualizations to it, restricts representation of cluster
structures. From the viewpoint of information visualization, this paper
provides a general, yet simple, solution to projection-based coloring of the
SOM that reveals structures. First, the proposed color space is easy to
construct and customize to the purpose of use, while aiming at being
perceptually correct and informative through two separable dimensions. Second,
the coloring method is not dependent on any specific method of projection, but
is rather modular to fit any objective function suitable for the task at hand.
The cluster coloring is illustrated on two datasets: the iris data, and welfare
and poverty indicators.
| [
{
"version": "v1",
"created": "Mon, 17 Jun 2013 13:57:00 GMT"
}
] | 2013-06-26T00:00:00 | [
[
"Sarlin",
"Peter",
""
],
[
"Rönnqvist",
"Samuel",
""
]
] | TITLE: Cluster coloring of the Self-Organizing Map: An information
visualization perspective
ABSTRACT: This paper takes an information visualization perspective to visual
representations in the general SOM paradigm. This involves viewing SOM-based
visualizations through the eyes of Bertin's and Tufte's theories on data
graphics. The regular grid shape of the Self-Organizing Map (SOM), while being
a virtue for linking visualizations to it, restricts representation of cluster
structures. From the viewpoint of information visualization, this paper
provides a general, yet simple, solution to projection-based coloring of the
SOM that reveals structures. First, the proposed color space is easy to
construct and customize to the purpose of use, while aiming at being
perceptually correct and informative through two separable dimensions. Second,
the coloring method is not dependent on any specific method of projection, but
is rather modular to fit any objective function suitable for the task at hand.
The cluster coloring is illustrated on two datasets: the iris data, and welfare
and poverty indicators.
| no_new_dataset | 0.946843 |
1306.1840 | Paul Mineiro | Paul Mineiro, Nikos Karampatziakis | Loss-Proportional Subsampling for Subsequent ERM | Appears in the proceedings of the 30th International Conference on
Machine Learning | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a sampling scheme suitable for reducing a data set prior to
selecting a hypothesis with minimum empirical risk. The sampling only considers
a subset of the ultimate (unknown) hypothesis set, but can nonetheless
guarantee that the final excess risk will compare favorably with utilizing the
entire original data set. We demonstrate the practical benefits of our approach
on a large dataset which we subsample and subsequently fit with boosted trees.
| [
{
"version": "v1",
"created": "Fri, 7 Jun 2013 20:12:17 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Jun 2013 05:32:31 GMT"
}
] | 2013-06-25T00:00:00 | [
[
"Mineiro",
"Paul",
""
],
[
"Karampatziakis",
"Nikos",
""
]
] | TITLE: Loss-Proportional Subsampling for Subsequent ERM
ABSTRACT: We propose a sampling scheme suitable for reducing a data set prior to
selecting a hypothesis with minimum empirical risk. The sampling only considers
a subset of the ultimate (unknown) hypothesis set, but can nonetheless
guarantee that the final excess risk will compare favorably with utilizing the
entire original data set. We demonstrate the practical benefits of our approach
on a large dataset which we subsample and subsequently fit with boosted trees.
| no_new_dataset | 0.945751 |
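In spirit, the scheme can be sketched in a few lines of numpy: keep each example with probability proportional to a pilot loss estimate and reweight by the inverse keep probability, so that weighted averages over the subsample remain unbiased (the budget and the exponential loss model below are invented for the example):

    import numpy as np

    def loss_proportional_subsample(losses, budget, seed=0):
        rng = np.random.default_rng(seed)
        p = np.clip(budget * losses / losses.sum(), 0.0, 1.0)  # keep probabilities
        keep = rng.random(losses.size) < p
        return np.flatnonzero(keep), 1.0 / p[keep]             # indices, weights

    pilot = np.random.default_rng(1).exponential(size=100_000) # pilot loss estimates
    idx, w = loss_proportional_subsample(pilot, budget=10_000)
    # Weighted subsample mean vs. full-data mean of the pilot loss:
    print(pilot.mean(), (pilot[idx] * w).sum() / pilot.size)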
1306.4735 | Juan Fern\'andez-Gracia | Juan Fern\'andez-Gracia, V\'ictor M. Egu\'iluz and Maxi San Miguel | Timing interactions in social simulations: The voter model | Book Chapter, 23 pages, 9 figures, 5 tables | In "Temporal Networks", P. Holme, J. Saram\"aki (Eds.), pp
331-352, Springer (2013) | 10.1007/978-3-642-36461-7_17 | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent availability of huge high resolution datasets on human activities
has revealed the heavy-tailed nature of the interevent time distributions. In
social simulations of interacting agents the standard approach has been to use
Poisson processes to update the state of the agents, which gives rise to very
homogeneous activity patterns with a well defined characteristic interevent
time. As a paradigmatic opinion model we investigate the voter model and review
the standard update rules and propose two new update rules which are able to
account for heterogeneous activity patterns. For the new update rules each node
gets updated with a probability that depends on the time since the last event
of the node, where an event can be an update attempt (exogenous update) or a
change of state (endogenous update). We find that both update rules can give
rise to power law interevent time distributions, although the endogenous one
more robustly. Apart from that, for the exogenous update rule and the standard
update rules the voter model does not reach consensus in the infinite size
limit, while for the endogenous update there exists a coarsening process that
drives the system toward consensus configurations.
| [
{
"version": "v1",
"created": "Tue, 18 Jun 2013 15:03:04 GMT"
}
] | 2013-06-24T00:00:00 | [
[
"Fernández-Gracia",
"Juan",
""
],
[
"Eguíluz",
"Víctor M.",
""
],
[
"Miguel",
"Maxi San",
""
]
] | TITLE: Timing interactions in social simulations: The voter model
ABSTRACT: The recent availability of huge high resolution datasets on human activities
has revealed the heavy-tailed nature of the interevent time distributions. In
social simulations of interacting agents the standard approach has been to use
Poisson processes to update the state of the agents, which gives rise to very
homogeneous activity patterns with a well defined characteristic interevent
time. As a paradigmatic opinion model we investigate the voter model and review
the standard update rules and propose two new update rules which are able to
account for heterogeneous activity patterns. For the new update rules each node
gets updated with a probability that depends on the time since the last event
of the node, where an event can be an update attempt (exogenous update) or a
change of state (endogenous update). We find that both update rules can give
rise to power law interevent time distributions, although the endogenous one
more robustly. Apart from that for the exogenous update rule and the standard
update rules the voter model does not reach consensus in the infinite size
limit, while for the endogenous update there exist a coarsening process that
drives the system toward consensus configurations.
| no_new_dataset | 0.952838 |
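A small simulation sketch of one way to realise the endogenous rule on a ring: a node's probability of actually updating decays with the time since its last change of state. The decay exponent, system size, and nearest-neighbour topology are arbitrary choices for illustration, not the exact protocol of the chapter:

    import numpy as np

    def voter_endogenous(n=200, steps=200_000, b=1.0, seed=0):
        rng = np.random.default_rng(seed)
        state = rng.integers(0, 2, size=n)
        last_change = np.zeros(n)
        for t in range(1, steps + 1):
            i = rng.integers(n)
            tau = t - last_change[i]              # time since last change of state
            if rng.random() < 1.0 / tau ** b:     # aging update probability
                j = (i + rng.choice([-1, 1])) % n # random nearest neighbour
                if state[j] != state[i]:
                    state[i] = state[j]
                    last_change[i] = t
        return state

    final = voter_endogenous()
    print("fraction of nodes in state 1:", final.mean())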
1306.5151 | Andrea Vedaldi | Subhransu Maji and Esa Rahtu and Juho Kannala and Matthew Blaschko and
Andrea Vedaldi | Fine-Grained Visual Classification of Aircraft | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces FGVC-Aircraft, a new dataset containing 10,000 images
of aircraft spanning 100 aircraft models, organised in a three-level hierarchy.
At the finer level, differences between models are often subtle but always
visually measurable, making visual recognition challenging but possible. A
benchmark is obtained by defining corresponding classification tasks and
evaluation protocols, and baseline results are presented. The construction of
this dataset was made possible by the work of aircraft enthusiasts, a strategy
that can extend to the study of a number of other object classes. Compared to the
domains usually considered in fine-grained visual classification (FGVC), for
example animals, aircraft are rigid and hence less deformable. They, however,
present other interesting modes of variation, including purpose, size,
designation, structure, historical style, and branding.
| [
{
"version": "v1",
"created": "Fri, 21 Jun 2013 14:31:57 GMT"
}
] | 2013-06-24T00:00:00 | [
[
"Maji",
"Subhransu",
""
],
[
"Rahtu",
"Esa",
""
],
[
"Kannala",
"Juho",
""
],
[
"Blaschko",
"Matthew",
""
],
[
"Vedaldi",
"Andrea",
""
]
] | TITLE: Fine-Grained Visual Classification of Aircraft
ABSTRACT: This paper introduces FGVC-Aircraft, a new dataset containing 10,000 images
of aircraft spanning 100 aircraft models, organised in a three-level hierarchy.
At the finer level, differences between models are often subtle but always
visually measurable, making visual recognition challenging but possible. A
benchmark is obtained by defining corresponding classification tasks and
evaluation protocols, and baseline results are presented. The construction of
this dataset was made possible by the work of aircraft enthusiasts, a strategy
that can extend to the study of a number of other object classes. Compared to the
domains usually considered in fine-grained visual classification (FGVC), for
example animals, aircraft are rigid and hence less deformable. They, however,
present other interesting modes of variation, including purpose, size,
designation, structure, historical style, and branding.
| new_dataset | 0.961244 |
1306.5204 | Fred Morstatter | Fred Morstatter and J\"urgen Pfeffer and Huan Liu and Kathleen M.
Carley | Is the Sample Good Enough? Comparing Data from Twitter's Streaming API
with Twitter's Firehose | Published in ICWSM 2013 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Twitter is a social media giant famous for the exchange of short,
140-character messages called "tweets". In the scientific community, the
microblogging site is known for openness in sharing its data. It provides a
glance into its millions of users and billions of tweets through a "Streaming
API" which provides a sample of all tweets matching some parameters preset by
the API user. The API service has been used by many researchers, companies, and
governmental institutions that want to extract knowledge in accordance with a
diverse array of questions pertaining to social media. The essential drawback
of the Twitter API is the lack of documentation concerning what and how much
data users get. This leads researchers to question whether the sampled data is
a valid representation of the overall activity on Twitter. In this work we
embark on answering this question by comparing data collected using Twitter's
sampled API service with data collected using the full, albeit costly, Firehose
stream that includes every single published tweet. We compare both datasets
using common statistical metrics as well as metrics that allow us to compare
topics, networks, and locations of tweets. The results of our work will help
researchers and practitioners understand the implications of using the
Streaming API.
| [
{
"version": "v1",
"created": "Fri, 21 Jun 2013 18:08:42 GMT"
}
] | 2013-06-24T00:00:00 | [
[
"Morstatter",
"Fred",
""
],
[
"Pfeffer",
"Jürgen",
""
],
[
"Liu",
"Huan",
""
],
[
"Carley",
"Kathleen M.",
""
]
] | TITLE: Is the Sample Good Enough? Comparing Data from Twitter's Streaming API
with Twitter's Firehose
ABSTRACT: Twitter is a social media giant famous for the exchange of short,
140-character messages called "tweets". In the scientific community, the
microblogging site is known for openness in sharing its data. It provides a
glance into its millions of users and billions of tweets through a "Streaming
API" which provides a sample of all tweets matching some parameters preset by
the API user. The API service has been used by many researchers, companies, and
governmental institutions that want to extract knowledge in accordance with a
diverse array of questions pertaining to social media. The essential drawback
of the Twitter API is the lack of documentation concerning what and how much
data users get. This leads researchers to question whether the sampled data is
a valid representation of the overall activity on Twitter. In this work we
embark on answering this question by comparing data collected using Twitter's
sampled API service with data collected using the full, albeit costly, Firehose
stream that includes every single published tweet. We compare both datasets
using common statistical metrics as well as metrics that allow us to compare
topics, networks, and locations of tweets. The results of our work will help
researchers and practitioners understand the implications of using the
Streaming API.
| no_new_dataset | 0.943191 |
1301.5650 | Marius Pachitariu | Marius Pachitariu and Maneesh Sahani | Regularization and nonlinearities for neural language models: when are
they needed? | Added new experiments on large datasets and on the Microsoft Research
Sentence Completion Challenge | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural language models (LMs) based on recurrent neural networks (RNN) are
some of the most successful word and character-level LMs. Why do they work so
well, in particular better than linear neural LMs? Possible explanations are
that RNNs have an implicitly better regularization or that RNNs have a higher
capacity for storing patterns due to their nonlinearities or both. Here we
argue for the first explanation in the limit of little training data and the
second explanation for large amounts of text data. We show state-of-the-art
performance on the popular and small Penn dataset when RNN LMs are regularized
with random dropout. Nonetheless, we show even better performance from a
simplified, much less expressive linear RNN model without off-diagonal entries
in the recurrent matrix. We call this model an impulse-response LM (IRLM).
Using random dropout, column normalization and annealed learning rates, IRLMs
develop neurons that keep a memory of up to 50 words in the past and achieve a
perplexity of 102.5 on the Penn dataset. On two large datasets however, the
same regularization methods are unsuccessful for both models and the RNN's
expressivity allows it to overtake the IRLM by 10 and 20 percent perplexity,
respectively. Despite the perplexity gap, IRLMs still outperform RNNs on the
Microsoft Research Sentence Completion (MRSC) task. We develop a slightly
modified IRLM that separates long-context units (LCUs) from short-context units
and show that the LCUs alone achieve a state-of-the-art performance on the MRSC
task of 60.8%. Our analysis indicates that a fruitful direction of research for
neural LMs lies in developing more accessible internal representations, and
suggests an optimization regime of very high momentum terms for effectively
training such models.
| [
{
"version": "v1",
"created": "Wed, 23 Jan 2013 21:18:07 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Jun 2013 14:30:04 GMT"
}
] | 2013-06-21T00:00:00 | [
[
"Pachitariu",
"Marius",
""
],
[
"Sahani",
"Maneesh",
""
]
] | TITLE: Regularization and nonlinearities for neural language models: when are
they needed?
ABSTRACT: Neural language models (LMs) based on recurrent neural networks (RNN) are
some of the most successful word and character-level LMs. Why do they work so
well, in particular better than linear neural LMs? Possible explanations are
that RNNs have an implicitly better regularization or that RNNs have a higher
capacity for storing patterns due to their nonlinearities or both. Here we
argue for the first explanation in the limit of little training data and the
second explanation for large amounts of text data. We show state-of-the-art
performance on the popular and small Penn dataset when RNN LMs are regularized
with random dropout. Nonetheless, we show even better performance from a
simplified, much less expressive linear RNN model without off-diagonal entries
in the recurrent matrix. We call this model an impulse-response LM (IRLM).
Using random dropout, column normalization and annealed learning rates, IRLMs
develop neurons that keep a memory of up to 50 words in the past and achieve a
perplexity of 102.5 on the Penn dataset. On two large datasets however, the
same regularization methods are unsuccessful for both models and the RNN's
expressivity allows it to overtake the IRLM by 10 and 20 percent perplexity,
respectively. Despite the perplexity gap, IRLMs still outperform RNNs on the
Microsoft Research Sentence Completion (MRSC) task. We develop a slightly
modified IRLM that separates long-context units (LCUs) from short-context units
and show that the LCUs alone achieve a state-of-the-art performance on the MRSC
task of 60.8%. Our analysis indicates that a fruitful direction of research for
neural LMs lies in developing more accessible internal representations, and
suggests an optimization regime of very high momentum terms for effectively
training such models.
| no_new_dataset | 0.946597 |
1306.4746 | Daniel Barrett | Daniel Paul Barrett and Jeffrey Mark Siskind | Felzenszwalb-Baum-Welch: Event Detection by Changing Appearance | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method which can detect events in videos by modeling the change
in appearance of the event participants over time. This method makes it
possible to detect events which are characterized not by motion, but by the
changing state of the people or objects involved. This is accomplished by using
object detectors as output models for the states of a hidden Markov model
(HMM). The method allows an HMM to model the sequence of poses of the event
participants over time, and is effective for poses of humans and inanimate
objects. The ability to use existing object-detection methods as part of an
event model makes it possible to leverage ongoing work in the object-detection
community. A novel training method uses an EM loop to simultaneously learn the
temporal structure and object models automatically, without the need to specify
either the individual poses to be modeled or the frames in which they occur.
The E-step estimates the latent assignment of video frames to HMM states, while
the M-step estimates both the HMM transition probabilities and state output
models, including the object detectors, which are trained on the weighted
subset of frames assigned to their state. A new dataset was gathered because
little work has been done on events characterized by changing object pose, and
suitable datasets are not available. Our method produced results superior to
that of comparison systems on this dataset.
| [
{
"version": "v1",
"created": "Thu, 20 Jun 2013 03:22:19 GMT"
}
] | 2013-06-21T00:00:00 | [
[
"Barrett",
"Daniel Paul",
""
],
[
"Siskind",
"Jeffrey Mark",
""
]
] | TITLE: Felzenszwalb-Baum-Welch: Event Detection by Changing Appearance
ABSTRACT: We propose a method which can detect events in videos by modeling the change
in appearance of the event participants over time. This method makes it
possible to detect events which are characterized not by motion, but by the
changing state of the people or objects involved. This is accomplished by using
object detectors as output models for the states of a hidden Markov model
(HMM). The method allows an HMM to model the sequence of poses of the event
participants over time, and is effective for poses of humans and inanimate
objects. The ability to use existing object-detection methods as part of an
event model makes it possible to leverage ongoing work in the object-detection
community. A novel training method uses an EM loop to simultaneously learn the
temporal structure and object models automatically, without the need to specify
either the individual poses to be modeled or the frames in which they occur.
The E-step estimates the latent assignment of video frames to HMM states, while
the M-step estimates both the HMM transition probabilities and state output
models, including the object detectors, which are trained on the weighted
subset of frames assigned to their state. A new dataset was gathered because
little work has been done on events characterized by changing object pose, and
suitable datasets are not available. Our method produced results superior to
that of comparison systems on this dataset.
| new_dataset | 0.939471 |