id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1112.4456 | Massimiliano Dal Mas | Massimiliano Dal Mas | Cluster Analysis for a Scale-Free Folksodriven Structure Network | 9 pages, 4 figures; for details see: http://www.maxdalmas.com | null | null | null | cs.SI cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Folksonomy is said to provide a democratic tagging system that reflects the
opinions of the general public, but it is not a classification system and it is
hard to make sense of. It would be necessary to share a representation of
contexts by all the users to develop a social and collaborative matching. The
solution could be to help users choose proper tags through a dynamically
driven folksonomy system that can evolve over time. This paper uses
a cluster analysis to measure a new concept of a structure called
"Folksodriven", which consists of tags, source and time. Many approaches
include in their goals the use of folksonomy that could evolve during time to
evaluate characteristics. This paper describes an alternative where the goal is
to develop a weighted network of tags where link strengths are based on the
frequencies of tag co-occurrence, and to study the weight distributions and
connectivity correlations among nodes in this network. The paper proposes and
analyzes the network structure of the Folksodriven tags, thought of as
folksonomy tag suggestions for the user, on a dataset built on chosen websites.
It is observed that the hypergraphs of the Folksodriven are highly connected
and that the relative path lengths are relatively low, thus facilitating the
serendipitous discovery of interesting content for the users. Its clustering
coefficient is then compared with that of random networks. The goal of this
paper is a useful analysis of the use of folksonomies on some well-known and
extensive web sites with real user involvement. The advantage of the new
folksonomy-based tagging method lies in an interesting new approach to be
employed by a knowledge management system.
*** This paper has been accepted to the International Conference on Social
Computing and its Applications (SCA 2011) - Sydney Australia, 12-14 December
2011 ***
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2011 20:31:00 GMT"
}
] | 2011-12-20T00:00:00 | [
[
"Mas",
"Massimiliano Dal",
""
]
] | TITLE: Cluster Analysis for a Scale-Free Folksodriven Structure Network
ABSTRACT: Folksonomy is said to provide a democratic tagging system that reflects the
opinions of the general public, but it is not a classification system and it is
hard to make sense of. It would be necessary to share a representation of
contexts by all the users to develop a social and collaborative matching. The
solution could be to help users choose proper tags through a dynamically
driven folksonomy system that can evolve over time. This paper uses
a cluster analysis to measure a new concept of a structure called
"Folksodriven", which consists of tags, source and time. Many approaches
include in their goals the use of folksonomy that could evolve during time to
evaluate characteristics. This paper describes an alternative where the goal is
to develop a weighted network of tags where link strengths are based on the
frequencies of tag co-occurrence, and to study the weight distributions and
connectivity correlations among nodes in this network. The paper proposes and
analyzes the network structure of the Folksodriven tags, thought of as
folksonomy tag suggestions for the user, on a dataset built on chosen websites.
It is observed that the hypergraphs of the Folksodriven are highly connected
and that the relative path lengths are relatively low, thus facilitating the
serendipitous discovery of interesting content for the users. Its clustering
coefficient is then compared with that of random networks. The goal of this
paper is a useful analysis of the use of folksonomies on some well-known and
extensive web sites with real user involvement. The advantage of the new
folksonomy-based tagging method lies in an interesting new approach to be
employed by a knowledge management system.
*** This paper has been accepted to the International Conference on Social
Computing and its Applications (SCA 2011) - Sydney Australia, 12-14 December
2011 ***
|
1112.1527 | Konstantinos Themelis | Konstantinos E. Themelis, Fr\'ed\'eric Schmidt, Olga Sykioti,
Athanasios A. Rontogiannis, Konstantinos D. Koutroumbas, Ioannis A. Daglis | On the unmixing of MEx/OMEGA hyperspectral data | null | Planetary and Space Science, 2011 | 10.1016/j.pss.2011.11.015 | null | astro-ph.IM astro-ph.EP physics.space-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article presents a comparative study of three different types of
estimators used for supervised linear unmixing of two MEx/OMEGA hyperspectral
cubes. The algorithms take into account the constraints of the abundance
fractions, in order to get physically interpretable results. Abundance maps
show that the Bayesian maximum a posteriori probability (MAP) estimator
proposed in Themelis and Rontogiannis (2008) outperforms the other two schemes,
offering a compromise between complexity and estimation performance. Thus, the
MAP estimator is a candidate algorithm to perform ice and minerals detection on
large hyperspectral datasets.
| [
{
"version": "v1",
"created": "Wed, 7 Dec 2011 11:33:23 GMT"
}
] | 2011-12-19T00:00:00 | [
[
"Themelis",
"Konstantinos E.",
""
],
[
"Schmidt",
"Frédéric",
""
],
[
"Sykioti",
"Olga",
""
],
[
"Rontogiannis",
"Athanasios A.",
""
],
[
"Koutroumbas",
"Konstantinos D.",
""
],
[
"Daglis",
"Ioannis A.",
""
]
] | TITLE: On the unmixing of MEx/OMEGA hyperspectral data
ABSTRACT: This article presents a comparative study of three different types of
estimators used for supervised linear unmixing of two MEx/OMEGA hyperspectral
cubes. The algorithms take into account the constraints of the abundance
fractions, in order to get physically interpretable results. Abundance maps
show that the Bayesian maximum a posteriori probability (MAP) estimator
proposed in Themelis and Rontogiannis (2008) outperforms the other two schemes,
offering a compromise between complexity and estimation performance. Thus, the
MAP estimator is a candidate algorithm to perform ice and minerals detection on
large hyperspectral datasets.
|
1112.2774 | Tina Eliassi-Rad | Mangesh Gupte and Tina Eliassi-Rad | Measuring Tie Strength in Implicit Social Networks | 10 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a set of people and a set of events they attend, we address the problem
of measuring connectedness or tie strength between each pair of persons given
that attendance at mutual events gives an implicit social network between
people. We take an axiomatic approach to this problem. Starting from a list of
axioms that a measure of tie strength must satisfy, we characterize functions
that satisfy all the axioms and show that there is a range of measures that
satisfy this characterization. A measure of tie strength induces a ranking on
the edges (and on the set of neighbors for every person). We show that for
applications where the ranking, and not the absolute value of the tie strength,
is the important thing about the measure, the axioms are equivalent to a
natural partial order. Also, to settle on a particular measure, we must make a
non-obvious decision about extending this partial order to a total order, and
this decision is best left to particular applications. We classify
measures found in prior literature according to the axioms that they satisfy.
In our experiments, we measure tie strength and the coverage of our axioms in
several datasets. Also, for each dataset, we bound the maximum Kendall's Tau
divergence (which measures the number of pairwise disagreements between two
lists) between all measures that satisfy the axioms using the partial order.
This informs us whether a particular dataset is well behaved, in which case we
do not have to worry about which measure to choose, or whether we have to be
careful about the exact choice of measure we make.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2011 02:30:22 GMT"
}
] | 2011-12-14T00:00:00 | [
[
"Gupte",
"Mangesh",
""
],
[
"Eliassi-Rad",
"Tina",
""
]
] | TITLE: Measuring Tie Strength in Implicit Social Networks
ABSTRACT: Given a set of people and a set of events they attend, we address the problem
of measuring connectedness or tie strength between each pair of persons given
that attendance at mutual events gives an implicit social network between
people. We take an axiomatic approach to this problem. Starting from a list of
axioms that a measure of tie strength must satisfy, we characterize functions
that satisfy all the axioms and show that there is a range of measures that
satisfy this characterization. A measure of tie strength induces a ranking on
the edges (and on the set of neighbors for every person). We show that for
applications where the ranking, and not the absolute value of the tie strength,
is the important thing about the measure, the axioms are equivalent to a
natural partial order. Also, to settle on a particular measure, we must make a
non-obvious decision about extending this partial order to a total order, and
this decision is best left to particular applications. We classify
measures found in prior literature according to the axioms that they satisfy.
In our experiments, we measure tie strength and the coverage of our axioms in
several datasets. Also, for each dataset, we bound the maximum Kendall's Tau
divergence (which measures the number of pairwise disagreements between two
lists) between all measures that satisfy the axioms using the partial order.
This informs us whether a particular dataset is well behaved, in which case we
do not have to worry about which measure to choose, or whether we have to be
careful about the exact choice of measure we make.
|
1008.5188 | Chunhua Shen | Chunhua Shen, Hanxi Li, Nick Barnes | Totally Corrective Boosting for Regularized Risk Minimization | This paper has been withdrawn by the author | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consideration of the primal and dual problems together leads to important new
insights into the characteristics of boosting algorithms. In this work, we
propose a general framework that can be used to design new boosting algorithms.
A wide variety of machine learning problems essentially minimize a regularized
risk functional. We show that the proposed boosting framework, termed CGBoost,
can accommodate various loss functions and different regularizers in a
totally-corrective optimization fashion. We show that, by solving the primal
rather than the dual, a large body of totally-corrective boosting algorithms
can actually be efficiently solved and no sophisticated convex optimization
solvers are needed. We also demonstrate that some boosting algorithms like
AdaBoost can be interpreted in our framework, even though their optimization is
not totally corrective. We empirically show that various boosting algorithms
based on the proposed framework perform similarly on the UC Irvine machine
learning
datasets [1] that we have used in the experiments.
| [
{
"version": "v1",
"created": "Mon, 30 Aug 2010 23:40:51 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2011 04:42:17 GMT"
}
] | 2011-12-13T00:00:00 | [
[
"Shen",
"Chunhua",
""
],
[
"Li",
"Hanxi",
""
],
[
"Barnes",
"Nick",
""
]
] | TITLE: Totally Corrective Boosting for Regularized Risk Minimization
ABSTRACT: Consideration of the primal and dual problems together leads to important new
insights into the characteristics of boosting algorithms. In this work, we
propose a general framework that can be used to design new boosting algorithms.
A wide variety of machine learning problems essentially minimize a regularized
risk functional. We show that the proposed boosting framework, termed CGBoost,
can accommodate various loss functions and different regularizers in a
totally-corrective optimization fashion. We show that, by solving the primal
rather than the dual, a large body of totally-corrective boosting algorithms
can actually be efficiently solved and no sophisticated convex optimization
solvers are needed. We also demonstrate that some boosting algorithms like
AdaBoost can be interpreted in our framework, even though their optimization is
not totally corrective. We empirically show that various boosting algorithms
based on the proposed framework perform similarly on the UC Irvine machine
learning
datasets [1] that we have used in the experiments.
|
1109.3701 | Kevin Jamieson | Kevin G. Jamieson and Robert D. Nowak | Active Ranking using Pairwise Comparisons | 17 pages, an extended version of our NIPS 2011 paper. The new version
revises the argument of the robust section and slightly modifies the result
there to give it more impact | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper examines the problem of ranking a collection of objects using
pairwise comparisons (rankings of two objects). In general, the ranking of $n$
objects can be identified by standard sorting methods using $n \log_2 n$
pairwise comparisons. We are interested in natural situations in which
relationships among the objects may allow for ranking using far fewer pairwise
comparisons. Specifically, we assume that the objects can be embedded into a
$d$-dimensional Euclidean space and that the rankings reflect their relative
distances from a common reference point in $R^d$. We show that under this
assumption the number of possible rankings grows like $n^{2d}$ and demonstrate
an algorithm that can identify a randomly selected ranking using just slightly
more than $d \log n$ adaptively selected pairwise comparisons, on average. If
instead the comparisons are chosen at random, then almost all pairwise
comparisons must be made in order to identify any ranking. In addition, we
propose a robust, error-tolerant algorithm that only requires that the pairwise
comparisons are probably correct. Experimental studies with synthetic and real
datasets support the conclusions of our theoretical analysis.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2011 19:35:13 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Dec 2011 01:02:14 GMT"
}
] | 2011-12-13T00:00:00 | [
[
"Jamieson",
"Kevin G.",
""
],
[
"Nowak",
"Robert D.",
""
]
] | TITLE: Active Ranking using Pairwise Comparisons
ABSTRACT: This paper examines the problem of ranking a collection of objects using
pairwise comparisons (rankings of two objects). In general, the ranking of $n$
objects can be identified by standard sorting methods using $n \log_2 n$
pairwise comparisons. We are interested in natural situations in which
relationships among the objects may allow for ranking using far fewer pairwise
comparisons. Specifically, we assume that the objects can be embedded into a
$d$-dimensional Euclidean space and that the rankings reflect their relative
distances from a common reference point in $R^d$. We show that under this
assumption the number of possible rankings grows like $n^{2d}$ and demonstrate
an algorithm that can identify a randomly selected ranking using just slightly
more than $d \log n$ adaptively selected pairwise comparisons, on average. If
instead the comparisons are chosen at random, then almost all pairwise
comparisons must be made in order to identify any ranking. In addition, we
propose a robust, error-tolerant algorithm that only requires that the pairwise
comparisons are probably correct. Experimental studies with synthetic and real
datasets support the conclusions of our theoretical analysis.
|
1112.2679 | Tong Zhang | Xiao-Tong Yuan and Tong Zhang | Truncated Power Method for Sparse Eigenvalue Problems | null | null | null | null | stat.ML cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the sparse eigenvalue problem, which is to extract
dominant (largest) sparse eigenvectors with at most $k$ non-zero components. We
propose a simple yet effective solution called truncated power method that can
approximately solve the underlying nonconvex optimization problem. A strong
sparse recovery result is proved for the truncated power method, and this
theory is our key motivation for developing the new algorithm. The proposed
method is tested on applications such as sparse principal component analysis
and the densest $k$-subgraph problem. Extensive experiments on several
synthetic and real-world large scale datasets demonstrate the competitive
empirical performance of our method.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2011 20:11:41 GMT"
}
] | 2011-12-13T00:00:00 | [
[
"Yuan",
"Xiao-Tong",
""
],
[
"Zhang",
"Tong",
""
]
] | TITLE: Truncated Power Method for Sparse Eigenvalue Problems
ABSTRACT: This paper considers the sparse eigenvalue problem, which is to extract
dominant (largest) sparse eigenvectors with at most $k$ non-zero components. We
propose a simple yet effective solution called truncated power method that can
approximately solve the underlying nonconvex optimization problem. A strong
sparse recovery result is proved for the truncated power method, and this
theory is our key motivation for developing the new algorithm. The proposed
method is tested on applications such as sparse principal component analysis
and the densest $k$-subgraph problem. Extensive experiments on several
synthetic and real-world large scale datasets demonstrate the competitive
empirical performance of our method.
|
1112.1966 | Marina Sapir | Marina Sapir | Bipartite ranking algorithm for classification and survival analysis | arXiv admin note: substantial text overlap with arXiv:1108.2820 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised aggregation of independently built univariate predictors is
explored as an alternative regularization approach for noisy, sparse datasets.
Bipartite ranking algorithm Smooth Rank implementing this approach is
introduced. The advantages of this algorithm are demonstrated on two types of
problems. First, Smooth Rank is applied to two-class problems from the
bio-medical field, where ranking is often preferable to classification. In
comparison against SVMs with radial and linear kernels, Smooth Rank had the
best performance on 8 out of 12 benchmarks. The second area of application is
survival analysis, which is reduced here to bipartite ranking in a way which
allows one to use commonly accepted measures of method performance. In
comparison of Smooth Rank with Cox PH regression and CoxPath methods, Smooth
Rank proved to be the best on 9 out of 10 benchmark datasets.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2011 21:33:38 GMT"
}
] | 2011-12-12T00:00:00 | [
[
"Sapir",
"Marina",
""
]
] | TITLE: Bipartite ranking algorithm for classification and survival analysis
ABSTRACT: Unsupervised aggregation of independently built univariate predictors is
explored as an alternative regularization approach for noisy, sparse datasets.
Bipartite ranking algorithm Smooth Rank implementing this approach is
introduced. The advantages of this algorithm are demonstrated on two types of
problems. First, Smooth Rank is applied to two-class problems from the
bio-medical field, where ranking is often preferable to classification. In
comparison against SVMs with radial and linear kernels, Smooth Rank had the
best performance on 8 out of 12 benchmarks. The second area of application is
survival analysis, which is reduced here to bipartite ranking in a way which
allows one to use commonly accepted measures of method performance. In
comparison of Smooth Rank with Cox PH regression and CoxPath methods, Smooth
Rank proved to be the best on 9 out of 10 benchmark datasets.
|
1112.2020 | Rui Chen | Rui Chen, Benjamin C. M. Fung, Bipin C. Desai | Differentially Private Trajectory Data Publication | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing prevalence of location-aware devices, trajectory data has
been generated and collected in various application domains. Trajectory data
carries rich information that is useful for many data analysis tasks. Yet,
improper publishing and use of trajectory data could jeopardize individual
privacy. However, it has been shown that existing privacy-preserving trajectory
data publishing methods derived from partition-based privacy models, for
example k-anonymity, are unable to provide sufficient privacy protection.
In this paper, motivated by the data publishing scenario at the Societe de
transport de Montreal (STM), the public transit agency in the Montreal area, we
study the problem of publishing trajectory data under the rigorous differential
privacy model. We propose an efficient data-dependent yet differentially
private sanitization algorithm, which is applicable to different types of
trajectory data. The efficiency of our approach comes from adaptively narrowing
down the output domain by building a noisy prefix tree based on the underlying
data. Moreover, as a post-processing step, we make use of the inherent
constraints of a prefix tree to conduct constrained inferences, which lead to
better utility. This is the first paper to introduce a practical solution for
publishing a large volume of trajectory data under differential privacy. We
examine the utility of sanitized data in terms of count queries and frequent
sequential pattern mining. Extensive experiments on real-life trajectory data
from the STM demonstrate that our approach maintains high utility and is
scalable to large trajectory datasets.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2011 05:19:57 GMT"
}
] | 2011-12-12T00:00:00 | [
[
"Chen",
"Rui",
""
],
[
"Fung",
"Benjamin C. M.",
""
],
[
"Desai",
"Bipin C.",
""
]
] | TITLE: Differentially Private Trajectory Data Publication
ABSTRACT: With the increasing prevalence of location-aware devices, trajectory data has
been generated and collected in various application domains. Trajectory data
carries rich information that is useful for many data analysis tasks. Yet,
improper publishing and use of trajectory data could jeopardize individual
privacy. However, it has been shown that existing privacy-preserving trajectory
data publishing methods derived from partition-based privacy models, for
example k-anonymity, are unable to provide sufficient privacy protection.
In this paper, motivated by the data publishing scenario at the Societe de
transport de Montreal (STM), the public transit agency in the Montreal area, we
study the problem of publishing trajectory data under the rigorous differential
privacy model. We propose an efficient data-dependent yet differentially
private sanitization algorithm, which is applicable to different types of
trajectory data. The efficiency of our approach comes from adaptively narrowing
down the output domain by building a noisy prefix tree based on the underlying
data. Moreover, as a post-processing step, we make use of the inherent
constraints of a prefix tree to conduct constrained inferences, which lead to
better utility. This is the first paper to introduce a practical solution for
publishing a large volume of trajectory data under differential privacy. We
examine the utility of sanitized data in terms of count queries and frequent
sequential pattern mining. Extensive experiments on real-life trajectory data
from the STM demonstrate that our approach maintains high utility and is
scalable to large trajectory datasets.
|
1112.2027 | JaeDeok Lim | JaeDeok Lim, ByeongCheol Choi, SeungWan Han, ChoelHoon Lee | Automatic Classification of X-rated Videos using Obscene Sound Analysis
based on a Repeated Curve-like Spectrum Feature | 18 pages, 5 figures, 11 tables, IJMA(The International Journal of
Multimedia & Its Applications) | The International Journal of Multimedia & Its Applications (IJMA)
Vol.3, No.4, November 2011, pp.1-17 | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the automatic classification of X-rated videos by
analyzing their obscene sounds. In this paper, obscene sounds refer to audio
signals generated from sexual moans and screams during sexual scenes. By
analyzing various sound samples, we determined the distinguishable
characteristics of obscene sounds and propose a repeated curve-like spectrum
feature that represents the characteristics of such sounds. We constructed
6,269 audio clips to evaluate the proposed feature, and separately constructed
1,200 X-rated and general videos for classification. The proposed feature has
an F1-score, precision, and recall rate of 96.6%, 98.2%, and 95.2%,
respectively, for the original dataset, and 92.6%, 97.6%, and 88.0% for a noisy
dataset of 5 dB SNR. In classifying videos, the feature achieves more than a
90% F1-score, 97% precision, and an 84% recall rate. From the measured
performance, X-rated videos can be classified with only the audio features, and
the repeated curve-like spectrum feature is suitable for detecting obscene
sounds.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2011 07:05:49 GMT"
}
] | 2011-12-12T00:00:00 | [
[
"Lim",
"JaeDeok",
""
],
[
"Choi",
"ByeongCheol",
""
],
[
"Han",
"SeungWan",
""
],
[
"Lee",
"ChoelHoon",
""
]
] | TITLE: Automatic Classification of X-rated Videos using Obscene Sound Analysis
based on a Repeated Curve-like Spectrum Feature
ABSTRACT: This paper addresses the automatic classification of X-rated videos by
analyzing their obscene sounds. In this paper, obscene sounds refer to audio
signals generated from sexual moans and screams during sexual scenes. By
analyzing various sound samples, we determined the distinguishable
characteristics of obscene sounds and propose a repeated curve-like spectrum
feature that represents the characteristics of such sounds. We constructed
6,269 audio clips to evaluate the proposed feature, and separately constructed
1,200 X-rated and general videos for classification. The proposed feature has
an F1-score, precision, and recall rate of 96.6%, 98.2%, and 95.2%,
respectively, for the original dataset, and 92.6%, 97.6%, and 88.0% for a noisy
dataset of 5 dB SNR. In classifying videos, the feature achieves more than a
90% F1-score, 97% precision, and an 84% recall rate. From the measured
performance, X-rated videos can be classified with only the audio features, and
the repeated curve-like spectrum feature is suitable for detecting obscene
sounds.
|
1112.2028 | Bhawna Nigam | Bhawna Nigam, Poorvi Ahirwal, Sonal Salve, Swati Vamney | Document Classification Using Expectation Maximization with Semi
Supervised Learning | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the number of online documents increases, the demand for document
classification to aid the analysis and management of documents is increasing.
Text is cheap, but information, in the form of knowing what classes a document
belongs to, is expensive. The main purpose of this paper is to explain the
expectation maximization technique of data mining to classify the document and
to learn how to improve the accuracy while using a semi-supervised approach.
The expectation maximization algorithm is applied with both supervised and
semi-supervised approaches. It is found that the semi-supervised approach is
more accurate and effective. The main advantage of the semi-supervised approach
is the "Dynamic Generation of New Classes". The algorithm first trains a
classifier using the labeled documents and probabilistically classifies the
unlabeled documents. The car dataset used for evaluation is collected from the
UCI repository, with some changes made on our side.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2011 07:09:21 GMT"
}
] | 2011-12-12T00:00:00 | [
[
"Nigam",
"Bhawna",
""
],
[
"Ahirwal",
"Poorvi",
""
],
[
"Salve",
"Sonal",
""
],
[
"Vamney",
"Swati",
""
]
] | TITLE: Document Classification Using Expectation Maximization with Semi
Supervised Learning
ABSTRACT: As the number of online documents increases, the demand for document
classification to aid the analysis and management of documents is increasing.
Text is cheap, but information, in the form of knowing what classes a document
belongs to, is expensive. The main purpose of this paper is to explain the
expectation maximization technique of data mining to classify the document and
to learn how to improve the accuracy while using a semi-supervised approach.
The expectation maximization algorithm is applied with both supervised and
semi-supervised approaches. It is found that the semi-supervised approach is
more accurate and effective. The main advantage of the semi-supervised approach
is the "Dynamic Generation of New Classes". The algorithm first trains a
classifier using the labeled documents and probabilistically classifies the
unlabeled documents. The car dataset used for evaluation is collected from the
UCI repository, with some changes made on our side.
|
1112.2031 | Yashodhara Haribhakta | Y.V. Haribhakta and Dr. Parag Kulkarni | Learning Context for Text Categorization | 9 pages, selected in IJDKP (International Journal of Data Mining and
Knowledge Management Process) | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes our work which is based on discovering context for text
document categorization. The document categorization approach is derived from a
combination of a learning paradigm known as relation extraction and a
technique known as context discovery. We demonstrate the effectiveness of our
categorization approach using the Reuters-21578 dataset and synthetic
real-world data from the sports domain. Our experimental results indicate that
the learned
context greatly improves the categorization performance as compared to
traditional categorization approaches.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2011 07:24:13 GMT"
}
] | 2011-12-12T00:00:00 | [
[
"Haribhakta",
"Y. V.",
""
],
[
"Kulkarni",
"Dr. Parag",
""
]
] | TITLE: Learning Context for Text Categorization
ABSTRACT: This paper describes our work which is based on discovering context for text
document categorization. The document categorization approach is derived from a
combination of a learning paradigm known as relation extraction and a
technique known as context discovery. We demonstrate the effectiveness of our
categorization approach using the Reuters-21578 dataset and synthetic
real-world data from the sports domain. Our experimental results indicate that
the learned
context greatly improves the categorization performance as compared to
traditional categorization approaches.
|
1112.2137 | Syed Ibrahim | S.P.Syed Ibrahim and K.R.Chandran | Compact Weighted Class Association Rule Mining using Information Gain | 13 pages; International Journal of Data Mining & Knowledge Management
Process (IJDKP) Vol.1, No.6, November 2011 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weighted association rule mining reflects the semantic significance of an item
by considering its weight. Classification constructs a classifier and predicts
the class of a new data instance. This paper proposes a compact weighted class
association rule mining method, which applies weighted association rule mining
to classification and constructs an efficient weighted associative classifier.
The proposed associative classification algorithm chooses one informative
non-class attribute from the dataset, and all the weighted class association
rules are generated based on that attribute. The weight of the item is
considered as one of the parameters in generating the weighted class
association rules. The proposed algorithm calculates the weight using the HITS
model. Experimental results show that the proposed system generates a smaller
number of high-quality rules, which improves the classification accuracy.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2011 16:22:00 GMT"
}
] | 2011-12-12T00:00:00 | [
[
"Ibrahim",
"S. P. Syed",
""
],
[
"Chandran",
"K. R.",
""
]
] | TITLE: Compact Weighted Class Association Rule Mining using Information Gain
ABSTRACT: Weighted association rule mining reflects the semantic significance of an item
by considering its weight. Classification constructs a classifier and predicts
the class of a new data instance. This paper proposes a compact weighted class
association rule mining method, which applies weighted association rule mining
to classification and constructs an efficient weighted associative classifier.
The proposed associative classification algorithm chooses one informative
non-class attribute from the dataset, and all the weighted class association
rules are generated based on that attribute. The weight of the item is
considered as one of the parameters in generating the weighted class
association rules. The proposed algorithm calculates the weight using the HITS
model. Experimental results show that the proposed system generates a smaller
number of high-quality rules, which improves the classification accuracy.
|
1112.1688 | Alberto Accomazzi | Alberto Accomazzi, Sebastien Derriere, Chris Biemesderfer and Norman
Gray | Why don't we already have an Integrated Framework for the Publication
and Preservation of all Data Products? | 4 pages, submitted to the ADASS XXI proceedings | null | null | null | astro-ph.IM cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Astronomy has long had a working network of archives supporting the curation
of publications and data. The discipline has already created many of the
features which perplex other areas of science: (1) data repositories:
(supra)national institutes, dedicated to large projects; a culture of
user-contributed data; practical experience of long-term data preservation; (2)
dataset identifiers: the community has already piloted experiments, knows what
can undermine these efforts, and is participating in the development of
next-generation standards; (3) citation of datasets in papers: the community
has an innovative and expanding infrastructure for the curation of data and
bibliographic resources, and through them a community of authors and editors
familiar with such electronic publication efforts; as well, it has experimented
with next-generation web standards (e.g. the Semantic Web); (4) publisher
buy-in: publishers in this area have been willing to innovate within the
constraints of their commercial imperatives. What can possibly be missing? Why
don't we have an integrated framework for the publication and preservation of
all data products already? Are there technical barriers? We don't believe so.
Are there cultural or commercial forces inhibiting this? We aren't aware of
any. This Birds of a Feather session (BoF) attempted to identify existing
barriers to the creation of such a framework, and attempted to identify the
parties or groups which can contribute to the creation of a VO-powered
data-publishing framework.
| [
{
"version": "v1",
"created": "Wed, 7 Dec 2011 20:58:34 GMT"
}
] | 2011-12-08T00:00:00 | [
[
"Accomazzi",
"Alberto",
""
],
[
"Derriere",
"Sebastien",
""
],
[
"Biemesderfer",
"Chris",
""
],
[
"Gray",
"Norman",
""
]
] | TITLE: Why don't we already have an Integrated Framework for the Publication
and Preservation of all Data Products?
ABSTRACT: Astronomy has long had a working network of archives supporting the curation
of publications and data. The discipline has already created many of the
features which perplex other areas of science: (1) data repositories:
(supra)national institutes, dedicated to large projects; a culture of
user-contributed data; practical experience of long-term data preservation; (2)
dataset identifiers: the community has already piloted experiments, knows what
can undermine these efforts, and is participating in the development of
next-generation standards; (3) citation of datasets in papers: the community
has an innovative and expanding infrastructure for the curation of data and
bibliographic resources, and through them a community of authors and editors
familiar with such electronic publication efforts; as well, it has experimented
with next-generation web standards (e.g. the Semantic Web); (4) publisher
buy-in: publishers in this area have been willing to innovate within the
constraints of their commercial imperatives. What can possibly be missing? Why
don't we have an integrated framework for the publication and preservation of
all data products already? Are there technical barriers? We don't believe so.
Are there cultural or commercial forces inhibiting this? We aren't aware of
any. This Birds of a Feather session (BoF) attempted to identify existing
barriers to the creation of such a framework, and attempted to identify the
parties or groups which can contribute to the creation of a VO-powered
data-publishing framework.
|
1112.1200 | Duc Phu Chau | Duc Phu Chau (INRIA Sophia Antipolis), Fran\c{c}ois Bremond (INRIA
Sophia Antipolis), Monique Thonnat (INRIA Sophia Antipolis) | A multi-feature tracking algorithm enabling adaptation to context
variations | The International Conference on Imaging for Crime Detection and
Prevention (ICDP) (2011) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose in this paper a tracking algorithm which is able to adapt itself
to different scene contexts. A feature pool is used to compute the matching
score between two detected objects. This feature pool includes 2D, 3D
displacement distances, 2D sizes, color histogram, histogram of oriented
gradient (HOG), color covariance and dominant color. An offline learning
process is proposed to search for useful features and to estimate their weights
for each context. In the online tracking process, a temporal window is defined
to establish the links between the detected objects. This makes it possible to
find the object trajectories even if the objects are misdetected in some
frames. A
trajectory filter is proposed to remove noisy trajectories. Experimentation on
different contexts is shown. The proposed tracker has been tested in videos
belonging to three public datasets and to the Caretaker European project. The
experimental results prove the effect of the proposed feature weight learning,
and the robustness of the proposed tracker compared to some methods in the
state of the art. The contributions of our approach over the state of the art
trackers are: (i) a robust tracking algorithm based on a feature pool, (ii) a
supervised learning scheme to learn feature weights for each context, (iii) a
new method to quantify the reliability of the HOG descriptor, (iv) a combination of
color covariance and dominant color features with spatial pyramid distance to
manage the case of object occlusion.
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2011 09:19:17 GMT"
}
] | 2011-12-07T00:00:00 | [
[
"Chau",
"Duc Phu",
"",
"INRIA Sophia Antipolis"
],
[
"Bremond",
"François",
"",
"INRIA\n Sophia Antipolis"
],
[
"Thonnat",
"Monique",
"",
"INRIA Sophia Antipolis"
]
] | TITLE: A multi-feature tracking algorithm enabling adaptation to context
variations
ABSTRACT: We propose in this paper a tracking algorithm which is able to adapt itself
to different scene contexts. A feature pool is used to compute the matching
score between two detected objects. This feature pool includes 2D, 3D
displacement distances, 2D sizes, color histogram, histogram of oriented
gradient (HOG), color covariance and dominant color. An offline learning
process is proposed to search for useful features and to estimate their weights
for each context. In the online tracking process, a temporal window is defined
to establish the links between the detected objects. This makes it possible to
find the object trajectories even if the objects are misdetected in some
frames. A
trajectory filter is proposed to remove noisy trajectories. Experimentation on
different contexts is shown. The proposed tracker has been tested in videos
belonging to three public datasets and to the Caretaker European project. The
experimental results prove the effect of the proposed feature weight learning,
and the robustness of the proposed tracker compared to some methods in the
state of the art. The contributions of our approach over the state of the art
trackers are: (i) a robust tracking algorithm based on a feature pool, (ii) a
supervised learning scheme to learn feature weights for each context, (iii) a
new method to quantify the reliability of the HOG descriptor, (iv) a combination of
color covariance and dominant color features with spatial pyramid distance to
manage the case of object occlusion.
|
1112.0750 | Massimo Brescia Dr | M. Brescia, S. Cavuoti, R. D'Abrusco, O. Laurino, G. Longo | DAME: A Distributed Data Mining & Exploration Framework within the
Virtual Observatory | 20 pages, INGRID 2010 - 5th International Workshop on Distributed
Cooperative Laboratories: "Instrumenting" the Grid, May 12-14, 2010, Poznan,
Poland; Volume Remote Instrumentation for eScience and Related Aspects, 2011,
F. Davoli et al. (eds.), SPRINGER NY | null | null | null | astro-ph.IM cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, many scientific areas share the same broad requirements of being
able to deal with massive and distributed datasets while, when possible, being
integrated with services and applications. In order to solve the growing gap
between the incremental generation of data and our understanding of it, it is
required to know how to access, retrieve, analyze, mine and integrate data from
disparate sources. One of the fundamental aspects of any new generation of data
mining software tool or package which really wants to become a service for the
community is the possibility to use it within complex workflows which each user
can fine tune in order to match the specific demands of his scientific goal.
These workflows often need to access different resources (data, providers,
computing facilities and packages) and require a strict interoperability on (at
least) the client side. The project DAME (DAta Mining & Exploration) arises
from these requirements by providing a distributed WEB-based data mining
infrastructure specialized in Massive Data Set exploration with Soft Computing
methods. Originally designed to deal with astrophysical use cases, where first
scientific application examples have demonstrated its effectiveness, the DAME
Suite has turned out to be a multi-disciplinary, platform-independent tool
perfectly
compliant with modern KDD (Knowledge Discovery in Databases) requirements and
Information & Communication Technology trends.
| [
{
"version": "v1",
"created": "Sun, 4 Dec 2011 13:06:35 GMT"
}
] | 2011-12-06T00:00:00 | [
[
"Brescia",
"M.",
""
],
[
"Cavuoti",
"S.",
""
],
[
"D'Abrusco",
"R.",
""
],
[
"Laurino",
"O.",
""
],
[
"Longo",
"G.",
""
]
] | TITLE: DAME: A Distributed Data Mining & Exploration Framework within the
Virtual Observatory
ABSTRACT: Nowadays, many scientific areas share the same broad requirements of being
able to deal with massive and distributed datasets while, when possible, being
integrated with services and applications. In order to solve the growing gap
between the incremental generation of data and our understanding of it, it is
required to know how to access, retrieve, analyze, mine and integrate data from
disparate sources. One of the fundamental aspects of any new generation of data
mining software tool or package which really wants to become a service for the
community is the possibility to use it within complex workflows which each user
can fine tune in order to match the specific demands of his scientific goal.
These workflows often need to access different resources (data, providers,
computing facilities and packages) and require a strict interoperability on (at
least) the client side. The project DAME (DAta Mining & Exploration) arises
from these requirements by providing a distributed WEB-based data mining
infrastructure specialized in Massive Data Set exploration with Soft Computing
methods. Originally designed to deal with astrophysical use cases, where first
scientific application examples have demonstrated its effectiveness, the DAME
Suite has turned out to be a multi-disciplinary, platform-independent tool
perfectly
compliant with modern KDD (Knowledge Discovery in Databases) requirements and
Information & Communication Technology trends.
|
1111.7295 | Prateek Jain | Raajay Viswanathan, Prateek Jain, Srivatsan Laxman, Arvind Arasu | A Learning Framework for Self-Tuning Histograms | Submitted to VLDB-2012 | null | null | null | cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the problem of estimating self-tuning histograms
using query workloads. To this end, we propose a general learning theoretic
formulation. Specifically, we use query feedback from a workload as training
data to estimate a histogram with a small memory footprint that minimizes the
expected error on future queries. Our formulation provides a framework in which
different approaches can be studied and developed. We first study the simple
class of equi-width histograms and present a learning algorithm, EquiHist, that
is competitive in many settings. We also provide formal guarantees for
equi-width histograms that highlight scenarios in which equi-width histograms
can be expected to succeed or fail. We then go beyond equi-width histograms and
present a novel learning algorithm, SpHist, for estimating general histograms.
Here we use Haar wavelets to reduce the problem of learning histograms to that
of learning a sparse vector. Both algorithms have multiple advantages over
existing methods: 1) simple and scalable extensions to multi-dimensional data,
2) scalability with number of histogram buckets and size of query feedback, 3)
natural extensions to incorporate new feedback and handle database updates. We
demonstrate these advantages over the current state-of-the-art, ISOMER, through
detailed experiments on real and synthetic data. In particular, we show that
SpHist obtains up to 50% less error than ISOMER on real-world multi-dimensional
datasets.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2011 20:17:29 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Dec 2011 16:01:50 GMT"
}
] | 2011-12-05T00:00:00 | [
[
"Viswanathan",
"Raajay",
""
],
[
"Jain",
"Prateek",
""
],
[
"Laxman",
"Srivatsan",
""
],
[
"Arasu",
"Arvind",
""
]
] | TITLE: A Learning Framework for Self-Tuning Histograms
ABSTRACT: In this paper, we consider the problem of estimating self-tuning histograms
using query workloads. To this end, we propose a general learning theoretic
formulation. Specifically, we use query feedback from a workload as training
data to estimate a histogram with a small memory footprint that minimizes the
expected error on future queries. Our formulation provides a framework in which
different approaches can be studied and developed. We first study the simple
class of equi-width histograms and present a learning algorithm, EquiHist, that
is competitive in many settings. We also provide formal guarantees for
equi-width histograms that highlight scenarios in which equi-width histograms
can be expected to succeed or fail. We then go beyond equi-width histograms and
present a novel learning algorithm, SpHist, for estimating general histograms.
Here we use Haar wavelets to reduce the problem of learning histograms to that
of learning a sparse vector. Both algorithms have multiple advantages over
existing methods: 1) simple and scalable extensions to multi-dimensional data,
2) scalability with number of histogram buckets and size of query feedback, 3)
natural extensions to incorporate new feedback and handle database updates. We
demonstrate these advantages over the current state-of-the-art, ISOMER, through
detailed experiments on real and synthetic data. In particular, we show that
SpHist obtains up to 50% less error than ISOMER on real-world multi-dimensional
datasets.
|
1112.0059 | Sancho McCann | Sancho McCann, David G. Lowe | Local Naive Bayes Nearest Neighbor for Image Classification | null | null | null | TR-2011-11 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Local Naive Bayes Nearest Neighbor, an improvement to the NBNN
image classification algorithm that increases classification accuracy and
improves its ability to scale to large numbers of object classes. The key
observation is that only the classes represented in the local neighborhood of a
descriptor contribute significantly and reliably to their posterior probability
estimates. Instead of maintaining a separate search structure for each class,
we merge all of the reference data together into one search structure, allowing
quick identification of a descriptor's local neighborhood. We show an increase
in classification accuracy when we ignore adjustments to the more distant
classes and show that the run time grows with the log of the number of classes
rather than linearly in the number of classes as did the original. This gives a
100 times speed-up over the original method on the Caltech 256 dataset. We also
provide the first head-to-head comparison of NBNN against spatial pyramid
methods using a common set of input features. We show that local NBNN
outperforms all previous NBNN based methods and the original spatial pyramid
model. However, we find that local NBNN, while competitive, does not beat
state-of-the-art spatial pyramid methods that use local soft assignment and
max-pooling.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2011 01:19:08 GMT"
}
] | 2011-12-02T00:00:00 | [
[
"McCann",
"Sancho",
""
],
[
"Lowe",
"David G.",
""
]
] | TITLE: Local Naive Bayes Nearest Neighbor for Image Classification
ABSTRACT: We present Local Naive Bayes Nearest Neighbor, an improvement to the NBNN
image classification algorithm that increases classification accuracy and
improves its ability to scale to large numbers of object classes. The key
observation is that only the classes represented in the local neighborhood of a
descriptor contribute significantly and reliably to their posterior probability
estimates. Instead of maintaining a separate search structure for each class,
we merge all of the reference data together into one search structure, allowing
quick identification of a descriptor's local neighborhood. We show an increase
in classification accuracy when we ignore adjustments to the more distant
classes and show that the run time grows with the log of the number of classes
rather than linearly in the number of classes as did the original. This gives a
100 times speed-up over the original method on the Caltech 256 dataset. We also
provide the first head-to-head comparison of NBNN against spatial pyramid
methods using a common set of input features. We show that local NBNN
outperforms all previous NBNN based methods and the original spatial pyramid
model. However, we find that local NBNN, while competitive, does not beat
state-of-the-art spatial pyramid methods that use local soft assignment and
max-pooling.
|
1112.0248 | Roberto Iuppa | Roberto Iuppa | A needlet-based approach to the full-sky data analysis | 6 pages, 8 figures, eConf C110509 | null | null | null | astro-ph.HE astro-ph.IM physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In cosmic-ray physics, large field of view experiments are triggered by a
number of signals lying on different angular scales: point-like and extended
gamma-ray sources, diffuse emissions, as well as large and intermediate scale
cosmic-ray anisotropies. The separation of all these contributions is crucial,
mostly when they overlap with each other. Needlets are a form of spherical
wavelets that have recently drawn a lot of attention in the cosmological
literature, especially in connection with the analysis of CMB data. Needlets
enjoy a number of important statistical and numerical properties which suggest
that they can be very effective in handling cosmic-ray and gamma-ray data
analysis. An application of needlets to astroparticle physics is shown here. In
particular, light will be thrown on how useful they might be for estimating
background and foreground contributions. Since such an estimation is expected
to be optimal or nearly-optimal in a well-defined mathematical sense, needlets
turn out to be a powerful method for unbiased point-source detections. In this
paper needlets were applied to two distinct simulated datasets, for satellite
and EAS array experiments, both large field of view telescopes. Results will be
compared to those achievable with standard analysis techniques in any of these
cases.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2011 17:26:36 GMT"
}
] | 2011-12-02T00:00:00 | [
[
"Iuppa",
"Roberto",
""
]
] | TITLE: A needlet-based approach to the full-sky data analysis
ABSTRACT: In cosmic-ray physics, large field of view experiments are triggered by a
number of signals lying on different angular scales: point-like and extended
gamma-ray sources, diffuse emissions, as well as large and intermediate scale
cosmic-ray anisotropies. The separation of all these contributions is crucial,
mostly when they overlap with each other. Needlets are a form of spherical
wavelets that have recently drawn a lot of attention in the cosmological
literature, especially in connection with the analysis of CMB data. Needlets
enjoy a number of important statistical and numerical properties which suggest
that they can be very effective in handling cosmic-ray and gamma-ray data
analysis. An application of needlets to astroparticle physics is shown here. In
particular, light will be thrown on how useful they might be for estimating
background and foreground contributions. Since such an estimation is expected
to be optimal or nearly-optimal in a well-defined mathematical sense, needlets
turn out to be a powerful method for unbiased point-source detections. In this
paper needlets were applied to two distinct simulated datasets, for satellite
and EAS array experiments, both large field of view telescopes. Results will be
compared to those achievable with standard analysis techniques in any of these
cases.
|
1111.7165 | Sayan Ranu | Sayan Ranu, Ambuj K. Singh | Answering Top-k Queries Over a Mixture of Attractive and Repulsive
Dimensions | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 3, pp.
169-180 (2011) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we formulate a top-k query that compares objects in a database
to a user-provided query object on a novel scoring function. The proposed
scoring function combines the idea of attractive and repulsive dimensions into
a general framework to overcome the weakness of traditional distance or
similarity measures. We study the properties of the proposed class of scoring
functions and develop efficient and scalable index structures that index the
isolines of the function. We demonstrate various scenarios where the query
finds application. Empirical evaluation demonstrates a performance gain of one
to two orders of magnitude on querying time over existing state-of-the-art
top-k techniques. Further, a qualitative analysis is performed on a real
dataset to highlight the potential of the proposed query in discovering hidden
data characteristics.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2011 14:09:11 GMT"
}
] | 2011-12-01T00:00:00 | [
[
"Ranu",
"Sayan",
""
],
[
"Singh",
"Ambuj K.",
""
]
] | TITLE: Answering Top-k Queries Over a Mixture of Attractive and Repulsive
Dimensions
ABSTRACT: In this paper, we formulate a top-k query that compares objects in a database
to a user-provided query object on a novel scoring function. The proposed
scoring function combines the idea of attractive and repulsive dimensions into
a general framework to overcome the weakness of traditional distance or
similarity measures. We study the properties of the proposed class of scoring
functions and develop efficient and scalable index structures that index the
isolines of the function. We demonstrate various scenarios where the query
finds application. Empirical evaluation demonstrates a performance gain of one
to two orders of magnitude on querying time over existing state-of-the-art
top-k techniques. Further, a qualitative analysis is performed on a real
dataset to highlight the potential of the proposed query in discovering hidden
data characteristics.
|
1111.7171 | Guoliang Li | Guoliang Li, Dong Deng, Jiannan Wang, Jianhua Feng | PASS-JOIN: A Partition-based Method for Similarity Joins | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 3, pp.
253-264 (2011) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an essential operation in data cleaning, the similarity join has attracted
considerable attention from the database community. In this paper, we study
string similarity joins with edit-distance constraints, which find similar
string pairs from two large sets of strings whose edit distance is within a
given threshold. Existing algorithms are efficient either for short strings or
for long strings, and there is no algorithm that can efficiently and adaptively
support both short strings and long strings. To address this problem, we
propose a partition-based method called Pass-Join. Pass-Join partitions a
string into a set of segments and creates inverted indices for the segments.
Then for each string, Pass-Join selects some of its substrings and uses the
selected substrings to find candidate pairs using the inverted indices. We
devise efficient techniques to select the substrings and prove that our method
can minimize the number of selected substrings. We develop novel pruning
techniques to efficiently verify the candidate pairs. Experimental results show
that our algorithms are efficient for both short strings and long strings, and
outperform state-of-the-art methods on real datasets.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2011 14:12:22 GMT"
}
] | 2011-12-01T00:00:00 | [
[
"Li",
"Guoliang",
""
],
[
"Deng",
"Dong",
""
],
[
"Wang",
"Jiannan",
""
],
[
"Feng",
"Jianhua",
""
]
] | TITLE: PASS-JOIN: A Partition-based Method for Similarity Joins
ABSTRACT: As an essential operation in data cleaning, the similarity join has attracted
considerable attention from the database community. In this paper, we study
string similarity joins with edit-distance constraints, which find similar
string pairs from two large sets of strings whose edit distance is within a
given threshold. Existing algorithms are efficient either for short strings or
for long strings, and there is no algorithm that can efficiently and adaptively
support both short strings and long strings. To address this problem, we
propose a partition-based method called Pass-Join. Pass-Join partitions a
string into a set of segments and creates inverted indices for the segments.
Then for each string, Pass-Join selects some of its substrings and uses the
selected substrings to find candidate pairs using the inverted indices. We
devise efficient techniques to select the substrings and prove that our method
can minimize the number of selected substrings. We develop novel pruning
techniques to efficiently verify the candidate pairs. Experimental results show
that our algorithms are efficient for both short strings and long strings, and
outperform state-of-the-art methods on real datasets.
|
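The Pass-Join record above rests on a pigeonhole argument: if two strings are within edit distance tau and one of them is cut into tau+1 segments, at least one segment must occur unchanged in the other string. The sketch below implements only that candidate-generation idea plus brute-force verification; it probes all substrings of the second string, whereas Pass-Join's substring-selection and pruning techniques (not reproduced here) are what make it fast.

```python
from collections import defaultdict

def segments(s, tau):
    """Split s into tau+1 roughly even segments, returned as (offset, segment)."""
    n, k = len(s), tau + 1
    base, extra, out, pos = n // k, n % k, [], 0
    for i in range(k):
        length = base + (1 if i < extra else 0)
        out.append((pos, s[pos:pos + length]))
        pos += length
    return out

def edit_distance(a, b):
    """Standard Wagner-Fischer dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def partition_join(R, S, tau):
    """Candidate pairs share at least one indexed segment (pigeonhole), then verify."""
    index = defaultdict(set)
    for rid, r in enumerate(R):
        for _, seg in segments(r, tau):
            index[seg].add(rid)
    results = set()
    for sid, s in enumerate(S):
        cands = set()
        for i in range(len(s)):                 # naive: probe every substring of s
            for j in range(i + 1, len(s) + 1):
                cands |= index.get(s[i:j], set())
        for rid in cands:
            if edit_distance(R[rid], s) <= tau:
                results.add((rid, sid))
    return results

print(partition_join(["kaushik", "chakrabarti"], ["kaushuk", "chakrabarty"], tau=1))
```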
1111.6661 | Amr Hassan | A. H. Hassan, C. J. Fluke, and D. G. Barnes | Unleashing the Power of Distributed CPU/GPU Architectures: Massive
Astronomical Data Analysis and Visualization case study | 4 Pages, 1 figures, To appear in the proceedings of ADASS XXI, ed.
P.Ballester and D.Egret, ASP Conf. Series | null | null | null | astro-ph.IM cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Upcoming and future astronomy research facilities will systematically
generate terabyte-sized data sets moving astronomy into the Petascale data era.
While such facilities will provide astronomers with unprecedented levels of
accuracy and coverage, the increases in dataset size and dimensionality will
pose serious computational challenges for many current astronomy data analysis
and visualization tools. With such data sizes, even simple data analysis tasks
(e.g. calculating a histogram or computing data minimum/maximum) may not be
achievable without access to a supercomputing facility.
To effectively handle such dataset sizes, which exceed today's single machine
memory and processing limits, we present a framework that exploits the
distributed power of GPUs and many-core CPUs, with a goal of providing data
analysis and visualization tasks as a service for astronomers. By mixing shared
and distributed memory architectures, our framework effectively utilizes the
underlying hardware infrastructure handling both batched and real-time data
analysis and visualization tasks. Offering such functionality as a service in a
"software as a service" manner will reduce the total cost of ownership, provide
an easy to use tool to the wider astronomical community, and enable a more
optimized utilization of the underlying hardware infrastructure.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2011 01:34:45 GMT"
}
] | 2011-11-30T00:00:00 | [
[
"Hassan",
"A. H.",
""
],
[
"Fluke",
"C. J.",
""
],
[
"Barnes",
"D. G.",
""
]
] | TITLE: Unleashing the Power of Distributed CPU/GPU Architectures: Massive
Astronomical Data Analysis and Visualization case study
ABSTRACT: Upcoming and future astronomy research facilities will systematically
generate terabyte-sized data sets moving astronomy into the Petascale data era.
While such facilities will provide astronomers with unprecedented levels of
accuracy and coverage, the increases in dataset size and dimensionality will
pose serious computational challenges for many current astronomy data analysis
and visualization tools. With such data sizes, even simple data analysis tasks
(e.g. calculating a histogram or computing data minimum/maximum) may not be
achievable without access to a supercomputing facility.
To effectively handle such dataset sizes, which exceed today's single machine
memory and processing limits, we present a framework that exploits the
distributed power of GPUs and many-core CPUs, with a goal of providing data
analysis and visualization tasks as a service for astronomers. By mixing shared
and distributed memory architectures, our framework effectively utilizes the
underlying hardware infrastructure handling both batched and real-time data
analysis and visualization tasks. Offering such functionality as a service in a
"software as a service" manner will reduce the total cost of ownership, provide
an easy to use tool to the wider astronomical community, and enable a more
optimized utilization of the underlying hardware infrastructure.
|
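The framework in the record above targets distributed CPU/GPU clusters and cannot be reproduced from the abstract; as a minimal single-machine illustration of the underlying out-of-core pattern (computing a histogram and min/max over data that does not fit in memory by streaming chunks), here is a hedged NumPy sketch. The chunk sizes, bin count, and value range are arbitrary assumptions.

```python
import numpy as np

def chunked_stats(chunks, bins=64, value_range=(0.0, 1.0)):
    """Accumulate a histogram and running min/max over data that never fits
    in memory at once; each chunk is processed independently, which is also
    how the work could be farmed out to CPU/GPU workers."""
    hist = np.zeros(bins, dtype=np.int64)
    vmin, vmax = np.inf, -np.inf
    for chunk in chunks:
        h, _ = np.histogram(chunk, bins=bins, range=value_range)
        hist += h
        vmin = min(vmin, float(chunk.min()))
        vmax = max(vmax, float(chunk.max()))
    return hist, vmin, vmax

# Simulate a stream of chunks (e.g. read from disk one block at a time).
rng = np.random.default_rng(0)
stream = (rng.random(100_000) for _ in range(10))
hist, vmin, vmax = chunked_stats(stream)
print(hist.sum(), vmin, vmax)
```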
1111.6677 | Chengfang Fang | Chengfang Fang and Ee-Chien Chang | Publishing Location Dataset Differential Privately with Isotonic
Regression | null | null | null | null | cs.CR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of publishing location datasets, in particular 2D
spatial pointsets, in a differentially private manner. Many existing mechanisms
focus on frequency counts of the points in some a priori partition of the
domain that is difficult to determine. We propose an approach that adds noise
directly to the point, or to a group of neighboring points. Our approach is
based on the observation that, the sensitivity of sorting, as a function on
sets of real numbers, can be bounded. Together with isotonic regression, the
dataset can be accurately reconstructed. To extend the mechanism to higher
dimensions, we employ a locality-preserving function to map the dataset to a
bounded interval. Although there are fundamental limits on the performance of
locality preserving functions, fortunately, our problem only requires distance
preservation in the "easier" direction, and the well-known Hilbert
space-filling curve suffices to provide high accuracy. The publishing process
is simple from the publisher's point of view: the publisher just needs to map
the data, sort them, group them, add Laplace noise and publish the dataset. The
only parameter to determine is the group size which can be chosen based on
predicted generalization errors. Empirical study shows that the published
dataset can also be exploited to answer other queries, for example range and
median queries, accurately.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2011 03:18:16 GMT"
}
] | 2011-11-30T00:00:00 | [
[
"Fang",
"Chengfang",
""
],
[
"Chang",
"Ee-Chien",
""
]
] | TITLE: Publishing Location Dataset Differential Privately with Isotonic
Regression
ABSTRACT: We consider the problem of publishing location datasets, in particular 2D
spatial pointsets, in a differentially private manner. Many existing mechanisms
focus on frequency counts of the points in some a priori partition of the
domain that is difficult to determine. We propose an approach that adds noise
directly to the point, or to a group of neighboring points. Our approach is
based on the observation that, the sensitivity of sorting, as a function on
sets of real numbers, can be bounded. Together with isotonic regression, the
dataset can be accurately reconstructed. To extend the mechanism to higher
dimensions, we employ a locality-preserving function to map the dataset to a
bounded interval. Although there are fundamental limits on the performance of
locality preserving functions, fortunately, our problem only requires distance
preservation in the "easier" direction, and the well-known Hilbert
space-filling curve suffices to provide high accuracy. The publishing process
is simple from the publisher's point of view: the publisher just needs to map
the data, sort them, group them, add Laplace noise and publish the dataset. The
only parameter to determine is the group size which can be chosen based on
predicted generalization errors. Empirical study shows that the published
dataset can also be exploited to answer other queries, for example range and
median queries, accurately.
|
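As a hedged, one-dimensional illustration of the publishing pipeline described above (sort, group neighboring values, add Laplace noise, publish), here is a minimal sketch. The Hilbert-curve mapping for 2D points and the isotonic-regression reconstruction on the analyst's side are omitted, and the noise scale below is a placeholder rather than the calibration the paper derives from the bounded sensitivity of sorting.

```python
import numpy as np

def publish_sorted_1d(values, group_size, epsilon, domain=(0.0, 1.0)):
    """Sort, group neighboring values, and release noisy group means.
    The noise scale is an illustrative placeholder, not the paper's calibration."""
    rng = np.random.default_rng()
    x = np.sort(np.clip(np.asarray(values, dtype=float), *domain))
    width = domain[1] - domain[0]
    out = []
    for start in range(0, len(x), group_size):
        group = x[start:start + group_size]
        scale = width / (len(group) * epsilon)   # placeholder calibration
        out.append(float(group.mean() + rng.laplace(0.0, scale)))
    return out

data = np.random.default_rng(1).random(20)
print(publish_sorted_1d(data, group_size=5, epsilon=1.0))
```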
1111.6553 | Jan P\"oschko | Jan P\"oschko | Exploring Twitter Hashtags | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Twitter messages often contain so-called hashtags to denote keywords related
to them. Using a dataset of 29 million messages, I explore relations among
these hashtags with respect to co-occurrences. Furthermore, I present an
attempt to classify hashtags into five intuitive classes, using a
machine-learning approach. The overall outcome is an interactive Web
application to explore Twitter hashtags.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2011 19:17:57 GMT"
}
] | 2011-11-29T00:00:00 | [
[
"Pöschko",
"Jan",
""
]
] | TITLE: Exploring Twitter Hashtags
ABSTRACT: Twitter messages often contain so-called hashtags to denote keywords related
to them. Using a dataset of 29 million messages, I explore relations among
these hashtags with respect to co-occurrences. Furthermore, I present an
attempt to classify hashtags into five intuitive classes, using a
machine-learning approach. The overall outcome is an interactive Web
application to explore Twitter hashtags.
|
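A hashtag co-occurrence network of the kind described above can be accumulated with a few lines of Python; the message format and the lower-casing below are assumptions, not the author's pipeline.

```python
import re
from collections import Counter
from itertools import combinations

HASHTAG = re.compile(r"#(\w+)")

def cooccurrence(messages):
    """Count how often each pair of hashtags appears in the same message."""
    pair_counts = Counter()
    for text in messages:
        tags = sorted(set(t.lower() for t in HASHTAG.findall(text)))
        pair_counts.update(combinations(tags, 2))
    return pair_counts

msgs = ["Loving #python for #datascience", "#python #ml tutorial", "#ml and #datascience"]
print(cooccurrence(msgs).most_common(3))
```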
1111.5648 | David Balduzzi | David Balduzzi | Falsification and future performance | 10 pages, 2 figures | null | null | null | stat.ML cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We information-theoretically reformulate two measures of capacity from
statistical learning theory: empirical VC-entropy and empirical Rademacher
complexity. We show these capacity measures count the number of hypotheses
about a dataset that a learning algorithm falsifies when it finds the
classifier in its repertoire minimizing empirical risk. It then follows from
that the future performance of predictors on unseen data is controlled in part
by how many hypotheses the learner falsifies. As a corollary we show that
empirical VC-entropy quantifies the message length of the true hypothesis in
the optimal code of a particular probability distribution, the so-called actual
repertoire.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2011 23:25:57 GMT"
}
] | 2011-11-28T00:00:00 | [
[
"Balduzzi",
"David",
""
]
] | TITLE: Falsification and future performance
ABSTRACT: We information-theoretically reformulate two measures of capacity from
statistical learning theory: empirical VC-entropy and empirical Rademacher
complexity. We show these capacity measures count the number of hypotheses
about a dataset that a learning algorithm falsifies when it finds the
classifier in its repertoire minimizing empirical risk. It then follows from
that the future performance of predictors on unseen data is controlled in part
by how many hypotheses the learner falsifies. As a corollary we show that
empirical VC-entropy quantifies the message length of the true hypothesis in
the optimal code of a particular probability distribution, the so-called actual
repertoire.
|
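For readers unfamiliar with the two capacity measures named above, the textbook definitions (not the paper's information-theoretic reformulation) for a hypothesis class F and a sample x^n = (x_1, ..., x_n) are:

```latex
% Empirical VC-entropy counts the distinct labelings of the sample;
% empirical Rademacher complexity measures correlation with random signs.
\[
  \widehat{H}_{\mathcal{F}}(x^n) \;=\; \log \bigl|\{\, (f(x_1),\dots,f(x_n)) : f \in \mathcal{F} \,\}\bigr|,
  \qquad
  \widehat{\mathfrak{R}}_{\mathcal{F}}(x^n) \;=\; \mathbb{E}_{\sigma}\!\left[\, \sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(x_i) \right],
\]
where the $\sigma_i$ are i.i.d.\ Rademacher signs with $\Pr(\sigma_i = \pm 1) = \tfrac{1}{2}$.
```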
1106.1813 | K. W. Bowyer | N. V. Chawla, K. W. Bowyer, L. O. Hall, W. P. Kegelmeyer | SMOTE: Synthetic Minority Over-sampling Technique | null | Journal Of Artificial Intelligence Research, Volume 16, pages
321-357, 2002 | 10.1613/jair.953 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An approach to the construction of classifiers from imbalanced datasets is
described. A dataset is imbalanced if the classification categories are not
approximately equally represented. Often real-world data sets are predominately
composed of "normal" examples with only a small percentage of "abnormal" or
"interesting" examples. It is also the case that the cost of misclassifying an
abnormal (interesting) example as a normal example is often much higher than
the cost of the reverse error. Under-sampling of the majority (normal) class
has been proposed as a good means of increasing the sensitivity of a classifier
to the minority class. This paper shows that a combination of our method of
over-sampling the minority (abnormal) class and under-sampling the majority
(normal) class can achieve better classifier performance (in ROC space) than
only under-sampling the majority class. This paper also shows that a
combination of our method of over-sampling the minority class and
under-sampling the majority class can achieve better classifier performance (in
ROC space) than varying the loss ratios in Ripper or class priors in Naive
Bayes. Our method of over-sampling the minority class involves creating
synthetic minority class examples. Experiments are performed using C4.5, Ripper
and a Naive Bayes classifier. The method is evaluated using the area under the
Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:53:42 GMT"
}
] | 2011-11-25T00:00:00 | [
[
"Chawla",
"N. V.",
""
],
[
"Bowyer",
"K. W.",
""
],
[
"Hall",
"L. O.",
""
],
[
"Kegelmeyer",
"W. P.",
""
]
] | TITLE: SMOTE: Synthetic Minority Over-sampling Technique
ABSTRACT: An approach to the construction of classifiers from imbalanced datasets is
described. A dataset is imbalanced if the classification categories are not
approximately equally represented. Often real-world data sets are predominately
composed of "normal" examples with only a small percentage of "abnormal" or
"interesting" examples. It is also the case that the cost of misclassifying an
abnormal (interesting) example as a normal example is often much higher than
the cost of the reverse error. Under-sampling of the majority (normal) class
has been proposed as a good means of increasing the sensitivity of a classifier
to the minority class. This paper shows that a combination of our method of
over-sampling the minority (abnormal) class and under-sampling the majority
(normal) class can achieve better classifier performance (in ROC space) than
only under-sampling the majority class. This paper also shows that a
combination of our method of over-sampling the minority class and
under-sampling the majority class can achieve better classifier performance (in
ROC space) than varying the loss ratios in Ripper or class priors in Naive
Bayes. Our method of over-sampling the minority class involves creating
synthetic minority class examples. Experiments are performed using C4.5, Ripper
and a Naive Bayes classifier. The method is evaluated using the area under the
Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
|
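SMOTE's core interpolation step is simple enough to sketch from the abstract (mature implementations exist, e.g. in the imbalanced-learn package): for each chosen minority sample, pick one of its k nearest minority-class neighbors and create a synthetic point somewhere on the segment between them.

```python
import numpy as np

def smote(minority, n_synthetic, k=5, seed=0):
    """Generate synthetic minority samples by interpolating between each sample
    and one of its k nearest minority-class neighbors (SMOTE's core idea)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(minority, dtype=float)
    # Pairwise distances within the minority class only.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbors = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X))
        j = neighbors[i, rng.integers(min(k, len(X) - 1))]
        gap = rng.random()                      # uniform position on the segment
        synthetic.append(X[i] + gap * (X[j] - X[i]))
    return np.array(synthetic)

minority = np.random.default_rng(1).normal(size=(20, 2))
print(smote(minority, n_synthetic=5).shape)     # (5, 2)
```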
1111.5572 | Matei Zaharia | Matei Zaharia, William J. Bolosky, Kristal Curtis, Armando Fox, David
Patterson, Scott Shenker, Ion Stoica, Richard M. Karp, Taylor Sittler | Faster and More Accurate Sequence Alignment with SNAP | null | null | null | null | cs.DS q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the Scalable Nucleotide Alignment Program (SNAP), a new short and
long read aligner that is both more accurate (i.e., aligns more reads with
fewer errors) and 10-100x faster than state-of-the-art tools such as BWA.
Unlike recent aligners based on the Burrows-Wheeler transform, SNAP uses a
simple hash index of short seed sequences from the genome, similar to BLAST's.
However, SNAP greatly reduces the number and cost of local alignment checks
performed through several measures: it uses longer seeds to reduce the false
positive locations considered, leverages larger memory capacities to speed
index lookup, and excludes most candidate locations without fully computing
their edit distance to the read. The result is an algorithm that scales well
for reads from one hundred to thousands of bases long and provides a rich error
model that can match classes of mutations (e.g., longer indels) that today's
fast aligners ignore. We calculate that SNAP can align a dataset with 30x
coverage of a human genome in less than an hour for a cost of $2 on Amazon EC2,
with higher accuracy than BWA. Finally, we describe ongoing work to further
improve SNAP.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2011 17:46:03 GMT"
}
] | 2011-11-24T00:00:00 | [
[
"Zaharia",
"Matei",
""
],
[
"Bolosky",
"William J.",
""
],
[
"Curtis",
"Kristal",
""
],
[
"Fox",
"Armando",
""
],
[
"Patterson",
"David",
""
],
[
"Shenker",
"Scott",
""
],
[
"Stoica",
"Ion",
""
],
[
"Karp",
"Richard M.",
""
],
[
"Sittler",
"Taylor",
""
]
] | TITLE: Faster and More Accurate Sequence Alignment with SNAP
ABSTRACT: We present the Scalable Nucleotide Alignment Program (SNAP), a new short and
long read aligner that is both more accurate (i.e., aligns more reads with
fewer errors) and 10-100x faster than state-of-the-art tools such as BWA.
Unlike recent aligners based on the Burrows-Wheeler transform, SNAP uses a
simple hash index of short seed sequences from the genome, similar to BLAST's.
However, SNAP greatly reduces the number and cost of local alignment checks
performed through several measures: it uses longer seeds to reduce the false
positive locations considered, leverages larger memory capacities to speed
index lookup, and excludes most candidate locations without fully computing
their edit distance to the read. The result is an algorithm that scales well
for reads from one hundred to thousands of bases long and provides a rich error
model that can match classes of mutations (e.g., longer indels) that today's
fast aligners ignore. We calculate that SNAP can align a dataset with 30x
coverage of a human genome in less than an hour for a cost of $2 on Amazon EC2,
with higher accuracy than BWA. Finally, we describe ongoing work to further
improve SNAP.
|
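This is not the SNAP implementation, but a toy sketch of the seed-hash-index idea the abstract describes: hash fixed-length seeds of the reference, turn seed hits from a read into candidate alignment offsets, and verify each candidate with an edit-distance computation that gives up once the error budget is exceeded. The seed length, error budget, and toy reference are arbitrary choices.

```python
from collections import defaultdict

def build_seed_index(genome, seed_len=16):
    """Hash every seed_len-mer of the reference to its positions."""
    index = defaultdict(list)
    for i in range(len(genome) - seed_len + 1):
        index[genome[i:i + seed_len]].append(i)
    return index

def bounded_edit_distance(a, b, limit):
    """Plain DP with an early exit once every cell in a row exceeds the limit."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        if min(cur) > limit:
            return limit + 1
        prev = cur
    return prev[-1]

def align(read, genome, index, seed_len=16, max_edits=2):
    """Collect candidate locations from seed hits, then verify them."""
    candidates = set()
    for off in range(0, len(read) - seed_len + 1, seed_len):
        for pos in index.get(read[off:off + seed_len], []):
            candidates.add(pos - off)
    best = None
    for start in candidates:
        if start < 0 or start + len(read) > len(genome):
            continue
        d = bounded_edit_distance(read, genome[start:start + len(read)], max_edits)
        if d <= max_edits and (best is None or d < best[1]):
            best = (start, d)
    return best

genome = "ACGT" * 200
index = build_seed_index(genome)
print(align("ACGTACGTACGTACGTA", genome, index))  # one of many exact matches in this toy reference
```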
1111.5312 | Ryan Rossi | Ryan A. Rossi and Jennifer Neville | Representations and Ensemble Methods for Dynamic Relational
Classification | null | null | null | null | cs.AI cs.SI physics.soc-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal networks are ubiquitous and evolve over time by the addition,
deletion, and changing of links, nodes, and attributes. Although many
relational datasets contain temporal information, the majority of existing
techniques in relational learning focus on static snapshots and ignore the
temporal dynamics. We propose a framework for discovering temporal
representations of relational data to increase the accuracy of statistical
relational learning algorithms. The temporal relational representations serve
as a basis for classification, ensembles, and pattern mining in evolving
domains. The framework includes (1) selecting the time-varying relational
components (links, attributes, nodes), (2) selecting the temporal granularity,
(3) predicting the temporal influence of each time-varying relational
component, and (4) choosing the weighted relational classifier. Additionally,
we propose temporal ensemble methods that exploit the temporal-dimension of
relational data. These ensembles outperform traditional and more sophisticated
relational ensembles while avoiding the issue of learning the optimal
representation. Finally, the space of temporal-relational models is evaluated
using a sample of classifiers. In all cases, the proposed temporal-relational
classifiers outperform competing models that ignore the temporal information.
The results demonstrate the capability and necessity of the temporal-relational
representations for classification, ensembles, and for mining temporal
datasets.
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2011 20:21:19 GMT"
}
] | 2011-11-23T00:00:00 | [
[
"Rossi",
"Ryan A.",
""
],
[
"Neville",
"Jennifer",
""
]
] | TITLE: Representations and Ensemble Methods for Dynamic Relational
Classification
ABSTRACT: Temporal networks are ubiquitous and evolve over time by the addition,
deletion, and changing of links, nodes, and attributes. Although many
relational datasets contain temporal information, the majority of existing
techniques in relational learning focus on static snapshots and ignore the
temporal dynamics. We propose a framework for discovering temporal
representations of relational data to increase the accuracy of statistical
relational learning algorithms. The temporal relational representations serve
as a basis for classification, ensembles, and pattern mining in evolving
domains. The framework includes (1) selecting the time-varying relational
components (links, attributes, nodes), (2) selecting the temporal granularity,
(3) predicting the temporal influence of each time-varying relational
component, and (4) choosing the weighted relational classifier. Additionally,
we propose temporal ensemble methods that exploit the temporal-dimension of
relational data. These ensembles outperform traditional and more sophisticated
relational ensembles while avoiding the issue of learning the optimal
representation. Finally, the space of temporal-relational models is evaluated
using a sample of classifiers. In all cases, the proposed temporal-relational
classifiers outperform competing models that ignore the temporal information.
The results demonstrate the capability and necessity of the temporal-relational
representations for classification, ensembles, and for mining temporal
datasets.
|
1111.4645 | Yaniv Altshuler | Yaniv Altshuler, Nadav Aharony, Michael Fire, Yuval Elovici, Alex
Pentland | Incremental Learning with Accuracy Prediction of Social and Individual
Properties from Mobile-Phone Data | 10 pages | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile phones are quickly becoming the primary source for social, behavioral,
and environmental sensing and data collection. Today's smartphones are equipped
with increasingly more sensors and accessible data types that enable the
collection of literally dozens of signals related to the phone, its user, and
its environment. A great deal of research effort in academia and industry is
put into mining this raw data for higher level sense-making, such as
understanding user context, inferring social networks, learning individual
features, predicting outcomes, and so on. In this work we investigate the
properties of learning and inference of real world data collected via mobile
phones over time. In particular, we look at the dynamic learning process over
time, and how the ability to predict individual parameters and social links is
incrementally enhanced with the accumulation of additional data. To do this, we
use the Friends and Family dataset, which contains rich data signals gathered
from the smartphones of 140 adult members of a young-family residential
community for over a year, and is one of the most comprehensive mobile phone
datasets gathered in academia to date. We develop several models that predict
social and individual properties from sensed mobile phone data, including
detection of life-partners, ethnicity, and whether a person is a student or
not. Then, for this set of diverse learning tasks, we investigate how the
prediction accuracy evolves over time, as new data is collected. Finally, based
on gained insights, we propose a method for advance prediction of the maximal
learning accuracy possible for the learning task at hand, based on an initial
set of measurements. This has practical implications, like informing the design
of mobile data collection campaigns, or evaluating analysis strategies.
| [
{
"version": "v1",
"created": "Sun, 20 Nov 2011 16:10:53 GMT"
}
] | 2011-11-22T00:00:00 | [
[
"Altshuler",
"Yaniv",
""
],
[
"Aharony",
"Nadav",
""
],
[
"Fire",
"Michael",
""
],
[
"Elovici",
"Yuval",
""
],
[
"Pentland",
"Alex",
""
]
] | TITLE: Incremental Learning with Accuracy Prediction of Social and Individual
Properties from Mobile-Phone Data
ABSTRACT: Mobile phones are quickly becoming the primary source for social, behavioral,
and environmental sensing and data collection. Today's smartphones are equipped
with increasingly more sensors and accessible data types that enable the
collection of literally dozens of signals related to the phone, its user, and
its environment. A great deal of research effort in academia and industry is
put into mining this raw data for higher level sense-making, such as
understanding user context, inferring social networks, learning individual
features, predicting outcomes, and so on. In this work we investigate the
properties of learning and inference of real world data collected via mobile
phones over time. In particular, we look at the dynamic learning process over
time, and how the ability to predict individual parameters and social links is
incrementally enhanced with the accumulation of additional data. To do this, we
use the Friends and Family dataset, which contains rich data signals gathered
from the smartphones of 140 adult members of a young-family residential
community for over a year, and is one of the most comprehensive mobile phone
datasets gathered in academia to date. We develop several models that predict
social and individual properties from sensed mobile phone data, including
detection of life-partners, ethnicity, and whether a person is a student or
not. Then, for this set of diverse learning tasks, we investigate how the
prediction accuracy evolves over time, as new data is collected. Finally, based
on gained insights, we propose a method for advance prediction of the maximal
learning accuracy possible for the learning task at hand, based on an initial
set of measurements. This has practical implications, like informing the design
of mobile data collection campaigns, or evaluating analysis strategies.
|
1111.4650 | Yaniv Altshuler | Yaniv Altshuler, Wei Pan, Alex Pentland | Trends Prediction Using Social Diffusion Models | 6 Pages + Appendix | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to predict trends in social media has become increasingly
important in the past few years with the growing dominance of social media in
our everyday life. Whereas many works focus on the detection of anomalies in
networks, there is little theoretical work on predicting the likelihood that an
anomalous network pattern will globally spread and become a "trend". In this
work we present an analytic model of the social diffusion dynamics of spreading
network patterns. Our proposed method is based on information diffusion models,
and is capable of predicting future trends based on the analysis of past social
interactions between the community's members. We present an analytic lower
bound for the probability that emerging trends will successfully spread through
the network. We demonstrate our model using two
comprehensive social datasets - the "Friends and Family" experiment that was
held in MIT for over a year, where the complete activity of 140 users was
analyzed, and a financial dataset containing the complete activities of over
1.5 million members of the "eToro" social trading community.
| [
{
"version": "v1",
"created": "Sun, 20 Nov 2011 17:09:21 GMT"
}
] | 2011-11-22T00:00:00 | [
[
"Altshuler",
"Yaniv",
""
],
[
"Pan",
"Wei",
""
],
[
"Pentland",
"Alex",
""
]
] | TITLE: Trends Prediction Using Social Diffusion Models
ABSTRACT: The ability to predict trends in social media has become increasingly
important in the past few years with the growing dominance of social media in
our everyday life. Whereas many works focus on the detection of anomalies in
networks, there is little theoretical work on predicting the likelihood that an
anomalous network pattern will globally spread and become a "trend". In this
work we present an analytic model of the social diffusion dynamics of spreading
network patterns. Our proposed method is based on information diffusion models,
and is capable of predicting future trends based on the analysis of past social
interactions between the community's members. We present an analytic lower
bound for the probability that emerging trends will successfully spread through
the network. We demonstrate our model using two
comprehensive social datasets - the "Friends and Family" experiment that was
held in MIT for over a year, where the complete activity of 140 users was
analyzed, and a financial dataset containing the complete activities of over
1.5 million members of the "eToro" social trading community.
|
1111.3689 | Anish Das Sarma | Anish Das Sarma, Ankur Jain, Ashwin Machanavajjhala, Philip Bohannon | CBLOCK: An Automatic Blocking Mechanism for Large-Scale De-duplication
Tasks | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | De-duplication---identification of distinct records referring to the same
real-world entity---is a well-known challenge in data integration. Since very
large datasets prohibit the comparison of every pair of records, {\em blocking}
has been identified as a technique of dividing the dataset for pairwise
comparisons, thereby trading off {\em recall} of identified duplicates for {\em
efficiency}. Traditional de-duplication tasks, while challenging, typically
involved a fixed schema such as Census data or medical records. However, with
the presence of large, diverse sets of structured data on the web and the need
to organize it effectively on content portals, de-duplication systems need to
scale in a new dimension to handle a large number of schemas, tasks and data
sets, while handling ever larger problem sizes. In addition, when working in a
map-reduce framework it is important that canopy formation be implemented as a
{\em hash function}, making the canopy design problem more challenging. We
present CBLOCK, a system that addresses these challenges. CBLOCK learns hash
functions automatically from attribute domains and a labeled dataset consisting
of duplicates. Subsequently, CBLOCK expresses blocking functions using a
hierarchical tree structure composed of atomic hash functions. The application
may guide the automated blocking process based on architectural constraints,
such as by specifying a maximum size of each block (based on memory
requirements), imposing disjointness of blocks (in a grid environment), or
specifying a particular objective function trading off recall for efficiency. As a
post-processing step to automatically generated blocks, CBLOCK {\em rolls-up}
smaller blocks to increase recall. We present experimental results on two
large-scale de-duplication datasets at Yahoo!---consisting of over 140K movies
and 40K restaurants respectively---and demonstrate the utility of CBLOCK.
| [
{
"version": "v1",
"created": "Tue, 15 Nov 2011 23:32:34 GMT"
}
] | 2011-11-17T00:00:00 | [
[
"Sarma",
"Anish Das",
""
],
[
"Jain",
"Ankur",
""
],
[
"Machanavajjhala",
"Ashwin",
""
],
[
"Bohannon",
"Philip",
""
]
] | TITLE: CBLOCK: An Automatic Blocking Mechanism for Large-Scale De-duplication
Tasks
ABSTRACT: De-duplication---identification of distinct records referring to the same
real-world entity---is a well-known challenge in data integration. Since very
large datasets prohibit the comparison of every pair of records, {\em blocking}
has been identified as a technique of dividing the dataset for pairwise
comparisons, thereby trading off {\em recall} of identified duplicates for {\em
efficiency}. Traditional de-duplication tasks, while challenging, typically
involved a fixed schema such as Census data or medical records. However, with
the presence of large, diverse sets of structured data on the web and the need
to organize it effectively on content portals, de-duplication systems need to
scale in a new dimension to handle a large number of schemas, tasks and data
sets, while handling ever larger problem sizes. In addition, when working in a
map-reduce framework it is important that canopy formation be implemented as a
{\em hash function}, making the canopy design problem more challenging. We
present CBLOCK, a system that addresses these challenges. CBLOCK learns hash
functions automatically from attribute domains and a labeled dataset consisting
of duplicates. Subsequently, CBLOCK expresses blocking functions using a
hierarchical tree structure composed of atomic hash functions. The application
may guide the automated blocking process based on architectural constraints,
such as by specifying a maximum size of each block (based on memory
requirements), imposing disjointness of blocks (in a grid environment), or
specifying a particular objective function trading off recall for efficiency. As a
post-processing step to automatically generated blocks, CBLOCK {\em rolls-up}
smaller blocks to increase recall. We present experimental results on two
large-scale de-duplication datasets at Yahoo!---consisting of over 140K movies
and 40K restaurants respectively---and demonstrate the utility of CBLOCK.
|
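A hedged sketch of the blocking mechanics described above: records are hashed to blocks by a blocking key, and undersized blocks are rolled up to recover recall. CBLOCK learns its hash functions from attribute domains and labeled duplicates and rolls small blocks up its hash-function tree; the hand-written key and the single catch-all roll-up below are illustrative stand-ins.

```python
from collections import defaultdict
from itertools import combinations

def blocking_key(record):
    # Illustrative atomic hash: first three letters of the name plus the year.
    # CBLOCK would learn such functions from labeled duplicates instead.
    return (record["name"].strip().lower()[:3], record["year"])

def block(records, min_block_size=2):
    blocks = defaultdict(list)
    for r in records:
        blocks[blocking_key(r)].append(r)
    # Roll up undersized blocks into one catch-all block to recover recall
    # (CBLOCK instead rolls them up along its hash-function tree).
    rolled = [r for rs in blocks.values() if len(rs) < min_block_size for r in rs]
    kept = {k: rs for k, rs in blocks.items() if len(rs) >= min_block_size}
    if rolled:
        kept[("__rollup__",)] = rolled
    return kept

movies = [
    {"name": "Heat", "year": 1995}, {"name": "Heat ", "year": 1995},
    {"name": "Alien", "year": 1979}, {"name": "Blade Runner", "year": 1982},
]
for key, rs in block(movies).items():
    pairs = list(combinations([r["name"] for r in rs], 2))
    print(key, "candidate pairs:", pairs)
```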
1107.2462 | Timothy Rubin | Timothy N. Rubin, America Chambers, Padhraic Smyth and Mark Steyvers | Statistical Topic Models for Multi-Label Document Classification | 44 Pages (Including Appendices). To be published in: The Machine
Learning Journal, special issue on Learning from Multi-Label Data. Version 2
corrects some typos, updates some of the notation used in the paper for
clarification of some equations, and incorporates several relatively minor
changes to the text throughout the paper | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning approaches to multi-label document classification have to
date largely relied on discriminative modeling techniques such as support
vector machines. A drawback of these approaches is that performance rapidly
drops off as the total number of labels and the number of labels per document
increase. This problem is amplified when the label frequencies exhibit the type
of highly skewed distributions that are often observed in real-world datasets.
In this paper we investigate a class of generative statistical topic models for
multi-label documents that associate individual word tokens with different
labels. We investigate the advantages of this approach relative to
discriminative models, particularly with respect to classification problems
involving large numbers of relatively rare labels. We compare the performance
of generative and discriminative approaches on document labeling tasks ranging
from datasets with several thousand labels to datasets with tens of labels. The
experimental results indicate that probabilistic generative models can achieve
competitive multi-label classification performance compared to discriminative
methods, and have advantages for datasets with many labels and skewed label
frequencies.
| [
{
"version": "v1",
"created": "Wed, 13 Jul 2011 04:28:32 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2011 04:24:38 GMT"
}
] | 2011-11-11T00:00:00 | [
[
"Rubin",
"Timothy N.",
""
],
[
"Chambers",
"America",
""
],
[
"Smyth",
"Padhraic",
""
],
[
"Steyvers",
"Mark",
""
]
] | TITLE: Statistical Topic Models for Multi-Label Document Classification
ABSTRACT: Machine learning approaches to multi-label document classification have to
date largely relied on discriminative modeling techniques such as support
vector machines. A drawback of these approaches is that performance rapidly
drops off as the total number of labels and the number of labels per document
increase. This problem is amplified when the label frequencies exhibit the type
of highly skewed distributions that are often observed in real-world datasets.
In this paper we investigate a class of generative statistical topic models for
multi-label documents that associate individual word tokens with different
labels. We investigate the advantages of this approach relative to
discriminative models, particularly with respect to classification problems
involving large numbers of relatively rare labels. We compare the performance
of generative and discriminative approaches on document labeling tasks ranging
from datasets with several thousand labels to datasets with tens of labels. The
experimental results indicate that probabilistic generative models can achieve
competitive multi-label classification performance compared to discriminative
methods, and have advantages for datasets with many labels and skewed label
frequencies.
|
0707.1646 | Tam\'as Nepusz | Tam\'as Nepusz, Andrea Petr\'oczi, L\'aszl\'o N\'egyessy, F\"ul\"op
Bazs\'o | Fuzzy communities and the concept of bridgeness in complex networks | 13 pages, 9 figures. Quality of Fig. 4 reduced due to file size
considerations | Phys Rev E, 77:016107, 2008 | 10.1103/PhysRevE.77.016107 | null | physics.soc-ph | null | We consider the problem of fuzzy community detection in networks, which
complements and expands the concept of overlapping community structure. Our
approach allows each vertex of the graph to belong to multiple communities at
the same time, determined by exact numerical membership degrees, even in the
presence of uncertainty in the data being analyzed. We created an algorithm for
determining the optimal membership degrees with respect to a given goal
function. Based on the membership degrees, we introduce a new measure that is
able to identify outlier vertices that do not belong to any of the communities,
bridge vertices that belong significantly to more than one single community,
and regular vertices that fundamentally restrict their interactions within
their own community, while also being able to quantify the centrality of a
vertex with respect to its dominant community. The method can also be used for
prediction in case of uncertainty in the dataset analyzed. The number of
communities can be given in advance, or determined by the algorithm itself
using a fuzzified variant of the modularity function. The technique is able to
discover the fuzzy community structure of different real world networks
including, but not limited to social networks, scientific collaboration
networks and cortical networks with high confidence.
| [
{
"version": "v1",
"created": "Wed, 11 Jul 2007 15:34:00 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Jul 2007 17:39:39 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Nov 2007 18:34:41 GMT"
}
] | 2011-11-10T00:00:00 | [
[
"Nepusz",
"Tamás",
""
],
[
"Petróczi",
"Andrea",
""
],
[
"Négyessy",
"László",
""
],
[
"Bazsó",
"Fülöp",
""
]
] | TITLE: Fuzzy communities and the concept of bridgeness in complex networks
ABSTRACT: We consider the problem of fuzzy community detection in networks, which
complements and expands the concept of overlapping community structure. Our
approach allows each vertex of the graph to belong to multiple communities at
the same time, determined by exact numerical membership degrees, even in the
presence of uncertainty in the data being analyzed. We created an algorithm for
determining the optimal membership degrees with respect to a given goal
function. Based on the membership degrees, we introduce a new measure that is
able to identify outlier vertices that do not belong to any of the communities,
bridge vertices that belong significantly to more than one single community,
and regular vertices that fundamentally restrict their interactions within
their own community, while also being able to quantify the centrality of a
vertex with respect to its dominant community. The method can also be used for
prediction in case of uncertainty in the dataset analyzed. The number of
communities can be given in advance, or determined by the algorithm itself
using a fuzzified variant of the modularity function. The technique is able to
discover the fuzzy community structure of different real world networks
including, but not limited to social networks, scientific collaboration
networks and cortical networks with high confidence.
|
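The exact goal function, bridgeness, and outlier measures are defined in the paper, not in the abstract, so the sketch below only illustrates the flavor of a bridgeness-like score computed from a fuzzy membership matrix: a vertex whose memberships are spread evenly across communities scores near 1 (a bridge), while a vertex concentrated in one community scores near 0.

```python
import numpy as np

def bridgeness_like(U):
    """U[i, k] = membership of vertex i in community k (rows sum to 1).
    Returns 1 for a perfectly uniform membership vector (a 'bridge') and 0
    for a crisp one.  Illustrative only; the paper defines its own bridgeness
    and outlier measures."""
    U = np.asarray(U, dtype=float)
    n, c = U.shape
    dist_to_uniform = np.linalg.norm(U - 1.0 / c, axis=1)
    max_dist = np.sqrt((c - 1.0) / c)   # distance of a crisp membership vector
    return 1.0 - dist_to_uniform / max_dist

U = [[0.95, 0.05], [0.5, 0.5], [0.7, 0.3]]
print(np.round(bridgeness_like(U), 3))   # low, 1.0 (bridge), intermediate
```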
0710.3979 | William Yurcik | William Yurcik, Clay Woolam, Greg Hellings, Latifur Khan, Bhavani
Thuraisingham | Toward Trusted Sharing of Network Packet Traces Using Anonymization:
Single-Field Privacy/Analysis Tradeoffs | 8 pages,1 figure, 4 tables | null | null | null | cs.CR cs.NI | null | Network data needs to be shared for distributed security analysis.
Anonymization of network data for sharing sets up a fundamental tradeoff
between privacy protection versus security analysis capability. This
privacy/analysis tradeoff has been acknowledged by many researchers but this is
the first paper to provide empirical measurements to characterize the
privacy/analysis tradeoff for an enterprise dataset. Specifically we perform
anonymization options on single-fields within network packet traces and then
make measurements using intrusion detection system alarms as a proxy for
security analysis capability. Our results show: (1) two fields have a zero sum
tradeoff (more privacy lessens security analysis and vice versa) and (2) eight
fields have a more complex tradeoff (that is not zero sum) in which privacy and
analysis can both be simultaneously accomplished.
| [
{
"version": "v1",
"created": "Mon, 22 Oct 2007 19:18:11 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Oct 2007 14:55:08 GMT"
}
] | 2011-11-10T00:00:00 | [
[
"Yurcik",
"William",
""
],
[
"Woolam",
"Clay",
""
],
[
"Hellings",
"Greg",
""
],
[
"Khan",
"Latifur",
""
],
[
"Thuraisingham",
"Bhavani",
""
]
] | TITLE: Toward Trusted Sharing of Network Packet Traces Using Anonymization:
Single-Field Privacy/Analysis Tradeoffs
ABSTRACT: Network data needs to be shared for distributed security analysis.
Anonymization of network data for sharing sets up a fundamental tradeoff
between privacy protection versus security analysis capability. This
privacy/analysis tradeoff has been acknowledged by many researchers but this is
the first paper to provide empirical measurements to characterize the
privacy/analysis tradeoff for an enterprise dataset. Specifically we perform
anonymization options on single-fields within network packet traces and then
make measurements using intrusion detection system alarms as a proxy for
security analysis capability. Our results show: (1) two fields have a zero sum
tradeoff (more privacy lessens security analysis and vice versa) and (2) eight
fields have a more complex tradeoff (that is not zero sum) in which privacy and
analysis can both be simultaneously accomplished.
|
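Two typical single-field anonymization options for an IP address field, of the kind whose privacy/analysis tradeoff the paper measures, can be sketched as follows; the field names, the /24 truncation, and the keyed hash are illustrative choices, not the paper's exact option set.

```python
import hashlib
import ipaddress

def truncate_ip(addr, keep_prefix=24):
    """Zero out the host bits, keeping only a /keep_prefix network prefix."""
    net = ipaddress.ip_network(f"{addr}/{keep_prefix}", strict=False)
    return str(net.network_address)

def hash_ip(addr, salt="site-secret"):
    """Replace the address with a keyed hash (prefix structure is lost)."""
    return hashlib.sha256((salt + addr).encode()).hexdigest()[:12]

record = {"src_ip": "192.168.17.42", "dst_port": 443}
print({"src_ip": truncate_ip(record["src_ip"]), "dst_port": record["dst_port"]})
print({"src_ip": hash_ip(record["src_ip"]), "dst_port": record["dst_port"]})
```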
1111.2092 | Sanmay Das | Sanmay Das, Allen Lavoie, and Malik Magdon-Ismail | Pushing Your Point of View: Behavioral Measures of Manipulation in
Wikipedia | null | null | null | null | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a major source for information on virtually any topic, Wikipedia serves an
important role in public dissemination and consumption of knowledge. As a
result, it presents tremendous potential for people to promulgate their own
points of view; such efforts may be more subtle than typical vandalism. In this
paper, we introduce new behavioral metrics to quantify the level of controversy
associated with a particular user: a Controversy Score (C-Score) based on the
amount of attention the user focuses on controversial pages, and a Clustered
Controversy Score (CC-Score) that also takes into account topical clustering.
We show that both these measures are useful for identifying people who try to
"push" their points of view, by showing that they are good predictors of which
editors get blocked. The metrics can be used to triage potential POV pushers.
We apply this idea to a dataset of users who requested promotion to
administrator status and easily identify some editors who significantly changed
their behavior upon becoming administrators. At the same time, such behavior is
not rampant. Those who are promoted to administrator status tend to have more
stable behavior than comparable groups of prolific editors. This suggests that
the Adminship process works well, and that the Wikipedia community is not
overwhelmed by users who become administrators to promote their own points of
view.
| [
{
"version": "v1",
"created": "Wed, 9 Nov 2011 03:22:16 GMT"
}
] | 2011-11-10T00:00:00 | [
[
"Das",
"Sanmay",
""
],
[
"Lavoie",
"Allen",
""
],
[
"Magdon-Ismail",
"Malik",
""
]
] | TITLE: Pushing Your Point of View: Behavioral Measures of Manipulation in
Wikipedia
ABSTRACT: As a major source for information on virtually any topic, Wikipedia serves an
important role in public dissemination and consumption of knowledge. As a
result, it presents tremendous potential for people to promulgate their own
points of view; such efforts may be more subtle than typical vandalism. In this
paper, we introduce new behavioral metrics to quantify the level of controversy
associated with a particular user: a Controversy Score (C-Score) based on the
amount of attention the user focuses on controversial pages, and a Clustered
Controversy Score (CC-Score) that also takes into account topical clustering.
We show that both these measures are useful for identifying people who try to
"push" their points of view, by showing that they are good predictors of which
editors get blocked. The metrics can be used to triage potential POV pushers.
We apply this idea to a dataset of users who requested promotion to
administrator status and easily identify some editors who significantly changed
their behavior upon becoming administrators. At the same time, such behavior is
not rampant. Those who are promoted to administrator status tend to have more
stable behavior than comparable groups of prolific editors. This suggests that
the Adminship process works well, and that the Wikipedia community is not
overwhelmed by users who become administrators to promote their own points of
view.
|
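The abstract does not give the C-Score formula, so the sketch below is only a plausible stand-in: the fraction of a user's editing attention that falls on controversial pages. The CC-Score would additionally reward topical clustering of that attention.

```python
from collections import Counter

def controversy_score(user_edits, controversial_pages):
    """Fraction of a user's editing attention spent on controversial pages.
    Illustrative stand-in for the paper's C-Score."""
    if not user_edits:
        return 0.0
    attention = Counter(user_edits)            # page -> number of edits
    total = sum(attention.values())
    hot = sum(n for page, n in attention.items() if page in controversial_pages)
    return hot / total

edits = ["Abortion", "Abortion", "Cat", "Climate change", "Abortion"]
print(controversy_score(edits, {"Abortion", "Climate change"}))   # 0.8
```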
q-bio/0603007 | Francesc Rossell\'o | Jairo Rocha, Francesc Rossell\'o, Joan Segura | Compression ratios based on the Universal Similarity Metric still yield
protein distances far from CATH distances | 11 pages; It replaces the former "The Universal Similarity Metric
does not detect domain similarity." This version reports on more extensive
tests | null | null | null | q-bio.QM cs.CE physics.data-an q-bio.OT | null | Kolmogorov complexity has inspired several alignment-free distance measures,
based on the comparison of lengths of compressions, which have been applied
successfully in many areas. One of these measures, the so-called Universal
Similarity Metric (USM), has been used by Krasnogor and Pelta to compare simple
protein contact maps, showing that it yielded good clustering on four small
datasets. We report an extensive test of this metric using a much larger and
representative protein dataset: the domain dataset used by Sierk and Pearson to
evaluate seven protein structure comparison methods and two protein sequence
comparison methods. One result is that the Krasnogor-Pelta method has less
domain discriminant power than any of the methods considered by Sierk and
Pearson when using these simple contact maps. In another test, we found that
the USM-based distance has low agreement with the CATH tree structure on the
same benchmark of Sierk and Pearson; in any case, its agreement is lower than
that of a standard sequence alignment method, SSEARCH. Finally, we manually
found many small subsets of the database that are better clustered using
SSEARCH than USM, confirming that Krasnogor and Pelta's conclusions were based
on datasets that were too small.
| [
{
"version": "v1",
"created": "Mon, 6 Mar 2006 12:00:41 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Oct 2006 09:35:04 GMT"
}
] | 2011-11-10T00:00:00 | [
[
"Rocha",
"Jairo",
""
],
[
"Rosselló",
"Francesc",
""
],
[
"Segura",
"Joan",
""
]
] | TITLE: Compression ratios based on the Universal Similarity Metric still yield
protein distances far from CATH distances
ABSTRACT: Kolmogorov complexity has inspired several alignment-free distance measures,
based on the comparison of lengths of compressions, which have been applied
successfully in many areas. One of these measures, the so-called Universal
Similarity Metric (USM), has been used by Krasnogor and Pelta to compare simple
protein contact maps, showing that it yielded good clustering on four small
datasets. We report an extensive test of this metric using a much larger and
representative protein dataset: the domain dataset used by Sierk and Pearson to
evaluate seven protein structure comparison methods and two protein sequence
comparison methods. One result is that the Krasnogor-Pelta method has less
domain discriminant power than any of the methods considered by Sierk and
Pearson when using these simple contact maps. In another test, we found that
the USM-based distance has low agreement with the CATH tree structure on the
same benchmark of Sierk and Pearson; in any case, its agreement is lower than
that of a standard sequence alignment method, SSEARCH. Finally, we manually
found many small subsets of the database that are better clustered using
SSEARCH than USM, confirming that Krasnogor and Pelta's conclusions were based
on datasets that were too small.
|
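In practice the Universal Similarity Metric is approximated by the Normalized Compression Distance computed with a real compressor; a minimal zlib-based sketch of that standard formula (not the paper's contact-map pipeline) looks like this:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length, used as an approximation of Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, the usual practical stand-in for USM."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

seq1 = b"MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ" * 4
seq2 = b"MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ" * 3 + b"APRTEINSEQ"
print(round(ncd(seq1, seq2), 3), round(ncd(seq1, seq1), 3))
```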
1111.2018 | Lionel Tabourier | Bivas Mitra, Lionel Tabourier and Camille Roth | Intrinsically Dynamic Network Communities | 27 pages, 11 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community finding algorithms for networks have recently been extended to
dynamic data. Most of these recent methods aim at exhibiting community
partitions from successive graph snapshots and thereafter connecting or
smoothing these partitions using clever time-dependent features and sampling
techniques. These approaches are nonetheless achieving longitudinal rather than
dynamic community detection. We assume that communities are fundamentally
defined by the repetition of interactions among a set of nodes over time.
According to this definition, analyzing the data by considering successive
snapshots induces a significant loss of information: we suggest that it blurs
essentially dynamic phenomena - such as communities based on repeated
inter-temporal interactions, nodes switching from a community to another across
time, or the possibility that a community survives while its members are being
integrally replaced over a longer time period. We propose a formalism which
aims at tackling this issue in the context of time-directed datasets (such as
citation networks), and present several illustrations on both empirical and
synthetic dynamic networks. Finally, we introduce intrinsically dynamic
metrics to qualify temporal community structure and emphasize their possible
role as an estimator of the quality of the community detection - taking into
account the fact that various empirical contexts may call for distinct
`community' definitions and detection criteria.
| [
{
"version": "v1",
"created": "Tue, 8 Nov 2011 19:12:43 GMT"
}
] | 2011-11-09T00:00:00 | [
[
"Mitra",
"Bivas",
""
],
[
"Tabourier",
"Lionel",
""
],
[
"Roth",
"Camille",
""
]
] | TITLE: Intrinsically Dynamic Network Communities
ABSTRACT: Community finding algorithms for networks have recently been extended to
dynamic data. Most of these recent methods aim at exhibiting community
partitions from successive graph snapshots and thereafter connecting or
smoothing these partitions using clever time-dependent features and sampling
techniques. These approaches are nonetheless achieving longitudinal rather than
dynamic community detection. We assume that communities are fundamentally
defined by the repetition of interactions among a set of nodes over time.
According to this definition, analyzing the data by considering successive
snapshots induces a significant loss of information: we suggest that it blurs
essentially dynamic phenomena - such as communities based on repeated
inter-temporal interactions, nodes switching from a community to another across
time, or the possibility that a community survives while its members are being
integrally replaced over a longer time period. We propose a formalism which
aims at tackling this issue in the context of time-directed datasets (such as
citation networks), and present several illustrations on both empirical and
synthetic dynamic networks. Finally, we introduce intrinsically dynamic
metrics to qualify temporal community structure and emphasize their possible
role as an estimator of the quality of the community detection - taking into
account the fact that various empirical contexts may call for distinct
`community' definitions and detection criteria.
|
nlin/0609042 | Haluk Bingol | Amac Herdagdelen, Eser Aygun, Haluk Bingol | A Formal Treatment of Generalized Preferential Attachment and its
Empirical Validation | null | EPL 78 No 6 (June 2007) 60007 | 10.1209/0295-5075/78/60007 | null | nlin.AO cond-mat.stat-mech cs.CY physics.data-an | null | Generalized preferential attachment is defined as the tendency of a vertex to
acquire new links in the future with respect to a particular vertex property.
Understanding which properties influence link acquisition tendency (LAT) gives
us predictive power to estimate the future growth of the network and insight
into the actual dynamics governing complex networks. In this study, we
explore the effect of age and degree on LAT by analyzing data collected from a
new complex-network growth dataset. We found that LAT and degree of a vertex
are linearly correlated in accordance with previous studies. Interestingly, the
relation between LAT and age of a vertex is found to be in conflict with the
known models of network growth. We identified three different periods in the
network's lifetime where the relation between age and LAT is strongly positive,
almost stationary, and negative, respectively.
| [
{
"version": "v1",
"created": "Fri, 15 Sep 2006 20:08:23 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Jul 2007 17:09:44 GMT"
}
] | 2011-11-09T00:00:00 | [
[
"Herdagdelen",
"Amac",
""
],
[
"Aygun",
"Eser",
""
],
[
"Bingol",
"Haluk",
""
]
] | TITLE: A Formal Treatment of Generalized Preferential Attachment and its
Empirical Validation
ABSTRACT: Generalized preferential attachment is defined as the tendency of a vertex to
acquire new links in the future with respect to a particular vertex property.
Understanding which properties influence link acquisition tendency (LAT) gives
us predictive power to estimate the future growth of the network and insight
into the actual dynamics governing complex networks. In this study, we
explore the effect of age and degree on LAT by analyzing data collected from a
new complex-network growth dataset. We found that LAT and degree of a vertex
are linearly correlated in accordance with previous studies. Interestingly, the
relation between LAT and age of a vertex is found to be in conflict with the
known models of network growth. We identified three different periods in the
network's lifetime where the relation between age and LAT is strongly positive,
almost stationary, and negative, respectively.
|
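Link acquisition tendency can be estimated empirically from two snapshots of a growing network by counting, for each vertex, the links gained between the snapshots and averaging by the vertex's degree in the first snapshot. The sketch below is a minimal version of that measurement, not the paper's estimator; binning by vertex age would work the same way.

```python
from collections import defaultdict

def lat_by_degree(edges_t1, edges_t2):
    """Average number of new links acquired between two snapshots,
    grouped by a vertex's degree in the first snapshot."""
    deg1, deg2 = defaultdict(int), defaultdict(int)
    for u, v in edges_t1:
        deg1[u] += 1; deg1[v] += 1
    for u, v in edges_t2:
        deg2[u] += 1; deg2[v] += 1
    gained = defaultdict(list)
    for node, d in deg1.items():
        gained[d].append(deg2.get(node, d) - d)
    return {d: sum(g) / len(g) for d, g in sorted(gained.items())}

t1 = [(1, 2), (2, 3), (3, 4)]
t2 = t1 + [(2, 5), (2, 6), (4, 5)]
print(lat_by_degree(t1, t2))   # higher-degree vertices gained more links here
```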
1111.1562 | Mahmoud Yassien Shams el den Eng | M. Y. Shams, M. Z. Rashad, O. Nomir, and R. M. El-Awady | Iris Recognition Based on LBP and Combined LVQ Classifier | 12 Pages, 12 Figures | International Journal of Computer Science & Information Technology
(IJCSIT) Vol 3, No 5, Oct 2011 | 10.5121/ijcsit.2011.3506 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Iris recognition is considered one of the best biometric methods for human
identification and verification, because of its unique features that differ
from one person to another and its importance in the security field. This paper
proposes an algorithm for iris recognition and classification using a system
based on Local Binary Patterns and histogram properties as statistical
approaches for feature extraction, and a Combined Learning Vector Quantization
Classifier as a neural network approach for classification, in order to build a
hybrid model that depends on both features. Localization and segmentation are
performed using Canny edge detection and the Circular Hough Transform in order
to isolate the iris from the whole eye image and to detect noise. The feature
vectors resulting from LBP are applied to a Combined LVQ classifier with
different classes to determine the minimum acceptable performance, and the
result is based on majority voting among several LVQ classifiers. Different
iris datasets (CASIA, MMU1, MMU2, and LEI) with different extensions and sizes
are used. Since LBP works on grayscale images, colored iris images are first
converted to grayscale. The proposed system achieves a high recognition rate of
99.87% on these iris datasets compared with other methods.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2011 12:35:29 GMT"
}
] | 2011-11-08T00:00:00 | [
[
"Shams",
"M. Y.",
""
],
[
"Rashad",
"M. Z.",
""
],
[
"Nomir",
"O.",
""
],
[
"El-Awady",
"R. M.",
""
]
] | TITLE: Iris Recognition Based on LBP and Combined LVQ Classifier
ABSTRACT: Iris recognition is considered as one of the best biometric methods used for
human identification and verification, this is because of its unique features
that differ from one person to another, and its importance in the security
field. This paper proposes an algorithm for iris recognition and classification
using a system based on Local Binary Pattern and histogram properties as
statistical approaches for feature extraction, and a Combined Learning Vector
Quantization Classifier as a Neural Network approach for classification, in
order to build a hybrid model that depends on both features. The localization
and segmentation techniques are presented using both Canny edge detection and
the Hough Circular Transform in order to isolate the iris from the whole eye
image and for noise detection. Feature vectors resulting from LBP are applied
to a Combined LVQ classifier with different classes to determine the minimum
acceptable performance, and the result is based on majority voting among
several LVQ classifiers. Different iris datasets (CASIA, MMU1, MMU2, and LEI)
with different extensions and sizes are presented. Since LBP works at the
grayscale level, colored iris images are first transformed to grayscale. The
proposed system achieves a high recognition rate of 99.87% on different iris
datasets compared with other methods.
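As an illustration of the feature-extraction step only, the following minimal NumPy sketch computes a basic 8-neighbour LBP code image and its normalised histogram on a synthetic grayscale patch. It is not the authors' pipeline (no segmentation, no LVQ), and the function name and parameters are mine.

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Basic 8-neighbour LBP: threshold each pixel's 3x3 neighbourhood against
    its centre and read the 8 comparison bits as a code in [0, 255]."""
    g = gray.astype(np.int32)
    centre = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]       # clockwise from top-left
    code = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
        code += (neighbour >= centre).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()                           # normalised feature vector

patch = (np.random.default_rng(1).random((64, 64)) * 255).astype(np.uint8)
print(lbp_histogram(patch)[:8])
```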
|
1111.0045 | I. Bhattacharya | I. Bhattacharya, L. Getoor | Query-time Entity Resolution | null | Journal Of Artificial Intelligence Research, Volume 30, pages
621-657, 2007 | 10.1613/jair.2290 | null | cs.DB cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Entity resolution is the problem of reconciling database references
corresponding to the same real-world entities. Given the abundance of publicly
available databases that have unresolved entities, we motivate the problem of
query-time entity resolution: quick and accurate resolution for answering
queries over such unclean databases at query-time. Since collective entity
resolution approaches --- where related references are resolved jointly ---
have been shown to be more accurate than independent attribute-based resolution
for off-line entity resolution, we focus on developing new algorithms for
collective resolution for answering entity resolution queries at query-time.
For this purpose, we first formally show that, for collective resolution,
precision and recall for individual entities follow a geometric progression as
neighbors at increasing distances are considered. Unfolding this progression
leads naturally to a two stage expand and resolve query processing strategy. In
this strategy, we first extract the related records for a query using two novel
expansion operators, and then resolve the extracted records collectively. We
then show how the same strategy can be adapted for query-time entity resolution
by identifying and resolving only those database references that are the most
helpful for processing the query. We validate our approach on two large
real-world publication databases where we show the usefulness of collective
resolution and at the same time demonstrate the need for adaptive strategies
for query processing. We then show how the same queries can be answered in
real-time using our adaptive approach while preserving the gains of collective
resolution. In addition to experiments on real datasets, we use synthetically
generated data to empirically demonstrate the validity of the performance
trends predicted by our analysis of collective entity resolution over a wide
range of structural characteristics in the data.
| [
{
"version": "v1",
"created": "Mon, 31 Oct 2011 21:48:16 GMT"
}
] | 2011-11-02T00:00:00 | [
[
"Bhattacharya",
"I.",
""
],
[
"Getoor",
"L.",
""
]
] | TITLE: Query-time Entity Resolution
ABSTRACT: Entity resolution is the problem of reconciling database references
corresponding to the same real-world entities. Given the abundance of publicly
available databases that have unresolved entities, we motivate the problem of
query-time entity resolution: quick and accurate resolution for answering
queries over such unclean databases at query-time. Since collective entity
resolution approaches --- where related references are resolved jointly ---
have been shown to be more accurate than independent attribute-based resolution
for off-line entity resolution, we focus on developing new algorithms for
collective resolution for answering entity resolution queries at query-time.
For this purpose, we first formally show that, for collective resolution,
precision and recall for individual entities follow a geometric progression as
neighbors at increasing distances are considered. Unfolding this progression
leads naturally to a two stage expand and resolve query processing strategy. In
this strategy, we first extract the related records for a query using two novel
expansion operators, and then resolve the extracted records collectively. We
then show how the same strategy can be adapted for query-time entity resolution
by identifying and resolving only those database references that are the most
helpful for processing the query. We validate our approach on two large
real-world publication databases where we show the usefulness of collective
resolution and at the same time demonstrate the need for adaptive strategies
for query processing. We then show how the same queries can be answered in
real-time using our adaptive approach while preserving the gains of collective
resolution. In addition to experiments on real datasets, we use synthetically
generated data to empirically demonstrate the validity of the performance
trends predicted by our analysis of collective entity resolution over a wide
range of structural characteristics in the data.
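The two-stage "expand and resolve" strategy can be pictured with a toy sketch like the one below (hypothetical bibliographic references, hop-limited expansion, and a simple string-similarity merge). It is only meant to show the control flow, not the paper's expansion operators or its collective relational resolution.

```python
from collections import deque
from difflib import SequenceMatcher

# toy bibliographic references: id -> (author name string, co-author reference ids)
refs = {
    1: ("W. Wang", [2, 3]),
    2: ("A. Ansari", [1]),
    3: ("A. Ansari", [1, 4]),
    4: ("W. W. Wang", [3]),
    5: ("L. Li", []),
}

def expand(query_id, hops=2):
    """Stage 1 (sketch): pull in references within `hops` co-author links of
    the query reference; the paper's expansion operators are richer."""
    seen, frontier = {query_id}, deque([(query_id, 0)])
    while frontier:
        rid, d = frontier.popleft()
        if d == hops:
            continue
        for nxt in refs[rid][1]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

def resolve(ids, threshold=0.8):
    """Stage 2 (sketch): greedily merge extracted references with similar names;
    collective resolution would also exploit the shared neighbours."""
    merged = []
    for rid in sorted(ids):
        for group in merged:
            if SequenceMatcher(None, refs[rid][0], refs[group[0]][0]).ratio() >= threshold:
                group.append(rid)
                break
        else:
            merged.append([rid])
    return merged

extracted = expand(1)
print("extracted:", extracted)
print("entities :", resolve(extracted))
```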
|
1111.0051 | George M. Coghill | George M. Coghill, Ross D. King, Ashwin Srinivasan | Qualitative System Identification from Imperfect Data | null | Journal Of Artificial Intelligence Research, Volume 32, pages
825-877, 2008 | 10.1613/jair.2374 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experience in the physical sciences suggests that the only realistic means of
understanding complex systems is through the use of mathematical models.
Typically, this has come to mean the identification of quantitative models
expressed as differential equations. Quantitative modelling works best when the
structure of the model (i.e., the form of the equations) is known; and the
primary concern is one of estimating the values of the parameters in the model.
For complex biological systems, the model-structure is rarely known and the
modeler has to deal with both model-identification and parameter-estimation. In
this paper we are concerned with providing automated assistance to the first of
these problems. Specifically, we examine the identification by machine of the
structural relationships between experimentally observed variables. These
relationships will be expressed in the form of qualitative abstractions of a
quantitative model. Such qualitative models may not only provide clues to the
precise quantitative model, but also assist in understanding the essence of
that model. Our position in this paper is that background knowledge
incorporating system modelling principles can be used to constrain effectively
the set of good qualitative models. Utilising the model-identification
framework provided by Inductive Logic Programming (ILP) we present empirical
support for this position using a series of increasingly complex artificial
datasets. The results are obtained with qualitative and quantitative data
subject to varying amounts of noise and different degrees of sparsity. The
results also point to the presence of a set of qualitative states, which we
term kernel subsets, that may be necessary for a qualitative model-learner to
learn correct models. We demonstrate scalability of the method to biological
system modelling by identification of the glycolysis metabolic pathway from
data.
| [
{
"version": "v1",
"created": "Mon, 31 Oct 2011 22:02:30 GMT"
}
] | 2011-11-02T00:00:00 | [
[
"Coghill",
"George M.",
""
],
[
"King",
"Ross D.",
""
],
[
"Srinivasan",
"Ashwin",
""
]
] | TITLE: Qualitative System Identification from Imperfect Data
ABSTRACT: Experience in the physical sciences suggests that the only realistic means of
understanding complex systems is through the use of mathematical models.
Typically, this has come to mean the identification of quantitative models
expressed as differential equations. Quantitative modelling works best when the
structure of the model (i.e., the form of the equations) is known; and the
primary concern is one of estimating the values of the parameters in the model.
For complex biological systems, the model-structure is rarely known and the
modeler has to deal with both model-identification and parameter-estimation. In
this paper we are concerned with providing automated assistance to the first of
these problems. Specifically, we examine the identification by machine of the
structural relationships between experimentally observed variables. These
relationships will be expressed in the form of qualitative abstractions of a
quantitative model. Such qualitative models may not only provide clues to the
precise quantitative model, but also assist in understanding the essence of
that model. Our position in this paper is that background knowledge
incorporating system modelling principles can be used to constrain effectively
the set of good qualitative models. Utilising the model-identification
framework provided by Inductive Logic Programming (ILP) we present empirical
support for this position using a series of increasingly complex artificial
datasets. The results are obtained with qualitative and quantitative data
subject to varying amounts of noise and different degrees of sparsity. The
results also point to the presence of a set of qualitative states, which we
term kernel subsets, that may be necessary for a qualitative model-learner to
learn correct models. We demonstrate scalability of the method to biological
system modelling by identification of the glycolysis metabolic pathway from
data.
|
1111.0158 | Sanaa Elyassami | Sanaa Elyassami and Ali Idri | Applying Fuzzy ID3 Decision Tree for Software Effort Estimation | null | IJCSI International Journal of Computer Science Issues, Vol. 8,
Issue 4, No 1, 131-138 (2011) | null | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web Effort Estimation is a process of predicting the efforts and cost in
terms of money, schedule and staff for any software project system. Many
estimation models have been proposed over the last three decades and it is
believed that it is a must for the purpose of: Budgeting, risk analysis,
project planning and control, and project improvement investment analysis. In
this paper, we investigate the use of Fuzzy ID3 decision tree for software cost
estimation; it is designed by integrating the principles of ID3 decision tree
and the fuzzy set-theoretic concepts, enabling the model to handle uncertain
and imprecise data when describing the software projects, which can improve
greatly the accuracy of obtained estimates. MMRE and Pred are used as measures
of prediction accuracy for this study. A series of experiments is reported
using two different software projects datasets namely, Tukutuku and COCOMO'81
datasets. The results are compared with those produced by the crisp version of
the ID3 decision tree.
| [
{
"version": "v1",
"created": "Tue, 1 Nov 2011 09:58:08 GMT"
}
] | 2011-11-02T00:00:00 | [
[
"Elyassami",
"Sanaa",
""
],
[
"Idri",
"Ali",
""
]
] | TITLE: Applying Fuzzy ID3 Decision Tree for Software Effort Estimation
ABSTRACT: Web Effort Estimation is a process of predicting the efforts and cost in
terms of money, schedule and staff for any software project system. Many
estimation models have been proposed over the last three decades and it is
believed that it is a must for the purpose of: Budgeting, risk analysis,
project planning and control, and project improvement investment analysis. In
this paper, we investigate the use of Fuzzy ID3 decision tree for software cost
estimation; it is designed by integrating the principles of ID3 decision tree
and the fuzzy set-theoretic concepts, enabling the model to handle uncertain
and imprecise data when describing the software projects, which can improve
greatly the accuracy of obtained estimates. MMRE and Pred are used as measures
of prediction accuracy for this study. A series of experiments is reported
using two different software projects datasets namely, Tukutuku and COCOMO'81
datasets. The results are compared with those produced by the crisp version of
the ID3 decision tree.
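For readers unfamiliar with the underlying decision-tree machinery, the sketch below computes the information gain used by crisp ID3 on a made-up two-attribute effort dataset; a fuzzy ID3 would replace the hard example counts with sums of membership degrees. The data and function names are illustrative only.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Gain of splitting `rows` on the attribute at `attr_index` (crisp ID3).
    A fuzzy ID3 would replace the hard counts by sums of membership degrees."""
    base = entropy(labels)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys) for ys in by_value.values())
    return base - remainder

# toy effort-estimation data: (team size, complexity) -> effort class
rows = [("small", "low"), ("small", "high"), ("large", "low"), ("large", "high")]
labels = ["low", "high", "low", "high"]
print("gain(team size)  =", information_gain(rows, labels, 0))
print("gain(complexity) =", information_gain(rows, labels, 1))
```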
|
1107.3350 | Yang Li Daniel | Yang D. Li, Zhenjie Zhang, Marianne Winslett, Yin Yang | Compressive Mechanism: Utilizing Sparse Representation in Differential
Privacy | 20 pages, 6 figures | WPES '11 Proceedings of the 10th annual ACM workshop on Privacy in
the electronic society ACM New York, NY, USA (2011), pages 177-182 | 10.1145/2046556.2046581 | null | cs.DS cs.CR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy provides the first theoretical foundation with provable
privacy guarantee against adversaries with arbitrary prior knowledge. The main
idea to achieve differential privacy is to inject random noise into statistical
query results. Besides correctness, the most important goal in the design of a
differentially private mechanism is to reduce the effect of random noise,
ensuring that the noisy results can still be useful.
This paper proposes the \emph{compressive mechanism}, a novel solution on the
basis of state-of-the-art compression technique, called \emph{compressive
sensing}. Compressive sensing is a decent theoretical tool for compact synopsis
construction, using random projections. In this paper, we show that the amount
of noise is significantly reduced from $O(\sqrt{n})$ to $O(\log(n))$, when the
noise insertion procedure is carried on the synopsis samples instead of the
original database. As an extension, we also apply the proposed compressive
mechanism to solve the problem of continual release of statistical results.
Extensive experiments using real datasets justify our accuracy claims.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2011 03:20:58 GMT"
}
] | 2011-11-01T00:00:00 | [
[
"Li",
"Yang D.",
""
],
[
"Zhang",
"Zhenjie",
""
],
[
"Winslett",
"Marianne",
""
],
[
"Yang",
"Yin",
""
]
] | TITLE: Compressive Mechanism: Utilizing Sparse Representation in Differential
Privacy
ABSTRACT: Differential privacy provides the first theoretical foundation with provable
privacy guarantee against adversaries with arbitrary prior knowledge. The main
idea to achieve differential privacy is to inject random noise into statistical
query results. Besides correctness, the most important goal in the design of a
differentially private mechanism is to reduce the effect of random noise,
ensuring that the noisy results can still be useful.
This paper proposes the \emph{compressive mechanism}, a novel solution on the
basis of state-of-the-art compression technique, called \emph{compressive
sensing}. Compressive sensing is a decent theoretical tool for compact synopsis
construction, using random projections. In this paper, we show that the amount
of noise is significantly reduced from $O(\sqrt{n})$ to $O(\log(n))$, when the
noise insertion procedure is carried on the synopsis samples instead of the
original database. As an extension, we also apply the proposed compressive
mechanism to solve the problem of continual release of statistical results.
Extensive experiments using real datasets justify our accuracy claims.
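The core idea, perturbing an m-sample random-projection synopsis instead of all n counts so that the total injected noise scales with m rather than n, can be sketched as follows. This is only a schematic: the sparse-recovery step that compressive sensing provides is deliberately omitted, and the sizes, epsilon and sensitivity values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 128                      # database size vs. synopsis size
x = np.zeros(n)
x[rng.choice(n, 20, replace=False)] = rng.integers(50, 100, size=20)  # sparse counts

eps, sens = 1.0, 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection used as the synopsis

# (1) naive Laplace mechanism: perturb every one of the n counts
noise_naive = rng.laplace(scale=sens / eps, size=n)

# (2) compressive-style mechanism: perturb only the m synopsis samples
noise_synopsis = rng.laplace(scale=sens / eps, size=m)
y = A @ x + noise_synopsis

print("noise energy, naive mechanism   :", np.linalg.norm(noise_naive))
print("noise energy, synopsis mechanism:", np.linalg.norm(noise_synopsis))
# Recovering an estimate of x from y would use a sparse-recovery (l1) solver,
# which is where compressive sensing enters; that step is omitted here.
```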
|
1110.6649 | Feifei Li | Jeffrey Jestes, Ke Yi, Feifei Li | Building Wavelet Histograms on Large Data in MapReduce | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 2, pp.
109-120 (2011) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MapReduce is becoming the de facto framework for storing and processing
massive data, due to its excellent scalability, reliability, and elasticity. In
many MapReduce applications, obtaining a compact accurate summary of data is
essential. Among various data summarization tools, histograms have proven to be
particularly important and useful for summarizing data, and the wavelet
histogram is one of the most widely used histograms. In this paper, we
investigate the problem of building wavelet histograms efficiently on large
datasets in MapReduce. We measure the efficiency of the algorithms by both
end-to-end running time and communication cost. We demonstrate straightforward
adaptations of existing exact and approximate methods for building wavelet
histograms to MapReduce clusters are highly inefficient. To that end, we design
new algorithms for computing exact and approximate wavelet histograms and
discuss their implementation in MapReduce. We illustrate our techniques in
Hadoop, and compare to baseline solutions with extensive experiments performed
in a heterogeneous Hadoop cluster of 16 nodes, using large real and synthetic
datasets, up to hundreds of gigabytes. The results suggest significant (often
orders of magnitude) performance improvement achieved by our new algorithms.
| [
{
"version": "v1",
"created": "Sun, 30 Oct 2011 20:21:30 GMT"
}
] | 2011-11-01T00:00:00 | [
[
"Jestes",
"Jeffrey",
""
],
[
"Yi",
"Ke",
""
],
[
"Li",
"Feifei",
""
]
] | TITLE: Building Wavelet Histograms on Large Data in MapReduce
ABSTRACT: MapReduce is becoming the de facto framework for storing and processing
massive data, due to its excellent scalability, reliability, and elasticity. In
many MapReduce applications, obtaining a compact accurate summary of data is
essential. Among various data summarization tools, histograms have proven to be
particularly important and useful for summarizing data, and the wavelet
histogram is one of the most widely used histograms. In this paper, we
investigate the problem of building wavelet histograms efficiently on large
datasets in MapReduce. We measure the efficiency of the algorithms by both
end-to-end running time and communication cost. We demonstrate straightforward
adaptations of existing exact and approximate methods for building wavelet
histograms to MapReduce clusters are highly inefficient. To that end, we design
new algorithms for computing exact and approximate wavelet histograms and
discuss their implementation in MapReduce. We illustrate our techniques in
Hadoop, and compare to baseline solutions with extensive experiments performed
in a heterogeneous Hadoop cluster of 16 nodes, using large real and synthetic
datasets, up to hundreds of gigabytes. The results suggest significant (often
orders of magnitude) performance improvement achieved by our new algorithms.
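The (non-distributed) core of a wavelet histogram is just the Haar transform of the frequency vector followed by keeping the k largest coefficients; the sketch below shows that step in plain NumPy on a tiny vector. It says nothing about the MapReduce algorithms that are the paper's actual contribution, and the selection rule shown (largest absolute value, unnormalised transform) is a simplification.

```python
import numpy as np

def haar_decompose(freq):
    """Unnormalised Haar wavelet transform of a frequency vector whose length
    is a power of two; returns [overall average, coarse ... fine details]."""
    data = np.asarray(freq, dtype=float)
    coeffs = []
    while len(data) > 1:
        coeffs.append((data[0::2] - data[1::2]) / 2.0)   # detail coefficients
        data = (data[0::2] + data[1::2]) / 2.0           # pairwise averages
    coeffs.append(data)                                  # overall average
    return np.concatenate(coeffs[::-1])

def haar_reconstruct(coeffs):
    data = coeffs[:1]
    pos = 1
    while pos < len(coeffs):
        det = coeffs[pos:pos + len(data)]
        data = np.stack([data + det, data - det], axis=1).ravel()
        pos += len(det)
    return data

freq = np.array([2., 2., 0., 2., 3., 5., 4., 4.])
c = haar_decompose(freq)
k = 3
keep = np.argsort(np.abs(c))[-k:]          # indices of the k largest coefficients
c_k = np.zeros_like(c)
c_k[keep] = c[keep]
print("coefficients :", c)
print("top-%d approx:" % k, haar_reconstruct(c_k))
```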
|
1110.6652 | Guimei Liu | Guimei Liu, Haojun Zhang, Limsoon Wong | Controlling False Positives in Association Rule Mining | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 2, pp.
145-156 (2011) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Association rule mining is an important problem in the data mining area. It
enumerates and tests a large number of rules on a dataset and outputs rules
that satisfy user-specified constraints. Due to the large number of rules being
tested, rules that do not represent real systematic effect in the data can
satisfy the given constraints purely by random chance. Hence association rule
mining often suffers from a high risk of false positive errors. There is a lack
of comprehensive study on controlling false positives in association rule
mining. In this paper, we adopt three multiple testing correction
approaches---the direct adjustment approach, the permutation-based approach and
the holdout approach---to control false positives in association rule mining,
and conduct extensive experiments to study their performance. Our results show
that (1) Numerous spurious rules are generated if no correction is made. (2)
The three approaches can control false positives effectively. Among the three
approaches, the permutation-based approach has the highest power of detecting
real association rules, but it is very computationally expensive. We employ
several techniques to reduce its cost effectively.
| [
{
"version": "v1",
"created": "Sun, 30 Oct 2011 20:22:00 GMT"
}
] | 2011-11-01T00:00:00 | [
[
"Liu",
"Guimei",
""
],
[
"Zhang",
"Haojun",
""
],
[
"Wong",
"Limsoon",
""
]
] | TITLE: Controlling False Positives in Association Rule Mining
ABSTRACT: Association rule mining is an important problem in the data mining area. It
enumerates and tests a large number of rules on a dataset and outputs rules
that satisfy user-specified constraints. Due to the large number of rules being
tested, rules that do not represent real systematic effect in the data can
satisfy the given constraints purely by random chance. Hence association rule
mining often suffers from a high risk of false positive errors. There is a lack
of comprehensive study on controlling false positives in association rule
mining. In this paper, we adopt three multiple testing correction
approaches---the direct adjustment approach, the permutation-based approach and
the holdout approach---to control false positives in association rule mining,
and conduct extensive experiments to study their performance. Our results show
that (1) Numerous spurious rules are generated if no correction is made. (2)
The three approaches can control false positives effectively. Among the three
approaches, the permutation-based approach has the highest power of detecting
real association rules, but it is very computationally expensive. We employ
several techniques to reduce its cost effectively.
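A minimal sketch of the "direct adjustment" idea, assuming hypothetical rule counts: each candidate rule gets a one-sided Fisher exact p-value and the significance threshold is divided by the number of rules tested (Bonferroni). The permutation-based and holdout approaches studied in the paper are not shown.

```python
from scipy.stats import fisher_exact

def rule_pvalue(n_xy, n_x, n_y, n):
    """One-sided Fisher exact test that antecedent X and consequent Y co-occur
    more often than expected under independence."""
    table = [[n_xy, n_x - n_xy],
             [n_y - n_xy, n - n_x - n_y + n_xy]]
    _, p = fisher_exact(table, alternative="greater")
    return p

# hypothetical candidate rules: support(X and Y), support(X), support(Y)
n = 10_000
candidates = {"A->B": (420, 600, 5_000),
              "C->D": (55, 100, 5_200),
              "E->F": (130, 2_500, 500)}

alpha, m = 0.05, len(candidates)       # direct (Bonferroni) adjustment
for rule, (n_xy, n_x, n_y) in candidates.items():
    p = rule_pvalue(n_xy, n_x, n_y, n)
    verdict = "significant" if p < alpha / m else "not significant"
    print(rule, "p=%.3g" % p, verdict)
```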
|
1002.4058 | Lev Reyzin | Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, and Robert E.
Schapire | Contextual Bandit Algorithms with Supervised Learning Guarantees | 10 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of learning in an online, bandit setting where the
learner must repeatedly select among $K$ actions, but only receives partial
feedback based on its choices. We establish two new facts: First, using a new
algorithm called Exp4.P, we show that it is possible to compete with the best
in a set of $N$ experts with probability $1-\delta$ while incurring regret at
most $O(\sqrt{KT\ln(N/\delta)})$ over $T$ time steps. The new algorithm is
tested empirically in a large-scale, real-world dataset. Second, we give a new
algorithm called VE that competes with a possibly infinite set of policies of
VC-dimension $d$ while incurring regret at most $O(\sqrt{T(d\ln(T) + \ln
(1/\delta))})$ with probability $1-\delta$. These guarantees improve on those
of all previous algorithms, whether in a stochastic or adversarial environment,
and bring us closer to providing supervised learning type guarantees for the
contextual bandit setting.
| [
{
"version": "v1",
"created": "Mon, 22 Feb 2010 07:11:39 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jul 2010 21:25:22 GMT"
},
{
"version": "v3",
"created": "Thu, 27 Oct 2011 19:28:49 GMT"
}
] | 2011-10-28T00:00:00 | [
[
"Beygelzimer",
"Alina",
""
],
[
"Langford",
"John",
""
],
[
"Li",
"Lihong",
""
],
[
"Reyzin",
"Lev",
""
],
[
"Schapire",
"Robert E.",
""
]
] | TITLE: Contextual Bandit Algorithms with Supervised Learning Guarantees
ABSTRACT: We address the problem of learning in an online, bandit setting where the
learner must repeatedly select among $K$ actions, but only receives partial
feedback based on its choices. We establish two new facts: First, using a new
algorithm called Exp4.P, we show that it is possible to compete with the best
in a set of $N$ experts with probability $1-\delta$ while incurring regret at
most $O(\sqrt{KT\ln(N/\delta)})$ over $T$ time steps. The new algorithm is
tested empirically in a large-scale, real-world dataset. Second, we give a new
algorithm called VE that competes with a possibly infinite set of policies of
VC-dimension $d$ while incurring regret at most $O(\sqrt{T(d\ln(T) + \ln
(1/\delta))})$ with probability $1-\delta$. These guarantees improve on those
of all previous algorithms, whether in a stochastic or adversarial environment,
and bring us closer to providing supervised learning type guarantees for the
contextual bandit setting.
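A stripped-down Exp4-style sketch (exponential weights over experts with importance-weighted reward estimates) on a toy problem; Exp4.P additionally adds a confidence term to the update to obtain its high-probability guarantee, which is omitted here, and the experts, horizon and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, T, gamma = 4, 3, 5000, 0.1

def expert_advice(context):
    """Each expert returns a distribution over the K arms (toy experts:
    always-uniform, context-following, and anti-context)."""
    uniform = np.full(K, 1.0 / K)
    follow = np.eye(K)[context]
    anti = np.eye(K)[(context + 1) % K]
    return np.vstack([uniform, follow, anti])          # N x K advice matrix

w = np.ones(N)
total_reward = 0.0
for _ in range(T):
    context = rng.integers(K)                          # hidden best arm this round
    xi = expert_advice(context)
    p = (1 - gamma) * (w @ xi) / w.sum() + gamma / K   # mixed, explored distribution
    a = rng.choice(K, p=p)
    reward = 1.0 if a == context else 0.0              # bandit feedback for arm a only
    total_reward += reward
    rhat = np.zeros(K)
    rhat[a] = reward / p[a]                            # importance-weighted estimate
    w *= np.exp(gamma * (xi @ rhat) / K)               # exponential-weights update
print("average reward:", total_reward / T, "expert weights:", w / w.sum())
```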
|
1110.5688 | Nicholas M. Ball | Nicholas M. Ball (Herzberg Institute of Astrophysics, Victoria, BC,
Canada) | Discussion on "Techniques for Massive-Data Machine Learning in
Astronomy" by A. Gray | 6 pages, 1 figure. Invited commentary, Statistical Challenges in
Modern Astronomy V, Penn State, Jun 2011 | null | null | null | astro-ph.IM astro-ph.CO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Astronomy is increasingly encountering two fundamental truths: (1) The field
is faced with the task of extracting useful information from extremely large,
complex, and high dimensional datasets; (2) The techniques of astroinformatics
and astrostatistics are the only way to make this tractable, and bring the
required level of sophistication to the analysis. Thus, an approach which
provides these tools in a way that scales to these datasets is not just
desirable, it is vital. The expertise required spans not just astronomy, but
also computer science, statistics, and informatics. As a computer scientist and
expert in machine learning, Alex's contribution of expertise and a large number
of fast algorithms designed to scale to large datasets, is extremely welcome.
We focus in this discussion on the questions raised by the practical
application of these algorithms to real astronomical datasets. That is, what is
needed to maximally leverage their potential to improve the science return?
This is not a trivial task. While computing and statistical expertise are
required, so is astronomical expertise. Precedent has shown that, to-date, the
collaborations most productive in producing astronomical science results (e.g.,
the Sloan Digital Sky Survey), have either involved astronomers expert in
computer science and/or statistics, or astronomers involved in close, long-term
collaborations with experts in those fields. This does not mean that the
astronomers are giving the most important input, but simply that their input is
crucial in guiding the effort in the most fruitful directions, and coping with
the issues raised by real data. Thus, the tools must be useable and
understandable by those whose primary expertise is not computing or statistics,
even though they may have quite extensive knowledge of those fields.
| [
{
"version": "v1",
"created": "Wed, 26 Oct 2011 00:22:36 GMT"
}
] | 2011-10-28T00:00:00 | [
[
"Ball",
"Nicholas M.",
"",
"Herzberg Institute of Astrophysics, Victoria, BC,\n Canada"
]
] | TITLE: Discussion on "Techniques for Massive-Data Machine Learning in
Astronomy" by A. Gray
ABSTRACT: Astronomy is increasingly encountering two fundamental truths: (1) The field
is faced with the task of extracting useful information from extremely large,
complex, and high dimensional datasets; (2) The techniques of astroinformatics
and astrostatistics are the only way to make this tractable, and bring the
required level of sophistication to the analysis. Thus, an approach which
provides these tools in a way that scales to these datasets is not just
desirable, it is vital. The expertise required spans not just astronomy, but
also computer science, statistics, and informatics. As a computer scientist and
expert in machine learning, Alex's contribution of expertise and a large number
of fast algorithms designed to scale to large datasets, is extremely welcome.
We focus in this discussion on the questions raised by the practical
application of these algorithms to real astronomical datasets. That is, what is
needed to maximally leverage their potential to improve the science return?
This is not a trivial task. While computing and statistical expertise are
required, so is astronomical expertise. Precedent has shown that, to-date, the
collaborations most productive in producing astronomical science results (e.g.,
the Sloan Digital Sky Survey), have either involved astronomers expert in
computer science and/or statistics, or astronomers involved in close, long-term
collaborations with experts in those fields. This does not mean that the
astronomers are giving the most important input, but simply that their input is
crucial in guiding the effort in the most fruitful directions, and coping with
the issues raised by real data. Thus, the tools must be useable and
understandable by those whose primary expertise is not computing or statistics,
even though they may have quite extensive knowledge of those fields.
|
1110.4723 | Xinran He | Xinran He, Guojie Song, Wei Chen, Qingye Jiang | Influence Blocking Maximization in Social Networks under the Competitive
Linear Threshold Model Technical Report | Full version technical report of Paper "Influence Blocking
Maximization in Social Networks under the Competitive Linear Threshold Model"
which has been submitted to SDM2012. 14 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many real-world situations, different and often opposite opinions,
innovations, or products are competing with one another for their social
influence in a networked society. In this paper, we study competitive influence
propagation in social networks under the competitive linear threshold (CLT)
model, an extension to the classic linear threshold model. Under the CLT model,
we focus on the problem that one entity tries to block the influence
propagation of its competing entity as much as possible by strategically
selecting a number of seed nodes that could initiate its own influence
propagation. We call this problem the influence blocking maximization (IBM)
problem. We prove that the objective function of IBM in the CLT model is
submodular, and thus a greedy algorithm could achieve 1-1/e approximation
ratio. However, the greedy algorithm requires Monte-Carlo simulations of
competitive influence propagation, which makes the algorithm not efficient. We
design an efficient algorithm CLDAG, which utilizes the properties of the CLT
model, to address this issue. We conduct extensive simulations of CLDAG, the
greedy algorithm, and other baseline algorithms on real-world and synthetic
datasets. Our results show that CLDAG provides accuracy on par with the greedy
algorithm, and often better than the other algorithms, while it is
two orders of magnitude faster than the greedy algorithm.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2011 07:59:37 GMT"
}
] | 2011-10-24T00:00:00 | [
[
"He",
"Xinran",
""
],
[
"Song",
"Guojie",
""
],
[
"Chen",
"Wei",
""
],
[
"Jiang",
"Qingye",
""
]
] | TITLE: Influence Blocking Maximization in Social Networks under the Competitive
Linear Threshold Model Technical Report
ABSTRACT: In many real-world situations, different and often opposite opinions,
innovations, or products are competing with one another for their social
influence in a networked society. In this paper, we study competitive influence
propagation in social networks under the competitive linear threshold (CLT)
model, an extension to the classic linear threshold model. Under the CLT model,
we focus on the problem that one entity tries to block the influence
propagation of its competing entity as much as possible by strategically
selecting a number of seed nodes that could initiate its own influence
propagation. We call this problem the influence blocking maximization (IBM)
problem. We prove that the objective function of IBM in the CLT model is
submodular, and thus a greedy algorithm could achieve 1-1/e approximation
ratio. However, the greedy algorithm requires Monte-Carlo simulations of
competitive influence propagation, which makes the algorithm not efficient. We
design an efficient algorithm CLDAG, which utilizes the properties of the CLT
model, to address this issue. We conduct extensive simulations of CLDAG, the
greedy algorithm, and other baseline algorithms on real-world and synthetic
datasets. Our results show that CLDAG provides accuracy on par with the greedy
algorithm, and often better than the other algorithms, while it is
two orders of magnitude faster than the greedy algorithm.
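The greedy 1-1/e argument only needs a monotone submodular objective, so the sketch below runs the standard greedy loop against a simplified coverage-style stand-in for "blocked influence" on a random graph. A faithful implementation would estimate the objective with Monte-Carlo simulations of the competitive linear threshold process (or the CLDAG heuristic); the graph, sizes and the objective itself are illustrative assumptions.

```python
import random

random.seed(0)
n, p = 200, 0.03
neighbors = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            neighbors[u].add(v)
            neighbors[v].add(u)

competitor_seeds = set(random.sample(range(n), 5))

def blocked_influence(block_seeds):
    """Stand-in objective: how much of the competitor seeds' one-hop reach is
    covered by our blocking seeds' closed neighbourhoods.  A faithful version
    would run Monte-Carlo simulations of the competitive LT process instead."""
    reach = set(competitor_seeds)
    for s in competitor_seeds:
        reach |= neighbors[s]
    covered = set()
    for b in block_seeds:
        covered |= neighbors[b] | {b}
    return len(reach & covered)

def greedy(k):
    chosen = set()
    for _ in range(k):
        best = max((v for v in range(n) if v not in chosen | competitor_seeds),
                   key=lambda v: blocked_influence(chosen | {v}))
        chosen.add(best)
    return chosen

print("blocking seeds:", greedy(4))
```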
|
1110.4474 | Sebastiano Vigna | Paolo Boldi, Marco Rosa, Sebastiano Vigna | Robustness of Social Networks: Comparative Results Based on Distance
Distributions | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a social network, which of its nodes have a stronger impact in
determining its structure? More formally: which node-removal order has the
greatest impact on the network structure? We approach this well-known problem
for the first time in a setting that combines both web graphs and social
networks, using datasets that are orders of magnitude larger than those
appearing in the previous literature, thanks to some recently developed
algorithms and software tools that make it possible to approximate accurately
the number of reachable pairs and the distribution of distances in a graph. Our
experiments highlight deep differences in the structure of social networks and
web graphs, show significant limitations of previous experimental results, and
at the same time reveal clustering by label propagation as a new and very
effective way of locating nodes that are important from a structural viewpoint.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2011 08:49:01 GMT"
}
] | 2011-10-21T00:00:00 | [
[
"Boldi",
"Paolo",
""
],
[
"Rosa",
"Marco",
""
],
[
"Vigna",
"Sebastiano",
""
]
] | TITLE: Robustness of Social Networks: Comparative Results Based on Distance
Distributions
ABSTRACT: Given a social network, which of its nodes have a stronger impact in
determining its structure? More formally: which node-removal order has the
greatest impact on the network structure? We approach this well-known problem
for the first time in a setting that combines both web graphs and social
networks, using datasets that are orders of magnitude larger than those
appearing in the previous literature, thanks to some recently developed
algorithms and software tools that make it possible to approximate accurately
the number of reachable pairs and the distribution of distances in a graph. Our
experiments highlight deep differences in the structure of social networks and
web graphs, show significant limitations of previous experimental results, and
at the same time reveal clustering by label propagation as a new and very
effective way of locating nodes that are important from a structural viewpoint.
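On toy-sized graphs the experiment can be mimicked exactly with BFS, as in the sketch below: compare the number of reachable pairs before and after removing k nodes chosen by degree versus at random. The probabilistic counters that make this feasible on the huge datasets of the paper are replaced here by exact counting, and all parameters are arbitrary.

```python
import random
from collections import deque

random.seed(0)
n, p = 300, 0.02
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)

def reachable_pairs(adj, removed=frozenset()):
    """Exact BFS count of ordered reachable pairs (toy-sized stand-in for the
    probabilistic counters used on the paper's large datasets)."""
    total = 0
    for src in adj:
        if src in removed:
            continue
        seen, queue = {src}, deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in removed and w not in seen:
                    seen.add(w)
                    queue.append(w)
        total += len(seen) - 1
    return total

k = 15
by_degree = set(sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k])
at_random = set(random.sample(range(n), k))
print("baseline          :", reachable_pairs(adj))
print("remove top degree :", reachable_pairs(adj, by_degree))
print("remove random     :", reachable_pairs(adj, at_random))
```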
|
1110.4278 | Marina Sokol | Konstantin Avrachenkov (INRIA Sophia Antipolis), Paulo Gon\c{c}alves
(LIP), Alexey Mishenin, Marina Sokol (INRIA Sophia Antipolis) | Generalized Optimization Framework for Graph-based Semi-supervised
Learning | null | null | null | RR-7774 | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a generalized optimization framework for graph-based
semi-supervised learning. The framework gives as particular cases the Standard
Laplacian, Normalized Laplacian and PageRank based methods. We have also
provided new probabilistic interpretation based on random walks and
characterized the limiting behaviour of the methods. The random walk based
interpretation allows us to explain differences between the performances of
methods with different smoothing kernels. It appears that the PageRank based
method is robust with respect to the choice of the regularization parameter and
the labelled data. We illustrate our theoretical results with two realistic
datasets, characterizing different challenges: Les Miserables characters social
network and Wikipedia hyper-link graph. The graph-based semi-supervised
learning classifies the Wikipedia articles with very good precision and perfect
recall employing only the information about the hyper-text links.
| [
{
"version": "v1",
"created": "Wed, 19 Oct 2011 13:29:32 GMT"
}
] | 2011-10-20T00:00:00 | [
[
"Avrachenkov",
"Konstantin",
"",
"INRIA Sophia Antipolis"
],
[
"Gonçalves",
"Paulo",
"",
"LIP"
],
[
"Mishenin",
"Alexey",
"",
"INRIA Sophia Antipolis"
],
[
"Sokol",
"Marina",
"",
"INRIA Sophia Antipolis"
]
] | TITLE: Generalized Optimization Framework for Graph-based Semi-supervised
Learning
ABSTRACT: We develop a generalized optimization framework for graph-based
semi-supervised learning. The framework gives as particular cases the Standard
Laplacian, Normalized Laplacian and PageRank based methods. We have also
provided new probabilistic interpretation based on random walks and
characterized the limiting behaviour of the methods. The random walk based
interpretation allows us to explain differences between the performances of
methods with different smoothing kernels. It appears that the PageRank based
method is robust with respect to the choice of the regularization parameter and
the labelled data. We illustrate our theoretical results with two realistic
datasets, characterizing different challenges: Les Miserables characters social
network and Wikipedia hyper-link graph. The graph-based semi-supervised
learning classifies the Wikipedia articles with very good precision and perfect
recall employing only the information about the hyper-text links.
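One particular case of the framework, the PageRank-based method, reduces to a closed-form personalized-PageRank-style propagation of the labels; the sketch below shows it on an assumed toy graph with two labelled nodes. The choice of alpha, the graph itself, and the absence of the degree-power generalisation are all simplifications, not the paper's setup.

```python
import numpy as np

# toy graph: two clusters of four nodes joined by a single bridge edge (3, 4)
W = np.zeros((8, 8))
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4),
         (4, 5), (4, 6), (5, 7), (6, 7)]
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

Y = np.zeros((8, 2))
Y[0, 0] = 1.0          # node 0 labelled class 0
Y[7, 1] = 1.0          # node 7 labelled class 1

alpha = 0.85
D_inv = np.diag(1.0 / W.sum(axis=1))
P = D_inv @ W                                   # random-walk transition matrix
F = (1 - alpha) * np.linalg.inv(np.eye(8) - alpha * P) @ Y   # PageRank-style scores
print("predicted classes:", F.argmax(axis=1))
```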
|
1110.4012 | Maria Emilia Ruiz | M. E. Ruiz (Instituto de Astronom\'ia y F\'isica del Espacio
(CONICET-Universidad de Buenos Aires), Argentina), S. Dasso (Instituto de
Astronom\'ia y F\'isica del Espacio (CONICET-Universidad de Buenos Aires) and
Departamento de F\'isica, Facultad de Ciencias Exactas y Naturales,
Universidad de Buenos Aires, Argentina), W. H. Matthaeus (Department of
Geography, Bartol Research Institute, University of Delaware, Newark, DE,
USA), E. Marsch (Max-Planck-Institut f\"ur Sonnensystemforschung,
Max-Planck-Stra{\ss}e 2, Katlenburg-Lindau, Germany) and J. M. Weygand
(Institute of Geophysics and Planetary Physics, University of California, Los
Angeles, CA, USA) | Aging of anisotropy of solar wind magnetic fluctuations in the inner
heliosphere | Published | J. Geophys. Res., 116, (2011) A10102 | 10.1029/2011JA016697 | null | astro-ph.SR physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the evolution of the interplanetary magnetic field spatial
structure by examining the inner heliospheric autocorrelation function, using
Helios 1 and Helios 2 "in situ" observations. We focus on the evolution of the
integral length scale (\lambda) anisotropy associated with the turbulent
magnetic fluctuations, with respect to the aging of fluid parcels traveling
away from the Sun, and according to whether the measured \lambda is principally
parallel (\lambda_parallel) or perpendicular (\lambda_perp) to the direction of
a suitably defined local ensemble average magnetic field B0. We analyze a set
of 1065 24-hour long intervals (covering full missions). For each interval, we
compute the magnetic autocorrelation function, using classical
single-spacecraft techniques, and estimate \lambda with help of two different
proxies for both Helios datasets. We find that close to the Sun,
\lambda_parallel < \lambda_perp. This supports a slab-like spectral model,
where the population of fluctuations having wavevector k parallel to B0 is much
larger than the one with k-vector perpendicular. A population favoring
perpendicular k-vectors would be considered quasi-two dimensional (2D). Moving
towards 1 AU, we find a progressive isotropization of \lambda and a trend to
reach an inverted abundance, consistent with the well-known result at 1 AU that
\lambda_parallel > \lambda_perp, usually interpreted as a dominant quasi-2D
picture over the slab picture. Thus, our results are consistent with driving
modes having wavevectors parallel to B0 near the Sun, and a progressive dynamical
spectral transfer of energy to modes with perpendicular wavevectors as the
solar wind parcels age while moving from the Sun to 1 AU.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2011 14:59:23 GMT"
}
] | 2011-10-19T00:00:00 | [
[
"Ruiz",
"M. E.",
"",
"Instituto de Astronomía y Física del Espacio"
],
[
"Dasso",
"S.",
"",
"Instituto de\n Astronomía y Física del Espacio"
],
[
"Matthaeus",
"W. H.",
"",
"Department of\n Geography, Bartol Research Institute, University of Delaware, Newark, DE,\n USA"
],
[
"Marsch",
"E.",
"",
"Max-Planck-Institut für Sonnensystemforschung,\n Max-Planck-Straße 2, Katlenburg-Lindau, Germany"
],
[
"Weygand",
"J. M.",
"",
"Institute of Geophysics and Planetary Physics, University of California, Los\n Angeles, CA, USA"
]
] | TITLE: Aging of anisotropy of solar wind magnetic fluctuations in the inner
heliosphere
ABSTRACT: We analyze the evolution of the interplanetary magnetic field spatial
structure by examining the inner heliospheric autocorrelation function, using
Helios 1 and Helios 2 "in situ" observations. We focus on the evolution of the
integral length scale (\lambda) anisotropy associated with the turbulent
magnetic fluctuations, with respect to the aging of fluid parcels traveling
away from the Sun, and according to whether the measured \lambda is principally
parallel (\lambda_parallel) or perpendicular (\lambda_perp) to the direction of
a suitably defined local ensemble average magnetic field B0. We analyze a set
of 1065 24-hour long intervals (covering full missions). For each interval, we
compute the magnetic autocorrelation function, using classical
single-spacecraft techniques, and estimate \lambda with help of two different
proxies for both Helios datasets. We find that close to the Sun,
\lambda_parallel < \lambda_perp. This supports a slab-like spectral model,
where the population of fluctuations having wavevector k parallel to B0 is much
larger than the one with k-vector perpendicular. A population favoring
perpendicular k-vectors would be considered quasi-two dimensional (2D). Moving
towards 1 AU, we find a progressive isotropization of \lambda and a trend to
reach an inverted abundance, consistent with the well-known result at 1 AU that
\lambda_parallel > \lambda_perp, usually interpreted as a dominant quasi-2D
picture over the slab picture. Thus, our results are consistent with driving
modes having wavevectors parallel to B0 near the Sun, and a progressive dynamical
spectral transfer of energy to modes with perpendicular wavevectors as the
solar wind parcels age while moving from the Sun to 1 AU.
|
1006.2322 | Yoshiharu Maeno | Yoshiharu Maeno | Discovery of a missing disease spreader | in press | Physica A vol.390, pp.3412-3426 (2011) | 10.1016/j.physa.2011.05.005 | null | cs.AI cs.SI physics.bio-ph physics.soc-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study presents a method to discover an outbreak of an infectious disease
in a region for which data are missing, but which is at work as a disease
spreader. Node discovery for the spread of an infectious disease is defined as
discriminating between the nodes which are neighboring to a missing disease
spreader node, and the rest, given a dataset on the number of cases. The spread
is described by stochastic differential equations. A perturbation theory
quantifies the impact of the missing spreader on the moments of the number of
cases. Statistical discriminators examine the mid-body or tail-ends of the
probability density function, and search for the disturbance from the missing
spreader. They are tested with computationally synthesized datasets, and
applied to the SARS outbreak and flu pandemic.
| [
{
"version": "v1",
"created": "Fri, 11 Jun 2010 14:33:18 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Sep 2010 13:49:21 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Dec 2010 13:39:48 GMT"
},
{
"version": "v4",
"created": "Thu, 9 Jun 2011 07:09:13 GMT"
}
] | 2011-10-18T00:00:00 | [
[
"Maeno",
"Yoshiharu",
""
]
] | TITLE: Discovery of a missing disease spreader
ABSTRACT: This study presents a method to discover an outbreak of an infectious disease
in a region for which data are missing, but which is at work as a disease
spreader. Node discovery for the spread of an infectious disease is defined as
discriminating between the nodes which are neighboring to a missing disease
spreader node, and the rest, given a dataset on the number of cases. The spread
is described by stochastic differential equations. A perturbation theory
quantifies the impact of the missing spreader on the moments of the number of
cases. Statistical discriminators examine the mid-body or tail-ends of the
probability density function, and search for the disturbance from the missing
spreader. They are tested with computationally synthesized datasets, and
applied to the SARS outbreak and flu pandemic.
|
1110.3569 | Rahmat Widia Sembiring | Rahmat Widia Sembiring, Jasni Mohamad Zain, Abdullah Embong | Dimension Reduction of Health Data Clustering | 10 pages, 9 figures, published at International Journal on New
Computer Architectures and Their Applications (IJNCAA) | International Journal on New Computer Architectures and Their
Applications (IJNCAA), 2011, Vol.1, No.4, 1041-1050 | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current data tends to be more complex than conventional data and needs
dimension reduction. Dimension reduction is important in cluster analysis and
produces data that is smaller in volume while preserving the same analytical results as the
original representation. A clustering process needs data reduction to obtain an
efficient processing time while clustering and mitigate curse of
dimensionality. This paper proposes a model for extracting multidimensional
data clustering of health database. We implemented four dimension reduction
techniques such as Singular Value Decomposition (SVD), Principal Component
Analysis (PCA), Self Organizing Map (SOM) and FastICA. The results show that
the dimension reduction techniques significantly reduce dimensionality, shorten
processing time, and also increase clustering performance on several health datasets.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2011 03:40:07 GMT"
}
] | 2011-10-18T00:00:00 | [
[
"Sembiring",
"Rahmat Widia",
""
],
[
"Zain",
"Jasni Mohamad",
""
],
[
"Embong",
"Abdullah",
""
]
] | TITLE: Dimension Reduction of Health Data Clustering
ABSTRACT: The current data tends to be more complex than conventional data and needs
dimension reduction. Dimension reduction is important in cluster analysis and
produces data that is smaller in volume while preserving the same analytical results as the
original representation. A clustering process needs data reduction to obtain an
efficient processing time while clustering and mitigate curse of
dimensionality. This paper proposes a model for extracting multidimensional
data clustering of health database. We implemented four dimension reduction
techniques such as Singular Value Decomposition (SVD), Principal Component
Analysis (PCA), Self Organizing Map (SOM) and FastICA. The results show that
the dimension reduction techniques significantly reduce dimensionality, shorten
processing time, and also increase clustering performance on several health datasets.
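A rough illustration of the claim, assuming scikit-learn and a synthetic stand-in for a health dataset: cluster the data before and after PCA reduction and compare running time and silhouette score. Silhouette values computed in different feature spaces are only loosely comparable, and this is a sketch rather than the paper's protocol, which also covers SVD, SOM and FastICA.

```python
import time
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# synthetic stand-in for a high-dimensional health dataset
X, _ = make_blobs(n_samples=3000, n_features=60, centers=4, random_state=0)

def cluster(data, label):
    start = time.perf_counter()
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(data)
    elapsed = time.perf_counter() - start
    print(f"{label:>12}: time={elapsed:.3f}s "
          f"silhouette={silhouette_score(data, km.labels_):.3f}")

cluster(X, "original")
cluster(PCA(n_components=5).fit_transform(X), "PCA-reduced")
```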
|
1011.1043 | Xin Liu | Xin Liu and Tsuyoshi Murata | Detecting Communities in Tripartite Hypergraphs | 4 pages, 3 figures | Journal of Computer Science and Technology, vol.26, no.5,
pp.778-791, Sep 2011 | 10.1007/s11390-011-0177-0 | vol.26, no.5, pp.778-791 | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In social tagging systems, also known as folksonomies, users collaboratively
manage tags to annotate resources. Naturally, social tagging systems can be
modeled as a tripartite hypergraph, where there are three different types of
nodes, namely users, resources and tags, and each hyperedge has three end
nodes, connecting a user, a resource and a tag that the user employs to
annotate the resource. Then, how can we automatically detect user, resource and
tag communities from the tripartite hypergraph? In this paper, by turning the
problem into a problem of finding an efficient compression of the hypergraph's
structure, we propose a quality function for measuring the goodness of
partitions of a tripartite hypergraph into communities. Later, we develop a
fast community detection algorithm based on minimizing the quality function. We
explain advantages of our method and validate it by comparing with various
state of the art techniques in a set of synthetic datasets.
| [
{
"version": "v1",
"created": "Thu, 4 Nov 2010 01:24:07 GMT"
}
] | 2011-10-17T00:00:00 | [
[
"Liu",
"Xin",
""
],
[
"Murata",
"Tsuyoshi",
""
]
] | TITLE: Detecting Communities in Tripartite Hypergraphs
ABSTRACT: In social tagging systems, also known as folksonomies, users collaboratively
manage tags to annotate resources. Naturally, social tagging systems can be
modeled as a tripartite hypergraph, where there are three different types of
nodes, namely users, resources and tags, and each hyperedge has three end
nodes, connecting a user, a resource and a tag that the user employs to
annotate the resource. Then, how can we automatically detect user, resource and
tag communities from the tripartite hypergraph? In this paper, by turning the
problem into a problem of finding an efficient compression of the hypergraph's
structure, we propose a quality function for measuring the goodness of
partitions of a tripartite hypergraph into communities. Later, we develop a
fast community detection algorithm based on minimizing the quality function. We
explain advantages of our method and validate it by comparing with various
state of the art techniques in a set of synthetic datasets.
|
1110.2162 | Ruben Sipos | Ruben Sipos, Pannaga Shivaswamy, Thorsten Joachims | Large-Margin Learning of Submodular Summarization Methods | update: improved formatting (figure placement) and algorithm
pseudocode clarity (Fig. 3) | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a supervised learning approach to training
submodular scoring functions for extractive multi-document summarization. By
taking a structured prediction approach, we provide a large-margin method that
directly optimizes a convex relaxation of the desired performance measure. The
learning method applies to all submodular summarization methods, and we
demonstrate its effectiveness for both pairwise as well as coverage-based
scoring functions on multiple datasets. Compared to state-of-the-art functions
that were tuned manually, our method significantly improves performance and
enables high-fidelity models with numbers of parameters well beyond what could
reasonably be tuned by hand.
| [
{
"version": "v1",
"created": "Mon, 10 Oct 2011 19:54:57 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Oct 2011 17:51:20 GMT"
}
] | 2011-10-14T00:00:00 | [
[
"Sipos",
"Ruben",
""
],
[
"Shivaswamy",
"Pannaga",
""
],
[
"Joachims",
"Thorsten",
""
]
] | TITLE: Large-Margin Learning of Submodular Summarization Methods
ABSTRACT: In this paper, we present a supervised learning approach to training
submodular scoring functions for extractive multi-document summarization. By
taking a structured prediction approach, we provide a large-margin method that
directly optimizes a convex relaxation of the desired performance measure. The
learning method applies to all submodular summarization methods, and we
demonstrate its effectiveness for both pairwise as well as coverage-based
scoring functions on multiple datasets. Compared to state-of-the-art functions
that were tuned manually, our method significantly improves performance and
enables high-fidelity models with numbers of parameters well beyond what could
reasonably be tuned by hand.
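Only the inference side of such methods is easy to sketch: greedy maximisation of a coverage-based submodular score over sentences, as below with uniform, made-up word weights. The paper's contribution, learning those weights with a structured large-margin objective, is not shown here.

```python
def coverage_score(selected, weights):
    """Coverage-based submodular objective: total weight of the distinct
    words covered by the selected sentences."""
    covered = set()
    for s in selected:
        covered |= set(s.lower().split())
    return sum(weights.get(w, 0.0) for w in covered)

def greedy_summary(sentences, weights, budget=2):
    chosen = []
    while len(chosen) < budget:
        best = max((s for s in sentences if s not in chosen),
                   key=lambda s: coverage_score(chosen + [s], weights))
        chosen.append(best)
    return chosen

sentences = ["the cat sat on the mat",
             "the dog chased the cat",
             "a dog barked at the mailman"]
weights = {w: 1.0 for s in sentences for w in s.lower().split()}  # uniform for the sketch
print(greedy_summary(sentences, weights))
```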
|
0912.4196 | Tamon Stephen | Cedric Chauve, Utz-Uwe Haus, Tamon Stephen, Vivija P. You | Minimal Conflicting Sets for the Consecutive Ones Property in ancestral
genome reconstruction | 20 pages, 3 figures | J Comput Biol. 2010 Sep;17(9):1167-81 | 10.1089/cmb.2010.0113 | null | q-bio.GN cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A binary matrix has the Consecutive Ones Property (C1P) if its columns can be
ordered in such a way that all 1's on each row are consecutive. A Minimal
Conflicting Set is a set of rows that does not have the C1P, but every proper
subset has the C1P. Such submatrices have been considered in comparative
genomics applications, but very little is known about their combinatorial
structure and efficient algorithms to compute them. We first describe an
algorithm that detects rows that belong to Minimal Conflicting Sets. This
algorithm has a polynomial time complexity when the number of 1's in each row
of the considered matrix is bounded by a constant. Next, we show that the
problem of computing all Minimal Conflicting Sets can be reduced to the joint
generation of all minimal true clauses and maximal false clauses for some
monotone boolean function. We use these methods on simulated data related to
ancestral genome reconstruction to show that computing Minimal Conflicting Set
is useful in discriminating between true positive and false positive ancestral
syntenies. We also study a dataset of yeast genomes and address the reliability
of an ancestral genome proposal of the Saccharomycetaceae yeasts.
| [
{
"version": "v1",
"created": "Mon, 21 Dec 2009 16:03:06 GMT"
}
] | 2011-10-13T00:00:00 | [
[
"Chauve",
"Cedric",
""
],
[
"Haus",
"Utz-Uwe",
""
],
[
"Stephen",
"Tamon",
""
],
[
"You",
"Vivija P.",
""
]
] | TITLE: Minimal Conflicting Sets for the Consecutive Ones Property in ancestral
genome reconstruction
ABSTRACT: A binary matrix has the Consecutive Ones Property (C1P) if its columns can be
ordered in such a way that all 1's on each row are consecutive. A Minimal
Conflicting Set is a set of rows that does not have the C1P, but every proper
subset has the C1P. Such submatrices have been considered in comparative
genomics applications, but very little is known about their combinatorial
structure and efficient algorithms to compute them. We first describe an
algorithm that detects rows that belong to Minimal Conflicting Sets. This
algorithm has a polynomial time complexity when the number of 1's in each row
of the considered matrix is bounded by a constant. Next, we show that the
problem of computing all Minimal Conflicting Sets can be reduced to the joint
generation of all minimal true clauses and maximal false clauses for some
monotone boolean function. We use these methods on simulated data related to
ancestral genome reconstruction to show that computing Minimal Conflicting Set
is useful in discriminating between true positive and false positive ancestral
syntenies. We also study a dataset of yeast genomes and address the reliability
of an ancestral genome proposal of the Saccharomycetaceae yeasts.
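For intuition, here is a brute-force Python sketch, viable only for toy matrices (real implementations rely on PQ-trees or the paper's algorithms): test the C1P by trying column permutations and enumerate Minimal Conflicting Sets by checking row subsets. The example matrix is the classic case of three pairwise-compatible rows that conflict jointly.

```python
from itertools import combinations, permutations

def is_consecutive(row):
    ones = [i for i, v in enumerate(row) if v]
    return not ones or ones[-1] - ones[0] + 1 == len(ones)

def has_c1p(rows):
    """Brute-force C1P check for a small 0/1 matrix given as row tuples."""
    if not rows:
        return True
    n_cols = len(rows[0])
    for perm in permutations(range(n_cols)):
        if all(is_consecutive([row[c] for c in perm]) for row in rows):
            return True
    return False

def minimal_conflicting_sets(rows):
    """Row subsets that are not C1P while every proper subset is (C1P is
    hereditary, so checking all one-row deletions suffices)."""
    mcs = []
    for k in range(2, len(rows) + 1):
        for subset in combinations(range(len(rows)), k):
            sub = [rows[i] for i in subset]
            if not has_c1p(sub) and all(
                    has_c1p([rows[i] for i in subset if i != j]) for j in subset):
                mcs.append(subset)
    return mcs

rows = [(1, 1, 0, 0),
        (0, 1, 1, 0),
        (1, 0, 1, 0),
        (0, 0, 0, 1)]
print(minimal_conflicting_sets(rows))   # expected: [(0, 1, 2)]
```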
|
1110.2626 | Kuruba Usha Rani | K. Usha Rani | Analysis of Heart Diseases Dataset using Neural Network Approach | 8 pages, 2 figures, 1 table; International Journal of Data Mining &
Knowledge Management Process (IJDKP) Vol.1, No.5, September 2011 | null | 10.5121/ijdkp.2011.1501 | null | cs.LG cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the important techniques of Data mining is Classification. Many real
world problems in various fields such as business, science, industry and
medicine can be solved by using classification approach. Neural Networks have
emerged as an important tool for classification. The advantages of Neural
Networks help in the efficient classification of the given data. In this study a
Heart diseases dataset is analyzed using Neural Network approach. To increase
the efficiency of the classification process parallel approach is also adopted
in the training phase.
| [
{
"version": "v1",
"created": "Wed, 12 Oct 2011 10:56:29 GMT"
}
] | 2011-10-13T00:00:00 | [
[
"Rani",
"K. Usha",
""
]
] | TITLE: Analysis of Heart Diseases Dataset using Neural Network Approach
ABSTRACT: One of the important techniques of Data mining is Classification. Many real
world problems in various fields such as business, science, industry and
medicine can be solved by using a classification approach. Neural Networks have
emerged as an important tool for classification. The advantages of Neural
Networks allow for efficient classification of the given data. In this study a
heart diseases dataset is analyzed using a Neural Network approach. To increase
the efficiency of the classification process, a parallel approach is also
adopted in the training phase.
|
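A minimal sketch of the neural-network classification step described above, using scikit-learn; a synthetic table stands in for the heart-disease dataset, and the paper's parallel training scheme is not reproduced.
```python
# Minimal sketch of neural-network classification with scikit-learn. A
# synthetic table stands in for the heart-disease dataset; the parallel
# training phase mentioned in the abstract is not reproduced here.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# Stand-in for a table of patient attributes with a binary disease label.
X, y = make_classification(n_samples=900, n_features=13, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),                        # scale inputs for the MLP
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 3))
```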
1110.2396 | Riccardo Albertoni | Riccardo Albertoni, Monica De Martino | Semantic Technology to Exploit Digital Content Exposed as Linked Data | Published in eChallenges e-2011 Conference Proceedings Paul
Cunningham and Miriam Cunningham (Eds) IIMC International Information
Management Corporation, 2011 ISBN: 978-1-905824-27-4 | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper illustrates the research result of the application of semantic
technology to ease the use and reuse of digital contents exposed as Linked Data
on the web. It focuses on the specific issue of explorative research for the
resource selection: a context dependent semantic similarity assessment is
proposed in order to compare datasets annotated through terminologies exposed
as Linked Data (e.g. habitats, species). Semantic similarity is shown as a
building block technology to sift Linked Data resources. From the application
of semantic similarity, we derived a set of recommendations underlining open
issues in scaling the similarity assessment up to the Web of Data.
| [
{
"version": "v1",
"created": "Tue, 11 Oct 2011 15:20:15 GMT"
}
] | 2011-10-12T00:00:00 | [
[
"Albertoni",
"Riccardo",
""
],
[
"De Martino",
"Monica",
""
]
] | TITLE: Semantic Technology to Exploit Digital Content Exposed as Linked Data
ABSTRACT: The paper illustrates the research result of the application of semantic
technology to ease the use and reuse of digital contents exposed as Linked Data
on the web. It focuses on the specific issue of explorative research for the
resource selection: a context dependent semantic similarity assessment is
proposed in order to compare datasets annotated through terminologies exposed
as Linked Data (e.g. habitats, species). Semantic similarity is shown as a
building block technology to sift Linked Data resources. From the application
of semantic similarity, we derived a set of recommendations underlining open
issues in scaling the similarity assessment up to the Web of Data.
|
0710.0958 | Valerio Lucarini | Valerio Lucarini | Response Theory for Equilibrium and Non-Equilibrium Statistical
Mechanics: Causality and Generalized Kramers-Kronig relations | 22 pages | J. Stat. Phys.,131, 543-558 (2008) | 10.1007/s10955-008-9498-y | null | cond-mat.stat-mech cond-mat.str-el math-ph math.MP nlin.CD physics.flu-dyn | null | We consider the general response theory proposed by Ruelle for describing the
impact of small perturbations to the non-equilibrium steady states resulting
from Axiom A dynamical systems. We show that the causality of the response
functions allows for writing a set of Kramers-Kronig relations for the
corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only
a special class of observable susceptibilities obey Kramers-Kronig relations.
Specific results are provided for arbitrary order harmonic response, which
allows for a very comprehensive Kramers-Kronig analysis and the establishment
of sum rules connecting the asymptotic behavior of the susceptibility to the
short-time response of the system. These results generalize previous findings
on optical Hamiltonian systems and simple mechanical models, and shed light on
the general impact of considering the principle of causality for testing
self-consistency: the described dispersion relations constitute unavoidable
benchmarks for any experimental and model generated dataset. In order to
connect the response theory for equilibrium and non equilibrium systems, we
rewrite the classical results by Kubo so that response functions formally
identical to those proposed by Ruelle, apart from the measure involved in the
phase space integration, are obtained. We briefly discuss how these results,
taking into account the chaotic hypothesis, might be relevant for climate
research. In particular, whereas the fluctuation-dissipation theorem does not
work for non-equilibrium systems, because of the non-equivalence between
internal and external fluctuations, Kramers-Kronig relations might be more
robust tools for the definition of a self-consistent theory of climate change.
| [
{
"version": "v1",
"created": "Thu, 4 Oct 2007 09:14:21 GMT"
}
] | 2011-10-11T00:00:00 | [
[
"Lucarini",
"Valerio",
""
]
] | TITLE: Response Theory for Equilibrium and Non-Equilibrium Statistical
Mechanics: Causality and Generalized Kramers-Kronig relations
ABSTRACT: We consider the general response theory proposed by Ruelle for describing the
impact of small perturbations to the non-equilibrium steady states resulting
from Axiom A dynamical systems. We show that the causality of the response
functions allows for writing a set of Kramers-Kronig relations for the
corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only
a special class of observable susceptibilities obey Kramers-Kronig relations.
Specific results are provided for arbitrary order harmonic response, which
allows for a very comprehensive Kramers-Kronig analysis and the establishment
of sum rules connecting the asymptotic behavior of the susceptibility to the
short-time response of the system. These results generalize previous findings
on optical Hamiltonian systems and simple mechanical models, and shed light on
the general impact of considering the principle of causality for testing
self-consistency: the described dispersion relations constitute unavoidable
benchmarks for any experimental and model generated dataset. In order to
connect the response theory for equilibrium and non equilibrium systems, we
rewrite the classical results by Kubo so that response functions formally
identical to those proposed by Ruelle, apart from the measure involved in the
phase space integration, are obtained. We briefly discuss how these results,
taking into account the chaotic hypothesis, might be relevant for climate
research. In particular, whereas the fluctuation-dissipation theorem does not
work for non-equilibrium systems, because of the non-equivalence between
internal and external fluctuations, Kramers-Kronig relations might be more
robust tools for the definition of a self-consistent theory of climate change.
|
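For reference, the standard first-order (linear) Kramers-Kronig relations, which the work above generalizes to arbitrary-order susceptibilities of non-equilibrium steady states, take the textbook form below (P denotes the Cauchy principal value; this is not the generalized form derived in the paper).
```latex
% Linear Kramers-Kronig relations for a causal susceptibility
% \chi(\omega) = \chi'(\omega) + i\,\chi''(\omega).
\begin{aligned}
\chi'(\omega)  &= \frac{1}{\pi}\, P\!\int_{-\infty}^{\infty}
                  \frac{\chi''(\omega')}{\omega' - \omega}\, d\omega' ,\\
\chi''(\omega) &= -\frac{1}{\pi}\, P\!\int_{-\infty}^{\infty}
                  \frac{\chi'(\omega')}{\omega' - \omega}\, d\omega' .
\end{aligned}
```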
1110.1863 | Stefano Forte | The NNPDF Collaboration: Richard D.Ball, Valerio Bertone, Francesco
Cerutti, Luigi Del Debbio, Stefano Forte, Alberto Guffanti, Jose I.Latorre,
Juan Rojo and Maria Ubiali | Parton distributions: determining probabilities in a space of functions | 11 pages, 8 figures, presented by Stefano Forte at PHYSTAT 2011 (to
be published in the proceedings) | null | null | IFUM-988-FT, TTK-11-48 | hep-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss the statistical properties of parton distributions within the
framework of the NNPDF methodology. We present various tests of statistical
consistency, in particular that the distribution of results does not depend on
the underlying parametrization and that it behaves according to Bayes' theorem
upon the addition of new data. We then study the dependence of results on
consistent or inconsistent datasets and present tools to assess the consistency
of new data. Finally we estimate the relative size of the PDF uncertainty due
to data uncertainties, and that due to the need to infer a functional form from
a finite set of data.
| [
{
"version": "v1",
"created": "Sun, 9 Oct 2011 17:51:23 GMT"
}
] | 2011-10-11T00:00:00 | [
[
"The NNPDF Collaboration",
"",
""
],
[
"Ball",
"Richard D.",
""
],
[
"Bertone",
"Valerio",
""
],
[
"Cerutti",
"Francesco",
""
],
[
"Del Debbio",
"Luigi",
""
],
[
"Forte",
"Stefano",
""
],
[
"Guffanti",
"Alberto",
""
],
[
"Latorre",
"Jose I.",
""
],
[
"Rojo",
"Juan",
""
],
[
"Ubiali",
"Maria",
""
]
] | TITLE: Parton distributions: determining probabilities in a space of functions
ABSTRACT: We discuss the statistical properties of parton distributions within the
framework of the NNPDF methodology. We present various tests of statistical
consistency, in particular that the distribution of results does not depend on
the underlying parametrization and that it behaves according to Bayes' theorem
upon the addition of new data. We then study the dependence of results on
consistent or inconsistent datasets and present tools to assess the consistency
of new data. Finally we estimate the relative size of the PDF uncertainty due
to data uncertainties, and that due to the need to infer a functional form from
a finite set of data.
|
physics/0612222 | Valerio Lucarini | Valerio Lucarini, Robert Danihlik, Ida Kriegerova, Antonio Speranza | Does the Danube exist? Versions of reality given by various regional
climate models and climatological datasets | 25 pages 8 figures, 5 tables | J. Geophys. Res., 112, D13103 (2007) | 10.1029/2006JD008360 | null | physics.ao-ph physics.data-an physics.geo-ph physics.soc-ph | null | We present an intercomparison and verification analysis of several regional
climate models (RCMs) nested into the same run of the same Atmospheric Global
Circulation Model (AGCM) regarding their representation of the statistical
properties of the hydrological balance of the Danube river basin for 1961-1990.
We also consider the datasets produced by the driving AGCM, from the ECMWF and
NCEP-NCAR reanalyses. The hydrological balance is computed by integrating the
precipitation and evaporation fields over the area of interest. Large
discrepancies exist among RCMs for the monthly climatology as well as for the
mean and variability of the annual balances, and only a few datasets are
consistent with the observed discharge values of the Danube at its delta, even
though the driving AGCM itself provides an excellent estimate. Since the considered
approach relies on the mass conservation principle and bypasses the details of
the air-land interface modeling, we propose that the atmospheric components of
RCMs still face difficulties in representing the water balance even on a
relatively large scale. Their reliability on smaller river basins may be even
more problematic. Moreover, since for some models the hydrological balance
estimates obtained with the runoff fields do not agree with those obtained via
precipitation and evaporation, some deficiencies of the land models are also
apparent. The NCEP-NCAR and ERA-40 reanalyses prove to be largely inadequate for
representing the hydrology of the Danube river basin, both for the
reconstruction of the long-term averages and of the seasonal cycle, and cannot
in any sense be used as verification. We suggest that these results should be
carefully considered in the perspective of auditing climate models and
assessing their ability to simulate future climate changes.
| [
{
"version": "v1",
"created": "Fri, 22 Dec 2006 12:02:41 GMT"
}
] | 2011-10-11T00:00:00 | [
[
"Lucarini",
"Valerio",
""
],
[
"Danihlik",
"Robert",
""
],
[
"Kriegerova",
"Ida",
""
],
[
"Speranza",
"Antonio",
""
]
] | TITLE: Does the Danube exist? Versions of reality given by various regional
climate models and climatological datasets
ABSTRACT: We present an intercomparison and verification analysis of several regional
climate models (RCMs) nested into the same run of the same Atmospheric Global
Circulation Model (AGCM) regarding their representation of the statistical
properties of the hydrological balance of the Danube river basin for 1961-1990.
We also consider the datasets produced by the driving AGCM, from the ECMWF and
NCEP-NCAR reanalyses. The hydrological balance is computed by integrating the
precipitation and evaporation fields over the area of interest. Large
discrepancies exist among RCMs for the monthly climatology as well as for the
mean and variability of the annual balances, and only a few datasets are
consistent with the observed discharge values of the Danube at its delta, even
though the driving AGCM itself provides an excellent estimate. Since the considered
approach relies on the mass conservation principle and bypasses the details of
the air-land interface modeling, we propose that the atmospheric components of
RCMs still face difficulties in representing the water balance even on a
relatively large scale. Their reliability on smaller river basins may be even
more problematic. Moreover, since for some models the hydrological balance
estimates obtained with the runoff fields do not agree with those obtained via
precipitation and evaporation, some deficiencies of the land models are also
apparent. The NCEP-NCAR and ERA-40 reanalyses prove to be largely inadequate for
representing the hydrology of the Danube river basin, both for the
reconstruction of the long-term averages and of the seasonal cycle, and cannot
in any sense be used as verification. We suggest that these results should be
carefully considered in the perspective of auditing climate models and
assessing their ability to simulate future climate changes.
|
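A small sketch of the area-integrated hydrological balance P - E used above to compare datasets; the gridded fields, basin mask and grid spacing below are synthetic stand-ins, not the RCM or reanalysis data discussed in the abstract.
```python
# Sketch of an area-integrated hydrological balance P - E over a river basin.
# The precipitation/evaporation arrays, basin mask and grid are hypothetical
# stand-ins for the model and reanalysis fields discussed above.
import numpy as np

R_EARTH = 6.371e6  # Earth radius in metres

def basin_balance(precip, evap, lat, lon, mask):
    """Integrate (P - E) [kg m-2 s-1] over masked grid cells -> kg s-1."""
    dlat = np.deg2rad(abs(lat[1] - lat[0]))
    dlon = np.deg2rad(abs(lon[1] - lon[0]))
    # Spherical cell areas, broadcast along longitude.
    cell_area = (R_EARTH**2 * dlat * dlon *
                 np.cos(np.deg2rad(lat)))[:, None] * np.ones((1, lon.size))
    flux = (precip - evap) * mask            # keep basin cells only
    return float(np.sum(flux * cell_area))

# Tiny synthetic example: a 2 x 3 grid entirely inside the basin.
lat = np.array([45.0, 46.0]); lon = np.array([20.0, 21.0, 22.0])
P = np.full((2, 3), 2e-5); E = np.full((2, 3), 1e-5)
mask = np.ones((2, 3))
print(basin_balance(P, E, lat, lon, mask), "kg/s of net water input")
```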
1110.1513 | Abdul Kadir | Abdul Kadir, Lukito Edi Nugroho, Adhi Susanto, Paulus Insap Santosa | Foliage Plant Retrieval using Polar Fourier Transform, Color Moments and
Vein Features | 13 pages; Signal & Image Processing : An International Journal
(SIPIJ) Vol.2, No.3, September 2011 | null | 10.5121/sipij.2011.2301 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposed a method that combines Polar Fourier Transform, color
moments, and vein features to retrieve leaf images based on a leaf image. The
method is very useful to help people in recognizing foliage plants. Foliage
plants are plants that have various colors and unique patterns in the leaf.
Therefore, the colors and their patterns are information that should be taken
into account in plant identification. To compare the performance of the
retrieval system with other results, the experiments used the Flavia dataset,
which is very popular in plant recognition. The results show that the method gave
better performance than PNN, SVM, and Fourier Transform. The method was also
tested using foliage plants with various colors. The accuracy was 90.80% for 50
kinds of plants.
| [
{
"version": "v1",
"created": "Fri, 7 Oct 2011 13:00:03 GMT"
}
] | 2011-10-10T00:00:00 | [
[
"Kadir",
"Abdul",
""
],
[
"Nugroho",
"Lukito Edi",
""
],
[
"Susanto",
"Adhi",
""
],
[
"Santosa",
"Paulus Insap",
""
]
] | TITLE: Foliage Plant Retrieval using Polar Fourier Transform, Color Moments and
Vein Features
ABSTRACT: This paper proposed a method that combines Polar Fourier Transform, color
moments, and vein features to retrieve leaf images based on a leaf image. The
method is very useful to help people in recognizing foliage plants. Foliage
plants are plants that have various colors and unique patterns in the leaf.
Therefore, the colors and their patterns are information that should be taken
into account in plant identification. To compare the performance of the
retrieval system with other results, the experiments used the Flavia dataset,
which is very popular in plant recognition. The results show that the method gave
better performance than PNN, SVM, and Fourier Transform. The method was also
tested using foliage plants with various colors. The accuracy was 90.80% for 50
kinds of plants.
|
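A sketch of the colour-moment component of such a descriptor (mean, standard deviation and skewness per channel); the leaf image here is random data, and the Polar Fourier Transform and vein features are not shown.
```python
# Colour moments per channel: mean, standard deviation and (cube-root)
# skewness. The input is assumed to be an H x W x 3 array; the other parts
# of the descriptor discussed above are not implemented here.
import numpy as np

def color_moments(image):
    """Return 9 values: (mean, std, skewness) for each of the 3 channels."""
    feats = []
    for ch in range(image.shape[2]):
        x = image[:, :, ch].astype(float).ravel()
        mu = x.mean()
        sigma = x.std()
        third = ((x - mu) ** 3).mean()
        skew = np.sign(third) * abs(third) ** (1.0 / 3.0)  # cube-root skewness
        feats.extend([mu, sigma, skew])
    return np.array(feats)

rng = np.random.default_rng(0)
leaf = rng.integers(0, 256, size=(64, 64, 3))   # stand-in for a leaf image
print(color_moments(leaf).round(2))
```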
1110.1303 | Stamatia Bibi | Makrina Viola Kosti, Sofia Lazaridou, Nikoleta Bourazani, Lefteris
Angelis | Discovering patterns of correlation and similarities in software project
data with the Circos visualization tool | 4th Workshop on Intelligent Techniques in Software Engineering, 5
September 2011 at the European Conference on Machine Learning and Principles
and Practices of Knowledge Discovery in Databases (ECML-PKDD) | null | null | null | cs.SE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Software cost estimation based on multivariate data from completed projects
requires the building of efficient models. These models essentially describe
relations in the data, either on the basis of correlations between variables or
of similarities between the projects. The continuous growth of the amount of
data gathered and the need to perform preliminary analysis in order to discover
patterns able to drive the building of reasonable models, leads the researchers
towards intelligent and time-saving tools which can effectively describe data
and their relationships. The goal of this paper is to suggest an innovative
visualization tool, widely used in bioinformatics, which represents relations
in data in an aesthetic and intelligent way. In order to illustrate the
capabilities of the tool, we use a well known dataset from software engineering
projects.
| [
{
"version": "v1",
"created": "Thu, 6 Oct 2011 15:48:11 GMT"
}
] | 2011-10-07T00:00:00 | [
[
"Kosti",
"Makrina Viola",
""
],
[
"Lazaridou",
"Sofia",
""
],
[
"Bourazani",
"Nikoleta",
""
],
[
"Angelis",
"Lefteris",
""
]
] | TITLE: Discovering patterns of correlation and similarities in software project
data with the Circos visualization tool
ABSTRACT: Software cost estimation based on multivariate data from completed projects
requires the building of efficient models. These models essentially describe
relations in the data, either on the basis of correlations between variables or
of similarities between the projects. The continuous growth of the amount of
data gathered and the need to perform preliminary analysis in order to discover
patterns able to drive the building of reasonable models, leads the researchers
towards intelligent and time-saving tools which can effectively describe data
and their relationships. The goal of this paper is to suggest an innovative
visualization tool, widely used in bioinformatics, which represents relations
in data in an aesthetic and intelligent way. In order to illustrate the
capabilities of the tool, we use a well known dataset from software engineering
projects.
|
1110.0879 | Subhransu Maji | Subhransu Maji | Linearized Additive Classifiers | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the additive model learning literature and adapt a penalized
spline formulation due to Eilers and Marx, to train additive classifiers
efficiently. We also propose two new embeddings based on two classes of
orthogonal bases with orthogonal derivatives, which can also be used to
efficiently learn additive classifiers. This paper follows the popular theme in
the current literature where kernel SVMs are learned much more efficiently
using an approximate embedding and a linear machine. In this paper we show that
spline bases are especially well suited for learning additive models because of
their sparsity structure and the ease of computing the embedding, which enables
one to train these models in an online manner without incurring the memory
overhead of precomputing and storing the embeddings. We show interesting
connections between the B-Spline basis and the histogram intersection kernel
and show that for a
particular choice of regularization and degree of the B-Splines, our proposed
learning algorithm closely approximates the histogram intersection kernel SVM.
This enables one to learn additive models with almost no memory overhead
compared to a fast linear solver, such as LIBLINEAR, while being only 5-6X
slower on average. On two large scale image classification datasets, MNIST and
Daimler Chrysler pedestrians, the proposed additive classifiers are as accurate
as the kernel SVM, while being two orders of magnitude faster to train.
| [
{
"version": "v1",
"created": "Wed, 5 Oct 2011 02:11:38 GMT"
}
] | 2011-10-06T00:00:00 | [
[
"Maji",
"Subhransu",
""
]
] | TITLE: Linearized Additive Classifiers
ABSTRACT: We revisit the additive model learning literature and adapt a penalized
spline formulation due to Eilers and Marx, to train additive classifiers
efficiently. We also propose two new embeddings based on two classes of
orthogonal bases with orthogonal derivatives, which can also be used to
efficiently learn additive classifiers. This paper follows the popular theme in
the current literature where kernel SVMs are learned much more efficiently
using an approximate embedding and a linear machine. In this paper we show that
spline bases are especially well suited for learning additive models because of
their sparsity structure and the ease of computing the embedding, which enables
one to train these models in an online manner without incurring the memory
overhead of precomputing and storing the embeddings. We show interesting
connections between the B-Spline basis and the histogram intersection kernel
and show that for a
particular choice of regularization and degree of the B-Splines, our proposed
learning algorithm closely approximates the histogram intersection kernel SVM.
This enables one to learn additive models with almost no memory overhead
compared to a fast linear solver, such as LIBLINEAR, while being only 5-6X
slower on average. On two large scale image classification datasets, MNIST and
Daimler Chrysler pedestrians, the proposed additive classifiers are as accurate
as the kernel SVM, while being two orders of magnitude faster to train.
|
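A rough analogue of the idea above: map each feature through a sparse B-spline basis and train a fast linear classifier on the embedding. The sketch uses scikit-learn's SplineTransformer (available in recent versions) rather than the paper's penalized formulation, so treat it as an illustration only.
```python
# Spline feature map + linear classifier: each input feature is expanded in a
# sparse B-spline basis and a linear SVM is trained on the embedding. This is
# an illustrative analogue, not the paper's penalized spline formulation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    SplineTransformer(degree=3, n_knots=8),  # per-feature B-spline embedding
    LinearSVC(C=1.0, dual=False),            # additive model = linear in the embedding
)
model.fit(X_tr, y_tr)
print("spline + linear accuracy:", round(model.score(X_te, y_te), 3))
```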
1110.0585 | Jacob Whitehill | Jacob Whitehill and Javier Movellan | Discriminately Decreasing Discriminability with Learned Image Filters | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In machine learning and computer vision, input images are often filtered to
increase data discriminability. In some situations, however, one may wish to
purposely decrease discriminability of one classification task (a "distractor"
task), while simultaneously preserving information relevant to another (the
task-of-interest): For example, it may be important to mask the identity of
persons contained in face images before submitting them to a crowdsourcing site
(e.g., Mechanical Turk) when labeling them for certain facial attributes.
Another example is inter-dataset generalization: when training on a dataset
with a particular covariance structure among multiple attributes, it may be
useful to suppress one attribute while preserving another so that a trained
classifier does not learn spurious correlations between attributes. In this
paper we present an algorithm that finds optimal filters to give high
discriminability to one task while simultaneously giving low discriminability
to a distractor task. We present results showing the effectiveness of the
proposed technique on both simulated data and natural face images.
| [
{
"version": "v1",
"created": "Tue, 4 Oct 2011 06:48:29 GMT"
}
] | 2011-10-05T00:00:00 | [
[
"Whitehill",
"Jacob",
""
],
[
"Movellan",
"Javier",
""
]
] | TITLE: Discriminately Decreasing Discriminability with Learned Image Filters
ABSTRACT: In machine learning and computer vision, input images are often filtered to
increase data discriminability. In some situations, however, one may wish to
purposely decrease discriminability of one classification task (a "distractor"
task), while simultaneously preserving information relevant to another (the
task-of-interest): For example, it may be important to mask the identity of
persons contained in face images before submitting them to a crowdsourcing site
(e.g., Mechanical Turk) when labeling them for certain facial attributes.
Another example is inter-dataset generalization: when training on a dataset
with a particular covariance structure among multiple attributes, it may be
useful to suppress one attribute while preserving another so that a trained
classifier does not learn spurious correlations between attributes. In this
paper we present an algorithm that finds optimal filters to give high
discriminability to one task while simultaneously giving low discriminability
to a distractor task. We present results showing the effectiveness of the
proposed technique on both simulated data and natural face images.
|
1108.0748 | Rathipriya R | R. Rathipriya, K. Thangavel, J. Bagyamani | Binary Particle Swarm Optimization based Biclustering of Web usage Data | null | null | 10.5120/3001-4036 | null | cs.IR cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web mining is the nontrivial process to discover valid, novel, potentially
useful knowledge from web data using the data mining techniques or methods. It
may give information that is useful for improving the services offered by web
portals and information access and retrieval tools. With the rapid development
of biclustering, more researchers have applied the biclustering technique to
different fields in recent years. When the biclustering approach is applied to
web usage data, it automatically captures the hidden browsing patterns in the
form of biclusters. In this work, a swarm intelligence technique is combined
with the biclustering approach to propose an algorithm called Binary Particle
Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The main
objective of this algorithm is to retrieve the globally optimal bicluster from
the web usage data. These biclusters contain relationships between web users
and web pages which are useful for E-Commerce applications like web advertising
and marketing. Experiments are conducted on a real dataset to demonstrate the
efficiency of the proposed algorithm.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2011 05:54:26 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Sep 2011 06:42:45 GMT"
}
] | 2011-10-03T00:00:00 | [
[
"Rathipriya",
"R.",
""
],
[
"Thangavel",
"K.",
""
],
[
"Bagyamani",
"J.",
""
]
] | TITLE: Binary Particle Swarm Optimization based Biclustering of Web usage Data
ABSTRACT: Web mining is the nontrivial process to discover valid, novel, potentially
useful knowledge from web data using the data mining techniques or methods. It
may give information that is useful for improving the services offered by web
portals and information access and retrieval tools. With the rapid development
of biclustering, more researchers have applied the biclustering technique to
different fields in recent years. When the biclustering approach is applied to
web usage data, it automatically captures the hidden browsing patterns in the
form of biclusters. In this work, a swarm intelligence technique is combined
with the biclustering approach to propose an algorithm called Binary Particle
Swarm Optimization (BPSO) based Biclustering for Web Usage Data. The main
objective of this algorithm is to retrieve the globally optimal bicluster from
the web usage data. These biclusters contain relationships between web users
and web pages which are useful for E-Commerce applications like web advertising
and marketing. Experiments are conducted on a real dataset to demonstrate the
efficiency of the proposed algorithm.
|
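A generic binary PSO sketch in the spirit of the abstract above: each particle encodes a row/column selection (a candidate bicluster) and the swarm maximizes a simple fitness. The fitness function and parameters are illustrative, not those used in the paper.
```python
# Generic binary PSO that selects a subset of users and pages (a bicluster)
# maximizing a simple density-style fitness on a random stand-in matrix.
import numpy as np

rng = np.random.default_rng(0)
usage = rng.random((30, 20))             # stand-in user x page matrix
n_bits = usage.shape[0] + usage.shape[1]

def fitness(bits):
    rows, cols = bits[:usage.shape[0]], bits[usage.shape[0]:]
    if rows.sum() == 0 or cols.sum() == 0:
        return -np.inf
    sub = usage[rows.astype(bool)][:, cols.astype(bool)]
    return sub.mean() * np.log1p(sub.size)   # reward dense, large biclusters

def bpso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.integers(0, 2, (n_particles, n_bits))
    v = rng.normal(0, 1, (n_particles, n_bits))
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(v.shape), rng.random(v.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Sigmoid transfer function turns velocities into bit probabilities.
        x = (rng.random(v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
        f = np.array([fitness(p) for p in x])
        better = f > pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()

best, score = bpso()
print("bicluster score:", round(float(score), 3))
```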
1109.6726 | Rathipriya R | R.Rathipriya, K.Thangavel | A Fuzzy Co-Clustering approach for Clickstream Data Pattern | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Web Usage mining is a very important tool to extract the hidden business
intelligence data from large databases. The extracted information provides the
organizations with the ability to produce results more effectively to improve
their businesses and increase sales. Co-clustering is a powerful bipartition
technique which identifies groups of users associated with groups of web
pages. These associations are quantified to reveal the users' interest in the
different web page clusters. In this paper, a Fuzzy Co-Clustering algorithm is
proposed for clickstream data to identify subsets of users with similar
navigational behavior/interest over a subset of web pages of a website.
Targeting these user groups for various promotional activities is an important
aspect of marketing practices. Experiments are conducted on a real dataset to
demonstrate the efficiency of the proposed algorithm. The results and findings
of this algorithm could be used to enhance the marketing strategy for direct
marketing, advertisements for web-based businesses and so on.
| [
{
"version": "v1",
"created": "Fri, 30 Sep 2011 06:45:41 GMT"
}
] | 2011-10-03T00:00:00 | [
[
"Rathipriya",
"R.",
""
],
[
"Thangavel",
"K.",
""
]
] | TITLE: A Fuzzy Co-Clustering approach for Clickstream Data Pattern
ABSTRACT: Web Usage mining is a very important tool to extract the hidden business
intelligence data from large databases. The extracted information provides the
organizations with the ability to produce results more effectively to improve
their businesses and increase sales. Co-clustering is a powerful bipartition
technique which identifies groups of users associated with groups of web
pages. These associations are quantified to reveal the users' interest in the
different web page clusters. In this paper, a Fuzzy Co-Clustering algorithm is
proposed for clickstream data to identify subsets of users with similar
navigational behavior/interest over a subset of web pages of a website.
Targeting these user groups for various promotional activities is an important
aspect of marketing practices. Experiments are conducted on a real dataset to
demonstrate the efficiency of the proposed algorithm. The results and findings
of this algorithm could be used to enhance the marketing strategy for direct
marketing, advertisements for web-based businesses and so on.
|
1109.6881 | Adam Marcus | Adam Marcus, Eugene Wu, David Karger, Samuel Madden, Robert Miller | Human-powered Sorts and Joins | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 1, pp.
13-24 (2011) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowdsourcing markets like Amazon's Mechanical Turk (MTurk) make it possible
to task people with small jobs, such as labeling images or looking up phone
numbers, via a programmatic interface. MTurk tasks for processing datasets with
humans are currently designed with significant reimplementation of common
workflows and ad-hoc selection of parameters such as price to pay per task. We
describe how we have integrated crowds into a declarative workflow engine
called Qurk to reduce the burden on workflow designers. In this paper, we focus
on how to use humans to compare items for sorting and joining data, two of the
most common operations in DBMSs. We describe our basic query interface and the
user interface of the tasks we post to MTurk. We also propose a number of
optimizations, including task batching, replacing pairwise comparisons with
numerical ratings, and pre-filtering tables before joining them, which
dramatically reduce the overall cost of running sorts and joins on the crowd.
In an experiment joining two sets of images, we reduce the overall cost from
$67 in a naive implementation to about $3, without substantially affecting
accuracy or latency. In an end-to-end experiment, we reduced cost by a factor
of 14.5.
| [
{
"version": "v1",
"created": "Fri, 30 Sep 2011 16:24:47 GMT"
}
] | 2011-10-03T00:00:00 | [
[
"Marcus",
"Adam",
""
],
[
"Wu",
"Eugene",
""
],
[
"Karger",
"David",
""
],
[
"Madden",
"Samuel",
""
],
[
"Miller",
"Robert",
""
]
] | TITLE: Human-powered Sorts and Joins
ABSTRACT: Crowdsourcing markets like Amazon's Mechanical Turk (MTurk) make it possible
to task people with small jobs, such as labeling images or looking up phone
numbers, via a programmatic interface. MTurk tasks for processing datasets with
humans are currently designed with significant reimplementation of common
workflows and ad-hoc selection of parameters such as price to pay per task. We
describe how we have integrated crowds into a declarative workflow engine
called Qurk to reduce the burden on workflow designers. In this paper, we focus
on how to use humans to compare items for sorting and joining data, two of the
most common operations in DBMSs. We describe our basic query interface and the
user interface of the tasks we post to MTurk. We also propose a number of
optimizations, including task batching, replacing pairwise comparisons with
numerical ratings, and pre-filtering tables before joining them, which
dramatically reduce the overall cost of running sorts and joins on the crowd.
In an experiment joining two sets of images, we reduce the overall cost from
$67 in a naive implementation to about $3, without substantially affecting
accuracy or latency. In an end-to-end experiment, we reduced cost by a factor
of 14.5.
|
0908.2061 | Sebastian Roch | Sebastien Roch | Sequence-Length Requirement of Distance-Based Phylogeny Reconstruction:
Breaking the Polynomial Barrier | null | null | null | null | math.PR cs.CE cs.DS math.ST q-bio.PE q-bio.QM stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new distance-based phylogeny reconstruction technique which
provably achieves, at sufficiently short branch lengths, a polylogarithmic
sequence-length requirement -- improving significantly over previous polynomial
bounds for distance-based methods. The technique is based on an averaging
procedure that implicitly reconstructs ancestral sequences.
By the same token, we extend previous results on phase transitions in
phylogeny reconstruction to general time-reversible models. More precisely, we
show that in the so-called Kesten-Stigum zone (roughly, a region of the
parameter space where ancestral sequences are well approximated by ``linear
combinations'' of the observed sequences) sequences of length $\poly(\log n)$
suffice for reconstruction when branch lengths are discretized. Here $n$ is the
number of extant species.
Our results challenge, to some extent, the conventional wisdom that estimates
of evolutionary distances alone carry significantly less information about
phylogenies than full sequence datasets.
| [
{
"version": "v1",
"created": "Fri, 14 Aug 2009 13:20:44 GMT"
}
] | 2011-09-30T00:00:00 | [
[
"Roch",
"Sebastien",
""
]
] | TITLE: Sequence-Length Requirement of Distance-Based Phylogeny Reconstruction:
Breaking the Polynomial Barrier
ABSTRACT: We introduce a new distance-based phylogeny reconstruction technique which
provably achieves, at sufficiently short branch lengths, a polylogarithmic
sequence-length requirement -- improving significantly over previous polynomial
bounds for distance-based methods. The technique is based on an averaging
procedure that implicitly reconstructs ancestral sequences.
By the same token, we extend previous results on phase transitions in
phylogeny reconstruction to general time-reversible models. More precisely, we
show that in the so-called Kesten-Stigum zone (roughly, a region of the
parameter space where ancestral sequences are well approximated by ``linear
combinations'' of the observed sequences) sequences of length $\poly(\log n)$
suffice for reconstruction when branch lengths are discretized. Here $n$ is the
number of extant species.
Our results challenge, to some extent, the conventional wisdom that estimates
of evolutionary distances alone carry significantly less information about
phylogenies than full sequence datasets.
|
1108.5781 | Sebastian Roch | Sebastien Roch | Phase Transition in Distance-Based Phylogeny Reconstruction | null | null | null | null | math.PR cs.CE cs.DS math.ST q-bio.PE stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new distance-based phylogeny reconstruction technique which
provably achieves, at sufficiently short branch lengths, a logarithmic
sequence-length requirement---improving significantly over previous polynomial
bounds for distance-based methods and matching existing results for general
methods. The technique is based on an averaging procedure that implicitly
reconstructs ancestral sequences.
By the same token, we extend previous results on phase transitions in
phylogeny reconstruction to general time-reversible models. More precisely, we
show that in the so-called Kesten-Stigum zone (roughly, a region of the
parameter space where ancestral sequences are well approximated by "linear
combinations" of the observed sequences) sequences of length $O(\log n)$
suffice for reconstruction when branch lengths are discretized. Here $n$ is the
number of extant species.
Our results challenge, to some extent, the conventional wisdom that estimates
of evolutionary distances alone carry significantly less information about
phylogenies than full sequence datasets.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2011 23:59:24 GMT"
}
] | 2011-09-30T00:00:00 | [
[
"Roch",
"Sebastien",
""
]
] | TITLE: Phase Transition in Distance-Based Phylogeny Reconstruction
ABSTRACT: We introduce a new distance-based phylogeny reconstruction technique which
provably achieves, at sufficiently short branch lengths, a logarithmic
sequence-length requirement---improving significantly over previous polynomial
bounds for distance-based methods and matching existing results for general
methods. The technique is based on an averaging procedure that implicitly
reconstructs ancestral sequences.
By the same token, we extend previous results on phase transitions in
phylogeny reconstruction to general time-reversible models. More precisely, we
show that in the so-called Kesten-Stigum zone (roughly, a region of the
parameter space where ancestral sequences are well approximated by "linear
combinations" of the observed sequences) sequences of length $O(\log n)$
suffice for reconstruction when branch lengths are discretized. Here $n$ is the
number of extant species.
Our results challenge, to some extent, the conventional wisdom that estimates
of evolutionary distances alone carry significantly less information about
phylogenies than full sequence datasets.
|
1109.5286 | Dimitrios Giannakis | Peter Schwander, Chun Hong Yoon, Abbas Ourmazd, and Dimitrios
Giannakis | The symmetries of image formation by scattering. II. Applications | 12 pages, 47 references, 6 figures, 5 tables. Movies available at
http://www.cims.nyu.edu/~dimitris | null | null | null | physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that the symmetries of image formation by scattering enable
graph-theoretic manifold-embedding techniques to extract structural and timing
information from simulated and experimental snapshots at extremely low signal.
The approach constitutes a physically-based, computationally efficient, and
noise-robust route to analyzing the large and varied datasets generated by
existing and emerging methods for studying structure and dynamics by
scattering. We demonstrate three-dimensional structure recovery from X-ray
diffraction and cryo-electron microscope image snapshots of unknown
orientation, the latter at 12 times lower dose than currently in use. We also
show that ultra-low-signal, random sightings of dynamically evolving systems
can be sequenced into high quality movies to reveal their evolution. Our
approach offers a route to recovering timing information in time-resolved
experiments, and extracting 3D movies from two-dimensional random sightings of
dynamic systems.
| [
{
"version": "v1",
"created": "Sat, 24 Sep 2011 16:43:04 GMT"
}
] | 2011-09-27T00:00:00 | [
[
"Schwander",
"Peter",
""
],
[
"Yoon",
"Chun Hong",
""
],
[
"Ourmazd",
"Abbas",
""
],
[
"Giannakis",
"Dimitrios",
""
]
] | TITLE: The symmetries of image formation by scattering. II. Applications
ABSTRACT: We show that the symmetries of image formation by scattering enable
graph-theoretic manifold-embedding techniques to extract structural and timing
information from simulated and experimental snapshots at extremely low signal.
The approach constitutes a physically-based, computationally efficient, and
noise-robust route to analyzing the large and varied datasets generated by
existing and emerging methods for studying structure and dynamics by
scattering. We demonstrate three-dimensional structure recovery from X-ray
diffraction and cryo-electron microscope image snapshots of unknown
orientation, the latter at 12 times lower dose than currently in use. We also
show that ultra-low-signal, random sightings of dynamically evolving systems
can be sequenced into high quality movies to reveal their evolution. Our
approach offers a route to recovering timing information in time-resolved
experiments, and extracting 3D movies from two-dimensional random sightings of
dynamic systems.
|
1009.3240 | Hugh Brendan McMahan | H. Brendan McMahan | A Unified View of Regularized Dual Averaging and Mirror Descent with
Implicit Updates | Extensively updated version of earlier draft with new analysis
including a general treatment of composite objectives and experiments. Also
fixes a small bug in some of one of the proofs in the early version | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study three families of online convex optimization algorithms:
follow-the-proximally-regularized-leader (FTRL-Proximal), regularized dual
averaging (RDA), and composite-objective mirror descent. We first prove
equivalence theorems that show all of these algorithms are instantiations of a
general FTRL update. This provides theoretical insight on previous experimental
observations. In particular, even though the FOBOS composite mirror descent
algorithm handles L1 regularization explicitly, it has been observed that RDA
is even more effective at producing sparsity. Our results demonstrate that
FOBOS uses subgradient approximations to the L1 penalty from previous rounds,
leading to less sparsity than RDA, which handles the cumulative penalty in
closed form. The FTRL-Proximal algorithm can be seen as a hybrid of these two,
and outperforms both on a large, real-world dataset.
Our second contribution is a unified analysis which produces regret bounds
that match (up to logarithmic terms) or improve the best previously known
bounds. This analysis also extends these algorithms in two important ways: we
support a more general type of composite objective and we analyze implicit
updates, which replace the subgradient approximation of the current loss
function with an exact optimization.
| [
{
"version": "v1",
"created": "Thu, 16 Sep 2010 18:40:32 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2011 18:38:13 GMT"
}
] | 2011-09-21T00:00:00 | [
[
"McMahan",
"H. Brendan",
""
]
] | TITLE: A Unified View of Regularized Dual Averaging and Mirror Descent with
Implicit Updates
ABSTRACT: We study three families of online convex optimization algorithms:
follow-the-proximally-regularized-leader (FTRL-Proximal), regularized dual
averaging (RDA), and composite-objective mirror descent. We first prove
equivalence theorems that show all of these algorithms are instantiations of a
general FTRL update. This provides theoretical insight on previous experimental
observations. In particular, even though the FOBOS composite mirror descent
algorithm handles L1 regularization explicitly, it has been observed that RDA
is even more effective at producing sparsity. Our results demonstrate that
FOBOS uses subgradient approximations to the L1 penalty from previous rounds,
leading to less sparsity than RDA, which handles the cumulative penalty in
closed form. The FTRL-Proximal algorithm can be seen as a hybrid of these two,
and outperforms both on a large, real-world dataset.
Our second contribution is a unified analysis which produces regret bounds
that match (up to logarithmic terms) or improve the best previously known
bounds. This analysis also extends these algorithms in two important ways: we
support a more general type of composite objective and we analyze implicit
updates, which replace the subgradient approximation of the current loss
function with an exact optimization.
|
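A sketch of the per-coordinate FTRL-Proximal update with L1 regularization, in the closed form popularized in the author's later follow-up work, applied here to logistic regression on synthetic data; the hyperparameters are illustrative, not taken from the paper.
```python
# FTRL-Proximal with L1: accumulate gradients (with a proximal correction)
# in z, and recover weights via a per-coordinate soft-threshold closed form.
import numpy as np

def closed_form_weights(z, n, alpha, beta, l1, l2):
    # Coordinates with |z| <= l1 stay exactly at zero (sparsity).
    active = np.abs(z) > l1
    return np.where(active,
                    -(z - np.sign(z) * l1) / ((beta + np.sqrt(n)) / alpha + l2),
                    0.0)

def ftrl_proximal(X, y, alpha=0.1, beta=1.0, l1=1.0, l2=1.0):
    d = X.shape[1]
    z, n = np.zeros(d), np.zeros(d)
    for x_t, y_t in zip(X, y):
        w = closed_form_weights(z, n, alpha, beta, l1, l2)
        p = 1.0 / (1.0 + np.exp(-x_t @ w))   # predicted probability
        g = (p - y_t) * x_t                  # gradient of the logistic loss
        sigma = (np.sqrt(n + g * g) - np.sqrt(n)) / alpha
        z += g - sigma * w                   # proximal correction term
        n += g * g
    return closed_form_weights(z, n, alpha, beta, l1, l2)

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 50))
true_w = np.zeros(50); true_w[:5] = 2.0      # only 5 informative coordinates
y = (rng.random(5000) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

w = ftrl_proximal(X, y)
print("nonzero weights:", int(np.count_nonzero(w)), "of", w.size)
```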
1109.3650 | Rohan Agrawal | Rohan Agrawal | Bi-Objective Community Detection (BOCD) in Networks using Genetic
Algorithm | 11 pages, 3 Figures, 3 Tables. arXiv admin note: substantial text
overlap with arXiv:0906.0612 | null | 10.1007/978-3-642-22606-9_5 | null | cs.SI cs.AI cs.NE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A lot of research effort has been put into community detection from all
corners of academic interest such as physics, mathematics and computer science.
In this paper I have proposed a Bi-Objective Genetic Algorithm for community
detection which maximizes modularity and community score. Then the results
obtained for both benchmark and real life data sets are compared with other
algorithms using the modularity and NMI performance metrics. The results show
that the BOCD algorithm is capable of successfully detecting community
structure in both real life and synthetic datasets, as well as improving upon
the performance of previous techniques.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2011 15:48:29 GMT"
}
] | 2011-09-19T00:00:00 | [
[
"Agrawal",
"Rohan",
""
]
] | TITLE: Bi-Objective Community Detection (BOCD) in Networks using Genetic
Algorithm
ABSTRACT: A lot of research effort has been put into community detection from all
corners of academic interest such as physics, mathematics and computer science.
In this paper I have proposed a Bi-Objective Genetic Algorithm for community
detection which maximizes modularity and community score. Then the results
obtained for both benchmark and real life data sets are compared with other
algorithms using the modularity and NMI performance metrics. The results show
that the BOCD algorithm is capable of successfully detecting community
structure in both real life and synthetic datasets, as well as improving upon
the performance of previous techniques.
|
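The modularity objective that BOCD maximizes can be evaluated directly; the sketch below scores a candidate partition of the classic karate-club graph with NetworkX. The genetic-algorithm search itself is not reproduced.
```python
# Score a candidate community partition with the modularity objective, here
# using a greedy baseline partition on the karate-club graph; any partition
# produced by a GA individual could be scored the same way.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()

partition = greedy_modularity_communities(G)      # candidate partition
print("communities:", len(partition))
print("modularity :", round(modularity(G, partition), 3))
```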
1109.3138 | Massimiliano Dal Mas | Massimiliano Dal Mas | Folksodriven Structure Network | 4 pages, 2 figures; for details see: http://www.maxdalmas.com | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays folksonomy is used as a system derived from user-generated
electronic tags or keywords that annotate and describe online content. But it
is not a classification system as an ontology. To consider it as a
classification system it would be necessary to share a representation of
contexts by all the users. This paper is proposing the use of folksonomies and
network theory to devise a new concept: a "Folksodriven Structure Network" to
represent folksonomies. This paper proposes and analyzes the network structure
of Folksodriven tags, thought of as folksonomy tag suggestions for the user, on
a dataset built on chosen websites. It is observed that the Folksodriven
Network has relatively low path lengths when checked with classic network
measures (clustering coefficient). Experimental results show that it can facilitate
serendipitous discovery of content among users. Neat examples and clear
formulas can show how a "Folksodriven Structure Network" can be used to tackle
ontology mapping challenges.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2011 17:06:21 GMT"
}
] | 2011-09-15T00:00:00 | [
[
"Mas",
"Massimiliano Dal",
""
]
] | TITLE: Folksodriven Structure Network
ABSTRACT: Nowadays folksonomy is used as a system derived from user-generated
electronic tags or keywords that annotate and describe online content. But it
is not a classification system as an ontology. To consider it as a
classification system it would be necessary to share a representation of
contexts by all the users. This paper is proposing the use of folksonomies and
network theory to devise a new concept: a "Folksodriven Structure Network" to
represent folksonomies. This paper proposes and analyzes the network structure
of Folksodriven tags, thought of as folksonomy tag suggestions for the user, on
a dataset built on chosen websites. It is observed that the Folksodriven
Network has relatively low path lengths when checked with classic network
measures (clustering coefficient). Experimental results show that it can facilitate
serendipitous discovery of content among users. Neat examples and clear
formulas can show how a "Folksodriven Structure Network" can be used to tackle
ontology mapping challenges.
|
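A sketch of the kind of check described above: build a small tag co-occurrence network and compare its clustering coefficient and path length with a random graph of the same size and density; the tag sets are invented for illustration.
```python
# Tag co-occurrence network: tags that annotate the same item get an edge.
# Compare its clustering coefficient (and path length) with a random graph
# of the same size and density.
import networkx as nx
from itertools import combinations

tagged_items = [
    {"semantic-web", "folksonomy", "tags"},
    {"folksonomy", "tags", "ontology"},
    {"ontology", "rdf", "sparql"},
    {"rdf", "sparql", "linked-data"},
    {"linked-data", "semantic-web"},
]

G = nx.Graph()
for tags in tagged_items:                 # co-occurring tags get an edge
    G.add_edges_from(combinations(sorted(tags), 2))

R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)

print("tag network : C =", round(nx.average_clustering(G), 3),
      "L =", round(nx.average_shortest_path_length(G), 3))
print("random graph: C =", round(nx.average_clustering(R), 3))
```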
1109.2388 | Emre Akbas | Emre Akbas, Bernard Ghanem, Narendra Ahuja | MIS-Boost: Multiple Instance Selection Boosting | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a new multiple instance learning (MIL) method,
called MIS-Boost, which learns discriminative instance prototypes by explicit
instance selection in a boosting framework. Unlike previous instance selection
based MIL methods, we do not restrict the prototypes to a discrete set of
training instances but allow them to take arbitrary values in the instance
feature space. We also do not restrict the total number of prototypes and the
number of selected-instances per bag; these quantities are completely
data-driven. We show that MIS-Boost outperforms state-of-the-art MIL methods on
a number of benchmark datasets. We also apply MIS-Boost to large-scale image
classification, where we show that the automatically selected prototypes map to
visually meaningful image regions.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2011 07:31:34 GMT"
}
] | 2011-09-13T00:00:00 | [
[
"Akbas",
"Emre",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Ahuja",
"Narendra",
""
]
] | TITLE: MIS-Boost: Multiple Instance Selection Boosting
ABSTRACT: In this paper, we present a new multiple instance learning (MIL) method,
called MIS-Boost, which learns discriminative instance prototypes by explicit
instance selection in a boosting framework. Unlike previous instance selection
based MIL methods, we do not restrict the prototypes to a discrete set of
training instances but allow them to take arbitrary values in the instance
feature space. We also do not restrict the total number of prototypes and the
number of selected-instances per bag; these quantities are completely
data-driven. We show that MIS-Boost outperforms state-of-the-art MIL methods on
a number of benchmark datasets. We also apply MIS-Boost to large-scale image
classification, where we show that the automatically selected prototypes map to
visually meaningful image regions.
|
1008.1635 | Roberto Alamino | Roberto C. Alamino | A Bayesian Foundation for Physical Theories | 33 pages, 2 figures | null | null | null | physics.data-an gr-qc physics.hist-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian probability theory is used as a framework to develop a formalism for
the scientific method based on principles of inductive reasoning. The formalism
allows for precise definitions of the key concepts in theories of physics and
also leads to a well-defined procedure to select one or more theories among a
family of (well-defined) candidates by ranking them according to their
posterior probability distributions, which result from Bayes's theorem by
incorporating into an initial prior the information extracted from a dataset,
ultimately defined by experimental evidence. Examples with different levels of
complexity are given and three main applications to basic cosmological
questions are analysed: (i) the typicality of human observers, (ii) the
multiverse hypothesis and, very briefly, a few observations about (iii) the
anthropic principle. Finally, it is demonstrated that this formulation can
address problems that were out of the scope of scientific research until now by
presenting the isolated worlds problem and its resolution via the presented
framework.
| [
{
"version": "v1",
"created": "Tue, 10 Aug 2010 05:25:31 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Nov 2010 13:48:21 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Sep 2011 20:02:33 GMT"
}
] | 2011-09-12T00:00:00 | [
[
"Alamino",
"Roberto C.",
""
]
] | TITLE: A Bayesian Foundation for Physical Theories
ABSTRACT: Bayesian probability theory is used as a framework to develop a formalism for
the scientific method based on principles of inductive reasoning. The formalism
allows for precise definitions of the key concepts in theories of physics and
also leads to a well-defined procedure to select one or more theories among a
family of (well-defined) candidates by ranking them according to their
posterior probability distributions, which result from Bayes's theorem by
incorporating into an initial prior the information extracted from a dataset,
ultimately defined by experimental evidence. Examples with different levels of
complexity are given and three main applications to basic cosmological
questions are analysed: (i) the typicality of human observers, (ii) the
multiverse hypothesis and, very briefly, a few observations about (iii) the
anthropic principle. Finally, it is demonstrated that this formulation can
address problems that were out of the scope of scientific research until now by
presenting the isolated worlds problem and its resolution via the presented
framework.
|
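A tiny numerical illustration of ranking candidate theories by posterior probability via Bayes' theorem; the two "theories" are simple coin-bias hypotheses and the dataset is a made-up outcome sequence.
```python
# Rank candidate theories by posterior probability: posterior is proportional
# to prior times likelihood of the observed dataset, then normalized.
import numpy as np

theories = {"fair coin": 0.5, "biased coin": 0.8}   # P(heads | theory)
prior = {"fair coin": 0.5, "biased coin": 0.5}
data = [1, 1, 0, 1, 1, 1, 0, 1]                      # 1 = heads

posterior = {}
for name, p in theories.items():
    likelihood = np.prod([p if d else 1 - p for d in data])
    posterior[name] = prior[name] * likelihood
total = sum(posterior.values())
posterior = {name: value / total for name, value in posterior.items()}

print(posterior)   # the theory with the higher posterior is preferred
```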
1109.2047 | N. V. Chawla | N. V. Chawla, Grigoris Karakoulas | Learning From Labeled And Unlabeled Data: An Empirical Study Across
Techniques And Domains | null | Journal Of Artificial Intelligence Research, Volume 23, pages
331-366, 2005 | 10.1613/jair.1509 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been increased interest in devising learning techniques that
combine unlabeled data with labeled data, i.e. semi-supervised learning.
However, to the best of our knowledge, no study has been performed across
various techniques and different types and amounts of labeled and unlabeled
data. Moreover, most of the published work on semi-supervised learning
techniques assumes that the labeled and unlabeled data come from the same
distribution. It is possible for the labeling process to be associated with a
selection bias such that the distributions of data points in the labeled and
unlabeled sets are different. Not correcting for such bias can result in biased
function approximation with potentially poor performance. In this paper, we
present an empirical study of various semi-supervised learning techniques on a
variety of datasets. We attempt to answer various questions such as the effect
of independence or relevance amongst features, the effect of the size of the
labeled and unlabeled sets and the effect of noise. We also investigate the
impact of sample-selection bias on the semi-supervised learning techniques
under study and implement a bivariate probit technique particularly designed to
correct for such bias.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 15:56:58 GMT"
}
] | 2011-09-12T00:00:00 | [
[
"Chawla",
"N. V.",
""
],
[
"Karakoulas",
"Grigoris",
""
]
] | TITLE: Learning From Labeled And Unlabeled Data: An Empirical Study Across
Techniques And Domains
ABSTRACT: There has been increased interest in devising learning techniques that
combine unlabeled data with labeled data, i.e. semi-supervised learning.
However, to the best of our knowledge, no study has been performed across
various techniques and different types and amounts of labeled and unlabeled
data. Moreover, most of the published work on semi-supervised learning
techniques assumes that the labeled and unlabeled data come from the same
distribution. It is possible for the labeling process to be associated with a
selection bias such that the distributions of data points in the labeled and
unlabeled sets are different. Not correcting for such bias can result in biased
function approximation with potentially poor performance. In this paper, we
present an empirical study of various semi-supervised learning techniques on a
variety of datasets. We attempt to answer various questions such as the effect
of independence or relevance amongst features, the effect of the size of the
labeled and unlabeled sets and the effect of noise. We also investigate the
impact of sample-selection bias on the semi-supervised learning techniques
under study and implement a bivariate probit technique particularly designed to
correct for such bias.
|
1109.1579 | Benjmain Moseley | Alina Ene, Sungjin Im, Benjamin Moseley | Fast Clustering using MapReduce | Accepted to KDD 2011 | null | null | null | cs.DC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering problems have numerous applications and are becoming more
challenging as the size of the data increases. In this paper, we consider
designing clustering algorithms that can be used in MapReduce, the most popular
programming environment for processing large datasets. We focus on the
practical and popular clustering problems, $k$-center and $k$-median. We
develop fast clustering algorithms with constant factor approximation
guarantees. From a theoretical perspective, we give the first analysis that
shows several clustering algorithms are in $\mathcal{MRC}^0$, a theoretical
MapReduce class introduced by Karloff et al. \cite{KarloffSV10}. Our algorithms
use sampling to decrease the data size and they run a time consuming clustering
algorithm such as local search or Lloyd's algorithm on the resulting data set.
Our algorithms have sufficient flexibility to be used in practice since they
run in a constant number of MapReduce rounds. We complement these results by
performing experiments using our algorithms. We compare the empirical
performance of our algorithms to several sequential and parallel algorithms for
the $k$-median problem. The experiments show that our algorithms' solutions are
similar to or better than the other algorithms' solutions. Furthermore, on data
sets that are sufficiently large, our algorithms are faster than the other
parallel algorithms that we tested.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2011 21:10:36 GMT"
}
] | 2011-09-09T00:00:00 | [
[
"Ene",
"Alina",
""
],
[
"Im",
"Sungjin",
""
],
[
"Moseley",
"Benjamin",
""
]
] | TITLE: Fast Clustering using MapReduce
ABSTRACT: Clustering problems have numerous applications and are becoming more
challenging as the size of the data increases. In this paper, we consider
designing clustering algorithms that can be used in MapReduce, the most popular
programming environment for processing large datasets. We focus on the
practical and popular clustering problems, $k$-center and $k$-median. We
develop fast clustering algorithms with constant factor approximation
guarantees. From a theoretical perspective, we give the first analysis that
shows several clustering algorithms are in $\mathcal{MRC}^0$, a theoretical
MapReduce class introduced by Karloff et al. \cite{KarloffSV10}. Our algorithms
use sampling to decrease the data size and they run a time consuming clustering
algorithm such as local search or Lloyd's algorithm on the resulting data set.
Our algorithms have sufficient flexibility to be used in practice since they
run in a constant number of MapReduce rounds. We complement these results by
performing experiments using our algorithms. We compare the empirical
performance of our algorithms to several sequential and parallel algorithms for
the $k$-median problem. The experiments show that our algorithms' solutions are
similar to or better than the other algorithms' solutions. Furthermore, on data
sets that are sufficiently large, our algorithms are faster than the other
parallel algorithms that we tested.
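
The core recipe described above (sample to shrink the data, run an expensive clustering algorithm on the sample, then make one cheap pass to assign every point) can be sketched on a single machine as follows. The sampling fraction and the use of Lloyd's algorithm via scikit-learn's KMeans are illustrative assumptions; the MapReduce round structure and the approximation analysis are not reproduced.

```python
# Single-machine sketch of "sample, cluster the sample, assign everything":
# the sampling fraction and the use of Lloyd's algorithm (scikit-learn KMeans)
# are illustrative; the MapReduce round structure is not modeled.
import numpy as np
from sklearn.cluster import KMeans

def sample_then_cluster(X, k, sample_frac=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    m = min(n, max(k, int(sample_frac * n)))
    idx = rng.choice(n, size=m, replace=False)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
    labels = km.predict(X)  # one cheap pass assigns every point to a center
    return km.cluster_centers_, labels
```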
|
1109.1664 | Daniel Roggen | Daniel Roggen, Martin Wirz, Gerhard Tr\"oster, Dirk Helbing | Recognition of Crowd Behavior from Mobile Sensors with Pattern Analysis
and Graph Clustering Methods | null | Networks and Heterogenous Media, 6(3), 2011, pages 521-544 | 10.3934/nhm.2011.6.521 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile on-body sensing has distinct advantages for the analysis and
understanding of crowd dynamics: sensing is not geographically restricted to a
specific instrumented area, mobile phones offer on-body sensing and they are
already deployed on a large scale, and the rich sets of sensors they contain
allow one to characterize the behavior of users through pattern recognition
techniques.
In this paper we present a methodological framework for the machine
recognition of crowd behavior from on-body sensors, such as those in mobile
phones. The recognition of crowd behaviors opens the way to the acquisition of
large-scale datasets for the analysis and understanding of crowd dynamics. It
has also practical safety applications by providing improved crowd situational
awareness in cases of emergency.
The framework comprises: behavioral recognition with the user's mobile
device, pairwise analyses of the activity relatedness of two users, and graph
clustering in order to uncover globally which users participate in a given
crowd behavior. We illustrate this framework for the identification of groups
of persons walking, using empirically collected data.
We discuss the challenges and research avenues for theoretical and applied
mathematics arising from the mobile sensing of crowd behaviors.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2011 09:06:04 GMT"
}
] | 2011-09-09T00:00:00 | [
[
"Roggen",
"Daniel",
""
],
[
"Wirz",
"Martin",
""
],
[
"Tröster",
"Gerhard",
""
],
[
"Helbing",
"Dirk",
""
]
] | TITLE: Recognition of Crowd Behavior from Mobile Sensors with Pattern Analysis
and Graph Clustering Methods
ABSTRACT: Mobile on-body sensing has distinct advantages for the analysis and
understanding of crowd dynamics: sensing is not geographically restricted to a
specific instrumented area, mobile phones offer on-body sensing and they are
already deployed on a large scale, and the rich sets of sensors they contain
allow one to characterize the behavior of users through pattern recognition
techniques.
In this paper we present a methodological framework for the machine
recognition of crowd behavior from on-body sensors, such as those in mobile
phones. The recognition of crowd behaviors opens the way to the acquisition of
large-scale datasets for the analysis and understanding of crowd dynamics. It
has also practical safety applications by providing improved crowd situational
awareness in cases of emergency.
The framework comprises: behavioral recognition with the user's mobile
device, pairwise analyses of the activity relatedness of two users, and graph
clustering in order to uncover globally which users participate in a given
crowd behavior. We illustrate this framework for the identification of groups
of persons walking, using empirically collected data.
We discuss the challenges and research avenues for theoretical and applied
mathematics arising from the mobile sensing of crowd behaviors.
|
1109.1068 | Karteeka Pavan Kanadam | K. Karteeka Pavan, Allam Appa Rao, A. V. Dattatreya Rao | An Automatic Clustering Technique for Optimal Clusters | 12 pages, 5 figures, 2 tables | International Journal of Computer Science Engineering and
Applications, Vol., No.4, 2011, pp 133-144 | 10.5121/ijcsea.2011.1412 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a simple, automatic and efficient clustering algorithm,
namely, Automatic Merging for Optimal Clusters (AMOC) which aims to generate
nearly optimal clusters for the given datasets automatically. The AMOC is an
extension to standard k-means with a two phase iterative procedure combining
certain validation techniques in order to find optimal clusters with automation
of merging of clusters. Experiments on both synthetic and real data have proved
that the proposed algorithm finds nearly optimal clustering structures in terms
of number of clusters, compactness and separation.
| [
{
"version": "v1",
"created": "Tue, 6 Sep 2011 05:34:28 GMT"
}
] | 2011-09-07T00:00:00 | [
[
"Pavan",
"K. Karteeka",
""
],
[
"Rao",
"Allam Appa",
""
],
[
"Rao",
"A. V. Dattatreya",
""
]
] | TITLE: An Automatic Clustering Technique for Optimal Clusters
ABSTRACT: This paper proposes a simple, automatic and efficient clustering algorithm,
namely, Automatic Merging for Optimal Clusters (AMOC) which aims to generate
nearly optimal clusters for the given datasets automatically. The AMOC is an
extension to standard k-means with a two phase iterative procedure combining
certain validation techniques in order to find optimal clusters with automation
of merging of clusters. Experiments on both synthetic and real data have proved
that the proposed algorithm finds nearly optimal clustering structures in terms
of number of clusters, compactness and separation.
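
The abstract does not spell out AMOC's merging rule or validity criteria, so the sketch below only illustrates the general pattern of over-clustering with k-means and then merging the closest pair of clusters while a validity index improves; the silhouette criterion, the starting value of k and the merging rule are all assumptions rather than the exact AMOC procedure.

```python
# Illustrative "over-cluster then merge" sketch: start k-means with more
# clusters than needed and repeatedly merge the closest pair of centroids,
# keeping the partition with the best silhouette score. The criterion, the
# starting k and the merging rule are assumptions, not the exact AMOC steps.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def kmeans_with_merging(X, k_max=10, seed=0):
    labels = KMeans(n_clusters=k_max, n_init=10, random_state=seed).fit_predict(X)
    best_labels, best_score = labels, silhouette_score(X, labels)
    while len(np.unique(labels)) > 2:
        ids = np.unique(labels)
        centers = np.array([X[labels == c].mean(axis=0) for c in ids])
        dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
        np.fill_diagonal(dist, np.inf)
        i, j = np.unravel_index(dist.argmin(), dist.shape)    # closest pair
        labels = np.where(labels == ids[j], ids[i], labels)   # merge j into i
        score = silhouette_score(X, labels)
        if score > best_score:
            best_labels, best_score = labels, score
    return best_labels
```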
|
1109.0714 | Till Moritz Karbach | Till Moritz Karbach | Feldman-Cousins Confidence Levels - Toy MC Method | 4 pages, 5 figures | null | null | null | physics.data-an hep-ex | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In particle physics, the likelihood ratio ordering principle is frequently
used to determine confidence regions. This method has statistical properties
that are superior to those of other confidence regions. But it often requires
intensive computations involving thousands of toy Monte Carlo datasets. The
original paper by Feldman and Cousins contains a recipe to perform the toy MC
computation. In this note, we explain their recipe in a more algorithmic way,
show its connection to 1-CL plots, and apply it to simple Gaussian situations
with boundaries.
| [
{
"version": "v1",
"created": "Sun, 4 Sep 2011 13:57:32 GMT"
}
] | 2011-09-06T00:00:00 | [
[
"Karbach",
"Till Moritz",
""
]
] | TITLE: Feldman-Cousins Confidence Levels - Toy MC Method
ABSTRACT: In particle physics, the likelihood ratio ordering principle is frequently
used to determine confidence regions. This method has statistical properties
that are superior to those of other confidence regions. But it often requires
intensive computations involving thousands of toy Monte Carlo datasets. The
original paper by Feldman and Cousins contains a recipe to perform the toy MC
computation. In this note, we explain their recipe in a more algorithmic way,
show its connection to 1-CL plots, and apply it to simple Gaussian situations
with boundaries.
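
A minimal toy-MC version of the likelihood-ratio ordering recipe for the textbook case x ~ N(mu, 1) with the physical boundary mu >= 0 is sketched below. The grid spacing, number of toys and confidence level are arbitrary illustrative choices, and this follows the standard Feldman-Cousins construction rather than the exact algorithmic presentation of the note.

```python
# Toy-MC construction of Feldman-Cousins intervals for x ~ N(mu, 1) with the
# physical boundary mu >= 0. Grid spacing, number of toys and CL are arbitrary
# illustrative choices.
import numpy as np

def fc_interval(x_obs, cl=0.68, mu_grid=np.linspace(0, 6, 241), n_toys=20000, seed=0):
    rng = np.random.default_rng(seed)
    accepted = []
    for mu in mu_grid:
        toys = rng.normal(mu, 1.0, n_toys)
        mu_best = np.maximum(toys, 0.0)          # best fit respecting mu >= 0
        # likelihood ratio R = L(x|mu) / L(x|mu_best) for unit-width Gaussians
        r_toys = np.exp(-0.5 * ((toys - mu) ** 2 - (toys - mu_best) ** 2))
        r_crit = np.quantile(r_toys, 1.0 - cl)   # accept the top-CL fraction by R
        mu_best_obs = max(x_obs, 0.0)
        r_obs = np.exp(-0.5 * ((x_obs - mu) ** 2 - (x_obs - mu_best_obs) ** 2))
        if r_obs >= r_crit:
            accepted.append(mu)
    return (min(accepted), max(accepted)) if accepted else (None, None)

# Example: a measurement fluctuating below the physical boundary
print(fc_interval(x_obs=-0.5))
```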
|
1109.0758 | Mao Ye | Mao Ye and Xingjie Liu and Wang-Chien Lee | Exploring Social Influence for Recommendation - A Probabilistic
Generative Model Approach | null | null | null | null | cs.SI cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a probabilistic generative model, called unified
model, which naturally unifies the ideas of social influence, collaborative
filtering and content-based methods for item recommendation. To address the
issue of hidden social influence, we devise new algorithms to learn the model
parameters of our proposal based on expectation maximization (EM). In addition
to a single-machine version of our EM algorithm, we further devise a
parallelized implementation on the Map-Reduce framework to process two
large-scale datasets we collect. Moreover, we show that the social influence
obtained from our generative models can be used for group recommendation.
Finally, we conduct comprehensive experiments using the datasets crawled from
last.fm and whrrl.com to validate our ideas. Experimental results show that the
generative models with social influence significantly outperform those without
incorporating social influence. The unified generative model proposed in this
paper obtains the best performance. Moreover, our study on social influence
finds that users in whrrl.com are more likely to get influenced by friends than
those in last.fm. The experimental results also confirm that our social
influence based group recommendation algorithm outperforms the state-of-the-art
algorithms for group recommendation.
| [
{
"version": "v1",
"created": "Sun, 4 Sep 2011 21:15:12 GMT"
}
] | 2011-09-06T00:00:00 | [
[
"Ye",
"Mao",
""
],
[
"Liu",
"Xingjie",
""
],
[
"Lee",
"Wang-Chien",
""
]
] | TITLE: Exploring Social Influence for Recommendation - A Probabilistic
Generative Model Approach
ABSTRACT: In this paper, we propose a probabilistic generative model, called unified
model, which naturally unifies the ideas of social influence, collaborative
filtering and content-based methods for item recommendation. To address the
issue of hidden social influence, we devise new algorithms to learn the model
parameters of our proposal based on expectation maximization (EM). In addition
to a single-machine version of our EM algorithm, we further devise a
parallelized implementation on the Map-Reduce framework to process two
large-scale datasets we collect. Moreover, we show that the social influence
obtained from our generative models can be used for group recommendation.
Finally, we conduct comprehensive experiments using the datasets crawled from
last.fm and whrrl.com to validate our ideas. Experimental results show that the
generative models with social influence significantly outperform those without
incorporating social influence. The unified generative model proposed in this
paper obtains the best performance. Moreover, our study on social influence
finds that users in whrrl.com are more likely to get influenced by friends than
those in last.fm. The experimental results also confirm that our social
influence based group recommendation algorithm outperforms the state-of-the-art
algorithms for group recommendation.
|
1109.0094 | Heba Affify | Heba Afify, Muhammad Islam and Manal Abdel Wahed | DNA Lossless Differential Compression Algorithm based on Similarity of
Genomic Sequence Database | null | null | null | null | cs.DS cs.CE cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern biological science produces vast amounts of genomic sequence data.
This is fuelling the need for efficient algorithms for sequence compression and
analysis. Data compression and the associated techniques coming from
information theory are often perceived as being of interest for data
communication and storage. In recent years, a substantial effort has been made
for the application of textual data compression techniques to various
computational biology tasks, ranging from storage and indexing of large
datasets to comparison of genomic databases. This paper presents a differential
compression algorithm that is based on production of difference sequences
according to an op-code table in order to optimize the compression of homologous
sequences in dataset. Therefore, the stored data are composed of reference
sequence, the set of differences, and differences locations, instead of storing
each sequence individually. This algorithm does not require a priori knowledge
about the statistics of the sequence set. The algorithm was applied to three
different datasets of genomic sequences and achieved up to a 195-fold compression
rate corresponding to 99.4% space saving.
| [
{
"version": "v1",
"created": "Thu, 1 Sep 2011 05:39:35 GMT"
}
] | 2011-09-05T00:00:00 | [
[
"Afify",
"Heba",
""
],
[
"Islam",
"Muhammad",
""
],
[
"Wahed",
"Manal Abdel",
""
]
] | TITLE: DNA Lossless Differential Compression Algorithm based on Similarity of
Genomic Sequence Database
ABSTRACT: Modern biological science produces vast amounts of genomic sequence data.
This is fuelling the need for efficient algorithms for sequence compression and
analysis. Data compression and the associated techniques coming from
information theory are often perceived as being of interest for data
communication and storage. In recent years, a substantial effort has been made
for the application of textual data compression techniques to various
computational biology tasks, ranging from storage and indexing of large
datasets to comparison of genomic databases. This paper presents a differential
compression algorithm that is based on production of difference sequences
according to an op-code table in order to optimize the compression of homologous
sequences in dataset. Therefore, the stored data are composed of reference
sequence, the set of differences, and differences locations, instead of storing
each sequence individually. This algorithm does not require a priori knowledge
about the statistics of the sequence set. The algorithm was applied to three
different datasets of genomic sequences and achieved up to a 195-fold compression
rate corresponding to 99.4% space saving.
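
The reference-plus-differences idea can be illustrated with a toy encoder that stores, for each homologous sequence, only the positions and bases where it differs from the reference. The paper's op-code table and its handling of insertions and deletions are not modeled here; the sketch assumes equal-length sequences.

```python
# Toy reference-based differential storage for highly similar sequences of
# equal length: keep one reference plus (position, base) substitutions for each
# other sequence. The op-code table and insertions/deletions are not modeled.
def encode(reference, sequence):
    assert len(reference) == len(sequence)
    return [(i, b) for i, (a, b) in enumerate(zip(reference, sequence)) if a != b]

def decode(reference, diffs):
    seq = list(reference)
    for i, b in diffs:
        seq[i] = b
    return "".join(seq)

ref = "ACGTACGTACGT"
var = "ACGTACCTACGA"
diffs = encode(ref, var)            # [(6, 'C'), (11, 'A')]
assert decode(ref, diffs) == var
```

Storing differences instead of full sequences pays off exactly when the sequences are highly similar, which is the homologous-database setting the paper targets.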
|
1108.5002 | Yoshitaka Kameya | Yoshitaka Kameya, Satoru Nakamura, Tatsuya Iwasaki and Taisuke Sato | Verbal Characterization of Probabilistic Clusters using Minimal
Discriminative Propositions | 13 pages including 3 figures. This is the full version of a paper at
ICTAI-2011 (http://www.cse.fau.edu/ictai2011/) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a knowledge discovery process, interpretation and evaluation of the mined
results are indispensable in practice. In the case of data clustering, however,
it is often difficult to see in what aspect each cluster has been formed. This
paper proposes a method for automatic and objective characterization or
"verbalization" of the clusters obtained by mixture models, in which we collect
conjunctions of propositions (attribute-value pairs) that help us interpret or
evaluate the clusters. The proposed method provides us with a new, in-depth and
consistent tool for cluster interpretation/evaluation, and works for various
types of datasets including continuous attributes and missing values.
Experimental results with a couple of standard datasets exhibit the utility of
the proposed method, and the importance of the feedback from the
interpretation/evaluation step.
| [
{
"version": "v1",
"created": "Thu, 25 Aug 2011 03:41:26 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2011 02:48:36 GMT"
}
] | 2011-09-01T00:00:00 | [
[
"Kameya",
"Yoshitaka",
""
],
[
"Nakamura",
"Satoru",
""
],
[
"Iwasaki",
"Tatsuya",
""
],
[
"Sato",
"Taisuke",
""
]
] | TITLE: Verbal Characterization of Probabilistic Clusters using Minimal
Discriminative Propositions
ABSTRACT: In a knowledge discovery process, interpretation and evaluation of the mined
results are indispensable in practice. In the case of data clustering, however,
it is often difficult to see in what aspect each cluster has been formed. This
paper proposes a method for automatic and objective characterization or
"verbalization" of the clusters obtained by mixture models, in which we collect
conjunctions of propositions (attribute-value pairs) that help us interpret or
evaluate the clusters. The proposed method provides us with a new, in-depth and
consistent tool for cluster interpretation/evaluation, and works for various
types of datasets including continuous attributes and missing values.
Experimental results with a couple of standard datasets exhibit the utility of
the proposed method, and the importance of the feedback from the
interpretation/evaluation step.
|
1108.5397 | Charles Bergeron PhD | Charles Bergeron, Theresa Hepburn, C. Matthew Sundling, Michael Krein,
Bill Katt, Nagamani Sukumar, Curt M. Breneman, Kristin P. Bennett | Prediction of peptide bonding affinity: kernel methods for nonlinear
modeling | null | null | null | null | stat.ML cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents regression models obtained from a process of blind
prediction of peptide binding affinity from provided descriptors for several
distinct datasets as part of the 2006 Comparative Evaluation of Prediction
Algorithms (COEPRA) contest. This paper finds that kernel partial least
squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS,
and that the incorporation of transferable atom equivalent features improves
predictive capability.
| [
{
"version": "v1",
"created": "Fri, 26 Aug 2011 21:21:51 GMT"
}
] | 2011-08-30T00:00:00 | [
[
"Bergeron",
"Charles",
""
],
[
"Hepburn",
"Theresa",
""
],
[
"Sundling",
"C. Matthew",
""
],
[
"Krein",
"Michael",
""
],
[
"Katt",
"Bill",
""
],
[
"Sukumar",
"Nagamani",
""
],
[
"Breneman",
"Curt M.",
""
],
[
"Bennett",
"Kristin P.",
""
]
] | TITLE: Prediction of peptide bonding affinity: kernel methods for nonlinear
modeling
ABSTRACT: This paper presents regression models obtained from a process of blind
prediction of peptide binding affinity from provided descriptors for several
distinct datasets as part of the 2006 Comparative Evaluation of Prediction
Algorithms (COEPRA) contest. This paper finds that kernel partial least
squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS,
and that the incorporation of transferable atom equivalent features improves
predictive capability.
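
Kernel PLS admits several formulations; one simple "direct kernel" flavor, shown below only as an illustration and not as the COEPRA models, applies ordinary PLS to an RBF kernel matrix computed over the descriptors. The kernel width gamma and the number of latent components are arbitrary assumptions.

```python
# Rough "direct kernel" flavor of nonlinear PLS: compute an RBF kernel matrix
# over the descriptors and feed its columns to ordinary PLS regression. This
# only illustrates the idea of kernelizing PLS; gamma and n_components are
# arbitrary choices, not the models entered in COEPRA.
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel

def fit_kernel_pls(X_train, y_train, n_components=5, gamma=0.1):
    K_train = rbf_kernel(X_train, X_train, gamma=gamma)
    pls = PLSRegression(n_components=n_components).fit(K_train, y_train)
    def predict(X_new):
        return pls.predict(rbf_kernel(X_new, X_train, gamma=gamma))
    return predict

# predict = fit_kernel_pls(X_train, y_train); y_hat = predict(X_test)
```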
|
1108.5592 | Sivakumar Madesan | Abhishek Taneja, R.K.Chauhan | A Performance Study of Data Mining Techniques: Multiple Linear
Regression vs. Factor Analysis | Data mining, Multiple Linear Regression, Factor Analysis, Principal
Component Regression, Maximum Likelihood Regression, Generalized Least Square
Regression | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growing volume of data usually creates an interesting challenge for the
need of data analysis tools that discover regularities in these data. Data
mining has emerged as a discipline that contributes tools for data analysis,
discovery of hidden knowledge, and autonomous decision making in many
application domains. The purpose of this study is to compare the performance of
two data mining techniques viz., factor analysis and multiple linear regression
for different sample sizes on three unique sets of data. The performance of the
two data mining techniques is compared on the following parameters: mean square
error (MSE), R-square, adjusted R-square, condition number, root mean square
error (RMSE), number of variables included in the prediction model, modified
coefficient of efficiency, F-value, and test of normality. These parameters
have been computed using various data mining tools like SPSS, XLstat, Stata,
and MS-Excel. It is seen that for all the given datasets, factor analysis
outperforms multiple linear regression. But the absolute value of prediction
accuracy varied between the three datasets indicating that the data
distribution and data characteristics play a major role in choosing the correct
prediction technique.
| [
{
"version": "v1",
"created": "Fri, 26 Aug 2011 07:08:13 GMT"
}
] | 2011-08-30T00:00:00 | [
[
"Taneja",
"Abhishek",
""
],
[
"Chauhan",
"R. K.",
""
]
] | TITLE: A Performance Study of Data Mining Techniques: Multiple Linear
Regression vs. Factor Analysis
ABSTRACT: The growing volume of data usually creates an interesting challenge for the
need of data analysis tools that discover regularities in these data. Data
mining has emerged as a discipline that contributes tools for data analysis,
discovery of hidden knowledge, and autonomous decision making in many
application domains. The purpose of this study is to compare the performance of
two data mining techniques viz., factor analysis and multiple linear regression
for different sample sizes on three unique sets of data. The performance of the
two data mining techniques is compared on the following parameters: mean square
error (MSE), R-square, adjusted R-square, condition number, root mean square
error (RMSE), number of variables included in the prediction model, modified
coefficient of efficiency, F-value, and test of normality. These parameters
have been computed using various data mining tools like SPSS, XLstat, Stata,
and MS-Excel. It is seen that for all the given datasets, factor analysis
outperforms multiple linear regression. But the absolute value of prediction
accuracy varied between the three datasets indicating that the data
distribution and data characteristics play a major role in choosing the correct
prediction technique.
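
A minimal version of such a comparison, using principal component regression as a stand-in for the factor-analysis-style models and reporting MSE and R-squared on a held-out split, might look like the following; the component count, split size and choice of metrics are illustrative, not the study's exact protocol.

```python
# Sketch comparing multiple linear regression against principal component
# regression on the same train/test split, reporting MSE and R-squared.
# Component count and split are arbitrary illustrative choices.
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def compare(X, y, n_components=5, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
    models = {
        "MLR": LinearRegression(),
        "PCR": make_pipeline(PCA(n_components=n_components), LinearRegression()),
    }
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(name, mean_squared_error(y_te, pred), r2_score(y_te, pred))
```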
|
1108.5217 | Hieu Dinh | Dolly Sharma and Sanguthevar Rajasekaran and Hieu Dinh | An Experimental Comparison of PMSPrune and Other Algorithms for Motif
Search | null | null | null | null | q-bio.QM cs.CE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting meaningful patterns from voluminous amounts of biological data is a
very big challenge. Motifs are biological patterns of great interest to
biologists. Many different versions of the motif finding problem have been
identified by researchers. Examples include the Planted $(l, d)$ Motif version,
those based on position-specific score matrices, etc. A comparative study of
the various motif search algorithms is very important for several reasons. For
example, we could identify the strengths and weaknesses of each. As a result,
we might be able to devise hybrids that will perform better than the individual
components. In this paper we (either directly or indirectly) compare the
performance of PMSprune (an algorithm based on the $(l, d)$ motif model) and
several other algorithms in terms of seven measures and using well established
benchmarks.
In this paper, we (directly or indirectly) compare the quality of motifs
predicted by PMSprune and 14 other algorithms. We have employed several
benchmark datasets including the one used by Tompa et al. These comparisons
show that the performance of PMSprune is competitive when compared to the other
14 algorithms tested.
We have compared (directly or indirectly) the performance of PMSprune and 14
other algorithms using the benchmark dataset provided by Tompa et al. It is
observed that both PMSprune and DME (an algorithm based on position-specific
score matrices) in general perform better than the 13 algorithms reported in
Tompa et al. Subsequently, we have compared PMSprune and DME on other
benchmark data sets including ChIP-Chip, ChIP-seq, and ABS. Between PMSprune
and DME, PMSprune performs better than DME on six measures. DME performs better
than PMSprune on one measure (namely, specificity).
| [
{
"version": "v1",
"created": "Fri, 26 Aug 2011 00:26:44 GMT"
}
] | 2011-08-29T00:00:00 | [
[
"Sharma",
"Dolly",
""
],
[
"Rajasekaran",
"Sanguthevar",
""
],
[
"Dinh",
"Hieu",
""
]
] | TITLE: An Experimental Comparison of PMSPrune and Other Algorithms for Motif
Search
ABSTRACT: Extracting meaningful patterns from voluminous amounts of biological data is a
very big challenge. Motifs are biological patterns of great interest to
biologists. Many different versions of the motif finding problem have been
identified by researchers. Examples include the Planted $(l, d)$ Motif version,
those based on position-specific score matrices, etc. A comparative study of
the various motif search algorithms is very important for several reasons. For
example, we could identify the strengths and weaknesses of each. As a result,
we might be able to devise hybrids that will perform better than the individual
components. In this paper we (either directly or indirectly) compare the
performance of PMSprune (an algorithm based on the $(l, d)$ motif model) and
several other algorithms in terms of seven measures and using well established
benchmarks.
In this paper, we (directly or indirectly) compare the quality of motifs
predicted by PMSprune and 14 other algorithms. We have employed several
benchmark datasets including the one used by Tompa et al. These comparisons
show that the performance of PMSprune is competitive when compared to the other
14 algorithms tested.
We have compared (directly or indirectly) the performance of PMSprune and 14
other algorithms using the benchmark dataset provided by Tompa et al. It is
observed that both PMSprune and DME (an algorithm based on position-specific
score matrices) in general perform better than the 13 algorithms reported in
Tompa et al. Subsequently, we have compared PMSprune and DME on other
benchmark data sets including ChIP-Chip, ChIP-seq, and ABS. Between PMSprune
and DME, PMSprune performs better than DME on six measures. DME performs better
than PMSprune on one measure (namely, specificity).
|
1106.0288 | Hang-Hyun Jo | Hang-Hyun Jo, Raj Kumar Pan, and Kimmo Kaski | Emergence of Bursts and Communities in Evolving Weighted Networks | 9 pages, 6 figures | PLoS ONE 6(8): e22687 (2011) | 10.1371/journal.pone.0022687 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the patterns of human dynamics and social interaction, and the
way they lead to the formation of an organized and functional society are
important issues especially for techno-social development. Addressing these
issues of social networks has recently become possible through large scale data
analysis of e.g. mobile phone call records, which has revealed the existence of
modular or community structure with many links between nodes of the same
community and relatively few links between nodes of different communities. The
weights of links, e.g. the number of calls between two users, and the network
topology are found correlated such that intra-community links are stronger
compared to the weak inter-community links. This is known as Granovetter's "The
strength of weak ties" hypothesis. In addition to this inhomogeneous community
structure, the temporal patterns of human dynamics turn out to be inhomogeneous
or bursty, characterized by the heavy tailed distribution of inter-event time
between two consecutive events. In this paper, we study how the community
structure and the bursty dynamics emerge together in an evolving weighted
network model. The principal mechanisms behind these patterns are social
interaction by cyclic closure, i.e. links to friends of friends and the focal
closure, i.e. links to individuals sharing similar attributes or interests, and
human dynamics by task handling process. These three mechanisms have been
implemented as a network model with local attachment, global attachment, and
priority-based queuing processes. By comprehensive numerical simulations we
show that the interplay of these mechanisms leads to the emergence of heavy
tailed inter-event time distribution and the evolution of Granovetter-type
community structure. Moreover, the numerical results are found to be in
qualitative agreement with empirical results from mobile phone call dataset.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 19:26:18 GMT"
}
] | 2011-08-26T00:00:00 | [
[
"Jo",
"Hang-Hyun",
""
],
[
"Pan",
"Raj Kumar",
""
],
[
"Kaski",
"Kimmo",
""
]
] | TITLE: Emergence of Bursts and Communities in Evolving Weighted Networks
ABSTRACT: Understanding the patterns of human dynamics and social interaction, and the
way they lead to the formation of an organized and functional society are
important issues especially for techno-social development. Addressing these
issues of social networks has recently become possible through large scale data
analysis of e.g. mobile phone call records, which has revealed the existence of
modular or community structure with many links between nodes of the same
community and relatively few links between nodes of different communities. The
weights of links, e.g. the number of calls between two users, and the network
topology are found correlated such that intra-community links are stronger
compared to the weak inter-community links. This is known as Granovetter's "The
strength of weak ties" hypothesis. In addition to this inhomogeneous community
structure, the temporal patterns of human dynamics turn out to be inhomogeneous
or bursty, characterized by the heavy tailed distribution of inter-event time
between two consecutive events. In this paper, we study how the community
structure and the bursty dynamics emerge together in an evolving weighted
network model. The principal mechanisms behind these patterns are social
interaction by cyclic closure, i.e. links to friends of friends and the focal
closure, i.e. links to individuals sharing similar attributes or interests, and
human dynamics by task handling process. These three mechanisms have been
implemented as a network model with local attachment, global attachment, and
priority-based queuing processes. By comprehensive numerical simulations we
show that the interplay of these mechanisms leads to the emergence of heavy
tailed inter-event time distribution and the evolution of Granovetter-type
community structure. Moreover, the numerical results are found to be in
qualitative agreement with empirical results from mobile phone call dataset.
|
0803.4063 | Arne Kesting | Arne Kesting and Martin Treiber | Calibrating Car-Following Models using Trajectory Data: Methodological
Study | null | Transportation Research Record: Journal of the Transportation
Research Board, Volume 2088, Pages 148-156 (2008) | 10.3141/2088-16 | null | physics.soc-ph physics.pop-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The car-following behavior of individual drivers in real city traffic is
studied on the basis of (publicly available) trajectory datasets recorded by a
vehicle equipped with a radar sensor. By means of a nonlinear optimization
procedure based on a genetic algorithm, we calibrate the Intelligent Driver
Model and the Velocity Difference Model by minimizing the deviations between
the observed driving dynamics and the simulated trajectory when following the
same leading vehicle. The reliability and robustness of the nonlinear fits are
assessed by applying different optimization criteria, i.e., different measures
for the deviations between two trajectories. The obtained errors are in the
range between 11% and 29%, which is consistent with typical error ranges
obtained in previous studies. In addition, we found that the calibrated
parameter values of the Velocity Difference Model strongly depend on the
optimization criterion, while the Intelligent Driver Model is more robust in
this respect. By applying an explicit delay to the model input, we investigated
the influence of a reaction time. Remarkably, we found a negligible influence
of the reaction time indicating that drivers compensate for their reaction time
by anticipation. Furthermore, the parameter sets calibrated to a certain
trajectory are applied to the other trajectories allowing for model validation.
The results indicate that ``intra-driver variability'' rather than
``inter-driver variability'' accounts for a large part of the calibration
errors. The results are used to suggest some criteria towards a benchmarking of
car-following models.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2008 08:40:37 GMT"
}
] | 2011-08-25T00:00:00 | [
[
"Kesting",
"Arne",
""
],
[
"Treiber",
"Martin",
""
]
] | TITLE: Calibrating Car-Following Models using Trajectory Data: Methodological
Study
ABSTRACT: The car-following behavior of individual drivers in real city traffic is
studied on the basis of (publicly available) trajectory datasets recorded by a
vehicle equipped with a radar sensor. By means of a nonlinear optimization
procedure based on a genetic algorithm, we calibrate the Intelligent Driver
Model and the Velocity Difference Model by minimizing the deviations between
the observed driving dynamics and the simulated trajectory when following the
same leading vehicle. The reliability and robustness of the nonlinear fits are
assessed by applying different optimization criteria, i.e., different measures
for the deviations between two trajectories. The obtained errors are in the
range between 11% and 29%, which is consistent with typical error ranges
obtained in previous studies. In addition, we found that the calibrated
parameter values of the Velocity Difference Model strongly depend on the
optimization criterion, while the Intelligent Driver Model is more robust in
this respect. By applying an explicit delay to the model input, we investigated
the influence of a reaction time. Remarkably, we found a negligible influence
of the reaction time indicating that drivers compensate for their reaction time
by anticipation. Furthermore, the parameter sets calibrated to a certain
trajectory are applied to the other trajectories allowing for model validation.
The results indicate that ``intra-driver variability'' rather than
``inter-driver variability'' accounts for a large part of the calibration
errors. The results are used to suggest some criteria towards a benchmarking of
car-following models.
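
For reference, the standard form of the Intelligent Driver Model being calibrated is the acceleration law below, written in terms of speed v, gap s and approach rate Delta v; the Velocity Difference Model and the genetic-algorithm objective are not reproduced here.

```latex
% Standard Intelligent Driver Model (IDM) acceleration law
\dot{v} \;=\; a\left[\,1 - \left(\frac{v}{v_0}\right)^{\delta}
          - \left(\frac{s^{*}(v,\Delta v)}{s}\right)^{2}\right],
\qquad
s^{*}(v,\Delta v) \;=\; s_0 + v\,T + \frac{v\,\Delta v}{2\sqrt{a\,b}}
```

Here v_0 is the desired speed, T the time gap, s_0 the minimum gap, a the maximum acceleration, b the comfortable deceleration and delta the acceleration exponent (typically 4).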
|
1108.4041 | Daniel Lemire | Daniel Lemire and Andre Vellino | Extracting, Transforming and Archiving Scientific Data | 8 pages, Fourth Workshop on Very Large Digital Libraries, 2011 | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is becoming common to archive research datasets that are not only large
but also numerous. In addition, their corresponding metadata and the software
required to analyse or display them need to be archived. Yet the manual
curation of research data can be difficult and expensive, particularly in very
large digital repositories, hence the importance of models and tools for
automating digital curation tasks. The automation of these tasks faces three
major challenges: (1) research data and data sources are highly heterogeneous,
(2) future research needs are difficult to anticipate, (3) data is hard to
index. To address these problems, we propose the Extract, Transform and Archive
(ETA) model for managing and mechanizing the curation of research data.
Specifically, we propose a scalable strategy for addressing the research-data
problem, ranging from the extraction of legacy data to its long-term storage.
We review some existing solutions and propose novel avenues of research.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2011 20:16:02 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Aug 2011 02:21:59 GMT"
}
] | 2011-08-24T00:00:00 | [
[
"Lemire",
"Daniel",
""
],
[
"Vellino",
"Andre",
""
]
] | TITLE: Extracting, Transforming and Archiving Scientific Data
ABSTRACT: It is becoming common to archive research datasets that are not only large
but also numerous. In addition, their corresponding metadata and the software
required to analyse or display them need to be archived. Yet the manual
curation of research data can be difficult and expensive, particularly in very
large digital repositories, hence the importance of models and tools for
automating digital curation tasks. The automation of these tasks faces three
major challenges: (1) research data and data sources are highly heterogeneous,
(2) future research needs are difficult to anticipate, (3) data is hard to
index. To address these problems, we propose the Extract, Transform and Archive
(ETA) model for managing and mechanizing the curation of research data.
Specifically, we propose a scalable strategy for addressing the research-data
problem, ranging from the extraction of legacy data to its long-term storage.
We review some existing solutions and propose novel avenues of research.
|
1108.4551 | Tshilidzi Marwala | Mlungisi Duma, Bhekisipho Twala, Tshilidzi Marwala | Improving the performance of the ripper in insurance risk classification
: A comparative study using feature selection | ICINCO 2011: 8th International Conference on Informatics in Control,
Automation and Robotics | null | null | null | cs.LG cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Ripper algorithm is designed to generate rule sets for large datasets
with many features. However, it was shown that the algorithm struggles with
classification performance in the presence of missing data. The algorithm
struggles to classify instances when the quality of the data deteriorates as a
result of increasing missing data. In this paper, a feature selection technique
is used to help improve the classification performance of the Ripper model.
Principal component analysis and evidence automatic relevance determination
techniques are used to improve the performance. A comparison is done to see
which technique helps the algorithm improve the most. Training datasets with
completely observable data were used to construct the model and testing
datasets with missing values were used for measuring accuracy. The results
showed that principal component analysis is a better feature selection for the
Ripper in improving the classification performance.
| [
{
"version": "v1",
"created": "Tue, 23 Aug 2011 10:52:18 GMT"
}
] | 2011-08-24T00:00:00 | [
[
"Duma",
"Mlungisi",
""
],
[
"Twala",
"Bhekisipho",
""
],
[
"Marwala",
"Tshilidzi",
""
]
] | TITLE: Improving the performance of the ripper in insurance risk classification
: A comparative study using feature selection
ABSTRACT: The Ripper algorithm is designed to generate rule sets for large datasets
with many features. However, it was shown that the algorithm struggles with
classification performance in the presence of missing data. The algorithm
struggles to classify instances when the quality of the data deteriorates as a
result of increasing missing data. In this paper, a feature selection technique
is used to help improve the classification performance of the Ripper model.
Principal component analysis and evidence automatic relevance determination
techniques are used to improve the performance. A comparison is done to see
which technique helps the algorithm improve the most. Training datasets with
completely observable data were used to construct the model and testing
datasets with missing values were used for measuring accuracy. The results
showed that principal component analysis is a better feature selection for the
Ripper in improving the classification performance.
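
The preprocessing idea (reduce the features before training the rule learner) can be sketched as a pipeline; a decision tree stands in for the Ripper algorithm since no RIPPER implementation is assumed to be available, and the component count and mean imputation of missing values are illustrative choices rather than the paper's setup.

```python
# Sketch of PCA feature extraction ahead of a rule-style classifier. A decision
# tree stands in for Ripper (no RIPPER implementation is assumed); n_components
# and mean imputation of missing values are arbitrary illustrative choices.
from sklearn.decomposition import PCA
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

model = make_pipeline(
    SimpleImputer(strategy="mean"),   # cope with missing values at test time
    PCA(n_components=10),
    DecisionTreeClassifier(max_depth=5),
)
# model.fit(X_train_complete, y_train); model.score(X_test_with_missing, y_test)
```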
|
1108.4079 | Jason J Corso | Jason J. Corso | Toward Parts-Based Scene Understanding with Pixel-Support Parts-Sparse
Pictorial Structures | null | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene understanding remains a significant challenge in the computer vision
community. The visual psychophysics literature has demonstrated the importance
of interdependence among parts of the scene. Yet, the majority of methods in
computer vision remain local. Pictorial structures have arisen as a fundamental
parts-based model for some vision problems, such as articulated object
detection. However, the form of classical pictorial structures limits their
applicability for global problems, such as semantic pixel labeling. In this
paper, we propose an extension of the pictorial structures approach, called
pixel-support parts-sparse pictorial structures, or PS3, to overcome this
limitation. Our model extends the classical form in two ways: first, it defines
parts directly based on pixel-support rather than in a parametric form, and
second, it specifies a space of plausible parts-based scene models and permits
one to be used for inference on any given image. PS3 makes strides toward
unifying object-level and pixel-level modeling of scene elements. In this
report, we implement the first half of our model and rely upon external
knowledge to provide an initial graph structure for a given image. Our
experimental results on benchmark datasets demonstrate the capability of this
new parts-based view of scene modeling.
| [
{
"version": "v1",
"created": "Sat, 20 Aug 2011 02:08:45 GMT"
}
] | 2011-08-23T00:00:00 | [
[
"Corso",
"Jason J.",
""
]
] | TITLE: Toward Parts-Based Scene Understanding with Pixel-Support Parts-Sparse
Pictorial Structures
ABSTRACT: Scene understanding remains a significant challenge in the computer vision
community. The visual psychophysics literature has demonstrated the importance
of interdependence among parts of the scene. Yet, the majority of methods in
computer vision remain local. Pictorial structures have arisen as a fundamental
parts-based model for some vision problems, such as articulated object
detection. However, the form of classical pictorial structures limits their
applicability for global problems, such as semantic pixel labeling. In this
paper, we propose an extension of the pictorial structures approach, called
pixel-support parts-sparse pictorial structures, or PS3, to overcome this
limitation. Our model extends the classical form in two ways: first, it defines
parts directly based on pixel-support rather than in a parametric form, and
second, it specifies a space of plausible parts-based scene models and permits
one to be used for inference on any given image. PS3 makes strides toward
unifying object-level and pixel-level modeling of scene elements. In this
report, we implement the first half of our model and rely upon external
knowledge to provide an initial graph structure for a given image. Our
experimental results on benchmark datasets demonstrate the capability of this
new parts-based view of scene modeling.
|
1108.3974 | Dylan Harp | Dylan R. Harp and Velimir V. Vesselinov | Accounting for the influence of aquifer heterogeneity on spatial
propagation of pumping drawdown | null | null | null | LA-UR 10-06334 | physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been previously observed that during a pumping test in heterogeneous
media, drawdown data from different time periods collected at a single location
produce different estimates of aquifer properties and that Theis type-curve
inferences are more variable than late-time Cooper-Jacob inferences. In order
to obtain estimates of aquifer properties from highly transient drawdown data
using the Theis solution, it is necessary to account for this behavior. We
present an approach that utilizes an exponential functional form to represent
Theis parameter behavior resulting from the spatial propagation of a cone of
depression. This approach allows the use of transient data consisting of
early-time drawdown data to obtain late-time convergent Theis parameters
consistent with Cooper-Jacob method inferences. We demonstrate the approach on
a multi-year dataset consisting of multi-well transient water-level
observations due to transient multi-well water-supply pumping. Based on
previous research, transmissivities associated with each of the pumping wells
are required to converge to a single value, while storativities are allowed to
converge to distinct values.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2011 14:41:31 GMT"
}
] | 2011-08-22T00:00:00 | [
[
"Harp",
"Dylan R.",
""
],
[
"Vesselinov",
"Velimir V.",
""
]
] | TITLE: Accounting for the influence of aquifer heterogeneity on spatial
propagation of pumping drawdown
ABSTRACT: It has been previously observed that during a pumping test in heterogeneous
media, drawdown data from different time periods collected at a single location
produce different estimates of aquifer properties and that Theis type-curve
inferences are more variable than late-time Cooper-Jacob inferences. In order
to obtain estimates of aquifer properties from highly transient drawdown data
using the Theis solution, it is necessary to account for this behavior. We
present an approach that utilizes an exponential functional form to represent
Theis parameter behavior resulting from the spatial propagation of a cone of
depression. This approach allows the use of transient data consisting of
early-time drawdown data to obtain late-time convergent Theis parameters
consistent with Cooper-Jacob method inferences. We demonstrate the approach on
a multi-year dataset consisting of multi-well transient water-level
observations due to transient multi-well water-supply pumping. Based on
previous research, transmissivities associated with each of the pumping wells
are required to converge to a single value, while storativities are allowed to
converge to distinct values.
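
For context, the Theis solution and its late-time Cooper-Jacob approximation referred to above are reproduced below (Q pumping rate, T transmissivity, S storativity); the exponential functional form the authors fit to the drawdown-dependent parameters is not shown.

```latex
% Theis solution for drawdown s at radial distance r and time t
s(r,t) = \frac{Q}{4\pi T}\,W(u), \qquad
u = \frac{r^{2} S}{4 T t}, \qquad
W(u) = \int_{u}^{\infty}\frac{e^{-x}}{x}\,dx

% Late-time (small u) Cooper-Jacob approximation
s(r,t) \approx \frac{Q}{4\pi T}\,\ln\!\left(\frac{2.25\,T t}{r^{2} S}\right)
```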
|
1108.0442 | Feng Wang | Feng Wang, Haiyan Wang, Kuai Xu | Diffusive Logistic Model Towards Predicting Information Diffusion in
Online Social Networks | null | null | null | null | cs.SI math.AP physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online social networks have recently become an effective and innovative
channel for spreading information and influence among hundreds of millions of
end users. Many prior works have carried out empirical studies and proposed
diffusion models to understand the information diffusion process in online
social networks. However, most of these studies focus on the information
diffusion in temporal dimension, that is, how the information propagates over
time. Little attention has been given to understanding information diffusion over
both temporal and spatial dimensions. In this paper, we propose a Partial
Differential Equation (PDE), specifically, a Diffusive Logistic (DL) equation
to model the temporal and spatial characteristics of information diffusion in
online social networks. To be more specific, we develop a PDE-based theoretical
framework to measure and predict the density of influenced users at a given
distance from the original information source after a time period. The density
of influenced users over time and distance provides valuable insight on the
actual information diffusion process. We present the temporal and spatial
patterns in a real dataset collected from Digg social news site, and validate
the proposed DL equation in terms of predicting the information diffusion
process. Our experiment results show that the DL model is indeed able to
characterize and predict the process of information propagation in online
social networks. For example, for the most popular news with 24,099 votes in
Digg, the average prediction accuracy of DL model over all distances during the
first 6 hours is 92.08%. To the best of our knowledge, this paper is the first
attempt to use PDE-based model to study the information diffusion process in
both temporal and spatial dimensions in online social networks.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 2011 22:04:45 GMT"
}
] | 2011-08-20T00:00:00 | [
[
"Wang",
"Feng",
""
],
[
"Wang",
"Haiyan",
""
],
[
"Xu",
"Kuai",
""
]
] | TITLE: Diffusive Logistic Model Towards Predicting Information Diffusion in
Online Social Networks
ABSTRACT: Online social networks have recently become an effective and innovative
channel for spreading information and influence among hundreds of millions of
end users. Many prior works have carried out empirical studies and proposed
diffusion models to understand the information diffusion process in online
social networks. However, most of these studies focus on the information
diffusion in temporal dimension, that is, how the information propagates over
time. Little attention has been given to understanding information diffusion over
both temporal and spatial dimensions. In this paper, we propose a Partial
Differential Equation (PDE), specifically, a Diffusive Logistic (DL) equation
to model the temporal and spatial characteristics of information diffusion in
online social networks. To be more specific, we develop a PDE-based theoretical
framework to measure and predict the density of influenced users at a given
distance from the original information source after a time period. The density
of influenced users over time and distance provides valuable insight on the
actual information diffusion process. We present the temporal and spatial
patterns in a real dataset collected from Digg social news site, and validate
the proposed DL equation in terms of predicting the information diffusion
process. Our experiment results show that the DL model is indeed able to
characterize and predict the process of information propagation in online
social networks. For example, for the most popular news with 24,099 votes in
Digg, the average prediction accuracy of DL model over all distances during the
first 6 hours is 92.08%. To the best of our knowledge, this paper is the first
attempt to use PDE-based model to study the information diffusion process in
both temporal and spatial dimensions in online social networks.
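
A diffusive logistic equation in its standard (Fisher-KPP type) form is shown below, with u(x,t) read as the density of influenced users at distance x from the source at time t; the paper's exact parameterization and initial/boundary conditions may differ.

```latex
% Diffusive logistic (Fisher-KPP type) equation for the density u(x,t) of
% influenced users at distance x from the source at time t
\frac{\partial u}{\partial t}
  \;=\; d\,\frac{\partial^{2} u}{\partial x^{2}}
  \;+\; r\,u\left(1 - \frac{u}{K}\right)
```

Here d is a diffusion coefficient, r an intrinsic growth rate and K a carrying capacity.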
|
1108.3154 | Stephane Ross | Stephane Ross, J. Andrew Bagnell | Stability Conditions for Online Learnability | 16 pages. Earlier version of this work submitted (but rejected) to
COLT 2011 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stability is a general notion that quantifies the sensitivity of a learning
algorithm's output to small changes in the training dataset (e.g. deletion or
replacement of a single training sample). Such conditions have recently been
shown to be more powerful to characterize learnability in the general learning
setting under i.i.d. samples where uniform convergence is not necessary for
learnability, but where stability is both sufficient and necessary for
learnability. We here show that similar stability conditions are also
sufficient for online learnability, i.e. whether there exists a learning
algorithm that, under any sequence of examples (potentially chosen
adversarially), produces a sequence of hypotheses that has no regret in the
limit with respect to the best hypothesis in hindsight. We introduce online
stability, a stability condition related to uniform-leave-one-out stability in
the batch setting, that is sufficient for online learnability. In particular we
show that popular classes of online learners, namely algorithms that fall in
the category of Follow-the-(Regularized)-Leader, Mirror Descent, gradient-based
methods and randomized algorithms like Weighted Majority and Hedge, are
guaranteed to have no regret if they have such an online stability property. We
provide examples suggesting that the existence of an algorithm with such a
stability condition might in fact be necessary for online learnability. For the
more restricted binary classification setting, we establish that such a
stability condition is in fact both sufficient and necessary. We also show that
for a large class of online learnable problems in the general learning setting,
namely those with a notion of sub-exponential covering, no-regret online
algorithms with such a stability condition exist.
| [
{
"version": "v1",
"created": "Tue, 16 Aug 2011 05:11:54 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2011 17:01:35 GMT"
}
] | 2011-08-18T00:00:00 | [
[
"Ross",
"Stephane",
""
],
[
"Bagnell",
"J. Andrew",
""
]
] | TITLE: Stability Conditions for Online Learnability
ABSTRACT: Stability is a general notion that quantifies the sensitivity of a learning
algorithm's output to small changes in the training dataset (e.g. deletion or
replacement of a single training sample). Such conditions have recently been
shown to be more powerful to characterize learnability in the general learning
setting under i.i.d. samples where uniform convergence is not necessary for
learnability, but where stability is both sufficient and necessary for
learnability. We here show that similar stability conditions are also
sufficient for online learnability, i.e. whether there exists a learning
algorithm that, under any sequence of examples (potentially chosen
adversarially), produces a sequence of hypotheses that has no regret in the
limit with respect to the best hypothesis in hindsight. We introduce online
stability, a stability condition related to uniform-leave-one-out stability in
the batch setting, that is sufficient for online learnability. In particular we
show that popular classes of online learners, namely algorithms that fall in
the category of Follow-the-(Regularized)-Leader, Mirror Descent, gradient-based
methods and randomized algorithms like Weighted Majority and Hedge, are
guaranteed to have no regret if they have such an online stability property. We
provide examples suggesting that the existence of an algorithm with such a
stability condition might in fact be necessary for online learnability. For the
more restricted binary classification setting, we establish that such a
stability condition is in fact both sufficient and necessary. We also show that
for a large class of online learnable problems in the general learning setting,
namely those with a notion of sub-exponential covering, no-regret online
algorithms with such a stability condition exist.
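
One of the randomized algorithms named above, Hedge (the exponentially weighted forecaster), is sketched below simply to make the class of learners concrete; the learning rate and the assumption of losses in [0, 1] are illustrative, and the sketch is not part of the stability analysis itself.

```python
# Minimal Hedge (exponentially weighted forecaster) over N experts with losses
# in [0, 1]; the learning rate eta is an arbitrary illustrative choice.
import numpy as np

def hedge(loss_matrix, eta=0.1, seed=0):
    """loss_matrix: shape (T, N), loss of each expert at each round."""
    rng = np.random.default_rng(seed)
    T, N = loss_matrix.shape
    weights = np.ones(N)
    total_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()
        expert = rng.choice(N, p=p)                 # randomized prediction
        total_loss += loss_matrix[t, expert]
        weights *= np.exp(-eta * loss_matrix[t])    # multiplicative update
    best_expert_loss = loss_matrix.sum(axis=0).min()
    return total_loss, best_expert_loss             # regret is their difference
```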
|
1108.3072 | Ping Li | Ping Li, Anshumali Shrivastava, Christian Konig | Training Logistic Regression and SVM on 200GB Data Using b-Bit Minwise
Hashing and Comparisons with Vowpal Wabbit (VW) | null | null | null | null | cs.LG stat.ME stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We generated a dataset of 200 GB with 10^9 features, to test our recent b-bit
minwise hashing algorithms for training very large-scale logistic regression
and SVM. The results confirm our prior work that, compared with the VW hashing
algorithm (which has the same variance as random projections), b-bit minwise
hashing is substantially more accurate at the same storage. For example, with
merely 30 hashed values per data point, b-bit minwise hashing can achieve
similar accuracies as VW with 2^14 hashed values per data point.
We demonstrate that the preprocessing cost of b-bit minwise hashing is
roughly on the same order of magnitude as the data loading time. Furthermore,
by using a GPU, the preprocessing cost can be reduced to a small fraction of
the data loading time.
Minwise hashing has been widely used in industry, at least in the context of
search. One reason for its popularity is that one can efficiently simulate
permutations by (e.g.,) universal hashing. In other words, there is no need to
store the permutation matrix. In this paper, we empirically verify this
practice, by demonstrating that even using the simplest 2-universal hashing
does not degrade the learning performance.
| [
{
"version": "v1",
"created": "Mon, 15 Aug 2011 19:53:55 GMT"
}
] | 2011-08-16T00:00:00 | [
[
"Li",
"Ping",
""
],
[
"Shrivastava",
"Anshumali",
""
],
[
"Konig",
"Christian",
""
]
] | TITLE: Training Logistic Regression and SVM on 200GB Data Using b-Bit Minwise
Hashing and Comparisons with Vowpal Wabbit (VW)
ABSTRACT: We generated a dataset of 200 GB with 10^9 features, to test our recent b-bit
minwise hashing algorithms for training very large-scale logistic regression
and SVM. The results confirm our prior work that, compared with the VW hashing
algorithm (which has the same variance as random projections), b-bit minwise
hashing is substantially more accurate at the same storage. For example, with
merely 30 hashed values per data point, b-bit minwise hashing can achieve
similar accuracies as VW with 2^14 hashed values per data point.
We demonstrate that the preprocessing cost of b-bit minwise hashing is
roughly on the same order of magnitude as the data loading time. Furthermore,
by using a GPU, the preprocessing cost can be reduced to a small fraction of
the data loading time.
Minwise hashing has been widely used in industry, at least in the context of
search. One reason for its popularity is that one can efficiently simulate
permutations by (e.g.,) universal hashing. In other words, there is no need to
store the permutation matrix. In this paper, we empirically verify this
practice, by demonstrating that even using the simplest 2-universal hashing
does not degrade the learning performance.
|
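As a rough, self-contained illustration of the two ingredients mentioned above -- simulating permutations with 2-universal hashing and keeping only the lowest b bits of each minimum -- here is a small Python sketch. The k = 30 permutations echo the figure quoted in the abstract, while the prime, b = 2 and the toy feature set are arbitrary assumptions; this is not the authors' implementation.

```python
import numpy as np

PRIME = (1 << 31) - 1   # Mersenne prime; feature indices must be smaller than this

def minhash_bbit(feature_ids, k=30, b=2, seed=0):
    """b-bit minwise hashing of a set of non-zero feature indices.

    Each of the k 'permutations' is simulated by a 2-universal hash
    h(x) = (a*x + c) mod PRIME, so no permutation matrix is stored.
    Only the lowest b bits of each minimum hash value are kept.
    """
    rng = np.random.default_rng(seed)
    a = rng.integers(1, PRIME, size=k, dtype=np.uint64)
    c = rng.integers(0, PRIME, size=k, dtype=np.uint64)
    x = np.asarray(feature_ids, dtype=np.uint64)
    hashes = (a[:, None] * x[None, :] + c[:, None]) % PRIME   # k x n hash values
    mins = hashes.min(axis=1)                                 # min over the set
    return (mins & np.uint64((1 << b) - 1)).astype(np.int64)  # keep lowest b bits

def expand(codes, b=2):
    """Expand k b-bit codes into a k * 2^b indicator vector for a linear model."""
    k = len(codes)
    vec = np.zeros(k * (1 << b))
    vec[np.arange(k) * (1 << b) + codes] = 1.0
    return vec

codes = minhash_bbit([3, 17, 42, 1_000_000], k=30, b=2)
print(expand(codes).shape)   # (120,) compact features per data point
```

The expanded indicator vectors are what a linear SVM or logistic regression would then be trained on, which is the setting of the comparison with VW in the abstract.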
1108.1351 | Raied Salman Dr | Raied Salman, Vojislav Kecman, Qi Li, Robert Strack and Erik Test | Fast k-means algorithm clustering | 16 pages, Wimo2011; International Journal of Computer Networks &
Communications (IJCNC) Vol.3, No.4, July 2011 | null | 10.5121/ijcnc.2011.3402 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | k-means has recently been recognized as one of the best algorithms for
clustering unsupervised data. Since k-means depends mainly on distance
calculation between all data points and the centers, the time cost will be high
when the size of the dataset is large (for example, more than 500 million
points). We propose a two-stage algorithm to reduce the time cost of distance
calculation for huge datasets. The first stage is a fast distance calculation
using only a small portion of the data to produce the best possible location of
the centers. The second stage is a slow distance calculation in which the
initial centers used are taken from the first stage. The fast and slow stages
represent the speed of the movement of the centers. In the slow stage, the
whole dataset can be used to get the exact location of the centers. The time
cost of the distance calculation for the fast stage is very low due to the
small size of the training data chosen. The time cost of the distance
calculation for the slow stage is also minimized due to the small number of
iterations. Different initial locations of the clusters have been used during
the test of the proposed algorithms. For large datasets, experiments show that
the 2-stage clustering method achieves better speed-up (1-9 times).
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2011 15:37:23 GMT"
}
] | 2011-08-08T00:00:00 | [
[
"Salman",
"Raied",
""
],
[
"Kecman",
"Vojislav",
""
],
[
"Li",
"Qi",
""
],
[
"Strack",
"Robert",
""
],
[
"Test",
"Erik",
""
]
] | TITLE: Fast k-means algorithm clustering
ABSTRACT: k-means has recently been recognized as one of the best algorithms for
clustering unsupervised data. Since k-means depends mainly on distance
calculation between all data points and the centers, the time cost will be high
when the size of the dataset is large (for example, more than 500 million
points). We propose a two-stage algorithm to reduce the time cost of distance
calculation for huge datasets. The first stage is a fast distance calculation
using only a small portion of the data to produce the best possible location of
the centers. The second stage is a slow distance calculation in which the
initial centers used are taken from the first stage. The fast and slow stages
represent the speed of the movement of the centers. In the slow stage, the
whole dataset can be used to get the exact location of the centers. The time
cost of the distance calculation for the fast stage is very low due to the
small size of the training data chosen. The time cost of the distance
calculation for the slow stage is also minimized due to the small number of
iterations. Different initial locations of the clusters have been used during
the test of the proposed algorithms. For large datasets, experiments show that
the 2-stage clustering method achieves better speed-up (1-9 times).
|
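The two-stage idea summarized above admits a very short sketch. The version below is an assumption, not the paper's code: it uses scikit-learn for both stages, takes a 10% random subsample for the fast stage (the fraction is an arbitrary choice here), and warm-starts the slow stage from the fast-stage centers.

```python
import numpy as np
from sklearn.cluster import KMeans

def two_stage_kmeans(X, k, sample_frac=0.1, seed=0):
    """Fast stage: k-means on a small random subsample to place the centers.
    Slow stage: k-means on the full dataset, warm-started from those centers,
    so only a few full-data iterations (the expensive part) are needed."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=max(k, int(sample_frac * n)), replace=False)

    fast = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X[idx])
    slow = KMeans(n_clusters=k, init=fast.cluster_centers_, n_init=1,
                  random_state=seed).fit(X)
    return slow

# Illustrative run on synthetic data with three well-separated groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(10000, 2)) + c for c in ([0, 0], [6, 6], [0, 7])])
model = two_stage_kmeans(X, k=3)
print(model.cluster_centers_)
```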
1108.1353 | Susheel Kumar k | K.Susheel Kumar, Vijay Bhaskar Semwal, R C Tripathi | Real time face recognition using adaboost improved fast PCA algorithm | 14 pages; ISSN : 0975-900X (Online), 0976-2191 (Print) | International Journal of Artificial Intelligence & Applications
(IJAIA), Vol.2, No.3, July 2011, 45-58 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an automated system for human face recognition in a real
time background setting for a large homemade dataset of persons' faces. The
task is very difficult, as real-time background subtraction in an image is
still a challenge. In addition, there is huge variation in human face images in
terms of size, pose and expression. The proposed system collapses most of this
variance. To detect human faces in real time, AdaBoost with a Haar cascade is
used, and a simple, fast PCA and LDA are used to recognize the detected faces.
The matched face is then used to mark attendance, in our case in the
laboratory. This biometric system is a real-time attendance system based on
human face recognition with simple and fast algorithms, achieving a high
accuracy rate.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2011 15:41:31 GMT"
}
] | 2011-08-08T00:00:00 | [
[
"Kumar",
"K. Susheel",
""
],
[
"Semwal",
"Vijay Bhaskar",
""
],
[
"Tripathi",
"R C",
""
]
] | TITLE: Real time face recognition using adaboost improved fast PCA algorithm
ABSTRACT: This paper presents an automated system for human face recognition in a real
time background setting for a large homemade dataset of persons' faces. The
task is very difficult, as real-time background subtraction in an image is
still a challenge. In addition, there is huge variation in human face images in
terms of size, pose and expression. The proposed system collapses most of this
variance. To detect human faces in real time, AdaBoost with a Haar cascade is
used, and a simple, fast PCA and LDA are used to recognize the detected faces.
The matched face is then used to mark attendance, in our case in the
laboratory. This biometric system is a real-time attendance system based on
human face recognition with simple and fast algorithms, achieving a high
accuracy rate.
|
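The pipeline described above (Haar-cascade detection, then PCA and LDA for recognition) can be approximated with off-the-shelf components. The sketch below is not the authors' system: it assumes OpenCV's bundled frontal-face Haar cascade, scikit-learn's PCA and LDA, a 1-nearest-neighbour matcher, and a pre-existing gallery of labelled face patches from which attendance would be marked.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

FACE_SIZE = (64, 64)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(bgr_frame):
    """Return cropped, resized grayscale face patches found by the Haar cascade."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], FACE_SIZE) for x, y, w, h in boxes]

def train_recognizer(face_patches, labels, n_components=50):
    """PCA gives a fast low-dimensional projection (eigenfaces), LDA separates
    the identities, and a nearest-neighbour matcher does the final lookup.
    n_components must not exceed the number of gallery images."""
    X = np.array([p.ravel().astype(np.float32) for p in face_patches])
    pca = PCA(n_components=n_components).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    knn = KNeighborsClassifier(n_neighbors=1).fit(
        lda.transform(pca.transform(X)), labels)
    return pca, lda, knn

def recognize(frame, pca, lda, knn):
    """Label every detected face in a frame; the caller can then log attendance."""
    return [knn.predict(
                lda.transform(pca.transform([f.ravel().astype(np.float32)])))[0]
            for f in detect_faces(frame)]
```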
1107.5628 | Timothy Dubois | Alex Skvortsov, Milan Jamriska and Timothy C. DuBois | Scaling laws of passive tracer dispersion in the turbulent surface layer | 5 pages, 3 figures, 1 table | Phys. Rev. E 82, 056304 (2010) | 10.1103/PhysRevE.82.056304 | null | nlin.CD physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Experimental results for passive tracer dispersion in the turbulent surface
layer under stable conditions are presented. In this case, the dispersion of
tracer particles is determined by the interplay of three mechanisms: relative
dispersion (celebrated Richardson's mechanism), shear dispersion (particle
separation due to variation of the mean velocity field) and specific
surface-layer dispersion (induced by the gradient of the energy dissipation
rate in the turbulent surface layer). The latter mechanism results in the
rather slow (ballistic) law for the mean squared particle separation. Based on
a simplified Langevin equation for particle separation we found that the
ballistic regime always dominates at large times. This conclusion is supported
by our extensive atmospheric observations. Exit-time statistics are derived
from the experimental dataset and show a reasonable match with the simple
dimensional asymptotes for different mechanisms of tracer dispersion, as well
as predictions of the multifractal model and experimental data from other
sources.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2011 05:55:35 GMT"
}
] | 2011-07-29T00:00:00 | [
[
"Skvortsov",
"Alex",
""
],
[
"Jamriska",
"Milan",
""
],
[
"DuBois",
"Timothy C.",
""
]
] | TITLE: Scaling laws of passive tracer dispersion in the turbulent surface layer
ABSTRACT: Experimental results for passive tracer dispersion in the turbulent surface
layer under stable conditions are presented. In this case, the dispersion of
tracer particles is determined by the interplay of three mechanisms: relative
dispersion (celebrated Richardson's mechanism), shear dispersion (particle
separation due to variation of the mean velocity field) and specific
surface-layer dispersion (induced by the gradient of the energy dissipation
rate in the turbulent surface layer). The latter mechanism results in the
rather slow (ballistic) law for the mean squared particle separation. Based on
a simplified Langevin equation for particle separation we found that the
ballistic regime always dominates at large times. This conclusion is supported
by our extensive atmospheric observations. Exit-time statistics are derived
from the experimental dataset and show a reasonable match with the simple
dimensional asymptotes for different mechanisms of tracer dispersion, as well
as predictions of the multifractal model and experimental data from other
sources.
|
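For orientation, the two best-known asymptotes relevant to the mechanisms named above are recalled below in textbook form; the constants, and the exact surface-layer expression derived in the paper, are not reproduced here.

```latex
% Richardson relative dispersion (inertial-range asymptote),
% with g the Richardson constant and \varepsilon the mean dissipation rate:
\langle r^2(t) \rangle \simeq g\, \varepsilon\, t^{3}.
% "Ballistic" growth, the slower law attributed to the surface-layer
% mechanism in the abstract, corresponds to
\langle r^2(t) \rangle \propto t^{2}.
```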
1107.4557 | Myle Ott | Myle Ott, Yejin Choi, Claire Cardie, Jeffrey T. Hancock | Finding Deceptive Opinion Spam by Any Stretch of the Imagination | 11 pages, 5 tables, data available at:
http://www.cs.cornell.edu/~myleott | Proceedings of ACL 2011: HLT, pp. 309-319 | null | null | cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consumers increasingly rate, review and research products online.
Consequently, websites containing consumer reviews are becoming targets of
opinion spam. While recent work has focused primarily on manually identifiable
instances of opinion spam, in this work we study deceptive opinion
spam---fictitious opinions that have been deliberately written to sound
authentic. Integrating work from psychology and computational linguistics, we
develop and compare three approaches to detecting deceptive opinion spam, and
ultimately develop a classifier that is nearly 90% accurate on our
gold-standard opinion spam dataset. Based on feature analysis of our learned
models, we additionally make several theoretical contributions, including
revealing a relationship between deceptive opinions and imaginative writing.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2011 16:02:06 GMT"
}
] | 2011-07-25T00:00:00 | [
[
"Ott",
"Myle",
""
],
[
"Choi",
"Yejin",
""
],
[
"Cardie",
"Claire",
""
],
[
"Hancock",
"Jeffrey T.",
""
]
] | TITLE: Finding Deceptive Opinion Spam by Any Stretch of the Imagination
ABSTRACT: Consumers increasingly rate, review and research products online.
Consequently, websites containing consumer reviews are becoming targets of
opinion spam. While recent work has focused primarily on manually identifiable
instances of opinion spam, in this work we study deceptive opinion
spam---fictitious opinions that have been deliberately written to sound
authentic. Integrating work from psychology and computational linguistics, we
develop and compare three approaches to detecting deceptive opinion spam, and
ultimately develop a classifier that is nearly 90% accurate on our
gold-standard opinion spam dataset. Based on feature analysis of our learned
models, we additionally make several theoretical contributions, including
revealing a relationship between deceptive opinions and imaginative writing.
|
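As a baseline-style illustration of the text-classification side of this abstract (n-gram features with a linear classifier), here is a short scikit-learn sketch. It deliberately omits the psycholinguistic features the paper combines with n-grams, and the `reviews` and `labels` variables are assumed to be loaded from the dataset released at the URL given in the comments field.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def ngram_spam_classifier(reviews, labels):
    """Unigram+bigram TF-IDF features feeding a linear SVM, evaluated by
    5-fold cross-validation; labels are 1 = deceptive, 0 = truthful."""
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
        LinearSVC(C=1.0),
    )
    scores = cross_val_score(clf, reviews, labels, cv=5)
    return clf.fit(reviews, labels), scores.mean()
```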
1104.0651 | Mariano Tepper | Mariano Tepper, Pablo Mus\'e, Andr\'es Almansa | Meaningful Clustered Forest: an Automatic and Robust Clustering
Algorithm | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new clustering technique that can be regarded as a numerical
method to compute the proximity gestalt. The method analyzes edge length
statistics in the MST of the dataset and provides an a contrario cluster
detection criterion. The approach is fully parametric on the chosen distance
and can detect arbitrarily shaped clusters. The method is also automatic, in
the sense that only a single parameter is left to the user. This parameter has
an intuitive interpretation as it controls the expected number of false
detections. We show that the iterative application of our method can (1)
provide robustness to noise and (2) solve a masking phenomenon in which a
highly populated and salient cluster dominates the scene and inhibits the
detection of less-populated, but still salient, clusters.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2011 19:04:25 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Jun 2011 22:30:03 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Jul 2011 14:39:35 GMT"
}
] | 2011-07-20T00:00:00 | [
[
"Tepper",
"Mariano",
""
],
[
"Musé",
"Pablo",
""
],
[
"Almansa",
"Andrés",
""
]
] | TITLE: Meaningful Clustered Forest: an Automatic and Robust Clustering
Algorithm
ABSTRACT: We propose a new clustering technique that can be regarded as a numerical
method to compute the proximity gestalt. The method analyzes edge length
statistics in the MST of the dataset and provides an a contrario cluster
detection criterion. The approach is fully parametric on the chosen distance
and can detect arbitrarily shaped clusters. The method is also automatic, in
the sense that only a single parameter is left to the user. This parameter has
an intuitive interpretation as it controls the expected number of false
detections. We show that the iterative application of our method can (1)
provide robustness to noise and (2) solve a masking phenomenon in which a
highly populated and salient cluster dominates the scene and inhibits the
detection of less-populated, but still salient, clusters.
|
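The central object above -- edge-length statistics of the dataset's minimum spanning tree -- can be illustrated with a few lines of SciPy. The cut rule below is a simple z-score heuristic chosen for brevity and is an assumption of this sketch; the paper's actual criterion is an a contrario test that bounds the expected number of false detections.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edge_clusters(X, z_thresh=3.0):
    """Cluster by cutting unusually long edges of the Euclidean MST."""
    dist = squareform(pdist(X))                      # pairwise distances
    mst = minimum_spanning_tree(dist).toarray()      # nonzero entries = MST edges
    edges = mst[mst > 0]
    cutoff = edges.mean() + z_thresh * edges.std()   # "unusually long" threshold
    mst[mst > cutoff] = 0                            # cut those edges
    n_clusters, labels = connected_components(mst, directed=False)
    return n_clusters, labels

# Two well-separated Gaussian blobs: the single long bridging edge gets cut.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(100, 2)), rng.normal(size=(100, 2)) + 8])
print(mst_edge_clusters(X)[0])   # typically 2 (possibly a few extra outlier pieces)
```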