id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1208.6231 | Beyza Ermis Ms | Beyza Ermi\c{s} and Evrim Acar and A. Taylan Cemgil | Link Prediction via Generalized Coupled Tensor Factorisation | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study deals with the missing link prediction problem: the problem of
predicting the existence of missing connections between entities of interest.
We address link prediction using coupled analysis of relational datasets
represented as heterogeneous data, i.e., datasets in the form of matrices and
higher-order tensors. We propose to use an approach based on probabilistic
interpretation of tensor factorisation models, i.e., Generalised Coupled Tensor
Factorisation, which can simultaneously fit a large class of tensor models to
higher-order tensors/matrices with common latent factors using different loss
functions. Numerical experiments demonstrate that joint analysis of data from
multiple sources via coupled factorisation improves the link prediction
performance, and that the selection of the right loss function and tensor model is
crucial for accurately predicting missing links.
| [
{
"version": "v1",
"created": "Thu, 30 Aug 2012 16:48:05 GMT"
}
] | 2012-08-31T00:00:00 | [
[
"Ermiş",
"Beyza",
""
],
[
"Acar",
"Evrim",
""
],
[
"Cemgil",
"A. Taylan",
""
]
] | TITLE: Link Prediction via Generalized Coupled Tensor Factorisation
ABSTRACT: This study deals with the missing link prediction problem: the problem of
predicting the existence of missing connections between entities of interest.
We address link prediction using coupled analysis of relational datasets
represented as heterogeneous data, i.e., datasets in the form of matrices and
higher-order tensors. We propose to use an approach based on probabilistic
interpretation of tensor factorisation models, i.e., Generalised Coupled Tensor
Factorisation, which can simultaneously fit a large class of tensor models to
higher-order tensors/matrices with common latent factors using different loss
functions. Numerical experiments demonstrate that joint analysis of data from
multiple sources via coupled factorisation improves the link prediction
performance, and that the selection of the right loss function and tensor model is
crucial for accurately predicting missing links.
|
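The abstract above rests on coupled factorisation of heterogeneous data through shared latent factors. As a schematic illustration of that idea (a generic coupled matrix-tensor factorisation written for one third-order tensor and one side matrix, not necessarily the exact GCTF model used in the paper):

```latex
% Schematic coupled factorisation: a third-order tensor X and a side matrix Y
% share the latent factor matrix A (illustrative form only).
X_{ijk} \approx \sum_{r=1}^{R} A_{ir} B_{jr} C_{kr}, \qquad
Y_{ij}  \approx \sum_{r=1}^{R} A_{ir} D_{jr}
```

Missing links are then read off the reconstructed entries of $X$; per the abstract, the loss used to fit each block is itself a crucial modelling choice.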
1204.4166 | Yandong Guo | Yuan Qi and Yandong Guo | Message passing with relaxed moment matching | null | null | null | null | cs.LG stat.CO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian learning is often hampered by large computational expense. As a
powerful generalization of popular belief propagation, expectation propagation
(EP) efficiently approximates the exact Bayesian computation. Nevertheless, EP
can be sensitive to outliers and suffer from divergence for difficult cases. To
address this issue, we propose a new approximate inference approach, relaxed
expectation propagation (REP). It relaxes the moment matching requirement of
expectation propagation by adding a relaxation factor into the KL minimization.
We penalize this relaxation with a $l_1$ penalty. As a result, when two
distributions in the relaxed KL divergence are similar, the relaxation factor
will be penalized to zero and, therefore, we obtain the original moment
matching; in the presence of outliers, these two distributions are
significantly different and the relaxation factor will be used to reduce the
contribution of the outlier. Based on this penalized KL minimization, REP is
robust to outliers and can greatly improve the posterior approximation quality
over EP. To examine the effectiveness of REP, we apply it to Gaussian process
classification, a task known to be suitable to EP. Our classification results
on synthetic and UCI benchmark datasets demonstrate significant improvement of
REP over EP and Power EP--in terms of algorithmic stability, estimation
accuracy and predictive performance.
| [
{
"version": "v1",
"created": "Wed, 18 Apr 2012 19:21:59 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Aug 2012 16:02:21 GMT"
}
] | 2012-08-30T00:00:00 | [
[
"Qi",
"Yuan",
""
],
[
"Guo",
"Yandong",
""
]
] | TITLE: Message passing with relaxed moment matching
ABSTRACT: Bayesian learning is often hampered by large computational expense. As a
powerful generalization of popular belief propagation, expectation propagation
(EP) efficiently approximates the exact Bayesian computation. Nevertheless, EP
can be sensitive to outliers and suffer from divergence for difficult cases. To
address this issue, we propose a new approximate inference approach, relaxed
expectation propagation (REP). It relaxes the moment matching requirement of
expectation propagation by adding a relaxation factor into the KL minimization.
We penalize this relaxation with an $l_1$ penalty. As a result, when two
distributions in the relaxed KL divergence are similar, the relaxation factor
will be penalized to zero and, therefore, we obtain the original moment
matching; in the presence of outliers, these two distributions are
significantly different and the relaxation factor will be used to reduce the
contribution of the outlier. Based on this penalized KL minimization, REP is
robust to outliers and can greatly improve the posterior approximation quality
over EP. To examine the effectiveness of REP, we apply it to Gaussian process
classification, a task known to be suitable to EP. Our classification results
on synthetic and UCI benchmark datasets demonstrate significant improvement of
REP over EP and Power EP--in terms of algorithmic stability, estimation
accuracy and predictive performance.
|
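The REP abstract above describes relaxing EP's moment matching by adding a relaxation variable to the KL minimization and penalizing it with an $l_1$ term. One schematic way to write this idea (an illustrative form inferred from the abstract, not the paper's exact objective) is:

```latex
% EP moment matching for site i minimizes KL( \tilde p_i q^{\setminus i} || q ).
% REP (schematic): a relaxation variable r is penalized with an l1 term, so r
% shrinks to zero for well-behaved sites and absorbs the mismatch from outliers.
\min_{q,\; r} \; \mathrm{KL}\!\left( \tilde p_i(\theta)\, q^{\setminus i}(\theta)
  \,\middle\|\, q_r(\theta) \right) \;+\; \lambda \lVert r \rVert_1
```

Here $q_r$ denotes the approximating family with its matched moments shifted by $r$; when $r = 0$ the original moment matching is recovered, consistent with the abstract's description.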
1208.5792 | Stefano Allesina | Stefano Allesina | Measuring Nepotism Through Shared Last Names: Response to Ferlazzo and
Sdoia | 17 pages, 1 figure | null | null | null | stat.AP physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a recent article, I showed that in several academic disciplines in Italy,
professors display a paucity of last names that cannot be explained by
unbiased, random, hiring processes. I suggested that this scarcity of last
names could be related to the prevalence of nepotistic hires, i.e., professors
engaging in illegal practices to have their relatives hired as academics. My
findings have recently been questioned through a repeat analysis applied to the
United Kingdom university system. Ferlazzo & Sdoia found that several disciplines in
this system also display a scarcity of last names, and that a similar scarcity
is found when analyzing the first (given) names of Italian professors. Here I
show that the scarcity of first names in Italian disciplines is completely
explained by uneven male/female representation, while the scarcity of last
names in United Kingdom academia is due to discipline-specific immigration.
However, these factors cannot explain the scarcity of last names in Italian
disciplines. Geographic and demographic considerations -- proposed as a
possible explanation of my findings -- appear to have no significant effect:
after correcting for these factors, the scarcity of last names remains highly
significant in several disciplines, and there is a marked trend from north to
south, with a higher likelihood of nepotism in the south and in Sicily.
Moreover, I show that in several Italian disciplines positions tend to be
inherited as with last names (i.e., from father to son, but not from mother to
daughter). Taken together, these results strengthen the case for nepotism,
highlighting that statistical tests cannot be applied to a dataset without
carefully considering the characteristics of the data and critically
interpreting the results.
| [
{
"version": "v1",
"created": "Tue, 28 Aug 2012 20:57:52 GMT"
}
] | 2012-08-30T00:00:00 | [
[
"Allesina",
"Stefano",
""
]
] | TITLE: Measuring Nepotism Through Shared Last Names: Response to Ferlazzo and
Sdoia
ABSTRACT: In a recent article, I showed that in several academic disciplines in Italy,
professors display a paucity of last names that cannot be explained by
unbiased, random, hiring processes. I suggested that this scarcity of last
names could be related to the prevalence of nepotistic hires, i.e., professors
engaging in illegal practices to have their relatives hired as academics. My
findings have recently been questioned through a repeat analysis applied to the
United Kingdom university system. Ferlazzo & Sdoia found that several disciplines in
this system also display a scarcity of last names, and that a similar scarcity
is found when analyzing the first (given) names of Italian professors. Here I
show that the scarcity of first names in Italian disciplines is completely
explained by uneven male/female representation, while the scarcity of last
names in United Kingdom academia is due to discipline-specific immigration.
However, these factors cannot explain the scarcity of last names in Italian
disciplines. Geographic and demographic considerations -- proposed as a
possible explanation of my findings -- appear to have no significant effect:
after correcting for these factors, the scarcity of last names remains highly
significant in several disciplines, and there is a marked trend from north to
south, with a higher likelihood of nepotism in the south and in Sicily.
Moreover, I show that in several Italian disciplines positions tend to be
inherited as with last names (i.e., from father to son, but not from mother to
daughter). Taken together, these results strengthen the case for nepotism,
highlighting that statistical tests cannot be applied to a dataset without
carefully considering the characteristics of the data and critically
interpreting the results.
|
0911.4889 | Cms Collaboration | CMS Collaboration | Commissioning of the CMS High-Level Trigger with Cosmic Rays | null | JINST 5:T03005,2010 | 10.1088/1748-0221/5/03/T03005 | CMS-CFT-09-020 | physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The CMS High-Level Trigger (HLT) is responsible for ensuring that data
samples with potentially interesting events are recorded with high efficiency
and good quality. This paper gives an overview of the HLT and focuses on its
commissioning using cosmic rays. The selection of triggers that were deployed
is presented and the online grouping of triggered events into streams and
primary datasets is discussed. Tools for online and offline data quality
monitoring for the HLT are described, and the operational performance of the
muon HLT algorithms is reviewed. The average time taken for the HLT selection
and its dependence on detector and operating conditions are presented. The HLT
performed reliably and helped provide a large dataset. This dataset has proven
to be invaluable for understanding the performance of the trigger and the CMS
experiment as a whole.
| [
{
"version": "v1",
"created": "Wed, 25 Nov 2009 15:49:24 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jan 2010 14:00:10 GMT"
}
] | 2012-08-27T00:00:00 | [
[
"CMS Collaboration",
"",
""
]
] | TITLE: Commissioning of the CMS High-Level Trigger with Cosmic Rays
ABSTRACT: The CMS High-Level Trigger (HLT) is responsible for ensuring that data
samples with potentially interesting events are recorded with high efficiency
and good quality. This paper gives an overview of the HLT and focuses on its
commissioning using cosmic rays. The selection of triggers that were deployed
is presented and the online grouping of triggered events into streams and
primary datasets is discussed. Tools for online and offline data quality
monitoring for the HLT are described, and the operational performance of the
muon HLT algorithms is reviewed. The average time taken for the HLT selection
and its dependence on detector and operating conditions are presented. The HLT
performed reliably and helped provide a large dataset. This dataset has proven
to be invaluable for understanding the performance of the trigger and the CMS
experiment as a whole.
|
1011.6665 | Atlas Publications | The ATLAS Collaboration | Studies of the performance of the ATLAS detector using cosmic-ray muons | 22 pages plus author list (33 pages total), 21 figures, 2 tables | Eur.Phys.J. C71 (2011) 1593 | 10.1140/epjc/s10052-011-1593-6 | CERN-PH-EP-2010-070 | physics.ins-det hep-ex | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Muons from cosmic-ray interactions in the atmosphere provide a
high-statistics source of particles that can be used to study the performance
and calibration of the ATLAS detector. Cosmic-ray muons can penetrate to the
cavern and deposit energy in all detector subsystems. Such events have played
an important role in the commissioning of the detector since the start of the
installation phase in 2005 and were particularly important for understanding
the detector performance in the time prior to the arrival of the first LHC
beams. Global cosmic-ray runs were undertaken in both 2008 and 2009 and these
data have been used through to the early phases of collision data-taking as a
tool for calibration, alignment and detector monitoring. These large datasets
have also been used for detector performance studies, including investigations
that rely on the combined performance of different subsystems. This paper
presents the results of performance studies related to combined tracking,
lepton identification and the reconstruction of jets and missing transverse
energy. Results are compared to expectations based on a cosmic-ray event
generator and a full simulation of the detector response.
| [
{
"version": "v1",
"created": "Tue, 30 Nov 2010 20:23:11 GMT"
}
] | 2012-08-27T00:00:00 | [
[
"The ATLAS Collaboration",
"",
""
]
] | TITLE: Studies of the performance of the ATLAS detector using cosmic-ray muons
ABSTRACT: Muons from cosmic-ray interactions in the atmosphere provide a
high-statistics source of particles that can be used to study the performance
and calibration of the ATLAS detector. Cosmic-ray muons can penetrate to the
cavern and deposit energy in all detector subsystems. Such events have played
an important role in the commissioning of the detector since the start of the
installation phase in 2005 and were particularly important for understanding
the detector performance in the time prior to the arrival of the first LHC
beams. Global cosmic-ray runs were undertaken in both 2008 and 2009 and these
data have been used through to the early phases of collision data-taking as a
tool for calibration, alignment and detector monitoring. These large datasets
have also been used for detector performance studies, including investigations
that rely on the combined performance of different subsystems. This paper
presents the results of performance studies related to combined tracking,
lepton identification and the reconstruction of jets and missing transverse
energy. Results are compared to expectations based on a cosmic-ray event
generator and a full simulation of the detector response.
|
1208.4429 | Tshilidzi Marwala | E. Hurwitz and T. Marwala | Common Mistakes when Applying Computational Intelligence and Machine
Learning to Stock Market modelling | 5 pages | null | null | null | stat.AP cs.CY q-fin.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a number of reasons, computational intelligence and machine learning
methods have been largely dismissed by the professional community. The reasons
for this are numerous and varied, but inevitably amongst the reasons given is
that the systems designed often do not perform as expected by their designers.
This lack of performance is a direct result of mistakes that are commonly seen
in market-prediction systems. This paper examines some of the
more common mistakes, namely dataset insufficiency; inappropriate scaling;
time-series tracking; inappropriate target quantification and inappropriate
measures of performance. The rationale that leads to each of these mistakes is
examined, as well as the nature of the errors they introduce to the analysis /
design. Alternative ways of performing each task are also recommended in order
to avoid perpetuating these mistakes, and hopefully to aid in clearing the way
for the use of these powerful techniques in industry.
| [
{
"version": "v1",
"created": "Wed, 22 Aug 2012 06:20:00 GMT"
}
] | 2012-08-23T00:00:00 | [
[
"Hurwitz",
"E.",
""
],
[
"Marwala",
"T.",
""
]
] | TITLE: Common Mistakes when Applying Computational Intelligence and Machine
Learning to Stock Market modelling
ABSTRACT: For a number of reasons, computational intelligence and machine learning
methods have been largely dismissed by the professional community. The reasons
for this are numerous and varied, but inevitably amongst the reasons given is
that the systems designed often do not perform as expected by their designers.
This lack of performance is a direct result of mistakes that are commonly seen
in market-prediction systems. This paper examines some of the
more common mistakes, namely dataset insufficiency; inappropriate scaling;
time-series tracking; inappropriate target quantification and inappropriate
measures of performance. The rationale that leads to each of these mistakes is
examined, as well as the nature of the errors they introduce to the analysis /
design. Alternative ways of performing each task are also recommended in order
to avoid perpetuating these mistakes, and hopefully to aid in clearing the way
for the use of these powerful techniques in industry.
|
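The abstract above lists "inappropriate scaling" among the common mistakes but does not spell out an example. A frequently cited instance of this pitfall (an illustration consistent with the abstract, not necessarily the paper's own example) is fitting the scaler on the full time series, which leaks future information into the training window:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

prices = np.cumsum(np.random.randn(1000)).reshape(-1, 1)  # synthetic price series
split = 800
train, test = prices[:split], prices[split:]

# Leaky: the scaler sees the test period, so future highs/lows inform the training data.
leaky = MinMaxScaler().fit(prices)
train_leaky = leaky.transform(train)

# Leak-free: fit on the training window only, then apply the same transform to the test window.
scaler = MinMaxScaler().fit(train)
train_ok, test_ok = scaler.transform(train), scaler.transform(test)
```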
1208.4138 | Zahoor Khan | Ashraf Mohammed Iqbal, Abidalrahman Moh'd, Zahoor Khan | Semi-supervised Clustering Ensemble by Voting | The International Conference on Information and Communication Systems
(ICICS 2009), Amman, Jordan | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering ensemble is one of the most recent advances in unsupervised
learning. It aims to combine the clustering results obtained using different
algorithms or from different runs of the same clustering algorithm for the same
data set. This is accomplished using a consensus function, and the efficiency
and accuracy of this method have been proven in many works in the literature. In the
first part of this paper we make a comparison among current approaches to
clustering ensemble in literature. All of these approaches consist of two main
steps: the ensemble generation and consensus function. In the second part of
the paper, we suggest engaging supervision in the clustering ensemble procedure
to get more enhancements on the clustering results. Supervision can be applied
in two places: either by using semi-supervised algorithms in the clustering
ensemble generation step or in the form of a feedback used by the consensus
function stage. Also, we introduce a flexible two-parameter weighting
mechanism: the first parameter describes the compatibility between the datasets
under study and the semi-supervised clustering algorithms used to generate the
base partitions, while the second parameter is used to provide the user feedback on
these partitions. The two parameters are engaged in a "relabeling and
voting" based consensus function to produce the final clustering.
| [
{
"version": "v1",
"created": "Mon, 20 Aug 2012 23:21:10 GMT"
}
] | 2012-08-22T00:00:00 | [
[
"Iqbal",
"Ashraf Mohammed",
""
],
[
"Moh'd",
"Abidalrahman",
""
],
[
"Khan",
"Zahoor",
""
]
] | TITLE: Semi-supervised Clustering Ensemble by Voting
ABSTRACT: Clustering ensemble is one of the most recent advances in unsupervised
learning. It aims to combine the clustering results obtained using different
algorithms or from different runs of the same clustering algorithm for the same
data set. This is accomplished using a consensus function, and the efficiency
and accuracy of this method have been proven in many works in the literature. In the
first part of this paper we make a comparison among current approaches to
clustering ensemble in literature. All of these approaches consist of two main
steps: the ensemble generation and consensus function. In the second part of
the paper, we suggest engaging supervision in the clustering ensemble procedure
to get more enhancements on the clustering results. Supervision can be applied
in two places: either by using semi-supervised algorithms in the clustering
ensemble generation step or in the form of a feedback used by the consensus
function stage. Also, we introduce a flexible two-parameter weighting
mechanism: the first parameter describes the compatibility between the datasets
under study and the semi-supervised clustering algorithms used to generate the
base partitions, while the second parameter is used to provide the user feedback on
these partitions. The two parameters are engaged in a "relabeling and
voting" based consensus function to produce the final clustering.
|
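The abstract above describes a "relabeling and voting" consensus: base partitions are first relabeled onto a common label space and then combined by (weighted) voting. The sketch below shows a minimal unweighted version of that idea (label alignment against a reference partition via the Hungarian method, then a majority vote); the two weighting parameters mentioned in the abstract are omitted:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def relabel(reference, partition, k):
    """Map the labels of `partition` onto those of `reference` by maximum overlap."""
    overlap = np.zeros((k, k))
    for r, p in zip(reference, partition):
        overlap[r, p] += 1
    _, mapping = linear_sum_assignment(-overlap)      # maximize label agreement
    inverse = np.empty(k, dtype=int)
    inverse[mapping] = np.arange(k)
    return inverse[partition]

def vote_consensus(partitions, k):
    """Relabel every base partition against the first one, then take a majority vote."""
    reference = partitions[0]
    aligned = [reference] + [relabel(reference, p, k) for p in partitions[1:]]
    votes = np.stack(aligned)                          # shape: (n_partitions, n_samples)
    return np.array([np.bincount(col, minlength=k).argmax() for col in votes.T])

# Example: three base partitions of six points into k=2 clusters
parts = [np.array([0, 0, 0, 1, 1, 1]),
         np.array([1, 1, 0, 0, 0, 0]),
         np.array([0, 0, 0, 1, 1, 0])]
print(vote_consensus(parts, k=2))                      # -> [0 0 0 1 1 1]
```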
1208.4238 | Enrico Siragusa | Enrico Siragusa, David Weese, Knut Reinert | Fast and sensitive read mapping with approximate seeds and multiple
backtracking | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Masai, a read mapper representing the state of the art in terms of
speed and sensitivity. Our tool is an order of magnitude faster than RazerS 3
and mrFAST, 2--3 times faster and more accurate than Bowtie 2 and BWA. The
novelties of our read mapper are filtration with approximate seeds and a method
for multiple backtracking. Approximate seeds, compared to exact seeds, increase
filtration specificity while preserving sensitivity. Multiple backtracking
amortizes the cost of searching a large set of seeds by taking advantage of the
repetitiveness of next-generation sequencing data. Combined together, these two
methods significantly speed up approximate search on genomic datasets. Masai is
implemented in C++ using the SeqAn library. The source code is distributed
under the BSD license and binaries for Linux, Mac OS X and Windows can be
freely downloaded from http://www.seqan.de/projects/masai.
| [
{
"version": "v1",
"created": "Tue, 21 Aug 2012 11:08:06 GMT"
}
] | 2012-08-22T00:00:00 | [
[
"Siragusa",
"Enrico",
""
],
[
"Weese",
"David",
""
],
[
"Reinert",
"Knut",
""
]
] | TITLE: Fast and sensitive read mapping with approximate seeds and multiple
backtracking
ABSTRACT: We present Masai, a read mapper representing the state of the art in terms of
speed and sensitivity. Our tool is an order of magnitude faster than RazerS 3
and mrFAST, 2--3 times faster and more accurate than Bowtie 2 and BWA. The
novelties of our read mapper are filtration with approximate seeds and a method
for multiple backtracking. Approximate seeds, compared to exact seeds, increase
filtration specificity while preserving sensitivity. Multiple backtracking
amortizes the cost of searching a large set of seeds by taking advantage of the
repetitiveness of next-generation sequencing data. Combined together, these two
methods significantly speed up approximate search on genomic datasets. Masai is
implemented in C++ using the SeqAn library. The source code is distributed
under the BSD license and binaries for Linux, Mac OS X and Windows can be
freely downloaded from http://www.seqan.de/projects/masai.
|
1205.3137 | Saurabh Singh | Saurabh Singh, Abhinav Gupta, Alexei A. Efros | Unsupervised Discovery of Mid-Level Discriminative Patches | null | European Conference on Computer Vision, 2012 | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this paper is to discover a set of discriminative patches which
can serve as a fully unsupervised mid-level visual representation. The desired
patches need to satisfy two requirements: 1) to be representative, they need to
occur frequently enough in the visual world; 2) to be discriminative, they need
to be different enough from the rest of the visual world. The patches could
correspond to parts, objects, "visual phrases", etc. but are not restricted to
be any one of them. We pose this as an unsupervised discriminative clustering
problem on a huge dataset of image patches. We use an iterative procedure which
alternates between clustering and training discriminative classifiers, while
applying careful cross-validation at each step to prevent overfitting. The
paper experimentally demonstrates the effectiveness of discriminative patches
as an unsupervised mid-level visual representation, suggesting that it could be
used in place of visual words for many tasks. Furthermore, discriminative
patches can also be used in a supervised regime, such as scene classification,
where they demonstrate state-of-the-art performance on the MIT Indoor-67
dataset.
| [
{
"version": "v1",
"created": "Mon, 14 May 2012 18:52:57 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Aug 2012 04:16:13 GMT"
}
] | 2012-08-21T00:00:00 | [
[
"Singh",
"Saurabh",
""
],
[
"Gupta",
"Abhinav",
""
],
[
"Efros",
"Alexei A.",
""
]
] | TITLE: Unsupervised Discovery of Mid-Level Discriminative Patches
ABSTRACT: The goal of this paper is to discover a set of discriminative patches which
can serve as a fully unsupervised mid-level visual representation. The desired
patches need to satisfy two requirements: 1) to be representative, they need to
occur frequently enough in the visual world; 2) to be discriminative, they need
to be different enough from the rest of the visual world. The patches could
correspond to parts, objects, "visual phrases", etc. but are not restricted to
be any one of them. We pose this as an unsupervised discriminative clustering
problem on a huge dataset of image patches. We use an iterative procedure which
alternates between clustering and training discriminative classifiers, while
applying careful cross-validation at each step to prevent overfitting. The
paper experimentally demonstrates the effectiveness of discriminative patches
as an unsupervised mid-level visual representation, suggesting that it could be
used in place of visual words for many tasks. Furthermore, discriminative
patches can also be used in a supervised regime, such as scene classification,
where they demonstrate state-of-the-art performance on the MIT Indoor-67
dataset.
|
1208.3943 | Jay Gholap B.Tech.(Computer Engineering) | Jay Gholap | Performance Tuning Of J48 Algorithm For Prediction Of Soil Fertility | 5 Pages | Published in Asian Journal of Computer Science and Information
Technology,Vol 2,No. 8 (2012) | null | null | cs.LG cs.DB cs.PF stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data mining involves the systematic analysis of large data sets, and data
mining in agricultural soil datasets is exciting and modern research area. The
productive capacity of a soil depends on soil fertility. Achieving and
maintaining appropriate levels of soil fertility, is of utmost importance if
agricultural land is to remain capable of nourishing crop production. In this
research, Steps for building a predictive model of soil fertility have been
explained.
This paper aims at predicting soil fertility class using decision tree
algorithms in data mining . Further, it focuses on performance tuning of J48
decision tree algorithm with the help of meta-techniques such as attribute
selection and boosting.
| [
{
"version": "v1",
"created": "Mon, 20 Aug 2012 08:48:40 GMT"
}
] | 2012-08-21T00:00:00 | [
[
"Gholap",
"Jay",
""
]
] | TITLE: Performance Tuning Of J48 Algorithm For Prediction Of Soil Fertility
ABSTRACT: Data mining involves the systematic analysis of large data sets, and data
mining in agricultural soil datasets is an exciting and modern research area. The
productive capacity of a soil depends on soil fertility. Achieving and
maintaining appropriate levels of soil fertility is of utmost importance if
agricultural land is to remain capable of nourishing crop production. In this
research, the steps for building a predictive model of soil fertility are
explained.
This paper aims at predicting the soil fertility class using decision tree
algorithms in data mining. Further, it focuses on performance tuning of the J48
decision tree algorithm with the help of meta-techniques such as attribute
selection and boosting.
|
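The abstract above names two meta-techniques, attribute selection and boosting, applied to the J48 (C4.5) decision tree. As a hedged sketch of that pipeline using scikit-learn stand-ins (CART instead of Weka's J48, AdaBoost for boosting; the soil-sample CSV path and column names are hypothetical):

```python
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical dataset: numeric soil attributes plus a categorical fertility class.
df = pd.read_csv("soil_samples.csv")
X, y = df.drop(columns=["fertility_class"]), df["fertility_class"]

# Attribute selection followed by a boosted decision tree.
model = make_pipeline(
    SelectKBest(score_func=f_classif, k=8),
    AdaBoostClassifier(DecisionTreeClassifier(max_depth=5), n_estimators=50),
)
print(cross_val_score(model, X, y, cv=10).mean())
```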
1208.3623 | Rafi Muhammad | Muhammad Rafi, Sundus Hassan and Mohammad Shahid Shaikh | Content-based Text Categorization using Wikitology | 9 pages; IJCSI August 2012 | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major computational burden, while performing document clustering, is the
calculation of similarity measure between a pair of documents. Similarity
measure is a function that assigns a real number between 0 and 1 to a pair of
documents, depending upon the degree of similarity between them. A value of
zero means that the documents are completely dissimilar whereas a value of one
indicates that the documents are practically identical. Traditionally,
vector-based models have been used for computing the document similarity. The
vector-based models represent several features present in documents. These
approaches to similarity measures, in general, cannot account for the semantics
of the document. Documents written in human languages contain contexts and the
words used to describe these contexts are generally semantically related.
Motivated by this fact, many researchers have proposed semantic-based
similarity measures by utilizing text annotation through external thesauruses
like WordNet (a lexical database). In this paper, we define a semantic
similarity measure based on documents represented in topic maps. Topic maps are
rapidly becoming an industrial standard for knowledge representation with a
focus on later search and extraction. The documents are transformed into a
topic map based coded knowledge and the similarity between a pair of documents
is represented as a correlation between the common patterns. The experimental
studies on the text mining datasets reveal that this new similarity measure is
more effective as compared to commonly used similarity measures in text
clustering.
| [
{
"version": "v1",
"created": "Fri, 17 Aug 2012 15:49:38 GMT"
}
] | 2012-08-20T00:00:00 | [
[
"Rafi",
"Muhammad",
""
],
[
"Hassan",
"Sundus",
""
],
[
"Shaikh",
"Mohammad Shahid",
""
]
] | TITLE: Content-based Text Categorization using Wikitology
ABSTRACT: A major computational burden, while performing document clustering, is the
calculation of similarity measure between a pair of documents. Similarity
measure is a function that assigns a real number between 0 and 1 to a pair of
documents, depending upon the degree of similarity between them. A value of
zero means that the documents are completely dissimilar whereas a value of one
indicates that the documents are practically identical. Traditionally,
vector-based models have been used for computing the document similarity. The
vector-based models represent several features present in documents. These
approaches to similarity measures, in general, cannot account for the semantics
of the document. Documents written in human languages contain contexts and the
words used to describe these contexts are generally semantically related.
Motivated by this fact, many researchers have proposed semantic-based
similarity measures by utilizing text annotation through external thesauruses
like WordNet (a lexical database). In this paper, we define a semantic
similarity measure based on documents represented in topic maps. Topic maps are
rapidly becoming an industrial standard for knowledge representation with a
focus on later search and extraction. The documents are transformed into a
topic map based coded knowledge and the similarity between a pair of documents
is represented as a correlation between the common patterns. The experimental
studies on the text mining datasets reveal that this new similarity measure is
more effective as compared to commonly used similarity measures in text
clustering.
|
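The abstract above defines a document similarity measure as a function mapping a pair of documents to [0, 1] and names vector-based models as the traditional baseline. A minimal sketch of that baseline (TF-IDF vectors with cosine similarity, the kind of measure the paper contrasts with its topic-map-based one, not the proposed measure itself):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["topic maps encode subjects and associations",
        "vector space models represent documents as term weights"]

tfidf = TfidfVectorizer().fit_transform(docs)        # one row per document
sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]    # value in [0, 1] for non-negative weights
print(f"cosine similarity: {sim:.3f}")
```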
1112.4133 | Hocine Cherifi | Vincent Labatut, Hocine Cherifi (Le2i) | Evaluation of Performance Measures for Classifiers Comparison | null | Ubiquitous Computing and Communication Journal, 6:21-34, 2011 | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The selection of the best classification algorithm for a given dataset is a
very widespread problem, occurring each time one has to choose a classifier to
solve a real-world problem. It is also a complex task with many important
methodological decisions to make. Among those, one of the most crucial is the
choice of an appropriate measure in order to properly assess the classification
performance and rank the algorithms. In this article, we focus on this specific
task. We present the most popular measures and compare their behavior through
discrimination plots. We then discuss their properties from a more theoretical
perspective. It turns out that several of them are equivalent for classifier
comparison purposes. Furthermore, they can also lead to interpretation problems.
Among the numerous measures proposed over the years, it appears that the
classical overall success rate and marginal rates are the most suitable for the
classifier comparison task.
| [
{
"version": "v1",
"created": "Sun, 18 Dec 2011 08:02:49 GMT"
}
] | 2012-08-16T00:00:00 | [
[
"Labatut",
"Vincent",
"",
"Le2i"
],
[
"Cherifi",
"Hocine",
"",
"Le2i"
]
] | TITLE: Evaluation of Performance Measures for Classifiers Comparison
ABSTRACT: The selection of the best classification algorithm for a given dataset is a
very widespread problem, occurring each time one has to choose a classifier to
solve a real-world problem. It is also a complex task with many important
methodological decisions to make. Among those, one of the most crucial is the
choice of an appropriate measure in order to properly assess the classification
performance and rank the algorithms. In this article, we focus on this specific
task. We present the most popular measures and compare their behavior through
discrimination plots. We then discuss their properties from a more theoretical
perspective. It turns out that several of them are equivalent for classifier
comparison purposes. Furthermore, they can also lead to interpretation problems.
Among the numerous measures proposed over the years, it appears that the
classical overall success rate and marginal rates are the most suitable for the
classifier comparison task.
|
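The abstract above singles out the classical overall success rate among the measures compared. For reference, with a $K$-class confusion matrix $C$ whose entry $C_{ij}$ counts instances of true class $i$ predicted as class $j$, the overall success rate is the standard quantity:

```latex
% Overall success rate (accuracy) from a K-class confusion matrix C.
\mathrm{OSR} \;=\; \frac{\sum_{i=1}^{K} C_{ii}}{\sum_{i=1}^{K}\sum_{j=1}^{K} C_{ij}}
```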
1009.0881 | Nicolas Gillis | Nicolas Gillis, Fran\c{c}ois Glineur | A Multilevel Approach For Nonnegative Matrix Factorization | 23 pages, 10 figures. Section 6 added discussing limitations of the
method. Accepted in Journal of Computational and Applied Mathematics | Journal of Computational and Applied Mathematics 236 (7), pp.
1708-1723, 2012 | 10.1016/j.cam.2011.10.002 | null | math.OC cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonnegative Matrix Factorization (NMF) is the problem of approximating a
nonnegative matrix with the product of two low-rank nonnegative matrices and
has been shown to be particularly useful in many applications, e.g., in text
mining, image processing, computational biology, etc. In this paper, we explain
how algorithms for NMF can be embedded into the framework of multilevel methods
in order to accelerate their convergence. This technique can be applied in
situations where data admit a good approximate representation in a lower
dimensional space through linear transformations preserving nonnegativity. A
simple multilevel strategy is described and is experimentally shown to speed up
significantly three popular NMF algorithms (alternating nonnegative least
squares, multiplicative updates and hierarchical alternating least squares) on
several standard image datasets.
| [
{
"version": "v1",
"created": "Sat, 4 Sep 2010 22:55:34 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Sep 2010 16:47:01 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Oct 2011 00:02:21 GMT"
}
] | 2012-08-13T00:00:00 | [
[
"Gillis",
"Nicolas",
""
],
[
"Glineur",
"François",
""
]
] | TITLE: A Multilevel Approach For Nonnegative Matrix Factorization
ABSTRACT: Nonnegative Matrix Factorization (NMF) is the problem of approximating a
nonnegative matrix with the product of two low-rank nonnegative matrices and
has been shown to be particularly useful in many applications, e.g., in text
mining, image processing, computational biology, etc. In this paper, we explain
how algorithms for NMF can be embedded into the framework of multilevel methods
in order to accelerate their convergence. This technique can be applied in
situations where data admit a good approximate representation in a lower
dimensional space through linear transformations preserving nonnegativity. A
simple multilevel strategy is described and is experimentally shown to speed up
significantly three popular NMF algorithms (alternating nonnegative least
squares, multiplicative updates and hierarchical alternating least squares) on
several standard image datasets.
|
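The abstract above concerns NMF, $V \approx WH$ with $W, H \ge 0$, and names the multiplicative updates among the accelerated algorithms. As a point of reference, a minimal sketch of the classical Lee-Seung multiplicative updates (the baseline solver only; the multilevel coarsening itself is not shown):

```python
import numpy as np

def nmf_multiplicative(V, r, iters=200, eps=1e-9):
    """Approximate V >= 0 by W @ H with W, H >= 0 using Lee-Seung multiplicative updates."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

V = np.abs(np.random.rand(100, 60))            # stand-in for an image/text data matrix
W, H = nmf_multiplicative(V, r=10)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```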
1107.5194 | Nicolas Gillis | Nicolas Gillis, Fran\c{c}ois Glineur | Accelerated Multiplicative Updates and Hierarchical ALS Algorithms for
Nonnegative Matrix Factorization | 17 pages, 10 figures. New Section 4 about the convergence of the
accelerated algorithms; Removed Section 5 about efficiency of HALS. Accepted
in Neural Computation | Neural Computation 24 (4), pp. 1085-1105, 2012 | 10.1162/NECO_a_00256 | null | math.OC cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonnegative matrix factorization (NMF) is a data analysis technique used in a
great variety of applications such as text mining, image processing,
hyperspectral data analysis, computational biology, and clustering. In this
paper, we consider two well-known algorithms designed to solve NMF problems,
namely the multiplicative updates of Lee and Seung and the hierarchical
alternating least squares of Cichocki et al. We propose a simple way to
significantly accelerate these schemes, based on a careful analysis of the
computational cost needed at each iteration, while preserving their convergence
properties. This acceleration technique can also be applied to other
algorithms, which we illustrate on the projected gradient method of Lin. The
efficiency of the accelerated algorithms is empirically demonstrated on image
and text datasets, and compares favorably with a state-of-the-art alternating
nonnegative least squares algorithm.
| [
{
"version": "v1",
"created": "Tue, 26 Jul 2011 12:26:07 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2011 13:16:20 GMT"
}
] | 2012-08-13T00:00:00 | [
[
"Gillis",
"Nicolas",
""
],
[
"Glineur",
"François",
""
]
] | TITLE: Accelerated Multiplicative Updates and Hierarchical ALS Algorithms for
Nonnegative Matrix Factorization
ABSTRACT: Nonnegative matrix factorization (NMF) is a data analysis technique used in a
great variety of applications such as text mining, image processing,
hyperspectral data analysis, computational biology, and clustering. In this
paper, we consider two well-known algorithms designed to solve NMF problems,
namely the multiplicative updates of Lee and Seung and the hierarchical
alternating least squares of Cichocki et al. We propose a simple way to
significantly accelerate these schemes, based on a careful analysis of the
computational cost needed at each iteration, while preserving their convergence
properties. This acceleration technique can also be applied to other
algorithms, which we illustrate on the projected gradient method of Lin. The
efficiency of the accelerated algorithms is empirically demonstrated on image
and text datasets, and compares favorably with a state-of-the-art alternating
nonnegative least squares algorithm.
|
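This abstract names the hierarchical alternating least squares (HALS) scheme of Cichocki et al. as one of the two algorithms being accelerated. A minimal sketch of a plain, unaccelerated HALS sweep, which updates one rank-one factor at a time in closed form (the paper's acceleration, reusing each factor update several times per sweep, is not shown):

```python
import numpy as np

def nmf_hals(V, r, iters=200, eps=1e-9):
    """Plain HALS for V ~= W @ H: cyclic closed-form updates of each column of W / row of H."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        R = V - W @ H                                   # current residual
        for k in range(r):
            R += np.outer(W[:, k], H[k, :])             # add back the k-th rank-one term
            W[:, k] = np.maximum(eps, R @ H[k, :] / (H[k, :] @ H[k, :] + eps))
            H[k, :] = np.maximum(eps, W[:, k] @ R / (W[:, k] @ W[:, k] + eps))
            R -= np.outer(W[:, k], H[k, :])             # subtract the updated term
    return W, H
```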
1208.1846 | Qiang Qian | Guangxu Guo and Songcan Chen | Margin Distribution Controlled Boosting | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Schapire's margin theory provides a theoretical explanation to the success of
boosting-type methods and manifests that a good margin distribution (MD) of
training samples is essential for generalization. However the statement that a
MD is good is vague, consequently, many recently developed algorithms try to
generate a MD in their goodness senses for boosting generalization. Unlike
their indirect control over MD, in this paper, we propose an alternative
boosting algorithm termed Margin distribution Controlled Boosting (MCBoost)
which directly controls the MD by introducing and optimizing a key adjustable
margin parameter. MCBoost's optimization implementation adopts the column
generation technique to ensure fast convergence and a small number of weak
classifiers involved in the final MCBooster. We empirically demonstrate that: 1)
AdaBoost is actually also an MD-controlled algorithm, with its iteration number
acting as a parameter controlling the distribution, and 2) the generalization
performance of MCBoost, evaluated on UCI benchmark datasets, is better than
that of AdaBoost, L2Boost, LPBoost, AdaBoost-CG and MDBoost.
| [
{
"version": "v1",
"created": "Thu, 9 Aug 2012 08:53:11 GMT"
}
] | 2012-08-10T00:00:00 | [
[
"Guo",
"Guangxu",
""
],
[
"Chen",
"Songcan",
""
]
] | TITLE: Margin Distribution Controlled Boosting
ABSTRACT: Schapire's margin theory provides a theoretical explanation to the success of
boosting-type methods and manifests that a good margin distribution (MD) of
training samples is essential for generalization. However, the statement that an
MD is good is vague; consequently, many recently developed algorithms try to
generate a MD in their goodness senses for boosting generalization. Unlike
their indirect control over MD, in this paper, we propose an alternative
boosting algorithm termed Margin distribution Controlled Boosting (MCBoost)
which directly controls the MD by introducing and optimizing a key adjustable
margin parameter. MCBoost's optimization implementation adopts the column
generation technique to ensure fast convergence and a small number of weak
classifiers involved in the final MCBooster. We empirically demonstrate that: 1)
AdaBoost is actually also an MD-controlled algorithm, with its iteration number
acting as a parameter controlling the distribution, and 2) the generalization
performance of MCBoost, evaluated on UCI benchmark datasets, is better than
that of AdaBoost, L2Boost, LPBoost, AdaBoost-CG and MDBoost.
|
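The abstract above builds on Schapire's margin theory. For reference, the normalized margin of a boosted classifier $F(x)=\sum_t \alpha_t h_t(x)$ on a labeled example $(x_i, y_i)$, whose distribution over the training set is the MD being controlled, is the standard quantity:

```latex
% Normalized voting margin of example (x_i, y_i) under the ensemble \sum_t \alpha_t h_t,
% with y_i \in \{-1,+1\}, h_t(x) \in \{-1,+1\}, and \alpha_t \ge 0.
\rho_i \;=\; \frac{y_i \sum_{t} \alpha_t\, h_t(x_i)}{\sum_{t} \alpha_t} \;\in\; [-1, 1]
```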
1208.1259 | Ping Li | Ping Li and Art Owen and Cun-Hui Zhang | One Permutation Hashing for Efficient Search and Learning | null | null | null | null | cs.LG cs.IR cs.IT math.IT stat.CO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the method of b-bit minwise hashing has been applied to large-scale
linear learning and sublinear time near-neighbor search. The major drawback of
minwise hashing is the expensive preprocessing cost, as the method requires
applying (e.g.,) k=200 to 500 permutations on the data. The testing time can
also be expensive if a new data point (e.g., a new document or image) has not
been processed, which might be a significant issue in user-facing applications.
We develop a very simple solution based on one permutation hashing.
Conceptually, given a massive binary data matrix, we permute the columns only
once and divide the permuted columns evenly into k bins; and we simply store,
for each data vector, the smallest nonzero location in each bin. The
interesting probability analysis (which is validated by experiments) reveals
that our one permutation scheme should perform very similarly to the original
(k-permutation) minwise hashing. In fact, the one permutation scheme can be
even slightly more accurate, due to the "sample-without-replacement" effect.
Our experiments with training linear SVM and logistic regression on the
webspam dataset demonstrate that this one permutation hashing scheme can
achieve the same (or even slightly better) accuracies compared to the original
k-permutation scheme. To test the robustness of our method, we also experiment
with the small news20 dataset which is very sparse and has merely on average
500 nonzeros in each data vector. Interestingly, our one permutation scheme
noticeably outperforms the k-permutation scheme when k is not too small on the
news20 dataset. In summary, our method can achieve at least the same accuracy
as the original k-permutation scheme, at merely 1/k of the original
preprocessing cost.
| [
{
"version": "v1",
"created": "Mon, 6 Aug 2012 12:28:06 GMT"
}
] | 2012-08-08T00:00:00 | [
[
"Li",
"Ping",
""
],
[
"Owen",
"Art",
""
],
[
"Zhang",
"Cun-Hui",
""
]
] | TITLE: One Permutation Hashing for Efficient Search and Learning
ABSTRACT: Recently, the method of b-bit minwise hashing has been applied to large-scale
linear learning and sublinear time near-neighbor search. The major drawback of
minwise hashing is the expensive preprocessing cost, as the method requires
applying (e.g.,) k=200 to 500 permutations on the data. The testing time can
also be expensive if a new data point (e.g., a new document or image) has not
been processed, which might be a significant issue in user-facing applications.
We develop a very simple solution based on one permutation hashing.
Conceptually, given a massive binary data matrix, we permute the columns only
once and divide the permuted columns evenly into k bins; and we simply store,
for each data vector, the smallest nonzero location in each bin. The
interesting probability analysis (which is validated by experiments) reveals
that our one permutation scheme should perform very similarly to the original
(k-permutation) minwise hashing. In fact, the one permutation scheme can be
even slightly more accurate, due to the "sample-without-replacement" effect.
Our experiments with training linear SVM and logistic regression on the
webspam dataset demonstrate that this one permutation hashing scheme can
achieve the same (or even slightly better) accuracies compared to the original
k-permutation scheme. To test the robustness of our method, we also experiment
with the small news20 dataset which is very sparse and has merely on average
500 nonzeros in each data vector. Interestingly, our one permutation scheme
noticeably outperforms the k-permutation scheme when k is not too small on the
news20 dataset. In summary, our method can achieve at least the same accuracy
as the original k-permutation scheme, at merely 1/k of the original
preprocessing cost.
|
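The abstract above gives a concrete recipe: permute the columns of the binary data matrix once, split the permuted columns into k bins, and store, per data vector, the smallest nonzero location within each bin. A minimal sketch of that scheme (dense 0/1 matrix for clarity; empty bins are marked with -1, and how to handle them downstream is left out here):

```python
import numpy as np

def one_permutation_hash(X, k, seed=0):
    """X: (n_samples, d) binary matrix. Returns (n_samples, k) bin-wise minima of nonzero positions."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    perm = rng.permutation(d)                  # a single column permutation, shared by all vectors
    Xp = X[:, perm]
    edges = np.linspace(0, d, k + 1).astype(int)
    sketch = np.full((n, k), -1, dtype=int)    # -1 marks an empty bin
    for b in range(k):
        block = Xp[:, edges[b]:edges[b + 1]]
        has_nonzero = block.any(axis=1)
        # smallest nonzero location within the bin (offset inside the bin)
        sketch[has_nonzero, b] = block[has_nonzero].argmax(axis=1)
    return sketch

X = (np.random.rand(5, 1000) < 0.02).astype(int)   # sparse binary data
print(one_permutation_hash(X, k=8))
```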
1208.0967 | Hema Swetha Koppula | Hema Swetha Koppula, Rudhir Gupta, Ashutosh Saxena | Human Activity Learning using Object Affordances from RGB-D Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human activities comprise several sub-activities performed in a sequence and
involve interactions with various objects. This makes reasoning about the
object affordances a central task for activity recognition. In this work, we
consider the problem of jointly labeling the object affordances and human
activities from RGB-D videos. We frame the problem as a Markov Random Field
where the nodes represent objects and sub-activities, and the edges represent
the relationships between object affordances, their relations with
sub-activities, and their evolution over time. We formulate the learning
problem using a structural SVM approach, where labelings over various alternate
temporal segmentations are considered as latent variables. We tested our method
on a dataset comprising 120 activity videos collected from four subjects, and
obtained an end-to-end precision of 81.8% and recall of 80.0% for labeling the
activities.
| [
{
"version": "v1",
"created": "Sat, 4 Aug 2012 23:44:07 GMT"
}
] | 2012-08-07T00:00:00 | [
[
"Koppula",
"Hema Swetha",
""
],
[
"Gupta",
"Rudhir",
""
],
[
"Saxena",
"Ashutosh",
""
]
] | TITLE: Human Activity Learning using Object Affordances from RGB-D Videos
ABSTRACT: Human activities comprise several sub-activities performed in a sequence and
involve interactions with various objects. This makes reasoning about the
object affordances a central task for activity recognition. In this work, we
consider the problem of jointly labeling the object affordances and human
activities from RGB-D videos. We frame the problem as a Markov Random Field
where the nodes represent objects and sub-activities, and the edges represent
the relationships between object affordances, their relations with
sub-activities, and their evolution over time. We formulate the learning
problem using a structural SVM approach, where labelings over various alternate
temporal segmentations are considered as latent variables. We tested our method
on a dataset comprising 120 activity videos collected from four subjects, and
obtained an end-to-end precision of 81.8% and recall of 80.0% for labeling the
activities.
|
1207.6744 | Lluis Pamies-Juarez | Lluis Pamies-Juarez, Anwitaman Datta and Frederique Oggier | RapidRAID: Pipelined Erasure Codes for Fast Data Archival in Distributed
Storage Systems | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To achieve reliability in distributed storage systems, data has usually been
replicated across different nodes. However, the increasing volume of data to be
stored has motivated the introduction of erasure codes, a storage-efficient
alternative to replication, particularly suited for archival in data centers,
where old datasets (rarely accessed) can be erasure encoded, while replicas are
maintained only for the latest data. Many recent works consider the design of
new storage-centric erasure codes for improved repairability. In contrast, this
paper addresses the migration from replication to encoding: traditionally
erasure coding is an atomic operation in that a single node with the whole
object encodes and uploads all the encoded pieces. Although large datasets can
be concurrently archived by distributing individual object encodings among
different nodes, the network and computing capacity of individual nodes
constrain the archival process due to such atomicity.
We propose a new pipelined coding strategy that distributes the network and
computing load of single-object encodings among different nodes, which also
speeds up multiple object archival. We further present RapidRAID codes, an
explicit family of pipelined erasure codes which provides fast archival without
compromising either data reliability or storage overheads. Finally, we provide
a real implementation of RapidRAID codes and benchmark its performance using
both a cluster of 50 nodes and a set of Amazon EC2 instances. Experiments show
that RapidRAID codes reduce a single object's coding time by up to 90%, while
when multiple objects are encoded concurrently, the reduction is up to 20%.
| [
{
"version": "v1",
"created": "Sun, 29 Jul 2012 04:27:44 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Aug 2012 07:02:25 GMT"
}
] | 2012-08-06T00:00:00 | [
[
"Pamies-Juarez",
"Lluis",
""
],
[
"Datta",
"Anwitaman",
""
],
[
"Oggier",
"Frederique",
""
]
] | TITLE: RapidRAID: Pipelined Erasure Codes for Fast Data Archival in Distributed
Storage Systems
ABSTRACT: To achieve reliability in distributed storage systems, data has usually been
replicated across different nodes. However, the increasing volume of data to be
stored has motivated the introduction of erasure codes, a storage-efficient
alternative to replication, particularly suited for archival in data centers,
where old datasets (rarely accessed) can be erasure encoded, while replicas are
maintained only for the latest data. Many recent works consider the design of
new storage-centric erasure codes for improved repairability. In contrast, this
paper addresses the migration from replication to encoding: traditionally
erasure coding is an atomic operation in that a single node with the whole
object encodes and uploads all the encoded pieces. Although large datasets can
be concurrently archived by distributing individual object encodings among
different nodes, the network and computing capacity of individual nodes
constrain the archival process due to such atomicity.
We propose a new pipelined coding strategy that distributes the network and
computing load of single-object encodings among different nodes, which also
speeds up multiple object archival. We further present RapidRAID codes, an
explicit family of pipelined erasure codes which provides fast archival without
compromising either data reliability or storage overheads. Finally, we provide
a real implementation of RapidRAID codes and benchmark its performance using
both a cluster of 50 nodes and a set of Amazon EC2 instances. Experiments show
that RapidRAID codes reduce a single object's coding time by up to 90%, while
when multiple objects are encoded concurrently, the reduction is up to 20%.
|
1208.0541 | Simon Powers | Simon T. Powers and Jun He | A hybrid artificial immune system and Self Organising Map for network
intrusion detection | Post-print of accepted manuscript. 32 pages and 3 figures | Information Sciences 178(15), pp. 3024-3042, August 2008 | 10.1016/j.ins.2007.11.028 | null | cs.NE cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network intrusion detection is the problem of detecting unauthorised use of,
or access to, computer systems over a network. Two broad approaches exist to
tackle this problem: anomaly detection and misuse detection. An anomaly
detection system is trained only on examples of normal connections, and thus
has the potential to detect novel attacks. However, many anomaly detection
systems simply report the anomalous activity, rather than analysing it further
in order to report higher-level information that is of more use to a security
officer. On the other hand, misuse detection systems recognise known attack
patterns, thereby allowing them to provide more detailed information about an
intrusion. However, such systems cannot detect novel attacks.
A hybrid system is presented in this paper with the aim of combining the
advantages of both approaches. Specifically, anomalous network connections are
initially detected using an artificial immune system. Connections that are
flagged as anomalous are then categorised using a Kohonen Self Organising Map,
allowing higher-level information, in the form of cluster membership, to be
extracted. Experimental results on the KDD 1999 Cup dataset show a low false
positive rate and a detection and classification rate for Denial-of-Service and
User-to-Root attacks that is higher than those in a sample of other works.
| [
{
"version": "v1",
"created": "Thu, 2 Aug 2012 16:53:13 GMT"
}
] | 2012-08-03T00:00:00 | [
[
"Powers",
"Simon T.",
""
],
[
"He",
"Jun",
""
]
] | TITLE: A hybrid artificial immune system and Self Organising Map for network
intrusion detection
ABSTRACT: Network intrusion detection is the problem of detecting unauthorised use of,
or access to, computer systems over a network. Two broad approaches exist to
tackle this problem: anomaly detection and misuse detection. An anomaly
detection system is trained only on examples of normal connections, and thus
has the potential to detect novel attacks. However, many anomaly detection
systems simply report the anomalous activity, rather than analysing it further
in order to report higher-level information that is of more use to a security
officer. On the other hand, misuse detection systems recognise known attack
patterns, thereby allowing them to provide more detailed information about an
intrusion. However, such systems cannot detect novel attacks.
A hybrid system is presented in this paper with the aim of combining the
advantages of both approaches. Specifically, anomalous network connections are
initially detected using an artificial immune system. Connections that are
flagged as anomalous are then categorised using a Kohonen Self Organising Map,
allowing higher-level information, in the form of cluster membership, to be
extracted. Experimental results on the KDD 1999 Cup dataset show a low false
positive rate and a detection and classification rate for Denial-of-Service and
User-to-Root attacks that is higher than those in a sample of other works.
|
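The abstract above routes connections flagged as anomalous by the artificial immune system into a Kohonen Self Organising Map so that cluster membership can be reported. As a hedged sketch of the SOM stage only (a tiny from-scratch SOM trained on already-flagged feature vectors; the immune-system detector and KDD'99 preprocessing are not shown, and the feature array name is hypothetical):

```python
import numpy as np

class TinySOM:
    """Minimal rectangular Kohonen SOM for clustering flagged connection vectors."""
    def __init__(self, rows, cols, dim, seed=0):
        self.grid = np.random.default_rng(seed).random((rows, cols, dim))

    def winner(self, x):
        d = np.linalg.norm(self.grid - x, axis=2)        # distance to every unit
        return np.unravel_index(d.argmin(), d.shape)      # best-matching unit (row, col)

    def train(self, data, epochs=20, lr0=0.5, sigma0=2.0):
        for t in range(epochs):
            lr = lr0 * (1 - t / epochs)                   # decaying learning rate
            sigma = max(0.5, sigma0 * (1 - t / epochs))   # decaying neighbourhood radius
            for x in data:
                wi, wj = self.winner(x)
                ii, jj = np.indices(self.grid.shape[:2])
                h = np.exp(-((ii - wi) ** 2 + (jj - wj) ** 2) / (2 * sigma ** 2))
                self.grid += lr * h[..., None] * (x - self.grid)

# anomalous_features: (n_flagged, n_features) array produced by the anomaly detector (hypothetical)
anomalous_features = np.random.rand(200, 6)
som = TinySOM(rows=5, cols=5, dim=6)
som.train(anomalous_features)
clusters = [som.winner(x) for x in anomalous_features]    # cluster membership per connection
```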
1208.0075 | Yufei Tao | Cheng Sheng, Nan Zhang, Yufei Tao, Xin Jin | Optimal Algorithms for Crawling a Hidden Database in the Web | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1112-1123 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A hidden database refers to a dataset that an organization makes accessible
on the web by allowing users to issue queries through a search interface. In
other words, data acquisition from such a source is not by following static
hyper-links. Instead, data are obtained by querying the interface, and reading
the dynamically generated result page. This, together with other issues such as the fact that the
interface may answer a query only partially, has prevented hidden databases
from being crawled effectively by existing search engines. This paper remedies
the problem by giving algorithms to extract all the tuples from a hidden
database. Our algorithms are provably efficient, namely, they accomplish the
task by performing only a small number of queries, even in the worst case. We
also establish theoretical results indicating that these algorithms are
asymptotically optimal -- i.e., it is impossible to improve their efficiency by
more than a constant factor. The derivation of our upper and lower bound
results reveals significant insight into the characteristics of the underlying
problem. Extensive experiments confirm the proposed techniques work very well
on all the real datasets examined.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 03:43:52 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Sheng",
"Cheng",
""
],
[
"Zhang",
"Nan",
""
],
[
"Tao",
"Yufei",
""
],
[
"Jin",
"Xin",
""
]
] | TITLE: Optimal Algorithms for Crawling a Hidden Database in the Web
ABSTRACT: A hidden database refers to a dataset that an organization makes accessible
on the web by allowing users to issue queries through a search interface. In
other words, data acquisition from such a source is not by following static
hyper-links. Instead, data are obtained by querying the interface, and reading
the result page dynamically generated. This, with other facts such as the
interface may answer a query only partially, has prevented hidden databases
from being crawled effectively by existing search engines. This paper remedies
the problem by giving algorithms to extract all the tuples from a hidden
database. Our algorithms are provably efficient, namely, they accomplish the
task by performing only a small number of queries, even in the worst case. We
also establish theoretical results indicating that these algorithms are
asymptotically optimal -- i.e., it is impossible to improve their efficiency by
more than a constant factor. The derivation of our upper and lower bound
results reveals significant insight into the characteristics of the underlying
problem. Extensive experiments confirm the proposed techniques work very well
on all the real datasets examined.
|
1208.0076 | Lu Qin | Lu Qin, Jeffrey Xu Yu, Lijun Chang | Diversifying Top-K Results | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1124-1135 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Top-k query processing finds a list of k results that have largest scores
w.r.t the user given query, with the assumption that all the k results are
independent to each other. In practice, some of the top-k results returned can
be very similar to each other. As a result some of the top-k results returned
are redundant. In the literature, diversified top-k search has been studied to
return k results that take both score and diversity into consideration. Most
existing solutions on diversified top-k search assume that scores of all the
search results are given, and some works solve the diversity problem on a
specific problem and can hardly be extended to general cases. In this paper, we
study the diversified top-k search problem. We define a general diversified
top-k search problem that only considers the similarity of the search results
themselves. We propose a framework, such that most existing solutions for top-k
query processing can be extended easily to handle diversified top-k search, by
simply applying three new functions, a sufficient stop condition sufficient(),
a necessary stop condition necessary(), and an algorithm for diversified top-k
search on the current set of generated results, div-search-current(). We
propose three new algorithms, namely, div-astar, div-dp, and div-cut to solve
the div-search-current() problem. div-astar is an A* based algorithm, div-dp is
an algorithm that decomposes the results into components which are searched
using div-astar independently and combined using dynamic programming. div-cut
further decomposes the current set of generated results using cut points and
combines the results using sophisticated operations. We conducted extensive
performance studies using two real datasets, enwiki and reuters. Our div-cut
algorithm finds the optimal solution for diversified top-k search problem in
seconds even for k as large as 2,000.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 03:44:46 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Qin",
"Lu",
""
],
[
"Yu",
"Jeffrey Xu",
""
],
[
"Chang",
"Lijun",
""
]
] | TITLE: Diversifying Top-K Results
ABSTRACT: Top-k query processing finds a list of k results that have largest scores
w.r.t the user given query, with the assumption that all the k results are
independent to each other. In practice, some of the top-k results returned can
be very similar to each other. As a result some of the top-k results returned
are redundant. In the literature, diversified top-k search has been studied to
return k results that take both score and diversity into consideration. Most
existing solutions on diversified top-k search assume that scores of all the
search results are given, and some works solve the diversity problem on a
specific problem and can hardly be extended to general cases. In this paper, we
study the diversified top-k search problem. We define a general diversified
top-k search problem that only considers the similarity of the search results
themselves. We propose a framework, such that most existing solutions for top-k
query processing can be extended easily to handle diversified top-k search, by
simply applying three new functions, a sufficient stop condition sufficient(),
a necessary stop condition necessary(), and an algorithm for diversified top-k
search on the current set of generated results, div-search-current(). We
propose three new algorithms, namely, div-astar, div-dp, and div-cut to solve
the div-search-current() problem. div-astar is an A* based algorithm, div-dp is
an algorithm that decomposes the results into components which are searched
using div-astar independently and combined using dynamic programming. div-cut
further decomposes the current set of generated results using cut points and
combines the results using sophisticated operations. We conducted extensive
performance studies using two real datasets, enwiki and reuters. Our div-cut
algorithm finds the optimal solution for diversified top-k search problem in
seconds even for k as large as 2,000.
|
1208.0082 | Harold Lim | Harold Lim, Herodotos Herodotou, Shivnath Babu | Stubby: A Transformation-based Optimizer for MapReduce Workflows | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1196-1207 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a growing trend of performing analysis on large datasets using
workflows composed of MapReduce jobs connected through producer-consumer
relationships based on data. This trend has spurred the development of a number
of interfaces--ranging from program-based to query-based interfaces--for
generating MapReduce workflows. Studies have shown that the gap in performance
can be quite large between optimized and unoptimized workflows. However,
automatic cost-based optimization of MapReduce workflows remains a challenge
due to the multitude of interfaces, large size of the execution plan space, and
the frequent unavailability of all types of information needed for
optimization. We introduce a comprehensive plan space for MapReduce workflows
generated by popular workflow generators. We then propose Stubby, a cost-based
optimizer that searches selectively through the subspace of the full plan space
that can be enumerated correctly and costed based on the information available
in any given setting. Stubby enumerates the plan space based on plan-to-plan
transformations and an efficient search algorithm. Stubby is designed to be
extensible to new interfaces and new types of optimizations, which is a
desirable feature given how rapidly MapReduce systems are evolving. Stubby's
efficiency and effectiveness have been evaluated using representative workflows
from many domains.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 03:49:32 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Lim",
"Harold",
""
],
[
"Herodotou",
"Herodotos",
""
],
[
"Babu",
"Shivnath",
""
]
] | TITLE: Stubby: A Transformation-based Optimizer for MapReduce Workflows
ABSTRACT: There is a growing trend of performing analysis on large datasets using
workflows composed of MapReduce jobs connected through producer-consumer
relationships based on data. This trend has spurred the development of a number
of interfaces--ranging from program-based to query-based interfaces--for
generating MapReduce workflows. Studies have shown that the gap in performance
can be quite large between optimized and unoptimized workflows. However,
automatic cost-based optimization of MapReduce workflows remains a challenge
due to the multitude of interfaces, large size of the execution plan space, and
the frequent unavailability of all types of information needed for
optimization. We introduce a comprehensive plan space for MapReduce workflows
generated by popular workflow generators. We then propose Stubby, a cost-based
optimizer that searches selectively through the subspace of the full plan space
that can be enumerated correctly and costed based on the information available
in any given setting. Stubby enumerates the plan space based on plan-to-plan
transformations and an efficient search algorithm. Stubby is designed to be
extensible to new interfaces and new types of optimizations, which is a
desirable feature given how rapidly MapReduce systems are evolving. Stubby's
efficiency and effectiveness have been evaluated using representative workflows
from many domains.
|
1208.0086 | Yu Cao | Yu Cao, Chee-Yong Chan, Jie Li, Kian-Lee Tan | Optimization of Analytic Window Functions | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1244-1255 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analytic functions represent the state-of-the-art way of performing complex
data analysis within a single SQL statement. In particular, an important class
of analytic functions that has been frequently used in commercial systems to
support OLAP and decision support applications is the class of window
functions. A window function returns for each input tuple a value derived from
applying a function over a window of neighboring tuples. However, existing
window function evaluation approaches are based on a naive sorting scheme. In
this paper, we study the problem of optimizing the evaluation of window
functions. We propose several efficient techniques, and identify optimization
opportunities that allow us to optimize the evaluation of a set of window
functions. We have integrated our scheme into PostgreSQL. Our comprehensive
experimental study on the TPC-DS datasets as well as synthetic datasets and
queries demonstrate significant speedup over existing approaches.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 03:52:40 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Cao",
"Yu",
""
],
[
"Chan",
"Chee-Yong",
""
],
[
"Li",
"Jie",
""
],
[
"Tan",
"Kian-Lee",
""
]
] | TITLE: Optimization of Analytic Window Functions
ABSTRACT: Analytic functions represent the state-of-the-art way of performing complex
data analysis within a single SQL statement. In particular, an important class
of analytic functions that has been frequently used in commercial systems to
support OLAP and decision support applications is the class of window
functions. A window function returns for each input tuple a value derived from
applying a function over a window of neighboring tuples. However, existing
window function evaluation approaches are based on a naive sorting scheme. In
this paper, we study the problem of optimizing the evaluation of window
functions. We propose several efficient techniques, and identify optimization
opportunities that allow us to optimize the evaluation of a set of window
functions. We have integrated our scheme into PostgreSQL. Our comprehensive
experimental study on the TPC-DS datasets as well as synthetic datasets and
queries demonstrate significant speedup over existing approaches.
|
1208.0090 | James Cheng | James Cheng, Zechao Shang, Hong Cheng, Haixun Wang, Jeffrey Xu Yu | K-Reach: Who is in Your Small World | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1292-1303 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of answering k-hop reachability queries in a directed
graph, i.e., whether there exists a directed path of length k, from a source
query vertex to a target query vertex in the input graph. The problem of k-hop
reachability is a general problem of the classic reachability (where
k=infinity). Existing indexes for processing classic reachability queries, as
well as for processing shortest path queries, are not applicable or not
efficient for processing k-hop reachability queries. We propose an index for
processing k-hop reachability queries, which is simple in design and efficient
to construct. Our experimental results on a wide range of real datasets show
that our index is more efficient than the state-of-the-art indexes even for
processing classic reachability queries, for which these indexes are primarily
designed. We also show that our index is efficient in answering k-hop
reachability queries.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 03:55:46 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Cheng",
"James",
""
],
[
"Shang",
"Zechao",
""
],
[
"Cheng",
"Hong",
""
],
[
"Wang",
"Haixun",
""
],
[
"Yu",
"Jeffrey Xu",
""
]
] | TITLE: K-Reach: Who is in Your Small World
ABSTRACT: We study the problem of answering k-hop reachability queries in a directed
graph, i.e., whether there exists a directed path of length k, from a source
query vertex to a target query vertex in the input graph. The problem of k-hop
reachability is a general problem of the classic reachability (where
k=infinity). Existing indexes for processing classic reachability queries, as
well as for processing shortest path queries, are not applicable or not
efficient for processing k-hop reachability queries. We propose an index for
processing k-hop reachability queries, which is simple in design and efficient
to construct. Our experimental results on a wide range of real datasets show
that our index is more efficient than the state-of-the-art indexes even for
processing classic reachability queries, for which these indexes are primarily
designed. We also show that our index is efficient in answering k-hop
reachability queries.
|
1208.0221 | Ziyu Guan | Ziyu Guan, Xifeng Yan, Lance M. Kaplan | Measuring Two-Event Structural Correlations on Graphs | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1400-1411 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-life graphs usually have various kinds of events happening on them,
e.g., product purchases in online social networks and intrusion alerts in
computer networks. The occurrences of events on the same graph could be
correlated, exhibiting either attraction or repulsion. Such structural
correlations can reveal important relationships between different events.
Unfortunately, correlation relationships on graph structures are not well
studied and cannot be captured by traditional measures. In this work, we design
a novel measure for assessing two-event structural correlations on graphs.
Given the occurrences of two events, we choose uniformly a sample of "reference
nodes" from the vicinity of all event nodes and employ the Kendall's tau rank
correlation measure to compute the average concordance of event density
changes. Significance can be efficiently assessed by tau's nice property of
being asymptotically normal under the null hypothesis. In order to compute the
measure in large scale networks, we develop a scalable framework using
different sampling strategies. The complexity of these strategies is analyzed.
Experiments on real graph datasets with both synthetic and real events
demonstrate that the proposed framework is not only efficacious, but also
efficient and scalable.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 14:12:02 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Guan",
"Ziyu",
""
],
[
"Yan",
"Xifeng",
""
],
[
"Kaplan",
"Lance M.",
""
]
] | TITLE: Measuring Two-Event Structural Correlations on Graphs
ABSTRACT: Real-life graphs usually have various kinds of events happening on them,
e.g., product purchases in online social networks and intrusion alerts in
computer networks. The occurrences of events on the same graph could be
correlated, exhibiting either attraction or repulsion. Such structural
correlations can reveal important relationships between different events.
Unfortunately, correlation relationships on graph structures are not well
studied and cannot be captured by traditional measures. In this work, we design
a novel measure for assessing two-event structural correlations on graphs.
Given the occurrences of two events, we choose uniformly a sample of "reference
nodes" from the vicinity of all event nodes and employ the Kendall's tau rank
correlation measure to compute the average concordance of event density
changes. Significance can be efficiently assessed by tau's nice property of
being asymptotically normal under the null hypothesis. In order to compute the
measure in large scale networks, we develop a scalable framework using
different sampling strategies. The complexity of these strategies is analyzed.
Experiments on real graph datasets with both synthetic and real events
demonstrate that the proposed framework is not only efficacious, but also
efficient and scalable.
|
1208.0222 | Feifei Li | Jeffrey Jestes, Jeff M. Phillips, Feifei Li, Mingwang Tang | Ranking Large Temporal Data | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1412-1423 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ranking temporal data has not been studied until recently, even though
ranking is an important operator (being promoted as a firstclass citizen) in
database systems. However, only the instant top-k queries on temporal data were
studied in, where objects with the k highest scores at a query time instance t
are to be retrieved. The instant top-k definition clearly comes with
limitations (sensitive to outliers, difficult to choose a meaningful query time
t). A more flexible and general ranking operation is to rank objects based on
the aggregation of their scores in a query interval, which we dub the aggregate
top-k query on temporal data. For example, return the top-10 weather stations
having the highest average temperature from 10/01/2010 to 10/07/2010; find the
top-20 stocks having the largest total transaction volumes from 02/05/2011 to
02/07/2011. This work presents a comprehensive study to this problem by
designing both exact and approximate methods (with approximation quality
guarantees). We also provide theoretical analysis on the construction cost, the
index size, the update and the query costs of each approach. Extensive
experiments on large real datasets clearly demonstrate the efficiency, the
effectiveness, and the scalability of our methods compared to the baseline
methods.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 14:12:21 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Jestes",
"Jeffrey",
""
],
[
"Phillips",
"Jeff M.",
""
],
[
"Li",
"Feifei",
""
],
[
"Tang",
"Mingwang",
""
]
] | TITLE: Ranking Large Temporal Data
ABSTRACT: Ranking temporal data has not been studied until recently, even though
ranking is an important operator (being promoted as a firstclass citizen) in
database systems. However, only the instant top-k queries on temporal data were
studied in, where objects with the k highest scores at a query time instance t
are to be retrieved. The instant top-k definition clearly comes with
limitations (sensitive to outliers, difficult to choose a meaningful query time
t). A more flexible and general ranking operation is to rank objects based on
the aggregation of their scores in a query interval, which we dub the aggregate
top-k query on temporal data. For example, return the top-10 weather stations
having the highest average temperature from 10/01/2010 to 10/07/2010; find the
top-20 stocks having the largest total transaction volumes from 02/05/2011 to
02/07/2011. This work presents a comprehensive study to this problem by
designing both exact and approximate methods (with approximation quality
guarantees). We also provide theoretical analysis on the construction cost, the
index size, the update and the query costs of each approach. Extensive
experiments on large real datasets clearly demonstrate the efficiency, the
effectiveness, and the scalability of our methods compared to the baseline
methods.
|
1208.0225 | Alexander Hall | Alexander Hall, Olaf Bachmann, Robert B\"ussow, Silviu G\u{a}nceanu,
Marc Nunkesser | Processing a Trillion Cells per Mouse Click | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1436-1446 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Column-oriented database systems have been a real game changer for the
industry in recent years. Highly tuned and performant systems have evolved that
provide users with the possibility of answering ad hoc queries over large
datasets in an interactive manner. In this paper we present the column-oriented
datastore developed as one of the central components of PowerDrill. It combines
the advantages of columnar data layout with other known techniques (such as
using composite range partitions) and extensive algorithmic engineering on key
data structures. The main goal of the latter being to reduce the main memory
footprint and to increase the efficiency in processing typical user queries. In
this combination we achieve large speed-ups. These enable a highly interactive
Web UI where it is common that a single mouse click leads to processing a
trillion values in the underlying dataset.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 14:13:23 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Hall",
"Alexander",
""
],
[
"Bachmann",
"Olaf",
""
],
[
"Büssow",
"Robert",
""
],
[
"Gănceanu",
"Silviu",
""
],
[
"Nunkesser",
"Marc",
""
]
] | TITLE: Processing a Trillion Cells per Mouse Click
ABSTRACT: Column-oriented database systems have been a real game changer for the
industry in recent years. Highly tuned and performant systems have evolved that
provide users with the possibility of answering ad hoc queries over large
datasets in an interactive manner. In this paper we present the column-oriented
datastore developed as one of the central components of PowerDrill. It combines
the advantages of columnar data layout with other known techniques (such as
using composite range partitions) and extensive algorithmic engineering on key
data structures. The main goal of the latter being to reduce the main memory
footprint and to increase the efficiency in processing typical user queries. In
this combination we achieve large speed-ups. These enable a highly interactive
Web UI where it is common that a single mouse click leads to processing a
trillion values in the underlying dataset.
|
1208.0276 | Farhan Tauheed | Farhan Tauheed, Thomas Heinis, Felix Sh\"urmann, Henry Markram,
Anastasia Ailamaki | SCOUT: Prefetching for Latent Feature Following Queries | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1531-1542 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's scientists are quickly moving from in vitro to in silico
experimentation: they no longer analyze natural phenomena in a petri dish, but
instead they build models and simulate them. Managing and analyzing the massive
amounts of data involved in simulations is a major task. Yet, they lack the
tools to efficiently work with data of this size. One problem many scientists
share is the analysis of the massive spatial models they build. For several
types of analysis they need to interactively follow the structures in the
spatial model, e.g., the arterial tree, neuron fibers, etc., and issue range
queries along the way. Each query takes long to execute, and the total time for
executing a sequence of queries significantly delays data analysis. Prefetching
the spatial data reduces the response time considerably, but known approaches
do not prefetch with high accuracy. We develop SCOUT, a structure-aware method
for prefetching data along interactive spatial query sequences. SCOUT uses an
approximate graph model of the structures involved in past queries and attempts
to identify what particular structure the user follows. Our experiments with
neuroscience data show that SCOUT prefetches with an accuracy from 71% to 92%,
which translates to a speedup of 4x-15x. SCOUT also improves the prefetching
accuracy on datasets from other scientific domains, such as medicine and
biology.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 16:49:56 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Tauheed",
"Farhan",
""
],
[
"Heinis",
"Thomas",
""
],
[
"Shürmann",
"Felix",
""
],
[
"Markram",
"Henry",
""
],
[
"Ailamaki",
"Anastasia",
""
]
] | TITLE: SCOUT: Prefetching for Latent Feature Following Queries
ABSTRACT: Today's scientists are quickly moving from in vitro to in silico
experimentation: they no longer analyze natural phenomena in a petri dish, but
instead they build models and simulate them. Managing and analyzing the massive
amounts of data involved in simulations is a major task. Yet, they lack the
tools to efficiently work with data of this size. One problem many scientists
share is the analysis of the massive spatial models they build. For several
types of analysis they need to interactively follow the structures in the
spatial model, e.g., the arterial tree, neuron fibers, etc., and issue range
queries along the way. Each query takes long to execute, and the total time for
executing a sequence of queries significantly delays data analysis. Prefetching
the spatial data reduces the response time considerably, but known approaches
do not prefetch with high accuracy. We develop SCOUT, a structure-aware method
for prefetching data along interactive spatial query sequences. SCOUT uses an
approximate graph model of the structures involved in past queries and attempts
to identify what particular structure the user follows. Our experiments with
neuroscience data show that SCOUT prefetches with an accuracy from 71% to 92%,
which translates to a speedup of 4x-15x. SCOUT also improves the prefetching
accuracy on datasets from other scientific domains, such as medicine and
biology.
|
1208.0286 | Haohan Zhu | Haohan Zhu, George Kollios, Vassilis Athitsos | A Generic Framework for Efficient and Effective Subsequence Retrieval | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1579-1590 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a general framework for matching similar subsequences in
both time series and string databases. The matching results are pairs of query
subsequences and database subsequences. The framework finds all possible pairs
of similar subsequences if the distance measure satisfies the "consistency"
property, which is a property introduced in this paper. We show that most
popular distance functions, such as the Euclidean distance, DTW, ERP, the
Frechet distance for time series, and the Hamming distance and Levenshtein
distance for strings, are all "consistent". We also propose a generic index
structure for metric spaces named "reference net". The reference net occupies
O(n) space, where n is the size of the dataset and is optimized to work well
with our framework. The experiments demonstrate the ability of our method to
improve retrieval performance when combined with diverse distance measures. The
experiments also illustrate that the reference net scales well in terms of
space overhead and query time.
| [
{
"version": "v1",
"created": "Wed, 1 Aug 2012 17:20:11 GMT"
}
] | 2012-08-02T00:00:00 | [
[
"Zhu",
"Haohan",
""
],
[
"Kollios",
"George",
""
],
[
"Athitsos",
"Vassilis",
""
]
] | TITLE: A Generic Framework for Efficient and Effective Subsequence Retrieval
ABSTRACT: This paper proposes a general framework for matching similar subsequences in
both time series and string databases. The matching results are pairs of query
subsequences and database subsequences. The framework finds all possible pairs
of similar subsequences if the distance measure satisfies the "consistency"
property, which is a property introduced in this paper. We show that most
popular distance functions, such as the Euclidean distance, DTW, ERP, the
Frechet distance for time series, and the Hamming distance and Levenshtein
distance for strings, are all "consistent". We also propose a generic index
structure for metric spaces named "reference net". The reference net occupies
O(n) space, where n is the size of the dataset and is optimized to work well
with our framework. The experiments demonstrate the ability of our method to
improve retrieval performance when combined with diverse distance measures. The
experiments also illustrate that the reference net scales well in terms of
space overhead and query time.
|
1207.7103 | John Whitbeck | John Whitbeck, Marcelo Dias de Amorim, Vania Conan, Jean-Loup
Guillaume | Temporal Reachability Graphs | In proceedings ACM Mobicom 2012 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While a natural fit for modeling and understanding mobile networks,
time-varying graphs remain poorly understood. Indeed, many of the usual
concepts of static graphs have no obvious counterpart in time-varying ones. In
this paper, we introduce the notion of temporal reachability graphs. A
(tau,delta)-reachability graph} is a time-varying directed graph derived from
an existing connectivity graph. An edge exists from one node to another in the
reachability graph at time t if there exists a journey (i.e., a spatiotemporal
path) in the connectivity graph from the first node to the second, leaving
after t, with a positive edge traversal time tau, and arriving within a maximum
delay delta. We make three contributions. First, we develop the theoretical
framework around temporal reachability graphs. Second, we harness our
theoretical findings to propose an algorithm for their efficient computation.
Finally, we demonstrate the analytic power of the temporal reachability graph
concept by applying it to synthetic and real-life datasets. On top of defining
clear upper bounds on communication capabilities, reachability graphs highlight
asymmetric communication opportunities and offloading potential.
| [
{
"version": "v1",
"created": "Mon, 30 Jul 2012 21:05:54 GMT"
}
] | 2012-08-01T00:00:00 | [
[
"Whitbeck",
"John",
""
],
[
"de Amorim",
"Marcelo Dias",
""
],
[
"Conan",
"Vania",
""
],
[
"Guillaume",
"Jean-Loup",
""
]
] | TITLE: Temporal Reachability Graphs
ABSTRACT: While a natural fit for modeling and understanding mobile networks,
time-varying graphs remain poorly understood. Indeed, many of the usual
concepts of static graphs have no obvious counterpart in time-varying ones. In
this paper, we introduce the notion of temporal reachability graphs. A
(tau,delta)-reachability graph} is a time-varying directed graph derived from
an existing connectivity graph. An edge exists from one node to another in the
reachability graph at time t if there exists a journey (i.e., a spatiotemporal
path) in the connectivity graph from the first node to the second, leaving
after t, with a positive edge traversal time tau, and arriving within a maximum
delay delta. We make three contributions. First, we develop the theoretical
framework around temporal reachability graphs. Second, we harness our
theoretical findings to propose an algorithm for their efficient computation.
Finally, we demonstrate the analytic power of the temporal reachability graph
concept by applying it to synthetic and real-life datasets. On top of defining
clear upper bounds on communication capabilities, reachability graphs highlight
asymmetric communication opportunities and offloading potential.
|
1207.6269 | Arnau Prat-P\'erez | Arnau Prat-P\'erez, David Dominguez-Sal, Josep M. Brunat, Josep-Lluis
Larriba-Pey | Shaping Communities out of Triangles | 10 pages, 6 figures, CIKM 2012 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community detection has arisen as one of the most relevant topics in the
field of graph data mining due to its importance in many fields such as
biology, social networks or network traffic analysis. The metrics proposed to
shape communities are generic and follow two approaches: maximizing the
internal density of such communities or reducing the connectivity of the
internal vertices with those outside the community. However, these metrics take
the edges as a set and do not consider the internal layout of the edges in the
community. We define a set of properties oriented to social networks that
ensure that communities are cohesive, structured and well defined. Then, we
propose the Weighted Community Clustering (WCC), which is a community metric
based on triangles. We proof that analyzing communities by triangles gives
communities that fulfill the listed set of properties, in contrast to previous
metrics. Finally, we experimentally show that WCC correctly captures the
concept of community in social networks using real and syntethic datasets, and
compare statistically some of the most relevant community detection algorithms
in the state of the art.
| [
{
"version": "v1",
"created": "Thu, 26 Jul 2012 13:36:59 GMT"
}
] | 2012-07-27T00:00:00 | [
[
"Prat-Pérez",
"Arnau",
""
],
[
"Dominguez-Sal",
"David",
""
],
[
"Brunat",
"Josep M.",
""
],
[
"Larriba-Pey",
"Josep-Lluis",
""
]
] | TITLE: Shaping Communities out of Triangles
ABSTRACT: Community detection has arisen as one of the most relevant topics in the
field of graph data mining due to its importance in many fields such as
biology, social networks or network traffic analysis. The metrics proposed to
shape communities are generic and follow two approaches: maximizing the
internal density of such communities or reducing the connectivity of the
internal vertices with those outside the community. However, these metrics take
the edges as a set and do not consider the internal layout of the edges in the
community. We define a set of properties oriented to social networks that
ensure that communities are cohesive, structured and well defined. Then, we
propose the Weighted Community Clustering (WCC), which is a community metric
based on triangles. We proof that analyzing communities by triangles gives
communities that fulfill the listed set of properties, in contrast to previous
metrics. Finally, we experimentally show that WCC correctly captures the
concept of community in social networks using real and syntethic datasets, and
compare statistically some of the most relevant community detection algorithms
in the state of the art.
|
1207.6329 | Sean Chester | Sean Chester and Alex Thomo and S. Venkatesh and Sue Whitesides | Computing optimal k-regret minimizing sets with top-k depth contours | 10 pages, 9 figures | null | null | null | cs.DB cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Regret minimizing sets are a very recent approach to representing a dataset D
with a small subset S of representative tuples. The set S is chosen such that
executing any top-1 query on S rather than D is minimally perceptible to any
user. To discover an optimal regret minimizing set of a predetermined
cardinality is conjectured to be a hard problem. In this paper, we generalize
the problem to that of finding an optimal k$regret minimizing set, wherein the
difference is computed over top-k queries, rather than top-1 queries.
We adapt known geometric ideas of top-k depth contours and the reverse top-k
problem. We show that the depth contours themselves offer a means of comparing
the optimality of regret minimizing sets using L2 distance. We design an
O(cn^2) plane sweep algorithm for two dimensions to compute an optimal regret
minimizing set of cardinality c. For higher dimensions, we introduce a greedy
algorithm that progresses towards increasingly optimal solutions by exploiting
the transitivity of L2 distance.
| [
{
"version": "v1",
"created": "Thu, 26 Jul 2012 16:59:17 GMT"
}
] | 2012-07-27T00:00:00 | [
[
"Chester",
"Sean",
""
],
[
"Thomo",
"Alex",
""
],
[
"Venkatesh",
"S.",
""
],
[
"Whitesides",
"Sue",
""
]
] | TITLE: Computing optimal k-regret minimizing sets with top-k depth contours
ABSTRACT: Regret minimizing sets are a very recent approach to representing a dataset D
with a small subset S of representative tuples. The set S is chosen such that
executing any top-1 query on S rather than D is minimally perceptible to any
user. To discover an optimal regret minimizing set of a predetermined
cardinality is conjectured to be a hard problem. In this paper, we generalize
the problem to that of finding an optimal k$regret minimizing set, wherein the
difference is computed over top-k queries, rather than top-1 queries.
We adapt known geometric ideas of top-k depth contours and the reverse top-k
problem. We show that the depth contours themselves offer a means of comparing
the optimality of regret minimizing sets using L2 distance. We design an
O(cn^2) plane sweep algorithm for two dimensions to compute an optimal regret
minimizing set of cardinality c. For higher dimensions, we introduce a greedy
algorithm that progresses towards increasingly optimal solutions by exploiting
the transitivity of L2 distance.
|
1207.6379 | Jose Bento | Jos\'e Bento, Nadia Fawaz, Andrea Montanari, Stratis Ioannidis | Identifying Users From Their Rating Patterns | Winner of the 2011 Challenge on Context-Aware Movie Recommendation
(RecSys 2011 - CAMRa2011) | null | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reports on our analysis of the 2011 CAMRa Challenge dataset (Track
2) for context-aware movie recommendation systems. The train dataset comprises
4,536,891 ratings provided by 171,670 users on 23,974$ movies, as well as the
household groupings of a subset of the users. The test dataset comprises 5,450
ratings for which the user label is missing, but the household label is
provided. The challenge required to identify the user labels for the ratings in
the test set. Our main finding is that temporal information (time labels of the
ratings) is significantly more useful for achieving this objective than the
user preferences (the actual ratings). Using a model that leverages on this
fact, we are able to identify users within a known household with an accuracy
of approximately 96% (i.e. misclassification rate around 4%).
| [
{
"version": "v1",
"created": "Thu, 26 Jul 2012 19:27:03 GMT"
}
] | 2012-07-27T00:00:00 | [
[
"Bento",
"José",
""
],
[
"Fawaz",
"Nadia",
""
],
[
"Montanari",
"Andrea",
""
],
[
"Ioannidis",
"Stratis",
""
]
] | TITLE: Identifying Users From Their Rating Patterns
ABSTRACT: This paper reports on our analysis of the 2011 CAMRa Challenge dataset (Track
2) for context-aware movie recommendation systems. The train dataset comprises
4,536,891 ratings provided by 171,670 users on 23,974$ movies, as well as the
household groupings of a subset of the users. The test dataset comprises 5,450
ratings for which the user label is missing, but the household label is
provided. The challenge required to identify the user labels for the ratings in
the test set. Our main finding is that temporal information (time labels of the
ratings) is significantly more useful for achieving this objective than the
user preferences (the actual ratings). Using a model that leverages on this
fact, we are able to identify users within a known household with an accuracy
of approximately 96% (i.e. misclassification rate around 4%).
|
1202.0077 | Marco Alberto Javarone | Giuliano Armano and Marco Alberto Javarone | Datasets as Interacting Particle Systems: a Framework for Clustering | 13 pages, 5 figures. Submitted to ACS - Advances in Complex Systems | null | null | null | cond-mat.stat-mech cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a framework inspired by interacting particle physics
and devised to perform clustering on multidimensional datasets. To this end,
any given dataset is modeled as an interacting particle system, under the
assumption that each element of the dataset corresponds to a different particle
and that particle interactions are rendered through gaussian potentials.
Moreover, the way particle interactions are evaluated depends on a parameter
that controls the shape of the underlying gaussian model. In principle,
different clusters of proximal particles can be identified, according to the
value adopted for the parameter. This degree of freedom in gaussian potentials
has been introduced with the goal of allowing multiresolution analysis. In
particular, upon the adoption of a standard community detection algorithm,
multiresolution analysis is put into practice by repeatedly running the
algorithm on a set of adjacency matrices, each dependent on a specific value of
the parameter that controls the shape of gaussian potentials. As a result,
different partitioning schemas are obtained on the given dataset, so that the
information thereof can be better highlighted, with the goal of identifying the
most appropriate number of clusters. Solutions achieved in synthetic datasets
allowed to identify a repetitive pattern, which appear to be useful in the task
of identifying optimal solutions while analysing other synthetic and real
datasets.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2012 01:40:54 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Feb 2012 18:28:45 GMT"
},
{
"version": "v3",
"created": "Thu, 9 Feb 2012 18:31:16 GMT"
},
{
"version": "v4",
"created": "Tue, 24 Jul 2012 22:23:48 GMT"
}
] | 2012-07-26T00:00:00 | [
[
"Armano",
"Giuliano",
""
],
[
"Javarone",
"Marco Alberto",
""
]
] | TITLE: Datasets as Interacting Particle Systems: a Framework for Clustering
ABSTRACT: In this paper we propose a framework inspired by interacting particle physics
and devised to perform clustering on multidimensional datasets. To this end,
any given dataset is modeled as an interacting particle system, under the
assumption that each element of the dataset corresponds to a different particle
and that particle interactions are rendered through gaussian potentials.
Moreover, the way particle interactions are evaluated depends on a parameter
that controls the shape of the underlying gaussian model. In principle,
different clusters of proximal particles can be identified, according to the
value adopted for the parameter. This degree of freedom in gaussian potentials
has been introduced with the goal of allowing multiresolution analysis. In
particular, upon the adoption of a standard community detection algorithm,
multiresolution analysis is put into practice by repeatedly running the
algorithm on a set of adjacency matrices, each dependent on a specific value of
the parameter that controls the shape of gaussian potentials. As a result,
different partitioning schemas are obtained on the given dataset, so that the
information thereof can be better highlighted, with the goal of identifying the
most appropriate number of clusters. Solutions achieved in synthetic datasets
allowed to identify a repetitive pattern, which appear to be useful in the task
of identifying optimal solutions while analysing other synthetic and real
datasets.
|
1205.2822 | Zi-Ke Zhang Dr. | Tian Qiu, Zi-Ke Zhang, Guang Chen | Promotional effect on cold start problem and diversity in a data
characteristic based recommendation method | null | null | null | null | cs.IR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pure methods generally perform excellently in either recommendation accuracy
or diversity, whereas hybrid methods generally outperform pure cases in both
recommendation accuracy and diversity, but encounter the dilemma of optimal
hybridization parameter selection for different recommendation focuses. In this
article, based on a user-item bipartite network, we propose a data
characteristic based algorithm, by relating the hybridization parameter to the
data characteristic. Different from previous hybrid methods, the present
algorithm adaptively assign the optimal parameter specifically for each
individual items according to the correlation between the algorithm and the
item degrees. Compared with a highly accurate pure method, and a hybrid method
which is outstanding in both the recommendation accuracy and the diversity, our
method shows a remarkably promotional effect on the long-standing challenging
problem of the cold start, as well as the recommendation diversity, while
simultaneously keeps a high overall recommendation accuracy. Even compared with
an improved hybrid method which is highly efficient on the cold start problem,
the proposed method not only further improves the recommendation accuracy of
the cold items, but also enhances the recommendation diversity. Our work might
provide a promising way to better solving the personal recommendation from the
perspective of relating algorithms with dataset properties.
| [
{
"version": "v1",
"created": "Sun, 13 May 2012 02:47:08 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jun 2012 15:43:06 GMT"
},
{
"version": "v3",
"created": "Tue, 24 Jul 2012 20:17:06 GMT"
}
] | 2012-07-26T00:00:00 | [
[
"Qiu",
"Tian",
""
],
[
"Zhang",
"Zi-Ke",
""
],
[
"Chen",
"Guang",
""
]
] | TITLE: Promotional effect on cold start problem and diversity in a data
characteristic based recommendation method
ABSTRACT: Pure methods generally perform excellently in either recommendation accuracy
or diversity, whereas hybrid methods generally outperform pure cases in both
recommendation accuracy and diversity, but encounter the dilemma of optimal
hybridization parameter selection for different recommendation focuses. In this
article, based on a user-item bipartite network, we propose a data
characteristic based algorithm, by relating the hybridization parameter to the
data characteristic. Different from previous hybrid methods, the present
algorithm adaptively assign the optimal parameter specifically for each
individual items according to the correlation between the algorithm and the
item degrees. Compared with a highly accurate pure method, and a hybrid method
which is outstanding in both the recommendation accuracy and the diversity, our
method shows a remarkably promotional effect on the long-standing challenging
problem of the cold start, as well as the recommendation diversity, while
simultaneously keeps a high overall recommendation accuracy. Even compared with
an improved hybrid method which is highly efficient on the cold start problem,
the proposed method not only further improves the recommendation accuracy of
the cold items, but also enhances the recommendation diversity. Our work might
provide a promising way to better solving the personal recommendation from the
perspective of relating algorithms with dataset properties.
|
1207.6037 | Emilio Ferrara | Giovanni Quattrone, Emilio Ferrara, Pasquale De Meo, Licia Capra | Measuring Similarity in Large-scale Folksonomies | 7 pages, SEKE '11: 23rd International Conference on Software
Engineering and Knowledge Engineering | SEKE '11: Proceedings of the 23rd International Conference on
Software Engineering and Knowledge Engineering, pp. 385-391, 2011 | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social (or folksonomic) tagging has become a very popular way to describe
content within Web 2.0 websites. Unlike taxonomies, which overimpose a
hierarchical categorisation of content, folksonomies enable end-users to freely
create and choose the categories (in this case, tags) that best describe some
content. However, as tags are informally defined, continually changing, and
ungoverned, social tagging has often been criticised for lowering, rather than
increasing, the efficiency of searching, due to the number of synonyms,
homonyms, polysemy, as well as the heterogeneity of users and the noise they
introduce. To address this issue, a variety of approaches have been proposed
that recommend users what tags to use, both when labelling and when looking for
resources.
As we illustrate in this paper, real world folksonomies are characterized by
power law distributions of tags, over which commonly used similarity metrics,
including the Jaccard coefficient and the cosine similarity, fail to compute.
We thus propose a novel metric, specifically developed to capture similarity in
large-scale folksonomies, that is based on a mutual reinforcement principle:
that is, two tags are deemed similar if they have been associated to similar
resources, and vice-versa two resources are deemed similar if they have been
labelled by similar tags. We offer an efficient realisation of this similarity
metric, and assess its quality experimentally, by comparing it against cosine
similarity, on three large-scale datasets, namely Bibsonomy, MovieLens and
CiteULike.
| [
{
"version": "v1",
"created": "Wed, 25 Jul 2012 16:01:22 GMT"
}
] | 2012-07-26T00:00:00 | [
[
"Quattrone",
"Giovanni",
""
],
[
"Ferrara",
"Emilio",
""
],
[
"De Meo",
"Pasquale",
""
],
[
"Capra",
"Licia",
""
]
] | TITLE: Measuring Similarity in Large-scale Folksonomies
ABSTRACT: Social (or folksonomic) tagging has become a very popular way to describe
content within Web 2.0 websites. Unlike taxonomies, which overimpose a
hierarchical categorisation of content, folksonomies enable end-users to freely
create and choose the categories (in this case, tags) that best describe some
content. However, as tags are informally defined, continually changing, and
ungoverned, social tagging has often been criticised for lowering, rather than
increasing, the efficiency of searching, due to the number of synonyms,
homonyms, polysemy, as well as the heterogeneity of users and the noise they
introduce. To address this issue, a variety of approaches have been proposed
that recommend users what tags to use, both when labelling and when looking for
resources.
As we illustrate in this paper, real world folksonomies are characterized by
power law distributions of tags, over which commonly used similarity metrics,
including the Jaccard coefficient and the cosine similarity, fail to compute.
We thus propose a novel metric, specifically developed to capture similarity in
large-scale folksonomies, that is based on a mutual reinforcement principle:
that is, two tags are deemed similar if they have been associated to similar
resources, and vice-versa two resources are deemed similar if they have been
labelled by similar tags. We offer an efficient realisation of this similarity
metric, and assess its quality experimentally, by comparing it against cosine
similarity, on three large-scale datasets, namely Bibsonomy, MovieLens and
CiteULike.
|
1207.5775 | Peter Morgan | Peter Morgan | A graphical presentation of signal delays in the datasets of Weihs et al | 9 pages, 9 figures (all data visualization) | null | null | null | quant-ph physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A graphical presentation of the timing of avalanche photodiode events in the
datasets from the experiment of Weihs et al. [Phys. Rev. Lett. 81, 5039 (1998)]
makes manifest the existence of two types of signal delay: (1) The introduction
of rapid switching of the input to a pair of transverse electro-optical
modulators causes a delay of approximately 20 nanoseconds for a proportion of
coincident avalanche photodiode events; this effect has been previously noted,
but a different cause is suggested by the data as considered here. (2) There
are delays that depend on in which avalanche photodiode an event occurs; this
effect has also been previously noted even though it is only strongly apparent
when the relative time difference between avalanche photodiode events is near
the stated 0.5 nanosecond accuracy of the timestamps (but it is identifiable
because of 75 picosecond resolution). The cause of the second effect is a
difference between signal delays for the four avalanche photodiodes, for which
correction can be made by straightforward local adjustments (with almost no
effect on the degree of violation of Bell-CHSH inequalities).
| [
{
"version": "v1",
"created": "Tue, 24 Jul 2012 19:09:28 GMT"
}
] | 2012-07-25T00:00:00 | [
[
"Morgan",
"Peter",
""
]
] | TITLE: A graphical presentation of signal delays in the datasets of Weihs et al
ABSTRACT: A graphical presentation of the timing of avalanche photodiode events in the
datasets from the experiment of Weihs et al. [Phys. Rev. Lett. 81, 5039 (1998)]
makes manifest the existence of two types of signal delay: (1) The introduction
of rapid switching of the input to a pair of transverse electro-optical
modulators causes a delay of approximately 20 nanoseconds for a proportion of
coincident avalanche photodiode events; this effect has been previously noted,
but a different cause is suggested by the data as considered here. (2) There
are delays that depend on in which avalanche photodiode an event occurs; this
effect has also been previously noted even though it is only strongly apparent
when the relative time difference between avalanche photodiode events is near
the stated 0.5 nanosecond accuracy of the timestamps (but it is identifiable
because of 75 picosecond resolution). The cause of the second effect is a
difference between signal delays for the four avalanche photodiodes, for which
correction can be made by straightforward local adjustments (with almost no
effect on the degree of violation of Bell-CHSH inequalities).
|
1207.3031 | Konstantinos Tsianos | Konstantinos I. Tsianos and Michael G. Rabbat | Distributed Strongly Convex Optimization | 18 pages single column draftcls format, 1 figure, Submitted to
Allerton 2012 | null | null | null | cs.DC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A lot of effort has been invested into characterizing the convergence rates
of gradient based algorithms for non-linear convex optimization. Recently,
motivated by large datasets and problems in machine learning, the interest has
shifted towards distributed optimization. In this work we present a distributed
algorithm for strongly convex constrained optimization. Each node in a network
of n computers converges to the optimum of a strongly convex, L-Lipchitz
continuous, separable objective at a rate O(log (sqrt(n) T) / T) where T is the
number of iterations. This rate is achieved in the online setting where the
data is revealed one at a time to the nodes, and in the batch setting where
each node has access to its full local dataset from the start. The same
convergence rate is achieved in expectation when the subgradients used at each
node are corrupted with additive zero-mean noise.
| [
{
"version": "v1",
"created": "Thu, 12 Jul 2012 17:38:46 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Jul 2012 03:08:51 GMT"
}
] | 2012-07-23T00:00:00 | [
[
"Tsianos",
"Konstantinos I.",
""
],
[
"Rabbat",
"Michael G.",
""
]
] | TITLE: Distributed Strongly Convex Optimization
ABSTRACT: A lot of effort has been invested into characterizing the convergence rates
of gradient based algorithms for non-linear convex optimization. Recently,
motivated by large datasets and problems in machine learning, the interest has
shifted towards distributed optimization. In this work we present a distributed
algorithm for strongly convex constrained optimization. Each node in a network
of n computers converges to the optimum of a strongly convex, L-Lipchitz
continuous, separable objective at a rate O(log (sqrt(n) T) / T) where T is the
number of iterations. This rate is achieved in the online setting where the
data is revealed one at a time to the nodes, and in the batch setting where
each node has access to its full local dataset from the start. The same
convergence rate is achieved in expectation when the subgradients used at each
node are corrupted with additive zero-mean noise.
|
1207.4525 | Simon Lacoste-Julien | Simon Lacoste-Julien, Konstantina Palla, Alex Davies, Gjergji Kasneci,
Thore Graepel, Zoubin Ghahramani | SiGMa: Simple Greedy Matching for Aligning Large Knowledge Bases | 10 pages + 2 pages appendix; 5 figures -- initial preprint | null | null | null | cs.AI cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Internet has enabled the creation of a growing number of large-scale
knowledge bases in a variety of domains containing complementary information.
Tools for automatically aligning these knowledge bases would make it possible
to unify many sources of structured knowledge and answer complex queries.
However, the efficient alignment of large-scale knowledge bases still poses a
considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a
simple algorithm for aligning knowledge bases with millions of entities and
facts. SiGMa is an iterative propagation algorithm which leverages both the
structural information from the relationship graph as well as flexible
similarity measures between entity properties in a greedy local search, thus
making it scalable. Despite its greedy nature, our experiments indicate that
SiGMa can efficiently match some of the world's largest knowledge bases with
high precision. We provide additional experiments on benchmark datasets which
demonstrate that SiGMa can outperform state-of-the-art approaches both in
accuracy and efficiency.
| [
{
"version": "v1",
"created": "Thu, 19 Jul 2012 00:15:05 GMT"
}
] | 2012-07-20T00:00:00 | [
[
"Lacoste-Julien",
"Simon",
""
],
[
"Palla",
"Konstantina",
""
],
[
"Davies",
"Alex",
""
],
[
"Kasneci",
"Gjergji",
""
],
[
"Graepel",
"Thore",
""
],
[
"Ghahramani",
"Zoubin",
""
]
] | TITLE: SiGMa: Simple Greedy Matching for Aligning Large Knowledge Bases
ABSTRACT: The Internet has enabled the creation of a growing number of large-scale
knowledge bases in a variety of domains containing complementary information.
Tools for automatically aligning these knowledge bases would make it possible
to unify many sources of structured knowledge and answer complex queries.
However, the efficient alignment of large-scale knowledge bases still poses a
considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a
simple algorithm for aligning knowledge bases with millions of entities and
facts. SiGMa is an iterative propagation algorithm which leverages both the
structural information from the relationship graph as well as flexible
similarity measures between entity properties in a greedy local search, thus
making it scalable. Despite its greedy nature, our experiments indicate that
SiGMa can efficiently match some of the world's largest knowledge bases with
high precision. We provide additional experiments on benchmark datasets which
demonstrate that SiGMa can outperform state-of-the-art approaches both in
accuracy and efficiency.
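As a hedged sketch of the greedy, propagation-style matching described above (not the authors' SiGMa implementation), the following Python fragment keeps a priority queue of candidate entity pairs scored by a property-similarity function plus a crude structural bonus, commits the best pair, and lets the matched pair's neighbours generate new candidates; the toy graphs, the name_sim function and the scoring weights are assumptions.

import heapq

def greedy_match(neigh1, neigh2, sim, seeds):
    # neigh1 / neigh2: adjacency dicts of the two knowledge bases; sim(a, b) in [0, 1].
    matched1, matched2, alignment = set(), set(), {}
    heap = [(-sim(a, b), a, b) for a, b in seeds]
    heapq.heapify(heap)
    while heap:
        neg_score, a, b = heapq.heappop(heap)
        if a in matched1 or b in matched2:
            continue                                  # one side already matched
        matched1.add(a); matched2.add(b); alignment[a] = b
        # propagation: neighbours of a freshly matched pair become new candidates
        for na in neigh1.get(a, ()):
            for nb in neigh2.get(b, ()):
                if na not in matched1 and nb not in matched2:
                    bonus = 1.0                       # crude credit for the shared matched neighbour
                    heapq.heappush(heap, (-(sim(na, nb) + bonus), na, nb))
    return alignment

if __name__ == "__main__":
    kb1 = {"Paris": ["France"], "France": ["Paris"]}
    kb2 = {"paris": ["france"], "france": ["paris"]}
    name_sim = lambda a, b: 1.0 if a.lower() == b.lower() else 0.0
    print(greedy_match(kb1, kb2, name_sim, seeds=[("Paris", "paris")]))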
|
1207.4567 | Rong-Hua Li | Rong-Hua Li, Jeffrey Xu Yu | Efficient Core Maintenance in Large Dynamic Graphs | null | null | null | null | cs.DS cs.DB cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The $k$-core decomposition in a graph is a fundamental problem for social
network analysis. The problem of $k$-core decomposition is to calculate the
core number for every node in a graph. Previous studies mainly focus on
$k$-core decomposition in a static graph. There exists a linear time algorithm
for $k$-core decomposition in a static graph. However, in many real-world
applications such as online social networks and the Internet, the graph
typically evolves over time. Under such applications, a key issue is to
maintain the core number of nodes given the graph changes over time. A simple
implementation is to perform the linear time algorithm to recompute the core
number for every node after the graph is updated. Such a simple implementation is
expensive when the graph is very large. In this paper, we propose a new
efficient algorithm to maintain the core number for every node in a dynamic
graph. Our main result is that only certain nodes need to update their core
number given the graph is changed by inserting/deleting an edge. We devise an
efficient algorithm to identify and recompute the core number of such nodes.
The complexity of our algorithm is independent of the graph size. In addition,
to further accelerate the algorithm, we develop two pruning strategies by
exploiting the lower and upper bounds of the core number. Finally, we conduct
extensive experiments over both real-world and synthetic datasets, and the
results demonstrate the efficiency of the proposed algorithm.
| [
{
"version": "v1",
"created": "Thu, 19 Jul 2012 06:57:10 GMT"
}
] | 2012-07-20T00:00:00 | [
[
"Li",
"Rong-Hua",
""
],
[
"Yu",
"Jeffrey Xu",
""
]
] | TITLE: Efficient Core Maintenance in Large Dynamic Graphs
ABSTRACT: The $k$-core decomposition in a graph is a fundamental problem for social
network analysis. The problem of $k$-core decomposition is to calculate the
core number for every node in a graph. Previous studies mainly focus on
$k$-core decomposition in a static graph. There exists a linear time algorithm
for $k$-core decomposition in a static graph. However, in many real-world
applications such as online social networks and the Internet, the graph
typically evolves over time. Under such applications, a key issue is to
maintain the core number of nodes given the graph changes over time. A simple
implementation is to perform the linear time algorithm to recompute the core
number for every node after the graph is updated. Such a simple implementation is
expensive when the graph is very large. In this paper, we propose a new
efficient algorithm to maintain the core number for every node in a dynamic
graph. Our main result is that only certain nodes need to update their core
number given the graph is changed by inserting/deleting an edge. We devise an
efficient algorithm to identify and recompute the core number of such nodes.
The complexity of our algorithm is independent of the graph size. In addition,
to further accelerate the algorithm, we develop two pruning strategies by
exploiting the lower and upper bounds of the core number. Finally, we conduct
extensive experiments over both real-world and synthetic datasets, and the
results demonstrate the efficiency of the proposed algorithm.
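For context, the static baseline mentioned above can be sketched as a peeling procedure: repeatedly remove a minimum-degree node, and the running maximum of the removal degrees gives the core numbers. The heap-based Python sketch below is an O(m log n) illustration of that baseline (the paper's contribution is avoiding this full recomputation after each edge insertion or deletion); the data structures are illustrative choices.

import heapq

def core_numbers(adj):
    # adj: dict node -> set of neighbours (undirected, no self-loops).
    degree = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in degree.items()]
    heapq.heapify(heap)
    removed, core, k = set(), {}, 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != degree[v]:
            continue                         # stale heap entry
        k = max(k, d)                        # removal degrees, taken as a running max, give core numbers
        core[v] = k
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
                heapq.heappush(heap, (degree[u], u))
    return core

if __name__ == "__main__":
    g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    print(core_numbers(g))    # triangle nodes get core number 2, the pendant node 1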
|
1201.4639 | Vicente Pablo Guerrero Bote Vicente Pablo Guerrero Bote | Vicente P. Guerrero-Bote and Felix Moya-Anegon | A further step forward in measuring journals' scientific prestige: The
SJR2 indicator | null | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new size-independent indicator of scientific journal prestige, the SJR2
indicator, is proposed. This indicator takes into account not only the prestige
of the citing scientific journal but also its closeness to the cited journal
using the cosine of the angle between the vectors of the two journals'
cocitation profiles. To eliminate the size effect, the accumulated prestige is
divided by the fraction of the journal's citable documents, thus eliminating
the decreasing tendency of this type of indicator and giving meaning to the
scores. Its method of computation is described, and the results of its
implementation on the Scopus 2008 dataset are compared with those of an ad hoc
Journal Impact Factor, JIF(3y), and SNIP, the comparison being made both
overall and within specific scientific areas. All three, the SJR2 indicator,
the SNIP indicator and the JIF distributions, were found to fit well to a
logarithmic law. Although the three metrics were strongly correlated, there
were major changes in rank. In addition, the SJR2 was distributed more evenly
than the JIF by Subject Area and almost as evenly as the SNIP, and
better than both at the lower level of Specific Subject Areas. The
incorporation of the cosine increased the values of the flows of prestige
between thematically close journals.
| [
{
"version": "v1",
"created": "Mon, 23 Jan 2012 07:39:03 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Jul 2012 04:19:07 GMT"
}
] | 2012-07-19T00:00:00 | [
[
"Guerrero-Bote",
"Vicente P.",
""
],
[
"Moya-Anegon",
"Felix",
""
]
] | TITLE: A further step forward in measuring journals' scientific prestige: The
SJR2 indicator
ABSTRACT: A new size-independent indicator of scientific journal prestige, the SJR2
indicator, is proposed. This indicator takes into account not only the prestige
of the citing scientific journal but also its closeness to the cited journal
using the cosine of the angle between the vectors of the two journals'
cocitation profiles. To eliminate the size effect, the accumulated prestige is
divided by the fraction of the journal's citable documents, thus eliminating
the decreasing tendency of this type of indicator and giving meaning to the
scores. Its method of computation is described, and the results of its
implementation on the Scopus 2008 dataset are compared with those of an ad hoc
Journal Impact Factor, JIF(3y), and SNIP, the comparison being made both
overall and within specific scientific areas. All three, the SJR2 indicator,
the SNIP indicator and the JIF distributions, were found to fit well to a
logarithmic law. Although the three metrics were strongly correlated, there
were major changes in rank. In addition, the SJR2 was distributed more evenly
than the JIF by Subject Area and almost as evenly as the SNIP, and
better than both at the lower level of Specific Subject Areas. The
incorporation of the cosine increased the values of the flows of prestige
between thematically close journals.
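As a small, hedged illustration of one ingredient described above, the snippet below computes the cosine between journals' co-citation profiles from a raw journal-by-journal citation matrix; the full SJR2 iteration (prestige propagation, size normalisation) is not reproduced, and the orientation of the citation matrix is an assumption.

import numpy as np

def cocitation_cosine(C):
    # C[i, j] = citations from journal i to journal j; journal j's co-citation
    # profile is taken here to be column j of C.
    profiles = C.astype(float).T                    # one row per journal profile
    norms = np.linalg.norm(profiles, axis=1, keepdims=True)
    norms[norms == 0.0] = 1.0                       # avoid division by zero
    unit = profiles / norms
    return unit @ unit.T

if __name__ == "__main__":
    C = np.array([[0, 5, 1],
                  [4, 0, 0],
                  [1, 0, 0]])
    cos = cocitation_cosine(C)
    print(np.round(cos, 3))    # cos[i, j] close to 1 means thematically close journals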
|
1207.4129 | Dragomir Anguelov | Dragomir Anguelov, Daphne Koller, Hoi-Cheung Pang, Praveen Srinivasan,
Sebastian Thrun | Recovering Articulated Object Models from 3D Range Data | Appears in Proceedings of the Twentieth Conference on Uncertainty in
Artificial Intelligence (UAI2004) | null | null | UAI-P-2004-PG-18-26 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of unsupervised learning of complex articulated object
models from 3D range data. We describe an algorithm whose input is a set of
meshes corresponding to different configurations of an articulated object. The
algorithm automatically recovers a decomposition of the object into
approximately rigid parts, the location of the parts in the different object
instances, and the articulated object skeleton linking the parts. Our algorithm
first registers all the meshes using an unsupervised non-rigid technique
described in a companion paper. It then segments the meshes using a graphical
model that captures the spatial contiguity of parts. The segmentation is done
using the EM algorithm, iterating between finding a decomposition of the object
into rigid parts, and finding the location of the parts in the object
instances. Although the graphical model is densely connected, the object
decomposition step can be performed optimally and efficiently, allowing us to
identify a large number of object parts while avoiding local maxima. We
demonstrate the algorithm on real world datasets, recovering a 15-part
articulated model of a human puppet from just 7 different puppet
configurations, as well as a 4-part model of a flexing arm where significant
non-rigid deformation was present.
| [
{
"version": "v1",
"created": "Wed, 11 Jul 2012 14:48:13 GMT"
}
] | 2012-07-19T00:00:00 | [
[
"Anguelov",
"Dragomir",
""
],
[
"Koller",
"Daphne",
""
],
[
"Pang",
"Hoi-Cheung",
""
],
[
"Srinivasan",
"Praveen",
""
],
[
"Thrun",
"Sebastian",
""
]
] | TITLE: Recovering Articulated Object Models from 3D Range Data
ABSTRACT: We address the problem of unsupervised learning of complex articulated object
models from 3D range data. We describe an algorithm whose input is a set of
meshes corresponding to different configurations of an articulated object. The
algorithm automatically recovers a decomposition of the object into
approximately rigid parts, the location of the parts in the different object
instances, and the articulated object skeleton linking the parts. Our algorithm
first registers all the meshes using an unsupervised non-rigid technique
described in a companion paper. It then segments the meshes using a graphical
model that captures the spatial contiguity of parts. The segmentation is done
using the EM algorithm, iterating between finding a decomposition of the object
into rigid parts, and finding the location of the parts in the object
instances. Although the graphical model is densely connected, the object
decomposition step can be performed optimally and efficiently, allowing us to
identify a large number of object parts while avoiding local maxima. We
demonstrate the algorithm on real world datasets, recovering a 15-part
articulated model of a human puppet from just 7 different puppet
configurations, as well as a 4-part model of a flexing arm where significant
non-rigid deformation was present.
|
1207.4132 | Rodney Nielsen | Rodney Nielsen | MOB-ESP and other Improvements in Probability Estimation | Appears in Proceedings of the Twentieth Conference on Uncertainty in
Artificial Intelligence (UAI2004) | null | null | UAI-P-2004-PG-418-425 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key prerequisite to optimal reasoning under uncertainty in intelligent
systems is to start with good class probability estimates. This paper improves
on the current best probability estimation trees (Bagged-PETs) and also
presents a new ensemble-based algorithm (MOB-ESP). Comparisons are made using
several benchmark datasets and multiple metrics. These experiments show that
MOB-ESP outputs significantly more accurate class probabilities than either the
baseline B-PETs algorithm or the enhanced version presented here (EB-PETs).
These results are based on metrics closely associated with the average accuracy
of the predictions. MOB-ESP also provides much better probability rankings than
B-PETs. The paper further suggests how these estimation techniques can be
applied in concert with a broader category of classifiers.
| [
{
"version": "v1",
"created": "Wed, 11 Jul 2012 14:51:03 GMT"
}
] | 2012-07-19T00:00:00 | [
[
"Nielsen",
"Rodney",
""
]
] | TITLE: MOB-ESP and other Improvements in Probability Estimation
ABSTRACT: A key prerequisite to optimal reasoning under uncertainty in intelligent
systems is to start with good class probability estimates. This paper improves
on the current best probability estimation trees (Bagged-PETs) and also
presents a new ensemble-based algorithm (MOB-ESP). Comparisons are made using
several benchmark datasets and multiple metrics. These experiments show that
MOB-ESP outputs significantly more accurate class probabilities than either the
baseline B-PETs algorithm or the enhanced version presented here (EB-PETs).
These results are based on metrics closely associated with the average accuracy
of the predictions. MOB-ESP also provides much better probability rankings than
B-PETs. The paper further suggests how these estimation techniques can be
applied in concert with a broader category of classifiers.
|
1207.4146 | Rong Jin | Rong Jin, Luo Si | A Bayesian Approach toward Active Learning for Collaborative Filtering | Appears in Proceedings of the Twentieth Conference on Uncertainty in
Artificial Intelligence (UAI2004) | null | null | UAI-P-2004-PG-278-285 | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative filtering is a useful technique for exploiting the preference
patterns of a group of users to predict the utility of items for the active
user. In general, the performance of collaborative filtering depends on the
number of rated examples given by the active user. The more the number of rated
examples given by the active user, the more accurate the predicted ratings will
be. Active learning provides an effective way to acquire the most informative
rated examples from active users. Previous work on active learning for
collaborative filtering only considers the expected loss function based on the
estimated model, which can be misleading when the estimated model is
inaccurate. This paper takes one step further by taking into account the
posterior distribution of the estimated model, which results in a more robust
active learning algorithm. Empirical studies with datasets of movie ratings
show that when the number of ratings from the active user is restricted to be
small, active learning methods based only on the estimated model do not perform
well, while the active learning method using the model distribution achieves
substantially better performance.
| [
{
"version": "v1",
"created": "Wed, 11 Jul 2012 14:55:41 GMT"
}
] | 2012-07-19T00:00:00 | [
[
"Jin",
"Rong",
""
],
[
"Si",
"Luo",
""
]
] | TITLE: A Bayesian Approach toward Active Learning for Collaborative Filtering
ABSTRACT: Collaborative filtering is a useful technique for exploiting the preference
patterns of a group of users to predict the utility of items for the active
user. In general, the performance of collaborative filtering depends on the
number of rated examples given by the active user. The more the number of rated
examples given by the active user, the more accurate the predicted ratings will
be. Active learning provides an effective way to acquire the most informative
rated examples from active users. Previous work on active learning for
collaborative filtering only considers the expected loss function based on the
estimated model, which can be misleading when the estimated model is
inaccurate. This paper takes one step further by taking into account the
posterior distribution of the estimated model, which results in a more robust
active learning algorithm. Empirical studies with datasets of movie ratings
show that when the number of ratings from the active user is restricted to be
small, active learning methods based only on the estimated model do not perform
well, while the active learning method using the model distribution achieves
substantially better performance.
|
1207.4169 | Michal Rosen-Zvi | Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, Padhraic Smyth | The Author-Topic Model for Authors and Documents | Appears in Proceedings of the Twentieth Conference on Uncertainty in
Artificial Intelligence (UAI2004) | null | null | UAI-P-2004-PG-487-494 | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the author-topic model, a generative model for documents that
extends Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) to include
authorship information. Each author is associated with a multinomial
distribution over topics and each topic is associated with a multinomial
distribution over words. A document with multiple authors is modeled as a
distribution over topics that is a mixture of the distributions associated with
the authors. We apply the model to a collection of 1,700 NIPS conference papers
and 160,000 CiteSeer abstracts. Exact inference is intractable for these
datasets and we use Gibbs sampling to estimate the topic and author
distributions. We compare the performance with two other generative models for
documents, which are special cases of the author-topic model: LDA (a topic
model) and a simple author model in which each author is associated with a
distribution over words rather than a distribution over topics. We show topics
recovered by the author-topic model, and demonstrate applications to computing
similarity between authors and entropy of author output.
| [
{
"version": "v1",
"created": "Wed, 11 Jul 2012 15:05:53 GMT"
}
] | 2012-07-19T00:00:00 | [
[
"Rosen-Zvi",
"Michal",
""
],
[
"Griffiths",
"Thomas",
""
],
[
"Steyvers",
"Mark",
""
],
[
"Smyth",
"Padhraic",
""
]
] | TITLE: The Author-Topic Model for Authors and Documents
ABSTRACT: We introduce the author-topic model, a generative model for documents that
extends Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) to include
authorship information. Each author is associated with a multinomial
distribution over topics and each topic is associated with a multinomial
distribution over words. A document with multiple authors is modeled as a
distribution over topics that is a mixture of the distributions associated with
the authors. We apply the model to a collection of 1,700 NIPS conference papers
and 160,000 CiteSeer abstracts. Exact inference is intractable for these
datasets and we use Gibbs sampling to estimate the topic and author
distributions. We compare the performance with two other generative models for
documents, which are special cases of the author-topic model: LDA (a topic
model) and a simple author model in which each author is associated with a
distribution over words rather than a distribution over topics. We show topics
recovered by the author-topic model, and demonstrate applications to computing
similarity between authors and entropy of author output.
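A compact collapsed Gibbs sampler in the spirit of the model described above is sketched below: every token is jointly assigned one of the document's authors and a topic, with counts smoothed by Dirichlet hyperparameters alpha and beta. The toy corpus, the hyperparameter values and the update schedule are illustrative assumptions, not the configuration used on the NIPS or CiteSeer data.

import numpy as np

def author_topic_gibbs(docs, doc_authors, V, A, K, iters=200, alpha=0.1, beta=0.01, seed=0):
    rng = np.random.default_rng(seed)
    AT = np.zeros((A, K)) + alpha        # author-topic counts (smoothed)
    TW = np.zeros((K, V)) + beta         # topic-word counts (smoothed)
    z, x = [], []                        # per-token topic and author assignments
    for d, words in enumerate(docs):     # random initialisation
        zs, xs = [], []
        for w in words:
            t = rng.integers(K); a = rng.choice(doc_authors[d])
            AT[a, t] += 1; TW[t, w] += 1
            zs.append(t); xs.append(a)
        z.append(zs); x.append(xs)
    for _ in range(iters):
        for d, words in enumerate(docs):
            authors = doc_authors[d]
            for n, w in enumerate(words):
                t, a = z[d][n], x[d][n]
                AT[a, t] -= 1; TW[t, w] -= 1          # remove the token's own counts
                # joint conditional over (author, topic) pairs for this token
                p = (AT[authors, :] / AT[authors, :].sum(axis=1, keepdims=True)) \
                    * (TW[:, w] / TW.sum(axis=1))
                p = p.ravel() / p.sum()
                idx = rng.choice(len(p), p=p)
                a, t = authors[idx // K], idx % K
                AT[a, t] += 1; TW[t, w] += 1
                z[d][n], x[d][n] = t, a
    return AT, TW

if __name__ == "__main__":
    docs = [[0, 1, 2, 1], [2, 3, 3, 4], [0, 4, 1, 3]]   # word ids, vocabulary of 5
    doc_authors = [[0], [1], [0, 1]]                    # author ids per document
    AT, TW = author_topic_gibbs(docs, doc_authors, V=5, A=2, K=2)
    print("author-topic distributions:\n", np.round(AT / AT.sum(axis=1, keepdims=True), 2))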
|
1207.4293 | Piotr Br\'odka | Piotr Br\'odka, Przemys{\l}aw Kazienko, Katarzyna Musia{\l}, Krzysztof
Skibicki | Analysis of Neighbourhoods in Multi-layered Dynamic Social Networks | 16 pages, International Journal of Computational Intelligence Systems | International Journal of Computational Intelligence Systems, Vol.
5, No. 3 (June, 2012), 582-596 | 10.1080/18756891.2012.696922 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social networks existing among employees, customers or users of various IT
systems have become one of the research areas of growing importance. A social
network consists of nodes - social entities and edges linking pairs of nodes.
In regular, one-layered social networks, two nodes - i.e. people are connected
with a single edge whereas in the multi-layered social networks, there may be
many links of different types for a pair of nodes. Nowadays data about people
and their interactions, which exists in all social media, provides information
about many different types of relationships within one network. Analysing this
data one can obtain knowledge not only about the structure and characteristics
of the network but also gain understanding of the semantics of human relations.
Are they direct or not? Do people tend to sustain single or multiple relations
with a given person? What types of communication are the most important for
them? Answers to these and more questions enable us to draw conclusions about
the semantics of human interactions. Unfortunately, most of the methods used for
social network analysis (SNA) may be applied only to one-layered social
networks. Thus, some new structural measures for multi-layered social networks
are proposed in the paper, in particular: cross-layer clustering coefficient,
cross-layer degree centrality and various versions of multi-layered degree
centralities. The authors also investigated the dynamics of the multi-layered
neighbourhood for five different layers within the social network. An
evaluation of the proposed concepts on a real-world dataset is presented.
The measures proposed in the paper may be used directly in various methods for
collective classification, in which nodes are assigned labels according to
their structural input features.
| [
{
"version": "v1",
"created": "Wed, 18 Jul 2012 08:06:25 GMT"
}
] | 2012-07-19T00:00:00 | [
[
"Bródka",
"Piotr",
""
],
[
"Kazienko",
"Przemysław",
""
],
[
"Musiał",
"Katarzyna",
""
],
[
"Skibicki",
"Krzysztof",
""
]
] | TITLE: Analysis of Neighbourhoods in Multi-layered Dynamic Social Networks
ABSTRACT: Social networks existing among employees, customers or users of various IT
systems have become one of the research areas of growing importance. A social
network consists of nodes - social entities and edges linking pairs of nodes.
In regular, one-layered social networks, two nodes - i.e. people are connected
with a single edge whereas in the multi-layered social networks, there may be
many links of different types for a pair of nodes. Nowadays data about people
and their interactions, which exists in all social media, provides information
about many different types of relationships within one network. Analysing this
data one can obtain knowledge not only about the structure and characteristics
of the network but also gain understanding of the semantics of human relations.
Are they direct or not? Do people tend to sustain single or multiple relations
with a given person? What types of communication are the most important for
them? Answers to these and more questions enable us to draw conclusions about
the semantics of human interactions. Unfortunately, most of the methods used for
social network analysis (SNA) may be applied only to one-layered social
networks. Thus, some new structural measures for multi-layered social networks
are proposed in the paper, in particular: cross-layer clustering coefficient,
cross-layer degree centrality and various versions of multi-layered degree
centralities. The authors also investigated the dynamics of the multi-layered
neighbourhood for five different layers within the social network. An
evaluation of the proposed concepts on a real-world dataset is presented.
The measures proposed in the paper may be used directly in various methods for
collective classification, in which nodes are assigned labels according to
their structural input features.
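The exact formulas for the cross-layer measures named above are given in the paper; purely as an illustration of working with a multi-layered neighbourhood, the sketch below computes the union of a node's neighbourhoods across layers and the subset of neighbours reached on at least half of the layers. The threshold and both derived quantities are assumptions, not the paper's definitions.

def multilayer_neighbourhoods(layers, node):
    # layers: dict layer_name -> adjacency dict (node -> set of neighbours).
    per_layer = {name: adj.get(node, set()) for name, adj in layers.items()}
    union = set().union(*per_layer.values()) if per_layer else set()
    threshold = len(layers) / 2.0
    strong = {v for v in union
              if sum(v in ns for ns in per_layer.values()) >= threshold}
    return union, strong

if __name__ == "__main__":
    layers = {
        "comments": {"alice": {"bob", "carol"}, "bob": {"alice"}},
        "likes":    {"alice": {"bob"}, "dave": {"alice"}},
    }
    union, strong = multilayer_neighbourhoods(layers, "alice")
    print("cross-layer neighbourhood:", union)          # {'bob', 'carol'}
    print("neighbours on >= half the layers:", strong)  # {'bob'}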
|
1207.3790 | Hocine Cherifi | Vincent Labatut (BIT Lab), Hocine Cherifi (Le2i) | Accuracy Measures for the Comparison of Classifiers | The 5th International Conference on Information Technology, amman :
Jordanie (2011) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The selection of the best classification algorithm for a given dataset is a
very widespread problem. It is also a complex one, in the sense that it requires
making several important methodological choices. Among them, in this work we
focus on the measure used to assess the classification performance and rank the
algorithms. We present the most popular measures and discuss their properties.
Despite the numerous measures proposed over the years, many of them turn out to
be equivalent in this specific case, to have interpretation problems, or to be
unsuitable for our purpose. Consequently, classic overall success rate or
marginal rates should be preferred for this specific task.
| [
{
"version": "v1",
"created": "Mon, 16 Jul 2012 08:49:34 GMT"
}
] | 2012-07-18T00:00:00 | [
[
"Labatut",
"Vincent",
"",
"BIT Lab"
],
[
"Cherifi",
"Hocine",
"",
"Le2i"
]
] | TITLE: Accuracy Measures for the Comparison of Classifiers
ABSTRACT: The selection of the best classification algorithm for a given dataset is a
very widespread problem. It is also a complex one, in the sense that it requires
making several important methodological choices. Among them, in this work we
focus on the measure used to assess the classification performance and rank the
algorithms. We present the most popular measures and discuss their properties.
Despite the numerous measures proposed over the years, many of them turn out to
be equivalent in this specific case, to have interpretation problems, or to be
unsuitable for our purpose. Consequently, classic overall success rate or
marginal rates should be preferred for this specific task.
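As a minimal sketch of the measures favoured above, the snippet below derives the overall success rate and the per-class marginal rates (recall per true class, precision per predicted class) from a confusion matrix; the example matrix is made up.

import numpy as np

def overall_and_marginal_rates(confusion):
    # confusion[i, j] = number of instances of true class i predicted as class j.
    confusion = np.asarray(confusion, dtype=float)
    overall = np.trace(confusion) / confusion.sum()
    recall = np.diag(confusion) / confusion.sum(axis=1)      # per true class
    precision = np.diag(confusion) / confusion.sum(axis=0)   # per predicted class
    return overall, recall, precision

if __name__ == "__main__":
    cm = [[50, 3, 2],
          [5, 40, 5],
          [2, 8, 35]]
    overall, recall, precision = overall_and_marginal_rates(cm)
    print("overall success rate:", round(overall, 3))
    print("per-class recall:", np.round(recall, 3))
    print("per-class precision:", np.round(precision, 3))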
|
1207.3809 | Julian McAuley | Julian McAuley and Jure Leskovec | Image Labeling on a Network: Using Social-Network Metadata for Image
Classification | ECCV 2012; 14 pages, 4 figures | null | null | null | cs.CV cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale image retrieval benchmarks invariably consist of images from the
Web. Many of these benchmarks are derived from online photo sharing networks,
like Flickr, which in addition to hosting images also provide a highly
interactive social community. Such communities generate rich metadata that can
naturally be harnessed for image classification and retrieval. Here we study
four popular benchmark datasets, extending them with social-network metadata,
such as the groups to which each image belongs, the comment thread associated
with the image, who uploaded it, their location, and their network of friends.
Since these types of data are inherently relational, we propose a model that
explicitly accounts for the interdependencies between images sharing common
properties. We model the task as a binary labeling problem on a network, and
use structured learning techniques to learn model parameters. We find that
social-network metadata are useful in a variety of classification tasks, in
many cases outperforming methods based on image content.
| [
{
"version": "v1",
"created": "Mon, 16 Jul 2012 20:04:12 GMT"
}
] | 2012-07-18T00:00:00 | [
[
"McAuley",
"Julian",
""
],
[
"Leskovec",
"Jure",
""
]
] | TITLE: Image Labeling on a Network: Using Social-Network Metadata for Image
Classification
ABSTRACT: Large-scale image retrieval benchmarks invariably consist of images from the
Web. Many of these benchmarks are derived from online photo sharing networks,
like Flickr, which in addition to hosting images also provide a highly
interactive social community. Such communities generate rich metadata that can
naturally be harnessed for image classification and retrieval. Here we study
four popular benchmark datasets, extending them with social-network metadata,
such as the groups to which each image belongs, the comment thread associated
with the image, who uploaded it, their location, and their network of friends.
Since these types of data are inherently relational, we propose a model that
explicitly accounts for the interdependencies between images sharing common
properties. We model the task as a binary labeling problem on a network, and
use structured learning techniques to learn model parameters. We find that
social-network metadata are useful in a variety of classification tasks, in
many cases outperforming methods based on image content.
|
1207.3520 | Fabian Pedregosa | Fabian Pedregosa (INRIA Paris - Rocquencourt), Alexandre Gramfort
(LNAO, INRIA Saclay - Ile de France), Ga\"el Varoquaux (LNAO, INRIA Saclay -
Ile de France), Bertrand Thirion (INRIA Saclay - Ile de France), Christophe
Pallier (NEUROSPIN), Elodie Cauvet (NEUROSPIN) | Improved brain pattern recovery through ranking approaches | null | Pattern Recognition in NeuroImaging (PRNI 2012) (2012) | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring the functional specificity of brain regions from functional
Magnetic Resonance Images (fMRI) data is a challenging statistical problem.
While the General Linear Model (GLM) remains the standard approach for brain
mapping, supervised learning techniques (a.k.a. decoding) have proven to be
useful to capture multivariate statistical effects distributed across voxels
and brain regions. Up to now, much effort has been made to improve decoding by
incorporating prior knowledge in the form of a particular regularization term.
In this paper we demonstrate that further improvement can be made by accounting
for non-linearities using a ranking approach rather than the commonly used
least-squares regression. Through simulation, we compare the recovery properties
of our approach to linear models commonly used in fMRI-based decoding. We
demonstrate the superiority of ranking with a real fMRI dataset.
| [
{
"version": "v1",
"created": "Sun, 15 Jul 2012 15:06:35 GMT"
}
] | 2012-07-17T00:00:00 | [
[
"Pedregosa",
"Fabian",
"",
"INRIA Paris - Rocquencourt"
],
[
"Gramfort",
"Alexandre",
"",
"LNAO, INRIA Saclay - Ile de France"
],
[
"Varoquaux",
"Gaël",
"",
"LNAO, INRIA Saclay -\n Ile de France"
],
[
"Thirion",
"Bertrand",
"",
"INRIA Saclay - Ile de France"
],
[
"Pallier",
"Christophe",
"",
"NEUROSPIN"
],
[
"Cauvet",
"Elodie",
"",
"NEUROSPIN"
]
] | TITLE: Improved brain pattern recovery through ranking approaches
ABSTRACT: Inferring the functional specificity of brain regions from functional
Magnetic Resonance Images (fMRI) data is a challenging statistical problem.
While the General Linear Model (GLM) remains the standard approach for brain
mapping, supervised learning techniques (a.k.a. decoding) have proven to be
useful to capture multivariate statistical effects distributed across voxels
and brain regions. Up to now, much effort has been made to improve decoding by
incorporating prior knowledge in the form of a particular regularization term.
In this paper we demonstrate that further improvement can be made by accounting
for non-linearities using a ranking approach rather than the commonly used
least-squares regression. Through simulation, we compare the recovery properties
of our approach to linear models commonly used in fMRI-based decoding. We
demonstrate the superiority of ranking with a real fMRI dataset.
|
1207.3532 | Xifeng Yan Xifeng Yan | Yang Li, Pegah Kamousi, Fangqiu Han, Shengqi Yang, Xifeng Yan, Subhash
Suri | Memory Efficient De Bruijn Graph Construction | 13 pages, 19 figures, 1 table | null | null | null | cs.DS cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Massively parallel DNA sequencing technologies are revolutionizing genomics
research. Billions of short reads generated at low costs can be assembled for
reconstructing the whole genomes. Unfortunately, the large memory footprint of
the existing de novo assembly algorithms makes it challenging to get the
assembly done for higher eukaryotes like mammals. In this work, we investigate
the memory issue of constructing de Bruijn graph, a core task in leading
assembly algorithms, which often consumes several hundreds of gigabytes memory
for large genomes. We propose a disk-based partition method, called Minimum
Substring Partitioning (MSP), to complete the task using less than 10 gigabytes
memory, without runtime slowdown. MSP breaks the short reads into multiple
small disjoint partitions so that each partition can be loaded into memory,
processed individually and later merged with others to form a de Bruijn graph.
By leveraging the overlaps among the k-mers (substrings of length k), MSP
achieves an astonishing compression ratio: the total size of partitions is reduced
from $\Theta(kn)$ to $\Theta(n)$, where $n$ is the size of the short read
database, and $k$ is the length of a $k$-mer. Experimental results show that
our method can build de Bruijn graphs using a commodity computer for any
large-volume sequence dataset.
| [
{
"version": "v1",
"created": "Sun, 15 Jul 2012 19:45:19 GMT"
}
] | 2012-07-17T00:00:00 | [
[
"Li",
"Yang",
""
],
[
"Kamousi",
"Pegah",
""
],
[
"Han",
"Fangqiu",
""
],
[
"Yang",
"Shengqi",
""
],
[
"Yan",
"Xifeng",
""
],
[
"Suri",
"Subhash",
""
]
] | TITLE: Memory Efficient De Bruijn Graph Construction
ABSTRACT: Massively parallel DNA sequencing technologies are revolutionizing genomics
research. Billions of short reads generated at low costs can be assembled for
reconstructing the whole genomes. Unfortunately, the large memory footprint of
the existing de novo assembly algorithms makes it challenging to get the
assembly done for higher eukaryotes like mammals. In this work, we investigate
the memory issue of constructing de Bruijn graph, a core task in leading
assembly algorithms, which often consumes several hundreds of gigabytes memory
for large genomes. We propose a disk-based partition method, called Minimum
Substring Partitioning (MSP), to complete the task using less than 10 gigabytes
memory, without runtime slowdown. MSP breaks the short reads into multiple
small disjoint partitions so that each partition can be loaded into memory,
processed individually and later merged with others to form a de Bruijn graph.
By leveraging the overlaps among the k-mers (substrings of length k), MSP
achieves an astonishing compression ratio: the total size of partitions is reduced
from $\Theta(kn)$ to $\Theta(n)$, where $n$ is the size of the short read
database, and $k$ is the length of a $k$-mer. Experimental results show that
our method can build de Bruijn graphs using a commodity computer for any
large-volume sequence dataset.
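A hedged sketch of the partitioning idea described above: every k-mer of a read is routed to the partition named by its lexicographically smallest length-p substring, and consecutive k-mers that share that minimum substring are emitted as one read segment. The parameter values, the toy read and the exact bookkeeping are assumptions; the real MSP pipeline additionally writes partitions to disk and merges them into a de Bruijn graph.

def min_substring(kmer, p):
    # Lexicographically smallest substring of length p inside the k-mer.
    return min(kmer[i:i + p] for i in range(len(kmer) - p + 1))

def partition_read(read, k, p):
    # Group maximal runs of consecutive k-mers that share a minimum substring.
    partitions = {}
    current_key, start = None, 0
    for i in range(len(read) - k + 1):
        key = min_substring(read[i:i + k], p)
        if key != current_key:
            if current_key is not None:
                # store the read segment covering k-mers [start, i)
                partitions.setdefault(current_key, []).append(read[start:i + k - 1])
            current_key, start = key, i
    if current_key is not None:
        partitions.setdefault(current_key, []).append(read[start:])
    return partitions

if __name__ == "__main__":
    read = "ACGTACGTGGTACCA"
    for key, segments in partition_read(read, k=6, p=3).items():
        print(key, segments)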
|
1207.2600 | Sokyna Alqatawneh Dr | Sokyna Qatawneh, Afaf Alneaimi, Thamer Rawashdeh, Mmohammad Muhairat,
Rami Qahwaji and Stan Ipson | Efficient Prediction of DNA-Binding Proteins Using Machine Learning | null | S. Qatawneh, A. Alneaimi, Th. Rawashdeh, M. Muhairat, R. Qahwaji
and S. Ipson,"Efficient Prediction of DNA-Binding Proteins using Machine
Learning", International Journal on Bioinformatics & Biosciences (IJBB)
Vol.2, No.2, June 2012 | null | null | cs.CV q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA-binding proteins are a class of proteins which have a specific or general
affinity to DNA and include three important components: transcription factors;
nucleases, and histones. DNA-binding proteins also perform important roles in
many types of cellular activities. In this paper we describe machine learning
systems for the prediction of DNA-binding proteins, where a Support Vector
Machine and a Cascade Correlation Neural Network are optimized and then
compared to determine the learning algorithm that achieves the best prediction
performance. The information used for classification is derived from
characteristics that include overall charge, patch size and amino acid
composition. In total, 121 DNA-binding proteins and 238 non-binding proteins
are used to build and evaluate the system. For SVM using the ANOVA Kernel with
Jack-knife evaluation, an accuracy of 86.7% has been achieved with 91.1% for
sensitivity and 85.3% for specificity. For CCNN optimized over the entire
dataset with Jack-knife evaluation, we report an accuracy of 75.4%, while the
values of specificity and sensitivity achieved were 72.3% and 82.6%,
respectively.
| [
{
"version": "v1",
"created": "Wed, 11 Jul 2012 11:28:57 GMT"
}
] | 2012-07-12T00:00:00 | [
[
"Qatawneh",
"Sokyna",
""
],
[
"Alneaimi",
"Afaf",
""
],
[
"Rawashdeh",
"Thamer",
""
],
[
"Muhairat",
"Mmohammad",
""
],
[
"Qahwaji",
"Rami",
""
],
[
"Ipson",
"Stan",
""
]
] | TITLE: Efficient Prediction of DNA-Binding Proteins Using Machine Learning
ABSTRACT: DNA-binding proteins are a class of proteins which have a specific or general
affinity to DNA and include three important components: transcription factors;
nucleases, and histones. DNA-binding proteins also perform important roles in
many types of cellular activities. In this paper we describe machine learning
systems for the prediction of DNA-binding proteins, where a Support Vector
Machine and a Cascade Correlation Neural Network are optimized and then
compared to determine the learning algorithm that achieves the best prediction
performance. The information used for classification is derived from
characteristics that include overall charge, patch size and amino acid
composition. In total, 121 DNA-binding proteins and 238 non-binding proteins
are used to build and evaluate the system. For SVM using the ANOVA Kernel with
Jack-knife evaluation, an accuracy of 86.7% has been achieved with 91.1% for
sensitivity and 85.3% for specificity. For CCNN optimized over the entire
dataset with Jack-knife evaluation, we report an accuracy of 75.4%, while the
values of specificity and sensitivity achieved were 72.3% and 82.6%,
respectively.
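To illustrate the evaluation protocol (not to reproduce the reported numbers), the sketch below runs an SVM under jack-knife (leave-one-out) cross-validation on randomly generated stand-in features; an RBF kernel replaces the ANOVA kernel, and all dataset sizes and feature semantics are assumptions.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n_pos, n_neg, n_features = 40, 60, 23     # e.g. charge, patch size, composition values
X = np.vstack([rng.normal(0.5, 1.0, (n_pos, n_features)),
               rng.normal(-0.5, 1.0, (n_neg, n_features))])
y = np.array([1] * n_pos + [0] * n_neg)   # 1 = DNA-binding, 0 = non-binding

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())   # one held-out protein per fold

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print("accuracy:   ", (tp + tn) / len(y))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))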
|
1207.2424 | Daniel Jones | Daniel C. Jones, Walter L. Ruzzo, Xinxia Peng, and Michael G. Katze | Compression of next-generation sequencing reads aided by highly
efficient de novo assembly | null | null | null | null | q-bio.QM cs.DS q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Quip, a lossless compression algorithm for next-generation
sequencing data in the FASTQ and SAM/BAM formats. In addition to implementing
reference-based compression, we have developed, to our knowledge, the first
assembly-based compressor, using a novel de novo assembly algorithm. A
probabilistic data structure is used to dramatically reduce the memory required
by traditional de Bruijn graph assemblers, allowing millions of reads to be
assembled very efficiently. Read sequences are then stored as positions within
the assembled contigs. This is combined with statistical compression of read
identifiers, quality scores, alignment information, and sequences, effectively
collapsing very large datasets to less than 15% of their original size with no
loss of information.
Availability: Quip is freely available under the BSD license from
http://cs.washington.edu/homes/dcjones/quip.
| [
{
"version": "v1",
"created": "Tue, 10 Jul 2012 17:49:17 GMT"
}
] | 2012-07-11T00:00:00 | [
[
"Jones",
"Daniel C.",
""
],
[
"Ruzzo",
"Walter L.",
""
],
[
"Peng",
"Xinxia",
""
],
[
"Katze",
"Michael G.",
""
]
] | TITLE: Compression of next-generation sequencing reads aided by highly
efficient de novo assembly
ABSTRACT: We present Quip, a lossless compression algorithm for next-generation
sequencing data in the FASTQ and SAM/BAM formats. In addition to implementing
reference-based compression, we have developed, to our knowledge, the first
assembly-based compressor, using a novel de novo assembly algorithm. A
probabilistic data structure is used to dramatically reduce the memory required
by traditional de Bruijn graph assemblers, allowing millions of reads to be
assembled very efficiently. Read sequences are then stored as positions within
the assembled contigs. This is combined with statistical compression of read
identifiers, quality scores, alignment information, and sequences, effectively
collapsing very large datasets to less than 15% of their original size with no
loss of information.
Availability: Quip is freely available under the BSD license from
http://cs.washington.edu/homes/dcjones/quip.
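The probabilistic data structure used by Quip is not specified in the abstract; as a generic, hedged illustration of how such a structure can record k-mer membership in little memory, the sketch below implements a tiny Bloom filter (false positives possible, no false negatives). It is not Quip's actual implementation.

import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 20, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        # Derive n_hashes independent bit positions via salted BLAKE2b digests.
        for i in range(self.n_hashes):
            h = hashlib.blake2b(item.encode(), digest_size=8, salt=bytes([i])).digest()
            yield int.from_bytes(h, "little") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

def kmers(read, k):
    return (read[i:i + k] for i in range(len(read) - k + 1))

if __name__ == "__main__":
    bf = BloomFilter()
    for km in kmers("ACGTACGTGGTACCA", k=8):
        bf.add(km)
    print("ACGTACGT" in bf)   # True (seen)
    print("TTTTTTTT" in bf)   # almost certainly False (false positives are possible)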
|
1205.6912 | D\'avid W\'agner | D\'avid W\'agner, Emiliano Fable, Andreas Pitzschke, Olivier Sauter,
Henri Weisen | Understanding the core density profile in TCV H-mode plasmas | 23 pages, 12 figures | 2012 Plasma Phys. Control. Fusion 54 085018 | 10.1088/0741-3335/54/8/085018 | null | physics.plasm-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Results from a database analysis of H-mode electron density profiles on the
Tokamak \`a Configuration Variable (TCV) in stationary conditions show that the
logarithmic electron density gradient increases with collisionality. By
contrast, usual observations of H-modes showed that the electron density
profiles tend to flatten with increasing collisionality. In this work it is
reinforced that the role of collisionality alone, depending on the parameter
regime, can be rather weak and, in these dominantly electron-heated TCV cases,
the electron density gradient is tailored by the underlying turbulence regime,
which is mostly determined by the ratio of the electron to ion temperature and
that of their gradients. Additionally, mostly in ohmic plasmas, the Ware-pinch
can significantly contribute to the density peaking. Qualitative agreement
between the predicted density peaking by quasi-linear gyrokinetic simulations
and the experimental results is found. Quantitative comparison would
necessitate ion temperature measurements, which are lacking in the considered
experimental dataset. However, the simulation results show that it is the
combination of several effects that influences the density peaking in TCV
H-mode plasmas.
| [
{
"version": "v1",
"created": "Thu, 31 May 2012 08:23:50 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jul 2012 13:50:53 GMT"
}
] | 2012-07-10T00:00:00 | [
[
"Wágner",
"Dávid",
""
],
[
"Fable",
"Emiliano",
""
],
[
"Pitzschke",
"Andreas",
""
],
[
"Sauter",
"Olivier",
""
],
[
"Weisen",
"Henri",
""
]
] | TITLE: Understanding the core density profile in TCV H-mode plasmas
ABSTRACT: Results from a database analysis of H-mode electron density profiles on the
Tokamak \`a Configuration Variable (TCV) in stationary conditions show that the
logarithmic electron density gradient increases with collisionality. By
contrast, usual observations of H-modes showed that the electron density
profiles tend to flatten with increasing collisionality. In this work it is
reinforced that the role of collisionality alone, depending on the parameter
regime, can be rather weak and, in these dominantly electron-heated TCV cases,
the electron density gradient is tailored by the underlying turbulence regime,
which is mostly determined by the ratio of the electron to ion temperature and
that of their gradients. Additionally, mostly in ohmic plasmas, the Ware-pinch
can significantly contribute to the density peaking. Qualitative agreement
between the predicted density peaking by quasi-linear gyrokinetic simulations
and the experimental results is found. Quantitative comparison would
necessitate ion temperature measurements, which are lacking in the considered
experimental dataset. However, the simulation results show that it is the
combination of several effects that influences the density peaking in TCV
H-mode plasmas.
|
1207.1765 | Jonathan Masci | Jonathan Masci and Ueli Meier and Gabriel Fricout and J\"urgen
Schmidhuber | Object Recognition with Multi-Scale Pyramidal Pooling Networks | null | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a Multi-Scale Pyramidal Pooling Network, featuring a novel
pyramidal pooling layer at multiple scales and a novel encoding layer. Thanks
to the former the network does not require all images of a given classification
task to be of equal size. The encoding layer improves generalisation
performance in comparison to similar neural network architectures, especially
when training data is scarce. We evaluate and compare our system to
convolutional neural networks and state-of-the-art computer vision methods on
various benchmark datasets. We also present results on industrial steel defect
classification, where existing architectures are not applicable because of the
constraint on equally sized input images. The proposed architecture can be seen
as a fully supervised hierarchical bag-of-features extension that is trained
online and can be fine-tuned for any given task.
| [
{
"version": "v1",
"created": "Sat, 7 Jul 2012 06:27:52 GMT"
}
] | 2012-07-10T00:00:00 | [
[
"Masci",
"Jonathan",
""
],
[
"Meier",
"Ueli",
""
],
[
"Fricout",
"Gabriel",
""
],
[
"Schmidhuber",
"Jürgen",
""
]
] | TITLE: Object Recognition with Multi-Scale Pyramidal Pooling Networks
ABSTRACT: We present a Multi-Scale Pyramidal Pooling Network, featuring a novel
pyramidal pooling layer at multiple scales and a novel encoding layer. Thanks
to the former the network does not require all images of a given classification
task to be of equal size. The encoding layer improves generalisation
performance in comparison to similar neural network architectures, especially
when training data is scarce. We evaluate and compare our system to
convolutional neural networks and state-of-the-art computer vision methods on
various benchmark datasets. We also present results on industrial steel defect
classification, where existing architectures are not applicable because of the
constraint on equally sized input images. The proposed architecture can be seen
as a fully supervised hierarchical bag-of-features extension that is trained
online and can be fine-tuned for any given task.
|
1207.1916 | Alejandro Frery | Eliana S. de Almeida, Antonio C. Medeiros and Alejandro C. Frery | How good are MatLab, Octave and Scilab for Computational Modelling? | Accepted for publication in the Computational and Applied Mathematics
journal | null | null | null | cs.MS stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article we test the accuracy of three platforms used in computational
modelling: MatLab, Octave and Scilab, running on i386 architecture and three
operating systems (Windows, Ubuntu and Mac OS). We submitted them to numerical
tests using standard data sets and using the functions provided by each
platform. A Monte Carlo study was conducted in some of the datasets in order to
verify the stability of the results with respect to small departures from the
original input. We propose a set of operations which include the computation of
matrix determinants and eigenvalues, whose results are known. We also used data
provided by NIST (National Institute of Standards and Technology), a protocol
which includes the computation of basic univariate statistics (mean, standard
deviation and first-lag correlation), linear regression and extremes of
probability distributions. The assessment was made comparing the results
computed by the platforms with certified values, that is, known results,
computing the number of correct significant digits.
| [
{
"version": "v1",
"created": "Sun, 8 Jul 2012 21:52:03 GMT"
}
] | 2012-07-10T00:00:00 | [
[
"de Almeida",
"Eliana S.",
""
],
[
"Medeiros",
"Antonio C.",
""
],
[
"Frery",
"Alejandro C.",
""
]
] | TITLE: How good are MatLab, Octave and Scilab for Computational Modelling?
ABSTRACT: In this article we test the accuracy of three platforms used in computational
modelling: MatLab, Octave and Scilab, running on i386 architecture and three
operating systems (Windows, Ubuntu and Mac OS). We submitted them to numerical
tests using standard data sets and using the functions provided by each
platform. A Monte Carlo study was conducted in some of the datasets in order to
verify the stability of the results with respect to small departures from the
original input. We propose a set of operations which include the computation of
matrix determinants and eigenvalues, whose results are known. We also used data
provided by NIST (National Institute of Standards and Technology), a protocol
which includes the computation of basic univariate statistics (mean, standard
deviation and first-lag correlation), linear regression and extremes of
probability distributions. The assessment was made comparing the results
computed by the platforms with certified values, that is, known results,
computing the number of correct significant digits.
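A minimal sketch of the accuracy metric mentioned above, assuming the usual log-relative-error (LRE) definition of the number of correct significant digits with respect to a certified value; the example values and the 15-digit cap are illustrative.

import math

def correct_significant_digits(computed, certified, max_digits=15):
    # Log relative error, clipped to [0, max_digits].
    if computed == certified:
        return float(max_digits)
    if certified == 0.0:
        lre = -math.log10(abs(computed))                         # log absolute error instead
    else:
        lre = -math.log10(abs(computed - certified) / abs(certified))
    return max(0.0, min(float(max_digits), lre))

if __name__ == "__main__":
    certified = 0.1234567890123
    print(correct_significant_digits(0.1234567891, certified))   # about 9-10 digits
    print(correct_significant_digits(0.1230, certified))         # about 2-3 digits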
|
1207.1394 | Andreas Krause | Andreas Krause, Carlos E. Guestrin | Near-optimal Nonmyopic Value of Information in Graphical Models | Appears in Proceedings of the Twenty-First Conference on Uncertainty
in Artificial Intelligence (UAI2005) | null | null | UAI-P-2005-PG-324-331 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental issue in real-world systems, such as sensor networks, is the
selection of observations which most effectively reduce uncertainty. More
specifically, we address the long standing problem of nonmyopically selecting
the most informative subset of variables in a graphical model. We present the
first efficient randomized algorithm providing a constant factor
(1-1/e-epsilon) approximation guarantee for any epsilon > 0 with high
confidence. The algorithm leverages the theory of submodular functions, in
combination with a polynomial bound on sample complexity. We furthermore prove
that no polynomial time algorithm can provide a constant factor approximation
better than (1 - 1/e) unless P = NP. Finally, we provide extensive evidence of
the effectiveness of our method on two complex real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 4 Jul 2012 16:16:25 GMT"
}
] | 2012-07-09T00:00:00 | [
[
"Krause",
"Andreas",
""
],
[
"Guestrin",
"Carlos E.",
""
]
] | TITLE: Near-optimal Nonmyopic Value of Information in Graphical Models
ABSTRACT: A fundamental issue in real-world systems, such as sensor networks, is the
selection of observations which most effectively reduce uncertainty. More
specifically, we address the long standing problem of nonmyopically selecting
the most informative subset of variables in a graphical model. We present the
first efficient randomized algorithm providing a constant factor
(1-1/e-epsilon) approximation guarantee for any epsilon > 0 with high
confidence. The algorithm leverages the theory of submodular functions, in
combination with a polynomial bound on sample complexity. We furthermore prove
that no polynomial time algorithm can provide a constant factor approximation
better than (1 - 1/e) unless P = NP. Finally, we provide extensive evidence of
the effectiveness of our method on two complex real-world datasets.
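For context, the deterministic greedy rule that underlies (1 - 1/e)-style guarantees for monotone submodular objectives can be sketched as below; the paper's algorithm is a randomized, sample-based variant and is not reproduced here. The coverage objective stands in for an information-gain criterion and is an assumption.

def greedy_max(ground_set, objective, k):
    # Pick k elements greedily by marginal gain of a monotone submodular objective.
    selected = []
    for _ in range(k):
        best, best_gain = None, -float("inf")
        for e in ground_set:
            if e in selected:
                continue
            gain = objective(selected + [e]) - objective(selected)
            if gain > best_gain:
                best, best_gain = e, gain
        selected.append(best)
    return selected

if __name__ == "__main__":
    # sensors "cover" sets of locations; coverage is monotone submodular
    coverage_sets = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6}, "s4": {1, 6}}
    objective = lambda S: len(set().union(*(coverage_sets[s] for s in S))) if S else 0
    print(greedy_max(list(coverage_sets), objective, k=2))   # e.g. ['s1', 's3']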
|
1207.0833 | Fr\'ed\'eric Blanchard | Fr\'ed\'eric Blanchard and Michel Herbin | Relational Data Mining Through Extraction of Representative Exemplars | null | null | null | null | cs.AI cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing interest on Network Analysis, Relational Data Mining is
becoming an emphasized domain of Data Mining. This paper addresses the problem
of extracting representative elements from a relational dataset. After defining
the notion of degree of representativeness, computed using the Borda
aggregation procedure, we present the extraction of exemplars which are the
representative elements of the dataset. We use these concepts to build a
network on the dataset. We present the main properties of these notions and we
propose two typical applications of our framework. The first application
consists in summarizing and structuring a set of binary images and the second in
mining the co-authoring relation in a research team.
| [
{
"version": "v1",
"created": "Tue, 3 Jul 2012 20:48:36 GMT"
}
] | 2012-07-05T00:00:00 | [
[
"Blanchard",
"Frédéric",
""
],
[
"Herbin",
"Michel",
""
]
] | TITLE: Relational Data Mining Through Extraction of Representative Exemplars
ABSTRACT: With the growing interest in Network Analysis, Relational Data Mining is
becoming an emphasized domain of Data Mining. This paper addresses the problem
of extracting representative elements from a relational dataset. After defining
the notion of degree of representativeness, computed using the Borda
aggregation procedure, we present the extraction of exemplars which are the
representative elements of the dataset. We use these concepts to build a
network on the dataset. We present the main properties of these notions and we
propose two typical applications of our framework. The first application
consists in summarizing and structuring a set of binary images and the second in
mining the co-authoring relation in a research team.
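As a hedged reading of the mechanism described above (not necessarily the authors' exact definitions), the sketch below computes a Borda-style degree of representativeness: each dissimilarity criterion ranks the elements by their total dissimilarity to the others, Borda points are summed across criteria, and the exemplar is the top-scoring element. The two example criteria are assumptions.

import numpy as np

def borda_representativeness(dissimilarity_matrices):
    # Each matrix D is n x n with D[i, j] = dissimilarity of i and j under one criterion.
    n = dissimilarity_matrices[0].shape[0]
    scores = np.zeros(n)
    for D in dissimilarity_matrices:
        totals = D.sum(axis=1)                 # how far element i is from everyone else
        order = np.argsort(totals)             # best (smallest total) first
        ranks = np.empty(n)
        ranks[order] = np.arange(n)
        scores += (n - 1) - ranks              # Borda points: best gets n-1, worst gets 0
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(6, 2))
    D1 = np.linalg.norm(X[:, None] - X[None, :], axis=2)      # Euclidean criterion
    D2 = np.abs(X[:, None] - X[None, :]).sum(axis=2)          # Manhattan criterion
    scores = borda_representativeness([D1, D2])
    print("degree of representativeness per element:", scores)
    print("exemplar (highest score):", int(scores.argmax()))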
|
1207.0913 | Rong-Hua Li | Rong-Hua Li, Jeffrey Xu Yu, Zechao Shang | Estimating Node Influenceability in Social Networks | null | null | null | null | cs.SI cs.DB physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | Influence analysis is a fundamental problem in social network analysis and
mining. The important applications of the influence analysis in social network
include influence maximization for viral marketing, finding the most
influential nodes, online advertising, etc. For many of these applications, it
is crucial to evaluate the influenceability of a node. In this paper, we study
the problem of evaluating the influenceability of nodes in a social network based on
the widely used influence spread model, namely, the independent cascade model.
Since this problem is #P-complete, most existing work is based on Naive
Monte-Carlo (\nmc) sampling. However, the \nmc estimator typically results in a
large variance, which significantly reduces its effectiveness. To overcome this
problem, we propose two families of new estimators based on the idea of
stratified sampling. We first present two basic stratified sampling (\bss)
estimators, namely \bssi estimator and \bssii estimator, which partition the
entire population into $2^r$ and $r+1$ strata by choosing $r$ edges
respectively. Second, to further reduce the variance, we find that both \bssi
and \bssii estimators can be recursively performed on each stratum, thus we
propose two recursive stratified sampling (\rss) estimators, namely \rssi
estimator and \rssii estimator. Theoretically, all of our estimators are shown
to be unbiased and their variances are significantly smaller than the variance
of the \nmc estimator. Finally, our extensive experimental results on both
synthetic and real datasets demonstrate the efficiency and accuracy of our new
estimators.
| [
{
"version": "v1",
"created": "Wed, 4 Jul 2012 06:49:22 GMT"
}
] | 2012-07-05T00:00:00 | [
[
"Li",
"Rong-Hua",
""
],
[
"Yu",
"Jeffrey Xu",
""
],
[
"Shang",
"Zechao",
""
]
] | TITLE: Estimating Node Influenceability in Social Networks
ABSTRACT: Influence analysis is a fundamental problem in social network analysis and
mining. The important applications of the influence analysis in social network
include influence maximization for viral marketing, finding the most
influential nodes, online advertising, etc. For many of these applications, it
is crucial to evaluate the influenceability of a node. In this paper, we study
the problem of evaluating the influenceability of nodes in a social network based on
the widely used influence spread model, namely, the independent cascade model.
Since this problem is #P-complete, most existing work is based on Naive
Monte-Carlo (\nmc) sampling. However, the \nmc estimator typically results in a
large variance, which significantly reduces its effectiveness. To overcome this
problem, we propose two families of new estimators based on the idea of
stratified sampling. We first present two basic stratified sampling (\bss)
estimators, namely \bssi estimator and \bssii estimator, which partition the
entire population into $2^r$ and $r+1$ strata by choosing $r$ edges
respectively. Second, to further reduce the variance, we find that both \bssi
and \bssii estimators can be recursively performed on each stratum, thus we
propose two recursive stratified sampling (\rss) estimators, namely \rssi
estimator and \rssii estimator. Theoretically, all of our estimators are shown
to be unbiased and their variances are significantly smaller than the variance
of the \nmc estimator. Finally, our extensive experimental results on both
synthetic and real datasets demonstrate the efficiency and accuracy of our new
estimators.
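As a concrete illustration of the estimation problem above, the sketch below implements the Naive Monte-Carlo (NMC) baseline: repeatedly simulate the independent cascade process from a seed node and average the resulting spread. The toy graph, the uniform propagation probability of 0.3, and the sample count are hypothetical, and the paper's stratified and recursive estimators are not reproduced here.

    import random

    def simulate_ic(graph, probs, seed):
        """One independent-cascade run; returns the number of activated nodes."""
        active, frontier = {seed}, [seed]
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < probs[(u, v)]:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(active)

    def nmc_influence(graph, probs, seed, n_samples=10000):
        """Naive Monte-Carlo estimate of a node's expected influence spread."""
        return sum(simulate_ic(graph, probs, seed) for _ in range(n_samples)) / n_samples

    # Hypothetical toy graph with uniform propagation probability 0.3.
    graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
    probs = {(u, v): 0.3 for u, nbrs in graph.items() for v in nbrs}
    print(nmc_influence(graph, probs, seed=0))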
|
1207.0677 | Romain Giot | Romain Giot (GREYC), Christophe Charrier (GREYC), Maxime Descoteaux
(SCIL) | Local Water Diffusion Phenomenon Clustering From High Angular Resolution
Diffusion Imaging (HARDI) | IAPR International Conference on Pattern Recognition (ICPR), Tsukuba,
Japan : France (2012) | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The understanding of neurodegenerative diseases undoubtedly passes through
the study of human brain white matter fiber tracts. To date, diffusion magnetic
resonance imaging (dMRI) is the only technique able to obtain information about
the neural architecture of the human brain, thus permitting the study of white
matter connections and their integrity. However, a remaining challenge of the
dMRI community is to better characterize complex fiber crossing configurations,
where diffusion tensor imaging (DTI) is limited but high angular resolution
diffusion imaging (HARDI) now brings solutions. This paper investigates the
development of both an identification and a classification process for the local
water diffusion phenomenon based on HARDI data to automatically detect imaging
voxels where there are single and crossing fiber bundle populations. The
technique is based on knowledge extraction processes and is validated on a dMRI
phantom dataset with ground truth.
| [
{
"version": "v1",
"created": "Tue, 3 Jul 2012 13:52:19 GMT"
}
] | 2012-07-04T00:00:00 | [
[
"Giot",
"Romain",
"",
"GREYC"
],
[
"Charrier",
"Christophe",
"",
"GREYC"
],
[
"Descoteaux",
"Maxime",
"",
"SCIL"
]
] | TITLE: Local Water Diffusion Phenomenon Clustering From High Angular Resolution
Diffusion Imaging (HARDI)
ABSTRACT: The understanding of neurodegenerative diseases undoubtedly passes through
the study of human brain white matter fiber tracts. To date, diffusion magnetic
resonance imaging (dMRI) is the only technique able to obtain information about
the neural architecture of the human brain, thus permitting the study of white
matter connections and their integrity. However, a remaining challenge of the
dMRI community is to better characterize complex fiber crossing configurations,
where diffusion tensor imaging (DTI) is limited but high angular resolution
diffusion imaging (HARDI) now brings solutions. This paper investigates the
development of both an identification and a classification process for the local
water diffusion phenomenon based on HARDI data to automatically detect imaging
voxels where there are single and crossing fiber bundle populations. The
technique is based on knowledge extraction processes and is validated on a dMRI
phantom dataset with ground truth.
|
1207.0783 | Romain Giot | Romain Giot (GREYC), Christophe Rosenberger (GREYC), Bernadette
Dorizzi (EPH, SAMOVAR) | Hybrid Template Update System for Unimodal Biometric Systems | IEEE International Conference on Biometrics: Theory, Applications and
Systems (BTAS 2012), Washington, District of Columbia, USA : France (2012) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised template update systems allow one to automatically take into
account the intra-class variability of the biometric data over time. Such
systems can be inefficient by including too many impostor samples or skipping
too many genuine samples. In the first case, the biometric reference drifts
from the real biometric data and more often attracts impostors. In the second
case, the biometric reference does not evolve quickly enough and also
progressively drifts from the real biometric data. We propose a hybrid system
using several biometric sub-references in order to increase performance of
self-update systems by reducing the previously cited errors. The proposition is
validated for a keystroke-dynamics authentication system (this modality
suffers from high variability over time) on two consequent datasets from the
state of the art.
| [
{
"version": "v1",
"created": "Tue, 3 Jul 2012 19:12:13 GMT"
}
] | 2012-07-04T00:00:00 | [
[
"Giot",
"Romain",
"",
"GREYC"
],
[
"Rosenberger",
"Christophe",
"",
"GREYC"
],
[
"Dorizzi",
"Bernadette",
"",
"EPH, SAMOVAR"
]
] | TITLE: Hybrid Template Update System for Unimodal Biometric Systems
ABSTRACT: Semi-supervised template update systems allow one to automatically take into
account the intra-class variability of the biometric data over time. Such
systems can be inefficient by including too many impostor samples or skipping
too many genuine samples. In the first case, the biometric reference drifts
from the real biometric data and more often attracts impostors. In the second
case, the biometric reference does not evolve quickly enough and also
progressively drifts from the real biometric data. We propose a hybrid system
using several biometric sub-references in order to increase performance of
self-update systems by reducing the previously cited errors. The proposition is
validated for a keystroke-dynamics authentication system (this modality
suffers from high variability over time) on two consequent datasets from the
state of the art.
|
1207.0784 | Romain Giot | Romain Giot (GREYC), Mohamad El-Abed (GREYC), Christophe Rosenberger
(GREYC) | Web-Based Benchmark for Keystroke Dynamics Biometric Systems: A
Statistical Analysis | The Eighth International Conference on Intelligent Information Hiding
and Multimedia Signal Processing (IIHMSP 2012), Piraeus : Greece (2012) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most keystroke dynamics studies have been evaluated using a specific kind of
dataset in which users type an imposed login and password. Moreover, these
studies are optimistic since most of them use different acquisition protocols,
private datasets, controlled environments, etc. In order to enhance the accuracy
of keystroke dynamics' performance, the main contribution of this paper is
twofold. First, we provide a new kind of dataset in which users have typed both
an imposed and a chosen pair of logins and passwords. In addition, the
keystroke dynamics samples are collected in a web-based uncontrolled
environment (OS, keyboards, browsers, etc.). Such a dataset is important
since it provides more realistic results on keystroke dynamics' performance
in comparison to the literature (controlled environment, etc.). Second, we
present a statistical analysis of well known assertions such as the
relationship between performance and password size, impact of fusion schemes on
system overall performance, and others such as the relationship between
performance and entropy. We highlight in this paper some new results
on keystroke dynamics in realistic conditions.
| [
{
"version": "v1",
"created": "Tue, 3 Jul 2012 19:12:56 GMT"
}
] | 2012-07-04T00:00:00 | [
[
"Giot",
"Romain",
"",
"GREYC"
],
[
"El-Abed",
"Mohamad",
"",
"GREYC"
],
[
"Rosenberger",
"Christophe",
"",
"GREYC"
]
] | TITLE: Web-Based Benchmark for Keystroke Dynamics Biometric Systems: A
Statistical Analysis
ABSTRACT: Most keystroke dynamics studies have been evaluated using a specific kind of
dataset in which users type an imposed login and password. Moreover, these
studies are optimistic since most of them use different acquisition protocols,
private datasets, controlled environments, etc. In order to enhance the accuracy
of keystroke dynamics' performance, the main contribution of this paper is
twofold. First, we provide a new kind of dataset in which users have typed both
an imposed and a chosen pair of logins and passwords. In addition, the
keystroke dynamics samples are collected in a web-based uncontrolled
environment (OS, keyboards, browsers, etc.). Such a dataset is important
since it provides more realistic results on keystroke dynamics' performance
in comparison to the literature (controlled environment, etc.). Second, we
present a statistical analysis of well known assertions such as the
relationship between performance and password size, impact of fusion schemes on
system overall performance, and others such as the relationship between
performance and entropy. We highlight in this paper some new results
on keystroke dynamics in realistic conditions.
|
1206.1728 | Derek Greene | Derek Greene, Gavin Sheridan, Barry Smyth, P\'adraig Cunningham | Aggregating Content and Network Information to Curate Twitter User Lists | null | null | null | null | cs.SI cs.AI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Twitter introduced user lists in late 2009, allowing users to be grouped
according to meaningful topics or themes. Lists have since been adopted by
media outlets as a means of organising content around news stories. Thus the
curation of these lists is important - they should contain the key information
gatekeepers and present a balanced perspective on a story. Here we address this
list curation process from a recommender systems perspective. We propose a
variety of criteria for generating user list recommendations, based on content
analysis, network analysis, and the "crowdsourcing" of existing user lists. We
demonstrate that these types of criteria are often only successful for datasets
with certain characteristics. To resolve this issue, we propose the aggregation
of these different "views" of a news story on Twitter to produce more accurate
user recommendations to support the curation process.
| [
{
"version": "v1",
"created": "Fri, 8 Jun 2012 11:12:53 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Jul 2012 12:20:38 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Greene",
"Derek",
""
],
[
"Sheridan",
"Gavin",
""
],
[
"Smyth",
"Barry",
""
],
[
"Cunningham",
"Pádraig",
""
]
] | TITLE: Aggregating Content and Network Information to Curate Twitter User Lists
ABSTRACT: Twitter introduced user lists in late 2009, allowing users to be grouped
according to meaningful topics or themes. Lists have since been adopted by
media outlets as a means of organising content around news stories. Thus the
curation of these lists is important - they should contain the key information
gatekeepers and present a balanced perspective on a story. Here we address this
list curation process from a recommender systems perspective. We propose a
variety of criteria for generating user list recommendations, based on content
analysis, network analysis, and the "crowdsourcing" of existing user lists. We
demonstrate that these types of criteria are often only successful for datasets
with certain characteristics. To resolve this issue, we propose the aggregation
of these different "views" of a news story on Twitter to produce more accurate
user recommendations to support the curation process.
|
1206.6392 | Nicolas Boulanger-Lewandowski | Nicolas Boulanger-Lewandowski (Universite de Montreal), Yoshua Bengio
(Universite de Montreal), Pascal Vincent (Universite de Montreal) | Modeling Temporal Dependencies in High-Dimensional Sequences:
Application to Polyphonic Music Generation and Transcription | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG cs.SD stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the problem of modeling symbolic sequences of polyphonic music
in a completely general piano-roll representation. We introduce a probabilistic
model based on distribution estimators conditioned on a recurrent neural
network that is able to discover temporal dependencies in high-dimensional
sequences. Our approach outperforms many traditional models of polyphonic music
on a variety of realistic datasets. We show how our musical language model can
serve as a symbolic prior to improve the accuracy of polyphonic transcription.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Boulanger-Lewandowski",
"Nicolas",
"",
"Universite de Montreal"
],
[
"Bengio",
"Yoshua",
"",
"Universite de Montreal"
],
[
"Vincent",
"Pascal",
"",
"Universite de Montreal"
]
] | TITLE: Modeling Temporal Dependencies in High-Dimensional Sequences:
Application to Polyphonic Music Generation and Transcription
ABSTRACT: We investigate the problem of modeling symbolic sequences of polyphonic music
in a completely general piano-roll representation. We introduce a probabilistic
model based on distribution estimators conditioned on a recurrent neural
network that is able to discover temporal dependencies in high-dimensional
sequences. Our approach outperforms many traditional models of polyphonic music
on a variety of realistic datasets. We show how our musical language model can
serve as a symbolic prior to improve the accuracy of polyphonic transcription.
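A rough sketch of the general idea described above: a plain recurrent network whose hidden state parameterises an independent-Bernoulli distribution over the notes of the next piano-roll frame, so the model assigns a log-likelihood to a sequence. The paper's distribution estimator conditioned on the RNN is considerably more expressive; the layer sizes and random weights below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n_notes, n_hidden = 88, 64          # 88 piano keys, hypothetical hidden size

    # Randomly initialised parameters of the sketch model.
    W_in = rng.normal(0, 0.1, (n_hidden, n_notes))
    W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))
    W_out = rng.normal(0, 0.1, (n_notes, n_hidden))
    b_h, b_o = np.zeros(n_hidden), np.zeros(n_notes)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def next_frame_probs(piano_roll):
        """Run the RNN over a (T, n_notes) binary piano-roll and return, for each
        step, the Bernoulli probabilities of the next frame's notes."""
        h = np.zeros(n_hidden)
        probs = []
        for frame in piano_roll:
            h = np.tanh(W_in @ frame + W_rec @ h + b_h)
            probs.append(sigmoid(W_out @ h + b_o))
        return np.array(probs)

    # Hypothetical 16-step excerpt; log-likelihood of the observed continuation.
    roll = (rng.random((16, n_notes)) < 0.05).astype(float)
    p = next_frame_probs(roll[:-1])
    log_lik = np.sum(roll[1:] * np.log(p) + (1 - roll[1:]) * np.log(1 - p))
    print(log_lik)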
|
1206.6397 | Minhua Chen | Minhua Chen (Duke University), William Carson (PA Consulting Group,
Cambridge Technology Centre), Miguel Rodrigues (University College London),
Robert Calderbank (Duke University), Lawrence Carin (Duke University) | Communications Inspired Linear Discriminant Analysis | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of supervised linear dimensionality reduction, taking an
information-theoretic viewpoint. The linear projection matrix is designed by
maximizing the mutual information between the projected signal and the class
label (based on a Shannon entropy measure). By harnessing a recent theoretical
result on the gradient of mutual information, the above optimization problem
can be solved directly using gradient descent, without requiring simplification
of the objective function. Theoretical analysis and empirical comparison are
made between the proposed method and two closely related methods (Linear
Discriminant Analysis and Information Discriminant Analysis), and comparisons
are also made with a method in which Renyi entropy is used to define the mutual
information (in this case the gradient may be computed simply, under a special
parameter setting). Relative to these alternative approaches, the proposed
method achieves promising results on real datasets.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Chen",
"Minhua",
"",
"Duke University"
],
[
"Carson",
"William",
"",
"PA Consulting Group,\n Cambridge Technology Centre"
],
[
"Rodrigues",
"Miguel",
"",
"University College London"
],
[
"Calderbank",
"Robert",
"",
"Duke University"
],
[
"Carin",
"Lawrence",
"",
"Duke University"
]
] | TITLE: Communications Inspired Linear Discriminant Analysis
ABSTRACT: We study the problem of supervised linear dimensionality reduction, taking an
information-theoretic viewpoint. The linear projection matrix is designed by
maximizing the mutual information between the projected signal and the class
label (based on a Shannon entropy measure). By harnessing a recent theoretical
result on the gradient of mutual information, the above optimization problem
can be solved directly using gradient descent, without requiring simplification
of the objective function. Theoretical analysis and empirical comparison are
made between the proposed method and two closely related methods (Linear
Discriminant Analysis and Information Discriminant Analysis), and comparisons
are also made with a method in which Renyi entropy is used to define the mutual
information (in this case the gradient may be computed simply, under a special
parameter setting). Relative to these alternative approaches, the proposed
method achieves promising results on real datasets.
|
1206.6407 | Ian Goodfellow | Ian Goodfellow (Universite de Montreal), Aaron Courville (Universite
de Montreal), Yoshua Bengio (Universite de Montreal) | Large-Scale Feature Learning With Spike-and-Slab Sparse Coding | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012). arXiv admin note: substantial text overlap with
arXiv:1201.3382 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of object recognition with a large number of classes.
In order to overcome the low amount of labeled examples available in this
setting, we introduce a new feature learning and extraction procedure based on
a factor model we call spike-and-slab sparse coding (S3C). Prior work on S3C
has not prioritized the ability to exploit parallel architectures and scale S3C
to the enormous problem sizes needed for object recognition. We present a novel
inference procedure appropriate for use with GPUs, which allows us to
dramatically increase both the training set size and the number of latent
factors that S3C may be trained with. We demonstrate that this approach
improves upon the supervised learning capabilities of both sparse coding and
the spike-and-slab Restricted Boltzmann Machine (ssRBM) on the CIFAR-10
dataset. We use the CIFAR-100 dataset to demonstrate that our method scales to
large numbers of classes better than previous methods. Finally, we use our
method to win the NIPS 2011 Workshop on Challenges In Learning Hierarchical
Models' Transfer Learning Challenge.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Goodfellow",
"Ian",
"",
"Universite de Montreal"
],
[
"Courville",
"Aaron",
"",
"Universite\n de Montreal"
],
[
"Bengio",
"Yoshua",
"",
"Universite de Montreal"
]
] | TITLE: Large-Scale Feature Learning With Spike-and-Slab Sparse Coding
ABSTRACT: We consider the problem of object recognition with a large number of classes.
In order to overcome the low amount of labeled examples available in this
setting, we introduce a new feature learning and extraction procedure based on
a factor model we call spike-and-slab sparse coding (S3C). Prior work on S3C
has not prioritized the ability to exploit parallel architectures and scale S3C
to the enormous problem sizes needed for object recognition. We present a novel
inference procedure appropriate for use with GPUs, which allows us to
dramatically increase both the training set size and the number of latent
factors that S3C may be trained with. We demonstrate that this approach
improves upon the supervised learning capabilities of both sparse coding and
the spike-and-slab Restricted Boltzmann Machine (ssRBM) on the CIFAR-10
dataset. We use the CIFAR-100 dataset to demonstrate that our method scales to
large numbers of classes better than previous methods. Finally, we use our
method to win the NIPS 2011 Workshop on Challenges In Learning Hierarchical
Models' Transfer Learning Challenge.
|
1206.6413 | Armand Joulin | Armand Joulin (INRIA - Ecole Normale Superieure), Francis Bach (INRIA
- Ecole Normale Superieure) | A Convex Relaxation for Weakly Supervised Classifiers | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a general multi-class approach to weakly supervised
classification. Inferring the labels and learning the parameters of the model
is usually done jointly through a block-coordinate descent algorithm such as
expectation-maximization (EM), which may lead to local minima. To avoid this
problem, we propose a cost function based on a convex relaxation of the
soft-max loss. We then propose an algorithm specifically designed to
efficiently solve the corresponding semidefinite program (SDP). Empirically,
our method compares favorably to standard ones on different datasets for
multiple instance learning and semi-supervised learning as well as on
clustering tasks.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Joulin",
"Armand",
"",
"INRIA - Ecole Normale Superieure"
],
[
"Bach",
"Francis",
"",
"INRIA\n - Ecole Normale Superieure"
]
] | TITLE: A Convex Relaxation for Weakly Supervised Classifiers
ABSTRACT: This paper introduces a general multi-class approach to weakly supervised
classification. Inferring the labels and learning the parameters of the model
is usually done jointly through a block-coordinate descent algorithm such as
expectation-maximization (EM), which may lead to local minima. To avoid this
problem, we propose a cost function based on a convex relaxation of the
soft-max loss. We then propose an algorithm specifically designed to
efficiently solve the corresponding semidefinite program (SDP). Empirically,
our method compares favorably to standard ones on different datasets for
multiple instance learning and semi-supervised learning as well as on
clustering tasks.
|
1206.6415 | Ariel Kleiner | Ariel Kleiner (UC Berkeley), Ameet Talwalkar (UC Berkeley), Purnamrita
Sarkar (UC Berkeley), Michael Jordan (UC Berkeley) | The Big Data Bootstrap | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012). arXiv admin note: text overlap with
arXiv:1112.5016 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The bootstrap provides a simple and powerful means of assessing the quality
of estimators. However, in settings involving large datasets, the computation
of bootstrap-based quantities can be prohibitively demanding. As an
alternative, we present the Bag of Little Bootstraps (BLB), a new procedure
which incorporates features of both the bootstrap and subsampling to obtain a
robust, computationally efficient means of assessing estimator quality. BLB is
well suited to modern parallel and distributed computing architectures and
retains the generic applicability, statistical efficiency, and favorable
theoretical properties of the bootstrap. We provide the results of an extensive
empirical and theoretical investigation of BLB's behavior, including a study of
its statistical correctness, its large-scale implementation and performance,
selection of hyperparameters, and performance on real data.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Kleiner",
"Ariel",
"",
"UC Berkeley"
],
[
"Talwalkar",
"Ameet",
"",
"UC Berkeley"
],
[
"Sarkar",
"Purnamrita",
"",
"UC Berkeley"
],
[
"Jordan",
"Michael",
"",
"UC Berkeley"
]
] | TITLE: The Big Data Bootstrap
ABSTRACT: The bootstrap provides a simple and powerful means of assessing the quality
of estimators. However, in settings involving large datasets, the computation
of bootstrap-based quantities can be prohibitively demanding. As an
alternative, we present the Bag of Little Bootstraps (BLB), a new procedure
which incorporates features of both the bootstrap and subsampling to obtain a
robust, computationally efficient means of assessing estimator quality. BLB is
well suited to modern parallel and distributed computing architectures and
retains the generic applicability, statistical efficiency, and favorable
theoretical properties of the bootstrap. We provide the results of an extensive
empirical and theoretical investigation of BLB's behavior, including a study of
its statistical correctness, its large-scale implementation and performance,
selection of hyperparameters, and performance on real data.
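A minimal sketch of the Bag of Little Bootstraps idea, assuming the quality measure of interest is the standard error of a sample mean: draw small subsamples of size b = n^0.6, and within each one draw multinomial weights summing to n so that every inner replicate mimics a full-size bootstrap resample. All sizes and the choice of estimator below are hypothetical, not the paper's experimental setup.

    import numpy as np

    def blb_stderr(data, b_exponent=0.6, n_subsamples=10, n_inner=50, rng=None):
        """Bag of Little Bootstraps estimate of the standard error of the mean."""
        rng = rng or np.random.default_rng(0)
        n = len(data)
        b = int(n ** b_exponent)                  # little-bootstrap subsample size
        per_subsample = []
        for _ in range(n_subsamples):
            sub = rng.choice(data, size=b, replace=False)
            reps = []
            for _ in range(n_inner):
                # Multinomial weights summing to n emulate a size-n resample.
                w = rng.multinomial(n, np.full(b, 1.0 / b))
                reps.append(np.sum(w * sub) / n)  # weighted mean of the replicate
            per_subsample.append(np.std(reps, ddof=1))
        return np.mean(per_subsample)             # average per-subsample assessments

    data = np.random.default_rng(1).normal(size=100_000)
    print(blb_stderr(data))                       # should be close to 1/sqrt(n)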
|
1206.6417 | Abhishek Kumar | Abhishek Kumar (University of Maryland), Hal Daume III (University of
Maryland) | Learning Task Grouping and Overlap in Multi-task Learning | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the paradigm of multi-task learning, multiple related prediction tasks
are learned jointly, sharing information across the tasks. We propose a
framework for multi-task learning that enables one to selectively share the
information across the tasks. We assume that each task parameter vector is a
linear combination of a finite number of underlying basis tasks. The
coefficients of the linear combination are sparse in nature and the overlap
in the sparsity patterns of two tasks controls the amount of sharing across
these. Our model is based on the assumption that task parameters within a
group lie in a low dimensional subspace but allows the tasks in different
groups to overlap with each other in one or more bases. Experimental results on
four datasets show that our approach outperforms competing methods.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Kumar",
"Abhishek",
"",
"University of Maryland"
],
[
"Daume",
"Hal",
"III",
"University of\n Maryland"
]
] | TITLE: Learning Task Grouping and Overlap in Multi-task Learning
ABSTRACT: In the paradigm of multi-task learning, multiple related prediction tasks
are learned jointly, sharing information across the tasks. We propose a
framework for multi-task learning that enables one to selectively share the
information across the tasks. We assume that each task parameter vector is a
linear combination of a finite number of underlying basis tasks. The
coefficients of the linear combination are sparse in nature and the overlap
in the sparsity patterns of two tasks controls the amount of sharing across
these. Our model is based on the assumption that task parameters within a
group lie in a low dimensional subspace but allows the tasks in different
groups to overlap with each other in one or more bases. Experimental results on
four datasets show that our approach outperforms competing methods.
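The parameterisation described above, in which each task's weight vector is a sparse linear combination of a few shared basis tasks, can be written down directly. The sketch only shows the structure and how predictions would be formed; the paper's learning procedure is not reproduced, and every dimension below is a hypothetical choice.

    import numpy as np

    rng = np.random.default_rng(0)
    d, k, n_tasks = 20, 4, 6        # feature dim, number of basis tasks, tasks

    L = rng.normal(size=(d, k))     # shared basis tasks (columns)
    # Sparse coefficients: each task uses only a couple of basis tasks, and
    # overlapping sparsity patterns encode which tasks share information.
    S = np.zeros((k, n_tasks))
    for t in range(n_tasks):
        support = rng.choice(k, size=2, replace=False)
        S[support, t] = rng.normal(size=2)

    W = L @ S                       # per-task parameter vectors, W[:, t] for task t

    def predict(X, task):
        """Linear prediction for one task under the grouped parameterisation."""
        return X @ W[:, task]

    X = rng.normal(size=(5, d))
    print(predict(X, task=0))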
|
1206.6418 | Honglak Lee | Kihyuk Sohn (University of Michigan), Honglak Lee (University of
Michigan) | Learning Invariant Representations with Local Transformations | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning invariant representations is an important problem in machine
learning and pattern recognition. In this paper, we present a novel framework
of transformation-invariant feature learning by incorporating linear
transformations into the feature learning algorithms. For example, we present
the transformation-invariant restricted Boltzmann machine that compactly
represents data by its weights and their transformations, which achieves
invariance of the feature representation via probabilistic max pooling. In
addition, we show that our transformation-invariant feature learning framework
can also be extended to other unsupervised learning methods, such as
autoencoders or sparse coding. We evaluate our method on several image
classification benchmark datasets, such as MNIST variations, CIFAR-10, and
STL-10, and show competitive or superior classification performance when
compared to the state-of-the-art. Furthermore, our method achieves
state-of-the-art performance on phone classification tasks with the TIMIT
dataset, which demonstrates wide applicability of our proposed algorithms to
other domains.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Sohn",
"Kihyuk",
"",
"University of Michigan"
],
[
"Lee",
"Honglak",
"",
"University of\n Michigan"
]
] | TITLE: Learning Invariant Representations with Local Transformations
ABSTRACT: Learning invariant representations is an important problem in machine
learning and pattern recognition. In this paper, we present a novel framework
of transformation-invariant feature learning by incorporating linear
transformations into the feature learning algorithms. For example, we present
the transformation-invariant restricted Boltzmann machine that compactly
represents data by its weights and their transformations, which achieves
invariance of the feature representation via probabilistic max pooling. In
addition, we show that our transformation-invariant feature learning framework
can also be extended to other unsupervised learning methods, such as
autoencoders or sparse coding. We evaluate our method on several image
classification benchmark datasets, such as MNIST variations, CIFAR-10, and
STL-10, and show competitive or superior classification performance when
compared to the state-of-the-art. Furthermore, our method achieves
state-of-the-art performance on phone classification tasks with the TIMIT
dataset, which demonstrates wide applicability of our proposed algorithms to
other domains.
|
1206.6419 | Xuejun Liao | Shaobo Han (Duke University), Xuejun Liao (Duke University), Lawrence
Carin (Duke University) | Cross-Domain Multitask Learning with Latent Probit Models | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning multiple tasks across heterogeneous domains is a challenging problem
since the feature space may not be the same for different tasks. We assume the
data in multiple tasks are generated from a latent common domain via sparse
domain transforms and propose a latent probit model (LPM) to jointly learn the
domain transforms, and the shared probit classifier in the common domain. To
learn meaningful task relatedness and avoid over-fitting in classification, we
introduce sparsity in the domain transform matrices, as well as in the common
classifier. We derive theoretical bounds for the estimation error of the
classifier in terms of the sparsity of domain transforms. An
expectation-maximization algorithm is derived for learning the LPM. The
effectiveness of the approach is demonstrated on several real datasets.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Han",
"Shaobo",
"",
"Duke University"
],
[
"Liao",
"Xuejun",
"",
"Duke University"
],
[
"Carin",
"Lawrence",
"",
"Duke University"
]
] | TITLE: Cross-Domain Multitask Learning with Latent Probit Models
ABSTRACT: Learning multiple tasks across heterogeneous domains is a challenging problem
since the feature space may not be the same for different tasks. We assume the
data in multiple tasks are generated from a latent common domain via sparse
domain transforms and propose a latent probit model (LPM) to jointly learn the
domain transforms, and the shared probit classifier in the common domain. To
learn meaningful task relatedness and avoid over-fitting in classification, we
introduce sparsity in the domain transform matrices, as well as in the common
classifier. We derive theoretical bounds for the estimation error of the
classifier in terms of the sparsity of domain transforms. An
expectation-maximization algorithm is derived for learning the LPM. The
effectiveness of the approach is demonstrated on several real datasets.
|
1206.6447 | Gael Varoquaux | Gael Varoquaux (INRIA), Alexandre Gramfort (INRIA), Bertrand Thirion
(INRIA) | Small-sample Brain Mapping: Sparse Recovery on Spatially Correlated
Designs with Randomization and Clustering | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG cs.CV stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functional neuroimaging can measure the brain's response to an external
stimulus. It is used to perform brain mapping: identifying from these
observations the brain regions involved. This problem can be cast into a linear
supervised learning task where the neuroimaging data are used as predictors for
the stimulus. Brain mapping is then seen as a support recovery problem. On
functional MRI (fMRI) data, this problem is particularly challenging as i) the
number of samples is small due to limited acquisition time and ii) the
variables are strongly correlated. We propose to overcome these difficulties
using sparse regression models over new variables obtained by clustering of the
original variables. The use of randomization techniques, e.g. bootstrap
samples, and clustering of the variables improves the recovery properties of
sparse methods. We demonstrate the benefit of our approach on an extensive
simulation study as well as two fMRI datasets.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Varoquaux",
"Gael",
"",
"INRIA"
],
[
"Gramfort",
"Alexandre",
"",
"INRIA"
],
[
"Thirion",
"Bertrand",
"",
"INRIA"
]
] | TITLE: Small-sample Brain Mapping: Sparse Recovery on Spatially Correlated
Designs with Randomization and Clustering
ABSTRACT: Functional neuroimaging can measure the brain's response to an external
stimulus. It is used to perform brain mapping: identifying from these
observations the brain regions involved. This problem can be cast into a linear
supervised learning task where the neuroimaging data are used as predictors for
the stimulus. Brain mapping is then seen as a support recovery problem. On
functional MRI (fMRI) data, this problem is particularly challenging as i) the
number of samples is small due to limited acquisition time and ii) the
variables are strongly correlated. We propose to overcome these difficulties
using sparse regression models over new variables obtained by clustering of the
original variables. The use of randomization techniques, e.g. bootstrap
samples, and clustering of the variables improves the recovery properties of
sparse methods. We demonstrate the benefit of our approach on an extensive
simulation study as well as two fMRI datasets.
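A simplified version of the pipeline above, assuming scikit-learn is available: strongly correlated columns are averaged into cluster-level features (here a crude fixed grouping rather than proper spatial clustering of fMRI voxels), a sparse Lasso model is fit on bootstrap samples, and each cluster is scored by how often it is selected. The sizes and the regularisation value are hypothetical.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p, g = 60, 300, 10                      # few samples, many correlated features
    X = np.repeat(rng.normal(size=(n, p // g)), g, axis=1) + 0.1 * rng.normal(size=(n, p))
    w_true = np.zeros(p); w_true[:g] = 1.0     # only the first block is informative
    y = X @ w_true + rng.normal(size=n)

    clusters = np.arange(p) // g               # crude fixed grouping of columns
    Xc = np.column_stack([X[:, clusters == c].mean(axis=1) for c in range(p // g)])

    selection = np.zeros(p // g)
    for _ in range(100):                       # randomization via bootstrap samples
        idx = rng.integers(0, n, size=n)
        coef = Lasso(alpha=0.1).fit(Xc[idx], y[idx]).coef_
        selection += (coef != 0)

    print((selection / 100)[:5])               # selection frequency per cluster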
|
1206.6458 | Javad Azimi | Javad Azimi (Oregon State University), Alan Fern (Oregon State
University), Xiaoli Zhang-Fern (Oregon State University), Glencora Borradaile
(Oregon State University), Brent Heeringa (Williams College) | Batch Active Learning via Coordinated Matching | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most prior work on active learning of classifiers has focused on sequentially
selecting one unlabeled example at a time to be labeled in order to reduce the
overall labeling effort. In many scenarios, however, it is desirable to label
an entire batch of examples at once, for example, when labels can be acquired
in parallel. This motivates us to study batch active learning, which
iteratively selects batches of $k>1$ examples to be labeled. We propose a novel
batch active learning method that leverages the availability of high-quality
and efficient sequential active-learning policies by attempting to approximate
their behavior when applied for $k$ steps. Specifically, our algorithm first
uses Monte-Carlo simulation to estimate the distribution of unlabeled examples
selected by a sequential policy over $k$ step executions. The algorithm then
attempts to select a set of $k$ examples that best matches this distribution,
leading to a combinatorial optimization problem that we term "bounded
coordinated matching". While we show this problem is NP-hard in general, we
give an efficient greedy solution, which inherits approximation bounds from
supermodular minimization theory. Our experimental results on eight benchmark
datasets show that the proposed approach is highly effective.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Azimi",
"Javad",
"",
"Oregon State University"
],
[
"Fern",
"Alan",
"",
"Oregon State\n University"
],
[
"Zhang-Fern",
"Xiaoli",
"",
"Oregon State University"
],
[
"Borradaile",
"Glencora",
"",
"Oregon State University"
],
[
"Heeringa",
"Brent",
"",
"Williams College"
]
] | TITLE: Batch Active Learning via Coordinated Matching
ABSTRACT: Most prior work on active learning of classifiers has focused on sequentially
selecting one unlabeled example at a time to be labeled in order to reduce the
overall labeling effort. In many scenarios, however, it is desirable to label
an entire batch of examples at once, for example, when labels can be acquired
in parallel. This motivates us to study batch active learning, which
iteratively selects batches of $k>1$ examples to be labeled. We propose a novel
batch active learning method that leverages the availability of high-quality
and efficient sequential active-learning policies by attempting to approximate
their behavior when applied for $k$ steps. Specifically, our algorithm first
uses Monte-Carlo simulation to estimate the distribution of unlabeled examples
selected by a sequential policy over $k$ step executions. The algorithm then
attempts to select a set of $k$ examples that best matches this distribution,
leading to a combinatorial optimization problem that we term "bounded
coordinated matching". While we show this problem is NP-hard in general, we
give an efficient greedy solution, which inherits approximation bounds from
supermodular minimization theory. Our experimental results on eight benchmark
datasets show that the proposed approach is highly effective.
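A stripped-down sketch of the two stages described above, assuming scikit-learn is available: Monte-Carlo roll-outs of a sequential uncertainty-sampling policy (with hypothetical labels drawn from the current model) estimate the policy's selection distribution over the pool, and a greedy pass then keeps the k most frequently selected examples. This greedy count matching stands in for the paper's bounded coordinated matching formulation, and all sizes are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def simulate_sequential_policy(X_lab, y_lab, X_pool, k, rng):
        """One roll-out of sequential uncertainty sampling for k steps; hypothetical
        labels are sampled from the current model so the roll-out can continue."""
        Xl, yl = X_lab.copy(), y_lab.copy()
        picked, avail = [], list(range(len(X_pool)))
        for _ in range(k):
            clf = LogisticRegression(max_iter=200).fit(Xl, yl)
            proba = clf.predict_proba(X_pool[avail])[:, 1]
            i = avail[int(np.argmin(np.abs(proba - 0.5)))]   # most uncertain example
            picked.append(i)
            y_sim = rng.random() < clf.predict_proba(X_pool[i:i + 1])[0, 1]
            Xl = np.vstack([Xl, X_pool[i]]); yl = np.append(yl, int(y_sim))
            avail.remove(i)
        return picked

    def batch_by_matching(X_lab, y_lab, X_pool, k=5, n_sims=20, seed=0):
        rng = np.random.default_rng(seed)
        counts = np.zeros(len(X_pool))
        for _ in range(n_sims):                 # Monte-Carlo estimate of the
            for i in simulate_sequential_policy(X_lab, y_lab, X_pool, k, rng):
                counts[i] += 1                  # policy's selection distribution
        return np.argsort(-counts)[:k]          # greedy match: top-k counts

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
    lab = np.concatenate([np.where(y == 0)[0][:10], np.where(y == 1)[0][:10]])
    pool = np.setdiff1d(np.arange(len(y)), lab)
    print(batch_by_matching(X[lab], y[lab], X[pool], k=5))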
|
1206.6466 | Lawrence McAfee | Lawrence McAfee (Stanford University), Kunle Olukotun (Stanford
University) | Utilizing Static Analysis and Code Generation to Accelerate Neural
Networks | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.NE cs.MS cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As datasets continue to grow, neural network (NN) applications are becoming
increasingly limited by both the amount of available computational power and
the ease of developing high-performance applications. Researchers often must
have expert systems knowledge to make their algorithms run efficiently.
Although available computing power increases rapidly each year, algorithm
efficiency is not able to keep pace due to the use of general purpose
compilers, which are not able to fully optimize specialized application
domains. Within the domain of NNs, we have the added knowledge that network
architecture remains constant during training, meaning the architecture's data
structure can be statically optimized by a compiler. In this paper, we present
SONNC, a compiler for NNs that utilizes static analysis to generate optimized
parallel code. We show that SONNC's use of static optimizations makes it able to
outperform hand-optimized C++ code by up to 7.8X, and MATLAB code by up to 24X.
Additionally, we show that use of SONNC significantly reduces code complexity
when using structurally sparse networks.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"McAfee",
"Lawrence",
"",
"Stanford University"
],
[
"Olukotun",
"Kunle",
"",
"Stanford\n University"
]
] | TITLE: Utilizing Static Analysis and Code Generation to Accelerate Neural
Networks
ABSTRACT: As datasets continue to grow, neural network (NN) applications are becoming
increasingly limited by both the amount of available computational power and
the ease of developing high-performance applications. Researchers often must
have expert systems knowledge to make their algorithms run efficiently.
Although available computing power increases rapidly each year, algorithm
efficiency is not able to keep pace due to the use of general purpose
compilers, which are not able to fully optimize specialized application
domains. Within the domain of NNs, we have the added knowledge that network
architecture remains constant during training, meaning the architecture's data
structure can be statically optimized by a compiler. In this paper, we present
SONNC, a compiler for NNs that utilizes static analysis to generate optimized
parallel code. We show that SONNC's use of static optimizations makes it able to
outperform hand-optimized C++ code by up to 7.8X, and MATLAB code by up to 24X.
Additionally, we show that use of SONNC significantly reduces code complexity
when using structurally sparse networks.
|
1206.6467 | Luke McDowell | Luke McDowell (U.S. Naval Academy), David Aha (U.S. Naval Research
Laboratory) | Semi-Supervised Collective Classification via Hybrid Label
Regularization | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many classification problems involve data instances that are interlinked with
each other, such as webpages connected by hyperlinks. Techniques for
"collective classification" (CC) often increase accuracy for such data graphs,
but usually require a fully-labeled training graph. In contrast, we examine how
to improve the semi-supervised learning of CC models when given only a
sparsely-labeled graph, a common situation. We first describe how to use novel
combinations of classifiers to exploit the different characteristics of the
relational features vs. the non-relational features. We also extend the ideas
of "label regularization" to such hybrid classifiers, enabling them to leverage
the unlabeled data to bias the learning process. We find that these techniques,
which are efficient and easy to implement, significantly increase accuracy on
three real datasets. In addition, our results explain conflicting findings from
prior related studies.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"McDowell",
"Luke",
"",
"U.S. Naval Academy"
],
[
"Aha",
"David",
"",
"U.S. Naval Research\n Laboratory"
]
] | TITLE: Semi-Supervised Collective Classification via Hybrid Label
Regularization
ABSTRACT: Many classification problems involve data instances that are interlinked with
each other, such as webpages connected by hyperlinks. Techniques for
"collective classification" (CC) often increase accuracy for such data graphs,
but usually require a fully-labeled training graph. In contrast, we examine how
to improve the semi-supervised learning of CC models when given only a
sparsely-labeled graph, a common situation. We first describe how to use novel
combinations of classifiers to exploit the different characteristics of the
relational features vs. the non-relational features. We also extend the ideas
of "label regularization" to such hybrid classifiers, enabling them to leverage
the unlabeled data to bias the learning process. We find that these techniques,
which are efficient and easy to implement, significantly increase accuracy on
three real datasets. In addition, our results explain conflicting findings from
prior related studies.
|
1206.6477 | Yiteng Zhai | Yiteng Zhai (Nanyang Technological University), Mingkui Tan (Nanyang
Technological University), Ivor Tsang (Nanyang Technological University), Yew
Soon Ong (Nanyang Technological University) | Discovering Support and Affiliated Features from Very High Dimensions | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel learning paradigm is presented to automatically
identify groups of informative and correlated features from very high
dimensions. Specifically, we explicitly incorporate correlation measures as
constraints and then propose an efficient embedded feature selection method
using recently developed cutting plane strategy. The benefits of the proposed
algorithm are two-folds. First, it can identify the optimal discriminative and
uncorrelated feature subset to the output labels, denoted here as Support
Features, which brings about significant improvements in prediction performance
over other state of the art feature selection methods considered in the paper.
Second, during the learning process, the underlying group structures of
correlated features associated with each support feature, denoted as Affiliated
Features, can also be discovered without any additional cost. These affiliated
features serve to improve the interpretations on the learning tasks. Extensive
empirical studies on both synthetic and very high dimensional real-world
datasets verify the validity and efficiency of the proposed method.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Zhai",
"Yiteng",
"",
"Nanyang Technological University"
],
[
"Tan",
"Mingkui",
"",
"Nanyang\n Technological University"
],
[
"Tsang",
"Ivor",
"",
"Nanyang Technological University"
],
[
"Ong",
"Yew Soon",
"",
"Nanyang Technological University"
]
] | TITLE: Discovering Support and Affiliated Features from Very High Dimensions
ABSTRACT: In this paper, a novel learning paradigm is presented to automatically
identify groups of informative and correlated features from very high
dimensions. Specifically, we explicitly incorporate correlation measures as
constraints and then propose an efficient embedded feature selection method
using a recently developed cutting plane strategy. The benefits of the proposed
algorithm are twofold. First, it can identify the optimal discriminative and
uncorrelated feature subset to the output labels, denoted here as Support
Features, which brings about significant improvements in prediction performance
over other state of the art feature selection methods considered in the paper.
Second, during the learning process, the underlying group structures of
correlated features associated with each support feature, denoted as Affiliated
Features, can also be discovered without any additional cost. These affiliated
features serve to improve the interpretations on the learning tasks. Extensive
empirical studies on both synthetic and very high dimensional real-world
datasets verify the validity and efficiency of the proposed method.
|
1206.6479 | Krishnakumar Balasubramanian | Krishnakumar Balasubramanian (Georgia Institute of Technology), Guy
Lebanon (Georgia Institute of Technology) | The Landmark Selection Method for Multiple Output Prediction | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conditional modeling x \to y is a central problem in machine learning. A
substantial research effort is devoted to such modeling when x is high
dimensional. We consider, instead, the case of a high dimensional y, where x is
either low dimensional or high dimensional. Our approach is based on selecting
a small subset y_L of the dimensions of y, and proceeds by modeling (i) x \to
y_L and (ii) y_L \to y. Composing these two models, we obtain a conditional
model x \to y that possesses convenient statistical properties. Multi-label
classification and multivariate regression experiments on several datasets show
that this model outperforms the one vs. all approach as well as several
sophisticated multiple output prediction methods.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Balasubramanian",
"Krishnakumar",
"",
"Georgia Institute of Technology"
],
[
"Lebanon",
"Guy",
"",
"Georgia Institute of Technology"
]
] | TITLE: The Landmark Selection Method for Multiple Output Prediction
ABSTRACT: Conditional modeling x \to y is a central problem in machine learning. A
substantial research effort is devoted to such modeling when x is high
dimensional. We consider, instead, the case of a high dimensional y, where x is
either low dimensional or high dimensional. Our approach is based on selecting
a small subset y_L of the dimensions of y, and proceeds by modeling (i) x \to
y_L and (ii) y_L \to y. Composing these two models, we obtain a conditional
model x \to y that possesses convenient statistical properties. Multi-label
classification and multivariate regression experiments on several datasets show
that this model outperforms the one vs. all approach as well as several
sophisticated multiple output prediction methods.
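In the linear case, the composition described above reduces to two least-squares fits: stage (i) maps x to a small landmark subset y_L of the output dimensions, and stage (ii) maps y_L to the full y. The random landmark choice and all dimensions below are hypothetical; the paper's landmark selection criterion is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, q, n_landmarks = 200, 10, 50, 5      # samples, input dim, output dim, |y_L|

    X = rng.normal(size=(n, d))
    B = rng.normal(size=(d, q))
    Y = X @ B + 0.1 * rng.normal(size=(n, q))  # high-dimensional outputs

    landmarks = rng.choice(q, size=n_landmarks, replace=False)   # simple random y_L

    # Stage (i): x -> y_L, stage (ii): y_L -> y, both by least squares.
    A1, *_ = np.linalg.lstsq(X, Y[:, landmarks], rcond=None)
    A2, *_ = np.linalg.lstsq(Y[:, landmarks], Y, rcond=None)

    def predict(X_new):
        """Compose the two stages to predict all output dimensions."""
        return (X_new @ A1) @ A2

    X_test = rng.normal(size=(5, d))
    print(predict(X_test).shape)               # (5, 50)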
|
1206.6486 | Piyush Rai | Alexandre Passos (UMass Amherst), Piyush Rai (University of Utah),
Jacques Wainer (University of Campinas), Hal Daume III (University of
Maryland) | Flexible Modeling of Latent Task Structures in Multitask Learning | Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012) | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multitask learning algorithms are typically designed assuming some fixed, a
priori known latent structure shared by all the tasks. However, it is usually
unclear what type of latent task structure is the most appropriate for a given
multitask learning problem. Ideally, the "right" latent task structure should
be learned in a data-driven manner. We present a flexible, nonparametric
Bayesian model that posits a mixture of factor analyzers structure on the
tasks. The nonparametric aspect makes the model expressive enough to subsume
many existing models of latent task structures (e.g., mean-regularized tasks,
clustered tasks, low-rank or linear/non-linear subspace assumption on tasks,
etc.). Moreover, it can also learn more general task structures, addressing the
shortcomings of such models. We present a variational inference algorithm for
our model. Experimental results on synthetic and real-world datasets, on both
regression and classification problems, demonstrate the effectiveness of the
proposed method.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 19:59:59 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Passos",
"Alexandre",
"",
"UMass Amherst"
],
[
"Rai",
"Piyush",
"",
"University of Utah"
],
[
"Wainer",
"Jacques",
"",
"University of Campinas"
],
[
"Daume",
"Hal",
"III",
"University of\n Maryland"
]
] | TITLE: Flexible Modeling of Latent Task Structures in Multitask Learning
ABSTRACT: Multitask learning algorithms are typically designed assuming some fixed, a
priori known latent structure shared by all the tasks. However, it is usually
unclear what type of latent task structure is the most appropriate for a given
multitask learning problem. Ideally, the "right" latent task structure should
be learned in a data-driven manner. We present a flexible, nonparametric
Bayesian model that posits a mixture of factor analyzers structure on the
tasks. The nonparametric aspect makes the model expressive enough to subsume
many existing models of latent task structures (e.g., mean-regularized tasks,
clustered tasks, low-rank or linear/non-linear subspace assumption on tasks,
etc.). Moreover, it can also learn more general task structures, addressing the
shortcomings of such models. We present a variational inference algorithm for
our model. Experimental results on synthetic and real-world datasets, on both
regression and classification problems, demonstrate the effectiveness of the
proposed method.
|
1207.0078 | Hugo Hernandez-Salda\~na | H. Hern\'andez-Salda\~na | Three predictions on July 2012 Federal Elections in Mexico based on past
regularities | 6 pages, one table | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electoral systems have become a subject of study for physicists and
mathematicians in recent years, giving rise to a new area: sociophysics. Based
on previous works of the author on the Mexican electoral processes in the new
millennium, he found three characteristics appearing in the 2000 and 2006
preliminary datasets offered by the electoral authorities, known as PREP: I)
Error distributions are not Gaussian or Lorentzian; they are characterized by
power laws at the center and asymmetric lobes at each side. II) The Partido
Revolucionario Institucional (PRI) presented a change in the slope of the
percentage of votes obtained when it went beyond 70% of processed
certificates; hence it showed an improvement at the end of the electoral
computation. III) The distribution of votes for the PRI is a smooth function
well described by Daisy model distributions of rank $r$ in all the analyzed
cases, presidential and congressional elections in 2000, 2003 and 2006. If all
these characteristics are proper to the Mexican reality, they should appear in
the July 2012 process. Here I discuss some arguments on why such behaviors
could appear in the present process.
| [
{
"version": "v1",
"created": "Sat, 30 Jun 2012 11:07:38 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Hernández-Saldaña",
"H.",
""
]
] | TITLE: Three predictions on July 2012 Federal Elections in Mexico based on past
regularities
ABSTRACT: Electoral systems have become a subject of study for physicists and
mathematicians in recent years, giving rise to a new area: sociophysics. Based
on previous works of the author on the Mexican electoral processes in the new
millennium, he found three characteristics appearing in the 2000 and 2006
preliminary datasets offered by the electoral authorities, known as PREP: I)
Error distributions are not Gaussian or Lorentzian; they are characterized by
power laws at the center and asymmetric lobes at each side. II) The Partido
Revolucionario Institucional (PRI) presented a change in the slope of the
percentage of votes obtained when it went beyond 70% of processed
certificates; hence it showed an improvement at the end of the electoral
computation. III) The distribution of votes for the PRI is a smooth function
well described by Daisy model distributions of rank $r$ in all the analyzed
cases, presidential and congressional elections in 2000, 2003 and 2006. If all
these characteristics are proper to the Mexican reality, they should appear in
the July 2012 process. Here I discuss some arguments on why such behaviors
could appear in the present process.
|
1207.0135 | Manolis Terrovitis | Manolis Terrovitis, John Liagouris, Nikos Mamoulis, Spiros
Skiadopoulos | Privacy Preservation by Disassociation | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 10, pp.
944-955 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we focus on protection against identity disclosure in the
publication of sparse multidimensional data. Existing multidimensional
anonymization techniquesa) protect the privacy of users either by altering the
set of quasi-identifiers of the original data (e.g., by generalization or
suppression) or by adding noise (e.g., using differential privacy) and/or (b)
assume a clear distinction between sensitive and non-sensitive information and
sever the possible linkage. In many real world applications the above
techniques are not applicable. For instance, consider web search query logs.
Suppressing or generalizing anonymization methods would remove the most
valuable information in the dataset: the original query terms. Additionally,
web search query logs contain millions of query terms which cannot be
categorized as sensitive or non-sensitive since a term may be sensitive for a
user and non-sensitive for another. Motivated by this observation, we propose
an anonymization technique termed disassociation that preserves the original
terms but hides the fact that two or more different terms appear in the same
record. We protect the users' privacy by disassociating record terms that
participate in identifying combinations. This way the adversary cannot
associate with high probability a record with a rare combination of terms. To
the best of our knowledge, our proposal is the first to employ such a technique
to provide protection against identity disclosure. We propose an anonymization
algorithm based on our approach and evaluate its performance on real and
synthetic datasets, comparing it against other state-of-the-art methods based
on generalization and differential privacy.
| [
{
"version": "v1",
"created": "Sat, 30 Jun 2012 20:16:16 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Terrovitis",
"Manolis",
""
],
[
"Liagouris",
"John",
""
],
[
"Mamoulis",
"Nikos",
""
],
[
"Skiadopoulos",
"Spiros",
""
]
] | TITLE: Privacy Preservation by Disassociation
ABSTRACT: In this work, we focus on protection against identity disclosure in the
publication of sparse multidimensional data. Existing multidimensional
anonymization techniques (a) protect the privacy of users either by altering the
set of quasi-identifiers of the original data (e.g., by generalization or
suppression) or by adding noise (e.g., using differential privacy) and/or (b)
assume a clear distinction between sensitive and non-sensitive information and
sever the possible linkage. In many real world applications the above
techniques are not applicable. For instance, consider web search query logs.
Suppressing or generalizing anonymization methods would remove the most
valuable information in the dataset: the original query terms. Additionally,
web search query logs contain millions of query terms which cannot be
categorized as sensitive or non-sensitive since a term may be sensitive for a
user and non-sensitive for another. Motivated by this observation, we propose
an anonymization technique termed disassociation that preserves the original
terms but hides the fact that two or more different terms appear in the same
record. We protect the users' privacy by disassociating record terms that
participate in identifying combinations. This way the adversary cannot
associate with high probability a record with a rare combination of terms. To
the best of our knowledge, our proposal is the first to employ such a technique
to provide protection against identity disclosure. We propose an anonymization
algorithm based on our approach and evaluate its performance on real and
synthetic datasets, comparing it against other state-of-the-art methods based
on generalization and differential privacy.
|
1207.0136 | Bhargav Kanagal | Bhargav Kanagal, Amr Ahmed, Sandeep Pandey, Vanja Josifovski, Jeff
Yuan, Lluis Garcia-Pueyo | Supercharging Recommender Systems using Taxonomies for Learning User
Purchase Behavior | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 10, pp.
956-967 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems based on latent factor models have been effectively used
for understanding user interests and predicting future actions. Such models
work by projecting the users and items into a smaller dimensional space,
thereby clustering similar users and items together and subsequently computing
similarity between unknown user-item pairs. When user-item interactions are
sparse (sparsity problem) or when new items continuously appear (cold start
problem), these models perform poorly. In this paper, we exploit the
combination of taxonomies and latent factor models to mitigate these issues and
improve recommendation accuracy. We observe that taxonomies provide structure
similar to that of a latent factor model: namely, it imposes human-labeled
categories (clusters) over items. This leads to our proposed taxonomy-aware
latent factor model (TF) which combines taxonomies and latent factors using
additive models. We develop efficient algorithms to train the TF models, which
scales to large number of users/items and develop scalable
inference/recommendation algorithms by exploiting the structure of the
taxonomy. In addition, we extend the TF model to account for the temporal
dynamics of user interests using high-order Markov chains. To deal with
large-scale data, we develop a parallel multi-core implementation of our TF
model. We empirically evaluate the TF model for the task of predicting user
purchases using a real-world shopping dataset spanning more than a million
users and products. Our experiments demonstrate the benefits of using our TF
models over existing approaches, in terms of both prediction accuracy and
running time.
| [
{
"version": "v1",
"created": "Sat, 30 Jun 2012 20:17:05 GMT"
}
] | 2012-07-03T00:00:00 | [
[
"Kanagal",
"Bhargav",
""
],
[
"Ahmed",
"Amr",
""
],
[
"Pandey",
"Sandeep",
""
],
[
"Josifovski",
"Vanja",
""
],
[
"Yuan",
"Jeff",
""
],
[
"Garcia-Pueyo",
"Lluis",
""
]
] | TITLE: Supercharging Recommender Systems using Taxonomies for Learning User
Purchase Behavior
ABSTRACT: Recommender systems based on latent factor models have been effectively used
for understanding user interests and predicting future actions. Such models
work by projecting the users and items into a smaller dimensional space,
thereby clustering similar users and items together and subsequently computing
similarity between unknown user-item pairs. When user-item interactions are
sparse (sparsity problem) or when new items continuously appear (cold start
problem), these models perform poorly. In this paper, we exploit the
combination of taxonomies and latent factor models to mitigate these issues and
improve recommendation accuracy. We observe that taxonomies provide structure
similar to that of a latent factor model: namely, they impose human-labeled
categories (clusters) over items. This leads to our proposed taxonomy-aware
latent factor model (TF) which combines taxonomies and latent factors using
additive models. We develop efficient algorithms to train the TF models, which
scale to large numbers of users/items, and develop scalable
inference/recommendation algorithms by exploiting the structure of the
taxonomy. In addition, we extend the TF model to account for the temporal
dynamics of user interests using high-order Markov chains. To deal with
large-scale data, we develop a parallel multi-core implementation of our TF
model. We empirically evaluate the TF model for the task of predicting user
purchases using a real-world shopping dataset spanning more than a million
users and products. Our experiments demonstrate the benefits of using our TF
models over existing approaches, in terms of both prediction accuracy and
running time.
|
1206.6815 | Koby Crammer | Koby Crammer, Amir Globerson | Discriminative Learning via Semidefinite Probabilistic Models | Appears in Proceedings of the Twenty-Second Conference on Uncertainty
in Artificial Intelligence (UAI2006) | null | null | UAI-P-2006-PG-98-105 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discriminative linear models are a popular tool in machine learning. These
can be generally divided into two types: The first is linear classifiers, such
as support vector machines, which are well studied and provide state-of-the-art
results. One shortcoming of these models is that their output (known as the
'margin') is not calibrated, and cannot be translated naturally into a
distribution over the labels. Thus, it is difficult to incorporate such models
as components of larger systems, unlike probabilistic based approaches. The
second type of approach constructs class conditional distributions using a
nonlinearity (e.g. log-linear models), but is occasionally worse in terms of
classification error. We propose a supervised learning method which combines
the best of both approaches. Specifically, our method provides a distribution
over the labels, which is a linear function of the model parameters. As a
consequence, differences between probabilities are linear functions, a property
which most probabilistic models (e.g. log-linear) do not have.
Our model assumes that classes correspond to linear subspaces (rather than to
half spaces). Using a relaxed projection operator, we construct a measure which
evaluates the degree to which a given vector 'belongs' to a subspace, resulting
in a distribution over labels. Interestingly, this view is closely related to
similar concepts in quantum detection theory. The resulting models can be
trained either to maximize the margin or to optimize average likelihood
measures. The corresponding optimization problems are semidefinite programs
which can be solved efficiently. We illustrate the performance of our algorithm
on real world datasets, and show that it outperforms 2nd order kernel methods.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 15:38:14 GMT"
}
] | 2012-07-02T00:00:00 | [
[
"Crammer",
"Koby",
""
],
[
"Globerson",
"Amir",
""
]
] | TITLE: Discriminative Learning via Semidefinite Probabilistic Models
ABSTRACT: Discriminative linear models are a popular tool in machine learning. These
can be generally divided into two types: The first is linear classifiers, such
as support vector machines, which are well studied and provide state-of-the-art
results. One shortcoming of these models is that their output (known as the
'margin') is not calibrated, and cannot be translated naturally into a
distribution over the labels. Thus, it is difficult to incorporate such models
as components of larger systems, unlike probabilistic based approaches. The
second type of approach constructs class conditional distributions using a
nonlinearity (e.g. log-linear models), but is occasionally worse in terms of
classification error. We propose a supervised learning method which combines
the best of both approaches. Specifically, our method provides a distribution
over the labels, which is a linear function of the model parameters. As a
consequence, differences between probabilities are linear functions, a property
which most probabilistic models (e.g. log-linear) do not have.
Our model assumes that classes correspond to linear subspaces (rather than to
half spaces). Using a relaxed projection operator, we construct a measure which
evaluates the degree to which a given vector 'belongs' to a subspace, resulting
in a distribution over labels. Interestingly, this view is closely related to
similar concepts in quantum detection theory. The resulting models can be
trained either to maximize the margin or to optimize average likelihood
measures. The corresponding optimization problems are semidefinite programs
which can be solved efficiently. We illustrate the performance of our algorithm
on real world datasets, and show that it outperforms 2nd order kernel methods.
|
1206.6850 | Guobiao Mei | Guobiao Mei, Christian R. Shelton | Visualization of Collaborative Data | Appears in Proceedings of the Twenty-Second Conference on Uncertainty
in Artificial Intelligence (UAI2006) | null | null | UAI-P-2006-PG-341-348 | cs.GR cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative data consist of ratings relating two distinct sets of objects:
users and items. Much of the work with such data focuses on filtering:
predicting unknown ratings for pairs of users and items. In this paper we focus
on the problem of visualizing the information. Given all of the ratings, our
task is to embed all of the users and items as points in the same Euclidean
space. We would like to place users near items that they have rated (or would
rate) high, and far away from those they would give a low rating. We pose this
problem as a real-valued non-linear Bayesian network and employ Markov chain
Monte Carlo and expectation maximization to find an embedding. We present a
metric by which to judge the quality of a visualization and compare our results
to local linear embedding and Eigentaste on three real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 16:24:29 GMT"
}
] | 2012-07-02T00:00:00 | [
[
"Mei",
"Guobiao",
""
],
[
"Shelton",
"Christian R.",
""
]
] | TITLE: Visualization of Collaborative Data
ABSTRACT: Collaborative data consist of ratings relating two distinct sets of objects:
users and items. Much of the work with such data focuses on filtering:
predicting unknown ratings for pairs of users and items. In this paper we focus
on the problem of visualizing the information. Given all of the ratings, our
task is to embed all of the users and items as points in the same Euclidean
space. We would like to place users near items that they have rated (or would
rate) high, and far away from those they would give a low rating. We pose this
problem as a real-valued non-linear Bayesian network and employ Markov chain
Monte Carlo and expectation maximization to find an embedding. We present a
metric by which to judge the quality of a visualization and compare our results
to local linear embedding and Eigentaste on three real-world datasets.
|
1206.6852 | Vikash Mansinghka | Vikash Mansinghka, Charles Kemp, Thomas Griffiths, Joshua Tenenbaum | Structured Priors for Structure Learning | Appears in Proceedings of the Twenty-Second Conference on Uncertainty
in Artificial Intelligence (UAI2006) | null | null | UAI-P-2006-PG-324-331 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional approaches to Bayes net structure learning typically assume
little regularity in graph structure other than sparseness. However, in many
cases, we expect more systematicity: variables in real-world systems often
group into classes that predict the kinds of probabilistic dependencies they
participate in. Here we capture this form of prior knowledge in a hierarchical
Bayesian framework, and exploit it to enable structure learning and type
discovery from small datasets. Specifically, we present a nonparametric
generative model for directed acyclic graphs as a prior for Bayes net structure
learning. Our model assumes that variables come in one or more classes and that
the prior probability of an edge existing between two variables is a function
only of their classes. We derive an MCMC algorithm for simultaneous inference
of the number of classes, the class assignments of variables, and the Bayes net
structure over variables. For several realistic, sparse datasets, we show that
the bias towards systematicity of connections provided by our model yields more
accurate learned networks than a traditional, uniform prior approach, and that
the classes found by our model are appropriate.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 16:24:57 GMT"
}
] | 2012-07-02T00:00:00 | [
[
"Mansinghka",
"Vikash",
""
],
[
"Kemp",
"Charles",
""
],
[
"Griffiths",
"Thomas",
""
],
[
"Tenenbaum",
"Joshua",
""
]
] | TITLE: Structured Priors for Structure Learning
ABSTRACT: Traditional approaches to Bayes net structure learning typically assume
little regularity in graph structure other than sparseness. However, in many
cases, we expect more systematicity: variables in real-world systems often
group into classes that predict the kinds of probabilistic dependencies they
participate in. Here we capture this form of prior knowledge in a hierarchical
Bayesian framework, and exploit it to enable structure learning and type
discovery from small datasets. Specifically, we present a nonparametric
generative model for directed acyclic graphs as a prior for Bayes net structure
learning. Our model assumes that variables come in one or more classes and that
the prior probability of an edge existing between two variables is a function
only of their classes. We derive an MCMC algorithm for simultaneous inference
of the number of classes, the class assignments of variables, and the Bayes net
structure over variables. For several realistic, sparse datasets, we show that
the bias towards systematicity of connections provided by our model yields more
accurate learned networks than a traditional, uniform prior approach, and that
the classes found by our model are appropriate.
|
1206.6860 | John Langford | John Langford, Roberto Oliveira, Bianca Zadrozny | Predicting Conditional Quantiles via Reduction to Classification | Appears in Proceedings of the Twenty-Second Conference on Uncertainty
in Artificial Intelligence (UAI2006) | null | null | UAI-P-2006-PG-257-264 | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show how to reduce the process of predicting general order statistics (and
the median in particular) to solving classification. The accompanying
theoretical statement shows that the regret of the classifier bounds the regret
of the quantile regression under a quantile loss. We also test this reduction
empirically against existing quantile regression methods on large real-world
datasets and discover that it provides state-of-the-art performance.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 16:27:25 GMT"
}
] | 2012-07-02T00:00:00 | [
[
"Langford",
"John",
""
],
[
"Oliveira",
"Roberto",
""
],
[
"Zadrozny",
"Bianca",
""
]
] | TITLE: Predicting Conditional Quantiles via Reduction to Classification
ABSTRACT: We show how to reduce the process of predicting general order statistics (and
the median in particular) to solving classification. The accompanying
theoretical statement shows that the regret of the classifier bounds the regret
of the quantile regression under a quantile loss. We also test this reduction
empirically against existing quantile regression methods on large real-world
datasets and discover that it provides state-of-the-art performance.
|
1206.6865 | Frank Wood | Frank Wood, Thomas Griffiths, Zoubin Ghahramani | A Non-Parametric Bayesian Method for Inferring Hidden Causes | Appears in Proceedings of the Twenty-Second Conference on Uncertainty
in Artificial Intelligence (UAI2006) | null | null | UAI-P-2006-PG-536-543 | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a non-parametric Bayesian approach to structure learning with
hidden causes. Previous Bayesian treatments of this problem define a prior over
the number of hidden causes and use algorithms such as reversible jump Markov
chain Monte Carlo to move between solutions. In contrast, we assume that the
number of hidden causes is unbounded, but only a finite number influence
observable variables. This makes it possible to use a Gibbs sampler to
approximate the distribution over causal structures. We evaluate the
performance of both approaches in discovering hidden causes in simulated data,
and use our non-parametric approach to discover hidden causes in a real medical
dataset.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 16:28:41 GMT"
}
] | 2012-07-02T00:00:00 | [
[
"Wood",
"Frank",
""
],
[
"Griffiths",
"Thomas",
""
],
[
"Ghahramani",
"Zoubin",
""
]
] | TITLE: A Non-Parametric Bayesian Method for Inferring Hidden Causes
ABSTRACT: We present a non-parametric Bayesian approach to structure learning with
hidden causes. Previous Bayesian treatments of this problem define a prior over
the number of hidden causes and use algorithms such as reversible jump Markov
chain Monte Carlo to move between solutions. In contrast, we assume that the
number of hidden causes is unbounded, but only a finite number influence
observable variables. This makes it possible to use a Gibbs sampler to
approximate the distribution over causal structures. We evaluate the
performance of both approaches in discovering hidden causes in simulated data,
and use our non-parametric approach to discover hidden causes in a real medical
dataset.
|
1206.6883 | Jun Wang | Jun Wang, Adam Woznica, Alexandros Kalousis | Learning Neighborhoods for Metric Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metric learning methods have been shown to perform well on different learning
tasks. Many of them rely on target neighborhood relationships that are computed
in the original feature space and remain fixed throughout learning. As a
result, the learned metric reflects the original neighborhood relations. We
propose a novel formulation of the metric learning problem in which, in
addition to the metric, the target neighborhood relations are also learned in a
two-step iterative approach. The new formulation can be seen as a
generalization of many existing metric learning methods. The formulation
includes a target neighbor assignment rule that assigns different numbers of
neighbors to instances according to their quality; `high quality' instances get
more neighbors. We experiment with two of its instantiations that correspond to
the metric learning algorithms LMNN and MCML and compare it to other metric
learning methods on a number of datasets. The experimental results show
state-of-the-art performance and provide evidence that learning the
neighborhood relations does improve predictive performance.
| [
{
"version": "v1",
"created": "Thu, 28 Jun 2012 18:57:01 GMT"
}
] | 2012-07-02T00:00:00 | [
[
"Wang",
"Jun",
""
],
[
"Woznica",
"Adam",
""
],
[
"Kalousis",
"Alexandros",
""
]
] | TITLE: Learning Neighborhoods for Metric Learning
ABSTRACT: Metric learning methods have been shown to perform well on different learning
tasks. Many of them rely on target neighborhood relationships that are computed
in the original feature space and remain fixed throughout learning. As a
result, the learned metric reflects the original neighborhood relations. We
propose a novel formulation of the metric learning problem in which, in
addition to the metric, the target neighborhood relations are also learned in a
two-step iterative approach. The new formulation can be seen as a
generalization of many existing metric learning methods. The formulation
includes a target neighbor assignment rule that assigns different numbers of
neighbors to instances according to their quality; `high quality' instances get
more neighbors. We experiment with two of its instantiations that correspond to
the metric learning algorithms LMNN and MCML and compare it to other metric
learning methods on a number of datasets. The experimental results show
state-of-the-art performance and provide evidence that learning the
neighborhood relations does improve predictive performance.
|
1204.3251 | Vladimir Vovk | Valentina Fedorova, Alex Gammerman, Ilia Nouretdinov, and Vladimir
Vovk | Plug-in martingales for testing exchangeability on-line | 8 pages, 7 figures; ICML 2012 Conference Proceedings | null | null | On-line Compression Modelling Project (New Series), Working Paper 04 | cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A standard assumption in machine learning is the exchangeability of data,
which is equivalent to assuming that the examples are generated from the same
probability distribution independently. This paper is devoted to testing the
assumption of exchangeability on-line: the examples arrive one by one, and
after receiving each example we would like to have a valid measure of the
degree to which the assumption of exchangeability has been falsified. Such
measures are provided by exchangeability martingales. We extend known
techniques for constructing exchangeability martingales and show that our new
method is competitive with the martingales introduced before. Finally we
investigate the performance of our testing method on two benchmark datasets,
USPS and Statlog Satellite data; for the former, the known techniques give
satisfactory results, but for the latter our new more flexible method becomes
necessary.
| [
{
"version": "v1",
"created": "Sun, 15 Apr 2012 10:21:57 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jun 2012 09:36:27 GMT"
}
] | 2012-06-29T00:00:00 | [
[
"Fedorova",
"Valentina",
""
],
[
"Gammerman",
"Alex",
""
],
[
"Nouretdinov",
"Ilia",
""
],
[
"Vovk",
"Vladimir",
""
]
] | TITLE: Plug-in martingales for testing exchangeability on-line
ABSTRACT: A standard assumption in machine learning is the exchangeability of data,
which is equivalent to assuming that the examples are generated from the same
probability distribution independently. This paper is devoted to testing the
assumption of exchangeability on-line: the examples arrive one by one, and
after receiving each example we would like to have a valid measure of the
degree to which the assumption of exchangeability has been falsified. Such
measures are provided by exchangeability martingales. We extend known
techniques for constructing exchangeability martingales and show that our new
method is competitive with the martingales introduced before. Finally we
investigate the performance of our testing method on two benchmark datasets,
USPS and Statlog Satellite data; for the former, the known techniques give
satisfactory results, but for the latter our new more flexible method becomes
necessary.
|
1205.6359 | Akshay Deepak | Akshay Deepak, David Fern\'andez-Baca, and Michelle M. McMahon | Extracting Conflict-free Information from Multi-labeled Trees | Submitted in Workshop on Algorithms in Bioinformatics 2012
(http://algo12.fri.uni-lj.si/?file=wabi) | null | null | null | cs.DS q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A multi-labeled tree, or MUL-tree, is a phylogenetic tree where two or more
leaves share a label, e.g., a species name. A MUL-tree can imply multiple
conflicting phylogenetic relationships for the same set of taxa, but can also
contain conflict-free information that is of interest and yet is not obvious.
We define the information content of a MUL-tree T as the set of all
conflict-free quartet topologies implied by T, and define the maximal reduced
form of T as the smallest tree that can be obtained from T by pruning leaves
and contracting edges while retaining the same information content. We show
that any two MUL-trees with the same information content exhibit the same
reduced form. This introduces an equivalence relation in MUL-trees with
potential applications to comparing MUL-trees. We present an efficient
algorithm to reduce a MUL-tree to its maximally reduced form and evaluate its
performance on empirical datasets in terms of both quality of the reduced tree
and the degree of data reduction achieved.
| [
{
"version": "v1",
"created": "Tue, 29 May 2012 13:35:56 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jun 2012 14:50:07 GMT"
}
] | 2012-06-29T00:00:00 | [
[
"Deepak",
"Akshay",
""
],
[
"Fernández-Baca",
"David",
""
],
[
"McMahon",
"Michelle M.",
""
]
] | TITLE: Extracting Conflict-free Information from Multi-labeled Trees
ABSTRACT: A multi-labeled tree, or MUL-tree, is a phylogenetic tree where two or more
leaves share a label, e.g., a species name. A MUL-tree can imply multiple
conflicting phylogenetic relationships for the same set of taxa, but can also
contain conflict-free information that is of interest and yet is not obvious.
We define the information content of a MUL-tree T as the set of all
conflict-free quartet topologies implied by T, and define the maximal reduced
form of T as the smallest tree that can be obtained from T by pruning leaves
and contracting edges while retaining the same information content. We show
that any two MUL-trees with the same information content exhibit the same
reduced form. This introduces an equivalence relation in MUL-trees with
potential applications to comparing MUL-trees. We present an efficient
algorithm to reduce a MUL-tree to its maximally reduced form and evaluate its
performance on empirical datasets in terms of both quality of the reduced tree
and the degree of data reduction achieved.
|
1206.6588 | Bosiljka Tadic | Milovan Suvakov, Marija Mitrovic, Vladimir Gligorijevic, Bosiljka
Tadic | How the online social networks are used: Dialogs-based structure of
MySpace | 18 pages, 12 figures (resized to 50KB) | null | null | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantitative study of collective dynamics in online social networks is a new
challenge based on the abundance of empirical data. Conclusions, however, may
depend on factors such as users' psychology profiles and their reasons to use the
online contacts. In this paper we have compiled and analyzed two datasets from
\texttt{MySpace}. The data contain networked dialogs occurring within a
specified time depth, high temporal resolution, and texts of messages, in which
the emotion valence is assessed by using SentiStrength classifier. Performing a
comprehensive analysis we obtain three groups of results: Dynamic topology of
the dialogs-based networks has a characteristic structure with Zipf's
distribution of communities, low link reciprocity, and disassortative
correlations. Overlaps supporting the "weak-ties" hypothesis are found to follow
the laws recently conjectured for online games. Long-range temporal
correlations and persistent fluctuations occur in the time series of messages
carrying positive (negative) emotion. Patterns of user communications have
dominant positive emotion (attractiveness) and strong impact of circadian
cycles and interactivity times longer than one day. Taken together, these
results give new insight into the functioning of online social networks and
unveil the importance of the amount of information and emotion that is communicated
along the social links. (All data used in this study are fully anonymized.)
| [
{
"version": "v1",
"created": "Thu, 28 Jun 2012 08:20:03 GMT"
}
] | 2012-06-29T00:00:00 | [
[
"Suvakov",
"Milovan",
""
],
[
"Mitrovic",
"Marija",
""
],
[
"Gligorijevic",
"Vladimir",
""
],
[
"Tadic",
"Bosiljka",
""
]
] | TITLE: How the online social networks are used: Dialogs-based structure of
MySpace
ABSTRACT: Quantitative study of collective dynamics in online social networks is a new
challenge based on the abundance of empirical data. Conclusions, however, may
depend on factors such as users' psychology profiles and their reasons to use the
online contacts. In this paper we have compiled and analyzed two datasets from
\texttt{MySpace}. The data contain networked dialogs occurring within a
specified time depth, high temporal resolution, and texts of messages, in which
the emotion valence is assessed by using SentiStrength classifier. Performing a
comprehensive analysis we obtain three groups of results: Dynamic topology of
the dialogs-based networks has a characteristic structure with Zipf's
distribution of communities, low link reciprocity, and disassortative
correlations. Overlaps supporting the "weak-ties" hypothesis are found to follow
the laws recently conjectured for online games. Long-range temporal
correlations and persistent fluctuations occur in the time series of messages
carrying positive (negative) emotion. Patterns of user communications have
dominant positive emotion (attractiveness) and strong impact of circadian
cycles and interactivity times longer than one day. Taken together, these
results give new insight into the functioning of online social networks and
unveil the importance of the amount of information and emotion that is communicated
along the social links. (All data used in this study are fully anonymized.)
|
1206.6646 | Arnab Bhattacharya | Arnab Bhattacharya and B. Palvali Teja | Aggregate Skyline Join Queries: Skylines with Aggregate Operations over
Multiple Relations | Best student paper award; COMAD 2010 (International Conference on
Management of Data) | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The multi-criteria decision making, which is possible with the advent of
skyline queries, has been applied in many areas. Though most of the existing
research is concerned with only a single relation, several real world
applications require finding the skyline set of records over multiple
relations. Consequently, the join operation over skylines where the preferences
are local to each relation, has been proposed. In many of those cases, however,
the join often involves performing aggregate operations among some of the
attributes from the different relations. In this paper, we introduce such
queries as "aggregate skyline join queries". Since the naive algorithm is
impractical, we propose three algorithms to efficiently process such queries.
The algorithms utilize certain properties of skyline sets, and processes the
skylines as much as possible locally before computing the join. Experiments
with real and synthetic datasets exhibit the practicality and scalability of
the algorithms with respect to the cardinality and dimensionality of the
relations.
| [
{
"version": "v1",
"created": "Thu, 28 Jun 2012 12:06:51 GMT"
}
] | 2012-06-29T00:00:00 | [
[
"Bhattacharya",
"Arnab",
""
],
[
"Teja",
"B. Palvali",
""
]
] | TITLE: Aggregate Skyline Join Queries: Skylines with Aggregate Operations over
Multiple Relations
ABSTRACT: The multi-criteria decision making, which is possible with the advent of
skyline queries, has been applied in many areas. Though most of the existing
research is concerned with only a single relation, several real world
applications require finding the skyline set of records over multiple
relations. Consequently, the join operation over skylines where the preferences
are local to each relation, has been proposed. In many of those cases, however,
the join often involves performing aggregate operations among some of the
attributes from the different relations. In this paper, we introduce such
queries as "aggregate skyline join queries". Since the naive algorithm is
impractical, we propose three algorithms to efficiently process such queries.
The algorithms utilize certain properties of skyline sets, and process the
skylines as much as possible locally before computing the join. Experiments
with real and synthetic datasets exhibit the practicality and scalability of
the algorithms with respect to the cardinality and dimensionality of the
relations.
|
1206.6196 | Pierre-Francois Marteau | Pierre-Fran\c{c}ois Marteau (IRISA), Nicolas Bonnel (IRISA), Gilbas
M\'enier (IRISA) | Discrete Elastic Inner Vector Spaces with Application in Time Series and
Sequence Mining | arXiv admin note: substantial text overlap with arXiv:1101.4318 | IEEE Transactions on Knowledge and Data Engineering (2012) pp 1-14 | 10.1109/TKDE.2012.131 | null | cs.LG cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a framework dedicated to the construction of what we call
discrete elastic inner product allowing one to embed sets of non-uniformly
sampled multivariate time series or sequences of varying lengths into inner
product space structures. This framework is based on a recursive definition
that covers the case of multiple embedded time elastic dimensions. We prove
that such inner products exist in our general framework and show how a simple
instance of this inner product class operates on some prospective applications,
while generalizing the Euclidean inner product. Classification experimentations
on time series and symbolic sequences datasets demonstrate the benefits that we
can expect by embedding time series or sequences into elastic inner spaces
rather than into classical Euclidean spaces. These experiments show good
accuracy when compared to the Euclidean distance or even dynamic programming
algorithms while maintaining a linear algorithmic complexity at exploitation
stage, although a quadratic indexing phase beforehand is required.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 07:44:15 GMT"
}
] | 2012-06-28T00:00:00 | [
[
"Marteau",
"Pierre-François",
"",
"IRISA"
],
[
"Bonnel",
"Nicolas",
"",
"IRISA"
],
[
"Ménier",
"Gilbas",
"",
"IRISA"
]
] | TITLE: Discrete Elastic Inner Vector Spaces with Application in Time Series and
Sequence Mining
ABSTRACT: This paper proposes a framework dedicated to the construction of what we call
discrete elastic inner product allowing one to embed sets of non-uniformly
sampled multivariate time series or sequences of varying lengths into inner
product space structures. This framework is based on a recursive definition
that covers the case of multiple embedded time elastic dimensions. We prove
that such inner products exist in our general framework and show how a simple
instance of this inner product class operates on some prospective applications,
while generalizing the Euclidean inner product. Classification experimentations
on time series and symbolic sequences datasets demonstrate the benefits that we
can expect by embedding time series or sequences into elastic inner spaces
rather than into classical Euclidean spaces. These experiments show good
accuracy when compared to the Euclidean distance or even dynamic programming
algorithms while maintaining a linear algorithmic complexity at exploitation
stage, although a quadratic indexing phase beforehand is required.
|
1206.6293 | Alexander Sch\"atzle | Martin Przyjaciel-Zablocki, Alexander Sch\"atzle, Thomas Hornung,
Christopher Dorner, Georg Lausen | Cascading map-side joins over HBase for scalable join processing | null | null | null | null | cs.DB cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the major challenges in large-scale data processing with MapReduce is
the smart computation of joins. Since Semantic Web datasets published in RDF
have increased rapidly over the last few years, scalable join techniques become
an important issue for SPARQL query processing as well. In this paper, we
introduce the Map-Side Index Nested Loop Join (MAPSIN join) which combines
scalable indexing capabilities of NoSQL storage systems like HBase, that suffer
from an insufficient distributed processing layer, with MapReduce, which in
turn does not provide appropriate storage structures for efficient large-scale
join processing. While retaining the flexibility of commonly used reduce-side
joins, we leverage the effectiveness of map-side joins without any changes to
the underlying framework. We demonstrate the significant benefits of MAPSIN
joins for the processing of SPARQL basic graph patterns on large RDF datasets
by an evaluation with the LUBM and SP2Bench benchmarks. For most queries,
MAPSIN join based query execution outperforms reduce-side join based execution
by an order of magnitude.
| [
{
"version": "v1",
"created": "Wed, 27 Jun 2012 15:05:05 GMT"
}
] | 2012-06-28T00:00:00 | [
[
"Przyjaciel-Zablocki",
"Martin",
""
],
[
"Schätzle",
"Alexander",
""
],
[
"Hornung",
"Thomas",
""
],
[
"Dorner",
"Christopher",
""
],
[
"Lausen",
"Georg",
""
]
] | TITLE: Cascading map-side joins over HBase for scalable join processing
ABSTRACT: One of the major challenges in large-scale data processing with MapReduce is
the smart computation of joins. Since Semantic Web datasets published in RDF
have increased rapidly over the last few years, scalable join techniques become
an important issue for SPARQL query processing as well. In this paper, we
introduce the Map-Side Index Nested Loop Join (MAPSIN join) which combines
scalable indexing capabilities of NoSQL storage systems like HBase, that suffer
from an insufficient distributed processing layer, with MapReduce, which in
turn does not provide appropriate storage structures for efficient large-scale
join processing. While retaining the flexibility of commonly used reduce-side
joins, we leverage the effectiveness of map-side joins without any changes to
the underlying framework. We demonstrate the significant benefits of MAPSIN
joins for the processing of SPARQL basic graph patterns on large RDF datasets
by an evaluation with the LUBM and SP2Bench benchmarks. For most queries,
MAPSIN join based query execution outperforms reduce-side join based execution
by an order of magnitude.
|
1206.5915 | Sundararajan Sellamanickam | Sundararajan Sellamanickam, Sathiya Keerthi Selvaraj | Graph Based Classification Methods Using Inaccurate External Classifier
Information | 12 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the problem of collectively classifying entities
where relational information is available across the entities. In practice
inaccurate class distribution for each entity is often available from another
(external) classifier. For example this distribution could come from a
classifier built using content features or a simple dictionary. Given the
relational and inaccurate external classifier information, we consider two
graph based settings in which the problem of collective classification can be
solved. In the first setting the class distribution is used to fix labels to a
subset of nodes and the labels for the remaining nodes are obtained like in a
transductive setting. In the other setting the class distributions of all nodes
are used to define the fitting function part of a graph regularized objective
function. We define a generalized objective function that handles both the
settings. Methods like harmonic Gaussian field and local-global consistency
(LGC) reported in the literature can be seen as special cases. We extend the
LGC and weighted vote relational neighbor classification (WvRN) methods to
support usage of external classifier information. We also propose an efficient
least squares regularization (LSR) based method and relate it to information
regularization methods. All the methods are evaluated on several benchmark and
real world datasets. Considering together speed, robustness and accuracy,
experimental results indicate that the LSR and WvRN-extension methods perform
better than other methods.
| [
{
"version": "v1",
"created": "Tue, 26 Jun 2012 08:29:43 GMT"
}
] | 2012-06-27T00:00:00 | [
[
"Sellamanickam",
"Sundararajan",
""
],
[
"Selvaraj",
"Sathiya Keerthi",
""
]
] | TITLE: Graph Based Classification Methods Using Inaccurate External Classifier
Information
ABSTRACT: In this paper we consider the problem of collectively classifying entities
where relational information is available across the entities. In practice
inaccurate class distribution for each entity is often available from another
(external) classifier. For example this distribution could come from a
classifier built using content features or a simple dictionary. Given the
relational and inaccurate external classifier information, we consider two
graph based settings in which the problem of collective classification can be
solved. In the first setting the class distribution is used to fix labels to a
subset of nodes and the labels for the remaining nodes are obtained like in a
transductive setting. In the other setting the class distributions of all nodes
are used to define the fitting function part of a graph regularized objective
function. We define a generalized objective function that handles both the
settings. Methods like harmonic Gaussian field and local-global consistency
(LGC) reported in the literature can be seen as special cases. We extend the
LGC and weighted vote relational neighbor classification (WvRN) methods to
support usage of external classifier information. We also propose an efficient
least squares regularization (LSR) based method and relate it to information
regularization methods. All the methods are evaluated on several benchmark and
real world datasets. Considering together speed, robustness and accuracy,
experimental results indicate that the LSR and WvRN-extension methods perform
better than other methods.
|
1206.6015 | Sundararajan Sellamanickam | Sundararajan Sellamanickam, Sathiya Keerthi Selvaraj | Transductive Classification Methods for Mixed Graphs | 8 Pages, 2 Tables, 2 Figures, KDD Workshop - MLG'11 San Diego, CA,
USA | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we provide a principled approach to solve a transductive
classification problem involving a similar graph (edges tend to connect nodes
with same labels) and a dissimilar graph (edges tend to connect nodes with
opposing labels). Most of the existing methods, e.g., Information
Regularization (IR), Weighted vote Relational Neighbor classifier (WvRN) etc,
assume that the given graph is only a similar graph. We extend the IR and WvRN
methods to deal with mixed graphs. We evaluate the proposed extensions on
several benchmark datasets as well as two real world datasets and demonstrate
the usefulness of our ideas.
| [
{
"version": "v1",
"created": "Tue, 26 Jun 2012 14:56:33 GMT"
}
] | 2012-06-27T00:00:00 | [
[
"Sellamanickam",
"Sundararajan",
""
],
[
"Selvaraj",
"Sathiya Keerthi",
""
]
] | TITLE: Transductive Classification Methods for Mixed Graphs
ABSTRACT: In this paper we provide a principled approach to solve a transductive
classification problem involving a similar graph (edges tend to connect nodes
with the same labels) and a dissimilar graph (edges tend to connect nodes with
opposing labels). Most of the existing methods, e.g., Information
Regularization (IR), Weighted vote Relational Neighbor classifier (WvRN) etc,
assume that the given graph is only a similar graph. We extend the IR and WvRN
methods to deal with mixed graphs. We evaluate the proposed extensions on
several benchmark datasets as well as two real world datasets and demonstrate
the usefulness of our ideas.
|
1206.6030 | Sundararajan Sellamanickam | Sundararajan Sellamanickam, Shirish Shevade | An Additive Model View to Sparse Gaussian Process Classifier Design | 14 pages, 3 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of designing a sparse Gaussian process classifier
(SGPC) that generalizes well. Viewing SGPC design as constructing an additive
model like in boosting, we present an efficient and effective SGPC design
method to perform a stage-wise optimization of a predictive loss function. We
introduce new methods for two key components viz., site parameter estimation
and basis vector selection in any SGPC design. The proposed adaptive sampling
based basis vector selection method aids in achieving improved generalization
performance at a reduced computational cost. This method can also be used in
conjunction with any other site parameter estimation methods. It has similar
computational and storage complexities as the well-known information vector
machine and is suitable for large datasets. The hyperparameters can be
determined by optimizing a predictive loss function. The experimental results
show better generalization performance of the proposed basis vector selection
method on several benchmark datasets, particularly for relatively smaller basis
vector set sizes or on difficult datasets.
| [
{
"version": "v1",
"created": "Tue, 26 Jun 2012 15:58:21 GMT"
}
] | 2012-06-27T00:00:00 | [
[
"Sellamanickam",
"Sundararajan",
""
],
[
"Shevade",
"Shirish",
""
]
] | TITLE: An Additive Model View to Sparse Gaussian Process Classifier Design
ABSTRACT: We consider the problem of designing a sparse Gaussian process classifier
(SGPC) that generalizes well. Viewing SGPC design as constructing an additive
model like in boosting, we present an efficient and effective SGPC design
method to perform a stage-wise optimization of a predictive loss function. We
introduce new methods for two key components viz., site parameter estimation
and basis vector selection in any SGPC design. The proposed adaptive sampling
based basis vector selection method aids in achieving improved generalization
performance at a reduced computational cost. This method can also be used in
conjunction with any other site parameter estimation methods. It has similar
computational and storage complexities as the well-known information vector
machine and is suitable for large datasets. The hyperparameters can be
determined by optimizing a predictive loss function. The experimental results
show better generalization performance of the proposed basis vector selection
method on several benchmark datasets, particularly for relatively smaller basis
vector set sizes or on difficult datasets.
|
1206.6038 | Sundararajan Sellamanickam | Sundararajan Sellamanickam, Sathiya Keerthi Selvaraj | Predictive Approaches For Gaussian Process Classifier Model Selection | 21 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the problem of Gaussian process classifier (GPC)
model selection with different Leave-One-Out (LOO) Cross Validation (CV) based
optimization criteria and provide a practical algorithm using LOO predictive
distributions with such criteria to select hyperparameters. Apart from the
standard average negative logarithm of predictive probability (NLP), we also
consider smoothed versions of criteria such as F-measure and Weighted Error
Rate (WER), which are useful for handling imbalanced data. Unlike the
regression case, LOO predictive distributions for the classifier case are
intractable. We use approximate LOO predictive distributions derived from the
Expectation Propagation (EP) approximation. We conduct experiments on several
real world benchmark datasets. When the NLP criterion is used for optimizing
the hyperparameters, the predictive approaches show better or comparable NLP
generalization performance with existing GPC approaches. On the other hand,
when the F-measure criterion is used, the F-measure generalization performance
improves significantly on several datasets. Overall, the EP-based predictive
algorithm comes out as an excellent choice for GP classifier model selection
with different optimization criteria.
| [
{
"version": "v1",
"created": "Tue, 26 Jun 2012 16:19:51 GMT"
}
] | 2012-06-27T00:00:00 | [
[
"Sellamanickam",
"Sundararajan",
""
],
[
"Selvaraj",
"Sathiya Keerthi",
""
]
] | TITLE: Predictive Approaches For Gaussian Process Classifier Model Selection
ABSTRACT: In this paper we consider the problem of Gaussian process classifier (GPC)
model selection with different Leave-One-Out (LOO) Cross Validation (CV) based
optimization criteria and provide a practical algorithm using LOO predictive
distributions with such criteria to select hyperparameters. Apart from the
standard average negative logarithm of predictive probability (NLP), we also
consider smoothed versions of criteria such as F-measure and Weighted Error
Rate (WER), which are useful for handling imbalanced data. Unlike the
regression case, LOO predictive distributions for the classifier case are
intractable. We use approximate LOO predictive distributions derived from the
Expectation Propagation (EP) approximation. We conduct experiments on several
real world benchmark datasets. When the NLP criterion is used for optimizing
the hyperparameters, the predictive approaches show better or comparable NLP
generalization performance with existing GPC approaches. On the other hand,
when the F-measure criterion is used, the F-measure generalization performance
improves significantly on several datasets. Overall, the EP-based predictive
algorithm comes out as an excellent choice for GP classifier model selection
with different optimization criteria.
|
1206.5270 | Wei Li | Wei Li, David Blei, Andrew McCallum | Nonparametric Bayes Pachinko Allocation | Appears in Proceedings of the Twenty-Third Conference on Uncertainty
in Artificial Intelligence (UAI2007) | null | null | UAI-P-2007-PG-243-250 | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in topic models have explored complicated structured
distributions to represent topic correlation. For example, the pachinko
allocation model (PAM) captures arbitrary, nested, and possibly sparse
correlations between topics using a directed acyclic graph (DAG). While PAM
provides more flexibility and greater expressive power than previous models
like latent Dirichlet allocation (LDA), it is also more difficult to determine
the appropriate topic structure for a specific dataset. In this paper, we
propose a nonparametric Bayesian prior for PAM based on a variant of the
hierarchical Dirichlet process (HDP). Although the HDP can capture topic
correlations defined by nested data structure, it does not automatically
discover such correlations from unstructured data. By assuming an HDP-based
prior for PAM, we are able to learn both the number of topics and how the
topics are correlated. We evaluate our model on synthetic and real-world text
datasets, and show that nonparametric PAM achieves performance matching the
best of PAM without manually tuning the number of topics.
| [
{
"version": "v1",
"created": "Wed, 20 Jun 2012 15:04:47 GMT"
}
] | 2012-06-26T00:00:00 | [
[
"Li",
"Wei",
""
],
[
"Blei",
"David",
""
],
[
"McCallum",
"Andrew",
""
]
] | TITLE: Nonparametric Bayes Pachinko Allocation
ABSTRACT: Recent advances in topic models have explored complicated structured
distributions to represent topic correlation. For example, the pachinko
allocation model (PAM) captures arbitrary, nested, and possibly sparse
correlations between topics using a directed acyclic graph (DAG). While PAM
provides more flexibility and greater expressive power than previous models
like latent Dirichlet allocation (LDA), it is also more difficult to determine
the appropriate topic structure for a specific dataset. In this paper, we
propose a nonparametric Bayesian prior for PAM based on a variant of the
hierarchical Dirichlet process (HDP). Although the HDP can capture topic
correlations defined by nested data structure, it does not automatically
discover such correlations from unstructured data. By assuming an HDP-based
prior for PAM, we are able to learn both the number of topics and how the
topics are correlated. We evaluate our model on synthetic and real-world text
datasets, and show that nonparametric PAM achieves performance matching the
best of PAM without manually tuning the number of topics.
|
1206.5278 | Michael P. Holmes | Michael P. Holmes, Alexander G. Gray, Charles Lee Isbell | Fast Nonparametric Conditional Density Estimation | Appears in Proceedings of the Twenty-Third Conference on Uncertainty
in Artificial Intelligence (UAI2007) | null | null | UAI-P-2007-PG-175-182 | stat.ME cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conditional density estimation generalizes regression by modeling a full
density f(y|x) rather than only the expected value E(y|x). This is important
for many tasks, including handling multi-modality and generating prediction
intervals. Though fundamental and widely applicable, nonparametric conditional
density estimators have received relatively little attention from statisticians
and little or none from the machine learning community. None of that work has
been applied to greater than bivariate data, presumably due to the
computational difficulty of data-driven bandwidth selection. We describe the
double kernel conditional density estimator and derive fast dual-tree-based
algorithms for bandwidth selection using a maximum likelihood criterion. These
techniques give speedups of up to 3.8 million in our experiments, and enable
the first applications to previously intractable large multivariate datasets,
including a redshift prediction problem from the Sloan Digital Sky Survey.
| [
{
"version": "v1",
"created": "Wed, 20 Jun 2012 15:08:36 GMT"
}
] | 2012-06-26T00:00:00 | [
[
"Holmes",
"Michael P.",
""
],
[
"Gray",
"Alexander G.",
""
],
[
"Isbell",
"Charles Lee",
""
]
] | TITLE: Fast Nonparametric Conditional Density Estimation
ABSTRACT: Conditional density estimation generalizes regression by modeling a full
density f(y|x) rather than only the expected value E(y|x). This is important
for many tasks, including handling multi-modality and generating prediction
intervals. Though fundamental and widely applicable, nonparametric conditional
density estimators have received relatively little attention from statisticians
and little or none from the machine learning community. None of that work has
been applied to greater than bivariate data, presumably due to the
computational difficulty of data-driven bandwidth selection. We describe the
double kernel conditional density estimator and derive fast dual-tree-based
algorithms for bandwidth selection using a maximum likelihood criterion. These
techniques give speedups of up to 3.8 million in our experiments, and enable
the first applications to previously intractable large multivariate datasets,
including a redshift prediction problem from the Sloan Digital Sky Survey.
|
1112.3265 | Neil Zhenqiang Gong | Neil Zhenqiang Gong, Ameet Talwalkar, Lester Mackey, Ling Huang, Eui
Chul Richard Shin, Emil Stefanov, Elaine (Runting) Shi and Dawn Song | Jointly Predicting Links and Inferring Attributes using a
Social-Attribute Network (SAN) | 9 pages, 4 figures and 4 tables | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The effects of social influence and homophily suggest that both network
structure and node attribute information should inform the tasks of link
prediction and node attribute inference. Recently, Yin et al. proposed
Social-Attribute Network (SAN), an attribute-augmented social network, to
integrate network structure and node attributes to perform both link prediction
and attribute inference. They focused on generalizing the random walk with
restart algorithm to the SAN framework and showed improved performance. In this
paper, we extend the SAN framework with several leading supervised and
unsupervised link prediction algorithms and demonstrate performance improvement
for each algorithm on both link prediction and attribute inference. Moreover,
we make the novel observation that attribute inference can help inform link
prediction, i.e., link prediction accuracy is further improved by first
inferring missing attributes. We comprehensively evaluate these algorithms and
compare them with other existing algorithms using a novel, large-scale Google+
dataset, which we make publicly available.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2011 16:13:02 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2011 04:22:15 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Dec 2011 14:01:37 GMT"
},
{
"version": "v4",
"created": "Mon, 13 Feb 2012 23:44:46 GMT"
},
{
"version": "v5",
"created": "Wed, 29 Feb 2012 23:55:03 GMT"
},
{
"version": "v6",
"created": "Wed, 13 Jun 2012 02:07:42 GMT"
},
{
"version": "v7",
"created": "Thu, 14 Jun 2012 03:35:19 GMT"
},
{
"version": "v8",
"created": "Fri, 15 Jun 2012 00:57:00 GMT"
},
{
"version": "v9",
"created": "Fri, 22 Jun 2012 14:43:41 GMT"
}
] | 2012-06-25T00:00:00 | [
[
"Gong",
"Neil Zhenqiang",
""
],
[
"Talwalkar",
"Ameet",
""
],
[
"Mackey",
"Lester",
""
],
[
"Huang",
"Ling",
""
],
[
"Shin",
"Eui Chul Richard",
""
],
[
"Stefanov",
"Emil",
""
],
[
"Shi",
"Elaine",
""
],
[
"Song",
"Dawn",
""
]
] | TITLE: Jointly Predicting Links and Inferring Attributes using a
Social-Attribute Network (SAN)
ABSTRACT: The effects of social influence and homophily suggest that both network
structure and node attribute information should inform the tasks of link
prediction and node attribute inference. Recently, Yin et al. proposed
Social-Attribute Network (SAN), an attribute-augmented social network, to
integrate network structure and node attributes to perform both link prediction
and attribute inference. They focused on generalizing the random walk with
restart algorithm to the SAN framework and showed improved performance. In this
paper, we extend the SAN framework with several leading supervised and
unsupervised link prediction algorithms and demonstrate performance improvement
for each algorithm on both link prediction and attribute inference. Moreover,
we make the novel observation that attribute inference can help inform link
prediction, i.e., link prediction accuracy is further improved by first
inferring missing attributes. We comprehensively evaluate these algorithms and
compare them with other existing algorithms using a novel, large-scale Google+
dataset, which we make publicly available.
|
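The Social-Attribute Network idea from record 1112.3265 can be illustrated by putting social nodes and attribute nodes into one graph, so that an unsupervised link-prediction score picks up shared attributes as well as shared friends. The sketch below uses networkx and a plain common-neighbours score; the node names, toy edges, and choice of score are assumptions for illustration, not the authors' algorithms or their Google+ dataset.

```python
# Sketch of a Social-Attribute Network (SAN): social nodes and attribute nodes
# share one graph, so attribute links contribute to link-prediction scores.
# Toy data and the common-neighbours score are illustrative choices only.
import networkx as nx
from itertools import combinations

G = nx.Graph()
social = ["alice", "bob", "carol", "dave"]
G.add_nodes_from(social, kind="social")
G.add_nodes_from(["attr:berkeley", "attr:security"], kind="attribute")

# Social (friendship) links.
G.add_edges_from([("alice", "bob"), ("bob", "carol")])
# Attribute links: a user is connected to the attributes they declare.
G.add_edges_from([("alice", "attr:berkeley"), ("carol", "attr:berkeley"),
                  ("alice", "attr:security"), ("dave", "attr:security")])

# Score every non-adjacent social pair by common neighbours in the augmented
# graph; shared attributes raise the score even when no friends are shared.
candidates = [(u, v) for u, v in combinations(social, 2) if not G.has_edge(u, v)]
scores = {(u, v): len(list(nx.common_neighbors(G, u, v))) for u, v in candidates}
for pair, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(pair, score)
```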
1206.5102 | Stevenn Volant | Stevenn Volant, Caroline B\'erard, Marie-Laure Martin-Magniette and
St\'ephane Robin | Hidden Markov Models with mixtures as emission distributions | null | null | null | null | stat.ML cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In unsupervised classification, Hidden Markov Models (HMM) are used to
account for a neighborhood structure between observations. The emission
distributions are often supposed to belong to some parametric family. In this
paper, a semiparametric model in which the emission distributions are a mixture
of parametric distributions is proposed to gain greater flexibility. We show
that the classical EM algorithm can be adapted to infer the model parameters.
For the initialisation step, starting from a large number of components, a
hierarchical method to combine them into the hidden states is proposed. Three
likelihood-based criteria to select the components to be combined are
discussed. To estimate the number of hidden states, BIC-like criteria are
derived. A simulation study is carried out both to determine the best
combination between the merging criteria and the model selection criteria and
to evaluate the accuracy of classification. The proposed method is also
illustrated using a biological dataset from the model plant Arabidopsis
thaliana. An R package, HMMmix, is freely available on CRAN.
| [
{
"version": "v1",
"created": "Fri, 22 Jun 2012 10:24:55 GMT"
}
] | 2012-06-25T00:00:00 | [
[
"Volant",
"Stevenn",
""
],
[
"Bérard",
"Caroline",
""
],
[
"Martin-Magniette",
"Marie-Laure",
""
],
[
"Robin",
"Stéphane",
""
]
] | TITLE: Hidden Markov Models with mixtures as emission distributions
ABSTRACT: In unsupervised classification, Hidden Markov Models (HMM) are used to
account for a neighborhood structure between observations. The emission
distributions are often supposed to belong to some parametric family. In this
paper, a semiparametric model in which the emission distributions are a mixture
of parametric distributions is proposed to gain greater flexibility. We show
that the classical EM algorithm can be adapted to infer the model parameters.
For the initialisation step, starting from a large number of components, a
hierarchical method to combine them into the hidden states is proposed. Three
likelihood-based criteria to select the components to be combined are
discussed. To estimate the number of hidden states, BIC-like criteria are
derived. A simulation study is carried out both to determine the best
combination between the merging criteria and the model selection criteria and
to evaluate the accuracy of classification. The proposed method is also
illustrated using a biological dataset from the model plant Arabidopsis
thaliana. An R package, HMMmix, is freely available on CRAN.
|
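A rough analogue of the mixture-emission HMM from record 1206.5102 can be fitted with hmmlearn's GMMHMM, which uses Gaussian-mixture emissions estimated by EM. The sketch below assumes hmmlearn is installed and uses arbitrary toy data and parameter values; it does not implement the paper's hierarchical component merging, BIC-like state selection, or the HMMmix R package.

```python
# Sketch: HMM whose emission distribution in each hidden state is a Gaussian
# mixture, fitted by EM. Illustrates the model class only; it does not
# implement the paper's hierarchical merging or BIC-like state selection.
import numpy as np
from hmmlearn.hmm import GMMHMM  # assumes hmmlearn is installed

rng = np.random.default_rng(1)
# Toy 1-D sequence: two regimes, each with a bimodal emission distribution.
segment_a = rng.permutation(np.concatenate([rng.normal(-3, 0.5, 100),
                                            rng.normal(-1, 0.5, 100)]))
segment_b = rng.permutation(np.concatenate([rng.normal(2, 0.5, 100),
                                            rng.normal(4, 0.5, 100)]))
X = np.concatenate([segment_a, segment_b]).reshape(-1, 1)

model = GMMHMM(n_components=2,   # hidden states
               n_mix=2,          # mixture components per state
               covariance_type="diag",
               n_iter=200,
               random_state=0)
model.fit(X)

states = model.predict(X)        # Viterbi decoding of the hidden states
print("log-likelihood:", model.score(X))
print("first/last decoded states:", states[0], states[-1])
```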