id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1503.08348 | Ravi Ganti | Ravi Ganti and Rebecca M. Willett | Sparse Linear Regression With Missing Data | 14 pages, 7 figures | null | null | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a fast and accurate method for sparse regression in the
presence of missing data. The underlying statistical model encapsulates the
low-dimensional structure of the incomplete data matrix and the sparsity of the
regression coefficients, and the proposed algorithm jointly learns the
low-dimensional structure of the data and a linear regressor with sparse
coefficients. The proposed stochastic optimization method, Sparse Linear
Regression with Missing Data (SLRM), performs an alternating minimization
procedure and scales well with the problem size. Large deviation inequalities
shed light on the impact of the various problem-dependent parameters on the
expected squared loss of the learned regressor. Extensive simulations on both
synthetic and real datasets show that SLRM performs better than competing
algorithms in a variety of contexts.
| [
{
"version": "v1",
"created": "Sat, 28 Mar 2015 21:03:32 GMT"
}
] | 2015-03-31T00:00:00 | [
[
"Ganti",
"Ravi",
""
],
[
"Willett",
"Rebecca M.",
""
]
] | TITLE: Sparse Linear Regression With Missing Data
ABSTRACT: This paper proposes a fast and accurate method for sparse regression in the
presence of missing data. The underlying statistical model encapsulates the
low-dimensional structure of the incomplete data matrix and the sparsity of the
regression coefficients, and the proposed algorithm jointly learns the
low-dimensional structure of the data and a linear regressor with sparse
coefficients. The proposed stochastic optimization method, Sparse Linear
Regression with Missing Data (SLRM), performs an alternating minimization
procedure and scales well with the problem size. Large deviation inequalities
shed light on the impact of the various problem-dependent parameters on the
expected squared loss of the learned regressor. Extensive simulations on both
synthetic and real datasets show that SLRM performs better than competing
algorithms in a variety of contexts.
| no_new_dataset | 0.947527 |
1503.08407 | Zimu Yuan | Zimu Yuan, Zhiwei Xu | CIUV: Collaborating Information Against Unreliable Views | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many real world applications, the information of an object can be obtained
from multiple sources. The sources may provide different point of views based
on their own origin. As a consequence, conflicting pieces of information are
inevitable, which gives rise to a crucial problem: how to find the truth from
these conflicts. Many truth-finding methods have been proposed to resolve
conflicts based on information trustworthy (i.e. more appearance means more
trustworthy) as well as source reliability. However, the factor of men's
involvement, i.e., information may be falsified by men with malicious
intension, is more or less ignored in existing methods. Collaborating the
possible relationship between information's origins and men's participation are
still not studied in research. To deal with this challenge, we propose a method
-- Collaborating Information against Unreliable Views (CIUV) --- in dealing
with men's involvement for finding the truth. CIUV contains 3 stages for
interactively mitigating the impact of unreliable views, and calculate the
truth by weighting possible biases between sources. We theoretically analyze
the error bound of CIUV, and conduct intensive experiments on real dataset for
evaluation. The experimental results show that CIUV is feasible and has the
smallest error compared with other methods.
| [
{
"version": "v1",
"created": "Sun, 29 Mar 2015 09:30:58 GMT"
}
] | 2015-03-31T00:00:00 | [
[
"Yuan",
"Zimu",
""
],
[
"Xu",
"Zhiwei",
""
]
] | TITLE: CIUV: Collaborating Information Against Unreliable Views
ABSTRACT: In many real world applications, the information of an object can be obtained
from multiple sources. The sources may provide different point of views based
on their own origin. As a consequence, conflicting pieces of information are
inevitable, which gives rise to a crucial problem: how to find the truth from
these conflicts. Many truth-finding methods have been proposed to resolve
conflicts based on information trustworthy (i.e. more appearance means more
trustworthy) as well as source reliability. However, the factor of men's
involvement, i.e., information may be falsified by men with malicious
intension, is more or less ignored in existing methods. Collaborating the
possible relationship between information's origins and men's participation are
still not studied in research. To deal with this challenge, we propose a method
-- Collaborating Information against Unreliable Views (CIUV) --- in dealing
with men's involvement for finding the truth. CIUV contains 3 stages for
interactively mitigating the impact of unreliable views, and calculate the
truth by weighting possible biases between sources. We theoretically analyze
the error bound of CIUV, and conduct intensive experiments on real dataset for
evaluation. The experimental results show that CIUV is feasible and has the
smallest error compared with other methods.
| no_new_dataset | 0.946349 |
1503.08463 | S. K. Sahay | Rajendra Kumar Roul, Saransh Varshneya, Ashu Kalra, Sanjay Kumar Sahay | A Novel Modified Apriori Approach for Web Document Clustering | 11 Pages, 5 Figures | Springer, Smart Innovation Systems and Technologies, Vol. 33,
2015, p. 159-171; Proceedings of the ICCIDM, Dec. 2014 | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The traditional apriori algorithm can be used for clustering the web
documents based on the association technique of data mining. But this algorithm
has several limitations due to repeated database scans and its weak association
rule analysis. In modern world of large databases, efficiency of traditional
apriori algorithm would reduce manifolds. In this paper, we proposed a new
modified apriori approach by cutting down the repeated database scans and
improving association analysis of traditional apriori algorithm to cluster the
web documents. Further we improve those clusters by applying Fuzzy C-Means
(FCM), K-Means and Vector Space Model (VSM) techniques separately. For
experimental purpose, we use Classic3 and Classic4 datasets of Cornell
University having more than 10,000 documents and run both traditional apriori
and our modified apriori approach on it. Experimental results show that our
approach outperforms the traditional apriori algorithm in terms of database
scan and improvement on association of analysis. We found out that FCM is
better than K-Means and VSM in terms of F-measure of clusters of different
sizes.
| [
{
"version": "v1",
"created": "Sun, 29 Mar 2015 17:40:18 GMT"
}
] | 2015-03-31T00:00:00 | [
[
"Roul",
"Rajendra Kumar",
""
],
[
"Varshneya",
"Saransh",
""
],
[
"Kalra",
"Ashu",
""
],
[
"Sahay",
"Sanjay Kumar",
""
]
] | TITLE: A Novel Modified Apriori Approach for Web Document Clustering
ABSTRACT: The traditional apriori algorithm can be used for clustering the web
documents based on the association technique of data mining. But this algorithm
has several limitations due to repeated database scans and its weak association
rule analysis. In modern world of large databases, efficiency of traditional
apriori algorithm would reduce manifolds. In this paper, we proposed a new
modified apriori approach by cutting down the repeated database scans and
improving association analysis of traditional apriori algorithm to cluster the
web documents. Further we improve those clusters by applying Fuzzy C-Means
(FCM), K-Means and Vector Space Model (VSM) techniques separately. For
experimental purpose, we use Classic3 and Classic4 datasets of Cornell
University having more than 10,000 documents and run both traditional apriori
and our modified apriori approach on it. Experimental results show that our
approach outperforms the traditional apriori algorithm in terms of database
scan and improvement on association of analysis. We found out that FCM is
better than K-Means and VSM in terms of F-measure of clusters of different
sizes.
| no_new_dataset | 0.948822 |
1503.08482 | Spyros Blanas | Spyros Blanas and Surendra Byna | Towards Exascale Scientific Metadata Management | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advances in technology and computing hardware are enabling scientists from
all areas of science to produce massive amounts of data using large-scale
simulations or observational facilities. In this era of data deluge, effective
coordination between the data production and the analysis phases hinges on the
availability of metadata that describe the scientific datasets. Existing
workflow engines have been capturing a limited form of metadata to provide
provenance information about the identity and lineage of the data. However,
much of the data produced by simulations, experiments, and analyses still need
to be annotated manually in an ad hoc manner by domain scientists. Systematic
and transparent acquisition of rich metadata becomes a crucial prerequisite to
sustain and accelerate the pace of scientific innovation. Yet, ubiquitous and
domain-agnostic metadata management infrastructure that can meet the demands of
extreme-scale science is notable by its absence.
To address this gap in scientific data management research and practice, we
present our vision for an integrated approach that (1) automatically captures
and manipulates information-rich metadata while the data is being produced or
analyzed and (2) stores metadata within each dataset to permeate
metadata-oblivious processes and to query metadata through established and
standardized data access interfaces. We motivate the need for the proposed
integrated approach using applications from plasma physics, climate modeling
and neuroscience, and then discuss research challenges and possible solutions.
| [
{
"version": "v1",
"created": "Sun, 29 Mar 2015 19:13:18 GMT"
}
] | 2015-03-31T00:00:00 | [
[
"Blanas",
"Spyros",
""
],
[
"Byna",
"Surendra",
""
]
] | TITLE: Towards Exascale Scientific Metadata Management
ABSTRACT: Advances in technology and computing hardware are enabling scientists from
all areas of science to produce massive amounts of data using large-scale
simulations or observational facilities. In this era of data deluge, effective
coordination between the data production and the analysis phases hinges on the
availability of metadata that describe the scientific datasets. Existing
workflow engines have been capturing a limited form of metadata to provide
provenance information about the identity and lineage of the data. However,
much of the data produced by simulations, experiments, and analyses still need
to be annotated manually in an ad hoc manner by domain scientists. Systematic
and transparent acquisition of rich metadata becomes a crucial prerequisite to
sustain and accelerate the pace of scientific innovation. Yet, ubiquitous and
domain-agnostic metadata management infrastructure that can meet the demands of
extreme-scale science is notable by its absence.
To address this gap in scientific data management research and practice, we
present our vision for an integrated approach that (1) automatically captures
and manipulates information-rich metadata while the data is being produced or
analyzed and (2) stores metadata within each dataset to permeate
metadata-oblivious processes and to query metadata through established and
standardized data access interfaces. We motivate the need for the proposed
integrated approach using applications from plasma physics, climate modeling
and neuroscience, and then discuss research challenges and possible solutions.
| no_new_dataset | 0.946448 |
1503.08535 | Junyu Xuan | Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu, Xiangfeng Luo | Infinite Author Topic Model based on Mixed Gamma-Negative Binomial
Process | 10 pages, 5 figures, submitted to KDD conference | null | null | null | stat.ML cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incorporating the side information of text corpus, i.e., authors, time
stamps, and emotional tags, into the traditional text mining models has gained
significant interests in the area of information retrieval, statistical natural
language processing, and machine learning. One branch of these works is the
so-called Author Topic Model (ATM), which incorporates the authors's interests
as side information into the classical topic model. However, the existing ATM
needs to predefine the number of topics, which is difficult and inappropriate
in many real-world settings. In this paper, we propose an Infinite Author Topic
(IAT) model to resolve this issue. Instead of assigning a discrete probability
on fixed number of topics, we use a stochastic process to determine the number
of topics from the data itself. To be specific, we extend a gamma-negative
binomial process to three levels in order to capture the
author-document-keyword hierarchical structure. Furthermore, each document is
assigned a mixed gamma process that accounts for the multi-author's
contribution towards this document. An efficient Gibbs sampling inference
algorithm with each conditional distribution being closed-form is developed for
the IAT model. Experiments on several real-world datasets show the capabilities
of our IAT model to learn the hidden topics, authors' interests on these topics
and the number of topics simultaneously.
| [
{
"version": "v1",
"created": "Mon, 30 Mar 2015 05:03:37 GMT"
}
] | 2015-03-31T00:00:00 | [
[
"Xuan",
"Junyu",
""
],
[
"Lu",
"Jie",
""
],
[
"Zhang",
"Guangquan",
""
],
[
"Da Xu",
"Richard Yi",
""
],
[
"Luo",
"Xiangfeng",
""
]
] | TITLE: Infinite Author Topic Model based on Mixed Gamma-Negative Binomial
Process
ABSTRACT: Incorporating the side information of text corpus, i.e., authors, time
stamps, and emotional tags, into the traditional text mining models has gained
significant interests in the area of information retrieval, statistical natural
language processing, and machine learning. One branch of these works is the
so-called Author Topic Model (ATM), which incorporates the authors's interests
as side information into the classical topic model. However, the existing ATM
needs to predefine the number of topics, which is difficult and inappropriate
in many real-world settings. In this paper, we propose an Infinite Author Topic
(IAT) model to resolve this issue. Instead of assigning a discrete probability
on fixed number of topics, we use a stochastic process to determine the number
of topics from the data itself. To be specific, we extend a gamma-negative
binomial process to three levels in order to capture the
author-document-keyword hierarchical structure. Furthermore, each document is
assigned a mixed gamma process that accounts for the multi-author's
contribution towards this document. An efficient Gibbs sampling inference
algorithm with each conditional distribution being closed-form is developed for
the IAT model. Experiments on several real-world datasets show the capabilities
of our IAT model to learn the hidden topics, authors' interests on these topics
and the number of topics simultaneously.
| no_new_dataset | 0.951233 |
1503.08542 | Junyu Xuan | Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu, Xiangfeng Luo | Nonparametric Relational Topic Models through Dependent Gamma Processes | null | null | null | null | stat.ML cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional Relational Topic Models provide a way to discover the hidden
topics from a document network. Many theoretical and practical tasks, such as
dimensional reduction, document clustering, link prediction, benefit from this
revealed knowledge. However, existing relational topic models are based on an
assumption that the number of hidden topics is known in advance, and this is
impractical in many real-world applications. Therefore, in order to relax this
assumption, we propose a nonparametric relational topic model in this paper.
Instead of using fixed-dimensional probability distributions in its generative
model, we use stochastic processes. Specifically, a gamma process is assigned
to each document, which represents the topic interest of this document.
Although this method provides an elegant solution, it brings additional
challenges when mathematically modeling the inherent network structure of
typical document network, i.e., two spatially closer documents tend to have
more similar topics. Furthermore, we require that the topics are shared by all
the documents. In order to resolve these challenges, we use a subsampling
strategy to assign each document a different gamma process from the global
gamma process, and the subsampling probabilities of documents are assigned with
a Markov Random Field constraint that inherits the document network structure.
Through the designed posterior inference algorithm, we can discover the hidden
topics and its number simultaneously. Experimental results on both synthetic
and real-world network datasets demonstrate the capabilities of learning the
hidden topics and, more importantly, the number of topics.
| [
{
"version": "v1",
"created": "Mon, 30 Mar 2015 05:40:41 GMT"
}
] | 2015-03-31T00:00:00 | [
[
"Xuan",
"Junyu",
""
],
[
"Lu",
"Jie",
""
],
[
"Zhang",
"Guangquan",
""
],
[
"Da Xu",
"Richard Yi",
""
],
[
"Luo",
"Xiangfeng",
""
]
] | TITLE: Nonparametric Relational Topic Models through Dependent Gamma Processes
ABSTRACT: Traditional Relational Topic Models provide a way to discover the hidden
topics from a document network. Many theoretical and practical tasks, such as
dimensional reduction, document clustering, link prediction, benefit from this
revealed knowledge. However, existing relational topic models are based on an
assumption that the number of hidden topics is known in advance, and this is
impractical in many real-world applications. Therefore, in order to relax this
assumption, we propose a nonparametric relational topic model in this paper.
Instead of using fixed-dimensional probability distributions in its generative
model, we use stochastic processes. Specifically, a gamma process is assigned
to each document, which represents the topic interest of this document.
Although this method provides an elegant solution, it brings additional
challenges when mathematically modeling the inherent network structure of
typical document network, i.e., two spatially closer documents tend to have
more similar topics. Furthermore, we require that the topics are shared by all
the documents. In order to resolve these challenges, we use a subsampling
strategy to assign each document a different gamma process from the global
gamma process, and the subsampling probabilities of documents are assigned with
a Markov Random Field constraint that inherits the document network structure.
Through the designed posterior inference algorithm, we can discover the hidden
topics and its number simultaneously. Experimental results on both synthetic
and real-world network datasets demonstrate the capabilities of learning the
hidden topics and, more importantly, the number of topics.
| no_new_dataset | 0.951953 |
1503.08581 | Ioannis Partalas | Ioannis Partalas, Aris Kosmopoulos, Nicolas Baskiotis, Thierry
Artieres, George Paliouras, Eric Gaussier, Ion Androutsopoulos, Massih-Reza
Amini, Patrick Galinari | LSHTC: A Benchmark for Large-Scale Text Classification | null | null | null | null | cs.IR cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LSHTC is a series of challenges which aims to assess the performance of
classification systems in large-scale classification in a a large number of
classes (up to hundreds of thousands). This paper describes the dataset that
have been released along the LSHTC series. The paper details the construction
of the datsets and the design of the tracks as well as the evaluation measures
that we implemented and a quick overview of the results. All of these datasets
are available online and runs may still be submitted on the online server of
the challenges.
| [
{
"version": "v1",
"created": "Mon, 30 Mar 2015 08:03:47 GMT"
}
] | 2015-03-31T00:00:00 | [
[
"Partalas",
"Ioannis",
""
],
[
"Kosmopoulos",
"Aris",
""
],
[
"Baskiotis",
"Nicolas",
""
],
[
"Artieres",
"Thierry",
""
],
[
"Paliouras",
"George",
""
],
[
"Gaussier",
"Eric",
""
],
[
"Androutsopoulos",
"Ion",
""
],
[
"Amini",
"Massih-Reza",
""
],
[
"Galinari",
"Patrick",
""
]
] | TITLE: LSHTC: A Benchmark for Large-Scale Text Classification
ABSTRACT: LSHTC is a series of challenges which aims to assess the performance of
classification systems in large-scale classification in a a large number of
classes (up to hundreds of thousands). This paper describes the dataset that
have been released along the LSHTC series. The paper details the construction
of the datsets and the design of the tracks as well as the evaluation measures
that we implemented and a quick overview of the results. All of these datasets
are available online and runs may still be submitted on the online server of
the challenges.
| no_new_dataset | 0.849222 |
1503.08639 | Rapha\"el Li\'egeois | Rapha\"el Li\'egeois, Bamdev Mishra, Mattia Zorzi, Rodolphe Sepulchre | Sparse plus low-rank autoregressive identification in neuroimaging time
series | 6 pages paper submitted to CDC 2015 | null | null | null | cs.LG cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of identifying multivariate autoregressive
(AR) sparse plus low-rank graphical models. Based on the corresponding problem
formulation recently presented, we use the alternating direction method of
multipliers (ADMM) to efficiently solve it and scale it to sizes encountered in
neuroimaging applications. We apply this decomposition on synthetic and real
neuroimaging datasets with a specific focus on the information encoded in the
low-rank structure of our model. In particular, we illustrate that this
information captures the spatio-temporal structure of the original data,
generalizing classical component analysis approaches.
| [
{
"version": "v1",
"created": "Mon, 30 Mar 2015 11:11:57 GMT"
}
] | 2015-03-31T00:00:00 | [
[
"Liégeois",
"Raphaël",
""
],
[
"Mishra",
"Bamdev",
""
],
[
"Zorzi",
"Mattia",
""
],
[
"Sepulchre",
"Rodolphe",
""
]
] | TITLE: Sparse plus low-rank autoregressive identification in neuroimaging time
series
ABSTRACT: This paper considers the problem of identifying multivariate autoregressive
(AR) sparse plus low-rank graphical models. Based on the corresponding problem
formulation recently presented, we use the alternating direction method of
multipliers (ADMM) to efficiently solve it and scale it to sizes encountered in
neuroimaging applications. We apply this decomposition on synthetic and real
neuroimaging datasets with a specific focus on the information encoded in the
low-rank structure of our model. In particular, we illustrate that this
information captures the spatio-temporal structure of the original data,
generalizing classical component analysis approaches.
| no_new_dataset | 0.948632 |
1406.0288 | Radu Horaud P | Kaustubh Kulkarni, Georgios Evangelidis, Jan Cech and Radu Horaud | Continuous Action Recognition Based on Sequence Alignment | null | International Journal of Computer Vision 112(1), 90-114, 2015 | 10.1007/s11263-014-0758-9 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continuous action recognition is more challenging than isolated recognition
because classification and segmentation must be simultaneously carried out. We
build on the well known dynamic time warping (DTW) framework and devise a novel
visual alignment technique, namely dynamic frame warping (DFW), which performs
isolated recognition based on per-frame representation of videos, and on
aligning a test sequence with a model sequence. Moreover, we propose two
extensions which enable to perform recognition concomitant with segmentation,
namely one-pass DFW and two-pass DFW. These two methods have their roots in the
domain of continuous recognition of speech and, to the best of our knowledge,
their extension to continuous visual action recognition has been overlooked. We
test and illustrate the proposed techniques with a recently released dataset
(RAVEL) and with two public-domain datasets widely used in action recognition
(Hollywood-1 and Hollywood-2). We also compare the performances of the proposed
isolated and continuous recognition algorithms with several recently published
methods.
| [
{
"version": "v1",
"created": "Mon, 2 Jun 2014 08:21:27 GMT"
}
] | 2015-03-30T00:00:00 | [
[
"Kulkarni",
"Kaustubh",
""
],
[
"Evangelidis",
"Georgios",
""
],
[
"Cech",
"Jan",
""
],
[
"Horaud",
"Radu",
""
]
] | TITLE: Continuous Action Recognition Based on Sequence Alignment
ABSTRACT: Continuous action recognition is more challenging than isolated recognition
because classification and segmentation must be simultaneously carried out. We
build on the well known dynamic time warping (DTW) framework and devise a novel
visual alignment technique, namely dynamic frame warping (DFW), which performs
isolated recognition based on per-frame representation of videos, and on
aligning a test sequence with a model sequence. Moreover, we propose two
extensions which enable to perform recognition concomitant with segmentation,
namely one-pass DFW and two-pass DFW. These two methods have their roots in the
domain of continuous recognition of speech and, to the best of our knowledge,
their extension to continuous visual action recognition has been overlooked. We
test and illustrate the proposed techniques with a recently released dataset
(RAVEL) and with two public-domain datasets widely used in action recognition
(Hollywood-1 and Hollywood-2). We also compare the performances of the proposed
isolated and continuous recognition algorithms with several recently published
methods.
| new_dataset | 0.964855 |
1503.07884 | Yongxin Yang | Yanwei Fu, Yongxin Yang, Timothy M. Hospedales, Tao Xiang and Shaogang
Gong | Transductive Multi-class and Multi-label Zero-shot Learning | 4 pages, 4 figures, ECCV 2014 Workshop on Parts and Attributes | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, zero-shot learning (ZSL) has received increasing interest. The key
idea underpinning existing ZSL approaches is to exploit knowledge transfer via
an intermediate-level semantic representation which is assumed to be shared
between the auxiliary and target datasets, and is used to bridge between these
domains for knowledge transfer. The semantic representation used in existing
approaches varies from visual attributes to semantic word vectors and semantic
relatedness. However, the overall pipeline is similar: a projection mapping
low-level features to the semantic representation is learned from the auxiliary
dataset by either classification or regression models and applied directly to
map each instance into the same semantic representation space where a zero-shot
classifier is used to recognise the unseen target class instances with a single
known 'prototype' of each target class. In this paper we discuss two related
lines of work improving the conventional approach: exploiting transductive
learning ZSL, and generalising ZSL to the multi-label case.
| [
{
"version": "v1",
"created": "Thu, 26 Mar 2015 20:07:37 GMT"
}
] | 2015-03-30T00:00:00 | [
[
"Fu",
"Yanwei",
""
],
[
"Yang",
"Yongxin",
""
],
[
"Hospedales",
"Timothy M.",
""
],
[
"Xiang",
"Tao",
""
],
[
"Gong",
"Shaogang",
""
]
] | TITLE: Transductive Multi-class and Multi-label Zero-shot Learning
ABSTRACT: Recently, zero-shot learning (ZSL) has received increasing interest. The key
idea underpinning existing ZSL approaches is to exploit knowledge transfer via
an intermediate-level semantic representation which is assumed to be shared
between the auxiliary and target datasets, and is used to bridge between these
domains for knowledge transfer. The semantic representation used in existing
approaches varies from visual attributes to semantic word vectors and semantic
relatedness. However, the overall pipeline is similar: a projection mapping
low-level features to the semantic representation is learned from the auxiliary
dataset by either classification or regression models and applied directly to
map each instance into the same semantic representation space where a zero-shot
classifier is used to recognise the unseen target class instances with a single
known 'prototype' of each target class. In this paper we discuss two related
lines of work improving the conventional approach: exploiting transductive
learning ZSL, and generalising ZSL to the multi-label case.
| no_new_dataset | 0.942082 |
1503.07989 | Naveed Akhtar Mr. | Naveed Akhtar, Faisal Shafait, Ajmal Mian | Discriminative Bayesian Dictionary Learning for Classification | 15 pages | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a Bayesian approach to learn discriminative dictionaries for
sparse representation of data. The proposed approach infers probability
distributions over the atoms of a discriminative dictionary using a Beta
Process. It also computes sets of Bernoulli distributions that associate class
labels to the learned dictionary atoms. This association signifies the
selection probabilities of the dictionary atoms in the expansion of
class-specific data. Furthermore, the non-parametric character of the proposed
approach allows it to infer the correct size of the dictionary. We exploit the
aforementioned Bernoulli distributions in separately learning a linear
classifier. The classifier uses the same hierarchical Bayesian model as the
dictionary, which we present along the analytical inference solution for Gibbs
sampling. For classification, a test instance is first sparsely encoded over
the learned dictionary and the codes are fed to the classifier. We performed
experiments for face and action recognition; and object and scene-category
classification using five public datasets and compared the results with
state-of-the-art discriminative sparse representation approaches. Experiments
show that the proposed Bayesian approach consistently outperforms the existing
approaches.
| [
{
"version": "v1",
"created": "Fri, 27 Mar 2015 08:36:15 GMT"
}
] | 2015-03-30T00:00:00 | [
[
"Akhtar",
"Naveed",
""
],
[
"Shafait",
"Faisal",
""
],
[
"Mian",
"Ajmal",
""
]
] | TITLE: Discriminative Bayesian Dictionary Learning for Classification
ABSTRACT: We propose a Bayesian approach to learn discriminative dictionaries for
sparse representation of data. The proposed approach infers probability
distributions over the atoms of a discriminative dictionary using a Beta
Process. It also computes sets of Bernoulli distributions that associate class
labels to the learned dictionary atoms. This association signifies the
selection probabilities of the dictionary atoms in the expansion of
class-specific data. Furthermore, the non-parametric character of the proposed
approach allows it to infer the correct size of the dictionary. We exploit the
aforementioned Bernoulli distributions in separately learning a linear
classifier. The classifier uses the same hierarchical Bayesian model as the
dictionary, which we present along the analytical inference solution for Gibbs
sampling. For classification, a test instance is first sparsely encoded over
the learned dictionary and the codes are fed to the classifier. We performed
experiments for face and action recognition; and object and scene-category
classification using five public datasets and compared the results with
state-of-the-art discriminative sparse representation approaches. Experiments
show that the proposed Bayesian approach consistently outperforms the existing
approaches.
| no_new_dataset | 0.947235 |
1503.08081 | Manfred Poechacker DI | Manfred P\"ochacker, Dominik Egarter, Wilfried Elmenreich | Proficiency of Power Values for Load Disaggregation | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Load disaggregation techniques infer the operation of different power
consuming devices from a single measurement point that records the total power
draw over time. Thus, a device consuming power at the moment can be understood
as information encoded in the power draw. However, similar power draws or
similar combinations of power draws limit the ability to detect the currently
active device set. We present an information coding perspective of load
disaggregation to enable a better understanding of this process and to support
its future improvement. In typical cases of quantity and type of devices and
their respective power consumption, not all possible device configurations can
be mapped to distinguishable power values. We introduce the term of proficiency
to describe the suitability of a device set for load disaggregation. We provide
the notion and calculation of entropy of initial device states, mutual
information of power values and the resulting uncertainty coefficient or
proficiency. We show that the proficiency is highly dependent from the device
running probability especially for devices with multiple states of power
consumption. The application of the concept is demonstrated by exemplary
artificial data as well as with actual power consumption data from real-world
power draw datasets.
| [
{
"version": "v1",
"created": "Fri, 27 Mar 2015 14:01:07 GMT"
}
] | 2015-03-30T00:00:00 | [
[
"Pöchacker",
"Manfred",
""
],
[
"Egarter",
"Dominik",
""
],
[
"Elmenreich",
"Wilfried",
""
]
] | TITLE: Proficiency of Power Values for Load Disaggregation
ABSTRACT: Load disaggregation techniques infer the operation of different power
consuming devices from a single measurement point that records the total power
draw over time. Thus, a device consuming power at the moment can be understood
as information encoded in the power draw. However, similar power draws or
similar combinations of power draws limit the ability to detect the currently
active device set. We present an information coding perspective of load
disaggregation to enable a better understanding of this process and to support
its future improvement. In typical cases of quantity and type of devices and
their respective power consumption, not all possible device configurations can
be mapped to distinguishable power values. We introduce the term of proficiency
to describe the suitability of a device set for load disaggregation. We provide
the notion and calculation of entropy of initial device states, mutual
information of power values and the resulting uncertainty coefficient or
proficiency. We show that the proficiency is highly dependent from the device
running probability especially for devices with multiple states of power
consumption. The application of the concept is demonstrated by exemplary
artificial data as well as with actual power consumption data from real-world
power draw datasets.
| no_new_dataset | 0.946646 |
1503.05571 | Guillaume Alain | Guillaume Alain, Yoshua Bengio, Li Yao, Jason Yosinski, Eric
Thibodeau-Laufer, Saizheng Zhang, Pascal Vincent | GSNs : Generative Stochastic Networks | arXiv admin note: substantial text overlap with arXiv:1306.1091 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel training principle for probabilistic models that is an
alternative to maximum likelihood. The proposed Generative Stochastic Networks
(GSN) framework is based on learning the transition operator of a Markov chain
whose stationary distribution estimates the data distribution. Because the
transition distribution is a conditional distribution generally involving a
small move, it has fewer dominant modes, being unimodal in the limit of small
moves. Thus, it is easier to learn, more like learning to perform supervised
function approximation, with gradients that can be obtained by
back-propagation. The theorems provided here generalize recent work on the
probabilistic interpretation of denoising auto-encoders and provide an
interesting justification for dependency networks and generalized
pseudolikelihood (along with defining an appropriate joint distribution and
sampling mechanism, even when the conditionals are not consistent). We study
how GSNs can be used with missing inputs and can be used to sample subsets of
variables given the rest. Successful experiments are conducted, validating
these theoretical results, on two image datasets and with a particular
architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows
training to proceed with backprop, without the need for layerwise pretraining.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2015 20:06:07 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Mar 2015 16:44:52 GMT"
}
] | 2015-03-29T00:00:00 | [
[
"Alain",
"Guillaume",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Yao",
"Li",
""
],
[
"Yosinski",
"Jason",
""
],
[
"Thibodeau-Laufer",
"Eric",
""
],
[
"Zhang",
"Saizheng",
""
],
[
"Vincent",
"Pascal",
""
]
] | TITLE: GSNs : Generative Stochastic Networks
ABSTRACT: We introduce a novel training principle for probabilistic models that is an
alternative to maximum likelihood. The proposed Generative Stochastic Networks
(GSN) framework is based on learning the transition operator of a Markov chain
whose stationary distribution estimates the data distribution. Because the
transition distribution is a conditional distribution generally involving a
small move, it has fewer dominant modes, being unimodal in the limit of small
moves. Thus, it is easier to learn, more like learning to perform supervised
function approximation, with gradients that can be obtained by
back-propagation. The theorems provided here generalize recent work on the
probabilistic interpretation of denoising auto-encoders and provide an
interesting justification for dependency networks and generalized
pseudolikelihood (along with defining an appropriate joint distribution and
sampling mechanism, even when the conditionals are not consistent). We study
how GSNs can be used with missing inputs and can be used to sample subsets of
variables given the rest. Successful experiments are conducted, validating
these theoretical results, on two image datasets and with a particular
architecture that mimics the Deep Boltzmann Machine Gibbs sampler but allows
training to proceed with backprop, without the need for layerwise pretraining.
| no_new_dataset | 0.948202 |
1503.06268 | Tanmoy Chakraborty | Tanmoy Chakraborty, Suhansanu Kumar, Pawan Goyal, Niloy Ganguly,
Animesh Mukherjee | On the categorization of scientific citation profiles in computer
sciences | 11 pages, 10 figures, Accepted in Communications of the ACM (CACM),
2015. arXiv admin note: text overlap with arXiv:1206.0108 by other authors | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common consensus in the literature is that the citation profile of
published articles in general follows a universal pattern - an initial growth
in the number of citations within the first two to three years after
publication followed by a steady peak of one to two years and then a final
decline over the rest of the lifetime of the article. This observation has long
been the underlying heuristic in determining major bibliometric factors such as
the quality of a publication, the growth of scientific communities, impact
factor of publication venues etc. In this paper, we gather and analyze a
massive dataset of scientific papers from the computer science domain and
notice that the citation count of the articles over the years follows a
remarkably diverse set of patterns - a profile with an initial peak (PeakInit),
with distinct multiple peaks (PeakMul), with a peak late in time (PeakLate),
that is monotonically decreasing (MonDec), that is monotonically increasing
(MonIncr) and that can not be categorized into any of the above (Oth). We
conduct a thorough experiment to investigate several important characteristics
of these categories such as how individual categories attract citations, how
the categorization is influenced by the year and the venue of publication of
papers, how each category is affected by self-citations, the stability of the
categories over time, and how much each of these categories contribute to the
core of the network. Further, we show that the traditional preferential
attachment models fail to explain these citation profiles. Therefore, we
propose a novel dynamic growth model that takes both the preferential
attachment and the aging factor into account in order to replicate the
real-world behavior of various citation profiles. We believe that this paper
opens the scope for a serious re-investigation of the existing bibliometric
indices for scientific research.
| [
{
"version": "v1",
"created": "Sat, 21 Mar 2015 06:03:44 GMT"
}
] | 2015-03-29T00:00:00 | [
[
"Chakraborty",
"Tanmoy",
""
],
[
"Kumar",
"Suhansanu",
""
],
[
"Goyal",
"Pawan",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"Mukherjee",
"Animesh",
""
]
] | TITLE: On the categorization of scientific citation profiles in computer
sciences
ABSTRACT: A common consensus in the literature is that the citation profile of
published articles in general follows a universal pattern - an initial growth
in the number of citations within the first two to three years after
publication followed by a steady peak of one to two years and then a final
decline over the rest of the lifetime of the article. This observation has long
been the underlying heuristic in determining major bibliometric factors such as
the quality of a publication, the growth of scientific communities, impact
factor of publication venues etc. In this paper, we gather and analyze a
massive dataset of scientific papers from the computer science domain and
notice that the citation count of the articles over the years follows a
remarkably diverse set of patterns - a profile with an initial peak (PeakInit),
with distinct multiple peaks (PeakMul), with a peak late in time (PeakLate),
that is monotonically decreasing (MonDec), that is monotonically increasing
(MonIncr) and that can not be categorized into any of the above (Oth). We
conduct a thorough experiment to investigate several important characteristics
of these categories such as how individual categories attract citations, how
the categorization is influenced by the year and the venue of publication of
papers, how each category is affected by self-citations, the stability of the
categories over time, and how much each of these categories contribute to the
core of the network. Further, we show that the traditional preferential
attachment models fail to explain these citation profiles. Therefore, we
propose a novel dynamic growth model that takes both the preferential
attachment and the aging factor into account in order to replicate the
real-world behavior of various citation profiles. We believe that this paper
opens the scope for a serious re-investigation of the existing bibliometric
indices for scientific research.
| no_new_dataset | 0.944587 |
1503.06608 | Lakshmi Devasena C | Lakshmi Devasena C | Proficiency Comparison of LADTree and REPTree Classifiers for Credit
Risk Forecast | arXiv admin note: text overlap with arXiv:1310.5963 by other authors | International Journal on Computational Sciences & Applications
(IJCSA) Vol.5, No.1, February 2015, pp. 39 - 50 | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the Credit Defaulter is a perilous task of Financial Industries
like Banks. Ascertaining non-payer before giving loan is a significant and
conflict-ridden task of the Banker. Classification techniques are the better
choice for predictive analysis like finding the claimant, whether he/she is an
unpretentious customer or a cheat. Defining the outstanding classifier is a
risky assignment for any industrialist like a banker. This allow computer
science researchers to drill down efficient research works through evaluating
different classifiers and finding out the best classifier for such predictive
problems. This research work investigates the productivity of LADTree
Classifier and REPTree Classifier for the credit risk prediction and compares
their fitness through various measures. German credit dataset has been taken
and used to predict the credit risk with a help of open source machine learning
tool.
| [
{
"version": "v1",
"created": "Mon, 23 Mar 2015 11:47:05 GMT"
}
] | 2015-03-29T00:00:00 | [
[
"C",
"Lakshmi Devasena",
""
]
] | TITLE: Proficiency Comparison of LADTree and REPTree Classifiers for Credit
Risk Forecast
ABSTRACT: Predicting the Credit Defaulter is a perilous task of Financial Industries
like Banks. Ascertaining non-payer before giving loan is a significant and
conflict-ridden task of the Banker. Classification techniques are the better
choice for predictive analysis like finding the claimant, whether he/she is an
unpretentious customer or a cheat. Defining the outstanding classifier is a
risky assignment for any industrialist like a banker. This allow computer
science researchers to drill down efficient research works through evaluating
different classifiers and finding out the best classifier for such predictive
problems. This research work investigates the productivity of LADTree
Classifier and REPTree Classifier for the credit risk prediction and compares
their fitness through various measures. German credit dataset has been taken
and used to predict the credit risk with a help of open source machine learning
tool.
| no_new_dataset | 0.948775 |
1503.07783 | Faraz Saeedan | Faraz Saeedan, Barbara Caputo | Towards Learning free Naive Bayes Nearest Neighbor-based Domain
Adaptation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As of today, object categorization algorithms are not able to achieve the
level of robustness and generality necessary to work reliably in the real
world. Even the most powerful convolutional neural network we can train fails
to perform satisfactorily when trained and tested on data from different
databases. This issue, known as domain adaptation and/or dataset bias in the
literature, is due to a distribution mismatch between data collections. Methods
addressing it go from max-margin classifiers to learning how to modify the
features and obtain a more robust representation. Recent work showed that by
casting the problem into the image-to-class recognition framework, the domain
adaptation problem is significantly alleviated \cite{danbnn}. Here we follow
this approach, and show how a very simple, learning free Naive Bayes Nearest
Neighbor (NBNN)-based domain adaptation algorithm can significantly alleviate
the distribution mismatch among source and target data, especially when the
number of classes and the number of sources grow. Experiments on standard
benchmarks used in the literature show that our approach (a) is competitive
with the current state of the art on small scale problems, and (b) achieves the
current state of the art as the number of classes and sources grows, with
minimal computational requirements.
| [
{
"version": "v1",
"created": "Thu, 26 Mar 2015 16:55:19 GMT"
}
] | 2015-03-27T00:00:00 | [
[
"Saeedan",
"Faraz",
""
],
[
"Caputo",
"Barbara",
""
]
] | TITLE: Towards Learning free Naive Bayes Nearest Neighbor-based Domain
Adaptation
ABSTRACT: As of today, object categorization algorithms are not able to achieve the
level of robustness and generality necessary to work reliably in the real
world. Even the most powerful convolutional neural network we can train fails
to perform satisfactorily when trained and tested on data from different
databases. This issue, known as domain adaptation and/or dataset bias in the
literature, is due to a distribution mismatch between data collections. Methods
addressing it go from max-margin classifiers to learning how to modify the
features and obtain a more robust representation. Recent work showed that by
casting the problem into the image-to-class recognition framework, the domain
adaptation problem is significantly alleviated \cite{danbnn}. Here we follow
this approach, and show how a very simple, learning free Naive Bayes Nearest
Neighbor (NBNN)-based domain adaptation algorithm can significantly alleviate
the distribution mismatch among source and target data, especially when the
number of classes and the number of sources grow. Experiments on standard
benchmarks used in the literature show that our approach (a) is competitive
with the current state of the art on small scale problems, and (b) achieves the
current state of the art as the number of classes and sources grows, with
minimal computational requirements.
| no_new_dataset | 0.949949 |
1503.07790 | Yongxin Yang | Yanwei Fu, Yongxin Yang, Tim Hospedales, Tao Xiang and Shaogang Gong | Transductive Multi-label Zero-shot Learning | 12 pages, 6 figures, Accepted to BMVC 2014 (oral) | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot learning has received increasing interest as a means to alleviate
the often prohibitive expense of annotating training data for large scale
recognition problems. These methods have achieved great success via learning
intermediate semantic representations in the form of attributes and more
recently, semantic word vectors. However, they have thus far been constrained
to the single-label case, in contrast to the growing popularity and importance
of more realistic multi-label data. In this paper, for the first time, we
investigate and formalise a general framework for multi-label zero-shot
learning, addressing the unique challenge therein: how to exploit multi-label
correlation at test time with no training data for those classes? In
particular, we propose (1) a multi-output deep regression model to project an
image into a semantic word space, which explicitly exploits the correlations in
the intermediate semantic layer of word vectors; (2) a novel zero-shot learning
algorithm for multi-label data that exploits the unique compositionality
property of semantic word vector representations; and (3) a transductive
learning strategy to enable the regression model learned from seen classes to
generalise well to unseen classes. Our zero-shot learning experiments on a
number of standard multi-label datasets demonstrate that our method outperforms
a variety of baselines.
| [
{
"version": "v1",
"created": "Thu, 26 Mar 2015 17:12:34 GMT"
}
] | 2015-03-27T00:00:00 | [
[
"Fu",
"Yanwei",
""
],
[
"Yang",
"Yongxin",
""
],
[
"Hospedales",
"Tim",
""
],
[
"Xiang",
"Tao",
""
],
[
"Gong",
"Shaogang",
""
]
] | TITLE: Transductive Multi-label Zero-shot Learning
ABSTRACT: Zero-shot learning has received increasing interest as a means to alleviate
the often prohibitive expense of annotating training data for large scale
recognition problems. These methods have achieved great success via learning
intermediate semantic representations in the form of attributes and more
recently, semantic word vectors. However, they have thus far been constrained
to the single-label case, in contrast to the growing popularity and importance
of more realistic multi-label data. In this paper, for the first time, we
investigate and formalise a general framework for multi-label zero-shot
learning, addressing the unique challenge therein: how to exploit multi-label
correlation at test time with no training data for those classes? In
particular, we propose (1) a multi-output deep regression model to project an
image into a semantic word space, which explicitly exploits the correlations in
the intermediate semantic layer of word vectors; (2) a novel zero-shot learning
algorithm for multi-label data that exploits the unique compositionality
property of semantic word vector representations; and (3) a transductive
learning strategy to enable the regression model learned from seen classes to
generalise well to unseen classes. Our zero-shot learning experiments on a
number of standard multi-label datasets demonstrate that our method outperforms
a variety of baselines.
| no_new_dataset | 0.946597 |
1503.07852 | David Yaron | Matteus Tanha, Haichen Li, Shiva Kaul, Alexander Cappiello, Geoffrey
J. Gordon, David J. Yaron | Embedding parameters in ab initio theory to develop approximations based
on molecular similarity | Main text: 16 pages, 6 figures, 6 tables; Supporting information: 5
pages, 9 tables | null | null | null | physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A means to take advantage of molecular similarity to lower the computational
cost of electronic structure theory is explored, in which parameters are
embedded into a low-cost, low-level (LL) ab initio model and adjusted to obtain
agreement with results from a higher-level (HL) ab initio model. A parametrized
LL (pLL) model is created by multiplying selected matrix elements of the
Hamiltonian operators by scaling factors that depend on element types. Various
schemes for applying the scaling factors are compared, along with the impact of
making the scaling factors linear functions of variables related to bond
lengths, atomic charges, and bond orders. The models are trained on ethane and
ethylene, substituted with -NH2, -OH and -F, and tested on substituted propane,
propylene and t-butane. Training and test datasets are created by distorting
the molecular geometries and applying uniform electric fields. The fitted
properties include changes in total energy arising from geometric distortions
or applied fields, and frontier orbital energies. The impacts of including
additional training data, such as decomposition of the energy by operator or
interaction of the electron density with external charges, are also explored.
The best-performing model forms reduce the root mean square (RMS) difference
between the HL and LL energy predictions by over 85% on the training data and
over 75% on the test data. The argument is made that this approach has the
potential to provide a flexible and systematically-improvable means to take
advantage of molecular similarity in quantum chemistry.
| [
{
"version": "v1",
"created": "Thu, 26 Mar 2015 19:58:02 GMT"
}
] | 2015-03-27T00:00:00 | [
[
"Tanha",
"Matteus",
""
],
[
"Li",
"Haichen",
""
],
[
"Kaul",
"Shiva",
""
],
[
"Cappiello",
"Alexander",
""
],
[
"Gordon",
"Geoffrey J.",
""
],
[
"Yaron",
"David J.",
""
]
] | TITLE: Embedding parameters in ab initio theory to develop approximations based
on molecular similarity
ABSTRACT: A means to take advantage of molecular similarity to lower the computational
cost of electronic structure theory is explored, in which parameters are
embedded into a low-cost, low-level (LL) ab initio model and adjusted to obtain
agreement with results from a higher-level (HL) ab initio model. A parametrized
LL (pLL) model is created by multiplying selected matrix elements of the
Hamiltonian operators by scaling factors that depend on element types. Various
schemes for applying the scaling factors are compared, along with the impact of
making the scaling factors linear functions of variables related to bond
lengths, atomic charges, and bond orders. The models are trained on ethane and
ethylene, substituted with -NH2, -OH and -F, and tested on substituted propane,
propylene and t-butane. Training and test datasets are created by distorting
the molecular geometries and applying uniform electric fields. The fitted
properties include changes in total energy arising from geometric distortions
or applied fields, and frontier orbital energies. The impacts of including
additional training data, such as decomposition of the energy by operator or
interaction of the electron density with external charges, are also explored.
The best-performing model forms reduce the root mean square (RMS) difference
between the HL and LL energy predictions by over 85% on the training data and
over 75% on the test data. The argument is made that this approach has the
potential to provide a flexible and systematically-improvable means to take
advantage of molecular similarity in quantum chemistry.
| no_new_dataset | 0.952086 |
1503.05768 | Yunjin Chen | Yunjin Chen, Wei Yu, Thomas Pock | On learning optimized reaction diffusion processes for effective image
restoration | 9 pages, 3 figures, 3 tables. CVPR2015 oral presentation together
with the supplemental material of 13 pages, 8 pages (Notes on diffusion
networks) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For several decades, image restoration remains an active research topic in
low-level computer vision and hence new approaches are constantly emerging.
However, many recently proposed algorithms achieve state-of-the-art performance
only at the expense of very high computation time, which clearly limits their
practical relevance. In this work, we propose a simple but effective approach
with both high computational efficiency and high restoration quality. We extend
conventional nonlinear reaction diffusion models by several parametrized linear
filters as well as several parametrized influence functions. We propose to
train the parameters of the filters and the influence functions through a loss
based approach. Experiments show that our trained nonlinear reaction diffusion
models largely benefit from the training of the parameters and finally lead to
the best reported performance on common test datasets for image restoration.
Due to their structural simplicity, our trained models are highly efficient and
are also well-suited for parallel computation on GPUs.
| [
{
"version": "v1",
"created": "Thu, 19 Mar 2015 14:01:42 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Mar 2015 19:59:44 GMT"
}
] | 2015-03-26T00:00:00 | [
[
"Chen",
"Yunjin",
""
],
[
"Yu",
"Wei",
""
],
[
"Pock",
"Thomas",
""
]
] | TITLE: On learning optimized reaction diffusion processes for effective image
restoration
ABSTRACT: For several decades, image restoration remains an active research topic in
low-level computer vision and hence new approaches are constantly emerging.
However, many recently proposed algorithms achieve state-of-the-art performance
only at the expense of very high computation time, which clearly limits their
practical relevance. In this work, we propose a simple but effective approach
with both high computational efficiency and high restoration quality. We extend
conventional nonlinear reaction diffusion models by several parametrized linear
filters as well as several parametrized influence functions. We propose to
train the parameters of the filters and the influence functions through a loss
based approach. Experiments show that our trained nonlinear reaction diffusion
models largely benefit from the training of the parameters and finally lead to
the best reported performance on common test datasets for image restoration.
Due to their structural simplicity, our trained models are highly efficient and
are also well-suited for parallel computation on GPUs.
| no_new_dataset | 0.951369 |
1503.07240 | Dengyong Zhou | Dengyong Zhou, Qiang Liu, John C. Platt, Christopher Meek, Nihar B.
Shah | Regularized Minimax Conditional Entropy for Crowdsourcing | 31 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a rapidly increasing interest in crowdsourcing for data labeling. By
crowdsourcing, a large number of labels can be often quickly gathered at low
cost. However, the labels provided by the crowdsourcing workers are usually not
of high quality. In this paper, we propose a minimax conditional entropy
principle to infer ground truth from noisy crowdsourced labels. Under this
principle, we derive a unique probabilistic labeling model jointly
parameterized by worker ability and item difficulty. We also propose an
objective measurement principle, and show that our method is the only method
which satisfies this objective measurement principle. We validate our method
through a variety of real crowdsourcing datasets with binary, multiclass or
ordinal labels.
| [
{
"version": "v1",
"created": "Wed, 25 Mar 2015 00:10:11 GMT"
}
] | 2015-03-26T00:00:00 | [
[
"Zhou",
"Dengyong",
""
],
[
"Liu",
"Qiang",
""
],
[
"Platt",
"John C.",
""
],
[
"Meek",
"Christopher",
""
],
[
"Shah",
"Nihar B.",
""
]
] | TITLE: Regularized Minimax Conditional Entropy for Crowdsourcing
ABSTRACT: There is a rapidly increasing interest in crowdsourcing for data labeling. By
crowdsourcing, a large number of labels can be often quickly gathered at low
cost. However, the labels provided by the crowdsourcing workers are usually not
of high quality. In this paper, we propose a minimax conditional entropy
principle to infer ground truth from noisy crowdsourced labels. Under this
principle, we derive a unique probabilistic labeling model jointly
parameterized by worker ability and item difficulty. We also propose an
objective measurement principle, and show that our method is the only method
which satisfies this objective measurement principle. We validate our method
through a variety of real crowdsourcing datasets with binary, multiclass or
ordinal labels.
| no_new_dataset | 0.955277 |
1503.07274 | Elman Mansimov | Elman Mansimov, Nitish Srivastava, Ruslan Salakhutdinov | Initialization Strategies of Spatio-Temporal Convolutional Neural
Networks | Technical Report | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new way of incorporating temporal information present in videos
into Spatial Convolutional Neural Networks (ConvNets) trained on images, that
avoids training Spatio-Temporal ConvNets from scratch. We describe several
initializations of weights in 3D Convolutional Layers of Spatio-Temporal
ConvNet using 2D Convolutional Weights learned from ImageNet. We show that it
is important to initialize 3D Convolutional Weights judiciously in order to
learn temporal representations of videos. We evaluate our methods on the
UCF-101 dataset and demonstrate improvement over Spatial ConvNets.
| [
{
"version": "v1",
"created": "Wed, 25 Mar 2015 03:41:47 GMT"
}
] | 2015-03-26T00:00:00 | [
[
"Mansimov",
"Elman",
""
],
[
"Srivastava",
"Nitish",
""
],
[
"Salakhutdinov",
"Ruslan",
""
]
] | TITLE: Initialization Strategies of Spatio-Temporal Convolutional Neural
Networks
ABSTRACT: We propose a new way of incorporating temporal information present in videos
into Spatial Convolutional Neural Networks (ConvNets) trained on images, that
avoids training Spatio-Temporal ConvNets from scratch. We describe several
initializations of weights in 3D Convolutional Layers of Spatio-Temporal
ConvNet using 2D Convolutional Weights learned from ImageNet. We show that it
is important to initialize 3D Convolutional Weights judiciously in order to
learn temporal representations of videos. We evaluate our methods on the
UCF-101 dataset and demonstrate improvement over Spatial ConvNets.
| no_new_dataset | 0.950869 |
1503.07405 | Arkaitz Zubiaga | Bo Wang, Arkaitz Zubiaga, Maria Liakata, Rob Procter | Making the Most of Tweet-Inherent Features for Social Spam Detection on
Twitter | null | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social spam produces a great amount of noise on social media services such as
Twitter, which reduces the signal-to-noise ratio that both end users and data
mining applications observe. Existing techniques on social spam detection have
focused primarily on the identification of spam accounts by using extensive
historical and network-based data. In this paper we focus on the detection of
spam tweets, which optimises the amount of data that needs to be gathered by
relying only on tweet-inherent features. This enables the application of the
spam detection system to a large set of tweets in a timely fashion, potentially
applicable in a real-time or near real-time setting. Using two large
hand-labelled datasets of tweets containing spam, we study the suitability of
five classification algorithms and four different feature sets to the social
spam detection task. Our results show that, by using the limited set of
features readily available in a tweet, we can achieve encouraging results which
are competitive when compared against existing spammer detection systems that
make use of additional, costly user features. Our study is the first that
attempts at generalising conclusions on the optimal classifiers and sets of
features for social spam detection over different datasets.
| [
{
"version": "v1",
"created": "Wed, 25 Mar 2015 14:58:59 GMT"
}
] | 2015-03-26T00:00:00 | [
[
"Wang",
"Bo",
""
],
[
"Zubiaga",
"Arkaitz",
""
],
[
"Liakata",
"Maria",
""
],
[
"Procter",
"Rob",
""
]
] | TITLE: Making the Most of Tweet-Inherent Features for Social Spam Detection on
Twitter
ABSTRACT: Social spam produces a great amount of noise on social media services such as
Twitter, which reduces the signal-to-noise ratio that both end users and data
mining applications observe. Existing techniques on social spam detection have
focused primarily on the identification of spam accounts by using extensive
historical and network-based data. In this paper we focus on the detection of
spam tweets, which optimises the amount of data that needs to be gathered by
relying only on tweet-inherent features. This enables the application of the
spam detection system to a large set of tweets in a timely fashion, potentially
applicable in a real-time or near real-time setting. Using two large
hand-labelled datasets of tweets containing spam, we study the suitability of
five classification algorithms and four different feature sets to the social
spam detection task. Our results show that, by using the limited set of
features readily available in a tweet, we can achieve encouraging results which
are competitive when compared against existing spammer detection systems that
make use of additional, costly user features. Our study is the first that
attempts at generalising conclusions on the optimal classifiers and sets of
features for social spam detection over different datasets.
| no_new_dataset | 0.944893 |
1503.07477 | Debajyoti Mukhopadhyay Prof. | Praful Koturwar, Sheetal Girase, Debajyoti Mukhopadhyay | A Survey of Classification Techniques in the Area of Big Data | 7 pages, 3 figures, 2 tables in IJAFRC, Vol.1, Issue 11, November
2014, ISSN: 2348-4853 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Big Data concerns large-volume, growing data sets that are complex and have
multiple autonomous sources. Earlier technologies were not able to handle the
storage and processing of such huge data, which is how the Big Data concept came
into existence. Handling unstructured data is a tedious job for users, so there
should be a mechanism that classifies unstructured data into an organized form
and helps users access the required data easily. Classification techniques over
large transactional databases deliver the required data to users from large
datasets in a simpler way. There are two main classes of classification
techniques, supervised and unsupervised. In this paper we focus on a study of
different supervised classification techniques and discuss their advantages and
limitations.
| [
{
"version": "v1",
"created": "Wed, 25 Mar 2015 17:56:19 GMT"
}
] | 2015-03-26T00:00:00 | [
[
"Koturwar",
"Praful",
""
],
[
"Girase",
"Sheetal",
""
],
[
"Mukhopadhyay",
"Debajyoti",
""
]
] | TITLE: A Survey of Classification Techniques in the Area of Big Data
ABSTRACT: Big Data concerns large-volume, growing data sets that are complex and have
multiple autonomous sources. Earlier technologies were not able to handle the
storage and processing of such huge data, which is how the Big Data concept came
into existence. Handling unstructured data is a tedious job for users, so there
should be a mechanism that classifies unstructured data into an organized form
and helps users access the required data easily. Classification techniques over
large transactional databases deliver the required data to users from large
datasets in a simpler way. There are two main classes of classification
techniques, supervised and unsupervised. In this paper we focus on a study of
different supervised classification techniques and discuss their advantages and
limitations.
| no_new_dataset | 0.941277 |
1410.3469 | Daniel Whiteson | Pierre Baldi, Peter Sadowski, Daniel Whiteson | Enhanced Higgs to $\tau^+\tau^-$ Searches with Deep Learning | For submission to PRL | Phys. Rev. Lett. 114, 111801 (2015) | 10.1103/PhysRevLett.114.111801 | null | hep-ph cs.LG hep-ex | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Higgs boson is thought to provide the interaction that imparts mass to
the fundamental fermions, but while measurements at the Large Hadron Collider
(LHC) are consistent with this hypothesis, current analysis techniques lack the
statistical power to cross the traditional 5$\sigma$ significance barrier
without more data. \emph{Deep learning} techniques have the potential to
increase the statistical power of this analysis by \emph{automatically}
learning complex, high-level data representations. In this work, deep neural
networks are used to detect the decay of the Higgs to a pair of tau leptons. A
Bayesian optimization algorithm is used to tune the network architecture and
training algorithm hyperparameters, resulting in a deep network of eight
non-linear processing layers that improves upon the performance of shallow
classifiers even without the use of features specifically engineered by
physicists for this application. The improvement in discovery significance is
equivalent to an increase in the accumulated dataset of 25\%.
| [
{
"version": "v1",
"created": "Mon, 13 Oct 2014 20:00:03 GMT"
}
] | 2015-03-25T00:00:00 | [
[
"Baldi",
"Pierre",
""
],
[
"Sadowski",
"Peter",
""
],
[
"Whiteson",
"Daniel",
""
]
] | TITLE: Enhanced Higgs to $\tau^+\tau^-$ Searches with Deep Learning
ABSTRACT: The Higgs boson is thought to provide the interaction that imparts mass to
the fundamental fermions, but while measurements at the Large Hadron Collider
(LHC) are consistent with this hypothesis, current analysis techniques lack the
statistical power to cross the traditional 5$\sigma$ significance barrier
without more data. \emph{Deep learning} techniques have the potential to
increase the statistical power of this analysis by \emph{automatically}
learning complex, high-level data representations. In this work, deep neural
networks are used to detect the decay of the Higgs to a pair of tau leptons. A
Bayesian optimization algorithm is used to tune the network architecture and
training algorithm hyperparameters, resulting in a deep network of eight
non-linear processing layers that improves upon the performance of shallow
classifiers even without the use of features specifically engineered by
physicists for this application. The improvement in discovery significance is
equivalent to an increase in the accumulated dataset of 25\%.
| no_new_dataset | 0.950319 |
1502.08040 | Mayank Kumar | Mayank Kumar, Ashok Veeraraghavan, Ashutosh Sabharval | DistancePPG: Robust non-contact vital signs monitoring using a camera | 24 pages, 11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vital signs such as pulse rate and breathing rate are currently measured
using contact probes. But, non-contact methods for measuring vital signs are
desirable both in hospital settings (e.g. in NICU) and for ubiquitous in-situ
health tracking (e.g. on mobile phone and computers with webcams). Recently,
camera-based non-contact vital sign monitoring has been shown to be feasible.
However, camera-based vital sign monitoring is challenging for people with
darker skin tone, under low lighting conditions, and/or during movement of an
individual in front of the camera. In this paper, we propose distancePPG, a new
camera-based vital sign estimation algorithm which addresses these challenges.
DistancePPG proposes a new method of combining skin-color change signals from
different tracked regions of the face using a weighted average, where the
weights depend on the blood perfusion and incident light intensity in the
region, to improve the signal-to-noise ratio (SNR) of camera-based estimate.
One of our key contributions is a new automatic method for determining the
weights based only on the video recording of the subject. The gains in SNR of
camera-based PPG estimated using distancePPG translate into reduction of the
error in vital sign estimation, and thus expand the scope of camera-based vital
sign monitoring to potentially challenging scenarios. Further, a dataset will
be released, comprising of synchronized video recordings of face and pulse
oximeter based ground truth recordings from the earlobe for people with
different skin tones, under different lighting conditions and for various
motion scenarios.
| [
{
"version": "v1",
"created": "Fri, 27 Feb 2015 20:03:06 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Mar 2015 02:31:18 GMT"
}
] | 2015-03-25T00:00:00 | [
[
"Kumar",
"Mayank",
""
],
[
"Veeraraghavan",
"Ashok",
""
],
[
"Sabharval",
"Ashutosh",
""
]
] | TITLE: DistancePPG: Robust non-contact vital signs monitoring using a camera
ABSTRACT: Vital signs such as pulse rate and breathing rate are currently measured
using contact probes. But, non-contact methods for measuring vital signs are
desirable both in hospital settings (e.g. in NICU) and for ubiquitous in-situ
health tracking (e.g. on mobile phone and computers with webcams). Recently,
camera-based non-contact vital sign monitoring has been shown to be feasible.
However, camera-based vital sign monitoring is challenging for people with
darker skin tone, under low lighting conditions, and/or during movement of an
individual in front of the camera. In this paper, we propose distancePPG, a new
camera-based vital sign estimation algorithm which addresses these challenges.
DistancePPG proposes a new method of combining skin-color change signals from
different tracked regions of the face using a weighted average, where the
weights depend on the blood perfusion and incident light intensity in the
region, to improve the signal-to-noise ratio (SNR) of camera-based estimate.
One of our key contributions is a new automatic method for determining the
weights based only on the video recording of the subject. The gains in SNR of
camera-based PPG estimated using distancePPG translate into reduction of the
error in vital sign estimation, and thus expand the scope of camera-based vital
sign monitoring to potentially challenging scenarios. Further, a dataset will
be released, comprising of synchronized video recordings of face and pulse
oximeter based ground truth recordings from the earlobe for people with
different skin tones, under different lighting conditions and for various
motion scenarios.
| new_dataset | 0.969382 |
1503.06917 | Yilin Wang | Qiang Zhang, Yilin Wang, Baoxin Li | Unsupervised Video Analysis Based on a Spatiotemporal Saliency Detector | 21 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual saliency, which predicts regions in the field of view that draw the
most visual attention, has attracted a lot of interest from researchers. It has
already been used in several vision tasks, e.g., image classification, object
detection, foreground segmentation. Recently, the spectrum analysis based
visual saliency approach has attracted a lot of interest due to its simplicity
and good performance, where the phase information of the image is used to
construct the saliency map. In this paper, we propose a new approach for
detecting spatiotemporal visual saliency based on the phase spectrum of the
videos, which is easy to implement and computationally efficient. With the
proposed algorithm, we also study how the spatiotemporal saliency can be used
in two important vision tasks, abnormality detection and spatiotemporal interest
point detection. The proposed algorithm is evaluated on several commonly used
datasets with comparison to the state-of-the-art methods from the literature.
The experiments demonstrate the effectiveness of the proposed approach to
spatiotemporal visual saliency detection and its application to the above
vision tasks.
| [
{
"version": "v1",
"created": "Tue, 24 Mar 2015 05:25:45 GMT"
}
] | 2015-03-25T00:00:00 | [
[
"Zhang",
"Qiang",
""
],
[
"Wang",
"Yilin",
""
],
[
"Li",
"Baoxin",
""
]
] | TITLE: Unsupervised Video Analysis Based on a Spatiotemporal Saliency Detector
ABSTRACT: Visual saliency, which predicts regions in the field of view that draw the
most visual attention, has attracted a lot of interest from researchers. It has
already been used in several vision tasks, e.g., image classification, object
detection, foreground segmentation. Recently, the spectrum analysis based
visual saliency approach has attracted a lot of interest due to its simplicity
and good performance, where the phase information of the image is used to
construct the saliency map. In this paper, we propose a new approach for
detecting spatiotemporal visual saliency based on the phase spectrum of the
videos, which is easy to implement and computationally efficient. With the
proposed algorithm, we also study how the spatiotemporal saliency can be used
in two important vision tasks, abnormality detection and spatiotemporal interest
point detection. The proposed algorithm is evaluated on several commonly used
datasets with comparison to the state-of-the-art methods from the literature.
The experiments demonstrate the effectiveness of the proposed approach to
spatiotemporal visual saliency detection and its application to the above
vision tasks.
| no_new_dataset | 0.953622 |
1503.06952 | Maria-Carolina Monard MC | Jean Metz and Newton Spola\^or and Everton A. Cherman and Maria C.
Monard | Comparing published multi-label classifier performance measures to the
ones obtained by a simple multi-label baseline classifier | 19 pages, 8 figures, 7 tables | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In supervised learning, simple baseline classifiers can be constructed by
only looking at the class, i.e., ignoring any other information from the
dataset. The single-label learning community frequently uses as a reference the
one which always predicts the majority class. Although a classifier might
perform worse than this simple baseline classifier, this behaviour requires a
special explanation. Aiming to motivate the community to compare experimental
results with the ones provided by a multi-label baseline classifier, and to call
attention to the need for special explanations when classifiers perform worse
than the baseline, in this work we propose the use of
General_B, a multi-label baseline classifier. General_B was evaluated in
contrast to results published in the literature which were carefully selected
using a systematic review process. It was found that a considerable number of
published results on 10 frequently used datasets are worse than or equal to the
ones obtained by General_B, and for one dataset it reaches up to 43% of the
dataset published results. Moreover, although a simple baseline classifier was
not considered in these publications, it was observed that even for very poor
results no special explanations were provided in most of them. We hope that the
findings of this work would encourage the multi-label community to consider the
idea of using a simple baseline classifier, such that further explanations are
provided when a classifier performs worse than a baseline.
| [
{
"version": "v1",
"created": "Tue, 24 Mar 2015 08:57:25 GMT"
}
] | 2015-03-25T00:00:00 | [
[
"Metz",
"Jean",
""
],
[
"Spolaôr",
"Newton",
""
],
[
"Cherman",
"Everton A.",
""
],
[
"Monard",
"Maria C.",
""
]
] | TITLE: Comparing published multi-label classifier performance measures to the
ones obtained by a simple multi-label baseline classifier
ABSTRACT: In supervised learning, simple baseline classifiers can be constructed by
only looking at the class, i.e., ignoring any other information from the
dataset. The single-label learning community frequently uses as a reference the
one which always predicts the majority class. Although a classifier might
perform worse than this simple baseline classifier, this behaviour requires a
special explanation. Aiming to motivate the community to compare experimental
results with the ones provided by a multi-label baseline classifier, and to call
attention to the need for special explanations when classifiers perform worse
than the baseline, in this work we propose the use of
General_B, a multi-label baseline classifier. General_B was evaluated in
contrast to results published in the literature which were carefully selected
using a systematic review process. It was found that a considerable number of
published results on 10 frequently used datasets are worse than or equal to the
ones obtained by General_B, and for one dataset it reaches up to 43% of the
dataset published results. Moreover, although a simple baseline classifier was
not considered in these publications, it was observed that even for very poor
results no special explanations were provided in most of them. We hope that the
findings of this work would encourage the multi-label community to consider the
idea of using a simple baseline classifier, such that further explanations are
provided when a classifier performs worse than a baseline.
| no_new_dataset | 0.949482 |
1406.4877 | David Martins de Matos | Francisco Raposo, Ricardo Ribeiro, David Martins de Matos | On the Application of Generic Summarization Algorithms to Music | 12 pages, 1 table; Submitted to IEEE Signal Processing Letters | IEEE Signal Processing Letters, IEEE, vol. 22, n. 1, January 2015 | 10.1109/LSP.2014.2347582 | null | cs.IR cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several generic summarization algorithms were developed in the past and
successfully applied in fields such as text and speech summarization. In this
paper, we review and apply these algorithms to music. To evaluate this
summarization's performance, we adopt an extrinsic approach: we compare a Fado
Genre Classifier's performance using truncated contiguous clips against the
summaries extracted with those algorithms on 2 different datasets. We show that
Maximal Marginal Relevance (MMR), LexRank and Latent Semantic Analysis (LSA)
all improve classification performance in both datasets used for testing.
| [
{
"version": "v1",
"created": "Wed, 18 Jun 2014 20:10:22 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Raposo",
"Francisco",
""
],
[
"Ribeiro",
"Ricardo",
""
],
[
"de Matos",
"David Martins",
""
]
] | TITLE: On the Application of Generic Summarization Algorithms to Music
ABSTRACT: Several generic summarization algorithms were developed in the past and
successfully applied in fields such as text and speech summarization. In this
paper, we review and apply these algorithms to music. To evaluate this
summarization's performance, we adopt an extrinsic approach: we compare a Fado
Genre Classifier's performance using truncated contiguous clips against the
summaries extracted with those algorithms on 2 different datasets. We show that
Maximal Marginal Relevance (MMR), LexRank and Latent Semantic Analysis (LSA)
all improve classification performance in both datasets used for testing.
| no_new_dataset | 0.950824 |
1412.6572 | Ian Goodfellow | Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy | Explaining and Harnessing Adversarial Examples | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several machine learning models, including neural networks, consistently
misclassify adversarial examples---inputs formed by applying small but
intentionally worst-case perturbations to examples from the dataset, such that
the perturbed input results in the model outputting an incorrect answer with
high confidence. Early attempts at explaining this phenomenon focused on
nonlinearity and overfitting. We argue instead that the primary cause of neural
networks' vulnerability to adversarial perturbation is their linear nature.
This explanation is supported by new quantitative results while giving the
first explanation of the most intriguing fact about them: their generalization
across architectures and training sets. Moreover, this view yields a simple and
fast method of generating adversarial examples. Using this approach to provide
examples for adversarial training, we reduce the test set error of a maxout
network on the MNIST dataset.
| [
{
"version": "v1",
"created": "Sat, 20 Dec 2014 01:17:12 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Feb 2015 17:25:05 GMT"
},
{
"version": "v3",
"created": "Fri, 20 Mar 2015 20:19:16 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Goodfellow",
"Ian J.",
""
],
[
"Shlens",
"Jonathon",
""
],
[
"Szegedy",
"Christian",
""
]
] | TITLE: Explaining and Harnessing Adversarial Examples
ABSTRACT: Several machine learning models, including neural networks, consistently
misclassify adversarial examples---inputs formed by applying small but
intentionally worst-case perturbations to examples from the dataset, such that
the perturbed input results in the model outputting an incorrect answer with
high confidence. Early attempts at explaining this phenomenon focused on
nonlinearity and overfitting. We argue instead that the primary cause of neural
networks' vulnerability to adversarial perturbation is their linear nature.
This explanation is supported by new quantitative results while giving the
first explanation of the most intriguing fact about them: their generalization
across architectures and training sets. Moreover, this view yields a simple and
fast method of generating adversarial examples. Using this approach to provide
examples for adversarial training, we reduce the test set error of a maxout
network on the MNIST dataset.
| no_new_dataset | 0.951278 |
1503.03562 | Zhiyong Cheng | Zhiyong Cheng, Daniel Soudry, Zexi Mao, Zhenzhong Lan | Training Binary Multilayer Neural Networks for Image Classification
using Expectation Backpropagation | 8 pages with 1 figures and 4 tables | null | null | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compared to Multilayer Neural Networks with real weights, Binary Multilayer
Neural Networks (BMNNs) can be implemented more efficiently on dedicated
hardware. BMNNs have been demonstrated to be effective on binary classification
tasks with Expectation BackPropagation (EBP) algorithm on high dimensional text
datasets. In this paper, we investigate the capability of BMNNs using the EBP
algorithm on multiclass image classification tasks. The performances of binary
neural networks with multiple hidden layers and different numbers of hidden
units are examined on MNIST. We also explore the effectiveness of image spatial
filters and the dropout technique in BMNNs. Experimental results on MNIST
dataset show that EBP can obtain 2.12% test error with binary weights and 1.66%
test error with real weights, which is comparable to the results of standard
BackPropagation algorithm on fully connected MNNs.
| [
{
"version": "v1",
"created": "Thu, 12 Mar 2015 02:24:31 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Mar 2015 01:32:15 GMT"
},
{
"version": "v3",
"created": "Sun, 22 Mar 2015 21:47:56 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Cheng",
"Zhiyong",
""
],
[
"Soudry",
"Daniel",
""
],
[
"Mao",
"Zexi",
""
],
[
"Lan",
"Zhenzhong",
""
]
] | TITLE: Training Binary Multilayer Neural Networks for Image Classification
using Expectation Backpropagation
ABSTRACT: Compared to Multilayer Neural Networks with real weights, Binary Multilayer
Neural Networks (BMNNs) can be implemented more efficiently on dedicated
hardware. BMNNs have been demonstrated to be effective on binary classification
tasks with Expectation BackPropagation (EBP) algorithm on high dimensional text
datasets. In this paper, we investigate the capability of BMNNs using the EBP
algorithm on multiclass image classification tasks. The performances of binary
neural networks with multiple hidden layers and different numbers of hidden
units are examined on MNIST. We also explore the effectiveness of image spatial
filters and the dropout technique in BMNNs. Experimental results on MNIST
dataset show that EBP can obtain 2.12% test error with binary weights and 1.66%
test error with real weights, which is comparable to the results of standard
BackPropagation algorithm on fully connected MNNs.
| no_new_dataset | 0.950595 |
1503.06239 | Jinye Zhang | Jinye Zhang, Zhijian Ou | Block-Wise MAP Inference for Determinantal Point Processes with
Application to Change-Point Detection | null | null | null | null | cs.LG cs.AI stat.ME stat.ML | http://creativecommons.org/licenses/by/3.0/ | Existing MAP inference algorithms for determinantal point processes (DPPs)
need to calculate determinants or conduct eigenvalue decomposition generally at
the scale of the full kernel, which presents a great challenge for real-world
applications. In this paper, we introduce a class of DPPs, called BwDPPs, that
are characterized by an almost block diagonal kernel matrix and thus can allow
efficient block-wise MAP inference. Furthermore, BwDPPs are successfully
applied to address the difficulty of selecting change-points in the problem of
change-point detection (CPD), which results in a new BwDPP-based CPD method,
named BwDppCpd. In BwDppCpd, a preliminary set of change-point candidates is
first created based on existing well-studied metrics. Then, these change-point
candidates are treated as DPP items, and DPP-based subset selection is
conducted to give the final estimate of the change-points that favours both
quality and diversity. The effectiveness of BwDppCpd is demonstrated through
extensive experiments on five real-world datasets.
| [
{
"version": "v1",
"created": "Fri, 20 Mar 2015 22:01:45 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Zhang",
"Jinye",
""
],
[
"Ou",
"Zhijian",
""
]
] | TITLE: Block-Wise MAP Inference for Determinantal Point Processes with
Application to Change-Point Detection
ABSTRACT: Existing MAP inference algorithms for determinantal point processes (DPPs)
need to calculate determinants or conduct eigenvalue decomposition generally at
the scale of the full kernel, which presents a great challenge for real-world
applications. In this paper, we introduce a class of DPPs, called BwDPPs, that
are characterized by an almost block diagonal kernel matrix and thus can allow
efficient block-wise MAP inference. Furthermore, BwDPPs are successfully
applied to address the difficulty of selecting change-points in the problem of
change-point detection (CPD), which results in a new BwDPP-based CPD method,
named BwDppCpd. In BwDppCpd, a preliminary set of change-point candidates is
first created based on existing well-studied metrics. Then, these change-point
candidates are treated as DPP items, and DPP-based subset selection is
conducted to give the final estimate of the change-points that favours both
quality and diversity. The effectiveness of BwDppCpd is demonstrated through
extensive experiments on five real-world datasets.
| no_new_dataset | 0.941385 |
1503.06250 | Ilya Safro | Talayeh Razzaghi and Oleg Roderick and Ilya Safro and Nick Marko | Fast Imbalanced Classification of Healthcare Data with Missing Values | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the medical domain, data features often contain missing values. This can
create serious bias in the predictive modeling. Typical standard data mining
methods often produce poor performance measures. In this paper, we propose a
new method to simultaneously classify large datasets and reduce the effects of
missing values. The proposed method is based on a multilevel framework of the
cost-sensitive SVM and the expected maximization imputation method for missing
values, which relies on iterated regression analyses. We compare classification
results of multilevel SVM-based algorithms on public benchmark datasets with
imbalanced classes and missing values as well as real data in health
applications, and show that our multilevel SVM-based method produces fast, and
more accurate and robust classification results.
| [
{
"version": "v1",
"created": "Sat, 21 Mar 2015 00:13:54 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Razzaghi",
"Talayeh",
""
],
[
"Roderick",
"Oleg",
""
],
[
"Safro",
"Ilya",
""
],
[
"Marko",
"Nick",
""
]
] | TITLE: Fast Imbalanced Classification of Healthcare Data with Missing Values
ABSTRACT: In the medical domain, data features often contain missing values. This can
create serious bias in the predictive modeling. Typical standard data mining
methods often produce poor performance measures. In this paper, we propose a
new method to simultaneously classify large datasets and reduce the effects of
missing values. The proposed method is based on a multilevel framework of the
cost-sensitive SVM and the expected maximization imputation method for missing
values, which relies on iterated regression analyses. We compare classification
results of multilevel SVM-based algorithms on public benchmark datasets with
imbalanced classes and missing values as well as real data in health
applications, and show that our multilevel SVM-based method produces fast, and
more accurate and robust classification results.
| no_new_dataset | 0.950641 |
1503.06271 | Mina Ghashami | Mina Ghashami and Amirali Abdullah | Binary Coding in Stream | 5 figures, 9 pages | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Big data is becoming ever more ubiquitous, ranging over massive video
repositories, document corpuses, image sets and Internet routing history.
Proximity search and clustering are two algorithmic primitives fundamental to
data analysis, but suffer from the "curse of dimensionality" on these gigantic
datasets. A popular attack for this problem is to convert object
representations into short binary codewords, while approximately preserving
near neighbor structure. However, there has been limited research on
constructing codewords in the "streaming" or "online" settings often applicable
to this scale of data, where one may only make a single pass over data too
massive to fit in local memory.
In this paper, we apply recent advances in matrix sketching techniques to
construct binary codewords in both the streaming and online settings. Our
experimental results are competitive with or outperform several of the most
widely used algorithms, and we prove theoretical guarantees on performance in the streaming
setting under mild assumptions on the data and randomness of the training set.
| [
{
"version": "v1",
"created": "Sat, 21 Mar 2015 06:25:02 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Ghashami",
"Mina",
""
],
[
"Abdullah",
"Amirali",
""
]
] | TITLE: Binary Coding in Stream
ABSTRACT: Big data is becoming ever more ubiquitous, ranging over massive video
repositories, document corpuses, image sets and Internet routing history.
Proximity search and clustering are two algorithmic primitives fundamental to
data analysis, but suffer from the "curse of dimensionality" on these gigantic
datasets. A popular attack for this problem is to convert object
representations into short binary codewords, while approximately preserving
near neighbor structure. However, there has been limited research on
constructing codewords in the "streaming" or "online" settings often applicable
to this scale of data, where one may only make a single pass over data too
massive to fit in local memory.
In this paper, we apply recent advances in matrix sketching techniques to
construct binary codewords in both the streaming and online settings. Our
experimental results are competitive with or outperform several of the most
widely used algorithms, and we prove theoretical guarantees on performance in the streaming
setting under mild assumptions on the data and randomness of the training set.
| no_new_dataset | 0.943504 |
1503.06301 | Kamalakar Karlapalem | Yash Gupta and Kamalakar Karlapalem | Effective Handling of Urgent Jobs - Speed Up Scheduling for Computing
Applications | Paper covering main contributions from MS Thesis of Yash Gupta
http://web2py.iiit.ac.in/research_centres/publications/view_publication/mastersthesis/247
- presented in ACM format | null | null | MS Thesis Number IIIT/TH/2014/7 | cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A queue is required when a service provider is not able to handle jobs
arriving over the time. In a highly flexible and dynamic environment, some jobs
might demand for faster execution at run-time especially when the resources are
limited and the jobs are competing for acquiring resources. A user might demand
for speed up (reduced wait time) for some of the jobs present in the queue at
run time. In such cases, it is required to accelerate (directly sending the job
to the server) urgent jobs (requesting for speed up) ahead of other jobs
present in the queue for an earlier completion of urgent jobs. Under the
assumption of no additional resources, such acceleration of jobs would result
in slowing down of other jobs present in the queue. In this paper, we formulate
the problem of Speed Up Scheduling without acquiring any additional resources
for the scheduling of on-line speed up requests posed by a user at run-time and
present algorithms for the same. We apply the idea of Speed Up Scheduling to
two different domains -Web Scheduling and CPU Scheduling. We demonstrate our
results with a simulation based model using trace driven workload and synthetic
datasets to show the usefulness of Speed Up scheduling. Speed Up provides a new
way of addressing urgent jobs, provides a different evaluation criterion for
comparing scheduling algorithms and has practical applications.
| [
{
"version": "v1",
"created": "Sat, 21 Mar 2015 13:51:48 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Gupta",
"Yash",
""
],
[
"Karlapalem",
"Kamalakar",
""
]
] | TITLE: Effective Handling of Urgent Jobs - Speed Up Scheduling for Computing
Applications
ABSTRACT: A queue is required when a service provider is not able to handle jobs
arriving over the time. In a highly flexible and dynamic environment, some jobs
might demand for faster execution at run-time especially when the resources are
limited and the jobs are competing for acquiring resources. A user might demand
for speed up (reduced wait time) for some of the jobs present in the queue at
run time. In such cases, it is required to accelerate (directly sending the job
to the server) urgent jobs (requesting for speed up) ahead of other jobs
present in the queue for an earlier completion of urgent jobs. Under the
assumption of no additional resources, such acceleration of jobs would result
in slowing down of other jobs present in the queue. In this paper, we formulate
the problem of Speed Up Scheduling without acquiring any additional resources
for the scheduling of on-line speed up requests posed by a user at run-time and
present algorithms for the same. We apply the idea of Speed Up Scheduling to
two different domains -Web Scheduling and CPU Scheduling. We demonstrate our
results with a simulation based model using trace driven workload and synthetic
datasets to show the usefulness of Speed Up scheduling. Speed Up provides a new
way of addressing urgent jobs, provides a different evaluation criterion for
comparing scheduling algorithms and has practical applications.
| no_new_dataset | 0.949623 |
1503.06555 | Debajyoti Mukhopadhyay Prof. | Sumitkumar Kanoje, Sheetal Girase, Debajyoti Mukhopadhyay | User Profiling for Recommendation System | 5 pages, 5 figures, 5 tables | null | null | null | cs.IR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recommendation system is a type of information filtering system that
recommends, from a vast variety and quantity of items, various objects that are
of interest to the user. This guides an individual in a personalized way to
interesting or useful objects in a large space of possible options. Such
systems also help many businesses achieve more profit and sustain their
position in the field against their rivals. But given the amount of information
a business holds, it becomes difficult to identify the items of user interest.
Therefore personalization, or user profiling, is one of the challenging tasks
that gives access to user-relevant information, which can be used to solve the
difficult task of classifying and ranking items according to an individual's
interest. Profiling can be done in various ways, such as supervised or
unsupervised, individual or group profiling, and distributive or
non-distributive profiling. Our focus in this paper is on the dataset we use:
we identify some interesting facts by using the Weka Tool that can be
used for recommending items from the dataset. Our aim is to present a novel
technique to achieve user profiling in a recommendation system.
| [
{
"version": "v1",
"created": "Mon, 23 Mar 2015 08:47:35 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Kanoje",
"Sumitkumar",
""
],
[
"Girase",
"Sheetal",
""
],
[
"Mukhopadhyay",
"Debajyoti",
""
]
] | TITLE: User Profiling for Recommendation System
ABSTRACT: A recommendation system is a type of information filtering system that
recommends, from a vast variety and quantity of items, various objects that are
of interest to the user. This guides an individual in a personalized way to
interesting or useful objects in a large space of possible options. Such
systems also help many businesses achieve more profit and sustain their
position in the field against their rivals. But given the amount of information
a business holds, it becomes difficult to identify the items of user interest.
Therefore personalization, or user profiling, is one of the challenging tasks
that gives access to user-relevant information, which can be used to solve the
difficult task of classifying and ranking items according to an individual's
interest. Profiling can be done in various ways, such as supervised or
unsupervised, individual or group profiling, and distributive or
non-distributive profiling. Our focus in this paper is on the dataset we use:
we identify some interesting facts by using the Weka Tool that can be
used for recommending items from the dataset. Our aim is to present a novel
technique to achieve user profiling in a recommendation system.
| no_new_dataset | 0.955651 |
1503.06562 | Debajyoti Mukhopadhyay Prof. | Dheeraj kumar Bokde, Sheetal Girase, Debajyoti Mukhopadhyay | An Item-Based Collaborative Filtering using Dimensionality Reduction
Techniques on Mahout Framework | 6 pages, 4 figures, 3 tables | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative Filtering is the most widely used prediction technique in
Recommendation Systems. Most current CF recommender systems maintain a
single-criterion user rating in the user-item matrix. However, recent studies
indicate that recommender systems relying on multiple criteria can improve the
prediction accuracy of recommendations by considering user
preferences on multiple aspects of items. This gives birth to Multi Criteria
Collaborative Filtering (MC CF). In MC CF, users rate multiple aspects
of an item in new dimensions, thereby increasing the size of the rating matrix
and aggravating the sparsity and scalability problems. Appropriate dimensionality
reduction techniques are thus needed to address these challenges by reducing the
dimension of the user-item rating matrix, improving the prediction accuracy and
efficiency of the CF recommender system. The process of dimensionality reduction
maps the high dimensional input space into lower dimensional space. Thus, the
objective of this paper is to propose an efficient MC CF algorithm using
dimensionality reduction technique to improve the recommendation quality and
prediction accuracy. Dimensionality reduction techniques such as Singular Value
Decomposition and Principal Component Analysis are used to solve the
scalability and alleviate the sparsity problems in overall rating. The proposed
MC CF approach will be implemented using Apache Mahout, which allows processing
of massive dataset stored in distributed/non-distributed file system.
| [
{
"version": "v1",
"created": "Mon, 23 Mar 2015 09:09:07 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Bokde",
"Dheeraj kumar",
""
],
[
"Girase",
"Sheetal",
""
],
[
"Mukhopadhyay",
"Debajyoti",
""
]
] | TITLE: An Item-Based Collaborative Filtering using Dimensionality Reduction
Techniques on Mahout Framework
ABSTRACT: Collaborative Filtering is the most widely used prediction technique in
Recommendation Systems. Most current CF recommender systems maintain a
single-criterion user rating in the user-item matrix. However, recent studies
indicate that recommender systems relying on multiple criteria can improve the
prediction accuracy of recommendations by considering user
preferences on multiple aspects of items. This gives birth to Multi Criteria
Collaborative Filtering (MC CF). In MC CF, users rate multiple aspects
of an item in new dimensions, thereby increasing the size of the rating matrix
and aggravating the sparsity and scalability problems. Appropriate dimensionality
reduction techniques are thus needed to address these challenges by reducing the
dimension of the user-item rating matrix, improving the prediction accuracy and
efficiency of the CF recommender system. The process of dimensionality reduction
maps the high dimensional input space into lower dimensional space. Thus, the
objective of this paper is to propose an efficient MC CF algorithm using
dimensionality reduction technique to improve the recommendation quality and
prediction accuracy. Dimensionality reduction techniques such as Singular Value
Decomposition and Principal Component Analysis are used to solve the
scalability and alleviate the sparsity problems in overall rating. The proposed
MC CF approach will be implemented using Apache Mahout, which allows processing
of massive dataset stored in distributed/non-distributed file system.
| no_new_dataset | 0.950732 |
1503.06575 | Sanja Brdar | Sanja Brdar, Katarina Gavric, Dubravko Culibrk, Vladimir Crnojevic | Unveiling Spatial Epidemiology of HIV with Mobile Phone Data | 13 pages, 4 figures, 2 tables | null | null | null | stat.AP cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An increasing amount of geo-referenced mobile phone data enables the
identification of behavioral patterns, habits and movements of people. With
this data, we can extract the knowledge potentially useful for many
applications including the one tackled in this study - understanding spatial
variation of epidemics. We explored the datasets collected by a cell phone
service provider and linked them to spatial HIV prevalence rates estimated from
publicly available surveys. For that purpose, 224 features were extracted from
mobility and connectivity traces and related to the level of HIV epidemic in 50
Ivory Coast departments. By means of regression models, we evaluated predictive
ability of extracted features. Several models predicted HIV prevalence that are
highly correlated (>0.7) with actual values. Through contribution analysis we
identified key elements that impact the rate of infections. Our findings
indicate that night connectivity and activity, spatial area covered by users
and overall migrations are strongly linked to HIV. By visualizing the
communication and mobility flows, we strived to explain the spatial structure
of epidemics. We discovered that strong ties and hubs in communication and
mobility align with HIV hot spots.
| [
{
"version": "v1",
"created": "Mon, 23 Mar 2015 09:47:16 GMT"
}
] | 2015-03-24T00:00:00 | [
[
"Brdar",
"Sanja",
""
],
[
"Gavric",
"Katarina",
""
],
[
"Culibrk",
"Dubravko",
""
],
[
"Crnojevic",
"Vladimir",
""
]
] | TITLE: Unveiling Spatial Epidemiology of HIV with Mobile Phone Data
ABSTRACT: An increasing amount of geo-referenced mobile phone data enables the
identification of behavioral patterns, habits and movements of people. With
this data, we can extract the knowledge potentially useful for many
applications including the one tackled in this study - understanding spatial
variation of epidemics. We explored the datasets collected by a cell phone
service provider and linked them to spatial HIV prevalence rates estimated from
publicly available surveys. For that purpose, 224 features were extracted from
mobility and connectivity traces and related to the level of HIV epidemic in 50
Ivory Coast departments. By means of regression models, we evaluated predictive
ability of extracted features. Several models predicted HIV prevalence that are
highly correlated (>0.7) with actual values. Through contribution analysis we
identified key elements that impact the rate of infections. Our findings
indicate that night connectivity and activity, spatial area covered by users
and overall migrations are strongly linked to HIV. By visualizing the
communication and mobility flows, we strived to explain the spatial structure
of epidemics. We discovered that strong ties and hubs in communication and
mobility align with HIV hot spots.
| no_new_dataset | 0.942823 |
1503.05947 | Yanlai Chen | Yanlai Chen | Reduced Basis Decomposition: a Certified and Fast Lossy Data Compression
Algorithm | null | null | null | null | math.NA cs.AI cs.CV cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dimension reduction is often needed in the area of data mining. The goal of
these methods is to map the given high-dimensional data into a low-dimensional
space preserving certain properties of the initial data. There are two kinds of
techniques for this purpose. The first, projective methods, builds an explicit
linear projection from the high-dimensional space to the low-dimensional one.
On the other hand, the nonlinear methods utilize a nonlinear and implicit
mapping between the two spaces. In both cases, the methods considered in
literature have usually relied on computationally very intensive matrix
factorizations, frequently the Singular Value Decomposition (SVD). The
computational burden of SVD quickly renders these dimension reduction methods
infeasible thanks to the ever-increasing sizes of the practical datasets.
In this paper, we present a new decomposition strategy, Reduced Basis
Decomposition (RBD), which is inspired by the Reduced Basis Method (RBM). Given
$X$ the high-dimensional data, the method approximates it by $Y \, T (\approx
X)$ with $Y$ being the low-dimensional surrogate and $T$ the transformation
matrix. $Y$ is obtained through a greedy algorithm thus extremely efficient. In
fact, it is significantly faster than SVD with comparable accuracy. $T$ can be
computed on the fly. Moreover, unlike many compression algorithms, it easily
finds the mapping for an arbitrary ``out-of-sample'' vector and it comes with
an ``error indicator'' certifying the accuracy of the compression. Numerical
results are shown validating these claims.
| [
{
"version": "v1",
"created": "Thu, 19 Mar 2015 21:10:57 GMT"
}
] | 2015-03-23T00:00:00 | [
[
"Chen",
"Yanlai",
""
]
] | TITLE: Reduced Basis Decomposition: a Certified and Fast Lossy Data Compression
Algorithm
ABSTRACT: Dimension reduction is often needed in the area of data mining. The goal of
these methods is to map the given high-dimensional data into a low-dimensional
space preserving certain properties of the initial data. There are two kinds of
techniques for this purpose. The first, projective methods, builds an explicit
linear projection from the high-dimensional space to the low-dimensional one.
On the other hand, the nonlinear methods utilize a nonlinear and implicit
mapping between the two spaces. In both cases, the methods considered in
literature have usually relied on computationally very intensive matrix
factorizations, frequently the Singular Value Decomposition (SVD). The
computational burden of SVD quickly renders these dimension reduction methods
infeasible thanks to the ever-increasing sizes of the practical datasets.
In this paper, we present a new decomposition strategy, Reduced Basis
Decomposition (RBD), which is inspired by the Reduced Basis Method (RBM). Given
$X$ the high-dimensional data, the method approximates it by $Y \, T (\approx
X)$ with $Y$ being the low-dimensional surrogate and $T$ the transformation
matrix. $Y$ is obtained through a greedy algorithm thus extremely efficient. In
fact, it is significantly faster than SVD with comparable accuracy. $T$ can be
computed on the fly. Moreover, unlike many compression algorithms, it easily
finds the mapping for an arbitrary ``out-of-sample'' vector and it comes with
an ``error indicator'' certifying the accuracy of the compression. Numerical
results are shown validating these claims.
| no_new_dataset | 0.947332 |
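The RBD record above describes a greedy approximation $X \approx Y\,T$ with a low-dimensional surrogate $Y$ and an error indicator. The abstract does not spell out the greedy selection rule, so the Python sketch below is only a plausible reading (pick the currently worst-approximated column, orthonormalize it into $Y$, report residual norms); it is not the authors' exact algorithm.

```python
import numpy as np

def reduced_basis_decomposition(X, k, tol=1e-8):
    """Greedy approximation X ~= Y @ T with an orthonormal basis Y.

    Sketch only: the selection rule (largest residual column) and the
    residual-norm "error indicator" are assumptions, not the paper's exact
    algorithm.
    """
    n, m = X.shape
    Y = np.zeros((n, 0))
    residual = X.copy()
    errors = []
    for _ in range(k):
        # Pick the column that is currently worst approximated.
        j = np.argmax(np.linalg.norm(residual, axis=0))
        err = np.linalg.norm(residual[:, j])
        if err < tol:
            break
        # Orthonormalize the chosen column into the basis.
        v = residual[:, j] / err
        Y = np.column_stack([Y, v])
        # Remove the component explained by the new basis vector from all columns.
        residual = residual - np.outer(v, v @ residual)
        errors.append(err)
    T = Y.T @ X          # transformation matrix, computable "on the fly"
    return Y, T, errors

# Usage: compress a random low-rank matrix and check the reconstruction error.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 500))
Y, T, errors = reduced_basis_decomposition(X, k=10)
print(np.linalg.norm(X - Y @ T) / np.linalg.norm(X))  # relative reconstruction error
```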
1202.2369 | Georgios Zervas | John W. Byers, Michael Mitzenmacher, Georgios Zervas | The Groupon Effect on Yelp Ratings: A Root Cause Analysis | Submitted to ACM EC 2012 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Daily deals sites such as Groupon offer deeply discounted goods and services
to tens of millions of customers through geographically targeted daily e-mail
marketing campaigns. In our prior work we observed that a negative side effect
for merchants using Groupons is that, on average, their Yelp ratings decline
significantly. However, this previous work was essentially observational,
rather than explanatory. In this work, we rigorously consider and evaluate
various hypotheses about underlying consumer and merchant behavior in order to
understand this phenomenon, which we dub the Groupon effect. We use statistical
analysis and mathematical modeling, leveraging a dataset we collected spanning
tens of thousands of daily deals and over 7 million Yelp reviews. In
particular, we investigate hypotheses such as whether Groupon subscribers are
more critical than their peers, or whether some fraction of Groupon merchants
provide significantly worse service to customers using Groupons. We suggest an
additional novel hypothesis: reviews from Groupon subscribers are lower on
average because such reviews correspond to real, unbiased customers, while the
body of reviews on Yelp contains some fraction of reviews from biased or even
potentially fake sources. Although we focus on a specific question, our work
provides broad insights into both consumer and merchant behavior within the
daily deals marketplace.
| [
{
"version": "v1",
"created": "Fri, 10 Feb 2012 21:03:11 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Byers",
"John W.",
""
],
[
"Mitzenmacher",
"Michael",
""
],
[
"Zervas",
"Georgios",
""
]
] | TITLE: The Groupon Effect on Yelp Ratings: A Root Cause Analysis
ABSTRACT: Daily deals sites such as Groupon offer deeply discounted goods and services
to tens of millions of customers through geographically targeted daily e-mail
marketing campaigns. In our prior work we observed that a negative side effect
for merchants using Groupons is that, on average, their Yelp ratings decline
significantly. However, this previous work was essentially observational,
rather than explanatory. In this work, we rigorously consider and evaluate
various hypotheses about underlying consumer and merchant behavior in order to
understand this phenomenon, which we dub the Groupon effect. We use statistical
analysis and mathematical modeling, leveraging a dataset we collected spanning
tens of thousands of daily deals and over 7 million Yelp reviews. In
particular, we investigate hypotheses such as whether Groupon subscribers are
more critical than their peers, or whether some fraction of Groupon merchants
provide significantly worse service to customers using Groupons. We suggest an
additional novel hypothesis: reviews from Groupon subscribers are lower on
average because such reviews correspond to real, unbiased customers, while the
body of reviews on Yelp contains some fraction of reviews from biased or even
potentially fake sources. Although we focus on a specific question, our work
provides broad insights into both consumer and merchant behavior within the
daily deals marketplace.
| new_dataset | 0.958963 |
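The Groupon/Yelp record above evaluates hypotheses about whether reviews from Groupon subscribers are lower on average. A minimal sketch of that kind of group comparison, on synthetic ratings with made-up group sizes and distributions (not the authors' data or methodology), is a Welch two-sample test:

```python
import numpy as np
from scipy import stats

# Hypothetical data: star ratings split by whether the review mentions a Groupon.
# Group sizes, distributions, and the test itself are illustrative only.
rng = np.random.default_rng(1)
groupon_ratings = rng.choice([1, 2, 3, 4, 5], size=400, p=[0.08, 0.12, 0.25, 0.30, 0.25])
other_ratings = rng.choice([1, 2, 3, 4, 5], size=4000, p=[0.05, 0.08, 0.17, 0.30, 0.40])

# Welch's t-test: do the two groups have different mean ratings?
t_stat, p_value = stats.ttest_ind(groupon_ratings, other_ratings, equal_var=False)
print(f"mean(Groupon)={groupon_ratings.mean():.2f}  mean(other)={other_ratings.mean():.2f}")
print(f"Welch t={t_stat:.2f}, p={p_value:.3g}")
```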
1203.0059 | Prasang Upadhyaya | Prasang Upadhyaya, Magdalena Balazinska, Dan Suciu | How to Price Shared Optimizations in the Cloud | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 6, pp.
562-573 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-management-as-a-service systems are increasingly being used in
collaborative settings, where multiple users access common datasets. Cloud
providers have the choice to implement various optimizations, such as indexing
or materialized views, to accelerate queries over these datasets. Each
optimization carries a cost and may benefit multiple users. This creates a
major challenge: how to select which optimizations to perform and how to share
their cost among users. The problem is especially challenging when users are
selfish and will only report their true values for different optimizations if
doing so maximizes their utility. In this paper, we present a new approach for
selecting and pricing shared optimizations by using Mechanism Design. We first
show how to apply the Shapley Value Mechanism to the simple case of selecting
and pricing additive optimizations, assuming an offline game where all users
access the service for the same time-period. Second, we extend the approach to
online scenarios where users come and go. Finally, we consider the case of
substitutive optimizations. We show analytically that our mechanisms induce
truthfulness and recover the optimization costs. We also show experimentally
that our mechanisms yield higher utility than the state-of-the-art approach
based on regret accumulation.
| [
{
"version": "v1",
"created": "Thu, 1 Mar 2012 00:17:40 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Upadhyaya",
"Prasang",
""
],
[
"Balazinska",
"Magdalena",
""
],
[
"Suciu",
"Dan",
""
]
] | TITLE: How to Price Shared Optimizations in the Cloud
ABSTRACT: Data-management-as-a-service systems are increasingly being used in
collaborative settings, where multiple users access common datasets. Cloud
providers have the choice to implement various optimizations, such as indexing
or materialized views, to accelerate queries over these datasets. Each
optimization carries a cost and may benefit multiple users. This creates a
major challenge: how to select which optimizations to perform and how to share
their cost among users. The problem is especially challenging when users are
selfish and will only report their true values for different optimizations if
doing so maximizes their utility. In this paper, we present a new approach for
selecting and pricing shared optimizations by using Mechanism Design. We first
show how to apply the Shapley Value Mechanism to the simple case of selecting
and pricing additive optimizations, assuming an offline game where all users
access the service for the same time-period. Second, we extend the approach to
online scenarios where users come and go. Finally, we consider the case of
substitutive optimizations. We show analytically that our mechanisms induce
truthfulness and recover the optimization costs. We also show experimentally
that our mechanisms yield higher utility than the state-of-the-art approach
based on regret accumulation.
| no_new_dataset | 0.946151 |
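The record above applies the Shapley Value Mechanism to share the cost of optimizations among users. A brute-force sketch of Shapley cost shares on a toy instance (names, costs, and the coalition cost function are invented for illustration, and the enumeration does not scale):

```python
import math
from itertools import permutations

def shapley_cost_shares(users, cost_fn):
    """Exact Shapley cost shares: average each user's marginal cost over all orderings.

    Exponential in the number of users, so this only illustrates the
    cost-sharing idea behind the Shapley Value Mechanism, not a scalable
    implementation of the paper's mechanisms.
    """
    users = list(users)
    shares = {u: 0.0 for u in users}
    for order in permutations(users):
        coalition = set()
        for u in order:
            before = cost_fn(coalition)
            coalition.add(u)
            shares[u] += cost_fn(coalition) - before
    n_orders = math.factorial(len(users))
    return {u: s / n_orders for u, s in shares.items()}

# Toy setting: two optimizations with fixed build costs; a coalition pays for
# every optimization that at least one of its members needs.
OPT_COST = {"index": 90.0, "view": 30.0}
NEEDS = {"alice": {"index"}, "bob": {"index", "view"}, "carol": {"view"}}

def coalition_cost(coalition):
    needed = set().union(*(NEEDS[u] for u in coalition)) if coalition else set()
    return sum(OPT_COST[o] for o in needed)

print(shapley_cost_shares(NEEDS, coalition_cost))  # alice 45, bob 60, carol 15
```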
1203.0453 | Song Liu Mr | Song Liu, Makoto Yamada, Nigel Collier, Masashi Sugiyama | Change-Point Detection in Time-Series Data by Relative Density-Ratio
Estimation | null | null | 10.1016/j.neunet.2013.01.012 | null | stat.ML cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of change-point detection is to discover abrupt property
changes lying behind time-series data. In this paper, we present a novel
statistical change-point detection algorithm based on non-parametric divergence
estimation between time-series samples from two retrospective segments. Our
method uses the relative Pearson divergence as a divergence measure, and it is
accurately and efficiently estimated by a method of direct density-ratio
estimation. Through experiments on artificial and real-world datasets including
human-activity sensing, speech, and Twitter messages, we demonstrate the
usefulness of the proposed method.
| [
{
"version": "v1",
"created": "Fri, 2 Mar 2012 13:12:03 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Jan 2013 06:44:58 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Liu",
"Song",
""
],
[
"Yamada",
"Makoto",
""
],
[
"Collier",
"Nigel",
""
],
[
"Sugiyama",
"Masashi",
""
]
] | TITLE: Change-Point Detection in Time-Series Data by Relative Density-Ratio
Estimation
ABSTRACT: The objective of change-point detection is to discover abrupt property
changes lying behind time-series data. In this paper, we present a novel
statistical change-point detection algorithm based on non-parametric divergence
estimation between time-series samples from two retrospective segments. Our
method uses the relative Pearson divergence as a divergence measure, and it is
accurately and efficiently estimated by a method of direct density-ratio
estimation. Through experiments on artificial and real-world datasets including
human-activity sensing, speech, and Twitter messages, we demonstrate the
usefulness of the proposed method.
| no_new_dataset | 0.953144 |
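The record above scores change points with the relative Pearson divergence estimated by direct density-ratio estimation. The sketch below follows the usual RuLSIF-style formulation on two adjacent windows of a 1-D series; hyperparameter selection, the symmetrized score, and the paper's exact subsequence construction are omitted.

```python
import numpy as np

def rulsif_divergence(X_p, X_q, alpha=0.1, sigma=1.0, lam=0.1):
    """RuLSIF-style estimate of the relative Pearson divergence PE_alpha(P, Q).

    Condensed sketch of the standard formulation; not the paper's full
    estimator or model-selection procedure.
    """
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    centers = X_p                              # kernel centers from the numerator sample
    Kp, Kq = kernel(X_p, centers), kernel(X_q, centers)
    H = alpha * Kp.T @ Kp / len(X_p) + (1 - alpha) * Kq.T @ Kq / len(X_q)
    h = Kp.mean(axis=0)
    theta = np.linalg.solve(H + lam * np.eye(len(centers)), h)
    g_p, g_q = Kp @ theta, Kq @ theta          # fitted relative density ratio
    return (-alpha * (g_p ** 2).mean() / 2
            - (1 - alpha) * (g_q ** 2).mean() / 2
            + g_p.mean() - 0.5)

# Change score over a 1-D series: divergence between two adjacent windows.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])
w = 50
scores = [rulsif_divergence(series[t - w:t, None], series[t:t + w, None])
          for t in range(w, len(series) - w)]
print(int(np.argmax(scores)) + w)   # peaks near the true change at t = 300
```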
1203.3453 | Davide Proserpio | Davide Proserpio, Sharon Goldberg and Frank McSherry | Calibrating Data to Sensitivity in Private Data Analysis | 17 pages | Calibrating Data to Sensitivity in Private Data Analysis
Proserpio, Davide, Sharon Goldberg, and Frank McSherry. "Calibrating Data to
Sensitivity in Private Data Analysis." Proceedings of the VLDB Endowment 7.8
(2014) | null | null | cs.CR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach to differentially private computation in which one
does not scale up the magnitude of noise for challenging queries, but rather
scales down the contributions of challenging records. While scaling down all
records uniformly is equivalent to scaling up the noise magnitude, we show that
scaling records non-uniformly can result in substantially higher accuracy by
bypassing the worst-case requirements of differential privacy for the noise
magnitudes. This paper details the data analysis platform wPINQ, which
generalizes the Privacy Integrated Query (PINQ) to weighted datasets. Using a
few simple operators (including a non-uniformly scaling Join operator) wPINQ
can reproduce (and improve) several recent results on graph analysis and
introduce new generalizations (e.g., counting triangles with given degrees). We
also show how to integrate probabilistic inference techniques to synthesize
datasets respecting more complicated (and less easily interpreted)
measurements.
| [
{
"version": "v1",
"created": "Thu, 15 Mar 2012 19:45:04 GMT"
},
{
"version": "v2",
"created": "Fri, 10 May 2013 19:17:28 GMT"
},
{
"version": "v3",
"created": "Mon, 13 May 2013 02:36:12 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Feb 2014 20:04:56 GMT"
},
{
"version": "v5",
"created": "Sun, 4 May 2014 20:20:24 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Proserpio",
"Davide",
""
],
[
"Goldberg",
"Sharon",
""
],
[
"McSherry",
"Frank",
""
]
] | TITLE: Calibrating Data to Sensitivity in Private Data Analysis
ABSTRACT: We present an approach to differentially private computation in which one
does not scale up the magnitude of noise for challenging queries, but rather
scales down the contributions of challenging records. While scaling down all
records uniformly is equivalent to scaling up the noise magnitude, we show that
scaling records non-uniformly can result in substantially higher accuracy by
bypassing the worst-case requirements of differential privacy for the noise
magnitudes. This paper details the data analysis platform wPINQ, which
generalizes the Privacy Integrated Query (PINQ) to weighted datasets. Using a
few simple operators (including a non-uniformly scaling Join operator) wPINQ
can reproduce (and improve) several recent results on graph analysis and
introduce new generalizations (e.g., counting triangles with given degrees). We
also show how to integrate probabilistic inference techniques to synthesize
datasets respecting more complicated (and less easily interpreted)
measurements.
| no_new_dataset | 0.947381 |
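The wPINQ record above scales down the contributions of challenging records instead of scaling up the noise. The sketch below only illustrates the general idea of calibrating data (clipping per-record weight so the noise scale can stay fixed); it is not an implementation of wPINQ, its non-uniform scaling, or its Join operator.

```python
import numpy as np

def private_weighted_count(values, contribution_cap=1.0, epsilon=0.5, rng=None):
    """Noisy sum where each record's contribution is clipped to a fixed cap.

    Illustrates calibrating the data (clipping) rather than the noise; this is
    a generic clipped-sum Laplace mechanism, *not* wPINQ.
    """
    rng = rng or np.random.default_rng()
    clipped = np.clip(values, 0.0, contribution_cap)   # per-record contribution <= cap
    sensitivity = contribution_cap                      # one record changes the sum by at most the cap
    noise = rng.laplace(scale=sensitivity / epsilon)
    return clipped.sum() + noise

rng = np.random.default_rng(2)
triangle_counts = rng.poisson(3.0, size=1000).astype(float)  # e.g. per-node triangle participation
print(private_weighted_count(triangle_counts, contribution_cap=5.0, epsilon=0.5, rng=rng))
```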
1203.3744 | Stelvio Cimato | Carlo Blundo and Stelvio Cimato | Constrained Role Mining | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Role Based Access Control (RBAC) is a very popular access control model, for
a long time investigated and widely deployed in the security architecture of
different enterprises. To implement RBAC, roles first have to be identified
within the considered organization. Usually the process of (automatically)
defining the roles in a bottom-up way, starting from the permissions assigned
to each user, is called {\it role mining}. In the literature, the role mining
problem has been formally analyzed and several techniques have been proposed in
order to obtain a set of valid roles.
Recently, the problem of defining different kinds of constraints on the number
and the size of the roles included in the resulting role set has been
addressed. In this paper we provide a formal definition of the role mining
problem under the cardinality constraint, i.e. restricting the maximum number
of permissions that can be included in a role. We discuss formally the
computational complexity of the problem and propose a novel heuristic.
Furthermore we present experimental results obtained after the application of
the proposed heuristic on both real and synthetic datasets, and compare the
resulting performance to previous proposals.
| [
{
"version": "v1",
"created": "Fri, 16 Mar 2012 15:46:06 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Blundo",
"Carlo",
""
],
[
"Cimato",
"Stelvio",
""
]
] | TITLE: Constrained Role Mining
ABSTRACT: Role Based Access Control (RBAC) is a very popular access control model, for
a long time investigated and widely deployed in the security architecture of
different enterprises. To implement RBAC, roles first have to be identified
within the considered organization. Usually the process of (automatically)
defining the roles in a bottom-up way, starting from the permissions assigned
to each user, is called {\it role mining}. In the literature, the role mining
problem has been formally analyzed and several techniques have been proposed in
order to obtain a set of valid roles.
Recently, the problem of defining different kinds of constraints on the number
and the size of the roles included in the resulting role set has been
addressed. In this paper we provide a formal definition of the role mining
problem under the cardinality constraint, i.e. restricting the maximum number
of permissions that can be included in a role. We discuss formally the
computational complexity of the problem and propose a novel heuristic.
Furthermore we present experimental results obtained after the application of
the proposed heuristic on both real and synthetic datasets, and compare the
resulting performance to previous proposals.
| no_new_dataset | 0.948298 |
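The record above mines roles under a cardinality constraint on role size. The sketch below is a naive baseline (split each user's permissions into chunks of at most k and share identical chunks across users), not the paper's heuristic; it only makes the constrained problem setting concrete.

```python
from collections import defaultdict

def naive_constrained_role_mining(user_perms, k):
    """Cover each user's permission set with roles of at most k permissions.

    Naive baseline for illustration: the paper proposes a more sophisticated
    heuristic with far fewer roles on real data.
    """
    roles = {}                       # frozenset(permissions) -> role id
    assignment = defaultdict(list)   # user -> list of role ids
    for user, perms in user_perms.items():
        perms = sorted(perms)
        for i in range(0, len(perms), k):
            chunk = frozenset(perms[i:i + k])
            if chunk not in roles:
                roles[chunk] = f"r{len(roles)}"
            assignment[user].append(roles[chunk])
    return roles, dict(assignment)

users = {
    "u1": {"read", "write", "deploy"},
    "u2": {"read", "write"},
    "u3": {"read", "write", "deploy", "audit"},
}
roles, assignment = naive_constrained_role_mining(users, k=2)
print(roles)
print(assignment)
```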
1203.4903 | Edith Cohen | Edith Cohen | Distance Queries from Sampled Data: Accurate and Efficient | 13 pages; This is a full version of a KDD 2014 paper | null | null | null | cs.DS cs.DB math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distance queries are a basic tool in data analysis. They are used for
detection and localization of change for the purpose of anomaly detection,
monitoring, or planning. Distance queries are particularly useful when data
sets such as measurements, snapshots of a system, content, traffic matrices,
and activity logs are collected repeatedly.
Random sampling, which can be efficiently performed over streamed or
distributed data, is an important tool for scalable data analysis. The sample
constitutes an extremely flexible summary, which naturally supports domain
queries and scalable estimation of statistics, which can be specified after the
sample is generated. The effectiveness of a sample as a summary, however,
hinges on the estimators we have.
We derive novel estimators for estimating $L_p$ distance from sampled data.
Our estimators apply with the most common weighted sampling schemes: Poisson
Probability Proportional to Size (PPS) and its fixed sample size variants. They
also apply when the samples of different data sets are independent or
coordinated. Our estimators are admissible (Pareto optimal in terms of
variance) and have compelling properties.
We study the performance of our Manhattan and Euclidean distance ($p=1,2$)
estimators on diverse datasets, demonstrating scalability and accuracy even
when a small fraction of the data is sampled. Our work, for the first time,
facilitates effective distance estimation over sampled data.
| [
{
"version": "v1",
"created": "Thu, 22 Mar 2012 08:06:09 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Feb 2013 20:10:58 GMT"
},
{
"version": "v3",
"created": "Sun, 8 Jun 2014 13:06:42 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Cohen",
"Edith",
""
]
] | TITLE: Distance Queries from Sampled Data: Accurate and Efficient
ABSTRACT: Distance queries are a basic tool in data analysis. They are used for
detection and localization of change for the purpose of anomaly detection,
monitoring, or planning. Distance queries are particularly useful when data
sets such as measurements, snapshots of a system, content, traffic matrices,
and activity logs are collected repeatedly.
Random sampling, which can be efficiently performed over streamed or
distributed data, is an important tool for scalable data analysis. The sample
constitutes an extremely flexible summary, which naturally supports domain
queries and scalable estimation of statistics, which can be specified after the
sample is generated. The effectiveness of a sample as a summary, however,
hinges on the estimators we have.
We derive novel estimators for estimating $L_p$ distance from sampled data.
Our estimators apply with the most common weighted sampling schemes: Poisson
Probability Proportional to Size (PPS) and its fixed sample size variants. They
also apply when the samples of different data sets are independent or
coordinated. Our estimators are admissible (Pareto optimal in terms of
variance) and have compelling properties.
We study the performance of our Manhattan and Euclidean distance ($p=1,2$)
estimators on diverse datasets, demonstrating scalability and accuracy even
when a small fraction of the data is sampled. Our work, for the first time,
facilitates effective distance estimation over sampled data.
| no_new_dataset | 0.939471 |
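The record above derives estimators of $L_p$ distance from weighted samples. The sketch below is a plain Horvitz-Thompson estimate of the $L_1$ distance under Poisson PPS-style sampling, assuming both values of a sampled key are observed; the harder setting of independent per-dataset samples, which the paper actually addresses, requires the paper's estimators.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
a = rng.pareto(2.0, n)            # two weighted datasets over the same key space
b = a + rng.normal(0.0, 0.2, n)

# Poisson PPS-style inclusion probabilities proportional to max(|a_i|, |b_i|).
size = np.maximum(np.abs(a), np.abs(b))
p = np.minimum(1.0, 500 * size / size.sum())   # expected sample size around 500
sampled = rng.random(n) < p

# Horvitz-Thompson estimate of the L1 distance from the sampled keys only.
# Assumes both coordinates of a sampled key are observed, which sidesteps the
# independent-samples setting the paper addresses.
estimate = np.sum(np.abs(a[sampled] - b[sampled]) / p[sampled])
print(estimate, np.abs(a - b).sum())
```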
1203.5126 | VIkas Kawadia | Vikas Kawadia and Sameet Sreenivasan | Online detection of temporal communities in evolving networks by
estrangement confinement | null | Scientific Reports 2, Article number: 794, Mar 2012 | 10.1038/srep00794 | null | cs.SI cond-mat.stat-mech physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal communities result from a consistent partitioning of nodes across
multiple snapshots of an evolving complex network that can help uncover how
dense clusters in a network emerge, combine, split and decay with time. Current
methods for finding communities in a single snapshot are not straightforwardly
generalizable to finding temporal communities since the quality functions used
for finding static communities have highly degenerate landscapes, and the
eventual partition chosen among the many partitions of similar quality is
highly sensitive to small changes in the network. To reliably detect temporal
communities we need not only to find a good community partition in a given
snapshot but also ensure that it bears some similarity to the partition(s)
found in immediately preceding snapshots. We present a new measure of partition
distance called "estrangement" motivated by the inertia of inter-node
relationships which, when incorporated into the measurement of partition
quality, facilitates the detection of meaningful temporal communities.
Specifically, we propose the estrangement confinement method, which postulates
that neighboring nodes in a community prefer to continue to share community
affiliation as the network evolves. Constraining estrangement enables us to
find meaningful temporal communities at various degrees of temporal smoothness
in diverse real-world datasets. Specifically, we study the evolution of voting
behavior of senators in the United States Congress, the evolution of proximity
in human mobility datasets, and the detection of evolving communities in
synthetic networks that are otherwise hard to find. Estrangement confinement
thus provides a principled approach to uncovering temporal communities in
evolving networks.
| [
{
"version": "v1",
"created": "Thu, 22 Mar 2012 21:03:28 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Kawadia",
"Vikas",
""
],
[
"Sreenivasan",
"Sameet",
""
]
] | TITLE: Online detection of temporal communities in evolving networks by
estrangement confinement
ABSTRACT: Temporal communities result from a consistent partitioning of nodes across
multiple snapshots of an evolving complex network that can help uncover how
dense clusters in a network emerge, combine, split and decay with time. Current
methods for finding communities in a single snapshot are not straightforwardly
generalizable to finding temporal communities since the quality functions used
for finding static communities have highly degenerate landscapes, and the
eventual partition chosen among the many partitions of similar quality is
highly sensitive to small changes in the network. To reliably detect temporal
communities we need not only to find a good community partition in a given
snapshot but also ensure that it bears some similarity to the partition(s)
found in immediately preceding snapshots. We present a new measure of partition
distance called "estrangement" motivated by the inertia of inter-node
relationships which, when incorporated into the measurement of partition
quality, facilitates the detection of meaningful temporal communities.
Specifically, we propose the estrangement confinement method, which postulates
that neighboring nodes in a community prefer to continue to share community
affiliation as the network evolves. Constraining estrangement enables us to
find meaningful temporal communities at various degrees of temporal smoothness
in diverse real-world datasets. Specifically, we study the evolution of voting
behavior of senators in the United States Congress, the evolution of proximity
in human mobility datasets, and the detection of evolving communities in
synthetic networks that are otherwise hard to find. Estrangement confinement
thus provides a principled approach to uncovering temporal communities in
evolving networks.
| no_new_dataset | 0.94801 |
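The record above constrains "estrangement", a distance between consecutive partitions. One plausible reading of that measure, the fraction of previously intra-community edges that end up split across communities, is sketched below; the paper's formal definition and its use as a constraint during community detection are more involved.

```python
def estrangement(prev_edges, prev_labels, curr_labels):
    """Fraction of previously intra-community edges that became inter-community.

    One plausible reading of the 'estrangement' idea, for illustration only.
    """
    prev_intra = [(u, v) for u, v in prev_edges
                  if prev_labels[u] == prev_labels[v]]
    if not prev_intra:
        return 0.0
    broken = sum(1 for u, v in prev_intra if curr_labels[u] != curr_labels[v])
    return broken / len(prev_intra)

edges_t0 = [(1, 2), (2, 3), (3, 4), (4, 5)]
labels_t0 = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B"}
labels_t1 = {1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}   # node 3 switched community
print(estrangement(edges_t0, labels_t0, labels_t1))     # 1/3: edge (2,3) is now split
```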
1203.6744 | Matteo Zignani | Sabrina Gaito, Matteo Zignani, Gian Paolo Rossi, Alessandra Sala, Xiao
Wang, Haitao Zheng and Ben Y. Zhao | On the Bursty Evolution of Online Social Networks | 13 pages, 7 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The high level of dynamics in today's online social networks (OSNs) creates
new challenges for their infrastructures and providers. In particular, dynamics
involving edge creation have direct implications on strategies for resource
allocation, data partitioning and replication. Understanding network dynamics
in the context of physical time is a critical first step towards a predictive
approach to infrastructure management in OSNs. Despite increasing efforts
to study social network dynamics, current analyses mainly focus on change over
time of static metrics computed on snapshots of social graphs. The limited
prior work models network dynamics with respect to a logical clock. In this
paper, we present results of analyzing a large timestamped dataset describing
the initial growth and evolution of Renren, the leading social network in
China. We analyze and model the burstiness of the link creation process, using the
second derivative, i.e. the acceleration of the degree. This allows us to
detect bursts, and to characterize the social activity of an OSN user as one of
four phases: acceleration at the beginning of an activity burst, where the link
creation rate is increasing; deceleration, when the burst is ending and the link
creation process is slowing; cruising, when node activity is in a steady state;
and complete inactivity.
| [
{
"version": "v1",
"created": "Fri, 30 Mar 2012 08:49:22 GMT"
},
{
"version": "v2",
"created": "Fri, 25 May 2012 12:21:48 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Gaito",
"Sabrina",
""
],
[
"Zignani",
"Matteo",
""
],
[
"Rossi",
"Gian Paolo",
""
],
[
"Sala",
"Alessandra",
""
],
[
"Wang",
"Xiao",
""
],
[
"Zheng",
"Haitao",
""
],
[
"Zhao",
"Ben Y.",
""
]
] | TITLE: On the Bursty Evolution of Online Social Networks
ABSTRACT: The high level of dynamics in today's online social networks (OSNs) creates
new challenges for their infrastructures and providers. In particular, dynamics
involving edge creation have direct implications on strategies for resource
allocation, data partitioning and replication. Understanding network dynamics
in the context of physical time is a critical first step towards a predictive
approach to infrastructure management in OSNs. Despite increasing efforts
to study social network dynamics, current analyses mainly focus on change over
time of static metrics computed on snapshots of social graphs. The limited
prior work models network dynamics with respect to a logical clock. In this
paper, we present results of analyzing a large timestamped dataset describing
the initial growth and evolution of Renren, the leading social network in
China. We analyze and model the burstiness of the link creation process, using the
second derivative, i.e. the acceleration of the degree. This allows us to
detect bursts, and to characterize the social activity of an OSN user as one of
four phases: acceleration at the beginning of an activity burst, where the link
creation rate is increasing; deceleration, when the burst is ending and the link
creation process is slowing; cruising, when node activity is in a steady state;
and complete inactivity.
| no_new_dataset | 0.95275 |
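The record above classifies user activity by the sign of the degree "acceleration" (second derivative of the degree time series). A minimal sketch with illustrative thresholds and phase rules, which are assumptions rather than the paper's exact criteria:

```python
import numpy as np

def activity_phases(degree, eps=0.5):
    """Label each time step by the sign of the degree 'acceleration'.

    The second difference of a node's degree time series separates
    acceleration, deceleration, cruising, and inactivity. Thresholds and the
    exact phase rules here are illustrative assumptions.
    """
    degree = np.asarray(degree, dtype=float)
    velocity = np.diff(degree, n=1)
    acceleration = np.diff(degree, n=2)
    phases = []
    for t in range(len(acceleration)):
        v, a = velocity[t + 1], acceleration[t]
        if v == 0 and a == 0:
            phases.append("inactive")
        elif a > eps:
            phases.append("accelerating")
        elif a < -eps:
            phases.append("decelerating")
        else:
            phases.append("cruising")
    return phases

degree = [0, 0, 1, 3, 7, 12, 16, 18, 19, 19, 19]   # a burst that starts, peaks, and dies out
print(activity_phases(degree))
```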
1205.0192 | Anthony J Cox | Anthony J. Cox, Markus J. Bauer, Tobias Jakobi and Giovanna Rosone | Large-scale compression of genomic sequence databases with the
Burrows-Wheeler transform | Version here is as submitted to Bioinformatics and is same as the
previously archived version. This submission registers the fact that the
advanced access version is now available at
http://bioinformatics.oxfordjournals.org/content/early/2012/05/02/bioinformatics.bts173.abstract
. Bioinformatics should be considered as the original place of publication of
this article, please cite accordingly | null | 10.1093/bioinformatics/bts173 | null | cs.DS q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation
The Burrows-Wheeler transform (BWT) is the foundation of many algorithms for
compression and indexing of text data, but the cost of computing the BWT of
very large string collections has prevented these techniques from being widely
applied to the large sets of sequences often encountered as the outcome of DNA
sequencing experiments. In previous work, we presented a novel algorithm that
allows the BWT of human genome scale data to be computed on very moderate
hardware, thus enabling us to investigate the BWT as a tool for the compression
of such datasets.
Results
We first used simulated reads to explore the relationship between the level
of compression and the error rate, the length of the reads and the level of
sampling of the underlying genome and compare choices of second-stage
compression algorithm.
We demonstrate that compression may be greatly improved by a particular
reordering of the sequences in the collection and give a novel `implicit
sorting' strategy that enables these benefits to be realised without the
overhead of sorting the reads. With these techniques, a 45x coverage of real
human genome sequence data compresses losslessly to under 0.5 bits per base,
allowing the 135.3Gbp of sequence to fit into only 8.2Gbytes of space (trimming
a small proportion of low-quality bases from the reads improves the compression
still further).
This is more than 4 times smaller than the size achieved by a standard
BWT-based compressor (bzip2) on the untrimmed reads, but an important further
advantage of our approach is that it facilitates the building of compressed
full text indexes such as the FM-index on large-scale DNA sequence collections.
| [
{
"version": "v1",
"created": "Tue, 1 May 2012 15:39:50 GMT"
},
{
"version": "v2",
"created": "Fri, 11 May 2012 11:22:55 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Cox",
"Anthony J.",
""
],
[
"Bauer",
"Markus J.",
""
],
[
"Jakobi",
"Tobias",
""
],
[
"Rosone",
"Giovanna",
""
]
] | TITLE: Large-scale compression of genomic sequence databases with the
Burrows-Wheeler transform
ABSTRACT: Motivation
The Burrows-Wheeler transform (BWT) is the foundation of many algorithms for
compression and indexing of text data, but the cost of computing the BWT of
very large string collections has prevented these techniques from being widely
applied to the large sets of sequences often encountered as the outcome of DNA
sequencing experiments. In previous work, we presented a novel algorithm that
allows the BWT of human genome scale data to be computed on very moderate
hardware, thus enabling us to investigate the BWT as a tool for the compression
of such datasets.
Results
We first used simulated reads to explore the relationship between the level
of compression and the error rate, the length of the reads and the level of
sampling of the underlying genome and compare choices of second-stage
compression algorithm.
We demonstrate that compression may be greatly improved by a particular
reordering of the sequences in the collection and give a novel `implicit
sorting' strategy that enables these benefits to be realised without the
overhead of sorting the reads. With these techniques, a 45x coverage of real
human genome sequence data compresses losslessly to under 0.5 bits per base,
allowing the 135.3Gbp of sequence to fit into only 8.2Gbytes of space (trimming
a small proportion of low-quality bases from the reads improves the compression
still further).
This is more than 4 times smaller than the size achieved by a standard
BWT-based compressor (bzip2) on the untrimmed reads, but an important further
advantage of our approach is that it facilitates the building of compressed
full text indexes such as the FM-index on large-scale DNA sequence collections.
| no_new_dataset | 0.940298 |
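The record above computes the BWT of huge read collections on modest hardware. The textbook sorted-rotations construction below only shows what the transform produces on a few short reads; it is quadratic in memory, does not scale, and is not the paper's algorithm.

```python
def bwt(text, terminator="$"):
    """Burrows-Wheeler transform via sorted rotations (textbook construction).

    For illustration only; the paper's contribution is computing the BWT of
    genome-scale read collections without this memory cost.
    """
    s = text + terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

reads = ["ACGTACGT", "ACGTACGA", "TTGACGTA"]
for r in reads:
    print(r, "->", bwt(r))   # runs of repeated symbols in the output aid compression
```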
1205.2663 | Ot\'avio Penatti | Otavio A. B. Penatti, Eduardo Valle, Ricardo da S. Torres | Are visual dictionaries generalizable? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mid-level features based on visual dictionaries are today a cornerstone of
systems for classification and retrieval of images. Those state-of-the-art
representations depend crucially on the choice of a codebook (visual
dictionary), which is usually derived from the dataset. In general-purpose,
dynamic image collections (e.g., the Web), one cannot have the entire
collection in order to extract a representative dictionary. However, based on
the hypothesis that the dictionary reflects only the diversity of low-level
appearances and does not capture semantics, we argue that a dictionary based on
a small subset of the data, or even on an entirely different dataset, is able
to produce a good representation, provided that the chosen images span a
diverse enough portion of the low-level feature space. Our experiments confirm
that hypothesis, opening the opportunity to greatly alleviate the burden in
generating the codebook, and confirming the feasibility of employing visual
dictionaries in large-scale dynamic environments.
| [
{
"version": "v1",
"created": "Fri, 11 May 2012 18:54:12 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Penatti",
"Otavio A. B.",
""
],
[
"Valle",
"Eduardo",
""
],
[
"Torres",
"Ricardo da S.",
""
]
] | TITLE: Are visual dictionaries generalizable?
ABSTRACT: Mid-level features based on visual dictionaries are today a cornerstone of
systems for classification and retrieval of images. Those state-of-the-art
representations depend crucially on the choice of a codebook (visual
dictionary), which is usually derived from the dataset. In general-purpose,
dynamic image collections (e.g., the Web), one cannot have the entire
collection in order to extract a representative dictionary. However, based on
the hypothesis that the dictionary reflects only the diversity of low-level
appearances and does not capture semantics, we argue that a dictionary based on
a small subset of the data, or even on an entirely different dataset, is able
to produce a good representation, provided that the chosen images span a
diverse enough portion of the low-level feature space. Our experiments confirm
that hypothesis, opening the opportunity to greatly alleviate the burden in
generating the codebook, and confirming the feasibility of employing visual
dictionaries in large-scale dynamic environments.
| no_new_dataset | 0.948251 |
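The record above argues that a visual codebook learned from a small subset of descriptors can generalize. A minimal bag-of-visual-words sketch of that setup on synthetic descriptors (real pipelines would use SIFT-like local features, not random vectors):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Learn the codebook from a small subset of descriptors, then quantize with it.
rng = np.random.default_rng(4)
descriptors = rng.normal(size=(50_000, 64))          # stand-in for local features
subset = descriptors[rng.choice(len(descriptors), 2_000, replace=False)]

codebook = MiniBatchKMeans(n_clusters=256, random_state=0).fit(subset)

def bag_of_words(image_descriptors, codebook):
    """Hard-assignment bag-of-visual-words histogram for one image."""
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters)
    return hist / hist.sum()

image = descriptors[:300]                            # descriptors of one "image"
print(bag_of_words(image, codebook)[:10])
```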
1205.4776 | Ilknur Icke | Ilknur Icke and Andrew Rosenberg | Visual and semantic interpretability of projections of high dimensional
data for classification tasks | Longer version of the VAST 2011 poster.
http://dx.doi.org/10.1109/VAST.2011.6102474 | null | 10.1109/VAST.2011.6102474 | null | cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of visual quality measures have been introduced in visual analytics
literature in order to automatically select the best views of high dimensional
data from a large number of candidate data projections. These methods generally
concentrate on the interpretability of the visualization and pay little
attention to the interpretability of the projection axes. In this paper, we
argue that interpretability of the visualizations and the feature
transformation functions are both crucial for visual exploration of high
dimensional labeled data. We present a two-part user study to examine these two
related but orthogonal aspects of interpretability. We first study how humans
judge the quality of 2D scatterplots of various datasets with varying number of
classes and provide comparisons with ten automated measures, including a number
of visual quality measures and related measures from various machine learning
fields. We then investigate how user perception of the interpretability of
mathematical expressions relates to various automated measures of complexity
that can be used to characterize data projection functions. We conclude with a
discussion of how automated measures of visual and semantic interpretability of
data projections can be used together for exploratory analysis in
classification tasks.
| [
{
"version": "v1",
"created": "Tue, 22 May 2012 00:10:45 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Icke",
"Ilknur",
""
],
[
"Rosenberg",
"Andrew",
""
]
] | TITLE: Visual and semantic interpretability of projections of high dimensional
data for classification tasks
ABSTRACT: A number of visual quality measures have been introduced in visual analytics
literature in order to automatically select the best views of high dimensional
data from a large number of candidate data projections. These methods generally
concentrate on the interpretability of the visualization and pay little
attention to the interpretability of the projection axes. In this paper, we
argue that interpretability of the visualizations and the feature
transformation functions are both crucial for visual exploration of high
dimensional labeled data. We present a two-part user study to examine these two
related but orthogonal aspects of interpretability. We first study how humans
judge the quality of 2D scatterplots of various datasets with varying number of
classes and provide comparisons with ten automated measures, including a number
of visual quality measures and related measures from various machine learning
fields. We then investigate how user perception of the interpretability of
mathematical expressions relates to various automated measures of complexity
that can be used to characterize data projection functions. We conclude with a
discussion of how automated measures of visual and semantic interpretability of
data projections can be used together for exploratory analysis in
classification tasks.
| no_new_dataset | 0.937783 |
1205.5295 | Oliver Krueger | Oliver Krueger, Frederik Schenk, Frauke Feser, Ralf Weisse | Inconsistencies between long-term trends in storminess derived from the
20CR reanalysis and observations | null | null | 10.1175/JCLI-D-12-00309.1 | null | physics.ao-ph physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Global atmospheric reanalyses have become a common tool for both the
validation of climate models and diagnostic studies, such as assessing climate
variability and long-term trends. Presently, the 20th Century Reanalysis
(20CR), which assimilates only surface pressure reports, sea-ice, and sea
surface temperature distributions, represents the longest global reanalysis
dataset available covering the period from 1871 to the present. Currently, the
20CR dataset is extensively used for the assessment of climate variability and
trends. Here, we compare the variability and long-term trends in Northeast
Atlantic storminess derived from 20CR and from observations. A well established
storm index derived from pressure observations over a relatively densely
monitored marine area is used. It is found that both, variability and long-term
trends derived from 20CR and from observations, are inconsistent. In
particular, both time series show opposing trends during the first half of the
20th century. Only for the more recent periods both storm indices share a
similar behavior. While the variability and long-term trend derived from the
observations are supported by a number of independent data and analyses, the
behavior shown by 20CR is quite different, indicating substantial
inhomogeneities in the reanalysis most likely caused by the increasing number
of observations assimilated into 20CR over time. The latter makes 20CR likely
unsuitable for the identification of trends in storminess in the earlier part
of the record at least over the Northeast Atlantic. Our results imply and
reconfirm previous findings that care is needed in general, when global
reanalyses are used to assess long-term changes.
| [
{
"version": "v1",
"created": "Wed, 23 May 2012 21:23:41 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Aug 2012 09:09:22 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Krueger",
"Oliver",
""
],
[
"Schenk",
"Frederik",
""
],
[
"Feser",
"Frauke",
""
],
[
"Weisse",
"Ralf",
""
]
] | TITLE: Inconsistencies between long-term trends in storminess derived from the
20CR reanalysis and observations
ABSTRACT: Global atmospheric reanalyses have become a common tool for both the
validation of climate models and diagnostic studies, such as assessing climate
variability and long-term trends. Presently, the 20th Century Reanalysis
(20CR), which assimilates only surface pressure reports, sea-ice, and sea
surface temperature distributions, represents the longest global reanalysis
dataset available covering the period from 1871 to the present. Currently, the
20CR dataset is extensively used for the assessment of climate variability and
trends. Here, we compare the variability and long-term trends in Northeast
Atlantic storminess derived from 20CR and from observations. A well established
storm index derived from pressure observations over a relatively densely
monitored marine area is used. It is found that both, variability and long-term
trends derived from 20CR and from observations, are inconsistent. In
particular, both time series show opposing trends during the first half of the
20th century. Only for the more recent periods both storm indices share a
similar behavior. While the variability and long-term trend derived from the
observations are supported by a number of independent data and analyses, the
behavior shown by 20CR is quite different, indicating substantial
inhomogeneities in the reanalysis most likely caused by the increasing number
of observations assimilated into 20CR over time. The latter makes 20CR likely
unsuitable for the identification of trends in storminess in the earlier part
of the record at least over the Northeast Atlantic. Our results imply and
reconfirm previous findings that care is needed in general, when global
reanalyses are used to assess long-term changes.
| no_new_dataset | 0.933915 |
1206.4074 | Fuxin Li | Fuxin Li, Guy Lebanon, Cristian Sminchisescu | A Linear Approximation to the chi^2 Kernel with Geometric Convergence | null | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new analytical approximation to the $\chi^2$ kernel that
converges geometrically. The analytical approximation is derived with
elementary methods and adapts to the input distribution for optimal convergence
rate. Experiments show the new approximation leads to improved performance in
image classification and semantic segmentation tasks using a random Fourier
feature approximation of the $\exp-\chi^2$ kernel. Besides, out-of-core
principal component analysis (PCA) methods are introduced to reduce the
dimensionality of the approximation and achieve better performance at the
expense of only an additional constant factor to the time complexity. Moreover,
when PCA is performed jointly on the training and unlabeled testing data,
further performance improvements can be obtained. Experiments conducted on the
PASCAL VOC 2010 segmentation and the ImageNet ILSVRC 2010 datasets show
statistically significant improvements over alternative approximation methods.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2012 21:05:16 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Apr 2013 18:38:28 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Jun 2013 19:29:18 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Li",
"Fuxin",
""
],
[
"Lebanon",
"Guy",
""
],
[
"Sminchisescu",
"Cristian",
""
]
] | TITLE: A Linear Approximation to the chi^2 Kernel with Geometric Convergence
ABSTRACT: We propose a new analytical approximation to the $\chi^2$ kernel that
converges geometrically. The analytical approximation is derived with
elementary methods and adapts to the input distribution for optimal convergence
rate. Experiments show the new approximation leads to improved performance in
image classification and semantic segmentation tasks using a random Fourier
feature approximation of the $\exp-\chi^2$ kernel. Besides, out-of-core
principal component analysis (PCA) methods are introduced to reduce the
dimensionality of the approximation and achieve better performance at the
expense of only an additional constant factor to the time complexity. Moreover,
when PCA is performed jointly on the training and unlabeled testing data,
further performance improvements can be obtained. Experiments conducted on the
PASCAL VOC 2010 segmentation and the ImageNet ILSVRC 2010 datasets show
statistically significant improvements over alternative approximation methods.
| no_new_dataset | 0.947817 |
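The record above proposes a new analytical approximation to the $\chi^2$ kernel. The sketch below shows one standard way to linearize the $\exp-\chi^2$ kernel for $L_1$-normalized histograms (an additive-$\chi^2$ feature map followed by random Fourier features for a Gaussian); it is not the paper's approximation.

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler, RBFSampler

# Map histograms through an additive chi^2 feature map, then apply random
# Fourier features for the Gaussian on top of it. For L1-normalized inputs the
# squared distance in the mapped space approximates the chi^2 distance, so the
# composition approximates exp(-gamma * chi2(x, y)).
rng = np.random.default_rng(5)
X = rng.random((500, 32))
X /= X.sum(axis=1, keepdims=True)          # L1-normalized histograms (e.g. bags of words)

gamma = 1.0
chi2_map = AdditiveChi2Sampler(sample_steps=2)
rff = RBFSampler(gamma=gamma, n_components=2048, random_state=0)
Z = rff.fit_transform(chi2_map.fit_transform(X))

# Compare the linearized inner product with the exact exp-chi^2 kernel for one pair.
def exp_chi2(x, y, gamma):
    d = ((x - y) ** 2 / (x + y + 1e-12)).sum()
    return np.exp(-gamma * d)

print(Z[0] @ Z[1], exp_chi2(X[0], X[1], gamma))
```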
1207.0141 | Beng Chin Ooi | Wei Lu, Yanyan Shen, Su Chen, Beng Chin Ooi | Efficient Processing of k Nearest Neighbor Joins using MapReduce | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 10, pp.
1016-1027 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | k nearest neighbor join (kNN join), designed to find k nearest neighbors from
a dataset S for every object in another dataset R, is a primitive operation
widely adopted by many data mining applications. As a combination of the k
nearest neighbor query and the join operation, kNN join is an expensive
operation. Given the increasing volume of data, it is difficult to perform a
kNN join on a centralized machine efficiently. In this paper, we investigate
how to perform kNN join using MapReduce which is a well-accepted framework for
data-intensive applications over clusters of computers. In brief, the mappers
cluster objects into groups; the reducers perform the kNN join on each group of
objects separately. We design an effective mapping mechanism that exploits
pruning rules for distance filtering, and hence reduces both the shuffling and
computational costs. To reduce the shuffling cost, we propose two approximate
algorithms to minimize the number of replicas. Extensive experiments on our
in-house cluster demonstrate that our proposed methods are efficient, robust
and scalable.
| [
{
"version": "v1",
"created": "Sat, 30 Jun 2012 20:20:31 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Lu",
"Wei",
""
],
[
"Shen",
"Yanyan",
""
],
[
"Chen",
"Su",
""
],
[
"Ooi",
"Beng Chin",
""
]
] | TITLE: Efficient Processing of k Nearest Neighbor Joins using MapReduce
ABSTRACT: k nearest neighbor join (kNN join), designed to find k nearest neighbors from
a dataset S for every object in another dataset R, is a primitive operation
widely adopted by many data mining applications. As a combination of the k
nearest neighbor query and the join operation, kNN join is an expensive
operation. Given the increasing volume of data, it is difficult to perform a
kNN join on a centralized machine efficiently. In this paper, we investigate
how to perform kNN join using MapReduce which is a well-accepted framework for
data-intensive applications over clusters of computers. In brief, the mappers
cluster objects into groups; the reducers perform the kNN join on each group of
objects separately. We design an effective mapping mechanism that exploits
pruning rules for distance filtering, and hence reduces both the shuffling and
computational costs. To reduce the shuffling cost, we propose two approximate
algorithms to minimize the number of replicas. Extensive experiments on our
in-house cluster demonstrate that our proposed methods are efficient, robust
and scalable.
| no_new_dataset | 0.942029 |
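The record above performs kNN joins with MapReduce using grouping and distance-based pruning. The sketch below is only the brute-force baseline written in a map/reduce shape (partition R, broadcast S); it omits the pivot-based grouping, pruning rules, and replica minimization that are the paper's contribution.

```python
import heapq
from collections import defaultdict

def knn_join_mapreduce(R, S, k, num_groups=4):
    """Naive MapReduce-style kNN join: R is partitioned, S is broadcast.

    Essentially the brute-force baseline that the paper improves upon, written
    in plain Python to show the map/reduce structure rather than efficiency.
    """
    # Map: assign each R object to a group (here simply round-robin by id).
    groups = defaultdict(list)
    for rid, r in enumerate(R):
        groups[rid % num_groups].append((rid, r))

    # Reduce: each group computes exact kNN against the full (broadcast) S.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    result = {}
    for _, members in groups.items():
        for rid, r in members:
            result[rid] = heapq.nsmallest(k, range(len(S)), key=lambda sid: dist2(r, S[sid]))
    return result

R = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]
S = [(0.1, 0.2), (4.8, 5.1), (9.2, 0.8), (2.0, 2.0), (7.0, 7.0)]
print(knn_join_mapreduce(R, S, k=2))
```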
1207.6600 | Rama Badrinath | Rama Badrinath, C. E. Veni Madhavan | Diversity in Ranking using Negative Reinforcement | null | null | null | null | cs.IR cs.AI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the problem of diversity in ranking of the nodes
in a graph. The task is to pick the top-k nodes in the graph which are both
'central' and 'diverse'. Many graph-based models in NLP, like text
summarization and opinion summarization, involve the concept of diversity in
generating the summaries. We develop a novel method which works in an iterative
fashion based on random walks to achieve diversity. Specifically, we use
negative reinforcement as a main tool to introduce diversity in the
Personalized PageRank framework. Experiments on two benchmark datasets show
that our algorithm is competitive to the existing methods.
| [
{
"version": "v1",
"created": "Fri, 27 Jul 2012 17:16:59 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Badrinath",
"Rama",
""
],
[
"Madhavan",
"C. E. Veni",
""
]
] | TITLE: Diversity in Ranking using Negative Reinforcement
ABSTRACT: In this paper, we consider the problem of diversity in ranking of the nodes
in a graph. The task is to pick the top-k nodes in the graph which are both
'central' and 'diverse'. Many graph-based models in NLP, like text
summarization and opinion summarization, involve the concept of diversity in
generating the summaries. We develop a novel method which works in an iterative
fashion based on random walks to achieve diversity. Specifically, we use
negative reinforcement as a main tool to introduce diversity in the
Personalized PageRank framework. Experiments on two benchmark datasets show
that our algorithm is competitive to the existing methods.
| no_new_dataset | 0.951997 |
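The record above introduces diversity into Personalized PageRank via negative reinforcement. The abstract does not specify the exact reinforcement rule, so the sketch below uses an assumed one (give already-picked nodes negative restart mass and never re-pick them):

```python
import numpy as np

def diverse_top_k(adj, k, alpha=0.85, neg=1.0, iters=100):
    """Pick k nodes iteratively: run a personalized PageRank-style iteration,
    take the best unpicked node, then give picked nodes negative restart mass.

    The restart-vector penalty is an illustrative assumption, not the paper's
    exact negative-reinforcement scheme.
    """
    n = adj.shape[0]
    P = adj / adj.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    restart = np.full(n, 1.0 / n)
    picked = []
    for _ in range(k):
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = alpha * (P.T @ r) + (1 - alpha) * restart
        r[picked] = -np.inf                        # never re-pick a node
        best = int(np.argmax(r))
        picked.append(best)
        restart[best] = -neg                       # negative reinforcement on picked nodes
    return picked

# Two dense clusters joined by one bridge edge; a diverse top-2 should cover both.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
print(diverse_top_k(A, k=2))
```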
1208.1231 | Sebastian Michel | Foteini Alvanaki and Sebastian Michel and Aleksandar Stupar | Building and Maintaining Halls of Fame over a Database | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Halls of Fame are fascinating constructs. They represent the elite of an
often very large amount of entities---persons, companies, products, countries
etc. Beyond their practical use as static rankings, changes to them are
particularly interesting---for decision making processes, as input to common
media or novel narrative science applications, or simply consumed by users. In
this work, we aim at detecting events that can be characterized by changes to a
Hall of Fame ranking in an automated way. We describe how the schema and data
of a database can be used to generate Halls of Fame. In this database scenario,
by Hall of Fame we refer to distinguished tuples: entities whose
characteristics set them apart from the majority. We define every Hall of Fame
as one specific instance of an SQL query, such that a change in its result is
considered a noteworthy event. Identified changes (i.e., events) are ranked
using lexicographic tradeoffs over event and query properties and presented to
users or fed in higher-level applications. We have implemented a full-fledged
prototype system that uses either database triggers or a Java based middleware
for event identification. We report on an experimental evaluation using a
real-world dataset of basketball statistics.
| [
{
"version": "v1",
"created": "Mon, 6 Aug 2012 18:26:17 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Alvanaki",
"Foteini",
""
],
[
"Michel",
"Sebastian",
""
],
[
"Stupar",
"Aleksandar",
""
]
] | TITLE: Building and Maintaining Halls of Fame over a Database
ABSTRACT: Halls of Fame are fascinating constructs. They represent the elite of an
often very large amount of entities---persons, companies, products, countries
etc. Beyond their practical use as static rankings, changes to them are
particularly interesting---for decision making processes, as input to common
media or novel narrative science applications, or simply consumed by users. In
this work, we aim at detecting events that can be characterized by changes to a
Hall of Fame ranking in an automated way. We describe how the schema and data
of a database can be used to generate Halls of Fame. In this database scenario,
by Hall of Fame we refer to distinguished tuples: entities whose
characteristics set them apart from the majority. We define every Hall of Fame
as one specific instance of an SQL query, such that a change in its result is
considered a noteworthy event. Identified changes (i.e., events) are ranked
using lexicographic tradeoffs over event and query properties and presented to
users or fed in higher-level applications. We have implemented a full-fledged
prototype system that uses either database triggers or a Java based middleware
for event identification. We report on an experimental evaluation using a
real-world dataset of basketball statistics.
| no_new_dataset | 0.952397 |
1208.1931 | Michele Dallachiesa | Michele Dallachiesa, Besmira Nushi, Katsiaryna Mirylenka, Themis
Palpanas | Uncertain Time-Series Similarity: Return to the Basics | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 11, pp.
1662-1673 (2012) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last years there has been a considerable increase in the availability
of continuous sensor measurements in a wide range of application domains, such
as Location-Based Services (LBS), medical monitoring systems, manufacturing
plants and engineering facilities to ensure efficiency, product quality and
safety, hydrologic and geologic observing systems, pollution management, and
others. Due to the inherent imprecision of sensor observations, many
investigations have recently turned to querying, mining and storing uncertain
data. Uncertainty can also be due to data aggregation, privacy-preserving
transforms, and error-prone mining algorithms. In this study, we survey the
techniques that have been proposed specifically for modeling and processing
uncertain time series, an important model for temporal data. We provide an
analytical evaluation of the alternatives that have been proposed in the
literature, highlighting the advantages and disadvantages of each approach, and
further compare these alternatives with two additional techniques that were
carefully studied before. We conduct an extensive experimental evaluation with
17 real datasets, and discuss some surprising results, which suggest that a
fruitful research direction is to take into account the temporal correlations
in the time series. Based on our evaluations, we also provide guidelines useful
for the practitioners in the field.
| [
{
"version": "v1",
"created": "Thu, 9 Aug 2012 14:52:01 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Dallachiesa",
"Michele",
""
],
[
"Nushi",
"Besmira",
""
],
[
"Mirylenka",
"Katsiaryna",
""
],
[
"Palpanas",
"Themis",
""
]
] | TITLE: Uncertain Time-Series Similarity: Return to the Basics
ABSTRACT: In the last years there has been a considerable increase in the availability
of continuous sensor measurements in a wide range of application domains, such
as Location-Based Services (LBS), medical monitoring systems, manufacturing
plants and engineering facilities to ensure efficiency, product quality and
safety, hydrologic and geologic observing systems, pollution management, and
others. Due to the inherent imprecision of sensor observations, many
investigations have recently turned to querying, mining and storing uncertain
data. Uncertainty can also be due to data aggregation, privacy-preserving
transforms, and error-prone mining algorithms. In this study, we survey the
techniques that have been proposed specifically for modeling and processing
uncertain time series, an important model for temporal data. We provide an
analytical evaluation of the alternatives that have been proposed in the
literature, highlighting the advantages and disadvantages of each approach, and
further compare these alternatives with two additional techniques that were
carefully studied before. We conduct an extensive experimental evaluation with
17 real datasets, and discuss some surprising results, which suggest that a
fruitful research direction is to take into account the temporal correlations
in the time series. Based on our evaluations, we also provide guidelines useful
for the practitioners in the field.
| no_new_dataset | 0.947817 |
1208.2007 | Vladimir Dergachev Ph.D. | Vladimir Dergachev | A Novel Universal Statistic for Computing Upper Limits in Ill-behaved
Background | 11 pages; expanded version of the original article | null | 10.1103/PhysRevD.87.062001 | LIGO-P1200065-v8 | gr-qc math.OC math.ST physics.data-an stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of experimental data must sometimes deal with abrupt changes in the
distribution of measured values. Setting upper limits on signals usually
involves a veto procedure that excludes data not described by an assumed
statistical model. We show how to implement statistical estimates of physical
quantities (such as upper limits) that are valid without assuming a particular
family of statistical distributions, while still providing close to optimal
values when the data is from an expected distribution (such as Gaussian or
exponential). This new technique can compute statistically sound results in the
presence of severe non-Gaussian noise, relaxes assumptions on distribution
stationarity and is especially useful in automated analysis of large datasets,
where computational speed is important.
| [
{
"version": "v1",
"created": "Thu, 9 Aug 2012 19:13:55 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Mar 2013 20:59:35 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Dergachev",
"Vladimir",
""
]
] | TITLE: A Novel Universal Statistic for Computing Upper Limits in Ill-behaved
Background
ABSTRACT: Analysis of experimental data must sometimes deal with abrupt changes in the
distribution of measured values. Setting upper limits on signals usually
involves a veto procedure that excludes data not described by an assumed
statistical model. We show how to implement statistical estimates of physical
quantities (such as upper limits) that are valid without assuming a particular
family of statistical distributions, while still providing close to optimal
values when the data is from an expected distribution (such as Gaussian or
exponential). This new technique can compute statistically sound results in the
presence of severe non-Gaussian noise, relaxes assumptions on distribution
stationarity and is especially useful in automated analysis of large datasets,
where computational speed is important.
| no_new_dataset | 0.945197 |
1208.2547 | Lexing Xie | Yanxiang Wang, Hari Sundaram, Lexing Xie | Social Event Detection with Interaction Graph Modeling | ACM Multimedia 2012 | null | null | null | cs.SI cs.IR cs.MM physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on detecting social, physical-world events from photos
posted on social media sites. The problem is important: cheap media capture
devices have significantly increased the number of photos shared on these
sites. The main contribution of this paper is to incorporate online social
interaction features in the detection of physical events. We believe that
online social interaction reflect important signals among the participants on
the "social affinity" of two photos, thereby helping event detection. We
compute social affinity via a random-walk on a social interaction graph to
determine similarity between two photos on the graph. We train a support vector
machine classifier to combine the social affinity between photos and
photo-centric metadata including time, location, tags and description.
Incremental clustering is then used to group photos to event clusters. We have
very good results on two large scale real-world datasets: Upcoming and
MediaEval. We show an improvement between 0.06-0.10 in F1 on these datasets.
| [
{
"version": "v1",
"created": "Mon, 13 Aug 2012 11:20:05 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Wang",
"Yanxiang",
""
],
[
"Sundaram",
"Hari",
""
],
[
"Xie",
"Lexing",
""
]
] | TITLE: Social Event Detection with Interaction Graph Modeling
ABSTRACT: This paper focuses on detecting social, physical-world events from photos
posted on social media sites. The problem is important: cheap media capture
devices have significantly increased the number of photos shared on these
sites. The main contribution of this paper is to incorporate online social
interaction features in the detection of physical events. We believe that
online social interactions reflect important signals among the participants about
the "social affinity" of two photos, thereby helping event detection. We
compute social affinity via a random-walk on a social interaction graph to
determine similarity between two photos on the graph. We train a support vector
machine classifier to combine the social affinity between photos and
photo-centric metadata including time, location, tags and description.
Incremental clustering is then used to group photos to event clusters. We have
very good results on two large scale real-world datasets: Upcoming and
MediaEval. We show an improvement between 0.06-0.10 in F1 on these datasets.
| no_new_dataset | 0.950503 |
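The record above computes a "social affinity" between photos via a random walk over a social interaction graph. Below is a minimal illustrative sketch, not the authors' implementation: random-walk-with-restart affinities on a toy adjacency matrix in Python/NumPy, where the graph, restart probability, and node count are all invented for the example.

import numpy as np

def rwr_affinity(adj, restart=0.15, tol=1e-9, max_iter=1000):
    # Column-normalise the adjacency matrix into a transition matrix.
    col_sums = adj.sum(axis=0, keepdims=True)
    col_sums[col_sums == 0] = 1.0
    P = adj / col_sums
    n = adj.shape[0]
    scores = np.eye(n)          # column i: walker that restarts at node i
    restart_dist = np.eye(n)
    for _ in range(max_iter):
        new = (1 - restart) * P @ scores + restart * restart_dist
        if np.abs(new - scores).max() < tol:
            return new.T        # row i: affinity of node i to every node
        scores = new
    return scores.T

# Toy interaction graph over four photos/users (illustrative only).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(np.round(rwr_affinity(A), 3))

Row i of the returned matrix can be read as the affinity of node i to every other node; the paper combines such affinities with photo metadata features before classification and clustering.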
1208.3687 | Qiang Qiu | Qiang Qiu, Vishal M. Patel, Rama Chellappa | Information-theoretic Dictionary Learning for Image Classification | null | null | null | null | cs.CV cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a two-stage approach for learning dictionaries for object
classification tasks based on the principle of information maximization. The
proposed method seeks a dictionary that is compact, discriminative, and
generative. In the first stage, dictionary atoms are selected from an initial
dictionary by maximizing the mutual information measure on dictionary
compactness, discrimination and reconstruction. In the second stage, the
selected dictionary atoms are updated for improved reconstructive and
discriminative power using a simple gradient ascent algorithm on mutual
information. Experiments using real datasets demonstrate the effectiveness of
our approach for image classification tasks.
| [
{
"version": "v1",
"created": "Fri, 17 Aug 2012 20:38:56 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Qiu",
"Qiang",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: Information-theoretic Dictionary Learning for Image Classification
ABSTRACT: We present a two-stage approach for learning dictionaries for object
classification tasks based on the principle of information maximization. The
proposed method seeks a dictionary that is compact, discriminative, and
generative. In the first stage, dictionary atoms are selected from an initial
dictionary by maximizing the mutual information measure on dictionary
compactness, discrimination and reconstruction. In the second stage, the
selected dictionary atoms are updated for improved reconstructive and
discriminative power using a simple gradient ascent algorithm on mutual
information. Experiments using real datasets demonstrate the effectiveness of
our approach for image classification tasks.
| no_new_dataset | 0.951233 |
1503.05784 | Alfredo Cobo | Alfredo Cobo, Denis Parra, Jaime Nav\'on | Identifying Relevant Messages in a Twitter-based Citizen Channel for
Natural Disaster Situations | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During recent years the online social networks (in particular Twitter) have
become an important alternative information channel to traditional media during
natural disasters, but the amount and diversity of messages poses the challenge
of information overload to end users. The goal of our research is to develop an
automatic classifier of tweets to feed a mobile application that reduces the
difficulties that citizens face to get relevant information during natural
disasters. In this paper, we present in detail the process to build a
classifier that filters tweets relevant and non-relevant to an earthquake. By
using a dataset from the Chilean earthquake of 2010, we first build and
validate a ground truth, and then we contribute by presenting in detail the
effect of class imbalance and dimensionality reduction over 5 classifiers. We
show how the performance of these models is affected by these variables,
providing important considerations for building such systems.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2015 14:10:10 GMT"
}
] | 2015-03-20T00:00:00 | [
[
"Cobo",
"Alfredo",
""
],
[
"Parra",
"Denis",
""
],
[
"Navón",
"Jaime",
""
]
] | TITLE: Identifying Relevant Messages in a Twitter-based Citizen Channel for
Natural Disaster Situations
ABSTRACT: During recent years the online social networks (in particular Twitter) have
become an important alternative information channel to traditional media during
natural disasters, but the amount and diversity of messages pose the challenge
of information overload to end users. The goal of our research is to develop an
automatic classifier of tweets to feed a mobile application that reduces the
difficulties that citizens face to get relevant information during natural
disasters. In this paper, we present in detail the process to build a
classifier that filters tweets relevant and non-relevant to an earthquake. By
using a dataset from the Chilean earthquake of 2010, we first build and
validate a ground truth, and then we contribute by presenting in detail the
effect of class imbalance and dimensionality reduction over 5 classifiers. We
show how the performance of these models is affected by these variables,
providing important considerations for building such systems.
| no_new_dataset | 0.955693 |
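The record above builds a relevance classifier for earthquake tweets and studies class imbalance and dimensionality reduction. As an illustrative baseline only (not the paper's exact pipeline, with invented placeholder tweets and labels), the sketch below combines TF-IDF features, truncated SVD for dimensionality reduction, and class-weighted logistic regression in scikit-learn.

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

# Placeholder data: 1 = relevant to the earthquake, 0 = not relevant.
tweets = [
    "bridge collapsed downtown, people trapped",
    "need water and blankets near the stadium",
    "great match last night, what a goal",
    "aftershock felt again, stay away from buildings",
    "new phone arrived today, so happy",
    "power is out in the whole neighborhood after the quake",
]
labels = [1, 1, 0, 1, 0, 1]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("svd", TruncatedSVD(n_components=4, random_state=0)),   # dimensionality reduction
    ("lr", LogisticRegression(class_weight="balanced")),     # counter class imbalance
])
clf.fit(tweets, labels)
print(clf.predict(["collapsed wall near the school, send help"]))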
1102.4374 | Benjamin Rubinstein | Arvind Narayanan, Elaine Shi, Benjamin I. P. Rubinstein | Link Prediction by De-anonymization: How We Won the Kaggle Social
Network Challenge | 11 pages, 13 figures; submitted to IJCNN'2011 | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes the winning entry to the IJCNN 2011 Social Network
Challenge run by Kaggle.com. The goal of the contest was to promote research on
real-world link prediction, and the dataset was a graph obtained by crawling
the popular Flickr social photo sharing website, with user identities scrubbed.
By de-anonymizing much of the competition test set using our own Flickr crawl,
we were able to effectively game the competition. Our attack represents a new
application of de-anonymization to gaming machine learning contests, suggesting
changes in how future competitions should be run.
We introduce a new simulated annealing-based weighted graph matching
algorithm for the seeding step of de-anonymization. We also show how to combine
de-anonymization with link prediction---the latter is required to achieve good
performance on the portion of the test set not de-anonymized---for example by
training the predictor on the de-anonymized portion of the test set, and
combining probabilistic predictions from de-anonymization and link prediction.
| [
{
"version": "v1",
"created": "Tue, 22 Feb 2011 00:11:14 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Narayanan",
"Arvind",
""
],
[
"Shi",
"Elaine",
""
],
[
"Rubinstein",
"Benjamin I. P.",
""
]
] | TITLE: Link Prediction by De-anonymization: How We Won the Kaggle Social
Network Challenge
ABSTRACT: This paper describes the winning entry to the IJCNN 2011 Social Network
Challenge run by Kaggle.com. The goal of the contest was to promote research on
real-world link prediction, and the dataset was a graph obtained by crawling
the popular Flickr social photo sharing website, with user identities scrubbed.
By de-anonymizing much of the competition test set using our own Flickr crawl,
we were able to effectively game the competition. Our attack represents a new
application of de-anonymization to gaming machine learning contests, suggesting
changes in how future competitions should be run.
We introduce a new simulated annealing-based weighted graph matching
algorithm for the seeding step of de-anonymization. We also show how to combine
de-anonymization with link prediction---the latter is required to achieve good
performance on the portion of the test set not de-anonymized---for example by
training the predictor on the de-anonymized portion of the test set, and
combining probabilistic predictions from de-anonymization and link prediction.
| no_new_dataset | 0.942771 |
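The record above seeds its de-anonymization with a simulated-annealing weighted graph matching step. The sketch below is a heavily simplified, unweighted toy version for intuition only: it anneals over node correspondences between two small graphs while counting edge mismatches, whereas the paper's algorithm works on weighted graphs and at a much larger scale.

import math
import random

def edge_mismatch(perm, A, B):
    # Number of edge disagreements when node i of A is mapped to node perm[i] of B.
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if A[i][j] != B[perm[i]][perm[j]])

def anneal_match(A, B, steps=20000, t0=2.0, seed=0):
    rng = random.Random(seed)
    n = len(A)
    perm = list(range(n))
    rng.shuffle(perm)
    cost = edge_mismatch(perm, A, B)
    best, best_cost = perm[:], cost
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-3           # simple linear cooling
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]            # propose: swap two assignments
        new_cost = edge_mismatch(perm, A, B)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = perm[:], cost
        else:
            perm[i], perm[j] = perm[j], perm[i]        # reject: undo the swap
    return best, best_cost

# B is A with its nodes relabelled, so a perfect correspondence has cost 0.
A = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
sigma = [2, 0, 3, 1]
B = [[0] * 4 for _ in range(4)]
for i in range(4):
    for j in range(4):
        B[sigma[i]][sigma[j]] = A[i][j]
print(anneal_match(A, B))      # cost 0 means all edges were matched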
1103.1013 | Qi Mao | Qi Mao, Ivor W. Tsang | A Feature Selection Method for Multivariate Performance Measures | null | IEEE Transactions on Pattern Analysis and Machine Intelligence,
2012 | 10.1109/TPAMI.2012.266 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection with specific multivariate performance measures is the key
to the success of many applications, such as image retrieval and text
classification. The existing feature selection methods are usually designed for
classification error. In this paper, we propose a generalized sparse
regularizer. Based on the proposed regularizer, we present a unified feature
selection framework for general loss functions. In particular, we study the
novel feature selection paradigm by optimizing multivariate performance
measures. The resultant formulation is a challenging problem for
high-dimensional data. Hence, a two-layer cutting plane algorithm is proposed
to solve this problem, and the convergence is presented. In addition, we adapt
the proposed method to optimize multivariate measures for multiple instance
learning problems. The analyses by comparing with the state-of-the-art feature
selection methods show that the proposed method is superior to others.
Extensive experiments on large-scale and high-dimensional real world datasets
show that the proposed method outperforms $l_1$-SVM and SVM-RFE when choosing a
small subset of features, and achieves significantly improved performances over
SVM$^{perf}$ in terms of $F_1$-score.
| [
{
"version": "v1",
"created": "Sat, 5 Mar 2011 07:10:41 GMT"
},
{
"version": "v2",
"created": "Sat, 4 May 2013 14:48:06 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Mao",
"Qi",
""
],
[
"Tsang",
"Ivor W.",
""
]
] | TITLE: A Feature Selection Method for Multivariate Performance Measures
ABSTRACT: Feature selection with specific multivariate performance measures is the key
to the success of many applications, such as image retrieval and text
classification. The existing feature selection methods are usually designed for
classification error. In this paper, we propose a generalized sparse
regularizer. Based on the proposed regularizer, we present a unified feature
selection framework for general loss functions. In particular, we study the
novel feature selection paradigm by optimizing multivariate performance
measures. The resultant formulation is a challenging problem for
high-dimensional data. Hence, a two-layer cutting plane algorithm is proposed
to solve this problem, and the convergence is presented. In addition, we adapt
the proposed method to optimize multivariate measures for multiple instance
learning problems. The analyses by comparing with the state-of-the-art feature
selection methods show that the proposed method is superior to others.
Extensive experiments on large-scale and high-dimensional real world datasets
show that the proposed method outperforms $l_1$-SVM and SVM-RFE when choosing a
small subset of features, and achieves significantly improved performances over
SVM$^{perf}$ in terms of $F_1$-score.
| no_new_dataset | 0.945298 |
1103.2215 | Xin Liu | Xin Liu and Anwitaman Datta and Krzysztof Rzadca | Trust beyond reputation: A computational trust model based on
stereotypes | null | null | null | null | cs.CR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models of computational trust support users in taking decisions. They are
commonly used to guide users' judgements in online auction sites; or to
determine quality of contributions in Web 2.0 sites. However, most existing
systems require historical information about the past behavior of the specific
agent being judged. In contrast, in real life, to anticipate and to predict a
stranger's actions in absence of the knowledge of such behavioral history, we
often use our "instinct"- essentially stereotypes developed from our past
interactions with other "similar" persons. In this paper, we propose
StereoTrust, a computational trust model inspired by stereotypes as used in
real-life. A stereotype contains certain features of agents and an expected
outcome of the transaction. When facing a stranger, an agent derives its trust
by aggregating stereotypes matching the stranger's profile. Since stereotypes
are formed locally, recommendations stem from the trustor's own personal
experiences and perspective. Historical behavioral information, when available,
can be used to refine the analysis. According to our experiments using
Epinions.com dataset, StereoTrust compares favorably with existing trust models
that use different kinds of information and more complete historical
information.
| [
{
"version": "v1",
"created": "Fri, 11 Mar 2011 08:15:07 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2011 03:50:46 GMT"
},
{
"version": "v3",
"created": "Sun, 15 Jul 2012 14:07:02 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Liu",
"Xin",
""
],
[
"Datta",
"Anwitaman",
""
],
[
"Rzadca",
"Krzysztof",
""
]
] | TITLE: Trust beyond reputation: A computational trust model based on
stereotypes
ABSTRACT: Models of computational trust support users in taking decisions. They are
commonly used to guide users' judgements in online auction sites; or to
determine quality of contributions in Web 2.0 sites. However, most existing
systems require historical information about the past behavior of the specific
agent being judged. In contrast, in real life, to anticipate and to predict a
stranger's actions in the absence of knowledge of such behavioral history, we
often use our "instinct": essentially stereotypes developed from our past
interactions with other "similar" persons. In this paper, we propose
StereoTrust, a computational trust model inspired by stereotypes as used in
real-life. A stereotype contains certain features of agents and an expected
outcome of the transaction. When facing a stranger, an agent derives its trust
by aggregating stereotypes matching the stranger's profile. Since stereotypes
are formed locally, recommendations stem from the trustor's own personal
experiences and perspective. Historical behavioral information, when available,
can be used to refine the analysis. According to our experiments using
Epinions.com dataset, StereoTrust compares favorably with existing trust models
that use different kinds of information and more complete historical
information.
| no_new_dataset | 0.945197 |
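The record above derives trust in a stranger by aggregating "stereotypes" learned from one's own past transactions. The toy sketch below captures only this basic idea under strong simplifying assumptions (one stereotype per feature, plain averaging); the feature names, outcomes, and default value are invented, and the paper's grouping and aggregation are more elaborate.

from collections import defaultdict

# Past direct experiences of one agent: (partner feature tags, transaction outcome in [0, 1]).
history = [
    ({"electronics", "high_volume"}, 1.0),
    ({"electronics", "new_account"}, 0.0),
    ({"books", "high_volume"},       1.0),
    ({"books", "new_account"},       0.5),
]

# Build one "stereotype" per feature: the average outcome observed with partners having it.
totals, counts = defaultdict(float), defaultdict(int)
for features, outcome in history:
    for f in features:
        totals[f] += outcome
        counts[f] += 1
stereotypes = {f: totals[f] / counts[f] for f in totals}

def stereotype_trust(profile, default=0.5):
    """Trust in a stranger = mean of the stereotypes matching its profile."""
    matched = [stereotypes[f] for f in profile if f in stereotypes]
    return sum(matched) / len(matched) if matched else default

print(stereotype_trust({"electronics", "high_volume"}))  # blends the two matching stereotypes
print(stereotype_trust({"clothing"}))                    # no match -> fall back to default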
1104.2086 | Slav Petrov | Slav Petrov, Dipanjan Das and Ryan McDonald | A Universal Part-of-Speech Tagset | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To facilitate future research in unsupervised induction of syntactic
structure and to standardize best-practices, we propose a tagset that consists
of twelve universal part-of-speech categories. In addition to the tagset, we
develop a mapping from 25 different treebank tagsets to this universal set. As
a result, when combined with the original treebank data, this universal tagset
and mapping produce a dataset consisting of common parts-of-speech for 22
different languages. We highlight the use of this resource via two experiments,
including one that reports competitive accuracies for unsupervised grammar
induction without gold standard part-of-speech tags.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2011 23:06:54 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Petrov",
"Slav",
""
],
[
"Das",
"Dipanjan",
""
],
[
"McDonald",
"Ryan",
""
]
] | TITLE: A Universal Part-of-Speech Tagset
ABSTRACT: To facilitate future research in unsupervised induction of syntactic
structure and to standardize best-practices, we propose a tagset that consists
of twelve universal part-of-speech categories. In addition to the tagset, we
develop a mapping from 25 different treebank tagsets to this universal set. As
a result, when combined with the original treebank data, this universal tagset
and mapping produce a dataset consisting of common parts-of-speech for 22
different languages. We highlight the use of this resource via two experiments,
including one that reports competitive accuracies for unsupervised grammar
induction without gold standard part-of-speech tags.
| new_dataset | 0.953708 |
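The record above defines twelve coarse part-of-speech categories and mappings from 25 treebank tagsets onto them. The snippet below shows the flavor of such a mapping with a small, hand-picked subset of Penn Treebank tags; it is illustrative only and is not the published mapping table.

# The twelve universal categories proposed in the paper.
UNIVERSAL_TAGS = {"NOUN", "VERB", "ADJ", "ADV", "PRON", "DET",
                  "ADP", "NUM", "CONJ", "PRT", ".", "X"}

# A few illustrative Penn Treebank -> universal mappings (subset, for demonstration).
PTB_TO_UNIVERSAL = {
    "NN": "NOUN", "NNS": "NOUN", "NNP": "NOUN",
    "VB": "VERB", "VBD": "VERB", "VBP": "VERB", "VBZ": "VERB",
    "JJ": "ADJ", "RB": "ADV", "PRP": "PRON", "DT": "DET",
    "IN": "ADP", "CD": "NUM", "CC": "CONJ", "RP": "PRT",
    ",": ".", ".": ".",
}
assert set(PTB_TO_UNIVERSAL.values()) <= UNIVERSAL_TAGS

def to_universal(tagged_sentence):
    """Map (word, treebank_tag) pairs onto the coarse universal tagset;
    anything unknown falls back to the catch-all category X."""
    return [(w, PTB_TO_UNIVERSAL.get(t, "X")) for w, t in tagged_sentence]

sent = [("The", "DT"), ("cats", "NNS"), ("sleep", "VBP"), (".", ".")]
print(to_universal(sent))
# [('The', 'DET'), ('cats', 'NOUN'), ('sleep', 'VERB'), ('.', '.')]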
1104.3616 | Wei-Xing Zhou | Wei-Xing Zhou (ECUST), Guo-Hua Mu (ECUST), Wei Chen (SZSE), Didier
Sornette (ETH Zurich) | Strategies used as spectroscopy of financial markets reveal new stylized
facts | 13 pages including 5 figures and 1 table | PLoS ONE 6 (9), e24391 (2011) | 10.1371/journal.pone.0024391 | null | q-fin.ST physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new set of stylized facts quantifying the structure of financial
markets. The key idea is to study the combined structure of both investment
strategies and prices in order to open a qualitatively new level of
understanding of financial and economic markets. We study the detailed order
flow on the Shenzhen Stock Exchange of China for the whole year of 2003. This
enormous dataset allows us to compare (i) a closed national market (A-shares)
with an international market (B-shares), (ii) individuals and institutions and
(iii) real investors to random strategies with respect to timing that share
otherwise all other characteristics. We find that more trading results in
a smaller net return due to trading frictions. We unveil quantitative power
laws with non-trivial exponents that quantify the deterioration of performance
with frequency and with holding period of the strategies used by investors.
Random strategies are found to perform much better than real ones, both for
winners and losers. Surprisingly large arbitrage opportunities exist, especially
when using zero-intelligence strategies. This is a diagnostic of possible
inefficiencies of these financial markets.
| [
{
"version": "v1",
"created": "Tue, 19 Apr 2011 00:56:41 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Zhou",
"Wei-Xing",
"",
"ECUST"
],
[
"Mu",
"Guo-Hua",
"",
"ECUST"
],
[
"Chen",
"Wei",
"",
"SZSE"
],
[
"Sornette",
"Didier",
"",
"ETH Zurich"
]
] | TITLE: Strategies used as spectroscopy of financial markets reveal new stylized
facts
ABSTRACT: We propose a new set of stylized facts quantifying the structure of financial
markets. The key idea is to study the combined structure of both investment
strategies and prices in order to open a qualitatively new level of
understanding of financial and economic markets. We study the detailed order
flow on the Shenzhen Stock Exchange of China for the whole year of 2003. This
enormous dataset allows us to compare (i) a closed national market (A-shares)
with an international market (B-shares), (ii) individuals and institutions and
(iii) real investors to random strategies with respect to timing that share
otherwise all other characteristics. We find that more trading results in
a smaller net return due to trading frictions. We unveil quantitative power
laws with non-trivial exponents that quantify the deterioration of performance
with frequency and with holding period of the strategies used by investors.
Random strategies are found to perform much better than real ones, both for
winners and losers. Surprisingly large arbitrage opportunities exist, especially
when using zero-intelligence strategies. This is a diagnostic of possible
inefficiencies of these financial markets.
| no_new_dataset | 0.911377 |
1104.4704 | Chunhua Shen | Chunhua Shen, Junae Kim, Lei Wang, Anton van den Hengel | Positive Semidefinite Metric Learning Using Boosting-like Algorithms | 30 pages, appearing in Journal of Machine Learning Research | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of many machine learning and pattern recognition methods relies
heavily upon the identification of an appropriate distance metric on the input
data. It is often beneficial to learn such a metric from the input training
data, instead of using a default one such as the Euclidean distance. In this
work, we propose a boosting-based technique, termed BoostMetric, for learning a
quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance
metric requires enforcing the constraint that the matrix parameter to the
metric remains positive definite. Semidefinite programming is often used to
enforce this constraint, but does not scale well and is not easy to implement.
BoostMetric is instead based on the observation that any positive semidefinite
matrix can be decomposed into a linear combination of trace-one rank-one
matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak
learners within an efficient and scalable boosting-based learning process. The
resulting methods are easy to implement, efficient, and can accommodate various
types of constraints. We extend traditional boosting algorithms in that the
weak learner is a positive semidefinite matrix with trace and rank being one
rather than a classifier or regressor. Experiments on various datasets
demonstrate that the proposed algorithms compare favorably to those
state-of-the-art methods in terms of classification accuracy and running time.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2011 10:38:03 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Apr 2012 05:56:40 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Shen",
"Chunhua",
""
],
[
"Kim",
"Junae",
""
],
[
"Wang",
"Lei",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Positive Semidefinite Metric Learning Using Boosting-like Algorithms
ABSTRACT: The success of many machine learning and pattern recognition methods relies
heavily upon the identification of an appropriate distance metric on the input
data. It is often beneficial to learn such a metric from the input training
data, instead of using a default one such as the Euclidean distance. In this
work, we propose a boosting-based technique, termed BoostMetric, for learning a
quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance
metric requires enforcing the constraint that the matrix parameter to the
metric remains positive definite. Semidefinite programming is often used to
enforce this constraint, but does not scale well and is not easy to implement.
BoostMetric is instead based on the observation that any positive semidefinite
matrix can be decomposed into a linear combination of trace-one rank-one
matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak
learners within an efficient and scalable boosting-based learning process. The
resulting methods are easy to implement, efficient, and can accommodate various
types of constraints. We extend traditional boosting algorithms in that the
weak learner is a positive semidefinite matrix with trace and rank being one
rather than a classifier or regressor. Experiments on various datasets
demonstrate that the proposed algorithms compare favorably to those
state-of-the-art methods in terms of classification accuracy and running time.
| no_new_dataset | 0.946843 |
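The record above rests on the observation that any positive semidefinite matrix is a non-negative combination of trace-one rank-one matrices, which BoostMetric exploits by using such matrices as weak learners. The NumPy sketch below verifies that observation via an eigendecomposition; it is a numerical illustration, not the boosting algorithm itself.

import numpy as np

def rank_one_decomposition(M, tol=1e-10):
    """Write a PSD matrix as a non-negative combination of trace-one rank-one matrices:
    M = sum_k w_k * u_k u_k^T with w_k >= 0 and each u_k a unit eigenvector."""
    eigvals, eigvecs = np.linalg.eigh(M)
    weights, atoms = [], []
    for lam, u in zip(eigvals, eigvecs.T):
        if lam > tol:                      # PSD => eigenvalues are the combination weights
            weights.append(lam)
            atoms.append(np.outer(u, u))   # unit eigenvector => trace(u u^T) = 1
    return np.array(weights), atoms

# Check on a random PSD matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = A @ A.T                                # PSD by construction
w, atoms = rank_one_decomposition(M)
reconstructed = sum(wi * Zi for wi, Zi in zip(w, atoms))
print(np.allclose(M, reconstructed))       # True
print([round(float(np.trace(Z)), 6) for Z in atoms])   # each trace is 1.0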
1105.0903 | Georgios Zervas | John W. Byers, Michael Mitzenmacher, Michalis Potamias, and Georgios
Zervas | A Month in the Life of Groupon | 6 pages | null | 10.1016/j.elerap.2012.11.006 | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Groupon has become the latest Internet sensation, providing daily deals to
customers in the form of discount offers for restaurants, ticketed events,
appliances, services, and other items. We undertake a study of the economics of
daily deals on the web, based on a dataset we compiled by monitoring Groupon
over several weeks. We use our dataset to characterize Groupon deal purchases,
and to glean insights about Groupon's operational strategy. Our focus is on
purchase incentives. For the primary purchase incentive, price, our regression
model indicates that demand for coupons is relatively inelastic, allowing room
for price-based revenue optimization. More interestingly, mining our dataset,
we find evidence that Groupon customers are sensitive to other, "soft",
incentives, e.g., deal scheduling and duration, deal featuring, and limited
inventory. Our analysis points to the importance of considering incentives
other than price in optimizing deal sites and similar systems.
| [
{
"version": "v1",
"created": "Wed, 4 May 2011 19:25:21 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Byers",
"John W.",
""
],
[
"Mitzenmacher",
"Michael",
""
],
[
"Potamias",
"Michalis",
""
],
[
"Zervas",
"Georgios",
""
]
] | TITLE: A Month in the Life of Groupon
ABSTRACT: Groupon has become the latest Internet sensation, providing daily deals to
customers in the form of discount offers for restaurants, ticketed events,
appliances, services, and other items. We undertake a study of the economics of
daily deals on the web, based on a dataset we compiled by monitoring Groupon
over several weeks. We use our dataset to characterize Groupon deal purchases,
and to glean insights about Groupon's operational strategy. Our focus is on
purchase incentives. For the primary purchase incentive, price, our regression
model indicates that demand for coupons is relatively inelastic, allowing room
for price-based revenue optimization. More interestingly, mining our dataset,
we find evidence that Groupon customers are sensitive to other, "soft",
incentives, e.g., deal scheduling and duration, deal featuring, and limited
inventory. Our analysis points to the importance of considering incentives
other than price in optimizing deal sites and similar systems.
| no_new_dataset | 0.864368 |
1105.4385 | Ping Li | Ping Li and Joshua Moore and Christian Konig | b-Bit Minwise Hashing for Large-Scale Linear SVM | null | null | null | null | cs.LG stat.AP stat.CO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose to (seamlessly) integrate b-bit minwise hashing
with linear SVM to substantially improve the training (and testing) efficiency
using much smaller memory, with essentially no loss of accuracy. Theoretically,
we prove that the resemblance matrix, the minwise hashing matrix, and the b-bit
minwise hashing matrix are all positive definite matrices (kernels).
Interestingly, our proof for the positive definiteness of the b-bit minwise
hashing kernel naturally suggests a simple strategy to integrate b-bit hashing
with linear SVM. Our technique is particularly useful when the data can not fit
in memory, which is an increasingly critical issue in large-scale machine
learning. Our preliminary experimental results on a publicly available webspam
dataset (350K samples and 16 million dimensions) verified the effectiveness of
our algorithm. For example, the training time was reduced to merely a few
seconds. In addition, our technique can be easily extended to many other linear
and nonlinear machine learning applications such as logistic regression.
| [
{
"version": "v1",
"created": "Mon, 23 May 2011 01:56:24 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Li",
"Ping",
""
],
[
"Moore",
"Joshua",
""
],
[
"Konig",
"Christian",
""
]
] | TITLE: b-Bit Minwise Hashing for Large-Scale Linear SVM
ABSTRACT: In this paper, we propose to (seamlessly) integrate b-bit minwise hashing
with linear SVM to substantially improve the training (and testing) efficiency
using much smaller memory, with essentially no loss of accuracy. Theoretically,
we prove that the resemblance matrix, the minwise hashing matrix, and the b-bit
minwise hashing matrix are all positive definite matrices (kernels).
Interestingly, our proof for the positive definiteness of the b-bit minwise
hashing kernel naturally suggests a simple strategy to integrate b-bit hashing
with linear SVM. Our technique is particularly useful when the data cannot fit
in memory, which is an increasingly critical issue in large-scale machine
learning. Our preliminary experimental results on a publicly available webspam
dataset (350K samples and 16 million dimensions) verified the effectiveness of
our algorithm. For example, the training time was reduced to merely a few
seconds. In addition, our technique can be easily extended to many other linear
and nonlinear machine learning applications such as logistic regression.
| no_new_dataset | 0.954351 |
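The record above integrates b-bit minwise hashing with a linear SVM. The sketch below illustrates the general recipe with made-up toy sets and labels: compute k min-hashes per set, keep only the lowest b bits of each, expand them into 2^b-dimensional one-hot blocks, and train a linear SVM on the result. The hash family, prime, and parameters are assumptions for the example, not the paper's exact construction.

import numpy as np
from sklearn.svm import LinearSVC

def bbit_minhash_features(sets, n_hashes=64, b=2, seed=0):
    rng = np.random.default_rng(seed)
    p = 10007                                      # prime assumed larger than any element id
    a = rng.integers(1, p, size=n_hashes)
    c = rng.integers(0, p, size=n_hashes)
    X = np.zeros((len(sets), n_hashes * (2 ** b)))
    for row, s in enumerate(sets):
        elems = np.fromiter(s, dtype=np.int64)
        hashes = (a[:, None] * elems[None, :] + c[:, None]) % p
        mins = hashes.min(axis=1)                  # one min-hash per hash function
        codes = mins & ((1 << b) - 1)              # keep only the lowest b bits
        X[row, np.arange(n_hashes) * (2 ** b) + codes] = 1.0   # one-hot expansion
    return X

# Toy "documents" as sets of token ids; labels are invented for the demo.
docs = [{1, 2, 3, 4}, {1, 2, 3, 9}, {1, 3, 4, 8}, {5, 6, 7, 8}, {5, 6, 7, 2}, {6, 7, 8, 9}]
y = [1, 1, 1, 0, 0, 0]
X = bbit_minhash_features(docs)
clf = LinearSVC().fit(X, y)
print(clf.score(X, y))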
1105.5196 | Jason Weston | Jason Weston, Samy Bengio, Philippe Hamel | Large-Scale Music Annotation and Retrieval: Learning to Rank in Joint
Semantic Spaces | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Music prediction tasks range from predicting tags given a song or clip of
audio, predicting the name of the artist, or predicting related songs given a
song, clip, artist name or tag. That is, we are interested in every semantic
relationship between the different musical concepts in our database. In
realistically sized databases, the number of songs is measured in the hundreds
of thousands or more, and the number of artists in the tens of thousands or
more, providing a considerable challenge to standard machine learning
techniques. In this work, we propose a method that scales to such datasets
which attempts to capture the semantic similarities between the database items
by modeling audio, artist names, and tags in a single low-dimensional semantic
space. This choice of space is learnt by optimizing the set of prediction tasks
of interest jointly using multi-task learning. Our method both outperforms
baseline methods and, in comparison to them, is faster and consumes less
memory. We then demonstrate how our method learns an interpretable model, where
the semantic space captures well the similarities of interest.
| [
{
"version": "v1",
"created": "Thu, 26 May 2011 03:41:47 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Weston",
"Jason",
""
],
[
"Bengio",
"Samy",
""
],
[
"Hamel",
"Philippe",
""
]
] | TITLE: Large-Scale Music Annotation and Retrieval: Learning to Rank in Joint
Semantic Spaces
ABSTRACT: Music prediction tasks range from predicting tags given a song or clip of
audio, predicting the name of the artist, or predicting related songs given a
song, clip, artist name or tag. That is, we are interested in every semantic
relationship between the different musical concepts in our database. In
realistically sized databases, the number of songs is measured in the hundreds
of thousands or more, and the number of artists in the tens of thousands or
more, providing a considerable challenge to standard machine learning
techniques. In this work, we propose a method that scales to such datasets
which attempts to capture the semantic similarities between the database items
by modeling audio, artist names, and tags in a single low-dimensional semantic
space. This choice of space is learnt by optimizing the set of prediction tasks
of interest jointly using multi-task learning. Our method both outperforms
baseline methods and, in comparison to them, is faster and consumes less
memory. We then demonstrate how our method learns an interpretable model, where
the semantic space captures well the similarities of interest.
| no_new_dataset | 0.944944 |
1106.0987 | Junping Zhang | Junping Zhang and Ziyu Xie and Stan Z. Li | Nearest Prime Simplicial Complex for Object Recognition | 16pages, 6 figures | null | null | null | cs.LG cs.AI cs.CG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The structure representation of data distribution plays an important role in
understanding the underlying mechanism of generating data. In this paper, we
propose nearest prime simplicial complex approaches (NSC) by utilizing
persistent homology to capture such structures. Assuming that each class is
represented with a prime simplicial complex, we classify unlabeled samples
based on the nearest projection distances from the samples to the simplicial
complexes. We also extend the extrapolation ability of these complexes with a
projection constraint term. Experiments in simulated and practical datasets
indicate that compared with several published algorithms, the proposed NSC
approaches achieve promising performance without losing the structure
representation.
| [
{
"version": "v1",
"created": "Mon, 6 Jun 2011 08:32:16 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Zhang",
"Junping",
""
],
[
"Xie",
"Ziyu",
""
],
[
"Li",
"Stan Z.",
""
]
] | TITLE: Nearest Prime Simplicial Complex for Object Recognition
ABSTRACT: The structure representation of data distribution plays an important role in
understanding the underlying mechanism of generating data. In this paper, we
propose nearest prime simplicial complex approaches (NSC) by utilizing
persistent homology to capture such structures. Assuming that each class is
represented with a prime simplicial complex, we classify unlabeled samples
based on the nearest projection distances from the samples to the simplicial
complexes. We also extend the extrapolation ability of these complexes with a
projection constraint term. Experiments in simulated and practical datasets
indicate that compared with several published algorithms, the proposed NSC
approaches achieve promising performance without losing the structure
representation.
| no_new_dataset | 0.950869 |
1107.1697 | Aiyou Chen | Aiyou Chen, Jin Cao, Larry Shepp and Tuan Nguyen | Distinct counting with a self-learning bitmap | Journal of the American Statistical Association (accepted) | null | null | null | stat.CO cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Counting the number of distinct elements (cardinality) in a dataset is a
fundamental problem in database management. In recent years, due to many of its
modern applications, there has been significant interest to address the
distinct counting problem in a data stream setting, where each incoming data
can be seen only once and cannot be stored for long periods of time. Many
probabilistic approaches based on either sampling or sketching have been
proposed in the computer science literature that require only limited
computing and memory resources. However, the performances of these methods are
not scale-invariant, in the sense that their relative root mean square
estimation errors (RRMSE) depend on the unknown cardinalities. This is not
desirable in many applications where cardinalities can be very dynamic or
inhomogeneous and many cardinalities need to be estimated. In this paper, we
develop a novel approach, called self-learning bitmap (S-bitmap) that is
scale-invariant for cardinalities in a specified range. S-bitmap uses a binary
vector whose entries are updated from 0 to 1 by an adaptive sampling process
for inferring the unknown cardinality, where the sampling rates are reduced
sequentially as more and more entries change from 0 to 1. We prove rigorously
that the S-bitmap estimate is not only unbiased but scale-invariant. We
demonstrate that to achieve a small RRMSE value of $\epsilon$ or less, our
approach requires significantly less memory and performs a similar or smaller
number of operations than state-of-the-art methods for many common practical cardinality
scales. Both simulation and experimental studies are reported.
| [
{
"version": "v1",
"created": "Fri, 8 Jul 2011 18:50:16 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Chen",
"Aiyou",
""
],
[
"Cao",
"Jin",
""
],
[
"Shepp",
"Larry",
""
],
[
"Nguyen",
"Tuan",
""
]
] | TITLE: Distinct counting with a self-learning bitmap
ABSTRACT: Counting the number of distinct elements (cardinality) in a dataset is a
fundamental problem in database management. In recent years, due to many of its
modern applications, there has been significant interest to address the
distinct counting problem in a data stream setting, where each incoming data
can be seen only once and cannot be stored for long periods of time. Many
probabilistic approaches based on either sampling or sketching have been
proposed in the computer science literature that require only limited
computing and memory resources. However, the performances of these methods are
not scale-invariant, in the sense that their relative root mean square
estimation errors (RRMSE) depend on the unknown cardinalities. This is not
desirable in many applications where cardinalities can be very dynamic or
inhomogeneous and many cardinalities need to be estimated. In this paper, we
develop a novel approach, called self-learning bitmap (S-bitmap) that is
scale-invariant for cardinalities in a specified range. S-bitmap uses a binary
vector whose entries are updated from 0 to 1 by an adaptive sampling process
for inferring the unknown cardinality, where the sampling rates are reduced
sequentially as more and more entries change from 0 to 1. We prove rigorously
that the S-bitmap estimate is not only unbiased but scale-invariant. We
demonstrate that to achieve a small RRMSE value of $\epsilon$ or less, our
approach requires significantly less memory and performs a similar or smaller
number of operations than state-of-the-art methods for many common practical cardinality
scales. Both simulation and experimental studies are reported.
| no_new_dataset | 0.947381 |
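The record above builds an adaptive bitmap (S-bitmap) for distinct counting. The sketch below shows only the classical fixed-rate building block, often called linear counting, where the estimate is -m * ln(fraction of empty buckets); it is not the S-bitmap's adaptive sampling scheme, and the hash and parameters are placeholders.

import math
import random

def linear_counting_estimate(stream, m=1024, seed=0):
    """Classical bitmap cardinality estimate: hash each item to one of m buckets,
    set the bit, and estimate n ~= -m * ln(fraction of zero bits)."""
    rng = random.Random(seed)
    salt = rng.getrandbits(64)
    bitmap = bytearray(m)
    for item in stream:
        bucket = hash((salt, item)) % m     # placeholder hash, not a production choice
        bitmap[bucket] = 1
    zeros = m - sum(bitmap)
    if zeros == 0:                          # bitmap saturated; estimate is unreliable
        return float("inf")
    return -m * math.log(zeros / m)

# Stream with many duplicates but 600 distinct values.
stream = [i % 600 for i in range(50000)]
print(round(linear_counting_estimate(stream)))   # close to 600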
1107.2031 | Shishir Nagaraja | Shishir Nagaraja, Amir Houmansadr, Pratch Piyawongwisal, Vijit Singh,
Pragya Agarwal, Nikita Borisov | Stegobot: construction of an unobservable communication network
leveraging social behavior | Information Hiding, unobservability, anonymity, botnet | null | null | null | cs.CR cs.NI cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the construction of an unobservable communications network using
social networks. The communication endpoints are vertices on a social network.
Probabilistically unobservable communication channels are built by leveraging
image steganography and the social image sharing behavior of users. All
communication takes place along the edges of a social network overlay
connecting friends. We show that such a network can provide decent bandwidth
even with a far from optimal routing mechanism such as restricted flooding. We
show that such a network is indeed usable by constructing a botnet on top of
it, called Stegobot. It is designed to spread via social malware attacks and
steal information from its victims. Unlike conventional botnets, Stegobot
traffic does not introduce new communication endpoints between bots. We
analyzed a real-world dataset of image sharing between members of an online
social network. Analysis of Stegobot's network throughput indicates that
stealthy as it is, it is also functionally powerful -- capable of channeling
fair quantities of sensitive data from its victims to the botmaster at tens of
megabytes every month.
| [
{
"version": "v1",
"created": "Mon, 11 Jul 2011 13:56:15 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Nagaraja",
"Shishir",
""
],
[
"Houmansadr",
"Amir",
""
],
[
"Piyawongwisal",
"Pratch",
""
],
[
"Singh",
"Vijit",
""
],
[
"Agarwal",
"Pragya",
""
],
[
"Borisov",
"Nikita",
""
]
] | TITLE: Stegobot: construction of an unobservable communication network
leveraging social behavior
ABSTRACT: We propose the construction of an unobservable communications network using
social networks. The communication endpoints are vertices on a social network.
Probabilistically unobservable communication channels are built by leveraging
image steganography and the social image sharing behavior of users. All
communication takes place along the edges of a social network overlay
connecting friends. We show that such a network can provide decent bandwidth
even with a far from optimal routing mechanism such as restricted flooding. We
show that such a network is indeed usable by constructing a botnet on top of
it, called Stegobot. It is designed to spread via social malware attacks and
steal information from its victims. Unlike conventional botnets, Stegobot
traffic does not introduce new communication endpoints between bots. We
analyzed a real-world dataset of image sharing between members of an online
social network. Analysis of Stegobot's network throughput indicates that
stealthy as it is, it is also functionally powerful -- capable of channeling
fair quantities of sensitive data from its victims to the botmaster at tens of
megabytes every month.
| no_new_dataset | 0.904819 |
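The record above relies on image steganography to hide bot traffic inside shared photos. For intuition only, the sketch below shows a generic least-significant-bit embed/extract on a random 8-bit image array; it is a textbook toy, not the scheme analyzed in the paper.

import numpy as np

def embed_bits(cover, bits):
    """Hide a bit string in the least-significant bits of an 8-bit image array."""
    flat = cover.flatten().copy()
    if len(bits) > flat.size:
        raise ValueError("message too long for this cover image")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit          # overwrite the lowest bit only
    return flat.reshape(cover.shape)

def extract_bits(stego, n_bits):
    return [int(v & 1) for v in stego.flatten()[:n_bits]]

message = "hi"
bits = [int(b) for ch in message for b in format(ord(ch), "08b")]

cover = np.random.default_rng(1).integers(0, 256, size=(8, 8), dtype=np.uint8)
stego = embed_bits(cover, bits)

recovered_bits = extract_bits(stego, len(bits))
recovered = "".join(
    chr(int("".join(map(str, recovered_bits[i:i + 8])), 2))
    for i in range(0, len(recovered_bits), 8)
)
print(recovered)                                                  # "hi"
print(int(np.abs(cover.astype(int) - stego.astype(int)).max()))   # pixels change by at most 1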
1107.3606 | Hideaki Kimura | Hideaki Kimura, Carleton Coffrin, Alexander Rasin, Stanley B. Zdonik | Optimizing Index Deployment Order for Evolving OLAP (Extended Version) | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Query workloads and database schemas in OLAP applications are becoming
increasingly complex. Moreover, the queries and the schemas have to continually
\textit{evolve} to address business requirements. During such repetitive
transitions, the \textit{order} of index deployment has to be considered while
designing the physical schemas such as indexes and MVs.
An effective index deployment ordering can produce (1) a prompt query runtime
improvement and (2) a reduced total deployment time. Both of these are
essential qualities of design tools for quickly evolving databases, but
optimizing the problem is challenging because of complex index interactions and
a factorial number of possible solutions.
We formulate the problem in a mathematical model and study several techniques
for solving the index ordering problem. We demonstrate that Constraint
Programming (CP) is a more flexible and efficient platform to solve the problem
than other methods such as mixed integer programming and A* search. In addition
to exact search techniques, we also studied local search algorithms to find
a near-optimal solution very quickly.
Our empirical analysis on the TPC-H dataset shows that our pruning techniques
can reduce the size of the search space by tens of orders of magnitude. Using
the TPC-DS dataset, we verify that our local search algorithm is a highly
scalable and stable method for quickly finding a near-optimal solution.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2011 01:52:52 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2011 00:25:35 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Feb 2012 15:46:22 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Kimura",
"Hideaki",
""
],
[
"Coffrin",
"Carleton",
""
],
[
"Rasin",
"Alexander",
""
],
[
"Zdonik",
"Stanley B.",
""
]
] | TITLE: Optimizing Index Deployment Order for Evolving OLAP (Extended Version)
ABSTRACT: Query workloads and database schemas in OLAP applications are becoming
increasingly complex. Moreover, the queries and the schemas have to continually
\textit{evolve} to address business requirements. During such repetitive
transitions, the \textit{order} of index deployment has to be considered while
designing the physical schemas such as indexes and MVs.
An effective index deployment ordering can produce (1) a prompt query runtime
improvement and (2) a reduced total deployment time. Both of these are
essential qualities of design tools for quickly evolving databases, but
optimizing the problem is challenging because of complex index interactions and
a factorial number of possible solutions.
We formulate the problem in a mathematical model and study several techniques
for solving the index ordering problem. We demonstrate that Constraint
Programming (CP) is a more flexible and efficient platform to solve the problem
than other methods such as mixed integer programming and A* search. In addition
to exact search techniques, we also studied local search algorithms to find
a near-optimal solution very quickly.
Our empirical analysis on the TPC-H dataset shows that our pruning techniques
can reduce the size of the search space by tens of orders of magnitude. Using
the TPC-DS dataset, we verify that our local search algorithm is a highly
scalable and stable method for quickly finding a near-optimal solution.
| no_new_dataset | 0.94366 |
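The record above searches for an index deployment order that yields early query-time improvements. The brute-force toy below illustrates only the flavor of the objective under strong assumptions (independent index benefits, a weighted-completion-time penalty); the paper's CP model handles index interactions and far larger search spaces, and the index names and numbers here are invented.

from itertools import permutations

# Toy model: build cost (hours) and the query-time saving each index delivers once built.
indexes = {
    "idx_orders_date":   {"build": 5.0, "saving": 3.0},
    "idx_lineitem_part": {"build": 2.0, "saving": 4.0},
    "idx_customer_name": {"build": 1.0, "saving": 0.5},
}

def cumulative_penalty(order):
    """Each index's saving keeps being 'paid' by the workload until that index
    finishes building; smaller totals mean earlier runtime improvement."""
    clock, penalty = 0.0, 0.0
    for name in order:
        clock += indexes[name]["build"]
        penalty += indexes[name]["saving"] * clock
    return penalty

best = min(permutations(indexes), key=cumulative_penalty)
print(list(best), round(cumulative_penalty(best), 2))
# The cheap, high-benefit index is deployed first; the low-benefit one can wait.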
1108.3605 | Adrian Barbu | Adrian Barbu | Hierarchical Object Parsing from Structured Noisy Point Clouds | 13 pages, 16 figures | null | 10.1109/TPAMI.2012.262 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object parsing and segmentation from point clouds are challenging tasks
because the relevant data is available only as thin structures along object
boundaries or other features, and is corrupted by large amounts of noise. To
handle this kind of data, flexible shape models are desired that can accurately
follow the object boundaries. Popular models such as Active Shape and Active
Appearance models lack the necessary flexibility for this task, while recent
approaches such as the Recursive Compositional Models make model
simplifications in order to obtain computational guarantees. This paper
investigates a hierarchical Bayesian model of shape and appearance in a
generative setting. The input data is explained by an object parsing layer,
which is a deformation of a hidden PCA shape model with Gaussian prior. The
paper also introduces a novel efficient inference algorithm that uses informed
data-driven proposals to initialize local searches for the hidden variables.
Applied to the problem of object parsing from structured point clouds such as
edge detection images, the proposed approach obtains state of the art parsing
errors on two standard datasets without using any intensity information.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2011 02:11:34 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Sep 2012 14:24:08 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Barbu",
"Adrian",
""
]
] | TITLE: Hierarchical Object Parsing from Structured Noisy Point Clouds
ABSTRACT: Object parsing and segmentation from point clouds are challenging tasks
because the relevant data is available only as thin structures along object
boundaries or other features, and is corrupted by large amounts of noise. To
handle this kind of data, flexible shape models are desired that can accurately
follow the object boundaries. Popular models such as Active Shape and Active
Appearance models lack the necessary flexibility for this task, while recent
approaches such as the Recursive Compositional Models make model
simplifications in order to obtain computational guarantees. This paper
investigates a hierarchical Bayesian model of shape and appearance in a
generative setting. The input data is explained by an object parsing layer,
which is a deformation of a hidden PCA shape model with Gaussian prior. The
paper also introduces a novel efficient inference algorithm that uses informed
data-driven proposals to initialize local searches for the hidden variables.
Applied to the problem of object parsing from structured point clouds such as
edge detection images, the proposed approach obtains state of the art parsing
errors on two standard datasets without using any intensity information.
| no_new_dataset | 0.950641 |
1109.1530 | Georgios Zervas | John W. Byers, Michael Mitzenmacher, Georgios Zervas | Daily Deals: Prediction, Social Diffusion, and Reputational
Ramifications | 15 pages, 9 tables, 11 figures | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Daily deal sites have become the latest Internet sensation, providing
discounted offers to customers for restaurants, ticketed events, services, and
other items. We begin by undertaking a study of the economics of daily deals on
the web, based on a dataset we compiled by monitoring Groupon and LivingSocial
sales in 20 large cities over several months. We use this dataset to
characterize deal purchases; glean insights about operational strategies of
these firms; and evaluate customers' sensitivity to factors such as price, deal
scheduling, and limited inventory. We then marry our daily deals dataset with
additional datasets we compiled from Facebook and Yelp users to study the
interplay between social networks and daily deal sites. First, by studying user
activity on Facebook while a deal is running, we provide evidence that daily
deal sites benefit from significant word-of-mouth effects during sales events,
consistent with results predicted by cascade models. Second, we consider the
effects of daily deals on the longer-term reputation of merchants, based on
their Yelp reviews before and after they run a daily deal. Our analysis shows
that while the number of reviews increases significantly due to daily deals,
average rating scores from reviewers who mention daily deals are 10% lower than
scores of their peers on average.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2011 18:29:30 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Byers",
"John W.",
""
],
[
"Mitzenmacher",
"Michael",
""
],
[
"Zervas",
"Georgios",
""
]
] | TITLE: Daily Deals: Prediction, Social Diffusion, and Reputational
Ramifications
ABSTRACT: Daily deal sites have become the latest Internet sensation, providing
discounted offers to customers for restaurants, ticketed events, services, and
other items. We begin by undertaking a study of the economics of daily deals on
the web, based on a dataset we compiled by monitoring Groupon and LivingSocial
sales in 20 large cities over several months. We use this dataset to
characterize deal purchases; glean insights about operational strategies of
these firms; and evaluate customers' sensitivity to factors such as price, deal
scheduling, and limited inventory. We then marry our daily deals dataset with
additional datasets we compiled from Facebook and Yelp users to study the
interplay between social networks and daily deal sites. First, by studying user
activity on Facebook while a deal is running, we provide evidence that daily
deal sites benefit from significant word-of-mouth effects during sales events,
consistent with results predicted by cascade models. Second, we consider the
effects of daily deals on the longer-term reputation of merchants, based on
their Yelp reviews before and after they run a daily deal. Our analysis shows
that while the number of reviews increases significantly due to daily deals,
average rating scores from reviewers who mention daily deals are 10% lower than
scores of their peers on average.
| no_new_dataset | 0.913058 |
1109.1966 | Timothy Hunter | Timothy Hunter, Pieter Abbeel, and Alexandre Bayen | The path inference filter: model-based low-latency map matching of probe
vehicle data | Preprint, 23 pages and 23 figures | null | 10.1016/j.trb.2013.03.008 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of reconstructing vehicle trajectories from sparse
sequences of GPS points, for which the sampling interval is between 10 seconds
and 2 minutes. We introduce a new class of algorithms, called altogether path
inference filter (PIF), that maps GPS data in real time, for a variety of
trade-offs and scenarios, and with a high throughput. Numerous prior approaches
in map-matching can be shown to be special cases of the path inference filter
presented in this article. We present an efficient procedure for automatically
training the filter on new data, with or without ground truth observations. The
framework is evaluated on a large San Francisco taxi dataset and is shown to
improve upon the current state of the art. This filter also provides insights
about driving patterns of drivers. The path inference filter has been deployed
at an industrial scale inside the Mobile Millennium traffic information system,
and is used to map fleets of data in San Francisco, Sacramento, Stockholm and
Porto.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 11:12:35 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jun 2012 17:12:40 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Hunter",
"Timothy",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Bayen",
"Alexandre",
""
]
] | TITLE: The path inference filter: model-based low-latency map matching of probe
vehicle data
ABSTRACT: We consider the problem of reconstructing vehicle trajectories from sparse
sequences of GPS points, for which the sampling interval is between 10 seconds
and 2 minutes. We introduce a new class of algorithms, called altogether path
inference filter (PIF), that maps GPS data in real time, for a variety of
trade-offs and scenarios, and with a high throughput. Numerous prior approaches
in map-matching can be shown to be special cases of the path inference filter
presented in this article. We present an efficient procedure for automatically
training the filter on new data, with or without ground truth observations. The
framework is evaluated on a large San Francisco taxi dataset and is shown to
improve upon the current state of the art. This filter also provides insights
about driving patterns of drivers. The path inference filter has been deployed
at an industrial scale inside the Mobile Millennium traffic information system,
and is used to map fleets of data in San Francisco, Sacramento, Stockholm and
Porto.
| no_new_dataset | 0.949949 |
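The record above maps sparse GPS fixes onto road paths. A standard ingredient of such map matching, of which the path inference filter is a generalization, is a Viterbi-style dynamic program over candidate road projections; the sketch below implements only that ingredient with made-up log-likelihoods and two GPS points, and is not the PIF itself.

import numpy as np

def viterbi_map_match(obs_log_lik, trans_log_lik):
    """Most likely sequence of road candidates for a sequence of GPS fixes.

    obs_log_lik:   list of length T; element t holds log-likelihoods of the
                   candidate road projections for GPS point t.
    trans_log_lik: list of length T-1; element t is a matrix [i, j] with the log
                   plausibility of moving from candidate i at time t to j at t+1."""
    T = len(obs_log_lik)
    score = [np.asarray(obs_log_lik[0], dtype=float)]
    back = []
    for t in range(1, T):
        prev = score[-1][:, None] + np.asarray(trans_log_lik[t - 1], dtype=float)
        back.append(prev.argmax(axis=0))
        score.append(prev.max(axis=0) + np.asarray(obs_log_lik[t], dtype=float))
    path = [int(score[-1].argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Two GPS fixes, each with two candidate road segments (log-likelihoods made up).
obs = [np.log([0.7, 0.3]), np.log([0.2, 0.8])]
trans = [np.log([[0.6, 0.4],
                 [0.5, 0.5]])]
print(viterbi_map_match(obs, trans))   # [0, 1]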
1109.4684 | Zhiwu Lu | Zhiwu Lu, Horace H.S. Ip, Yuxin Peng | Exhaustive and Efficient Constraint Propagation: A Semi-Supervised
Learning Perspective and Its Applications | The short version of this paper appears as oral paper in ECCV 2010 | International Journal of Computer Vision (IJCV), 2012 | 10.1007/s11263-012-0602-z | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel pairwise constraint propagation approach by
decomposing the challenging constraint propagation problem into a set of
independent semi-supervised learning subproblems which can be solved in
quadratic time using label propagation based on k-nearest neighbor graphs.
Considering that this time cost is proportional to the number of all possible
pairwise constraints, our approach actually provides an efficient solution for
exhaustively propagating pairwise constraints throughout the entire dataset.
The resulting exhaustive set of propagated pairwise constraints are further
used to adjust the similarity matrix for constrained spectral clustering. Other
than the traditional constraint propagation on single-source data, our approach
is also extended to more challenging constraint propagation on multi-source
data where each pairwise constraint is defined over a pair of data points from
different sources. This multi-source constraint propagation has an important
application to cross-modal multimedia retrieval. Extensive results have shown
the superior performance of our approach.
| [
{
"version": "v1",
"created": "Thu, 22 Sep 2011 00:56:22 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Lu",
"Zhiwu",
""
],
[
"Ip",
"Horace H. S.",
""
],
[
"Peng",
"Yuxin",
""
]
] | TITLE: Exhaustive and Efficient Constraint Propagation: A Semi-Supervised
Learning Perspective and Its Applications
ABSTRACT: This paper presents a novel pairwise constraint propagation approach by
decomposing the challenging constraint propagation problem into a set of
independent semi-supervised learning subproblems which can be solved in
quadratic time using label propagation based on k-nearest neighbor graphs.
Considering that this time cost is proportional to the number of all possible
pairwise constraints, our approach actually provides an efficient solution for
exhaustively propagating pairwise constraints throughout the entire dataset.
The resulting exhaustive set of propagated pairwise constraints are further
used to adjust the similarity matrix for constrained spectral clustering. Other
than the traditional constraint propagation on single-source data, our approach
is also extended to more challenging constraint propagation on multi-source
data where each pairwise constraint is defined over a pair of data points from
different sources. This multi-source constraint propagation has an important
application to cross-modal multimedia retrieval. Extensive results have shown
the superior performance of our approach.
| no_new_dataset | 0.945197 |
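The record above decomposes pairwise constraint propagation into independent semi-supervised label propagation subproblems on k-nearest-neighbor graphs. The sketch below shows only that underlying label-propagation step (a Zhou-style closed form on a k-NN graph built with scikit-learn) on synthetic blob data; the decomposition over pairwise constraints and the spectral-clustering use of the result are not included, and all data and parameters are illustrative.

import numpy as np
from sklearn.neighbors import kneighbors_graph

def label_propagation(X, y_partial, k=3, alpha=0.9):
    """Semi-supervised label propagation on a k-NN graph:
    F* = (1 - alpha) * (I - alpha * S)^(-1) * Y,  with  S = D^(-1/2) W D^(-1/2).
    y_partial: +1 / -1 for labelled points, 0 for unlabelled."""
    W = kneighbors_graph(X, n_neighbors=k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                       # symmetrise the graph
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = len(y_partial)
    Y = np.asarray(y_partial, dtype=float)
    F = np.linalg.solve(np.eye(n) - alpha * S, (1 - alpha) * Y)
    return np.sign(F)

# Two well-separated blobs; only one point per blob is labelled, the rest are propagated to.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
y = np.zeros(20)
y[0], y[10] = 1, -1
print(label_propagation(X, y))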
1109.4979 | Zhiwu Lu | Zhiwu Lu, Yuxin Peng | Latent Semantic Learning with Structured Sparse Representation for Human
Action Recognition | The short version of this paper appears in ICCV 2011 | null | 10.1016/j.patcog.2012.09.027 | null | cs.MM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel latent semantic learning method for extracting
high-level features (i.e. latent semantics) from a large vocabulary of abundant
mid-level features (i.e. visual keywords) with structured sparse
representation, which can help to bridge the semantic gap in the challenging
task of human action recognition. To discover the manifold structure of
mid-level features, we develop a spectral embedding approach to latent semantic
learning based on L1-graph, without the need to tune any parameter for graph
construction as a key step of manifold learning. More importantly, we construct
the L1-graph with structured sparse representation, which can be obtained by
structured sparse coding with its structured sparsity ensured by novel L1-norm
hypergraph regularization over mid-level features. In the new embedding space,
we learn latent semantics automatically from abundant mid-level features
through spectral clustering. The learnt latent semantics can be readily used
for human action recognition with SVM by defining a histogram intersection
kernel. Different from the traditional latent semantic analysis based on topic
models, our latent semantic learning method can explore the manifold structure
of mid-level features in both L1-graph construction and spectral embedding,
which results in compact but discriminative high-level features. The
experimental results on the commonly used KTH action dataset and unconstrained
YouTube action dataset show the superior performance of our method.
| [
{
"version": "v1",
"created": "Fri, 23 Sep 2011 00:39:51 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Lu",
"Zhiwu",
""
],
[
"Peng",
"Yuxin",
""
]
] | TITLE: Latent Semantic Learning with Structured Sparse Representation for Human
Action Recognition
ABSTRACT: This paper proposes a novel latent semantic learning method for extracting
high-level features (i.e. latent semantics) from a large vocabulary of abundant
mid-level features (i.e. visual keywords) with structured sparse
representation, which can help to bridge the semantic gap in the challenging
task of human action recognition. To discover the manifold structure of
mid-level features, we develop a spectral embedding approach to latent semantic
learning based on L1-graph, without the need to tune any parameter for graph
construction as a key step of manifold learning. More importantly, we construct
the L1-graph with structured sparse representation, which can be obtained by
structured sparse coding with its structured sparsity ensured by novel L1-norm
hypergraph regularization over mid-level features. In the new embedding space,
we learn latent semantics automatically from abundant mid-level features
through spectral clustering. The learnt latent semantics can be readily used
for human action recognition with SVM by defining a histogram intersection
kernel. Different from the traditional latent semantic analysis based on topic
models, our latent semantic learning method can explore the manifold structure
of mid-level features in both L1-graph construction and spectral embedding,
which results in compact but discriminative high-level features. The
experimental results on the commonly used KTH action dataset and unconstrained
YouTube action dataset show the superior performance of our method.
| no_new_dataset | 0.946941 |
1109.6073 | Julian Heinrich | Julian Heinrich, Yuan Luo, Arthur E. Kirkpatrick, Hao Zhang, Daniel
Weiskopf | Evaluation of a Bundling Technique for Parallel Coordinates | null | null | null | TR-2011-08 | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a technique for bundled curve representations in
parallel-coordinates plots and present a controlled user study evaluating their
effectiveness. Replacing the traditional C^0 polygonal lines by C^1 continuous
piecewise Bezier curves makes it easier to visually trace data points through
each coordinate axis. The resulting Bezier curves can then be bundled to
visualize data with given cluster structures. Curve bundles are efficient to
compute, provide visual separation between data clusters, reduce visual
clutter, and present a clearer overview of the dataset. A controlled user study
with 14 participants confirmed the effectiveness of curve bundling for
parallel-coordinates visualization: 1) compared to polygonal lines, it is
equally capable of revealing correlations between neighboring data attributes;
2) its geometric cues can be effective in displaying cluster information. For
some datasets curve bundling allows the color perceptual channel to be applied
to other data attributes, while for complex cluster patterns, bundling and
color can represent clustering far more clearly than either alone.
| [
{
"version": "v1",
"created": "Wed, 28 Sep 2011 01:44:43 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Heinrich",
"Julian",
""
],
[
"Luo",
"Yuan",
""
],
[
"Kirkpatrick",
"Arthur E.",
""
],
[
"Zhang",
"Hao",
""
],
[
"Weiskopf",
"Daniel",
""
]
] | TITLE: Evaluation of a Bundling Technique for Parallel Coordinates
ABSTRACT: We describe a technique for bundled curve representations in
parallel-coordinates plots and present a controlled user study evaluating their
effectiveness. Replacing the traditional C^0 polygonal lines by C^1 continuous
piecewise Bezier curves makes it easier to visually trace data points through
each coordinate axis. The resulting Bezier curves can then be bundled to
visualize data with given cluster structures. Curve bundles are efficient to
compute, provide visual separation between data clusters, reduce visual
clutter, and present a clearer overview of the dataset. A controlled user study
with 14 participants confirmed the effectiveness of curve bundling for
parallel-coordinates visualization: 1) compared to polygonal lines, it is
equally capable of revealing correlations between neighboring data attributes;
2) its geometric cues can be effective in displaying cluster information. For
some datasets curve bundling allows the color perceptual channel to be applied
to other data attributes, while for complex cluster patterns, bundling and
color can represent clustering far more clearly than either alone.
| no_new_dataset | 0.951908 |
1111.0753 | Sourav Dutta | Sourav Dutta, Souvik Bhattacherjee and Ankur Narang | Towards "Intelligent Compression" in Streams: A Biased Reservoir
Sampling based Bloom Filter Approach | 11 pages, 8 figures, 5 tables | null | null | IBM TechReport RI11015 | cs.IR cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the explosion of information stored world-wide, data-intensive computing
has become a central area of research. Efficient management and processing of
this massively exponential amount of data from diverse sources, such as
telecommunication call data records, online transaction records, etc., has
become a necessity. Removing redundancy from such huge (multi-billion record)
datasets, resulting in resource and compute efficiency for downstream
processing, constitutes an important area of study. "Intelligent compression"
or deduplication in streaming scenarios, for precise identification and
elimination of duplicates from the unbounded data stream, is a greater
challenge given the real-time nature of data arrival. Stable Bloom Filters
(SBF) address this problem to a certain extent. However, SBF suffers from a
high false negative rate (FNR) and a slow convergence rate, thereby rendering
it inefficient for applications with low FNR tolerance. In this paper, we
present a novel Reservoir Sampling based Bloom Filter (RSBF) data structure,
based on the combined concepts of reservoir sampling and Bloom filters, for
approximate detection of duplicates in data streams. Using detailed
theoretical analysis, we prove analytical bounds on its false positive rate
(FPR), false negative rate (FNR) and convergence rates with low memory
requirements. We show that RSBF offers the currently lowest FN and
convergence rates, which are better than those of SBF while using the same
memory. Using empirical analysis on real-world datasets (3 million records)
and synthetic datasets with around 1 billion records, we demonstrate up to 2x
improvement in FNR with better convergence rates compared to SBF, while
exhibiting comparable FPR. To the best of our knowledge, this is the first
attempt to integrate the reservoir sampling method with Bloom filters for
deduplication in streaming scenarios.
| [
{
"version": "v1",
"created": "Thu, 3 Nov 2011 08:45:44 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Dutta",
"Sourav",
""
],
[
"Bhattacherjee",
"Souvik",
""
],
[
"Narang",
"Ankur",
""
]
] | TITLE: Towards "Intelligent Compression" in Streams: A Biased Reservoir
Sampling based Bloom Filter Approach
ABSTRACT: With the explosion of information stored world-wide, data-intensive computing
has become a central area of research. Efficient management and processing of
this massively exponential amount of data from diverse sources, such as
telecommunication call data records, online transaction records, etc., has
become a necessity. Removing redundancy from such huge (multi-billion record)
datasets, resulting in resource and compute efficiency for downstream
processing, constitutes an important area of study. "Intelligent compression"
or deduplication in streaming scenarios, for precise identification and
elimination of duplicates from the unbounded data stream, is a greater
challenge given the real-time nature of data arrival. Stable Bloom Filters
(SBF) address this problem to a certain extent. However, SBF suffers from a
high false negative rate (FNR) and a slow convergence rate, thereby rendering
it inefficient for applications with low FNR tolerance. In this paper, we
present a novel Reservoir Sampling based Bloom Filter (RSBF) data structure,
based on the combined concepts of reservoir sampling and Bloom filters, for
approximate detection of duplicates in data streams. Using detailed
theoretical analysis, we prove analytical bounds on its false positive rate
(FPR), false negative rate (FNR) and convergence rates with low memory
requirements. We show that RSBF offers the currently lowest FN and
convergence rates, which are better than those of SBF while using the same
memory. Using empirical analysis on real-world datasets (3 million records)
and synthetic datasets with around 1 billion records, we demonstrate up to 2x
improvement in FNR with better convergence rates compared to SBF, while
exhibiting comparable FPR. To the best of our knowledge, this is the first
attempt to integrate the reservoir sampling method with Bloom filters for
deduplication in streaming scenarios.
| no_new_dataset | 0.950134 |
1111.1497 | Rishiraj Saha Roy | Rishiraj Saha Roy, Niloy Ganguly, Monojit Choudhury and Srivatsan
Laxman | An IR-based Evaluation Framework for Web Search Query Segmentation | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/3.0/ | This paper presents the first evaluation framework for Web search query
segmentation based directly on IR performance. In the past, segmentation
strategies were mainly validated against manual annotations. Our work shows
that the goodness of a segmentation algorithm as judged through evaluation
against a handful of human annotated segmentations hardly reflects its
effectiveness in an IR-based setup. In fact, state-of-the-art algorithms are
shown to perform as well as, and sometimes even better than, human annotations
-- a fact masked by previous validations. The proposed framework also provides
us an objective understanding of the gap between the present best and the best
possible segmentation algorithm. We draw these conclusions based on an
extensive evaluation of six segmentation strategies, including three most
recent algorithms, vis-a-vis segmentations from three human annotators. The
evaluation framework also gives insights about which segments should be
necessarily detected by an algorithm for achieving the best retrieval results.
The meticulously constructed dataset used in our experiments has been made
public for use by the research community.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2011 07:26:27 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Dec 2011 17:33:28 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Dec 2011 11:22:38 GMT"
},
{
"version": "v4",
"created": "Tue, 18 Sep 2012 03:26:22 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Roy",
"Rishiraj Saha",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"Choudhury",
"Monojit",
""
],
[
"Laxman",
"Srivatsan",
""
]
] | TITLE: An IR-based Evaluation Framework for Web Search Query Segmentation
ABSTRACT: This paper presents the first evaluation framework for Web search query
segmentation based directly on IR performance. In the past, segmentation
strategies were mainly validated against manual annotations. Our work shows
that the goodness of a segmentation algorithm as judged through evaluation
against a handful of human annotated segmentations hardly reflects its
effectiveness in an IR-based setup. In fact, state-of-the-art algorithms are
shown to perform as well as, and sometimes even better than, human annotations
-- a fact masked by previous validations. The proposed framework also provides
us an objective understanding of the gap between the present best and the best
possible segmentation algorithm. We draw these conclusions based on an
extensive evaluation of six segmentation strategies, including three most
recent algorithms, vis-a-vis segmentations from three human annotators. The
evaluation framework also gives insights about which segments should be
necessarily detected by an algorithm for achieving the best retrieval results.
The meticulously constructed dataset used in our experiments has been made
public for use by the research community.
| new_dataset | 0.953837 |
1111.4297 | Cheng Chen | Cheng Chen, Kui Wu, Venkatesh Srinivasan, Xudong Zhang | Battling the Internet Water Army: Detection of Hidden Paid Posters | 10 pages, 13 figures | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We initiate a systematic study to help distinguish a special group of online
users, called hidden paid posters, or termed "Internet water army" in China,
from the legitimate ones. On the Internet, the paid posters represent a new
type of online job opportunity. They get paid for posting comments and new
threads or articles on different online communities and websites for some
hidden purposes, e.g., to influence the opinion of other people towards certain
social events or business markets. Though an interesting strategy in business
marketing, paid posters may create a significant negative effect on the online
communities, since the information from paid posters is usually not
trustworthy. When two competitive companies hire paid posters to post fake news
or negative comments about each other, normal online users may feel overwhelmed
and find it difficult to put any trust in the information they acquire from the
Internet. In this paper, we thoroughly investigate the behavioral pattern of
online paid posters based on real-world trace data. We design and validate a
new detection mechanism, using both non-semantic analysis and semantic
analysis, to identify potential online paid posters. Our test results with
real-world datasets show a very promising performance.
| [
{
"version": "v1",
"created": "Fri, 18 Nov 2011 08:21:58 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Chen",
"Cheng",
""
],
[
"Wu",
"Kui",
""
],
[
"Srinivasan",
"Venkatesh",
""
],
[
"Zhang",
"Xudong",
""
]
] | TITLE: Battling the Internet Water Army: Detection of Hidden Paid Posters
ABSTRACT: We initiate a systematic study to help distinguish a special group of online
users, called hidden paid posters, or termed "Internet water army" in China,
from the legitimate ones. On the Internet, the paid posters represent a new
type of online job opportunity. They get paid for posting comments and new
threads or articles on different online communities and websites for some
hidden purposes, e.g., to influence the opinion of other people towards certain
social events or business markets. Though an interesting strategy in business
marketing, paid posters may create a significant negative effect on the online
communities, since the information from paid posters is usually not
trustworthy. When two competitive companies hire paid posters to post fake news
or negative comments about each other, normal online users may feel overwhelmed
and find it difficult to put any trust in the information they acquire from the
Internet. In this paper, we thoroughly investigate the behavioral pattern of
online paid posters based on real-world trace data. We design and validate a
new detection mechanism, using both non-semantic analysis and semantic
analysis, to identify potential online paid posters. Our test results with
real-world datasets show a very promising performance.
| no_new_dataset | 0.943764 |
1111.6937 | Matteo Riondato | Matteo Riondato and Eli Upfal | Efficient Discovery of Association Rules and Frequent Itemsets through
Sampling with Tight Performance Guarantees | 19 pages, 7 figures. A shorter version of this paper appeared in the
proceedings of ECML PKDD 2012 | null | null | null | cs.DS cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The tasks of extracting (top-$K$) Frequent Itemsets (FI's) and Association
Rules (AR's) are fundamental primitives in data mining and database
applications. Exact algorithms for these problems exist and are widely used,
but their running time is hindered by the need of scanning the entire dataset,
possibly multiple times. High quality approximations of FI's and AR's are
sufficient for most practical uses, and a number of recent works explored the
application of sampling for fast discovery of approximate solutions to the
problems. However, these works do not provide satisfactory performance
guarantees on the quality of the approximation, due to the difficulty of
bounding the probability of under- or over-sampling any one of an unknown
number of frequent itemsets. In this work we circumvent this issue by applying
the statistical concept of \emph{Vapnik-Chervonenkis (VC) dimension} to develop
a novel technique for providing tight bounds on the sample size that guarantees
approximation within user-specified parameters. Our technique applies both to
absolute and to relative approximations of (top-$K$) FI's and AR's. The
resulting sample size is linearly dependent on the VC-dimension of a range
space associated with the dataset to be mined. The main theoretical
contribution of this work is a proof that the VC-dimension of this range space
is upper bounded by an easy-to-compute characteristic quantity of the dataset
which we call \emph{d-index}, and is the maximum integer $d$ such that the
dataset contains at least $d$ transactions of length at least $d$ such that no
one of them is a superset of or equal to another. We show that this bound is
strict for a large class of datasets.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2011 19:11:50 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2011 14:45:50 GMT"
},
{
"version": "v3",
"created": "Tue, 24 Apr 2012 02:39:09 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Jun 2012 12:56:59 GMT"
},
{
"version": "v5",
"created": "Mon, 10 Dec 2012 20:07:02 GMT"
},
{
"version": "v6",
"created": "Fri, 22 Feb 2013 14:32:31 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Riondato",
"Matteo",
""
],
[
"Upfal",
"Eli",
""
]
] | TITLE: Efficient Discovery of Association Rules and Frequent Itemsets through
Sampling with Tight Performance Guarantees
ABSTRACT: The tasks of extracting (top-$K$) Frequent Itemsets (FI's) and Association
Rules (AR's) are fundamental primitives in data mining and database
applications. Exact algorithms for these problems exist and are widely used,
but their running time is hindered by the need of scanning the entire dataset,
possibly multiple times. High quality approximations of FI's and AR's are
sufficient for most practical uses, and a number of recent works explored the
application of sampling for fast discovery of approximate solutions to the
problems. However, these works do not provide satisfactory performance
guarantees on the quality of the approximation, due to the difficulty of
bounding the probability of under- or over-sampling any one of an unknown
number of frequent itemsets. In this work we circumvent this issue by applying
the statistical concept of \emph{Vapnik-Chervonenkis (VC) dimension} to develop
a novel technique for providing tight bounds on the sample size that guarantees
approximation within user-specified parameters. Our technique applies both to
absolute and to relative approximations of (top-$K$) FI's and AR's. The
resulting sample size is linearly dependent on the VC-dimension of a range
space associated with the dataset to be mined. The main theoretical
contribution of this work is a proof that the VC-dimension of this range space
is upper bounded by an easy-to-compute characteristic quantity of the dataset
which we call \emph{d-index}, and is the maximum integer $d$ such that the
dataset contains at least $d$ transactions of length at least $d$ such that no
one of them is a superset of or equal to another. We show that this bound is
strict for a large class of datasets.
| no_new_dataset | 0.943452 |
1112.1245 | Mikael Vejdemo-Johansson | David Lipsky, Primoz Skraba, Mikael Vejdemo-Johansson | A spectral sequence for parallelized persistence | 15 pages, 10 figures, submitted to the ACM Symposium on Computational
Geometry | null | null | null | cs.CG cs.DC math.AT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We approach the problem of the computation of persistent homology for large
datasets by a divide-and-conquer strategy. Dividing the total space into
separate but overlapping components, we are able to limit the total memory
residency for any part of the computation, while not degrading the overall
complexity much. Locally computed persistence information is then merged from
the components and their intersections using a spectral sequence generalizing
the Mayer-Vietoris long exact sequence.
We describe the Mayer-Vietoris spectral sequence and give details on how to
compute with it. This allows us to merge local homological data into the global
persistent homology. Furthermore, we detail how the classical topology
constructions inherent in the spectral sequence adapt to a persistence
perspective, as well as describe the techniques from computational commutative
algebra necessary for this extension.
The resulting computational scheme suggests a parallelization scheme, and we
discuss the communication steps involved in this scheme. Furthermore, the
computational scheme can also serve as a guideline for which parts of the
boundary matrix manipulation need to co-exist in primary memory at any given
time allowing for stratified memory access in single-core computation. The
spectral sequence viewpoint also provides easy proofs of a homology nerve lemma
as well as a persistent homology nerve lemma. In addition, the algebraic tools
we develop to approach persistent homology provide a purely algebraic
formulation of kernel, image and cokernel persistence (D. Cohen-Steiner, H.
Edelsbrunner, J. Harer, and D. Morozov. Persistent homology for kernels,
images, and cokernels. In Proceedings of the twentieth Annual ACM-SIAM
Symposium on Discrete Algorithms, pages 1011-1020. Society for Industrial and
Applied Mathematics, 2009.)
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2011 12:01:16 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Lipsky",
"David",
""
],
[
"Skraba",
"Primoz",
""
],
[
"Vejdemo-Johansson",
"Mikael",
""
]
] | TITLE: A spectral sequence for parallelized persistence
ABSTRACT: We approach the problem of the computation of persistent homology for large
datasets by a divide-and-conquer strategy. Dividing the total space into
separate but overlapping components, we are able to limit the total memory
residency for any part of the computation, while not degrading the overall
complexity much. Locally computed persistence information is then merged from
the components and their intersections using a spectral sequence generalizing
the Mayer-Vietoris long exact sequence.
We describe the Mayer-Vietoris spectral sequence and give details on how to
compute with it. This allows us to merge local homological data into the global
persistent homology. Furthermore, we detail how the classical topology
constructions inherent in the spectral sequence adapt to a persistence
perspective, as well as describe the techniques from computational commutative
algebra necessary for this extension.
The resulting computational scheme suggests a parallelization scheme, and we
discuss the communication steps involved in this scheme. Furthermore, the
computational scheme can also serve as a guideline for which parts of the
boundary matrix manipulation need to co-exist in primary memory at any given
time allowing for stratified memory access in single-core computation. The
spectral sequence viewpoint also provides easy proofs of a homology nerve lemma
as well as a persistent homology nerve lemma. In addition, the algebraic tools
we develop to approach persistent homology provide a purely algebraic
formulation of kernel, image and cokernel persistence (D. Cohen-Steiner, H.
Edelsbrunner, J. Harer, and D. Morozov. Persistent homology for kernels,
images, and cokernels. In Proceedings of the twentieth Annual ACM-SIAM
Symposium on Discrete Algorithms, pages 1011-1020. Society for Industrial and
Applied Mathematics, 2009.)
| no_new_dataset | 0.946001 |
1112.5404 | Purushottam Kar | Purushottam Kar and Prateek Jain | Similarity-based Learning via Data Driven Embeddings | To appear in the proceedings of NIPS 2011, 14 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of classification using similarity/distance functions
over data. Specifically, we propose a framework for defining the goodness of a
(dis)similarity function with respect to a given learning task and propose
algorithms that have guaranteed generalization properties when working with
such good functions. Our framework unifies and generalizes the frameworks
proposed by [Balcan-Blum ICML 2006] and [Wang et al ICML 2007]. An attractive
feature of our framework is its adaptability to data - we do not promote a
fixed notion of goodness but rather let data dictate it. We show, by giving
theoretical guarantees that the goodness criterion best suited to a problem can
itself be learned which makes our approach applicable to a variety of domains
and problems. We propose a landmarking-based approach to obtaining a classifier
from such learned goodness criteria. We then provide a novel diversity based
heuristic to perform task-driven selection of landmark points instead of random
selection. We demonstrate the effectiveness of our goodness criteria learning
method as well as the landmark selection heuristic on a variety of
similarity-based learning datasets and benchmark UCI datasets on which our
method consistently outperforms existing approaches by a significant margin.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2011 18:08:27 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Kar",
"Purushottam",
""
],
[
"Jain",
"Prateek",
""
]
] | TITLE: Similarity-based Learning via Data Driven Embeddings
ABSTRACT: We consider the problem of classification using similarity/distance functions
over data. Specifically, we propose a framework for defining the goodness of a
(dis)similarity function with respect to a given learning task and propose
algorithms that have guaranteed generalization properties when working with
such good functions. Our framework unifies and generalizes the frameworks
proposed by [Balcan-Blum ICML 2006] and [Wang et al ICML 2007]. An attractive
feature of our framework is its adaptability to data - we do not promote a
fixed notion of goodness but rather let data dictate it. We show, by giving
theoretical guarantees that the goodness criterion best suited to a problem can
itself be learned which makes our approach applicable to a variety of domains
and problems. We propose a landmarking-based approach to obtaining a classifier
from such learned goodness criteria. We then provide a novel diversity based
heuristic to perform task-driven selection of landmark points instead of random
selection. We demonstrate the effectiveness of our goodness criteria learning
method as well as the landmark selection heuristic on a variety of
similarity-based learning datasets and benchmark UCI datasets on which our
method consistently outperforms existing approaches by a significant margin.
| no_new_dataset | 0.946695 |
1201.0233 | Marina Barsky | Marina Barsky, Sangkyum Kim, Tim Weninger, Jiawei Han | Mining Flipping Correlations from Large Datasets with Taxonomies | VLDB2012 | Proceedings of the VLDB Endowment (PVLDB), Vol. 5, No. 4, pp.
370-381 (2011) | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we introduce a new type of pattern -- a flipping correlation
pattern. The flipping patterns are obtained from contrasting the correlations
between items at different levels of abstraction. They represent surprising
correlations, both positive and negative, which are specific for a given
abstraction level, and which "flip" from positive to negative and vice versa
when items are generalized to a higher level of abstraction. We design an
efficient algorithm for finding flipping correlations, the Flipper algorithm,
which outperforms naive pattern mining methods by several orders of magnitude.
We apply Flipper to real-life datasets and show that the discovered patterns
are non-redundant, surprising and actionable. Flipper finds strong contrasting
correlations in itemsets with low-to-medium support, while existing techniques
cannot handle the pattern discovery in this frequency range.
| [
{
"version": "v1",
"created": "Sat, 31 Dec 2011 05:36:29 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Barsky",
"Marina",
""
],
[
"Kim",
"Sangkyum",
""
],
[
"Weninger",
"Tim",
""
],
[
"Han",
"Jiawei",
""
]
] | TITLE: Mining Flipping Correlations from Large Datasets with Taxonomies
ABSTRACT: In this paper we introduce a new type of pattern -- a flipping correlation
pattern. The flipping patterns are obtained from contrasting the correlations
between items at different levels of abstraction. They represent surprising
correlations, both positive and negative, which are specific for a given
abstraction level, and which "flip" from positive to negative and vice versa
when items are generalized to a higher level of abstraction. We design an
efficient algorithm for finding flipping correlations, the Flipper algorithm,
which outperforms naive pattern mining methods by several orders of magnitude.
We apply Flipper to real-life datasets and show that the discovered patterns
are non-redundant, surprising and actionable. Flipper finds strong contrasting
correlations in itemsets with low-to-medium support, while existing techniques
cannot handle the pattern discovery in this frequency range.
| no_new_dataset | 0.949248 |
1409.5021 | Pengtao Xie | Pengtao Xie and Eric Xing | CryptGraph: Privacy Preserving Graph Analytics on Encrypted Graph | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many graph mining and analysis services have been deployed on the cloud,
which can relieve users of the burden of implementing and maintaining graph
algorithms. However, putting graph analytics on the cloud can invade users'
privacy. To solve this problem, we propose CryptGraph, which runs graph
analytics on encrypted graph to preserve the privacy of both users' graph data
and the analytic results. In CryptGraph, users encrypt their graphs before
uploading them to the cloud. The cloud runs graph analysis on the encrypted
graphs and obtains results which are also in encrypted form that the cloud
cannot decipher. During the process of computing, the encrypted graphs are
never decrypted on the cloud side. The encrypted results are sent back to users
and users perform the decryption to obtain the plaintext results. In this
process, users' graphs and the analytics results are both encrypted and the
cloud knows neither of them. Thereby, users' privacy can be strongly protected.
Meanwhile, with the help of homomorphic encryption, the results analyzed from
the encrypted graphs are guaranteed to be correct. In this paper, we present
how to encrypt a graph using homomorphic encryption and how to query the
structure of an encrypted graph by computing polynomials. To solve the problem
that certain operations are not executable on encrypted graphs, we propose hard
computation outsourcing to seek help from users. Using two graph algorithms as
examples, we show how to apply our methods to perform analytics on encrypted
graphs. Experiments on two datasets demonstrate the correctness and feasibility
of our methods.
| [
{
"version": "v1",
"created": "Wed, 17 Sep 2014 15:11:06 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Mar 2015 16:12:46 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Xie",
"Pengtao",
""
],
[
"Xing",
"Eric",
""
]
] | TITLE: CryptGraph: Privacy Preserving Graph Analytics on Encrypted Graph
ABSTRACT: Many graph mining and analysis services have been deployed on the cloud,
which can relieve users of the burden of implementing and maintaining graph
algorithms. However, putting graph analytics on the cloud can invade users'
privacy. To solve this problem, we propose CryptGraph, which runs graph
analytics on encrypted graph to preserve the privacy of both users' graph data
and the analytic results. In CryptGraph, users encrypt their graphs before
uploading them to the cloud. The cloud runs graph analysis on the encrypted
graphs and obtains results which are also in encrypted form that the cloud
cannot decipher. During the process of computing, the encrypted graphs are
never decrypted on the cloud side. The encrypted results are sent back to users
and users perform the decryption to obtain the plaintext results. In this
process, users' graphs and the analytics results are both encrypted and the
cloud knows neither of them. Thereby, users' privacy can be strongly protected.
Meanwhile, with the help of homomorphic encryption, the results analyzed from
the encrypted graphs are guaranteed to be correct. In this paper, we present
how to encrypt a graph using homomorphic encryption and how to query the
structure of an encrypted graph by computing polynomials. To solve the problem
that certain operations are not executable on encrypted graphs, we propose hard
computation outsourcing to seek help from users. Using two graph algorithms as
examples, we show how to apply our methods to perform analytics on encrypted
graphs. Experiments on two datasets demonstrate the correctness and feasibility
of our methods.
| no_new_dataset | 0.951142 |
1411.4351 | Jacob Eisenstein | Vinodh Krishnan and Jacob Eisenstein | "You're Mr. Lebowski, I'm the Dude": Inducing Address Term Formality in
Signed Social Networks | In Proceedings of NAACL-HLT 2015 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an unsupervised model for inducing signed social networks from the
content exchanged across network edges. Inference in this model solves three
problems simultaneously: (1) identifying the sign of each edge; (2)
characterizing the distribution over content for each edge type; (3) estimating
weights for triadic features that map to theoretical models such as structural
balance. We apply this model to the problem of inducing the social function of
address terms, such as 'Madame', 'comrade', and 'dude'. On a dataset of movie
scripts, our system obtains a coherent clustering of address terms, while at
the same time making intuitively plausible judgments of the formality of social
relations in each film. As an additional contribution, we provide a
bootstrapping technique for identifying and tagging address terms in dialogue.
| [
{
"version": "v1",
"created": "Mon, 17 Nov 2014 03:33:27 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Mar 2015 22:20:22 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Krishnan",
"Vinodh",
""
],
[
"Eisenstein",
"Jacob",
""
]
] | TITLE: "You're Mr. Lebowski, I'm the Dude": Inducing Address Term Formality in
Signed Social Networks
ABSTRACT: We present an unsupervised model for inducing signed social networks from the
content exchanged across network edges. Inference in this model solves three
problems simultaneously: (1) identifying the sign of each edge; (2)
characterizing the distribution over content for each edge type; (3) estimating
weights for triadic features that map to theoretical models such as structural
balance. We apply this model to the problem of inducing the social function of
address terms, such as 'Madame', 'comrade', and 'dude'. On a dataset of movie
scripts, our system obtains a coherent clustering of address terms, while at
the same time making intuitively plausible judgments of the formality of social
relations in each film. As an additional contribution, we provide a
bootstrapping technique for identifying and tagging address terms in dialogue.
| no_new_dataset | 0.949809 |
1503.02675 | Clemens Arth | Clemens Arth, Christian Pirchheim, Jonathan Ventura, Vincent Lepetit | Global 6DOF Pose Estimation from Untextured 2D City Models | 9 pages excluding supplementary material | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | We propose a method for estimating the 3D pose for the camera of a mobile
device in outdoor conditions, using only an untextured 2D model. Previous
methods compute only a relative pose using a SLAM algorithm, or require many
registered images, which are cumbersome to acquire. By contrast, our method
returns an accurate, absolute camera pose in an absolute reference frame using
simple 2D+height maps, which are broadly available, to refine a first estimate
of the pose provided by the device's sensors. We show how to first estimate the
camera absolute orientation from straight line segments, and then how to
estimate the translation by aligning the 2D map with a semantic segmentation of
the input image. We demonstrate the robustness and accuracy of our approach on
a challenging dataset.
| [
{
"version": "v1",
"created": "Mon, 9 Mar 2015 20:18:19 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Mar 2015 12:11:35 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Arth",
"Clemens",
""
],
[
"Pirchheim",
"Christian",
""
],
[
"Ventura",
"Jonathan",
""
],
[
"Lepetit",
"Vincent",
""
]
] | TITLE: Global 6DOF Pose Estimation from Untextured 2D City Models
ABSTRACT: We propose a method for estimating the 3D pose for the camera of a mobile
device in outdoor conditions, using only an untextured 2D model. Previous
methods compute only a relative pose using a SLAM algorithm, or require many
registered images, which are cumbersome to acquire. By contrast, our method
returns an accurate, absolute camera pose in an absolute reference frame using
simple 2D+height maps, which are broadly available, to refine a first estimate
of the pose provided by the device's sensors. We show how to first estimate the
camera absolute orientation from straight line segments, and then how to
estimate the translation by aligning the 2D map with a semantic segmentation of
the input image. We demonstrate the robustness and accuracy of our approach on
a challenging dataset.
| no_new_dataset | 0.948585 |
1503.04851 | Enrico Glerean | Enrico Glerean, Raj Kumar Pan, Juha Salmi, Rainer Kujala, Juha
Lahnakoski, Ulrika Roine, Lauri Nummenmaa, Sami Lepp\"am\"aki, Taina
Nieminen-von Wendt, Pekka Tani, Jari Saram\"aki, Mikko Sams, Iiro P.
J\"a\"askel\"ainen | Reorganization of functionally connected brain subnetworks in
high-functioning autism | null | null | null | null | q-bio.NC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Previous functional connectivity studies have found both hypo-
and hyper-connectivity in brains of individuals having autism spectrum disorder
(ASD). Here we studied abnormalities in functional brain subnetworks in
high-functioning individuals with ASD during free viewing of a movie containing
social cues and interactions. Methods: Thirteen subjects with ASD and 13
matched-pair controls watched a 68-minute movie during functional magnetic
resonance imaging. For each subject, we computed Pearson's correlation between
haemodynamic time-courses of each pair of 6-mm isotropic voxels. From the
whole-brain functional networks, we derived individual and group-level
subnetworks using graph theory. Scaled inclusivity was then calculated between
all subject pairs to estimate intersubject similarity of connectivity structure
of each subnetwork. An additional 27 individuals with ASD from the ABIDE
resting-state database were included to test the reproducibility of the
results. Results: Between-group differences were observed in the composition of
default-mode and a ventro-temporal-limbic (VTL) subnetwork. The VTL subnetwork
included amygdala, striatum, thalamus, parahippocampal, fusiform, and inferior
temporal gyri. Further, VTL subnetwork similarity between subject pairs
correlated significantly with similarity of symptom gravity measured with
autism quotient. This correlation was observed also within the controls, and in
the reproducibility dataset with ADI-R and ADOS scores. Conclusions:
Reorganization of functional subnetworks in individuals with ASD clarifies the
mixture of hypo- and hyper-connectivity findings. Importantly, only the
functional organization of the VTL subnetwork emerges as a marker of
inter-individual similarities that co-vary with behavioral measures across all
participants. These findings suggest a pivotal role of ventro-temporal and
limbic systems in autism.
| [
{
"version": "v1",
"created": "Mon, 16 Mar 2015 21:03:38 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Glerean",
"Enrico",
""
],
[
"Pan",
"Raj Kumar",
""
],
[
"Salmi",
"Juha",
""
],
[
"Kujala",
"Rainer",
""
],
[
"Lahnakoski",
"Juha",
""
],
[
"Roine",
"Ulrika",
""
],
[
"Nummenmaa",
"Lauri",
""
],
[
"Leppämäki",
"Sami",
""
],
[
"Wendt",
"Taina Nieminen-von",
""
],
[
"Tani",
"Pekka",
""
],
[
"Saramäki",
"Jari",
""
],
[
"Sams",
"Mikko",
""
],
[
"Jääskeläinen",
"Iiro P.",
""
]
] | TITLE: Reorganization of functionally connected brain subnetworks in
high-functioning autism
ABSTRACT: Background: Previous functional connectivity studies have found both hypo-
and hyper-connectivity in brains of individuals having autism spectrum disorder
(ASD). Here we studied abnormalities in functional brain subnetworks in
high-functioning individuals with ASD during free viewing of a movie containing
social cues and interactions. Methods: Thirteen subjects with ASD and 13
matched-pair controls watched a 68-minute movie during functional magnetic
resonance imaging. For each subject, we computed Pearson's correlation between
haemodynamic time-courses of each pair of 6-mm isotropic voxels. From the
whole-brain functional networks, we derived individual and group-level
subnetworks using graph theory. Scaled inclusivity was then calculated between
all subject pairs to estimate intersubject similarity of connectivity structure
of each subnetwork. An additional 27 individuals with ASD from the ABIDE
resting-state database were included to test the reproducibility of the
results. Results: Between-group differences were observed in the composition of
default-mode and a ventro-temporal-limbic (VTL) subnetwork. The VTL subnetwork
included amygdala, striatum, thalamus, parahippocampal, fusiform, and inferior
temporal gyri. Further, VTL subnetwork similarity between subject pairs
correlated significantly with similarity of symptom gravity measured with
autism quotient. This correlation was observed also within the controls, and in
the reproducibility dataset with ADI-R and ADOS scores. Conclusions:
Reorganization of functional subnetworks in individuals with ASD clarifies the
mixture of hypo- and hyper-connectivity findings. Importantly, only the
functional organization of the VTL subnetwork emerges as a marker of
inter-individual similarities that co-vary with behavioral measures across all
participants. These findings suggest a pivotal role of ventro-temporal and
limbic systems in autism.
| no_new_dataset | 0.929504 |
1503.05187 | Khaled Fawagreh | Khaled Fawagreh, Mohamad Medhat Gaber, Eyad Elyan | An Outlier Detection-based Tree Selection Approach to Extreme Pruning of
Random Forests | 21 pages, 4 Figures. arXiv admin note: substantial text overlap with
arXiv:1503.04996 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random Forest (RF) is an ensemble classification technique that was developed
by Breiman over a decade ago. Compared with other ensemble techniques, it has
proved its accuracy and superiority. Many researchers, however, believe that
there is still room for enhancing and improving its performance in terms of
predictive accuracy. This explains why, over the past decade, there have been
many extensions of RF where each extension employed a variety of techniques and
strategies to improve certain aspect(s) of RF. Since it has been proven
empirically that ensembles tend to yield better results when there is a
significant diversity among the constituent models, the objective of this paper
is twofolds. First, it investigates how an unsupervised learning technique,
namely, Local Outlier Factor (LOF) can be used to identify diverse trees in the
RF. Second, trees with the highest LOF scores are then used to produce an
extension of RF termed LOFB-DRF that is much smaller in size than RF, and yet
performs at least as good as RF, but mostly exhibits higher performance in
terms of accuracy. The latter refers to a known technique called ensemble
pruning. Experimental results on 10 real datasets prove the superiority of our
proposed extension over the traditional RF. Unprecedented pruning levels
reaching 99% have been achieved at the time of boosting the predictive accuracy
of the ensemble. The notably high pruning level makes the technique a good
candidate for real-time applications.
| [
{
"version": "v1",
"created": "Tue, 17 Mar 2015 11:05:31 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Fawagreh",
"Khaled",
""
],
[
"Gaber",
"Mohamad Medhat",
""
],
[
"Elyan",
"Eyad",
""
]
] | TITLE: An Outlier Detection-based Tree Selection Approach to Extreme Pruning of
Random Forests
ABSTRACT: Random Forest (RF) is an ensemble classification technique that was developed
by Breiman over a decade ago. Compared with other ensemble techniques, it has
proved its accuracy and superiority. Many researchers, however, believe that
there is still room for enhancing and improving its performance in terms of
predictive accuracy. This explains why, over the past decade, there have been
many extensions of RF where each extension employed a variety of techniques and
strategies to improve certain aspect(s) of RF. Since it has been proven
empirically that ensembles tend to yield better results when there is a
significant diversity among the constituent models, the objective of this paper
is twofold. First, it investigates how an unsupervised learning technique,
namely, Local Outlier Factor (LOF) can be used to identify diverse trees in the
RF. Second, trees with the highest LOF scores are then used to produce an
extension of RF termed LOFB-DRF that is much smaller in size than RF, and yet
performs at least as well as RF, but mostly exhibits higher performance in
terms of accuracy. The latter refers to a known technique called ensemble
pruning. Experimental results on 10 real datasets prove the superiority of our
proposed extension over the traditional RF. Unprecedented pruning levels
reaching 99% have been achieved while boosting the predictive accuracy
of the ensemble. The notably high pruning level makes the technique a good
candidate for real-time applications.
| no_new_dataset | 0.948537 |
1503.05296 | Omar Al-Jarrah | O. Y. Al-Jarrah, P. D. Yoo, S Muhaidat, G. K. Karagiannidis, and K.
Taha | Efficient Machine Learning for Big Data: A Review | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the emerging technologies and all associated devices, it is predicted
that massive amounts of data will be created in the next few years; in fact, as
much as 90% of current data were created in the last couple of years, a trend
that will continue for the foreseeable future. Sustainable computing studies
the process by which computer engineers/scientists design computers and
associated subsystems efficiently and effectively with minimal impact on the
environment. However, current intelligent machine-learning systems are
performance-driven: the focus is on predictive/classification accuracy,
based on known properties learned from the training samples. For instance, most
machine-learning-based nonparametric models are known to require high
computational cost in order to find the global optima. With the learning task
in a large dataset, the number of hidden nodes within the network will
therefore increase significantly, which eventually leads to an exponential rise
in computational complexity. This paper thus reviews the theoretical and
experimental data-modeling literature, in large-scale data-intensive fields,
relating to: (1) model efficiency, including computational requirements in
learning, and data-intensive areas structure and design, and introduces (2) new
algorithmic approaches with the least memory requirements and processing to
minimize computational cost, while maintaining/improving its
predictive/classification accuracy and stability.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2015 07:56:12 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Al-Jarrah",
"O. Y.",
""
],
[
"Yoo",
"P. D.",
""
],
[
"Muhaidat",
"S",
""
],
[
"Karagiannidis",
"G. K.",
""
],
[
"Taha",
"K.",
""
]
] | TITLE: Efficient Machine Learning for Big Data: A Review
ABSTRACT: With the emerging technologies and all associated devices, it is predicted
that massive amounts of data will be created in the next few years; in fact, as
much as 90% of current data were created in the last couple of years, a trend
that will continue for the foreseeable future. Sustainable computing studies
the process by which computer engineers/scientists design computers and
associated subsystems efficiently and effectively with minimal impact on the
environment. However, current intelligent machine-learning systems are
performance-driven: the focus is on predictive/classification accuracy,
based on known properties learned from the training samples. For instance, most
machine-learning-based nonparametric models are known to require high
computational cost in order to find the global optima. With the learning task
in a large dataset, the number of hidden nodes within the network will
therefore increase significantly, which eventually leads to an exponential rise
in computational complexity. This paper thus reviews the theoretical and
experimental data-modeling literature, in large-scale data-intensive fields,
relating to: (1) model efficiency, including computational requirements in
learning, and data-intensive areas structure and design, and introduces (2) new
algorithmic approaches with the least memory requirements and processing to
minimize computational cost, while maintaining/improving its
predictive/classification accuracy and stability.
| no_new_dataset | 0.941654 |
1503.05426 | Danilo Giordano DG | Danilo Giordano, Stefano Traverso, Luigi Grimaudo, Marco Mellia, Elena
Baralis, Alok Tongaonkar and Sabyasachi Saha | YouLighter: An Unsupervised Methodology to Unveil YouTube CDN Changes | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | YouTube relies on a massively distributed Content Delivery Network (CDN) to
stream the billions of videos in its catalogue. Unfortunately, very little
information about the design of such CDN is available. This, combined with the
pervasiveness of YouTube, poses a big challenge for Internet Service Providers
(ISPs), which are compelled to optimize end-users' Quality of Experience (QoE)
while having no control on the CDN decisions.
This paper presents YouLighter, an unsupervised technique to identify changes
in the YouTube CDN. YouLighter leverages only passive measurements to cluster
co-located identical caches into edge-nodes. This automatically unveils the
structure of YouTube's CDN. Further, we propose a new metric, called
Constellation Distance, that compares the clustering obtained from two
different time snapshots, to pinpoint sudden changes. While several approaches
allow comparison between the clustering results from the same dataset, no
technique allows one to measure the similarity of clusters from different datasets.
Hence, we develop a novel methodology, based on the Constellation Distance, to
solve this problem.
By running YouLighter over 10-month long traces obtained from two ISPs in
different countries, we pinpoint both sudden changes in edge-node allocation,
and small alterations to the cache allocation policies which actually impair
the QoE that the end-users perceive.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2015 14:30:47 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Giordano",
"Danilo",
""
],
[
"Traverso",
"Stefano",
""
],
[
"Grimaudo",
"Luigi",
""
],
[
"Mellia",
"Marco",
""
],
[
"Baralis",
"Elena",
""
],
[
"Tongaonkar",
"Alok",
""
],
[
"Saha",
"Sabyasachi",
""
]
] | TITLE: YouLighter: An Unsupervised Methodology to Unveil YouTube CDN Changes
ABSTRACT: YouTube relies on a massively distributed Content Delivery Network (CDN) to
stream the billions of videos in its catalogue. Unfortunately, very little
information about the design of such CDN is available. This, combined with the
pervasiveness of YouTube, poses a big challenge for Internet Service Providers
(ISPs), which are compelled to optimize end-users' Quality of Experience (QoE)
while having no control on the CDN decisions.
This paper presents YouLighter, an unsupervised technique to identify changes
in the YouTube CDN. YouLighter leverages only passive measurements to cluster
co-located identical caches into edge-nodes. This automatically unveils the
structure of YouTube's CDN. Further, we propose a new metric, called
Constellation Distance, that compares the clustering obtained from two
different time snapshots, to pinpoint sudden changes. While several approaches
allow comparison between the clustering results from the same dataset, no
technique allows one to measure the similarity of clusters from different datasets.
Hence, we develop a novel methodology, based on the Constellation Distance, to
solve this problem.
By running YouLighter over 10-month long traces obtained from two ISPs in
different countries, we pinpoint both sudden changes in edge-node allocation,
and small alterations to the cache allocation policies which actually impair
the QoE that the end-users perceive.
| no_new_dataset | 0.948489 |
1503.05471 | Danila Doroshin | Danila Doroshin, Alexander Yamshinin, Nikolay Lubimov, Marina
Nastasenko, Mikhail Kotov, Maxim Tkachenko | Shared latent subspace modelling within Gaussian-Binary Restricted
Boltzmann Machines for NIST i-Vector Challenge 2014 | 5 pages, 3 figures, submitted to Interspeech 2015 | null | null | null | cs.LG cs.NE cs.SD stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel approach to speaker subspace modelling based on
Gaussian-Binary Restricted Boltzmann Machines (GRBM). The proposed model is
based on the idea of shared factors as in the Probabilistic Linear Discriminant
Analysis (PLDA). GRBM hidden layer is divided into speaker and channel factors,
herein the speaker factor is shared over all vectors of the speaker. Then
Maximum Likelihood Parameter Estimation (MLE) for proposed model is introduced.
Various new scoring techniques for speaker verification using GRBM are
proposed. The results for NIST i-vector Challenge 2014 dataset are presented.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2015 16:28:18 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Doroshin",
"Danila",
""
],
[
"Yamshinin",
"Alexander",
""
],
[
"Lubimov",
"Nikolay",
""
],
[
"Nastasenko",
"Marina",
""
],
[
"Kotov",
"Mikhail",
""
],
[
"Tkachenko",
"Maxim",
""
]
] | TITLE: Shared latent subspace modelling within Gaussian-Binary Restricted
Boltzmann Machines for NIST i-Vector Challenge 2014
ABSTRACT: This paper presents a novel approach to speaker subspace modelling based on
Gaussian-Binary Restricted Boltzmann Machines (GRBM). The proposed model is
based on the idea of shared factors as in the Probabilistic Linear Discriminant
Analysis (PLDA). The GRBM hidden layer is divided into speaker and channel factors,
wherein the speaker factor is shared over all vectors of the speaker. Then
Maximum Likelihood Parameter Estimation (MLE) for the proposed model is introduced.
Various new scoring techniques for speaker verification using GRBM are
proposed. The results for NIST i-vector Challenge 2014 dataset are presented.
| no_new_dataset | 0.949902 |
1503.05543 | Alexander Alemi | Alexander A Alemi, Paul Ginsparg | Text Segmentation based on Semantic Word Embeddings | 10 pages, 4 figures. KDD2015 submission | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the use of semantic word embeddings in text segmentation
algorithms, including the C99 segmentation algorithm and new algorithms
inspired by the distributed word vector representation. By developing a general
framework for discussing a class of segmentation objectives, we study the
effectiveness of greedy versus exact optimization approaches and suggest a new
iterative refinement technique for improving the performance of greedy
strategies. We compare our results to known benchmarks, using known metrics. We
demonstrate state-of-the-art performance for an untrained method with our
Content Vector Segmentation (CVS) on the Choi test set. Finally, we apply the
segmentation procedure to an in-the-wild dataset consisting of text extracted
from scholarly articles in the arXiv.org database.
| [
{
"version": "v1",
"created": "Wed, 18 Mar 2015 19:44:06 GMT"
}
] | 2015-03-19T00:00:00 | [
[
"Alemi",
"Alexander A",
""
],
[
"Ginsparg",
"Paul",
""
]
] | TITLE: Text Segmentation based on Semantic Word Embeddings
ABSTRACT: We explore the use of semantic word embeddings in text segmentation
algorithms, including the C99 segmentation algorithm and new algorithms
inspired by the distributed word vector representation. By developing a general
framework for discussing a class of segmentation objectives, we study the
effectiveness of greedy versus exact optimization approaches and suggest a new
iterative refinement technique for improving the performance of greedy
strategies. We compare our results to known benchmarks, using known metrics. We
demonstrate state-of-the-art performance for an untrained method with our
Content Vector Segmentation (CVS) on the Choi test set. Finally, we apply the
segmentation procedure to an in-the-wild dataset consisting of text extracted
from scholarly articles in the arXiv.org database.
| no_new_dataset | 0.938857 |
1102.1027 | Alaa Abi Haidar | Alaa Abi-Haidar and Luis M. Rocha | Collective Classification of Textual Documents by Guided
Self-Organization in T-Cell Cross-Regulation Dynamics | null | Evolutionary Intelligence. 2011. Volume 4, Number 2, 69-80 | 10.1007/s12065-011-0052-5 | null | cs.IR cs.AI cs.LG nlin.AO q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present and study an agent-based model of T-Cell cross-regulation in the
adaptive immune system, which we apply to binary classification. Our method
expands an existing analytical model of T-cell cross-regulation (Carneiro et
al. in Immunol Rev 216(1):48-68, 2007) that was used to study the
self-organizing dynamics of a single population of T-Cells in interaction with
an idealized antigen presenting cell capable of presenting a single antigen.
With agent-based modeling we are able to study the self-organizing dynamics of
multiple populations of distinct T-cells which interact via antigen presenting
cells that present hundreds of distinct antigens. Moreover, we show that such
self-organizing dynamics can be guided to produce an effective binary
classification of antigens, which is competitive with existing machine learning
methods when applied to biomedical text classification. More specifically, here
we test our model on a dataset of publicly available full-text biomedical
articles provided by the BioCreative challenge (Krallinger in The biocreative
ii. 5 challenge overview, p 19, 2009). We study the robustness of our model's
parameter configurations, and show that it leads to encouraging results
comparable to state-of-the-art classifiers. Our results help us understand both
T-cell cross-regulation as a general principle of guided self-organization, as
well as its applicability to document classification. Therefore, we show that
our bio-inspired algorithm is a promising novel method for biomedical article
classification and for binary document classification in general.
| [
{
"version": "v1",
"created": "Fri, 4 Feb 2011 22:10:45 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Abi-Haidar",
"Alaa",
""
],
[
"Rocha",
"Luis M.",
""
]
] | TITLE: Collective Classification of Textual Documents by Guided
Self-Organization in T-Cell Cross-Regulation Dynamics
ABSTRACT: We present and study an agent-based model of T-Cell cross-regulation in the
adaptive immune system, which we apply to binary classification. Our method
expands an existing analytical model of T-cell cross-regulation (Carneiro et
al. in Immunol Rev 216(1):48-68, 2007) that was used to study the
self-organizing dynamics of a single population of T-Cells in interaction with
an idealized antigen presenting cell capable of presenting a single antigen.
With agent-based modeling we are able to study the self-organizing dynamics of
multiple populations of distinct T-cells which interact via antigen presenting
cells that present hundreds of distinct antigens. Moreover, we show that such
self-organizing dynamics can be guided to produce an effective binary
classification of antigens, which is competitive with existing machine learning
methods when applied to biomedical text classification. More specifically, here
we test our model on a dataset of publicly available full-text biomedical
articles provided by the BioCreative challenge (Krallinger in The biocreative
ii. 5 challenge overview, p 19, 2009). We study the robustness of our model's
parameter configurations, and show that it leads to encouraging results
comparable to state-of-the-art classifiers. Our results help us understand both
T-cell cross-regulation as a general principle of guided self-organization, as
well as its applicability to document classification. Therefore, we show that
our bio-inspired algorithm is a promising novel method for biomedical article
classification and for binary document classification in general.
| no_new_dataset | 0.944434 |
1102.1465 | Adrian Barbu | Adrian Barbu, Nathan Lay | An Introduction to Artificial Prediction Markets for Classification | 29 pages, 8 figures | Journal of Machine Learning Research, 13, 2177-2204, 2012 | null | null | stat.ML cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prediction markets are used in real life to predict outcomes of interest such
as presidential elections. This paper presents a mathematical theory of
artificial prediction markets for supervised learning of conditional
probability estimators. The artificial prediction market is a novel method for
fusing the prediction information of features or trained classifiers, where the
fusion result is the contract price on the possible outcomes. The market can be
trained online by updating the participants' budgets using training examples.
Inspired by the real prediction markets, the equations that govern the market
are derived from simple and reasonable assumptions. Efficient numerical
algorithms are presented for solving these equations. The obtained artificial
prediction market is shown to be a maximum likelihood estimator. It generalizes
linear aggregation, existent in boosting and random forest, as well as logistic
regression and some kernel methods. Furthermore, the market mechanism allows
the aggregation of specialized classifiers that participate only on specific
instances. Experimental comparisons show that the artificial prediction markets
often outperform random forest and implicit online learning on synthetic data
and real UCI datasets. Moreover, an extensive evaluation for pelvic and
abdominal lymph node detection in CT data shows that the prediction market
improves adaboost's detection rate from 79.6% to 81.2% at 3 false
positives/volume.
| [
{
"version": "v1",
"created": "Mon, 7 Feb 2011 23:25:47 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Feb 2011 15:48:12 GMT"
},
{
"version": "v3",
"created": "Mon, 14 Feb 2011 21:02:49 GMT"
},
{
"version": "v4",
"created": "Thu, 22 Sep 2011 20:23:30 GMT"
},
{
"version": "v5",
"created": "Sun, 26 Feb 2012 21:54:27 GMT"
},
{
"version": "v6",
"created": "Mon, 9 Jul 2012 19:24:19 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Barbu",
"Adrian",
""
],
[
"Lay",
"Nathan",
""
]
] | TITLE: An Introduction to Artificial Prediction Markets for Classification
ABSTRACT: Prediction markets are used in real life to predict outcomes of interest such
as presidential elections. This paper presents a mathematical theory of
artificial prediction markets for supervised learning of conditional
probability estimators. The artificial prediction market is a novel method for
fusing the prediction information of features or trained classifiers, where the
fusion result is the contract price on the possible outcomes. The market can be
trained online by updating the participants' budgets using training examples.
Inspired by the real prediction markets, the equations that govern the market
are derived from simple and reasonable assumptions. Efficient numerical
algorithms are presented for solving these equations. The obtained artificial
prediction market is shown to be a maximum likelihood estimator. It generalizes
linear aggregation, existent in boosting and random forest, as well as logistic
regression and some kernel methods. Furthermore, the market mechanism allows
the aggregation of specialized classifiers that participate only on specific
instances. Experimental comparisons show that the artificial prediction markets
often outperform random forest and implicit online learning on synthetic data
and real UCI datasets. Moreover, an extensive evaluation for pelvic and
abdominal lymph node detection in CT data shows that the prediction market
improves adaboost's detection rate from 79.6% to 81.2% at 3 false
positives/volume.
| no_new_dataset | 0.948585 |
1102.2808 | Chun Wei Seah | Chun-Wei Seah, Ivor W. Tsang, Yew-Soon Ong | Transductive Ordinal Regression | null | IEEE Transactions on Neural Networks and Learning Systems,
23(7):1074 - 1086, 2012 | 10.1109/TNNLS.2012.2198240 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ordinal regression is commonly formulated as a multi-class problem with
ordinal constraints. The challenge of designing accurate classifiers for
ordinal regression generally increases with the number of classes involved, due
to the large number of labeled patterns that are needed. The availability of
ordinal class labels, however, is often costly to calibrate or difficult to
obtain. Unlabeled patterns, on the other hand, often exist in much greater
abundance and are freely available. To take benefits from the abundance of
unlabeled patterns, we present a novel transductive learning paradigm for
ordinal regression in this paper, namely Transductive Ordinal Regression (TOR).
The key challenge of the present study lies in the precise estimation of both
the ordinal class label of the unlabeled data and the decision functions of the
ordinal classes, simultaneously. The core elements of the proposed TOR include
an objective function that caters to several commonly used loss functions
casted in transductive settings, for general ordinal regression. A label
swapping scheme that facilitates a strictly monotonic decrease in the objective
function value is also introduced. Extensive numerical studies on commonly used
benchmark datasets including the real world sentiment prediction problem are
then presented to showcase the characteristics and efficacies of the proposed
transductive ordinal regression. Further, comparisons to recent
state-of-the-art ordinal regression methods demonstrate the introduced
transductive learning paradigm for ordinal regression led to the robust and
improved performance.
| [
{
"version": "v1",
"created": "Mon, 14 Feb 2011 15:53:06 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Feb 2011 12:46:46 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Aug 2012 02:23:16 GMT"
},
{
"version": "v4",
"created": "Fri, 31 Aug 2012 02:54:05 GMT"
},
{
"version": "v5",
"created": "Mon, 3 Sep 2012 02:17:30 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Seah",
"Chun-Wei",
""
],
[
"Tsang",
"Ivor W.",
""
],
[
"Ong",
"Yew-Soon",
""
]
] | TITLE: Transductive Ordinal Regression
ABSTRACT: Ordinal regression is commonly formulated as a multi-class problem with
ordinal constraints. The challenge of designing accurate classifiers for
ordinal regression generally increases with the number of classes involved, due
to the large number of labeled patterns that are needed. The availability of
ordinal class labels, however, is often costly to calibrate or difficult to
obtain. Unlabeled patterns, on the other hand, often exist in much greater
abundance and are freely available. To take benefits from the abundance of
unlabeled patterns, we present a novel transductive learning paradigm for
ordinal regression in this paper, namely Transductive Ordinal Regression (TOR).
The key challenge of the present study lies in the precise estimation of both
the ordinal class label of the unlabeled data and the decision functions of the
ordinal classes, simultaneously. The core elements of the proposed TOR include
an objective function that caters to several commonly used loss functions
casted in transductive settings, for general ordinal regression. A label
swapping scheme that facilitates a strictly monotonic decrease in the objective
function value is also introduced. Extensive numerical studies on commonly used
benchmark datasets including the real world sentiment prediction problem are
then presented to showcase the characteristics and efficacies of the proposed
transductive ordinal regression. Further, comparisons to recent
state-of-the-art ordinal regression methods demonstrate the introduced
transductive learning paradigm for ordinal regression led to the robust and
improved performance.
| no_new_dataset | 0.951323 |
1410.7376 | Nicholas Rhinehart | Nicholas Rhinehart, Jiaji Zhou, Martial Hebert, J. Andrew Bagnell | Visual Chunking: A List Prediction Framework for Region-Based Object
Detection | to appear at ICRA 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider detecting objects in an image by iteratively selecting from a set
of arbitrarily shaped candidate regions. Our generic approach, which we term
visual chunking, reasons about the locations of multiple object instances in an
image while expressively describing object boundaries. We design an
optimization criterion for measuring the performance of a list of such
detections as a natural extension to a common per-instance metric. We present
an efficient algorithm with provable performance for building a high-quality
list of detections from any candidate set of region-based proposals. We also
develop a simple class-specific algorithm to generate a candidate region
instance in near-linear time in the number of low-level superpixels that
outperforms other region generating methods. In order to make predictions on
novel images at testing time without access to ground truth, we develop
learning approaches to emulate these algorithms' behaviors. We demonstrate that
our new approach outperforms sophisticated baselines on benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 27 Oct 2014 19:54:41 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Mar 2015 21:20:12 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Rhinehart",
"Nicholas",
""
],
[
"Zhou",
"Jiaji",
""
],
[
"Hebert",
"Martial",
""
],
[
"Bagnell",
"J. Andrew",
""
]
] | TITLE: Visual Chunking: A List Prediction Framework for Region-Based Object
Detection
ABSTRACT: We consider detecting objects in an image by iteratively selecting from a set
of arbitrarily shaped candidate regions. Our generic approach, which we term
visual chunking, reasons about the locations of multiple object instances in an
image while expressively describing object boundaries. We design an
optimization criterion for measuring the performance of a list of such
detections as a natural extension to a common per-instance metric. We present
an efficient algorithm with provable performance for building a high-quality
list of detections from any candidate set of region-based proposals. We also
develop a simple class-specific algorithm to generate a candidate region
instance in near-linear time in the number of low-level superpixels that
outperforms other region generating methods. In order to make predictions on
novel images at testing time without access to ground truth, we develop
learning approaches to emulate these algorithms' behaviors. We demonstrate that
our new approach outperforms sophisticated baselines on benchmark datasets.
| no_new_dataset | 0.944689 |
1502.07979 | Anastasios Noulas Anastasios Noulas | Anastasios Noulas, Blake Shaw, Renaud Lambiotte, Cecilia Mascolo | Topological Properties and Temporal Dynamics of Place Networks in Urban
Environments | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the spatial networks formed by the trajectories of mobile users
can be beneficial to applications ranging from epidemiology to local search.
Despite the potential for impact in a number of fields, several aspects of
human mobility networks remain largely unexplored due to the lack of
large-scale data at a fine spatiotemporal resolution. Using a longitudinal
dataset from the location-based service Foursquare, we perform an empirical
analysis of the topological properties of place networks and note their
resemblance to online social networks in terms of heavy-tailed degree
distributions, triadic closure mechanisms and the small world property. Unlike
social networks however, place networks present a mixture of connectivity
trends in terms of assortativity that are surprisingly similar to those of the
web graph. We take advantage of additional semantic information to interpret
how nodes that take on functional roles such as `travel hub', or `food spot'
behave in these networks. Finally, motivated by the large volume of new links
appearing in place networks over time, we formulate the classic link prediction
problem in this new domain. We propose a novel variant of gravity models that
brings together three essential elements of inter-place connectivity in urban
environments: network-level interactions, human mobility dynamics, and
geographic distance. We evaluate this model and find it outperforms a number of
baseline predictors and supervised learning algorithms on a task of predicting
new links in a sample of one hundred popular cities.
| [
{
"version": "v1",
"created": "Fri, 27 Feb 2015 17:30:16 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Mar 2015 14:03:02 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Noulas",
"Anastasios",
""
],
[
"Shaw",
"Blake",
""
],
[
"Lambiotte",
"Renaud",
""
],
[
"Mascolo",
"Cecilia",
""
]
] | TITLE: Topological Properties and Temporal Dynamics of Place Networks in Urban
Environments
ABSTRACT: Understanding the spatial networks formed by the trajectories of mobile users
can be beneficial to applications ranging from epidemiology to local search.
Despite the potential for impact in a number of fields, several aspects of
human mobility networks remain largely unexplored due to the lack of
large-scale data at a fine spatiotemporal resolution. Using a longitudinal
dataset from the location-based service Foursquare, we perform an empirical
analysis of the topological properties of place networks and note their
resemblance to online social networks in terms of heavy-tailed degree
distributions, triadic closure mechanisms and the small world property. Unlike
social networks however, place networks present a mixture of connectivity
trends in terms of assortativity that are surprisingly similar to those of the
web graph. We take advantage of additional semantic information to interpret
how nodes that take on functional roles such as `travel hub', or `food spot'
behave in these networks. Finally, motivated by the large volume of new links
appearing in place networks over time, we formulate the classic link prediction
problem in this new domain. We propose a novel variant of gravity models that
brings together three essential elements of inter-place connectivity in urban
environments: network-level interactions, human mobility dynamics, and
geographic distance. We evaluate this model and find it outperforms a number of
baseline predictors and supervised learning algorithms on a task of predicting
new links in a sample of one hundred popular cities.
| no_new_dataset | 0.943971 |
1503.04598 | Reza Sabzevari | Reza Sabzevari, Vittori Murino, and Alessio Del Bue | PiMPeR: Piecewise Dense 3D Reconstruction from Multi-View and
Multi-Illumination Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of dense 3D reconstruction from
multiple view images subject to strong lighting variations. In this regard, a
new piecewise framework is proposed to explicitly take into account the change
of illumination across several wide-baseline images. Unlike multi-view stereo
and multi-view photometric stereo methods, this pipeline deals with
wide-baseline images that are uncalibrated, in terms of both camera parameters
and lighting conditions. Such a scenario is meant to avoid use of any specific
imaging setup and provide a tool for normal users without any expertise. To the
best of our knowledge, this paper presents the first work that deals with such
unconstrained setting. We propose a coarse-to-fine approach, in which a coarse
mesh is first created using a set of geometric constraints and, then, fine
details are recovered by exploiting photometric properties of the scene.
Augmenting the fine details on the coarse mesh is done via a final optimization
step. Note that the method does not provide a generic solution for multi-view
photometric stereo problem but it relaxes several common assumptions of this
problem. The approach scales very well in size given its piecewise nature,
dealing with large scale optimization and with severe missing data. Experiments
on a benchmark dataset Robot data-set show the method performance against 3D
ground truth.
| [
{
"version": "v1",
"created": "Mon, 16 Mar 2015 10:51:08 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Mar 2015 12:59:24 GMT"
}
] | 2015-03-18T00:00:00 | [
[
"Sabzevari",
"Reza",
""
],
[
"Murino",
"Vittori",
""
],
[
"Del Bue",
"Alessio",
""
]
] | TITLE: PiMPeR: Piecewise Dense 3D Reconstruction from Multi-View and
Multi-Illumination Images
ABSTRACT: In this paper, we address the problem of dense 3D reconstruction from
multiple view images subject to strong lighting variations. In this regard, a
new piecewise framework is proposed to explicitly take into account the change
of illumination across several wide-baseline images. Unlike multi-view stereo
and multi-view photometric stereo methods, this pipeline deals with
wide-baseline images that are uncalibrated, in terms of both camera parameters
and lighting conditions. Such a scenario is meant to avoid use of any specific
imaging setup and provide a tool for normal users without any expertise. To the
best of our knowledge, this paper presents the first work that deals with such
unconstrained setting. We propose a coarse-to-fine approach, in which a coarse
mesh is first created using a set of geometric constraints and, then, fine
details are recovered by exploiting photometric properties of the scene.
Augmenting the fine details on the coarse mesh is done via a final optimization
step. Note that the method does not provide a generic solution for multi-view
photometric stereo problem but it relaxes several common assumptions of this
problem. The approach scales very well in size given its piecewise nature,
dealing with large scale optimization and with severe missing data. Experiments
on a benchmark dataset Robot data-set show the method performance against 3D
ground truth.
| no_new_dataset | 0.947478 |