id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1602.07507 | Alireza Ghasemi | Alireza Ghasemi, Hamid R. Rabiee, Mohammad T. Manzuri, M. H. Rohban | A Bayesian Approach to the Data Description Problem | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of data description using a Bayesian
framework. The goal of data description is to draw a boundary around objects of
a certain class of interest to discriminate that class from the rest of the
feature space. Data description is also known as one-class learning and has a
wide range of applications.
The proposed approach uses a Bayesian framework to precisely compute the
class boundary and therefore can utilize domain information in the form of prior
knowledge in the framework. It can also operate in the kernel space and
therefore recognize arbitrary boundary shapes. Moreover, the proposed method
can utilize unlabeled data in order to improve the accuracy of discrimination.
We evaluate our method using various real-world datasets and compare it with
other state-of-the-art approaches to data description. Experiments show
promising results and improved performance over other data description and
one-class learning algorithms.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 13:52:52 GMT"
}
] | 2016-02-26T00:00:00 | [
[
"Ghasemi",
"Alireza",
""
],
[
"Rabiee",
"Hamid R.",
""
],
[
"Manzuri",
"Mohammad T.",
""
],
[
"Rohban",
"M. H.",
""
]
] | TITLE: A Bayesian Approach to the Data Description Problem
ABSTRACT: In this paper, we address the problem of data description using a Bayesian
framework. The goal of data description is to draw a boundary around objects of
a certain class of interest to discriminate that class from the rest of the
feature space. Data description is also known as one-class learning and has a
wide range of applications.
The proposed approach uses a Bayesian framework to precisely compute the
class boundary and therefore can utilize domain information in the form of prior
knowledge in the framework. It can also operate in the kernel space and
therefore recognize arbitrary boundary shapes. Moreover, the proposed method
can utilize unlabeled data in order to improve the accuracy of discrimination.
We evaluate our method using various real-world datasets and compare it with
other state-of-the-art approaches to data description. Experiments show
promising results and improved performance over other data description and
one-class learning algorithms.
| no_new_dataset | 0.947088 |
1602.07810 | Junaid Qadir | Anwaar Ali, Junaid Qadir, Raihan ur Rasool, Arjuna Sathiaseelan,
Andrej Zwitter | Big Data For Development: Applications and Techniques | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the explosion of social media sites and proliferation of digital
computing devices and Internet access, massive amounts of public data are being
generated on a daily basis. Efficient techniques/algorithms to analyze this
massive amount of data can provide near real-time information about emerging
trends and provide early warning in case of an imminent emergency (such as the
outbreak of a viral disease). In addition, careful mining of these data can
reveal many useful indicators of socioeconomic and political events, which can
help in establishing effective public policies. The focus of this study is to
review the application of big data analytics for the purpose of human
development. The emerging ability to use big data techniques for development
(BD4D) promises to revolutionize healthcare, education, and agriculture;
facilitate the alleviation of poverty; and help to deal with humanitarian
crises and violent conflicts. Besides all the benefits, the large-scale
deployment of BD4D is beset with several challenges due to the massive size,
fast-changing and diverse nature of big data. The most pressing concerns relate
to efficient data acquisition and sharing, establishing context (e.g.,
geolocation and time) and veracity of a dataset, and ensuring appropriate
privacy. In this study, we provide a review of existing BD4D work to study the
impact of big data on the development of society. In addition to reviewing the
important works, we also highlight important challenges and open issues.
| [
{
"version": "v1",
"created": "Thu, 25 Feb 2016 06:02:33 GMT"
}
] | 2016-02-26T00:00:00 | [
[
"Ali",
"Anwaar",
""
],
[
"Qadir",
"Junaid",
""
],
[
"Rasool",
"Raihan ur",
""
],
[
"Sathiaseelan",
"Arjuna",
""
],
[
"Zwitter",
"Andrej",
""
]
] | TITLE: Big Data For Development: Applications and Techniques
ABSTRACT: With the explosion of social media sites and proliferation of digital
computing devices and Internet access, massive amounts of public data are being
generated on a daily basis. Efficient techniques/algorithms to analyze this
massive amount of data can provide near real-time information about emerging
trends and provide early warning in case of an imminent emergency (such as the
outbreak of a viral disease). In addition, careful mining of these data can
reveal many useful indicators of socioeconomic and political events, which can
help in establishing effective public policies. The focus of this study is to
review the application of big data analytics for the purpose of human
development. The emerging ability to use big data techniques for development
(BD4D) promises to revolutionize healthcare, education, and agriculture;
facilitate the alleviation of poverty; and help to deal with humanitarian
crises and violent conflicts. Besides all the benefits, the large-scale
deployment of BD4D is beset with several challenges due to the massive size,
fast-changing and diverse nature of big data. The most pressing concerns relate
to efficient data acquisition and sharing, establishing context (e.g.,
geolocation and time) and veracity of a dataset, and ensuring appropriate
privacy. In this study, we provide a review of existing BD4D work to study the
impact of big data on the development of society. In addition to reviewing the
important works, we also highlight important challenges and open issues.
| no_new_dataset | 0.935641 |
1602.07865 | Jesse Krijthe | Jesse H. Krijthe and Marco Loog | Projected Estimators for Robust Semi-supervised Classification | 13 pages, 2 figures, 1 table | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For semi-supervised techniques to be applied safely in practice we at least
want methods to outperform their supervised counterparts. We study this
question for classification using the well-known quadratic surrogate loss
function. Using a projection of the supervised estimate onto a set of
constraints imposed by the unlabeled data, we find we can safely improve over
the supervised solution in terms of this quadratic loss. Unlike other
approaches to semi-supervised learning, the procedure does not rely on
assumptions that are not intrinsic to the classifier at hand. It is
theoretically demonstrated that, measured on the labeled and unlabeled training
data, this semi-supervised procedure never gives a lower quadratic loss than
the supervised alternative. To our knowledge this is the first approach that
offers such strong, albeit conservative, guarantees for improvement over the
supervised solution. The characteristics of our approach are explicated using
benchmark datasets to further understand the similarities and differences
between the quadratic loss criterion used in the theoretical results and the
classification accuracy often considered in practice.
| [
{
"version": "v1",
"created": "Thu, 25 Feb 2016 09:57:42 GMT"
}
] | 2016-02-26T00:00:00 | [
[
"Krijthe",
"Jesse H.",
""
],
[
"Loog",
"Marco",
""
]
] | TITLE: Projected Estimators for Robust Semi-supervised Classification
ABSTRACT: For semi-supervised techniques to be applied safely in practice we at least
want methods to outperform their supervised counterparts. We study this
question for classification using the well-known quadratic surrogate loss
function. Using a projection of the supervised estimate onto a set of
constraints imposed by the unlabeled data, we find we can safely improve over
the supervised solution in terms of this quadratic loss. Unlike other
approaches to semi-supervised learning, the procedure does not rely on
assumptions that are not intrinsic to the classifier at hand. It is
theoretically demonstrated that, measured on the labeled and unlabeled training
data, this semi-supervised procedure never gives a lower quadratic loss than
the supervised alternative. To our knowledge this is the first approach that
offers such strong, albeit conservative, guarantees for improvement over the
supervised solution. The characteristics of our approach are explicated using
benchmark datasets to further understand the similarities and differences
between the quadratic loss criterion used in the theoretical results and the
classification accuracy often considered in practice.
| no_new_dataset | 0.944022 |
1602.08007 | Yann Ollivier | Ga\'etan Marceau-Caron, Yann Ollivier | Practical Riemannian Neural Networks | null | null | null | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide the first experimental results on non-synthetic datasets for the
quasi-diagonal Riemannian gradient descents for neural networks introduced in
[Ollivier, 2015]. These include the MNIST, SVHN, and FACE datasets as well as a
previously unpublished electroencephalogram dataset. The quasi-diagonal
Riemannian algorithms consistently beat simple stochastic gradient
descents by a varying margin. The computational overhead with respect to simple
backpropagation is around a factor $2$. Perhaps more interestingly, these
methods also reach their final performance quickly, thus requiring fewer
training epochs and a smaller total computation time.
We also present an implementation guide to these Riemannian gradient descents
for neural networks, showing how the quasi-diagonal versions can be implemented
with minimal effort on top of existing routines which compute gradients.
| [
{
"version": "v1",
"created": "Thu, 25 Feb 2016 17:37:28 GMT"
}
] | 2016-02-26T00:00:00 | [
[
"Marceau-Caron",
"Gaétan",
""
],
[
"Ollivier",
"Yann",
""
]
] | TITLE: Practical Riemannian Neural Networks
ABSTRACT: We provide the first experimental results on non-synthetic datasets for the
quasi-diagonal Riemannian gradient descents for neural networks introduced in
[Ollivier, 2015]. These include the MNIST, SVHN, and FACE datasets as well as a
previously unpublished electroencephalogram dataset. The quasi-diagonal
Riemannian algorithms consistently beat simple stochastic gradient
descents by a varying margin. The computational overhead with respect to simple
backpropagation is around a factor $2$. Perhaps more interestingly, these
methods also reach their final performance quickly, thus requiring fewer
training epochs and a smaller total computation time.
We also present an implementation guide to these Riemannian gradient descents
for neural networks, showing how the quasi-diagonal versions can be implemented
with minimal effort on top of existing routines which compute gradients.
| no_new_dataset | 0.878835 |
1407.3345 | Camellia Sarkar | Camellia Sarkar, Sarika Jalan | Social patterns revealed through random matrix theory | 22 pages, 7 figures | EPL 108, 48003 (2014) | 10.1209/0295-5075/108/48003 | null | physics.soc-ph cs.SI nlin.AO physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the tremendous advancements in the field of network theory, very few
studies have taken into consideration the weights of interactions, which emerge
naturally in all real-world systems. Using random matrix analysis of a weighted
social network, we demonstrate the profound impact of weights in interactions
on emerging structural properties. The analysis reveals that randomness
existing in a particular time frame affects the decisions of individuals,
affording them more freedom of choice in situations of financial security.
While the structural organization of the networks remains the same throughout
all datasets, random matrix theory provides insight into the interaction
patterns of individuals in the society in situations of crisis. It has also been
contemplated that individual accountability in terms of weighted interactions
remains as a key to success unless segregation of tasks comes into play.
| [
{
"version": "v1",
"created": "Sat, 12 Jul 2014 05:15:18 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Oct 2014 04:49:16 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Nov 2014 05:40:16 GMT"
},
{
"version": "v4",
"created": "Wed, 24 Feb 2016 05:46:00 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Sarkar",
"Camellia",
""
],
[
"Jalan",
"Sarika",
""
]
] | TITLE: Social patterns revealed through random matrix theory
ABSTRACT: Despite the tremendous advancements in the field of network theory, very few
studies have taken into consideration the weights of interactions, which emerge
naturally in all real-world systems. Using random matrix analysis of a weighted
social network, we demonstrate the profound impact of weights in interactions
on emerging structural properties. The analysis reveals that randomness
existing in a particular time frame affects the decisions of individuals,
affording them more freedom of choice in situations of financial security.
While the structural organization of the networks remains the same throughout
all datasets, random matrix theory provides insight into the interaction
patterns of individuals in the society in situations of crisis. It has also been
contemplated that individual accountability in terms of weighted interactions
remains as a key to success unless segregation of tasks comes into play.
| no_new_dataset | 0.943034 |
1511.05879 | Giorgos Tolias | Giorgos Tolias, Ronan Sicre and Herv\'e J\'egou | Particular object retrieval with integral max-pooling of CNN activations | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, image representation built upon Convolutional Neural Network (CNN)
has been shown to provide effective descriptors for image search, outperforming
pre-CNN features as short-vector representations. Yet such models are not
compatible with geometry-aware re-ranking methods and still outperformed, on
some particular object retrieval benchmarks, by traditional image search
systems relying on precise descriptor matching, geometric re-ranking, or query
expansion. This work revisits both retrieval stages, namely initial search and
re-ranking, by employing the same primitive information derived from the CNN.
We build compact feature vectors that encode several image regions without the
need to feed multiple inputs to the network. Furthermore, we extend integral
images to handle max-pooling on convolutional layer activations, allowing us to
efficiently localize matching objects. The resulting bounding box is finally
used for image re-ranking. As a result, this paper significantly improves
the existing CNN-based recognition pipeline: we report for the first time results
competing with traditional methods on the challenging Oxford5k and Paris6k
datasets.
| [
{
"version": "v1",
"created": "Wed, 18 Nov 2015 17:02:59 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Feb 2016 15:14:34 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Tolias",
"Giorgos",
""
],
[
"Sicre",
"Ronan",
""
],
[
"Jégou",
"Hervé",
""
]
] | TITLE: Particular object retrieval with integral max-pooling of CNN activations
ABSTRACT: Recently, image representation built upon Convolutional Neural Network (CNN)
has been shown to provide effective descriptors for image search, outperforming
pre-CNN features as short-vector representations. Yet such models are not
compatible with geometry-aware re-ranking methods and still outperformed, on
some particular object retrieval benchmarks, by traditional image search
systems relying on precise descriptor matching, geometric re-ranking, or query
expansion. This work revisits both retrieval stages, namely initial search and
re-ranking, by employing the same primitive information derived from the CNN.
We build compact feature vectors that encode several image regions without the
need to feed multiple inputs to the network. Furthermore, we extend integral
images to handle max-pooling on convolutional layer activations, allowing us to
efficiently localize matching objects. The resulting bounding box is finally
used for image re-ranking. As a result, this paper significantly improves
the existing CNN-based recognition pipeline: we report for the first time results
competing with traditional methods on the challenging Oxford5k and Paris6k
datasets.
| no_new_dataset | 0.946843 |
1511.06644 | C\'esar Lincoln Cavalcante Mattos | C\'esar Lincoln C. Mattos, Zhenwen Dai, Andreas Damianou, Jeremy
Forth, Guilherme A. Barreto, Neil D. Lawrence | Recurrent Gaussian Processes | Published as a conference paper at ICLR 2016. 12 pages, 3 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define Recurrent Gaussian Processes (RGP) models, a general family of
Bayesian nonparametric models with recurrent GP priors which are able to learn
dynamical patterns from sequential data. Similar to Recurrent Neural Networks
(RNNs), RGPs can have different formulations for their internal states,
distinct inference methods and be extended with deep structures. In such
context, we propose a novel deep RGP model whose autoregressive states are
latent, thereby performing representation and dynamical learning
simultaneously. To fully exploit the Bayesian nature of the RGP model we
develop the Recurrent Variational Bayes (REVARB) framework, which enables
efficient inference and strong regularization through coherent propagation of
uncertainty across the RGP layers and states. We also introduce a RGP extension
where variational parameters are greatly reduced by being reparametrized
through RNN-based sequential recognition models. We apply our model to the
tasks of nonlinear system identification and human motion modeling. The
promising results obtained indicate that our RGP model maintains its high
flexibility while being able to avoid overfitting and remaining applicable even
when larger datasets are not available.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 15:37:24 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Nov 2015 10:39:07 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Jan 2016 12:15:13 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Jan 2016 18:03:50 GMT"
},
{
"version": "v5",
"created": "Tue, 9 Feb 2016 12:39:07 GMT"
},
{
"version": "v6",
"created": "Wed, 24 Feb 2016 20:01:19 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Mattos",
"César Lincoln C.",
""
],
[
"Dai",
"Zhenwen",
""
],
[
"Damianou",
"Andreas",
""
],
[
"Forth",
"Jeremy",
""
],
[
"Barreto",
"Guilherme A.",
""
],
[
"Lawrence",
"Neil D.",
""
]
] | TITLE: Recurrent Gaussian Processes
ABSTRACT: We define Recurrent Gaussian Processes (RGP) models, a general family of
Bayesian nonparametric models with recurrent GP priors which are able to learn
dynamical patterns from sequential data. Similar to Recurrent Neural Networks
(RNNs), RGPs can have different formulations for their internal states,
distinct inference methods and be extended with deep structures. In such
context, we propose a novel deep RGP model whose autoregressive states are
latent, thereby performing representation and dynamical learning
simultaneously. To fully exploit the Bayesian nature of the RGP model we
develop the Recurrent Variational Bayes (REVARB) framework, which enables
efficient inference and strong regularization through coherent propagation of
uncertainty across the RGP layers and states. We also introduce a RGP extension
where variational parameters are greatly reduced by being reparametrized
through RNN-based sequential recognition models. We apply our model to the
tasks of nonlinear system identification and human motion modeling. The
promising results obtained indicate that our RGP model maintains its high
flexibility while being able to avoid overfitting and remaining applicable even
when larger datasets are not available.
| no_new_dataset | 0.946646 |
1602.07332 | Ranjay Krishna | Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata,
Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A.
Shamma, Michael S. Bernstein, Fei-Fei Li | Visual Genome: Connecting Language and Vision Using Crowdsourced Dense
Image Annotations | 44 pages, 37 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite progress in perceptual tasks such as image classification, computers
still perform poorly on cognitive tasks such as image description and question
answering. Cognition is core to tasks that involve not just recognizing, but
reasoning about our visual world. However, models used to tackle the rich
content in images for cognitive tasks are still being trained using the same
datasets designed for perceptual tasks. To achieve success at cognitive tasks,
models need to understand the interactions and relationships between objects in
an image. When asked "What vehicle is the person riding?", computers will need
to identify the objects in an image as well as the relationships riding(man,
carriage) and pulling(horse, carriage) in order to answer correctly that "the
person is riding a horse-drawn carriage".
In this paper, we present the Visual Genome dataset to enable the modeling of
such relationships. We collect dense annotations of objects, attributes, and
relationships within each image to learn these models. Specifically, our
dataset contains over 100K images where each image has an average of 21
objects, 18 attributes, and 18 pairwise relationships between objects. We
canonicalize the objects, attributes, relationships, and noun phrases in region
descriptions and question-answer pairs to WordNet synsets. Together, these
annotations represent the densest and largest dataset of image descriptions,
objects, attributes, relationships, and question answers.
| [
{
"version": "v1",
"created": "Tue, 23 Feb 2016 22:00:40 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Krishna",
"Ranjay",
""
],
[
"Zhu",
"Yuke",
""
],
[
"Groth",
"Oliver",
""
],
[
"Johnson",
"Justin",
""
],
[
"Hata",
"Kenji",
""
],
[
"Kravitz",
"Joshua",
""
],
[
"Chen",
"Stephanie",
""
],
[
"Kalantidis",
"Yannis",
""
],
[
"Li",
"Li-Jia",
""
],
[
"Shamma",
"David A.",
""
],
[
"Bernstein",
"Michael S.",
""
],
[
"Li",
"Fei-Fei",
""
]
] | TITLE: Visual Genome: Connecting Language and Vision Using Crowdsourced Dense
Image Annotations
ABSTRACT: Despite progress in perceptual tasks such as image classification, computers
still perform poorly on cognitive tasks such as image description and question
answering. Cognition is core to tasks that involve not just recognizing, but
reasoning about our visual world. However, models used to tackle the rich
content in images for cognitive tasks are still being trained using the same
datasets designed for perceptual tasks. To achieve success at cognitive tasks,
models need to understand the interactions and relationships between objects in
an image. When asked "What vehicle is the person riding?", computers will need
to identify the objects in an image as well as the relationships riding(man,
carriage) and pulling(horse, carriage) in order to answer correctly that "the
person is riding a horse-drawn carriage".
In this paper, we present the Visual Genome dataset to enable the modeling of
such relationships. We collect dense annotations of objects, attributes, and
relationships within each image to learn these models. Specifically, our
dataset contains over 100K images where each image has an average of 21
objects, 18 attributes, and 18 pairwise relationships between objects. We
canonicalize the objects, attributes, relationships, and noun phrases in region
descriptions and question-answer pairs to WordNet synsets. Together, these
annotations represent the densest and largest dataset of image descriptions,
objects, attributes, relationships, and question answers.
| new_dataset | 0.964355 |
1602.07366 | Weidong Wang | Weidong Wang, Liqiang Wang, Wei Lu | An Intelligent QoS Identification for Untrustworthy Web Services Via
Two-phase Neural Networks | 8 pages, 5 figures | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | QoS identification for untrustworthy Web services is critical in QoS
management in service computing, since the performance of untrustworthy Web
services may result in QoS downgrade. The key issue is to intelligently learn
the characteristics of trustworthy Web services from different QoS levels, then
to identify the untrustworthy ones according to the characteristics of QoS
metrics. As one of the intelligent identification approaches, deep neural
network has emerged as a powerful technique in recent years. In this paper, we
propose a novel two-phase neural network model to identify the untrustworthy
Web services. In the first phase, Web services are collected from the published
QoS dataset. Then, we design a feedforward neural network model to build the
classifier for Web services with different QoS levels. In the second phase, we
employ a probabilistic neural network (PNN) model to identify the untrustworthy
Web services from each classification. The experimental results show the
proposed approach has a 90.5% identification ratio, far higher than other
competing approaches.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 01:38:14 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Wang",
"Weidong",
""
],
[
"Wang",
"Liqiang",
""
],
[
"Lu",
"Wei",
""
]
] | TITLE: An Intelligent QoS Identification for Untrustworthy Web Services Via
Two-phase Neural Networks
ABSTRACT: QoS identification for untrustworthy Web services is critical in QoS
management in service computing, since the performance of untrustworthy Web
services may result in QoS downgrade. The key issue is to intelligently learn
the characteristics of trustworthy Web services from different QoS levels, then
to identify the untrustworthy ones according to the characteristics of QoS
metrics. As one of the intelligent identification approaches, deep neural
network has emerged as a powerful technique in recent years. In this paper, we
propose a novel two-phase neural network model to identify the untrustworthy
Web services. In the first phase, Web services are collected from the published
QoS dataset. Then, we design a feedforward neural network model to build the
classifier for Web services with different QoS levels. In the second phase, we
employ a probabilistic neural network (PNN) model to identify the untrustworthy
Web services from each classification. The experimental results show the
proposed approach has a 90.5% identification ratio, far higher than other
competing approaches.
| no_new_dataset | 0.950319 |
1602.07383 | Weiguang Ding | Weiguang Ding, Graham Taylor | Automatic Moth Detection from Trap Images for Pest Management | Preprints accepted by Computers and electronics in agriculture | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monitoring the number of insect pests is a crucial component in
pheromone-based pest management systems. In this paper, we propose an automatic
detection pipeline based on deep learning for identifying and counting pests in
images taken inside field traps. Applied to a commercial codling moth dataset,
our method shows promising performance both qualitatively and quantitatively.
Compared to previous attempts at pest detection, our approach uses no
pest-specific engineering which enables it to adapt to other species and
environments with minimal human effort. It is amenable to implementation on
parallel hardware and therefore capable of deployment in settings where
real-time performance is required.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 03:35:42 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Ding",
"Weiguang",
""
],
[
"Taylor",
"Graham",
""
]
] | TITLE: Automatic Moth Detection from Trap Images for Pest Management
ABSTRACT: Monitoring the number of insect pests is a crucial component in
pheromone-based pest management systems. In this paper, we propose an automatic
detection pipeline based on deep learning for identifying and counting pests in
images taken inside field traps. Applied to a commercial codling moth dataset,
our method shows promising performance both qualitatively and quantitatively.
Compared to previous attempts at pest detection, our approach uses no
pest-specific engineering which enables it to adapt to other species and
environments with minimal human effort. It is amenable to implementation on
parallel hardware and therefore capable of deployment in settings where
real-time performance is required.
| no_new_dataset | 0.917154 |
1602.07428 | Jun Zhu | Jun Zhu and Jiaming Song and Bei Chen | Max-Margin Nonparametric Latent Feature Models for Link Prediction | 14 pages, 8 figures | null | null | null | cs.LG cs.SI stat.ME stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Link prediction is a fundamental task in statistical network analysis. Recent
advances have been made in learning flexible nonparametric Bayesian latent
feature models for link prediction. In this paper, we present a max-margin
learning method for such nonparametric latent feature relational models. Our
approach attempts to unite the ideas of max-margin learning and Bayesian
nonparametrics to discover discriminative latent features for link prediction.
It inherits the advances of nonparametric Bayesian methods to infer the unknown
latent social dimension, while for discriminative link prediction, it adopts
the max-margin learning principle by minimizing a hinge-loss using the linear
expectation operator, without dealing with a highly nonlinear link likelihood
function. For posterior inference, we develop an efficient stochastic
variational inference algorithm under a truncated mean-field assumption. Our
methods can scale up to large-scale real networks with millions of entities and
tens of millions of positive links. We also provide a full Bayesian
formulation, which can avoid tuning regularization hyper-parameters.
Experimental results on a diverse range of real datasets demonstrate the
benefits inherited from max-margin learning and Bayesian nonparametric
inference.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 08:08:05 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Zhu",
"Jun",
""
],
[
"Song",
"Jiaming",
""
],
[
"Chen",
"Bei",
""
]
] | TITLE: Max-Margin Nonparametric Latent Feature Models for Link Prediction
ABSTRACT: Link prediction is a fundamental task in statistical network analysis. Recent
advances have been made in learning flexible nonparametric Bayesian latent
feature models for link prediction. In this paper, we present a max-margin
learning method for such nonparametric latent feature relational models. Our
approach attempts to unite the ideas of max-margin learning and Bayesian
nonparametrics to discover discriminative latent features for link prediction.
It inherits the advances of nonparametric Bayesian methods to infer the unknown
latent social dimension, while for discriminative link prediction, it adopts
the max-margin learning principle by minimizing a hinge-loss using the linear
expectation operator, without dealing with a highly nonlinear link likelihood
function. For posterior inference, we develop an efficient stochastic
variational inference algorithm under a truncated mean-field assumption. Our
methods can scale up to large-scale real networks with millions of entities and
tens of millions of positive links. We also provide a full Bayesian
formulation, which can avoid tuning regularization hyper-parameters.
Experimental results on a diverse range of real datasets demonstrate the
benefits inherited from max-margin learning and Bayesian nonparametric
inference.
| no_new_dataset | 0.945147 |
1602.07464 | Pawe{\l} Teisseyre | Pawe{\l} Teisseyre | Feature ranking for multi-label classification using Markov Networks | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a simple and efficient method for ranking features in multi-label
classification. The method produces a ranking of features showing their
relevance in predicting labels, which in turn allows to choose a final subset
of features. The procedure is based on Markov Networks and allows to model the
dependencies between labels and features in a direct way. In the first step we
build a simple network using only labels and then we test how much adding a
single feature affects the initial network. More specifically, in the first
step we use the Ising model whereas the second step is based on the score
statistic, which allows to test the significance of added features very quickly.
The proposed approach does not require transformation of label space, gives
interpretable results and allows for attractive visualization of dependency
structure. We give a theoretical justification of the procedure by discussing
some theoretical properties of the Ising model and the score statistic. We also
discuss feature ranking procedure based on fitting Ising model using $l_1$
regularized logistic regressions. Numerical experiments show that the proposed
methods outperform the conventional approaches on the considered artificial and
real datasets.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 11:11:10 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Teisseyre",
"Paweł",
""
]
] | TITLE: Feature ranking for multi-label classification using Markov Networks
ABSTRACT: We propose a simple and efficient method for ranking features in multi-label
classification. The method produces a ranking of features showing their
relevance in predicting labels, which in turn allows to choose a final subset
of features. The procedure is based on Markov Networks and allows to model the
dependencies between labels and features in a direct way. In the first step we
build a simple network using only labels and then we test how much adding a
single feature affects the initial network. More specifically, in the first
step we use the Ising model whereas the second step is based on the score
statistic, which allows to test the significance of added features very quickly.
The proposed approach does not require transformation of label space, gives
interpretable results and allows for attractive visualization of dependency
structure. We give a theoretical justification of the procedure by discussing
some theoretical properties of the Ising model and the score statistic. We also
discuss feature ranking procedure based on fitting Ising model using $l_1$
regularized logistic regressions. Numerical experiments show that the proposed
methods outperform the conventional approaches on the considered artificial and
real datasets.
| no_new_dataset | 0.949153 |
1602.07475 | Lluis Gomez | Lluis Gomez and Dimosthenis Karatzas | A fine-grained approach to scene text script identification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on the problem of script identification in unconstrained
scenarios. Script identification is an important prerequisite to recognition,
and an indispensable condition for automatic text understanding systems
designed for multi-language environments. Although widely studied for document
images and handwritten documents, it remains an almost unexplored territory for
scene text images.
We detail a novel method for script identification in natural images that
combines convolutional features and the Naive-Bayes Nearest Neighbor
classifier. The proposed framework efficiently exploits the discriminative
power of small stroke-parts, in a fine-grained classification framework.
In addition, we propose a new public benchmark dataset for the evaluation of
joint text detection and script identification in natural scenes. Experiments
done in this new dataset demonstrate that the proposed method yields state of
the art results, while it generalizes well to different datasets and variable
number of scripts. The evidence provided shows that multi-lingual scene text
recognition in the wild is a viable proposition. Source code of the proposed
method is made available online.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 12:12:07 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Gomez",
"Lluis",
""
],
[
"Karatzas",
"Dimosthenis",
""
]
] | TITLE: A fine-grained approach to scene text script identification
ABSTRACT: This paper focuses on the problem of script identification in unconstrained
scenarios. Script identification is an important prerequisite to recognition,
and an indispensable condition for automatic text understanding systems
designed for multi-language environments. Although widely studied for document
images and handwritten documents, it remains an almost unexplored territory for
scene text images.
We detail a novel method for script identification in natural images that
combines convolutional features and the Naive-Bayes Nearest Neighbor
classifier. The proposed framework efficiently exploits the discriminative
power of small stroke-parts, in a fine-grained classification framework.
In addition, we propose a new public benchmark dataset for the evaluation of
joint text detection and script identification in natural scenes. Experiments
done in this new dataset demonstrate that the proposed method yields state of
the art results, while it generalizes well to different datasets and variable
number of scripts. The evidence provided shows that multi-lingual scene text
recognition in the wild is a viable proposition. Source code of the proposed
method is made available online.
| new_dataset | 0.960878 |
1602.07614 | Daniele Ramazzotti | Daniele Ramazzotti | A Model of Selective Advantage for the Efficient Inference of Cancer
Clonal Evolution | Doctoral thesis, University of Milan | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been a resurgence of interest in rigorous algorithms for
the inference of cancer progression from genomic data. The motivations are
manifold: (i) growing NGS and single cell data from cancer patients, (ii) need
for novel Data Science and Machine Learning algorithms to infer models of
cancer progression, and (iii) a desire to understand the temporal and
heterogeneous structure of tumor to tame its progression by efficacious
therapeutic intervention. This thesis presents a multi-disciplinary effort to
model tumor progression involving successive accumulation of genetic
alterations, each resulting in populations manifesting themselves in a cancer
phenotype. The framework presented in this work along with algorithms derived
from it, represents a novel approach for inferring cancer progression, whose
accuracy and convergence rates surpass the existing techniques. The approach
derives its power from several fields including algorithms in machine learning,
theory of causality and cancer biology. Furthermore, a modular pipeline to
extract ensemble-level progression models from sequenced cancer genomes is
proposed. The pipeline combines state-of-the-art techniques for sample
stratification, driver selection, identification of fitness-equivalent
exclusive alterations and progression model inference. Furthermore, the results
are validated by synthetic data with realistic generative models, and
empirically interpreted in the context of real cancer datasets; in the latter
case, biologically significant conclusions are also highlighted. Specifically,
it demonstrates the pipeline's ability to reproduce much of the knowledge on
colorectal cancer, as well as to suggest novel hypotheses. Lastly, it also
proves that the proposed framework can be applied to reconstruct the
evolutionary history of cancer clones in single patients, as illustrated by an
example from clear cell renal carcinomas.
| [
{
"version": "v1",
"created": "Mon, 15 Feb 2016 16:33:39 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Ramazzotti",
"Daniele",
""
]
] | TITLE: A Model of Selective Advantage for the Efficient Inference of Cancer
Clonal Evolution
ABSTRACT: Recently, there has been a resurgence of interest in rigorous algorithms for
the inference of cancer progression from genomic data. The motivations are
manifold: (i) growing NGS and single cell data from cancer patients, (ii) need
for novel Data Science and Machine Learning algorithms to infer models of
cancer progression, and (iii) a desire to understand the temporal and
heterogeneous structure of tumor to tame its progression by efficacious
therapeutic intervention. This thesis presents a multi-disciplinary effort to
model tumor progression involving successive accumulation of genetic
alterations, each resulting in populations manifesting themselves in a cancer
phenotype. The framework presented in this work along with algorithms derived
from it, represents a novel approach for inferring cancer progression, whose
accuracy and convergence rates surpass the existing techniques. The approach
derives its power from several fields including algorithms in machine learning,
theory of causality and cancer biology. Furthermore, a modular pipeline to
extract ensemble-level progression models from sequenced cancer genomes is
proposed. The pipeline combines state-of-the-art techniques for sample
stratification, driver selection, identification of fitness-equivalent
exclusive alterations and progression model inference. Furthermore, the results
are validated by synthetic data with realistic generative models, and
empirically interpreted in the context of real cancer datasets; in the latter
case, biologically significant conclusions are also highlighted. Specifically,
it demonstrates the pipeline's ability to reproduce much of the knowledge on
colorectal cancer, as well as to suggest novel hypotheses. Lastly, it also
proves that the proposed framework can be applied to reconstruct the
evolutionary history of cancer clones in single patients, as illustrated by an
example from clear cell renal carcinomas.
| no_new_dataset | 0.943556 |
1602.07633 | Markus Borg | Markus Borg | Advancing Trace Recovery Evaluation - Applied Information Retrieval in a
Software Engineering Context | Introduction and synthesis of a cumulative thesis. The four papers
included in the thesis are not included in this file | null | null | Licentiate Thesis 13, 2012 ISSN 1652-4691 | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Successful development of software systems involves efficient navigation
among software artifacts. One state-of-practice approach to structure
information is to establish trace links between artifacts, a practice that is
also enforced by several development standards. Unfortunately, manually
maintaining trace links in an evolving system is a tedious task. To tackle this
issue, several researchers have proposed treating the capture and recovery of
trace links as an Information Retrieval (IR) problem. The work contains a
Systematic Literature Review (SLR) of previous evaluations of IR-based trace
recovery. We show that a majority of previous evaluations have been
technology-oriented, conducted in "the cave of IR evaluation", using small
datasets as experimental input. Also, software artifacts originating from
student projects have frequently been used in evaluations. We conducted a
survey among traceability researchers, and found that a majority consider
student artifacts to be only partly representative to industrial counterparts.
Our findings call for additional case studies to evaluate IR-based trace
recovery within the full complexity of an industrial setting. Also, this thesis
contributes to the body of empirical evidence of IR-based trace recovery in two
experiments with industrial software artifacts. The technology-oriented
experiment highlights the clear dependence between datasets and the accuracy of
IR-based trace recovery, in line with findings from the SLR. The human-oriented
experiment investigates how different quality levels of tool output affect the
tracing accuracy of engineers. Finally, we present how tools and methods are
evaluated in the general field of IR research, and propose a taxonomy of
evaluation contexts tailored for IR-based trace recovery.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 18:35:53 GMT"
}
] | 2016-02-25T00:00:00 | [
[
"Borg",
"Markus",
""
]
] | TITLE: Advancing Trace Recovery Evaluation - Applied Information Retrieval in a
Software Engineering Context
ABSTRACT: Successful development of software systems involves efficient navigation
among software artifacts. One state-of-practice approach to structure
information is to establish trace links between artifacts, a practice that is
also enforced by several development standards. Unfortunately, manually
maintaining trace links in an evolving system is a tedious task. To tackle this
issue, several researchers have proposed treating the capture and recovery of
trace links as an Information Retrieval (IR) problem. The work contains a
Systematic Literature Review (SLR) of previous evaluations of IR-based trace
recovery. We show that a majority of previous evaluations have been
technology-oriented, conducted in "the cave of IR evaluation", using small
datasets as experimental input. Also, software artifacts originating from
student projects have frequently been used in evaluations. We conducted a
survey among traceability researchers, and found that a majority consider
student artifacts to be only partly representative of industrial counterparts.
Our findings call for additional case studies to evaluate IR-based trace
recovery within the full complexity of an industrial setting. Also, this thesis
contributes to the body of empirical evidence of IR-based trace recovery in two
experiments with industrial software artifacts. The technology-oriented
experiment highlights the clear dependence between datasets and the accuracy of
IR-based trace recovery, in line with findings from the SLR. The human-oriented
experiment investigates how different quality levels of tool output affect the
tracing accuracy of engineers. Finally, we present how tools and methods are
evaluated in the general field of IR research, and propose a taxonomy of
evaluation contexts tailored for IR-based trace recovery.
| no_new_dataset | 0.944638 |
1510.05830 | Ariel Jaffe | Ariel Jaffe, Ethan Fetaya, Boaz Nadler, Tingting Jiang, Yuval Kluger | Unsupervised Ensemble Learning with Dependent Classifiers | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In unsupervised ensemble learning, one obtains predictions from multiple
sources or classifiers, yet without knowing the reliability and expertise of
each source, and with no labeled data to assess it. The task is to combine
these possibly conflicting predictions into an accurate meta-learner. Most
works to date assumed perfect diversity between the different sources, a
property known as conditional independence. In realistic scenarios, however,
this assumption is often violated, and ensemble learners based on it can be
severely sub-optimal. The key challenges we address in this paper are:\ (i) how
to detect, in an unsupervised manner, strong violations of conditional
independence; and (ii) construct a suitable meta-learner. To this end we
introduce a statistical model that allows for dependencies between classifiers.
Our main contributions are the development of novel unsupervised methods to
detect strongly dependent classifiers, better estimate their accuracies, and
construct an improved meta-learner. Using both artificial and real datasets, we
showcase the importance of taking classifier dependencies into account and the
competitive performance of our approach.
| [
{
"version": "v1",
"created": "Tue, 20 Oct 2015 10:48:40 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Feb 2016 20:50:55 GMT"
}
] | 2016-02-24T00:00:00 | [
[
"Jaffe",
"Ariel",
""
],
[
"Fetaya",
"Ethan",
""
],
[
"Nadler",
"Boaz",
""
],
[
"Jiang",
"Tingting",
""
],
[
"Kluger",
"Yuval",
""
]
] | TITLE: Unsupervised Ensemble Learning with Dependent Classifiers
ABSTRACT: In unsupervised ensemble learning, one obtains predictions from multiple
sources or classifiers, yet without knowing the reliability and expertise of
each source, and with no labeled data to assess it. The task is to combine
these possibly conflicting predictions into an accurate meta-learner. Most
works to date assumed perfect diversity between the different sources, a
property known as conditional independence. In realistic scenarios, however,
this assumption is often violated, and ensemble learners based on it can be
severely sub-optimal. The key challenges we address in this paper are:\ (i) how
to detect, in an unsupervised manner, strong violations of conditional
independence; and (ii) construct a suitable meta-learner. To this end we
introduce a statistical model that allows for dependencies between classifiers.
Our main contributions are the development of novel unsupervised methods to
detect strongly dependent classifiers, better estimate their accuracies, and
construct an improved meta-learner. Using both artificial and real datasets, we
showcase the importance of taking classifier dependencies into account and the
competitive performance of our approach.
| no_new_dataset | 0.945045 |
1602.06687 | Margareta Ackerman Margareta Ackerman | Margareta Ackerman, Andreas Adolfsson, and Naomi Brownstein | An Effective and Efficient Approach for Clusterability Evaluation | 10 pages, 2 tables, 4 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering is an essential data mining tool that aims to discover inherent
cluster structure in data. As such, the study of clusterability, which
evaluates whether data possesses such structure, is an integral part of cluster
analysis. Yet, despite their central role in the theory and application of
clustering, current notions of clusterability fall short in two crucial aspects
that render them impractical; most are computationally infeasible and others
fail to classify the structure of real datasets.
In this paper, we propose a novel approach to clusterability evaluation that
is both computationally efficient and successfully captures the structure of
real data. Our method applies multimodality tests to the (one-dimensional) set
of pairwise distances based on the original, potentially high-dimensional data.
We present extensive analyses of our approach for both the Dip and Silverman
multimodality tests on real data as well as 17,000 simulations, demonstrating
the success of our approach as the first practical notion of clusterability.
| [
{
"version": "v1",
"created": "Mon, 22 Feb 2016 09:01:10 GMT"
}
] | 2016-02-24T00:00:00 | [
[
"Ackerman",
"Margareta",
""
],
[
"Adolfsson",
"Andreas",
""
],
[
"Brownstein",
"Naomi",
""
]
] | TITLE: An Effective and Efficient Approach for Clusterability Evaluation
ABSTRACT: Clustering is an essential data mining tool that aims to discover inherent
cluster structure in data. As such, the study of clusterability, which
evaluates whether data possesses such structure, is an integral part of cluster
analysis. Yet, despite their central role in the theory and application of
clustering, current notions of clusterability fall short in two crucial aspects
that render them impractical; most are computationally infeasible and others
fail to classify the structure of real datasets.
In this paper, we propose a novel approach to clusterability evaluation that
is both computationally efficient and successfully captures the structure of
real data. Our method applies multimodality tests to the (one-dimensional) set
of pairwise distances based on the original, potentially high-dimensional data.
We present extensive analyses of our approach for both the Dip and Silverman
multimodality tests on real data as well as 17,000 simulations, demonstrating
the success of our approach as the first practical notion of clusterability.
| no_new_dataset | 0.948346 |
1602.06979 | Ethan Fast | Ethan Fast, Binbin Chen, Michael Bernstein | Empath: Understanding Topic Signals in Large-Scale Text | CHI: ACM Conference on Human Factors in Computing Systems 2016 | null | 10.1145/2858036.2858535 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human language is colored by a broad range of topics, but existing text
analysis tools only focus on a small number of them. We present Empath, a tool
that can generate and validate new lexical categories on demand from a small
set of seed terms (like "bleed" and "punch" to generate the category violence).
Empath draws connotations between words and phrases by deep learning a neural
embedding across more than 1.8 billion words of modern fiction. Given a small
set of seed words that characterize a category, Empath uses its neural
embedding to discover new related terms, then validates the category with a
crowd-powered filter. Empath also analyzes text across 200 built-in,
pre-validated categories we have generated from common topics in our web
dataset, like neglect, government, and social media. We show that Empath's
data-driven, human validated categories are highly correlated (r=0.906) with
similar categories in LIWC.
| [
{
"version": "v1",
"created": "Mon, 22 Feb 2016 21:47:43 GMT"
}
] | 2016-02-24T00:00:00 | [
[
"Fast",
"Ethan",
""
],
[
"Chen",
"Binbin",
""
],
[
"Bernstein",
"Michael",
""
]
] | TITLE: Empath: Understanding Topic Signals in Large-Scale Text
ABSTRACT: Human language is colored by a broad range of topics, but existing text
analysis tools only focus on a small number of them. We present Empath, a tool
that can generate and validate new lexical categories on demand from a small
set of seed terms (like "bleed" and "punch" to generate the category violence).
Empath draws connotations between words and phrases by deep learning a neural
embedding across more than 1.8 billion words of modern fiction. Given a small
set of seed words that characterize a category, Empath uses its neural
embedding to discover new related terms, then validates the category with a
crowd-powered filter. Empath also analyzes text across 200 built-in,
pre-validated categories we have generated from common topics in our web
dataset, like neglect, government, and social media. We show that Empath's
data-driven, human validated categories are highly correlated (r=0.906) with
similar categories in LIWC.
| no_new_dataset | 0.901097 |
1602.07040 | Eleni Rozaki | Eleni Rozaki | Clustering Optimisation Techniques in Mobile Networks | 8 pages, 4 figures | (IJRITCC), February 2016, Volume 4, Issue 2, PP:22-29 | null | null | cs.NI cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of mobile phones has exploded over the past years, abundantly through
the introduction of smartphones and the rapidly expanding use of mobile data.
This has resulted in a spiraling problem of ensuring quality of service for
users of mobile networks. Hence, mobile carriers and service providers need to
determine how to prioritise expansion decisions and optimise network faults to
ensure customer satisfaction and optimal network performance. To assist in that
decision-making process, this research employs data mining classification of
different Key Performance Indicator datasets to develop a monitoring scheme for
mobile networks as a means of identifying the causes of network malfunctions.
Then, the data are clustered to observe the characteristics of the technical
areas with the use of k-means clustering. The data output is further trained
with decision tree classification algorithms. The end result was that this
method of network optimisation allowed for significantly improved fault
detection performance.
| [
{
"version": "v1",
"created": "Sat, 20 Feb 2016 14:17:05 GMT"
}
] | 2016-02-24T00:00:00 | [
[
"Rozaki",
"Eleni",
""
]
] | TITLE: Clustering Optimisation Techniques in Mobile Networks
ABSTRACT: The use of mobile phones has exploded over the past years, abundantly through
the introduction of smartphones and the rapidly expanding use of mobile data.
This has resulted in a spiraling problem of ensuring quality of service for
users of mobile networks. Hence, mobile carriers and service providers need to
determine how to prioritise expansion decisions and optimise network faults to
ensure customer satisfaction and optimal network performance. To assist in that
decision-making process, this research employs data mining classification of
different Key Performance Indicator datasets to develop a monitoring scheme for
mobile networks as a means of identifying the causes of network malfunctions.
Then, the data are clustered to observe the characteristics of the technical
areas with the use of k-means clustering. The data output is further trained
with decision tree classification algorithms. The end result was that this
method of network optimisation allowed for significantly improved fault
detection performance.
| no_new_dataset | 0.949435 |
1602.07107 | Richard Combes | Thomas Bonald and Richard Combes | A Streaming Algorithm for Crowdsourced Data Classification | 23 pages | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a streaming algorithm for the binary classification of data based
on crowdsourcing. The algorithm learns the competence of each labeller by
comparing her labels to those of other labellers on the same tasks and uses
this information to minimize the prediction error rate on each task. We provide
performance guarantees of our algorithm for a fixed population of independent
labellers. In particular, we show that our algorithm is optimal in the sense
that the cumulative regret compared to the optimal decision with known labeller
error probabilities is finite, independently of the number of tasks to label.
The complexity of the algorithm is linear in the number of labellers and the
number of tasks, up to some logarithmic factors. Numerical experiments
illustrate the performance of our algorithm compared to existing algorithms,
including simple majority voting and expectation-maximization algorithms, on
both synthetic and real datasets.
| [
{
"version": "v1",
"created": "Tue, 23 Feb 2016 10:21:58 GMT"
}
] | 2016-02-24T00:00:00 | [
[
"Bonald",
"Thomas",
""
],
[
"Combes",
"Richard",
""
]
] | TITLE: A Streaming Algorithm for Crowdsourced Data Classification
ABSTRACT: We propose a streaming algorithm for the binary classification of data based
on crowdsourcing. The algorithm learns the competence of each labeller by
comparing her labels to those of other labellers on the same tasks and uses
this information to minimize the prediction error rate on each task. We provide
performance guarantees of our algorithm for a fixed population of independent
labellers. In particular, we show that our algorithm is optimal in the sense
that the cumulative regret compared to the optimal decision with known labeller
error probabilities is finite, independently of the number of tasks to label.
The complexity of the algorithm is linear in the number of labellers and the
number of tasks, up to some logarithmic factors. Numerical experiments
illustrate the performance of our algorithm compared to existing algorithms,
including simple majority voting and expectation-maximization algorithms, on
both synthetic and real datasets.
| no_new_dataset | 0.946892 |
1602.07280 | Vaibhav Rajan | Abhishek Sengupta, Vaibhav Rajan, Sakyajit Bhattacharya, G R K Sarma | A Statistical Model for Stroke Outcome Prediction and Treatment Planning | null | null | null | null | stat.AP cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stroke is a major cause of mortality and long--term disability in the world.
Predictive outcome models in stroke are valuable for personalized treatment,
rehabilitation planning and in controlled clinical trials. In this paper we
design a new model to predict outcome in the short-term, the putative
therapeutic window for several treatments. Our regression-based model has a
parametric form that is designed to address many challenges common in medical
datasets like highly correlated variables and class imbalance. Empirically our
model outperforms the best--known previous models in predicting short--term
outcomes and in inferring the most effective treatments that improve outcome.
| [
{
"version": "v1",
"created": "Mon, 22 Feb 2016 12:51:39 GMT"
}
] | 2016-02-24T00:00:00 | [
[
"Sengupta",
"Abhishek",
""
],
[
"Rajan",
"Vaibhav",
""
],
[
"Bhattacharya",
"Sakyajit",
""
],
[
"Sarma",
"G R K",
""
]
] | TITLE: A Statistical Model for Stroke Outcome Prediction and Treatment Planning
ABSTRACT: Stroke is a major cause of mortality and long--term disability in the world.
Predictive outcome models in stroke are valuable for personalized treatment,
rehabilitation planning and in controlled clinical trials. In this paper we
design a new model to predict outcome in the short-term, the putative
therapeutic window for several treatments. Our regression-based model has a
parametric form that is designed to address many challenges common in medical
datasets like highly correlated variables and class imbalance. Empirically our
model outperforms the best--known previous models in predicting short--term
outcomes and in inferring the most effective treatments that improve outcome.
| no_new_dataset | 0.950088 |
1310.7467 | Katharine Turner | Andrew Robinson and Katharine Turner | Hypothesis Testing for Topological Data Analysis | 14 pages, 5 figures, 1 table | null | null | null | stat.AP cs.CG math.AT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Persistent homology is a vital tool for topological data analysis. Previous
work has developed some statistical estimators for characteristics of
collections of persistence diagrams. However, tools that provide statistical
inference for observations that are persistence diagrams are limited.
Specifically, there is a need for tests that can assess the strength of
evidence against a claim that two samples arise from the same population or
process. We propose the use of randomization-style null hypothesis significance
tests (NHST) for these situations. The test is based on a loss function that
comprises pairwise distances between the elements of each sample and all the
elements in the other sample. We use this method to analyze a range of
simulated and experimental data. Through these examples we experimentally
explore the power of the p-values. Our results show that the
randomization-style NHST based on pairwise distances can distinguish between
samples from different processes, which suggests that its use for hypothesis
tests upon persistence diagrams is reasonable. We demonstrate its application
on a real dataset of fMRI data of patients with ADHD.
| [
{
"version": "v1",
"created": "Mon, 28 Oct 2013 15:49:46 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Feb 2016 15:42:46 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Robinson",
"Andrew",
""
],
[
"Turner",
"Katharine",
""
]
] | TITLE: Hypothesis Testing for Topological Data Analysis
ABSTRACT: Persistent homology is a vital tool for topological data analysis. Previous
work has developed some statistical estimators for characteristics of
collections of persistence diagrams. However, tools that provide statistical
inference for observations that are persistence diagrams are limited.
Specifically, there is a need for tests that can assess the strength of
evidence against a claim that two samples arise from the same population or
process. We propose the use of randomization-style null hypothesis significance
tests (NHST) for these situations. The test is based on a loss function that
comprises pairwise distances between the elements of each sample and all the
elements in the other sample. We use this method to analyze a range of
simulated and experimental data. Through these examples we experimentally
explore the power of the p-values. Our results show that the
randomization-style NHST based on pairwise distances can distinguish between
samples from different processes, which suggests that its use for hypothesis
tests upon persistence diagrams is reasonable. We demonstrate its application
on a real dataset of fMRI data of patients with ADHD.
| no_new_dataset | 0.940681 |
1408.2902 | Wei-Liang Qian | Adriano Francisco Siqueira, Carlos Jose Todero Peixoto, Chen Wu,
Wei-Liang Qian | Effect of stochastic transition in the fundamental diagram of traffic
flow | 21 pages, 4 figures | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose an alternative stochastic model for the fundamental
diagram of traffic flow with a minimal number of parameters. Our approach is
based on a mesoscopic viewpoint of the traffic system in terms of the dynamics
of vehicle speed transitions. A key feature of the present approach lies in its
stochastic nature which makes it possible to study not only the
flow-concentration relation, namely, the fundamental diagram, but also its
uncertainty, namely, the variance of the fundamental diagram \textemdash an
important characteristic in the observed traffic flow data. It is shown that in
the simplified versions of the model consisting of only a few speed states,
analytic solutions for both quantities can be obtained, which facilitate the
discussion of the corresponding physical content. We also show that the effect
of vehicle size can be included into the model by introducing the maximal
congestion density $k_{max}$. By making use of this parameter, the free flow
region and congested flow region are naturally divided, and the transition is
characterized by the capacity drop at the maximum of the flow-concentration
relation. The model parameters are then adjusted to the observed traffic flow
on the I-80 Freeway Dataset in the San Francisco area from the NGSIM program,
where both the fundamental diagram and its variance are reasonably reproduced.
Despite its simplicity, we argue that the current model provides an alternative
description for the fundamental diagram and its uncertainty in the study of
traffic flow.
| [
{
"version": "v1",
"created": "Wed, 13 Aug 2014 02:54:57 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Aug 2014 15:46:47 GMT"
},
{
"version": "v3",
"created": "Mon, 22 Dec 2014 19:47:12 GMT"
},
{
"version": "v4",
"created": "Sun, 21 Feb 2016 13:18:49 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Siqueira",
"Adriano Francisco",
""
],
[
"Peixoto",
"Carlos Jose Todero",
""
],
[
"Wu",
"Chen",
""
],
[
"Qian",
"Wei-Liang",
""
]
] | TITLE: Effect of stochastic transition in the fundamental diagram of traffic
flow
ABSTRACT: In this work, we propose an alternative stochastic model for the fundamental
diagram of traffic flow with a minimal number of parameters. Our approach is
based on a mesoscopic viewpoint of the traffic system in terms of the dynamics
of vehicle speed transitions. A key feature of the present approach lies in its
stochastic nature which makes it possible to study not only the
flow-concentration relation, namely, the fundamental diagram, but also its
uncertainty, namely, the variance of the fundamental diagram \textemdash an
important characteristic in the observed traffic flow data. It is shown that in
the simplified versions of the model consisting of only a few speed states,
analytic solutions for both quantities can be obtained, which facilitate the
discussion of the corresponding physical content. We also show that the effect
of vehicle size can be included into the model by introducing the maximal
congestion density $k_{max}$. By making use of this parameter, the free flow
region and congested flow region are naturally divided, and the transition is
characterized by the capacity drop at the maximum of the flow-concentration
relation. The model parameters are then adjusted to the observed traffic flow
on the I-80 Freeway Dataset in the San Francisco area from the NGSIM program,
where both the fundamental diagram and its variance are reasonably reproduced.
Despite its simplicity, we argue that the current model provides an alternative
description for the fundamental diagram and its uncertainty in the study of
traffic flow.
| no_new_dataset | 0.945147 |
1507.08137 | Gautier Marti | Gautier Marti, Philippe Donnat, Frank Nielsen, Philippe Very | HCMapper: An interactive visualization tool to compare partition-based
flat clustering extracted from pairs of dendrograms | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a new visualization tool, dubbed HCMapper, that visually helps to
compare a pair of dendrograms computed on the same dataset by displaying
multiscale partition-based layered structures. The dendrograms are obtained by
hierarchical clustering techniques whose output reflects some hypothesis on the
data and HCMapper is specifically designed to grasp at first glance both
whether the two compared hypotheses broadly agree and the data points on which
they do not concur. Leveraging juxtaposition and explicit encodings, HCMapper
focuses on two selected partitions while displaying coarser ones in context areas
for understanding multiscale structure and eventually switching the selected
partitions. HCMapper's utility is shown through the example of testing whether
the prices of credit default swap financial time series only undergo
correlation. This use case is detailed in the supplementary material as well as
experiments with code on toy-datasets for reproducible research. HCMapper is
currently released as a visualization tool on the DataGrapple time series and
clustering analysis platform at www.datagrapple.com.
| [
{
"version": "v1",
"created": "Wed, 29 Jul 2015 13:26:05 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Feb 2016 11:13:13 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Marti",
"Gautier",
""
],
[
"Donnat",
"Philippe",
""
],
[
"Nielsen",
"Frank",
""
],
[
"Very",
"Philippe",
""
]
] | TITLE: HCMapper: An interactive visualization tool to compare partition-based
flat clustering extracted from pairs of dendrograms
ABSTRACT: We describe a new visualization tool, dubbed HCMapper, that visually helps to
compare a pair of dendrograms computed on the same dataset by displaying
multiscale partition-based layered structures. The dendrograms are obtained by
hierarchical clustering techniques whose output reflects some hypothesis on the
data and HCMapper is specifically designed to grasp at first glance both
whether the two compared hypotheses broadly agree and the data points on which
they do not concur. Leveraging juxtaposition and explicit encodings, HCMapper
focuses on two selected partitions while displaying coarser ones in context areas
for understanding multiscale structure and eventually switching the selected
partitions. HCMapper's utility is shown through the example of testing whether
the prices of credit default swap financial time series only undergo
correlation. This use case is detailed in the supplementary material as well as
experiments with code on toy-datasets for reproducible research. HCMapper is
currently released as a visualization tool on the DataGrapple time series and
clustering analysis platform at www.datagrapple.com.
| no_new_dataset | 0.9455 |
1509.03789 | Bardia Yousefi | Bardia Yousefi, Chu Kiong Loo | Bio-Inspired Human Action Recognition using Hybrid Max-Product
Neuro-Fuzzy Classifier and Quantum-Behaved PSO | author's version, SWJ 2014 | null | null | null | cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Studies on computational neuroscience through functional magnetic resonance
imaging (fMRI) and the following biologically inspired system stated that human
action recognition in the mammalian brain leads to two distinct pathways in the
model, which are specialized for analysis of motion (optic flow) and form
information. Principally, we have defined novel and robust form features
applying the active basis model as the form extractor in the form pathway of the
biologically inspired model. An unbalanced synergetic neural network classifies shapes and
structures of human objects along with tuning its attention parameter by
quantum particle swarm optimization (QPSO) via initiation of Centroidal Voronoi
Tessellations. These tools are utilized and justified as strong tools for
following the biological system model in the form pathway. The final decision is
made by combining the ultimate outcomes of both pathways via fuzzy inference,
which increases the novelty of the proposed model. The combination of these two
brain pathways is done by considering each feature set in Gaussian membership
functions with the fuzzy product inference method. Two configurations have been proposed for the form
pathway: applying multi-prototype human action templates using two time
synergetic neural network for obtaining a uniform template regarding each
action, and a second scenario that abstracts human actions in four
key-frames. Experimental results showed promising accuracy performance on
different datasets (KTH and Weizmann).
| [
{
"version": "v1",
"created": "Sun, 13 Sep 2015 00:34:18 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Feb 2016 00:04:24 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Yousefi",
"Bardia",
""
],
[
"Loo",
"Chu Kiong",
""
]
] | TITLE: Bio-Inspired Human Action Recognition using Hybrid Max-Product
Neuro-Fuzzy Classifier and Quantum-Behaved PSO
ABSTRACT: Studies on computational neuroscience through functional magnetic resonance
imaging (fMRI) and the following biologically inspired system stated that human
action recognition in the mammalian brain leads to two distinct pathways in the
model, which are specialized for analysis of motion (optic flow) and form
information. Principally, we have defined novel and robust form features
applying the active basis model as the form extractor in the form pathway of the
biologically inspired model. An unbalanced synergetic neural network classifies shapes and
structures of human objects along with tuning its attention parameter by
quantum particle swarm optimization (QPSO) via initiation of Centroidal Voronoi
Tessellations. These tools are utilized and justified as strong tools for
following the biological system model in the form pathway. The final decision is
made by combining the ultimate outcomes of both pathways via fuzzy inference,
which increases the novelty of the proposed model. The combination of these two
brain pathways is done by considering each feature set in Gaussian membership
functions with the fuzzy product inference method. Two configurations have been proposed for the form
pathway: applying multi-prototype human action templates using two time
synergetic neural network for obtaining a uniform template regarding each
action, and a second scenario that abstracts human actions in four
key-frames. Experimental results showed promising accuracy performance on
different datasets (KTH and Weizmann).
| no_new_dataset | 0.952442 |
1602.04693 | Shahid Alam | Shahid Alam, Zhengyang Qu, Ryan Riley, Yan Chen, Vaibhav Rastogi | DroidNative: Semantic-Based Detection of Android Native Code Malware | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | According to the Symantec and F-Secure threat reports, mobile malware
development in 2013 and 2014 has continued to focus almost exclusively (~99%) on
the Android platform. Malware writers are applying stealthy mutations
(obfuscations) to create malware variants, thwarting detection by
signature-based detectors. In addition, the plethora of more sophisticated detectors
making use of static analysis techniques to detect such variants operate only
at the bytecode level, meaning that malware embedded in native code goes
undetected. A recent study shows that 86% of the most popular Android
applications contain native code, making this a plausible threat. This paper
proposes DroidNative, an Android malware detector that uses specific control
flow patterns to reduce the effect of obfuscations, provides automation and
platform independence, and as far as we know is the first system that operates
at the Android native code level, allowing it to detect malware embedded in
both native code and bytecode. When tested with traditional malware variants it
achieves a detection rate (DR) of 99.48%, compared to academic and commercial
tools' DRs that range from 8.33% -- 93.22%. When tested with a dataset of 2240
samples DroidNative achieves a DR of 99.16%, a false positive rate of 0.4% and
an average detection time of 26.87 sec/sample.
| [
{
"version": "v1",
"created": "Mon, 15 Feb 2016 14:26:20 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Feb 2016 07:37:51 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Alam",
"Shahid",
""
],
[
"Qu",
"Zhengyang",
""
],
[
"Riley",
"Ryan",
""
],
[
"Chen",
"Yan",
""
],
[
"Rastogi",
"Vaibhav",
""
]
] | TITLE: DroidNative: Semantic-Based Detection of Android Native Code Malware
ABSTRACT: According to the Symantec and F-Secure threat reports, mobile malware
development in 2013 and 2014 has continued to focus almost exclusively (~99%) on
the Android platform. Malware writers are applying stealthy mutations
(obfuscations) to create malware variants, thwarting detection by
signature-based detectors. In addition, the plethora of more sophisticated detectors
making use of static analysis techniques to detect such variants operate only
at the bytecode level, meaning that malware embedded in native code goes
undetected. A recent study shows that 86% of the most popular Android
applications contain native code, making this a plausible threat. This paper
proposes DroidNative, an Android malware detector that uses specific control
flow patterns to reduce the effect of obfuscations, provides automation and
platform independence, and as far as we know is the first system that operates
at the Android native code level, allowing it to detect malware embedded in
both native code and bytecode. When tested with traditional malware variants it
achieves a detection rate (DR) of 99.48%, compared to academic and commercial
tools' DRs that range from 8.33% -- 93.22%. When tested with a dataset of 2240
samples DroidNative achieves a DR of 99.16%, a false positive rate of 0.4% and
an average detection time of 26.87 sec/sample.
| no_new_dataset | 0.926802 |
1602.04844 | Leman Akoglu | Emaad A. Manzoor, Sadegh Momeni, Venkat N. Venkatakrishnan, Leman
Akoglu | Fast Memory-efficient Anomaly Detection in Streaming Heterogeneous
Graphs | 10 pages, 2 tables, 14 figures | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a stream of heterogeneous graphs containing different types of nodes
and edges, how can we spot anomalous ones in real-time while consuming bounded
memory? This problem is motivated by and generalizes from its application in
security to host-level advanced persistent threat (APT) detection. We propose
StreamSpot, a clustering-based anomaly detection approach that addresses
challenges in two key fronts: (1) heterogeneity, and (2) streaming nature. We
introduce a new similarity function for heterogeneous graphs that compares two
graphs based on their relative frequency of local substructures, represented as
short strings. This function lends itself to a vector representation of a
graph, which is (a) fast to compute, and (b) amenable to a sketched version
with bounded size that preserves similarity. StreamSpot exhibits desirable
properties that a streaming application requires---it is (i) fully-streaming;
processing the stream one edge at a time as it arrives, (ii) memory-efficient;
requiring constant space for the sketches and the clustering, (iii) fast;
taking constant time to update the graph sketches and the cluster summaries
that can process over 100K edges per second, and (iv) online; scoring and
flagging anomalies in real time. Experiments on datasets containing simulated
system-call flow graphs from normal browser activity and various attack
scenarios (ground truth) show that our proposed StreamSpot is high-performance;
achieving above 95% detection accuracy with small delay, as well as competitive
time and memory usage.
| [
{
"version": "v1",
"created": "Mon, 15 Feb 2016 21:26:34 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Feb 2016 14:08:12 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Manzoor",
"Emaad A.",
""
],
[
"Momeni",
"Sadegh",
""
],
[
"Venkatakrishnan",
"Venkat N.",
""
],
[
"Akoglu",
"Leman",
""
]
] | TITLE: Fast Memory-efficient Anomaly Detection in Streaming Heterogeneous
Graphs
ABSTRACT: Given a stream of heterogeneous graphs containing different types of nodes
and edges, how can we spot anomalous ones in real-time while consuming bounded
memory? This problem is motivated by and generalizes from its application in
security to host-level advanced persistent threat (APT) detection. We propose
StreamSpot, a clustering-based anomaly detection approach that addresses
challenges in two key fronts: (1) heterogeneity, and (2) streaming nature. We
introduce a new similarity function for heterogeneous graphs that compares two
graphs based on their relative frequency of local substructures, represented as
short strings. This function lends itself to a vector representation of a
graph, which is (a) fast to compute, and (b) amenable to a sketched version
with bounded size that preserves similarity. StreamSpot exhibits desirable
properties that a streaming application requires---it is (i) fully-streaming;
processing the stream one edge at a time as it arrives, (ii) memory-efficient;
requiring constant space for the sketches and the clustering, (iii) fast;
taking constant time to update the graph sketches and the cluster summaries
that can process over 100K edges per second, and (iv) online; scoring and
flagging anomalies in real time. Experiments on datasets containing simulated
system-call flow graphs from normal browser activity and various attack
scenarios (ground truth) show that our proposed StreamSpot is high-performance;
achieving above 95% detection accuracy with small delay, as well as competitive
time and memory usage.
| no_new_dataset | 0.950088 |
1602.06397 | Will Ball | W.T. Ball, J.D. Haigh, E.V. Rozanov, A. Kuchar, T. Sukhodolov, F.
Tummon, A.V. Shapiro, W. Schmutz | High solar cycle spectral variations inconsistent with stratospheric
ozone observations | This is the original version submitted to Nature Geoscience in July
2015 with the title "Ozone observations reveal lower solar cycle spectral
variations", this has changed to the one given above. 4 Figures, Nature
Geoscience, 2016,
http://www.nature.com/ngeo/journal/vaop/ncurrent/full/ngeo2640.html | null | 10.1038/ngeo2640 | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Some of the natural variability in climate is understood to come from changes
in the Sun. A key route whereby the Sun may influence surface climate is
initiated in the tropical stratosphere by the absorption of solar ultraviolet
(UV) radiation by ozone, leading to a modification of the temperature and wind
structures and consequently to the surface through changes in wave propagation
and circulation. While changes in total, spectrally-integrated, solar
irradiance lead to small variations in global mean surface temperature, the
`top-down' UV effect preferentially influences regional scales at
mid-to-high latitudes with, in particular, a solar signal noted in the North
Atlantic Oscillation (NAO). The amplitude of the UV variability is fundamental
in determining the magnitude of the climate response but understanding of the
UV variations has been challenged recently by measurements from the SOlar
Radiation and Climate Experiment (SORCE) satellite, which show UV solar cycle
changes up to 10 times larger than previously thought. Indeed, climate models
using these larger UV variations show a much greater response, similar to NAO
observations. Here we present estimates of the ozone solar cycle response using
a chemistry-climate model (CCM) in which the effects of transport are
constrained by observations. Thus the photolytic response to different spectral
solar irradiance (SSI) datasets can be isolated. Comparison of the results with
the solar signal in ozone extracted from observational datasets yields
significantly discriminable responses. According to our evaluation the SORCE UV
dataset is not consistent with the observed ozone response whereas the smaller
variations suggested by earlier satellite datasets, and by UV data from
empirical solar models, are in closer agreement with the measured stratospheric
variations. Determining the most appropriate SSI variability to apply in
models...
| [
{
"version": "v1",
"created": "Sat, 20 Feb 2016 11:22:44 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Ball",
"W. T.",
""
],
[
"Haigh",
"J. D.",
""
],
[
"Rozanov",
"E. V.",
""
],
[
"Kuchar",
"A.",
""
],
[
"Sukhodolov",
"T.",
""
],
[
"Tummon",
"F.",
""
],
[
"Shapiro",
"A. V.",
""
],
[
"Schmutz",
"W.",
""
]
] | TITLE: High solar cycle spectral variations inconsistent with stratospheric
ozone observations
ABSTRACT: Some of the natural variability in climate is understood to come from changes
in the Sun. A key route whereby the Sun may influence surface climate is
initiated in the tropical stratosphere by the absorption of solar ultraviolet
(UV) radiation by ozone, leading to a modification of the temperature and wind
structures and consequently to the surface through changes in wave propagation
and circulation. While changes in total, spectrally-integrated, solar
irradiance lead to small variations in global mean surface temperature, the
`top-down' UV effect preferentially influences regional scales at
mid-to-high latitudes with, in particular, a solar signal noted in the North
Atlantic Oscillation (NAO). The amplitude of the UV variability is fundamental
in determining the magnitude of the climate response but understanding of the
UV variations has been challenged recently by measurements from the SOlar
Radiation and Climate Experiment (SORCE) satellite, which show UV solar cycle
changes up to 10 times larger than previously thought. Indeed, climate models
using these larger UV variations show a much greater response, similar to NAO
observations. Here we present estimates of the ozone solar cycle response using
a chemistry-climate model (CCM) in which the effects of transport are
constrained by observations. Thus the photolytic response to different spectral
solar irradiance (SSI) datasets can be isolated. Comparison of the results with
the solar signal in ozone extracted from observational datasets yields
significantly discriminable responses. According to our evaluation the SORCE UV
dataset is not consistent with the observed ozone response whereas the smaller
variations suggested by earlier satellite datasets, and by UV data from
empirical solar models, are in closer agreement with the measured stratospheric
variations. Determining the most appropriate SSI variability to apply in
models...
| no_new_dataset | 0.943608 |
1602.06431 | Rodrigo Alves | Rodrigo A S Alves, Renato Assun\c{c}\~ao and Pedro O S Vaz de Melo | Burstiness Scale: a highly parsimonious model for characterizing random
series of events | null | null | null | null | stat.ML cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem to accurately and parsimoniously characterize random series of
events (RSEs) present in the Web, such as e-mail conversations or Twitter
hashtags, is not trivial. Reports found in the literature reveal two apparently
conflicting visions of how RSEs should be modeled. On one side, the
Poissonian processes, in which consecutive events follow each other at a
relatively regular time and should not be correlated. On the other side, the
self-exciting processes, which are able to generate bursts of correlated events
and periods of inactivity. The existence of many and sometimes conflicting
approaches to model RSEs is a consequence of the unpredictability of the
aggregated dynamics of our individual and routine activities, which sometimes
show simple patterns, but sometimes results in irregular rising and falling
trends. In this paper we propose a highly parsimonious way to characterize
general RSEs, namely the Burstiness Scale (BuSca) model. BuSca views each RSE
as a mix of two independent processes: a Poissonian and a self-exciting one. Here
we describe a fast method to extract the two parameters of BuSca that,
together, give the burstiness scale, which represents how much of the RSE is
due to bursty and viral effects. We validated our method in eight diverse and
large datasets containing real random series of events seen in Twitter, Yelp,
e-mail conversations, Digg, and online forums. Results showed that, even using
only two parameters, BuSca is able to accurately describe RSEs seen in these
diverse systems, which can benefit many applications.
| [
{
"version": "v1",
"created": "Sat, 20 Feb 2016 16:47:10 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Alves",
"Rodrigo A S",
""
],
[
"Assunção",
"Renato",
""
],
[
"de Melo",
"Pedro O S Vaz",
""
]
] | TITLE: Burstiness Scale: a highly parsimonious model for characterizing random
series of events
ABSTRACT: The problem of accurately and parsimoniously characterizing random series of
events (RSEs) present in the Web, such as e-mail conversations or Twitter
hashtags, is not trivial. Reports found in the literature reveal two apparently
conflicting visions of how RSEs should be modeled. On one side, the
Poissonian processes, in which consecutive events follow each other at a
relatively regular time and should not be correlated. On the other side, the
self-exciting processes, which are able to generate bursts of correlated events
and periods of inactivity. The existence of many and sometimes conflicting
approaches to model RSEs is a consequence of the unpredictability of the
aggregated dynamics of our individual and routine activities, which sometimes
show simple patterns, but sometimes results in irregular rising and falling
trends. In this paper we propose a highly parsimonious way to characterize
general RSEs, namely the Burstiness Scale (BuSca) model. BuSca views each RSE
as a mix of two independent processes: a Poissonian and a self-exciting one. Here
we describe a fast method to extract the two parameters of BuSca that,
together, give the burstiness scale, which represents how much of the RSE is
due to bursty and viral effects. We validated our method in eight diverse and
large datasets containing real random series of events seen in Twitter, Yelp,
e-mail conversations, Digg, and online forums. Results showed that, even using
only two parameters, BuSca is able to accurately describe RSEs seen in these
diverse systems, which can benefit many applications.
| no_new_dataset | 0.945147 |
1602.06539 | Liangcheng Liu | Liangchen Liu and Arnold Wiliem and Shaokang Chen and Kun Zhao and
Brian C. Lovell | Determining the best attributes for surveillance video keywords
generation | 7 pages, ISBA 2016. arXiv admin note: text overlap with
arXiv:1602.01940 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic video keyword generation is one of the key ingredients in reducing
the burden of security officers in analyzing surveillance videos. Keywords or
attributes are generally chosen manually based on expert knowledge of
surveillance. Most existing works primarily aim at either supervised learning
approaches relying on extensive manual labelling or hierarchical probabilistic
models that assume the features are extracted using the bag-of-words approach;
thus limiting the utilization of the other features. To address this, we turn
our attention to automatic attribute discovery approaches. However, it is not
clear which automatic discovery approach can discover the most meaningful
attributes. Furthermore, little research has been done on how to compare and
choose the best automatic attribute discovery methods. In this paper, we
propose a novel approach, based on the shared structure exhibited amongst
meaningful attributes, that enables us to compare between different automatic
attribute discovery approaches. We then validate our approach by comparing
various attribute discovery methods such as PiCoDeS on two attribute datasets.
The evaluation shows that our approach is able to select the automatic
discovery approach that discovers the most meaningful attributes. We then
employ the best discovery approach to generate keywords for videos recorded
from a surveillance system. This work shows it is possible to massively reduce
the amount of manual work in generating video keywords without limiting
ourselves to a particular video feature descriptor.
| [
{
"version": "v1",
"created": "Sun, 21 Feb 2016 15:08:51 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Liu",
"Liangchen",
""
],
[
"Wiliem",
"Arnold",
""
],
[
"Chen",
"Shaokang",
""
],
[
"Zhao",
"Kun",
""
],
[
"Lovell",
"Brian C.",
""
]
] | TITLE: Determining the best attributes for surveillance video keywords
generation
ABSTRACT: Automatic video keyword generation is one of the key ingredients in reducing
the burden of security officers in analyzing surveillance videos. Keywords or
attributes are generally chosen manually based on expert knowledge of
surveillance. Most existing works primarily aim at either supervised learning
approaches relying on extensive manual labelling or hierarchical probabilistic
models that assume the features are extracted using the bag-of-words approach;
thus limiting the utilization of the other features. To address this, we turn
our attention to automatic attribute discovery approaches. However, it is not
clear which automatic discovery approach can discover the most meaningful
attributes. Furthermore, little research has been done on how to compare and
choose the best automatic attribute discovery methods. In this paper, we
propose a novel approach, based on the shared structure exhibited amongst
meaningful attributes, that enables us to compare between different automatic
attribute discovery approaches. We then validate our approach by comparing
various attribute discovery methods such as PiCoDeS on two attribute datasets.
The evaluation shows that our approach is able to select the automatic
discovery approach that discovers the most meaningful attributes. We then
employ the best discovery approach to generate keywords for videos recorded
from a surveillance system. This work shows it is possible to massively reduce
the amount of manual work in generating video keywords without limiting
ourselves to a particular video feature descriptor.
| no_new_dataset | 0.948106 |
1602.06564 | Jiangye Yuan | Jiangye Yuan | Automatic Building Extraction in Aerial Scenes Using Convolutional
Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic building extraction from aerial and satellite imagery is highly
challenging due to extremely large variations of building appearances. To
attack this problem, we design a convolutional network with a final stage that
integrates activations from multiple preceding stages for pixel-wise
prediction, and introduce the signed distance function of building boundaries
as the output representation, which has an enhanced representation power. We
leverage abundant building footprint data available from geographic information
systems (GIS) to compile training data. The trained network achieves superior
performance on datasets that are significantly larger and more complex than
those used in prior work, demonstrating that the proposed method provides a
promising and scalable solution for automating this labor-intensive task.
| [
{
"version": "v1",
"created": "Sun, 21 Feb 2016 18:41:04 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Yuan",
"Jiangye",
""
]
] | TITLE: Automatic Building Extraction in Aerial Scenes Using Convolutional
Networks
ABSTRACT: Automatic building extraction from aerial and satellite imagery is highly
challenging due to extremely large variations of building appearances. To
attack this problem, we design a convolutional network with a final stage that
integrates activations from multiple preceding stages for pixel-wise
prediction, and introduce the signed distance function of building boundaries
as the output representation, which has an enhanced representation power. We
leverage abundant building footprint data available from geographic information
systems (GIS) to compile training data. The trained network achieves superior
performance on datasets that are significantly larger and more complex than
those used in prior work, demonstrating that the proposed method provides a
promising and scalable solution for automating this labor-intensive task.
| no_new_dataset | 0.956309 |
1602.06566 | Mohammad Islam | Dipayan Maiti and Mohammad Raihanul Islam and Scotland Leman and Naren
Ramakrishnan | Interactive Storytelling over Document Collections | This paper has been submitted to a conference for review | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Storytelling algorithms aim to 'connect the dots' between disparate documents
by linking starting and ending documents through a series of intermediate
documents. Existing storytelling algorithms are based on notions of coherence
and connectivity, and thus the primary way by which users can steer the story
construction is via design of suitable similarity functions. We present an
alternative approach to storytelling wherein the user can interactively and
iteratively provide 'must use' constraints to preferentially support the
construction of some stories over others. The three innovations in our approach
are distance measures based on (inferred) topic distributions, the use of
constraints to define sets of linear inequalities over paths, and the
introduction of slack and surplus variables to condition the topic distribution
to preferentially emphasize desired terms over others. We describe experimental
results to illustrate the effectiveness of our interactive storytelling
approach over multiple text datasets.
| [
{
"version": "v1",
"created": "Sun, 21 Feb 2016 18:46:35 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Maiti",
"Dipayan",
""
],
[
"Islam",
"Mohammad Raihanul",
""
],
[
"Leman",
"Scotland",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Interactive Storytelling over Document Collections
ABSTRACT: Storytelling algorithms aim to 'connect the dots' between disparate documents
by linking starting and ending documents through a series of intermediate
documents. Existing storytelling algorithms are based on notions of coherence
and connectivity, and thus the primary way by which users can steer the story
construction is via design of suitable similarity functions. We present an
alternative approach to storytelling wherein the user can interactively and
iteratively provide 'must use' constraints to preferentially support the
construction of some stories over others. The three innovations in our approach
are distance measures based on (inferred) topic distributions, the use of
constraints to define sets of linear inequalities over paths, and the
introduction of slack and surplus variables to condition the topic distribution
to preferentially emphasize desired terms over others. We describe experimental
results to illustrate the effectiveness of our interactive storytelling
approach over multiple text datasets.
| no_new_dataset | 0.950778 |
1602.06643 | Shaunak Bopardikar | Alberto Speranzon and Shaunak D. Bopardikar | An Algebraic Topological Approach to Privacy: Numerical and Categorical
Data | null | null | null | null | cs.DB cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we cast the classic problem of achieving k-anonymity for a
given database as a problem in algebraic topology. Using techniques from this
field of mathematics, we propose a framework for k-anonymity that brings new
insights and algorithms to anonymize a database. We begin by addressing the
simpler case when the data lies in a metric space. This case is instrumental to
introduce the main ideas and notation. Specifically, by mapping a database to
the Euclidean space and by considering the distance between datapoints, we
introduce a simplicial representation of the data and show how concepts from
algebraic topology, such as the nerve complex and persistent homology, can be
applied to efficiently obtain the entire spectrum of k-anonymity of the
database for various values of k and levels of generalization. For this
representation, we provide an analytic characterization of conditions under
which a given representation of the dataset is k-anonymous. We introduce a
weighted barcode diagram which, in this context, becomes a computational tool
to tradeoff data anonymity with data loss expressed as level of generalization.
Some simulations results are used to illustrate the main idea of the paper. We
conclude the paper with a discussion on how to extend this method to address
the general case of a mix of categorical and metric data.
| [
{
"version": "v1",
"created": "Mon, 22 Feb 2016 04:24:23 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Speranzon",
"Alberto",
""
],
[
"Bopardikar",
"Shaunak D.",
""
]
] | TITLE: An Algebraic Topological Approach to Privacy: Numerical and Categorical
Data
ABSTRACT: In this paper, we cast the classic problem of achieving k-anonymity for a
given database as a problem in algebraic topology. Using techniques from this
field of mathematics, we propose a framework for k-anonymity that brings new
insights and algorithms to anonymize a database. We begin by addressing the
simpler case when the data lies in a metric space. This case is instrumental to
introduce the main ideas and notation. Specifically, by mapping a database to
the Euclidean space and by considering the distance between datapoints, we
introduce a simplicial representation of the data and show how concepts from
algebraic topology, such as the nerve complex and persistent homology, can be
applied to efficiently obtain the entire spectrum of k-anonymity of the
database for various values of k and levels of generalization. For this
representation, we provide an analytic characterization of conditions under
which a given representation of the dataset is k-anonymous. We introduce a
weighted barcode diagram which, in this context, becomes a computational tool
to tradeoff data anonymity with data loss expressed as level of generalization.
Some simulations results are used to illustrate the main idea of the paper. We
conclude the paper with a discussion on how to extend this method to address
the general case of a mix of categorical and metric data.
| no_new_dataset | 0.94428 |
1602.06822 | William Whitney | William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum | Understanding Visual Concepts with Continuation Learning | Under review as a workshop paper for ICLR 2016 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a neural network architecture and a learning algorithm to
produce factorized symbolic representations. We propose to learn these concepts
by observing consecutive frames, letting all the components of the hidden
representation except a small discrete set (gating units) be predicted from the
previous frame, and let the factors of variation in the next frame be
represented entirely by these discrete gated units (corresponding to symbolic
representations). We demonstrate the efficacy of our approach on datasets of
faces undergoing 3D transformations and Atari 2600 games.
| [
{
"version": "v1",
"created": "Mon, 22 Feb 2016 15:38:59 GMT"
}
] | 2016-02-23T00:00:00 | [
[
"Whitney",
"William F.",
""
],
[
"Chang",
"Michael",
""
],
[
"Kulkarni",
"Tejas",
""
],
[
"Tenenbaum",
"Joshua B.",
""
]
] | TITLE: Understanding Visual Concepts with Continuation Learning
ABSTRACT: We introduce a neural network architecture and a learning algorithm to
produce factorized symbolic representations. We propose to learn these concepts
by observing consecutive frames, letting all the components of the hidden
representation except a small discrete set (gating units) be predicted from the
previous frame, and let the factors of variation in the next frame be
represented entirely by these discrete gated units (corresponding to symbolic
representations). We demonstrate the efficacy of our approach on datasets of
faces undergoing 3D transformations and Atari 2600 games.
| no_new_dataset | 0.948585 |
1411.4952 | Piotr Doll\'ar | Hao Fang and Saurabh Gupta and Forrest Iandola and Rupesh Srivastava
and Li Deng and Piotr Doll\'ar and Jianfeng Gao and Xiaodong He and Margaret
Mitchell and John C. Platt and C. Lawrence Zitnick and Geoffrey Zweig | From Captions to Visual Concepts and Back | version corresponding to CVPR15 paper | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel approach for automatically generating image
descriptions: visual detectors, language models, and multimodal similarity
models learnt directly from a dataset of image captions. We use multiple
instance learning to train visual detectors for words that commonly occur in
captions, including many different parts of speech such as nouns, verbs, and
adjectives. The word detector outputs serve as conditional inputs to a
maximum-entropy language model. The language model learns from a set of over
400,000 image descriptions to capture the statistics of word usage. We capture
global semantics by re-ranking caption candidates using sentence-level features
and a deep multimodal similarity model. Our system is state-of-the-art on the
official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When
human judges compare the system captions to ones written by other people on our
held-out test set, the system captions have equal or better quality 34% of the
time.
| [
{
"version": "v1",
"created": "Tue, 18 Nov 2014 18:23:45 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Nov 2014 20:19:56 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Apr 2015 18:05:07 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Fang",
"Hao",
""
],
[
"Gupta",
"Saurabh",
""
],
[
"Iandola",
"Forrest",
""
],
[
"Srivastava",
"Rupesh",
""
],
[
"Deng",
"Li",
""
],
[
"Dollár",
"Piotr",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"He",
"Xiaodong",
""
],
[
"Mitchell",
"Margaret",
""
],
[
"Platt",
"John C.",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Zweig",
"Geoffrey",
""
]
] | TITLE: From Captions to Visual Concepts and Back
ABSTRACT: This paper presents a novel approach for automatically generating image
descriptions: visual detectors, language models, and multimodal similarity
models learnt directly from a dataset of image captions. We use multiple
instance learning to train visual detectors for words that commonly occur in
captions, including many different parts of speech such as nouns, verbs, and
adjectives. The word detector outputs serve as conditional inputs to a
maximum-entropy language model. The language model learns from a set of over
400,000 image descriptions to capture the statistics of word usage. We capture
global semantics by re-ranking caption candidates using sentence-level features
and a deep multimodal similarity model. Our system is state-of-the-art on the
official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When
human judges compare the system captions to ones written by other people on our
held-out test set, the system captions have equal or better quality 34% of the
time.
| no_new_dataset | 0.945851 |
1506.05702 | Diego Amancio | Diego R. Amancio | Comparing the writing style of real and artificial papers | To appear in Scientometrics (2015) | Scientometrics 105 (3), (2015) pp. 1763-1779 | 10.1007/s11192-015-1637-z | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have witnessed the increase of competition in science. While
promoting the quality of research in many cases, an intense competition among
scientists can also trigger unethical scientific behaviors. To increase the
total number of published papers, some authors even resort to software tools
that are able to produce grammatical, but meaningless scientific manuscripts.
Because automatically generated papers can be misunderstood as real papers, it
becomes of paramount importance to develop means to identify these scientific
frauds. In this paper, I devise a methodology to distinguish real manuscripts
from those generated with SCIGen, an automatic paper generator. Upon modeling
texts as complex networks (CN), it was possible to discriminate real from fake
papers with at least 89\% of accuracy. A systematic analysis of features
relevance revealed that the accessibility and betweenness were useful in
particular cases, even though the relevance depended upon the dataset. The
successful application of the methods described here show, as a proof of
principle, that network features can be used to identify scientific gibberish
papers. In addition, the CN-based approach can be combined in a straightforward
fashion with traditional statistical language processing methods to improve the
performance in identifying artificially generated papers.
| [
{
"version": "v1",
"created": "Thu, 18 Jun 2015 14:46:15 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jul 2015 15:50:56 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Amancio",
"Diego R.",
""
]
] | TITLE: Comparing the writing style of real and artificial papers
ABSTRACT: Recent years have witnessed the increase of competition in science. While
promoting the quality of research in many cases, an intense competition among
scientists can also trigger unethical scientific behaviors. To increase the
total number of published papers, some authors even resort to software tools
that are able to produce grammatical, but meaningless scientific manuscripts.
Because automatically generated papers can be misunderstood as real papers, it
becomes of paramount importance to develop means to identify these scientific
frauds. In this paper, I devise a methodology to distinguish real manuscripts
from those generated with SCIGen, an automatic paper generator. Upon modeling
texts as complex networks (CN), it was possible to discriminate real from fake
papers with at least 89\% of accuracy. A systematic analysis of features
relevance revealed that the accessibility and betweenness were useful in
particular cases, even though the relevance depended upon the dataset. The
successful application of the methods described here show, as a proof of
principle, that network features can be used to identify scientific gibberish
papers. In addition, the CN-based approach can be combined in a straightforward
fashion with traditional statistical language processing methods to improve the
performance in identifying artificially generated papers.
| no_new_dataset | 0.945851 |
1506.05865 | Baotian Hu | Baotian Hu, Qingcai Chen, Fangze Zhu | LCSTS: A Large Scale Chinese Short Text Summarization Dataset | Recently, we received feedbacks from Yuya Taguchi from NAIST in Japan
and Qian Chen from USTC of China, that the results in the EMNLP2015 version
seem to be underrated. So we carefully checked our results and find out that
we made a mistake while using the standard ROUGE. Then we re-evaluate all
methods in the paper and get corrected results listed in Table 2 of this
version | null | null | null | cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic text summarization is widely regarded as the highly difficult
problem, partially because of the lack of large text summarization data set.
Due to the great challenge of constructing the large scale summaries for full
text, in this paper, we introduce a large corpus of Chinese short text
summarization dataset constructed from the Chinese microblogging website Sina
Weibo, which is released to the public
{http://icrc.hitsz.edu.cn/Article/show/139.html}. This corpus consists of over
2 million real Chinese short texts with short summaries given by the author of
each text. We also manually tagged the relevance of 10,666 short summaries with
their corresponding short texts. Based on the corpus, we introduce recurrent
neural network for the summary generation and achieve promising results, which
not only shows the usefulness of the proposed corpus for short text
summarization research, but also provides a baseline for further research on
this topic.
| [
{
"version": "v1",
"created": "Fri, 19 Jun 2015 02:40:42 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Jun 2015 14:33:39 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Aug 2015 02:43:38 GMT"
},
{
"version": "v4",
"created": "Fri, 19 Feb 2016 16:35:35 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Hu",
"Baotian",
""
],
[
"Chen",
"Qingcai",
""
],
[
"Zhu",
"Fangze",
""
]
] | TITLE: LCSTS: A Large Scale Chinese Short Text Summarization Dataset
ABSTRACT: Automatic text summarization is widely regarded as the highly difficult
problem, partially because of the lack of large text summarization data set.
Due to the great challenge of constructing the large scale summaries for full
text, in this paper, we introduce a large corpus of Chinese short text
summarization dataset constructed from the Chinese microblogging website Sina
Weibo, which is released to the public
{http://icrc.hitsz.edu.cn/Article/show/139.html}. This corpus consists of over
2 million real Chinese short texts with short summaries given by the author of
each text. We also manually tagged the relevance of 10,666 short summaries with
their corresponding short texts. Based on the corpus, we introduce recurrent
neural network for the summary generation and achieve promising results, which
not only shows the usefulness of the proposed corpus for short text
summarization research, but also provides a baseline for further research on
this topic.
| new_dataset | 0.963022 |
1511.04750 | Nikos Bikakis | Nikos Bikakis, George Papastefanatos, Melina Skourla, Timos Sellis | A Hierarchical Aggregation Framework for Efficient Multilevel Visual
Exploration and Analysis | Semantic Web Journal 2016 (to appear) | null | null | null | cs.HC cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data exploration and visualization systems are of great importance in the Big
Data era, in which the volume and heterogeneity of available information make
it difficult for humans to manually explore and analyse data. Most traditional
systems operate in an offline way, limited to accessing preprocessed (static)
sets of data. They also restrict themselves to dealing with small dataset
sizes, which can be easily handled with conventional techniques. However, the
Big Data era has realized the availability of a great amount and variety of big
datasets that are dynamic in nature; most of them offer API or query endpoints
for online access, or the data is received in a stream fashion. Therefore,
modern systems must address the challenge of on-the-fly scalable visualizations
over large dynamic sets of data, offering efficient exploration techniques, as
well as mechanisms for information abstraction and summarization. In this work,
we present a generic model for personalized multilevel exploration and analysis
over large dynamic sets of numeric and temporal data. Our model is built on top
of a lightweight tree-based structure which can be efficiently constructed
on-the-fly for a given set of data. This tree structure aggregates input
objects into a hierarchical multiscale model. Considering different exploration
scenarios over large datasets, the proposed model enables efficient multilevel
exploration, offering incremental construction and prefetching via user
interaction, and dynamic adaptation of the hierarchies based on user
preferences. A thorough theoretical analysis is presented, illustrating the
efficiency of the proposed model. The proposed model is realized in a web-based
prototype tool, called SynopsViz that offers multilevel visual exploration and
analysis over Linked Data datasets.
| [
{
"version": "v1",
"created": "Sun, 15 Nov 2015 18:23:27 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Dec 2015 12:51:23 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Jan 2016 18:08:18 GMT"
},
{
"version": "v4",
"created": "Fri, 19 Feb 2016 14:33:45 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Bikakis",
"Nikos",
""
],
[
"Papastefanatos",
"George",
""
],
[
"Skourla",
"Melina",
""
],
[
"Sellis",
"Timos",
""
]
] | TITLE: A Hierarchical Aggregation Framework for Efficient Multilevel Visual
Exploration and Analysis
ABSTRACT: Data exploration and visualization systems are of great importance in the Big
Data era, in which the volume and heterogeneity of available information make
it difficult for humans to manually explore and analyse data. Most traditional
systems operate in an offline way, limited to accessing preprocessed (static)
sets of data. They also restrict themselves to dealing with small dataset
sizes, which can be easily handled with conventional techniques. However, the
Big Data era has realized the availability of a great amount and variety of big
datasets that are dynamic in nature; most of them offer API or query endpoints
for online access, or the data is received in a stream fashion. Therefore,
modern systems must address the challenge of on-the-fly scalable visualizations
over large dynamic sets of data, offering efficient exploration techniques, as
well as mechanisms for information abstraction and summarization. In this work,
we present a generic model for personalized multilevel exploration and analysis
over large dynamic sets of numeric and temporal data. Our model is built on top
of a lightweight tree-based structure which can be efficiently constructed
on-the-fly for a given set of data. This tree structure aggregates input
objects into a hierarchical multiscale model. Considering different exploration
scenarios over large datasets, the proposed model enables efficient multilevel
exploration, offering incremental construction and prefetching via user
interaction, and dynamic adaptation of the hierarchies based on user
preferences. A thorough theoretical analysis is presented, illustrating the
efficiency of the proposed model. The proposed model is realized in a web-based
prototype tool, called SynopsViz that offers multilevel visual exploration and
analysis over Linked Data datasets.
| no_new_dataset | 0.950088 |
1511.06422 | Dmytro Mishkin | Dmytro Mishkin, Jiri Matas | All you need is a good init | Published as a conference paper at ICLR 2016 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Layer-sequential unit-variance (LSUV) initialization - a simple method for
weight initialization for deep net learning - is proposed. The method consists
of the two steps. First, pre-initialize weights of each convolution or
inner-product layer with orthonormal matrices. Second, proceed from the first
to the final layer, normalizing the variance of the output of each layer to be
equal to one.
Experiment with different activation functions (maxout, ReLU-family, tanh)
show that the proposed initialization leads to learning of very deep nets that
(i) produces networks with test accuracy better or equal to standard methods
and (ii) is at least as fast as the complex schemes proposed specifically for
very deep nets such as FitNets (Romero et al. (2015)) and Highway (Srivastava
et al. (2015)).
Performance is evaluated on GoogLeNet, CaffeNet, FitNets and Residual nets
and the state-of-the-art, or very close to it, is achieved on the MNIST,
CIFAR-10/100 and ImageNet datasets.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 22:19:15 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Dec 2015 14:38:33 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Jan 2016 18:46:03 GMT"
},
{
"version": "v4",
"created": "Wed, 13 Jan 2016 17:47:07 GMT"
},
{
"version": "v5",
"created": "Mon, 18 Jan 2016 20:07:09 GMT"
},
{
"version": "v6",
"created": "Wed, 27 Jan 2016 15:10:19 GMT"
},
{
"version": "v7",
"created": "Fri, 19 Feb 2016 14:37:10 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Mishkin",
"Dmytro",
""
],
[
"Matas",
"Jiri",
""
]
] | TITLE: All you need is a good init
ABSTRACT: Layer-sequential unit-variance (LSUV) initialization - a simple method for
weight initialization for deep net learning - is proposed. The method consists
of the two steps. First, pre-initialize weights of each convolution or
inner-product layer with orthonormal matrices. Second, proceed from the first
to the final layer, normalizing the variance of the output of each layer to be
equal to one.
Experiment with different activation functions (maxout, ReLU-family, tanh)
show that the proposed initialization leads to learning of very deep nets that
(i) produces networks with test accuracy better or equal to standard methods
and (ii) is at least as fast as the complex schemes proposed specifically for
very deep nets such as FitNets (Romero et al. (2015)) and Highway (Srivastava
et al. (2015)).
Performance is evaluated on GoogLeNet, CaffeNet, FitNets and Residual nets
and the state-of-the-art, or very close to it, is achieved on the MNIST,
CIFAR-10/100 and ImageNet datasets.
| no_new_dataset | 0.942665 |
1511.06830 | Xuan Dong | Xuan Dong, Boyan Bonev, Weixin Li, Weichao Qiu, Xianjie Chen, Alan
Yuille | Ground-truth dataset and baseline evaluations for image base-detail
separation algorithms | This paper has been withdrawn by the author due to some un-proper
examples | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Base-detail separation is a fundamental computer vision problem consisting of
modeling a smooth base layer with the coarse structures, and a detail layer
containing the texture-like structures. One of the challenges of estimating the
base is to preserve sharp boundaries between objects or parts to avoid halo
artifacts. Many methods have been proposed to address this problem, but there
is no ground-truth dataset of real images for quantitative evaluation. We
proposed a procedure to construct such a dataset, and provide two datasets:
Pascal Base-Detail and Fashionista Base-Detail, containing 1000 and 250 images,
respectively. Our assumption is that the base is piecewise smooth and we label
the appearance of each piece by a polynomial model. The pieces are objects and
parts of objects, obtained from human annotations. Finally, we proposed a way
to evaluate methods with our base-detail ground-truth and we compared the
performances of seven state-of-the-art algorithms.
| [
{
"version": "v1",
"created": "Sat, 21 Nov 2015 04:04:39 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Feb 2016 22:59:13 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Dong",
"Xuan",
""
],
[
"Bonev",
"Boyan",
""
],
[
"Li",
"Weixin",
""
],
[
"Qiu",
"Weichao",
""
],
[
"Chen",
"Xianjie",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Ground-truth dataset and baseline evaluations for image base-detail
separation algorithms
ABSTRACT: Base-detail separation is a fundamental computer vision problem consisting of
modeling a smooth base layer with the coarse structures, and a detail layer
containing the texture-like structures. One of the challenges of estimating the
base is to preserve sharp boundaries between objects or parts to avoid halo
artifacts. Many methods have been proposed to address this problem, but there
is no ground-truth dataset of real images for quantitative evaluation. We
proposed a procedure to construct such a dataset, and provide two datasets:
Pascal Base-Detail and Fashionista Base-Detail, containing 1000 and 250 images,
respectively. Our assumption is that the base is piecewise smooth and we label
the appearance of each piece by a polynomial model. The pieces are objects and
parts of objects, obtained from human annotations. Finally, we proposed a way
to evaluate methods with our base-detail ground-truth and we compared the
performances of seven state-of-the-art algorithms.
| new_dataset | 0.962708 |
1602.04854 | Shahin Mahdizadehaghdam | Shahin Mahdizadehaghdam, Han Wang, Hamid Krim, Liyi Dai | Information Diffusion of Topic Propagation in Social Media | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world social and/or operational networks consist of agents with
associated states, whose connectivity forms complex topologies. This complexity
is further compounded by interconnected information layers, consisting, for
instance, documents/resources of the agents which mutually share topical
similarities. Our goal in this work is to predict the specific states of the
agents, as their observed resources evolve in time and get updated. The
information diffusion among the agents and the publications themselves
effectively result in a dynamic process which we capture by an interconnected
system of networks (i.e. layered). More specifically, we use a notion of a
supra-Laplacian matrix to address such a generalized diffusion of an
interconnected network starting with the classical "graph Laplacian". The
auxiliary and external input update is modeled by a multidimensional Brownian
process, yielding two contributions to the variations in the states of the
agents: one that is due to the intrinsic interactions in the network system,
and the other due to the external inputs or innovations. A variation on this
theme, a priori knowledge of a fraction of the agents' states is shown to lead
to a Kalman predictor problem. This helps us refine the predicted states
exploiting the error in estimating the states of agents. Three real-world
datasets are used to evaluate and validate the information diffusion process in
this novel layered network approach. Our results demonstrate a lower prediction
error when using the interconnected network rather than the single connectivity
layer between the agents. The prediction error is further improved by using the
estimated diffusion connection and by applying the Kalman approach with partial
observations.
| [
{
"version": "v1",
"created": "Mon, 15 Feb 2016 22:14:55 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Mahdizadehaghdam",
"Shahin",
""
],
[
"Wang",
"Han",
""
],
[
"Krim",
"Hamid",
""
],
[
"Dai",
"Liyi",
""
]
] | TITLE: Information Diffusion of Topic Propagation in Social Media
ABSTRACT: Real-world social and/or operational networks consist of agents with
associated states, whose connectivity forms complex topologies. This complexity
is further compounded by interconnected information layers, consisting, for
instance, documents/resources of the agents which mutually share topical
similarities. Our goal in this work is to predict the specific states of the
agents, as their observed resources evolve in time and get updated. The
information diffusion among the agents and the publications themselves
effectively result in a dynamic process which we capture by an interconnected
system of networks (i.e. layered). More specifically, we use a notion of a
supra-Laplacian matrix to address such a generalized diffusion of an
interconnected network starting with the classical "graph Laplacian". The
auxiliary and external input update is modeled by a multidimensional Brownian
process, yielding two contributions to the variations in the states of the
agents: one that is due to the intrinsic interactions in the network system,
and the other due to the external inputs or innovations. A variation on this
theme, a priori knowledge of a fraction of the agents' states is shown to lead
to a Kalman predictor problem. This helps us refine the predicted states
exploiting the error in estimating the states of agents. Three real-world
datasets are used to evaluate and validate the information diffusion process in
this novel layered network approach. Our results demonstrate a lower prediction
error when using the interconnected network rather than the single connectivity
layer between the agents. The prediction error is further improved by using the
estimated diffusion connection and by applying the Kalman approach with partial
observations.
| no_new_dataset | 0.949059 |
1602.04874 | Yushi Yao | Yushi Yao, Zheng Huang | Bi-directional LSTM Recurrent Neural Network for Chinese Word
Segmentation | 2 figures | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural network(RNN) has been broadly applied to natural language
processing(NLP) problems. This kind of neural network is designed for modeling
sequential data and has been testified to be quite efficient in sequential
tagging tasks. In this paper, we propose to use bi-directional RNN with long
short-term memory(LSTM) units for Chinese word segmentation, which is a crucial
preprocess task for modeling Chinese sentences and articles. Classical methods
focus on designing and combining hand-craft features from context, whereas
bi-directional LSTM network(BLSTM) does not need any prior knowledge or
pre-designing, and it is expert in keeping the contextual information in both
directions. Experimental results show that our approach achieves state-of-the-art
performance in word segmentation on both traditional Chinese datasets and
simplified Chinese datasets.
| [
{
"version": "v1",
"created": "Tue, 16 Feb 2016 00:45:19 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Yao",
"Yushi",
""
],
[
"Huang",
"Zheng",
""
]
] | TITLE: Bi-directional LSTM Recurrent Neural Network for Chinese Word
Segmentation
ABSTRACT: Recurrent neural network(RNN) has been broadly applied to natural language
processing(NLP) problems. This kind of neural network is designed for modeling
sequential data and has been testified to be quite efficient in sequential
tagging tasks. In this paper, we propose to use bi-directional RNN with long
short-term memory(LSTM) units for Chinese word segmentation, which is a crucial
preprocess task for modeling Chinese sentences and articles. Classical methods
focus on designing and combining hand-craft features from context, whereas
bi-directional LSTM network(BLSTM) does not need any prior knowledge or
pre-designing, and it is expert in keeping the contextual information in both
directions. Experimental results show that our approach achieves state-of-the-art
performance in word segmentation on both traditional Chinese datasets and
simplified Chinese datasets.
| no_new_dataset | 0.951278 |
1602.06025 | Yong Ren | Yong Ren, Yining Wang, Jun Zhu | Spectral Learning for Supervised Topic Models | null | null | null | null | cs.LG cs.CL cs.IR stat.ML | http://creativecommons.org/licenses/by/4.0/ | Supervised topic models simultaneously model the latent topic structure of
large collections of documents and a response variable associated with each
document. Existing inference methods are based on variational approximation or
Monte Carlo sampling, which often suffers from the local minimum defect.
Spectral methods have been applied to learn unsupervised topic models, such as
latent Dirichlet allocation (LDA), with provable guarantees. This paper
investigates the possibility of applying spectral methods to recover the
parameters of supervised LDA (sLDA). We first present a two-stage spectral
method, which recovers the parameters of LDA followed by a power update method
to recover the regression model parameters. Then, we further present a
single-phase spectral algorithm to jointly recover the topic distribution
matrix as well as the regression weights. Our spectral algorithms are provably
correct and computationally efficient. We prove a sample complexity bound for
each algorithm and subsequently derive a sufficient condition for the
identifiability of sLDA. Thorough experiments on synthetic and real-world
datasets verify the theory and demonstrate the practical effectiveness of the
spectral algorithms. In fact, our results on a large-scale review rating
dataset demonstrate that our single-phase spectral algorithm alone gets
comparable or even better performance than state-of-the-art methods, while
previous work on spectral methods has rarely reported such promising
performance.
| [
{
"version": "v1",
"created": "Fri, 19 Feb 2016 02:07:20 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Ren",
"Yong",
""
],
[
"Wang",
"Yining",
""
],
[
"Zhu",
"Jun",
""
]
] | TITLE: Spectral Learning for Supervised Topic Models
ABSTRACT: Supervised topic models simultaneously model the latent topic structure of
large collections of documents and a response variable associated with each
document. Existing inference methods are based on variational approximation or
Monte Carlo sampling, which often suffers from the local minimum defect.
Spectral methods have been applied to learn unsupervised topic models, such as
latent Dirichlet allocation (LDA), with provable guarantees. This paper
investigates the possibility of applying spectral methods to recover the
parameters of supervised LDA (sLDA). We first present a two-stage spectral
method, which recovers the parameters of LDA followed by a power update method
to recover the regression model parameters. Then, we further present a
single-phase spectral algorithm to jointly recover the topic distribution
matrix as well as the regression weights. Our spectral algorithms are provably
correct and computationally efficient. We prove a sample complexity bound for
each algorithm and subsequently derive a sufficient condition for the
identifiability of sLDA. Thorough experiments on synthetic and real-world
datasets verify the theory and demonstrate the practical effectiveness of the
spectral algorithms. In fact, our results on a large-scale review rating
dataset demonstrate that our single-phase spectral algorithm alone gets
comparable or even better performance than state-of-the-art methods, while
previous work on spectral methods has rarely reported such promising
performance.
| no_new_dataset | 0.943815 |
1602.06136 | Mazen Alsarem | Mazen Alsarem (DRIM), Pierre-Edouard Portier (DRIM), Sylvie Calabretto
(DRIM), Harald Kosch | Ordonnancement d'entit\'es pour la rencontre du web des documents et du
web des donn\'ees | in French, Revue des Sciences et Technologies de l'Information -
S{\'e}rie Document Num\'erique, Lavoisier, 2015, Nouvelles approches en
recherche d'information, 18 (2-3/2015 ), pp.123-154 | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advances of the Linked Open Data (LOD) initiative are giving rise to a
more structured web of data. Indeed, a few datasets act as hubs (e.g., DBpedia)
connecting many other datasets. They also made possible new web services for
entity detection inside plain text (e.g., DBpedia Spotlight), thus allowing for
new applications that will benefit from a combination of the web of documents
and the web of data. To ease the emergence of these new use-cases, we propose a
query-biased algorithm for the ranking of entities detected inside a web page.
Our algorithm combines link analysis with dimensionality reduction. We use
crowdsourcing for building a publicly available and reusable dataset on which
we compare our algorithm to the state of the art. Finally, we use this
algorithm for the construction of semantic snippets for which we evaluate the
usability and the usefulness with a crowdsourcing-based approach.
| [
{
"version": "v1",
"created": "Fri, 19 Feb 2016 13:05:42 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Alsarem",
"Mazen",
"",
"DRIM"
],
[
"Portier",
"Pierre-Edouard",
"",
"DRIM"
],
[
"Calabretto",
"Sylvie",
"",
"DRIM"
],
[
"Kosch",
"Harald",
""
]
] | TITLE: Ordonnancement d'entit\'es pour la rencontre du web des documents et du
web des donn\'ees
ABSTRACT: The advances of the Linked Open Data (LOD) initiative are giving rise to a
more structured web of data. Indeed, a few datasets act as hubs (e.g., DBpedia)
connecting many other datasets. They also made possible new web services for
entity detection inside plain text (e.g., DBpedia Spotlight), thus allowing for
new applications that will benefit from a combination of the web of documents
and the web of data. To ease the emergence of these new use-cases, we propose a
query-biased algorithm for the ranking of entities detected inside a web page.
Our algorithm combines link analysis with dimensionality reduction. We use
crowdsourcing for building a publicly available and reusable dataset on which
we compare our algorithm to the state of the art. Finally, we use this
algorithm for the construction of semantic snippets for which we evaluate the
usability and the usefulness with a crowdsourcing-based approach.
| no_new_dataset | 0.951369 |
1602.06149 | Simone Bianco | Simone Bianco | Large age-gap face verification by feature injection in deep networks | Submitted | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new method for face verification across large age
gaps and also a dataset containing variations of age in the wild, the Large
Age-Gap (LAG) dataset, with images ranging from child/young to adult/old. The
proposed method exploits a deep convolutional neural network (DCNN) pre-trained
for the face recognition task on a large dataset and then fine-tuned for the
large age-gap face verification task. Finetuning is performed in a Siamese
architecture using a contrastive loss function. A feature injection layer is
introduced to boost verification accuracy, showing the ability of the DCNN to
learn a similarity metric leveraging external features. Experimental results on
the LAG dataset show that our method is able to outperform the face
verification solutions in the state of the art considered.
| [
{
"version": "v1",
"created": "Fri, 19 Feb 2016 13:39:22 GMT"
}
] | 2016-02-22T00:00:00 | [
[
"Bianco",
"Simone",
""
]
] | TITLE: Large age-gap face verification by feature injection in deep networks
ABSTRACT: This paper introduces a new method for face verification across large age
gaps and also a dataset containing variations of age in the wild, the Large
Age-Gap (LAG) dataset, with images ranging from child/young to adult/old. The
proposed method exploits a deep convolutional neural network (DCNN) pre-trained
for the face recognition task on a large dataset and then fine-tuned for the
large age-gap face verification task. Finetuning is performed in a Siamese
architecture using a contrastive loss function. A feature injection layer is
introduced to boost verification accuracy, showing the ability of the DCNN to
learn a similarity metric leveraging external features. Experimental results on
the LAG dataset show that our method is able to outperform the face
verification solutions in the state of the art considered.
| new_dataset | 0.961316 |
1412.3121 | Seungwhan Moon | Seungwhan Moon and Suyoun Kim and Haohan Wang | Multimodal Transfer Deep Learning with Applications in Audio-Visual
Recognition | 6 pages, MMML workshop at NIPS 2015 | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a transfer deep learning (TDL) framework that can transfer the
knowledge obtained from a single-modal neural network to a network with a
different modality. Specifically, we show that we can leverage speech data to
fine-tune the network trained for video recognition, given an initial set of
audio-video parallel dataset within the same semantics. Our approach first
learns the analogy-preserving embeddings between the abstract representations
learned from intermediate layers of each network, allowing for semantics-level
transfer between the source and target modalities. We then apply our neural
network operation that fine-tunes the target network with the additional
knowledge transferred from the source network, while keeping the topology of
the target network unchanged. While we present an audio-visual recognition task
as an application of our approach, our framework is flexible and thus can work
with any multimodal dataset, or with any already-existing deep networks that
share the common underlying semantics. In this work in progress report, we aim
to provide comprehensive results of different configurations of the proposed
approach on two widely used audio-visual datasets, and we discuss potential
applications of the proposed approach.
| [
{
"version": "v1",
"created": "Tue, 9 Dec 2014 21:12:19 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Feb 2016 19:56:41 GMT"
}
] | 2016-02-19T00:00:00 | [
[
"Moon",
"Seungwhan",
""
],
[
"Kim",
"Suyoun",
""
],
[
"Wang",
"Haohan",
""
]
] | TITLE: Multimodal Transfer Deep Learning with Applications in Audio-Visual
Recognition
ABSTRACT: We propose a transfer deep learning (TDL) framework that can transfer the
knowledge obtained from a single-modal neural network to a network with a
different modality. Specifically, we show that we can leverage speech data to
fine-tune the network trained for video recognition, given an initial set of
audio-video parallel dataset within the same semantics. Our approach first
learns the analogy-preserving embeddings between the abstract representations
learned from intermediate layers of each network, allowing for semantics-level
transfer between the source and target modalities. We then apply our neural
network operation that fine-tunes the target network with the additional
knowledge transferred from the source network, while keeping the topology of
the target network unchanged. While we present an audio-visual recognition task
as an application of our approach, our framework is flexible and thus can work
with any multimodal dataset, or with any already-existing deep networks that
share the common underlying semantics. In this work in progress report, we aim
to provide comprehensive results of different configurations of the proposed
approach on two widely used audio-visual datasets, and we discuss potential
applications of the proposed approach.
| no_new_dataset | 0.94474 |
1505.07427 | Alex Kendall | Alex Kendall, Matthew Grimes and Roberto Cipolla | PoseNet: A Convolutional Network for Real-Time 6-DOF Camera
Relocalization | 9 pages, 13 figures; Corrected numerical error in orientation results | null | null | null | cs.CV cs.NE cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a robust and real-time monocular six degree of freedom
relocalization system. Our system trains a convolutional neural network to
regress the 6-DOF camera pose from a single RGB image in an end-to-end manner
with no need of additional engineering or graph optimisation. The algorithm can
operate indoors and outdoors in real time, taking 5ms per frame to compute. It
obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes
and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23
layer deep convnet, demonstrating that convnets can be used to solve
complicated out of image plane regression problems. This was made possible by
leveraging transfer learning from large scale classification data. We show the
convnet localizes from high level features and is robust to difficult lighting,
motion blur and different camera intrinsics where point based SIFT registration
fails. Furthermore we show how the pose feature that is produced generalizes to
other scenes allowing us to regress pose with only a few dozen training
examples. PoseNet code, dataset and an online demonstration is available on our
project webpage, at http://mi.eng.cam.ac.uk/projects/relocalisation/
| [
{
"version": "v1",
"created": "Wed, 27 May 2015 18:18:42 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Jun 2015 11:52:30 GMT"
},
{
"version": "v3",
"created": "Mon, 23 Nov 2015 10:10:01 GMT"
},
{
"version": "v4",
"created": "Thu, 18 Feb 2016 13:52:18 GMT"
}
] | 2016-02-19T00:00:00 | [
[
"Kendall",
"Alex",
""
],
[
"Grimes",
"Matthew",
""
],
[
"Cipolla",
"Roberto",
""
]
] | TITLE: PoseNet: A Convolutional Network for Real-Time 6-DOF Camera
Relocalization
ABSTRACT: We present a robust and real-time monocular six degree of freedom
relocalization system. Our system trains a convolutional neural network to
regress the 6-DOF camera pose from a single RGB image in an end-to-end manner
with no need of additional engineering or graph optimisation. The algorithm can
operate indoors and outdoors in real time, taking 5ms per frame to compute. It
obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes
and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23
layer deep convnet, demonstrating that convnets can be used to solve
complicated out of image plane regression problems. This was made possible by
leveraging transfer learning from large scale classification data. We show the
convnet localizes from high level features and is robust to difficult lighting,
motion blur and different camera intrinsics where point based SIFT registration
fails. Furthermore we show how the pose feature that is produced generalizes to
other scenes allowing us to regress pose with only a few dozen training
examples. PoseNet code, dataset and an online demonstration is available on our
project webpage, at http://mi.eng.cam.ac.uk/projects/relocalisation/
| no_new_dataset | 0.94545 |
1509.05909 | Alex Kendall | Alex Kendall and Roberto Cipolla | Modelling Uncertainty in Deep Learning for Camera Relocalization | ICRA 2016; Fixed numerical error with rotation results | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a robust and real-time monocular six degree of freedom visual
relocalization system. We use a Bayesian convolutional neural network to
regress the 6-DOF camera pose from a single RGB image. It is trained in an
end-to-end manner with no need of additional engineering or graph optimisation.
The algorithm can operate indoors and outdoors in real time, taking under 6ms
to compute. It obtains approximately 2m and 6 degrees accuracy for very large
scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian
convolutional neural network implementation we obtain an estimate of the
model's relocalization uncertainty and improve state of the art localization
accuracy on a large scale outdoor dataset. We leverage the uncertainty measure
to estimate metric relocalization error and to detect the presence or absence
of the scene in the input image. We show that the model's uncertainty is caused
by images being dissimilar to the training dataset in either pose or
appearance.
| [
{
"version": "v1",
"created": "Sat, 19 Sep 2015 16:01:05 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Feb 2016 13:30:25 GMT"
}
] | 2016-02-19T00:00:00 | [
[
"Kendall",
"Alex",
""
],
[
"Cipolla",
"Roberto",
""
]
] | TITLE: Modelling Uncertainty in Deep Learning for Camera Relocalization
ABSTRACT: We present a robust and real-time monocular six degree of freedom visual
relocalization system. We use a Bayesian convolutional neural network to
regress the 6-DOF camera pose from a single RGB image. It is trained in an
end-to-end manner with no need of additional engineering or graph optimisation.
The algorithm can operate indoors and outdoors in real time, taking under 6ms
to compute. It obtains approximately 2m and 6 degrees accuracy for very large
scale outdoor scenes and 0.5m and 10 degrees accuracy indoors. Using a Bayesian
convolutional neural network implementation we obtain an estimate of the
model's relocalization uncertainty and improve state of the art localization
accuracy on a large scale outdoor dataset. We leverage the uncertainty measure
to estimate metric relocalization error and to detect the presence or absence
of the scene in the input image. We show that the model's uncertainty is caused
by images being dissimilar to the training dataset in either pose or
appearance.
| no_new_dataset | 0.947575 |
1602.01887 | Shu Wang | Shu Wang, Shaoting Zhang, Wei Liu and Dimitris N. Metaxas | Visual Tracking via Reliable Memories | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this paper, we propose a novel visual tracking framework that
intelligently discovers reliable patterns from a wide range of video to resist
drift error for long-term tracking tasks. First, we design a Discrete Fourier
Transform (DFT) based tracker which is able to exploit a large number of
tracked samples while still ensures real-time performance. Second, we propose a
clustering method with temporal constraints to explore and memorize consistent
patterns from previous frames, named as reliable memories. By virtue of this
method, our tracker can utilize uncontaminated information to alleviate
drifting issues. Experimental results show that our tracker performs favorably
against other state-of-the-art methods on benchmark datasets. Furthermore, it
is significantly competent in handling drifts and able to robustly track
challenging long videos over 4000 frames, while most of others lose track at
early frames.
| [
{
"version": "v1",
"created": "Thu, 4 Feb 2016 23:40:14 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Feb 2016 22:36:07 GMT"
}
] | 2016-02-19T00:00:00 | [
[
"Wang",
"Shu",
""
],
[
"Zhang",
"Shaoting",
""
],
[
"Liu",
"Wei",
""
],
[
"Metaxas",
"Dimitris N.",
""
]
] | TITLE: Visual Tracking via Reliable Memories
ABSTRACT: In this paper, we propose a novel visual tracking framework that
intelligently discovers reliable patterns from a wide range of video to resist
drift error for long-term tracking tasks. First, we design a Discrete Fourier
Transform (DFT) based tracker which is able to exploit a large number of
tracked samples while still ensuring real-time performance. Second, we propose a
clustering method with temporal constraints to explore and memorize consistent
patterns from previous frames, named as reliable memories. By virtue of this
method, our tracker can utilize uncontaminated information to alleviate
drifting issues. Experimental results show that our tracker performs favorably
against other state-of-the-art methods on benchmark datasets. Furthermore, it
is significantly competent in handling drifts and able to robustly track
challenging long videos over 4000 frames, while most of others lose track at
early frames.
| no_new_dataset | 0.946941 |
1511.04707 | Matthias Dorfer | Matthias Dorfer, Rainer Kelz and Gerhard Widmer | Deep Linear Discriminant Analysis | Published as a conference paper at ICLR 2016 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Deep Linear Discriminant Analysis (DeepLDA) which learns
linearly separable latent representations in an end-to-end fashion. Classic LDA
extracts features which preserve class separability and is used for
dimensionality reduction for many classification problems. The central idea of
this paper is to put LDA on top of a deep neural network. This can be seen as a
non-linear extension of classic LDA. Instead of maximizing the likelihood of
target labels for individual samples, we propose an objective function that
pushes the network to produce feature distributions which: (a) have low
variance within the same class and (b) high variance between different classes.
Our objective is derived from the general LDA eigenvalue problem and still
allows to train with stochastic gradient descent and back-propagation. For
evaluation we test our approach on three different benchmark datasets (MNIST,
CIFAR-10 and STL-10). DeepLDA produces competitive results on MNIST and
CIFAR-10 and outperforms a network trained with categorical cross entropy (same
architecture) on a supervised setting of STL-10.
| [
{
"version": "v1",
"created": "Sun, 15 Nov 2015 14:33:26 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Nov 2015 08:05:10 GMT"
},
{
"version": "v3",
"created": "Sat, 21 Nov 2015 17:59:18 GMT"
},
{
"version": "v4",
"created": "Mon, 28 Dec 2015 09:52:47 GMT"
},
{
"version": "v5",
"created": "Wed, 17 Feb 2016 08:32:47 GMT"
}
] | 2016-02-18T00:00:00 | [
[
"Dorfer",
"Matthias",
""
],
[
"Kelz",
"Rainer",
""
],
[
"Widmer",
"Gerhard",
""
]
] | TITLE: Deep Linear Discriminant Analysis
ABSTRACT: We introduce Deep Linear Discriminant Analysis (DeepLDA) which learns
linearly separable latent representations in an end-to-end fashion. Classic LDA
extracts features which preserve class separability and is used for
dimensionality reduction for many classification problems. The central idea of
this paper is to put LDA on top of a deep neural network. This can be seen as a
non-linear extension of classic LDA. Instead of maximizing the likelihood of
target labels for individual samples, we propose an objective function that
pushes the network to produce feature distributions which: (a) have low
variance within the same class and (b) high variance between different classes.
Our objective is derived from the general LDA eigenvalue problem and still
allows to train with stochastic gradient descent and back-propagation. For
evaluation we test our approach on three different benchmark datasets (MNIST,
CIFAR-10 and STL-10). DeepLDA produces competitive results on MNIST and
CIFAR-10 and outperforms a network trained with categorical cross entropy (same
architecture) on a supervised setting of STL-10.
| no_new_dataset | 0.946794 |
1602.05285 | Truyen Tran | Truyen Tran, Dinh Phung and Svetha Venkatesh | Choice by Elimination via Deep Neural Networks | PAKDD workshop on Biologically Inspired Techniques for Data Mining
(BDM'16) | null | null | null | stat.ML cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Neural Choice by Elimination, a new framework that integrates
deep neural networks into probabilistic sequential choice models for learning
to rank. Given a set of items to chose from, the elimination strategy starts
with the whole item set and iteratively eliminates the least worthy item in the
remaining subset. We prove that the choice by elimination is equivalent to
marginalizing out the random Gompertz latent utilities. Coupled with the choice
model is the recently introduced Neural Highway Networks for approximating
arbitrarily complex rank functions. We evaluate the proposed framework on a
large-scale public dataset with over 425K items, drawn from the Yahoo! learning
to rank challenge. It is demonstrated that the proposed method is competitive
against state-of-the-art learning to rank methods.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 03:17:10 GMT"
}
] | 2016-02-18T00:00:00 | [
[
"Tran",
"Truyen",
""
],
[
"Phung",
"Dinh",
""
],
[
"Venkatesh",
"Svetha",
""
]
] | TITLE: Choice by Elimination via Deep Neural Networks
ABSTRACT: We introduce Neural Choice by Elimination, a new framework that integrates
deep neural networks into probabilistic sequential choice models for learning
to rank. Given a set of items to choose from, the elimination strategy starts
with the whole item set and iteratively eliminates the least worthy item in the
remaining subset. We prove that the choice by elimination is equivalent to
marginalizing out the random Gompertz latent utilities. Coupled with the choice
model is the recently introduced Neural Highway Networks for approximating
arbitrarily complex rank functions. We evaluate the proposed framework on a
large-scale public dataset with over 425K items, drawn from the Yahoo! learning
to rank challenge. It is demonstrated that the proposed method is competitive
against state-of-the-art learning to rank methods.
| no_new_dataset | 0.948106 |
1602.05292 | Zhenhao Ge | Zhenhao Ge, Yufang Sun and Mark J. T. Smith | Authorship Attribution Using a Neural Network Language Model | Proceedings of the 30th AAAI Conference on Artificial Intelligence
(AAAI'16) | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In practice, training language models for individual authors is often
expensive because of limited data resources. In such cases, Neural Network
Language Models (NNLMs), generally outperform the traditional non-parametric
N-gram models. Here we investigate the performance of a feed-forward NNLM on an
authorship attribution problem, with moderate author set size and relatively
limited data. We also consider how the text topics impact performance. Compared
with a well-constructed N-gram baseline method with Kneser-Ney smoothing, the
proposed method achieves nearly 2.5% reduction in perplexity and increases
author classification accuracy by 3.43% on average, given as few as 5 test
sentences. The performance is very competitive with the state of the art in
terms of accuracy and demand on test data. The source code, preprocessed
datasets, a detailed description of the methodology and results are available
at https://github.com/zge/authorship-attribution.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 04:06:28 GMT"
}
] | 2016-02-18T00:00:00 | [
[
"Ge",
"Zhenhao",
""
],
[
"Sun",
"Yufang",
""
],
[
"Smith",
"Mark J. T.",
""
]
] | TITLE: Authorship Attribution Using a Neural Network Language Model
ABSTRACT: In practice, training language models for individual authors is often
expensive because of limited data resources. In such cases, Neural Network
Language Models (NNLMs), generally outperform the traditional non-parametric
N-gram models. Here we investigate the performance of a feed-forward NNLM on an
authorship attribution problem, with moderate author set size and relatively
limited data. We also consider how the text topics impact performance. Compared
with a well-constructed N-gram baseline method with Kneser-Ney smoothing, the
proposed method achieves nearly 2.5% reduction in perplexity and increases
author classification accuracy by 3.43% on average, given as few as 5 test
sentences. The performance is very competitive with the state of the art in
terms of accuracy and demand on test data. The source code, preprocessed
datasets, a detailed description of the methodology and results are available
at https://github.com/zge/authorship-attribution.
| no_new_dataset | 0.950457 |
1602.05307 | Xiang Ren | Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Jiawei Han | Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label
Embedding | Submitted to KDD 2016. 11 pages | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current systems of fine-grained entity typing use distant supervision in
conjunction with existing knowledge bases to assign categories (type labels) to
entity mentions. However, the type labels so obtained from knowledge bases are
often noisy (i.e., incorrect for the entity mention's local context). We define
a new task, Label Noise Reduction in Entity Typing (LNR), to be the automatic
identification of correct type labels (type-paths) for training examples, given
the set of candidate type labels obtained by distant supervision with a given
type hierarchy. The unknown type labels for individual entity mentions and the
semantic similarity between entity types pose unique challenges for solving the
LNR task. We propose a general framework, called PLE, to jointly embed entity
mentions, text features and entity types into the same low-dimensional space
where, in that space, objects whose types are semantically close have similar
representations. Then we estimate the type-path for each training example in a
top-down manner using the learned embeddings. We formulate a global objective
for learning the embeddings from text corpora and knowledge bases, which adopts
a novel margin-based loss that is robust to noisy labels and faithfully models
type correlation derived from knowledge bases. Our experiments on three public
typing datasets demonstrate the effectiveness and robustness of PLE, with an
average of 25% improvement in accuracy compared to the next best method.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 05:26:47 GMT"
}
] | 2016-02-18T00:00:00 | [
[
"Ren",
"Xiang",
""
],
[
"He",
"Wenqi",
""
],
[
"Qu",
"Meng",
""
],
[
"Voss",
"Clare R.",
""
],
[
"Ji",
"Heng",
""
],
[
"Han",
"Jiawei",
""
]
] | TITLE: Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label
Embedding
ABSTRACT: Current systems of fine-grained entity typing use distant supervision in
conjunction with existing knowledge bases to assign categories (type labels) to
entity mentions. However, the type labels so obtained from knowledge bases are
often noisy (i.e., incorrect for the entity mention's local context). We define
a new task, Label Noise Reduction in Entity Typing (LNR), to be the automatic
identification of correct type labels (type-paths) for training examples, given
the set of candidate type labels obtained by distant supervision with a given
type hierarchy. The unknown type labels for individual entity mentions and the
semantic similarity between entity types pose unique challenges for solving the
LNR task. We propose a general framework, called PLE, to jointly embed entity
mentions, text features and entity types into the same low-dimensional space
where objects whose types are semantically close have similar
representations. Then we estimate the type-path for each training example in a
top-down manner using the learned embeddings. We formulate a global objective
for learning the embeddings from text corpora and knowledge bases, which adopts
a novel margin-based loss that is robust to noisy labels and faithfully models
type correlation derived from knowledge bases. Our experiments on three public
typing datasets demonstrate the effectiveness and robustness of PLE, with an
average of 25% improvement in accuracy compared to the next best method.
| no_new_dataset | 0.949295 |
1602.05436 | Mike Gartrell | Mike Gartrell, Ulrich Paquet, Noam Koenigstein | Low-Rank Factorization of Determinantal Point Processes for
Recommendation | 10 pages, 4 figures. Submitted to KDD 2016 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Determinantal point processes (DPPs) have garnered attention as an elegant
probabilistic model of set diversity. They are useful for a number of subset
selection tasks, including product recommendation. DPPs are parametrized by a
positive semi-definite kernel matrix. In this work we present a new method for
learning the DPP kernel from observed data using a low-rank factorization of
this kernel. We show that this low-rank factorization enables a learning
algorithm that is nearly an order of magnitude faster than previous approaches,
while also providing for a method for computing product recommendation
predictions that is far faster (up to 20x faster or more for large item
catalogs) than previous techniques that involve a full-rank DPP kernel.
Furthermore, we show that our method provides equivalent or sometimes better
predictive performance than prior full-rank DPP approaches, and better
performance than several other competing recommendation methods in many cases.
We conduct an extensive experimental evaluation using several real-world
datasets in the domain of product recommendation to demonstrate the utility of
our method, along with its limitations.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 14:40:52 GMT"
}
] | 2016-02-18T00:00:00 | [
[
"Gartrell",
"Mike",
""
],
[
"Paquet",
"Ulrich",
""
],
[
"Koenigstein",
"Noam",
""
]
] | TITLE: Low-Rank Factorization of Determinantal Point Processes for
Recommendation
ABSTRACT: Determinantal point processes (DPPs) have garnered attention as an elegant
probabilistic model of set diversity. They are useful for a number of subset
selection tasks, including product recommendation. DPPs are parametrized by a
positive semi-definite kernel matrix. In this work we present a new method for
learning the DPP kernel from observed data using a low-rank factorization of
this kernel. We show that this low-rank factorization enables a learning
algorithm that is nearly an order of magnitude faster than previous approaches,
while also providing for a method for computing product recommendation
predictions that is far faster (up to 20x faster or more for large item
catalogs) than previous techniques that involve a full-rank DPP kernel.
Furthermore, we show that our method provides equivalent or sometimes better
predictive performance than prior full-rank DPP approaches, and better
performance than several other competing recommendation methods in many cases.
We conduct an extensive experimental evaluation using several real-world
datasets in the domain of product recommendation to demonstrate the utility of
our method, along with its limitations.
| no_new_dataset | 0.947672 |
1602.05439 | Arnaud Browet | Arnaud Browet, Christophe De Vleeschouwer, Laurent Jacques, Navrita
Mathiah, Bechara Saykali, Isabelle Migeotte | Cell segmentation with random ferns and graph-cuts | submitted to ICIP | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Progress in imaging techniques has allowed the study of various aspects
of cellular mechanisms. To isolate individual cells in live imaging data, we
introduce an elegant image segmentation framework that effectively extracts
cell boundaries, even in the presence of poor edge details. Our approach works
in two stages. First, we estimate pixel interior/border/exterior class
probabilities using random ferns. Then, we use an energy minimization framework
to compute boundaries whose localization is compliant with the pixel class
probabilities. We validate our approach on a manually annotated dataset.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 14:47:32 GMT"
}
] | 2016-02-18T00:00:00 | [
[
"Browet",
"Arnaud",
""
],
[
"De Vleeschouwer",
"Christophe",
""
],
[
"Jacques",
"Laurent",
""
],
[
"Mathiah",
"Navrita",
""
],
[
"Saykali",
"Bechara",
""
],
[
"Migeotte",
"Isabelle",
""
]
] | TITLE: Cell segmentation with random ferns and graph-cuts
ABSTRACT: Progress in imaging techniques has allowed the study of various aspects
of cellular mechanisms. To isolate individual cells in live imaging data, we
introduce an elegant image segmentation framework that effectively extracts
cell boundaries, even in the presence of poor edge details. Our approach works
in two stages. First, we estimate pixel interior/border/exterior class
probabilities using random ferns. Then, we use an energy minimization framework
to compute boundaries whose localization is compliant with the pixel class
probabilities. We validate our approach on a manually annotated dataset.
| no_new_dataset | 0.948537 |
1602.05568 | Mohammad Taha Bahadori | Edward Choi, Mohammad Taha Bahadori, Elizabeth Searles, Catherine
Coffey, Jimeng Sun | Multi-layer Representation Learning for Medical Concepts | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning efficient representations for concepts has been proven to be an
important basis for many applications such as machine translation or document
classification. Proper representations of medical concepts such as diagnosis,
medication, procedure codes and visits will have broad applications in
healthcare analytics. However, in Electronic Health Records (EHR) the visit
sequences of patients include multiple concepts (diagnosis, procedure, and
medication codes) per visit. This structure provides two types of relational
information, namely sequential order of visits and co-occurrence of the codes
within each visit. In this work, we propose Med2Vec, which not only learns
distributed representations for both medical codes and visits from a large EHR
dataset with over 3 million visits, but also allows us to interpret the learned
representations confirmed positively by clinical experts. In the experiments,
Med2Vec displays significant improvement in key medical applications compared
to popular baselines such as Skip-gram, GloVe and stacked autoencoder, while
providing clinically meaningful interpretation.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 20:55:40 GMT"
}
] | 2016-02-18T00:00:00 | [
[
"Choi",
"Edward",
""
],
[
"Bahadori",
"Mohammad Taha",
""
],
[
"Searles",
"Elizabeth",
""
],
[
"Coffey",
"Catherine",
""
],
[
"Sun",
"Jimeng",
""
]
] | TITLE: Multi-layer Representation Learning for Medical Concepts
ABSTRACT: Learning efficient representations for concepts has been proven to be an
important basis for many applications such as machine translation or document
classification. Proper representations of medical concepts such as diagnosis,
medication, procedure codes and visits will have broad applications in
healthcare analytics. However, in Electronic Health Records (EHR) the visit
sequences of patients include multiple concepts (diagnosis, procedure, and
medication codes) per visit. This structure provides two types of relational
information, namely sequential order of visits and co-occurrence of the codes
within each visit. In this work, we propose Med2Vec, which not only learns
distributed representations for both medical codes and visits from a large EHR
dataset with over 3 million visits, but also allows us to interpret the learned
representations confirmed positively by clinical experts. In the experiments,
Med2Vec displays significant improvement in key medical applications compared
to popular baselines such as Skip-gram, GloVe and stacked autoencoder, while
providing clinically meaningful interpretation.
| no_new_dataset | 0.945349 |
1403.1070 | Simon Walk | Simon Walk and Philipp Singer and Markus Strohmaier and Denis Helic
and Natalya F. Noy and Mark Musen | How to Apply Markov Chains for Modeling Sequential Edit Patterns in
Collaborative Ontology-Engineering Projects | null | null | 10.1016/j.ijhcs.2015.07.006 | null | cs.HC cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing popularity of large-scale collaborative ontology-engineering
projects, such as the creation of the 11th revision of the International
Classification of Diseases, we need new methods and insights to help project-
and community-managers to cope with the constantly growing complexity of such
projects. In this paper, we present a novel application of Markov chains to
model sequential usage patterns that can be found in the change-logs of
collaborative ontology-engineering projects. We provide a detailed presentation
of the analysis process, describing all the required steps that are necessary
to apply and determine the best fitting Markov chain model. Amongst others, the
model and results allow us to identify structural properties and regularities
as well as predict future actions based on usage sequences. We are specifically
interested in determining the appropriate Markov chain orders, which specify
how many previous actions future ones depend on. To demonstrate the
practical usefulness of the extracted Markov chains we conduct sequential
pattern analyses on a large-scale collaborative ontology-engineering dataset,
the International Classification of Diseases in its 11th revision. To further
expand on the usefulness of the presented analysis, we show that the collected
sequential patterns provide potentially actionable information for
user-interface designers, ontology-engineering tool developers and
project-managers to monitor, coordinate and dynamically adapt to the natural
development processes that occur when collaboratively engineering an ontology.
We hope that the presented work will spur a new line of ontology-development tools,
evaluation-techniques and new insights, further taking the interactive nature
of the collaborative ontology-engineering process into consideration.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2014 10:39:16 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Feb 2016 14:11:00 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Feb 2016 12:36:34 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Walk",
"Simon",
""
],
[
"Singer",
"Philipp",
""
],
[
"Strohmaier",
"Markus",
""
],
[
"Helic",
"Denis",
""
],
[
"Noy",
"Natalya F.",
""
],
[
"Musen",
"Mark",
""
]
] | TITLE: How to Apply Markov Chains for Modeling Sequential Edit Patterns in
Collaborative Ontology-Engineering Projects
ABSTRACT: With the growing popularity of large-scale collaborative ontology-engineering
projects, such as the creation of the 11th revision of the International
Classification of Diseases, we need new methods and insights to help project-
and community-managers to cope with the constantly growing complexity of such
projects. In this paper, we present a novel application of Markov chains to
model sequential usage patterns that can be found in the change-logs of
collaborative ontology-engineering projects. We provide a detailed presentation
of the analysis process, describing all the required steps that are necessary
to apply and determine the best fitting Markov chain model. Amongst others, the
model and results allow us to identify structural properties and regularities
as well as predict future actions based on usage sequences. We are specifically
interested in determining the appropriate Markov chain orders, which specify
how many previous actions future ones depend on. To demonstrate the
practical usefulness of the extracted Markov chains we conduct sequential
pattern analyses on a large-scale collaborative ontology-engineering dataset,
the International Classification of Diseases in its 11th revision. To further
expand on the usefulness of the presented analysis, we show that the collected
sequential patterns provide potentially actionable information for
user-interface designers, ontology-engineering tool developers and
project-managers to monitor, coordinate and dynamically adapt to the natural
development processes that occur when collaboratively engineering an ontology.
We hope that the presented work will spur a new line of ontology-development tools,
evaluation-techniques and new insights, further taking the interactive nature
of the collaborative ontology-engineering process into consideration.
| no_new_dataset | 0.928214 |
1404.0300 | Joshua Garland | David Darmon, Elisa Omodei, Joshua Garland | Followers Are Not Enough: A Question-Oriented Approach to Community
Detection in Online Social Networks | 22 pages, 4 figures, 1 tables | null | 10.1371/journal.pone.0134860 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community detection in online social networks is typically based on the
analysis of the explicit connections between users, such as "friends" on
Facebook and "followers" on Twitter. But online users often have hundreds or
even thousands of such connections, and many of these connections do not
correspond to real friendships or more generally to accounts that users
interact with. We claim that community detection in online social networks
should be question-oriented and rely on additional information beyond the
simple structure of the network. The concept of 'community' is very general,
and different questions such as "whom do we interact with?" and "with whom do
we share similar interests?" can lead to the discovery of different social
groups. In this paper we focus on three types of communities beyond structural
communities: activity-based, topic-based, and interaction-based. We analyze a
Twitter dataset using three different weightings of the structural network
meant to highlight these three community types, and then infer the communities
associated with these weightings. We show that the communities obtained in the
three weighted cases are highly different from each other, and from the
communities obtained by considering only the unweighted structural network. Our
results confirm that asking a precise question is an unavoidable first step in
community detection in online social networks, and that different questions can
lead to different insights about the network under study.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2014 16:23:19 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Aug 2014 20:13:45 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Darmon",
"David",
""
],
[
"Omodei",
"Elisa",
""
],
[
"Garland",
"Joshua",
""
]
] | TITLE: Followers Are Not Enough: A Question-Oriented Approach to Community
Detection in Online Social Networks
ABSTRACT: Community detection in online social networks is typically based on the
analysis of the explicit connections between users, such as "friends" on
Facebook and "followers" on Twitter. But online users often have hundreds or
even thousands of such connections, and many of these connections do not
correspond to real friendships or more generally to accounts that users
interact with. We claim that community detection in online social networks
should be question-oriented and rely on additional information beyond the
simple structure of the network. The concept of 'community' is very general,
and different questions such as "whom do we interact with?" and "with whom do
we share similar interests?" can lead to the discovery of different social
groups. In this paper we focus on three types of communities beyond structural
communities: activity-based, topic-based, and interaction-based. We analyze a
Twitter dataset using three different weightings of the structural network
meant to highlight these three community types, and then infer the communities
associated with these weightings. We show that the communities obtained in the
three weighted cases are highly different from each other, and from the
communities obtained by considering only the unweighted structural network. Our
results confirm that asking a precise question is an unavoidable first step in
community detection in online social networks, and that different questions can
lead to different insights about the network under study.
| no_new_dataset | 0.946498 |
1408.5558 | Xiao-Pu Han | Zhi-Qiang You, Xiao-Pu Han, Linyuan L\"u, Chi Ho Yeung | Empirical studies on the network of social groups: the case of Tencent
QQ | 18 pages, 9 figures | null | null | 10.1371/journal.pone.0130538 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Participation in social groups is important, but the collective behaviors of
humans as a group are difficult to analyze due to the difficulty of quantifying
ordinary social relations and group membership, and of collecting a comprehensive
dataset. Such difficulties can be circumvented by analyzing online social
networks. In this paper, we analyze a comprehensive dataset obtained from
Tencent QQ, an instant messenger with the highest market share in China.
Specifically, we analyze three derivative networks involving groups and their
members -- the hypergraph of groups, the network of groups and the user network
-- to reveal social interactions at microscopic and mesoscopic level. Our
results uncover interesting behaviors on the growth of user groups, the
interactions between groups, and their relationship with member age and gender.
These findings lead to insights which are difficult to obtain in ordinary
social networks.
| [
{
"version": "v1",
"created": "Sun, 24 Aug 2014 05:05:36 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"You",
"Zhi-Qiang",
""
],
[
"Han",
"Xiao-Pu",
""
],
[
"Lü",
"Linyuan",
""
],
[
"Yeung",
"Chi Ho",
""
]
] | TITLE: Empirical studies on the network of social groups: the case of Tencent
QQ
ABSTRACT: Participation in social groups is important, but the collective behaviors of
humans as a group are difficult to analyze due to the difficulty of quantifying
ordinary social relations and group membership, and of collecting a comprehensive
dataset. Such difficulties can be circumvented by analyzing online social
networks. In this paper, we analyze a comprehensive dataset obtained from
Tencent QQ, an instant messenger with the highest market share in China.
Specifically, we analyze three derivative networks involving groups and their
members -- the hypergraph of groups, the network of groups and the user network
-- to reveal social interactions at microscopic and mesoscopic level. Our
results uncover interesting behaviors on the growth of user groups, the
interactions between groups, and their relationship with member age and gender.
These findings lead to insights which are difficult to obtain in ordinary
social networks.
| no_new_dataset | 0.934335 |
1412.8307 | Mark McDonnell | Mark D. McDonnell, Migel D. Tissera, Tony Vladusich, Andr\'e van
Schaik, and Jonathan Tapson | Fast, simple and accurate handwritten digit classification by training
shallow neural network classifiers with the 'extreme learning machine'
algorithm | Accepted for publication; 9 pages of text, 6 figures and 1 table | null | 10.1371/journal.pone.0134254 | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in training deep (multi-layer) architectures have inspired a
renaissance in neural network use. For example, deep convolutional networks are
becoming the default option for difficult tasks on large datasets, such as
image and speech recognition. However, here we show that error rates below 1%
on the MNIST handwritten digit benchmark can be replicated with shallow
non-convolutional neural networks. This is achieved by training such networks
using the 'Extreme Learning Machine' (ELM) approach, which also enables a very
rapid training time (~10 minutes). Adding distortions, as is common practice
for MNIST, reduces error rates even further. Our methods are also shown to be
capable of achieving less than 5.5% error rates on the NORB image database. To
achieve these results, we introduce several enhancements to the standard ELM
algorithm, which individually and in combination can significantly improve
performance. The main innovation is to ensure each hidden-unit operates only on
a randomly sized and positioned patch of each image. This form of random
`receptive field' sampling of the input ensures the input weight matrix is
sparse, with about 90% of weights equal to zero. Furthermore, combining our
methods with a small number of iterations of a single-batch backpropagation
method can significantly reduce the number of hidden-units required to achieve
a particular performance. Our close to state-of-the-art results for MNIST and
NORB suggest that the ease of use and accuracy of the ELM algorithm for
designing a single-hidden-layer neural network classifier should cause it to be
given greater consideration either as a standalone method for simpler problems,
or as the final classification stage in deep neural networks applied to more
difficult problems.
| [
{
"version": "v1",
"created": "Mon, 29 Dec 2014 11:14:59 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jul 2015 08:28:03 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"McDonnell",
"Mark D.",
""
],
[
"Tissera",
"Migel D.",
""
],
[
"Vladusich",
"Tony",
""
],
[
"van Schaik",
"André",
""
],
[
"Tapson",
"Jonathan",
""
]
] | TITLE: Fast, simple and accurate handwritten digit classification by training
shallow neural network classifiers with the 'extreme learning machine'
algorithm
ABSTRACT: Recent advances in training deep (multi-layer) architectures have inspired a
renaissance in neural network use. For example, deep convolutional networks are
becoming the default option for difficult tasks on large datasets, such as
image and speech recognition. However, here we show that error rates below 1%
on the MNIST handwritten digit benchmark can be replicated with shallow
non-convolutional neural networks. This is achieved by training such networks
using the 'Extreme Learning Machine' (ELM) approach, which also enables a very
rapid training time (~10 minutes). Adding distortions, as is common practice
for MNIST, reduces error rates even further. Our methods are also shown to be
capable of achieving less than 5.5% error rates on the NORB image database. To
achieve these results, we introduce several enhancements to the standard ELM
algorithm, which individually and in combination can significantly improve
performance. The main innovation is to ensure each hidden-unit operates only on
a randomly sized and positioned patch of each image. This form of random
`receptive field' sampling of the input ensures the input weight matrix is
sparse, with about 90% of weights equal to zero. Furthermore, combining our
methods with a small number of iterations of a single-batch backpropagation
method can significantly reduce the number of hidden-units required to achieve
a particular performance. Our close to state-of-the-art results for MNIST and
NORB suggest that the ease of use and accuracy of the ELM algorithm for
designing a single-hidden-layer neural network classifier should cause it to be
given greater consideration either as a standalone method for simpler problems,
or as the final classification stage in deep neural networks applied to more
difficult problems.
| no_new_dataset | 0.947575 |
1501.00752 | Alexander Wong | Mohammad Shafiee, Zohreh Azimifar, and Alexander Wong | A Deep-structured Conditional Random Field Model for Object Silhouette
Tracking | 17 pages | null | 10.1371/journal.pone.0133036 | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we introduce a deep-structured conditional random field
(DS-CRF) model for the purpose of state-based object silhouette tracking. The
proposed DS-CRF model consists of a series of state layers, where each state
layer spatially characterizes the object silhouette at a particular point in
time. The interactions between adjacent state layers are established by
inter-layer connectivity dynamically determined based on inter-frame optical
flow. By incorporating both spatial and temporal context in a dynamic fashion
within such a deep-structured probabilistic graphical model, the proposed
DS-CRF model allows us to develop a framework that can accurately and
efficiently track object silhouettes that can change greatly over time, as well
as under different situations such as occlusion and multiple targets within the
scene. Experimental results using video surveillance datasets containing
different scenarios such as occlusion and multiple targets showed that the
proposed DS-CRF approach provides strong object silhouette tracking performance
when compared to baseline methods such as mean-shift tracking, as well as
state-of-the-art methods such as context tracking and boosted particle
filtering.
| [
{
"version": "v1",
"created": "Mon, 5 Jan 2015 03:09:34 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Aug 2015 18:27:20 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Shafiee",
"Mohammad",
""
],
[
"Azimifar",
"Zohreh",
""
],
[
"Wong",
"Alexander",
""
]
] | TITLE: A Deep-structured Conditional Random Field Model for Object Silhouette
Tracking
ABSTRACT: In this work, we introduce a deep-structured conditional random field
(DS-CRF) model for the purpose of state-based object silhouette tracking. The
proposed DS-CRF model consists of a series of state layers, where each state
layer spatially characterizes the object silhouette at a particular point in
time. The interactions between adjacent state layers are established by
inter-layer connectivity dynamically determined based on inter-frame optical
flow. By incorporating both spatial and temporal context in a dynamic fashion
within such a deep-structured probabilistic graphical model, the proposed
DS-CRF model allows us to develop a framework that can accurately and
efficiently track object silhouettes that can change greatly over time, as well
as under different situations such as occlusion and multiple targets within the
scene. Experimental results using video surveillance datasets containing
different scenarios such as occlusion and multiple targets showed that the
proposed DS-CRF approach provides strong object silhouette tracking performance
when compared to baseline methods such as mean-shift tracking, as well as
state-of-the-art methods such as context tracking and boosted particle
filtering.
| no_new_dataset | 0.951369 |
1504.04387 | Jennifer Golbeck | Jennifer Golbeck | Benford's Law Applies To Online Social Networks | 9 pages, 2 figures | null | 10.1371/journal.pone.0135169 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Benford's Law states that the frequency of first digits of numbers in
naturally occurring systems is not evenly distributed. Numbers beginning with a
1 occur roughly 30\% of the time, and are six times more common than numbers
beginning with a 9. We show that Benford's Law applies to social and behavioral
features of users in online social networks. We consider social data from five
major social networks: Facebook, Twitter, Google Plus, Pinterest, and Live
Journal. We show that the distribution of first significant digits of friend
and follower counts for users in these systems follow Benford's Law. The same
holds for the number of posts users make. We extend this to egocentric
networks, showing that friend counts among the people in an individual's social
network also follow the expected distribution. We discuss how this can be used
to detect suspicious or fraudulent activity online and to validate datasets.
| [
{
"version": "v1",
"created": "Thu, 16 Apr 2015 20:43:35 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Golbeck",
"Jennifer",
""
]
] | TITLE: Benford's Law Applies To Online Social Networks
ABSTRACT: Benford's Law states that the frequency of first digits of numbers in
naturally occurring systems is not evenly distributed. Numbers beginning with a
1 occur roughly 30\% of the time, and are six times more common than numbers
beginning with a 9. We show that Benford's Law applies to social and behavioral
features of users in online social networks. We consider social data from five
major social networks: Facebook, Twitter, Google Plus, Pinterest, and Live
Journal. We show that the distribution of first significant digits of friend
and follower counts for users in these systems follow Benford's Law. The same
holds for the number of posts users make. We extend this to egocentric
networks, showing that friend counts among the people in an individual's social
network also follow the expected distribution. We discuss how this can be used
to detect suspicious or fraudulent activity online and to validate datasets.
| no_new_dataset | 0.952486 |
1506.05659 | Radhika Arava | Radhika Arava | An Efficient homophilic model and Algorithms for Community Detection
using Nash Dynamics | The paper is not well-written. I would like to update the paper after
it is published, so that it will be more useful to the community | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of community detection is important as it helps in understanding
the spread of information in a social network. All real complex networks have
an inbuilt structure which captures and characterizes the network dynamics
between its nodes. Linkages are more likely to form between similar nodes,
leading to the formation of some community structure which characterizes the
network dynamic. The more friends they have in common, the more the influence
that each person can exercise on the other.
We propose a disjoint community detection algorithm, $\textit{NashDisjoint}$,
that detects disjoint communities in any given network. We evaluate the
algorithm $\textit{NashDisjoint}$ on the standard LFR benchmarks, and we find
that our algorithm performs at least as well as the state-of-the-art
algorithms for mixing factors less than 0.55 in all cases. We propose
an overlapping community detection algorithm $\textit{NashOverlap}$ to detect
the overlapping communities in any given network. We evaluate the algorithm
$\textit{NashOverlap}$ on the standard LFR benchmarks and we find that our
algorithm works far better than the state-of-the-art algorithms in around 152
different scenarios, generated by varying the number of nodes, mixing factor
and overlapping membership.
We run our algorithm $\textit{NashOverlap}$ on the DBLP dataset to detect
large collaboration groups and find very interesting results. Also, these
results of our algorithm on DBLP collaboration network are compared with the
results of the $\textit{COPRA}$ algorithm and $\textit{OSLOM}$.
| [
{
"version": "v1",
"created": "Thu, 18 Jun 2015 12:55:47 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 17:32:15 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Arava",
"Radhika",
""
]
] | TITLE: An Efficient homophilic model and Algorithms for Community Detection
using Nash Dynamics
ABSTRACT: The problem of community detection is important as it helps in understanding
the spread of information in a social network. All real complex networks have
an inbuilt structure which captures and characterizes the network dynamics
between their nodes. Linkages are more likely to form between similar nodes,
leading to the formation of some community structure which characterizes the
network dynamic. The more friends they have in common, the more the influence
that each person can exercise on the other.
We propose a disjoint community detection algorithm, $\textit{NashDisjoint}$
that detects disjoint communities in any given network. We evaluate the
algorithm $\textit{NashDisjoint}$ on the standard LFR benchmarks, and we find
that our algorithm performs at least as well as the state-of-the-art
algorithms for mixing factors less than 0.55 in all cases. We propose
an overlapping community detection algorithm $\textit{NashOverlap}$ to detect
the overlapping communities in any given network. We evaluate the algorithm
$\textit{NashOverlap}$ on the standard LFR benchmarks and we find that our
algorithm works far better than the state-of-the-art algorithms in around 152
different scenarios, generated by varying the number of nodes, mixing factor
and overlapping membership.
We run our algorithm $\textit{NashOverlap}$ on the DBLP dataset to detect
large collaboration groups and find very interesting results. Also, these
results of our algorithm on DBLP collaboration network are compared with the
results of the $\textit{COPRA}$ algorithm and $\textit{OSLOM}$.
| no_new_dataset | 0.942929 |
1506.07032 | Taro Takaguchi | Taro Takaguchi, Yosuke Yano, Yuichi Yoshida | Coverage centralities for temporal networks | 13 pages, 10 figures | European Physical Journal B, 89, 35 (2016) | 10.1140/epjb/e2016-60498-7 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structure of real networked systems, such as social relationship, can be
modeled as temporal networks in which each edge appears only at the prescribed
time. Understanding the structure of temporal networks requires quantifying the
importance of a temporal vertex, which is a pair of vertex index and time. In
this paper, we define two centrality measures of a temporal vertex based on the
fastest temporal paths which use the temporal vertex. The definition is free
from parameters and robust against the change in time scale on which we focus.
In addition, we can efficiently compute these centrality values for all
temporal vertices. Using the two centrality measures, we reveal that
distributions of these centrality values of real-world temporal networks are
heterogeneous. For various datasets, we also demonstrate that a majority of the
highly central temporal vertices are located within a narrow time window around
a particular time. In other words, there is a bottleneck time at which most
information sent in the temporal network passes through a small number of
temporal vertices, which suggests an important role of these temporal vertices
in spreading phenomena.
| [
{
"version": "v1",
"created": "Tue, 23 Jun 2015 14:44:05 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 05:57:03 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Takaguchi",
"Taro",
""
],
[
"Yano",
"Yosuke",
""
],
[
"Yoshida",
"Yuichi",
""
]
] | TITLE: Coverage centralities for temporal networks
ABSTRACT: Structure of real networked systems, such as social relationship, can be
modeled as temporal networks in which each edge appears only at the prescribed
time. Understanding the structure of temporal networks requires quantifying the
importance of a temporal vertex, which is a pair of vertex index and time. In
this paper, we define two centrality measures of a temporal vertex based on the
fastest temporal paths which use the temporal vertex. The definition is free
from parameters and robust against the change in time scale on which we focus.
In addition, we can efficiently compute these centrality values for all
temporal vertices. Using the two centrality measures, we reveal that
distributions of these centrality values of real-world temporal networks are
heterogeneous. For various datasets, we also demonstrate that a majority of the
highly central temporal vertices are located within a narrow time window around
a particular time. In other words, there is a bottleneck time at which most
information sent in the temporal network passes through a small number of
temporal vertices, which suggests an important role of these temporal vertices
in spreading phenomena.
| no_new_dataset | 0.947914 |
1509.07979 | Yogesh Girdhar | Yogesh Girdhar, Walter Cho, Matthew Campbell, Jesus Pineda, Elizabeth
Clarke, Hanumant Singh | Anomaly Detection in Unstructured Environments using Bayesian
Nonparametric Scene Modeling | 6 pages, ICRA 2016 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the use of a Bayesian non-parametric topic modeling
technique for the purpose of anomaly detection in video data. We present
results from two experiments. The first experiment shows that the proposed
technique is automatically able to characterize the underlying terrain and detect
anomalous flora in image data collected by an underwater robot. The second
experiment shows that the same technique can be used on images from a static
camera in a dynamic unstructured environment. In the second dataset, consisting
of video data from a static seafloor camera capturing images of a busy coral
reef, the proposed technique was able to detect all three instances of an
underwater vehicle passing in front of the camera, amongst many other
observations of fishes, debris, lighting changes due to surface waves, and
benthic flora.
| [
{
"version": "v1",
"created": "Sat, 26 Sep 2015 13:51:39 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 02:45:52 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Girdhar",
"Yogesh",
""
],
[
"Cho",
"Walter",
""
],
[
"Campbell",
"Matthew",
""
],
[
"Pineda",
"Jesus",
""
],
[
"Clarke",
"Elizabeth",
""
],
[
"Singh",
"Hanumant",
""
]
] | TITLE: Anomaly Detection in Unstructured Environments using Bayesian
Nonparametric Scene Modeling
ABSTRACT: This paper explores the use of a Bayesian non-parametric topic modeling
technique for the purpose of anomaly detection in video data. We present
results from two experiments. The first experiment shows that the proposed
technique is automatically able to characterize the underlying terrain and detect
anomalous flora in image data collected by an underwater robot. The second
experiment shows that the same technique can be used on images from a static
camera in a dynamic unstructured environment. In the second dataset, consisting
of video data from a static seafloor camera capturing images of a busy coral
reef, the proposed technique was able to detect all three instances of an
underwater vehicle passing in front of the camera, amongst many other
observations of fishes, debris, lighting changes due to surface waves, and
benthic flora.
| no_new_dataset | 0.774669 |
1512.01344 | Sandipan Sikdar | Sandipan Sikdar, Niloy Ganguly and Animesh Mukherjee | Time series analysis of temporal networks | null | null | 10.1140/epjb/e2015-60654-7 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important feature of all real-world networks is that the network structure
changes over time. Due to this dynamic nature, it becomes difficult to propose
suitable growth models that can explain the various important characteristic
properties of these networks. In fact, in many application-oriented studies
only knowing these properties is sufficient. In this paper, we show that even
if the network structure at a future time point is not available, one can still
manage to estimate its properties. We propose a novel method to map a temporal
network to a set of time series instances, analyze them and using a standard
forecast model of time series, try to predict the properties of a temporal
network at a later time instance. We mainly focus on the temporal network of
human face-to-face contacts and observe that it represents a stochastic
process with memory that can be modeled as ARIMA. We use cross validation
techniques to find the percentage accuracy of our predictions. An important
observation is that the frequency domain properties of the time series obtained
from spectrogram analysis could be used to refine the prediction framework by
identifying beforehand the cases where the error in prediction is likely to be
high. This leads to an improvement of 7.96% (for error level <= 20%) in
prediction accuracy on average across all datasets. As an application, we
show how such prediction scheme can be used to launch targeted attacks on
temporal networks.
| [
{
"version": "v1",
"created": "Fri, 4 Dec 2015 09:17:11 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Sikdar",
"Sandipan",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"Mukherjee",
"Animesh",
""
]
] | TITLE: Time series analysis of temporal networks
ABSTRACT: An important feature of all real-world networks is that the network structure
changes over time. Due to this dynamic nature, it becomes difficult to propose
suitable growth models that can explain the various important characteristic
properties of these networks. In fact, in many application-oriented studies
only knowing these properties is sufficient. In this paper, we show that even
if the network structure at a future time point is not available, one can still
manage to estimate its properties. We propose a novel method to map a temporal
network to a set of time series instances, analyze them and using a standard
forecast model of time series, try to predict the properties of a temporal
network at a later time instance. We mainly focus on the temporal network of
human face-to-face contacts and observe that it represents a stochastic
process with memory that can be modeled as ARIMA. We use cross validation
techniques to find the percentage accuracy of our predictions. An important
observation is that the frequency domain properties of the time series obtained
from spectrogram analysis could be used to refine the prediction framework by
identifying beforehand the cases where the error in prediction is likely to be
high. This leads to an improvement of 7.96% (for error level <= 20%) in
prediction accuracy on average across all datasets. As an application, we
show how such prediction scheme can be used to launch targeted attacks on
temporal networks.
| no_new_dataset | 0.946448 |
1512.04086 | Neeraj Kumar | Neeraj Kumar, Animesh Karmakar, Ranti Dev Sharma, Abhinav Mittal and
Amit Sethi | Deep Learning-Based Image Kernel for Inductive Transfer | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method to classify images from target classes with a small
number of training examples based on transfer learning from non-target classes.
Without using any more information than class labels for samples from
non-target classes, we train a Siamese net to estimate the probability of two
images to belong to the same class. With some post-processing, output of the
Siamese net can be used to form a gram matrix of a Mercer kernel. Coupled with
a support vector machine (SVM), such a kernel gave reasonable classification
accuracy on target classes without any fine-tuning. When the Siamese net was
only partially fine-tuned using a small number of samples from the target
classes, the resulting classifier outperformed the state-of-the-art and other
alternatives. We share class separation capabilities and insights into the
learning process of such a kernel on MNIST, Dogs vs. Cats, and CIFAR-10
datasets.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2015 17:12:45 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Feb 2016 06:59:54 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Feb 2016 09:51:27 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Kumar",
"Neeraj",
""
],
[
"Karmakar",
"Animesh",
""
],
[
"Sharma",
"Ranti Dev",
""
],
[
"Mittal",
"Abhinav",
""
],
[
"Sethi",
"Amit",
""
]
] | TITLE: Deep Learning-Based Image Kernel for Inductive Transfer
ABSTRACT: We propose a method to classify images from target classes with a small
number of training examples based on transfer learning from non-target classes.
Without using any more information than class labels for samples from
non-target classes, we train a Siamese net to estimate the probability of two
images to belong to the same class. With some post-processing, output of the
Siamese net can be used to form a gram matrix of a Mercer kernel. Coupled with
a support vector machine (SVM), such a kernel gave reasonable classification
accuracy on target classes without any fine-tuning. When the Siamese net was
only partially fine-tuned using a small number of samples from the target
classes, the resulting classifier outperformed the state-of-the-art and other
alternatives. We share class separation capabilities and insights into the
learning process of such a kernel on MNIST, Dogs vs. Cats, and CIFAR-10
datasets.
| no_new_dataset | 0.945197 |
1601.03541 | Harsh Thakkar | Saeedeh Shekarpour, Denis Lukovnikov, Ashwini Jaya Kumar, Kemele
Endris, Kuldeep Singh, Harsh Thakkar, Christoph Lange | Question Answering on Linked Data: Challenges and Future Directions | Submitted to Question Answering And Activity Analysis in
Participatory Sites (Q4APS) 2016 | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Question Answering (QA) systems are becoming the inspiring model for the
future of search engines. While underlying datasets for QA systems have
recently been promoted from unstructured datasets to structured datasets with
highly semantic-enriched metadata, question answering systems still face
serious challenges that leave them far short of desired expectations. In this
paper, we raise the challenges for building a Question Answering (QA) system
especially with the focus of employing structured data (i.e. knowledge graph).
This paper provides an exhaustive overview of the challenges known so far. Thus,
it helps researchers to easily spot open directions for the future research agenda.
| [
{
"version": "v1",
"created": "Thu, 14 Jan 2016 10:21:06 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 13:29:43 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Shekarpour",
"Saeedeh",
""
],
[
"Lukovnikov",
"Denis",
""
],
[
"Kumar",
"Ashwini Jaya",
""
],
[
"Endris",
"Kemele",
""
],
[
"Singh",
"Kuldeep",
""
],
[
"Thakkar",
"Harsh",
""
],
[
"Lange",
"Christoph",
""
]
] | TITLE: Question Answering on Linked Data: Challenges and Future Directions
ABSTRACT: Question Answering (QA) systems are becoming the inspiring model for the
future of search engines. While underlying datasets for QA systems have
recently been promoted from unstructured datasets to structured datasets with
highly semantic-enriched metadata, question answering systems still face
serious challenges that leave them far short of desired expectations. In this
paper, we raise the challenges for building a Question Answering (QA) system
especially with the focus of employing structured data (i.e. knowledge graph).
This paper provides an exhaustive overview of the challenges known so far. Thus,
it helps researchers to easily spot open directions for the future research agenda.
| no_new_dataset | 0.944177 |
1602.03730 | Saravanan Thirumuruganathan | Md Farhadur Rahman, Weimo Liu, Saad Bin Suhaim, Saravanan
Thirumuruganathan, Nan Zhang, Gautam Das | HDBSCAN: Density based Clustering over Location Based Services | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Location Based Services (LBS) have become extremely popular and used by
millions of users. Popular LBS run the entire gamut from mapping services (such
as Google Maps) to restaurants (such as Yelp) and real estate (such as Redfin).
The public query interfaces of LBS can be abstractly modeled as a kNN interface
over a database of two dimensional points: given an arbitrary query point, the
system returns the k points in the database that are nearest to the query
point. Often, k is set to a small value such as 20 or 50. In this paper, we
consider the novel problem of enabling density based clustering over an LBS
with only a limited, kNN query interface. Due to the query rate limits imposed
by LBS, even retrieving every tuple once is infeasible. Hence, we seek to
construct a cluster assignment function f(.) by issuing a small number of kNN
queries, such that for any given tuple t in the database which may or may not
have been accessed, f(.) outputs the cluster assignment of t with high
accuracy. We conduct a comprehensive set of experiments over benchmark datasets
and popular real-world LBS such as Yahoo! Flickr, Zillow, Redfin and Google
Maps.
| [
{
"version": "v1",
"created": "Thu, 11 Feb 2016 14:06:02 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 07:22:37 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Rahman",
"Md Farhadur",
""
],
[
"Liu",
"Weimo",
""
],
[
"Suhaim",
"Saad Bin",
""
],
[
"Thirumuruganathan",
"Saravanan",
""
],
[
"Zhang",
"Nan",
""
],
[
"Das",
"Gautam",
""
]
] | TITLE: HDBSCAN: Density based Clustering over Location Based Services
ABSTRACT: Location Based Services (LBS) have become extremely popular and used by
millions of users. Popular LBS run the entire gamut from mapping services (such
as Google Maps) to restaurants (such as Yelp) and real estate (such as Redfin).
The public query interfaces of LBS can be abstractly modeled as a kNN interface
over a database of two dimensional points: given an arbitrary query point, the
system returns the k points in the database that are nearest to the query
point. Often, k is set to a small value such as 20 or 50. In this paper, we
consider the novel problem of enabling density based clustering over an LBS
with only a limited, kNN query interface. Due to the query rate limits imposed
by LBS, even retrieving every tuple once is infeasible. Hence, we seek to
construct a cluster assignment function f(.) by issuing a small number of kNN
queries, such that for any given tuple t in the database which may or may not
have been accessed, f(.) outputs the cluster assignment of t with high
accuracy. We conduct a comprehensive set of experiments over benchmark datasets
and popular real-world LBS such as Yahoo! Flickr, Zillow, Redfin and Google
Maps.
| no_new_dataset | 0.949529 |
1602.04886 | Andrew Jaegle | Andrew Jaegle, Stephen Phillips, Kostas Daniilidis | Fast, Robust, Continuous Monocular Egomotion Computation | Accepted as a conference paper at ICRA 2016. Main paper: 8 pages, 7
figures. Supplement: 4 pages, 2 figures | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose robust methods for estimating camera egomotion in noisy,
real-world monocular image sequences in the general case of unknown observer
rotation and translation with two views and a small baseline. This is a
difficult problem because of the nonconvex cost function of the perspective
camera motion equation and because of non-Gaussian noise arising from noisy
optical flow estimates and scene non-rigidity. To address this problem, we
introduce the expected residual likelihood method (ERL), which estimates
confidence weights for noisy optical flow data using likelihood distributions
of the residuals of the flow field under a range of counterfactual model
parameters. We show that ERL is effective at identifying outliers and
recovering appropriate confidence weights in many settings. We compare ERL to a
novel formulation of the perspective camera motion equation using a lifted
kernel, a recently proposed optimization framework for joint parameter and
confidence weight estimation with good empirical properties. We incorporate
these strategies into a motion estimation pipeline that avoids falling into
local minima. We find that ERL outperforms the lifted kernel method and
baseline monocular egomotion estimation strategies on the challenging KITTI
dataset, while adding almost no runtime cost over baseline egomotion methods.
| [
{
"version": "v1",
"created": "Tue, 16 Feb 2016 02:18:04 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Jaegle",
"Andrew",
""
],
[
"Phillips",
"Stephen",
""
],
[
"Daniilidis",
"Kostas",
""
]
] | TITLE: Fast, Robust, Continuous Monocular Egomotion Computation
ABSTRACT: We propose robust methods for estimating camera egomotion in noisy,
real-world monocular image sequences in the general case of unknown observer
rotation and translation with two views and a small baseline. This is a
difficult problem because of the nonconvex cost function of the perspective
camera motion equation and because of non-Gaussian noise arising from noisy
optical flow estimates and scene non-rigidity. To address this problem, we
introduce the expected residual likelihood method (ERL), which estimates
confidence weights for noisy optical flow data using likelihood distributions
of the residuals of the flow field under a range of counterfactual model
parameters. We show that ERL is effective at identifying outliers and
recovering appropriate confidence weights in many settings. We compare ERL to a
novel formulation of the perspective camera motion equation using a lifted
kernel, a recently proposed optimization framework for joint parameter and
confidence weight estimation with good empirical properties. We incorporate
these strategies into a motion estimation pipeline that avoids falling into
local minima. We find that ERL outperforms the lifted kernel method and
baseline monocular egomotion estimation strategies on the challenging KITTI
dataset, while adding almost no runtime cost over baseline egomotion methods.
| no_new_dataset | 0.947962 |
1602.04933 | Patrick Kenekayoro Mr | Patrick Kenekayoro and Godswill Zipamone | Greedy Ants Colony Optimization Strategy for Solving the Curriculum
Based University Course Timetabling Problem | null | null | null | null | cs.NE | http://creativecommons.org/licenses/by/4.0/ | Timetabling is a problem faced in all higher education institutions. The
International Timetabling Competition (ITC) has published a dataset that can be
used to test the quality of methods used to solve this problem. A number of
meta-heuristic approaches have obtained good results when tested on the ITC
dataset; however, few have used the ant colony optimization technique,
particularly on the ITC 2007 curriculum based university course timetabling
problem. This study describes an ant system that solves the curriculum based
university course timetabling problem and the quality of the algorithm is
tested on the ITC 2007 dataset. The ant system was able to find feasible
solutions in all instances of the dataset and close to optimal solutions in
some instances. The ant system performs better than some published approaches,
however results obtained are not as good as those obtained by the best
published approaches. This study may be used as a benchmark for ant based
algorithms that solve the curriculum based university course timetabling
problem.
| [
{
"version": "v1",
"created": "Tue, 16 Feb 2016 08:02:49 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Kenekayoro",
"Patrick",
""
],
[
"Zipamone",
"Godswill",
""
]
] | TITLE: Greedy Ants Colony Optimization Strategy for Solving the Curriculum
Based University Course Timetabling Problem
ABSTRACT: Timetabling is a problem faced in all higher education institutions. The
International Timetabling Competition (ITC) has published a dataset that can be
used to test the quality of methods used to solve this problem. A number of
meta-heuristic approaches have obtained good results when tested on the ITC
dataset; however, few have used the ant colony optimization technique,
particularly on the ITC 2007 curriculum based university course timetabling
problem. This study describes an ant system that solves the curriculum based
university course timetabling problem and the quality of the algorithm is
tested on the ITC 2007 dataset. The ant system was able to find feasible
solutions in all instances of the dataset and close to optimal solutions in
some instances. The ant system performs better than some published approaches;
however, the results obtained are not as good as those of the best published
approaches. This study may be used as a benchmark for ant-based
algorithms that solve the curriculum based university course timetabling
problem.
| new_dataset | 0.697763 |
1602.04983 | Sreyasi Nag Chowdhury | Sreyasi Nag Chowdhury, Mateusz Malinowski, Andreas Bulling, Mario
Fritz | Contextual Media Retrieval Using Natural Language Queries | 8 pages, 9 figures, 1 table | null | null | null | cs.IR cs.AI cs.CL cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread integration of cameras in hand-held and head-worn devices as
well as the ability to share content online enables a large and diverse visual
capture of the world that millions of users build up collectively every day. We
envision these images as well as associated meta information, such as GPS
coordinates and timestamps, to form a collective visual memory that can be
queried while automatically taking the ever-changing context of mobile users
into account. As a first step towards this vision, in this work we present
Xplore-M-Ego: a novel media retrieval system that allows users to query a
dynamic database of images and videos using spatio-temporal natural language
queries. We evaluate our system using a new dataset of real user queries as
well as through a usability study. One key finding is that there is a
considerable amount of inter-user variability, for example in the resolution of
spatial relations in natural language utterances. We show that our retrieval
system can cope with this variability using personalisation through an online
learning-based retrieval formulation.
| [
{
"version": "v1",
"created": "Tue, 16 Feb 2016 11:04:29 GMT"
}
] | 2016-02-17T00:00:00 | [
[
"Chowdhury",
"Sreyasi Nag",
""
],
[
"Malinowski",
"Mateusz",
""
],
[
"Bulling",
"Andreas",
""
],
[
"Fritz",
"Mario",
""
]
] | TITLE: Contextual Media Retrieval Using Natural Language Queries
ABSTRACT: The widespread integration of cameras in hand-held and head-worn devices as
well as the ability to share content online enables a large and diverse visual
capture of the world that millions of users build up collectively every day. We
envision these images as well as associated meta information, such as GPS
coordinates and timestamps, to form a collective visual memory that can be
queried while automatically taking the ever-changing context of mobile users
into account. As a first step towards this vision, in this work we present
Xplore-M-Ego: a novel media retrieval system that allows users to query a
dynamic database of images and videos using spatio-temporal natural language
queries. We evaluate our system using a new dataset of real user queries as
well as through a usability study. One key finding is that there is a
considerable amount of inter-user variability, for example in the resolution of
spatial relations in natural language utterances. We show that our retrieval
system can cope with this variability using personalisation through an online
learning-based retrieval formulation.
| new_dataset | 0.961461 |
1503.06858 | Yingyu Liang | Maria-Florina Balcan, Yingyu Liang, Le Song, David Woodruff, Bo Xie | Communication Efficient Distributed Kernel Principal Component Analysis | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kernel Principal Component Analysis (KPCA) is a key machine learning
algorithm for extracting nonlinear features from data. In the presence of a
large volume of high dimensional data collected in a distributed fashion, it
becomes very costly to communicate all of this data to a single data center and
then perform kernel PCA. Can we perform kernel PCA on the entire dataset in a
distributed and communication efficient fashion while maintaining provable and
strong guarantees in solution quality?
In this paper, we give an affirmative answer to the question by developing a
communication efficient algorithm to perform kernel PCA in the distributed
setting. The algorithm is a clever combination of subspace embedding and
adaptive sampling techniques, and we show that the algorithm can take as input
an arbitrary configuration of distributed datasets, and compute a set of global
kernel principal components with relative error guarantees independent of the
dimension of the feature space or the total number of data points. In
particular, computing $k$ principal components with relative error $\epsilon$
over $s$ workers has communication cost $\tilde{O}(s \rho k/\epsilon+s
k^2/\epsilon^3)$ words, where $\rho$ is the average number of nonzero entries
in each data point. Furthermore, we experimented the algorithm with large-scale
real world datasets and showed that the algorithm produces a high quality
kernel PCA solution while using significantly less communication than
alternative approaches.
| [
{
"version": "v1",
"created": "Mon, 23 Mar 2015 22:00:51 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Jul 2015 03:19:53 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Oct 2015 17:23:53 GMT"
},
{
"version": "v4",
"created": "Sat, 13 Feb 2016 23:40:11 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Balcan",
"Maria-Florina",
""
],
[
"Liang",
"Yingyu",
""
],
[
"Song",
"Le",
""
],
[
"Woodruff",
"David",
""
],
[
"Xie",
"Bo",
""
]
] | TITLE: Communication Efficient Distributed Kernel Principal Component Analysis
ABSTRACT: Kernel Principal Component Analysis (KPCA) is a key machine learning
algorithm for extracting nonlinear features from data. In the presence of a
large volume of high dimensional data collected in a distributed fashion, it
becomes very costly to communicate all of this data to a single data center and
then perform kernel PCA. Can we perform kernel PCA on the entire dataset in a
distributed and communication efficient fashion while maintaining provable and
strong guarantees in solution quality?
In this paper, we give an affirmative answer to the question by developing a
communication efficient algorithm to perform kernel PCA in the distributed
setting. The algorithm is a clever combination of subspace embedding and
adaptive sampling techniques, and we show that the algorithm can take as input
an arbitrary configuration of distributed datasets, and compute a set of global
kernel principal components with relative error guarantees independent of the
dimension of the feature space or the total number of data points. In
particular, computing $k$ principal components with relative error $\epsilon$
over $s$ workers has communication cost $\tilde{O}(s \rho k/\epsilon+s
k^2/\epsilon^3)$ words, where $\rho$ is the average number of nonzero entries
in each data point. Furthermore, we experimented the algorithm with large-scale
real world datasets and showed that the algorithm produces a high quality
kernel PCA solution while using significantly less communication than
alternative approaches.
| no_new_dataset | 0.94887 |
1510.00149 | Song Han | Song Han, Huizi Mao, William J. Dally | Deep Compression: Compressing Deep Neural Networks with Pruning, Trained
Quantization and Huffman Coding | Published as a conference paper at ICLR 2016 (oral) | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems with limited hardware
resources. To address this limitation, we introduce "deep compression", a three
stage pipeline: pruning, trained quantization and Huffman coding, that work
together to reduce the storage requirement of neural networks by 35x to 49x
without affecting their accuracy. Our method first prunes the network by
learning only the important connections. Next, we quantize the weights to
enforce weight sharing, finally, we apply Huffman coding. After the first two
steps we retrain the network to fine tune the remaining connections and the
quantized centroids. Pruning reduces the number of connections by 9x to 13x;
Quantization then reduces the number of bits that represent each connection
from 32 to 5. On the ImageNet dataset, our method reduced the storage required
by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method
reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of
accuracy. This allows fitting the model into on-chip SRAM cache rather than
off-chip DRAM memory. Our compression method also facilitates the use of
complex neural networks in mobile applications where application size and
download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU,
compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy
efficiency.
| [
{
"version": "v1",
"created": "Thu, 1 Oct 2015 09:03:44 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Oct 2015 23:53:10 GMT"
},
{
"version": "v3",
"created": "Fri, 20 Nov 2015 06:35:19 GMT"
},
{
"version": "v4",
"created": "Tue, 19 Jan 2016 09:04:04 GMT"
},
{
"version": "v5",
"created": "Mon, 15 Feb 2016 06:25:40 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Han",
"Song",
""
],
[
"Mao",
"Huizi",
""
],
[
"Dally",
"William J.",
""
]
] | TITLE: Deep Compression: Compressing Deep Neural Networks with Pruning, Trained
Quantization and Huffman Coding
ABSTRACT: Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems with limited hardware
resources. To address this limitation, we introduce "deep compression", a three
stage pipeline: pruning, trained quantization and Huffman coding, that work
together to reduce the storage requirement of neural networks by 35x to 49x
without affecting their accuracy. Our method first prunes the network by
learning only the important connections. Next, we quantize the weights to
enforce weight sharing, finally, we apply Huffman coding. After the first two
steps we retrain the network to fine tune the remaining connections and the
quantized centroids. Pruning reduces the number of connections by 9x to 13x;
Quantization then reduces the number of bits that represent each connection
from 32 to 5. On the ImageNet dataset, our method reduced the storage required
by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method
reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of
accuracy. This allows fitting the model into on-chip SRAM cache rather than
off-chip DRAM memory. Our compression method also facilitates the use of
complex neural networks in mobile applications where application size and
download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU,
compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy
efficiency.
| no_new_dataset | 0.943504 |
1511.04119 | Shikhar Sharma | Shikhar Sharma, Ryan Kiros, Ruslan Salakhutdinov | Action Recognition using Visual Attention | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a soft attention based model for the task of action recognition in
videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long
Short-Term Memory (LSTM) units which are deep both spatially and temporally.
Our model learns to focus selectively on parts of the video frames and
classifies videos after taking a few glimpses. The model essentially learns
which parts in the frames are relevant for the task at hand and attaches higher
importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51
and Hollywood2 datasets and analyze how the model focuses its attention
depending on the scene and the action being performed.
| [
{
"version": "v1",
"created": "Thu, 12 Nov 2015 23:06:42 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jan 2016 20:46:47 GMT"
},
{
"version": "v3",
"created": "Sun, 14 Feb 2016 17:20:19 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Sharma",
"Shikhar",
""
],
[
"Kiros",
"Ryan",
""
],
[
"Salakhutdinov",
"Ruslan",
""
]
] | TITLE: Action Recognition using Visual Attention
ABSTRACT: We propose a soft attention based model for the task of action recognition in
videos. We use multi-layered Recurrent Neural Networks (RNNs) with Long
Short-Term Memory (LSTM) units which are deep both spatially and temporally.
Our model learns to focus selectively on parts of the video frames and
classifies videos after taking a few glimpses. The model essentially learns
which parts in the frames are relevant for the task at hand and attaches higher
importance to them. We evaluate the model on UCF-11 (YouTube Action), HMDB-51
and Hollywood2 datasets and analyze how the model focuses its attention
depending on the scene and the action being performed.
| no_new_dataset | 0.950319 |
1511.04581 | Eugene Belilovsky | Wacha Bounliphone, Eugene Belilovsky, Matthew B. Blaschko, Ioannis
Antonoglou, Arthur Gretton | A Test of Relative Similarity For Model Selection in Generative Models | International Conference on Learning Representations 2016 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic generative models provide a powerful framework for representing
data that avoids the expense of manual annotation typically needed by
discriminative approaches. Model selection in this generative setting can be
challenging, however, particularly when likelihoods are not easily accessible.
To address this issue, we introduce a statistical test of relative similarity,
which is used to determine which of two models generates samples that are
significantly closer to a real-world reference dataset of interest. We use as
our test statistic the difference in maximum mean discrepancies (MMDs) between
the reference dataset and each model dataset, and derive a powerful,
low-variance test based on the joint asymptotic distribution of the MMDs
between each reference-model pair. In experiments on deep generative models,
including the variational auto-encoder and generative moment matching network,
the tests provide a meaningful ranking of model performance as a function of
parameter and training settings.
| [
{
"version": "v1",
"created": "Sat, 14 Nov 2015 17:18:47 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Nov 2015 11:12:05 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jan 2016 15:35:53 GMT"
},
{
"version": "v4",
"created": "Mon, 15 Feb 2016 15:12:44 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Bounliphone",
"Wacha",
""
],
[
"Belilovsky",
"Eugene",
""
],
[
"Blaschko",
"Matthew B.",
""
],
[
"Antonoglou",
"Ioannis",
""
],
[
"Gretton",
"Arthur",
""
]
] | TITLE: A Test of Relative Similarity For Model Selection in Generative Models
ABSTRACT: Probabilistic generative models provide a powerful framework for representing
data that avoids the expense of manual annotation typically needed by
discriminative approaches. Model selection in this generative setting can be
challenging, however, particularly when likelihoods are not easily accessible.
To address this issue, we introduce a statistical test of relative similarity,
which is used to determine which of two models generates samples that are
significantly closer to a real-world reference dataset of interest. We use as
our test statistic the difference in maximum mean discrepancies (MMDs) between
the reference dataset and each model dataset, and derive a powerful,
low-variance test based on the joint asymptotic distribution of the MMDs
between each reference-model pair. In experiments on deep generative models,
including the variational auto-encoder and generative moment matching network,
the tests provide a meaningful ranking of model performance as a function of
parameter and training settings.
| no_new_dataset | 0.930868 |
1511.04747 | Sayan Ghosh | Sayan Ghosh, Eugene Laksana, Louis-Philippe Morency, Stefan Scherer | Learning Representations of Affect from Speech | This is a submission for the ICLR (International Conference on
Learning Representations) Workshop 2016 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been a lot of prior work on representation learning for speech
recognition applications, but not much emphasis has been given to an
investigation of effective representations of affect from speech, where the
paralinguistic elements of speech are separated out from the verbal content. In
this paper, we explore denoising autoencoders for learning paralinguistic
attributes i.e. categorical and dimensional affective traits from speech. We
show that the representations learnt by the bottleneck layer of the autoencoder
are highly discriminative of activation intensity and effective at separating out
negative valence (sadness and anger) from positive valence (happiness). We
experiment with different input speech features (such as FFT and log-mel
spectrograms with temporal context windows), and different autoencoder
architectures (such as stacked and deep autoencoders). We also learn utterance
specific representations by a combination of denoising autoencoders and BLSTM
based recurrent autoencoders. Emotion classification is performed with the
learnt temporal/dynamic representations to evaluate the quality of the
representations. Experiments on a well-established real-life speech dataset
(IEMOCAP) show that the learnt representations are comparable to state of the
art feature extractors (such as voice quality features and MFCCs) and are
competitive with state-of-the-art approaches at emotion and dimensional affect
recognition.
| [
{
"version": "v1",
"created": "Sun, 15 Nov 2015 18:16:20 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Nov 2015 01:37:01 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Jan 2016 20:44:51 GMT"
},
{
"version": "v4",
"created": "Mon, 18 Jan 2016 20:36:36 GMT"
},
{
"version": "v5",
"created": "Tue, 19 Jan 2016 04:05:50 GMT"
},
{
"version": "v6",
"created": "Sun, 14 Feb 2016 18:11:46 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Ghosh",
"Sayan",
""
],
[
"Laksana",
"Eugene",
""
],
[
"Morency",
"Louis-Philippe",
""
],
[
"Scherer",
"Stefan",
""
]
] | TITLE: Learning Representations of Affect from Speech
ABSTRACT: There has been a lot of prior work on representation learning for speech
recognition applications, but not much emphasis has been given to an
investigation of effective representations of affect from speech, where the
paralinguistic elements of speech are separated out from the verbal content. In
this paper, we explore denoising autoencoders for learning paralinguistic
attributes i.e. categorical and dimensional affective traits from speech. We
show that the representations learnt by the bottleneck layer of the autoencoder
are highly discriminative of activation intensity and effective at separating out
negative valence (sadness and anger) from positive valence (happiness). We
experiment with different input speech features (such as FFT and log-mel
spectrograms with temporal context windows), and different autoencoder
architectures (such as stacked and deep autoencoders). We also learn utterance
specific representations by a combination of denoising autoencoders and BLSTM
based recurrent autoencoders. Emotion classification is performed with the
learnt temporal/dynamic representations to evaluate the quality of the
representations. Experiments on a well-established real-life speech dataset
(IEMOCAP) show that the learnt representations are comparable to state of the
art feature extractors (such as voice quality features and MFCCs) and are
competitive with state-of-the-art approaches at emotion and dimensional affect
recognition.
| no_new_dataset | 0.947235 |
1511.06067 | Cheng Tai | Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, Weinan E | Convolutional neural networks with low-rank regularization | null | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large CNNs have delivered impressive performance in various computer vision
applications. But the storage and computation requirements make it problematic
for deploying these models on mobile devices. Recently, tensor decompositions
have been used for speeding up CNNs. In this paper, we further develop the
tensor decomposition technique. We propose a new algorithm for computing the
low-rank tensor decomposition for removing the redundancy in the convolution
kernels. The algorithm finds the exact global optimizer of the decomposition
and is more effective than iterative methods. Based on the decomposition, we
further propose a new method for training low-rank constrained CNNs from
scratch. Interestingly, while achieving a significant speedup, sometimes the
low-rank constrained CNNs deliver significantly better performance than their
non-constrained counterparts. On the CIFAR-10 dataset, the proposed low-rank
NIN model achieves $91.31\%$ accuracy (without data augmentation), which also
improves upon the state-of-the-art result. We evaluated the proposed method on
CIFAR-10 and ILSVRC12 datasets for a variety of modern CNNs, including AlexNet,
NIN, VGG and GoogleNet with success. For example, the forward time of VGG-16 is
reduced by half while the performance is still comparable. Empirical success
suggests that low-rank tensor decompositions can be a very useful tool for
speeding up large CNNs.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 06:13:55 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Dec 2015 23:46:17 GMT"
},
{
"version": "v3",
"created": "Sun, 14 Feb 2016 03:46:09 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Tai",
"Cheng",
""
],
[
"Xiao",
"Tong",
""
],
[
"Zhang",
"Yi",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"E",
"Weinan",
""
]
] | TITLE: Convolutional neural networks with low-rank regularization
ABSTRACT: Large CNNs have delivered impressive performance in various computer vision
applications. But the storage and computation requirements make it problematic
for deploying these models on mobile devices. Recently, tensor decompositions
have been used for speeding up CNNs. In this paper, we further develop the
tensor decomposition technique. We propose a new algorithm for computing the
low-rank tensor decomposition for removing the redundancy in the convolution
kernels. The algorithm finds the exact global optimizer of the decomposition
and is more effective than iterative methods. Based on the decomposition, we
further propose a new method for training low-rank constrained CNNs from
scratch. Interestingly, while achieving a significant speedup, sometimes the
low-rank constrained CNNs deliver significantly better performance than their
non-constrained counterparts. On the CIFAR-10 dataset, the proposed low-rank
NIN model achieves $91.31\%$ accuracy (without data augmentation), which also
improves upon the state-of-the-art result. We evaluated the proposed method on
CIFAR-10 and ILSVRC12 datasets for a variety of modern CNNs, including AlexNet,
NIN, VGG and GoogleNet with success. For example, the forward time of VGG-16 is
reduced by half while the performance is still comparable. Empirical success
suggests that low-rank tensor decompositions can be a very useful tool for
speeding up large CNNs.
| no_new_dataset | 0.949201 |
1602.02255 | Qing-Yuan Jiang | Qing-Yuan Jiang, Wu-Jun Li | Deep Cross-Modal Hashing | 12 pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to its low storage cost and fast query speed, cross-modal hashing (CMH)
has been widely used for similarity search in multimedia retrieval
applications. However, almost all existing CMH methods are based on
hand-crafted features which might not be optimally compatible with the
hash-code learning procedure. As a result, existing CMH methods with
handcrafted features may not achieve satisfactory performance. In this paper,
we propose a novel cross-modal hashing method, called deep cross-modal hashing
(DCMH), by integrating feature learning and hash-code learning into the same
framework. DCMH is an end-to-end learning framework with deep neural networks,
one for each modality, to perform feature learning from scratch. Experiments on
two real datasets with text-image modalities show that DCMH can outperform
other baselines to achieve the state-of-the-art performance in cross-modal
retrieval applications.
| [
{
"version": "v1",
"created": "Sat, 6 Feb 2016 13:43:24 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Feb 2016 09:43:56 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Jiang",
"Qing-Yuan",
""
],
[
"Li",
"Wu-Jun",
""
]
] | TITLE: Deep Cross-Modal Hashing
ABSTRACT: Due to its low storage cost and fast query speed, cross-modal hashing (CMH)
has been widely used for similarity search in multimedia retrieval
applications. However, almost all existing CMH methods are based on
hand-crafted features which might not be optimally compatible with the
hash-code learning procedure. As a result, existing CMH methods with
handcrafted features may not achieve satisfactory performance. In this paper,
we propose a novel cross-modal hashing method, called deep cross-modal hashing
(DCMH), by integrating feature learning and hash-code learning into the same
framework. DCMH is an end-to-end learning framework with deep neural networks,
one for each modality, to perform feature learning from scratch. Experiments on
two real datasets with text-image modalities show that DCMH can outperform
other baselines to achieve the state-of-the-art performance in cross-modal
retrieval applications.
| no_new_dataset | 0.944638 |
1602.04281 | Nicholas Bolten | Nicholas Bolten, Amirhossein Amini, Yun Hao, Vaishnavi Ravichandran,
Andre Stephens, Anat Caspi | Urban sidewalks: visualization and routing for individuals with limited
mobility | null | null | null | null | cs.CY cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People with limited mobility in the U.S. (defined as having difficulty or
inability to walk a quarter of a mile without help and without the use of
special equipment) face a growing informational gap: while pedestrian routing
algorithms are getting faster and more informative, planning a route with a
wheeled device in urban centers is very difficult due to lack of integrated
pertinent information regarding accessibility along the route. Moreover,
reducing access to street-spaces translates to reduced access to other public
information and services that are increasingly made available to the public
along urban streets. To adequately plan a commute, a traveler with limited or
wheeled mobility must know whether her path may be blocked by construction,
whether the sidewalk would be too steep or rendered unusable due to poor
conditions, whether the street can be crossed or a highway is blocking the way,
or whether there is a sidewalk at all. These details populate different
datasets in many modern municipalities, but they are not immediately available
in a convenient, integrated format to be useful to people with limited
mobility. Our project, AccessMap, in its first phase (v.1) overlaid the
information that is most relevant to people with limited mobility on a map,
enabling self-planning of routes. Here, we describe the next phase of the
project: synthesizing commonly available open data (including streets,
sidewalks, curb ramps, elevation data, and construction permit information) to
generate a graph of paths to enable variable cost-function accessible routing.
| [
{
"version": "v1",
"created": "Sat, 13 Feb 2016 03:42:17 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Bolten",
"Nicholas",
""
],
[
"Amini",
"Amirhossein",
""
],
[
"Hao",
"Yun",
""
],
[
"Ravichandran",
"Vaishnavi",
""
],
[
"Stephens",
"Andre",
""
],
[
"Caspi",
"Anat",
""
]
] | TITLE: Urban sidewalks: visualization and routing for individuals with limited
mobility
ABSTRACT: People with limited mobility in the U.S. (defined as having difficulty or
inability to walk a quarter of a mile without help and without the use of
special equipment) face a growing informational gap: while pedestrian routing
algorithms are getting faster and more informative, planning a route with a
wheeled device in urban centers is very difficult due to lack of integrated
pertinent information regarding accessibility along the route. Moreover,
reducing access to street-spaces translates to reduced access to other public
information and services that are increasingly made available to the public
along urban streets. To adequately plan a commute, a traveler with limited or
wheeled mobility must know whether her path may be blocked by construction,
whether the sidewalk would be too steep or rendered unusable due to poor
conditions, whether the street can be crossed or a highway is blocking the way,
or whether there is a sidewalk at all. These details populate different
datasets in many modern municipalities, but they are not immediately available
in a convenient, integrated format to be useful to people with limited
mobility. Our project, AccessMap, in its first phase (v.1) overlaid the
information that is most relevant to people with limited mobility on a map,
enabling self-planning of routes. Here, we describe the next phase of the
project: synthesizing commonly available open data (including streets,
sidewalks, curb ramps, elevation data, and construction permit information) to
generate a graph of paths to enable variable cost-function accessible routing.
| no_new_dataset | 0.943815 |
1602.04348 | Shuye Zhang | Shuye Zhang, Mude Lin, Tianshui Chen, Lianwen Jin, Liang Lin | Character Proposal Network for Robust Text Extraction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maximally stable extremal regions (MSER), which is a popular method to
generate character proposals/candidates, has shown superior performance in
scene text detection. However, the pixel-level operation limits its capability
for handling some challenging cases (e.g., multiple connected characters,
separated parts of one character and non-uniform illumination). To better
tackle these cases, we design a character proposal network (CPN) by taking
advantage of the high capacity and fast computing of fully convolutional
network (FCN). Specifically, the network simultaneously predicts characterness
scores and refines the corresponding locations. The characterness scores can be
used for proposal ranking to reject non-character proposals and the refining
process aims to obtain the more accurate locations. Furthermore, considering
the situation that different characters have different aspect ratios, we
propose a multi-template strategy, designing a refiner for each aspect ratio.
The extensive experiments indicate our method achieves recall rates of 93.88%,
93.60% and 96.46% on ICDAR 2013, SVT and Chinese2k datasets respectively using
less than 1000 proposals, demonstrating promising performance of our character
proposal network.
| [
{
"version": "v1",
"created": "Sat, 13 Feb 2016 15:55:17 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Zhang",
"Shuye",
""
],
[
"Lin",
"Mude",
""
],
[
"Chen",
"Tianshui",
""
],
[
"Jin",
"Lianwen",
""
],
[
"Lin",
"Liang",
""
]
] | TITLE: Character Proposal Network for Robust Text Extraction
ABSTRACT: Maximally stable extremal regions (MSER), which is a popular method to
generate character proposals/candidates, has shown superior performance in
scene text detection. However, the pixel-level operation limits its capability
for handling some challenging cases (e.g., multiple connected characters,
separated parts of one character and non-uniform illumination). To better
tackle these cases, we design a character proposal network (CPN) by taking
advantage of the high capacity and fast computing of fully convolutional
network (FCN). Specifically, the network simultaneously predicts characterness
scores and refines the corresponding locations. The characterness scores can be
used for proposal ranking to reject non-character proposals and the refining
process aims to obtain the more accurate locations. Furthermore, considering
the situation that different characters have different aspect ratios, we
propose a multi-template strategy, designing a refiner for each aspect ratio.
The extensive experiments indicate our method achieves recall rates of 93.88%,
93.60% and 96.46% on ICDAR 2013, SVT and Chinese2k datasets respectively using
less than 1000 proposals, demonstrating promising performance of our character
proposal network.
| no_new_dataset | 0.950041 |
1602.04364 | Jimmy Ren | Jimmy Ren, Yongtao Hu, Yu-Wing Tai, Chuan Wang, Li Xu, Wenxiu Sun,
Qiong Yan | Look, Listen and Learn - A Multimodal LSTM for Speaker Identification | The 30th AAAI Conference on Artificial Intelligence (AAAI-16) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speaker identification refers to the task of localizing the face of a person
who has the same identity as the ongoing voice in a video. This task requires
not only collective perception over both visual and auditory signals, but also
robustness to severe quality degradations and unconstrained content
variations. In this paper, we describe a novel
multimodal Long Short-Term Memory (LSTM) architecture which seamlessly unifies
both visual and auditory modalities from the beginning of each sequence input.
The key idea is to extend the conventional LSTM by not only sharing weights
across time steps, but also sharing weights across modalities. We show that
modeling the temporal dependency across face and voice can significantly
improve the robustness to content quality degradations and variations. We also
found that our multimodal LSTM is robustness to distractors, namely the
non-speaking identities. We applied our multimodal LSTM to The Big Bang Theory
dataset and showed that our system outperforms the state-of-the-art systems in
speaker identification with lower false alarm rate and higher recognition
accuracy.
| [
{
"version": "v1",
"created": "Sat, 13 Feb 2016 18:49:50 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Ren",
"Jimmy",
""
],
[
"Hu",
"Yongtao",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Wang",
"Chuan",
""
],
[
"Xu",
"Li",
""
],
[
"Sun",
"Wenxiu",
""
],
[
"Yan",
"Qiong",
""
]
] | TITLE: Look, Listen and Learn - A Multimodal LSTM for Speaker Identification
ABSTRACT: Speaker identification refers to the task of localizing the face of a person
who has the same identity as the ongoing voice in a video. This task requires
not only collective perception over both visual and auditory signals, but also
robustness to severe quality degradations and unconstrained content
variations. In this paper, we describe a novel
multimodal Long Short-Term Memory (LSTM) architecture which seamlessly unifies
both visual and auditory modalities from the beginning of each sequence input.
The key idea is to extend the conventional LSTM by not only sharing weights
across time steps, but also sharing weights across modalities. We show that
modeling the temporal dependency across face and voice can significantly
improve the robustness to content quality degradations and variations. We also
found that our multimodal LSTM is robustness to distractors, namely the
non-speaking identities. We applied our multimodal LSTM to The Big Bang Theory
dataset and showed that our system outperforms the state-of-the-art systems in
speaker identification with lower false alarm rate and higher recognition
accuracy.
| no_new_dataset | 0.94428 |
1602.04422 | Chunhua Shen | Peng Wang, Lingqiao Liu, Chunhua Shen, Anton van den Hengel, Heng Tao
Shen | Hi Detector, What's Wrong with that Object? Identifying Irregular Object
From Images by Modelling the Detection Score Distribution | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we study the challenging problem of identifying the irregular
status of objects from images in an "open world" setting, that is,
distinguishing the irregular status of an object category from its regular
status as well as objects from other categories in the absence of "irregular
object" training data. To address this problem, we propose a novel approach by
inspecting the distribution of the detection scores at multiple image regions
based on the detector trained from the "regular object" and "other objects".
The key observation motivating our approach is that for "regular object" images
as well as "other objects" images, the region-level scores follow their own
essential patterns in terms of both the score values and the spatial
distributions while the detection scores obtained from an "irregular object"
image tend to break these patterns. To model this distribution, we propose to
use Gaussian Processes (GP) to construct two separate generative models for the
case of the "regular object" and the "other objects". More specifically, we
design a new covariance function to simultaneously model the detection score at
a single region and the score dependencies at multiple regions. We finally
demonstrate the superior performance of our method on a large dataset newly
proposed in this paper.
| [
{
"version": "v1",
"created": "Sun, 14 Feb 2016 06:39:05 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Wang",
"Peng",
""
],
[
"Liu",
"Lingqiao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Shen",
"Heng Tao",
""
]
] | TITLE: Hi Detector, What's Wrong with that Object? Identifying Irregular Object
From Images by Modelling the Detection Score Distribution
ABSTRACT: In this work, we study the challenging problem of identifying the irregular
status of objects from images in an "open world" setting, that is,
distinguishing the irregular status of an object category from its regular
status as well as objects from other categories in the absence of "irregular
object" training data. To address this problem, we propose a novel approach by
inspecting the distribution of the detection scores at multiple image regions
based on the detector trained from the "regular object" and "other objects".
The key observation motivating our approach is that for "regular object" images
as well as "other objects" images, the region-level scores follow their own
essential patterns in terms of both the score values and the spatial
distributions while the detection scores obtained from an "irregular object"
image tend to break these patterns. To model this distribution, we propose to
use Gaussian Processes (GP) to construct two separate generative models for the
case of the "regular object" and the "other objects". More specifically, we
design a new covariance function to simultaneously model the detection score at
a single region and the score dependencies at multiple regions. We finally
demonstrate the superior performance of our method on a large dataset newly
proposed in this paper.
| no_new_dataset | 0.94625 |
1602.04502 | Bin Fan | Bin Fan, Qingqun Kong, Wei Sui, Zhiheng Wang, Xinchao Wang, Shiming
Xiang, Chunhong Pan, Pascal Fua | Do We Need Binary Features for 3D Reconstruction? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binary features have become increasingly popular in the past few years due to
their low memory footprints and the efficient computation of Hamming distance
between binary descriptors. They have shown promising results in some
real-time applications, e.g., SLAM, where the matching operations are
relatively few. However, in computer vision there are many applications, such
as 3D reconstruction, that require many matching operations between local
features. Therefore, a natural question is whether binary features are still a
promising solution for this kind of application. To find the answer, this
paper conducts a comparative study of binary features and their matching
methods in the context of 3D reconstruction on a recently proposed large-scale
multiview stereo dataset. Our evaluations reveal that not all binary features
are capable of this task. Most of them are inferior to the classical
SIFT-based method in terms of reconstruction accuracy and completeness, with
no significantly better computational performance.
| [
{
"version": "v1",
"created": "Sun, 14 Feb 2016 20:24:57 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Fan",
"Bin",
""
],
[
"Kong",
"Qingqun",
""
],
[
"Sui",
"Wei",
""
],
[
"Wang",
"Zhiheng",
""
],
[
"Wang",
"Xinchao",
""
],
[
"Xiang",
"Shiming",
""
],
[
"Pan",
"Chunhong",
""
],
[
"Fua",
"Pascal",
""
]
] | TITLE: Do We Need Binary Features for 3D Reconstruction?
ABSTRACT: Binary features have become increasingly popular in the past few years due to
their low memory footprints and the efficient computation of Hamming distance
between binary descriptors. They have shown promising results in some
real-time applications, e.g., SLAM, where the matching operations are
relatively few. However, in computer vision there are many applications, such
as 3D reconstruction, that require many matching operations between local
features. Therefore, a natural question is whether binary features are still a
promising solution for this kind of application. To find the answer, this
paper conducts a comparative study of binary features and their matching
methods in the context of 3D reconstruction on a recently proposed large-scale
multiview stereo dataset. Our evaluations reveal that not all binary features
are capable of this task. Most of them are inferior to the classical
SIFT-based method in terms of reconstruction accuracy and completeness, with
no significantly better computational performance.
| no_new_dataset | 0.935051 |
1602.04506 | Ranjay Krishna | Ranjay Krishna, Kenji Hata, Stephanie Chen, Joshua Kravitz, David A.
Shamma, Li Fei-Fei, Michael S. Bernstein | Embracing Error to Enable Rapid Crowdsourcing | 10 pages, 7 figures, CHI '16, CHI: ACM Conference on Human Factors in
Computing Systems (2016) | null | 10.1145/2858036.2858115 | null | cs.HC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microtask crowdsourcing has enabled dataset advances in social science and
machine learning, but existing crowdsourcing schemes are too expensive to scale
up with the expanding volume of data. To scale and widen the applicability of
crowdsourcing, we present a technique that produces extremely rapid judgments
for binary and categorical labels. Rather than punishing all errors, which
causes workers to proceed slowly and deliberately, our technique speeds up
workers' judgments to the point where errors are acceptable and even expected.
We demonstrate that it is possible to rectify these errors by randomizing task
order and modeling response latency. We evaluate our technique on a breadth of
common labeling tasks such as image verification, word similarity, sentiment
analysis and topic classification. Where prior work typically achieves a 0.25x
to 1x speedup over fixed majority vote, our approach often achieves an order of
magnitude (10x) speedup.
| [
{
"version": "v1",
"created": "Sun, 14 Feb 2016 20:56:01 GMT"
}
] | 2016-02-16T00:00:00 | [
[
"Krishna",
"Ranjay",
""
],
[
"Hata",
"Kenji",
""
],
[
"Chen",
"Stephanie",
""
],
[
"Kravitz",
"Joshua",
""
],
[
"Shamma",
"David A.",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Bernstein",
"Michael S.",
""
]
] | TITLE: Embracing Error to Enable Rapid Crowdsourcing
ABSTRACT: Microtask crowdsourcing has enabled dataset advances in social science and
machine learning, but existing crowdsourcing schemes are too expensive to scale
up with the expanding volume of data. To scale and widen the applicability of
crowdsourcing, we present a technique that produces extremely rapid judgments
for binary and categorical labels. Rather than punishing all errors, which
causes workers to proceed slowly and deliberately, our technique speeds up
workers' judgments to the point where errors are acceptable and even expected.
We demonstrate that it is possible to rectify these errors by randomizing task
order and modeling response latency. We evaluate our technique on a breadth of
common labeling tasks such as image verification, word similarity, sentiment
analysis and topic classification. Where prior work typically achieves a 0.25x
to 1x speedup over fixed majority vote, our approach often achieves an order of
magnitude (10x) speedup.
| no_new_dataset | 0.953492 |
1105.5332 | Andrej Cvetkovski | Andrej Cvetkovski and Mark Crovella | Multidimensional Scaling in the Poincare Disk | null | Applied Mathematics & Information Sciences, 10(1):125, 2016 | null | null | stat.ML cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multidimensional scaling (MDS) is a class of projective algorithms
traditionally used in Euclidean space to produce two- or three-dimensional
visualizations of datasets of multidimensional points or point distances. More
recently however, several authors have pointed out that for certain datasets,
hyperbolic target space may provide a better fit than Euclidean space.
In this paper we develop PD-MDS, a metric MDS algorithm designed specifically
for the Poincare disk (PD) model of the hyperbolic plane. Emphasizing the
importance of proceeding from first principles in spite of the availability of
various black box optimizers, our construction is based on an elementary
hyperbolic line search and reveals numerous particulars that need to be
carefully addressed when implementing this as well as more sophisticated
iterative optimization methods in a hyperbolic space model.
| [
{
"version": "v1",
"created": "Thu, 26 May 2011 16:05:23 GMT"
},
{
"version": "v2",
"created": "Sun, 29 May 2011 06:06:30 GMT"
},
{
"version": "v3",
"created": "Fri, 12 Feb 2016 09:39:02 GMT"
}
] | 2016-02-15T00:00:00 | [
[
"Cvetkovski",
"Andrej",
""
],
[
"Crovella",
"Mark",
""
]
] | TITLE: Multidimensional Scaling in the Poincare Disk
ABSTRACT: Multidimensional scaling (MDS) is a class of projective algorithms
traditionally used in Euclidean space to produce two- or three-dimensional
visualizations of datasets of multidimensional points or point distances. More
recently however, several authors have pointed out that for certain datasets,
hyperbolic target space may provide a better fit than Euclidean space.
In this paper we develop PD-MDS, a metric MDS algorithm designed specifically
for the Poincare disk (PD) model of the hyperbolic plane. Emphasizing the
importance of proceeding from first principles in spite of the availability of
various black box optimizers, our construction is based on an elementary
hyperbolic line search and reveals numerous particulars that need to be
carefully addressed when implementing this as well as more sophisticated
iterative optimization methods in a hyperbolic space model.
| no_new_dataset | 0.945801 |
1509.03959 | James A. Grieve | James A. Grieve, Rakhitha Chandrasekara, Zhongkan Tang, Cliff Cheng,
Alexander Ling | Correcting for accidental correlations in saturated avalanche
photodiodes | 8 pages, 6 figures; accepted for publication in Optics Express (final
text) | Opt. Express 24, 3592-3600 (2016) | 10.1364/OE.24.003592 | null | quant-ph physics.ins-det physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a general method for estimating rates of accidental
coincidence between a pair of single photon detectors operated within their
saturation regimes. By folding the effects of recovery time of both detectors
and the detection circuit into an "effective duty cycle", we are able to
accommodate complex recovery behaviour at high event rates. As an example, we
provide a detailed high-level model for the behaviour of passively quenched
avalanche photodiodes, and demonstrate effective background subtraction at
rates commonly associated with detector saturation. We show that by
post-processing using the updated model, we observe an improvement in
polarization correlation visibility from 88.7% to 96.9% in our experimental
dataset. This technique will be useful in improving the signal-to-noise ratio
in applications which depend on coincidence measurements, especially in
situations where rapid changes in flux may cause detector saturation.
| [
{
"version": "v1",
"created": "Mon, 14 Sep 2015 05:50:07 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Oct 2015 08:37:33 GMT"
},
{
"version": "v3",
"created": "Fri, 12 Feb 2016 02:25:23 GMT"
}
] | 2016-02-15T00:00:00 | [
[
"Grieve",
"James A.",
""
],
[
"Chandrasekara",
"Rakhitha",
""
],
[
"Tang",
"Zhongkan",
""
],
[
"Cheng",
"Cliff",
""
],
[
"Ling",
"Alexander",
""
]
] | TITLE: Correcting for accidental correlations in saturated avalanche
photodiodes
ABSTRACT: In this paper we present a general method for estimating rates of accidental
coincidence between a pair of single photon detectors operated within their
saturation regimes. By folding the effects of recovery time of both detectors
and the detection circuit into an "effective duty cycle", we are able to
accommodate complex recovery behaviour at high event rates. As an example, we
provide a detailed high-level model for the behaviour of passively quenched
avalanche photodiodes, and demonstrate effective background subtraction at
rates commonly associated with detector saturation. We show that by
post-processing using the updated model, we observe an improvement in
polarization correlation visibility from 88.7% to 96.9% in our experimental
dataset. This technique will be useful in improving the signal-to-noise ratio
in applications which depend on coincidence measurements, especially in
situations where rapid changes in flux may cause detector saturation.
| no_new_dataset | 0.942401 |
1601.07539 | Xiaolan Wang Xiaolan Wang | Xiaolan Wang, Alexandra Meliou, Eugene Wu | QFix: Diagnosing errors through query histories | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-driven applications rely on the correctness of their data to function
properly and effectively. Errors in data can be incredibly costly and
disruptive, leading to loss of revenue, incorrect conclusions, and misguided
policy decisions. While data cleaning tools can purge datasets of many errors
before the data is used, applications and users interacting with the data can
introduce new errors. Subsequent valid updates can obscure these errors and
propagate them through the dataset causing more discrepancies. Even when some
of these discrepancies are discovered, they are often corrected superficially,
on a case-by-case basis, further obscuring the true underlying cause, and
making detection of the remaining errors harder. In this paper, we propose
QFix, a framework that derives explanations and repairs for discrepancies in
relational data, by analyzing the effect of queries that operated on the data
and identifying potential mistakes in those queries. QFix is flexible, handling
scenarios where only a subset of the true discrepancies is known, and robust to
different types of update workloads. We make four important contributions: (a)
we formalize the problem of diagnosing the causes of data errors based on the
queries that operated on and introduced errors to a dataset; (b) we develop
exact methods for deriving diagnoses and fixes for identified errors using
state-of-the-art tools; (c) we present several optimization techniques that
improve our basic approach without compromising accuracy, and (d) we leverage a
tradeoff between accuracy and performance to scale diagnosis to large datasets
and query logs, while achieving near-optimal results. We demonstrate the
effectiveness of QFix through extensive evaluation over benchmark and synthetic
data.
| [
{
"version": "v1",
"created": "Wed, 27 Jan 2016 20:40:06 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Feb 2016 21:29:47 GMT"
}
] | 2016-02-15T00:00:00 | [
[
"Wang",
"Xiaolan",
""
],
[
"Meliou",
"Alexandra",
""
],
[
"Wu",
"Eugene",
""
]
] | TITLE: QFix: Diagnosing errors through query histories
ABSTRACT: Data-driven applications rely on the correctness of their data to function
properly and effectively. Errors in data can be incredibly costly and
disruptive, leading to loss of revenue, incorrect conclusions, and misguided
policy decisions. While data cleaning tools can purge datasets of many errors
before the data is used, applications and users interacting with the data can
introduce new errors. Subsequent valid updates can obscure these errors and
propagate them through the dataset causing more discrepancies. Even when some
of these discrepancies are discovered, they are often corrected superficially,
on a case-by-case basis, further obscuring the true underlying cause, and
making detection of the remaining errors harder. In this paper, we propose
QFix, a framework that derives explanations and repairs for discrepancies in
relational data, by analyzing the effect of queries that operated on the data
and identifying potential mistakes in those queries. QFix is flexible, handling
scenarios where only a subset of the true discrepancies is known, and robust to
different types of update workloads. We make four important contributions: (a)
we formalize the problem of diagnosing the causes of data errors based on the
queries that operated on and introduced errors to a dataset; (b) we develop
exact methods for deriving diagnoses and fixes for identified errors using
state-of-the-art tools; (c) we present several optimization techniques that
improve our basic approach without compromising accuracy, and (d) we leverage a
tradeoff between accuracy and performance to scale diagnosis to large datasets
and query logs, while achieving near-optimal results. We demonstrate the
effectiveness of QFix through extensive evaluation over benchmark and synthetic
data.
| no_new_dataset | 0.945248 |
1602.02575 | Xiangyu Wang | Xiangyu Wang, David Dunson, Chenlei Leng | DECOrrelated feature space partitioning for distributed sparse
regression | Correct legend errors in Figure 3 | null | null | null | stat.ME cs.DC stat.CO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fitting statistical models is computationally challenging when the sample
size or the dimension of the dataset is huge. An attractive approach for
down-scaling the problem size is to first partition the dataset into subsets
and then fit using distributed algorithms. The dataset can be partitioned
either horizontally (in the sample space) or vertically (in the feature space).
While the majority of the literature focuses on sample space partitioning,
feature space partitioning is more effective when $p\gg n$. Existing methods
for partitioning features, however, are either vulnerable to high correlations
or inefficient in reducing the model dimension. In this paper, we solve these
problems through a new embarrassingly parallel framework named DECO for
distributed variable selection and parameter estimation. In DECO, variables are
first partitioned and allocated to $m$ distributed workers. The decorrelated
subset data within each worker are then fitted via any algorithm designed for
high-dimensional problems. We show that by incorporating the decorrelation
step, DECO can achieve consistent variable selection and parameter estimation
on each subset with (almost) no assumptions. In addition, the convergence rate
is nearly minimax optimal for both sparse and weakly sparse models and does NOT
depend on the partition number $m$. Extensive numerical experiments are
provided to illustrate the performance of the new framework.
| [
{
"version": "v1",
"created": "Mon, 8 Feb 2016 14:17:38 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Feb 2016 13:18:57 GMT"
}
] | 2016-02-15T00:00:00 | [
[
"Wang",
"Xiangyu",
""
],
[
"Dunson",
"David",
""
],
[
"Leng",
"Chenlei",
""
]
] | TITLE: DECOrrelated feature space partitioning for distributed sparse
regression
ABSTRACT: Fitting statistical models is computationally challenging when the sample
size or the dimension of the dataset is huge. An attractive approach for
down-scaling the problem size is to first partition the dataset into subsets
and then fit using distributed algorithms. The dataset can be partitioned
either horizontally (in the sample space) or vertically (in the feature space).
While the majority of the literature focuses on sample space partitioning,
feature space partitioning is more effective when $p\gg n$. Existing methods
for partitioning features, however, are either vulnerable to high correlations
or inefficient in reducing the model dimension. In this paper, we solve these
problems through a new embarrassingly parallel framework named DECO for
distributed variable selection and parameter estimation. In DECO, variables are
first partitioned and allocated to $m$ distributed workers. The decorrelated
subset data within each worker are then fitted via any algorithm designed for
high-dimensional problems. We show that by incorporating the decorrelation
step, DECO can achieve consistent variable selection and parameter estimation
on each subset with (almost) no assumptions. In addition, the convergence rate
is nearly minimax optimal for both sparse and weakly sparse models and does NOT
depend on the partition number $m$. Extensive numerical experiments are
provided to illustrate the performance of the new framework.
| no_new_dataset | 0.950595 |
1602.04124 | Srinath Sridhar | Srinath Sridhar, Franziska Mueller, Antti Oulasvirta, Christian
Theobalt | Fast and Robust Hand Tracking Using Detection-Guided Optimization | 9 pages, Accepted version of paper published at CVPR 2015 | Computer Vision and Pattern Recognition (CVPR), 2015 IEEE
Conference on , vol., no., pp.3213-3221, 7-12 June 2015 | 10.1109/CVPR.2015.7298941 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markerless tracking of hands and fingers is a promising enabler for
human-computer interaction. However, adoption has been limited because of
tracking inaccuracies, incomplete coverage of motions, low framerate, complex
camera setups, and high computational requirements. In this paper, we present a
fast method for accurately tracking rapid and complex articulations of the hand
using a single depth camera. Our algorithm uses a novel detection-guided
optimization strategy that increases the robustness and speed of pose
estimation. In the detection step, a randomized decision forest classifies
pixels into parts of the hand. In the optimization step, a novel objective
function combines the detected part labels and a Gaussian mixture
representation of the depth to estimate a pose that best fits the depth. Our
approach needs comparatively few computational resources, which makes it extremely
fast (50 fps without GPU support). The approach also supports varying static,
or moving, camera-to-scene arrangements. We show the benefits of our method by
evaluating on public datasets and comparing against previous work.
| [
{
"version": "v1",
"created": "Fri, 12 Feb 2016 17:05:04 GMT"
}
] | 2016-02-15T00:00:00 | [
[
"Sridhar",
"Srinath",
""
],
[
"Mueller",
"Franziska",
""
],
[
"Oulasvirta",
"Antti",
""
],
[
"Theobalt",
"Christian",
""
]
] | TITLE: Fast and Robust Hand Tracking Using Detection-Guided Optimization
ABSTRACT: Markerless tracking of hands and fingers is a promising enabler for
human-computer interaction. However, adoption has been limited because of
tracking inaccuracies, incomplete coverage of motions, low framerate, complex
camera setups, and high computational requirements. In this paper, we present a
fast method for accurately tracking rapid and complex articulations of the hand
using a single depth camera. Our algorithm uses a novel detection-guided
optimization strategy that increases the robustness and speed of pose
estimation. In the detection step, a randomized decision forest classifies
pixels into parts of the hand. In the optimization step, a novel objective
function combines the detected part labels and a Gaussian mixture
representation of the depth to estimate a pose that best fits the depth. Our
approach needs comparatively few computational resources, which makes it extremely
fast (50 fps without GPU support). The approach also supports varying static,
or moving, camera-to-scene arrangements. We show the benefits of our method by
evaluating on public datasets and comparing against previous work.
| no_new_dataset | 0.948585 |
1602.04133 | Thang Bui | Thang D. Bui and Daniel Hern\'andez-Lobato and Yingzhen Li and Jos\'e
Miguel Hern\'andez-Lobato and Richard E. Turner | Deep Gaussian Processes for Regression using Approximate Expectation
Propagation | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations
of Gaussian processes (GPs) and are formally equivalent to neural networks with
multiple, infinitely wide hidden layers. DGPs are nonparametric probabilistic
models and as such are arguably more flexible, have a greater capacity to
generalise, and provide better calibrated uncertainty estimates than
alternative deep models. This paper develops a new approximate Bayesian
learning scheme that enables DGPs to be applied to a range of medium to large
scale regression problems for the first time. The new method uses an
approximate Expectation Propagation procedure and a novel and efficient
extension of the probabilistic backpropagation algorithm for learning. We
evaluate the new method for non-linear regression on eleven real-world
datasets, showing that it always outperforms GP regression and is almost always
better than state-of-the-art deterministic and sampling-based approximate
inference methods for Bayesian neural networks. As a by-product, this work
provides a comprehensive analysis of six approximate Bayesian methods for
training neural networks.
| [
{
"version": "v1",
"created": "Fri, 12 Feb 2016 17:32:39 GMT"
}
] | 2016-02-15T00:00:00 | [
[
"Bui",
"Thang D.",
""
],
[
"Hernández-Lobato",
"Daniel",
""
],
[
"Li",
"Yingzhen",
""
],
[
"Hernández-Lobato",
"José Miguel",
""
],
[
"Turner",
"Richard E.",
""
]
] | TITLE: Deep Gaussian Processes for Regression using Approximate Expectation
Propagation
ABSTRACT: Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations
of Gaussian processes (GPs) and are formally equivalent to neural networks with
multiple, infinitely wide hidden layers. DGPs are nonparametric probabilistic
models and as such are arguably more flexible, have a greater capacity to
generalise, and provide better calibrated uncertainty estimates than
alternative deep models. This paper develops a new approximate Bayesian
learning scheme that enables DGPs to be applied to a range of medium to large
scale regression problems for the first time. The new method uses an
approximate Expectation Propagation procedure and a novel and efficient
extension of the probabilistic backpropagation algorithm for learning. We
evaluate the new method for non-linear regression on eleven real-world
datasets, showing that it always outperforms GP regression and is almost always
better than state-of-the-art deterministic and sampling-based approximate
inference methods for Bayesian neural networks. As a by-product, this work
provides a comprehensive analysis of six approximate Bayesian methods for
training neural networks.
| no_new_dataset | 0.949201 |
1602.04208 | Martin Jaggi | Rajiv Khanna, Michael Tschannen, Martin Jaggi | Pursuits in Structured Non-Convex Matrix Factorizations | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficiently representing real world data in a succinct and parsimonious
manner is of central importance in many fields. We present a generalized greedy
pursuit framework, allowing us to efficiently solve structured matrix
factorization problems, where the factors are allowed to be from arbitrary sets
of structured vectors. Such structure may include sparsity, non-negativeness,
order, or a combination thereof. The algorithm approximates a given matrix by a
linear combination of few rank-1 matrices, each factorized into an outer
product of two vector atoms of the desired structure. For the non-convex
subproblems of obtaining good rank-1 structured matrix atoms, we employ and
analyze a general atomic power method. In addition to the above applications,
we prove linear convergence for generalized pursuit variants in Hilbert spaces
- for the task of approximation over the linear span of arbitrary dictionaries
- which generalizes OMP and is useful beyond matrix problems. Our experiments
on real datasets confirm both the efficiency and also the broad applicability
of our framework in practice.
| [
{
"version": "v1",
"created": "Fri, 12 Feb 2016 20:57:35 GMT"
}
] | 2016-02-15T00:00:00 | [
[
"Khanna",
"Rajiv",
""
],
[
"Tschannen",
"Michael",
""
],
[
"Jaggi",
"Martin",
""
]
] | TITLE: Pursuits in Structured Non-Convex Matrix Factorizations
ABSTRACT: Efficiently representing real world data in a succinct and parsimonious
manner is of central importance in many fields. We present a generalized greedy
pursuit framework, allowing us to efficiently solve structured matrix
factorization problems, where the factors are allowed to be from arbitrary sets
of structured vectors. Such structure may include sparsity, non-negativeness,
order, or a combination thereof. The algorithm approximates a given matrix by a
linear combination of few rank-1 matrices, each factorized into an outer
product of two vector atoms of the desired structure. For the non-convex
subproblems of obtaining good rank-1 structured matrix atoms, we employ and
analyze a general atomic power method. In addition to the above applications,
we prove linear convergence for generalized pursuit variants in Hilbert spaces
- for the task of approximation over the linear span of arbitrary dictionaries
- which generalizes OMP and is useful beyond matrix problems. Our experiments
on real datasets confirm both the efficiency and also the broad applicability
of our framework in practice.
| no_new_dataset | 0.947088 |
1503.03488 | Robert Murphy | Robert A. Murphy | Estimating the Mean Number of K-Means Clusters to Form | These writings are part of a longer writing which has been submitted
for publication. I plan to replace this writing (and the other 2 writings)
with the single writing that has been submitted for publication. The other
writings to be withdrawn are 1501.07227 and 1412.4178 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Utilizing the sample size of a dataset, the random cluster model is employed
in order to derive an estimate of the mean number of K-Means clusters to form
during classification of a dataset.
| [
{
"version": "v1",
"created": "Sat, 7 Mar 2015 22:45:54 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Feb 2016 22:28:16 GMT"
}
] | 2016-02-12T00:00:00 | [
[
"Murphy",
"Robert A.",
""
]
] | TITLE: Estimating the Mean Number of K-Means Clusters to Form
ABSTRACT: Utilizing the sample size of a dataset, the random cluster model is employed
in order to derive an estimate of the mean number of K-Means clusters to form
during classification of a dataset.
| no_new_dataset | 0.94801 |
1602.03585 | Yangmuzi Zhang | Yangmuzi Zhang, Zhuolin Jiang, Xi Chen, Larry S. Davis | Generating Discriminative Object Proposals via Submodular Ranking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A multi-scale greedy-based object proposal generation approach is presented.
Based on the multi-scale nature of objects in images, our approach is built on
top of a hierarchical segmentation. We first identify the representative and
diverse exemplar clusters within each scale by using a diversity ranking
algorithm. Object proposals are obtained by selecting a subset from the
multi-scale segment pool via maximizing a submodular objective function, which
consists of a weighted coverage term, a single-scale diversity term and a
multi-scale reward term. The weighted coverage term forces the selected set of
object proposals to be representative and compact; the single-scale diversity
term encourages choosing segments from different exemplar clusters so that they
will cover as many object patterns as possible; the multi-scale reward term
encourages the selected proposals to be discriminative and selected from
multiple layers generated by the hierarchical image segmentation. The
experimental results on the Berkeley Segmentation Dataset and PASCAL VOC2012
segmentation dataset demonstrate the accuracy and efficiency of our object
proposal model. Additionally, we validate our object proposals in simultaneous
segmentation and detection, outperforming state-of-the-art performance.
| [
{
"version": "v1",
"created": "Thu, 11 Feb 2016 00:50:17 GMT"
}
] | 2016-02-12T00:00:00 | [
[
"Zhang",
"Yangmuzi",
""
],
[
"Jiang",
"Zhuolin",
""
],
[
"Chen",
"Xi",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: Generating Discriminative Object Proposals via Submodular Ranking
ABSTRACT: A multi-scale greedy-based object proposal generation approach is presented.
Based on the multi-scale nature of objects in images, our approach is built on
top of a hierarchical segmentation. We first identify the representative and
diverse exemplar clusters within each scale by using a diversity ranking
algorithm. Object proposals are obtained by selecting a subset from the
multi-scale segment pool via maximizing a submodular objective function, which
consists of a weighted coverage term, a single-scale diversity term and a
multi-scale reward term. The weighted coverage term forces the selected set of
object proposals to be representative and compact; the single-scale diversity
term encourages choosing segments from different exemplar clusters so that they
will cover as many object patterns as possible; the multi-scale reward term
encourages the selected proposals to be discriminative and selected from
multiple layers generated by the hierarchical image segmentation. The
experimental results on the Berkeley Segmentation Dataset and PASCAL VOC2012
segmentation dataset demonstrate the accuracy and efficiency of our object
proposal model. Additionally, we validate our object proposals in simultaneous
segmentation and detection, outperforming state-of-the-art performance.
| no_new_dataset | 0.951594 |
1602.03770 | Kasper Grud Skat Madsen | Kasper Grud Skat Madsen and Yongluan Zhou and Jianneng Cao | Integrative Dynamic Reconfiguration in a Parallel Stream Processing
Engine | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Load balancing, operator instance collocations and horizontal scaling are
critical issues in Parallel Stream Processing Engines to achieve low data
processing latency, optimized cluster utilization and minimized communication
cost respectively. In previous work, these issues are typically tackled
separately and independently. We argue that these problems are tightly coupled
in the sense that they all need to determine the allocations of workloads and
migrate computational states at runtime. Optimizing them independently would
result in suboptimal solutions. Therefore, in this paper, we investigate how
these three issues can be modeled as one integrated optimization problem. In
particular, we first consider jobs where workload allocations have little
effect on the communication cost, and model the problem of load balance as a
Mixed-Integer Linear Program. Afterwards, we present an extended solution
called ALBIC, which supports general jobs. We implement the proposed techniques
on top of Apache Storm, an open-source Parallel Stream Processing Engine. The
extensive experimental results over both synthetic and real datasets show that
our techniques clearly outperform existing approaches.
| [
{
"version": "v1",
"created": "Thu, 11 Feb 2016 15:29:18 GMT"
}
] | 2016-02-12T00:00:00 | [
[
"Madsen",
"Kasper Grud Skat",
""
],
[
"Zhou",
"Yongluan",
""
],
[
"Cao",
"Jianneng",
""
]
] | TITLE: Integrative Dynamic Reconfiguration in a Parallel Stream Processing
Engine
ABSTRACT: Load balancing, operator instance collocations and horizontal scaling are
critical issues in Parallel Stream Processing Engines to achieve low data
processing latency, optimized cluster utilization and minimized communication
cost respectively. In previous work, these issues are typically tackled
separately and independently. We argue that these problems are tightly coupled
in the sense that they all need to determine the allocations of workloads and
migrate computational states at runtime. Optimizing them independently would
result in suboptimal solutions. Therefore, in this paper, we investigate how
these three issues can be modeled as one integrated optimization problem. In
particular, we first consider jobs where workload allocations have little
effect on the communication cost, and model the problem of load balance as a
Mixed-Integer Linear Program. Afterwards, we present an extended solution
called ALBIC, which supports general jobs. We implement the proposed techniques
on top of Apache Storm, an open-source Parallel Stream Processing Engine. The
extensive experimental results over both synthetic and real datasets show that
our techniques clearly outperform existing approaches.
| no_new_dataset | 0.942188 |
1602.03860 | Srinath Sridhar | Srinath Sridhar, Helge Rhodin, Hans-Peter Seidel, Antti Oulasvirta,
Christian Theobalt | Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model | 8 pages, Accepted version of paper published at 3DV 2014 | 2nd International Conference on , vol.1, no., pp.319-326, 8-11
Dec. 2014 | 10.1109/3DV.2014.37 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time marker-less hand tracking is of increasing importance in
human-computer interaction. Robust and accurate tracking of arbitrary hand
motion is a challenging problem due to the many degrees of freedom, frequent
self-occlusions, fast motions, and uniform skin color. In this paper, we
propose a new approach that tracks the full skeleton motion of the hand from
multiple RGB cameras in real-time. The main contributions include a new
generative tracking method which employs an implicit hand shape representation
based on Sum of Anisotropic Gaussians (SAG), and a pose fitting energy that is
smooth and analytically differentiable making fast gradient based pose
optimization possible. This shape representation, together with a full
perspective projection model, enables more accurate hand modeling than a
related baseline method from literature. Our method achieves better accuracy
than previous methods and runs at 25 fps. We show these improvements both
qualitatively and quantitatively on publicly available datasets.
| [
{
"version": "v1",
"created": "Thu, 11 Feb 2016 20:03:53 GMT"
}
] | 2016-02-12T00:00:00 | [
[
"Sridhar",
"Srinath",
""
],
[
"Rhodin",
"Helge",
""
],
[
"Seidel",
"Hans-Peter",
""
],
[
"Oulasvirta",
"Antti",
""
],
[
"Theobalt",
"Christian",
""
]
] | TITLE: Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model
ABSTRACT: Real-time marker-less hand tracking is of increasing importance in
human-computer interaction. Robust and accurate tracking of arbitrary hand
motion is a challenging problem due to the many degrees of freedom, frequent
self-occlusions, fast motions, and uniform skin color. In this paper, we
propose a new approach that tracks the full skeleton motion of the hand from
multiple RGB cameras in real-time. The main contributions include a new
generative tracking method which employs an implicit hand shape representation
based on Sum of Anisotropic Gaussians (SAG), and a pose fitting energy that is
smooth and analytically differentiable making fast gradient based pose
optimization possible. This shape representation, together with a full
perspective projection model, enables more accurate hand modeling than a
related baseline method from literature. Our method achieves better accuracy
than previous methods and runs at 25 fps. We show these improvements both
qualitatively and quantitatively on publicly available datasets.
| no_new_dataset | 0.95275 |
1506.00852 | Ulrike von Luxburg | Mehdi S. M. Sajjadi, Morteza Alamgir, Ulrike von Luxburg | Peer Grading in a Course on Algorithms and Data Structures: Machine
Learning Algorithms do not Improve over Simple Baselines | Published at the Third Annual ACM Conference on Learning at Scale L@S | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Peer grading is the process of students reviewing each others' work, such as
homework submissions, and has lately become a popular mechanism used in massive
open online courses (MOOCs). Intrigued by this idea, we used it in a course on
algorithms and data structures at the University of Hamburg. Throughout the
whole semester, students repeatedly handed in submissions to exercises, which
were then evaluated both by teaching assistants and by a peer grading
mechanism, yielding a large dataset of teacher and peer grades. We applied
different statistical and machine learning methods to aggregate the peer grades
in order to come up with accurate final grades for the submissions (supervised
and unsupervised, methods based on numeric scores and ordinal rankings).
Surprisingly, none of them improves over the baseline of using the mean peer
grade as the final grade. We discuss a number of possible explanations for
these results and present a thorough analysis of the generated dataset.
| [
{
"version": "v1",
"created": "Tue, 2 Jun 2015 12:03:30 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Feb 2016 14:49:19 GMT"
}
] | 2016-02-11T00:00:00 | [
[
"Sajjadi",
"Mehdi S. M.",
""
],
[
"Alamgir",
"Morteza",
""
],
[
"von Luxburg",
"Ulrike",
""
]
] | TITLE: Peer Grading in a Course on Algorithms and Data Structures: Machine
Learning Algorithms do not Improve over Simple Baselines
ABSTRACT: Peer grading is the process of students reviewing each other's work, such as
homework submissions, and has lately become a popular mechanism used in massive
open online courses (MOOCs). Intrigued by this idea, we used it in a course on
algorithms and data structures at the University of Hamburg. Throughout the
whole semester, students repeatedly handed in submissions to exercises, which
were then evaluated both by teaching assistants and by a peer grading
mechanism, yielding a large dataset of teacher and peer grades. We applied
different statistical and machine learning methods to aggregate the peer grades
in order to come up with accurate final grades for the submissions (supervised
and unsupervised, methods based on numeric scores and ordinal rankings).
Surprisingly, none of them improves over the baseline of using the mean peer
grade as the final grade. We discuss a number of possible explanations for
these results and present a thorough analysis of the generated dataset.
| new_dataset | 0.953449 |
1506.01911 | Lionel Pigou | Lionel Pigou, A\"aron van den Oord, Sander Dieleman, Mieke Van
Herreweghe, Joni Dambre | Beyond Temporal Pooling: Recurrence and Temporal Convolutions for
Gesture Recognition in Video | null | null | null | null | cs.CV cs.AI cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies have demonstrated the power of recurrent neural networks for
machine translation, image captioning and speech recognition. For the task of
capturing temporal structure in video, however, there still remain numerous
open research questions. Current research suggests using a simple temporal
feature pooling strategy to take into account the temporal aspect of video. We
demonstrate that this method is not sufficient for gesture recognition, where
temporal information is more discriminative compared to general video
classification tasks. We explore deep architectures for gesture recognition in
video and propose a new end-to-end trainable neural network architecture
incorporating temporal convolutions and bidirectional recurrence. Our main
contributions are twofold; first, we show that recurrence is crucial for this
task; second, we show that adding temporal convolutions leads to significant
improvements. We evaluate the different approaches on the Montalbano gesture
recognition dataset, where we achieve state-of-the-art results.
| [
{
"version": "v1",
"created": "Fri, 5 Jun 2015 13:43:01 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Nov 2015 16:20:26 GMT"
},
{
"version": "v3",
"created": "Wed, 10 Feb 2016 16:50:29 GMT"
}
] | 2016-02-11T00:00:00 | [
[
"Pigou",
"Lionel",
""
],
[
"Oord",
"Aäron van den",
""
],
[
"Dieleman",
"Sander",
""
],
[
"Van Herreweghe",
"Mieke",
""
],
[
"Dambre",
"Joni",
""
]
] | TITLE: Beyond Temporal Pooling: Recurrence and Temporal Convolutions for
Gesture Recognition in Video
ABSTRACT: Recent studies have demonstrated the power of recurrent neural networks for
machine translation, image captioning and speech recognition. For the task of
capturing temporal structure in video, however, there still remain numerous
open research questions. Current research suggests using a simple temporal
feature pooling strategy to take into account the temporal aspect of video. We
demonstrate that this method is not sufficient for gesture recognition, where
temporal information is more discriminative compared to general video
classification tasks. We explore deep architectures for gesture recognition in
video and propose a new end-to-end trainable neural network architecture
incorporating temporal convolutions and bidirectional recurrence. Our main
contributions are twofold; first, we show that recurrence is crucial for this
task; second, we show that adding temporal convolutions leads to significant
improvements. We evaluate the different approaches on the Montalbano gesture
recognition dataset, where we achieve state-of-the-art results.
| no_new_dataset | 0.947039 |
1602.03346 | Li Liu | Li Liu and Yi Zhou and Ling Shao | DAP3D-Net: Where, What and How Actions Occur in Videos? | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Action parsing in videos with complex scenes is an interesting but
challenging task in computer vision. In this paper, we propose a generic 3D
convolutional neural network in a multi-task learning manner for effective Deep
Action Parsing (DAP3D-Net) in videos. Particularly, in the training phase,
action localization, classification and attributes learning can be jointly
optimized on our appearance-motion data via DAP3D-Net. For an upcoming test
video, we can describe each individual action in the video simultaneously as:
Where the action occurs, What the action is and How the action is performed. To
well demonstrate the effectiveness of the proposed DAP3D-Net, we also
contribute a new Numerous-category Aligned Synthetic Action dataset, i.e.,
NASA, which consists of 200,000 action clips of more than 300 categories and
with 33 pre-defined action attributes in two hierarchical levels (i.e.,
low-level attributes of basic body part movements and high-level attributes
related to action motion). We learn DAP3D-Net using the NASA dataset and then
evaluate it on our collected Human Action Understanding (HAU) dataset.
Experimental results show that our approach can accurately localize, categorize
and describe multiple actions in realistic videos.
| [
{
"version": "v1",
"created": "Wed, 10 Feb 2016 12:25:52 GMT"
}
] | 2016-02-11T00:00:00 | [
[
"Liu",
"Li",
""
],
[
"Zhou",
"Yi",
""
],
[
"Shao",
"Ling",
""
]
] | TITLE: DAP3D-Net: Where, What and How Actions Occur in Videos?
ABSTRACT: Action parsing in videos with complex scenes is an interesting but
challenging task in computer vision. In this paper, we propose a generic 3D
convolutional neural network in a multi-task learning manner for effective Deep
Action Parsing (DAP3D-Net) in videos. Particularly, in the training phase,
action localization, classification and attributes learning can be jointly
optimized on our appearance-motion data via DAP3D-Net. For an upcoming test
video, we can describe each individual action in the video simultaneously as:
Where the action occurs, What the action is and How the action is performed. To
well demonstrate the effectiveness of the proposed DAP3D-Net, we also
contribute a new Numerous-category Aligned Synthetic Action dataset, i.e.,
NASA, which consists of 200,000 action clips of more than 300 categories and
with 33 pre-defined action attributes in two hierarchical levels (i.e.,
low-level attributes of basic body part movements and high-level attributes
related to action motion). We learn DAP3D-Net using the NASA dataset and then
evaluate it on our collected Human Action Understanding (HAU) dataset.
Experimental results show that our approach can accurately localize, categorize
and describe multiple actions in realistic videos.
| new_dataset | 0.964052 |
1602.03409 | Hoo Chang Shin | Hoo-Chang Shin, Holger R. Roth, Mingchen Gao, Le Lu, Ziyue Xu,
Isabella Nogues, Jianhua Yao, Daniel Mollura, Ronald M. Summers | Deep Convolutional Neural Networks for Computer-Aided Detection: CNN
Architectures, Dataset Characteristics and Transfer Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remarkable progress has been made in image recognition, primarily due to the
availability of large-scale annotated datasets and the revival of deep CNN.
CNNs enable learning data-driven, highly representative, layered hierarchical
image features from sufficient training data. However, obtaining datasets as
comprehensively annotated as ImageNet in the medical imaging domain remains a
challenge. There are currently three major techniques that successfully employ
CNNs to medical image classification: training the CNN from scratch, using
off-the-shelf pre-trained CNN features, and conducting unsupervised CNN
pre-training with supervised fine-tuning. Another effective method is transfer
learning, i.e., fine-tuning CNN models pre-trained from natural image dataset
to medical image tasks. In this paper, we exploit three important, but
previously understudied factors of employing deep convolutional neural networks
to computer-aided detection problems. We first explore and evaluate different
CNN architectures. The studied models contain 5 thousand to 160 million
parameters, and vary in numbers of layers. We then evaluate the influence of
dataset scale and spatial image context on performance. Finally, we examine
when and why transfer learning from pre-trained ImageNet (via fine-tuning) can
be useful. We study two specific computer-aided detection (CADe) problems,
namely thoraco-abdominal lymph node (LN) detection and interstitial lung
disease (ILD) classification. We achieve the state-of-the-art performance on
the mediastinal LN detection, with 85% sensitivity at 3 false positives per
patient, and report the first five-fold cross-validation classification results
on predicting axial CT slices with ILD categories. Our extensive empirical
evaluation, CNN model analysis and valuable insights can be extended to the
design of high performance CAD systems for other medical imaging tasks.
| [
{
"version": "v1",
"created": "Wed, 10 Feb 2016 15:33:32 GMT"
}
] | 2016-02-11T00:00:00 | [
[
"Shin",
"Hoo-Chang",
""
],
[
"Roth",
"Holger R.",
""
],
[
"Gao",
"Mingchen",
""
],
[
"Lu",
"Le",
""
],
[
"Xu",
"Ziyue",
""
],
[
"Nogues",
"Isabella",
""
],
[
"Yao",
"Jianhua",
""
],
[
"Mollura",
"Daniel",
""
],
[
"Summers",
"Ronald M.",
""
]
] | TITLE: Deep Convolutional Neural Networks for Computer-Aided Detection: CNN
Architectures, Dataset Characteristics and Transfer Learning
ABSTRACT: Remarkable progress has been made in image recognition, primarily due to the
availability of large-scale annotated datasets and the revival of deep CNN.
CNNs enable learning data-driven, highly representative, layered hierarchical
image features from sufficient training data. However, obtaining datasets as
comprehensively annotated as ImageNet in the medical imaging domain remains a
challenge. There are currently three major techniques that successfully employ
CNNs to medical image classification: training the CNN from scratch, using
off-the-shelf pre-trained CNN features, and conducting unsupervised CNN
pre-training with supervised fine-tuning. Another effective method is transfer
learning, i.e., fine-tuning CNN models pre-trained from natural image dataset
to medical image tasks. In this paper, we exploit three important, but
previously understudied factors of employing deep convolutional neural networks
to computer-aided detection problems. We first explore and evaluate different
CNN architectures. The studied models contain 5 thousand to 160 million
parameters, and vary in numbers of layers. We then evaluate the influence of
dataset scale and spatial image context on performance. Finally, we examine
when and why transfer learning from pre-trained ImageNet (via fine-tuning) can
be useful. We study two specific computer-aided detection (CADe) problems,
namely thoraco-abdominal lymph node (LN) detection and interstitial lung
disease (ILD) classification. We achieve the state-of-the-art performance on
the mediastinal LN detection, with 85% sensitivity at 3 false positives per
patient, and report the first five-fold cross-validation classification results
on predicting axial CT slices with ILD categories. Our extensive empirical
evaluation, CNN model analysis and valuable insights can be extended to the
design of high performance CAD systems for other medical imaging tasks.
| no_new_dataset | 0.947962 |