Each record in this dataset has the following fields (⌀ marks nullable columns; length ranges and class counts are as reported by the dataset viewer):

- id: string, 9-16 chars
- submitter: string, 3-64 chars ⌀
- authors: string, 5-6.63k chars
- title: string, 7-245 chars
- comments: string, 1-482 chars ⌀
- journal-ref: string, 4-382 chars ⌀
- doi: string, 9-151 chars ⌀
- report-no: string, 984 distinct values
- categories: string, 5-108 chars
- license: string, 9 distinct values
- abstract: string, 83-3.41k chars
- versions: list, 1-20 entries
- update_date: timestamp[s], 2007-05-23 to 2025-04-11
- authors_parsed: sequence, 1-427 entries
- prompt: string, 166-3.49k chars (the record's title and abstract, prefixed "TITLE:" and "ABSTRACT:")
- label: string, 2 classes (new_dataset / no_new_dataset)
- prob: float64, 0.5-0.98
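Before the sample records, here is a minimal sketch of how rows in this shape could be loaded and filtered with the Hugging Face `datasets` library. The file name `arxiv_labeled.jsonl` is a hypothetical local export of these records, not a published artifact:

```python
from datasets import load_dataset

# Hypothetical local JSONL export: one record per line with the fields listed above.
ds = load_dataset("json", data_files="arxiv_labeled.jsonl", split="train")

# Keep only rows the labeler marks, with high confidence, as introducing a new dataset.
hits = ds.filter(lambda row: row["label"] == "new_dataset" and row["prob"] >= 0.9)
for row in hits:
    print(row["id"], row["title"])
```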

---

id: 1512.07982
submitter: Miltiadis Allamanis
authors: Fani A. Tzima, Miltiadis Allamanis, Alexandros Filotheou, Pericles A. Mitkas
title: Inducing Generalized Multi-Label Rules with Learning Classifier Systems
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.NE cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In recent years, multi-label classification has attracted a significant body
of research, motivated by real-life applications, such as text classification
and medical diagnoses. Although sparsely studied in this context, Learning
Classifier Systems are naturally well-suited to multi-label classification
problems, whose search space typically involves multiple highly specific
niches. This is the motivation behind our current work that introduces a
generalized multi-label rule format -- allowing for flexible label-dependency
modeling, with no need for explicit knowledge of which correlations to search
for -- and uses it as a guide for further adapting the general Michigan-style
supervised Learning Classifier System framework. The integration of the
aforementioned rule format and framework adaptations results in a novel
algorithm for multi-label classification whose behavior is studied through a
set of properly defined artificial problems. The proposed algorithm is also
thoroughly evaluated on a set of multi-label datasets and found competitive to
other state-of-the-art multi-label classification methods.
versions: [{"version": "v1", "created": "Fri, 25 Dec 2015 10:03:55 GMT"}]
update_date: 2015-12-29T00:00:00
authors_parsed: [["Tzima", "Fani A.", ""], ["Allamanis", "Miltiadis", ""], ["Filotheou", "Alexandros", ""], ["Mitkas", "Pericles A.", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.945551

---

id: 1512.08017
submitter: Poorna Dasgupta
authors: Poorna Banerjee Dasgupta
title: An Analytical Evaluation of Matricizing Least-Square-Errors Curve Fitting to Support High Performance Computation on Large Datasets
comments: 3 pages, Published with International Journal of Computer Trends and Technology (IJCTT), Volume-30 Number-2, December-2015
journal-ref: International Journal of Computer Trends and Technology (IJCTT) V30(2):113-115, December 2015
doi: 10.14445/22312803/IJCTT-V30P120
report-no: null
categories: cs.DC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The procedure of Least Square-Errors curve fitting is extensively used in
many computer applications for fitting a polynomial curve of a given degree to
approximate a set of data. Although various methodologies exist to carry out
curve fitting on data, most of them have shortcomings with respect to
efficiency especially where huge datasets are involved. This paper proposes and
analyzes a matricized approach to the Least Square-Errors curve fitting with
the primary objective of parallelizing the whole algorithm so that high
performance efficiency can be achieved when algorithmic execution takes place
on colossal datasets.
versions: [{"version": "v1", "created": "Fri, 25 Dec 2015 16:53:57 GMT"}]
update_date: 2015-12-29T00:00:00
authors_parsed: [["Dasgupta", "Poorna Banerjee", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.946399
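The matricized formulation itself is not reproduced in the abstract. As a baseline, the standard matrix form of polynomial least-squares fitting solves the normal equations $(A^T A)\,c = A^T y$ for a Vandermonde design matrix $A$, and it is exactly these dense matrix products that parallelize well on large datasets. A NumPy sketch of that baseline (an illustration, not the paper's algorithm):

```python
import numpy as np

def polyfit_normal_equations(x, y, degree):
    # Vandermonde design matrix: one column per power of x.
    A = np.vander(x, degree + 1, increasing=True)
    # Normal equations (A^T A) c = A^T y; both products are dense
    # matrix operations, which is what makes the approach easy to parallelize.
    return np.linalg.solve(A.T @ A, A.T @ y)

x = np.linspace(0.0, 1.0, 1000)
y = 2.0 + 3.0 * x - 5.0 * x**2
print(polyfit_normal_equations(x, y, degree=2))  # approximately [ 2.  3. -5.]
```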

---

id: 1512.08041
submitter: Tsunehiko Kameda
authors: Yuan Sun, Shiwei Ye, Yi Sun, Tsunehiko Kameda
title: Improved Algorithms for Exact and Approximate Boolean Matrix Decomposition
comments: DSAA2015
journal-ref: null
doi: null
report-no: null
categories: cs.DM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: An arbitrary $m\times n$ Boolean matrix $M$ can be decomposed {\em exactly}
as $M =U\circ V$, where $U$ (resp. $V$) is an $m\times k$ (resp. $k\times n$)
Boolean matrix and $\circ$ denotes the Boolean matrix multiplication operator.
We first prove an exact formula for the Boolean matrix $J$ such that $M =M\circ
J^T$ holds, where $J$ is maximal in the sense that if any 0 element in $J$ is
changed to a 1 then this equality no longer holds. Since minimizing $k$ is
NP-hard, we propose two heuristic algorithms for finding suboptimal but good
decompositions. We measure the performance (in minimizing $k$) of our algorithms
on several real datasets in comparison with other representative heuristic
algorithms for Boolean matrix decomposition (BMD). The results on some popular
benchmark datasets demonstrate that one of our proposed algorithms performs as
well or better on most of them. Our algorithms have a number of other
advantages: They are based on an exact mathematical formula, which can be
interpreted intuitively. They can be used for approximation as well with
competitive "coverage." Last but not least, they also run very fast. Due to
interpretability issues in data mining, we impose the condition, called the
"column use condition," that the columns of the factor matrix $U$ must form a
subset of the columns of $M$.
versions: [{"version": "v1", "created": "Fri, 25 Dec 2015 21:48:05 GMT"}]
update_date: 2015-12-29T00:00:00
authors_parsed: [["Sun", "Yuan", ""], ["Ye", "Shiwei", ""], ["Sun", "Yi", ""], ["Kameda", "Tsunehiko", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.939637
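For readers unfamiliar with the operator, the Boolean product $\circ$ replaces sums with OR and products with AND: $(U \circ V)_{ij} = \bigvee_k (U_{ik} \wedge V_{kj})$. A NumPy sketch of the operator itself (not of the paper's decomposition algorithms):

```python
import numpy as np

def bool_matmul(U, V):
    # (U o V)[i, j] = OR over k of (U[i, k] AND V[k, j]).
    return (U[:, :, None] & V[None, :, :]).any(axis=1)

rng = np.random.default_rng(0)
U = rng.integers(0, 2, size=(4, 3)).astype(bool)   # m x k factor
V = rng.integers(0, 2, size=(3, 5)).astype(bool)   # k x n factor
M = bool_matmul(U, V)                              # m x n Boolean product
print(M.astype(int))
```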

---

id: 1512.08061
submitter: Mehwish Nasim
authors: Mehwish Nasim, Aimal Rextin, Numair Khan, Muhammad Muddassir Malik
title: On Temporal Regularity in Social Interactions: Predicting Mobile Phone Calls
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In this paper we predict outgoing mobile phone calls using a machine learning
approach. We analyze to what extent the activity of mobile phone users is
predictable. The premise is that mobile phone users exhibit temporal regularity
in their interactions with the majority of their contacts. In the sociological
context, most social interactions have fairly reliable temporal regularity. If
we quantify the extension of this behavior to interactions on mobile phones we
expect that caller-callee interaction is not merely a result of randomness,
rather it exhibits a temporal pattern. To this end, we tested our approach on
an anonymized mobile phone usage dataset collected specifically for analyzing
temporal patterns in mobile phone communication. The data consists of 783 users
and more than 12,000 caller-callee pairs. The results show that users' historic
calling patterns can predict future calls with reasonable accuracy.
versions: [{"version": "v1", "created": "Sat, 26 Dec 2015 00:56:12 GMT"}]
update_date: 2015-12-29T00:00:00
authors_parsed: [["Nasim", "Mehwish", ""], ["Rextin", "Aimal", ""], ["Khan", "Numair", ""], ["Malik", "Muhammad Muddassir", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: new_dataset
prob: 0.950686

---

id: 1512.08103
submitter: Wei Liu
authors: Wei Liu, Yun Gu, Chunhua Shen, Xiaogang Chen, Qiang Wu and Jie Yang
title: Data Driven Robust Image Guided Depth Map Restoration
comments: 9 pages, 9 figures, conference paper
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Depth maps captured by modern depth cameras such as Kinect and Time-of-Flight
(ToF) are usually contaminated by missing data and noise, and suffer from low
resolution. In this paper, we present a robust method for high-quality
restoration of a degraded depth map with the guidance of the corresponding
color image. We solve the problem in an energy optimization framework that
consists of a novel robust data term and smoothness term. To accommodate not
only the noise but also the inconsistency between depth discontinuities and the
color edges, we model both the data term and smoothness term with a robust
exponential error norm function. We propose to use Iteratively Re-weighted
Least Squares (IRLS) methods for efficiently solving the resulting highly
non-convex optimization problem. More importantly, we further develop a
data-driven adaptive parameter selection scheme to properly determine the
parameter in the model. We show that the proposed approach can preserve fine
details and sharp depth discontinuities even for a large upsampling factor
($8\times$ for example). Experimental results on both simulated and real
datasets demonstrate that the proposed method outperforms recent
state-of-the-art methods in coping with the heavy noise, preserving sharp depth
discontinuities and suppressing the texture copy artifacts.
versions: [{"version": "v1", "created": "Sat, 26 Dec 2015 12:04:54 GMT"}]
update_date: 2015-12-29T00:00:00
authors_parsed: [["Liu", "Wei", ""], ["Gu", "Yun", ""], ["Shen", "Chunhua", ""], ["Chen", "Xiaogang", ""], ["Wu", "Qiang", ""], ["Yang", "Jie", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.949389
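The robust exponential error norm and its IRLS solver are not spelled out in the abstract. The generic IRLS pattern alternates residual-dependent reweighting with a weighted least-squares solve; the sketch below uses the Welsch/Leclerc weight $w(r) = \exp(-(r/c)^2)$ as a stand-in exponential norm, which is an assumption, not the paper's exact energy:

```python
import numpy as np

def irls(A, b, c=1.0, iters=20):
    # Robust fit of A x ~ b via iteratively re-weighted least squares.
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # ordinary least-squares start
    for _ in range(iters):
        r = A @ x - b
        # Welsch/Leclerc weight (assumed stand-in): large residuals get weight ~0.
        w = np.exp(-(r / c) ** 2)
        # Weighted normal equations: (A^T W A) x = A^T W b.
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))
    return x
```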

---

id: 1512.08150
submitter: Faisal Orakzai
authors: Faisal Orakzai, Thomas Devogele, Toon Calders
title: Towards Distributed Convoy Pattern Mining
comments: SIGSPATIAL'15 November 03-06, 2015, Bellevue, WA, USA
journal-ref: null
doi: 10.1145/2820783.2820840
report-no: null
categories: cs.DC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Mining movement data to reveal interesting behavioral patterns has gained
attention in recent years. One such pattern is the convoy pattern which
consists of at least m objects moving together for at least k consecutive time
instants where m and k are user-defined parameters. Existing algorithms for
detecting convoy patterns, however do not scale to real-life dataset sizes.
Therefore a distributed algorithm for convoy mining is inevitable. In this
paper, we discuss the problem of convoy mining and analyze different data
partitioning strategies to pave the way for a generic distributed convoy
pattern mining algorithm.
versions: [{"version": "v1", "created": "Sat, 26 Dec 2015 22:10:05 GMT"}]
update_date: 2015-12-29T00:00:00
authors_parsed: [["Orakzai", "Faisal", ""], ["Devogele", "Thomas", ""], ["Calders", "Toon", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.949106
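Concretely, the (m, k) definition can be verified for a candidate group once objects have been grouped per timestamp; a toy checker (not the distributed miner the paper works toward) is shown below:

```python
def is_convoy(candidate, groups_per_time, k):
    """True if `candidate` (a set of at least m object ids) is contained in
    some co-moving group for at least k consecutive timestamps."""
    run = 0
    for groups in groups_per_time:             # one list of sets per time instant
        if any(candidate <= g for g in groups):
            run += 1
            if run >= k:
                return True
        else:
            run = 0                            # consecutiveness broken
    return False

# Example: three objects together at t0..t2 form a convoy for m=3, k=3.
t0 = [{1, 2, 3, 7}]; t1 = [{1, 2, 3}]; t2 = [{1, 2, 3, 9}]
print(is_convoy({1, 2, 3}, [t0, t1, t2], k=3))  # True
```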

---

id: 1512.06498
submitter: Oruganti Ramana Mr
authors: O. V. Ramana Murthy and Roland Goecke
title: Harnessing the Deep Net Object Models for Enhancing Human Action Recognition
comments: 6 pages. arXiv admin note: text overlap with arXiv:1411.4006 by other authors
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In this study, the influence of objects is investigated in the scenario of
human action recognition with a large number of classes. We hypothesize that
the objects the humans are interacting with have a good say in determining the
action being performed. Especially if the objects are non-moving, such as
objects appearing in the background, features such as spatio-temporal interest
points and dense trajectories may fail to detect them. Hence we propose to detect objects
using pre-trained object detectors in every frame statically. Trained Deep
network models are used as object detectors. Information from different layers
in conjunction with different encoding techniques is extensively studied to
obtain the richest feature vectors. This technique is observed to yield
state-of-the-art performance on HMDB51 and UCF101 datasets.
versions: [{"version": "v1", "created": "Mon, 21 Dec 2015 05:28:23 GMT"}, {"version": "v2", "created": "Thu, 24 Dec 2015 04:37:51 GMT"}]
update_date: 2015-12-25T00:00:00
authors_parsed: [["Murthy", "O. V. Ramana", ""], ["Goecke", "Roland", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.948202

---

id: 1512.07344
submitter: Xin Yuan
authors: Yunchen Pu, Xin Yuan, Andrew Stevens, Chunyuan Li, Lawrence Carin
title: A Deep Generative Deconvolutional Image Model
comments: 10 pages, 7 figures. Appearing in Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS) 2016, Cadiz, Spain. JMLR: W&CP volume 41
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: A deep generative model is developed for representation and analysis of
images, based on a hierarchical convolutional dictionary-learning framework.
Stochastic {\em unpooling} is employed to link consecutive layers in the model,
yielding top-down image generation. A Bayesian support vector machine is linked
to the top-layer features, yielding max-margin discrimination. Deep
deconvolutional inference is employed when testing, to infer the latent
features, and the top-layer features are connected with the max-margin
classifier for discrimination tasks. The model is efficiently trained using a
Monte Carlo expectation-maximization (MCEM) algorithm, with implementation on
graphical processor units (GPUs) for efficient large-scale learning, and fast
testing. Excellent results are obtained on several benchmark datasets,
including ImageNet, demonstrating that the proposed model achieves results that
are highly competitive with similarly sized convolutional neural networks.
versions: [{"version": "v1", "created": "Wed, 23 Dec 2015 03:10:29 GMT"}]
update_date: 2015-12-25T00:00:00
authors_parsed: [["Pu", "Yunchen", ""], ["Yuan", "Xin", ""], ["Stevens", "Andrew", ""], ["Li", "Chunyuan", ""], ["Carin", "Lawrence", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.94801

---

id: 1401.0852
submitter: Qing Zhou
authors: Bryon Aragam and Qing Zhou
title: Concave Penalized Estimation of Sparse Gaussian Bayesian Networks
comments: 57 pages
journal-ref: Journal of Machine Learning Research 16(Nov):2273-2328, 2015
doi: null
report-no: null
categories: stat.ME cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We develop a penalized likelihood estimation framework to estimate the
structure of Gaussian Bayesian networks from observational data. In contrast to
recent methods which accelerate the learning problem by restricting the search
space, our main contribution is a fast algorithm for score-based structure
learning which does not restrict the search space in any way and works on
high-dimensional datasets with thousands of variables. Our use of concave
regularization, as opposed to the more popular $\ell_0$ (e.g. BIC) penalty, is
new. Moreover, we provide theoretical guarantees which generalize existing
asymptotic results when the underlying distribution is Gaussian. Most notably,
our framework does not require the existence of a so-called faithful DAG
representation, and as a result the theory must handle the inherent
nonidentifiability of the estimation problem in a novel way. Finally, as a
matter of independent interest, we provide a comprehensive comparison of our
approach to several standard structure learning methods using open-source
packages developed for the R language. Based on these experiments, we show that
our algorithm is significantly faster than other competing methods while
obtaining higher sensitivity with comparable false discovery rates for
high-dimensional data. In particular, the total runtime for our method to
generate a solution path of 20 estimates for DAGs with 8000 nodes is around one
hour.
versions: [{"version": "v1", "created": "Sat, 4 Jan 2014 23:27:48 GMT"}, {"version": "v2", "created": "Sun, 4 Jan 2015 23:34:01 GMT"}]
update_date: 2015-12-24T00:00:00
authors_parsed: [["Aragam", "Bryon", ""], ["Zhou", "Qing", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.945551
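The abstract leaves the concave penalty unspecified; one standard choice in this literature, shown here purely as an illustration, is the minimax concave penalty (MCP) with parameters $\lambda > 0$ and $\gamma > 1$:

```latex
\[
  p_{\lambda,\gamma}(t)
    = \lambda \int_0^{|t|} \Bigl(1 - \frac{x}{\gamma\lambda}\Bigr)_{+}\,dx
    = \begin{cases}
        \lambda\,|t| - \dfrac{t^2}{2\gamma}, & |t| \le \gamma\lambda,\\[4pt]
        \dfrac{\gamma\lambda^2}{2},          & |t| > \gamma\lambda.
      \end{cases}
\]
```

It matches the $\ell_1$ penalty near zero but flattens for large $|t|$, which reduces the bias on large coefficients relative to the lasso.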

---

id: 1505.07002
submitter: Martin Monperrus
authors: Matias Martinez and Thomas Durieux and Jifeng Xuan and Romain Sommerard and Martin Monperrus
title: Automatic Repair of Real Bugs: An Experience Report on the Defects4J Dataset
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Defects4J is a large, peer-reviewed, structured dataset of real-world Java
bugs. Each bug in Defects4J is provided with a test suite and at least one
failing test case that triggers the bug. In this paper, we report on an
experiment to explore the effectiveness of automatic repair on Defects4J. The
result of our experiment shows that 47 bugs of the Defects4J dataset can be
automatically repaired by state-of- the-art repair. This sets a baseline for
future research on automatic repair for Java. We have manually analyzed 84
different patches to assess their real correctness. In total, 9 real Java bugs
can be correctly fixed with test-suite based repair. This analysis shows that
test-suite based repair suffers from under-specified bugs, for which trivial
and incorrect patches still pass the test suite. With respect to practical
applicability, it takes on average 14.8 minutes to find a patch. The experiment
was done on a scientific grid, totaling 17.6 days of computation time. All
their systems and experimental results are publicly available on Github in
order to facilitate future research on automatic repair.
versions: [{"version": "v1", "created": "Tue, 26 May 2015 15:21:34 GMT"}, {"version": "v2", "created": "Wed, 23 Dec 2015 11:09:46 GMT"}]
update_date: 2015-12-24T00:00:00
authors_parsed: [["Martinez", "Matias", ""], ["Durieux", "Thomas", ""], ["Xuan", "Jifeng", ""], ["Sommerard", "Romain", ""], ["Monperrus", "Martin", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: new_dataset
prob: 0.93744

---

id: 1512.07314
submitter: Moin Nabi
authors: Moin Nabi
title: Mid-level Representation for Visual Recognition
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Visual Recognition is one of the fundamental challenges in AI, where the goal
is to understand the semantics of visual data. Employing mid-level
representation, in particular, shifted the paradigm in visual recognition. The
mid-level image/video representation involves discovering and training a set of
mid-level visual patterns (e.g., parts and attributes) and representing a given
image/video utilizing them. The mid-level patterns can be extracted from images
and videos using the motion and appearance information of visual phenomena.
This thesis targets employing mid-level representations for different
high-level visual recognition tasks, namely (i)image understanding and
(ii)video understanding.
In the case of image understanding, we focus on object detection/recognition
task. We investigate discovering and learning a set of mid-level patches to
be used for representing the images of an object category. We specifically
employ the discriminative patches in a subcategory-aware webly-supervised
fashion. We, additionally, study the outcomes provided by employing the
subcategory-based models for undoing dataset bias.
versions: [{"version": "v1", "created": "Wed, 23 Dec 2015 00:45:41 GMT"}]
update_date: 2015-12-24T00:00:00
authors_parsed: [["Nabi", "Moin", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.947284

---

id: 1512.07502
submitter: J.T. Turner
authors: J.T. Turner, David Aha, Leslie Smith, Kalyan Moy Gupta
title: Convolutional Architecture Exploration for Action Recognition and Image Classification
comments: 12 pages. 11 tables. 0 Images. Written Summer 2014
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Convolutional Architecture for Fast Feature Encoding (CAFFE) [11] is a
software package for the training, classifying, and feature extraction of
images. The UCF Sports Action dataset is a widely used machine learning dataset
that has 200 videos taken in 720x480 resolution of 9 different sporting
activities: diving, golf swinging, kicking, lifting, horseback riding,
running, skateboarding, swinging (various gymnastics), and walking. In this
report we describe a caffe feature extraction pipeline of images taken from
the videos of the UCF Sports Action dataset. A similar test was performed on
overfeat, and results were inferior to caffe. This study is intended to explore
the architecture and hyper parameters needed for effective static analysis of
action in videos and classification over a variety of image datasets.
versions: [{"version": "v1", "created": "Wed, 23 Dec 2015 14:54:43 GMT"}]
update_date: 2015-12-24T00:00:00
authors_parsed: [["Turner", "J. T.", ""], ["Aha", "David", ""], ["Smith", "Leslie", ""], ["Gupta", "Kalyan Moy", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.683525

---

id: 1512.07080
submitter: Toby Perrett
authors: Toby Perrett, Majid Mirmehdi, Eduardo Dias
title: Cost-based Feature Transfer for Vehicle Occupant Classification
comments: 9 pages, 4 figures, 5 tables
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Knowledge of human presence and interaction in a vehicle is of growing
interest to vehicle manufacturers for design and safety purposes. We present a
framework to perform the tasks of occupant detection and occupant
classification for automatic child locks and airbag suppression. It operates
for all passenger seats, using a single overhead camera. A transfer learning
technique is introduced to make full use of training data from all seats whilst
still maintaining some control over the bias, necessary for a system designed
to penalize certain misclassifications more than others. An evaluation is
performed on a challenging dataset with both weighted and unweighted
classifiers, demonstrating the effectiveness of the transfer process.
versions: [{"version": "v1", "created": "Tue, 22 Dec 2015 13:35:10 GMT"}]
update_date: 2015-12-23T00:00:00
authors_parsed: [["Perrett", "Toby", ""], ["Mirmehdi", "Majid", ""], ["Dias", "Eduardo", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.94545

---

id: 1512.07155
submitter: Sarah Adel Bargal
authors: Shugao Ma, Sarah Adel Bargal, Jianming Zhang, Leonid Sigal, Stan Sclaroff
title: Do Less and Achieve More: Training CNNs for Action Recognition Utilizing Action Images from the Web
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Recently, attempts have been made to collect millions of videos to train CNN
models for action recognition in videos. However, curating such large-scale
video datasets requires immense human labor, and training CNNs on millions of
videos demands huge computational resources. In contrast, collecting action
images from the Web is much easier and training on images requires much less
computation. In addition, labeled web images tend to contain discriminative
action poses, which highlight discriminative portions of a video's temporal
progression. We explore the question of whether we can utilize web action
images to train better CNN models for action recognition in videos. We collect
23.8K manually filtered images from the Web that depict the 101 actions in the
UCF101 action video dataset. We show that by utilizing web action images along
with videos in training, significant performance boosts of CNN models can be
achieved. We then investigate the scalability of the process by leveraging
crawled web images (unfiltered) for UCF101 and ActivityNet. We replace 16.2M
video frames by 393K unfiltered images and get comparable performance.
versions: [{"version": "v1", "created": "Tue, 22 Dec 2015 16:52:19 GMT"}]
update_date: 2015-12-23T00:00:00
authors_parsed: [["Ma", "Shugao", ""], ["Bargal", "Sarah Adel", ""], ["Zhang", "Jianming", ""], ["Sigal", "Leonid", ""], ["Sclaroff", "Stan", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.934694

---

id: 1505.06027
submitter: Piotr Bojanowski
authors: Piotr Bojanowski (WILLOW, LIENS), Rémi Lajugie (LIENS, SIERRA), Edouard Grave (APAM), Francis Bach (LIENS, SIERRA), Ivan Laptev (WILLOW, LIENS), Jean Ponce (WILLOW, LIENS), Cordelia Schmid (LEAR)
title: Weakly-Supervised Alignment of Video With Text
comments: ICCV 2015 - IEEE International Conference on Computer Vision, Dec 2015, Santiago, Chile
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Suppose that we are given a set of videos, along with natural language
descriptions in the form of multiple sentences (e.g., manual annotations, movie
scripts, sport summaries etc.), and that these sentences appear in the same
temporal order as their visual counterparts. We propose in this paper a method
for aligning the two modalities, i.e., automatically providing a time stamp for
every sentence. Given vectorial features for both video and text, we propose to
cast this task as a temporal assignment problem, with an implicit linear
mapping between the two feature modalities. We formulate this problem as an
integer quadratic program, and solve its continuous convex relaxation using an
efficient conditional gradient algorithm. Several rounding procedures are
proposed to construct the final integer solution. After demonstrating
significant improvements over the state of the art on the related task of
aligning video with symbolic labels [7], we evaluate our method on a
challenging dataset of videos with associated textual descriptions [36], using
both bag-of-words and continuous representations for text.
versions: [{"version": "v1", "created": "Fri, 22 May 2015 11:08:39 GMT"}, {"version": "v2", "created": "Mon, 21 Dec 2015 14:57:40 GMT"}]
update_date: 2015-12-22T00:00:00
authors_parsed: [["Bojanowski", "Piotr", "", "WILLOW, LIENS"], ["Lajugie", "Rémi", "", "LIENS, SIERRA"], ["Grave", "Edouard", "", "APAM"], ["Bach", "Francis", "", "LIENS, SIERRA"], ["Laptev", "Ivan", "", "WILLOW, LIENS"], ["Ponce", "Jean", "", "WILLOW, LIENS"], ["Schmid", "Cordelia", "", "LEAR"]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.837088

---

id: 1512.05172
submitter: Steven Weber
authors: Ni An and Steven Weber
title: On the performance overhead tradeoff of distributed principal component analysis via data partitioning
comments: 6 pages, 6 figures, submitted to CISS 2016
journal-ref: null
doi: null
report-no: null
categories: cs.DC cs.NI cs.PF
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Principal component analysis (PCA) is not only a fundamental dimension
reduction method, but is also a widely used network anomaly detection
technique. Traditionally, PCA is performed in a centralized manner, which has
poor scalability for large distributed systems, on account of the large network
bandwidth cost required to gather the distributed state at a fusion center.
Consequently, several recent works have proposed various distributed PCA
algorithms aiming to reduce the communication overhead incurred by PCA without
losing its inferential power. This paper evaluates the tradeoff between
communication cost and solution quality of two distributed PCA algorithms on a
real domain name system (DNS) query dataset from a large network. We also apply
the distributed PCA algorithm in the area of network anomaly detection and
demonstrate that the detection accuracy of both distributed PCA-based methods
has little degradation in quality, yet achieves significant savings in
communication bandwidth.
versions: [{"version": "v1", "created": "Wed, 16 Dec 2015 13:35:47 GMT"}, {"version": "v2", "created": "Fri, 18 Dec 2015 14:07:29 GMT"}, {"version": "v3", "created": "Mon, 21 Dec 2015 13:01:59 GMT"}]
update_date: 2015-12-22T00:00:00
authors_parsed: [["An", "Ni", ""], ["Weber", "Steven", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.949295
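The classical subspace method behind PCA-based network anomaly detection projects each sample onto the top-$k$ principal subspace and scores it by the energy left in the residual subspace. A centralized NumPy sketch, for contrast with the distributed variants the paper evaluates:

```python
import numpy as np

def residual_scores(X, k):
    # X: (n_samples, n_features), e.g. per-interval counts from DNS query logs.
    Xc = X - X.mean(axis=0)
    # Top-k principal directions from the SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                        # (n_features, k) principal subspace
    residual = Xc - Xc @ P @ P.T        # component outside the principal subspace
    return np.sum(residual**2, axis=1)  # squared prediction error per sample

# Samples whose score exceeds a chosen threshold are flagged as anomalous.
```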

---

id: 1512.06216
submitter: Hao Zhang
authors: Hao Zhang, Zhiting Hu, Jinliang Wei, Pengtao Xie, Gunhee Kim, Qirong Ho and Eric Xing
title: Poseidon: A System Architecture for Efficient GPU-based Deep Learning on Multiple Machines
comments: 14 pages, 8 figures, 6 tables
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.CV cs.DC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Deep learning (DL) has achieved notable successes in many machine learning
tasks. A number of frameworks have been developed to expedite the process of
designing and training deep neural networks (DNNs), such as Caffe, Torch and
Theano. Currently they can harness multiple GPUs on a single machine, but are
unable to use GPUs that are distributed across multiple machines; as even
average-sized DNNs can take days to train on a single GPU with 100s of GBs to
TBs of data, distributed GPUs present a prime opportunity for scaling up DL.
However, the limited bandwidth available on commodity Ethernet networks
presents a bottleneck to distributed GPU training, and prevents its trivial
realization.
To investigate how to adapt existing frameworks to efficiently support
distributed GPUs, we propose Poseidon, a scalable system architecture for
distributed inter-machine communication in existing DL frameworks. We integrate
Poseidon with Caffe and evaluate its performance at training DNNs for object
recognition. Poseidon features three key contributions that accelerate DNN
training on clusters: (1) a three-level hybrid architecture that allows
Poseidon to support both CPU-only and GPU-equipped clusters, (2) a distributed
wait-free backpropagation (DWBP) algorithm to improve GPU utilization and to
balance communication, and (3) a structure-aware communication protocol (SACP)
to minimize communication overheads. We empirically show that Poseidon
converges to the same objectives as a single machine, and achieves state-of-the-art
training speedup across multiple models and well-established datasets using a
commodity GPU cluster of 8 nodes (e.g. 4.5x speedup on AlexNet, 4x on
GoogLeNet, 4x on CIFAR-10). On the much larger ImageNet22K dataset, Poseidon
with 8 nodes achieves better speedup and competitive accuracy to recent
CPU-based distributed systems such as Adam and Le et al., which use 10s to
1000s of nodes.
versions: [{"version": "v1", "created": "Sat, 19 Dec 2015 09:55:37 GMT"}]
update_date: 2015-12-22T00:00:00
authors_parsed: [["Zhang", "Hao", ""], ["Hu", "Zhiting", ""], ["Wei", "Jinliang", ""], ["Xie", "Pengtao", ""], ["Kim", "Gunhee", ""], ["Ho", "Qirong", ""], ["Xing", "Eric", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.946448

---

id: 1512.06709
submitter: Xiaoxia Sun
authors: Xiaoxia Sun, Nasser M. Nasrabadi and Trac D. Tran
title: Sparse Coding with Fast Image Alignment via Large Displacement Optical Flow
comments: ICASSP 2016
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Sparse representation-based classifiers have shown outstanding accuracy and
robustness in image classification tasks even with the presence of intense
noise and occlusion. However, it has been discovered that the performance
degrades significantly either when test image is not aligned with the
dictionary atoms or the dictionary atoms themselves are not aligned with each
other, in which cases the sparse linear representation assumption fails. In
this paper, having both training and test images misaligned, we introduce a
novel sparse coding framework that is able to efficiently adapt the dictionary
atoms to the test image via large displacement optical flow. In the proposed
algorithm, every dictionary atom is automatically aligned with the input image
and the sparse code is then recovered using the adapted dictionary atoms. A
corresponding supervised dictionary learning algorithm is also developed for
the proposed framework. Experimental results on digit recognition datasets
verify the efficacy and robustness of the proposed algorithm.
versions: [{"version": "v1", "created": "Mon, 21 Dec 2015 17:10:35 GMT"}]
update_date: 2015-12-22T00:00:00
authors_parsed: [["Sun", "Xiaoxia", ""], ["Nasrabadi", "Nasser M.", ""], ["Tran", "Trac D.", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.948822
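Once every dictionary atom has been warped to the test image, recovering the sparse code is a standard $\ell_1$-regularized least-squares problem. A generic ISTA sketch (a stand-in solver; the paper's solver may differ):

```python
import numpy as np

def ista(D, y, lam=0.1, iters=200):
    # Minimize 0.5 * ||D x - y||^2 + lam * ||x||_1 by iterative soft-thresholding.
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, with L the squared spectral norm
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = x - step * (D.T @ (D @ x - y))                         # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # shrinkage
    return x
```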

---

id: 1511.02986
submitter: Li-Hao Yeh
authors: Li-Hao Yeh, Jonathan Dong, Jingshan Zhong, Lei Tian, Michael Chen, Gongguo Tang, Mahdi Soltanolkotabi, and Laura Waller
title: Experimental robustness of Fourier Ptychography phase retrieval algorithms
comments: null
journal-ref: Opt. Express 23, 33214-33240 (2015)
doi: 10.1364/OE.23.033214
report-no: null
categories: physics.optics cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Fourier ptychography is a new computational microscopy technique that
provides gigapixel-scale intensity and phase images with both wide
field-of-view and high resolution. By capturing a stack of low-resolution
images under different illumination angles, a nonlinear inverse algorithm can
be used to computationally reconstruct the high-resolution complex field. Here,
we compare and classify multiple proposed inverse algorithms in terms of
experimental robustness. We find that the main sources of error are noise,
aberrations and mis-calibration (i.e. model mis-match). Using simulations and
experiments, we demonstrate that the choice of cost function plays a critical
role, with amplitude-based cost functions performing better than
intensity-based ones. The reason for this is that Fourier ptychography datasets
consist of images from both brightfield and darkfield illumination,
representing a large range of measured intensities. Both noise (e.g. Poisson
noise) and model mis-match errors are shown to scale with intensity. Hence,
algorithms that use an appropriate cost function will be more tolerant to both
noise and model mis-match. Given these insights, we propose a global Newton's
method algorithm which is robust and computationally efficient. Finally, we
discuss the impact of procedures for algorithmic correction of aberrations and
mis-calibration.
versions: [{"version": "v1", "created": "Tue, 10 Nov 2015 03:45:02 GMT"}, {"version": "v2", "created": "Fri, 18 Dec 2015 07:33:10 GMT"}]
update_date: 2015-12-21T00:00:00
authors_parsed: [["Yeh", "Li-Hao", ""], ["Dong", "Jonathan", ""], ["Zhong", "Jingshan", ""], ["Tian", "Lei", ""], ["Chen", "Michael", ""], ["Tang", "Gongguo", ""], ["Soltanolkotabi", "Mahdi", ""], ["Waller", "Laura", ""]]
prompt: (title + abstract above, prefixed "TITLE:" / "ABSTRACT:")
label: no_new_dataset
prob: 0.94699
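The amplitude-versus-intensity distinction can be written explicitly. With measured intensities $I_l$ for the $l$-th illumination angle and a forward model $g_l(x)$ of the complex field estimate $x$, the two families of cost functions take the standard forms below (the paper's exact notation may differ):

```latex
\[
  f_{\mathrm{intensity}}(x) = \sum_l \bigl\|\,|g_l(x)|^2 - I_l\,\bigr\|_2^2,
  \qquad
  f_{\mathrm{amplitude}}(x) = \sum_l \bigl\|\,|g_l(x)| - \sqrt{I_l}\,\bigr\|_2^2 .
\]
```

Because Poisson noise and model mis-match both scale with intensity, the squared-intensity residual lets bright brightfield measurements dominate the fit, which is the abstract's argument for preferring the amplitude form.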

---

id: 1512.05819
submitter: Anastasios Noulas
authors: Matthew Daggitt, Anastasios Noulas, Blake Shaw, Cecilia Mascolo
title: Tracking Urban Activity Growth Globally with Big Location Data
comments: null
journal-ref: null
doi: null
report-no: null
categories: physics.soc-ph cs.SI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: In recent decades the world has experienced rates of urban growth
unparalleled in any other period of history and this growth is shaping the
environment in which an increasing proportion of us live. In this paper we use
a longitudinal dataset from Foursquare, a location-based social network, to
analyse urban growth across 100 major cities worldwide.
Initially we explore how urban growth differs in cities across the world. We
show that there exists a strong spatial correlation, with nearby pairs of
cities more likely to share similar growth profiles than remote pairs of
cities. Subsequently we investigate how growth varies inside cities and
demonstrate that, given the existing local density of places,
higher-than-expected growth is highly localised while lower-than-expected
growth is more diffuse. Finally we attempt to use the dataset to characterise
competition between new and existing venues. By defining a measure based on the
change in throughput of a venue before and after the opening of a new nearby
venue, we demonstrate which venue types have a positive effect on venues of the
same type and which have a negative effect. For example, our analysis confirms
the hypothesis that there is a large degree of competition between bookstores, in
the sense that existing bookstores normally experience a notable drop in
footfall after a new bookstore opens nearby. Other place categories however,
such as Airport Gates or Museums, have a cooperative effect and their presence
fosters higher traffic volumes to nearby places of the same type.
| [
{
"version": "v1",
"created": "Thu, 17 Dec 2015 22:43:11 GMT"
}
] | 2015-12-21T00:00:00 | [
[
"Daggitt",
"Matthew",
""
],
[
"Noulas",
"Anastasios",
""
],
[
"Shaw",
"Blake",
""
],
[
"Mascolo",
"Cecilia",
""
]
] | TITLE: Tracking Urban Activity Growth Globally with Big Location Data
ABSTRACT: In recent decades the world has experienced rates of urban growth
unparalleled in any other period of history and this growth is shaping the
environment in which an increasing proportion of us live. In this paper we use
a longitudinal dataset from Foursquare, a location-based social network, to
analyse urban growth across 100 major cities worldwide.
Initially we explore how urban growth differs in cities across the world. We
show that there exists a strong spatial correlation, with nearby pairs of
cities more likely to share similar growth profiles than remote pairs of
cities. Subsequently we investigate how growth varies inside cities and
demonstrate that, given the existing local density of places,
higher-than-expected growth is highly localised while lower-than-expected
growth is more diffuse. Finally we attempt to use the dataset to characterise
competition between new and existing venues. By defining a measure based on the
change in throughput of a venue before and after the opening of a new nearby
venue, we demonstrate which venue types have a positive effect on venues of the
same type and which have a negative effect. For example, our analysis confirms
the hypothesis that there is a large degree of competition between bookstores, in
the sense that existing bookstores normally experience a notable drop in
footfall after a new bookstore opens nearby. Other place categories however,
such as Airport Gates or Museums, have a cooperative effect and their presence
fosters higher traffic volumes to nearby places of the same type.
| no_new_dataset | 0.937612 |
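A minimal sketch of the kind of spatial-correlation analysis described above: compare the similarity of two cities' growth profiles with their great-circle distance. The city names, coordinates, and monthly venue counts below are invented for illustration.

```python
import numpy as np
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    # Great-circle distance in kilometres between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

# Hypothetical monthly new-venue counts (growth profiles) and coordinates.
profiles = {"London": [120, 135, 150, 160], "Paris": [110, 128, 146, 155],
            "Tokyo": [90, 82, 95, 88]}
coords = {"London": (51.51, -0.13), "Paris": (48.86, 2.35),
          "Tokyo": (35.68, 139.69)}

def pair_stats(a, b):
    sim = np.corrcoef(profiles[a], profiles[b])[0, 1]
    return round(haversine_km(coords[a], coords[b])), round(sim, 2)

print(pair_stats("London", "Paris"))  # nearby pair, similar profile
print(pair_stats("London", "Tokyo"))  # remote pair, dissimilar profile
```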
1512.05986 | Vlado Menkovski | Vlado Menkovski, Zharko Aleksovski, Axel Saalbach, Hannes Nickisch | Can Pretrained Neural Networks Detect Anatomy? | NIPS 2015 Workshop on Machine Learning in Healthcare | null | null | null | cs.CV cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks demonstrated outstanding empirical results in
computer vision and speech recognition tasks where labeled training data is
abundant. In medical imaging, there is a huge variety of possible imaging
modalities and contrasts, where annotated data is usually very scarce. We
present two approaches to deal with this challenge. A network pretrained in a
different domain with abundant data is used as a feature extractor, while a
subsequent classifier is trained on a small target dataset; and a deep
architecture trained with heavy augmentation and equipped with sophisticated
regularization methods. We test the approaches on a corpus of X-ray images to
design an anatomy detection system.
| [
{
"version": "v1",
"created": "Fri, 18 Dec 2015 15:16:31 GMT"
}
] | 2015-12-21T00:00:00 | [
[
"Menkovski",
"Vlado",
""
],
[
"Aleksovski",
"Zharko",
""
],
[
"Saalbach",
"Axel",
""
],
[
"Nickisch",
"Hannes",
""
]
] | TITLE: Can Pretrained Neural Networks Detect Anatomy?
ABSTRACT: Convolutional neural networks demonstrated outstanding empirical results in
computer vision and speech recognition tasks where labeled training data is
abundant. In medical imaging, there is a huge variety of possible imaging
modalities and contrasts, where annotated data is usually very scarce. We
present two approaches to deal with this challenge. A network pretrained in a
different domain with abundant data is used as a feature extractor, while a
subsequent classifier is trained on a small target dataset; and a deep
architecture trained with heavy augmentation and equipped with sophisticated
regularization methods. We test the approaches on a corpus of X-ray images to
design an anatomy detection system.
| no_new_dataset | 0.951142 |
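A minimal PyTorch/scikit-learn sketch of the first approach above: a network pretrained in a different domain as a frozen feature extractor, with a small classifier trained on the target set. ResNet-18 stands in for whatever backbone one might choose, and the random tensors stand in for a small annotated X-ray set.

```python
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

# Pretrained ImageNet backbone used as a fixed feature extractor; weights
# are downloaded on first use.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the ImageNet classification head
backbone.eval()

@torch.no_grad()
def extract(images):                # images: (N, 3, 224, 224) float tensor
    return backbone(images).numpy()

# Hypothetical small labeled target set (stand-in for annotated X-rays).
x_train = torch.randn(32, 3, 224, 224)
y_train = torch.randint(0, 4, (32,)).numpy()
clf = LogisticRegression(max_iter=1000).fit(extract(x_train), y_train)
```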
1512.06017 | Dmytro Zubov | Dmytro Zubov | Cloud Computation and Google Earth Visualization of Heat/Cold Waves: A
Nonanticipative Long-Range Forecasting Case Study | 10 pages, 2 figures, 4 tables, 30 references. arXiv admin note: text
overlap with arXiv:1507.03283 | null | null | null | cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Long-range forecasting of heat/cold waves is a topical issue nowadays. High
computational complexity of the design of numerical and statistical models is a
bottleneck for the forecast process. In this work, Windows Server 2012 R2
virtual machines are used as a high-performance tool for the speed-up of the
computational process. Six D-series and one standard tier A-series virtual
machines were hosted in Microsoft Azure public cloud for this purpose.
Visualization of the forecasted data is based on the Google Earth Pro virtual
globe in an ASP.NET web site at http://gearth.azurewebsites.net (prototype),
where a KMZ file holds the geographic placemarks. The long-range predictions of
the heat/cold waves are computed for several specifically located places based
on a nonanticipative analog algorithm. The arguments of the forecast models are
datasets from around the world, which reflects the concept of teleconnections.
This methodology does not require the probability distribution to design the
forecast models and/or calculate the predictions. Heat waves at Annaba
(Algeria) are discussed in detail. Up to 36.4% of heat waves are specifically
predicted. Up to 33.3% of cold waves are specifically predicted for other four
locations around the world. The proposed approach is 100% accurate if the signs
of predicted and actual values are compared against the climatological
baseline. These high-accuracy predictions were achieved through an
interdisciplinary approach, with advanced computer science techniques, chiefly
public cloud computing and the Google Earth Pro virtual globe, forming the
major part of the work.
| [
{
"version": "v1",
"created": "Fri, 18 Dec 2015 16:24:13 GMT"
}
] | 2015-12-21T00:00:00 | [
[
"Zubov",
"Dmytro",
""
]
] | TITLE: Cloud Computation and Google Earth Visualization of Heat/Cold Waves: A
Nonanticipative Long-Range Forecasting Case Study
ABSTRACT: Long-range forecasting of heat/cold waves is a topical issue nowadays. High
computational complexity of the design of numerical and statistical models is a
bottleneck for the forecast process. In this work, Windows Server 2012 R2
virtual machines are used as a high-performance tool for the speed-up of the
computational process. Six D-series and one standard tier A-series virtual
machines were hosted in Microsoft Azure public cloud for this purpose.
Visualization of the forecasted data is based on the Google Earth Pro virtual
globe in an ASP.NET web site at http://gearth.azurewebsites.net (prototype),
where a KMZ file holds the geographic placemarks. The long-range predictions of
the heat/cold waves are computed for several specifically located places based
on a nonanticipative analog algorithm. The arguments of the forecast models are
datasets from around the world, which reflects the concept of teleconnections.
This methodology does not require the probability distribution to design the
forecast models and/or calculate the predictions. Heat waves at Annaba
(Algeria) are discussed in detail. Up to 36.4% of heat waves are specifically
predicted. Up to 33.3% of cold waves are specifically predicted for other four
locations around the world. The proposed approach is 100% accurate if the signs
of predicted and actual values are compared against the climatological
baseline. These high-accuracy predictions were achieved through an
interdisciplinary approach, with advanced computer science techniques, chiefly
public cloud computing and the Google Earth Pro virtual globe, forming the
major part of the work.
| no_new_dataset | 0.958304 |
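The abstract names a nonanticipative analog algorithm without detailing it; the sketch below is our simplified reading of the analog idea, not the authors' implementation: match the most recent window of a series against history using only past data, and forecast with what followed the best match. All parameters are illustrative.

```python
import numpy as np

def analog_forecast(series, window, horizon):
    # Find the historical window most similar to the latest one and return
    # the values that followed it; only past data is used (nonanticipative).
    recent = series[-window:]
    best_i, best_d = 0, np.inf
    for i in range(len(series) - 2 * window - horizon):
        d = np.linalg.norm(series[i:i + window] - recent)
        if d < best_d:
            best_i, best_d = i, d
    return series[best_i + window: best_i + window + horizon]

temps = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)
print(analog_forecast(temps, window=30, horizon=7))
```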
1506.04089 | Hongyuan Mei | Hongyuan Mei, Mohit Bansal, Matthew R. Walter | Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to
Action Sequences | To appear at AAAI 2016 (and an extended version of a NIPS 2015
Multimodal Machine Learning workshop paper) | null | null | null | cs.CL cs.AI cs.LG cs.NE cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a neural sequence-to-sequence model for direction following, a
task that is essential to realizing effective autonomous agents. Our
alignment-based encoder-decoder model with long short-term memory recurrent
neural networks (LSTM-RNN) translates natural language instructions to action
sequences based upon a representation of the observable world state. We
introduce a multi-level aligner that empowers our model to focus on sentence
"regions" salient to the current world state by using multiple abstractions of
the input sentence. In contrast to existing methods, our model uses no
specialized linguistic resources (e.g., parsers) or task-specific annotations
(e.g., seed lexicons). It is therefore generalizable, yet still achieves the
best results reported to date on a benchmark single-sentence dataset and
competitive results for the limited-training multi-sentence setting. We analyze
our model through a series of ablations that elucidate the contributions of the
primary components of our model.
| [
{
"version": "v1",
"created": "Fri, 12 Jun 2015 18:05:00 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Jul 2015 19:22:33 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Dec 2015 20:46:09 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Dec 2015 17:57:42 GMT"
}
] | 2015-12-18T00:00:00 | [
[
"Mei",
"Hongyuan",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Walter",
"Matthew R.",
""
]
] | TITLE: Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to
Action Sequences
ABSTRACT: We propose a neural sequence-to-sequence model for direction following, a
task that is essential to realizing effective autonomous agents. Our
alignment-based encoder-decoder model with long short-term memory recurrent
neural networks (LSTM-RNN) translates natural language instructions to action
sequences based upon a representation of the observable world state. We
introduce a multi-level aligner that empowers our model to focus on sentence
"regions" salient to the current world state by using multiple abstractions of
the input sentence. In contrast to existing methods, our model uses no
specialized linguistic resources (e.g., parsers) or task-specific annotations
(e.g., seed lexicons). It is therefore generalizable, yet still achieves the
best results reported to-date on a benchmark single-sentence dataset and
competitive results for the limited-training multi-sentence setting. We analyze
our model through a series of ablations that elucidate the contributions of the
primary components of our model.
| no_new_dataset | 0.942295 |
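A bare-bones PyTorch sketch of the alignment step at the heart of such encoder-decoder models: attention weights over encoder states, conditioned on the current decoder state. The paper's multi-level aligner combines several such distributions; the dot-product scoring used here is the simplest illustrative choice.

```python
import torch
import torch.nn.functional as F

def attend(decoder_state, encoder_states):
    # encoder_states: (T, H) hidden states over the instruction words;
    # decoder_state: (H,) current decoder hidden state.
    scores = encoder_states @ decoder_state      # (T,) alignment scores
    alpha = F.softmax(scores, dim=0)             # attention distribution
    context = alpha @ encoder_states             # (H,) weighted context vector
    return context, alpha

enc = torch.randn(12, 64)                        # 12 words, hidden size 64
ctx, alpha = attend(torch.randn(64), enc)
```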
1507.02772 | Anoop Cherian | Anoop Cherian and Suvrit Sra | Riemannian Dictionary Learning and Sparse Coding for Positive Definite
Matrices | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data encoded as symmetric positive definite (SPD) matrices frequently arise
in many areas of computer vision and machine learning. While these matrices
form an open subset of the Euclidean space of symmetric matrices, viewing them
through the lens of non-Euclidean Riemannian geometry often turns out to be
better suited in capturing several desirable data properties. However,
formulating classical machine learning algorithms within such a geometry is
often non-trivial and computationally expensive. Inspired by the great success
of dictionary learning and sparse coding for vector-valued data, our goal in
this paper is to represent data in the form of SPD matrices as sparse conic
combinations of SPD atoms from a learned dictionary via a Riemannian geometric
approach. To that end, we formulate a novel Riemannian optimization objective
for dictionary learning and sparse coding in which the representation loss is
characterized via the affine invariant Riemannian metric. We also present a
computationally simple algorithm for optimizing our model. Experiments on
several computer vision datasets demonstrate superior classification and
retrieval performance using our approach when compared to sparse coding via
alternative non-Riemannian formulations.
| [
{
"version": "v1",
"created": "Fri, 10 Jul 2015 03:18:50 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Dec 2015 03:33:50 GMT"
}
] | 2015-12-18T00:00:00 | [
[
"Cherian",
"Anoop",
""
],
[
"Sra",
"Suvrit",
""
]
] | TITLE: Riemannian Dictionary Learning and Sparse Coding for Positive Definite
Matrices
ABSTRACT: Data encoded as symmetric positive definite (SPD) matrices frequently arise
in many areas of computer vision and machine learning. While these matrices
form an open subset of the Euclidean space of symmetric matrices, viewing them
through the lens of non-Euclidean Riemannian geometry often turns out to be
better suited in capturing several desirable data properties. However,
formulating classical machine learning algorithms within such a geometry is
often non-trivial and computationally expensive. Inspired by the great success
of dictionary learning and sparse coding for vector-valued data, our goal in
this paper is to represent data in the form of SPD matrices as sparse conic
combinations of SPD atoms from a learned dictionary via a Riemannian geometric
approach. To that end, we formulate a novel Riemannian optimization objective
for dictionary learning and sparse coding in which the representation loss is
characterized via the affine invariant Riemannian metric. We also present a
computationally simple algorithm for optimizing our model. Experiments on
several computer vision datasets demonstrate superior classification and
retrieval performance using our approach when compared to sparse coding via
alternative non-Riemannian formulations.
| no_new_dataset | 0.949153 |
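The affine-invariant Riemannian metric that characterizes the representation loss above has a short closed form; a sketch using SciPy follows (small imaginary round-off from `sqrtm`/`logm` is discarded).

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def airm_distance(A, B):
    # d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F  for SPD matrices A, B.
    A_inv_sqrt = np.linalg.inv(sqrtm(A)).real
    M = A_inv_sqrt @ B @ A_inv_sqrt
    return np.linalg.norm(logm(M).real, "fro")

rng = np.random.default_rng(1)
Q1, Q2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
A = Q1 @ Q1.T + 4 * np.eye(4)      # random SPD matrices
B = Q2 @ Q2.T + 4 * np.eye(4)
print(airm_distance(A, B))
```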
1509.08089 | Junzhou Zhao | Pinghui Wang, Jing Tao, Junzhou Zhao, Xiaohong Guan | Moss: A Scalable Tool for Efficiently Sampling and Counting 4- and
5-Node Graphlets | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Counting the frequencies of 3-, 4-, and 5-node undirected motifs (also known
as graphlets) is widely used for understanding complex networks such as social
and biology networks. However, it is a great challenge to compute these metrics
for a large graph due to the intensive computation. Despite recent efforts to
count triangles (i.e., 3-node undirected motif counting), little attention has
been given to developing scalable tools that can be used to characterize 4- and
5-node motifs. In this paper, we develop computationally efficient methods to
sample and count 4- and 5-node undirected motifs. Our methods provide unbiased
estimators of motif frequencies, and we derive simple and exact formulas for
the variances of the estimators. Moreover, our methods are designed to fit
vertex centric programming models, so they can be easily applied to current
graph computing systems such as Pregel and GraphLab. We conduct experiments on
a variety of real-world datasets, and experimental results show that our methods
are several orders of magnitude faster than the state-of-the-art methods under
the same estimation errors.
| [
{
"version": "v1",
"created": "Sun, 27 Sep 2015 12:04:58 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Oct 2015 06:39:38 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Oct 2015 04:18:06 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Dec 2015 13:07:06 GMT"
}
] | 2015-12-18T00:00:00 | [
[
"Wang",
"Pinghui",
""
],
[
"Tao",
"Jing",
""
],
[
"Zhao",
"Junzhou",
""
],
[
"Guan",
"Xiaohong",
""
]
] | TITLE: Moss: A Scalable Tool for Efficiently Sampling and Counting 4- and
5-Node Graphlets
ABSTRACT: Counting the frequencies of 3-, 4-, and 5-node undirected motifs (also known
as graphlets) is widely used for understanding complex networks such as social
and biology networks. However, it is a great challenge to compute these metrics
for a large graph due to the intensive computation. Despite recent efforts to
count triangles (i.e., 3-node undirected motif counting), little attention has
been given to developing scalable tools that can be used to characterize 4- and
5-node motifs. In this paper, we develop computationally efficient methods to
sample and count 4- and 5-node undirected motifs. Our methods provide unbiased
estimators of motif frequencies, and we derive simple and exact formulas for
the variances of the estimators. Moreover, our methods are designed to fit
vertex centric programming models, so they can be easily applied to current
graph computing systems such as Pregel and GraphLab. We conduct experiments on
a variety of real-world datasets, and experimental results show that our methods
are several orders of magnitude faster than the state-of-the-art methods under
the same estimation errors.
| no_new_dataset | 0.947332 |
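For intuition about unbiased motif-sampling estimators of this kind, here is the classical 3-node analogue (wedge sampling for triangles) in Python with NetworkX; the paper's 4- and 5-node estimators follow the same sample-and-rescale pattern, with more involved subgraph classes.

```python
import random
import networkx as nx

def estimate_triangles(G, samples=100_000):
    # Sample a wedge (path of length 2) uniformly, test whether it closes,
    # and rescale; each triangle contains exactly 3 wedges.
    nodes = list(G.nodes())
    wedges = [G.degree(v) * (G.degree(v) - 1) // 2 for v in nodes]
    total = sum(wedges)
    closed = 0
    for _ in range(samples):
        v = random.choices(nodes, weights=wedges)[0]
        a, b = random.sample(list(G.neighbors(v)), 2)
        closed += G.has_edge(a, b)
    return closed / samples * total / 3

G = nx.erdos_renyi_graph(300, 0.05, seed=7)
print(estimate_triangles(G), sum(nx.triangles(G).values()) // 3)
```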
1512.05467 | Marian-Andrei Rizoiu | Marian-Andrei Rizoiu, Julien Velcin, St\'ephane Lallich | Unsupervised Feature Construction for Improving Data Representation and
Semantics | null | Journal of Intelligent Information Systems, vol. 40, iss. 3, pp.
501-527, 2013 | 10.1007/s10844-013-0235-x | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The feature-based format is the main data representation used by machine
learning algorithms. When the features do not properly describe the initial
data, performance starts to degrade. Some algorithms address this problem by
internally changing the representation space, but the newly-constructed
features are rarely comprehensible. We seek to construct, in an unsupervised
way, new features that are more appropriate for describing a given dataset and,
at the same time, comprehensible for a human user. We propose two algorithms
that construct the new features as conjunctions of the initial primitive
features or their negations. The generated feature sets have reduced
correlations between features and succeed in catching some of the hidden
relations between individuals in a dataset. For example, a feature like $sky
\wedge \neg building \wedge panorama$ would be true for non-urban images and is
more informative than simple features expressing the presence or the absence of
an object. The notion of Pareto optimality is used to evaluate feature sets and
to obtain a balance between total correlation and the complexity of the
resulting feature set. Statistical hypothesis testing is used to
automatically determine the values of the parameters used for constructing a
data-dependent feature set. We experimentally show that our approaches achieve
the construction of informative feature sets for multiple datasets.
| [
{
"version": "v1",
"created": "Thu, 17 Dec 2015 05:18:05 GMT"
}
] | 2015-12-18T00:00:00 | [
[
"Rizoiu",
"Marian-Andrei",
""
],
[
"Velcin",
"Julien",
""
],
[
"Lallich",
"Stéphane",
""
]
] | TITLE: Unsupervised Feature Construction for Improving Data Representation and
Semantics
ABSTRACT: The feature-based format is the main data representation used by machine
learning algorithms. When the features do not properly describe the initial
data, performance starts to degrade. Some algorithms address this problem by
internally changing the representation space, but the newly-constructed
features are rarely comprehensible. We seek to construct, in an unsupervised
way, new features that are more appropriate for describing a given dataset and,
at the same time, comprehensible for a human user. We propose two algorithms
that construct the new features as conjunctions of the initial primitive
features or their negations. The generated feature sets have reduced
correlations between features and succeed in catching some of the hidden
relations between individuals in a dataset. For example, a feature like $sky
\wedge \neg building \wedge panorama$ would be true for non-urban images and is
more informative than simple features expressing the presence or the absence of
an object. The notion of Pareto optimality is used to evaluate feature sets and
to obtain a balance between total correlation and the complexity of the
resulting feature set. Statistical hypothesis testing is used to
automatically determine the values of the parameters used for constructing a
data-dependent feature set. We experimentally show that our approaches achieve
the construction of informative feature sets for multiple datasets.
| no_new_dataset | 0.94699 |
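A toy sketch of constructing conjunctions of primitive features and their negations from boolean data. The pairing rule and thresholds are simplifications of the paper's algorithms; all names are illustrative.

```python
import numpy as np
from itertools import combinations

def conjunction_features(X, names, t=0.4):
    # X: boolean matrix (individuals x primitive features).
    new_cols, new_names = [], []
    for i, j in combinations(range(X.shape[1]), 2):
        c = np.corrcoef(X[:, i], X[:, j])[0, 1]
        if c > t:                                   # co-occurring primitives
            new_cols.append(X[:, i] & X[:, j])
            new_names.append(f"{names[i]} AND {names[j]}")
        elif c < -t:                                # mutually exclusive ones
            new_cols.append(X[:, i] & ~X[:, j])
            new_names.append(f"{names[i]} AND NOT {names[j]}")
    if not new_cols:
        return X, list(names)
    return np.column_stack(new_cols), new_names
```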
1512.05484 | Mohsen Malmir | Mohsen Malmir, Karan Sikka, Deborah Forster, Ian Fasel, Javier R.
Movellan, Garrison W. Cottrell | Deep Active Object Recognition by Joint Label and Action Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An active object recognition system has the advantage of being able to act in
the environment to capture images that are more suited for training and that
lead to better performance at test time. In this paper, we propose a deep
convolutional neural network for active object recognition that simultaneously
predicts the object label, and selects the next action to perform on the object
with the aim of improving recognition performance. We treat active object
recognition as a reinforcement learning problem and derive the cost function to
train the network for joint prediction of the object label and the action. A
generative model of object similarities based on the Dirichlet distribution is
proposed and embedded in the network for encoding the state of the system. The
training is carried out by simultaneously minimizing the label and action
prediction errors using gradient descent. We empirically show that the proposed
network is able to predict both the object label and the actions on GERMS, a
dataset for active object recognition. We compare the test label prediction
accuracy of the proposed model with Dirichlet and Naive Bayes state encoding.
The results of experiments suggest that the proposed model equipped with
Dirichlet state encoding is superior in performance, and selects images that
lead to better training and higher accuracy of label prediction at test time.
| [
{
"version": "v1",
"created": "Thu, 17 Dec 2015 07:33:45 GMT"
}
] | 2015-12-18T00:00:00 | [
[
"Malmir",
"Mohsen",
""
],
[
"Sikka",
"Karan",
""
],
[
"Forster",
"Deborah",
""
],
[
"Fasel",
"Ian",
""
],
[
"Movellan",
"Javier R.",
""
],
[
"Cottrell",
"Garrison W.",
""
]
] | TITLE: Deep Active Object Recognition by Joint Label and Action Prediction
ABSTRACT: An active object recognition system has the advantage of being able to act in
the environment to capture images that are more suited for training and that
lead to better performance at test time. In this paper, we propose a deep
convolutional neural network for active object recognition that simultaneously
predicts the object label, and selects the next action to perform on the object
with the aim of improving recognition performance. We treat active object
recognition as a reinforcement learning problem and derive the cost function to
train the network for joint prediction of the object label and the action. A
generative model of object similarities based on the Dirichlet distribution is
proposed and embedded in the network for encoding the state of the system. The
training is carried out by simultaneously minimizing the label and action
prediction errors using gradient descent. We empirically show that the proposed
network is able to predict both the object label and the actions on GERMS, a
dataset for active object recognition. We compare the test label prediction
accuracy of the proposed model with Dirichlet and Naive Bayes state encoding.
The results of experiments suggest that the proposed model equipped with
Dirichlet state encoding is superior in performance, and selects images that
lead to better training and higher accuracy of label prediction at test time.
| no_new_dataset | 0.950869 |
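A tiny sketch of the Dirichlet state-encoding idea above: each observation's similarity scores update the concentration parameters of a Dirichlet belief over object labels, and the normalized parameters give the current label distribution. This is our simplified reading of the encoding, not the paper's full model; the numbers are illustrative.

```python
import numpy as np

def dirichlet_belief_update(alpha, similarity_row):
    # alpha: current concentration parameters, one per object label;
    # similarity_row: the new observation's similarity to each label.
    alpha = alpha + similarity_row
    return alpha, alpha / alpha.sum()

alpha = np.ones(5)                       # uniform prior over 5 objects
obs = np.array([0.1, 0.7, 0.1, 0.05, 0.05])
alpha, belief = dirichlet_belief_update(alpha, obs)
print(belief)
```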
1512.01715 | Hang Qi | Hang Qi, Tianfu Wu, Mun-Wai Lee, Song-Chun Zhu | A Restricted Visual Turing Test for Deep Scene and Event Understanding | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a restricted visual Turing test (VTT) for story-line
based deep understanding in long-term and multi-camera captured videos. Given a
set of videos of a scene (such as a multi-room office, a garden, and a parking
lot) and a sequence of story-line based queries, the task is to provide
answers either simply in binary form "true/false" (to a polar query) or in an
accurate natural language description (to a non-polar query). Queries, polar or
non-polar, consist of view-based queries which can be answered from a
particular camera view and scene-centered queries which involve joint
inference across different cameras. The story lines are collected to cover
spatial, temporal and causal understanding of input videos. The data and
queries distinguish our VTT from recently proposed visual question answering in
images and video captioning. A vision system is proposed to perform joint video
and query parsing which integrates different vision modules, a knowledge base
and a query engine. The system provides unified interfaces for different
modules so that individual modules can be reconfigured to test a new method. We
provide a benchmark dataset and a toolkit for ontology guided story-line query
generation, which consists of about 93.5 hours of video captured in four different
locations and 3,426 queries split into 127 story lines. We also provide a
baseline implementation and result analyses.
| [
{
"version": "v1",
"created": "Sun, 6 Dec 2015 00:40:02 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Dec 2015 19:19:25 GMT"
}
] | 2015-12-17T00:00:00 | [
[
"Qi",
"Hang",
""
],
[
"Wu",
"Tianfu",
""
],
[
"Lee",
"Mun-Wai",
""
],
[
"Zhu",
"Song-Chun",
""
]
] | TITLE: A Restricted Visual Turing Test for Deep Scene and Event Understanding
ABSTRACT: This paper presents a restricted visual Turing test (VTT) for story-line
based deep understanding in long-term and multi-camera captured videos. Given a
set of videos of a scene (such as a multi-room office, a garden, and a parking
lot) and a sequence of story-line based queries, the task is to provide
answers either simply in binary form "true/false" (to a polar query) or in an
accurate natural language description (to a non-polar query). Queries, polar or
non-polar, consist of view-based queries which can be answered from a
particular camera view and scene-centered queries which involve joint
inference across different cameras. The story lines are collected to cover
spatial, temporal and causal understanding of input videos. The data and
queries distinguish our VTT from recently proposed visual question answering in
images and video captioning. A vision system is proposed to perform joint video
and query parsing which integrates different vision modules, a knowledge base
and a query engine. The system provides unified interfaces for different
modules so that individual modules can be reconfigured to test a new method. We
provide a benchmark dataset and a toolkit for ontology guided story-line query
generation, which consists of about 93.5 hours of video captured in four different
locations and 3,426 queries split into 127 story lines. We also provide a
baseline implementation and result analyses.
| new_dataset | 0.951188 |
1512.02573 | Nour El-Mawass | Nour El-Mawass, Saad Alaboodi | Hunting for Spammers: Detecting Evolved Spammers on Twitter | null | null | null | null | cs.IR cs.CR cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Once an email problem, spam has nowadays branched into new territories with
disruptive effects. In particular, spam has established itself over the recent
years as a ubiquitous, annoying, and sometimes threatening aspect of online
social networks. Due to its prevalent existence, many works have tackled spam
on Twitter from different angles. Spam is, however, a moving target. The new
generation of spammers on Twitter has evolved into online creatures that are
not easily recognizable by old detection systems. With a strongly interconnected
spamming community, automatic tweeting scripts, and the ability to massively
create Twitter accounts with a negligible cost, spam on Twitter is becoming
smarter, fuzzier and harder to detect. Our own analysis of spam content on
Arabic trending hashtags in Saudi Arabia estimates that spam makes up about
three quarters of the total generated content. This alarming rate makes the
development of adaptive spam detection techniques a very real and pressing
need. In this paper, we analyze the spam content of trending hashtags on Saudi
Twitter, and assess the performance of previous spam detection systems on our
recently gathered dataset. Due to the escalating manipulation that
characterizes newer spamming accounts, simple manual labeling currently leads
to inaccurate results. In order to get reliable ground-truth data, we propose
an updated manual classification algorithm that avoids the deficiencies of
older manual approaches. We also adapt the previously proposed features to
respond to spammers' evasion techniques, and use these features to build a new
data-driven detection system.
| [
{
"version": "v1",
"created": "Tue, 8 Dec 2015 18:21:31 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Dec 2015 21:53:18 GMT"
}
] | 2015-12-17T00:00:00 | [
[
"El-Mawass",
"Nour",
""
],
[
"Alaboodi",
"Saad",
""
]
] | TITLE: Hunting for Spammers: Detecting Evolved Spammers on Twitter
ABSTRACT: Once an email problem, spam has nowadays branched into new territories with
disruptive effects. In particular, spam has established itself over the recent
years as a ubiquitous, annoying, and sometimes threatening aspect of online
social networks. Due to its prevalent existence, many works have tackled spam
on Twitter from different angles. Spam is, however, a moving target. The new
generation of spammers on Twitter has evolved into online creatures that are
not easily recognizable by old detection systems. With a strongly interconnected
spamming community, automatic tweeting scripts, and the ability to massively
create Twitter accounts with a negligible cost, spam on Twitter is becoming
smarter, fuzzier and harder to detect. Our own analysis of spam content on
Arabic trending hashtags in Saudi Arabia estimates that spam makes up about
three quarters of the total generated content. This alarming rate makes the
development of adaptive spam detection techniques a very real and pressing
need. In this paper, we analyze the spam content of trending hashtags on Saudi
Twitter, and assess the performance of previous spam detection systems on our
recently gathered dataset. Due to the escalating manipulation that
characterizes newer spamming accounts, simple manual labeling currently leads
to inaccurate results. In order to get reliable ground-truth data, we propose
an updated manual classification algorithm that avoids the deficiencies of
older manual approaches. We also adapt the previously proposed features to
respond to spammers' evasion techniques, and use these features to build a new
data-driven detection system.
| no_new_dataset | 0.677501 |
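Account-level features of the kind adapted in such detection systems can be computed in a few lines; the feature names below are ours, chosen for illustration, not the authors' exact set.

```python
def account_features(tweets, followers, followees, account_age_days):
    # tweets: list of tweet texts for one account; counts are illustrative.
    urls = sum("http" in t for t in tweets)
    return {
        "tweets_per_day": len(tweets) / max(account_age_days, 1),
        "url_ratio": urls / max(len(tweets), 1),
        "follower_followee_ratio": followers / max(followees, 1),
        "duplicate_ratio": 1 - len(set(tweets)) / max(len(tweets), 1),
    }

print(account_features(["buy now http://x", "buy now http://x"], 10, 900, 3))
```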
1512.03564 | Matteo Brucato | Matteo Brucato, Juan Felipe Beltran, Azza Abouzied, Alexandra Meliou | Scalable Package Queries in Relational Database Systems | Extended version of PVLDB 2016 submission | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional database queries follow a simple model: they define constraints
that each tuple in the result must satisfy. This model is computationally
efficient, as the database system can evaluate the query conditions on each
tuple individually. However, many practical, real-world problems require a
collection of result tuples to satisfy constraints collectively, rather than
individually. In this paper, we present package queries, a new query model that
extends traditional database queries to handle complex constraints and
preferences over answer sets. We develop a full-fledged package query system,
implemented on top of a traditional database engine. Our work makes several
contributions. First, we design PaQL, a SQL-based query language that supports
the declarative specification of package queries. We prove that PaQL is at
least as expressive as integer linear programming and, therefore, evaluation of
package queries is in general NP-hard. Second, we present a fundamental
evaluation strategy that combines the capabilities of databases and constraint
optimization solvers to derive solutions to package queries. The core of our
approach is a set of translation rules that transform a package query to an
integer linear program. Third, we introduce an offline data partitioning
strategy allowing query evaluation to scale to large data sizes. Fourth, we
introduce SketchRefine, a scalable algorithm for package evaluation, with
strong approximation guarantees ($(1 \pm\epsilon)^6$-factor approximation).
Finally, we present extensive experiments over real-world and benchmark data.
The results demonstrate that SketchRefine is effective at deriving high-quality
package results, and achieves runtime performance that is an order of magnitude
faster than directly using ILP solvers over large datasets.
| [
{
"version": "v1",
"created": "Fri, 11 Dec 2015 09:47:43 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Dec 2015 00:53:52 GMT"
}
] | 2015-12-17T00:00:00 | [
[
"Brucato",
"Matteo",
""
],
[
"Beltran",
"Juan Felipe",
""
],
[
"Abouzied",
"Azza",
""
],
[
"Meliou",
"Alexandra",
""
]
] | TITLE: Scalable Package Queries in Relational Database Systems
ABSTRACT: Traditional database queries follow a simple model: they define constraints
that each tuple in the result must satisfy. This model is computationally
efficient, as the database system can evaluate the query conditions on each
tuple individually. However, many practical, real-world problems require a
collection of result tuples to satisfy constraints collectively, rather than
individually. In this paper, we present package queries, a new query model that
extends traditional database queries to handle complex constraints and
preferences over answer sets. We develop a full-fledged package query system,
implemented on top of a traditional database engine. Our work makes several
contributions. First, we design PaQL, a SQL-based query language that supports
the declarative specification of package queries. We prove that PaQL is at
least as expressive as integer linear programming and, therefore, evaluation of
package queries is in general NP-hard. Second, we present a fundamental
evaluation strategy that combines the capabilities of databases and constraint
optimization solvers to derive solutions to package queries. The core of our
approach is a set of translation rules that transform a package query to an
integer linear program. Third, we introduce an offline data partitioning
strategy allowing query evaluation to scale to large data sizes. Fourth, we
introduce SketchRefine, a scalable algorithm for package evaluation, with
strong approximation guarantees ($(1 \pm\epsilon)^6$-factor approximation).
Finally, we present extensive experiments over real-world and benchmark data.
The results demonstrate that SketchRefine is effective at deriving high-quality
package results, and achieves runtime performance that is an order of magnitude
faster than directly using ILP solvers over large datasets.
| no_new_dataset | 0.942876 |
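The package-query-to-ILP translation can be illustrated with PuLP on a toy relation; the query in the comment only paraphrases PaQL's flavor (exact syntax aside). One binary variable per tuple encodes membership in the package.

```python
import pulp

# Toy relation (name, calories, protein). A package query in the spirit of
#   SELECT PACKAGE(*) FROM meals SUCH THAT SUM(calories) <= 700
#   MAXIMIZE SUM(protein)
meals = [("eggs", 300, 20), ("salad", 150, 5),
         ("steak", 500, 40), ("rice", 250, 6)]

prob = pulp.LpProblem("package_query", pulp.LpMaximize)
x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(meals))]
prob += pulp.lpSum(xi * m[2] for xi, m in zip(x, meals))         # objective
prob += pulp.lpSum(xi * m[1] for xi, m in zip(x, meals)) <= 700  # constraint
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([m[0] for xi, m in zip(x, meals) if xi.value() == 1])      # steak, salad
```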
1512.04973 | Ndapandula Nakashole | Ndapandula Nakashole | An Operator for Entity Extraction in MapReduce | 7 pages | null | null | null | cs.DB cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dictionary-based entity extraction involves finding mentions of dictionary
entities in text. Text mentions are often noisy, containing spurious or missing
words. Efficient algorithms for detecting approximate entity mentions follow
one of two general techniques. The first approach is to build an index on the
entities and perform index lookups of document substrings. The second approach
recognizes that the number of substrings generated from documents can explode
to large numbers; to get around this, it uses a filter to prune the many
substrings that do not match any dictionary entity, and then verifies, by means
of a text join, whether the remaining substrings are entity mentions of
dictionary entities. The choice between the index-based approach and the
filter & verification-based approach is a case-to-case decision as the best
approach depends on the characteristics of the input entity dictionary, for
example frequency of entity mentions. Choosing the right approach for the
setting can make a substantial difference in execution time. Making this choice
is however non-trivial as there are parameters within each of the approaches
that make the space of possible approaches very large. In this paper, we
present a cost-based operator for making the choice among execution plans for
entity extraction. Since we need to deal with large dictionaries and even
larger datasets, our operator is developed for distributed MapReduce
implementations.
| [
{
"version": "v1",
"created": "Tue, 15 Dec 2015 21:23:20 GMT"
}
] | 2015-12-17T00:00:00 | [
[
"Nakashole",
"Ndapandula",
""
]
] | TITLE: An Operator for Entity Extraction in MapReduce
ABSTRACT: Dictionary-based entity extraction involves finding mentions of dictionary
entities in text. Text mentions are often noisy, containing spurious or missing
words. Efficient algorithms for detecting approximate entity mentions follow
one of two general techniques. The first approach is to build an index on the
entities and perform index lookups of document substrings. The second approach
recognizes that the number of substrings generated from documents can explode
to large numbers; to get around this, it uses a filter to prune the many
substrings that do not match any dictionary entity, and then verifies, by means
of a text join, whether the remaining substrings are entity mentions of
dictionary entities. The choice between the index-based approach and the
filter & verification-based approach is a case-to-case decision as the best
approach depends on the characteristics of the input entity dictionary, for
example frequency of entity mentions. Choosing the right approach for the
setting can make a substantial difference in execution time. Making this choice
is however non-trivial as there are parameters within each of the approaches
that make the space of possible approaches very large. In this paper, we
present a cost-based operator for making the choice among execution plans for
entity extraction. Since we need to deal with large dictionaries and even
larger datasets, our operator is developed for distributed MapReduce
implementations.
| no_new_dataset | 0.951006 |
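A minimal map/reduce pair for the simplest (index-lookup) plan, written as plain Python generators; the cost-based operator in the paper chooses between plans like this and filter-and-verify variants. The dictionary and helper names are illustrative.

```python
DICTIONARY = {"new york", "san francisco", "los angeles"}
MAX_WORDS = 3  # longest entity length, in words

def map_phase(doc_id, text):
    # Emit (entity, mention) for every document substring found in the
    # dictionary; in a real job this runs over each mapper's input split.
    words = text.lower().split()
    for i in range(len(words)):
        for j in range(i + 1, min(i + MAX_WORDS, len(words)) + 1):
            cand = " ".join(words[i:j])
            if cand in DICTIONARY:
                yield cand, (doc_id, i)

def reduce_phase(entity, mentions):
    # Aggregate all mentions of one entity across documents.
    yield entity, sorted(mentions)

print(list(map_phase("d1", "I flew from New York to San Francisco")))
```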
1412.0781 | Zhizhen Zhao | Zhizhen Zhao, Yoel Shkolnisky, and Amit Singer | Fast Steerable Principal Component Analysis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cryo-electron microscopy nowadays often requires the analysis of hundreds of
thousands of 2D images as large as a few hundred pixels in each direction. Here
we introduce an algorithm that efficiently and accurately performs principal
component analysis (PCA) for a large set of two-dimensional images, and, for
each image, the set of its uniform rotations in the plane and their
reflections. For a dataset consisting of $n$ images of size $L \times L$
pixels, the computational complexity of our algorithm is $O(nL^3 + L^4)$, while
existing algorithms take $O(nL^4)$. The new algorithm computes the expansion
coefficients of the images in a Fourier-Bessel basis efficiently using the
non-uniform fast Fourier transform. We compare the accuracy and efficiency of
the new algorithm with traditional PCA and existing algorithms for steerable
PCA.
| [
{
"version": "v1",
"created": "Tue, 2 Dec 2014 04:24:03 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Dec 2014 18:21:40 GMT"
},
{
"version": "v3",
"created": "Sat, 16 May 2015 02:06:04 GMT"
},
{
"version": "v4",
"created": "Fri, 23 Oct 2015 02:14:53 GMT"
},
{
"version": "v5",
"created": "Tue, 15 Dec 2015 19:26:37 GMT"
}
] | 2015-12-16T00:00:00 | [
[
"Zhao",
"Zhizhen",
""
],
[
"Shkolnisky",
"Yoel",
""
],
[
"Singer",
"Amit",
""
]
] | TITLE: Fast Steerable Principal Component Analysis
ABSTRACT: Cryo-electron microscopy nowadays often requires the analysis of hundreds of
thousands of 2D images as large as a few hundred pixels in each direction. Here
we introduce an algorithm that efficiently and accurately performs principal
component analysis (PCA) for a large set of two-dimensional images, and, for
each image, the set of its uniform rotations in the plane and their
reflections. For a dataset consisting of $n$ images of size $L \times L$
pixels, the computational complexity of our algorithm is $O(nL^3 + L^4)$, while
existing algorithms take $O(nL^4)$. The new algorithm computes the expansion
coefficients of the images in a Fourier-Bessel basis efficiently using the
non-uniform fast Fourier transform. We compare the accuracy and efficiency of
the new algorithm with traditional PCA and existing algorithms for steerable
PCA.
| no_new_dataset | 0.936807 |
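For contrast with the fast algorithm above, here is the naive O(nL^4)-style baseline it improves on: explicitly rotate and mirror every image, then run ordinary PCA on the enlarged stack. Angle count and component count are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_augmented_pca(images, n_angles=8, k=10):
    # Naive steerable PCA: include every in-plane rotation (and mirror)
    # of each image before a plain SVD-based PCA.
    stack = []
    for img in images:
        for a in np.linspace(0, 360, n_angles, endpoint=False):
            r = rotate(img, a, reshape=False)
            stack.extend([r.ravel(), np.fliplr(r).ravel()])
    X = np.asarray(stack)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k]                       # top-k principal components

imgs = np.random.rand(20, 32, 32)
print(rotation_augmented_pca(imgs).shape)   # (10, 1024)
```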
1504.06787 | Chongxuan Li | Chongxuan Li and Jun Zhu and Tianlin Shi and Bo Zhang | Max-margin Deep Generative Models | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep generative models (DGMs) are effective at learning multilayered
representations of complex data and at performing inference on input data by
exploiting their generative ability. However, little work has been done on
examining or empowering the discriminative ability of DGMs on making accurate
predictions. This paper presents max-margin deep generative models (mmDGMs),
which explore the strongly discriminative principle of max-margin learning to
improve the discriminative power of DGMs, while retaining the generative
capability. We develop an efficient doubly stochastic subgradient algorithm for
the piecewise linear objective. Empirical results on MNIST and SVHN datasets
demonstrate that (1) max-margin learning can significantly improve the
prediction performance of DGMs and meanwhile retain the generative ability; and
(2) mmDGMs are competitive to the state-of-the-art fully discriminative
networks by employing deep convolutional neural networks (CNNs) as both
recognition and generative models.
| [
{
"version": "v1",
"created": "Sun, 26 Apr 2015 06:01:19 GMT"
},
{
"version": "v2",
"created": "Fri, 1 May 2015 01:58:31 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Jun 2015 08:40:09 GMT"
},
{
"version": "v4",
"created": "Tue, 15 Dec 2015 03:01:06 GMT"
}
] | 2015-12-16T00:00:00 | [
[
"Li",
"Chongxuan",
""
],
[
"Zhu",
"Jun",
""
],
[
"Shi",
"Tianlin",
""
],
[
"Zhang",
"Bo",
""
]
] | TITLE: Max-margin Deep Generative Models
ABSTRACT: Deep generative models (DGMs) are effective at learning multilayered
representations of complex data and at performing inference on input data by
exploiting their generative ability. However, little work has been done on
examining or empowering the discriminative ability of DGMs on making accurate
predictions. This paper presents max-margin deep generative models (mmDGMs),
which explore the strongly discriminative principle of max-margin learning to
improve the discriminative power of DGMs, while retaining the generative
capability. We develop an efficient doubly stochastic subgradient algorithm for
the piecewise linear objective. Empirical results on MNIST and SVHN datasets
demonstrate that (1) max-margin learning can significantly improve the
prediction performance of DGMs and meanwhile retain the generative ability; and
(2) mmDGMs are competitive to the state-of-the-art fully discriminative
networks by employing deep convolutional neural networks (CNNs) as both
recognition and generative models.
| no_new_dataset | 0.946498 |
1512.02167 | Bolei Zhou | Bolei Zhou and Yuandong Tian and Sainbayar Sukhbaatar and Arthur Szlam
and Rob Fergus | Simple Baseline for Visual Question Answering | One comparison method's scores are put into the correct column, and a
new experiment on generating attention maps is added | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 19:00:54 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Dec 2015 05:17:49 GMT"
}
] | 2015-12-16T00:00:00 | [
[
"Zhou",
"Bolei",
""
],
[
"Tian",
"Yuandong",
""
],
[
"Sukhbaatar",
"Sainbayar",
""
],
[
"Szlam",
"Arthur",
""
],
[
"Fergus",
"Rob",
""
]
] | TITLE: Simple Baseline for Visual Question Answering
ABSTRACT: We describe a very simple bag-of-words baseline for visual question
answering. This baseline concatenates the word features from the question and
CNN features from the image to predict the answer. When evaluated on the
challenging VQA dataset [2], it shows comparable performance to many recent
approaches using recurrent neural networks. To explore the strength and
weakness of the trained model, we also provide an interactive web demo and
open-source code.
| no_new_dataset | 0.944228 |
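The whole baseline described above fits in a few lines: a bag-of-words vector for the question, concatenated with precomputed CNN image features, fed to a linear classifier over the answer vocabulary. The vocabulary and feature sizes below are placeholders.

```python
import numpy as np

def bow(question, vocab):
    # Bag-of-words count vector for the question string.
    v = np.zeros(len(vocab))
    for w in question.lower().split():
        if w in vocab:
            v[vocab[w]] += 1
    return v

def vqa_features(question, img_feat, vocab):
    # Concatenate question BoW with the image's CNN features.
    return np.concatenate([bow(question, vocab), img_feat])

vocab = {w: i for i, w in enumerate(["what", "color", "is", "the", "cat"])}
x = vqa_features("What color is the cat", np.random.rand(4096), vocab)
# x can now be fed to e.g. sklearn's LogisticRegression over answers.
```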
1512.04701 | Weixin Li | Weixin Li, Jungseock Joo, Hang Qi, and Song-Chun Zhu | Joint Image-Text News Topic Detection and Tracking with And-Or Graph
Representation | null | null | null | null | cs.IR cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we aim to develop a method for automatically detecting and
tracking topics in broadcast news. We present a hierarchical And-Or graph (AOG)
to jointly represent the latent structure of both texts and visuals. The AOG
embeds a context sensitive grammar that can describe the hierarchical
composition of news topics by semantic elements about people involved, related
places and what happened, and model contextual relationships between elements
in the hierarchy. We detect news topics through a cluster sampling process
which groups stories about closely related events. Swendsen-Wang Cuts (SWC), an
effective cluster sampling algorithm, is adopted for traversing the solution
space and obtaining optimal clustering solutions by maximizing a Bayesian
posterior probability. Topics are tracked to deal with the continuously updated
news streams. We generate topic trajectories to show how topics emerge, evolve
and disappear over time. The experimental results show that our method can
explicitly describe the textual and visual data in news videos and produce
meaningful topic trajectories. Our method achieves superior performance
compared to state-of-the-art methods on both a public dataset Reuters-21578 and
a self-collected dataset named UCLA Broadcast News Dataset.
| [
{
"version": "v1",
"created": "Tue, 15 Dec 2015 10:01:37 GMT"
}
] | 2015-12-16T00:00:00 | [
[
"Li",
"Weixin",
""
],
[
"Joo",
"Jungseock",
""
],
[
"Qi",
"Hang",
""
],
[
"Zhu",
"Song-Chun",
""
]
] | TITLE: Joint Image-Text News Topic Detection and Tracking with And-Or Graph
Representation
ABSTRACT: In this paper, we aim to develop a method for automatically detecting and
tracking topics in broadcast news. We present a hierarchical And-Or graph (AOG)
to jointly represent the latent structure of both texts and visuals. The AOG
embeds a context-sensitive grammar that can describe the hierarchical
composition of news topics by semantic elements about people involved, related
places and what happened, and model contextual relationships between elements
in the hierarchy. We detect news topics through a cluster sampling process
which groups stories about closely related events. Swendsen-Wang Cuts (SWC), an
effective cluster sampling algorithm, is adopted for traversing the solution
space and obtaining optimal clustering solutions by maximizing a Bayesian
posterior probability. Topics are tracked to deal with the continuously updated
news streams. We generate topic trajectories to show how topics emerge, evolve
and disappear over time. The experimental results show that our method can
explicitly describe the textual and visual data in news videos and produce
meaningful topic trajectories. Our method achieves superior performance
compared to state-of-the-art methods on both a public dataset Reuters-21578 and
a self-collected dataset named UCLA Broadcast News Dataset.
| new_dataset | 0.96128 |
1512.04776 | Lionel Tabourier | Lionel Tabourier, Anne-Sophie Libert, Renaud Lambiotte | Predicting links in ego-networks using temporal information | submitted to EPJ Data Science | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Link prediction appears as a central problem of network science, as it calls
for unfolding the mechanisms that govern the micro-dynamics of the network. In
this work, we are interested in ego-networks, that is, the mere record of a
node's interactions with its neighbors, in the context of social
relationships. As the structural information is very poor, we rely on another
source of information to predict links among egos' neighbors: the timing of
interactions. We define several features to capture different kinds of temporal
information and apply machine learning methods to combine these various
features and improve the quality of the prediction. We demonstrate the
efficiency of this temporal approach on a cellphone interaction dataset,
pointing out the features that perform well in this context,
in particular the temporal profile of interactions and elapsed time between
contacts.
| [
{
"version": "v1",
"created": "Tue, 15 Dec 2015 13:32:47 GMT"
}
] | 2015-12-16T00:00:00 | [
[
"Tabourier",
"Lionel",
""
],
[
"Libert",
"Anne-Sophie",
""
],
[
"Lambiotte",
"Renaud",
""
]
] | TITLE: Predicting links in ego-networks using temporal information
ABSTRACT: Link prediction appears as a central problem of network science, as it calls
for unfolding the mechanisms that govern the micro-dynamics of the network. In
this work, we are interested in ego-networks, that is, the mere record of a
node's interactions with its neighbors, in the context of social
relationships. As the structural information is very poor, we rely on another
source of information to predict links among egos' neighbors: the timing of
interactions. We define several features to capture different kinds of temporal
information and apply machine learning methods to combine these various
features and improve the quality of the prediction. We demonstrate the
efficiency of this temporal approach on a cellphone interaction dataset,
pointing out the features that perform well in this context,
in particular the temporal profile of interactions and elapsed time between
contacts.
| no_new_dataset | 0.946941 |
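Temporal features of the kind described can be computed from the two neighbors' interaction timestamps with the ego; the three features below are illustrative stand-ins for the paper's feature set, not its exact definitions.

```python
import numpy as np

def temporal_features(times_a, times_b):
    # times_*: interaction timestamps (seconds) of two of the ego's
    # neighbors; returns pairwise features for link prediction.
    ta, tb = np.sort(times_a), np.sort(times_b)
    return {
        "elapsed_since_last": abs(ta[-1] - tb[-1]),
        "activity_overlap": max(0.0, min(ta[-1], tb[-1]) - max(ta[0], tb[0])),
        "rate_ratio": (len(ta) / (ta[-1] - ta[0] + 1)) /
                      (len(tb) / (tb[-1] - tb[0] + 1)),
    }

print(temporal_features(np.array([0, 50, 400]), np.array([30, 420])))
```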
1512.04817 | Michael Hay | Michael Hay, Ashwin Machanavajjhala, Gerome Miklau, Yan Chen, and Dan
Zhang | Principled Evaluation of Differentially Private Algorithms using DPBench | null | null | null | null | cs.DB cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy has become the dominant standard in the research
community for strong privacy protection. There has been a flood of research
into query answering algorithms that meet this standard. Algorithms are
becoming increasingly complex, and in particular, the performance of many
emerging algorithms is {\em data dependent}, meaning the distribution of the
noise added to query answers may change depending on the input data.
Theoretical analysis typically only considers the worst case, making empirical
study of average case performance increasingly important.
In this paper we propose a set of evaluation principles which we argue are
essential for sound evaluation. Based on these principles we propose DPBench, a
novel evaluation framework for standardized evaluation of privacy algorithms.
We then apply our benchmark to evaluate algorithms for answering 1- and
2-dimensional range queries. The result is a thorough empirical study of 15
published algorithms on a total of 27 datasets that offers new insights into
algorithm behavior---in particular the influence of dataset scale and
shape---and a more complete characterization of the state of the art. Our
methodology is able to resolve inconsistencies in prior empirical studies and
place algorithm performance in context through comparison to simple baselines.
Finally, we pose open research questions which we hope will guide future
algorithm design.
| [
{
"version": "v1",
"created": "Tue, 15 Dec 2015 15:29:36 GMT"
}
] | 2015-12-16T00:00:00 | [
[
"Hay",
"Michael",
""
],
[
"Machanavajjhala",
"Ashwin",
""
],
[
"Miklau",
"Gerome",
""
],
[
"Chen",
"Yan",
""
],
[
"Zhang",
"Dan",
""
]
] | TITLE: Principled Evaluation of Differentially Private Algorithms using DPBench
ABSTRACT: Differential privacy has become the dominant standard in the research
community for strong privacy protection. There has been a flood of research
into query answering algorithms that meet this standard. Algorithms are
becoming increasingly complex, and in particular, the performance of many
emerging algorithms is {\em data dependent}, meaning the distribution of the
noise added to query answers may change depending on the input data.
Theoretical analysis typically only considers the worst case, making empirical
study of average case performance increasingly important.
In this paper we propose a set of evaluation principles which we argue are
essential for sound evaluation. Based on these principles we propose DPBench, a
novel evaluation framework for standardized evaluation of privacy algorithms.
We then apply our benchmark to evaluate algorithms for answering 1- and
2-dimensional range queries. The result is a thorough empirical study of 15
published algorithms on a total of 27 datasets that offers new insights into
algorithm behavior---in particular the influence of dataset scale and
shape---and a more complete characterization of the state of the art. Our
methodology is able to resolve inconsistencies in prior empirical studies and
place algorithm performance in context through comparison to simple baselines.
Finally, we pose open research questions which we hope will guide future
algorithm design.
| no_new_dataset | 0.924313 |
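As a point of reference for what such a benchmark evaluates, here is the classic data-independent baseline for 1-D range queries: perturb a histogram once with Laplace noise of scale 1/epsilon (each record affects one bin, so the L1 sensitivity is 1), then answer every query from the noisy counts. The bin count and query ranges are illustrative.

```python
import numpy as np

def laplace_range_queries(data, n_bins, queries, epsilon):
    hist, _ = np.histogram(data, bins=n_bins)
    noisy = hist + np.random.laplace(scale=1.0 / epsilon, size=n_bins)
    # queries are inclusive (lo, hi) bin-index ranges
    return [noisy[lo:hi + 1].sum() for lo, hi in queries]

data = np.random.exponential(size=10_000)
print(laplace_range_queries(data, 64, [(0, 7), (8, 63)], epsilon=0.5))
```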
1112.5997 | Shervin Minaee | Sina Akbari Mistani, Shervin Minaee and Emad Fatemizadeh | Multispectral Palmprint Recognition Using a Hybrid Feature | 6 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Personal identification problem has been a major field of research in recent
years. Biometrics-based technologies that exploit fingerprints, iris, face,
voice and palmprints, have been in the center of attention to solve this
problem. Palmprints can be used instead of fingerprints, which were among the
earliest of these biometric technologies. A palm is covered with the same skin
as the fingertips but has a larger surface, giving us more information than the
fingertips. The major features of the palm are palm-lines, including principal
lines, wrinkles and ridges. Using these lines is one of the most popular
approaches towards solving the palmprint recognition problem. Another robust
feature is the wavelet energy of palms. In this paper we used a hybrid feature
which combines both of these features. Moreover, multispectral analysis is
applied to improve the performance of the system. Finally, a minimum-distance
classifier is used to match test images with one of the training samples. The
proposed algorithm has been tested on a well-known multispectral palmprint
dataset and achieved an average accuracy of 98.8\%.
| [
{
"version": "v1",
"created": "Tue, 27 Dec 2011 18:19:04 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Sep 2015 14:56:31 GMT"
},
{
"version": "v3",
"created": "Fri, 11 Dec 2015 22:52:06 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Mistani",
"Sina Akbari",
""
],
[
"Minaee",
"Shervin",
""
],
[
"Fatemizadeh",
"Emad",
""
]
] | TITLE: Multispectral Palmprint Recognition Using a Hybrid Feature
ABSTRACT: Personal identification problem has been a major field of research in recent
years. Biometrics-based technologies that exploit fingerprints, iris, face,
voice and palmprints, have been in the center of attention to solve this
problem. Palmprints can be used instead of fingerprints, which were among the
earliest of these biometric technologies. A palm is covered with the same skin
as the fingertips but has a larger surface, giving us more information than the
fingertips. The major features of the palm are palm-lines, including principal
lines, wrinkles and ridges. Using these lines is one of the most popular
approaches towards solving the palmprint recognition problem. Another robust
feature is the wavelet energy of palms. In this paper we used a hybrid feature
which combines both of these features. %Moreover, multispectral analysis is
applied to improve the performance of the system. At the end, minimum distance
classifier is used to match test images with one of the training samples. The
proposed algorithm has been tested on a well-known multispectral palmprint
dataset and achieved an average accuracy of 98.8\%.
| no_new_dataset | 0.944434 |
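As a rough illustration of the pipeline this abstract describes, the sketch below computes a wavelet-energy feature for a palm image and matches it with a minimum distance classifier. The wavelet family, decomposition depth, and function names are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np
import pywt

def wavelet_energy(img, wavelet="haar"):
    """One-level 2-D DWT; the energy of each sub-band gives a 4-D feature."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    return np.array([(c ** 2).sum() for c in (cA, cH, cV, cD)])

def min_distance_classify(test_feat, train_feats, train_labels):
    """Assign the label of the closest training sample (Euclidean distance)."""
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    return train_labels[int(np.argmin(d))]
```

In a full system these energies would be combined with palm-line features into the hybrid descriptor before matching.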
1501.03669 | Giovanni Chierchia | G. Chierchia, Nelly Pustelnik, Jean-Christophe Pesquet, B.
Pesquet-Popescu | A Proximal Approach for Sparse Multiclass SVM | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparsity-inducing penalties are useful tools to design multiclass support
vector machines (SVMs). In this paper, we propose a convex optimization
approach for efficiently and exactly solving the multiclass SVM learning
problem involving a sparse regularization and the multiclass hinge loss
formulated by Crammer and Singer. We provide two algorithms: the first one
dealing with the hinge loss as a penalty term, and the other one addressing the
case when the hinge loss is enforced through a constraint. The related convex
optimization problems can be efficiently solved thanks to the flexibility
offered by recent primal-dual proximal algorithms and epigraphical splitting
techniques. Experiments carried out on several datasets demonstrate the
interest of considering the exact expression of the hinge loss rather than a
smooth approximation. The efficiency of the proposed algorithms w.r.t. several
state-of-the-art methods is also assessed through comparisons of execution
times.
| [
{
"version": "v1",
"created": "Thu, 15 Jan 2015 13:23:14 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jan 2015 09:26:32 GMT"
},
{
"version": "v3",
"created": "Fri, 6 Feb 2015 23:36:03 GMT"
},
{
"version": "v4",
"created": "Sun, 26 Apr 2015 15:33:36 GMT"
},
{
"version": "v5",
"created": "Mon, 14 Dec 2015 09:49:32 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Chierchia",
"G.",
""
],
[
"Pustelnik",
"Nelly",
""
],
[
"Pesquet",
"Jean-Christophe",
""
],
[
"Pesquet-Popescu",
"B.",
""
]
] | TITLE: A Proximal Approach for Sparse Multiclass SVM
ABSTRACT: Sparsity-inducing penalties are useful tools to design multiclass support
vector machines (SVMs). In this paper, we propose a convex optimization
approach for efficiently and exactly solving the multiclass SVM learning
problem involving a sparse regularization and the multiclass hinge loss
formulated by Crammer and Singer. We provide two algorithms: the first one
dealing with the hinge loss as a penalty term, and the other one addressing the
case when the hinge loss is enforced through a constraint. The related convex
optimization problems can be efficiently solved thanks to the flexibility
offered by recent primal-dual proximal algorithms and epigraphical splitting
techniques. Experiments carried out on several datasets demonstrate the
interest of considering the exact expression of the hinge loss rather than a
smooth approximation. The efficiency of the proposed algorithms w.r.t. several
state-of-the-art methods is also assessed through comparisons of execution
times.
| no_new_dataset | 0.950778 |
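For readers unfamiliar with the loss used above, here is a minimal sketch of the multiclass hinge loss of Crammer and Singer for a single sample; the variable names are illustrative.

```python
import numpy as np

def crammer_singer_hinge(W, x, y):
    """Crammer-Singer multiclass hinge loss for one sample.

    W: (n_classes, n_features) weight matrix, x: feature vector,
    y: true class index. Loss = max_j [ 1{j != y} + w_j @ x ] - w_y @ x,
    which is zero exactly when the true class wins by a margin of 1.
    """
    scores = W @ x
    margins = scores + (np.arange(len(scores)) != y)  # +1 on wrong classes
    return float(margins.max() - scores[y])
```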
1503.08322 | Marco Piastra | Giacomo Parigi, Andrea Pedrini, Marco Piastra | Some Further Evidence about Magnification and Shape in Neural Gas | null | null | 10.1109/IJCNN.2015.7280550 | null | cs.NE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Neural gas (NG) is a robust vector quantization algorithm with a well-known
mathematical model. According to this, the neural gas samples the underlying
data distribution following a power law with a magnification exponent that
depends on data dimensionality only. The effects of shape in the input data
distribution, however, are not entirely covered by the NG model above, due to
the technical difficulties involved. The experimental work described here shows
that shape is indeed relevant in determining the overall NG behavior; in
particular, some experiments reveal richer and more complex behaviors induced by
shape that cannot be explained by the power law alone. Although a more
comprehensive analytical model remains to be defined, the evidence collected in
these experiments suggests that the NG algorithm has an interesting potential
for detecting complex shapes in noisy datasets.
| [
{
"version": "v1",
"created": "Sat, 28 Mar 2015 16:33:20 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Parigi",
"Giacomo",
""
],
[
"Pedrini",
"Andrea",
""
],
[
"Piastra",
"Marco",
""
]
] | TITLE: Some Further Evidence about Magnification and Shape in Neural Gas
ABSTRACT: Neural gas (NG) is a robust vector quantization algorithm with a well-known
mathematical model. According to this, the neural gas samples the underlying
data distribution following a power law with a magnification exponent that
depends on data dimensionality only. The effects of shape in the input data
distribution, however, are not entirely covered by the NG model above, due to
the technical difficulties involved. The experimental work described here shows
that shape is indeed relevant in determining the overall NG behavior; in
particular, some experiments reveal richer and more complex behaviors induced by
shape that cannot be explained by the power law alone. Although a more
comprehensive analytical model remains to be defined, the evidence collected in
these experiments suggests that the NG algorithm has an interesting potential
for detecting complex shapes in noisy datasets.
| no_new_dataset | 0.94743 |
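To make the model above concrete, here is the standard neural gas adaptation step (the rank-based soft update) in a minimal form; the learning-rate and neighborhood constants are illustrative.

```python
import numpy as np

def neural_gas_step(units, x, eps=0.05, lam=2.0):
    """One adaptation step of neural gas.

    Every unit moves toward the input x with a step size that decays
    exponentially with its distance rank k: eps * exp(-k / lam),
    where rank 0 is the closest unit.
    """
    dists = np.linalg.norm(units - x, axis=1)
    ranks = np.argsort(np.argsort(dists))  # rank 0 = closest unit
    step = eps * np.exp(-ranks / lam)
    units += step[:, None] * (x - units)
    return units
```

Iterating this step over samples drawn from a shaped input distribution is exactly the setting whose magnification behavior the experiments probe.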
1511.01064 | Alexandros Karargyris | Alexandros Karargyris | Color Space Transformation Network | Report | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Deep networks have become very popular over the past few years. The main
reason for this widespread use is their excellent ability to learn and predict
knowledge in a very easy and efficient way. Convolutional neural networks and
auto-encoders have become the norm in the area of imaging and computer vision,
achieving unprecedented accuracy levels in many applications. The most common
strategy is to build and train networks with many layers by tuning their
hyper-parameters. While this approach has proven to be a successful way to
build robust deep learning schemes, it suffers from high complexity. In this
paper we introduce a module that learns color space transformations within a
network. Given a large dataset of colored images, the color space
transformation module tries to learn color space transformations that increase
overall classification accuracy. This module has been shown to increase overall accuracy for
the same network design and to achieve faster convergence. It is part of a
broader family of image transformations (e.g. spatial transformer network).
| [
{
"version": "v1",
"created": "Sat, 31 Oct 2015 13:25:20 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Dec 2015 21:17:27 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Karargyris",
"Alexandros",
""
]
] | TITLE: Color Space Transformation Network
ABSTRACT: Deep networks have become very popular over the past few years. The main
reason for this widespread use is their excellent ability to learn and predict
knowledge in a very easy and efficient way. Convolutional neural networks and
auto-encoders have become the norm in the area of imaging and computer vision,
achieving unprecedented accuracy levels in many applications. The most common
strategy is to build and train networks with many layers by tuning their
hyper-parameters. While this approach has proven to be a successful way to
build robust deep learning schemes, it suffers from high complexity. In this
paper we introduce a module that learns color space transformations within a
network. Given a large dataset of colored images, the color space
transformation module tries to learn color space transformations that increase
overall classification accuracy. This module has been shown to increase overall accuracy for
the same network design and to achieve faster convergence. It is part of a
broader family of image transformations (e.g. spatial transformer network).
| no_new_dataset | 0.947478 |
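One natural realization of a learnable color space transformation is a 1x1 convolution over the three input channels, trained end-to-end with the rest of the network. The sketch below is an assumption about the module's general shape, not the paper's exact architecture.

```python
import torch.nn as nn

class ColorTransform(nn.Module):
    """Learnable per-pixel linear color transform: a 1x1 conv over RGB."""

    def __init__(self, in_channels=3, out_channels=3):
        super().__init__()
        self.mix = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):      # x: (N, 3, H, W)
        return self.mix(x)     # learned color space, same spatial size

# Prepended to any classifier and trained jointly with it:
model = nn.Sequential(ColorTransform(), nn.Conv2d(3, 16, 3, padding=1))
```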
1512.03844 | Alexander Wong | Mohammad Javad Shafiee, Parthipan Siva, Paul Fieguth, and Alexander
Wong | Efficient Deep Feature Learning and Extraction via StochasticNets | 10 pages. arXiv admin note: substantial text overlap with
arXiv:1508.05463 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks are a powerful tool for feature learning and extraction
given their ability to model high-level abstractions in highly complex data.
One area worth exploring in feature learning and extraction using deep neural
networks is efficient neural connectivity formation for faster feature learning
and extraction. Motivated by findings of stochastic synaptic connectivity
formation in the brain as well as the brain's uncanny ability to efficiently
represent information, we propose the efficient learning and extraction of
features via StochasticNets, where sparsely-connected deep neural networks can
be formed via stochastic connectivity between neurons. To evaluate the
feasibility of such a deep neural network architecture for feature learning and
extraction, we train deep convolutional StochasticNets to learn abstract
features using the CIFAR-10 dataset, and extract the learned features from
images to perform classification on the SVHN and STL-10 datasets. Experimental
results show that features learned using deep convolutional StochasticNets,
with fewer neural connections than conventional deep convolutional neural
networks, can allow for classification accuracy better than or comparable to
that of conventional deep neural networks: relative test error decrease of
~4.5% for
classification on the STL-10 dataset and ~1% for classification on the SVHN
dataset. Furthermore, it was shown that the deep features extracted using deep
convolutional StochasticNets can provide comparable classification accuracy
even when only 10% of the training data is used for feature learning. Finally,
it was also shown that significant gains in feature extraction speed can be
achieved in embedded applications using StochasticNets. As such, StochasticNets
allow for faster feature learning and extraction while facilitating better or
comparable accuracy.
| [
{
"version": "v1",
"created": "Fri, 11 Dec 2015 22:47:34 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Shafiee",
"Mohammad Javad",
""
],
[
"Siva",
"Parthipan",
""
],
[
"Fieguth",
"Paul",
""
],
[
"Wong",
"Alexander",
""
]
] | TITLE: Efficient Deep Feature Learning and Extraction via StochasticNets
ABSTRACT: Deep neural networks are a powerful tool for feature learning and extraction
given their ability to model high-level abstractions in highly complex data.
One area worth exploring in feature learning and extraction using deep neural
networks is efficient neural connectivity formation for faster feature learning
and extraction. Motivated by findings of stochastic synaptic connectivity
formation in the brain as well as the brain's uncanny ability to efficiently
represent information, we propose the efficient learning and extraction of
features via StochasticNets, where sparsely-connected deep neural networks can
be formed via stochastic connectivity between neurons. To evaluate the
feasibility of such a deep neural network architecture for feature learning and
extraction, we train deep convolutional StochasticNets to learn abstract
features using the CIFAR-10 dataset, and extract the learned features from
images to perform classification on the SVHN and STL-10 datasets. Experimental
results show that features learned using deep convolutional StochasticNets,
with fewer neural connections than conventional deep convolutional neural
networks, can allow for classification accuracy better than or comparable to
that of conventional deep neural networks: relative test error decrease of
~4.5% for
classification on the STL-10 dataset and ~1% for classification on the SVHN
dataset. Furthermore, it was shown that the deep features extracted using deep
convolutional StochasticNets can provide comparable classification accuracy
even when only 10% of the training data is used for feature learning. Finally,
it was also shown that significant gains in feature extraction speed can be
achieved in embedded applications using StochasticNets. As such, StochasticNets
allow for faster feature learning and extraction while facilitating better or
comparable accuracy.
| no_new_dataset | 0.951504 |
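The core formation mechanism described above can be sketched as a fixed random connectivity mask applied to a layer's weights; the connection probability and shapes are illustrative assumptions.

```python
import numpy as np

def random_connectivity_mask(shape, p=0.5, seed=0):
    """Binary mask with connection probability p, fixed at formation time."""
    rng = np.random.default_rng(seed)
    return (rng.random(shape) < p).astype(np.float32)

# A sparsely connected layer: the mask zeroes out absent synapses in the
# forward pass (and, by masking gradients, during learning as well).
rng = np.random.default_rng(1)
W = rng.standard_normal((256, 128)).astype(np.float32)
mask = random_connectivity_mask(W.shape, p=0.39)

def forward(x):
    return x @ (W * mask).T

h = forward(rng.standard_normal((1, 128)).astype(np.float32))
```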
1512.03950 | Kamal Sarkar | Kamal Sarkar | A Hidden Markov Model Based System for Entity Extraction from Social
Media English Text at FIRE 2015 | FIRE 2015 Task:Entity Extraction from Social Media Text - Indian
Languages (ESM-IL) - See more at:
http://fire.irsi.res.in/fire/home#sthash.HpgiwjP5.dpuf. arXiv admin note:
substantial text overlap with arXiv:1405.7397 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the experiments carried out by us at Jadavpur University
as part of the participation in FIRE 2015 task: Entity Extraction from Social
Media Text - Indian Languages (ESM-IL). The tool that we have developed for the
task is based on Trigram Hidden Markov Model that utilizes information like
gazetteer list, POS tag and some other word level features to enhance the
observation probabilities of the known tokens as well as unknown tokens. We
submitted runs for English only. A statistical HMM (Hidden Markov Model) based
model has been used to implement our system. The system has been trained and
tested on the datasets released for FIRE 2015 task: Entity Extraction from
Social Media Text - Indian Languages (ESM-IL). Our system is the best performer
for the English language, and it obtains precision, recall and F-measure values of 61.96,
39.46 and 48.21 respectively.
| [
{
"version": "v1",
"created": "Sat, 12 Dec 2015 18:57:11 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Sarkar",
"Kamal",
""
]
] | TITLE: A Hidden Markov Model Based System for Entity Extraction from Social
Media English Text at FIRE 2015
ABSTRACT: This paper presents the experiments carried out by us at Jadavpur University
as part of the participation in FIRE 2015 task: Entity Extraction from Social
Media Text - Indian Languages (ESM-IL). The tool that we have developed for the
task is based on Trigram Hidden Markov Model that utilizes information like
gazetteer list, POS tag and some other word level features to enhance the
observation probabilities of the known tokens as well as unknown tokens. We
submitted runs for English only. A statistical HMM (Hidden Markov Model) based
model has been used to implement our system. The system has been trained and
tested on the datasets released for FIRE 2015 task: Entity Extraction from
Social Media Text - Indian Languages (ESM-IL). Our system is the best performer
for the English language, and it obtains precision, recall and F-measure values of 61.96,
39.46 and 48.21 respectively.
| no_new_dataset | 0.951594 |
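The decoding step at the heart of such a tagger is Viterbi search. The sketch below shows generic first-order Viterbi decoding in log space; the paper's system is a trigram HMM enhanced with gazetteer and POS features, so this is a simplified assumption, with all arrays taken as given.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely tag sequence for an observation sequence.

    pi: (S,) initial log-probs, A: (S, S) transition log-probs,
    B: (S, V) emission log-probs, obs: list of word indices.
    """
    S, T = len(pi), len(obs)
    V = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    V[0] = pi + B[:, obs[0]]
    for t in range(1, T):
        scores = V[t - 1][:, None] + A + B[:, obs[t]][None, :]
        back[t] = scores.argmax(axis=0)
        V[t] = scores.max(axis=0)
    path = [int(V[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```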
1512.03953 | Mehrdad Ghadiri | Mehrdad Ghadiri, Amin Aghaee, Mahdieh Soleymani Baghshah | Active Distance-Based Clustering using K-medoids | 12 pages, 3 figures, PAKDD 2016 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The k-medoids algorithm is a partitional, centroid-based clustering algorithm
which uses pairwise distances of data points and tries to directly decompose
the dataset with $n$ points into a set of $k$ disjoint clusters. However,
k-medoids itself requires all pairwise distances between data points, which are
not so easy to obtain in many applications. In this paper, we introduce a new method
which requires only a small proportion of the whole set of distances and makes
an effort to estimate an upper bound for unknown distances using the queried
ones. This algorithm makes use of the triangle inequality to calculate an
upper-bound estimate of the unknown distances. Our method is built upon a
recursive approach to cluster objects and to choose some points actively from
each bunch of data and acquire the distances between these prominent points
from an oracle. Experimental results show that the proposed method, using only
a small subset of the distances, can find a proper clustering on many
real-world and
synthetic datasets.
| [
{
"version": "v1",
"created": "Sat, 12 Dec 2015 19:33:52 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Ghadiri",
"Mehrdad",
""
],
[
"Aghaee",
"Amin",
""
],
[
"Baghshah",
"Mahdieh Soleymani",
""
]
] | TITLE: Active Distance-Based Clustering using K-medoids
ABSTRACT: The k-medoids algorithm is a partitional, centroid-based clustering algorithm
which uses pairwise distances of data points and tries to directly decompose
the dataset with $n$ points into a set of $k$ disjoint clusters. However,
k-medoids itself requires all pairwise distances between data points, which are
not so easy to obtain in many applications. In this paper, we introduce a new method
which requires only a small proportion of the whole set of distances and makes
an effort to estimate an upper bound for unknown distances using the queried
ones. This algorithm makes use of the triangle inequality to calculate an
upper-bound estimate of the unknown distances. Our method is built upon a
recursive approach to cluster objects and to choose some points actively from
each bunch of data and acquire the distances between these prominent points
from an oracle. Experimental results show that the proposed method, using only
a small subset of the distances, can find a proper clustering on many
real-world and
synthetic datasets.
| no_new_dataset | 0.948202 |
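The key estimation trick described above is easy to state in code: any pivot point whose distances to both endpoints were queried yields a triangle-inequality upper bound. A minimal sketch follows, with the array layout assumed.

```python
import numpy as np

def upper_bound_estimate(D, known, i, j):
    """Upper-bound d(i, j) via the triangle inequality.

    D holds the queried distances and known is a boolean mask of which
    entries were actually asked from the oracle. For any pivot k with
    both d(i, k) and d(k, j) known, d(i, j) <= d(i, k) + d(k, j); the
    tightest such bound is returned.
    """
    pivots = np.where(known[i] & known[j])[0]
    if len(pivots) == 0:
        return np.inf
    return float((D[i, pivots] + D[pivots, j]).min())
```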
1512.03980 | Mahdyar Ravanbakhsh | Mahdyar Ravanbakhsh, Hossein Mousavi, Mohammad Rastegari, Vittorio
Murino, Larry S. Davis | Action Recognition with Image Based CNN Features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most human actions consist of complex temporal compositions of simpler
actions. Action recognition tasks usually rely on complex handcrafted
structures as features to represent the human action model. Convolutional
Neural Networks (CNNs) have been shown to be a powerful tool that eliminates
the need for designing handcrafted features. Usually, the output of the last
layer in a CNN (the layer before the classification layer, known as fc7) is
used as a generic feature for images. In this paper, we show that fc7 features,
per se, cannot achieve good performance for the task of action recognition when
the network is trained only on images. We present a feature structure on top of
fc7 features that can capture the temporal variation in a video. To represent
the temporal components, which are needed to capture motion information, we
introduce a hierarchical structure. The hierarchical model enables capturing
sub-actions from a complex action. At the higher levels of the hierarchy, it
represents a coarse capture of the action sequence, while lower levels
represent fine action elements. Furthermore, we introduce a method for
extracting key-frames using binary coding of each frame in a video, which helps
to improve the performance of our hierarchical model. We evaluated our method
on several action datasets and show that it achieves superior results compared
to other state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2015 00:17:24 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Ravanbakhsh",
"Mahdyar",
""
],
[
"Mousavi",
"Hossein",
""
],
[
"Rastegari",
"Mohammad",
""
],
[
"Murino",
"Vittorio",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: Action Recognition with Image Based CNN Features
ABSTRACT: Most human actions consist of complex temporal compositions of simpler
actions. Action recognition tasks usually rely on complex handcrafted
structures as features to represent the human action model. Convolutional
Neural Networks (CNNs) have been shown to be a powerful tool that eliminates
the need for designing handcrafted features. Usually, the output of the last
layer in a CNN (the layer before the classification layer, known as fc7) is
used as a generic feature for images. In this paper, we show that fc7 features,
per se, cannot achieve good performance for the task of action recognition when
the network is trained only on images. We present a feature structure on top of
fc7 features that can capture the temporal variation in a video. To represent
the temporal components, which are needed to capture motion information, we
introduce a hierarchical structure. The hierarchical model enables capturing
sub-actions from a complex action. At the higher levels of the hierarchy, it
represents a coarse capture of the action sequence, while lower levels
represent fine action elements. Furthermore, we introduce a method for
extracting key-frames using binary coding of each frame in a video, which helps
to improve the performance of our hierarchical model. We evaluated our method
on several action datasets and show that it achieves superior results compared
to other state-of-the-art methods.
| no_new_dataset | 0.949059 |
1512.04036 | Shixia Liu | Yangxin Zhong, Shixia Liu, Xiting Wang, Jiannan Xiao, and Yangqiu Song | Tracking Idea Flows between Social Groups | 8 pages, AAAI 2016 | null | null | null | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many applications, ideas that are described by a set of words often flow
between different groups. To facilitate users in analyzing the flow, we present
a method to model the flow behaviors that aims at identifying the lead-lag
relationships between word clusters of different user groups. In particular, an
improved Bayesian conditional cointegration based on dynamic time warping is
employed to learn links between words in different groups. A tensor-based
technique is developed to cluster these linked words into different clusters
(ideas) and track the flow of ideas. The main feature of the tensor
representation is that we introduce two additional dimensions to represent both
time and lead-lag relationships. Experiments on both synthetic and real
datasets show that our method is more effective than methods based on
traditional clustering techniques and achieves better accuracy. A case study
was conducted to demonstrate the usefulness of our method in helping users
understand the flow of ideas between different user groups on social media.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2015 11:33:44 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Zhong",
"Yangxin",
""
],
[
"Liu",
"Shixia",
""
],
[
"Wang",
"Xiting",
""
],
[
"Xiao",
"Jiannan",
""
],
[
"Song",
"Yangqiu",
""
]
] | TITLE: Tracking Idea Flows between Social Groups
ABSTRACT: In many applications, ideas that are described by a set of words often flow
between different groups. To facilitate users in analyzing the flow, we present
a method to model the flow behaviors that aims at identifying the lead-lag
relationships between word clusters of different user groups. In particular, an
improved Bayesian conditional cointegration based on dynamic time warping is
employed to learn links between words in different groups. A tensor-based
technique is developed to cluster these linked words into different clusters
(ideas) and track the flow of ideas. The main feature of the tensor
representation is that we introduce two additional dimensions to represent both
time and lead-lag relationships. Experiments on both synthetic and real
datasets show that our method is more effective than methods based on
traditional clustering techniques and achieves better accuracy. A case study
was conducted to demonstrate the usefulness of our method in helping users
understand the flow of ideas between different user groups on social media.
| no_new_dataset | 0.949995 |
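Since the lead-lag detection above builds on dynamic time warping, a minimal DTW distance between two 1-D series (e.g., the word-frequency trajectories of two groups) may help fix ideas; the quadratic-time textbook version is shown.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```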
1512.04038 | Shixia Liu | Mengchen Liu, Shixia Liu, Xizhou Zhu, Qinying Liao, Furu Wei, and
Shimei Pan | An Uncertainty-Aware Approach for Exploratory Microblog Retrieval | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although there has been a great deal of interest in analyzing customer
opinions and breaking news in microblogs, progress has been hampered by the
lack of an effective mechanism to discover and retrieve data of interest from
microblogs. To address this problem, we have developed an uncertainty-aware
visual analytics approach to retrieve salient posts, users, and hashtags. We
extend an existing ranking technique to compute a multifaceted retrieval
result: the mutual reinforcement rank of a graph node, the uncertainty of each
rank, and the propagation of uncertainty among different graph nodes. To
illustrate the three facets, we have also designed a composite visualization
with three visual components: a graph visualization, an uncertainty glyph, and
a flow map. The graph visualization with glyphs, the flow map, and the
uncertainty analysis together enable analysts to effectively find the most
uncertain results and interactively refine them. We have applied our approach
to several Twitter datasets. Qualitative evaluation and two real-world case
studies demonstrate the promise of our approach for retrieving high-quality
microblog data.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2015 11:56:09 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Liu",
"Mengchen",
""
],
[
"Liu",
"Shixia",
""
],
[
"Zhu",
"Xizhou",
""
],
[
"Liao",
"Qinying",
""
],
[
"Wei",
"Furu",
""
],
[
"Pan",
"Shimei",
""
]
] | TITLE: An Uncertainty-Aware Approach for Exploratory Microblog Retrieval
ABSTRACT: Although there has been a great deal of interest in analyzing customer
opinions and breaking news in microblogs, progress has been hampered by the
lack of an effective mechanism to discover and retrieve data of interest from
microblogs. To address this problem, we have developed an uncertainty-aware
visual analytics approach to retrieve salient posts, users, and hashtags. We
extend an existing ranking technique to compute a multifaceted retrieval
result: the mutual reinforcement rank of a graph node, the uncertainty of each
rank, and the propagation of uncertainty among different graph nodes. To
illustrate the three facets, we have also designed a composite visualization
with three visual components: a graph visualization, an uncertainty glyph, and
a flow map. The graph visualization with glyphs, the flow map, and the
uncertainty analysis together enable analysts to effectively find the most
uncertain results and interactively refine them. We have applied our approach
to several Twitter datasets. Qualitative evaluation and two real-world case
studies demonstrate the promise of our approach for retrieving high-quality
microblog data.
| no_new_dataset | 0.947575 |
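The abstract extends an existing mutual reinforcement ranking; the usual starting point is a HITS-style power iteration on a bipartite graph, sketched below with an assumed user-by-hashtag matrix (the paper's uncertainty extension is not reproduced here).

```python
import numpy as np

def mutual_reinforcement_rank(M, iters=50):
    """HITS-style ranks on a bipartite graph (e.g., users x hashtags).

    M[u, h] > 0 if user u used hashtag h; each side's score reinforces
    the other until the iteration converges.
    """
    u = np.ones(M.shape[0])
    h = np.ones(M.shape[1])
    for _ in range(iters):
        h = M.T @ u
        h /= np.linalg.norm(h)
        u = M @ h
        u /= np.linalg.norm(u)
    return u, h
```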
1512.04092 | Shagun Sodhani | Sanket Mehta, Shagun Sodhani | Stack Exchange Tagger | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of our project is to develop an accurate tagger for questions posted
on Stack Exchange. Our problem is an instance of the more general problem of
developing accurate classifiers for large scale text datasets. We are tackling
the multilabel classification problem where each item (in this case, question)
can belong to multiple classes (in this case, tags). We are predicting the tags
(or keywords) for a particular Stack Exchange post given only the question text
and the title of the post. In the process, we compare the performance of
Support Vector Classification (SVC) for different kernel functions, loss
function, etc. We found that linear SVC with the Crammer-Singer technique
produces the best results.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2015 17:52:44 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Mehta",
"Sanket",
""
],
[
"Sodhani",
"Shagun",
""
]
] | TITLE: Stack Exchange Tagger
ABSTRACT: The goal of our project is to develop an accurate tagger for questions posted
on Stack Exchange. Our problem is an instance of the more general problem of
developing accurate classifiers for large scale text datasets. We are tackling
the multilabel classification problem where each item (in this case, question)
can belong to multiple classes (in this case, tags). We are predicting the tags
(or keywords) for a particular Stack Exchange post given only the question text
and the title of the post. In the process, we compare the performance of
Support Vector Classification (SVC) for different kernel functions, loss
function, etc. We found that linear SVC with the Crammer-Singer technique
produces the best results.
| no_new_dataset | 0.945751 |
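A compact way to set up such a multilabel tagger in scikit-learn is a TF-IDF pipeline with binary-relevance linear SVMs, sketched below. Note that this one-vs-rest reduction is a common baseline assumption; the abstract itself reports best results with the Crammer-Singer formulation of linear SVC.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

posts = ["How do I reverse a list in Python?",
         "Why does this C pointer code segfault?"]
tags = [["python", "list"], ["c", "pointers"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(tags)                     # multilabel indicator matrix

clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
clf.fit(posts, Y)
pred = mlb.inverse_transform(clf.predict(["Sorting a Python list"]))
```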
1512.04118 | Jiongxin Liu | Jiongxin Liu, Yinxiao Li, Peter Allen, Peter Belhumeur | Articulated Pose Estimation Using Hierarchical Exemplar-Based Models | 8 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exemplar-based models have achieved great success on localizing the parts of
semi-rigid objects. However, their efficacy on highly articulated objects such
as humans is yet to be explored. Inspired by hierarchical object representation
and recent application of Deep Convolutional Neural Networks (DCNNs) on human
pose estimation, we propose a novel formulation that incorporates both
hierarchical exemplar-based models and DCNNs in the spatial terms.
Specifically, we obtain more expressive spatial models by assuming independence
between exemplars at different levels in the hierarchy; we also obtain stronger
spatial constraints by inferring the spatial relations between parts at the
same level. As our method strikes a good balance between expressiveness and
strength of spatial models, it is both effective and generalizable, achieving
state-of-the-art results on different benchmarks: Leeds Sports Dataset and
CUB-200-2011.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2015 20:37:10 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Liu",
"Jiongxin",
""
],
[
"Li",
"Yinxiao",
""
],
[
"Allen",
"Peter",
""
],
[
"Belhumeur",
"Peter",
""
]
] | TITLE: Articulated Pose Estimation Using Hierarchical Exemplar-Based Models
ABSTRACT: Exemplar-based models have achieved great success on localizing the parts of
semi-rigid objects. However, their efficacy on highly articulated objects such
as humans is yet to be explored. Inspired by hierarchical object representation
and recent application of Deep Convolutional Neural Networks (DCNNs) on human
pose estimation, we propose a novel formulation that incorporates both
hierarchical exemplar-based models and DCNNs in the spatial terms.
Specifically, we obtain more expressive spatial models by assuming independence
between exemplars at different levels in the hierarchy; we also obtain stronger
spatial constraints by inferring the spatial relations between parts at the
same level. As our method strikes a good balance between expressiveness and
strength of spatial models, it is both effective and generalizable, achieving
state-of-the-art results on different benchmarks: Leeds Sports Dataset and
CUB-200-2011.
| no_new_dataset | 0.94801 |
1512.04143 | Sean Bell | Sean Bell, C. Lawrence Zitnick, Kavita Bala, Ross Girshick | Inside-Outside Net: Detecting Objects in Context with Skip Pooling and
Recurrent Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is well known that contextual and multi-scale representations are
important for accurate visual recognition. In this paper we present the
Inside-Outside Net (ION), an object detector that exploits information both
inside and outside the region of interest. Contextual information outside the
region of interest is integrated using spatial recurrent neural networks.
Inside, we use skip pooling to extract information at multiple scales and
levels of abstraction. Through extensive experiments we evaluate the design
space and provide readers with an overview of what tricks of the trade are
important. ION improves the state of the art on PASCAL VOC 2012 object
detection from 73.9% to 76.4% mAP. On the new and more challenging MS COCO
dataset, we improve the state of the art from 19.7% to 33.1% mAP. In the 2015
MS COCO Detection
Challenge, our ION model won the Best Student Entry and finished 3rd place
overall. As intuition suggests, our detection results provide strong evidence
that context and multi-scale representations improve small object detection.
| [
{
"version": "v1",
"created": "Mon, 14 Dec 2015 00:37:31 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Bell",
"Sean",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Bala",
"Kavita",
""
],
[
"Girshick",
"Ross",
""
]
] | TITLE: Inside-Outside Net: Detecting Objects in Context with Skip Pooling and
Recurrent Neural Networks
ABSTRACT: It is well known that contextual and multi-scale representations are
important for accurate visual recognition. In this paper we present the
Inside-Outside Net (ION), an object detector that exploits information both
inside and outside the region of interest. Contextual information outside the
region of interest is integrated using spatial recurrent neural networks.
Inside, we use skip pooling to extract information at multiple scales and
levels of abstraction. Through extensive experiments we evaluate the design
space and provide readers with an overview of what tricks of the trade are
important. ION improves the state of the art on PASCAL VOC 2012 object
detection from 73.9% to 76.4% mAP. On the new and more challenging MS COCO
dataset, we improve the state of the art from 19.7% to 33.1% mAP. In the 2015
MS COCO Detection
Challenge, our ION model won the Best Student Entry and finished 3rd place
overall. As intuition suggests, our detection results provide strong evidence
that context and multi-scale representations improve small object detection.
| no_new_dataset | 0.943295 |
1512.04208 | Chenxia Wu | Chenxia Wu, Jiemi Zhang, Bart Selman, Silvio Savarese, Ashutosh Saxena | Watch-Bot: Unsupervised Learning for Reminding Humans of Forgotten
Actions | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a robotic system that watches a human using a Kinect v2 RGB-D
sensor, detects what the person forgot to do while performing an activity, and
if necessary reminds them using a laser pointer to point out the related
object. Our simple setup can be easily deployed on any assistive robot.
Our approach is based on a learning algorithm trained in a purely
unsupervised setting, which does not require any human annotations. This makes
our approach scalable and applicable to various scenarios. Our model learns the
action/object co-occurrence and action temporal relations in the activity, and
uses the learned rich relationships to infer the forgotten action and the
related object. We show that our approach not only improves the unsupervised
action segmentation and action cluster assignment performance, but also
effectively detects the forgotten actions on a challenging human activity RGB-D
video dataset. In robotic experiments, we show that our robot is able to remind
people of forgotten actions successfully.
| [
{
"version": "v1",
"created": "Mon, 14 Dec 2015 07:50:22 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Wu",
"Chenxia",
""
],
[
"Zhang",
"Jiemi",
""
],
[
"Selman",
"Bart",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Saxena",
"Ashutosh",
""
]
] | TITLE: Watch-Bot: Unsupervised Learning for Reminding Humans of Forgotten
Actions
ABSTRACT: We present a robotic system that watches a human using a Kinect v2 RGB-D
sensor, detects what the person forgot to do while performing an activity, and
if necessary reminds them using a laser pointer to point out the related
object. Our simple setup can be easily deployed on any assistive robot.
Our approach is based on a learning algorithm trained in a purely
unsupervised setting, which does not require any human annotations. This makes
our approach scalable and applicable to various scenarios. Our model learns the
action/object co-occurrence and action temporal relations in the activity, and
uses the learned rich relationships to infer the forgotten action and the
related object. We show that our approach not only improves the unsupervised
action segmentation and action cluster assignment performance, but also
effectively detects the forgotten actions on a challenging human activity RGB-D
video dataset. In robotic experiments, we show that our robot is able to remind
people of forgotten actions successfully.
| no_new_dataset | 0.933734 |
1512.04466 | Shuangfei Zhai | Shuangfei Zhai, Zhongfei Zhang | Semisupervised Autoencoder for Sentiment Analysis | To appear in AAAI 2016 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate the usage of autoencoders in modeling textual
data. Traditional autoencoders suffer from at least two limitations:
scalability with respect to the high dimensionality of the vocabulary, and
dealing with task-irrelevant words. We address this problem by introducing
supervision via
the loss function of autoencoders. In particular, we first train a linear
classifier on the labeled data, then define a loss for the autoencoder with the
weights learned from the linear classifier. To reduce the bias brought by one
single classifier, we define a posterior probability distribution on the
weights of the classifier, and derive the marginalized loss of the autoencoder
with Laplace approximation. We show that our choice of loss function can be
rationalized from the perspective of Bregman Divergence, which justifies the
soundness of our model. We evaluate the effectiveness of our model on six
sentiment analysis datasets, and show that our model significantly outperforms
all the competing methods with respect to classification accuracy. We also show
that our model is able to take advantage of unlabeled data and achieve improved
performance. We further show that our model successfully learns highly
discriminative feature maps, which explains its superior performance.
| [
{
"version": "v1",
"created": "Mon, 14 Dec 2015 19:09:53 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Zhai",
"Shuangfei",
""
],
[
"Zhang",
"Zhongfei",
""
]
] | TITLE: Semisupervised Autoencoder for Sentiment Analysis
ABSTRACT: In this paper, we investigate the usage of autoencoders in modeling textual
data. Traditional autoencoders suffer from at least two limitations:
scalability with respect to the high dimensionality of the vocabulary, and
dealing with task-irrelevant words. We address this problem by introducing
supervision via
the loss function of autoencoders. In particular, we first train a linear
classifier on the labeled data, then define a loss for the autoencoder with the
weights learned from the linear classifier. To reduce the bias brought by one
single classifier, we define a posterior probability distribution on the
weights of the classifier, and derive the marginalized loss of the autoencoder
with Laplace approximation. We show that our choice of loss function can be
rationalized from the perspective of Bregman Divergence, which justifies the
soundness of our model. We evaluate the effectiveness of our model on six
sentiment analysis datasets, and show that our model significantly outperforms
all the competing methods with respect to classification accuracy. We also show
that our model is able to take advantage of unlabeled data and achieve improved
performance. We further show that our model successfully learns highly
discriminative feature maps, which explains its superior performance.
| no_new_dataset | 0.943452 |
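The central idea above, weighting the autoencoder's reconstruction loss by how discriminative each input dimension is under a linear classifier, can be sketched as below. This is a deliberately simplified assumption: the paper marginalizes over a posterior on the classifier weights via a Laplace approximation, which is omitted here.

```python
import numpy as np

def weighted_recon_loss(x, x_hat, w):
    """Reconstruction loss emphasizing classifier-relevant dimensions.

    x, x_hat: bag-of-words input and its reconstruction;
    w: weights of a linear classifier trained on the labeled data.
    Dimensions with large |w| (task-relevant words) dominate the loss,
    while near-zero-weight (task-irrelevant) words are downplayed.
    """
    importance = np.abs(w) / np.abs(w).sum()
    return float((importance * (x - x_hat) ** 2).sum())
```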
1512.04483 | Shuangfei Zhai | Shuangfei Zhai, Zhongfei Zhang | Dropout Training of Matrix Factorization and Autoencoder for Link
Prediction in Sparse Graphs | Published in SDM 2015 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matrix factorization (MF) and Autoencoder (AE) are among the most successful
approaches of unsupervised learning. While MF based models have been
extensively exploited in the graph modeling and link prediction literature, the
AE family has not gained much attention. In this paper we investigate both MF
and AE's application to the link prediction problem in sparse graphs. We show
the connection between AE and MF from the perspective of multiview learning,
and further propose MF+AE: a model training MF and AE jointly with shared
parameters. We apply dropout to training both the MF and AE parts, and show
that it can significantly prevent overfitting by acting as an adaptive
regularization. We conduct experiments on six real world sparse graph datasets,
and show that MF+AE consistently outperforms the competing methods, especially
on datasets that demonstrate strong non-cohesive structures.
| [
{
"version": "v1",
"created": "Mon, 14 Dec 2015 19:38:14 GMT"
}
] | 2015-12-15T00:00:00 | [
[
"Zhai",
"Shuangfei",
""
],
[
"Zhang",
"Zhongfei",
""
]
] | TITLE: Dropout Training of Matrix Factorization and Autoencoder for Link
Prediction in Sparse Graphs
ABSTRACT: Matrix factorization (MF) and Autoencoder (AE) are among the most successful
approaches of unsupervised learning. While MF based models have been
extensively exploited in the graph modeling and link prediction literature, the
AE family has not gained much attention. In this paper we investigate both MF
and AE's application to the link prediction problem in sparse graphs. We show
the connection between AE and MF from the perspective of multiview learning,
and further propose MF+AE: a model training MF and AE jointly with shared
parameters. We apply dropout to training both the MF and AE parts, and show
that it can significantly prevent overfitting by acting as an adaptive
regularization. We conduct experiments on six real world sparse graph datasets,
and show that MF+AE consistently outperforms the competing methods, especially
on datasets that demonstrate strong non-cohesive structures.
| no_new_dataset | 0.948155 |
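The role of dropout as an adaptive regularizer in the MF part can be illustrated with a single SGD step on one observed entry; all hyperparameters here are illustrative assumptions.

```python
import numpy as np

def mf_dropout_step(U, V, i, j, r, lr=0.01, p=0.5, rng=None):
    """One SGD step of matrix factorization with dropout on the factors.

    Randomly zeroing latent coordinates makes each update fit the rating
    r with a random sub-model, which acts as a regularizer. With a 0/1
    mask, mask**2 == mask, so the updates below are the exact gradients
    of the masked squared error (up to a constant absorbed into lr).
    U and V are float arrays of user and item factors, updated in place.
    """
    rng = rng or np.random.default_rng()
    mask = (rng.random(U.shape[1]) >= p).astype(float)
    u, v = U[i] * mask, V[j] * mask
    err = r - u @ v
    U[i] += lr * err * v
    V[j] += lr * err * u
```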
1411.7715 | Artem Rozantsev Mr. | Artem Rozantsev, Vincent Lepetit, Pascal Fua | Flying Objects Detection from a Single Moving Camera | null | null | 10.1109/CVPR.2015.7299040 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an approach to detect flying objects such as UAVs and aircraft
when they occupy a small portion of the field of view, possibly moving against
complex backgrounds, and are filmed by a camera that itself moves.
Solving such a difficult problem requires combining both appearance and
motion cues. To this end we propose a regression-based approach to motion
stabilization of local image patches that allows us to achieve effective
classification on spatio-temporal image cubes and outperform state-of-the-art
techniques.
As the problem is relatively new, we collected two challenging datasets for
UAVs and aircraft, which can be used as benchmarks for flying objects
detection and vision-guided collision avoidance.
| [
{
"version": "v1",
"created": "Thu, 27 Nov 2014 22:39:50 GMT"
}
] | 2015-12-14T00:00:00 | [
[
"Rozantsev",
"Artem",
""
],
[
"Lepetit",
"Vincent",
""
],
[
"Fua",
"Pascal",
""
]
] | TITLE: Flying Objects Detection from a Single Moving Camera
ABSTRACT: We propose an approach to detect flying objects such as UAVs and aircraft
when they occupy a small portion of the field of view, possibly moving against
complex backgrounds, and are filmed by a camera that itself moves.
Solving such a difficult problem requires combining both appearance and
motion cues. To this end we propose a regression-based approach to motion
stabilization of local image patches that allows us to achieve effective
classification on spatio-temporal image cubes and outperform state-of-the-art
techniques.
As the problem is relatively new, we collected two challenging datasets for
UAVs and aircraft, which can be used as benchmarks for flying objects
detection and vision-guided collision avoidance.
| new_dataset | 0.95388 |
1508.00998 | Simone Bianco | Simone Bianco, Claudio Cusano, Raimondo Schettini | Single and Multiple Illuminant Estimation Using Convolutional Neural
Networks | Submitted to IEEE Transactions on Pattern Analysis and Machine
Intelligence | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a method for the estimation of the color of the
illuminant in RAW images. The method includes a Convolutional Neural Network
that has been specially designed to produce multiple local estimates. A
multiple illuminant detector determines whether or not the local outputs of the
network must be aggregated into a single estimate. We evaluated our method on
standard datasets with single and multiple illuminants, obtaining lower
estimation errors with respect to those obtained by other general purpose
methods in the state of the art.
| [
{
"version": "v1",
"created": "Wed, 5 Aug 2015 08:25:27 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Dec 2015 14:35:20 GMT"
}
] | 2015-12-14T00:00:00 | [
[
"Bianco",
"Simone",
""
],
[
"Cusano",
"Claudio",
""
],
[
"Schettini",
"Raimondo",
""
]
] | TITLE: Single and Multiple Illuminant Estimation Using Convolutional Neural
Networks
ABSTRACT: In this paper we present a method for the estimation of the color of the
illuminant in RAW images. The method includes a Convolutional Neural Network
that has been specially designed to produce multiple local estimates. A
multiple illuminant detector determines whether or not the local outputs of the
network must be aggregated into a single estimate. We evaluated our method on
standard datasets with single and multiple illuminants, obtaining lower
estimation errors with respect to those obtained by other general purpose
methods in the state of the art.
| no_new_dataset | 0.947235 |
1512.03443 | Abhimanu Kumar | Abhimanu Kumar and Shriphani Palakodety and Chong Wang and Carolyn P.
Rose and Eric P. Xing and Miaomiao Wen | Scalable Modeling of Conversational-role based Self-presentation
Characteristics in Large Online Forums | null | null | null | null | stat.ML cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online discussion forums are complex webs of overlapping subcommunities
(macrolevel structure, across threads) in which users enact different roles
depending on which subcommunity they are participating in within a particular
time point (microlevel structure, within threads). This sub-network structure
is implicit in massive collections of threads. To uncover this structure, we
develop a scalable algorithm based on stochastic variational inference and
leverage topic models (LDA) along with mixed membership stochastic block (MMSB)
models. We evaluate our model on three large-scale datasets,
Cancer-ThreadStarter (22K users and 14.4K threads), Cancer-NameMention(15.1K
users and 12.4K threads) and StackOverFlow (1.19 million users and 4.55 million
threads). Qualitatively, we demonstrate that our model can provide useful
explanations of microlevel and macrolevel user presentation characteristics in
different communities using the topics discovered from posts. Quantitatively,
we show that our model does better than MMSB and LDA in predicting user reply
structure within threads. In addition, we demonstrate via synthetic data
experiments that the proposed active sub-network discovery model is stable and
recovers the original parameters of the experimental setup with high
probability.
| [
{
"version": "v1",
"created": "Thu, 10 Dec 2015 21:19:42 GMT"
}
] | 2015-12-14T00:00:00 | [
[
"Kumar",
"Abhimanu",
""
],
[
"Palakodety",
"Shriphani",
""
],
[
"Wang",
"Chong",
""
],
[
"Rose",
"Carolyn P.",
""
],
[
"Xing",
"Eric P.",
""
],
[
"Wen",
"Miaomiao",
""
]
] | TITLE: Scalable Modeling of Conversational-role based Self-presentation
Characteristics in Large Online Forums
ABSTRACT: Online discussion forums are complex webs of overlapping subcommunities
(macrolevel structure, across threads) in which users enact different roles
depending on which subcommunity they are participating in within a particular
time point (microlevel structure, within threads). This sub-network structure
is implicit in massive collections of threads. To uncover this structure, we
develop a scalable algorithm based on stochastic variational inference and
leverage topic models (LDA) along with mixed membership stochastic block (MMSB)
models. We evaluate our model on three large-scale datasets,
Cancer-ThreadStarter (22K users and 14.4K threads), Cancer-NameMention(15.1K
users and 12.4K threads) and StackOverFlow (1.19 million users and 4.55 million
threads). Qualitatively, we demonstrate that our model can provide useful
explanations of microlevel and macrolevel user presentation characteristics in
different communities using the topics discovered from posts. Quantitatively,
we show that our model does better than MMSB and LDA in predicting user reply
structure within threads. In addition, we demonstrate via synthetic data
experiments that the proposed active sub-network discovery model is stable and
recovers the original parameters of the experimental setup with high
probability.
| no_new_dataset | 0.948822 |
1512.03460 | Yezhou Yang | Yezhou Yang and Yi Li and Cornelia Fermuller and Yiannis Aloimonos | Neural Self Talk: Image Understanding via Continuous Questioning and
Answering | null | null | null | null | cs.CV cs.CL cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the problem of continuously discovering image
contents by actively asking image based questions and subsequently answering
the questions being asked. The key components include a Visual Question
Generation (VQG) module and a Visual Question Answering module, in which
Recurrent Neural Networks (RNN) and Convolutional Neural Network (CNN) are
used. Given a dataset that contains images, questions and their answers, both
modules are trained at the same time, with the difference being VQG uses the
images as input and the corresponding questions as output, while VQA uses
images and questions as input and the corresponding answers as output. We
evaluate the self talk process subjectively using Amazon Mechanical Turk, which
shows the effectiveness of the proposed method.
| [
{
"version": "v1",
"created": "Thu, 10 Dec 2015 21:58:46 GMT"
}
] | 2015-12-14T00:00:00 | [
[
"Yang",
"Yezhou",
""
],
[
"Li",
"Yi",
""
],
[
"Fermuller",
"Cornelia",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] | TITLE: Neural Self Talk: Image Understanding via Continuous Questioning and
Answering
ABSTRACT: In this paper we consider the problem of continuously discovering image
contents by actively asking image based questions and subsequently answering
the questions being asked. The key components include a Visual Question
Generation (VQG) module and a Visual Question Answering module, in which
Recurrent Neural Networks (RNN) and Convolutional Neural Network (CNN) are
used. Given a dataset that contains images, questions and their answers, both
modules are trained at the same time, with the difference being VQG uses the
images as input and the corresponding questions as output, while VQA uses
images and questions as input and the corresponding answers as output. We
evaluate the self talk process subjectively using Amazon Mechanical Turk, which
shows the effectiveness of the proposed method.
| no_new_dataset | 0.941868 |
1512.03501 | Marian-Andrei Rizoiu | Marian-Andrei Rizoiu, Julien Velcin, St\'ephane Bonnevay, St\'ephane
Lallich | ClusPath: A Temporal-driven Clustering to Infer Typical Evolution Paths | null | null | 10.1007/s10618-015-0445-7 | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose ClusPath, a novel algorithm for detecting general evolution
tendencies in a population of entities. We show how abstract notions, such as
the Swedish socio-economic model (in a political dataset) or companies' fiscal
optimization (in an economic dataset) can be inferred from low-level
descriptive features. Such high-level regularities in the evolution of entities
are detected by combining spatial and temporal features into a spatio-temporal
dissimilarity measure and using semi-supervised clustering techniques. The
relations between the evolution phases are modeled using a graph structure,
inferred simultaneously with the partition, by using a "slow changing world"
assumption. The idea is to ensure a smooth passage for entities along their
evolution paths, which catches the long-term trends in the dataset.
Additionally, we also provide a method, based on an evolutionary algorithm, to
tune the parameters of ClusPath to new, unseen datasets. This method assesses
the fitness of a solution using four opposed quality measures and proposes a
balanced compromise.
| [
{
"version": "v1",
"created": "Fri, 11 Dec 2015 01:32:20 GMT"
}
] | 2015-12-14T00:00:00 | [
[
"Rizoiu",
"Marian-Andrei",
""
],
[
"Velcin",
"Julien",
""
],
[
"Bonnevay",
"Stéphane",
""
],
[
"Lallich",
"Stéphane",
""
]
] | TITLE: ClusPath: A Temporal-driven Clustering to Infer Typical Evolution Paths
ABSTRACT: We propose ClusPath, a novel algorithm for detecting general evolution
tendencies in a population of entities. We show how abstract notions, such as
the Swedish socio-economic model (in a political dataset) or companies' fiscal
optimization (in an economic dataset) can be inferred from low-level
descriptive features. Such high-level regularities in the evolution of entities
are detected by combining spatial and temporal features into a spatio-temporal
dissimilarity measure and using semi-supervised clustering techniques. The
relations between the evolution phases are modeled using a graph structure,
inferred simultaneously with the partition, by using a "slow changing world"
assumption. The idea is to ensure a smooth passage for entities along their
evolution paths, which catches the long-term trends in the dataset.
Additionally, we also provide a method, based on an evolutionary algorithm, to
tune the parameters of ClusPath to new, unseen datasets. This method assesses
the fitness of a solution using four opposed quality measures and proposes a
balanced compromise.
| no_new_dataset | 0.947962 |
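The combination of spatial and temporal features into one dissimilarity can be sketched as a weighted sum; this illustrative form only conveys the idea, and the paper's actual measure and its semi-supervised tuning differ.

```python
import numpy as np

def spatiotemporal_dissimilarity(xi, ti, xj, tj, alpha=0.5, gamma=1.0):
    """Combine descriptive-feature distance with temporal distance.

    xi, xj: feature vectors of two observations; ti, tj: timestamps.
    alpha trades off the spatial vs. temporal components and gamma
    rescales time to the feature-distance range.
    """
    spatial = np.linalg.norm(np.asarray(xi) - np.asarray(xj))
    temporal = abs(ti - tj)
    return alpha * spatial + (1.0 - alpha) * gamma * temporal
```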
1512.03542 | Zhengping Che | Zhengping Che, Sanjay Purushotham, Robinder Khemani, Yan Liu | Distilling Knowledge from Deep Networks with Applications to Healthcare
Domain | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exponential growth in Electronic Healthcare Records (EHR) has resulted in new
opportunities and urgent needs for discovery of meaningful data-driven
representations and patterns of diseases in Computational Phenotyping research.
Deep Learning models have shown superior performance for robust prediction in
computational phenotyping tasks, but suffer from the issue of model
interpretability which is crucial for clinicians involved in decision-making.
In this paper, we introduce a novel knowledge-distillation approach called
Interpretable Mimic Learning, to learn interpretable phenotype features for
making robust prediction while mimicking the performance of deep learning
models. Our framework uses Gradient Boosting Trees to learn interpretable
features from deep learning models such as Stacked Denoising Autoencoder and
Long Short-Term Memory. Exhaustive experiments on a real-world clinical
time-series dataset show that our method obtains similar or better performance
than the deep learning models, and it provides interpretable phenotypes for
clinical decision making.
| [
{
"version": "v1",
"created": "Fri, 11 Dec 2015 07:38:12 GMT"
}
] | 2015-12-14T00:00:00 | [
[
"Che",
"Zhengping",
""
],
[
"Purushotham",
"Sanjay",
""
],
[
"Khemani",
"Robinder",
""
],
[
"Liu",
"Yan",
""
]
] | TITLE: Distilling Knowledge from Deep Networks with Applications to Healthcare
Domain
ABSTRACT: Exponential growth in Electronic Healthcare Records (EHR) has resulted in new
opportunities and urgent needs for discovery of meaningful data-driven
representations and patterns of diseases in Computational Phenotyping research.
Deep Learning models have shown superior performance for robust prediction in
computational phenotyping tasks, but suffer from the issue of model
interpretability which is crucial for clinicians involved in decision-making.
In this paper, we introduce a novel knowledge-distillation approach called
Interpretable Mimic Learning, to learn interpretable phenotype features for
making robust prediction while mimicking the performance of deep learning
models. Our framework uses Gradient Boosting Trees to learn interpretable
features from deep learning models such as Stacked Denoising Autoencoder and
Long Short-Term Memory. Exhaustive experiments on a real-world clinical
time-series dataset show that our method obtains similar or better performance
than the deep learning models, and it provides interpretable phenotypes for
clinical decision making.
| no_new_dataset | 0.947478 |
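The mimic-learning recipe above reduces to fitting gradient-boosted trees to the soft outputs of a trained deep model; a minimal sketch with assumed names and hyperparameters follows.

```python
from sklearn.ensemble import GradientBoostingRegressor

def mimic(X, teacher_soft_preds):
    """Interpretable mimic learning: fit gradient-boosted trees to the
    soft (probability) predictions of a trained deep model instead of
    the raw labels, then read phenotypes off the tree ensemble.
    teacher_soft_preds is assumed to be a 1-D array of predicted risks.
    """
    student = GradientBoostingRegressor(n_estimators=200, max_depth=3)
    student.fit(X, teacher_soft_preds)
    return student
```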
1512.03549 | Pranjal Singh | Pranjal Singh, Amitabha Mukerjee | Words are not Equal: Graded Weighting Model for building Composite
Document Vectors | 10 Pages, 2 Figures, 11 Tables | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the success of distributional semantics, composing phrases from word
vectors remains an important challenge. Several methods have been tried for
benchmark tasks such as sentiment classification, including word vector
averaging, matrix-vector approaches based on parsing, and on-the-fly learning
of paragraph vectors. Most models usually omit stop words from the composition.
Instead of such a yes-no decision, we consider several graded schemes where
words are weighted according to their discriminatory relevance to the document
(e.g., idf). Some of these methods (particularly tf-idf) are seen to result in
a significant improvement in performance over the prior state of the art.
Further, combining such approaches into an ensemble based on alternate
classifiers such as the RNN model results in a 1.6% performance improvement on
the standard IMDB movie review dataset, and a 7.01% improvement on Amazon
product reviews. Since these are language-free models that can be obtained in
an unsupervised manner, they are also of interest for under-resourced
languages such as Hindi, among many others. We demonstrate the language-free
aspects by showing a gain of 12% for two review datasets over earlier results,
and also release a new larger dataset for future testing (Singh, 2015).
| [
{
"version": "v1",
"created": "Fri, 11 Dec 2015 08:44:45 GMT"
}
] | 2015-12-14T00:00:00 | [
[
"Singh",
"Pranjal",
""
],
[
"Mukerjee",
"Amitabha",
""
]
] | TITLE: Words are not Equal: Graded Weighting Model for building Composite
Document Vectors
ABSTRACT: Despite the success of distributional semantics, composing phrases from word
vectors remains an important challenge. Several methods have been tried for
benchmark tasks such as sentiment classification, including word vector
averaging, matrix-vector approaches based on parsing, and on-the-fly learning
of paragraph vectors. Most models usually omit stop words from the composition.
Instead of such a yes-no decision, we consider several graded schemes where
words are weighted according to their discriminatory relevance to the document
(e.g., idf). Some of these methods (particularly tf-idf) are seen to result in
a significant improvement in performance over the prior state of the art.
Further, combining such approaches into an ensemble based on alternate
classifiers such as the RNN model results in a 1.6% performance improvement on
the standard IMDB movie review dataset, and a 7.01% improvement on Amazon
product reviews. Since these are language-free models that can be obtained in
an unsupervised manner, they are also of interest for under-resourced
languages such as Hindi, among many others. We demonstrate the language-free
aspects by showing a gain of 12% for two review datasets over earlier results,
and also release a new larger dataset for future testing (Singh, 2015).
| new_dataset | 0.971266 |
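One graded weighting scheme from the record above, idf-weighted averaging of word vectors into a document vector, can be sketched as follows. The toy corpus and the random word_vecs lookup are assumptions, standing in for real pre-trained embeddings.

```python
# Hedged sketch of a graded (idf-weighted) composite document vector.
import numpy as np
from collections import Counter

docs = [["great", "movie"], ["bad", "movie"], ["great", "plot"]]
dim = 50
rng = np.random.RandomState(0)
word_vecs = {w: rng.randn(dim) for d in docs for w in d}   # stand-in embeddings

n_docs = len(docs)
df = Counter(w for d in docs for w in set(d))              # document frequency
idf = {w: np.log(n_docs / df[w]) for w in df}

def doc_vector(doc):
    # idf-weighted average: frequent (stop-like) words get low weight
    weights = np.array([idf[w] for w in doc])
    vecs = np.array([word_vecs[w] for w in doc])
    return weights @ vecs / max(weights.sum(), 1e-9)

print(doc_vector(docs[0]).shape)   # (50,)
```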
1512.03740 | Zhenzhong Lan | Zhenzhong Lan, Shoou-I Yu, Alexander G. Hauptmann | Improving Human Activity Recognition Through Ranking and Re-ranking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose two well-motivated ranking-based methods to enhance the
performance of current state-of-the-art human activity recognition systems.
First, as an improvement over the classic power normalization method, we
propose a parameter-free ranking technique called rank normalization (RaN). RaN
normalizes each dimension of the video features to address the sparse and
bursty distribution problems of Fisher Vectors and VLAD. Second, inspired by
curriculum learning, we introduce a training-free re-ranking technique called
multi-class iterative re-ranking (MIR). MIR captures relationships among action
classes by separating easy and typical videos from difficult ones and
re-ranking the prediction scores of classifiers accordingly. We demonstrate
that our methods significantly improve the performance of state-of-the-art
motion features on six real-world datasets.
| [
{
"version": "v1",
"created": "Fri, 11 Dec 2015 17:41:53 GMT"
}
] | 2015-12-14T00:00:00 | [
[
"Lan",
"Zhenzhong",
""
],
[
"Yu",
"Shoou-I",
""
],
[
"Hauptmann",
"Alexander G.",
""
]
] | TITLE: Improving Human Activity Recognition Through Ranking and Re-ranking
ABSTRACT: We propose two well-motivated ranking-based methods to enhance the
performance of current state-of-the-art human activity recognition systems.
First, as an improvement over the classic power normalization method, we
propose a parameter-free ranking technique called rank normalization (RaN). RaN
normalizes each dimension of the video features to address the sparse and
bursty distribution problems of Fisher Vectors and VLAD. Second, inspired by
curriculum learning, we introduce a training-free re-ranking technique called
multi-class iterative re-ranking (MIR). MIR captures relationships among action
classes by separating easy and typical videos from difficult ones and
re-ranking the prediction scores of classifiers accordingly. We demonstrate
that our methods significantly improve the performance of state-of-the-art
motion features on six real-world datasets.
| no_new_dataset | 0.94801 |
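The rank normalization (RaN) step described in the record above replaces each feature dimension's values by their ranks; a minimal numpy sketch is below. Tie handling and the scaling into [0, 1] are assumptions on our part.

```python
# Hedged sketch of rank normalization (RaN) over a feature matrix.
import numpy as np

def rank_normalize(X):
    # X: (n_samples, n_dims); double argsort yields per-column ranks 0..n-1
    ranks = X.argsort(axis=0).argsort(axis=0).astype(float)
    return ranks / (X.shape[0] - 1)          # scale ranks into [0, 1]

X = np.random.randn(100, 8) ** 3             # toy bursty features
Xn = rank_normalize(X)
print(Xn.min(), Xn.max())                    # 0.0 1.0
```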
1405.5189 | Bowei Chen | Bowei Chen and Shuai Yuan and Jun Wang | A dynamic pricing model for unifying programmatic guarantee and
real-time bidding in display advertising | Chen, Bowei and Yuan, Shuai and Wang, Jun (2014) A dynamic pricing
model for unifying programmatic guarantee and real-time bidding in display
advertising. In: The Eighth International Workshop on Data Mining for Online
Advertising, 24 - 27 August 2014, New York City | null | 10.1145/2648584.2648585 | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are two major ways of selling impressions in display advertising. They
are either sold in spot through auction mechanisms or in advance via guaranteed
contracts. The former has achieved a significant automation via real-time
bidding (RTB); however, the latter is still mainly done over the counter
through direct sales. This paper proposes a mathematical model that allocates
and prices the future impressions between real-time auctions and guaranteed
contracts. Under conventional economic assumptions, our model shows that the
two ways can be seamlessly combined programmatically and the publisher's
revenue can be maximized via price discrimination and optimal allocation. We
consider that advertisers are risk-averse and would be willing to purchase
guaranteed impressions if the total costs are less than their private values. We also
consider that an advertiser's purchase behavior can be affected by both the
guaranteed price and the time interval between the purchase time and the
impression delivery date. Our solution suggests an optimal percentage of future
impressions to sell in advance and provides an explicit formula to calculate at
what prices to sell. We find that the optimal guaranteed prices are dynamic and
are non-decreasing over time. We evaluate our method with RTB datasets and find
that the model adopts different strategies in allocation and pricing according
to the level of competition. From the experiments we find that, in a less
competitive market, lower prices of the guaranteed contracts will encourage the
purchase in advance and the revenue gain is mainly contributed by the increased
competition in future RTB. In a highly competitive market, advertisers are more
willing to purchase the guaranteed contracts and thus higher prices are
expected. The revenue gain is largely contributed by the guaranteed selling.
| [
{
"version": "v1",
"created": "Tue, 20 May 2014 19:01:27 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Jul 2014 00:10:13 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Dec 2015 23:30:00 GMT"
}
] | 2015-12-11T00:00:00 | [
[
"Chen",
"Bowei",
""
],
[
"Yuan",
"Shuai",
""
],
[
"Wang",
"Jun",
""
]
] | TITLE: A dynamic pricing model for unifying programmatic guarantee and
real-time bidding in display advertising
ABSTRACT: There are two major ways of selling impressions in display advertising. They
are either sold in spot through auction mechanisms or in advance via guaranteed
contracts. The former has achieved a significant automation via real-time
bidding (RTB); however, the latter is still mainly done over the counter
through direct sales. This paper proposes a mathematical model that allocates
and prices the future impressions between real-time auctions and guaranteed
contracts. Under conventional economic assumptions, our model shows that the
two ways can be seamlessly combined programmatically and the publisher's
revenue can be maximized via price discrimination and optimal allocation. We
consider that advertisers are risk-averse and would be willing to purchase
guaranteed impressions if the total costs are less than their private values. We also
consider that an advertiser's purchase behavior can be affected by both the
guaranteed price and the time interval between the purchase time and the
impression delivery date. Our solution suggests an optimal percentage of future
impressions to sell in advance and provides an explicit formula to calculate at
what prices to sell. We find that the optimal guaranteed prices are dynamic and
are non-decreasing over time. We evaluate our method with RTB datasets and find
that the model adopts different strategies in allocation and pricing according
to the level of competition. From the experiments we find that, in a less
competitive market, lower prices of the guaranteed contracts will encourage the
purchase in advance and the revenue gain is mainly contributed by the increased
competition in future RTB. In a highly competitive market, advertisers are more
willing to purchase the guaranteed contracts and thus higher prices are
expected. The revenue gain is largely contributed by the guaranteed selling.
| no_new_dataset | 0.943712 |
1511.02436 | Syvester Olubolu Orimaye Dr | Sylvester Olubolu Orimaye, Kah Yee Tai, Jojo Sze-Meng Wong and Chee
Piau Wong | Learning Linguistic Biomarkers for Predicting Mild Cognitive Impairment
using Compound Skip-grams | Accepted and presented at the 2015 NIPS Workshop on Machine Learning
in Healthcare (MLHC), Montreal, Canada | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting Mild Cognitive Impairment (MCI) is currently a challenge as
existing diagnostic criteria rely on neuropsychological examinations. Automated
Machine Learning (ML) models that are trained on verbal utterances of MCI
patients can aid diagnosis. Using a combination of skip-gram features, our
model learned several linguistic biomarkers to distinguish between 19 patients
with MCI and 19 healthy control individuals from the DementiaBank language
transcript clinical dataset. Results show that a model with compound
skip-grams achieves a better AUC and could help ML prediction on small MCI
data samples.
| [
{
"version": "v1",
"created": "Sun, 8 Nov 2015 03:45:49 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Dec 2015 03:25:54 GMT"
}
] | 2015-12-11T00:00:00 | [
[
"Orimaye",
"Sylvester Olubolu",
""
],
[
"Tai",
"Kah Yee",
""
],
[
"Wong",
"Jojo Sze-Meng",
""
],
[
"Wong",
"Chee Piau",
""
]
] | TITLE: Learning Linguistic Biomarkers for Predicting Mild Cognitive Impairment
using Compound Skip-grams
ABSTRACT: Predicting Mild Cognitive Impairment (MCI) is currently a challenge as
existing diagnostic criteria rely on neuropsychological examinations. Automated
Machine Learning (ML) models that are trained on verbal utterances of MCI
patients can aid diagnosis. Using a combination of skip-gram features, our
model learned several linguistic biomarkers to distinguish between 19 patients
with MCI and 19 healthy control individuals from the DementiaBank language
transcript clinical dataset. Results show that a model with compound
skip-grams achieves a better AUC and could help ML prediction on small MCI
data samples.
| no_new_dataset | 0.939803 |
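As a hedged illustration of the skip-gram features mentioned above, the snippet below extracts k-skip-bigrams from a tokenised utterance; how the authors compound these into their final feature set is not specified here, so only the basic building block is shown.

```python
# Hedged sketch: k-skip-bigram extraction from a token list.
from itertools import combinations

def skip_bigrams(tokens, k):
    # ordered pairs with at most k intervening tokens between them
    return [(tokens[i], tokens[j])
            for i, j in combinations(range(len(tokens)), 2)
            if j - i - 1 <= k]

print(skip_bigrams(["the", "cookie", "jar", "fell"], k=1))
# [('the', 'cookie'), ('the', 'jar'), ('cookie', 'jar'),
#  ('cookie', 'fell'), ('jar', 'fell')]
```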
1512.02159 | Malgorzata Krawczyk | Mateusz Pomorski, Malgorzata J. Krawczyk, Krzysztof Kulakowski,
Jaroslaw Kwapien, Marcel Ausloos | Inferring cultural regions from correlation networks of given baby names | null | Physica A 445 (2016) 169-175 | 10.1016/j.physa.2015.11.003 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report investigations on the statistical characteristics of the baby names
given between 1910 and 2010 in the United States of America. For each year, the
100 most frequent names in the USA are sorted out. For these names, the
correlations between the names profiles are calculated for all pairs of states
(minus Hawaii and Alaska). The correlations are used to form a weighted network
which is found to vary mildly in time. In fact, the structure of communities in
the network remains quite stable till about 1980. The goal is that the
calculated structure approximately reproduces the usually accepted geopolitical
regions: the North East, the South, and the "Midwest + West" as the third one.
Furthermore, the dataset reveals that the name distribution satisfies the Zipf
law, separately for each state and each year, i.e. the name frequency $f\propto
r^{-\alpha}$, where r is the name rank. Between 1920 and 1980, the exponent
alpha is the largest one for the set of states classified as 'the South', but
the smallest one for the set of states classified as "Midwest + West". Our
interpretation is that the pool of selected names was quite narrow in the
Southern states. The data is compared with some related statistics of names in
Belgium, a country also with different regions, but having quite a different
scale than the USA. There, the Zipf exponent is low for young people and for
the Brussels citizens.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 18:42:10 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Dec 2015 10:17:54 GMT"
}
] | 2015-12-11T00:00:00 | [
[
"Pomorski",
"Mateusz",
""
],
[
"Krawczyk",
"Malgorzata J.",
""
],
[
"Kulakowski",
"Krzysztof",
""
],
[
"Kwapien",
"Jaroslaw",
""
],
[
"Ausloos",
"Marcel",
""
]
] | TITLE: Inferring cultural regions from correlation networks of given baby names
ABSTRACT: We report investigations on the statistical characteristics of the baby names
given between 1910 and 2010 in the United States of America. For each year, the
100 most frequent names in the USA are sorted out. For these names, the
correlations between the names profiles are calculated for all pairs of states
(minus Hawaii and Alaska). The correlations are used to form a weighted network
which is found to vary mildly in time. In fact, the structure of communities in
the network remains quite stable till about 1980. The goal is that the
calculated structure approximately reproduces the usually accepted geopolitical
regions: the North East, the South, and the "Midwest + West" as the third one.
Furthermore, the dataset reveals that the name distribution satisfies the Zipf
law, separately for each state and each year, i.e. the name frequency $f\propto
r^{-\alpha}$, where r is the name rank. Between 1920 and 1980, the exponent
alpha is the largest one for the set of states classified as 'the South', but
the smallest one for the set of states classified as "Midwest + West". Our
interpretation is that the pool of selected names was quite narrow in the
Southern states. The data is compared with some related statistics of names in
Belgium, a country also with different regions, but having quite a different
scale than the USA. There, the Zipf exponent is low for young people and for
the Brussels citizens.
| no_new_dataset | 0.926636 |
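The Zipf-law check described above (f proportional to r^-alpha) is commonly estimated by a linear fit in log-log space; a minimal sketch with illustrative toy counts follows.

```python
# Hedged sketch: estimate the Zipf exponent alpha from rank-frequency data.
import numpy as np

counts = np.array([5000, 2600, 1800, 1400, 1100, 950, 820, 730])  # toy counts
ranks = np.arange(1, len(counts) + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
alpha = -slope            # f ~ r^-alpha, so alpha is minus the log-log slope
print("estimated alpha:", alpha)
```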
1512.02665 | Feng Zhou | Feng Zhou, Yuanqing Lin | Fine-grained Image Classification by Exploring Bipartite-Graph Labels | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Given a food image, can a fine-grained object recognition engine tell "which
restaurant which dish" the food belongs to? Such ultra-fine grained image
recognition is the key for many applications like search by images, but it is
very challenging because it needs to discern subtle difference between classes
while dealing with the scarcity of training data. Fortunately, the ultra-fine
granularity naturally brings rich relationships among object classes. This
paper proposes a novel approach to exploit the rich relationships through
bipartite-graph labels (BGL). We show how to model BGL in an overall
convolutional neural network, and the resulting system can be optimized through
back-propagation. We also show that it is computationally efficient in
inference thanks to the bipartite structure. To facilitate the study, we
construct a new food benchmark dataset, which consists of 37,885 food images
collected from 6 restaurants and 975 menus in total. Experimental results on
this new food dataset and three others demonstrate that BGL advances previous
works in fine-grained object recognition. An online demo is available at
http://www.f-zhou.com/fg_demo/.
| [
{
"version": "v1",
"created": "Tue, 8 Dec 2015 21:18:35 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Dec 2015 18:49:54 GMT"
}
] | 2015-12-11T00:00:00 | [
[
"Zhou",
"Feng",
""
],
[
"Lin",
"Yuanqing",
""
]
] | TITLE: Fine-grained Image Classification by Exploring Bipartite-Graph Labels
ABSTRACT: Given a food image, can a fine-grained object recognition engine tell "which
restaurant which dish" the food belongs to? Such ultra-fine grained image
recognition is the key for many applications like search by images, but it is
very challenging because it needs to discern subtle difference between classes
while dealing with the scarcity of training data. Fortunately, the ultra-fine
granularity naturally brings rich relationships among object classes. This
paper proposes a novel approach to exploit the rich relationships through
bipartite-graph labels (BGL). We show how to model BGL in an overall
convolutional neural network, and the resulting system can be optimized through
back-propagation. We also show that it is computationally efficient in
inference thanks to the bipartite structure. To facilitate the study, we
construct a new food benchmark dataset, which consists of 37,885 food images
collected from 6 restaurants and 975 menus in total. Experimental results on
this new food dataset and three others demonstrate that BGL advances previous
works in fine-grained object recognition. An online demo is available at
http://www.f-zhou.com/fg_demo/.
| new_dataset | 0.960805 |
1512.03385 | Kaiming He | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | Deep Residual Learning for Image Recognition | Tech report | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation.
| [
{
"version": "v1",
"created": "Thu, 10 Dec 2015 19:51:55 GMT"
}
] | 2015-12-11T00:00:00 | [
[
"He",
"Kaiming",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Ren",
"Shaoqing",
""
],
[
"Sun",
"Jian",
""
]
] | TITLE: Deep Residual Learning for Image Recognition
ABSTRACT: Deeper neural networks are more difficult to train. We present a residual
learning framework to ease the training of networks that are substantially
deeper than those used previously. We explicitly reformulate the layers as
learning residual functions with reference to the layer inputs, instead of
learning unreferenced functions. We provide comprehensive empirical evidence
showing that these residual networks are easier to optimize, and can gain
accuracy from considerably increased depth. On the ImageNet dataset we evaluate
residual nets with a depth of up to 152 layers---8x deeper than VGG nets but
still having lower complexity. An ensemble of these residual nets achieves
3.57% error on the ImageNet test set. This result won the 1st place on the
ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100
and 1000 layers.
The depth of representations is of central importance for many visual
recognition tasks. Solely due to our extremely deep representations, we obtain
a 28% relative improvement on the COCO object detection dataset. Deep residual
nets are foundations of our submissions to ILSVRC & COCO 2015 competitions,
where we also won the 1st places on the tasks of ImageNet detection, ImageNet
localization, COCO detection, and COCO segmentation.
| no_new_dataset | 0.950319 |
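A minimal residual block, the reformulation the abstract above describes (the stacked layers learn F(x) and the block outputs F(x) + x), can be sketched in PyTorch; the 3x3/BatchNorm layout follows common practice, but channel counts and other details are assumptions, not the paper's exact configuration.

```python
# Hedged sketch of a residual block with an identity shortcut.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)            # residual: F(x) plus identity x

y = ResidualBlock(64)(torch.randn(2, 64, 32, 32))
print(y.shape)                               # torch.Size([2, 64, 32, 32])
```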
1412.2154 | Raja Jurdak | Raja Jurdak, Kun Zhao, Jiajun Liu, Maurice AbouJaoude, Mark Cameron,
David Newth | Understanding Human Mobility from Twitter | 17 pages, 6 Figures | PLoS ONE 10(7): e0131469 (2015) | 10.1371/journal.pone.0131469 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding human mobility is crucial for a broad range of applications
from disease prediction to communication networks. Most efforts on studying
human mobility have so far used private and low resolution data, such as call
data records. Here, we propose Twitter as a proxy for human mobility, as it
relies on publicly available data and provides high resolution positioning when
users opt to geotag their tweets with their current location. We analyse a
Twitter dataset with more than six million geotagged tweets posted in
Australia, and we demonstrate that Twitter can be a reliable source for
studying human mobility patterns. Our analysis shows that geotagged tweets can
capture rich features of human mobility, such as the diversity of movement
orbits among individuals and of movements within and between cities. We also
find that short and long-distance movers both spend most of their time in large
metropolitan areas, in contrast with intermediate-distance movers' movements,
reflecting the impact of different modes of travel. Our study provides solid
evidence that Twitter can indeed be a useful proxy for tracking and predicting
human movement.
| [
{
"version": "v1",
"created": "Wed, 3 Dec 2014 23:26:08 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Dec 2014 23:02:10 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Apr 2015 02:54:26 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Jurdak",
"Raja",
""
],
[
"Zhao",
"Kun",
""
],
[
"Liu",
"Jiajun",
""
],
[
"AbouJaoude",
"Maurice",
""
],
[
"Cameron",
"Mark",
""
],
[
"Newth",
"David",
""
]
] | TITLE: Understanding Human Mobility from Twitter
ABSTRACT: Understanding human mobility is crucial for a broad range of applications
from disease prediction to communication networks. Most efforts on studying
human mobility have so far used private and low resolution data, such as call
data records. Here, we propose Twitter as a proxy for human mobility, as it
relies on publicly available data and provides high resolution positioning when
users opt to geotag their tweets with their current location. We analyse a
Twitter dataset with more than six million geotagged tweets posted in
Australia, and we demonstrate that Twitter can be a reliable source for
studying human mobility patterns. Our analysis shows that geotagged tweets can
capture rich features of human mobility, such as the diversity of movement
orbits among individuals and of movements within and between cities. We also
find that short and long-distance movers both spend most of their time in large
metropolitan areas, in contrast with intermediate-distance movers' movements,
reflecting the impact of different modes of travel. Our study provides solid
evidence that Twitter can indeed be a useful proxy for tracking and predicting
human movement.
| no_new_dataset | 0.893681 |
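One standard way to quantify the "movement orbits" mentioned above is the radius of gyration of a user's geotagged points; a hedged numpy sketch follows. The coordinates are toy values, and using the mean latitude/longitude as the centre of mass is an approximation.

```python
# Hedged sketch: radius of gyration over geotagged tweet locations.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2 +
         np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

lats = np.array([-33.87, -33.86, -33.90, -37.81])   # toy tweet latitudes
lons = np.array([151.21, 151.20, 151.18, 144.96])   # toy tweet longitudes
clat, clon = lats.mean(), lons.mean()               # approximate centre of mass
rg = np.sqrt(np.mean(haversine_km(lats, lons, clat, clon) ** 2))
print("radius of gyration (km):", rg)
```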
1503.01052 | Emre Can Kara | Emre Can Kara, Jason S. Macdonald, Douglas Black, Mario Berges,
Gabriela Hug, Sila Kiliccote | Estimating the Benefits of Electric Vehicle Smart Charging at
Non-Residential Locations: A Data-Driven Approach | Pre-print, under review at Applied Energy | Applied Energy, Volume 155, 1 October 2015, Pages 515-525 | 10.1016/j.apenergy.2015.05.072 | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we use data collected from over 2000 non-residential electric
vehicle supply equipment (EVSE) units located in Northern California for the year
of 2013 to estimate the potential benefits of smart electric vehicle (EV)
charging. We develop a smart charging framework to identify the benefits of
non-residential EV charging to the load aggregators and the distribution grid.
Using this extensive dataset, we aim to improve upon past studies focusing on
the benefits of smart EV charging by relaxing the assumptions made in these
studies regarding: (i) driving patterns, driver behavior and driver types; (ii)
the scalability of a limited number of simulated vehicles to represent
different load aggregation points in the power system with different customer
characteristics; and (iii) the charging profile of EVs. First, we study the
benefits of EV aggregations behind-the-meter, where a time-of-use pricing
scheme is used to understand the benefits to the owner when EV aggregations
shift load from high cost periods to lower cost periods. For the year of 2013,
we show a reduction of up to 24.8% in the monthly bill is possible. Then,
following a similar aggregation strategy, we show that EV aggregations decrease
their contribution to the system peak load by approximately 40% when charging
is controlled within arrival and departure times. Our results also show that it
could be expected to shift approximately 0.25kWh (~2.8%) of energy per
non-residential EV charging session from peak periods (12PM-6PM) to off-peak
periods (after 6PM) in Northern California for the year of 2013.
| [
{
"version": "v1",
"created": "Tue, 3 Mar 2015 18:50:54 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Kara",
"Emre Can",
""
],
[
"Macdonald",
"Jason S.",
""
],
[
"Black",
"Douglas",
""
],
[
"Berges",
"Mario",
""
],
[
"Hug",
"Gabriela",
""
],
[
"Kiliccote",
"Sila",
""
]
] | TITLE: Estimating the Benefits of Electric Vehicle Smart Charging at
Non-Residential Locations: A Data-Driven Approach
ABSTRACT: In this paper, we use data collected from over 2000 non-residential electric
vehicle supply equipment (EVSE) units located in Northern California for the year
of 2013 to estimate the potential benefits of smart electric vehicle (EV)
charging. We develop a smart charging framework to identify the benefits of
non-residential EV charging to the load aggregators and the distribution grid.
Using this extensive dataset, we aim to improve upon past studies focusing on
the benefits of smart EV charging by relaxing the assumptions made in these
studies regarding: (i) driving patterns, driver behavior and driver types; (ii)
the scalability of a limited number of simulated vehicles to represent
different load aggregation points in the power system with different customer
characteristics; and (iii) the charging profile of EVs. First, we study the
benefits of EV aggregations behind-the-meter, where a time-of-use pricing
scheme is used to understand the benefits to the owner when EV aggregations
shift load from high cost periods to lower cost periods. For the year of 2013,
we show a reduction of up to 24.8% in the monthly bill is possible. Then,
following a similar aggregation strategy, we show that EV aggregations decrease
their contribution to the system peak load by approximately 40% when charging
is controlled within arrival and departure times. Our results also show that it
could be expected to shift approximately 0.25kWh (~2.8%) of energy per
non-residential EV charging session from peak periods (12PM-6PM) to off-peak
periods (after 6PM) in Northern California for the year of 2013.
| no_new_dataset | 0.943348 |
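The peak-to-off-peak load shifting quantified above can be illustrated with a toy greedy scheduler that fills each session's energy into the cheapest hours between arrival and departure; the tariff, session parameters, and greedy rule are all assumptions, not the paper's smart charging framework.

```python
# Hedged sketch: greedily schedule an EV session into the cheapest hours.
import numpy as np

price = np.array([0.10] * 12 + [0.30] * 6 + [0.15] * 6)   # toy $/kWh by hour
arrival, departure = 9, 20          # charger occupied over hours [9, 20)
energy_kwh, max_rate_kw = 8.0, 3.3  # session demand and charger rate

hours = np.arange(arrival, departure)
schedule = np.zeros(24)
for h in hours[np.argsort(price[hours])]:   # cheapest feasible hours first
    take = min(max_rate_kw, energy_kwh)
    schedule[h] = take
    energy_kwh -= take
    if energy_kwh <= 0:
        break
print("charging hours:", schedule.nonzero()[0], "total kWh:", schedule.sum())
```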
1507.00043 | Athanasios N. Nikolakopoulos | Athanasios N. Nikolakopoulos and John D. Garofalakis | Top-N recommendations in the presence of sparsity: An NCD-based approach | To appear in the Web Intelligence Journal as a regular paper | null | 10.3233/WEB-150324 | null | cs.IR cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Making recommendations in the presence of sparsity is known to present one of
the most challenging problems faced by collaborative filtering methods. In this
work we tackle this problem by exploiting the innately hierarchical structure
of the item space following an approach inspired by the theory of
Decomposability. We view the item space as a Nearly Decomposable system and we
define blocks of closely related elements and corresponding indirect proximity
components. We study the theoretical properties of the decomposition and we
derive sufficient conditions that guarantee full item space coverage even in
cold-start recommendation scenarios. A comprehensive set of experiments on the
MovieLens and the Yahoo!R2Music datasets, using several widely applied
performance metrics, support our model's theoretically predicted properties and
verify that NCDREC outperforms several state-of-the-art algorithms, in terms of
recommendation accuracy, diversity and sparseness insensitivity.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 21:34:53 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jul 2015 13:55:35 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Nikolakopoulos",
"Athanasios N.",
""
],
[
"Garofalakis",
"John D.",
""
]
] | TITLE: Top-N recommendations in the presence of sparsity: An NCD-based approach
ABSTRACT: Making recommendations in the presence of sparsity is known to present one of
the most challenging problems faced by collaborative filtering methods. In this
work we tackle this problem by exploiting the innately hierarchical structure
of the item space following an approach inspired by the theory of
Decomposability. We view the item space as a Nearly Decomposable system and we
define blocks of closely related elements and corresponding indirect proximity
components. We study the theoretical properties of the decomposition and we
derive sufficient conditions that guarantee full item space coverage even in
cold-start recommendation scenarios. A comprehensive set of experiments on the
MovieLens and the Yahoo!R2Music datasets, using several widely applied
performance metrics, support our model's theoretically predicted properties and
verify that NCDREC outperforms several state-of-the-art algorithms, in terms of
recommendation accuracy, diversity and sparseness insensitivity.
| no_new_dataset | 0.944689 |
1511.05547 | Baochen Sun | Baochen Sun, Jiashi Feng, Kate Saenko | Return of Frustratingly Easy Domain Adaptation | Fixed typos. Full paper to appear in AAAI-16. Extended Abstract of
the full paper to appear in TASK-CV 2015 workshop | null | null | null | cs.CV cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unlike human learning, machine learning often fails to handle changes between
training (source) and test (target) input distributions. Such domain shifts,
common in practical scenarios, severely damage the performance of conventional
machine learning methods. Supervised domain adaptation methods have been
proposed for the case when the target data have labels, including some that
perform very well despite being "frustratingly easy" to implement. However, in
practice, the target domain is often unlabeled, requiring unsupervised
adaptation. We propose a simple, effective, and efficient method for
unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL
minimizes domain shift by aligning the second-order statistics of source and
target distributions, without requiring any target labels. Even though it is
extraordinarily simple--it can be implemented in four lines of Matlab
code--CORAL performs remarkably well in extensive evaluations on standard
benchmark datasets.
| [
{
"version": "v1",
"created": "Tue, 17 Nov 2015 20:53:26 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Dec 2015 05:39:43 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Sun",
"Baochen",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Return of Frustratingly Easy Domain Adaptation
ABSTRACT: Unlike human learning, machine learning often fails to handle changes between
training (source) and test (target) input distributions. Such domain shifts,
common in practical scenarios, severely damage the performance of conventional
machine learning methods. Supervised domain adaptation methods have been
proposed for the case when the target data have labels, including some that
perform very well despite being "frustratingly easy" to implement. However, in
practice, the target domain is often unlabeled, requiring unsupervised
adaptation. We propose a simple, effective, and efficient method for
unsupervised domain adaptation called CORrelation ALignment (CORAL). CORAL
minimizes domain shift by aligning the second-order statistics of source and
target distributions, without requiring any target labels. Even though it is
extraordinarily simple--it can be implemented in four lines of Matlab
code--CORAL performs remarkably well in extensive evaluations on standard
benchmark datasets.
| no_new_dataset | 0.950319 |
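The "four lines of Matlab" claim above translates almost directly to Python: whiten the source features with the source covariance and recolour them with the target covariance. The identity regularisation follows the usual CORAL description, while the toy data are assumptions.

```python
# Hedged sketch of CORAL: align second-order statistics of source to target.
import numpy as np
from scipy.linalg import fractional_matrix_power as fmp

rng = np.random.RandomState(0)
Ds = 3.0 * rng.randn(200, 10)        # toy source features
Dt = 6.0 * rng.randn(300, 10) + 1.0  # toy target features, shifted covariance

Cs = np.cov(Ds, rowvar=False) + np.eye(10)   # regularised source covariance
Ct = np.cov(Dt, rowvar=False) + np.eye(10)   # regularised target covariance
Ds_aligned = Ds @ fmp(Cs, -0.5) @ fmp(Ct, 0.5)   # whiten, then recolour

# diagnostic: covariance gap between aligned source and target
gap = np.abs(np.cov(Ds_aligned, rowvar=False) - np.cov(Dt, rowvar=False)).mean()
print("mean covariance gap:", gap)
```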
1512.02033 | Danny Karmon | Danny Karmon and Joseph Keshet | Risk Minimization in Structured Prediction using Orbit Loss | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new surrogate loss function called orbit loss in the
structured prediction framework, which has good theoretical and practical
advantages. While the orbit loss is not convex, it has a simple analytical
gradient and a simple perceptron-like learning rule. We analyze the new loss
theoretically and state a PAC-Bayesian generalization bound. We also prove that
the new loss is consistent in the strong sense; namely, the risk achieved by
the set of the trained parameters approaches the infimum risk achievable by any
linear decoder over the given features. Methods that are aimed at risk
minimization, such as the structured ramp loss, the structured probit loss and
the direct loss minimization require at least two inference operations per
training iteration. In this sense, the orbit loss is more efficient as it
requires only one inference operation per training iteration, while yielding
similar performance. We conclude the paper with an empirical comparison of the
proposed loss function to the structured hinge loss, the structured ramp loss,
the structured probit loss and the direct loss minimization method on several
benchmark datasets and tasks.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 13:30:27 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Dec 2015 09:59:56 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Karmon",
"Danny",
""
],
[
"Keshet",
"Joseph",
""
]
] | TITLE: Risk Minimization in Structured Prediction using Orbit Loss
ABSTRACT: We introduce a new surrogate loss function called orbit loss in the
structured prediction framework, which has good theoretical and practical
advantages. While the orbit loss is not convex, it has a simple analytical
gradient and a simple perceptron-like learning rule. We analyze the new loss
theoretically and state a PAC-Bayesian generalization bound. We also prove that
the new loss is consistent in the strong sense; namely, the risk achieved by
the set of the trained parameters approaches the infimum risk achievable by any
linear decoder over the given features. Methods that are aimed at risk
minimization, such as the structured ramp loss, the structured probit loss and
the direct loss minimization require at least two inference operations per
training iteration. In this sense, the orbit loss is more efficient as it
requires only one inference operation per training iteration, while yielding
similar performance. We conclude the paper with an empirical comparison of the
proposed loss function to the structured hinge loss, the structured ramp loss,
the structured probit loss and the direct loss minimization method on several
benchmark datasets and tasks.
| no_new_dataset | 0.949623 |
1512.02736 | Xingyu Zeng | Xingyu Zeng, Wanli Ouyang, Xiaogang Wang | Window-Object Relationship Guided Representation Learning for Generic
Object Detections | 9 pages, including 1 reference page, 6 figures | null | null | null | cs.CV cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In existing works that learn representation for object detection, the
relationship between a candidate window and the ground truth bounding box of an
object is simplified by thresholding their overlap. This paper shows
information loss in this simplification and picks up the relative location/size
information discarded by thresholding. We propose a representation learning
pipeline to use the relationship as supervision for improving the learned
representation in object detection. Such relationship is not limited to object
of the target category, but also includes surrounding objects of other
categories. We show that image regions with multiple contexts and multiple
rotations are effective in capturing such relationship during the
representation learning process and in handling the semantic and visual
variation caused by different window-object configurations. Experimental
results show that the representation learned by our approach can improve the
object detection accuracy by 6.4% in mean average precision (mAP) on
ILSVRC2014. On the challenging ILSVRC2014 test dataset, 48.6% mAP is achieved
by our single model and it is the best among published results. On PASCAL VOC,
it outperforms the state-of-the-art result of Fast RCNN by 3.3% in absolute
mAP.
| [
{
"version": "v1",
"created": "Wed, 9 Dec 2015 03:32:21 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Zeng",
"Xingyu",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Wang",
"Xiaogang",
""
]
] | TITLE: Window-Object Relationship Guided Representation Learning for Generic
Object Detections
ABSTRACT: In existing works that learn representation for object detection, the
relationship between a candidate window and the ground truth bounding box of an
object is simplified by thresholding their overlap. This paper shows
information loss in this simplification and picks up the relative location/size
information discarded by thresholding. We propose a representation learning
pipeline to use the relationship as supervision for improving the learned
representation in object detection. Such relationship is not limited to object
of the target category, but also includes surrounding objects of other
categories. We show that image regions with multiple contexts and multiple
rotations are effective in capturing such relationship during the
representation learning process and in handling the semantic and visual
variation caused by different window-object configurations. Experimental
results show that the representation learned by our approach can improve the
object detection accuracy by 6.4% in mean average precision (mAP) on
ILSVRC2014. On the challenging ILSVRC2014 test dataset, 48.6% mAP is achieved
by our single model and it is the best among published results. On PASCAL VOC,
it outperforms the state-of-the-art result of Fast RCNN by 3.3% in absolute
mAP.
| no_new_dataset | 0.949669 |
1512.02896 | Farid M. Naini | Farid M. Naini, Jayakrishnan Unnikrishnan, Patrick Thiran, Martin
Vetterli | Where You Are Is Who You Are: User Identification by Matching Statistics | null | null | 10.1109/TIFS.2015.2498131 | null | cs.LG cs.CR cs.SI stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most users of online services have unique behavioral or usage patterns. These
behavioral patterns can be exploited to identify and track users by using only
the observed patterns in the behavior. We study the task of identifying users
from statistics of their behavioral patterns. Specifically, we focus on the
setting in which we are given histograms of users' data collected during two
different experiments. We assume that, in the first dataset, the users'
identities are anonymized or hidden and that, in the second dataset, their
identities are known. We study the task of identifying the users by matching
the histograms of their data in the first dataset with the histograms from the
second dataset. In recent works, the optimal algorithm for this user
identification task is introduced. In this paper, we evaluate the effectiveness
of this method on three different types of datasets and in multiple scenarios.
Using datasets such as call data records, web browsing histories, and GPS
trajectories, we show that a large fraction of users can be easily identified
given only histograms of their data; hence these histograms can act as users'
fingerprints. We also verify that simultaneous identification of users achieves
better performance compared to one-by-one user identification. We show that
using the optimal method for identification gives higher identification
accuracy than heuristics-based approaches in practical scenarios. The accuracy
obtained under this optimal method can thus be used to quantify the maximum
level of user identification that is possible in such settings. We show that
the key factors affecting the accuracy of the optimal identification algorithm
are the duration of the data collection, the number of users in the anonymized
dataset, and the resolution of the dataset. We analyze the effectiveness of
k-anonymization in resisting user identification attacks on these datasets.
| [
{
"version": "v1",
"created": "Wed, 9 Dec 2015 15:23:33 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Naini",
"Farid M.",
""
],
[
"Unnikrishnan",
"Jayakrishnan",
""
],
[
"Thiran",
"Patrick",
""
],
[
"Vetterli",
"Martin",
""
]
] | TITLE: Where You Are Is Who You Are: User Identification by Matching Statistics
ABSTRACT: Most users of online services have unique behavioral or usage patterns. These
behavioral patterns can be exploited to identify and track users by using only
the observed patterns in the behavior. We study the task of identifying users
from statistics of their behavioral patterns. Specifically, we focus on the
setting in which we are given histograms of users' data collected during two
different experiments. We assume that, in the first dataset, the users'
identities are anonymized or hidden and that, in the second dataset, their
identities are known. We study the task of identifying the users by matching
the histograms of their data in the first dataset with the histograms from the
second dataset. In recent works, the optimal algorithm for this user
identification task is introduced. In this paper, we evaluate the effectiveness
of this method on three different types of datasets and in multiple scenarios.
Using datasets such as call data records, web browsing histories, and GPS
trajectories, we show that a large fraction of users can be easily identified
given only histograms of their data; hence these histograms can act as users'
fingerprints. We also verify that simultaneous identification of users achieves
better performance compared to one-by-one user identification. We show that
using the optimal method for identification gives higher identification
accuracy than heuristics-based approaches in practical scenarios. The accuracy
obtained under this optimal method can thus be used to quantify the maximum
level of user identification that is possible in such settings. We show that
the key factors affecting the accuracy of the optimal identification algorithm
are the duration of the data collection, the number of users in the anonymized
dataset, and the resolution of the dataset. We analyze the effectiveness of
k-anonymization in resisting user identification attacks on these datasets.
| no_new_dataset | 0.943086 |
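A hedged sketch of the histogram-matching task described above: score every anonymised/known pair with a likelihood-style divergence and solve the resulting assignment problem jointly. The toy histograms, the KL cost, and the Hungarian solver are illustrative assumptions, not necessarily the paper's optimal algorithm.

```python
# Hedged sketch: identify anonymised users by matching their histograms.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import entropy

rng = np.random.RandomState(0)
known = rng.dirichlet(np.ones(20), size=5)          # 5 users' visit histograms
perm = rng.permutation(5)                           # hidden identity shuffle
anon = known[perm] + 0.01 * rng.rand(5, 20)         # noisy anonymised copies
anon /= anon.sum(axis=1, keepdims=True)

cost = np.array([[entropy(a, k) for k in known] for a in anon])  # KL(a || k)
rows, cols = linear_sum_assignment(cost)            # jointly optimal matching
print(perm, cols)   # cols should recover the hidden permutation
```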
1512.03012 | Manolis Savva | Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan,
Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su,
Jianxiong Xiao, Li Yi, and Fisher Yu | ShapeNet: An Information-Rich 3D Model Repository | null | null | null | null | cs.GR cs.AI cs.CG cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present ShapeNet: a richly-annotated, large-scale repository of shapes
represented by 3D CAD models of objects. ShapeNet contains 3D models from a
multitude of semantic categories and organizes them under the WordNet taxonomy.
It is a collection of datasets providing many semantic annotations for each 3D
model such as consistent rigid alignments, parts and bilateral symmetry planes,
physical sizes, keywords, as well as other planned annotations. Annotations are
made available through a public web-based interface to enable data
visualization of object attributes, promote data-driven geometric analysis, and
provide a large-scale quantitative benchmark for research in computer graphics
and vision. At the time of this technical report, ShapeNet has indexed more
than 3,000,000 models, 220,000 of which are classified into 3,135
categories (WordNet synsets). In this report we describe the ShapeNet effort as
a whole, provide details for all currently available datasets, and summarize
future plans.
| [
{
"version": "v1",
"created": "Wed, 9 Dec 2015 19:42:48 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Chang",
"Angel X.",
""
],
[
"Funkhouser",
"Thomas",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Hanrahan",
"Pat",
""
],
[
"Huang",
"Qixing",
""
],
[
"Li",
"Zimo",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Savva",
"Manolis",
""
],
[
"Song",
"Shuran",
""
],
[
"Su",
"Hao",
""
],
[
"Xiao",
"Jianxiong",
""
],
[
"Yi",
"Li",
""
],
[
"Yu",
"Fisher",
""
]
] | TITLE: ShapeNet: An Information-Rich 3D Model Repository
ABSTRACT: We present ShapeNet: a richly-annotated, large-scale repository of shapes
represented by 3D CAD models of objects. ShapeNet contains 3D models from a
multitude of semantic categories and organizes them under the WordNet taxonomy.
It is a collection of datasets providing many semantic annotations for each 3D
model such as consistent rigid alignments, parts and bilateral symmetry planes,
physical sizes, keywords, as well as other planned annotations. Annotations are
made available through a public web-based interface to enable data
visualization of object attributes, promote data-driven geometric analysis, and
provide a large-scale quantitative benchmark for research in computer graphics
and vision. At the time of this technical report, ShapeNet has indexed more
than 3,000,000 models, 220,000 of which are classified into 3,135
categories (WordNet synsets). In this report we describe the ShapeNet effort as
a whole, provide details for all currently available datasets, and summarize
future plans.
| no_new_dataset | 0.905573 |
1512.03019 | Alexandra Maria Radu | Alexandra Maria Radu | Minimally Supervised Feature Selection for Classification (Master's
Thesis, University Politehnica of Bucharest) | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of the rapidly increasing number of features available
nowadays, we design a robust and fast method for feature selection. The method
tries to select the most representative features that are independent from each
other, but are strong together. We propose an algorithm that requires very
limited labeled data (as few as one labeled frame per class) and can
accommodate as many unlabeled samples. We also present here the supervised
approach from which we started. We compare our two formulations with
established methods like AdaBoost, SVM, Lasso, Elastic Net and FoBa and show
that our method is much faster and it has constant training time. Moreover, the
unsupervised approach outperforms all the methods with which we compared and
the difference might be quite prominent. The supervised approach is in most
cases better than the other methods, especially when the number of training
shots is very limited. All that the algorithm needs is to choose from a pool of
positively correlated features. The methods are evaluated on the
Youtube-Objects dataset of videos and on the MNIST digits dataset, while at
training time we also used features obtained on the CIFAR10 dataset and others
pre-trained on the ImageNet dataset. Thereby, we also show that transfer learning
is useful, even though the datasets differ very much: from low-resolution
centered images from 10 classes, to high-resolution images with objects from
1000 classes occurring in different regions of the images or to very difficult
videos with very high intraclass variance.
| [
{
"version": "v1",
"created": "Wed, 9 Dec 2015 19:49:29 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Radu",
"Alexandra Maria",
""
]
] | TITLE: Minimally Supervised Feature Selection for Classification (Master's
Thesis, University Politehnica of Bucharest)
ABSTRACT: In the context of the rapidly increasing number of features available
nowadays, we design a robust and fast method for feature selection. The method
tries to select the most representative features that are independent from each
other, but are strong together. We propose an algorithm that requires very
limited labeled data (as few as one labeled frame per class) and can
accommodate as many unlabeled samples. We also present here the supervised
approach from which we started. We compare our two formulations with
established methods like AdaBoost, SVM, Lasso, Elastic Net and FoBa and show
that our method is much faster and it has constant training time. Moreover, the
unsupervised approach outperforms all the methods with which we compared and
the difference might be quite prominent. The supervised approach is in most
cases better than the other methods, especially when the number of training
shots is very limited. All that the algorithm needs is to choose from a pool of
positively correlated features. The methods are evaluated on the
Youtube-Objects dataset of videos and on the MNIST digits dataset, while at
training time we also used features obtained on the CIFAR10 dataset and others
pre-trained on the ImageNet dataset. Thereby, we also show that transfer learning
is useful, even though the datasets differ very much: from low-resolution
centered images from 10 classes, to high-resolution images with objects from
1000 classes occurring in different regions of the images or to very difficult
videos with very high intraclass variance.
| no_new_dataset | 0.944228 |
1512.03020 | Hamidreza Chinaei | Hamidreza Chinaei, Mohsen Rais-Ghasem, Frank Rudzicz | Learning measures of semi-additive behaviour | 7 pages, 11 figures, 5 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In business analytics, measure values, such as sales numbers or volumes of
cargo transported, are often summed along values of one or more corresponding
categories, such as time or shipping container. However, not every measure
should be added by default (e.g., one might more typically want a mean over the
heights of a set of people); similarly, some measures should only be summed
within certain constraints (e.g., population measures need not be summed over
years). In systems such as Watson Analytics, the exact additive behaviour of a
measure is often determined by a human expert. In this work, we propose a small
set of features for this issue. We use these features in a case-based reasoning
approach, where the system suggests an aggregation behaviour, with 86% accuracy
in our collected dataset.
| [
{
"version": "v1",
"created": "Wed, 9 Dec 2015 19:52:55 GMT"
}
] | 2015-12-10T00:00:00 | [
[
"Chinaei",
"Hamidreza",
""
],
[
"Rais-Ghasem",
"Mohsen",
""
],
[
"Rudzicz",
"Frank",
""
]
] | TITLE: Learning measures of semi-additive behaviour
ABSTRACT: In business analytics, measure values, such as sales numbers or volumes of
cargo transported, are often summed along values of one or more corresponding
categories, such as time or shipping container. However, not every measure
should be added by default (e.g., one might more typically want a mean over the
heights of a set of people); similarly, some measures should only be summed
within certain constraints (e.g., population measures need not be summed over
years). In systems such as Watson Analytics, the exact additive behaviour of a
measure is often determined by a human expert. In this work, we propose a small
set of features for this issue. We use these features in a case-based reasoning
approach, where the system suggests an aggregation behaviour, with 86% accuracy
in our collected dataset.
| new_dataset | 0.955486 |
1404.1129 | Chengyu Peng | Chengyu Peng, Hong Cheng, Manchor Ko | An Efficient Two-Stage Sparse Representation Method | 21 pages, 2 figures, 4 tables | null | 10.1142/S0218001416510010 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are a large number of methods for solving under-determined linear
inverse problems. Many of them have very high time complexity for large
datasets. We propose a new method called Two-Stage Sparse Representation (TSSR)
to tackle this problem. We decompose the representing space of signals into two
parts, the measurement dictionary and the sparsifying basis. The dictionary is
designed to approximate a sub-Gaussian distribution to exploit its
concentration property. We apply sparse coding to the signals on the dictionary
in the first stage, and obtain the training and testing coefficients
respectively. Then we design the basis to approach an identity matrix in the
second stage, to acquire the Restricted Isometry Property (RIP) and
universality property. The testing coefficients are encoded over the basis and
the final representing coefficients are obtained. We verify that the projection
of testing coefficients onto the basis is a good approximation of the signal
onto the representing space. Since the projection is conducted on a much
sparser space, the runtime is greatly reduced. For concrete realization, we
provide an instance for the proposed TSSR. Experiments on four biometrics
databases show that TSSR is effective and efficient compared with several
classical methods for solving linear inverse problems.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2014 01:57:25 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Jul 2014 14:31:31 GMT"
}
] | 2015-12-09T00:00:00 | [
[
"Peng",
"Chengyu",
""
],
[
"Cheng",
"Hong",
""
],
[
"Ko",
"Manchor",
""
]
] | TITLE: An Efficient Two-Stage Sparse Representation Method
ABSTRACT: There are a large number of methods for solving under-determined linear
inverse problems. Many of them have very high time complexity for large
datasets. We propose a new method called Two-Stage Sparse Representation (TSSR)
to tackle this problem. We decompose the representing space of signals into two
parts, the measurement dictionary and the sparsifying basis. The dictionary is
designed to approximate a sub-Gaussian distribution to exploit its
concentration property. We apply sparse coding to the signals on the dictionary
in the first stage, and obtain the training and testing coefficients
respectively. Then we design the basis to approach an identity matrix in the
second stage, to acquire the Restricted Isometry Property (RIP) and
universality property. The testing coefficients are encoded over the basis and
the final representing coefficients are obtained. We verify that the projection
of testing coefficients onto the basis is a good approximation of the signal
onto the representing space. Since the projection is conducted on a much
sparser space, the runtime is greatly reduced. For concrete realization, we
provide an instance for the proposed TSSR. Experiments on four biometrics
databases show that TSSR is effective and efficient compared with several
classical methods for solving linear inverse problems.
| no_new_dataset | 0.946151 |
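A very rough sketch of the two-stage structure described above: signals are first sparse-coded over a random sub-Gaussian dictionary, and the resulting coefficients are then encoded over a near-identity basis. The dictionary/basis constructions, sparsity level, and OMP solver here are all illustrative assumptions rather than the paper's designs.

```python
# Hedged sketch of a two-stage sparse representation.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.RandomState(0)
n, m = 64, 256
D = rng.randn(n, m) / np.sqrt(n)      # stage 1: random (sub-Gaussian) dictionary
y = D[:, [3, 40, 100]] @ np.array([1.0, -0.5, 2.0])   # toy sparse signal

c = orthogonal_mp(D, y, n_nonzero_coefs=5)   # stage-1 coefficients
B = np.eye(m) + 0.01 * rng.randn(m, m)       # stage 2: near-identity basis
z = orthogonal_mp(B, c, n_nonzero_coefs=5)   # final representation over basis
print(np.flatnonzero(z))                     # support recovered in stage 2
```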
1410.2167 | Luigi Nardi | Luigi Nardi, Bruno Bodin, M. Zeeshan Zia, John Mawer, Andy Nisbet,
Paul H. J. Kelly, Andrew J. Davison, Mikel Luj\'an, Michael F. P. O'Boyle,
Graham Riley, Nigel Topham, Steve Furber | Introducing SLAMBench, a performance and accuracy benchmarking
methodology for SLAM | 8 pages, ICRA 2015 conference paper | http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7140009
IEEE Xplore 2015 | 10.1109/ICRA.2015.7140009 | null | cs.RO cs.CV cs.DC cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time dense computer vision and SLAM offer great potential for a new
level of scene modelling, tracking and real environmental interaction for many
types of robot, but their high computational requirements mean that use on mass
market embedded platforms is challenging. Meanwhile, trends in low-cost,
low-power processing are towards massive parallelism and heterogeneity, making
it difficult for robotics and vision researchers to implement their algorithms
in a performance-portable way. In this paper we introduce SLAMBench, a
publicly-available software framework which represents a starting point for
quantitative, comparable and validatable experimental research to investigate
trade-offs in performance, accuracy and energy consumption of a dense RGB-D
SLAM system. SLAMBench provides a KinectFusion implementation in C++, OpenMP,
OpenCL and CUDA, and harnesses the ICL-NUIM dataset of synthetic RGB-D
sequences with trajectory and scene ground truth for reliable accuracy
comparison of different implementations and algorithms. We present an analysis
and breakdown of the constituent algorithmic elements of KinectFusion, and
experimentally investigate their execution time on a variety of multicore and
GPU-accelerated platforms. For a popular embedded platform, we also present an
analysis of energy efficiency for different configuration alternatives.
| [
{
"version": "v1",
"created": "Wed, 8 Oct 2014 15:34:43 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Feb 2015 16:28:27 GMT"
}
] | 2015-12-09T00:00:00 | [
[
"Nardi",
"Luigi",
""
],
[
"Bodin",
"Bruno",
""
],
[
"Zia",
"M. Zeeshan",
""
],
[
"Mawer",
"John",
""
],
[
"Nisbet",
"Andy",
""
],
[
"Kelly",
"Paul H. J.",
""
],
[
"Davison",
"Andrew J.",
""
],
[
"Luján",
"Mikel",
""
],
[
"O'Boyle",
"Michael F. P.",
""
],
[
"Riley",
"Graham",
""
],
[
"Topham",
"Nigel",
""
],
[
"Furber",
"Steve",
""
]
] | TITLE: Introducing SLAMBench, a performance and accuracy benchmarking
methodology for SLAM
ABSTRACT: Real-time dense computer vision and SLAM offer great potential for a new
level of scene modelling, tracking and real environmental interaction for many
types of robot, but their high computational requirements mean that use on mass
market embedded platforms is challenging. Meanwhile, trends in low-cost,
low-power processing are towards massive parallelism and heterogeneity, making
it difficult for robotics and vision researchers to implement their algorithms
in a performance-portable way. In this paper we introduce SLAMBench, a
publicly-available software framework which represents a starting point for
quantitative, comparable and validatable experimental research to investigate
trade-offs in performance, accuracy and energy consumption of a dense RGB-D
SLAM system. SLAMBench provides a KinectFusion implementation in C++, OpenMP,
OpenCL and CUDA, and harnesses the ICL-NUIM dataset of synthetic RGB-D
sequences with trajectory and scene ground truth for reliable accuracy
comparison of different implementations and algorithms. We present an analysis
and breakdown of the constituent algorithmic elements of KinectFusion, and
experimentally investigate their execution time on a variety of multicore and
GPU-accelerated platforms. For a popular embedded platform, we also present an
analysis of energy efficiency for different configuration alternatives.
| no_new_dataset | 0.918114 |
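SLAMBench-style accuracy evaluation compares an estimated camera trajectory against the ICL-NUIM ground truth. A minimal sketch of one standard metric, absolute trajectory error (ATE) RMSE, assuming the two trajectories are already expressed in a common frame (a full harness would align them first, e.g., with Horn's method):

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    # Both inputs: (T, 3) arrays of per-frame camera positions.
    diff = estimated - ground_truth
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# toy usage with a synthetic trajectory
gt = np.cumsum(np.random.randn(100, 3) * 0.01, axis=0)
est = gt + np.random.randn(100, 3) * 0.005
print("ATE RMSE (m):", ate_rmse(est, gt))
```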
1506.03140 | Keenon Werling | Keenon Werling, Arun Chaganty, Percy Liang, Chris Manning | On-the-Job Learning with Bayesian Decision Theory | As appearing in NIPS 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our goal is to deploy a high-accuracy system starting with zero training
examples. We consider an "on-the-job" setting, where as inputs arrive, we use
real-time crowdsourcing to resolve uncertainty where needed and output our
prediction when confident. As the model improves over time, the reliance on
crowdsourcing queries decreases. We cast our setting as a stochastic game based
on Bayesian decision theory, which allows us to balance latency, cost, and
accuracy objectives in a principled way. Computing the optimal policy is
intractable, so we develop an approximation based on Monte Carlo Tree Search.
We tested our approach on three datasets---named-entity recognition, sentiment
classification, and image classification. On the NER task we obtained more than
an order of magnitude reduction in cost compared to full human annotation,
while boosting performance relative to the expert-provided labels. We also
achieve an 8% F1 improvement over having a single human label the whole set, and
a 28% F1 improvement over online learning.
| [
{
"version": "v1",
"created": "Wed, 10 Jun 2015 00:40:34 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Dec 2015 21:44:07 GMT"
}
] | 2015-12-09T00:00:00 | [
[
"Werling",
"Keenon",
""
],
[
"Chaganty",
"Arun",
""
],
[
"Liang",
"Percy",
""
],
[
"Manning",
"Chris",
""
]
] | TITLE: On-the-Job Learning with Bayesian Decision Theory
ABSTRACT: Our goal is to deploy a high-accuracy system starting with zero training
examples. We consider an "on-the-job" setting, where as inputs arrive, we use
real-time crowdsourcing to resolve uncertainty where needed and output our
prediction when confident. As the model improves over time, the reliance on
crowdsourcing queries decreases. We cast our setting as a stochastic game based
on Bayesian decision theory, which allows us to balance latency, cost, and
accuracy objectives in a principled way. Computing the optimal policy is
intractable, so we develop an approximation based on Monte Carlo Tree Search.
We tested our approach on three datasets---named-entity recognition, sentiment
classification, and image classification. On the NER task we obtained more than
an order of magnitude reduction in cost compared to full human annotation,
while boosting performance relative to the expert-provided labels. We also
achieve an 8% F1 improvement over having a single human label the whole set, and
a 28% F1 improvement over online learning.
| no_new_dataset | 0.954265 |
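A toy sketch of the decision-theoretic trade-off in the record above: compare the expected utility of emitting a prediction now against paying for one more crowd query. The weights and the expected-gain term are illustrative placeholders rather than the paper's utility function, and the actual system searches over multi-step policies with Monte Carlo Tree Search instead of this one-step rule.

```python
def choose_action(p_correct, query_cost, expected_latency,
                  w_cost=0.1, w_latency=0.05, expected_gain=0.2):
    # p_correct: current confidence; expected_gain: assumed accuracy boost
    # from one more crowd label. All parameter names are hypothetical.
    u_predict = p_correct
    u_query = (min(1.0, p_correct + expected_gain)
               - w_cost * query_cost - w_latency * expected_latency)
    return "query" if u_query > u_predict else "predict"

print(choose_action(p_correct=0.6, query_cost=1.0, expected_latency=2.0))
print(choose_action(p_correct=0.95, query_cost=1.0, expected_latency=2.0))
```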
1507.05881 | Eusebio Vargas-Estrada | Ernesto Estrada, Eusebio Vargas-Estrada, Hiroyasu Ando | Communicability Angles Reveal Critical Edges for Network Consensus
Dynamics | 15 pages, 2 figures | null | 10.1103/PhysRevE.92.052809 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the question of determining how the topological structure
influences a consensus dynamical process taking place on a network. By
considering a large dataset of real-world networks we first determine that the
removal of edges according to their communicability angle -an angle between
position vectors of the nodes in a Euclidean communicability space- increases
the average time of consensus by a factor of 5.68 in real-world networks. The
edge betweenness centrality also identifies -in a smaller proportion- those
critical edges for the consensus dynamics, i.e., their removal increases the time
of consensus by a factor of 3.70. We justify theoretically these findings on
the basis of the role played by the algebraic connectivity and the
isoperimetric number of networks on the dynamical process studied, and their
connections with the properties mentioned before. Finally, we study the role
played by global topological parameters of networks on the consensus dynamics.
We determine that the network density and the average distance-sum -an
analogue of the node degree for shortest-path distances- account for more than
80% of the variance of the average time of consensus in the real-world networks
studied.
| [
{
"version": "v1",
"created": "Tue, 21 Jul 2015 15:56:22 GMT"
}
] | 2015-12-09T00:00:00 | [
[
"Estrada",
"Ernesto",
""
],
[
"Vargas-Estrada",
"Eusebio",
""
],
[
"Ando",
"Hiroyasu",
""
]
] | TITLE: Communicability Angles Reveal Critical Edges for Network Consensus
Dynamics
ABSTRACT: We consider the question of determining how the topological structure
influences a consensus dynamical process taking place on a network. By
considering a large dataset of real-world networks we first determine that the
removal of edges according to their communicability angle -an angle between
position vectors of the nodes in a Euclidean communicability space- increases
the average time of consensus by a factor of 5.68 in real-world networks. The
edge betweenness centrality also identifies -in a smaller proportion- those
critical edges for the consensus dynamics, i.e., their removal increases the time
of consensus by a factor of 3.70. We justify theoretically these findings on
the basis of the role played by the algebraic connectivity and the
isoperimetric number of networks on the dynamical process studied, and their
connections with the properties mentioned before. Finally, we study the role
played by global topological parameters of networks on the consensus dynamics.
We determine that the network density and the average distance-sum -an
analogue of the node degree for shortest-path distances- account for more than
80% of the variance of the average time of consensus in the real-world networks
studied.
| no_new_dataset | 0.950732 |
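For concreteness, a small sketch of the quantity driving the edge ranking above, following the standard definition from the communicability literature (the abstract itself does not spell out the formula, so treat it as an assumption): with communicability matrix G = exp(A), the angle between nodes p and q is arccos(G_pq / sqrt(G_pp G_qq)).

```python
import numpy as np
from scipy.linalg import expm

def communicability_angle(A, p, q):
    # G = exp(A); edges subtending large angles are the removal candidates
    # that slow down consensus the most.
    G = expm(np.asarray(A, dtype=float))
    c = G[p, q] / np.sqrt(G[p, p] * G[q, q])
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])        # 4-cycle
print(communicability_angle(A, 0, 2))
```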
1509.01383 | Oleguer Sagarra | Oleguer Sagarra, Conrad J. P\'erez Vicente and Albert D\'iaz-Guilera | The role of adjacency matrix degeneration in maximum entropy weighted
network models | Main doc and Supplementary Material To be published in PRE | null | 10.1103/PhysRevE.92.052816 | null | physics.soc-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex network null models based on entropy maximization are becoming a
powerful tool to characterize and analyze data from real systems. However, it
is not easy to extract good and unbiased information from these models: A
proper understanding of the nature of the underlying events represented in them
is crucial. In this paper we emphasize this fact stressing how an accurate
counting of configurations compatible with given constraints is fundamental to
build good null models for the case of networks with integer valued adjacency
matrices constructed from aggregation of one or multiple layers. We show how
different assumptions about the elements from which the networks are built give
rise to distinctively different statistics, even when considering the same
observables to match those of real data. We illustrate our findings by applying
the formalism to three datasets using an open-source software package
accompanying the present work and demonstrate how such differences are clearly
seen when measuring network observables.
| [
{
"version": "v1",
"created": "Fri, 4 Sep 2015 09:45:24 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Nov 2015 18:03:27 GMT"
}
] | 2015-12-09T00:00:00 | [
[
"Sagarra",
"Oleguer",
""
],
[
"Vicente",
"Conrad J. Pérez",
""
],
[
"Díaz-Guilera",
"Albert",
""
]
] | TITLE: The role of adjacency matrix degeneration in maximum entropy weighted
network models
ABSTRACT: Complex network null models based on entropy maximization are becoming a
powerful tool to characterize and analyze data from real systems. However, it
is not easy to extract good and unbiased information from these models: A
proper understanding of the nature of the underlying events represented in them
is crucial. In this paper we emphasize this fact stressing how an accurate
counting of configurations compatible with given constraints is fundamental to
build good null models for the case of networks with integer valued adjacency
matrices constructed from aggregation of one or multiple layers. We show how
different assumptions about the elements from which the networks are built give
rise to distinctively different statistics, even when considering the same
observables to match those of real data. We illustrate our findings by applying
the formalism to three datasets using an open-source software package
accompanying the present work and demonstrate how such differences are clearly
seen when measuring network observables.
| no_new_dataset | 0.948058 |
1512.02311 | Michael Maire | Takuya Narihira, Michael Maire, Stella X. Yu | Direct Intrinsics: Learning Albedo-Shading Decomposition by
Convolutional Regression | International Conference on Computer Vision (ICCV), 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new approach to intrinsic image decomposition, the task of
decomposing a single image into albedo and shading components. Our strategy,
which we term direct intrinsics, is to learn a convolutional neural network
(CNN) that directly predicts output albedo and shading channels from an input
RGB image patch. Direct intrinsics is a departure from classical techniques for
intrinsic image decomposition, which typically rely on physically-motivated
priors and graph-based inference algorithms.
The large-scale synthetic ground-truth of the MPI Sintel dataset plays a key
role in training direct intrinsics. We demonstrate results on both the
synthetic images of Sintel and the real images of the classic MIT intrinsic
image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms
all prior work, including methods that rely on RGB+Depth input. Direct
intrinsics also generalizes across modalities; it produces quite reasonable
decompositions on the real images of the MIT dataset. Our results indicate that
the marriage of CNNs with synthetic training data may be a powerful new
technique for tackling classic problems in computer vision.
| [
{
"version": "v1",
"created": "Tue, 8 Dec 2015 03:38:52 GMT"
}
] | 2015-12-09T00:00:00 | [
[
"Narihira",
"Takuya",
""
],
[
"Maire",
"Michael",
""
],
[
"Yu",
"Stella X.",
""
]
] | TITLE: Direct Intrinsics: Learning Albedo-Shading Decomposition by
Convolutional Regression
ABSTRACT: We introduce a new approach to intrinsic image decomposition, the task of
decomposing a single image into albedo and shading components. Our strategy,
which we term direct intrinsics, is to learn a convolutional neural network
(CNN) that directly predicts output albedo and shading channels from an input
RGB image patch. Direct intrinsics is a departure from classical techniques for
intrinsic image decomposition, which typically rely on physically-motivated
priors and graph-based inference algorithms.
The large-scale synthetic ground-truth of the MPI Sintel dataset plays a key
role in training direct intrinsics. We demonstrate results on both the
synthetic images of Sintel and the real images of the classic MIT intrinsic
image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms
all prior work, including methods that rely on RGB+Depth input. Direct
intrinsics also generalizes across modalities; it produces quite reasonable
decompositions on the real images of the MIT dataset. Our results indicate that
the marriage of CNNs with synthetic training data may be a powerful new
technique for tackling classic problems in computer vision.
| no_new_dataset | 0.946794 |
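A toy stand-in for the direct-intrinsics setup above: one network regresses both outputs from an RGB patch. The layer sizes and the 3+1 channel split are assumptions, not the paper's architecture, and the I = A * S consistency term is only one plausible training signal.

```python
import torch
import torch.nn as nn

class DirectIntrinsicsSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 3, padding=1),   # 3 albedo + 1 shading channels (assumed)
        )

    def forward(self, rgb):
        out = self.body(rgb)
        return out[:, :3], out[:, 3:]         # albedo, shading

net = DirectIntrinsicsSketch()
rgb = torch.rand(2, 3, 64, 64)
albedo, shading = net(rgb)
loss = nn.functional.mse_loss(albedo * shading, rgb)  # intrinsic model: I = A * S
print(albedo.shape, shading.shape, float(loss))
```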
1512.02326 | Dequan Wang | Jie Shao, Dequan Wang, Xiangyang Xue, Zheng Zhang | Learning to Point and Count | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes the problem of point-and-count as a test case to break
the what-and-where deadlock. Different from the traditional detection problem,
the goal is to discover key salient points as a way to localize and count the
number of objects simultaneously. We propose two alternatives, one that counts
first and then points, and another that works the other way around.
Fundamentally, they pivot around whether we solve "what" or "where" first. We
evaluate their performance on a dataset that contains multiple instances of the
same class, demonstrating their potential and synergies. The experiments
yield a few important insights that explain why this is a much harder problem
than classification, including strong data bias and the inability to deal with
object scales robustly in state-of-the-art convolutional neural networks.
| [
{
"version": "v1",
"created": "Tue, 8 Dec 2015 04:48:52 GMT"
}
] | 2015-12-09T00:00:00 | [
[
"Shao",
"Jie",
""
],
[
"Wang",
"Dequan",
""
],
[
"Xue",
"Xiangyang",
""
],
[
"Zhang",
"Zheng",
""
]
] | TITLE: Learning to Point and Count
ABSTRACT: This paper proposes the problem of point-and-count as a test case to break
the what-and-where deadlock. Different from the traditional detection problem,
the goal is to discover key salient points as a way to localize and count the
number of objects simultaneously. We propose two alternatives, one that counts
first and then points, and another that works the other way around.
Fundamentally, they pivot around whether we solve "what" or "where" first. We
evaluate their performance on a dataset that contains multiple instances of the
same class, demonstrating their potential and synergies. The experiments
yield a few important insights that explain why this is a much harder problem
than classification, including strong data bias and the inability to deal with
object scales robustly in state-of-the-art convolutional neural networks.
| no_new_dataset | 0.950732 |
1512.02595 | Awni Hannun | Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared
Casper, Bryan Catanzaro, Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg
Diamos, Erich Elsen, Jesse Engel, Linxi Fan, Christopher Fougner, Tony Han,
Awni Hannun, Billy Jun, Patrick LeGresley, Libby Lin, Sharan Narang, Andrew
Ng, Sherjil Ozair, Ryan Prenger, Jonathan Raiman, Sanjeev Satheesh, David
Seetapun, Shubho Sengupta, Yi Wang, Zhiqian Wang, Chong Wang, Bo Xiao, Dani
Yogatama, Jun Zhan, Zhenyao Zhu | Deep Speech 2: End-to-End Speech Recognition in English and Mandarin | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that an end-to-end deep learning approach can be used to recognize
either English or Mandarin Chinese speech--two vastly different languages.
Because it replaces entire pipelines of hand-engineered components with neural
networks, end-to-end learning allows us to handle a diverse variety of speech
including noisy environments, accents and different languages. Key to our
approach is our application of HPC techniques, resulting in a 7x speedup over
our previous system. Because of this efficiency, experiments that previously
took weeks now run in days. This enables us to iterate more quickly to identify
superior architectures and algorithms. As a result, in several cases, our
system is competitive with the transcription of human workers when benchmarked
on standard datasets. Finally, using a technique called Batch Dispatch with
GPUs in the data center, we show that our system can be inexpensively deployed
in an online setting, delivering low latency when serving users at scale.
| [
{
"version": "v1",
"created": "Tue, 8 Dec 2015 19:13:50 GMT"
}
] | 2015-12-09T00:00:00 | [
[
"Amodei",
"Dario",
""
],
[
"Anubhai",
"Rishita",
""
],
[
"Battenberg",
"Eric",
""
],
[
"Case",
"Carl",
""
],
[
"Casper",
"Jared",
""
],
[
"Catanzaro",
"Bryan",
""
],
[
"Chen",
"Jingdong",
""
],
[
"Chrzanowski",
"Mike",
""
],
[
"Coates",
"Adam",
""
],
[
"Diamos",
"Greg",
""
],
[
"Elsen",
"Erich",
""
],
[
"Engel",
"Jesse",
""
],
[
"Fan",
"Linxi",
""
],
[
"Fougner",
"Christopher",
""
],
[
"Han",
"Tony",
""
],
[
"Hannun",
"Awni",
""
],
[
"Jun",
"Billy",
""
],
[
"LeGresley",
"Patrick",
""
],
[
"Lin",
"Libby",
""
],
[
"Narang",
"Sharan",
""
],
[
"Ng",
"Andrew",
""
],
[
"Ozair",
"Sherjil",
""
],
[
"Prenger",
"Ryan",
""
],
[
"Raiman",
"Jonathan",
""
],
[
"Satheesh",
"Sanjeev",
""
],
[
"Seetapun",
"David",
""
],
[
"Sengupta",
"Shubho",
""
],
[
"Wang",
"Yi",
""
],
[
"Wang",
"Zhiqian",
""
],
[
"Wang",
"Chong",
""
],
[
"Xiao",
"Bo",
""
],
[
"Yogatama",
"Dani",
""
],
[
"Zhan",
"Jun",
""
],
[
"Zhu",
"Zhenyao",
""
]
] | TITLE: Deep Speech 2: End-to-End Speech Recognition in English and Mandarin
ABSTRACT: We show that an end-to-end deep learning approach can be used to recognize
either English or Mandarin Chinese speech--two vastly different languages.
Because it replaces entire pipelines of hand-engineered components with neural
networks, end-to-end learning allows us to handle a diverse variety of speech
including noisy environments, accents and different languages. Key to our
approach is our application of HPC techniques, resulting in a 7x speedup over
our previous system. Because of this efficiency, experiments that previously
took weeks now run in days. This enables us to iterate more quickly to identify
superior architectures and algorithms. As a result, in several cases, our
system is competitive with the transcription of human workers when benchmarked
on standard datasets. Finally, using a technique called Batch Dispatch with
GPUs in the data center, we show that our system can be inexpensively deployed
in an online setting, delivering low latency when serving users at scale.
| no_new_dataset | 0.942876 |
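Deep Speech systems are trained end to end with the CTC loss (per the original Deep Speech line of work; the abstract above does not name the loss, so treat the choice as an assumption). A minimal PyTorch sketch of one CTC training step on dummy acoustic-model outputs:

```python
import torch
import torch.nn as nn

T, N, C = 50, 4, 29                         # time steps, batch, characters (+ blank)
logits = torch.randn(T, N, C, requires_grad=True)          # stand-in for RNN outputs
log_probs = logits.log_softmax(dim=2)
targets = torch.randint(1, C, (N, 10), dtype=torch.long)   # character ids, 0 = blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()                             # gradients flow into the acoustic model
print(float(loss))
```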
1305.4886 | Christopher Paciorek | Christopher J. Paciorek, Benjamin Lipshitz, Wei Zhuo, Prabhat, Cari G.
Kaufman, Rollin C. Thomas | Parallelizing Gaussian Process Calculations in R | 21 pages, 8 figures | Journal of Statistical Software 2015, Vol. 63, Number 10, 1-23 | 10.18637/jss.v063.i10 | null | stat.CO cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider parallel computation for Gaussian process calculations to
overcome computational and memory constraints on the size of datasets that can
be analyzed. Using a hybrid parallelization approach that uses both threading
(shared memory) and message-passing (distributed memory), we implement the core
linear algebra operations used in spatial statistics and Gaussian process
regression in an R package called bigGP that relies on C and MPI. The approach
divides the matrix into blocks such that the computational load is balanced
across processes while communication between processes is limited. The package
provides an API enabling R programmers to implement Gaussian process-based
methods by using the distributed linear algebra operations without any C or MPI
coding. We illustrate the approach and software by analyzing an astrophysics
dataset with n=67,275 observations.
| [
{
"version": "v1",
"created": "Tue, 21 May 2013 17:08:54 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Paciorek",
"Christopher J.",
""
],
[
"Lipshitz",
"Benjamin",
""
],
[
"Zhuo",
"Wei",
""
],
[
"Prabhat",
"",
""
],
[
"Kaufman",
"Cari G.",
""
],
[
"Thomas",
"Rollin C.",
""
]
] | TITLE: Parallelizing Gaussian Process Calculations in R
ABSTRACT: We consider parallel computation for Gaussian process calculations to
overcome computational and memory constraints on the size of datasets that can
be analyzed. Using a hybrid parallelization approach that uses both threading
(shared memory) and message-passing (distributed memory), we implement the core
linear algebra operations used in spatial statistics and Gaussian process
regression in an R package called bigGP that relies on C and MPI. The approach
divides the matrix into blocks such that the computational load is balanced
across processes while communication between processes is limited. The package
provides an API enabling R programmers to implement Gaussian process-based
methods by using the distributed linear algebra operations without any C or MPI
coding. We illustrate the approach and software by analyzing an astrophysics
dataset with n=67,275 observations.
| no_new_dataset | 0.942295 |
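The "core linear algebra operations" that bigGP distributes reduce, for GP regression, to a Cholesky factorization of the kernel matrix plus triangular solves. A serial numpy/scipy sketch of that unit of work (bigGP performs the same steps block-wise across MPI processes; the squared-exponential kernel here is just one common choice):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_posterior_mean(X, y, Xs, lengthscale=1.0, noise=1e-2):
    def k(a, b):                             # squared-exponential kernel
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    K = k(X, X) + noise * np.eye(len(X))
    L = cho_factor(K, lower=True)            # O(n^3) step that dominates at scale
    alpha = cho_solve(L, y)                  # K^{-1} y via two triangular solves
    return k(Xs, X) @ alpha

X = np.linspace(0, 1, 50)[:, None]
y = np.sin(6 * X[:, 0])
print(gp_posterior_mean(X, y, np.array([[0.5]])))
```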
1412.0985 | Joakim And\'en | Joakim And\'en and Eugene Katsevich and Amit Singer | Covariance estimation using conjugate gradient for 3D classification in
Cryo-EM | null | null | 10.1109/ISBI.2015.7163849 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classifying structural variability in noisy projections of biological
macromolecules is a central problem in Cryo-EM. In this work, we build on a
previous method for estimating the covariance matrix of the three-dimensional
structure present in the molecules being imaged. Our proposed method allows for
incorporation of contrast transfer function and non-uniform distribution of
viewing angles, making it more suitable for real-world data. We evaluate its
performance on a synthetic dataset and an experimental dataset obtained by
imaging a 70S ribosome complex.
| [
{
"version": "v1",
"created": "Tue, 2 Dec 2014 17:18:13 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Dec 2014 14:53:01 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Feb 2015 17:51:12 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Andén",
"Joakim",
""
],
[
"Katsevich",
"Eugene",
""
],
[
"Singer",
"Amit",
""
]
] | TITLE: Covariance estimation using conjugate gradient for 3D classification in
Cryo-EM
ABSTRACT: Classifying structural variability in noisy projections of biological
macromolecules is a central problem in Cryo-EM. In this work, we build on a
previous method for estimating the covariance matrix of the three-dimensional
structure present in the molecules being imaged. Our proposed method allows for
incorporation of contrast transfer function and non-uniform distribution of
viewing angles, making it more suitable for real-world data. We evaluate its
performance on a synthetic dataset and an experimental dataset obtained by
imaging a 70S ribosome complex.
| no_new_dataset | 0.904482 |
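The conjugate-gradient pattern from the record above, sketched matrix-free with scipy. The regularized normal-equations system here is an illustrative stand-in; the paper's covariance system is far larger and incorporates the contrast transfer function and viewing-angle distribution.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

m, n, lam = 200, 100, 1e-3
rng = np.random.default_rng(0)
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

# CG only needs matrix-vector products, so (A^T A + lam I) is never formed.
op = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + lam * v)
x, info = cg(op, A.T @ b)
residual = np.linalg.norm(A.T @ (A @ x) + lam * x - A.T @ b)
print("cg info:", info, "residual:", residual)
```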
1504.08289 | Marcel Simon | Marcel Simon and Erik Rodner | Neural Activation Constellations: Unsupervised Part Model Discovery with
Convolutional Networks | Published at IEEE International Conference on Computer Vision (ICCV)
2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Part models of object categories are essential for challenging recognition
tasks, where differences in categories are subtle and only reflected in
appearances of small parts of the object. We present an approach that is able
to learn part models in a completely unsupervised manner, without part
annotations and even without given bounding boxes during learning. The key idea
is to find constellations of neural activation patterns computed using
convolutional neural networks. In our experiments, we outperform existing
approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford
PETS, and Oxford Flowers datasets when no part or bounding box annotations
are available and achieve state-of-the-art performance for the Stanford Dog
dataset. We also show the benefits of neural constellation models as a data
augmentation technique for fine-tuning. Furthermore, our paper unites the areas
of generic and fine-grained classification, since our approach is suitable for
both scenarios. The source code of our method is available online at
http://www.inf-cv.uni-jena.de/part_discovery
| [
{
"version": "v1",
"created": "Thu, 30 Apr 2015 16:06:50 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Nov 2015 11:43:03 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Dec 2015 15:53:09 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Simon",
"Marcel",
""
],
[
"Rodner",
"Erik",
""
]
] | TITLE: Neural Activation Constellations: Unsupervised Part Model Discovery with
Convolutional Networks
ABSTRACT: Part models of object categories are essential for challenging recognition
tasks, where differences in categories are subtle and only reflected in
appearances of small parts of the object. We present an approach that is able
to learn part models in a completely unsupervised manner, without part
annotations and even without given bounding boxes during learning. The key idea
is to find constellations of neural activation patterns computed using
convolutional neural networks. In our experiments, we outperform existing
approaches for fine-grained recognition on the CUB200-2011, NA birds, Oxford
PETS, and Oxford Flowers datasets when no part or bounding box annotations
are available and achieve state-of-the-art performance for the Stanford Dog
dataset. We also show the benefits of neural constellation models as a data
augmentation technique for fine-tuning. Furthermore, our paper unites the areas
of generic and fine-grained classification, since our approach is suitable for
both scenarios. The source code of our method is available online at
http://www.inf-cv.uni-jena.de/part_discovery
| no_new_dataset | 0.94868 |
1506.05173 | Saurabh Paul | Saurabh Paul, Petros Drineas | Feature Selection for Ridge Regression with Provable Guarantees | To appear in Neural Computation. A shorter version of this paper
appeared at ECML-PKDD 2014 under the title "Deterministic Feature Selection
for Regularized Least Squares Classification." | null | null | null | stat.ML cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce single-set spectral sparsification as a deterministic sampling
based feature selection technique for regularized least squares classification,
which is the classification analogue to ridge regression. The method is
unsupervised and gives worst-case guarantees of the generalization power of the
classification function after feature selection with respect to the
classification function obtained using all features. We also introduce
leverage-score sampling as an unsupervised randomized feature selection method
for ridge regression. We provide risk bounds for both single-set spectral
sparsification and leverage-score sampling on ridge regression in the fixed
design setting and show that the risk in the sampled space is comparable to the
risk in the full-feature space. We perform experiments on synthetic and
real-world datasets, namely a subset of TechTC-300 datasets, to support our
theory. Experimental results indicate that the proposed methods perform better
than the existing feature selection methods.
| [
{
"version": "v1",
"created": "Wed, 17 Jun 2015 00:05:04 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Dec 2015 18:27:38 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Paul",
"Saurabh",
""
],
[
"Drineas",
"Petros",
""
]
] | TITLE: Feature Selection for Ridge Regression with Provable Guarantees
ABSTRACT: We introduce single-set spectral sparsification as a deterministic sampling
based feature selection technique for regularized least squares classification,
which is the classification analogue to ridge regression. The method is
unsupervised and gives worst-case guarantees of the generalization power of the
classification function after feature selection with respect to the
classification function obtained using all features. We also introduce
leverage-score sampling as an unsupervised randomized feature selection method
for ridge regression. We provide risk bounds for both single-set spectral
sparsification and leverage-score sampling on ridge regression in the fixed
design setting and show that the risk in the sampled space is comparable to the
risk in the full-feature space. We perform experiments on synthetic and
real-world datasets, namely a subset of TechTC-300 datasets, to support our
theory. Experimental results indicate that the proposed methods perform better
than the existing feature selection methods.
| no_new_dataset | 0.947137 |
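A sketch of the leverage-score half of the record above: score each feature by the squared norm of its column of V from the SVD, then sample features with probability proportional to the scores. Sampling without replacement and without rescaling is a simplification of the usual randomized scheme.

```python
import numpy as np

def leverage_score_sample(X, r, seed=0):
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = (Vt ** 2).sum(axis=0)           # leverage score of each feature
    p = scores / scores.sum()
    idx = rng.choice(X.shape[1], size=r, replace=False, p=p)
    return X[:, idx], idx

X = np.random.randn(100, 500)
X_small, kept = leverage_score_sample(X, r=50)
print(X_small.shape)
```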
1508.00106 | Ira Leviant | Ira Leviant, Roi Reichart | Separated by an Un-common Language: Towards Judgment Language Informed
Vector Space Modeling | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common evaluation practice in the vector space models (VSMs) literature is
to measure the models' ability to predict human judgments about lexical
semantic relations between word pairs. Most existing evaluation sets, however,
consist of scores collected for English word pairs only, ignoring the potential
impact of the judgment language in which word pairs are presented on the human
scores. In this paper we translate two prominent evaluation sets, wordsim353
(association) and SimLex999 (similarity), from English to Italian, German and
Russian and collect scores for each dataset from crowdworkers fluent in its
language. Our analysis reveals that human judgments are strongly impacted by
the judgment language. Moreover, we show that the predictions of monolingual
VSMs do not necessarily best correlate with human judgments made with the
language used for model training, suggesting that models and humans are
affected differently by the language they use when making semantic judgments.
Finally, we show that in a large number of setups, multilingual VSM combination
results in improved correlations with human judgments, suggesting that
multilingualism may partially compensate for the judgment language effect on
human judgments.
| [
{
"version": "v1",
"created": "Sat, 1 Aug 2015 10:24:27 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Aug 2015 09:48:38 GMT"
},
{
"version": "v3",
"created": "Wed, 11 Nov 2015 19:31:42 GMT"
},
{
"version": "v4",
"created": "Sun, 29 Nov 2015 20:12:13 GMT"
},
{
"version": "v5",
"created": "Sun, 6 Dec 2015 09:58:17 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Leviant",
"Ira",
""
],
[
"Reichart",
"Roi",
""
]
] | TITLE: Separated by an Un-common Language: Towards Judgment Language Informed
Vector Space Modeling
ABSTRACT: A common evaluation practice in the vector space models (VSMs) literature is
to measure the models' ability to predict human judgments about lexical
semantic relations between word pairs. Most existing evaluation sets, however,
consist of scores collected for English word pairs only, ignoring the potential
impact of the judgment language in which word pairs are presented on the human
scores. In this paper we translate two prominent evaluation sets, wordsim353
(association) and SimLex999 (similarity), from English to Italian, German and
Russian and collect scores for each dataset from crowdworkers fluent in its
language. Our analysis reveals that human judgments are strongly impacted by
the judgment language. Moreover, we show that the predictions of monolingual
VSMs do not necessarily best correlate with human judgments made with the
language used for model training, suggesting that models and humans are
affected differently by the language they use when making semantic judgments.
Finally, we show that in a large number of setups, multilingual VSM combination
results in improved correlations with human judgments, suggesting that
multilingualism may partially compensate for the judgment language effect on
human judgments.
| no_new_dataset | 0.948632 |
1510.04935 | Maximilian Nickel | Maximilian Nickel, Lorenzo Rosasco, Tomaso Poggio | Holographic Embeddings of Knowledge Graphs | To appear in AAAI-16 | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning embeddings of entities and relations is an efficient and versatile
method to perform machine learning on relational data such as knowledge graphs.
In this work, we propose holographic embeddings (HolE) to learn compositional
vector space representations of entire knowledge graphs. The proposed method is
related to holographic models of associative memory in that it employs circular
correlation to create compositional representations. By using correlation as
the compositional operator HolE can capture rich interactions but
simultaneously remains efficient to compute, easy to train, and scalable to
very large datasets. In extensive experiments we show that holographic
embeddings are able to outperform state-of-the-art methods for link prediction
in knowledge graphs and relational learning benchmark datasets.
| [
{
"version": "v1",
"created": "Fri, 16 Oct 2015 16:29:07 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Dec 2015 18:05:52 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Nickel",
"Maximilian",
""
],
[
"Rosasco",
"Lorenzo",
""
],
[
"Poggio",
"Tomaso",
""
]
] | TITLE: Holographic Embeddings of Knowledge Graphs
ABSTRACT: Learning embeddings of entities and relations is an efficient and versatile
method to perform machine learning on relational data such as knowledge graphs.
In this work, we propose holographic embeddings (HolE) to learn compositional
vector space representations of entire knowledge graphs. The proposed method is
related to holographic models of associative memory in that it employs circular
correlation to create compositional representations. By using correlation as
the compositional operator HolE can capture rich interactions but
simultaneously remains efficient to compute, easy to train, and scalable to
very large datasets. In extensive experiments we show that holographic
embeddings are able to outperform state-of-the-art methods for link prediction
in knowledge graphs and relational learning benchmark datasets.
| no_new_dataset | 0.94801 |
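The compositional operator in the HolE record above, circular correlation, costs O(d log d) via the FFT identity a ⋆ b = ifft(conj(fft(a)) * fft(b)); the triple score is then sigma(r^T (s ⋆ o)). A small numpy sketch:

```python
import numpy as np

def circular_correlation(a, b):
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

def hole_score(r, s, o):
    # plausibility of triple (subject s, relation r, object o)
    x = r @ circular_correlation(s, o)
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
s, r, o = (rng.normal(size=8) for _ in range(3))   # toy 8-d embeddings
print(hole_score(r, s, o))
```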
1511.02911 | Tal Remez | Tal Remez and Shai Avidan | Spatially Coherent Random Forests | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatially Coherent Random Forest (SCRF) extends Random Forest to create
spatially coherent labeling. Each split function in SCRF is evaluated based on
a traditional information gain measure that is regularized by a spatial
coherency term. This way, SCRF is encouraged to choose split functions that
cluster pixels both in appearance space and in image space. In particular, we
use SCRF to detect contours in images, where contours are taken to be the
boundaries between different regions. Each tree in the forest produces a
segmentation of the image plane and the boundaries of the segmentations of all
trees are aggregated to produce a final hierarchical contour map. We show that
this modification improves the performance of regular Random Forest by about
10% on the standard Berkeley Segmentation Datasets. We believe that SCRF can be
used in other settings as well.
| [
{
"version": "v1",
"created": "Mon, 9 Nov 2015 22:14:00 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Dec 2015 10:13:06 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Remez",
"Tal",
""
],
[
"Avidan",
"Shai",
""
]
] | TITLE: Spatially Coherent Random Forests
ABSTRACT: Spatially Coherent Random Forest (SCRF) extends Random Forest to create
spatially coherent labeling. Each split function in SCRF is evaluated based on
a traditional information gain measure that is regularized by a spatial
coherency term. This way, SCRF is encouraged to choose split functions that
cluster pixels both in appearance space and in image space. In particular, we
use SCRF to detect contours in images, where contours are taken to be the
boundaries between different regions. Each tree in the forest produces a
segmentation of the image plane and the boundaries of the segmentations of all
trees are aggregated to produce a final hierarchical contour map. We show that
this modification improves the performance of regular Random Forest by about
10% on the standard Berkeley Segmentation Datasets. We believe that SCRF can be
used in other settings as well.
| no_new_dataset | 0.955734 |
1512.01568 | Sanjay Sahay | Aruna Govada, Pravin Joshi, Sahil Mittal and Sanjay K Sahay | Hybrid Approach for Inductive Semi Supervised Learning using Label
Propagation and Support Vector Machine | Presented in the 11th International Conference, MLDM, Germany, July
20 - 21, 2015. Springer, Machine Learning and Data Mining in Pattern
Recognition, LNAI Vol. 9166, p. 199-213, 2015 | null | 10.1007/978-3-319-21024-7_14 | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi supervised learning methods have gained importance in today's world
because of the large expense and time involved in having human experts label
the unlabeled data. The proposed hybrid approach uses SVM and Label Propagation to
label the unlabeled data. In the process, at each step SVM is trained to
minimize the error and thus improve the prediction quality. Experiments are
conducted by using SVM and logistic regression(Logreg). Results prove that SVM
performs tremendously better than Logreg. The approach is tested using 12
datasets of different sizes ranging from the order of 1000s to the order of
10000s. Results show that the proposed approach outperforms Label Propagation
by a large margin with F-measure of almost twice on average. The parallel
version of the proposed approach is also designed and implemented, the analysis
shows that the training time decreases significantly when parallel version is
used.
| [
{
"version": "v1",
"created": "Wed, 2 Dec 2015 12:04:30 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Govada",
"Aruna",
""
],
[
"Joshi",
"Pravin",
""
],
[
"Mittal",
"Sahil",
""
],
[
"Sahay",
"Sanjay K",
""
]
] | TITLE: Hybrid Approach for Inductive Semi Supervised Learning using Label
Propagation and Support Vector Machine
ABSTRACT: Semi supervised learning methods have gained importance in today's world
because of the large expense and time involved in having human experts label
the unlabeled data. The proposed hybrid approach uses SVM and Label Propagation to
label the unlabeled data. In the process, at each step SVM is trained to
minimize the error and thus improve the prediction quality. Experiments are
conducted by using SVM and logistic regression(Logreg). Results prove that SVM
performs tremendously better than Logreg. The approach is tested using 12
datasets of different sizes ranging from the order of 1000s to the order of
10000s. Results show that the proposed approach outperforms Label Propagation
by a large margin with F-measure of almost twice on average. The parallel
version of the proposed approach is also designed and implemented, the analysis
shows that the training time decreases significantly when parallel version is
used.
| no_new_dataset | 0.948251 |
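A compact sketch of the hybrid loop described above, using scikit-learn's LabelPropagation and SVC. The per-round batch size and the max-probability confidence rule are illustrative choices, not the paper's exact schedule.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import LabelPropagation
from sklearn.datasets import make_moons

def hybrid_ssl(X_lab, y_lab, X_unlab, rounds=5, batch=20):
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        n_lab = len(X_lab)
        lp = LabelPropagation().fit(
            np.vstack([X_lab, X_unlab]),
            np.concatenate([y_lab, -np.ones(len(X_unlab), dtype=int)]))
        pseudo = lp.transduction_[n_lab:]                 # propagated labels
        conf = lp.label_distributions_[n_lab:].max(axis=1)
        take = np.argsort(conf)[-batch:]                  # most confident points
        X_lab = np.vstack([X_lab, X_unlab[take]])
        y_lab = np.concatenate([y_lab, pseudo[take]])
        X_unlab = np.delete(X_unlab, take, axis=0)
    return SVC().fit(X_lab, y_lab)                        # final supervised model

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
clf = hybrid_ssl(X[:20], y[:20], X[20:])
print(clf.score(X, y))
```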
1512.01728 | Qi Qian | Qi Qian, Inci M. Baytas, Rong Jin, Anil Jain and Shenghuo Zhu | Similarity Learning via Adaptive Regression and Its Application to Image
Retrieval | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of similarity learning and its application to image
retrieval with large-scale data. The similarity between pairs of images can be
measured by the distances between their high dimensional representations, and
the problem of learning the appropriate similarity is often addressed by
distance metric learning. However, distance metric learning requires the
learned metric to be a PSD matrix, which is computationally expensive and not
necessary for the retrieval ranking problem. On the other hand, the bilinear model
is shown to be more flexible for large-scale image retrieval task, hence, we
adopt it to learn a matrix for estimating pairwise similarities under the
regression framework. By adaptively updating the target matrix in regression,
we can mimic the hinge loss, which is more appropriate for similarity learning
problem. Although the regression problem can have a closed-form solution, the
computational cost can be very expensive. The computational challenges come
from two aspects: the number of images can be very large and image features
have high dimensionality. We address the first challenge by compressing the
data by a randomized algorithm with the theoretical guarantee. For the high
dimensional issue, we address it by making a low-rank assumption and applying an
alternating method to obtain the partial matrix, which has a globally optimal
solution. Empirical studies on real world image datasets (i.e., Caltech and
ImageNet) demonstrate the effectiveness and efficiency of the proposed method.
| [
{
"version": "v1",
"created": "Sun, 6 Dec 2015 02:56:32 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Qian",
"Qi",
""
],
[
"Baytas",
"Inci M.",
""
],
[
"Jin",
"Rong",
""
],
[
"Jain",
"Anil",
""
],
[
"Zhu",
"Shenghuo",
""
]
] | TITLE: Similarity Learning via Adaptive Regression and Its Application to Image
Retrieval
ABSTRACT: We study the problem of similarity learning and its application to image
retrieval with large-scale data. The similarity between pairs of images can be
measured by the distances between their high dimensional representations, and
the problem of learning the appropriate similarity is often addressed by
distance metric learning. However, distance metric learning requires the
learned metric to be a PSD matrix, which is computationally expensive and not
necessary for the retrieval ranking problem. On the other hand, the bilinear model
is shown to be more flexible for large-scale image retrieval task, hence, we
adopt it to learn a matrix for estimating pairwise similarities under the
regression framework. By adaptively updating the target matrix in regression,
we can mimic the hinge loss, which is more appropriate for similarity learning
problem. Although the regression problem can have a closed-form solution, the
computational cost can be very expensive. The computational challenges come
from two aspects: the number of images can be very large and image features
have high dimensionality. We address the first challenge by compressing the
data by a randomized algorithm with the theoretical guarantee. For the high
dimensional issue, we address it by making a low-rank assumption and applying an
alternating method to obtain the partial matrix, which has a globally optimal
solution. Empirical studies on real world image datasets (i.e., Caltech and
ImageNet) demonstrate the effectiveness and efficiency of the proposed method.
| no_new_dataset | 0.944791 |
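A simplified numpy sketch of the bilinear model above: learn M so that x^T M y tracks target similarities via plain regularized gradient steps. The fixed regression target is a stand-in for the paper's adaptive-target update (which mimics a hinge loss), and none of the compression or low-rank machinery is included.

```python
import numpy as np

def fit_bilinear_similarity(X, Y, targets, lam=1e-2, lr=0.1, steps=200):
    d = X.shape[1]
    M = np.zeros((d, d))                     # no PSD constraint, unlike a metric
    for _ in range(steps):
        pred = np.einsum('id,de,ie->i', X, M, Y)       # x_i^T M y_i
        grad = np.einsum('i,id,ie->de', pred - targets, X, Y) / len(X) + lam * M
        M -= lr * grad
    return M

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(100, 5)), rng.normal(size=(100, 5))
targets = np.sign((X * Y).sum(axis=1))       # toy similar / dissimilar labels
M = fit_bilinear_similarity(X, Y, targets)
print(np.round(M, 2))
```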
1512.01768 | Danish . | Danish, Yogesh Dahiya, Partha Talukdar | Want Answers? A Reddit Inspired Study on How to Pose Questions | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Questions form an integral part of our everyday communication, both offline
and online. Getting responses to our questions from others is fundamental to
satisfying our information need and in extending our knowledge boundaries. A
question may be represented using various factors such as social, syntactic,
semantic, etc. We hypothesize that these factors contribute with varying
degrees towards getting responses from others for a given question. We perform
a thorough empirical study to measure effects of these factors using a novel
question and answer dataset from the website Reddit.com. To the best of our
knowledge, this is the first such analysis of its kind on this important topic.
We also use a sparse nonnegative matrix factorization technique to
automatically induce interpretable semantic factors from the question dataset.
We also document various patterns on response prediction we observe during our
analysis in the data. For instance, we found that preference-probing questions
are scantily answered. Our method robustly captures such latent response
factors. We hope to make our code and datasets publicly available upon
publication of the paper.
| [
{
"version": "v1",
"created": "Sun, 6 Dec 2015 10:31:12 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Danish",
"",
""
],
[
"Dahiya",
"Yogesh",
""
],
[
"Talukdar",
"Partha",
""
]
] | TITLE: Want Answers? A Reddit Inspired Study on How to Pose Questions
ABSTRACT: Questions form an integral part of our everyday communication, both offline
and online. Getting responses to our questions from others is fundamental to
satisfying our information need and in extending our knowledge boundaries. A
question may be represented using various factors such as social, syntactic,
semantic, etc. We hypothesize that these factors contribute with varying
degrees towards getting responses from others for a given question. We perform
a thorough empirical study to measure effects of these factors using a novel
question and answer dataset from the website Reddit.com. To the best of our
knowledge, this is the first such analysis of its kind on this important topic.
We also use a sparse nonnegative matrix factorization technique to
automatically induce interpretable semantic factors from the question dataset.
We also document various patterns on response prediction we observe during our
analysis in the data. For instance, we found that preference-probing questions
are scantily answered. Our method robustly captures such latent response
factors. We hope to make our code and datasets publicly available upon
publication of the paper.
| new_dataset | 0.915583 |
1512.01858 | Ali Borji | Mengyang Feng, Ali Borji, Huchuan Lu | Fixation prediction with a combined model of bottom-up saliency and
vanishing point | arXiv admin note: text overlap with arXiv:1512.01722 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By predicting where humans look in natural scenes, we can understand how they
perceive complex natural scenes and prioritize information for further
high-level visual processing. Several models have been proposed for this
purpose, yet there is a gap between the best existing saliency models and human
performance. While many researchers have developed purely computational models
for fixation prediction, fewer attempts have been made to discover cognitive
factors that guide gaze. Here, we study the effect of a particular type of
scene structural information, known as the vanishing point, and show that human
gaze is attracted to the vanishing point regions. We record eye movements of 10
observers over 532 images, out of which 319 have vanishing points. We then
construct a combined model of traditional saliency and a vanishing point
channel and show that our model outperforms state of the art saliency models
using three scores on our dataset.
| [
{
"version": "v1",
"created": "Sun, 6 Dec 2015 23:29:53 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Feng",
"Mengyang",
""
],
[
"Borji",
"Ali",
""
],
[
"Lu",
"Huchuan",
""
]
] | TITLE: Fixation prediction with a combined model of bottom-up saliency and
vanishing point
ABSTRACT: By predicting where humans look in natural scenes, we can understand how they
perceive complex natural scenes and prioritize information for further
high-level visual processing. Several models have been proposed for this
purpose, yet there is a gap between the best existing saliency models and human
performance. While many researchers have developed purely computational models
for fixation prediction, fewer attempts have been made to discover cognitive
factors that guide gaze. Here, we study the effect of a particular type of
scene structural information, known as the vanishing point, and show that human
gaze is attracted to the vanishing point regions. We record eye movements of 10
observers over 532 images, out of which 319 have vanishing points. We then
construct a combined model of traditional saliency and a vanishing point
channel and show that our model outperforms state of the art saliency models
using three scores on our dataset.
| new_dataset | 0.966092 |
1512.01872 | Pranav Rajpurkar | Pranav Rajpurkar, Toki Migimatsu, Jeff Kiske, Royce Cheng-Yue, Sameep
Tandon, Tao Wang, Andrew Ng | Driverseat: Crowdstrapping Learning Tasks for Autonomous Driving | null | null | null | null | cs.HC cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While emerging deep-learning systems have outclassed knowledge-based
approaches in many tasks, their application to detection tasks for autonomous
technologies remains an open field for scientific exploration. Broadly, there
are two major developmental bottlenecks: the unavailability of comprehensively
labeled datasets and of expressive evaluation strategies. Approaches for
labeling datasets have relied on intensive hand-engineering, and strategies for
evaluating learning systems have been unable to identify failure-case
scenarios. Human intelligence offers an untapped approach for breaking through
these bottlenecks. This paper introduces Driverseat, a technology for embedding
crowds around learning systems for autonomous driving. Driverseat utilizes
crowd contributions for (a) collecting complex 3D labels and (b) tagging
diverse scenarios for ready evaluation of learning systems. We demonstrate how
Driverseat can crowdstrap a convolutional neural network on the lane-detection
task. More generally, crowdstrapping introduces a valuable paradigm for any
technology that can benefit from leveraging the powerful combination of human
and computer intelligence.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 01:34:23 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Rajpurkar",
"Pranav",
""
],
[
"Migimatsu",
"Toki",
""
],
[
"Kiske",
"Jeff",
""
],
[
"Cheng-Yue",
"Royce",
""
],
[
"Tandon",
"Sameep",
""
],
[
"Wang",
"Tao",
""
],
[
"Ng",
"Andrew",
""
]
] | TITLE: Driverseat: Crowdstrapping Learning Tasks for Autonomous Driving
ABSTRACT: While emerging deep-learning systems have outclassed knowledge-based
approaches in many tasks, their application to detection tasks for autonomous
technologies remains an open field for scientific exploration. Broadly, there
are two major developmental bottlenecks: the unavailability of comprehensively
labeled datasets and of expressive evaluation strategies. Approaches for
labeling datasets have relied on intensive hand-engineering, and strategies for
evaluating learning systems have been unable to identify failure-case
scenarios. Human intelligence offers an untapped approach for breaking through
these bottlenecks. This paper introduces Driverseat, a technology for embedding
crowds around learning systems for autonomous driving. Driverseat utilizes
crowd contributions for (a) collecting complex 3D labels and (b) tagging
diverse scenarios for ready evaluation of learning systems. We demonstrate how
Driverseat can crowdstrap a convolutional neural network on the lane-detection
task. More generally, crowdstrapping introduces a valuable paradigm for any
technology that can benefit from leveraging the powerful combination of human
and computer intelligence.
| no_new_dataset | 0.943452 |
1512.01993 | Sanjay Sahay | Aruna Govada, Shree Ranjani, Aditi Viswanathan and S.K.Sahay | A Novel Approach to Distributed Multi-Class SVM | 8 Pages | Transactions on Machine Learning and Artificial Intelligence, Vol.
2, No. 5, p. 72, 2014 | 10.14738/tmlai.25.562 | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With data sizes constantly expanding, and with classical machine learning
algorithms that analyze such data requiring larger and larger amounts of
computation time and storage space, the need to distribute computation and
memory requirements among several computers has become apparent. Although
substantial work has been done in developing distributed binary SVM algorithms
and multi-class SVM algorithms individually, the field of multi-class
distributed SVMs remains largely unexplored. This research proposes a novel
algorithm that implements the Support Vector Machine over a multi-class dataset
and is efficient in a distributed environment (here, Hadoop). The idea is to
divide the dataset in half recursively and thus compute the optimal Support
Vector Machine for this half during the training phase, much like a divide and
conquer approach. While testing, this structure has been effectively exploited
to significantly reduce the prediction time. Our algorithm has shown better
computation time during the prediction phase than the traditional sequential
SVM methods (One vs. One, One vs. Rest) and outperforms them as the size of
the dataset grows. This approach also classifies the data with higher accuracy
than the traditional multi-class algorithms.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 11:44:35 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Govada",
"Aruna",
""
],
[
"Ranjani",
"Shree",
""
],
[
"Viswanathan",
"Aditi",
""
],
[
"Sahay",
"S. K.",
""
]
] | TITLE: A Novel Approach to Distributed Multi-Class SVM
ABSTRACT: With data sizes constantly expanding, and with classical machine learning
algorithms that analyze such data requiring larger and larger amounts of
computation time and storage space, the need to distribute computation and
memory requirements among several computers has become apparent. Although
substantial work has been done in developing distributed binary SVM algorithms
and multi-class SVM algorithms individually, the field of multi-class
distributed SVMs remains largely unexplored. This research proposes a novel
algorithm that implements the Support Vector Machine over a multi-class dataset
and is efficient in a distributed environment (here, Hadoop). The idea is to
divide the dataset in half recursively and thus compute the optimal Support
Vector Machine for this half during the training phase, much like a divide and
conquer approach. While testing, this structure has been effectively exploited
to significantly reduce the prediction time. Our algorithm has shown better
computation time during the prediction phase than the traditional sequential
SVM methods (One vs. One, One vs. Rest) and out-performs them as the size of
the dataset grows. This approach also classifies the data with higher accuracy
than the traditional multi-class algorithms.
| no_new_dataset | 0.94545 |
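The divide-and-conquer structure described above admits a compact sketch. The following single-machine Python illustration — not the paper's Hadoop implementation; the names `SVMTreeNode`, `fit_tree`, and `predict_one` are ours — splits the set of classes in half recursively, trains a binary SVM to separate the two groups at each internal node, and walks the tree at prediction time so that only about log2(K) SVMs are evaluated per sample:

```python
# A minimal sketch of the recursive halving idea, assuming it is the classes
# (not raw rows) that get split; the distributed Hadoop machinery is omitted.
import numpy as np
from sklearn.svm import SVC

class SVMTreeNode:
    def __init__(self, classes):
        self.classes = list(classes)  # class labels covered by this subtree
        self.svm = None               # binary SVM: left group vs. right group
        self.left = None
        self.right = None

def fit_tree(X, y, classes):
    node = SVMTreeNode(classes)
    if len(node.classes) == 1:
        return node                   # leaf: a single class remains
    mid = len(node.classes) // 2
    left_cls, right_cls = node.classes[:mid], node.classes[mid:]
    mask = np.isin(y, node.classes)
    X_sub, y_sub = X[mask], y[mask]
    # Binary target: 0 if the sample belongs to the left group, 1 otherwise.
    target = np.isin(y_sub, right_cls).astype(int)
    node.svm = SVC(kernel="linear").fit(X_sub, target)
    node.left = fit_tree(X, y, left_cls)
    node.right = fit_tree(X, y, right_cls)
    return node

def predict_one(tree, x):
    node = tree
    while node.svm is not None:
        side = node.svm.predict(x.reshape(1, -1))[0]
        node = node.right if side == 1 else node.left
    return node.classes[0]
```

Training fits K-1 binary SVMs arranged in a balanced tree; prediction touches only the SVMs on a single root-to-leaf path, which is the source of the prediction-time speed-up the abstract reports.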
1512.02013 | Etienne Gadeski | Adrian Popescu, Etienne Gadeski, Herv\'e Le Borgne | Scalable domain adaptation of convolutional neural networks | technical report, 6 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks (CNNs) are becoming a standard approach for
solving a wide array of computer vision problems. Besides important theoretical
and practical advances in their design, their success is built on the existence
of manually labeled visual resources, such as ImageNet. The creation of such
datasets is cumbersome and here we focus on alternatives to manual labeling. We
hypothesize that new resources are of utmost importance in domains that are
not or weakly covered by ImageNet, such as tourism photographs. We first
collect noisy Flickr images for tourist points of interest and apply automatic
or weakly-supervised reranking techniques to reduce noise. Then, we learn
domain adapted models with a standard CNN architecture and compare them to a
generic model obtained from ImageNet. Experimental validation is conducted with
publicly available datasets, including Oxford5k, INRIA Holidays and Div150Cred.
Results show that low-cost domain adaptation improves results compared to the
use of generic models but also compared to strong non-CNN baselines such as
triangulation embedding.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 12:31:32 GMT"
}
] | 2015-12-08T00:00:00 | [
[
"Popescu",
"Adrian",
""
],
[
"Gadeski",
"Etienne",
""
],
[
"Borgne",
"Hervé Le",
""
]
] | TITLE: Scalable domain adaptation of convolutional neural networks
ABSTRACT: Convolutional neural networks (CNNs) are becoming a standard approach for
solving a wide array of computer vision problems. Besides important theoretical
and practical advances in their design, their success is built on the existence
of manually labeled visual resources, such as ImageNet. The creation of such
datasets is cumbersome and here we focus on alternatives to manual labeling. We
hypothesize that new resources are of utmost importance in domains that are
not or weakly covered by ImageNet, such as tourism photographs. We first
collect noisy Flickr images for tourist points of interest and apply automatic
or weakly-supervised reranking techniques to reduce noise. Then, we learn
domain adapted models with a standard CNN architecture and compare them to a
generic model obtained from ImageNet. Experimental validation is conducted with
publicly available datasets, including Oxford5k, INRIA Holidays and Div150Cred.
Results show that low-cost domain adaptation improves results compared to the
use of generic models but also compared to strong non-CNN baselines such as
triangulation embedding.
| no_new_dataset | 0.940735 |
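A rough sketch of the adaptation step above follows. The 2015 paper predates today's tooling, so this is a modern Python stand-in rather than the authors' pipeline; the centroid-based reranking shown is one simple automatic denoising variant, not necessarily the paper's exact reranking technique, and the model choice is illustrative:

```python
# Hedged sketch: (1) suppress Flickr label noise by keeping the images whose
# features lie closest to the set's mean feature, then (2) fine-tune an
# ImageNet-pretrained backbone on the survivors. ResNet-18 is a stand-in for
# the paper's architecture; the weights enum requires torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision import models

def rerank_by_centroid(features: torch.Tensor, keep: int) -> torch.Tensor:
    """Return indices of the `keep` images closest to the mean feature."""
    centroid = features.mean(dim=0, keepdim=True)        # (1, D)
    dists = torch.cdist(features, centroid).squeeze(1)   # (N,)
    return torch.argsort(dists)[:keep]

def build_adapted_model(num_domain_classes: int) -> nn.Module:
    """Generic ImageNet model with its head resized for the new domain."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_domain_classes)
    return model
```

Fine-tuning then proceeds as ordinary supervised training on the kept images.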
1512.01272 | Vikash Mansinghka | Vikash Mansinghka, Patrick Shafto, Eric Jonas, Cap Petschulat, Max
Gasner, Joshua B. Tenenbaum | CrossCat: A Fully Bayesian Nonparametric Method for Analyzing
Heterogeneous, High Dimensional Data | null | null | null | null | cs.AI stat.CO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a widespread need for statistical methods that can analyze
high-dimensional datasets without imposing restrictive or opaque modeling
assumptions. This paper describes a domain-general data analysis method called
CrossCat. CrossCat infers multiple non-overlapping views of the data, each
consisting of a subset of the variables, and uses a separate nonparametric
mixture to model each view. CrossCat is based on approximately Bayesian
inference in a hierarchical, nonparametric model for data tables. This model
consists of a Dirichlet process mixture over the columns of a data table in
which each mixture component is itself an independent Dirichlet process mixture
over the rows; the inner mixture components are simple parametric models whose
form depends on the types of data in the table. CrossCat combines strengths of
mixture modeling and Bayesian network structure learning. Like mixture
modeling, CrossCat can model a broad class of distributions by positing latent
variables, and produces representations that can be efficiently conditioned and
sampled from for prediction. Like Bayesian networks, CrossCat represents the
dependencies and independencies between variables, and thus remains accurate
when there are multiple statistical signals. Inference is done via a scalable
Gibbs sampling scheme; this paper shows that it works well in practice. This
paper also includes empirical results on heterogeneous tabular data of up to 10
million cells, such as hospital cost and quality measures, voting records,
unemployment rates, gene expression measurements, and images of handwritten
digits. CrossCat infers structure that is consistent with accepted findings and
common-sense knowledge in multiple domains and yields predictive accuracy
competitive with generative, discriminative, and model-free alternatives.
| [
{
"version": "v1",
"created": "Thu, 3 Dec 2015 22:39:37 GMT"
}
] | 2015-12-07T00:00:00 | [
[
"Mansinghka",
"Vikash",
""
],
[
"Shafto",
"Patrick",
""
],
[
"Jonas",
"Eric",
""
],
[
"Petschulat",
"Cap",
""
],
[
"Gasner",
"Max",
""
],
[
"Tenenbaum",
"Joshua B.",
""
]
] | TITLE: CrossCat: A Fully Bayesian Nonparametric Method for Analyzing
Heterogeneous, High Dimensional Data
ABSTRACT: There is a widespread need for statistical methods that can analyze
high-dimensional datasets without imposing restrictive or opaque modeling
assumptions. This paper describes a domain-general data analysis method called
CrossCat. CrossCat infers multiple non-overlapping views of the data, each
consisting of a subset of the variables, and uses a separate nonparametric
mixture to model each view. CrossCat is based on approximately Bayesian
inference in a hierarchical, nonparametric model for data tables. This model
consists of a Dirichlet process mixture over the columns of a data table in
which each mixture component is itself an independent Dirichlet process mixture
over the rows; the inner mixture components are simple parametric models whose
form depends on the types of data in the table. CrossCat combines strengths of
mixture modeling and Bayesian network structure learning. Like mixture
modeling, CrossCat can model a broad class of distributions by positing latent
variables, and produces representations that can be efficiently conditioned and
sampled from for prediction. Like Bayesian networks, CrossCat represents the
dependencies and independencies between variables, and thus remains accurate
when there are multiple statistical signals. Inference is done via a scalable
Gibbs sampling scheme; this paper shows that it works well in practice. This
paper also includes empirical results on heterogeneous tabular data of up to 10
million cells, such as hospital cost and quality measures, voting records,
unemployment rates, gene expression measurements, and images of handwritten
digits. CrossCat infers structure that is consistent with accepted findings and
common-sense knowledge in multiple domains and yields predictive accuracy
competitive with generative, discriminative, and model-free alternatives.
| no_new_dataset | 0.951233 |
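The two-level structure the abstract describes — an outer Dirichlet process mixture over columns, and within each resulting view an inner Dirichlet process mixture over rows — can be written compactly in Chinese-restaurant-process (CRP) form. The symbols below (alpha and beta_v for concentration parameters; H_c and F_c for the type-appropriate prior and likelihood of column c) are our notation, not the paper's:

```latex
% Sketch of the CrossCat generative model in CRP notation (symbols ours).
\begin{align*}
  v_c \mid \alpha &\sim \mathrm{CRP}(\alpha)
    && \text{view assignment of column } c \\
  z_r^{(v)} \mid \beta_v &\sim \mathrm{CRP}(\beta_v)
    && \text{row cluster of row } r \text{ within view } v \\
  \theta_{k,c} &\sim H_c
    && \text{parameters of cluster } k \text{ in column } c \\
  x_{r,c} &\sim F_c\bigl(\theta_{z_r^{(v_c)},\,c}\bigr)
    && \text{observed cell value}
\end{align*}
```

For a continuous column, H_c and F_c might be a Normal-inverse-gamma prior with a Normal likelihood; conditioning and sampling for prediction then reduce to standard mixture-model computations within the relevant view.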
1512.01325 | Babak Saleh | Babak Saleh, Ahmed Elgammal, Jacob Feldman, Ali Farhadi | Toward a Taxonomy and Computational Models of Abnormalities in Images | To appear in the Thirtieth AAAI Conference on Artificial Intelligence
(AAAI 2016) | null | null | null | cs.CV cs.AI cs.HC cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human visual system can spot an abnormal image, and reason about what
makes it strange. This task has not received enough attention in computer
vision. In this paper we study various types of atypicalities in images in a
more comprehensive way than has been done before. We propose a new dataset of
abnormal images showing a wide range of atypicalities. We design human subject
experiments to discover a coarse taxonomy of the reasons for abnormality. Our
experiments reveal three major categories of abnormality: object-centric,
scene-centric, and contextual. Based on this taxonomy, we propose a
comprehensive computational model that can predict all different types of
abnormality in images and outperform prior art in abnormality recognition.
| [
{
"version": "v1",
"created": "Fri, 4 Dec 2015 06:29:53 GMT"
}
] | 2015-12-07T00:00:00 | [
[
"Saleh",
"Babak",
""
],
[
"Elgammal",
"Ahmed",
""
],
[
"Feldman",
"Jacob",
""
],
[
"Farhadi",
"Ali",
""
]
] | TITLE: Toward a Taxonomy and Computational Models of Abnormalities in Images
ABSTRACT: The human visual system can spot an abnormal image, and reason about what
makes it strange. This task has not received enough attention in computer
vision. In this paper we study various types of atypicalities in images in a
more comprehensive way than has been done before. We propose a new dataset of
abnormal images showing a wide range of atypicalities. We design human subject
experiments to discover a coarse taxonomy of the reasons for abnormality. Our
experiments reveal three major categories of abnormality: object-centric,
scene-centric, and contextual. Based on this taxonomy, we propose a
comprehensive computational model that can predict all different types of
abnormality in images and outperform prior art in abnormality recognition.
| new_dataset | 0.957636 |
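One oversimplified but concrete reading of the "comprehensive computational model" above is a learned fusion of three per-channel atypicality scores, one per category in the taxonomy. The Python sketch below uses synthetic scores purely to show the shape of such a fusion; it is our illustration, not the paper's model:

```python
# Illustrative only: fuse object-centric, scene-centric, and contextual
# atypicality scores with a logistic model. Scores and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # columns: object, scene, context scores
y = (X @ np.array([1.2, 0.8, 1.5]) > 1.0).astype(int)  # stand-in labels

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])   # abnormality probability per image
```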