id (stringlengths 9-16) | submitter (stringlengths 3-64, nullable) | authors (stringlengths 5-6.63k) | title (stringlengths 7-245) | comments (stringlengths 1-482, nullable) | journal-ref (stringlengths 4-382, nullable) | doi (stringlengths 9-151, nullable) | report-no (stringclasses, 984 values) | categories (stringlengths 5-108) | license (stringclasses, 9 values) | abstract (stringlengths 83-3.41k) | versions (listlengths 1-20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequencelengths 1-427) | prompt (stringlengths 166-3.49k) | label (stringclasses, 2 values) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1607.08569 | Gernot Riegler | Gernot Riegler, David Ferstl, Matthias R\"uther, Horst Bischof | A Deep Primal-Dual Network for Guided Depth Super-Resolution | BMVC 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a novel method to increase the spatial resolution of
depth images. We combine a deep fully convolutional network with a non-local
variational method in a deep primal-dual network. The joint network computes a
noise-free, high-resolution estimate from a noisy, low-resolution input depth
map. Additionally, a high-resolution intensity image is used to guide the
reconstruction in the network. By unrolling the optimization steps of a
first-order primal-dual algorithm and formulating it as a network, we can train
our joint method end-to-end. This not only enables us to learn the weights of
the fully convolutional network, but also to optimize all parameters of the
variational method and its optimization procedure. The training of such a deep
network requires a large dataset for supervision. Therefore, we generate
high-quality depth maps and corresponding color images with a physically based
renderer. In an exhaustive evaluation we show that our method outperforms the
state-of-the-art on multiple benchmarks.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2016 18:49:55 GMT"
}
] | 2016-07-29T00:00:00 | [
[
"Riegler",
"Gernot",
""
],
[
"Ferstl",
"David",
""
],
[
"Rüther",
"Matthias",
""
],
[
"Bischof",
"Horst",
""
]
] | TITLE: A Deep Primal-Dual Network for Guided Depth Super-Resolution
ABSTRACT: In this paper we present a novel method to increase the spatial resolution of
depth images. We combine a deep fully convolutional network with a non-local
variational method in a deep primal-dual network. The joint network computes a
noise-free, high-resolution estimate from a noisy, low-resolution input depth
map. Additionally, a high-resolution intensity image is used to guide the
reconstruction in the network. By unrolling the optimization steps of a
first-order primal-dual algorithm and formulating it as a network, we can train
our joint method end-to-end. This not only enables us to learn the weights of
the fully convolutional network, but also to optimize all parameters of the
variational method and its optimization procedure. The training of such a deep
network requires a large dataset for supervision. Therefore, we generate
high-quality depth maps and corresponding color images with a physically based
renderer. In an exhaustive evaluation we show that our method outperforms the
state-of-the-art on multiple benchmarks.
| no_new_dataset | 0.946399 |
1505.03597 | Eshed Ohn-Bar | Eshed Ohn-Bar and M. M. Trivedi | Multi-scale Volumes for Deep Object Detection and Localization | To appear in Pattern Recognition 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study aims to analyze the benefits of improved multi-scale reasoning for
object detection and localization with deep convolutional neural networks. To
that end, an efficient and general object detection framework which operates on
scale volumes of a deep feature pyramid is proposed. In contrast to the
proposed approach, most current state-of-the-art object detectors operate on a
single-scale in training, while testing involves independent evaluation across
scales. One benefit of the proposed approach is in better capturing of
multi-scale contextual information, resulting in significant gains in both
detection performance and localization quality of objects on the PASCAL VOC
dataset and a multi-view highway vehicles dataset. The joint detection and
localization scale-specific models are shown to especially benefit detection of
challenging object categories which exhibit large scale variation as well as
detection of small objects.
| [
{
"version": "v1",
"created": "Thu, 14 May 2015 02:07:10 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 21:15:12 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Ohn-Bar",
"Eshed",
""
],
[
"Trivedi",
"M. M.",
""
]
] | TITLE: Multi-scale Volumes for Deep Object Detection and Localization
ABSTRACT: This study aims to analyze the benefits of improved multi-scale reasoning for
object detection and localization with deep convolutional neural networks. To
that end, an efficient and general object detection framework which operates on
scale volumes of a deep feature pyramid is proposed. In contrast to the
proposed approach, most current state-of-the-art object detectors operate on a
single-scale in training, while testing involves independent evaluation across
scales. One benefit of the proposed approach is in better capturing of
multi-scale contextual information, resulting in significant gains in both
detection performance and localization quality of objects on the PASCAL VOC
dataset and a multi-view highway vehicles dataset. The joint detection and
localization scale-specific models are shown to especially benefit detection of
challenging object categories which exhibit large scale variation as well as
detection of small objects.
| no_new_dataset | 0.947527 |
1508.07680 | Muhammad Ghifary | Muhammad Ghifary and W. Bastiaan Kleijn and Mengjie Zhang and David
Balduzzi | Domain Generalization for Object Recognition with Multi-task
Autoencoders | accepted in ICCV 2015 | null | null | null | cs.CV cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of domain generalization is to take knowledge acquired from a
number of related domains where training data is available, and to then
successfully apply it to previously unseen domains. We propose a new feature
learning algorithm, Multi-Task Autoencoder (MTAE), that provides good
generalization performance for cross-domain object recognition.
Our algorithm extends the standard denoising autoencoder framework by
substituting artificially induced corruption with naturally occurring
inter-domain variability in the appearance of objects. Instead of
reconstructing images from noisy versions, MTAE learns to transform the
original image into analogs in multiple related domains. It thereby learns
features that are robust to variations across domains. The learnt features are
then used as inputs to a classifier.
We evaluated the performance of the algorithm on benchmark image recognition
datasets, where the task is to learn features from multiple datasets and to
then predict the image label from unseen datasets. We found that (denoising)
MTAE outperforms alternative autoencoder-based models as well as the current
state-of-the-art algorithms for domain generalization.
| [
{
"version": "v1",
"created": "Mon, 31 Aug 2015 04:15:31 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Ghifary",
"Muhammad",
""
],
[
"Kleijn",
"W. Bastiaan",
""
],
[
"Zhang",
"Mengjie",
""
],
[
"Balduzzi",
"David",
""
]
] | TITLE: Domain Generalization for Object Recognition with Multi-task
Autoencoders
ABSTRACT: The problem of domain generalization is to take knowledge acquired from a
number of related domains where training data is available, and to then
successfully apply it to previously unseen domains. We propose a new feature
learning algorithm, Multi-Task Autoencoder (MTAE), that provides good
generalization performance for cross-domain object recognition.
Our algorithm extends the standard denoising autoencoder framework by
substituting artificially induced corruption with naturally occurring
inter-domain variability in the appearance of objects. Instead of
reconstructing images from noisy versions, MTAE learns to transform the
original image into analogs in multiple related domains. It thereby learns
features that are robust to variations across domains. The learnt features are
then used as inputs to a classifier.
We evaluated the performance of the algorithm on benchmark image recognition
datasets, where the task is to learn features from multiple datasets and to
then predict the image label from unseen datasets. We found that (denoising)
MTAE outperforms alternative autoencoder-based models as well as the current
state-of-the-art algorithms for domain generalization.
| no_new_dataset | 0.942454 |
1509.01007 | Shay Cohen | Dominique Osborne, Shashi Narayan and Shay B. Cohen | Encoding Prior Knowledge with Eigenword Embeddings | in Transactions of the Association of Computational Linguistics
(TACL), 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Canonical correlation analysis (CCA) is a method for reducing the dimension
of data represented using two views. It has been previously used to derive word
embeddings, where one view indicates a word, and the other view indicates its
context. We describe a way to incorporate prior knowledge into CCA, give a
theoretical justification for it, and test it by deriving word embeddings and
evaluating them on a myriad of datasets.
| [
{
"version": "v1",
"created": "Thu, 3 Sep 2015 09:39:36 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Mar 2016 10:54:17 GMT"
},
{
"version": "v3",
"created": "Wed, 27 Jul 2016 12:46:39 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Osborne",
"Dominique",
""
],
[
"Narayan",
"Shashi",
""
],
[
"Cohen",
"Shay B.",
""
]
] | TITLE: Encoding Prior Knowledge with Eigenword Embeddings
ABSTRACT: Canonical correlation analysis (CCA) is a method for reducing the dimension
of data represented using two views. It has been previously used to derive word
embeddings, where one view indicates a word, and the other view indicates its
context. We describe a way to incorporate prior knowledge into CCA, give a
theoretical justification for it, and test it by deriving word embeddings and
evaluating them on a myriad of datasets.
| no_new_dataset | 0.945248 |
1510.04373 | Muhammad Ghifary | Muhammad Ghifary and David Balduzzi and W. Bastiaan Kleijn and Mengjie
Zhang | Scatter Component Analysis: A Unified Framework for Domain Adaptation
and Domain Generalization | to appear in IEEE Transactions on Pattern Analysis and Machine
Intelligence | null | null | null | cs.CV cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses classification tasks on a particular target domain in
which labeled training data are only available from source domains different
from (but related to) the target. Two closely related frameworks, domain
adaptation and domain generalization, are concerned with such tasks, where the
only difference between those frameworks is the availability of the unlabeled
target data: domain adaptation can leverage unlabeled target information, while
domain generalization cannot. We propose Scatter Component Analysis (SCA), a
fast representation learning algorithm that can be applied to both domain
adaptation and domain generalization. SCA is based on a simple geometrical
measure, i.e., scatter, which operates on reproducing kernel Hilbert space. SCA
finds a representation that trades between maximizing the separability of
classes, minimizing the mismatch between domains, and maximizing the
separability of data; each of which is quantified through scatter. The
optimization problem of SCA can be reduced to a generalized eigenvalue problem,
which results in a fast and exact solution. Comprehensive experiments on
benchmark cross-domain object recognition datasets verify that SCA performs
much faster than several state-of-the-art algorithms and also provides
state-of-the-art classification accuracy in both domain adaptation and domain
generalization. We also show that scatter can be used to establish a
theoretical generalization bound in the case of domain adaptation.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2015 01:41:12 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 21:35:08 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Ghifary",
"Muhammad",
""
],
[
"Balduzzi",
"David",
""
],
[
"Kleijn",
"W. Bastiaan",
""
],
[
"Zhang",
"Mengjie",
""
]
] | TITLE: Scatter Component Analysis: A Unified Framework for Domain Adaptation
and Domain Generalization
ABSTRACT: This paper addresses classification tasks on a particular target domain in
which labeled training data are only available from source domains different
from (but related to) the target. Two closely related frameworks, domain
adaptation and domain generalization, are concerned with such tasks, where the
only difference between those frameworks is the availability of the unlabeled
target data: domain adaptation can leverage unlabeled target information, while
domain generalization cannot. We propose Scatter Component Analysis (SCA), a
fast representation learning algorithm that can be applied to both domain
adaptation and domain generalization. SCA is based on a simple geometrical
measure, i.e., scatter, which operates on reproducing kernel Hilbert space. SCA
finds a representation that trades between maximizing the separability of
classes, minimizing the mismatch between domains, and maximizing the
separability of data; each of which is quantified through scatter. The
optimization problem of SCA can be reduced to a generalized eigenvalue problem,
which results in a fast and exact solution. Comprehensive experiments on
benchmark cross-domain object recognition datasets verify that SCA performs
much faster than several state-of-the-art algorithms and also provides
state-of-the-art classification accuracy in both domain adaptation and domain
generalization. We also show that scatter can be used to establish a
theoretical generalization bound in the case of domain adaptation.
| no_new_dataset | 0.94743 |
1602.02434 | Shervin Minaee | Shervin Minaee and Yao Wang | Screen Content Image Segmentation Using Sparse Decomposition and Total
Variation Minimization | 5 pages in IEEE, International Conference on Image Processing, 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse decomposition has been widely used for different applications, such as
source separation, image classification, image denoising and more. This paper
presents a new algorithm for segmentation of an image into background and
foreground text and graphics using sparse decomposition and total variation
minimization. The proposed method is designed based on the assumption that the
background part of the image is smoothly varying and can be represented by a
linear combination of a few smoothly varying basis functions, while the
foreground text and graphics can be modeled with a sparse component overlaid on
the smooth background. The background and foreground are separated using a
sparse decomposition framework regularized with a few suitable regularization
terms which promote the sparsity and connectivity of foreground pixels. This
algorithm has been tested on a dataset of images extracted from HEVC standard
test sequences for screen content coding, and is shown to have superior
performance over some prior methods, including least absolute deviation
fitting, k-means clustering based segmentation in DjVu and shape primitive
extraction and coding (SPEC) algorithm.
| [
{
"version": "v1",
"created": "Sun, 7 Feb 2016 22:12:16 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 23:45:56 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Minaee",
"Shervin",
""
],
[
"Wang",
"Yao",
""
]
] | TITLE: Screen Content Image Segmentation Using Sparse Decomposition and Total
Variation Minimization
ABSTRACT: Sparse decomposition has been widely used for different applications, such as
source separation, image classification, image denoising and more. This paper
presents a new algorithm for segmentation of an image into background and
foreground text and graphics using sparse decomposition and total variation
minimization. The proposed method is designed based on the assumption that the
background part of the image is smoothly varying and can be represented by a
linear combination of a few smoothly varying basis functions, while the
foreground text and graphics can be modeled with a sparse component overlaid on
the smooth background. The background and foreground are separated using a
sparse decomposition framework regularized with a few suitable regularization
terms which promote the sparsity and connectivity of foreground pixels. This
algorithm has been tested on a dataset of images extracted from HEVC standard
test sequences for screen content coding, and is shown to have superior
performance over some prior methods, including least absolute deviation
fitting, k-means clustering based segmentation in DjVu and shape primitive
extraction and coding (SPEC) algorithm.
| no_new_dataset | 0.946892 |
1604.01753 | Gunnar Sigurdsson | Gunnar A. Sigurdsson, G\"ul Varol, Xiaolong Wang, Ali Farhadi, Ivan
Laptev, Abhinav Gupta | Hollywood in Homes: Crowdsourcing Data Collection for Activity
Understanding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer vision has a great potential to help our daily lives by searching
for lost keys, watering flowers or reminding us to take a pill. To succeed with
such tasks, computer vision methods need to be trained from real and diverse
examples of our daily dynamic scenes. While most of such scenes are not
particularly exciting, they typically do not appear on YouTube, in movies or TV
broadcasts. So how do we collect sufficiently many diverse but boring samples
representing our lives? We propose a novel Hollywood in Homes approach to
collect such data. Instead of shooting videos in the lab, we ensure diversity
by distributing and crowdsourcing the whole process of video creation from
script writing to video recording and annotation. Following this procedure we
collect a new dataset, Charades, with hundreds of people recording videos in
their own homes, acting out casual everyday activities. The dataset is composed
of 9,848 annotated videos with an average length of 30 seconds, showing
activities of 267 people from three continents. Each video is annotated by
multiple free-text descriptions, action labels, action intervals and classes of
interacted objects. In total, Charades provides 27,847 video descriptions,
66,500 temporally localized intervals for 157 action classes and 41,104 labels
for 46 object classes. Using this rich data, we evaluate and provide baseline
results for several tasks including action recognition and automatic
description generation. We believe that the realism, diversity, and casual
nature of this dataset will present unique challenges and new opportunities for
the computer vision community.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 19:56:04 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jul 2016 19:57:24 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Jul 2016 22:49:22 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Sigurdsson",
"Gunnar A.",
""
],
[
"Varol",
"Gül",
""
],
[
"Wang",
"Xiaolong",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Gupta",
"Abhinav",
""
]
] | TITLE: Hollywood in Homes: Crowdsourcing Data Collection for Activity
Understanding
ABSTRACT: Computer vision has a great potential to help our daily lives by searching
for lost keys, watering flowers or reminding us to take a pill. To succeed with
such tasks, computer vision methods need to be trained from real and diverse
examples of our daily dynamic scenes. While most of such scenes are not
particularly exciting, they typically do not appear on YouTube, in movies or TV
broadcasts. So how do we collect sufficiently many diverse but boring samples
representing our lives? We propose a novel Hollywood in Homes approach to
collect such data. Instead of shooting videos in the lab, we ensure diversity
by distributing and crowdsourcing the whole process of video creation from
script writing to video recording and annotation. Following this procedure we
collect a new dataset, Charades, with hundreds of people recording videos in
their own homes, acting out casual everyday activities. The dataset is composed
of 9,848 annotated videos with an average length of 30 seconds, showing
activities of 267 people from three continents. Each video is annotated by
multiple free-text descriptions, action labels, action intervals and classes of
interacted objects. In total, Charades provides 27,847 video descriptions,
66,500 temporally localized intervals for 157 action classes and 41,104 labels
for 46 object classes. Using this rich data, we evaluate and provide baseline
results for several tasks including action recognition and automatic
description generation. We believe that the realism, diversity, and casual
nature of this dataset will present unique challenges and new opportunities for
the computer vision community.
| new_dataset | 0.959421 |
1606.01621 | Shu Kong | Shu Kong, Xiaohui Shen, Zhe Lin, Radomir Mech, Charless Fowlkes | Photo Aesthetics Ranking Network with Attributes and Content Adaptation | null | null | null | null | cs.CV cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world applications could benefit from the ability to automatically
generate a fine-grained ranking of photo aesthetics. However, previous methods
for image aesthetics analysis have primarily focused on the coarse, binary
categorization of images into high- or low-aesthetic categories. In this work,
we propose to learn a deep convolutional neural network to rank photo
aesthetics in which the relative ranking of photo aesthetics are directly
modeled in the loss function. Our model incorporates joint learning of
meaningful photographic attributes and image content information which can help
regularize the complicated photo aesthetics rating problem.
To train and analyze this model, we have assembled a new aesthetics and
attributes database (AADB) which contains aesthetic scores and meaningful
attributes assigned to each image by multiple human raters. Anonymized rater
identities are recorded across images allowing us to exploit intra-rater
consistency using a novel sampling strategy when computing the ranking loss of
training image pairs. We show the proposed sampling strategy is very effective
and robust in the face of subjective judgement of image aesthetics by individuals
with different aesthetic tastes. Experiments demonstrate that our unified model
can generate aesthetic rankings that are more consistent with human ratings. To
further validate our model, we show that by simply thresholding the estimated
aesthetic scores, we are able to achieve state-of-the-art classification
performance on the existing AVA dataset benchmark.
| [
{
"version": "v1",
"created": "Mon, 6 Jun 2016 06:14:00 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2016 00:20:07 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Kong",
"Shu",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Lin",
"Zhe",
""
],
[
"Mech",
"Radomir",
""
],
[
"Fowlkes",
"Charless",
""
]
] | TITLE: Photo Aesthetics Ranking Network with Attributes and Content Adaptation
ABSTRACT: Real-world applications could benefit from the ability to automatically
generate a fine-grained ranking of photo aesthetics. However, previous methods
for image aesthetics analysis have primarily focused on the coarse, binary
categorization of images into high- or low-aesthetic categories. In this work,
we propose to learn a deep convolutional neural network to rank photo
aesthetics in which the relative ranking of photo aesthetics are directly
modeled in the loss function. Our model incorporates joint learning of
meaningful photographic attributes and image content information which can help
regularize the complicated photo aesthetics rating problem.
To train and analyze this model, we have assembled a new aesthetics and
attributes database (AADB) which contains aesthetic scores and meaningful
attributes assigned to each image by multiple human raters. Anonymized rater
identities are recorded across images allowing us to exploit intra-rater
consistency using a novel sampling strategy when computing the ranking loss of
training image pairs. We show the proposed sampling strategy is very effective
and robust in the face of subjective judgement of image aesthetics by individuals
with different aesthetic tastes. Experiments demonstrate that our unified model
can generate aesthetic rankings that are more consistent with human ratings. To
further validate our model, we show that by simply thresholding the estimated
aesthetic scores, we are able to achieve state-of-the-art classification
performance on the existing AVA dataset benchmark.
| new_dataset | 0.61181 |
1607.07988 | Gernot Riegler | Gernot Riegler, Matthias R\"uther, Horst Bischof | ATGV-Net: Accurate Depth Super-Resolution | ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we present a novel approach for single depth map
super-resolution. Modern consumer depth sensors, especially Time-of-Flight
sensors, produce dense depth measurements, but are affected by noise and have a
low lateral resolution. We propose a method that combines the benefits of
recent advances in machine learning based single image super-resolution, i.e.
deep convolutional networks, with a variational method to recover accurate
high-resolution depth maps. In particular, we integrate a variational method
that models the piecewise affine structures apparent in depth data via an
anisotropic total generalized variation regularization term on top of a deep
network. We call our method ATGV-Net and train it end-to-end by unrolling the
optimization procedure of the variational method. To train deep networks, a
large corpus of training data with accurate ground-truth is required. We
demonstrate that it is feasible to train our method solely on synthetic data
that we generate in large quantities for this task. Our evaluations show that
we achieve state-of-the-art results on three different benchmarks, as well as
on a challenging Time-of-Flight dataset, all without utilizing an additional
intensity image as guidance.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 07:29:08 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Riegler",
"Gernot",
""
],
[
"Rüther",
"Matthias",
""
],
[
"Bischof",
"Horst",
""
]
] | TITLE: ATGV-Net: Accurate Depth Super-Resolution
ABSTRACT: In this work we present a novel approach for single depth map
super-resolution. Modern consumer depth sensors, especially Time-of-Flight
sensors, produce dense depth measurements, but are affected by noise and have a
low lateral resolution. We propose a method that combines the benefits of
recent advances in machine learning based single image super-resolution, i.e.
deep convolutional networks, with a variational method to recover accurate
high-resolution depth maps. In particular, we integrate a variational method
that models the piecewise affine structures apparent in depth data via an
anisotropic total generalized variation regularization term on top of a deep
network. We call our method ATGV-Net and train it end-to-end by unrolling the
optimization procedure of the variational method. To train deep networks, a
large corpus of training data with accurate ground-truth is required. We
demonstrate that it is feasible to train our method solely on synthetic data
that we generate in large quantities for this task. Our evaluations show that
we achieve state-of-the-art results on three different benchmarks, as well as
on a challenging Time-of-Flight dataset, all without utilizing an additional
intensity image as guidance.
| no_new_dataset | 0.943295 |
1607.08085 | Maxime Bucher | Maxime Bucher (Palaiseau), St\'ephane Herbin (Palaiseau), Fr\'ed\'eric
Jurie | Improving Semantic Embedding Consistency by Metric Learning for
Zero-Shot Classification | in ECCV 2016, Oct 2016, amsterdam, Netherlands. 2016 | null | null | null | cs.CV cs.AI cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the task of zero-shot image classification. The key
contribution of the proposed approach is to control the semantic embedding of
images -- one of the main ingredients of zero-shot learning -- by formulating
it as a metric learning problem. The optimized empirical criterion associates
two types of sub-task constraints: metric discriminating capacity and accurate
attribute prediction. This results in a novel expression of zero-shot learning
not requiring the notion of class in the training phase: only pairs of
image/attributes, augmented with a consistency indicator, are given as ground
truth. At test time, the learned model can predict the consistency of a test
image with a given set of attributes, allowing flexible ways to produce
recognition inferences. Despite its simplicity, the proposed approach gives
state-of-the-art results on four challenging datasets used for zero-shot
recognition evaluation.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 13:35:16 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Bucher",
"Maxime",
"",
"Palaiseau"
],
[
"Herbin",
"Stéphane",
"",
"Palaiseau"
],
[
"Jurie",
"Frédéric",
""
]
] | TITLE: Improving Semantic Embedding Consistency by Metric Learning for
Zero-Shot Classification
ABSTRACT: This paper addresses the task of zero-shot image classification. The key
contribution of the proposed approach is to control the semantic embedding of
images -- one of the main ingredients of zero-shot learning -- by formulating
it as a metric learning problem. The optimized empirical criterion associates
two types of sub-task constraints: metric discriminating capacity and accurate
attribute prediction. This results in a novel expression of zero-shot learning
not requiring the notion of class in the training phase: only pairs of
image/attributes, augmented with a consistency indicator, are given as ground
truth. At test time, the learned model can predict the consistency of a test
image with a given set of attributes, allowing flexible ways to produce
recognition inferences. Despite its simplicity, the proposed approach gives
state-of-the-art results on four challenging datasets used for zero-shot
recognition evaluation.
| no_new_dataset | 0.946151 |
1607.08087 | Suleiman Yerima | Suleiman Y. Yerima, Sakir Sezer, Igor Muttik | Android Malware Detection: an Eigenspace Analysis Approach | 7 pages, 4 figures, conference | null | 10.1109/SAI.2015.7237302 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The battle to mitigate Android malware has become more critical with the
emergence of new strains incorporating increasingly sophisticated evasion
techniques, in turn necessitating more advanced detection capabilities. Hence,
in this paper we propose and evaluate a machine learning based approach based
on eigenspace analysis for Android malware detection using features derived
from static analysis characterization of Android applications. Empirical
evaluation with a dataset of real malware and benign samples shows that a
detection rate of over 96% with a very low false positive rate is achievable
using the proposed method.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 13:37:54 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Yerima",
"Suleiman Y.",
""
],
[
"Sezer",
"Sakir",
""
],
[
"Muttik",
"Igor",
""
]
] | TITLE: Android Malware Detection: an Eigenspace Analysis Approach
ABSTRACT: The battle to mitigate Android malware has become more critical with the
emergence of new strains incorporating increasingly sophisticated evasion
techniques, in turn necessitating more advanced detection capabilities. Hence,
in this paper we propose and evaluate a machine learning based approach based
on eigenspace analysis for Android malware detection using features derived
from static analysis characterization of Android applications. Empirical
evaluation with a dataset of real malware and benign samples shows that a
detection rate of over 96% with a very low false positive rate is achievable
using the proposed method.
| no_new_dataset | 0.927822 |
1607.08128 | Angjoo Kanazawa | Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler,
Javier Romero, Michael J. Black | Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a
Single Image | To appear in ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe the first method to automatically estimate the 3D pose of the
human body as well as its 3D shape from a single unconstrained image. We
estimate a full 3D mesh and show that 2D joints alone carry a surprising amount
of information about body shape. The problem is challenging because of the
complexity of the human body, articulation, occlusion, clothing, lighting, and
the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a
recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D
body joint locations. We then fit (top-down) a recently published statistical
body shape model, called SMPL, to the 2D joints. We do so by minimizing an
objective function that penalizes the error between the projected 3D model
joints and detected 2D joints. Because SMPL captures correlations in human
shape across the population, we are able to robustly fit it to very little
data. We further leverage the 3D model to prevent solutions that cause
interpenetration. We evaluate our method, SMPLify, on the Leeds Sports,
HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect
to the state of the art.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 14:46:36 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Bogo",
"Federica",
""
],
[
"Kanazawa",
"Angjoo",
""
],
[
"Lassner",
"Christoph",
""
],
[
"Gehler",
"Peter",
""
],
[
"Romero",
"Javier",
""
],
[
"Black",
"Michael J.",
""
]
] | TITLE: Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a
Single Image
ABSTRACT: We describe the first method to automatically estimate the 3D pose of the
human body as well as its 3D shape from a single unconstrained image. We
estimate a full 3D mesh and show that 2D joints alone carry a surprising amount
of information about body shape. The problem is challenging because of the
complexity of the human body, articulation, occlusion, clothing, lighting, and
the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a
recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D
body joint locations. We then fit (top-down) a recently published statistical
body shape model, called SMPL, to the 2D joints. We do so by minimizing an
objective function that penalizes the error between the projected 3D model
joints and detected 2D joints. Because SMPL captures correlations in human
shape across the population, we are able to robustly fit it to very little
data. We further leverage the 3D model to prevent solutions that cause
interpenetration. We evaluate our method, SMPLify, on the Leeds Sports,
HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect
to the state of the art.
| no_new_dataset | 0.941331 |
1607.08188 | Yehezkel Resheff | Yehezkel S. Resheff | Online Trajectory Segmentation and Summary With Applications to
Visualization and Retrieval | null | null | null | null | cs.CV cs.DB stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trajectory segmentation is the process of subdividing a trajectory into parts
either by grouping points similar with respect to some measure of interest, or
by minimizing a global objective function. Here we present a novel online
algorithm for segmentation and summary, based on point density along the
trajectory, and based on the nature of the naturally occurring structure of
intermittent bouts of locomotive and local activity. We show an application to
visualization of trajectory datasets, and discuss the use of the summary as an
index allowing efficient queries which are otherwise impossible or
computationally expensive, over very large datasets.
| [
{
"version": "v1",
"created": "Sun, 24 Jul 2016 14:50:45 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Resheff",
"Yehezkel S.",
""
]
] | TITLE: Online Trajectory Segmentation and Summary With Applications to
Visualization and Retrieval
ABSTRACT: Trajectory segmentation is the process of subdividing a trajectory into parts
either by grouping points similar with respect to some measure of interest, or
by minimizing a global objective function. Here we present a novel online
algorithm for segmentation and summary, based on point density along the
trajectory, and based on the nature of the naturally occurring structure of
intermittent bouts of locomotive and local activity. We show an application to
visualization of trajectory datasets, and discuss the use of the summary as an
index allowing efficient queries which are otherwise impossible or
computationally expensive, over very large datasets.
| no_new_dataset | 0.945651 |
1607.08196 | Lili Tao | Lili Tao, Tilo Burghardt, Majid Mirmehdi, Dima Damen, Ashley Cooper,
Sion Hannuna, Massimo Camplani, Adeline Paiement, Ian Craddock | Calorie Counter: RGB-Depth Visual Estimation of Energy Expenditure at
Home | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new framework for vision-based estimation of calorific
expenditure from RGB-D data - the first that is validated on physical gas
exchange measurements and applied to daily living scenarios. Deriving a
person's energy expenditure from sensors is an important tool in tracking
physical activity levels for health and lifestyle monitoring. Most existing
methods use metabolic lookup tables (METs) for a manual estimate or systems
with inertial sensors which ultimately require users to wear devices. In
contrast, the proposed pose-invariant and individual-independent vision
framework allows for a remote estimation of calorific expenditure. We
introduce, and evaluate our approach on, a new dataset called SPHERE-calorie,
for which visual estimates can be compared against simultaneously obtained,
indirect calorimetry measures based on gas exchange. We conclude from our
experiments that the proposed vision pipeline is
suitable for home monitoring in a controlled environment, with calorific
expenditure estimates above accuracy levels of commonly used manual estimations
via METs. With the dataset released, our work establishes a baseline for future
research for this little-explored area of computer vision.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 17:47:44 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Tao",
"Lili",
""
],
[
"Burghardt",
"Tilo",
""
],
[
"Mirmehdi",
"Majid",
""
],
[
"Damen",
"Dima",
""
],
[
"Cooper",
"Ashley",
""
],
[
"Hannuna",
"Sion",
""
],
[
"Camplani",
"Massimo",
""
],
[
"Paiement",
"Adeline",
""
],
[
"Craddock",
"Ian",
""
]
] | TITLE: Calorie Counter: RGB-Depth Visual Estimation of Energy Expenditure at
Home
ABSTRACT: We present a new framework for vision-based estimation of calorific
expenditure from RGB-D data - the first that is validated on physical gas
exchange measurements and applied to daily living scenarios. Deriving a
person's energy expenditure from sensors is an important tool in tracking
physical activity levels for health and lifestyle monitoring. Most existing
methods use metabolic lookup tables (METs) for a manual estimate or systems
with inertial sensors which ultimately require users to wear devices. In
contrast, the proposed pose-invariant and individual-independent vision
framework allows for a remote estimation of calorific expenditure. We
introduce, and evaluate our approach on, a new dataset called SPHERE-calorie,
for which visual estimates can be compared against simultaneously obtained,
indirect calorimetry measures based on gas exchange. We conclude from our
experiments that the proposed vision pipeline is
suitable for home monitoring in a controlled environment, with calorific
expenditure estimates above accuracy levels of commonly used manual estimations
via METs. With the dataset released, our work establishes a baseline for future
research for this little-explored area of computer vision.
| new_dataset | 0.960988 |
1607.08221 | Yandong Guo | Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, Jianfeng Gao | MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we design a benchmark task and provide the associated datasets
for recognizing face images and link them to corresponding entity keys in a
knowledge base. More specifically, we propose a benchmark task to recognize one
million celebrities from their face images, by using all the possibly collected
face images of this individual on the web as training data. The rich
information provided by the knowledge base helps to conduct disambiguation and
improve the recognition accuracy, and contributes to various real-world
applications, such as image captioning and news video analysis. Associated with
this task, we design and provide a concrete measurement set, evaluation protocol,
as well as training data. We also present in detail our experiment setup and
report promising baseline results. Our benchmark task could lead to one of the
largest classification problems in computer vision. To the best of our
knowledge, our training dataset, which contains 10M images in version 1, is the
largest publicly available one in the world.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 19:18:16 GMT"
}
] | 2016-07-28T00:00:00 | [
[
"Guo",
"Yandong",
""
],
[
"Zhang",
"Lei",
""
],
[
"Hu",
"Yuxiao",
""
],
[
"He",
"Xiaodong",
""
],
[
"Gao",
"Jianfeng",
""
]
] | TITLE: MS-Celeb-1M: A Dataset and Benchmark for Large-Scale Face Recognition
ABSTRACT: In this paper, we design a benchmark task and provide the associated datasets
for recognizing face images and link them to corresponding entity keys in a
knowledge base. More specifically, we propose a benchmark task to recognize one
million celebrities from their face images, by using all the possibly collected
face images of this individual on the web as training data. The rich
information provided by the knowledge base helps to conduct disambiguation and
improve the recognition accuracy, and contributes to various real-world
applications, such as image captioning and news video analysis. Associated with
this task, we design and provide a concrete measurement set, evaluation protocol,
as well as training data. We also present in detail our experiment setup and
report promising baseline results. Our benchmark task could lead to one of the
largest classification problems in computer vision. To the best of our
knowledge, our training dataset, which contains 10M images in version 1, is the
largest publicly available one in the world.
| new_dataset | 0.956513 |
1512.00795 | Xiaolong Wang | Xiaolong Wang, Ali Farhadi, Abhinav Gupta | Actions ~ Transformations | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What defines an action like "kicking ball"? We argue that the true meaning of
an action lies in the change or transformation an action brings to the
environment. In this paper, we propose a novel representation for actions by
modeling an action as a transformation which changes the state of the
environment before the action happens (precondition) to the state after the
action (effect). Motivated by recent advancements of video representation using
deep learning, we design a Siamese network which models the action as a
transformation on a high-level feature space. We show that our model gives
improvements on standard action recognition datasets including UCF101 and
HMDB51. More importantly, our approach is able to generalize beyond learned
action categories and shows significant performance improvement on
cross-category generalization on our new ACT dataset.
| [
{
"version": "v1",
"created": "Wed, 2 Dec 2015 18:17:32 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 04:51:49 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Wang",
"Xiaolong",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Gupta",
"Abhinav",
""
]
] | TITLE: Actions ~ Transformations
ABSTRACT: What defines an action like "kicking ball"? We argue that the true meaning of
an action lies in the change or transformation an action brings to the
environment. In this paper, we propose a novel representation for actions by
modeling an action as a transformation which changes the state of the
environment before the action happens (precondition) to the state after the
action (effect). Motivated by recent advancements of video representation using
deep learning, we design a Siamese network which models the action as a
transformation on a high-level feature space. We show that our model gives
improvements on standard action recognition datasets including UCF101 and
HMDB51. More importantly, our approach is able to generalize beyond learned
action categories and shows significant performance improvement on
cross-category generalization on our new ACT dataset.
| new_dataset | 0.962285 |
1603.07076 | Albert Haque | Albert Haque, Boya Peng, Zelun Luo, Alexandre Alahi, Serena Yeung, Li
Fei-Fei | Towards Viewpoint Invariant 3D Human Pose Estimation | European Conference on Computer Vision (ECCV) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a viewpoint invariant model for 3D human pose estimation from a
single depth image. To achieve this, our discriminative model embeds local
regions into a learned viewpoint invariant feature space. Formulated as a
multi-task learning problem, our model is able to selectively predict partial
poses in the presence of noise and occlusion. Our approach leverages a
convolutional and recurrent network architecture with a top-down error feedback
mechanism to self-correct previous pose estimates in an end-to-end manner. We
evaluate our model on a previously published depth dataset and a newly
collected human pose dataset containing 100K annotated depth images from
extreme viewpoints. Experiments show that our model achieves competitive
performance on frontal views while achieving state-of-the-art performance on
alternate viewpoints.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 06:24:19 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2016 01:45:58 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Jul 2016 06:59:37 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Haque",
"Albert",
""
],
[
"Peng",
"Boya",
""
],
[
"Luo",
"Zelun",
""
],
[
"Alahi",
"Alexandre",
""
],
[
"Yeung",
"Serena",
""
],
[
"Fei-Fei",
"Li",
""
]
] | TITLE: Towards Viewpoint Invariant 3D Human Pose Estimation
ABSTRACT: We propose a viewpoint invariant model for 3D human pose estimation from a
single depth image. To achieve this, our discriminative model embeds local
regions into a learned viewpoint invariant feature space. Formulated as a
multi-task learning problem, our model is able to selectively predict partial
poses in the presence of noise and occlusion. Our approach leverages a
convolutional and recurrent network architecture with a top-down error feedback
mechanism to self-correct previous pose estimates in an end-to-end manner. We
evaluate our model on a previously published depth dataset and a newly
collected human pose dataset containing 100K annotated depth images from
extreme viewpoints. Experiments show that our model achieves competitive
performance on frontal views while achieving state-of-the-art performance on
alternate viewpoints.
| new_dataset | 0.957437 |
1603.08561 | Ishan Misra | Ishan Misra and C. Lawrence Zitnick and Martial Hebert | Shuffle and Learn: Unsupervised Learning using Temporal Order
Verification | Accepted at ECCV 2016 | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an approach for learning a visual representation
from the raw spatiotemporal signals in videos. Our representation is learned
without supervision from semantic labels. We formulate our method as an
unsupervised sequential verification task, i.e., we determine whether a
sequence of frames from a video is in the correct temporal order. With this
simple task and no semantic labels, we learn a powerful visual representation
using a Convolutional Neural Network (CNN). The representation contains
complementary information to that learned from supervised image datasets like
ImageNet. Qualitative results show that our method captures information that is
temporally varying, such as human pose. When used as pre-training for action
recognition, our method gives significant gains over learning without external
data on benchmark datasets like UCF101 and HMDB51. To demonstrate its
sensitivity to human pose, we show results for pose estimation on the FLIC and
MPII datasets that are competitive, or better than approaches using
significantly more supervision. Our method can be combined with supervised
representations to provide an additional boost in accuracy.
| [
{
"version": "v1",
"created": "Mon, 28 Mar 2016 21:00:43 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 17:26:01 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Misra",
"Ishan",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Hebert",
"Martial",
""
]
] | TITLE: Shuffle and Learn: Unsupervised Learning using Temporal Order
Verification
ABSTRACT: In this paper, we present an approach for learning a visual representation
from the raw spatiotemporal signals in videos. Our representation is learned
without supervision from semantic labels. We formulate our method as an
unsupervised sequential verification task, i.e., we determine whether a
sequence of frames from a video is in the correct temporal order. With this
simple task and no semantic labels, we learn a powerful visual representation
using a Convolutional Neural Network (CNN). The representation contains
complementary information to that learned from supervised image datasets like
ImageNet. Qualitative results show that our method captures information that is
temporally varying, such as human pose. When used as pre-training for action
recognition, our method gives significant gains over learning without external
data on benchmark datasets like UCF101 and HMDB51. To demonstrate its
sensitivity to human pose, we show results for pose estimation on the FLIC and
MPII datasets that are competitive, or better than approaches using
significantly more supervision. Our method can be combined with supervised
representations to provide an additional boost in accuracy.
| no_new_dataset | 0.948728 |
1604.01360 | Lerrel Pinto Mr | Lerrel Pinto, Dhiraj Gandhi, Yuanfeng Han, Yong-Lae Park, Abhinav
Gupta | The Curious Robot: Learning Visual Representations via Physical
Interactions | null | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What is the right supervisory signal to train visual representations? Current
approaches in computer vision use category labels from datasets such as
ImageNet to train ConvNets. However, in the case of biological agents, visual
representation learning does not require millions of semantic labels. We argue
that biological agents use physical interactions with the world to learn visual
representations unlike current vision systems which just use passive
observations (images and videos downloaded from web). For example, babies push
objects, poke them, put them in their mouth and throw them to learn
representations. Towards this goal, we build one of the first systems on a
Baxter platform that pushes, pokes, grasps and observes objects in a tabletop
environment. It uses four different types of physical interactions to collect
more than 130K datapoints, with each datapoint providing supervision to a
shared ConvNet architecture allowing us to learn visual representations. We
show the quality of learned representations by observing neuron activations and
performing nearest neighbor retrieval on this learned representation.
Quantitatively, we evaluate our learned ConvNet on image classification tasks
and show improvements compared to learning without external data. Finally, on
the task of instance retrieval, our network outperforms the ImageNet network on
recall@1 by 3%.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 18:47:15 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 03:30:44 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Pinto",
"Lerrel",
""
],
[
"Gandhi",
"Dhiraj",
""
],
[
"Han",
"Yuanfeng",
""
],
[
"Park",
"Yong-Lae",
""
],
[
"Gupta",
"Abhinav",
""
]
] | TITLE: The Curious Robot: Learning Visual Representations via Physical
Interactions
ABSTRACT: What is the right supervisory signal to train visual representations? Current
approaches in computer vision use category labels from datasets such as
ImageNet to train ConvNets. However, in the case of biological agents, visual
representation learning does not require millions of semantic labels. We argue
that biological agents use physical interactions with the world to learn visual
representations unlike current vision systems which just use passive
observations (images and videos downloaded from web). For example, babies push
objects, poke them, put them in their mouth and throw them to learn
representations. Towards this goal, we build one of the first systems on a
Baxter platform that pushes, pokes, grasps and observes objects in a tabletop
environment. It uses four different types of physical interactions to collect
more than 130K datapoints, with each datapoint providing supervision to a
shared ConvNet architecture allowing us to learn visual representations. We
show the quality of learned representations by observing neuron activations and
performing nearest neighbor retrieval on this learned representation.
Quantitatively, we evaluate our learned ConvNet on image classification tasks
and show improvements compared to learning without external data. Finally, on
the task of instance retrieval, our network outperforms the ImageNet network on
recall@1 by 3%.
| no_new_dataset | 0.940079 |
1604.02245 | Matthias Limmer | Matthias Limmer and Hendrik P.A. Lensch | Infrared Colorization Using Deep Convolutional Neural Networks | 8 pages, 11 figures, 1 table, submitted to ICMLA2016 | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a method for transferring the RGB color spectrum to
near-infrared (NIR) images using deep multi-scale convolutional neural
networks. A direct and integrated transfer between NIR and RGB pixels is
trained. The trained model does not require any user guidance or a reference
image database in the recall phase to produce images with a natural appearance.
To preserve the rich details of the NIR image, its high frequency features are
transferred to the estimated RGB image. The presented approach is trained and
evaluated on a real-world dataset containing a large amount of road scene
images in summer. The dataset was captured by a multi-CCD NIR/RGB camera, which
ensures a perfect pixel to pixel registration.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 07:10:47 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2016 13:07:59 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Jul 2016 09:35:51 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Limmer",
"Matthias",
""
],
[
"Lensch",
"Hendrik P. A.",
""
]
] | TITLE: Infrared Colorization Using Deep Convolutional Neural Networks
ABSTRACT: This paper proposes a method for transferring the RGB color spectrum to
near-infrared (NIR) images using deep multi-scale convolutional neural
networks. A direct and integrated transfer between NIR and RGB pixels is
trained. The trained model does not require any user guidance or a reference
image database in the recall phase to produce images with a natural appearance.
To preserve the rich details of the NIR image, its high frequency features are
transferred to the estimated RGB image. The presented approach is trained and
evaluated on a real-world dataset containing a large amount of road scene
images in summer. The dataset was captured by a multi-CCD NIR/RGB camera, which
ensures a perfect pixel to pixel registration.
| no_new_dataset | 0.933066 |
1604.04783 | Haluk O. Bingol | Haluk O. Bingol and Omer Basar | Compatibility of Mating Preferences | 8 pages, 3 figures | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human mating is a complex phenomenon. Although men and women have different
preferences in mate selection, there should be compatibility in these
preferences since human mating requires agreement of both parties. We
investigate how compatible the mating preferences of men and women are in a
given property such as age, height, education and income. We use the dataset of a
large online dating site (N = 44,255 users). (i) Our findings are based on the
"actual behavior" of users trying to find a date online, rather than questions
about a "hypothetical" partner as in surveys. (ii) We confirm that women and
men have different mating preferences. Women prefer taller and older men with
better education and higher income than themselves. Men prefer just the
opposite. (iii) Our findings indicate that these differences complement each
other. (iv) Highest compatibility is observed in income with 95 %. This might
be an indication that income is in the process of becoming more important than
other properties, including age, in our modern society. (v) An evolutionary
model is developed which produces similar results.
| [
{
"version": "v1",
"created": "Sat, 16 Apr 2016 18:18:57 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 19:57:17 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Bingol",
"Haluk O.",
""
],
[
"Basar",
"Omer",
""
]
] | TITLE: Compatibility of Mating Preferences
ABSTRACT: Human mating is a complex phenomenon. Although men and women have different
preferences in mate selection, there should be compatibility in these
preferences since human mating requires agreement of both parties. We
investigate how compatible the mating preferences of men and women are in a
given property such as age, height, education and income. We use the dataset of a
large online dating site (N = 44,255 users). (i) Our findings are based on the
"actual behavior" of users trying to find a date online, rather than questions
about a "hypothetical" partner as in surveys. (ii) We confirm that women and
men have different mating preferences. Women prefer taller and older men with
better education and higher income than themselves. Men prefer just the
opposite. (iii) Our findings indicate that these differences complement each
other. (iv) Highest compatibility is observed in income with 95 %. This might
be an indication that income is in the process of becoming more important than
other properties, including age, in our modern society. (v) An evolutionary
model is developed which produces similar results.
| no_new_dataset | 0.943556 |
1604.05000 | Zhen Li | Zhen Li, Yukang Gan, Xiaodan Liang, Yizhou Yu, Hui Cheng and Liang Lin | LSTM-CF: Unifying Context Modeling and Fusion with LSTMs for RGB-D Scene
Labeling | 17 pages, accepted by ECCV | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic labeling of RGB-D scenes is crucial to many intelligent applications
including perceptual robotics. It generates pixelwise and fine-grained label
maps from simultaneously sensed photometric (RGB) and depth channels. This
paper addresses this problem by i) developing a novel Long Short-Term Memorized
Context Fusion (LSTM-CF) Model that captures and fuses contextual information
from multiple channels of photometric and depth data, and ii) incorporating
this model into deep convolutional neural networks (CNNs) for end-to-end
training. Specifically, contexts in photometric and depth channels are,
respectively, captured by stacking several convolutional layers and a long
short-term memory layer; the memory layer encodes both short-range and
long-range spatial dependencies in an image along the vertical direction.
Another long short-term memorized fusion layer is set up to integrate the
contexts along the vertical direction from different channels, and perform
bi-directional propagation of the fused vertical contexts along the horizontal
direction to obtain true 2D global contexts. At last, the fused contextual
representation is concatenated with the convolutional features extracted from
the photometric channels in order to improve the accuracy of fine-scale
semantic labeling. Our proposed model has set a new state of the art, i.e.,
48.1% and 49.4% average class accuracy over 37 categories (2.2% and 5.4%
improvement) on the large-scale SUNRGBD dataset and the NYUDv2 dataset,
respectively.
| [
{
"version": "v1",
"created": "Mon, 18 Apr 2016 05:59:50 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Apr 2016 08:15:19 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Jul 2016 16:46:43 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Li",
"Zhen",
""
],
[
"Gan",
"Yukang",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Yu",
"Yizhou",
""
],
[
"Cheng",
"Hui",
""
],
[
"Lin",
"Liang",
""
]
] | TITLE: LSTM-CF: Unifying Context Modeling and Fusion with LSTMs for RGB-D Scene
Labeling
ABSTRACT: Semantic labeling of RGB-D scenes is crucial to many intelligent applications
including perceptual robotics. It generates pixelwise and fine-grained label
maps from simultaneously sensed photometric (RGB) and depth channels. This
paper addresses this problem by i) developing a novel Long Short-Term Memorized
Context Fusion (LSTM-CF) Model that captures and fuses contextual information
from multiple channels of photometric and depth data, and ii) incorporating
this model into deep convolutional neural networks (CNNs) for end-to-end
training. Specifically, contexts in photometric and depth channels are,
respectively, captured by stacking several convolutional layers and a long
short-term memory layer; the memory layer encodes both short-range and
long-range spatial dependencies in an image along the vertical direction.
Another long short-term memorized fusion layer is set up to integrate the
contexts along the vertical direction from different channels, and perform
bi-directional propagation of the fused vertical contexts along the horizontal
direction to obtain true 2D global contexts. At last, the fused contextual
representation is concatenated with the convolutional features extracted from
the photometric channels in order to improve the accuracy of fine-scale
semantic labeling. Our proposed model has set a new state of the art, i.e.,
48.1% and 49.4% average class accuracy over 37 categories (2.2% and 5.4%
improvement) on the large-scale SUNRGBD dataset and the NYUDv2 dataset,
respectively.
| no_new_dataset | 0.954308 |
1604.05633 | Yanghao Li | Yanghao Li, Cuiling Lan, Junliang Xing, Wenjun Zeng, Chunfeng Yuan and
Jiaying Liu | Online Human Action Detection using Joint Classification-Regression
Recurrent Neural Networks | 2016 ECCV Conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human action recognition from well-segmented 3D skeleton data has been
intensively studied and has been attracting increasing attention. Online
action detection goes one step further and is more challenging, which
identifies the action type and localizes the action positions on the fly from
the untrimmed stream data. In this paper, we study the problem of online action
detection from streaming skeleton data. We propose a multi-task end-to-end
Joint Classification-Regression Recurrent Neural Network to better explore the
action type and temporal localization information. By employing a joint
classification and regression optimization objective, this network is capable
of automatically localizing the start and end points of actions more
accurately. Specifically, by leveraging the merits of the deep Long Short-Term
Memory (LSTM) subnetwork, the proposed model automatically captures the complex
long-range temporal dynamics, which naturally avoids the typical sliding window
design and thus ensures high computational efficiency. Furthermore, the subtask
of regression optimization provides the ability to forecast the action prior to
its occurrence. To evaluate our proposed model, we build a large streaming
video dataset with annotations. Experimental results on our dataset and the
public G3D dataset both demonstrate very promising performance of our scheme.
| [
{
"version": "v1",
"created": "Tue, 19 Apr 2016 15:58:56 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 15:54:07 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Li",
"Yanghao",
""
],
[
"Lan",
"Cuiling",
""
],
[
"Xing",
"Junliang",
""
],
[
"Zeng",
"Wenjun",
""
],
[
"Yuan",
"Chunfeng",
""
],
[
"Liu",
"Jiaying",
""
]
] | TITLE: Online Human Action Detection using Joint Classification-Regression
Recurrent Neural Networks
ABSTRACT: Human action recognition from well-segmented 3D skeleton data has been
intensively studied and has been attracting increasing attention. Online
action detection goes one step further and is more challenging, which
identifies the action type and localizes the action positions on the fly from
the untrimmed stream data. In this paper, we study the problem of online action
detection from streaming skeleton data. We propose a multi-task end-to-end
Joint Classification-Regression Recurrent Neural Network to better explore the
action type and temporal localization information. By employing a joint
classification and regression optimization objective, this network is capable
of automatically localizing the start and end points of actions more
accurately. Specifically, by leveraging the merits of the deep Long Short-Term
Memory (LSTM) subnetwork, the proposed model automatically captures the complex
long-range temporal dynamics, which naturally avoids the typical sliding window
design and thus ensures high computational efficiency. Furthermore, the subtask
of regression optimization provides the ability to forecast the action prior to
its occurrence. To evaluate our proposed model, we build a large streaming
video dataset with annotations. Experimental results on our dataset and the
public G3D dataset both demonstrate very promising performance of our scheme.
| new_dataset | 0.970155 |
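The online action detection record above combines per-frame classification with regression of start/end confidences on top of an LSTM. A compact sketch of such a joint objective in PyTorch follows; the input dimension (25 joints x 3 coordinates), hidden size, class count, and equal loss weighting are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class JointClsRegRNN(nn.Module):
        """LSTM over skeleton frames with a per-frame class head and a start/end regressor."""
        def __init__(self, in_dim=75, hidden=128, n_classes=11):
            super().__init__()
            self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
            self.cls_head = nn.Linear(hidden, n_classes)    # action label per frame
            self.reg_head = nn.Linear(hidden, 2)            # confidence of start / end point

        def forward(self, x):                               # x: (batch, time, in_dim)
            h, _ = self.lstm(x)
            return self.cls_head(h), torch.sigmoid(self.reg_head(h))

    model = JointClsRegRNN()
    frames = torch.randn(4, 100, 75)                        # 4 streams, 100 frames, 25 joints * xyz
    logits, start_end = model(frames)
    labels = torch.randint(0, 11, (4, 100))
    targets = torch.rand(4, 100, 2)                         # soft start/end confidence curves
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 11), labels.reshape(-1)) \
           + nn.MSELoss()(start_end, targets)               # joint objective
    loss.backward()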
1606.07908 | Huy Phan | Huy Phan, Lars Hertel, Marco Maass, Philipp Koch, Alfred Mertins | Label Tree Embeddings for Acoustic Scene Classification | to appear in the Proceedings of ACM Multimedia 2016 (ACMMM 2016) | null | 10.1145/2964284.2967268 | null | cs.MM cs.AI cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present in this paper an efficient approach for acoustic scene
classification by exploring the structure of class labels. Given a set of class
labels, a category taxonomy is automatically learned by collectively optimizing
a clustering of the labels into multiple meta-classes in a tree structure. An
acoustic scene instance is then embedded into a low-dimensional feature
representation which consists of the likelihoods that it belongs to the
meta-classes. We demonstrate state-of-the-art results on two different datasets
for the acoustic scene classification task, including the DCASE 2013 and LITIS
Rouen datasets.
| [
{
"version": "v1",
"created": "Sat, 25 Jun 2016 12:57:44 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 11:42:20 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Phan",
"Huy",
""
],
[
"Hertel",
"Lars",
""
],
[
"Maass",
"Marco",
""
],
[
"Koch",
"Philipp",
""
],
[
"Mertins",
"Alfred",
""
]
] | TITLE: Label Tree Embeddings for Acoustic Scene Classification
ABSTRACT: We present in this paper an efficient approach for acoustic scene
classification by exploring the structure of class labels. Given a set of class
labels, a category taxonomy is automatically learned by collectively optimizing
a clustering of the labels into multiple meta-classes in a tree structure. An
acoustic scene instance is then embedded into a low-dimensional feature
representation which consists of the likelihoods that it belongs to the
meta-classes. We demonstrate state-of-the-art results on two different datasets
for the acoustic scene classification task, including the DCASE 2013 and LITIS
Rouen datasets.
| no_new_dataset | 0.952131 |
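The label-tree record above clusters class labels into meta-classes and represents a recording by its meta-class likelihoods. The sketch below uses hierarchical clustering over per-class probability profiles as a stand-in taxonomy learner; the random Dirichlet probabilities, the correlation distance, and the choice of 4 meta-classes are all assumptions rather than the paper's procedure.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(15), size=500)       # stand-in: per-class probabilities from a base classifier

    # 1) Learn a label taxonomy: cluster the 15 class labels by their probability profiles.
    Z = linkage(pdist(P.T, metric="correlation"), method="average")
    meta = fcluster(Z, t=4, criterion="maxclust")  # each label is assigned to one of 4 meta-classes

    # 2) Embed an instance as the likelihood that it belongs to each meta-class.
    def label_tree_embedding(p, meta):
        return np.array([p[meta == m].sum() for m in np.unique(meta)])

    print(label_tree_embedding(P[0], meta))        # low-dimensional scene representation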
1607.07043 | Amir Shahroudy | Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang | Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition | null | null | null | null | cs.CV cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D action recognition - analysis of human actions based on 3D skeleton data -
becomes popular recently due to its succinctness, robustness, and
view-invariant representation. Recent attempts on this problem suggested to
develop RNN-based learning methods to model the contextual dependency in the
temporal domain. In this paper, we extend this idea to spatio-temporal domains
to analyze the hidden sources of action-related information within the input
data over both domains concurrently. Inspired by the graphical structure of the
human skeleton, we further propose a more powerful tree-structure based
traversal method. To handle the noise and occlusion in 3D skeleton data, we
introduce new gating mechanism within LSTM to learn the reliability of the
sequential input data and accordingly adjust its effect on updating the
long-term context information stored in the memory cell. Our method achieves
state-of-the-art performance on 4 challenging benchmark datasets for 3D human
action analysis.
| [
{
"version": "v1",
"created": "Sun, 24 Jul 2016 13:39:11 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Liu",
"Jun",
""
],
[
"Shahroudy",
"Amir",
""
],
[
"Xu",
"Dong",
""
],
[
"Wang",
"Gang",
""
]
] | TITLE: Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition
ABSTRACT: 3D action recognition - analysis of human actions based on 3D skeleton data -
becomes popular recently due to its succinctness, robustness, and
view-invariant representation. Recent attempts on this problem suggested to
develop RNN-based learning methods to model the contextual dependency in the
temporal domain. In this paper, we extend this idea to spatio-temporal domains
to analyze the hidden sources of action-related information within the input
data over both domains concurrently. Inspired by the graphical structure of the
human skeleton, we further propose a more powerful tree-structure based
traversal method. To handle the noise and occlusion in 3D skeleton data, we
introduce a new gating mechanism within LSTM to learn the reliability of the
sequential input data and accordingly adjust its effect on updating the
long-term context information stored in the memory cell. Our method achieves
state-of-the-art performance on 4 challenging benchmark datasets for 3D human
action analysis.
| no_new_dataset | 0.944074 |
1607.07525 | Jianming Zhang | Jianming Zhang, Shugao Ma, Mehrnoosh Sameki, Stan Sclaroff, Margrit
Betke, Zhe Lin, Xiaohui Shen, Brian Price, Radomir Mech | Salient Object Subitizing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of Salient Object Subitizing, i.e. predicting the
existence and the number of salient objects in an image using holistic cues.
This task is inspired by the ability of people to quickly and accurately
identify the number of items within the subitizing range (1-4). To this end, we
present a salient object subitizing image dataset of about 14K everyday images
which are annotated using an online crowdsourcing marketplace. We show that
using an end-to-end trained Convolutional Neural Network (CNN) model, we
achieve prediction accuracy comparable to human performance in identifying
images with zero or one salient object. For images with multiple salient
objects, our model also provides significantly better than chance performance
without requiring any localization process. Moreover, we propose a method to
improve the training of the CNN subitizing model by leveraging synthetic
images. In experiments, we demonstrate the accuracy and generalizability of our
CNN subitizing model and its applications in salient object detection and image
retrieval.
| [
{
"version": "v1",
"created": "Tue, 26 Jul 2016 02:26:01 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Zhang",
"Jianming",
""
],
[
"Ma",
"Shugao",
""
],
[
"Sameki",
"Mehrnoosh",
""
],
[
"Sclaroff",
"Stan",
""
],
[
"Betke",
"Margrit",
""
],
[
"Lin",
"Zhe",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Price",
"Brian",
""
],
[
"Mech",
"Radomir",
""
]
] | TITLE: Salient Object Subitizing
ABSTRACT: We study the problem of Salient Object Subitizing, i.e. predicting the
existence and the number of salient objects in an image using holistic cues.
This task is inspired by the ability of people to quickly and accurately
identify the number of items within the subitizing range (1-4). To this end, we
present a salient object subitizing image dataset of about 14K everyday images
which are annotated using an online crowdsourcing marketplace. We show that
using an end-to-end trained Convolutional Neural Network (CNN) model, we
achieve prediction accuracy comparable to human performance in identifying
images with zero or one salient object. For images with multiple salient
objects, our model also provides significantly better than chance performance
without requiring any localization process. Moreover, we propose a method to
improve the training of the CNN subitizing model by leveraging synthetic
images. In experiments, we demonstrate the accuracy and generalizability of our
CNN subitizing model and its applications in salient object detection and image
retrieval.
| new_dataset | 0.957912 |
1607.07614 | Marian George | Marian George, Mandar Dixit, G\'abor Zogg and Nuno Vasconcelos | Semantic Clustering for Robust Fine-Grained Scene Recognition | Accepted at the European Conference on Computer Vision (ECCV), 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In domain generalization, the knowledge learnt from one or multiple source
domains is transferred to an unseen target domain. In this work, we propose a
novel domain generalization approach for fine-grained scene recognition. We
first propose a semantic scene descriptor that jointly captures the subtle
differences between fine-grained scenes, while being robust to varying object
configurations across domains. We model the occurrence patterns of objects in
scenes, capturing the informativeness and discriminability of each object for
each scene. We then transform such occurrences into scene probabilities for
each scene image. Second, we argue that scene images belong to hidden semantic
topics that can be discovered by clustering our semantic descriptors. To
evaluate the proposed method, we propose a new fine-grained scene dataset in
cross-domain settings. Extensive experiments on the proposed dataset and three
benchmark scene datasets show the effectiveness of the proposed approach for
fine-grained scene transfer, where we outperform state-of-the-art scene
recognition and domain generalization methods.
| [
{
"version": "v1",
"created": "Tue, 26 Jul 2016 09:46:48 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"George",
"Marian",
""
],
[
"Dixit",
"Mandar",
""
],
[
"Zogg",
"Gábor",
""
],
[
"Vasconcelos",
"Nuno",
""
]
] | TITLE: Semantic Clustering for Robust Fine-Grained Scene Recognition
ABSTRACT: In domain generalization, the knowledge learnt from one or multiple source
domains is transferred to an unseen target domain. In this work, we propose a
novel domain generalization approach for fine-grained scene recognition. We
first propose a semantic scene descriptor that jointly captures the subtle
differences between fine-grained scenes, while being robust to varying object
configurations across domains. We model the occurrence patterns of objects in
scenes, capturing the informativeness and discriminability of each object for
each scene. We then transform such occurrences into scene probabilities for
each scene image. Second, we argue that scene images belong to hidden semantic
topics that can be discovered by clustering our semantic descriptors. To
evaluate the proposed method, we propose a new fine-grained scene dataset in
cross-domain settings. Extensive experiments on the proposed dataset and three
benchmark scene datasets show the effectiveness of the proposed approach for
fine-grained scene transfer, where we outperform state-of-the-art scene
recognition and domain generalization methods.
| new_dataset | 0.956186 |
1607.07646 | Moin Nabi | Hamidreza Rabiee, Javad Haddadnia, Hossein Mousavi, Moin Nabi,
Vittorio Murino and Nicu Sebe | Emotion-Based Crowd Representation for Abnormality Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In crowd behavior understanding, a model of crowd behavior needs to be trained
using the information extracted from video sequences. Since there is no
ground-truth available in crowd datasets except the crowd behavior labels, most
of the methods proposed so far are just based on low-level visual features.
However, there is a huge semantic gap between low-level motion/appearance
features and high-level concept of crowd behaviors. In this paper we propose an
attribute-based strategy to alleviate this problem. While similar strategies
have been recently adopted for object and action recognition, as far as we
know, we are the first to show that the crowd emotions can be used as
attributes for crowd behavior understanding. The main idea is to train a set of
emotion-based classifiers, which can subsequently be used to represent the
crowd motion. For this purpose, we collect a big dataset of video clips and
provide them with both annotations of "crowd behaviors" and "crowd emotions".
We show the results of the proposed method on our dataset, which demonstrate
that the crowd emotions enable the construction of more descriptive models for
crowd behaviors. We aim at publishing the dataset with the article, to be used
as a benchmark for the communities.
| [
{
"version": "v1",
"created": "Tue, 26 Jul 2016 11:26:44 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Rabiee",
"Hamidreza",
""
],
[
"Haddadnia",
"Javad",
""
],
[
"Mousavi",
"Hossein",
""
],
[
"Nabi",
"Moin",
""
],
[
"Murino",
"Vittorio",
""
],
[
"Sebe",
"Nicu",
""
]
] | TITLE: Emotion-Based Crowd Representation for Abnormality Detection
ABSTRACT: In crowd behavior understanding, a model of crowd behavior needs to be trained
using the information extracted from video sequences. Since there is no
ground-truth available in crowd datasets except the crowd behavior labels, most
of the methods proposed so far are just based on low-level visual features.
However, there is a huge semantic gap between low-level motion/appearance
features and high-level concept of crowd behaviors. In this paper we propose an
attribute-based strategy to alleviate this problem. While similar strategies
have been recently adopted for object and action recognition, as far as we
know, we are the first to show that the crowd emotions can be used as
attributes for crowd behavior understanding. The main idea is to train a set of
emotion-based classifiers, which can subsequently be used to represent the
crowd motion. For this purpose, we collect a big dataset of video clips and
provide them with both annotations of "crowd behaviors" and "crowd emotions".
We show the results of the proposed method on our dataset, which demonstrate
that the crowd emotions enable the construction of more descriptive models for
crowd behaviors. We aim at publishing the dataset with the article, to be used
as a benchmark for the communities.
| new_dataset | 0.964052 |
1607.07770 | Behrooz Mahasseni | Behrooz Mahasseni, Sinisa Todorovic, and Alan Fern | Approximate Policy Iteration for Budgeted Semantic Video Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper formulates and presents a solution to the new problem of budgeted
semantic video segmentation. Given a video, the goal is to accurately assign a
semantic class label to every pixel in the video within a specified time
budget. Typical approaches to such labeling problems, such as Conditional
Random Fields (CRFs), focus on maximizing accuracy but do not provide a
principled method for satisfying a time budget. For video data, the time
required by CRF and related methods is often dominated by the time to compute
low-level descriptors of supervoxels across the video. Our key contribution is
the new budgeted inference framework for CRF models that intelligently selects
the most useful subsets of descriptors to run on subsets of supervoxels within
the time budget. The objective is to maintain an accuracy as close as possible
to the CRF model with no time bound, while remaining within the time budget.
Our second contribution is the algorithm for learning a policy for the sparse
selection of supervoxels and their descriptors for budgeted CRF inference. This
learning algorithm is derived by casting our problem in the framework of Markov
Decision Processes, and then instantiating a state-of-the-art policy learning
algorithm known as Classification-Based Approximate Policy Iteration. Our
experiments on multiple video datasets show that our learning approach and
framework is able to significantly reduce computation time, and maintain
competitive accuracy under varying budgets.
| [
{
"version": "v1",
"created": "Tue, 26 Jul 2016 15:58:32 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Mahasseni",
"Behrooz",
""
],
[
"Todorovic",
"Sinisa",
""
],
[
"Fern",
"Alan",
""
]
] | TITLE: Approximate Policy Iteration for Budgeted Semantic Video Segmentation
ABSTRACT: This paper formulates and presents a solution to the new problem of budgeted
semantic video segmentation. Given a video, the goal is to accurately assign a
semantic class label to every pixel in the video within a specified time
budget. Typical approaches to such labeling problems, such as Conditional
Random Fields (CRFs), focus on maximizing accuracy but do not provide a
principled method for satisfying a time budget. For video data, the time
required by CRF and related methods is often dominated by the time to compute
low-level descriptors of supervoxels across the video. Our key contribution is
the new budgeted inference framework for CRF models that intelligently selects
the most useful subsets of descriptors to run on subsets of supervoxels within
the time budget. The objective is to maintain an accuracy as close as possible
to the CRF model with no time bound, while remaining within the time budget.
Our second contribution is the algorithm for learning a policy for the sparse
selection of supervoxels and their descriptors for budgeted CRF inference. This
learning algorithm is derived by casting our problem in the framework of Markov
Decision Processes, and then instantiating a state-of-the-art policy learning
algorithm known as Classification-Based Approximate Policy Iteration. Our
experiments on multiple video datasets show that our learning approach and
framework is able to significantly reduce computation time, and maintain
competitive accuracy under varying budgets.
| no_new_dataset | 0.949389 |
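The budgeted-segmentation record above learns a policy that chooses which descriptors to compute on which supervoxels under a time budget. As rough intuition for the selection problem only (a greedy gain-per-cost baseline, not the paper's learned policy), a sketch might look like:

    def select_descriptors(candidates, budget):
        """Greedy stand-in for a budgeted policy: pick (supervoxel, descriptor) actions
        by estimated accuracy gain per unit cost until the time budget is exhausted.

        candidates: list of dicts with keys 'id', 'gain', 'cost' (gain/cost are estimates).
        """
        chosen, spent = [], 0.0
        for c in sorted(candidates, key=lambda c: c["gain"] / c["cost"], reverse=True):
            if spent + c["cost"] <= budget:
                chosen.append(c["id"])
                spent += c["cost"]
        return chosen, spent

    cands = [{"id": ("sv3", "hog"), "gain": 0.8, "cost": 2.0},
             {"id": ("sv3", "flow"), "gain": 0.5, "cost": 0.5},
             {"id": ("sv7", "hog"), "gain": 0.3, "cost": 2.0}]
    print(select_descriptors(cands, budget=2.5))   # -> ([('sv3', 'flow'), ('sv3', 'hog')], 2.5)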
1607.07788 | Igor Barahona Dr | Daria Micaela Hernandez, Monica Becue-Bertaut, Igor Barahona | How scientific literature has been evolving over the time? A novel
statistical approach using tracking verbal-based methods | null | JSM Proceedings (2014), Section on Statistical Learning and Data
Mining. Alexandria, VA. American Statistical Association. 1121-1131 | null | null | cs.CL cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper provides a global vision of the scientific publications related
to Systemic Lupus Erythematosus (SLE), taking abstracts of articles as its
starting point. Over time, abstracts have been evolving towards more complex
terminology, which makes sophisticated statistical methods necessary to answer
questions such as: how is the vocabulary evolving over time? Which are the most
influential articles? And which articles introduced new terms and vocabulary? To
answer these, we analyze a dataset composed of 506 abstracts downloaded from 115
different journals, covering an 18-year period.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 17:59:55 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Hernandez",
"Daria Micaela",
""
],
[
"Becue-Bertaut",
"Monica",
""
],
[
"Barahona",
"Igor",
""
]
] | TITLE: How scientific literature has been evolving over the time? A novel
statistical approach using tracking verbal-based methods
ABSTRACT: This paper provides a global vision of the scientific publications related
to Systemic Lupus Erythematosus (SLE), taking abstracts of articles as its
starting point. Over time, abstracts have been evolving towards more complex
terminology, which makes sophisticated statistical methods necessary to answer
questions such as: how is the vocabulary evolving over time? Which are the most
influential articles? And which articles introduced new terms and vocabulary? To
answer these, we analyze a dataset composed of 506 abstracts downloaded from 115
different journals, covering an 18-year period.
| no_new_dataset | 0.939748 |
1607.07804 | Sai Zhang | Sai Zhang, Naresh Shanbhag | Error-Resilient Machine Learning in Near Threshold Voltage via
Classifier Ensemble | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present the design of error-resilient machine learning
architectures by employing a distributed machine learning framework referred to
as classifier ensemble (CE). CE combines several simple classifiers to obtain a
strong one. In contrast, centralized machine learning employs a single complex
block. We compare the random forest (RF) and the support vector machine (SVM),
which are representative techniques from the CE and centralized frameworks,
respectively. Employing the dataset from UCI machine learning repository and
architectural-level error models in a commercial 45 nm CMOS process, it is
demonstrated that RF-based architectures are significantly more robust than SVM
architectures in presence of timing errors due to process variations in
near-threshold voltage (NTV) regions (0.3 V - 0.7 V). In particular, the RF
architecture exhibits a detection accuracy (P_{det}) that varies by 3.2% while
maintaining a median P_{det} > 0.9 at a gate level delay variation of 28.9% .
In comparison, SVM exhibits a P_{det} that varies by 16.8%. Additionally, we
propose an error weighted voting technique that incorporates the timing error
statistics of the NTV circuit fabric to further enhance robustness. Simulation
results confirm that the error weighted voting achieves a P_{det} that varies
by only 1.4%, which is 12X lower compared to SVM.
| [
{
"version": "v1",
"created": "Sun, 3 Jul 2016 16:34:24 GMT"
}
] | 2016-07-27T00:00:00 | [
[
"Zhang",
"Sai",
""
],
[
"Shanbhag",
"Naresh",
""
]
] | TITLE: Error-Resilient Machine Learning in Near Threshold Voltage via
Classifier Ensemble
ABSTRACT: In this paper, we present the design of error-resilient machine learning
architectures by employing a distributed machine learning framework referred to
as classifier ensemble (CE). CE combines several simple classifiers to obtain a
strong one. In contrast, centralized machine learning employs a single complex
block. We compare the random forest (RF) and the support vector machine (SVM),
which are representative techniques from the CE and centralized frameworks,
respectively. Employing the dataset from UCI machine learning repository and
architectural-level error models in a commercial 45 nm CMOS process, it is
demonstrated that RF-based architectures are significantly more robust than SVM
architectures in presence of timing errors due to process variations in
near-threshold voltage (NTV) regions (0.3 V - 0.7 V). In particular, the RF
architecture exhibits a detection accuracy (P_{det}) that varies by 3.2% while
maintaining a median P_{det} > 0.9 at a gate level delay variation of 28.9% .
In comparison, SVM exhibits a P_{det} that varies by 16.8%. Additionally, we
propose an error weighted voting technique that incorporates the timing error
statistics of the NTV circuit fabric to further enhance robustness. Simulation
results confirm that the error weighted voting achieves a P_{det} that varies
by only 1.4%, which is 12X lower compared to SVM.
| no_new_dataset | 0.953837 |
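The record above proposes error-weighted voting for a classifier ensemble, with weights derived from timing-error statistics. A generic weighted-majority sketch is shown below; the log-odds weighting of each classifier's estimated error rate is an assumed form, not the exact circuit-level scheme.

    import numpy as np

    def error_weighted_vote(votes, error_rates):
        """Weighted majority vote over an ensemble.

        votes:       (n_classifiers, n_samples) integer class predictions.
        error_rates: (n_classifiers,) estimated per-classifier error probability
                     (e.g. derived from timing-error statistics of each block).
        """
        votes = np.asarray(votes)
        weights = np.log((1.0 - error_rates) / np.maximum(error_rates, 1e-6))  # reliability weights
        n_classes = votes.max() + 1
        scores = np.zeros((n_classes, votes.shape[1]))
        for c in range(n_classes):
            scores[c] = ((votes == c) * weights[:, None]).sum(axis=0)
        return scores.argmax(axis=0)

    votes = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1]])      # 3 classifiers, 3 samples
    print(error_weighted_vote(votes, np.array([0.1, 0.4, 0.45])))   # [1 0 1]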
1506.08909 | Ryan Lowe T. | Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured
Multi-Turn Dialogue Systems | SIGDIAL 2015. 10 pages, 5 figures. Update includes link to new
version of the dataset, with some added features and bug fixes. See:
https://github.com/rkadlec/ubuntu-ranking-dataset-creator | null | null | Proc. SIGDIAL 16 (2015) pp. 285-294 | cs.CL cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 00:37:09 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jul 2015 16:11:29 GMT"
},
{
"version": "v3",
"created": "Thu, 4 Feb 2016 01:21:35 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Lowe",
"Ryan",
""
],
[
"Pow",
"Nissan",
""
],
[
"Serban",
"Iulian",
""
],
[
"Pineau",
"Joelle",
""
]
] | TITLE: The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured
Multi-Turn Dialogue Systems
ABSTRACT: This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response.
| new_dataset | 0.96051 |
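The dialogue corpus record above benchmarks models on selecting the best next response from a candidate set, typically reported as 1-in-N Recall@k. A small sketch of that metric with a toy word-overlap scorer (the scorer and the example are purely illustrative, not the paper's models):

    def recall_at_k(score_fn, examples, k=1):
        """1-in-N next-utterance selection metric.

        score_fn:  callable (context, response) -> float, higher means more likely.
        examples:  iterable of (context, candidates, true_index) triples.
        """
        hits, total = 0, 0
        for context, candidates, true_index in examples:
            scores = [score_fn(context, r) for r in candidates]
            ranked = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
            hits += int(true_index in ranked[:k])
            total += 1
        return hits / max(total, 1)

    # Toy scorer: word overlap between context and candidate response.
    overlap = lambda c, r: len(set(c.lower().split()) & set(r.lower().split()))
    data = [("how do I mount a usb drive",
             ["try sudo mount /dev/sdb1 /mnt", "the weather is nice", "reboot fixed my wifi"], 0)]
    print(recall_at_k(overlap, data, k=1))   # 1.0 for this toy example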
1511.06457 | Peng Wang | Peng Wang and Alan Yuille | DOC: Deep OCclusion Estimation From a Single Image | Accepted to ECCV 2016 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recovering the occlusion relationships between objects is a fundamental human
visual ability which yields important information about the 3D world. In this
paper we propose a deep network architecture, called DOC, which acts on a
single image, detects object boundaries and estimates the border ownership
(i.e. which side of the boundary is foreground and which is background). We
represent occlusion relations by a binary edge map, to indicate the object
boundary, and an occlusion orientation variable which is tangential to the
boundary and whose direction specifies border ownership by a left-hand rule. We
train two related deep convolutional neural networks, called DOC, which exploit
local and non-local image cues to estimate this representation and hence
recover occlusion relations. In order to train and test DOC we construct a
large-scale instance occlusion boundary dataset using PASCAL VOC images, which
we call the PASCAL instance occlusion dataset (PIOD). This contains 10,000
images and hence is two orders of magnitude larger than existing occlusion
datasets for outdoor images. We test two variants of DOC on PIOD and on the
BSDS occlusion dataset and show they outperform state-of-the-art methods.
Finally, we perform numerous experiments investigating multiple settings of DOC
and transfer between BSDS and PIOD, which provides more insights for further
study of occlusion estimation.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 00:04:06 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jan 2016 00:49:47 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jan 2016 06:46:26 GMT"
},
{
"version": "v4",
"created": "Sun, 24 Jul 2016 07:16:54 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Wang",
"Peng",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: DOC: Deep OCclusion Estimation From a Single Image
ABSTRACT: Recovering the occlusion relationships between objects is a fundamental human
visual ability which yields important information about the 3D world. In this
paper we propose a deep network architecture, called DOC, which acts on a
single image, detects object boundaries and estimates the border ownership
(i.e. which side of the boundary is foreground and which is background). We
represent occlusion relations by a binary edge map, to indicate the object
boundary, and an occlusion orientation variable which is tangential to the
boundary and whose direction specifies border ownership by a left-hand rule. We
train two related deep convolutional neural networks, called DOC, which exploit
local and non-local image cues to estimate this representation and hence
recover occlusion relations. In order to train and test DOC we construct a
large-scale instance occlusion boundary dataset using PASCAL VOC images, which
we call the PASCAL instance occlusion dataset (PIOD). This contains 10,000
images and hence is two orders of magnitude larger than existing occlusion
datasets for outdoor images. We test two variants of DOC on PIOD and on the
BSDS occlusion dataset and show they outperform state-of-the-art methods.
Finally, we perform numerous experiments investigating multiple settings of DOC
and transfer between BSDS and PIOD, which provides more insights for further
study of occlusion estimation.
| new_dataset | 0.955236 |
1603.07410 | Erkang Zhu | Erkang Zhu, Fatemeh Nargesian, Ken Q. Pu, Ren\'ee J. Miller | LSH Ensemble: Internet-Scale Domain Search | To appear in VLDB 2016 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of domain search where a domain is a set of distinct
values from an unspecified universe. We use Jaccard set containment, defined as
$|Q \cap X|/|Q|$, as the relevance measure of a domain $X$ to a query domain
$Q$. Our choice of Jaccard set containment over Jaccard similarity makes our
work particularly suitable for searching Open Data and data on the web, as
Jaccard similarity is known to have poor performance over sets with large
differences in their domain sizes. We demonstrate that the domains found in
several real-life Open Data and web data repositories show a power-law
distribution over their domain sizes.
We present a new index structure, Locality Sensitive Hashing (LSH) Ensemble,
that solves the domain search problem using set containment at Internet scale.
Our index structure and search algorithm cope with the data volume and skew by
means of data sketches (MinHash) and domain partitioning. Our index structure
does not assume a prescribed set of values. We construct a cost model that
describes the accuracy of LSH Ensemble with any given partitioning. This allows
us to formulate the partitioning for LSH Ensemble as an optimization problem.
We prove that there exists an optimal partitioning for any distribution.
Furthermore, for datasets following a power-law distribution, as observed in
Open Data and Web data corpora, we show that the optimal partitioning can be
approximated using equi-depth, making it efficient to use in practice.
We evaluate our algorithm using real data (Canadian Open Data and WDC Web
Tables) containing over 262 M domains. The experiments demonstrate that our
index consistently outperforms other leading alternatives in accuracy and
performance. The improvements are most dramatic for data with large skew in the
domain sizes. Even at 262 M domains, our index sustains query performance with
under 3 seconds response time.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 01:43:28 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Mar 2016 00:52:45 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Apr 2016 18:54:13 GMT"
},
{
"version": "v4",
"created": "Sat, 23 Jul 2016 04:47:58 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Zhu",
"Erkang",
""
],
[
"Nargesian",
"Fatemeh",
""
],
[
"Pu",
"Ken Q.",
""
],
[
"Miller",
"Renée J.",
""
]
] | TITLE: LSH Ensemble: Internet-Scale Domain Search
ABSTRACT: We study the problem of domain search where a domain is a set of distinct
values from an unspecified universe. We use Jaccard set containment, defined as
$|Q \cap X|/|Q|$, as the relevance measure of a domain $X$ to a query domain
$Q$. Our choice of Jaccard set containment over Jaccard similarity makes our
work particularly suitable for searching Open Data and data on the web, as
Jaccard similarity is known to have poor performance over sets with large
differences in their domain sizes. We demonstrate that the domains found in
several real-life Open Data and web data repositories show a power-law
distribution over their domain sizes.
We present a new index structure, Locality Sensitive Hashing (LSH) Ensemble,
that solves the domain search problem using set containment at Internet scale.
Our index structure and search algorithm cope with the data volume and skew by
means of data sketches (MinHash) and domain partitioning. Our index structure
does not assume a prescribed set of values. We construct a cost model that
describes the accuracy of LSH Ensemble with any given partitioning. This allows
us to formulate the partitioning for LSH Ensemble as an optimization problem.
We prove that there exists an optimal partitioning for any distribution.
Furthermore, for datasets following a power-law distribution, as observed in
Open Data and Web data corpora, we show that the optimal partitioning can be
approximated using equi-depth, making it efficient to use in practice.
We evaluate our algorithm using real data (Canadian Open Data and WDC Web
Tables) containing over 262 M domains. The experiments demonstrate that our
index consistently outperforms other leading alternatives in accuracy and
performance. The improvements are most dramatic for data with large skew in the
domain sizes. Even at 262 M domains, our index sustains query performance with
under 3 seconds response time.
| no_new_dataset | 0.949623 |
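The LSH Ensemble record above searches domains by Jaccard set containment |Q ∩ X| / |Q| using MinHash sketches. A minimal sketch of estimating containment from MinHash signatures and the two set sizes follows; the 128 permutations and the blake2b-based salted hashing are implementation assumptions, and the index/partitioning machinery is omitted.

    import hashlib

    def minhash(tokens, num_perm=128):
        """MinHash signature: per slot, the minimum of a salted 64-bit hash over all tokens."""
        sig = [float("inf")] * num_perm
        for t in tokens:
            for i in range(num_perm):
                h = int.from_bytes(hashlib.blake2b(f"{i}|{t}".encode(), digest_size=8).digest(), "big")
                if h < sig[i]:
                    sig[i] = h
        return sig

    def containment(query, domain, num_perm=128):
        """Estimate |Q ∩ X| / |Q| from MinHash Jaccard plus the known set sizes."""
        sq, sx = minhash(query, num_perm), minhash(domain, num_perm)
        jaccard = sum(a == b for a, b in zip(sq, sx)) / num_perm
        inter = jaccard * (len(query) + len(domain)) / (1.0 + jaccard)   # |Q ∩ X| recovered from Jaccard
        return min(inter / len(query), 1.0)

    Q = {f"v{i}" for i in range(100)}
    X = {f"v{i}" for i in range(50, 200)}
    print(containment(Q, X))    # about 0.5 (the true containment), up to MinHash noise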
1605.05414 | Ryan Lowe T. | Ryan Lowe, Iulian V. Serban, Mike Noseworthy, Laurent Charlin, Joelle
Pineau | On the Evaluation of Dialogue Systems with Next Utterance Classification | Accepted to SIGDIAL 2016 (short paper). 5 pages | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An open challenge in constructing dialogue systems is developing methods for
automatically learning dialogue strategies from large amounts of unlabelled
data. Recent work has proposed Next-Utterance-Classification (NUC) as a
surrogate task for building dialogue systems from text data. In this paper we
investigate the performance of humans on this task to validate the relevance of
NUC as a method of evaluation. Our results show three main findings: (1) humans
are able to correctly classify responses at a rate much better than chance,
thus confirming that the task is feasible, (2) human performance levels vary
across task domains (we consider 3 datasets) and expertise levels (novice vs
experts), thus showing that a range of performance is possible on this type of
task, (3) automated dialogue systems built using state-of-the-art machine
learning methods have similar performance to the human novices, but worse than
the experts, thus confirming the utility of this class of tasks for driving
further research in automated dialogue systems.
| [
{
"version": "v1",
"created": "Wed, 18 May 2016 01:36:29 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Jul 2016 00:00:36 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Lowe",
"Ryan",
""
],
[
"Serban",
"Iulian V.",
""
],
[
"Noseworthy",
"Mike",
""
],
[
"Charlin",
"Laurent",
""
],
[
"Pineau",
"Joelle",
""
]
] | TITLE: On the Evaluation of Dialogue Systems with Next Utterance Classification
ABSTRACT: An open challenge in constructing dialogue systems is developing methods for
automatically learning dialogue strategies from large amounts of unlabelled
data. Recent work has proposed Next-Utterance-Classification (NUC) as a
surrogate task for building dialogue systems from text data. In this paper we
investigate the performance of humans on this task to validate the relevance of
NUC as a method of evaluation. Our results show three main findings: (1) humans
are able to correctly classify responses at a rate much better than chance,
thus confirming that the task is feasible, (2) human performance levels vary
across task domains (we consider 3 datasets) and expertise levels (novice vs
experts), thus showing that a range of performance is possible on this type of
task, (3) automated dialogue systems built using state-of-the-art machine
learning methods have similar performance to the human novices, but worse than
the experts, thus confirming the utility of this class of tasks for driving
further research in automated dialogue systems.
| no_new_dataset | 0.951908 |
1607.02607 | Giovanni De Gasperis | Giovanni De Gasperis and Christian Del Pinto | Data Set From Molisan Regional Seismic Network Events | 9 pages, 4 figures | null | null | null | physics.geo-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After the earthquake occurred in Molise (Central Italy) on 31st October 2002
(Ml 5.4, 29 people dead), the local Servizio Regionale per la Protezione Civile
to ensure a better analysis of local seismic data, through a convention with
the Istituto Nazionale di Geofisica e Vulcanologia (INGV), promoted the design
of the Regional Seismic Network (RMSM) and funded its implementation. The 5
stations of RMSM worked since 2007 to 2013 collecting a large amount of seismic
data and giving an important contribution to the study of seismic sources
present in the region and the surrounding territory. This work reports on
the dataset containing all triggers collected by RMSM since July 2007 to March
2009, including actual seismic events; among them, all earthquake events
recorded in coincidence to Rete Sismica Nazionale Centralizzata (RSNC) of INGV
have been marked with S and P arrival timestamps. Every trigger has been
associated with a spectrogram defined in the recorded time vs. frequency domain.
The main aim of this structured dataset is to be used for further analysis with
data mining and machine learning techniques on image patterns associated to the
waveforms.
| [
{
"version": "v1",
"created": "Sat, 9 Jul 2016 13:05:09 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"De Gasperis",
"Giovanni",
""
],
[
"Del Pinto",
"Christian",
""
]
] | TITLE: Data Set From Molisan Regional Seismic Network Events
ABSTRACT: After the earthquake occurred in Molise (Central Italy) on 31st October 2002
(Ml 5.4, 29 people dead), the local Servizio Regionale per la Protezione Civile
to ensure a better analysis of local seismic data, through a convention with
the Istituto Nazionale di Geofisica e Vulcanologia (INGV), promoted the design
of the Regional Seismic Network (RMSM) and funded its implementation. The 5
stations of RMSM worked since 2007 to 2013 collecting a large amount of seismic
data and giving an important contribution to the study of seismic sources
present in the region and the surrounding territory. This work reports on
the dataset containing all triggers collected by RMSM since July 2007 to March
2009, including actual seismic events; among them, all earthquake events
recorded in coincidence to Rete Sismica Nazionale Centralizzata (RSNC) of INGV
have been marked with S and P arrival timestamps. Every trigger has been
associated with a spectrogram defined in the recorded time vs. frequency domain.
The main aim of this structured dataset is to be used for further analysis with
data mining and machine learning techniques on image patterns associated to the
waveforms.
| new_dataset | 0.835416 |
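Each trigger in the record above is paired with a spectrogram that is later used as an image pattern for mining. A minimal sketch of producing such a time-frequency image from one trace (the 100 Hz sampling rate, window parameters, and synthetic signal are assumptions, not the network's actual acquisition settings):

    import numpy as np
    from scipy.signal import spectrogram

    fs = 100.0                                     # assumed sampling rate in Hz
    t = np.arange(0, 60, 1 / fs)                   # one minute of signal
    trace = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)   # stand-in for a trigger

    f, times, Sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=192)
    image = 10 * np.log10(Sxx + 1e-12)             # dB image over recorded time vs. frequency
    print(image.shape)                             # rows = frequency bins, columns = time bins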
1607.05088 | Giorgio Roffo | Giorgio Roffo | Towards Personality-Aware Recommendation | This paper is an overview of Personality in Computational
Advertising: A Benchmark, G. Roffo, ACM RecSys workshop on Emotions and
Personality in Personalized Systems, (EMPIRE 2016) | null | 10.13140/RG.2.1.4167.0649 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the last decade new ways of shopping online have increased the possibility
of buying products and services more easily and faster than ever. In this new
context, personality is a key determinant in the decision making of the
consumer when shopping. The two main reasons are: firstly, a person's buying
choices are influenced by psychological factors like impulsiveness, and
secondly, some consumers may be more susceptible to making impulse purchases
than others. To the best of our knowledge, the impact of personality factors on
advertisements has been largely neglected at the level of recommender systems.
This work proposes a highly innovative research which uses a personality
perspective to determine the unique associations among the consumer's buying
tendency and advert recommendations. As a matter of fact, the lack of a
publicly available benchmark for computational advertising do not allow both
the exploration of this intriguing research direction and the evaluation of
state-of-the-art algorithms. We present the ADS Dataset, a publicly available
benchmark for computational advertising enriched with Big-Five users'
personality factors and 1,200 personal users' pictures. The proposed benchmark
allows two main tasks: rating prediction over 300 real advertisements (i.e.,
Rich Media Ads, Image Ads, Text Ads) and click-through rate prediction.
Moreover, this work carries out experiments, reviews various evaluation
criteria used in the literature, and provides a library for each one of them
within one integrated toolbox.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2016 14:08:20 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Jul 2016 11:06:03 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Jul 2016 09:45:57 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Roffo",
"Giorgio",
""
]
] | TITLE: Towards Personality-Aware Recommendation
ABSTRACT: In the last decade new ways of shopping online have increased the possibility
of buying products and services more easily and faster than ever. In this new
context, personality is a key determinant in the decision making of the
consumer when shopping. The two main reasons are: firstly, a person's buying
choices are influenced by psychological factors like impulsiveness, and
secondly, some consumers may be more susceptible to making impulse purchases
than others. To the best of our knowledge, the impact of personality factors on
advertisements has been largely neglected at the level of recommender systems.
This work proposes a highly innovative research which uses a personality
perspective to determine the unique associations among the consumer's buying
tendency and advert recommendations. As a matter of fact, the lack of a
publicly available benchmark for computational advertising does not allow both
the exploration of this intriguing research direction and the evaluation of
state-of-the-art algorithms. We present the ADS Dataset, a publicly available
benchmark for computational advertising enriched with Big-Five users'
personality factors and 1,200 personal users' pictures. The proposed benchmark
allows two main tasks: rating prediction over 300 real advertisements (i.e.,
Rich Media Ads, Image Ads, Text Ads) and click-through rate prediction.
Moreover, this work carries out experiments, reviews various evaluation
criteria used in the literature, and provides a library for each one of them
within one integrated toolbox.
| new_dataset | 0.972519 |
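The benchmark record above supports rating and click-through-rate prediction from Big-Five personality factors. A hedged baseline sketch on synthetic stand-in data follows; the feature coding, the synthetic click rule, and the logistic-regression model are assumptions, not the benchmark's reference method.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    big_five = rng.uniform(1, 5, size=(n, 5))            # O, C, E, A, N scores per user
    ad_type = rng.integers(0, 3, size=(n, 1))            # 0=rich media, 1=image, 2=text (assumed coding)
    X = np.hstack([big_five, ad_type])
    y = (0.6 * big_five[:, 2] - 0.4 * ad_type[:, 0] + rng.normal(0, 1, n) > 1.5).astype(int)  # synthetic clicks

    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("click probability:", model.predict_proba(X[:1])[0, 1])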
1607.06839 | Thanh Tran | Thanh Tran, Madhavi R.Dontham, Jinwook Chung, Kyumin Lee | How to Succeed in Crowdfunding: a Long-Term Study in Kickstarter | Submitting to ACM TIST | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowdfunding platforms have become important sites where people can create
projects to seek funds toward turning their ideas into products, and back
someone else's projects. As news media have reported successfully funded
projects (e.g., Pebble Time, Coolest Cooler), more people have joined
crowdfunding platforms and launched projects. But in spite of rapid growth of
the number of users and projects, a project success rate at large has been
decreasing because of launching projects without enough preparation and
experience. Little is known about what reactions project creators made (e.g.,
giving up or making the failed projects better) when projects failed, and what
types of successful projects we can find. To solve these problems, in this
manuscript we (i) collect the largest datasets from Kickstarter, consisting of
all project profiles, corresponding user profiles, projects' temporal data and
users' social media information; (ii) analyze characteristics of successful
projects, behaviors of users and understand dynamics of the crowdfunding
platform; (iii) propose novel statistical approaches to predict whether a
project will be successful and a range of expected pledged money of the
project; (iv) develop predictive models and evaluate performance of the models;
(v) analyze what reactions project creators had when projects failed, and if
they did not give up, how they made the failed projects successful; and (vi)
cluster successful projects by their evolutional patterns of pledged money
toward understanding what efforts project creators should make in order to get
more pledged money. Our experimental results show that the predictive models
can effectively predict project success and a range of expected pledged money.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2016 20:49:17 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Tran",
"Thanh",
""
],
[
"Dontham",
"Madhavi R.",
""
],
[
"Chung",
"Jinwook",
""
],
[
"Lee",
"Kyumin",
""
]
] | TITLE: How to Succeed in Crowdfunding: a Long-Term Study in Kickstarter
ABSTRACT: Crowdfunding platforms have become important sites where people can create
projects to seek funds toward turning their ideas into products, and back
someone else's projects. As news media have reported successfully funded
projects (e.g., Pebble Time, Coolest Cooler), more people have joined
crowdfunding platforms and launched projects. But in spite of rapid growth of
the number of users and projects, a project success rate at large has been
decreasing because of launching projects without enough preparation and
experience. Little is known about what reactions project creators made (e.g.,
giving up or making the failed projects better) when projects failed, and what
types of successful projects we can find. To solve these problems, in this
manuscript we (i) collect the largest datasets from Kickstarter, consisting of
all project profiles, corresponding user profiles, projects' temporal data and
users' social media information; (ii) analyze characteristics of successful
projects, behaviors of users and understand dynamics of the crowdfunding
platform; (iii) propose novel statistical approaches to predict whether a
project will be successful and a range of expected pledged money of the
project; (iv) develop predictive models and evaluate performance of the models;
(v) analyze what reactions project creators had when projects failed, and if
they did not give up, how they made the failed projects successful; and (vi)
cluster successful projects by their evolutional patterns of pledged money
toward understanding what efforts project creators should make in order to get
more pledged money. Our experimental results show that the predictive models
can effectively predict project success and a range of expected pledged money.
| no_new_dataset | 0.936634 |
1607.06952 | Xinchi Chen | Xinchi Chen, Xipeng Qiu, Xuanjing Huang | Neural Sentence Ordering | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentence ordering is a general and critical task for natural language
generation applications. Previous works have focused on improving its
performance in an external, downstream task, such as multi-document
summarization. Given its importance, we propose to study it as an isolated
task. We collect a large corpus of academic texts, and derive a data driven
approach to learn pairwise ordering of sentences, and validate the efficacy
with extensive experiments. Source codes and dataset of this paper will be made
publicly available.
| [
{
"version": "v1",
"created": "Sat, 23 Jul 2016 16:22:23 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Chen",
"Xinchi",
""
],
[
"Qiu",
"Xipeng",
""
],
[
"Huang",
"Xuanjing",
""
]
] | TITLE: Neural Sentence Ordering
ABSTRACT: Sentence ordering is a general and critical task for natural language
generation applications. Previous works have focused on improving its
performance in an external, downstream task, such as multi-document
summarization. Given its importance, we propose to study it as an isolated
task. We collect a large corpus of academic texts, and derive a data-driven
approach to learn pairwise ordering of sentences, and validate the efficacy
with extensive experiments. Source codes and dataset of this paper will be made
publicly available.
| new_dataset | 0.964954 |
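
The pairwise-ordering task described in the abstract above can be made concrete with a small non-neural baseline: featurize each sentence, take the difference of feature vectors for an ordered pair, and train a binary classifier to predict which sentence should come first. The sketch below (TF-IDF plus logistic regression on an invented toy corpus) is an illustrative assumption, not the paper's neural model.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "documents": ordered sentence lists standing in for the academic corpus.
docs = [
    ["we study sentence ordering", "we propose a pairwise model", "experiments show gains"],
    ["ordering matters for generation", "we learn pairwise preferences", "results are promising"],
]

sentences = [s for d in docs for s in d]
vec = TfidfVectorizer().fit(sentences)

X, y = [], []
for d in docs:
    for i in range(len(d)):
        for j in range(len(d)):
            if i != j:
                xi = vec.transform([d[i]]).toarray()[0]
                xj = vec.transform([d[j]]).toarray()[0]
                X.append(xi - xj)        # antisymmetric feature: sign flips with order
                y.append(int(i < j))     # 1 if the first sentence should come first
clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

# Probability that sentence a should precede sentence b.
a, b = "we propose a pairwise model", "experiments show gains"
xa = vec.transform([a]).toarray()[0]
xb = vec.transform([b]).toarray()[0]
print(clf.predict_proba([xa - xb])[0, 1])
```
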
1607.06988 | Shankar Vembu | Shankar Vembu, Sandra Zilles | Interactive Learning from Multiple Noisy Labels | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interactive learning is a process in which a machine learning algorithm is
provided with meaningful, well-chosen examples as opposed to randomly chosen
examples typical in standard supervised learning. In this paper, we propose a
new method for interactive learning from multiple noisy labels where we exploit
the disagreement among annotators to quantify the easiness (or meaningfulness)
of an example. We demonstrate the usefulness of this method in estimating the
parameters of a latent variable classification model, and conduct experimental
analyses on a range of synthetic and benchmark datasets. Furthermore, we
theoretically analyze the performance of perceptron in this interactive
learning framework.
| [
{
"version": "v1",
"created": "Sun, 24 Jul 2016 01:14:19 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Vembu",
"Shankar",
""
],
[
"Zilles",
"Sandra",
""
]
] | TITLE: Interactive Learning from Multiple Noisy Labels
ABSTRACT: Interactive learning is a process in which a machine learning algorithm is
provided with meaningful, well-chosen examples as opposed to randomly chosen
examples typical in standard supervised learning. In this paper, we propose a
new method for interactive learning from multiple noisy labels where we exploit
the disagreement among annotators to quantify the easiness (or meaningfulness)
of an example. We demonstrate the usefulness of this method in estimating the
parameters of a latent variable classification model, and conduct experimental
analyses on a range of synthetic and benchmark datasets. Furthermore, we
theoretically analyze the performance of perceptron in this interactive
learning framework.
| no_new_dataset | 0.956756 |
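
The core idea in the abstract above, using annotator disagreement to quantify how easy (and therefore how meaningful) an example is, can be illustrated in a few lines of NumPy. The scoring rule below (majority-vote agreement) is a simplified stand-in, not the paper's estimator or querying strategy.

```python
import numpy as np

def easiness_from_noisy_labels(votes):
    """votes: (n_examples, n_annotators) array of 0/1 labels.
    Easiness = fraction of annotators agreeing with the majority vote:
    1.0 means full agreement, values near 0.5 mean maximal disagreement."""
    majority = (votes.mean(axis=1) >= 0.5).astype(int)
    return (votes == majority[:, None]).mean(axis=1)

rng = np.random.default_rng(0)
votes = rng.integers(0, 2, size=(10, 5))     # 10 examples, 5 noisy annotators
easiness = easiness_from_noisy_labels(votes)

# Present the easiest (most meaningful) examples to the learner first.
order = np.argsort(-easiness)
print(list(zip(order.tolist(), easiness[order].round(2).tolist())))
```
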
1607.06999 | Zhen Cui | Yang Li, Wenming Zheng, Zhen Cui | Recurrent Regression for Face Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To address the sequential changes of images including poses, in this paper we
propose a recurrent regression neural network (RRNN) framework to unify two
classic tasks of cross-pose face recognition on still images and video-based
face recognition. To imitate the changes of images, we explicitly construct the
potential dependencies of sequential images so as to regularize the final
learning model. By performing progressive transforms for sequentially adjacent
images, RRNN can adaptively memorize and forget the information that benefits
for the final classification. For face recognition of still images, given any
one image with any one pose, we recurrently predict the images with its
sequential poses, expecting to capture some useful information from other poses.
For video-based face recognition, the recurrent regression takes one entire
sequence rather than one image as its input. We verify RRNN in static face
dataset MultiPIE and face video dataset YouTube Celebrities (YTC). The
comprehensive experimental results demonstrate the effectiveness of the
proposed RRNN method.
| [
{
"version": "v1",
"created": "Sun, 24 Jul 2016 05:11:40 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Li",
"Yang",
""
],
[
"Zheng",
"Wenming",
""
],
[
"Cui",
"Zhen",
""
]
] | TITLE: Recurrent Regression for Face Recognition
ABSTRACT: To address the sequential changes of images including poses, in this paper we
propose a recurrent regression neural network (RRNN) framework to unify two
classic tasks of cross-pose face recognition on still images and video-based
face recognition. To imitate the changes of images, we explicitly construct the
potential dependencies of sequential images so as to regularize the final
learning model. By performing progressive transforms for sequentially adjacent
images, RRNN can adaptively memorize and forget the information that benefits
for the final classification. For face recognition of still images, given any
one image with any one pose, we recurrently predict the images with its
sequential poses, expecting to capture some useful information from other poses.
For video-based face recognition, the recurrent regression takes one entire
sequence rather than one image as its input. We verify RRNN in static face
dataset MultiPIE and face video dataset YouTube Celebrities (YTC). The
comprehensive experimental results demonstrate the effectiveness of the
proposed RRNN method.
| no_new_dataset | 0.945801 |
1607.07155 | Zhaowei Cai | Zhaowei Cai and Quanfu Fan and Rogerio S. Feris and Nuno Vasconcelos | A Unified Multi-scale Deep Convolutional Neural Network for Fast Object
Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is
proposed for fast multi-scale object detection. The MS-CNN consists of a
proposal sub-network and a detection sub-network. In the proposal sub-network,
detection is performed at multiple output layers, so that receptive fields
match objects of different scales. These complementary scale-specific detectors
are combined to produce a strong multi-scale object detector. The unified
network is learned end-to-end, by optimizing a multi-task loss. Feature
upsampling by deconvolution is also explored, as an alternative to input
upsampling, to reduce the memory and computation costs. State-of-the-art object
detection performance, at up to 15 fps, is reported on datasets, such as KITTI
and Caltech, containing a substantial number of small objects.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 05:15:31 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Cai",
"Zhaowei",
""
],
[
"Fan",
"Quanfu",
""
],
[
"Feris",
"Rogerio S.",
""
],
[
"Vasconcelos",
"Nuno",
""
]
] | TITLE: A Unified Multi-scale Deep Convolutional Neural Network for Fast Object
Detection
ABSTRACT: A unified deep neural network, denoted the multi-scale CNN (MS-CNN), is
proposed for fast multi-scale object detection. The MS-CNN consists of a
proposal sub-network and a detection sub-network. In the proposal sub-network,
detection is performed at multiple output layers, so that receptive fields
match objects of different scales. These complementary scale-specific detectors
are combined to produce a strong multi-scale object detector. The unified
network is learned end-to-end, by optimizing a multi-task loss. Feature
upsampling by deconvolution is also explored, as an alternative to input
upsampling, to reduce the memory and computation costs. State-of-the-art object
detection performance, at up to 15 fps, is reported on datasets, such as KITTI
and Caltech, containing a substantial number of small objects.
| no_new_dataset | 0.948251 |
1607.07216 | Niki Martinel | Niki Martinel, Abir Das, Christian Micheloni, Amit K. Roy-Chowdhury | Temporal Model Adaptation for Person Re-Identification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification is an open and challenging problem in computer
vision. The majority of the efforts have been spent either to design the best
feature representation or to learn the optimal matching metric. Most approaches
have neglected the problem of adapting the selected features or the learned
model over time. To address such a problem, we propose a temporal model
adaptation scheme with human in the loop. We first introduce a
similarity-dissimilarity learning method which can be trained in an incremental
fashion by means of a stochastic alternating directions methods of multipliers
optimization procedure. Then, to achieve temporal adaptation with limited human
effort, we exploit a graph-based approach to present the user only the most
informative probe-gallery matches that should be used to update the model.
Results on three datasets have shown that our approach performs on par or even
better than state-of-the-art approaches while reducing the manual pairwise
labeling effort by about 80%.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 11:30:03 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Martinel",
"Niki",
""
],
[
"Das",
"Abir",
""
],
[
"Micheloni",
"Christian",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
]
] | TITLE: Temporal Model Adaptation for Person Re-Identification
ABSTRACT: Person re-identification is an open and challenging problem in computer
vision. The majority of the efforts have been spent either to design the best
feature representation or to learn the optimal matching metric. Most approaches
have neglected the problem of adapting the selected features or the learned
model over time. To address such a problem, we propose a temporal model
adaptation scheme with human in the loop. We first introduce a
similarity-dissimilarity learning method which can be trained in an incremental
fashion by means of a stochastic alternating directions methods of multipliers
optimization procedure. Then, to achieve temporal adaptation with limited human
effort, we exploit a graph-based approach to present the user only the most
informative probe-gallery matches that should be used to update the model.
Results on three datasets have shown that our approach performs on par or even
better than state-of-the-art approaches while reducing the manual pairwise
labeling effort by about 80%.
| no_new_dataset | 0.946941 |
1607.07262 | Kota Yamaguchi | Sirion Vittayakorn and Takayuki Umeda and Kazuhiko Murasaki and Kyoko
Sudo and Takayuki Okatani and Kota Yamaguchi | Automatic Attribute Discovery with Neural Activations | ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can a machine learn to recognize visual attributes emerging out of an online
community without a definitive supervised dataset? This paper proposes an
automatic approach to discover and analyze visual attributes from a noisy
collection of image-text data on the Web. Our approach is based on the
relationship between attributes and neural activations in the deep network. We
characterize the visual property of the attribute word as a divergence within
a weakly-annotated set of images. We show that the neural activations are useful
for discovering and learning a classifier that well agrees with human
perception from the noisy real-world Web data. The empirical study suggests the
layered structure of the deep neural networks also gives us insights into the
perceptual depth of the given word. Finally, we demonstrate that we can utilize
highly-activating neurons for finding semantically relevant regions.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 13:30:10 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Vittayakorn",
"Sirion",
""
],
[
"Umeda",
"Takayuki",
""
],
[
"Murasaki",
"Kazuhiko",
""
],
[
"Sudo",
"Kyoko",
""
],
[
"Okatani",
"Takayuki",
""
],
[
"Yamaguchi",
"Kota",
""
]
] | TITLE: Automatic Attribute Discovery with Neural Activations
ABSTRACT: How can a machine learn to recognize visual attributes emerging out of an online
community without a definitive supervised dataset? This paper proposes an
automatic approach to discover and analyze visual attributes from a noisy
collection of image-text data on the Web. Our approach is based on the
relationship between attributes and neural activations in the deep network. We
characterize the visual property of the attribute word as a divergence within
a weakly-annotated set of images. We show that the neural activations are useful
for discovering and learning a classifier that well agrees with human
perception from the noisy real-world Web data. The empirical study suggests the
layered structure of the deep neural networks also gives us insights into the
perceptual depth of the given word. Finally, we demonstrate that we can utilize
highly-activating neurons for finding semantically relevant regions.
| no_new_dataset | 0.94699 |
1607.07270 | Francesco Solera | Francesco Solera and Andrea Palazzi | A Statistical Test for Joint Distributions Equivalence | null | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide a distribution-free test that can be used to determine whether any
two joint distributions $p$ and $q$ are statistically different by inspection
of a large enough set of samples. Following recent efforts from Long et al.
[1], we rely on joint kernel distribution embedding to extend the kernel
two-sample test of Gretton et al. [2] to the case of joint probability
distributions. Our main result can be directly applied to verify if a
dataset-shift has occurred between training and test distributions in a
learning framework, without further assuming the shift has occurred only in the
input, in the target or in the conditional distribution.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 13:48:20 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Solera",
"Francesco",
""
],
[
"Palazzi",
"Andrea",
""
]
] | TITLE: A Statistical Test for Joint Distributions Equivalence
ABSTRACT: We provide a distribution-free test that can be used to determine whether any
two joint distributions $p$ and $q$ are statistically different by inspection
of a large enough set of samples. Following recent efforts from Long et al.
[1], we rely on joint kernel distribution embedding to extend the kernel
two-sample test of Gretton et al. [2] to the case of joint probability
distributions. Our main result can be directly applied to verify if a
dataset-shift has occurred between training and test distributions in a
learning framework, without further assuming the shift has occurred only in the
input, in the target or in the conditional distribution.
| no_new_dataset | 0.943243 |
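
A minimal sketch of the kernel two-sample machinery the abstract above builds on: a quadratic-time MMD statistic with an RBF kernel, applied to joint samples by stacking input and target features, plus a permutation test for detecting dataset shift. The bandwidth, sample sizes, and permutation count are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of A and B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy between X and Y.
    return (rbf_kernel(X, X, sigma).mean() + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

# Joint samples: stack inputs and targets so the statistic compares the
# joint distributions p(x, y) and q(x, y), not only the marginals.
rng = np.random.default_rng(0)
Xp = np.hstack([rng.normal(size=(200, 2)), rng.normal(size=(200, 1))])
Xq = np.hstack([rng.normal(size=(200, 2)), rng.normal(loc=1.0, size=(200, 1))])

stat = mmd2(Xp, Xq)

# Permutation test: under the null (equal joint distributions), relabelling
# the pooled samples should often give statistics at least as large.
pooled, n = np.vstack([Xp, Xq]), len(Xp)
null = []
for _ in range(200):
    idx = rng.permutation(len(pooled))
    null.append(mmd2(pooled[idx[:n]], pooled[idx[n:]]))
p_value = np.mean(np.array(null) >= stat)
print(f"MMD^2 = {stat:.4f}, permutation p-value = {p_value:.3f}")
```
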
1607.07295 | Lluis Castrejon | Lluis Castrejon, Yusuf Aytar, Carl Vondrick, Hamed Pirsiavash, Antonio
Torralba | Learning Aligned Cross-Modal Representations from Weakly Aligned Data | Conference paper at CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People can recognize scenes across many different modalities beyond natural
images. In this paper, we investigate how to learn cross-modal scene
representations that transfer across modalities. To study this problem, we
introduce a new cross-modal scene dataset. While convolutional neural networks
can categorize cross-modal scenes well, they also learn an intermediate
representation not aligned across modalities, which is undesirable for
cross-modal transfer applications. We present methods to regularize cross-modal
convolutional neural networks so that they have a shared representation that is
agnostic of the modality. Our experiments suggest that our scene representation
can help transfer representations across modalities for retrieval. Moreover,
our visualizations suggest that units emerge in the shared representation that
tend to activate on consistent concepts independently of the modality.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 14:38:36 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Castrejon",
"Lluis",
""
],
[
"Aytar",
"Yusuf",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Pirsiavash",
"Hamed",
""
],
[
"Torralba",
"Antonio",
""
]
] | TITLE: Learning Aligned Cross-Modal Representations from Weakly Aligned Data
ABSTRACT: People can recognize scenes across many different modalities beyond natural
images. In this paper, we investigate how to learn cross-modal scene
representations that transfer across modalities. To study this problem, we
introduce a new cross-modal scene dataset. While convolutional neural networks
can categorize cross-modal scenes well, they also learn an intermediate
representation not aligned across modalities, which is undesirable for
cross-modal transfer applications. We present methods to regularize cross-modal
convolutional neural networks so that they have a shared representation that is
agnostic of the modality. Our experiments suggest that our scene representation
can help transfer representations across modalities for retrieval. Moreover,
our visualizations suggest that units emerge in the shared representation that
tend to activate on consistent concepts independently of the modality.
| new_dataset | 0.955361 |
1607.07311 | Majd Hawasly | Majd Hawasly, Florian T. Pokorny and Subramanian Ramamoorthy | Estimating Activity at Multiple Scales using Spatial Abstractions | 16 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous robots operating in dynamic environments must maintain beliefs
over a hypothesis space that is rich enough to represent the activities of
interest at different scales. This is important both in order to accommodate
the availability of evidence at varying degrees of coarseness, such as when
interpreting and assimilating natural instructions, but also in order to make
subsequent reactive planning more efficient. We present an algorithm that
combines a topology-based trajectory clustering procedure that generates
hierarchically-structured spatial abstractions with a bank of particle filters
at each of these abstraction levels so as to produce probability estimates over
an agent's navigation activity that is kept consistent across the hierarchy. We
study the performance of the proposed method using a synthetic trajectory
dataset in 2D, as well as a dataset taken from AIS-based tracking of ships in
an extended harbour area. We show that, in comparison to a baseline which is a
particle filter that estimates activity without exploiting such structure, our
method achieves a better normalised error in predicting the trajectory as well
as better time to convergence to a true class when compared against ground
truth.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 15:17:06 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Hawasly",
"Majd",
""
],
[
"Pokorny",
"Florian T.",
""
],
[
"Ramamoorthy",
"Subramanian",
""
]
] | TITLE: Estimating Activity at Multiple Scales using Spatial Abstractions
ABSTRACT: Autonomous robots operating in dynamic environments must maintain beliefs
over a hypothesis space that is rich enough to represent the activities of
interest at different scales. This is important both in order to accommodate
the availability of evidence at varying degrees of coarseness, such as when
interpreting and assimilating natural instructions, but also in order to make
subsequent reactive planning more efficient. We present an algorithm that
combines a topology-based trajectory clustering procedure that generates
hierarchically-structured spatial abstractions with a bank of particle filters
at each of these abstraction levels so as to produce probability estimates over
an agent's navigation activity that is kept consistent across the hierarchy. We
study the performance of the proposed method using a synthetic trajectory
dataset in 2D, as well as a dataset taken from AIS-based tracking of ships in
an extended harbour area. We show that, in comparison to a baseline which is a
particle filter that estimates activity without exploiting such structure, our
method achieves a better normalised error in predicting the trajectory as well
as better time to convergence to a true class when compared against ground
truth.
| no_new_dataset | 0.949342 |
1607.07326 | Flavian Vasile | Flavian Vasile, Elena Smirnova and Alexis Conneau | Meta-Prod2Vec - Product Embeddings Using Side-Information for
Recommendation | null | null | 10.1145/2959100.2959160 | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Meta-Prod2vec, a novel method to compute item similarities for
recommendation that leverages existing item metadata. Such scenarios are
frequently encountered in applications such as content recommendation, ad
targeting and web search. Our method leverages past user interactions with
items and their attributes to compute low-dimensional embeddings of items.
Specifically, the item metadata is injected into the model as side
information to regularize the item embeddings. We show that the new item
representations lead to better performance on recommendation tasks on an open
music dataset.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 15:54:07 GMT"
}
] | 2016-07-26T00:00:00 | [
[
"Vasile",
"Flavian",
""
],
[
"Smirnova",
"Elena",
""
],
[
"Conneau",
"Alexis",
""
]
] | TITLE: Meta-Prod2Vec - Product Embeddings Using Side-Information for
Recommendation
ABSTRACT: We propose Meta-Prod2vec, a novel method to compute item similarities for
recommendation that leverages existing item metadata. Such scenarios are
frequently encountered in applications such as content recommendation, ad
targeting and web search. Our method leverages past user interactions with
items and their attributes to compute low-dimensional embeddings of items.
Specifically, the item metadata is injected into the model as side
information to regularize the item embeddings. We show that the new item
representations lead to better performance on recommendation tasks on an open
music dataset.
| no_new_dataset | 0.945248 |
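
One lightweight way to approximate the idea of injecting item metadata as side information is to interleave metadata tokens into the interaction sequences and train an ordinary skip-gram model; the sketch below does that with gensim (the gensim >= 4 API is assumed). Token names and hyperparameters are invented for illustration; this is not the Meta-Prod2vec objective, which regularizes the item embeddings with metadata explicitly.

```python
from gensim.models import Word2Vec

# User sessions: product ids with their category injected as extra context
# tokens, a crude stand-in for metadata-as-side-information.
sessions = [
    ["item_12", "cat_rock", "item_7", "cat_rock", "item_33", "cat_jazz"],
    ["item_7", "cat_rock", "item_12", "cat_rock", "item_90", "cat_pop"],
    ["item_33", "cat_jazz", "item_41", "cat_jazz", "item_7", "cat_rock"],
]

# Skip-gram embeddings over items and metadata tokens.
model = Word2Vec(sentences=sessions, vector_size=32, window=3,
                 min_count=1, sg=1, epochs=50, seed=0)

# Item-to-item similarity for recommendation.
print(model.wv.most_similar("item_7", topn=3))
```
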
1607.06525 | Xi Zhang | Xi Zhang and Di Ma and Lin Gan and Shanshan Jiang and Gady Agam | CGMOS: Certainty Guided Minority OverSampling | Accepted by The 25th ACM International Conference on Information and
Knowledge Management (CIKM 2016) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Handling imbalanced datasets is a challenging problem that if not treated
correctly results in reduced classification performance. Imbalanced datasets
are commonly handled using minority oversampling, whereas the SMOTE algorithm
is a successful oversampling algorithm with numerous extensions. SMOTE
extensions do not have a theoretical guarantee during training to work better
than SMOTE and in many instances their performance is data dependent. In this
paper we propose a novel extension to the SMOTE algorithm with a theoretical
guarantee for improved classification performance. The proposed approach
considers the classification performance of both the majority and minority
classes. In the proposed approach CGMOS (Certainty Guided Minority
OverSampling) new data points are added by considering certainty changes in the
dataset. The paper provides a proof that the proposed algorithm is guaranteed
to work better than SMOTE for training data. Further experimental results on 30
real-world datasets show that CGMOS works better than existing algorithms when
using 6 different classifiers.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 23:09:46 GMT"
}
] | 2016-07-25T00:00:00 | [
[
"Zhang",
"Xi",
""
],
[
"Ma",
"Di",
""
],
[
"Gan",
"Lin",
""
],
[
"Jiang",
"Shanshan",
""
],
[
"Agam",
"Gady",
""
]
] | TITLE: CGMOS: Certainty Guided Minority OverSampling
ABSTRACT: Handling imbalanced datasets is a challenging problem that if not treated
correctly results in reduced classification performance. Imbalanced datasets
are commonly handled using minority oversampling, whereas the SMOTE algorithm
is a successful oversampling algorithm with numerous extensions. SMOTE
extensions do not have a theoretical guarantee during training to work better
than SMOTE and in many instances their performance is data dependent. In this
paper we propose a novel extension to the SMOTE algorithm with a theoretical
guarantee for improved classification performance. The proposed approach
considers the classification performance of both the majority and minority
classes. In the proposed approach CGMOS (Certainty Guided Minority
OverSampling) new data points are added by considering certainty changes in the
dataset. The paper provides a proof that the proposed algorithm is guaranteed
to work better than SMOTE for training data. Further experimental results on 30
real-world datasets show that CGMOS works better than existing algorithms when
using 6 different classifiers.
| no_new_dataset | 0.949059 |
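
For context, the plain SMOTE interpolation step that CGMOS extends can be written in a few lines of NumPy: pick a minority sample, pick one of its k nearest minority neighbours, and place a synthetic point on the segment between them. The certainty-guided selection that is the paper's actual contribution is not shown here; this is only a minimal sketch of the baseline idea.

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples by interpolating each picked
    sample toward one of its k nearest minority neighbours (plain SMOTE idea)."""
    if rng is None:
        rng = np.random.default_rng(0)
    dists = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    neighbours = np.argsort(dists, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = neighbours[i, rng.integers(k)]
        gap = rng.random()                       # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

rng = np.random.default_rng(1)
X_minority = rng.normal(size=(20, 4))            # toy minority-class samples
X_extra = smote_like_oversample(X_minority, n_new=40, k=5, rng=rng)
print(X_extra.shape)                             # (40, 4)
```
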
1607.06783 | Santosh Tirunagari | Santosh Tirunagari, Norman Poh, Miroslaw Bober and David Windridge | Can DMD obtain a Scene Background in Color? | International Conference on Image, Vision and Computing (ICIVC 2016),
August 3-5, 2016, Portsmouth, UK | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A background model describes a scene without any foreground objects and has a
number of applications, ranging from video surveillance to computational
photography. Recent studies have introduced the method of Dynamic Mode
Decomposition (DMD) for robustly separating video frames into a background
model and foreground components. While the method introduced operates by
converting color images to grayscale, we in this study propose a technique to
obtain the background model in the color domain. The effectiveness of our
technique is demonstrated using a publicly available Scene Background
Initialisation (SBI) dataset. Our results both qualitatively and quantitatively
show that DMD can successfully obtain a colored background model.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2016 18:41:01 GMT"
}
] | 2016-07-25T00:00:00 | [
[
"Tirunagari",
"Santosh",
""
],
[
"Poh",
"Norman",
""
],
[
"Bober",
"Miroslaw",
""
],
[
"Windridge",
"David",
""
]
] | TITLE: Can DMD obtain a Scene Background in Color?
ABSTRACT: A background model describes a scene without any foreground objects and has a
number of applications, ranging from video surveillance to computational
photography. Recent studies have introduced the method of Dynamic Mode
Decomposition (DMD) for robustly separating video frames into a background
model and foreground components. While the method introduced operates by
converting color images to grayscale, in this study we propose a technique to
obtain the background model in the color domain. The effectiveness of our
technique is demonstrated using a publicly available Scene Background
Initialisation (SBI) dataset. Our results both qualitatively and quantitatively
show that DMD can successfully obtain a colored background model.
| no_new_dataset | 0.921145 |
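
A compact sketch of DMD-based background estimation applied per color channel: compute DMD modes from the frame matrix and keep the mode whose eigenvalue is closest to 1 (the near-static dynamics) as the background. Frame sizes, rank, and the reconstruction step are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def dmd_background(frames, rank=10):
    """frames: (n_pixels, n_frames) matrix whose columns are vectorized frames
    of one color channel. Returns an estimated static background vector."""
    X1, X2 = frames[:, :-1], frames[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)  # reduced operator
    eigvals, W = np.linalg.eig(A_tilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W               # DMD modes
    # The background is the mode with eigenvalue closest to 1, i.e. (almost)
    # zero temporal frequency.
    bg = np.argmin(np.abs(np.log(eigvals.astype(complex))))
    amplitudes = np.linalg.pinv(Phi) @ frames[:, 0]
    return np.abs(Phi[:, bg] * amplitudes[bg])

# Toy usage: a nearly static synthetic color video, processed per RGB channel.
rng = np.random.default_rng(0)
h, w, T = 24, 32, 30
video = 0.5 + 0.05 * rng.random((h, w, 3, T))
background = np.stack(
    [dmd_background(video[:, :, c, :].reshape(h * w, T)).reshape(h, w)
     for c in range(3)], axis=-1)
print(background.shape)   # (24, 32, 3)
```
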
1607.06787 | Enzo Ferrante | Mahsa Shakeri (2 and 4), Enzo Ferrante (1), Stavros Tsogkas (1), Sarah
Lippe (3 and 4), Samuel Kadoury (2 and 4), Iasonas Kokkinos (1), Nikos
Paragios (1) ((1) CVN, CentraleSupelec-Inria, Universite Paris-Saclay,
France, (2) Polytechnique Montreal, Canada (3) University of Montreal, Canada
(4) CHU Sainte-Justine Research Center, Montreal, Canada) | Prior-based Coregistration and Cosegmentation | The first two authors contributed equally | MICCAI 2016 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a modular and scalable framework for dense coregistration and
cosegmentation with two key characteristics: first, we substitute ground truth
data with the semantic map output of a classifier; second, we combine this
output with population deformable registration to improve both alignment and
segmentation. Our approach deforms all volumes towards consensus, taking into
account image similarities and label consistency. Our pipeline can incorporate
any classifier and similarity metric. Results on two datasets, containing
annotations of challenging brain structures, demonstrate the potential of our
method.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2016 18:49:09 GMT"
}
] | 2016-07-25T00:00:00 | [
[
"Shakeri",
"Mahsa",
"",
"2 and 4"
],
[
"Ferrante",
"Enzo",
"",
"3 and 4"
],
[
"Tsogkas",
"Stavros",
"",
"3 and 4"
],
[
"Lippe",
"Sarah",
"",
"3 and 4"
],
[
"Kadoury",
"Samuel",
"",
"2 and 4"
],
[
"Kokkinos",
"Iasonas",
""
],
[
"Paragios",
"Nikos",
""
]
] | TITLE: Prior-based Coregistration and Cosegmentation
ABSTRACT: We propose a modular and scalable framework for dense coregistration and
cosegmentation with two key characteristics: first, we substitute ground truth
data with the semantic map output of a classifier; second, we combine this
output with population deformable registration to improve both alignment and
segmentation. Our approach deforms all volumes towards consensus, taking into
account image similarities and label consistency. Our pipeline can incorporate
any classifier and similarity metric. Results on two datasets, containing
annotations of challenging brain structures, demonstrate the potential of our
method.
| no_new_dataset | 0.950503 |
1507.02186 | Nicol\`o Navarin | Nicol\`o Navarin, Alessandro Sperduti, Riccardo Tesselli | Extending local features with contextual information in graph kernels | To appear in ICONIP 2015 | Lecture Notes in Computer Science, Neural Information Processing,
22nd International Conference, ICONIP 2015, November 9-12, 2015, Proceedings,
Part IV | 10.1007/978-3-319-26561-2_33 | 9492, pp 271-279 | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph kernels are usually defined in terms of simpler kernels over local
substructures of the original graphs. Different kernels consider different
types of substructures. However, in some cases they have similar predictive
performances, probably because the substructures can be interpreted as
approximations of the subgraphs they induce. In this paper, we propose to
associate to each feature a piece of information about the context in which the
feature appears in the graph. A substructure appearing in two different graphs
will match only if it appears with the same context in both graphs. We propose
a kernel based on this idea that considers trees as substructures, and where
the contexts are features too. The kernel is inspired from the framework in
[6], even if it is not part of it. We give an efficient algorithm for computing
the kernel and show promising results on real-world graph classification
datasets.
| [
{
"version": "v1",
"created": "Wed, 8 Jul 2015 14:58:49 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Sep 2015 10:23:38 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Navarin",
"Nicolò",
""
],
[
"Sperduti",
"Alessandro",
""
],
[
"Tesselli",
"Riccardo",
""
]
] | TITLE: Extending local features with contextual information in graph kernels
ABSTRACT: Graph kernels are usually defined in terms of simpler kernels over local
substructures of the original graphs. Different kernels consider different
types of substructures. However, in some cases they have similar predictive
performances, probably because the substructures can be interpreted as
approximations of the subgraphs they induce. In this paper, we propose to
associate to each feature a piece of information about the context in which the
feature appears in the graph. A substructure appearing in two different graphs
will match only if it appears with the same context in both graphs. We propose
a kernel based on this idea that considers trees as substructures, and where
the contexts are features too. The kernel is inspired from the framework in
[6], even if it is not part of it. We give an efficient algorithm for computing
the kernel and show promising results on real-world graph classification
datasets.
| no_new_dataset | 0.951504 |
1602.04301 | Dingxiong Deng | Dingxiong Deng, Cyrus Shahabi, Ugur Demiryurek, Linhong Zhu, Rose Yu,
Yan Liu | Latent Space Model for Road Networks to Predict Time-Varying Traffic | null | null | null | null | cs.SI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time traffic prediction from high-fidelity spatiotemporal traffic sensor
datasets is an important problem for intelligent transportation systems and
sustainability. However, it is challenging due to the complex topological
dependencies and high dynamics associated with changing road conditions. In
this paper, we propose a Latent Space Model for Road Networks (LSM-RN) to
address these challenges. In particular, given a series of road network
snapshots, we learn the attributes of vertices in latent spaces which capture
both topological and temporal properties. As these latent attributes are
time-dependent, they can estimate how traffic patterns form and evolve. In
addition, we present an incremental online algorithm which sequentially and
adaptively learns the latent attributes from the temporal graph changes. Our
framework enables real-time traffic prediction by 1) exploiting real-time
sensor readings to adjust/update the existing latent spaces, and 2) training as
data arrives and making predictions on-the-fly with given data. By conducting
extensive experiments with a large volume of real-world traffic sensor data, we
demonstrate the utility and superiority of our framework for real-time traffic
prediction on large road networks over competitors as well as a baseline
graph-based LSM.
| [
{
"version": "v1",
"created": "Sat, 13 Feb 2016 08:18:07 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 07:39:45 GMT"
},
{
"version": "v3",
"created": "Thu, 21 Jul 2016 01:36:57 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Deng",
"Dingxiong",
""
],
[
"Shahabi",
"Cyrus",
""
],
[
"Demiryurek",
"Ugur",
""
],
[
"Zhu",
"Linhong",
""
],
[
"Yu",
"Rose",
""
],
[
"Liu",
"Yan",
""
]
] | TITLE: Latent Space Model for Road Networks to Predict Time-Varying Traffic
ABSTRACT: Real-time traffic prediction from high-fidelity spatiotemporal traffic sensor
datasets is an important problem for intelligent transportation systems and
sustainability. However, it is challenging due to the complex topological
dependencies and high dynamics associated with changing road conditions. In
this paper, we propose a Latent Space Model for Road Networks (LSM-RN) to
address these challenges. In particular, given a series of road network
snapshots, we learn the attributes of vertices in latent spaces which capture
both topological and temporal properties. As these latent attributes are
time-dependent, they can estimate how traffic patterns form and evolve. In
addition, we present an incremental online algorithm which sequentially and
adaptively learns the latent attributes from the temporal graph changes. Our
framework enables real-time traffic prediction by 1) exploiting real-time
sensor readings to adjust/update the existing latent spaces, and 2) training as
data arrives and making predictions on-the-fly with given data. By conducting
extensive experiments with a large volume of real-world traffic sensor data, we
demonstrate the utility and superiority of our framework for real-time traffic
prediction on large road networks over competitors as well as a baseline
graph-based LSM.
| no_new_dataset | 0.948632 |
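
The notion of latent vertex attributes on a road network can be illustrated on a single snapshot by factorizing a speed-weighted adjacency matrix with non-negative matrix factorization; warm-starting the factors from a previous snapshot hints at the incremental setting. This static sketch (scikit-learn NMF on synthetic data) is not the paper's LSM-RN algorithm.

```python
import numpy as np
from sklearn.decomposition import NMF

# One snapshot of a toy road network: entry (i, j) holds the observed speed
# on edge i -> j, 0 meaning no edge or no reading.
rng = np.random.default_rng(0)
n_vertices, k = 50, 8
adjacency = np.where(rng.random((n_vertices, n_vertices)) < 0.1,
                     rng.uniform(10, 70, (n_vertices, n_vertices)), 0.0)

# Latent vertex attributes: adjacency ~ U @ V with k non-negative attributes
# per vertex; a later snapshot could warm-start from the previous factors.
model = NMF(n_components=k, init="nndsvda", max_iter=500)
U = model.fit_transform(adjacency)        # (n_vertices, k) latent attributes
V = model.components_                     # (k, n_vertices)
rmse = np.sqrt(np.mean((U @ V - adjacency) ** 2))
print(f"snapshot reconstruction RMSE: {rmse:.2f}")
```
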
1607.06141 | Jonathan Ullman | Lucas Kowalczyk, Tal Malkin, Jonathan Ullman, Mark Zhandry | Strong Hardness of Privacy from Weak Traitor Tracing | null | null | null | null | cs.CR cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite much study, the computational complexity of differential privacy
remains poorly understood. In this paper we consider the computational
complexity of accurately answering a family $Q$ of statistical queries over a
data universe $X$ under differential privacy. A statistical query on a dataset
$D \in X^n$ asks "what fraction of the elements of $D$ satisfy a given
predicate $p$ on $X$?" Dwork et al. (STOC'09) and Boneh and Zhandry (CRYPTO'14)
showed that if both $Q$ and $X$ are of polynomial size, then there is an
efficient differentially private algorithm that accurately answers all the
queries, and if both $Q$ and $X$ are exponential size, then under a plausible
assumption, no efficient algorithm exists.
We show that, under the same assumption, if either the number of queries or
the data universe is of exponential size, and the other has size at least
$\tilde{O}(n^7)$, then there is no differentially private algorithm that
answers all the queries. In both cases, the result is nearly quantitatively
tight, since there is an efficient differentially private algorithm that
answers $\tilde{\Omega}(n^2)$ queries on an exponential size data universe, and
one that answers exponentially many queries on a data universe of size
$\tilde{\Omega}(n^2)$.
Our proofs build on the connection between hardness results in differential
privacy and traitor-tracing schemes (Dwork et al., STOC'09; Ullman, STOC'13).
We prove our hardness result for a polynomial size query set (resp., data
universe) by showing that they follow from the existence of a special type of
traitor-tracing scheme with very short ciphertexts (resp., secret keys), but
very weak security guarantees, and then constructing such a scheme.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 22:31:10 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Kowalczyk",
"Lucas",
""
],
[
"Malkin",
"Tal",
""
],
[
"Ullman",
"Jonathan",
""
],
[
"Zhandry",
"Mark",
""
]
] | TITLE: Strong Hardness of Privacy from Weak Traitor Tracing
ABSTRACT: Despite much study, the computational complexity of differential privacy
remains poorly understood. In this paper we consider the computational
complexity of accurately answering a family $Q$ of statistical queries over a
data universe $X$ under differential privacy. A statistical query on a dataset
$D \in X^n$ asks "what fraction of the elements of $D$ satisfy a given
predicate $p$ on $X$?" Dwork et al. (STOC'09) and Boneh and Zhandry (CRYPTO'14)
showed that if both $Q$ and $X$ are of polynomial size, then there is an
efficient differentially private algorithm that accurately answers all the
queries, and if both $Q$ and $X$ are exponential size, then under a plausible
assumption, no efficient algorithm exists.
We show that, under the same assumption, if either the number of queries or
the data universe is of exponential size, and the other has size at least
$\tilde{O}(n^7)$, then there is no differentially private algorithm that
answers all the queries. In both cases, the result is nearly quantitatively
tight, since there is an efficient differentially private algorithm that
answers $\tilde{\Omega}(n^2)$ queries on an exponential size data universe, and
one that answers exponentially many queries on a data universe of size
$\tilde{\Omega}(n^2)$.
Our proofs build on the connection between hardness results in differential
privacy and traitor-tracing schemes (Dwork et al., STOC'09; Ullman, STOC'13).
We prove our hardness result for a polynomial size query set (resp., data
universe) by showing that they follow from the existence of a special type of
traitor-tracing scheme with very short ciphertexts (resp., secret keys), but
very weak security guarantees, and then constructing such a scheme.
| no_new_dataset | 0.941493 |
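
As background for the statistical-query setting in the abstract above, here is the textbook Laplace mechanism answering a single query "what fraction of D satisfies p" with epsilon-differential privacy; the fraction has sensitivity 1/n, so Laplace noise of scale 1/(n * epsilon) suffices. The data and predicate are invented for illustration; the paper itself is about hardness results, not this mechanism.

```python
import numpy as np

def private_statistical_query(dataset, predicate, epsilon, rng=None):
    """Answer 'what fraction of D satisfies p' with epsilon-differential privacy.
    The fraction has sensitivity 1/n, so Laplace noise of scale 1/(n * epsilon)
    suffices (standard Laplace mechanism)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(dataset)
    true_fraction = np.mean([bool(predicate(x)) for x in dataset])
    return true_fraction + rng.laplace(loc=0.0, scale=1.0 / (n * epsilon))

rng = np.random.default_rng(7)
D = rng.integers(0, 100, size=5000)               # toy universe: ages 0..99
answer = private_statistical_query(D, lambda x: x >= 65, epsilon=0.1, rng=rng)
print(f"noisy fraction satisfying the predicate: {answer:.4f}")
```
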
1607.06144 | Tatiana Tommasi | Tatiana Tommasi, Martina Lanzi, Paolo Russo, Barbara Caputo | Learning the Roots of Visual Domain Shift | Extended Abstract | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we focus on the spatial nature of visual domain shift,
attempting to learn where domain adaptation originates in each given image of
the source and target set. We borrow concepts and techniques from the CNN
visualization literature, and learn domainness maps able to localize the degree
of domain specificity in images. We derive from these maps features related to
different domainness levels, and we show that by considering them as a
preprocessing step for a domain adaptation algorithm, the final classification
performance is strongly improved. Combined with the whole image representation,
these features provide state of the art results on the Office dataset.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 22:43:44 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Tommasi",
"Tatiana",
""
],
[
"Lanzi",
"Martina",
""
],
[
"Russo",
"Paolo",
""
],
[
"Caputo",
"Barbara",
""
]
] | TITLE: Learning the Roots of Visual Domain Shift
ABSTRACT: In this paper we focus on the spatial nature of visual domain shift,
attempting to learn where domain adaptation originates in each given image of
the source and target set. We borrow concepts and techniques from the CNN
visualization literature, and learn domainness maps able to localize the degree
of domain specificity in images. We derive from these maps features related to
different domainness levels, and we show that by considering them as a
preprocessing step for a domain adaptation algorithm, the final classification
performance is strongly improved. Combined with the whole image representation,
these features provide state of the art results on the Office dataset.
| no_new_dataset | 0.953751 |
1607.06182 | Shiyu Chang | Shiyu Chang, Yang Zhang, Jiliang Tang, Dawei Yin, Yi Chang, Mark A.
Hasegawa-Johnson, Thomas S. Huang | Streaming Recommender Systems | null | null | null | null | cs.SI cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing popularity of real-world recommender systems produces data
continuously and rapidly, and it becomes more realistic to study recommender
systems under streaming scenarios. Data streams present distinct properties
such as temporally ordered, continuous and high-velocity, which poses
tremendous challenges to traditional recommender systems. In this paper, we
investigate the problem of recommendation with stream inputs. In particular, we
provide a principled framework termed sRec, which provides explicit
continuous-time random process models of the creation of users and topics, and
of the evolution of their interests. A variational Bayesian approach called
recursive meanfield approximation is proposed, which permits computationally
efficient instantaneous on-line inference. Experimental results on several
real-world datasets demonstrate the advantages of our sRec over other
state-of-the-arts.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 04:10:38 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Chang",
"Shiyu",
""
],
[
"Zhang",
"Yang",
""
],
[
"Tang",
"Jiliang",
""
],
[
"Yin",
"Dawei",
""
],
[
"Chang",
"Yi",
""
],
[
"Hasegawa-Johnson",
"Mark A.",
""
],
[
"Huang",
"Thomas S.",
""
]
] | TITLE: Streaming Recommender Systems
ABSTRACT: The increasing popularity of real-world recommender systems produces data
continuously and rapidly, and it becomes more realistic to study recommender
systems under streaming scenarios. Data streams present distinct properties
such as temporally ordered, continuous and high-velocity, which poses
tremendous challenges to traditional recommender systems. In this paper, we
investigate the problem of recommendation with stream inputs. In particular, we
provide a principled framework termed sRec, which provides explicit
continuous-time random process models of the creation of users and topics, and
of the evolution of their interests. A variational Bayesian approach called
recursive meanfield approximation is proposed, which permits computationally
efficient instantaneous on-line inference. Experimental results on several
real-world datasets demonstrate the advantages of our sRec over other
state-of-the-arts.
| no_new_dataset | 0.94801 |
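
The "train as data arrives" aspect of streaming recommendation can be made concrete with a much simpler model than sRec: plain matrix factorization updated by one stochastic-gradient step per incoming (user, item, rating) event. The class below is a toy sketch under that assumption, not the paper's continuous-time variational method.

```python
import numpy as np

class StreamingMF:
    """Toy online matrix factorization: one SGD step per incoming rating."""
    def __init__(self, n_users, n_items, k=16, lr=0.05, reg=0.02, seed=0):
        rng = np.random.default_rng(seed)
        self.P = 0.1 * rng.standard_normal((n_users, k))   # user factors
        self.Q = 0.1 * rng.standard_normal((n_items, k))   # item factors
        self.lr, self.reg = lr, reg

    def update(self, u, i, r):
        pu, qi = self.P[u].copy(), self.Q[i].copy()
        err = r - pu @ qi
        self.P[u] += self.lr * (err * qi - self.reg * pu)
        self.Q[i] += self.lr * (err * pu - self.reg * qi)

    def predict(self, u, i):
        return self.P[u] @ self.Q[i]

rng = np.random.default_rng(1)
model = StreamingMF(n_users=100, n_items=50)
for _ in range(5000):                              # simulated rating stream
    u, i = rng.integers(100), rng.integers(50)
    r = 3.0 + 0.5 * np.sin(u) + 0.5 * np.cos(i)    # synthetic rating signal
    model.update(u, i, r)
print(round(model.predict(3, 7), 2))
```
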
1607.06186 | Uwe Aickelin | Javier Navarro, Christian Wagner, Uwe Aickelin | Applying Interval Type-2 Fuzzy Rule Based Classifiers Through a
Cluster-Based Class Representation | 2015 IEEE Symposium Series on Computational Intelligence, pp.
1816-1823, IEEE, 2015, ISBN: 978-1-4799-7560-0 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fuzzy Rule-Based Classification Systems (FRBCSs) have the potential to
provide so-called interpretable classifiers, i.e. classifiers which can be
introspective, understood, validated and augmented by human experts by relying
on fuzzy-set based rules. This paper builds on prior work for interval type-2
fuzzy set based FRBCs where the fuzzy sets and rules of the classifier are
generated using an initial clustering stage. By introducing Subtractive
Clustering in order to identify multiple cluster prototypes, the proposed
approach has the potential to deliver improved classification performance while
maintaining good interpretability, i.e. without resulting in an excessive
number of rules. The paper provides a detailed overview of the proposed FRBC
framework, followed by a series of exploratory experiments on both linearly and
non-linearly separable datasets, comparing results to existing rule-based and
SVM approaches. Overall, initial results indicate that the approach enables
comparable classification performance to non rule-based classifiers such as
SVM, while often achieving this with a very small number of rules.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 04:36:23 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Navarro",
"Javier",
""
],
[
"Wagner",
"Christian",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: Applying Interval Type-2 Fuzzy Rule Based Classifiers Through a
Cluster-Based Class Representation
ABSTRACT: Fuzzy Rule-Based Classification Systems (FRBCSs) have the potential to
provide so-called interpretable classifiers, i.e. classifiers which can be
introspective, understood, validated and augmented by human experts by relying
on fuzzy-set based rules. This paper builds on prior work for interval type-2
fuzzy set based FRBCs where the fuzzy sets and rules of the classifier are
generated using an initial clustering stage. By introducing Subtractive
Clustering in order to identify multiple cluster prototypes, the proposed
approach has the potential to deliver improved classification performance while
maintaining good interpretability, i.e. without resulting in an excessive
number of rules. The paper provides a detailed overview of the proposed FRBC
framework, followed by a series of exploratory experiments on both linearly and
non-linearly separable datasets, comparing results to existing rule-based and
SVM approaches. Overall, initial results indicate that the approach enables
comparable classification performance to non rule-based classifiers such as
SVM, while often achieving this with a very small number of rules.
| no_new_dataset | 0.949763 |
1607.06215 | Qiyue Yin | Kaiye Wang, Qiyue Yin, Wei Wang, Shu Wu, Liang Wang | A Comprehensive Survey on Cross-modal Retrieval | 20 pages, 11 figures, 9 tables | null | null | null | cs.MM cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, cross-modal retrieval has drawn much attention due to the
rapid growth of multimodal data. It takes one type of data as the query to
retrieve relevant data of another type. For example, a user can use a text to
retrieve relevant pictures or videos. Since the query and its retrieved results
can be of different modalities, how to measure the content similarity between
different modalities of data remains a challenge. Various methods have been
proposed to deal with such a problem. In this paper, we first review a number
of representative methods for cross-modal retrieval and classify them into two
main groups: 1) real-valued representation learning, and 2) binary
representation learning. Real-valued representation learning methods aim to
learn real-valued common representations for different modalities of data. To
speed up the cross-modal retrieval, a number of binary representation learning
methods are proposed to map different modalities of data into a common Hamming
space. Then, we introduce several multimodal datasets in the community, and
show the experimental results on two commonly used multimodal datasets. The
comparison reveals the characteristic of different kinds of cross-modal
retrieval methods, which is expected to benefit both practical applications and
future research. Finally, we discuss open problems and future research
directions.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 07:20:44 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Wang",
"Kaiye",
""
],
[
"Yin",
"Qiyue",
""
],
[
"Wang",
"Wei",
""
],
[
"Wu",
"Shu",
""
],
[
"Wang",
"Liang",
""
]
] | TITLE: A Comprehensive Survey on Cross-modal Retrieval
ABSTRACT: In recent years, cross-modal retrieval has drawn much attention due to the
rapid growth of multimodal data. It takes one type of data as the query to
retrieve relevant data of another type. For example, a user can use a text to
retrieve relevant pictures or videos. Since the query and its retrieved results
can be of different modalities, how to measure the content similarity between
different modalities of data remains a challenge. Various methods have been
proposed to deal with such a problem. In this paper, we first review a number
of representative methods for cross-modal retrieval and classify them into two
main groups: 1) real-valued representation learning, and 2) binary
representation learning. Real-valued representation learning methods aim to
learn real-valued common representations for different modalities of data. To
speed up the cross-modal retrieval, a number of binary representation learning
methods are proposed to map different modalities of data into a common Hamming
space. Then, we introduce several multimodal datasets in the community, and
show the experimental results on two commonly used multimodal datasets. The
comparison reveals the characteristic of different kinds of cross-modal
retrieval methods, which is expected to benefit both practical applications and
future research. Finally, we discuss open problems and future research
directions.
| no_new_dataset | 0.943243 |
1607.06235 | Yu Li | Yu Li, Shaodi You, Michael S. Brown, Robby T. Tan | Haze Visibility Enhancement: A Survey and Quantitative Benchmarking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper provides a comprehensive survey of methods dealing with visibility
enhancement of images taken in hazy or foggy scenes. The survey begins with
discussing the optical models of atmospheric scattering media and image
formation. This is followed by a survey of existing methods, which are grouped
to multiple image methods, polarizing filters based methods, methods with known
depth, and single-image methods. We also provide a benchmark of a number of
well known single-image methods, based on a recent dataset provided by Fattal
and our newly generated scattering media dataset that contains ground truth
images for quantitative evaluation. To our knowledge, this is the first
benchmark using numerical metrics to evaluate dehazing techniques. This
benchmark allows us to objectively compare the results of existing methods and
to better identify the strengths and limitations of each method.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 08:57:13 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Li",
"Yu",
""
],
[
"You",
"Shaodi",
""
],
[
"Brown",
"Michael S.",
""
],
[
"Tan",
"Robby T.",
""
]
] | TITLE: Haze Visibility Enhancement: A Survey and Quantitative Benchmarking
ABSTRACT: This paper provides a comprehensive survey of methods dealing with visibility
enhancement of images taken in hazy or foggy scenes. The survey begins with
discussing the optical models of atmospheric scattering media and image
formation. This is followed by a survey of existing methods, which are grouped
to multiple image methods, polarizing filters based methods, methods with known
depth, and single-image methods. We also provide a benchmark of a number of
well known single-image methods, based on a recent dataset provided by Fattal
and our newly generated scattering media dataset that contains ground truth
images for quantitative evaluation. To our knowledge, this is the first
benchmark using numerical metrics to evaluate dehazing techniques. This
benchmark allows us to objectively compare the results of existing methods and
to better identify the strengths and limitations of each method.
| new_dataset | 0.956391 |
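
The quantitative benchmarking mentioned above relies on full-reference metrics computed against ground-truth haze-free images; a minimal example of one such metric (PSNR) is sketched below. The images here are synthetic placeholders, and the benchmark in the paper may use additional metrics.

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio between a haze-free reference and a dehazed
    estimate; higher is better."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(0)
ground_truth = rng.random((64, 64, 3))                       # haze-free reference
dehazed = np.clip(ground_truth + 0.05 * rng.standard_normal((64, 64, 3)), 0, 1)
print(f"PSNR: {psnr(ground_truth, dehazed):.2f} dB")
```
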
1607.06250 | Arnaud Dapogny | Arnaud Dapogny, K\'evin Bailly, S\'everine Dubuisson | Dynamic Pose-Robust Facial Expression Recognition by Multi-View Pairwise
Conditional Random Forests | Extension of an ICCV 2015 paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic facial expression classification (FER) from videos is a critical
problem for the development of intelligent human-computer interaction systems.
Still, it is a challenging problem that involves capturing high-dimensional
spatio-temporal patterns describing the variation of one's appearance over
time. Such representation undergoes great variability of the facial morphology
and environmental factors as well as head pose variations. In this paper, we
use Conditional Random Forests to capture low-level expression transition
patterns. More specifically, heterogeneous derivative features (e.g. feature
point movements or texture variations) are evaluated upon pairs of images. When
testing on a video frame, pairs are created between this current frame and
previous ones and predictions for each previous frame are used to draw trees
from Pairwise Conditional Random Forests (PCRF) whose pairwise outputs are
averaged over time to produce robust estimates. Moreover, PCRF collections can
also be conditioned on head pose estimation for multi-view dynamic FER. As
such, our approach appears as a natural extension of Random Forests for
learning spatio-temporal patterns, potentially from multiple viewpoints.
Experiments on popular datasets show that our method leads to significant
improvements over standard Random Forests as well as state-of-the-art
approaches on several scenarios, including a novel multi-view video corpus
generated from a publicly available database.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 10:07:33 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Dapogny",
"Arnaud",
""
],
[
"Bailly",
"Kévin",
""
],
[
"Dubuisson",
"Séverine",
""
]
] | TITLE: Dynamic Pose-Robust Facial Expression Recognition by Multi-View Pairwise
Conditional Random Forests
ABSTRACT: Automatic facial expression classification (FER) from videos is a critical
problem for the development of intelligent human-computer interaction systems.
Still, it is a challenging problem that involves capturing high-dimensional
spatio-temporal patterns describing the variation of one's appearance over
time. Such representation undergoes great variability of the facial morphology
and environmental factors as well as head pose variations. In this paper, we
use Conditional Random Forests to capture low-level expression transition
patterns. More specifically, heterogeneous derivative features (e.g. feature
point movements or texture variations) are evaluated upon pairs of images. When
testing on a video frame, pairs are created between this current frame and
previous ones and predictions for each previous frame are used to draw trees
from Pairwise Conditional Random Forests (PCRF) whose pairwise outputs are
averaged over time to produce robust estimates. Moreover, PCRF collections can
also be conditioned on head pose estimation for multi-view dynamic FER. As
such, our approach appears as a natural extension of Random Forests for
learning spatio-temporal patterns, potentially from multiple viewpoints.
Experiments on popular datasets show that our method leads to significant
improvements over standard Random Forests as well as state-of-the-art
approaches on several scenarios, including a novel multi-view video corpus
generated from a publicly available database.
| no_new_dataset | 0.949295 |
1607.06299 | Roman Klinger | Janik Jaskolski, Fabian Siegberg, Thomas Tibroni, Philipp Cimiano,
Roman Klinger | Opinion Mining in Online Reviews About Distance Education Programs | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The popularity of distance education programs is increasing at a fast pace.
In parallel with this development, online communication in fora, social media and
reviewing platforms between students is increasing as well. Exploiting this
information to support fellow students or institutions requires to extract the
relevant opinions in order to automatically generate reports providing an
overview of pros and cons of different distance education programs. We report
on an experiment involving distance education experts with the goal to develop
a dataset of reviews annotated with relevant categories and aspects in each
category discussed in the specific review together with an indication of the
sentiment.
Based on this experiment, we present an approach to extract general
categories and specific aspects under discussion in a review together with
their sentiment. We frame this task as a multi-label hierarchical text
classification problem and empirically investigate the performance of different
classification architectures to couple the prediction of a category with the
prediction of particular aspects in this category. We evaluate different
architectures and show that a hierarchical approach leads to superior results
in comparison to a flat model which makes decisions independently.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 12:43:21 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Jaskolski",
"Janik",
""
],
[
"Siegberg",
"Fabian",
""
],
[
"Tibroni",
"Thomas",
""
],
[
"Cimiano",
"Philipp",
""
],
[
"Klinger",
"Roman",
""
]
] | TITLE: Opinion Mining in Online Reviews About Distance Education Programs
ABSTRACT: The popularity of distance education programs is increasing at a fast pace.
On par with this development, online communication in fora, social media and
reviewing platforms between students is increasing as well. Exploiting this
information to support fellow students or institutions requires extracting the
relevant opinions in order to automatically generate reports providing an
overview of pros and cons of different distance education programs. We report
on an experiment involving distance education experts with the goal to develop
a dataset of reviews annotated with relevant categories and aspects in each
category discussed in the specific review together with an indication of the
sentiment.
Based on this experiment, we present an approach to extract general
categories and specific aspects under discussion in a review together with
their sentiment. We frame this task as a multi-label hierarchical text
classification problem and empirically investigate the performance of different
classification architectures to couple the prediction of a category with the
prediction of particular aspects in this category. We evaluate different
architectures and show that a hierarchical approach leads to superior results
in comparison to a flat model which makes decisions independently.
| new_dataset | 0.958499 |
1607.06339 | Santiago Segarra | Gunnar Carlsson, Facundo M\'emoli, Alejandro Ribeiro, Santiago Segarra | Excisive Hierarchical Clustering Methods for Network Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce two practical properties of hierarchical clustering methods for
(possibly asymmetric) network data: excisiveness and linear scale preservation.
The latter enforces imperviousness to change in units of measure whereas the
former ensures local consistency of the clustering outcome. Algorithmically,
excisiveness implies that we can reduce computational complexity by only
clustering a data subset of interest while theoretically guaranteeing that the
same hierarchical outcome would be observed when clustering the whole dataset.
Moreover, we introduce the concept of representability, i.e. a generative model
for describing clustering methods through the specification of their action on
a collection of networks. We further show that, within a rich set of admissible
methods, requiring representability is equivalent to requiring both
excisiveness and linear scale preservation. Leveraging this equivalence, we
show that all excisive and linear scale preserving methods can be factored into
two steps: a transformation of the weights in the input network followed by the
application of a canonical clustering method. Furthermore, their factorization
can be used to show stability of excisive and linear scale preserving methods
in the sense that a bounded perturbation in the input network entails a bounded
perturbation in the clustering output.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 14:28:51 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Carlsson",
"Gunnar",
""
],
[
"Mémoli",
"Facundo",
""
],
[
"Ribeiro",
"Alejandro",
""
],
[
"Segarra",
"Santiago",
""
]
] | TITLE: Excisive Hierarchical Clustering Methods for Network Data
ABSTRACT: We introduce two practical properties of hierarchical clustering methods for
(possibly asymmetric) network data: excisiveness and linear scale preservation.
The latter enforces imperviousness to change in units of measure whereas the
former ensures local consistency of the clustering outcome. Algorithmically,
excisiveness implies that we can reduce computational complexity by only
clustering a data subset of interest while theoretically guaranteeing that the
same hierarchical outcome would be observed when clustering the whole dataset.
Moreover, we introduce the concept of representability, i.e. a generative model
for describing clustering methods through the specification of their action on
a collection of networks. We further show that, within a rich set of admissible
methods, requiring representability is equivalent to requiring both
excisiveness and linear scale preservation. Leveraging this equivalence, we
show that all excisive and linear scale preserving methods can be factored into
two steps: a transformation of the weights in the input network followed by the
application of a canonical clustering method. Furthermore, their factorization
can be used to show stability of excisive and linear scale preserving methods
in the sense that a bounded perturbation in the input network entails a bounded
perturbation in the clustering output.
| no_new_dataset | 0.944228 |
1607.06402 | Mohammad Ashraful Hoque Mohammad Ashraful Hoque | Mohammad A. Hoque and Sasu Tarkoma | Characterizing Smartphone Power Management in the Wild | Proceedings of 7th International Workshop on Hot Topics in
Planet-Scale Measurement, HotPlanet'16 | null | 10.1145/2968219.2968295 | null | cs.OH | http://creativecommons.org/licenses/by/4.0/ | For better reliability and prolonged battery life, it is important that users
and vendors understand the quality of charging and the performance of
smartphone batteries. Considering the diverse set of devices and user behavior,
this is a challenge. In this work, we analyze a large battery analytics dataset
collected from 30K devices of 1.5K unique smartphone models.
We analyze their battery properties and state of charge while charging, and
reveal the characteristics of different components of their power management
systems: charging mechanisms, state of charge estimation techniques, and their
battery properties. We explore diverse charging behavior of devices and their
users.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 17:32:33 GMT"
}
] | 2016-07-22T00:00:00 | [
[
"Hoque",
"Mohammad A.",
""
],
[
"Tarkoma",
"Sasu",
""
]
] | TITLE: Characterizing Smartphone Power Management in the Wild
ABSTRACT: For better reliability and prolonged battery life, it is important that users
and vendors understand the quality of charging and the performance of
smartphone batteries. Considering the diverse set of devices and user behavior,
this is a challenge. In this work, we analyze a large battery analytics dataset
collected from 30K devices of 1.5K unique smartphone models.
We analyze their battery properties and state of charge while charging, and
reveal the characteristics of different components of their power management
systems: charging mechanisms, state of charge estimation techniques, and their
battery properties. We explore diverse charging behavior of devices and their
users.
| no_new_dataset | 0.937669 |
1507.03372 | Nicol\`o Navarin | Giovanni Da San Martino, Nicol\`o Navarin, Alessandro Sperduti | Ordered Decompositional DAG Kernels Enhancements | Paper accepted for publication in Neurocomputing | Neurocomputing, Volume 192, 5 June 2016, Pages 92--103 | 10.1016/j.neucom.2015.12.110 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we show how the Ordered Decomposition DAGs (ODD) kernel
framework, which allows the definition of graph kernels from tree
kernels, makes it easy to define new state-of-the-art graph kernels. Here we
consider a fast graph kernel based on the Subtree kernel (ST), and we propose
various enhancements to increase its expressiveness. The proposed DAG kernel
has the same worst-case complexity as the one based on ST, but an improved
expressivity due to an augmented set of features. Moreover, we propose a novel
weighting scheme for the features, which can be applied to other kernels of the
ODD framework. These improvements allow the proposed kernels to improve on the
classification performances of the ST-based kernel for several real-world
datasets, reaching state-of-the-art performances.
| [
{
"version": "v1",
"created": "Mon, 13 Jul 2015 09:50:41 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Dec 2015 14:03:57 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Martino",
"Giovanni Da San",
""
],
[
"Navarin",
"Nicolò",
""
],
[
"Sperduti",
"Alessandro",
""
]
] | TITLE: Ordered Decompositional DAG Kernels Enhancements
ABSTRACT: In this paper, we show how the Ordered Decomposition DAGs (ODD) kernel
framework, which allows the definition of graph kernels from tree
kernels, makes it easy to define new state-of-the-art graph kernels. Here we
consider a fast graph kernel based on the Subtree kernel (ST), and we propose
various enhancements to increase its expressiveness. The proposed DAG kernel
has the same worst-case complexity as the one based on ST, but an improved
expressivity due to an augmented set of features. Moreover, we propose a novel
weighting scheme for the features, which can be applied to other kernels of the
ODD framework. These improvements allow the proposed kernels to improve on the
classification performances of the ST-based kernel for several real-world
datasets, reaching state-of-the-art performances.
| no_new_dataset | 0.949342 |
1509.02868 | Cesar Hidalgo | C\'esar A. Hidalgo and Elisa E. Casta\~ner | The amenity space and the evolution of neighborhoods | no comments | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neighborhoods populated by amenities--such as restaurants, cafes, and
libraries--are considered to be a key property of desirable cities. Yet,
despite the global enthusiasm for amenity-rich neighborhoods, little is known
about the empirical laws governing the colocation of amenities at the
neighborhood scale. Here, we contribute to our understanding of the naturally
occurring neighborhood-scale agglomerations of amenities observed in cities by
using a dataset summarizing the precise location of millions of amenities. We
use this dataset to build the network of co-location of amenities, or Amenity
Space, by first introducing a clustering algorithm to identify neighborhoods,
and then using the identified neighborhoods to map the probability that two
amenities will be co-located in one of them. Finally, we use the Amenity Space
to build a recommender system that identifies the amenities that are missing in
a neighborhood given its current pattern of specialization. This opens the door
for the construction of amenity recommendation algorithms that can be used to
evaluate neighborhoods and inform their improvement and development.
| [
{
"version": "v1",
"created": "Wed, 9 Sep 2015 17:50:35 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jul 2016 13:35:37 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Hidalgo",
"César A.",
""
],
[
"Castañer",
"Elisa E.",
""
]
] | TITLE: The amenity space and the evolution of neighborhoods
ABSTRACT: Neighborhoods populated by amenities--such as restaurants, cafes, and
libraries--are considered to be a key property of desirable cities. Yet,
despite the global enthusiasm for amenity-rich neighborhoods, little is known
about the empirical laws governing the colocation of amenities at the
neighborhood scale. Here, we contribute to our understanding of the naturally
occurring neighborhood-scale agglomerations of amenities observed in cities by
using a dataset summarizing the precise location of millions of amenities. We
use this dataset to build the network of co-location of amenities, or Amenity
Space, by first introducing a clustering algorithm to identify neighborhoods,
and then using the identified neighborhoods to map the probability that two
amenities will be co-located in one of them. Finally, we use the Amenity Space
to build a recommender system that identifies the amenities that are missing in
a neighborhood given its current pattern of specialization. This opens the door
for the construction of amenity recommendation algorithms that can be used to
evaluate neighborhoods and inform their improvement and development.
| new_dataset | 0.938969 |
1603.09687 | Claudio Gennaro | Claudio Gennaro | Large Scale Deep Convolutional Neural Network Features Search with
Lucene | This paper has been withdrawn by the author due to many errors | null | null | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose an approach to index Deep Convolutional Neural
Network Features to support efficient content-based retrieval on large image
databases. To this aim, we have converted these features into a textual
form, to index them into an inverted index by means of Lucene. In this way, we
were able to set up a robust retrieval system that combines full-text search
with content-based image retrieval capabilities. We evaluated different
strategies of textual representation in order to optimize the index occupation
and the query response time. In order to show that our approach is able to
handle large datasets, we have developed a web-based prototype that provides an
interface for combined textual and visual searching into a dataset of about 100
million images.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 17:11:43 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2016 09:43:48 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2016 15:02:51 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Jul 2016 09:29:57 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Gennaro",
"Claudio",
""
]
] | TITLE: Large Scale Deep Convolutional Neural Network Features Search with
Lucene
ABSTRACT: In this work, we propose an approach to index Deep Convolutional Neural
Network Features to support efficient content-based retrieval on large image
databases. To this aim, we have converted these features into a textual
form, to index them into an inverted index by means of Lucene. In this way, we
were able to set up a robust retrieval system that combines full-text search
with content-based image retrieval capabilities. We evaluated different
strategies of textual representation in order to optimize the index occupation
and the query response time. In order to show that our approach is able to
handle large datasets, we have developed a web-based prototype that provides an
interface for combined textual and visual searching into a dataset of about 100
million images.
| no_new_dataset | 0.946051 |
1604.00790 | Cheng Wang | Cheng Wang, Haojin Yang, Christian Bartz, Christoph Meinel | Image Captioning with Deep Bidirectional LSTMs | accepted by ACMMM 2016 as full paper and oral presentation | null | null | null | cs.CV cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents an end-to-end trainable deep bidirectional LSTM
(Long-Short Term Memory) model for image captioning. Our model builds on a deep
convolutional neural network (CNN) and two separate LSTM networks. It is
capable of learning long term visual-language interactions by making use of
history and future context information at high level semantic space. Two novel
deep bidirectional variant models, in which we increase the depth of
nonlinearity transition in different ways, are proposed to learn hierarchical
visual-language embeddings. Data augmentation techniques such as multi-crop,
multi-scale and vertical mirror are proposed to prevent overfitting in training
deep models. We visualize the evolution of bidirectional LSTM internal states
over time and qualitatively analyze how our models "translate" image to
sentence. Our proposed models are evaluated on caption generation and
image-sentence retrieval tasks with three benchmark datasets: Flickr8K,
Flickr30K and MSCOCO datasets. We demonstrate that bidirectional LSTM models
achieve highly competitive performance to the state-of-the-art results on
caption generation even without integrating additional mechanism (e.g. object
detection, attention model etc.) and significantly outperform recent methods on
retrieval task.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 09:43:04 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Jul 2016 07:45:25 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Jul 2016 14:19:37 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Wang",
"Cheng",
""
],
[
"Yang",
"Haojin",
""
],
[
"Bartz",
"Christian",
""
],
[
"Meinel",
"Christoph",
""
]
] | TITLE: Image Captioning with Deep Bidirectional LSTMs
ABSTRACT: This work presents an end-to-end trainable deep bidirectional LSTM
(Long-Short Term Memory) model for image captioning. Our model builds on a deep
convolutional neural network (CNN) and two separate LSTM networks. It is
capable of learning long term visual-language interactions by making use of
history and future context information at high level semantic space. Two novel
deep bidirectional variant models, in which we increase the depth of
nonlinearity transition in different ways, are proposed to learn hierarchical
visual-language embeddings. Data augmentation techniques such as multi-crop,
multi-scale and vertical mirror are proposed to prevent overfitting in training
deep models. We visualize the evolution of bidirectional LSTM internal states
over time and qualitatively analyze how our models "translate" image to
sentence. Our proposed models are evaluated on caption generation and
image-sentence retrieval tasks with three benchmark datasets: Flickr8K,
Flickr30K and MSCOCO datasets. We demonstrate that bidirectional LSTM models
achieve highly competitive performance to the state-of-the-art results on
caption generation even without integrating additional mechanism (e.g. object
detection, attention model etc.) and significantly outperform recent methods on
retrieval task.
| no_new_dataset | 0.946597 |
1607.05749 | Md Mansurul Bhuiyan | Mansurul Bhuiyan and Mohammad Al Hasan | PRIIME: A Generic Framework for Interactive Personalized Interesting
Pattern Discovery | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The traditional frequent pattern mining algorithms generate an exponentially
large number of patterns, of which a substantial proportion are of little
significance for many data analysis endeavors. Discovery of a small number of
personalized interesting patterns from the large output set according to a
particular user's interest is an important as well as challenging task.
Existing works on pattern summarization do not solve this problem from the
personalization viewpoint. In this work, we propose an interactive pattern
discovery framework named PRIIME which identifies a set of interesting patterns
for a specific user without requiring any prior input on the interestingness
measure of patterns from the user. The proposed framework is generic to support
discovery of the interesting set, sequence and graph type patterns. We develop
a softmax classification based iterative learning algorithm that uses a limited
amount of interactive feedback from the user to learn her interestingness
profile, and use this profile for pattern recommendation. To handle sequence
and graph type patterns PRIIME adopts a neural net (NN) based unsupervised
feature construction approach. We also develop a strategy that combines
exploration and exploitation to select patterns for feedback. We show
experimental results on several real-life datasets to validate the performance
of the proposed method. We also compare with the existing methods of
interactive pattern discovery to show that our method is substantially superior
in performance. To portray the applicability of the framework, we present a
case study from the real-estate domain.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 20:21:43 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Bhuiyan",
"Mansurul",
""
],
[
"Hasan",
"Mohammad Al",
""
]
] | TITLE: PRIIME: A Generic Framework for Interactive Personalized Interesting
Pattern Discovery
ABSTRACT: The traditional frequent pattern mining algorithms generate an exponentially
large number of patterns, of which a substantial proportion are of little
significance for many data analysis endeavors. Discovery of a small number of
personalized interesting patterns from the large output set according to a
particular user's interest is an important as well as challenging task.
Existing works on pattern summarization do not solve this problem from the
personalization viewpoint. In this work, we propose an interactive pattern
discovery framework named PRIIME which identifies a set of interesting patterns
for a specific user without requiring any prior input on the interestingness
measure of patterns from the user. The proposed framework is generic to support
discovery of the interesting set, sequence and graph type patterns. We develop
a softmax classification based iterative learning algorithm that uses a limited
amount of interactive feedback from the user to learn her interestingness
profile, and use this profile for pattern recommendation. To handle sequence
and graph type patterns PRIIME adopts a neural net (NN) based unsupervised
feature construction approach. We also develop a strategy that combines
exploration and exploitation to select patterns for feedback. We show
experimental results on several real-life datasets to validate the performance
of the proposed method. We also compare with the existing methods of
interactive pattern discovery to show that our method is substantially superior
in performance. To portray the applicability of the framework, we present a
case study from the real-estate domain.
| no_new_dataset | 0.948106 |
1607.05765 | Anurag Kumar | Anurag Kumar, Bhiksha Raj | Features and Kernels for Audio Event Recognition | 5 pages | null | null | null | cs.SD cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most important problems in audio event detection research is
the absence of benchmark results for comparison with any proposed method. Different
works consider different sets of events and datasets, which makes it difficult
to comprehensively compare any novel method with an existing one. In this paper
we propose to establish results for audio event recognition on two recent
publicly-available datasets. In particular we use Gaussian Mixture model based
feature representation and combine them with linear as well as non-linear
kernel Support Vector Machines.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 21:29:03 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Kumar",
"Anurag",
""
],
[
"Raj",
"Bhiksha",
""
]
] | TITLE: Features and Kernels for Audio Event Recognition
ABSTRACT: One of the most important problems in audio event detection research is
the absence of benchmark results for comparison with any proposed method. Different
works consider different sets of events and datasets, which makes it difficult
to comprehensively compare any novel method with an existing one. In this paper
we propose to establish results for audio event recognition on two recent
publicly-available datasets. In particular we use Gaussian Mixture model based
feature representation and combine them with linear as well as non-linear
kernel Support Vector Machines.
| no_new_dataset | 0.953535 |
1607.05781 | Guanghan Ning | Guanghan Ning, Zhi Zhang, Chen Huang, Zhihai He, Xiaobo Ren, Haohong
Wang | Spatially Supervised Recurrent Convolutional Neural Networks for Visual
Object Tracking | 10 pages, 9 figures, conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop a new approach of spatially supervised recurrent
convolutional neural networks for visual object tracking. Our recurrent
convolutional network exploits the history of locations as well as the
distinctive visual features learned by the deep neural networks. Inspired by
recent bounding box regression methods for object detection, we study the
regression capability of Long Short-Term Memory (LSTM) in the temporal domain,
and propose to concatenate high-level visual features produced by convolutional
networks with region information. In contrast to existing deep learning based
trackers that use binary classification for region candidates, we use
regression for direct prediction of the tracking locations both at the
convolutional layer and at the recurrent unit. Our extensive experimental
results and performance comparison with state-of-the-art tracking methods on
challenging benchmark video tracking datasets shows that our tracker is more
accurate and robust while maintaining low computational cost. For most test
video sequences, our method achieves the best tracking performance, often
outperforms the second best by a large margin.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 23:27:56 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Ning",
"Guanghan",
""
],
[
"Zhang",
"Zhi",
""
],
[
"Huang",
"Chen",
""
],
[
"He",
"Zhihai",
""
],
[
"Ren",
"Xiaobo",
""
],
[
"Wang",
"Haohong",
""
]
] | TITLE: Spatially Supervised Recurrent Convolutional Neural Networks for Visual
Object Tracking
ABSTRACT: In this paper, we develop a new approach of spatially supervised recurrent
convolutional neural networks for visual object tracking. Our recurrent
convolutional network exploits the history of locations as well as the
distinctive visual features learned by the deep neural networks. Inspired by
recent bounding box regression methods for object detection, we study the
regression capability of Long Short-Term Memory (LSTM) in the temporal domain,
and propose to concatenate high-level visual features produced by convolutional
networks with region information. In contrast to existing deep learning based
trackers that use binary classification for region candidates, we use
regression for direct prediction of the tracking locations both at the
convolutional layer and at the recurrent unit. Our extensive experimental
results and performance comparison with state-of-the-art tracking methods on
challenging benchmark video tracking datasets shows that our tracker is more
accurate and robust while maintaining low computational cost. For most test
video sequences, our method achieves the best tracking performance, often
outperforms the second best by a large margin.
| no_new_dataset | 0.949106 |
1607.05809 | Kun Xiong | Kun Xiong, Anqi Cui, Zefeng Zhang, Ming Li | Neural Contextual Conversation Learning with Labeled Question-Answering
Pairs | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural conversational models tend to produce generic or safe responses in
different contexts, e.g., reply \textit{"Of course"} to narrative statements or
\textit{"I don't know"} to questions. In this paper, we propose an end-to-end
approach to avoid such problem in neural generative models. Additional memory
mechanisms have been introduced to standard sequence-to-sequence (seq2seq)
models, so that context can be considered while generating sentences. Three
seq2seq models, which memorize a fix-sized contextual vector from hidden input,
hidden input/output and a gated contextual attention structure respectively,
have been trained and tested on a dataset of labeled question-answering pairs
in Chinese. The model with contextual attention outperforms others including
the state-of-the-art seq2seq models on perplexity test. The novel contextual
model generates diverse and robust responses, and is able to carry out
conversations on a wide range of topics appropriately.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 03:25:31 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Xiong",
"Kun",
""
],
[
"Cui",
"Anqi",
""
],
[
"Zhang",
"Zefeng",
""
],
[
"Li",
"Ming",
""
]
] | TITLE: Neural Contextual Conversation Learning with Labeled Question-Answering
Pairs
ABSTRACT: Neural conversational models tend to produce generic or safe responses in
different contexts, e.g., reply \textit{"Of course"} to narrative statements or
\textit{"I don't know"} to questions. In this paper, we propose an end-to-end
approach to avoid such problem in neural generative models. Additional memory
mechanisms have been introduced to standard sequence-to-sequence (seq2seq)
models, so that context can be considered while generating sentences. Three
seq2seq models, which memorize a fix-sized contextual vector from hidden input,
hidden input/output and a gated contextual attention structure respectively,
have been trained and tested on a dataset of labeled question-answering pairs
in Chinese. The model with contextual attention outperforms others including
the state-of-the-art seq2seq models on perplexity test. The novel contextual
model generates diverse and robust responses, and is able to carry out
conversations on a wide range of topics appropriately.
| no_new_dataset | 0.905865 |
1607.05909 | Uwe Aickelin | Jiangang Ma, Le Sun, Hua Wang, Yanchun Zhang, Uwe Aickelin | Supervised Anomaly Detection in Uncertain Pseudoperiodic Data Streams | ACM Transactions on Internet Technology (TOIT), 16 (1 (4)), 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Uncertain data streams have been widely generated in many Web applications.
The uncertainty in data streams makes anomaly detection from sensor data
streams far more challenging. In this paper, we present a novel framework that
supports anomaly detection in uncertain data streams. The proposed framework
adopts an efficient uncertainty pre-processing procedure to identify and
eliminate uncertainties in data streams. Based on the corrected data streams,
we develop effective period pattern recognition and feature extraction
techniques to improve the computational efficiency. We use classification
methods for anomaly detection in the corrected data stream. We also empirically
show that the proposed approach shows a high accuracy of anomaly detection on a
number of real datasets.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 10:52:17 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Ma",
"Jiangang",
""
],
[
"Sun",
"Le",
""
],
[
"Wang",
"Hua",
""
],
[
"Zhang",
"Yanchun",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: Supervised Anomaly Detection in Uncertain Pseudoperiodic Data Streams
ABSTRACT: Uncertain data streams have been widely generated in many Web applications.
The uncertainty in data streams makes anomaly detection from sensor data
streams far more challenging. In this paper, we present a novel framework that
supports anomaly detection in uncertain data streams. The proposed framework
adopts an efficient uncertainty pre-processing procedure to identify and
eliminate uncertainties in data streams. Based on the corrected data streams,
we develop effective period pattern recognition and feature extraction
techniques to improve the computational efficiency. We use classification
methods for anomaly detection in the corrected data stream. We also empirically
show that the proposed approach shows a high accuracy of anomaly detection on a
number of real datasets.
| no_new_dataset | 0.9549 |
1607.05910 | Chunhua Shen | Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, Anton van
den Hengel | Visual Question Answering: A Survey of Methods and Datasets | 25 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Question Answering (VQA) is a challenging task that has received
increasing attention from both the computer vision and the natural language
processing communities. Given an image and a question in natural language, it
requires reasoning over visual elements of the image and general knowledge to
infer the correct answer. In the first part of this survey, we examine the
state of the art by comparing modern approaches to the problem. We classify
methods by their mechanism to connect the visual and textual modalities. In
particular, we examine the common approach of combining convolutional and
recurrent neural networks to map images and questions to a common feature
space. We also discuss memory-augmented and modular architectures that
interface with structured knowledge bases. In the second part of this survey,
we review the datasets available for training and evaluating VQA systems. The
various datasets contain questions at different levels of complexity, which
require different capabilities and types of reasoning. We examine in depth the
question/answer pairs from the Visual Genome project, and evaluate the
relevance of the structured annotations of images with scene graphs for VQA.
Finally, we discuss promising future directions for the field, in particular
the connection to structured knowledge bases and the use of natural language
processing models.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 10:53:29 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Wu",
"Qi",
""
],
[
"Teney",
"Damien",
""
],
[
"Wang",
"Peng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Dick",
"Anthony",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Visual Question Answering: A Survey of Methods and Datasets
ABSTRACT: Visual Question Answering (VQA) is a challenging task that has received
increasing attention from both the computer vision and the natural language
processing communities. Given an image and a question in natural language, it
requires reasoning over visual elements of the image and general knowledge to
infer the correct answer. In the first part of this survey, we examine the
state of the art by comparing modern approaches to the problem. We classify
methods by their mechanism to connect the visual and textual modalities. In
particular, we examine the common approach of combining convolutional and
recurrent neural networks to map images and questions to a common feature
space. We also discuss memory-augmented and modular architectures that
interface with structured knowledge bases. In the second part of this survey,
we review the datasets available for training and evaluating VQA systems. The
various datasets contain questions at different levels of complexity, which
require different capabilities and types of reasoning. We examine in depth the
question/answer pairs from the Visual Genome project, and evaluate the
relevance of the structured annotations of images with scene graphs for VQA.
Finally, we discuss promising future directions for the field, in particular
the connection to structured knowledge bases and the use of natural language
processing models.
| no_new_dataset | 0.938294 |
1607.05969 | Yun Gu | Yun Gu, Guang-Zhong Yang, Jie Yang and Kun Sun | 4D Cardiac Ultrasound Standard Plane Location by Spatial-Temporal
Correlation | submitted to MICCAI 2016 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Echocardiography plays an important part in diagnostic aid in cardiac
diseases. A critical step in echocardiography-aided diagnosis is to extract the
standard planes since they tend to provide promising views to present different
structures that are beneficial to diagnosis. To this end, this paper proposes a
spatial-temporal embedding framework to extract the standard view planes from
4D STIC (spatial-temporal image correlation) volumes. The proposed method
comprises three stages: frame smoothing, spatial-temporal embedding and
final classification. In the first stage, an L0 smoothing filter is used to
preprocess the frames, removing noise and preserving boundaries. Then a
compact representation is learned by embedding spatial and temporal features
into a latent space in a supervised scheme, considering both standard plane
information and the diagnosis result. In the last stage, the learned features
are fed into a support vector machine to identify the standard plane. We evaluate the
proposed method on a 4D STIC volume dataset with 92 normal cases and 93
abnormal cases in three standard planes. It demonstrates that our method
outperforms the baselines in both classification accuracy and computational
efficiency.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 14:19:03 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Gu",
"Yun",
""
],
[
"Yang",
"Guang-Zhong",
""
],
[
"Yang",
"Jie",
""
],
[
"Sun",
"Kun",
""
]
] | TITLE: 4D Cardiac Ultrasound Standard Plane Location by Spatial-Temporal
Correlation
ABSTRACT: Echocardiography plays an important part in diagnostic aid in cardiac
diseases. A critical step in echocardiography-aided diagnosis is to extract the
standard planes since they tend to provide promising views to present different
structures that are beneficial to diagnosis. To this end, this paper proposes a
spatial-temporal embedding framework to extract the standard view planes from
4D STIC (spatial-temporal image correlation) volumes. The proposed method
comprises three stages: frame smoothing, spatial-temporal embedding and
final classification. In the first stage, an L0 smoothing filter is used to
preprocess the frames, removing noise and preserving boundaries. Then a
compact representation is learned by embedding spatial and temporal features
into a latent space in a supervised scheme, considering both standard plane
information and the diagnosis result. In the last stage, the learned features
are fed into a support vector machine to identify the standard plane. We evaluate the
proposed method on a 4D STIC volume dataset with 92 normal cases and 93
abnormal cases in three standard planes. It demonstrates that our method
outperforms the baselines in both classification accuracy and computational
efficiency.
| no_new_dataset | 0.943608 |
1607.05975 | Furqan Khan | Furqan M. Khan and Francois Bremond | Person Re-identification for Real-world Surveillance Systems | Person re-identification, Visual surveillance | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Appearance based person re-identification in a real-world video surveillance
system with non-overlapping camera views is a challenging problem for many
reasons. Current state-of-the-art methods often address the problem by relying
on supervised learning of similarity metrics or ranking functions to implicitly
model appearance transformation between cameras for each camera pair, or group,
in the system. This requires considerable human effort to annotate data.
Furthermore, the learned models are camera specific and not transferable from
one set of cameras to another. Therefore, the annotation process is required
after every network expansion or camera replacement, which strongly limits
their applicability. Alternatively, we propose a novel modeling approach to
harness complementary appearance information without supervised learning that
significantly outperforms current state-of-the-art unsupervised methods on
multiple benchmark datasets.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 14:34:23 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Khan",
"Furqan M.",
""
],
[
"Bremond",
"Francois",
""
]
] | TITLE: Person Re-identification for Real-world Surveillance Systems
ABSTRACT: Appearance based person re-identification in a real-world video surveillance
system with non-overlapping camera views is a challenging problem for many
reasons. Current state-of-the-art methods often address the problem by relying
on supervised learning of similarity metrics or ranking functions to implicitly
model appearance transformation between cameras for each camera pair, or group,
in the system. This requires considerable human effort to annotate data.
Furthermore, the learned models are camera specific and not transferable from
one set of cameras to another. Therefore, the annotation process is required
after every network expansion or camera replacement, which strongly limits
their applicability. Alternatively, we propose a novel modeling approach to
harness complementary appearance information without supervised learning that
significantly outperforms current state-of-the-art unsupervised methods on
multiple benchmark datasets.
| no_new_dataset | 0.952794 |
1607.06038 | Wadim Kehl | Wadim Kehl, Fausto Milletari, Federico Tombari, Slobodan Ilic, Nassir
Navab | Deep Learning of Local RGB-D Patches for 3D Object Detection and 6D Pose
Estimation | To appear at ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a 3D object detection method that uses regressed descriptors of
locally-sampled RGB-D patches for 6D vote casting. For regression, we employ a
convolutional auto-encoder that has been trained on a large collection of
random local patches. During testing, scene patch descriptors are matched
against a database of synthetic model view patches and cast 6D object votes
which are subsequently filtered to refined hypotheses. We evaluate on three
datasets to show that our method generalizes well to previously unseen input
data, delivers robust detection results that compete with and surpass the
state-of-the-art while being scalable in the number of objects.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 17:38:15 GMT"
}
] | 2016-07-21T00:00:00 | [
[
"Kehl",
"Wadim",
""
],
[
"Milletari",
"Fausto",
""
],
[
"Tombari",
"Federico",
""
],
[
"Ilic",
"Slobodan",
""
],
[
"Navab",
"Nassir",
""
]
] | TITLE: Deep Learning of Local RGB-D Patches for 3D Object Detection and 6D Pose
Estimation
ABSTRACT: We present a 3D object detection method that uses regressed descriptors of
locally-sampled RGB-D patches for 6D vote casting. For regression, we employ a
convolutional auto-encoder that has been trained on a large collection of
random local patches. During testing, scene patch descriptors are matched
against a database of synthetic model view patches and cast 6D object votes
which are subsequently filtered to refined hypotheses. We evaluate on three
datasets to show that our method generalizes well to previously unseen input
data, delivers robust detection results that compete with and surpass the
state-of-the-art while being scalable in the number of objects.
| no_new_dataset | 0.950503 |
1511.08308 | Eric Nichols | Jason P.C. Chiu and Eric Nichols | Named Entity Recognition with Bidirectional LSTM-CNNs | To appear in Transactions of the Association for Computational
Linguistics | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Named entity recognition is a challenging task that has traditionally
required large amounts of knowledge in the form of feature engineering and
lexicons to achieve high performance. In this paper, we present a novel neural
network architecture that automatically detects word- and character-level
features using a hybrid bidirectional LSTM and CNN architecture, eliminating
the need for most feature engineering. We also propose a novel method of
encoding partial lexicon matches in neural networks and compare it to existing
approaches. Extensive evaluation shows that, given only tokenized text and
publicly available word embeddings, our system is competitive on the CoNLL-2003
dataset and surpasses the previously reported state of the art performance on
the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed
from publicly-available sources, we establish new state of the art performance
with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing
systems that employ heavy feature engineering, proprietary lexicons, and rich
entity linking information.
| [
{
"version": "v1",
"created": "Thu, 26 Nov 2015 07:40:33 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2016 09:23:52 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Mar 2016 06:25:57 GMT"
},
{
"version": "v4",
"created": "Thu, 16 Jun 2016 06:15:49 GMT"
},
{
"version": "v5",
"created": "Tue, 19 Jul 2016 05:02:51 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Chiu",
"Jason P. C.",
""
],
[
"Nichols",
"Eric",
""
]
] | TITLE: Named Entity Recognition with Bidirectional LSTM-CNNs
ABSTRACT: Named entity recognition is a challenging task that has traditionally
required large amounts of knowledge in the form of feature engineering and
lexicons to achieve high performance. In this paper, we present a novel neural
network architecture that automatically detects word- and character-level
features using a hybrid bidirectional LSTM and CNN architecture, eliminating
the need for most feature engineering. We also propose a novel method of
encoding partial lexicon matches in neural networks and compare it to existing
approaches. Extensive evaluation shows that, given only tokenized text and
publicly available word embeddings, our system is competitive on the CoNLL-2003
dataset and surpasses the previously reported state of the art performance on
the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed
from publicly-available sources, we establish new state of the art performance
with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing
systems that employ heavy feature engineering, proprietary lexicons, and rich
entity linking information.
| no_new_dataset | 0.950549 |
1603.00806 | Florian Strub | Florian Strub (SEQUEL, CRIStAL), Jeremie Mary (CRIStAL, SEQUEL),
Romaric Gaudel (LIFL) | Hybrid Collaborative Filtering with Autoencoders | null | null | null | null | cs.IR cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative Filtering aims at exploiting the feedback of users to provide
personalised recommendations. Such algorithms look for latent variables in a
large sparse matrix of ratings. They can be enhanced by adding side information
to tackle the well-known cold start problem. While Neural Networks have
tremendous success in image and speech recognition, they have received less
attention in Collaborative Filtering. This is all the more surprising given that
Neural Networks are able to discover latent variables in large and
heterogeneous datasets. In this paper, we introduce a Collaborative Filtering
Neural network architecture aka CFN which computes a non-linear Matrix
Factorization from sparse rating inputs and side information. We show
experimentally on the MovieLens and Douban datasets that CFN outperforms the
state of the art and benefits from side information. We provide an
implementation of the algorithm as a reusable plugin for Torch, a popular
Neural Network framework.
| [
{
"version": "v1",
"created": "Wed, 2 Mar 2016 17:48:25 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Mar 2016 19:18:09 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Jul 2016 08:10:08 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Strub",
"Florian",
"",
"SEQUEL, CRIStAL"
],
[
"Mary",
"Jeremie",
"",
"CRIStAL, SEQUEL"
],
[
"Gaudel",
"Romaric",
"",
"LIFL"
]
] | TITLE: Hybrid Collaborative Filtering with Autoencoders
ABSTRACT: Collaborative Filtering aims at exploiting the feedback of users to provide
personalised recommendations. Such algorithms look for latent variables in a
large sparse matrix of ratings. They can be enhanced by adding side information
to tackle the well-known cold start problem. While Neural Networks have
tremendous success in image and speech recognition, they have received less
attention in Collaborative Filtering. This is all the more surprising given that
Neural Networks are able to discover latent variables in large and
heterogeneous datasets. In this paper, we introduce a Collaborative Filtering
Neural network architecture aka CFN which computes a non-linear Matrix
Factorization from sparse rating inputs and side information. We show
experimentally on the MovieLens and Douban datasets that CFN outperforms the
state of the art and benefits from side information. We provide an
implementation of the algorithm as a reusable plugin for Torch, a popular
Neural Network framework.
| no_new_dataset | 0.944638 |
1603.05201 | Wenling Shang | Wenling Shang, Kihyuk Sohn, Diogo Almeida, Honglak Lee | Understanding and Improving Convolutional Neural Networks via
Concatenated Rectified Linear Units | ICML 2016 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, convolutional neural networks (CNNs) have been used as a powerful
tool to solve many problems of machine learning and computer vision. In this
paper, we aim to provide insight on the property of convolutional neural
networks, as well as a generic method to improve the performance of many CNN
architectures. Specifically, we first examine existing CNN models and observe
an intriguing property that the filters in the lower layers form pairs (i.e.,
filters with opposite phase). Inspired by our observation, we propose a novel,
simple yet effective activation scheme called concatenated ReLU (CRelu) and
theoretically analyze its reconstruction property in CNNs. We integrate CRelu
into several state-of-the-art CNN architectures and demonstrate improvement in
their recognition performance on CIFAR-10/100 and ImageNet datasets with fewer
trainable parameters. Our results suggest that better understanding of the
properties of CNNs can lead to significant performance improvement with a
simple modification.
| [
{
"version": "v1",
"created": "Wed, 16 Mar 2016 18:17:36 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2016 05:18:36 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Shang",
"Wenling",
""
],
[
"Sohn",
"Kihyuk",
""
],
[
"Almeida",
"Diogo",
""
],
[
"Lee",
"Honglak",
""
]
] | TITLE: Understanding and Improving Convolutional Neural Networks via
Concatenated Rectified Linear Units
ABSTRACT: Recently, convolutional neural networks (CNNs) have been used as a powerful
tool to solve many problems of machine learning and computer vision. In this
paper, we aim to provide insight on the property of convolutional neural
networks, as well as a generic method to improve the performance of many CNN
architectures. Specifically, we first examine existing CNN models and observe
an intriguing property that the filters in the lower layers form pairs (i.e.,
filters with opposite phase). Inspired by our observation, we propose a novel,
simple yet effective activation scheme called concatenated ReLU (CRelu) and
theoretically analyze its reconstruction property in CNNs. We integrate CRelu
into several state-of-the-art CNN architectures and demonstrate improvement in
their recognition performance on CIFAR-10/100 and ImageNet datasets with fewer
trainable parameters. Our results suggest that better understanding of the
properties of CNNs can lead to significant performance improvement with a
simple modification.
| no_new_dataset | 0.951233 |
1604.02872 | Eduardo G. Altmann | J. C. Leitao, J.M. Miotto, M. Gerlach, and E. G. Altmann | Is this scaling nonlinear? | 11 pages, 3 figures | R. Soc. open sci. 3: 150649 (2016) | 10.1098/rsos.150649 | null | physics.soc-ph cond-mat.stat-mech physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most celebrated findings in complex systems in the last decade is
that different indexes y (e.g., patents) scale nonlinearly with the
population x of the cities in which they appear, i.e., $y\sim x^\beta, \beta
\neq 1$. More recently, the generality of this finding has been questioned in
studies using new databases and different definitions of city boundaries. In
this paper we investigate the existence of nonlinear scaling using a
probabilistic framework in which fluctuations are accounted for explicitly. In
particular, we show that this allows not only to (a) estimate $\beta$ and
confidence intervals, but also to (b) quantify the evidence in favor of $\beta
\neq 1$ and (c) test the hypothesis that the observations are compatible with
the nonlinear scaling. We employ this framework to compare $5$ different models
to $15$ different datasets and we find that the answers to points (a)-(c)
crucially depend on the fluctuations contained in the data, on how they are
modeled, and on the fact that the city sizes are heavy-tailed distributed.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 10:29:46 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Leitao",
"J. C.",
""
],
[
"Miotto",
"J. M.",
""
],
[
"Gerlach",
"M.",
""
],
[
"Altmann",
"E. G.",
""
]
] | TITLE: Is this scaling nonlinear?
ABSTRACT: One of the most celebrated findings in complex systems in the last decade is
that different indexes y (e.g., patents) scale nonlinearly with the
population x of the cities in which they appear, i.e., $y\sim x^\beta, \beta
\neq 1$. More recently, the generality of this finding has been questioned in
studies using new databases and different definitions of city boundaries. In
this paper we investigate the existence of nonlinear scaling using a
probabilistic framework in which fluctuations are accounted for explicitly. In
particular, we show that this allows not only to (a) estimate $\beta$ and
confidence intervals, but also to (b) quantify the evidence in favor of $\beta
\neq 1$ and (c) test the hypothesis that the observations are compatible with
the nonlinear scaling. We employ this framework to compare $5$ different models
to $15$ different datasets and we find that the answers to points (a)-(c)
crucially depend on the fluctuations contained in the data, on how they are
modeled, and on the fact that the city sizes are heavy-tailed distributed.
| no_new_dataset | 0.945901 |
1604.05978 | Decebal Constantin Mocanu | Decebal Constantin Mocanu, Elena Mocanu, Phuong H. Nguyen, Madeleine
Gibescu and Antonio Liotta | A topological insight into restricted Boltzmann machines | http://link.springer.com/article/10.1007/s10994-016-5570-z, Machine
Learning, issn=1573-0565, 2016 | null | 10.1007/s10994-016-5570-z | null | cs.NE cs.AI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Restricted Boltzmann Machines (RBMs) and models derived from them have been
successfully used as basic building blocks in deep artificial neural networks
for automatic features extraction, unsupervised weights initialization, but
also as density estimators. Thus, their generative and discriminative
capabilities, but also their computational time are instrumental to a wide
range of applications. Our main contribution is to look at RBMs from a
topological perspective, bringing insights from network science. Firstly, here
we show that RBMs and Gaussian RBMs (GRBMs) are bipartite graphs which
naturally have a small-world topology. Secondly, we demonstrate both on
synthetic and real-world datasets that by constraining RBMs and GRBMs to a
scale-free topology (while still considering local neighborhoods and data
distribution), we reduce the number of weights that need to be computed by a
few orders of magnitude, at virtually no loss in generative performance.
Thirdly, we show that, for a fixed number of weights, our proposed sparse
models (which by design have a higher number of hidden neurons) achieve better
generative capabilities than standard fully connected RBMs and GRBMs (which by
design have a smaller number of hidden neurons), at no additional computational
costs.
| [
{
"version": "v1",
"created": "Wed, 20 Apr 2016 14:35:12 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2016 20:14:41 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Mocanu",
"Decebal Constantin",
""
],
[
"Mocanu",
"Elena",
""
],
[
"Nguyen",
"Phuong H.",
""
],
[
"Gibescu",
"Madeleine",
""
],
[
"Liotta",
"Antonio",
""
]
] | TITLE: A topological insight into restricted Boltzmann machines
ABSTRACT: Restricted Boltzmann Machines (RBMs) and models derived from them have been
successfully used as basic building blocks in deep artificial neural networks
for automatic features extraction, unsupervised weights initialization, but
also as density estimators. Thus, their generative and discriminative
capabilities, but also their computational time are instrumental to a wide
range of applications. Our main contribution is to look at RBMs from a
topological perspective, bringing insights from network science. Firstly, here
we show that RBMs and Gaussian RBMs (GRBMs) are bipartite graphs which
naturally have a small-world topology. Secondly, we demonstrate both on
synthetic and real-world datasets that by constraining RBMs and GRBMs to a
scale-free topology (while still considering local neighborhoods and data
distribution), we reduce the number of weights that need to be computed by a
few orders of magnitude, at virtually no loss in generative performance.
Thirdly, we show that, for a fixed number of weights, our proposed sparse
models (which by design have a higher number of hidden neurons) achieve better
generative capabilities than standard fully connected RBMs and GRBMs (which by
design have a smaller number of hidden neurons), at no additional computational
costs.
| no_new_dataset | 0.949153 |
1607.01437 | Luwei Yang | Luwei Yang, Ligen Zhu, Yichen Wei, Shuang Liang, Ping Tan | Attribute Recognition from Adaptive Parts | 11 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous part-based attribute recognition approaches perform part detection
and attribute recognition in separate steps. The parts are not optimized for
attribute recognition and therefore could be sub-optimal. We present an
end-to-end deep learning approach to overcome the limitation. It generates
object parts from key points and perform attribute recognition accordingly,
allowing adaptive spatial transform of the parts. Both key point estimation and
attribute recognition are learnt jointly in a multi-task setting. Extensive
experiments on two datasets verify the efficacy of proposed end-to-end
approach.
| [
{
"version": "v1",
"created": "Tue, 5 Jul 2016 23:29:06 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2016 21:08:19 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Yang",
"Luwei",
""
],
[
"Zhu",
"Ligen",
""
],
[
"Wei",
"Yichen",
""
],
[
"Liang",
"Shuang",
""
],
[
"Tan",
"Ping",
""
]
] | TITLE: Attribute Recognition from Adaptive Parts
ABSTRACT: Previous part-based attribute recognition approaches perform part detection
and attribute recognition in separate steps. The parts are not optimized for
attribute recognition and therefore could be sub-optimal. We present an
end-to-end deep learning approach to overcome the limitation. It generates
object parts from key points and perform attribute recognition accordingly,
allowing adaptive spatial transform of the parts. Both key point estimation and
attribute recognition are learnt jointly in a multi-task setting. Extensive
experiments on two datasets verify the efficacy of proposed end-to-end
approach.
| no_new_dataset | 0.941223 |
1607.04648 | Subarna Tripathi | Subarna Tripathi and Zachary C. Lipton and Serge Belongie and Truong
Nguyen | Context Matters: Refining Object Detection in Video with Recurrent
Neural Networks | To appear in BMVC 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given the vast amounts of video available online, and recent breakthroughs in
object detection with static images, object detection in video offers a
promising new frontier. However, motion blur and compression artifacts cause
substantial frame-level variability, even in videos that appear smooth to the
eye. Additionally, video datasets tend to have sparsely annotated frames. We
present a new framework for improving object detection in videos that captures
temporal context and encourages consistency of predictions. First, we train a
pseudo-labeler, that is, a domain-adapted convolutional neural network for
object detection. The pseudo-labeler is first trained individually on the
subset of labeled frames, and then subsequently applied to all frames. Then we
train a recurrent neural network that takes as input sequences of
pseudo-labeled frames and optimizes an objective that encourages both accuracy
on the target frame and consistency across consecutive frames. The approach
incorporates strong supervision of target frames, weak-supervision on context
frames, and regularization via a smoothness penalty. Our approach achieves mean
Average Precision (mAP) of 68.73, an improvement of 7.1 over the strongest
image-based baselines for the Youtube-Video Objects dataset. Our experiments
demonstrate that neighboring frames can provide valuable information, even
absent labels.
| [
{
"version": "v1",
"created": "Fri, 15 Jul 2016 20:02:25 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2016 03:00:35 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Tripathi",
"Subarna",
""
],
[
"Lipton",
"Zachary C.",
""
],
[
"Belongie",
"Serge",
""
],
[
"Nguyen",
"Truong",
""
]
] | TITLE: Context Matters: Refining Object Detection in Video with Recurrent
Neural Networks
ABSTRACT: Given the vast amounts of video available online, and recent breakthroughs in
object detection with static images, object detection in video offers a
promising new frontier. However, motion blur and compression artifacts cause
substantial frame-level variability, even in videos that appear smooth to the
eye. Additionally, video datasets tend to have sparsely annotated frames. We
present a new framework for improving object detection in videos that captures
temporal context and encourages consistency of predictions. First, we train a
pseudo-labeler, that is, a domain-adapted convolutional neural network for
object detection. The pseudo-labeler is first trained individually on the
subset of labeled frames, and then subsequently applied to all frames. Then we
train a recurrent neural network that takes as input sequences of
pseudo-labeled frames and optimizes an objective that encourages both accuracy
on the target frame and consistency across consecutive frames. The approach
incorporates strong supervision of target frames, weak-supervision on context
frames, and regularization via a smoothness penalty. Our approach achieves mean
Average Precision (mAP) of 68.73, an improvement of 7.1 over the strongest
image-based baselines for the Youtube-Video Objects dataset. Our experiments
demonstrate that neighboring frames can provide valuable information, even
absent labels.
| no_new_dataset | 0.948106 |
1607.05396 | Thanh-Toan Do | Thanh-Toan Do, Anh-Dzung Doan, Duc-Thanh Nguyen, Ngai-Man Cheung | Binary Hashing with Semidefinite Relaxation and Augmented Lagrangian | Appearing in European Conference on Computer Vision (ECCV) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes two approaches for inferencing binary codes in two-step
(supervised, unsupervised) hashing. We first introduce an unified formulation
for both supervised and unsupervised hashing. Then, we cast the learning of one
bit as a Binary Quadratic Problem (BQP). We propose two approaches to solve
BQP. In the first approach, we relax BQP as a semidefinite programming problem
which its global optimum can be achieved. We theoretically prove that the
objective value of the binary solution achieved by this approach is well
bounded. In the second approach, we propose an augmented Lagrangian based
approach to solve BQP directly without relaxing the binary constraint.
Experimental results on three benchmark datasets show that our proposed methods
compare favorably with the state of the art.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 04:20:24 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Do",
"Thanh-Toan",
""
],
[
"Doan",
"Anh-Dzung",
""
],
[
"Nguyen",
"Duc-Thanh",
""
],
[
"Cheung",
"Ngai-Man",
""
]
] | TITLE: Binary Hashing with Semidefinite Relaxation and Augmented Lagrangian
ABSTRACT: This paper proposes two approaches for inferencing binary codes in two-step
(supervised, unsupervised) hashing. We first introduce an unified formulation
for both supervised and unsupervised hashing. Then, we cast the learning of one
bit as a Binary Quadratic Problem (BQP). We propose two approaches to solve
BQP. In the first approach, we relax BQP as a semidefinite programming problem
which its global optimum can be achieved. We theoretically prove that the
objective value of the binary solution achieved by this approach is well
bounded. In the second approach, we propose an augmented Lagrangian based
approach to solve BQP directly without relaxing the binary constraint.
Experimental results on three benchmark datasets show that our proposed methods
compare favorably with the state of the art.
| no_new_dataset | 0.950595 |
1607.05423 | Xiaojie Jin Mr. | Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, Shuicheng Yan | Training Skinny Deep Neural Networks with Iterative Hard Thresholding
Methods | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have achieved remarkable success in a wide range of
practical problems. However, due to the inherent large parameter space, deep
models are notoriously prone to overfitting and difficult to be deployed in
portable devices with limited memory. In this paper, we propose an iterative
hard thresholding (IHT) approach to train Skinny Deep Neural Networks (SDNNs).
An SDNN has much fewer parameters yet can achieve competitive or even better
performance than its full CNN counterpart. More concretely, the IHT approach
trains an SDNN through following two alternative phases: (I) perform hard
thresholding to drop connections with small activations and fine-tune the other
significant filters; (II)~re-activate the frozen connections and train the
entire network to improve its overall discriminative capability. We verify the
superiority of SDNNs in terms of efficiency and classification performance on
four benchmark object recognition datasets, including CIFAR-10, CIFAR-100,
MNIST and ImageNet. Experimental results clearly demonstrate that IHT can be
applied for training SDNN based on various CNN architectures such as NIN and
AlexNet.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 06:41:31 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Jin",
"Xiaojie",
""
],
[
"Yuan",
"Xiaotong",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Training Skinny Deep Neural Networks with Iterative Hard Thresholding
Methods
ABSTRACT: Deep neural networks have achieved remarkable success in a wide range of
practical problems. However, due to the inherent large parameter space, deep
models are notoriously prone to overfitting and difficult to be deployed in
portable devices with limited memory. In this paper, we propose an iterative
hard thresholding (IHT) approach to train Skinny Deep Neural Networks (SDNNs).
An SDNN has much fewer parameters yet can achieve competitive or even better
performance than its full CNN counterpart. More concretely, the IHT approach
trains an SDNN through following two alternative phases: (I) perform hard
thresholding to drop connections with small activations and fine-tune the other
significant filters; (II)~re-activate the frozen connections and train the
entire network to improve its overall discriminative capability. We verify the
superiority of SDNNs in terms of efficiency and classification performance on
four benchmark object recognition datasets, including CIFAR-10, CIFAR-100,
MNIST and ImageNet. Experimental results clearly demonstrate that IHT can be
applied for training SDNN based on various CNN architectures such as NIN and
AlexNet.
| no_new_dataset | 0.94801 |
1607.05529 | Haomiao Liu | Haomiao Liu, Ruiping Wang, Shiguang Shan, Xilin Chen | Dual Purpose Hashing | With supplementary materials added to the end | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have seen more and more demand for a unified framework to
address multiple realistic image retrieval tasks concerning both category and
attributes. Considering the scale of modern datasets, hashing is favorable for
its low complexity. However, most existing hashing methods are designed to
preserve one single kind of similarity, thus improper for dealing with the
different tasks simultaneously. To overcome this limitation, we propose a new
hashing method, named Dual Purpose Hashing (DPH), which jointly preserves the
category and attribute similarities by exploiting the Convolutional Neural
Network (CNN) models to hierarchically capture the correlations between
category and attributes. Since images with both category and attribute labels
are scarce, our method is designed to take the abundant partially labelled
images on the Internet as training inputs. With such a framework, the binary
codes of new-coming images can be readily obtained by quantizing the network
outputs of a binary-like layer, and the attributes can be recovered from the
codes easily. Experiments on two large-scale datasets show that our dual
purpose hash codes can achieve comparable or even better performance than those
state-of-the-art methods specifically designed for each individual retrieval
task, while being more compact than the compared methods.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 11:37:00 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Liu",
"Haomiao",
""
],
[
"Wang",
"Ruiping",
""
],
[
"Shan",
"Shiguang",
""
],
[
"Chen",
"Xilin",
""
]
] | TITLE: Dual Purpose Hashing
ABSTRACT: Recent years have seen more and more demand for a unified framework to
address multiple realistic image retrieval tasks concerning both category and
attributes. Considering the scale of modern datasets, hashing is favorable for
its low complexity. However, most existing hashing methods are designed to
preserve one single kind of similarity, thus improper for dealing with the
different tasks simultaneously. To overcome this limitation, we propose a new
hashing method, named Dual Purpose Hashing (DPH), which jointly preserves the
category and attribute similarities by exploiting the Convolutional Neural
Network (CNN) models to hierarchically capture the correlations between
category and attributes. Since images with both category and attribute labels
are scarce, our method is designed to take the abundant partially labelled
images on the Internet as training inputs. With such a framework, the binary
codes of new-coming images can be readily obtained by quantizing the network
outputs of a binary-like layer, and the attributes can be recovered from the
codes easily. Experiments on two large-scale datasets show that our dual
purpose hash codes can achieve comparable or even better performance than those
state-of-the-art methods specifically designed for each individual retrieval
task, while being more compact than the compared methods.
| no_new_dataset | 0.9455 |
1607.05620 | Alina Marcu B.Sc | Alina Elena Marcu | A Local-Global Approach to Semantic Segmentation in Aerial Images | 50 pages, 18 figures. Master's Thesis, University Politehnica of
Bucharest | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aerial images are often taken under poor lighting conditions and contain low
resolution objects, many times occluded by other objects. In this domain,
visual context could be of great help, but there are still very few papers that
consider context in aerial image understanding and still remains an open
problem in computer vision. We propose a dual-stream deep neural network that
processes information along two independent pathways. Our model learns to
combine local and global appearance in a complementary way, such that together
form a powerful classifier. We test our dual-stream network on the task of
buildings segmentation in aerial images and obtain state-of-the-art results on
the Massachusetts Buildings Dataset. We study the relative importance of local
appearance versus the larger scene, as well as their performance in combination
on three new buildings datasets. We clearly demonstrate the effectiveness of
visual context in conjunction with deep neural networks for aerial image
understanding.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 15:02:57 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Marcu",
"Alina Elena",
""
]
] | TITLE: A Local-Global Approach to Semantic Segmentation in Aerial Images
ABSTRACT: Aerial images are often taken under poor lighting conditions and contain low
resolution objects, many times occluded by other objects. In this domain,
visual context could be of great help, but there are still very few papers that
consider context in aerial image understanding and still remains an open
problem in computer vision. We propose a dual-stream deep neural network that
processes information along two independent pathways. Our model learns to
combine local and global appearance in a complementary way, such that together
form a powerful classifier. We test our dual-stream network on the task of
buildings segmentation in aerial images and obtain state-of-the-art results on
the Massachusetts Buildings Dataset. We study the relative importance of local
appearance versus the larger scene, as well as their performance in combination
on three new buildings datasets. We clearly demonstrate the effectiveness of
visual context in conjunction with deep neural networks for aerial image
understanding.
| new_dataset | 0.969237 |
1607.05691 | Francois Chollet | Fran\c{c}ois Chollet | Information-theoretical label embeddings for large-scale image
classification | null | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for training multi-label, massively multi-class image
classification models, that is faster and more accurate than supervision via a
sigmoid cross-entropy loss (logistic regression). Our method consists in
embedding high-dimensional sparse labels onto a lower-dimensional dense sphere
of unit-normed vectors, and treating the classification problem as a cosine
proximity regression problem on this sphere. We test our method on a dataset of
300 million high-resolution images with 17,000 labels, where it yields
considerably faster convergence, as well as a 7% higher mean average precision
compared to logistic regression.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 18:40:01 GMT"
}
] | 2016-07-20T00:00:00 | [
[
"Chollet",
"François",
""
]
] | TITLE: Information-theoretical label embeddings for large-scale image
classification
ABSTRACT: We present a method for training multi-label, massively multi-class image
classification models, that is faster and more accurate than supervision via a
sigmoid cross-entropy loss (logistic regression). Our method consists in
embedding high-dimensional sparse labels onto a lower-dimensional dense sphere
of unit-normed vectors, and treating the classification problem as a cosine
proximity regression problem on this sphere. We test our method on a dataset of
300 million high-resolution images with 17,000 labels, where it yields
considerably faster convergence, as well as a 7% higher mean average precision
compared to logistic regression.
| no_new_dataset | 0.948251 |
1309.5762 | Saswata Shannigrahi | Talasila Sai Deepak, Hindol Adhya, Shyamal Kejriwal, Bhanuteja
Gullapalli, Saswata Shannigrahi | A new hierarchical clustering algorithm to identify non-overlapping
like-minded communities | null | null | 10.1145/2914586.2914613 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A network has a non-overlapping community structure if the nodes of the
network can be partitioned into disjoint sets such that each node in a set is
densely connected to other nodes inside the set and sparsely connected to the
nodes out- side it. There are many metrics to validate the efficacy of such a
structure, such as clustering coefficient, betweenness, centrality, modularity
and like-mindedness. Many methods have been proposed to optimize some of these
metrics, but none of these works well on the recently introduced metric
like-mindedness. To solve this problem, we propose a be- havioral property
based algorithm to identify communities that optimize the like-mindedness
metric and compare its performance on this metric with other behavioral data
based methodologies as well as community detection methods that rely only on
structural data. We execute these algorithms on real-life datasets of
Filmtipset and Twitter and show that our algorithm performs better than the
existing algorithms with respect to the like-mindedness metric.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2013 11:01:20 GMT"
},
{
"version": "v2",
"created": "Sat, 21 Feb 2015 07:29:47 GMT"
},
{
"version": "v3",
"created": "Wed, 24 Feb 2016 06:10:55 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Deepak",
"Talasila Sai",
""
],
[
"Adhya",
"Hindol",
""
],
[
"Kejriwal",
"Shyamal",
""
],
[
"Gullapalli",
"Bhanuteja",
""
],
[
"Shannigrahi",
"Saswata",
""
]
] | TITLE: A new hierarchical clustering algorithm to identify non-overlapping
like-minded communities
ABSTRACT: A network has a non-overlapping community structure if the nodes of the
network can be partitioned into disjoint sets such that each node in a set is
densely connected to other nodes inside the set and sparsely connected to the
nodes out- side it. There are many metrics to validate the efficacy of such a
structure, such as clustering coefficient, betweenness, centrality, modularity
and like-mindedness. Many methods have been proposed to optimize some of these
metrics, but none of these works well on the recently introduced metric
like-mindedness. To solve this problem, we propose a be- havioral property
based algorithm to identify communities that optimize the like-mindedness
metric and compare its performance on this metric with other behavioral data
based methodologies as well as community detection methods that rely only on
structural data. We execute these algorithms on real-life datasets of
Filmtipset and Twitter and show that our algorithm performs better than the
existing algorithms with respect to the like-mindedness metric.
| no_new_dataset | 0.950595 |
1508.01244 | Qiong Huang | Qiong Huang, Ashok Veeraraghavan and Ashutosh Sabharwal | TabletGaze: Unconstrained Appearance-based Gaze Estimation in Mobile
Tablets | 18 pages, 17 figures, submitted to journal, website hosting the
dataset: http://sh.rice.edu/tablet_gaze.html | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study gaze estimation on tablets, our key design goal is uncalibrated gaze
estimation using the front-facing camera during natural use of tablets, where
the posture and method of holding the tablet is not constrained. We collected
the first large unconstrained gaze dataset of tablet users, labeled Rice
TabletGaze dataset. The dataset consists of 51 subjects, each with 4 different
postures and 35 gaze locations. Subjects vary in race, gender and in their need
for prescription glasses, all of which might impact gaze estimation accuracy.
Driven by our observations on the collected data, we present a TabletGaze
algorithm for automatic gaze estimation using multi-level HoG feature and
Random Forests regressor. The TabletGaze algorithm achieves a mean error of
3.17 cm. We perform extensive evaluation on the impact of various factors such
as dataset size, race, wearing glasses and user posture on the gaze estimation
accuracy and make important observations about the impact of these factors.
| [
{
"version": "v1",
"created": "Wed, 5 Aug 2015 22:38:53 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Sep 2015 16:44:14 GMT"
},
{
"version": "v3",
"created": "Sat, 16 Jul 2016 09:06:23 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Huang",
"Qiong",
""
],
[
"Veeraraghavan",
"Ashok",
""
],
[
"Sabharwal",
"Ashutosh",
""
]
] | TITLE: TabletGaze: Unconstrained Appearance-based Gaze Estimation in Mobile
Tablets
ABSTRACT: We study gaze estimation on tablets, our key design goal is uncalibrated gaze
estimation using the front-facing camera during natural use of tablets, where
the posture and method of holding the tablet is not constrained. We collected
the first large unconstrained gaze dataset of tablet users, labeled Rice
TabletGaze dataset. The dataset consists of 51 subjects, each with 4 different
postures and 35 gaze locations. Subjects vary in race, gender and in their need
for prescription glasses, all of which might impact gaze estimation accuracy.
Driven by our observations on the collected data, we present a TabletGaze
algorithm for automatic gaze estimation using multi-level HoG feature and
Random Forests regressor. The TabletGaze algorithm achieves a mean error of
3.17 cm. We perform extensive evaluation on the impact of various factors such
as dataset size, race, wearing glasses and user posture on the gaze estimation
accuracy and make important observations about the impact of these factors.
| new_dataset | 0.956756 |
1511.02992 | Mrinal Haloi | Mrinal Haloi | Traffic Sign Classification Using Deep Inception Based Convolutional
Networks | modifications: Accepted version of 2016 IEEE Intelligent Vehicles
Symposium (IV 2016) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose a novel deep network for traffic sign classification
that achieves outstanding performance on GTSRB surpassing all previous methods.
Our deep network consists of spatial transformer layers and a modified version
of inception module specifically designed for capturing local and global
features together. This features adoption allows our network to classify
precisely intraclass samples even under deformations. Use of spatial
transformer layer makes this network more robust to deformations such as
translation, rotation, scaling of input images. Unlike existing approaches that
are developed with hand-crafted features, multiple deep networks with huge
parameters and data augmentations, our method addresses the concern of
exploding parameters and augmentations. We have achieved the state-of-the-art
performance of 99.81\% on GTSRB dataset.
| [
{
"version": "v1",
"created": "Tue, 10 Nov 2015 05:07:03 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Jul 2016 11:05:22 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Haloi",
"Mrinal",
""
]
] | TITLE: Traffic Sign Classification Using Deep Inception Based Convolutional
Networks
ABSTRACT: In this work, we propose a novel deep network for traffic sign classification
that achieves outstanding performance on GTSRB surpassing all previous methods.
Our deep network consists of spatial transformer layers and a modified version
of inception module specifically designed for capturing local and global
features together. This features adoption allows our network to classify
precisely intraclass samples even under deformations. Use of spatial
transformer layer makes this network more robust to deformations such as
translation, rotation, scaling of input images. Unlike existing approaches that
are developed with hand-crafted features, multiple deep networks with huge
parameters and data augmentations, our method addresses the concern of
exploding parameters and augmentations. We have achieved the state-of-the-art
performance of 99.81\% on GTSRB dataset.
| no_new_dataset | 0.953535 |
1511.04412 | Mazen Melibari | Mazen Melibari, Pascal Poupart, Prashant Doshi and George Trimponias | Dynamic Sum Product Networks for Tractable Inference on Sequence Data
(Extended Version) | Published in the Proceedings of the International Conference on
Probabilistic Graphical Models (PGM), 2016 | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sum-Product Networks (SPN) have recently emerged as a new class of tractable
probabilistic graphical models. Unlike Bayesian networks and Markov networks
where inference may be exponential in the size of the network, inference in
SPNs is in time linear in the size of the network. Since SPNs represent
distributions over a fixed set of variables only, we propose dynamic sum
product networks (DSPNs) as a generalization of SPNs for sequence data of
varying length. A DSPN consists of a template network that is repeated as many
times as needed to model data sequences of any length. We present a local
search technique to learn the structure of the template network. In contrast to
dynamic Bayesian networks for which inference is generally exponential in the
number of variables per time slice, DSPNs inherit the linear inference
complexity of SPNs. We demonstrate the advantages of DSPNs over DBNs and other
models on several datasets of sequence data.
| [
{
"version": "v1",
"created": "Fri, 13 Nov 2015 19:56:15 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Jul 2016 03:37:01 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Melibari",
"Mazen",
""
],
[
"Poupart",
"Pascal",
""
],
[
"Doshi",
"Prashant",
""
],
[
"Trimponias",
"George",
""
]
] | TITLE: Dynamic Sum Product Networks for Tractable Inference on Sequence Data
(Extended Version)
ABSTRACT: Sum-Product Networks (SPN) have recently emerged as a new class of tractable
probabilistic graphical models. Unlike Bayesian networks and Markov networks
where inference may be exponential in the size of the network, inference in
SPNs is in time linear in the size of the network. Since SPNs represent
distributions over a fixed set of variables only, we propose dynamic sum
product networks (DSPNs) as a generalization of SPNs for sequence data of
varying length. A DSPN consists of a template network that is repeated as many
times as needed to model data sequences of any length. We present a local
search technique to learn the structure of the template network. In contrast to
dynamic Bayesian networks for which inference is generally exponential in the
number of variables per time slice, DSPNs inherit the linear inference
complexity of SPNs. We demonstrate the advantages of DSPNs over DBNs and other
models on several datasets of sequence data.
| no_new_dataset | 0.950227 |
1512.04785 | Phong Vo | Phong D. Vo, Alexandru Ginsca, Herv\'e Le Borgne, Adrian Popescu | On Deep Representation Learning from Noisy Web Images | null | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The keep-growing content of Web images may be the next important data source
to scale up deep neural networks, which recently obtained a great success in
the ImageNet classification challenge and related tasks. This prospect,
however, has not been validated on convolutional networks (convnet) -- one of
best performing deep models -- because of their supervised regime. While
unsupervised alternatives are not so good as convnet in generalizing the
learned model to new domains, we use convnet to leverage semi-supervised
representation learning. Our approach is to use massive amounts of unlabeled
and noisy Web images to train convnets as general feature detectors despite
challenges coming from data such as high level of mislabeled data, outliers,
and data biases. Extensive experiments are conducted at several data scales,
different network architectures, and data reranking techniques. The learned
representations are evaluated on nine public datasets of various topics. The
best results obtained by our convnets, trained on 3.14 million Web images,
outperform AlexNet trained on 1.2 million clean images of ILSVRC 2012 and is
closing the gap with VGG-16. These prominent results suggest a budget solution
to use deep learning in practice and motivate more research in semi-supervised
representation learning.
| [
{
"version": "v1",
"created": "Tue, 15 Dec 2015 13:57:39 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Jul 2016 20:50:54 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Vo",
"Phong D.",
""
],
[
"Ginsca",
"Alexandru",
""
],
[
"Borgne",
"Hervé Le",
""
],
[
"Popescu",
"Adrian",
""
]
] | TITLE: On Deep Representation Learning from Noisy Web Images
ABSTRACT: The keep-growing content of Web images may be the next important data source
to scale up deep neural networks, which recently obtained a great success in
the ImageNet classification challenge and related tasks. This prospect,
however, has not been validated on convolutional networks (convnet) -- one of
best performing deep models -- because of their supervised regime. While
unsupervised alternatives are not so good as convnet in generalizing the
learned model to new domains, we use convnet to leverage semi-supervised
representation learning. Our approach is to use massive amounts of unlabeled
and noisy Web images to train convnets as general feature detectors despite
challenges coming from data such as high level of mislabeled data, outliers,
and data biases. Extensive experiments are conducted at several data scales,
different network architectures, and data reranking techniques. The learned
representations are evaluated on nine public datasets of various topics. The
best results obtained by our convnets, trained on 3.14 million Web images,
outperform AlexNet trained on 1.2 million clean images of ILSVRC 2012 and is
closing the gap with VGG-16. These prominent results suggest a budget solution
to use deep learning in practice and motivate more research in semi-supervised
representation learning.
| no_new_dataset | 0.945399 |
1606.09581 | Sahil Sharma | Sahil Sharma, Vinod Sharma and Atul Sharma | Performance Based Evaluation of Various Machine Learning Classification
Techniques for Chronic Kidney Disease Diagnosis | 6 pages, 4 figures, 2 tables | International Journal of Modern Computer Science, Vol.4, Issue3,
June 2016, pp.11-16 | null | null | cs.LG cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Areas where Artificial Intelligence (AI) & related fields are finding their
applications are increasing day by day, moving from core areas of computer
science they are finding their applications in various other domains.In recent
times Machine Learning i.e. a sub-domain of AI has been widely used in order to
assist medical experts and doctors in the prediction, diagnosis and prognosis
of various diseases and other medical disorders. In this manuscript the authors
applied various machine learning algorithms to a problem in the domain of
medical diagnosis and analyzed their efficiency in predicting the results. The
problem selected for the study is the diagnosis of the Chronic Kidney
Disease.The dataset used for the study consists of 400 instances and 24
attributes. The authors evaluated 12 classification techniques by applying them
to the Chronic Kidney Disease data. In order to calculate efficiency, results
of the prediction by candidate methods were compared with the actual medical
results of the subject.The various metrics used for performance evaluation are
predictive accuracy, precision, sensitivity and specificity. The results
indicate that decision-tree performed best with nearly the accuracy of 98.6%,
sensitivity of 0.9720, precision of 1 and specificity of 1.
| [
{
"version": "v1",
"created": "Tue, 28 Jun 2016 07:00:07 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2016 08:14:43 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Sharma",
"Sahil",
""
],
[
"Sharma",
"Vinod",
""
],
[
"Sharma",
"Atul",
""
]
] | TITLE: Performance Based Evaluation of Various Machine Learning Classification
Techniques for Chronic Kidney Disease Diagnosis
ABSTRACT: Areas where Artificial Intelligence (AI) & related fields are finding their
applications are increasing day by day, moving from core areas of computer
science they are finding their applications in various other domains.In recent
times Machine Learning i.e. a sub-domain of AI has been widely used in order to
assist medical experts and doctors in the prediction, diagnosis and prognosis
of various diseases and other medical disorders. In this manuscript the authors
applied various machine learning algorithms to a problem in the domain of
medical diagnosis and analyzed their efficiency in predicting the results. The
problem selected for the study is the diagnosis of the Chronic Kidney
Disease.The dataset used for the study consists of 400 instances and 24
attributes. The authors evaluated 12 classification techniques by applying them
to the Chronic Kidney Disease data. In order to calculate efficiency, results
of the prediction by candidate methods were compared with the actual medical
results of the subject.The various metrics used for performance evaluation are
predictive accuracy, precision, sensitivity and specificity. The results
indicate that decision-tree performed best with nearly the accuracy of 98.6%,
sensitivity of 0.9720, precision of 1 and specificity of 1.
| no_new_dataset | 0.94868 |
1607.04188 | Ying Li | Ying Li, Hui Li, Maria K. Y. Chan, Subramanian Sankaranarayanan and
Beno\^it Rouxb | Methodology of Parameterization of Molecular Mechanics Force Field From
Quantum Chemistry Calculations using Genetic Algorithm: A case study of
methanol | not submitted to anywhere else by July 2016 | null | null | null | physics.atm-clus physics.chem-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In molecular dynamics (MD) simulation, force field determines the capability
of an individual model in capturing physical and chemistry properties. The
method for generating proper parameters of the force field form is the key
component for computational research in chemistry, biochemistry, and
condensed-phase physics. Our study showed that the feasibility to predict
experimental condensed phase properties (i.e., density and heat of
vaporization) of methanol through problem specific force field from only
quantum chemistry information. To acquire the satisfying parameter sets of the
force field, the genetic algorithm (GA) is the main optimization method. For
electrostatic potential energy, we optimized both the electrostatic parameters
of methanol using the GA method, which leads to low deviations of between the
quantum mechanics (QM) calculations and the GA optimized parameters. We
optimized the van der Waals (vdW) parameters both using GA and guided GA
methods by calibrating interaction energy of various methanol homo-clusters,
such as nonamers, undecamers, or tridecamers. Excellent agreement between the
training dataset from QM calculations (i.e., MP2) and GA optimized parameters
can be achieved. However, only the guided GA method, which eliminates the
overestimation of interaction energy from MP2 calculations in the optimization
process, provides proper vdW parameters for MD simulation to get the condensed
phase properties (i.e., density and heat of vaporization) of methanol.
Throughout the whole optimization process, the experimental value were not
involved in the objective functions, but were only used for the purpose of
justifying models (i.e., nonamers, undecamers, or tridecamers) and validating
methods (i.e., GA or guided GA). Our method shows the possibility of developing
descriptive polarizable force field using only QM calculations.
| [
{
"version": "v1",
"created": "Thu, 14 Jul 2016 16:18:08 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Jul 2016 02:39:47 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Li",
"Ying",
""
],
[
"Li",
"Hui",
""
],
[
"Chan",
"Maria K. Y.",
""
],
[
"Sankaranarayanan",
"Subramanian",
""
],
[
"Rouxb",
"Benoît",
""
]
] | TITLE: Methodology of Parameterization of Molecular Mechanics Force Field From
Quantum Chemistry Calculations using Genetic Algorithm: A case study of
methanol
ABSTRACT: In molecular dynamics (MD) simulation, force field determines the capability
of an individual model in capturing physical and chemistry properties. The
method for generating proper parameters of the force field form is the key
component for computational research in chemistry, biochemistry, and
condensed-phase physics. Our study showed that the feasibility to predict
experimental condensed phase properties (i.e., density and heat of
vaporization) of methanol through problem specific force field from only
quantum chemistry information. To acquire the satisfying parameter sets of the
force field, the genetic algorithm (GA) is the main optimization method. For
electrostatic potential energy, we optimized both the electrostatic parameters
of methanol using the GA method, which leads to low deviations of between the
quantum mechanics (QM) calculations and the GA optimized parameters. We
optimized the van der Waals (vdW) parameters both using GA and guided GA
methods by calibrating interaction energy of various methanol homo-clusters,
such as nonamers, undecamers, or tridecamers. Excellent agreement between the
training dataset from QM calculations (i.e., MP2) and GA optimized parameters
can be achieved. However, only the guided GA method, which eliminates the
overestimation of interaction energy from MP2 calculations in the optimization
process, provides proper vdW parameters for MD simulation to get the condensed
phase properties (i.e., density and heat of vaporization) of methanol.
Throughout the whole optimization process, the experimental value were not
involved in the objective functions, but were only used for the purpose of
justifying models (i.e., nonamers, undecamers, or tridecamers) and validating
methods (i.e., GA or guided GA). Our method shows the possibility of developing
descriptive polarizable force field using only QM calculations.
| no_new_dataset | 0.956836 |
1607.04731 | Ke Yang | Ke Yang, Dongsheng Li, Yong Dou, Shaohe Lv, Qiang Wang | Weakly supervised object detection using pseudo-strong labels | 7 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object detection is an import task of computer vision.A variety of methods
have been proposed,but methods using the weak labels still do not have a
satisfactory result.In this paper,we propose a new framework that using the
weakly supervised method's output as the pseudo-strong labels to train a
strongly supervised model.One weakly supervised method is treated as black-box
to generate class-specific bounding boxes on train dataset.A de-noise method is
then applied to the noisy bounding boxes.Then the de-noised pseudo-strong
labels are used to train a strongly object detection network.The whole
framework is still weakly supervised because the entire process only uses the
image-level labels.The experiment results on PASCAL VOC 2007 prove the validity
of our framework, and we get result 43.4% on mean average precision compared to
39.5% of the previous best result and 34.5% of the initial
method,respectively.And this frame work is simple and distinct,and is promising
to be applied to other method easily.
| [
{
"version": "v1",
"created": "Sat, 16 Jul 2016 11:49:18 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Yang",
"Ke",
""
],
[
"Li",
"Dongsheng",
""
],
[
"Dou",
"Yong",
""
],
[
"Lv",
"Shaohe",
""
],
[
"Wang",
"Qiang",
""
]
] | TITLE: Weakly supervised object detection using pseudo-strong labels
ABSTRACT: Object detection is an import task of computer vision.A variety of methods
have been proposed,but methods using the weak labels still do not have a
satisfactory result.In this paper,we propose a new framework that using the
weakly supervised method's output as the pseudo-strong labels to train a
strongly supervised model.One weakly supervised method is treated as black-box
to generate class-specific bounding boxes on train dataset.A de-noise method is
then applied to the noisy bounding boxes.Then the de-noised pseudo-strong
labels are used to train a strongly object detection network.The whole
framework is still weakly supervised because the entire process only uses the
image-level labels.The experiment results on PASCAL VOC 2007 prove the validity
of our framework, and we get result 43.4% on mean average precision compared to
39.5% of the previous best result and 34.5% of the initial
method,respectively.And this frame work is simple and distinct,and is promising
to be applied to other method easily.
| no_new_dataset | 0.949949 |
1607.04780 | Junwei Liang | Junwei Liang, Lu Jiang, Deyu Meng, Alexander Hauptmann | Exploiting Multi-modal Curriculum in Noisy Web Data for Large-scale
Concept Learning | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning video concept detectors automatically from the big but noisy web
data with no additional manual annotations is a novel but challenging area in
the multimedia and the machine learning community. A considerable amount of
videos on the web are associated with rich but noisy contextual information,
such as the title, which provides weak annotations or labels about the video
content. To leverage the big noisy web labels, this paper proposes a novel
method called WEbly-Labeled Learning (WELL), which is established on the
state-of-the-art machine learning algorithm inspired by the learning process of
human. WELL introduces a number of novel multi-modal approaches to incorporate
meaningful prior knowledge called curriculum from the noisy web videos. To
investigate this problem, we empirically study the curriculum constructed from
the multi-modal features of the videos collected from YouTube and Flickr. The
efficacy and the scalability of WELL have been extensively demonstrated on two
public benchmarks, including the largest multimedia dataset and the largest
manually-labeled video set. The comprehensive experimental results demonstrate
that WELL outperforms state-of-the-art studies by a statically significant
margin on learning concepts from noisy web video data. In addition, the results
also verify that WELL is robust to the level of noisiness in the video data.
Notably, WELL trained on sufficient noisy web labels is able to achieve a
comparable accuracy to supervised learning methods trained on the clean
manually-labeled data.
| [
{
"version": "v1",
"created": "Sat, 16 Jul 2016 18:14:51 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Liang",
"Junwei",
""
],
[
"Jiang",
"Lu",
""
],
[
"Meng",
"Deyu",
""
],
[
"Hauptmann",
"Alexander",
""
]
] | TITLE: Exploiting Multi-modal Curriculum in Noisy Web Data for Large-scale
Concept Learning
ABSTRACT: Learning video concept detectors automatically from the big but noisy web
data with no additional manual annotations is a novel but challenging area in
the multimedia and the machine learning community. A considerable amount of
videos on the web are associated with rich but noisy contextual information,
such as the title, which provides weak annotations or labels about the video
content. To leverage the big noisy web labels, this paper proposes a novel
method called WEbly-Labeled Learning (WELL), which is established on the
state-of-the-art machine learning algorithm inspired by the learning process of
human. WELL introduces a number of novel multi-modal approaches to incorporate
meaningful prior knowledge called curriculum from the noisy web videos. To
investigate this problem, we empirically study the curriculum constructed from
the multi-modal features of the videos collected from YouTube and Flickr. The
efficacy and the scalability of WELL have been extensively demonstrated on two
public benchmarks, including the largest multimedia dataset and the largest
manually-labeled video set. The comprehensive experimental results demonstrate
that WELL outperforms state-of-the-art studies by a statically significant
margin on learning concepts from noisy web video data. In addition, the results
also verify that WELL is robust to the level of noisiness in the video data.
Notably, WELL trained on sufficient noisy web labels is able to achieve a
comparable accuracy to supervised learning methods trained on the clean
manually-labeled data.
| no_new_dataset | 0.948585 |
1607.04939 | Saurabh Prasad | Saurabh Prasad, Minshan Cui, Lifeng Yan | Composite Kernel Local Angular Discriminant Analysis for Multi-Sensor
Geospatial Image Analysis | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | With the emergence of passive and active optical sensors available for
geospatial imaging, information fusion across sensors is becoming ever more
important. An important aspect of single (or multiple) sensor geospatial image
analysis is feature extraction - the process of finding "optimal" lower
dimensional subspaces that adequately characterize class-specific information
for subsequent analysis tasks, such as classification, change and anomaly
detection etc. In recent work, we proposed and developed an angle-based
discriminant analysis approach that projected data onto subspaces with maximal
"angular" separability in the input (raw) feature space and Reproducible Kernel
Hilbert Space (RKHS). We also developed an angular locality preserving variant
of this algorithm. In this letter, we advance this work and make it suitable
for information fusion - we propose and validate a composite kernel local
angular discriminant analysis projection, that can operate on an ensemble of
feature sources (e.g. from different sources), and project the data onto a
unified space through composite kernels where the data are maximally separated
in an angular sense. We validate this method with the multi-sensor University
of Houston hyperspectral and LiDAR dataset, and demonstrate that the proposed
method significantly outperforms other composite kernel approaches to sensor
(information) fusion.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2016 02:50:40 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Prasad",
"Saurabh",
""
],
[
"Cui",
"Minshan",
""
],
[
"Yan",
"Lifeng",
""
]
] | TITLE: Composite Kernel Local Angular Discriminant Analysis for Multi-Sensor
Geospatial Image Analysis
ABSTRACT: With the emergence of passive and active optical sensors available for
geospatial imaging, information fusion across sensors is becoming ever more
important. An important aspect of single (or multiple) sensor geospatial image
analysis is feature extraction - the process of finding "optimal" lower
dimensional subspaces that adequately characterize class-specific information
for subsequent analysis tasks, such as classification, change and anomaly
detection etc. In recent work, we proposed and developed an angle-based
discriminant analysis approach that projected data onto subspaces with maximal
"angular" separability in the input (raw) feature space and Reproducible Kernel
Hilbert Space (RKHS). We also developed an angular locality preserving variant
of this algorithm. In this letter, we advance this work and make it suitable
for information fusion - we propose and validate a composite kernel local
angular discriminant analysis projection, that can operate on an ensemble of
feature sources (e.g. from different sources), and project the data onto a
unified space through composite kernels where the data are maximally separated
in an angular sense. We validate this method with the multi-sensor University
of Houston hyperspectral and LiDAR dataset, and demonstrate that the proposed
method significantly outperforms other composite kernel approaches to sensor
(information) fusion.
| no_new_dataset | 0.952175 |
1607.04942 | Saurabh Prasad | Minshan Cui, Saurabh Prasad | Sparse Representation-Based Classification: Orthogonal Least Squares or
Orthogonal Matching Pursuit? | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Spare representation of signals has received significant attention in recent
years. Based on these developments, a sparse representation-based
classification (SRC) has been proposed for a variety of classification and
related tasks, including face recognition. Recently, a class dependent variant
of SRC was proposed to overcome the limitations of SRC for remote sensing image
classification. Traditionally, greedy pursuit based method such as orthogonal
matching pursuit (OMP) are used for sparse coefficient recovery due to their
simplicity as well as low time-complexity. However, orthogonal least square
(OLS) has not yet been widely used in classifiers that exploit the sparse
representation properties of data. Since OLS produces lower signal
reconstruction error than OMP under similar conditions, we hypothesize that
more accurate signal estimation will further improve the classification
performance of classifiers that exploiting the sparsity of data. In this paper,
we present a classification method based on OLS, which implements OLS in a
classwise manner to perform the classification. We also develop and present its
kernelized variant to handle nonlinearly separable data. Based on two
real-world benchmarking hyperspectral datasets, we demonstrate that class
dependent OLS based methods outperform several baseline methods including
traditional SRC and the support vector machine classifier.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2016 03:05:07 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Cui",
"Minshan",
""
],
[
"Prasad",
"Saurabh",
""
]
] | TITLE: Sparse Representation-Based Classification: Orthogonal Least Squares or
Orthogonal Matching Pursuit?
ABSTRACT: Spare representation of signals has received significant attention in recent
years. Based on these developments, a sparse representation-based
classification (SRC) has been proposed for a variety of classification and
related tasks, including face recognition. Recently, a class dependent variant
of SRC was proposed to overcome the limitations of SRC for remote sensing image
classification. Traditionally, greedy pursuit based method such as orthogonal
matching pursuit (OMP) are used for sparse coefficient recovery due to their
simplicity as well as low time-complexity. However, orthogonal least square
(OLS) has not yet been widely used in classifiers that exploit the sparse
representation properties of data. Since OLS produces lower signal
reconstruction error than OMP under similar conditions, we hypothesize that
more accurate signal estimation will further improve the classification
performance of classifiers that exploiting the sparsity of data. In this paper,
we present a classification method based on OLS, which implements OLS in a
classwise manner to perform the classification. We also develop and present its
kernelized variant to handle nonlinearly separable data. Based on two
real-world benchmarking hyperspectral datasets, we demonstrate that class
dependent OLS based methods outperform several baseline methods including
traditional SRC and the support vector machine classifier.
| no_new_dataset | 0.948965 |
1607.05002 | Pourya Habib Zadeh | Pourya Habib Zadeh, Reshad Hosseini and Suvrit Sra | Geometric Mean Metric Learning | 7 pages, 4 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the task of learning a Euclidean metric from data. We approach
this problem from first principles and formulate it as a surprisingly simple
optimization problem. Indeed, our formulation even admits a closed form
solution. This solution possesses several very attractive properties: (i) an
innate geometric appeal through the Riemannian geometry of positive definite
matrices; (ii) ease of interpretability; and (iii) computational speed several
orders of magnitude faster than the widely used LMNN and ITML methods.
Furthermore, on standard benchmark datasets, our closed-form solution
consistently attains higher classification accuracy.
| [
{
"version": "v1",
"created": "Mon, 18 Jul 2016 10:14:46 GMT"
}
] | 2016-07-19T00:00:00 | [
[
"Zadeh",
"Pourya Habib",
""
],
[
"Hosseini",
"Reshad",
""
],
[
"Sra",
"Suvrit",
""
]
] | TITLE: Geometric Mean Metric Learning
ABSTRACT: We revisit the task of learning a Euclidean metric from data. We approach
this problem from first principles and formulate it as a surprisingly simple
optimization problem. Indeed, our formulation even admits a closed form
solution. This solution possesses several very attractive properties: (i) an
innate geometric appeal through the Riemannian geometry of positive definite
matrices; (ii) ease of interpretability; and (iii) computational speed several
orders of magnitude faster than the widely used LMNN and ITML methods.
Furthermore, on standard benchmark datasets, our closed-form solution
consistently attains higher classification accuracy.
| no_new_dataset | 0.950732 |