id (stringlengths 9-16) | submitter (stringlengths 3-64, nullable) | authors (stringlengths 5-6.63k) | title (stringlengths 7-245) | comments (stringlengths 1-482, nullable) | journal-ref (stringlengths 4-382, nullable) | doi (stringlengths 9-151, nullable) | report-no (stringclasses, 984 values) | categories (stringlengths 5-108) | license (stringclasses, 9 values) | abstract (stringlengths 83-3.41k) | versions (listlengths 1-20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequencelengths 1-427) | prompt (stringlengths 166-3.49k) | label (stringclasses, 2 values) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1612.00148 | Vivek Kulkarni | Vivek Kulkarni, Yashar Mehdad, Troy Chevalier | Domain Adaptation for Named Entity Recognition in Online Media with Word
Embeddings | 12 pages, 3 figures, 8 tables arxiv preprint | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Content on the Internet is heterogeneous and arises from various domains like
News, Entertainment, Finance and Technology. Understanding such content
requires identifying named entities (persons, places and organizations) as one
of the key steps. Traditionally Named Entity Recognition (NER) systems have
been built using available annotated datasets (like CoNLL, MUC) and demonstrate
excellent performance. However, these models fail to generalize to other
domains like Sports and Finance where conventions and language use can differ
significantly. Furthermore, several domains do not have large amounts of
annotated data for training robust Named Entity Recognition models. A
key step towards addressing this challenge is to adapt models learned on domains where
large amounts of annotated training data are available to domains with scarce
annotated data.
In this paper, we propose methods to effectively adapt models learned on one
domain to other domains using distributed word representations. First, we
analyze the linguistic variation present across domains to identify key
linguistic insights that can boost performance across domains. We propose
methods to capture domain specific semantics of word usage in addition to
global semantics. We then demonstrate how to effectively use such domain
specific knowledge to learn NER models that outperform previous baselines in
the domain adaptation setting.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 05:08:53 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Kulkarni",
"Vivek",
""
],
[
"Mehdad",
"Yashar",
""
],
[
"Chevalier",
"Troy",
""
]
] | TITLE: Domain Adaptation for Named Entity Recognition in Online Media with Word
Embeddings
ABSTRACT: Content on the Internet is heterogeneous and arises from various domains like
News, Entertainment, Finance and Technology. Understanding such content
requires identifying named entities (persons, places and organizations) as one
of the key steps. Traditionally Named Entity Recognition (NER) systems have
been built using available annotated datasets (like CoNLL, MUC) and demonstrate
excellent performance. However, these models fail to generalize to other
domains like Sports and Finance where conventions and language use can differ
significantly. Furthermore, several domains do not have large amounts of
annotated data for training robust Named Entity Recognition models. A
key step towards addressing this challenge is to adapt models learned on domains where
large amounts of annotated training data are available to domains with scarce
annotated data.
In this paper, we propose methods to effectively adapt models learned on one
domain to other domains using distributed word representations. First, we
analyze the linguistic variation present across domains to identify key
linguistic insights that can boost performance across domains. We propose
methods to capture domain specific semantics of word usage in addition to
global semantics. We then demonstrate how to effectively use such domain
specific knowledge to learn NER models that outperform previous baselines in
the domain adaptation setting.
| no_new_dataset | 0.949153 |
1612.00155 | Pedro Tabacof | Pedro Tabacof, Julia Tavares, Eduardo Valle | Adversarial Images for Variational Autoencoders | Workshop on Adversarial Training, NIPS 2016, Barcelona, Spain | null | null | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate adversarial attacks for autoencoders. We propose a procedure
that distorts the input image to mislead the autoencoder in reconstructing a
completely different target image. We attack the internal latent
representations, attempting to make the adversarial input produce an internal
representation as similar as possible to the target's. We find that
autoencoders are much more robust to the attack than classifiers: while some
examples have tolerably small input distortion, and reasonable similarity to
the target image, there is a quasi-linear trade-off between those aims. We
report results on MNIST and SVHN datasets, and also test regular deterministic
autoencoders, reaching similar conclusions in all cases. Finally, we show that
the usual adversarial attack for classifiers, while being much easier, also
presents a direct proportionality between distortion on the input and misdirection
on the output. That proportionality, however, is hidden by the normalization of
the output, which maps a linear layer into non-linear probabilities.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 05:59:57 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Tabacof",
"Pedro",
""
],
[
"Tavares",
"Julia",
""
],
[
"Valle",
"Eduardo",
""
]
] | TITLE: Adversarial Images for Variational Autoencoders
ABSTRACT: We investigate adversarial attacks for autoencoders. We propose a procedure
that distorts the input image to mislead the autoencoder in reconstructing a
completely different target image. We attack the internal latent
representations, attempting to make the adversarial input produce an internal
representation as similar as possible to the target's. We find that
autoencoders are much more robust to the attack than classifiers: while some
examples have tolerably small input distortion, and reasonable similarity to
the target image, there is a quasi-linear trade-off between those aims. We
report results on MNIST and SVHN datasets, and also test regular deterministic
autoencoders, reaching similar conclusions in all cases. Finally, we show that
the usual adversarial attack for classifiers, while being much easier, also
presents a direct proportionality between distortion on the input and misdirection
on the output. That proportionality, however, is hidden by the normalization of
the output, which maps a linear layer into non-linear probabilities.
| no_new_dataset | 0.943608 |
1612.00227 | Loris Bozzato | Stefano Borgo, Loris Bozzato, Alessio Palmero Aprosio, Marco Rospocher
and Luciano Serafini | On Coreferring Text-extracted Event Descriptions with the aid of
Ontological Reasoning | null | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Systems for automatic extraction of semantic information about events from
large textual resources are now available: these tools are capable of generating
RDF datasets about text-extracted events, and this knowledge can be used to
reason over the recognized events. On the other hand, text-based tasks for
event recognition, such as event coreference (i.e., recognizing whether
two textual descriptions refer to the same event), do not take
ontological information about the extracted events into account. In this
paper, we propose a method to derive event coreference on text-extracted event
data using semantics-based rule reasoning. We demonstrate our method on
a limited (yet representative) set of event types: we introduce a formal
analysis of their ontological properties and, on this basis, we define a
set of coreference criteria. We then implement these criteria as RDF-based
reasoning rules to be applied to text-extracted event data. We evaluate the
effectiveness of our approach over a standard coreference benchmark dataset.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 12:58:02 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Borgo",
"Stefano",
""
],
[
"Bozzato",
"Loris",
""
],
[
"Aprosio",
"Alessio Palmero",
""
],
[
"Rospocher",
"Marco",
""
],
[
"Serafini",
"Luciano",
""
]
] | TITLE: On Coreferring Text-extracted Event Descriptions with the aid of
Ontological Reasoning
ABSTRACT: Systems for automatic extraction of semantic information about events from
large textual resources are now available: these tools are capable of generating
RDF datasets about text-extracted events, and this knowledge can be used to
reason over the recognized events. On the other hand, text-based tasks for
event recognition, such as event coreference (i.e., recognizing whether
two textual descriptions refer to the same event), do not take
ontological information about the extracted events into account. In this
paper, we propose a method to derive event coreference on text-extracted event
data using semantics-based rule reasoning. We demonstrate our method on
a limited (yet representative) set of event types: we introduce a formal
analysis of their ontological properties and, on this basis, we define a
set of coreference criteria. We then implement these criteria as RDF-based
reasoning rules to be applied to text-extracted event data. We evaluate the
effectiveness of our approach over a standard coreference benchmark dataset.
| no_new_dataset | 0.910466 |
1612.00234 | Xiang Long | Xiang Long, Chuang Gan, Gerard de Melo | Video Captioning with Multi-Faceted Attention | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, video captioning has been attracting an increasing amount of
interest, due to its potential for improving accessibility and information
retrieval. While existing methods rely on different kinds of visual features
and model structures, they do not fully exploit relevant semantic information.
We present an extensible approach to jointly leverage several sorts of visual
features and semantic attributes. Our novel architecture builds on LSTMs for
sentence generation, with several attention layers and two multimodal layers.
The attention mechanism learns to automatically select the most salient visual
features or semantic attributes, and the multimodal layer yields overall
representations for the input and outputs of the sentence generation component.
Experimental results on the challenging MSVD and MSR-VTT datasets show that our
framework outperforms the state-of-the-art approaches, while ground truth based
semantic attributes are able to further elevate the output quality to a
near-human level.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 13:11:29 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Long",
"Xiang",
""
],
[
"Gan",
"Chuang",
""
],
[
"de Melo",
"Gerard",
""
]
] | TITLE: Video Captioning with Multi-Faceted Attention
ABSTRACT: Recently, video captioning has been attracting an increasing amount of
interest, due to its potential for improving accessibility and information
retrieval. While existing methods rely on different kinds of visual features
and model structures, they do not fully exploit relevant semantic information.
We present an extensible approach to jointly leverage several sorts of visual
features and semantic attributes. Our novel architecture builds on LSTMs for
sentence generation, with several attention layers and two multimodal layers.
The attention mechanism learns to automatically select the most salient visual
features or semantic attributes, and the multimodal layer yields overall
representations for the input and outputs of the sentence generation component.
Experimental results on the challenging MSVD and MSR-VTT datasets show that our
framework outperforms the state-of-the-art approaches, while ground truth based
semantic attributes are able to further elevate the output quality to a
near-human level.
| no_new_dataset | 0.945045 |
1612.00240 | Kleanthi Georgala | Kleanthi Georgala, Micheal Hoffmann and Axel-Cyrille Ngonga Ngomo | An Evaluation of Models for Runtime Approximation in Link Discovery | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time-efficient link discovery is of central importance to implement the
vision of the Semantic Web. Some of the most rapid Link Discovery approaches
rely internally on planning to execute link specifications. In newer works,
linear models have been used to estimate the runtime of the fastest planners.
However, no other category of models has been studied for this purpose so far.
In this paper, we study non-linear functions for runtime
estimation. In particular, we study exponential and mixed models for the
estimation of the runtimes of planners. To this end, we evaluate three
different models for runtime on six datasets using 400 link specifications. We
show that exponential and mixed models achieve better fits when trained but are
only to be preferred in some cases. Our evaluation also shows that the use of
better runtime approximation models has a positive impact on the overall
execution of link specifications.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 13:33:03 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Georgala",
"Kleanthi",
""
],
[
"Hoffmann",
"Micheal",
""
],
[
"Ngomo",
"Axel-Cyrille Ngonga",
""
]
] | TITLE: An Evaluation of Models for Runtime Approximation in Link Discovery
ABSTRACT: Time-efficient link discovery is of central importance to implement the
vision of the Semantic Web. Some of the most rapid Link Discovery approaches
rely internally on planning to execute link specifications. In newer works,
linear models have been used to estimate the runtime of the fastest planners.
However, no other category of models has been studied for this purpose so far.
In this paper, we study non-linear functions for runtime
estimation. In particular, we study exponential and mixed models for the
estimation of the runtimes of planners. To this end, we evaluate three
different models for runtime on six datasets using 400 link specifications. We
show that exponential and mixed models achieve better fits when trained but are
only to be preferred in some cases. Our evaluation also shows that the use of
better runtime approximation models has a positive impact on the overall
execution of link specifications.
| no_new_dataset | 0.94801 |
1612.00388 | Wesley Tansey | Wesley Tansey and Edward W. Lowe Jr. and James G. Scott | Diet2Vec: Multi-scale analysis of massive dietary data | Accepted to the NIPS 2016 Workshop on Machine Learning for Health | null | null | null | stat.ML cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Smart phone apps that enable users to easily track their diets have become
widespread in the last decade. This has created an opportunity to discover new
insights into obesity and weight loss by analyzing the eating habits of the
users of such apps. In this paper, we present diet2vec: an approach to modeling
latent structure in a massive database of electronic diet journals. Through an
iterative contract-and-expand process, our model learns real-valued embeddings
of users' diets, as well as embeddings for individual foods and meals. We
demonstrate the effectiveness of our approach on a real dataset of 55K users of
the popular diet-tracking app LoseIt\footnote{http://www.loseit.com/}. To the
best of our knowledge, this is the largest fine-grained diet tracking study in
the history of nutrition and obesity research. Our results suggest that
diet2vec finds interpretable results at all levels, discovering intuitive
representations of foods, meals, and diets.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 19:21:22 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Tansey",
"Wesley",
""
],
[
"Lowe",
"Edward W.",
"Jr."
],
[
"Scott",
"James G.",
""
]
] | TITLE: Diet2Vec: Multi-scale analysis of massive dietary data
ABSTRACT: Smart phone apps that enable users to easily track their diets have become
widespread in the last decade. This has created an opportunity to discover new
insights into obesity and weight loss by analyzing the eating habits of the
users of such apps. In this paper, we present diet2vec: an approach to modeling
latent structure in a massive database of electronic diet journals. Through an
iterative contract-and-expand process, our model learns real-valued embeddings
of users' diets, as well as embeddings for individual foods and meals. We
demonstrate the effectiveness of our approach on a real dataset of 55K users of
the popular diet-tracking app LoseIt\footnote{http://www.loseit.com/}. To the
best of our knowledge, this is the largest fine-grained diet tracking study in
the history of nutrition and obesity research. Our results suggest that
diet2vec finds interpretable results at all levels, discovering intuitive
representations of foods, meals, and diets.
| no_new_dataset | 0.940572 |
1612.00408 | Imon Banerjee | Imon Banerjee, Lewis Hahn, Geoffrey Sonn, Richard Fan, Daniel L. Rubin | Computerized Multiparametric MR image Analysis for Prostate Cancer
Aggressiveness-Assessment | NIPS 2016 Workshop on Machine Learning for Health (NIPS ML4HC) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose an automated method for detecting aggressive prostate cancer (CaP)
(Gleason score >=7) based on a comprehensive analysis of the lesion and the
surrounding normal prostate tissue which has been simultaneously captured in
T2-weighted MR images, diffusion-weighted images (DWI) and apparent diffusion
coefficient maps (ADC). The proposed methodology was tested on a dataset of 79
patients (40 aggressive, 39 non-aggressive). We evaluated the performance of a
wide range of popular quantitative imaging features on the characterization of
aggressive versus non-aggressive CaP. We found that a group of 44
discriminative predictors among 1464 quantitative imaging features can be used
to produce an area under the ROC curve of 0.73.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 20:10:37 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Banerjee",
"Imon",
""
],
[
"Hahn",
"Lewis",
""
],
[
"Sonn",
"Geoffrey",
""
],
[
"Fan",
"Richard",
""
],
[
"Rubin",
"Daniel L.",
""
]
] | TITLE: Computerized Multiparametric MR image Analysis for Prostate Cancer
Aggressiveness-Assessment
ABSTRACT: We propose an automated method for detecting aggressive prostate cancer (CaP)
(Gleason score >=7) based on a comprehensive analysis of the lesion and the
surrounding normal prostate tissue which has been simultaneously captured in
T2-weighted MR images, diffusion-weighted images (DWI) and apparent diffusion
coefficient maps (ADC). The proposed methodology was tested on a dataset of 79
patients (40 aggressive, 39 non-aggressive). We evaluated the performance of a
wide range of popular quantitative imaging features on the characterization of
aggressive versus non-aggressive CaP. We found that a group of 44
discriminative predictors among 1464 quantitative imaging features can be used
to produce an area under the ROC curve of 0.73.
| no_new_dataset | 0.937038 |
1612.00423 | Shenlong Wang | Shenlong Wang, Min Bai, Gellert Mattyus, Hang Chu, Wenjie Luo, Bin
Yang, Justin Liang, Joel Cheverie, Sanja Fidler, Raquel Urtasun | TorontoCity: Seeing the World with a Million Eyes | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper we introduce the TorontoCity benchmark, which covers the full
greater Toronto area (GTA) with 712.5 $km^2$ of land, 8439 $km$ of road and
around 400,000 buildings. Our benchmark provides different perspectives of the
world captured from airplanes, drones and cars driving around the city.
Manually labeling such a large scale dataset is infeasible. Instead, we propose
to utilize different sources of high-precision maps to create our ground truth.
Towards this goal, we develop algorithms that allow us to align all data
sources with the maps while requiring minimal human supervision. We have
designed a wide variety of tasks including building height estimation
(reconstruction), road centerline and curb extraction, building instance
segmentation, building contour extraction (reorganization), semantic labeling
and scene type classification (recognition). Our pilot study shows that most of
these tasks are still difficult for modern convolutional neural networks.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 20:39:49 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Wang",
"Shenlong",
""
],
[
"Bai",
"Min",
""
],
[
"Mattyus",
"Gellert",
""
],
[
"Chu",
"Hang",
""
],
[
"Luo",
"Wenjie",
""
],
[
"Yang",
"Bin",
""
],
[
"Liang",
"Justin",
""
],
[
"Cheverie",
"Joel",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Urtasun",
"Raquel",
""
]
] | TITLE: TorontoCity: Seeing the World with a Million Eyes
ABSTRACT: In this paper we introduce the TorontoCity benchmark, which covers the full
greater Toronto area (GTA) with 712.5 $km^2$ of land, 8439 $km$ of road and
around 400,000 buildings. Our benchmark provides different perspectives of the
world captured from airplanes, drones and cars driving around the city.
Manually labeling such a large scale dataset is infeasible. Instead, we propose
to utilize different sources of high-precision maps to create our ground truth.
Towards this goal, we develop algorithms that allow us to align all data
sources with the maps while requiring minimal human supervision. We have
designed a wide variety of tasks including building height estimation
(reconstruction), road centerline and curb extraction, building instance
segmentation, building contour extraction (reorganization), semantic labeling
and scene type classification (recognition). Our pilot study shows that most of
these tasks are still difficult for modern convolutional neural networks.
| new_dataset | 0.948917 |
1209.1759 | Yani Ioannou | Yani Ioannou, Babak Taati, Robin Harrap, Michael Greenspan | Difference of Normals as a Multi-Scale Operator in Unorganized Point
Clouds | To be published in proceedings of 3DIMPVT 2012 | Proceedings of the 2012 Second International Conference on 3D
Imaging, Modeling, Processing, Visualization & Transmission (3DIMPVT) | 10.1109/3DIMPVT.2012.12 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A novel multi-scale operator for unorganized 3D point clouds is introduced.
The Difference of Normals (DoN) provides a computationally efficient,
multi-scale approach to processing large unorganized 3D point clouds. The
application of DoN in the multi-scale filtering of two different real-world
outdoor urban LIDAR scene datasets is quantitatively and qualitatively
demonstrated. In both datasets the DoN operator is shown to segment large 3D
point clouds into scale-salient clusters, such as cars, people, and lamp posts
towards applications in semi-automatic annotation, and as a pre-processing step
in automatic object recognition. The application of the operator to
segmentation is evaluated on a large public dataset of outdoor LIDAR scenes
with ground truth annotations.
| [
{
"version": "v1",
"created": "Sat, 8 Sep 2012 22:43:28 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Ioannou",
"Yani",
""
],
[
"Taati",
"Babak",
""
],
[
"Harrap",
"Robin",
""
],
[
"Greenspan",
"Michael",
""
]
] | TITLE: Difference of Normals as a Multi-Scale Operator in Unorganized Point
Clouds
ABSTRACT: A novel multi-scale operator for unorganized 3D point clouds is introduced.
The Difference of Normals (DoN) provides a computationally efficient,
multi-scale approach to processing large unorganized 3D point clouds. The
application of DoN in the multi-scale filtering of two different real-world
outdoor urban LIDAR scene datasets is quantitatively and qualitatively
demonstrated. In both datasets the DoN operator is shown to segment large 3D
point clouds into scale-salient clusters, such as cars, people, and lamp posts
towards applications in semi-automatic annotation, and as a pre-processing step
in automatic object recognition. The application of the operator to
segmentation is evaluated on a large public dataset of outdoor LIDAR scenes
with ground truth annotations.
| no_new_dataset | 0.951818 |
1406.5472 | Carl Vondrick | Carl Vondrick, Deniz Oktay, Hamed Pirsiavash, Antonio Torralba | Predicting Motivations of Actions by Leveraging Text | CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding human actions is a key problem in computer vision. However,
recognizing actions is only the first step of understanding what a person is
doing. In this paper, we introduce the problem of predicting why a person has
performed an action in images. This problem has many applications in human
activity understanding, such as anticipating or explaining an action. To study
this problem, we introduce a new dataset of people performing actions annotated
with likely motivations. However, the information in an image alone may not be
sufficient to automatically solve this task. Since humans can rely on their
lifetime of experiences to infer motivation, we propose to give computer vision
systems access to some of these experiences by using recently developed natural
language models to mine knowledge stored in massive amounts of text. While we
are still far away from fully understanding motivation, our results suggest
that transferring knowledge from language into vision can help machines
understand why people in images might be performing an action.
| [
{
"version": "v1",
"created": "Fri, 20 Jun 2014 18:02:02 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2016 03:58:15 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Vondrick",
"Carl",
""
],
[
"Oktay",
"Deniz",
""
],
[
"Pirsiavash",
"Hamed",
""
],
[
"Torralba",
"Antonio",
""
]
] | TITLE: Predicting Motivations of Actions by Leveraging Text
ABSTRACT: Understanding human actions is a key problem in computer vision. However,
recognizing actions is only the first step of understanding what a person is
doing. In this paper, we introduce the problem of predicting why a person has
performed an action in images. This problem has many applications in human
activity understanding, such as anticipating or explaining an action. To study
this problem, we introduce a new dataset of people performing actions annotated
with likely motivations. However, the information in an image alone may not be
sufficient to automatically solve this task. Since humans can rely on their
lifetime of experiences to infer motivation, we propose to give computer vision
systems access to some of these experiences by using recently developed natural
language models to mine knowledge stored in massive amounts of text. While we
are still far away from fully understanding motivation, our results suggest
that transferring knowledge from language into vision can help machines
understand why people in images might be performing an action.
| new_dataset | 0.967101 |
1504.08023 | Carl Vondrick | Carl Vondrick, Hamed Pirsiavash, Antonio Torralba | Anticipating Visual Representations from Unlabeled Video | CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anticipating actions and objects before they start or appear is a difficult
problem in computer vision with several real-world applications. This task is
challenging partly because it requires leveraging extensive knowledge of the
world that is difficult to write down. We believe that a promising resource for
efficiently learning this knowledge is through readily available unlabeled
video. We present a framework that capitalizes on temporal structure in
unlabeled video to learn to anticipate human actions and objects. The key idea
behind our approach is that we can train deep networks to predict the visual
representation of images in the future. Visual representations are a promising
prediction target because they encode images at a higher semantic level than
pixels yet are automatic to compute. We then apply recognition algorithms on
our predicted representation to anticipate objects and actions. We
experimentally validate this idea on two datasets, anticipating actions one
second in the future and objects five seconds in the future.
| [
{
"version": "v1",
"created": "Wed, 29 Apr 2015 21:01:51 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2016 03:49:34 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Vondrick",
"Carl",
""
],
[
"Pirsiavash",
"Hamed",
""
],
[
"Torralba",
"Antonio",
""
]
] | TITLE: Anticipating Visual Representations from Unlabeled Video
ABSTRACT: Anticipating actions and objects before they start or appear is a difficult
problem in computer vision with several real-world applications. This task is
challenging partly because it requires leveraging extensive knowledge of the
world that is difficult to write down. We believe that a promising resource for
efficiently learning this knowledge is through readily available unlabeled
video. We present a framework that capitalizes on temporal structure in
unlabeled video to learn to anticipate human actions and objects. The key idea
behind our approach is that we can train deep networks to predict the visual
representation of images in the future. Visual representations are a promising
prediction target because they encode images at a higher semantic level than
pixels yet are automatic to compute. We then apply recognition algorithms on
our predicted representation to anticipate objects and actions. We
experimentally validate this idea on two datasets, anticipating actions one
second in the future and objects five seconds in the future.
| no_new_dataset | 0.942981 |
1602.07043 | Suresh Venkatasubramanian | Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, Carlos
Scheidegger, Brandon Smith and Suresh Venkatasubramanian | Auditing Black-box Models for Indirect Influence | Final version of paper that appears in the IEEE International
Conference on Data Mining (ICDM), 2016 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-trained predictive models see widespread use, but for the most part they
are used as black boxes which output a prediction or score. It is therefore
hard to acquire a deeper understanding of model behavior, and in particular how
different features influence the model prediction. This is important when
interpreting the behavior of complex models, or asserting that certain
problematic attributes (like race or gender) are not unduly influencing
decisions.
In this paper, we present a technique for auditing black-box models, which
lets us study the extent to which existing models take advantage of particular
features in the dataset, without knowing how the models work. Our work focuses
on the problem of indirect influence: how some features might indirectly
influence outcomes via other, related features. As a result, we can find
attribute influences even in cases where, upon further direct examination of
the model, the attribute is not referred to by the model at all.
Our approach does not require the black-box model to be retrained. This is
important if (for example) the model is only accessible via an API, and
contrasts our work with other methods that investigate feature influence like
feature selection. We present experimental evidence for the effectiveness of
our procedure using a variety of publicly available datasets and models. We
also validate our procedure using techniques from interpretable learning and
feature selection, as well as against other black-box auditing procedures.
| [
{
"version": "v1",
"created": "Tue, 23 Feb 2016 04:52:28 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2016 06:55:16 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Adler",
"Philip",
""
],
[
"Falk",
"Casey",
""
],
[
"Friedler",
"Sorelle A.",
""
],
[
"Rybeck",
"Gabriel",
""
],
[
"Scheidegger",
"Carlos",
""
],
[
"Smith",
"Brandon",
""
],
[
"Venkatasubramanian",
"Suresh",
""
]
] | TITLE: Auditing Black-box Models for Indirect Influence
ABSTRACT: Data-trained predictive models see widespread use, but for the most part they
are used as black boxes which output a prediction or score. It is therefore
hard to acquire a deeper understanding of model behavior, and in particular how
different features influence the model prediction. This is important when
interpreting the behavior of complex models, or asserting that certain
problematic attributes (like race or gender) are not unduly influencing
decisions.
In this paper, we present a technique for auditing black-box models, which
lets us study the extent to which existing models take advantage of particular
features in the dataset, without knowing how the models work. Our work focuses
on the problem of indirect influence: how some features might indirectly
influence outcomes via other, related features. As a result, we can find
attribute influences even in cases where, upon further direct examination of
the model, the attribute is not referred to by the model at all.
Our approach does not require the black-box model to be retrained. This is
important if (for example) the model is only accessible via an API, and
contrasts our work with other methods that investigate feature influence like
feature selection. We present experimental evidence for the effectiveness of
our procedure using a variety of publicly available datasets and models. We
also validate our procedure using techniques from interpretable learning and
feature selection, as well as against other black-box auditing procedures.
| no_new_dataset | 0.942507 |
1611.02266 | Ryota Tomioka | Liwen Zhang and John Winn and Ryota Tomioka | Gaussian Attention Model and Its Application to Knowledge Base Embedding
and Question Answering | 16 pages, 4 figures | null | null | null | stat.ML cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the Gaussian attention model for content-based neural memory
access. With the proposed attention model, a neural network has the additional
degree of freedom to control the focus of its attention from a laser sharp
attention to a broad attention. It is applicable whenever we can assume that
the distance in the latent space reflects some notion of semantics. We use the
proposed attention model as a scoring function for the embedding of a knowledge
base into a continuous vector space and then train a model that performs
question answering about the entities in the knowledge base. The proposed
attention model can handle both the propagation of uncertainty when following a
series of relations and also the conjunction of conditions in a natural way. On
a dataset of soccer players who participated in the FIFA World Cup 2014, we
demonstrate that our model can handle both path queries and conjunctive queries
well.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 20:57:24 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2016 16:44:17 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Zhang",
"Liwen",
""
],
[
"Winn",
"John",
""
],
[
"Tomioka",
"Ryota",
""
]
] | TITLE: Gaussian Attention Model and Its Application to Knowledge Base Embedding
and Question Answering
ABSTRACT: We propose the Gaussian attention model for content-based neural memory
access. With the proposed attention model, a neural network has the additional
degree of freedom to control the focus of its attention from a laser sharp
attention to a broad attention. It is applicable whenever we can assume that
the distance in the latent space reflects some notion of semantics. We use the
proposed attention model as a scoring function for the embedding of a knowledge
base into a continuous vector space and then train a model that performs
question answering about the entities in the knowledge base. The proposed
attention model can handle both the propagation of uncertainty when following a
series of relations and also the conjunction of conditions in a natural way. On
a dataset of soccer players who participated in the FIFA World Cup 2014, we
demonstrate that our model can handle both path queries and conjunctive queries
well.
| no_new_dataset | 0.941761 |
1611.05109 | Shu Kong | Shu Kong, Charless Fowlkes | Low-rank Bilinear Pooling for Fine-Grained Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pooling second-order local feature statistics to form a high-dimensional
bilinear feature has been shown to achieve state-of-the-art performance on a
variety of fine-grained classification tasks. To address the computational
demands of high feature dimensionality, we propose to represent the covariance
features as a matrix and apply a low-rank bilinear classifier. The resulting
classifier can be evaluated without explicitly computing the bilinear feature
map which allows for a large reduction in the compute time as well as
decreasing the effective number of parameters to be learned.
To further compress the model, we propose classifier co-decomposition that
factorizes the collection of bilinear classifiers into a common factor and
compact per-class terms. The co-decomposition idea can be deployed through two
convolutional layers and trained in an end-to-end architecture. We suggest a
simple yet effective initialization that avoids explicitly first training and
factorizing the larger bilinear classifiers. Through extensive experiments, we
show that our model achieves state-of-the-art performance on several public
datasets for fine-grained classification trained with only category labels.
Importantly, our final model is an order of magnitude smaller than the recently
proposed compact bilinear model, and three orders smaller than the standard
bilinear CNN model.
| [
{
"version": "v1",
"created": "Wed, 16 Nov 2016 01:10:41 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Nov 2016 01:30:12 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Kong",
"Shu",
""
],
[
"Fowlkes",
"Charless",
""
]
] | TITLE: Low-rank Bilinear Pooling for Fine-Grained Classification
ABSTRACT: Pooling second-order local feature statistics to form a high-dimensional
bilinear feature has been shown to achieve state-of-the-art performance on a
variety of fine-grained classification tasks. To address the computational
demands of high feature dimensionality, we propose to represent the covariance
features as a matrix and apply a low-rank bilinear classifier. The resulting
classifier can be evaluated without explicitly computing the bilinear feature
map which allows for a large reduction in the compute time as well as
decreasing the effective number of parameters to be learned.
To further compress the model, we propose classifier co-decomposition that
factorizes the collection of bilinear classifiers into a common factor and
compact per-class terms. The co-decomposition idea can be deployed through two
convolutional layers and trained in an end-to-end architecture. We suggest a
simple yet effective initialization that avoids explicitly first training and
factorizing the larger bilinear classifiers. Through extensive experiments, we
show that our model achieves state-of-the-art performance on several public
datasets for fine-grained classification trained with only category labels.
Importantly, our final model is an order of magnitude smaller than the recently
proposed compact bilinear model, and three orders smaller than the standard
bilinear CNN model.
| no_new_dataset | 0.949248 |
1611.09960 | Chunhua Shen | Bohan Zhuang, Lingqiao Liu, Yao Li, Chunhua Shen, Ian Reid | Attend in groups: a weakly-supervised deep learning framework for
learning from web data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale datasets have driven the rapid development of deep neural
networks for visual recognition. However, annotating a massive dataset is
expensive and time-consuming. Web images and their labels are, in comparison,
much easier to obtain, but direct training on such automatically harvested
images can lead to unsatisfactory performance, because the noisy labels of Web
images adversely affect the learned recognition models. To address this
drawback we propose an end-to-end weakly-supervised deep learning framework
which is robust to the label noise in Web images. The proposed framework relies
on two unified strategies -- random grouping and attention -- to effectively
reduce the negative impact of noisy web image annotations. Specifically, random
grouping stacks multiple images into a single training instance and thus
increases the labeling accuracy at the instance level. Attention, on the other
hand, suppresses the noisy signals from both incorrectly labeled images and
less discriminative image regions. By conducting intensive experiments on two
challenging datasets, including a newly collected fine-grained dataset with Web
images of different car models, the superior performance of the proposed
methods over competitive baselines is clearly demonstrated.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2016 01:23:43 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Zhuang",
"Bohan",
""
],
[
"Liu",
"Lingqiao",
""
],
[
"Li",
"Yao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Reid",
"Ian",
""
]
] | TITLE: Attend in groups: a weakly-supervised deep learning framework for
learning from web data
ABSTRACT: Large-scale datasets have driven the rapid development of deep neural
networks for visual recognition. However, annotating a massive dataset is
expensive and time-consuming. Web images and their labels are, in comparison,
much easier to obtain, but direct training on such automatically harvested
images can lead to unsatisfactory performance, because the noisy labels of Web
images adversely affect the learned recognition models. To address this
drawback we propose an end-to-end weakly-supervised deep learning framework
which is robust to the label noise in Web images. The proposed framework relies
on two unified strategies -- random grouping and attention -- to effectively
reduce the negative impact of noisy web image annotations. Specifically, random
grouping stacks multiple images into a single training instance and thus
increases the labeling accuracy at the instance level. Attention, on the other
hand, suppresses the noisy signals from both incorrectly labeled images and
less discriminative image regions. By conducting intensive experiments on two
challenging datasets, including a newly collected fine-grained dataset with Web
images of different car models, the superior performance of the proposed
methods over competitive baselines is clearly demonstrated.
| new_dataset | 0.961678 |
1611.09967 | Chunhua Shen | Yao Li, Guosheng Lin, Bohan Zhuang, Lingqiao Liu, Chunhua Shen, Anton
van den Hengel | Sequential Person Recognition in Photo Albums with a Recurrent Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing the identities of people in everyday photos is still a very
challenging problem for machine vision, due to non-frontal faces, changes in
clothing, location, lighting, and similar factors. Recent studies have shown that rich
relational information between people in the same photo can help in recognizing
their identities. In this work, we propose to model the relational information
between people as a sequence prediction task. At the core of our work is a
novel recurrent network architecture, in which relational information between
instances' labels and appearance are modeled jointly. In addition to relational
cues, scene context is incorporated in our sequence prediction model with no
additional cost. In this sense, our approach is a unified framework for
modeling both contextual cues and visual appearance of person instances. Our
model is trained end-to-end with a sequence of annotated instances in a photo
as inputs, and a sequence of corresponding labels as targets. We demonstrate
that this simple but elegant formulation achieves state-of-the-art performance
on the newly released People In Photo Albums (PIPA) dataset.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2016 01:45:23 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Li",
"Yao",
""
],
[
"Lin",
"Guosheng",
""
],
[
"Zhuang",
"Bohan",
""
],
[
"Liu",
"Lingqiao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Sequential Person Recognition in Photo Albums with a Recurrent Network
ABSTRACT: Recognizing the identities of people in everyday photos is still a very
challenging problem for machine vision, due to non-frontal faces, changes in
clothing, location, lighting, and similar factors. Recent studies have shown that rich
relational information between people in the same photo can help in recognizing
their identities. In this work, we propose to model the relational information
between people as a sequence prediction task. At the core of our work is a
novel recurrent network architecture, in which relational information between
instances' labels and appearance are modeled jointly. In addition to relational
cues, scene context is incorporated in our sequence prediction model with no
additional cost. In this sense, our approach is a unified framework for
modeling both contextual cues and visual appearance of person instances. Our
model is trained end-to-end with a sequence of annotated instances in a photo
as inputs, and a sequence of corresponding labels as targets. We demonstrate
that this simple but elegant formulation achieves state-of-the-art performance
on the newly released People In Photo Albums (PIPA) dataset.
| new_dataset | 0.962179 |
1611.09978 | Ronghang Hu | Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, Kate
Saenko | Modeling Relationships in Referential Expressions with Compositional
Modular Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People often refer to entities in an image in terms of their relationships
with other entities. For example, "the black cat sitting under the table"
refers to both a "black cat" entity and its relationship with another "table"
entity. Understanding these relationships is essential for interpreting and
grounding such natural language expressions. Most prior work focuses on either
grounding entire referential expressions holistically to one region, or
localizing relationships based on a fixed set of categories. In this paper we
instead present a modular deep architecture capable of analyzing referential
expressions into their component parts, identifying entities and relationships
mentioned in the input expression and grounding them all in the scene. We call
this approach Compositional Modular Networks (CMNs): a novel architecture that
learns linguistic analysis and visual inference end-to-end. Our approach is
built around two types of neural modules that inspect local regions and
pairwise interactions between regions. We evaluate CMNs on multiple referential
expression datasets, outperforming state-of-the-art approaches on all tasks.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2016 02:52:09 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Hu",
"Ronghang",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Andreas",
"Jacob",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Modeling Relationships in Referential Expressions with Compositional
Modular Networks
ABSTRACT: People often refer to entities in an image in terms of their relationships
with other entities. For example, "the black cat sitting under the table"
refers to both a "black cat" entity and its relationship with another "table"
entity. Understanding these relationships is essential for interpreting and
grounding such natural language expressions. Most prior work focuses on either
grounding entire referential expressions holistically to one region, or
localizing relationships based on a fixed set of categories. In this paper we
instead present a modular deep architecture capable of analyzing referential
expressions into their component parts, identifying entities and relationships
mentioned in the input expression and grounding them all in the scene. We call
this approach Compositional Modular Networks (CMNs): a novel architecture that
learns linguistic analysis and visual inference end-to-end. Our approach is
built around two types of neural modules that inspect local regions and
pairwise interactions between regions. We evaluate CMNs on multiple referential
expression datasets, outperforming state-of-the-art approaches on all tasks.
| no_new_dataset | 0.946597 |
1611.10053 | Stanislav Levin | Stanislav Levin, Amiram Yehudai | Using Temporal and Semantic Developer-Level Information to Predict
Maintenance Activity Profiles | Postprint, ICSME 2016 proceedings | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predictive models for software projects' characteristics have been
traditionally based on project-level metrics, employing only little
developer-level information, or none at all. In this work we suggest novel
metrics that capture temporal and semantic developer-level information
collected on a per developer basis. To address the scalability challenges
involved in computing these metrics for each and every developer for a large
number of source code repositories, we have built a designated repository
mining platform. This platform was used to create a metrics dataset based on
processing nearly 1000 highly popular open source GitHub repositories,
consisting of 147 million LOC, and maintained by 30,000 developers. The
computed metrics were then employed to predict the corrective, perfective, and
adaptive maintenance activity profiles identified in previous works. Our
results show both strong correlation and promising predictive power with
R-squared values of 0.83, 0.64, and 0.75. We also show how these results may
help project managers to detect anomalies in the development process and to
build better development teams. In addition, the platform we built has the
potential to yield further predictive models leveraging developer-level metrics
at scale.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2016 08:55:03 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Levin",
"Stanislav",
""
],
[
"Yehudai",
"Amiram",
""
]
] | TITLE: Using Temporal and Semantic Developer-Level Information to Predict
Maintenance Activity Profiles
ABSTRACT: Predictive models for software projects' characteristics have been
traditionally based on project-level metrics, employing only little
developer-level information, or none at all. In this work we suggest novel
metrics that capture temporal and semantic developer-level information
collected on a per developer basis. To address the scalability challenges
involved in computing these metrics for each and every developer for a large
number of source code repositories, we have built a designated repository
mining platform. This platform was used to create a metrics dataset based on
processing nearly 1000 highly popular open source GitHub repositories,
consisting of 147 million LOC, and maintained by 30,000 developers. The
computed metrics were then employed to predict the corrective, perfective, and
adaptive maintenance activity profiles identified in previous works. Our
results show both strong correlation and promising predictive power with
R-squared values of 0.83, 0.64, and 0.75. We also show how these results may
help project managers to detect anomalies in the development process and to
build better development teams. In addition, the platform we built has the
potential to yield further predictive models leveraging developer-level metrics
at scale.
| new_dataset | 0.964489 |
1611.10080 | Chunhua Shen | Zifeng Wu, Chunhua Shen, and Anton van den Hengel | Wider or Deeper: Revisiting the ResNet Model for Visual Recognition | Code available at: https://github.com/itijyou/ademxapp | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The trend towards increasingly deep neural networks has been driven by a
general observation that increasing depth increases the performance of a
network. Recently, however, evidence has been amassing that simply increasing
depth may not be the best way to increase performance, particularly given other
limitations. Investigations into deep residual networks have also suggested
that they may not in fact be operating as a single deep network, but rather as
an ensemble of many relatively shallow networks. We examine these issues, and
in doing so arrive at a new interpretation of the unravelled view of deep
residual networks which explains some of the behaviours that have been observed
experimentally. As a result, we are able to derive a new, shallower,
architecture of residual networks which significantly outperforms much deeper
models such as ResNet-200 on the ImageNet classification dataset. We also show
that this performance is transferable to other problem domains by developing a
semantic segmentation approach which outperforms the state-of-the-art by a
remarkable margin on datasets including PASCAL VOC, PASCAL Context, and
Cityscapes. The architecture that we propose thus outperforms its comparators,
including very deep ResNets, and yet is more efficient in memory use and
sometimes also in training time. The code and models are available at
https://github.com/itijyou/ademxapp
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2016 10:24:32 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Wu",
"Zifeng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Wider or Deeper: Revisiting the ResNet Model for Visual Recognition
ABSTRACT: The trend towards increasingly deep neural networks has been driven by a
general observation that increasing depth increases the performance of a
network. Recently, however, evidence has been amassing that simply increasing
depth may not be the best way to increase performance, particularly given other
limitations. Investigations into deep residual networks have also suggested
that they may not in fact be operating as a single deep network, but rather as
an ensemble of many relatively shallow networks. We examine these issues, and
in doing so arrive at a new interpretation of the unravelled view of deep
residual networks which explains some of the behaviours that have been observed
experimentally. As a result, we are able to derive a new, shallower,
architecture of residual networks which significantly outperforms much deeper
models such as ResNet-200 on the ImageNet classification dataset. We also show
that this performance is transferable to other problem domains by developing a
semantic segmentation approach which outperforms the state-of-the-art by a
remarkable margin on datasets including PASCAL VOC, PASCAL Context, and
Cityscapes. The architecture that we propose thus outperforms its comparators,
including very deep ResNets, and yet is more efficient in memory use and
sometimes also in training time. The code and models are available at
https://github.com/itijyou/ademxapp
| no_new_dataset | 0.944177 |
1611.10176 | Shuchang Zhou | Qinyao He, He Wen, Shuchang Zhou, Yuxin Wu, Cong Yao, Xinyu Zhou,
Yuheng Zou | Effective Quantization Methods for Recurrent Neural Networks | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reducing bit-widths of weights, activations, and gradients of a Neural
Network can shrink its storage size and memory usage, and also allow for faster
training and inference by exploiting bitwise operations. However, previous
attempts for quantization of RNNs show considerable performance degradation
when using low bit-width weights and activations. In this paper, we propose
methods to quantize the structure of gates and interlinks in LSTM and GRU
cells. In addition, we propose balanced quantization methods for weights to
further reduce performance degradation. Experiments on PTB and IMDB datasets
confirm the effectiveness of our methods, as the performance of our models matches or
surpasses the previous state of the art for quantized RNNs.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2016 14:33:08 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"He",
"Qinyao",
""
],
[
"Wen",
"He",
""
],
[
"Zhou",
"Shuchang",
""
],
[
"Wu",
"Yuxin",
""
],
[
"Yao",
"Cong",
""
],
[
"Zhou",
"Xinyu",
""
],
[
"Zou",
"Yuheng",
""
]
] | TITLE: Effective Quantization Methods for Recurrent Neural Networks
ABSTRACT: Reducing bit-widths of weights, activations, and gradients of a Neural
Network can shrink its storage size and memory usage, and also allow for faster
training and inference by exploiting bitwise operations. However, previous
attempts for quantization of RNNs show considerable performance degradation
when using low bit-width weights and activations. In this paper, we propose
methods to quantize the structure of gates and interlinks in LSTM and GRU
cells. In addition, we propose balanced quantization methods for weights to
further reduce performance degradation. Experiments on PTB and IMDB datasets
confirm the effectiveness of our methods, as the performance of our models
matches or surpasses the previous state-of-the-art for quantized RNNs.
| no_new_dataset | 0.948394 |
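To make the balanced low bit-width quantization idea of 1611.10176 concrete, here is a minimal numpy sketch; the percentile-based rescaling and the 2-bit setting are illustrative assumptions, not the authors' exact scheme.

import numpy as np

def quantize_balanced(w, bits=2):
    # Uniformly quantize to 2**bits levels after a robust (percentile) rescale,
    # so the bins are not dominated by a few outlier weights. Illustrative only.
    levels = 2 ** bits
    scale = np.percentile(np.abs(w), 99) + 1e-8
    w_clipped = np.clip(w / scale, -1.0, 1.0)
    q = np.round((w_clipped + 1.0) / 2.0 * (levels - 1))
    return ((q / (levels - 1)) * 2.0 - 1.0) * scale

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(256, 256))     # e.g. an LSTM gate weight matrix
W_q = quantize_balanced(W, bits=2)
print(np.unique(np.round(W_q, 8)).size)   # at most 2**2 = 4 distinct values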
1611.10305 | Qunwei Li | Qunwei Li, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Zhenliang
Zhang, Pramod K. Varshney | Influential Node Detection in Implicit Social Networks using Multi-task
Gaussian Copula Models | NIPS 2016 Workshop, JMLR: Workshop and Conference Proceedings | null | null | null | cs.SI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Influential node detection is a central research topic in social network
analysis. Many existing methods rely on the assumption that the network
structure is completely known \textit{a priori}. However, in many applications,
network structure is unavailable to explain the underlying information
diffusion phenomenon. To address the challenge of information diffusion
analysis with incomplete knowledge of network structure, we develop a
multi-task low rank linear influence model. By exploiting the relationships
between contagions, our approach can simultaneously predict the volume (i.e.
time series prediction) for each contagion (or topic) and automatically
identify the most influential nodes for each contagion. The proposed model is
validated using synthetic data and an ISIS twitter dataset. In addition to
improving the volume prediction performance significantly, we show that the
proposed approach can reliably infer the most influential users for specific
contagions.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2016 18:46:55 GMT"
}
] | 2016-12-01T00:00:00 | [
[
"Li",
"Qunwei",
""
],
[
"Kailkhura",
"Bhavya",
""
],
[
"Thiagarajan",
"Jayaraman J.",
""
],
[
"Zhang",
"Zhenliang",
""
],
[
"Varshney",
"Pramod K.",
""
]
] | TITLE: Influential Node Detection in Implicit Social Networks using Multi-task
Gaussian Copula Models
ABSTRACT: Influential node detection is a central research topic in social network
analysis. Many existing methods rely on the assumption that the network
structure is completely known \textit{a priori}. However, in many applications,
network structure is unavailable to explain the underlying information
diffusion phenomenon. To address the challenge of information diffusion
analysis with incomplete knowledge of network structure, we develop a
multi-task low rank linear influence model. By exploiting the relationships
between contagions, our approach can simultaneously predict the volume (i.e.
time series prediction) for each contagion (or topic) and automatically
identify the most influential nodes for each contagion. The proposed model is
validated using synthetic data and an ISIS twitter dataset. In addition to
improving the volume prediction performance significantly, we show that the
proposed approach can reliably infer the most influential users for specific
contagions.
| no_new_dataset | 0.950915 |
1511.06744 | Yani Ioannou | Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla,
Antonio Criminisi | Training CNNs with Low-Rank Filters for Efficient Image Classification | Published as a conference paper at ICLR 2016. v3: updated ICLR
status. v2: Incorporated reviewer's feedback including: Amend Fig. 2 and 5
descriptions to explain that there are no ReLUs within the figures. Fix
headings of Table 5 - Fix typo in the sentence at bottom of page 6. Add ref.
to Predicting Parameters in Deep Learning. Fix Table 6, GMP-LR and GMP-LR-2x
had incorrect numbers of filters | International Conference on Learning Representations (ICLR), San
Juan, Puerto Rico, 2-4 May 2016 | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new method for creating computationally efficient convolutional
neural networks (CNNs) by using low-rank representations of convolutional
filters. Rather than approximating filters in previously-trained networks with
more efficient versions, we learn a set of small basis filters from scratch;
during training, the network learns to combine these basis filters into more
complex filters that are discriminative for image classification. To train such
networks, a novel weight initialization scheme is used. This allows effective
initialization of connection weights in convolutional layers composed of groups
of differently-shaped filters. We validate our approach by applying it to
several existing CNN architectures and training these networks from scratch
using the CIFAR, ILSVRC and MIT Places datasets. Our results show similar or
higher accuracy than conventional CNNs with much less compute. Applying our
method to an improved version of VGG-11 network using global max-pooling, we
achieve comparable validation accuracy using 41% less compute and only 24% of
the original VGG-11 model parameters; another variant of our method gives a 1
percentage point increase in accuracy over our improved VGG-11 model, giving a
top-5 center-crop validation accuracy of 89.7% while reducing computation by
16% relative to the original VGG-11 model. Applying our method to the GoogLeNet
architecture for ILSVRC, we achieved comparable accuracy with 26% less compute
and 41% fewer model parameters. Applying our method to a near state-of-the-art
network for CIFAR, we achieved comparable accuracy with 46% less compute and
55% fewer parameters.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 20:14:28 GMT"
},
{
"version": "v2",
"created": "Sat, 23 Jan 2016 17:07:02 GMT"
},
{
"version": "v3",
"created": "Sun, 7 Feb 2016 21:23:19 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Ioannou",
"Yani",
""
],
[
"Robertson",
"Duncan",
""
],
[
"Shotton",
"Jamie",
""
],
[
"Cipolla",
"Roberto",
""
],
[
"Criminisi",
"Antonio",
""
]
] | TITLE: Training CNNs with Low-Rank Filters for Efficient Image Classification
ABSTRACT: We propose a new method for creating computationally efficient convolutional
neural networks (CNNs) by using low-rank representations of convolutional
filters. Rather than approximating filters in previously-trained networks with
more efficient versions, we learn a set of small basis filters from scratch;
during training, the network learns to combine these basis filters into more
complex filters that are discriminative for image classification. To train such
networks, a novel weight initialization scheme is used. This allows effective
initialization of connection weights in convolutional layers composed of groups
of differently-shaped filters. We validate our approach by applying it to
several existing CNN architectures and training these networks from scratch
using the CIFAR, ILSVRC and MIT Places datasets. Our results show similar or
higher accuracy than conventional CNNs with much less compute. Applying our
method to an improved version of VGG-11 network using global max-pooling, we
achieve comparable validation accuracy using 41% less compute and only 24% of
the original VGG-11 model parameters; another variant of our method gives a 1
percentage point increase in accuracy over our improved VGG-11 model, giving a
top-5 center-crop validation accuracy of 89.7% while reducing computation by
16% relative to the original VGG-11 model. Applying our method to the GoogLeNet
architecture for ILSVRC, we achieved comparable accuracy with 26% less compute
and 41% fewer model parameters. Applying our method to a near state-of-the-art
network for CIFAR, we achieved comparable accuracy with 46% less compute and
55% fewer parameters.
| no_new_dataset | 0.953275 |
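The efficiency argument behind the low-rank filter bases of 1511.06744 can be seen in a few lines of numpy/scipy; the rank-1 3x3 example below is a toy illustration, not the paper's learned basis combination or training procedure.

import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
v = rng.normal(size=(3, 1))            # vertical basis filter (3 weights)
h = rng.normal(size=(1, 3))            # horizontal basis filter (3 weights)
full = v @ h                           # equivalent rank-1 3x3 filter (9 weights)

img = rng.normal(size=(32, 32))
out_separable = convolve2d(convolve2d(img, v, mode="same"), h, mode="same")
out_full = convolve2d(img, full, mode="same")
print(np.allclose(out_separable, out_full))   # True: same response, fewer weights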
1511.09231 | Zhun Sun | Zhun Sun, Mete Ozay, Takayuki Okatani | Design of Kernels in Convolutional Neural Networks for Image
Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the effectiveness of Convolutional Neural Networks (CNNs) for image
classification, our understanding of the relationship between shape of
convolution kernels and learned representations is limited. In this work, we
explore and employ the relationship between shape of kernels which define
Receptive Fields (RFs) in CNNs for learning of feature representations and
image classification. For this purpose, we first propose a feature
visualization method for visualization of pixel-wise classification score maps
of learned features. Motivated by our experimental results, and observations
reported in the literature for modeling of visual systems, we propose a novel
design of shape of kernels for learning of representations in CNNs. In the
experimental results, we achieved a state-of-the-art classification performance
compared to a base CNN model [28] by reducing the number of parameters and
computational time of the model using the ILSVRC-2012 dataset [24]. The
proposed models also outperform the state-of-the-art models employed on the
CIFAR-10/100 datasets [12] for image classification. Additionally, we analyzed
the robustness of the proposed method to occlusion for classification of
partially occluded images compared with the state-of-the-art methods. Our
results indicate the effectiveness of the proposed approach. The code is
available in github.com/minogame/caffe-qhconv.
| [
{
"version": "v1",
"created": "Mon, 30 Nov 2015 10:30:35 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2016 11:59:08 GMT"
},
{
"version": "v3",
"created": "Tue, 29 Nov 2016 04:11:58 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Sun",
"Zhun",
""
],
[
"Ozay",
"Mete",
""
],
[
"Okatani",
"Takayuki",
""
]
] | TITLE: Design of Kernels in Convolutional Neural Networks for Image
Classification
ABSTRACT: Despite the effectiveness of Convolutional Neural Networks (CNNs) for image
classification, our understanding of the relationship between shape of
convolution kernels and learned representations is limited. In this work, we
explore and employ the relationship between shape of kernels which define
Receptive Fields (RFs) in CNNs for learning of feature representations and
image classification. For this purpose, we first propose a feature
visualization method for visualization of pixel-wise classification score maps
of learned features. Motivated by our experimental results, and observations
reported in the literature for modeling of visual systems, we propose a novel
design of shape of kernels for learning of representations in CNNs. In the
experimental results, we achieved a state-of-the-art classification performance
compared to a base CNN model [28] by reducing the number of parameters and
computational time of the model using the ILSVRC-2012 dataset [24]. The
proposed models also outperform the state-of-the-art models employed on the
CIFAR-10/100 datasets [12] for image classification. Additionally, we analyzed
the robustness of the proposed method to occlusion for classification of
partially occluded images compared with the state-of-the-art methods. Our
results indicate the effectiveness of the proposed approach. The code is
available in github.com/minogame/caffe-qhconv.
| no_new_dataset | 0.948537 |
1604.01729 | Subhashini Venugopalan | Subhashini Venugopalan, Lisa Anne Hendricks, Raymond Mooney, Kate
Saenko | Improving LSTM-based Video Description with Linguistic Knowledge Mined
from Text | Accepted at EMNLP 2016. Project page:
http://vsubhashini.github.io/language_fusion.html | Proc.EMNLP (2016) pg.1961-1966 | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates how linguistic knowledge mined from large text
corpora can aid the generation of natural language descriptions of videos.
Specifically, we integrate both a neural language model and distributional
semantics trained on large text corpora into a recent LSTM-based architecture
for video description. We evaluate our approach on a collection of Youtube
videos as well as two large movie description datasets showing significant
improvements in grammaticality while modestly improving descriptive quality.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 19:01:28 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2016 20:37:42 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Venugopalan",
"Subhashini",
""
],
[
"Hendricks",
"Lisa Anne",
""
],
[
"Mooney",
"Raymond",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Improving LSTM-based Video Description with Linguistic Knowledge Mined
from Text
ABSTRACT: This paper investigates how linguistic knowledge mined from large text
corpora can aid the generation of natural language descriptions of videos.
Specifically, we integrate both a neural language model and distributional
semantics trained on large text corpora into a recent LSTM-based architecture
for video description. We evaluate our approach on a collection of Youtube
videos as well as two large movie description datasets showing significant
improvements in grammaticality while modestly improving descriptive quality.
| no_new_dataset | 0.953101 |
1611.07285 | Soumya Roy | Soumya Roy, Vinay P. Namboodiri, Arijit Biswas | Active learning with version spaces for object detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given an image, we would like to learn to detect objects belonging to
particular object categories. Common object detection methods train on large
annotated datasets which are annotated in terms of bounding boxes that contain
the object of interest. Previous works on object detection model the problem as
a structured regression problem which ranks the correct bounding boxes more
than the background ones. In this paper we develop algorithms which actively
obtain annotations from human annotators for a small set of images, instead of
all images, thereby reducing the annotation effort. Towards this goal, we make
the following contributions: 1. We develop a principled version space based
active learning method that solves for object detection as a structured
prediction problem in a weakly supervised setting 2. We also propose two
variants of the margin sampling strategy 3. We analyse the results on standard
object detection benchmarks that show that with only 20% of the data we can
obtain more than 95% of the localization accuracy of full supervision. Our
methods outperform random sampling and the classical uncertainty-based active
learning algorithms such as entropy sampling.
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2016 12:58:24 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2016 06:47:29 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Roy",
"Soumya",
""
],
[
"Namboodiri",
"Vinay P.",
""
],
[
"Biswas",
"Arijit",
""
]
] | TITLE: Active learning with version spaces for object detection
ABSTRACT: Given an image, we would like to learn to detect objects belonging to
particular object categories. Common object detection methods train on large
annotated datasets which are annotated in terms of bounding boxes that contain
the object of interest. Previous works on object detection model the problem as
a structured regression problem which ranks the correct bounding boxes more
than the background ones. In this paper we develop algorithms which actively
obtain annotations from human annotators for a small set of images, instead of
all images, thereby reducing the annotation effort. Towards this goal, we make
the following contributions: 1. We develop a principled version space based
active learning method that solves for object detection as a structured
prediction problem in a weakly supervised setting 2. We also propose two
variants of the margin sampling strategy 3. We analyse the results on standard
object detection benchmarks that show that with only 20% of the data we can
obtain more than 95% of the localization accuracy of full supervision. Our
methods outperform random sampling and the classical uncertainty-based active
learning algorithms such as entropy sampling.
| no_new_dataset | 0.947672 |
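For the margin-sampling variants mentioned in 1611.07285, a self-contained sketch of the basic selection rule; the score matrix here is a random placeholder rather than the output of a structured detector.

import numpy as np

def margin_sampling(scores, budget):
    # Query the examples whose two highest class scores are closest,
    # i.e. where the current model is least decisive.
    top2 = np.sort(scores, axis=1)[:, -2:]
    margins = top2[:, 1] - top2[:, 0]
    return np.argsort(margins)[:budget]

rng = np.random.default_rng(0)
pool_scores = rng.random((1000, 21))          # placeholder: 20 classes + background
query_ids = margin_sampling(pool_scores, budget=50)
print(query_ids[:10])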
1611.08991 | Long Jin | Long Jin, Zeyu Chen, Zhuowen Tu | Object Detection Free Instance Segmentation With Labeling
Transformations | 10 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instance segmentation has attracted recent attention in computer vision and
existing methods in this domain mostly have an object detection stage. In this
paper, we study the intrinsic challenge of the instance segmentation problem,
the presence of a quotient space (swapping the labels of different instances
leads to the same result), and propose new methods that are object proposal-
and object detection-free. We propose three alternative methods, namely
pixel-based affinity mapping, superpixel-based affinity learning, and
boundary-based component segmentation, all focusing on performing labeling
transformations to cope with the quotient space problem. By adopting fully
convolutional neural networks (FCN) like models, our framework attains
competitive results on both the PASCAL dataset (object-centric) and the Gland
dataset (texture-centric), which the existing methods are not able to do. Our
work also has the advantages in its transparency, simplicity, and being all
segmentation based.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 05:52:37 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2016 05:42:11 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Jin",
"Long",
""
],
[
"Chen",
"Zeyu",
""
],
[
"Tu",
"Zhuowen",
""
]
] | TITLE: Object Detection Free Instance Segmentation With Labeling
Transformations
ABSTRACT: Instance segmentation has attracted recent attention in computer vision and
existing methods in this domain mostly have an object detection stage. In this
paper, we study the intrinsic challenge of the instance segmentation problem,
the presence of a quotient space (swapping the labels of different instances
leads to the same result), and propose new methods that are object proposal-
and object detection-free. We propose three alternative methods, namely
pixel-based affinity mapping, superpixel-based affinity learning, and
boundary-based component segmentation, all focusing on performing labeling
transformations to cope with the quotient space problem. By adopting fully
convolutional neural networks (FCN) like models, our framework attains
competitive results on both the PASCAL dataset (object-centric) and the Gland
dataset (texture-centric), which the existing methods are not able to do. Our
work also has the advantages in its transparency, simplicity, and being all
segmentation based.
| no_new_dataset | 0.953144 |
1611.09418 | Tao Lu | Hongrui Wang and Tao Lu and Xiaodai Dong and Peixue Li and Michael Xie | Hierarchical Online Intrusion Detection for SCADA Networks | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel hierarchical online intrusion detection system (HOIDS) for
supervisory control and data acquisition (SCADA) networks based on machine
learning algorithms. By utilizing the server-client topology while keeping
clients distributed for global protection, high detection rate is achieved with
minimum network impact. We implement accurate models of normal-abnormal binary
detection and multi-attack identification based on logistic regression and
quasi-Newton optimization algorithm using the Broyden-Fletcher-Goldfarb-Shanno
approach. The detection system is capable of accelerating detection by
information gain based feature selection or principal component analysis based
dimension reduction. By evaluating our system using the KDD99 dataset and the
industrial control system dataset, we demonstrate that HOIDS is highly
scalable, efficient and cost effective for securing SCADA infrastructures.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 22:54:48 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Wang",
"Hongrui",
""
],
[
"Lu",
"Tao",
""
],
[
"Dong",
"Xiaodai",
""
],
[
"Li",
"Peixue",
""
],
[
"Xie",
"Michael",
""
]
] | TITLE: Hierarchical Online Intrusion Detection for SCADA Networks
ABSTRACT: We propose a novel hierarchical online intrusion detection system (HOIDS) for
supervisory control and data acquisition (SCADA) networks based on machine
learning algorithms. By utilizing the server-client topology while keeping
clients distributed for global protection, high detection rate is achieved with
minimum network impact. We implement accurate models of normal-abnormal binary
detection and multi-attack identification based on logistic regression and
quasi-Newton optimization algorithm using the Broyden-Fletcher-Goldfarb-Shanno
approach. The detection system is capable of accelerating detection by
information gain based feature selection or principal component analysis based
dimension reduction. By evaluating our system using the KDD99 dataset and the
industrial control system dataset, we demonstrate that HOIDS is highly
scalable, efficient and cost effective for securing SCADA infrastructures.
| no_new_dataset | 0.945601 |
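A hedged scikit-learn sketch of the detection pipeline outlined in 1611.09418 (information-gain style feature selection followed by logistic regression fit with a quasi-Newton solver); the synthetic data merely stands in for KDD99-style records.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for KDD99-style records (41 features per connection).
X, y = make_classification(n_samples=2000, n_features=41, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=15),            # information-gain style selection
    LogisticRegression(solver="lbfgs", max_iter=500),  # limited-memory BFGS solver
)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))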
1611.09502 | Ting Yao | Zhaofan Qiu, Ting Yao, Tao Mei | Deep Quantization: Encoding Convolutional Activations with Deep
Generative Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks (CNNs) have proven highly effective for
visual recognition, where learning a universal representation from the
activations of a convolutional layer is a fundamental problem. In this paper, we present
Fisher Vector encoding with Variational Auto-Encoder (FV-VAE), a novel deep
architecture that quantizes the local activations of convolutional layer in a
deep generative model, by training them in an end-to-end manner. To incorporate
FV encoding strategy into deep generative models, we introduce Variational
Auto-Encoder model, which steers a variational inference and learning in a
neural network which can be straightforwardly optimized using standard
stochastic gradient method. Different from the FV characterized by conventional
generative models (e.g., Gaussian Mixture Model) which parsimoniously fit a
discrete mixture model to data distribution, the proposed FV-VAE is more
flexible to represent the natural property of data for better generalization.
Extensive experiments are conducted on three public datasets, i.e., UCF101,
ActivityNet, and CUB-200-2011 in the context of video action recognition and
fine-grained image classification, respectively. Superior results are reported
when compared to state-of-the-art representations. Most remarkably, our
proposed FV-VAE achieves to-date the best published accuracy of 94.2% on
UCF101.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 06:07:28 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Qiu",
"Zhaofan",
""
],
[
"Yao",
"Ting",
""
],
[
"Mei",
"Tao",
""
]
] | TITLE: Deep Quantization: Encoding Convolutional Activations with Deep
Generative Model
ABSTRACT: Deep convolutional neural networks (CNNs) have proven highly effective for
visual recognition, where learning a universal representation from the
activations of a convolutional layer is a fundamental problem. In this paper, we present
Fisher Vector encoding with Variational Auto-Encoder (FV-VAE), a novel deep
architecture that quantizes the local activations of convolutional layer in a
deep generative model, by training them in an end-to-end manner. To incorporate
FV encoding strategy into deep generative models, we introduce Variational
Auto-Encoder model, which steers a variational inference and learning in a
neural network which can be straightforwardly optimized using standard
stochastic gradient method. Different from the FV characterized by conventional
generative models (e.g., Gaussian Mixture Model) which parsimoniously fit a
discrete mixture model to data distribution, the proposed FV-VAE is more
flexible to represent the natural property of data for better generalization.
Extensive experiments are conducted on three public datasets, i.e., UCF101,
ActivityNet, and CUB-200-2011 in the context of video action recognition and
fine-grained image classification, respectively. Superior results are reported
when compared to state-of-the-art representations. Most remarkably, our
proposed FV-VAE achieves to-date the best published accuracy of 94.2% on
UCF101.
| no_new_dataset | 0.950273 |
1611.09524 | Shuhui Qu | Shuhui Qu, Juncheng Li, Wei Dai, Samarjit Das | Understanding Audio Pattern Using Convolutional Neural Network From Raw
Waveforms | null | null | null | null | cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One key step in audio signal processing is to transform the raw signal into
representations that are efficient for encoding the original information.
Traditionally, people transform the audio into spectral representations, as a
function of frequency, amplitude and phase transformation. In this work, we
take a purely data-driven approach to understand the temporal dynamics of audio
at the raw signal level. We maximize the information extracted from the raw
signal through a deep convolutional neural network (CNN) model. Our CNN model
is trained on the urbansound8k dataset. We discover that salient audio patterns
embedded in the raw waveforms can be efficiently extracted through a
combination of nonlinear filters learned by the CNN model.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 08:33:48 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Qu",
"Shuhui",
""
],
[
"Li",
"Juncheng",
""
],
[
"Dai",
"Wei",
""
],
[
"Das",
"Samarjit",
""
]
] | TITLE: Understanding Audio Pattern Using Convolutional Neural Network From Raw
Waveforms
ABSTRACT: One key step in audio signal processing is to transform the raw signal into
representations that are efficient for encoding the original information.
Traditionally, people transform the audio into spectral representations, as a
function of frequency, amplitude and phase transformation. In this work, we
take a purely data-driven approach to understand the temporal dynamics of audio
at the raw signal level. We maximize the information extracted from the raw
signal through a deep convolutional neural network (CNN) model. Our CNN model
is trained on the urbansound8k dataset. We discover that salient audio patterns
embedded in the raw waveforms can be efficiently extracted through a
combination of nonlinear filters learned by the CNN model.
| no_new_dataset | 0.950549 |
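A toy PyTorch sketch of a CNN that consumes raw waveforms, in the spirit of 1611.09524; the layer sizes, strides, and 10-class output are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class RawWaveformCNN(nn.Module):
    def __init__(self, n_classes=10):        # UrbanSound8K has 10 classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=32, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, 1, n_samples)
        h = self.features(x).squeeze(-1)      # -> (batch, 32)
        return self.classifier(h)

model = RawWaveformCNN()
wave = torch.randn(8, 1, 32000)               # ~2 s of 16 kHz audio per clip
print(model(wave).shape)                       # torch.Size([8, 10])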
1611.09526 | Shuhui Qu | Shuhui Qu, Juncheng Li, Wei Dai, Samarjit Das | Learning Filter Banks Using Deep Learning For Acoustic Signals | null | null | null | null | cs.SD cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing appropriate features for acoustic event recognition tasks is an
active field of research. Expressive features should both improve the
performance of the tasks and also be interpretable. Currently, heuristically
designed features based on domain knowledge require tremendous effort in
hand-crafting, while features extracted through a deep network are difficult for
humans to interpret. In this work, we explore the experience guided learning
method for designing acoustic features. This is a novel hybrid approach
combining both domain knowledge and purely data driven feature designing. Based
on the procedure of log Mel-filter banks, we design a filter bank learning
layer. We concatenate this layer with a convolutional neural network (CNN)
model. After training the network, the weight of the filter bank learning layer
is extracted to facilitate the design of acoustic features. We smooth the
trained weight of the learning layer and re-initialize it in the filter bank
learning layer as an audio feature extractor. For the environmental sound
recognition task based on the UrbanSound8K dataset, the experience guided
learning leads to a 2% accuracy improvement compared with the fixed feature
extractor (the log Mel-filter bank). The shapes of the new filter banks are
visualized and explained to prove the effectiveness of the feature design
process.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 08:46:26 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Qu",
"Shuhui",
""
],
[
"Li",
"Juncheng",
""
],
[
"Dai",
"Wei",
""
],
[
"Das",
"Samarjit",
""
]
] | TITLE: Learning Filter Banks Using Deep Learning For Acoustic Signals
ABSTRACT: Designing appropriate features for acoustic event recognition tasks is an
active field of research. Expressive features should both improve the
performance of the tasks and also be interpretable. Currently, heuristically
designed features based on domain knowledge require tremendous effort in
hand-crafting, while features extracted through a deep network are difficult for
humans to interpret. In this work, we explore the experience guided learning
method for designing acoustic features. This is a novel hybrid approach
combining both domain knowledge and purely data driven feature designing. Based
on the procedure of log Mel-filter banks, we design a filter bank learning
layer. We concatenate this layer with a convolutional neural network (CNN)
model. After training the network, the weight of the filter bank learning layer
is extracted to facilitate the design of acoustic features. We smooth the
trained weight of the learning layer and re-initialize it in the filter bank
learning layer as an audio feature extractor. For the environmental sound
recognition task based on the UrbanSound8K dataset, the experience guided
learning leads to a 2% accuracy improvement compared with the fixed feature
extractor (the log Mel-filter bank). The shapes of the new filter banks are
visualized and explained to prove the effectiveness of the feature design
process.
| no_new_dataset | 0.949716 |
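One way to prototype the filter bank learning layer of 1611.09526 is a trainable non-negative linear map from power-spectrum bins to log filter energies; this PyTorch sketch uses assumed sizes and a naive random initialization, not the paper's Mel-based setup.

import torch
import torch.nn as nn

class LearnableFilterBank(nn.Module):
    # Maps |STFT|^2 frames to n_filters log energies; the (clamped) weight
    # plays the role of the otherwise fixed Mel filter bank.
    def __init__(self, n_fft_bins=257, n_filters=40):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.rand(n_filters, n_fft_bins))

    def forward(self, power_spec):             # (batch, frames, n_fft_bins)
        filters = self.weight.clamp(min=0)     # keep filter responses non-negative
        return torch.log(power_spec @ filters.t() + 1e-6)

fb = LearnableFilterBank()
spec = torch.rand(4, 100, 257)                 # 4 clips, 100 spectral frames each
print(fb(spec).shape)                           # torch.Size([4, 100, 40])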
1611.09534 | Tom Zahavy | Tom Zahavy and Alessandro Magnani and Abhinandan Krishnan and Shie
Mannor | Is a picture worth a thousand words? A Deep Multi-Modal Fusion
Architecture for Product Classification in e-commerce | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classifying products into categories precisely and efficiently is a major
challenge in modern e-commerce. The high traffic of new products uploaded daily
and the dynamic nature of the categories raise the need for machine learning
models that can reduce the cost and time of human editors. In this paper, we
propose a decision level fusion approach for multi-modal product classification
using text and image inputs. We train input specific state-of-the-art deep
neural networks for each input source, show the potential of forging them
together into a multi-modal architecture and train a novel policy network that
learns to choose between them. Finally, we demonstrate that our multi-modal
network improves the top-1 accuracy % over both networks on a real-world
large-scale product classification dataset that we collected from Walmart.com.
While we focus on image-text fusion that characterizes e-commerce domains, our
algorithms can be easily applied to other modalities such as audio, video,
physical sensors, etc.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 09:05:11 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Zahavy",
"Tom",
""
],
[
"Magnani",
"Alessandro",
""
],
[
"Krishnan",
"Abhinandan",
""
],
[
"Mannor",
"Shie",
""
]
] | TITLE: Is a picture worth a thousand words? A Deep Multi-Modal Fusion
Architecture for Product Classification in e-commerce
ABSTRACT: Classifying products into categories precisely and efficiently is a major
challenge in modern e-commerce. The high traffic of new products uploaded daily
and the dynamic nature of the categories raise the need for machine learning
models that can reduce the cost and time of human editors. In this paper, we
propose a decision level fusion approach for multi-modal product classification
using text and image inputs. We train input specific state-of-the-art deep
neural networks for each input source, show the potential of forging them
together into a multi-modal architecture and train a novel policy network that
learns to choose between them. Finally, we demonstrate that our multi-modal
network improves the top-1 accuracy % over both networks on a real-world
large-scale product classification dataset that we collected from Walmart.com.
While we focus on image-text fusion that characterizes e-commerce domains, our
algorithms can be easily applied to other modalities such as audio, video,
physical sensors, etc.
| no_new_dataset | 0.934574 |
1611.09573 | Anoop V S | V. S. Anoop, S. Asharaf and P. Deepak | Learning Concept Hierarchies through Probabilistic Topic Modeling | null | International Journal of Information Processing (IJIP), Volume 10,
Issue 3, 2016 | null | null | cs.AI cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advent of semantic web, various tools and techniques have been
introduced for presenting and organizing knowledge. Concept hierarchies are one
such technique which gained significant attention due to its usefulness in
creating domain ontologies that are considered as an integral part of semantic
web. Automated concept hierarchy learning algorithms focus on extracting
relevant concepts from an unstructured text corpus and connect them by
identifying potential relations that exist between them. In this paper, we
propose a novel approach for identifying relevant concepts from plain text and
then learning a hierarchy of concepts by exploiting the subsumption relation between
them. To start with, we model topics using a probabilistic topic model and then
make use of some lightweight linguistic process to extract semantically rich
concepts. Then we connect concepts by identifying an "is-a" relationship
between pair of concepts. The proposed method is completely unsupervised and
there is no need for a domain specific training corpus for concept extraction
and learning. Experiments on large, real-world text corpora such as the BBC News
dataset and the Reuters News corpus show that the proposed method outperforms some
of the existing methods for concept extraction and efficient concept hierarchy
learning is possible if the overall task is guided by a probabilistic topic
modeling algorithm.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 11:28:59 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Anoop",
"V. S.",
""
],
[
"Asharaf",
"S.",
""
],
[
"Deepak",
"P.",
""
]
] | TITLE: Learning Concept Hierarchies through Probabilistic Topic Modeling
ABSTRACT: With the advent of semantic web, various tools and techniques have been
introduced for presenting and organizing knowledge. Concept hierarchies are one
such technique which gained significant attention due to its usefulness in
creating domain ontologies that are considered as an integral part of semantic
web. Automated concept hierarchy learning algorithms focus on extracting
relevant concepts from an unstructured text corpus and connect them by
identifying potential relations that exist between them. In this paper, we
propose a novel approach for identifying relevant concepts from plain text and
then learning a hierarchy of concepts by exploiting the subsumption relation between
them. To start with, we model topics using a probabilistic topic model and then
make use of some lightweight linguistic process to extract semantically rich
concepts. Then we connect concepts by identifying an "is-a" relationship
between pair of concepts. The proposed method is completely unsupervised and
there is no need for a domain specific training corpus for concept extraction
and learning. Experiments on large, real-world text corpora such as the BBC News
dataset and the Reuters News corpus show that the proposed method outperforms some
of the existing methods for concept extraction and efficient concept hierarchy
learning is possible if the overall task is guided by a probabilistic topic
modeling algorithm.
| no_new_dataset | 0.948346 |
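The first stage of 1611.09573 (probabilistic topic modeling over plain text) can be prototyped with scikit-learn as below; the later concept extraction and is-a linking from the paper are not reproduced, and the toy documents are invented.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the central bank raised interest rates to curb inflation",
    "the football club signed a new striker before the season",
    "researchers trained a neural network on a large text corpus",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}:", ", ".join(top))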
1611.09587 | Si Liu | Si Liu, Changhu Wang, Ruihe Qian, Han Yu, Renda Bao | Surveillance Video Parsing with Single Frame Supervision | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Surveillance video parsing, which segments the video frames into several
labels, e.g., face, pants, left-leg, has wide applications.
However, annotating all frames pixel-wise is tedious and inefficient. In this
paper, we develop a Single frame Video Parsing (SVP) method which requires only
one labeled frame per video in the training stage. To parse one particular frame,
the video segment preceding the frame is jointly considered. SVP (1) roughly
parses the frames within the video segment, (2) estimates the optical flow
between frames and (3) fuses the rough parsing results warped by optical flow
to produce the refined parsing result. The three components of SVP, namely
frame parsing, optical flow estimation and temporal fusion are integrated in an
end-to-end manner. Experimental results on two surveillance video datasets show
the superiority of SVP over state-of-the-arts.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 12:22:46 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Liu",
"Si",
""
],
[
"Wang",
"Changhu",
""
],
[
"Qian",
"Ruihe",
""
],
[
"Yu",
"Han",
""
],
[
"Bao",
"Renda",
""
]
] | TITLE: Surveillance Video Parsing with Single Frame Supervision
ABSTRACT: Surveillance video parsing, which segments the video frames into several
labels, e.g., face, pants, left-leg, has wide applications.
However, annotating all frames pixel-wise is tedious and inefficient. In this
paper, we develop a Single frame Video Parsing (SVP) method which requires only
one labeled frame per video in the training stage. To parse one particular frame,
the video segment preceding the frame is jointly considered. SVP (1) roughly
parses the frames within the video segment, (2) estimates the optical flow
between frames and (3) fuses the rough parsing results warped by optical flow
to produce the refined parsing result. The three components of SVP, namely
frame parsing, optical flow estimation and temporal fusion are integrated in an
end-to-end manner. Experimental results on two surveillance video datasets show
the superiority of SVP over state-of-the-arts.
| no_new_dataset | 0.951684 |
1611.09621 | Ankit Singh Rawat | Arya Mazumdar and Ankit Singh Rawat | Associative Memory using Dictionary Learning and Expander Decoding | To appear in AAAI 2017 | null | null | null | stat.ML cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An associative memory is a framework of content-addressable memory that
stores a collection of message vectors (or a dataset) over a neural network
while enabling a neurally feasible mechanism to recover any message in the
dataset from its noisy version. Designing an associative memory requires
addressing two main tasks: 1) learning phase: given a dataset, learn a concise
representation of the dataset in the form of a graphical model (or a neural
network), 2) recall phase: given a noisy version of a message vector from the
dataset, output the correct message vector via a neurally feasible algorithm
over the network learnt during the learning phase. This paper studies the
problem of designing a class of neural associative memories which learns a
network representation for a large dataset that ensures correction against a
large number of adversarial errors during the recall phase. Specifically, the
associative memories designed in this paper can store a dataset containing
$\exp(n)$ $n$-length message vectors over a network with $O(n)$ nodes and can
tolerate $\Omega(\frac{n}{{\rm polylog} n})$ adversarial errors. This paper
carries out this memory design by mapping the learning phase and recall phase
to the tasks of dictionary learning with a square dictionary and iterative
error correction in an expander code, respectively.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 13:27:18 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Mazumdar",
"Arya",
""
],
[
"Rawat",
"Ankit Singh",
""
]
] | TITLE: Associative Memory using Dictionary Learning and Expander Decoding
ABSTRACT: An associative memory is a framework of content-addressable memory that
stores a collection of message vectors (or a dataset) over a neural network
while enabling a neurally feasible mechanism to recover any message in the
dataset from its noisy version. Designing an associative memory requires
addressing two main tasks: 1) learning phase: given a dataset, learn a concise
representation of the dataset in the form of a graphical model (or a neural
network), 2) recall phase: given a noisy version of a message vector from the
dataset, output the correct message vector via a neurally feasible algorithm
over the network learnt during the learning phase. This paper studies the
problem of designing a class of neural associative memories which learns a
network representation for a large dataset that ensures correction against a
large number of adversarial errors during the recall phase. Specifically, the
associative memories designed in this paper can store a dataset containing
$\exp(n)$ $n$-length message vectors over a network with $O(n)$ nodes and can
tolerate $\Omega(\frac{n}{{\rm polylog} n})$ adversarial errors. This paper
carries out this memory design by mapping the learning phase and recall phase
to the tasks of dictionary learning with a square dictionary and iterative
error correction in an expander code, respectively.
| no_new_dataset | 0.943712 |
1611.09691 | Shichao Zhang | Shichao Zhang | Data Partitioning View of Mining Big Data | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are two main approximations of mining big data in memory. One is to
partition a big dataset into several subsets, so as to mine each subset in
memory. In this way, global patterns can be obtained by synthesizing all local
patterns discovered from these subsets. Another is the statistical sampling
method. This indicates that data partitioning should be an important strategy
for mining big data. This paper recalls our work on mining big data with a data
partitioning and shows some interesting findings among the local patterns
discovered from subsets of a dataset.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 16:05:56 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Zhang",
"Shichao",
""
]
] | TITLE: Data Partitioning View of Mining Big Data
ABSTRACT: There are two main approximations of mining big data in memory. One is to
partition a big dataset into several subsets, so as to mine each subset in
memory. In this way, global patterns can be obtained by synthesizing all local
patterns discovered from these subsets. Another is the statistical sampling
method. This indicates that data partitioning should be an important strategy
for mining big data. This paper recalls our work on mining big data with a data
partitioning and shows some interesting findings among the local patterns
discovered from subsets of a dataset.
| no_new_dataset | 0.949201 |
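A tiny illustration of the partition-then-synthesize strategy discussed in 1611.09691, using frequent single items as the local patterns; the transactions, threshold, and synthesis rule are all made up for the example.

from collections import Counter
from itertools import chain

transactions = [["a", "b"], ["a", "c"], ["b", "c"], ["a", "b", "c"],
                ["a", "d"], ["b", "d"], ["a", "b"], ["c", "d"]]

def frequent_items(subset, min_support=0.5):
    counts = Counter(chain.from_iterable(subset))
    return {item for item, c in counts.items() if c / len(subset) >= min_support}

# Partition the "big" dataset into subsets that each fit in memory.
subsets = [transactions[:4], transactions[4:]]
local_patterns = [frequent_items(s) for s in subsets]

# One simple synthesis rule: global candidates are items frequent in some subset.
global_candidates = set().union(*local_patterns)
print(local_patterns, global_candidates)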
1611.09769 | Shaikat Galib | Shaikat Galib, Fahima Islam, Muhammad Abir, and Hyoung-Koo Lee | Computer Aided Detection of Oral Lesions on CT Images | null | null | 10.1088/1748-0221-10-12-C12030 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oral lesions are important findings on computed tomography (CT) images. In
this study, a fully automatic method to detect oral lesions in mandibular
region from dental CT images is proposed. Two methods were developed to
recognize two types of lesions namely (1) Close border (CB) lesions and (2)
Open border (OB) lesions, which cover most of the lesion types that can be
found on CT images. For the detection of CB lesions, fifteen features were
extracted from each initial lesion candidate, and a multi-layer perceptron (MLP)
neural network was used to classify suspicious regions. Moreover, OB lesions
were detected using a rule based image processing method, where no feature
extraction or classification algorithm were used. The results were validated
using a CT dataset of 52 patients, where 22 patients had abnormalities and 30
patients were normal. Using the non-training dataset, the CB detection algorithm
yielded 71% sensitivity with 0.31 false positives per patient. Furthermore, the OB
detection algorithm achieved 100% sensitivity with 0.13 false positives per
patient. Results suggest that, the proposed framework, which consists of two
methods, has the potential to be used in clinical context, and assist
radiologists for better diagnosis.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 18:24:23 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Galib",
"Shaikat",
""
],
[
"Islam",
"Fahima",
""
],
[
"Abir",
"Muhammad",
""
],
[
"Lee",
"Hyoung-Koo",
""
]
] | TITLE: Computer Aided Detection of Oral Lesions on CT Images
ABSTRACT: Oral lesions are important findings on computed tomography (CT) images. In
this study, a fully automatic method to detect oral lesions in mandibular
region from dental CT images is proposed. Two methods were developed to
recognize two types of lesions namely (1) Close border (CB) lesions and (2)
Open border (OB) lesions, which cover most of the lesion types that can be
found on CT images. For the detection of CB lesions, fifteen features were
extracted from each initial lesion candidate, and a multi-layer perceptron (MLP)
neural network was used to classify suspicious regions. Moreover, OB lesions
were detected using a rule based image processing method, where no feature
extraction or classification algorithm were used. The results were validated
using a CT dataset of 52 patients, where 22 patients had abnormalities and 30
patients were normal. Using the non-training dataset, the CB detection algorithm
yielded 71% sensitivity with 0.31 false positives per patient. Furthermore, the OB
detection algorithm achieved 100% sensitivity with 0.13 false positives per
patient. Results suggest that, the proposed framework, which consists of two
methods, has the potential to be used in clinical context, and assist
radiologists for better diagnosis.
| new_dataset | 0.940626 |
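A hedged sketch of the CB-lesion classification step in 1611.09769 (fifteen features per candidate region fed to an MLP); the features and labels below are synthetic placeholders, not CT-derived values.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 15))                  # 15 features per lesion candidate
y = (X[:, :3].sum(axis=1) > 0).astype(int)      # toy lesion / non-lesion labels

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))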
1611.09799 | Hongyu Gong | Hongyu Gong, Suma Bhat, Pramod Viswanath | Geometry of Compositionality | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a simple test for compositionality (i.e., literal usage)
of a word or phrase in a context-specific way. The test is computationally
simple, relying on no external resources and only uses a set of trained word
vectors. Experiments show that the proposed method is competitive with state of
the art and displays high accuracy in context-specific compositionality
detection of a variety of natural language phenomena (idiomaticity, sarcasm,
metaphor) for different datasets in multiple languages. The key insight is to
connect compositionality to a curious geometric property of word embeddings,
which is of independent interest.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 19:23:41 GMT"
}
] | 2016-11-30T00:00:00 | [
[
"Gong",
"Hongyu",
""
],
[
"Bhat",
"Suma",
""
],
[
"Viswanath",
"Pramod",
""
]
] | TITLE: Geometry of Compositionality
ABSTRACT: This paper proposes a simple test for compositionality (i.e., literal usage)
of a word or phrase in a context-specific way. The test is computationally
simple, relying on no external resources and only uses a set of trained word
vectors. Experiments show that the proposed method is competitive with state of
the art and displays high accuracy in context-specific compositionality
detection of a variety of natural language phenomena (idiomaticity, sarcasm,
metaphor) for different datasets in multiple languages. The key insight is to
connect compositionality to a curious geometric property of word embeddings,
which is of independent interest.
| no_new_dataset | 0.942454 |
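A rough numpy reading of the geometric intuition in 1611.09799: a literally used target should be largely explained by the subspace spanned by its context word vectors. Random vectors stand in for trained embeddings, and this scoring rule is a simplification, not the paper's exact test.

import numpy as np

def subspace_score(target_vec, context_vecs, rank):
    # Cosine between the target and its projection onto the span of the
    # top-`rank` right singular directions of the context word vectors.
    _, _, vt = np.linalg.svd(context_vecs, full_matrices=False)
    basis = vt[:rank]
    proj = basis.T @ (basis @ target_vec)
    denom = np.linalg.norm(proj) * np.linalg.norm(target_vec) + 1e-12
    return float(proj @ target_vec / denom)

rng = np.random.default_rng(0)
context = rng.normal(size=(6, 50))        # 6 context words, 50-d embeddings
in_span = context.T @ rng.random(6)       # a vector lying in the context span
unrelated = rng.normal(size=50)           # an arbitrary direction
print(subspace_score(in_span, context, rank=6),    # ~1.0
      subspace_score(unrelated, context, rank=6))  # much smaller on average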
1410.7357 | Sonja Petrovic | Vishesh Karwa, Michael J. Pelsmajer, Sonja Petrovi\'c, Despina Stasi,
Dane Wilburne | Statistical models for cores decomposition of an undirected random graph | Subsection 3.1 is new: `Sample space restriction and degeneracy of
real-world networks'. Several clarifying comments have been added. Discussion
now mentions 2 additional specific open problems. Bibliography updated. 25
pages (including appendix), ~10 figures | null | null | null | math.ST cs.SI physics.soc-ph stat.CO stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The $k$-core decomposition is a widely studied summary statistic that
describes a graph's global connectivity structure. In this paper, we move
beyond using $k$-core decomposition as a tool to summarize a graph and propose
using $k$-core decomposition as a tool to model random graphs. We propose using
the shell distribution vector, a way of summarizing the decomposition, as a
sufficient statistic for a family of exponential random graph models. We study
the properties and behavior of the model family, implement a Markov chain Monte
Carlo algorithm for simulating graphs from the model, implement a direct
sampler from the set of graphs with a given shell distribution, and explore the
sampling distributions of some of the commonly used complementary statistics as
good candidates for heuristic model fitting. These algorithms provide first
fundamental steps necessary for solving the following problems: parameter
estimation in this ERGM, extending the model to its Bayesian relative, and
developing a rigorous methodology for testing goodness of fit of the model and
model selection. The methods are applied to a synthetic network as well as the
well-known Sampson monks dataset.
| [
{
"version": "v1",
"created": "Mon, 27 Oct 2014 19:08:50 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Oct 2015 19:59:15 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Nov 2016 15:59:31 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Karwa",
"Vishesh",
""
],
[
"Pelsmajer",
"Michael J.",
""
],
[
"Petrović",
"Sonja",
""
],
[
"Stasi",
"Despina",
""
],
[
"Wilburne",
"Dane",
""
]
] | TITLE: Statistical models for cores decomposition of an undirected random graph
ABSTRACT: The $k$-core decomposition is a widely studied summary statistic that
describes a graph's global connectivity structure. In this paper, we move
beyond using $k$-core decomposition as a tool to summarize a graph and propose
using $k$-core decomposition as a tool to model random graphs. We propose using
the shell distribution vector, a way of summarizing the decomposition, as a
sufficient statistic for a family of exponential random graph models. We study
the properties and behavior of the model family, implement a Markov chain Monte
Carlo algorithm for simulating graphs from the model, implement a direct
sampler from the set of graphs with a given shell distribution, and explore the
sampling distributions of some of the commonly used complementary statistics as
good candidates for heuristic model fitting. These algorithms provide first
fundamental steps necessary for solving the following problems: parameter
estimation in this ERGM, extending the model to its Bayesian relative, and
developing a rigorous methodology for testing goodness of fit of the model and
model selection. The methods are applied to a synthetic network as well as the
well-known Sampson monks dataset.
| no_new_dataset | 0.949902 |
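The shell distribution vector used as the sufficient statistic in 1410.7357 is straightforward to compute with networkx; the random graph below is only a stand-in for data such as the Sampson monks network.

import networkx as nx
from collections import Counter

G = nx.gnp_random_graph(200, 0.03, seed=0)    # synthetic undirected graph
core = nx.core_number(G)                       # node -> core number (shell index)
shell_distribution = Counter(core.values())    # number of nodes in each shell

for k in sorted(shell_distribution):
    print(f"shell {k}: {shell_distribution[k]} nodes")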
1511.01245 | Thierry Bouwmans | Thierry Bouwmans, Andrews Sobral, Sajid Javed, Soon Ki Jung, El-Hadi
Zahzah | Decomposition into Low-rank plus Additive Matrices for
Background/Foreground Separation: A Review for a Comparative Evaluation with
a Large-Scale Dataset | 121 pages, 5 figures, submitted to Computer Science Review. arXiv
admin note: text overlap with arXiv:1312.7167, arXiv:1109.6297,
arXiv:1207.3438, arXiv:1105.2126, arXiv:1404.7592, arXiv:1210.0805,
arXiv:1403.8067 by other authors, Computer Science Review, November 2016 | null | 10.1016/j.cosrev.2016.11.001 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research on problem formulations based on decomposition into low-rank
plus sparse matrices shows a suitable framework to separate moving objects from
the background. The most representative problem formulation is the Robust
Principal Component Analysis (RPCA) solved via Principal Component Pursuit
(PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix.
However, similar robust implicit or explicit decompositions can be made in the
following problem formulations: Robust Non-negative Matrix Factorization
(RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust
Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal
of these similar problem formulations is to obtain explicitly or implicitly a
decomposition into low-rank matrix plus additive matrices. In this context,
this work aims to initiate a rigorous and comprehensive review of the similar
problem formulations in robust subspace learning and tracking based on
decomposition into low-rank plus additive matrices for testing and ranking
existing algorithms for background/foreground separation. For this, we first
provide a preliminary review of the recent developments in the different
problem formulations which allows us to define a unified view that we called
Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we examine
carefully each method in each robust subspace learning/tracking frameworks with
their decomposition, their loss functions, their optimization problem and their
solvers. Furthermore, we investigate if incremental algorithms and real-time
implementations can be achieved for background/foreground separation. Finally,
experimental results on a large-scale dataset called Background Models
Challenge (BMC 2012) show the comparative performance of 32 different robust
subspace learning/tracking methods.
| [
{
"version": "v1",
"created": "Wed, 4 Nov 2015 08:51:59 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Nov 2015 08:35:59 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Nov 2016 12:48:44 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Bouwmans",
"Thierry",
""
],
[
"Sobral",
"Andrews",
""
],
[
"Javed",
"Sajid",
""
],
[
"Jung",
"Soon Ki",
""
],
[
"Zahzah",
"El-Hadi",
""
]
] | TITLE: Decomposition into Low-rank plus Additive Matrices for
Background/Foreground Separation: A Review for a Comparative Evaluation with
a Large-Scale Dataset
ABSTRACT: Recent research on problem formulations based on decomposition into low-rank
plus sparse matrices shows a suitable framework to separate moving objects from
the background. The most representative problem formulation is the Robust
Principal Component Analysis (RPCA) solved via Principal Component Pursuit
(PCP), which decomposes a data matrix into a low-rank matrix and a sparse matrix.
However, similar robust implicit or explicit decompositions can be made in the
following problem formulations: Robust Non-negative Matrix Factorization
(RNMF), Robust Matrix Completion (RMC), Robust Subspace Recovery (RSR), Robust
Subspace Tracking (RST) and Robust Low-Rank Minimization (RLRM). The main goal
of these similar problem formulations is to obtain explicitly or implicitly a
decomposition into low-rank matrix plus additive matrices. In this context,
this work aims to initiate a rigorous and comprehensive review of the similar
problem formulations in robust subspace learning and tracking based on
decomposition into low-rank plus additive matrices for testing and ranking
existing algorithms for background/foreground separation. For this, we first
provide a preliminary review of the recent developments in the different
problem formulations which allows us to define a unified view that we called
Decomposition into Low-rank plus Additive Matrices (DLAM). Then, we examine
carefully each method in each robust subspace learning/tracking frameworks with
their decomposition, their loss functions, their optimization problem and their
solvers. Furthermore, we investigate if incremental algorithms and real-time
implementations can be achieved for background/foreground separation. Finally,
experimental results on a large-scale dataset called Background Models
Challenge (BMC 2012) show the comparative performance of 32 different robust
subspace learning/tracking methods.
| no_new_dataset | 0.948537 |
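A compact numpy sketch of the basic RPCA-via-PCP iteration reviewed in 1511.01245 (singular value thresholding for the low-rank part, soft thresholding for the sparse part); the parameter choices are common defaults rather than any particular method from the review.

import numpy as np

def shrink(x, tau):                        # elementwise soft thresholding
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(x, tau):                           # singular value thresholding
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return (u * shrink(s, tau)) @ vt

def pcp(M, n_iter=200):
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = m * n / (4.0 * np.abs(M).sum() + 1e-12)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

rng = np.random.default_rng(0)
low_rank = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 80))
sparse = (rng.random((60, 80)) < 0.05) * rng.normal(scale=10.0, size=(60, 80))
L_hat, S_hat = pcp(low_rank + sparse)
print(np.linalg.norm(L_hat - low_rank) / np.linalg.norm(low_rank))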
1601.01432 | Xinglin Piao | Xinglin Piao, Yongli Hu, Yanfeng Sun, Junbin Gao, Baocai Yin | Block-Diagonal Sparse Representation by Learning a Linear Combination
Dictionary for Recognition | We want to withdraw this paper because we need more mathematical
derivation and experiments to support our method. Therefore, we think this
paper is not suitable to be published in this period | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a sparse representation based recognition scheme, it is critical to learn
a desired dictionary, aiming at both good representational power and
discriminative performance. In this paper, we propose a new dictionary learning
model for recognition applications, in which three strategies are adopted to
achieve these two objectives simultaneously. First, a block-diagonal constraint
is introduced into the model to eliminate the correlation between classes and
enhance the discriminative performance. Second, a low-rank term is adopted to
model the coherence within classes for refining the sparse representation of
each class. Finally, instead of using the conventional over-complete
dictionary, a specific dictionary constructed from the linear combination of
the training samples is proposed to enhance the representational power of the
dictionary and to improve the robustness of the sparse representation model.
The proposed method is tested on several public datasets. The experimental
results show the method outperforms most state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 7 Jan 2016 08:01:56 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2016 00:31:37 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Piao",
"Xinglin",
""
],
[
"Hu",
"Yongli",
""
],
[
"Sun",
"Yanfeng",
""
],
[
"Gao",
"Junbin",
""
],
[
"Yin",
"Baocai",
""
]
] | TITLE: Block-Diagonal Sparse Representation by Learning a Linear Combination
Dictionary for Recognition
ABSTRACT: In a sparse representation based recognition scheme, it is critical to learn
a desired dictionary, aiming at both good representational power and
discriminative performance. In this paper, we propose a new dictionary learning
model for recognition applications, in which three strategies are adopted to
achieve these two objectives simultaneously. First, a block-diagonal constraint
is introduced into the model to eliminate the correlation between classes and
enhance the discriminative performance. Second, a low-rank term is adopted to
model the coherence within classes for refining the sparse representation of
each class. Finally, instead of using the conventional over-complete
dictionary, a specific dictionary constructed from the linear combination of
the training samples is proposed to enhance the representational power of the
dictionary and to improve the robustness of the sparse representation model.
The proposed method is tested on several public datasets. The experimental
results show the method outperforms most state-of-the-art methods.
| no_new_dataset | 0.947866 |
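The abstract above combines three ingredients: a block-diagonal constraint, a low-rank term, and a dictionary formed as a linear combination of training samples. The sketch below is far simpler than the proposed model and is only meant to illustrate the block-diagonal idea: each test sample is coded independently over per-class sub-dictionaries built directly from training samples and classified by reconstruction residual. The function name and the ridge regularizer are assumptions made for this note.

```python
import numpy as np

def classwise_residual_classify(X_train, y_train, x, reg=0.1):
    """Code x over each class block separately (a block-diagonal code by
    construction) and return the class with the smallest reconstruction error."""
    residuals = {}
    for c in np.unique(y_train):
        Dc = X_train[:, y_train == c]                      # columns = class-c training samples
        A = Dc.T @ Dc + reg * np.eye(Dc.shape[1])          # ridge-regularized normal equations
        alpha = np.linalg.solve(A, Dc.T @ x)
        residuals[c] = np.linalg.norm(x - Dc @ alpha)
    return min(residuals, key=residuals.get)

# Toy usage: 64-dimensional samples from 3 classes, 20 training samples per class.
rng = np.random.default_rng(0)
X_train = rng.random((64, 60))
y_train = np.repeat(np.arange(3), 20)
print(classwise_residual_classify(X_train, y_train, rng.random(64)))
```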
1603.07442 | Donggeun Yoo | Donggeun Yoo, Namil Kim, Sunggyun Park, Anthony S. Paek, In So Kweon | Pixel-Level Domain Transfer | Published in ECCV 2016. Code and dataset available at dgyoo.github.io | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an image-conditional image generation model. The model transfers
an input domain to a target domain at the semantic level, and generates the target
image at the pixel level. To generate realistic target images, we employ the
real/fake-discriminator as in Generative Adversarial Nets, but also introduce a
novel domain-discriminator to make the generated image relevant to the input
image. We verify our model through a challenging task of generating a piece of
clothing from an input image of a dressed person. We present a high quality
clothing dataset containing the two domains, and succeed in demonstrating
decent results.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 05:20:59 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Aug 2016 01:20:33 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Nov 2016 13:17:40 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Yoo",
"Donggeun",
""
],
[
"Kim",
"Namil",
""
],
[
"Park",
"Sunggyun",
""
],
[
"Paek",
"Anthony S.",
""
],
[
"Kweon",
"In So",
""
]
] | TITLE: Pixel-Level Domain Transfer
ABSTRACT: We present an image-conditional image generation model. The model transfers
an input domain to a target domain at the semantic level, and generates the target
image at the pixel level. To generate realistic target images, we employ the
real/fake-discriminator as in Generative Adversarial Nets, but also introduce a
novel domain-discriminator to make the generated image relevant to the input
image. We verify our model through a challenging task of generating a piece of
clothing from an input image of a dressed person. We present a high quality
clothing dataset containing the two domains, and succeed in demonstrating
decent results.
| new_dataset | 0.948822 |
1606.06724 | Klaus Greff | Klaus Greff, Antti Rasmus, Mathias Berglund, Tele Hotloo Hao, J\"urgen
Schmidhuber, Harri Valpola | Tagger: Deep Unsupervised Perceptual Grouping | 14 pages + 5 pages supplementary, accepted at NIPS 2016 | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a framework for efficient perceptual inference that explicitly
reasons about the segmentation of its inputs and features. Rather than being
trained for any specific segmentation, our framework learns the grouping
process in an unsupervised manner or alongside any supervised task. By
enriching the representations of a neural network, we enable it to group the
representations of different objects in an iterative manner. By allowing the
system to amortize the iterative inference of the groupings, we achieve very
fast convergence. In contrast to many other recently proposed methods for
addressing multi-object scenes, our system does not assume the inputs to be
images and can therefore directly handle other modalities. For multi-digit
classification of very cluttered images that require texture segmentation, our
method offers improved classification performance over convolutional networks
despite being fully connected. Furthermore, we observe that our system greatly
improves on the semi-supervised result of a baseline Ladder network on our
dataset, indicating that segmentation can also improve sample efficiency.
| [
{
"version": "v1",
"created": "Tue, 21 Jun 2016 19:55:32 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2016 18:59:28 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Greff",
"Klaus",
""
],
[
"Rasmus",
"Antti",
""
],
[
"Berglund",
"Mathias",
""
],
[
"Hao",
"Tele Hotloo",
""
],
[
"Schmidhuber",
"Jürgen",
""
],
[
"Valpola",
"Harri",
""
]
] | TITLE: Tagger: Deep Unsupervised Perceptual Grouping
ABSTRACT: We present a framework for efficient perceptual inference that explicitly
reasons about the segmentation of its inputs and features. Rather than being
trained for any specific segmentation, our framework learns the grouping
process in an unsupervised manner or alongside any supervised task. By
enriching the representations of a neural network, we enable it to group the
representations of different objects in an iterative manner. By allowing the
system to amortize the iterative inference of the groupings, we achieve very
fast convergence. In contrast to many other recently proposed methods for
addressing multi-object scenes, our system does not assume the inputs to be
images and can therefore directly handle other modalities. For multi-digit
classification of very cluttered images that require texture segmentation, our
method offers improved classification performance over convolutional networks
despite being fully connected. Furthermore, we observe that our system greatly
improves on the semi-supervised result of a baseline Ladder network on our
dataset, indicating that segmentation can also improve sample efficiency.
| no_new_dataset | 0.941975 |
1609.08764 | Sebastien Wong | Sebastien C. Wong, Adam Gatt, Victor Stamatescu and Mark D. McDonnell | Understanding data augmentation for classification: when to warp? | 6 pages, 6 figures, DICTA 2016 conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we investigate the benefit of augmenting data with
synthetically created samples when training a machine learning classifier. Two
approaches for creating additional training samples are data warping, which
generates additional samples through transformations applied in the data-space,
and synthetic over-sampling, which creates additional samples in feature-space.
We experimentally evaluate the benefits of data augmentation for a
convolutional backpropagation-trained neural network, a convolutional support
vector machine and a convolutional extreme learning machine classifier, using
the standard MNIST handwritten digit dataset. We found that while it is
possible to perform generic augmentation in feature-space, if plausible
transforms for the data are known then augmentation in data-space provides a
greater benefit for improving performance and reducing overfitting.
| [
{
"version": "v1",
"created": "Wed, 28 Sep 2016 04:37:32 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Nov 2016 11:08:19 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Wong",
"Sebastien C.",
""
],
[
"Gatt",
"Adam",
""
],
[
"Stamatescu",
"Victor",
""
],
[
"McDonnell",
"Mark D.",
""
]
] | TITLE: Understanding data augmentation for classification: when to warp?
ABSTRACT: In this paper we investigate the benefit of augmenting data with
synthetically created samples when training a machine learning classifier. Two
approaches for creating additional training samples are data warping, which
generates additional samples through transformations applied in the data-space,
and synthetic over-sampling, which creates additional samples in feature-space.
We experimentally evaluate the benefits of data augmentation for a
convolutional backpropagation-trained neural network, a convolutional support
vector machine and a convolutional extreme learning machine classifier, using
the standard MNIST handwritten digit dataset. We found that while it is
possible to perform generic augmentation in feature-space, if plausible
transforms for the data are known then augmentation in data-space provides a
greater benefit for improving performance and reducing overfitting.
| no_new_dataset | 0.956634 |
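The two augmentation families compared above, data-space warping and feature-space synthetic over-sampling, are easy to prototype. The snippet below is an illustrative sketch rather than the paper's exact augmentation pipeline; the rotation/shift ranges and the SMOTE-style interpolation are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, shift

rng = np.random.default_rng(0)

def warp_augment(image, max_angle=15.0, max_shift=2.0):
    """Data-space augmentation: small random rotation plus translation of the image."""
    angle = rng.uniform(-max_angle, max_angle)
    dy, dx = rng.uniform(-max_shift, max_shift, size=2)
    return shift(rotate(image, angle, reshape=False, mode='nearest'),
                 (dy, dx), mode='nearest')

def synthetic_oversample(features, k=5):
    """Feature-space augmentation: interpolate a sample towards one of its
    k nearest neighbours from the same class (SMOTE-style)."""
    i = rng.integers(len(features))
    dists = np.linalg.norm(features - features[i], axis=1)
    j = rng.choice(np.argsort(dists)[1:k + 1])
    return features[i] + rng.random() * (features[j] - features[i])

# Toy usage: a dummy 28x28 "digit" and a matrix of 64-D features of one class.
warped = warp_augment(rng.random((28, 28)))
synthetic = synthetic_oversample(rng.random((100, 64)))
```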
1611.06651 | He Yang | He Yang, Hengyong Yu and Ge Wang | Deep Learning for the Classification of Lung Nodules | null | null | null | null | q-bio.QM cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning, as a promising new area of machine learning, has attracted a
rapidly increasing attention in the field of medical imaging. Compared to the
conventional machine learning methods, deep learning requires no hand-tuned
feature extractor, and has shown a superior performance in many visual object
recognition applications. In this study, we develop a deep convolutional neural
network (CNN) and apply it to thoracic CT images for the classification of lung
nodules. We present the CNN architecture and classification accuracy for the
original images of lung nodules. In order to understand the features of lung
nodules, we further construct new datasets, based on the combination of
artificial geometric nodules and some transformations of the original images,
as well as a stochastic nodule shape model. It is found that simplistic
geometric nodules cannot capture the important features of lung nodules.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2016 05:12:44 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Nov 2016 21:43:48 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Yang",
"He",
""
],
[
"Yu",
"Hengyong",
""
],
[
"Wang",
"Ge",
""
]
] | TITLE: Deep Learning for the Classification of Lung Nodules
ABSTRACT: Deep learning, as a promising new area of machine learning, has attracted a
rapidly increasing attention in the field of medical imaging. Compared to the
conventional machine learning methods, deep learning requires no hand-tuned
feature extractor, and has shown a superior performance in many visual object
recognition applications. In this study, we develop a deep convolutional neural
network (CNN) and apply it to thoracic CT images for the classification of lung
nodules. We present the CNN architecture and classification accuracy for the
original images of lung nodules. In order to understand the features of lung
nodules, we further construct new datasets, based on the combination of
artificial geometric nodules and some transformations of the original images,
as well as a stochastic nodule shape model. It is found that simplistic
geometric nodules cannot capture the important features of lung nodules.
| no_new_dataset | 0.643455 |
1611.06689 | Jiali Duan | Jiali Duan, Shuai Zhou, Jun Wan, Xiaoyuan Guo, and Stan Z. Li | Multi-Modality Fusion based on Consensus-Voting and 3D Convolution for
Isolated Gesture Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the popularity of depth-sensors such as Kinect has made depth
videos easily available while their advantages have not been fully exploited.
This paper investigates, for gesture recognition, how to explore the spatial and
temporal information complementarily embedded in RGB and depth sequences. We
propose a convolutional two-stream consensus voting network (2SCVN) which
explicitly models both the short-term and long-term structure of the RGB
sequences. To alleviate distractions from the background, a 3D depth-saliency
ConvNet stream (3DDSN) is aggregated in parallel to identify subtle motion
characteristics. These two components in a unified framework significantly
improve the recognition accuracy. On the challenging Chalearn IsoGD benchmark,
our proposed method outperforms the first place on the leader-board by a large
margin (10.29%) while also achieving the best result on the RGBD-HuDaAct dataset
(96.74%). Both quantitative experiments and qualitative analysis show the
effectiveness of our proposed framework, and code will be released to
facilitate future research.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2016 09:16:21 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2016 08:16:27 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Duan",
"Jiali",
""
],
[
"Zhou",
"Shuai",
""
],
[
"Wan",
"Jun",
""
],
[
"Guo",
"Xiaoyuan",
""
],
[
"Li",
"Stan Z.",
""
]
] | TITLE: Multi-Modality Fusion based on Consensus-Voting and 3D Convolution for
Isolated Gesture Recognition
ABSTRACT: Recently, the popularity of depth-sensors such as Kinect has made depth
videos easily available while their advantages have not been fully exploited.
This paper investigates, for gesture recognition, how to explore the spatial and
temporal information complementarily embedded in RGB and depth sequences. We
propose a convolutional two-stream consensus voting network (2SCVN) which
explicitly models both the short-term and long-term structure of the RGB
sequences. To alleviate distractions from the background, a 3D depth-saliency
ConvNet stream (3DDSN) is aggregated in parallel to identify subtle motion
characteristics. These two components in a unified framework significantly
improve the recognition accuracy. On the challenging Chalearn IsoGD benchmark,
our proposed method outperforms the first place on the leader-board by a large
margin (10.29%) while also achieving the best result on the RGBD-HuDaAct dataset
(96.74%). Both quantitative experiments and qualitative analysis show the
effectiveness of our proposed framework, and code will be released to
facilitate future research.
| no_new_dataset | 0.94428 |
1611.08512 | Xiatian Zhu | Xiaolong Ma, Xiatian Zhu, Shaogang Gong, Xudong Xie, Jianming Hu,
Kin-Man Lam, Yisheng Zhong | Person Re-Identification by Unsupervised Video Matching | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing person re-identification (ReID) methods rely only on the
spatial appearance information from either one or multiple person images,
whilst ignoring the space-time cues readily available in video or image-sequence
data. Moreover, they often assume the availability of exhaustively labelled
cross-view pairwise data for every camera pair, making them non-scalable to
ReID applications in real-world large scale camera networks. In this work, we
introduce a novel video based person ReID method capable of accurately matching
people across views from arbitrary unaligned image-sequences without any
labelled pairwise data. Specifically, we introduce a new space-time person
representation by encoding multiple granularities of spatio-temporal dynamics
in the form of time series. Moreover, a Time Shift Dynamic Time Warping (TS-DTW)
model is derived for performing automatic alignment whilst achieving data
selection and matching between inherently inaccurate and incomplete sequences
in a unified way. We further extend the TS-DTW model for accommodating multiple
feature-sequences of an image-sequence in order to fuse information from
different descriptions. Crucially, this model does not require pairwise
labelled training data (i.e. unsupervised) and is therefore readily scalable to large
scale camera networks of arbitrary camera pairs without the need for exhaustive
data annotation for every camera pair. We show the effectiveness and advantages
of the proposed method by extensive comparisons with related state-of-the-art
approaches using two benchmarking ReID datasets, PRID2011 and iLIDS-VID.
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2016 16:47:39 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2016 09:20:29 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Ma",
"Xiaolong",
""
],
[
"Zhu",
"Xiatian",
""
],
[
"Gong",
"Shaogang",
""
],
[
"Xie",
"Xudong",
""
],
[
"Hu",
"Jianming",
""
],
[
"Lam",
"Kin-Man",
""
],
[
"Zhong",
"Yisheng",
""
]
] | TITLE: Person Re-Identification by Unsupervised Video Matching
ABSTRACT: Most existing person re-identification (ReID) methods rely only on the
spatial appearance information from either one or multiple person images,
whilst ignoring the space-time cues readily available in video or image-sequence
data. Moreover, they often assume the availability of exhaustively labelled
cross-view pairwise data for every camera pair, making them non-scalable to
ReID applications in real-world large scale camera networks. In this work, we
introduce a novel video based person ReID method capable of accurately matching
people across views from arbitrary unaligned image-sequences without any
labelled pairwise data. Specifically, we introduce a new space-time person
representation by encoding multiple granularities of spatio-temporal dynamics
in the form of time series. Moreover, a Time Shift Dynamic Time Warping (TS-DTW)
model is derived for performing automatic alignment whilst achieving data
selection and matching between inherently inaccurate and incomplete sequences
in a unified way. We further extend the TS-DTW model for accommodating multiple
feature-sequences of an image-sequence in order to fuse information from
different descriptions. Crucially, this model does not require pairwise
labelled training data (i.e. unsupervised) and is therefore readily scalable to large
scale camera networks of arbitrary camera pairs without the need for exhaustive
data annotation for every camera pair. We show the effectiveness and advantages
of the proposed method by extensive comparisons with related state-of-the-art
approaches using two benchmarking ReID datasets, PRID2011 and iLIDS-VID.
| no_new_dataset | 0.952838 |
1611.08624 | Odemir Bruno PhD | Lucas Correia Ribas, Odemir Martinez Bruno | Fast deterministic tourist walk for texture analysis | 7 page, 7 figure | WVC 2016 proceedings p45-50 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deterministic tourist walk (DTW) has attracted increasing interest in
computer vision. In recent years, different methods for the analysis of dynamic
and static textures have been proposed. So far, all works based on the DTW for
texture analysis use all image pixels as initial points of walks. However, this
requires considerable runtime. In this paper, we conducted a study to verify the
performance of the DTW method according to the number of initial points to
start a walk. The proposed method assigns a unique code to each image pixel,
then, the pixels whose code is not divisible by a given $k$ value are ignored
as initial points of walks. Feature vectors were extracted and a classification
process was performed for different percentages of initial points. Experimental
results on the Brodatz and Vistex datasets indicate that using fewer pixels as
initial points significantly improves the runtime compared to using all image
pixels. In addition, the correct classification rate decreases very little.
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2016 22:21:05 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Ribas",
"Lucas Correia",
""
],
[
"Bruno",
"Odemir Martinez",
""
]
] | TITLE: Fast deterministic tourist walk for texture analysis
ABSTRACT: Deterministic tourist walk (DTW) has attracted increasing interest in
computer vision. In recent years, different methods for the analysis of dynamic
and static textures have been proposed. So far, all works based on the DTW for
texture analysis use all image pixels as initial points of walks. However, this
requires considerable runtime. In this paper, we conducted a study to verify the
performance of the DTW method according to the number of initial points to
start a walk. The proposed method assigns a unique code to each image pixel,
then, the pixels whose code is not divisible by a given $k$ value are ignored
as initial points of walks. Feature vectors were extracted and a classification
process was performed for different percentages of initial points. Experimental
results on the Brodatz and Vistex datasets indicate that using fewer pixels as
initial points significantly improves the runtime compared to using all image
pixels. In addition, the correct classification rate decreases very little.
| no_new_dataset | 0.950273 |
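The speed-up studied above, starting walks only from pixels whose code is divisible by k, can be sketched with a much simplified deterministic tourist walk. The walker below moves to the 8-neighbour with the most similar intensity that is not among its last mu visited pixels and simply stops after a step budget instead of detecting attractors, so it illustrates the subsampling idea rather than faithfully reimplementing the method; the histogram feature and all parameter values are assumptions.

```python
import numpy as np

def tourist_walk_features(img, k=4, mu=2, max_steps=100):
    """Run simplified deterministic tourist walks from every pixel whose linear
    code (row * width + col) is divisible by k, and return a histogram of walk
    lengths as a texture descriptor."""
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    lengths = []
    for code in range(0, h * w, k):                    # skip codes not divisible by k
        r, c = divmod(code, w)
        recent, steps = [(r, c)], 0
        while steps < max_steps:
            cand = [(abs(float(img[r + dr, c + dc]) - float(img[r, c])), r + dr, c + dc)
                    for dr, dc in offsets
                    if 0 <= r + dr < h and 0 <= c + dc < w
                    and (r + dr, c + dc) not in recent]
            if not cand:                               # nowhere left to go
                break
            _, r, c = min(cand)                        # most similar allowed neighbour
            recent = (recent + [(r, c)])[-mu:]         # memory of the last mu pixels
            steps += 1
        lengths.append(steps)
    return np.histogram(lengths, bins=10, range=(0, max_steps))[0]

feat = tourist_walk_features(np.random.default_rng(0).random((32, 32)), k=4)
```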
1611.08655 | Vikraman Karunanidhi | K.Vikraman | A Deep Neural Network to identify foreshocks in real time | Paper on earthquake prediction based on deep learning approach. 6
figures, two tables and 4 pages in total | null | null | null | physics.geo-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foreshock events provide valuable insight to predict imminent major
earthquakes. However, it is difficult to identify them in real time. In this
paper, I propose an algorithm based on deep learning to instantaneously
classify a seismic waveform as a foreshock, mainshock or an aftershock event
achieving a high accuracy of 99% in classification. As a result, this is by far
the most reliable method to predict major earthquakes that are preceded by
foreshocks. In addition, I discuss methods to create an earthquake dataset that
is compatible with deep networks.
| [
{
"version": "v1",
"created": "Sat, 26 Nov 2016 04:19:54 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Vikraman",
"K.",
""
]
] | TITLE: A Deep Neural Network to identify foreshocks in real time
ABSTRACT: Foreshock events provide valuable insight to predict imminent major
earthquakes. However, it is difficult to identify them in real time. In this
paper, I propose an algorithm based on deep learning to instantaneously
classify a seismic waveform as a foreshock, mainshock or an aftershock event
achieving a high accuracy of 99% in classification. As a result, this is by far
the most reliable method to predict major earthquakes that are preceded by
foreshocks. In addition, I discuss methods to create an earthquake dataset that
is compatible with deep networks.
| new_dataset | 0.946349 |
1611.08754 | Lex Fridman | Lex Fridman, Heishiro Toyoda, Sean Seaman, Bobbie Seppelt, Linda
Angell, Joonbum Lee, Bruce Mehler, Bryan Reimer | What Can Be Predicted from Six Seconds of Driver Glances? | null | null | null | null | cs.CV cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a large dataset of real-world, on-road driving from a 100-car
naturalistic study to explore the predictive power of driver glances and,
specifically, to answer the following question: what can be predicted about the
state of the driver and the state of the driving environment from a 6-second
sequence of macro-glances? The context-based nature of such glances allows for
application of supervised learning to the problem of vision-based gaze
estimation, making it robust, accurate, and reliable in messy, real-world
conditions. So, it is valuable to ask whether such macro-glances can be used to
infer behavioral, environmental, and demographic variables. We analyze 27
binary classification problems based on these variables. The takeaway is that
glance can be used as part of a multi-sensor real-time system to predict
radio-tuning, fatigue state, failure to signal, talking, and several
environment variables.
| [
{
"version": "v1",
"created": "Sat, 26 Nov 2016 22:41:51 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Fridman",
"Lex",
""
],
[
"Toyoda",
"Heishiro",
""
],
[
"Seaman",
"Sean",
""
],
[
"Seppelt",
"Bobbie",
""
],
[
"Angell",
"Linda",
""
],
[
"Lee",
"Joonbum",
""
],
[
"Mehler",
"Bruce",
""
],
[
"Reimer",
"Bryan",
""
]
] | TITLE: What Can Be Predicted from Six Seconds of Driver Glances?
ABSTRACT: We consider a large dataset of real-world, on-road driving from a 100-car
naturalistic study to explore the predictive power of driver glances and,
specifically, to answer the following question: what can be predicted about the
state of the driver and the state of the driving environment from a 6-second
sequence of macro-glances? The context-based nature of such glances allows for
application of supervised learning to the problem of vision-based gaze
estimation, making it robust, accurate, and reliable in messy, real-world
conditions. So, it is valuable to ask whether such macro-glances can be used to
infer behavioral, environmental, and demographic variables. We analyze 27
binary classification problems based on these variables. The takeaway is that
glance can be used as part of a multi-sensor real-time system to predict
radio-tuning, fatigue state, failure to signal, talking, and several
environment variables.
| no_new_dataset | 0.927034 |
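One way to make the study above concrete is to encode a 6-second macro-glance sequence as a small hand-crafted feature vector and feed it to an off-the-shelf classifier for each binary prediction problem. The region names, frame rate, features and choice of random forest below are all assumptions for illustration; they are not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

REGIONS = ['road', 'left', 'right', 'rearview', 'center_stack', 'instrument_cluster']

def glance_features(seq):
    """Encode a macro-glance sequence (one region label per frame) as the fraction
    of time spent in each region plus the number of glance transitions."""
    seq = np.asarray(seq)
    fractions = [float((seq == region).mean()) for region in REGIONS]
    transitions = float((seq[1:] != seq[:-1]).sum())
    return np.array(fractions + [transitions])

# Hypothetical toy data: 200 six-second sequences sampled at 10 Hz, binary labels.
rng = np.random.default_rng(0)
X = np.stack([glance_features(rng.choice(REGIONS, size=60)) for _ in range(200)])
y = rng.integers(0, 2, size=200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```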
1611.08780 | Yale Song | Yale Song | Real-Time Video Highlights for Yahoo Esports | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Esports has gained global popularity in recent years and several companies
have started offering live streaming videos of esports games and events. This
creates opportunities to develop large scale video understanding systems for
new product features and services. We present a technique for detecting
highlights from live streaming videos of esports game matches. Most video games
use pronounced visual effects to emphasize highlight moments; we use CNNs to
learn convolution filters of those visual effects for detecting highlights. We
propose a cascaded prediction approach that allows us to deal with several
challenges that arise in a production environment. We demonstrate our technique on
our new dataset of three popular game titles, Heroes of the Storm, League of
Legends, and Dota 2. Our technique achieves 18 FPS on a single CPU with an
average precision of up to 83.18%. Part of our technique is currently deployed
in production on Yahoo Esports.
| [
{
"version": "v1",
"created": "Sun, 27 Nov 2016 03:58:41 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Song",
"Yale",
""
]
] | TITLE: Real-Time Video Highlights for Yahoo Esports
ABSTRACT: Esports has gained global popularity in recent years and several companies
have started offering live streaming videos of esports games and events. This
creates opportunities to develop large scale video understanding systems for
new product features and services. We present a technique for detecting
highlights from live streaming videos of esports game matches. Most video games
use pronounced visual effects to emphasize highlight moments; we use CNNs to
learn convolution filters of those visual effects for detecting highlights. We
propose a cascaded prediction approach that allows us to deal with several
challenges that arise in a production environment. We demonstrate our technique on
our new dataset of three popular game titles, Heroes of the Storm, League of
Legends, and Dota 2. Our technique achieves 18 FPS on a single CPU with an
average precision of up to 83.18%. Part of our technique is currently deployed
in production on Yahoo Esports.
| new_dataset | 0.952353 |
1611.08789 | Biswarup Bhattacharya | Arna Ghosh, Biswarup Bhattacharya, Somnath Basu Roy Chowdhury | Handwriting Profiling using Generative Adversarial Networks | 2 pages; 2 figures; Accepted at The Thirty-First AAAI Conference on
Artificial Intelligence (AAAI-17 Student Abstract and Poster Program), San
Francisco, USA; All authors have equal contribution | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Handwriting is a skill learned by humans from a very early age. The ability
to develop one's own unique handwriting as well as mimic another person's
handwriting is a task learned by the brain with practice. This paper deals with
this very problem where an intelligent system tries to learn the handwriting of
an entity using Generative Adversarial Networks (GANs). We propose a modified
architecture of DCGAN (Radford, Metz, and Chintala 2015) to achieve this. We
also discuss applying reinforcement learning techniques to achieve faster
learning. We hope our algorithm gives new insights in this area; its uses
include identification of forged documents, signature verification,
computer-generated art, and digitization of documents, among others. Our early
implementation of the algorithm demonstrates good performance on the MNIST dataset.
| [
{
"version": "v1",
"created": "Sun, 27 Nov 2016 05:02:47 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Ghosh",
"Arna",
""
],
[
"Bhattacharya",
"Biswarup",
""
],
[
"Chowdhury",
"Somnath Basu Roy",
""
]
] | TITLE: Handwriting Profiling using Generative Adversarial Networks
ABSTRACT: Handwriting is a skill learned by humans from a very early age. The ability
to develop one's own unique handwriting as well as mimic another person's
handwriting is a task learned by the brain with practice. This paper deals with
this very problem where an intelligent system tries to learn the handwriting of
an entity using Generative Adversarial Networks (GANs). We propose a modified
architecture of DCGAN (Radford, Metz, and Chintala 2015) to achieve this. We
also discuss applying reinforcement learning techniques to achieve faster
learning. We hope our algorithm gives new insights in this area; its uses
include identification of forged documents, signature verification,
computer-generated art, and digitization of documents, among others. Our early
implementation of the algorithm demonstrates good performance on the MNIST dataset.
| no_new_dataset | 0.947624 |
1611.08812 | Yulia Dodonova | Yulia Dodonova, Mikhail Belyaev, Anna Tkachev, Dmitry Petrov, and
Leonid Zhukov | Kernel classification of connectomes based on earth mover's distance
between graph spectra | Presented at The MICCAI-BACON 16 Workshop (arXiv:1611.03363) | null | null | BACON/2016/05 | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we tackle a problem of predicting phenotypes from structural
connectomes. We propose that normalized Laplacian spectra can capture
structural properties of brain networks, and hence graph spectral distributions
are useful for a task of connectome-based classification. We introduce a kernel
that is based on earth mover's distance (EMD) between spectral distributions of
brain networks. We assess the performance of an SVM classifier with the proposed
kernel for a task of classification of autism spectrum disorder versus typical
development based on a publicly available dataset. Classification quality (area
under the ROC-curve) obtained with the EMD-based kernel on spectral
distributions is 0.71, which is higher than that based on simpler graph
embedding methods.
| [
{
"version": "v1",
"created": "Sun, 27 Nov 2016 09:35:04 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Dodonova",
"Yulia",
""
],
[
"Belyaev",
"Mikhail",
""
],
[
"Tkachev",
"Anna",
""
],
[
"Petrov",
"Dmitry",
""
],
[
"Zhukov",
"Leonid",
""
]
] | TITLE: Kernel classification of connectomes based on earth mover's distance
between graph spectra
ABSTRACT: In this paper, we tackle a problem of predicting phenotypes from structural
connectomes. We propose that normalized Laplacian spectra can capture
structural properties of brain networks, and hence graph spectral distributions
are useful for a task of connectome-based classification. We introduce a kernel
that is based on earth mover's distance (EMD) between spectral distributions of
brain networks. We assess the performance of an SVM classifier with the proposed
kernel for a task of classification of autism spectrum disorder versus typical
development based on a publicly available dataset. Classification quality (area
under the ROC-curve) obtained with the EMD-based kernel on spectral
distributions is 0.71, which is higher than that based on simpler graph
embedding methods.
| no_new_dataset | 0.952662 |
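The kernel described above is straightforward to prototype: compute the normalized Laplacian spectrum of each connectome, measure the 1D earth mover's distance between spectra, and exponentiate it into a kernel for an SVM. The sketch below follows that recipe on random toy graphs; the bandwidth gamma and the toy data are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.svm import SVC

def normalized_laplacian_spectrum(A):
    """Eigenvalues of the normalized Laplacian I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(L)

def emd_kernel(spectra, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * EMD(spectrum_i, spectrum_j))."""
    n = len(spectra)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = np.exp(-gamma * wasserstein_distance(spectra[i], spectra[j]))
    return K

# Toy usage: 20 random symmetric "connectomes" with two balanced classes.
rng = np.random.default_rng(0)
graphs = []
for _ in range(20):
    W = rng.random((30, 30))
    graphs.append((W + W.T) / 2.0)
spectra = [normalized_laplacian_spectrum(A) for A in graphs]
labels = np.array([0] * 10 + [1] * 10)
clf = SVC(kernel='precomputed').fit(emd_kernel(spectra), labels)
```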
1611.08813 | Hila Gonen | Hila Gonen and Yoav Goldberg | Semi Supervised Preposition-Sense Disambiguation using Multilingual Data | 12 pages; COLING 2016 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Prepositions are very common and very ambiguous, and understanding their
sense is critical for understanding the meaning of the sentence. Supervised
corpora for the preposition-sense disambiguation task are small, suggesting a
semi-supervised approach to the task. We show that signals from unannotated
multilingual data can be used to improve supervised preposition-sense
disambiguation. Our approach pre-trains an LSTM encoder for predicting the
translation of a preposition, and then incorporates the pre-trained encoder as
a component in a supervised classification system, and fine-tunes it for the
task. The multilingual signals consistently improve results on two
preposition-sense datasets.
| [
{
"version": "v1",
"created": "Sun, 27 Nov 2016 09:53:36 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Gonen",
"Hila",
""
],
[
"Goldberg",
"Yoav",
""
]
] | TITLE: Semi Supervised Preposition-Sense Disambiguation using Multilingual Data
ABSTRACT: Prepositions are very common and very ambiguous, and understanding their
sense is critical for understanding the meaning of the sentence. Supervised
corpora for the preposition-sense disambiguation task are small, suggesting a
semi-supervised approach to the task. We show that signals from unannotated
multilingual data can be used to improve supervised preposition-sense
disambiguation. Our approach pre-trains an LSTM encoder for predicting the
translation of a preposition, and then incorporates the pre-trained encoder as
a component in a supervised classification system, and fine-tunes it for the
task. The multilingual signals consistently improve results on two
preposition-sense datasets.
| no_new_dataset | 0.953535 |
1611.08839 | Yasin Orouskhani | Yasin Orouskhani, Leili Tavabi | Ranking Research Institutions Based On Related Academic Conferences | 3 pages, 3 tables , ranked 12nd in KDD Cup 2016 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The detection of influential nodes in a social network is an active research
area with many valuable applications including marketing and advertisement. As
a new application in academia, KDD Cup 2016 shed light on the lack of an
existing objective ranking for institutions within their respective research
areas and proposed a solution for it. In this problem, the academic fields are
defined as social networks whose nodes are the active institutions within the
field, with the most influential nodes representing the highest contributors.
The solution is able to provide a ranking of active institutions within their
specific domains. The problem statement provided an annual scoring mechanism
for institutions based on their publications and encouraged the use of any
publicly available dataset such as the Microsoft Academic Graph (MAG). The
contest was focused on research publications in selected conferences and asked
for a prediction of the ranking for active institutions within those
conferences in 2016. It should be noted that the results of the paper
submissions and therefore the ground truths for KDD Cup were unknown at the
time of the contest. Each team's final ranking list was evaluated by a metric
called NDCG@20 after the results were released. This metric was used to
indicate the distance between each team's proposed ranking and the actual one
once it was known. After computing the scores of institutions for each year
starting from 2011, we aggregated the rankings by summing the normalized scores
across the years and using the final score set to provide the final ranking.
Since the 2016 ground truths were unknown, we utilized the scores from
2011-2014 and used the 2015 publications as a test bed for evaluating our
aggregation method. Based on the testing, summing the normalized scores got us
closest to the actual 2015 rankings, and we used the same heuristic for predicting the
2016 results.
| [
{
"version": "v1",
"created": "Sun, 27 Nov 2016 13:21:32 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Orouskhani",
"Yasin",
""
],
[
"Tavabi",
"Leili",
""
]
] | TITLE: Ranking Research Institutions Based On Related Academic Conferences
ABSTRACT: The detection of influential nodes in a social network is an active research
area with many valuable applications including marketing and advertisement. As
a new application in academia, KDD Cup 2016 shed light on the lack of an
existing objective ranking for institutions within their respective research
areas and proposed a solution for it. In this problem, the academic fields are
defined as social networks whose nodes are the active institutions within the
field, with the most influential nodes representing the highest contributors.
The solution is able to provide a ranking of active institutions within their
specific domains. The problem statement provided an annual scoring mechanism
for institutions based on their publications and encouraged the use of any
publicly available dataset such as the Microsoft Academic Graph (MAG). The
contest was focused on research publications in selected conferences and asked
for a prediction of the ranking for active institutions within those
conferences in 2016. It should be noted that the results of the paper
submissions and therefore the ground truths for KDD Cup were unknown at the
time of the contest. Each team's final ranking list was evaluated by a metric
called NDCG@20 after the results were released. This metric was used to
indicate the distance between each team's proposed ranking and the actual one
once it was known. After computing the scores of institutions for each year
starting from 2011, we aggregated the rankings by summing the normalized scores
across the years and using the final score set to provide the final ranking.
Since the 2016 ground truths were unknown, we utilized the scores from
2011-2014 and used the 2015 publications as a test bed for evaluating our
aggregation method. Based on the testing, summing the normalized scores got us
closest to the actual 2015 rankings, and we used the same heuristic for predicting the
2016 results.
| no_new_dataset | 0.947769 |
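The aggregation and evaluation steps described above are simple enough to spell out. The snippet below normalizes each year's institution scores, sums them, and scores the resulting ranking with a plain NDCG@20 that uses raw relevance as gain; the toy numbers are hypothetical, and the exact gain/discount convention used by KDD Cup 2016 may differ.

```python
import numpy as np

def aggregate_rankings(yearly_scores):
    """Sum each institution's per-year scores after normalizing every year to [0, 1]."""
    totals = {}
    for scores in yearly_scores:                         # one {institution: score} dict per year
        year_max = max(scores.values()) or 1.0
        for inst, s in scores.items():
            totals[inst] = totals.get(inst, 0.0) + s / year_max
    return sorted(totals, key=totals.get, reverse=True)

def ndcg_at_k(predicted, true_relevance, k=20):
    """NDCG@k of a predicted institution ordering against ground-truth relevance."""
    gains = np.array([true_relevance.get(inst, 0.0) for inst in predicted[:k]])
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    dcg = float((gains * discounts).sum())
    ideal = np.sort(np.array(list(true_relevance.values())))[::-1][:k]
    idcg = float((ideal / np.log2(np.arange(2, len(ideal) + 2))).sum())
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical toy data: scores for three institutions over two years.
yearly = [{'A': 10.0, 'B': 6.0, 'C': 2.0}, {'A': 4.0, 'B': 8.0, 'C': 1.0}]
ranking = aggregate_rankings(yearly)
print(ranking, ndcg_at_k(ranking, {'A': 1.0, 'B': 0.8, 'C': 0.2}))
```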
1611.08974 | Shuran Song | Shuran Song, Fisher Yu, Andy Zeng, Angel X. Chang, Manolis Savva,
Thomas Funkhouser | Semantic Scene Completion from a Single Depth Image | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on semantic scene completion, a task for producing a
complete 3D voxel representation of volumetric occupancy and semantic labels
for a scene from a single-view depth map observation. Previous work has
considered scene completion and semantic labeling of depth maps separately.
However, we observe that these two problems are tightly intertwined. To
leverage the coupled nature of these two tasks, we introduce the semantic scene
completion network (SSCNet), an end-to-end 3D convolutional network that takes
a single depth image as input and simultaneously outputs occupancy and semantic
labels for all voxels in the camera view frustum. Our network uses a
dilation-based 3D context module to efficiently expand the receptive field and
enable 3D context learning. To train our network, we construct SUNCG - a
manually created large-scale dataset of synthetic 3D scenes with dense
volumetric annotations. Our experiments demonstrate that the joint model
outperforms methods addressing each task in isolation and outperforms
alternative approaches on the semantic scene completion task.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 03:38:42 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Song",
"Shuran",
""
],
[
"Yu",
"Fisher",
""
],
[
"Zeng",
"Andy",
""
],
[
"Chang",
"Angel X.",
""
],
[
"Savva",
"Manolis",
""
],
[
"Funkhouser",
"Thomas",
""
]
] | TITLE: Semantic Scene Completion from a Single Depth Image
ABSTRACT: This paper focuses on semantic scene completion, a task for producing a
complete 3D voxel representation of volumetric occupancy and semantic labels
for a scene from a single-view depth map observation. Previous work has
considered scene completion and semantic labeling of depth maps separately.
However, we observe that these two problems are tightly intertwined. To
leverage the coupled nature of these two tasks, we introduce the semantic scene
completion network (SSCNet), an end-to-end 3D convolutional network that takes
a single depth image as input and simultaneously outputs occupancy and semantic
labels for all voxels in the camera view frustum. Our network uses a
dilation-based 3D context module to efficiently expand the receptive field and
enable 3D context learning. To train our network, we construct SUNCG - a
manually created large-scale dataset of synthetic 3D scenes with dense
volumetric annotations. Our experiments demonstrate that the joint model
outperforms methods addressing each task in isolation and outperforms
alternative approaches on the semantic scene completion task.
| new_dataset | 0.955402 |
1611.08986 | Bing Shuai | Bing Shuai, Ting Liu and Gang Wang | Improving Fully Convolution Network for Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fully Convolution Networks (FCN) have achieved great success in dense
prediction tasks including semantic segmentation. In this paper, we start by
discussing FCN and understanding its architectural limitations in building a
strong segmentation network. Next, we present our Improved Fully Convolution
Network (IFCN). In contrast to FCN, IFCN introduces a context network that
progressively expands the receptive fields of feature maps. In addition, dense
skip connections are added so that the context network can be effectively
optimized. More importantly, these dense skip connections enable IFCN to fuse
rich-scale context to make reliable predictions. Empirically, these
architectural modifications prove significant in enhancing segmentation
performance. Without engaging any contextual post-processing, IFCN
significantly advances the state of the art on the ADE20K (ImageNet scene
parsing), Pascal Context, Pascal VOC 2012 and SUN-RGBD segmentation datasets.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 05:31:10 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Shuai",
"Bing",
""
],
[
"Liu",
"Ting",
""
],
[
"Wang",
"Gang",
""
]
] | TITLE: Improving Fully Convolution Network for Semantic Segmentation
ABSTRACT: Fully Convolution Networks (FCN) have achieved great success in dense
prediction tasks including semantic segmentation. In this paper, we start by
discussing FCN and understanding its architectural limitations in building a
strong segmentation network. Next, we present our Improved Fully Convolution
Network (IFCN). In contrast to FCN, IFCN introduces a context network that
progressively expands the receptive fields of feature maps. In addition, dense
skip connections are added so that the context network can be effectively
optimized. More importantly, these dense skip connections enable IFCN to fuse
rich-scale context to make reliable predictions. Empirically, these
architectural modifications prove significant in enhancing segmentation
performance. Without engaging any contextual post-processing, IFCN
significantly advances the state of the art on the ADE20K (ImageNet scene
parsing), Pascal Context, Pascal VOC 2012 and SUN-RGBD segmentation datasets.
| no_new_dataset | 0.950227 |
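Both this record and the SSCNet record above rely on dilation-based context modules with dense skip connections to grow the receptive field without losing resolution. The PyTorch module below is a generic 2D sketch of that idea written for this note, not the authors' architecture; channel counts and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class DilatedContextModule(nn.Module):
    """A small context module: a stack of 3x3 convolutions with growing dilation
    rates so the receptive field expands progressively, plus dense skip
    connections that concatenate every intermediate feature map."""
    def __init__(self, channels=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.blocks = nn.ModuleList()
        in_ch = channels
        for d in dilations:
            self.blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True)))
            in_ch += channels                    # dense connectivity: inputs grow
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for block in self.blocks:
            feats.append(block(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))

out = DilatedContextModule(64)(torch.randn(1, 64, 32, 32))   # -> (1, 64, 32, 32)
```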
1611.09007 | Lloyd Windrim Mr | Lloyd Windrim, Rishi Ramakrishnan, Arman Melkumyan, Richard Murphy | Hyperspectral CNN Classification with Limited Training Samples | 10 pages, 6 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperspectral imaging sensors are becoming increasingly popular in robotics
applications such as agriculture and mining, and allow per-pixel thematic
classification of materials in a scene based on their unique spectral
signatures. Recently, convolutional neural networks have shown remarkable
performance for classification tasks, but require substantial amounts of
labelled training data. This data must sufficiently cover the variability
expected to be encountered in the environment. For hyperspectral data, one of
the main variations encountered outdoors is due to incident illumination, which
can change in spectral shape and intensity depending on the scene geometry. For
example, regions occluded from the sun have a lower intensity and their
incident irradiance skewed towards shorter wavelengths.
In this work, a data augmentation strategy based on relighting is used during
training of a hyperspectral convolutional neural network. It allows training to
occur in the outdoor environment given only a small labelled region, which does
not need to sufficiently represent the geometric variability of the entire
scene. This is important for applications where obtaining large amounts of
training data is laborious, hazardous or difficult, such as labelling pixels
within shadows. Radiometric normalisation approaches for pre-processing the
hyperspectral data are analysed and it is shown that methods based on the raw
pixel data are sufficient to be used as input for the classifier. This removes
the need for external hardware such as calibration boards, which can restrict
the application of hyperspectral sensors in robotics applications. Experiments
to evaluate the classification system are carried out on two datasets captured
from a field-based platform.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 07:29:29 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Windrim",
"Lloyd",
""
],
[
"Ramakrishnan",
"Rishi",
""
],
[
"Melkumyan",
"Arman",
""
],
[
"Murphy",
"Richard",
""
]
] | TITLE: Hyperspectral CNN Classification with Limited Training Samples
ABSTRACT: Hyperspectral imaging sensors are becoming increasingly popular in robotics
applications such as agriculture and mining, and allow per-pixel thematic
classification of materials in a scene based on their unique spectral
signatures. Recently, convolutional neural networks have shown remarkable
performance for classification tasks, but require substantial amounts of
labelled training data. This data must sufficiently cover the variability
expected to be encountered in the environment. For hyperspectral data, one of
the main variations encountered outdoors is due to incident illumination, which
can change in spectral shape and intensity depending on the scene geometry. For
example, regions occluded from the sun have a lower intensity and their
incident irradiance skewed towards shorter wavelengths.
In this work, a data augmentation strategy based on relighting is used during
training of a hyperspectral convolutional neural network. It allows training to
occur in the outdoor environment given only a small labelled region, which does
not need to sufficiently represent the geometric variability of the entire
scene. This is important for applications where obtaining large amounts of
training data is laborious, hazardous or difficult, such as labelling pixels
within shadows. Radiometric normalisation approaches for pre-processing the
hyperspectral data are analysed and it is shown that methods based on the raw
pixel data are sufficient to be used as input for the classifier. This removes
the need for external hardware such as calibration boards, which can restrict
the application of hyperspectral sensors in robotics applications. Experiments
to evaluate the classification system are carried out on two datasets captured
from a field-based platform.
| no_new_dataset | 0.953362 |
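The relighting-style augmentation described above amounts to perturbing each training spectrum with a plausible illumination change. The sketch below applies a random overall dimming plus a smooth tilt that boosts shorter wavelengths, loosely mimicking skylight-lit shadow; it is not the paper's physics-based relighting model, and the parameter ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relight(spectrum, wavelengths, max_dim=0.6, max_skew=0.5):
    """Augment one hyperspectral pixel: random overall dimming plus a smooth
    wavelength-dependent tilt weighted towards shorter wavelengths."""
    scale = 1.0 - rng.uniform(0.0, max_dim)                   # overall intensity drop
    w = (wavelengths - wavelengths.min()) / np.ptp(wavelengths)
    tilt = 1.0 + rng.uniform(0.0, max_skew) * (1.0 - w)       # larger boost at short wavelengths
    return spectrum * scale * tilt

# Toy usage: 200 bands between 400 and 1000 nm, eight augmented copies of a pixel.
wavelengths = np.linspace(400.0, 1000.0, 200)
pixel = rng.random(200)
augmented = np.stack([relight(pixel, wavelengths) for _ in range(8)])
```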
1611.09010 | Francesc Moreno-Noguer | Francesc Moreno-Noguer | 3D Human Pose Estimation from a Single Image via Distance Matrix
Regression | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of 3D human pose estimation from a single
image. We follow a standard two-step pipeline by first detecting the 2D
position of the $N$ body joints, and then using these observations to infer 3D
pose. For the first step, we use a recent CNN-based detector. For the second
step, most existing approaches perform 2$N$-to-3$N$ regression of the Cartesian
joint coordinates. We show that more precise pose estimates can be obtained by
representing both the 2D and 3D human poses using $N\times N$ distance
matrices, and formulating the problem as a 2D-to-3D distance matrix regression.
For learning such a regressor we leverage on simple Neural Network
architectures, which by construction, enforce positivity and symmetry of the
predicted matrices. The approach has also the advantage to naturally handle
missing observations and allowing to hypothesize the position of non-observed
joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate
consistent performance gains over state-of-the-art. Qualitative evaluation on
the images in-the-wild of the LSP dataset, using the regressor learned on
Human3.6M, reveals very promising generalization results.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 07:36:31 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Moreno-Noguer",
"Francesc",
""
]
] | TITLE: 3D Human Pose Estimation from a Single Image via Distance Matrix
Regression
ABSTRACT: This paper addresses the problem of 3D human pose estimation from a single
image. We follow a standard two-step pipeline by first detecting the 2D
position of the $N$ body joints, and then using these observations to infer 3D
pose. For the first step, we use a recent CNN-based detector. For the second
step, most existing approaches perform 2$N$-to-3$N$ regression of the Cartesian
joint coordinates. We show that more precise pose estimates can be obtained by
representing both the 2D and 3D human poses using $N\times N$ distance
matrices, and formulating the problem as a 2D-to-3D distance matrix regression.
For learning such a regressor we leverage on simple Neural Network
architectures, which by construction, enforce positivity and symmetry of the
predicted matrices. The approach has also the advantage to naturally handle
missing observations and allowing to hypothesize the position of non-observed
joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate
consistent performance gains over state-of-the-art. Qualitative evaluation on
the images in-the-wild of the LSP dataset, using the regressor learned on
Human3.6M, reveals very promising generalization results.
| no_new_dataset | 0.943971 |
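The core representation above, pairwise distance matrices of the 2D and 3D joints, takes only a few lines to compute. The sketch below builds that encoding and flattens the upper triangles into feature/target vectors for an arbitrary regressor; the joint count and random poses are placeholders, and the symmetry- and positivity-enforcing network itself is not reproduced here.

```python
import numpy as np

def distance_matrix(joints):
    """N x N Euclidean distance matrix of an (N, d) array of joint coordinates;
    the representation is invariant to translation and rotation by construction."""
    diff = joints[:, None, :] - joints[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy usage: encode one 2D detection and its 3D pose, keeping only upper triangles.
rng = np.random.default_rng(0)
N = 14                                                   # number of body joints (illustrative)
pose_2d, pose_3d = rng.random((N, 2)), rng.random((N, 3))
x = distance_matrix(pose_2d)[np.triu_indices(N, k=1)]    # 2D input features
y = distance_matrix(pose_3d)[np.triu_indices(N, k=1)]    # 3D regression target
```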
1611.09053 | Zhongwen Xu | Linchao Zhu, Zhongwen Xu, Yi Yang | Bidirectional Multirate Reconstruction for Temporal Modeling in Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent success of neural networks in image feature learning, a
major problem in the video domain is the lack of sufficient labeled data for
learning to model temporal information. In this paper, we propose an
unsupervised temporal modeling method that learns from untrimmed videos. The
speed of motion varies constantly, e.g., a man may run quickly or slowly. We
therefore train a Multirate Visual Recurrent Model (MVRM) by encoding frames of
a clip with different intervals. This learning process makes the learned model
more capable of dealing with motion speed variance. Given a clip sampled from a
video, we use its past and future neighboring clips as the temporal context,
and reconstruct the two temporal transitions, i.e., present$\rightarrow$past
transition and present$\rightarrow$future transition, reflecting the temporal
information in different views. The proposed method exploits the two
transitions simultaneously by incorporating a bidirectional reconstruction
which consists of a backward reconstruction and a forward reconstruction. We
apply the proposed method to two challenging video tasks, i.e., complex event
detection and video captioning, in which it achieves state-of-the-art
performance. Notably, our method generates the best single feature for event
detection with a relative improvement of 10.4% on the MEDTest-13 dataset and
achieves the best performance in video captioning across all evaluation metrics
on the YouTube2Text dataset.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 10:32:03 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Zhu",
"Linchao",
""
],
[
"Xu",
"Zhongwen",
""
],
[
"Yang",
"Yi",
""
]
] | TITLE: Bidirectional Multirate Reconstruction for Temporal Modeling in Videos
ABSTRACT: Despite the recent success of neural networks in image feature learning, a
major problem in the video domain is the lack of sufficient labeled data for
learning to model temporal information. In this paper, we propose an
unsupervised temporal modeling method that learns from untrimmed videos. The
speed of motion varies constantly, e.g., a man may run quickly or slowly. We
therefore train a Multirate Visual Recurrent Model (MVRM) by encoding frames of
a clip with different intervals. This learning process makes the learned model
more capable of dealing with motion speed variance. Given a clip sampled from a
video, we use its past and future neighboring clips as the temporal context,
and reconstruct the two temporal transitions, i.e., present$\rightarrow$past
transition and present$\rightarrow$future transition, reflecting the temporal
information in different views. The proposed method exploits the two
transitions simultaneously by incorporating a bidirectional reconstruction
which consists of a backward reconstruction and a forward reconstruction. We
apply the proposed method to two challenging video tasks, i.e., complex event
detection and video captioning, in which it achieves state-of-the-art
performance. Notably, our method generates the best single feature for event
detection with a relative improvement of 10.4% on the MEDTest-13 dataset and
achieves the best performance in video captioning across all evaluation metrics
on the YouTube2Text dataset.
| no_new_dataset | 0.948394 |
1611.09099 | Thierry Bouwmans | Thierry Bouwmans and Caroline Silva and Cristina Marghes and Mohammed
Sami Zitouni and Harish Bhaskar and Carl Frelicot | On the Role and the Importance of Features for Background Modeling and
Foreground Detection | To be submitted to Computer Science Review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background modeling has emerged as a popular foreground detection technique
for various applications in video surveillance. Background modeling methods
have become increasingly efficient in robustly modeling the background and hence
detecting moving objects in any visual scene. Although several background
subtraction and foreground detection methods have been proposed recently, no
traditional algorithm today seems able to simultaneously address all
the key challenges of illumination variation, dynamic camera motion, cluttered
background and occlusion. This limitation can be attributed to the lack of
systematic investigation concerning the role and importance of features within
background modeling and foreground detection. With the availability of a rather
large set of invariant features, the challenge is in determining the best
combination of features that would improve accuracy and robustness in
detection. The purpose of this study is to initiate a rigorous and
comprehensive survey of features used within background modeling and foreground
detection. Further, this paper presents a systematic experimental and
statistical analysis of techniques that provide valuable insight on the trends
in background modeling and use it to draw meaningful recommendations for
practitioners. In this paper, a preliminary review of the key characteristics
of features based on the types and sizes is provided in addition to
investigating their intrinsic spectral, spatial and temporal properties.
Furthermore, improvements using statistical and fuzzy tools are examined and
techniques based on multiple features are benchmarked against reliability and
selection criteria. Finally, a description of the different resources
available such as datasets and codes is provided.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 12:55:16 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Bouwmans",
"Thierry",
""
],
[
"Silva",
"Caroline",
""
],
[
"Marghes",
"Cristina",
""
],
[
"Zitouni",
"Mohammed Sami",
""
],
[
"Bhaskar",
"Harish",
""
],
[
"Frelicot",
"Carl",
""
]
] | TITLE: On the Role and the Importance of Features for Background Modeling and
Foreground Detection
ABSTRACT: Background modeling has emerged as a popular foreground detection technique
for various applications in video surveillance. Background modeling methods
have become increasingly efficient in robustly modeling the background and hence
detecting moving objects in any visual scene. Although several background
subtraction and foreground detection methods have been proposed recently, no
traditional algorithm today seems able to simultaneously address all
the key challenges of illumination variation, dynamic camera motion, cluttered
background and occlusion. This limitation can be attributed to the lack of
systematic investigation concerning the role and importance of features within
background modeling and foreground detection. With the availability of a rather
large set of invariant features, the challenge is in determining the best
combination of features that would improve accuracy and robustness in
detection. The purpose of this study is to initiate a rigorous and
comprehensive survey of features used within background modeling and foreground
detection. Further, this paper presents a systematic experimental and
statistical analysis of techniques that provides valuable insight into the trends
in background modeling, and uses it to draw meaningful recommendations for
practitioners. In this paper, a preliminary review of the key characteristics
of features based on the types and sizes is provided in addition to
investigating their intrinsic spectral, spatial and temporal properties.
Furthermore, improvements using statistical and fuzzy tools are examined and
techniques based on multiple features are benchmarked against reliability and
selection criteria. Finally, a description of the different resources
available such as datasets and codes is provided.
| no_new_dataset | 0.939858 |
1611.09232 | Meshia C\'edric Oveneke | Meshia C\'edric Oveneke, Mitchel Aliosha-Perez, Yong Zhao, Dongmei
Jiang and Hichem Sahli | Efficient Convolutional Auto-Encoding via Random Convexification and
Frequency-Domain Minimization | Accepted at NIPS 2016 Workshop on Efficient Methods for Deep Neural
Networks (EMDNN) | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The omnipresence of deep learning architectures such as deep convolutional
neural networks (CNNs) is fueled by the synergistic combination of
ever-increasing labeled datasets and specialized hardware. Despite the
indisputable success, the reliance on huge amounts of labeled data and
specialized hardware can be a limiting factor when approaching new
applications. To help alleviate these limitations, we propose an efficient
learning strategy for layer-wise unsupervised training of deep CNNs on
conventional hardware in acceptable time. Our proposed strategy consists of
randomly convexifying the reconstruction contractive auto-encoding (RCAE)
learning objective and solving the resulting large-scale convex minimization
problem in the frequency domain via coordinate descent (CD). The main
advantages of our proposed learning strategy are: (1) single tunable
optimization parameter; (2) fast and guaranteed convergence; (3) possibilities
for full parallelization. Numerical experiments show that our proposed learning
strategy scales (in the worst case) linearly with image size, number of filters
and filter size.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 16:42:11 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Oveneke",
"Meshia Cédric",
""
],
[
"Aliosha-Perez",
"Mitchel",
""
],
[
"Zhao",
"Yong",
""
],
[
"Jiang",
"Dongmei",
""
],
[
"Sahli",
"Hichem",
""
]
] | TITLE: Efficient Convolutional Auto-Encoding via Random Convexification and
Frequency-Domain Minimization
ABSTRACT: The omnipresence of deep learning architectures such as deep convolutional
neural networks (CNNs) is fueled by the synergistic combination of
ever-increasing labeled datasets and specialized hardware. Despite the
indisputable success, the reliance on huge amounts of labeled data and
specialized hardware can be a limiting factor when approaching new
applications. To help alleviate these limitations, we propose an efficient
learning strategy for layer-wise unsupervised training of deep CNNs on
conventional hardware in acceptable time. Our proposed strategy consists of
randomly convexifying the reconstruction contractive auto-encoding (RCAE)
learning objective and solving the resulting large-scale convex minimization
problem in the frequency domain via coordinate descent (CD). The main
advantages of our proposed learning strategy are: (1) single tunable
optimization parameter; (2) fast and guaranteed convergence; (3) possibilities
for full parallelization. Numerical experiments show that our proposed learning
strategy scales (in the worst case) linearly with image size, number of filters
and filter size.
| no_new_dataset | 0.94801 |
1611.09235 | Ziqiang Cao | Ziqiang Cao, Chuwei Luo, Wenjie Li, Sujian Li | Joint Copying and Restricted Generation for Paraphrase | 7 pages, 1 figure, AAAI-17 | null | null | null | cs.CL cs.IR | http://creativecommons.org/publicdomain/zero/1.0/ | Many natural language generation tasks, such as abstractive summarization and
text simplification, are paraphrase-oriented. In these tasks, copying and
rewriting are two main writing modes. Most previous sequence-to-sequence
(Seq2Seq) models use a single decoder and neglect this fact. In this paper, we
develop a novel Seq2Seq model to fuse a copying decoder and a restricted
generative decoder. The copying decoder finds the position to be copied based
on a typical attention model. The generative decoder produces words limited in
the source-specific vocabulary. To combine the two decoders and determine the
final output, we develop a predictor to predict the mode of copying or
rewriting. This predictor can be guided by the actual writing mode in the
training data. We conduct extensive experiments on two different paraphrase
datasets. The results show that our model outperforms the state-of-the-art
approaches in terms of both informativeness and language quality.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 16:49:37 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Cao",
"Ziqiang",
""
],
[
"Luo",
"Chuwei",
""
],
[
"Li",
"Wenjie",
""
],
[
"Li",
"Sujian",
""
]
] | TITLE: Joint Copying and Restricted Generation for Paraphrase
ABSTRACT: Many natural language generation tasks, such as abstractive summarization and
text simplification, are paraphrase-oriented. In these tasks, copying and
rewriting are two main writing modes. Most previous sequence-to-sequence
(Seq2Seq) models use a single decoder and neglect this fact. In this paper, we
develop a novel Seq2Seq model to fuse a copying decoder and a restricted
generative decoder. The copying decoder finds the position to be copied based
on a typical attention model. The generative decoder produces words limited in
the source-specific vocabulary. To combine the two decoders and determine the
final output, we develop a predictor to predict the mode of copying or
rewriting. This predictor can be guided by the actual writing mode in the
training data. We conduct extensive experiments on two different paraphrase
datasets. The results show that our model outperforms the state-of-the-art
approaches in terms of both informativeness and language quality.
| no_new_dataset | 0.944893 |
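To make the copy-versus-rewrite mechanism summarized in the abstract above more concrete, the sketch below shows one common way to blend a copy distribution over source positions with a generation distribution over a restricted vocabulary, gated by a predicted copy probability. This is an illustrative pointer-generator-style mixture, not the paper's exact formulation; all tensor names and shapes are assumptions.

```python
import torch

def mixed_word_distribution(copy_attn, gen_logits, p_copy, src_token_ids):
    """Blend copying and (restricted) generation into one output distribution.

    copy_attn     : (B, S) attention weights over source positions (sum to 1)
    gen_logits    : (B, V) scores over the restricted target vocabulary
    p_copy        : (B, 1) predicted probability of being in "copy" mode
    src_token_ids : (B, S) int64 vocabulary ids of the source tokens
    Returns a (B, V) probability distribution over the vocabulary.
    """
    gen_dist = torch.softmax(gen_logits, dim=-1)
    # Project the per-position copy weights onto their vocabulary ids.
    copy_dist = torch.zeros_like(gen_dist)
    copy_dist.scatter_add_(1, src_token_ids, copy_attn)
    return p_copy * copy_dist + (1.0 - p_copy) * gen_dist
```

During training, such a gate could be supervised with the actual writing mode observed in the data, in line with the predictor described in the abstract.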
1611.09238 | Ziqiang Cao | Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei | Improving Multi-Document Summarization via Text Classification | 7 pages, 3 figures, AAAI-17 | null | null | null | cs.CL cs.IR | http://creativecommons.org/publicdomain/zero/1.0/ | Developed so far, multi-document summarization has reached its bottleneck due
to the lack of sufficient training data and diverse categories of documents.
Text classification just makes up for these deficiencies. In this paper, we
propose a novel summarization system called TCSum, which leverages plentiful
text classification data to improve the performance of multi-document
summarization. TCSum projects documents onto distributed representations which
act as a bridge between text classification and summarization. It also utilizes
the classification results to produce summaries of different styles. Extensive
experiments on DUC generic multi-document summarization datasets show that
TCSum can achieve state-of-the-art performance without using any
hand-crafted features and has the capability to catch the variations of summary
styles with respect to different text categories.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 16:53:06 GMT"
}
] | 2016-11-29T00:00:00 | [
[
"Cao",
"Ziqiang",
""
],
[
"Li",
"Wenjie",
""
],
[
"Li",
"Sujian",
""
],
[
"Wei",
"Furu",
""
]
] | TITLE: Improving Multi-Document Summarization via Text Classification
ABSTRACT: Developed so far, multi-document summarization has reached its bottleneck due
to the lack of sufficient training data and diverse categories of documents.
Text classification just makes up for these deficiencies. In this paper, we
propose a novel summarization system called TCSum, which leverages plentiful
text classification data to improve the performance of multi-document
summarization. TCSum projects documents onto distributed representations which
act as a bridge between text classification and summarization. It also utilizes
the classification results to produce summaries of different styles. Extensive
experiments on DUC generic multi-document summarization datasets show that
TCSum can achieve state-of-the-art performance without using any
hand-crafted features and has the capability to catch the variations of summary
styles with respect to different text categories.
| no_new_dataset | 0.942612 |
1510.05217 | Weiran Huang | Weiran Huang and Liang Li and Wei Chen | Partitioned Sampling of Public Opinions Based on Their Social Dynamics | null | null | null | null | cs.SI physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Public opinion polling is usually done by random sampling from the entire
population, treating individual opinions as independent. In the real world,
individuals' opinions are often correlated, e.g., among friends in a social
network. In this paper, we explore the idea of partitioned sampling, which
partitions individuals with high opinion similarities into groups and then
samples every group separately to obtain an accurate estimate of the population
opinion. We rigorously formulate the above idea as an optimization problem. We
then show that the simple partitions which contain only one sample in each
group are always better, and reduce finding the optimal simple partition to a
well-studied Min-r-Partition problem. We adapt an approximation algorithm and a
heuristic algorithm to solve the optimization problem. Moreover, to obtain
opinion similarity efficiently, we adapt a well-known opinion evolution model
to characterize social interactions, and provide an exact computation of
opinion similarities based on the model. We use both synthetic and real-world
datasets to demonstrate that the partitioned sampling method results in
significant improvement in sampling quality and it is robust when some opinion
similarities are inaccurate or even missing.
| [
{
"version": "v1",
"created": "Sun, 18 Oct 2015 10:07:39 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Feb 2016 08:02:07 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Nov 2016 04:50:08 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Huang",
"Weiran",
""
],
[
"Li",
"Liang",
""
],
[
"Chen",
"Wei",
""
]
] | TITLE: Partitioned Sampling of Public Opinions Based on Their Social Dynamics
ABSTRACT: Public opinion polling is usually done by random sampling from the entire
population, treating individual opinions as independent. In the real world,
individuals' opinions are often correlated, e.g., among friends in a social
network. In this paper, we explore the idea of partitioned sampling, which
partitions individuals with high opinion similarities into groups and then
samples every group separately to obtain an accurate estimate of the population
opinion. We rigorously formulate the above idea as an optimization problem. We
then show that the simple partitions which contain only one sample in each
group are always better, and reduce finding the optimal simple partition to a
well-studied Min-r-Partition problem. We adapt an approximation algorithm and a
heuristic algorithm to solve the optimization problem. Moreover, to obtain
opinion similarity efficiently, we adapt a well-known opinion evolution model
to characterize social interactions, and provide an exact computation of
opinion similarities based on the model. We use both synthetic and real-world
datasets to demonstrate that the partitioned sampling method results in
significant improvement in sampling quality and it is robust when some opinion
similarities are inaccurate or even missing.
| no_new_dataset | 0.946399 |
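As a rough illustration of the "simple partition" idea in the partitioned-sampling abstract above (one sample per group, groups weighted by size), the sketch below computes the resulting population estimate. It is not the paper's algorithm: the construction of groups from opinion similarities (e.g., via Min-r-Partition) is omitted, and all names are assumptions.

```python
import random

def partitioned_sample_estimate(groups, opinions, rng=random):
    """Estimate the population mean opinion from a 'simple' partition.

    groups   : list of lists of individual ids (one list per group)
    opinions : dict mapping individual id -> numeric opinion
    Each group contributes a single randomly drawn member, and the group
    estimates are combined weighted by group size.
    """
    total = sum(len(g) for g in groups)
    estimate = 0.0
    for g in groups:
        sampled = rng.choice(g)                      # one sample per group
        estimate += len(g) / total * opinions[sampled]
    return estimate

# toy usage: two tightly correlated friend groups
opinions = {1: 0.9, 2: 0.8, 3: 0.85, 4: 0.1, 5: 0.2}
groups = [[1, 2, 3], [4, 5]]
print(partitioned_sample_estimate(groups, opinions))
```

When opinions within a group are highly similar, a single sample represents its group well, which is the intuition for why such partitions can reduce estimation variance relative to plain random sampling.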
1603.02514 | Weidi Xu | Weidi Xu, Haoze Sun, Chao Deng, Ying Tan | Variational Autoencoders for Semi-supervised Text Classification | 8 pages, 4 figure | null | null | null | cs.CL cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Although semi-supervised variational autoencoder (SemiVAE) works in image
classification tasks, it fails in text classification if a vanilla LSTM is used
as its decoder. From a perspective of reinforcement learning, it is verified
that the decoder's capability to distinguish between different categorical
labels is essential. Therefore, Semi-supervised Sequential Variational
Autoencoder (SSVAE) is proposed, which increases the capability by feeding
the label into its decoder RNN at each time-step. Two specific decoder structures
are investigated and both of them are verified to be effective. Besides, in
order to reduce the computational complexity in training, a novel optimization
method is proposed, which estimates the gradient of the unlabeled objective
function by sampling, along with two variance reduction techniques.
Experimental results on Large Movie Review Dataset (IMDB) and AG's News corpus
show that the proposed approach significantly improves the classification
accuracy compared with pure-supervised classifiers, and achieves competitive
performance against previous advanced methods. State-of-the-art results can be
obtained by integrating other pretraining-based methods.
| [
{
"version": "v1",
"created": "Tue, 8 Mar 2016 13:24:45 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2016 14:33:50 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Nov 2016 08:18:31 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Xu",
"Weidi",
""
],
[
"Sun",
"Haoze",
""
],
[
"Deng",
"Chao",
""
],
[
"Tan",
"Ying",
""
]
] | TITLE: Variational Autoencoders for Semi-supervised Text Classification
ABSTRACT: Although semi-supervised variational autoencoder (SemiVAE) works in image
classification tasks, it fails in text classification if a vanilla LSTM is used
as its decoder. From a perspective of reinforcement learning, it is verified
that the decoder's capability to distinguish between different categorical
labels is essential. Therefore, Semi-supervised Sequential Variational
Autoencoder (SSVAE) is proposed, which increases the capability by feeding
the label into its decoder RNN at each time-step. Two specific decoder structures
are investigated and both of them are verified to be effective. Besides, in
order to reduce the computational complexity in training, a novel optimization
method is proposed, which estimates the gradient of the unlabeled objective
function by sampling, along with two variance reduction techniques.
Experimental results on Large Movie Review Dataset (IMDB) and AG's News corpus
show that the proposed approach significantly improves the classification
accuracy compared with pure-supervised classifiers, and achieves competitive
performance against previous advanced methods. State-of-the-art results can be
obtained by integrating other pretraining-based methods.
| no_new_dataset | 0.940353 |
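A minimal sketch of the core SSVAE idea stated in the abstract above, i.e., feeding the class label into the decoder RNN at every time-step. This is illustrative PyTorch code under assumed layer sizes, not the authors' architecture nor either of their two specific decoder structures.

```python
import torch
import torch.nn as nn

class LabelConditionedDecoder(nn.Module):
    """Toy decoder that receives the class label (and latent code) at every
    time-step, in the spirit of the SSVAE abstract above (illustrative only)."""

    def __init__(self, vocab_size, embed_dim, label_dim, latent_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # word embedding + one-hot label + latent code at each step
        self.rnn = nn.LSTM(embed_dim + label_dim + latent_dim, hidden_dim,
                           batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, label_onehot, z):
        # tokens: (B, T) int64, label_onehot: (B, label_dim), z: (B, latent_dim)
        emb = self.embed(tokens)                          # (B, T, E)
        T = emb.size(1)
        cond = torch.cat([label_onehot, z], dim=-1)       # (B, L+Z)
        cond = cond.unsqueeze(1).expand(-1, T, -1)        # repeat per time-step
        output, _ = self.rnn(torch.cat([emb, cond], dim=-1))
        return self.out(output)                           # (B, T, vocab) logits
```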
1604.06838 | Xirong Li | Jianfeng Dong and Xirong Li and Cees G. M. Snoek | Word2VisualVec: Image and Video to Sentence Matching by Visual Feature
Prediction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper strives to find the sentence best describing the content of an
image or video. Different from existing works, which rely on a joint subspace
for image / video to sentence matching, we propose to do so in a visual space
only. We contribute Word2VisualVec, a deep neural network architecture that
learns to predict a deep visual encoding of textual input based on sentence
vectorization and a multi-layer perceptron. We thoroughly analyze its
architectural design, by varying the sentence vectorization strategy, network
depth and the deep feature to predict for image to sentence matching. We also
generalize Word2VisualVec for matching a video to a sentence, by extending the
predictive abilities to 3-D ConvNet features as well as a visual-audio
representation. Experiments on four challenging image and video benchmarks
detail Word2VisualVec's properties, capabilities for image and video to
sentence matching, and its state-of-the-art results on all datasets.
| [
{
"version": "v1",
"created": "Sat, 23 Apr 2016 00:28:17 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Nov 2016 06:06:31 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Dong",
"Jianfeng",
""
],
[
"Li",
"Xirong",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] | TITLE: Word2VisualVec: Image and Video to Sentence Matching by Visual Feature
Prediction
ABSTRACT: This paper strives to find the sentence best describing the content of an
image or video. Different from existing works, which rely on a joint subspace
for image / video to sentence matching, we propose to do so in a visual space
only. We contribute Word2VisualVec, a deep neural network architecture that
learns to predict a deep visual encoding of textual input based on sentence
vectorization and a multi-layer perceptron. We thoroughly analyze its
architectural design, by varying the sentence vectorization strategy, network
depth and the deep feature to predict for image to sentence matching. We also
generalize Word2VisualVec for matching a video to a sentence, by extending the
predictive abilities to 3-D ConvNet features as well as a visual-audio
representation. Experiments on four challenging image and video benchmarks
detail Word2VisualVec's properties, capabilities for image and video to
sentence matching, and its state-of-the-art results on all datasets.
| no_new_dataset | 0.948585 |
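As an illustration of matching in a visual space only, as described in the Word2VisualVec abstract above, the sketch below maps a pre-computed sentence vector into a visual feature space with a small MLP and ranks images by cosine similarity there. Layer sizes, feature dimensions, and function names are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextToVisual(nn.Module):
    """Minimal MLP that predicts a visual feature vector from a sentence
    vector, so that matching happens in the visual space (illustrative)."""

    def __init__(self, sent_dim=300, hidden_dim=1024, visual_dim=2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(sent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, visual_dim),
        )

    def forward(self, sent_vec):
        return self.mlp(sent_vec)

def rank_images(sent_vec, image_feats, model):
    # Score each pre-computed image feature against the predicted visual vector.
    pred = F.normalize(model(sent_vec), dim=-1)           # (D,)
    feats = F.normalize(image_feats, dim=-1)              # (N, D)
    return torch.argsort(feats @ pred, descending=True)   # best match first
```

A training loss would push the predicted vector toward the visual feature of the matching image; that part, and the choice of sentence vectorization, are omitted here.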
1605.02697 | Mateusz Malinowski | Mateusz Malinowski and Marcus Rohrbach and Mario Fritz | Ask Your Neurons: A Deep Learning Approach to Visual Question Answering | Improved version, it also has a final table from the VQA challenge,
and more baselines on DAQUAR | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address a question answering task on real-world images that is set up as a
Visual Turing Test. By combining the latest advances in image representation and
natural language processing, we propose Ask Your Neurons, a scalable, jointly
trained, end-to-end formulation to this problem.
In contrast to previous efforts, we are facing a multi-modal problem where
the language output (answer) is conditioned on visual and natural language
inputs (image and question). We provide additional insights into the problem by
analyzing how much information is contained only in the language part for which
we provide a new human baseline. To study human consensus, which is related to
the ambiguities inherent in this challenging task, we propose two novel metrics
and collect additional answers which extend the original DAQUAR dataset to
DAQUAR-Consensus.
Moreover, we also extend our analysis to VQA, a large-scale question
answering about images dataset, where we investigate some particular design
choices and show the importance of stronger visual models. At the same time, we
achieve strong performance of our model that still uses a global image
representation. Finally, based on such analysis, we refine our Ask Your Neurons
on DAQUAR, which also leads to a better performance on this challenging task.
| [
{
"version": "v1",
"created": "Mon, 9 May 2016 19:04:23 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Nov 2016 10:30:18 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Malinowski",
"Mateusz",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Fritz",
"Mario",
""
]
] | TITLE: Ask Your Neurons: A Deep Learning Approach to Visual Question Answering
ABSTRACT: We address a question answering task on real-world images that is set up as a
Visual Turing Test. By combining the latest advances in image representation and
natural language processing, we propose Ask Your Neurons, a scalable, jointly
trained, end-to-end formulation to this problem.
In contrast to previous efforts, we are facing a multi-modal problem where
the language output (answer) is conditioned on visual and natural language
inputs (image and question). We provide additional insights into the problem by
analyzing how much information is contained only in the language part for which
we provide a new human baseline. To study human consensus, which is related to
the ambiguities inherent in this challenging task, we propose two novel metrics
and collect additional answers which extend the original DAQUAR dataset to
DAQUAR-Consensus.
Moreover, we also extend our analysis to VQA, a large-scale question
answering about images dataset, where we investigate some particular design
choices and show the importance of stronger visual models. At the same time, we
achieve strong performance of our model that still uses a global image
representation. Finally, based on such analysis, we refine our Ask Your Neurons
on DAQUAR, which also leads to a better performance on this challenging task.
| no_new_dataset | 0.938463 |
1606.04621 | Luowei Zhou | Luowei Zhou, Chenliang Xu, Parker Koch, Jason J. Corso | Watch What You Just Said: Image Captioning with Text-Conditional
Attention | source code is available online | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attention mechanisms have attracted considerable interest in image captioning
due to their strong performance. However, existing methods use only visual
content as attention and whether textual context can improve attention in image
captioning remains unsolved. To explore this problem, we propose a novel
attention mechanism, called \textit{text-conditional attention}, which allows
the caption generator to focus on certain image features given previously
generated text. To obtain text-related image features for our attention model,
we adopt the guiding Long Short-Term Memory (gLSTM) captioning architecture
with CNN fine-tuning. Our proposed method allows joint learning of the image
embedding, text embedding, text-conditional attention and language model with
one network architecture in an end-to-end manner. We perform extensive
experiments on the MS-COCO dataset. The experimental results show that our
method outperforms state-of-the-art captioning methods on various quantitative
metrics as well as in human evaluation, which supports the use of our
text-conditional attention in image captioning.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 02:26:22 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Sep 2016 21:17:42 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Nov 2016 04:36:42 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Zhou",
"Luowei",
""
],
[
"Xu",
"Chenliang",
""
],
[
"Koch",
"Parker",
""
],
[
"Corso",
"Jason J.",
""
]
] | TITLE: Watch What You Just Said: Image Captioning with Text-Conditional
Attention
ABSTRACT: Attention mechanisms have attracted considerable interest in image captioning
due to their strong performance. However, existing methods use only visual
content as attention and whether textual context can improve attention in image
captioning remains unsolved. To explore this problem, we propose a novel
attention mechanism, called \textit{text-conditional attention}, which allows
the caption generator to focus on certain image features given previously
generated text. To obtain text-related image features for our attention model,
we adopt the guiding Long Short-Term Memory (gLSTM) captioning architecture
with CNN fine-tuning. Our proposed method allows joint learning of the image
embedding, text embedding, text-conditional attention and language model with
one network architecture in an end-to-end manner. We perform extensive
experiments on the MS-COCO dataset. The experimental results show that our
method outperforms state-of-the-art captioning methods on various quantitative
metrics as well as in human evaluation, which supports the use of our
text-conditional attention in image captioning.
| no_new_dataset | 0.946448 |
1607.05369 | Weihua Chen | Weihua Chen, Xiaotang Chen, Jianguo Zhang, Kaiqi Huang | A Multi-task Deep Network for Person Re-identification | Accepted by AAAI2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification (ReID) focuses on identifying people across
different scenes in video surveillance, which is usually formulated as a binary
classification task or a ranking task in current person ReID approaches. In
this paper, we take both tasks into account and propose a multi-task deep
network (MTDnet) that makes use of their own advantages and jointly optimize
the two tasks simultaneously for person ReID. To the best of our knowledge, we
are the first to integrate both tasks in one network to solve the person ReID.
We show that our proposed architecture significantly boosts the performance.
Furthermore, deep architectures in general require a sufficiently large dataset for
training, a requirement that is usually not met in person ReID. To cope with this situation,
we further extend the MTDnet and propose a cross-domain architecture that is
capable of using an auxiliary set to assist training on small target sets. In
the experiments, our approach outperforms most existing person ReID
algorithms on representative datasets including CUHK03, CUHK01, VIPeR, iLIDS
and PRID2011, which clearly demonstrates the effectiveness of the proposed
approach.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 01:59:02 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Sep 2016 14:32:38 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Nov 2016 06:22:57 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Chen",
"Weihua",
""
],
[
"Chen",
"Xiaotang",
""
],
[
"Zhang",
"Jianguo",
""
],
[
"Huang",
"Kaiqi",
""
]
] | TITLE: A Multi-task Deep Network for Person Re-identification
ABSTRACT: Person re-identification (ReID) focuses on identifying people across
different scenes in video surveillance, which is usually formulated as a binary
classification task or a ranking task in current person ReID approaches. In
this paper, we take both tasks into account and propose a multi-task deep
network (MTDnet) that makes use of their respective advantages and jointly optimizes
the two tasks simultaneously for person ReID. To the best of our knowledge, we
are the first to integrate both tasks in one network to solve the person ReID.
We show that our proposed architecture significantly boosts the performance.
Furthermore, deep architectures in general require a sufficiently large dataset for
training, a requirement that is usually not met in person ReID. To cope with this situation,
we further extend the MTDnet and propose a cross-domain architecture that is
capable of using an auxiliary set to assist training on small target sets. In
the experiments, our approach outperforms most existing person ReID
algorithms on representative datasets including CUHK03, CUHK01, VIPeR, iLIDS
and PRID2011, which clearly demonstrates the effectiveness of the proposed
approach.
| no_new_dataset | 0.95018 |
1608.05246 | Kadir Kirtac | Samil Karahan, Merve Kilinc Yildirim, Kadir Kirtac, Ferhat Sukru
Rende, Gultekin Butun, Hazim Kemal Ekenel | How Image Degradations Affect Deep CNN-based Face Recognition? | 8 pages, 3 figures | null | 10.1109/BIOSIG.2016.7736924 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face recognition approaches that are based on deep convolutional neural
networks (CNN) have been dominating the field. The performance improvements
they have provided in the so-called in-the-wild datasets are significant;
however, their performance under image quality degradations has not been
assessed yet. This is particularly important, since in real-world face
recognition applications, images may contain various kinds of degradations due
to motion blur, noise, compression artifacts, color distortions, and occlusion.
In this work, we have addressed this problem and analyzed the influence of
these image degradations on the performance of deep CNN-based face recognition
approaches using the standard LFW closed-set identification protocol. We have
evaluated three popular deep CNN models, namely, the AlexNet, VGG-Face, and
GoogLeNet. Results have indicated that blur, noise, and occlusion cause a
significant decrease in performance, while deep CNN models are found to be
robust to distortions, such as color distortions and change in color balance.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 11:48:26 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Karahan",
"Samil",
""
],
[
"Yildirim",
"Merve Kilinc",
""
],
[
"Kirtac",
"Kadir",
""
],
[
"Rende",
"Ferhat Sukru",
""
],
[
"Butun",
"Gultekin",
""
],
[
"Ekenel",
"Hazim Kemal",
""
]
] | TITLE: How Image Degradations Affect Deep CNN-based Face Recognition?
ABSTRACT: Face recognition approaches that are based on deep convolutional neural
networks (CNN) have been dominating the field. The performance improvements
they have provided in the so-called in-the-wild datasets are significant;
however, their performance under image quality degradations has not been
assessed yet. This is particularly important, since in real-world face
recognition applications, images may contain various kinds of degradations due
to motion blur, noise, compression artifacts, color distortions, and occlusion.
In this work, we have addressed this problem and analyzed the influence of
these image degradations on the performance of deep CNN-based face recognition
approaches using the standard LFW closed-set identification protocol. We have
evaluated three popular deep CNN models, namely, the AlexNet, VGG-Face, and
GoogLeNet. Results have indicated that blur, noise, and occlusion cause a
significant decrease in performance, while deep CNN models are found to be
robust to distortions, such as color distortions and change in color balance.
| no_new_dataset | 0.947039 |
1610.00369 | Asif Hassan | A. Hassan, M. R. Amin, N. Mohammed, A. K. A. Azad | Sentiment Analysis on Bangla and Romanized Bangla Text (BRBT) using Deep
Recurrent models | null | null | null | null | cs.CL cs.IR cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentiment Analysis (SA) is an active research area in the digital age. With
rapid and constant growth of online social media sites and services, and the
increasing amount of textual data such as - statuses, comments, reviews etc.
available in them, application of automatic SA is on the rise. However, most of
the research works on SA in natural language processing (NLP) are based on
English language. Despite being the sixth most widely spoken language in the
world, Bangla still does not have a large and standard dataset. Because of
this, recent research works in Bangla have failed to produce results that can
be both comparable to works done by others and reusable as stepping stones for
future researchers to progress in this field. Therefore, we first tried to
provide a textual dataset - that includes not just Bangla, but Romanized Bangla
texts as well, is substantial, post-processed and multiple validated, ready to
be used in SA experiments. We tested this dataset in a Deep Recurrent model,
specifically, Long Short Term Memory (LSTM), using two types of loss functions
- binary crossentropy and categorical crossentropy, and also did some
experimental pre-training by using data from one validation to pre-train the
other and vice versa. Lastly, we documented the results along with some
analysis on them, which were promising.
| [
{
"version": "v1",
"created": "Sun, 2 Oct 2016 23:45:23 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Nov 2016 02:13:05 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Hassan",
"A.",
""
],
[
"Amin",
"M. R.",
""
],
[
"Mohammed",
"N.",
""
],
[
"Azad",
"A. K. A.",
""
]
] | TITLE: Sentiment Analysis on Bangla and Romanized Bangla Text (BRBT) using Deep
Recurrent models
ABSTRACT: Sentiment Analysis (SA) is an active research area in the digital age. With
rapid and constant growth of online social media sites and services, and the
increasing amount of textual data such as - statuses, comments, reviews etc.
available in them, application of automatic SA is on the rise. However, most of
the research works on SA in natural language processing (NLP) are based on
English language. Despite being the sixth most widely spoken language in the
world, Bangla still does not have a large and standard dataset. Because of
this, recent research works in Bangla have failed to produce results that can
be both comparable to works done by others and reusable as stepping stones for
future researchers to progress in this field. Therefore, we first tried to
provide a textual dataset - that includes not just Bangla, but Romanized Bangla
texts as well, is substantial, post-processed and multiple validated, ready to
be used in SA experiments. We tested this dataset in a Deep Recurrent model,
specifically, Long Short Term Memory (LSTM), using two types of loss functions
- binary crossentropy and categorical crossentropy, and also did some
experimental pre-training by using data from one validation to pre-train the
other and vice versa. Lastly, we documented the results along with some
analysis on them, which were promising.
| new_dataset | 0.973919 |
1610.04871 | Cristina Garcia Cifuentes | Cristina Garcia Cifuentes, Jan Issac, Manuel W\"uthrich, Stefan
Schaal, Jeannette Bohg | Probabilistic Articulated Real-Time Tracking for Robot Manipulation | 8 pages, 7 figures. Revision submitted to IEEE Robotics and
Automation Letters (RA-L). Fixed wrong order of bars in boxplots; further
argumentation | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a probabilistic filtering method which fuses joint measurements
with depth images to yield a precise, real-time estimate of the end-effector
pose in the camera frame. This avoids the need for frame transformations when
using it in combination with visual object tracking methods.
Precision is achieved by modeling and correcting biases in the joint
measurements as well as inaccuracies in the robot model, such as poor extrinsic
camera calibration. We make our method computationally efficient through a
principled combination of Kalman filtering of the joint measurements and
asynchronous depth-image updates based on the Coordinate Particle Filter.
We quantitatively evaluate our approach on a dataset recorded from a real
robotic platform, annotated with ground truth from a motion capture system. We
show that our approach is robust and accurate even under challenging conditions
such as fast motion, significant and long-term occlusions, and time-varying
biases. We release the dataset along with open-source code of our approach to
allow for quantitative comparison with alternative approaches.
| [
{
"version": "v1",
"created": "Sun, 16 Oct 2016 14:55:21 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Nov 2016 14:29:44 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Cifuentes",
"Cristina Garcia",
""
],
[
"Issac",
"Jan",
""
],
[
"Wüthrich",
"Manuel",
""
],
[
"Schaal",
"Stefan",
""
],
[
"Bohg",
"Jeannette",
""
]
] | TITLE: Probabilistic Articulated Real-Time Tracking for Robot Manipulation
ABSTRACT: We propose a probabilistic filtering method which fuses joint measurements
with depth images to yield a precise, real-time estimate of the end-effector
pose in the camera frame. This avoids the need for frame transformations when
using it in combination with visual object tracking methods.
Precision is achieved by modeling and correcting biases in the joint
measurements as well as inaccuracies in the robot model, such as poor extrinsic
camera calibration. We make our method computationally efficient through a
principled combination of Kalman filtering of the joint measurements and
asynchronous depth-image updates based on the Coordinate Particle Filter.
We quantitatively evaluate our approach on a dataset recorded from a real
robotic platform, annotated with ground truth from a motion capture system. We
show that our approach is robust and accurate even under challenging conditions
such as fast motion, significant and long-term occlusions, and time-varying
biases. We release the dataset along with open-source code of our approach to
allow for quantitative comparison with alternative approaches.
| new_dataset | 0.969062 |
1611.00284 | Zhenhua Feng | Xiaoning Song, Zhen-Hua Feng, Guosheng Hu, Josef Kittler, William
Christmas and Xiao-Jun Wu | Dictionary Integration using 3D Morphable Face Models for Pose-invariant
Collaborative-representation-based Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents a dictionary integration algorithm using 3D morphable face
models (3DMM) for pose-invariant collaborative-representation-based face
classification. To this end, we first fit a 3DMM to the 2D face images of a
dictionary to reconstruct the 3D shape and texture of each image. The 3D faces
are used to render a number of virtual 2D face images with arbitrary pose
variations to augment the training data, by merging the original and rendered
virtual samples to create an extended dictionary. Second, to reduce the
information redundancy of the extended dictionary and improve the sparsity of
reconstruction coefficient vectors using collaborative-representation-based
classification (CRC), we exploit an on-line elimination scheme to optimise the
extended dictionary by identifying the most representative training samples for
a given query. The final goal is to perform pose-invariant face classification
using the proposed dictionary integration method and the on-line pruning
strategy under the CRC framework. Experimental results obtained for a set of
well-known face datasets demonstrate the merits of the proposed method,
especially its robustness to pose variations.
| [
{
"version": "v1",
"created": "Tue, 1 Nov 2016 16:06:07 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Nov 2016 18:22:31 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Nov 2016 16:27:37 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Song",
"Xiaoning",
""
],
[
"Feng",
"Zhen-Hua",
""
],
[
"Hu",
"Guosheng",
""
],
[
"Kittler",
"Josef",
""
],
[
"Christmas",
"William",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] | TITLE: Dictionary Integration using 3D Morphable Face Models for Pose-invariant
Collaborative-representation-based Classification
ABSTRACT: The paper presents a dictionary integration algorithm using 3D morphable face
models (3DMM) for pose-invariant collaborative-representation-based face
classification. To this end, we first fit a 3DMM to the 2D face images of a
dictionary to reconstruct the 3D shape and texture of each image. The 3D faces
are used to render a number of virtual 2D face images with arbitrary pose
variations to augment the training data, by merging the original and rendered
virtual samples to create an extended dictionary. Second, to reduce the
information redundancy of the extended dictionary and improve the sparsity of
reconstruction coefficient vectors using collaborative-representation-based
classification (CRC), we exploit an on-line elimination scheme to optimise the
extended dictionary by identifying the most representative training samples for
a given query. The final goal is to perform pose-invariant face classification
using the proposed dictionary integration method and the on-line pruning
strategy under the CRC framework. Experimental results obtained for a set of
well-known face datasets demonstrate the merits of the proposed method,
especially its robustness to pose variations.
| no_new_dataset | 0.949995 |
1611.04953 | Xinchi Chen | Jingjing Gong, Xinchi Chen, Xipeng Qiu, Xuanjing Huang | End-to-End Neural Sentence Ordering Using Pointer Network | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentence ordering is one of the important tasks in NLP. Previous works mainly
focused on improving its performance by using a pair-wise strategy. However, it
is nontrivial for pair-wise models to incorporate the contextual sentence
information. In addition, error propagation could be introduced by using the
pipeline strategy in pair-wise models. In this paper, we propose an end-to-end
neural approach to address the sentence ordering problem, which uses the
pointer network (Ptr-Net) to alleviate the error propagation problem and
utilize the whole contextual information. Experimental results show the
effectiveness of the proposed model. Source codes and dataset of this paper are
available.
| [
{
"version": "v1",
"created": "Tue, 15 Nov 2016 17:38:10 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Nov 2016 16:38:30 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Gong",
"Jingjing",
""
],
[
"Chen",
"Xinchi",
""
],
[
"Qiu",
"Xipeng",
""
],
[
"Huang",
"Xuanjing",
""
]
] | TITLE: End-to-End Neural Sentence Ordering Using Pointer Network
ABSTRACT: Sentence ordering is one of the important tasks in NLP. Previous works mainly
focused on improving its performance by using a pair-wise strategy. However, it
is nontrivial for pair-wise models to incorporate the contextual sentence
information. In addition, error propagation could be introduced by using the
pipeline strategy in pair-wise models. In this paper, we propose an end-to-end
neural approach to address the sentence ordering problem, which uses the
pointer network (Ptr-Net) to alleviate the error propagation problem and
utilize the whole contextual information. Experimental results show the
effectiveness of the proposed model. Source codes and dataset of this paper are
available.
| no_new_dataset | 0.943086 |
1611.06612 | Guosheng Lin | Guosheng Lin, Anton Milan, Chunhua Shen, Ian Reid | RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic
Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, very deep convolutional neural networks (CNNs) have shown
outstanding performance in object recognition and have also been the first
choice for dense classification problems such as semantic segmentation.
However, repeated subsampling operations like pooling or convolution striding
in deep CNNs lead to a significant decrease in the initial image resolution.
Here, we present RefineNet, a generic multi-path refinement network that
explicitly exploits all the information available along the down-sampling
process to enable high-resolution prediction using long-range residual
connections. In this way, the deeper layers that capture high-level semantic
features can be directly refined using fine-grained features from earlier
convolutions. The individual components of RefineNet employ residual
connections following the identity mapping mindset, which allows for effective
end-to-end training. Further, we introduce chained residual pooling, which
captures rich background context in an efficient manner. We carry out
comprehensive experiments and set new state-of-the-art results on seven public
datasets. In particular, we achieve an intersection-over-union score of 83.4 on
the challenging PASCAL VOC 2012 dataset, which is the best reported result to
date.
| [
{
"version": "v1",
"created": "Sun, 20 Nov 2016 23:39:52 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2016 06:14:12 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Nov 2016 02:01:05 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Lin",
"Guosheng",
""
],
[
"Milan",
"Anton",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Reid",
"Ian",
""
]
] | TITLE: RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic
Segmentation
ABSTRACT: Recently, very deep convolutional neural networks (CNNs) have shown
outstanding performance in object recognition and have also been the first
choice for dense classification problems such as semantic segmentation.
However, repeated subsampling operations like pooling or convolution striding
in deep CNNs lead to a significant decrease in the initial image resolution.
Here, we present RefineNet, a generic multi-path refinement network that
explicitly exploits all the information available along the down-sampling
process to enable high-resolution prediction using long-range residual
connections. In this way, the deeper layers that capture high-level semantic
features can be directly refined using fine-grained features from earlier
convolutions. The individual components of RefineNet employ residual
connections following the identity mapping mindset, which allows for effective
end-to-end training. Further, we introduce chained residual pooling, which
captures rich background context in an efficient manner. We carry out
comprehensive experiments and set new state-of-the-art results on seven public
datasets. In particular, we achieve an intersection-over-union score of 83.4 on
the challenging PASCAL VOC 2012 dataset, which is the best reported result to
date.
| no_new_dataset | 0.950457 |
1611.07435 | Nicholas Browning | Nicholas J. Browning, Raghunathan Ramakrishnan, O. Anatole von
Lilienfeld, Ursula R\"othlisberger | Genetic optimization of training sets for improved machine learning
models of molecular properties | 9 pages, 6 figures | null | null | null | physics.comp-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The training of molecular models of quantum mechanical properties based on
statistical machine learning requires large datasets which exemplify the map
from chemical structure to molecular property. Intelligent a priori selection
of training examples is often difficult or impossible to achieve as prior
knowledge may be sparse or unavailable. Ordinarily representative selection of
training molecules from such datasets is achieved through random sampling. We
use genetic algorithms for the optimization of training set composition
consisting of tens of thousands of small organic molecules. The resulting
machine learning models are considerably more accurate with respect to small
randomly selected training sets: mean absolute errors for out-of-sample
predictions are reduced to ~25% for enthalpies, free energies, and zero-point
vibrational energy, to ~50% for heat-capacity, electron-spread, and
polarizability, and by more than ~20% for electronic properties such as
frontier orbital eigenvalues or dipole-moments. We discuss and present
optimized training sets consisting of 10 molecular classes for all molecular
properties studied. We show that these classes can be used to design improved
training sets for the generation of machine learning models of the same
properties in similar but unrelated molecular sets.
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2016 17:51:19 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Nov 2016 12:25:38 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Browning",
"Nicholas J.",
""
],
[
"Ramakrishnan",
"Raghunathan",
""
],
[
"von Lilienfeld",
"O. Anatole",
""
],
[
"Röthlisberger",
"Ursula",
""
]
] | TITLE: Genetic optimization of training sets for improved machine learning
models of molecular properties
ABSTRACT: The training of molecular models of quantum mechanical properties based on
statistical machine learning requires large datasets which exemplify the map
from chemical structure to molecular property. Intelligent a priori selection
of training examples is often difficult or impossible to achieve as prior
knowledge may be sparse or unavailable. Ordinarily representative selection of
training molecules from such datasets is achieved through random sampling. We
use genetic algorithms for the optimization of training set composition
consisting of tens of thousands of small organic molecules. The resulting
machine learning models are considerably more accurate with respect to small
randomly selected training sets: mean absolute errors for out-of-sample
predictions are reduced to ~25% for enthalpies, free energies, and zero-point
vibrational energy, to ~50% for heat-capacity, electron-spread, and
polarizability, and by more than ~20% for electronic properties such as
frontier orbital eigenvalues or dipole-moments. We discuss and present
optimized training sets consisting of 10 molecular classes for all molecular
properties studied. We show that these classes can be used to design improved
training sets for the generation of machine learning models of the same
properties in similar but unrelated molecular sets.
| no_new_dataset | 0.951369 |
1611.07703 | RaviKiran Sarvadevabhatla | Ravi Kiran Sarvadevabhatla, Shanthakumar Venkatraman, R. Venkatesh
Babu | 'Part'ly first among equals: Semantic part-based benchmarking for
state-of-the-art object recognition systems | Extended version of our ACCV-2016 paper. Author formatting modified | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An examination of object recognition challenge leaderboards (ILSVRC,
PASCAL-VOC) reveals that the top-performing classifiers typically exhibit small
differences amongst themselves in terms of error rate/mAP. To better
differentiate the top performers, additional criteria are required. Moreover,
the (test) images, on which the performance scores are based, predominantly
contain fully visible objects. Therefore, `harder' test images, mimicking the
challenging conditions (e.g. occlusion) in which humans routinely recognize
objects, need to be utilized for benchmarking. To address the concerns
mentioned above, we make two contributions. First, we systematically vary the
level of local object-part content, global detail and spatial context in images
from PASCAL VOC 2010 to create a new benchmarking dataset dubbed PPSS-12.
Second, we propose an object-part based benchmarking procedure which quantifies
classifiers' robustness to a range of visibility and contextual settings. The
benchmarking procedure relies on a semantic similarity measure that naturally
addresses potential semantic granularity differences between the category
labels in training and test datasets, thus eliminating manual mapping. We use
our procedure on the PPSS-12 dataset to benchmark top-performing classifiers
trained on the ILSVRC-2012 dataset. Our results show that the proposed
benchmarking procedure enables additional differentiation among
state-of-the-art object classifiers in terms of their ability to handle missing
content and insufficient object detail. Given this capability for additional
differentiation, our approach can potentially supplement existing benchmarking
procedures used in object recognition challenge leaderboards.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2016 09:38:09 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Nov 2016 14:06:06 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Sarvadevabhatla",
"Ravi Kiran",
""
],
[
"Venkatraman",
"Shanthakumar",
""
],
[
"Babu",
"R. Venkatesh",
""
]
] | TITLE: 'Part'ly first among equals: Semantic part-based benchmarking for
state-of-the-art object recognition systems
ABSTRACT: An examination of object recognition challenge leaderboards (ILSVRC,
PASCAL-VOC) reveals that the top-performing classifiers typically exhibit small
differences amongst themselves in terms of error rate/mAP. To better
differentiate the top performers, additional criteria are required. Moreover,
the (test) images, on which the performance scores are based, predominantly
contain fully visible objects. Therefore, `harder' test images, mimicking the
challenging conditions (e.g. occlusion) in which humans routinely recognize
objects, need to be utilized for benchmarking. To address the concerns
mentioned above, we make two contributions. First, we systematically vary the
level of local object-part content, global detail and spatial context in images
from PASCAL VOC 2010 to create a new benchmarking dataset dubbed PPSS-12.
Second, we propose an object-part based benchmarking procedure which quantifies
classifiers' robustness to a range of visibility and contextual settings. The
benchmarking procedure relies on a semantic similarity measure that naturally
addresses potential semantic granularity differences between the category
labels in training and test datasets, thus eliminating manual mapping. We use
our procedure on the PPSS-12 dataset to benchmark top-performing classifiers
trained on the ILSVRC-2012 dataset. Our results show that the proposed
benchmarking procedure enables additional differentiation among
state-of-the-art object classifiers in terms of their ability to handle missing
content and insufficient object detail. Given this capability for additional
differentiation, our approach can potentially supplement existing benchmarking
procedures used in object recognition challenge leaderboards.
| new_dataset | 0.964288 |
1611.08061 | Hexiang Hu | Hexiang Hu, Zhiwei Deng, Guang-tong Zhou, Fei Sha, Greg Mori | Recalling Holistic Information for Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation requires a detailed labeling of image pixels by object
category. Information derived from local image patches is necessary to describe
the detailed shape of individual objects. However, this information is
ambiguous and can result in noisy labels. Global inference of image content can
instead capture the general semantic concepts present. We advocate that
high-recall holistic inference of image concepts provides valuable information
for detailed pixel labeling. We build a two-stream neural network architecture
that facilitates information flow from holistic information to local pixels,
while keeping common image features shared among the low-level layers of both
the holistic analysis and segmentation branches. We empirically evaluate our
network on four standard semantic segmentation datasets. Our network obtains
state-of-the-art performance on PASCAL-Context and NYUDv2, and ablation studies
verify its effectiveness on ADE20K and SIFT-Flow.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 03:46:37 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Hu",
"Hexiang",
""
],
[
"Deng",
"Zhiwei",
""
],
[
"Zhou",
"Guang-tong",
""
],
[
"Sha",
"Fei",
""
],
[
"Mori",
"Greg",
""
]
] | TITLE: Recalling Holistic Information for Semantic Segmentation
ABSTRACT: Semantic segmentation requires a detailed labeling of image pixels by object
category. Information derived from local image patches is necessary to describe
the detailed shape of individual objects. However, this information is
ambiguous and can result in noisy labels. Global inference of image content can
instead capture the general semantic concepts present. We advocate that
high-recall holistic inference of image concepts provides valuable information
for detailed pixel labeling. We build a two-stream neural network architecture
that facilitates information flow from holistic information to local pixels,
while keeping common image features shared among the low-level layers of both
the holistic analysis and segmentation branches. We empirically evaluate our
network on four standard semantic segmentation datasets. Our network obtains
state-of-the-art performance on PASCAL-Context and NYUDv2, and ablation studies
verify its effectiveness on ADE20K and SIFT-Flow.
| no_new_dataset | 0.949482 |
1611.08091 | Junyu Wu | Junyu Wu and Shengyong Ding and Wei Xu and Hongyang Chao | Deep Joint Face Hallucination and Recognition | 10 pages, 2 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep models have achieved impressive performance for face hallucination
tasks. However, we observe that directly feeding the hallucinated facial images
into recognition models can even degrade the recognition performance despite
the much better visualization quality. In this paper, we address this problem
by jointly learning a deep model for two tasks, i.e. face hallucination and
recognition. In particular, we design an end-to-end deep convolution network
with hallucination sub-network cascaded by recognition sub-network. The
recognition sub-network is responsible for producing discriminative feature
representations using the hallucinated images as inputs generated by
hallucination sub-network. During training, we feed LR facial images into the
network and optimize the parameters by minimizing two loss items, i.e. 1) face
hallucination loss measured by the pixel wise difference between the ground
truth HR images and network-generated images; and 2) verification loss which is
measured by the classification error and intra-class distance. We extensively
evaluate our method on LFW and YTF datasets. The experimental results show that
our method can achieve recognition accuracy 97.95% on 4x down-sampled LFW
testing set, outperforming the accuracy 96.35% of conventional face recognition
model. And on the more challenging YTF dataset, we achieve recognition accuracy
90.65%, a margin over the recognition accuracy 89.45% obtained by conventional
face recognition model on the 4x down-sampled version.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 08:19:49 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Wu",
"Junyu",
""
],
[
"Ding",
"Shengyong",
""
],
[
"Xu",
"Wei",
""
],
[
"Chao",
"Hongyang",
""
]
] | TITLE: Deep Joint Face Hallucination and Recognition
ABSTRACT: Deep models have achieved impressive performance for face hallucination
tasks. However, we observe that directly feeding the hallucinated facial images
into recognition models can even degrade the recognition performance despite
the much better visualization quality. In this paper, we address this problem
by jointly learning a deep model for two tasks, i.e. face hallucination and
recognition. In particular, we design an end-to-end deep convolution network
with hallucination sub-network cascaded by recognition sub-network. The
recognition sub-network is responsible for producing discriminative feature
representations using the hallucinated images as inputs generated by
hallucination sub-network. During training, we feed LR facial images into the
network and optimize the parameters by minimizing two loss items, i.e. 1) face
hallucination loss measured by the pixel wise difference between the ground
truth HR images and network-generated images; and 2) verification loss which is
measured by the classification error and intra-class distance. We extensively
evaluate our method on LFW and YTF datasets. The experimental results show that
our method can achieve recognition accuracy 97.95% on 4x down-sampled LFW
testing set, outperforming the accuracy 96.35% of conventional face recognition
model. And on the more challenging YTF dataset, we achieve recognition accuracy
90.65%, a margin over the recognition accuracy 89.45% obtained by conventional
face recognition model on the 4x down-sampled version.
| no_new_dataset | 0.949856 |
1611.08096 | Zheqian Chen | Zheqian Chen and Ben Gao and Huimin Zhang and Zhou Zhao and Deng Cai | User Personalized Satisfaction Prediction via Multiple Instance Deep
Learning | draft for www | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community based question answering services have arisen as a popular
knowledge sharing pattern for netizens. With abundant interactions among users,
individuals are capable of obtaining satisfactory information. However, it is
not effective for users to attain answers within minutes. Users have to check
the progress over time until the satisfying answers submitted. We address this
problem as a user personalized satisfaction prediction task. Existing methods
usually exploit manual feature selection. It is not desirable as it requires
careful design and is labor intensive. In this paper, we settle this issue by
developing a new multiple instance deep learning framework. Specifically, in
our settings, each question follows a weakly supervised learning multiple
instance learning assumption, where its obtained answers can be regarded as
instance sets and we define the question resolved with at least one
satisfactory answer. We thus design an efficient framework exploiting multiple
instance learning property with deep learning to model the question answer
pairs. Extensive experiments on large scale datasets from Stack Exchange
demonstrate the feasibility of our proposed framework in predicting askers
personalized satisfaction. Our framework can be extended to numerous
applications such as UI satisfaction Prediction, multi armed bandit problem,
expert finding and so on.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 08:43:03 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Chen",
"Zheqian",
""
],
[
"Gao",
"Ben",
""
],
[
"Zhang",
"Huimin",
""
],
[
"Zhao",
"Zhou",
""
],
[
"Cai",
"Deng",
""
]
] | TITLE: User Personalized Satisfaction Prediction via Multiple Instance Deep
Learning
ABSTRACT: Community based question answering services have arisen as a popular
knowledge sharing pattern for netizens. With abundant interactions among users,
individuals are capable of obtaining satisfactory information. However, it is
not effective for users to attain answers within minutes. Users have to check
the progress over time until the satisfying answers are submitted. We address this
problem as a user personalized satisfaction prediction task. Existing methods
usually exploit manual feature selection. It is not desirable as it requires
careful design and is labor intensive. In this paper, we settle this issue by
developing a new multiple instance deep learning framework. Specifically, in
our settings, each question follows a weakly supervised learning multiple
instance learning assumption, where its obtained answers can be regarded as
instance sets and we define the question resolved with at least one
satisfactory answer. We thus design an efficient framework exploiting multiple
instance learning property with deep learning to model the question answer
pairs. Extensive experiments on large scale datasets from Stack Exchange
demonstrate the feasibility of our proposed framework in predicting askers
personalized satisfaction. Our framework can be extended to numerous
applications such as UI satisfaction Prediction, multi armed bandit problem,
expert finding and so on.
| no_new_dataset | 0.946001 |
1611.08107 | Junyu Wu | Shengyong Ding and Junyu Wu and Wei Xu and Hongyang Chao | Automatically Building Face Datasets of New Domains from Weakly Labeled
Data with Pretrained Models | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training data are critical in face recognition systems. However, labeling a
large scale face dataset for a particular domain is very tedious. In this paper,
we propose a method to automatically and incrementally construct datasets from
massive weakly labeled data of the target domain which are readily available on
the Internet with the help of a pretrained face model. More specifically,
given a large scale weakly labeled dataset in which each face image is
associated with a label, i.e. the name of an identity, we create a graph for
each identity with edges linking matched faces verified by the existing model
under a tight threshold. Then we use the maximal subgraph as the cleaned data
for that identity. With the cleaned dataset, we update the existing face model
and use the new model to filter the original dataset to get a larger cleaned
dataset. We collect a large weakly labeled dataset containing 530,560 Asian
face images of 7,962 identities from the Internet, which will be published for
the study of face recognition. By running the filtering process, we obtain a
cleaned dataset (99.7+% purity) of size 223,767 (recall 70.9%). On our testing
dataset of Asian faces, the model trained by the cleaned dataset achieves
recognition rate 93.1%, which obviously outperforms the model trained by the
public dataset CASIA whose recognition rate is 85.9%.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 09:11:21 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Ding",
"Shengyong",
""
],
[
"Wu",
"Junyu",
""
],
[
"Xu",
"Wei",
""
],
[
"Chao",
"Hongyang",
""
]
] | TITLE: Automatically Building Face Datasets of New Domains from Weakly Labeled
Data with Pretrained Models
ABSTRACT: Training data are critical in face recognition systems. However, labeling a
large scale face dataset for a particular domain is very tedious. In this paper,
we propose a method to automatically and incrementally construct datasets from
massive weakly labeled data of the target domain which are readily available on
the Internet with the help of a pretrained face model. More specifically,
given a large scale weakly labeled dataset in which each face image is
associated with a label, i.e. the name of an identity, we create a graph for
each identity with edges linking matched faces verified by the existing model
under a tight threshold. Then we use the maximal subgraph as the cleaned data
for that identity. With the cleaned dataset, we update the existing face model
and use the new model to filter the original dataset to get a larger cleaned
dataset. We collect a large weakly labeled dataset containing 530,560 Asian
face images of 7,962 identities from the Internet, which will be published for
the study of face recognition. By running the filtering process, we obtain a
cleaned dataset (99.7+% purity) of size 223,767 (recall 70.9%). On our testing
dataset of Asian faces, the model trained by the cleaned dataset achieves
recognition rate 93.1%, which obviously outperforms the model trained by the
public dataset CASIA whose recognition rate is 85.9%.
| no_new_dataset | 0.816626 |
1611.08135 | Zheqian Chen | Zheqian Chen and Chi Zhang and Zhou Zhao and Deng Cai | Question Retrieval for Community-based Question Answering via
Heterogeneous Network Integration Learning | null | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community based question answering platforms have attracted substantial users
to share knowledge and learn from each other. With the rapid enlargement of CQA
platforms, quantities of overlapped questions emerge, which makes users
confounded to select a proper reference. It is urgent for us to take effective
automated algorithms to reuse historical questions with corresponding answers.
In this paper we focus on the problem with question retrieval, which aims to
match historical questions that are relevant or semantically equivalent to
resolve one s query directly. The challenges in this task are the lexical gaps
between questions for the word ambiguity and word mismatch problem.
Furthermore, limited words in queried sentences cause sparsity of word
features. To alleviate these challenges, we propose a novel framework named
HNIL which encodes not only the question contents but also the askers' social
interactions to enhance the question embedding performance. More specifically,
we apply a random walk based learning method with a recurrent neural network to
match the similarities between the asker's question and historical questions
proposed by other users. Extensive experiments on a large scale dataset from a
real world CQA site show that employing the heterogeneous social network
information outperforms the other state of the art solutions in this task.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 11:01:32 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Chen",
"Zheqian",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zhao",
"Zhou",
""
],
[
"Cai",
"Deng",
""
]
] | TITLE: Question Retrieval for Community-based Question Answering via
Heterogeneous Network Integration Learning
ABSTRACT: Community based question answering platforms have attracted substantial users
to share knowledge and learn from each other. With the rapid enlargement of CQA
platforms, quantities of overlapped questions emerge, which makes users
confounded to select a proper reference. It is urgent for us to take effective
automated algorithms to reuse historical questions with corresponding answers.
In this paper we focus on the problem with question retrieval, which aims to
match historical questions that are relevant or semantically equivalent to
resolve one's query directly. The challenges in this task are the lexical gaps
between questions for the word ambiguity and word mismatch problem.
Furthermore, limited words in queried sentences cause sparsity of word
features. To alleviate these challenges, we propose a novel framework named
HNIL which encodes not only the question contents but also the askers' social
interactions to enhance the question embedding performance. More specifically,
we apply a random walk based learning method with a recurrent neural network to
match the similarities between the asker's question and historical questions
proposed by other users. Extensive experiments on a large scale dataset from a
real world CQA site show that employing the heterogeneous social network
information outperforms the other state of the art solutions in this task.
| no_new_dataset | 0.944434 |
1611.08144 | Daniel Gayo-Avello | Daniel Gayo-Avello | How I Stopped Worrying about the Twitter Archive at the Library of
Congress and Learned to Build a Little One for Myself | 22 pages, 13 figures | null | null | null | cs.CY cs.DL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Twitter is among the commonest sources of data employed in social media
research mainly because of its convenient APIs to collect tweets. However, most
researchers do not have access to the expensive Firehose and Twitter Historical
Archive, and they must rely on data collected with free APIs whose
representativeness has been questioned. In 2010 the Library of Congress
announced an agreement with Twitter to provide researchers access to the whole
Twitter Archive. However, such a task proved to be daunting and, at the moment
of this writing, no researcher has had the opportunity to access such
materials. Still, there have been experiences that proved that smaller
searchable archives are feasible and, therefore, amenable for academics to
build with relatively little resources. In this paper I describe my efforts to
build one such archive, covering the first three years of Twitter (actually
from March 2006 to July 2009) and containing 1.48 billion tweets. If you
carefully follow my directions you may have your very own little Twitter
Historical Archive and you may forget about paying for historical tweets.
Please note that to achieve that you should be proficient in some programming
language, knowledgeable about Twitter APIs, and have some basic knowledge of
ElasticSearch; moreover, you may very well get disappointed by the quality of
the contents of the final dataset.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 11:25:09 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Gayo-Avello",
"Daniel",
""
]
] | TITLE: How I Stopped Worrying about the Twitter Archive at the Library of
Congress and Learned to Build a Little One for Myself
ABSTRACT: Twitter is among the commonest sources of data employed in social media
research mainly because of its convenient APIs to collect tweets. However, most
researchers do not have access to the expensive Firehose and Twitter Historical
Archive, and they must rely on data collected with free APIs whose
representativeness has been questioned. In 2010 the Library of Congress
announced an agreement with Twitter to provide researchers access to the whole
Twitter Archive. However, such a task proved to be daunting and, at the moment
of this writing, no researcher has had the opportunity to access such
materials. Still, there have been experiences that proved that smaller
searchable archives are feasible and, therefore, amenable for academics to
build with relatively little resources. In this paper I describe my efforts to
build one such archive, covering the first three years of Twitter (actually
from March 2006 to July 2009) and containing 1.48 billion tweets. If you
carefully follow my directions you may have your very own little Twitter
Historical Archive and you may forget about paying for historical tweets.
Please note that to achieve that you should be proficient in some programming
language, knowledgeable about Twitter APIs, and have some basic knowledge of
ElasticSearch; moreover, you may very well get disappointed by the quality of
the contents of the final dataset.
| no_new_dataset | 0.909023 |
1611.08258 | Ali Diba | Ali Diba, Vivek Sharma, Ali Pazandeh, Hamed Pirsiavash, Luc Van Gool | Weakly Supervised Cascaded Convolutional Networks | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Object detection is a challenging task in visual understanding domain, and
even more so if the supervision is to be weak. Recently, few efforts to handle
the task without expensive human annotations is established by promising deep
neural network. A new architecture of cascaded networks is proposed to learn a
convolutional neural network (CNN) under such conditions. We introduce two such
architectures, with either two cascade stages or three which are trained in an
end-to-end pipeline. The first stage of both architectures extracts the best
candidates of class-specific region proposals by training a fully convolutional
network. In the case of the three stage architecture, the middle stage provides
object segmentation, using the output of the activation maps of first stage.
The final stage of both architectures is a part of a convolutional neural
network that performs multiple instance learning on proposals extracted in the
previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large
scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the
areas of weakly-supervised object detection, classification and localization.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 17:07:48 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Diba",
"Ali",
""
],
[
"Sharma",
"Vivek",
""
],
[
"Pazandeh",
"Ali",
""
],
[
"Pirsiavash",
"Hamed",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Weakly Supervised Cascaded Convolutional Networks
ABSTRACT: Object detection is a challenging task in visual understanding domain, and
even more so if the supervision is to be weak. Recently, a few efforts to handle
the task without expensive human annotations have been made using promising deep
neural networks. A new architecture of cascaded networks is proposed to learn a
convolutional neural network (CNN) under such conditions. We introduce two such
architectures, with either two cascade stages or three which are trained in an
end-to-end pipeline. The first stage of both architectures extracts the best
candidates of class-specific region proposals by training a fully convolutional
network. In the case of the three stage architecture, the middle stage provides
object segmentation, using the output of the activation maps of first stage.
The final stage of both architectures is a part of a convolutional neural
network that performs multiple instance learning on proposals extracted in the
previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large
scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the
areas of weakly-supervised object detection, classification and localization.
| no_new_dataset | 0.951323 |
1611.08272 | Alexander Kirillov | Alexander Kirillov, Evgeny Levinkov, Bjoern Andres, Bogdan
Savchynskyy, Carsten Rother | InstanceCut: from Edges to Instances with MultiCut | The code would be released at
https://github.com/alexander-kirillov/InstanceCut | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work addresses the task of instance-aware semantic segmentation. Our key
motivation is to design a simple method with a new modelling-paradigm, which
therefore has a different trade-off between advantages and disadvantages
compared to known approaches. Our approach, we term InstanceCut, represents the
problem by two output modalities: (i) an instance-agnostic semantic
segmentation and (ii) all instance-boundaries. The former is computed from a
standard convolutional neural network for semantic segmentation, and the latter
is derived from a new instance-aware edge detection model. To reason globally
about the optimal partitioning of an image into instances, we combine these two
modalities into a novel MultiCut formulation. We evaluate our approach on the
challenging CityScapes dataset. Despite the conceptual simplicity of our
approach, we achieve the best result among all published methods, and perform
particularly well for rare object classes.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 17:54:32 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Kirillov",
"Alexander",
""
],
[
"Levinkov",
"Evgeny",
""
],
[
"Andres",
"Bjoern",
""
],
[
"Savchynskyy",
"Bogdan",
""
],
[
"Rother",
"Carsten",
""
]
] | TITLE: InstanceCut: from Edges to Instances with MultiCut
ABSTRACT: This work addresses the task of instance-aware semantic segmentation. Our key
motivation is to design a simple method with a new modelling-paradigm, which
therefore has a different trade-off between advantages and disadvantages
compared to known approaches. Our approach, we term InstanceCut, represents the
problem by two output modalities: (i) an instance-agnostic semantic
segmentation and (ii) all instance-boundaries. The former is computed from a
standard convolutional neural network for semantic segmentation, and the latter
is derived from a new instance-aware edge detection model. To reason globally
about the optimal partitioning of an image into instances, we combine these two
modalities into a novel MultiCut formulation. We evaluate our approach on the
challenging CityScapes dataset. Despite the conceptual simplicity of our
approach, we achieve the best result among all published methods, and perform
particularly well for rare object classes.
| no_new_dataset | 0.949059 |
1611.08321 | Junhua Mao | Junhua Mao, Jiajing Xu, Yushi Jing, Alan Yuille | Training and Evaluating Multimodal Word Embeddings with Large-scale Web
Annotated Images | Appears in NIPS 2016. The datasets introduced in this work will be
gradually released on the project page | null | null | null | cs.LG cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we focus on training and evaluating effective word embeddings
with both text and visual information. More specifically, we introduce a
large-scale dataset with 300 million sentences describing over 40 million
images crawled and downloaded from publicly available Pins (i.e. an image with
sentence descriptions uploaded by users) on Pinterest. This dataset is more
than 200 times larger than MS COCO, the standard large-scale image dataset with
sentence descriptions. In addition, we construct an evaluation dataset to
directly assess the effectiveness of word embeddings in terms of finding
semantically similar or related words and phrases. The word/phrase pairs in
this evaluation dataset are collected from the click data with millions of
users in an image search system, thus contain rich semantic relationships.
Based on these datasets, we propose and compare several Recurrent Neural
Networks (RNNs) based multimodal (text and image) models. Experiments show that
our model benefits from incorporating the visual information into the word
embeddings, and a weight sharing strategy is crucial for learning such
multimodal embeddings. The project page is:
http://www.stat.ucla.edu/~junhua.mao/multimodal_embedding.html
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 23:15:56 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Mao",
"Junhua",
""
],
[
"Xu",
"Jiajing",
""
],
[
"Jing",
"Yushi",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Training and Evaluating Multimodal Word Embeddings with Large-scale Web
Annotated Images
ABSTRACT: In this paper, we focus on training and evaluating effective word embeddings
with both text and visual information. More specifically, we introduce a
large-scale dataset with 300 million sentences describing over 40 million
images crawled and downloaded from publicly available Pins (i.e. an image with
sentence descriptions uploaded by users) on Pinterest. This dataset is more
than 200 times larger than MS COCO, the standard large-scale image dataset with
sentence descriptions. In addition, we construct an evaluation dataset to
directly assess the effectiveness of word embeddings in terms of finding
semantically similar or related words and phrases. The word/phrase pairs in
this evaluation dataset are collected from the click data with millions of
users in an image search system, thus contain rich semantic relationships.
Based on these datasets, we propose and compare several Recurrent Neural
Networks (RNNs) based multimodal (text and image) models. Experiments show that
our model benefits from incorporating the visual information into the word
embeddings, and a weight sharing strategy is crucial for learning such
multimodal embeddings. The project page is:
http://www.stat.ucla.edu/~junhua.mao/multimodal_embedding.html
| new_dataset | 0.962532 |
1611.08372 | Chen Xu | Chen Xu, Zhouchen Lin, Hongbin Zha | A Unified Convex Surrogate for the Schatten-$p$ Norm | The paper is accepted by AAAI-17. We show that multi-factor matrix
factorization enjoys superiority over the traditional two-factor case | null | null | null | stat.ML cs.LG math.NA math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Schatten-$p$ norm ($0<p<1$) has been widely used to replace the nuclear
norm for better approximating the rank function. However, existing methods are
either 1) not scalable for large scale problems due to relying on singular
value decomposition (SVD) in every iteration, or 2) specific to some $p$
values, e.g., $1/2$, and $2/3$. In this paper, we show that for any $p$, $p_1$,
and $p_2 >0$ satisfying $1/p=1/p_1+1/p_2$, there is an equivalence between the
Schatten-$p$ norm of one matrix and the Schatten-$p_1$ and the Schatten-$p_2$
norms of its two factor matrices. We further extend the equivalence to multiple
factor matrices and show that all the factor norms can be convex and smooth for
any $p>0$. In contrast, the original Schatten-$p$ norm for $0<p<1$ is
non-convex and non-smooth. As an example we conduct experiments on matrix
completion. To utilize the convexity of the factor matrix norms, we adopt the
accelerated proximal alternating linearized minimization algorithm and
establish its sequence convergence. Experiments on both synthetic and real
datasets exhibit its superior performance over the state-of-the-art methods.
Its speed is also highly competitive.
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2016 08:03:31 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Xu",
"Chen",
""
],
[
"Lin",
"Zhouchen",
""
],
[
"Zha",
"Hongbin",
""
]
] | TITLE: A Unified Convex Surrogate for the Schatten-$p$ Norm
ABSTRACT: The Schatten-$p$ norm ($0<p<1$) has been widely used to replace the nuclear
norm for better approximating the rank function. However, existing methods are
either 1) not scalable for large scale problems due to relying on singular
value decomposition (SVD) in every iteration, or 2) specific to some $p$
values, e.g., $1/2$, and $2/3$. In this paper, we show that for any $p$, $p_1$,
and $p_2 >0$ satisfying $1/p=1/p_1+1/p_2$, there is an equivalence between the
Schatten-$p$ norm of one matrix and the Schatten-$p_1$ and the Schatten-$p_2$
norms of its two factor matrices. We further extend the equivalence to multiple
factor matrices and show that all the factor norms can be convex and smooth for
any $p>0$. In contrast, the original Schatten-$p$ norm for $0<p<1$ is
non-convex and non-smooth. As an example we conduct experiments on matrix
completion. To utilize the convexity of the factor matrix norms, we adopt the
accelerated proximal alternating linearized minimization algorithm and
establish its sequence convergence. Experiments on both synthetic and real
datasets exhibit its superior performance over the state-of-the-art methods.
Its speed is also highly competitive.
| no_new_dataset | 0.944434 |
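The norm equivalence summarized above generalizes a classical factorization identity. Its best-known special case, with $p=1$ and $p_1=p_2=2$ (so $1/p = 1/p_1 + 1/p_2$ holds), relates the nuclear norm to Frobenius norms of two factors; the general multi-factor statement for arbitrary $p>0$ is the paper's contribution and is only pointed to here.

```latex
% Special case of the factor-norm equivalence with p = 1, p_1 = p_2 = 2,
% i.e. 1/1 = 1/2 + 1/2: the nuclear (Schatten-1) norm of X equals the best
% Frobenius (Schatten-2) factorization, the identity exploited by most
% factored matrix-completion solvers.
\|X\|_{*}
  = \min_{X = U V^{\top}} \|U\|_{F}\,\|V\|_{F}
  = \min_{X = U V^{\top}} \tfrac{1}{2}\left(\|U\|_{F}^{2} + \|V\|_{F}^{2}\right)
```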
1611.08387 | Shuochen Su | Shuochen Su, Mauricio Delbracio, Jue Wang, Guillermo Sapiro, Wolfgang
Heidrich, Oliver Wang | Deep Video Deblurring | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motion blur from camera shake is a major problem in videos captured by
hand-held devices. Unlike single-image deblurring, video-based approaches can
take advantage of the abundant information that exists across neighboring
frames. As a result the best performing methods rely on aligning nearby frames.
However, aligning images is a computationally expensive and fragile procedure,
and methods that aggregate information must therefore be able to identify which
regions have been accurately aligned and which have not, a task which requires
high level scene understanding. In this work, we introduce a deep learning
solution to video deblurring, where a CNN is trained end-to-end to learn how to
accumulate information across frames. To train this network, we collected a
dataset of real videos recorded with a high framerate camera, which we use to
generate synthetic motion blur for supervision. We show that the features
learned from this dataset extend to deblurring motion blur that arises due to
camera shake in a wide range of videos, and compare the quality of results to a
number of other baselines.
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2016 08:51:51 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Su",
"Shuochen",
""
],
[
"Delbracio",
"Mauricio",
""
],
[
"Wang",
"Jue",
""
],
[
"Sapiro",
"Guillermo",
""
],
[
"Heidrich",
"Wolfgang",
""
],
[
"Wang",
"Oliver",
""
]
] | TITLE: Deep Video Deblurring
ABSTRACT: Motion blur from camera shake is a major problem in videos captured by
hand-held devices. Unlike single-image deblurring, video-based approaches can
take advantage of the abundant information that exists across neighboring
frames. As a result the best performing methods rely on aligning nearby frames.
However, aligning images is a computationally expensive and fragile procedure,
and methods that aggregate information must therefore be able to identify which
regions have been accurately aligned and which have not, a task which requires
high level scene understanding. In this work, we introduce a deep learning
solution to video deblurring, where a CNN is trained end-to-end to learn how to
accumulate information across frames. To train this network, we collected a
dataset of real videos recorded with a high framerate camera, which we use to
generate synthetic motion blur for supervision. We show that the features
learned from this dataset extend to deblurring motion blur that arises due to
camera shake in a wide range of videos, and compare the quality of results to a
number of other baselines.
| new_dataset | 0.908699 |
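The supervision described above comes from synthesizing motion blur out of sharp high-framerate footage. A common way to do that, sketched below as an assumption rather than the paper's exact pipeline, is to average short windows of consecutive frames and pair each average with the window's middle frame as the sharp target.

```python
import numpy as np

def synthesize_blur(frames, window=7):
    """Average `window` consecutive high-framerate frames to mimic the
    motion blur of a longer exposure.

    frames: (T, H, W, C) uint8 array of sharp frames.
    Returns (blurred, sharp) arrays usable as training pairs; the window
    length is an illustrative choice, not the paper's setting."""
    frames = frames.astype(np.float32)
    blurred, sharp = [], []
    for start in range(0, len(frames) - window + 1, window):
        clip = frames[start:start + window]
        blurred.append(clip.mean(axis=0))        # simulated blurry frame
        sharp.append(clip[window // 2])          # middle frame as ground truth
    return (np.stack(blurred).astype(np.uint8),
            np.stack(sharp).astype(np.uint8))
```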
1611.08408 | Pauline Luc | Pauline Luc and Camille Couprie and Soumith Chintala and Jakob Verbeek | Semantic Segmentation using Adversarial Networks | null | NIPS Workshop on Adversarial Training, Dec 2016, Barcelona, Spain | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial training has been shown to produce state of the art results for
generative image modeling. In this paper we propose an adversarial training
approach to train semantic segmentation models. We train a convolutional
semantic segmentation network along with an adversarial network that
discriminates segmentation maps coming either from the ground truth or from the
segmentation network. The motivation for our approach is that it can detect and
correct higher-order inconsistencies between ground truth segmentation maps and
the ones produced by the segmentation net. Our experiments show that our
adversarial training approach leads to improved accuracy on the Stanford
Background and PASCAL VOC 2012 datasets.
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2016 10:36:30 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Luc",
"Pauline",
""
],
[
"Couprie",
"Camille",
""
],
[
"Chintala",
"Soumith",
""
],
[
"Verbeek",
"Jakob",
""
]
] | TITLE: Semantic Segmentation using Adversarial Networks
ABSTRACT: Adversarial training has been shown to produce state of the art results for
generative image modeling. In this paper we propose an adversarial training
approach to train semantic segmentation models. We train a convolutional
semantic segmentation network along with an adversarial network that
discriminates segmentation maps coming either from the ground truth or from the
segmentation network. The motivation for our approach is that it can detect and
correct higher-order inconsistencies between ground truth segmentation maps and
the ones produced by the segmentation net. Our experiments show that our
adversarial training approach leads to improved accuracy on the Stanford
Background and PASCAL VOC 2012 datasets.
| no_new_dataset | 0.957358 |
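The training scheme above couples a per-pixel segmentation loss with an adversarial term from a discriminator that separates ground-truth label maps from predicted ones. The PyTorch sketch below shows one standard way to write those two losses; the network interfaces, the softmax-probability input to the discriminator, and the weight `lam` are illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def adversarial_seg_losses(segmenter, discriminator, image, gt_onehot, lam=0.1):
    """One training step's losses for adversarial semantic segmentation.

    segmenter(image)           -> class logits of shape (B, C, H, W)
    discriminator(image, maps) -> logit that `maps` is a ground-truth labeling
    gt_onehot                  -> one-hot ground truth of shape (B, C, H, W)
    """
    logits = segmenter(image)
    pred = torch.softmax(logits, dim=1)          # probability maps fed to D

    # Discriminator objective: ground truth -> 1, predictions -> 0.
    d_real = discriminator(image, gt_onehot)
    d_fake = discriminator(image, pred.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

    # Segmenter objective: per-pixel cross-entropy plus fooling the discriminator.
    ce = F.cross_entropy(logits, gt_onehot.argmax(dim=1))
    d_on_pred = discriminator(image, pred)
    adv = F.binary_cross_entropy_with_logits(d_on_pred, torch.ones_like(d_on_pred))
    seg_loss = ce + lam * adv
    return seg_loss, d_loss
```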
1611.08417 | Christophe Guyeux | Sara Barakat, Bechara Al Bouna, Mohamed Nassar, Christophe Guyeux | On the Evaluation of the Privacy Breach in Disassociated Set-Valued
Datasets | Accepted to Secrypt 2016 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data anonymization is gaining much attention these days as it provides the
fundamental requirements to safely outsource datasets containing identifying
information. While some techniques add noise to protect privacy, others use
generalization to hide the link between sensitive and non-sensitive information
or separate the dataset into clusters to gain more utility. In the latter,
often referred to as bucketization, data values are kept intact, only the link
is hidden to maximize the utility. In this paper, we showcase the limits of
disassociation, a bucketization technique that divides a set-valued dataset
into $k^m$-anonymous clusters. We demonstrate that a privacy breach might occur
if the disassociated dataset is subject to a cover problem. We finally evaluate
the privacy breach using the quantitative privacy breach detection algorithm on
real disassociated datasets.
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2016 11:03:55 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Barakat",
"Sara",
""
],
[
"Bouna",
"Bechara Al",
""
],
[
"Nassar",
"Mohamed",
""
],
[
"Guyeux",
"Christophe",
""
]
] | TITLE: On the Evaluation of the Privacy Breach in Disassociated Set-Valued
Datasets
ABSTRACT: Data anonymization is gaining much attention these days as it provides the
fundamental requirements to safely outsource datasets containing identifying
information. While some techniques add noise to protect privacy, others use
generalization to hide the link between sensitive and non-sensitive information
or separate the dataset into clusters to gain more utility. In the latter,
often referred to as bucketization, data values are kept intact, only the link
is hidden to maximize the utility. In this paper, we showcase the limits of
disassociation, a bucketization technique that divides a set-valued dataset
into $k^m$-anonymous clusters. We demonstrate that a privacy breach might occur
if the disassociated dataset is subject to a cover problem. We finally evaluate
the privacy breach using the quantitative privacy breach detection algorithm on
real disassociated datasets.
| no_new_dataset | 0.948106 |
1611.08573 | Dhanya R. Krishnan | Dhanya R Krishnan | The Marriage of Incremental and Approximate Computing | http://dl.acm.org/citation.cfm?id=2883026 | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most data analytics systems that require low-latency execution and efficient
utilization of computing resources, increasingly adopt two computational
paradigms, namely, incremental and approximate computing. Incremental
computation updates the output incrementally instead of re-computing everything
from scratch for successive runs of a job with input changes. Approximate
computation returns an approximate output for a job instead of the exact
output.
Both paradigms rely on computing over a subset of data items instead of
computing over the entire dataset, but they differ in their means for skipping
parts of the computation. Incremental computing relies on the memoization of
intermediate results of sub-computations, and reusing these memoized results
across jobs for sub-computations that are unaffected by the changed input.
Approximate computing relies on representative sampling of the entire dataset
to compute over a subset of data items.
In this thesis, we make the observation that these two computing paradigms
are complementary, and can be married together! The high level idea is to:
design a sampling algorithm that biases the sample selection to the memoized
data items from previous runs. To concretize this idea, we designed an online
stratified sampling algorithm that uses self-adjusting computation to produce
an incrementally updated approximate output with bounded error. We implemented
our algorithm in a data analytics system called IncApprox based on Apache Spark
Streaming. Our evaluation of the system shows that IncApprox achieves the
benefits of both incremental and approximate computing.
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2016 20:05:08 GMT"
}
] | 2016-11-28T00:00:00 | [
[
"Krishnan",
"Dhanya R",
""
]
] | TITLE: The Marriage of Incremental and Approximate Computing
ABSTRACT: Most data analytics systems that require low-latency execution and efficient
utilization of computing resources, increasingly adopt two computational
paradigms, namely, incremental and approximate computing. Incremental
computation updates the output incrementally instead of re-computing everything
from scratch for successive runs of a job with input changes. Approximate
computation returns an approximate output for a job instead of the exact
output.
Both paradigms rely on computing over a subset of data items instead of
computing over the entire dataset, but they differ in their means for skipping
parts of the computation. Incremental computing relies on the memoization of
intermediate results of sub-computations, and reusing these memoized results
across jobs for sub-computations that are unaffected by the changed input.
Approximate computing relies on representative sampling of the entire dataset
to compute over a subset of data items.
In this thesis, we make the observation that these two computing paradigms
are complementary, and can be married together! The high level idea is to:
design a sampling algorithm that biases the sample selection to the memoized
data items from previous runs. To concretize this idea, we designed an online
stratified sampling algorithm that uses self-adjusting computation to produce
an incrementally updated approximate output with bounded error. We implemented
our algorithm in a data analytics system called IncApprox based on Apache Spark
Streaming. Our evaluation of the system shows that IncApprox achieves the
benefits of both incremental and approximate computing.
| no_new_dataset | 0.944434 |
1004.3460 | Uwe Aickelin | Feng Gu, Julie Greensmith, Robert Oates and Uwe Aickelin | PCA 4 DCA: The Application Of Principal Component Analysis To The
Dendritic Cell Algorithm | 6 pages, 4 figures, 3 tables, (UKCI 2009) | Proceedings of the 9th Annual Workshop on Computational
Intelligence (UKCI 2009), Nottingham, UK, 2009 | null | null | cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As one of the newest members in the field of artificial immune systems (AIS),
the Dendritic Cell Algorithm (DCA) is based on behavioural models of natural
dendritic cells (DCs). Unlike other AIS, the DCA does not rely on training
data, instead domain or expert knowledge is required to predetermine the
mapping between input signals from a particular instance to the three
categories used by the DCA. This data preprocessing phase has received the
criticism of having manually over-?tted the data to the algorithm, which is
undesirable. Therefore, in this paper we have attempted to ascertain if it is
possible to use principal component analysis (PCA) techniques to automatically
categorise input data while still generating useful and accurate classication
results. The integrated system is tested with a biometrics dataset for the
stress recognition of automobile drivers. The experimental results have shown
the application of PCA to the DCA for the purpose of automated data
preprocessing is successful.
| [
{
"version": "v1",
"created": "Tue, 20 Apr 2010 14:20:04 GMT"
}
] | 2016-11-26T00:00:00 | [
[
"Gu",
"Feng",
""
],
[
"Greensmith",
"Julie",
""
],
[
"Oates",
"Robert",
""
],
[
"Aickelin",
"Uwe",
""
]
] | TITLE: PCA 4 DCA: The Application Of Principal Component Analysis To The
Dendritic Cell Algorithm
ABSTRACT: As one of the newest members in the field of artificial immune systems (AIS),
the Dendritic Cell Algorithm (DCA) is based on behavioural models of natural
dendritic cells (DCs). Unlike other AIS, the DCA does not rely on training
data, instead domain or expert knowledge is required to predetermine the
mapping between input signals from a particular instance to the three
categories used by the DCA. This data preprocessing phase has received the
criticism of having manually over-fitted the data to the algorithm, which is
undesirable. Therefore, in this paper we have attempted to ascertain if it is
possible to use principal component analysis (PCA) techniques to automatically
categorise input data while still generating useful and accurate classification
results. The integrated system is tested with a biometrics dataset for the
stress recognition of automobile drivers. The experimental results have shown
the application of PCA to the DCA for the purpose of automated data
preprocessing is successful.
| no_new_dataset | 0.946941 |
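The preprocessing idea above is to let PCA, rather than expert knowledge, map raw attributes onto the DCA's three input signal categories. A scikit-learn sketch of that projection follows; which principal component plays the role of which signal, and the [0, 1] rescaling, are illustrative choices rather than the paper's procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

def pca_signals(raw_attributes):
    """Map raw instance attributes to three DCA input signal streams.

    raw_attributes: (n_instances, n_features) array.
    Returns an (n_instances, 3) array scaled to [0, 1]; treating the three
    components as the three DCA signal categories is an illustrative choice.
    """
    components = PCA(n_components=3).fit_transform(raw_attributes)
    return MinMaxScaler().fit_transform(components)

# Example with random data standing in for driver biometrics.
signals = pca_signals(np.random.rand(500, 20))
print(signals.shape)   # (500, 3)
```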
1204.0864 | Hai Phan Nhat | Phan Nhat Hai, Pascal Poncelet, Maguelonne Teisseire | GeT_Move: An Efficient and Unifying Spatio-Temporal Pattern Mining
Algorithm for Moving Objects | 17 pages, 24 figures, submitted to KDD, TKDD | null | null | null | cs.DB | http://creativecommons.org/licenses/by/3.0/ | Recent improvements in positioning technology have led to a much wider
availability of massive moving object data. A crucial task is to find the
moving objects that travel together. Usually, these object sets are called
spatio-temporal patterns. Due to the emergence of many different kinds of
spatio-temporal patterns in recent years, different approaches have been
proposed to extract them. However, each approach only focuses on mining a
specific kind of pattern. In addition to being a painstaking task due to the
large number of algorithms used to mine and manage patterns, it is also time
consuming. Moreover, we have to execute these algorithms again whenever new
data are added to the existing database. To address these issues, we first
redefine spatio-temporal patterns in the itemset context. Secondly, we propose
a unifying approach, named GeT_Move, which uses a frequent closed itemset-based
spatio-temporal pattern-mining algorithm to mine and manage different
spatio-temporal patterns. GeT_Move is implemented in two versions which are
GeT_Move and Incremental GeT_Move. To optimize the efficiency and to free the
parameters setting, we also propose a Parameter Free Incremental GeT_Move
algorithm. Comprehensive experiments are performed on real datasets as well as
large synthetic datasets to demonstrate the effectiveness and efficiency of our
approaches.
| [
{
"version": "v1",
"created": "Wed, 4 Apr 2012 05:07:47 GMT"
}
] | 2016-11-26T00:00:00 | [
[
"Hai",
"Phan Nhat",
""
],
[
"Poncelet",
"Pascal",
""
],
[
"Teisseire",
"Maguelonne",
""
]
] | TITLE: GeT_Move: An Efficient and Unifying Spatio-Temporal Pattern Mining
Algorithm for Moving Objects
ABSTRACT: Recent improvements in positioning technology have led to a much wider
availability of massive moving object data. A crucial task is to find the
moving objects that travel together. Usually, these object sets are called
spatio-temporal patterns. Due to the emergence of many different kinds of
spatio-temporal patterns in recent years, different approaches have been
proposed to extract them. However, each approach only focuses on mining a
specific kind of pattern. In addition to being a painstaking task due to the
large number of algorithms used to mine and manage patterns, it is also time
consuming. Moreover, we have to execute these algorithms again whenever new
data are added to the existing database. To address these issues, we first
redefine spatio-temporal patterns in the itemset context. Secondly, we propose
a unifying approach, named GeT_Move, which uses a frequent closed itemset-based
spatio-temporal pattern-mining algorithm to mine and manage different
spatio-temporal patterns. GeT_Move is implemented in two versions which are
GeT_Move and Incremental GeT_Move. To optimize the efficiency and to free the
parameters setting, we also propose a Parameter Free Incremental GeT_Move
algorithm. Comprehensive experiments are performed on real datasets as well as
large synthetic datasets to demonstrate the effectiveness and efficiency of our
approaches.
| no_new_dataset | 0.947088 |
1405.6500 | Lei Gai | Lei Gai, Wei Chen, Zhichao Xu, Changhe Qiu, and Tengjiao Wang | Towards Efficient Path Query on Social Network with Hybrid RDF
Management | null | null | null | null | cs.DB cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The scalability and flexibility of Resource Description Framework (RDF) model
make it ideally suited for representing online social networks (OSN). One basic
operation in OSN is to find chains of relations, such as k-Hop friends. Property
path query in SPARQL can express this type of operation, but its implementation
suffers from performance problems considering the ever growing data size and
complexity of OSN. In this paper, we present a main memory/disk based hybrid RDF
data management framework for efficient property path query. In this hybrid
framework, we realize an efficient in-memory algebra operator for property path
query using graph traversal, and estimate the cost of this operator to
cooperate with existing cost-based optimization. Experiments on benchmark and
real dataset demonstrated that our approach can achieve a good tradeoff between
data load expense and online query performance.
| [
{
"version": "v1",
"created": "Mon, 26 May 2014 08:29:19 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Jun 2014 01:39:38 GMT"
}
] | 2016-11-25T00:00:00 | [
[
"Gai",
"Lei",
""
],
[
"Chen",
"Wei",
""
],
[
"Xu",
"Zhichao",
""
],
[
"Qiu",
"Changhe",
""
],
[
"Wang",
"Tengjiao",
""
]
] | TITLE: Towards Efficient Path Query on Social Network with Hybrid RDF
Management
ABSTRACT: The scalability and flexibility of Resource Description Framework (RDF) model
make it ideally suited for representing online social networks (OSN). One basic
operation in OSN is to find chains of relations, such as k-Hop friends. Property
path query in SPARQL can express this type of operation, but its implementation
suffers from performance problems considering the ever growing data size and
complexity of OSN. In this paper, we present a main memory/disk based hybrid RDF
data management framework for efficient property path query. In this hybrid
framework, we realize an efficient in-memory algebra operator for property path
query using graph traversal, and estimate the cost of this operator to
cooperate with existing cost-based optimization. Experiments on benchmark and
real dataset demonstrated that our approach can achieve a good tradeoff between
data load expense and online query performance.
| no_new_dataset | 0.942718 |
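The in-memory operator described above evaluates property path queries such as k-hop friends by graph traversal instead of repeated joins. The plain-Python sketch below shows the traversal side of that idea over an adjacency map; the SPARQL snippet in the comment is only the usual friends-of-friends example, and the toy data is made up.

```python
from collections import deque

# A fixed-length SPARQL 1.1 property path, e.g. friends of friends:
#   SELECT ?f WHERE { :alice foaf:knows/foaf:knows ?f }
# can be evaluated in memory by a bounded breadth-first traversal.

def k_hop(adj, start, k):
    """All nodes reachable from `start` within at most k hops.
    `adj` maps a node id to the set of its neighbours."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                reached.add(nxt)
                frontier.append((nxt, depth + 1))
    return reached

adj = {"alice": {"bob"}, "bob": {"alice", "carol"}, "carol": {"dave"}}
print(k_hop(adj, "alice", 2))   # {'bob', 'carol'}
```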
1605.03718 | Anna Khoreva | Anna Khoreva, Rodrigo Benenson, Fabio Galasso, Matthias Hein, Bernt
Schiele | Improved Image Boundaries for Better Video Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph-based video segmentation methods rely on superpixels as starting point.
While most previous work has focused on the construction of the graph edges and
weights as well as solving the graph partitioning problem, this paper focuses
on better superpixels for video segmentation. We demonstrate by a comparative
analysis that superpixels extracted from boundaries perform best, and show that
boundary estimation can be significantly improved via image and time domain
cues. With superpixels generated from our better boundaries we observe
consistent improvement for two video segmentation methods in two different
datasets.
| [
{
"version": "v1",
"created": "Thu, 12 May 2016 08:14:00 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Nov 2016 10:25:47 GMT"
}
] | 2016-11-24T00:00:00 | [
[
"Khoreva",
"Anna",
""
],
[
"Benenson",
"Rodrigo",
""
],
[
"Galasso",
"Fabio",
""
],
[
"Hein",
"Matthias",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Improved Image Boundaries for Better Video Segmentation
ABSTRACT: Graph-based video segmentation methods rely on superpixels as starting point.
While most previous work has focused on the construction of the graph edges and
weights as well as solving the graph partitioning problem, this paper focuses
on better superpixels for video segmentation. We demonstrate by a comparative
analysis that superpixels extracted from boundaries perform best, and show that
boundary estimation can be significantly improved via image and time domain
cues. With superpixels generated from our better boundaries we observe
consistent improvement for two video segmentation methods in two different
datasets.
| no_new_dataset | 0.953622 |
1605.09304 | Anh Nguyen | Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, Jeff
Clune | Synthesizing the preferred inputs for neurons in neural networks via
deep generator networks | 29 pages, 35 figures, NIPS camera-ready | null | null | null | cs.NE cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNNs) have demonstrated state-of-the-art results on
many pattern recognition tasks, especially vision classification problems.
Understanding the inner workings of such computational brains is both
fascinating basic science that is interesting in its own right - similar to why
we study the human brain - and will enable researchers to further improve DNNs.
One path to understanding how a neural network functions internally is to study
what each of its neurons has learned to detect. One such method is called
activation maximization (AM), which synthesizes an input (e.g. an image) that
highly activates a neuron. Here we dramatically improve the qualitative state
of the art of activation maximization by harnessing a powerful, learned prior:
a deep generator network (DGN). The algorithm (1) generates qualitatively
state-of-the-art synthetic images that look almost real, (2) reveals the
features learned by each neuron in an interpretable way, (3) generalizes well
to new datasets and somewhat well to different network architectures without
requiring the prior to be relearned, and (4) can be considered as a
high-quality generative method (in this case, by generating novel, creative,
interesting, recognizable images).
| [
{
"version": "v1",
"created": "Mon, 30 May 2016 16:22:54 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jun 2016 15:52:04 GMT"
},
{
"version": "v3",
"created": "Mon, 6 Jun 2016 17:34:59 GMT"
},
{
"version": "v4",
"created": "Thu, 27 Oct 2016 22:16:07 GMT"
},
{
"version": "v5",
"created": "Wed, 23 Nov 2016 18:41:12 GMT"
}
] | 2016-11-24T00:00:00 | [
[
"Nguyen",
"Anh",
""
],
[
"Dosovitskiy",
"Alexey",
""
],
[
"Yosinski",
"Jason",
""
],
[
"Brox",
"Thomas",
""
],
[
"Clune",
"Jeff",
""
]
] | TITLE: Synthesizing the preferred inputs for neurons in neural networks via
deep generator networks
ABSTRACT: Deep neural networks (DNNs) have demonstrated state-of-the-art results on
many pattern recognition tasks, especially vision classification problems.
Understanding the inner workings of such computational brains is both
fascinating basic science that is interesting in its own right - similar to why
we study the human brain - and will enable researchers to further improve DNNs.
One path to understanding how a neural network functions internally is to study
what each of its neurons has learned to detect. One such method is called
activation maximization (AM), which synthesizes an input (e.g. an image) that
highly activates a neuron. Here we dramatically improve the qualitative state
of the art of activation maximization by harnessing a powerful, learned prior:
a deep generator network (DGN). The algorithm (1) generates qualitatively
state-of-the-art synthetic images that look almost real, (2) reveals the
features learned by each neuron in an interpretable way, (3) generalizes well
to new datasets and somewhat well to different network architectures without
requiring the prior to be relearned, and (4) can be considered as a
high-quality generative method (in this case, by generating novel, creative,
interesting, recognizable images).
| no_new_dataset | 0.946151 |
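Activation maximization with a learned prior, as summarized above, means optimizing the latent code of a frozen generator so that the decoded image drives a chosen neuron of the network under study as high as possible. The PyTorch sketch below shows that gradient-ascent loop; `generator`, `classifier`, the code dimension, step size, and the small L2 penalty on the code are stand-ins, not the paper's exact setup.

```python
import torch

def activation_maximization(generator, classifier, unit, code_dim=4096,
                            steps=200, lr=0.1, weight_decay=1e-3):
    """Gradient ascent on a generator's latent code so that the decoded
    image maximally activates `unit` in the classifier's output layer.
    Both networks are frozen; only the code is optimized."""
    code = torch.randn(1, code_dim, requires_grad=True)
    opt = torch.optim.SGD([code], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = generator(code)                   # decode code -> image
        activation = classifier(image)[0, unit]   # target neuron's activation
        loss = -activation + weight_decay * code.pow(2).sum()
        loss.backward()
        opt.step()
    return generator(code).detach()
```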
1605.09553 | Chenxi Liu | Chenxi Liu, Junhua Mao, Fei Sha, Alan Yuille | Attention Correctness in Neural Image Captioning | To appear in AAAI-17. See http://www.cs.jhu.edu/~cxliu/ for
supplementary material | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attention mechanisms have recently been introduced in deep learning for
various tasks in natural language processing and computer vision. But despite
their popularity, the "correctness" of the implicitly-learned attention maps
has only been assessed qualitatively by visualization of several examples. In
this paper we focus on evaluating and improving the correctness of attention in
neural image captioning models. Specifically, we propose a quantitative
evaluation metric for the consistency between the generated attention maps and
human annotations, using recently released datasets with alignment between
regions in images and entities in captions. We then propose novel models with
different levels of explicit supervision for learning attention maps during
training. The supervision can be strong when alignment between regions and
caption entities are available, or weak when only object segments and
categories are provided. We show on the popular Flickr30k and COCO datasets
that introducing supervision of attention maps during training solidly improves
both attention correctness and caption quality, showing the promise of making
machine perception more human-like.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 10:04:20 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Nov 2016 07:29:46 GMT"
}
] | 2016-11-24T00:00:00 | [
[
"Liu",
"Chenxi",
""
],
[
"Mao",
"Junhua",
""
],
[
"Sha",
"Fei",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Attention Correctness in Neural Image Captioning
ABSTRACT: Attention mechanisms have recently been introduced in deep learning for
various tasks in natural language processing and computer vision. But despite
their popularity, the "correctness" of the implicitly-learned attention maps
has only been assessed qualitatively by visualization of several examples. In
this paper we focus on evaluating and improving the correctness of attention in
neural image captioning models. Specifically, we propose a quantitative
evaluation metric for the consistency between the generated attention maps and
human annotations, using recently released datasets with alignment between
regions in images and entities in captions. We then propose novel models with
different levels of explicit supervision for learning attention maps during
training. The supervision can be strong when alignment between regions and
caption entities are available, or weak when only object segments and
categories are provided. We show on the popular Flickr30k and COCO datasets
that introducing supervision of attention maps during training solidly improves
both attention correctness and caption quality, showing the promise of making
machine perception more human-like.
| no_new_dataset | 0.952042 |
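The evaluation above scores how well a generated attention map agrees with the human-annotated region for the mentioned entity. One natural way to compute such a consistency score, and only an approximation of the paper's definition, is the share of normalized attention mass falling inside the ground-truth mask, as in the numpy sketch below.

```python
import numpy as np

def attention_mass_in_region(attention_map, region_mask, eps=1e-8):
    """Share of attention falling inside the annotated region.

    attention_map: (H, W) non-negative weights for one generated word.
    region_mask:   (H, W) binary mask of the ground-truth region aligned
                   with the entity that word refers to.
    Returns a value in [0, 1]; 1 means all attention lies on the region.
    """
    att = attention_map / (attention_map.sum() + eps)   # normalize to sum 1
    return float((att * region_mask).sum())

att = np.random.rand(14, 14)
mask = np.zeros((14, 14))
mask[3:8, 4:9] = 1
print(attention_mass_in_region(att, mask))
```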
1606.08390 | Armand Joulin | Allan Jabri, Armand Joulin, Laurens van der Maaten | Revisiting Visual Question Answering Baselines | European Conference on Computer Vision | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual question answering (VQA) is an interesting learning setting for
evaluating the abilities and shortcomings of current systems for image
understanding. Many of the recently proposed VQA systems include attention or
memory mechanisms designed to support "reasoning". For multiple-choice VQA,
nearly all of these systems train a multi-class classifier on image and
question features to predict an answer. This paper questions the value of these
common practices and develops a simple alternative model based on binary
classification. Instead of treating answers as competing choices, our model
receives the answer as input and predicts whether or not an
image-question-answer triplet is correct. We evaluate our model on the Visual7W
Telling and the VQA Real Multiple Choice tasks, and find that even simple
versions of our model perform competitively. Our best model achieves
state-of-the-art performance on the Visual7W Telling task and compares
surprisingly well with the most complex systems proposed for the VQA Real
Multiple Choice task. We explore variants of the model and study its
transferability between both datasets. We also present an error analysis of our
model that suggests a key problem of current VQA systems lies in the lack of
visual grounding of concepts that occur in the questions and answers. Overall,
our results suggest that the performance of current VQA systems is not
significantly better than that of systems designed to exploit dataset biases.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2016 18:07:58 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Nov 2016 21:26:06 GMT"
}
] | 2016-11-24T00:00:00 | [
[
"Jabri",
"Allan",
""
],
[
"Joulin",
"Armand",
""
],
[
"van der Maaten",
"Laurens",
""
]
] | TITLE: Revisiting Visual Question Answering Baselines
ABSTRACT: Visual question answering (VQA) is an interesting learning setting for
evaluating the abilities and shortcomings of current systems for image
understanding. Many of the recently proposed VQA systems include attention or
memory mechanisms designed to support "reasoning". For multiple-choice VQA,
nearly all of these systems train a multi-class classifier on image and
question features to predict an answer. This paper questions the value of these
common practices and develops a simple alternative model based on binary
classification. Instead of treating answers as competing choices, our model
receives the answer as input and predicts whether or not an
image-question-answer triplet is correct. We evaluate our model on the Visual7W
Telling and the VQA Real Multiple Choice tasks, and find that even simple
versions of our model perform competitively. Our best model achieves
state-of-the-art performance on the Visual7W Telling task and compares
surprisingly well with the most complex systems proposed for the VQA Real
Multiple Choice task. We explore variants of the model and study its
transferability between both datasets. We also present an error analysis of our
model that suggests a key problem of current VQA systems lies in the lack of
visual grounding of concepts that occur in the questions and answers. Overall,
our results suggest that the performance of current VQA systems is not
significantly better than that of systems designed to exploit dataset biases.
| no_new_dataset | 0.948632 |
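The abstract above frames multiple-choice VQA as binary classification over image-question-answer triplets. The sketch below illustrates that framing with a hypothetical PyTorch scorer; the feature extractors, dimensions, and layer sizes are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch of the binary-classification view of multiple-choice VQA:
# score each image-question-answer triplet and pick the highest-scoring candidate.
import torch
import torch.nn as nn

class TripletScorer(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + 2 * txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one logit: is this triplet correct?
        )

    def forward(self, img_feat, q_feat, a_feat):
        # Concatenate image, question, and candidate-answer features into one vector.
        x = torch.cat([img_feat, q_feat, a_feat], dim=-1)
        return self.mlp(x).squeeze(-1)

# At test time, score every answer candidate and choose the best one.
scorer = TripletScorer()
img = torch.randn(1, 2048)
q = torch.randn(1, 300)
candidates = torch.randn(4, 300)          # four multiple-choice answers
scores = scorer(img.expand(4, -1), q.expand(4, -1), candidates)
print(scores.argmax().item())             # index of the predicted answer
```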
1607.08539 | Maciej Halber | Maciej Halber and Thomas Funkhouser | Fine-To-Coarse Global Registration of RGB-D Scans | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RGB-D scanning of indoor environments is important for many applications,
including real estate, interior design, and virtual reality. However, it is
still challenging to register RGB-D images from a hand-held camera over a long
video sequence into a globally consistent 3D model. Current methods often can
lose tracking or drift and thus fail to reconstruct salient structures in large
environments (e.g., parallel walls in different rooms). To address this
problem, we propose a "fine-to-coarse" global registration algorithm that
leverages robust registrations at finer scales to seed detection and
enforcement of new correspondence and structural constraints at coarser scales.
To test global registration algorithms, we provide a benchmark with 10,401
manually-clicked point correspondences in 25 scenes from the SUN3D dataset.
During experiments with this benchmark, we find that our fine-to-coarse
algorithm registers long RGB-D sequences better than previous methods.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2016 17:19:46 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2016 15:59:00 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Nov 2016 04:55:29 GMT"
}
] | 2016-11-24T00:00:00 | [
[
"Halber",
"Maciej",
""
],
[
"Funkhouser",
"Thomas",
""
]
] | TITLE: Fine-To-Coarse Global Registration of RGB-D Scans
ABSTRACT: RGB-D scanning of indoor environments is important for many applications,
including real estate, interior design, and virtual reality. However, it is
still challenging to register RGB-D images from a hand-held camera over a long
video sequence into a globally consistent 3D model. Current methods often can
lose tracking or drift and thus fail to reconstruct salient structures in large
environments (e.g., parallel walls in different rooms). To address this
problem, we propose a "fine-to-coarse" global registration algorithm that
leverages robust registrations at finer scales to seed detection and
enforcement of new correspondence and structural constraints at coarser scales.
To test global registration algorithms, we provide a benchmark with 10,401
manually-clicked point correspondences in 25 scenes from the SUN3D dataset.
During experiments with this benchmark, we find that our fine-to-coarse
algorithm registers long RGB-D sequences better than previous methods.
| no_new_dataset | 0.905782 |
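The abstract above introduces a benchmark of manually clicked point correspondences for scoring global registration, without detailing the evaluation protocol. The sketch below assumes one simple way such a benchmark could be used: transform each clicked point by its frame's estimated pose and average the residual distances. The data layout and the error statistic are illustrative assumptions, not the paper's protocol.

```python
# Sketch of scoring an estimated global registration against clicked correspondences.
import numpy as np

def correspondence_error(poses, correspondences):
    """poses: dict frame_id -> 4x4 camera-to-world matrix.
    correspondences: list of ((frame_a, point_a_xyz), (frame_b, point_b_xyz)) pairs,
    with each point expressed in its own frame's local coordinates."""
    errors = []
    for (fa, pa), (fb, pb) in correspondences:
        # Lift both clicked points into world coordinates using the estimated poses.
        wa = poses[fa] @ np.append(np.asarray(pa, dtype=float), 1.0)
        wb = poses[fb] @ np.append(np.asarray(pb, dtype=float), 1.0)
        errors.append(np.linalg.norm(wa[:3] - wb[:3]))
    return float(np.mean(errors))

# Tiny example with two frames related by a 1 m translation along x.
poses = {0: np.eye(4), 1: np.eye(4)}
poses[1][0, 3] = 1.0
pairs = [((0, [1.0, 0.0, 0.0]), (1, [0.0, 0.0, 0.0]))]
print(correspondence_error(poses, pairs))  # 0.0 -- the registration explains the click
```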
1611.03751 | Pengfei Xu | Pengfei Xu, Jiaheng Lu | Top-k String Auto-Completion with Synonyms | 15 pages | null | null | null | cs.IR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Auto-completion is one of the most prominent features of modern information
systems. Existing auto-completion solutions provide suggestions based on the
beginning of the character sequence typed so far (i.e. the prefix).
However, in many real applications, one entity often has synonyms or
abbreviations. For example, "DBMS" is an abbreviation of "Database Management
Systems". In this paper, we study a novel type of auto-completion by using
synonyms and abbreviations. We propose three trie-based algorithms to solve
top-k auto-completion with synonyms, each with different space and time
complexity trade-offs. Experiments on large-scale datasets show that it is
possible to support effective and efficient synonym-based retrieval of
completions of a million strings with thousands of synonym rules at about a
microsecond per completion, while incurring only a small space overhead (i.e.
160-200 bytes per string). The source code of our experiments can be downloaded at:
http://udbms.cs.helsinki.fi/?projects/autocompletion/download .
| [
{
"version": "v1",
"created": "Fri, 11 Nov 2016 15:40:06 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2016 20:12:56 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Nov 2016 22:29:33 GMT"
}
] | 2016-11-24T00:00:00 | [
[
"Xu",
"Pengfei",
""
],
[
"Lu",
"Jiaheng",
""
]
] | TITLE: Top-k String Auto-Completion with Synonyms
ABSTRACT: Auto-completion is one of the most prominent features of modern information
systems. Existing auto-completion solutions provide suggestions based on the
beginning of the character sequence typed so far (i.e. the prefix).
However, in many real applications, one entity often has synonyms or
abbreviations. For example, "DBMS" is an abbreviation of "Database Management
Systems". In this paper, we study a novel type of auto-completion by using
synonyms and abbreviations. We propose three trie-based algorithms to solve
top-k auto-completion with synonyms, each with different space and time
complexity trade-offs. Experiments on large-scale datasets show that it is
possible to support effective and efficient synonym-based retrieval of
completions of a million strings with thousands of synonym rules at about a
microsecond per completion, while incurring only a small space overhead (i.e.
160-200 bytes per string). The source code of our experiments can be downloaded at:
http://udbms.cs.helsinki.fi/?projects/autocompletion/download .
| no_new_dataset | 0.944587 |
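The abstract above states the problem of top-k auto-completion with synonym rules but, naturally, does not include the trie-based algorithms themselves. The sketch below is only a naive baseline that makes the problem concrete: it expands the typed prefix with applicable synonym rules and then performs ordinary prefix matching. It is not one of the paper's three algorithms and ignores their space and time trade-offs.

```python
# Naive baseline sketch for top-k completion with synonyms.
import heapq

def top_k_completions(prefix, strings_with_scores, synonyms, k=5):
    """strings_with_scores: dict string -> score (higher is better).
    synonyms: dict alias -> canonical form, e.g. {"dbms": "database management systems"}."""
    queries = {prefix.lower()}
    for alias, canonical in synonyms.items():
        if prefix.lower().startswith(alias):
            # Rewrite the matched alias into its canonical form before matching.
            queries.add(canonical + prefix.lower()[len(alias):])
    matches = [(score, s) for s, score in strings_with_scores.items()
               if any(s.lower().startswith(q) for q in queries)]
    return [s for _, s in heapq.nlargest(k, matches)]

data = {"database management systems course": 10, "dbms tuning tips": 7, "databases 101": 3}
rules = {"dbms": "database management systems"}
print(top_k_completions("dbms", data, rules, k=2))
# ['database management systems course', 'dbms tuning tips']
```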