Dataset schema (from the dataset viewer):

id: string, 9-16 chars
submitter: string, 3-64 chars, nullable
authors: string, 5-6.63k chars
title: string, 7-245 chars
comments: string, 1-482 chars, nullable
journal-ref: string, 4-382 chars, nullable
doi: string, 9-151 chars, nullable
report-no: string, 984 distinct values
categories: string, 5-108 chars
license: string, 9 distinct values
abstract: string, 83-3.41k chars
versions: list, 1-20 items
update_date: timestamp[s], 2007-05-23 to 2025-04-11
authors_parsed: sequence, 1-427 items
prompt: string, 166-3.49k chars
label: string, 2 classes
prob: float64, 0.5-0.98

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1611.01752 | Pavol Bielik | Pavol Bielik, Veselin Raychev, Martin Vechev | Learning a Static Analyzer from Data | null | null | null | null | cs.PL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To be practically useful, modern static analyzers must precisely model the
effects of both statements in the programming language and frameworks used by
the program under analysis. While important, manually addressing these
challenges is difficult for at least two reasons: (i) the effects on the
overall analysis can be non-trivial, and (ii) as the size and complexity of
modern libraries increase, so does the number of cases the analysis must handle.
In this paper we present a new, automated approach for creating static
analyzers: instead of manually providing the various inference rules of the
analyzer, the key idea is to learn these rules from a dataset of programs. Our
method consists of two ingredients: (i) a synthesis algorithm capable of
learning a candidate analyzer from a given dataset, and (ii) a counter-example
guided learning procedure which generates new programs beyond those in the
initial dataset, critical for discovering corner cases and ensuring the learned
analysis generalizes to unseen programs.
We implemented and instantiated our approach to the task of learning
JavaScript static analysis rules for a subset of points-to analysis and for
allocation sites analysis. These are challenging yet important problems that
have received significant research attention. We show that our approach is
effective: our system automatically discovered practical and useful inference
rules for many cases that are tricky to manually identify and are missed by
state-of-the-art, manually tuned analyzers.
| [
{
"version": "v1",
"created": "Sun, 6 Nov 2016 10:35:56 GMT"
},
{
"version": "v2",
"created": "Sun, 25 Jun 2017 16:32:21 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Bielik",
"Pavol",
""
],
[
"Raychev",
"Veselin",
""
],
[
"Vechev",
"Martin",
""
]
] | TITLE: Learning a Static Analyzer from Data
ABSTRACT: To be practically useful, modern static analyzers must precisely model the
effects of both statements in the programming language and frameworks used by
the program under analysis. While important, manually addressing these
challenges is difficult for at least two reasons: (i) the effects on the
overall analysis can be non-trivial, and (ii) as the size and complexity of
modern libraries increase, so does the number of cases the analysis must handle.
In this paper we present a new, automated approach for creating static
analyzers: instead of manually providing the various inference rules of the
analyzer, the key idea is to learn these rules from a dataset of programs. Our
method consists of two ingredients: (i) a synthesis algorithm capable of
learning a candidate analyzer from a given dataset, and (ii) a counter-example
guided learning procedure which generates new programs beyond those in the
initial dataset, critical for discovering corner cases and ensuring the learned
analysis generalizes to unseen programs.
We implemented and instantiated our approach to the task of learning
JavaScript static analysis rules for a subset of points-to analysis and for
allocation sites analysis. These are challenging yet important problems that
have received significant research attention. We show that our approach is
effective: our system automatically discovered practical and useful inference
rules for many cases that are tricky to manually identify and are missed by
state-of-the-art, manually tuned analyzers.
| no_new_dataset | 0.940844 |
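As a reading aid, the counter-example guided loop this abstract describes can be sketched in a few lines of Python. The `synthesize` and `find_counterexample` callables below are hypothetical placeholders for the paper's synthesis algorithm and program generator; this illustrates the loop's structure only, not the authors' implementation.

```python
# Minimal sketch of a counter-example guided learning loop (CEGIS-style).
# `synthesize` fits a candidate analyzer to the current dataset and
# `find_counterexample` searches for a program on which the candidate is
# wrong; both are hypothetical placeholders, not the paper's actual code.
def learn_analyzer(dataset, synthesize, find_counterexample, max_iters=100):
    candidate = None
    for _ in range(max_iters):
        candidate = synthesize(dataset)        # fit rules to the current data
        cex = find_counterexample(candidate)   # program where the rules fail
        if cex is None:
            break                              # no refuting program was found
        dataset.append(cex)                    # grow the dataset and retry
    return candidate
```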
1611.03000 | Amirhossein Tavanaei | Amirhossein Tavanaei and Anthony S. Maida | Bio-Inspired Spiking Convolutional Neural Network using Layer-wise
Sparse Coding and STDP Learning | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical feature discovery using non-spiking convolutional neural
networks (CNNs) has attracted much recent interest in machine learning and
computer vision. However, it is still not well understood how to create a
biologically plausible network of brain-like, spiking neurons with multi-layer,
unsupervised learning. This paper explores a novel bio-inspired spiking CNN
that is trained in a greedy, layer-wise fashion. The proposed network consists
of a spiking convolutional-pooling layer followed by a feature discovery layer
extracting independent visual features. Kernels for the convolutional layer are
trained using local learning. The learning is implemented using a sparse,
spiking auto-encoder representing primary visual features. The feature
discovery layer extracts independent features by probabilistic, leaky
integrate-and-fire (LIF) neurons that are sparsely active in response to
stimuli. The layer of the probabilistic, LIF neurons implicitly provides
lateral inhibition to extract sparse and independent features. Experimental
results show that the convolutional layer is stack-admissible, enabling it to
support multi-layer learning. The visual features obtained from the proposed
probabilistic LIF neurons in the feature discovery layer are utilized for
training a classifier. The classification results attest to the independent and
informative visual features extracted in a hierarchy of convolutional and
feature discovery layers. The proposed model is evaluated on the MNIST digit
dataset using clean and noisy images. The recognition performance for clean
images is above 98%. The performance loss for recognizing the noisy images is
in the range 0.1% to 8.5% depending on noise types and densities. This level of
performance loss indicates that the network is robust to additive noise.
| [
{
"version": "v1",
"created": "Wed, 9 Nov 2016 16:25:41 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2017 16:40:17 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2017 17:14:05 GMT"
},
{
"version": "v4",
"created": "Sat, 24 Jun 2017 02:20:57 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Tavanaei",
"Amirhossein",
""
],
[
"Maida",
"Anthony S.",
""
]
] | TITLE: Bio-Inspired Spiking Convolutional Neural Network using Layer-wise
Sparse Coding and STDP Learning
ABSTRACT: Hierarchical feature discovery using non-spiking convolutional neural
networks (CNNs) has attracted much recent interest in machine learning and
computer vision. However, it is still not well understood how to create a
biologically plausible network of brain-like, spiking neurons with multi-layer,
unsupervised learning. This paper explores a novel bio-inspired spiking CNN
that is trained in a greedy, layer-wise fashion. The proposed network consists
of a spiking convolutional-pooling layer followed by a feature discovery layer
extracting independent visual features. Kernels for the convolutional layer are
trained using local learning. The learning is implemented using a sparse,
spiking auto-encoder representing primary visual features. The feature
discovery layer extracts independent features by probabilistic, leaky
integrate-and-fire (LIF) neurons that are sparsely active in response to
stimuli. The layer of the probabilistic, LIF neurons implicitly provides
lateral inhibition to extract sparse and independent features. Experimental
results show that the convolutional layer is stack-admissible, enabling it to
support multi-layer learning. The visual features obtained from the proposed
probabilistic LIF neurons in the feature discovery layer are utilized for
training a classifier. The classification results attest to the independent and
informative visual features extracted in a hierarchy of convolutional and
feature discovery layers. The proposed model is evaluated on the MNIST digit
dataset using clean and noisy images. The recognition performance for clean
images is above 98%. The performance loss for recognizing the noisy images is
in the range 0.1% to 8.5% depending on noise types and densities. This level of
performance loss indicates that the network is robust to additive noise.
| no_new_dataset | 0.954816 |
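For readers unfamiliar with the neuron model named in this abstract, here is a minimal discrete-time leaky integrate-and-fire (LIF) simulation. All constants are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire neuron: leak toward rest,
    integrate input, emit a spike and reset when the threshold is crossed."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v += (dt / tau) * (-v + i_t)   # leaky integration step
        if v >= v_thresh:              # threshold crossing -> spike
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
spike_train = lif_simulate(rng.uniform(0.0, 60.0, size=1000))
print("mean firing rate:", spike_train.mean())
```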
1611.05321 | Aurelien Lucchi | Wenhu Chen and Aurelien Lucchi and Thomas Hofmann | A Semi-supervised Framework for Image Captioning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art approaches for image captioning require supervised training
data consisting of captions with paired image data. These methods are typically
unable to use unsupervised data such as textual data with no corresponding
images, which is a much more abundant commodity. We here propose a novel way of
using such textual data by artificially generating missing visual information.
We evaluate this learning approach on a newly designed model that detects
visual concepts present in an image and feeds them to a reviewer-decoder
architecture with an attention mechanism. Unlike previous approaches that
encode visual concepts using word embeddings, we instead suggest using regional
image features which capture more intrinsic information. The main benefit of
this architecture is that it synthesizes meaningful thought vectors that
capture salient image properties and then applies a soft attentive decoder to
decode the thought vectors and generate image captions. We evaluate our model
on both Microsoft COCO and Flickr30K datasets and demonstrate that this model
combined with our semi-supervised learning method can largely improve
performance and help the model to generate more accurate and diverse captions.
| [
{
"version": "v1",
"created": "Wed, 16 Nov 2016 15:33:12 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2016 13:51:31 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Jun 2017 08:24:44 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Chen",
"Wenhu",
""
],
[
"Lucchi",
"Aurelien",
""
],
[
"Hofmann",
"Thomas",
""
]
] | TITLE: A Semi-supervised Framework for Image Captioning
ABSTRACT: State-of-the-art approaches for image captioning require supervised training
data consisting of captions with paired image data. These methods are typically
unable to use unsupervised data such as textual data with no corresponding
images, which is a much more abundant commodity. We here propose a novel way of
using such textual data by artificially generating missing visual information.
We evaluate this learning approach on a newly designed model that detects
visual concepts present in an image and feeds them to a reviewer-decoder
architecture with an attention mechanism. Unlike previous approaches that
encode visual concepts using word embeddings, we instead suggest using regional
image features which capture more intrinsic information. The main benefit of
this architecture is that it synthesizes meaningful thought vectors that
capture salient image properties and then applies a soft attentive decoder to
decode the thought vectors and generate image captions. We evaluate our model
on both Microsoft COCO and Flickr30K datasets and demonstrate that this model
combined with our semi-supervised learning method can largely improve
performance and help the model to generate more accurate and diverse captions.
| no_new_dataset | 0.94887 |
1611.08240 | Nishant Rai | Amlan Kar, Nishant Rai, Karan Sikka, Gaurav Sharma | AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks for
Human Action Recognition in Videos | CVPR 2017 Camera Ready Version | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel method for temporally pooling frames in a video for the
task of human action recognition. The method is motivated by the observation
that there are only a small number of frames which, together, contain
sufficient information to discriminate an action class present in a video, from
the rest. The proposed method learns to pool such discriminative and
informative frames, while discarding a majority of the non-informative frames
in a single temporal scan of the video. Our algorithm does so by continuously
predicting the discriminative importance of each video frame and subsequently
pooling them in a deep learning framework. We show the effectiveness of our
proposed pooling method on standard benchmarks where it consistently improves
on baseline pooling methods, with both RGB- and optical-flow-based convolutional
networks. Further, in combination with complementary video representations, we
show results that are competitive with respect to the state-of-the-art results
on two challenging and publicly available benchmark datasets.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 16:26:11 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2016 18:04:51 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Jun 2017 16:20:12 GMT"
},
{
"version": "v4",
"created": "Sun, 25 Jun 2017 08:55:48 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Kar",
"Amlan",
""
],
[
"Rai",
"Nishant",
""
],
[
"Sikka",
"Karan",
""
],
[
"Sharma",
"Gaurav",
""
]
] | TITLE: AdaScan: Adaptive Scan Pooling in Deep Convolutional Neural Networks for
Human Action Recognition in Videos
ABSTRACT: We propose a novel method for temporally pooling frames in a video for the
task of human action recognition. The method is motivated by the observation
that there are only a small number of frames which, together, contain
sufficient information to discriminate an action class present in a video, from
the rest. The proposed method learns to pool such discriminative and
informative frames, while discarding a majority of the non-informative frames
in a single temporal scan of the video. Our algorithm does so by continuously
predicting the discriminative importance of each video frame and subsequently
pooling them in a deep learning framework. We show the effectiveness of our
proposed pooling method on standard benchmarks where it consistently improves
on baseline pooling methods, with both RGB- and optical-flow-based convolutional
networks. Further, in combination with complementary video representations, we
show results that are competitive with respect to the state-of-the-art results
on two challenging and publicly available benchmark datasets.
| no_new_dataset | 0.947039 |
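The adaptive pooling this abstract describes amounts to a running weighted average computed in a single scan of the frames. The sketch below illustrates that idea only; `importance` stands in for the learned per-frame importance predictor and is replaced here by an arbitrary stub.

```python
import numpy as np

def adascan_pool(frame_features, importance):
    """Single temporal scan: each frame receives an importance weight and the
    video descriptor is updated as a running weighted average."""
    pooled = np.zeros_like(frame_features[0])
    total = 1e-8                                # accumulated weight so far
    for f in frame_features:
        w = importance(f, pooled)               # predicted importance in [0, 1]
        pooled = (total * pooled + w * f) / (total + w)
        total += w
    return pooled

rng = np.random.default_rng(1)
frames = rng.normal(size=(30, 128))                      # 30 frames, 128-d each
stub = lambda f, pooled: 1.0 / (1.0 + np.exp(-f.mean())) # placeholder predictor
print(adascan_pool(frames, stub).shape)                  # (128,)
```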
1702.00178 | Filip Korzeniowski | Filip Korzeniowski and Gerhard Widmer | On the Futility of Learning Complex Frame-Level Language Models for
Chord Recognition | Published at AES Conference on Semantic Audio 2017 | null | 10.17743/aesconf.2017.978-1-942220-15-2 | null | cs.SD cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chord recognition systems use temporal models to post-process frame-wise
chord predictions from acoustic models. Traditionally, first-order models such
as Hidden Markov Models were used for this task, with recent works suggesting
to apply Recurrent Neural Networks instead. Due to their ability to learn
longer-term dependencies, these models are supposed to learn and to apply
musical knowledge, instead of just smoothing the output of the acoustic model.
In this paper, we argue that learning complex temporal models at the level of
audio frames is futile in principle, and that non-Markovian models do not
perform better than their first-order counterparts. We support our argument
through three experiments on the McGill Billboard dataset. The first two show
1) that when learning complex temporal models at the frame level, improvements
in chord sequence modelling are marginal; and 2) that these improvements do not
translate when applied within a full chord recognition system. The third, still
rather preliminary experiment gives first indications that the use of complex
sequential models for chord prediction at higher temporal levels might be more
promising.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2017 09:44:44 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2017 11:24:42 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Korzeniowski",
"Filip",
""
],
[
"Widmer",
"Gerhard",
""
]
] | TITLE: On the Futility of Learning Complex Frame-Level Language Models for
Chord Recognition
ABSTRACT: Chord recognition systems use temporal models to post-process frame-wise
chord predictions from acoustic models. Traditionally, first-order models such
as Hidden Markov Models were used for this task, with recent works suggesting
to apply Recurrent Neural Networks instead. Due to their ability to learn
longer-term dependencies, these models are supposed to learn and to apply
musical knowledge, instead of just smoothing the output of the acoustic model.
In this paper, we argue that learning complex temporal models at the level of
audio frames is futile in principle, and that non-Markovian models do not
perform better than their first-order counterparts. We support our argument
through three experiments on the McGill Billboard dataset. The first two show
1) that when learning complex temporal models at the frame level, improvements
in chord sequence modelling are marginal; and 2) that these improvements do not
translate when applied within a full chord recognition system. The third, still
rather preliminary experiment gives first indications that the use of complex
sequential models for chord prediction at higher temporal levels might be more
promising.
| no_new_dataset | 0.951323 |
1703.00617 | Benjamin Rubinstein | Neil G. Marchant and Benjamin I. P. Rubinstein | In Search of an Entity Resolution OASIS: Optimal Asymptotic Sequential
Importance Sampling | 13 pages, 5 figures | null | null | null | cs.LG cs.DB stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Entity resolution (ER) presents unique challenges for evaluation methodology.
While crowdsourcing platforms acquire ground truth, sound approaches to
sampling must drive labelling efforts. In ER, extreme class imbalance between
matching and non-matching records can lead to enormous labelling requirements
when seeking statistically consistent estimates for rigorous evaluation. This
paper addresses this important challenge with the OASIS algorithm: a sampler
and F-measure estimator for ER evaluation. OASIS draws samples from a (biased)
instrumental distribution, chosen to ensure estimators with optimal asymptotic
variance. As new labels are collected, OASIS updates this instrumental
distribution via a Bayesian latent variable model of the annotator oracle, to
quickly focus on unlabelled items providing more information. We prove that
the resulting estimates of F-measure, precision, and recall converge to the true
population values. Thorough comparisons of sampling methods on a variety of ER
datasets demonstrate significant labelling reductions of up to 83% without loss
to estimate accuracy.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2017 04:49:22 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2017 07:34:10 GMT"
},
{
"version": "v3",
"created": "Mon, 26 Jun 2017 01:28:50 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Marchant",
"Neil G.",
""
],
[
"Rubinstein",
"Benjamin I. P.",
""
]
] | TITLE: In Search of an Entity Resolution OASIS: Optimal Asymptotic Sequential
Importance Sampling
ABSTRACT: Entity resolution (ER) presents unique challenges for evaluation methodology.
While crowdsourcing platforms acquire ground truth, sound approaches to
sampling must drive labelling efforts. In ER, extreme class imbalance between
matching and non-matching records can lead to enormous labelling requirements
when seeking statistically consistent estimates for rigorous evaluation. This
paper addresses this important challenge with the OASIS algorithm: a sampler
and F-measure estimator for ER evaluation. OASIS draws samples from a (biased)
instrumental distribution, chosen to ensure estimators with optimal asymptotic
variance. As new labels are collected, OASIS updates this instrumental
distribution via a Bayesian latent variable model of the annotator oracle, to
quickly focus on unlabelled items providing more information. We prove that
the resulting estimates of F-measure, precision, and recall converge to the true
population values. Thorough comparisons of sampling methods on a variety of ER
datasets demonstrate significant labelling reductions of up to 83% without loss
to estimate accuracy.
| no_new_dataset | 0.949716 |
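The core estimator behind OASIS, an importance-sampled F-measure, can be illustrated as follows. This sketch uses a fixed instrumental distribution `q` and omits the adaptive updating and the Bayesian oracle model, so it is a simplified stand-in for, not a reproduction of, the paper's algorithm.

```python
import numpy as np

def sample_f1(preds, oracle, q, n_labels, rng):
    """F1 estimate from an importance-weighted label sample. The common 1/m
    factor in the weighted confusion counts cancels in the F1 ratio."""
    idx = rng.choice(len(preds), size=n_labels, p=q)   # items sent for labelling
    w = 1.0 / (len(preds) * q[idx])                    # importance weights
    y = np.array([oracle(i) for i in idx], dtype=bool) # ground-truth labels
    p = preds[idx]
    tp = np.mean(w * (p & y))
    fp = np.mean(w * (p & ~y))
    fn = np.mean(w * (~p & y))
    return 2 * tp / (2 * tp + fp + fn)

rng = np.random.default_rng(2)
n = 100_000
truth = rng.random(n) < 0.01                    # extreme class imbalance
preds = truth ^ (rng.random(n) < 0.005)         # noisy match predictions
q = np.where(preds, 50.0, 1.0)                  # oversample predicted matches
q /= q.sum()
print("estimated F1:", sample_f1(preds, lambda i: truth[i], q, 2_000, rng))
```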
1706.00153 | Yuxin Peng | Xin Huang, Yuxin Peng, and Mingkuan Yuan | Cross-modal Common Representation Learning by Hybrid Transfer Network | To appear in the proceedings of 26th International Joint Conference
on Artificial Intelligence (IJCAI), Melbourne, Australia, Aug. 19-25, 2017. 8
pages, 2 figures | null | null | null | cs.MM cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNN-based cross-modal retrieval is a research hotspot to retrieve across
different modalities, such as image and text, but existing methods often face
the challenge of insufficient cross-modal training data. In the single-modal
scenario, a similar problem is usually alleviated by transferring knowledge
from large-scale auxiliary datasets (such as ImageNet). Knowledge from such
single-modal datasets is also very useful for cross-modal retrieval, as it can
provide rich general semantic information that can be shared across different
modalities. However, it is challenging to transfer useful knowledge from a
single-modal (such as image) source domain to a cross-modal (such as
image/text) target domain. Knowledge in the source domain cannot be directly
transferred to the two different modalities in the target domain, and the
inherent cross-modal correlation contained in the target domain provides key
hints for cross-modal retrieval that should be preserved during the transfer
process. This paper proposes the Cross-modal Hybrid Transfer Network (CHTN)
with two subnetworks: a modal-sharing transfer subnetwork utilizes the modality
present in both the source and target domains as a bridge to transfer
knowledge to both modalities simultaneously, while a layer-sharing correlation
subnetwork preserves the inherent cross-modal semantic correlation to further
adapt to the cross-modal retrieval task. Cross-modal data can be converted to
a common representation by CHTN for retrieval, and comprehensive experiments
on three datasets show its effectiveness.
| [
{
"version": "v1",
"created": "Thu, 1 Jun 2017 02:53:57 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jun 2017 14:08:19 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Huang",
"Xin",
""
],
[
"Peng",
"Yuxin",
""
],
[
"Yuan",
"Mingkuan",
""
]
] | TITLE: Cross-modal Common Representation Learning by Hybrid Transfer Network
ABSTRACT: DNN-based cross-modal retrieval is a research hotspot to retrieve across
different modalities, such as image and text, but existing methods often face
the challenge of insufficient cross-modal training data. In the single-modal
scenario, a similar problem is usually alleviated by transferring knowledge
from large-scale auxiliary datasets (such as ImageNet). Knowledge from such
single-modal datasets is also very useful for cross-modal retrieval, as it can
provide rich general semantic information that can be shared across different
modalities. However, it is challenging to transfer useful knowledge from a
single-modal (such as image) source domain to a cross-modal (such as
image/text) target domain. Knowledge in the source domain cannot be directly
transferred to the two different modalities in the target domain, and the
inherent cross-modal correlation contained in the target domain provides key
hints for cross-modal retrieval that should be preserved during the transfer
process. This paper proposes the Cross-modal Hybrid Transfer Network (CHTN)
with two subnetworks: a modal-sharing transfer subnetwork utilizes the modality
present in both the source and target domains as a bridge to transfer
knowledge to both modalities simultaneously, while a layer-sharing correlation
subnetwork preserves the inherent cross-modal semantic correlation to further
adapt to the cross-modal retrieval task. Cross-modal data can be converted to
a common representation by CHTN for retrieval, and comprehensive experiments
on three datasets show its effectiveness.
| no_new_dataset | 0.947914 |
1706.01084 | Ting Chen | Ting Chen, Liangjie Hong, Yue Shi, Yizhou Sun | Joint Text Embedding for Personalized Content-based Recommendation | typo fixes | null | null | null | cs.IR cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning a good representation of text is key to many recommendation
applications. Examples include news recommendation where texts to be
recommended are constantly published every day. However, most existing
recommendation techniques, such as matrix factorization based methods, mainly
rely on interaction histories to learn representations of items. While latent
factors of items can be learned effectively from user interaction data, in many
cases, such data is not available, especially for newly emerged items.
In this work, we aim to address the problem of personalized recommendation
for completely new items with text information available. We cast the problem
as a personalized text ranking problem and propose a general framework that
combines text embedding with personalized recommendation. Users and textual
content are embedded into latent feature space. The text embedding function can
be learned end-to-end by predicting user interactions with items. To alleviate
sparsity in interaction data, and to leverage large amounts of text data with
few or no user interactions, we further propose a joint text embedding model
that incorporates unsupervised text embedding with a combination module.
Experimental results show that our model can significantly improve the
effectiveness of recommendation systems on real-world datasets.
| [
{
"version": "v1",
"created": "Sun, 4 Jun 2017 14:48:28 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jun 2017 21:55:56 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Chen",
"Ting",
""
],
[
"Hong",
"Liangjie",
""
],
[
"Shi",
"Yue",
""
],
[
"Sun",
"Yizhou",
""
]
] | TITLE: Joint Text Embedding for Personalized Content-based Recommendation
ABSTRACT: Learning a good representation of text is key to many recommendation
applications. Examples include news recommendation where texts to be
recommended are constantly published every day. However, most existing
recommendation techniques, such as matrix factorization based methods, mainly
rely on interaction histories to learn representations of items. While latent
factors of items can be learned effectively from user interaction data, in many
cases, such data is not available, especially for newly emerged items.
In this work, we aim to address the problem of personalized recommendation
for completely new items with text information available. We cast the problem
as a personalized text ranking problem and propose a general framework that
combines text embedding with personalized recommendation. Users and textual
content are embedded into latent feature space. The text embedding function can
be learned end-to-end by predicting user interactions with items. To alleviate
sparsity in interaction data, and to leverage large amounts of text data with
few or no user interactions, we further propose a joint text embedding model
that incorporates unsupervised text embedding with a combination module.
Experimental results show that our model can significantly improve the
effectiveness of recommendation systems on real-world datasets.
| no_new_dataset | 0.945045 |
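A minimal sketch of the scoring scheme such a joint embedding model implies: users and texts share a latent space, and recommendation reduces to ranking by inner product. The averaged word vectors below are a simplification assumed for illustration, not the paper's learned end-to-end encoder.

```python
import numpy as np

def embed_text(tokens, word_vectors, dim=8):
    """Average of word vectors; a simplified stand-in for the learned encoder."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

rng = np.random.default_rng(3)
vocab = "stocks rally market team wins final".split()
word_vectors = {w: rng.normal(size=8) for w in vocab}    # text-side embeddings
user = rng.normal(size=8)                                # learned user factor
items = {"finance": "stocks rally market", "sports": "team wins final"}
scores = {name: float(user @ embed_text(text.split(), word_vectors))
          for name, text in items.items()}
print("recommend:", max(scores, key=scores.get))         # rank by inner product
```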
1706.07154 | Daniel Lopez Martinez | Daniel Lopez Martinez, Ognjen Rudovic, Rosalind Picard | Personalized Automatic Estimation of Self-reported Pain Intensity from
Facial Expressions | Computer Vision and Pattern Recognition Conference, The 1st
International Workshop on Deep Affective Learning and Context Modeling | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pain is a personal, subjective experience that is commonly evaluated through
visual analog scales (VAS). While this is often convenient and useful,
automatic pain detection systems can reduce pain score acquisition efforts in
large-scale studies by estimating it directly from the participants' facial
expressions. In this paper, we propose a novel two-stage learning approach for
VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs)
to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels
from face images. The estimated scores are then fed into the personalized
Hidden Conditional Random Fields (HCRFs), used to estimate the VAS provided by
each person. Personalization of the model is performed using a newly introduced
facial expressiveness score, unique for each person. To the best of our
knowledge, this is the first approach to automatically estimate VAS from face
images. We show the benefits of the proposed personalized over traditional
non-personalized approach on a benchmark dataset for pain analysis from face
images.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 03:11:29 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jun 2017 00:04:06 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Martinez",
"Daniel Lopez",
""
],
[
"Rudovic",
"Ognjen",
""
],
[
"Picard",
"Rosalind",
""
]
] | TITLE: Personalized Automatic Estimation of Self-reported Pain Intensity from
Facial Expressions
ABSTRACT: Pain is a personal, subjective experience that is commonly evaluated through
visual analog scales (VAS). While this is often convenient and useful,
automatic pain detection systems can reduce pain score acquisition efforts in
large-scale studies by estimating it directly from the participants' facial
expressions. In this paper, we propose a novel two-stage learning approach for
VAS estimation: first, our algorithm employs Recurrent Neural Networks (RNNs)
to automatically estimate Prkachin and Solomon Pain Intensity (PSPI) levels
from face images. The estimated scores are then fed into the personalized
Hidden Conditional Random Fields (HCRFs), used to estimate the VAS provided by
each person. Personalization of the model is performed using a newly introduced
facial expressiveness score, unique for each person. To the best of our
knowledge, this is the first approach to automatically estimate VAS from face
images. We show the benefits of the proposed personalized over traditional
non-personalized approach on a benchmark dataset for pain analysis from face
images.
| no_new_dataset | 0.948632 |
1706.07555 | Chengxu Zhuang | Chengxu Zhuang, Jonas Kubilius, Mitra Hartmann, Daniel Yamins | Toward Goal-Driven Neural Network Models for the Rodent
Whisker-Trigeminal System | 17 pages including supplementary information, 8 figures | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In large part, rodents see the world through their whiskers, a powerful
tactile sense enabled by a series of brain areas that form the
whisker-trigeminal system. Raw sensory data arrives in the form of mechanical
input to the exquisitely sensitive, actively-controllable whisker array, and is
processed through a sequence of neural circuits, eventually arriving in
cortical regions that communicate with decision-making and memory areas.
Although a long history of experimental studies has characterized many aspects
of these processing stages, the computational operations of the
whisker-trigeminal system remain largely unknown. In the present work, we take
a goal-driven deep neural network (DNN) approach to modeling these
computations. First, we construct a biophysically-realistic model of the rat
whisker array. We then generate a large dataset of whisker sweeps across a wide
variety of 3D objects in highly-varying poses, angles, and speeds. Next, we
train DNNs from several distinct architectural families to solve a shape
recognition task in this dataset. Each architectural family represents a
structurally-distinct hypothesis for processing in the whisker-trigeminal
system, corresponding to different ways in which spatial and temporal
information can be integrated. We find that most networks perform poorly on the
challenging shape recognition task, but that specific architectures from
several families can achieve reasonable performance levels. Finally, we show
that Representational Dissimilarity Matrices (RDMs), a tool for comparing
population codes between neural systems, can separate these higher-performing
networks with data of a type that could plausibly be collected in a
neurophysiological or imaging experiment. Our results are a proof-of-concept
that goal-driven DNN networks of the whisker-trigeminal system are potentially
within reach.
| [
{
"version": "v1",
"created": "Fri, 23 Jun 2017 03:34:03 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Zhuang",
"Chengxu",
""
],
[
"Kubilius",
"Jonas",
""
],
[
"Hartmann",
"Mitra",
""
],
[
"Yamins",
"Daniel",
""
]
] | TITLE: Toward Goal-Driven Neural Network Models for the Rodent
Whisker-Trigeminal System
ABSTRACT: In large part, rodents see the world through their whiskers, a powerful
tactile sense enabled by a series of brain areas that form the
whisker-trigeminal system. Raw sensory data arrives in the form of mechanical
input to the exquisitely sensitive, actively-controllable whisker array, and is
processed through a sequence of neural circuits, eventually arriving in
cortical regions that communicate with decision-making and memory areas.
Although a long history of experimental studies has characterized many aspects
of these processing stages, the computational operations of the
whisker-trigeminal system remain largely unknown. In the present work, we take
a goal-driven deep neural network (DNN) approach to modeling these
computations. First, we construct a biophysically-realistic model of the rat
whisker array. We then generate a large dataset of whisker sweeps across a wide
variety of 3D objects in highly-varying poses, angles, and speeds. Next, we
train DNNs from several distinct architectural families to solve a shape
recognition task in this dataset. Each architectural family represents a
structurally-distinct hypothesis for processing in the whisker-trigeminal
system, corresponding to different ways in which spatial and temporal
information can be integrated. We find that most networks perform poorly on the
challenging shape recognition task, but that specific architectures from
several families can achieve reasonable performance levels. Finally, we show
that Representational Dissimilarity Matrices (RDMs), a tool for comparing
population codes between neural systems, can separate these higher-performing
networks with data of a type that could plausibly be collected in a
neurophysiological or imaging experiment. Our results are a proof-of-concept
that goal-driven DNN networks of the whisker-trigeminal system are potentially
within reach.
| new_dataset | 0.53915 |
1706.07679 | Hammad Afzal | Maham Jahangir, Hammad Afzal, Mehreen Ahmed, Khawar Khurshid, Raheel
Nawaz | ECO-AMLP: A Decision Support System using an Enhanced Class Outlier with
Automatic Multilayer Perceptron for Diabetes Prediction | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With advanced data analytical techniques, efforts for more accurate decision
support systems for disease prediction are on the rise. Surveys by the World
Health Organization (WHO) indicate a great increase in the number of diabetic
patients and related deaths each year. Early diagnosis of diabetes is a major
concern among researchers and practitioners. The paper presents an application
of the Automatic Multilayer Perceptron, combined with an outlier detection
method, Enhanced Class Outlier Detection using a distance-based algorithm, to
create a prediction framework named Enhanced Class Outlier with Automatic
Multilayer Perceptron (ECO-AMLP). A series of experiments is performed on the
publicly available Pima Indian Diabetes Dataset to compare ECO-AMLP with other
individual classifiers as well as ensemble-based methods. The outlier technique
used in our framework gave better results compared to other pre-processing and
classification techniques. Finally, the results are compared with other
state-of-the-art methods reported in the literature for diabetes prediction on
the PIDD; the achieved accuracy of 88.7% bests all other reported studies.
| [
{
"version": "v1",
"created": "Fri, 23 Jun 2017 13:01:09 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Jahangir",
"Maham",
""
],
[
"Afzal",
"Hammad",
""
],
[
"Ahmed",
"Mehreen",
""
],
[
"Khurshid",
"Khawar",
""
],
[
"Nawaz",
"Raheel",
""
]
] | TITLE: ECO-AMLP: A Decision Support System using an Enhanced Class Outlier with
Automatic Multilayer Perceptron for Diabetes Prediction
ABSTRACT: With advanced data analytical techniques, efforts for more accurate decision
support systems for disease prediction are on the rise. Surveys by the World
Health Organization (WHO) indicate a great increase in the number of diabetic
patients and related deaths each year. Early diagnosis of diabetes is a major
concern among researchers and practitioners. The paper presents an application
of the Automatic Multilayer Perceptron, combined with an outlier detection
method, Enhanced Class Outlier Detection using a distance-based algorithm, to
create a prediction framework named Enhanced Class Outlier with Automatic
Multilayer Perceptron (ECO-AMLP). A series of experiments is performed on the
publicly available Pima Indian Diabetes Dataset to compare ECO-AMLP with other
individual classifiers as well as ensemble-based methods. The outlier technique
used in our framework gave better results compared to other pre-processing and
classification techniques. Finally, the results are compared with other
state-of-the-art methods reported in the literature for diabetes prediction on
the PIDD; the achieved accuracy of 88.7% bests all other reported studies.
| no_new_dataset | 0.951233 |
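A hedged sketch of the two-stage pipeline just described: remove class outliers, then train a multilayer perceptron. Local Outlier Factor is substituted here for the paper's Enhanced Class Outlier method, and synthetic data stands in for the Pima dataset; only the pipeline shape is faithful.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import LocalOutlierFactor
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the Pima Indian Diabetes Dataset.
X, y = make_classification(n_samples=800, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: drop within-class outliers from the training set.
keep = np.ones(len(y_tr), dtype=bool)
for c in np.unique(y_tr):
    m = y_tr == c
    keep[m] = LocalOutlierFactor(n_neighbors=20).fit_predict(X_tr[m]) == 1

# Stage 2: fit a multilayer perceptron on the cleaned training set.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr[keep], y_tr[keep])
print("test accuracy:", clf.score(X_te, y_te))
```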
1706.07859 | Lantian Li Mr. | Dong Wang and Lantian Li and Zhiyuan Tang and Thomas Fang Zheng | Deep Speaker Verification: Do We Need End to End? | null | null | null | null | cs.SD cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end learning treats the entire system as a whole adaptable black box,
which, if sufficient data are available, may learn a system that works very
well for the target task. This principle has recently been applied to several
prototype research on speaker verification (SV), where the feature learning and
classifier are learned together with an objective function that is consistent
with the evaluation metric. An opposite approach to end-to-end is feature
learning, which firstly trains a feature learning model, and then constructs a
back-end classifier separately to perform SV. Recently, both approaches
achieved significant performance gains on SV, mainly attributed to the smart
utilization of deep neural networks. However, the two approaches have not been
carefully compared, and their respective advantages have not been well
discussed. In this paper, we compare the end-to-end and feature learning
approaches on a text-independent SV task. Our experiments on a dataset sampled
from the Fisher database and involving 5,000 speakers demonstrated that the
feature learning approach outperformed the end-to-end approach. This is
strong support for the feature learning approach, at least with data and
computation resources similar to ours.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 04:33:59 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Wang",
"Dong",
""
],
[
"Li",
"Lantian",
""
],
[
"Tang",
"Zhiyuan",
""
],
[
"Zheng",
"Thomas Fang",
""
]
] | TITLE: Deep Speaker Verification: Do We Need End to End?
ABSTRACT: End-to-end learning treats the entire system as a whole adaptable black box,
which, if sufficient data are available, may learn a system that works very
well for the target task. This principle has recently been applied to several
prototype studies of speaker verification (SV), where the feature learning and
classifier are learned together with an objective function that is consistent
with the evaluation metric. An opposite approach to end-to-end is feature
learning, which first trains a feature learning model and then constructs a
back-end classifier separately to perform SV. Recently, both approaches
achieved significant performance gains on SV, mainly attributed to the smart
utilization of deep neural networks. However, the two approaches have not been
carefully compared, and their respective advantages have not been well
discussed. In this paper, we compare the end-to-end and feature learning
approaches on a text-independent SV task. Our experiments on a dataset sampled
from the Fisher database and involving 5,000 speakers demonstrated that the
feature learning approach outperformed the end-to-end approach. This is
strong support for the feature learning approach, at least with data and
computation resources similar to ours.
| no_new_dataset | 0.946498 |
1706.07867 | Abhilasha Ravichander | Abhilasha Ravichander, Shruti Rijhwani, Rajat Kulshreshtha, Chirag
Nagpal, Tadas Baltru\v{s}aitis, Louis-Philippe Morency | Preserving Intermediate Objectives: One Simple Trick to Improve Learning
for Hierarchical Models | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical models are utilized in a wide variety of problems which are
characterized by task hierarchies, where predictions on smaller subtasks are
useful for trying to predict a final task. Typically, neural networks are first
trained for the subtasks, and the predictions of these networks are
subsequently used as additional features when training a model and doing
inference for a final task. In this work, we focus on improving learning for
such hierarchical models and demonstrate our method on the task of speaker
trait prediction. Speaker trait prediction aims to computationally identify
which personality traits a speaker might be perceived to have, and has been of
great interest to both the Artificial Intelligence and Social Science
communities. Persuasiveness prediction in particular has been of interest, as
persuasive speakers have a large amount of influence on our thoughts, opinions
and beliefs. In this work, we examine how leveraging the relationship between
related speaker traits in a hierarchical structure can help improve our ability
to predict how persuasive a speaker is. We present a novel algorithm that
allows us to backpropagate through this hierarchy. This hierarchical model
achieves a 25% relative error reduction in classification accuracy over current
state-of-the-art methods on the publicly available POM dataset.
| [
{
"version": "v1",
"created": "Fri, 23 Jun 2017 21:16:18 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Ravichander",
"Abhilasha",
""
],
[
"Rijhwani",
"Shruti",
""
],
[
"Kulshreshtha",
"Rajat",
""
],
[
"Nagpal",
"Chirag",
""
],
[
"Baltrušaitis",
"Tadas",
""
],
[
"Morency",
"Louis-Philippe",
""
]
] | TITLE: Preserving Intermediate Objectives: One Simple Trick to Improve Learning
for Hierarchical Models
ABSTRACT: Hierarchical models are utilized in a wide variety of problems which are
characterized by task hierarchies, where predictions on smaller subtasks are
useful for trying to predict a final task. Typically, neural networks are first
trained for the subtasks, and the predictions of these networks are
subsequently used as additional features when training a model and doing
inference for a final task. In this work, we focus on improving learning for
such hierarchical models and demonstrate our method on the task of speaker
trait prediction. Speaker trait prediction aims to computationally identify
which personality traits a speaker might be perceived to have, and has been of
great interest to both the Artificial Intelligence and Social Science
communities. Persuasiveness prediction in particular has been of interest, as
persuasive speakers have a large amount of influence on our thoughts, opinions
and beliefs. In this work, we examine how leveraging the relationship between
related speaker traits in a hierarchical structure can help improve our ability
to predict how persuasive a speaker is. We present a novel algorithm that
allows us to backpropagate through this hierarchy. This hierarchical model
achieves a 25% relative error reduction in classification accuracy over current
state-of-the-art methods on the publicly available POM dataset.
| no_new_dataset | 0.94699 |
1706.07880 | Aditya Balu | Zhanhong Jiang, Aditya Balu, Chinmay Hegde and Soumik Sarkar | Collaborative Deep Learning in Fixed Topology Networks | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is significant recent interest in parallelizing deep learning algorithms
in order to handle the enormous growth in data and model sizes. While most
advances focus on model parallelization and engaging multiple computing agents
via a central parameter server, the aspect of data parallelization along with
decentralized computation has not been explored sufficiently. In this context,
this paper presents a new consensus-based distributed SGD (CDSGD) (and its
momentum variant, CDMSGD) algorithm for collaborative deep learning over fixed
topology networks that enables data parallelization as well as decentralized
computation. Such a framework can be extremely useful for learning agents with
access to only local/private data in a communication constrained environment.
We analyze the convergence properties of the proposed algorithm with strongly
convex and nonconvex objective functions with fixed and diminishing step sizes
using concepts of Lyapunov function construction. We demonstrate the efficacy
of our algorithms in comparison with the baseline centralized SGD and the
recently proposed federated averaging algorithm (that also enables data
parallelism) based on benchmark datasets such as MNIST, CIFAR-10 and CIFAR-100.
| [
{
"version": "v1",
"created": "Fri, 23 Jun 2017 22:30:17 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Jiang",
"Zhanhong",
""
],
[
"Balu",
"Aditya",
""
],
[
"Hegde",
"Chinmay",
""
],
[
"Sarkar",
"Soumik",
""
]
] | TITLE: Collaborative Deep Learning in Fixed Topology Networks
ABSTRACT: There is significant recent interest in parallelizing deep learning algorithms
in order to handle the enormous growth in data and model sizes. While most
advances focus on model parallelization and engaging multiple computing agents
via a central parameter server, the aspect of data parallelization along with
decentralized computation has not been explored sufficiently. In this context,
this paper presents a new consensus-based distributed SGD (CDSGD) (and its
momentum variant, CDMSGD) algorithm for collaborative deep learning over fixed
topology networks that enables data parallelization as well as decentralized
computation. Such a framework can be extremely useful for learning agents with
access to only local/private data in a communication constrained environment.
We analyze the convergence properties of the proposed algorithm with strongly
convex and nonconvex objective functions with fixed and diminishing step sizes
using concepts of Lyapunov function construction. We demonstrate the efficacy
of our algorithms in comparison with the baseline centralized SGD and the
recently proposed federated averaging algorithm (that also enables data
parallelism) based on benchmark datasets such as MNIST, CIFAR-10 and CIFAR-100.
| no_new_dataset | 0.94545 |
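One CDSGD-style round can be written as a consensus step followed by a local gradient step. The sketch below assumes a doubly stochastic mixing matrix over a ring topology and uses quadratic losses in place of deep networks; it illustrates the update rule only, not the paper's implementation.

```python
import numpy as np

def cdsgd_round(params, grads, Pi, lr):
    """One round: mix parameters with neighbors (consensus), then take a
    local gradient step on each agent's private objective."""
    return Pi @ params - lr * grads

n_agents, dim = 4, 3
Pi = np.zeros((n_agents, n_agents))
for i in range(n_agents):                      # fixed ring topology
    Pi[i, (i + 1) % n_agents] = Pi[i, (i - 1) % n_agents] = 0.25
np.fill_diagonal(Pi, 0.5)                      # rows and columns sum to 1

targets = np.random.default_rng(4).normal(size=(n_agents, dim))  # private optima
params = np.zeros((n_agents, dim))
for _ in range(200):
    grads = params - targets                   # gradient of 0.5 * ||x - t||^2
    params = cdsgd_round(params, grads, Pi, lr=0.1)
print("disagreement across agents:", params.std(axis=0))
```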
1706.07912 | Mahamad Suhil | Lavanya Narayana Raju, Mahamad Suhil, D S Guru and Harsha S Gowda | Cluster Based Symbolic Representation for Skewed Text Categorization | 14 Pages, 15 Figures, 1 Table, Conference: RTIP2R | null | 10.1007/978-981-10-4859-3_19 | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, a problem associated with imbalanced text corpora is addressed.
A method of converting an imbalanced text corpus into a balanced one is
presented. The presented method employs a clustering algorithm for conversion.
Initially, to avoid the curse of dimensionality, an effective representation scheme
based on term class relevancy measure is adapted, which drastically reduces the
dimension to the number of classes in the corpus. Subsequently, the samples of
larger sized classes are grouped into a number of subclasses of smaller sizes
to make the entire corpus balanced. Each subclass is then given a single
symbolic vector representation by the use of interval valued features. This
symbolic representation in addition to being compact helps in reducing the
space requirement and also the classification time. The proposed model has been
empirically demonstrated for its superiority on benchmark datasets, viz.,
Reuters 21578 and TDT2. Further, it has been compared against several other
existing contemporary models, including a model based on support vector machines.
The comparative analysis indicates that the proposed model outperforms the
other existing models.
| [
{
"version": "v1",
"created": "Sat, 24 Jun 2017 06:04:21 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Raju",
"Lavanya Narayana",
""
],
[
"Suhil",
"Mahamad",
""
],
[
"Guru",
"D S",
""
],
[
"Gowda",
"Harsha S",
""
]
] | TITLE: Cluster Based Symbolic Representation for Skewed Text Categorization
ABSTRACT: In this work, a problem associated with imbalanced text corpora is addressed.
A method of converting an imbalanced text corpus into a balanced one is
presented. The presented method employs a clustering algorithm for conversion.
Initially, to avoid the curse of dimensionality, an effective representation scheme
based on term class relevancy measure is adapted, which drastically reduces the
dimension to the number of classes in the corpus. Subsequently, the samples of
larger sized classes are grouped into a number of subclasses of smaller sizes
to make the entire corpus balanced. Each subclass is then given a single
symbolic vector representation by the use of interval valued features. This
symbolic representation in addition to being compact helps in reducing the
space requirement and also the classification time. The proposed model has been
empirically demonstrated for its superiority on benchmark datasets, viz.,
Reuters 21578 and TDT2. Further, it has been compared against several other
existing contemporary models, including a model based on support vector machines.
The comparative analysis indicates that the proposed model outperforms the
other existing models.
| no_new_dataset | 0.950457 |
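The representation scheme mentioned above, one relevancy score per class, can be illustrated with a toy corpus. Class-conditional term frequency is used below as a simple stand-in for the paper's term-class relevancy measure.

```python
import numpy as np
from collections import Counter

corpus = {
    "sport":   ["team wins match", "player scores goal"],
    "finance": ["stocks rally today", "market scores gains"],
}
# Per-class term frequencies, the basis of the relevancy scores.
class_tf = {c: Counter(w for d in docs for w in d.split())
            for c, docs in corpus.items()}
classes = sorted(class_tf)

def represent(doc):
    """Map a document to one relevancy score per class, so the feature
    dimension equals the number of classes rather than the vocabulary size."""
    words = doc.split()
    return np.array([
        sum(class_tf[c][w] for w in words) / sum(class_tf[c].values())
        for c in classes
    ])

print(classes, represent("player wins the match"))
```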
1706.07913 | Mahamad Suhil | Harsha S. Gowda, Mahamad Suhil, D.S. Guru, and Lavanya Narayana Raju | Semi-supervised Text Categorization Using Recursive K-means Clustering | 11 Pages, 8 Figures, Conference: RTIP2R | null | 10.1007/978-981-10-4859-3_20 | null | cs.LG cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a semi-supervised learning algorithm for
classification of text documents. A method of labeling unlabeled text documents
is presented. The presented method is based on the principle of divide and
conquer strategy. It uses recursive K-means algorithm for partitioning both
labeled and unlabeled data collection. The K-means algorithm is applied
recursively on each partition until a desired level of partitioning is achieved, such
that each partition contains labeled documents of a single class. Once the
desired clusters are obtained, the respective cluster centroids are considered
as representatives of the clusters and the nearest neighbor rule is used for
classifying an unknown text document. A series of experiments has been
conducted to bring out the superiority of the proposed model over other recent
state-of-the-art models on the 20Newsgroups dataset.
| [
{
"version": "v1",
"created": "Sat, 24 Jun 2017 06:08:27 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Gowda",
"Harsha S.",
""
],
[
"Suhil",
"Mahamad",
""
],
[
"Guru",
"D. S.",
""
],
[
"Raju",
"Lavanya Narayana",
""
]
] | TITLE: Semi-supervised Text Categorization Using Recursive K-means Clustering
ABSTRACT: In this paper, we present a semi-supervised learning algorithm for
classification of text documents. A method of labeling unlabeled text documents
is presented. The presented method is based on the principle of divide and
conquer strategy. It uses recursive K-means algorithm for partitioning both
labeled and unlabeled data collection. The K-means algorithm is applied
recursively on each partition until a desired level of partitioning is achieved, such
that each partition contains labeled documents of a single class. Once the
desired clusters are obtained, the respective cluster centroids are considered
as representatives of the clusters and the nearest neighbor rule is used for
classifying an unknown text document. A series of experiments has been
conducted to bring out the superiority of the proposed model over other recent
state-of-the-art models on the 20Newsgroups dataset.
| no_new_dataset | 0.952397 |
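The labeling procedure reads as a short recursion. The sketch below assumes documents are already embedded as numeric vectors and omits the nearest-centroid classifier from the paper; unlabeled points carry the label -1.

```python
import numpy as np
from sklearn.cluster import KMeans

def recursive_label(X, y, k=2, depth=0, max_depth=10):
    """Split a mixed partition with K-means until each partition's labeled
    points come from one class, then give that class to its unlabeled points
    (label -1). Ties at the recursion limit default to the first class seen."""
    labeled = y >= 0
    classes = np.unique(y[labeled])
    if len(classes) <= 1 or len(y) < k or depth >= max_depth:
        y[~labeled] = classes[0] if len(classes) else -1
        return y
    parts = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    for p in range(k):
        m = parts == p
        y[m] = recursive_label(X[m], y[m], k, depth + 1, max_depth)
    return y

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
y = np.full(100, -1)
y[0], y[50] = 0, 1                         # one seed label per class
labels = recursive_label(X, y)
print(np.bincount(labels[labels >= 0]))    # roughly 50 documents per class
```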
1706.08032 | Huy Nguyen Thanh | Huy Nguyen and Minh-Le Nguyen | A Deep Neural Architecture for Sentence-level Sentiment Classification
in Twitter Social Networking | PACLING Conference 2017, 6 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | This paper introduces a novel deep learning framework including a
lexicon-based approach for sentence-level prediction of sentiment label
distribution. We propose to first apply semantic rules and then use a Deep
Convolutional Neural Network (DeepCNN) for character-level embeddings in order
to increase information for word-level embedding. After that, a Bidirectional
Long Short-Term Memory Network (Bi-LSTM) produces a sentence-wide feature
representation from the word-level embedding. We evaluate our approach on three
Twitter sentiment classification datasets. Experimental results show that our
model can improve the classification accuracy of sentence-level sentiment
analysis in Twitter social networking.
| [
{
"version": "v1",
"created": "Sun, 25 Jun 2017 04:05:09 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Nguyen",
"Huy",
""
],
[
"Nguyen",
"Minh-Le",
""
]
] | TITLE: A Deep Neural Architecture for Sentence-level Sentiment Classification
in Twitter Social Networking
ABSTRACT: This paper introduces a novel deep learning framework including a
lexicon-based approach for sentence-level prediction of sentiment label
distribution. We propose to first apply semantic rules and then use a Deep
Convolutional Neural Network (DeepCNN) for character-level embeddings in order
to increase information for word-level embedding. After that, a Bidirectional
Long Short-Term Memory Network (Bi-LSTM) produces a sentence-wide feature
representation from the word-level embedding. We evaluate our approach on three
Twitter sentiment classification datasets. Experimental results show that our
model can improve the classification accuracy of sentence-level sentiment
analysis in Twitter social networking.
| no_new_dataset | 0.953492 |
1706.08217 | Shujiao Huang | Zhenzhen Zhong, Shujiao Huang, Cheng Zhan, Licheng Zhang, Zhiwei Xiao,
Chang-Chun Wang, Pei Yang | An Effective Way to Improve YouTube-8M Classification Accuracy in Google
Cloud Platform | 5 pages, 2 figures | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale datasets have played a significant role in the progress of neural
networks and deep learning. YouTube-8M is such a benchmark dataset for
general multi-label video classification. It was created from over 7 million
YouTube videos (450,000 hours of video) and includes video labels from a
vocabulary of 4716 classes (3.4 labels/video on average). It also comes with
pre-extracted audio & visual features from every second of video (3.2 billion
feature vectors in total). Google Cloud recently released the datasets and
organized the 'Google Cloud & YouTube-8M Video Understanding Challenge' on
Kaggle.
Competitors are challenged to develop classification algorithms that assign
video-level labels using the new and improved YouTube-8M V2 dataset. Inspired
by the competition, we started exploration of audio understanding and
classification using deep learning algorithms and ensemble methods. We built
several baseline predictions according to the benchmark paper and the public
GitHub TensorFlow code. Furthermore, we improved the global average precision
(GAP) from a base level of 77% to 80.7% through ensemble approaches.
| [
{
"version": "v1",
"created": "Mon, 26 Jun 2017 03:50:51 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Zhong",
"Zhenzhen",
""
],
[
"Huang",
"Shujiao",
""
],
[
"Zhan",
"Cheng",
""
],
[
"Zhang",
"Licheng",
""
],
[
"Xiao",
"Zhiwei",
""
],
[
"Wang",
"Chang-Chun",
""
],
[
"Yang",
"Pei",
""
]
] | TITLE: An Effective Way to Improve YouTube-8M Classification Accuracy in Google
Cloud Platform
ABSTRACT: Large-scale datasets have played a significant role in the progress of neural
networks and deep learning. YouTube-8M is one such benchmark dataset for
general multi-label video classification. It was created from over 7 million
YouTube videos (450,000 hours of video) and includes video labels from a
vocabulary of 4716 classes (3.4 labels/video on average). It also comes with
pre-extracted audio & visual features from every second of video (3.2 billion
feature vectors in total). Google cloud recently released the datasets and
organized 'Google Cloud & YouTube-8M Video Understanding Challenge' on Kaggle.
Competitors are challenged to develop classification algorithms that assign
video-level labels using the new and improved YouTube-8M V2 dataset. Inspired
by the competition, we started exploration of audio understanding and
classification using deep learning algorithms and ensemble methods. We built
several baseline predictions according to the benchmark paper and public GitHub
TensorFlow code. Furthermore, we improved the global average precision (GAP) from
a base level of 77% to 80.7% through ensemble approaches.
| new_dataset | 0.927232 |
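The GAP figure quoted in this abstract can be pictured with a few lines of NumPy; the sketch below approximates the challenge metric (it normalizes by the positives retrieved in the top-k lists rather than by all positives) and shows the plain weighted averaging that a basic ensemble performs. Function names and top_k=20 are assumptions.

```python
# Hedged sketch of (an approximation of) the YouTube-8M GAP metric and of
# plain prediction averaging for ensembling. preds/labels are dense
# (n_videos, n_classes) arrays with 0/1 labels.
import numpy as np

def global_average_precision(preds, labels, top_k=20):
    scores, hits = [], []
    for p, y in zip(preds, labels):
        top = np.argsort(p)[::-1][:top_k]      # top-k classes per video
        scores.extend(p[top])
        hits.extend(y[top])
    order = np.argsort(scores)[::-1]           # pool and sort by confidence
    hits = np.asarray(hits, dtype=float)[order]
    precisions = np.cumsum(hits) / (np.arange(len(hits)) + 1)
    return float((precisions * hits).sum() / max(hits.sum(), 1.0))

def ensemble(pred_list, weights=None):
    # Weighted average of per-model probability matrices.
    weights = weights or [1.0 / len(pred_list)] * len(pred_list)
    return sum(w * p for w, p in zip(weights, pred_list))
```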
1706.08274 | Ziniu Hu | Ziniu Hu, Yun Ma, Qiaozhu Mei, Jian Tang | Roaming across the Castle Tunnels: an Empirical Study of Inter-App
Navigation Behaviors of Android Users | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mobile applications (a.k.a., apps), which facilitate a large variety of tasks
on mobile devices, have become indispensable in our everyday lives.
Accomplishing a task may require the user to navigate among various apps.
Unlike Web pages that are inherently interconnected through hyperlinks, mobile
apps are usually isolated building blocks, and the lack of direct links between
apps has largely compromised the efficiency of task completion. In this paper,
we present the first in-depth empirical study of inter-app navigation behaviors
of smartphone users based on a comprehensive dataset collected through a
sizable user study over three months. We propose a model to distinguish
informational pages and transitional pages, based on which a large number of
inter-app navigations are identified. We reveal that developing 'tunnels'
between isolated apps has huge potential to reduce the cost of navigation.
Our analysis provides various practical implications on how to improve
app-navigation experiences from both the operating system's perspective and the
developer's perspective.
| [
{
"version": "v1",
"created": "Mon, 26 Jun 2017 08:24:21 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Hu",
"Ziniu",
""
],
[
"Ma",
"Yun",
""
],
[
"Mei",
"Qiaozhu",
""
],
[
"Tang",
"Jian",
""
]
] | TITLE: Roaming across the Castle Tunnels: an Empirical Study of Inter-App
Navigation Behaviors of Android Users
ABSTRACT: Mobile applications (a.k.a., apps), which facilitate a large variety of tasks
on mobile devices, have become indispensable in our everyday lives.
Accomplishing a task may require the user to navigate among various apps.
Unlike Web pages that are inherently interconnected through hyperlinks, mobile
apps are usually isolated building blocks, and the lack of direct links between
apps has largely compromised the efficiency of task completion. In this paper,
we present the first in-depth empirical study of inter-app navigation behaviors
of smartphone users based on a comprehensive dataset collected through a
sizable user study over three months. We propose a model to distinguish
informational pages and transitional pages, based on which a large number of
inter-app navigations are identified. We reveal that developing 'tunnels'
between isolated apps has huge potential to reduce the cost of navigation.
Our analysis provides various practical implications on how to improve
app-navigation experiences from both the operating system's perspective and the
developer's perspective.
| no_new_dataset | 0.900135 |
1706.08276 | Amir Shahroudy | Jun Liu, Amir Shahroudy, Dong Xu, Alex C. Kot, Gang Wang | Skeleton-Based Action Recognition Using Spatio-Temporal LSTM Network
with Trust Gates | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skeleton-based human action recognition has attracted a lot of research
attention during the past few years. Recent works attempted to utilize
recurrent neural networks to model the temporal dependencies between the 3D
positional configurations of human body joints for better analysis of human
activities in the skeletal data. The proposed work extends this idea to the
spatial domain as well as the temporal domain to better analyze the hidden sources of
action-related information within the human skeleton sequences in both of these
domains simultaneously. Based on the pictorial structure of Kinect's skeletal
data, an effective tree-structure based traversal framework is also proposed.
In order to deal with the noise in the skeletal data, a new gating mechanism
within LSTM module is introduced, with which the network can learn the
reliability of the sequential data and accordingly adjust the effect of the
input data on the updating procedure of the long-term context representation
stored in the unit's memory cell. Moreover, we introduce a novel multi-modal
feature fusion strategy within the LSTM unit in this paper. The comprehensive
experimental results on seven challenging benchmark datasets for human action
recognition demonstrate the effectiveness of the proposed method.
| [
{
"version": "v1",
"created": "Mon, 26 Jun 2017 08:35:45 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Liu",
"Jun",
""
],
[
"Shahroudy",
"Amir",
""
],
[
"Xu",
"Dong",
""
],
[
"Kot",
"Alex C.",
""
],
[
"Wang",
"Gang",
""
]
] | TITLE: Skeleton-Based Action Recognition Using Spatio-Temporal LSTM Network
with Trust Gates
ABSTRACT: Skeleton-based human action recognition has attracted a lot of research
attention during the past few years. Recent works attempted to utilize
recurrent neural networks to model the temporal dependencies between the 3D
positional configurations of human body joints for better analysis of human
activities in the skeletal data. The proposed work extends this idea to the
spatial domain as well as the temporal domain to better analyze the hidden sources of
action-related information within the human skeleton sequences in both of these
domains simultaneously. Based on the pictorial structure of Kinect's skeletal
data, an effective tree-structure based traversal framework is also proposed.
In order to deal with the noise in the skeletal data, a new gating mechanism
within the LSTM module is introduced, with which the network can learn the
reliability of the sequential data and accordingly adjust the effect of the
input data on the updating procedure of the long-term context representation
stored in the unit's memory cell. Moreover, we introduce a novel multi-modal
feature fusion strategy within the LSTM unit in this paper. The comprehensive
experimental results on seven challenging benchmark datasets for human action
recognition demonstrate the effectiveness of the proposed method.
| no_new_dataset | 0.945147 |
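The trust-gate idea lends itself to a compact illustration: an LSTM cell that predicts its next input from the previous hidden state and scales the memory update by how well the observed (possibly noisy) skeleton input matches that prediction. The sketch below is an assumption-laden simplification, not the paper's spatio-temporal LSTM.

```python
# Hedged sketch of an LSTM cell with a "trust gate": a large mismatch
# between the observed input and a prediction from the previous hidden
# state yields low trust, so noisy inputs barely update the memory cell.
# The exact gate in the paper differs; this form is an illustration.
import torch
import torch.nn as nn

class TrustGateLSTMCell(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gates = nn.Linear(in_dim + hid_dim, 4 * hid_dim)  # i, f, o, g
        self.predict_x = nn.Linear(hid_dim, in_dim)  # predicts current input
        self.lam = nn.Parameter(torch.tensor(1.0))   # gate sensitivity

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=-1)).chunk(4, dim=-1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        err = (x - self.predict_x(h)).pow(2).mean(dim=-1, keepdim=True)
        trust = torch.exp(-self.lam * err)           # in (0, 1]
        c = f * c + trust * i * g                    # gated memory update
        return o * c.tanh(), c
```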
1706.08355 | Ayush Dewan | Ayush Dewan, Gabriel L. Oliveira and Wolfram Burgard | Deep Semantic Classification for 3D LiDAR Data | 8 pages to be published in IROS 2017 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robots are expected to operate autonomously in dynamic environments.
Understanding the underlying dynamic characteristics of objects is a key
enabler for achieving this goal. In this paper, we propose a method for
pointwise semantic classification of 3D LiDAR data into three classes:
non-movable, movable and dynamic. We concentrate on understanding these
specific semantics because they characterize important information required for
an autonomous system. Non-movable points in the scene belong to unchanging
segments of the environment, whereas the remaining classes correspond to the
changing parts of the scene. The difference between the movable and dynamic
class is their motion state. The dynamic points can be perceived as moving,
whereas movable objects can move, but are perceived as static. To learn the
distinction between movable and non-movable points in the environment, we
introduce an approach based on a deep neural network, and for detecting the
dynamic points, we estimate pointwise motion. We propose a Bayes filter
framework for combining the learned semantic cues with the motion cues to infer
the required semantic classification. In extensive experiments, we compare our
approach with other methods on a standard benchmark dataset and report
competitive results in comparison to the existing state-of-the-art.
Furthermore, we show an improvement in the classification of points by
combining the semantic cues retrieved from the neural network with the motion
cues.
| [
{
"version": "v1",
"created": "Mon, 26 Jun 2017 13:16:57 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Dewan",
"Ayush",
""
],
[
"Oliveira",
"Gabriel L.",
""
],
[
"Burgard",
"Wolfram",
""
]
] | TITLE: Deep Semantic Classification for 3D LiDAR Data
ABSTRACT: Robots are expected to operate autonomously in dynamic environments.
Understanding the underlying dynamic characteristics of objects is a key
enabler for achieving this goal. In this paper, we propose a method for
pointwise semantic classification of 3D LiDAR data into three classes:
non-movable, movable and dynamic. We concentrate on understanding these
specific semantics because they characterize important information required for
an autonomous system. Non-movable points in the scene belong to unchanging
segments of the environment, whereas the remaining classes correspond to the
changing parts of the scene. The difference between the movable and dynamic
class is their motion state. The dynamic points can be perceived as moving,
whereas movable objects can move, but are perceived as static. To learn the
distinction between movable and non-movable points in the environment, we
introduce an approach based on a deep neural network, and for detecting the
dynamic points, we estimate pointwise motion. We propose a Bayes filter
framework for combining the learned semantic cues with the motion cues to infer
the required semantic classification. In extensive experiments, we compare our
approach with other methods on a standard benchmark dataset and report
competitive results in comparison to the existing state-of-the-art.
Furthermore, we show an improvement in the classification of points by
combining the semantic cues retrieved from the neural network with the motion
cues.
| no_new_dataset | 0.947575 |
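A per-point log-odds Bayes filter is the simplest way to picture the cue fusion this abstract describes; the toy numbers below are invented, and the two probability arrays are placeholders for the learned semantic and estimated motion likelihoods.

```python
# Hedged sketch: recursive log-odds fusion of a semantic cue and a motion
# cue into a per-point belief of being dynamic. All probabilities here are
# made up; the paper's measurement models are learned, not hand-set.
import numpy as np

def logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def bayes_update(log_odds, p_meas):
    return log_odds + logit(p_meas)     # independent-measurement update

belief = np.zeros(4)                                 # prior p = 0.5 per point
p_semantic = np.array([0.9, 0.2, 0.7, 0.5])          # network: movable cue
p_motion   = np.array([0.8, 0.1, 0.3, 0.5])          # estimated motion cue
belief = bayes_update(bayes_update(belief, p_semantic), p_motion)
print(1 / (1 + np.exp(-belief)))                     # fused dynamic belief
```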
1706.08359 | Huan Zhang | Huan Zhang, Si Si, Cho-Jui Hsieh | GPU-acceleration for Large-scale Tree Boosting | null | null | null | null | stat.ML cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel massively parallel algorithm for
accelerating the decision tree building procedure on GPUs (Graphics Processing
Units), which is a crucial step in Gradient Boosted Decision Tree (GBDT) and
random forest training. Previous GPU-based tree building algorithms are based
on parallel multi-scan or radix sort to find the exact tree split, and thus
suffer from scalability and performance issues. We show that using a
histogram-based algorithm to approximately find the best split is more efficient and
scalable on GPU. By identifying the difference between classical GPU-based
image histogram construction and the feature histogram construction in decision
tree training, we develop a fast feature histogram building kernel on GPU with
carefully designed computational and memory access sequence to reduce atomic
update conflict and maximize GPU utilization. Our algorithm can be used as a
drop-in replacement for histogram construction in popular tree boosting systems
to improve their scalability. As an example, to train GBDT on epsilon dataset,
our method using a mainstream GPU is 7-8 times faster than the histogram-based
algorithm on CPU in LightGBM and 25 times faster than the exact-split finding
algorithm in XGBoost on a dual-socket 28-core Xeon server, while achieving
similar prediction accuracy.
| [
{
"version": "v1",
"created": "Mon, 26 Jun 2017 13:27:29 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Zhang",
"Huan",
""
],
[
"Si",
"Si",
""
],
[
"Hsieh",
"Cho-Jui",
""
]
] | TITLE: GPU-acceleration for Large-scale Tree Boosting
ABSTRACT: In this paper, we present a novel massively parallel algorithm for
accelerating the decision tree building procedure on GPUs (Graphics Processing
Units), which is a crucial step in Gradient Boosted Decision Tree (GBDT) and
random forest training. Previous GPU-based tree building algorithms are based
on parallel multi-scan or radix sort to find the exact tree split, and thus
suffer from scalability and performance issues. We show that using a
histogram-based algorithm to approximately find the best split is more efficient and
scalable on GPU. By identifying the difference between classical GPU-based
image histogram construction and the feature histogram construction in decision
tree training, we develop a fast feature histogram building kernel on GPU with
carefully designed computational and memory access sequence to reduce atomic
update conflict and maximize GPU utilization. Our algorithm can be used as a
drop-in replacement for histogram construction in popular tree boosting systems
to improve their scalability. As an example, to train GBDT on epsilon dataset,
our method using a mainstream GPU is 7-8 times faster than the histogram-based
algorithm on CPU in LightGBM and 25 times faster than the exact-split finding
algorithm in XGBoost on a dual-socket 28-core Xeon server, while achieving
similar prediction accuracy.
| no_new_dataset | 0.948442 |
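The histogram trick at the heart of this paper is easy to state on a CPU: accumulate gradient statistics into per-feature bins, then scan cumulative sums for the best split. The sketch below uses the standard second-order gain; bin count, regularization, and names are assumptions, and the GPU kernel design itself is not reproduced.

```python
# Hedged sketch of histogram-based split finding for one pre-binned
# feature: gradients/hessians go into bins, and candidate splits are read
# off cumulative sums (the CPU analogue of the GPU kernel discussed above).
import numpy as np

def best_split(bin_idx, grad, hess, n_bins=64, lam=1.0):
    g_hist = np.bincount(bin_idx, weights=grad, minlength=n_bins)
    h_hist = np.bincount(bin_idx, weights=hess, minlength=n_bins)
    g_tot, h_tot = g_hist.sum(), h_hist.sum()
    g_left = np.cumsum(g_hist)[:-1]          # candidate left partitions
    h_left = np.cumsum(h_hist)[:-1]
    gain = (g_left**2 / (h_left + lam)
            + (g_tot - g_left)**2 / (h_tot - h_left + lam)
            - g_tot**2 / (h_tot + lam))
    best = int(np.argmax(gain))
    return best, float(gain[best])

rng = np.random.default_rng(0)
bins = rng.integers(0, 64, size=10_000)      # pre-binned feature values
g, h = rng.normal(size=10_000), np.ones(10_000)
print(best_split(bins, g, h))                # (split bin index, gain)
```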
1706.08442 | Andrea Palazzi | Andrea Palazzi, Guido Borghi, Davide Abati, Simone Calderara, Rita
Cucchiara | Learning to Map Vehicles into Bird's Eye View | Accepted to International Conference on Image Analysis and Processing
(ICIAP) 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Awareness of the road scene is an essential component for both autonomous
vehicles and Advanced Driver Assistance Systems, and is gaining importance in
both academia and car companies. This paper presents a way to learn a
semantic-aware transformation which maps detections from a dashboard camera
view onto a broader bird's eye occupancy map of the scene. To this end, a huge
synthetic dataset featuring 1M couples of frames, taken from both car dashboard
and bird's eye view, has been collected and automatically annotated. A
deep-network is then trained to warp detections from the first to the second
view. We demonstrate the effectiveness of our model against several baselines
and observe that it is able to generalize to real-world data despite having been
trained solely on synthetic ones.
| [
{
"version": "v1",
"created": "Mon, 26 Jun 2017 15:39:53 GMT"
}
] | 2017-06-27T00:00:00 | [
[
"Palazzi",
"Andrea",
""
],
[
"Borghi",
"Guido",
""
],
[
"Abati",
"Davide",
""
],
[
"Calderara",
"Simone",
""
],
[
"Cucchiara",
"Rita",
""
]
] | TITLE: Learning to Map Vehicles into Bird's Eye View
ABSTRACT: Awareness of the road scene is an essential component for both autonomous
vehicles and Advanced Driver Assistance Systems, and is gaining importance in
both academia and car companies. This paper presents a way to learn a
semantic-aware transformation which maps detections from a dashboard camera
view onto a broader bird's eye occupancy map of the scene. To this end, a huge
synthetic dataset featuring 1M pairs of frames, taken from both car dashboard
and bird's eye view, has been collected and automatically annotated. A
deep network is then trained to warp detections from the first to the second
view. We demonstrate the effectiveness of our model against several baselines
and observe that it is able to generalize to real-world data despite having been
trained solely on synthetic ones.
| new_dataset | 0.95018 |
1610.06368 | Samaneh Abbasi Sureshjani | Samaneh Abbasi-Sureshjani and Jiong Zhang and Remco Duits and Bart ter
Haar Romeny | Retrieving challenging vessel connections in retinal images by line
co-occurrence statistics | null | null | 10.1007/s00422-017-0718-x | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural images often contain curvilinear structures, which might be
disconnected, or partly occluded. Recovering the missing connection of
disconnected structures is an open issue and needs appropriate geometric
reasoning. We propose to find line co-occurrence statistics from the
centerlines of blood vessels in retinal images and show its remarkable
similarity to a well-known probabilistic model for the connectivity pattern in
the primary visual cortex. Furthermore, the probabilistic model is trained from
the data via statistics and used for automated grouping of interrupted vessels
in a spectral clustering based approach. Several challenging image patches are
investigated around junction points, where successful results indicate the
perfect match of the trained model to the profiles of blood vessels in retinal
images. Also, comparisons among several statistical models obtained from
different datasets reveal their high similarity, i.e., they are independent of
the dataset. On top of that, the best approximation of the statistical model
with the symmetrized extension of the probabilistic model on the projective
line bundle is found with a least-squares error smaller than 2%. Apparently, the
direction process on the projective line bundle is a good continuation model
for vessels in retinal images.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2016 11:31:06 GMT"
}
] | 2017-06-26T00:00:00 | [
[
"Abbasi-Sureshjani",
"Samaneh",
""
],
[
"Zhang",
"Jiong",
""
],
[
"Duits",
"Remco",
""
],
[
"Romeny",
"Bart ter Haar",
""
]
] | TITLE: Retrieving challenging vessel connections in retinal images by line
co-occurrence statistics
ABSTRACT: Natural images often contain curvilinear structures, which might be
disconnected, or partly occluded. Recovering the missing connection of
disconnected structures is an open issue and needs appropriate geometric
reasoning. We propose to find line co-occurrence statistics from the
centerlines of blood vessels in retinal images and show its remarkable
similarity to a well-known probabilistic model for the connectivity pattern in
the primary visual cortex. Furthermore, the probabilistic model is trained from
the data via statistics and used for automated grouping of interrupted vessels
in a spectral clustering based approach. Several challenging image patches are
investigated around junction points, where successful results indicate the
perfect match of the trained model to the profiles of blood vessels in retinal
images. Also, comparisons among several statistical models obtained from
different datasets reveal their high similarity, i.e., they are independent of
the dataset. On top of that, the best approximation of the statistical model
with the symmetrized extension of the probabilistic model on the projective
line bundle is found with a least-squares error smaller than 2%. Apparently, the
direction process on the projective line bundle is a good continuation model
for vessels in retinal images.
| no_new_dataset | 0.955527 |
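The grouping step can be pictured with off-the-shelf spectral clustering over a connectivity affinity between segment endpoints. The affinity below is a crude distance-and-alignment stand-in for the learned line co-occurrence kernel, and every parameter and test point is an illustrative assumption.

```python
# Hedged sketch: grouping interrupted line segments by spectral clustering
# of a precomputed affinity that favors nearby, similarly oriented
# endpoints, in the spirit of the approach above.
import numpy as np
from sklearn.cluster import SpectralClustering

def affinity(pts, thetas, sigma_d=20.0, sigma_t=0.5):
    """pts: (n, 2) endpoints; thetas: (n,) orientations in radians."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    dt = np.abs(np.angle(np.exp(1j * (thetas[:, None] - thetas[None, :]))))
    return np.exp(-(d / sigma_d) ** 2 - (dt / sigma_t) ** 2)

pts = np.array([[0, 0], [5, 1], [40, 40], [44, 43]], dtype=float)
thetas = np.array([0.10, 0.15, 0.80, 0.85])
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity(pts, thetas))
print(labels)   # segments likely to belong together share a label
```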
1706.07506 | Massimiliano Ruocco | Massimiliano Ruocco, Ole Steinar Lillest{\o}l Skrede, Helge Langseth | Inter-Session Modeling for Session-Based Recommendation | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, research has been done on applying Recurrent Neural Networks
(RNNs) as recommender systems. Results have been promising, especially in the
session-based setting where RNNs have been shown to outperform state-of-the-art
models. In many of these experiments, the RNN could potentially improve the
recommendations by utilizing information about the user's past sessions, in
addition to its own interactions in the current session. A problem for
session-based recommendation is how to produce accurate recommendations at the
start of a session, before the system has learned much about the user's current
interests. We propose a novel approach that extends an RNN recommender to be
able to process the user's recent sessions, in order to improve
recommendations. This is done by using a second RNN to learn from recent
sessions, and predict the user's interest in the current session. By feeding
this information to the original RNN, it is able to improve its
recommendations. Our experiments on two different datasets show that the
proposed approach can significantly improve recommendations throughout the
sessions, compared to a single RNN working only on the current session. The
proposed model especially improves recommendations at the start of sessions,
and is therefore able to deal with the cold start problem within sessions.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 22:17:00 GMT"
}
] | 2017-06-26T00:00:00 | [
[
"Ruocco",
"Massimiliano",
""
],
[
"Skrede",
"Ole Steinar Lillestøl",
""
],
[
"Langseth",
"Helge",
""
]
] | TITLE: Inter-Session Modeling for Session-Based Recommendation
ABSTRACT: In recent years, research has been done on applying Recurrent Neural Networks
(RNNs) as recommender systems. Results have been promising, especially in the
session-based setting where RNNs have been shown to outperform state-of-the-art
models. In many of these experiments, the RNN could potentially improve the
recommendations by utilizing information about the user's past sessions, in
addition to its own interactions in the current session. A problem for
session-based recommendation is how to produce accurate recommendations at the
start of a session, before the system has learned much about the user's current
interests. We propose a novel approach that extends an RNN recommender to be
able to process the user's recent sessions, in order to improve
recommendations. This is done by using a second RNN to learn from recent
sessions, and predict the user's interest in the current session. By feeding
this information to the original RNN, it is able to improve its
recommendations. Our experiments on two different datasets show that the
proposed approach can significantly improve recommendations throughout the
sessions, compared to a single RNN working only on the current session. The
proposed model especially improves recommendations at the start of sessions,
and is therefore able to deal with the cold start problem within sessions.
| no_new_dataset | 0.946892 |
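A minimal PyTorch sketch of the two-RNN idea: a user-level GRU consumes representations of recent sessions, and its final state initializes the within-session item GRU, which helps at the cold start of a session. Sizes, pooling, and names are illustrative assumptions rather than the paper's architecture.

```python
# Hedged sketch: a session-level GRU summarizes recent sessions into a
# user context that seeds the hidden state of the within-session item GRU.
import torch
import torch.nn as nn

class HierSessionRec(nn.Module):
    def __init__(self, n_items, emb=64, hid=64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, emb, padding_idx=0)
        self.session_rnn = nn.GRU(emb, hid, batch_first=True)   # within session
        self.user_rnn = nn.GRU(hid, hid, batch_first=True)      # across sessions
        self.out = nn.Linear(hid, n_items)

    def forward(self, past_session_reprs, current_items):
        # past_session_reprs: (batch, n_past_sessions, hid), e.g. mean-pooled
        # hidden states of earlier sessions; current_items: (batch, seq_len).
        _, user_ctx = self.user_rnn(past_session_reprs)   # (1, batch, hid)
        x = self.item_emb(current_items)
        h, _ = self.session_rnn(x, user_ctx)   # user context seeds the session RNN
        return self.out(h)                     # next-item scores at each step
```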
1706.07522 | Hemanth Venkateswara | Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, Sethuraman
Panchanathan | Deep Hashing Network for Unsupervised Domain Adaptation | CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, deep neural networks have emerged as a dominant machine
learning tool for a wide variety of application domains. However, training a
deep neural network requires a large amount of labeled data, which is an
expensive process in terms of time, labor and human expertise. Domain
adaptation or transfer learning algorithms address this challenge by leveraging
labeled data in a different, but related source domain, to develop a model for
the target domain. Further, the explosive growth of digital data has posed a
fundamental challenge concerning its storage and retrieval. Owing to its storage
and retrieval efficiency, hashing has seen wide application in a variety of
computer vision tasks in recent years. In this paper, we first
introduce a new dataset, Office-Home, to evaluate domain adaptation algorithms.
The dataset contains images of a variety of everyday objects from multiple
domains. We then propose a novel deep learning framework that can exploit
labeled source data and unlabeled target data to learn informative hash codes,
to accurately classify unseen target data. To the best of our knowledge, this
is the first research effort to exploit the feature learning capabilities of
deep neural networks to learn representative hash codes to address the domain
adaptation problem. Our extensive empirical studies on multiple transfer tasks
corroborate the usefulness of the framework in learning efficient hash codes
which outperform existing competitive baselines for unsupervised domain
adaptation.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 23:15:10 GMT"
}
] | 2017-06-26T00:00:00 | [
[
"Venkateswara",
"Hemanth",
""
],
[
"Eusebio",
"Jose",
""
],
[
"Chakraborty",
"Shayok",
""
],
[
"Panchanathan",
"Sethuraman",
""
]
] | TITLE: Deep Hashing Network for Unsupervised Domain Adaptation
ABSTRACT: In recent years, deep neural networks have emerged as a dominant machine
learning tool for a wide variety of application domains. However, training a
deep neural network requires a large amount of labeled data, which is an
expensive process in terms of time, labor and human expertise. Domain
adaptation or transfer learning algorithms address this challenge by leveraging
labeled data in a different, but related source domain, to develop a model for
the target domain. Further, the explosive growth of digital data has posed a
fundamental challenge concerning its storage and retrieval. Owing to its storage
and retrieval efficiency, hashing has seen wide application in a variety of
computer vision tasks in recent years. In this paper, we first
introduce a new dataset, Office-Home, to evaluate domain adaptation algorithms.
The dataset contains images of a variety of everyday objects from multiple
domains. We then propose a novel deep learning framework that can exploit
labeled source data and unlabeled target data to learn informative hash codes,
to accurately classify unseen target data. To the best of our knowledge, this
is the first research effort to exploit the feature learning capabilities of
deep neural networks to learn representative hash codes to address the domain
adaptation problem. Our extensive empirical studies on multiple transfer tasks
corroborate the usefulness of the framework in learning efficient hash codes
which outperform existing competitive baselines for unsupervised domain
adaptation.
| new_dataset | 0.960915 |
1706.07524 | Hemanth Venkateswara | Hemanth Venkateswara, Shayok Chakraborty, Sethuraman Panchanathan | Nonlinear Embedding Transform for Unsupervised Domain Adaptation | ECCV Workshops 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of domain adaptation (DA) deals with adapting classifier models
trained on one data distribution to different data distributions. In this
paper, we introduce the Nonlinear Embedding Transform (NET) for unsupervised DA
by combining domain alignment along with similarity-based embedding. We also
introduce a validation procedure to estimate the model parameters for the NET
algorithm using the source data. Comprehensive evaluations on multiple vision
datasets demonstrate that the NET algorithm outperforms existing competitive
procedures for unsupervised DA.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 23:42:27 GMT"
}
] | 2017-06-26T00:00:00 | [
[
"Venkateswara",
"Hemanth",
""
],
[
"Chakraborty",
"Shayok",
""
],
[
"Panchanathan",
"Sethuraman",
""
]
] | TITLE: Nonlinear Embedding Transform for Unsupervised Domain Adaptation
ABSTRACT: The problem of domain adaptation (DA) deals with adapting classifier models
trained on one data distribution to different data distributions. In this
paper, we introduce the Nonlinear Embedding Transform (NET) for unsupervised DA
by combining domain alignment along with similarity-based embedding. We also
introduce a validation procedure to estimate the model parameters for the NET
algorithm using the source data. Comprehensive evaluations on multiple vision
datasets demonstrate that the NET algorithm outperforms existing competitive
procedures for unsupervised DA.
| no_new_dataset | 0.946498 |
1706.07525 | Hemanth Venkateswara | Hemanth Venkateswara, Prasanth Lade, Jieping Ye, Sethuraman
Panchanathan | Coupled Support Vector Machines for Supervised Domain Adaptation | ACM Multimedia Conference 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Popular domain adaptation (DA) techniques learn a classifier for the target
domain by sampling relevant data points from the source and combining it with
the target data. We present a Support Vector Machine (SVM) based supervised DA
technique, where the similarity between source and target domains is modeled as
the similarity between their SVM decision boundaries. We couple the source and
target SVMs and reduce the model to a standard single SVM. We test the
Coupled-SVM on multiple datasets and compare our results with other popular SVM
based DA approaches.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 23:53:09 GMT"
}
] | 2017-06-26T00:00:00 | [
[
"Venkateswara",
"Hemanth",
""
],
[
"Lade",
"Prasanth",
""
],
[
"Ye",
"Jieping",
""
],
[
"Panchanathan",
"Sethuraman",
""
]
] | TITLE: Coupled Support Vector Machines for Supervised Domain Adaptation
ABSTRACT: Popular domain adaptation (DA) techniques learn a classifier for the target
domain by sampling relevant data points from the source and combining it with
the target data. We present a Support Vector Machine (SVM) based supervised DA
technique, where the similarity between source and target domains is modeled as
the similarity between their SVM decision boundaries. We couple the source and
target SVMs and reduce the model to a standard single SVM. We test the
Coupled-SVM on multiple datasets and compare our results with other popular SVM
based DA approaches.
| no_new_dataset | 0.949529 |
1706.07527 | Hemanth Venkateswara | Hemanth Venkateswara, Shayok Chakraborty, Troy McDaniel, Sethuraman
Panchanathan | Model Selection with Nonlinear Embedding for Unsupervised Domain
Adaptation | AAAI Workshops 2017 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain adaptation deals with adapting classifiers trained on data from a
source distribution, to work effectively on data from a target distribution. In
this paper, we introduce the Nonlinear Embedding Transform (NET) for
unsupervised domain adaptation. The NET reduces cross-domain disparity through
nonlinear domain alignment. It also embeds the domain-aligned data such that
similar data points are clustered together. This results in enhanced
classification. To determine the parameters in the NET model (and in other
unsupervised domain adaptation models), we introduce a validation procedure by
sampling source data points that are similar in distribution to the target
data. We test the NET and the validation procedure using popular image datasets
and compare the classification results across competitive procedures for
unsupervised domain adaptation.
| [
{
"version": "v1",
"created": "Fri, 23 Jun 2017 00:04:38 GMT"
}
] | 2017-06-26T00:00:00 | [
[
"Venkateswara",
"Hemanth",
""
],
[
"Chakraborty",
"Shayok",
""
],
[
"McDaniel",
"Troy",
""
],
[
"Panchanathan",
"Sethuraman",
""
]
] | TITLE: Model Selection with Nonlinear Embedding for Unsupervised Domain
Adaptation
ABSTRACT: Domain adaptation deals with adapting classifiers trained on data from a
source distribution, to work effectively on data from a target distribution. In
this paper, we introduce the Nonlinear Embedding Transform (NET) for
unsupervised domain adaptation. The NET reduces cross-domain disparity through
nonlinear domain alignment. It also embeds the domain-aligned data such that
similar data points are clustered together. This results in enhanced
classification. To determine the parameters in the NET model (and in other
unsupervised domain adaptation models), we introduce a validation procedure by
sampling source data points that are similar in distribution to the target
data. We test the NET and the validation procedure using popular image datasets
and compare the classification results across competitive procedures for
unsupervised domain adaptation.
| no_new_dataset | 0.953013 |
1706.07535 | Hemanth Venkateswara | Hemanth Venkateswara, Prasanth Lade, Binbin Lin, Jieping Ye,
Sethuraman Panchanathan | Efficient Approximate Solutions to Mutual Information Based Global
Feature Selection | ICDM 2015 Conference | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mutual Information (MI) is often used for feature selection when developing
classifier models. Estimating the MI for a subset of features is often
intractable. We demonstrate, that under the assumptions of conditional
independence, MI between a subset of features can be expressed as the
Conditional Mutual Information (CMI) between pairs of features. But selecting
features with the highest CMI turns out to be a hard combinatorial problem. In
this work, we have applied two unique global methods, Truncated Power Method
(TPower) and Low Rank Bilinear Approximation (LowRank), to solve the feature
selection problem. These algorithms provide very good approximations to the
NP-hard CMI based feature selection problem. We experimentally demonstrate the
effectiveness of these procedures across multiple datasets and compare them
with existing MI based global and iterative feature selection procedures.
| [
{
"version": "v1",
"created": "Fri, 23 Jun 2017 01:08:59 GMT"
}
] | 2017-06-26T00:00:00 | [
[
"Venkateswara",
"Hemanth",
""
],
[
"Lade",
"Prasanth",
""
],
[
"Lin",
"Binbin",
""
],
[
"Ye",
"Jieping",
""
],
[
"Panchanathan",
"Sethuraman",
""
]
] | TITLE: Efficient Approximate Solutions to Mutual Information Based Global
Feature Selection
ABSTRACT: Mutual Information (MI) is often used for feature selection when developing
classifier models. Estimating the MI for a subset of features is often
intractable. We demonstrate that, under the assumption of conditional
independence, MI between a subset of features can be expressed as the
Conditional Mutual Information (CMI) between pairs of features. But selecting
features with the highest CMI turns out to be a hard combinatorial problem. In
this work, we have applied two unique global methods, Truncated Power Method
(TPower) and Low Rank Bilinear Approximation (LowRank), to solve the feature
selection problem. These algorithms provide very good approximations to the
NP-hard CMI based feature selection problem. We experimentally demonstrate the
effectiveness of these procedures across multiple datasets and compare them
with existing MI based global and iterative feature selection procedures.
| no_new_dataset | 0.946399 |
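For orientation, the pairwise objective can be prototyped with a plug-in estimator and a greedy maximizer; the paper's TPower and LowRank methods solve the same combinatorial problem globally. Discrete features are assumed below, and the exact scoring rule is an illustrative relevance-minus-conditional-redundancy variant, not the paper's formulation.

```python
# Hedged sketch: count-based conditional mutual information for discrete
# features, plus a greedy selector as a baseline for the CMI objective
# discussed above. Usage: greedy_cmi_select(X_discrete, y, k=5).
import numpy as np
from sklearn.metrics import mutual_info_score

def cond_mutual_info(x, z, y):
    """Plug-in estimate of I(x; z | y) for discrete arrays."""
    return sum((y == c).mean() * mutual_info_score(x[y == c], z[y == c])
               for c in np.unique(y))

def greedy_cmi_select(X, y, k):
    selected, rest = [], list(range(X.shape[1]))
    while len(selected) < k:
        def score(j):
            rel = mutual_info_score(X[:, j], y)          # relevance to label
            red = sum(cond_mutual_info(X[:, j], X[:, s], y) for s in selected)
            return rel - red / max(len(selected), 1)     # minus avg redundancy
        best = max(rest, key=score)
        selected.append(best)
        rest.remove(best)
    return selected
```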
1603.01564 | Andreas ten Pas | Marcus Gualtieri, Andreas ten Pas, Kate Saenko, Robert Platt | High precision grasp pose detection in dense clutter | null | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper considers the problem of grasp pose detection in point clouds. We
follow a general algorithmic structure that first generates a large set of
6-DOF grasp candidates and then classifies each of them as a good or a bad
grasp. Our focus in this paper is on improving the second step by using depth
sensor scans from large online datasets to train a convolutional neural
network. We propose two new representations of grasp candidates, and we
quantify the effect of using prior knowledge of two forms: instance or category
knowledge of the object to be grasped, and pretraining the network on simulated
depth data obtained from idealized CAD models. Our analysis shows that a more
informative grasp candidate representation as well as pretraining and prior
knowledge significantly improve grasp detection. We evaluate our approach on a
Baxter Research Robot and demonstrate an average grasp success rate of 93% in
dense clutter. This is a 20% improvement compared to our prior work.
| [
{
"version": "v1",
"created": "Fri, 4 Mar 2016 18:27:23 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2017 17:38:33 GMT"
}
] | 2017-06-23T00:00:00 | [
[
"Gualtieri",
"Marcus",
""
],
[
"Pas",
"Andreas ten",
""
],
[
"Saenko",
"Kate",
""
],
[
"Platt",
"Robert",
""
]
] | TITLE: High precision grasp pose detection in dense clutter
ABSTRACT: This paper considers the problem of grasp pose detection in point clouds. We
follow a general algorithmic structure that first generates a large set of
6-DOF grasp candidates and then classifies each of them as a good or a bad
grasp. Our focus in this paper is on improving the second step by using depth
sensor scans from large online datasets to train a convolutional neural
network. We propose two new representations of grasp candidates, and we
quantify the effect of using prior knowledge of two forms: instance or category
knowledge of the object to be grasped, and pretraining the network on simulated
depth data obtained from idealized CAD models. Our analysis shows that a more
informative grasp candidate representation as well as pretraining and prior
knowledge significantly improve grasp detection. We evaluate our approach on a
Baxter Research Robot and demonstrate an average grasp success rate of 93% in
dense clutter. This is a 20% improvement compared to our prior work.
| no_new_dataset | 0.95418 |
1603.04535 | Ke Yan | Ke Yan, Lu Kou, and David Zhang | Learning Domain-Invariant Subspace using Domain Features and
Independence Maximization | Accepted | null | 10.1109/TCYB.2016.2633306 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain adaptation algorithms are useful when the distributions of the
training and the test data are different. In this paper, we focus on the
problem of instrumental variation and time-varying drift in the field of
sensors and measurement, which can be viewed as discrete and continuous
distributional change in the feature space. We propose maximum independence
domain adaptation (MIDA) and semi-supervised MIDA (SMIDA) to address this
problem. Domain features are first defined to describe the background
information of a sample, such as the device label and acquisition time. Then,
MIDA learns a subspace which has maximum independence with the domain features,
so as to reduce the inter-domain discrepancy in distributions. A feature
augmentation strategy is also designed to project samples according to their
backgrounds so as to improve the adaptation. The proposed algorithms are
flexible and fast. Their effectiveness is verified by experiments on synthetic
datasets and four real-world ones on sensors, measurement, and computer vision.
They can greatly enhance the practicability of sensor systems, as well as
extend the application scope of existing domain adaptation algorithms by
uniformly handling different kinds of distributional change.
| [
{
"version": "v1",
"created": "Tue, 15 Mar 2016 02:56:22 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2017 01:39:22 GMT"
}
] | 2017-06-23T00:00:00 | [
[
"Yan",
"Ke",
""
],
[
"Kou",
"Lu",
""
],
[
"Zhang",
"David",
""
]
] | TITLE: Learning Domain-Invariant Subspace using Domain Features and
Independence Maximization
ABSTRACT: Domain adaptation algorithms are useful when the distributions of the
training and the test data are different. In this paper, we focus on the
problem of instrumental variation and time-varying drift in the field of
sensors and measurement, which can be viewed as discrete and continuous
distributional change in the feature space. We propose maximum independence
domain adaptation (MIDA) and semi-supervised MIDA (SMIDA) to address this
problem. Domain features are first defined to describe the background
information of a sample, such as the device label and acquisition time. Then,
MIDA learns a subspace which has maximum independence with the domain features,
so as to reduce the inter-domain discrepancy in distributions. A feature
augmentation strategy is also designed to project samples according to their
backgrounds so as to improve the adaptation. The proposed algorithms are
flexible and fast. Their effectiveness is verified by experiments on synthetic
datasets and four real-world ones on sensors, measurement, and computer vision.
They can greatly enhance the practicability of sensor systems, as well as
extend the application scope of existing domain adaptation algorithms by
uniformly handling different kinds of distributional change.
| no_new_dataset | 0.946051 |
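The independence measure underlying MIDA is the Hilbert-Schmidt Independence Criterion (HSIC) between the projected features and the domain features; below is the standard biased empirical HSIC estimate as a reference point. The kernels and data are illustrative, and the MIDA eigenproblem itself is omitted.

```python
# Hedged sketch: biased empirical HSIC from two n x n kernel matrices.
# MIDA seeks a subspace that drives this quantity toward zero against the
# domain-feature kernel; only the criterion is shown here.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def hsic(K, L):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))                     # projected features
D = rng.integers(0, 2, size=(50, 1)).astype(float)   # domain features
print(hsic(rbf_kernel(X), D @ D.T))              # small -> near-independent
```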
1703.04142 | Lech Madeyski | Lech Madeyski and Marcin Kawalerowicz | Continuous Defect Prediction: The Idea and a Related Dataset | Lech Madeyski and Marcin Kawalerowicz. "Continuous Defect Prediction:
The Idea and a Related Dataset" In: 14th International Conference on Mining
Software Repositories (MSR'17). Buenos Aires. 2017, pp. 515-518. doi:
10.1109/MSR.2017.46. URL:
http://madeyski.e-informatyka.pl/download/MadeyskiKawalerowiczMSR17.pdf | null | 10.1109/MSR.2017.46 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We would like to present the idea of our Continuous Defect Prediction (CDP)
research and a related dataset that we created and share. Our dataset is
currently a set of more than 11 million data rows, representing files involved
in Continuous Integration (CI) builds, which synthesize the results of CI builds
with data we mine from software repositories. Our dataset embraces 1265
software projects, 30,022 distinct commit authors and several software process
metrics that in earlier research appeared to be useful in software defect
prediction. In this particular dataset we use TravisTorrent as the source of CI
data. TravisTorrent synthesizes commit level information from the Travis CI
server and GitHub open-source projects repositories. We extend this data to a
file change level and calculate the software process metrics that may be used,
for example, as features to predict risky software changes that could break the
build if committed to a repository with CI enabled.
| [
{
"version": "v1",
"created": "Sun, 12 Mar 2017 17:08:47 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jun 2017 12:02:12 GMT"
}
] | 2017-06-23T00:00:00 | [
[
"Madeyski",
"Lech",
""
],
[
"Kawalerowicz",
"Marcin",
""
]
] | TITLE: Continuous Defect Prediction: The Idea and a Related Dataset
ABSTRACT: We would like to present the idea of our Continuous Defect Prediction (CDP)
research and a related dataset that we created and share. Our dataset is
currently a set of more than 11 million data rows, representing files involved
in Continuous Integration (CI) builds, which synthesize the results of CI builds
with data we mine from software repositories. Our dataset embraces 1265
software projects, 30,022 distinct commit authors and several software process
metrics that in earlier research appeared to be useful in software defect
prediction. In this particular dataset we use TravisTorrent as the source of CI
data. TravisTorrent synthesizes commit level information from the Travis CI
server and GitHub open-source projects repositories. We extend this data to a
file change level and calculate the software process metrics that may be used,
for example, as features to predict risky software changes that could break the
build if committed to a repository with CI enabled.
| new_dataset | 0.96128 |
1706.04206 | Hossein Hematialam | Hossein Hematialam, Wlodek Zadrozny | Identifying Condition-Action Statements in Medical Guidelines Using
Domain-Independent Features | null | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper advances the state of the art in text understanding of medical
guidelines by releasing two new annotated clinical guidelines datasets, and
establishing baselines for using machine learning to extract condition-action
pairs. In contrast to prior work that relies on manually created rules, we
report experiments with several supervised machine learning techniques to
classify sentences as to whether they express conditions and actions. We show
the limitations and possible extensions of this work on text mining of medical
guidelines.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 18:02:27 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2017 18:35:26 GMT"
}
] | 2017-06-23T00:00:00 | [
[
"Hematialam",
"Hossein",
""
],
[
"Zadrozny",
"Wlodek",
""
]
] | TITLE: Identifying Condition-Action Statements in Medical Guidelines Using
Domain-Independent Features
ABSTRACT: This paper advances the state of the art in text understanding of medical
guidelines by releasing two new annotated clinical guidelines datasets, and
establishing baselines for using machine learning to extract condition-action
pairs. In contrast to prior work that relies on manually created rules, we
report experiments with several supervised machine learning techniques to
classify sentences as to whether they express conditions and actions. We show
the limitations and possible extensions of this work on text mining of medical
guidelines.
| new_dataset | 0.949435 |
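As a sense of scale for the baselines this abstract reports, a condition-action sentence classifier can be prototyped in a few lines of scikit-learn; the sentences, labels, and feature choice below are invented for illustration and are not the paper's features.

```python
# Hedged sketch: classifying guideline sentences as condition-action vs.
# other with domain-independent surface features (here plain TF-IDF
# n-grams). The tiny training set is made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "If the patient is febrile, administer antipyretics.",
    "When creatinine exceeds 2.0 mg/dL, reduce the dose.",
    "Hypertension is a common comorbidity.",
    "The committee reviewed the available evidence.",
]
labels = [1, 1, 0, 0]   # 1 = condition-action statement

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["If bleeding persists, apply pressure."]))  # likely [1]
```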
1706.07145 | Shuchang Zhou | Shuchang Zhou, Yuzhi Wang, He Wen, Qinyao He and Yuheng Zou | Balanced Quantization: An Effective and Efficient Approach to Quantized
Neural Networks | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantized Neural Networks (QNNs), which use low bitwidth numbers for
representing parameters and performing computations, have been proposed to
reduce the computation complexity, storage size and memory usage. In QNNs,
parameters and activations are uniformly quantized, such that the
multiplications and additions can be accelerated by bitwise operations.
However, distributions of parameters in Neural Networks are often imbalanced,
such that the uniform quantization determined from extremal values may
underutilize the available bitwidth. In this paper, we propose a novel quantization
method that can ensure the balance of distributions of quantized values. Our
method first recursively partitions the parameters by percentiles into balanced
bins, and then applies uniform quantization. We also introduce computationally
cheaper approximations of percentiles to reduce the computation overhead
introduced. Overall, our method improves the prediction accuracies of QNNs
without introducing extra computation during inference, has negligible impact
on training speed, and is applicable to both Convolutional Neural Networks and
Recurrent Neural Networks. Experiments on standard datasets including ImageNet
and Penn Treebank confirm the effectiveness of our method. On ImageNet, the
top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7\%, which is
superior to the state of the art for QNNs.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 01:25:37 GMT"
}
] | 2017-06-23T00:00:00 | [
[
"Zhou",
"Shuchang",
""
],
[
"Wang",
"Yuzhi",
""
],
[
"Wen",
"He",
""
],
[
"He",
"Qinyao",
""
],
[
"Zou",
"Yuheng",
""
]
] | TITLE: Balanced Quantization: An Effective and Efficient Approach to Quantized
Neural Networks
ABSTRACT: Quantized Neural Networks (QNNs), which use low bitwidth numbers for
representing parameters and performing computations, have been proposed to
reduce the computation complexity, storage size and memory usage. In QNNs,
parameters and activations are uniformly quantized, such that the
multiplications and additions can be accelerated by bitwise operations.
However, distributions of parameters in Neural Networks are often imbalanced,
such that the uniform quantization determined from extremal values may
underutilize the available bitwidth. In this paper, we propose a novel quantization
method that can ensure the balance of distributions of quantized values. Our
method first recursively partitions the parameters by percentiles into balanced
bins, and then applies uniform quantization. We also introduce computationally
cheaper approximations of percentiles to reduce the computation overhead
introduced. Overall, our method improves the prediction accuracies of QNNs
without introducing extra computation during inference, has negligible impact
on training speed, and is applicable to both Convolutional Neural Networks and
Recurrent Neural Networks. Experiments on standard datasets including ImageNet
and Penn Treebank confirm the effectiveness of our method. On ImageNet, the
top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7\%, which is
superior to the state of the art for QNNs.
| no_new_dataset | 0.949153 |
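The balanced quantization scheme summarized above reduces, for power-of-two level counts, to percentile (equal-mass) binning followed by mapping to uniform levels; the sketch below uses exact percentiles where the paper uses cheaper approximations, and the output-level mapping is an illustrative choice.

```python
# Hedged sketch: percentile-balanced quantization. Values are split at
# equal-mass edges so each of the 2^bits bins holds roughly the same
# number of parameters, then mapped to uniform output levels.
import numpy as np

def balanced_quantize(w, bits=2):
    n_levels = 2 ** bits
    # Recursive median splitting yields exactly these edges for
    # power-of-two level counts.
    qs = np.linspace(0, 100, n_levels + 1)[1:-1]
    edges = np.percentile(w, qs)
    bins = np.digitize(w, edges)                       # balanced bin per weight
    levels = np.linspace(w.min(), w.max(), n_levels)   # uniform output levels
    return levels[bins]

w = np.random.default_rng(0).normal(size=10_000) ** 3  # heavily imbalanced
wq = balanced_quantize(w, bits=2)
print(np.unique(wq, return_counts=True)[1])            # roughly equal counts
```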
1706.07156 | Muhammad Huzaifah Md Shahrin | M. Huzaifah | Comparison of Time-Frequency Representations for Environmental Sound
Classification using Convolutional Neural Networks | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent successful applications of convolutional neural networks (CNNs) to
audio classification and speech recognition have motivated the search for
better input representations for more efficient training. Visual displays of an
audio signal, through various time-frequency representations such as
spectrograms, offer a rich representation of the temporal and spectral structure
of the original signal. In this letter, we compare various popular signal
processing methods to obtain this representation, such as short-time Fourier
transform (STFT) with linear and Mel scales, constant-Q transform (CQT) and
continuous Wavelet transform (CWT), and assess their impact on the
classification performance of two environmental sound datasets using CNNs. This
study supports the hypothesis that time-frequency representations are valuable
in learning useful features for sound classification. Moreover, the actual
transformation used is shown to impact the classification accuracy, with
Mel-scaled STFT outperforming the other discussed methods slightly and baseline
MFCC features to a large degree. Additionally, we observe that the optimal
window size during transformation is dependent on the characteristics of the
audio signal and architecturally, 2D convolution yielded better results in most
cases compared to 1D.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 03:23:09 GMT"
}
] | 2017-06-23T00:00:00 | [
[
"Huzaifah",
"M.",
""
]
] | TITLE: Comparison of Time-Frequency Representations for Environmental Sound
Classification using Convolutional Neural Networks
ABSTRACT: Recent successful applications of convolutional neural networks (CNNs) to
audio classification and speech recognition have motivated the search for
better input representations for more efficient training. Visual displays of an
audio signal, through various time-frequency representations such as
spectrograms, offer a rich representation of the temporal and spectral structure
of the original signal. In this letter, we compare various popular signal
processing methods to obtain this representation, such as short-time Fourier
transform (STFT) with linear and Mel scales, constant-Q transform (CQT) and
continuous Wavelet transform (CWT), and assess their impact on the
classification performance of two environmental sound datasets using CNNs. This
study supports the hypothesis that time-frequency representations are valuable
in learning useful features for sound classification. Moreover, the actual
transformation used is shown to impact the classification accuracy, with
Mel-scaled STFT outperforming the other discussed methods slightly and baseline
MFCC features to a large degree. Additionally, we observe that the optimal
window size during transformation is dependent on the characteristics of the
audio signal and architecturally, 2D convolution yielded better results in most
cases compared to 1D.
| no_new_dataset | 0.946843 |
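For reference, the two strongest representations in this comparison, the linear-frequency and Mel-scaled STFT spectrograms, can be produced with librosa as below; the synthetic test tone, window, hop, and Mel-band settings are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: linear-frequency vs. Mel-scaled STFT spectrograms as 2D
# CNN inputs, computed with librosa on a synthetic two-tone signal.
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
y = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

stft = np.abs(librosa.stft(y, n_fft=1024, hop_length=512)) ** 2
linear_db = librosa.power_to_db(stft)                 # linear-frequency STFT
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=512, n_mels=64)
mel_db = librosa.power_to_db(mel)                     # Mel-scaled STFT
print(linear_db.shape, mel_db.shape)                  # (freq_bins, frames)
```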
1706.07346 | Lingxi Xie | Yuyin Zhou, Lingxi Xie, Elliot K. Fishman, Alan L. Yuille | Deep Supervision for Pancreatic Cyst Segmentation in Abdominal CT Scans | Accepted to MICCAI 2017 (8 pages, 3 figures) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic segmentation of an organ and its cystic region is a prerequisite of
computer-aided diagnosis. In this paper, we focus on pancreatic cyst
segmentation in abdominal CT scans. This task is important and very useful in
clinical practice, yet challenging due to the low contrast at the boundary, the
variability in location and shape, and the different stages of pancreatic
cancer. Inspired by the high relevance between the location of a pancreas and
its cystic region, we introduce extra deep supervision into the segmentation
network, so that cyst segmentation can be improved with the help of relatively
easier pancreas segmentation. Under a reasonable transformation function, our
approach can be factorized into two stages, and each stage can be efficiently
optimized via gradient back-propagation throughout the deep networks. We
collect a new dataset with 131 pathological samples, which, to the best of our
knowledge, is the largest set for pancreatic cyst segmentation. Without human
assistance, our approach reports a 63.44% average accuracy, measured by the
Dice-S{\o}rensen coefficient (DSC), which is higher than the number (60.46%)
without deep supervision.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 14:46:16 GMT"
}
] | 2017-06-23T00:00:00 | [
[
"Zhou",
"Yuyin",
""
],
[
"Xie",
"Lingxi",
""
],
[
"Fishman",
"Elliot K.",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: Deep Supervision for Pancreatic Cyst Segmentation in Abdominal CT Scans
ABSTRACT: Automatic segmentation of an organ and its cystic region is a prerequisite of
computer-aided diagnosis. In this paper, we focus on pancreatic cyst
segmentation in abdominal CT scans. This task is important and very useful in
clinical practice, yet challenging due to the low contrast at the boundary, the
variability in location and shape, and the different stages of pancreatic
cancer. Inspired by the high relevance between the location of a pancreas and
its cystic region, we introduce extra deep supervision into the segmentation
network, so that cyst segmentation can be improved with the help of relatively
easier pancreas segmentation. Under a reasonable transformation function, our
approach can be factorized into two stages, and each stage can be efficiently
optimized via gradient back-propagation throughout the deep networks. We
collect a new dataset with 131 pathological samples, which, to the best of our
knowledge, is the largest set for pancreatic cyst segmentation. Without human
assistance, our approach reports a 63.44% average accuracy, measured by the
Dice-S{\o}rensen coefficient (DSC), which is higher than the number (60.46%)
without deep supervision.
| new_dataset | 0.959724 |
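The deep-supervision coupling can be pictured as a joint objective in which an auxiliary pancreas loss on an intermediate output guides the harder cyst loss; the sketch below is a generic assumption about that coupling, not the paper's transformation function.

```python
# Hedged sketch of a deeply supervised joint objective: the (easier)
# pancreas segmentation supervises an intermediate output while the cyst
# loss drives the final output. Loss choice and weighting are assumptions.
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def deeply_supervised_loss(pancreas_logits, cyst_logits,
                           pancreas_mask, cyst_mask, aux_weight=0.5):
    # All tensors share the spatial shape of the CT volume/slice.
    return (bce(cyst_logits, cyst_mask)
            + aux_weight * bce(pancreas_logits, pancreas_mask))
```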
1706.07397 | Ting Sun | Ting Sun, Lin Sun, Dit-Yan Yeung | Fine-Grained Categorization via CNN-Based Automatic Extraction and
Integration of Object-Level and Part-Level Features | 45 pages, 20 figures, accepted by Image and Vision Computing | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-grained categorization can benefit from part-based features which reveal
subtle visual differences between object categories. Handcrafted features have
been widely used for part detection and classification. Although a recent trend
seeks to learn such features automatically using powerful deep learning models
such as convolutional neural networks (CNN), their training and possibly also
testing require manually provided annotations which are costly to obtain. To
relax these requirements, we assume in this study a general problem setting in
which the raw images are only provided with object-level class labels for model
training with no other side information needed. Specifically, by extracting and
interpreting the hierarchical hidden layer features learned by a CNN, we
propose an elaborate CNN-based system for fine-grained categorization. When
evaluated on the Caltech-UCSD Birds-200-2011, FGVC-Aircraft, Cars and Stanford
dogs datasets under the setting that only object-level class labels are used
for training and no other annotations are available for both training and
testing, our method achieves impressive performance that is superior or
comparable to the state of the art. Moreover, it sheds some light on the ingenious
use of the hierarchical features learned by a CNN, which has wide applicability
well beyond the current fine-grained categorization task.
| [
{
"version": "v1",
"created": "Thu, 22 Jun 2017 16:59:16 GMT"
}
] | 2017-06-23T00:00:00 | [
[
"Sun",
"Ting",
""
],
[
"Sun",
"Lin",
""
],
[
"Yeung",
"Dit-Yan",
""
]
] | TITLE: Fine-Grained Categorization via CNN-Based Automatic Extraction and
Integration of Object-Level and Part-Level Features
ABSTRACT: Fine-grained categorization can benefit from part-based features which reveal
subtle visual differences between object categories. Handcrafted features have
been widely used for part detection and classification. Although a recent trend
seeks to learn such features automatically using powerful deep learning models
such as convolutional neural networks (CNN), their training and possibly also
testing require manually provided annotations which are costly to obtain. To
relax these requirements, we assume in this study a general problem setting in
which the raw images are only provided with object-level class labels for model
training with no other side information needed. Specifically, by extracting and
interpreting the hierarchical hidden layer features learned by a CNN, we
propose an elaborate CNN-based system for fine-grained categorization. When
evaluated on the Caltech-UCSD Birds-200-2011, FGVC-Aircraft, Cars and Stanford
dogs datasets under the setting that only object-level class labels are used
for training and no other annotations are available for both training and
testing, our method achieves impressive performance that is superior or
comparable to the state of the art. Moreover, it sheds some light on the ingenious
use of the hierarchical features learned by CNNs, an approach with wide applicability
well beyond the current fine-grained categorization task.
| no_new_dataset | 0.950319 |
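The record above relies on extracting and interpreting hierarchical hidden-layer CNN features. Below is a hedged sketch of one common way to tap such features, using PyTorch forward hooks on a VGG-16 backbone; the backbone, the layer indices, and the torchvision >= 0.13 `weights=None` API are assumptions, not the paper's configuration.

```python
# Hedged sketch: reading hierarchical hidden-layer features from a CNN via
# PyTorch forward hooks. Backbone and layer choices are illustrative.
import torch
import torchvision.models as models

model = models.vgg16(weights=None).eval()  # weights=None avoids a download
features = {}

def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

# Tap the pooling layers that close the five VGG-16 convolutional blocks.
for idx in (4, 9, 16, 23, 30):
    model.features[idx].register_forward_hook(save_output(f"layer{idx}"))

x = torch.randn(1, 3, 224, 224)  # dummy image batch
with torch.no_grad():
    model(x)
for name, fmap in features.items():
    print(name, tuple(fmap.shape))  # feature maps at increasing abstraction
```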
1612.08230 | Lingxi Xie | Yuyin Zhou, Lingxi Xie, Wei Shen, Yan Wang, Elliot K. Fishman, Alan L.
Yuille | A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans | Accepted to MICCAI 2017 (8 pages, 3 figures) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have been widely adopted for automatic organ
segmentation from abdominal CT scans. However, the segmentation accuracy for
some small organs (e.g., the pancreas) is sometimes unsatisfactory,
arguably because deep networks are easily disrupted by the complex and variable
background regions that occupy a large fraction of the input volume. In this
paper, we formulate this problem as a fixed-point model which uses a
predicted segmentation mask to shrink the input region. This is motivated by
the fact that a smaller input region often leads to more accurate segmentation.
In the training process, we use the ground-truth annotation to generate
accurate input regions and optimize network weights. At the testing stage, we
fix the network parameters and update the segmentation results in an iterative
manner. We evaluate our approach on the NIH pancreas segmentation dataset, and
outperform the state-of-the-art by more than 4%, measured by the average
Dice-S{\o}rensen Coefficient (DSC). In addition, we report 62.43% DSC in the
worst case, which guarantees the reliability of our approach in clinical
applications.
| [
{
"version": "v1",
"created": "Sun, 25 Dec 2016 02:15:50 GMT"
},
{
"version": "v2",
"created": "Mon, 29 May 2017 07:41:05 GMT"
},
{
"version": "v3",
"created": "Sun, 18 Jun 2017 02:52:24 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Jun 2017 04:00:59 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Zhou",
"Yuyin",
""
],
[
"Xie",
"Lingxi",
""
],
[
"Shen",
"Wei",
""
],
[
"Wang",
"Yan",
""
],
[
"Fishman",
"Elliot K.",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans
ABSTRACT: Deep neural networks have been widely adopted for automatic organ
segmentation from abdominal CT scans. However, the segmentation accuracy for
some small organs (e.g., the pancreas) is sometimes unsatisfactory,
arguably because deep networks are easily disrupted by the complex and variable
background regions that occupy a large fraction of the input volume. In this
paper, we formulate this problem as a fixed-point model which uses a
predicted segmentation mask to shrink the input region. This is motivated by
the fact that a smaller input region often leads to more accurate segmentation.
In the training process, we use the ground-truth annotation to generate
accurate input regions and optimize network weights. At the testing stage, we
fix the network parameters and update the segmentation results in an iterative
manner. We evaluate our approach on the NIH pancreas segmentation dataset, and
outperform the state-of-the-art by more than 4%, measured by the average
Dice-S{\o}rensen Coefficient (DSC). In addition, we report 62.43% DSC in the
worst case, which guarantees the reliability of our approach in clinical
applications.
| no_new_dataset | 0.951188 |
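The fixed-point procedure described above (predict a mask, shrink the input to its bounding box, re-predict) can be summarized in a few lines. The sketch below is a simplified 2-D illustration; `segment` is a hypothetical stand-in for the trained network, and the margin and stopping tolerance are assumptions.

```python
# Hedged sketch of the fixed-point idea: iteratively crop the input to the
# bounding box of the predicted mask and re-segment until the mask stabilizes.
import numpy as np

def bounding_box(mask, margin=8):
    ys, xs = np.where(mask > 0)
    if len(ys) == 0:
        return None
    return (max(ys.min() - margin, 0), ys.max() + margin + 1,
            max(xs.min() - margin, 0), xs.max() + margin + 1)

def fixed_point_segmentation(image, segment, max_iters=10, tol=0.99):
    mask = segment(image)                      # initial coarse prediction
    for _ in range(max_iters):
        box = bounding_box(mask)
        if box is None:
            break
        y0, y1, x0, x1 = box
        refined = np.zeros_like(mask)
        refined[y0:y1, x0:x1] = segment(image[y0:y1, x0:x1])
        # Stop once successive masks agree closely (a fixed point, by Dice).
        inter = np.logical_and(mask, refined).sum()
        dice = 2.0 * inter / (mask.sum() + refined.sum() + 1e-8)
        mask = refined
        if dice > tol:
            break
    return mask

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[20:40, 25:45] = 1.0
    segment = lambda x: (x > 0.5).astype(int)  # hypothetical stand-in network
    print(fixed_point_segmentation(img, segment).sum(), "foreground pixels")
```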
1703.00152 | Nevrez Imamoglu | Nevrez Imamoglu, Chi Zhang, Wataru Shimoda, Yuming Fang, Boxin Shi | Saliency Detection by Forward and Backward Cues in Deep-CNNs | 5 pages, 4 figures, and 1 table. The content of this work is accepted
for ICIP 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As prior knowledge of objects or object features helps us make relations for
similar objects on attentional tasks, pre-trained deep convolutional neural
networks (CNNs) can be used to detect salient objects in images regardless of
whether the object class is within the network's knowledge. In this paper, we
propose a top-down saliency model using a weakly supervised CNN model trained
for a 1000-class object labelling task on RGB images. The model detects
attentive regions based on their objectness scores predicted by selected
features from CNNs. To estimate the salient objects effectively, we combine
both forward and backward features, while demonstrating that partially-guided
backpropagation will provide sufficient information for selecting the features
from the forward run of the CNN model. Finally, these top-down cues are
enhanced with a state-of-the-art bottom-up model to complement the overall
saliency. As the proposed model is
an effective integration of forward and backward cues through objectness
without any supervision or regression to ground truth data, it gives promising
results compared to state-of-the-art models in two different datasets.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 06:56:37 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Jun 2017 09:04:55 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Imamoglu",
"Nevrez",
""
],
[
"Zhang",
"Chi",
""
],
[
"Shimoda",
"Wataru",
""
],
[
"Fang",
"Yuming",
""
],
[
"Shi",
"Boxin",
""
]
] | TITLE: Saliency Detection by Forward and Backward Cues in Deep-CNNs
ABSTRACT: As prior knowledge of objects or object features helps us make relations for
similar objects on attentional tasks, pre-trained deep convolutional neural
networks (CNNs) can be used to detect salient objects in images regardless of
whether the object class is within the network's knowledge. In this paper, we
propose a top-down saliency model using a weakly supervised CNN model trained
for a 1000-class object labelling task on RGB images. The model detects
attentive regions based on their objectness scores predicted by selected
features from CNNs. To estimate the salient objects effectively, we combine
both forward and backward features, while demonstrating that partially-guided
backpropagation will provide sufficient information for selecting the features
from the forward run of the CNN model. Finally, these top-down cues are
enhanced with a state-of-the-art bottom-up model to complement the overall
saliency. As the proposed model is
an effective integration of forward and backward cues through objectness
without any supervision or regression to ground truth data, it gives promising
results compared to state-of-the-art models in two different datasets.
| no_new_dataset | 0.951504 |
1705.05742 | Rakshit Trivedi | Rakshit Trivedi, Hanjun Dai, Yichen Wang, Le Song | Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs | null | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The availability of large scale event data with time stamps has given rise to
dynamically evolving knowledge graphs that contain temporal information for
each edge. Reasoning over time in such dynamic knowledge graphs is not yet well
understood. To this end, we present Know-Evolve, a novel deep evolutionary
knowledge network that learns non-linearly evolving entity representations over
time. The occurrence of a fact (edge) is modeled as a multivariate point
process whose intensity function is modulated by the score for that fact
computed based on the learned entity embeddings. We demonstrate significantly
improved performance over various relational learning approaches on two large
scale real-world datasets. Further, our method effectively predicts occurrence
or recurrence time of a fact, which is novel compared to prior reasoning
approaches in the multi-relational setting.
| [
{
"version": "v1",
"created": "Tue, 16 May 2017 14:53:02 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2017 04:54:07 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Jun 2017 05:21:46 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Trivedi",
"Rakshit",
""
],
[
"Dai",
"Hanjun",
""
],
[
"Wang",
"Yichen",
""
],
[
"Song",
"Le",
""
]
] | TITLE: Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs
ABSTRACT: The availability of large scale event data with time stamps has given rise to
dynamically evolving knowledge graphs that contain temporal information for
each edge. Reasoning over time in such dynamic knowledge graphs is not yet well
understood. To this end, we present Know-Evolve, a novel deep evolutionary
knowledge network that learns non-linearly evolving entity representations over
time. The occurrence of a fact (edge) is modeled as a multivariate point
process whose intensity function is modulated by the score for that fact
computed based on the learned entity embeddings. We demonstrate significantly
improved performance over various relational learning approaches on two large
scale real-world datasets. Further, our method effectively predicts occurrence
or recurrence time of a fact, which is novel compared to prior reasoning
approaches in the multi-relational setting.
| no_new_dataset | 0.949856 |
1706.05993 | Hosnieh Sattar | Hosnieh Sattar, Mario Fritz, Andreas Bulling | Visual Decoding of Targets During Visual Search From Human Eye Fixations | null | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What does human gaze reveal about a user's intents and to what extent can
these intents be inferred or even visualized? Gaze was proposed as an implicit
source of information to predict the target of visual search and, more
recently, to predict the object class and attributes of the search target. In
this work, we go one step further and investigate the feasibility of combining
recent advances in encoding human gaze information using deep convolutional
neural networks with the power of generative image models to visually decode,
i.e. create a visual representation of, the search target. Such visual decoding
is challenging for two reasons: 1) the search target only resides in the user's
mind as a subjective visual pattern, and most often cannot even be described
verbally by the person, and 2) it is, as yet, unclear if gaze fixations
contain sufficient information for this task at all. We show, for the first
time, that visual representations of search targets can indeed be decoded only
from human gaze fixations. We propose to first encode fixations into a semantic
representation and then decode this representation into an image. We evaluate
our method on a recent gaze dataset of 14 participants searching for clothing
in image collages and validate the model's predictions using two human studies.
Our results show that users were able to correctly select the category of the
decoded image 62% of the time (chance level: 10%). In our second study, we show
the importance of a local gaze encoding for decoding the visual search targets
of users.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 14:52:30 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2017 05:28:51 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Jun 2017 11:19:10 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Sattar",
"Hosnieh",
""
],
[
"Fritz",
"Mario",
""
],
[
"Bulling",
"Andreas",
""
]
] | TITLE: Visual Decoding of Targets During Visual Search From Human Eye Fixations
ABSTRACT: What does human gaze reveal about a user's intents and to what extent can
these intents be inferred or even visualized? Gaze was proposed as an implicit
source of information to predict the target of visual search and, more
recently, to predict the object class and attributes of the search target. In
this work, we go one step further and investigate the feasibility of combining
recent advances in encoding human gaze information using deep convolutional
neural networks with the power of generative image models to visually decode,
i.e. create a visual representation of, the search target. Such visual decoding
is challenging for two reasons: 1) the search target only resides in the user's
mind as a subjective visual pattern, and most often cannot even be described
verbally by the person, and 2) it is, as yet, unclear if gaze fixations
contain sufficient information for this task at all. We show, for the first
time, that visual representations of search targets can indeed be decoded only
from human gaze fixations. We propose to first encode fixations into a semantic
representation and then decode this representation into an image. We evaluate
our method on a recent gaze dataset of 14 participants searching for clothing
in image collages and validate the model's predictions using two human studies.
Our results show that users were able to correctly select the category of the
decoded image 62% of the time (chance level: 10%). In our second study, we show
the importance of a local gaze encoding for decoding the visual search targets
of users.
| no_new_dataset | 0.940188 |
1706.06660 | Venkatesh Saligrama | Yao Ma, Alex Olshevsky, Venkatesh Saligrama, Csaba Szepesvari | Crowdsourcing with Sparsely Interacting Workers | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider estimation of worker skills from worker-task interaction data
(with unknown labels) for the single-coin crowd-sourcing binary classification
model in symmetric noise. We define the (worker) interaction graph whose nodes
are workers and an edge between two nodes indicates whether or not the two
workers participated in a common task. We show that skills are asymptotically
identifiable if and only if an appropriate limiting version of the interaction
graph is irreducible and has odd-cycles. We then formulate a weighted rank-one
optimization problem to estimate skills based on observations on an
irreducible, aperiodic interaction graph. We propose a gradient descent scheme
and show that for such interaction graphs estimates converge asymptotically to
the global minimum. We characterize noise robustness of the gradient scheme in
terms of spectral properties of signless Laplacians of the interaction graph.
We then demonstrate that a plug-in estimator based on the estimated skills
achieves state-of-the-art performance on a number of real-world datasets. Our
results have implications for the rank-one matrix completion problem in that
gradient descent can provably recover $W \times W$ rank-one matrices based on
$W+1$ off-diagonal observations of a connected graph with a single odd-cycle.
| [
{
"version": "v1",
"created": "Tue, 20 Jun 2017 20:41:25 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Ma",
"Yao",
""
],
[
"Olshevsky",
"Alex",
""
],
[
"Saligrama",
"Venkatesh",
""
],
[
"Szepesvari",
"Csaba",
""
]
] | TITLE: Crowdsourcing with Sparsely Interacting Workers
ABSTRACT: We consider estimation of worker skills from worker-task interaction data
(with unknown labels) for the single-coin crowd-sourcing binary classification
model in symmetric noise. We define the (worker) interaction graph whose nodes
are workers and an edge between two nodes indicates whether or not the two
workers participated in a common task. We show that skills are asymptotically
identifiable if and only if an appropriate limiting version of the interaction
graph is irreducible and has odd-cycles. We then formulate a weighted rank-one
optimization problem to estimate skills based on observations on an
irreducible, aperiodic interaction graph. We propose a gradient descent scheme
and show that for such interaction graphs estimates converge asymptotically to
the global minimum. We characterize noise robustness of the gradient scheme in
terms of spectral properties of signless Laplacians of the interaction graph.
We then demonstrate that a plug-in estimator based on the estimated skills
achieves state-of-the-art performance on a number of real-world datasets. Our
results have implications for the rank-one matrix completion problem in that
gradient descent can provably recover $W \times W$ rank-one matrices based on
$W+1$ off-diagonal observations of a connected graph with a single odd-cycle.
| no_new_dataset | 0.945701 |
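The rank-one recovery claim above is easy to illustrate numerically: gradient descent on the squared residuals of the observed off-diagonal entries recovers the skill vector. The toy sketch below uses unweighted least squares and a random interaction graph, which are simplifying assumptions relative to the paper's weighted formulation.

```python
# Hedged sketch of rank-one recovery by gradient descent: estimate a skill
# vector w so that w_i * w_j matches observed off-diagonal entries on the
# edges of an interaction graph.
import numpy as np

rng = np.random.default_rng(0)
W = 20
true_w = rng.uniform(0.5, 1.0, size=W)                 # ground-truth skills
edges = [(i, j) for i in range(W) for j in range(i + 1, W)
         if rng.random() < 0.3]                        # sparse interaction graph
obs = {(i, j): true_w[i] * true_w[j] for (i, j) in edges}

w = np.full(W, 0.7)                                    # positive initialization
lr = 0.05
for _ in range(5000):            # minimize 0.5 * sum_(i,j) (w_i w_j - M_ij)^2
    grad = np.zeros(W)
    for (i, j), m in obs.items():
        r = w[i] * w[j] - m                            # residual on this edge
        grad[i] += r * w[j]
        grad[j] += r * w[i]
    w -= lr * grad
print("max abs estimation error:", np.abs(w - true_w).max())
```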
1706.06664 | Anshumali Shrivastava | Chen Luo, Anshumali Shrivastava | Arrays of (locality-sensitive) Count Estimators (ACE): High-Speed
Anomaly Detection via Cache Lookups | null | null | null | null | cs.DB cs.LG stat.CO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anomaly detection is one of the frequent and important subroutines deployed
in large-scale data processing systems. Despite being a well-studied topic,
existing techniques for unsupervised anomaly detection require storing
significant amounts of data, which is prohibitive from a memory and latency
perspective. In the big-data world, existing methods fail to address the new
set of memory and latency constraints. In this paper, we propose the ACE
(Arrays of (locality-sensitive) Count Estimators) algorithm, which can be 60x
faster than the ELKI package~\cite{DBLP:conf/ssd/AchtertBKSZ09}, which has the
fastest implementation of the unsupervised anomaly detection algorithms. The
ACE algorithm requires less than $4MB$ of memory to dynamically compress the
full data information into a set of count arrays. These tiny $4MB$ arrays of
counts are sufficient for unsupervised anomaly detection. At the core of the
ACE algorithm there is a novel statistical estimator derived from the sampling
view of Locality Sensitive Hashing (LSH). This view is significantly different
from, and more efficient than, the widely popular view of LSH for
near-neighbor search. We show the superiority of the ACE algorithm over 11
popular baselines on 3
benchmark datasets, including the KDD-Cup99 data which is the largest available
benchmark comprising of more than half a million entries with ground truth
anomaly labels.
| [
{
"version": "v1",
"created": "Tue, 20 Jun 2017 21:09:22 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Luo",
"Chen",
""
],
[
"Shrivastava",
"Anshumali",
""
]
] | TITLE: Arrays of (locality-sensitive) Count Estimators (ACE): High-Speed
Anomaly Detection via Cache Lookups
ABSTRACT: Anomaly detection is one of the frequent and important subroutines deployed
in large-scale data processing systems. Despite being a well-studied topic,
existing techniques for unsupervised anomaly detection require storing
significant amounts of data, which is prohibitive from a memory and latency
perspective. In the big-data world, existing methods fail to address the new
set of memory and latency constraints. In this paper, we propose the ACE
(Arrays of (locality-sensitive) Count Estimators) algorithm, which can be 60x
faster than the ELKI package~\cite{DBLP:conf/ssd/AchtertBKSZ09}, which has the
fastest implementation of the unsupervised anomaly detection algorithms. The
ACE algorithm requires less than $4MB$ of memory to dynamically compress the
full data information into a set of count arrays. These tiny $4MB$ arrays of
counts are sufficient for unsupervised anomaly detection. At the core of the
ACE algorithm there is a novel statistical estimator derived from the sampling
view of Locality Sensitive Hashing (LSH). This view is significantly different
from, and more efficient than, the widely popular view of LSH for
near-neighbor search. We show the superiority of the ACE algorithm over 11
popular baselines on 3
benchmark datasets, including the KDD-Cup99 data which is the largest available
benchmark comprising of more than half a million entries with ground truth
anomaly labels.
| no_new_dataset | 0.945751 |
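The following is a hedged reconstruction of the ACE idea as described above: maintain several arrays of counts indexed by locality-sensitive hashes, so that points in frequently-seen regions accumulate high counts and anomalies score low. The signed-random-projection hash and all sizes are assumptions, not the paper's exact design.

```python
# Hedged sketch of count arrays indexed by locality-sensitive hashes.
import numpy as np

rng = np.random.default_rng(1)
DIM, ARRAYS, BITS = 16, 10, 12          # data dim, #count arrays, bits per hash
planes = rng.normal(size=(ARRAYS, BITS, DIM))   # signed random projections
counts = np.zeros((ARRAYS, 2 ** BITS), dtype=np.int64)

def lsh_bucket(x):
    """One bucket index per array from the signs of random projections."""
    bits = (planes @ x > 0).astype(np.int64)         # shape (ARRAYS, BITS)
    return bits @ (1 << np.arange(BITS))             # pack bits into integers

def insert(x):
    counts[np.arange(ARRAYS), lsh_bucket(x)] += 1

def score(x):
    """Average count over the arrays; low values indicate anomalies."""
    return counts[np.arange(ARRAYS), lsh_bucket(x)].mean()

data = rng.normal(size=(5000, DIM))      # "normal" data cluster
for x in data:
    insert(x)
print("typical point score:", score(rng.normal(size=DIM)))
print("outlier score      :", score(rng.normal(size=DIM) + 8.0))
```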
1706.06718 | Sean McMahon Mr | Sean McMahon, Niko S\"underhauf, Ben Upcroft, and Michael Milford | Multi-Modal Trip Hazard Affordance Detection On Construction Sites | 9 Pages, 12 Figures, 2 Tables, Accepted to Robotics and Automation
Letters (RA-L) | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trip hazards are a significant contributor to accidents on construction and
manufacturing sites, where over a third of Australian workplace injuries occur
[1]. Current safety inspections are labour intensive and limited by human
fallibility, making automation of trip hazard detection appealing from both a
safety and economic perspective. Trip hazards present an interesting challenge
to modern learning techniques because they are defined as much by affordance as
by object type; for example wires on a table are not a trip hazard, but can be
if lying on the ground. To address these challenges, we conduct a comprehensive
investigation into the performance characteristics of 11 different colour and
depth fusion approaches, including 4 fusion and one non-fusion approach, using
colour and two types of depth images. Trained and tested on over 600 labelled
trip hazards over 4 floors and 2000m$\mathrm{^{2}}$ in an active construction
site, this approach was able to differentiate between identical objects in
different physical configurations (see Figure 1). Outperforming a colour-only
detector, our multi-modal trip detector fuses colour and depth information to
achieve a 4% absolute improvement in F1-score. These investigative results and
the extensive publicly available dataset move us one step closer to assistive
or fully automated safety inspection systems on construction sites.
| [
{
"version": "v1",
"created": "Wed, 21 Jun 2017 01:58:18 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"McMahon",
"Sean",
""
],
[
"Sünderhauf",
"Niko",
""
],
[
"Upcroft",
"Ben",
""
],
[
"Milford",
"Michael",
""
]
] | TITLE: Multi-Modal Trip Hazard Affordance Detection On Construction Sites
ABSTRACT: Trip hazards are a significant contributor to accidents on construction and
manufacturing sites, where over a third of Australian workplace injuries occur
[1]. Current safety inspections are labour intensive and limited by human
fallibility, making automation of trip hazard detection appealing from both a
safety and economic perspective. Trip hazards present an interesting challenge
to modern learning techniques because they are defined as much by affordance as
by object type; for example wires on a table are not a trip hazard, but can be
if lying on the ground. To address these challenges, we conduct a comprehensive
investigation into the performance characteristics of 11 different colour and
depth fusion approaches, including 4 fusion and one non-fusion approach, using
colour and two types of depth images. Trained and tested on over 600 labelled
trip hazards over 4 floors and 2000m$\mathrm{^{2}}$ in an active construction
site, this approach was able to differentiate between identical objects in
different physical configurations (see Figure 1). Outperforming a colour-only
detector, our multi-modal trip detector fuses colour and depth information to
achieve a 4% absolute improvement in F1-score. These investigative results and
the extensive publicly available dataset move us one step closer to assistive
or fully automated safety inspection systems on construction sites.
| new_dataset | 0.760695 |
1706.06792 | Yujia Chen | Yujia Chen and Ce Li | GM-Net: Learning Features with More Efficiency | 6 Pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep Convolutional Neural Networks (CNNs) are capable of learning
unprecedentedly effective features from images. Some researchers have struggled
to enhance the parameters' efficiency using grouped convolution. However, the
relation between the optimal number of convolutional groups and the recognition
performance remains an open problem. In this paper, we propose a series of
Basic Units (BUs) and a two-level merging strategy to construct deep CNNs,
referred to as a joint Grouped Merging Net (GM-Net), which can produce joint
grouped and reused deep features while maintaining the feature discriminability
for classification tasks. Our GM-Net architectures with the proposed BU_A
(dense connection) and BU_B (straight mapping) lead to significant reduction in
the number of network parameters and obtain performance improvement in image
classification tasks. Extensive experiments are conducted to validate the
superior performance of GM-Net over the state-of-the-art methods on benchmark
datasets, e.g., MNIST, CIFAR-10, CIFAR-100 and SVHN.
| [
{
"version": "v1",
"created": "Wed, 21 Jun 2017 08:45:15 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Chen",
"Yujia",
""
],
[
"Li",
"Ce",
""
]
] | TITLE: GM-Net: Learning Features with More Efficiency
ABSTRACT: Deep Convolutional Neural Networks (CNNs) are capable of learning
unprecedentedly effective features from images. Some researchers have struggled
to enhance the parameters' efficiency using grouped convolution. However, the
relation between the optimal number of convolutional groups and the recognition
performance remains an open problem. In this paper, we propose a series of
Basic Units (BUs) and a two-level merging strategy to construct deep CNNs,
referred to as a joint Grouped Merging Net (GM-Net), which can produce joint
grouped and reused deep features while maintaining the feature discriminability
for classification tasks. Our GM-Net architectures with the proposed BU_A
(dense connection) and BU_B (straight mapping) lead to significant reduction in
the number of network parameters and obtain performance improvement in image
classification tasks. Extensive experiments are conducted to validate the
superior performance of GM-Net over the state-of-the-art methods on benchmark
datasets, e.g., MNIST, CIFAR-10, CIFAR-100 and SVHN.
| no_new_dataset | 0.949809 |
1706.06810 | Jongpil Lee | Jongpil Lee, Juhan Nam | Multi-Level and Multi-Scale Feature Aggregation Using Sample-level Deep
Convolutional Neural Networks for Music Classification | ICML Music Discovery Workshop 2017 | null | null | null | cs.SD cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Music tag words that describe music audio by text have different levels of
abstraction. Taking this issue into account, we propose a music classification
approach that aggregates multi-level and multi-scale features using pre-trained
feature extractors. In particular, the feature extractors are trained in
sample-level deep convolutional neural networks using raw waveforms. We show
that this approach achieves state-of-the-art results on several music
classification datasets.
| [
{
"version": "v1",
"created": "Wed, 21 Jun 2017 09:57:24 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Lee",
"Jongpil",
""
],
[
"Nam",
"Juhan",
""
]
] | TITLE: Multi-Level and Multi-Scale Feature Aggregation Using Sample-level Deep
Convolutional Neural Networks for Music Classification
ABSTRACT: Music tag words that describe music audio by text have different levels of
abstraction. Taking this issue into account, we propose a music classification
approach that aggregates multi-level and multi-scale features using pre-trained
feature extractors. In particular, the feature extractors are trained in
sample-level deep convolutional neural networks using raw waveforms. We show
that this approach achieves state-of-the-art results on several music
classification datasets.
| no_new_dataset | 0.946498 |
1706.06917 | Milad Niknejad | Milad Niknejad, Jose M. Bioucas-Dias, Mario A. T. Figueiredo | Class-specific image denoising using importance sampling | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new image denoising method, tailored to specific
classes of images, assuming that a dataset of clean images of the same class is
available. Similarly to the non-local means (NLM) algorithm, the proposed
method computes a weighted average of non-local patches, which we interpret
under the importance sampling framework. This viewpoint introduces flexibility
regarding the adopted priors, the noise statistics, and the computation of
Bayesian estimates. The importance sampling viewpoint is exploited to
approximate the minimum mean squared error (MMSE) patch estimates, using the
true underlying prior on image patches. The estimates thus obtained converge to
the true MMSE estimates, as the number of samples approaches infinity.
Experimental results provide evidence that the proposed denoiser outperforms
the state-of-the-art in the specific classes of face and text images.
| [
{
"version": "v1",
"created": "Wed, 21 Jun 2017 14:11:29 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Niknejad",
"Milad",
""
],
[
"Bioucas-Dias",
"Jose M.",
""
],
[
"Figueiredo",
"Mario A. T.",
""
]
] | TITLE: Class-specific image denoising using importance sampling
ABSTRACT: In this paper, we propose a new image denoising method, tailored to specific
classes of images, assuming that a dataset of clean images of the same class is
available. Similarly to the non-local means (NLM) algorithm, the proposed
method computes a weighted average of non-local patches, which we interpret
under the importance sampling framework. This viewpoint introduces flexibility
regarding the adopted priors, the noise statistics, and the computation of
Bayesian estimates. The importance sampling viewpoint is exploited to
approximate the minimum mean squared error (MMSE) patch estimates, using the
true underlying prior on image patches. The estimates thus obtained converge to
the true MMSE estimates, as the number of samples approaches infinity.
Experimental results provide evidence that the proposed denoiser outperforms
the state-of-the-art in the specific classes of face and text images.
| no_new_dataset | 0.946794 |
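The self-normalized importance-sampling estimate described above, an NLM-style weighted average of external patches under a Gaussian noise likelihood, can be sketched as follows; the uniform toy patches and noise level are assumptions.

```python
# Hedged sketch of the importance-sampling view of non-local means: the MMSE
# estimate of a clean patch is approximated by a self-normalized weighted
# average of external dataset patches, weighted by the Gaussian noise
# likelihood p(noisy | clean).
import numpy as np

def mmse_patch_estimate(noisy_patch, clean_patches, sigma):
    diffs = clean_patches - noisy_patch            # shape (N, patch_dim)
    log_w = -np.sum(diffs ** 2, axis=1) / (2.0 * sigma ** 2)
    log_w -= log_w.max()                           # stabilize the exponentials
    w = np.exp(log_w)
    w /= w.sum()                                   # self-normalized weights
    return w @ clean_patches                       # weighted-average estimate

rng = np.random.default_rng(2)
clean_patches = rng.uniform(0, 1, size=(10000, 64))   # external class dataset
truth = clean_patches[0]
noisy = truth + rng.normal(0, 0.1, size=64)
est = mmse_patch_estimate(noisy, clean_patches, sigma=0.1)
print("noisy    MSE:", np.mean((noisy - truth) ** 2))
print("denoised MSE:", np.mean((est - truth) ** 2))
```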
1706.06918 | Kiyoharu Aizawa Dr. Prof. | Paulina Hensman and Kiyoharu Aizawa | cGAN-based Manga Colorization Using a Single Training Image | 8 pages, 13 figures | null | null | null | cs.GR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Japanese comic format known as Manga is popular all over the world. It is
traditionally produced in black and white, and colorization is time consuming
and costly. Automatic colorization methods generally rely on greyscale values,
which are not present in manga. Furthermore, due to copyright protection,
colorized manga available for training is scarce. We propose a manga
colorization method based on conditional Generative Adversarial Networks
(cGAN). Unlike previous cGAN approaches that use many hundreds or thousands of
training images, our method requires only a single colorized reference image
for training, avoiding the need for a large dataset. Colorizing manga using
cGANs can produce blurry results with artifacts, and the resolution is limited.
We therefore also propose a method of segmentation and color-correction to
mitigate these issues. The final results are sharp, clear, and in high
resolution, and stay true to the character's original color scheme.
| [
{
"version": "v1",
"created": "Wed, 21 Jun 2017 14:11:32 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Hensman",
"Paulina",
""
],
[
"Aizawa",
"Kiyoharu",
""
]
] | TITLE: cGAN-based Manga Colorization Using a Single Training Image
ABSTRACT: The Japanese comic format known as Manga is popular all over the world. It is
traditionally produced in black and white, and colorization is time consuming
and costly. Automatic colorization methods generally rely on greyscale values,
which are not present in manga. Furthermore, due to copyright protection,
colorized manga available for training is scarce. We propose a manga
colorization method based on conditional Generative Adversarial Networks
(cGAN). Unlike previous cGAN approaches that use many hundreds or thousands of
training images, our method requires only a single colorized reference image
for training, avoiding the need for a large dataset. Colorizing manga using
cGANs can produce blurry results with artifacts, and the resolution is limited.
We therefore also propose a method of segmentation and color-correction to
mitigate these issues. The final results are sharp, clear, and in high
resolution, and stay true to the character's original color scheme.
| no_new_dataset | 0.953319 |
1706.06936 | Kushagra Singhal | Kushagra Singhal, Daniel Cullina, Negar Kiyavash | Significance of Side Information in the Graph Matching Problem | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Percolation based graph matching algorithms rely on the availability of seed
vertex pairs as side information to efficiently match users across networks.
Although such algorithms work well in practice, there are other types of side
information available which are potentially useful to an attacker. In this
paper, we consider the problem of matching two correlated graphs when an
attacker has access to side information, either in the form of community labels
or an imperfect initial matching. In the former case, we propose a naive graph
matching algorithm by introducing the community degree vectors which harness
the information from community labels in an efficient manner. Furthermore, we
analyze a variant of the basic percolation algorithm proposed in literature for
graphs with community structure. In the latter case, we propose a novel
percolation algorithm with two thresholds which uses an imperfect matching as
input to match correlated graphs.
We evaluate the proposed algorithms on synthetic as well as real world
datasets using various experiments. The experimental results demonstrate the
importance of communities as side information, especially when the number of
seeds is small and the networks are weakly correlated.
| [
{
"version": "v1",
"created": "Wed, 21 Jun 2017 14:42:19 GMT"
}
] | 2017-06-22T00:00:00 | [
[
"Singhal",
"Kushagra",
""
],
[
"Cullina",
"Daniel",
""
],
[
"Kiyavash",
"Negar",
""
]
] | TITLE: Significance of Side Information in the Graph Matching Problem
ABSTRACT: Percolation based graph matching algorithms rely on the availability of seed
vertex pairs as side information to efficiently match users across networks.
Although such algorithms work well in practice, there are other types of side
information available which are potentially useful to an attacker. In this
paper, we consider the problem of matching two correlated graphs when an
attacker has access to side information, either in the form of community labels
or an imperfect initial matching. In the former case, we propose a naive graph
matching algorithm by introducing the community degree vectors which harness
the information from community labels in an efficient manner. Furthermore, we
analyze a variant of the basic percolation algorithm proposed in literature for
graphs with community structure. In the latter case, we propose a novel
percolation algorithm with two thresholds which uses an imperfect matching as
input to match correlated graphs.
We evaluate the proposed algorithms on synthetic as well as real world
datasets using various experiments. The experimental results demonstrate the
importance of communities as side information, especially when the number of
seeds is small and the networks are weakly correlated.
| no_new_dataset | 0.95297 |
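The community degree vectors introduced above lend themselves to a compact illustration: describe each node by its edge counts into every community and align the two graphs by nearest-neighbour search. The graph model and greedy matching rule below are assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of matching two correlated graphs via community degree vectors.
import numpy as np

def subsample(base, p, rng):
    """Keep each undirected edge of `base` independently with probability p."""
    keep = np.triu(rng.random(base.shape) < p, 1)
    return base & (keep | keep.T)

def community_degree_vectors(adj, communities, k):
    """Row i holds node i's number of edges into communities 0..k-1."""
    return np.stack([adj[:, communities == c].sum(axis=1)
                     for c in range(k)], axis=1)

rng = np.random.default_rng(3)
n, k = 200, 5
communities = rng.integers(0, k, size=n)
base = np.triu(rng.random((n, n)) < 0.1, 1)
base = base | base.T                                   # undirected base graph
g1, g2 = subsample(base, 0.9, rng), subsample(base, 0.9, rng)

v1 = community_degree_vectors(g1.astype(int), communities, k)
v2 = community_degree_vectors(g2.astype(int), communities, k)
dists = np.linalg.norm(v1[:, None, :] - v2[None, :, :], axis=2)
match = dists.argmin(axis=1)                           # greedy nearest neighbour
print("fraction of nodes matched correctly:", (match == np.arange(n)).mean())
```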
1505.06125 | David Mascharka | David Mascharka and Eric Manley | Machine Learning for Indoor Localization Using Mobile Phone-Based
Sensors | 6 pages, 4 figures | null | 10.1109/CCNC.2016.7444919 | null | cs.LG cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we investigate the problem of localizing a mobile device based
on readings from its embedded sensors utilizing machine learning methodologies.
We consider a real-world environment, collect a large dataset of 3110
datapoints, and examine the performance of a substantial number of machine
learning algorithms in localizing a mobile device. We have found algorithms
that give a mean error as low as 0.76 meters, outperforming other indoor
localization systems reported in the literature. We also propose a hybrid
instance-based approach that results in a speed increase by a factor of ten
with no loss of accuracy in a live deployment over standard instance-based
methods, allowing for fast and accurate localization. Further, we determine how
smaller, less densely collected datasets affect the accuracy of localization,
which is important for use in real-world environments. Finally, we demonstrate that
these approaches are appropriate for real-world deployment by evaluating their
performance in an online, in-motion experiment.
| [
{
"version": "v1",
"created": "Fri, 22 May 2015 15:39:52 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Mascharka",
"David",
""
],
[
"Manley",
"Eric",
""
]
] | TITLE: Machine Learning for Indoor Localization Using Mobile Phone-Based
Sensors
ABSTRACT: In this paper we investigate the problem of localizing a mobile device based
on readings from its embedded sensors utilizing machine learning methodologies.
We consider a real-world environment, collect a large dataset of 3110
datapoints, and examine the performance of a substantial number of machine
learning algorithms in localizing a mobile device. We have found algorithms
that give a mean error as low as 0.76 meters, outperforming other indoor
localization systems reported in the literature. We also propose a hybrid
instance-based approach that results in a speed increase by a factor of ten
with no loss of accuracy in a live deployment over standard instance-based
methods, allowing for fast and accurate localization. Further, we determine how
smaller, less densely collected datasets affect the accuracy of localization,
which is important for use in real-world environments. Finally, we demonstrate that
these approaches are appropriate for real-world deployment by evaluating their
performance in an online, in-motion experiment.
| no_new_dataset | 0.943138 |
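An instance-based baseline of the kind evaluated above can be sketched with scikit-learn: k-NN regression from sensor features to (x, y) position, scored by mean Euclidean error. The synthetic features below stand in for the phone-sensor dataset and are an assumption.

```python
# Hedged sketch of an instance-based (k-NN) indoor localization baseline.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
positions = rng.uniform(0, 30, size=(3110, 2))       # metres, toy floor plan
# Fake "sensor" features: noisy nonlinear functions of the true position.
feats = np.hstack([np.sin(positions / 3.0), positions ** 0.5,
                   positions + rng.normal(0, 0.3, positions.shape)])

X_tr, X_te, y_tr, y_te = train_test_split(feats, positions,
                                          test_size=0.2, random_state=0)
knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
pred = knn.predict(X_te)
err = np.linalg.norm(pred - y_te, axis=1)            # per-point error in metres
print(f"mean localization error: {err.mean():.2f} m")
```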
1511.06251 | Qianxiao Li | Qianxiao Li, Cheng Tai, Weinan E | Stochastic modified equations and adaptive stochastic gradient
algorithms | Major changes including a proof of the weak approximation, asymptotic
expansions and application-oriented adaptive algorithms | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop the method of stochastic modified equations (SME), in which
stochastic gradient algorithms are approximated in the weak sense by
continuous-time stochastic differential equations. We exploit the continuous
formulation together with optimal control theory to derive novel adaptive
hyper-parameter adjustment policies. Our algorithms have competitive
performance with the added benefit of being robust to varying models and
datasets. This provides a general methodology for the analysis and design of
stochastic gradient algorithms.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 16:49:33 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Nov 2015 19:58:15 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2017 13:56:33 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Li",
"Qianxiao",
""
],
[
"Tai",
"Cheng",
""
],
[
"E",
"Weinan",
""
]
] | TITLE: Stochastic modified equations and adaptive stochastic gradient
algorithms
ABSTRACT: We develop the method of stochastic modified equations (SME), in which
stochastic gradient algorithms are approximated in the weak sense by
continuous-time stochastic differential equations. We exploit the continuous
formulation together with optimal control theory to derive novel adaptive
hyper-parameter adjustment policies. Our algorithms have competitive
performance with the added benefit of being robust to varying models and
datasets. This provides a general methodology for the analysis and design of
stochastic gradient algorithms.
| no_new_dataset | 0.948822 |
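The SME correspondence above can be checked numerically on a 1-D quadratic: SGD with learning rate eta is weakly approximated by the SDE dX = -f'(X) dt + sqrt(eta) * sigma dW. The toy objective and noise model below are assumptions for illustration only.

```python
# Hedged sketch: compare the law of SGD iterates with an Euler-Maruyama
# simulation of the modified SDE on f(x) = x^2 / 2.
import numpy as np

rng = np.random.default_rng(5)
eta, T, runs, sigma = 0.05, 10.0, 20000, 0.5
fprime = lambda x: x

# SGD with learning rate eta; one unit of SDE time per 1/eta iterations.
x = np.full(runs, 2.0)
for _ in range(int(T / eta)):
    x -= eta * (fprime(x) + sigma * rng.normal(size=runs))

# Euler-Maruyama on dX = -f'(X) dt + sqrt(eta) * sigma dW with a finer step.
dt = eta / 10.0
y = np.full(runs, 2.0)
for _ in range(int(T / dt)):
    y += -fprime(y) * dt + np.sqrt(eta) * sigma * np.sqrt(dt) * rng.normal(size=runs)

print("SGD mean/var:", x.mean().round(4), x.var().round(4))
print("SDE mean/var:", y.mean().round(4), y.var().round(4))  # nearly coincide
```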
1609.05284 | Po-Sen Huang | Yelong Shen, Po-Sen Huang, Jianfeng Gao, Weizhu Chen | ReasoNet: Learning to Stop Reading in Machine Comprehension | in KDD 2017 | null | 10.1145/3097983.3098177 | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Teaching a computer to read and answer general questions pertaining to a
document is a challenging yet unsolved problem. In this paper, we describe a
novel neural network architecture called the Reasoning Network (ReasoNet) for
machine comprehension tasks. ReasoNets make use of multiple turns to
effectively exploit and then reason over the relation among queries, documents,
and answers. Different from previous approaches using a fixed number of turns
during inference, ReasoNets introduce a termination state to relax this
constraint on the reasoning depth. With the use of reinforcement learning,
ReasoNets can dynamically determine whether to continue the comprehension
process after digesting intermediate results, or to terminate reading when it
concludes that existing information is adequate to produce an answer. ReasoNets
have achieved exceptional performance in machine comprehension datasets,
including unstructured CNN and Daily Mail datasets, the Stanford SQuAD dataset,
and a structured Graph Reachability dataset.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2016 05:12:50 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Jun 2017 06:29:36 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2017 01:12:07 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Shen",
"Yelong",
""
],
[
"Huang",
"Po-Sen",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Chen",
"Weizhu",
""
]
] | TITLE: ReasoNet: Learning to Stop Reading in Machine Comprehension
ABSTRACT: Teaching a computer to read and answer general questions pertaining to a
document is a challenging yet unsolved problem. In this paper, we describe a
novel neural network architecture called the Reasoning Network (ReasoNet) for
machine comprehension tasks. ReasoNets make use of multiple turns to
effectively exploit and then reason over the relation among queries, documents,
and answers. Different from previous approaches using a fixed number of turns
during inference, ReasoNets introduce a termination state to relax this
constraint on the reasoning depth. With the use of reinforcement learning,
ReasoNets can dynamically determine whether to continue the comprehension
process after digesting intermediate results, or to terminate reading when it
concludes that existing information is adequate to produce an answer. ReasoNets
have achieved exceptional performance in machine comprehension datasets,
including unstructured CNN and Daily Mail datasets, the Stanford SQuAD dataset,
and a structured Graph Reachability dataset.
| no_new_dataset | 0.924552 |
1612.04022 | Sulin Liu | Sulin Liu, Sinno Jialin Pan, Qirong Ho | Distributed Multi-Task Relationship Learning | To appear in KDD 2017 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-task learning aims to learn multiple tasks jointly by exploiting their
relatedness to improve the generalization performance for each task.
Traditionally, to perform multi-task learning, one needs to centralize data
from all the tasks to a single machine. However, in many real-world
applications, data of different tasks may be geo-distributed over different
local machines. Due to heavy communication caused by transmitting the data and
the issue of data privacy and security, it is impossible to send the data of
different tasks to a master machine to perform multi-task learning. Therefore,
in this paper, we propose a distributed multi-task learning framework that
simultaneously learns predictive models for each task as well as the
relationships between tasks, in an alternating manner, in the parameter server paradigm. In
our framework, we first offer a general dual form for a family of regularized
multi-task relationship learning methods. Subsequently, we propose a
communication-efficient primal-dual distributed optimization algorithm to solve
the dual problem by carefully designing local subproblems to make the dual
problem decomposable. Moreover, we provide a theoretical convergence analysis
for the proposed algorithm, which is specific for distributed multi-task
relationship learning. We conduct extensive experiments on both synthetic and
real-world datasets to evaluate our proposed framework in terms of
effectiveness and convergence.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 04:22:10 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Feb 2017 14:09:19 GMT"
},
{
"version": "v3",
"created": "Tue, 20 Jun 2017 12:00:03 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Liu",
"Sulin",
""
],
[
"Pan",
"Sinno Jialin",
""
],
[
"Ho",
"Qirong",
""
]
] | TITLE: Distributed Multi-Task Relationship Learning
ABSTRACT: Multi-task learning aims to learn multiple tasks jointly by exploiting their
relatedness to improve the generalization performance for each task.
Traditionally, to perform multi-task learning, one needs to centralize data
from all the tasks to a single machine. However, in many real-world
applications, data of different tasks may be geo-distributed over different
local machines. Due to heavy communication caused by transmitting the data and
the issue of data privacy and security, it is impossible to send the data of
different tasks to a master machine to perform multi-task learning. Therefore,
in this paper, we propose a distributed multi-task learning framework that
simultaneously learns predictive models for each task as well as the
relationships between tasks, in an alternating manner, in the parameter server paradigm. In
our framework, we first offer a general dual form for a family of regularized
multi-task relationship learning methods. Subsequently, we propose a
communication-efficient primal-dual distributed optimization algorithm to solve
the dual problem by carefully designing local subproblems to make the dual
problem decomposable. Moreover, we provide a theoretical convergence analysis
for the proposed algorithm, which is specific for distributed multi-task
relationship learning. We conduct extensive experiments on both synthetic and
real-world datasets to evaluate our proposed framework in terms of
effectiveness and convergence.
| no_new_dataset | 0.940463 |
1612.06357 | Jonas Haslbeck | Jonas M B Haslbeck and Eiko I Fried | How Predictable are Symptoms in Psychopathological Networks? A
Reanalysis of 18 Published Datasets | 24 pages, 1 table, 4 figures | null | null | null | q-bio.NC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background Network analyses on psychopathological data focus on the network
structure and its derivatives such as node centrality. One conclusion one can
draw from centrality measures is that the node with the highest centrality is
likely to be the node that is determined most by its neighboring nodes.
However, centrality is a relative measure: knowing that a node is highly
central gives no information about the extent to which it is determined by its
neighbors. Here we provide an absolute measure of determination (or
controllability) of a node - its predictability. We introduce predictability,
estimate the predictability of all nodes in 18 prior empirical network papers
on psychopathology, and statistically relate it to centrality.
Methods We carried out a literature review and collected 25 datasets from 18
published papers in the field (several mood and anxiety disorders, substance
abuse, psychosis, autism, and transdiagnostic data). We fit state-of-the-art
network models to all datasets, and computed the predictability of all nodes.
Results Predictability was unrelated to sample size, moderately high in most
symptom networks, and differed considerably both within and between datasets.
Predictability was higher in community samples than in clinical samples, highest for mood
and anxiety disorders, and lowest for psychosis.
Conclusions Predictability is an important additional characterization of
symptom networks because it gives an absolute measure of the controllability of
each node. It allows conclusions about how self-determined a symptom network
is, and may help to inform intervention strategies. Limitations of
predictability along with future directions are discussed.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 23:05:24 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2017 08:32:15 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Haslbeck",
"Jonas M B",
""
],
[
"Fried",
"Eiko I",
""
]
] | TITLE: How Predictable are Symptoms in Psychopathological Networks? A
Reanalysis of 18 Published Datasets
ABSTRACT: Background Network analyses on psychopathological data focus on the network
structure and its derivatives such as node centrality. One conclusion one can
draw from centrality measures is that the node with the highest centrality is
likely to be the node that is determined most by its neighboring nodes.
However, centrality is a relative measure: knowing that a node is highly
central gives no information about the extent to which it is determined by its
neighbors. Here we provide an absolute measure of determination (or
controllability) of a node - its predictability. We introduce predictability,
estimate the predictability of all nodes in 18 prior empirical network papers
on psychopathology, and statistically relate it to centrality.
Methods We carried out a literature review and collected 25 datasets from 18
published papers in the field (several mood and anxiety disorders, substance
abuse, psychosis, autism, and transdiagnostic data). We fit state-of-the-art
network models to all datasets, and computed the predictability of all nodes.
Results Predictability was unrelated to sample size, moderately high in most
symptom networks, and differed considerably both within and between datasets.
Predictability was higher in community samples than in clinical samples, highest for mood
and anxiety disorders, and lowest for psychosis.
Conclusions Predictability is an important additional characterization of
symptom networks because it gives an absolute measure of the controllability of
each node. It allows conclusions about how self-determined a symptom network
is, and may help to inform intervention strategies. Limitations of
predictability along with future directions are discussed.
| no_new_dataset | 0.949809 |
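Node predictability as defined above is, in essence, the explained variance of each node given the others. The sketch below substitutes plain cross-validated linear regression for the paper's mixed graphical models, which is a simplifying assumption.

```python
# Hedged sketch of node "predictability": regress each variable on all other
# variables and report the cross-validated R^2, an absolute measure of how
# well a node is determined by the rest of the network.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n, p = 500, 6
X = rng.normal(size=(n, p))
# Make node 0 well determined by nodes 1 and 2.
X[:, 0] = 0.8 * X[:, 1] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n)

for j in range(p):
    others = np.delete(X, j, axis=1)
    r2 = cross_val_score(LinearRegression(), others, X[:, j],
                         scoring="r2", cv=5).mean()
    print(f"node {j}: predictability (R^2) = {r2: .2f}")
```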
1703.02083 | Seyed Sadegh Mohseni Salehi | Seyed Sadegh Mohseni Salehi, Deniz Erdogmus, and Ali Gholipour | Auto-context Convolutional Neural Network (Auto-Net) for Brain
Extraction in Magnetic Resonance Imaging | This manuscript has been submitted to TMI | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Brain extraction or whole brain segmentation is an important first step in
many of the neuroimage analysis pipelines. The accuracy and robustness of brain
extraction, therefore, is crucial for the accuracy of the entire brain analysis
process. With the aim of designing a learning-based, geometry-independent and
registration-free brain extraction tool in this study, we present a technique
based on an auto-context convolutional neural network (CNN), in which intrinsic
local and global image features are learned through 2D patches of different
window sizes. In this architecture three parallel 2D convolutional pathways for
three different directions (axial, coronal, and sagittal) implicitly learn 3D
image information without the need for computationally expensive 3D
convolutions. Posterior probability maps generated by the network are used
iteratively as context information along with the original image patches to
learn the local shape and connectedness of the brain, to extract it from
non-brain tissue.
The brain extraction results we have obtained from our algorithm are superior
to the recently reported results in the literature on two publicly available
benchmark datasets, namely LPBA40 and OASIS, in which we obtained Dice overlap
coefficients of 97.42% and 95.40%, respectively. Furthermore, we evaluated the
performance of our algorithm in the challenging problem of extracting
arbitrarily-oriented fetal brains in reconstructed fetal brain magnetic
resonance imaging (MRI) datasets. In this application our algorithm performed
much better than the other methods (Dice coefficient: 95.98%), where the other
methods performed poorly due to the non-standard orientation and geometry of
the fetal brain in MRI. Our CNN-based method can provide accurate,
geometry-independent brain extraction in challenging applications.
| [
{
"version": "v1",
"created": "Mon, 6 Mar 2017 19:50:20 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2017 20:31:43 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Salehi",
"Seyed Sadegh Mohseni",
""
],
[
"Erdogmus",
"Deniz",
""
],
[
"Gholipour",
"Ali",
""
]
] | TITLE: Auto-context Convolutional Neural Network (Auto-Net) for Brain
Extraction in Magnetic Resonance Imaging
ABSTRACT: Brain extraction or whole brain segmentation is an important first step in
many of the neuroimage analysis pipelines. The accuracy and robustness of brain
extraction, therefore, is crucial for the accuracy of the entire brain analysis
process. With the aim of designing a learning-based, geometry-independent and
registration-free brain extraction tool in this study, we present a technique
based on an auto-context convolutional neural network (CNN), in which intrinsic
local and global image features are learned through 2D patches of different
window sizes. In this architecture three parallel 2D convolutional pathways for
three different directions (axial, coronal, and sagittal) implicitly learn 3D
image information without the need for computationally expensive 3D
convolutions. Posterior probability maps generated by the network are used
iteratively as context information along with the original image patches to
learn the local shape and connectedness of the brain, to extract it from
non-brain tissue.
The brain extraction results we have obtained from our algorithm are superior
to the recently reported results in the literature on two publicly available
benchmark datasets, namely LPBA40 and OASIS, in which we obtained Dice overlap
coefficients of 97.42% and 95.40%, respectively. Furthermore, we evaluated the
performance of our algorithm in the challenging problem of extracting
arbitrarily-oriented fetal brains in reconstructed fetal brain magnetic
resonance imaging (MRI) datasets. In this application our algorithm performed
much better than the other methods (Dice coefficient: 95.98%), where the other
methods performed poorly due to the non-standard orientation and geometry of
the fetal brain in MRI. Our CNN-based method can provide accurate,
geometry-independent brain extraction in challenging applications.
| no_new_dataset | 0.95594 |
1703.05175 | Jake Snell | Jake Snell, Kevin Swersky, Richard S. Zemel | Prototypical Networks for Few-shot Learning | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset.
| [
{
"version": "v1",
"created": "Wed, 15 Mar 2017 14:31:55 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2017 22:48:54 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Snell",
"Jake",
""
],
[
"Swersky",
"Kevin",
""
],
[
"Zemel",
"Richard S.",
""
]
] | TITLE: Prototypical Networks for Few-shot Learning
ABSTRACT: We propose prototypical networks for the problem of few-shot classification,
where a classifier must generalize to new classes not seen in the training set,
given only a small number of examples of each new class. Prototypical networks
learn a metric space in which classification can be performed by computing
distances to prototype representations of each class. Compared to recent
approaches for few-shot learning, they reflect a simpler inductive bias that is
beneficial in this limited-data regime, and achieve excellent results. We
provide an analysis showing that some simple design decisions can yield
substantial improvements over recent approaches involving complicated
architectural choices and meta-learning. We further extend prototypical
networks to zero-shot learning and achieve state-of-the-art results on the
CU-Birds dataset.
| no_new_dataset | 0.948058 |
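The classification rule described in the record above reduces to a few lines. A minimal sketch, assuming PyTorch and placeholder embeddings (the episodic training loop and the embedding network are omitted):

```python
import torch

def prototypes(support_emb, support_labels, n_classes):
    # support_emb: (N, D) embeddings; support_labels: (N,) ints in [0, n_classes)
    return torch.stack([support_emb[support_labels == c].mean(0)
                        for c in range(n_classes)])           # (C, D)

def classify(query_emb, protos):
    d2 = torch.cdist(query_emb, protos) ** 2                  # squared Euclidean
    return (-d2).log_softmax(dim=1)                           # log p(y = c | x)

emb = torch.randn(10, 64)                                     # placeholder embeddings
labels = torch.arange(5).repeat(2)                            # each class appears twice
log_probs = classify(torch.randn(3, 64), prototypes(emb, labels, 5))
```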
1703.07823 | Mehrdad Farajtabar | Mehrdad Farajtabar, Jiachen Yang, Xiaojing Ye, Huan Xu, Rakshit
Trivedi, Elias Khalil, Shuang Li, Le Song, Hongyuan Zha | Fake News Mitigation via Point Process Based Intervention | Point Process, Hawkes Process, Social Networks, Intervention and
Control, Reinforcement Learning, ICML 2017 | null | null | null | cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the first multistage intervention framework that tackles fake news
in social networks by combining reinforcement learning with a point process
network activity model. The spread of fake news and mitigation events within
the network is modeled by a multivariate Hawkes process with additional
exogenous control terms. By choosing a feature representation of states,
defining mitigation actions and constructing reward functions to measure the
effectiveness of mitigation activities, we map the problem of fake news
mitigation into the reinforcement learning framework. We develop a policy
iteration method unique to the multivariate networked point process, with the
goal of optimizing the actions for maximal total reward under budget
constraints. Our method shows promising performance in real-time intervention
experiments on a Twitter network to mitigate a surrogate fake news campaign,
and outperforms alternatives on synthetic datasets.
| [
{
"version": "v1",
"created": "Wed, 22 Mar 2017 19:09:12 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2017 20:59:29 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Farajtabar",
"Mehrdad",
""
],
[
"Yang",
"Jiachen",
""
],
[
"Ye",
"Xiaojing",
""
],
[
"Xu",
"Huan",
""
],
[
"Trivedi",
"Rakshit",
""
],
[
"Khalil",
"Elias",
""
],
[
"Li",
"Shuang",
""
],
[
"Song",
"Le",
""
],
[
"Zha",
"Hongyuan",
""
]
] | TITLE: Fake News Mitigation via Point Process Based Intervention
ABSTRACT: We propose the first multistage intervention framework that tackles fake news
in social networks by combining reinforcement learning with a point process
network activity model. The spread of fake news and mitigation events within
the network is modeled by a multivariate Hawkes process with additional
exogenous control terms. By choosing a feature representation of states,
defining mitigation actions and constructing reward functions to measure the
effectiveness of mitigation activities, we map the problem of fake news
mitigation into the reinforcement learning framework. We develop a policy
iteration method unique to the multivariate networked point process, with the
goal of optimizing the actions for maximal total reward under budget
constraints. Our method shows promising performance in real-time intervention
experiments on a Twitter network to mitigate a surrogate fake news campaign,
and outperforms alternatives on synthetic datasets.
| no_new_dataset | 0.944485 |
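A small illustration of the model family above: the intensity of a multivariate Hawkes process with an exponential kernel and an additive exogenous term standing in for the mitigation incentives. All parameter values below are assumptions.

```python
import numpy as np

def intensity(t, events, mu, alpha, omega, control=None):
    """events: list of (time, node) pairs with time < t.
    mu: (d,) base rates; alpha: (d, d) influence matrix; omega: decay rate.
    control: optional (d,) exogenous mitigation incentives."""
    lam = mu.copy()
    for s, j in events:                       # excitation from past events
        lam += alpha[:, j] * omega * np.exp(-omega * (t - s))
    if control is not None:
        lam += control                        # intervention adds intensity
    return lam

mu = np.array([0.1, 0.2]); alpha = np.array([[0.3, 0.1], [0.0, 0.4]])
lam = intensity(2.0, [(0.5, 0), (1.2, 1)], mu, alpha, omega=1.0,
                control=np.array([0.0, 0.5]))
```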
1706.03199 | Olivier Teytaud | Olivier Bousquet, Sylvain Gelly, Karol Kurach, Marc Schoenauer,
Michele Sebag, Olivier Teytaud, Damien Vincent | Toward Optimal Run Racing: Application to Deep Learning Calibration | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper aims at one-shot learning of deep neural nets, where a highly
parallel setting is considered to address the algorithm calibration problem -
selecting the best neural architecture and learning hyper-parameter values
depending on the dataset at hand. The notoriously expensive calibration problem
is optimally reduced by detecting and early stopping non-optimal runs. The
theoretical contribution regards the optimality guarantees within the multiple
hypothesis testing framework. Experiments on the Cifar10, PTB and Wiki
benchmarks demonstrate the relevance of the approach with a principled and
consistent improvement on the state of the art with no extra hyper-parameter.
| [
{
"version": "v1",
"created": "Sat, 10 Jun 2017 07:55:38 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2017 11:38:25 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Bousquet",
"Olivier",
""
],
[
"Gelly",
"Sylvain",
""
],
[
"Kurach",
"Karol",
""
],
[
"Schoenauer",
"Marc",
""
],
[
"Sebag",
"Michele",
""
],
[
"Teytaud",
"Olivier",
""
],
[
"Vincent",
"Damien",
""
]
] | TITLE: Toward Optimal Run Racing: Application to Deep Learning Calibration
ABSTRACT: This paper aims at one-shot learning of deep neural nets, where a highly
parallel setting is considered to address the algorithm calibration problem -
selecting the best neural architecture and learning hyper-parameter values
depending on the dataset at hand. The notoriously expensive calibration problem
is optimally reduced by detecting and early stopping non-optimal runs. The
theoretical contribution regards the optimality guarantees within the multiple
hypothesis testing framework. Experiments on the Cifar10, PTB and Wiki
benchmarks demonstrate the relevance of the approach with a principled and
consistent improvement on the state of the art with no extra hyper-parameter.
| no_new_dataset | 0.94256 |
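The run-racing idea above can be caricatured in a few lines. The sketch below keeps only the runs whose interim validation score is not significantly below the current best; the fixed z-style threshold is an illustrative stand-in for the paper's multiple-hypothesis-testing machinery.

```python
import numpy as np

def race(scores, sigma, delta=2.0):
    """scores: interim validation scores per run; sigma: score noise std.
    Returns a boolean mask of runs allowed to continue training."""
    best = scores.max()
    return scores >= best - delta * sigma   # drop runs provably worse

alive = race(np.array([0.71, 0.69, 0.52, 0.70]), sigma=0.02)
# -> [True, True, False, True]: the third run is early-stopped
```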
1706.03428 | Joeran Beel | Joeran Beel, Zeljko Carevic, Johann Schaible, Gabor Neusch | RARD: The Related-Article Recommendation Dataset | null | D-Lib Magazine, Vol. 23, No. 7/8. Publication date: July 2017 | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender-system datasets are used for recommender-system evaluations,
training machine-learning algorithms, and exploring user behavior. While there
are many datasets for recommender systems in the domains of movies, books, and
music, there are rather few datasets from research-paper recommender systems.
In this paper, we introduce RARD, the Related-Article Recommendation Dataset,
from the digital library Sowiport and the recommendation-as-a-service provider
Mr. DLib. The dataset contains information about 57.4 million recommendations
that were displayed to the users of Sowiport. Information includes details on
which recommendation approaches were used (e.g. content-based filtering,
stereotype, most popular), what types of features were used in content based
filtering (simple terms vs. keyphrases), where the features were extracted from
(title or abstract), and the time when recommendations were delivered and
clicked. In addition, the dataset contains an implicit item-item rating matrix
that was created based on the recommendation click logs. RARD enables
researchers to train machine learning algorithms for research-paper
recommendations, perform offline evaluations, and do research on data from Mr.
DLib's recommender system, without implementing a recommender system
themselves. In the field of scientific recommender systems, our dataset is
unique. To the best of our knowledge, no other dataset offers more (implicit)
ratings or as many variations of recommendation algorithms. The
dataset is available at http://data.mr-dlib.org, and published under the
Creative Commons Attribution 3.0 Unported (CC-BY) license.
| [
{
"version": "v1",
"created": "Mon, 12 Jun 2017 01:00:25 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jun 2017 06:47:33 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Beel",
"Joeran",
""
],
[
"Carevic",
"Zeljko",
""
],
[
"Schaible",
"Johann",
""
],
[
"Neusch",
"Gabor",
""
]
] | TITLE: RARD: The Related-Article Recommendation Dataset
ABSTRACT: Recommender-system datasets are used for recommender-system evaluations,
training machine-learning algorithms, and exploring user behavior. While there
are many datasets for recommender systems in the domains of movies, books, and
music, there are rather few datasets from research-paper recommender systems.
In this paper, we introduce RARD, the Related-Article Recommendation Dataset,
from the digital library Sowiport and the recommendation-as-a-service provider
Mr. DLib. The dataset contains information about 57.4 million recommendations
that were displayed to the users of Sowiport. Information includes details on
which recommendation approaches were used (e.g. content-based filtering,
stereotype, most popular), what types of features were used in content based
filtering (simple terms vs. keyphrases), where the features were extracted from
(title or abstract), and the time when recommendations were delivered and
clicked. In addition, the dataset contains an implicit item-item rating matrix
that was created based on the recommendation click logs. RARD enables
researchers to train machine learning algorithms for research-paper
recommendations, perform offline evaluations, and do research on data from Mr.
DLib's recommender system, without implementing a recommender system
themselves. In the field of scientific recommender systems, our dataset is
unique. To the best of our knowledge, no other dataset offers more (implicit)
ratings or as many variations of recommendation algorithms. The
dataset is available at http://data.mr-dlib.org, and published under the
Creative Commons Attribution 3.0 Unported (CC-BY) license.
| new_dataset | 0.934215 |
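As a hypothetical illustration of how such click logs could be turned into an implicit item-item rating matrix (the column names below are assumptions, not RARD's actual schema):

```python
import pandas as pd

logs = pd.DataFrame({
    "source_doc": ["a", "a", "b"],        # document the user was reading
    "recommended_doc": ["b", "c", "c"],   # related article that was shown
    "clicked": [1, 0, 1],
})
# implicit item-item ratings: click counts per (source, recommended) pair
ratings = logs.pivot_table(index="source_doc", columns="recommended_doc",
                           values="clicked", aggfunc="sum", fill_value=0)
```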
1706.06160 | Arjun Bhardwaj | Arjun Bhardwaj, Alexander Rudnicky | User Intent Classification using Memory Networks: A Comparative Analysis
for a Limited Data Scenario | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this report, we provide a comparative analysis of different techniques for
user intent classification towards the task of app recommendation. We analyse
the performance of different models and architectures for multi-label
classification over a dataset with a relatively large number of classes and
only a handful of examples of each class. We focus, in particular, on memory network
architectures, and compare how well the different versions perform under the
task constraints. Since the classifier is meant to serve as a module in a
practical dialog system, it needs to be able to work with limited training data
and incorporate new data on the fly. We devise a 1-shot learning task to test
the models under the above constraint. We conclude that relatively simple
versions of memory networks perform better than other approaches, although for
tasks with very limited data, simple non-parametric methods perform comparably
without needing the extra training data.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 20:12:07 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Bhardwaj",
"Arjun",
""
],
[
"Rudnicky",
"Alexander",
""
]
] | TITLE: User Intent Classification using Memory Networks: A Comparative Analysis
for a Limited Data Scenario
ABSTRACT: In this report, we provide a comparative analysis of different techniques for
user intent classification towards the task of app recommendation. We analyse
the performance of different models and architectures for multi-label
classification over a dataset with a relatively large number of classes and
only a handful of examples of each class. We focus, in particular, on memory network
architectures, and compare how well the different versions perform under the
task constraints. Since the classifier is meant to serve as a module in a
practical dialog system, it needs to be able to work with limited training data
and incorporate new data on the fly. We devise a 1-shot learning task to test
the models under the above constraint. We conclude that relatively simple
versions of memory networks perform better than other approaches, although for
tasks with very limited data, simple non-parametric methods perform comparably
without needing the extra training data.
| no_new_dataset | 0.949201 |
1706.06176 | Nicholas Firth | Nicholas C. Firth, Emma Harding, Mary Pat Sullivan, Sebastian J.
Crutch, Daniel C. Alexander | ESCAPE - Echo SCraper and ClAssifier of PErsons: A novel tool to
facilitate using voice-controlled devices for research | 10 pages, 3 figures, currently in submission | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Smart devices have become common place in many homes, and these devices can
be utilized to provide support for people with mental or physical deficits.
Voice-controlled assistants are a class of smart device that collect a large
amount of data in the home. In this work we present Echo SCraper and ClAssifier
of Persons (ESCAPE), an open source software for the extraction of Amazon Echo
interaction data, and speaker recognition on that data. We show that ESCAPE is
able to extract data from a voice-controlled assistant and accurately classify
who is talking, based on a small amount of labeled audio data. Using
ESCAPE to extract interactions recorded over 3 months in the first author's
home yields a rich dataset of transcribed audio recordings. Our results
demonstrate that using this software the Amazon Echo can be used to study
participants in a naturalistic setting with minimal intrusion. We also discuss
the potential for usage of voice-controlled devices together with ESCAPE to
understand how diseases affect individuals, and how these data can be used to
monitor disease processes in general.
| [
{
"version": "v1",
"created": "Fri, 16 Jun 2017 10:39:07 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Firth",
"Nicholas C.",
""
],
[
"Harding",
"Emma",
""
],
[
"Sullivan",
"Mary Pat",
""
],
[
"Crutch",
"Sebastian J.",
""
],
[
"Alexander",
"Daniel C.",
""
]
] | TITLE: ESCAPE - Echo SCraper and ClAssifier of PErsons: A novel tool to
facilitate using voice-controlled devices for research
ABSTRACT: Smart devices have become commonplace in many homes, and these devices can
be utilized to provide support for people with mental or physical deficits.
Voice-controlled assistants are a class of smart device that collect a large
amount of data in the home. In this work we present Echo SCraper and ClAssifier
of Persons (ESCAPE), an open source software for the extraction of Amazon Echo
interaction data, and speaker recognition on that data. We show that ESCAPE is
able to extract data from a voice-controlled assistant and accurately classify
who is talking, based on a small amount of labeled audio data. Using
ESCAPE to extract interactions recorded over 3 months in the first author's
home yields a rich dataset of transcribed audio recordings. Our results
demonstrate that using this software the Amazon Echo can be used to study
participants in a naturalistic setting with minimal intrusion. We also discuss
the potential for usage of voice-controlled devices together with ESCAPE to
understand how diseases affect individuals, and how these data can be used to
monitor disease processes in general.
| new_dataset | 0.962708 |
1706.06177 | Efsun Kayi | Efsun Sarioglu Kayi, Kabir Yadav, James M. Chamberlain, Hyeong-Ah Choi | Topic Modeling for Classification of Clinical Reports | 18 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electronic health records (EHRs) contain important clinical information about
patients. Efficient and effective use of this information could supplement or
even replace manual chart review as a means of studying and improving the
quality and safety of healthcare delivery. However, some of these clinical data
are in the form of free text and require pre-processing before use in automated
systems. A common free text data source is radiology reports, typically
dictated by radiologists to explain their interpretations. We sought to
demonstrate machine learning classification of computed tomography (CT) imaging
reports into binary outcomes, i.e. positive and negative for fracture, using
regular text classification and classifiers based on topic modeling. Topic
modeling provides interpretable themes (topic distributions) in reports, a
representation that is more compact than the commonly used bag-of-words
representation and can be processed faster than raw text in subsequent
automated processes. We demonstrate new classifiers based on this topic
modeling representation of the reports. Aggregate topic classifier (ATC) and
confidence-based topic classifier (CTC) use a single topic that is determined
from the training dataset based on different measures to classify the reports
on the test dataset. Alternatively, similarity-based topic classifier (STC)
measures the similarity between the reports' topic distributions to determine
the predicted class. Our proposed topic modeling-based classifier systems are
shown to be competitive with existing text classification techniques and
provide an efficient and interpretable representation.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 21:04:22 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Kayi",
"Efsun Sarioglu",
""
],
[
"Yadav",
"Kabir",
""
],
[
"Chamberlain",
"James M.",
""
],
[
"Choi",
"Hyeong-Ah",
""
]
] | TITLE: Topic Modeling for Classification of Clinical Reports
ABSTRACT: Electronic health records (EHRs) contain important clinical information about
patients. Efficient and effective use of this information could supplement or
even replace manual chart review as a means of studying and improving the
quality and safety of healthcare delivery. However, some of these clinical data
are in the form of free text and require pre-processing before use in automated
systems. A common free text data source is radiology reports, typically
dictated by radiologists to explain their interpretations. We sought to
demonstrate machine learning classification of computed tomography (CT) imaging
reports into binary outcomes, i.e. positive and negative for fracture, using
regular text classification and classifiers based on topic modeling. Topic
modeling provides interpretable themes (topic distributions) in reports, a
representation that is more compact than the commonly used bag-of-words
representation and can be processed faster than raw text in subsequent
automated processes. We demonstrate new classifiers based on this topic
modeling representation of the reports. Aggregate topic classifier (ATC) and
confidence-based topic classifier (CTC) use a single topic that is determined
from the training dataset based on different measures to classify the reports
on the test dataset. Alternatively, similarity-based topic classifier (STC)
measures the similarity between the reports' topic distributions to determine
the predicted class. Our proposed topic modeling-based classifier systems are
shown to be competitive with existing text classification techniques and
provide an efficient and interpretable representation.
| no_new_dataset | 0.951504 |
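A minimal sketch of the similarity-based topic classifier (STC) idea from the record above, assuming scikit-learn, toy reports, and cosine similarity between LDA topic distributions (the paper's exact similarity measure may differ):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

train = ["acute fracture of the distal radius", "no acute fracture seen",
         "comminuted fracture with displacement", "normal study no fracture"]
y = np.array([1, 0, 1, 0])                     # 1 = positive for fracture
test = ["nondisplaced fracture of the ulna"]

vec = CountVectorizer()
X = vec.fit_transform(train)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
T_train = lda.transform(X)                     # per-report topic distributions
T_test = lda.transform(vec.transform(test))    # single test report

# cosine similarity between topic distributions; predict nearest report's label
sims = (T_test @ T_train.T) / (np.linalg.norm(T_test) *
                               np.linalg.norm(T_train, axis=1))
pred = y[sims.argmax()]
```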
1706.06195 | Ivo Gon\c{c}alves | Ivo Gon\c{c}alves, Sara Silva, Carlos M. Fonseca, Mauro Castelli | Unsure When to Stop? Ask Your Semantic Neighbors | null | null | 10.1145/3071178.3071328 | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In iterative supervised learning algorithms it is common to reach a point in
the search where no further induction seems to be possible with the available
data. If the search is continued beyond this point, the risk of overfitting
increases significantly. Following the recent developments in inductive
semantic stochastic methods, this paper studies the feasibility of using
information gathered from the semantic neighborhood to decide when to stop the
search. Two semantic stopping criteria are proposed and experimentally assessed
in Geometric Semantic Genetic Programming (GSGP) and in the Semantic Learning
Machine (SLM) algorithm (the equivalent algorithm for neural networks). The
experiments are performed on real-world high-dimensional regression datasets.
The results show that the proposed semantic stopping criteria are able to
detect stopping points that result in a competitive generalization for both
GSGP and SLM. This approach also yields computationally efficient algorithms as
it allows the evolution of neural networks in less than 3 seconds on average,
and of GP trees in at most 10 seconds. The usage of the proposed semantic
stopping criteria in conjunction with the computation of optimal
mutation/learning steps also results in small trees and neural networks.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 22:29:08 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Gonçalves",
"Ivo",
""
],
[
"Silva",
"Sara",
""
],
[
"Fonseca",
"Carlos M.",
""
],
[
"Castelli",
"Mauro",
""
]
] | TITLE: Unsure When to Stop? Ask Your Semantic Neighbors
ABSTRACT: In iterative supervised learning algorithms it is common to reach a point in
the search where no further induction seems to be possible with the available
data. If the search is continued beyond this point, the risk of overfitting
increases significantly. Following the recent developments in inductive
semantic stochastic methods, this paper studies the feasibility of using
information gathered from the semantic neighborhood to decide when to stop the
search. Two semantic stopping criteria are proposed and experimentally assessed
in Geometric Semantic Genetic Programming (GSGP) and in the Semantic Learning
Machine (SLM) algorithm (the equivalent algorithm for neural networks). The
experiments are performed on real-world high-dimensional regression datasets.
The results show that the proposed semantic stopping criteria are able to
detect stopping points that result in a competitive generalization for both
GSGP and SLM. This approach also yields computationally efficient algorithms as
it allows the evolution of neural networks in less than 3 seconds on average,
and of GP trees in at most 10 seconds. The usage of the proposed semantic
stopping criteria in conjunction with the computation of optimal
mutation/learning steps also results in small trees and neural networks.
| no_new_dataset | 0.952042 |
1706.06239 | Hao Wang | Hao Wang, Yanmei Fu, Qinyong Wang, Hongzhi Yin, Changying Du, Hui
Xiong | A Location-Sentiment-Aware Recommender System for Both Home-Town and
Out-of-Town Users | Accepted by KDD 2017 | null | null | null | cs.SI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial item recommendation has become an important means to help people
discover interesting locations, especially when people pay a visit to
unfamiliar regions. Some current research focuses on modelling
individual and collective geographical preferences for spatial item
recommendation based on users' check-in records, but they fail to explore the
phenomenon of user interest drift across geographical regions, i.e., users
would show different interests when they travel to different regions. Besides,
they ignore the influence of public comments on subsequent users' check-in
behaviors. Specifically, it is intuitive that users would refuse to check in to
a spatial item whose historical reviews seem negative overall, even though it
might fit their interests. Therefore, it is necessary to recommend the right
item to the right user at the right location. In this paper, we propose a
latent probabilistic generative model called LSARS to mimic the decision-making
process of users' check-in activities both in home-town and out-of-town
scenarios by adapting to user interest drift and crowd sentiments, which can
learn location-aware and sentiment-aware individual interests from the contents
of spatial items and user reviews. Due to the sparsity of user activities in
out-of-town regions, LSARS is further designed to incorporate the public
preferences learned from local users' check-in behaviors. Finally, we deploy
LSARS into two practical application scenes: spatial item recommendation and
target user discovery. Extensive experiments on two large-scale location-based
social networks (LBSNs) datasets show that LSARS achieves better performance
than existing state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 20 Jun 2017 01:54:01 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Wang",
"Hao",
""
],
[
"Fu",
"Yanmei",
""
],
[
"Wang",
"Qinyong",
""
],
[
"Yin",
"Hongzhi",
""
],
[
"Du",
"Changying",
""
],
[
"Xiong",
"Hui",
""
]
] | TITLE: A Location-Sentiment-Aware Recommender System for Both Home-Town and
Out-of-Town Users
ABSTRACT: Spatial item recommendation has become an important means to help people
discover interesting locations, especially when people pay a visit to
unfamiliar regions. Some current research focuses on modelling
individual and collective geographical preferences for spatial item
recommendation based on users' check-in records, but they fail to explore the
phenomenon of user interest drift across geographical regions, i.e., users
would show different interests when they travel to different regions. Besides,
they ignore the influence of public comments on subsequent users' check-in
behaviors. Specifically, it is intuitive that users would refuse to check in to
a spatial item whose historical reviews seem negative overall, even though it
might fit their interests. Therefore, it is necessary to recommend the right
item to the right user at the right location. In this paper, we propose a
latent probabilistic generative model called LSARS to mimic the decision-making
process of users' check-in activities both in home-town and out-of-town
scenarios by adapting to user interest drift and crowd sentiments, which can
learn location-aware and sentiment-aware individual interests from the contents
of spatial items and user reviews. Due to the sparsity of user activities in
out-of-town regions, LSARS is further designed to incorporate the public
preferences learned from local users' check-in behaviors. Finally, we deploy
LSARS into two practical application scenes: spatial item recommendation and
target user discovery. Extensive experiments on two large-scale location-based
social networks (LBSNs) datasets show that LSARS achieves better performance
than existing state-of-the-art methods.
| no_new_dataset | 0.949295 |
1706.06314 | Qiang Liu | Qiang Liu, Feng Yu, Shu Wu, Liang Wang | Mining Significant Microblogs for Misinformation Identification: An
Attention-based Approach | null | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid growth of social media, massive misinformation is also
spreading widely on social media, such as microblog, and bring negative effects
to human life. Nowadays, automatic misinformation identification has drawn
attention from academic and industrial communities. For an event on social
media usually consists of multiple microblogs, current methods are mainly based
on global statistical features. However, information on social media is full of
noisy and outliers, which should be alleviated. Moreover, most of microblogs
about an event have little contribution to the identification of
misinformation, where useful information can be easily overwhelmed by useless
information. Thus, it is important to mine significant microblogs for a
reliable misinformation identification method. In this paper, we propose an
Attention-based approach for Identification of Misinformation (AIM). Based on
the attention mechanism, AIM can select the microblogs with the largest
attention values for misinformation identification. The attention mechanism in
AIM contains two parts: content attention and dynamic attention. Content
attention is calculated based on the textual features of each microblog.
Dynamic attention is
related to the time interval between the posting time of a microblog and the
beginning of the event. To evaluate AIM, we conduct a series of experiments on
the Weibo dataset and the Twitter dataset, and the experimental results show
that the proposed AIM model outperforms the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 20 Jun 2017 08:36:56 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Liu",
"Qiang",
""
],
[
"Yu",
"Feng",
""
],
[
"Wu",
"Shu",
""
],
[
"Wang",
"Liang",
""
]
] | TITLE: Mining Significant Microblogs for Misinformation Identification: An
Attention-based Approach
ABSTRACT: With the rapid growth of social media, massive misinformation is also
spreading widely on platforms such as microblogs, bringing negative effects
to human life. Nowadays, automatic misinformation identification has drawn
attention from academic and industrial communities. Since an event on social
media usually consists of multiple microblogs, current methods are mainly based
on global statistical features. However, information on social media is full of
noise and outliers, which should be alleviated. Moreover, most of the
microblogs about an event contribute little to the identification of
misinformation, where useful information can be easily overwhelmed by useless
information. Thus, it is important to mine significant microblogs for a
reliable misinformation identification method. In this paper, we propose an
Attention-based approach for Identification of Misinformation (AIM). Based on
the attention mechanism, AIM can select the microblogs with the largest
attention values for misinformation identification. The attention mechanism in
AIM contains two parts: content attention and dynamic attention. Content
attention is calculated based on the textual features of each microblog.
Dynamic attention is
related to the time interval between the posting time of a microblog and the
beginning of the event. To evaluate AIM, we conduct a series of experiments on
the Weibo dataset and the Twitter dataset, and the experimental results show
that the proposed AIM model outperforms the state-of-the-art methods.
| no_new_dataset | 0.946843 |
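A minimal sketch of the attention-weighted aggregation the record above describes, combining a content score with a time-decay ("dynamic") score; the scoring functions and parameters here are illustrative assumptions, not the AIM model itself.

```python
import torch

def event_representation(posts, post_times, event_start, w, tau=24.0):
    """posts: (n, d) microblog feature vectors; post_times: (n,) hours."""
    content = posts @ w                               # content attention logits
    dynamic = -(post_times - event_start) / tau       # earlier posts score higher
    attn = torch.softmax(content + dynamic, dim=0)    # (n,) attention weights
    return attn @ posts                               # attention-weighted mean

posts = torch.randn(5, 16)
rep = event_representation(posts, torch.tensor([0., 1., 3., 10., 30.]), 0.0,
                           w=torch.randn(16))
```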
1706.06415 | Yang Liu | Jiacheng Zhang, Yanzhuo Ding, Shiqi Shen, Yong Cheng, Maosong Sun,
Huanbo Luan, Yang Liu | THUMT: An Open Source Toolkit for Neural Machine Translation | 4 pages, 1 figure | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces THUMT, an open-source toolkit for neural machine
translation (NMT) developed by the Natural Language Processing Group at
Tsinghua University. THUMT implements the standard attention-based
encoder-decoder framework on top of Theano and supports three training
criteria: maximum likelihood estimation, minimum risk training, and
semi-supervised training. It features a visualization tool for displaying the
relevance between hidden states in neural networks and contextual words, which
helps to analyze the internal workings of NMT. Experiments on Chinese-English
datasets show that THUMT using minimum risk training significantly outperforms
GroundHog, a state-of-the-art toolkit for NMT.
| [
{
"version": "v1",
"created": "Tue, 20 Jun 2017 13:29:16 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Zhang",
"Jiacheng",
""
],
[
"Ding",
"Yanzhuo",
""
],
[
"Shen",
"Shiqi",
""
],
[
"Cheng",
"Yong",
""
],
[
"Sun",
"Maosong",
""
],
[
"Luan",
"Huanbo",
""
],
[
"Liu",
"Yang",
""
]
] | TITLE: THUMT: An Open Source Toolkit for Neural Machine Translation
ABSTRACT: This paper introduces THUMT, an open-source toolkit for neural machine
translation (NMT) developed by the Natural Language Processing Group at
Tsinghua University. THUMT implements the standard attention-based
encoder-decoder framework on top of Theano and supports three training
criteria: maximum likelihood estimation, minimum risk training, and
semi-supervised training. It features a visualization tool for displaying the
relevance between hidden states in neural networks and contextual words, which
helps to analyze the internal workings of NMT. Experiments on Chinese-English
datasets show that THUMT using minimum risk training significantly outperforms
GroundHog, a state-of-the-art toolkit for NMT.
| no_new_dataset | 0.950411 |
1706.06419 | Hussam Qassim Mr. | Hussam Qassim, David Feinzimer, and Abhishek Verma | The Compressed Model of Residual CNDS | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks have achieved a great success in the recent
years. Although, the way to maximize the performance of the convolutional
neural networks still in the beginning. Furthermore, the optimization of the
size and the time that need to train the convolutional neural networks is very
far away from reaching the researcher's ambition. In this paper, we proposed a
new convolutional neural network that combined several techniques to boost the
optimization of the convolutional neural network in the aspects of speed and
size. As we used our previous model Residual-CNDS (ResCNDS), which solved the
problems of slower convergence, overfitting, and degradation, and compressed
it. The outcome model called Residual-Squeeze-CNDS (ResSquCNDS), which we
demonstrated on our sold technique to add residual learning and our model of
compressing the convolutional neural networks. Our model of compressing adapted
from the SQUEEZENET model, but our model is more generalizable, which can be
applied almost to any neural network model, and fully integrated into the
residual learning, which addresses the problem of the degradation very
successfully. Our proposed model trained on very large-scale MIT
Places365-Standard scene datasets, which backing our hypothesis that the new
compressed model inherited the best of the previous ResCNDS8 model, and almost
get the same accuracy in the validation Top-1 and Top-5 with 87.64% smaller in
size and 13.33% faster in the training time.
| [
{
"version": "v1",
"created": "Thu, 15 Jun 2017 02:17:53 GMT"
}
] | 2017-06-21T00:00:00 | [
[
"Qassim",
"Hussam",
""
],
[
"Feinzimer",
"David",
""
],
[
"Verma",
"Abhishek",
""
]
] | TITLE: The Compressed Model of Residual CNDS
ABSTRACT: Convolutional neural networks have achieved great success in recent
years. However, methods for maximizing the performance of convolutional neural
networks are still in their early stages, and optimizing the size of these
networks and the time needed to train them remains far from the researcher's
ambition. In this paper, we propose a new convolutional neural network that
combines several techniques to improve convolutional neural networks in terms
of speed and size. We take our previous model, Residual-CNDS (ResCNDS), which
solved the problems of slow convergence, overfitting, and degradation, and
compress it. The resulting model, called Residual-Squeeze-CNDS (ResSquCNDS),
combines our technique for adding residual learning with our model for
compressing convolutional neural networks. Our compression model is adapted
from the SqueezeNet model, but is more generalizable: it can be applied to
almost any neural network model and is fully integrated with residual
learning, which addresses the problem of degradation very successfully. Our
proposed model was trained on the very large-scale MIT Places365-Standard
scene dataset, and the results back our hypothesis that the new compressed
model inherits the best of the previous ResCNDS8 model, achieving almost the
same validation Top-1 and Top-5 accuracy while being 87.64% smaller in size
and 13.33% faster to train.
| no_new_dataset | 0.952706 |
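An illustrative SqueezeNet-style "fire" module with an identity shortcut, pairing the compression and residual-learning ideas the record above combines; channel sizes are assumptions, not the ResSquCNDS configuration.

```python
import torch
import torch.nn as nn

class ResidualFire(nn.Module):
    def __init__(self, ch, squeeze):
        super().__init__()
        self.squeeze = nn.Conv2d(ch, squeeze, 1)            # 1x1 squeeze
        self.expand1 = nn.Conv2d(squeeze, ch // 2, 1)       # 1x1 expand
        self.expand3 = nn.Conv2d(squeeze, ch // 2, 3, padding=1)  # 3x3 expand
        self.relu = nn.ReLU()

    def forward(self, x):
        s = self.relu(self.squeeze(x))
        out = torch.cat([self.expand1(s), self.expand3(s)], dim=1)  # back to ch
        return self.relu(out + x)   # identity shortcut (residual learning)

y = ResidualFire(ch=64, squeeze=16)(torch.randn(1, 64, 28, 28))
```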
1606.07373 | Maksim Bolonkin | Du Tran, Maksim Bolonkin, Manohar Paluri, Lorenzo Torresani | VideoMCC: a New Benchmark for Video Comprehension | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While there is overall agreement that future technology for organizing,
browsing and searching videos hinges on the development of methods for
high-level semantic understanding of video, so far no consensus has been
reached on the best way to train and assess models for this task. Casting video
understanding as a form of action or event categorization is problematic as it
is not fully clear what the semantic classes or abstractions in this domain
should be. Language has been exploited to sidestep the problem of defining
video categories, by formulating video understanding as the task of captioning
or description. However, language is highly complex, redundant and sometimes
ambiguous. Many different captions may express the same semantic concept. To
account for this ambiguity, quantitative evaluation of video description
requires sophisticated metrics, whose performance scores are typically hard to
interpret by humans.
This paper provides four contributions to this problem. First, we formulate
Video Multiple Choice Caption (VideoMCC) as a new well-defined task with an
easy-to-interpret performance measure. Second, we describe a general
semi-automatic procedure to create benchmarks for this task. Third, we publicly
release a large-scale video benchmark created with an implementation of this
procedure and we include a human study that assesses human performance on our
dataset. Finally, we propose and test a varied collection of approaches on this
benchmark for the purpose of gaining a better understanding of the new
challenges posed by video comprehension.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2016 16:53:22 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Nov 2016 19:49:57 GMT"
},
{
"version": "v3",
"created": "Fri, 31 Mar 2017 17:50:47 GMT"
},
{
"version": "v4",
"created": "Fri, 14 Apr 2017 17:30:12 GMT"
},
{
"version": "v5",
"created": "Fri, 16 Jun 2017 19:50:46 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Tran",
"Du",
""
],
[
"Bolonkin",
"Maksim",
""
],
[
"Paluri",
"Manohar",
""
],
[
"Torresani",
"Lorenzo",
""
]
] | TITLE: VideoMCC: a New Benchmark for Video Comprehension
ABSTRACT: While there is overall agreement that future technology for organizing,
browsing and searching videos hinges on the development of methods for
high-level semantic understanding of video, so far no consensus has been
reached on the best way to train and assess models for this task. Casting video
understanding as a form of action or event categorization is problematic as it
is not fully clear what the semantic classes or abstractions in this domain
should be. Language has been exploited to sidestep the problem of defining
video categories, by formulating video understanding as the task of captioning
or description. However, language is highly complex, redundant and sometimes
ambiguous. Many different captions may express the same semantic concept. To
account for this ambiguity, quantitative evaluation of video description
requires sophisticated metrics, whose performance scores are typically hard to
interpret by humans.
This paper provides four contributions to this problem. First, we formulate
Video Multiple Choice Caption (VideoMCC) as a new well-defined task with an
easy-to-interpret performance measure. Second, we describe a general
semi-automatic procedure to create benchmarks for this task. Third, we publicly
release a large-scale video benchmark created with an implementation of this
procedure and we include a human study that assesses human performance on our
dataset. Finally, we propose and test a varied collection of approaches on this
benchmark for the purpose of gaining a better understanding of the new
challenges posed by video comprehension.
| new_dataset | 0.912592 |
1608.07019 | Haozhe Xie | Haozhe Xie, Jie Li, Qiaosheng Zhang and Yadong Wang | Comparison among dimensionality reduction techniques based on Random
Projection for cancer classification | null | Computational biology and chemistry, 65: 165-172, 2016 | 10.1016/j.compbiolchem.2016.09.010 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random Projection (RP) technique has been widely applied in many scenarios
because it can reduce high-dimensional features into low-dimensional space
within short time and meet the need of real-time analysis of massive data.
There is an urgent need of dimensionality reduction with fast increase of big
genomics data. However, the performance of RP is usually lower. We attempt to
improve classification accuracy of RP through combining other reduction
dimension methods such as Principle Component Analysis (PCA), Linear
Discriminant Analysis (LDA), and Feature Selection (FS). We compared
classification accuracy and running time of different combination methods on
three microarray datasets and a simulation dataset. Experimental results show a
remarkable improvement of 14.77% in classification accuracy of FS followed by
RP compared to RP on BC-TCGA dataset. LDA followed by RP also helps RP to yield
a more discriminative subspace with an increase of 13.65% on classification
accuracy on the same dataset. FS followed by RP outperforms other combination
methods in classification accuracy on most of the datasets.
| [
{
"version": "v1",
"created": "Thu, 25 Aug 2016 05:14:57 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2017 13:56:03 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Feb 2017 02:52:17 GMT"
},
{
"version": "v4",
"created": "Tue, 30 May 2017 01:59:19 GMT"
},
{
"version": "v5",
"created": "Sat, 17 Jun 2017 04:12:57 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Xie",
"Haozhe",
""
],
[
"Li",
"Jie",
""
],
[
"Zhang",
"Qiaosheng",
""
],
[
"Wang",
"Yadong",
""
]
] | TITLE: Comparison among dimensionality reduction techniques based on Random
Projection for cancer classification
ABSTRACT: The Random Projection (RP) technique has been widely applied in many scenarios
because it can reduce high-dimensional features into a low-dimensional space
within a short time and meet the need for real-time analysis of massive data.
With the fast increase of big genomics data, there is an urgent need for
dimensionality reduction. However, the performance of RP alone is usually
lower. We attempt to improve the classification accuracy of RP by combining it
with other dimensionality reduction methods such as Principal Component Analysis (PCA), Linear
Discriminant Analysis (LDA), and Feature Selection (FS). We compared
classification accuracy and running time of different combination methods on
three microarray datasets and a simulation dataset. Experimental results show a
remarkable improvement of 14.77% in classification accuracy of FS followed by
RP compared to RP on BC-TCGA dataset. LDA followed by RP also helps RP to yield
a more discriminative subspace with an increase of 13.65% on classification
accuracy on the same dataset. FS followed by RP outperforms other combination
methods in classification accuracy on most of the datasets.
| no_new_dataset | 0.948106 |
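The compared pipelines in the record above map directly onto scikit-learn. A sketch with placeholder data and component counts (training-set accuracy only; a real comparison would cross-validate):

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.random_projection import GaussianRandomProjection
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=2000, random_state=0)
pipelines = {
    "RP":        make_pipeline(GaussianRandomProjection(n_components=50, random_state=0),
                               LogisticRegression(max_iter=1000)),
    "FS -> RP":  make_pipeline(SelectKBest(f_classif, k=500),
                               GaussianRandomProjection(n_components=50, random_state=0),
                               LogisticRegression(max_iter=1000)),
    "PCA -> RP": make_pipeline(PCA(n_components=100),
                               GaussianRandomProjection(n_components=50, random_state=0),
                               LogisticRegression(max_iter=1000)),
}
scores = {name: p.fit(X, y).score(X, y) for name, p in pipelines.items()}
```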
1610.02237 | Hilde Kuehne | Hilde Kuehne, Alexander Richard, Juergen Gall | Weakly supervised learning of actions from transcripts | 33 pages, 9 figures, to appear in CVIU | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach for weakly supervised learning of human actions from
video transcriptions. Our system is based on the idea that, given a sequence of
input data and a transcript, i.e. a list of the actions in the order they occur
in the video, it is possible to infer the actions within the video stream, and thus,
learn the related action models without the need for any frame-based
annotation. Starting from the transcript information at hand, we split the
given data sequences uniformly based on the number of expected actions. We then
learn action models for each class by maximizing the probability that the
training video sequences are generated by the action models given the sequence
order as defined by the transcripts. The learned model can be used to
temporally segment an unseen video with or without transcript. We evaluate our
approach on four distinct activity datasets, namely Hollywood Extended, MPII
Cooking, Breakfast and CRIM13. We show that our system is able to align the
scripted actions with the video data and that the learned models localize and
classify actions competitively in comparison to models trained with full
supervision, i.e. with frame level annotations, and that they outperform any
current state-of-the-art approach for aligning transcripts with video data.
| [
{
"version": "v1",
"created": "Fri, 7 Oct 2016 12:00:08 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Jun 2017 09:25:13 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Kuehne",
"Hilde",
""
],
[
"Richard",
"Alexander",
""
],
[
"Gall",
"Juergen",
""
]
] | TITLE: Weakly supervised learning of actions from transcripts
ABSTRACT: We present an approach for weakly supervised learning of human actions from
video transcriptions. Our system is based on the idea that, given a sequence of
input data and a transcript, i.e. a list of the actions in the order they occur
in the video, it is possible to infer the actions within the video stream, and thus,
learn the related action models without the need for any frame-based
annotation. Starting from the transcript information at hand, we split the
given data sequences uniformly based on the number of expected actions. We then
learn action models for each class by maximizing the probability that the
training video sequences are generated by the action models given the sequence
order as defined by the transcripts. The learned model can be used to
temporally segment an unseen video with or without transcript. We evaluate our
approach on four distinct activity datasets, namely Hollywood Extended, MPII
Cooking, Breakfast and CRIM13. We show that our system is able to align the
scripted actions with the video data and that the learned models localize and
classify actions competitively in comparison to models trained with full
supervision, i.e. with frame level annotations, and that they outperform any
current state-of-the-art approach for aligning transcripts with video data.
| no_new_dataset | 0.946101 |
1612.02897 | Kareem Abdelfatah | Kareem Abdelfatah, Junshu Bao, Gabriel Terejanu | Environmental Modeling Framework using Stacked Gaussian Processes | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A network of independently trained Gaussian processes (StackedGP) is
introduced to obtain predictions of quantities of interest with quantified
uncertainties. The main applications of the StackedGP framework are to
integrate different datasets through model composition, enhance predictions of
quantities of interest through a cascade of intermediate predictions, and to
propagate uncertainties through emulated dynamical systems driven by uncertain
forcing variables. By using analytical first and second-order moments of a
Gaussian process with uncertain inputs using squared exponential and polynomial
kernels, approximated expectations of quantities of interest that require an
arbitrary composition of functions can be obtained. The StackedGP model is
extended to any number of layers and nodes per layer, and it provides
flexibility in kernel selection for the input nodes. The proposed nonparametric
stacked model is validated using synthetic datasets, and its performance in
model composition and cascading predictions is measured in two applications
using real data.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2016 02:53:45 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Jun 2017 19:21:16 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Abdelfatah",
"Kareem",
""
],
[
"Bao",
"Junshu",
""
],
[
"Terejanu",
"Gabriel",
""
]
] | TITLE: Environmental Modeling Framework using Stacked Gaussian Processes
ABSTRACT: A network of independently trained Gaussian processes (StackedGP) is
introduced to obtain predictions of quantities of interest with quantified
uncertainties. The main applications of the StackedGP framework are to
integrate different datasets through model composition, enhance predictions of
quantities of interest through a cascade of intermediate predictions, and to
propagate uncertainties through emulated dynamical systems driven by uncertain
forcing variables. By using analytical first and second-order moments of a
Gaussian process with uncertain inputs using squared exponential and polynomial
kernels, approximated expectations of quantities of interest that require an
arbitrary composition of functions can be obtained. The StackedGP model is
extended to any number of layers and nodes per layer, and it provides
flexibility in kernel selection for the input nodes. The proposed nonparametric
stacked model is validated using synthetic datasets, and its performance in
model composition and cascading predictions is measured in two applications
using real data.
| no_new_dataset | 0.944228 |
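A sketch of cascading two independently trained Gaussian processes, propagating the first GP's predictive uncertainty into the second by Monte Carlo sampling; the paper instead uses analytical first and second moments, so sampling is only a simple stand-in here, and the toy functions are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

x = np.linspace(0, 1, 30)[:, None]
gp1 = GaussianProcessRegressor().fit(x, np.sin(4 * x).ravel())   # x -> z
gp2 = GaussianProcessRegressor().fit(x, (x ** 2).ravel())        # z -> y (toy)

x_star = np.array([[0.5]])
m, s = gp1.predict(x_star, return_std=True)          # uncertain intermediate z
z_samples = np.random.normal(m, s, size=(500, 1))
y_samples = gp2.predict(z_samples)                   # push samples through gp2
y_mean, y_std = y_samples.mean(), y_samples.std()    # propagated uncertainty
```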
1702.08139 | Zichao Yang | Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, Taylor Berg-Kirkpatrick | Improved Variational Autoencoders for Text Modeling using Dilated
Convolutions | camera ready | null | null | null | cs.NE cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work on generative modeling of text has found that variational
auto-encoders (VAE) incorporating LSTM decoders perform worse than simpler LSTM
language models (Bowman et al., 2015). This negative result is so far poorly
understood, but has been attributed to the propensity of LSTM decoders to
ignore conditioning information from the encoder. In this paper, we experiment
with a new type of decoder for VAE: a dilated CNN. By changing the decoder's
dilation architecture, we control the effective context from previously
generated words. In experiments, we find that there is a trade off between the
contextual capacity of the decoder and the amount of encoding information used.
We show that with the right decoder, VAE can outperform LSTM language models.
We demonstrate perplexity gains on two datasets, representing the first
positive experimental result on the use of VAE for generative modeling of text.
Further, we conduct an in-depth investigation of the use of VAE (with our new
decoding architecture) for semi-supervised and unsupervised labeling tasks,
demonstrating gains over several strong baselines.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2017 04:16:01 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Jun 2017 00:31:34 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Yang",
"Zichao",
""
],
[
"Hu",
"Zhiting",
""
],
[
"Salakhutdinov",
"Ruslan",
""
],
[
"Berg-Kirkpatrick",
"Taylor",
""
]
] | TITLE: Improved Variational Autoencoders for Text Modeling using Dilated
Convolutions
ABSTRACT: Recent work on generative modeling of text has found that variational
auto-encoders (VAE) incorporating LSTM decoders perform worse than simpler LSTM
language models (Bowman et al., 2015). This negative result is so far poorly
understood, but has been attributed to the propensity of LSTM decoders to
ignore conditioning information from the encoder. In this paper, we experiment
with a new type of decoder for VAE: a dilated CNN. By changing the decoder's
dilation architecture, we control the effective context from previously
generated words. In experiments, we find that there is a trade off between the
contextual capacity of the decoder and the amount of encoding information used.
We show that with the right decoder, VAE can outperform LSTM language models.
We demonstrate perplexity gains on two datasets, representing the first
positive experimental result on the use of VAE for generative modeling of text.
Further, we conduct an in-depth investigation of the use of VAE (with our new
decoding architecture) for semi-supervised and unsupervised labeling tasks,
demonstrating gains over several strong baselines.
| no_new_dataset | 0.941815 |
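A minimal sketch of a causal dilated 1-D convolutional decoder, assuming PyTorch: stacking exponentially growing dilations controls how much previously generated context each position sees, which is the knob the record above describes. Sizes are illustrative.

```python
import torch
import torch.nn as nn

class DilatedDecoder(nn.Module):
    def __init__(self, ch=64, layers=4, k=3):
        super().__init__()
        self.convs = nn.ModuleList()
        self.pads = []
        for i in range(layers):
            d = 2 ** i                        # dilation 1, 2, 4, 8
            self.pads.append((k - 1) * d)     # left-pad only, for causality
            self.convs.append(nn.Conv1d(ch, ch, k, dilation=d))

    def forward(self, x):                     # x: (N, ch, T)
        for pad, conv in zip(self.pads, self.convs):
            x = torch.relu(conv(nn.functional.pad(x, (pad, 0))))
        return x                              # same length T, causal receptive field

out = DilatedDecoder()(torch.randn(2, 64, 20))   # -> (2, 64, 20)
```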
1705.08722 | Yang Yu | Yang Yu, Wei-Yang Qu, Nan Li, Zimin Guo | Open-Category Classification by Adversarial Sample Generation | Published in IJCAI 2017 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real-world classification tasks, it is difficult to collect training
samples from all possible categories of the environment. Therefore, when an
instance of an unseen class appears in the prediction stage, a robust
classifier should be able to tell that it is from an unseen class, instead of
classifying it to be any known category. In this paper, adopting the idea of
adversarial learning, we propose the ASG framework for open-category
classification. ASG generates positive and negative samples of seen categories
in the unsupervised manner via an adversarial learning strategy. With the
generated samples, ASG then learns to tell seen from unseen in the supervised
manner. Experiments performed on several datasets show the effectiveness of
ASG.
| [
{
"version": "v1",
"created": "Wed, 24 May 2017 12:27:06 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Jun 2017 09:08:34 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Yu",
"Yang",
""
],
[
"Qu",
"Wei-Yang",
""
],
[
"Li",
"Nan",
""
],
[
"Guo",
"Zimin",
""
]
] | TITLE: Open-Category Classification by Adversarial Sample Generation
ABSTRACT: In real-world classification tasks, it is difficult to collect training
samples from all possible categories of the environment. Therefore, when an
instance of an unseen class appears in the prediction stage, a robust
classifier should be able to tell that it is from an unseen class, instead of
classifying it to be any known category. In this paper, adopting the idea of
adversarial learning, we propose the ASG framework for open-category
classification. ASG generates positive and negative samples of seen categories
in an unsupervised manner via an adversarial learning strategy. With the
generated samples, ASG then learns to tell seen from unseen in a supervised
manner. Experiments performed on several datasets show the effectiveness of
ASG.
| no_new_dataset | 0.951504 |
1706.05436 | Wael Halbawi | Wael Halbawi, Navid Azizan-Ruhi, Fariborz Salehi, Babak Hassibi | Improving Distributed Gradient Descent Using Reed-Solomon Codes | null | null | null | null | cs.IT cs.DC math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's massively-sized datasets have made it necessary to often perform
computations on them in a distributed manner. In principle, a computational
task is divided into subtasks which are distributed over a cluster operated by
a taskmaster. One issue faced in practice is the delay incurred due to the
presence of slow machines, known as \emph{stragglers}. Several schemes,
including those based on replication, have been proposed in the literature to
mitigate the effects of stragglers and more recently, those inspired by coding
theory have begun to gain traction. In this work, we consider a distributed
gradient descent setting suitable for a wide class of machine learning
problems. We adapt the framework of Tandon et al. (arXiv:1612.03301) and
present a deterministic scheme that, for a prescribed per-machine computational
effort, recovers the gradient from the least number of machines $f$
theoretically permissible, via an $O(f^2)$ decoding algorithm. We also provide
a theoretical delay model which can be used to minimize the expected waiting
time per computation by optimally choosing the parameters of the scheme.
Finally, we supplement our theoretical findings with numerical results that
demonstrate the efficacy of the method and its advantages over competing
schemes.
| [
{
"version": "v1",
"created": "Fri, 16 Jun 2017 21:45:31 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Halbawi",
"Wael",
""
],
[
"Azizan-Ruhi",
"Navid",
""
],
[
"Salehi",
"Fariborz",
""
],
[
"Hassibi",
"Babak",
""
]
] | TITLE: Improving Distributed Gradient Descent Using Reed-Solomon Codes
ABSTRACT: Today's massively-sized datasets have made it necessary to often perform
computations on them in a distributed manner. In principle, a computational
task is divided into subtasks which are distributed over a cluster operated by
a taskmaster. One issue faced in practice is the delay incurred due to the
presence of slow machines, known as \emph{stragglers}. Several schemes,
including those based on replication, have been proposed in the literature to
mitigate the effects of stragglers and more recently, those inspired by coding
theory have begun to gain traction. In this work, we consider a distributed
gradient descent setting suitable for a wide class of machine learning
problems. We adapt the framework of Tandon et al. (arXiv:1612.03301) and
present a deterministic scheme that, for a prescribed per-machine computational
effort, recovers the gradient from the least number of machines $f$
theoretically permissible, via an $O(f^2)$ decoding algorithm. We also provide
a theoretical delay model which can be used to minimize the expected waiting
time per computation by optimally choosing the parameters of the scheme.
Finally, we supplement our theoretical findings with numerical results that
demonstrate the efficacy of the method and its advantages over competing
schemes.
| no_new_dataset | 0.942401 |
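A toy gradient-coding example in the spirit of the framework above: three workers each compute two of three partition gradients, encoded so the full gradient is recoverable from any two workers (one straggler tolerated). The code matrix follows the cyclic-repetition construction from the gradient-coding literature; the paper's Reed-Solomon scheme generalizes this.

```python
import numpy as np

g = np.array([[1., 2.], [3., 4.], [5., 6.]])   # per-partition gradients
B = np.array([[1.0, 0.5, 0.0],    # worker 0 computes partitions {0, 1}
              [0.0, 1.0, 2.0],    # worker 1 computes partitions {1, 2}
              [-1.0, 0.0, 1.0]])  # worker 2 computes partitions {0, 2}
sends = B @ g                     # coded message each worker transmits

alive = [0, 2]                    # worker 1 straggles
# decode: find a with a @ B[alive] = [1, 1, 1], so a @ sends[alive] = sum of g
a = np.linalg.lstsq(B[alive].T, np.ones(3), rcond=None)[0]   # -> [2, 1]
full_grad = a @ sends[alive]      # equals g.sum(axis=0) = [9, 12]
```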
1706.05549 | Andrey Ignatov | Liliya Akhtyamova, Andrey Ignatov, John Cardiff | A Large-Scale CNN Ensemble for Medication Safety Analysis | null | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Revealing Adverse Drug Reactions (ADR) is an essential part of post-marketing
drug surveillance, and data from health-related forums and medical communities
can be of a great significance for estimating such effects. In this paper, we
propose an end-to-end CNN-based method for predicting drug safety on user
comments from healthcare discussion forums. We present an architecture that is
based on a vast ensemble of CNNs with varied structural parameters, where the
prediction is determined by the majority vote. To evaluate the performance of
the proposed solution, we present a large-scale dataset collected from a
medical website that consists of over 50 thousand reviews for more than 4000
drugs. The results demonstrate that our model significantly outperforms
conventional approaches and predicts medication safety with an accuracy of
87.17% for the binary and 62.88% for the multi-class classification task.
| [
{
"version": "v1",
"created": "Sat, 17 Jun 2017 15:06:58 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Akhtyamova",
"Liliya",
""
],
[
"Ignatov",
"Andrey",
""
],
[
"Cardiff",
"John",
""
]
] | TITLE: A Large-Scale CNN Ensemble for Medication Safety Analysis
ABSTRACT: Revealing Adverse Drug Reactions (ADR) is an essential part of post-marketing
drug surveillance, and data from health-related forums and medical communities
can be of great significance for estimating such effects. In this paper, we
propose an end-to-end CNN-based method for predicting drug safety on user
comments from healthcare discussion forums. We present an architecture that is
based on a vast ensemble of CNNs with varied structural parameters, where the
prediction is determined by the majority vote. To evaluate the performance of
the proposed solution, we present a large-scale dataset collected from a
medical website that consists of over 50 thousand reviews for more than 4000
drugs. The results demonstrate that our model significantly outperforms
conventional approaches and predicts medication safety with an accuracy of
87.17% for the binary and 62.88% for the multi-class classification task.
| new_dataset | 0.95594 |
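The decision rule of the ensemble described above, a majority vote over many CNNs, is easy to sketch in isolation. The CNN members themselves are omitted; `preds` below is an assumed (n_models, n_samples) array of per-model class predictions, so the names are illustrative rather than the authors' code.

```python
import numpy as np

def majority_vote(preds):
    """preds: (n_models, n_samples) integer class predictions."""
    n_models, n_samples = preds.shape
    out = np.empty(n_samples, dtype=preds.dtype)
    for i in range(n_samples):
        values, counts = np.unique(preds[:, i], return_counts=True)
        out[i] = values[np.argmax(counts)]             # most frequent class
    return out

preds = np.array([[1, 0, 2],
                  [1, 2, 2],
                  [0, 2, 2]])                          # 3 models x 3 reviews
print(majority_vote(preds))                            # -> [1 2 2]
```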
1706.05585 | Tom Hope | Tom Hope, Joel Chan, Aniket Kittur, Dafna Shahaf | Accelerating Innovation Through Analogy Mining | KDD 2017 | null | null | null | cs.CL cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The availability of large idea repositories (e.g., the U.S. patent database)
could significantly accelerate innovation and discovery by providing people
with inspiration from solutions to analogous problems. However, finding useful
analogies in these large, messy, real-world repositories remains a persistent
challenge for either human or automated methods. Previous approaches include
costly hand-created databases that have high relational structure (e.g.,
predicate calculus representations) but are very sparse. Simpler
machine-learning/information-retrieval similarity metrics can scale to large,
natural-language datasets, but struggle to account for structural similarity,
which is central to analogy. In this paper we explore the viability and value
of learning simpler structural representations, specifically, "problem
schemas", which specify the purpose of a product and the mechanisms by which it
achieves that purpose. Our approach combines crowdsourcing and recurrent neural
networks to extract purpose and mechanism vector representations from product
descriptions. We demonstrate that these learned vectors allow us to find
analogies with higher precision and recall than traditional
information-retrieval methods. In an ideation experiment, analogies retrieved
by our models significantly increased people's likelihood of generating
creative ideas compared to analogies retrieved by traditional methods. Our
results suggest a promising approach to enabling computational analogy at scale
is to learn and leverage weaker structural representations.
| [
{
"version": "v1",
"created": "Sat, 17 Jun 2017 22:29:37 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Hope",
"Tom",
""
],
[
"Chan",
"Joel",
""
],
[
"Kittur",
"Aniket",
""
],
[
"Shahaf",
"Dafna",
""
]
] | TITLE: Accelerating Innovation Through Analogy Mining
ABSTRACT: The availability of large idea repositories (e.g., the U.S. patent database)
could significantly accelerate innovation and discovery by providing people
with inspiration from solutions to analogous problems. However, finding useful
analogies in these large, messy, real-world repositories remains a persistent
challenge for either human or automated methods. Previous approaches include
costly hand-created databases that have high relational structure (e.g.,
predicate calculus representations) but are very sparse. Simpler
machine-learning/information-retrieval similarity metrics can scale to large,
natural-language datasets, but struggle to account for structural similarity,
which is central to analogy. In this paper we explore the viability and value
of learning simpler structural representations, specifically, "problem
schemas", which specify the purpose of a product and the mechanisms by which it
achieves that purpose. Our approach combines crowdsourcing and recurrent neural
networks to extract purpose and mechanism vector representations from product
descriptions. We demonstrate that these learned vectors allow us to find
analogies with higher precision and recall than traditional
information-retrieval methods. In an ideation experiment, analogies retrieved
by our models significantly increased people's likelihood of generating
creative ideas compared to analogies retrieved by traditional methods. Our
results suggest a promising approach to enabling computational analogy at scale
is to learn and leverage weaker structural representations.
| no_new_dataset | 0.932515 |
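As a hedged sketch of the retrieval step this abstract implies: once a purpose vector and a mechanism vector exist for each product, analogy candidates can be ranked by high purpose similarity and, optionally, low mechanism similarity. Random vectors stand in for the learned RNN outputs, and the scoring rule below is one plausible instantiation rather than the paper's exact objective.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def analogy_score(qp, qm, p, m, alpha=1.0):
    # similar purpose, dissimilar mechanism -> far-field analogy
    return cosine(qp, p) - alpha * cosine(qm, m)

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 64))      # purpose vectors for 100 products
M = rng.normal(size=(100, 64))      # mechanism vectors
scores = [analogy_score(P[0], M[0], P[i], M[i]) for i in range(1, 100)]
print("best analogy candidate:", int(np.argmax(scores)) + 1)
```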
1706.05726 | Cemal Aker | Cemal Aker, Sinan Kalkan | Using Deep Networks for Drone Detection | To appear in International Workshop on Small-Drone Surveillance,
Detection and Counteraction Techniques organised within AVSS 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drone detection is the problem of finding the smallest rectangle that
encloses the drone(s) in a video sequence. In this study, we propose a solution
using an end-to-end object detection model based on convolutional neural
networks. To solve the scarce data problem for training the network, we propose
an algorithm for creating an extensive artificial dataset by combining
background-subtracted real images. With this approach, we achieve precision
and recall values that are simultaneously high.
| [
{
"version": "v1",
"created": "Sun, 18 Jun 2017 20:50:56 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Aker",
"Cemal",
""
],
[
"Kalkan",
"Sinan",
""
]
] | TITLE: Using Deep Networks for Drone Detection
ABSTRACT: Drone detection is the problem of finding the smallest rectangle that
encloses the drone(s) in a video sequence. In this study, we propose a solution
using an end-to-end object detection model based on convolutional neural
networks. To solve the scarce data problem for training the network, we propose
an algorithm for creating an extensive artificial dataset by combining
background-subtracted real images. With this approach, we achieve precision
and recall values that are simultaneously high.
| no_new_dataset | 0.944022 |
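The artificial-dataset construction described above, compositing background-subtracted drone patches onto clean backgrounds, can be sketched as follows; real images and alpha masks replace the random arrays, and all names are illustrative.

```python
import numpy as np

def composite(background, drone_rgba, rng):
    """Paste an RGBA drone patch (alpha from background subtraction) at a
    random location; return the image and its (x, y, w, h) bounding box."""
    H, W, _ = background.shape
    h, w, _ = drone_rgba.shape
    x, y = rng.integers(0, W - w), rng.integers(0, H - h)
    out = background.copy()
    alpha = drone_rgba[..., 3:4] / 255.0
    out[y:y + h, x:x + w] = ((1 - alpha) * out[y:y + h, x:x + w]
                             + alpha * drone_rgba[..., :3])
    return out, (x, y, w, h)

rng = np.random.default_rng(0)
bg = rng.integers(0, 256, size=(480, 640, 3)).astype(np.float32)
patch = rng.integers(0, 256, size=(40, 60, 4)).astype(np.float32)
img, box = composite(bg, patch, rng)
print(box)
```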
1706.05733 | Georgios Feretzakis | Dimitris Kalles, Vassilios S. Verykios, Georgios Feretzakis,
Athanasios Papagelis | Data set operations to hide decision tree rules | 7 pages, 4 figures and 2 tables. ECAI 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on preserving the privacy of sensitive patterns when
inducing decision trees. We adopt a record augmentation approach for hiding
sensitive classification rules in binary datasets. Such a hiding methodology is
preferred over other heuristic solutions like output perturbation or
cryptographic techniques - which restrict the usability of the data - since the
raw data itself is readily available for public use. We show some key lemmas
which are related to the hiding process and we also demonstrate the methodology
with an example and an indicative experiment using a prototype hiding tool.
| [
{
"version": "v1",
"created": "Sun, 18 Jun 2017 21:57:36 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Kalles",
"Dimitris",
""
],
[
"Verykios",
"Vassilios S.",
""
],
[
"Feretzakis",
"Georgios",
""
],
[
"Papagelis",
"Athanasios",
""
]
] | TITLE: Data set operations to hide decision tree rules
ABSTRACT: This paper focuses on preserving the privacy of sensitive patterns when
inducing decision trees. We adopt a record augmentation approach for hiding
sensitive classification rules in binary datasets. Such a hiding methodology is
preferred over other heuristic solutions like output perturbation or
cryptographic techniques - which restrict the usability of the data - since the
raw data itself is readily available for public use. We show some key lemmas
which are related to the hiding process and we also demonstrate the methodology
with an example and an indicative experiment using a prototype hiding tool.
| no_new_dataset | 0.941922 |
1706.05764 | Fenglong Ma | Fenglong Ma, Radha Chitta, Jing Zhou, Quanzeng You, Tong Sun, Jing Gao | Dipole: Diagnosis Prediction in Healthcare via Attention-based
Bidirectional Recurrent Neural Networks | null | null | 10.1145/3097983.3098088 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the future health information of patients from the historical
Electronic Health Records (EHR) is a core research task in the development of
personalized healthcare. Patient EHR data consist of sequences of visits over
time, where each visit contains multiple medical codes, including diagnosis,
medication, and procedure codes. The most important challenges for this task
are to model the temporality and high dimensionality of sequential EHR data and
to interpret the prediction results. Existing work solves this problem by
employing recurrent neural networks (RNNs) to model EHR data and utilizing
simple attention mechanism to interpret the results. However, RNN-based
approaches suffer from the problem that the performance of RNNs drops when the
length of sequences is large, and the relationships between subsequent visits
are ignored by current RNN-based approaches. To address these issues, we
propose {\sf Dipole}, an end-to-end, simple and robust model for predicting
patients' future health information. Dipole employs bidirectional recurrent
neural networks to remember all the information of both the past visits and the
future visits, and it introduces three attention mechanisms to measure the
relationships of different visits for the prediction. With the attention
mechanisms, Dipole can interpret the prediction results effectively. Dipole
also allows us to interpret the learned medical code representations which are
confirmed positively by medical experts. Experimental results on two real world
EHR datasets show that the proposed Dipole can significantly improve the
prediction accuracy compared with the state-of-the-art diagnosis prediction
approaches and provide clinically meaningful interpretation.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 02:30:58 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Ma",
"Fenglong",
""
],
[
"Chitta",
"Radha",
""
],
[
"Zhou",
"Jing",
""
],
[
"You",
"Quanzeng",
""
],
[
"Sun",
"Tong",
""
],
[
"Gao",
"Jing",
""
]
] | TITLE: Dipole: Diagnosis Prediction in Healthcare via Attention-based
Bidirectional Recurrent Neural Networks
ABSTRACT: Predicting the future health information of patients from the historical
Electronic Health Records (EHR) is a core research task in the development of
personalized healthcare. Patient EHR data consist of sequences of visits over
time, where each visit contains multiple medical codes, including diagnosis,
medication, and procedure codes. The most important challenges for this task
are to model the temporality and high dimensionality of sequential EHR data and
to interpret the prediction results. Existing work solves this problem by
employing recurrent neural networks (RNNs) to model EHR data and utilizing
a simple attention mechanism to interpret the results. However, RNN-based
approaches suffer from the problem that the performance of RNNs drops when the
length of sequences is large, and the relationships between subsequent visits
are ignored by current RNN-based approaches. To address these issues, we
propose {\sf Dipole}, an end-to-end, simple and robust model for predicting
patients' future health information. Dipole employs bidirectional recurrent
neural networks to remember all the information of both the past visits and the
future visits, and it introduces three attention mechanisms to measure the
relationships of different visits for the prediction. With the attention
mechanisms, Dipole can interpret the prediction results effectively. Dipole
also allows us to interpret the learned medical code representations which are
confirmed positively by medical experts. Experimental results on two real world
EHR datasets show that the proposed Dipole can significantly improve the
prediction accuracy compared with the state-of-the-art diagnosis prediction
approaches and provide clinically meaningful interpretation.
| no_new_dataset | 0.94699 |
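One of Dipole's ingredients, attention over the hidden states of a bidirectional recurrent encoder of the visit sequence, can be sketched in isolation. The hidden states H are assumed precomputed, and the softmax(Hw) form below is the generic location-based variant, one of the several mechanisms the paper compares.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, d = 10, 32                    # visits, concat fwd+bwd hidden size
H = rng.normal(size=(T, d))      # bidirectional hidden states h_1..h_T
w = rng.normal(size=d)           # attention parameter vector

alpha = softmax(H @ w)           # one weight per visit
context = alpha @ H              # weighted sum feeding the next-visit predictor
print(alpha.round(3), context.shape)
```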
1706.05765 | Makoto Morishita | Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino,
Katsuhito Sudoh, Satoshi Nakamura | An Empirical Study of Mini-Batch Creation Strategies for Neural Machine
Translation | 8 pages, accepted to the First Workshop on Neural Machine Translation | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training of neural machine translation (NMT) models usually uses mini-batches
for efficiency purposes. During the mini-batched training process, it is
necessary to pad shorter sentences in a mini-batch to be equal in length to the
longest sentence therein for efficient computation. Previous work has noted
that sorting the corpus based on the sentence length before making mini-batches
reduces the amount of padding and increases the processing speed. However,
despite the fact that mini-batch creation is an essential step in NMT training,
widely used NMT toolkits implement disparate strategies for doing so, which
have not been empirically validated or compared. This work investigates
mini-batch creation strategies with experiments over two different datasets.
Our results suggest that the choice of a mini-batch creation strategy has a
large effect on NMT training and some length-based sorting strategies do not
always work well compared with simple shuffling.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 02:38:01 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Morishita",
"Makoto",
""
],
[
"Oda",
"Yusuke",
""
],
[
"Neubig",
"Graham",
""
],
[
"Yoshino",
"Koichiro",
""
],
[
"Sudoh",
"Katsuhito",
""
],
[
"Nakamura",
"Satoshi",
""
]
] | TITLE: An Empirical Study of Mini-Batch Creation Strategies for Neural Machine
Translation
ABSTRACT: Training of neural machine translation (NMT) models usually uses mini-batches
for efficiency purposes. During the mini-batched training process, it is
necessary to pad shorter sentences in a mini-batch to be equal in length to the
longest sentence therein for efficient computation. Previous work has noted
that sorting the corpus based on the sentence length before making mini-batches
reduces the amount of padding and increases the processing speed. However,
despite the fact that mini-batch creation is an essential step in NMT training,
widely used NMT toolkits implement disparate strategies for doing so, which
have not been empirically validated or compared. This work investigates
mini-batch creation strategies with experiments over two different datasets.
Our results suggest that the choice of a mini-batch creation strategy has a
large effect on NMT training and some length-based sorting strategies do not
always work well compared with simple shuffling.
| no_new_dataset | 0.949623 |
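Two of the compared strategies, plain shuffling versus sorting the corpus by sentence length before batching, can be sketched directly; the padding count makes the efficiency difference visible. Sentences are token-id lists and the names are illustrative.

```python
import random

PAD = 0

def pad_batch(batch):
    L = max(len(s) for s in batch)
    return [s + [PAD] * (L - len(s)) for s in batch]

def batches_shuffled(corpus, batch_size, rng):
    order = corpus[:]
    rng.shuffle(order)
    return [pad_batch(order[i:i + batch_size])
            for i in range(0, len(order), batch_size)]

def batches_sorted(corpus, batch_size):
    order = sorted(corpus, key=len)
    return [pad_batch(order[i:i + batch_size])
            for i in range(0, len(order), batch_size)]

corpus = [[1] * n for n in [3, 9, 2, 8, 5, 7, 4, 6]]
count_pads = lambda bs: sum(row.count(PAD) for b in bs for row in b)
print("shuffled padding:", count_pads(batches_shuffled(corpus, 4, random.Random(0))))
print("sorted padding:  ", count_pads(batches_sorted(corpus, 4)))
```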
1706.05864 | Wei Zhou | Wei Zhou and Caiwen Ma and Arjan Kuijper | Histograms of Gaussian normal distribution for feature matching in
clutter scenes | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D feature descriptors provide information between corresponding models and
scenes. 3D object recognition in cluttered scenes, however, remains a
largely unsolved problem. Practical applications impose several challenges
which are not fully addressed by existing methods. Especially in cluttered
scenes there are many feature mismatches between scenes and models. We
therefore propose Histograms of Gaussian Normal Distribution (HGND) for
extracting salient features on a local reference frame (LRF) that enables us to
solve this problem. We propose an LRF on each local surface patch using the
scatter matrix's eigenvectors. Then the HGND information of each salient point
is calculated on the LRF, for which we use both the mesh and point data of the
depth image. Experiments on 45 cluttered scenes of the Bologna Dataset and 50
cluttered scenes of the UWA Dataset are made to evaluate the robustness and
descriptiveness of our HGND. Our experiments demonstrate that HGND obtains a
more reliable matching rate than state-of-the-art approaches in
cluttered situations.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 10:23:14 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Zhou",
"Wei",
""
],
[
"Ma",
"Caiwen",
""
],
[
"Kuijper",
"Arjan",
""
]
] | TITLE: Histograms of Gaussian normal distribution for feature matching in
clutter scenes
ABSTRACT: 3D feature descriptors provide information between corresponding models and
scenes. 3D object recognition in cluttered scenes, however, remains a
largely unsolved problem. Practical applications impose several challenges
which are not fully addressed by existing methods. Especially in cluttered
scenes there are many feature mismatches between scenes and models. We
therefore propose Histograms of Gaussian Normal Distribution (HGND) for
extracting salient features on a local reference frame (LRF) that enables us to
solve this problem. We propose an LRF on each local surface patch using the
scatter matrix's eigenvectors. Then the HGND information of each salient point
is calculated on the LRF, for which we use both the mesh and point data of the
depth image. Experiments on 45 cluttered scenes of the Bologna Dataset and 50
cluttered scenes of the UWA Dataset are made to evaluate the robustness and
descriptiveness of our HGND. Our experiments demonstrate that HGND obtains a
more reliable matching rate than state-of-the-art approaches in
cluttered situations.
| no_new_dataset | 0.939471 |
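The LRF construction named in the abstract, axes from the eigenvectors of the local scatter matrix, can be sketched with numpy; sign disambiguation and the HGND histogram itself are omitted, so treat this as an assumed minimal version.

```python
import numpy as np

def local_reference_frame(patch_points):
    """patch_points: (n, 3) neighbourhood of a keypoint; returns a 3x3
    matrix whose rows are the LRF axes."""
    centered = patch_points - patch_points.mean(axis=0)
    scatter = centered.T @ centered / len(patch_points)
    eigvals, eigvecs = np.linalg.eigh(scatter)     # ascending eigenvalues
    x = eigvecs[:, 2]                              # direction of largest spread
    z = eigvecs[:, 0]                              # smallest spread ~ surface normal
    y = np.cross(z, x)                             # completes a right-handed frame
    return np.stack([x, y, z])

rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3)) * np.array([3.0, 1.0, 0.1])  # flat-ish patch
print(local_reference_frame(pts).round(2))
```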
1706.05952 | Zhiyuan Shi | Zhiyuan Shi, Timothy M. Hospedales, Tao Xiang | Bayesian Joint Modelling for Object Localisation in Weakly Labelled
Images | Accepted in IEEE Transaction on Pattern Analysis and Machine
Intelligence | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of localisation of objects as bounding boxes in images
and videos with weak labels. This weakly supervised object localisation problem
has been tackled in the past using discriminative models where each object
class is localised independently from other classes. In this paper, a novel
framework based on Bayesian joint topic modelling is proposed, which differs
significantly from the existing ones in that: (1) All foreground object classes
are modelled jointly in a single generative model that encodes multiple object
co-existence so that "explaining away" inference can resolve ambiguity and lead
to better learning and localisation. (2) Image backgrounds are shared across
classes to better learn varying surroundings and "push out" objects of
interest. (3) Our model can be learned with a mixture of weakly labelled and
unlabelled data, allowing the large volume of unlabelled images on the Internet
to be exploited for learning. Moreover, the Bayesian formulation enables the
exploitation of various types of prior knowledge to compensate for the limited
supervision offered by weakly labelled data, as well as Bayesian domain
adaptation for transfer learning. Extensive experiments on the PASCAL VOC,
ImageNet and YouTube-Object videos datasets demonstrate the effectiveness of
our Bayesian joint model for weakly supervised object localisation.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 13:59:48 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Shi",
"Zhiyuan",
""
],
[
"Hospedales",
"Timothy M.",
""
],
[
"Xiang",
"Tao",
""
]
] | TITLE: Bayesian Joint Modelling for Object Localisation in Weakly Labelled
Images
ABSTRACT: We address the problem of localisation of objects as bounding boxes in images
and videos with weak labels. This weakly supervised object localisation problem
has been tackled in the past using discriminative models where each object
class is localised independently from other classes. In this paper, a novel
framework based on Bayesian joint topic modelling is proposed, which differs
significantly from the existing ones in that: (1) All foreground object classes
are modelled jointly in a single generative model that encodes multiple object
co-existence so that "explaining away" inference can resolve ambiguity and lead
to better learning and localisation. (2) Image backgrounds are shared across
classes to better learn varying surroundings and "push out" objects of
interest. (3) Our model can be learned with a mixture of weakly labelled and
unlabelled data, allowing the large volume of unlabelled images on the Internet
to be exploited for learning. Moreover, the Bayesian formulation enables the
exploitation of various types of prior knowledge to compensate for the limited
supervision offered by weakly labelled data, as well as Bayesian domain
adaptation for transfer learning. Extensive experiments on the PASCAL VOC,
ImageNet and YouTube-Object videos datasets demonstrate the effectiveness of
our Bayesian joint model for weakly supervised object localisation.
| no_new_dataset | 0.951278 |
1706.05999 | Sascha Wirges | Sascha Wirges, Bj\"orn Roxin, Eike Rehder, Tilman K\"uhner and Martin
Lauer | Guided Depth Upsampling for Precise Mapping of Urban Environments | 6 pages, 6 figures | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an improved model for MRF-based depth upsampling, guided by image
features as well as 3D surface normal features. By exploiting the underlying camera
model we define a novel regularization term that implicitly evaluates the
planarity of arbitrary oriented surfaces. Our method improves upsampling
quality in scenes composed of predominantly planar surfaces, such as urban
areas. We use a synthetic dataset to demonstrate that our approach outperforms
recent methods that implement distance-based regularization terms. Finally, we
validate our approach for mapping applications on our experimental vehicle.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 15:04:41 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Wirges",
"Sascha",
""
],
[
"Roxin",
"Björn",
""
],
[
"Rehder",
"Eike",
""
],
[
"Kühner",
"Tilman",
""
],
[
"Lauer",
"Martin",
""
]
] | TITLE: Guided Depth Upsampling for Precise Mapping of Urban Environments
ABSTRACT: We present an improved model for MRF-based depth upsampling, guided by image
features as well as 3D surface normal features. By exploiting the underlying camera
model we define a novel regularization term that implicitly evaluates the
planarity of arbitrary oriented surfaces. Our method improves upsampling
quality in scenes composed of predominantly planar surfaces, such as urban
areas. We use a synthetic dataset to demonstrate that our approach outperforms
recent methods that implement distance-based regularization terms. Finally, we
validate our approach for mapping applications on our experimental vehicle.
| no_new_dataset | 0.9463 |
1706.06031 | Dmitry Petrov | Dmitry Petrov, Alexander Ivanov, Joshua Faskowitz, Boris Gutman,
Daniel Moyer, Julio Villalon, Neda Jahanshad and Paul Thompson | Evaluating 35 Methods to Generate Structural Connectomes Using Pairwise
Classification | Accepted for MICCAI 2017, 8 pages, 3 figures | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is no consensus on how to construct structural brain networks from
diffusion MRI. How variations in pre-processing steps affect network
reliability and its ability to distinguish subjects remains opaque. In this
work, we address this issue by comparing 35 structural connectome-building
pipelines. We vary diffusion reconstruction models, tractography algorithms and
parcellations. Next, we classify structural connectome pairs as either
belonging to the same individual or not. Connectome weights and eight
topological derivative measures form our feature set. For experiments, we use
three test-retest datasets from the Consortium for Reliability and
Reproducibility (CoRR), comprising a total of 105 individuals. We also compare
pairwise classification results to a commonly used parametric test-retest
measure, the Intraclass Correlation Coefficient (ICC).
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 16:05:11 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Petrov",
"Dmitry",
""
],
[
"Ivanov",
"Alexander",
""
],
[
"Faskowitz",
"Joshua",
""
],
[
"Gutman",
"Boris",
""
],
[
"Moyer",
"Daniel",
""
],
[
"Villalon",
"Julio",
""
],
[
"Jahanshad",
"Neda",
""
],
[
"Thompson",
"Paul",
""
]
] | TITLE: Evaluating 35 Methods to Generate Structural Connectomes Using Pairwise
Classification
ABSTRACT: There is no consensus on how to construct structural brain networks from
diffusion MRI. How variations in pre-processing steps affect network
reliability and its ability to distinguish subjects remains opaque. In this
work, we address this issue by comparing 35 structural connectome-building
pipelines. We vary diffusion reconstruction models, tractography algorithms and
parcellations. Next, we classify structural connectome pairs as either
belonging to the same individual or not. Connectome weights and eight
topological derivative measures form our feature set. For experiments, we use
three test-retest datasets from the Consortium for Reliability and
Reproducibility (CoRR), comprising a total of 105 individuals. We also compare
pairwise classification results to a commonly used parametric test-retest
measure, the Intraclass Correlation Coefficient (ICC).
| no_new_dataset | 0.939803 |
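The pairwise-classification protocol in this abstract can be sketched end to end: build features from pairs of connectomes (here, the absolute difference of vectorised edge weights, one simple choice) and classify whether a pair comes from the same individual. Random data stands in for the real test-retest scans.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_sub, d = 30, 200
base = rng.normal(size=(n_sub, d))                  # subject "fingerprints"
scan1 = base + 0.1 * rng.normal(size=(n_sub, d))    # test session
scan2 = base + 0.1 * rng.normal(size=(n_sub, d))    # retest session

X, y = [], []
for i in range(n_sub):
    for j in range(n_sub):
        X.append(np.abs(scan1[i] - scan2[j]))       # pair feature
        y.append(int(i == j))                       # 1 = same individual

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```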
1706.06087 | Wei Wang | Wei Wang, Brian Bleakley, Chelsea Ju, Vincent Kyi, Patrick Tan, Howard
Choi, Xinxin Huang, Yichao Zhou, Justin Wood, Ding Wang, Alex Bui, Peipei
Ping | Aztec: A Platform to Render Biomedical Software Findable, Accessible,
Interoperable, and Reusable | 21 pages, 4 figures, 2 tables | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Precision medicine and health require the characterization and phenotyping
of biological systems and patient datasets using a variety of data formats.
This scenario mandates the centralization of various tools and resources in a
unified platform to render them Findable, Accessible, Interoperable, and
Reusable (FAIR Principles). Leveraging these principles, Aztec provides the
scientific community with a new platform that promotes a long-term, sustainable
ecosystem of biomedical research software. Aztec is available at
https://aztec.bio and its source code is hosted at
https://github.com/BD2K-Aztec.
| [
{
"version": "v1",
"created": "Mon, 19 Jun 2017 17:57:44 GMT"
}
] | 2017-06-20T00:00:00 | [
[
"Wang",
"Wei",
""
],
[
"Bleakley",
"Brian",
""
],
[
"Ju",
"Chelsea",
""
],
[
"Kyi",
"Vincent",
""
],
[
"Tan",
"Patrick",
""
],
[
"Choi",
"Howard",
""
],
[
"Huang",
"Xinxin",
""
],
[
"Zhou",
"Yichao",
""
],
[
"Wood",
"Justin",
""
],
[
"Wang",
"Ding",
""
],
[
"Bui",
"Alex",
""
],
[
"Ping",
"Peipei",
""
]
] | TITLE: Aztec: A Platform to Render Biomedical Software Findable, Accessible,
Interoperable, and Reusable
ABSTRACT: Precision medicine and health require the characterization and phenotyping
of biological systems and patient datasets using a variety of data formats.
This scenario mandates the centralization of various tools and resources in a
unified platform to render them Findable, Accessible, Interoperable, and
Reusable (FAIR Principles). Leveraging these principles, Aztec provides the
scientific community with a new platform that promotes a long-term, sustainable
ecosystem of biomedical research software. Aztec is available at
https://aztec.bio and its source code is hosted at
https://github.com/BD2K-Aztec.
| no_new_dataset | 0.950411 |
1605.07262 | Osbert Bastani | Osbert Bastani, Yani Ioannou, Leonidas Lampropoulos, Dimitrios
Vytiniotis, Aditya Nori, Antonio Criminisi | Measuring Neural Net Robustness with Constraints | null | null | null | null | cs.LG cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite having high accuracy, neural nets have been shown to be susceptible
to adversarial examples, where a small perturbation to an input can cause it to
become mislabeled. We propose metrics for measuring the robustness of a neural
net and devise a novel algorithm for approximating these metrics based on an
encoding of robustness as a linear program. We show how our metrics can be used
to evaluate the robustness of deep neural nets with experiments on the MNIST
and CIFAR-10 datasets. Our algorithm generates more informative estimates of
robustness metrics compared to estimates based on existing algorithms.
Furthermore, we show how existing approaches to improving robustness "overfit"
to adversarial examples generated using a specific algorithm. Finally, we show
that our techniques can be used to additionally improve neural net robustness
both according to the metrics that we propose and according to previously
proposed metrics.
| [
{
"version": "v1",
"created": "Tue, 24 May 2016 02:18:21 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Jun 2017 11:58:51 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Bastani",
"Osbert",
""
],
[
"Ioannou",
"Yani",
""
],
[
"Lampropoulos",
"Leonidas",
""
],
[
"Vytiniotis",
"Dimitrios",
""
],
[
"Nori",
"Aditya",
""
],
[
"Criminisi",
"Antonio",
""
]
] | TITLE: Measuring Neural Net Robustness with Constraints
ABSTRACT: Despite having high accuracy, neural nets have been shown to be susceptible
to adversarial examples, where a small perturbation to an input can cause it to
become mislabeled. We propose metrics for measuring the robustness of a neural
net and devise a novel algorithm for approximating these metrics based on an
encoding of robustness as a linear program. We show how our metrics can be used
to evaluate the robustness of deep neural nets with experiments on the MNIST
and CIFAR-10 datasets. Our algorithm generates more informative estimates of
robustness metrics compared to estimates based on existing algorithms.
Furthermore, we show how existing approaches to improving robustness "overfit"
to adversarial examples generated using a specific algorithm. Finally, we show
that our techniques can be used to additionally improve neural net robustness
both according to the metrics that we propose and according to previously
proposed metrics.
| no_new_dataset | 0.947817 |
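The robustness metric being estimated can be illustrated in the one case where the linear-program encoding collapses to a closed form: a linear binary classifier f(x) = w.x + b, where the smallest L-infinity perturbation that flips the sign is |f(x)| / ||w||_1. This is a hedged special case, not the paper's algorithm for deep nets.

```python
import numpy as np

def linf_robustness(w, b, x):
    """Smallest L-infinity perturbation that flips sign(w.x + b)."""
    return abs(w @ x + b) / np.abs(w).sum()

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.3
x = rng.normal(size=5)
rho = linf_robustness(w, b, x)
x_adv = x - np.sign(w @ x + b) * rho * np.sign(w)   # point on the boundary
print(rho, float(w @ x + b), float(w @ x_adv + b))  # last value ~ 0
```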
1706.04261 | Raghav Goyal | Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna
Materzy\'nska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend,
Peter Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo
Bax, Roland Memisevic | The "something something" video database for learning and evaluating
visual common sense | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks trained on datasets such as ImageNet have led to major
advances in visual object classification. One obstacle that prevents networks
from reasoning more deeply about complex scenes and situations, and from
integrating visual knowledge with natural language, like humans do, is their
lack of common sense knowledge about the physical world. Videos, unlike still
images, contain a wealth of detailed information about the physical world.
However, most labelled video datasets represent high-level concepts rather than
detailed physical aspects about actions and scenes. In this work, we describe
our ongoing collection of the "something-something" database of video
prediction tasks whose solutions require a common sense understanding of the
depicted situation. The database currently contains more than 100,000 videos
across 174 classes, which are defined as caption-templates. We also describe
the challenges in crowd-sourcing this data at scale.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 21:26:19 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2017 21:15:13 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Goyal",
"Raghav",
""
],
[
"Kahou",
"Samira Ebrahimi",
""
],
[
"Michalski",
"Vincent",
""
],
[
"Materzyńska",
"Joanna",
""
],
[
"Westphal",
"Susanne",
""
],
[
"Kim",
"Heuna",
""
],
[
"Haenel",
"Valentin",
""
],
[
"Fruend",
"Ingo",
""
],
[
"Yianilos",
"Peter",
""
],
[
"Mueller-Freitag",
"Moritz",
""
],
[
"Hoppe",
"Florian",
""
],
[
"Thurau",
"Christian",
""
],
[
"Bax",
"Ingo",
""
],
[
"Memisevic",
"Roland",
""
]
] | TITLE: The "something something" video database for learning and evaluating
visual common sense
ABSTRACT: Neural networks trained on datasets such as ImageNet have led to major
advances in visual object classification. One obstacle that prevents networks
from reasoning more deeply about complex scenes and situations, and from
integrating visual knowledge with natural language, like humans do, is their
lack of common sense knowledge about the physical world. Videos, unlike still
images, contain a wealth of detailed information about the physical world.
However, most labelled video datasets represent high-level concepts rather than
detailed physical aspects about actions and scenes. In this work, we describe
our ongoing collection of the "something-something" database of video
prediction tasks whose solutions require a common sense understanding of the
depicted situation. The database currently contains more than 100,000 videos
across 174 classes, which are defined as caption-templates. We also describe
the challenges in crowd-sourcing this data at scale.
| new_dataset | 0.857828 |
1706.05069 | Vitaly Feldman | Vitaly Feldman and Thomas Steinke | Generalization for Adaptively-chosen Estimators via Stable Median | To appear in Conference on Learning Theory (COLT) 2017 | null | null | null | cs.LG cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Datasets are often reused to perform multiple statistical analyses in an
adaptive way, in which each analysis may depend on the outcomes of previous
analyses on the same dataset. Standard statistical guarantees do not account
for these dependencies and little is known about how to provably avoid
overfitting and false discovery in the adaptive setting. We consider a natural
formalization of this problem in which the goal is to design an algorithm that,
given a limited number of i.i.d.~samples from an unknown distribution, can
answer adaptively-chosen queries about that distribution.
We present an algorithm that estimates the expectations of $k$ arbitrary
adaptively-chosen real-valued estimators using a number of samples that scales
as $\sqrt{k}$. The answers given by our algorithm are essentially as accurate
as if fresh samples were used to evaluate each estimator. In contrast, prior
work yields error guarantees that scale with the worst-case sensitivity of each
estimator. We also give a version of our algorithm that can be used to verify
answers to such queries where the sample complexity depends logarithmically on
the number of queries $k$ (as in the reusable holdout technique).
Our algorithm is based on a simple approximate median algorithm that
satisfies the strong stability guarantees of differential privacy. Our
techniques provide a new approach for analyzing the generalization guarantees
of differentially private algorithms.
| [
{
"version": "v1",
"created": "Thu, 15 Jun 2017 20:21:17 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Feldman",
"Vitaly",
""
],
[
"Steinke",
"Thomas",
""
]
] | TITLE: Generalization for Adaptively-chosen Estimators via Stable Median
ABSTRACT: Datasets are often reused to perform multiple statistical analyses in an
adaptive way, in which each analysis may depend on the outcomes of previous
analyses on the same dataset. Standard statistical guarantees do not account
for these dependencies and little is known about how to provably avoid
overfitting and false discovery in the adaptive setting. We consider a natural
formalization of this problem in which the goal is to design an algorithm that,
given a limited number of i.i.d.~samples from an unknown distribution, can
answer adaptively-chosen queries about that distribution.
We present an algorithm that estimates the expectations of $k$ arbitrary
adaptively-chosen real-valued estimators using a number of samples that scales
as $\sqrt{k}$. The answers given by our algorithm are essentially as accurate
as if fresh samples were used to evaluate each estimator. In contrast, prior
work yields error guarantees that scale with the worst-case sensitivity of each
estimator. We also give a version of our algorithm that can be used to verify
answers to such queries where the sample complexity depends logarithmically on
the number of queries $k$ (as in the reusable holdout technique).
Our algorithm is based on a simple approximate median algorithm that
satisfies the strong stability guarantees of differential privacy. Our
techniques provide a new approach for analyzing the generalization guarantees
of differentially private algorithms.
| no_new_dataset | 0.940572 |
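The stable-median primitive at the core of the approach can be sketched via the exponential mechanism, one standard way to obtain a differentially private approximate median: the utility of a candidate value is minus its rank distance to the true median, which has sensitivity 1 under replace-one neighbouring datasets. This illustrates the kind of primitive involved, not the paper's exact algorithm.

```python
import numpy as np

def dp_median(samples, candidates, eps, rng):
    s = np.sort(samples)
    n = len(s)
    ranks = np.searchsorted(s, candidates)       # samples strictly below
    utility = -np.abs(ranks - n / 2)             # peaks at the true median
    weights = np.exp(eps * utility / 2)          # exponential mechanism
    return rng.choice(candidates, p=weights / weights.sum())

rng = np.random.default_rng(0)
samples = rng.normal(loc=2.0, size=500)
candidates = np.linspace(-5.0, 5.0, 201)
print(dp_median(samples, candidates, eps=1.0, rng=rng))   # near 2.0
```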
1706.05075 | Peng Zhou | Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, Bo Xu | Joint Extraction of Entities and Relations Based on a Novel Tagging
Scheme | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Joint extraction of entities and relations is an important task in
information extraction. To tackle this problem, we first propose a novel
tagging scheme that can convert the joint extraction task to a tagging problem.
Then, based on our tagging scheme, we study different end-to-end models to
extract entities and their relations directly, without identifying entities and
relations separately. We conduct experiments on a public dataset produced by
the distant supervision method, and the experimental results show that the
tagging-based methods outperform most of the existing pipelined and joint
learning methods. Moreover, the end-to-end model proposed in this paper
achieves the best results on the public dataset.
| [
{
"version": "v1",
"created": "Wed, 7 Jun 2017 03:14:23 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Zheng",
"Suncong",
""
],
[
"Wang",
"Feng",
""
],
[
"Bao",
"Hongyun",
""
],
[
"Hao",
"Yuexing",
""
],
[
"Zhou",
"Peng",
""
],
[
"Xu",
"Bo",
""
]
] | TITLE: Joint Extraction of Entities and Relations Based on a Novel Tagging
Scheme
ABSTRACT: Joint extraction of entities and relations is an important task in
information extraction. To tackle this problem, we first propose a novel
tagging scheme that can convert the joint extraction task to a tagging problem.
Then, based on our tagging scheme, we study different end-to-end models to
extract entities and their relations directly, without identifying entities and
relations separately. We conduct experiments on a public dataset produced by
the distant supervision method, and the experimental results show that the
tagging-based methods outperform most of the existing pipelined and joint
learning methods. Moreover, the end-to-end model proposed in this paper
achieves the best results on the public dataset.
| no_new_dataset | 0.953492 |
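The tagging idea can be illustrated concretely: each token is either "O" or a composite tag combining its position inside an entity (B/I/E/S), the relation type, and whether the entity is the first or second argument of the triple. The tag-string convention below is one plausible rendering, not necessarily the paper's exact format.

```python
def tag_sentence(tokens, triple):
    """triple = ((s1, e1), relation, (s2, e2)) with token spans [s, e)."""
    (s1, e1), rel, (s2, e2) = triple
    tags = ["O"] * len(tokens)

    def mark(s, e, role):
        if e - s == 1:
            tags[s] = f"S-{rel}-{role}"           # single-token entity
        else:
            tags[s] = f"B-{rel}-{role}"
            for i in range(s + 1, e - 1):
                tags[i] = f"I-{rel}-{role}"
            tags[e - 1] = f"E-{rel}-{role}"

    mark(s1, e1, 1)                               # first argument of the triple
    mark(s2, e2, 2)                               # second argument
    return tags

tokens = "Trump was born in New York City".split()
print(list(zip(tokens, tag_sentence(tokens, ((0, 1), "Born_in", (4, 7))))))
```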
1706.05077 | Hossein Zeinali | Hossein Zeinali, Hossein Sameti, Nooshin Maghsoodi | SUT System Description for NIST SRE 2016 | Presented in NIST SRE 2016 Evaluation Workshop | null | null | null | cs.SD | http://creativecommons.org/licenses/by/4.0/ | This paper describes the submission to fixed condition of NIST SRE 2016 by
Sharif University of Technology (SUT) team. We provide a full description of
the systems that were included in our submission. We start with an overview of
the datasets that were used for training and development. It is followed by
describing front-ends which contain different VAD and feature types. UBM and
i-vector extractor training are detailed next. As one of the important steps
in system preparation, the preconditioning of the i-vectors is explained in
more detail. Then, we describe the classifier and score
normalization methods. Finally, some results on the SRE16 evaluation dataset
are reported and analyzed.
| [
{
"version": "v1",
"created": "Thu, 8 Jun 2017 11:13:32 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Zeinali",
"Hossein",
""
],
[
"Sameti",
"Hossein",
""
],
[
"Maghsoodi",
"Nooshin",
""
]
] | TITLE: SUT System Description for NIST SRE 2016
ABSTRACT: This paper describes the submission to fixed condition of NIST SRE 2016 by
Sharif University of Technology (SUT) team. We provide a full description of
the systems that were included in our submission. We start with an overview of
the datasets that were used for training and development. It is followed by
describing front-ends which contain different VAD and feature types. UBM and
i-vector extractor training are detailed next. As one of the important steps
in system preparation, the preconditioning of the i-vectors is explained in
more detail. Then, we describe the classifier and score
normalization methods. Finally, some results on the SRE16 evaluation dataset
are reported and analyzed.
| no_new_dataset | 0.949389 |
1706.05125 | Yann Dauphin | Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh and Dhruv Batra | Deal or No Deal? End-to-End Learning for Negotiation Dialogues | null | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator).
| [
{
"version": "v1",
"created": "Fri, 16 Jun 2017 01:26:09 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Lewis",
"Mike",
""
],
[
"Yarats",
"Denis",
""
],
[
"Dauphin",
"Yann N.",
""
],
[
"Parikh",
"Devi",
""
],
[
"Batra",
"Dhruv",
""
]
] | TITLE: Deal or No Deal? End-to-End Learning for Negotiation Dialogues
ABSTRACT: Much of human dialogue occurs in semi-cooperative settings, where agents with
different goals attempt to agree on common decisions. Negotiations require
complex communication and reasoning skills, but success is easy to measure,
making this an interesting task for AI. We gather a large dataset of
human-human negotiations on a multi-issue bargaining task, where agents who
cannot observe each other's reward functions must reach an agreement (or a
deal) via natural language dialogue. For the first time, we show it is possible
to train end-to-end models for negotiation, which must learn both linguistic
and reasoning skills with no annotated dialogue states. We also introduce
dialogue rollouts, in which the model plans ahead by simulating possible
complete continuations of the conversation, and find that this technique
dramatically improves performance. Our code and dataset are publicly available
(https://github.com/facebookresearch/end-to-end-negotiator).
| new_dataset | 0.952486 |
1706.05137 | {\L}ukasz Kaiser | Lukasz Kaiser, Aidan N. Gomez, Noam Shazeer, Ashish Vaswani, Niki
Parmar, Llion Jones, Jakob Uszkoreit | One Model To Learn Them All | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning yields great results across many fields, from speech
recognition and image classification to translation. But for each problem,
getting a deep model to work well involves research into the architecture and a
long period of tuning. We present a single model that yields good results on a
number of problems spanning multiple domains. In particular, this single model
is trained concurrently on ImageNet, multiple translation tasks, image
captioning (COCO dataset), a speech recognition corpus, and an English parsing
task. Our model architecture incorporates building blocks from multiple
domains. It contains convolutional layers, an attention mechanism, and
sparsely-gated layers. Each of these computational blocks is crucial for a
subset of the tasks we train on. Interestingly, even if a block is not crucial
for a task, we observe that adding it never hurts performance and in most cases
improves it on all tasks. We also show that tasks with less data benefit
largely from joint training with other tasks, while performance on large tasks
degrades only slightly if at all.
| [
{
"version": "v1",
"created": "Fri, 16 Jun 2017 03:10:03 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Kaiser",
"Lukasz",
""
],
[
"Gomez",
"Aidan N.",
""
],
[
"Shazeer",
"Noam",
""
],
[
"Vaswani",
"Ashish",
""
],
[
"Parmar",
"Niki",
""
],
[
"Jones",
"Llion",
""
],
[
"Uszkoreit",
"Jakob",
""
]
] | TITLE: One Model To Learn Them All
ABSTRACT: Deep learning yields great results across many fields, from speech
recognition and image classification to translation. But for each problem,
getting a deep model to work well involves research into the architecture and a
long period of tuning. We present a single model that yields good results on a
number of problems spanning multiple domains. In particular, this single model
is trained concurrently on ImageNet, multiple translation tasks, image
captioning (COCO dataset), a speech recognition corpus, and an English parsing
task. Our model architecture incorporates building blocks from multiple
domains. It contains convolutional layers, an attention mechanism, and
sparsely-gated layers. Each of these computational blocks is crucial for a
subset of the tasks we train on. Interestingly, even if a block is not crucial
for a task, we observe that adding it never hurts performance and in most cases
improves it on all tasks. We also show that tasks with less data benefit
largely from joint training with other tasks, while performance on large tasks
degrades only slightly if at all.
| no_new_dataset | 0.944434 |
1706.05150 | He-Da Wang | He-Da Wang, Teng Zhang, Ji Wu | The Monkeytyping Solution to the YouTube-8M Video Understanding
Challenge | Submitted to the CVPR 2017 Workshop on YouTube-8M Large-Scale Video
Understanding | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article describes the final solution of team monkeytyping, who finished
in second place in the YouTube-8M video understanding challenge. The dataset
used in this challenge is a large-scale benchmark for multi-label video
classification. We extend the work in [1] and propose several improvements for
frame sequence modeling. We propose a network structure called Chaining that
can better capture the interactions between labels. Also, we report our
approaches in dealing with multi-scale information and attention pooling. In
addition, We find that using the output of model ensemble as a side target in
training can boost single model performance. We report our experiments in
bagging, boosting, cascade, and stacking, and propose a stacking algorithm
called attention weighted stacking. Our final submission is an ensemble that
consists of 74 sub models, all of which are listed in the appendix.
| [
{
"version": "v1",
"created": "Fri, 16 Jun 2017 05:39:53 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Wang",
"He-Da",
""
],
[
"Zhang",
"Teng",
""
],
[
"Wu",
"Ji",
""
]
] | TITLE: The Monkeytyping Solution to the YouTube-8M Video Understanding
Challenge
ABSTRACT: This article describes the final solution of team monkeytyping, who finished
in second place in the YouTube-8M video understanding challenge. The dataset
used in this challenge is a large-scale benchmark for multi-label video
classification. We extend the work in [1] and propose several improvements for
frame sequence modeling. We propose a network structure called Chaining that
can better capture the interactions between labels. Also, we report our
approaches in dealing with multi-scale information and attention pooling. In
addition, we find that using the output of a model ensemble as a side target in
training can boost single model performance. We report our experiments in
bagging, boosting, cascade, and stacking, and propose a stacking algorithm
called attention weighted stacking. Our final submission is an ensemble that
consists of 74 sub models, all of which are listed in the appendix.
| no_new_dataset | 0.942771 |
1706.05157 | Shuai Li | Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, Yanbo Gao | A Fully Trainable Network with RNN-based Pooling | 17 pages, 5 figures, 4 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pooling is an important component in convolutional neural networks (CNNs) for
aggregating features and reducing computational burden. Compared with other
components such as convolutional layers and fully connected layers which are
completely learned from data, the pooling component is still handcrafted,
such as max pooling and average pooling. This paper proposes a learnable pooling
function using recurrent neural networks (RNN) so that the pooling can be fully
adapted to data and other components of the network, leading to an improved
performance. Such a network with learnable pooling function is referred to as a
fully trainable network (FTN). Experimental results have demonstrated that the
proposed RNN-based pooling can well approximate the existing pooling functions
and improve the performance of the network. Especially for small networks, the
proposed FTN can improve the performance by seven percentage points in terms of
error rate on the CIFAR-10 dataset compared with the traditional CNN.
| [
{
"version": "v1",
"created": "Fri, 16 Jun 2017 06:42:15 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Li",
"Shuai",
""
],
[
"Li",
"Wanqing",
""
],
[
"Cook",
"Chris",
""
],
[
"Zhu",
"Ce",
""
],
[
"Gao",
"Yanbo",
""
]
] | TITLE: A Fully Trainable Network with RNN-based Pooling
ABSTRACT: Pooling is an important component in convolutional neural networks (CNNs) for
aggregating features and reducing computational burden. Compared with other
components such as convolutional layers and fully connected layers which are
completely learned from data, the pooling component is still handcrafted,
such as max pooling and average pooling. This paper proposes a learnable pooling
function using recurrent neural networks (RNN) so that the pooling can be fully
adapted to data and other components of the network, leading to an improved
performance. Such a network with learnable pooling function is referred to as a
fully trainable network (FTN). Experimental results have demonstrated that the
proposed RNN-based pooling can well approximate the existing pooling functions
and improve the performance of the network. Especially for small networks, the
proposed FTN can improve the performance by seven percentage points in terms of
error rate on the CIFAR-10 dataset compared with the traditional CNN.
| no_new_dataset | 0.950732 |
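The learnable pooling idea can be sketched as follows: flatten each k x k pooling window into a short sequence, read it with a small GRU, and let the final hidden state replace the window, so the pooling function is trained end to end with the rest of the network. The details differ from the paper's exact formulation; this is a minimal stand-in.

```python
import torch
import torch.nn as nn

class RNNPool(nn.Module):
    def __init__(self, channels, k=2):
        super().__init__()
        self.k = k
        self.rnn = nn.GRU(input_size=channels, hidden_size=channels,
                          batch_first=True)

    def forward(self, x):                          # x: (B, C, H, W)
        B, C, H, W = x.shape
        k = self.k
        # split H and W into k x k windows, each a length k*k sequence of C-dim vectors
        xw = x.reshape(B, C, H // k, k, W // k, k)
        xw = xw.permute(0, 2, 4, 3, 5, 1).reshape(-1, k * k, C)
        _, h = self.rnn(xw)                        # final hidden state per window
        return h.reshape(B, H // k, W // k, C).permute(0, 3, 1, 2)

pool = RNNPool(channels=8)
print(pool(torch.randn(4, 8, 16, 16)).shape)       # torch.Size([4, 8, 8, 8])
```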
1706.05236 | David Weyburne | David Weyburne | Does the Outer Region of the Turbulent Boundary Layer Display Similar
Behavior? | 27 pages, 16 figures | null | null | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent theoretical results together with established theory have identified
the displacement thickness and the velocity at the boundary layer edge as
similarity scaling parameter candidates for the wall-bounded turbulent boundary
layer. In the work described herein, we examine these scaling parameters along
with the Prandtl Plus scalings and the Zagarola and Smits scalings to search
for similarity in the outer region of experimental turbulent boundary layer
velocity profile datasets. A new integral area method combined with the
traditional chi-by-eye method is used to search for similar velocity profiles.
The results indicate that strict whole profile similarity is not evident in any
of the datasets we searched. However, ten datasets are found that display
"similar-like" behavior using the ratio of the inner to outer thickness ratio
as a search criterion. In alignment with theory, the preferred similarity
scaling parameters for the similar-like behavior case are the displacement
thickness and the velocity at the boundary layer edge. It was found that there
are a few datasets for which the Prandtl Plus scaling and the Zagarola and
Smits scaling also work.
| [
{
"version": "v1",
"created": "Tue, 6 Jun 2017 15:13:11 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Weyburne",
"David",
""
]
] | TITLE: Does the Outer Region of the Turbulent Boundary Layer Display Similar
Behavior?
ABSTRACT: Recent theoretical results together with established theory have identified
the displacement thickness and the velocity at the boundary layer edge as
similarity scaling parameter candidates for the wall-bounded turbulent boundary
layer. In the work described herein, we examine these scaling parameters along
with the Prandtl Plus scalings and the Zagarola and Smits scalings to search
for similarity in the outer region of experimental turbulent boundary layer
velocity profile datasets. A new integral area method combined with the
traditional chi-by-eye method is used to search for similar velocity profiles.
The results indicate that strict whole profile similarity is not evident in any
of the datasets we searched. However, ten datasets are found that display
"similar-like" behavior using the ratio of the inner to outer thickness ratio
as a search criterion. In alignment with theory, the preferred similarity
scaling parameters for the similar-like behavior case are the displacement
thickness and the velocity at the boundary layer edge. It was found that there
are a few datasets for which the Prandtl Plus scaling and the Zagarola and
Smits scaling also work.
| no_new_dataset | 0.954095 |
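For reference, the displacement thickness invoked as an outer scaling candidate is the standard integral quantity (a minimal statement matching the usual definition, where u(y) is the streamwise velocity profile and u_e the velocity at the boundary layer edge):

```latex
\[
  \delta^{*} \;=\; \int_{0}^{\infty} \left( 1 - \frac{u(y)}{u_{e}} \right) dy
\]
```

Outer similarity then asks whether u/u_e collapses onto a single curve when plotted against y/\delta^{*}.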
1706.05288 | Mohammad Hosseini | Mohammad Hosseini, Yu Jiang, Ali Yekkehkhany, Richard R. Berlin, Lui
Sha | A Mobile Geo-Communication Dataset for Physiology-Aware DASH in Rural
Ambulance Transport | Proceedings of the 8th ACM on Multimedia Systems Conference
(MMSys'17), Pages 158-163, Taipei, Taiwan, June 20 - 23, 2017 | null | 10.1145/3083187.3083211 | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Use of telecommunication technologies for remote, continuous monitoring of
patients can enhance effectiveness of emergency ambulance care during transport
from rural areas to a regional center hospital. However, the communication
along the various routes in rural areas may have wide bandwidth ranges from 2G
to 4G; some regions may have only lower satellite bandwidth available.
Bandwidth fluctuation, together with real-time communication of various
clinical multimedia, poses a major challenge during rural patient ambulance
transport. The availability of a pre-transport route-dependent communication bandwidth
database is an important resource in remote monitoring and clinical multimedia
transmission in rural ambulance transport. Here, we present a geo-communication
dataset from extensive profiling of 4 major US mobile carriers in Illinois,
from the rural location of Hoopeston to the central referral hospital center at
Urbana. In collaboration with Carle Foundation Hospital, we developed a
profiler, and collected various geographical and communication traces for
realistic emergency rural ambulance transport scenarios. Our dataset is to
support our ongoing work of proposing "physiology-aware DASH", which is
particularly useful for adaptive remote monitoring of critically ill patients
in emergency rural ambulance transport. It provides insights on ensuring higher
Quality of Service (QoS) for most critical clinical multimedia in response to
changes in patients' physiological states and bandwidth conditions. Our dataset
is available online for the research community.
| [
{
"version": "v1",
"created": "Fri, 16 Jun 2017 14:28:53 GMT"
}
] | 2017-06-19T00:00:00 | [
[
"Hosseini",
"Mohammad",
""
],
[
"Jiang",
"Yu",
""
],
[
"Yekkehkhany",
"Ali",
""
],
[
"Berlin",
"Richard R.",
""
],
[
"Sha",
"Lui",
""
]
] | TITLE: A Mobile Geo-Communication Dataset for Physiology-Aware DASH in Rural
Ambulance Transport
ABSTRACT: Use of telecommunication technologies for remote, continuous monitoring of
patients can enhance effectiveness of emergency ambulance care during transport
from rural areas to a regional center hospital. However, the communication
along the various routes in rural areas may have wide bandwidth ranges from 2G
to 4G; some regions may have only lower satellite bandwidth available.
Bandwidth fluctuation together with real-time communication of various clinical
multimedia pose a major challenge during rural patient ambulance transport.
The availability of a pre-transport route-dependent communication bandwidth
database is an important resource in remote monitoring and clinical multimedia
transmission in rural ambulance transport. Here, we present a geo-communication
dataset from extensive profiling of 4 major US mobile carriers in Illinois,
from the rural location of Hoopeston to the central referral hospital center at
Urbana. In collaboration with Carle Foundation Hospital, we developed a
profiler, and collected various geographical and communication traces for
realistic emergency rural ambulance transport scenarios. Our dataset is
intended to support our ongoing work on "physiology-aware DASH", which is
particularly useful for adaptive remote monitoring of critically ill patients
in emergency rural ambulance transport. It provides insights on ensuring higher
Quality of Service (QoS) for most critical clinical multimedia in response to
changes in patients' physiological states and bandwidth conditions. Our dataset
is available online for the research community.
| new_dataset | 0.968974 |
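A trace like the one described in the record above pairs position with measured throughput per carrier. The sketch below shows one plausible record layout and CSV writer; the field names and example values are assumptions, since the schema is not spelled out here.

```python
import csv
from dataclasses import dataclass

@dataclass
class GeoBandwidthSample:
    timestamp: float       # seconds since epoch
    lat: float             # GPS latitude along the ambulance route
    lon: float             # GPS longitude
    carrier: str           # one of the four profiled US carriers
    downlink_kbps: float   # measured downlink throughput

def write_trace(samples, path="geo_bandwidth_trace.csv"):
    """Persist profiler samples so an adaptive-streaming client can look up
    the expected bandwidth for an upcoming route segment before transport."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "lat", "lon", "carrier", "downlink_kbps"])
        for s in samples:
            writer.writerow([s.timestamp, s.lat, s.lon, s.carrier, s.downlink_kbps])

# Illustrative sample near Hoopeston, IL; "carrier_a" is a placeholder name.
write_trace([GeoBandwidthSample(0.0, 40.467, -87.668, "carrier_a", 1850.0)])
```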
1605.09776 | Harish Sethu | Guyue Han and Harish Sethu | Waddling Random Walk: Fast and Accurate Mining of Motif Statistics in
Large Graphs | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithms for mining very large graphs, such as those representing online
social networks, to discover the relative frequency of small subgraphs within
them are of high interest to sociologists, computer scientists and marketeers
alike. However, the computation of these network motif statistics via naive
enumeration is infeasible due to either prohibitive computational costs or
access restrictions on the full graph data. Methods to estimate the motif
statistics based on random walks by sampling only a small fraction of the
subgraphs in the large graph address both of these challenges. In this paper,
we present a new algorithm, called the Waddling Random Walk (WRW), which
estimates the concentration of motifs of any size. It derives its name from the
fact that it sways a little to the left and to the right, thus also sampling
nodes not directly on the path of the random walk. The WRW algorithm achieves
its computational efficiency by not trying to enumerate subgraphs around the
random walk but instead using a randomized protocol to sample subgraphs in the
neighborhood of the nodes visited by the walk. In addition, WRW achieves
significantly higher accuracy (measured by the closeness of its estimate to the
correct value) and higher precision (measured by the low variance in its
estimations) than the current state-of-the-art algorithms for mining subgraph
statistics. We illustrate these advantages in speed, accuracy and precision
using simulations on well-known and widely used graph datasets representing
real networks.
| [
{
"version": "v1",
"created": "Tue, 31 May 2016 19:22:40 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2017 21:17:03 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Han",
"Guyue",
""
],
[
"Sethu",
"Harish",
""
]
] | TITLE: Waddling Random Walk: Fast and Accurate Mining of Motif Statistics in
Large Graphs
ABSTRACT: Algorithms for mining very large graphs, such as those representing online
social networks, to discover the relative frequency of small subgraphs within
them are of high interest to sociologists, computer scientists and marketeers
alike. However, the computation of these network motif statistics via naive
enumeration is infeasible due to either prohibitive computational costs or
access restrictions on the full graph data. Methods to estimate the motif
statistics based on random walks by sampling only a small fraction of the
subgraphs in the large graph address both of these challenges. In this paper,
we present a new algorithm, called the Waddling Random Walk (WRW), which
estimates the concentration of motifs of any size. It derives its name from the
fact that it sways a little to the left and to the right, thus also sampling
nodes not directly on the path of the random walk. The WRW algorithm achieves
its computational efficiency by not trying to enumerate subgraphs around the
random walk but instead using a randomized protocol to sample subgraphs in the
neighborhood of the nodes visited by the walk. In addition, WRW achieves
significantly higher accuracy (measured by the closeness of its estimate to the
correct value) and higher precision (measured by the low variance in its
estimations) than the current state-of-the-art algorithms for mining subgraph
statistics. We illustrate these advantages in speed, accuracy and precision
using simulations on well-known and widely used graph datasets representing
real networks.
| no_new_dataset | 0.948106 |
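To make the "waddling" idea in the record above concrete, here is a toy sketch in the spirit of the abstract: the walk takes a normal step and then samples one extra neighbor off the path to form a 3-node sample around the walk. It only reports the raw fraction of sampled connected triples that close into triangles; the actual WRW estimator reweights each sample by its sampling probability to obtain unbiased motif concentrations, which this sketch omits.

```python
import random

def waddling_walk_triangle_ratio(adj, steps=10_000, seed=0):
    """Toy 'waddle' sampler: walk u -> v, then peek at one extra neighbor w
    of v (the waddle) to form the triple (u, v, w). Returns the raw fraction
    of sampled connected triples that close into triangles (biased; the real
    WRW algorithm corrects for the sampling probabilities)."""
    rng = random.Random(seed)
    u = rng.choice(list(adj))
    closed = total = 0
    for _ in range(steps):
        v = rng.choice(list(adj[u]))      # next node on the walk
        w = rng.choice(list(adj[v]))      # the 'waddle' off the path
        if w != u:
            total += 1
            if w in adj[u]:               # does the triple close?
                closed += 1
        u = v
    return closed / total if total else 0.0

# Tiny example graph: a triangle plus a pendant node.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}
print(waddling_walk_triangle_ratio(adj))
```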
1610.01465 | Kushal Kafle | Kushal Kafle, Christopher Kanan | Visual Question Answering: Datasets, Algorithms, and Future Challenges | null | null | 10.1016/j.cviu.2017.06.005 | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Question Answering (VQA) is a recent problem in computer vision and
natural language processing that has garnered a large amount of interest from
the deep learning, computer vision, and natural language processing
communities. In VQA, an algorithm needs to answer text-based questions about
images. Since the release of the first VQA dataset in 2014, additional datasets
have been released and many algorithms have been proposed. In this review, we
critically examine the current state of VQA in terms of problem formulation,
existing datasets, evaluation metrics, and algorithms. In particular, we
discuss the limitations of current datasets with regard to their ability to
properly train and assess VQA algorithms. We then exhaustively review existing
algorithms for VQA. Finally, we discuss possible future directions for VQA and
image understanding research.
| [
{
"version": "v1",
"created": "Wed, 5 Oct 2016 14:58:36 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Oct 2016 01:39:40 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Mar 2017 05:39:21 GMT"
},
{
"version": "v4",
"created": "Thu, 15 Jun 2017 01:52:59 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Kafle",
"Kushal",
""
],
[
"Kanan",
"Christopher",
""
]
] | TITLE: Visual Question Answering: Datasets, Algorithms, and Future Challenges
ABSTRACT: Visual Question Answering (VQA) is a recent problem in computer vision and
natural language processing that has garnered a large amount of interest from
the deep learning, computer vision, and natural language processing
communities. In VQA, an algorithm needs to answer text-based questions about
images. Since the release of the first VQA dataset in 2014, additional datasets
have been released and many algorithms have been proposed. In this review, we
critically examine the current state of VQA in terms of problem formulation,
existing datasets, evaluation metrics, and algorithms. In particular, we
discuss the limitations of current datasets with regard to their ability to
properly train and assess VQA algorithms. We then exhaustively review existing
algorithms for VQA. Finally, we discuss possible future directions for VQA and
image understanding research.
| new_dataset | 0.95846 |
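For readers new to the task surveyed above, a minimal "classification over a fixed answer vocabulary" baseline, common in the literature such reviews cover, looks roughly like the sketch below. The elementwise-product fusion and all dimensions are generic choices assumed here, not details from this paper.

```python
import torch
import torch.nn as nn

class VQABaseline(nn.Module):
    """Minimal VQA baseline: project precomputed image and question features
    into a shared space, fuse them, and score a fixed answer vocabulary."""
    def __init__(self, img_dim=2048, q_dim=1024, hidden=512, n_answers=3000):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)
        self.q_proj = nn.Linear(q_dim, hidden)
        self.classifier = nn.Linear(hidden, n_answers)

    def forward(self, img_feat, q_feat):
        fused = torch.tanh(self.img_proj(img_feat)) * torch.tanh(self.q_proj(q_feat))
        return self.classifier(fused)  # logits over the answer vocabulary

model = VQABaseline()
logits = model(torch.randn(4, 2048), torch.randn(4, 1024))
print(logits.shape)  # torch.Size([4, 3000])
```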
1610.06525 | Lucas Maystre | Lucas Maystre, Matthias Grossglauser | ChoiceRank: Identifying Preferences from Node Traffic in Networks | Accepted at ICML 2017 | null | null | null | stat.ML cs.LG cs.SI | http://creativecommons.org/licenses/by/4.0/ | Understanding how users navigate in a network is of high interest in many
applications. We consider a setting where only aggregate node-level traffic is
observed and tackle the task of learning edge transition probabilities. We cast
it as a preference learning problem, and we study a model where choices follow
Luce's axiom. In this case, the $O(n)$ marginal counts of node visits are a
sufficient statistic for the $O(n^2)$ transition probabilities. We show how to
make the inference problem well-posed regardless of the network's structure,
and we present ChoiceRank, an iterative algorithm that scales to networks that
contain billions of nodes and edges. We apply the model to two clickstream
datasets and show that it successfully recovers the transition probabilities
using only the network structure and marginal (node-level) traffic data.
Finally, we also consider an application to mobility networks and apply the
model to one year of rides on New York City's bicycle-sharing system.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2016 18:19:07 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2017 15:14:54 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Maystre",
"Lucas",
""
],
[
"Grossglauser",
"Matthias",
""
]
] | TITLE: ChoiceRank: Identifying Preferences from Node Traffic in Networks
ABSTRACT: Understanding how users navigate in a network is of high interest in many
applications. We consider a setting where only aggregate node-level traffic is
observed and tackle the task of learning edge transition probabilities. We cast
it as a preference learning problem, and we study a model where choices follow
Luce's axiom. In this case, the $O(n)$ marginal counts of node visits are a
sufficient statistic for the $O(n^2)$ transition probabilities. We show how to
make the inference problem well-posed regardless of the network's structure,
and we present ChoiceRank, an iterative algorithm that scales to networks that
contain billions of nodes and edges. We apply the model to two clickstream
datasets and show that it successfully recovers the transition probabilities
using only the network structure and marginal (node-level) traffic data.
Finally, we also consider an application to mobility networks and apply the
model to one year of rides on New York City's bicycle-sharing system.
| no_new_dataset | 0.94699 |
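The inference step in the record above can be sketched as a simple iterative (MM-style) update over Luce strengths: each node's strength is re-estimated from its observed incoming traffic, divided by how much traffic its in-neighbors had available to send it. This is a hedged simplification; the actual ChoiceRank update includes the regularization that keeps the problem well-posed on any network structure.

```python
import numpy as np

def choicerank_sketch(neighbors, in_counts, out_counts, iters=100):
    """Sketch of a Luce-model MM update in the spirit of ChoiceRank.
    Under Luce's axiom, a transition out of j lands on neighbor i with
    probability lam[i] / sum(lam[k] for k in neighbors[j]); only the
    marginal per-node traffic (in_counts, out_counts) is observed."""
    n = len(neighbors)
    in_counts = np.asarray(in_counts, dtype=float)
    out_counts = np.asarray(out_counts, dtype=float)
    lam = np.ones(n)
    for _ in range(iters):
        denom = np.zeros(n)
        for j in range(n):
            if neighbors[j]:
                z = sum(lam[i] for i in neighbors[j])
                for i in neighbors[j]:
                    denom[i] += out_counts[j] / z   # traffic j could send to i
        lam = np.divide(in_counts, denom, out=lam, where=denom > 0)
        lam /= lam.sum()                            # Luce strengths are scale-free
    return lam

# Tiny example: three nodes on a cycle, with node 2 visibly preferred.
neighbors = [[1, 2], [0, 2], [0, 1]]
print(choicerank_sketch(neighbors, in_counts=[10, 10, 30], out_counts=[20, 15, 15]))
```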