id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1512.06474 | Theodoros Rekatsinas | Manas Joglekar and Theodoros Rekatsinas and Hector Garcia-Molina and
Aditya Parameswaran and Christopher R\'e | SLiMFast: Guaranteed Results for Data Fusion and Source Reliability | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus on data fusion, i.e., the problem of unifying conflicting data from
data sources into a single representation by estimating the source accuracies.
We propose SLiMFast, a framework that expresses data fusion as a statistical
learning problem over discriminative probabilistic models, which in many cases
correspond to logistic regression. In contrast to previous approaches that use
complex generative models, discriminative models make fewer distributional
assumptions over data sources and allow us to obtain rigorous theoretical
guarantees. Furthermore, we show how SLiMFast enables incorporating domain
knowledge into data fusion, yielding accuracy improvements of up to 50% over
state-of-the-art baselines. Building upon our theoretical results, we design an
optimizer that obviates the need for users to manually select an algorithm for
learning SLiMFast's parameters. We validate our optimizer on multiple
real-world datasets and show that it can accurately predict the learning
algorithm that yields the best data fusion results.
| [
{
"version": "v1",
"created": "Mon, 21 Dec 2015 02:28:17 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2016 22:55:37 GMT"
},
{
"version": "v3",
"created": "Sat, 12 Nov 2016 17:33:47 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Joglekar",
"Manas",
""
],
[
"Rekatsinas",
"Theodoros",
""
],
[
"Garcia-Molina",
"Hector",
""
],
[
"Parameswaran",
"Aditya",
""
],
[
"Ré",
"Christopher",
""
]
] | TITLE: SLiMFast: Guaranteed Results for Data Fusion and Source Reliability
ABSTRACT: We focus on data fusion, i.e., the problem of unifying conflicting data from
data sources into a single representation by estimating the source accuracies.
We propose SLiMFast, a framework that expresses data fusion as a statistical
learning problem over discriminative probabilistic models, which in many cases
correspond to logistic regression. In contrast to previous approaches that use
complex generative models, discriminative models make fewer distributional
assumptions over data sources and allow us to obtain rigorous theoretical
guarantees. Furthermore, we show how SLiMFast enables incorporating domain
knowledge into data fusion, yielding accuracy improvements of up to 50% over
state-of-the-art baselines. Building upon our theoretical results, we design an
optimizer that obviates the need for users to manually select an algorithm for
learning SLiMFast's parameters. We validate our optimizer on multiple
real-world datasets and show that it can accurately predict the learning
algorithm that yields the best data fusion results.
| no_new_dataset | 0.945801 |
1602.00994 | Kai Zhao | Kai Zhao, C Mohan Prasath, Sasu Tarkoma | Automatic City Region Analysis for Urban Routing | In proceedings of the IEEE International Conference on Data Mining
(ICDM) workshop 2015 | null | 10.1109/ICDMW.2015.176 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are different functional regions in cities such as tourist attractions,
shopping centers, workplaces and residential places. The human mobility
patterns for different functional regions are different, e.g., people usually
go to work during daytime on weekdays, and visit shopping centers after work.
In this paper, we analyse urban human mobility patterns and infer the functions
of the regions in three cities. The analysis is based on three large taxi GPS
datasets in Rome, San Francisco and Beijing containing 21 million, 11 million
and 17 million GPS points respectively. We categorized the city regions into
four kinds of places: workplaces, entertainment places, residential places and
other places. First, we provide a new quad-tree region division method based on
the taxi visits. Second, we use the association rule to infer the functional
regions in these three cities according to temporal human mobility patterns.
Third, we show that these identified functional regions can help us deliver
data in network applications, such as urban Delay Tolerant Networks (DTNs),
more efficiently. The new functional-regions-based DTNs algorithm achieves up
to 183% improvement in terms of delivery ratio.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2016 16:18:58 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Zhao",
"Kai",
""
],
[
"Prasath",
"C Mohan",
""
],
[
"Tarkoma",
"Sasu",
""
]
] | TITLE: Automatic City Region Analysis for Urban Routing
ABSTRACT: There are different functional regions in cities such as tourist attractions,
shopping centers, workplaces and residential places. The human mobility
patterns for different functional regions are different, e.g., people usually
go to work during daytime on weekdays, and visit shopping centers after work.
In this paper, we analyse urban human mobility patterns and infer the functions
of the regions in three cities. The analysis is based on three large taxi GPS
datasets in Rome, San Francisco and Beijing containing 21 million, 11 million
and 17 million GPS points respectively. We categorized the city regions into
four kinds of places: workplaces, entertainment places, residential places and
other places. First, we provide a new quad-tree region division method based on
the taxi visits. Second, we use the association rule to infer the functional
regions in these three cities according to temporal human mobility patterns.
Third, we show that these identified functional regions can help us deliver
data in network applications, such as urban Delay Tolerant Networks (DTNs),
more efficiently. The new functional-regions-based DTNs algorithm achieves up
to 183% improvement in terms of delivery ratio.
| no_new_dataset | 0.947769 |
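The quad-tree region division described in the abstract above is easy to sketch. The following is a minimal illustration under assumed inputs (taxi visits given as (x, y) points inside a rectangular bounding box), not the authors' implementation; `max_visits` and `min_size` are hypothetical parameters.

```python
# Minimal sketch of a visit-driven quad-tree region division (assumed
# interface, not the paper's code). A region is split into four equal
# quadrants whenever it contains more than `max_visits` taxi GPS points.

def quad_tree_regions(points, x0, y0, x1, y1, max_visits=1000, min_size=0.001):
    """Return a list of (x0, y0, x1, y1) leaf regions."""
    inside = [(x, y) for (x, y) in points if x0 <= x < x1 and y0 <= y < y1]
    too_small = (x1 - x0) <= min_size or (y1 - y0) <= min_size
    if len(inside) <= max_visits or too_small:
        return [(x0, y0, x1, y1)]                     # leaf region
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    regions = []
    for (ax, ay, bx, by) in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                             (x0, ym, xm, y1), (xm, ym, x1, y1)]:
        regions.extend(quad_tree_regions(inside, ax, ay, bx, by,
                                         max_visits, min_size))
    return regions
```

Functional labels (workplace, entertainment, residential, other) would then be inferred per leaf region from its temporal visit profile, e.g., via association rules over hour-of-day visit counts, as the abstract describes.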
1604.00147 | Lijuan Zhou | Lijuan Zhou, Wanqing Li, and Philip Ogunbona | Learning a Pose Lexicon for Semantic Action Recognition | Accepted by the 2016 IEEE International Conference on Multimedia and
Expo (ICME 2016). 6 pages paper and 4 pages supplementary material | null | 10.1109/ICME.2016.7552882 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel method for learning a pose lexicon comprising
semantic poses defined by textual instructions and their associated visual
poses defined by visual features. The proposed method simultaneously takes two
input streams, semantic poses and visual pose candidates, and statistically
learns a mapping between them to construct the lexicon. With the learned
lexicon, action recognition can be cast as the problem of finding the maximum
translation probability of a sequence of semantic poses given a stream of
visual pose candidates. Experiments evaluating pre-trained and zero-shot action
recognition conducted on MSRC-12 gesture and WorkoutSu-10 exercise datasets
were used to verify the efficacy of the proposed method.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 06:24:31 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Zhou",
"Lijuan",
""
],
[
"Li",
"Wanqing",
""
],
[
"Ogunbona",
"Philip",
""
]
] | TITLE: Learning a Pose Lexicon for Semantic Action Recognition
ABSTRACT: This paper presents a novel method for learning a pose lexicon comprising
semantic poses defined by textual instructions and their associated visual
poses defined by visual features. The proposed method simultaneously takes two
input streams, semantic poses and visual pose candidates, and statistically
learns a mapping between them to construct the lexicon. With the learned
lexicon, action recognition can be cast as the problem of finding the maximum
translation probability of a sequence of semantic poses given a stream of
visual pose candidates. Experiments evaluating pre-trained and zero-shot action
recognition conducted on MSRC-12 gesture and WorkoutSu-10 exercise datasets
were used to verify the efficacy of the proposed method.
| no_new_dataset | 0.93852 |
1604.07045 | Mario Valerio Giuffrida | Mario Valerio Giuffrida and Sotirios A. Tsaftaris | Rotation-Invariant Restricted Boltzmann Machine Using Shared Gradient
Filters | 8 pages, 3 figures, 1 table | null | 10.1007/978-3-319-44781-0_57 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding suitable features has been an essential problem in computer vision.
We focus on Restricted Boltzmann Machines (RBMs), which, despite their
versatility, cannot accommodate transformations that may occur in the scene. As
a result, several approaches have been proposed that consider a set of
transformations, which are used to either augment the training set or transform
the actual learned filters. In this paper, we propose the Explicit
Rotation-Invariant Restricted Boltzmann Machine, which exploits prior
information coming from the dominant orientation of images. Our model extends
the standard RBM by adding a suitable number of weight matrices, one associated
with each dominant gradient. We show that our approach is able to learn
rotation-invariant features, comparing it with the classic formulation of the RBM
on the MNIST benchmark dataset. Overall, by requiring fewer hidden units, our
method learns compact features that are robust to rotations.
| [
{
"version": "v1",
"created": "Sun, 24 Apr 2016 15:56:18 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Jun 2016 09:59:47 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Giuffrida",
"Mario Valerio",
""
],
[
"Tsaftaris",
"Sotirios A.",
""
]
] | TITLE: Rotation-Invariant Restricted Boltzmann Machine Using Shared Gradient
Filters
ABSTRACT: Finding suitable features has been an essential problem in computer vision.
We focus on Restricted Boltzmann Machines (RBMs), which, despite their
versatility, cannot accommodate transformations that may occur in the scene. As
a result, several approaches have been proposed that consider a set of
transformations, which are used to either augment the training set or transform
the actual learned filters. In this paper, we propose the Explicit
Rotation-Invariant Restricted Boltzmann Machine, which exploits prior
information coming from the dominant orientation of images. Our model extends
the standard RBM by adding a suitable number of weight matrices, one associated
with each dominant gradient. We show that our approach is able to learn
rotation-invariant features, comparing it with the classic formulation of the RBM
on the MNIST benchmark dataset. Overall, by requiring fewer hidden units, our
method learns compact features that are robust to rotations.
| no_new_dataset | 0.949902 |
1604.07638 | Yixin Bao | Yixin Bao, Xiaoke Wang, Zhi Wang, Chuan Wu, Francis C.M. Lau | Online Influence Maximization in Non-Stationary Social Networks | 10 pages. To appear in IEEE/ACM IWQoS 2016. Full version | null | 10.1109/IWQoS.2016.7590438 | null | cs.SI cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social networks have been popular platforms for information propagation. An
important use case is viral marketing: given a promotion budget, an advertiser
can choose some influential users as the seed set and provide them free or
discounted sample products; in this way, the advertiser hopes to increase the
popularity of the product in the users' friend circles through the word-of-mouth
effect, and thus to maximize the number of users that information about the
product can reach. There has been a body of literature studying the
influence maximization problem. Nevertheless, the existing studies mostly
investigate the problem on a one-off basis, assuming fixed known influence
probabilities among users, or the knowledge of the exact social network
topology. In practice, the social network topology and the influence
probabilities are typically unknown to the advertiser, which can be varying
over time, i.e., in cases of newly established, strengthened or weakened social
ties. In this paper, we focus on a dynamic non-stationary social network and
design a randomized algorithm, RSB, based on multi-armed bandit optimization,
to maximize influence propagation over time. The algorithm produces a sequence
of online decisions and calibrates its explore-exploit strategy utilizing
outcomes of previous decisions. It is rigorously proven to achieve an
upper-bounded regret in reward and to be applicable to large-scale social networks.
Practical effectiveness of the algorithm is evaluated using both synthetic and
real-world datasets, which demonstrates that our algorithm outperforms previous
stationary methods under non-stationary conditions.
| [
{
"version": "v1",
"created": "Tue, 26 Apr 2016 12:02:55 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Bao",
"Yixin",
""
],
[
"Wang",
"Xiaoke",
""
],
[
"Wang",
"Zhi",
""
],
[
"Wu",
"Chuan",
""
],
[
"Lau",
"Francis C. M.",
""
]
] | TITLE: Online Influence Maximization in Non-Stationary Social Networks
ABSTRACT: Social networks have been popular platforms for information propagation. An
important use case is viral marketing: given a promotion budget, an advertiser
can choose some influential users as the seed set and provide them free or
discounted sample products; in this way, the advertiser hopes to increase the
popularity of the product in the users' friend circles through the word-of-mouth
effect, and thus to maximize the number of users that information about the
product can reach. There has been a body of literature studying the
influence maximization problem. Nevertheless, the existing studies mostly
investigate the problem on a one-off basis, assuming fixed known influence
probabilities among users, or the knowledge of the exact social network
topology. In practice, the social network topology and the influence
probabilities are typically unknown to the advertiser, and they can vary over
time, e.g., as social ties are newly established, strengthened or weakened. In
this paper, we focus on a dynamic non-stationary social network and
design a randomized algorithm, RSB, based on multi-armed bandit optimization,
to maximize influence propagation over time. The algorithm produces a sequence
of online decisions and calibrates its explore-exploit strategy utilizing
outcomes of previous decisions. It is rigorously proven to achieve an
upper-bounded regret in reward and to be applicable to large-scale social networks.
Practical effectiveness of the algorithm is evaluated using both synthetic and
real-world datasets, which demonstrates that our algorithm outperforms previous
stationary methods under non-stationary conditions.
| no_new_dataset | 0.943815 |
1605.03259 | Chi Su | Chi Su, Shiliang Zhang, Junliang Xing, Wen Gao and Qi Tian | Deep Attributes Driven Multi-Camera Person Re-identification | Person Re-identification; 17 pages; 5 figures; In IEEE ECCV 2016 | null | 10.1007/978-3-319-46475-6_30 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The visual appearance of a person is easily affected by many factors like
pose variations, viewpoint changes and camera parameter differences. This makes
person Re-Identification (ReID) among multiple cameras a very challenging task.
This work is motivated to learn mid-level human attributes which are robust to
such visual appearance variations. We therefore propose a semi-supervised attribute
learning framework which progressively boosts the accuracy of attributes only
using a limited number of labeled data. Specifically, this framework involves a
three-stage training. A deep Convolutional Neural Network (dCNN) is first
trained on an independent dataset labeled with attributes. Then it is
fine-tuned on another dataset only labeled with person IDs using our defined
triplet loss. Finally, the updated dCNN predicts attribute labels for the
target dataset, which is combined with the independent dataset for the final
round of fine-tuning. The predicted attributes, namely *deep attributes*,
exhibit superior generalization ability across different datasets. By directly
using the deep attributes with simple Cosine distance, we have obtained
surprisingly good accuracy on four person ReID datasets. Experiments also show
that a simple metric learning module further boosts our method, making it
significantly outperform many recent works.
| [
{
"version": "v1",
"created": "Wed, 11 May 2016 02:05:22 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2016 05:58:03 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Su",
"Chi",
""
],
[
"Zhang",
"Shiliang",
""
],
[
"Xing",
"Junliang",
""
],
[
"Gao",
"Wen",
""
],
[
"Tian",
"Qi",
""
]
] | TITLE: Deep Attributes Driven Multi-Camera Person Re-identification
ABSTRACT: The visual appearance of a person is easily affected by many factors like
pose variations, viewpoint changes and camera parameter differences. This makes
person Re-Identification (ReID) among multiple cameras a very challenging task.
This work is motivated to learn mid-level human attributes which are robust to
such visual appearance variations. We therefore propose a semi-supervised attribute
learning framework which progressively boosts the accuracy of attributes only
using a limited number of labeled data. Specifically, this framework involves a
three-stage training. A deep Convolutional Neural Network (dCNN) is first
trained on an independent dataset labeled with attributes. Then it is
fine-tuned on another dataset only labeled with person IDs using our defined
triplet loss. Finally, the updated dCNN predicts attribute labels for the
target dataset, which is combined with the independent dataset for the final
round of fine-tuning. The predicted attributes, namely *deep attributes*,
exhibit superior generalization ability across different datasets. By directly
using the deep attributes with simple Cosine distance, we have obtained
surprisingly good accuracy on four person ReID datasets. Experiments also show
that a simple metric learning module further boosts our method, making it
significantly outperform many recent works.
| no_new_dataset | 0.945901 |
1605.04478 | Hamid Tizhoosh | Mina Nouredanesh, Hamid R. Tizhoosh, Ershad Banijamali | Gabor Barcodes for Medical Image Retrieval | To appear in proceedings of The 2016 IEEE International Conference on
Image Processing (ICIP 2016), Sep 25-28, 2016, Phoenix, Arizona, USA | null | 10.1109/ICIP.2016.7532807 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, advances in medical imaging have led to the emergence of
massive databases, containing images from a diverse range of modalities. This
has significantly heightened the need for automated annotation of the images on
one side, and fast and memory-efficient content-based image retrieval systems
on the other side. Binary descriptors have recently gained more attention as a
potential vehicle to achieve these goals. One of the recently introduced binary
descriptors for tagging medical images is the Radon barcode (RBC), which is
derived from the Radon transform via local thresholding. The Gabor transform is also a
powerful transform to extract texture-based information. Gabor features have
exhibited robustness against rotation, scale, and also photometric
disturbances, such as illumination changes and image noise in many
applications. This paper introduces Gabor Barcodes (GBCs) as a novel framework
for image annotation. To find the most discriminative GBC for a given query
image, the effects of employing Gabor filters with different parameters, i.e.,
different sets of scales and orientations, are investigated, resulting in
different barcode lengths and retrieval performances. The proposed method has
been evaluated on the IRMA dataset with 193 classes, comprising 12,677 x-ray
images for indexing and 1,733 x-ray images for testing. A total error score
as low as $351$ ($\approx 80\%$ accuracy for the first hit) was achieved.
| [
{
"version": "v1",
"created": "Sat, 14 May 2016 22:39:29 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Nouredanesh",
"Mina",
""
],
[
"Tizhoosh",
"Hamid R.",
""
],
[
"Banijamali",
"Ershad",
""
]
] | TITLE: Gabor Barcodes for Medical Image Retrieval
ABSTRACT: In recent years, advances in medical imaging have led to the emergence of
massive databases, containing images from a diverse range of modalities. This
has significantly heightened the need for automated annotation of the images on
one side, and fast and memory-efficient content-based image retrieval systems
on the other side. Binary descriptors have recently gained more attention as a
potential vehicle to achieve these goals. One of the recently introduced binary
descriptors for tagging medical images is the Radon barcode (RBC), which is
derived from the Radon transform via local thresholding. The Gabor transform is also a
powerful transform to extract texture-based information. Gabor features have
exhibited robustness against rotation, scale, and also photometric
disturbances, such as illumination changes and image noise in many
applications. This paper introduces Gabor Barcodes (GBCs) as a novel framework
for image annotation. To find the most discriminative GBC for a given query
image, the effects of employing Gabor filters with different parameters, i.e.,
different sets of scales and orientations, are investigated, resulting in
different barcode lengths and retrieval performances. The proposed method has
been evaluated on the IRMA dataset with 193 classes, comprising 12,677 x-ray
images for indexing and 1,733 x-ray images for testing. A total error score
as low as $351$ ($\approx 80\%$ accuracy for the first hit) was achieved.
| no_new_dataset | 0.949949 |
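A barcode descriptor of the kind this record's abstract characterizes (projections binarized by local thresholding) can be sketched as follows. This is illustrative only: binarizing each projection at its own median is one simple local-thresholding choice, an assumption here rather than the exact RBC or GBC rule.

```python
# Sketch of a Radon-barcode-style binary descriptor (illustrative only).
# Assumes a 2D grayscale image; each projection is binarized at its own
# median, one plausible "local thresholding" rule.
import numpy as np
from skimage.transform import radon

def radon_barcode(image, angles=(0, 45, 90, 135)):
    sinogram = radon(image, theta=list(angles))   # one column per angle
    bits = []
    for j in range(sinogram.shape[1]):
        projection = sinogram[:, j]
        bits.append(projection > np.median(projection))  # local binarization
    return np.concatenate(bits).astype(np.uint8)  # concatenated barcode
```

Varying the set of angles (and, for Gabor-based variants, the filter scales and orientations) changes the barcode length, which is exactly the trade-off the abstract says the paper investigates.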
1605.09080 | Forough Arabshahi | Forough Arabshahi, Animashree Anandkumar | Spectral Methods for Correlated Topic Models | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose guaranteed spectral methods for learning a broad
range of topic models, which generalize the popular Latent Dirichlet Allocation
(LDA). We overcome the limitation of LDA to incorporate arbitrary topic
correlations, by assuming that the hidden topic proportions are drawn from a
flexible class of Normalized Infinitely Divisible (NID) distributions. NID
distributions are generated through the process of normalizing a family of
independent Infinitely Divisible (ID) random variables. The Dirichlet
distribution is a special case obtained by normalizing a set of Gamma random
variables. We prove that this flexible topic model class can be learned via
spectral methods using only moments up to the third order, with (low order)
polynomial sample and computational complexity. The proof is based on a key new
technique derived here that allows us to diagonalize the moments of the NID
distribution through an efficient procedure that requires evaluating only
univariate integrals, despite the fact that we are handling high dimensional
multivariate moments. In order to assess the performance of our proposed Latent
NID topic model, we use two real datasets of articles collected from New York
Times and PubMed. Our experiments yield improved perplexity on both datasets
compared with the baseline.
| [
{
"version": "v1",
"created": "Mon, 30 May 2016 00:32:11 GMT"
},
{
"version": "v2",
"created": "Tue, 31 May 2016 14:30:11 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Jun 2016 08:27:34 GMT"
},
{
"version": "v4",
"created": "Sat, 20 Aug 2016 01:44:30 GMT"
},
{
"version": "v5",
"created": "Sun, 13 Nov 2016 20:24:02 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Arabshahi",
"Forough",
""
],
[
"Anandkumar",
"Animashree",
""
]
] | TITLE: Spectral Methods for Correlated Topic Models
ABSTRACT: In this paper, we propose guaranteed spectral methods for learning a broad
range of topic models, which generalize the popular Latent Dirichlet Allocation
(LDA). We overcome the limitation of LDA to incorporate arbitrary topic
correlations, by assuming that the hidden topic proportions are drawn from a
flexible class of Normalized Infinitely Divisible (NID) distributions. NID
distributions are generated through the process of normalizing a family of
independent Infinitely Divisible (ID) random variables. The Dirichlet
distribution is a special case obtained by normalizing a set of Gamma random
variables. We prove that this flexible topic model class can be learned via
spectral methods using only moments up to the third order, with (low order)
polynomial sample and computational complexity. The proof is based on a key new
technique derived here that allows us to diagonalize the moments of the NID
distribution through an efficient procedure that requires evaluating only
univariate integrals, despite the fact that we are handling high dimensional
multivariate moments. In order to assess the performance of our proposed Latent
NID topic model, we use two real datasets of articles collected from New York
Times and PubMed. Our experiments yield improved perplexity on both datasets
compared with the baseline.
| no_new_dataset | 0.944228 |
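The NID construction in the abstract above generalizes a standard fact: normalizing independent Gamma draws with a common scale yields a Dirichlet sample. A minimal sketch of that special case (other infinitely divisible families would be swapped in analogously):

```python
# Sketch: topic proportions via normalized independent random variables.
# Normalizing independent Gamma(alpha_k, 1) draws gives exactly a
# Dirichlet(alpha) sample -- the special case of NID distributions
# mentioned in the abstract.
import numpy as np

def sample_normalized_gamma(alpha, rng=None):
    rng = rng or np.random.default_rng()
    g = rng.gamma(shape=np.asarray(alpha), scale=1.0)  # independent ID draws
    return g / g.sum()                                 # normalize onto simplex

theta = sample_normalized_gamma([0.5, 1.0, 2.0])
assert abs(theta.sum() - 1.0) < 1e-12   # a point on the probability simplex
```

Replacing the Gamma family with another infinitely divisible family before normalizing is what lets the NID class express topic correlations that the Dirichlet cannot.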
1607.00455 | Ehsan Hosseini-Asl | Ehsan Hosseini-Asl, Robert Keynto, Ayman El-Baz | Alzheimer's Disease Diagnostics by Adaptation of 3D Convolutional
Network | This paper is accepted for publication at IEEE ICIP 2016 conference | null | 10.1109/ICIP.2016.7532332 | null | cs.LG q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Early diagnosis, which plays an important role in slowing the progression of
Alzheimer's disease (AD) and in its treatment, is based on the classification of
features extracted from brain images. The features have to accurately capture
main AD-related variations of anatomical brain structures, such as, e.g.,
ventricle size, hippocampus shape, cortical thickness, and brain volume. This
paper proposes to predict AD with a deep 3D convolutional neural network
(3D-CNN), which can learn generic features capturing AD biomarkers and adapt to
different domain datasets. The 3D-CNN is built upon a 3D convolutional
autoencoder, which is pre-trained to capture anatomical shape variations in
structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then
fine-tuned for each task-specific AD classification. Experiments on the
CADDementia MRI dataset with no skull-stripping preprocessing have shown our
3D-CNN outperforms several conventional classifiers by accuracy. Abilities of
the 3D-CNN to generalize the features learnt and adapt to other domains have
been validated on the ADNI dataset.
| [
{
"version": "v1",
"created": "Sat, 2 Jul 2016 02:55:16 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Hosseini-Asl",
"Ehsan",
""
],
[
"Keynto",
"Robert",
""
],
[
"El-Baz",
"Ayman",
""
]
] | TITLE: Alzheimer's Disease Diagnostics by Adaptation of 3D Convolutional
Network
ABSTRACT: Early diagnosis, which plays an important role in slowing the progression
of Alzheimer's disease (AD) and in its treatment, is based on the classification of
features extracted from brain images. The features have to accurately capture
main AD-related variations of anatomical brain structures, such as, e.g.,
ventricle size, hippocampus shape, cortical thickness, and brain volume. This
paper proposes to predict AD with a deep 3D convolutional neural network
(3D-CNN), which can learn generic features capturing AD biomarkers and adapt to
different domain datasets. The 3D-CNN is built upon a 3D convolutional
autoencoder, which is pre-trained to capture anatomical shape variations in
structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then
fine-tuned for each task-specific AD classification. Experiments on the
CADDementia MRI dataset with no skull-stripping preprocessing have shown our
3D-CNN outperforms several conventional classifiers by accuracy. Abilities of
the 3D-CNN to generalize the features learnt and adapt to other domains have
been validated on the ADNI dataset.
| no_new_dataset | 0.948298 |
1607.05387 | Hanock Kwak | Hanock Kwak, Byoung-Tak Zhang | Generating Images Part by Part with Composite Generative Adversarial
Networks | null | null | null | null | cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image generation remains a fundamental problem in artificial intelligence in
general and deep learning in particular. The generative adversarial network (GAN)
was successful in generating high-quality samples of natural images. We propose
a model called the composite generative adversarial network, which reveals the
complex structure of images with multiple generators, in which each generator
generates some part of the image. Those parts are combined by an alpha blending
process to create a single new image. It can generate, for example, background
and face sequentially with two generators, after training on a face dataset.
Training was done in an unsupervised way without any labels about what each
generator should generate. Empirically, we found that this generative model is
able to learn such structure.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2016 03:09:31 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2016 07:32:35 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Kwak",
"Hanock",
""
],
[
"Zhang",
"Byoung-Tak",
""
]
] | TITLE: Generating Images Part by Part with Composite Generative Adversarial
Networks
ABSTRACT: Image generation remains a fundamental problem in artificial intelligence in
general and deep learning in particular. The generative adversarial network (GAN)
was successful in generating high-quality samples of natural images. We propose
a model called the composite generative adversarial network, which reveals the
complex structure of images with multiple generators, in which each generator
generates some part of the image. Those parts are combined by an alpha blending
process to create a single new image. It can generate, for example, background
and face sequentially with two generators, after training on a face dataset.
Training was done in an unsupervised way without any labels about what each
generator should generate. Empirically, we found that this generative model is
able to learn such structure.
| no_new_dataset | 0.950824 |
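The alpha-blending composition described in the abstract above amounts to a convex per-pixel mix of generator outputs. A minimal two-generator sketch, where the image shapes and the source of the alpha mask are assumptions for illustration (in the paper the inputs would come from trained GAN generators):

```python
# Sketch of composing two generator outputs by alpha blending
# (illustrative; stand-in arrays replace actual GAN generator outputs).
import numpy as np

def alpha_blend(background, foreground, alpha):
    """Per-pixel convex combination: alpha=1 keeps the foreground."""
    alpha = alpha[..., None]                     # broadcast over channels
    return alpha * foreground + (1.0 - alpha) * background

H, W = 64, 64
background = np.random.rand(H, W, 3)   # stand-in for generator 1 output
foreground = np.random.rand(H, W, 3)   # stand-in for generator 2 output
alpha = np.random.rand(H, W)           # stand-in for a predicted alpha mask
image = alpha_blend(background, foreground, alpha)
```

Because the combination is differentiable, gradients from the discriminator can flow back to every generator, which is what makes the part-wise decomposition learnable without part labels.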
1608.04917 | Igor Mozeti\v{c} | Darko Cherepnalkoski, Andreas Karpf, Igor Mozetic, Miha Grcar | Cohesion and Coalition Formation in the European Parliament: Roll-Call
Votes and Twitter Activities | null | PLoS ONE 11(11): e0166586, 2016 | 10.1371/journal.pone.0166586 | null | cs.CL cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the cohesion within and the coalitions between political groups in
the Eighth European Parliament (2014--2019) by analyzing two entirely different
aspects of the behavior of the Members of the European Parliament (MEPs) in the
policy-making processes. On one hand, we analyze their co-voting patterns and,
on the other, their retweeting behavior. We make use of two diverse datasets in
the analysis. The first one is the roll-call vote dataset, where cohesion is
regarded as the tendency to co-vote within a group, and a coalition is formed
when the members of several groups exhibit a high degree of co-voting agreement
on a subject. The second dataset comes from Twitter; it captures the retweeting
(i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within
the same group) and coalitions (retweets between groups) from a completely
different perspective.
We employ two different methodologies to analyze the cohesion and coalitions.
The first one is based on Krippendorff's Alpha reliability, used to measure the
agreement between raters in data-analysis scenarios, and the second one is
based on Exponential Random Graph Models, often used in social-network
analysis. We give general insights into the cohesion of political groups in the
European Parliament, explore whether coalitions are formed in the same way for
different policy areas, and examine to what degree the retweeting behavior of
MEPs corresponds to their co-voting patterns. A novel and interesting aspect of
our work is the relationship between the co-voting and retweeting patterns.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 10:10:14 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Oct 2016 09:47:42 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Cherepnalkoski",
"Darko",
""
],
[
"Karpf",
"Andreas",
""
],
[
"Mozetic",
"Igor",
""
],
[
"Grcar",
"Miha",
""
]
] | TITLE: Cohesion and Coalition Formation in the European Parliament: Roll-Call
Votes and Twitter Activities
ABSTRACT: We study the cohesion within and the coalitions between political groups in
the Eighth European Parliament (2014--2019) by analyzing two entirely different
aspects of the behavior of the Members of the European Parliament (MEPs) in the
policy-making processes. On one hand, we analyze their co-voting patterns and,
on the other, their retweeting behavior. We make use of two diverse datasets in
the analysis. The first one is the roll-call vote dataset, where cohesion is
regarded as the tendency to co-vote within a group, and a coalition is formed
when the members of several groups exhibit a high degree of co-voting agreement
on a subject. The second dataset comes from Twitter; it captures the retweeting
(i.e., endorsing) behavior of the MEPs and implies cohesion (retweets within
the same group) and coalitions (retweets between groups) from a completely
different perspective.
We employ two different methodologies to analyze the cohesion and coalitions.
The first one is based on Krippendorff's Alpha reliability, used to measure the
agreement between raters in data-analysis scenarios, and the second one is
based on Exponential Random Graph Models, often used in social-network
analysis. We give general insights into the cohesion of political groups in the
European Parliament, explore whether coalitions are formed in the same way for
different policy areas, and examine to what degree the retweeting behavior of
MEPs corresponds to their co-voting patterns. A novel and interesting aspect of
our work is the relationship between the co-voting and retweeting patterns.
| new_dataset | 0.821617 |
1611.02447 | Pichao Wang | Pichao Wang and Zhaoyang Li and Yonghong Hou and Wanqing Li | Action Recognition Based on Joint Trajectory Maps Using Convolutional
Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Convolutional Neural Networks (ConvNets) have shown promising
performance in many computer vision tasks, especially image-based recognition.
How to effectively use ConvNets for video-based recognition is still an open
problem. In this paper, we propose a compact, effective yet simple method to
encode spatio-temporal information carried in $3D$ skeleton sequences into
multiple $2D$ images, referred to as Joint Trajectory Maps (JTM), and ConvNets
are adopted to exploit the discriminative features for real-time human action
recognition. The proposed method has been evaluated on three public benchmarks,
i.e., MSRC-12 Kinect gesture dataset (MSRC-12), G3D dataset and UTD multimodal
human action dataset (UTD-MHAD), and achieved state-of-the-art results.
| [
{
"version": "v1",
"created": "Tue, 8 Nov 2016 09:35:17 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Nov 2016 23:24:58 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Wang",
"Pichao",
""
],
[
"Li",
"Zhaoyang",
""
],
[
"Hou",
"Yonghong",
""
],
[
"Li",
"Wanqing",
""
]
] | TITLE: Action Recognition Based on Joint Trajectory Maps Using Convolutional
Neural Networks
ABSTRACT: Recently, Convolutional Neural Networks (ConvNets) have shown promising
performance in many computer vision tasks, especially image-based recognition.
How to effectively use ConvNets for video-based recognition is still an open
problem. In this paper, we propose a compact, effective yet simple method to
encode spatio-temporal information carried in $3D$ skeleton sequences into
multiple $2D$ images, referred to as Joint Trajectory Maps (JTM), and ConvNets
are adopted to exploit the discriminative features for real-time human action
recognition. The proposed method has been evaluated on three public benchmarks,
i.e., MSRC-12 Kinect gesture dataset (MSRC-12), G3D dataset and UTD multimodal
human action dataset (UTD-MHAD), and achieved state-of-the-art results.
| no_new_dataset | 0.941708 |
1611.03890 | Guido D'Amico | Guido D'Amico, Raul Rabadan, Matthew Kleban | A Theory of Taxonomy | 7+13 pages, 5 figures. Comments welcome | null | null | null | physics.soc-ph cs.SI physics.data-an q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A taxonomy is a standardized framework to classify and organize items into
categories. Hierarchical taxonomies are ubiquitous, ranging from the
classification of organisms to the file system on a computer. Characterizing
the typical distribution of items within taxonomic categories is an important
question with applications in many disciplines. Ecologists have long sought to
account for the patterns observed in species-abundance distributions (the
number of individuals per species found in some sample), and computer
scientists study the distribution of files per directory. Is there a universal
statistical distribution describing how many items are typically found in each
category in large taxonomies? Here, we analyze a wide array of large,
real-world datasets -- including items lost and found on the New York City
transit system, library books, and a bacterial microbiome -- and discover such
an underlying commonality. A simple, non-parametric branching model that
randomly categorizes items and takes as input only the total number of items
and the total number of categories successfully reproduces the abundance
distributions in these datasets. This result may shed light on patterns in
species-abundance distributions long observed in ecology. The model also
predicts the number of taxonomic categories that remain unrepresented in a
finite sample.
| [
{
"version": "v1",
"created": "Fri, 4 Nov 2016 19:25:49 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"D'Amico",
"Guido",
""
],
[
"Rabadan",
"Raul",
""
],
[
"Kleban",
"Matthew",
""
]
] | TITLE: A Theory of Taxonomy
ABSTRACT: A taxonomy is a standardized framework to classify and organize items into
categories. Hierarchical taxonomies are ubiquitous, ranging from the
classification of organisms to the file system on a computer. Characterizing
the typical distribution of items within taxonomic categories is an important
question with applications in many disciplines. Ecologists have long sought to
account for the patterns observed in species-abundance distributions (the
number of individuals per species found in some sample), and computer
scientists study the distribution of files per directory. Is there a universal
statistical distribution describing how many items are typically found in each
category in large taxonomies? Here, we analyze a wide array of large,
real-world datasets -- including items lost and found on the New York City
transit system, library books, and a bacterial microbiome -- and discover such
an underlying commonality. A simple, non-parametric branching model that
randomly categorizes items and takes as input only the total number of items
and the total number of categories successfully reproduces the abundance
distributions in these datasets. This result may shed light on patterns in
species-abundance distributions long observed in ecology. The model also
predicts the number of taxonomic categories that remain unrepresented in a
finite sample.
| no_new_dataset | 0.949856 |
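The quantity the branching model in the record above is fit to, the abundance distribution (how many categories hold k items), is straightforward to compute from data. A minimal sketch of that computation with a naive uniform random-categorization baseline; the paper's branching model is more structured than this baseline, which is shown only to make the compared quantity concrete:

```python
# Sketch: empirical abundance distribution (items per category) plus a
# naive uniform-assignment baseline, hedged as illustration only.
from collections import Counter
import random

def abundance_distribution(category_of_item):
    """Map each abundance k to the number of categories holding k items."""
    per_category = Counter(category_of_item)   # items per category
    return Counter(per_category.values())      # categories per abundance

def uniform_baseline(n_items, n_categories, seed=0):
    rng = random.Random(seed)
    return [rng.randrange(n_categories) for _ in range(n_items)]

data = ["cs.CV", "cs.CV", "cs.LG", "cs.DB", "cs.LG", "cs.CV"]
print(abundance_distribution(data))                      # {3: 1, 2: 1, 1: 1}
print(abundance_distribution(uniform_baseline(1000, 50)))
```

Categories with abundance zero in a sample are exactly the "unrepresented taxonomic categories" whose count the paper's model predicts.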
1611.03932 | Jangho Lee | Jangho Lee, Gyuwan Kim, Jaeyoon Yoo, Changwoo Jung, Minseok Kim,
Sungroh Yoon | Training IBM Watson using Automatically Generated Question-Answer Pairs | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | IBM Watson is a cognitive computing system capable of question answering in
natural languages. It is believed that IBM Watson can understand large corpora
and answer relevant questions more effectively than any other
question-answering system currently available. To unleash the full power of
Watson, however, we need to train its instance with a large number of
well-prepared question-answer pairs. Obviously, manually generating such pairs
in a large quantity is prohibitively time-consuming and significantly limits
the efficiency of Watson's training. Recently, a large-scale dataset of over 30
million question-answer pairs was reported. Under the assumption that using
such an automatically generated dataset could relieve the burden of manual
question-answer generation, we tried to use this dataset to train an instance
of Watson and checked the training efficiency and accuracy. According to our
experiments, using this auto-generated dataset was effective for training
Watson, complementing manually crafted question-answer pairs. To the best of
the authors' knowledge, this work is the first attempt to use a large-scale
dataset of automatically generated question-answer pairs for training IBM
Watson. We anticipate that the insights and lessons obtained from our
experiments will be useful for researchers who want to expedite Watson training
leveraged by automatically generated question-answer pairs.
| [
{
"version": "v1",
"created": "Sat, 12 Nov 2016 01:49:48 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Lee",
"Jangho",
""
],
[
"Kim",
"Gyuwan",
""
],
[
"Yoo",
"Jaeyoon",
""
],
[
"Jung",
"Changwoo",
""
],
[
"Kim",
"Minseok",
""
],
[
"Yoon",
"Sungroh",
""
]
] | TITLE: Training IBM Watson using Automatically Generated Question-Answer Pairs
ABSTRACT: IBM Watson is a cognitive computing system capable of question answering in
natural languages. It is believed that IBM Watson can understand large corpora
and answer relevant questions more effectively than any other
question-answering system currently available. To unleash the full power of
Watson, however, we need to train its instance with a large number of
well-prepared question-answer pairs. Obviously, manually generating such pairs
in a large quantity is prohibitively time-consuming and significantly limits
the efficiency of Watson's training. Recently, a large-scale dataset of over 30
million question-answer pairs was reported. Under the assumption that using
such an automatically generated dataset could relieve the burden of manual
question-answer generation, we tried to use this dataset to train an instance
of Watson and checked the training efficiency and accuracy. According to our
experiments, using this auto-generated dataset was effective for training
Watson, complementing manually crafted question-answer pairs. To the best of
the authors' knowledge, this work is the first attempt to use a large-scale
dataset of automatically generated question-answer pairs for training IBM
Watson. We anticipate that the insights and lessons obtained from our
experiments will be useful for researchers who want to expedite Watson training
leveraged by automatically generated question-answer pairs.
| new_dataset | 0.963746 |
1611.03934 | Ahmed Alaa | Jinsung Yoon, Ahmed M. Alaa, Martin Cadeiras, and Mihaela van der
Schaar | Personalized Donor-Recipient Matching for Organ Transplantation | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Organ transplants can improve the life expectancy and quality of life for the
recipient but carry the risk of serious post-operative complications, such as
septic shock and organ rejection. The probability of a successful transplant
depends in a very subtle fashion on compatibility between the donor and the
recipient but current medical practice is short of domain knowledge regarding
the complex nature of recipient-donor compatibility. Hence a data-driven
approach for learning compatibility has the potential for significant
improvements in match quality. This paper proposes a novel system
(ConfidentMatch) that is trained using data from electronic health records.
ConfidentMatch predicts the success of an organ transplant (in terms of the 3
year survival rates) on the basis of clinical and demographic traits of the
donor and recipient. ConfidentMatch captures the heterogeneity of the donor and
recipient traits by optimally dividing the feature space into clusters and
constructing different optimal predictive models to each cluster. The system
controls the complexity of the learned predictive model in a way that allows
for assuring more granular and confident predictions for a larger number of
potential recipient-donor pairs, thereby ensuring that predictions are
"personalized" and tailored to individual characteristics to the finest
possible granularity. Experiments conducted on the UNOS heart transplant
dataset show the superiority of the prognostic value of ConfidentMatch to other
competing benchmarks; ConfidentMatch can provide predictions of success with
95% confidence for 5,489 patients of a total population of 9,620 patients,
which corresponds to 410 more patients than the most competitive benchmark
algorithm (DeepBoost).
| [
{
"version": "v1",
"created": "Sat, 12 Nov 2016 01:53:54 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Yoon",
"Jinsung",
""
],
[
"Alaa",
"Ahmed M.",
""
],
[
"Cadeiras",
"Martin",
""
],
[
"van der Schaar",
"Mihaela",
""
]
] | TITLE: Personalized Donor-Recipient Matching for Organ Transplantation
ABSTRACT: Organ transplants can improve the life expectancy and quality of life for the
recipient but carry the risk of serious post-operative complications, such as
septic shock and organ rejection. The probability of a successful transplant
depends in a very subtle fashion on compatibility between the donor and the
recipient but current medical practice is short of domain knowledge regarding
the complex nature of recipient-donor compatibility. Hence a data-driven
approach for learning compatibility has the potential for significant
improvements in match quality. This paper proposes a novel system
(ConfidentMatch) that is trained using data from electronic health records.
ConfidentMatch predicts the success of an organ transplant (in terms of the 3
year survival rates) on the basis of clinical and demographic traits of the
donor and recipient. ConfidentMatch captures the heterogeneity of the donor and
recipient traits by optimally dividing the feature space into clusters and
constructing different optimal predictive models to each cluster. The system
controls the complexity of the learned predictive model in a way that allows
for assuring more granular and confident predictions for a larger number of
potential recipient-donor pairs, thereby ensuring that predictions are
"personalized" and tailored to individual characteristics to the finest
possible granularity. Experiments conducted on the UNOS heart transplant
dataset show the superiority of the prognostic value of ConfidentMatch to other
competing benchmarks; ConfidentMatch can provide predictions of success with
95% confidence for 5,489 patients of a total population of 9,620 patients,
which corresponds to 410 more patients than the most competitive benchmark
algorithm (DeepBoost).
| no_new_dataset | 0.949106 |
1611.03999 | David Freire-Obreg\'on | D. Freire-Obreg\'on and M. Castrill\'on-Santana and J. Lorenzo-Navarro | Optimized clothes segmentation to boost gender classification in
unconstrained scenarios | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several applications require demographic information of ordinary people in
unconstrained scenarios. This is not a trivial task due to significant human
appearance variations. In this work, we introduce trixels for clustering image
regions, enumerating their advantages compared to superpixels. The classical
GrabCut algorithm is later modified to segment trixels instead of pixels in an
unsupervised context. Combining this with face detection leads to a clothes
segmentation approach that runs close to real time. The study uses the challenging Pascal
VOC dataset for segmentation evaluation experiments. A final experiment
analyzes the fusion of clothes features with state-of-the-art gender
classifiers in ClothesDB, revealing a significant performance improvement in
gender classification.
| [
{
"version": "v1",
"created": "Sat, 12 Nov 2016 13:39:55 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Freire-Obregón",
"D.",
""
],
[
"Castrillón-Santana",
"M.",
""
],
[
"Lorenzo-Navarro",
"J.",
""
]
] | TITLE: Optimized clothes segmentation to boost gender classification in
unconstrained scenarios
ABSTRACT: Several applications require demographic information of ordinary people in
unconstrained scenarios. This is not a trivial task due to significant human
appearance variations. In this work, we introduce trixels for clustering image
regions, enumerating their advantages compared to superpixels. The classical
GrabCut algorithm is later modified to segment trixels instead of pixels in an
unsupervised context. Combining this with face detection leads to a clothes
segmentation approach that runs close to real time. The study uses the challenging Pascal
VOC dataset for segmentation evaluation experiments. A final experiment
analyzes the fusion of clothes features with state-of-the-art gender
classifiers in ClothesDB, revealing a significant performance improvement in
gender classification.
| no_new_dataset | 0.953362 |
1611.04049 | Chuyang Ke | Chuyang Ke, Yan Jin, Heather Evans, Bill Lober, Xiaoning Qian, Ji Liu,
Shuai Huang | Prognostics of Surgical Site Infections using Dynamic Health Data | 23 pages, 8 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Surgical Site Infection (SSI) is a national priority in healthcare research.
Much research attention has been attracted to develop better SSI risk
prediction models. However, most of the existing SSI risk prediction models are
built on static risk factors such as comorbidities and operative factors. In
this paper, we investigate the use of the dynamic wound data for SSI risk
prediction. There have been emerging mobile health (mHealth) tools that can
closely monitor the patients and generate continuous measurements of many
wound-related variables and other evolving clinical variables. Since existing
prediction models of SSI have quite limited capacity to utilize the evolving
clinical data, we develop the corresponding solution to equip these mHealth
tools with decision-making capabilities for SSI prediction with a seamless
assembly of several machine learning models to tackle the analytic challenges
arising from the spatial-temporal data. The basic idea is to exploit the
low-rank property of the spatial-temporal data via the bilinear formulation,
and further enhance it with automatic missing data imputation by the matrix
completion technique. We derive efficient optimization algorithms to implement
these models and demonstrate the superior performances of our new predictive
model on a real-world dataset of SSI, compared to a range of state-of-the-art
methods.
| [
{
"version": "v1",
"created": "Sat, 12 Nov 2016 22:08:15 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Ke",
"Chuyang",
""
],
[
"Jin",
"Yan",
""
],
[
"Evans",
"Heather",
""
],
[
"Lober",
"Bill",
""
],
[
"Qian",
"Xiaoning",
""
],
[
"Liu",
"Ji",
""
],
[
"Huang",
"Shuai",
""
]
] | TITLE: Prognostics of Surgical Site Infections using Dynamic Health Data
ABSTRACT: Surgical Site Infection (SSI) is a national priority in healthcare research.
Much research attention has been attracted to develop better SSI risk
prediction models. However, most of the existing SSI risk prediction models are
built on static risk factors such as comorbidities and operative factors. In
this paper, we investigate the use of the dynamic wound data for SSI risk
prediction. There have been emerging mobile health (mHealth) tools that can
closely monitor the patients and generate continuous measurements of many
wound-related variables and other evolving clinical variables. Since existing
prediction models of SSI have quite limited capacity to utilize the evolving
clinical data, we develop the corresponding solution to equip these mHealth
tools with decision-making capabilities for SSI prediction with a seamless
assembly of several machine learning models to tackle the analytic challenges
arising from the spatial-temporal data. The basic idea is to exploit the
low-rank property of the spatial-temporal data via the bilinear formulation,
and further enhance it with automatic missing data imputation by the matrix
completion technique. We derive efficient optimization algorithms to implement
these models and demonstrate the superior performances of our new predictive
model on a real-world dataset of SSI, compared to a range of state-of-the-art
methods.
| no_new_dataset | 0.944638 |
1611.04144 | Xuanpeng Li | Xuanpeng Li and Rachid Belaroussi | Semi-Dense 3D Semantic Mapping from Monocular SLAM | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The bundle of geometry and appearance in computer vision has proven to be a
promising solution for robots across a wide variety of applications. Stereo
cameras and RGB-D sensors are widely used to realise fast 3D reconstruction and
trajectory tracking in a dense way. However, they lack the flexibility to switch
seamlessly between differently scaled environments, i.e., indoor and outdoor scenes.
In addition, semantic information is still hard to acquire in a 3D map. We
address this challenge by combining a state-of-the-art deep learning method and
semi-dense Simultaneous Localisation and Mapping (SLAM) based on video stream
from a monocular camera. In our approach, 2D semantic information is
transferred to the 3D map via correspondences between connected keyframes with
spatial consistency. There is no need to obtain a semantic segmentation for
each frame in a sequence, so the method achieves a reasonable computation
time. We evaluate our method on indoor/outdoor datasets and achieve an
improvement in 2D semantic labelling over baseline single-frame
predictions.
| [
{
"version": "v1",
"created": "Sun, 13 Nov 2016 15:31:31 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Li",
"Xuanpeng",
""
],
[
"Belaroussi",
"Rachid",
""
]
] | TITLE: Semi-Dense 3D Semantic Mapping from Monocular SLAM
ABSTRACT: The bundle of geometry and appearance in computer vision has proven to be a
promising solution for robots across a wide variety of applications. Stereo
cameras and RGB-D sensors are widely used to realise fast 3D reconstruction and
trajectory tracking in a dense way. However, they lack the flexibility to switch
seamlessly between differently scaled environments, i.e., indoor and outdoor scenes.
In addition, semantic information is still hard to acquire in a 3D map. We
address this challenge by combining a state-of-the-art deep learning method and
semi-dense Simultaneous Localisation and Mapping (SLAM) based on video stream
from a monocular camera. In our approach, 2D semantic information is
transferred to the 3D map via correspondences between connected keyframes with
spatial consistency. There is no need to obtain a semantic segmentation for
each frame in a sequence, so the method achieves a reasonable computation
time. We evaluate our method on indoor/outdoor datasets and achieve an
improvement in 2D semantic labelling over baseline single-frame
predictions.
| no_new_dataset | 0.944638 |
1611.04228 | Aseem Wadhwa | Aseem Wadhwa and Upamanyu Madhow | Learning Sparse, Distributed Representations using the Hebbian Principle | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The "fire together, wire together" Hebbian model is a central principle for
learning in neuroscience, but surprisingly, it has found limited applicability
in modern machine learning. In this paper, we take a first step towards
bridging this gap, by developing flavors of competitive Hebbian learning which
produce sparse, distributed neural codes using online adaptation with minimal
tuning. We propose an unsupervised algorithm, termed Adaptive Hebbian Learning
(AHL). We illustrate the distributed nature of the learned representations via
output entropy computations for synthetic data, and demonstrate superior
performance, compared to standard alternatives such as autoencoders, in
training a deep convolutional net on standard image datasets.
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2016 02:28:13 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Wadhwa",
"Aseem",
""
],
[
"Madhow",
"Upamanyu",
""
]
] | TITLE: Learning Sparse, Distributed Representations using the Hebbian Principle
ABSTRACT: The "fire together, wire together" Hebbian model is a central principle for
learning in neuroscience, but surprisingly, it has found limited applicability
in modern machine learning. In this paper, we take a first step towards
bridging this gap, by developing flavors of competitive Hebbian learning which
produce sparse, distributed neural codes using online adaptation with minimal
tuning. We propose an unsupervised algorithm, termed Adaptive Hebbian Learning
(AHL). We illustrate the distributed nature of the learned representations via
output entropy computations for synthetic data, and demonstrate superior
performance, compared to standard alternatives such as autoencoders, in
training a deep convolutional net on standard image datasets.
| no_new_dataset | 0.950227 |
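The abstract above names the update family but not its form. As a rough illustration, here is a minimal winner-take-all Hebbian update with weight renormalization; the function, learning rate, and normalization scheme are assumptions for illustration, not the paper's AHL algorithm.

```python
import numpy as np

def competitive_hebbian_step(W, x, lr=0.01):
    """One winner-take-all Hebbian step: the unit that fires most strongly
    moves its weight vector toward the input ("fire together, wire
    together"), then is renormalized to keep weights bounded. A generic
    sketch, not the paper's AHL algorithm."""
    activations = W @ x                 # response of each unit
    winner = np.argmax(activations)     # competition: single winner
    W[winner] += lr * (x - W[winner])   # Hebbian move toward the input
    W[winner] /= np.linalg.norm(W[winner]) + 1e-8
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))                 # 16 units, 64-dim inputs
W /= np.linalg.norm(W, axis=1, keepdims=True)
for x in rng.normal(size=(1000, 64)):         # online adaptation
    W = competitive_hebbian_step(W, x)
```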
1611.04298 | Chengzhe Yan Mr | Chengzhe Yan, Jie Hu and Changshui Zhang | A DNN Framework For Text Image Rectification From Planar Transformations | 9 pages, 10 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel neural network architecture is proposed that attempts to
rectify text images under mild assumptions. A new dataset of text images is
collected to verify our model and opened to the public. We explored the
capability of deep neural networks in learning geometric transformations and
found the model could segment the text image without explicit supervised
segmentation information. Experiments show the proposed architecture can restore
planar transformations with remarkable robustness and effectiveness.
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2016 09:40:38 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Yan",
"Chengzhe",
""
],
[
"Hu",
"Jie",
""
],
[
"Zhang",
"Changshui",
""
]
] | TITLE: A DNN Framework For Text Image Rectification From Planar Transformations
ABSTRACT: In this paper, a novel neural network architecture is proposed that attempts to
rectify text images under mild assumptions. A new dataset of text images is
collected to verify our model and opened to the public. We explored the
capability of deep neural networks in learning geometric transformations and
found the model could segment the text image without explicit supervised
segmentation information. Experiments show the proposed architecture can restore
planar transformations with remarkable robustness and effectiveness.
| new_dataset | 0.956917 |
1611.04357 | Yashas Annadani | Yashas Annadani, Vijayakrishna Naganoor, Akshay Kumar Jagadish,
Krishnan Chemmangat | Selfie Detection by Synergy-Constraint Based Convolutional Neural
Network | 8 Pages, Accepted for Publication at IEEE SITIS 2016 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Categorisation of the huge amount of data on multimedia platforms is a crucial
task. In this work, we propose a novel approach to address the subtle problem
of selfie detection for image database segregation on the web, given the rapid
rise in the number of selfies clicked. A Convolutional Neural Network (CNN) is modeled
to learn a synergy feature in the common subspace of head and shoulder
orientation, derived from Local Binary Pattern (LBP) and Histogram of Oriented
Gradients (HOG) features respectively. This synergy was captured by projecting
the aforementioned features using Canonical Correlation Analysis (CCA). We show
that the resulting network's convolutional activations in the neighbourhood of
spatial keypoints captured by SIFT are discriminative for selfie-detection. In
general, proposed approach aids in capturing intricacies present in the image
data and has the potential for usage in other subtle image analysis scenarios
apart from just selfie detection. We investigate and analyse the performance of
popular CNN architectures (GoogleNet, AlexNet), used for other image
classification tasks, when subjected to the task of detecting the selfies on
the multimedia platform. The results of the proposed approach are compared with
these popular architectures on a dataset of ninety thousand images comprising
a roughly equal number of selfies and non-selfies. Experimental results on
this dataset show the effectiveness of the proposed approach.
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2016 12:22:34 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Annadani",
"Yashas",
""
],
[
"Naganoor",
"Vijayakrishna",
""
],
[
"Jagadish",
"Akshay Kumar",
""
],
[
"Chemmangat",
"Krishnan",
""
]
] | TITLE: Selfie Detection by Synergy-Constraint Based Convolutional Neural
Network
ABSTRACT: Categorisation of the huge amount of data on multimedia platforms is a crucial
task. In this work, we propose a novel approach to address the subtle problem
of selfie detection for image database segregation on the web, given the rapid
rise in the number of selfies clicked. A Convolutional Neural Network (CNN) is modeled
to learn a synergy feature in the common subspace of head and shoulder
orientation, derived from Local Binary Pattern (LBP) and Histogram of Oriented
Gradients (HOG) features respectively. This synergy was captured by projecting
the aforementioned features using Canonical Correlation Analysis (CCA). We show
that the resulting network's convolutional activations in the neighbourhood of
spatial keypoints captured by SIFT are discriminative for selfie detection. In
general, the proposed approach aids in capturing intricacies present in the image
data and has the potential for usage in other subtle image analysis scenarios
apart from just selfie detection. We investigate and analyse the performance of
popular CNN architectures (GoogleNet, AlexNet), used for other image
classification tasks, when subjected to the task of detecting the selfies on
the multimedia platform. The results of the proposed approach are compared with
these popular architectures on a dataset of ninety thousand images comprising
a roughly equal number of selfies and non-selfies. Experimental results on
this dataset show the effectiveness of the proposed approach.
| no_new_dataset | 0.741323 |
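The synergy feature described above is built by projecting two descriptor views into a shared correlated subspace. A minimal sketch with scikit-learn's CCA follows; the random arrays stand in for real HOG and LBP descriptors, and the number of components is an assumed value.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Toy stand-ins for per-image HOG (shoulder) and LBP (head) descriptors;
# in the paper these come from actual feature extractors.
rng = np.random.default_rng(0)
hog = rng.normal(size=(500, 128))
lbp = rng.normal(size=(500, 59))

# Project both views into a subspace where their correlation is maximal;
# the concatenated projections act as the "synergy" feature.
cca = CCA(n_components=16)
hog_c, lbp_c = cca.fit_transform(hog, lbp)
synergy = np.hstack([hog_c, lbp_c])   # combined feature for the network
print(synergy.shape)                  # (500, 32)
```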
1611.04361 | Marek Rei | Marek Rei, Gamal K.O. Crichton, Sampo Pyysalo | Attending to Characters in Neural Sequence Labeling Models | Proceedings of COLING 2016 | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequence labeling architectures use word embeddings for capturing similarity,
but suffer when handling previously unseen or rare words. We investigate
character-level extensions to such models and propose a novel architecture for
combining alternative word representations. By using an attention mechanism,
the model is able to dynamically decide how much information to use from a
word- or character-level component. We evaluated different architectures on a
range of sequence labeling datasets, and character-level extensions were found
to improve performance on every benchmark. In addition, the proposed
attention-based architecture delivered the best results even with a smaller
number of trainable parameters.
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2016 12:36:07 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Rei",
"Marek",
""
],
[
"Crichton",
"Gamal K. O.",
""
],
[
"Pyysalo",
"Sampo",
""
]
] | TITLE: Attending to Characters in Neural Sequence Labeling Models
ABSTRACT: Sequence labeling architectures use word embeddings for capturing similarity,
but suffer when handling previously unseen or rare words. We investigate
character-level extensions to such models and propose a novel architecture for
combining alternative word representations. By using an attention mechanism,
the model is able to dynamically decide how much information to use from a
word- or character-level component. We evaluated different architectures on a
range of sequence labeling datasets, and character-level extensions were found
to improve performance on every benchmark. In addition, the proposed
attention-based architecture delivered the best results even with a smaller
number of trainable parameters.
| no_new_dataset | 0.952175 |
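One plausible reading of the attention mechanism described above is a per-dimension gate that mixes the word-level and character-level vectors. The sketch below assumes a single learned gate matrix Wg; the paper's exact parameterization may differ.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gated_word_char(word_emb, char_emb, Wg):
    """Combine a word embedding with a character-level composition via a
    learned gate: the gate decides, per dimension, how much to trust each
    source. A sketch of the general mechanism only."""
    z = sigmoid(Wg @ np.concatenate([word_emb, char_emb]))
    return z * word_emb + (1.0 - z) * char_emb

rng = np.random.default_rng(0)
d = 50
Wg = rng.normal(scale=0.1, size=(d, 2 * d))   # gate parameters (trained)
x = gated_word_char(rng.normal(size=d), rng.normal(size=d), Wg)
print(x.shape)   # (50,)
```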
1611.04413 | Ronan Sicre | Ronan Sicre, Julien Rabin, Yannis Avrithis, Teddy Furon, Frederic
Jurie | Automatic discovery of discriminative parts as a quadratic assignment
problem | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Part-based image classification consists in representing categories by small
sets of discriminative parts upon which a representation of the images is
built. This paper addresses the question of how to automatically learn such
parts from a set of labeled training images. The training of parts is cast as a
quadratic assignment problem in which optimal correspondences between image
regions and parts are automatically learned. The paper analyses different
assignment strategies and thoroughly evaluates them on two public datasets:
Willow actions and MIT 67 scenes. State-of-the-art results are obtained on
these datasets.
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2016 15:17:48 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Sicre",
"Ronan",
""
],
[
"Rabin",
"Julien",
""
],
[
"Avrithis",
"Yannis",
""
],
[
"Furon",
"Teddy",
""
],
[
"Jurie",
"Frederic",
""
]
] | TITLE: Automatic discovery of discriminative parts as a quadratic assignment
problem
ABSTRACT: Part-based image classification consists in representing categories by small
sets of discriminative parts upon which a representation of the images is
built. This paper addresses the question of how to automatically learn such
parts from a set of labeled training images. The training of parts is cast as a
quadratic assignment problem in which optimal correspondences between image
regions and parts are automatically learned. The paper analyses different
assignment strategies and thoroughly evaluates them on two public datasets:
Willow actions and MIT 67 scenes. State-of-the-art results are obtained on
these datasets.
| no_new_dataset | 0.952574 |
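The correspondence step described above assigns image regions to parts. As a simplified stand-in for the paper's quadratic assignment formulation, the sketch below solves the linear assignment special case with the Hungarian algorithm; the descriptors are random placeholders.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
regions = rng.normal(size=(40, 256))   # candidate region descriptors
parts = rng.normal(size=(10, 256))     # current part models

# Cost of assigning each region to each part (negative similarity). The
# paper casts training as a quadratic assignment problem; the linear
# assignment solved here is only its simplest special case.
cost = -(regions @ parts.T)
part_idx, region_idx = linear_sum_assignment(cost.T)  # one region per part
for p, r in zip(part_idx, region_idx):
    print(f"part {p} <- region {r}")
```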
1611.04455 | Hongyi Liu | Vaidehi Dalmia, Hongyi Liu, Shih-Fu Chang | Columbia MVSO Image Sentiment Dataset | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Multilingual Visual Sentiment Ontology (MVSO) consists of 15,600 concepts
in 12 different languages that are strongly related to emotions and sentiments
expressed in images. These concepts are defined in the form of Adjective-Noun
Pair (ANP), which are crawled and discovered from the online image forum Flickr. In
this work, we used Amazon Mechanical Turk as a crowd-sourcing platform to
collect human judgments on sentiments expressed in images that are uniformly
sampled over 3,911 English ANPs extracted from a tag-restricted subset of MVSO.
Our goal is to use the dataset as a benchmark for the evaluation of systems
that automatically predict sentiments in images or ANPs.
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2016 16:48:12 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Dalmia",
"Vaidehi",
""
],
[
"Liu",
"Hongyi",
""
],
[
"Chang",
"Shih-Fu",
""
]
] | TITLE: Columbia MVSO Image Sentiment Dataset
ABSTRACT: The Multilingual Visual Sentiment Ontology (MVSO) consists of 15,600 concepts
in 12 different languages that are strongly related to emotions and sentiments
expressed in images. These concepts are defined in the form of Adjective-Noun
Pair (ANP), which are crawled and discovered from the online image forum Flickr. In
this work, we used Amazon Mechanical Turk as a crowd-sourcing platform to
collect human judgments on sentiments expressed in images that are uniformly
sampled over 3,911 English ANPs extracted from a tag-restricted subset of MVSO.
Our goal is to use the dataset as a benchmark for the evaluation of systems
that automatically predict sentiments in images or ANPs.
| new_dataset | 0.962568 |
1611.04534 | Mu Zhou | Darvin Yi and Mu Zhou and Zhao Chen and Olivier Gevaert | 3-D Convolutional Neural Networks for Glioblastoma Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (CNN) have emerged as powerful tools for
learning discriminative image features. In this paper, we propose a framework
of 3-D fully CNN models for Glioblastoma segmentation from multi-modality MRI
data. By generalizing CNN models to true 3-D convolutions in learning 3-D tumor
MRI data, the proposed approach utilizes a unique network architecture to
decouple image pixels. Specifically, we design a convolutional layer with
pre-defined Difference-of-Gaussian (DoG) filters to perform true 3-D
convolution incorporating local neighborhood information at each pixel. We then
use three trained convolutional layers that act to decouple voxels from the
initial 3-D convolution. The proposed framework allows identification of
high-level tumor structures on MRI. We evaluate segmentation performance on the
BRATS segmentation dataset with 274 tumor samples. Extensive experimental
results demonstrate encouraging performance of the proposed approach compared
to state-of-the-art methods. Our data-driven approach achieves a median
Dice score accuracy of 89% in whole tumor glioblastoma segmentation, revealing
a generalized low-bias possibility to learn from medium-size MRI datasets.
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2016 19:21:33 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Yi",
"Darvin",
""
],
[
"Zhou",
"Mu",
""
],
[
"Chen",
"Zhao",
""
],
[
"Gevaert",
"Olivier",
""
]
] | TITLE: 3-D Convolutional Neural Networks for Glioblastoma Segmentation
ABSTRACT: Convolutional Neural Networks (CNN) have emerged as powerful tools for
learning discriminative image features. In this paper, we propose a framework
of 3-D fully CNN models for Glioblastoma segmentation from multi-modality MRI
data. By generalizing CNN models to true 3-D convolutions in learning 3-D tumor
MRI data, the proposed approach utilizes a unique network architecture to
decouple image pixels. Specifically, we design a convolutional layer with
pre-defined Difference-of-Gaussian (DoG) filters to perform true 3-D
convolution incorporating local neighborhood information at each pixel. We then
use three trained convolutional layers that act to decouple voxels from the
initial 3-D convolution. The proposed framework allows identification of
high-level tumor structures on MRI. We evaluate segmentation performance on the
BRATS segmentation dataset with 274 tumor samples. Extensive experimental
results demonstrate encouraging performance of the proposed approach compared
to state-of-the-art methods. Our data-driven approach achieves a median
Dice score accuracy of 89% in whole tumor glioblastoma segmentation, revealing
a generalized low-bias possibility to learn from medium-size MRI datasets.
| no_new_dataset | 0.948822 |
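The fixed DoG convolutional layer described above can be approximated outside any deep learning framework as a difference of two 3-D Gaussian smoothings. The sigma values below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter_3d(volume, sigma_small, sigma_large):
    """True 3-D Difference-of-Gaussian response: the difference of two
    Gaussian smoothings acts as a band-pass filter over local 3-D
    neighborhoods, analogous to the fixed DoG layer described above
    (the paper's exact sigmas are assumptions here)."""
    return gaussian_filter(volume, sigma_small) - gaussian_filter(volume, sigma_large)

rng = np.random.default_rng(0)
mri = rng.normal(size=(64, 64, 64))          # toy single-modality volume
bank = np.stack([dog_filter_3d(mri, s, 2 * s) for s in (0.5, 1.0, 2.0)])
print(bank.shape)                            # (3, 64, 64, 64): 3 DoG channels
```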
1611.04581 | Peter Jin | Peter H. Jin, Qiaochu Yuan, Forrest Iandola, Kurt Keutzer | How to scale distributed deep learning? | Extended version of paper accepted at ML Sys 2016 (at NIPS 2016) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training time on large datasets for deep neural networks is the principal
workflow bottleneck in a number of important applications of deep learning,
such as object classification and detection in automatic driver assistance
systems (ADAS). To minimize training time, the training of a deep neural
network must be scaled beyond a single machine to as many machines as possible
by distributing the optimization method used for training. While a number of
approaches have been proposed for distributed stochastic gradient descent
(SGD), at the current time synchronous approaches to distributed SGD appear to
be showing the greatest performance at large scale. Synchronous scaling of SGD
suffers from the need to synchronize all processors on each gradient step and
is not resilient in the face of failing or lagging processors. In asynchronous
approaches using parameter servers, training is slowed by contention to the
parameter server. In this paper we compare the convergence of synchronous and
asynchronous SGD for training a modern ResNet network architecture on the
ImageNet classification problem. We also propose an asynchronous method,
gossiping SGD, that aims to retain the positive features of both systems by
replacing the all-reduce collective operation of synchronous training with a
gossip aggregation algorithm. We find, perhaps counterintuitively, that
asynchronous SGD, including both elastic averaging and gossiping, converges
faster at fewer nodes (up to about 32 nodes), whereas synchronous SGD scales
better to more nodes (up to about 100 nodes).
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2016 20:59:54 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Jin",
"Peter H.",
""
],
[
"Yuan",
"Qiaochu",
""
],
[
"Iandola",
"Forrest",
""
],
[
"Keutzer",
"Kurt",
""
]
] | TITLE: How to scale distributed deep learning?
ABSTRACT: Training time on large datasets for deep neural networks is the principal
workflow bottleneck in a number of important applications of deep learning,
such as object classification and detection in automatic driver assistance
systems (ADAS). To minimize training time, the training of a deep neural
network must be scaled beyond a single machine to as many machines as possible
by distributing the optimization method used for training. While a number of
approaches have been proposed for distributed stochastic gradient descent
(SGD), at the current time synchronous approaches to distributed SGD appear to
be showing the greatest performance at large scale. Synchronous scaling of SGD
suffers from the need to synchronize all processors on each gradient step and
is not resilient in the face of failing or lagging processors. In asynchronous
approaches using parameter servers, training is slowed by contention to the
parameter server. In this paper we compare the convergence of synchronous and
asynchronous SGD for training a modern ResNet network architecture on the
ImageNet classification problem. We also propose an asynchronous method,
gossiping SGD, that aims to retain the positive features of both systems by
replacing the all-reduce collective operation of synchronous training with a
gossip aggregation algorithm. We find, perhaps counterintuitively, that
asynchronous SGD, including both elastic averaging and gossiping, converges
faster at fewer nodes (up to about 32 nodes), whereas synchronous SGD scales
better to more nodes (up to about 100 nodes).
| no_new_dataset | 0.943867 |
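The gossip aggregation idea above replaces the synchronous all-reduce with pairwise averaging. A toy single-process sketch on a quadratic objective follows; it illustrates the communication pattern only, not the paper's distributed implementation.

```python
import numpy as np

def gossip_round(params, grads, lr=0.1):
    """One round of gossiping SGD on a toy problem: every worker applies a
    local gradient step, then averages its parameters with a single
    randomly chosen peer instead of an all-reduce."""
    n = len(params)
    params = [p - lr * g for p, g in zip(params, grads)]
    for i in range(n):
        j = np.random.randint(n)
        avg = 0.5 * (params[i] + params[j])   # pairwise gossip average
        params[i], params[j] = avg, avg.copy()
    return params

workers = [np.random.randn(10) for _ in range(8)]
for _ in range(100):
    grads = [w for w in workers]              # grad of 0.5 * ||w||^2 is w
    workers = gossip_round(workers, grads)
print(np.linalg.norm(np.mean(workers, axis=0)))   # consensus near optimum 0
```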
physics/0412112 | Jeanine Pellet | D. Lazaro, Z. El Bitar (LPC-Clermont), V. Breton (LPC-Clermont), I.
Buvat | Effect of noise and modeling errors on the reliability of fully 3D Monte
Carlo reconstruction in SPECT | null | In Proceedings (2004) 1-4 - Conference: IEEE Nuclear Science
Symposium And Medical Imaging Conference (NSS / MIC) (2004-10-16 to
2004-10-22), Rome (it) | 10.1109/NSSMIC.2004.1462770 | null | physics.med-ph | null | We recently demonstrated the value of reconstructing SPECT data with fully 3D
Monte Carlo reconstruction (F3DMC), in terms of spatial resolution and
quantification. This was shown on a small cubic phantom (64 projections 10 x
10) in some idealistic configurations. The goals of the present study were to
assess the effect of noise and modeling errors on the reliability of F3DMC, to
propose and evaluate strategies for reducing the noise in the projector, and to
demonstrate the feasibility of F3DMC for a dataset with realistic dimensions. A
small cubic phantom and a realistic Jaszczak phantom dataset were considered.
Projections and projectors for both phantoms were calculated using the Monte
Carlo simulation code GATE. Projectors with different statistics were
considered and two methods for reducing noise in the projector were
investigated: one based on principal component analysis (PCA) and the other
consisting in setting small probability values to zero. Energy and spatial
shifts in projection sampling with respect to projector sampling were also
introduced to test F3DMC in realistic conditions. Experiments with the cubic
phantom showed the importance of using simulations with high statistics for
calculating the projector, and the value of filtering the projector using a PCA
approach. F3DMC was shown to be robust with respect to energy shift and small
spatial sampling off-set between the projector and the projections. Images of
the Jaszczak phantom were successfully reconstructed and also showed promising
results in terms of spatial resolution recovery and quantitative accuracy in
small structures. It is concluded that the promising results of F3DMC hold on
realistic data sets.
| [
{
"version": "v1",
"created": "Fri, 17 Dec 2004 14:49:10 GMT"
}
] | 2016-11-15T00:00:00 | [
[
"Lazaro",
"D.",
"",
"LPC-Clermont"
],
[
"Bitar",
"Z. El",
"",
"LPC-Clermont"
],
[
"Breton",
"V.",
"",
"LPC-Clermont"
],
[
"Buvat",
"I.",
""
]
] | TITLE: Effect of noise and modeling errors on the reliability of fully 3D Monte
Carlo reconstruction in SPECT
ABSTRACT: We recently demonstrated the value of reconstructing SPECT data with fully 3D
Monte Carlo reconstruction (F3DMC), in terms of spatial resolution and
quantification. This was shown on a small cubic phantom (64 projections 10 x
10) in some idealistic configurations. The goals of the present study were to
assess the effect of noise and modeling errors on the reliability of F3DMC, to
propose and evaluate strategies for reducing the noise in the projector, and to
demonstrate the feasibility of F3DMC for a dataset with realistic dimensions. A
small cubic phantom and a realistic Jaszczak phantom dataset were considered.
Projections and projectors for both phantoms were calculated using the Monte
Carlo simulation code GATE. Projectors with different statistics were
considered and two methods for reducing noise in the projector were
investigated: one based on principal component analysis (PCA) and the other
consisting in setting small probability values to zero. Energy and spatial
shifts in projection sampling with respect to projector sampling were also
introduced to test F3DMC in realistic conditions. Experiments with the cubic
phantom showed the importance of using simulations with high statistics for
calculating the projector, and the value of filtering the projector using a PCA
approach. F3DMC was shown to be robust with respect to energy shift and small
spatial sampling off-set between the projector and the projections. Images of
the Jaszczak phantom were successfully reconstructed and also showed promising
results in terms of spatial resolution recovery and quantitative accuracy in
small structures. It is concluded that the promising results of F3DMC hold on
realistic data sets.
| no_new_dataset | 0.950227 |
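The two projector noise-reduction strategies described above (PCA-style filtering and zeroing small probabilities) can be sketched with a truncated SVD and a threshold; the rank and threshold values here are illustrative assumptions.

```python
import numpy as np

def denoise_projector(P, rank=None, threshold=None):
    """Sketch of the two strategies mentioned above: (1) keep only the
    leading principal components of the projector matrix, and (2) zero
    out very small probability values."""
    if rank is not None:
        U, s, Vt = np.linalg.svd(P, full_matrices=False)
        P = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank reconstruction
    if threshold is not None:
        P = np.where(P < threshold, 0.0, P)        # drop tiny probabilities
    return P

rng = np.random.default_rng(0)
P = np.abs(rng.normal(size=(100, 100))) * 1e-3     # toy noisy projector
P_clean = denoise_projector(P, rank=20, threshold=1e-4)
print(P_clean.shape)
```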
1508.04924 | Hamid Palangi | Hamid Palangi, Rabab Ward, Li Deng | Distributed Compressive Sensing: A Deep Learning Approach | To appear in IEEE Transactions on Signal Processing | IEEE Transactions on Signal Processing, Volume: 64, Issue: 17, pp.
4504-4518, 2016 | 10.1109/TSP.2016.2557301 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various studies that address the compressed sensing problem with Multiple
Measurement Vectors (MMVs) have recently been carried out. These studies assume the
vectors of the different channels to be jointly sparse. In this paper, we relax
this condition. Instead we assume that these sparse vectors depend on each
other but that this dependency is unknown. We capture this dependency by
computing the conditional probability of each entry in each vector being
non-zero, given the "residuals" of all previous vectors. To estimate these
probabilities, we propose the use of the Long Short-Term Memory (LSTM)[1], a
data driven model for sequence modelling that is deep in time. To calculate the
model parameters, we minimize a cross entropy cost function. To reconstruct the
sparse vectors at the decoder, we propose a greedy solver that uses the above
model to estimate the conditional probabilities. By performing extensive
experiments on two real world datasets, we show that the proposed method
significantly outperforms the general MMV solver (the Simultaneous Orthogonal
Matching Pursuit (SOMP)) and a number of the model-based Bayesian methods. The
proposed method does not add any complexity to the general compressive sensing
encoder. The trained model is used just at the decoder. As the proposed method
is a data driven method, it is only applicable when training data is available.
In many applications however, training data is indeed available, e.g. in
recorded images and videos.
| [
{
"version": "v1",
"created": "Thu, 20 Aug 2015 08:57:29 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Sep 2015 01:15:11 GMT"
},
{
"version": "v3",
"created": "Wed, 11 May 2016 22:18:13 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Palangi",
"Hamid",
""
],
[
"Ward",
"Rabab",
""
],
[
"Deng",
"Li",
""
]
] | TITLE: Distributed Compressive Sensing: A Deep Learning Approach
ABSTRACT: Various studies that address the compressed sensing problem with Multiple
Measurement Vectors (MMVs) have recently been carried out. These studies assume the
vectors of the different channels to be jointly sparse. In this paper, we relax
this condition. Instead we assume that these sparse vectors depend on each
other but that this dependency is unknown. We capture this dependency by
computing the conditional probability of each entry in each vector being
non-zero, given the "residuals" of all previous vectors. To estimate these
probabilities, we propose the use of the Long Short-Term Memory (LSTM)[1], a
data driven model for sequence modelling that is deep in time. To calculate the
model parameters, we minimize a cross entropy cost function. To reconstruct the
sparse vectors at the decoder, we propose a greedy solver that uses the above
model to estimate the conditional probabilities. By performing extensive
experiments on two real world datasets, we show that the proposed method
significantly outperforms the general MMV solver (the Simultaneous Orthogonal
Matching Pursuit (SOMP)) and a number of the model-based Bayesian methods. The
proposed method does not add any complexity to the general compressive sensing
encoder. The trained model is used just at the decoder. As the proposed method
is a data driven method, it is only applicable when training data is available.
In many applications however, training data is indeed available, e.g. in
recorded images and videos.
| no_new_dataset | 0.944638 |
1510.04130 | Jaroslav Fowkes | Jaroslav Fowkes and Charles Sutton | A Bayesian Network Model for Interesting Itemsets | Supplementary material attached as Ancillary File; in PKDD 2016:
European Conference on Machine Learning and Knowledge Discovery in Databases | null | 10.1007/978-3-319-46227-1_26 | null | stat.ML cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining itemsets that are the most interesting under a statistical model of
the underlying data is a commonly used and well-studied technique for
exploratory data analysis, with the most recent interestingness models
exhibiting state of the art performance. Continuing this highly promising line
of work, we propose the first, to the best of our knowledge, generative model
over itemsets, in the form of a Bayesian network, and an associated novel
measure of interestingness. Our model is able to efficiently infer interesting
itemsets directly from the transaction database using structural EM, in which
the E-step employs the greedy approximation to weighted set cover. Our approach
is theoretically simple, straightforward to implement, trivially parallelizable
and retrieves itemsets whose quality is comparable to, if not better than,
existing state of the art algorithms as we demonstrate on several real-world
datasets.
| [
{
"version": "v1",
"created": "Wed, 14 Oct 2015 14:55:17 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2016 11:15:30 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Fowkes",
"Jaroslav",
""
],
[
"Sutton",
"Charles",
""
]
] | TITLE: A Bayesian Network Model for Interesting Itemsets
ABSTRACT: Mining itemsets that are the most interesting under a statistical model of
the underlying data is a commonly used and well-studied technique for
exploratory data analysis, with the most recent interestingness models
exhibiting state of the art performance. Continuing this highly promising line
of work, we propose the first, to the best of our knowledge, generative model
over itemsets, in the form of a Bayesian network, and an associated novel
measure of interestingness. Our model is able to efficiently infer interesting
itemsets directly from the transaction database using structural EM, in which
the E-step employs the greedy approximation to weighted set cover. Our approach
is theoretically simple, straightforward to implement, trivially parallelizable
and retrieves itemsets whose quality is comparable to, if not better than,
existing state of the art algorithms as we demonstrate on several real-world
datasets.
| no_new_dataset | 0.950732 |
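The E-step above employs the greedy approximation to weighted set cover. A textbook version of that greedy rule, not the paper's code, looks like this:

```python
def greedy_weighted_set_cover(universe, sets, weights):
    """Greedy approximation to weighted set cover: repeatedly pick the set
    with the best cost per newly covered element, as in the E-step
    described above."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (s for s in sets if sets[s] & uncovered),
            key=lambda s: weights[s] / len(sets[s] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
weights = {"A": 1.0, "B": 0.5, "C": 1.2, "D": 0.9}
print(greedy_weighted_set_cover({1, 2, 3, 4, 5, 6}, sets, weights))
```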
1602.05012 | Jaroslav Fowkes | Jaroslav Fowkes and Charles Sutton | A Subsequence Interleaving Model for Sequential Pattern Mining | 10 pages in KDD 2016: Proceedings of the 22nd ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining | null | 10.1145/2939672.2939787 | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent sequential pattern mining methods have used the minimum description
length (MDL) principle to define an encoding scheme which describes an
algorithm for mining the most compressing patterns in a database. We present a
novel subsequence interleaving model based on a probabilistic model of the
sequence database, which allows us to search for the most compressing set of
patterns without designing a specific encoding scheme. Our proposed algorithm
is able to efficiently mine the most relevant sequential patterns and rank them
using an associated measure of interestingness. The efficient inference in our
model is a direct result of our use of a structural expectation-maximization
framework, in which the expectation-step takes the form of a submodular
optimization problem subject to a coverage constraint. We show on both
synthetic and real world datasets that our model mines a set of sequential
patterns with low spuriousness and redundancy, high interpretability and
usefulness in real-world applications. Furthermore, we demonstrate that the
quality of the patterns from our approach is comparable to, if not better than,
existing state of the art sequential pattern mining algorithms.
| [
{
"version": "v1",
"created": "Tue, 16 Feb 2016 13:30:10 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2016 10:43:36 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Fowkes",
"Jaroslav",
""
],
[
"Sutton",
"Charles",
""
]
] | TITLE: A Subsequence Interleaving Model for Sequential Pattern Mining
ABSTRACT: Recent sequential pattern mining methods have used the minimum description
length (MDL) principle to define an encoding scheme which describes an
algorithm for mining the most compressing patterns in a database. We present a
novel subsequence interleaving model based on a probabilistic model of the
sequence database, which allows us to search for the most compressing set of
patterns without designing a specific encoding scheme. Our proposed algorithm
is able to efficiently mine the most relevant sequential patterns and rank them
using an associated measure of interestingness. The efficient inference in our
model is a direct result of our use of a structural expectation-maximization
framework, in which the expectation-step takes the form of a submodular
optimization problem subject to a coverage constraint. We show on both
synthetic and real world datasets that our model mines a set of sequential
patterns with low spuriousness and redundancy, high interpretability and
usefulness in real-world applications. Furthermore, we demonstrate that the
quality of the patterns from our approach is comparable to, if not better than,
existing state of the art sequential pattern mining algorithms.
| no_new_dataset | 0.948537 |
1605.03804 | Sandra Avila | Carlos Caetano and Sandra Avila and William Robson Schwartz and Silvio
Jamil F. Guimar\~aes and Arnaldo de A. Ara\'ujo | A Mid-level Video Representation based on Binary Descriptors: A Case
Study for Pornography Detection | Manuscript accepted at Elsevier Neurocomputing | null | 10.1016/j.neucom.2016.03.099 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing amount of inappropriate content on the Internet, such as
pornography, arises the need to detect and filter such material. The reason for
this is given by the fact that such content is often prohibited in certain
environments (e.g., schools and workplaces) or for certain publics (e.g.,
children). In recent years, many works have been mainly focused on detecting
pornographic images and videos based on visual content, particularly on the
detection of skin color. Although these approaches provide good results, they
generally have the disadvantage of a high false positive rate since not all
images with large areas of skin exposure are necessarily pornographic images,
such as people wearing swimsuits or images related to sports. Local feature
based approaches with Bag-of-Words models (BoW) have been successfully applied
to visual recognition tasks in the context of pornography detection. Even
though existing methods provide promising results, they use local feature
descriptors that require a high computational processing time yielding
high-dimensional vectors. In this work, we propose an approach for pornography
detection based on local binary feature extraction and BossaNova image
representation, a BoW model extension that preserves more richly the visual
information. Moreover, we propose two approaches for video description based on
the combination of mid-level representations namely BossaNova Video Descriptor
(BNVD) and BoW Video Descriptor (BoW-VD). The proposed techniques are
promising, achieving an accuracy of 92.40%, thus reducing the classification
error by 16% over the current state-of-the-art local features approach on the
Pornography dataset.
| [
{
"version": "v1",
"created": "Thu, 12 May 2016 13:27:12 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Caetano",
"Carlos",
""
],
[
"Avila",
"Sandra",
""
],
[
"Schwartz",
"William Robson",
""
],
[
"Guimarães",
"Silvio Jamil F.",
""
],
[
"Araújo",
"Arnaldo de A.",
""
]
] | TITLE: A Mid-level Video Representation based on Binary Descriptors: A Case
Study for Pornography Detection
ABSTRACT: With the growing amount of inappropriate content on the Internet, such as
pornography, arises the need to detect and filter such material. The reason for
this is given by the fact that such content is often prohibited in certain
environments (e.g., schools and workplaces) or for certain publics (e.g.,
children). In recent years, many works have been mainly focused on detecting
pornographic images and videos based on visual content, particularly on the
detection of skin color. Although these approaches provide good results, they
generally have the disadvantage of a high false positive rate since not all
images with large areas of skin exposure are necessarily pornographic images,
such as people wearing swimsuits or images related to sports. Local feature
based approaches with Bag-of-Words models (BoW) have been successfully applied
to visual recognition tasks in the context of pornography detection. Even
though existing methods provide promising results, they use local feature
descriptors that require a high computational processing time yielding
high-dimensional vectors. In this work, we propose an approach for pornography
detection based on local binary feature extraction and BossaNova image
representation, a BoW model extension that preserves more richly the visual
information. Moreover, we propose two approaches for video description based on
the combination of mid-level representations namely BossaNova Video Descriptor
(BNVD) and BoW Video Descriptor (BoW-VD). The proposed techniques are
promising, achieving an accuracy of 92.40%, thus reducing the classification
error by 16% over the current state-of-the-art local features approach on the
Pornography dataset.
| no_new_dataset | 0.953535 |
1610.05712 | Mariano Tepper | Mariano Tepper and Guillermo Sapiro | Fast L1-NMF for Multiple Parametric Model Estimation | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we introduce a comprehensive algorithmic pipeline for multiple
parametric model estimation. The proposed approach analyzes the information
produced by a random sampling algorithm (e.g., RANSAC) from a machine
learning/optimization perspective, using a \textit{parameterless} biclustering
algorithm based on L1 nonnegative matrix factorization (L1-NMF). The proposed
framework exploits consistent patterns that naturally arise during the RANSAC
execution, while explicitly avoiding spurious inconsistencies. Contrarily to
the main trends in the literature, the proposed technique does not impose
non-intersecting parametric models. A new accelerated algorithm to compute
L1-NMFs allows to handle medium-sized problems faster while also extending the
usability of the algorithm to much larger datasets. This accelerated algorithm
has applications in any other context where an L1-NMF is needed, beyond the
biclustering approach to parameter estimation here addressed. We accompany the
algorithmic presentation with theoretical foundations and numerous and diverse
examples.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2016 17:20:38 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2016 15:54:14 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Tepper",
"Mariano",
""
],
[
"Sapiro",
"Guillermo",
""
]
] | TITLE: Fast L1-NMF for Multiple Parametric Model Estimation
ABSTRACT: In this work we introduce a comprehensive algorithmic pipeline for multiple
parametric model estimation. The proposed approach analyzes the information
produced by a random sampling algorithm (e.g., RANSAC) from a machine
learning/optimization perspective, using a \textit{parameterless} biclustering
algorithm based on L1 nonnegative matrix factorization (L1-NMF). The proposed
framework exploits consistent patterns that naturally arise during the RANSAC
execution, while explicitly avoiding spurious inconsistencies. Contrary to
the main trends in the literature, the proposed technique does not impose
non-intersecting parametric models. A new accelerated algorithm to compute
L1-NMFs allows to handle medium-sized problems faster while also extending the
usability of the algorithm to much larger datasets. This accelerated algorithm
has applications in any other context where an L1-NMF is needed, beyond the
biclustering approach to parameter estimation here addressed. We accompany the
algorithmic presentation with theoretical foundations and numerous and diverse
examples.
| no_new_dataset | 0.945651 |
1611.01911 | Ponnurangam Kumaraguru | Hemank Lamba, Varun Bharadhwaj, Mayank Vachher, Divyansh Agarwal,
Megha Arora, Ponnurangam Kumaraguru | Me, Myself and My Killfie: Characterizing and Preventing Selfie Deaths | null | null | null | null | cs.SI cs.CY | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Over the past couple of years, clicking and posting selfies has become a
popular trend. However, since March 2014, 127 people have died and many have
been injured while trying to click a selfie. Researchers have studied selfies
for understanding the psychology of the authors, and understanding their effect
on social media platforms. In this work, we perform a comprehensive analysis of
the selfie-related casualties and infer various reasons behind these deaths.
Using inferences from the incidents and our understanding of the features, we
create a system to make people more aware of the dangerous situations in which
these selfies are taken. We use a combination of text-based, image-based and
location-based features to classify a particular selfie as dangerous or not.
Our method, run on 3,155 annotated selfies collected on Twitter, gave 73%
accuracy. Individually, the image-based features were the most informative for
the prediction task. The combination of image-based and location-based features
resulted in the best accuracy. We have made our code and dataset available at
http://labs.precog.iiitd.edu.in/killfie.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 06:52:26 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2016 10:05:12 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Lamba",
"Hemank",
""
],
[
"Bharadhwaj",
"Varun",
""
],
[
"Vachher",
"Mayank",
""
],
[
"Agarwal",
"Divyansh",
""
],
[
"Arora",
"Megha",
""
],
[
"Kumaraguru",
"Ponnurangam",
""
]
] | TITLE: Me, Myself and My Killfie: Characterizing and Preventing Selfie Deaths
ABSTRACT: Over the past couple of years, clicking and posting selfies has become a
popular trend. However, since March 2014, 127 people have died and many have
been injured while trying to click a selfie. Researchers have studied selfies
for understanding the psychology of the authors, and understanding their effect
on social media platforms. In this work, we perform a comprehensive analysis of
the selfie-related casualties and infer various reasons behind these deaths.
Using inferences from the incidents and our understanding of the features, we
create a system to make people more aware of the dangerous situations in which
these selfies are taken. We use a combination of text-based, image-based and
location-based features to classify a particular selfie as dangerous or not.
Our method, run on 3,155 annotated selfies collected on Twitter, gave 73%
accuracy. Individually, the image-based features were the most informative for
the prediction task. The combination of image-based and location-based features
resulted in the best accuracy. We have made our code and dataset available at
http://labs.precog.iiitd.edu.in/killfie.
| new_dataset | 0.963506 |
1611.03578 | Guangxi Li | Guangxi Li, Zenglin Xu, Linnan Wang, Jinmian Ye, Irwin King, Michael
Lyu | Simple and Efficient Parallelization for Probabilistic Temporal Tensor
Factorization | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic Temporal Tensor Factorization (PTTF) is an effective algorithm
to model the temporal tensor data. It leverages a time constraint to capture
the evolving properties of tensor data. Nowadays, exploding dataset sizes demand
large-scale PTTF analysis, and a parallel solution is critical to accommodate
the trend. However, the parallelization of PTTF still remains unexplored. In
this paper, we propose a simple yet efficient Parallel Probabilistic Temporal
Tensor Factorization, referred to as P$^2$T$^2$F, to provide a scalable PTTF
solution. P$^2$T$^2$F is fundamentally disparate from existing parallel tensor
factorizations by considering the probabilistic decomposition and the temporal
effects of tensor data. It adopts a new tensor data split strategy to subdivide
a large tensor into independent sub-tensors, the computation of which is
inherently parallel. We train P$^2$T$^2$F with an efficient algorithm of
stochastic Alternating Direction Method of Multipliers, and show that the
convergence is guaranteed. Experiments on several real-world tensor datasets
demonstrate that P$^2$T$^2$F is a highly effective and efficiently scalable
algorithm dedicated for large scale probabilistic temporal tensor analysis.
| [
{
"version": "v1",
"created": "Fri, 11 Nov 2016 03:54:00 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Li",
"Guangxi",
""
],
[
"Xu",
"Zenglin",
""
],
[
"Wang",
"Linnan",
""
],
[
"Ye",
"Jinmian",
""
],
[
"King",
"Irwin",
""
],
[
"Lyu",
"Michael",
""
]
] | TITLE: Simple and Efficient Parallelization for Probabilistic Temporal Tensor
Factorization
ABSTRACT: Probabilistic Temporal Tensor Factorization (PTTF) is an effective algorithm
to model the temporal tensor data. It leverages a time constraint to capture
the evolving properties of tensor data. Nowadays, exploding dataset sizes demand
large-scale PTTF analysis, and a parallel solution is critical to accommodate
the trend. However, the parallelization of PTTF still remains unexplored. In
this paper, we propose a simple yet efficient Parallel Probabilistic Temporal
Tensor Factorization, referred to as P$^2$T$^2$F, to provide a scalable PTTF
solution. P$^2$T$^2$F is fundamentally disparate from existing parallel tensor
factorizations by considering the probabilistic decomposition and the temporal
effects of tensor data. It adopts a new tensor data split strategy to subdivide
a large tensor into independent sub-tensors, the computation of which is
inherently parallel. We train P$^2$T$^2$F with an efficient algorithm of
stochastic Alternating Direction Method of Multipliers, and show that the
convergence is guaranteed. Experiments on several real-world tensor datasets
demonstrate that P$^2$T$^2$F is a highly effective and efficiently scalable
algorithm dedicated for large scale probabilistic temporal tensor analysis.
| no_new_dataset | 0.946547 |
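The split strategy above subdivides a tensor into sub-tensors whose index ranges are disjoint along every mode, so blocks in the same round touch disjoint rows of the factor matrices and can be updated in parallel. The rotated-block scheme below is a simplified guess at such a strategy, covering only part of the tensor per round, not the paper's actual split.

```python
import numpy as np

def independent_subtensors(T, blocks):
    """Yield rounds of block sub-tensors with pairwise-disjoint index
    ranges in every mode; within one round, factor updates for different
    blocks are independent and can run in parallel. Simplified sketch."""
    I, J, K = T.shape
    i_cut = np.array_split(np.arange(I), blocks)
    j_cut = np.array_split(np.arange(J), blocks)
    k_cut = np.array_split(np.arange(K), blocks)
    for shift in range(blocks):   # rotate mode-2 blocks between rounds
        yield [
            T[np.ix_(i_cut[b], j_cut[(b + shift) % blocks], k_cut[b])]
            for b in range(blocks)
        ]

T = np.random.rand(8, 8, 8)
for round_blocks in independent_subtensors(T, blocks=2):
    print([blk.shape for blk in round_blocks])
```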
1611.03591 | Renlong Hang | Qingshan Liu, Renlong Hang, Huihui Song, Zhi Li | Learning Multi-Scale Deep Features for High-Resolution Satellite Image
Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a multi-scale deep feature learning method for
high-resolution satellite image classification. Specifically, we firstly warp
the original satellite image into multiple different scales. The images in each
scale are employed to train a deep convolutional neural network (DCNN).
However, simultaneously training multiple DCNNs is time-consuming. To address
this issue, we explore DCNNs with spatial pyramid pooling (SPP-net). Since
different SPP-nets have the same number of parameters and share
identical initial values, only the parameters in the fully-connected
layers need fine-tuning to ensure the effectiveness of each network, which
greatly accelerates the training process. Then, the multi-scale satellite
images are fed into their corresponding SPP-nets respectively to extract
multi-scale deep features. Finally, a multiple kernel learning method is
developed to automatically learn the optimal combination of such features.
Experiments on two difficult datasets show that the proposed method achieves
favorable performance compared to other state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 11 Nov 2016 05:31:42 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Liu",
"Qingshan",
""
],
[
"Hang",
"Renlong",
""
],
[
"Song",
"Huihui",
""
],
[
"Li",
"Zhi",
""
]
] | TITLE: Learning Multi-Scale Deep Features for High-Resolution Satellite Image
Classification
ABSTRACT: In this paper, we propose a multi-scale deep feature learning method for
high-resolution satellite image classification. Specifically, we firstly warp
the original satellite image into multiple different scales. The images in each
scale are employed to train a deep convolutional neural network (DCNN).
However, simultaneously training multiple DCNNs is time-consuming. To address
this issue, we explore DCNNs with spatial pyramid pooling (SPP-net). Since
different SPP-nets have the same number of parameters and share
identical initial values, only the parameters in the fully-connected
layers need fine-tuning to ensure the effectiveness of each network, which
greatly accelerates the training process. Then, the multi-scale satellite
images are fed into their corresponding SPP-nets respectively to extract
multi-scale deep features. Finally, a multiple kernel learning method is
developed to automatically learn the optimal combination of such features.
Experiments on two difficult datasets show that the proposed method achieves
favorable performance compared to other state-of-the-art methods.
| no_new_dataset | 0.945951 |
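The key property of SPP-net exploited above is that spatial pyramid pooling yields a fixed-length feature from feature maps of any size, which lets one network consume the multi-scale warped images. A minimal NumPy version of the pooling step:

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a conv feature map (C, H, W) over 1x1, 2x2 and 4x4 grids
    and concatenate the cell maxima, producing a fixed-length vector
    regardless of the input spatial size."""
    C, H, W = fmap.shape
    out = []
    for n in levels:
        hs = np.array_split(np.arange(H), n)
        ws = np.array_split(np.arange(W), n)
        for hi in hs:
            for wi in ws:
                cell = fmap[:, hi[0]:hi[-1] + 1, wi[0]:wi[-1] + 1]
                out.append(cell.max(axis=(1, 2)))
    return np.concatenate(out)          # length C * (1 + 4 + 16)

f = np.random.rand(256, 13, 13)
print(spatial_pyramid_pool(f).shape)    # (5376,)
```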
1611.03607 | Masaya Inoue | Masaya Inoue, Sozo Inoue, Takeshi Nishida | Deep Recurrent Neural Network for Mobile Human Activity Recognition with
High Throughput | 10 pages, 13 figures | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a method of human activity recognition with high
throughput from raw accelerometer data applying a deep recurrent neural network
(DRNN), and investigate various architectures and their combinations to find the
best parameter values. Here, "high throughput" refers to a short time taken for
each recognition. We investigated various parameters and architectures of the DRNN
by using the training dataset of 432 trials with 6 activity classes from 7
people. The maximum recognition rate was 95.42% and 83.43% against the test
data of 108 segmented trials each of which has single activity class and 18
multiple sequential trials, respectively. Here, the maximum recognition rates
by traditional methods were 71.65% and 54.97%, respectively. In addition, the
efficiency of the found parameters was evaluated using an additional dataset.
Further, in terms of recognition throughput per unit time, the constructed
DRNN required only 1.347 [ms], while the best traditional method required
11.031 [ms], which includes 11.027 [ms] for feature calculation. These
advantages are caused by the compact and small architecture of the constructed
real time oriented DRNN.
| [
{
"version": "v1",
"created": "Fri, 11 Nov 2016 08:21:09 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Inoue",
"Masaya",
""
],
[
"Inoue",
"Sozo",
""
],
[
"Nishida",
"Takeshi",
""
]
] | TITLE: Deep Recurrent Neural Network for Mobile Human Activity Recognition with
High Throughput
ABSTRACT: In this paper, we propose a method of human activity recognition with high
throughput from raw accelerometer data applying a deep recurrent neural network
(DRNN), and investigate various architectures and their combinations to find the
best parameter values. Here, "high throughput" refers to a short time taken for
each recognition. We investigated various parameters and architectures of the DRNN
by using the training dataset of 432 trials with 6 activity classes from 7
people. The maximum recognition rate was 95.42% and 83.43% against the test
data of 108 segmented trials each of which has single activity class and 18
multiple sequential trials, respectively. Here, the maximum recognition rates
by traditional methods were 71.65% and 54.97%, respectively. In addition, the
efficiency of the found parameters was evaluated using an additional dataset.
Further, in terms of recognition throughput per unit time, the constructed
DRNN required only 1.347 [ms], while the best traditional method required
11.031 [ms], which includes 11.027 [ms] for feature calculation. These
advantages are caused by the compact and small architecture of the constructed
real time oriented DRNN.
| no_new_dataset | 0.946349 |
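As a rough sketch of a compact recurrent classifier over raw 3-axis accelerometer windows, in the spirit of the DRNN above, one might write the following in PyTorch; layer sizes and the use of the final hidden state are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ActivityDRNN(nn.Module):
    """Minimal LSTM classifier over raw accelerometer windows; all
    hyperparameters here are illustrative assumptions."""
    def __init__(self, n_classes=6, hidden=64, layers=2):
        super().__init__()
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden,
                           num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, 3)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])      # classify from the last time step

model = ActivityDRNN()
logits = model(torch.randn(8, 100, 3))    # 8 windows of 100 samples
print(logits.shape)                       # torch.Size([8, 6])
```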
1611.03608 | Xiatian Zhang | Xiatian Zhang, Fan Yao, Yongjun Tian | Greedy Step Averaging: A parameter-free stochastic optimization method | 23 pages, 24 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present the greedy step averaging (GSA) method, a
parameter-free stochastic optimization algorithm for a variety of machine
learning problems. As a gradient-based optimization method, GSA makes use of
the information from the minimizer of a single sample's loss function, and
takes average strategy to calculate reasonable learning rate sequence. While
most existing gradient-based algorithms introduce an increasing number of hyper
parameters or try to make a trade-off between computational cost and
convergence rate, GSA avoids the manual tuning of learning rate and brings in
no more hyper parameters or extra cost. We perform exhaustive numerical
experiments for logistic and softmax regression to compare our method with
other state-of-the-art methods on 16 datasets. Results show that GSA is robust on
various scenarios.
| [
{
"version": "v1",
"created": "Fri, 11 Nov 2016 08:23:30 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Zhang",
"Xiatian",
""
],
[
"Yao",
"Fan",
""
],
[
"Tian",
"Yongjun",
""
]
] | TITLE: Greedy Step Averaging: A parameter-free stochastic optimization method
ABSTRACT: In this paper we present the greedy step averaging (GSA) method, a
parameter-free stochastic optimization algorithm for a variety of machine
learning problems. As a gradient-based optimization method, GSA makes use of
the information from the minimizer of a single sample's loss function, and
takes an averaging strategy to calculate a reasonable learning rate sequence. While
most existing gradient-based algorithms introduce an increasing number of hyper
parameters or try to make a trade-off between computational cost and
convergence rate, GSA avoids the manual tuning of learning rate and brings in
no more hyper parameters or extra cost. We perform exhaustive numerical
experiments for logistic and softmax regression to compare our method with
other state-of-the-art methods on 16 datasets. Results show that GSA is robust on
various scenarios.
| no_new_dataset | 0.946448 |
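For least squares, the minimizer of a single sample's loss along its gradient has the closed-form step 1/||x||^2, which makes the averaging idea above easy to sketch. The cap on the averaged step below is a stabilizing assumption of this sketch, not necessarily part of GSA.

```python
import numpy as np

def gsa_least_squares(X, y, epochs=5):
    """GSA-flavored least squares: each sample's loss is exactly minimized
    along its gradient by the step 1/||x||^2, and the rate actually used
    is the running average of these greedy steps -- no hand-tuned rate.
    Toy sketch of the idea, not the authors' implementation."""
    n, d = X.shape
    w = np.zeros(d)
    avg_step, t = 0.0, 0
    for _ in range(epochs):
        for i in np.random.permutation(n):
            g = (X[i] @ w - y[i]) * X[i]            # single-sample gradient
            greedy = 1.0 / (X[i] @ X[i] + 1e-12)    # exact per-sample step
            t += 1
            avg_step += (greedy - avg_step) / t     # running average
            # Cap by the greedy step for stability (sketch assumption).
            w -= min(avg_step, greedy) * g
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
print(np.round(gsa_least_squares(X, X @ w_true) - w_true, 3))
```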
1611.03777 | Barak Pearlmutter | At{\i}l{\i}m G\"une\c{s} Baydin and Barak A. Pearlmutter and Jeffrey
Mark Siskind | Tricks from Deep Learning | Extended abstract presented at the AD 2016 Conference, Sep 2016,
Oxford UK | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The deep learning community has devised a diverse set of methods to make
gradient optimization, using large datasets, of large and highly complex models
with deeply cascaded nonlinearities, practical. Taken as a whole, these methods
constitute a breakthrough, allowing computational structures which are quite
wide, very deep, and with an enormous number and variety of free parameters to
be effectively optimized. The result now dominates much of practical machine
learning, with applications in machine translation, computer vision, and speech
recognition. Many of these methods, viewed through the lens of algorithmic
differentiation (AD), can be seen as either addressing issues with the gradient
itself, or finding ways of achieving increased efficiency using tricks that are
AD-related, but not provided by current AD systems.
The goal of this paper is to explain not just those methods of most relevance
to AD, but also the technical constraints and mindset which led to their
discovery. After explaining this context, we present a "laundry list" of
methods developed by the deep learning community. Two of these are discussed in
further mathematical detail: a way to dramatically reduce the size of the tape
when performing reverse-mode AD on a (theoretically) time-reversible process
like an ODE integrator; and a new mathematical insight that allows for the
implementation of a stochastic Newton's method.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 17:57:19 GMT"
}
] | 2016-11-14T00:00:00 | [
[
"Baydin",
"Atılım Güneş",
""
],
[
"Pearlmutter",
"Barak A.",
""
],
[
"Siskind",
"Jeffrey Mark",
""
]
] | TITLE: Tricks from Deep Learning
ABSTRACT: The deep learning community has devised a diverse set of methods to make
gradient optimization, using large datasets, of large and highly complex models
with deeply cascaded nonlinearities, practical. Taken as a whole, these methods
constitute a breakthrough, allowing computational structures which are quite
wide, very deep, and with an enormous number and variety of free parameters to
be effectively optimized. The result now dominates much of practical machine
learning, with applications in machine translation, computer vision, and speech
recognition. Many of these methods, viewed through the lens of algorithmic
differentiation (AD), can be seen as either addressing issues with the gradient
itself, or finding ways of achieving increased efficiency using tricks that are
AD-related, but not provided by current AD systems.
The goal of this paper is to explain not just those methods of most relevance
to AD, but also the technical constraints and mindset which led to their
discovery. After explaining this context, we present a "laundry list" of
methods developed by the deep learning community. Two of these are discussed in
further mathematical detail: a way to dramatically reduce the size of the tape
when performing reverse-mode AD on a (theoretically) time-reversible process
like an ODE integrator; and a new mathematical insight that allows for the
implementation of a stochastic Newton's method.
| no_new_dataset | 0.910187 |
1502.02454 | Thuc Le Ph.D | Thuc Duy Le, Tao Hoang, Jiuyong Li, Lin Liu, and Huawen Liu | A fast PC algorithm for high dimensional causal discovery with
multi-core PCs | Thuc Le, Tao Hoang, Jiuyong Li, Lin Liu, Huawen Liu, Shu Hu, "A fast
PC algorithm for high dimensional causal discovery with multi-core PCs",
IEEE/ACM Transactions on Computational Biology and Bioinformatics,
doi:10.1109/TCBB.2016.2591526 | null | 10.1109/TCBB.2016.2591526 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering causal relationships from observational data is a crucial problem
and it has applications in many research areas. The PC algorithm is the
state-of-the-art constraint-based method for causal discovery. However, the
runtime of the PC algorithm is, in the worst case, exponential in the number
of nodes (variables), and thus it is inefficient when applied to high dimensional
data, e.g. gene expression datasets. On another note, the advancement of
computer hardware in the last decade has resulted in the widespread
availability of multi-core personal computers. There is a significant
motivation for designing a parallelised PC algorithm that is suitable for
personal computers and does not require end users' parallel computing knowledge
beyond their competency in using the PC algorithm. In this paper, we develop
parallel-PC, a fast and memory efficient PC algorithm using the parallel
computing technique. We apply our method to a range of synthetic and real-world
high dimensional datasets. Experimental results on a dataset from the DREAM 5
challenge show that the original PC algorithm could not produce any results
after running more than 24 hours; meanwhile, our parallel-PC algorithm managed
to finish within around 12 hours with a 4-core CPU computer, and less than 6
hours with an 8-core CPU computer. Furthermore, we integrate parallel-PC into a
causal inference method for inferring miRNA-mRNA regulatory relationships. The
experimental results show that parallel-PC helps improve both the efficiency
and accuracy of the causal inference algorithm.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2015 12:15:21 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jul 2015 03:03:16 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Nov 2016 12:23:48 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Le",
"Thuc Duy",
""
],
[
"Hoang",
"Tao",
""
],
[
"Li",
"Jiuyong",
""
],
[
"Liu",
"Lin",
""
],
[
"Liu",
"Huawen",
""
]
] | TITLE: A fast PC algorithm for high dimensional causal discovery with
multi-core PCs
ABSTRACT: Discovering causal relationships from observational data is a crucial problem
and it has applications in many research areas. The PC algorithm is the
state-of-the-art constraint-based method for causal discovery. However, the
runtime of the PC algorithm is, in the worst case, exponential in the number
of nodes (variables), and thus it is inefficient when applied to high dimensional
data, e.g. gene expression datasets. On another note, the advancement of
computer hardware in the last decade has resulted in the widespread
availability of multi-core personal computers. There is a significant
motivation for designing a parallelised PC algorithm that is suitable for
personal computers and does not require end users' parallel computing knowledge
beyond their competency in using the PC algorithm. In this paper, we develop
parallel-PC, a fast and memory efficient PC algorithm using the parallel
computing technique. We apply our method to a range of synthetic and real-world
high dimensional datasets. Experimental results on a dataset from the DREAM 5
challenge show that the original PC algorithm could not produce any results
after running more than 24 hours; meanwhile, our parallel-PC algorithm managed
to finish within around 12 hours with a 4-core CPU computer, and less than 6
hours with an 8-core CPU computer. Furthermore, we integrate parallel-PC into a
causal inference method for inferring miRNA-mRNA regulatory relationships. The
experimental results show that parallel-PC helps improve both the efficiency
and accuracy of the causal inference algorithm.
| no_new_dataset | 0.944434 |
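The parallelization idea is easy to illustrate: the conditional independence tests within each level of the PC algorithm are independent of one another, so they can be mapped over worker processes. Below is a sketch of the order-0 stage only (the paper's implementation is not reproduced here; Fisher's z-test and Python multiprocessing are stand-ins):

```python
import itertools
from math import sqrt, log, erf
from multiprocessing import Pool
import numpy as np

def fisher_z_pvalue(args):
    """Order-0 (marginal) independence test between variables i and j."""
    i, j, data = args                  # data shipped per task for clarity;
    n = data.shape[0]                  # real code would use shared memory
    r = np.corrcoef(data[:, i], data[:, j])[0, 1]
    z = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - 3)
    return i, j, 1.0 - erf(abs(z) / sqrt(2))   # two-sided normal p-value

def parallel_skeleton_level0(data, alpha=0.05, workers=4):
    """Distribute all order-0 CI tests over CPU cores and keep the edges
    whose independence hypothesis is rejected (the surviving skeleton)."""
    d = data.shape[1]
    pairs = [(i, j, data) for i, j in itertools.combinations(range(d), 2)]
    with Pool(workers) as pool:
        results = pool.map(fisher_z_pvalue, pairs)
    return {(i, j) for i, j, p in results if p < alpha}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))
    X[:, 1] += X[:, 0]                 # plant a single dependence
    print(sorted(parallel_skeleton_level0(X)))
```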
1508.07372 | Vijay Gadepally | Vijay Gadepally, Jake Bolewski, Dan Hook, Dylan Hutchison, Ben Miller,
Jeremy Kepner | Graphulo: Linear Algebra Graph Kernels for NoSQL Databases | 10 pages | null | 10.1109/IPDPSW.2015.19 | null | cs.DS cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Big data and the Internet of Things era continue to challenge computational
systems. Several technology solutions such as NoSQL databases have been
developed to deal with this challenge. In order to generate meaningful results
from large datasets, analysts often use a graph representation which provides
an intuitive way to work with the data. Graph vertices can represent users and
events, and edges can represent the relationship between vertices. Graph
algorithms are used to extract meaningful information from these very large
graphs. At MIT, the Graphulo initiative is an effort to perform graph
algorithms directly in NoSQL databases such as Apache Accumulo or SciDB, which
have an inherently sparse data storage scheme. Sparse matrix operations have a
history of efficient implementations and the Graph Basic Linear Algebra
Subprogram (GraphBLAS) community has developed a set of key kernels that can be
used to develop efficient linear algebra operations. However, in order to use
the GraphBLAS kernels, it is important that common graph algorithms be recast
using the linear algebra building blocks. In this article, we look at common
classes of graph algorithms and recast them into linear algebra operations
using the GraphBLAS building blocks.
| [
{
"version": "v1",
"created": "Fri, 28 Aug 2015 23:03:10 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Oct 2015 03:23:10 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Gadepally",
"Vijay",
""
],
[
"Bolewski",
"Jake",
""
],
[
"Hook",
"Dan",
""
],
[
"Hutchison",
"Dylan",
""
],
[
"Miller",
"Ben",
""
],
[
"Kepner",
"Jeremy",
""
]
] | TITLE: Graphulo: Linear Algebra Graph Kernels for NoSQL Databases
ABSTRACT: Big data and the Internet of Things era continue to challenge computational
systems. Several technology solutions such as NoSQL databases have been
developed to deal with this challenge. In order to generate meaningful results
from large datasets, analysts often use a graph representation which provides
an intuitive way to work with the data. Graph vertices can represent users and
events, and edges can represent the relationship between vertices. Graph
algorithms are used to extract meaningful information from these very large
graphs. At MIT, the Graphulo initiative is an effort to perform graph
algorithms directly in NoSQL databases such as Apache Accumulo or SciDB, which
have an inherently sparse data storage scheme. Sparse matrix operations have a
history of efficient implementations and the Graph Basic Linear Algebra
Subprogram (GraphBLAS) community has developed a set of key kernels that can be
used to develop efficient linear algebra operations. However, in order to use
the GraphBLAS kernels, it is important that common graph algorithms be recast
using the linear algebra building blocks. In this article, we look at common
classes of graph algorithms and recast them into linear algebra operations
using the GraphBLAS building blocks.
| no_new_dataset | 0.940298 |
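The recasting this abstract describes can be shown in miniature: breadth-first search becomes repeated sparse matrix-vector products over a suitable semiring. A scipy-based sketch (a stand-in for the Accumulo/GraphBLAS kernels the paper targets):

```python
import numpy as np
from scipy.sparse import csr_matrix

def bfs_linear_algebra(A, source):
    """BFS written as sparse matvecs: the frontier is a vector, and
    A.T @ frontier expands it by one hop (a Boolean-semiring product)."""
    n = A.shape[0]
    depth = np.full(n, -1)
    depth[source] = 0
    frontier = np.zeros(n)
    frontier[source] = 1.0
    level = 0
    while frontier.any():
        level += 1
        reached = (A.T @ frontier) > 0
        new = reached & (depth == -1)
        depth[new] = level
        frontier = new.astype(float)
    return depth

# Tiny directed graph: 0 -> 1 -> 2 and 0 -> 3.
A = csr_matrix((np.ones(3), ([0, 1, 0], [1, 2, 3])), shape=(4, 4))
print(bfs_linear_algebra(A, 0))   # [0 1 2 1]
```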
1611.01584 | Brandon Smith | Brandon M. Smith and Charles R. Dyer | Efficient Branching Cascaded Regression for Face Alignment under
Significant Head Rotation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite much interest in face alignment in recent years, the large majority
of work has focused on near-frontal faces. Algorithms typically break down on
profile faces, or are too slow for real-time applications. In this work we
propose an efficient approach to face alignment that can handle 180 degrees of
head rotation in a unified way (e.g., without resorting to view-based models)
using 2D training data. The foundation of our approach is cascaded shape
regression (CSR), which has emerged recently as the leading strategy. We
propose a generalization of conventional CSRs that we call branching cascaded
regression (BCR). Conventional CSRs are single-track; that is, they progress
from one cascade level to the next in a straight line, with each regressor
attempting to fit the entire dataset. We instead split the regression problem
into two or more simpler ones after each cascade level. Intuitively, each
regressor can then operate on a simpler objective function (i.e., with fewer
conflicting gradient directions). Within the BCR framework, we model and infer
pose-related landmark visibility and face shape simultaneously using Structured
Point Distribution Models (SPDMs). We propose to learn task-specific feature
mapping functions that are adaptive to landmark visibility, and that use SPDM
parameters as regression targets instead of 2D landmark coordinates.
Additionally, we introduce a new in-the-wild dataset of profile faces to
validate our approach.
| [
{
"version": "v1",
"created": "Sat, 5 Nov 2016 01:42:39 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Nov 2016 04:53:39 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Smith",
"Brandon M.",
""
],
[
"Dyer",
"Charles R.",
""
]
] | TITLE: Efficient Branching Cascaded Regression for Face Alignment under
Significant Head Rotation
ABSTRACT: Despite much interest in face alignment in recent years, the large majority
of work has focused on near-frontal faces. Algorithms typically break down on
profile faces, or are too slow for real-time applications. In this work we
propose an efficient approach to face alignment that can handle 180 degrees of
head rotation in a unified way (e.g., without resorting to view-based models)
using 2D training data. The foundation of our approach is cascaded shape
regression (CSR), which has emerged recently as the leading strategy. We
propose a generalization of conventional CSRs that we call branching cascaded
regression (BCR). Conventional CSRs are single-track; that is, they progress
from one cascade level to the next in a straight line, with each regressor
attempting to fit the entire dataset. We instead split the regression problem
into two or more simpler ones after each cascade level. Intuitively, each
regressor can then operate on a simpler objective function (i.e., with fewer
conflicting gradient directions). Within the BCR framework, we model and infer
pose-related landmark visibility and face shape simultaneously using Structured
Point Distribution Models (SPDMs). We propose to learn task-specific feature
mapping functions that are adaptive to landmark visibility, and that use SPDM
parameters as regression targets instead of 2D landmark coordinates.
Additionally, we introduce a new in-the-wild dataset of profile faces to
validate our approach.
| no_new_dataset | 0.949902 |
1611.01880 | Nikita Jain | Nikita Jain and Rachita Gupta | Inductive decision based Real Time Occupancy detector in University
Buildings | 7 Pages 9 Figures, International Journal of Computer Science and
Information Security Vol 14 No 10 2016 | International Journal of Computer Science and Information Security
14 (10) 2016 | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to estimate college campus occupancy for classrooms and labs in
real time has become a major concern for academicians, authorities, and
administrators where a manual attendance marking system is still being
followed. Using a low-budget multi-sensor setup installed in a college
auditorium, the goal is to build a real-time occupancy detector. This paper
presents an inductive, real-time, decision-tree-based classifier that uses a
multi-sensor dataset to detect occupancy. Using simple feature-based
thresholds, reverberation time, sampled at various frequencies over a given
time interval, turns out to be a novel and the most distinguishing feature,
and was used to detect occupancy with an accuracy of %. Adding data from
various other sensors decreased the classification accuracy. The detector
setup can be used in various college buildings to provide a real-time,
centralised occupancy status, thus automating the manual attendance system
currently in use.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 03:02:01 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Jain",
"Nkita",
""
],
[
"Gupta",
"Rachita",
""
]
] | TITLE: Inductive decision based Real Time Occupancy detector in University
Buildings
ABSTRACT: The ability to estimate college campus occupancy for classrooms and labs in
real time has become a major concern for academicians, authorities, and
administrators where a manual attendance marking system is still being
followed. Using a low-budget multi-sensor setup installed in a college
auditorium, the goal is to build a real-time occupancy detector. This paper
presents an inductive, real-time, decision-tree-based classifier that uses a
multi-sensor dataset to detect occupancy. Using simple feature-based
thresholds, reverberation time, sampled at various frequencies over a given
time interval, turns out to be a novel and the most distinguishing feature,
and was used to detect occupancy with an accuracy of %. Adding data from
various other sensors decreased the classification accuracy. The detector
setup can be used in various college buildings to provide a real-time,
centralised occupancy status, thus automating the manual attendance system
currently in use.
| no_new_dataset | 0.934873 |
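Since the abstract's classifier is a standard decision tree over reverberation-time features, a minimal sketch is straightforward. The data below is synthetic, not the paper's; the premise that an occupied room absorbs sound and shortens reverberation time is the assumption driving the class means:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical data: rows are time windows, columns are reverberation
# times (seconds) in three frequency bands; occupied rooms reverberate less.
rng = np.random.default_rng(0)
empty = rng.normal(loc=[1.2, 1.1, 1.0], scale=0.05, size=(100, 3))
occupied = rng.normal(loc=[0.8, 0.7, 0.6], scale=0.05, size=(100, 3))
X = np.vstack([empty, occupied])
y = np.array([0] * 100 + [1] * 100)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # ~1.0 on this toy data
```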
1611.03159 | Kifayat Khan | Kifayat Ullah Khan, Waqas Nawaz, Young-Koo Lee | Scalable Compression of a Weighted Graph | null | null | null | null | cs.DS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A graph is a useful data structure for modelling various real-life phenomena
such as email communications, co-authorship among researchers, interactions
among chemical compounds, and so on. Recording such real-life interactions
produces a massive, knowledge-rich repository of data. However, efficiently
understanding the underlying trends and patterns is hard due to the large
size of the graph. Therefore, this paper presents a scalable compression
solution to compute a summary of a weighted graph. All the aforementioned
interactions from various domains are represented as edge weights in a graph,
so creating a summary graph while considering this vital aspect is necessary
to learn insights into different communication patterns. By evaluating the
proposed method on two real-world, publicly available datasets against a
state-of-the-art technique, we obtain an order-of-magnitude performance gain
and better summarization accuracy.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 01:52:49 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Khan",
"Kifayat Ullah",
""
],
[
"Nawaz",
"Waqas",
""
],
[
"Lee",
"Young-Koo",
""
]
] | TITLE: Scalable Compression of a Weighted Graph
ABSTRACT: A graph is a useful data structure for modelling various real-life phenomena
such as email communications, co-authorship among researchers, interactions
among chemical compounds, and so on. Recording such real-life interactions
produces a massive, knowledge-rich repository of data. However, efficiently
understanding the underlying trends and patterns is hard due to the large
size of the graph. Therefore, this paper presents a scalable compression
solution to compute a summary of a weighted graph. All the aforementioned
interactions from various domains are represented as edge weights in a graph,
so creating a summary graph while considering this vital aspect is necessary
to learn insights into different communication patterns. By evaluating the
proposed method on two real-world, publicly available datasets against a
state-of-the-art technique, we obtain an order-of-magnitude performance gain
and better summarization accuracy.
| no_new_dataset | 0.946892 |
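The core operation of a weighted-graph summary is merging two nodes into a supernode while summing the weights of the parallel edges the merge creates. A minimal sketch of that single step follows (the paper's merge-selection strategy and scoring are not reproduced):

```python
def merge_nodes(adj, u, v):
    """Merge node v into supernode u in a symmetric weighted adjacency
    dict {node: {neighbor: weight}}, summing parallel edge weights."""
    merged = adj.pop(v)
    for nbr, w in merged.items():
        if nbr in (u, v):
            continue                     # drop the internal u-v edge
        adj[nbr].pop(v, None)
        adj[nbr][u] = adj[nbr].get(u, 0) + w
        adj[u][nbr] = adj[u].get(nbr, 0) + w
    adj[u].pop(v, None)
    return adj

adj = {
    "a": {"b": 2, "c": 1},
    "b": {"a": 2, "c": 4},
    "c": {"a": 1, "b": 4, "d": 3},
    "d": {"c": 3},
}
print(merge_nodes(adj, "b", "c"))
# {'a': {'b': 3}, 'b': {'a': 3, 'd': 3}, 'd': {'b': 3}}
```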
1611.03214 | Alexander Novikov | Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, Dmitry Vetrov | Ultimate tensorization: compressing convolutional and FC layers alike | NIPS 2016 workshop: Learning with Tensors: Why Now and How? | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks excel in image recognition tasks, but this
comes at the cost of high computational and memory complexity. To tackle this
problem, [1] developed a tensor factorization framework to compress
fully-connected layers. In this paper, we focus on compressing convolutional
layers. We show that while the direct application of the tensor framework [1]
to the 4-dimensional kernel of convolution does compress the layer, we can do
better. We reshape the convolutional kernel into a tensor of higher order and
factorize it. We combine the proposed approach with the previous work to
compress both convolutional and fully-connected layers of a network and achieve
80x network compression rate with 1.1% accuracy drop on the CIFAR-10 dataset.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 08:07:46 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Garipov",
"Timur",
""
],
[
"Podoprikhin",
"Dmitry",
""
],
[
"Novikov",
"Alexander",
""
],
[
"Vetrov",
"Dmitry",
""
]
] | TITLE: Ultimate tensorization: compressing convolutional and FC layers alike
ABSTRACT: Convolutional neural networks excel in image recognition tasks, but this
comes at the cost of high computational and memory complexity. To tackle this
problem, [1] developed a tensor factorization framework to compress
fully-connected layers. In this paper, we focus on compressing convolutional
layers. We show that while the direct application of the tensor framework [1]
to the 4-dimensional kernel of convolution does compress the layer, we can do
better. We reshape the convolutional kernel into a tensor of higher order and
factorize it. We combine the proposed approach with the previous work to
compress both convolutional and fully-connected layers of a network and achieve
80x network compression rate with 1.1% accuracy drop on the CIFAR-10 dataset.
| no_new_dataset | 0.949106 |
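The parameter arithmetic behind a compression rate like the reported 80x is easy to reproduce. The sketch below counts Tensor Train (TT) parameters for a conv kernel reshaped into a higher-order tensor; the 256 = 4*4*4*4 channel factorization and the uniform TT-rank are illustrative choices, not necessarily the paper's exact settings:

```python
import numpy as np

def tt_param_count(shape, rank):
    """Parameters of a TT decomposition with uniform internal rank:
    core k has shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1."""
    ranks = [1] + [rank] * (len(shape) - 1) + [1]
    return sum(ranks[k] * shape[k] * ranks[k + 1] for k in range(len(shape)))

kernel = (3, 3, 256, 256)               # a 3x3 conv, 256 -> 256 channels
full = int(np.prod(kernel))             # 589824 dense parameters
# Reshape into a higher-order tensor before TT, as the paper advocates:
# merge the spatial modes and factorize each channel mode as 4*4*4*4.
reshaped = (9, 4, 4, 4, 4, 4, 4, 4, 4)
for r in (4, 8, 16):
    print(f"rank {r}: {full / tt_param_count(reshaped, r):.0f}x fewer parameters")
```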
1611.03270 | Adi Dafni | Adi Dafni, Yael Moses and Shai Avidan | Detecting Moving Regions in CrowdCam Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the novel problem of detecting dynamic regions in CrowdCam images,
a set of still images captured by a group of people. These regions capture the
most interesting parts of the scene, and detecting them plays an important role
in the analysis of visual data. Our method is based on the observation that
matching static points must satisfy the epipolar geometry constraints, but
computing exact matches is challenging. Instead, we compute the probability
that a pixel has a match, not necessarily the correct one, along the
corresponding epipolar line. The complement of this probability is not
necessarily the probability of a dynamic point because of occlusions, noise,
and matching errors. Therefore, information from all pairs of images is
aggregated to obtain a high quality dynamic probability map, per image.
Experiments on challenging datasets demonstrate the effectiveness of the
algorithm on a broad range of settings; no prior knowledge about the scene, the
camera characteristics or the camera locations is required.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 11:58:52 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Dafni",
"Adi",
""
],
[
"Moses",
"Yael",
""
],
[
"Avidan",
"Shai",
""
]
] | TITLE: Detecting Moving Regions in CrowdCam Images
ABSTRACT: We address the novel problem of detecting dynamic regions in CrowdCam images,
a set of still images captured by a group of people. These regions capture the
most interesting parts of the scene, and detecting them plays an important role
in the analysis of visual data. Our method is based on the observation that
matching static points must satisfy the epipolar geometry constraints, but
computing exact matches is challenging. Instead, we compute the probability
that a pixel has a match, not necessarily the correct one, along the
corresponding epipolar line. The complement of this probability is not
necessarily the probability of a dynamic point because of occlusions, noise,
and matching errors. Therefore, information from all pairs of images is
aggregated to obtain a high quality dynamic probability map, per image.
Experiments on challenging datasets demonstrate the effectiveness of the
algorithm on a broad range of settings; no prior knowledge about the scene, the
camera characteristics or the camera locations is required.
| no_new_dataset | 0.945801 |
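The abstract's key quantity, the probability that a pixel has some match along its epipolar line, can be sketched with a soft score: the best descriptor similarity among candidate points near the line. The formulation below is a plausible stand-in rather than the paper's model; F, the keypoints, the descriptors, and sigma_d are all assumed inputs:

```python
import numpy as np

def epipolar_line(F, x):
    """l = F @ x (homogeneous), scaled so |l . x'| is point-line distance."""
    l = F @ np.append(x, 1.0)
    return l / np.hypot(l[0], l[1])

def match_score(F, x, desc1, points2, descriptors2, sigma_d=2.0):
    """Soft evidence that pixel x in image 1 matches *something* along its
    epipolar line in image 2; persistently low scores across image pairs
    mark a pixel as dynamic (moving)."""
    l = epipolar_line(F, x)
    homog = np.hstack([points2, np.ones((len(points2), 1))])
    near = np.abs(homog @ l) < 3.0 * sigma_d          # epipolar band
    if not near.any():
        return 0.0
    cand = descriptors2[near]
    sims = cand @ desc1 / (np.linalg.norm(cand, axis=1) * np.linalg.norm(desc1))
    return float(sims.max())

rng = np.random.default_rng(0)
F = rng.normal(size=(3, 3))                  # stand-in fundamental matrix
pts2 = rng.uniform(0, 100, size=(50, 2))     # keypoints in image 2
descs2 = rng.normal(size=(50, 16))           # their descriptors
print(match_score(F, np.array([10.0, 20.0]), descs2[0], pts2, descs2))
```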
1611.03298 | Harsh Nisar | Deshana Desai, Harsh Nisar, Rishab Bhardawaj | Role of Temporal Diversity in Inferring Social Ties Based on
Spatio-Temporal Data | 7 pages, 3 figures | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The last two decades have seen a tremendous surge in research on social
networks and their implications. The studies includes inferring social
relationships, which in turn have been used for target advertising,
recommendations, search customization etc. However, the offline experiences of
human, the conversations with people and face-to-face interactions that govern
our lives interactions have received lesser attention. We introduce DAIICT
Spatio-Temporal Network (DSSN), a spatiotemporal dataset of 0.7 million data
points of continuous location data logged at an interval of every 2 minutes by
mobile phones of 46 subjects. Our research is focused at inferring relationship
strength between students based on the spatiotemporal data and comparing the
results with the self-reported data. In that pursuit we introduce Temporal
Diversity, which we show to be superior in its contribution to predicting
relationship strength than its counterparts. We also explore the evolving
nature of Temporal Diversity with time. Our rich dataset opens various other
avenues of research that require fine-grained location data with bounded
movement of participants within a limited geographical area. The advantage of
having a bounded geographical area such as a university campus is that it
provides us with a microcosm of the real world, where each such geographic zone
has an internal context and function and a high percentage of mobility is
governed by schedules and time-tables. The bounded geographical region in
addition to the age homogeneous population gives us a minute look into the
active internal socialization of students in a university.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 13:42:05 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Desai",
"Deshana",
""
],
[
"Nisar",
"Harsh",
""
],
[
"Bhardawaj",
"Rishab",
""
]
] | TITLE: Role of Temporal Diversity in Inferring Social Ties Based on
Spatio-Temporal Data
ABSTRACT: The last two decades have seen a tremendous surge in research on social
networks and their implications. These studies include inferring social
relationships, which in turn have been used for targeted advertising,
recommendations, search customization, etc. However, our offline experiences,
the conversations with people and face-to-face interactions that govern our
lives, have received less attention. We introduce the DAIICT
Spatio-Temporal Network (DSSN), a spatiotemporal dataset of 0.7 million data
points of continuous location data logged at an interval of every 2 minutes by
mobile phones of 46 subjects. Our research focuses on inferring relationship
strength between students based on the spatiotemporal data and on comparing
the results with self-reported data. In that pursuit we introduce Temporal
Diversity, which we show contributes more to predicting relationship strength
than its counterparts do. We also explore the evolving
nature of Temporal Diversity with time. Our rich dataset opens various other
avenues of research that require fine-grained location data with bounded
movement of participants within a limited geographical area. The advantage of
having a bounded geographical area such as a university campus is that it
provides us with a microcosm of the real world, where each such geographic zone
has an internal context and function and a high percentage of mobility is
governed by schedules and time-tables. The bounded geographical region in
addition to the age homogeneous population gives us a minute look into the
active internal socialization of students in a university.
| new_dataset | 0.967625 |
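The record does not spell out how Temporal Diversity is computed, so the sketch below uses an illustrative proxy: the Shannon entropy of a pair's co-location counts across hour-of-day bins. A pair that meets only during scheduled lectures scores low; a pair that also meets at meals and in the evenings scores high:

```python
import numpy as np

def temporal_diversity(colocation_counts):
    """Entropy (bits) of co-location counts across 24 hourly bins;
    a hypothetical proxy for the paper's Temporal Diversity measure."""
    p = np.asarray(colocation_counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

lecture_only = [0] * 10 + [20, 20, 20] + [0] * 11           # 10:00-13:00 only
spread_out = [0] * 7 + [5, 6, 8, 8, 7, 9, 6, 5, 4, 6, 7, 5] + [0] * 5
print(temporal_diversity(lecture_only))   # ~1.58 bits
print(temporal_diversity(spread_out))     # ~3.5 bits
```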
1611.03313 | Boyu Wang | Boyu Wang, Kevin Yager, Dantong Yu, Minh Hoai | X-ray Scattering Image Classification Using Deep Learning | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Visual inspection of x-ray scattering images is a powerful technique for
probing the physical structure of materials at the molecular scale. In this
paper, we explore the use of deep learning to develop methods for automatically
analyzing x-ray scattering images. In particular, we apply Convolutional Neural
Networks and Convolutional Autoencoders for x-ray scattering image
classification. To acquire enough training data for deep learning, we use
simulation software to generate synthetic x-ray scattering images. Experiments
show that deep learning methods outperform previously published methods by 10\%
on synthetic and real datasets.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 14:32:24 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Wang",
"Boyu",
""
],
[
"Yager",
"Kevin",
""
],
[
"Yu",
"Dantong",
""
],
[
"Hoai",
"Minh",
""
]
] | TITLE: X-ray Scattering Image Classification Using Deep Learning
ABSTRACT: Visual inspection of x-ray scattering images is a powerful technique for
probing the physical structure of materials at the molecular scale. In this
paper, we explore the use of deep learning to develop methods for automatically
analyzing x-ray scattering images. In particular, we apply Convolutional Neural
Networks and Convolutional Autoencoders for x-ray scattering image
classification. To acquire enough training data for deep learning, we use
simulation software to generate synthetic x-ray scattering images. Experiments
show that deep learning methods outperform previously published methods by 10\%
on synthetic and real datasets.
| no_new_dataset | 0.954563 |
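A minimal version of the CNN classifier described above, written in PyTorch; the layer sizes, input resolution, and number of classes are illustrative rather than the paper's:

```python
import torch
import torch.nn as nn

class ScatteringCNN(nn.Module):
    """Small CNN for grayscale x-ray scattering images."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ScatteringCNN(n_classes=10)
batch = torch.randn(4, 1, 128, 128)   # stand-in for simulated images
print(model(batch).shape)             # torch.Size([4, 10])
```

Training on simulated images and evaluating on real ones, as the abstract describes, changes only the data pipeline, not the model.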
1611.03382 | Wenyuan Zeng | Wenyuan Zeng, Wenjie Luo, Sanja Fidler, Raquel Urtasun | Efficient Summarization with Read-Again and Copy Mechanism | 11 pages, 4 figures, 5 tables | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Encoder-decoder models have been widely used to solve sequence to sequence
prediction tasks. However, current approaches suffer from two shortcomings.
First, the encoders compute a representation of each word taking into account
only the history of the words it has read so far, yielding suboptimal
representations. Second, current decoders utilize large vocabularies in order
to minimize the problem of unknown words, resulting in slow decoding times. In
this paper we address both shortcomings. Towards this goal, we first introduce
a simple mechanism that first reads the input sequence before committing to a
representation of each word. Furthermore, we propose a simple copy mechanism
that is able to exploit very small vocabularies and handle out-of-vocabulary
words. We demonstrate the effectiveness of our approach on the Gigaword dataset
and DUC competition outperforming the state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 16:23:04 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Zeng",
"Wenyuan",
""
],
[
"Luo",
"Wenjie",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Urtasun",
"Raquel",
""
]
] | TITLE: Efficient Summarization with Read-Again and Copy Mechanism
ABSTRACT: Encoder-decoder models have been widely used to solve sequence to sequence
prediction tasks. However, current approaches suffer from two shortcomings.
First, the encoders compute a representation of each word taking into account
only the history of the words it has read so far, yielding suboptimal
representations. Second, current decoders utilize large vocabularies in order
to minimize the problem of unknown words, resulting in slow decoding times. In
this paper we address both shortcomings. Towards this goal, we first introduce
a simple mechanism that first reads the input sequence before committing to a
representation of each word. Furthermore, we propose a simple copy mechanism
that is able to exploit very small vocabularies and handle out-of-vocabulary
words. We demonstrate the effectiveness of our approach on the Gigaword dataset
and DUC competition outperforming the state-of-the-art.
| no_new_dataset | 0.947039 |
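The copy mechanism can be illustrated as a mixture of two distributions: a generation distribution over a small vocabulary and a copy distribution induced by attention over source positions. A numpy sketch of one decoder step (a generic pointer-style mixture, not the paper's exact parameterization; p_copy would normally be predicted by the network):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def copy_decoder_step(vocab_logits, attn_scores, src_token_ids, p_copy):
    """Mix generation and copy probabilities so that rare source words
    stay producible even with a very small decoder vocabulary."""
    out = (1.0 - p_copy) * softmax(vocab_logits)   # generate from vocab
    attn = softmax(attn_scores)                    # attention over source
    for pos, tok in enumerate(src_token_ids):      # scatter-add copy mass
        out[tok] += p_copy * attn[pos]
    return out

probs = copy_decoder_step(
    vocab_logits=np.zeros(8),             # uniform generator, for clarity
    attn_scores=np.array([2.0, 0.5, 0.1]),
    src_token_ids=[3, 7, 7],              # source words mapped to vocab ids
    p_copy=0.6,
)
print(probs, probs.sum())                 # a valid distribution (sums to 1)
```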
1611.03383 | Junbo Zhao | Michael Mathieu, Junbo Zhao, Pablo Sprechmann, Aditya Ramesh, Yann
LeCun | Disentangling factors of variation in deep representations using
adversarial training | Conference paper in NIPS 2016 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a conditional generative model for learning to disentangle the
hidden factors of variation within a set of labeled observations, and separate
them into complementary codes. One code summarizes the specified factors of
variation associated with the labels. The other summarizes the remaining
unspecified variability. During training, the only available source of
supervision comes from our ability to distinguish among different observations
belonging to the same class. Examples of such observations include images of a
set of labeled objects captured at different viewpoints, or recordings of a
set of speakers dictating multiple phrases. In both instances, the intra-class
diversity is the source of the unspecified factors of variation: each object is
observed at multiple viewpoints, and each speaker dictates multiple phrases.
Learning to disentangle the specified factors from the unspecified ones becomes
easier when strong supervision is possible. Suppose that during training, we
have access to pairs of images, where each pair shows two different objects
captured from the same viewpoint. This source of alignment allows us to solve
our task using existing methods. However, labels for the unspecified factors
are usually unavailable in realistic scenarios where data acquisition is not
strictly controlled. We address the problem of disentanglement in this more
general setting by combining deep convolutional autoencoders with a form of
adversarial training. Both factors of variation are implicitly captured in the
organization of the learned embedding space, and can be used for solving
single-image analogies. Experimental results on synthetic and real datasets
show that the proposed method is capable of generalizing to unseen classes and
intra-class variabilities.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 16:24:16 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Mathieu",
"Michael",
""
],
[
"Zhao",
"Junbo",
""
],
[
"Sprechmann",
"Pablo",
""
],
[
"Ramesh",
"Aditya",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Disentangling factors of variation in deep representations using
adversarial training
ABSTRACT: We introduce a conditional generative model for learning to disentangle the
hidden factors of variation within a set of labeled observations, and separate
them into complementary codes. One code summarizes the specified factors of
variation associated with the labels. The other summarizes the remaining
unspecified variability. During training, the only available source of
supervision comes from our ability to distinguish among different observations
belonging to the same class. Examples of such observations include images of a
set of labeled objects captured at different viewpoints, or recordings of a
set of speakers dictating multiple phrases. In both instances, the intra-class
diversity is the source of the unspecified factors of variation: each object is
observed at multiple viewpoints, and each speaker dictates multiple phrases.
Learning to disentangle the specified factors from the unspecified ones becomes
easier when strong supervision is possible. Suppose that during training, we
have access to pairs of images, where each pair shows two different objects
captured from the same viewpoint. This source of alignment allows us to solve
our task using existing methods. However, labels for the unspecified factors
are usually unavailable in realistic scenarios where data acquisition is not
strictly controlled. We address the problem of disentanglement in this more
general setting by combining deep convolutional autoencoders with a form of
adversarial training. Both factors of variation are implicitly captured in the
organization of the learned embedding space, and can be used for solving
single-image analogies. Experimental results on synthetic and real datasets
show that the proposed method is capable of generalizing to unseen classes and
intra-class variabilities.
| no_new_dataset | 0.9463 |
1611.03403 | Rui A. P. Perdig\~ao | Rui A. P. Perdig\~ao, Carlos A. L. Pires, Julia Hall | Synergistic Dynamic Theory of Complex Coevolutionary Systems:
Disentangling Nonlinear Spatiotemporal Controls on Precipitation | 40 pages, 10 figures | null | null | null | math.DS physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We formulate a nonlinear synergistic theory of coevolutionary systems,
disentangling and explaining dynamic complexity in terms of fundamental
processes for optimised data analysis and dynamic model design: Dynamic Source
Analysis (DSA). DSA provides a nonlinear dynamical basis for spatiotemporal
datasets or dynamical models, eliminating redundancies and expressing the
system in terms of the smallest number of fundamental processes and
interactions without loss of information. This optimises model design in
dynamical systems, expressing complex coevolution in simple synergistic terms,
yielding physically meaningful spatial and temporal structures. These are
extracted by spatiotemporal decomposition of nonlinearly interacting subspaces
via the novel concept of a Spatiotemporal Coevolution Manifold. Physical
consistency is ensured and mathematical ambiguities are avoided with
fundamental principles on energy minimisation and entropy production. The
relevance of DSA is illustrated by retrieving a non-redundant, synergistic set
of nonlinear geophysical processes exerting control over precipitation in space
and time over the Euro-Atlantic region. For that purpose, a nonlinear
spatiotemporal basis is extracted from geopotential data fields, yielding two
independent dynamic sources dominated respectively by meridional and zonal
circulation gradients. These sources are decomposed into spatial and temporal
structures corresponding to multiscale climate dynamics. The added value of
nonlinear predictability is brought out in the geospatial evaluation and
dynamic simulation of evolving precipitation distributions from the geophysical
controls, using DSA-driven model building and implementation. The simulated
precipitation is found to be in agreement with the observational data, which
it not only describes but also dynamically links and attributes in synergistic
terms of the retrieved dynamic sources.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 17:13:57 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Perdigão",
"Rui A. P.",
""
],
[
"Pires",
"Carlos A. L.",
""
],
[
"Hall",
"Julia",
""
]
] | TITLE: Synergistic Dynamic Theory of Complex Coevolutionary Systems:
Disentangling Nonlinear Spatiotemporal Controls on Precipitation
ABSTRACT: We formulate a nonlinear synergistic theory of coevolutionary systems,
disentangling and explaining dynamic complexity in terms of fundamental
processes for optimised data analysis and dynamic model design: Dynamic Source
Analysis (DSA). DSA provides a nonlinear dynamical basis for spatiotemporal
datasets or dynamical models, eliminating redundancies and expressing the
system in terms of the smallest number of fundamental processes and
interactions without loss of information. This optimises model design in
dynamical systems, expressing complex coevolution in simple synergistic terms,
yielding physically meaningful spatial and temporal structures. These are
extracted by spatiotemporal decomposition of nonlinearly interacting subspaces
via the novel concept of a Spatiotemporal Coevolution Manifold. Physical
consistency is ensured and mathematical ambiguities are avoided with
fundamental principles on energy minimisation and entropy production. The
relevance of DSA is illustrated by retrieving a non-redundant, synergistic set
of nonlinear geophysical processes exerting control over precipitation in space
and time over the Euro-Atlantic region. For that purpose, a nonlinear
spatiotemporal basis is extracted from geopotential data fields, yielding two
independent dynamic sources dominated respectively by meridional and zonal
circulation gradients. These sources are decomposed into spatial and temporal
structures corresponding to multiscale climate dynamics. The added value of
nonlinear predictability is brought out in the geospatial evaluation and
dynamic simulation of evolving precipitation distributions from the geophysical
controls, using DSA-driven model building and implementation. The simulated
precipitation is found to be in agreement with the observational data, which
it not only describes but also dynamically links and attributes in synergistic
terms of the retrieved dynamic sources.
| no_new_dataset | 0.948155 |
1611.03404 | Jeffrey Regier | Jeffrey Regier, Kiran Pamnany, Ryan Giordano, Rollin Thomas, David
Schlegel, Jon McAuliffe and Prabhat | Learning an Astronomical Catalog of the Visible Universe through
Scalable Bayesian Inference | submitting to IPDPS'17 | null | null | null | cs.DC astro-ph.IM cs.LG stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Celeste is a procedure for inferring astronomical catalogs that attains
state-of-the-art scientific results. To date, Celeste has been scaled to at
most hundreds of megabytes of astronomical images: Bayesian posterior inference
is notoriously demanding computationally. In this paper, we report on a
scalable, parallel version of Celeste, suitable for learning catalogs from
modern large-scale astronomical datasets. Our algorithmic innovations include a
fast numerical optimization routine for Bayesian posterior inference and a
statistically efficient scheme for decomposing astronomical optimization
problems into subproblems.
Our scalable implementation is written entirely in Julia, a new high-level
dynamic programming language designed for scientific and numerical computing.
We use Julia's high-level constructs for shared and distributed memory
parallelism, and demonstrate effective load balancing and efficient scaling on
up to 8192 Xeon cores on the NERSC Cori supercomputer.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 17:16:04 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Regier",
"Jeffrey",
""
],
[
"Pamnany",
"Kiran",
""
],
[
"Giordano",
"Ryan",
""
],
[
"Thomas",
"Rollin",
""
],
[
"Schlegel",
"David",
""
],
[
"McAuliffe",
"Jon",
""
],
[
"Prabhat",
"",
""
]
] | TITLE: Learning an Astronomical Catalog of the Visible Universe through
Scalable Bayesian Inference
ABSTRACT: Celeste is a procedure for inferring astronomical catalogs that attains
state-of-the-art scientific results. To date, Celeste has been scaled to at
most hundreds of megabytes of astronomical images: Bayesian posterior inference
is notoriously demanding computationally. In this paper, we report on a
scalable, parallel version of Celeste, suitable for learning catalogs from
modern large-scale astronomical datasets. Our algorithmic innovations include a
fast numerical optimization routine for Bayesian posterior inference and a
statistically efficient scheme for decomposing astronomical optimization
problems into subproblems.
Our scalable implementation is written entirely in Julia, a new high-level
dynamic programming language designed for scientific and numerical computing.
We use Julia's high-level constructs for shared and distributed memory
parallelism, and demonstrate effective load balancing and efficient scaling on
up to 8192 Xeon cores on the NERSC Cori supercomputer.
| no_new_dataset | 0.941815 |
1611.03426 | Ernesto Diaz-Aviles | Avar\'e Stewart, Sara Romano, Nattiya Kanhabua, Sergio Di Martino,
Wolf Siberski, Antonino Mazzeo, Wolfgang Nejdl, and Ernesto Diaz-Aviles | Why is it Difficult to Detect Sudden and Unexpected Epidemic Outbreaks
in Twitter? | ACM CCS Concepts: Applied computing - Health informatics; Information
systems - Web mining; Document filtering; Novelty in information retrieval;
Recommender systems; Human-centered computing - Social media | null | null | null | cs.CY cs.IR cs.SI stat.ML | http://creativecommons.org/licenses/by-sa/4.0/ | Social media services such as Twitter are a valuable source of information
for decision support systems. Many studies have shown that this also holds for
the medical domain, where Twitter is considered a viable tool for public health
officials to sift through relevant information for the early detection,
management, and control of epidemic outbreaks. This is possible due to the
inherent capability of social media services to transmit information faster
than traditional channels. However, the majority of current studies have
limited their scope to the detection of common, seasonal, recurring health
events (e.g., Influenza-like Illness), partially due to the noisy nature of
Twitter data, which makes outbreak detection and management very challenging.
Within the European project M-Eco, we developed a Twitter-based Epidemic
Intelligence (EI) system, which is designed to also handle a more general class
of unexpected and aperiodic outbreaks. In particular, we faced three main
research challenges in this endeavor:
1) dynamic classification to manage terminology evolution of Twitter
messages, 2) alert generation to produce reliable outbreak alerts analyzing the
(noisy) tweet time series, and 3) ranking and recommendation to support domain
experts for better assessment of the generated alerts.
In this paper, we empirically evaluate our proposed approach to these
challenges using real-world outbreak datasets and a large collection of tweets.
We validate our solution with domain experts, describe our experiences, and
give a more realistic view on the benefits and issues of analyzing social media
for public health.
| [
{
"version": "v1",
"created": "Thu, 10 Nov 2016 17:53:33 GMT"
}
] | 2016-11-11T00:00:00 | [
[
"Stewart",
"Avaré",
""
],
[
"Romano",
"Sara",
""
],
[
"Kanhabua",
"Nattiya",
""
],
[
"Di Martino",
"Sergio",
""
],
[
"Siberski",
"Wolf",
""
],
[
"Mazzeo",
"Antonino",
""
],
[
"Nejdl",
"Wolfgang",
""
],
[
"Diaz-Aviles",
"Ernesto",
""
]
] | TITLE: Why is it Difficult to Detect Sudden and Unexpected Epidemic Outbreaks
in Twitter?
ABSTRACT: Social media services such as Twitter are a valuable source of information
for decision support systems. Many studies have shown that this also holds for
the medical domain, where Twitter is considered a viable tool for public health
officials to sift through relevant information for the early detection,
management, and control of epidemic outbreaks. This is possible due to the
inherent capability of social media services to transmit information faster
than traditional channels. However, the majority of current studies have
limited their scope to the detection of common, seasonal, recurring health
events (e.g., Influenza-like Illness), partially due to the noisy nature of
Twitter data, which makes outbreak detection and management very challenging.
Within the European project M-Eco, we developed a Twitter-based Epidemic
Intelligence (EI) system, which is designed to also handle a more general class
of unexpected and aperiodic outbreaks. In particular, we faced three main
research challenges in this endeavor:
1) dynamic classification to manage terminology evolution of Twitter
messages, 2) alert generation to produce reliable outbreak alerts analyzing the
(noisy) tweet time series, and 3) ranking and recommendation to support domain
experts for better assessment of the generated alerts.
In this paper, we empirically evaluate our proposed approach to these
challenges using real-world outbreak datasets and a large collection of tweets.
We validate our solution with domain experts, describe our experiences, and
give a more realistic view on the benefits and issues of analyzing social media
for public health.
| no_new_dataset | 0.944638 |
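Alert generation from a noisy tweet time series, one of the abstract's three challenges, can be sketched with a classical surveillance baseline: flag days whose volume exceeds a moving baseline by several standard deviations (EARS-C2 style). The M-Eco system is more elaborate; the window, gap, and z values here are illustrative:

```python
import numpy as np

def outbreak_alerts(daily_counts, window=7, gap=2, z=3.0):
    """Flag day t when its count exceeds mean + z*std of a trailing
    baseline window, leaving a guard gap so an emerging outbreak does
    not contaminate its own baseline."""
    counts = np.asarray(daily_counts, dtype=float)
    alerts = []
    for t in range(window + gap, len(counts)):
        base = counts[t - window - gap : t - gap]
        mu, sd = base.mean(), base.std(ddof=1)
        if counts[t] > mu + z * max(sd, 1.0):  # variance floor for quiet series
            alerts.append(t)
    return alerts

series = [3, 4, 2, 5, 3, 4, 3, 4, 5, 3, 4, 30, 42, 5, 4]
print(outbreak_alerts(series))   # [11, 12]: the spike days
```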
1604.07287 | Uran Ferizi | Uran Ferizi, Benoit Scherrer, Torben Schneider, Mohammad Alipoor, Odin
Eufracio, Rutger H.J. Fick, Rachid Deriche, Markus Nilsson, Ana K.
Loya-Olivas, Mariano Rivera, Dirk H.J. Poot, Alonso Ramirez-Manzanares, Jose
L. Marroquin, Ariel Rokem, Christian P\"otter, Robert F. Dougherty, Ken
Sakaie, Claudia Wheeler-Kingshott, Simon K. Warfield, Thomas Witzel, Lawrence
L. Wald, Jos\'e G. Raya, Daniel C. Alexander | Diffusion MRI microstructure models with in vivo human brain Connectom
data: results from a multi-group comparison | null | null | null | null | physics.med-ph q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large number of mathematical models have been proposed to describe the
measured signal in diffusion-weighted (DW) magnetic resonance imaging (MRI) and
infer properties about the white matter microstructure. However, a head-to-head
comparison of DW-MRI models is critically missing in the field. To address this
deficiency, we organized the "White Matter Modeling Challenge" during the
International Symposium on Biomedical Imaging (ISBI) 2015 conference. This
competition aimed at identifying the DW-MRI models that best predict unseen DW
data. In vivo DW-MRI data was acquired on the Connectom scanner at the
A.A. Martinos Center (Massachusetts General Hospital) using gradient strengths
of up to 300 mT/m and a broad set of diffusion times. We focused on assessing
the DW signal prediction in two regions: the genu in the corpus callosum, where
the fibres are relatively straight and parallel, and the fornix, where the
configuration of fibres is more complex. The challenge participants had access
to three-quarters of the whole dataset, and their models were ranked on their
ability to predict the remaining unseen quarter of data. In this paper we
provide both an overview and a more in-depth description of each evaluated
model, report the challenge results, and infer trends about the model
characteristics that were associated with high model ranking. This work
provides a much needed benchmark for DW-MRI models. The acquired data and model
details for signal prediction evaluation are provided online to encourage a
larger scale assessment of diffusion models in the future.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 14:44:28 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Nov 2016 14:48:25 GMT"
}
] | 2016-11-10T00:00:00 | [
[
"Ferizi",
"Uran",
""
],
[
"Scherrer",
"Benoit",
""
],
[
"Schneider",
"Torben",
""
],
[
"Alipoor",
"Mohammad",
""
],
[
"Eufracio",
"Odin",
""
],
[
"Fick",
"Rutger H. J.",
""
],
[
"Deriche",
"Rachid",
""
],
[
"Nilsson",
"Markus",
""
],
[
"Loya-Olivas",
"Ana K.",
""
],
[
"Rivera",
"Mariano",
""
],
[
"Poot",
"Dirk H. J.",
""
],
[
"Ramirez-Manzanares",
"Alonso",
""
],
[
"Marroquin",
"Jose L.",
""
],
[
"Rokem",
"Ariel",
""
],
[
"Pötter",
"Christian",
""
],
[
"Dougherty",
"Robert F.",
""
],
[
"Sakaie",
"Ken",
""
],
[
"Wheeler-Kingshott",
"Claudia",
""
],
[
"Warfield",
"Simon K.",
""
],
[
"Witzel",
"Thomas",
""
],
[
"Wald",
"Lawrence L.",
""
],
[
"Raya",
"José G.",
""
],
[
"Alexander",
"Daniel C.",
""
]
] | TITLE: Diffusion MRI microstructure models with in vivo human brain Connectom
data: results from a multi-group comparison
ABSTRACT: A large number of mathematical models have been proposed to describe the
measured signal in diffusion-weighted (DW) magnetic resonance imaging (MRI) and
infer properties about the white matter microstructure. However, a head-to-head
comparison of DW-MRI models is critically missing in the field. To address this
deficiency, we organized the "White Matter Modeling Challenge" during the
International Symposium on Biomedical Imaging (ISBI) 2015 conference. This
competition aimed at identifying the DW-MRI models that best predict unseen DW
data. In vivo DW-MRI data was acquired on the Connectom scanner at the
A.A. Martinos Center (Massachusetts General Hospital) using gradient strengths
of up to 300 mT/m and a broad set of diffusion times. We focused on assessing
the DW signal prediction in two regions: the genu in the corpus callosum, where
the fibres are relatively straight and parallel, and the fornix, where the
configuration of fibres is more complex. The challenge participants had access
to three-quarters of the whole dataset, and their models were ranked on their
ability to predict the remaining unseen quarter of data. In this paper we
provide both an overview and a more in-depth description of each evaluated
model, report the challenge results, and infer trends about the model
characteristics that were associated with high model ranking. This work
provides a much needed benchmark for DW-MRI models. The acquired data and model
details for signal prediction evaluation are provided online to encourage a
larger scale assessment of diffusion models in the future.
| no_new_dataset | 0.948822 |
1606.02245 | Alessandro Sordoni | Alessandro Sordoni and Philip Bachman and Adam Trischler and Yoshua
Bengio | Iterative Alternating Neural Attention for Machine Reading | null | null | null | null | cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel neural attention architecture to tackle machine
comprehension tasks, such as answering Cloze-style queries with respect to a
document. Unlike previous models, we do not collapse the query into a single
vector, instead we deploy an iterative alternating attention mechanism that
allows a fine-grained exploration of both the query and the document. Our model
outperforms state-of-the-art baselines in standard machine comprehension
benchmarks such as CNN news articles and the Children's Book Test (CBT)
dataset.
| [
{
"version": "v1",
"created": "Tue, 7 Jun 2016 18:25:48 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jun 2016 18:17:03 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Jun 2016 21:16:56 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Nov 2016 18:11:09 GMT"
}
] | 2016-11-10T00:00:00 | [
[
"Sordoni",
"Alessandro",
""
],
[
"Bachman",
"Philip",
""
],
[
"Trischler",
"Adam",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Iterative Alternating Neural Attention for Machine Reading
ABSTRACT: We propose a novel neural attention architecture to tackle machine
comprehension tasks, such as answering Cloze-style queries with respect to a
document. Unlike previous models, we do not collapse the query into a single
vector, instead we deploy an iterative alternating attention mechanism that
allows a fine-grained exploration of both the query and the document. Our model
outperforms state-of-the-art baselines in standard machine comprehension
benchmarks such as CNN news articles and the Children's Book Test (CBT)
dataset.
| no_new_dataset | 0.940681 |
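The gist of the architecture, alternating attentive glimpses over the query and the document rather than collapsing the query into one vector, can be sketched in a few lines of numpy. The weight matrices below are random stand-ins for learned parameters, and the recurrence is heavily simplified:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def alternating_attention(Q, D, steps=3, seed=0):
    """Alternate query glimpses and document glimpses, each conditioned
    on a running state s; return final attention over document tokens."""
    d = Q.shape[1]
    rng = np.random.default_rng(seed)
    Wq, Wd, Ws = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
    s = np.zeros(d)
    for _ in range(steps):
        q_glimpse = softmax(Q @ (Wq @ s)) @ Q    # attend over query tokens
        s = np.tanh(Ws @ s + q_glimpse)
        d_glimpse = softmax(D @ (Wd @ s)) @ D    # attend over document
        s = np.tanh(Ws @ s + d_glimpse)
    return softmax(D @ (Wd @ s))                 # candidate answer weights

Q = np.random.default_rng(1).normal(size=(5, 8))    # query token encodings
D = np.random.default_rng(2).normal(size=(40, 8))   # document token encodings
print(alternating_attention(Q, D).shape)            # (40,)
```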
1611.00142 | Binod Bhattarai | Binod Bhattarai, Gaurav Sharma, Frederic Jurie | Deep fusion of visual signatures for client-server facial analysis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial analysis is a key technology for enabling human-machine interaction.
In this context, we present a client-server framework, where a client transmits
the signature of a face to be analyzed to the server, and, in return, the
server sends back various pieces of information describing the face, e.g., is the person
male or female, is she/he bald, does he have a mustache, etc. We assume that a
client can compute one (or a combination) of visual features; from very simple
and efficient features, like Local Binary Patterns, to more complex and
computationally heavy, like Fisher Vectors and CNN based, depending on the
computing resources available. The challenge addressed in this paper is to
design a common universal representation such that a single merged signature is
transmitted to the server, whatever be the type and number of features computed
by the client, ensuring nonetheless an optimal performance. Our solution is
based on learning of a common optimal subspace for aligning the different face
features and merging them into a universal signature. We have validated the
proposed method on the challenging CelebA dataset, on which our method
outperforms existing state-of-the-art methods when rich representation is
available at test time, while giving competitive performance when only simple
signatures (like LBP) are available at test time due to resource constraints on
the client.
| [
{
"version": "v1",
"created": "Tue, 1 Nov 2016 06:57:58 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Nov 2016 10:48:58 GMT"
}
] | 2016-11-10T00:00:00 | [
[
"Bhattarai",
"Binod",
""
],
[
"Sharma",
"Gaurav",
""
],
[
"Jurie",
"Frederic",
""
]
] | TITLE: Deep fusion of visual signatures for client-server facial analysis
ABSTRACT: Facial analysis is a key technology for enabling human-machine interaction.
In this context, we present a client-server framework, where a client transmits
the signature of a face to be analyzed to the server, and, in return, the
server sends back various pieces of information describing the face, e.g., is the person
male or female, is she/he bald, does he have a mustache, etc. We assume that a
client can compute one (or a combination) of visual features; from very simple
and efficient features, like Local Binary Patterns, to more complex and
computationally heavy, like Fisher Vectors and CNN based, depending on the
computing resources available. The challenge addressed in this paper is to
design a common universal representation such that a single merged signature is
transmitted to the server, whatever be the type and number of features computed
by the client, ensuring nonetheless an optimal performance. Our solution is
based on learning of a common optimal subspace for aligning the different face
features and merging them into a universal signature. We have validated the
proposed method on the challenging CelebA dataset, on which our method
outperforms existing state-of-the-art methods when rich representation is
available at test time, while giving competitive performance when only simple
signatures (like LBP) are available at test time due to resource constraints on
the client.
| no_new_dataset | 0.946349 |
1611.02776 | Daoyuan Jia | Daoyuan Jia, Yongchi Su, Chunping Li | Deep Convolutional Neural Network for 6-DOF Image Localization | will update soon | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present an accurate and robust method for six degree of freedom image
localization. Our method has two key points: 1) automatic large-scale photo
synthesis and labeling from a point cloud model, and 2) pose estimation with
deep convolutional neural network regression. Our model directly regresses
6-DOF camera poses from images, accurately describing where and how each image
was captured. We achieved an accuracy within 1 meter and 1 degree on our
outdoor dataset, which covers about 2 acres of our school campus.
| [
{
"version": "v1",
"created": "Tue, 8 Nov 2016 23:59:16 GMT"
}
] | 2016-11-10T00:00:00 | [
[
"Jia",
"Daoyuan",
""
],
[
"Su",
"Yongchi",
""
],
[
"Li",
"Chunping",
""
]
] | TITLE: Deep Convolutional Neural Network for 6-DOF Image Localization
ABSTRACT: We present an accurate and robust method for six degree of freedom image
localization. Our method has two key points: 1) automatic large-scale photo
synthesis and labeling from a point cloud model, and 2) pose estimation with
deep convolutional neural network regression. Our model directly regresses
6-DOF camera poses from images, accurately describing where and how each image
was captured. We achieved an accuracy within 1 meter and 1 degree on our
outdoor dataset, which covers about 2 acres of our school campus.
| no_new_dataset | 0.75005 |
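Direct pose regression of this kind is usually trained with a loss that balances translation against orientation; a PoseNet-style sketch in PyTorch follows (the paper's exact loss is not given in this record, and beta is an assumed weighting):

```python
import torch
import torch.nn.functional as F

def pose_loss(t_pred, q_pred, t_true, q_true, beta=250.0):
    """Translation error plus beta-weighted quaternion error for
    6-DOF camera pose regression."""
    q_pred = F.normalize(q_pred, dim=-1)       # keep unit quaternions
    t_err = (t_pred - t_true).norm(dim=-1)     # metres
    q_err = (q_pred - q_true).norm(dim=-1)     # orientation residual
    return (t_err + beta * q_err).mean()

t_pred, t_true = torch.randn(4, 3), torch.randn(4, 3)
q_pred = torch.randn(4, 4)
q_true = F.normalize(torch.randn(4, 4), dim=-1)
print(pose_loss(t_pred, q_pred, t_true, q_true))
```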
1611.02792 | Dhireesha Kudithipudi | Lennard Streat, Dhireesha Kudithipudi, Kevin Gomez | Non-volatile Hierarchical Temporal Memory: Hardware for Spatial Pooling | null | null | null | null | cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical Temporal Memory (HTM) is a biomimetic machine learning algorithm
imbibing the structural and algorithmic properties of the neocortex. Two main
functional components of HTM that enable spatio-temporal processing are the
spatial pooler and temporal memory. In this research, we explore a scalable
hardware realization of the spatial pooler closely coupled with the
mathematical formulation of the spatial pooler. This class of neuromorphic
algorithms is advantageous in solving a subset of future engineering
problems by extracting nonintuitive patterns in complex data. The proposed
architecture, Non-volatile HTM (NVHTM), leverages large-scale solid-state
flash memory to realize an optimal memory organization, area, and power
envelope. A
behavioral model of NVHTM is evaluated against the MNIST dataset, yielding
91.98% classification accuracy. A full custom layout is developed to validate
the design in a TSMC 180nm process. The area and power profile of the spatial
pooler are 30.538 mm^2 and 64.394 mW, respectively. This design is a
proof-of-concept that storage processing is a viable platform for large-scale
HTM network models.
| [
{
"version": "v1",
"created": "Wed, 9 Nov 2016 01:25:59 GMT"
}
] | 2016-11-10T00:00:00 | [
[
"Streat",
"Lennard",
""
],
[
"Kudithipudi",
"Dhireesha",
""
],
[
"Gomez",
"Kevin",
""
]
] | TITLE: Non-volatile Hierarchical Temporal Memory: Hardware for Spatial Pooling
ABSTRACT: Hierarchical Temporal Memory (HTM) is a biomimetic machine learning algorithm
that mirrors the structural and algorithmic properties of the neocortex. Two main
functional components of HTM that enable spatio-temporal processing are the
spatial pooler and temporal memory. In this research, we explore a scalable
hardware realization of the spatial pooler, closely coupled with its
mathematical formulation. This class of neuromorphic
algorithms is advantageous in solving a subset of future engineering
problems by extracting nonintuitive patterns in complex data. The proposed
architecture, Non-volatile HTM (NVHTM), leverages large-scale solid-state flash
memory to realize an optimal memory organization, area, and power envelope. A
behavioral model of NVHTM is evaluated against the MNIST dataset, yielding
91.98% classification accuracy. A full custom layout is developed to validate
the design in a TSMC 180nm process. The area and power profile of the spatial
pooler are 30.538 mm^2 and 64.394 mW, respectively. This design is a
proof-of-concept that storage processing is a viable platform for large-scale
HTM network models.
| no_new_dataset | 0.946151 |
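The spatial-pooler computation that NVHTM maps to flash hardware has a compact software form: per-column overlap counting followed by global inhibition. A minimal NumPy sketch, with the permanence-learning step omitted (sizes, sparsity, and thresholds are illustrative, not NVHTM's design values):

```python
import numpy as np

def spatial_pooler(x, synapses, perm, thresh=0.5, n_active=40):
    """One HTM spatial-pooler step: a column's overlap is the count of
    its connected synapses (permanence >= thresh) touching active input
    bits; global inhibition keeps the top n_active columns."""
    connected = ((perm >= thresh) & synapses).astype(np.int32)
    overlap = connected @ x                      # per-column overlap
    sdr = np.zeros(len(overlap), dtype=np.int8)
    sdr[np.argsort(overlap)[-n_active:]] = 1     # global inhibition
    return sdr

rng = np.random.default_rng(0)
x = (rng.random(784) < 0.1).astype(np.int32)     # binarized MNIST-like input
synapses = rng.random((2048, 784)) < 0.05        # potential synapse pool
perm = rng.random((2048, 784))                   # synapse permanences
print(spatial_pooler(x, synapses, perm).sum())   # -> 40 active columns
```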
1510.04822 | Massil Achab | Massil Achab (CMAP), Agathe Guilloux (LSTA), St\'ephane Ga\"iffas
(CMAP) and Emmanuel Bacry (CMAP) | SGD with Variance Reduction beyond Empirical Risk Minimization | 17 pages | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a doubly stochastic proximal gradient algorithm for optimizing a
finite average of smooth convex functions, whose gradients depend on
numerically expensive expectations. Our main motivation is the acceleration of
the optimization of the regularized Cox partial-likelihood (the core model used
in survival analysis), but our algorithm can be used in different settings as
well. The proposed algorithm is doubly stochastic in the sense that gradient
steps are done using stochastic gradient descent (SGD) with variance reduction,
where the inner expectations are approximated by a Monte-Carlo Markov-Chain
(MCMC) algorithm. We derive conditions on the MCMC number of iterations
guaranteeing convergence, and obtain a linear rate of convergence under strong
convexity and a sublinear rate without this assumption. We illustrate the fact
that our algorithm improves the state-of-the-art solver for regularized Cox
partial-likelihood on several datasets from survival analysis.
| [
{
"version": "v1",
"created": "Fri, 16 Oct 2015 09:32:24 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Oct 2015 19:45:58 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Nov 2016 09:43:23 GMT"
}
] | 2016-11-09T00:00:00 | [
[
"Achab",
"Massil",
"",
"CMAP"
],
[
"Guilloux",
"Agathe",
"",
"LSTA"
],
[
"Gaïffas",
"Stéphane",
"",
"CMAP"
],
[
"Bacry",
"Emmanuel",
"",
"CMAP"
]
] | TITLE: SGD with Variance Reduction beyond Empirical Risk Minimization
ABSTRACT: We introduce a doubly stochastic proximal gradient algorithm for optimizing a
finite average of smooth convex functions, whose gradients depend on
numerically expensive expectations. Our main motivation is the acceleration of
the optimization of the regularized Cox partial-likelihood (the core model used
in survival analysis), but our algorithm can be used in different settings as
well. The proposed algorithm is doubly stochastic in the sense that gradient
steps are done using stochastic gradient descent (SGD) with variance reduction,
where the inner expectations are approximated by a Markov chain Monte Carlo
(MCMC) algorithm. We derive conditions on the number of MCMC iterations
guaranteeing convergence, and obtain a linear rate of convergence under strong
convexity and a sublinear rate without this assumption. We illustrate
that our algorithm improves on the state-of-the-art solver for regularized Cox
partial-likelihood on several datasets from survival analysis.
| no_new_dataset | 0.947088 |
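Stripped of the MCMC inner loop, the variance-reduced outer scheme the abstract builds on is standard SVRG; in the sketch below the per-term gradients are exact, so only the outer skeleton of the doubly stochastic algorithm is shown (step size and epoch counts are arbitrary):

```python
import numpy as np

def svrg(grad_i, w0, n, lr=0.1, epochs=20, m=100, rng=None):
    """Plain SVRG for min (1/n) sum_i f_i(w): a full gradient anchors
    variance-reduced inner steps. The paper's doubly stochastic variant
    would additionally approximate each grad_i by MCMC; here gradients
    are exact for illustration."""
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    for _ in range(epochs):
        w_anchor = w.copy()
        mu = np.mean([grad_i(w_anchor, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            w -= lr * (grad_i(w, i) - grad_i(w_anchor, i) + mu)
    return w

# toy least squares: f_i(w) = 0.5 * (x_i . w - y_i)^2
rng = np.random.default_rng(1)
X, w_star = rng.normal(size=(50, 3)), np.array([1.0, -2.0, 0.5])
y = X @ w_star
g = lambda w, i: (X[i] @ w - y[i]) * X[i]
print(svrg(g, np.zeros(3), n=50))   # approaches w_star
```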
1512.02109 | Anmer Daskin | Anmer Daskin | Obtaining A Linear Combination of the Principal Components of a Matrix
on Quantum Computers | The title of the paper is changed. A couple of sections are extended.
8 pages and 3 figures | Quantum Inf Process (2016) 15: 4013 | 10.1007/s11128-016-1388-7 | null | quant-ph cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Principal component analysis is a multivariate statistical method frequently
used in science and engineering to reduce the dimension of a problem or extract
the most significant features from a dataset. In this paper, using a notion
similar to quantum counting, we show how to apply amplitude
amplification together with the phase estimation algorithm to an operator in
order to obtain the eigenvectors of the operator associated with the eigenvalues
lying in the range $\left[a, b\right]$, where $a$ and $b$ are real and $0
\leq a \leq b \leq 1$. This makes it possible to obtain a combination of the
eigenvectors associated with the largest eigenvalues, and so can be used to
perform principal component analysis on quantum computers.
| [
{
"version": "v1",
"created": "Thu, 26 Nov 2015 14:31:12 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Dec 2015 13:37:00 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Jan 2016 09:53:59 GMT"
}
] | 2016-11-09T00:00:00 | [
[
"Daskin",
"Anmer",
""
]
] | TITLE: Obtaining A Linear Combination of the Principal Components of a Matrix
on Quantum Computers
ABSTRACT: Principal component analysis is a multivariate statistical method frequently
used in science and engineering to reduce the dimension of a problem or extract
the most significant features from a dataset. In this paper, using a notion
similar to quantum counting, we show how to apply amplitude
amplification together with the phase estimation algorithm to an operator in
order to obtain the eigenvectors of the operator associated with the eigenvalues
lying in the range $\left[a, b\right]$, where $a$ and $b$ are real and $0
\leq a \leq b \leq 1$. This makes it possible to obtain a combination of the
eigenvectors associated with the largest eigenvalues, and so can be used to
perform principal component analysis on quantum computers.
| no_new_dataset | 0.948058 |
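In symbols, the procedure the abstract describes filters an input state onto the spectral band $[a,b]$ of the operator; the notation below is ours, not the paper's:

```latex
% Phase estimation tags each eigenvector |v_j> with its eigenvalue
% \lambda_j; amplitude amplification boosts exactly the tagged band.
\[
|\psi\rangle = \sum_j \alpha_j |v_j\rangle
\;\longmapsto\;
\frac{\sum_{j:\,\lambda_j \in [a,b]} \alpha_j |v_j\rangle}
     {\bigl\| \sum_{j:\,\lambda_j \in [a,b]} \alpha_j |v_j\rangle \bigr\|},
\qquad 0 \le a \le b \le 1 .
\]
```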
1602.05100 | Abolfazl Asudeh | Abolfazl Asudeh and Nan Zhang and Gautam Das | Query Reranking As A Service | Proceedings of the VLDB Endowment (PVLDB), Vol. 9, No. 11, 2016 | Proceedings of the VLDB Endowment (PVLDB), Vol 9, No 11, 2016 | 10.14778/2983200.2983205 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ranked retrieval model has rapidly become the de facto way for search
query processing in client-server databases, especially those on the web.
Despite the extensive efforts in the database community on designing better
ranking functions/mechanisms, many such databases in practice still fail to
address the diverse and sometimes contradicting preferences of users on tuple
ranking, perhaps (at least partially) due to the lack of expertise and/or
motivation of the database owner to design truly effective ranking functions.
This paper takes a different route on addressing the issue by defining a novel
{\em query reranking problem}, i.e., we aim to design a third-party service
that uses nothing but the public search interface of a client-server database
to enable the on-the-fly processing of queries with any user-specified ranking
functions (with or without selection conditions), whether or not the ranking
function is supported by the database. We analyze the worst-case
complexity of the problem and introduce a number of ideas, e.g., on-the-fly
indexing, domination detection and virtual tuple pruning, to reduce the
average-case cost of the query reranking algorithm. We also present extensive
experimental results on real-world datasets, in both offline and live online
systems, that demonstrate the effectiveness of our proposed techniques.
| [
{
"version": "v1",
"created": "Sun, 7 Feb 2016 04:03:26 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Jul 2016 18:47:43 GMT"
}
] | 2016-11-09T00:00:00 | [
[
"Asudeh",
"Abolfazl",
""
],
[
"Zhang",
"Nan",
""
],
[
"Das",
"Gautam",
""
]
] | TITLE: Query Reranking As A Service
ABSTRACT: The ranked retrieval model has rapidly become the de facto way for search
query processing in client-server databases, especially those on the web.
Despite the extensive efforts in the database community on designing better
ranking functions/mechanisms, many such databases in practice still fail to
address the diverse and sometimes contradicting preferences of users on tuple
ranking, perhaps (at least partially) due to the lack of expertise and/or
motivation of the database owner to design truly effective ranking functions.
This paper takes a different route on addressing the issue by defining a novel
{\em query reranking problem}, i.e., we aim to design a third-party service
that uses nothing but the public search interface of a client-server database
to enable the on-the-fly processing of queries with any user-specified ranking
functions (with or without selection conditions), whether or not the ranking
function is supported by the database. We analyze the worst-case
complexity of the problem and introduce a number of ideas, e.g., on-the-fly
indexing, domination detection and virtual tuple pruning, to reduce the
average-case cost of the query reranking algorithm. We also present extensive
experimental results on real-world datasets, in both offline and live online
systems, that demonstrate the effectiveness of our proposed techniques.
| no_new_dataset | 0.946597 |
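The client-side core of query reranking is easy to sketch: pull server-ranked result pages through the public interface and keep the top k under the user's own scoring function. `fetch_page` and `user_score` are assumed stand-ins, and the paper's on-the-fly indexing, domination detection, and virtual tuple pruning are omitted:

```python
import heapq

def rerank(fetch_page, user_score, pages=10, k=5):
    """Third-party reranking: score every tuple returned through the
    public search interface with the user's ranking function and keep
    the k best (a min-heap holds the current top k)."""
    best = []
    for p in range(pages):
        for t in fetch_page(p):
            heapq.heappush(best, (user_score(t), id(t), t))
            if len(best) > k:
                heapq.heappop(best)   # drop the current worst
    return [t for _, _, t in sorted(best, reverse=True)]

# toy usage: the "server" ranks arbitrarily; the user wants cheap first
db = [{"price": p} for p in (5, 3, 9, 1, 7, 2, 8)]
fetch = lambda p: db[p * 3:(p + 1) * 3]            # 3 tuples per "page"
cheap_first = lambda t: -t["price"]
print(rerank(fetch, cheap_first, pages=3, k=2))    # prices 1 and 2
```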
1603.04779 | Yanghao Li | Yanghao Li, Naiyan Wang, Jianping Shi, Jiaying Liu, Xiaodi Hou | Revisiting Batch Normalization For Practical Domain Adaptation | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNN) have shown unprecedented success in various
computer vision applications such as image classification and object detection.
However, it remains a common annoyance that, during the training phase, one has
to prepare at least thousands of labeled images to fine-tune a network to a
specific domain. A recent study (Tommasi et al. 2015) shows that a DNN has strong
dependency towards the training dataset, and the learned features cannot be
easily transferred to a different but relevant task without fine-tuning. In
this paper, we propose a simple yet powerful remedy, called Adaptive Batch
Normalization (AdaBN) to increase the generalization ability of a DNN. By
modulating the statistics in all Batch Normalization layers across the network,
our approach achieves a deep adaptation effect for domain adaptation tasks. In
contrast to other deep learning domain adaptation methods, our method does not
require additional components, and is parameter-free. It achieves
state-of-the-art performance despite its surprising simplicity. Furthermore, we
demonstrate that our method is complementary with other existing methods.
Combining AdaBN with existing domain adaptation treatments may further improve
model performance.
| [
{
"version": "v1",
"created": "Tue, 15 Mar 2016 17:44:32 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2016 03:57:19 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Sep 2016 08:41:43 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Nov 2016 06:11:30 GMT"
}
] | 2016-11-09T00:00:00 | [
[
"Li",
"Yanghao",
""
],
[
"Wang",
"Naiyan",
""
],
[
"Shi",
"Jianping",
""
],
[
"Liu",
"Jiaying",
""
],
[
"Hou",
"Xiaodi",
""
]
] | TITLE: Revisiting Batch Normalization For Practical Domain Adaptation
ABSTRACT: Deep neural networks (DNN) have shown unprecedented success in various
computer vision applications such as image classification and object detection.
However, it remains a common annoyance that, during the training phase, one has
to prepare at least thousands of labeled images to fine-tune a network to a
specific domain. A recent study (Tommasi et al. 2015) shows that a DNN has strong
dependency towards the training dataset, and the learned features cannot be
easily transferred to a different but relevant task without fine-tuning. In
this paper, we propose a simple yet powerful remedy, called Adaptive Batch
Normalization (AdaBN) to increase the generalization ability of a DNN. By
modulating the statistics in all Batch Normalization layers across the network,
our approach achieves a deep adaptation effect for domain adaptation tasks. In
contrast to other deep learning domain adaptation methods, our method does not
require additional components, and is parameter-free. It achieves
state-of-the-art performance despite its surprising simplicity. Furthermore, we
demonstrate that our method is complementary with other existing methods.
Combining AdaBN with existing domain adaptation treatments may further improve
model performance.
| no_new_dataset | 0.945298 |
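The AdaBN recipe reduces to re-estimating Batch Normalization statistics on target-domain data while freezing every learned weight. A minimal sketch in PyTorch (our framework choice, not the authors' code; `target_loader` is an assumed data loader over unlabeled target images):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adabn(model, target_loader):
    """Adaptive Batch Normalization: reset every BN layer's running
    mean/variance and re-estimate them on target-domain batches, leaving
    all learned weights and affine BN parameters untouched. Setting
    momentum=None makes PyTorch keep an exact cumulative average."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()
            m.momentum = None
    model.train()                  # BN updates its stats only in train mode
    for x, _ in target_loader:
        model(x)                   # forward passes only; no parameter updates
    model.eval()
    return model
```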
1609.05413 | Camila Ara\'ujo | Gabriel Magno, Camila Souza Ara\'ujo, Wagner Meira Jr., Virgilio
Almeida | Stereotypes in Search Engine Results: Understanding The Role of Local
and Global Factors | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The internet has been blurring the lines between local and global cultures,
affecting in different ways the perception of people about themselves and
others. In the global context of the internet, search engine platforms are a
key mediator between individuals and information. In this paper, we examine the
local and global impact of the internet on the formation of female physical
attractiveness stereotypes in search engine results. By investigating datasets
of images collected from two major search engines in 42 countries, we identify
a significant fraction of replicated images. We find that common images are
clustered around countries with the same language. We also show that the existence
of common images among countries is practically eliminated when the queries are
limited to local sites. In summary, we show evidence that results from search
engines are biased towards the language used to query the system, which leads
to certain attractiveness stereotypes that often differ markedly from the
majority of the female population of the country.
| [
{
"version": "v1",
"created": "Sun, 18 Sep 2016 01:37:50 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2016 23:43:19 GMT"
}
] | 2016-11-09T00:00:00 | [
[
"Magno",
"Gabriel",
""
],
[
"Araújo",
"Camila Souza",
""
],
[
"Meira",
"Wagner",
"Jr."
],
[
"Almeida",
"Virgilio",
""
]
] | TITLE: Stereotypes in Search Engine Results: Understanding The Role of Local
and Global Factors
ABSTRACT: The internet has been blurring the lines between local and global cultures,
affecting in different ways the perception of people about themselves and
others. In the global context of the internet, search engine platforms are a
key mediator between individuals and information. In this paper, we examine the
local and global impact of the internet on the formation of female physical
attractiveness stereotypes in search engine results. By investigating datasets
of images collected from two major search engines in 42 countries, we identify
a significant fraction of replicated images. We find that common images are
clustered around countries with the same language. We also show that the existence
of common images among countries is practically eliminated when the queries are
limited to local sites. In summary, we show evidence that results from search
engines are biased towards the language used to query the system, which leads
to certain attractiveness stereotypes that often differ markedly from the
majority of the female population of the country.
| no_new_dataset | 0.944944 |
1611.02305 | Xinran He | Xinran He, Ke Xu, David Kempe and Yan Liu | Learning Influence Functions from Incomplete Observations | Full version of paper "Learning Influence Functions from Incomplete
Observations" in NIPS16 | null | null | null | cs.SI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of learning influence functions under incomplete
observations of node activations. Incomplete observations are a major concern
as most (online and real-world) social networks are not fully observable. We
establish both proper and improper PAC learnability of influence functions
under randomly missing observations. Proper PAC learnability under the
Discrete-Time Linear Threshold (DLT) and Discrete-Time Independent Cascade
(DIC) models is established by reducing incomplete observations to complete
observations in a modified graph. Our improper PAC learnability result applies
to the DLT and DIC models as well as the Continuous-Time Independent Cascade
(CIC) model. It is based on a parametrization in terms of reachability
features, and also gives rise to an efficient and practical heuristic.
Experiments on synthetic and real-world datasets demonstrate the ability of our
method to compensate even for a fairly large fraction of missing observations.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 21:28:40 GMT"
}
] | 2016-11-09T00:00:00 | [
[
"He",
"Xinran",
""
],
[
"Xu",
"Ke",
""
],
[
"Kempe",
"David",
""
],
[
"Liu",
"Yan",
""
]
] | TITLE: Learning Influence Functions from Incomplete Observations
ABSTRACT: We study the problem of learning influence functions under incomplete
observations of node activations. Incomplete observations are a major concern
as most (online and real-world) social networks are not fully observable. We
establish both proper and improper PAC learnability of influence functions
under randomly missing observations. Proper PAC learnability under the
Discrete-Time Linear Threshold (DLT) and Discrete-Time Independent Cascade
(DIC) models is established by reducing incomplete observations to complete
observations in a modified graph. Our improper PAC learnability result applies
to the DLT and DIC models as well as the Continuous-Time Independent Cascade
(CIC) model. It is based on a parametrization in terms of reachability
features, and also gives rise to an efficient and practical heuristic.
Experiments on synthetic and real-world datasets demonstrate the ability of our
method to compensate even for a fairly large fraction of missing observations.
| no_new_dataset | 0.946547 |
1611.02329 | Shaunak Bopardikar | Shaunak D. Bopardikar, Alberto Speranzon, Cedric Langbort | Convergence Analysis of Iterated Best Response for a Trusted Computation
Game | Contains detailed proofs of all results as well as an additional
section on "the case of equal means" (Section 5) | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a game of trusted computation in which a sensor equipped with
limited computing power leverages a central node to evaluate a specified
function over a large dataset, collected over time. We assume that the central
computer can be under attack and we propose a strategy where the sensor retains
a limited amount of the data to counteract the effect of attack. We formulate
the problem as a two-player game in which the sensor (defender) chooses an
optimal fusion strategy using both the non-trusted output from the central
computer and locally stored trusted data. The attacker seeks to compromise the
computation by influencing the fused value through malicious manipulation of
the data stored on the central node. We first characterize all Nash equilibria
of this game, which turn out to be dependent on parameters known to both
players. Next we adopt an Iterated Best Response (IBR) scheme in which, at each
iteration, the central computer reveals its output to the sensor, who then
computes its best response based on a linear combination of its private local
estimate and the untrusted third-party output. We characterize necessary and
sufficient conditions for convergence of the IBR along with numerical results
which show that the convergence conditions are relatively tight.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 22:38:32 GMT"
}
] | 2016-11-09T00:00:00 | [
[
"Bopardikar",
"Shaunak D.",
""
],
[
"Speranzon",
"Alberto",
""
],
[
"Langbort",
"Cedric",
""
]
] | TITLE: Convergence Analysis of Iterated Best Response for a Trusted Computation
Game
ABSTRACT: We introduce a game of trusted computation in which a sensor equipped with
limited computing power leverages a central node to evaluate a specified
function over a large dataset, collected over time. We assume that the central
computer can be under attack and we propose a strategy where the sensor retains
a limited amount of the data to counteract the effect of attack. We formulate
the problem as a two-player game in which the sensor (defender) chooses an
optimal fusion strategy using both the non-trusted output from the central
computer and locally stored trusted data. The attacker seeks to compromise the
computation by influencing the fused value through malicious manipulation of
the data stored on the central node. We first characterize all Nash equilibria
of this game, which turn out to be dependent on parameters known to both
players. Next we adopt an Iterated Best Response (IBR) scheme in which, at each
iteration, the central computer reveals its output to the sensor, who then
computes its best response based on a linear combination of its private local
estimate and the untrusted third-party output. We characterize necessary and
sufficient conditions for convergence of the IBR along with numerical results
which show that the convergence conditions are relatively tight.
| no_new_dataset | 0.943191 |
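The alternating scheme whose convergence the paper analyzes can be demonstrated on a toy zero-sum stand-in, where the defender tunes a fusion weight and the attacker a manipulation level; the quadratic payoff below is invented for illustration and is not the paper's sensor-fusion game:

```python
import numpy as np

def iterated_best_response(J, grid, iters=20):
    """Alternating best responses on a toy zero-sum game: the defender
    picks a fusion weight d minimizing J, the attacker a manipulation a
    maximizing it, each by exhaustive search over a finite grid."""
    d, a = grid[0], grid[0]
    for _ in range(iters):
        d = min(grid, key=lambda x: J(x, a))
        a = max(grid, key=lambda x: J(d, x))
    return d, a

# cost: error from trusting the attacked server + variance of local data
sigma_local = 0.5
J = lambda d, a: ((1.0 - d) * a) ** 2 + sigma_local * d ** 2
print(iterated_best_response(J, np.linspace(0.0, 1.0, 101)))
```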
1611.02516 | Miltiadis Allamanis | Miltiadis Allamanis, Earl T. Barr, Ren\'e Just, Charles Sutton | Tailored Mutants Fit Bugs Better | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mutation analysis measures test suite adequacy, the degree to which a test
suite detects seeded faults: one test suite is better than another if it
detects more mutants. Mutation analysis effectiveness rests on the assumption
that mutants are coupled with real faults, i.e., mutant detection is strongly
correlated with real fault detection. The work that validated this also showed
that a large portion of defects remain out of reach.
We introduce tailored mutation operators to reach and capture these defects.
Tailored mutation operators are built from and apply to an existing codebase
and its history. They can, for instance, identify and replay errors specific to
the project for which they are tailored. As our point of departure, we define
tailored mutation operators for identifiers, which mutation analysis has
largely ignored, because there are too many ways to mutate them. Evaluated on
the Defects4J dataset, our new mutation operators create mutants coupled to
14% more faults, compared to traditional mutation operators.
These new mutation operators, however, quadruple the number of mutants. To
combat this problem, we propose a new approach to mutant selection focusing on
the location at which to apply mutation operators and the unnaturalness of the
mutated code. The results demonstrate that the location selection heuristics
produce mutants more closely coupled to real faults for a given budget of
mutation operator applications.
In summary, this paper defines and explores tailored mutation operators,
advancing the state of the art in mutation testing in two ways: 1) it suggests
mutation operators that mutate identifiers and literals, extending mutation
analysis to a new class of faults and 2) it demonstrates that selecting the
location where a mutation operator is applied decreases the number of generated
mutants without affecting the coupling of mutants and real faults.
| [
{
"version": "v1",
"created": "Tue, 8 Nov 2016 13:43:51 GMT"
}
] | 2016-11-09T00:00:00 | [
[
"Allamanis",
"Miltiadis",
""
],
[
"Barr",
"Earl T.",
""
],
[
"Just",
"René",
""
],
[
"Sutton",
"Charles",
""
]
] | TITLE: Tailored Mutants Fit Bugs Better
ABSTRACT: Mutation analysis measures test suite adequacy, the degree to which a test
suite detects seeded faults: one test suite is better than another if it
detects more mutants. Mutation analysis effectiveness rests on the assumption
that mutants are coupled with real faults, i.e., mutant detection is strongly
correlated with real fault detection. The work that validated this also showed
that a large portion of defects remain out of reach.
We introduce tailored mutation operators to reach and capture these defects.
Tailored mutation operators are built from and apply to an existing codebase
and its history. They can, for instance, identify and replay errors specific to
the project for which they are tailored. As our point of departure, we define
tailored mutation operators for identifiers, which mutation analysis has
largely ignored, because there are too many ways to mutate them. Evaluated on
the Defects4J dataset, our new mutation operators create mutants coupled to
14% more faults, compared to traditional mutation operators.
These new mutation operators, however, quadruple the number of mutants. To
combat this problem, we propose a new approach to mutant selection focusing on
the location at which to apply mutation operators and the unnaturalness of the
mutated code. The results demonstrate that the location selection heuristics
produce mutants more closely coupled to real faults for a given budget of
mutation operator applications.
In summary, this paper defines and explores tailored mutation operators,
advancing the state of the art in mutation testing in two ways: 1) it suggests
mutation operators that mutate identifiers and literals, extending mutation
analysis to a new class of faults and 2) it demonstrates that selecting the
location where a mutation operator is applied decreases the number of generated
mutants without affecting the coupling of mutants and real faults.
| no_new_dataset | 0.951684 |
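An identifier mutation of the kind the abstract proposes is a one-line AST rewrite; the sketch below (Python's `ast` module, `ast.unparse` needs Python 3.9+) takes the swapped pair as given, whereas the paper's tailoring step would mine it from the project and its history:

```python
import ast

class IdentifierMutator(ast.NodeTransformer):
    """Swap one name for another that already occurs in the same code,
    mimicking real variable-misuse bugs. Choosing (old, new) from the
    codebase and its history is the tailoring step, omitted here."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

src = "def area(w, h):\n    return w * h\n"
tree = IdentifierMutator("h", "w").visit(ast.parse(src))
print(ast.unparse(tree))   # mutant: return w * w
```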
1611.02624 | Vasileios Kotronis | Rowan Kloti, Bernhard Ager, Vasileios Kotronis, George Nomikos and
Xenofontas Dimitropoulos | A Comparative Look into Public IXP Datasets | ACM Computer Communication Review, Vol. 46 / Issue 1, pages 21-29,
11/1/2016 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Internet eXchange Points (IXPs) are core components of the Internet
infrastructure where Internet Service Providers (ISPs) meet and exchange
traffic. During the last few years, the number and size of IXPs have increased
rapidly, driving the flattening and shortening of Internet paths. However,
understanding the present status of the IXP ecosystem and its potential role in
shaping the future Internet requires rigorous data about IXPs, their presence,
status, participants, etc. In this work, we do the first cross-comparison of
three well-known publicly available IXP databases, namely of PeeringDB,
Euro-IX, and PCH. A key challenge we address is linking IXP identifiers across
databases maintained by different organizations. We find different AS-centric
versus IXP-centric views provided by the databases as a result of their data
collection approaches. In addition, we highlight differences and similarities
w.r.t. IXP participants, geographical coverage, and co-location facilities. As
a side-product of our linkage heuristics, we make publicly available the union
of the three databases, which includes 40.2 % more IXPs and 66.3 % more IXP
participants than the commonly-used PeeringDB. We also publish our analysis
code to foster reproducibility of our experiments and shed preliminary insights
into the accuracy of the union dataset.
| [
{
"version": "v1",
"created": "Tue, 8 Nov 2016 17:38:49 GMT"
}
] | 2016-11-09T00:00:00 | [
[
"Kloti",
"Rowan",
""
],
[
"Ager",
"Bernhard",
""
],
[
"Kotronis",
"Vasileios",
""
],
[
"Nomikos",
"George",
""
],
[
"Dimitropoulos",
"Xenofontas",
""
]
] | TITLE: A Comparative Look into Public IXP Datasets
ABSTRACT: Internet eXchange Points (IXPs) are core components of the Internet
infrastructure where Internet Service Providers (ISPs) meet and exchange
traffic. During the last few years, the number and size of IXPs have increased
rapidly, driving the flattening and shortening of Internet paths. However,
understanding the present status of the IXP ecosystem and its potential role in
shaping the future Internet requires rigorous data about IXPs, their presence,
status, participants, etc. In this work, we perform the first cross-comparison
of three well-known publicly available IXP databases, namely PeeringDB,
Euro-IX, and PCH. A key challenge we address is linking IXP identifiers across
databases maintained by different organizations. We find different AS-centric
versus IXP-centric views provided by the databases as a result of their data
collection approaches. In addition, we highlight differences and similarities
w.r.t. IXP participants, geographical coverage, and co-location facilities. As
a side-product of our linkage heuristics, we make publicly available the union
of the three databases, which includes 40.2 % more IXPs and 66.3 % more IXP
participants than the commonly-used PeeringDB. We also publish our analysis
code to foster reproducibility of our experiments and shed preliminary insights
into the accuracy of the union dataset.
| no_new_dataset | 0.943815 |
1512.01413 | Katherine Bouman | Katherine L. Bouman, Michael D. Johnson, Daniel Zoran, Vincent L.
Fish, Sheperd S. Doeleman, William T. Freeman | Computational Imaging for VLBI Image Reconstruction | Accepted for publication at CVPR 2016, Project Website:
http://vlbiimaging.csail.mit.edu/, Video of Oral Presentation at CVPR June
2016: https://www.youtube.com/watch?v=YgB6o_d4tL8 | IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2016, pp. 913-922 | null | null | astro-ph.IM astro-ph.GA cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Very long baseline interferometry (VLBI) is a technique for imaging celestial
radio emissions by simultaneously observing a source from telescopes
distributed across Earth. The challenges in reconstructing images from fine
angular resolution VLBI data are immense. The data is extremely sparse and
noisy, thus requiring statistical image models such as those designed in the
computer vision community. In this paper we present a novel Bayesian approach
for VLBI image reconstruction. While other methods often require careful tuning
and parameter selection for different types of data, our method (CHIRP)
produces good results under different settings such as low SNR or extended
emission. The success of our method is demonstrated on realistic synthetic
experiments as well as publicly available real data. We present this problem in
a way that is accessible to members of the community, and provide a dataset
website (vlbiimaging.csail.mit.edu) that facilitates controlled comparisons
across algorithms.
| [
{
"version": "v1",
"created": "Fri, 4 Dec 2015 14:11:46 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2016 15:57:40 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Bouman",
"Katherine L.",
""
],
[
"Johnson",
"Michael D.",
""
],
[
"Zoran",
"Daniel",
""
],
[
"Fish",
"Vincent L.",
""
],
[
"Doeleman",
"Sheperd S.",
""
],
[
"Freeman",
"William T.",
""
]
] | TITLE: Computational Imaging for VLBI Image Reconstruction
ABSTRACT: Very long baseline interferometry (VLBI) is a technique for imaging celestial
radio emissions by simultaneously observing a source from telescopes
distributed across Earth. The challenges in reconstructing images from fine
angular resolution VLBI data are immense. The data is extremely sparse and
noisy, thus requiring statistical image models such as those designed in the
computer vision community. In this paper we present a novel Bayesian approach
for VLBI image reconstruction. While other methods often require careful tuning
and parameter selection for different types of data, our method (CHIRP)
produces good results under different settings such as low SNR or extended
emission. The success of our method is demonstrated on realistic synthetic
experiments as well as publicly available real data. We present this problem in
a way that is accessible to members of the community, and provide a dataset
website (vlbiimaging.csail.mit.edu) that facilitates controlled comparisons
across algorithms.
| new_dataset | 0.856632 |
1602.01517 | Keiller Nogueira | Keiller Nogueira, Ot\'avio A. B. Penatti, Jefersson A. dos Santos | Towards Better Exploiting Convolutional Neural Networks for Remote
Sensing Scene Classification | null | null | 10.1016/j.patcog.2016.07.001 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an analysis of three possible strategies for exploiting the power
of existing convolutional neural networks (ConvNets) in different scenarios
from the ones they were trained for: full training, fine-tuning, and using ConvNets
as feature extractors. In many applications, especially including remote
sensing, it is not feasible to fully design and train a new ConvNet, as this
usually requires a considerable amount of labeled data and demands high
computational costs. Therefore, it is important to understand how to obtain the
best profit from existing ConvNets. We perform experiments with six popular
ConvNets using three remote sensing datasets. We also compare ConvNets in each
strategy with existing descriptors and with state-of-the-art baselines. Results
indicate that fine-tuning tends to be the best-performing strategy. In fact, using
the features from the fine-tuned ConvNet with linear SVM obtains the best
results. We also achieved state-of-the-art results for the three datasets used.
| [
{
"version": "v1",
"created": "Thu, 4 Feb 2016 00:53:32 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Nogueira",
"Keiller",
""
],
[
"Penatti",
"Otávio A. B.",
""
],
[
"Santos",
"Jefersson A. dos",
""
]
] | TITLE: Towards Better Exploiting Convolutional Neural Networks for Remote
Sensing Scene Classification
ABSTRACT: We present an analysis of three possible strategies for exploiting the power
of existing convolutional neural networks (ConvNets) in different scenarios
from the ones they were trained for: full training, fine-tuning, and using ConvNets
as feature extractors. In many applications, especially including remote
sensing, it is not feasible to fully design and train a new ConvNet, as this
usually requires a considerable amount of labeled data and demands high
computational costs. Therefore, it is important to understand how to obtain the
best profit from existing ConvNets. We perform experiments with six popular
ConvNets using three remote sensing datasets. We also compare ConvNets in each
strategy with existing descriptors and with state-of-the-art baselines. Results
indicate that fine-tuning tends to be the best-performing strategy. In fact, using
the features from the fine-tuned ConvNet with linear SVM obtains the best
results. We also achieved state-of-the-art results for the three datasets used.
| no_new_dataset | 0.951818 |
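The winning combination the abstract reports, ConvNet features fed to a linear SVM, is a short pipeline. In the sketch below `extract_features` is a hypothetical stand-in for a (fine-tuned) network's penultimate layer, with random vectors keeping the snippet self-contained:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def extract_features(images, dim=512):
    """Stand-in for a frozen or fine-tuned ConvNet feature extractor;
    random vectors here only keep the pipeline runnable."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(images), dim))

images = list(range(200))                      # placeholder "scenes"
labels = np.repeat(np.arange(4), 50)           # 4 land-use classes
feats = extract_features(images)
clf = LinearSVC()                              # linear SVM on deep features
print(cross_val_score(clf, feats, labels).mean())
```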
1606.01865 | Zhengping Che | Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, Yan
Liu | Recurrent Neural Networks for Multivariate Time Series with Missing
Values | null | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multivariate time series data in practical applications, such as health care,
geoscience, and biology, are characterized by a variety of missing values. In
time series prediction and other related tasks, it has been noted that missing
values and their missing patterns are often correlated with the target labels,
a.k.a., informative missingness. There is very limited work on exploiting the
missing patterns for effective imputation and improving prediction performance.
In this paper, we develop novel deep learning models, namely GRU-D, as one of
the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a
state-of-the-art recurrent neural network. It takes two representations of
missing patterns, i.e., masking and time interval, and effectively incorporates
them into a deep model architecture so that it not only captures the long-term
temporal dependencies in time series, but also utilizes the missing patterns to
achieve better prediction results. Experiments of time series classification
tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic
datasets demonstrate that our models achieve state-of-the-art performance and
provide useful insights for better understanding and utilization of missing
values in time series analysis.
| [
{
"version": "v1",
"created": "Mon, 6 Jun 2016 19:08:41 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2016 20:51:29 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Che",
"Zhengping",
""
],
[
"Purushotham",
"Sanjay",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Sontag",
"David",
""
],
[
"Liu",
"Yan",
""
]
] | TITLE: Recurrent Neural Networks for Multivariate Time Series with Missing
Values
ABSTRACT: Multivariate time series data in practical applications, such as health care,
geoscience, and biology, are characterized by a variety of missing values. In
time series prediction and other related tasks, it has been noted that missing
values and their missing patterns are often correlated with the target labels,
a.k.a., informative missingness. There is very limited work on exploiting the
missing patterns for effective imputation and improving prediction performance.
In this paper, we develop novel deep learning models, namely GRU-D, as one of
the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a
state-of-the-art recurrent neural network. It takes two representations of
missing patterns, i.e., masking and time interval, and effectively incorporates
them into a deep model architecture so that it not only captures the long-term
temporal dependencies in time series, but also utilizes the missing patterns to
achieve better prediction results. Experiments of time series classification
tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic
datasets demonstrate that our models achieve state-of-the-art performance and
provide useful insights for better understanding and utilization of missing
values in time series analysis.
| no_new_dataset | 0.948394 |
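GRU-D's use of the two missingness representations can be sketched as an input-decay step: masking selects observed values, and the time interval drives a learned decay from the last observation toward the empirical mean. W and b are learned in the real model; the random values below only demonstrate the mechanics:

```python
import numpy as np

def grud_impute(x, mask, delta, x_mean, W, b):
    """GRU-D input decay: a missing variable (mask = 0) decays from its
    last observed value toward the empirical mean as the time gap
    `delta` since the last observation grows."""
    gamma = np.exp(-np.maximum(0.0, W * delta + b))   # decay in (0, 1]
    x_hat = np.empty_like(x)
    last = x_mean.copy()                              # before any observation
    for t in range(len(x)):
        obs = mask[t] == 1
        x_hat[t] = np.where(obs, x[t],
                            gamma[t] * last + (1 - gamma[t]) * x_mean)
        last = np.where(obs, x[t], last)              # carry last observation
    return x_hat

T, D = 5, 2
rng = np.random.default_rng(0)
x, mask = rng.normal(size=(T, D)), rng.integers(0, 2, (T, D))
delta, x_mean = rng.random((T, D)), np.zeros(D)
print(grud_impute(x, mask, delta, x_mean, rng.random(D), np.zeros(D)))
```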
1608.07905 | Shuohang Wang | Shuohang Wang and Jing Jiang | Machine Comprehension Using Match-LSTM and Answer Pointer | 11 pages; 3 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine comprehension of text is an important problem in natural language
processing. A recently released dataset, the Stanford Question Answering
Dataset (SQuAD), offers a large number of real questions and their answers
created by humans through crowdsourcing. SQuAD provides a challenging testbed
for evaluating machine comprehension algorithms, partly because compared with
previous datasets, in SQuAD the answers do not come from a small set of
candidate answers and they have variable lengths. We propose an end-to-end
neural architecture for the task. The architecture is based on match-LSTM, a
model we proposed previously for textual entailment, and Pointer Net, a
sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the
output tokens to be from the input sequences. We propose two ways of using
Pointer Net for our task. Our experiments show that both of our two models
substantially outperform the best results obtained by Rajpurkar et al.(2016)
using logistic regression and manually crafted features.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2016 03:42:50 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2016 03:39:40 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Wang",
"Shuohang",
""
],
[
"Jiang",
"Jing",
""
]
] | TITLE: Machine Comprehension Using Match-LSTM and Answer Pointer
ABSTRACT: Machine comprehension of text is an important problem in natural language
processing. A recently released dataset, the Stanford Question Answering
Dataset (SQuAD), offers a large number of real questions and their answers
created by humans through crowdsourcing. SQuAD provides a challenging testbed
for evaluating machine comprehension algorithms, partly because compared with
previous datasets, in SQuAD the answers do not come from a small set of
candidate answers and they have variable lengths. We propose an end-to-end
neural architecture for the task. The architecture is based on match-LSTM, a
model we proposed previously for textual entailment, and Pointer Net, a
sequence-to-sequence model proposed by Vinyals et al.(2015) to constrain the
output tokens to be from the input sequences. We propose two ways of using
Pointer Net for our task. Our experiments show that both of our two models
substantially outperform the best results obtained by Rajpurkar et al.(2016)
using logistic regression and manually crafted features.
| new_dataset | 0.948489 |
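The boundary way of using Pointer Net emits one distribution over passage positions for the answer start and one for the end; decoding picks the span maximizing their product (the span-length cap below is a common practical assumption, not from the paper):

```python
import numpy as np

def best_span(p_start, p_end, max_len=15):
    """Boundary-model decoding: return (s, e) with s <= e maximizing
    p_start[s] * p_end[e], capping the span length at max_len."""
    best, arg = -1.0, (0, 0)
    for s, ps in enumerate(p_start):
        for e in range(s, min(s + max_len, len(p_end))):
            if ps * p_end[e] > best:
                best, arg = ps * p_end[e], (s, e)
    return arg

p_start = np.array([0.1, 0.6, 0.2, 0.1])
p_end   = np.array([0.1, 0.1, 0.7, 0.1])
print(best_span(p_start, p_end))   # -> (1, 2)
```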
1610.04900 | Cheng Tang | Cheng Tang, Claire Monteleoni | Convergence rate of stochastic k-means | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze online and mini-batch k-means variants. Both scale up the widely
used Lloyd's algorithm via stochastic approximation, and have become popular
for large-scale clustering and unsupervised feature learning. We show, for the
first time, that they have global convergence towards local optima at
$O(\frac{1}{t})$ rate under general conditions. In addition, we show if the
dataset is clusterable, with suitable initialization, mini-batch k-means
converges to an optimal k-means solution with $O(\frac{1}{t})$ convergence rate
with high probability. The k-means objective is non-convex and
non-differentiable: we exploit ideas from non-convex gradient-based
optimization by providing a novel characterization of the trajectory of k-means
algorithm on its solution space, and circumvent its non-differentiability via
geometric insights about k-means update.
| [
{
"version": "v1",
"created": "Sun, 16 Oct 2016 18:59:59 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Nov 2016 18:20:06 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Tang",
"Cheng",
""
],
[
"Monteleoni",
"Claire",
""
]
] | TITLE: Convergence rate of stochastic k-means
ABSTRACT: We analyze online and mini-batch k-means variants. Both scale up the widely
used Lloyd's algorithm via stochastic approximation, and have become popular
for large-scale clustering and unsupervised feature learning. We show, for the
first time, that they have global convergence towards local optima at
$O(\frac{1}{t})$ rate under general conditions. In addition, we show if the
dataset is clusterable, with suitable initialization, mini-batch k-means
converges to an optimal k-means solution with $O(\frac{1}{t})$ convergence rate
with high probability. The k-means objective is non-convex and
non-differentiable: we exploit ideas from non-convex gradient-based
optimization by providing a novel characterization of the trajectory of k-means
algorithm on its solution space, and circumvent its non-differentiability via
geometric insights about k-means update.
| no_new_dataset | 0.945701 |
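The mini-batch variant whose $O(1/t)$ convergence the paper establishes uses a per-center step size $1/t_j$, with $t_j$ the number of points ever assigned to center $j$; a NumPy sketch:

```python
import numpy as np

def minibatch_kmeans(X, k, batch=32, iters=500, rng=None):
    """Mini-batch k-means: assign a small batch to the nearest centers,
    then move each center toward its points with step size 1/t_j."""
    rng = rng or np.random.default_rng(0)
    C = X[rng.choice(len(X), k, replace=False)].copy()
    counts = np.zeros(k)
    for _ in range(iters):
        B = X[rng.choice(len(X), batch)]
        assign = np.argmin(((B[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            for p in B[assign == j]:
                counts[j] += 1
                C[j] += (p - C[j]) / counts[j]   # step size 1/t_j
    return C

X = np.vstack([np.random.default_rng(1).normal(m, 0.1, (100, 2))
               for m in (0.0, 1.0, 2.0)])
print(minibatch_kmeans(X, 3))   # centers near (0,0), (1,1), (2,2)
```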
1611.01586 | Gang Niu | Marthinus C. du Plessis, Gang Niu, and Masashi Sugiyama | Class-prior Estimation for Learning from Positive and Unlabeled Data | To appear in Machine Learning | null | 10.1007/s10994-016-5604-6 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of estimating the class prior in an unlabeled
dataset. Under the assumption that an additional labeled dataset is available,
the class prior can be estimated by fitting a mixture of class-wise data
distributions to the unlabeled data distribution. However, in practice, such an
additional labeled dataset is often not available. In this paper, we show that,
with additional samples coming only from the positive class, the class prior of
the unlabeled dataset can be estimated correctly. Our key idea is to use
properly penalized divergences for model fitting to cancel the error caused by
the absence of negative samples. We further show that the use of the penalized
$L_1$-distance gives a computationally efficient algorithm with an analytic
solution. The consistency, stability, and estimation error are theoretically
analyzed. Finally, we experimentally demonstrate the usefulness of the proposed
method.
| [
{
"version": "v1",
"created": "Sat, 5 Nov 2016 01:58:12 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Plessis",
"Marthinus C. du",
""
],
[
"Niu",
"Gang",
""
],
[
"Sugiyama",
"Masashi",
""
]
] | TITLE: Class-prior Estimation for Learning from Positive and Unlabeled Data
ABSTRACT: We consider the problem of estimating the class prior in an unlabeled
dataset. Under the assumption that an additional labeled dataset is available,
the class prior can be estimated by fitting a mixture of class-wise data
distributions to the unlabeled data distribution. However, in practice, such an
additional labeled dataset is often not available. In this paper, we show that,
with additional samples coming only from the positive class, the class prior of
the unlabeled dataset can be estimated correctly. Our key idea is to use
properly penalized divergences for model fitting to cancel the error caused by
the absence of negative samples. We further show that the use of the penalized
$L_1$-distance gives a computationally efficient algorithm with an analytic
solution. The consistency, stability, and estimation error are theoretically
analyzed. Finally, we experimentally demonstrate the usefulness of the proposed
method.
| no_new_dataset | 0.943764 |
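The fitting problem in the abstract can be illustrated with histograms: choose the mixing weight that best matches the scaled positive density to the unlabeled one. The plain L1 fit below is a deliberate simplification; the paper's penalized divergence corrects the bias this naive fit suffers from having no negative samples:

```python
import numpy as np

def class_prior(pos, unlabeled, bins=30):
    """Pick pi minimizing the L1 distance between pi * p_pos and the
    unlabeled density, both estimated by shared-range histograms."""
    lo = min(pos.min(), unlabeled.min())
    hi = max(pos.max(), unlabeled.max())
    p, edges = np.histogram(pos, bins, range=(lo, hi), density=True)
    u, _ = np.histogram(unlabeled, bins, range=(lo, hi), density=True)
    w = edges[1] - edges[0]
    grid = np.linspace(0.0, 1.0, 201)
    l1 = [np.abs(pi * p - u).sum() * w for pi in grid]
    return grid[int(np.argmin(l1))]

rng = np.random.default_rng(0)
pos = rng.normal(2.0, 1.0, 2000)
unl = np.concatenate([rng.normal(2.0, 1.0, 600),      # positives in the mix
                      rng.normal(-2.0, 1.0, 1400)])   # negatives
print(class_prior(pos, unl))   # roughly the true prior 0.3
```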
1611.01640 | Jiedong Hao | Jiedong Hao, Jing Dong, Wei Wang, Tieniu Tan | What Is the Best Practice for CNNs Applied to Visual Instance Retrieval? | The verison submitted to ICLR | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous work has shown that feature maps of deep convolutional neural
networks (CNNs) can be interpreted as feature representation of a particular
image region. Features aggregated from these feature maps have been exploited
for image retrieval tasks and achieved state-of-the-art performances in recent
years. The key to the success of such methods is the feature representation.
However, the different factors that impact the effectiveness of features are
still not explored thoroughly. There is much less discussion about the best
combination of them.
The main contribution of our paper is a thorough evaluation of the various
factors that affect the discriminative ability of the features extracted from
CNNs. Based on the evaluation results, we also identify the best choices for
different factors and propose a new multi-scale image feature representation
method to encode the image effectively. Finally, we show that the proposed
method generalises well and outperforms the state-of-the-art methods on four
typical datasets used for visual instance retrieval.
| [
{
"version": "v1",
"created": "Sat, 5 Nov 2016 12:44:40 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Hao",
"Jiedong",
""
],
[
"Dong",
"Jing",
""
],
[
"Wang",
"Wei",
""
],
[
"Tan",
"Tieniu",
""
]
] | TITLE: What Is the Best Practice for CNNs Applied to Visual Instance Retrieval?
ABSTRACT: Previous work has shown that feature maps of deep convolutional neural
networks (CNNs) can be interpreted as feature representation of a particular
image region. Features aggregated from these feature maps have been exploited
for image retrieval tasks and achieved state-of-the-art performances in recent
years. The key to the success of such methods is the feature representation.
However, the different factors that impact the effectiveness of features are
still not explored thoroughly. There is much less discussion about the best
combination of them.
The main contribution of our paper is a thorough evaluation of the various
factors that affect the discriminative ability of the features extracted from
CNNs. Based on the evaluation results, we also identify the best choices for
different factors and propose a new multi-scale image feature representation
method to encode the image effectively. Finally, we show that the proposed
method generalises well and outperforms the state-of-the-art methods on four
typical datasets used for visual instance retrieval.
| no_new_dataset | 0.9455 |
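A multi-scale representation of the kind the abstract proposes can be sketched by pooling conv feature maps at several input scales and normalizing; the sum-pooling and L2 steps below follow common retrieval practice and are assumptions standing in for the paper's identified best combination:

```python
import numpy as np

def aggregate(feature_maps):
    """Sum-pool each C x H x W map to a C-dim vector, L2-normalize per
    scale, average across scales, and L2-normalize again to obtain one
    retrieval signature per image."""
    vecs = []
    for f in feature_maps:                     # one map per image scale
        v = f.sum(axis=(1, 2))                 # sum-pool over H, W
        vecs.append(v / (np.linalg.norm(v) + 1e-12))
    sig = np.mean(vecs, axis=0)
    return sig / (np.linalg.norm(sig) + 1e-12)

rng = np.random.default_rng(0)
maps = [rng.random((256, s, s)) for s in (7, 14, 28)]   # three scales
print(aggregate(maps).shape)   # (256,)
```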
1611.01646 | Ting Yao | Ting Yao, Yingwei Pan, Yehao Li, Zhaofan Qiu, Tao Mei | Boosting Image Captioning with Attributes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically describing an image with a natural language has been an
emerging challenge in both fields of computer vision and natural language
processing. In this paper, we present Long Short-Term Memory with Attributes
(LSTM-A) - a novel architecture that integrates attributes into the successful
Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs)
image captioning framework, by training them in an end-to-end manner. To
incorporate attributes, we construct variants of architectures by feeding image
representations and attributes into RNNs in different ways to explore the
mutual yet fuzzy relationship between them. Extensive experiments are
conducted on COCO image captioning dataset and our framework achieves superior
results when compared to state-of-the-art deep models. Most remarkably, we
obtain METEOR/CIDEr-D of 25.2%/98.6% on testing data of widely used and
publicly available splits in (Karpathy & Fei-Fei, 2015) when extracting image
representations by GoogleNet and achieve the top-1 performance to date on COCO
captioning Leaderboard.
| [
{
"version": "v1",
"created": "Sat, 5 Nov 2016 13:12:29 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Yao",
"Ting",
""
],
[
"Pan",
"Yingwei",
""
],
[
"Li",
"Yehao",
""
],
[
"Qiu",
"Zhaofan",
""
],
[
"Mei",
"Tao",
""
]
] | TITLE: Boosting Image Captioning with Attributes
ABSTRACT: Automatically describing an image with a natural language has been an
emerging challenge in both fields of computer vision and natural language
processing. In this paper, we present Long Short-Term Memory with Attributes
(LSTM-A) - a novel architecture that integrates attributes into the successful
Convolutional Neural Networks (CNNs) plus Recurrent Neural Networks (RNNs)
image captioning framework, by training them in an end-to-end manner. To
incorporate attributes, we construct variants of architectures by feeding image
representations and attributes into RNNs in different ways to explore the
mutual yet fuzzy relationship between them. Extensive experiments are
conducted on COCO image captioning dataset and our framework achieves superior
results when compared to state-of-the-art deep models. Most remarkably, we
obtain METEOR/CIDEr-D of 25.2%/98.6% on testing data of widely used and
publicly available splits in (Karpathy & Fei-Fei, 2015) when extracting image
representations by GoogleNet and achieve the top-1 performance to date on COCO
captioning Leaderboard.
| no_new_dataset | 0.949482 |
1611.01726 | Gyuwan Kim | Gyuwan Kim, Hayoon Yi, Jangho Lee, Yunheung Paek, Sungroh Yoon | LSTM-Based System-Call Language Modeling and Robust Ensemble Method for
Designing Host-Based Intrusion Detection Systems | 12 pages, 5 figures | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In computer security, designing a robust intrusion detection system is one of
the most fundamental and important problems. In this paper, we propose a
system-call language-modeling approach for designing anomaly-based host
intrusion detection systems. To remedy the issue of high false-alarm rates
commonly arising in conventional methods, we employ a novel ensemble method
that blends multiple thresholding classifiers into a single one, making it
possible to accumulate 'highly normal' sequences. The proposed system-call
language model has various advantages leveraged by the fact that it can learn
the semantic meaning and interactions of each system call that existing methods
cannot effectively consider. Through diverse experiments on public benchmark
datasets, we demonstrate the validity and effectiveness of the proposed method.
Moreover, we show that our model possesses high portability, which is one of
the key aspects of realizing successful intrusion detection systems.
| [
{
"version": "v1",
"created": "Sun, 6 Nov 2016 04:07:29 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Kim",
"Gyuwan",
""
],
[
"Yi",
"Hayoon",
""
],
[
"Lee",
"Jangho",
""
],
[
"Paek",
"Yunheung",
""
],
[
"Yoon",
"Sungroh",
""
]
] | TITLE: LSTM-Based System-Call Language Modeling and Robust Ensemble Method for
Designing Host-Based Intrusion Detection Systems
ABSTRACT: In computer security, designing a robust intrusion detection system is one of
the most fundamental and important problems. In this paper, we propose a
system-call language-modeling approach for designing anomaly-based host
intrusion detection systems. To remedy the issue of high false-alarm rates
commonly arising in conventional methods, we employ a novel ensemble method
that blends multiple thresholding classifiers into a single one, making it
possible to accumulate 'highly normal' sequences. The proposed system-call
language model has various advantages leveraged by the fact that it can learn
the semantic meaning and interactions of each system call that existing methods
cannot effectively consider. Through diverse experiments on public benchmark
datasets, we demonstrate the validity and effectiveness of the proposed method.
Moreover, we show that our model possesses high portability, which is one of
the key aspects of realizing successful intrusion detection systems.
| no_new_dataset | 0.94474 |
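The ensemble the abstract credits for lower false alarms blends thresholding classifiers over language-model likelihoods; a sequence is flagged only when enough thresholds agree, so "highly normal" sequences accumulate no votes. All numbers below are illustrative:

```python
import numpy as np

def ensemble_flag(likelihoods, thresholds, votes_needed):
    """Each threshold votes 'anomalous' when a sequence's language-model
    likelihood falls below it; flag only on sufficient agreement."""
    L = np.asarray(likelihoods, dtype=float)[:, None]   # sequences x 1
    votes = L < np.asarray(thresholds, dtype=float)[None, :]
    return votes.sum(axis=1) >= votes_needed

lls = [-2.1, -9.5, -3.0, -12.2]         # per-sequence log-likelihood
print(ensemble_flag(lls, thresholds=[-4, -6, -8], votes_needed=2))
# -> [False  True False  True]
```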
1611.01747 | Shuohang Wang | Shuohang Wang and Jing Jiang | A Compare-Aggregate Model for Matching Text Sequences | 11 pages, 2 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many NLP tasks including machine comprehension, answer selection and text
entailment require the comparison between sequences. Matching the important
units between sequences is key to solving these problems. In this paper, we
present a general "compare-aggregate" framework that performs word-level
matching followed by aggregation using Convolutional Neural Networks. We
particularly focus on the different comparison functions we can use to match
two vectors. We use four different datasets to evaluate the model. We find that
some simple comparison functions based on element-wise operations can work
better than standard neural network and neural tensor network.
| [
{
"version": "v1",
"created": "Sun, 6 Nov 2016 09:50:24 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Wang",
"Shuohang",
""
],
[
"Jiang",
"Jing",
""
]
] | TITLE: A Compare-Aggregate Model for Matching Text Sequences
ABSTRACT: Many NLP tasks including machine comprehension, answer selection and text
entailment require the comparison between sequences. Matching the important
units between sequences is key to solving these problems. In this paper, we
present a general "compare-aggregate" framework that performs word-level
matching followed by aggregation using Convolutional Neural Networks. We
particularly focus on the different comparison functions we can use to match
two vectors. We use four different datasets to evaluate the model. We find that
some simple comparison functions based on element-wise operations can work
better than a standard neural network and a neural tensor network.
| no_new_dataset | 0.94474 |
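The element-wise comparison functions the paper finds surprisingly strong are one-liners; in the full model the comparison output is aggregated by a CNN. A NumPy sketch of the SUB, MULT, and combined SUBMULT variants:

```python
import numpy as np

def compare(a, h, kind="submult"):
    """Match a sequence vector a against an attended vector h with the
    simple element-wise comparison functions from the paper's family."""
    if kind == "sub":
        return (a - h) ** 2          # squared element-wise difference
    if kind == "mult":
        return a * h                 # element-wise product
    if kind == "submult":            # concatenation of both
        return np.concatenate([(a - h) ** 2, a * h])
    raise ValueError(kind)

a, h = np.array([0.2, 0.9]), np.array([0.1, 0.8])
print(compare(a, h))   # fed to a CNN aggregator in the full model
```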
1611.01783 | Joseph Keshet | Yehoshua Dissen, Joseph Keshet, Jacob Goldberger and Cynthia Clopper | Domain Adaptation For Formant Estimation Using Deep Learning | null | null | null | null | cs.CL cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a domain adaptation technique for formant estimation
using a deep network. We first train a deep learning network on a small read
speech dataset. We then freeze the parameters of the trained network and use
several different datasets to train an adaptation layer that makes the obtained
network universal in the sense that it works well for a variety of speakers and
speech domains with very different characteristics. We evaluated our adapted
network on three datasets, each of which has different speaker characteristics
and speech styles. The performance of our method compares favorably with
alternative methods for formant estimation.
| [
{
"version": "v1",
"created": "Sun, 6 Nov 2016 14:00:14 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Dissen",
"Yehoshua",
""
],
[
"Keshet",
"Joseph",
""
],
[
"Goldberger",
"Jacob",
""
],
[
"Clopper",
"Cynthia",
""
]
] | TITLE: Domain Adaptation For Formant Estimation Using Deep Learning
ABSTRACT: In this paper we present a domain adaptation technique for formant estimation
using a deep network. We first train a deep learning network on a small read
speech dataset. We then freeze the parameters of the trained network and use
several different datasets to train an adaptation layer that makes the obtained
network universal in the sense that it works well for a variety of speakers and
speech domains with very different characteristics. We evaluated our adapted
network on three datasets, each of which has different speaker characteristics
and speech styles. The performance of our method compares favorably with
alternative methods for formant estimation.
| no_new_dataset | 0.949716 |
1611.01820 | Behnam Ghavimi | Behnam Ghavimi (1,2), Philipp Mayr (1), Christoph Lange (2,3), Sahar
Vahdati (2) and S\"oren AUER (2,3) ((1) GESIS Leibniz Institute for the
Social Sciences, (2) Enterprise Information Systems (EIS), University of
Bonn, (3) Fraunhofer Institute for Intelligent Analysis and Information
Systems IAIS) | A Semi-Automatic Approach for Detecting Dataset References in Social
Science Texts | Pre-print IS&U journal. arXiv admin note: substantial text overlap
with arXiv:1603.01774 | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, full-texts of scientific articles are often stored in different
locations than the datasets they use. Dataset registries aim at a closer
integration by making datasets citable but authors typically refer to datasets
using inconsistent abbreviations and heterogeneous metadata (e.g. title,
publication year). It is thus hard to reproduce research results, to access
datasets for further analysis, and to determine the impact of a dataset.
Manually detecting references to datasets in scientific articles is
time-consuming and requires expert knowledge in the underlying research
domain. We propose and evaluate a semi-automatic three-step approach for finding
explicit references to datasets in social science articles. We first extract
pre-defined special features from dataset titles in the da|ra registry, then
detect references to datasets using the extracted features, and finally match
the references found with corresponding dataset titles. The approach does not
require a corpus of articles (avoiding the cold start problem) and performs
well on a test corpus. We achieved an F-measure of 0.84 for detecting
references in full-texts and an F-measure of 0.83 for finding correct matches
of detected references in the da|ra dataset registry.
| [
{
"version": "v1",
"created": "Sun, 6 Nov 2016 18:36:16 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Ghavimi",
"Behnam",
""
],
[
"Mayr",
"Philipp",
""
],
[
"Lange",
"Christoph",
""
],
[
"Vahdati",
"Sahar",
""
],
[
"AUER",
"Sören",
""
]
] | TITLE: A Semi-Automatic Approach for Detecting Dataset References in Social
Science Texts
ABSTRACT: Today, full-texts of scientific articles are often stored in different
locations than the datasets they use. Dataset registries aim at a closer
integration by making datasets citable but authors typically refer to datasets
using inconsistent abbreviations and heterogeneous metadata (e.g. title,
publication year). It is thus hard to reproduce research results, to access
datasets for further analysis, and to determine the impact of a dataset.
Manually detecting references to datasets in scientific articles is
time-consuming and requires expert knowledge in the underlying research
domain. We propose and evaluate a semi-automatic three-step approach for finding
explicit references to datasets in social science articles. We first extract
pre-defined special features from dataset titles in the da|ra registry, then
detect references to datasets using the extracted features, and finally match
the references found with corresponding dataset titles. The approach does not
require a corpus of articles (avoiding the cold start problem) and performs
well on a test corpus. We achieved an F-measure of 0.84 for detecting
references in full-texts and an F-measure of 0.83 for finding correct matches
of detected references in the da|ra dataset registry.
| no_new_dataset | 0.949995 |
1611.01867 | Xinyun Chen | Xinyun Chen, Chang Liu, Richard Shin, Dawn Song, Mingcheng Chen | Latent Attention For If-Then Program Synthesis | Accepted by NIPS 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic translation from natural language descriptions into programs is a
longstanding challenging problem. In this work, we consider a simple yet
important sub-problem: translation from textual descriptions to If-Then
programs. We devise a novel neural network architecture for this task which we
train end-to-end. Specifically, we introduce Latent Attention, which computes
multiplicative weights for the words in the description in a two-stage process
with the goal of better leveraging the natural language structures that
indicate the relevant parts for predicting program elements. Our architecture
reduces the error rate by 28.57% compared to prior art. We also propose a
one-shot learning scenario of If-Then program synthesis and simulate it with
our existing dataset. We demonstrate a variation on the training procedure for
this scenario that outperforms the original procedure, significantly closing
the gap to the model trained with all data.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 00:56:19 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Chen",
"Xinyun",
""
],
[
"Liu",
"Chang",
""
],
[
"Shin",
"Richard",
""
],
[
"Song",
"Dawn",
""
],
[
"Chen",
"Mingcheng",
""
]
] | TITLE: Latent Attention For If-Then Program Synthesis
ABSTRACT: Automatic translation from natural language descriptions into programs is a
longstanding challenging problem. In this work, we consider a simple yet
important sub-problem: translation from textual descriptions to If-Then
programs. We devise a novel neural network architecture for this task which we
train end-to-end. Specifically, we introduce Latent Attention, which computes
multiplicative weights for the words in the description in a two-stage process
with the goal of better leveraging the natural language structures that
indicate the relevant parts for predicting program elements. Our architecture
reduces the error rate by 28.57% compared to prior art. We also propose a
one-shot learning scenario of If-Then program synthesis and simulate it with
our existing dataset. We demonstrate a variation on the training procedure for
this scenario that outperforms the original procedure, significantly closing
the gap to the model trained with all data.
| no_new_dataset | 0.803097 |
1611.01872 | Ye Liu | Ye Liu, Liqiang Nie, Lei Han, Luming Zhang, David S Rosenblum | Action2Activity: Recognizing Complex Activities from Sensor Data | IJCAI 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As compared to simple actions, activities are much more complex, but
semantically consistent with a human's real life. Techniques for action
recognition from sensor-generated data are mature. However, there has been
relatively little work on bridging the gap between actions and activities. To
this end, this paper presents a novel approach for complex activity recognition
comprising two components. The first component is temporal pattern mining,
which provides a mid-level feature representation for activities, encodes
temporal relatedness among actions, and captures the intrinsic properties of
activities. The second component is adaptive Multi-Task Learning, which
captures relatedness among activities and selects discriminant features.
Extensive experiments on a real-world dataset demonstrate the effectiveness of
our work.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 02:01:29 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Liu",
"Ye",
""
],
[
"Nie",
"Liqiang",
""
],
[
"Han",
"Lei",
""
],
[
"Zhang",
"Luming",
""
],
[
"Rosenblum",
"David S",
""
]
] | TITLE: Action2Activity: Recognizing Complex Activities from Sensor Data
ABSTRACT: As compared to simple actions, activities are much more complex, but
semantically consistent with a human's real life. Techniques for action
recognition from sensor-generated data are mature. However, there has been
relatively little work on bridging the gap between actions and activities. To
this end, this paper presents a novel approach for complex activity recognition
comprising two components. The first component is temporal pattern mining,
which provides a mid-level feature representation for activities, encodes
temporal relatedness among actions, and captures the intrinsic properties of
activities. The second component is adaptive Multi-Task Learning, which
captures relatedness among activities and selects discriminant features.
Extensive experiments on a real-world dataset demonstrate the effectiveness of
our work.
| no_new_dataset | 0.948537 |
1611.01964 | Kalina Jasinska | Kalina Jasinska, Nikos Karampatziakis | Log-time and Log-space Extreme Classification | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present LTLS, a technique for multiclass and multilabel prediction that
can perform training and inference in logarithmic time and space. LTLS embeds
large classification problems into simple structured prediction problems and
relies on efficient dynamic programming algorithms for inference. We train LTLS
with stochastic gradient descent on a number of multiclass and multilabel
datasets and show that despite its small memory footprint it is often
competitive with existing approaches.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 10:10:43 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Jasinska",
"Kalina",
""
],
[
"Karampatziakis",
"Nikos",
""
]
] | TITLE: Log-time and Log-space Extreme Classification
ABSTRACT: We present LTLS, a technique for multiclass and multilabel prediction that
can perform training and inference in logarithmic time and space. LTLS embeds
large classification problems into simple structured prediction problems and
relies on efficient dynamic programming algorithms for inference. We train LTLS
with stochastic gradient descent on a number of multiclass and multilabel
datasets and show that despite its small memory footprint it is often
competitive with existing approaches.
| no_new_dataset | 0.945801 |
1611.02007 | Florian Boudin | Adrien Bougouin, Florian Boudin, B\'eatrice Daille | Keyphrase Annotation with Graph Co-Ranking | Accepted at the COLING 2016 conference | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Keyphrase annotation is the task of identifying textual units that represent
the main content of a document. Keyphrase annotation is either carried out by
extracting the most important phrases from a document, keyphrase extraction, or
by assigning entries from a controlled domain-specific vocabulary, keyphrase
assignment. Assignment methods are generally more reliable. They provide
better-formed keyphrases, as well as keyphrases that do not occur in the
document. But they are often silent, in contrast to extraction methods, which
do not depend on manually built resources. This paper proposes a new method to
perform both keyphrase extraction and keyphrase assignment in an integrated and
mutually reinforcing manner. Experiments have been carried out on datasets
covering different domains of humanities and social sciences. They show
statistically significant improvements compared to state-of-the-art methods for
both keyphrase extraction and keyphrase assignment.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 12:08:13 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Bougouin",
"Adrien",
""
],
[
"Boudin",
"Florian",
""
],
[
"Daille",
"Béatrice",
""
]
] | TITLE: Keyphrase Annotation with Graph Co-Ranking
ABSTRACT: Keyphrase annotation is the task of identifying textual units that represent
the main content of a document. Keyphrase annotation is either carried out by
extracting the most important phrases from a document, keyphrase extraction, or
by assigning entries from a controlled domain-specific vocabulary, keyphrase
assignment. Assignment methods are generally more reliable. They provide
better-formed keyphrases, as well as keyphrases that do not occur in the
document. But they are often silent, in contrast to extraction methods, which
do not depend on manually built resources. This paper proposes a new method to
perform both keyphrase extraction and keyphrase assignment in an integrated and
mutually reinforcing manner. Experiments have been carried out on datasets
covering different domains of humanities and social sciences. They show
statistically significant improvements compared to state-of-the-art methods for
both keyphrase extraction and keyphrase assignment.
| no_new_dataset | 0.950549 |
1611.02025 | Xavier Holt | Xavier Holt, Will Radford, Ben Hachey | Presenting a New Dataset for the Timeline Generation Problem | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The timeline generation task summarises an entity's biography by selecting
stories representing key events from a large pool of relevant documents. This
paper addresses the lack of a standard dataset and evaluative methodology for
the problem. We present and make publicly available a new dataset of 18,793
news articles covering 39 entities. For each entity, we provide a gold standard
timeline and a set of entity-related articles. We propose ROUGE as an
evaluation metric and validate our dataset by showing that top Google results
outperform straw-man baselines.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 12:47:25 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Holt",
"Xavier",
""
],
[
"Radford",
"Will",
""
],
[
"Hachey",
"Ben",
""
]
] | TITLE: Presenting a New Dataset for the Timeline Generation Problem
ABSTRACT: The timeline generation task summarises an entity's biography by selecting
stories representing key events from a large pool of relevant documents. This
paper addresses the lack of a standard dataset and evaluative methodology for
the problem. We present and make publicly available a new dataset of 18,793
news articles covering 39 entities. For each entity, we provide a gold standard
timeline and a set of entity-related articles. We propose ROUGE as an
evaluation metric and validate our dataset by showing that top Google results
outperform straw-man baselines.
| new_dataset | 0.957198 |
1611.02053 | Andrey Filchenkov | Valeria Efimova, Andrey Filchenkov, Anatoly Shalyto | Reinforcement-based Simultaneous Algorithm and its Hyperparameters
Selection | null | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many algorithms for data analysis exist, especially for classification
problems. To solve a data analysis problem, a proper algorithm must be
chosen and its hyperparameters selected. In this paper, we
present a new method for the simultaneous selection of an algorithm and its
hyperparameters. In order to do so, we reduce this problem to the multi-armed
bandit problem: we treat each algorithm as an arm and a fixed-time search over
that algorithm's hyperparameters as the corresponding arm play. We
also suggest a problem-specific reward function. We performed experiments
on 10 real datasets and compared the suggested method with the existing one
implemented in Auto-WEKA. The results show that our method is significantly
better in most cases and never worse than Auto-WEKA.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 13:55:00 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Efimova",
"Valeria",
""
],
[
"Filchenkov",
"Andrey",
""
],
[
"Shalyto",
"Anatoly",
""
]
] | TITLE: Reinforcement-based Simultaneous Algorithm and its Hyperparameters
Selection
ABSTRACT: Many algorithms for data analysis exist, especially for classification
problems. To solve a data analysis problem, a proper algorithm must be
chosen and its hyperparameters selected. In this paper, we
present a new method for the simultaneous selection of an algorithm and its
hyperparameters. In order to do so, we reduce this problem to the multi-armed
bandit problem: we treat each algorithm as an arm and a fixed-time search over
that algorithm's hyperparameters as the corresponding arm play. We
also suggest a problem-specific reward function. We performed experiments
on 10 real datasets and compared the suggested method with the existing one
implemented in Auto-WEKA. The results show that our method is significantly
better in most cases and never worse than Auto-WEKA.
| no_new_dataset | 0.956756 |
1611.02118 | Yann-A\"el Le Borgne | Yann-A\"el Le Borgne, Adriana Homolova, Gianluca Bontempi | OpenTED Browser: Insights into European Public Spendings | ECML, PKDD, SoGood workshop 2016 | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the OpenTED browser, a Web application allowing to interactively
browse public spending data related to public procurements in the European
Union. The application relies on Open Data recently published by the European
Commission and the Publications Office of the European Union, from which we
imported a curated dataset of 4.2 million contract award notices spanning the
period 2006-2015. The application is designed to easily filter notices and
visualise relationships between public contracting authorities and private
contractors. The simple design makes it easy, for example, to quickly find out
who the biggest suppliers of local governments are, and the nature of the
contracted goods and services. We believe the tool, which we make Open Source,
is a valuable source of information for journalists, NGOs, analysts and
citizens for getting information on public procurement data, from large scale
trends to local municipal developments.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2016 14:35:16 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Borgne",
"Yann-Aël Le",
""
],
[
"Homolova",
"Adriana",
""
],
[
"Bontempi",
"Gianluca",
""
]
] | TITLE: OpenTED Browser: Insights into European Public Spendings
ABSTRACT: We present the OpenTED browser, a Web application that allows users to interactively
browse public spending data related to public procurements in the European
Union. The application relies on Open Data recently published by the European
Commission and the Publications Office of the European Union, from which we
imported a curated dataset of 4.2 million contract award notices spanning the
period 2006-2015. The application is designed to easily filter notices and
visualise relationships between public contracting authorities and private
contractors. The simple design makes it easy, for example, to quickly find out
who the biggest suppliers of local governments are, and the nature of the
contracted goods and services. We believe the tool, which we make Open Source,
is a valuable source of information for journalists, NGOs, analysts and
citizens for getting information on public procurement data, from large scale
trends to local municipal developments.
| new_dataset | 0.915658 |
1611.02120 | Brett Meyer | Sean C. Smithson and Guang Yang and Warren J. Gross and Brett H. Meyer | Neural Networks Designing Neural Networks: Multi-Objective
Hyper-Parameter Optimization | To appear in ICCAD'16. The authoritative version will appear in the
ACM Digital Library | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial neural networks have gone through a recent rise in popularity,
achieving state-of-the-art results in various fields, including image
classification, speech recognition, and automated control. Both the performance
and computational complexity of such models are heavily dependent on the design
of characteristic hyper-parameters (e.g., number of hidden layers, nodes per
layer, or choice of activation functions), which have traditionally been
optimized manually. With machine learning penetrating low-power mobile and
embedded areas, the need to optimize not only for performance (accuracy), but
also for implementation complexity, becomes paramount. In this work, we present
a multi-objective design space exploration method that reduces the number of
solution networks trained and evaluated through response surface modelling.
Given spaces which can easily exceed $10^{20}$ solutions, manually designing a
near-optimal architecture is unlikely as opportunities to reduce network
complexity, while maintaining performance, may be overlooked. This problem is
exacerbated by the fact that hyper-parameters which perform well on specific
datasets may yield sub-par results on others, and must therefore be designed on
a per-application basis. In our work, machine learning is leveraged by training
an artificial neural network to predict the performance of future candidate
networks. The method is evaluated on the MNIST and CIFAR-10 image datasets,
optimizing for both recognition accuracy and computational complexity.
Experimental results demonstrate that the proposed method can closely
approximate the Pareto-optimal front, while only exploring a small fraction of
the design space.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 15:38:39 GMT"
}
] | 2016-11-08T00:00:00 | [
[
"Smithson",
"Sean C.",
""
],
[
"Yang",
"Guang",
""
],
[
"Gross",
"Warren J.",
""
],
[
"Meyer",
"Brett H.",
""
]
] | TITLE: Neural Networks Designing Neural Networks: Multi-Objective
Hyper-Parameter Optimization
ABSTRACT: Artificial neural networks have gone through a recent rise in popularity,
achieving state-of-the-art results in various fields, including image
classification, speech recognition, and automated control. Both the performance
and computational complexity of such models are heavily dependent on the design
of characteristic hyper-parameters (e.g., number of hidden layers, nodes per
layer, or choice of activation functions), which have traditionally been
optimized manually. With machine learning penetrating low-power mobile and
embedded areas, the need to optimize not only for performance (accuracy), but
also for implementation complexity, becomes paramount. In this work, we present
a multi-objective design space exploration method that reduces the number of
solution networks trained and evaluated through response surface modelling.
Given spaces which can easily exceed $10^{20}$ solutions, manually designing a
near-optimal architecture is unlikely as opportunities to reduce network
complexity, while maintaining performance, may be overlooked. This problem is
exacerbated by the fact that hyper-parameters which perform well on specific
datasets may yield sub-par results on others, and must therefore be designed on
a per-application basis. In our work, machine learning is leveraged by training
an artificial neural network to predict the performance of future candidate
networks. The method is evaluated on the MNIST and CIFAR-10 image datasets,
optimizing for both recognition accuracy and computational complexity.
Experimental results demonstrate that the proposed method can closely
approximate the Pareto-optimal front, while only exploring a small fraction of
the design space.
| no_new_dataset | 0.944331 |
1502.05890 | Akshay Krishnamurthy | Akshay Krishnamurthy, Alekh Agarwal, Miroslav Dudik | Contextual Semibandits via Supervised Learning Oracles | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study an online decision making problem where on each round a learner
chooses a list of items based on some side information, receives a scalar
feedback value for each individual item, and a reward that is linearly related
to this feedback. These problems, known as contextual semibandits, arise in
crowdsourcing, recommendation, and many other domains. This paper reduces
contextual semibandits to supervised learning, allowing us to leverage powerful
supervised learning methods in this partial-feedback setting. Our first
reduction applies when the mapping from feedback to reward is known and leads
to a computationally efficient algorithm with near-optimal regret. We show that
this algorithm outperforms state-of-the-art approaches on real-world
learning-to-rank datasets, demonstrating the advantage of oracle-based
algorithms. Our second reduction applies to the previously unstudied setting
when the linear mapping from feedback to reward is unknown. Our regret
guarantees are superior to prior techniques that ignore the feedback.
| [
{
"version": "v1",
"created": "Fri, 20 Feb 2015 14:55:41 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Mar 2015 01:38:23 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Jun 2016 00:43:13 GMT"
},
{
"version": "v4",
"created": "Fri, 4 Nov 2016 19:28:07 GMT"
}
] | 2016-11-07T00:00:00 | [
[
"Krishnamurthy",
"Akshay",
""
],
[
"Agarwal",
"Alekh",
""
],
[
"Dudik",
"Miroslav",
""
]
] | TITLE: Contextual Semibandits via Supervised Learning Oracles
ABSTRACT: We study an online decision making problem where on each round a learner
chooses a list of items based on some side information, receives a scalar
feedback value for each individual item, and a reward that is linearly related
to this feedback. These problems, known as contextual semibandits, arise in
crowdsourcing, recommendation, and many other domains. This paper reduces
contextual semibandits to supervised learning, allowing us to leverage powerful
supervised learning methods in this partial-feedback setting. Our first
reduction applies when the mapping from feedback to reward is known and leads
to a computationally efficient algorithm with near-optimal regret. We show that
this algorithm outperforms state-of-the-art approaches on real-world
learning-to-rank datasets, demonstrating the advantage of oracle-based
algorithms. Our second reduction applies to the previously unstudied setting
when the linear mapping from feedback to reward is unknown. Our regret
guarantees are superior to prior techniques that ignore the feedback.
| no_new_dataset | 0.948202 |
1611.00938 | Johann Paratte | Johan Paratte and Lionel Martin | Fast Eigenspace Approximation using Random Signals | null | null | null | null | cs.DS cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus in this work on the estimation of the first $k$ eigenvectors of any
graph Laplacian using filtering of Gaussian random signals. We prove that we
only need $k$ such signals to be able to exactly recover as many of the
smallest eigenvectors, regardless of the number of nodes in the graph. In
addition, we address key issues in implementing the theoretical concepts in
practice using accurate approximation methods. We also propose fast algorithms
both for eigenspace approximation and for the determination of the $k$th
smallest eigenvalue $\lambda_k$. The latter proves to be extremely efficient
under the assumption of locally uniform distribution of the eigenvalues over the
spectrum. Finally, we present experiments which show the validity of our method
in practice and compare it to state-of-the-art methods for clustering and
visualization both on synthetic small-scale datasets and larger real-world
problems of millions of nodes. We show that our method allows a better scaling
with the number of nodes than all previous methods while achieving an almost
perfect reconstruction of the eigenspace formed by the first $k$ eigenvectors.
| [
{
"version": "v1",
"created": "Thu, 3 Nov 2016 10:08:22 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Nov 2016 09:25:41 GMT"
}
] | 2016-11-07T00:00:00 | [
[
"Paratte",
"Johan",
""
],
[
"Martin",
"Lionel",
""
]
] | TITLE: Fast Eigenspace Approximation using Random Signals
ABSTRACT: We focus in this work on the estimation of the first $k$ eigenvectors of any
graph Laplacian using filtering of Gaussian random signals. We prove that we
only need $k$ such signals to be able to exactly recover as many of the
smallest eigenvectors, regardless of the number of nodes in the graph. In
addition, we address key issues in implementing the theoretical concepts in
practice using accurate approximation methods. We also propose fast algorithms
both for eigenspace approximation and for the determination of the $k$th
smallest eigenvalue $\lambda_k$. The latter proves to be extremely efficient
under the assumption of locally uniform distribution of the eigenvalues over the
spectrum. Finally, we present experiments which show the validity of our method
in practice and compare it to state-of-the-art methods for clustering and
visualization both on synthetic small-scale datasets and larger real-world
problems of millions of nodes. We show that our method allows a better scaling
with the number of nodes than all previous methods while achieving an almost
perfect reconstruction of the eigenspace formed by the first $k$ eigenvectors.
| no_new_dataset | 0.945197 |
1611.01195 | Shusil Dangi | Shusil Dangi, Nathan Cahill, Cristian A. Linte | Integrating Atlas and Graph Cut Methods for LV Segmentation from Cardiac
Cine MRI | Statistical Atlases and Computational Modelling of Heart workshop
2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Magnetic Resonance Imaging (MRI) has evolved as a clinical standard-of-care
imaging modality for cardiac morphology, function assessment, and guidance of
cardiac interventions. All these applications rely on accurate extraction of
the myocardial tissue and blood pool from the imaging data. Here we propose a
framework for left ventricle (LV) segmentation from cardiac cine-MRI. First, we
segment the LV blood pool using iterative graph cuts, and subsequently use this
information to segment the myocardium. We formulate the segmentation procedure
as an energy minimization problem in a graph subject to the shape prior
obtained by label propagation from an average atlas using affine registration.
The proposed framework has been validated on 30 patient cardiac cine-MRI
datasets available through the STACOM LV segmentation challenge and yielded
fast, robust, and accurate segmentation results.
| [
{
"version": "v1",
"created": "Thu, 3 Nov 2016 21:12:55 GMT"
}
] | 2016-11-07T00:00:00 | [
[
"Dangi",
"Shusil",
""
],
[
"Cahill",
"Nathan",
""
],
[
"Linte",
"Cristian A.",
""
]
] | TITLE: Integrating Atlas and Graph Cut Methods for LV Segmentation from Cardiac
Cine MRI
ABSTRACT: Magnetic Resonance Imaging (MRI) has evolved as a clinical standard-of-care
imaging modality for cardiac morphology, function assessment, and guidance of
cardiac interventions. All these applications rely on accurate extraction of
the myocardial tissue and blood pool from the imaging data. Here we propose a
framework for left ventricle (LV) segmentation from cardiac cine-MRI. First, we
segment the LV blood pool using iterative graph cuts, and subsequently use this
information to segment the myocardium. We formulate the segmentation procedure
as an energy minimization problem in a graph subject to the shape prior
obtained by label propagation from an average atlas using affine registration.
The proposed framework has been validated on 30 patient cardiac cine-MRI
datasets available through the STACOM LV segmentation challenge and yielded
fast, robust, and accurate segmentation results.
| no_new_dataset | 0.952353 |
1611.01235 | Tiffany Hwu | Tiffany Hwu, Jacob Isbell, Nicolas Oros, and Jeffrey Krichmar | A Self-Driving Robot Using Deep Convolutional Neural Networks on
Neuromorphic Hardware | 6 pages, 8 figures | null | null | null | cs.NE cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuromorphic computing is a promising solution for reducing the size, weight
and power of mobile embedded systems. In this paper, we introduce a realization
of such a system by creating the first closed-loop battery-powered
communication system between an IBM TrueNorth NS1e and an autonomous
Android-Based Robotics platform. Using this system, we constructed a dataset of
path following behavior by manually driving the Android-Based robot along steep
mountain trails and recording video frames from the camera mounted on the robot
along with the corresponding motor commands. We used this dataset to train a
deep convolutional neural network implemented on the TrueNorth NS1e. The NS1e,
which was mounted on the robot and powered by the robot's battery, resulted in
a self-driving robot that could successfully traverse a steep mountain path in
real time. To our knowledge, this represents the first time the TrueNorth NS1e
neuromorphic chip has been embedded on a mobile platform under closed-loop
control.
| [
{
"version": "v1",
"created": "Fri, 4 Nov 2016 01:10:07 GMT"
}
] | 2016-11-07T00:00:00 | [
[
"Hwu",
"Tiffany",
""
],
[
"Isbell",
"Jacob",
""
],
[
"Oros",
"Nicolas",
""
],
[
"Krichmar",
"Jeffrey",
""
]
] | TITLE: A Self-Driving Robot Using Deep Convolutional Neural Networks on
Neuromorphic Hardware
ABSTRACT: Neuromorphic computing is a promising solution for reducing the size, weight
and power of mobile embedded systems. In this paper, we introduce a realization
of such a system by creating the first closed-loop battery-powered
communication system between an IBM TrueNorth NS1e and an autonomous
Android-Based Robotics platform. Using this system, we constructed a dataset of
path following behavior by manually driving the Android-Based robot along steep
mountain trails and recording video frames from the camera mounted on the robot
along with the corresponding motor commands. We used this dataset to train a
deep convolutional neural network implemented on the TrueNorth NS1e. The NS1e,
which was mounted on the robot and powered by the robot's battery, resulted in
a self-driving robot that could successfully traverse a steep mountain path in
real time. To our knowledge, this represents the first time the TrueNorth NS1e
neuromorphic chip has been embedded on a mobile platform under closed-loop
control.
| new_dataset | 0.973418 |
1611.01242 | Mohit Iyyer | Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang | Answering Complicated Question Intents Expressed in Decomposed Question
Sequences | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work in semantic parsing for question answering has focused on long
and complicated questions, many of which would seem unnatural if asked in a
normal conversation between two humans. In an effort to explore a
conversational QA setting, we present a more realistic task: answering
sequences of simple but inter-related questions. We collect a dataset of 6,066
question sequences that inquire about semi-structured tables from Wikipedia,
with 17,553 question-answer pairs in total. Existing QA systems face two major
problems when evaluated on our dataset: (1) handling questions that contain
coreferences to previous questions or answers, and (2) matching words or
phrases in a question to corresponding entries in the associated table. We
conclude by proposing strategies to handle both of these issues.
| [
{
"version": "v1",
"created": "Fri, 4 Nov 2016 01:54:03 GMT"
}
] | 2016-11-07T00:00:00 | [
[
"Iyyer",
"Mohit",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"Chang",
"Ming-Wei",
""
]
] | TITLE: Answering Complicated Question Intents Expressed in Decomposed Question
Sequences
ABSTRACT: Recent work in semantic parsing for question answering has focused on long
and complicated questions, many of which would seem unnatural if asked in a
normal conversation between two humans. In an effort to explore a
conversational QA setting, we present a more realistic task: answering
sequences of simple but inter-related questions. We collect a dataset of 6,066
question sequences that inquire about semi-structured tables from Wikipedia,
with 17,553 question-answer pairs in total. Existing QA systems face two major
problems when evaluated on our dataset: (1) handling questions that contain
coreferences to previous questions or answers, and (2) matching words or
phrases in a question to corresponding entries in the associated table. We
conclude by proposing strategies to handle both of these issues.
| new_dataset | 0.957715 |
1611.01276 | Qi Meng | Qi Meng, Guolin Ke, Taifeng Wang, Wei Chen, Qiwei Ye, Zhi-Ming Ma and
Tie-Yan Liu | A Communication-Efficient Parallel Algorithm for Decision Tree | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision tree (and its extensions such as Gradient Boosting Decision Trees
and Random Forest) is a widely used machine learning algorithm, due to its
practical effectiveness and model interpretability. With the emergence of big
data, there is an increasing need to parallelize the training process of
decision tree. However, most existing attempts along this line suffer from high
communication costs. In this paper, we propose a new algorithm, called
\emph{Parallel Voting Decision Tree (PV-Tree)}, to tackle this challenge. After
partitioning the training data onto a number of (e.g., $M$) machines, this
algorithm performs both local voting and global voting in each iteration. For
local voting, the top-$k$ attributes are selected from each machine according
to its local data. Then, globally top-$2k$ attributes are determined by a
majority voting among these local candidates. Finally, the full-grained
histograms of the globally top-$2k$ attributes are collected from local
machines in order to identify the best (most informative) attribute and its
split point. PV-Tree can achieve a very low communication cost (independent of
the total number of attributes) and thus can scale out very well. Furthermore,
theoretical analysis shows that this algorithm can learn a near optimal
decision tree, since it can find the best attribute with a large probability.
Our experiments on real-world datasets show that PV-Tree significantly
outperforms the existing parallel decision tree algorithms in the trade-off
between accuracy and efficiency.
| [
{
"version": "v1",
"created": "Fri, 4 Nov 2016 07:09:03 GMT"
}
] | 2016-11-07T00:00:00 | [
[
"Meng",
"Qi",
""
],
[
"Ke",
"Guolin",
""
],
[
"Wang",
"Taifeng",
""
],
[
"Chen",
"Wei",
""
],
[
"Ye",
"Qiwei",
""
],
[
"Ma",
"Zhi-Ming",
""
],
[
"Liu",
"Tie-Yan",
""
]
] | TITLE: A Communication-Efficient Parallel Algorithm for Decision Tree
ABSTRACT: Decision tree (and its extensions such as Gradient Boosting Decision Trees
and Random Forest) is a widely used machine learning algorithm, due to its
practical effectiveness and model interpretability. With the emergence of big
data, there is an increasing need to parallelize the training process of
decision tree. However, most existing attempts along this line suffer from high
communication costs. In this paper, we propose a new algorithm, called
\emph{Parallel Voting Decision Tree (PV-Tree)}, to tackle this challenge. After
partitioning the training data onto a number of (e.g., $M$) machines, this
algorithm performs both local voting and global voting in each iteration. For
local voting, the top-$k$ attributes are selected from each machine according
to its local data. Then, globally top-$2k$ attributes are determined by a
majority voting among these local candidates. Finally, the full-grained
histograms of the globally top-$2k$ attributes are collected from local
machines in order to identify the best (most informative) attribute and its
split point. PV-Tree can achieve a very low communication cost (independent of
the total number of attributes) and thus can scale out very well. Furthermore,
theoretical analysis shows that this algorithm can learn a near optimal
decision tree, since it can find the best attribute with a large probability.
Our experiments on real-world datasets show that PV-Tree significantly
outperforms the existing parallel decision tree algorithms in the trade-off
between accuracy and efficiency.
| no_new_dataset | 0.948537 |
1611.01503 | Akosua Busia | Akosua Busia, Jasmine Collins, Navdeep Jaitly | Protein Secondary Structure Prediction Using Deep Multi-scale
Convolutional Neural Networks and Next-Step Conditioning | 10 pages, 2 figures, submitted to RECOMB 2017 | null | null | null | cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently developed deep learning techniques have significantly improved the
accuracy of various speech and image recognition systems. In this paper we
adapt some of these techniques for protein secondary structure prediction. We
first train a series of deep neural networks to predict eight-class secondary
structure labels given a protein's amino acid sequence information and find
that using recent methods for regularization, such as dropout and weight-norm
constraining, leads to measurable gains in accuracy. We then adapt recent
convolutional neural network architectures--Inception, ResNet, and DenseNet
with Batch Normalization--to the problem of protein structure prediction. These
convolutional architectures make heavy use of multi-scale filter layers that
simultaneously compute features on several scales, and use residual connections
to prevent underfitting. Using a carefully modified version of these
architectures, we achieve state-of-the-art performance of 70.0% per amino acid
accuracy on the public CB513 benchmark dataset. Finally, we explore additions
from sequence-to-sequence learning, altering the model to make its predictions
conditioned on both the protein's amino acid sequence and its past secondary
structure labels. We introduce a new method of ensembling such a conditional
model with our convolutional model, an approach which reaches 70.6% Q8 accuracy
on CB513. We argue that these results can be further refined for larger boosts
in prediction accuracy through more sophisticated attempts to control
overfitting of conditional models. We aim to release the code for these
experiments as part of the TensorFlow repository.
| [
{
"version": "v1",
"created": "Fri, 4 Nov 2016 19:32:15 GMT"
}
] | 2016-11-07T00:00:00 | [
[
"Busia",
"Akosua",
""
],
[
"Collins",
"Jasmine",
""
],
[
"Jaitly",
"Navdeep",
""
]
] | TITLE: Protein Secondary Structure Prediction Using Deep Multi-scale
Convolutional Neural Networks and Next-Step Conditioning
ABSTRACT: Recently developed deep learning techniques have significantly improved the
accuracy of various speech and image recognition systems. In this paper we
adapt some of these techniques for protein secondary structure prediction. We
first train a series of deep neural networks to predict eight-class secondary
structure labels given a protein's amino acid sequence information and find
that using recent methods for regularization, such as dropout and weight-norm
constraining, leads to measurable gains in accuracy. We then adapt recent
convolutional neural network architectures--Inception, ResNet, and DenseNet
with Batch Normalization--to the problem of protein structure prediction. These
convolutional architectures make heavy use of multi-scale filter layers that
simultaneously compute features on several scales, and use residual connections
to prevent underfitting. Using a carefully modified version of these
architectures, we achieve state-of-the-art performance of 70.0% per amino acid
accuracy on the public CB513 benchmark dataset. Finally, we explore additions
from sequence-to-sequence learning, altering the model to make its predictions
conditioned on both the protein's amino acid sequence and its past secondary
structure labels. We introduce a new method of ensembling such a conditional
model with our convolutional model, an approach which reaches 70.6% Q8 accuracy
on CB513. We argue that these results can be further refined for larger boosts
in prediction accuracy through more sophisticated attempts to control
overfitting of conditional models. We aim to release the code for these
experiments as part of the TensorFlow repository.
| no_new_dataset | 0.951278 |
1501.02990 | Yi Li | Yi Wang, Yi Li, Momiao Xiong, Li Jin | Random Bits Regression: a Strong General Predictor for Big Data | 20 pages,1 figure, 2 tables, research article | Big Data Analytics 2016 1:12 | 10.1186/s41044-016-0010-4 | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To improve accuracy and speed of regressions and classifications, we present
a data-based prediction method, Random Bits Regression (RBR). This method first
generates a large number of random binary intermediate/derived features based
on the original input matrix, and then performs regularized linear/logistic
regression on those intermediate/derived features to predict the outcome.
Benchmark analyses on a simulated dataset, UCI machine learning repository
datasets and a GWAS dataset showed that RBR outperforms other popular methods
in accuracy and robustness. RBR (available on
https://sourceforge.net/projects/rbr/) is very fast, requires a reasonable
amount of memory, and therefore provides a strong, robust and fast predictor in the big
data era.
| [
{
"version": "v1",
"created": "Tue, 13 Jan 2015 13:14:42 GMT"
}
] | 2016-11-04T00:00:00 | [
[
"Wang",
"Yi",
""
],
[
"Li",
"Yi",
""
],
[
"Xiong",
"Momiao",
""
],
[
"Jin",
"Li",
""
]
] | TITLE: Random Bits Regression: a Strong General Predictor for Big Data
ABSTRACT: To improve accuracy and speed of regressions and classifications, we present
a data-based prediction method, Random Bits Regression (RBR). This method first
generates a large number of random binary intermediate/derived features based
on the original input matrix, and then performs regularized linear/logistic
regression on those intermediate/derived features to predict the outcome.
Benchmark analyses on a simulated dataset, UCI machine learning repository
datasets and a GWAS dataset showed that RBR outperforms other popular methods
in accuracy and robustness. RBR (available on
https://sourceforge.net/projects/rbr/) is very fast, requires a reasonable
amount of memory, and therefore provides a strong, robust and fast predictor in the big
data era.
| no_new_dataset | 0.947088 |
1609.04112 | C.-C. Jay Kuo | C.-C. Jay Kuo | Understanding Convolutional Neural Networks with A Mathematical Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work attempts to address two fundamental questions about the structure
of convolutional neural networks (CNN): 1) why is a non-linear activation
function essential at the filter output of every convolutional layer? 2)
what is the advantage of the two-layer cascade system over the one-layer
system? A mathematical model called the "REctified-COrrelations on a Sphere"
(RECOS) is proposed to answer these two questions. After the CNN training
process, the converged filter weights define a set of anchor vectors in the
RECOS model. Anchor vectors represent the frequently occurring patterns (or the
spectral components). The necessity of rectification is explained using the
RECOS model. Then, the behavior of a two-layer RECOS system is analyzed and
compared with its one-layer counterpart. The LeNet-5 and the MNIST dataset are
used to illustrate discussion points. Finally, the RECOS model is generalized
to a multi-layer system with the AlexNet as an example.
Keywords: Convolutional Neural Network (CNN), Nonlinear Activation, RECOS
Model, Rectified Linear Unit (ReLU), MNIST Dataset.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2016 02:17:09 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Nov 2016 21:55:26 GMT"
}
] | 2016-11-04T00:00:00 | [
[
"Kuo",
"C. -C. Jay",
""
]
] | TITLE: Understanding Convolutional Neural Networks with A Mathematical Model
ABSTRACT: This work attempts to address two fundamental questions about the structure
of convolutional neural networks (CNN): 1) why is a non-linear activation
function essential at the filter output of every convolutional layer? 2)
what is the advantage of the two-layer cascade system over the one-layer
system? A mathematical model called the "REctified-COrrelations on a Sphere"
(RECOS) is proposed to answer these two questions. After the CNN training
process, the converged filter weights define a set of anchor vectors in the
RECOS model. Anchor vectors represent the frequently occurring patterns (or the
spectral components). The necessity of rectification is explained using the
RECOS model. Then, the behavior of a two-layer RECOS system is analyzed and
compared with its one-layer counterpart. The LeNet-5 and the MNIST dataset are
used to illustrate discussion points. Finally, the RECOS model is generalized
to a multi-layer system with the AlexNet as an example.
Keywords: Convolutional Neural Network (CNN), Nonlinear Activation, RECOS
Model, Rectified Linear Unit (ReLU), MNIST Dataset.
| no_new_dataset | 0.951818 |
1610.01969 | Hyrum Anderson | Hyrum S. Anderson, Jonathan Woodbridge and Bobby Filar | DeepDGA: Adversarially-Tuned Domain Generation and Detection | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many malware families utilize domain generation algorithms (DGAs) to
establish command and control (C&C) connections. While there are many methods
to pseudorandomly generate domains, we focus in this paper on detecting (and
generating) domains on a per-domain basis which provides a simple and flexible
means to detect known DGA families. Recent machine learning approaches to DGA
detection have been successful on fairly simplistic DGAs, many of which produce
names of fixed length. However, models trained on limited datasets are somewhat
blind to new DGA variants.
In this paper, we leverage the concept of generative adversarial networks to
construct a deep learning based DGA that is designed to intentionally bypass a
deep learning based detector. In a series of adversarial rounds, the generator
learns to generate domain names that are increasingly more difficult to detect.
In turn, a detector model updates its parameters to compensate for the
adversarially generated domains. We test the hypothesis that
adversarially generated domains may be used to augment training sets in order
to harden other machine learning models against yet-to-be-observed DGAs. We
detail solutions to several challenges in training this character-based
generative adversarial network (GAN). In particular, our deep learning
architecture begins as a domain name auto-encoder (encoder + decoder) trained
on domains in the Alexa one million. Then the encoder and decoder are
reassembled competitively in a generative adversarial network (detector +
generator), with novel neural architectures and training strategies to improve
convergence.
| [
{
"version": "v1",
"created": "Thu, 6 Oct 2016 17:50:27 GMT"
}
] | 2016-11-04T00:00:00 | [
[
"Anderson",
"Hyrum S.",
""
],
[
"Woodbridge",
"Jonathan",
""
],
[
"Filar",
"Bobby",
""
]
] | TITLE: DeepDGA: Adversarially-Tuned Domain Generation and Detection
ABSTRACT: Many malware families utilize domain generation algorithms (DGAs) to
establish command and control (C&C) connections. While there are many methods
to pseudorandomly generate domains, we focus in this paper on detecting (and
generating) domains on a per-domain basis which provides a simple and flexible
means to detect known DGA families. Recent machine learning approaches to DGA
detection have been successful on fairly simplistic DGAs, many of which produce
names of fixed length. However, models trained on limited datasets are somewhat
blind to new DGA variants.
In this paper, we leverage the concept of generative adversarial networks to
construct a deep learning based DGA that is designed to intentionally bypass a
deep learning based detector. In a series of adversarial rounds, the generator
learns to generate domain names that are increasingly more difficult to detect.
In turn, a detector model updates its parameters to compensate for the
adversarially generated domains. We test the hypothesis that
adversarially generated domains may be used to augment training sets in order
to harden other machine learning models against yet-to-be-observed DGAs. We
detail solutions to several challenges in training this character-based
generative adversarial network (GAN). In particular, our deep learning
architecture begins as a domain name auto-encoder (encoder + decoder) trained
on domains in the Alexa one million. Then the encoder and decoder are
reassembled competitively in a generative adversarial network (detector +
generator), with novel neural architectures and training strategies to improve
convergence.
| no_new_dataset | 0.94699 |
1611.00791 | Hyrum Anderson | Jonathan Woodbridge, Hyrum S. Anderson, Anjum Ahuja and Daniel Grant | Predicting Domain Generation Algorithms with Long Short-Term Memory
Networks | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various families of malware use domain generation algorithms (DGAs) to
generate a large number of pseudo-random domain names to connect to a command
and control (C&C) server. In order to block DGA C&C traffic, security
organizations must first discover the algorithm by reverse engineering malware
samples, then generating a list of domains for a given seed. The domains are
then either preregistered or published in a DNS blacklist. This process is not
only tedious, but can be readily circumvented by malware authors using a large
number of seeds in algorithms with multivariate recurrence properties (e.g.,
banjori) or by using a dynamic list of seeds (e.g., bedep). Another technique
to stop malware from using DGAs is to intercept DNS queries on a network and
predict whether domains are DGA generated. Such a technique will alert network
administrators to the presence of malware on their networks. In addition, if
the predictor can also accurately predict the family of DGAs, then network
administrators can also be alerted to the type of malware that is on their
networks. This paper presents a DGA classifier that leverages long short-term
memory (LSTM) networks to predict DGAs and their respective families without
the need for a priori feature extraction. Results are significantly better than
state-of-the-art techniques, providing 0.9993 area under the receiver operating
characteristic curve for binary classification and a micro-averaged F1 score of
0.9906. In other terms, the LSTM technique can provide a 90% detection rate
with a 1:10000 false positive (FP) rate---a twenty times FP improvement over
comparable methods. Experiments in this paper are run on open datasets and code
snippets are provided to reproduce the results.
| [
{
"version": "v1",
"created": "Wed, 2 Nov 2016 20:34:56 GMT"
}
] | 2016-11-04T00:00:00 | [
[
"Woodbridge",
"Jonathan",
""
],
[
"Anderson",
"Hyrum S.",
""
],
[
"Ahuja",
"Anjum",
""
],
[
"Grant",
"Daniel",
""
]
] | TITLE: Predicting Domain Generation Algorithms with Long Short-Term Memory
Networks
ABSTRACT: Various families of malware use domain generation algorithms (DGAs) to
generate a large number of pseudo-random domain names to connect to a command
and control (C&C) server. In order to block DGA C&C traffic, security
organizations must first discover the algorithm by reverse engineering malware
samples, then generating a list of domains for a given seed. The domains are
then either preregistered or published in a DNS blacklist. This process is not
only tedious, but can be readily circumvented by malware authors using a large
number of seeds in algorithms with multivariate recurrence properties (e.g.,
banjori) or by using a dynamic list of seeds (e.g., bedep). Another technique
to stop malware from using DGAs is to intercept DNS queries on a network and
predict whether domains are DGA generated. Such a technique will alert network
administrators to the presence of malware on their networks. In addition, if
the predictor can also accurately predict the family of DGAs, then network
administrators can also be alerted to the type of malware that is on their
networks. This paper presents a DGA classifier that leverages long short-term
memory (LSTM) networks to predict DGAs and their respective families without
the need for a priori feature extraction. Results are significantly better than
state-of-the-art techniques, providing 0.9993 area under the receiver operating
characteristic curve for binary classification and a micro-averaged F1 score of
0.9906. In other terms, the LSTM technique can provide a 90% detection rate
with a 1:10000 false positive (FP) rate---a twenty times FP improvement over
comparable methods. Experiments in this paper are run on open datasets and code
snippets are provided to reproduce the results.
| no_new_dataset | 0.949248 |
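To make the architecture concrete, here is a minimal character-level LSTM classifier of the kind this abstract describes, written in Keras. The vocabulary size, sequence length, layer widths, and the `encode` helper are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a character-level LSTM for binary DGA detection.
# Hyperparameters (MAX_LEN, VOCAB_SIZE, layer widths) are assumptions,
# not the paper's exact settings.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

MAX_LEN = 75     # assumed maximum domain-name length
VOCAB_SIZE = 40  # assumed character vocabulary: a-z, 0-9, '-', '.', etc.

def build_model():
    model = Sequential([
        Embedding(VOCAB_SIZE, 128),      # learn a vector per character
        LSTM(128),                       # summarize the character sequence
        Dense(1, activation="sigmoid"),  # P(domain is DGA-generated)
    ])
    model.compile(loss="binary_crossentropy", optimizer="rmsprop",
                  metrics=["accuracy"])
    return model

def encode(domains, char_index, max_len=MAX_LEN):
    # Map characters to integer ids (0 = padding/unknown) and left-pad.
    rows = []
    for d in domains:
        ids = [char_index.get(c, 0) for c in d.lower()][-max_len:]
        rows.append([0] * (max_len - len(ids)) + ids)
    return np.array(rows)
```

Predicting the DGA family rather than a binary label would swap the final layer for a softmax over family labels with a categorical cross-entropy loss.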
1611.00800 | Andy Jinhua Ma | Frodo Kin Sun Chan, Andy J Ma, Pong C Yuen, Terry Cheuk-Fung Yip,
Yee-Kit Tse, Vincent Wai-Sun Wong and Grace Lai-Hung Wong | Temporal Matrix Completion with Locally Linear Latent Factors for
Medical Applications | null | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Regular medical records are useful for medical practitioners to analyze and
monitor patient health status, especially for those with chronic diseases, but
such records are usually incomplete because patients miss or delay their
visits. To resolve this missing-data problem over time, tensor-based
models have been suggested for missing-data imputation in recent papers, because
this approach exploits a low-rank tensor assumption for highly correlated data.
However, when the time intervals between records are long, the data correlation
along the temporal direction is not high and this assumption no longer holds. To
address this problem, we propose to decompose a matrix with missing data into
its latent factors. A locally linear constraint is then imposed on these
factors for matrix completion. Using a publicly available
dataset and two medical datasets collected from a hospital, experimental results
show that the proposed algorithm achieves the best performance compared
with existing methods.
| [
{
"version": "v1",
"created": "Mon, 31 Oct 2016 12:02:53 GMT"
}
] | 2016-11-04T00:00:00 | [
[
"Chan",
"Frodo Kin Sun",
""
],
[
"Ma",
"Andy J",
""
],
[
"Yuen",
"Pong C",
""
],
[
"Yip",
"Terry Cheuk-Fung",
""
],
[
"Tse",
"Yee-Kit",
""
],
[
"Wong",
"Vincent Wai-Sun",
""
],
[
"Wong",
"Grace Lai-Hung",
""
]
] | TITLE: Temporal Matrix Completion with Locally Linear Latent Factors for
Medical Applications
ABSTRACT: Regular medical records are useful for medical practitioners to analyze and
monitor patient health status, especially for those with chronic diseases, but
such records are usually incomplete because patients miss or delay their
visits. To resolve this missing-data problem over time, tensor-based
models have been suggested for missing-data imputation in recent papers, because
this approach exploits a low-rank tensor assumption for highly correlated data.
However, when the time intervals between records are long, the data correlation
along the temporal direction is not high and this assumption no longer holds. To
address this problem, we propose to decompose a matrix with missing data into
its latent factors. A locally linear constraint is then imposed on these
factors for matrix completion. Using a publicly available
dataset and two medical datasets collected from a hospital, experimental results
show that the proposed algorithm achieves the best performance compared
with existing methods.
| no_new_dataset | 0.944177 |
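As a rough illustration of the idea in this abstract -- factorize an incompletely observed matrix and regularize the temporal factors to be locally linear -- the sketch below uses plain gradient descent and pulls each temporal factor toward the linear interpolation of its neighbours. The paper's exact constraint and solver may differ; the rank, penalty weight, and step size here are arbitrary assumptions.

```python
import numpy as np

def complete_matrix(X, mask, rank=5, lam=0.1, lr=0.01, iters=2000, seed=0):
    """Low-rank completion with a locally linear penalty on temporal factors.

    X    : (n, t) data matrix with missing entries filled with 0
    mask : (n, t) array, 1 where X is observed, 0 where it is missing
    """
    rng = np.random.default_rng(seed)
    n, t = X.shape
    U = 0.1 * rng.standard_normal((n, rank))  # row (e.g., patient) factors
    V = 0.1 * rng.standard_normal((t, rank))  # temporal (visit) factors
    for _ in range(iters):
        R = mask * (U @ V.T - X)              # residual on observed entries
        gU = R @ V
        gV = R.T @ U
        # Locally linear penalty: pull V[j] toward (V[j-1] + V[j+1]) / 2.
        E = V[1:-1] - 0.5 * (V[:-2] + V[2:])
        gV[1:-1] += lam * E
        gV[:-2] -= 0.5 * lam * E
        gV[2:] -= 0.5 * lam * E
        U -= lr * gU
        V -= lr * gV
    return U @ V.T  # completed matrix estimate
```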
1611.00822 | Evgeniya Ustinova | Evgeniya Ustinova, Victor Lempitsky | Learning Deep Embeddings with Histogram Loss | NIPS 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We suggest a loss for learning deep embeddings. The new loss does not
introduce parameters that need to be tuned and results in very good embeddings
across a range of datasets and problems. The loss is computed by estimating two
distributions of similarities, one for positive (matching) and one for negative
(non-matching) sample pairs, and then computing the probability that a positive
pair has a lower similarity score than a negative pair, based on the
estimated similarity distributions. We show that such operations can be
performed in a simple and piecewise-differentiable manner using 1D histograms
with soft assignment operations. This makes the proposed loss suitable for
learning deep embeddings using stochastic optimization. In the experiments, the
new loss performs favourably compared to recently proposed alternatives.
| [
{
"version": "v1",
"created": "Wed, 2 Nov 2016 21:48:32 GMT"
}
] | 2016-11-04T00:00:00 | [
[
"Ustinova",
"Evgeniya",
""
],
[
"Lempitsky",
"Victor",
""
]
] | TITLE: Learning Deep Embeddings with Histogram Loss
ABSTRACT: We suggest a loss for learning deep embeddings. The new loss does not
introduce parameters that need to be tuned and results in very good embeddings
across a range of datasets and problems. The loss is computed by estimating two
distributions of similarities, one for positive (matching) and one for negative
(non-matching) sample pairs, and then computing the probability that a positive
pair has a lower similarity score than a negative pair, based on the
estimated similarity distributions. We show that such operations can be
performed in a simple and piecewise-differentiable manner using 1D histograms
with soft assignment operations. This makes the proposed loss suitable for
learning deep embeddings using stochastic optimization. In the experiments, the
new loss performs favourably compared to recently proposed alternatives.
| no_new_dataset | 0.947478 |
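The loss described above can be written compactly: soft-assign the positive- and negative-pair similarities to the nodes of a uniform 1D histogram with a triangular kernel, then integrate the negative histogram against the positive CDF. Below is a PyTorch sketch under the assumption that similarities lie in [-1, 1] (e.g., cosine similarity of L2-normalized embeddings); the bin count is the only tunable choice.

```python
import torch

def histogram_loss(pos_sims, neg_sims, num_bins=100):
    """P(similarity of a positive pair < similarity of a negative pair),
    estimated with soft-assigned 1D histograms. Assumes sims in [-1, 1]."""
    nodes = torch.linspace(-1.0, 1.0, num_bins, device=pos_sims.device)
    delta = 2.0 / (num_bins - 1)

    def soft_hist(sims):
        # Triangular-kernel weights: piecewise-linear, hence differentiable
        # almost everywhere with respect to the similarities.
        w = torch.clamp(1 - torch.abs(sims[None, :] - nodes[:, None]) / delta,
                        min=0)
        h = w.sum(dim=1)
        return h / h.sum()

    h_pos = soft_hist(pos_sims)
    h_neg = soft_hist(neg_sims)
    cdf_pos = torch.cumsum(h_pos, dim=0)  # P(pos similarity <= node r)
    return torch.sum(h_neg * cdf_pos)     # probability of a wrong ordering
```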
1611.00873 | Qiang Lyu | Qiang Lyu, Yixin Chen, Zhaorong Li, Zhicheng Cui, Ling Chen, Xing
Zhang, Haihua Shen | Extracting Actionability from Machine Learning Models by Sub-optimal
Deterministic Planning | 16 pages, 4 figures | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A main focus of machine learning research has been improving the
generalization accuracy and efficiency of prediction models. Many models such
as SVM, random forest, and deep neural nets have been proposed and achieved
great success. However, what emerges as missing in many applications is
actionability, i.e., the ability to turn prediction results into actions. For
example, in applications such as customer relationship management, clinical
prediction, and advertisement, the users need not only accurate prediction, but
also actionable instructions which can transfer an input to a desirable goal
(e.g., higher profit repays, lower morbidity rates, higher ads hit rates).
Existing effort in deriving such actionable knowledge is few and limited to
simple action models which restricted to only change one attribute for each
action. The dilemma is that in many real applications those action models are
often more complex and harder to extract an optimal solution.
In this paper, we propose a novel approach that achieves actionability by
combining learning with planning, two core areas of AI. In particular, we
propose a framework to extract actionable knowledge from random forest, one of
the most widely used and best off-the-shelf classifiers. We formulate the
actionability problem as a sub-optimal action planning (SOAP) problem, which is
to find a plan to alter certain features of a given input so that the random
forest would yield a desirable output, while minimizing the total costs of
actions. Technically, the SOAP problem is formulated in the SAS+ planning
formalism, and solved using a Max-SAT based approach. Our experimental results
demonstrate the effectiveness and efficiency of the proposed approach on a
personal credit dataset and other benchmarks. Our work represents a new
application of automated planning on an emerging and challenging machine
learning paradigm.
| [
{
"version": "v1",
"created": "Thu, 3 Nov 2016 03:53:41 GMT"
}
] | 2016-11-04T00:00:00 | [
[
"Lyu",
"Qiang",
""
],
[
"Chen",
"Yixin",
""
],
[
"Li",
"Zhaorong",
""
],
[
"Cui",
"Zhicheng",
""
],
[
"Chen",
"Ling",
""
],
[
"Zhang",
"Xing",
""
],
[
"Shen",
"Haihua",
""
]
] | TITLE: Extracting Actionability from Machine Learning Models by Sub-optimal
Deterministic Planning
ABSTRACT: A main focus of machine learning research has been improving the
generalization accuracy and efficiency of prediction models. Many models such
as SVM, random forest, and deep neural nets have been proposed and achieved
great success. However, what emerges as missing in many applications is
actionability, i.e., the ability to turn prediction results into actions. For
example, in applications such as customer relationship management, clinical
prediction, and advertisement, users need not only accurate predictions but
also actionable instructions that can move an input toward a desirable goal
(e.g., higher profit repayments, lower morbidity rates, higher ad hit rates).
Existing efforts to derive such actionable knowledge are few and limited to
simple action models that are restricted to changing only one attribute per
action. The dilemma is that in many real applications the action models are
often more complex, which makes it harder to extract an optimal solution.
In this paper, we propose a novel approach that achieves actionability by
combining learning with planning, two core areas of AI. In particular, we
propose a framework to extract actionable knowledge from random forest, one of
the most widely used and best off-the-shelf classifiers. We formulate the
actionability problem as a sub-optimal action planning (SOAP) problem, which is
to find a plan to alter certain features of a given input so that the random
forest would yield a desirable output, while minimizing the total costs of
actions. Technically, the SOAP problem is formulated in the SAS+ planning
formalism, and solved using a Max-SAT based approach. Our experimental results
demonstrate the effectiveness and efficiency of the proposed approach on a
personal credit dataset and other benchmarks. Our work represents a new
application of automated planning on an emerging and challenging machine
learning paradigm.
| no_new_dataset | 0.9455 |
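The optimization problem behind SOAP -- change features of an input at minimum cost so that a trained random forest flips to a desired prediction -- can be made concrete without the SAS+/Max-SAT machinery. The brute-force sketch below (scikit-learn forests, with an assumed per-feature cost vector) enumerates threshold-derived values for small feature subsets; it is a naive stand-in, and the paper's Max-SAT encoding is what makes the search tractable at scale.

```python
import itertools
import numpy as np

def candidate_values(forest, feature):
    # Only crossing a split threshold can change any tree's output, so the
    # thresholds the forest uses for this feature define the candidate values.
    vals = set()
    for tree in forest.estimators_:
        t = tree.tree_
        for f, thr in zip(t.feature, t.threshold):
            if f == feature:                  # internal nodes only (f >= 0)
                vals.update((thr - 1e-6, thr + 1e-6))
    return sorted(vals)

def cheapest_plan(forest, x, costs, target=1, max_changes=2):
    # Naive stand-in for the paper's Max-SAT solver: try all small subsets
    # of features and all threshold-derived values, and keep the cheapest
    # plan that makes the forest predict `target`. Exponential in
    # max_changes -- for illustration only.
    best, best_cost = None, np.inf
    for k in range(1, max_changes + 1):
        for feats in itertools.combinations(range(len(x)), k):
            grids = [candidate_values(forest, f) for f in feats]
            for combo in itertools.product(*grids):
                x2 = x.copy()
                for f, v in zip(feats, combo):
                    x2[f] = v
                cost = sum(costs[f] * abs(x2[f] - x[f]) for f in feats)
                if cost < best_cost and forest.predict([x2])[0] == target:
                    best, best_cost = x2, cost
    return best, best_cost
```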