id stringlengths 9-16 | submitter stringlengths 3-64 ⌀ | authors stringlengths 5-6.63k | title stringlengths 7-245 | comments stringlengths 1-482 ⌀ | journal-ref stringlengths 4-382 ⌀ | doi stringlengths 9-151 ⌀ | report-no stringclasses 984 values | categories stringlengths 5-108 | license stringclasses 9 values | abstract stringlengths 83-3.41k | versions listlengths 1-20 | update_date timestamp[s] 2007-05-23 00:00:00 to 2025-04-11 00:00:00 | authors_parsed sequencelengths 1-427 | prompt stringlengths 166-3.49k | label stringclasses 2 values | prob float64 0.5-0.98 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1612.08333 | Karthik Bangalore Mani | Karthik Bangalore Mani | Text Summarization using Deep Learning and Ridge Regression | 4 pages,10 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop models and extract relevant features for automatic text
summarization and investigate the performance of different models on the DUC
2001 dataset. Two different models were developed, one being a ridge regressor
and the other a multi-layer perceptron. The hyperparameters were varied
and their performance was noted. We segregated the summarization task into two
main steps, the first being sentence ranking and the second being sentence
selection. In the first step, given a document, we sort the sentences based on
their importance, and in the second step, in order to obtain non-redundant
sentences, we weed out the sentences that have high similarity with the
previously selected sentences.
| [
{
"version": "v1",
"created": "Mon, 26 Dec 2016 07:17:30 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2017 06:37:24 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jun 2017 00:23:45 GMT"
},
{
"version": "v4",
"created": "Thu, 15 Jun 2017 02:42:47 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Mani",
"Karthik Bangalore",
""
]
] | TITLE: Text Summarization using Deep Learning and Ridge Regression
ABSTRACT: We develop models and extract relevant features for automatic text
summarization and investigate the performance of different models on the DUC
2001 dataset. Two different models were developed, one being a ridge regressor
and the other a multi-layer perceptron. The hyperparameters were varied
and their performance was noted. We segregated the summarization task into two
main steps, the first being sentence ranking and the second being sentence
selection. In the first step, given a document, we sort the sentences based on
their importance, and in the second step, in order to obtain non-redundant
sentences, we weed out the sentences that have high similarity with the
previously selected sentences.
| no_new_dataset | 0.955527 |
1701.02593 | Diego Marcheggiani | Diego Marcheggiani, Anton Frolov, Ivan Titov | A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based
Semantic Role Labeling | To appear in CoNLL 2017 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a simple and accurate neural model for dependency-based semantic
role labeling. Our model predicts predicate-argument dependencies relying on
states of a bidirectional LSTM encoder. The semantic role labeler achieves
competitive performance on English, even without any kind of syntactic
information and only using local inference. However, when automatically
predicted part-of-speech tags are provided as input, it substantially
outperforms all previous local models and approaches the best reported results
on the English CoNLL-2009 dataset. We also consider Chinese, Czech and Spanish
where our approach also achieves competitive results. Syntactic parsers are
unreliable on out-of-domain data, so standard (i.e., syntactically-informed)
SRL models are hindered when tested in this setting. Our syntax-agnostic model
appears more robust, resulting in the best reported results on standard
out-of-domain test sets.
| [
{
"version": "v1",
"created": "Tue, 10 Jan 2017 14:01:47 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2017 16:47:47 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Marcheggiani",
"Diego",
""
],
[
"Frolov",
"Anton",
""
],
[
"Titov",
"Ivan",
""
]
] | TITLE: A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based
Semantic Role Labeling
ABSTRACT: We introduce a simple and accurate neural model for dependency-based semantic
role labeling. Our model predicts predicate-argument dependencies relying on
states of a bidirectional LSTM encoder. The semantic role labeler achieves
competitive performance on English, even without any kind of syntactic
information and only using local inference. However, when automatically
predicted part-of-speech tags are provided as input, it substantially
outperforms all previous local models and approaches the best reported results
on the English CoNLL-2009 dataset. We also consider Chinese, Czech and Spanish
where our approach also achieves competitive results. Syntactic parsers are
unreliable on out-of-domain data, so standard (i.e., syntactically-informed)
SRL models are hindered when tested in this setting. Our syntax-agnostic model
appears more robust, resulting in the best reported results on standard
out-of-domain test sets.
| no_new_dataset | 0.948822 |
1702.02519 | Adrian Benton | Adrian Benton, Huda Khayrallah, Biman Gujral, Dee Ann Reisinger, Sheng
Zhang, Raman Arora | Deep Generalized Canonical Correlation Analysis | 14 pages, 6 figures | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Deep Generalized Canonical Correlation Analysis (DGCCA) -- a
method for learning nonlinear transformations of arbitrarily many views of
data, such that the resulting transformations are maximally informative of each
other. While methods for nonlinear two-view representation learning (Deep CCA,
(Andrew et al., 2013)) and linear many-view representation learning
(Generalized CCA (Horst, 1961)) exist, DGCCA is the first CCA-style multiview
representation learning technique that combines the flexibility of nonlinear
(deep) representation learning with the statistical power of incorporating
information from many independent sources, or views. We present the DGCCA
formulation as well as an efficient stochastic optimization algorithm for
solving it. We learn DGCCA representations on two distinct datasets for three
downstream tasks: phonetic transcription from acoustic and articulatory
measurements, and recommending hashtags and friends on a dataset of Twitter
users. We find that DGCCA representations soundly beat existing methods at
phonetic transcription and hashtag recommendation, and in general perform no
worse than standard linear many-view techniques.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2017 16:57:48 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2017 00:06:08 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Benton",
"Adrian",
""
],
[
"Khayrallah",
"Huda",
""
],
[
"Gujral",
"Biman",
""
],
[
"Reisinger",
"Dee Ann",
""
],
[
"Zhang",
"Sheng",
""
],
[
"Arora",
"Raman",
""
]
] | TITLE: Deep Generalized Canonical Correlation Analysis
ABSTRACT: We present Deep Generalized Canonical Correlation Analysis (DGCCA) -- a
method for learning nonlinear transformations of arbitrarily many views of
data, such that the resulting transformations are maximally informative of each
other. While methods for nonlinear two-view representation learning (Deep CCA,
(Andrew et al., 2013)) and linear many-view representation learning
(Generalized CCA (Horst, 1961)) exist, DGCCA is the first CCA-style multiview
representation learning technique that combines the flexibility of nonlinear
(deep) representation learning with the statistical power of incorporating
information from many independent sources, or views. We present the DGCCA
formulation as well as an efficient stochastic optimization algorithm for
solving it. We learn DGCCA representations on two distinct datasets for three
downstream tasks: phonetic transcription from acoustic and articulatory
measurements, and recommending hashtags and friends on a dataset of Twitter
users. We find that DGCCA representations soundly beat existing methods at
phonetic transcription and hashtag recommendation, and in general perform no
worse than standard linear many-view techniques.
| no_new_dataset | 0.948489 |
1706.03038 | Mohammadamin Barekatain | Mohammadamin Barekatain, Miquel Mart\'i, Hsueh-Fu Shih, Samuel Murray,
Kotaro Nakayama, Yutaka Matsuo and Helmut Prendinger | Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action
Detection | Computer Vision and Pattern Recognition Workshops (CVPRW), Hawaii,
USA, 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite significant progress in the development of human action detection
datasets and algorithms, no current dataset is representative of real-world
aerial view scenarios. We present Okutama-Action, a new video dataset for
aerial view concurrent human action detection. It consists of 43 minute-long
fully-annotated sequences with 12 action classes. Okutama-Action features many
challenges missing in current datasets, including dynamic transition of
actions, significant changes in scale and aspect ratio, abrupt camera movement,
as well as multi-labeled actors. As a result, our dataset is more challenging
than existing ones, and will help push the field forward to enable real-world
applications.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2017 16:54:51 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2017 16:04:01 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Barekatain",
"Mohammadamin",
""
],
[
"Martí",
"Miquel",
""
],
[
"Shih",
"Hsueh-Fu",
""
],
[
"Murray",
"Samuel",
""
],
[
"Nakayama",
"Kotaro",
""
],
[
"Matsuo",
"Yutaka",
""
],
[
"Prendinger",
"Helmut",
""
]
] | TITLE: Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action
Detection
ABSTRACT: Despite significant progress in the development of human action detection
datasets and algorithms, no current dataset is representative of real-world
aerial view scenarios. We present Okutama-Action, a new video dataset for
aerial view concurrent human action detection. It consists of 43 minute-long
fully-annotated sequences with 12 action classes. Okutama-Action features many
challenges missing in current datasets, including dynamic transition of
actions, significant changes in scale and aspect ratio, abrupt camera movement,
as well as multi-labeled actors. As a result, our dataset is more challenging
than existing ones, and will help push the field forward to enable real-world
applications.
| new_dataset | 0.955527 |
1706.03610 | Georg Wiese | Georg Wiese, Dirk Weissenborn, Mariana Neves | Neural Domain Adaptation for Biomedical Question Answering | null | null | null | null | cs.CL cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Factoid question answering (QA) has recently benefited from the development
of deep learning (DL) systems. Neural network models outperform traditional
approaches in domains where large datasets exist, such as SQuAD (ca. 100,000
questions) for Wikipedia articles. However, these systems have not yet been
applied to QA in more specific domains, such as biomedicine, because datasets
are generally too small to train a DL system from scratch. For example, the
BioASQ dataset for biomedical QA comprises less than 900 factoid (single
answer) and list (multiple answers) QA instances. In this work, we adapt a
neural QA system trained on a large open-domain dataset (SQuAD, source) to a
biomedical dataset (BioASQ, target) by employing various transfer learning
techniques. Our network architecture is based on a state-of-the-art QA system,
extended with biomedical word embeddings and a novel mechanism to answer list
questions. In contrast to existing biomedical QA systems, our system does not
rely on domain-specific ontologies, parsers or entity taggers, which are
expensive to create. Despite this fact, our systems achieve state-of-the-art
results on factoid questions and competitive results on list questions.
| [
{
"version": "v1",
"created": "Mon, 12 Jun 2017 13:08:21 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2017 15:16:18 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Wiese",
"Georg",
""
],
[
"Weissenborn",
"Dirk",
""
],
[
"Neves",
"Mariana",
""
]
] | TITLE: Neural Domain Adaptation for Biomedical Question Answering
ABSTRACT: Factoid question answering (QA) has recently benefited from the development
of deep learning (DL) systems. Neural network models outperform traditional
approaches in domains where large datasets exist, such as SQuAD (ca. 100,000
questions) for Wikipedia articles. However, these systems have not yet been
applied to QA in more specific domains, such as biomedicine, because datasets
are generally too small to train a DL system from scratch. For example, the
BioASQ dataset for biomedical QA comprises less than 900 factoid (single
answer) and list (multiple answers) QA instances. In this work, we adapt a
neural QA system trained on a large open-domain dataset (SQuAD, source) to a
biomedical dataset (BioASQ, target) by employing various transfer learning
techniques. Our network architecture is based on a state-of-the-art QA system,
extended with biomedical word embeddings and a novel mechanism to answer list
questions. In contrast to existing biomedical QA systems, our system does not
rely on domain-specific ontologies, parsers or entity taggers, which are
expensive to create. Despite this fact, our systems achieve state-of-the-art
results on factoid questions and competitive results on list questions.
| no_new_dataset | 0.932453 |
1706.04124 | Jinzhuo Wang | Baoyang Chen, Wenmin Wang, Jinzhuo Wang, Xiongtao Chen | Video Imagination from a Single Image with Transformation Generation | 9 pages, 10 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we focus on a challenging task: synthesizing multiple imaginary
videos given a single image. Major problems come from high dimensionality of
pixel space and the ambiguity of potential motions. To overcome those problems,
we propose a new framework that produces imaginary videos by transformation
generation. The generated transformations are applied to the original image in
a novel volumetric merge network to reconstruct frames in imaginary video.
Through sampling different latent variables, our method can output different
imaginary video samples. The framework is trained in an adversarial way with
unsupervised learning. For evaluation, we propose a new assessment metric
$RIQA$. In experiments, we test on 3 datasets varying from synthetic data to
natural scene. Our framework achieves promising performance in image quality
assessment. The visual inspection indicates that it can successfully generate
diverse five-frame videos in acceptable perceptual quality.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 15:31:10 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Jun 2017 07:51:22 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Chen",
"Baoyang",
""
],
[
"Wang",
"Wenmin",
""
],
[
"Wang",
"Jinzhuo",
""
],
[
"Chen",
"Xiongtao",
""
]
] | TITLE: Video Imagination from a Single Image with Transformation Generation
ABSTRACT: In this work, we focus on a challenging task: synthesizing multiple imaginary
videos given a single image. Major problems come from high dimensionality of
pixel space and the ambiguity of potential motions. To overcome those problems,
we propose a new framework that produces imaginary videos by transformation
generation. The generated transformations are applied to the original image in
a novel volumetric merge network to reconstruct frames in imaginary video.
Through sampling different latent variables, our method can output different
imaginary video samples. The framework is trained in an adversarial way with
unsupervised learning. For evaluation, we propose a new assessment metric
$RIQA$. In experiments, we test on 3 datasets varying from synthetic data to
natural scene. Our framework achieves promising performance in image quality
assessment. The visual inspection indicates that it can successfully generate
diverse five-frame videos in acceptable perceptual quality.
| no_new_dataset | 0.947914 |
1706.04737 | Lin Yang | Lin Yang, Yizhe Zhang, Jianxu Chen, Siyuan Zhang, Danny Z. Chen | Suggestive Annotation: A Deep Active Learning Framework for Biomedical
Image Segmentation | Accepted at MICCAI 2017 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Image segmentation is a fundamental problem in biomedical image analysis.
Recent advances in deep learning have achieved promising results on many
biomedical image segmentation benchmarks. However, due to large variations in
biomedical images (different modalities, image settings, objects, noise, etc.),
utilizing deep learning for a new application usually requires a new set of
training data. This can incur a great deal of annotation effort and cost,
because only biomedical experts can annotate effectively, and often there are
too many instances in images (e.g., cells) to annotate. In this paper, we aim
to address the following question: With limited effort (e.g., time) for
annotation, what instances should be annotated in order to attain the best
performance? We present a deep active learning framework that combines fully
convolutional network (FCN) and active learning to significantly reduce
annotation effort by making judicious suggestions on the most effective
annotation areas. We utilize uncertainty and similarity information provided by
FCN and formulate a generalized version of the maximum set cover problem to
determine the most representative and uncertain areas for annotation. Extensive
experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node
ultrasound image segmentation dataset show that, using annotation suggestions
by our method, state-of-the-art segmentation performance can be achieved by
using only 50% of training data.
| [
{
"version": "v1",
"created": "Thu, 15 Jun 2017 05:01:53 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Yang",
"Lin",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Chen",
"Jianxu",
""
],
[
"Zhang",
"Siyuan",
""
],
[
"Chen",
"Danny Z.",
""
]
] | TITLE: Suggestive Annotation: A Deep Active Learning Framework for Biomedical
Image Segmentation
ABSTRACT: Image segmentation is a fundamental problem in biomedical image analysis.
Recent advances in deep learning have achieved promising results on many
biomedical image segmentation benchmarks. However, due to large variations in
biomedical images (different modalities, image settings, objects, noise, etc.),
utilizing deep learning for a new application usually requires a new set of
training data. This can incur a great deal of annotation effort and cost,
because only biomedical experts can annotate effectively, and often there are
too many instances in images (e.g., cells) to annotate. In this paper, we aim
to address the following question: With limited effort (e.g., time) for
annotation, what instances should be annotated in order to attain the best
performance? We present a deep active learning framework that combines fully
convolutional network (FCN) and active learning to significantly reduce
annotation effort by making judicious suggestions on the most effective
annotation areas. We utilize uncertainty and similarity information provided by
FCN and formulate a generalized version of the maximum set cover problem to
determine the most representative and uncertain areas for annotation. Extensive
experiments using the 2015 MICCAI Gland Challenge dataset and a lymph node
ultrasound image segmentation dataset show that, using annotation suggestions
by our method, state-of-the-art segmentation performance can be achieved by
using only 50% of training data.
| no_new_dataset | 0.950686 |
1706.04769 | Simone Scardapane | Simone Scardapane, Paolo Di Lorenzo | Stochastic Training of Neural Networks via Successive Convex
Approximations | Preprint submitted to IEEE Transactions on Neural Networks and
Learning Systems | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a new family of algorithms for training neural networks
(NNs). These are based on recent developments in the field of non-convex
optimization, going under the general name of successive convex approximation
(SCA) techniques. The basic idea is to iteratively replace the original
(non-convex, highly dimensional) learning problem with a sequence of (strongly
convex) approximations, which are both accurate and simple to optimize.
Differently from similar ideas (e.g., quasi-Newton algorithms), the
approximations can be constructed using only first-order information of the
neural network function, in a stochastic fashion, while exploiting the overall
structure of the learning problem for a faster convergence. We discuss several
use cases, based on different choices for the loss function (e.g., squared loss
and cross-entropy loss), and for the regularization of the NN's weights. We
experiment on several medium-sized benchmark problems, and on a large-scale
dataset involving simulated physical data. The results show how the algorithm
outperforms state-of-the-art techniques, providing faster convergence to a
better minimum. Additionally, we show how the algorithm can be easily
parallelized over multiple computational units without hindering its
performance. In particular, each computational unit can optimize a tailored
surrogate function defined on a randomly assigned subset of the input
variables, whose dimension can be selected depending entirely on the available
computational power.
| [
{
"version": "v1",
"created": "Thu, 15 Jun 2017 08:11:22 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Scardapane",
"Simone",
""
],
[
"Di Lorenzo",
"Paolo",
""
]
] | TITLE: Stochastic Training of Neural Networks via Successive Convex
Approximations
ABSTRACT: This paper proposes a new family of algorithms for training neural networks
(NNs). These are based on recent developments in the field of non-convex
optimization, going under the general name of successive convex approximation
(SCA) techniques. The basic idea is to iteratively replace the original
(non-convex, highly dimensional) learning problem with a sequence of (strongly
convex) approximations, which are both accurate and simple to optimize.
Differently from similar ideas (e.g., quasi-Newton algorithms), the
approximations can be constructed using only first-order information of the
neural network function, in a stochastic fashion, while exploiting the overall
structure of the learning problem for a faster convergence. We discuss several
use cases, based on different choices for the loss function (e.g., squared loss
and cross-entropy loss), and for the regularization of the NN's weights. We
experiment on several medium-sized benchmark problems, and on a large-scale
dataset involving simulated physical data. The results show how the algorithm
outperforms state-of-the-art techniques, providing faster convergence to a
better minimum. Additionally, we show how the algorithm can be easily
parallelized over multiple computational units without hindering its
performance. In particular, each computational unit can optimize a tailored
surrogate function defined on a randomly assigned subset of the input
variables, whose dimension can be selected depending entirely on the available
computational power.
| no_new_dataset | 0.947137 |
1706.04870 | Ashraf Darwish | Ayat Taha, Ashraf Darwish, and Aboul Ella Hassanien | Arabian Horse Identification Benchmark Dataset | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | The lack of a standard muzzle print database is a challenge for conducting
research on Arabian horse identification systems. Therefore, collecting a
muzzle print image database is a crucial decision. The dataset presented in
this paper is an option for studies that need a dataset for testing and
comparing algorithms under development for Arabian horse identification.
Our collected dataset consists of 300 color images collected from the muzzles
of 50 Arabian horses, with 6 muzzle print images per horse. Special care has
been given to the quality of the collected images. The collected images cover
different quality levels and degradation factors, such as image rotation and
image partiality, for simulating real-time identification operations. This
dataset can be used to test Arabian horse identification systems, including
the extracted features and the selected classifier.
| [
{
"version": "v1",
"created": "Thu, 15 Jun 2017 13:58:02 GMT"
}
] | 2017-06-16T00:00:00 | [
[
"Taha",
"Ayat",
""
],
[
"Darwish",
"Ashraf",
""
],
[
"Hassanien",
"Aboul Ella",
""
]
] | TITLE: Arabian Horse Identification Benchmark Dataset
ABSTRACT: The lack of a standard muzzle print database is a challenge for conducting
research on Arabian horse identification systems. Therefore, collecting a
muzzle print image database is a crucial decision. The dataset presented in
this paper is an option for studies that need a dataset for testing and
comparing algorithms under development for Arabian horse identification.
Our collected dataset consists of 300 color images collected from the muzzles
of 50 Arabian horses, with 6 muzzle print images per horse. Special care has
been given to the quality of the collected images. The collected images cover
different quality levels and degradation factors, such as image rotation and
image partiality, for simulating real-time identification operations. This
dataset can be used to test Arabian horse identification systems, including
the extracted features and the selected classifier.
| new_dataset | 0.965414 |
1508.06950 | Kevin Chan | Sameet Sreenivasan, Kevin S. Chan, Ananthram Swami, Gyorgy Korniss and
Boleslaw Szymanski | Information Cascades in Feed-based Networks of Users with Limited
Attention | 8 pages, 5 figures, For IEEE Transactions on Network Science and
Engineering (submitted) | IEEE Transactions on Network Science and Engineering 4, 120-128
(2017) | 10.1109/TNSE.2016.2625807 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We build a model of information cascades on feed-based networks, taking into
account the finite attention span of users, message generation rates and
message forwarding rates. Using this model, we study through simulations, the
effect of the extent of user attention on the probability that the cascade
becomes viral. In analogy with a branching process, we estimate the branching
factor associated with the cascade process for different attention spans and
different forwarding probabilities, and demonstrate that beyond a certain
attention span, critical forwarding probabilities exist that constitute a
threshold after which cascades can become viral. The critical forwarding
probabilities have an inverse relationship with the attention span. Next, we
develop a semi-analytical approach for our model that allows us to determine the
branching factor for given values of message generation rates, message
forwarding rates and attention spans. The branching factors obtained using this
analytical approach show good agreement with those obtained through
simulations. Finally, we analyze an event specific dataset obtained from
Twitter, and show that estimated branching factors correlate well with the
cascade size distributions associated with distinct hashtags.
| [
{
"version": "v1",
"created": "Thu, 27 Aug 2015 17:36:45 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Sreenivasan",
"Sameet",
""
],
[
"Chan",
"Kevin S.",
""
],
[
"Swami",
"Ananthram",
""
],
[
"Korniss",
"Gyorgy",
""
],
[
"Szymanski",
"Boleslaw",
""
]
] | TITLE: Information Cascades in Feed-based Networks of Users with Limited
Attention
ABSTRACT: We build a model of information cascades on feed-based networks, taking into
account the finite attention span of users, message generation rates and
message forwarding rates. Using this model, we study through simulations, the
effect of the extent of user attention on the probability that the cascade
becomes viral. In analogy with a branching process, we estimate the branching
factor associated with the cascade process for different attention spans and
different forwarding probabilities, and demonstrate that beyond a certain
attention span, critical forwarding probabilities exist that constitute a
threshold after which cascades can become viral. The critical forwarding
probabilities have an inverse relationship with the attention span. Next, we
develop a semi-analytical approach for our model that allows us to determine the
branching factor for given values of message generation rates, message
forwarding rates and attention spans. The branching factors obtained using this
analytical approach show good agreement with those obtained through
simulations. Finally, we analyze an event specific dataset obtained from
Twitter, and show that estimated branching factors correlate well with the
cascade size distributions associated with distinct hashtags.
| no_new_dataset | 0.95222 |
1512.04280 | Liang Lu | Liang Lu and Steve Renals | Small-footprint Deep Neural Networks with Highway Connections for Speech
Recognition | 5 pages, 3 figures, fixed typo, accepted by Interspeech 2016 | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For speech recognition, deep neural networks (DNNs) have significantly
improved the recognition accuracy in most of benchmark datasets and application
domains. However, compared to the conventional Gaussian mixture models,
DNN-based acoustic models usually have a much larger number of model parameters,
making them challenging to deploy on resource-constrained platforms,
e.g., mobile devices. In this paper, we study the application of the recently
proposed highway network to train small-footprint DNNs, which are {\it thinner}
and {\it deeper}, and have a significantly smaller number of model parameters
compared to conventional DNNs. We investigated this approach on the AMI meeting
speech transcription corpus which has around 70 hours of audio data. The
highway neural networks constantly outperformed their plain DNN counterparts,
and the number of model parameters can be reduced significantly without
sacrificing the recognition accuracy.
| [
{
"version": "v1",
"created": "Mon, 14 Dec 2015 12:29:32 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Mar 2016 12:14:06 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Jun 2016 10:30:54 GMT"
},
{
"version": "v4",
"created": "Wed, 14 Jun 2017 15:17:27 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Lu",
"Liang",
""
],
[
"Renals",
"Steve",
""
]
] | TITLE: Small-footprint Deep Neural Networks with Highway Connections for Speech
Recognition
ABSTRACT: For speech recognition, deep neural networks (DNNs) have significantly
improved the recognition accuracy in most of benchmark datasets and application
domains. However, compared to the conventional Gaussian mixture models,
DNN-based acoustic models usually have a much larger number of model parameters,
making them challenging to deploy on resource-constrained platforms,
e.g., mobile devices. In this paper, we study the application of the recently
proposed highway network to train small-footprint DNNs, which are {\it thinner}
and {\it deeper}, and have a significantly smaller number of model parameters
compared to conventional DNNs. We investigated this approach on the AMI meeting
speech transcription corpus which has around 70 hours of audio data. The
highway neural networks constantly outperformed their plain DNN counterparts,
and the number of model parameters can be reduced significantly without
sacrificing the recognition accuracy.
| no_new_dataset | 0.951504 |
1604.07243 | Mattia Desana | Mattia Desana and Christoph Schn\"orr | Learning Arbitrary Sum-Product Network Leaves with
Expectation-Maximization | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sum-Product Networks with complex probability distribution at the leaves have
been shown to be powerful tractable-inference probabilistic models. However,
while learning the internal parameters has been amply studied, learning complex
leaf distributions is an open problem with only a few results available in special
cases. In this paper we derive an efficient method to learn a very large class
of leaf distributions with Expectation-Maximization. The EM updates have the
form of simple weighted maximum likelihood problems, allowing to use any
distribution that can be learned with maximum likelihood, even approximately.
The algorithm has cost linear in the model size and converges even if only
partial optimizations are performed. We demonstrate this approach with
experiments on twenty real-life datasets for density estimation, using tree
graphical models as leaves. Our model outperforms state-of-the-art methods for
parameter learning despite using SPNs with much fewer parameters.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 13:22:55 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2016 16:42:59 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jun 2017 14:08:22 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Desana",
"Mattia",
""
],
[
"Schnörr",
"Christoph",
""
]
] | TITLE: Learning Arbitrary Sum-Product Network Leaves with
Expectation-Maximization
ABSTRACT: Sum-Product Networks with complex probability distribution at the leaves have
been shown to be powerful tractable-inference probabilistic models. However,
while learning the internal parameters has been amply studied, learning complex
leaf distributions is an open problem with only a few results available in special
cases. In this paper we derive an efficient method to learn a very large class
of leaf distributions with Expectation-Maximization. The EM updates have the
form of simple weighted maximum likelihood problems, allowing to use any
distribution that can be learned with maximum likelihood, even approximately.
The algorithm has cost linear in the model size and converges even if only
partial optimizations are performed. We demonstrate this approach with
experiments on twenty real-life datasets for density estimation, using tree
graphical models as leaves. Our model outperforms state-of-the-art methods for
parameter learning despite using SPNs with much fewer parameters.
| no_new_dataset | 0.944995 |
1609.04938 | Yuntian Deng | Yuntian Deng, Anssi Kanervisto, Jeffrey Ling, Alexander M. Rush | Image-to-Markup Generation with Coarse-to-Fine Attention | Accepted by ICML 2017 | null | null | null | cs.CV cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a neural encoder-decoder model to convert images into
presentational markup based on a scalable coarse-to-fine attention mechanism.
Our method is evaluated in the context of image-to-LaTeX generation, and we
introduce a new dataset of real-world rendered mathematical expressions paired
with LaTeX markup. We show that unlike neural OCR techniques using CTC-based
models, attention-based approaches can tackle this non-standard OCR task. Our
approach outperforms classical mathematical OCR systems by a large margin on
in-domain rendered data, and, with pretraining, also performs well on
out-of-domain handwritten data. To reduce the inference complexity associated
with the attention-based approaches, we introduce a new coarse-to-fine
attention layer that selects a support region before applying attention.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2016 08:14:50 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2017 22:48:53 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Deng",
"Yuntian",
""
],
[
"Kanervisto",
"Anssi",
""
],
[
"Ling",
"Jeffrey",
""
],
[
"Rush",
"Alexander M.",
""
]
] | TITLE: Image-to-Markup Generation with Coarse-to-Fine Attention
ABSTRACT: We present a neural encoder-decoder model to convert images into
presentational markup based on a scalable coarse-to-fine attention mechanism.
Our method is evaluated in the context of image-to-LaTeX generation, and we
introduce a new dataset of real-world rendered mathematical expressions paired
with LaTeX markup. We show that unlike neural OCR techniques using CTC-based
models, attention-based approaches can tackle this non-standard OCR task. Our
approach outperforms classical mathematical OCR systems by a large margin on
in-domain rendered data, and, with pretraining, also performs well on
out-of-domain handwritten data. To reduce the inference complexity associated
with the attention-based approaches, we introduce a new coarse-to-fine
attention layer that selects a support region before applying attention.
| new_dataset | 0.958731 |
1610.04794 | Bo Yang | Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, Mingyi Hong | Towards K-means-friendly Spaces: Simultaneous Deep Learning and
Clustering | Final ICML2017 version. Main paper: 10 pages; Supplementary material:
4 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most learning approaches treat dimensionality reduction (DR) and clustering
separately (i.e., sequentially), but recent research has shown that optimizing
the two tasks jointly can substantially improve the performance of both. The
premise behind the latter genre is that the data samples are obtained via
linear transformation of latent representations that are easy to cluster; but
in practice, the transformation from the latent space to the data can be more
complicated. In this work, we assume that this transformation is an unknown and
possibly nonlinear function. To recover the `clustering-friendly' latent
representations and to better cluster the data, we propose a joint DR and
K-means clustering approach in which DR is accomplished via learning a deep
neural network (DNN). The motivation is to keep the advantages of jointly
optimizing the two tasks, while exploiting the deep neural network's ability to
approximate any nonlinear function. This way, the proposed approach can work
well for a broad class of generative models. Towards this end, we carefully
design the DNN structure and the associated joint optimization criterion, and
propose an effective and scalable algorithm to handle the formulated
optimization problem. Experiments using different real datasets are employed to
showcase the effectiveness of the proposed approach.
| [
{
"version": "v1",
"created": "Sat, 15 Oct 2016 22:51:06 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2017 22:40:26 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Yang",
"Bo",
""
],
[
"Fu",
"Xiao",
""
],
[
"Sidiropoulos",
"Nicholas D.",
""
],
[
"Hong",
"Mingyi",
""
]
] | TITLE: Towards K-means-friendly Spaces: Simultaneous Deep Learning and
Clustering
ABSTRACT: Most learning approaches treat dimensionality reduction (DR) and clustering
separately (i.e., sequentially), but recent research has shown that optimizing
the two tasks jointly can substantially improve the performance of both. The
premise behind the latter genre is that the data samples are obtained via
linear transformation of latent representations that are easy to cluster; but
in practice, the transformation from the latent space to the data can be more
complicated. In this work, we assume that this transformation is an unknown and
possibly nonlinear function. To recover the `clustering-friendly' latent
representations and to better cluster the data, we propose a joint DR and
K-means clustering approach in which DR is accomplished via learning a deep
neural network (DNN). The motivation is to keep the advantages of jointly
optimizing the two tasks, while exploiting the deep neural network's ability to
approximate any nonlinear function. This way, the proposed approach can work
well for a broad class of generative models. Towards this end, we carefully
design the DNN structure and the associated joint optimization criterion, and
propose an effective and scalable algorithm to handle the formulated
optimization problem. Experiments using different real datasets are employed to
showcase the effectiveness of the proposed approach.
| no_new_dataset | 0.946399 |
1702.08720 | Weihua Hu | Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi
Sugiyama | Learning Discrete Representations via Information Maximizing
Self-Augmented Training | To appear at ICML 2017 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning discrete representations of data is a central machine learning task
because of the compactness of the representations and ease of interpretation.
The task includes clustering and hash learning as special cases. Deep neural
networks are promising to be used because they can model the non-linearity of
data and scale to large datasets. However, their model complexity is huge, and
therefore, we need to carefully regularize the networks in order to learn
useful representations that exhibit intended invariance for applications of
interest. To this end, we propose a method called Information Maximizing
Self-Augmented Training (IMSAT). In IMSAT, we use data augmentation to impose
the invariance on discrete representations. More specifically, we encourage the
predicted representations of augmented data points to be close to those of the
original data points in an end-to-end fashion. At the same time, we maximize
the information-theoretic dependency between data and their predicted discrete
representations. Extensive experiments on benchmark datasets show that IMSAT
produces state-of-the-art results for both clustering and unsupervised hash
learning.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 09:57:27 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Mar 2017 10:14:51 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jun 2017 04:18:11 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Hu",
"Weihua",
""
],
[
"Miyato",
"Takeru",
""
],
[
"Tokui",
"Seiya",
""
],
[
"Matsumoto",
"Eiichi",
""
],
[
"Sugiyama",
"Masashi",
""
]
] | TITLE: Learning Discrete Representations via Information Maximizing
Self-Augmented Training
ABSTRACT: Learning discrete representations of data is a central machine learning task
because of the compactness of the representations and ease of interpretation.
The task includes clustering and hash learning as special cases. Deep neural
networks are promising to be used because they can model the non-linearity of
data and scale to large datasets. However, their model complexity is huge, and
therefore, we need to carefully regularize the networks in order to learn
useful representations that exhibit intended invariance for applications of
interest. To this end, we propose a method called Information Maximizing
Self-Augmented Training (IMSAT). In IMSAT, we use data augmentation to impose
the invariance on discrete representations. More specifically, we encourage the
predicted representations of augmented data points to be close to those of the
original data points in an end-to-end fashion. At the same time, we maximize
the information-theoretic dependency between data and their predicted discrete
representations. Extensive experiments on benchmark datasets show that IMSAT
produces state-of-the-art results for both clustering and unsupervised hash
learning.
| no_new_dataset | 0.944893 |
1703.02161 | Sofia Ira Ktena | Sofia Ira Ktena, Sarah Parisot, Enzo Ferrante, Martin Rajchl, Matthew
Lee, Ben Glocker, Daniel Rueckert | Distance Metric Learning using Graph Convolutional Networks: Application
to Functional Brain Networks | International Conference on Medical Image Computing and
Computer-Assisted Interventions (MICCAI) 2017 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating similarity between graphs is of major importance in several
computer vision and pattern recognition problems, where graph representations
are often used to model objects or interactions between elements. The choice of
a distance or similarity metric is, however, not trivial and can be highly
dependent on the application at hand. In this work, we propose a novel metric
learning method to evaluate distance between graphs that leverages the power of
convolutional neural networks, while exploiting concepts from spectral graph
theory to allow these operations on irregular graphs. We demonstrate the
potential of our method in the field of connectomics, where neuronal pathways
or functional connections between brain regions are commonly modelled as
graphs. In this problem, the definition of an appropriate graph similarity
function is critical to unveil patterns of disruptions associated with certain
brain disorders. Experimental results on the ABIDE dataset show that our method
can learn a graph similarity metric tailored for a clinical application,
improving the performance of a simple k-nn classifier by 11.9% compared to a
traditional distance metric.
| [
{
"version": "v1",
"created": "Tue, 7 Mar 2017 00:49:27 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2017 11:05:52 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Ktena",
"Sofia Ira",
""
],
[
"Parisot",
"Sarah",
""
],
[
"Ferrante",
"Enzo",
""
],
[
"Rajchl",
"Martin",
""
],
[
"Lee",
"Matthew",
""
],
[
"Glocker",
"Ben",
""
],
[
"Rueckert",
"Daniel",
""
]
] | TITLE: Distance Metric Learning using Graph Convolutional Networks: Application
to Functional Brain Networks
ABSTRACT: Evaluating similarity between graphs is of major importance in several
computer vision and pattern recognition problems, where graph representations
are often used to model objects or interactions between elements. The choice of
a distance or similarity metric is, however, not trivial and can be highly
dependent on the application at hand. In this work, we propose a novel metric
learning method to evaluate distance between graphs that leverages the power of
convolutional neural networks, while exploiting concepts from spectral graph
theory to allow these operations on irregular graphs. We demonstrate the
potential of our method in the field of connectomics, where neuronal pathways
or functional connections between brain regions are commonly modelled as
graphs. In this problem, the definition of an appropriate graph similarity
function is critical to unveil patterns of disruptions associated with certain
brain disorders. Experimental results on the ABIDE dataset show that our method
can learn a graph similarity metric tailored for a clinical application,
improving the performance of a simple k-nn classifier by 11.9% compared to a
traditional distance metric.
| no_new_dataset | 0.947817 |
1704.08387 | Jianpeng Cheng J | Jianpeng Cheng, Siva Reddy, Vijay Saraswat, Mirella Lapata | Learning Structured Natural Language Representations for Semantic
Parsing | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a neural semantic parser that converts natural language
utterances to intermediate representations in the form of predicate-argument
structures, which are induced with a transition system and subsequently mapped
to target domains. The semantic parser is trained end-to-end using annotated
logical forms or their denotations. We obtain competitive results on various
datasets. The induced predicate-argument structures shed light on the types of
representations useful for semantic parsing and how these are different from
linguistically motivated ones.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2017 00:24:20 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2017 09:57:29 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jun 2017 04:18:26 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Cheng",
"Jianpeng",
""
],
[
"Reddy",
"Siva",
""
],
[
"Saraswat",
"Vijay",
""
],
[
"Lapata",
"Mirella",
""
]
] | TITLE: Learning Structured Natural Language Representations for Semantic
Parsing
ABSTRACT: We introduce a neural semantic parser that converts natural language
utterances to intermediate representations in the form of predicate-argument
structures, which are induced with a transition system and subsequently mapped
to target domains. The semantic parser is trained end-to-end using annotated
logical forms or their denotations. We obtain competitive results on various
datasets. The induced predicate-argument structures shed light on the types of
representations useful for semantic parsing and how these are different from
linguistically motivated ones.
| no_new_dataset | 0.941975 |
1705.05732 | Tatiana Alessandra Bubba | Tatiana A. Bubba, Markus Juvonen, Jonatan Lehtonen, Maximilian M\"arz,
Alexander Meaney, Zenith Purisha and Samuli Siltanen | Tomographic X-ray data of carved cheese | arXiv admin note: substantial text overlap with arXiv:1609.07299,
arXiv:1502.04064 | null | null | null | physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the documentation of the tomographic X-ray data of a carved cheese
slice. Data are available at www.fips.fi/dataset.php, and can be freely used
for scientific purposes with appropriate references to them, and to this
document in http://arxiv.org/. The data set consists of (1) the X-ray sinogram
of a single 2D slice of the cheese slice with three different resolutions and
(2) the corresponding measurement matrices modeling the linear operation of the
X-ray transform. Each of these sinograms was obtained from a measured
360-projection fan-beam sinogram by down-sampling and taking logarithms. The
original (measured) sinogram is also provided in its original form and
resolution.
| [
{
"version": "v1",
"created": "Fri, 12 May 2017 14:02:15 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jun 2017 14:22:00 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Bubba",
"Tatiana A.",
""
],
[
"Juvonen",
"Markus",
""
],
[
"Lehtonen",
"Jonatan",
""
],
[
"März",
"Maximilian",
""
],
[
"Meaney",
"Alexander",
""
],
[
"Purisha",
"Zenith",
""
],
[
"Siltanen",
"Samuli",
""
]
] | TITLE: Tomographic X-ray data of carved cheese
ABSTRACT: This is the documentation of the tomographic X-ray data of a carved cheese
slice. Data are available at www.fips.fi/dataset.php, and can be freely used
for scientific purposes with appropriate references to them, and to this
document in http://arxiv.org/. The data set consists of (1) the X-ray sinogram
of a single 2D slice of the cheese slice with three different resolutions and
(2) the corresponding measurement matrices modeling the linear operation of the
X-ray transform. Each of these sinograms was obtained from a measured
360-projection fan-beam sinogram by down-sampling and taking logarithms. The
original (measured) sinogram is also provided in its original form and
resolution.
| no_new_dataset | 0.924073 |
1706.04215 | Ashwinkumar Ganesan | Mandar Haldekar, Ashwinkumar Ganesan, Tim Oates | Identifying Spatial Relations in Images using Convolutional Neural
Networks | null | null | null | null | cs.AI cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional approaches to building a large scale knowledge graph have usually
relied on extracting information (entities, their properties, and relations
between them) from unstructured text (e.g. Dbpedia). Recent advances in
Convolutional Neural Networks (CNN) allow us to shift our focus to learning
entities and relations from images, as they build robust models that require
little or no pre-processing of the images. In this paper, we present an
approach to identify and extract spatial relations (e.g., The girl is standing
behind the table) from images using CNNs. Our research addresses two specific
challenges: providing insight into how spatial relations are learned by the
network and which parts of the image are used to predict these relations. We
use the pre-trained network VGGNet to extract features from an image and train
a Multi-layer Perceptron (MLP) on a set of synthetic images and the sun09
dataset to extract spatial relations. The MLP predicts spatial relations
without a bounding box around the objects or the space in the image depicting
the relation. To understand how the spatial relations are represented in the
network, a heatmap is overlaid on the image to show the regions that are
deemed important by the network. Also, we analyze the MLP to show the
relationship between the activation of consistent groups of nodes and the
prediction of a spatial relation. We show how the loss of these groups affects
the network's ability to identify relations.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 18:24:11 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Haldekar",
"Mandar",
""
],
[
"Ganesan",
"Ashwinkumar",
""
],
[
"Oates",
"Tim",
""
]
] | TITLE: Identifying Spatial Relations in Images using Convolutional Neural
Networks
ABSTRACT: Traditional approaches to building a large scale knowledge graph have usually
relied on extracting information (entities, their properties, and relations
between them) from unstructured text (e.g. Dbpedia). Recent advances in
Convolutional Neural Networks (CNN) allow us to shift our focus to learning
entities and relations from images, as they build robust models that require
little or no pre-processing of the images. In this paper, we present an
approach to identify and extract spatial relations (e.g., The girl is standing
behind the table) from images using CNNs. Our research addresses two specific
challenges: providing insight into how spatial relations are learned by the
network and which parts of the image are used to predict these relations. We
use the pre-trained network VGGNet to extract features from an image and train
a Multi-layer Perceptron (MLP) on a set of synthetic images and the sun09
dataset to extract spatial relations. The MLP predicts spatial relations
without a bounding box around the objects or the space in the image depicting
the relation. To understand how the spatial relations are represented in the
network, a heatmap is overlaid on the image to show the regions that are
deemed important by the network. Also, we analyze the MLP to show the
relationship between the activation of consistent groups of nodes and the
prediction of a spatial relation. We show how the loss of these groups affects
the network's ability to identify relations.
| no_new_dataset | 0.948058 |
1706.04256 | Ulugbek Kamilov | Kevin Degraux, Ulugbek S. Kamilov, Petros T. Boufounos, Dehong Liu | Online Convolutional Dictionary Learning for Multimodal Imaging | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational imaging methods that can exploit multiple modalities have the
potential to enhance the capabilities of traditional sensing systems. In this
paper, we propose a new method that reconstructs multimodal images from their
linear measurements by exploiting redundancies across different modalities. Our
method combines a convolutional group-sparse representation of images with
total variation (TV) regularization for high-quality multimodal imaging. We
develop an online algorithm that enables the unsupervised learning of
convolutional dictionaries on large-scale datasets that are typical in such
applications. We illustrate the benefit of our approach in the context of joint
intensity-depth imaging.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 21:08:33 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Degraux",
"Kevin",
""
],
[
"Kamilov",
"Ulugbek S.",
""
],
[
"Boufounos",
"Petros T.",
""
],
[
"Liu",
"Dehong",
""
]
] | TITLE: Online Convolutional Dictionary Learning for Multimodal Imaging
ABSTRACT: Computational imaging methods that can exploit multiple modalities have the
potential to enhance the capabilities of traditional sensing systems. In this
paper, we propose a new method that reconstructs multimodal images from their
linear measurements by exploiting redundancies across different modalities. Our
method combines a convolutional group-sparse representation of images with
total variation (TV) regularization for high-quality multimodal imaging. We
develop an online algorithm that enables the unsupervised learning of
convolutional dictionaries on large-scale datasets that are typical in such
applications. We illustrate the benefit of our approach in the context of joint
intensity-depth imaging.
| no_new_dataset | 0.951414 |
1706.04285 | Chenxing Xia | Chenxing Xia and Hanling Zhang and Xiuju Gao | Saliency detection by aggregating complementary background template with
optimization framework | 28 pages,10 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes an unsupervised bottom-up saliency detection approach by
aggregating complementary background template with refinement. Feature vectors
are extracted from each superpixel to cover regional color, contrast and
texture information. By using these features, a coarse detection for salient
region is realized based on background template achieved by different
combinations of boundary regions instead of only treating four boundaries as
background. Then, by ranking the relevance of the image nodes with foreground
cues extracted from the former saliency map, we obtain an improved result.
Finally, a smoothing operation is utilized to refine the foreground-based
saliency map to improve the contrast between salient and non-salient regions
until a close to binary saliency map is reached. Experimental results show that
the proposed algorithm generates more accurate saliency maps and performs
favorably against the state-of-the-art saliency detection methods on four
publicly available datasets.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2017 00:06:02 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Xia",
"Chenxing",
""
],
[
"Zhang",
"Hanling",
""
],
[
"Gao",
"Xiuju",
""
]
] | TITLE: Saliency detection by aggregating complementary background template with
optimization framework
ABSTRACT: This paper proposes an unsupervised bottom-up saliency detection approach by
aggregating complementary background template with refinement. Feature vectors
are extracted from each superpixel to cover regional color, contrast and
texture information. By using these features, a coarse detection for salient
region is realized based on background template achieved by different
combinations of boundary regions instead of only treating four boundaries as
background. Then, by ranking the relevance of the image nodes with foreground
cues extracted from the former saliency map, we obtain an improved result.
Finally, a smoothing operation is utilized to refine the foreground-based
saliency map to improve the contrast between salient and non-salient regions
until a close to binary saliency map is reached. Experimental results show that
the proposed algorithm generates more accurate saliency maps and performs
favorably against the state-of-the-art saliency detection methods on four
publicly available datasets.
| no_new_dataset | 0.952353 |
1706.04318 | Tetsu Matsukawa | Tetsu Matsukawa, Takahiro Okabe, Einoshin Suzuki, Yoichi Sato | Hierarchical Gaussian Descriptors with Application to Person
Re-Identification | 14 pages, 12 figures, 4 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Describing the color and textural information of a person image is one of the
most crucial aspects of person re-identification (re-id). In this paper, we
present novel meta-descriptors based on a hierarchical distribution of pixel
features. Although hierarchical covariance descriptors have been successfully
applied to image classification, the mean information of pixel features, which
is absent from the covariance, tends to be the major discriminative information
for person re-id. To solve this problem, we describe a local region in an image
via hierarchical Gaussian distribution in which both means and covariances are
included in their parameters. More specifically, the region is modeled as a set
of multiple Gaussian distributions in which each Gaussian represents the
appearance of a local patch. The characteristics of the set of Gaussians are
again described by another Gaussian distribution. In both steps, we embed the
parameters of the Gaussian into a point of Symmetric Positive Definite (SPD)
matrix manifold. By changing the way to handle mean information in this
embedding, we develop two hierarchical Gaussian descriptors. Additionally, we
develop feature norm normalization methods with the ability to alleviate the
biased trends that exist in the descriptors. The experimental results obtained
on five public datasets indicate that the proposed descriptors achieve
remarkably high performance on person re-id.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2017 05:16:16 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Matsukawa",
"Tetsu",
""
],
[
"Okabe",
"Takahiro",
""
],
[
"Suzuki",
"Einoshin",
""
],
[
"Sato",
"Yoichi",
""
]
] | TITLE: Hierarchical Gaussian Descriptors with Application to Person
Re-Identification
ABSTRACT: Describing the color and textural information of a person image is one of the
most crucial aspects of person re-identification (re-id). In this paper, we
present novel meta-descriptors based on a hierarchical distribution of pixel
features. Although hierarchical covariance descriptors have been successfully
applied to image classification, the mean information of pixel features, which
is absent from the covariance, tends to be the major discriminative information
for person re-id. To solve this problem, we describe a local region in an image
via hierarchical Gaussian distribution in which both means and covariances are
included in their parameters. More specifically, the region is modeled as a set
of multiple Gaussian distributions in which each Gaussian represents the
appearance of a local patch. The characteristics of the set of Gaussians are
again described by another Gaussian distribution. In both steps, we embed the
parameters of the Gaussian into a point of Symmetric Positive Definite (SPD)
matrix manifold. By changing the way to handle mean information in this
embedding, we develop two hierarchical Gaussian descriptors. Additionally, we
develop feature norm normalization methods with the ability to alleviate the
biased trends that exist in the descriptors. The experimental results obtained
on five public datasets indicate that the proposed descriptors achieve
remarkably high performance on person re-id.
| no_new_dataset | 0.948106 |
1706.04372 | Zhe Wang | Zhe Wang, Yanxin Yin, Jianping Shi, Wei Fang, Hongsheng Li and
Xiaogang Wang | Zoom-in-Net: Deep Mining Lesions for Diabetic Retinopathy Detection | accepted by MICCAI 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a convolution neural network based algorithm for simultaneously
diagnosing diabetic retinopathy and highlighting suspicious regions. Our
contributions are two-fold: 1) a network termed Zoom-in-Net which mimics the
zoom-in process of a clinician to examine the retinal images. Trained with only
image-level supervisions, Zoom-in-Net can generate attention maps which
highlight suspicious regions, and predicts the disease level accurately based
on both the whole image and its high resolution suspicious patches. 2) Only
four bounding boxes generated from the automatically learned attention maps are
enough to cover 80% of the lesions labeled by an experienced ophthalmologist,
which shows good localization ability of the attention maps. By clustering
features at high response locations on the attention maps, we discover
meaningful clusters which contain potential lesions in diabetic retinopathy.
Experiments show that our algorithm outperforms the state-of-the-art methods on
two datasets, EyePACS and Messidor.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2017 09:13:52 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Wang",
"Zhe",
""
],
[
"Yin",
"Yanxin",
""
],
[
"Shi",
"Jianping",
""
],
[
"Fang",
"Wei",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Wang",
"Xiaogang",
""
]
] | TITLE: Zoom-in-Net: Deep Mining Lesions for Diabetic Retinopathy Detection
ABSTRACT: We propose a convolution neural network based algorithm for simultaneously
diagnosing diabetic retinopathy and highlighting suspicious regions. Our
contributions are two-fold: 1) a network termed Zoom-in-Net which mimics the
zoom-in process of a clinician to examine the retinal images. Trained with only
image-level supervisions, Zoom-in-Net can generate attention maps which
highlight suspicious regions, and predicts the disease level accurately based
on both the whole image and its high resolution suspicious patches. 2) Only
four bounding boxes generated from the automatically learned attention maps are
enough to cover 80% of the lesions labeled by an experienced ophthalmologist,
which shows good localization ability of the attention maps. By clustering
features at high response locations on the attention maps, we discover
meaningful clusters which contain potential lesions in diabetic retinopathy.
Experiments show that our algorithm outperforms the state-of-the-art methods on
two datasets, EyePACS and Messidor.
| no_new_dataset | 0.948822 |
1706.04399 | Manh Duong Phung | Manh Duong Phung, Cong Hoang Quach, Tran Hiep Dinh, Quang Ha | Enhanced discrete particle swarm optimization path planning for UAV
vision-based surface inspection | null | Automation in Construction, Vol.81, pp.25-33 (2017) | 10.1016/j.autcon.2017.04.013 | null | cs.RO cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In built infrastructure monitoring, an efficient path planning algorithm is
essential for robotic inspection of large surfaces using computer vision. In
this work, we first formulate the inspection path planning problem as an
extended travelling salesman problem (TSP) in which both the coverage and
obstacle avoidance are taken into account. An enhanced discrete particle swarm
optimization (DPSO) algorithm is then proposed to solve the TSP, with
performance improvement by using deterministic initialization, random mutation,
and edge exchange. Finally, we take advantage of parallel computing to
implement the DPSO in a GPU-based framework so that the computation time can be
significantly reduced while keeping the hardware requirement unchanged. To show
the effectiveness of the proposed algorithm, experimental results are included
for datasets obtained from UAV inspection of an office building and a bridge.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2017 10:40:19 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Phung",
"Manh Duong",
""
],
[
"Quach",
"Cong Hoang",
""
],
[
"Dinh",
"Tran Hiep",
""
],
[
"Ha",
"Quang",
""
]
] | TITLE: Enhanced discrete particle swarm optimization path planning for UAV
vision-based surface inspection
ABSTRACT: In built infrastructure monitoring, an efficient path planning algorithm is
essential for robotic inspection of large surfaces using computer vision. In
this work, we first formulate the inspection path planning problem as an
extended travelling salesman problem (TSP) in which both the coverage and
obstacle avoidance are taken into account. An enhanced discrete particle swarm
optimization (DPSO) algorithm is then proposed to solve the TSP, with
performance improvement by using deterministic initialization, random mutation,
and edge exchange. Finally, we take advantage of parallel computing to
implement the DPSO in a GPU-based framework so that the computation time can be
significantly reduced while keeping the hardware requirement unchanged. To show
the effectiveness of the proposed algorithm, experimental results are included
for datasets obtained from UAV inspection of an office building and a bridge.
| no_new_dataset | 0.947721 |
1706.04472 | Prerana Mukherjee | Prerana Mukherjee, Brejesh Lall, Sarvaswa Tandon | SalProp: Salient object proposals via aggregated edge cues | 5 pages, 4 figures, accepted at ICIP 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel object proposal generation scheme by
formulating a graph-based salient edge classification framework that utilizes
the edge context. In the proposed method, we construct a Bayesian probabilistic
edge map to assign a saliency value to the edgelets by exploiting low level
edge features. A Conditional Random Field is then learned to effectively
combine these features for edge classification with object/non-object label. We
propose an objectness score for the generated windows by analyzing the salient
edge density inside the bounding box. Extensive experiments on the PASCAL VOC
2007 dataset demonstrate that the proposed method gives competitive performance
against 10 popular generic object detection techniques while using a smaller number
of proposals.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2017 13:17:42 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Mukherjee",
"Prerana",
""
],
[
"Lall",
"Brejesh",
""
],
[
"Tandon",
"Sarvaswa",
""
]
] | TITLE: SalProp: Salient object proposals via aggregated edge cues
ABSTRACT: In this paper, we propose a novel object proposal generation scheme by
formulating a graph-based salient edge classification framework that utilizes
the edge context. In the proposed method, we construct a Bayesian probabilistic
edge map to assign a saliency value to the edgelets by exploiting low level
edge features. A Conditional Random Field is then learned to effectively
combine these features for edge classification with object/non-object label. We
propose an objectness score for the generated windows by analyzing the salient
edge density inside the bounding box. Extensive experiments on the PASCAL VOC
2007 dataset demonstrate that the proposed method gives competitive performance
against 10 popular generic object detection techniques while using a smaller number
of proposals.
| no_new_dataset | 0.948106 |
1706.04473 | Kairit Sirts | Kairit Sirts, Olivier Piguet, Mark Johnson | Idea density for predicting Alzheimer's disease from transcribed speech | CoNLL 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Idea Density (ID) measures the rate at which ideas or elementary predications
are expressed in an utterance or in a text. Lower ID is found to be associated
with an increased risk of developing Alzheimer's disease (AD) (Snowdon et al.,
1996; Engelman et al., 2010). ID has been used in two different versions:
propositional idea density (PID) counts the expressed ideas and can be applied
to any text while semantic idea density (SID) counts pre-defined information
content units and is naturally more applicable to normative domains, such as
picture description tasks. In this paper, we develop DEPID, a novel
dependency-based method for computing PID, and its version DEPID-R that enables
to exclude repeating ideas---a feature characteristic to AD speech. We conduct
the first comparison of automatically extracted PID and SID in the diagnostic
classification task on two different AD datasets covering both closed-topic and
free-recall domains. While SID performs better on the normative dataset, adding
PID leads to a small but significant improvement (+1.7 F-score). On the
free-topic dataset, PID performs better than SID as expected (77.6 vs 72.3 in
F-score) but adding the features derived from the word embedding clustering
underlying the automatic SID increases the results considerably, leading to an
F-score of 84.8.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2017 13:18:08 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Sirts",
"Kairit",
""
],
[
"Piguet",
"Olivier",
""
],
[
"Johnson",
"Mark",
""
]
] | TITLE: Idea density for predicting Alzheimer's disease from transcribed speech
ABSTRACT: Idea Density (ID) measures the rate at which ideas or elementary predications
are expressed in an utterance or in a text. Lower ID is found to be associated
with an increased risk of developing Alzheimer's disease (AD) (Snowdon et al.,
1996; Engelman et al., 2010). ID has been used in two different versions:
propositional idea density (PID) counts the expressed ideas and can be applied
to any text while semantic idea density (SID) counts pre-defined information
content units and is naturally more applicable to normative domains, such as
picture description tasks. In this paper, we develop DEPID, a novel
dependency-based method for computing PID, and its version DEPID-R that enables
to exclude repeating ideas---a feature characteristic to AD speech. We conduct
the first comparison of automatically extracted PID and SID in the diagnostic
classification task on two different AD datasets covering both closed-topic and
free-recall domains. While SID performs better on the normative dataset, adding
PID leads to a small but significant improvement (+1.7 F-score). On the
free-topic dataset, PID performs better than SID as expected (77.6 vs 72.3 in
F-score) but adding the features derived from the word embedding clustering
underlying the automatic SID increases the results considerably, leading to an
F-score of 84.8.
| no_new_dataset | 0.951142 |
1706.04488 | Manuk Akopyan | Manuk Akopyan (1), and Eshsou Khashba (1) ((1) Institute for System
Programming) | Large-Scale YouTube-8M Video Understanding with Deep Neural Networks | 6 pages, 5 figures, 3 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video classification problem has been studied many years. The success of
Convolutional Neural Networks (CNN) in image recognition tasks gives a powerful
incentive for researchers to create more advanced video classification
approaches. As video has temporal content, Long Short-Term Memory (LSTM)
networks become a handy tool for modeling long-term temporal clues. Both
approaches need a large dataset of input data. In this paper, three models are
provided to address video classification using the recently announced
YouTube-8M large-scale dataset. The first model is based on a frame pooling
approach. Two other models are based on LSTM networks. A Mixture of Experts
intermediate layer is used in the third model, allowing model capacity to be
increased without dramatically increasing computation. A set of experiments for
handling imbalanced training data has been conducted.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2017 13:38:43 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Akopyan",
"Manuk",
""
],
[
"Khashba",
"Eshsou",
""
]
] | TITLE: Large-Scale YouTube-8M Video Understanding with Deep Neural Networks
ABSTRACT: Video classification problem has been studied many years. The success of
Convolutional Neural Networks (CNN) in image recognition tasks gives a powerful
incentive for researchers to create more advanced video classification
approaches. As video has temporal content, Long Short-Term Memory (LSTM)
networks become a handy tool for modeling long-term temporal clues. Both
approaches need a large dataset of input data. In this paper, three models are
provided to address video classification using the recently announced
YouTube-8M large-scale dataset. The first model is based on a frame pooling
approach. Two other models are based on LSTM networks. A Mixture of Experts
intermediate layer is used in the third model, allowing model capacity to be
increased without dramatically increasing computation. A set of experiments for
handling imbalanced training data has been conducted.
| no_new_dataset | 0.94868 |
1706.04525 | Giulio Siracusano Dr. | Giulio Siracusano, Aurelio La Corte, Michele Gaeta, Giovanni Finocchio | A data-Oriented based Self-Calibration And Robust chemical-shift
encoding by using clusterization (OSCAR) - Theory, Optimization and Clinical
Validation in Neuromuscular disorders | 29 pages and 11 images as supplemental materials | null | null | null | physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-echo Chemical Shift Encoded methods for Fat-Water quantification are
growing in clinical use due to their ability to estimate and correct some
confounding effects. State of the art CSE water-fat separation approaches rely
on a multi-peak fat spectrum with peak frequencies and relative amplitudes kept
constant over the entire MRI dataset. However, the latter approximation
introduces a systematic error in fat percentage quantification in patients
where the differences in lipid chemical composition are significant, such as
for neuromuscular disorders, because of the spatial dependence of the peak
amplitudes. The present work aims to overcome this limitation by taking
advantage of an unsupervised clusterization-based approach offering a reliable
criterion to carry out a data-driven segmentation of the input MRI dataset into
multiple regions. The idea is to apply the clusterization for partitioning the
multi-echo MRI dataset into a finite number of clusters whose internal voxels
exhibit similar distance metrics. For each cluster, the fat spectral
properties are estimated with a self-calibration technique, and finally
the fat-water percentages are computed via a non-linear fitting. The method is
tested in ad-hoc and public datasets. The overall performance and results in
terms of fitting accuracy, robustness and reproducibility are compared with
other state-of-the-art CSE algorithms. This approach provides a more accurate
and reproducible identification of chemical species, hence fat-water
separation, when compared with other calibrated and non-calibrated approaches.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2017 15:02:44 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Siracusano",
"Giulio",
""
],
[
"La Corte",
"Aurelio",
""
],
[
"Gaeta",
"Michele",
""
],
[
"Finocchio",
"Giovanni",
""
]
] | TITLE: A data-Oriented based Self-Calibration And Robust chemical-shift
encoding by using clusterization (OSCAR) - Theory, Optimization and Clinical
Validation in Neuromuscular disorders
ABSTRACT: Multi-echo Chemical Shift Encoded methods for Fat-Water quantification are
growing in clinical use due to their ability to estimate and correct some
confounding effects. State of the art CSE water-fat separation approaches rely
on a multi-peak fat spectrum with peak frequencies and relative amplitudes kept
constant over the entire MRI dataset. However, the latter approximation
introduces a systematic error in fat percentage quantification in patients
where the differences in lipid chemical composition are significant, such as
for neuromuscular disorders, because of the spatial dependence of the peak
amplitudes. The present work aims to overcome this limitation by taking
advantage of an unsupervised clusterization-based approach offering a reliable
criterion to carry out a data-driven segmentation of the input MRI dataset into
multiple regions. The idea is to apply the clusterization for partitioning the
multi-echo MRI dataset into a finite number of clusters whose internal voxels
exhibit similar distance metrics. For each cluster, the fat spectral
properties are estimated with a self-calibration technique, and finally
the fat-water percentages are computed via a non-linear fitting. The method is
tested in ad-hoc and public datasets. The overall performance and results in
terms of fitting accuracy, robustness and reproducibility are compared with
other state-of-the-art CSE algorithms. This approach provides a more accurate
and reproducible identification of chemical species, hence fat-water
separation, when compared with other calibrated and non-calibrated approaches.
| no_new_dataset | 0.951774 |
1706.04572 | Miha Skalic | Miha Skalic, Marcin Pekalski, Xingguo E. Pan | Deep Learning Methods for Efficient Large Scale Video Labeling | 7 pages, 5 tables, 1 figure | null | null | null | stat.ML cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present a solution to "Google Cloud and YouTube-8M Video Understanding
Challenge" that ranked 5th place. The proposed model is an ensemble of three
model families, two frame level and one video level. The training was performed
on an augmented dataset, with cross-validation.
| [
{
"version": "v1",
"created": "Wed, 14 Jun 2017 16:24:18 GMT"
}
] | 2017-06-15T00:00:00 | [
[
"Skalic",
"Miha",
""
],
[
"Pekalski",
"Marcin",
""
],
[
"Pan",
"Xingguo E.",
""
]
] | TITLE: Deep Learning Methods for Efficient Large Scale Video Labeling
ABSTRACT: We present a solution to "Google Cloud and YouTube-8M Video Understanding
Challenge" that ranked 5th place. The proposed model is an ensemble of three
model families, two frame level and one video level. The training was performed
on an augmented dataset, with cross-validation.
| no_new_dataset | 0.952662 |
1609.06377 | Reza Mahjourian | Reza Mahjourian, Martin Wicke, Anelia Angelova | Geometry-Based Next Frame Prediction from Monocular Video | To appear in 2017 IEEE Intelligent Vehicles Symposium | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of next frame prediction from video input. A
recurrent convolutional neural network is trained to predict depth from
monocular video input, which, along with the current video image and the camera
trajectory, can then be used to compute the next frame. Unlike prior next-frame
prediction approaches, we take advantage of the scene geometry and use the
predicted depth for generating the next frame prediction. Our approach can
produce rich next frame predictions which include depth information attached to
each pixel. Another novel aspect of our approach is that it predicts depth from
a sequence of images (e.g. in a video), rather than from a single still image.
We evaluate the proposed approach on the KITTI dataset, a standard dataset for
benchmarking tasks relevant to autonomous driving. The proposed method produces
results which are visually and numerically superior to existing methods that
directly predict the next frame. We show that the accuracy of depth prediction
improves as more prior frames are considered.
| [
{
"version": "v1",
"created": "Tue, 20 Sep 2016 22:49:34 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2017 21:52:06 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Mahjourian",
"Reza",
""
],
[
"Wicke",
"Martin",
""
],
[
"Angelova",
"Anelia",
""
]
] | TITLE: Geometry-Based Next Frame Prediction from Monocular Video
ABSTRACT: We consider the problem of next frame prediction from video input. A
recurrent convolutional neural network is trained to predict depth from
monocular video input, which, along with the current video image and the camera
trajectory, can then be used to compute the next frame. Unlike prior next-frame
prediction approaches, we take advantage of the scene geometry and use the
predicted depth for generating the next frame prediction. Our approach can
produce rich next frame predictions which include depth information attached to
each pixel. Another novel aspect of our approach is that it predicts depth from
a sequence of images (e.g. in a video), rather than from a single still image.
We evaluate the proposed approach on the KITTI dataset, a standard dataset for
benchmarking tasks relevant to autonomous driving. The proposed method produces
results which are visually and numerically superior to existing methods that
directly predict the next frame. We show that the accuracy of depth prediction
improves as more prior frames are considered.
| no_new_dataset | 0.952397 |
1612.09548 | Zhenhua Feng | Zhen-Hua Feng, Josef Kittler, William Christmas and Xiao-Jun Wu | A Unified Tensor-based Active Appearance Face Model | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Appearance variations result in many difficulties in face image analysis. To
deal with this challenge, we present a Unified Tensor-based Active Appearance
Model (UT-AAM) for jointly modelling the geometry and texture information of 2D
faces. For each type of face information, namely shape and texture, we
construct a unified tensor model capturing all relevant appearance variations.
This contrasts with the variation-specific models of the classical tensor AAM.
To achieve the unification across pose variations, a strategy for dealing with
self-occluded faces is proposed to obtain consistent shape and texture
representations of pose-varied faces. In addition, our UT-AAM is capable of
constructing the model from an incomplete training dataset, using tensor
completion methods. Last, we use an effective cascaded-regression-based method
for UT-AAM fitting. With these advancements, the utility of UT-AAM in practice
is considerably enhanced. As an example, we demonstrate the improvements in
training facial landmark detectors through the use of UT-AAM to synthesise a
large number of virtual samples. Experimental results obtained using the
Multi-PIE and 300-W face datasets demonstrate the merits of the proposed
approach.
| [
{
"version": "v1",
"created": "Fri, 30 Dec 2016 18:08:16 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2017 16:33:25 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Feng",
"Zhen-Hua",
""
],
[
"Kittler",
"Josef",
""
],
[
"Christmas",
"William",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] | TITLE: A Unified Tensor-based Active Appearance Face Model
ABSTRACT: Appearance variations result in many difficulties in face image analysis. To
deal with this challenge, we present a Unified Tensor-based Active Appearance
Model (UT-AAM) for jointly modelling the geometry and texture information of 2D
faces. For each type of face information, namely shape and texture, we
construct a unified tensor model capturing all relevant appearance variations.
This contrasts with the variation-specific models of the classical tensor AAM.
To achieve the unification across pose variations, a strategy for dealing with
self-occluded faces is proposed to obtain consistent shape and texture
representations of pose-varied faces. In addition, our UT-AAM is capable of
constructing the model from an incomplete training dataset, using tensor
completion methods. Last, we use an effective cascaded-regression-based method
for UT-AAM fitting. With these advancements, the utility of UT-AAM in practice
is considerably enhanced. As an example, we demonstrate the improvements in
training facial landmark detectors through the use of UT-AAM to synthesise a
large number of virtual samples. Experimental results obtained using the
Multi-PIE and 300-W face datasets demonstrate the merits of the proposed
approach.
| no_new_dataset | 0.948155 |
1701.00193 | Hao Liu | Hao Liu, Zequn Jie, Karlekar Jayashree, Meibin Qi, Jianguo Jiang,
Shuicheng Yan, Jiashi Feng | Video-based Person Re-identification with Accumulative Motion Context | accepted by TCSVT | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video based person re-identification plays a central role in realistic
security and video surveillance. In this paper we propose a novel Accumulative
Motion Context (AMOC) network for addressing this important problem, which
effectively exploits the long-range motion context for robustly identifying the
same person under challenging conditions. Given a video sequence of the same or
different persons, the proposed AMOC network jointly learns appearance
representation and motion context from a collection of adjacent frames using a
two-stream convolutional architecture. Then AMOC accumulates clues from motion
context by recurrent aggregation, allowing effective information flow among
adjacent frames and capturing dynamic gist of the persons. The architecture of
AMOC is end-to-end trainable and thus motion context can be adapted to
complement appearance clues under unfavorable conditions (e.g. occlusions).
Extensive experiments are conducted on three public benchmark datasets, i.e.,
the iLIDS-VID, PRID-2011 and MARS datasets, to investigate the performance of
AMOC. The experimental results demonstrate that the proposed AMOC network
outperforms state-of-the-arts for video-based re-identification significantly
and confirm the advantage of exploiting long-range motion context for video
based person re-identification, validating our motivation evidently.
| [
{
"version": "v1",
"created": "Sun, 1 Jan 2017 04:20:20 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2017 03:27:01 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Liu",
"Hao",
""
],
[
"Jie",
"Zequn",
""
],
[
"Jayashree",
"Karlekar",
""
],
[
"Qi",
"Meibin",
""
],
[
"Jiang",
"Jianguo",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Feng",
"Jiashi",
""
]
] | TITLE: Video-based Person Re-identification with Accumulative Motion Context
ABSTRACT: Video based person re-identification plays a central role in realistic
security and video surveillance. In this paper we propose a novel Accumulative
Motion Context (AMOC) network for addressing this important problem, which
effectively exploits the long-range motion context for robustly identifying the
same person under challenging conditions. Given a video sequence of the same or
different persons, the proposed AMOC network jointly learns appearance
representation and motion context from a collection of adjacent frames using a
two-stream convolutional architecture. Then AMOC accumulates clues from motion
context by recurrent aggregation, allowing effective information flow among
adjacent frames and capturing dynamic gist of the persons. The architecture of
AMOC is end-to-end trainable and thus motion context can be adapted to
complement appearance clues under unfavorable conditions (e.g. occlusions).
Extensive experiments are conducted on three public benchmark datasets, i.e.,
the iLIDS-VID, PRID-2011 and MARS datasets, to investigate the performance of
AMOC. The experimental results demonstrate that the proposed AMOC network
outperforms state-of-the-arts for video-based re-identification significantly
and confirm the advantage of exploiting long-range motion context for video
based person re-identification, validating our motivation evidently.
| no_new_dataset | 0.951863 |
1701.02405 | Juan Echeverria | Juan Echeverr\'ia, Shi Zhou | Discovery, Retrieval, and Analysis of 'Star Wars' botnet in Twitter | Accepted for publication at ASONAM 2017 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is known that many Twitter users are bots, which are accounts controlled
and sometimes created by computers. Twitter bots can send spam tweets,
manipulate public opinion and be used for online fraud. Here we report the
discovery, retrieval, and analysis of the `Star Wars' botnet in Twitter, which
consists of more than 350,000 bots tweeting random quotations exclusively from
Star Wars novels.
The botnet contains a single type of bot, showing exactly the same properties
throughout the botnet. It is unusually large, many times larger than other
available datasets. It provides a valuable source of ground truth for research
on Twitter bots. We analysed and revealed rich details on how the botnet was
designed and created. As of this writing, the Star Wars bots are still alive in
Twitter. They have survived since their creation in 2013, despite the
increasing efforts in recent years to detect and remove Twitter bots. We also
reflect on the `unconventional' way in which we discovered the Star Wars bots,
and discuss the current problems and future challenges of Twitter bot
detection.
| [
{
"version": "v1",
"created": "Tue, 10 Jan 2017 01:56:03 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Apr 2017 04:14:28 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Jun 2017 13:47:08 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Echeverría",
"Juan",
""
],
[
"Zhou",
"Shi",
""
]
] | TITLE: Discovery, Retrieval, and Analysis of 'Star Wars' botnet in Twitter
ABSTRACT: It is known that many Twitter users are bots, which are accounts controlled
and sometimes created by computers. Twitter bots can send spam tweets,
manipulate public opinion and be used for online fraud. Here we report the
discovery, retrieval, and analysis of the `Star Wars' botnet in Twitter, which
consists of more than 350,000 bots tweeting random quotations exclusively from
Star Wars novels.
The botnet contains a single type of bot, showing exactly the same properties
throughout the botnet. It is unusually large, many times larger than other
available datasets. It provides a valuable source of ground truth for research
on Twitter bots. We analysed and revealed rich details on how the botnet was
designed and created. As of this writing, the Star Wars bots are still alive in
Twitter. They have survived since their creation in 2013, despite the
increasing efforts in recent years to detect and remove Twitter bots. We also
reflect on the `unconventional' way in which we discovered the Star Wars bots,
and discuss the current problems and future challenges of Twitter bot
detection.
| no_new_dataset | 0.928018 |
1703.06337 | Andriy Miranskyy | Mefta Sadat and Ayse Basar Bener and Andriy V. Miranskyy | Rediscovery Datasets: Connecting Duplicate Reports | null | Proceedings of the 14th International Conference on Mining
Software Repositories (MSR '17). IEEE Press, Piscataway, NJ, USA, 527-530,
2017 | 10.1109/MSR.2017.50 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The same defect can be rediscovered by multiple clients, causing unplanned
outages and leading to reduced customer satisfaction. In the case of popular
open source software, high volume of defects is reported on a regular basis. A
large number of these reports are actually duplicates / rediscoveries of each
other. Researchers have analyzed the factors related to the content of
duplicate defect reports in the past. However, some of the other potentially
important factors, such as the inter-relationships among duplicate defect
reports, are not readily available in defect tracking systems such as Bugzilla.
This information may speed up bug fixing, enable efficient triaging, improve
customer profiles, etc.
In this paper, we present three defect rediscovery datasets mined from
Bugzilla. The datasets capture data for three groups of open source software
projects: Apache, Eclipse, and KDE. The datasets contain information about
approximately 914 thousand defect reports over a period of 18 years
(1999-2017) to capture the inter-relationships among duplicate defects. We
believe that sharing these data with the community will help researchers and
practitioners to better understand the nature of defect rediscovery and enhance
the analysis of defect reports.
| [
{
"version": "v1",
"created": "Sat, 18 Mar 2017 19:01:38 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Sadat",
"Mefta",
""
],
[
"Bener",
"Ayse Basar",
""
],
[
"Miranskyy",
"Andriy V.",
""
]
] | TITLE: Rediscovery Datasets: Connecting Duplicate Reports
ABSTRACT: The same defect can be rediscovered by multiple clients, causing unplanned
outages and leading to reduced customer satisfaction. In the case of popular
open source software, high volume of defects is reported on a regular basis. A
large number of these reports are actually duplicates / rediscoveries of each
other. Researchers have analyzed the factors related to the content of
duplicate defect reports in the past. However, some of the other potentially
important factors, such as the inter-relationships among duplicate defect
reports, are not readily available in defect tracking systems such as Bugzilla.
This information may speed up bug fixing, enable efficient triaging, improve
customer profiles, etc.
In this paper, we present three defect rediscovery datasets mined from
Bugzilla. The datasets capture data for three groups of open source software
projects: Apache, Eclipse, and KDE. The datasets contain information about
approximately 914 thousand defect reports over a period of 18 years
(1999-2017) to capture the inter-relationships among duplicate defects. We
believe that sharing these data with the community will help researchers and
practitioners to better understand the nature of defect rediscovery and enhance
the analysis of defect reports.
| new_dataset | 0.960768 |
1704.00551 | Shuai Zhang | Shuai Zhang, Lina Yao and Xiwei Xu | AutoSVD++: An Efficient Hybrid Collaborative Filtering Model via
Contractive Auto-encoders | 4 pages, 3 figures | null | 10.1145/3077136.3080689 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative filtering (CF) has been successfully used to provide users with
personalized products and services. However, dealing with the increasing
sparseness of user-item matrix still remains a challenge. To tackle such issue,
hybrid CF such as combining with content based filtering and leveraging side
information of users and items has been extensively studied to enhance
performance. However, most of these approaches depend on hand-crafted feature
engineering, which are usually noise-prone and biased by different feature
extraction and selection schemes. In this paper, we propose a new hybrid model
by generalizing contractive auto-encoder paradigm into matrix factorization
framework with good scalability and computational efficiency, which jointly
model content information as representations of effectiveness and compactness,
and leverage implicit user feedback to make accurate recommendations. Extensive
experiments conducted over three large scale real datasets indicate the
proposed approach outperforms the compared methods for item recommendation.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2017 12:39:25 GMT"
},
{
"version": "v2",
"created": "Tue, 2 May 2017 00:17:14 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Jun 2017 01:01:30 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Zhang",
"Shuai",
""
],
[
"Yao",
"Lina",
""
],
[
"Xu",
"Xiwei",
""
]
] | TITLE: AutoSVD++: An Efficient Hybrid Collaborative Filtering Model via
Contractive Auto-encoders
ABSTRACT: Collaborative filtering (CF) has been successfully used to provide users with
personalized products and services. However, dealing with the increasing
sparseness of the user-item matrix still remains a challenge. To tackle this issue,
hybrid CF such as combining with content based filtering and leveraging side
information of users and items has been extensively studied to enhance
performance. However, most of these approaches depend on hand-crafted feature
engineering, which are usually noise-prone and biased by different feature
extraction and selection schemes. In this paper, we propose a new hybrid model
by generalizing contractive auto-encoder paradigm into matrix factorization
framework with good scalability and computational efficiency, which jointly
model content information as representations of effectiveness and compactness,
and leverage implicit user feedback to make accurate recommendations. Extensive
experiments conducted over three large scale real datasets indicate the
proposed approach outperforms the compared methods for item recommendation.
| no_new_dataset | 0.943191 |
1704.01212 | Justin Gilmer | Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals,
George E. Dahl | Neural Message Passing for Quantum Chemistry | 14 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised learning on molecules has incredible potential to be useful in
chemistry, drug discovery, and materials science. Luckily, several promising
and closely related neural network models invariant to molecular symmetries
have already been described in the literature. These models learn a message
passing algorithm and aggregation procedure to compute a function of their
entire input graph. At this point, the next step is to find a particularly
effective variant of this general approach and apply it to chemical prediction
benchmarks until we either solve them or reach the limits of the approach. In
this paper, we reformulate existing models into a single common framework we
call Message Passing Neural Networks (MPNNs) and explore additional novel
variations within this framework. Using MPNNs we demonstrate state of the art
results on an important molecular property prediction benchmark; these results
are strong enough that we believe future work should focus on datasets with
larger molecules or more accurate ground truth labels.
| [
{
"version": "v1",
"created": "Tue, 4 Apr 2017 23:00:44 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2017 20:52:56 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Gilmer",
"Justin",
""
],
[
"Schoenholz",
"Samuel S.",
""
],
[
"Riley",
"Patrick F.",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Dahl",
"George E.",
""
]
] | TITLE: Neural Message Passing for Quantum Chemistry
ABSTRACT: Supervised learning on molecules has incredible potential to be useful in
chemistry, drug discovery, and materials science. Luckily, several promising
and closely related neural network models invariant to molecular symmetries
have already been described in the literature. These models learn a message
passing algorithm and aggregation procedure to compute a function of their
entire input graph. At this point, the next step is to find a particularly
effective variant of this general approach and apply it to chemical prediction
benchmarks until we either solve them or reach the limits of the approach. In
this paper, we reformulate existing models into a single common framework we
call Message Passing Neural Networks (MPNNs) and explore additional novel
variations within this framework. Using MPNNs we demonstrate state of the art
results on an important molecular property prediction benchmark; these results
are strong enough that we believe future work should focus on datasets with
larger molecules or more accurate ground truth labels.
| no_new_dataset | 0.947088 |
1704.05645 | Bo Li | Bo Li, Mingyi He, Xuelian Cheng, Yucheng Chen, Yuchao Dai | Skeleton based action recognition using translation-scale invariant
image mapping and multi-scale deep cnn | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an image classification based approach for skeleton-based
video action recognition problem. Firstly, a dataset-independent
translation-scale invariant image mapping method is proposed, which transforms
the skeleton videos to colour images, named skeleton-images. Secondly, a
multi-scale deep convolutional neural network (CNN) architecture is proposed
which can be built and fine-tuned on powerful pre-trained CNNs, e.g., AlexNet,
VGGNet, ResNet, etc. Even though the skeleton-images are very different from
natural images, the fine-tuning strategy still works well. Lastly, we show that
our method can also work well on 2D skeleton video data. We achieve
state-of-the-art results on popular benchmark datasets, e.g., NTU RGB+D,
UTD-MHAD, MSRC-12, and G3D. Especially on the largest and most challenging NTU
RGB+D, UTD-MHAD, and MSRC-12 datasets, our method outperforms other methods by
a large margin, which proves the efficacy of the proposed method.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 08:30:19 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jun 2017 01:59:13 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Li",
"Bo",
""
],
[
"He",
"Mingyi",
""
],
[
"Cheng",
"Xuelian",
""
],
[
"Chen",
"Yucheng",
""
],
[
"Dai",
"Yuchao",
""
]
] | TITLE: Skeleton based action recognition using translation-scale invariant
image mapping and multi-scale deep cnn
ABSTRACT: This paper presents an image classification based approach for skeleton-based
video action recognition problem. Firstly, a dataset-independent
translation-scale invariant image mapping method is proposed, which transforms
the skeleton videos to colour images, named skeleton-images. Secondly, a
multi-scale deep convolutional neural network (CNN) architecture is proposed
which can be built and fine-tuned on powerful pre-trained CNNs, e.g., AlexNet,
VGGNet, ResNet, etc. Even though the skeleton-images are very different from
natural images, the fine-tuning strategy still works well. Lastly, we show that
our method can also work well on 2D skeleton video data. We achieve
state-of-the-art results on popular benchmark datasets, e.g., NTU RGB+D,
UTD-MHAD, MSRC-12, and G3D. Especially on the largest and most challenging NTU
RGB+D, UTD-MHAD, and MSRC-12 datasets, our method outperforms other methods by
a large margin, which proves the efficacy of the proposed method.
| no_new_dataset | 0.949763 |
1706.02777 | R.Stuart Geiger | R. Stuart Geiger | Summary Analysis of the 2017 GitHub Open Source Survey | 58 pages | null | 10.17605/OSF.IO/ENRQ5 | null | cs.CY cs.SE cs.SI | http://creativecommons.org/licenses/by/4.0/ | This report is a high-level summary analysis of the 2017 GitHub Open Source
Survey dataset, presenting frequency counts, proportions, and frequency or
proportion bar plots for every question asked in the survey.
| [
{
"version": "v1",
"created": "Thu, 8 Jun 2017 21:29:00 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Geiger",
"R. Stuart",
""
]
] | TITLE: Summary Analysis of the 2017 GitHub Open Source Survey
ABSTRACT: This report is a high-level summary analysis of the 2017 GitHub Open Source
Survey dataset, presenting frequency counts, proportions, and frequency or
proportion bar plots for every question asked in the survey.
| no_new_dataset | 0.96738 |
1706.03863 | Kwang In Kim | James Tompkin, Kwang In Kim, Hanspeter Pfister and Christian Theobalt | Criteria Sliders: Learning Continuous Database Criteria via Interactive
Ranking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large databases are often organized by hand-labeled metadata, or criteria,
which are expensive to collect. We can use unsupervised learning to model
database variation, but these models are often high dimensional, complex to
parameterize, or require expert knowledge. We learn low-dimensional continuous
criteria via interactive ranking, so that the novice user need only describe
the relative ordering of examples. This is formed as semi-supervised label
propagation in which we maximize the information gained from a limited number
of examples. Further, we actively suggest data points to the user to rank in a
more informative way than existing work. Our efficient approach allows users to
interactively organize thousands of data points along 1D and 2D continuous
sliders. We experiment with datasets of imagery and geometry to demonstrate
that our tool is useful for quickly assessing and organizing the content of
large databases.
| [
{
"version": "v1",
"created": "Mon, 12 Jun 2017 21:59:26 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Tompkin",
"James",
""
],
[
"Kim",
"Kwang In",
""
],
[
"Pfister",
"Hanspeter",
""
],
[
"Theobalt",
"Christian",
""
]
] | TITLE: Criteria Sliders: Learning Continuous Database Criteria via Interactive
Ranking
ABSTRACT: Large databases are often organized by hand-labeled metadata, or criteria,
which are expensive to collect. We can use unsupervised learning to model
database variation, but these models are often high dimensional, complex to
parameterize, or require expert knowledge. We learn low-dimensional continuous
criteria via interactive ranking, so that the novice user need only describe
the relative ordering of examples. This is formed as semi-supervised label
propagation in which we maximize the information gained from a limited number
of examples. Further, we actively suggest data points to the user to rank in a
more informative way than existing work. Our efficient approach allows users to
interactively organize thousands of data points along 1D and 2D continuous
sliders. We experiment with datasets of imagery and geometry to demonstrate
that our tool is useful for quickly assessing and organizing the content of
large databases.
| no_new_dataset | 0.953708 |
1706.03946 | Isabelle Augenstein | Ed Collins and Isabelle Augenstein and Sebastian Riedel | A Supervised Approach to Extractive Summarisation of Scientific Papers | 11 pages, 6 figures | null | null | null | cs.CL cs.AI cs.NE stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic summarisation is a popular approach to reduce a document to its
main arguments. Recent research in the area has focused on neural approaches to
summarisation, which can be very data-hungry. However, few large datasets exist
and none for the traditionally popular domain of scientific publications, which
opens up challenging research avenues centered on encoding large, complex
documents. In this paper, we introduce a new dataset for summarisation of
computer science publications by exploiting a large resource of author provided
summaries and show straightforward ways of extending it further. We develop
models on the dataset making use of both neural sentence encoding and
traditionally used summarisation features and show that models which encode
sentences as well as their local and global context perform best, significantly
outperforming well-established baseline methods.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 08:15:25 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Collins",
"Ed",
""
],
[
"Augenstein",
"Isabelle",
""
],
[
"Riedel",
"Sebastian",
""
]
] | TITLE: A Supervised Approach to Extractive Summarisation of Scientific Papers
ABSTRACT: Automatic summarisation is a popular approach to reduce a document to its
main arguments. Recent research in the area has focused on neural approaches to
summarisation, which can be very data-hungry. However, few large datasets exist
and none for the traditionally popular domain of scientific publications, which
opens up challenging research avenues centered on encoding large, complex
documents. In this paper, we introduce a new dataset for summarisation of
computer science publications by exploiting a large resource of author provided
summaries and show straightforward ways of extending it further. We develop
models on the dataset making use of both neural sentence encoding and
traditionally used summarisation features and show that models which encode
sentences as well as their local and global context perform best, significantly
outperforming well-established baseline methods.
| new_dataset | 0.957873 |
1706.04026 | Panayiotis Christodoulou | Sotirios Chatzis, Panayiotis Christodoulou, Andreas S. Andreou | Recurrent Latent Variable Networks for Session-Based Recommendation | null | null | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we attempt to ameliorate the impact of data sparsity in the
context of session-based recommendation. Specifically, we seek to devise a
machine learning mechanism capable of extracting subtle and complex underlying
temporal dynamics in the observed session data, so as to inform the
recommendation algorithm. To this end, we improve upon systems that utilize
deep learning techniques with recurrently connected units; we do so by adopting
concepts from the field of Bayesian statistics, namely variational inference.
Our proposed approach consists in treating the network recurrent units as
stochastic latent variables with a prior distribution imposed over them. On
this basis, we proceed to infer corresponding posteriors; these can be used for
prediction and recommendation generation, in a way that accounts for the
uncertainty in the available sparse training data. To allow for our approach to
easily scale to large real-world datasets, we perform inference under an
approximate amortized variational inference (AVI) setup, whereby the learned
posteriors are parameterized via (conventional) neural networks. We perform an
extensive experimental evaluation of our approach using challenging benchmark
datasets, and illustrate its superiority over existing state-of-the-art
techniques.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 12:35:56 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Chatzis",
"Sotirios",
""
],
[
"Christodoulou",
"Panayiotis",
""
],
[
"Andreou",
"Andreas S.",
""
]
] | TITLE: Recurrent Latent Variable Networks for Session-Based Recommendation
ABSTRACT: In this work, we attempt to ameliorate the impact of data sparsity in the
context of session-based recommendation. Specifically, we seek to devise a
machine learning mechanism capable of extracting subtle and complex underlying
temporal dynamics in the observed session data, so as to inform the
recommendation algorithm. To this end, we improve upon systems that utilize
deep learning techniques with recurrently connected units; we do so by adopting
concepts from the field of Bayesian statistics, namely variational inference.
Our proposed approach consists in treating the network recurrent units as
stochastic latent variables with a prior distribution imposed over them. On
this basis, we proceed to infer corresponding posteriors; these can be used for
prediction and recommendation generation, in a way that accounts for the
uncertainty in the available sparse training data. To allow for our approach to
easily scale to large real-world datasets, we perform inference under an
approximate amortized variational inference (AVI) setup, whereby the learned
posteriors are parameterized via (conventional) neural networks. We perform an
extensive experimental evaluation of our approach using challenging benchmark
datasets, and illustrate its superiority over existing state-of-the-art
techniques.
| no_new_dataset | 0.943867 |
1706.04047 | Mikko Rinne | Mikko Rinne, Mehrdad Bagheri, Tuukka Tolvanen | Automatic Recognition of Public Transport Trips from Mobile Device
Sensor Data and Transport Infrastructure Information | 22 pages, 7 figures, 10 tables | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic detection of public transport (PT) usage has important applications
for intelligent transport systems. It is crucial for understanding the
commuting habits of passengers at large and over longer periods of time. It
also enables compilation of door-to-door trip chains, which in turn can assist
public transport providers in improved optimisation of their transport
networks. In addition, predictions of future trips based on past activities can
be used to assist passengers with targeted information. This article documents
a dataset compiled from a day of active commuting by a small group of people
using different means of PT in the Helsinki region. Mobility data was collected
by two means: (a) manually written details of each PT trip during the day, and
(b) measurements using sensors of travellers' mobile devices. The manual log is
used to cross-check and verify the results derived from automatic measurements.
The mobile client application used for our data collection provides a fully
automated measurement service and implements a set of algorithms for decreasing
battery consumption. The live locations of some of the public transport
vehicles in the region were made available by the local transport provider and
sampled with a 30-second interval. The stopping times of local trains at
stations during the day were retrieved from the railway operator. The static
timetable information of all the PT vehicles operating in the area is made
available by the transport provider, and linked to our dataset. The challenge
is to correctly detect as many manually logged trips as possible by using the
automatically collected data. This paper includes an analysis of challenges due
to missing or partially sampled information in the data, and initial results
from automatic recognition using a set of algorithms. Improvement of correct
recognitions is left as an ongoing challenge.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 13:19:43 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Rinne",
"Mikko",
""
],
[
"Bagheri",
"Mehrdad",
""
],
[
"Tolvanen",
"Tuukka",
""
]
] | TITLE: Automatic Recognition of Public Transport Trips from Mobile Device
Sensor Data and Transport Infrastructure Information
ABSTRACT: Automatic detection of public transport (PT) usage has important applications
for intelligent transport systems. It is crucial for understanding the
commuting habits of passengers at large and over longer periods of time. It
also enables compilation of door-to-door trip chains, which in turn can assist
public transport providers in improved optimisation of their transport
networks. In addition, predictions of future trips based on past activities can
be used to assist passengers with targeted information. This article documents
a dataset compiled from a day of active commuting by a small group of people
using different means of PT in the Helsinki region. Mobility data was collected
by two means: (a) manually written details of each PT trip during the day, and
(b) measurements using sensors of travellers' mobile devices. The manual log is
used to cross-check and verify the results derived from automatic measurements.
The mobile client application used for our data collection provides a fully
automated measurement service and implements a set of algorithms for decreasing
battery consumption. The live locations of some of the public transport
vehicles in the region were made available by the local transport provider and
sampled with a 30-second interval. The stopping times of local trains at
stations during the day were retrieved from the railway operator. The static
timetable information of all the PT vehicles operating in the area is made
available by the transport provider, and linked to our dataset. The challenge
is to correctly detect as many manually logged trips as possible by using the
automatically collected data. This paper includes an analysis of challenges due
to missing or partially sampled information in the data, and initial results
from automatic recognition using a set of algorithms. Improvement of correct
recognitions is left as an ongoing challenge.
| no_new_dataset | 0.834879 |
1706.04052 | Jinzhuo Wang | Jinzhuo Wang, Wenmin Wang, Ronggang Wang, Wen Gao | Beyond Monte Carlo Tree Search: Playing Go with Deep Alternative Neural
Network and Long-Term Evaluation | AAAI 2017 | null | null | null | cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monte Carlo tree search (MCTS) is extremely popular in computer Go which
determines each action by enormous simulations in a broad and deep search tree.
However, human experts select most actions by pattern analysis and careful
evaluation rather than brute-force search of millions of future interactions. In this
paper, we propose a computer Go system that follows experts' way of thinking and
playing. Our system consists of two parts. The first part is a novel deep
alternative neural network (DANN) used to generate candidates of next move.
Compared with existing deep convolutional neural network (DCNN), DANN inserts
a recurrent layer after each convolutional layer and stacks them in an
alternative manner. We show that such a setting can preserve more context of local
features and their evolution, which is beneficial for move prediction. The
second part is a long-term evaluation (LTE) module used to provide a reliable
evaluation of candidates rather than a single probability from move predictor.
This is consistent with human experts' nature of play, since they can foresee
tens of steps to give an accurate estimation of candidates. In our system, for
each candidate, LTE calculates a cumulative reward after several future
interactions when local variations are settled. Combining criteria from the two
parts, our system determines the optimal choice of next move. For more
comprehensive experiments, we introduce a new professional Go dataset (PGD),
consisting of 253233 professional records. Experiments on GoGoD and PGD
datasets show the DANN can substantially improve performance of move prediction
over pure DCNN. When combining LTE, our system outperforms most relevant
approaches and open engines based on MCTS.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 13:30:04 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Wang",
"Jinzhuo",
""
],
[
"Wang",
"Wenmin",
""
],
[
"Wang",
"Ronggang",
""
],
[
"Gao",
"Wen",
""
]
] | TITLE: Beyond Monte Carlo Tree Search: Playing Go with Deep Alternative Neural
Network and Long-Term Evaluation
ABSTRACT: Monte Carlo tree search (MCTS) is extremely popular in computer Go which
determines each action by enormous simulations in a broad and deep search tree.
However, human experts select most actions by pattern analysis and careful
evaluation rather than brute-force search of millions of future interactions. In this
paper, we propose a computer Go system that follows experts' way of thinking and
playing. Our system consists of two parts. The first part is a novel deep
alternative neural network (DANN) used to generate candidates of next move.
Compared with existing deep convolutional neural network (DCNN), DANN inserts
a recurrent layer after each convolutional layer and stacks them in an
alternative manner. We show that such a setting can preserve more context of local
features and their evolution, which is beneficial for move prediction. The
second part is a long-term evaluation (LTE) module used to provide a reliable
evaluation of candidates rather than a single probability from move predictor.
This is consistent with human experts' nature of play, since they can foresee
tens of steps to give an accurate estimation of candidates. In our system, for
each candidate, LTE calculates a cumulative reward after several future
interactions when local variations are settled. Combining criteria from the two
parts, our system determines the optimal choice of next move. For more
comprehensive experiments, we introduce a new professional Go dataset (PGD),
consisting of 253233 professional records. Experiments on GoGoD and PGD
datasets show the DANN can substantially improve performance of move prediction
over pure DCNN. When combining LTE, our system outperforms most relevant
approaches and open engines based on MCTS.
| new_dataset | 0.963541 |
1706.04097 | Yingyu Liang | Yuanzhi Li, Yingyu Liang | Provable Alternating Gradient Descent for Non-negative Matrix
Factorization with Strong Correlations | Accepted to the International Conference on Machine Learning (ICML),
2017 | null | null | null | cs.LG cs.DS cs.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-negative matrix factorization is a basic tool for decomposing data into
the feature and weight matrices under non-negativity constraints, and in
practice is often solved in the alternating minimization framework. However, it
is unclear whether such algorithms can recover the ground-truth feature matrix
when the weights for different features are highly correlated, which is common
in applications. This paper proposes a simple and natural alternating gradient
descent based algorithm, and shows that with a mild initialization it provably
recovers the ground-truth in the presence of strong correlations. In most
interesting cases, the correlation can be in the same order as the highest
possible. Our analysis also reveals its several favorable features including
robustness to noise. We complement our theoretical results with empirical
studies on semi-synthetic datasets, demonstrating its advantage over several
popular methods in recovering the ground-truth.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 14:39:59 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Li",
"Yuanzhi",
""
],
[
"Liang",
"Yingyu",
""
]
] | TITLE: Provable Alternating Gradient Descent for Non-negative Matrix
Factorization with Strong Correlations
ABSTRACT: Non-negative matrix factorization is a basic tool for decomposing data into
the feature and weight matrices under non-negativity constraints, and in
practice is often solved in the alternating minimization framework. However, it
is unclear whether such algorithms can recover the ground-truth feature matrix
when the weights for different features are highly correlated, which is common
in applications. This paper proposes a simple and natural alternating gradient
descent based algorithm, and shows that with a mild initialization it provably
recovers the ground-truth in the presence of strong correlations. In most
interesting cases, the correlation can be in the same order as the highest
possible. Our analysis also reveals its several favorable features including
robustness to noise. We complement our theoretical results with empirical
studies on semi-synthetic datasets, demonstrating its advantage over several
popular methods in recovering the ground-truth.
| no_new_dataset | 0.9462 |
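As the record above analyzes alternating gradient descent for non-negative matrix factorization, here is a generic Python sketch of that alternating scheme: a projected gradient step on one factor, then on the other. The step size, random initialization, and fixed iteration count are placeholder choices, not the specific update rule or initialization the paper proves guarantees for.

    import numpy as np

    def nmf_alternating_gd(Y, r, steps=2000, lr=1e-3, seed=0):
        # factor Y (n x m) as A @ W with both factors elementwise non-negative
        rng = np.random.default_rng(seed)
        n, m = Y.shape
        A = np.abs(rng.normal(size=(n, r)))
        W = np.abs(rng.normal(size=(r, m)))
        for _ in range(steps):
            R = A @ W - Y
            A = np.maximum(A - lr * R @ W.T, 0.0)    # gradient step on A, project to >= 0
            R = A @ W - Y
            W = np.maximum(W - lr * A.T @ R, 0.0)    # gradient step on W, project to >= 0
        return A, W

    # toy check on a synthetic rank-3 non-negative matrix
    rng = np.random.default_rng(1)
    Y = np.abs(rng.normal(size=(50, 3))) @ np.abs(rng.normal(size=(3, 40)))
    A, W = nmf_alternating_gd(Y, r=3)
    print(np.linalg.norm(A @ W - Y) / np.linalg.norm(Y))   # relative reconstruction error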
1706.04109 | Fran Casino | Fran Casino, Constantinos Patsakis, Antoni Martinez-Balleste, Frederic
Borras, Edgar Batista | Technical Report: Implementation and Validation of a Smart Health
Application | 4-page Tech Report | null | null | null | cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we explain in detail the internal structures and databases
of a smart health application. Moreover, we describe how to generate a
statistically sound synthetic dataset using real-world medical data.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 14:58:41 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Casino",
"Fran",
""
],
[
"Patsakis",
"Constantinos",
""
],
[
"Martinez-Balleste",
"Antoni",
""
],
[
"Borras",
"Frederic",
""
],
[
"Batista",
"Edgar",
""
]
] | TITLE: Technical Report: Implementation and Validation of a Smart Health
Application
ABSTRACT: In this article, we explain in detail the internal structures and databases
of a smart health application. Moreover, we describe how to generate a
statistically sound synthetic dataset using real-world medical data.
| no_new_dataset | 0.9255 |
1706.04122 | Iman Abbasnejad | Iman Abbasnejad, Sridha Sridharan, Simon Denman, Clinton Fookes, Simon
Lucey | Joint Max Margin and Semantic Features for Continuous Event Detection in
Complex Scenes | submit to journal of Computer Vision and Image Understanding | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper the problem of complex event detection in the continuous domain
(i.e. events with unknown starting and ending locations) is addressed. Existing
event detection methods are limited to features that are extracted from the
local spatial or spatio-temporal patches from the videos. However, this makes
the model vulnerable to the events with similar concepts e.g. "Open drawer" and
"Open cupboard". In this work, in order to address the aforementioned
limitations we present a novel model based on the combination of semantic and
temporal features extracted from video frames. We train a max-margin classifier
on top of the extracted features in an adaptive framework that is able to
detect the events with unknown starting and ending locations. Our model is
based on the Bidirectional Region Neural Network and large margin Structural
Output SVM. The generality of our model allows it to be simply applied to
different labeled and unlabeled datasets. We finally test our algorithm on
three challenging datasets, "UCF 101-Action Recognition", "MPII Cooking
Activities" and "Hollywood", and we report state-of-the-art performance.
| [
{
"version": "v1",
"created": "Tue, 13 Jun 2017 15:30:16 GMT"
}
] | 2017-06-14T00:00:00 | [
[
"Abbasnejad",
"Iman",
""
],
[
"Sridharan",
"Sridha",
""
],
[
"Denman",
"Simon",
""
],
[
"Fookes",
"Clinton",
""
],
[
"Lucey",
"Simon",
""
]
] | TITLE: Joint Max Margin and Semantic Features for Continuous Event Detection in
Complex Scenes
ABSTRACT: In this paper the problem of complex event detection in the continuous domain
(i.e. events with unknown starting and ending locations) is addressed. Existing
event detection methods are limited to features that are extracted from the
local spatial or spatio-temporal patches from the videos. However, this makes
the model vulnerable to the events with similar concepts e.g. "Open drawer" and
"Open cupboard". In this work, in order to address the aforementioned
limitations we present a novel model based on the combination of semantic and
temporal features extracted from video frames. We train a max-margin classifier
on top of the extracted features in an adaptive framework that is able to
detect the events with unknown starting and ending locations. Our model is
based on the Bidirectional Region Neural Network and large margin Structural
Output SVM. The generality of our model allows it to be simply applied to
different labeled and unlabeled datasets. We finally test our algorithm on
three challenging datasets, "UCF 101-Action Recognition", "MPII Cooking
Activities" and "Hollywood", and we report state-of-the-art performance.
| no_new_dataset | 0.949623 |
1603.04908 | Gedas Bertasius | Gedas Bertasius, Hyun Soo Park, Stella X. Yu, and Jianbo Shi | First Person Action-Object Detection with EgoNet | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unlike traditional third-person cameras mounted on robots, a first-person
camera captures a person's visual sensorimotor object interactions from up
close. In this paper, we study the tight interplay between our momentary visual
attention and motor action with objects from a first-person camera. We propose
a concept of action-objects---the objects that capture a person's conscious
visual (watching a TV) or tactile (taking a cup) interactions. Action-objects
may be task-dependent but since many tasks share common person-object spatial
configurations, action-objects exhibit a characteristic 3D spatial distance and
orientation with respect to the person.
We design a predictive model that detects action-objects using EgoNet, a
joint two-stream network that holistically integrates visual appearance (RGB)
and 3D spatial layout (depth and height) cues to predict per-pixel likelihood
of action-objects. Our network also incorporates a first-person coordinate
embedding, which is designed to learn a spatial distribution of the
action-objects in the first-person data. We demonstrate EgoNet's predictive
power, by showing that it consistently outperforms previous baseline
approaches. Furthermore, EgoNet also exhibits a strong generalization ability,
i.e., it predicts semantically meaningful objects in novel first-person
datasets. Our method's ability to effectively detect action-objects could be
used to improve robots' understanding of human-object interactions.
| [
{
"version": "v1",
"created": "Tue, 15 Mar 2016 22:29:03 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Nov 2016 16:59:28 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Jun 2017 18:04:17 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Bertasius",
"Gedas",
""
],
[
"Park",
"Hyun Soo",
""
],
[
"Yu",
"Stella X.",
""
],
[
"Shi",
"Jianbo",
""
]
] | TITLE: First Person Action-Object Detection with EgoNet
ABSTRACT: Unlike traditional third-person cameras mounted on robots, a first-person
camera captures a person's visual sensorimotor object interactions from up
close. In this paper, we study the tight interplay between our momentary visual
attention and motor action with objects from a first-person camera. We propose
a concept of action-objects---the objects that capture a person's conscious
visual (watching a TV) or tactile (taking a cup) interactions. Action-objects
may be task-dependent but since many tasks share common person-object spatial
configurations, action-objects exhibit a characteristic 3D spatial distance and
orientation with respect to the person.
We design a predictive model that detects action-objects using EgoNet, a
joint two-stream network that holistically integrates visual appearance (RGB)
and 3D spatial layout (depth and height) cues to predict per-pixel likelihood
of action-objects. Our network also incorporates a first-person coordinate
embedding, which is designed to learn a spatial distribution of the
action-objects in the first-person data. We demonstrate EgoNet's predictive
power, by showing that it consistently outperforms previous baseline
approaches. Furthermore, EgoNet also exhibits a strong generalization ability,
i.e., it predicts semantically meaningful objects in novel first-person
datasets. Our method's ability to effectively detect action-objects could be
used to improve robots' understanding of human-object interactions.
| new_dataset | 0.644784 |
1610.01675 | Michael Lash | Michael T. Lash, Qihang Lin, W. Nick Street, Jennifer G. Robinson,
Jeffrey Ohlmann | Generalized Inverse Classification | Accepted to SDM 2017. Full paper + supplemental material | null | 10.1137/1.9781611974973.19 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inverse classification is the process of perturbing an instance in a
meaningful way such that it is more likely to conform to a specific class.
Historical methods that address such a problem are often framed to leverage
only a single classifier, or specific set of classifiers. These works are often
accompanied by naive assumptions. In this work we propose generalized inverse
classification (GIC), which avoids restricting the classification model that
can be used. We incorporate this formulation into a refined framework in which
GIC takes place. Under this framework, GIC operates on features that are
immediately actionable. Each change incurs an individual cost, either linear or
non-linear. Such changes are constrained to occur within a specified level of
cumulative change (budget). Furthermore, our framework incorporates the
estimation of features that change as a consequence of direct actions taken
(indirectly changeable features). To solve such a problem, we propose three
real-valued heuristic-based methods and two sensitivity analysis-based
comparison methods, each of which is evaluated on two freely available
real-world datasets. Our results demonstrate the validity and benefits of our
formulation, framework, and methods.
| [
{
"version": "v1",
"created": "Wed, 5 Oct 2016 22:28:01 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Jan 2017 17:38:58 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Lash",
"Michael T.",
""
],
[
"Lin",
"Qihang",
""
],
[
"Street",
"W. Nick",
""
],
[
"Robinson",
"Jennifer G.",
""
],
[
"Ohlmann",
"Jeffrey",
""
]
] | TITLE: Generalized Inverse Classification
ABSTRACT: Inverse classification is the process of perturbing an instance in a
meaningful way such that it is more likely to conform to a specific class.
Historical methods that address such a problem are often framed to leverage
only a single classifier, or specific set of classifiers. These works are often
accompanied by naive assumptions. In this work we propose generalized inverse
classification (GIC), which avoids restricting the classification model that
can be used. We incorporate this formulation into a refined framework in which
GIC takes place. Under this framework, GIC operates on features that are
immediately actionable. Each change incurs an individual cost, either linear or
non-linear. Such changes are constrained to occur within a specified level of
cumulative change (budget). Furthermore, our framework incorporates the
estimation of features that change as a consequence of direct actions taken
(indirectly changeable features). To solve such a problem, we propose three
real-valued heuristic-based methods and two sensitivity analysis-based
comparison methods, each of which is evaluated on two freely available
real-world datasets. Our results demonstrate the validity and benefits of our
formulation, framework, and methods.
| no_new_dataset | 0.9462 |
1611.02315 | Jacob Steinhardt | Moses Charikar and Jacob Steinhardt and Gregory Valiant | Learning from Untrusted Data | Updated based on STOC camera-ready | null | null | null | cs.LG cs.AI cs.CC cs.CR math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The vast majority of theoretical results in machine learning and statistics
assume that the available training data is a reasonably reliable reflection of
the phenomena to be learned or estimated. Similarly, the majority of machine
learning and statistical techniques used in practice are brittle to the
presence of large amounts of biased or malicious data. In this work we consider
two frameworks in which to study estimation, learning, and optimization in the
presence of significant fractions of arbitrary data.
The first framework, list-decodable learning, asks whether it is possible to
return a list of answers, with the guarantee that at least one of them is
accurate. For example, given a dataset of $n$ points for which an unknown
subset of $\alpha n$ points are drawn from a distribution of interest, and no
assumptions are made about the remaining $(1-\alpha)n$ points, is it possible
to return a list of $\operatorname{poly}(1/\alpha)$ answers, one of which is
correct? The second framework, which we term the semi-verified learning model,
considers the extent to which a small dataset of trusted data (drawn from the
distribution in question) can be leveraged to enable the accurate extraction of
information from a much larger but untrusted dataset (of which only an
$\alpha$-fraction is drawn from the distribution).
We show strong positive results in both settings, and provide an algorithm
for robust learning in a very general stochastic optimization setting. This
general result has immediate implications for robust estimation in a number of
settings, including for robustly estimating the mean of distributions with
bounded second moments, robustly learning mixtures of such distributions, and
robustly finding planted partitions in random graphs in which significant
portions of the graph have been perturbed by an adversary.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 21:43:39 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jun 2017 17:48:31 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Charikar",
"Moses",
""
],
[
"Steinhardt",
"Jacob",
""
],
[
"Valiant",
"Gregory",
""
]
] | TITLE: Learning from Untrusted Data
ABSTRACT: The vast majority of theoretical results in machine learning and statistics
assume that the available training data is a reasonably reliable reflection of
the phenomena to be learned or estimated. Similarly, the majority of machine
learning and statistical techniques used in practice are brittle to the
presence of large amounts of biased or malicious data. In this work we consider
two frameworks in which to study estimation, learning, and optimization in the
presence of significant fractions of arbitrary data.
The first framework, list-decodable learning, asks whether it is possible to
return a list of answers, with the guarantee that at least one of them is
accurate. For example, given a dataset of $n$ points for which an unknown
subset of $\alpha n$ points are drawn from a distribution of interest, and no
assumptions are made about the remaining $(1-\alpha)n$ points, is it possible
to return a list of $\operatorname{poly}(1/\alpha)$ answers, one of which is
correct? The second framework, which we term the semi-verified learning model,
considers the extent to which a small dataset of trusted data (drawn from the
distribution in question) can be leveraged to enable the accurate extraction of
information from a much larger but untrusted dataset (of which only an
$\alpha$-fraction is drawn from the distribution).
We show strong positive results in both settings, and provide an algorithm
for robust learning in a very general stochastic optimization setting. This
general result has immediate implications for robust estimation in a number of
settings, including for robustly estimating the mean of distributions with
bounded second moments, robustly learning mixtures of such distributions, and
robustly finding planted partitions in random graphs in which significant
portions of the graph have been perturbed by an adversary.
| no_new_dataset | 0.939692 |
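A heavily simplified Python illustration of the semi-verified setting described above: a small trusted sample anchors the estimate, untrusted points far from that anchor are discarded, and the rest are averaged. The radius, data shapes, and filtering rule are ad-hoc assumptions for the toy; the sketch carries none of the paper's guarantees or its list-decoding machinery.

    import numpy as np

    def semi_verified_mean(untrusted, trusted, radius=3.0):
        # anchor on the trusted sample, drop untrusted points far from the anchor,
        # and average whatever survives the filter
        anchor = trusted.mean(axis=0)
        dist = np.linalg.norm(untrusted - anchor, axis=1)
        kept = untrusted[dist <= radius]
        return kept.mean(axis=0) if len(kept) else anchor

    rng = np.random.default_rng(0)
    good = rng.normal(loc=0.0, size=(300, 2))     # alpha-fraction drawn from the distribution
    bad = rng.normal(loc=25.0, size=(700, 2))     # the remaining, arbitrary points
    untrusted = np.vstack([good, bad])
    trusted = rng.normal(loc=0.0, size=(10, 2))   # small verified sample
    print(semi_verified_mean(untrusted, trusted)) # close to (0, 0) despite 70% corruption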
1612.05062 | Thomas Nestmeyer | Thomas Nestmeyer, Peter V. Gehler | Reflectance Adaptive Filtering Improves Intrinsic Image Estimation | CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Separating an image into reflectance and shading layers poses a challenge for
learning approaches because no large corpus of precise and realistic ground
truth decompositions exists. The Intrinsic Images in the Wild~(IIW) dataset
provides a sparse set of relative human reflectance judgments, which serves as
a standard benchmark for intrinsic images. A number of methods use IIW to learn
statistical dependencies between the images and their reflectance layer.
Although learning plays an important role for high performance, we show that a
standard signal processing technique achieves performance on par with current
state-of-the-art. We propose a loss function for CNN learning of dense
reflectance predictions. Our results show a simple pixel-wise decision, without
any context or prior knowledge, is sufficient to provide a strong baseline on
IIW. This sets a competitive baseline which only two other approaches surpass.
We then develop a joint bilateral filtering method that implements strong prior
knowledge about reflectance constancy. This filtering operation can be applied
to any intrinsic image algorithm and we improve several previous results
achieving a new state-of-the-art on IIW. Our findings suggest that the effect
of learning-based approaches may have been over-estimated so far. Explicit
prior knowledge is still at least as important to obtain high performance in
intrinsic image decompositions.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 13:42:54 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2017 12:39:49 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Nestmeyer",
"Thomas",
""
],
[
"Gehler",
"Peter V.",
""
]
] | TITLE: Reflectance Adaptive Filtering Improves Intrinsic Image Estimation
ABSTRACT: Separating an image into reflectance and shading layers poses a challenge for
learning approaches because no large corpus of precise and realistic ground
truth decompositions exists. The Intrinsic Images in the Wild~(IIW) dataset
provides a sparse set of relative human reflectance judgments, which serves as
a standard benchmark for intrinsic images. A number of methods use IIW to learn
statistical dependencies between the images and their reflectance layer.
Although learning plays an important role for high performance, we show that a
standard signal processing technique achieves performance on par with current
state-of-the-art. We propose a loss function for CNN learning of dense
reflectance predictions. Our results show a simple pixel-wise decision, without
any context or prior knowledge, is sufficient to provide a strong baseline on
IIW. This sets a competitive baseline which only two other approaches surpass.
We then develop a joint bilateral filtering method that implements strong prior
knowledge about reflectance constancy. This filtering operation can be applied
to any intrinsic image algorithm and we improve several previous results
achieving a new state-of-the-art on IIW. Our findings suggest that the effect
of learning-based approaches may have been over-estimated so far. Explicit
prior knowledge is still at least as important to obtain high performance in
intrinsic image decompositions.
| no_new_dataset | 0.944177 |
1612.05970 | Wentao Zhu | Wentao Zhu, Xiang Xiang, Trac D. Tran, Xiaohui Xie | Adversarial Deep Structural Networks for Mammographic Mass Segmentation | First version on arXiv 2016, MICCAI 2017 Deep Learning in Medical
Image Analysis (DLMIA) workshop | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mass segmentation is an important task in mammogram analysis, providing
effective morphological features and regions of interest (ROI) for mass
detection and classification. Inspired by the success of using deep
convolutional features for natural image analysis and conditional random fields
(CRF) for structural learning, we propose an end-to-end network for
mammographic mass segmentation. The network employs a fully convolutional
network (FCN) to model potential function, followed by a CRF to perform
structural learning. Because the mass distribution varies greatly with pixel
position, the FCN is combined with position priori for the task. Due to the
small size of mammogram datasets, we use adversarial training to control
over-fitting. Four models with different convolutional kernels are further
fused to improve the segmentation results. Experimental results on two public
datasets, INbreast and DDSM-BCRP, show that our end-to-end network combined
with adversarial training achieves state-of-the-art results.
| [
{
"version": "v1",
"created": "Sun, 18 Dec 2016 18:40:21 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2017 21:32:38 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Zhu",
"Wentao",
""
],
[
"Xiang",
"Xiang",
""
],
[
"Tran",
"Trac D.",
""
],
[
"Xie",
"Xiaohui",
""
]
] | TITLE: Adversarial Deep Structural Networks for Mammographic Mass Segmentation
ABSTRACT: Mass segmentation is an important task in mammogram analysis, providing
effective morphological features and regions of interest (ROI) for mass
detection and classification. Inspired by the success of using deep
convolutional features for natural image analysis and conditional random fields
(CRF) for structural learning, we propose an end-to-end network for
mammographic mass segmentation. The network employs a fully convolutional
network (FCN) to model potential function, followed by a CRF to perform
structural learning. Because the mass distribution varies greatly with pixel
position, the FCN is combined with position priori for the task. Due to the
small size of mammogram datasets, we use adversarial training to control
over-fitting. Four models with different convolutional kernels are further
fused to improve the segmentation results. Experimental results on two public
datasets, INbreast and DDSM-BCRP, show that our end-to-end network combined
with adversarial training achieves state-of-the-art results.
| no_new_dataset | 0.954563 |
1702.01426 | Nadav Israel | Nadav Israel, Lior Wolf, Ran Barzilay, Gal Shoval | Robust features for facial action recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic recognition of facial gestures is becoming increasingly important
as real world AI agents become a reality. In this paper, we present an
automated system that recognizes facial gestures by capturing local changes and
encoding the motion into a histogram of frequencies. We evaluate the proposed
method by demonstrating its effectiveness on spontaneous face action
benchmarks: the FEEDTUM dataset, the Pain dataset and the HMDB51 dataset. The
results show that, compared to known methods, the new encoding methods
significantly improve the recognition accuracy and the robustness of analysis
for a variety of applications.
| [
{
"version": "v1",
"created": "Sun, 5 Feb 2017 16:28:26 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jun 2017 17:08:43 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Israel",
"Nadav",
""
],
[
"Wolf",
"Lior",
""
],
[
"Barzilay",
"Ran",
""
],
[
"Shoval",
"Gal",
""
]
] | TITLE: Robust features for facial action recognition
ABSTRACT: Automatic recognition of facial gestures is becoming increasingly important
as real world AI agents become a reality. In this paper, we present an
automated system that recognizes facial gestures by capturing local changes and
encoding the motion into a histogram of frequencies. We evaluate the proposed
method by demonstrating its effectiveness on spontaneous face action
benchmarks: the FEEDTUM dataset, the Pain dataset and the HMDB51 dataset. The
results show that, compared to known methods, the new encoding methods
significantly improve the recognition accuracy and the robustness of analysis
for a variety of applications.
| no_new_dataset | 0.941061 |
1703.00366 | Emiliano De Cristofaro | Apostolos Pyrgelis, Carmela Troncoso, Emiliano De Cristofaro | What Does The Crowd Say About You? Evaluating Aggregation-based Location
Privacy | To appear in PETS 2017 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information about people's movements and the locations they visit enables an
increasing number of mobility analytics applications, e.g., in the context of
urban and transportation planning. In this setting, rather than collecting or
sharing raw data, entities often use aggregation as a privacy protection
mechanism, aiming to hide individual users' location traces. Furthermore, to
bound information leakage from the aggregates, they can perturb the input of
the aggregation or its output to ensure that these are differentially private.
In this paper, we set out to evaluate the impact of releasing aggregate location
time-series on the privacy of individuals contributing to the aggregation. We
introduce a framework allowing us to reason about privacy against an adversary
attempting to predict users' locations or recover their mobility patterns. We
formalize these attacks as inference problems, and discuss a few strategies to
model the adversary's prior knowledge based on the information she may have
access to. We then use the framework to quantify the privacy loss stemming from
aggregate location data, with and without the protection of differential
privacy, using two real-world mobility datasets. We find that aggregates do
leak information about individuals' punctual locations and mobility profiles.
The density of the observations, as well as timing, play important roles, e.g.,
regular patterns during peak hours are better protected than sporadic
movements. Finally, our evaluation shows that both output and input
perturbation offer little additional protection, unless they introduce large
amounts of noise ultimately destroying the utility of the data.
| [
{
"version": "v1",
"created": "Wed, 1 Mar 2017 16:22:52 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2017 14:43:48 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Jun 2017 10:58:48 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Pyrgelis",
"Apostolos",
""
],
[
"Troncoso",
"Carmela",
""
],
[
"De Cristofaro",
"Emiliano",
""
]
] | TITLE: What Does The Crowd Say About You? Evaluating Aggregation-based Location
Privacy
ABSTRACT: Information about people's movements and the locations they visit enables an
increasing number of mobility analytics applications, e.g., in the context of
urban and transportation planning, In this setting, rather than collecting or
sharing raw data, entities often use aggregation as a privacy protection
mechanism, aiming to hide individual users' location traces. Furthermore, to
bound information leakage from the aggregates, they can perturb the input of
the aggregation or its output to ensure that these are differentially private.
In this paper, we set to evaluate the impact of releasing aggregate location
time-series on the privacy of individuals contributing to the aggregation. We
introduce a framework allowing us to reason about privacy against an adversary
attempting to predict users' locations or recover their mobility patterns. We
formalize these attacks as inference problems, and discuss a few strategies to
model the adversary's prior knowledge based on the information she may have
access to. We then use the framework to quantify the privacy loss stemming from
aggregate location data, with and without the protection of differential
privacy, using two real-world mobility datasets. We find that aggregates do
leak information about individuals' punctual locations and mobility profiles.
The density of the observations, as well as timing, play important roles, e.g.,
regular patterns during peak hours are better protected than sporadic
movements. Finally, our evaluation shows that both output and input
perturbation offer little additional protection, unless they introduce large
amounts of noise ultimately destroying the utility of the data.
| no_new_dataset | 0.943034 |
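The record above studies perturbing aggregate location time-series so that the release is differentially private. Below is a minimal Python sketch of the standard output-perturbation step, the Laplace mechanism applied to per-cell counts; the sensitivity of 1 assumes each user contributes at most one visit per region-time cell, and epsilon and the toy counts are arbitrary illustrative values rather than anything from the paper's evaluation.

    import numpy as np

    def dp_aggregate_counts(counts, epsilon, sensitivity=1.0, seed=0):
        # Laplace mechanism: add Laplace(sensitivity / epsilon) noise to each count
        rng = np.random.default_rng(seed)
        noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=np.shape(counts))
        return np.asarray(counts, dtype=float) + noise

    # toy example: users counted per region for one time slot
    raw = [120, 43, 7, 0, 265]
    print(dp_aggregate_counts(raw, epsilon=0.5))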
1703.01041 | Esteban Real | Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon
Suematsu, Jie Tan, Quoc Le, Alex Kurakin | Large-Scale Evolution of Image Classifiers | Accepted for publication at ICML 2017 (34th International Conference
on Machine Learning) | null | null | null | cs.NE cs.AI cs.CV cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements.
| [
{
"version": "v1",
"created": "Fri, 3 Mar 2017 05:41:30 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Jun 2017 08:42:28 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Real",
"Esteban",
""
],
[
"Moore",
"Sherry",
""
],
[
"Selle",
"Andrew",
""
],
[
"Saxena",
"Saurabh",
""
],
[
"Suematsu",
"Yutaka Leon",
""
],
[
"Tan",
"Jie",
""
],
[
"Le",
"Quoc",
""
],
[
"Kurakin",
"Alex",
""
]
] | TITLE: Large-Scale Evolution of Image Classifiers
ABSTRACT: Neural networks have proven effective at solving difficult problems but
designing their architectures can be challenging, even for image classification
problems alone. Our goal is to minimize human participation, so we employ
evolutionary algorithms to discover such networks automatically. Despite
significant computational requirements, we show that it is now possible to
evolve models with accuracies within the range of those published in the last
year. Specifically, we employ simple evolutionary techniques at unprecedented
scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting
from trivial initial conditions and reaching accuracies of 94.6% (95.6% for
ensemble) and 77.0%, respectively. To do this, we use novel and intuitive
mutation operators that navigate large search spaces; we stress that no human
participation is required once evolution starts and that the output is a
fully-trained model. Throughout this work, we place special emphasis on the
repeatability of results, the variability in the outcomes and the computational
requirements.
| no_new_dataset | 0.948442 |
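To make the evolutionary search described above concrete, the Python toy below runs a binary-tournament, mutate-the-winner loop over "architectures" encoded as lists of layer widths. The fitness function, mutation operators, population size, and encoding are stand-in assumptions; the paper evolves and fully trains real image classifiers with its own mutation operators.

    import random

    random.seed(0)

    def fitness(arch):
        # stand-in for "train the model and report validation accuracy"
        return -abs(sum(arch) - 100) - 0.1 * len(arch)

    def mutate(arch):
        arch = list(arch)
        i = random.randrange(len(arch))
        op = random.choice(["widen", "narrow", "insert", "remove"])
        if op == "widen":
            arch[i] += 8
        elif op == "narrow":
            arch[i] = max(8, arch[i] - 8)
        elif op == "insert":
            arch.insert(i, 16)
        elif op == "remove" and len(arch) > 1:
            arch.pop(i)
        return arch

    population = [[16] for _ in range(20)]               # trivial initial conditions
    for _ in range(2000):
        i, j = random.sample(range(len(population)), 2)  # binary tournament
        a, b = population[i], population[j]
        winner, loser = (a, j) if fitness(a) >= fitness(b) else (b, i)
        population[loser] = mutate(winner)               # mutated copy replaces the loser
    print(max(population, key=fitness))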
1703.01958 | David Hallac | David Hallac, Youngsuk Park, Stephen Boyd, Jure Leskovec | Network Inference via the Time-Varying Graphical Lasso | null | null | null | null | cs.LG cs.SI math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many important problems can be modeled as a system of interconnected
entities, where each entity is recording time-dependent observations or
measurements. In order to spot trends, detect anomalies, and interpret the
temporal dynamics of such data, it is essential to understand the relationships
between the different entities and how these relationships evolve over time. In
this paper, we introduce the time-varying graphical lasso (TVGL), a method of
inferring time-varying networks from raw time series data. We cast the problem
in terms of estimating a sparse time-varying inverse covariance matrix, which
reveals a dynamic network of interdependencies between the entities. Since
dynamic network inference is a computationally expensive task, we derive a
scalable message-passing algorithm based on the Alternating Direction Method of
Multipliers (ADMM) to solve this problem in an efficient way. We also discuss
several extensions, including a streaming algorithm to update the model and
incorporate new observations in real time. Finally, we evaluate our TVGL
algorithm on both real and synthetic datasets, obtaining interpretable results
and outperforming state-of-the-art baselines in terms of both accuracy and
scalability.
| [
{
"version": "v1",
"created": "Mon, 6 Mar 2017 16:35:48 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Jun 2017 01:07:39 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Hallac",
"David",
""
],
[
"Park",
"Youngsuk",
""
],
[
"Boyd",
"Stephen",
""
],
[
"Leskovec",
"Jure",
""
]
] | TITLE: Network Inference via the Time-Varying Graphical Lasso
ABSTRACT: Many important problems can be modeled as a system of interconnected
entities, where each entity is recording time-dependent observations or
measurements. In order to spot trends, detect anomalies, and interpret the
temporal dynamics of such data, it is essential to understand the relationships
between the different entities and how these relationships evolve over time. In
this paper, we introduce the time-varying graphical lasso (TVGL), a method of
inferring time-varying networks from raw time series data. We cast the problem
in terms of estimating a sparse time-varying inverse covariance matrix, which
reveals a dynamic network of interdependencies between the entities. Since
dynamic network inference is a computationally expensive task, we derive a
scalable message-passing algorithm based on the Alternating Direction Method of
Multipliers (ADMM) to solve this problem in an efficient way. We also discuss
several extensions, including a streaming algorithm to update the model and
incorporate new observations in real time. Finally, we evaluate our TVGL
algorithm on both real and synthetic datasets, obtaining interpretable results
and outperforming state-of-the-art baselines in terms of both accuracy and
scalability.
| no_new_dataset | 0.946745 |
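For the record above, the Python sketch below states a time-varying graphical lasso objective directly in cvxpy: a Gaussian log-likelihood term per time slice, an elementwise L1 sparsity penalty, and an L1 penalty on changes between consecutive precision matrices. It is a toy formulation handed to a generic solver for a couple of small slices, not the paper's scalable ADMM message-passing algorithm, and the penalty weights are arbitrary.

    import cvxpy as cp
    import numpy as np

    def tvgl_direct(S_list, lam=0.1, beta=0.5):
        # S_list: one empirical covariance matrix per time window
        p = S_list[0].shape[0]
        thetas = [cp.Variable((p, p), PSD=True) for _ in S_list]
        obj = 0
        for S, th in zip(S_list, thetas):
            obj += cp.trace(S @ th) - cp.log_det(th) + lam * cp.sum(cp.abs(th))
        for prev, nxt in zip(thetas[:-1], thetas[1:]):
            obj += beta * cp.sum(cp.abs(nxt - prev))     # penalize temporal changes
        cp.Problem(cp.Minimize(obj)).solve()
        return [th.value for th in thetas]

    # toy usage: two windows of 5-dimensional synthetic observations
    rng = np.random.default_rng(0)
    S_list = [np.cov(rng.normal(size=(200, 5)), rowvar=False) for _ in range(2)]
    precisions = tvgl_direct(S_list)
    print(np.round(precisions[0], 2))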
1704.05179 | Levent Sagun | Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik
and Kyunghyun Cho | SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We publicly release a new large-scale dataset, called SearchQA, for machine
comprehension, or question-answering. Unlike recently released datasets, such
as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to
reflect a full pipeline of general question-answering. That is, we start not
from an existing article and generate a question-answer pair, but start from an
existing question-answer pair, crawled from J! Archive, and augment it with
text snippets retrieved by Google. Following this approach, we built SearchQA,
which consists of more than 140k question-answer pairs with each pair having
49.6 snippets on average. Each question-answer-context tuple of the SearchQA
comes with additional meta-data such as the snippet's URL, which we believe
will be valuable resources for future research. We conduct human evaluation as
well as test two baseline methods, one simple word selection and the other deep
learning based, on the SearchQA. We show that there is a meaningful gap between
the human and machine performances. This suggests that the proposed dataset
could well serve as a benchmark for question-answering.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 02:42:17 GMT"
},
{
"version": "v2",
"created": "Thu, 4 May 2017 14:07:21 GMT"
},
{
"version": "v3",
"created": "Sun, 11 Jun 2017 11:51:06 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Dunn",
"Matthew",
""
],
[
"Sagun",
"Levent",
""
],
[
"Higgins",
"Mike",
""
],
[
"Guney",
"V. Ugur",
""
],
[
"Cirik",
"Volkan",
""
],
[
"Cho",
"Kyunghyun",
""
]
] | TITLE: SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine
ABSTRACT: We publicly release a new large-scale dataset, called SearchQA, for machine
comprehension, or question-answering. Unlike recently released datasets, such
as DeepMind CNN/DailyMail and SQuAD, the proposed SearchQA was constructed to
reflect a full pipeline of general question-answering. That is, we start not
from an existing article and generate a question-answer pair, but start from an
existing question-answer pair, crawled from J! Archive, and augment it with
text snippets retrieved by Google. Following this approach, we built SearchQA,
which consists of more than 140k question-answer pairs with each pair having
49.6 snippets on average. Each question-answer-context tuple of the SearchQA
comes with additional meta-data such as the snippet's URL, which we believe
will be valuable resources for future research. We conduct human evaluation as
well as test two baseline methods, one simple word selection and the other deep
learning based, on the SearchQA. We show that there is a meaningful gap between
the human and machine performances. This suggests that the proposed dataset
could well serve as a benchmark for question-answering.
| new_dataset | 0.956594 |
1705.01209 | Gan Sun | Gan Sun, Yang Cong, Ji Liu and Xiaowei Xu | Lifelong Metric Learning | 10 pages, 6 figures | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The state-of-the-art online learning approaches are only capable of learning
the metric for predefined tasks. In this paper, we consider the lifelong learning
problem to mimic "human learning", i.e., endowing a new capability to the
learned metric for a new task from new online samples and incorporating
previous experiences and knowledge. Therefore, we propose a new metric learning
framework: lifelong metric learning (LML), which only utilizes the data of the
new task to train the metric model while preserving the original capabilities.
More specifically, the proposed LML maintains a common subspace for all learned
metrics, named lifelong dictionary, transfers knowledge from the common
subspace to each new metric task with task-specific idiosyncrasy, and redefines
the common subspace over time to maximize performance across all metric tasks.
For model optimization, we apply online passive aggressive optimization
algorithm to solve the proposed LML framework, where the lifelong dictionary
and task-specific partition are optimized alternatively and consecutively.
Finally, we evaluate our approach by analyzing several multi-task metric
learning datasets. Extensive experimental results demonstrate effectiveness and
efficiency of the proposed framework.
| [
{
"version": "v1",
"created": "Wed, 3 May 2017 00:31:55 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Jun 2017 15:09:20 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Sun",
"Gan",
""
],
[
"Cong",
"Yang",
""
],
[
"Liu",
"Ji",
""
],
[
"Xu",
"Xiaowei",
""
]
] | TITLE: Lifelong Metric Learning
ABSTRACT: The state-of-the-art online learning approaches are only capable of learning
the metric for predefined tasks. In this paper, we consider the lifelong learning
problem to mimic "human learning", i.e., endowing a new capability to the
learned metric for a new task from new online samples and incorporating
previous experiences and knowledge. Therefore, we propose a new metric learning
framework: lifelong metric learning (LML), which only utilizes the data of the
new task to train the metric model while preserving the original capabilities.
More specifically, the proposed LML maintains a common subspace for all learned
metrics, named lifelong dictionary, transfers knowledge from the common
subspace to each new metric task with task-specific idiosyncrasy, and redefines
the common subspace over time to maximize performance across all metric tasks.
For model optimization, we apply online passive aggressive optimization
algorithm to solve the proposed LML framework, where the lifelong dictionary
and task-specific partition are optimized alternatively and consecutively.
Finally, we evaluate our approach by analyzing several multi-task metric
learning datasets. Extensive experimental results demonstrate effectiveness and
efficiency of the proposed framework.
| no_new_dataset | 0.95018 |
1706.03112 | Rameswar Panda | Rameswar Panda, Amran Bhuiyan, Vittorio Murino, Amit K. Roy-Chowdhury | Unsupervised Adaptive Re-identification in Open World Dynamic Camera
Networks | CVPR 2017 Spotlight | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification is an open and challenging problem in computer
vision. Existing approaches have concentrated on either designing the best
feature representation or learning optimal matching metrics in a static setting
where the number of cameras are fixed in a network. Most approaches have
neglected the dynamic and open world nature of the re-identification problem,
where a new camera may be temporarily inserted into an existing system to get
additional information. To address such a novel and very practical problem, we
propose an unsupervised adaptation scheme for re-identification models in a
dynamic camera network. First, we formulate a domain perceptive
re-identification method based on geodesic flow kernel that can effectively
find the best source camera (already installed) to adapt with a newly
introduced target camera, without requiring a very expensive training phase.
Second, we introduce a transitive inference algorithm for re-identification
that can exploit the information from best source camera to improve the
accuracy across other camera pairs in a network of multiple cameras. Extensive
experiments on four benchmark datasets demonstrate that the proposed approach
significantly outperforms the state-of-the-art unsupervised learning based
alternatives whilst being extremely efficient to compute.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2017 20:17:55 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Panda",
"Rameswar",
""
],
[
"Bhuiyan",
"Amran",
""
],
[
"Murino",
"Vittorio",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
]
] | TITLE: Unsupervised Adaptive Re-identification in Open World Dynamic Camera
Networks
ABSTRACT: Person re-identification is an open and challenging problem in computer
vision. Existing approaches have concentrated on either designing the best
feature representation or learning optimal matching metrics in a static setting
where the number of cameras are fixed in a network. Most approaches have
neglected the dynamic and open world nature of the re-identification problem,
where a new camera may be temporarily inserted into an existing system to get
additional information. To address such a novel and very practical problem, we
propose an unsupervised adaptation scheme for re-identification models in a
dynamic camera network. First, we formulate a domain perceptive
re-identification method based on geodesic flow kernel that can effectively
find the best source camera (already installed) to adapt with a newly
introduced target camera, without requiring a very expensive training phase.
Second, we introduce a transitive inference algorithm for re-identification
that can exploit the information from best source camera to improve the
accuracy across other camera pairs in a network of multiple cameras. Extensive
experiments on four benchmark datasets demonstrate that the proposed approach
significantly outperforms the state-of-the-art unsupervised learning based
alternatives whilst being extremely efficient to compute.
| no_new_dataset | 0.948917 |
1706.03114 | Rameswar Panda | Rameswar Panda, Amit K. Roy-Chowdhury | Collaborative Summarization of Topic-Related Videos | CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large collections of videos are grouped into clusters by a topic keyword,
such as Eiffel Tower or Surfing, with many important visual concepts repeating
across them. Such a topically close set of videos have mutual influence on each
other, which could be used to summarize one of them by exploiting information
from others in the set. We build on this intuition to develop a novel approach
to extract a summary that simultaneously captures both important
particularities arising in the given video, as well as, generalities identified
from the set of videos. The topic-related videos provide visual context to
identify the important parts of the video being summarized. We achieve this by
developing a collaborative sparse optimization method which can be efficiently
solved by a half-quadratic minimization algorithm. Our work builds upon the
idea of collaborative techniques from information retrieval and natural
language processing, which typically use the attributes of other similar
objects to predict the attribute of a given object. Experiments on two
challenging and diverse datasets well demonstrate the efficacy of our approach
over state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2017 20:23:43 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Panda",
"Rameswar",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
]
] | TITLE: Collaborative Summarization of Topic-Related Videos
ABSTRACT: Large collections of videos are grouped into clusters by a topic keyword,
such as Eiffel Tower or Surfing, with many important visual concepts repeating
across them. Such a topically close set of videos have mutual influence on each
other, which could be used to summarize one of them by exploiting information
from others in the set. We build on this intuition to develop a novel approach
to extract a summary that simultaneously captures both important
particularities arising in the given video, as well as, generalities identified
from the set of videos. The topic-related videos provide visual context to
identify the important parts of the video being summarized. We achieve this by
developing a collaborative sparse optimization method which can be efficiently
solved by a half-quadratic minimization algorithm. Our work builds upon the
idea of collaborative techniques from information retrieval and natural
language processing, which typically use the attributes of other similar
objects to predict the attribute of a given object. Experiments on two
challenging and diverse datasets well demonstrate the efficacy of our approach
over state-of-the-art methods.
| no_new_dataset | 0.948202 |
1706.03121 | Rameswar Panda | Rameswar Panda, Amit K. Roy-Chowdhury | Multi-View Surveillance Video Summarization via Joint Embedding and
Sparse Optimization | IEEE Trans. on Multimedia, 2017 (In Press) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most traditional video summarization methods are designed to generate
effective summaries for single-view videos, and thus they cannot fully exploit
the complicated intra and inter-view correlations in summarizing multi-view
videos in a camera network. In this paper, with the aim of summarizing
multi-view videos, we introduce a novel unsupervised framework via joint
embedding and sparse representative selection. The objective function is
two-fold. The first is to capture the multi-view correlations via an embedding,
which helps in extracting a diverse set of representatives. The second is to
use a `2;1- norm to model the sparsity while selecting representative shots for
the summary. We propose to jointly optimize both of the objectives, such that
embedding can not only characterize the correlations, but also indicate the
requirements of sparse representative selection. We present an efficient
alternating algorithm based on half-quadratic minimization to solve the
proposed non-smooth and non-convex objective with convergence analysis. A key
advantage of the proposed approach with respect to the state-of-the-art is that
it can summarize multi-view videos without assuming any prior
correspondences/alignment between them, e.g., uncalibrated camera networks.
Rigorous experiments on several multi-view datasets demonstrate that our
approach clearly outperforms the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2017 20:56:20 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Panda",
"Rameswar",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
]
] | TITLE: Multi-View Surveillance Video Summarization via Joint Embedding and
Sparse Optimization
ABSTRACT: Most traditional video summarization methods are designed to generate
effective summaries for single-view videos, and thus they cannot fully exploit
the complicated intra and inter-view correlations in summarizing multi-view
videos in a camera network. In this paper, with the aim of summarizing
multi-view videos, we introduce a novel unsupervised framework via joint
embedding and sparse representative selection. The objective function is
two-fold. The first is to capture the multi-view correlations via an embedding,
which helps in extracting a diverse set of representatives. The second is to
use an ℓ2,1-norm to model the sparsity while selecting representative shots for
the summary. We propose to jointly optimize both of the objectives, such that
embedding can not only characterize the correlations, but also indicate the
requirements of sparse representative selection. We present an efficient
alternating algorithm based on half-quadratic minimization to solve the
proposed non-smooth and non-convex objective with convergence analysis. A key
advantage of the proposed approach with respect to the state-of-the-art is that
it can summarize multi-view videos without assuming any prior
correspondences/alignment between them, e.g., uncalibrated camera networks.
Rigorous experiments on several multi-view datasets demonstrate that our
approach clearly outperforms the state-of-the-art methods.
| no_new_dataset | 0.944689 |
1706.03205 | Xiang Wang | Xiang Wang, Xiangnan He, Liqiang Nie, Tat-Seng Chua | Item Silk Road: Recommending Items from Information Domains to Social
Users | 10 pages, 7 figures, SIGIR 2017 | null | 10.1145/3077136.3080771 | null | cs.IR cs.AI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online platforms can be divided into information-oriented and social-oriented
domains. The former refers to forums or E-commerce sites that emphasize
user-item interactions, like Trip.com and Amazon; whereas the latter refers to
social networking services (SNSs) that have rich user-user connections, such as
Facebook and Twitter. Despite their heterogeneity, these two domains can be
bridged by a few overlapping users, dubbed as bridge users. In this work, we
address the problem of cross-domain social recommendation, i.e., recommending
relevant items of information domains to potential users of social networks. To
our knowledge, this is a new problem that has rarely been studied before.
Existing cross-domain recommender systems are unsuitable for this task since
they have either focused on homogeneous information domains or assumed that
users are fully overlapped. Towards this end, we present a novel Neural Social
Collaborative Ranking (NSCR) approach, which seamlessly sews up the user-item
interactions in information domains and user-user connections in SNSs. In the
information domain part, the attributes of users and items are leveraged to
strengthen the embedding learning of users and items. In the SNS part, the
embeddings of bridge users are propagated to learn the embeddings of other
non-bridge users. Extensive experiments on two real-world datasets demonstrate
the effectiveness and rationality of our NSCR method.
| [
{
"version": "v1",
"created": "Sat, 10 Jun 2017 08:58:02 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Wang",
"Xiang",
""
],
[
"He",
"Xiangnan",
""
],
[
"Nie",
"Liqiang",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | TITLE: Item Silk Road: Recommending Items from Information Domains to Social
Users
ABSTRACT: Online platforms can be divided into information-oriented and social-oriented
domains. The former refers to forums or E-commerce sites that emphasize
user-item interactions, like Trip.com and Amazon; whereas the latter refers to
social networking services (SNSs) that have rich user-user connections, such as
Facebook and Twitter. Despite their heterogeneity, these two domains can be
bridged by a few overlapping users, dubbed as bridge users. In this work, we
address the problem of cross-domain social recommendation, i.e., recommending
relevant items of information domains to potential users of social networks. To
our knowledge, this is a new problem that has rarely been studied before.
Existing cross-domain recommender systems are unsuitable for this task since
they have either focused on homogeneous information domains or assumed that
users are fully overlapped. Towards this end, we present a novel Neural Social
Collaborative Ranking (NSCR) approach, which seamlessly sews up the user-item
interactions in information domains and user-user connections in SNSs. In the
information domain part, the attributes of users and items are leveraged to
strengthen the embedding learning of users and items. In the SNS part, the
embeddings of bridge users are propagated to learn the embeddings of other
non-bridge users. Extensive experiments on two real-world datasets demonstrate
the effectiveness and rationality of our NSCR method.
| no_new_dataset | 0.94801 |
1706.03206 | Veronica Estrada-Galinanes | Veronica del Carmen Estrada | Analysis of Anomalies in the Internet Traffic Observed at the Campus
Network Gateway | Master Thesis, January 14th 2011, Graduate School of
Interdisciplinary Information Studies, University of Tokyo | null | null | 10m096404-01 | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A considerable portion of the machine learning literature applied to
intrusion detection uses outdated data sets based on a simulated network with a
limited environment. Moreover, flaws usually appear in datasets and the way we
handle them may impact on measurements. Finally, the detection capacity of
intrusion detection is highly influenced by the system configuration. We focus
on a topic rarely investigated: the characterization of anomalies in a large
network environment. Intrusion Detection Systems (IDSs) are used to detect
exploits or other attacks that raise alarms. These anomalous events usually
receive less attention than attack alarms, causing them to be frequently
overlooked by security administrators. However, the observation of this
activity contributes to understanding the network traffic characteristics. On one
hand, abnormal behaviors may be legitimate, e.g., misinterpreted protocols or
malfunctioning network equipment, but on the other hand an attacker may
intentionally craft packets to introduce anomalies to evade monitoring systems.
Anomalies found in operational network environments may indicate cases of
evasion attacks, application bugs, and a wide variety of factors that highly
influence intrusion detection performance. This study explores the nature of
anomalies found in U-Tokyo Network using cooperatively Bro and Snort IDS among
other resources. We analyze 6.5 TB of compressed binary tcpdump data
representing 12 hours of network traffic. Our major contributions can be
summarized as follows: 1) reporting the anomalies observed in real, up-to-date traffic
from a large academic network environment, and documenting problems in research
that may lead to wrong results due to misinterpretations of data or
misconfigurations in software; 2) assessing the quality of data by analyzing
the potential and the real problems in the capture process.
| [
{
"version": "v1",
"created": "Sat, 10 Jun 2017 09:00:51 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Estrada",
"Veronica del Carmen",
""
]
] | TITLE: Analysis of Anomalies in the Internet Traffic Observed at the Campus
Network Gateway
ABSTRACT: A considerable portion of the machine learning literature applied to
intrusion detection uses outdated data sets based on a simulated network with a
limited environment. Moreover, flaws usually appear in datasets and the way we
handle them may affect measurements. Finally, the detection capacity of
intrusion detection is highly influenced by the system configuration. We focus
on a topic rarely investigated: the characterization of anomalies in a large
network environment. Intrusion Detection Systems (IDSs) are used to detect
exploits or other attacks that raise alarms. These anomalous events usually
receive less attention than attack alarms, causing them to be frequently
overlooked by security administrators. However, the observation of this
activity contributes to understanding the network traffic characteristics. On one
hand, abnormal behaviors may be legitimate, e.g., misinterpreted protocols or
malfunctioning network equipment, but on the other hand an attacker may
intentionally craft packets to introduce anomalies to evade monitoring systems.
Anomalies found in operational network environments may indicate cases of
evasion attacks, application bugs, and a wide variety of factors that highly
influence intrusion detection performance. This study explores the nature of
anomalies found in U-Tokyo Network using cooperatively Bro and Snort IDS among
other resources. We analyze 6.5 TB of compressed binary tcpdump data
representing 12 hours of network traffic. Our major contributions can be
summarized as follows: 1) reporting the anomalies observed in real, up-to-date traffic
from a large academic network environment, and documenting problems in research
that may lead to wrong results due to misinterpretations of data or
misconfigurations in software; 2) assessing the quality of data by analyzing
the potential and the real problems in the capture process.
| no_new_dataset | 0.938576 |
1706.03249 | Rishabh Mehrotra | Rishabh Mehrotra and Prasanta Bhattacharya | Characterizing and Predicting Supply-side Engagement on
Crowd-contributed Video Sharing Platforms | 8 pages, ICTIR 2017 | null | null | null | cs.HC cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video sharing and entertainment websites have rapidly grown in popularity and
now constitute some of the most visited websites on the Internet. Despite the
active user engagement on these online video-sharing platforms, most recent
research on online media platforms has restricted itself to networking-based
social media sites, like Facebook or Twitter. We depart from previous
studies in the online media space that have focused exclusively on demand-side
user engagement, by modeling the supply-side of the crowd-contributed videos on
this platform. The current study is among the first to perform a large-scale
empirical study using longitudinal video upload data from a large online video
platform. The modeling and subsequent prediction of video uploads is made
complicated by the heterogeneity of video types (e.g. popular vs. niche video
genres), and the inherent time trend effects associated with media uploads. We
identify distinct genre-clusters from our dataset and employ a self-exciting
Hawkes point-process model on each of these clusters to fully specify and
estimate the video upload process. Additionally, we go beyond prediction to
disentangle potential factors that govern user engagement and determine the
video upload rates, which improves our analysis with additional explanatory
power. Our findings show that using a relatively parsimonious point-process
model, we are able to achieve higher model fit, and predict video uploads to
the platform with a higher accuracy than competing models. The findings from
this study can benefit platform owners in better understanding how their
supply-side users engage with their site over time. We also offer a robust
method for performing media upload prediction that is likely to be
generalizable across media platforms which demonstrate similar temporal and
genre-level heterogeneity.
| [
{
"version": "v1",
"created": "Sat, 10 Jun 2017 16:26:48 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Mehrotra",
"Rishabh",
""
],
[
"Bhattacharya",
"Prasanta",
""
]
] | TITLE: Characterizing and Predicting Supply-side Engagement on
Crowd-contributed Video Sharing Platforms
ABSTRACT: Video sharing and entertainment websites have rapidly grown in popularity and
now constitute some of the most visited websites on the Internet. Despite the
active user engagement on these online video-sharing platforms, most recent
research on online media platforms has restricted itself to networking-based
social media sites, like Facebook or Twitter. We depart from previous
studies in the online media space that have focused exclusively on demand-side
user engagement, by modeling the supply-side of the crowd-contributed videos on
this platform. The current study is among the first to perform a large-scale
empirical study using longitudinal video upload data from a large online video
platform. The modeling and subsequent prediction of video uploads is made
complicated by the heterogeneity of video types (e.g. popular vs. niche video
genres), and the inherent time trend effects associated with media uploads. We
identify distinct genre-clusters from our dataset and employ a self-exciting
Hawkes point-process model on each of these clusters to fully specify and
estimate the video upload process. Additionally, we go beyond prediction to
disentangle potential factors that govern user engagement and determine the
video upload rates, which improves our analysis with additional explanatory
power. Our findings show that using a relatively parsimonious point-process
model, we are able to achieve higher model fit, and predict video uploads to
the platform with a higher accuracy than competing models. The findings from
this study can benefit platform owners in better understanding how their
supply-side users engage with their site over time. We also offer a robust
method for performing media upload prediction that is likely to be
generalizable across media platforms which demonstrate similar temporal and
genre-level heterogeneity.
| no_new_dataset | 0.943138 |
1706.03256 | Soheil Khorram | John Gideon, Soheil Khorram, Zakaria Aldeneh, Dimitrios Dimitriadis,
Emily Mower Provost | Progressive Neural Networks for Transfer Learning in Emotion Recognition | 5 pages, 4 figures, to appear in the proceedings of Interspeech 2017 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Many paralinguistic tasks are closely related and thus representations
learned in one domain can be leveraged for another. In this paper, we
investigate how knowledge can be transferred between three paralinguistic
tasks: speaker, emotion, and gender recognition. Further, we extend this
problem to cross-dataset tasks, asking how knowledge captured in one emotion
dataset can be transferred to another. We focus on progressive neural networks
and compare these networks to the conventional deep learning method of
pre-training and fine-tuning. Progressive neural networks provide a way to
transfer knowledge and avoid the forgetting effect present when pre-training
neural networks on different tasks. Our experiments demonstrate that: (1)
emotion recognition can benefit from using representations originally learned
for different paralinguistic tasks and (2) transfer learning can effectively
leverage additional datasets to improve the performance of emotion recognition
systems.
| [
{
"version": "v1",
"created": "Sat, 10 Jun 2017 17:26:20 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Gideon",
"John",
""
],
[
"Khorram",
"Soheil",
""
],
[
"Aldeneh",
"Zakaria",
""
],
[
"Dimitriadis",
"Dimitrios",
""
],
[
"Provost",
"Emily Mower",
""
]
] | TITLE: Progressive Neural Networks for Transfer Learning in Emotion Recognition
ABSTRACT: Many paralinguistic tasks are closely related and thus representations
learned in one domain can be leveraged for another. In this paper, we
investigate how knowledge can be transferred between three paralinguistic
tasks: speaker, emotion, and gender recognition. Further, we extend this
problem to cross-dataset tasks, asking how knowledge captured in one emotion
dataset can be transferred to another. We focus on progressive neural networks
and compare these networks to the conventional deep learning method of
pre-training and fine-tuning. Progressive neural networks provide a way to
transfer knowledge and avoid the forgetting effect present when pre-training
neural networks on different tasks. Our experiments demonstrate that: (1)
emotion recognition can benefit from using representations originally learned
for different paralinguistic tasks and (2) transfer learning can effectively
leverage additional datasets to improve the performance of emotion recognition
systems.
| no_new_dataset | 0.948822 |
1706.03367 | Carlos G\'omez-Rodr\'iguez | Daniel Fern\'andez-Gonz\'alez, Carlos G\'omez-Rodr\'iguez | A Full Non-Monotonic Transition System for Unrestricted Non-Projective
Parsing | 11 pages. Accepted for publication at ACL 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Restricted non-monotonicity has been shown beneficial for the projective
arc-eager dependency parser in previous research, as posterior decisions can
repair mistakes made in previous states due to the lack of information. In this
paper, we propose a novel, fully non-monotonic transition system based on the
non-projective Covington algorithm. As a non-monotonic system requires
exploration of erroneous actions during the training process, we develop
several non-monotonic variants of the recently defined dynamic oracle for the
Covington parser, based on tight approximations of the loss. Experiments on
datasets from the CoNLL-X and CoNLL-XI shared tasks show that a non-monotonic
dynamic oracle outperforms the monotonic version in the majority of languages.
| [
{
"version": "v1",
"created": "Sun, 11 Jun 2017 16:04:42 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Fernández-González",
"Daniel",
""
],
[
"Gómez-Rodríguez",
"Carlos",
""
]
] | TITLE: A Full Non-Monotonic Transition System for Unrestricted Non-Projective
Parsing
ABSTRACT: Restricted non-monotonicity has been shown beneficial for the projective
arc-eager dependency parser in previous research, as posterior decisions can
repair mistakes made in previous states due to the lack of information. In this
paper, we propose a novel, fully non-monotonic transition system based on the
non-projective Covington algorithm. As a non-monotonic system requires
exploration of erroneous actions during the training process, we develop
several non-monotonic variants of the recently defined dynamic oracle for the
Covington parser, based on tight approximations of the loss. Experiments on
datasets from the CoNLL-X and CoNLL-XI shared tasks show that a non-monotonic
dynamic oracle outperforms the monotonic version in the majority of languages.
| no_new_dataset | 0.946597 |
1706.03412 | Evgeny Burnaev | Vladislav Ishimtsev, Ivan Nazarov, Alexander Bernstein and Evgeny
Burnaev | Conformal k-NN Anomaly Detector for Univariate Data Streams | 15 pages, 2 figures, 7 tables | null | null | null | stat.ML cs.DS stat.AP stat.CO stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anomalies in time-series data give essential and often actionable information
in many applications. In this paper we consider a model-free anomaly detection
method for univariate time-series which adapts to non-stationarity in the data
stream and provides probabilistic abnormality scores based on the conformal
prediction paradigm. Despite its simplicity the method performs on par with
complex prediction-based models on the Numenta Anomaly Detection benchmark and
the Yahoo! S5 dataset.
| [
{
"version": "v1",
"created": "Sun, 11 Jun 2017 21:45:24 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Ishimtsev",
"Vladislav",
""
],
[
"Nazarov",
"Ivan",
""
],
[
"Bernstein",
"Alexander",
""
],
[
"Burnaev",
"Evgeny",
""
]
] | TITLE: Conformal k-NN Anomaly Detector for Univariate Data Streams
ABSTRACT: Anomalies in time-series data give essential and often actionable information
in many applications. In this paper we consider a model-free anomaly detection
method for univariate time-series which adapts to non-stationarity in the data
stream and provides probabilistic abnormality scores based on the conformal
prediction paradigm. Despite its simplicity the method performs on par with
complex prediction-based models on the Numenta Anomaly Detection benchmark and
the Yahoo! S5 dataset.
| no_new_dataset | 0.953319 |
1706.03449 | Arman Cohan | Arman Cohan, Nazli Goharian | Scientific document summarization via citation contextualization and
scientific discourse | Preprint. The final publication is available at Springer via
http://dx.doi.org/10.1007/s00799-017-0216-8, International Journal on Digital
Libraries (IJDL) 2017 | null | 10.1007/s00799-017-0216-8 | null | cs.CL cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid growth of scientific literature has made it difficult for the
researchers to quickly learn about the developments in their respective fields.
Scientific document summarization addresses this challenge by providing
summaries of the important contributions of scientific papers. We present a
framework for scientific summarization which takes advantage of the citations
and the scientific discourse structure. Citation texts often lack the evidence
and context to support the content of the cited paper and are even sometimes
inaccurate. We first address the problem of inaccuracy of the citation texts by
finding the relevant context from the cited paper. We propose three approaches
for contextualizing citations which are based on query reformulation, word
embeddings, and supervised learning. We then train a model to identify the
discourse facets for each citation. We finally propose a method for summarizing
scientific papers by leveraging the faceted citations and their corresponding
contexts. We evaluate our proposed method on two scientific summarization
datasets in the biomedical and computational linguistics domains. Extensive
evaluation results show that our methods can improve over the state of the art
by large margins.
| [
{
"version": "v1",
"created": "Mon, 12 Jun 2017 03:21:38 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Cohan",
"Arman",
""
],
[
"Goharian",
"Nazli",
""
]
] | TITLE: Scientific document summarization via citation contextualization and
scientific discourse
ABSTRACT: The rapid growth of scientific literature has made it difficult for the
researchers to quickly learn about the developments in their respective fields.
Scientific document summarization addresses this challenge by providing
summaries of the important contributions of scientific papers. We present a
framework for scientific summarization which takes advantage of the citations
and the scientific discourse structure. Citation texts often lack the evidence
and context to support the content of the cited paper and are even sometimes
inaccurate. We first address the problem of inaccuracy of the citation texts by
finding the relevant context from the cited paper. We propose three approaches
for contextualizing citations which are based on query reformulation, word
embeddings, and supervised learning. We then train a model to identify the
discourse facets for each citation. We finally propose a method for summarizing
scientific papers by leveraging the faceted citations and their corresponding
contexts. We evaluate our proposed method on two scientific summarization
datasets in the biomedical and computational linguistics domains. Extensive
evaluation results show that our methods can improve over the state of the art
by large margins.
| no_new_dataset | 0.946695 |
1706.03509 | Veronika Cheplygina | Veronika Cheplygina and Pim Moeskops and Mitko Veta and Behdad Dasht
Bozorg and Josien Pluim | Exploring the similarity of medical imaging classification problems | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised learning is ubiquitous in medical image analysis. In this paper we
consider the problem of meta-learning -- predicting which methods will perform
well in an unseen classification problem, given previous experience with other
classification problems. We investigate the first step of such an approach: how
to quantify the similarity of different classification problems. We
characterize datasets sampled from six classification problems by performance
ranks of simple classifiers, and define the similarity by the inverse of
Euclidean distance in this meta-feature space. We visualize the similarities in
a 2D space, where meaningful clusters start to emerge, and show that the
proposed representation can be used to classify datasets according to their
origin with 89.3% accuracy. These findings, together with the observations of
recent trends in machine learning, suggest that meta-learning could be a
valuable tool for the medical imaging community.
| [
{
"version": "v1",
"created": "Mon, 12 Jun 2017 08:28:17 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Cheplygina",
"Veronika",
""
],
[
"Moeskops",
"Pim",
""
],
[
"Veta",
"Mitko",
""
],
[
"Bozorg",
"Behdad Dasht",
""
],
[
"Pluim",
"Josien",
""
]
] | TITLE: Exploring the similarity of medical imaging classification problems
ABSTRACT: Supervised learning is ubiquitous in medical image analysis. In this paper we
consider the problem of meta-learning -- predicting which methods will perform
well in an unseen classification problem, given previous experience with other
classification problems. We investigate the first step of such an approach: how
to quantify the similarity of different classification problems. We
characterize datasets sampled from six classification problems by performance
ranks of simple classifiers, and define the similarity by the inverse of
Euclidean distance in this meta-feature space. We visualize the similarities in
a 2D space, where meaningful clusters start to emerge, and show that the
proposed representation can be used to classify datasets according to their
origin with 89.3% accuracy. These findings, together with the observations of
recent trends in machine learning, suggest that meta-learning could be a
valuable tool for the medical imaging community.
| no_new_dataset | 0.948155 |
1706.03581 | Artsiom Ablavatski | Artsiom Ablavatski, Shijian Lu and Jianfei Cai | Enriched Deep Recurrent Visual Attention Model for Multiple Object
Recognition | null | null | 10.1109/WACV.2017.113 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We design an Enriched Deep Recurrent Visual Attention Model (EDRAM) - an
improved attention-based architecture for multiple object recognition. The
proposed model is a fully differentiable unit that can be optimized end-to-end
by using Stochastic Gradient Descent (SGD). The Spatial Transformer (ST) was
employed as the visual attention mechanism, which allows the model to learn the geometric
transformation of objects within images. With the combination of the Spatial
Transformer and the powerful recurrent architecture, the proposed EDRAM can
localize and recognize objects simultaneously. EDRAM has been evaluated on two
publicly available datasets including MNIST Cluttered (with 70K cluttered
digits) and SVHN (with up to 250k real world images of house numbers).
Experiments show that it obtains superior performance as compared with the
state-of-the-art models.
| [
{
"version": "v1",
"created": "Mon, 12 Jun 2017 11:55:35 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Ablavatski",
"Artsiom",
""
],
[
"Lu",
"Shijian",
""
],
[
"Cai",
"Jianfei",
""
]
] | TITLE: Enriched Deep Recurrent Visual Attention Model for Multiple Object
Recognition
ABSTRACT: We design an Enriched Deep Recurrent Visual Attention Model (EDRAM) - an
improved attention-based architecture for multiple object recognition. The
proposed model is a fully differentiable unit that can be optimized end-to-end
by using Stochastic Gradient Descent (SGD). The Spatial Transformer (ST) was
employed as the visual attention mechanism, which allows the model to learn the geometric
transformation of objects within images. With the combination of the Spatial
Transformer and the powerful recurrent architecture, the proposed EDRAM can
localize and recognize objects simultaneously. EDRAM has been evaluated on two
publicly available datasets including MNIST Cluttered (with 70K cluttered
digits) and SVHN (with up to 250k real world images of house numbers).
Experiments show that it obtains superior performance as compared with the
state-of-the-art models.
| no_new_dataset | 0.947332 |
1706.03725 | Zhiyuan Shi | Zhiyuan Shi, Timothy M. Hospedales, Tao Xiang | Transferring a Semantic Representation for Person Re-Identification and
Search | cvpr 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning semantic attributes for person re-identification and
description-based person search has gained increasing interest due to
attributes' great potential as a pose and view-invariant representation.
However, existing attribute-centric approaches have thus far underperformed
state-of-the-art conventional approaches. This is due to their non-scalable
need for extensive domain (camera) specific annotation. In this paper we
present a new semantic attribute learning approach for person re-identification
and search. Our model is trained on existing fashion photography datasets --
either weakly or strongly labelled. It can then be transferred and adapted to
provide a powerful semantic description of surveillance person detections,
without requiring any surveillance domain supervision. The resulting
representation is useful for both unsupervised and supervised person
re-identification, achieving state-of-the-art and near state-of-the-art
performance respectively. Furthermore, as a semantic representation it allows
description-based person search to be integrated within the same framework.
| [
{
"version": "v1",
"created": "Mon, 12 Jun 2017 16:52:57 GMT"
}
] | 2017-06-13T00:00:00 | [
[
"Shi",
"Zhiyuan",
""
],
[
"Hospedales",
"Timothy M.",
""
],
[
"Xiang",
"Tao",
""
]
] | TITLE: Transferring a Semantic Representation for Person Re-Identification and
Search
ABSTRACT: Learning semantic attributes for person re-identification and
description-based person search has gained increasing interest due to
attributes' great potential as a pose and view-invariant representation.
However, existing attribute-centric approaches have thus far underperformed
state-of-the-art conventional approaches. This is due to their non-scalable
need for extensive domain (camera) specific annotation. In this paper we
present a new semantic attribute learning approach for person re-identification
and search. Our model is trained on existing fashion photography datasets --
either weakly or strongly labelled. It can then be transferred and adapted to
provide a powerful semantic description of surveillance person detections,
without requiring any surveillance domain supervision. The resulting
representation is useful for both unsupervised and supervised person
re-identification, achieving state-of-the-art and near state-of-the-art
performance respectively. Furthermore, as a semantic representation it allows
description-based person search to be integrated within the same framework.
| no_new_dataset | 0.947817 |
1507.05738 | Serena Yeung | Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg
Mori, Li Fei-Fei | Every Moment Counts: Dense Detailed Labeling of Actions in Complex
Videos | To appear in IJCV | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Every moment counts in action recognition. A comprehensive understanding of
human activity in video requires labeling every frame according to the actions
occurring, placing multiple labels densely over a video sequence. To study this
problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new
dataset of dense labels over unconstrained internet videos. Modeling multiple,
dense labels benefits from temporal relations within and across classes. We
define a novel variant of long short-term memory (LSTM) deep networks for
modeling these temporal relations via multiple input and output connections. We
show that this model improves action labeling accuracy and further enables
deeper understanding tasks ranging from structured retrieval to action
prediction.
| [
{
"version": "v1",
"created": "Tue, 21 Jul 2015 08:07:50 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Jul 2015 22:09:30 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Jun 2017 10:42:09 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Yeung",
"Serena",
""
],
[
"Russakovsky",
"Olga",
""
],
[
"Jin",
"Ning",
""
],
[
"Andriluka",
"Mykhaylo",
""
],
[
"Mori",
"Greg",
""
],
[
"Fei-Fei",
"Li",
""
]
] | TITLE: Every Moment Counts: Dense Detailed Labeling of Actions in Complex
Videos
ABSTRACT: Every moment counts in action recognition. A comprehensive understanding of
human activity in video requires labeling every frame according to the actions
occurring, placing multiple labels densely over a video sequence. To study this
problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new
dataset of dense labels over unconstrained internet videos. Modeling multiple,
dense labels benefits from temporal relations within and across classes. We
define a novel variant of long short-term memory (LSTM) deep networks for
modeling these temporal relations via multiple input and output connections. We
show that this model improves action labeling accuracy and further enables
deeper understanding tasks ranging from structured retrieval to action
prediction.
| new_dataset | 0.951818 |
1605.09068 | Michael Lash | Michael T. Lash, Qihang Lin, W. Nick Street and Jennifer G. Robinson | A budget-constrained inverse classification framework for smooth
classifiers | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inverse classification is the process of manipulating an instance such that
it is more likely to conform to a specific class. Past methods that address
such a problem have shortcomings. Greedy methods make changes that are overly
radical, often relying on data that is strictly discrete. Other methods rely on
certain data points, the presence of which cannot be guaranteed. In this paper
we propose a general framework and method that overcomes these and other
limitations. The formulation of our method can use any differentiable
classification function. We demonstrate the method by using logistic regression
and Gaussian kernel SVMs. We constrain the inverse classification to occur on
features that can actually be changed, each of which incurs an individual cost.
We further constrain such changes to fall within a certain level of cumulative
change (budget). Our framework can also accommodate the estimation of
(indirectly changeable) features whose values change as a consequence of
actions taken. Furthermore, we propose two methods for specifying feature-value
ranges that result in different algorithmic behavior. We apply our method, and
a proposed sensitivity analysis-based benchmark method, to two freely available
datasets: Student Performance from the UCI Machine Learning Repository and a
real world cardiovascular disease dataset. The results obtained demonstrate the
validity and benefits of our framework and method.
| [
{
"version": "v1",
"created": "Sun, 29 May 2016 21:50:25 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Feb 2017 22:30:53 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Jun 2017 18:27:39 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Lash",
"Michael T.",
""
],
[
"Lin",
"Qihang",
""
],
[
"Street",
"W. Nick",
""
],
[
"Robinson",
"Jennifer G.",
""
]
] | TITLE: A budget-constrained inverse classification framework for smooth
classifiers
ABSTRACT: Inverse classification is the process of manipulating an instance such that
it is more likely to conform to a specific class. Past methods that address
such a problem have shortcomings. Greedy methods make changes that are overly
radical, often relying on data that is strictly discrete. Other methods rely on
certain data points, the presence of which cannot be guaranteed. In this paper
we propose a general framework and method that overcomes these and other
limitations. The formulation of our method can use any differentiable
classification function. We demonstrate the method by using logistic regression
and Gaussian kernel SVMs. We constrain the inverse classification to occur on
features that can actually be changed, each of which incurs an individual cost.
We further constrain such changes to fall within a certain level of cumulative
change (budget). Our framework can also accommodate the estimation of
(indirectly changeable) features whose values change as a consequence of
actions taken. Furthermore, we propose two methods for specifying feature-value
ranges that result in different algorithmic behavior. We apply our method, and
a proposed sensitivity analysis-based benchmark method, to two freely available
datasets: Student Performance from the UCI Machine Learning Repository and a
real world cardiovascular disease dataset. The results obtained demonstrate the
validity and benefits of our framework and method.
| no_new_dataset | 0.938913 |
1611.05424 | Alejandro Newell | Alejandro Newell, Zhiao Huang, Jia Deng | Associative Embedding: End-to-End Learning for Joint Detection and
Grouping | Added results on MS-COCO and updated results on MPII | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce associative embedding, a novel method for supervising
convolutional neural networks for the task of detection and grouping. A number
of computer vision problems can be framed in this manner including multi-person
pose estimation, instance segmentation, and multi-object tracking. Usually the
grouping of detections is achieved with multi-stage pipelines; instead, we
propose an approach that teaches a network to simultaneously output detections
and group assignments. This technique can be easily integrated into any
state-of-the-art network architecture that produces pixel-wise predictions. We
show how to apply this method to both multi-person pose estimation and instance
segmentation and report state-of-the-art performance for multi-person pose on
the MPII and MS-COCO datasets.
| [
{
"version": "v1",
"created": "Wed, 16 Nov 2016 20:04:28 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2017 16:13:48 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Newell",
"Alejandro",
""
],
[
"Huang",
"Zhiao",
""
],
[
"Deng",
"Jia",
""
]
] | TITLE: Associative Embedding: End-to-End Learning for Joint Detection and
Grouping
ABSTRACT: We introduce associative embedding, a novel method for supervising
convolutional neural networks for the task of detection and grouping. A number
of computer vision problems can be framed in this manner including multi-person
pose estimation, instance segmentation, and multi-object tracking. Usually the
grouping of detections is achieved with multi-stage pipelines; instead, we
propose an approach that teaches a network to simultaneously output detections
and group assignments. This technique can be easily integrated into any
state-of-the-art network architecture that produces pixel-wise predictions. We
show how to apply this method to both multi-person pose estimation and instance
segmentation and report state-of-the-art performance for multi-person pose on
the MPII and MS-COCO datasets.
| no_new_dataset | 0.947624 |
1611.06440 | Pavlo Molchanov | Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, Jan Kautz | Pruning Convolutional Neural Networks for Resource Efficient Inference | 17 pages, 14 figures, ICLR 2017 paper | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102), relying only on first-order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach.
| [
{
"version": "v1",
"created": "Sat, 19 Nov 2016 22:48:30 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2017 19:53:26 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Molchanov",
"Pavlo",
""
],
[
"Tyree",
"Stephen",
""
],
[
"Karras",
"Tero",
""
],
[
"Aila",
"Timo",
""
],
[
"Kautz",
"Jan",
""
]
] | TITLE: Pruning Convolutional Neural Networks for Resource Efficient Inference
ABSTRACT: We propose a new formulation for pruning convolutional kernels in neural
networks to enable efficient inference. We interleave greedy criteria-based
pruning with fine-tuning by backpropagation - a computationally efficient
procedure that maintains good generalization in the pruned network. We propose
a new criterion based on Taylor expansion that approximates the change in the
cost function induced by pruning network parameters. We focus on transfer
learning, where large pretrained networks are adapted to specialized tasks. The
proposed criterion demonstrates superior performance compared to other
criteria, e.g. the norm of kernel weights or feature map activation, for
pruning large CNNs after adaptation to fine-grained classification tasks
(Birds-200 and Flowers-102), relying only on first-order gradient
information. We also show that pruning can lead to more than 10x theoretical
(5x practical) reduction in adapted 3D-convolutional filters with a small drop
in accuracy in a recurrent gesture classifier. Finally, we show results for the
large-scale ImageNet dataset to emphasize the flexibility of our approach.
| no_new_dataset | 0.948728 |
1702.07944 | Simon Du | Simon S. Du, Jianshu Chen, Lihong Li, Lin Xiao, Dengyong Zhou | Stochastic Variance Reduction Methods for Policy Evaluation | Accepted by ICML 2017 | null | null | null | cs.LG cs.AI cs.SY math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Policy evaluation is a crucial step in many reinforcement-learning
procedures; it estimates a value function that predicts states' long-term
value under a given policy. In this paper, we focus on policy evaluation with
linear function approximation over a fixed dataset. We first transform the
empirical policy evaluation problem into a (quadratic) convex-concave saddle
point problem, and then present a primal-dual batch gradient method, as well as
two stochastic variance reduction methods for solving the problem. These
algorithms scale linearly in both sample size and feature dimension. Moreover,
they achieve linear convergence even when the saddle-point problem has only
strong concavity in the dual variables but no strong convexity in the primal
variables. Numerical experiments on benchmark problems demonstrate the
effectiveness of our methods.
| [
{
"version": "v1",
"created": "Sat, 25 Feb 2017 20:15:55 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2017 06:02:47 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Du",
"Simon S.",
""
],
[
"Chen",
"Jianshu",
""
],
[
"Li",
"Lihong",
""
],
[
"Xiao",
"Lin",
""
],
[
"Zhou",
"Dengyong",
""
]
] | TITLE: Stochastic Variance Reduction Methods for Policy Evaluation
ABSTRACT: Policy evaluation is a crucial step in many reinforcement-learning
procedures; it estimates a value function that predicts states' long-term
value under a given policy. In this paper, we focus on policy evaluation with
linear function approximation over a fixed dataset. We first transform the
empirical policy evaluation problem into a (quadratic) convex-concave saddle
point problem, and then present a primal-dual batch gradient method, as well as
two stochastic variance reduction methods for solving the problem. These
algorithms scale linearly in both sample size and feature dimension. Moreover,
they achieve linear convergence even when the saddle-point problem has only
strong concavity in the dual variables but no strong convexity in the primal
variables. Numerical experiments on benchmark problems demonstrate the
effectiveness of our methods.
| no_new_dataset | 0.950319 |
1702.08396 | Shengjia Zhao | Shengjia Zhao, Jiaming Song, Stefano Ermon | Learning Hierarchical Features from Generative Models | ICML'2017 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have been shown to be very successful at learning
feature hierarchies in supervised learning tasks. Generative models, on the
other hand, have benefited less from hierarchical models with multiple layers
of latent variables. In this paper, we prove that hierarchical latent variable
models do not take advantage of the hierarchical structure when trained with
existing variational methods, and provide some limitations on the kind of
features existing models can learn. Finally, we propose an alternative
architecture that does not suffer from these limitations. Our model is able to
learn highly interpretable and disentangled hierarchical features on several
natural image datasets with no task specific regularization or prior knowledge.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2017 17:43:34 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2017 17:19:15 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Zhao",
"Shengjia",
""
],
[
"Song",
"Jiaming",
""
],
[
"Ermon",
"Stefano",
""
]
] | TITLE: Learning Hierarchical Features from Generative Models
ABSTRACT: Deep neural networks have been shown to be very successful at learning
feature hierarchies in supervised learning tasks. Generative models, on the
other hand, have benefited less from hierarchical models with multiple layers
of latent variables. In this paper, we prove that hierarchical latent variable
models do not take advantage of the hierarchical structure when trained with
existing variational methods, and provide some limitations on the kind of
features existing models can learn. Finally, we propose an alternative
architecture that does not suffer from these limitations. Our model is able to
learn highly interpretable and disentangled hierarchical features on several
natural image datasets with no task specific regularization or prior knowledge.
| no_new_dataset | 0.944842 |
1705.04416 | Joshua Peterson | Dawn Chen, Joshua C. Peterson, Thomas L. Griffiths | Evaluating vector-space models of analogy | 6 pages, 4 figures, In the Proceedings of the 39th Annual Conference
of the Cognitive Science Society | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector-space representations provide geometric tools for reasoning about the
similarity of a set of objects and their relationships. Recent machine learning
methods for deriving vector-space embeddings of words (e.g., word2vec) have
achieved considerable success in natural language processing. These vector
spaces have also been shown to exhibit a surprising capacity to capture verbal
analogies, with similar results for natural images, giving new life to a
classic model of analogies as parallelograms that was first proposed by
cognitive scientists. We evaluate the parallelogram model of analogy as applied
to modern word embeddings, providing a detailed analysis of the extent to which
this approach captures human relational similarity judgments in a large
benchmark dataset. We find that some semantic relationships are better
captured than others. We then provide evidence for deeper limitations of the
parallelogram model based on the intrinsic geometric constraints of vector
spaces, paralleling classic results for first-order similarity.
| [
{
"version": "v1",
"created": "Fri, 12 May 2017 01:26:23 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Jun 2017 20:52:12 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Chen",
"Dawn",
""
],
[
"Peterson",
"Joshua C.",
""
],
[
"Griffiths",
"Thomas L.",
""
]
] | TITLE: Evaluating vector-space models of analogy
ABSTRACT: Vector-space representations provide geometric tools for reasoning about the
similarity of a set of objects and their relationships. Recent machine learning
methods for deriving vector-space embeddings of words (e.g., word2vec) have
achieved considerable success in natural language processing. These vector
spaces have also been shown to exhibit a surprising capacity to capture verbal
analogies, with similar results for natural images, giving new life to a
classic model of analogies as parallelograms that was first proposed by
cognitive scientists. We evaluate the parallelogram model of analogy as applied
to modern word embeddings, providing a detailed analysis of the extent to which
this approach captures human relational similarity judgments in a large
benchmark dataset. We find that some semantic relationships are better
captured than others. We then provide evidence for deeper limitations of the
parallelogram model based on the intrinsic geometric constraints of vector
spaces, paralleling classic results for first-order similarity.
| no_new_dataset | 0.951233 |
1706.02384 | Virag Shah | Virag Shah, Anne Bouillard, Francois Baccelli | Delay Comparison of Delivery and Coding Policies in Data Clusters | 13 pages, 4 figures | null | null | null | cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key function of cloud infrastructure is to store and deliver diverse files,
e.g., scientific datasets, social network information, videos, etc. In such
systems, for the purpose of fast and reliable delivery, files are divided into
chunks, replicated or erasure-coded, and disseminated across servers. It is
neither known in general how delays scale with the size of a request nor how
delays compare under different policies for coding, data dissemination, and
delivery.
Motivated by these questions, we develop and explore a set of evolution
equations as a unified model which captures the above features. These equations
allow for both efficient simulation and mathematical analysis of several
delivery policies under general statistical assumptions. In particular, we
quantify in what sense a workload aware delivery policy performs better than a
workload agnostic policy. Under a dynamic or stochastic setting, the sample
path comparison of these policies does not hold in general. The comparison is
shown to hold under the weaker increasing convex stochastic ordering, still
stronger than the comparison of averages.
This result further allows us to obtain insightful computable performance
bounds. For example, we show that in a system where files are divided into
chunks of equal size, replicated or erasure-coded, and disseminated across
servers at random, the job delays increase sub-logarithmically in the request
size for small and medium-sized files but linearly for large files.
| [
{
"version": "v1",
"created": "Wed, 7 Jun 2017 21:27:04 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2017 14:35:54 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Shah",
"Virag",
""
],
[
"Bouillard",
"Anne",
""
],
[
"Baccelli",
"Francois",
""
]
] | TITLE: Delay Comparison of Delivery and Coding Policies in Data Clusters
ABSTRACT: A key function of cloud infrastructure is to store and deliver diverse files,
e.g., scientific datasets, social network information, videos, etc. In such
systems, for the purpose of fast and reliable delivery, files are divided into
chunks, replicated or erasure-coded, and disseminated across servers. It is
neither known in general how delays scale with the size of a request nor how
delays compare under different policies for coding, data dissemination, and
delivery.
Motivated by these questions, we develop and explore a set of evolution
equations as a unified model which captures the above features. These equations
allow for both efficient simulation and mathematical analysis of several
delivery policies under general statistical assumptions. In particular, we
quantify in what sense a workload aware delivery policy performs better than a
workload agnostic policy. Under a dynamic or stochastic setting, the sample
path comparison of these policies does not hold in general. The comparison is
shown to hold under the weaker increasing convex stochastic ordering, still
stronger than the comparison of averages.
This result further allows us to obtain insightful computable performance
bounds. For example, we show that in a system where files are divided into
chunks of equal size, replicated or erasure-coded, and disseminated across
servers at random, the job delays increase sub-logarithmically in the request
size for small and medium-sized files but linearly for large files.
| no_new_dataset | 0.946843 |
1706.02493 | Zhe Wang | Zhe Wang, Hongsheng Li, Wanli Ouyang, and Xiaogang Wang | Learning Deep Representations for Scene Labeling with Semantic Context
Guided Supervision | 13 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene labeling is a challenging classification problem where each input image
requires a pixel-level prediction map. Recently, deep-learning-based methods
have shown their effectiveness on solving this problem. However, we argue that
the large intra-class variation provides ambiguous training information and
hinders the deep models' ability to learn more discriminative deep feature
representations. Unlike existing methods that mainly utilize semantic context
for regularizing or smoothing the prediction map, we design novel supervisions
from semantic context for learning better deep feature representations. Two
types of semantic context, scene names of images and label map statistics of
image patches, are exploited to create label hierarchies between the original
classes and newly created subclasses as the learning supervisions. Such
subclasses show lower intra-class variation, and help the CNN detect more
meaningful visual patterns and learn more effective deep features. Novel
training strategies and a network structure that take advantage of such label
hierarchies are introduced. Our proposed method is evaluated extensively on
four popular datasets, Stanford Background (8 classes), SIFTFlow (33 classes),
Barcelona (170 classes) and LM+Sun datasets (232 classes) with 3 different
networks structures, and show state-of-the-art performance. The experiments
show that our proposed method makes deep models learn more discriminative
feature representations without increasing model size or complexity.
| [
{
"version": "v1",
"created": "Thu, 8 Jun 2017 09:44:00 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jun 2017 04:15:55 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Wang",
"Zhe",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Wang",
"Xiaogang",
""
]
] | TITLE: Learning Deep Representations for Scene Labeling with Semantic Context
Guided Supervision
ABSTRACT: Scene labeling is a challenging classification problem where each input image
requires a pixel-level prediction map. Recently, deep-learning-based methods
have shown their effectiveness on solving this problem. However, we argue that
the large intra-class variation provides ambiguous training information and
hinders the deep models' ability to learn more discriminative deep feature
representations. Unlike existing methods that mainly utilize semantic context
for regularizing or smoothing the prediction map, we design novel supervisions
from semantic context for learning better deep feature representations. Two
types of semantic context, scene names of images and label map statistics of
image patches, are exploited to create label hierarchies between the original
classes and newly created subclasses as the learning supervisions. Such
subclasses show lower intra-class variation, and help CNN detect more
meaningful visual patterns and learn more effective deep features. Novel
training strategies and a network structure that take advantage of such label
hierarchies are introduced. Our proposed method is evaluated extensively on
four popular datasets, Stanford Background (8 classes), SIFTFlow (33 classes),
Barcelona (170 classes) and LM+Sun datasets (232 classes) with 3 different
network structures, and shows state-of-the-art performance. The experiments
show that our proposed method makes deep models learn more discriminative
feature representations without increasing model size or complexity.
| no_new_dataset | 0.951414 |
1706.02863 | Shuo Yang | Shuo Yang, Yuanjun Xiong, Chen Change Loy, Xiaoou Tang | Face Detection through Scale-Friendly Deep Convolutional Networks | 12 pages, 10 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we share our experience in designing a convolutional
network-based face detector that could handle faces of an extremely wide range
of scales. We show that faces with different scales can be modeled through a
specialized set of deep convolutional networks with different structures. These
detectors can be seamlessly integrated into a single unified network that can
be trained end-to-end. In contrast to existing deep models that are designed
for a wide scale range, our network does not require an image pyramid input and
the model is of modest complexity. Our network, dubbed ScaleFace, achieves
promising performance on WIDER FACE and FDDB datasets with practical runtime
speed. Specifically, our method achieves 76.4 average precision on the
challenging WIDER FACE dataset and 96% recall rate on the FDDB dataset with 7
frames per second (fps) for a 900 × 1300 input image.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2017 08:20:56 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Yang",
"Shuo",
""
],
[
"Xiong",
"Yuanjun",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Face Detection through Scale-Friendly Deep Convolutional Networks
ABSTRACT: In this paper, we share our experience in designing a convolutional
network-based face detector that could handle faces of an extremely wide range
of scales. We show that faces with different scales can be modeled through a
specialized set of deep convolutional networks with different structures. These
detectors can be seamlessly integrated into a single unified network that can
be trained end-to-end. In contrast to existing deep models that are designed
for a wide scale range, our network does not require an image pyramid input and
the model is of modest complexity. Our network, dubbed ScaleFace, achieves
promising performance on WIDER FACE and FDDB datasets with practical runtime
speed. Specifically, our method achieves 76.4 average precision on the
challenging WIDER FACE dataset and 96% recall rate on the FDDB dataset with 7
frames per second (fps) for a 900 × 1300 input image.
| no_new_dataset | 0.954732 |
1706.02867 | Milad Niknejad | Milad Niknejad, Jose M. Bioucas-Dias, Mario A. T. Figueiredo | Class-specific Poisson denoising by patch-based importance sampling | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of recovering images degraded by
Poisson noise, where the image is known to belong to a specific class. In the
proposed method, a dataset of clean patches from images of the class of
interest is clustered using multivariate Gaussian distributions. In order to
recover the noisy image, each noisy patch is assigned to one of these
distributions, and the corresponding minimum mean squared error (MMSE) estimate
is obtained. We propose to use a self-normalized importance sampling approach,
which is a method of the Monte Carlo family, for both determining the most
likely distribution and approximating the MMSE estimate of the clean patch.
Experimental results show that our proposed method outperforms other methods
for Poisson denoising at a low SNR regime.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2017 08:47:26 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Niknejad",
"Milad",
""
],
[
"Bioucas-Dias",
"Jose M.",
""
],
[
"Figueiredo",
"Mario A. T.",
""
]
] | TITLE: Class-specific Poisson denoising by patch-based importance sampling
ABSTRACT: In this paper, we address the problem of recovering images degraded by
Poisson noise, where the image is known to belong to a specific class. In the
proposed method, a dataset of clean patches from images of the class of
interest is clustered using multivariate Gaussian distributions. In order to
recover the noisy image, each noisy patch is assigned to one of these
distributions, and the corresponding minimum mean squared error (MMSE) estimate
is obtained. We propose to use a self-normalized importance sampling approach,
which is a method of the Monte Carlo family, for both determining the most
likely distribution and approximating the MMSE estimate of the clean patch.
Experimental results show that our proposed method outperforms other methods
for Poisson denoising at a low SNR regime.
| no_new_dataset | 0.945349 |
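The self-normalized importance-sampling step described in the abstract above can be illustrated with a small sketch. This is a minimal NumPy illustration of the general idea only, not the authors' implementation: the prior-as-proposal choice, the Gaussian cluster parameters, and the toy patch size are assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def snis_mmse_estimate(y, mu, cov, n_samples=2000):
    """Approximate E[x | y] for a noisy patch y under one Gaussian cluster
    prior N(mu, cov) and Poisson noise, using self-normalized importance
    sampling with the prior as the proposal distribution."""
    # Candidate clean patches drawn from the cluster prior (the proposal).
    x = rng.multivariate_normal(mu, cov, size=n_samples)
    x = np.clip(x, 1e-6, None)                # Poisson rates must stay positive
    # Unnormalized log-weights: Poisson log-likelihood of y under each candidate.
    log_w = np.sum(y * np.log(x) - x, axis=1)
    log_w -= log_w.max()                      # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                              # self-normalization step
    return w @ x                              # weighted mean approximates the MMSE estimate

# Toy usage: a 4-pixel "patch" generated from the same cluster.
mu, cov = np.full(4, 10.0), 4.0 * np.eye(4)
y = rng.poisson(mu)
print(snis_mmse_estimate(y, mu, cov))
```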
1706.02883 | Jingjing Gong | Xipeng Qiu, Jingjing Gong, Xuanjing Huang | Overview of the NLPCC 2017 Shared Task: Chinese News Headline
Categorization | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we give an overview of the shared task at the CCF Conference
on Natural Language Processing \& Chinese Computing (NLPCC 2017): Chinese News
Headline Categorization. The dataset of this shared task consists of 18 classes,
12,000 short texts along with corresponding labels for each class. The dataset
and example code can be accessed at
https://github.com/FudanNLP/nlpcc2017_news_headline_categorization.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2017 10:17:24 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Qiu",
"Xipeng",
""
],
[
"Gong",
"Jingjing",
""
],
[
"Huang",
"Xuanjing",
""
]
] | TITLE: Overview of the NLPCC 2017 Shared Task: Chinese News Headline
Categorization
ABSTRACT: In this paper, we give an overview of the shared task at the CCF Conference
on Natural Language Processing \& Chinese Computing (NLPCC 2017): Chinese News
Headline Categorization. The dataset of this shared task consists of 18 classes,
12,000 short texts along with corresponding labels for each class. The dataset
and example code can be accessed at
https://github.com/FudanNLP/nlpcc2017_news_headline_categorization.
| new_dataset | 0.788909 |
1706.02884 | Serena Yeung | Serena Yeung, Vignesh Ramanathan, Olga Russakovsky, Liyue Shen, Greg
Mori, Li Fei-Fei | Learning to Learn from Noisy Web Videos | To appear in CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the simultaneously very diverse and intricately fine-grained
set of possible human actions is a critical open problem in computer vision.
Manually labeling training videos is feasible for some action classes but
doesn't scale to the full long-tailed distribution of actions. A promising way
to address this is to leverage noisy data from web queries to learn new
actions, using semi-supervised or "webly-supervised" approaches. However, these
methods typically do not learn domain-specific knowledge, or rely on iterative
hand-tuned data labeling policies. In this work, we instead propose a
reinforcement learning-based formulation for selecting the right examples for
training a classifier from noisy web search results. Our method uses Q-learning
to learn a data labeling policy on a small labeled training dataset, and then
uses this to automatically label noisy web data for new visual concepts.
Experiments on the challenging Sports-1M action recognition benchmark as well
as on additional fine-grained and newly emerging action classes demonstrate
that our method is able to learn good labeling policies for noisy data and use
this to learn accurate visual concept classifiers.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2017 10:25:05 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Yeung",
"Serena",
""
],
[
"Ramanathan",
"Vignesh",
""
],
[
"Russakovsky",
"Olga",
""
],
[
"Shen",
"Liyue",
""
],
[
"Mori",
"Greg",
""
],
[
"Fei-Fei",
"Li",
""
]
] | TITLE: Learning to Learn from Noisy Web Videos
ABSTRACT: Understanding the simultaneously very diverse and intricately fine-grained
set of possible human actions is a critical open problem in computer vision.
Manually labeling training videos is feasible for some action classes but
doesn't scale to the full long-tailed distribution of actions. A promising way
to address this is to leverage noisy data from web queries to learn new
actions, using semi-supervised or "webly-supervised" approaches. However, these
methods typically do not learn domain-specific knowledge, or rely on iterative
hand-tuned data labeling policies. In this work, we instead propose a
reinforcement learning-based formulation for selecting the right examples for
training a classifier from noisy web search results. Our method uses Q-learning
to learn a data labeling policy on a small labeled training dataset, and then
uses this to automatically label noisy web data for new visual concepts.
Experiments on the challenging Sports-1M action recognition benchmark as well
as on additional fine-grained and newly emerging action classes demonstrate
that our method is able to learn good labeling policies for noisy data and use
this to learn accurate visual concept classifiers.
| no_new_dataset | 0.947769 |
1706.02897 | Djallel Bouneffouf | Djallel Bouneffouf, Irina Rish, Guillermo A. Cecchi | Bandit Models of Human Behavior: Reward Processing in Mental Disorders | Conference on Artificial General Intelligence, AGI-17 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drawing an inspiration from behavioral studies of human decision making, we
propose here a general parametric framework for the multi-armed bandit problem,
which extends the standard Thompson Sampling approach to incorporate reward
processing biases associated with several neurological and psychiatric
conditions, including Parkinson's and Alzheimer's diseases,
attention-deficit/hyperactivity disorder (ADHD), addiction, and chronic pain.
We demonstrate empirically that the proposed parametric approach can often
outperform the baseline Thompson Sampling on a variety of datasets. Moreover,
from the behavioral modeling perspective, our parametric framework can be
viewed as a first step towards a unifying computational model capturing reward
processing abnormalities across multiple mental conditions.
| [
{
"version": "v1",
"created": "Wed, 7 Jun 2017 18:36:12 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Bouneffouf",
"Djallel",
""
],
[
"Rish",
"Irina",
""
],
[
"Cecchi",
"Guillermo A.",
""
]
] | TITLE: Bandit Models of Human Behavior: Reward Processing in Mental Disorders
ABSTRACT: Drawing an inspiration from behavioral studies of human decision making, we
propose here a general parametric framework for the multi-armed bandit problem,
which extends the standard Thompson Sampling approach to incorporate reward
processing biases associated with several neurological and psychiatric
conditions, including Parkinson's and Alzheimer's diseases,
attention-deficit/hyperactivity disorder (ADHD), addiction, and chronic pain.
We demonstrate empirically that the proposed parametric approach can often
outperform the baseline Thompson Sampling on a variety of datasets. Moreover,
from the behavioral modeling perspective, our parametric framework can be
viewed as a first step towards a unifying computational model capturing reward
processing abnormalities across multiple mental conditions.
| no_new_dataset | 0.938407 |
1706.03015 | Jie Miao | Jie Miao, Xiangmin Xu, Xiaofen Xing, Dacheng Tao | Manifold Regularized Slow Feature Analysis for Dynamic Texture
Recognition | 12 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic textures exist in various forms, e.g., fire, smoke, and traffic jams,
but recognizing dynamic texture is challenging due to the complex temporal
variations. In this paper, we present a novel approach stemming from slow
feature analysis (SFA) for dynamic texture recognition. SFA extracts slowly
varying features from fast varying signals. Fortunately, SFA is capable of
extracting invariant representations from dynamic textures. However, complex
temporal variations require high-level semantic representations to fully
achieve temporal slowness, and thus it is impractical to learn a high-level
representation from dynamic textures directly by SFA. In order to learn a
robust low-level feature to resolve the complexity of dynamic textures, we
propose manifold regularized SFA (MR-SFA) by exploring the neighbor
relationship of the initial state of each temporal transition and retaining the
locality of their variations. Therefore, the learned features are not only
slowly varying, but also partly predictable. MR-SFA for dynamic texture
recognition is proposed in the following steps: 1) learning feature extraction
functions as convolution filters by MR-SFA, 2) extracting local features by
convolution and pooling, and 3) employing Fisher vectors to form a video-level
representation for classification. Experimental results on dynamic texture and
dynamic scene recognition datasets validate the effectiveness of the proposed
approach.
| [
{
"version": "v1",
"created": "Fri, 9 Jun 2017 16:06:25 GMT"
}
] | 2017-06-12T00:00:00 | [
[
"Miao",
"Jie",
""
],
[
"Xu",
"Xiangmin",
""
],
[
"Xing",
"Xiaofen",
""
],
[
"Tao",
"Dacheng",
""
]
] | TITLE: Manifold Regularized Slow Feature Analysis for Dynamic Texture
Recognition
ABSTRACT: Dynamic textures exist in various forms, e.g., fire, smoke, and traffic jams,
but recognizing dynamic texture is challenging due to the complex temporal
variations. In this paper, we present a novel approach stemming from slow
feature analysis (SFA) for dynamic texture recognition. SFA extracts slowly
varying features from fast varying signals. Fortunately, SFA is capable of
extracting invariant representations from dynamic textures. However, complex
temporal variations require high-level semantic representations to fully
achieve temporal slowness, and thus it is impractical to learn a high-level
representation from dynamic textures directly by SFA. In order to learn a
robust low-level feature to resolve the complexity of dynamic textures, we
propose manifold regularized SFA (MR-SFA) by exploring the neighbor
relationship of the initial state of each temporal transition and retaining the
locality of their variations. Therefore, the learned features are not only
slowly varying, but also partly predictable. MR-SFA for dynamic texture
recognition is proposed in the following steps: 1) learning feature extraction
functions as convolution filters by MR-SFA, 2) extracting local features by
convolution and pooling, and 3) employing Fisher vectors to form a video-level
representation for classification. Experimental results on dynamic texture and
dynamic scene recognition datasets validate the effectiveness of the proposed
approach.
| no_new_dataset | 0.946794 |
1703.04816 | Dirk Weissenborn | Dirk Weissenborn and Georg Wiese and Laura Seiffe | Making Neural QA as Simple as Possible but not Simpler | null | null | null | null | cs.CL cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent development of large-scale question answering (QA) datasets triggered
a substantial amount of research into end-to-end neural architectures for QA.
Increasingly complex systems have been conceived without comparison to simpler
neural baseline systems that would justify their complexity. In this work, we
propose a simple heuristic that guides the development of neural baseline
systems for the extractive QA task. We find that there are two ingredients
necessary for building a high-performing neural QA system: first, the awareness
of question words while processing the context and second, a composition
function that goes beyond simple bag-of-words modeling, such as recurrent
neural networks. Our results show that FastQA, a system that meets these two
requirements, can achieve very competitive performance compared with existing
models. We argue that this surprising finding puts results of previous systems
and the complexity of recent QA datasets into perspective.
| [
{
"version": "v1",
"created": "Tue, 14 Mar 2017 23:09:45 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2017 07:40:23 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Jun 2017 14:12:35 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Weissenborn",
"Dirk",
""
],
[
"Wiese",
"Georg",
""
],
[
"Seiffe",
"Laura",
""
]
] | TITLE: Making Neural QA as Simple as Possible but not Simpler
ABSTRACT: Recent development of large-scale question answering (QA) datasets triggered
a substantial amount of research into end-to-end neural architectures for QA.
Increasingly complex systems have been conceived without comparison to simpler
neural baseline systems that would justify their complexity. In this work, we
propose a simple heuristic that guides the development of neural baseline
systems for the extractive QA task. We find that there are two ingredients
necessary for building a high-performing neural QA system: first, the awareness
of question words while processing the context and second, a composition
function that goes beyond simple bag-of-words modeling, such as recurrent
neural networks. Our results show that FastQA, a system that meets these two
requirements, can achieve very competitive performance compared with existing
models. We argue that this surprising finding puts results of previous systems
and the complexity of recent QA datasets into perspective.
| no_new_dataset | 0.943556 |
1703.09507 | Rajeev Ranjan | Rajeev Ranjan, Carlos D. Castillo and Rama Chellappa | L2-constrained Softmax Loss for Discriminative Face Verification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the performance of face verification systems has
significantly improved using deep convolutional neural networks (DCNNs). A
typical pipeline for face verification includes training a deep network for
subject classification with softmax loss, using the penultimate layer output as
the feature descriptor, and generating a cosine similarity score given a pair
of face images. The softmax loss function does not optimize the features to
have higher similarity score for positive pairs and lower similarity score for
negative pairs, which leads to a performance gap. In this paper, we add an
L2-constraint to the feature descriptors which restricts them to lie on a
hypersphere of a fixed radius. This module can be easily implemented using
existing deep learning frameworks. We show that integrating this simple step in
the training pipeline significantly boosts the performance of face
verification. Specifically, we achieve state-of-the-art results on the
challenging IJB-A dataset, achieving True Accept Rate of 0.909 at False Accept
Rate 0.0001 on the face verification protocol. Additionally, we achieve
state-of-the-art performance on LFW dataset with an accuracy of 99.78%, and
competing performance on YTF dataset with accuracy of 96.08%.
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 11:19:50 GMT"
},
{
"version": "v2",
"created": "Mon, 8 May 2017 21:30:51 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Jun 2017 18:58:18 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Ranjan",
"Rajeev",
""
],
[
"Castillo",
"Carlos D.",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: L2-constrained Softmax Loss for Discriminative Face Verification
ABSTRACT: In recent years, the performance of face verification systems has
significantly improved using deep convolutional neural networks (DCNNs). A
typical pipeline for face verification includes training a deep network for
subject classification with softmax loss, using the penultimate layer output as
the feature descriptor, and generating a cosine similarity score given a pair
of face images. The softmax loss function does not optimize the features to
have higher similarity score for positive pairs and lower similarity score for
negative pairs, which leads to a performance gap. In this paper, we add an
L2-constraint to the feature descriptors which restricts them to lie on a
hypersphere of a fixed radius. This module can be easily implemented using
existing deep learning frameworks. We show that integrating this simple step in
the training pipeline significantly boosts the performance of face
verification. Specifically, we achieve state-of-the-art results on the
challenging IJB-A dataset, achieving True Accept Rate of 0.909 at False Accept
Rate 0.0001 on the face verification protocol. Additionally, we achieve
state-of-the-art performance on LFW dataset with an accuracy of 99.78%, and
competing performance on YTF dataset with accuracy of 96.08%.
| no_new_dataset | 0.949529 |
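The core operation in the record above, constraining feature descriptors to a hypersphere of fixed radius before the softmax classifier, can be sketched in a few lines. This is a hedged NumPy illustration rather than the paper's code; the scale `alpha`, the weight shapes, and the toy inputs are assumptions chosen for the example.

```python
import numpy as np

def l2_constrained_logits(features, weights, alpha=16.0, eps=1e-12):
    """Rescale each feature descriptor to a fixed L2 norm `alpha` (a point on a
    hypersphere of radius alpha) before computing the classifier logits."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    constrained = alpha * features / np.maximum(norms, eps)
    return constrained @ weights              # logits fed to the softmax loss

def softmax_cross_entropy(logits, labels):
    """Plain softmax cross-entropy applied to the L2-constrained logits."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy usage with random features, weights, and labels.
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 128))
W = rng.normal(size=(128, 10))
labels = rng.integers(0, 10, size=8)
print(softmax_cross_entropy(l2_constrained_logits(feats, W), labels))
```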
1705.03821 | Djallel Bouneffouf | Djallel Bouneffouf, Irina Rish, Guillermo A. Cecchi, Raphael Feraud | Context Attentive Bandits: Contextual Bandit with Restricted Context | IJCAI 2017 | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a novel formulation of the multi-armed bandit model, which we
call the contextual bandit with restricted context, where only a limited number
of features can be accessed by the learner at every iteration. This novel
formulation is motivated by different online problems arising in clinical
trials, recommender systems and attention modeling. Herein, we adapt the
standard multi-armed bandit algorithm known as Thompson Sampling to take
advantage of our restricted context setting, and propose two novel algorithms,
called the Thompson Sampling with Restricted Context(TSRC) and the Windows
Thompson Sampling with Restricted Context(WTSRC), for handling stationary and
nonstationary environments, respectively. Our empirical results demonstrate
advantages of the proposed approaches on several real-life datasets.
| [
{
"version": "v1",
"created": "Wed, 10 May 2017 15:32:36 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2017 18:40:28 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Bouneffouf",
"Djallel",
""
],
[
"Rish",
"Irina",
""
],
[
"Cecchi",
"Guillermo A.",
""
],
[
"Feraud",
"Raphael",
""
]
] | TITLE: Context Attentive Bandits: Contextual Bandit with Restricted Context
ABSTRACT: We consider a novel formulation of the multi-armed bandit model, which we
call the contextual bandit with restricted context, where only a limited number
of features can be accessed by the learner at every iteration. This novel
formulation is motivated by different online problems arising in clinical
trials, recommender systems and attention modeling. Herein, we adapt the
standard multi-armed bandit algorithm known as Thompson Sampling to take
advantage of our restricted context setting, and propose two novel algorithms,
called the Thompson Sampling with Restricted Context(TSRC) and the Windows
Thompson Sampling with Restricted Context(WTSRC), for handling stationary and
nonstationary environments, respectively. Our empirical results demonstrate
advantages of the proposed approaches on several real-life datasets.
| no_new_dataset | 0.952175 |
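A minimal sketch of the restricted-context idea described above, assuming a linear-Gaussian Thompson Sampling model and a fixed set of observable feature indices; the actual TSRC/WTSRC algorithms choose which features to observe and handle nonstationarity, which this toy version does not.

```python
import numpy as np

class RestrictedContextTS:
    """Linear Thompson Sampling that only observes a fixed subset of the
    context features (a simplification of the restricted-context setting)."""

    def __init__(self, n_arms, allowed_idx, v=1.0):
        self.allowed = np.asarray(allowed_idx)       # observable feature indices
        d = len(self.allowed)
        self.v = v
        self.B = np.stack([np.eye(d) for _ in range(n_arms)])  # per-arm precision
        self.f = np.zeros((n_arms, d))                          # per-arm statistics

    def select(self, context, rng):
        x = context[self.allowed]                    # restricted view of the context
        scores = []
        for B_a, f_a in zip(self.B, self.f):
            mu = np.linalg.solve(B_a, f_a)
            cov = (self.v ** 2) * np.linalg.inv(B_a)
            theta = rng.multivariate_normal(mu, cov) # sample from the posterior
            scores.append(theta @ x)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        x = context[self.allowed]
        self.B[arm] += np.outer(x, x)
        self.f[arm] += reward * x

# Toy usage: 3 arms, 10-dimensional context, only 4 features are visible.
rng = np.random.default_rng(0)
bandit = RestrictedContextTS(n_arms=3, allowed_idx=[0, 2, 5, 7])
for _ in range(200):
    ctx = rng.normal(size=10)
    arm = bandit.select(ctx, rng)
    reward = float(ctx[arm % 4] > 0)                 # arbitrary synthetic reward
    bandit.update(arm, ctx, reward)
```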
1706.02291 | Sharath Adavanne | Sharath Adavanne, Pasi Pertil\"a, Tuomas Virtanen | Sound Event Detection Using Spatial Features and Convolutional Recurrent
Neural Network | Accepted for IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP 2017) | null | null | null | cs.SD cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes to use low-level spatial features extracted from
multichannel audio for sound event detection. We extend the convolutional
recurrent neural network to handle more than one type of these multichannel
features by learning from each of them separately in the initial stages. We
show that instead of concatenating the features of each channel into a single
feature vector the network learns sound events in multichannel audio better
when they are presented as separate layers of a volume. Using the proposed
spatial features over monaural features on the same network gives an absolute
F-score improvement of 6.1% on the publicly available TUT-SED 2016 dataset and
2.7% on the TUT-SED 2009 dataset that is fifteen times larger.
| [
{
"version": "v1",
"created": "Wed, 7 Jun 2017 06:01:48 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Adavanne",
"Sharath",
""
],
[
"Pertilä",
"Pasi",
""
],
[
"Virtanen",
"Tuomas",
""
]
] | TITLE: Sound Event Detection Using Spatial Features and Convolutional Recurrent
Neural Network
ABSTRACT: This paper proposes to use low-level spatial features extracted from
multichannel audio for sound event detection. We extend the convolutional
recurrent neural network to handle more than one type of these multichannel
features by learning from each of them separately in the initial stages. We
show that instead of concatenating the features of each channel into a single
feature vector the network learns sound events in multichannel audio better
when they are presented as separate layers of a volume. Using the proposed
spatial features over monaural features on the same network gives an absolute
F-score improvement of 6.1% on the publicly available TUT-SED 2016 dataset and
2.7% on the TUT-SED 2009 dataset that is fifteen times larger.
| no_new_dataset | 0.954647 |
1706.02292 | Sharath Adavanne | Miroslav Malik, Sharath Adavanne, Konstantinos Drossos, Tuomas
Virtanen, Dasa Ticha, Roman Jarina | Stacked Convolutional and Recurrent Neural Networks for Music Emotion
Recognition | Accepted for Sound and Music Computing (SMC 2017) | null | null | null | cs.SD cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the emotion recognition from musical tracks in the
2-dimensional valence-arousal (V-A) emotional space. We propose a method based
on convolutional (CNN) and recurrent neural networks (RNN), having
significantly fewer parameters compared with the state-of-the-art method for
the same task. We utilize one CNN layer followed by two branches of RNNs
trained separately for arousal and valence. The method was evaluated using the
'MediaEval2015 emotion in music' dataset. We achieved an RMSE of 0.202 for
arousal and 0.268 for valence, which is the best result reported on this
dataset.
| [
{
"version": "v1",
"created": "Wed, 7 Jun 2017 06:06:14 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Malik",
"Miroslav",
""
],
[
"Adavanne",
"Sharath",
""
],
[
"Drossos",
"Konstantinos",
""
],
[
"Virtanen",
"Tuomas",
""
],
[
"Ticha",
"Dasa",
""
],
[
"Jarina",
"Roman",
""
]
] | TITLE: Stacked Convolutional and Recurrent Neural Networks for Music Emotion
Recognition
ABSTRACT: This paper studies the emotion recognition from musical tracks in the
2-dimensional valence-arousal (V-A) emotional space. We propose a method based
on convolutional (CNN) and recurrent neural networks (RNN), having
significantly fewer parameters compared with the state-of-the-art method for
the same task. We utilize one CNN layer followed by two branches of RNNs
trained separately for arousal and valence. The method was evaluated using the
'MediaEval2015 emotion in music' dataset. We achieved an RMSE of 0.202 for
arousal and 0.268 for valence, which is the best result reported on this
dataset.
| no_new_dataset | 0.950411 |
1706.02387 | Jorge Blasco | Jorge Blasco, Thomas M. Chen, Igor Muttik and Markus Roggenbach | Detection of App Collusion Potential Using Logic Programming | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Android is designed with a number of built-in security features such as app
sandboxing and permission-based access controls. Android supports multiple
communication methods for apps to cooperate. This creates a security risk of
app collusion. For instance, a sandboxed app with permission to access
sensitive data might leak that data to another sandboxed app with access to the
internet. In this paper, we present a method to detect potential collusion
between apps. First, we extract from apps all information about their accesses
to protected resources and communications. Then we identify sets of apps that
might be colluding by using rules in first order logic codified in Prolog.
After these steps, more computationally demanding approaches like taint analysis can
focus on the identified sets that show collusion potential. This "filtering"
approach is validated against a dataset of manually crafted colluding apps. We
also demonstrate that our tool scales by running it on a set of more than
50,000 apps collected in the wild. Our tool allowed us to detect a large set of
real apps that used collusion as a synchronization method to maximize the
effects of a payload that was injected into all of them via the same SDK.
| [
{
"version": "v1",
"created": "Wed, 7 Jun 2017 21:36:41 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Blasco",
"Jorge",
""
],
[
"Chen",
"Thomas M.",
""
],
[
"Muttik",
"Igor",
""
],
[
"Roggenbach",
"Markus",
""
]
] | TITLE: Detection of App Collusion Potential Using Logic Programming
ABSTRACT: Android is designed with a number of built-in security features such as app
sandboxing and permission-based access controls. Android supports multiple
communication methods for apps to cooperate. This creates a security risk of
app collusion. For instance, a sandboxed app with permission to access
sensitive data might leak that data to another sandboxed app with access to the
internet. In this paper, we present a method to detect potential collusion
between apps. First, we extract from apps all information about their accesses
to protected resources and communications. Then we identify sets of apps that
might be colluding by using rules in first order logic codified in Prolog.
After these steps, more computationally demanding approaches like taint analysis can
focus on the identified sets that show collusion potential. This "filtering"
approach is validated against a dataset of manually crafted colluding apps. We
also demonstrate that our tool scales by running it on a set of more than
50,000 apps collected in the wild. Our tool allowed us to detect a large set of
real apps that used collusion as a synchronization method to maximize the
effects of a payload that was injected into all of them via the same SDK.
| new_dataset | 0.878314 |
1706.02409 | Shahin Jabbari | Richard Berk, Hoda Heidari, Shahin Jabbari, Matthew Joseph, Michael
Kearns, Jamie Morgenstern, Seth Neel, Aaron Roth | A Convex Framework for Fair Regression | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a flexible family of fairness regularizers for (linear and
logistic) regression problems. These regularizers all enjoy convexity,
permitting fast optimization, and they span the range from notions of group
fairness to strong individual fairness. By varying the weight on the fairness
regularizer, we can compute the efficient frontier of the accuracy-fairness
trade-off on any given dataset, and we measure the severity of this trade-off
via a numerical quantity we call the Price of Fairness (PoF). The centerpiece
of our results is an extensive comparative study of the PoF across six
different datasets in which fairness is a primary consideration.
| [
{
"version": "v1",
"created": "Wed, 7 Jun 2017 23:09:28 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Berk",
"Richard",
""
],
[
"Heidari",
"Hoda",
""
],
[
"Jabbari",
"Shahin",
""
],
[
"Joseph",
"Matthew",
""
],
[
"Kearns",
"Michael",
""
],
[
"Morgenstern",
"Jamie",
""
],
[
"Neel",
"Seth",
""
],
[
"Roth",
"Aaron",
""
]
] | TITLE: A Convex Framework for Fair Regression
ABSTRACT: We introduce a flexible family of fairness regularizers for (linear and
logistic) regression problems. These regularizers all enjoy convexity,
permitting fast optimization, and they span the range from notions of group
fairness to strong individual fairness. By varying the weight on the fairness
regularizer, we can compute the efficient frontier of the accuracy-fairness
trade-off on any given dataset, and we measure the severity of this trade-off
via a numerical quantity we call the Price of Fairness (PoF). The centerpiece
of our results is an extensive comparative study of the PoF across six
different datasets in which fairness is a primary consideration.
| no_new_dataset | 0.949809 |
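One member of the convex-regularizer family described in the abstract above can be sketched as a ridge-style regression with a squared group-gap penalty, optimized by gradient descent. This NumPy sketch is illustrative only; the specific penalty, the weights, and the toy data are assumptions, not the paper's exact formulation.

```python
import numpy as np

def fair_ridge_regression(X, y, group, lam_fair=1.0, lam_ridge=0.1,
                          lr=0.01, n_steps=2000):
    """Linear regression with a convex fairness penalty: the squared gap
    between the mean predictions of two groups, plus a ridge term."""
    n, d = X.shape
    w = np.zeros(d)
    g0, g1 = group == 0, group == 1
    for _ in range(n_steps):
        pred = X @ w
        grad_fit = (2.0 / n) * X.T @ (pred - y)          # squared-error gradient
        gap = pred[g0].mean() - pred[g1].mean()          # group prediction gap
        grad_fair = 2.0 * gap * (X[g0].mean(axis=0) - X[g1].mean(axis=0))
        w -= lr * (grad_fit + lam_fair * grad_fair + 2.0 * lam_ridge * w)
    return w

# Toy usage: sweep the fairness weight to trace an accuracy-fairness trade-off.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
group = rng.integers(0, 2, size=200)
y = X @ rng.normal(size=5) + 0.5 * group
for lam in (0.0, 1.0, 10.0):
    w = fair_ridge_regression(X, y, group, lam_fair=lam)
    mse = float(np.mean((X @ w - y) ** 2))
    gap = float((X[group == 0] @ w).mean() - (X[group == 1] @ w).mean())
    print(f"lambda={lam:5.1f}  mse={mse:.3f}  group gap={gap:+.3f}")
```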
1706.02427 | Duyu Tang | Zhao Yan and Duyu Tang and Nan Duan and Junwei Bao and Yuanhua Lv and
Ming Zhou and Zhoujun Li | Content-Based Table Retrieval for Web Queries | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the connections between unstructured text and semi-structured
tables is an important yet neglected problem in natural language processing. In
this work, we focus on content-based table retrieval. Given a query, the task
is to find the most relevant table from a collection of tables. Further
progress towards improving this area requires powerful models of semantic
matching and richer training and evaluation resources. To remedy this, we
present a ranking based approach, and implement both carefully designed
features and neural network architectures to measure the relevance between a
query and the content of a table. Furthermore, we release an open-domain
dataset that includes 21,113 web queries for 273,816 tables. We conduct
comprehensive experiments on both real world and synthetic datasets. Results
verify the effectiveness of our approach and present the challenges for this
task.
| [
{
"version": "v1",
"created": "Thu, 8 Jun 2017 02:03:32 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Yan",
"Zhao",
""
],
[
"Tang",
"Duyu",
""
],
[
"Duan",
"Nan",
""
],
[
"Bao",
"Junwei",
""
],
[
"Lv",
"Yuanhua",
""
],
[
"Zhou",
"Ming",
""
],
[
"Li",
"Zhoujun",
""
]
] | TITLE: Content-Based Table Retrieval for Web Queries
ABSTRACT: Understanding the connections between unstructured text and semi-structured
tables is an important yet neglected problem in natural language processing. In
this work, we focus on content-based table retrieval. Given a query, the task
is to find the most relevant table from a collection of tables. Further
progress towards improving this area requires powerful models of semantic
matching and richer training and evaluation resources. To remedy this, we
present a ranking based approach, and implement both carefully designed
features and neural network architectures to measure the relevance between a
query and the content of a table. Furthermore, we release an open-domain
dataset that includes 21,113 web queries for 273,816 tables. We conduct
comprehensive experiments on both real world and synthetic datasets. Results
verify the effectiveness of our approach and present the challenges for this
task.
| new_dataset | 0.959116 |
1706.02430 | Zhongliang Yang | Zhongliang Yang, Yu-Jin Zhang, Sadaqat ur Rehman, Yongfeng Huang | Image Captioning with Object Detection and Localization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically generating a natural language description of an image is a task
close to the heart of image understanding. In this paper, we present a
multi-model neural network method closely related to the human visual system
that automatically learns to describe the content of images. Our model consists
of two sub-models: an object detection and localization model, which extracts
the objects and their spatial relationships in images, and a deep recurrent
neural network (RNN) based on long short-term memory (LSTM) units with an
attention mechanism for sentence
generation. Each word of the description will be automatically aligned to
different objects of the input image when it is generated. This is similar to
the attention mechanism of the human visual system. Experimental results on the
COCO dataset showcase the merit of the proposed method, which outperforms
previous benchmark models.
| [
{
"version": "v1",
"created": "Thu, 8 Jun 2017 02:23:33 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Yang",
"Zhongliang",
""
],
[
"Zhang",
"Yu-Jin",
""
],
[
"Rehman",
"Sadaqat ur",
""
],
[
"Huang",
"Yongfeng",
""
]
] | TITLE: Image Captioning with Object Detection and Localization
ABSTRACT: Automatically generating a natural language description of an image is a task
close to the heart of image understanding. In this paper, we present a
multi-model neural network method closely related to the human visual system
that automatically learns to describe the content of images. Our model consists
of two sub-models: an object detection and localization model, which extracts
the objects and their spatial relationships in images, and a deep recurrent
neural network (RNN) based on long short-term memory (LSTM) units with an
attention mechanism for sentence
generation. Each word of the description will be automatically aligned to
different objects of the input image when it is generated. This is similar to
the attention mechanism of the human visual system. Experimental results on the
COCO dataset showcase the merit of the proposed method, which outperforms
previous benchmark models.
| no_new_dataset | 0.948728 |
1706.02434 | D\'ario Oliveira | Dario Augusto Borges Oliveira, Laura Leal-Taixe, Raul Queiroz Feitosa,
Bodo Rosenhahn | Automatic tracking of vessel-like structures from a single starting
point | null | null | 10.1016/j.compmedimag.2015.11.002 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of vascular networks is an important topic in the medical
image analysis community. While most methods focus on single vessel tracking,
the few solutions that exist for tracking complete vascular networks are
usually computationally intensive and require a lot of user interaction. In
this paper we present a method to track full vascular networks iteratively
using a single starting point. Our approach is based on a cloud of sampling
points distributed over concentric spherical layers. We also propose a vessel
model and a metric of how well a sample point fits this model. Then, we
implement the network tracking as a min-cost flow problem, and propose a novel
optimization scheme to iteratively track the vessel structure by inherently
handling bifurcations and paths. The method was tested using both synthetic and
real images. On the 9 different data-sets of synthetic blood vessels, we
achieved maximum accuracies of more than 98\%. We further use the synthetic
data-set to analyse the sensitivity of our method to parameter setting, showing
the robustness of the proposed algorithm. For real images, we used coronary,
carotid and pulmonary data to segment vascular structures and present the
visual results. Still for real images, we present numerical and visual results
for networks of nerve fibers in the olfactory system. Further visual results
also show the potential of our approach for identifying vascular network
topologies. The presented method delivers good results for the several
different datasets tested and has potential for segmenting vessel-like
structures. Also, the topology information, inherently extracted, can be used
for further analysis in computer-aided diagnosis and surgical planning.
Finally, the method's modular aspect holds potential for problem-oriented
adjustments and improvements.
| [
{
"version": "v1",
"created": "Thu, 8 Jun 2017 02:45:27 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Oliveira",
"Dario Augusto Borges",
""
],
[
"Leal-Taixe",
"Laura",
""
],
[
"Feitosa",
"Raul Queiroz",
""
],
[
"Rosenhahn",
"Bodo",
""
]
] | TITLE: Automatic tracking of vessel-like structures from a single starting
point
ABSTRACT: The identification of vascular networks is an important topic in the medical
image analysis community. While most methods focus on single vessel tracking,
the few solutions that exist for tracking complete vascular networks are
usually computationally intensive and require a lot of user interaction. In
this paper we present a method to track full vascular networks iteratively
using a single starting point. Our approach is based on a cloud of sampling
points distributed over concentric spherical layers. We also propose a vessel
model and a metric of how well a sample point fits this model. Then, we
implement the network tracking as a min-cost flow problem, and propose a novel
optimization scheme to iteratively track the vessel structure by inherently
handling bifurcations and paths. The method was tested using both synthetic and
real images. On the 9 different data-sets of synthetic blood vessels, we
achieved maximum accuracies of more than 98\%. We further use the synthetic
data-set to analyse the sensitivity of our method to parameter setting, showing
the robustness of the proposed algorithm. For real images, we used coronary,
carotid and pulmonary data to segment vascular structures and present the
visual results. Still for real images, we present numerical and visual results
for networks of nerve fibers in the olfactory system. Further visual results
also show the potential of our approach for identifying vascular network
topologies. The presented method delivers good results for the several
different datasets tested and has potential for segmenting vessel-like
structures. Also, the topology information, inherently extracted, can be used
for further analysis in computer-aided diagnosis and surgical planning.
Finally, the method's modular aspect holds potential for problem-oriented
adjustments and improvements.
| no_new_dataset | 0.949295 |
1706.02480 | Jeffrey Humpherys | Chris Hettinger, Tanner Christensen, Ben Ehlert, Jeffrey Humpherys,
Tyler Jarvis, and Sean Wade | Forward Thinking: Building and Training Neural Networks One Layer at a
Time | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general framework for training deep neural networks without
backpropagation. This substantially decreases training time and also allows for
construction of deep networks with many sorts of learners, including networks
whose layers are defined by functions that are not easily differentiated, like
decision trees. The main idea is that layers can be trained one at a time, and
once they are trained, the input data are mapped forward through the layer to
create a new learning problem. The process is repeated, transforming the data
through multiple layers, one at a time, rendering a new data set, which is
expected to be better behaved, and on which a final output layer can achieve
good performance. We call this forward thinking and demonstrate a proof of
concept by achieving state-of-the-art accuracy on the MNIST dataset for
convolutional neural networks. We also provide a general mathematical
formulation of forward thinking that allows for other types of deep learning
problems to be considered.
| [
{
"version": "v1",
"created": "Thu, 8 Jun 2017 08:53:00 GMT"
}
] | 2017-06-09T00:00:00 | [
[
"Hettinger",
"Chris",
""
],
[
"Christensen",
"Tanner",
""
],
[
"Ehlert",
"Ben",
""
],
[
"Humpherys",
"Jeffrey",
""
],
[
"Jarvis",
"Tyler",
""
],
[
"Wade",
"Sean",
""
]
] | TITLE: Forward Thinking: Building and Training Neural Networks One Layer at a
Time
ABSTRACT: We present a general framework for training deep neural networks without
backpropagation. This substantially decreases training time and also allows for
construction of deep networks with many sorts of learners, including networks
whose layers are defined by functions that are not easily differentiated, like
decision trees. The main idea is that layers can be trained one at a time, and
once they are trained, the input data are mapped forward through the layer to
create a new learning problem. The process is repeated, transforming the data
through multiple layers, one at a time, rendering a new data set, which is
expected to be better behaved, and on which a final output layer can achieve
good performance. We call this forward thinking and demonstrate a proof of
concept by achieving state-of-the-art accuracy on the MNIST dataset for
convolutional neural networks. We also provide a general mathematical
formulation of forward thinking that allows for other types of deep learning
problems to be considered.
| no_new_dataset | 0.952353 |
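The layer-at-a-time training loop described in the abstract above can be sketched as: train a hidden layer with a temporary output head, freeze it, map the data forward, and repeat. The following is a minimal NumPy sketch under those assumptions; the widths, learning rate, and step counts are placeholders, and the paper's exact procedure (including its convolutional variant) is not reproduced here.

```python
import numpy as np

def train_layer(X, Y, width, lr=0.5, n_steps=300, rng=None):
    """Train one hidden layer plus a temporary softmax head on (X, Y), then
    discard the head and return the frozen hidden mapping."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    P = rng.normal(scale=1.0 / np.sqrt(d), size=(d, width))
    W = np.zeros((width, Y.shape[1]))
    for _ in range(n_steps):
        H = np.tanh(X @ P)
        logits = H @ W
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        G = (probs - Y) / n                       # softmax cross-entropy gradient
        P -= lr * X.T @ ((G @ W.T) * (1.0 - H ** 2))
        W -= lr * H.T @ G
    return lambda Z: np.tanh(Z @ P)               # keep only the hidden layer

def forward_thinking(X, y, n_classes, widths=(64, 64)):
    """Build the network greedily: each trained layer is frozen and its outputs
    become the 'new dataset' for the next layer."""
    Y = np.eye(n_classes)[y]
    layers, rep = [], X
    for width in widths:
        layer = train_layer(rep, Y, width)
        rep = layer(rep)                          # map data forward once
        layers.append(layer)
    W_out = np.linalg.lstsq(rep, Y, rcond=None)[0]    # final readout layer
    return layers, W_out

# Toy usage on a small synthetic problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
layers, W_out = forward_thinking(X, y, n_classes=2)
rep = X
for layer in layers:
    rep = layer(rep)
pred = (rep @ W_out).argmax(axis=1)
print("train accuracy:", float((pred == y).mean()))
```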
1001.1027 | Jascha Sohl-Dickstein | Jascha Sohl-Dickstein, Ching Ming Wang, Bruno A. Olshausen | An Unsupervised Algorithm For Learning Lie Group Transformations | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present several theoretical contributions which allow Lie groups to be fit
to high dimensional datasets. Transformation operators are represented in their
eigen-basis, reducing the computational complexity of parameter estimation to
that of training a linear transformation model. A transformation specific
"blurring" operator is introduced that allows inference to escape local minima
via a smoothing of the transformation space. A penalty on traversed manifold
distance is added which encourages the discovery of sparse, minimal distance,
transformations between states. Both learning and inference are demonstrated
using these methods for the full set of affine transformations on natural image
patches. Transformation operators are then trained on natural video sequences.
It is shown that the learned video transformations provide a better description
of inter-frame differences than the standard motion model based on rigid
translation.
| [
{
"version": "v1",
"created": "Thu, 7 Jan 2010 06:22:56 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jan 2010 07:18:39 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Nov 2011 04:35:48 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Jul 2014 23:34:43 GMT"
},
{
"version": "v5",
"created": "Wed, 7 Jun 2017 17:05:16 GMT"
}
] | 2017-06-08T00:00:00 | [
[
"Sohl-Dickstein",
"Jascha",
""
],
[
"Wang",
"Ching Ming",
""
],
[
"Olshausen",
"Bruno A.",
""
]
] | TITLE: An Unsupervised Algorithm For Learning Lie Group Transformations
ABSTRACT: We present several theoretical contributions which allow Lie groups to be fit
to high dimensional datasets. Transformation operators are represented in their
eigen-basis, reducing the computational complexity of parameter estimation to
that of training a linear transformation model. A transformation specific
"blurring" operator is introduced that allows inference to escape local minima
via a smoothing of the transformation space. A penalty on traversed manifold
distance is added which encourages the discovery of sparse, minimal distance,
transformations between states. Both learning and inference are demonstrated
using these methods for the full set of affine transformations on natural image
patches. Transformation operators are then trained on natural video sequences.
It is shown that the learned video transformations provide a better description
of inter-frame differences than the standard motion model based on rigid
translation.
| no_new_dataset | 0.946399 |
1705.08106 | Juanhui Tu | Hong Liu and Juanhui Tu and Mengyuan Liu | Two-Stream 3D Convolutional Neural Network for Skeleton-Based Action
Recognition | 5 pages, 6 figures, 3 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | It remains a challenge to efficiently extract spatial-temporal information
from skeleton sequences for 3D human action recognition. Although most recent
action recognition methods are based on Recurrent Neural Networks which present
outstanding performance, one of the shortcomings of these methods is the
tendency to overemphasize the temporal information. Since 3D convolutional
neural network (3D CNN) is a powerful tool to simultaneously learn features from
both spatial and temporal dimensions through capturing the correlations between
three dimensional signals, this paper proposes a novel two-stream model using
3D CNN. To our best knowledge, this is the first application of 3D CNN in
skeleton-based action recognition. Our method consists of three stages. First,
skeleton joints are mapped into a 3D coordinate space, encoding the
spatial and temporal information, respectively. Second, 3D CNN models are
separately adopted to extract deep features from two streams. Third, to enhance
the ability of deep features to capture global relationships, we extend every
stream into a multi-temporal version. Extensive experiments on the SmartHome
dataset and the large-scale NTU RGB-D dataset demonstrate that our method
outperforms most RNN-based methods, verifying the complementary property
between spatial and temporal information and the robustness to noise.
| [
{
"version": "v1",
"created": "Tue, 23 May 2017 07:36:51 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2017 11:23:40 GMT"
}
] | 2017-06-08T00:00:00 | [
[
"Liu",
"Hong",
""
],
[
"Tu",
"Juanhui",
""
],
[
"Liu",
"Mengyuan",
""
]
] | TITLE: Two-Stream 3D Convolutional Neural Network for Skeleton-Based Action
Recognition
ABSTRACT: It remains a challenge to efficiently extract spatial-temporal information
from skeleton sequences for 3D human action recognition. Although most recent
action recognition methods are based on Recurrent Neural Networks which present
outstanding performance, one of the shortcomings of these methods is the
tendency to overemphasize the temporal information. Since 3D convolutional
neural network (3D CNN) is a powerful tool to simultaneously learn features from
both spatial and temporal dimensions through capturing the correlations between
three dimensional signals, this paper proposes a novel two-stream model using
3D CNN. To our best knowledge, this is the first application of 3D CNN in
skeleton-based action recognition. Our method consists of three stages. First,
skeleton joints are mapped into a 3D coordinate space, encoding the
spatial and temporal information, respectively. Second, 3D CNN models are
separately adopted to extract deep features from two streams. Third, to enhance
the ability of deep features to capture global relationships, we extend every
stream into a multi-temporal version. Extensive experiments on the SmartHome
dataset and the large-scale NTU RGB-D dataset demonstrate that our method
outperforms most RNN-based methods, verifying the complementary property
between spatial and temporal information and the robustness to noise.
| no_new_dataset | 0.946399 |
1705.08940 | Quentin Bateux | Quentin Bateux, Eric Marchand, J\"urgen Leitner, Francois Chaumette,
Peter Corke | Visual Servoing from Deep Neural Networks | fixed authors list | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a deep neural network-based method to perform high-precision,
robust and real-time 6 DOF visual servoing. The paper describes how to create a
dataset simulating various perturbations (occlusions and lighting conditions)
from a single real-world image of the scene. A convolutional neural network is
fine-tuned using this dataset to estimate the relative pose between two images
of the same scene. The output of the network is then employed in a visual
servoing control scheme. The method converges robustly even in difficult
real-world settings with strong lighting variations and occlusions. A
positioning error of less than one millimeter is obtained in experiments with a
6 DOF robot.
| [
{
"version": "v1",
"created": "Wed, 24 May 2017 19:39:25 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2017 09:26:34 GMT"
}
] | 2017-06-08T00:00:00 | [
[
"Bateux",
"Quentin",
""
],
[
"Marchand",
"Eric",
""
],
[
"Leitner",
"Jürgen",
""
],
[
"Chaumette",
"Francois",
""
],
[
"Corke",
"Peter",
""
]
] | TITLE: Visual Servoing from Deep Neural Networks
ABSTRACT: We present a deep neural network-based method to perform high-precision,
robust and real-time 6 DOF visual servoing. The paper describes how to create a
dataset simulating various perturbations (occlusions and lighting conditions)
from a single real-world image of the scene. A convolutional neural network is
fine-tuned using this dataset to estimate the relative pose between two images
of the same scene. The output of the network is then employed in a visual
servoing control scheme. The method converges robustly even in difficult
real-world settings with strong lighting variations and occlusions. A
positioning error of less than one millimeter is obtained in experiments with a
6 DOF robot.
| no_new_dataset | 0.562436 |
1706.01556 | Yifan Peng | Yifan Peng and Zhiyong Lu | Deep learning for extracting protein-protein interactions from
biomedical literature | Accepted for publication in Proceedings of the 2017 Workshop on
Biomedical Natural Language Processing, 10 pages, 2 figures, 6 tables | null | null | null | cs.CL cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art methods for protein-protein interaction (PPI) extraction are
primarily feature-based or kernel-based by leveraging lexical and syntactic
information. But how to incorporate such knowledge in the recent deep learning
methods remains an open question. In this paper, we propose a multichannel
dependency-based convolutional neural network model (McDepCNN). It applies one
channel to the embedding vector of each word in the sentence, and another
channel to the embedding vector of the head of the corresponding word.
Therefore, the model can use richer information obtained from different
channels. Experiments on two public benchmarking datasets, AIMed and BioInfer,
demonstrate that McDepCNN compares favorably to the state-of-the-art
rich-feature and single-kernel based methods. In addition, McDepCNN achieves
24.4% relative improvement in F1-score over the state-of-the-art methods on
cross-corpus evaluation and 12% improvement in F1-score over kernel-based
methods on "difficult" instances. These results suggest that McDepCNN
generalizes more easily over different corpora, and is capable of capturing
long distance features in the sentences.
| [
{
"version": "v1",
"created": "Mon, 5 Jun 2017 23:09:06 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Jun 2017 00:28:21 GMT"
}
] | 2017-06-08T00:00:00 | [
[
"Peng",
"Yifan",
""
],
[
"Lu",
"Zhiyong",
""
]
] | TITLE: Deep learning for extracting protein-protein interactions from
biomedical literature
ABSTRACT: State-of-the-art methods for protein-protein interaction (PPI) extraction are
primarily feature-based or kernel-based by leveraging lexical and syntactic
information. But how to incorporate such knowledge in the recent deep learning
methods remains an open question. In this paper, we propose a multichannel
dependency-based convolutional neural network model (McDepCNN). It applies one
channel to the embedding vector of each word in the sentence, and another
channel to the embedding vector of the head of the corresponding word.
Therefore, the model can use richer information obtained from different
channels. Experiments on two public benchmarking datasets, AIMed and BioInfer,
demonstrate that McDepCNN compares favorably to the state-of-the-art
rich-feature and single-kernel based methods. In addition, McDepCNN achieves
24.4% relative improvement in F1-score over the state-of-the-art methods on
cross-corpus evaluation and 12% improvement in F1-score over kernel-based
methods on "difficult" instances. These results suggest that McDepCNN
generalizes more easily over different corpora, and is capable of capturing
long distance features in the sentences.
| no_new_dataset | 0.952309 |