Dataset schema (column: type, range):

- id: string, 9-16 chars
- submitter: string, 3-64 chars, nullable (⌀)
- authors: string, 5-6.63k chars
- title: string, 7-245 chars
- comments: string, 1-482 chars, nullable (⌀)
- journal-ref: string, 4-382 chars, nullable (⌀)
- doi: string, 9-151 chars, nullable (⌀)
- report-no: string, 984 distinct values
- categories: string, 5-108 chars
- license: string, 9 distinct values
- abstract: string, 83-3.41k chars
- versions: list, 1-20 entries
- update_date: timestamp[s], 2007-05-23 to 2025-04-11
- authors_parsed: sequence, 1-427 entries
- prompt: string, 166-3.49k chars
- label: string, 2 distinct values
- prob: float64, 0.5-0.98

id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1612.04062 | Reza Fuad Rachmadi | Reza Fuad Rachmadi, Keiichi Uchimura, and Gou Koutaki | Spatial Pyramid Convolutional Neural Network for Social Event Detection
in Static Image | in Proceeding of 11th International Student Conference on Advanced
Science and Technology (ICAST) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social event detection in a static image is a very challenging problem, and
it is very useful for internet-of-things applications including automatic photo
organization, ad recommender systems, or image captioning. Several publications
show that the variety of objects, scenes, and people can make it very ambiguous
for a system to decide which event occurs in an image. We propose a spatial
pyramid configuration of a convolutional neural network (CNN) classifier for
social event detection in a static image. By applying the spatial pyramid
configuration to the CNN classifier, the details in the image can be observed
more accurately by the classifier. The USED dataset provided by Ahmad et al.,
which consists of two different image sets (EiMM and SED), is used to evaluate
our proposed method. As a result, the average accuracy of our system
outperforms the baseline method by 15% and 2%, respectively.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 08:32:56 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"Rachmadi",
"Reza Fuad",
""
],
[
"Uchimura",
"Keiichi",
""
],
[
"Koutaki",
"Gou",
""
]
] | TITLE: Spatial Pyramid Convolutional Neural Network for Social Event Detection
in Static Image
ABSTRACT: Social event detection in a static image is a very challenging problem, and
it is very useful for internet-of-things applications including automatic photo
organization, ad recommender systems, or image captioning. Several publications
show that the variety of objects, scenes, and people can make it very ambiguous
for a system to decide which event occurs in an image. We propose a spatial
pyramid configuration of a convolutional neural network (CNN) classifier for
social event detection in a static image. By applying the spatial pyramid
configuration to the CNN classifier, the details in the image can be observed
more accurately by the classifier. The USED dataset provided by Ahmad et al.,
which consists of two different image sets (EiMM and SED), is used to evaluate
our proposed method. As a result, the average accuracy of our system
outperforms the baseline method by 15% and 2%, respectively.
| no_new_dataset | 0.951818 |
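
A minimal sketch of spatial-pyramid pooling over CNN feature maps, the general mechanism the record above (arXiv:1612.04062) describes. The pyramid levels, the feature-map size, and the use of max pooling here are assumptions, not the paper's exact configuration.

```python
# Spatial-pyramid pooling sketch (PyTorch): pool a feature map at several grid
# resolutions and concatenate into one fixed-length descriptor per image.
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(features, levels=(1, 2, 4)):
    """features: (N, C, H, W) activations; returns (N, C * sum(level^2))."""
    n, c = features.shape[:2]
    pooled = [
        F.adaptive_max_pool2d(features, output_size=level).view(n, c * level * level)
        for level in levels
    ]
    return torch.cat(pooled, dim=1)

features = torch.randn(8, 256, 13, 13)        # e.g. conv5 activations
descriptor = spatial_pyramid_pool(features)   # -> (8, 256 * 21)
print(descriptor.shape)
```

Pooling at several grid resolutions is what lets the classifier attend to both global scene context and local detail.
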
1612.04211 | Zhiguo Wang | Zhiguo Wang, Haitao Mi, Wael Hamza and Radu Florian | Multi-Perspective Context Matching for Machine Comprehension | 8 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous machine comprehension (MC) datasets are either too small to train
end-to-end deep learning models, or not difficult enough to evaluate the
ability of current MC techniques. The newly released SQuAD dataset alleviates
these limitations, and gives us a chance to develop more realistic MC models.
Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM)
model, which is an end-to-end system that directly predicts the answer
beginning and ending points in a passage. Our model first adjusts each
word-embedding vector in the passage by multiplying a relevancy weight computed
against the question. Then, we encode the question and weighted passage by
using bi-directional LSTMs. For each point in the passage, our model matches
the context of this point against the encoded question from multiple
perspectives and produces a matching vector. Given those matched vectors, we
employ another bi-directional LSTM to aggregate all the information and predict
the beginning and ending points. Experimental results on the test set of SQuAD
show that our model achieves a competitive result on the leaderboard.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 14:49:47 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"Wang",
"Zhiguo",
""
],
[
"Mi",
"Haitao",
""
],
[
"Hamza",
"Wael",
""
],
[
"Florian",
"Radu",
""
]
] | TITLE: Multi-Perspective Context Matching for Machine Comprehension
ABSTRACT: Previous machine comprehension (MC) datasets are either too small to train
end-to-end deep learning models, or not difficult enough to evaluate the
ability of current MC techniques. The newly released SQuAD dataset alleviates
these limitations, and gives us a chance to develop more realistic MC models.
Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM)
model, which is an end-to-end system that directly predicts the answer
beginning and ending points in a passage. Our model first adjusts each
word-embedding vector in the passage by multiplying a relevancy weight computed
against the question. Then, we encode the question and weighted passage by
using bi-directional LSTMs. For each point in the passage, our model matches
the context of this point against the encoded question from multiple
perspectives and produces a matching vector. Given those matched vectors, we
employ another bi-directional LSTM to aggregate all the information and predict
the beginning and ending points. Experimental results on the test set of SQuAD
show that our model achieves a competitive result on the leaderboard.
| new_dataset | 0.960952 |
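
A sketch of the relevancy-weighting step described in the MPCM record above (arXiv:1612.04211): each passage word vector is rescaled by its similarity to the question before encoding. The use of max cosine similarity and the embedding dimensions are assumptions.

```python
# Relevancy weighting sketch (PyTorch): scale each passage word embedding by
# its maximum cosine similarity to any question word.
import torch
import torch.nn.functional as F

def weight_passage(passage, question):
    """passage: (P, D), question: (Q, D) word-embedding matrices."""
    p = F.normalize(passage, dim=1)           # unit-norm rows
    q = F.normalize(question, dim=1)
    sim = p @ q.t()                           # (P, Q) cosine similarities
    relevancy = sim.max(dim=1).values         # one weight per passage word
    return passage * relevancy.unsqueeze(1)   # re-scaled passage embeddings

passage = torch.randn(120, 300)
question = torch.randn(15, 300)
print(weight_passage(passage, question).shape)  # torch.Size([120, 300])
```
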
1612.04342 | Radu Soricut | Radu Soricut and Nan Ding | Building Large Machine Reading-Comprehension Datasets using Paragraph
Vectors | 10 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a dual contribution to the task of machine reading-comprehension:
a technique for creating large-sized machine-comprehension (MC) datasets using
paragraph-vector models; and a novel, hybrid neural-network architecture that
combines the representation power of recurrent neural networks with the
discriminative power of fully-connected multi-layered networks. We use the
MC-dataset generation technique to build a dataset of around 2 million
examples, for which we empirically determine the high-ceiling of human
performance (around 91% accuracy), as well as the performance of a variety of
computer models. Among all the models we have experimented with, our hybrid
neural-network architecture achieves the highest performance (83.2% accuracy).
The remaining gap to the human-performance ceiling provides enough room for
future model improvements.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 20:22:36 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"Soricut",
"Radu",
""
],
[
"Ding",
"Nan",
""
]
] | TITLE: Building Large Machine Reading-Comprehension Datasets using Paragraph
Vectors
ABSTRACT: We present a dual contribution to the task of machine reading-comprehension:
a technique for creating large-sized machine-comprehension (MC) datasets using
paragraph-vector models; and a novel, hybrid neural-network architecture that
combines the representation power of recurrent neural networks with the
discriminative power of fully-connected multi-layered networks. We use the
MC-dataset generation technique to build a dataset of around 2 million
examples, for which we empirically determine the high-ceiling of human
performance (around 91% accuracy), as well as the performance of a variety of
computer models. Among all the models we have experimented with, our hybrid
neural-network architecture achieves the highest performance (83.2% accuracy).
The remaining gap to the human-performance ceiling provides enough room for
future model improvements.
| new_dataset | 0.950365 |
1602.00133 | Zhao Shen-Yi | Shen-Yi Zhao, Ru Xiang, Ying-Hao Shi, Peng Gao, Wu-Jun Li | SCOPE: Scalable Composite Optimization for Learning on Spark | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many machine learning models, such as logistic regression~(LR) and support
vector machine~(SVM), can be formulated as composite optimization problems.
Recently, many distributed stochastic optimization~(DSO) methods have been
proposed to solve the large-scale composite optimization problems, which have
shown better performance than traditional batch methods. However, most of these
DSO methods are not scalable enough. In this paper, we propose a novel DSO
method, called \underline{s}calable \underline{c}omposite
\underline{op}timization for l\underline{e}arning~({SCOPE}), and implement it
on the fault-tolerant distributed platform \mbox{Spark}. SCOPE is both
computation-efficient and communication-efficient. Theoretical analysis shows
that SCOPE is convergent with linear convergence rate when the objective
function is convex. Furthermore, empirical results on real datasets show that
SCOPE can outperform other state-of-the-art distributed learning methods on
Spark, including both batch learning methods and DSO methods.
| [
{
"version": "v1",
"created": "Sat, 30 Jan 2016 16:11:53 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Feb 2016 07:07:56 GMT"
},
{
"version": "v3",
"created": "Wed, 1 Jun 2016 07:50:39 GMT"
},
{
"version": "v4",
"created": "Thu, 2 Jun 2016 07:01:25 GMT"
},
{
"version": "v5",
"created": "Sun, 11 Dec 2016 16:10:37 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Zhao",
"Shen-Yi",
""
],
[
"Xiang",
"Ru",
""
],
[
"Shi",
"Ying-Hao",
""
],
[
"Gao",
"Peng",
""
],
[
"Li",
"Wu-Jun",
""
]
] | TITLE: SCOPE: Scalable Composite Optimization for Learning on Spark
ABSTRACT: Many machine learning models, such as logistic regression~(LR) and support
vector machine~(SVM), can be formulated as composite optimization problems.
Recently, many distributed stochastic optimization~(DSO) methods have been
proposed to solve the large-scale composite optimization problems, which have
shown better performance than traditional batch methods. However, most of these
DSO methods are not scalable enough. In this paper, we propose a novel DSO
method, called \underline{s}calable \underline{c}omposite
\underline{op}timization for l\underline{e}arning~({SCOPE}), and implement it
on the fault-tolerant distributed platform \mbox{Spark}. SCOPE is both
computation-efficient and communication-efficient. Theoretical analysis shows
that SCOPE is convergent with linear convergence rate when the objective
function is convex. Furthermore, empirical results on real datasets show that
SCOPE can outperform other state-of-the-art distributed learning methods on
Spark, including both batch learning methods and DSO methods.
| no_new_dataset | 0.943086 |
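
SCOPE itself is not reproduced here; the PySpark sketch below shows only the distributed pattern such methods build on: each worker computes a partial gradient over its partition and the driver aggregates. The logistic-regression objective, step size, and toy data are assumptions.

```python
# Distributed gradient descent sketch on Spark: partial gradients per
# partition, aggregated with treeAggregate.
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("grad-sketch").getOrCreate()
sc = spark.sparkContext

def logistic_grad(w, xy):
    x, y = xy                                 # y in {-1, +1}
    return -y * x / (1.0 + np.exp(y * np.dot(w, x)))

dim, n = 10, 1000
rng = np.random.default_rng(0)
data = [(rng.standard_normal(dim), rng.choice([-1.0, 1.0])) for _ in range(n)]
points = sc.parallelize(data, numSlices=8)

w = np.zeros(dim)
for _ in range(20):                           # plain distributed gradient descent
    grad = points.treeAggregate(
        np.zeros(dim),
        lambda acc, xy: acc + logistic_grad(w, xy),
        lambda a, b: a + b,
    ) / n
    w -= 0.5 * grad
```
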
1603.00145 | Shifeng Liu | Shifeng Liu, Zheng Hu, Sujit Dey and Xin Ke | On Tie Strength Augmented Social Correlation for Inferring Preference of
Mobile Telco Users | This paper has been modified and the writing may confuse readers | null | null | null | cs.SI cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For mobile telecom operators, it is critical to build preference profiles of
their customers and connected users, which can help operators make better
marketing strategies, and provide more personalized services. With the
deployment of deep packet inspection (DPI) in telecom networks, it is possible
for the telco operators to obtain user online preference. However, DPI has its
limitations and user preference derived only from DPI faces sparsity and cold
start problems. To better infer user preference, social correlation with
regard to online preference in the telco user network derived from Call
Detailed Records (CDRs) is investigated. Though widely verified in several
online social networks, social correlation between the online preferences of
users in mobile telco networks, where CDR-derived relationships have fewer
social properties and users' mobile internet surfing activities are not
visible to their neighbourhood, has not been explored at a large scale. Based
on a real-world telecom dataset including CDRs and the preferences of more
than $550K$ users over several months, we verified that correlation does exist
between online preferences in such an \textit{ambiguous} social network.
Furthermore, we found that the stronger the ties users build, the more similar
their preferences tend to be. After defining the preference inference task as
a Top-$K$ recommendation problem, we incorporated a Matrix Factorization
Collaborative Filtering model with social correlation and tie strength based
on call patterns
to generate Top-$K$ preferred categories for users. The proposed Tie Strength
Augmented Social Recommendation (TSASoRec) model takes data sparsity and cold
start user problems into account, considering both the recorded and missing
recorded category entries. The experiment on real dataset shows the proposed
model can better infer user preference, especially for cold start users.
| [
{
"version": "v1",
"created": "Tue, 1 Mar 2016 05:20:47 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Dec 2016 22:06:05 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Liu",
"Shifeng",
""
],
[
"Hu",
"Zheng",
""
],
[
"Dey",
"Sujit",
""
],
[
"Ke",
"Xin",
""
]
] | TITLE: On Tie Strength Augmented Social Correlation for Inferring Preference of
Mobile Telco Users
ABSTRACT: For mobile telecom operators, it is critical to build preference profiles of
their customers and connected users, which can help operators make better
marketing strategies, and provide more personalized services. With the
deployment of deep packet inspection (DPI) in telecom networks, it is possible
for the telco operators to obtain user online preference. However, DPI has its
limitations and user preference derived only from DPI faces sparsity and cold
start problems. To better infer user preference, social correlation with
regard to online preference in the telco user network derived from Call
Detailed Records (CDRs) is investigated. Though widely verified in several
online social networks, social correlation between the online preferences of
users in mobile telco networks, where CDR-derived relationships have fewer
social properties and users' mobile internet surfing activities are not
visible to their neighbourhood, has not been explored at a large scale. Based
on a real-world telecom dataset including CDRs and the preferences of more
than $550K$ users over several months, we verified that correlation does exist
between online preferences in such an \textit{ambiguous} social network.
Furthermore, we found that the stronger the ties users build, the more similar
their preferences tend to be. After defining the preference inference task as
a Top-$K$ recommendation problem, we incorporated a Matrix Factorization
Collaborative Filtering model with social correlation and tie strength based
on call patterns
to generate Top-$K$ preferred categories for users. The proposed Tie Strength
Augmented Social Recommendation (TSASoRec) model takes data sparsity and cold
start user problems into account, considering both the recorded and missing
recorded category entries. The experiment on real dataset shows the proposed
model can better infer user preference, especially for cold start users.
| no_new_dataset | 0.931088 |
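
A numpy sketch of the core idea behind TSASoRec (arXiv:1603.00145 above): matrix factorization whose user factors are pulled toward a tie-strength-weighted average of their neighbours' factors. The hyper-parameters, toy data, and exact regularizer form are all assumptions.

```python
# Matrix factorization with a tie-strength social regularizer (SGD sketch).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 30, 8
P = 0.1 * rng.standard_normal((n_users, k))    # user factors
Q = 0.1 * rng.standard_normal((n_items, k))    # category/item factors
S = rng.random((n_users, n_users))             # tie strengths from call patterns
np.fill_diagonal(S, 0.0)
S /= S.sum(axis=1, keepdims=True)              # row-normalized tie strengths
obs = [(rng.integers(n_users), rng.integers(n_items), rng.random())
       for _ in range(500)]                    # observed preference entries

lr, lam, beta = 0.05, 0.01, 0.1
for _ in range(30):
    for u, i, r in obs:
        err = r - P[u] @ Q[i]
        social = P[u] - S[u] @ P               # deviation from neighbour average
        P[u] += lr * (err * Q[i] - lam * P[u] - beta * social)
        Q[i] += lr * (err * P[u] - lam * Q[i])
```
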
1606.04596 | Yang Liu | Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun and Yang
Liu | Semi-Supervised Learning for Neural Machine Translation | Corrected a typo | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While end-to-end neural machine translation (NMT) has made remarkable
progress recently, NMT systems only rely on parallel corpora for parameter
estimation. Since parallel corpora are usually limited in quantity, quality,
and coverage, especially for low-resource languages, it is appealing to exploit
monolingual corpora to improve NMT. We propose a semi-supervised approach for
training NMT models on the concatenation of labeled (parallel corpora) and
unlabeled (monolingual corpora) data. The central idea is to reconstruct the
monolingual corpora using an autoencoder, in which the source-to-target and
target-to-source translation models serve as the encoder and decoder,
respectively. Our approach can not only exploit the monolingual corpora of the
target language, but also of the source language. Experiments on the
Chinese-English dataset show that our approach achieves significant
improvements over state-of-the-art SMT and NMT systems.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 00:22:27 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Aug 2016 19:08:20 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Dec 2016 20:02:52 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Cheng",
"Yong",
""
],
[
"Xu",
"Wei",
""
],
[
"He",
"Zhongjun",
""
],
[
"He",
"Wei",
""
],
[
"Wu",
"Hua",
""
],
[
"Sun",
"Maosong",
""
],
[
"Liu",
"Yang",
""
]
] | TITLE: Semi-Supervised Learning for Neural Machine Translation
ABSTRACT: While end-to-end neural machine translation (NMT) has made remarkable
progress recently, NMT systems only rely on parallel corpora for parameter
estimation. Since parallel corpora are usually limited in quantity, quality,
and coverage, especially for low-resource languages, it is appealing to exploit
monolingual corpora to improve NMT. We propose a semi-supervised approach for
training NMT models on the concatenation of labeled (parallel corpora) and
unlabeled (monolingual corpora) data. The central idea is to reconstruct the
monolingual corpora using an autoencoder, in which the source-to-target and
target-to-source translation models serve as the encoder and decoder,
respectively. Our approach can not only exploit the monolingual corpora of the
target language, but also of the source language. Experiments on the
Chinese-English dataset show that our approach achieves significant
improvements over state-of-the-art SMT and NMT systems.
| no_new_dataset | 0.951639 |
1608.05182 | Yanbo Xu | Yanbo Xu, Yanxun Xu and Suchi Saria | A Bayesian Nonparametric Approach for Estimating Individualized
Treatment-Response Curves | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of estimating the continuous response over time to
interventions using observational time series---a retrospective dataset where
the policy by which the data are generated is unknown to the learner. We are
motivated by applications where response varies by individuals and therefore,
estimating responses at the individual-level is valuable for personalizing
decision-making. We refer to this as the problem of estimating individualized
treatment response (ITR) curves. In statistics, G-computation formula (Robins,
1986) has been commonly used for estimating treatment responses from
observational data containing sequential treatment assignments. However, past
studies have focused predominantly on obtaining point-in-time estimates at the
population level. We leverage the G-computation formula and develop a novel
Bayesian nonparametric (BNP) method that can flexibly model functional data and
provide posterior inference over the treatment response curves at both the
individual and population level. On a challenging dataset containing time
series from patients admitted to a hospital, we estimate responses to
treatments used in managing kidney function and show that the resulting fits
are more accurate than alternative approaches. Accurate methods for obtaining
ITRs from observational data can dramatically accelerate the pace at which
personalized treatment plans become possible.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 05:31:53 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Dec 2016 16:44:14 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Xu",
"Yanbo",
""
],
[
"Xu",
"Yanxun",
""
],
[
"Saria",
"Suchi",
""
]
] | TITLE: A Bayesian Nonparametric Approach for Estimating Individualized
Treatment-Response Curves
ABSTRACT: We study the problem of estimating the continuous response over time to
interventions using observational time series---a retrospective dataset where
the policy by which the data are generated is unknown to the learner. We are
motivated by applications where response varies by individuals and therefore,
estimating responses at the individual-level is valuable for personalizing
decision-making. We refer to this as the problem of estimating individualized
treatment response (ITR) curves. In statistics, G-computation formula (Robins,
1986) has been commonly used for estimating treatment responses from
observational data containing sequential treatment assignments. However, past
studies have focused predominantly on obtaining point-in-time estimates at the
population level. We leverage the G-computation formula and develop a novel
Bayesian nonparametric (BNP) method that can flexibly model functional data and
provide posterior inference over the treatment response curves at both the
individual and population level. On a challenging dataset containing time
series from patients admitted to a hospital, we estimate responses to
treatments used in managing kidney function and show that the resulting fits
are more accurate than alternative approaches. Accurate methods for obtaining
ITRs from observational data can dramatically accelerate the pace at which
personalized treatment plans become possible.
| no_new_dataset | 0.942401 |
1608.08614 | Minyoung Huh | Minyoung Huh, Pulkit Agrawal, Alexei A. Efros | What makes ImageNet good for transfer learning? | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The tremendous success of ImageNet-trained deep features on a wide range of
transfer tasks begs the question: what are the properties of the ImageNet
dataset that are critical for learning good, general-purpose features? This
work provides an empirical investigation of various facets of this question: Is
more pre-training data always better? How does feature quality depend on the
number of training examples per class? Does adding more object classes improve
performance? For the same data budget, how should the data be split into
classes? Is fine-grained recognition necessary for learning good features?
Given the same number of training classes, is it better to have coarse classes
or fine-grained classes? Which is better: more classes or more examples per
class? To answer these and related questions, we pre-trained CNN features on
various subsets of the ImageNet dataset and evaluated transfer performance on
PASCAL detection, PASCAL action classification, and SUN scene classification
tasks. Our overall findings suggest that most changes in the choice of
pre-training data long thought to be critical do not significantly affect
transfer performance.
| [
{
"version": "v1",
"created": "Tue, 30 Aug 2016 19:45:09 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Dec 2016 13:37:06 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Huh",
"Minyoung",
""
],
[
"Agrawal",
"Pulkit",
""
],
[
"Efros",
"Alexei A.",
""
]
] | TITLE: What makes ImageNet good for transfer learning?
ABSTRACT: The tremendous success of ImageNet-trained deep features on a wide range of
transfer tasks begs the question: what are the properties of the ImageNet
dataset that are critical for learning good, general-purpose features? This
work provides an empirical investigation of various facets of this question: Is
more pre-training data always better? How does feature quality depend on the
number of training examples per class? Does adding more object classes improve
performance? For the same data budget, how should the data be split into
classes? Is fine-grained recognition necessary for learning good features?
Given the same number of training classes, is it better to have coarse classes
or fine-grained classes? Which is better: more classes or more examples per
class? To answer these and related questions, we pre-trained CNN features on
various subsets of the ImageNet dataset and evaluated transfer performance on
PASCAL detection, PASCAL action classification, and SUN scene classification
tasks. Our overall findings suggest that most changes in the choice of
pre-training data long thought to be critical do not significantly affect
transfer performance.
| no_new_dataset | 0.951233 |
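
The record above studies which pre-training data matters; as context, here is the standard transfer-learning setup such experiments rely on, sketched with torchvision: freeze ImageNet-pretrained features and retrain only a task head. The ResNet-18 backbone and 20-class target task are assumptions.

```python
# Standard fine-tuning sketch: ImageNet-pretrained backbone, new linear head.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():        # freeze the pre-trained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 20)  # new head for the target task
# ...then train model.fc on the target dataset (e.g. a PASCAL-style task)
```
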
1609.07545 | Jeremy Kepner | Siddharth Samsi, Laura Brattain, William Arcand, David Bestor, Bill
Bergeron, Chansup Byun, Vijay Gadepally, Michael Houle, Matthew Hubbell,
Michael Jones, Anna Klein, Peter Michaleas, Lauren Milechin, Julie Mullen,
Andrew Prout, Antonio Rosa, Charles Yee, Jeremy Kepner, Albert Reuther | Benchmarking SciDB Data Import on HPC Systems | 5 pages, 4 figures, IEEE High Performance Extreme Computing (HPEC)
2016, best paper finalist | null | 10.1109/HPEC.2016.7761617 | null | cs.DB cs.DC cs.PF q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | SciDB is a scalable, computational database management system that uses an
array model for data storage. The array data model of SciDB makes it ideally
suited for storing and managing large amounts of imaging data. SciDB is
designed to support advanced analytics in-database, thus reducing the need for
extracting data for analysis. It is designed to be massively parallel and can
run on commodity hardware in a high performance computing (HPC) environment. In
this paper, we present the performance of SciDB using simulated image data. The
Dynamic Distributed Dimensional Data Model (D4M) software is used to implement
the benchmark on a cluster running the MIT SuperCloud software stack. A peak
performance of 2.2M database inserts per second was achieved on a single node
of this system. We also show that SciDB and the D4M toolbox provide more
efficient ways to access random sub-volumes of massive datasets compared to the
traditional approaches of reading volumetric data from individual files. This
work describes the D4M and SciDB tools we developed and presents the initial
performance results. This performance was achieved by using parallel inserts,
an in-database merging of arrays, as well as supercomputing techniques, such as
distributed arrays and single-program-multiple-data programming.
| [
{
"version": "v1",
"created": "Sat, 24 Sep 2016 01:01:30 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Samsi",
"Siddharth",
""
],
[
"Brattain",
"Laura",
""
],
[
"Arcand",
"William",
""
],
[
"Bestor",
"David",
""
],
[
"Bergeron",
"Bill",
""
],
[
"Byun",
"Chansup",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Houle",
"Michael",
""
],
[
"Hubbell",
"Matthew",
""
],
[
"Jones",
"Michael",
""
],
[
"Klein",
"Anna",
""
],
[
"Michaleas",
"Peter",
""
],
[
"Milechin",
"Lauren",
""
],
[
"Mullen",
"Julie",
""
],
[
"Prout",
"Andrew",
""
],
[
"Rosa",
"Antonio",
""
],
[
"Yee",
"Charles",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Reuther",
"Albert",
""
]
] | TITLE: Benchmarking SciDB Data Import on HPC Systems
ABSTRACT: SciDB is a scalable, computational database management system that uses an
array model for data storage. The array data model of SciDB makes it ideally
suited for storing and managing large amounts of imaging data. SciDB is
designed to support advanced analytics in-database, thus reducing the need for
extracting data for analysis. It is designed to be massively parallel and can
run on commodity hardware in a high performance computing (HPC) environment. In
this paper, we present the performance of SciDB using simulated image data. The
Dynamic Distributed Dimensional Data Model (D4M) software is used to implement
the benchmark on a cluster running the MIT SuperCloud software stack. A peak
performance of 2.2M database inserts per second was achieved on a single node
of this system. We also show that SciDB and the D4M toolbox provide more
efficient ways to access random sub-volumes of massive datasets compared to the
traditional approaches of reading volumetric data from individual files. This
work describes the D4M and SciDB tools we developed and presents the initial
performance results. This performance was achieved by using parallel inserts,
an in-database merging of arrays, as well as supercomputing techniques, such as
distributed arrays and single-program-multiple-data programming.
| no_new_dataset | 0.943086 |
1609.07548 | Jeremy Kepner | Vijay Gadepally, Peinan Chen, Jennie Duggan, Aaron Elmore, Brandon
Haynes, Jeremy Kepner, Samuel Madden, Tim Mattson, Michael Stonebraker | The BigDAWG Polystore System and Architecture | 6 pages, 5 figures, IEEE High Performance Extreme Computing (HPEC)
conference 2016 | null | 10.1109/HPEC.2016.7761636 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Organizations are often faced with the challenge of providing data management
solutions for large, heterogeneous datasets that may have different underlying
data and programming models. For example, a medical dataset may have
unstructured text, relational data, time series waveforms and imagery. Trying
to fit such datasets in a single data management system can have adverse
performance and efficiency effects. As a part of the Intel Science and
Technology Center on Big Data, we are developing a polystore system designed
for such problems. BigDAWG (short for the Big Data Analytics Working Group) is
a polystore system designed to work on complex problems that naturally span
across different processing or storage engines. BigDAWG provides an
architecture that supports diverse database systems working with different data
models, support for the competing notions of location transparency and semantic
completeness via islands and a middleware that provides a uniform multi-island
interface. Initial results from a prototype of the BigDAWG system applied to a
medical dataset validate polystore concepts. In this article, we will describe
polystore databases, the current BigDAWG architecture and its application on
the MIMIC II medical dataset, initial performance results and our future
development plans.
| [
{
"version": "v1",
"created": "Sat, 24 Sep 2016 01:14:06 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Gadepally",
"Vijay",
""
],
[
"Chen",
"Peinan",
""
],
[
"Duggan",
"Jennie",
""
],
[
"Elmore",
"Aaron",
""
],
[
"Haynes",
"Brandon",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Madden",
"Samuel",
""
],
[
"Mattson",
"Tim",
""
],
[
"Stonebraker",
"Michael",
""
]
] | TITLE: The BigDAWG Polystore System and Architecture
ABSTRACT: Organizations are often faced with the challenge of providing data management
solutions for large, heterogeneous datasets that may have different underlying
data and programming models. For example, a medical dataset may have
unstructured text, relational data, time series waveforms and imagery. Trying
to fit such datasets in a single data management system can have adverse
performance and efficiency effects. As a part of the Intel Science and
Technology Center on Big Data, we are developing a polystore system designed
for such problems. BigDAWG (short for the Big Data Analytics Working Group) is
a polystore system designed to work on complex problems that naturally span
across different processing or storage engines. BigDAWG provides an
architecture that supports diverse database systems working with different data
models, support for the competing notions of location transparency and semantic
completeness via islands and a middleware that provides a uniform multi-island
interface. Initial results from a prototype of the BigDAWG system applied to a
medical dataset validate polystore concepts. In this article, we will describe
polystore databases, the current BigDAWG architecture and its application on
the MIMIC II medical dataset, initial performance results and our future
development plans.
| no_new_dataset | 0.94625 |
1609.08642 | Jeremy Kepner | Timothy Weale, Vijay Gadepally, Dylan Hutchison, Jeremy Kepner | Benchmarking the Graphulo Processing Framework | 5 pages, 4 figures, IEEE High Performance Extreme Computing (HPEC)
conference 2016 | null | 10.1109/HPEC.2016.7761640 | null | cs.DB cs.MS cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph algorithms have wide applicability to a variety of domains and are often
used on massive datasets. Recent standardization efforts such as the GraphBLAS
specify a set of key computational kernels that hardware and software
developers can adhere to. Graphulo is a processing framework that enables
GraphBLAS kernels in the Apache Accumulo database. In our previous work, we
have demonstrated a core Graphulo operation called \textit{TableMult} that
performs large-scale multiplication operations of database tables. In this
article, we present the results of scaling the Graphulo engine to larger
problems and its scalability when a greater number of resources is used.
Specifically, we present two experiments that demonstrate Graphulo scaling
performance is linear with the number of available resources. The first
experiment demonstrates cluster processing rates through Graphulo's TableMult
operator on two large graphs, scaled between $2^{17}$ and $2^{19}$ vertices.
The second experiment uses TableMult to extract a random set of rows from a
large graph ($2^{19}$ nodes) to simulate a cued graph analytic. These
benchmarking results are of relevance to Graphulo users who wish to apply
Graphulo to their graph problems.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2016 20:09:03 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Weale",
"Timothy",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Hutchison",
"Dylan",
""
],
[
"Kepner",
"Jeremy",
""
]
] | TITLE: Benchmarking the Graphulo Processing Framework
ABSTRACT: Graph algorithms have wide applicability to a variety of domains and are often
used on massive datasets. Recent standardization efforts such as the GraphBLAS
specify a set of key computational kernels that hardware and software
developers can adhere to. Graphulo is a processing framework that enables
GraphBLAS kernels in the Apache Accumulo database. In our previous work, we
have demonstrated a core Graphulo operation called \textit{TableMult} that
performs large-scale multiplication operations of database tables. In this
article, we present the results of scaling the Graphulo engine to larger
problems and its scalability when a greater number of resources is used.
Specifically, we present two experiments that demonstrate Graphulo scaling
performance is linear with the number of available resources. The first
experiment demonstrates cluster processing rates through Graphulo's TableMult
operator on two large graphs, scaled between $2^{17}$ and $2^{19}$ vertices.
The second experiment uses TableMult to extract a random set of rows from a
large graph ($2^{19}$ nodes) to simulate a cued graph analytic. These
benchmarking results are of relevance to Graphulo users who wish to apply
Graphulo to their graph problems.
| no_new_dataset | 0.945951 |
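
Graphulo's TableMult realizes a GraphBLAS-style sparse matrix multiply inside Accumulo; the scipy snippet below is only a local, single-machine analogy of that kernel, not the Graphulo/Accumulo API. Matrix sizes and densities are toy assumptions.

```python
# Local analogy of TableMult: sparse matrix-matrix product over CSR matrices.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
A = sp.random(1 << 10, 1 << 10, density=0.01, random_state=rng, format="csr")
B = sp.random(1 << 10, 1 << 10, density=0.01, random_state=rng, format="csr")
C = A @ B          # the sparse product that Graphulo distributes across tablets
print(C.nnz)
```
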
1611.09086 | Krishna Agarwal | Krishna Agarwal and Radek Mach\'a\v{n} | Multiple Signal Classification Algorithm for super-resolution
fluorescence microscopy | 28 pages, 29 figures, Nature Communications, 2016 | null | 10.1038/ncomms13752 | null | q-bio.QM physics.bio-ph physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Super-resolution microscopy is providing unprecedented insights into biology
by resolving details much below the diffraction limit. State-of-the-art Single
Molecule Localization Microscopy (SMLM) techniques for super-resolution are
restricted by long acquisition and computational times, or the need of special
fluorophores or chemical environments. Here, we propose a novel statistical
super-resolution technique of wide-field fluorescence microscopy called
MUltiple SIgnal Classification ALgorithm (MUSICAL) which has several advantages
over SMLM techniques. MUSICAL provides resolution down to at least 50 nm, has
low requirements on number of frames and excitation power and works even at
high fluorophore concentrations. Further, it works with any fluorophore that
exhibits blinking on the time scale of the recording. We compare imaging
results of MUSICAL with SMLM and four contemporary statistical super-resolution
methods for experiments of in-vitro actin filaments and datasets provided by
independent research groups. Results show comparable or superior performance of
MUSICAL. We also demonstrate super-resolution at time scales of 245 ms (using
49 frames at acquisition rate of 200 frames per second) in samples of live-cell
microtubules and live-cell actin filaments.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 11:52:11 GMT"
},
{
"version": "v2",
"created": "Sun, 11 Dec 2016 17:12:46 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Agarwal",
"Krishna",
""
],
[
"Macháň",
"Radek",
""
]
] | TITLE: Multiple Signal Classification Algorithm for super-resolution
fluorescence microscopy
ABSTRACT: Super-resolution microscopy is providing unprecedented insights into biology
by resolving details much below the diffraction limit. State-of-the-art Single
Molecule Localization Microscopy (SMLM) techniques for super-resolution are
restricted by long acquisition and computational times, or the need of special
fluorophores or chemical environments. Here, we propose a novel statistical
super-resolution technique of wide-field fluorescence microscopy called
MUltiple SIgnal Classification ALgorithm (MUSICAL) which has several advantages
over SMLM techniques. MUSICAL provides resolution down to at least 50 nm, has
low requirements on number of frames and excitation power and works even at
high fluorophore concentrations. Further, it works with any fluorophore that
exhibits blinking on the time scale of the recording. We compare imaging
results of MUSICAL with SMLM and four contemporary statistical super-resolution
methods for experiments of in-vitro actin filaments and datasets provided by
independent research groups. Results show comparable or superior performance of
MUSICAL. We also demonstrate super-resolution at time scales of 245 ms (using
49 frames at acquisition rate of 200 frames per second) in samples of live-cell
microtubules and live-cell actin filaments.
| no_new_dataset | 0.952442 |
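
A bare sketch of the subspace idea behind MUSICAL (arXiv:1611.09086 above): decompose a stack of blinking-fluorophore frames into signal and noise subspaces and test a point-spread function against both. Window handling, the PSF model, and the signal-subspace rank are simplifications or assumptions.

```python
# Signal/noise subspace split over a frame stack (numpy sketch).
import numpy as np

rng = np.random.default_rng(0)
frames = rng.standard_normal((49, 256))       # 49 frames, flattened 16x16 window
_, _, Vt = np.linalg.svd(frames - frames.mean(axis=0), full_matrices=False)
k = 5                                         # assumed signal-subspace rank
signal_basis, noise_basis = Vt[:k], Vt[k:]

psf = rng.standard_normal(256)                # stand-in PSF at a test location
indicator = np.linalg.norm(signal_basis @ psf) / np.linalg.norm(noise_basis @ psf)
print(indicator)                              # large values suggest an emitter
```
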
1612.03413 | George Teodoro | George Teodoro, Tahsin Kurc, Luis F. R. Taveira, Alba C. M. A. Melo,
Jun Kong, and Joel Saltz | Efficient Methods and Parallel Execution for Algorithm Sensitivity
Analysis with Parameter Tuning on Microscopy Imaging Datasets | 36 pages, 10 figures | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: We describe an informatics framework for researchers and clinical
investigators to efficiently perform parameter sensitivity analysis and
auto-tuning for algorithms that segment and classify image features in a large
dataset of high-resolution images. The computational cost of the sensitivity
analysis process can be very high, because the process requires processing the
input dataset several times to systematically evaluate how output varies when
input parameters are varied. Thus, high performance computing techniques are
required to quickly execute the sensitivity analysis process.
Results: We carried out an empirical evaluation of the proposed method on
high performance computing clusters with multi-core CPUs and co-processors
(GPUs and Intel Xeon Phis). Our results show that (1) the framework achieves
excellent scalability and efficiency on a high performance computing cluster --
execution efficiency remained above 85% in all experiments; (2) the parameter
auto-tuning methods are able to converge by visiting only a small fraction
(0.0009%) of the search space with limited impact on the algorithm output
(0.56% on average).
Conclusions: The sensitivity analysis framework provides a range of
strategies for the efficient exploration of the parameter space, as well as
multiple indexes to evaluate the effect of parameter modification to outputs or
even correlation between parameters. Our work demonstrates the feasibility of
performing sensitivity analyses, parameter studies, and auto-tuning with large
datasets with the use of high-performance systems and techniques. The proposed
technologies will enable the quantification of error estimations and output
variations in these pipelines, which may be used in application specific ways
to assess uncertainty of conclusions extracted from data generated by these
image analysis pipelines.
| [
{
"version": "v1",
"created": "Sun, 11 Dec 2016 14:05:58 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Teodoro",
"George",
""
],
[
"Kurc",
"Tahsin",
""
],
[
"Taveira",
"Luis F. R.",
""
],
[
"Melo",
"Alba C. M. A.",
""
],
[
"Kong",
"Jun",
""
],
[
"Saltz",
"Joel",
""
]
] | TITLE: Efficient Methods and Parallel Execution for Algorithm Sensitivity
Analysis with Parameter Tuning on Microscopy Imaging Datasets
ABSTRACT: Background: We describe an informatics framework for researchers and clinical
investigators to efficiently perform parameter sensitivity analysis and
auto-tuning for algorithms that segment and classify image features in a large
dataset of high-resolution images. The computational cost of the sensitivity
analysis process can be very high, because the process requires processing the
input dataset several times to systematically evaluate how output varies when
input parameters are varied. Thus, high performance computing techniques are
required to quickly execute the sensitivity analysis process.
Results: We carried out an empirical evaluation of the proposed method on
high performance computing clusters with multi-core CPUs and co-processors
(GPUs and Intel Xeon Phis). Our results show that (1) the framework achieves
excellent scalability and efficiency on a high performance computing cluster --
execution efficiency remained above 85% in all experiments; (2) the parameter
auto-tuning methods are able to converge by visiting only a small fraction
(0.0009%) of the search space with limited impact on the algorithm output
(0.56% on average).
Conclusions: The sensitivity analysis framework provides a range of
strategies for the efficient exploration of the parameter space, as well as
multiple indexes to evaluate the effect of parameter modification to outputs or
even correlation between parameters. Our work demonstrates the feasibility of
performing sensitivity analyses, parameter studies, and auto-tuning with large
datasets with the use of high-performance systems and techniques. The proposed
technologies will enable the quantification of error estimations and output
variations in these pipelines, which may be used in application specific ways
to assess uncertainty of conclusions extracted from data generated by these
image analysis pipelines.
| no_new_dataset | 0.946745 |
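
The framework in the record above systematizes parameter sweeps on HPC clusters; this toy sketch shows only the basic sweep pattern it parallelizes. The `segment` function and its parameters are hypothetical.

```python
# Parameter sweep sketch: run a pipeline stage over a grid of settings and
# record how the output varies with each parameter combination.
from itertools import product
import numpy as np

def segment(image, threshold, min_size):      # hypothetical pipeline stage
    """Toy stand-in for an image-analysis step whose output we probe."""
    return int((image > threshold).sum() // min_size)

image = np.array([[0.1, 0.7], [0.4, 0.9]])
results = {(t, m): segment(image, t, m)
           for t, m in product((0.3, 0.5, 0.7), (1, 2, 4))}
for params, output in sorted(results.items()):
    print(params, "->", output)
```
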
1612.03477 | Dani\"el Reichman | Dani\"el Reichman, Leslie M. Collins, and Jordan M. Malof | On Choosing Training and Testing Data for Supervised Algorithms in
Ground Penetrating Radar Data for Buried Threat Detection | 9 pages, 8 figures, journal paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ground penetrating radar (GPR) is one of the most popular and successful
sensing modalities that has been investigated for landmine and subsurface
threat detection. Many of the detection algorithms applied to this task are
supervised and therefore require labeled examples of target and non-target data
for training. Training data most often consists of 2-dimensional images (or
patches) of GPR data, from which features are extracted, and provided to the
classifier during training and testing. Identifying desirable training and
testing locations to extract patches, which we term "keypoints", is well
established in the literature. In contrast however, a large variety of
strategies have been proposed regarding keypoint utilization (e.g., how many of
the identified keypoints should be used at targets, or non-target, locations).
Given the variety of keypoint utilization strategies that are available, it is
very unclear (i) which strategies are best, or (ii) whether the choice of
strategy has a large impact on classifier performance. We address these
questions by presenting a taxonomy of existing utilization strategies, and then
evaluating their effectiveness on a large dataset using many different
classifiers and features. We analyze the results and propose a new strategy,
called PatchSelect, which outperforms other strategies across all experiments.
| [
{
"version": "v1",
"created": "Sun, 11 Dec 2016 21:05:18 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Reichman",
"Daniël",
""
],
[
"Collins",
"Leslie M.",
""
],
[
"Malof",
"Jordan M.",
""
]
] | TITLE: On Choosing Training and Testing Data for Supervised Algorithms in
Ground Penetrating Radar Data for Buried Threat Detection
ABSTRACT: Ground penetrating radar (GPR) is one of the most popular and successful
sensing modalities that has been investigated for landmine and subsurface
threat detection. Many of the detection algorithms applied to this task are
supervised and therefore require labeled examples of target and non-target data
for training. Training data most often consists of 2-dimensional images (or
patches) of GPR data, from which features are extracted, and provided to the
classifier during training and testing. Identifying desirable training and
testing locations to extract patches, which we term "keypoints", is well
established in the literature. In contrast however, a large variety of
strategies have been proposed regarding keypoint utilization (e.g., how many of
the identified keypoints should be used at targets, or non-target, locations).
Given the variety of keypoint utilization strategies that are available, it is
very unclear (i) which strategies are best, or (ii) whether the choice of
strategy has a large impact on classifier performance. We address these
questions by presenting a taxonomy of existing utilization strategies, and then
evaluating their effectiveness on a large dataset using many different
classifiers and features. We analyze the results and propose a new strategy,
called PatchSelect, which outperforms other strategies across all experiments.
| no_new_dataset | 0.953751 |
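
A sketch of the patch-extraction step the record above studies: cutting fixed-size 2-D patches out of a GPR B-scan around candidate keypoints. The patch size, toy data, and keypoint list are assumptions.

```python
# Keypoint-centered patch extraction from a GPR B-scan (numpy sketch).
import numpy as np

def extract_patches(bscan, keypoints, half=8):
    """bscan: (depth, scan) array; keypoints: list of (row, col) centers."""
    patches = []
    for r, c in keypoints:
        if half <= r < bscan.shape[0] - half and half <= c < bscan.shape[1] - half:
            patches.append(bscan[r - half:r + half, c - half:c + half])
    return np.stack(patches)

bscan = np.random.default_rng(0).standard_normal((128, 256))
patches = extract_patches(bscan, [(40, 60), (90, 200)])
print(patches.shape)  # (2, 16, 16)
```
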
1612.03628 | Marc Bola\~nos | Marc Bola\~nos, \'Alvaro Peris, Francisco Casacuberta, Petia Radeva | VIBIKNet: Visual Bidirectional Kernelized Network for Visual Question
Answering | Submitted to IbPRIA'17, 8 pages, 3 figures, 1 table | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of visual question answering by
proposing a novel model, called VIBIKNet. Our model is based on integrating
Kernelized Convolutional Neural Networks and Long-Short Term Memory units to
generate an answer given a question about an image. We prove that VIBIKNet is
an optimal trade-off between accuracy and computational load, in terms of
memory and time consumption. We validate our method on the VQA challenge
dataset and compare it to the top performing methods in order to illustrate its
performance and speed.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 11:41:46 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Bolaños",
"Marc",
""
],
[
"Peris",
"Álvaro",
""
],
[
"Casacuberta",
"Francisco",
""
],
[
"Radeva",
"Petia",
""
]
] | TITLE: VIBIKNet: Visual Bidirectional Kernelized Network for Visual Question
Answering
ABSTRACT: In this paper, we address the problem of visual question answering by
proposing a novel model, called VIBIKNet. Our model is based on integrating
Kernelized Convolutional Neural Networks and Long-Short Term Memory units to
generate an answer given a question about an image. We prove that VIBIKNet is
an optimal trade-off between accuracy and computational load, in terms of
memory and time consumption. We validate our method on the VQA challenge
dataset and compare it to the top performing methods in order to illustrate its
performance and speed.
| no_new_dataset | 0.949106 |
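
VIBIKNet combines kernelized CNN image features with LSTM question encoding; the sketch below is a generic late-fusion VQA baseline of the same shape, not the paper's kernelized pooling. All sizes are assumptions.

```python
# Late-fusion VQA baseline sketch: LSTM question encoder + image feature + head.
import torch
import torch.nn as nn

class TinyVQA(nn.Module):
    def __init__(self, vocab=1000, answers=100, dim=256, img_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.fuse = nn.Linear(2 * dim + img_dim, answers)

    def forward(self, image_feat, question_ids):
        _, (h, _) = self.lstm(self.embed(question_ids))
        q = torch.cat([h[0], h[1]], dim=1)    # final states of both directions
        return self.fuse(torch.cat([q, image_feat], dim=1))

logits = TinyVQA()(torch.randn(4, 512), torch.randint(0, 1000, (4, 12)))
print(logits.shape)  # torch.Size([4, 100])
```
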
1612.03630 | Zichuan Liu | Zichuan Liu, Yixing Li, Fengbo Ren, Hao Yu | A Binary Convolutional Encoder-decoder Network for Real-time Natural
Scene Text Processing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop a binary convolutional encoder-decoder network
(B-CEDNet) for natural scene text processing (NSTP). It converts a text image
to a class-distinguished salience map that reveals the categorical, spatial and
morphological information of characters. The existing solutions are either
memory consuming or run-time consuming and cannot be applied to real-time
applications on resource-constrained devices such as advanced driver assistance
systems. The developed network can process multiple regions containing
characters by one-off forward operation, and is trained to have binary weights
and binary feature maps, which lead to both remarkable inference run-time
speedup and memory usage reduction. By training with over 200,000 synthesized
scene text images (size of $32\times128$), it can achieve $90\%$ and $91\%$
pixel-wise accuracy on ICDAR-03 and ICDAR-13 datasets. It only consumes $4.59\
ms$ inference run-time realized on GPU with a small network size of 2.14 MB,
which is up to $8\times$ faster and $96\%$ smaller than it full-precision
version.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 11:48:00 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Liu",
"Zichuan",
""
],
[
"Li",
"Yixing",
""
],
[
"Ren",
"Fengbo",
""
],
[
"Yu",
"Hao",
""
]
] | TITLE: A Binary Convolutional Encoder-decoder Network for Real-time Natural
Scene Text Processing
ABSTRACT: In this paper, we develop a binary convolutional encoder-decoder network
(B-CEDNet) for natural scene text processing (NSTP). It converts a text image
to a class-distinguished salience map that reveals the categorical, spatial and
morphological information of characters. The existing solutions are either
memory consuming or run-time consuming and cannot be applied to real-time
applications on resource-constrained devices such as advanced driver assistance
systems. The developed network can process multiple regions containing
characters by one-off forward operation, and is trained to have binary weights
and binary feature maps, which lead to both remarkable inference run-time
speedup and memory usage reduction. By training with over 200,000 synthesized
scene text images (size of $32\times128$), it can achieve $90\%$ and $91\%$
pixel-wise accuracy on ICDAR-03 and ICDAR-13 datasets. It only consumes $4.59\
ms$ inference run-time realized on GPU with a small network size of 2.14 MB,
which is up to $8\times$ faster and $96\%$ smaller than its full-precision
version.
| no_new_dataset | 0.950778 |
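
B-CEDNet trains binary weights and binary feature maps; shown here is only the generic binarization trick such networks commonly rely on (sign in the forward pass, straight-through gradients in the backward pass), not the paper's full encoder-decoder.

```python
# Weight binarization with a straight-through estimator (PyTorch sketch).
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)                      # weights become {-1, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # pass gradients through |w|<=1

w = torch.randn(4, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)
```
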
1612.03707 | Yuzhen Lu | Yuzhen Lu | Empirical Evaluation of A New Approach to Simplifying Long Short-term
Memory (LSTM) | 5 pages, 5 figures | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The standard LSTM, although it succeeds in modeling long-range
dependencies, suffers from a highly complex structure that can be simplified
through modifications to its gate units. This paper performs an empirical
comparison between the standard LSTM and three new simplified variants that
were obtained by eliminating input signal, bias and hidden unit signal from
individual gates, on the tasks of modeling two sequence datasets. The
experiments show that the three variants, with reduced parameters, can achieve
comparable performance with the standard LSTM. Due attention should be paid to
tuning the learning rate to achieve high accuracies.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 14:36:22 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Lu",
"Yuzhen",
""
]
] | TITLE: Empirical Evaluation of A New Approach to Simplifying Long Short-term
Memory (LSTM)
ABSTRACT: The standard LSTM, although it succeeds in modeling long-range
dependencies, suffers from a highly complex structure that can be simplified
through modifications to its gate units. This paper performs an empirical
comparison between the standard LSTM and three new simplified variants that
were obtained by eliminating input signal, bias and hidden unit signal from
individual gates, on the tasks of modeling two sequence datasets. The
experiments show that the three variants, with reduced parameters, can achieve
comparable performance with the standard LSTM. Due attention should be paid to
tuning the learning rate to achieve high accuracies.
| no_new_dataset | 0.945399 |
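
One way to realize the kind of gate simplification the record above evaluates: an LSTM cell whose gates drop their bias terms, written out by hand. The exact signals each of the paper's three variants removes differ; treat this as illustrative only.

```python
# Hand-written LSTM cell with bias-free gates (PyTorch sketch).
import torch
import torch.nn as nn

class NoBiasGateLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.x2g = nn.Linear(input_size, 4 * hidden_size, bias=False)
        self.h2g = nn.Linear(hidden_size, 4 * hidden_size, bias=False)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = (self.x2g(x) + self.h2g(h)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = NoBiasGateLSTMCell(16, 32)
h = c = torch.zeros(4, 32)
h, c = cell(torch.randn(4, 16), (h, c))
```
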
1612.03762 | Margherita Zorzi | Carlo Combi, Margherita Zorzi, Gabriele Pozzani, Ugo Moretti | From narrative descriptions to MedDRA: automagically encoding adverse
drug reactions | arXiv admin note: substantial text overlap with arXiv:1506.08052 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The collection of narrative spontaneous reports is an irreplaceable source
for the prompt detection of suspected adverse drug reactions (ADRs): qualified
domain experts manually revise a huge amount of narrative descriptions and then
encode texts according to MedDRA standard terminology. The manual annotation of
narrative documents with medical terminology is a subtle and expensive task,
since the number of reports is growing up day-by-day. MagiCoder, a Natural
Language Processing algorithm, is proposed for the automatic encoding of
free-text descriptions into MedDRA terms. MagiCoder procedure is efficient in
terms of computational complexity (in particular, it is linear in the size of
the narrative input and the terminology). We tested it on a large dataset of
about 4500 manually revised reports, by performing an automated comparison
between human and MagiCoder revisions. For the current base version of
MagiCoder, we measured: on short descriptions, an average recall of $86\%$ and
an average precision of $88\%$; on medium-long descriptions (up to 255
characters), an average recall of $64\%$ and an average precision of $63\%$.
From a practical point of view, MagiCoder reduces the time required for
encoding ADR reports. Pharmacologists simply have to review and validate the
MagiCoder terms proposed by the application, instead of choosing the right
terms among the 70K low level terms of MedDRA. Such improvement in the
efficiency of pharmacologists' work has a relevant impact also on the quality
of the subsequent data analysis. We developed MagiCoder for the Italian
pharmacovigilance language. However, our proposal is based on a general
approach, not depending on the considered language nor the term dictionary.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 16:14:02 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Combi",
"Carlo",
""
],
[
"Zorzi",
"Margherita",
""
],
[
"Pozzani",
"Gabriele",
""
],
[
"Moretti",
"Ugo",
""
]
] | TITLE: From narrative descriptions to MedDRA: automagically encoding adverse
drug reactions
ABSTRACT: The collection of narrative spontaneous reports is an irreplaceable source
for the prompt detection of suspected adverse drug reactions (ADRs): qualified
domain experts manually revise a huge amount of narrative descriptions and then
encode texts according to MedDRA standard terminology. The manual annotation of
narrative documents with medical terminology is a subtle and expensive task,
since the number of reports is growing day by day. MagiCoder, a Natural
Language Processing algorithm, is proposed for the automatic encoding of
free-text descriptions into MedDRA terms. The MagiCoder procedure is efficient in
terms of computational complexity (in particular, it is linear in the size of
the narrative input and the terminology). We tested it on a large dataset of
about 4500 manually revised reports, by performing an automated comparison
between human and MagiCoder revisions. For the current base version of
MagiCoder, we measured: on short descriptions, an average recall of $86\%$ and
an average precision of $88\%$; on medium-long descriptions (up to 255
characters), an average recall of $64\%$ and an average precision of $63\%$.
From a practical point of view, MagiCoder reduces the time required for
encoding ADR reports. Pharmacologists simply have to review and validate the
MagiCoder terms proposed by the application, instead of choosing the right
terms among the 70K low level terms of MedDRA. Such improvement in the
efficiency of pharmacologists' work has a relevant impact also on the quality
of the subsequent data analysis. We developed MagiCoder for the Italian
pharmacovigilance language. However, our proposal is based on a general
approach that depends neither on the considered language nor on the term dictionary.
| no_new_dataset | 0.943243 |
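For the MagiCoder record above, a minimal sketch of dictionary-driven encoding of free text into standard terms, linear in the sizes of the input and the terminology. The term list and codes below are hypothetical placeholders (the real MedDRA dictionary is licensed, and MagiCoder's actual matching procedure is not reproduced here).

```python
# Toy terminology; the real MedDRA dictionary is licensed and far larger,
# and these codes are hypothetical placeholders.
TERMS = {
    "headache": "10019211",
    "nausea": "10028813",
    "injection site pain": "10022086",
}
MAX_LEN = 3   # longest term, in tokens

def encode_report(text):
    """Greedy longest-match of known terms; one pass over the tokens, so
    linear in the input size for a fixed maximum term length."""
    tokens = text.lower().split()
    hits, i = [], 0
    while i < len(tokens):
        for n in range(MAX_LEN, 0, -1):          # longest candidate first
            phrase = " ".join(tokens[i:i + n])
            if phrase in TERMS:
                hits.append((phrase, TERMS[phrase]))
                i += n
                break
        else:
            i += 1
    return hits

print(encode_report("Patient reported injection site pain and mild nausea"))
# [('injection site pain', '10022086'), ('nausea', '10028813')]
```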
1612.03900 | Xiaofang Wang | Xiaofang Wang, Yi Shi and Kris M. Kitani | Deep Supervised Hashing with Triplet Labels | Appears in ACCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hashing is one of the most popular and powerful approximate nearest neighbor
search techniques for large-scale image retrieval. Most traditional hashing
methods first represent images as off-the-shelf visual features and then
produce hashing codes in a separate stage. However, off-the-shelf visual
features may not be optimally compatible with the hash code learning procedure,
which may result in sub-optimal hash codes. Recently, deep hashing methods have
been proposed to simultaneously learn image features and hash codes using deep
neural networks and have shown superior performance over traditional hashing
methods. Most deep hashing methods are given supervised information in the form
of pairwise labels or triplet labels. The current state-of-the-art deep hashing
method DPSH~\cite{li2015feature}, which is based on pairwise labels, performs
image feature learning and hash code learning simultaneously by maximizing the
likelihood of pairwise similarities. Inspired by DPSH~\cite{li2015feature}, we
propose a triplet label based deep hashing method which aims to maximize the
likelihood of the given triplet labels. Experimental results show that our
method outperforms all the baselines on CIFAR-10 and NUS-WIDE datasets,
including the state-of-the-art method DPSH~\cite{li2015feature} and all the
previous triplet label based deep hashing methods.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 20:56:38 GMT"
}
] | 2016-12-13T00:00:00 | [
[
"Wang",
"Xiaofang",
""
],
[
"Shi",
"Yi",
""
],
[
"Kitani",
"Kris M.",
""
]
] | TITLE: Deep Supervised Hashing with Triplet Labels
ABSTRACT: Hashing is one of the most popular and powerful approximate nearest neighbor
search techniques for large-scale image retrieval. Most traditional hashing
methods first represent images as off-the-shelf visual features and then
produce hashing codes in a separate stage. However, off-the-shelf visual
features may not be optimally compatible with the hash code learning procedure,
which may result in sub-optimal hash codes. Recently, deep hashing methods have
been proposed to simultaneously learn image features and hash codes using deep
neural networks and have shown superior performance over traditional hashing
methods. Most deep hashing methods are given supervised information in the form
of pairwise labels or triplet labels. The current state-of-the-art deep hashing
method DPSH~\cite{li2015feature}, which is based on pairwise labels, performs
image feature learning and hash code learning simultaneously by maximizing the
likelihood of pairwise similarities. Inspired by DPSH~\cite{li2015feature}, we
propose a triplet label based deep hashing method which aims to maximize the
likelihood of the given triplet labels. Experimental results show that our
method outperforms all the baselines on CIFAR-10 and NUS-WIDE datasets,
including the state-of-the-art method DPSH~\cite{li2015feature} and all the
previous triplet label based deep hashing methods.
| no_new_dataset | 0.943764 |
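A hedged numpy sketch of the triplet likelihood described in the record above: using the DPSH-style half inner product as the similarity, the loss maximizes the probability that the anchor is closer to the positive than to the negative. The paper's exact relaxation and regularizers may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def triplet_hash_nll(u, u_pos, u_neg):
    """Negative log-likelihood of one (anchor, positive, negative) triplet
    over relaxed (real-valued) hash outputs, using the DPSH-style half
    inner product as the similarity theta."""
    theta_pos = 0.5 * np.dot(u, u_pos)
    theta_neg = 0.5 * np.dot(u, u_neg)
    return -np.log(sigmoid(theta_pos - theta_neg) + 1e-12)

u, u_pos, u_neg = np.random.default_rng(0).standard_normal((3, 48))
print(triplet_hash_nll(u, u_pos, u_neg))         # loss for 48-bit relaxed codes
```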
1604.01655 | Ziyan Wang | Ziyan Wang, Jiwen Lu, Ruogu Lin, Jianjiang Feng, Jie zhou | Correlated and Individual Multi-Modal Deep Learning for RGB-D Object
Recognition | 11 pages, 7 figures, submitted to a conference in 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new correlated and individual multi-modal deep
learning (CIMDL) method for RGB-D object recognition. Unlike most conventional
RGB-D object recognition methods which extract features from the RGB and depth
channels individually, our CIMDL jointly learns feature representations from
raw RGB-D data with a pair of deep neural networks, so that the sharable and
modal-specific information can be simultaneously exploited. Specifically, we
construct a pair of deep convolutional neural networks (CNNs) for the RGB and
depth data, and concatenate them at the top layer of the network with a loss
function which learns a new feature space where both the correlated and the
individual parts of the RGB-D information are well modelled. The parameters of
the whole networks are updated by using the back-propagation criterion.
Experimental results on two widely used RGB-D object image benchmark datasets
clearly show that our method outperforms the state of the art.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 15:06:02 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2016 12:08:07 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Dec 2016 13:56:02 GMT"
}
] | 2016-12-12T00:00:00 | [
[
"Wang",
"Ziyan",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Lin",
"Ruogu",
""
],
[
"Feng",
"Jianjiang",
""
],
[
"zhou",
"Jie",
""
]
] | TITLE: Correlated and Individual Multi-Modal Deep Learning for RGB-D Object
Recognition
ABSTRACT: In this paper, we propose a new correlated and individual multi-modal deep
learning (CIMDL) method for RGB-D object recognition. Unlike most conventional
RGB-D object recognition methods which extract features from the RGB and depth
channels individually, our CIMDL jointly learns feature representations from
raw RGB-D data with a pair of deep neural networks, so that the sharable and
modal-specific information can be simultaneously exploited. Specifically, we
construct a pair of deep convolutional neural networks (CNNs) for the RGB and
depth data, and concatenate them at the top layer of the network with a loss
function which learns a new feature space where both the correlated and the
individual parts of the RGB-D information are well modelled. The parameters of
the whole networks are updated by using the back-propagation criterion.
Experimental results on two widely used RGB-D object image benchmark datasets
clearly show that our method outperforms the state of the art.
| no_new_dataset | 0.946349 |
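A toy illustration of the representation structure described in the CIMDL record above: each modality's feature is split into a shared (correlated) part and a modality-specific part before concatenation. In the paper the projections sit on top of two CNNs and are learned end-to-end by backpropagation; the random projections here are placeholders that only show the shape of the representation.

```python
import numpy as np

def correlated_individual_split(f_rgb, f_depth, k_shared=16, k_priv=16, seed=0):
    """Project each modality into a shared (correlated) part and a
    modality-specific part, then concatenate. The projections here are
    random placeholders; in the paper they are learned jointly."""
    rng = np.random.default_rng(seed)
    d = f_rgb.shape[-1]
    P_shared = rng.standard_normal((d, k_shared)) / np.sqrt(d)
    P_rgb = rng.standard_normal((d, k_priv)) / np.sqrt(d)
    P_depth = rng.standard_normal((d, k_priv)) / np.sqrt(d)
    shared = 0.5 * (f_rgb @ P_shared + f_depth @ P_shared)   # correlated part
    return np.concatenate([shared, f_rgb @ P_rgb, f_depth @ P_depth], axis=-1)

f_rgb, f_depth = np.random.default_rng(1).standard_normal((2, 4, 128))
print(correlated_individual_split(f_rgb, f_depth).shape)     # (4, 48)
```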
1607.06972 | Zhiyuan Shi | Seungryul Baek, Zhiyuan Shi, Masato Kawade, Tae-Kyun Kim | Kinematic-Layout-aware Random Forests for Depth-based Action Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we tackle the problem of monitoring patient actions in a ward
24 hours a day, for actions such as "stretching an arm out of the bed" and "falling out of the bed",
where temporal movements are subtle or significant. In the concerned scenarios,
the relations between scene layouts and body kinematics (skeletons) become
important cues to recognize actions; however they are hard to be secured at a
testing stage. To address this problem, we propose a kinematic-layout-aware
random forest which takes into account the kinematic-layout (i.e., layout and
skeletons), to maximize the discriminative power of depth image appearance. We
integrate the kinematic-layout in the split criteria of random forests to guide
the learning process by 1) determining the switch to either the depth
appearance or the kinematic-layout information, and 2) implicitly closing the
gap between two distributions obtained by the kinematic-layout and the
appearance, when the kinematic-layout appears useful. The kinematic-layout
information is not required for the test data, thus called "privileged
information prior". The proposed method has also been testified in cross-view
settings, by the use of view-invariant features and enforcing the consistency
among synthetic-view data. Experimental evaluations on our new dataset PATIENT,
CAD-60 and UWA3D (multiview) demonstrate that our method outperforms various
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sat, 23 Jul 2016 20:36:39 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2016 16:30:28 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Dec 2016 11:32:54 GMT"
}
] | 2016-12-12T00:00:00 | [
[
"Baek",
"Seungryul",
""
],
[
"Shi",
"Zhiyuan",
""
],
[
"Kawade",
"Masato",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] | TITLE: Kinematic-Layout-aware Random Forests for Depth-based Action Recognition
ABSTRACT: In this paper, we tackle the problem of monitoring patient actions in a ward
24 hours a day, for actions such as "stretching an arm out of the bed" and "falling out of the bed",
where temporal movements are subtle or significant. In the concerned scenarios,
the relations between scene layouts and body kinematics (skeletons) become
important cues to recognize actions; however they are hard to be secured at a
testing stage. To address this problem, we propose a kinematic-layout-aware
random forest which takes into account the kinematic-layout (i.e., layout and
skeletons), to maximize the discriminative power of depth image appearance. We
integrate the kinematic-layout in the split criteria of random forests to guide
the learning process by 1) determining the switch to either the depth
appearance or the kinematic-layout information, and 2) implicitly closing the
gap between two distributions obtained by the kinematic-layout and the
appearance, when the kinematic-layout appears useful. The kinematic-layout
information is not required for the test data, thus called "privileged
information prior". The proposed method has also been testified in cross-view
settings, by the use of view-invariant features and enforcing the consistency
among synthetic-view data. Experimental evaluations on our new dataset PATIENT,
CAD-60 and UWA3D (multiview) demonstrate that our method outperforms various
state-of-the-art methods.
| new_dataset | 0.947186 |
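A minimal sketch of the gain-based switch in the split criterion described in the record above: at each node the tree may test either the depth-appearance channel or the privileged kinematic-layout channel, whichever yields higher information gain. The paper's criterion additionally aligns the two induced distributions, which is not reproduced here.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def info_gain(labels, go_left):
    left, right = labels[go_left], labels[~go_left]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    w = len(left) / len(labels)
    return entropy(labels) - w * entropy(left) - (1 - w) * entropy(right)

def choose_split(appearance_split, layout_split, labels):
    """Switch between a depth-appearance test and a privileged
    kinematic-layout test by whichever yields higher information gain.
    Both arguments are boolean left/right partitions of the node samples."""
    g_app = info_gain(labels, appearance_split)
    g_lay = info_gain(labels, layout_split)
    return ("layout", g_lay) if g_lay > g_app else ("appearance", g_app)

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, 200)
app = rng.random(200) > 0.5                      # an uninformative split
lay = labels < 1                                 # a very pure privileged split
print(choose_split(app, lay, labels))            # ('layout', ...)
```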
1612.02120 | Yaron Meirovitch | Yaron Meirovitch, Alexander Matveev, Hayk Saribekyan, David Budden,
David Rolnick, Gergely Odor, Seymour Knowles-Barley, Thouis Raymond Jones,
Hanspeter Pfister, Jeff William Lichtman, Nir Shavit | A Multi-Pass Approach to Large-Scale Connectomics | 18 pages, 10 figures | null | null | null | q-bio.QM cs.AI q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of connectomics faces unprecedented "big data" challenges. To
reconstruct neuronal connectivity, automated pixel-level segmentation is
required for petabytes of streaming electron microscopy data. Existing
algorithms provide relatively good accuracy but are unacceptably slow, and
would require years to extract connectivity graphs from even a single cubic
millimeter of neural tissue. Here we present a viable real-time solution, a
multi-pass pipeline optimized for shared-memory multicore systems, capable of
processing data at near the terabyte-per-hour pace of multi-beam electron
microscopes. The pipeline makes an initial fast-pass over the data, and then
makes a second slow-pass to iteratively correct errors in the output of the
fast-pass. We demonstrate the accuracy of a sparse slow-pass reconstruction
algorithm and suggest new methods for detecting morphological errors. Our
fast-pass approach provided many algorithmic challenges, including the design
and implementation of novel shallow convolutional neural nets and the
parallelization of watershed and object-merging techniques. We use it to
reconstruct, from image stack to skeletons, the full dataset of Kasthuri et al.
(463 GB capturing 120,000 cubic microns) in a matter of hours on a single
multicore machine rather than the weeks it has taken in the past on much larger
distributed systems.
| [
{
"version": "v1",
"created": "Wed, 7 Dec 2016 05:46:24 GMT"
}
] | 2016-12-12T00:00:00 | [
[
"Meirovitch",
"Yaron",
""
],
[
"Matveev",
"Alexander",
""
],
[
"Saribekyan",
"Hayk",
""
],
[
"Budden",
"David",
""
],
[
"Rolnick",
"David",
""
],
[
"Odor",
"Gergely",
""
],
[
"Knowles-Barley",
"Seymour",
""
],
[
"Jones",
"Thouis Raymond",
""
],
[
"Pfister",
"Hanspeter",
""
],
[
"Lichtman",
"Jeff William",
""
],
[
"Shavit",
"Nir",
""
]
] | TITLE: A Multi-Pass Approach to Large-Scale Connectomics
ABSTRACT: The field of connectomics faces unprecedented "big data" challenges. To
reconstruct neuronal connectivity, automated pixel-level segmentation is
required for petabytes of streaming electron microscopy data. Existing
algorithms provide relatively good accuracy but are unacceptably slow, and
would require years to extract connectivity graphs from even a single cubic
millimeter of neural tissue. Here we present a viable real-time solution, a
multi-pass pipeline optimized for shared-memory multicore systems, capable of
processing data at near the terabyte-per-hour pace of multi-beam electron
microscopes. The pipeline makes an initial fast-pass over the data, and then
makes a second slow-pass to iteratively correct errors in the output of the
fast-pass. We demonstrate the accuracy of a sparse slow-pass reconstruction
algorithm and suggest new methods for detecting morphological errors. Our
fast-pass approach provided many algorithmic challenges, including the design
and implementation of novel shallow convolutional neural nets and the
parallelization of watershed and object-merging techniques. We use it to
reconstruct, from image stack to skeletons, the full dataset of Kasthuri et al.
(463 GB capturing 120,000 cubic microns) in a matter of hours on a single
multicore machine rather than the weeks it has taken in the past on much larger
distributed systems.
| no_new_dataset | 0.947332 |
1612.02844 | Hang Zhang | Hang Zhang, Jia Xue, Kristin Dana | Deep TEN: Texture Encoding Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a Deep Texture Encoding Network (Deep-TEN) with a novel Encoding
Layer integrated on top of convolutional layers, which ports the entire
dictionary learning and encoding pipeline into a single model. Current methods
build from distinct components, using standard encoders with separate
off-the-shelf features such as SIFT descriptors or pre-trained CNN features for
material recognition. Our new approach provides an end-to-end learning
framework, where the inherent visual vocabularies are learned directly from the
loss function. The features, dictionaries and the encoding representation for
the classifier are all learned simultaneously. The representation is orderless
and therefore is particularly useful for material and texture recognition. The
Encoding Layer generalizes robust residual encoders such as VLAD and Fisher
Vectors, and has the property of discarding domain specific information which
makes the learned convolutional features easier to transfer. Additionally,
joint training using multiple datasets of varied sizes and class labels is
supported resulting in increased recognition performance. The experimental
results show superior performance as compared to state-of-the-art methods using
gold-standard databases such as MINC-2500, Flickr Material Database,
KTH-TIPS-2b, and two recent databases 4D-Light-Field-Material and GTOS. The
source code for the complete system are publicly available.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2016 21:27:31 GMT"
}
] | 2016-12-12T00:00:00 | [
[
"Zhang",
"Hang",
""
],
[
"Xue",
"Jia",
""
],
[
"Dana",
"Kristin",
""
]
] | TITLE: Deep TEN: Texture Encoding Network
ABSTRACT: We propose a Deep Texture Encoding Network (Deep-TEN) with a novel Encoding
Layer integrated on top of convolutional layers, which ports the entire
dictionary learning and encoding pipeline into a single model. Current methods
build from distinct components, using standard encoders with separate
off-the-shelf features such as SIFT descriptors or pre-trained CNN features for
material recognition. Our new approach provides an end-to-end learning
framework, where the inherent visual vocabularies are learned directly from the
loss function. The features, dictionaries and the encoding representation for
the classifier are all learned simultaneously. The representation is orderless
and therefore is particularly useful for material and texture recognition. The
Encoding Layer generalizes robust residual encoders such as VLAD and Fisher
Vectors, and has the property of discarding domain specific information which
makes the learned convolutional features easier to transfer. Additionally,
joint training using multiple datasets of varied sizes and class labels is
supported resulting in increased recognition performance. The experimental
results show superior performance as compared to state-of-the-art methods using
gold-standard databases such as MINC-2500, Flickr Material Database,
KTH-TIPS-2b, and two recent databases 4D-Light-Field-Material and GTOS. The
source code for the complete system is publicly available.
| no_new_dataset | 0.955026 |
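A numpy sketch of the forward pass of the Encoding Layer described in the record above: soft-assigned residuals between descriptors and codewords are aggregated into an orderless representation, generalizing VLAD-style encoders. In the paper the codewords C and smoothing factors s are learned end-to-end by backpropagation; only the forward aggregation is shown.

```python
import numpy as np

def encoding_layer(X, C, s):
    """Forward pass of the residual encoding: soft-assign each descriptor
    to every codeword and aggregate the weighted residuals (orderless).
    X: (N, D) descriptors, C: (K, D) codewords, s: (K,) smoothing factors."""
    R = X[:, None, :] - C[None, :, :]            # residuals r_ik, shape (N, K, D)
    A = np.exp(-s[None, :] * (R ** 2).sum(-1))   # unnormalized assignments
    A /= A.sum(axis=1, keepdims=True)            # soft assignments a_ik
    return (A[:, :, None] * R).sum(axis=0)       # e_k = sum_i a_ik * r_ik

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))                # 100 local descriptors
C = rng.standard_normal((4, 8))                  # 4 codewords
print(encoding_layer(X, C, np.ones(4)).shape)    # (4, 8) orderless encoding
```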
1612.03094 | Adri\`a Recasens | Adri\`a Recasens, Carl Vondrick, Aditya Khosla, Antonio Torralba | Following Gaze Across Views | 9 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following the gaze of people inside videos is an important signal for
understanding people and their actions. In this paper, we present an approach
for following gaze across views by predicting where a particular person is
looking throughout a scene. We collect VideoGaze, a new dataset which we use as
a benchmark to both train and evaluate models. Given one view with a person in
it and a second view of the scene, our model estimates a density for gaze
location in the second view. A key aspect of our approach is an end-to-end
model that solves the following sub-problems: saliency, gaze pose, and
geometric relationships between views. Although our model is supervised only
with gaze, we show that the model learns to solve these subproblems
automatically without supervision. Experiments suggest that our approach
follows gaze better than standard baselines and produces plausible results for
everyday situations.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2016 17:20:17 GMT"
}
] | 2016-12-12T00:00:00 | [
[
"Recasens",
"Adrià",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Khosla",
"Aditya",
""
],
[
"Torralba",
"Antonio",
""
]
] | TITLE: Following Gaze Across Views
ABSTRACT: Following the gaze of people inside videos is an important signal for
understanding people and their actions. In this paper, we present an approach
for following gaze across views by predicting where a particular person is
looking throughout a scene. We collect VideoGaze, a new dataset which we use as
a benchmark to both train and evaluate models. Given one view with a person in
it and a second view of the scene, our model estimates a density for gaze
location in the second view. A key aspect of our approach is an end-to-end
model that solves the following sub-problems: saliency, gaze pose, and
geometric relationships between views. Although our model is supervised only
with gaze, we show that the model learns to solve these subproblems
automatically without supervision. Experiments suggest that our approach
follows gaze better than standard baselines and produces plausible results for
everyday situations.
| new_dataset | 0.955858 |
1611.07478 | Scott Lundberg | Scott Lundberg and Su-In Lee | An unexpected unity among methods for interpreting model predictions | Presented at NIPS 2016 Workshop on Interpretable Machine Learning in
Complex Systems | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding why a model made a certain prediction is crucial in many data
science fields. Interpretable predictions engender appropriate trust and
provide insight into how the model may be improved. However, with large modern
datasets the best accuracy is often achieved by complex models that even experts
struggle to interpret, which creates a tension between accuracy and
interpretability. Recently, several methods have been proposed for interpreting
predictions from complex models by estimating the importance of input features.
Here, we present how a model-agnostic additive representation of the importance
of input features unifies current methods. This representation is optimal, in
the sense that it is the only set of additive values that satisfies important
properties. We show how we can leverage these properties to create novel visual
explanations of model predictions. The thread of unity that this representation
weaves through the literature indicates that there are common principles to be
learned about the interpretation of model predictions that apply in many
scenarios.
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2016 19:30:28 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Nov 2016 06:44:36 GMT"
},
{
"version": "v3",
"created": "Thu, 8 Dec 2016 08:24:15 GMT"
}
] | 2016-12-09T00:00:00 | [
[
"Lundberg",
"Scott",
""
],
[
"Lee",
"Su-In",
""
]
] | TITLE: An unexpected unity among methods for interpreting model predictions
ABSTRACT: Understanding why a model made a certain prediction is crucial in many data
science fields. Interpretable predictions engender appropriate trust and
provide insight into how the model may be improved. However, with large modern
datasets the best accuracy is often achieved by complex models that even experts
struggle to interpret, which creates a tension between accuracy and
interpretability. Recently, several methods have been proposed for interpreting
predictions from complex models by estimating the importance of input features.
Here, we present how a model-agnostic additive representation of the importance
of input features unifies current methods. This representation is optimal, in
the sense that it is the only set of additive values that satisfies important
properties. We show how we can leverage these properties to create novel visual
explanations of model predictions. The thread of unity that this representation
weaves through the literature indicates that there are common principles to be
learned about the interpretation of model predictions that apply in many
scenarios.
| no_new_dataset | 0.940024 |
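A Monte-Carlo sketch of the additive feature attributions the record above argues unify current explanation methods: sampling feature orderings estimates the (unique) additive values, which sum exactly to the difference between the model output at the input and at the baseline. The toy model f is an assumption for illustration.

```python
import numpy as np

def sampled_shapley(f, x, baseline, n_samples=2000, seed=0):
    """Monte-Carlo estimate of the additive (Shapley) attributions of each
    feature of x, relative to a baseline input. f maps a vector to a scalar."""
    rng = np.random.default_rng(seed)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        z = baseline.copy()
        prev = f(z)
        for j in rng.permutation(d):            # add features in random order
            z[j] = x[j]
            cur = f(z)
            phi[j] += cur - prev
            prev = cur
    return phi / n_samples

f = lambda v: 2 * v[0] + v[1] * v[2]            # toy model, an assumption
x, base = np.array([1.0, 1.0, 3.0]), np.zeros(3)
print(sampled_shapley(f, x, base))              # sums exactly to f(x) - f(base)
```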
1611.08583 | Ari Seff | Ari Seff and Jianxiong Xiao | Learning from Maps: Visual Common Sense for Autonomous Driving | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today's autonomous vehicles rely extensively on high-definition 3D maps to
navigate the environment. While this approach works well when these maps are
completely up-to-date, safe autonomous vehicles must be able to corroborate the
map's information via a real time sensor-based system. Our goal in this work is
to develop a model for road layout inference given imagery from on-board
cameras, without any reliance on high-definition maps. However, no sufficient
dataset for training such a model exists. Here, we leverage the availability of
standard navigation maps and corresponding street view images to construct an
automatically labeled, large-scale dataset for this complex scene understanding
problem. By matching road vectors and metadata from navigation maps with Google
Street View images, we can assign ground truth road layout attributes (e.g.,
distance to an intersection, one-way vs. two-way street) to the images. We then
train deep convolutional networks to predict these road layout attributes given
a single monocular RGB image. Experimental evaluation demonstrates that our
model learns to correctly infer the road attributes using only panoramas
captured by car-mounted cameras as input. Additionally, our results indicate
that this method may be suitable to the novel application of recommending
safety improvements to infrastructure (e.g., suggesting an alternative speed
limit for a street).
| [
{
"version": "v1",
"created": "Fri, 25 Nov 2016 20:56:55 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2016 22:24:52 GMT"
}
] | 2016-12-09T00:00:00 | [
[
"Seff",
"Ari",
""
],
[
"Xiao",
"Jianxiong",
""
]
] | TITLE: Learning from Maps: Visual Common Sense for Autonomous Driving
ABSTRACT: Today's autonomous vehicles rely extensively on high-definition 3D maps to
navigate the environment. While this approach works well when these maps are
completely up-to-date, safe autonomous vehicles must be able to corroborate the
map's information via a real time sensor-based system. Our goal in this work is
to develop a model for road layout inference given imagery from on-board
cameras, without any reliance on high-definition maps. However, no sufficient
dataset for training such a model exists. Here, we leverage the availability of
standard navigation maps and corresponding street view images to construct an
automatically labeled, large-scale dataset for this complex scene understanding
problem. By matching road vectors and metadata from navigation maps with Google
Street View images, we can assign ground truth road layout attributes (e.g.,
distance to an intersection, one-way vs. two-way street) to the images. We then
train deep convolutional networks to predict these road layout attributes given
a single monocular RGB image. Experimental evaluation demonstrates that our
model learns to correctly infer the road attributes using only panoramas
captured by car-mounted cameras as input. Additionally, our results indicate
that this method may be suitable to the novel application of recommending
safety improvements to infrastructure (e.g., suggesting an alternative speed
limit for a street).
| new_dataset | 0.968291 |
1612.02490 | An Qu | An Qu and Cheng Zhang and Paul Ackermann and Hedvig Kjellstr\"om | Bridging Medical Data Inference to Achilles Tendon Rupture
Rehabilitation | Workshop on Machine Learning for Healthcare, NIPS 2016, Barcelona,
Spain | null | null | null | cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imputing incomplete medical tests and predicting patient outcomes are crucial
for guiding the decision making for therapy, such as after an Achilles Tendon
Rupture (ATR). We formulate the problem of data imputation and prediction for
ATR relevant medical measurements into a recommender system framework. By
applying MatchBox, which is a collaborative filtering approach, on a real
dataset collected from 374 ATR patients, we aim at offering personalized
medical data imputation and prediction. In this work, we show the feasibility
of this approach and discuss potential research directions by conducting
initial qualitative evaluations.
| [
{
"version": "v1",
"created": "Wed, 7 Dec 2016 23:58:36 GMT"
}
] | 2016-12-09T00:00:00 | [
[
"Qu",
"An",
""
],
[
"Zhang",
"Cheng",
""
],
[
"Ackermann",
"Paul",
""
],
[
"Kjellström",
"Hedvig",
""
]
] | TITLE: Bridging Medical Data Inference to Achilles Tendon Rupture
Rehabilitation
ABSTRACT: Imputing incomplete medical tests and predicting patient outcomes are crucial
for guiding the decision making for therapy, such as after an Achilles Tendon
Rupture (ATR). We formulate the problem of data imputation and prediction for
ATR relevant medical measurements into a recommender system framework. By
applying MatchBox, which is a collaborative filtering approach, on a real
dataset collected from 374 ATR patients, we aim at offering personalized
medical data imputation and prediction. In this work, we show the feasibility
of this approach and discuss potential research directions by conducting
initial qualitative evaluations.
| no_new_dataset | 0.950503 |
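A sketch of the collaborative-filtering view taken in the record above, with plain SGD matrix factorization standing in for MatchBox (a Bayesian model with a different inference procedure): observed patient-measurement entries train low-rank factors whose product imputes the missing tests.

```python
import numpy as np

def mf_impute(R, mask, k=5, lr=0.01, reg=0.1, epochs=200, seed=0):
    """SGD matrix factorization over observed entries only; the product
    of the learned factors fills in the unobserved tests."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            e = R[i, j] - U[i] @ V[j]
            U[i] += lr * (e * V[j] - reg * U[i])
            V[j] += lr * (e * U[i] - reg * V[j])
    return U @ V

rng = np.random.default_rng(1)
true = rng.random((30, 8))                      # toy patient x measurement grid
mask = rng.random((30, 8)) < 0.6                # ~60% of tests observed
imputed = mf_impute(true * mask, mask)
print(np.abs(imputed - true)[~mask].mean())     # error on the missing entries
```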
1612.02572 | Giovanni Montana | James H Cole, Rudra PK Poudel, Dimosthenis Tsagkrasoulis, Matthan WA
Caan, Claire Steves, Tim D Spector, Giovanni Montana | Predicting brain age with deep learning from raw imaging data results in
a reliable and heritable biomarker | null | null | null | null | stat.ML cs.CV cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning analysis of neuroimaging data can accurately predict
chronological age in healthy people and deviations from healthy brain ageing
have been associated with cognitive impairment and disease. Here we sought to
further establish the credentials of "brain-predicted age" as a biomarker of
individual differences in the brain ageing process, using a predictive
modelling approach based on deep learning, and specifically convolutional
neural networks (CNN), and applied to both pre-processed and raw T1-weighted
MRI data. Firstly, we aimed to demonstrate the accuracy of CNN brain-predicted
age using a large dataset of healthy adults (N = 2001). Next, we sought to
establish the heritability of brain-predicted age using a sample of monozygotic
and dizygotic female twins (N = 62). Thirdly, we examined the test-retest and
multi-centre reliability of brain-predicted age using two samples
(within-scanner N = 20; between-scanner N = 11). CNN brain-predicted ages were
generated and compared to a Gaussian Process Regression (GPR) approach, on all
datasets. Input data were grey matter (GM) or white matter (WM) volumetric maps
generated by Statistical Parametric Mapping (SPM) or raw data. Brain-predicted
age represents an accurate, highly reliable and genetically-valid phenotype,
that has potential to be used as a biomarker of brain ageing. Moreover, age
predictions can be accurately generated on raw T1-MRI data, substantially
reducing computation time for novel data, bringing the process closer to giving
real-time information on brain health in clinical settings.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2016 09:26:08 GMT"
}
] | 2016-12-09T00:00:00 | [
[
"Cole",
"James H",
""
],
[
"Poudel",
"Rudra PK",
""
],
[
"Tsagkrasoulis",
"Dimosthenis",
""
],
[
"Caan",
"Matthan WA",
""
],
[
"Steves",
"Claire",
""
],
[
"Spector",
"Tim D",
""
],
[
"Montana",
"Giovanni",
""
]
] | TITLE: Predicting brain age with deep learning from raw imaging data results in
a reliable and heritable biomarker
ABSTRACT: Machine learning analysis of neuroimaging data can accurately predict
chronological age in healthy people and deviations from healthy brain ageing
have been associated with cognitive impairment and disease. Here we sought to
further establish the credentials of "brain-predicted age" as a biomarker of
individual differences in the brain ageing process, using a predictive
modelling approach based on deep learning, and specifically convolutional
neural networks (CNN), and applied to both pre-processed and raw T1-weighted
MRI data. Firstly, we aimed to demonstrate the accuracy of CNN brain-predicted
age using a large dataset of healthy adults (N = 2001). Next, we sought to
establish the heritability of brain-predicted age using a sample of monozygotic
and dizygotic female twins (N = 62). Thirdly, we examined the test-retest and
multi-centre reliability of brain-predicted age using two samples
(within-scanner N = 20; between-scanner N = 11). CNN brain-predicted ages were
generated and compared to a Gaussian Process Regression (GPR) approach, on all
datasets. Input data were grey matter (GM) or white matter (WM) volumetric maps
generated by Statistical Parametric Mapping (SPM) or raw data. Brain-predicted
age represents an accurate, highly reliable and genetically-valid phenotype,
that has potential to be used as a biomarker of brain ageing. Moreover, age
predictions can be accurately generated on raw T1-MRI data, substantially
reducing computation time for novel data, bringing the process closer to giving
real-time information on brain health in clinical settings.
| no_new_dataset | 0.947137 |
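A hedged sklearn sketch of the GPR baseline mentioned in the record above, run on synthetic stand-in volumetric features; the paper's actual kernel, preprocessing, and data are not specified in this record, so the kernel choice below is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))        # stand-in volumetric features
age = 45 + 3 * X[:, :5].sum(axis=1) + rng.normal(0, 2, 200)   # synthetic ages

# Kernel choice is an assumption; the record does not specify the paper's.
gpr = GaussianProcessRegressor(kernel=DotProduct() + WhiteKernel(), alpha=1e-3)
gpr.fit(X[:150], age[:150])
pred, std = gpr.predict(X[150:], return_std=True)
print(float(np.mean(np.abs(pred - age[150:]))))   # mean absolute age error
```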
1612.02631 | Yuliya Tarabalka | Seong-Gyun Jeong, Yuliya Tarabalka, Nicolas Nisse and Josiane Zerubia | Progressive Tree-like Curvilinear Structure Reconstruction with
Structured Ranking Learning and Graph Algorithm | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel tree-like curvilinear structure reconstruction algorithm
based on supervised learning and graph theory. In this work we analyze image
patches to obtain the local major orientations and the rankings that correspond
to the curvilinear structure. To extract local curvilinear features, we compute
oriented gradient information using steerable filters. We then employ
Structured Support Vector Machine for ordinal regression of the input image
patches, where the ordering is determined by shape similarity to latent
curvilinear structure. Finally, we progressively reconstruct the curvilinear
structure by looking for geodesic paths connecting remote vertices in the graph
built on the structured output rankings. Experimental results show that the
proposed algorithm faithfully provides topological features of the curvilinear
structures using minimal pixels for various datasets.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2016 13:13:01 GMT"
}
] | 2016-12-09T00:00:00 | [
[
"Jeong",
"Seong-Gyun",
""
],
[
"Tarabalka",
"Yuliya",
""
],
[
"Nisse",
"Nicolas",
""
],
[
"Zerubia",
"Josiane",
""
]
] | TITLE: Progressive Tree-like Curvilinear Structure Reconstruction with
Structured Ranking Learning and Graph Algorithm
ABSTRACT: We propose a novel tree-like curvilinear structure reconstruction algorithm
based on supervised learning and graph theory. In this work we analyze image
patches to obtain the local major orientations and the rankings that correspond
to the curvilinear structure. To extract local curvilinear features, we compute
oriented gradient information using steerable filters. We then employ
Structured Support Vector Machine for ordinal regression of the input image
patches, where the ordering is determined by shape similarity to latent
curvilinear structure. Finally, we progressively reconstruct the curvilinear
structure by looking for geodesic paths connecting remote vertices in the graph
built on the structured output rankings. Experimental results show that the
proposed algorithm faithfully provides topological features of the curvilinear
structures using minimal pixels for various datasets.
| no_new_dataset | 0.952397 |
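A sketch of the final reconstruction step described in the record above: Dijkstra geodesic paths on a pixel grid whose per-pixel cost is low where the learned ranking scores curvilinear structure highly. The ranking model itself (structured SVM ordinal regression) is not reproduced; a random cost map stands in for it.

```python
import heapq
import numpy as np

def geodesic_path(cost, src, dst):
    """Dijkstra shortest path on a 4-connected pixel grid; per-pixel cost
    should be low where the learned ranking scores curvilinearity high."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[src] = cost[src]
    pq = [(dist[src], src)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == dst:
            break
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and d + cost[ny, nx] < dist[ny, nx]:
                dist[ny, nx] = d + cost[ny, nx]
                prev[(ny, nx)] = (y, x)
                heapq.heappush(pq, (dist[ny, nx], (ny, nx)))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

cost = 1.0 - np.random.default_rng(0).random((20, 20))  # toy ranking map
print(len(geodesic_path(cost, (0, 0), (19, 19))))
```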
1612.02649 | Judy Hoffman | Judy Hoffman, Dequan Wang, Fisher Yu, Trevor Darrell | FCNs in the Wild: Pixel-level Adversarial and Constraint-based
Adaptation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fully convolutional models for dense prediction have proven successful for a
wide range of visual tasks. Such models perform well in a supervised setting,
but performance can be surprisingly poor under domain shifts that appear mild
to a human observer. For example, training on one city and testing on another
in a different geographic region and/or weather condition may result in
significantly degraded performance due to pixel-level distribution shift. In
this paper, we introduce the first domain adaptive semantic segmentation
method, proposing an unsupervised adversarial approach to pixel prediction
problems. Our method consists of both global and category specific adaptation
techniques. Global domain alignment is performed using a novel semantic
segmentation network with fully convolutional domain adversarial learning. This
initially adapted space then enables category specific adaptation through a
generalization of constrained weak learning, with explicit transfer of the
spatial layout from the source to the target domains. Our approach outperforms
baselines across different settings on multiple large-scale datasets, including
adapting across various real city environments, different synthetic
sub-domains, from simulated to real environments, and on a novel large-scale
dash-cam dataset.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2016 14:11:10 GMT"
}
] | 2016-12-09T00:00:00 | [
[
"Hoffman",
"Judy",
""
],
[
"Wang",
"Dequan",
""
],
[
"Yu",
"Fisher",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: FCNs in the Wild: Pixel-level Adversarial and Constraint-based
Adaptation
ABSTRACT: Fully convolutional models for dense prediction have proven successful for a
wide range of visual tasks. Such models perform well in a supervised setting,
but performance can be surprisingly poor under domain shifts that appear mild
to a human observer. For example, training on one city and testing on another
in a different geographic region and/or weather condition may result in
significantly degraded performance due to pixel-level distribution shift. In
this paper, we introduce the first domain adaptive semantic segmentation
method, proposing an unsupervised adversarial approach to pixel prediction
problems. Our method consists of both global and category specific adaptation
techniques. Global domain alignment is performed using a novel semantic
segmentation network with fully convolutional domain adversarial learning. This
initially adapted space then enables category specific adaptation through a
generalization of constrained weak learning, with explicit transfer of the
spatial layout from the source to the target domains. Our approach outperforms
baselines across different settings on multiple large-scale datasets, including
adapting across various real city environments, different synthetic
sub-domains, from simulated to real environments, and on a novel large-scale
dash-cam dataset.
| no_new_dataset | 0.949106 |
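A PyTorch sketch of the standard device behind the fully convolutional domain adversarial learning described in the record above: a gradient reversal layer that is the identity on the forward pass and negates (and scales) gradients on the backward pass. The paper's full method also includes category-specific constrained adaptation, which is not shown here.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated, scaled gradient on the
    backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

feats = torch.randn(2, 64, 8, 8, requires_grad=True)  # toy FCN feature map
loss = grad_reverse(feats, 0.5).mean()                 # stand-in domain head
loss.backward()
print(feats.grad.abs().sum())                          # reversed gradients flow
```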
1612.02695 | Jan Chorowski | Jan Chorowski and Navdeep Jaitly | Towards better decoding and language model integration in sequence to
sequence models | null | null | null | null | cs.NE cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recently proposed Sequence-to-Sequence (seq2seq) framework advocates
replacing complex data processing pipelines, such as an entire automatic speech
recognition system, with a single neural network trained in an end-to-end
fashion. In this contribution, we analyse an attention-based seq2seq speech
recognition system that directly transcribes recordings into characters. We
observe two shortcomings: overconfidence in its predictions and a tendency to
produce incomplete transcriptions when language models are used. We propose
practical solutions to both problems achieving competitive speaker independent
word error rates on the Wall Street Journal dataset: without separate language
models we reach 10.6% WER, while together with a trigram language model, we
reach 6.7% WER.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2016 15:23:44 GMT"
}
] | 2016-12-09T00:00:00 | [
[
"Chorowski",
"Jan",
""
],
[
"Jaitly",
"Navdeep",
""
]
] | TITLE: Towards better decoding and language model integration in sequence to
sequence models
ABSTRACT: The recently proposed Sequence-to-Sequence (seq2seq) framework advocates
replacing complex data processing pipelines, such as an entire automatic speech
recognition system, with a single neural network trained in an end-to-end
fashion. In this contribution, we analyse an attention-based seq2seq speech
recognition system that directly transcribes recordings into characters. We
observe two shortcomings: overconfidence in its predictions and a tendency to
produce incomplete transcriptions when language models are used. We propose
practical solutions to both problems achieving competitive speaker independent
word error rates on the Wall Street Journal dataset: without separate language
models we reach 10.6% WER, while together with a trigram language model, we
reach 6.7% WER.
| no_new_dataset | 0.955277 |
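A toy sketch of the kind of decoding fix the record above proposes: combining the seq2seq score with a weighted language-model term and a coverage bonus that counters the incomplete-transcription failure mode. The weights beta and gamma below are illustrative assumptions, not the paper's values.

```python
def fused_score(log_p_s2s, log_p_lm, n_covered, beta=0.5, gamma=0.6):
    """Rescoring rule: seq2seq log-probability plus a weighted language
    model term plus a coverage bonus that rewards hypotheses attending to
    more of the input (beta, gamma are illustrative tuning constants)."""
    return log_p_s2s + beta * log_p_lm + gamma * n_covered

short = fused_score(-2.0, -3.0, n_covered=4)     # truncated hypothesis
full = fused_score(-4.0, -4.0, n_covered=12)     # complete transcript
print(short, full)   # -1.1 vs 1.2: coverage lets the full transcript win
```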
1612.02701 | Andrei Sorin Sabau | Andrei Sorin Sabau | Stream Clustering using Probabilistic Data Structures | 9 pages, 3 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most density based stream clustering algorithms separate the clustering
process into an online and offline component. Exact summarized statistics are
being employed for defining micro-clusters or grid cells during the online
stage followed by macro-clustering during the offline stage. This paper
proposes a novel alternative to the traditional two phase stream clustering
scheme, introducing sketch-based data structures for assessing both stream
density and cluster membership with probabilistic accuracy guarantees. A
count-min sketch using a damped window model estimates stream density. Bloom
filters employing a variation of active-active buffering estimate cluster
membership. Instances of both types of sketches share the same set of hash
functions. The resulting stream clustering algorithm is capable of detecting
arbitrarily shaped clusters while correctly handling outliers and making no
assumption on the total number of clusters. Experimental results over a number
of real and synthetic datasets illustrate the proposed algorithm's quality and
efficiency.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2016 15:43:54 GMT"
}
] | 2016-12-09T00:00:00 | [
[
"Sabau",
"Andrei Sorin",
""
]
] | TITLE: Stream Clustering using Probabilistic Data Structures
ABSTRACT: Most density based stream clustering algorithms separate the clustering
process into an online and offline component. Exact summarized statistics are
being employed for defining micro-clusters or grid cells during the online
stage followed by macro-clustering during the offline stage. This paper
proposes a novel alternative to the traditional two phase stream clustering
scheme, introducing sketch-based data structures for assessing both stream
density and cluster membership with probabilistic accuracy guarantees. A
count-min sketch using a damped window model estimates stream density. Bloom
filters employing a variation of active-active buffering estimate cluster
membership. Instances of both types of sketches share the same set of hash
functions. The resulting stream clustering algorithm is capable of detecting
arbitrarily shaped clusters while correctly handling outliers and making no
assumption on the total number of clusters. Experimental results over a number
of real and synthetic datasets illustrate the proposed algorithm's quality and
efficiency.
| no_new_dataset | 0.948346 |
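A Python sketch of the density structure described in the record above: a count-min sketch whose counters decay under a damped window so that estimates favour recent stream points. In the full algorithm the same hash functions would also drive the Bloom filters for cluster membership; only the sketch side is shown, and the cell identifier is a hypothetical example.

```python
import hashlib

class DampedCountMin:
    """Count-min sketch under a damped window: all counters decay by `lam`
    per tick so density estimates favour recent stream points."""
    def __init__(self, width=1024, depth=4, lam=0.99):
        self.w, self.d, self.lam = width, depth, lam
        self.table = [[0.0] * width for _ in range(depth)]

    def _idx(self, item, row):
        h = hashlib.sha1(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.w

    def tick(self):                              # apply the damping factor
        for row in self.table:
            for j in range(self.w):
                row[j] *= self.lam

    def add(self, item, count=1.0):
        for r in range(self.d):
            self.table[r][self._idx(item, r)] += count

    def estimate(self, item):
        return min(self.table[r][self._idx(item, r)] for r in range(self.d))

cms = DampedCountMin()
for _ in range(5):
    cms.add("cell_17_42")                        # a hypothetical grid cell id
cms.tick()
print(cms.estimate("cell_17_42"))                # ~4.95 after one decay step
```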
1511.05082 | Yehezkel Resheff | Yehezkel S. Resheff, Shay Rotics, Ran Nathan, Daphna Weinshall | Topic Modeling of Behavioral Modes Using Sensor Data | Invited Extended version of a paper \cite{resheffmatrix} presented at
the international conference Data Science and Advanced Analytics,
Paris, France, 19-21 October 2015 | International Journal of Data Science and Analytics 1.1 (2016):
51-60 | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of Movement Ecology, like so many other fields, is experiencing a
period of rapid growth in availability of data. As the volume rises,
traditional methods are giving way to machine learning and data science, which
are playing an increasingly large part it turning this data into
science-driving insights. One rich and interesting source is the bio-logger.
These small electronic wearable devices are attached to animals free to roam in
their natural habitats, and report back readings from multiple sensors,
including GPS and accelerometer bursts. A common use of accelerometer data is
for supervised learning of behavioral modes. However, we need unsupervised
analysis tools as well, in order to overcome the inherent difficulties of
obtaining a labeled dataset, which in some cases is either infeasible or does
not successfully encompass the full repertoire of behavioral modes of interest.
Here we present a matrix factorization based topic-model method for
accelerometer bursts, derived using a linear mixture property of patch
features. Our method is validated via comparison to a labeled dataset, and is
further compared to standard clustering algorithms.
| [
{
"version": "v1",
"created": "Mon, 16 Nov 2015 18:42:04 GMT"
}
] | 2016-12-08T00:00:00 | [
[
"Resheff",
"Yehezkel S.",
""
],
[
"Rotics",
"Shay",
""
],
[
"Nathan",
"Ran",
""
],
[
"Weinshall",
"Daphna",
""
]
] | TITLE: Topic Modeling of Behavioral Modes Using Sensor Data
ABSTRACT: The field of Movement Ecology, like so many other fields, is experiencing a
period of rapid growth in availability of data. As the volume rises,
traditional methods are giving way to machine learning and data science, which
are playing an increasingly large part in turning this data into
science-driving insights. One rich and interesting source is the bio-logger.
These small electronic wearable devices are attached to animals free to roam in
their natural habitats, and report back readings from multiple sensors,
including GPS and accelerometer bursts. A common use of accelerometer data is
for supervised learning of behavioral modes. However, we need unsupervised
analysis tools as well, in order to overcome the inherent difficulties of
obtaining a labeled dataset, which in some cases is either infeasible or does
not successfully encompass the full repertoire of behavioral modes of interest.
Here we present a matrix factorization based topic-model method for
accelerometer bursts, derived using a linear mixture property of patch
features. Our method is validated via comparison to a labeled dataset, and is
further compared to standard clustering algorithms.
| no_new_dataset | 0.944638 |
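A sketch of the matrix-factorization topic model described in the record above, using sklearn NMF on synthetic accelerometer-burst histograms generated via the linear mixture property the paper exploits; the actual patch features and factorization variant may differ.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
true_modes = rng.random((3, 20))                 # 3 latent behavioral modes
mixture = rng.dirichlet(np.ones(3), size=200)    # per-burst linear mixtures
X = mixture @ true_modes + 0.01 * rng.random((200, 20))

model = NMF(n_components=3, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)                       # per-burst mode weights
H = model.components_                            # recovered mode signatures
print(W.shape, H.shape)                          # (200, 3) (3, 20)
```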
1609.02770 | Andrew Gilbert | Andrew Gilbert, Richard Bowden | Image and Video Mining through Online Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within the field of image and video recognition, the traditional approach is
a dataset split into fixed training and test partitions. However, the labelling
of the training set is time-consuming, especially as datasets grow in size and
complexity. Furthermore, this approach is not applicable to the home user, who
wants to intuitively group their media without tirelessly labelling the
content. Our interactive approach is able to iteratively cluster classes of
images and video. Our approach is based around the concept of an image
signature which, unlike a standard bag of words model, can express
co-occurrence statistics as well as symbol frequency. We efficiently compute
metric distances between signatures despite their inherent high dimensionality
and provide discriminative feature selection, to allow common and distinctive
elements to be identified from a small set of user labelled examples. These
elements are then accentuated in the image signature to increase similarity
between examples and pull correct classes together. By repeating this process
in an online learning framework, the accuracy of similarity increases
dramatically despite labelling only a few training examples. To demonstrate
that the approach is agnostic to media type and features used, we evaluate on
three image datasets (15 scene, Caltech101 and FG-NET), a mixed text and image
dataset (ImageTag), a dataset used in active learning (Iris) and on three
action recognition datasets (UCF11, KTH and Hollywood2). On the UCF11 video
dataset, the accuracy is 86.7% despite using only 90 labelled examples from a
dataset of over 1200 videos, instead of the standard 1122 training videos. The
approach is both scalable and efficient, with a single iteration over the full
UCF11 dataset of around 1200 videos taking approximately 1 minute on a standard
desktop machine.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 12:49:22 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2016 12:26:30 GMT"
}
] | 2016-12-08T00:00:00 | [
[
"Gilbert",
"Andrew",
""
],
[
"Bowden",
"Richard",
""
]
] | TITLE: Image and Video Mining through Online Learning
ABSTRACT: Within the field of image and video recognition, the traditional approach is
a dataset split into fixed training and test partitions. However, the labelling
of the training set is time-consuming, especially as datasets grow in size and
complexity. Furthermore, this approach is not applicable to the home user, who
wants to intuitively group their media without tirelessly labelling the
content. Our interactive approach is able to iteratively cluster classes of
images and video. Our approach is based around the concept of an image
signature which, unlike a standard bag of words model, can express
co-occurrence statistics as well as symbol frequency. We efficiently compute
metric distances between signatures despite their inherent high dimensionality
and provide discriminative feature selection, to allow common and distinctive
elements to be identified from a small set of user labelled examples. These
elements are then accentuated in the image signature to increase similarity
between examples and pull correct classes together. By repeating this process
in an online learning framework, the accuracy of similarity increases
dramatically despite labelling only a few training examples. To demonstrate
that the approach is agnostic to media type and features used, we evaluate on
three image datasets (15 scene, Caltech101 and FG-NET), a mixed text and image
dataset (ImageTag), a dataset used in active learning (Iris) and on three
action recognition datasets (UCF11, KTH and Hollywood2). On the UCF11 video
dataset, the accuracy is 86.7% despite using only 90 labelled examples from a
dataset of over 1200 videos, instead of the standard 1122 training videos. The
approach is both scalable and efficient, with a single iteration over the full
UCF11 dataset of around 1200 videos taking approximately 1 minute on a standard
desktop machine.
| no_new_dataset | 0.939748 |
1612.01834 | Beidi Chen | Beidi Chen, Anshumali Shrivastava | Revisiting Winner Take All (WTA) Hashing for Sparse Datasets | null | null | null | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | WTA (Winner Take All) hashing has been successfully applied in many large
scale vision applications. This hashing scheme was tailored to take advantage
of the comparative reasoning (or order based information), which showed
significant accuracy improvements. In this paper, we identify a subtle issue
with WTA, which grows with the sparsity of the datasets. This issue limits the
discriminative power of WTA. We then propose a solution for this problem based
on the idea of Densification which provably fixes the issue. Our experiments
show that Densified WTA Hashing consistently and significantly outperforms
Vanilla WTA on both image classification and retrieval tasks.
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2016 14:51:37 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Dec 2016 08:50:26 GMT"
}
] | 2016-12-08T00:00:00 | [
[
"Chen",
"Beidi",
""
],
[
"Shrivastava",
"Anshumali",
""
]
] | TITLE: Revisiting Winner Take All (WTA) Hashing for Sparse Datasets
ABSTRACT: WTA (Winner Take All) hashing has been successfully applied in many large
scale vision applications. This hashing scheme was tailored to take advantage
of the comparative reasoning (or order based information), which showed
significant accuracy improvements. In this paper, we identify a subtle issue
with WTA, which grows with the sparsity of the datasets. This issue limits the
discriminative power of WTA. We then propose a solution for this problem based
on the idea of Densification which provably fixes the issue. Our experiments
show that Densified WTA Hashing consistently and significantly outperforms
Vanilla WTA on both image classification and retrieval tasks.
| no_new_dataset | 0.949529 |
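A numpy sketch of the sparsity issue and fix described in the record above: vanilla WTA codes are undefined when a permuted window is all-zero, and a densification pass borrows (with an offset) from the nearest non-empty hash. This is an illustrative scheme in the spirit of densified hashing; the paper's provably correct procedure may differ in detail.

```python
import numpy as np

def wta_codes(x, perms, K=4):
    """Vanilla WTA: for each permutation, the code is the argmax position
    among the first K permuted coordinates; all-zero windows (common on
    sparse vectors) are marked None -- the ambiguity the paper identifies."""
    return [int(np.argmax(x[p[:K]])) if np.any(x[p[:K]]) else None
            for p in perms]

def densify(codes, K=4):
    """Fill each empty code by borrowing from the nearest non-empty hash,
    offset by the hop count (an illustrative densification sketch)."""
    m = len(codes)
    out = []
    for i in range(m):
        j, hops = i, 0
        while codes[j] is None and hops < m:
            j, hops = (j + 1) % m, hops + 1
        out.append((codes[j] + hops) % K if codes[j] is not None else 0)
    return out

rng = np.random.default_rng(0)
x = np.zeros(100)
x[rng.choice(100, 5, replace=False)] = rng.random(5)   # a very sparse vector
perms = [rng.permutation(100) for _ in range(8)]
print(densify(wta_codes(x, perms)))
```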
1612.02155 | Haroon Idrees | Shayan Modiri Assari, Haroon Idrees and Mubarak Shah | Re-identification of Humans in Crowds using Personal, Social and
Environmental Constraints | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of human re-identification across
non-overlapping cameras in crowds. Re-identification in crowded scenes is a
challenging problem due to the large number of people and frequent occlusions,
coupled with changes in their appearance due to different properties and
exposure of cameras. To solve this problem, we model multiple Personal, Social
and Environmental (PSE) constraints on human motion across cameras. The
personal constraints include appearance and preferred speed of each individual
assumed to be similar across the non-overlapping cameras. The social influences
(constraints) are quadratic in nature, i.e. occur between pairs of individuals,
and modeled through grouping and collision avoidance. Finally, the
environmental constraints capture the transition probabilities between gates
(entrances / exits) in different cameras, defined as multi-modal distributions
of transition time and destination between all pairs of gates. We incorporate
these constraints into an energy minimization framework for solving human
re-identification. Assigning $1-1$ correspondence while modeling PSE
constraints is NP-hard. We present a stochastic local search algorithm to
restrict the search space of hypotheses, and obtain $1-1$ solution in the
presence of linear and quadratic PSE constraints. Moreover, we present an
alternate optimization using Frank-Wolfe algorithm that solves the convex
approximation of the objective function with linear relaxation on binary
variables, and yields an order of magnitude speed up over stochastic local
search with minor drop in performance. We evaluate our approach using
Cumulative Matching Curves as well as $1-1$ assignment on several thousand frames
of Grand Central, PRID and DukeMTMC datasets, and obtain significantly better
results compared to existing re-identification methods.
| [
{
"version": "v1",
"created": "Wed, 7 Dec 2016 09:03:11 GMT"
}
] | 2016-12-08T00:00:00 | [
[
"Assari",
"Shayan Modiri",
""
],
[
"Idrees",
"Haroon",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Re-identification of Humans in Crowds using Personal, Social and
Environmental Constraints
ABSTRACT: This paper addresses the problem of human re-identification across
non-overlapping cameras in crowds. Re-identification in crowded scenes is a
challenging problem due to the large number of people and frequent occlusions,
coupled with changes in their appearance due to different properties and
exposure of cameras. To solve this problem, we model multiple Personal, Social
and Environmental (PSE) constraints on human motion across cameras. The
personal constraints include appearance and preferred speed of each individual
assumed to be similar across the non-overlapping cameras. The social influences
(constraints) are quadratic in nature, i.e. occur between pairs of individuals,
and modeled through grouping and collision avoidance. Finally, the
environmental constraints capture the transition probabilities between gates
(entrances / exits) in different cameras, defined as multi-modal distributions
of transition time and destination between all pairs of gates. We incorporate
these constraints into an energy minimization framework for solving human
re-identification. Assigning $1-1$ correspondence while modeling PSE
constraints is NP-hard. We present a stochastic local search algorithm to
restrict the search space of hypotheses, and obtain $1-1$ solution in the
presence of linear and quadratic PSE constraints. Moreover, we present an
alternate optimization using Frank-Wolfe algorithm that solves the convex
approximation of the objective function with linear relaxation on binary
variables, and yields an order of magnitude speed up over stochastic local
search with minor drop in performance. We evaluate our approach using
Cumulative Matching Curves as well as $1-1$ assignment on several thousand frames
of Grand Central, PRID and DukeMTMC datasets, and obtain significantly better
results compared to existing re-identification methods.
| no_new_dataset | 0.950595 |
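A sketch of the Frank-Wolfe relaxation mentioned in the record above: over the doubly-stochastic relaxation of 1-1 assignment, the linear minimization oracle is itself an assignment problem, solved with the Hungarian algorithm. The toy quadratic objective below stands in for the paper's PSE energy.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def frank_wolfe_assignment(grad_fn, n, iters=50):
    """Frank-Wolfe over the doubly-stochastic relaxation of 1-1 assignment:
    the linear minimization oracle on the Birkhoff polytope is itself an
    assignment problem, solved here with the Hungarian algorithm."""
    X = np.full((n, n), 1.0 / n)                 # barycenter start
    for t in range(iters):
        G = grad_fn(X)
        r, c = linear_sum_assignment(G)          # LMO: best permutation vertex
        S = np.zeros((n, n))
        S[r, c] = 1.0
        X += 2.0 / (t + 2.0) * (S - X)           # standard step size
    r, c = linear_sum_assignment(-X)             # round to a 1-1 matching
    return list(zip(r, c))

C = np.random.default_rng(0).random((5, 5))      # toy pairwise costs
print(frank_wolfe_assignment(lambda X: C + X, 5))  # grad of <C,X> + ||X||^2/2
```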
1612.02222 | Binghong Chen | Binghong Chen, Jun Zhu | A Communication-Efficient Parallel Method for Group-Lasso | 7 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Group-Lasso (gLasso) identifies important explanatory factors in predicting
the response variable by considering the grouping structure over input
variables. However, most existing algorithms for gLasso are not scalable to
deal with large-scale datasets, which are becoming the norm in many applications.
In this paper, we present a divide-and-conquer based parallel algorithm
(DC-gLasso) to scale up gLasso in the tasks of regression with grouping
structures. DC-gLasso only needs two iterations to collect and aggregate the
local estimates on subsets of the data, and is provably correct to recover the
true model under certain conditions. We further extend it to deal with
overlappings between groups. Empirical results on a wide range of synthetic and
real-world datasets show that DC-gLasso can significantly improve the time
efficiency without sacrificing regression accuracy.
| [
{
"version": "v1",
"created": "Wed, 7 Dec 2016 12:32:44 GMT"
}
] | 2016-12-08T00:00:00 | [
[
"Chen",
"Binghong",
""
],
[
"Zhu",
"Jun",
""
]
] | TITLE: A Communication-Efficient Parallel Method for Group-Lasso
ABSTRACT: Group-Lasso (gLasso) identifies important explanatory factors in predicting
the response variable by considering the grouping structure over input
variables. However, most existing algorithms for gLasso are not scalable to
deal with large-scale datasets, which are becoming the norm in many applications.
In this paper, we present a divide-and-conquer based parallel algorithm
(DC-gLasso) to scale up gLasso in the tasks of regression with grouping
structures. DC-gLasso only needs two iterations to collect and aggregate the
local estimates on subsets of the data, and is provably correct to recover the
true model under certain conditions. We further extend it to deal with
overlaps between groups. Empirical results on a wide range of synthetic and
real-world datasets show that DC-gLasso can significantly improve the time
efficiency without sacrificing regression accuracy.
| no_new_dataset | 0.94428 |
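As an illustrative aside to the DC-gLasso record above: the two-round divide-and-conquer idea can be sketched with sklearn's plain Lasso standing in for a group-lasso solver (an assumption; any local gLasso solver could be plugged in, and the aggregation shown is one plausible reading, not the paper's exact rule):

```python
# Two-round divide-and-conquer sparse regression (toy sketch).
import numpy as np
from sklearn.linear_model import Lasso

def dc_lasso(X, y, n_splits=4, alpha=0.1):
    parts = np.array_split(np.arange(len(y)), n_splits)
    # Round 1: fit locally on each subset, then average coefficients.
    coefs = [Lasso(alpha=alpha).fit(X[i], y[i]).coef_ for i in parts]
    support = np.abs(np.mean(coefs, axis=0)) > 1e-8   # aggregated support
    # Round 2: refit on the selected variables, again averaging local fits.
    refits = [Lasso(alpha=alpha).fit(X[i][:, support], y[i]).coef_ for i in parts]
    beta = np.zeros(X.shape[1])
    beta[support] = np.mean(refits, axis=0)
    return beta
```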
1612.02335 | Yu-Chuan Su | Yu-Chuan Su, Dinesh Jayaraman, Kristen Grauman | Pano2Vid: Automatic Cinematography for Watching 360$^{\circ}$ Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the novel task of Pano2Vid $-$ automatic cinematography in
panoramic 360$^{\circ}$ videos. Given a 360$^{\circ}$ video, the goal is to
direct an imaginary camera to virtually capture natural-looking normal
field-of-view (NFOV) video. By selecting "where to look" within the panorama at
each time step, Pano2Vid aims to free both the videographer and the end viewer
from the task of determining what to watch. Towards this goal, we first compile
a dataset of 360$^{\circ}$ videos downloaded from the web, together with
human-edited NFOV camera trajectories to facilitate evaluation. Next, we
propose AutoCam, a data-driven approach to solve the Pano2Vid task. AutoCam
leverages NFOV web video to discriminatively identify space-time "glimpses" of
interest at each time instant, and then uses dynamic programming to select
optimal human-like camera trajectories. Through experimental evaluation on
multiple newly defined Pano2Vid performance measures against several baselines,
we show that our method successfully produces informative videos that could
conceivably have been captured by human videographers.
| [
{
"version": "v1",
"created": "Wed, 7 Dec 2016 17:20:09 GMT"
}
] | 2016-12-08T00:00:00 | [
[
"Su",
"Yu-Chuan",
""
],
[
"Jayaraman",
"Dinesh",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Pano2Vid: Automatic Cinematography for Watching 360$^{\circ}$ Videos
ABSTRACT: We introduce the novel task of Pano2Vid $-$ automatic cinematography in
panoramic 360$^{\circ}$ videos. Given a 360$^{\circ}$ video, the goal is to
direct an imaginary camera to virtually capture natural-looking normal
field-of-view (NFOV) video. By selecting "where to look" within the panorama at
each time step, Pano2Vid aims to free both the videographer and the end viewer
from the task of determining what to watch. Towards this goal, we first compile
a dataset of 360$^{\circ}$ videos downloaded from the web, together with
human-edited NFOV camera trajectories to facilitate evaluation. Next, we
propose AutoCam, a data-driven approach to solve the Pano2Vid task. AutoCam
leverages NFOV web video to discriminatively identify space-time "glimpses" of
interest at each time instant, and then uses dynamic programming to select
optimal human-like camera trajectories. Through experimental evaluation on
multiple newly defined Pano2Vid performance measures against several baselines,
we show that our method successfully produces informative videos that could
conceivably have been captured by human videographers.
| new_dataset | 0.955152 |
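As an illustrative aside to the Pano2Vid record above: selecting "where to look" over time with dynamic programming is essentially a Viterbi pass over per-frame glimpse scores. A minimal sketch, assuming a `score[t, k]` matrix over discretized viewing directions and a simple jump penalty (both placeholders, not AutoCam's exact cost):

```python
# Viterbi-style selection of a smooth camera trajectory.
import numpy as np

def best_trajectory(score, smooth=1.0):
    T, K = score.shape
    dp = score[0].copy()                       # best cumulative score ending at k
    back = np.zeros((T, K), dtype=int)
    dirs = np.arange(K)
    for t in range(1, T):
        jump = smooth * np.abs(dirs[:, None] - dirs[None, :])  # prev -> current
        cand = dp[:, None] - jump
        back[t] = np.argmax(cand, axis=0)
        dp = cand.max(axis=0) + score[t]
    path = [int(np.argmax(dp))]
    for t in range(T - 1, 0, -1):              # backtrack the optimal path
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```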
1603.07697 | Homa Foroughi | Homa Foroughi, Nilanjan Ray, Hong Zhang | Joint Projection and Dictionary Learning using Low-rank Regularization
and Graph Constraints | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we aim at learning simultaneously a discriminative dictionary
and a robust projection matrix from noisy data. The joint learning makes the
learned projection and dictionary a better fit for each other, so a more
accurate classification can be obtained. However, current prevailing joint
dimensionality reduction and dictionary learning methods would fail when the
training samples are noisy or heavily corrupted. To address this issue, we
propose a joint projection and dictionary learning using low-rank
regularization and graph constraints (JPDL-LR). Specifically, the
discrimination of the dictionary is achieved by imposing Fisher criterion on
the coding coefficients. In addition, our method explicitly encodes the local
structure of data by incorporating a graph regularization term that further
improves the discriminative ability of the projection matrix. Inspired by
recent advances of low-rank representation for removing outliers and noise, we
enforce a low-rank constraint on sub-dictionaries of all classes to make them
more compact and robust to noise. Experimental results on several benchmark
datasets verify the effectiveness and robustness of our method for both
dimensionality reduction and image classification, especially when the data
contains considerable noise or variations.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 18:35:41 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2016 00:08:19 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Foroughi",
"Homa",
""
],
[
"Ray",
"Nilanjan",
""
],
[
"Zhang",
"Hong",
""
]
] | TITLE: Joint Projection and Dictionary Learning using Low-rank Regularization
and Graph Constraints
ABSTRACT: In this paper, we aim at learning simultaneously a discriminative dictionary
and a robust projection matrix from noisy data. The joint learning makes the
learned projection and dictionary a better fit for each other, so a more
accurate classification can be obtained. However, current prevailing joint
dimensionality reduction and dictionary learning methods would fail when the
training samples are noisy or heavily corrupted. To address this issue, we
propose a joint projection and dictionary learning using low-rank
regularization and graph constraints (JPDL-LR). Specifically, the
discrimination of the dictionary is achieved by imposing Fisher criterion on
the coding coefficients. In addition, our method explicitly encodes the local
structure of data by incorporating a graph regularization term that further
improves the discriminative ability of the projection matrix. Inspired by
recent advances of low-rank representation for removing outliers and noise, we
enforce a low-rank constraint on sub-dictionaries of all classes to make them
more compact and robust to noise. Experimental results on several benchmark
datasets verify the effectiveness and robustness of our method for both
dimensionality reduction and image classification, especially when the data
contains considerable noise or variations.
| no_new_dataset | 0.942981 |
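As an illustrative aside to the JPDL-LR record above: the Fisher criterion on coding coefficients penalizes within-class scatter of the codes while rewarding between-class scatter. A minimal sketch (variable names are illustrative, not the authors' code):

```python
# Fisher discrimination term on sparse codes A: tr(S_W) - tr(S_B).
import numpy as np

def fisher_term(A, labels):
    """A: codes of shape (dim, n_samples); smaller value = more discriminative."""
    mu = A.mean(axis=1, keepdims=True)
    sw = sb = 0.0
    for c in np.unique(labels):
        Ac = A[:, labels == c]
        mc = Ac.mean(axis=1, keepdims=True)
        sw += ((Ac - mc) ** 2).sum()                # within-class scatter
        sb += Ac.shape[1] * ((mc - mu) ** 2).sum()  # between-class scatter
    return sw - sb
```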
1609.09444 | Arnab Ghosh | Arnab Ghosh and Viveka Kulharia and Amitabha Mukerjee and Vinay
Namboodiri and Mohit Bansal | Contextual RNN-GANs for Abstract Reasoning Diagram Generation | To Appear in AAAI-17 and NIPS Workshop on Adversarial Training | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding, predicting, and generating object motions and transformations
is a core problem in artificial intelligence. Modeling sequences of evolving
images may provide better representations and models of motion and may
ultimately be used for forecasting, simulation, or video generation.
Diagrammatic Abstract Reasoning is an avenue in which diagrams evolve in
complex patterns and one needs to infer the underlying pattern sequence and
generate the next image in the sequence. For this, we develop a novel
Contextual Generative Adversarial Network based on Recurrent Neural Networks
(Context-RNN-GANs), where both the generator and the discriminator modules are
based on contextual history (modeled as RNNs) and the adversarial discriminator
guides the generator to produce realistic images for the particular time step
in the image sequence. We evaluate the Context-RNN-GAN model (and its variants)
on a novel dataset of Diagrammatic Abstract Reasoning, where it performs
competitively with 10th-grade human performance but there is still scope for
interesting improvements as compared to college-grade human performance. We
also evaluate our model on a standard video next-frame prediction task,
achieving improved performance over comparable state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2016 17:56:32 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2016 13:14:09 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Ghosh",
"Arnab",
""
],
[
"Kulharia",
"Viveka",
""
],
[
"Mukerjee",
"Amitabha",
""
],
[
"Namboodiri",
"Vinay",
""
],
[
"Bansal",
"Mohit",
""
]
] | TITLE: Contextual RNN-GANs for Abstract Reasoning Diagram Generation
ABSTRACT: Understanding, predicting, and generating object motions and transformations
is a core problem in artificial intelligence. Modeling sequences of evolving
images may provide better representations and models of motion and may
ultimately be used for forecasting, simulation, or video generation.
Diagrammatic Abstract Reasoning is an avenue in which diagrams evolve in
complex patterns and one needs to infer the underlying pattern sequence and
generate the next image in the sequence. For this, we develop a novel
Contextual Generative Adversarial Network based on Recurrent Neural Networks
(Context-RNN-GANs), where both the generator and the discriminator modules are
based on contextual history (modeled as RNNs) and the adversarial discriminator
guides the generator to produce realistic images for the particular time step
in the image sequence. We evaluate the Context-RNN-GAN model (and its variants)
on a novel dataset of Diagrammatic Abstract Reasoning, where it performs
competitively with 10th-grade human performance but there is still scope for
interesting improvements as compared to college-grade human performance. We
also evaluate our model on a standard video next-frame prediction task,
achieving improved performance over comparable state-of-the-art.
| new_dataset | 0.965086 |
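As an illustrative aside to the Context-RNN-GAN record above: the core idea is that both generator and discriminator condition on the sequence history through recurrent networks. A minimal PyTorch sketch, assuming frames are pre-encoded as vectors and with illustrative sizes (this is not the authors' architecture):

```python
# Contextual generator/discriminator pair over frame embeddings.
import torch
import torch.nn as nn

class G(nn.Module):                          # history -> next-frame embedding
    def __init__(self, d=64):
        super().__init__()
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, d)
    def forward(self, ctx):                  # ctx: (B, T, d) past frames
        h, _ = self.rnn(ctx)
        return self.out(h[:, -1])            # (B, d) proposed next frame

class D(nn.Module):                          # history + candidate -> real/fake
    def __init__(self, d=64):
        super().__init__()
        self.rnn = nn.GRU(d, d, batch_first=True)
        self.out = nn.Linear(d, 1)
    def forward(self, ctx, nxt):
        seq = torch.cat([ctx, nxt.unsqueeze(1)], dim=1)
        h, _ = self.rnn(seq)
        return torch.sigmoid(self.out(h[:, -1]))
```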
1611.05126 | Jan Hamaekers | James Barker, Johannes Bulin, Jan Hamaekers and Sonja Mathias | Localized Coulomb Descriptors for the Gaussian Approximation Potential | null | null | null | null | stat.ML physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel class of localized atomic environment representations,
based upon the Coulomb matrix. By combining these functions with the Gaussian
approximation potential approach, we present LC-GAP, a new system for
generating atomic potentials through machine learning (ML). Tests on the QM7,
QM7b and GDB9 biomolecular datasets demonstrate that potentials created with
LC-GAP can successfully predict atomization energies to chemical accuracy for
molecules larger than those used for training, and can (in the case of QM7b)
also be used to predict a range of other atomic properties with accuracy in
line with the recent literature. As the best-performing representation has only
linear dimensionality in the number of atoms in a local atomic environment,
this represents an improvement both in prediction accuracy and computational
cost when considered against similar Coulomb matrix-based methods.
| [
{
"version": "v1",
"created": "Wed, 16 Nov 2016 02:57:40 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2016 12:01:13 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Barker",
"James",
""
],
[
"Bulin",
"Johannes",
""
],
[
"Hamaekers",
"Jan",
""
],
[
"Mathias",
"Sonja",
""
]
] | TITLE: Localized Coulomb Descriptors for the Gaussian Approximation Potential
ABSTRACT: We introduce a novel class of localized atomic environment representations,
based upon the Coulomb matrix. By combining these functions with the Gaussian
approximation potential approach, we present LC-GAP, a new system for
generating atomic potentials through machine learning (ML). Tests on the QM7,
QM7b and GDB9 biomolecular datasets demonstrate that potentials created with
LC-GAP can successfully predict atomization energies to chemical accuracy for
molecules larger than those used for training, and can (in the case of QM7b)
also be used to predict a range of other atomic properties with accuracy in
line with the recent literature. As the best-performing representation has only
linear dimensionality in the number of atoms in a local atomic environment,
this represents an improvement both in prediction accuracy and computational
cost when considered against similar Coulomb matrix-based methods.
| no_new_dataset | 0.94868 |
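As an illustrative aside to the LC-GAP record above: the global Coulomb matrix that the localized descriptors derive from has a standard closed form. A minimal sketch, where `Z` are atomic numbers and `R` are 3D coordinates in atomic units:

```python
# Standard Coulomb matrix: C_ii = 0.5 * Z_i^2.4, C_ij = Z_i Z_j / |R_i - R_j|.
import numpy as np

def coulomb_matrix(Z, R):
    Z = np.asarray(Z, dtype=float)
    R = np.asarray(R, dtype=float)
    D = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        C = np.outer(Z, Z) / D               # off-diagonal pair terms
    np.fill_diagonal(C, 0.5 * Z ** 2.4)      # self-interaction diagonal
    return C
```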
1611.08323 | Tobias Pohlen | Tobias Pohlen, Alexander Hermans, Markus Mathias, Bastian Leibe | Full-Resolution Residual Networks for Semantic Segmentation in Street
Scenes | Changes in v2: Fixed equation (10), fixed legend of Figure 6, fixed
legend of Figure 9, added page numbers, fixed minor spelling mistakes | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic image segmentation is an essential component of modern autonomous
driving systems, as an accurate understanding of the surrounding scene is
crucial to navigation and action planning. Current state-of-the-art approaches
in semantic image segmentation rely on pre-trained networks that were initially
developed for classifying images as a whole. While these networks exhibit
outstanding recognition performance (i.e., what is visible?), they lack
localization accuracy (i.e., where precisely is something located?). Therefore,
additional processing steps have to be performed in order to obtain
pixel-accurate segmentation masks at the full image resolution. To alleviate
this problem we propose a novel ResNet-like architecture that exhibits strong
localization and recognition performance. We combine multi-scale context with
pixel-level accuracy by using two processing streams within our network: One
stream carries information at the full image resolution, enabling precise
adherence to segment boundaries. The other stream undergoes a sequence of
pooling operations to obtain robust features for recognition. The two streams
are coupled at the full image resolution using residuals. Without additional
processing steps and without pre-training, our approach achieves an
intersection-over-union score of 71.8% on the Cityscapes dataset.
| [
{
"version": "v1",
"created": "Thu, 24 Nov 2016 23:55:28 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Dec 2016 19:36:19 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Pohlen",
"Tobias",
""
],
[
"Hermans",
"Alexander",
""
],
[
"Mathias",
"Markus",
""
],
[
"Leibe",
"Bastian",
""
]
] | TITLE: Full-Resolution Residual Networks for Semantic Segmentation in Street
Scenes
ABSTRACT: Semantic image segmentation is an essential component of modern autonomous
driving systems, as an accurate understanding of the surrounding scene is
crucial to navigation and action planning. Current state-of-the-art approaches
in semantic image segmentation rely on pre-trained networks that were initially
developed for classifying images as a whole. While these networks exhibit
outstanding recognition performance (i.e., what is visible?), they lack
localization accuracy (i.e., where precisely is something located?). Therefore,
additional processing steps have to be performed in order to obtain
pixel-accurate segmentation masks at the full image resolution. To alleviate
this problem we propose a novel ResNet-like architecture that exhibits strong
localization and recognition performance. We combine multi-scale context with
pixel-level accuracy by using two processing streams within our network: One
stream carries information at the full image resolution, enabling precise
adherence to segment boundaries. The other stream undergoes a sequence of
pooling operations to obtain robust features for recognition. The two streams
are coupled at the full image resolution using residuals. Without additional
processing steps and without pre-training, our approach achieves an
intersection-over-union score of 71.8% on the Cityscapes dataset.
| no_new_dataset | 0.948346 |
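As an illustrative aside to the FRRN record above: the intersection-over-union score reported on Cityscapes is computed per class from a confusion matrix. A minimal sketch:

```python
# Mean IoU from integer label maps (pred and gt of equal shape).
import numpy as np

def mean_iou(pred, gt, n_classes):
    conf = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(conf, (gt.ravel(), pred.ravel()), 1)   # confusion counts
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - inter
    valid = union > 0                                # skip absent classes
    return (inter[valid] / union[valid]).mean()
```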
1612.01288 | Wim Abbeloos | Wim Abbeloos, Toon Goedem\'e | Point Pair Feature based Object Detection for Random Bin Picking | null | null | 10.1109/CRV.2016.59 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Point pair features are a popular representation for free form 3D object
detection and pose estimation. In this paper, their performance in an
industrial random bin picking context is investigated. A new method to generate
representative synthetic datasets is proposed. This allows us to investigate the
influence of a high degree of clutter and the presence of self-similar
features, which are typical of our application. We provide an overview of
solutions proposed in literature and discuss their strengths and weaknesses. A
simple heuristic method to drastically reduce the computational complexity is
introduced, which results in improved robustness, speed and accuracy compared
to the naive approach.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 09:57:45 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Abbeloos",
"Wim",
""
],
[
"Goedemé",
"Toon",
""
]
] | TITLE: Point Pair Feature based Object Detection for Random Bin Picking
ABSTRACT: Point pair features are a popular representation for free form 3D object
detection and pose estimation. In this paper, their performance in an
industrial random bin picking context is investigated. A new method to generate
representative synthetic datasets is proposed. This allows us to investigate the
influence of a high degree of clutter and the presence of self-similar
features, which are typical of our application. We provide an overview of
solutions proposed in literature and discuss their strengths and weaknesses. A
simple heuristic method to drastically reduce the computational complexity is
introduced, which results in improved robustness, speed and accuracy compared
to the naive approach.
| no_new_dataset | 0.874828 |
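As an illustrative aside to the bin-picking record above: the standard 4D point pair feature for two oriented points $(p_1, n_1)$ and $(p_2, n_2)$ is $F = (\|d\|, \angle(n_1, d), \angle(n_2, d), \angle(n_1, n_2))$ with $d = p_2 - p_1$. A minimal sketch:

```python
# The classic point pair feature descriptor (Drost-style).
import numpy as np

def angle(a, b):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    d = p2 - p1
    return np.array([np.linalg.norm(d),    # distance between the points
                     angle(n1, d),         # normal 1 vs. connecting vector
                     angle(n2, d),         # normal 2 vs. connecting vector
                     angle(n1, n2)])       # angle between the normals
```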
1612.01594 | Homa Foroughi | Homa Foroughi, Nilanjan Ray and Hong Zhang | Object Classification with Joint Projection and Low-rank Dictionary
Learning | arXiv admin note: text overlap with arXiv:1603.07697; text overlap
with arXiv:1404.3606 by other authors | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For an object classification system, the most critical obstacles towards
real-world applications are often caused by large intra-class variability,
arising from different lighting, occlusion and corruption, in limited sample
sets. Most methods in the literature would fail when the training samples are
heavily occluded, corrupted or have significant illumination or viewpoint
variations. Besides, most of the existing methods and especially deep
learning-based methods, need large training sets to achieve a satisfactory
recognition performance. Although using the pre-trained network on a generic
large-scale dataset and fine-tuning it to the small-sized target dataset is a
widely used technique, this would not help when the content of base and target
datasets are very different. To address these issues, we propose a joint
projection and low-rank dictionary learning method using dual graph constraints
(JP-LRDL). The proposed joint learning method would enable us to learn the
features on top of which dictionaries can be better learned, from the data with
large intra-class variability. Specifically, a structured class-specific
dictionary is learned and the discrimination is further improved by imposing a
graph constraint on the coding coefficients that maximizes the intra-class
compactness and inter-class separability. We also enforce low-rank and
structural incoherence constraints on sub-dictionaries to make them more
compact and robust to variations and outliers and reduce the redundancy among
them, respectively. To preserve the intrinsic structure of data and penalize
unfavourable relationships among training samples simultaneously, we introduce a
projection graph into the framework, which significantly enhances the
discriminative ability of the projection matrix and makes the method robust to
small-sized and high-dimensional datasets.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 23:49:26 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Foroughi",
"Homa",
""
],
[
"Ray",
"Nilanjan",
""
],
[
"Zhang",
"Hong",
""
]
] | TITLE: Object Classification with Joint Projection and Low-rank Dictionary
Learning
ABSTRACT: For an object classification system, the most critical obstacles towards
real-world applications are often caused by large intra-class variability,
arising from different lighting, occlusion and corruption, in limited sample
sets. Most methods in the literature would fail when the training samples are
heavily occluded, corrupted or have significant illumination or viewpoint
variations. Besides, most of the existing methods and especially deep
learning-based methods, need large training sets to achieve a satisfactory
recognition performance. Although using the pre-trained network on a generic
large-scale dataset and fine-tuning it to the small-sized target dataset is a
widely used technique, this would not help when the content of base and target
datasets are very different. To address these issues, we propose a joint
projection and low-rank dictionary learning method using dual graph constraints
(JP-LRDL). The proposed joint learning method would enable us to learn the
features on top of which dictionaries can be better learned, from the data with
large intra-class variability. Specifically, a structured class-specific
dictionary is learned and the discrimination is further improved by imposing a
graph constraint on the coding coefficients that maximizes the intra-class
compactness and inter-class separability. We also enforce low-rank and
structural incoherence constraints on sub-dictionaries to make them more
compact and robust to variations and outliers and reduce the redundancy among
them, respectively. To preserve the intrinsic structure of data and penalize
unfavourable relationships among training samples simultaneously, we introduce a
projection graph into the framework, which significantly enhances the
discriminative ability of the projection matrix and makes the method robust to
small-sized and high-dimensional datasets.
| no_new_dataset | 0.944125 |
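As an illustrative aside to the JP-LRDL record above: the low-rank constraint on a sub-dictionary is typically enforced through singular value thresholding, the proximal operator of the nuclear norm. A minimal sketch:

```python
# Singular value thresholding: prox of tau * ||D||_* (nuclear norm).
import numpy as np

def svt(D, tau):
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
```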
1612.01657 | Yang Yang | Ruicong Xu, Yang Yang, Yadan Luo, Fumin Shen, Zi Huang, Heng Tao Shen | Binary Subspace Coding for Query-by-Image Video Retrieval | null | null | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The query-by-image video retrieval (QBIVR) task has been attracting
considerable research attention recently. However, most existing methods
represent a video by either aggregating or projecting all its frames into a
single datum point, which may easily cause severe information loss. In this
paper, we propose an efficient QBIVR framework to enable an effective and
efficient video search with an image query. We first define a
similarity-preserving distance metric between an image and its orthogonal
projection in the subspace of the video, which can be equivalently transformed
to a Maximum Inner Product Search (MIPS) problem.
Besides, to boost the efficiency of solving the MIPS problem, we propose two
asymmetric hashing schemes, which bridge the domain gap of images and videos.
The first approach, termed Inner-product Binary Coding (IBC), preserves the
inner relationships of images and videos in a common Hamming space. To further
improve the retrieval efficiency, we devise a Bilinear Binary Coding (BBC)
approach, which employs compact bilinear projections instead of a single large
projection matrix. Extensive experiments have been conducted on four real-world
video datasets to verify the effectiveness of our proposed approaches as
compared to the state-of-the-art.
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2016 04:01:17 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Xu",
"Ruicong",
""
],
[
"Yang",
"Yang",
""
],
[
"Luo",
"Yadan",
""
],
[
"Shen",
"Fumin",
""
],
[
"Huang",
"Zi",
""
],
[
"Shen",
"Heng Tao",
""
]
] | TITLE: Binary Subspace Coding for Query-by-Image Video Retrieval
ABSTRACT: The query-by-image video retrieval (QBIVR) task has been attracting
considerable research attention recently. However, most existing methods
represent a video by either aggregating or projecting all its frames into a
single datum point, which may easily cause severe information loss. In this
paper, we propose an efficient QBIVR framework to enable an effective and
efficient video search with an image query. We first define a
similarity-preserving distance metric between an image and its orthogonal
projection in the subspace of the video, which can be equivalently transformed
to a Maximum Inner Product Search (MIPS) problem.
Besides, to boost the efficiency of solving the MIPS problem, we propose two
asymmetric hashing schemes, which bridge the domain gap of images and videos.
The first approach, termed Inner-product Binary Coding (IBC), preserves the
inner relationships of images and videos in a common Hamming space. To further
improve the retrieval efficiency, we devise a Bilinear Binary Coding (BBC)
approach, which employs compact bilinear projections instead of a single large
projection matrix. Extensive experiments have been conducted on four real-world
video datasets to verify the effectiveness of our proposed approaches as
compared to the state-of-the-art.
| no_new_dataset | 0.949012 |
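As an illustrative aside to the QBIVR record above: the distance between an image feature and its orthogonal projection onto the video's frame subspace has a direct closed form. A minimal sketch, where `V` stacks the video's frame features column-wise (names are illustrative):

```python
# Point-to-subspace distance via an orthonormal basis of the video span.
import numpy as np

def subspace_distance(x, V):
    """V: (dim, n_frames) frame features; x: (dim,) image feature."""
    U, _, _ = np.linalg.svd(V, full_matrices=False)  # orthonormal basis of span(V)
    proj = U @ (U.T @ x)                             # orthogonal projection of x
    return np.linalg.norm(x - proj)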
1612.01663 | Yi Xu | Yi Xu, Haiqin Yang, Lijun Zhang, Tianbao Yang | Efficient Non-oblivious Randomized Reduction for Risk Minimization with
Improved Excess Risk Guarantee | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address learning problems for high dimensional data.
Previously, oblivious random projection based approaches that project high
dimensional features onto a random subspace have been used in practice for
tackling high-dimensionality challenge in machine learning. Recently, various
non-oblivious randomized reduction methods have been developed and deployed for
solving many numerical problems such as matrix product approximation, low-rank
matrix approximation, etc. However, they are less explored for the machine
learning tasks, e.g., classification. More seriously, the theoretical analysis
of excess risk bounds for risk minimization, an important measure of
generalization performance, has not been established for non-oblivious
randomized reduction methods. It therefore remains an open problem what is the
benefit of using them over previous oblivious random projection based
approaches. To tackle these challenges, we propose an algorithmic framework for
employing a non-oblivious randomized reduction method for general empirical
risk minimization in machine learning tasks, where the original high-dimensional
features are projected onto a random subspace that is derived from the data
with a small matrix approximation error. We then derive the first excess risk
bound for the proposed non-oblivious randomized reduction approach without
requiring strong assumptions on the training data. The established excess risk
bound exhibits that the proposed approach provides much better generalization
performance, and it also sheds more insight into different randomized
reduction approaches. Finally, we conduct extensive experiments on both
synthetic and real-world benchmark datasets, whose dimension scales to
$O(10^7)$, to demonstrate the efficacy of our proposed approach.
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2016 04:58:45 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Xu",
"Yi",
""
],
[
"Yang",
"Haiqin",
""
],
[
"Zhang",
"Lijun",
""
],
[
"Yang",
"Tianbao",
""
]
] | TITLE: Efficient Non-oblivious Randomized Reduction for Risk Minimization with
Improved Excess Risk Guarantee
ABSTRACT: In this paper, we address learning problems for high dimensional data.
Previously, oblivious random projection based approaches that project high
dimensional features onto a random subspace have been used in practice for
tackling high-dimensionality challenge in machine learning. Recently, various
non-oblivious randomized reduction methods have been developed and deployed for
solving many numerical problems such as matrix product approximation, low-rank
matrix approximation, etc. However, they are less explored for the machine
learning tasks, e.g., classification. More seriously, the theoretical analysis
of excess risk bounds for risk minimization, an important measure of
generalization performance, has not been established for non-oblivious
randomized reduction methods. It therefore remains an open problem what is the
benefit of using them over previous oblivious random projection based
approaches. To tackle these challenges, we propose an algorithmic framework for
employing a non-oblivious randomized reduction method for general empirical
risk minimization in machine learning tasks, where the original high-dimensional
features are projected onto a random subspace that is derived from the data
with a small matrix approximation error. We then derive the first excess risk
bound for the proposed non-oblivious randomized reduction approach without
requiring strong assumptions on the training data. The established excess risk
bound exhibits that the proposed approach provides much better generalization
performance, and it also sheds more insight into different randomized
reduction approaches. Finally, we conduct extensive experiments on both
synthetic and real-world benchmark datasets, whose dimension scales to
$O(10^7)$, to demonstrate the efficacy of our proposed approach.
| no_new_dataset | 0.948775 |
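As an illustrative aside to the record above: the contrast between an oblivious and a data-dependent projection can be sketched in a few lines, using a randomized low-rank sketch of the data as the non-oblivious subspace (parameter choices are illustrative, not the paper's construction):

```python
# Oblivious Gaussian projection vs. data-dependent randomized-SVD projection.
import numpy as np
from sklearn.utils.extmath import randomized_svd

def oblivious_projection(X, m, seed=0):
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], m)) / np.sqrt(m)  # data-independent
    return X @ R

def data_dependent_projection(X, m):
    _, _, Vt = randomized_svd(X, n_components=m)  # small matrix-approx. error
    return X @ Vt.T                               # project onto top subspace
```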
1612.01812 | Dang Nguyen | Dang Nguyen, Wei Luo, Dinh Phung, Svetha Venkatesh | Control Matching via Discharge Code Sequences | 5 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the patient similarity matching problem over a
cancer cohort of more than 220,000 patients. Our approach first leverages the
Word2Vec framework to embed ICD codes into a vector-valued representation. We
then propose a sequential algorithm for case-control matching on this
representation space of diagnosis codes. The novel practice of applying the
sequential matching on the vector representation lifted the matching accuracy
measured through multiple clinical outcomes. We reported the results on a
large-scale dataset to demonstrate the effectiveness of our method. For such a
large dataset where most clinical information has been codified, the new method
is particularly relevant.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 04:21:55 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Nguyen",
"Dang",
""
],
[
"Luo",
"Wei",
""
],
[
"Phung",
"Dinh",
""
],
[
"Venkatesh",
"Svetha",
""
]
] | TITLE: Control Matching via Discharge Code Sequences
ABSTRACT: In this paper, we consider the patient similarity matching problem over a
cancer cohort of more than 220,000 patients. Our approach first leverages the
Word2Vec framework to embed ICD codes into a vector-valued representation. We
then propose a sequential algorithm for case-control matching on this
representation space of diagnosis codes. The novel practice of applying the
sequential matching on the vector representation improved the matching accuracy
as measured through multiple clinical outcomes. We report results on a
large-scale dataset to demonstrate the effectiveness of our method. For such a
large dataset where most clinical information has been codified, the new method
is particularly relevant.
| no_new_dataset | 0.917525 |
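As an illustrative aside to the record above: embedding discharge codes with Word2Vec amounts to treating each patient's code sequence as a "sentence". A minimal sketch with the gensim 4.x API (the toy ICD codes are made up):

```python
# Word2Vec over ICD code sequences (one admission per list).
from gensim.models import Word2Vec

sequences = [["I10", "E11.9", "N18.3"],
             ["C34.1", "I10", "J44.9"]]
model = Word2Vec(sequences, vector_size=32, window=5, min_count=1, sg=1)
vec = model.wv["I10"]                           # embedding of one code
similar = model.wv.most_similar("I10", topn=3)  # codes used in similar contexts
```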
1612.01939 | Baochen Sun | Baochen Sun, Jiashi Feng, Kate Saenko | Correlation Alignment for Unsupervised Domain Adaptation | Introduction to CORAL, CORAL-LDA, and Deep CORAL. arXiv admin note:
text overlap with arXiv:1511.05547 | null | null | null | cs.CV cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this chapter, we present CORrelation ALignment (CORAL), a simple yet
effective method for unsupervised domain adaptation. CORAL minimizes domain
shift by aligning the second-order statistics of source and target
distributions, without requiring any target labels. In contrast to subspace
manifold methods, it aligns the original feature distributions of the source
and target domains, rather than the bases of lower-dimensional subspaces. It is
also much simpler than other distribution matching methods. CORAL performs
remarkably well in extensive evaluations on standard benchmark datasets. We
first describe a solution that applies a linear transformation to source
features to align them with target features before classifier training. For
linear classifiers, we propose to equivalently apply CORAL to the classifier
weights, leading to added efficiency when the number of classifiers is small
but the number and dimensionality of target examples are very high. The
resulting CORAL Linear Discriminant Analysis (CORAL-LDA) outperforms LDA by a
large margin on standard domain adaptation benchmarks. Finally, we extend CORAL
to learn a nonlinear transformation that aligns correlations of layer
activations in deep neural networks (DNNs). The resulting Deep CORAL approach
works seamlessly with DNNs and achieves state-of-the-art performance on
standard benchmark datasets. Our code is available
at: https://github.com/VisionLearningGroup/CORAL
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2016 18:31:57 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Sun",
"Baochen",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Correlation Alignment for Unsupervised Domain Adaptation
ABSTRACT: In this chapter, we present CORrelation ALignment (CORAL), a simple yet
effective method for unsupervised domain adaptation. CORAL minimizes domain
shift by aligning the second-order statistics of source and target
distributions, without requiring any target labels. In contrast to subspace
manifold methods, it aligns the original feature distributions of the source
and target domains, rather than the bases of lower-dimensional subspaces. It is
also much simpler than other distribution matching methods. CORAL performs
remarkably well in extensive evaluations on standard benchmark datasets. We
first describe a solution that applies a linear transformation to source
features to align them with target features before classifier training. For
linear classifiers, we propose to equivalently apply CORAL to the classifier
weights, leading to added efficiency when the number of classifiers is small
but the number and dimensionality of target examples are very high. The
resulting CORAL Linear Discriminant Analysis (CORAL-LDA) outperforms LDA by a
large margin on standard domain adaptation benchmarks. Finally, we extend CORAL
to learn a nonlinear transformation that aligns correlations of layer
activations in deep neural networks (DNNs). The resulting Deep CORAL approach
works seamlessly with DNNs and achieves state-of-the-art performance on
standard benchmark datasets. Our code is available
at: https://github.com/VisionLearningGroup/CORAL
| no_new_dataset | 0.948251 |
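As an illustrative aside to the CORAL record above: the linear transform has a closed form, whitening the source features with the source covariance and re-coloring with the target covariance. A minimal sketch (the regularizer `lam` is a small constant; taking the real part guards against numerical round-off):

```python
# Linear CORAL: Xs_aligned = Xs * Cs^{-1/2} * Ct^{1/2}.
import numpy as np
from scipy.linalg import fractional_matrix_power

def coral(Xs, Xt, lam=1.0):
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + lam * np.eye(d)   # source covariance
    Ct = np.cov(Xt, rowvar=False) + lam * np.eye(d)   # target covariance
    A = fractional_matrix_power(Cs, -0.5) @ fractional_matrix_power(Ct, 0.5)
    return np.real(Xs @ A)                            # aligned source features
```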
1612.01943 | Yuhao Zhang | Yuhao Zhang, Sandeep Ayyar, Long-Huei Chen, Ethan J. Li | Segmental Convolutional Neural Networks for Detection of Cardiac
Abnormality With Noisy Heart Sound Recordings | This work was finished in May 2016, and remains unpublished until
December 2016 due to a request from the data provider | null | null | null | cs.SD cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heart diseases constitute a global health burden, and the problem is
exacerbated by the error-prone nature of listening to and interpreting heart
sounds. This motivates the development of automated classification to screen
for abnormal heart sounds. Existing machine learning-based systems achieve
accurate classification of heart sound recordings but rely on expert features
that have not been thoroughly evaluated on noisy recordings. Here we propose a
segmental convolutional neural network architecture that achieves automatic
feature learning from noisy heart sound recordings. Our experiments show that
our best model, trained on noisy recording segments acquired with an existing
hidden semi-Markov model-based approach, attains a classification accuracy of
87.5% on the 2016 PhysioNet/CinC Challenge dataset, compared to the 84.6%
accuracy of the state-of-the-art statistical classifier trained and evaluated
on the same dataset. Our results indicate the potential of using neural
network-based methods to increase the accuracy of automated classification of
heart sound recordings for improved screening of heart diseases.
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2016 18:37:30 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Zhang",
"Yuhao",
""
],
[
"Ayyar",
"Sandeep",
""
],
[
"Chen",
"Long-Huei",
""
],
[
"Li",
"Ethan J.",
""
]
] | TITLE: Segmental Convolutional Neural Networks for Detection of Cardiac
Abnormality With Noisy Heart Sound Recordings
ABSTRACT: Heart diseases constitute a global health burden, and the problem is
exacerbated by the error-prone nature of listening to and interpreting heart
sounds. This motivates the development of automated classification to screen
for abnormal heart sounds. Existing machine learning-based systems achieve
accurate classification of heart sound recordings but rely on expert features
that have not been thoroughly evaluated on noisy recordings. Here we propose a
segmental convolutional neural network architecture that achieves automatic
feature learning from noisy heart sound recordings. Our experiments show that
our best model, trained on noisy recording segments acquired with an existing
hidden semi-Markov model-based approach, attains a classification accuracy of
87.5% on the 2016 PhysioNet/CinC Challenge dataset, compared to the 84.6%
accuracy of the state-of-the-art statistical classifier trained and evaluated
on the same dataset. Our results indicate the potential of using neural
network-based methods to increase the accuracy of automated classification of
heart sound recordings for improved screening of heart diseases.
| no_new_dataset | 0.955899 |
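As an illustrative aside to the record above: since the model classifies individual segments of a recording, a recording-level decision needs an aggregation rule. A minimal sketch of simple probability averaging (one common scheme; the threshold is illustrative):

```python
# Aggregate per-segment posteriors into one recording-level decision.
import numpy as np

def classify_recording(segment_probs, threshold=0.5):
    """segment_probs: (n_segments,) P(abnormal) per heart-sound segment."""
    return float(np.mean(segment_probs)) >= threshold  # True -> abnormal
```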
1612.01981 | Manohar Karki | Manohar Karki, Robert DiBiano, Saikat Basu, Supratik Mukhopadhyay | Core Sampling Framework for Pixel Classification | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The intermediate map responses of a Convolutional Neural Network (CNN)
contain information about an image that can be used to extract contextual
knowledge about it. In this paper, we present a core sampling framework that is
able to use these activation maps from several layers as features for another
neural network using transfer learning to provide an understanding of an input
image. Our framework creates a representation that combines features from the
test data and the contextual knowledge gained from the responses of a
pretrained network, processes it and feeds it to a separate Deep Belief
Network. We use this representation to extract more information from an image
at the pixel level, hence gaining an understanding of the whole image. We
experimentally demonstrate the usefulness of our framework using a pretrained
VGG-16 model to perform segmentation on the BAERI dataset of Synthetic Aperture
Radar(SAR) imagery and the CAMVID dataset.
| [
{
"version": "v1",
"created": "Tue, 6 Dec 2016 20:28:44 GMT"
}
] | 2016-12-07T00:00:00 | [
[
"Karki",
"Manohar",
""
],
[
"DiBiano",
"Robert",
""
],
[
"Basu",
"Saikat",
""
],
[
"Mukhopadhyay",
"Supratik",
""
]
] | TITLE: Core Sampling Framework for Pixel Classification
ABSTRACT: The intermediate map responses of a Convolutional Neural Network (CNN)
contain information about an image that can be used to extract contextual
knowledge about it. In this paper, we present a core sampling framework that is
able to use these activation maps from several layers as features for another
neural network using transfer learning to provide an understanding of an input
image. Our framework creates a representation that combines features from the
test data and the contextual knowledge gained from the responses of a
pretrained network, processes it and feeds it to a separate Deep Belief
Network. We use this representation to extract more information from an image
at the pixel level, hence gaining an understanding of the whole image. We
experimentally demonstrate the usefulness of our framework using a pretrained
VGG-16 model to perform segmentation on the BAERI dataset of Synthetic Aperture
Radar(SAR) imagery and the CAMVID dataset.
| no_new_dataset | 0.949809 |
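As an illustrative aside to the core-sampling record above: collecting intermediate activation maps from a pretrained VGG-16 and upsampling them to image resolution yields per-pixel features. A minimal PyTorch sketch (layer indices are illustrative, and `pretrained=True` is the older torchvision API):

```python
# Per-pixel features from intermediate VGG-16 activations via forward hooks.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

model = vgg16(pretrained=True).features.eval()
acts = {}
for i in (4, 9, 16):                          # a few conv stages (illustrative)
    model[i].register_forward_hook(
        lambda m, inp, out, i=i: acts.update({i: out}))

x = torch.randn(1, 3, 224, 224)               # placeholder input image
with torch.no_grad():
    model(x)
feats = torch.cat([F.interpolate(a, size=x.shape[-2:], mode="bilinear",
                                 align_corners=False)
                   for a in acts.values()], dim=1)   # (1, C_total, H, W)
```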
1406.5726 | Yunchao Wei | Yunchao Wei, Wei Xia, Junshi Huang, Bingbing Ni, Jian Dong, Yao Zhao,
Shuicheng Yan | CNN: Single-label to Multi-label | 13 pages, 10 figures, 3 tables | null | 10.1109/TPAMI.2015.2491929 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Network (CNN) has demonstrated promising performance in
single-label image classification tasks. However, how CNN best copes with
multi-label images still remains an open problem, mainly due to the complex
underlying object layouts and insufficient multi-label training images. In this
work, we propose a flexible deep CNN infrastructure, called
Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment
hypotheses are taken as the inputs, then a shared CNN is connected with each
hypothesis, and finally the CNN output results from different hypotheses are
aggregated with max pooling to produce the ultimate multi-label predictions.
Some unique characteristics of this flexible deep CNN infrastructure include:
1) no ground truth bounding box information is required for training; 2) the
whole HCP infrastructure is robust to possibly noisy and/or redundant
hypotheses; 3) no explicit hypothesis label is required; 4) the shared CNN may
be well pre-trained with a large-scale single-label image dataset, e.g.
ImageNet; and 5) it may naturally output multi-label prediction results.
Experimental results on Pascal VOC2007 and VOC2012 multi-label image datasets
well demonstrate the superiority of the proposed HCP infrastructure over other
state-of-the-art methods. In particular, the mAP reaches 84.2% with HCP alone and 90.3%
after the fusion with our complementary result in [47] based on hand-crafted
features on the VOC2012 dataset, which significantly outperforms the
state-of-the-art by a large margin of more than 7%.
| [
{
"version": "v1",
"created": "Sun, 22 Jun 2014 14:03:07 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Jun 2014 03:32:46 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Jul 2014 11:26:56 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Wei",
"Yunchao",
""
],
[
"Xia",
"Wei",
""
],
[
"Huang",
"Junshi",
""
],
[
"Ni",
"Bingbing",
""
],
[
"Dong",
"Jian",
""
],
[
"Zhao",
"Yao",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: CNN: Single-label to Multi-label
ABSTRACT: Convolutional Neural Network (CNN) has demonstrated promising performance in
single-label image classification tasks. However, how CNN best copes with
multi-label images still remains an open problem, mainly due to the complex
underlying object layouts and insufficient multi-label training images. In this
work, we propose a flexible deep CNN infrastructure, called
Hypotheses-CNN-Pooling (HCP), where an arbitrary number of object segment
hypotheses are taken as the inputs, then a shared CNN is connected with each
hypothesis, and finally the CNN output results from different hypotheses are
aggregated with max pooling to produce the ultimate multi-label predictions.
Some unique characteristics of this flexible deep CNN infrastructure include:
1) no ground truth bounding box information is required for training; 2) the
whole HCP infrastructure is robust to possibly noisy and/or redundant
hypotheses; 3) no explicit hypothesis label is required; 4) the shared CNN may
be well pre-trained with a large-scale single-label image dataset, e.g.
ImageNet; and 5) it may naturally output multi-label prediction results.
Experimental results on Pascal VOC2007 and VOC2012 multi-label image datasets
well demonstrate the superiority of the proposed HCP infrastructure over other
state-of-the-art methods. In particular, the mAP reaches 84.2% with HCP alone and 90.3%
after the fusion with our complementary result in [47] based on hand-crafted
features on the VOC2012 dataset, which significantly outperforms the
state-of-the-art by a large margin of more than 7%.
| no_new_dataset | 0.947088 |
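As an illustrative aside to the HCP record above: the cross-hypothesis max pooling at the core of the method is a one-liner once each segment hypothesis has been scored by the shared CNN. A minimal sketch:

```python
# Max-pool per-hypothesis class scores into one multi-label prediction.
import numpy as np

def hcp_predict(hypothesis_scores, threshold=0.5):
    """hypothesis_scores: (n_hypotheses, n_classes) CNN outputs."""
    image_scores = hypothesis_scores.max(axis=0)   # max pooling over hypotheses
    return image_scores, image_scores > threshold  # multi-label decision
```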
1602.02220 | Tianbao Yang | Zhe Li, Boqing Gong, Tianbao Yang | Improved Dropout for Shallow and Deep Learning | In NIPS 2016 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dropout has been witnessed with great success in training deep neural
networks by independently zeroing out the outputs of neurons at random. It has
also received a surge of interest for shallow learning, e.g., logistic
regression. However, the independent sampling for dropout could be suboptimal
for the sake of convergence. In this paper, we propose to use multinomial
sampling for dropout, i.e., sampling features or neurons according to a
multinomial distribution with different probabilities for different
features/neurons. To exhibit the optimal dropout probabilities, we analyze the
shallow learning with multinomial dropout and establish the risk bound for
stochastic optimization. By minimizing a sampling dependent factor in the risk
bound, we obtain a distribution-dependent dropout with sampling probabilities
dependent on the second order statistics of the data distribution. To tackle
the issue of evolving distribution of neurons in deep learning, we propose an
efficient adaptive dropout (named \textbf{evolutional dropout}) that computes
the sampling probabilities on-the-fly from a mini-batch of examples. Empirical
studies on several benchmark datasets demonstrate that the proposed dropouts
achieve not only much faster convergence but also a smaller testing error
than the standard dropout. For example, on the CIFAR-100 data, the evolutional
dropout achieves relative improvements over 10\% on the prediction performance
and over 50\% on the convergence speed compared to the standard dropout.
| [
{
"version": "v1",
"created": "Sat, 6 Feb 2016 05:41:57 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Dec 2016 05:31:19 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Li",
"Zhe",
""
],
[
"Gong",
"Boqing",
""
],
[
"Yang",
"Tianbao",
""
]
] | TITLE: Improved Dropout for Shallow and Deep Learning
ABSTRACT: Dropout has been witnessed with great success in training deep neural
networks by independently zeroing out the outputs of neurons at random. It has
also received a surge of interest for shallow learning, e.g., logistic
regression. However, the independent sampling for dropout could be suboptimal
for the sake of convergence. In this paper, we propose to use multinomial
sampling for dropout, i.e., sampling features or neurons according to a
multinomial distribution with different probabilities for different
features/neurons. To exhibit the optimal dropout probabilities, we analyze the
shallow learning with multinomial dropout and establish the risk bound for
stochastic optimization. By minimizing a sampling dependent factor in the risk
bound, we obtain a distribution-dependent dropout with sampling probabilities
dependent on the second order statistics of the data distribution. To tackle
the issue of evolving distribution of neurons in deep learning, we propose an
efficient adaptive dropout (named \textbf{evolutional dropout}) that computes
the sampling probabilities on-the-fly from a mini-batch of examples. Empirical
studies on several benchmark datasets demonstrate that the proposed dropouts
achieve not only much faster convergence but also a smaller testing error
than the standard dropout. For example, on the CIFAR-100 data, the evolutional
dropout achieves relative improvements over 10\% on the prediction performance
and over 50\% on the convergence speed compared to the standard dropout.
| no_new_dataset | 0.948917 |
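As an illustrative aside to the record above: data-dependent multinomial dropout can be sketched in a few lines, with sampling probabilities computed on-the-fly from mini-batch second moments (the square-root rule follows the paper's second-order heuristic; treat the details as an approximation):

```python
# Multinomial dropout with mini-batch-dependent sampling probabilities.
import numpy as np

def evolutional_probs(X, eps=1e-12):
    p = np.sqrt((X ** 2).mean(axis=0)) + eps   # per-feature second moments
    return p / p.sum()                         # multinomial over features

def multinomial_dropout(X, k, p, rng=np.random.default_rng(0)):
    m = rng.multinomial(k, p)                  # how often each feature is drawn
    return X * (m / (k * p))                   # unbiased: E[m_i] = k * p_i
```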
1602.08194 | Ryan Spring | Ryan Spring, Anshumali Shrivastava | Scalable and Sustainable Deep Learning via Randomized Hashing | null | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current deep learning architectures are growing larger in order to learn from
complex datasets. These architectures require giant matrix multiplication
operations to train millions of parameters. Conversely, there is another
growing trend to bring deep learning to low-power, embedded devices. The matrix
operations, associated with both training and testing of deep networks, are
very expensive from a computational and energy standpoint. We present a novel
hashing based technique to drastically reduce the amount of computation needed
to train and test deep networks. Our approach combines recent ideas from
adaptive dropouts and randomized hashing for maximum inner product search to
select the nodes with the highest activation efficiently. Our new algorithm for
deep learning reduces the overall computational cost of forward and
back-propagation by operating on significantly fewer (sparse) nodes. As a
consequence, our algorithm uses only 5% of the total multiplications, while
keeping on average within 1% of the accuracy of the original model. A unique
property of the proposed hashing based back-propagation is that the updates are
always sparse. Due to the sparse gradient updates, our algorithm is ideally
suited for asynchronous and parallel training, leading to near-linear speedup
with an increasing number of cores. We demonstrate the scalability and
sustainability (energy efficiency) of our proposed algorithm via rigorous
experimental evaluations on several real datasets.
| [
{
"version": "v1",
"created": "Fri, 26 Feb 2016 05:07:23 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2016 04:52:36 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Spring",
"Ryan",
""
],
[
"Shrivastava",
"Anshumali",
""
]
] | TITLE: Scalable and Sustainable Deep Learning via Randomized Hashing
ABSTRACT: Current deep learning architectures are growing larger in order to learn from
complex datasets. These architectures require giant matrix multiplication
operations to train millions of parameters. Conversely, there is another
growing trend to bring deep learning to low-power, embedded devices. The matrix
operations, associated with both training and testing of deep networks, are
very expensive from a computational and energy standpoint. We present a novel
hashing based technique to drastically reduce the amount of computation needed
to train and test deep networks. Our approach combines recent ideas from
adaptive dropouts and randomized hashing for maximum inner product search to
select the nodes with the highest activation efficiently. Our new algorithm for
deep learning reduces the overall computational cost of forward and
back-propagation by operating on significantly fewer (sparse) nodes. As a
consequence, our algorithm uses only 5% of the total multiplications, while
keeping on average within 1% of the accuracy of the original model. A unique
property of the proposed hashing based back-propagation is that the updates are
always sparse. Due to the sparse gradient updates, our algorithm is ideally
suited for asynchronous and parallel training, leading to near-linear speedup
with an increasing number of cores. We demonstrate the scalability and
sustainability (energy efficiency) of our proposed algorithm via rigorous
experimental evaluations on several real datasets.
| no_new_dataset | 0.94366 |
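As an illustrative aside to the record above: random-hyperplane (SimHash) signatures let a layer retrieve the few hidden units likely to have large inner product with the input, instead of computing all of them. A minimal sketch (table sizes and bit counts are illustrative; exact-bucket lookup may return an empty set):

```python
# SimHash bucketing of a layer's weight vectors for sparse activation lookup.
import numpy as np

class SimHashLayer:
    def __init__(self, W, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                   # (n_units, dim) weights
        self.H = rng.standard_normal((n_bits, W.shape[1]))
        keys = (self.H @ W.T > 0)                    # signature of each unit
        self.table = {}
        for u, k in enumerate(map(tuple, keys.T)):
            self.table.setdefault(k, []).append(u)

    def active_units(self, x):
        key = tuple(self.H @ x > 0)                  # signature of the input
        return self.table.get(key, [])               # units sharing the bucket
```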
1603.02636 | Lucas Beyer | Lucas Beyer and Alexander Hermans and Bastian Leibe | DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range
Data | Lucas Beyer and Alexander Hermans contributed equally | null | null | null | cs.RO cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the DROW detector, a deep learning based detector for 2D range
data. Laser scanners are lighting invariant, provide accurate range data, and
typically cover a large field of view, making them interesting sensors for
robotics applications. So far, research on detection in laser range data has
been dominated by hand-crafted features and boosted classifiers, potentially
losing performance due to suboptimal design choices. We propose a Convolutional
Neural Network (CNN) based detector for this task. We show how to effectively
apply CNNs for detection in 2D range data, and propose a depth preprocessing
step and voting scheme that significantly improve CNN performance. We
demonstrate our approach on wheelchairs and walkers, obtaining state of the art
detection results. Apart from the training data, none of our design choices
limits the detector to these two classes, though. We provide a ROS node for our
detector and release our dataset containing 464k laser scans, out of which 24k
were annotated.
| [
{
"version": "v1",
"created": "Tue, 8 Mar 2016 19:39:19 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2016 18:06:28 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Beyer",
"Lucas",
""
],
[
"Hermans",
"Alexander",
""
],
[
"Leibe",
"Bastian",
""
]
] | TITLE: DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range
Data
ABSTRACT: We introduce the DROW detector, a deep learning based detector for 2D range
data. Laser scanners are lighting invariant, provide accurate range data, and
typically cover a large field of view, making them interesting sensors for
robotics applications. So far, research on detection in laser range data has
been dominated by hand-crafted features and boosted classifiers, potentially
losing performance due to suboptimal design choices. We propose a Convolutional
Neural Network (CNN) based detector for this task. We show how to effectively
apply CNNs for detection in 2D range data, and propose a depth preprocessing
step and voting scheme that significantly improve CNN performance. We
demonstrate our approach on wheelchairs and walkers, obtaining state of the art
detection results. Apart from the training data, none of our design choices
limits the detector to these two classes, though. We provide a ROS node for our
detector and release our dataset containing 464k laser scans, out of which 24k
were annotated.
| new_dataset | 0.956104 |
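As an illustrative aside to the DROW record above: one depth-preprocessing step in this spirit cuts out a fixed window of neighbouring ranges around each scan point and expresses them relative to the centre depth. A minimal sketch (window size and clipping are illustrative assumptions, not the paper's exact parameters):

```python
# Per-point cut-outs from a 1D laser scan, normalized by the centre depth.
import numpy as np

def cutouts(ranges, half_width=15, clip=1.0):
    n = len(ranges)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        win = ranges[lo:hi] - ranges[i]        # depth relative to centre point
        win = np.clip(win, -clip, clip)        # suppress far background
        pad = 2 * half_width + 1 - len(win)
        out.append(np.pad(win, (0, pad)))      # fixed-length feature window
    return np.stack(out)                       # (n_points, window)
```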
1609.09869 | Rahul Gopal Krishnan | Rahul G. Krishnan, Uri Shalit, David Sontag | Structured Inference Networks for Nonlinear State Space Models | To appear in the Thirty-First AAAI Conference on Artificial
Intelligence, February 2017, 13 pages, 11 figures with supplement, changed to
AAAI formatting style, added references | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian state space models have been used for decades as generative models
of sequential data. They admit an intuitive probabilistic interpretation, have
a simple functional form, and enjoy widespread adoption. We introduce a unified
algorithm to efficiently learn a broad class of linear and non-linear state
space models, including variants where the emission and transition
distributions are modeled by deep neural networks. Our learning algorithm
simultaneously learns a compiled inference network and the generative model,
leveraging a structured variational approximation parameterized by recurrent
neural networks to mimic the posterior distribution. We apply the learning
algorithm to both synthetic and real-world datasets, demonstrating its
scalability and versatility. We find that using the structured approximation to
the posterior results in models with significantly higher held-out likelihood.
| [
{
"version": "v1",
"created": "Fri, 30 Sep 2016 19:53:11 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2016 19:10:10 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Krishnan",
"Rahul G.",
""
],
[
"Shalit",
"Uri",
""
],
[
"Sontag",
"David",
""
]
] | TITLE: Structured Inference Networks for Nonlinear State Space Models
ABSTRACT: Gaussian state space models have been used for decades as generative models
of sequential data. They admit an intuitive probabilistic interpretation, have
a simple functional form, and enjoy widespread adoption. We introduce a unified
algorithm to efficiently learn a broad class of linear and non-linear state
space models, including variants where the emission and transition
distributions are modeled by deep neural networks. Our learning algorithm
simultaneously learns a compiled inference network and the generative model,
leveraging a structured variational approximation parameterized by recurrent
neural networks to mimic the posterior distribution. We apply the learning
algorithm to both synthetic and real-world datasets, demonstrating its
scalability and versatility. We find that using the structured approximation to
the posterior results in models with significantly higher held-out likelihood.
| no_new_dataset | 0.951549 |
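As an illustrative aside to the record above: the objective optimized by such structured inference networks is the standard sequential evidence lower bound for a state space model, written here in a common form (boundary conventions for $z_0$ vary by formulation):

```latex
% Sequential ELBO for p(x_{1:T}, z_{1:T}) with approximate posterior q:
\log p(x_{1:T}) \;\ge\;
  \sum_{t=1}^{T} \mathbb{E}_{q(z_t \mid x_{1:T})}\!\left[\log p(x_t \mid z_t)\right]
  \;-\; \sum_{t=1}^{T} \mathbb{E}_{q(z_{t-1} \mid x_{1:T})}\!\left[
      \mathrm{KL}\big(q(z_t \mid z_{t-1}, x_{1:T}) \,\|\, p(z_t \mid z_{t-1})\big)\right]
```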
1611.07593 | Ziming Zhang | Ziming Zhang and Venkatesh Saligrama | Learning Joint Feature Adaptation for Zero-Shot Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot recognition (ZSR) aims to recognize target-domain data instances of
unseen classes based on the models learned from associated pairs of seen-class
source and target domain data. One of the key challenges in ZSR is the relative
scarcity of source-domain features (e.g. one feature vector per class), which
do not fully account for wide variability in target-domain instances. In this
paper we propose a novel framework of learning data-dependent feature
transforms for scoring similarity between an arbitrary pair of source and
target data instances to account for the wide variability in target domain. Our
proposed approach is based on optimizing over a parameterized family of local
feature displacements that maximize the source-target adaptive similarity
functions. Accordingly we propose formulating zero-shot learning (ZSL) using
latent structural SVMs to learn our similarity functions from training data. As
demonstration we design a specific algorithm under the proposed framework
involving bilinear similarity functions and regularized least squares as
penalties for feature displacement. We test our approach on several benchmark
datasets for ZSR and show significant improvement over the state-of-the-art.
For instance, on the aP&Y dataset we achieve 80.89% recognition
accuracy, outperforming the state-of-the-art by 11.15%.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2016 01:13:37 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Dec 2016 03:17:02 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Zhang",
"Ziming",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: Learning Joint Feature Adaptation for Zero-Shot Recognition
ABSTRACT: Zero-shot recognition (ZSR) aims to recognize target-domain data instances of
unseen classes based on the models learned from associated pairs of seen-class
source and target domain data. One of the key challenges in ZSR is the relative
scarcity of source-domain features (e.g. one feature vector per class), which
do not fully account for wide variability in target-domain instances. In this
paper we propose a novel framework of learning data-dependent feature
transforms for scoring similarity between an arbitrary pair of source and
target data instances to account for the wide variability in target domain. Our
proposed approach is based on optimizing over a parameterized family of local
feature displacements that maximize the source-target adaptive similarity
functions. Accordingly we propose formulating zero-shot learning (ZSL) using
latent structural SVMs to learn our similarity functions from training data. As
demonstration we design a specific algorithm under the proposed framework
involving bilinear similarity functions and regularized least squares as
penalties for feature displacement. We test our approach on several benchmark
datasets for ZSR and show significant improvement over the state-of-the-art.
For instance, on the aP&Y dataset we achieve 80.89% recognition
accuracy, outperforming the state-of-the-art by 11.15%.
| no_new_dataset | 0.945801 |
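The adaptive-similarity idea above admits a small worked example. With a bilinear similarity and a squared-norm penalty on a target-feature displacement delta, max_delta x^T W (y + delta) - lam * ||delta||^2 has the closed-form maximizer delta* = W^T x / (2 lam), giving the adapted score x^T W y + ||W^T x||^2 / (4 lam). The exact penalty family in the paper may differ; this is only an illustrative instance.

```python
import numpy as np

def adaptive_similarity(x, y, W, lam=1.0):
    """Bilinear similarity with an L2-penalized target displacement,
    maximized in closed form (see the derivation in the lead-in)."""
    base = x @ W @ y
    g = W.T @ x
    return base + g @ g / (4.0 * lam), g / (2.0 * lam)

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=7)   # source / target features
W = rng.normal(size=(5, 7))
score, delta = adaptive_similarity(x, y, W, lam=2.0)
print(round(float(score), 3), delta.shape)
```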
1611.08737 | Shuangfei Zhai | Nana Li, Shuangfei Zhai, Zhongfei Zhang, Boying Liu | Structural Correspondence Learning for Cross-lingual Sentiment
Classification with One-to-many Mappings | To appear in AAAI 2017. arXiv admin note: text overlap with
arXiv:1008.0716 by other authors | null | null | null | cs.LG cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural correspondence learning (SCL) is an effective method for
cross-lingual sentiment classification. This approach uses unlabeled documents
along with a word translation oracle to automatically induce task specific,
cross-lingual correspondences. It transfers knowledge through identifying
important features, i.e., pivot features. For simplicity, however, it assumes
that the word translation oracle maps each pivot feature in the source language
to exactly one word in the target language. This one-to-one mapping between
words in different languages is too strict. Moreover, context is not considered
at all. In this paper, we propose a cross-lingual SCL based on distributed
representation of words; it can learn meaningful one-to-many mappings for pivot
words using large amounts of monolingual data and a small dictionary. We
conduct experiments on NLP\&CC 2013 cross-lingual sentiment analysis dataset,
employing English as the source language and Chinese as the target language.
Our method does not rely on parallel corpora, and the experimental results show
that our approach is more competitive than the state-of-the-art methods in
cross-lingual sentiment classification.
| [
{
"version": "v1",
"created": "Sat, 26 Nov 2016 20:11:00 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Li",
"Nana",
""
],
[
"Zhai",
"Shuangfei",
""
],
[
"Zhang",
"Zhongfei",
""
],
[
"Liu",
"Boying",
""
]
] | TITLE: Structural Correspondence Learning for Cross-lingual Sentiment
Classification with One-to-many Mappings
ABSTRACT: Structural correspondence learning (SCL) is an effective method for
cross-lingual sentiment classification. This approach uses unlabeled documents
along with a word translation oracle to automatically induce task specific,
cross-lingual correspondences. It transfers knowledge through identifying
important features, i.e., pivot features. For simplicity, however, it assumes
that the word translation oracle maps each pivot feature in the source language
to exactly one word in the target language. This one-to-one mapping between
words in different languages is too strict. Moreover, context is not considered
at all. In this paper, we propose a cross-lingual SCL based on distributed
representation of words; it can learn meaningful one-to-many mappings for pivot
words using large amounts of monolingual data and a small dictionary. We
conduct experiments on NLP\&CC 2013 cross-lingual sentiment analysis dataset,
employing English as the source language and Chinese as the target language.
Our method does not rely on parallel corpora, and the experimental results show
that our approach is more competitive than the state-of-the-art methods in
cross-lingual sentiment classification.
| no_new_dataset | 0.948155 |
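The one-to-many pivot mapping described above can be sketched directly with word embeddings: given a source pivot word, retrieve its k nearest target-language neighbors by cosine similarity. The shared cross-lingual space and the toy vocabulary below are assumptions for illustration only.

```python
import numpy as np

def one_to_many_pivots(src_vec, tgt_vocab, tgt_matrix, k=3):
    """Return the k nearest target-language words to a source pivot word
    by cosine similarity in a (hypothetical) shared embedding space."""
    src = src_vec / np.linalg.norm(src_vec)
    tgt = tgt_matrix / np.linalg.norm(tgt_matrix, axis=1, keepdims=True)
    sims = tgt @ src
    top = np.argsort(-sims)[:k]
    return [(tgt_vocab[i], float(sims[i])) for i in top]

rng = np.random.default_rng(1)
vocab = ["好", "很好", "糟糕", "一般", "优秀"]      # toy Chinese vocabulary
emb = rng.normal(size=(len(vocab), 50))
print(one_to_many_pivots(rng.normal(size=50), vocab, emb, k=2))
```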
1611.09226 | Michael Figurnov | Michael Figurnov, Kirill Struminsky, Dmitry Vetrov | Robust Variational Inference | NIPS 2016 Workshop, Advances in Approximate Bayesian Inference | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Variational inference is a powerful tool for approximate inference. However,
it mainly focuses on the evidence lower bound as variational objective and the
development of other measures for variational inference is a promising area of
research. This paper proposes a robust modification of evidence and a lower
bound for the evidence, which is applicable when the majority of the training
set samples are random noise objects. We provide experiments for variational
autoencoders to show advantage of the objective over the evidence lower bound
on synthetic datasets obtained by adding uninformative noise objects to MNIST
and OMNIGLOT. Additionally, for the original MNIST and OMNIGLOT datasets we
observe a small improvement over the non-robust evidence lower bound.
| [
{
"version": "v1",
"created": "Mon, 28 Nov 2016 16:28:41 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Figurnov",
"Michael",
""
],
[
"Struminsky",
"Kirill",
""
],
[
"Vetrov",
"Dmitry",
""
]
] | TITLE: Robust Variational Inference
ABSTRACT: Variational inference is a powerful tool for approximate inference. However,
it mainly focuses on the evidence lower bound as variational objective and the
development of other measures for variational inference is a promising area of
research. This paper proposes a robust modification of evidence and a lower
bound for the evidence, which is applicable when the majority of the training
set samples are random noise objects. We provide experiments for variational
autoencoders to show the advantage of the objective over the evidence lower bound
on synthetic datasets obtained by adding uninformative noise objects to MNIST
and OMNIGLOT. Additionally, for the original MNIST and OMNIGLOT datasets we
observe a small improvement over the non-robust evidence lower bound.
| no_new_dataset | 0.945399 |
1612.00525 | Turki Turki | Turki Turki and Zhi Wei | A Noise-Filtering Approach for Cancer Drug Sensitivity Prediction | Accepted at NIPS 2016 Workshop on Machine Learning for Health | null | null | null | cs.LG q-bio.GN stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Accurately predicting drug responses to cancer is an important problem
hindering oncologists' efforts to find the most effective drugs to treat
cancer, which is a core goal in precision medicine. The scientific community
has focused on improving this prediction based on genomic, epigenomic, and
proteomic datasets measured in human cancer cell lines. Real-world cancer cell
lines contain noise, which degrades the performance of machine learning
algorithms. This problem is rarely addressed in the existing approaches. In
this paper, we present a noise-filtering approach that integrates techniques
from numerical linear algebra and information retrieval targeted at filtering
out noisy cancer cell lines. By filtering out noisy cancer cell lines, we can
train machine learning algorithms on better quality cancer cell lines. We
evaluate the performance of our approach and compare it with an existing
approach using the Area Under the ROC Curve (AUC) on clinical trial data. The
experimental results show that our proposed approach is stable and also yields
the highest AUC at a statistically significant level.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 00:41:11 GMT"
},
{
"version": "v2",
"created": "Mon, 5 Dec 2016 05:15:51 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Turki",
"Turki",
""
],
[
"Wei",
"Zhi",
""
]
] | TITLE: A Noise-Filtering Approach for Cancer Drug Sensitivity Prediction
ABSTRACT: Accurately predicting drug responses to cancer is an important problem
hindering oncologists' efforts to find the most effective drugs to treat
cancer, which is a core goal in precision medicine. The scientific community
has focused on improving this prediction based on genomic, epigenomic, and
proteomic datasets measured in human cancer cell lines. Real-world cancer cell
lines contain noise, which degrades the performance of machine learning
algorithms. This problem is rarely addressed in the existing approaches. In
this paper, we present a noise-filtering approach that integrates techniques
from numerical linear algebra and information retrieval targeted at filtering
out noisy cancer cell lines. By filtering out noisy cancer cell lines, we can
train machine learning algorithms on better quality cancer cell lines. We
evaluate the performance of our approach and compare it with an existing
approach using the Area Under the ROC Curve (AUC) on clinical trial data. The
experimental results show that our proposed approach is stable and also yields
the highest AUC at a statistically significant level.
| no_new_dataset | 0.950824 |
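The paper's filter combines numerical linear algebra with information retrieval techniques; as a hedged stand-in for the linear-algebra half only, the sketch below flags cell lines whose rows are poorly explained by a low-rank SVD reconstruction of the feature matrix. The rank and threshold are illustrative, not the paper's choices.

```python
import numpy as np

def flag_noisy_rows(X, rank=10, quantile=0.9):
    """Approximate the cell-line-by-feature matrix with a rank-r SVD
    reconstruction and flag rows with unusually large residual norms."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    resid = np.linalg.norm(X - X_hat, axis=1)
    return resid > np.quantile(resid, quantile)   # True = candidate noisy line

X = np.random.randn(200, 500)                     # 200 cell lines, 500 features
X[:5] += 8 * np.random.randn(5, 500)              # inject 5 corrupted lines
print(np.where(flag_noisy_rows(X))[0][:10])
```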
1612.00840 | Soumi Chaki | Soumi Chaki, Aurobinda Routray, William K. Mohanty, Mamata Jenamani | A novel multiclass SVM based framework to classify lithology from well
logs: a real-world application | 5 pages, 5 figures, 4 tables Presented at INDICON 2015 at New Delhi,
India | null | null | null | cs.LG stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Support vector machines (SVMs) have been recognized as a potential tool for
supervised classification analyses in different domains of research. In
essence, SVM is a binary classifier. Therefore, in case of a multiclass
problem, the problem is divided into a series of binary problems which are
solved by binary classifiers, and finally the classification results are
combined following either the one-against-one or one-against-all strategies. In
this paper, an attempt has been made to classify lithology using a multiclass
SVM based framework using well logs as predictor variables. Here, the lithology
is classified into four classes (sand, shaly sand, sandy shale, and shale)
based on the relative values of sand and shale fractions, as suggested by an
expert geologist. The available dataset, consisting of well logs (gamma ray,
neutron porosity, density, and P-sonic) and class information from four closely
spaced wells of an onshore hydrocarbon field, is divided into training and
testing sets. We have used the one-against-all strategy to combine the results of
multiple binary classifiers. The reported results established the superiority
of multiclass SVM compared to other classifiers in terms of classification
accuracy. The selection of kernel function and associated parameters has also
been investigated here. It can be envisaged from the results achieved in this
study that the proposed framework based on multiclass SVM can further be used
to solve classification problems. In future research endeavors, seismic
attributes can be introduced in the framework to classify the lithology
throughout a study area from seismic inputs.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 07:55:16 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Chaki",
"Soumi",
""
],
[
"Routray",
"Aurobinda",
""
],
[
"Mohanty",
"William K.",
""
],
[
"Jenamani",
"Mamata",
""
]
] | TITLE: A novel multiclass SVM based framework to classify lithology from well
logs: a real-world application
ABSTRACT: Support vector machines (SVMs) have been recognized as a potential tool for
supervised classification analyses in different domains of research. In
essence, SVM is a binary classifier. Therefore, in case of a multiclass
problem, the problem is divided into a series of binary problems which are
solved by binary classifiers, and finally the classification results are
combined following either the one-against-one or one-against-all strategies. In
this paper, an attempt has been made to classify lithology using a multiclass
SVM based framework using well logs as predictor variables. Here, the lithology
is classified into four classes (sand, shaly sand, sandy shale, and shale)
based on the relative values of sand and shale fractions, as suggested by an
expert geologist. The available dataset, consisting of well logs (gamma ray,
neutron porosity, density, and P-sonic) and class information from four closely
spaced wells of an onshore hydrocarbon field, is divided into training and
testing sets. We have used the one-against-all strategy to combine the results of
multiple binary classifiers. The reported results established the superiority
of multiclass SVM compared to other classifiers in terms of classification
accuracy. The selection of kernel function and associated parameters has also
been investigated here. It can be envisaged from the results achieved in this
study that the proposed framework based on multiclass SVM can further be used
to solve classification problems. In future research endeavors, seismic
attributes can be introduced in the framework to classify the lithology
throughout a study area from seismic inputs.
| no_new_dataset | 0.949201 |
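The one-against-all combination of binary SVMs described above maps directly onto standard tooling. The scikit-learn sketch below uses synthetic stand-ins for the four well logs and four lithology classes; the RBF kernel and its parameters mirror the kind of choices the paper investigates but are not its tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for four well-log predictors and four lithology classes.
X, y = make_classification(n_samples=600, n_features=4, n_informative=4,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# One-against-all multiclass SVM with an RBF kernel; C and gamma would be
# tuned in practice, as the paper's kernel/parameter study suggests.
clf = OneVsRestClassifier(make_pipeline(StandardScaler(),
                                        SVC(kernel="rbf", C=10.0, gamma="scale")))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```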
1612.00866 | John Beieler | John Beieler | Creating a Real-Time, Reproducible Event Dataset | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generation of political event data has remained much the same since the
mid-1990s, both in terms of data acquisition and the process of coding text
into data. Since the 1990s, however, there have been significant improvements
in open-source natural language processing software and in the availability of
digitized news content. This paper presents a new, next-generation event
dataset, named Phoenix, that builds from these and other advances. This dataset
includes improvements in the underlying news collection process and event
coding software, along with the creation of a general processing pipeline
necessary to produce daily-updated data. This paper provides a face validity
check by briefly examining the data for the conflict in Syria, and a
comparison between Phoenix and the Integrated Crisis Early Warning System data.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 21:28:00 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Beieler",
"John",
""
]
] | TITLE: Creating a Real-Time, Reproducible Event Dataset
ABSTRACT: The generation of political event data has remained much the same since the
mid-1990s, both in terms of data acquisition and the process of coding text
into data. Since the 1990s, however, there have been significant improvements
in open-source natural language processing software and in the availability of
digitized news content. This paper presents a new, next-generation event
dataset, named Phoenix, that builds from these and other advances. This dataset
includes improvements in the underlying news collection process and event
coding software, along with the creation of a general processing pipeline
necessary to produce daily-updated data. This paper provides a face validity
check by briefly examining the data for the conflict in Syria, and a
comparison between Phoenix and the Integrated Crisis Early Warning System data.
| new_dataset | 0.964321 |
1612.00960 | Tasuku Soma | Tasuku Soma, Yuichi Yoshida | Non-monotone DR-Submodular Function Maximization | This paper is to appear in AAAI 2017 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider non-monotone DR-submodular function maximization, where
DR-submodularity (diminishing return submodularity) is an extension of
submodularity for functions over the integer lattice based on the concept of
the diminishing return property. Maximizing non-monotone DR-submodular
functions has many applications in machine learning that cannot be captured by
submodular set functions. In this paper, we present a
$\frac{1}{2+\epsilon}$-approximation algorithm with a running time of roughly
$O(\frac{n}{\epsilon}\log^2 B)$, where $n$ is the size of the ground set, $B$
is the maximum value of a coordinate, and $\epsilon > 0$ is a parameter. The
approximation ratio is almost tight and the dependency of running time on $B$
is exponentially smaller than the naive greedy algorithm. Experiments on
synthetic and real-world datasets demonstrate that our algorithm outputs almost
the best solution compared to other baseline algorithms, whereas its running
time is several orders of magnitude faster.
| [
{
"version": "v1",
"created": "Sat, 3 Dec 2016 11:37:28 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Soma",
"Tasuku",
""
],
[
"Yoshida",
"Yuichi",
""
]
] | TITLE: Non-monotone DR-Submodular Function Maximization
ABSTRACT: We consider non-monotone DR-submodular function maximization, where
DR-submodularity (diminishing return submodularity) is an extension of
submodularity for functions over the integer lattice based on the concept of
the diminishing return property. Maximizing non-monotone DR-submodular
functions has many applications in machine learning that cannot be captured by
submodular set functions. In this paper, we present a
$\frac{1}{2+\epsilon}$-approximation algorithm with a running time of roughly
$O(\frac{n}{\epsilon}\log^2 B)$, where $n$ is the size of the ground set, $B$
is the maximum value of a coordinate, and $\epsilon > 0$ is a parameter. The
approximation ratio is almost tight and the dependency of running time on $B$
is exponentially smaller than the naive greedy algorithm. Experiments on
synthetic and real-world datasets demonstrate that our algorithm outputs almost
the best solution compared to other baseline algorithms, whereas its running
time is several orders of magnitude faster.
| no_new_dataset | 0.947914 |
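For contrast with the paper's algorithm, the naive greedy baseline it improves upon is easy to state: repeatedly add one unit to the coordinate with the best marginal gain over the lattice {0, ..., B}^n, at O(n) function evaluations per unit of budget. The toy objective below is monotone DR-submodular; greedy is only a baseline here and does not carry the paper's guarantee for non-monotone functions.

```python
import numpy as np

def naive_lattice_greedy(f, n, B, budget):
    """Coordinate-wise greedy over the integer lattice {0, ..., B}^n:
    add one unit to the coordinate with the largest positive marginal gain."""
    x = np.zeros(n, dtype=int)
    fx = f(x)
    for _ in range(budget):
        best_i, best_gain = -1, 0.0
        for i in range(n):
            if x[i] < B:
                x[i] += 1
                gain = f(x) - fx
                x[i] -= 1
                if gain > best_gain:
                    best_i, best_gain = i, gain
        if best_i < 0:
            break
        x[best_i] += 1
        fx += best_gain
    return x, fx

# A toy DR-submodular function: concave of a nonnegative sum plus
# separable concave terms (both preserve diminishing returns).
w = np.array([3.0, 1.0, 2.0])
f = lambda x: np.sqrt(1.0 + w @ x) + np.sum(np.log1p(x))
print(naive_lattice_greedy(f, n=3, B=5, budget=8))
```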
1612.00991 | Yaxing Wang | Yaxing Wang, Lichao Zhang, Joost van de Weijer | Ensembles of Generative Adversarial Networks | accepted NIPS 2016 Workshop on Adversarial Training | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Ensembles are a popular way to improve results of discriminative CNNs. The
combination of several networks trained starting from different initializations
improves results significantly. In this paper we investigate the usage of
ensembles of GANs. The specific nature of GANs opens up several new ways to
construct ensembles. The first one is based on the fact that in the minimax
game which is played to optimize the GAN objective the generator network keeps
on changing even after the network can be considered optimal. As such ensembles
of GANs can be constructed based on the same network initialization but just
taking models which have different amount of iterations. These so-called self
ensembles are much faster to train than traditional ensembles. The second
method, called cascade GANs, redirects part of the training data which is badly
modeled by the first GAN to another GAN. In experiments on the CIFAR10 dataset
we show that ensembles of GANs obtain model probability distributions which
better model the data distribution. In addition, we show that these improved
results can be obtained at little additional computational cost.
| [
{
"version": "v1",
"created": "Sat, 3 Dec 2016 17:49:02 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Wang",
"Yaxing",
""
],
[
"Zhang",
"Lichao",
""
],
[
"van de Weijer",
"Joost",
""
]
] | TITLE: Ensembles of Generative Adversarial Networks
ABSTRACT: Ensembles are a popular way to improve results of discriminative CNNs. The
combination of several networks trained starting from different initializations
improves results significantly. In this paper we investigate the usage of
ensembles of GANs. The specific nature of GANs opens up several new ways to
construct ensembles. The first one is based on the fact that, in the minimax
game played to optimize the GAN objective, the generator network keeps changing
even after the network can be considered optimal. As such, ensembles of GANs
can be constructed from the same network initialization by simply taking models
after different numbers of training iterations. These so-called self
ensembles are much faster to train than traditional ensembles. The second
method, called cascade GANs, redirects part of the training data which is badly
modeled by the first GAN to another GAN. In experiments on the CIFAR10 dataset
we show that ensembles of GANs obtain model probability distributions which
better model the data distribution. In addition, we show that these improved
results can be obtained at little additional computational cost.
| no_new_dataset | 0.949902 |
1612.01022 | Yuankai Wu Yuankai Wu | Yuankai Wu and Huachun Tan | Short-term traffic flow forecasting with spatial-temporal correlation in
a hybrid deep learning framework | 14 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning approaches have reached a celebrity status in artificial
intelligence field, its success have mostly relied on Convolutional Networks
(CNN) and Recurrent Networks. By exploiting fundamental spatial properties of
images and videos, the CNN always achieves dominant performance on visual
tasks. And the Recurrent Networks (RNN) especially long short-term memory
methods (LSTM) can successfully characterize the temporal correlation, thus
exhibits superior capability for time series tasks. Traffic flow data have
plentiful characteristics on both time and space domain. However, applications
of CNN and LSTM approaches on traffic flow are limited. In this paper, we
propose a novel deep architecture combined CNN and LSTM to forecast future
traffic flow (CLTFP). An 1-dimension CNN is exploited to capture spatial
features of traffic flow, and two LSTMs are utilized to mine the short-term
variability and periodicities of traffic flow. Given those meaningful features,
the feature-level fusion is performed to achieve short-term forecasting. The
proposed CLTFP is compared with other popular forecasting methods on an open
dataset. Experimental results indicate that the CLTFP has considerable
advantages in traffic flow forecasting. In addition, the proposed CLTFP is
analyzed from the viewpoint of Granger causality, and several interesting
properties of CLTFP are discovered and discussed.
| [
{
"version": "v1",
"created": "Sat, 3 Dec 2016 21:30:26 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Wu",
"Yuankai",
""
],
[
"Tan",
"Huachun",
""
]
] | TITLE: Short-term traffic flow forecasting with spatial-temporal correlation in
a hybrid deep learning framework
ABSTRACT: Deep learning approaches have reached celebrity status in the artificial
intelligence field; their success has mostly relied on Convolutional Networks
(CNNs) and Recurrent Networks. By exploiting fundamental spatial properties of
images and videos, CNNs achieve dominant performance on visual tasks, while
Recurrent Networks (RNNs), especially long short-term memory (LSTM) methods,
successfully characterize temporal correlation and thus exhibit superior
capability for time series tasks. Traffic flow data have plentiful
characteristics in both the time and space domains. However, applications of
CNN and LSTM approaches to traffic flow are limited. In this paper, we
propose a novel deep architecture combining CNN and LSTM to forecast future
traffic flow (CLTFP). A 1-dimensional CNN is exploited to capture spatial
features of traffic flow, and two LSTMs are utilized to mine the short-term
variability and periodicities of traffic flow. Given those meaningful features,
the feature-level fusion is performed to achieve short-term forecasting. The
proposed CLTFP is compared with other popular forecasting methods on an open
dataset. Experimental results indicate that the CLTFP has considerable
advantages in traffic flow forecasting. In addition, the proposed CLTFP is
analyzed from the viewpoint of Granger causality, and several interesting
properties of CLTFP are discovered and discussed.
| no_new_dataset | 0.946794 |
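The CLTFP architecture above combines a 1-dimensional CNN over space with two LSTMs over time, fused at the feature level. The PyTorch sketch below is one plausible reading of that design; the layer sizes, pooling, and the split into recent/periodic inputs are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CLTFPSketch(nn.Module):
    """Rough sketch of the CLTFP idea: a 1-D convolution captures spatial
    correlation across detector locations, two LSTMs capture short-term
    dynamics and periodicity, and the features are fused for the forecast."""
    def __init__(self, n_locations=50, horizon=1):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=5, padding=2),
                                  nn.ReLU(), nn.AdaptiveAvgPool1d(8))
        self.lstm_recent = nn.LSTM(n_locations, 32, batch_first=True)
        self.lstm_period = nn.LSTM(n_locations, 32, batch_first=True)
        self.head = nn.Linear(16 * 8 + 32 + 32, n_locations * horizon)

    def forward(self, x_now, x_recent, x_period):
        # x_now: (B, n_locations) current snapshot across space
        # x_recent / x_period: (B, T, n_locations) recent and periodic history
        s = self.conv(x_now.unsqueeze(1)).flatten(1)
        _, (h1, _) = self.lstm_recent(x_recent)
        _, (h2, _) = self.lstm_period(x_period)
        return self.head(torch.cat([s, h1[-1], h2[-1]], dim=1))

model = CLTFPSketch()
y = model(torch.randn(4, 50), torch.randn(4, 12, 50), torch.randn(4, 12, 50))
print(y.shape)  # torch.Size([4, 50])
```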
1612.01030 | Alexandre Drouin | Alexandre Drouin, Fr\'ed\'eric Raymond, Ga\"el Letarte St-Pierre,
Mario Marchand, Jacques Corbeil, Fran\c{c}ois Laviolette | Large scale modeling of antimicrobial resistance with interpretable
classifiers | Peer-reviewed and accepted for presentation at the Machine Learning
for Health Workshop, NIPS 2016, Barcelona, Spain | null | null | null | q-bio.GN cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antimicrobial resistance is an important public health concern that has
implications in the practice of medicine worldwide. Accurately predicting
resistance phenotypes from genome sequences shows great promise in promoting
better use of antimicrobial agents, by determining which antibiotics are likely
to be effective in specific clinical cases. In healthcare, this would allow for
the design of treatment plans tailored for specific individuals, likely
resulting in better clinical outcomes for patients with bacterial infections.
In this work, we present the recent work of Drouin et al. (2016) on using Set
Covering Machines to learn highly interpretable models of antibiotic resistance
and complement it by providing a large scale application of their method to the
entire PATRIC database. We report prediction results for 36 new datasets and
present the Kover AMR platform, a new web-based tool allowing the visualization
and interpretation of the generated models.
| [
{
"version": "v1",
"created": "Sat, 3 Dec 2016 22:52:44 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Drouin",
"Alexandre",
""
],
[
"Raymond",
"Frédéric",
""
],
[
"St-Pierre",
"Gaël Letarte",
""
],
[
"Marchand",
"Mario",
""
],
[
"Corbeil",
"Jacques",
""
],
[
"Laviolette",
"François",
""
]
] | TITLE: Large scale modeling of antimicrobial resistance with interpretable
classifiers
ABSTRACT: Antimicrobial resistance is an important public health concern that has
implications in the practice of medicine worldwide. Accurately predicting
resistance phenotypes from genome sequences shows great promise in promoting
better use of antimicrobial agents, by determining which antibiotics are likely
to be effective in specific clinical cases. In healthcare, this would allow for
the design of treatment plans tailored for specific individuals, likely
resulting in better clinical outcomes for patients with bacterial infections.
In this work, we present the recent work of Drouin et al. (2016) on using Set
Covering Machines to learn highly interpretable models of antibiotic resistance
and complement it by providing a large scale application of their method to the
entire PATRIC database. We report prediction results for 36 new datasets and
present the Kover AMR platform, a new web-based tool allowing the visualization
and interpretation of the generated models.
| new_dataset | 0.947039 |
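Set Covering Machines, mentioned above, greedily build a small conjunction (or disjunction) of binary rules. The toy sketch below learns a conjunction over binary features such as k-mer presence: each step picks the feature whose absence excludes the most remaining negatives, with a penalty p for excluded positives. It is a simplification of the actual SCM and of the Kover platform.

```python
import numpy as np

def scm_conjunction(X, y, max_rules=5, p=1.0):
    """Greedy Set Covering Machine sketch for a conjunction over binary
    features: predict positive iff every selected feature is present."""
    rules = []
    active_neg = (y == 0)                    # negatives not yet excluded
    for _ in range(max_rules):
        covered = (X[active_neg] == 0).sum(axis=0)   # negatives excluded
        errors = (X[y == 1] == 0).sum(axis=0)        # positives excluded
        utility = covered - p * errors
        j = int(np.argmax(utility))
        if utility[j] <= 0:
            break
        rules.append(j)
        active_neg &= (X[:, j] == 1)         # keep still-uncovered negatives
        if not active_neg.any():
            break
    return rules

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(300, 40))
y = (X[:, 3] & X[:, 17]).astype(int)         # hidden conjunction to recover
print(scm_conjunction(X, y))                 # likely [3, 17]
```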
1612.01035 | Lex Fridman | Lex Fridman, Bryan Reimer | Semi-Automated Annotation of Discrete States in Large Video Datasets | To be presented at AAAI 2017. arXiv admin note: text overlap with
arXiv:1508.04028 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a framework for semi-automated annotation of video frames where
the video is of an object that at any point in time can be labeled as being in
one of a finite number of discrete states. A Hidden Markov Model (HMM) is used
to model (1) the behavior of the underlying object and (2) the noisy
observation of its state through an image processing algorithm. The key insight
of this approach is that the annotation of frame-by-frame video can be reduced
from a problem of labeling every single image to a problem of detecting a
transition between states of the underlying object being recorded on video.
The performance of the framework is evaluated on a driver gaze classification
dataset composed of 16,000,000 images that were fully annotated over 6,000
hours of direct manual annotation labor. On this dataset, we achieve a 13x
reduction in manual annotation for an average accuracy of 99.1% and an 84x
reduction for an average accuracy of 91.2%.
| [
{
"version": "v1",
"created": "Sat, 3 Dec 2016 23:40:14 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Fridman",
"Lex",
""
],
[
"Reimer",
"Bryan",
""
]
] | TITLE: Semi-Automated Annotation of Discrete States in Large Video Datasets
ABSTRACT: We propose a framework for semi-automated annotation of video frames where
the video is of an object that at any point in time can be labeled as being in
one of a finite number of discrete states. A Hidden Markov Model (HMM) is used
to model (1) the behavior of the underlying object and (2) the noisy
observation of its state through an image processing algorithm. The key insight
of this approach is that the annotation of frame-by-frame video can be reduced
from a problem of labeling every single image to a problem of detecting a
transition between states of the underlying object being recorded on video.
The performance of the framework is evaluated on a driver gaze classification
dataset composed of 16,000,000 images that were fully annotated over 6,000
hours of direct manual annotation labor. On this dataset, we achieve a 13x
reduction in manual annotation for an average accuracy of 99.1% and an 84x
reduction for an average accuracy of 91.2%.
| new_dataset | 0.956957 |
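The key reduction above, from labeling every frame to verifying state transitions, comes from decoding the noisy per-frame classifier outputs with an HMM. A self-contained Viterbi sketch follows; the transition and emission probabilities are illustrative constants, not the values fitted in the paper.

```python
import numpy as np

def viterbi_smooth(noisy_labels, n_states, stay_prob=0.99, obs_acc=0.9):
    """Decode the most likely slowly-switching state sequence given noisy
    per-frame observations, using a sticky transition matrix."""
    T = len(noisy_labels)
    logA = np.full((n_states, n_states),
                   np.log((1 - stay_prob) / (n_states - 1)))
    np.fill_diagonal(logA, np.log(stay_prob))
    logB = np.full((n_states, n_states), np.log((1 - obs_acc) / (n_states - 1)))
    np.fill_diagonal(logB, np.log(obs_acc))
    dp = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    dp[0] = logB[:, noisy_labels[0]] - np.log(n_states)
    for t in range(1, T):
        cand = dp[t - 1][:, None] + logA          # (from_state, to_state)
        back[t] = cand.argmax(axis=0)
        dp[t] = cand.max(axis=0) + logB[:, noisy_labels[t]]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

obs = np.array([0] * 50 + [1] * 50)
obs[10], obs[60] = 1, 0                           # two classifier glitches
print(viterbi_smooth(obs, n_states=2)[8:13])      # glitches smoothed away
```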
1612.01072 | Gang Chen | Gang Chen, Yawei Li and Sargur N. Srihari | Word Recognition with Deep Conditional Random Fields | 5 pages, published in ICIP 2016. arXiv admin note: substantial text
overlap with arXiv:1412.3397 | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Recognition of handwritten words continues to be an important problem in
document analysis and recognition. Existing approaches extract hand-engineered
features from word images, which can perform poorly with new data sets.
Recently, deep learning has attracted great attention because of its ability to
learn features from raw data. Moreover, it has yielded state-of-the-art
results in classification tasks including character recognition and scene
recognition. On the other hand, word recognition is a sequential problem where
we need to model the correlation between characters. In this paper, we propose
using deep Conditional Random Fields (deep CRFs) for word recognition.
Basically, we combine CRFs with deep learning, in which deep features are
learned and sequences are labeled in a unified framework. We pre-train the deep
structure with stacked restricted Boltzmann machines (RBMs) for feature
learning and optimize the entire network with an online learning algorithm. The
proposed model was evaluated on two datasets, and seen to perform significantly
better than competitive baseline models. The source code is available at
https://github.com/ganggit/deepCRFs.
| [
{
"version": "v1",
"created": "Sun, 4 Dec 2016 05:39:42 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Chen",
"Gang",
""
],
[
"Li",
"Yawei",
""
],
[
"Srihari",
"Sargur N.",
""
]
] | TITLE: Word Recognition with Deep Conditional Random Fields
ABSTRACT: Recognition of handwritten words continues to be an important problem in
document analysis and recognition. Existing approaches extract hand-engineered
features from word images, which can perform poorly with new data sets.
Recently, deep learning has attracted great attention because of its ability to
learn features from raw data. Moreover, it has yielded state-of-the-art
results in classification tasks including character recognition and scene
recognition. On the other hand, word recognition is a sequential problem where
we need to model the correlation between characters. In this paper, we propose
using deep Conditional Random Fields (deep CRFs) for word recognition.
Basically, we combine CRFs with deep learning, in which deep features are
learned and sequences are labeled in a unified framework. We pre-train the deep
structure with stacked restricted Boltzmann machines (RBMs) for feature
learning and optimize the entire network with an online learning algorithm. The
proposed model was evaluated on two datasets, and seen to perform significantly
better than competitive baseline models. The source code is available at
https://github.com/ganggit/deepCRFs.
| no_new_dataset | 0.950549 |
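The CRF layer on top of the deep features can be made concrete with the standard linear-chain log-likelihood: log p(tags | x) = score(tags) - log Z, with log Z computed by the forward algorithm in log space. The NumPy sketch below assumes the emission scores come from the pretrained deep network; it is a generic linear-chain CRF, not the paper's exact parameterization.

```python
import numpy as np
from scipy.special import logsumexp

def crf_log_likelihood(emissions, transitions, tags):
    """Log-likelihood of one tag sequence under a linear-chain CRF.
    emissions: (T, K) per-position scores; transitions: (K, K) CRF weights."""
    T, K = emissions.shape
    score = emissions[0, tags[0]]
    for t in range(1, T):
        score += transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]
    alpha = emissions[0]                       # log-space forward messages
    for t in range(1, T):
        alpha = logsumexp(alpha[:, None] + transitions, axis=0) + emissions[t]
    return score - logsumexp(alpha)            # score(tags) - log Z

rng = np.random.default_rng(3)
em = rng.normal(size=(7, 26))                  # 7 characters, 26 letters
tr = rng.normal(size=(26, 26))
print(crf_log_likelihood(em, tr, tags=[2, 0, 19, 18, 14, 13, 4]))
```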
1612.01194 | Haroon Idrees | Khurram Soomro, Haroon Idrees, and Mubarak Shah | Online Localization and Prediction of Actions and Interactions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a person-centric and online approach to the challenging
problem of localization and prediction of actions and interactions in videos.
Typically, localization or recognition is performed in an offline manner where
all the frames in the video are processed together. This prevents timely
localization and prediction of actions and interactions - an important
consideration for many tasks including surveillance and human-machine
interaction.
In our approach, we estimate human poses at each frame and train
discriminative appearance models using the superpixels inside the pose bounding
boxes. Since the pose estimation per frame is inherently noisy, the conditional
probability of pose hypotheses at current time-step (frame) is computed using
pose estimations in the current frame and their consistency with poses in the
previous frames. Next, both the superpixel and pose-based foreground
likelihoods are used to infer the location of actors at each time through a
Conditional Random Field. The issue of visual drift is handled by updating the
appearance models, and refining poses using motion smoothness on joint
locations, in an online manner. For online prediction of action (interaction)
confidences, we propose an approach based on Structural SVM that operates on
short video segments, and is trained with the objective that confidence of an
action or interaction increases as time progresses. Lastly, we quantify the
performance of both detection and prediction together, and analyze how the
prediction accuracy varies as a function of the observed duration of an action (interaction)
at different levels of detection performance. Our experiments on several
datasets suggest that despite using only a few frames to localize actions
(interactions) at each time instant, we are able to obtain competitive results
to state-of-the-art offline methods.
| [
{
"version": "v1",
"created": "Sun, 4 Dec 2016 22:16:55 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Soomro",
"Khurram",
""
],
[
"Idrees",
"Haroon",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Online Localization and Prediction of Actions and Interactions
ABSTRACT: This paper proposes a person-centric and online approach to the challenging
problem of localization and prediction of actions and interactions in videos.
Typically, localization or recognition is performed in an offline manner where
all the frames in the video are processed together. This prevents timely
localization and prediction of actions and interactions - an important
consideration for many tasks including surveillance and human-machine
interaction.
In our approach, we estimate human poses at each frame and train
discriminative appearance models using the superpixels inside the pose bounding
boxes. Since the pose estimation per frame is inherently noisy, the conditional
probability of pose hypotheses at current time-step (frame) is computed using
pose estimations in the current frame and their consistency with poses in the
previous frames. Next, both the superpixel and pose-based foreground
likelihoods are used to infer the location of actors at each time through a
Conditional Random Field. The issue of visual drift is handled by updating the
appearance models, and refining poses using motion smoothness on joint
locations, in an online manner. For online prediction of action (interaction)
confidences, we propose an approach based on Structural SVM that operates on
short video segments, and is trained with the objective that confidence of an
action or interaction increases as time progresses. Lastly, we quantify the
performance of both detection and prediction together, and analyze how the
prediction accuracy varies as a function of the observed duration of an action (interaction)
at different levels of detection performance. Our experiments on several
datasets suggest that despite using only a few frames to localize actions
(interactions) at each time instant, we are able to obtain competitive results
to state-of-the-art offline methods.
| no_new_dataset | 0.950641 |
1612.01197 | Chen Liang | Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao | Neural Symbolic Machines: Learning Semantic Parsers on Freebase with
Weak Supervision (Short Version) | Published in NAMPI workshop at NIPS 2016. Short version of
arXiv:1611.00020 | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extending the success of deep neural networks to natural language
understanding and symbolic reasoning requires complex operations and external
memory. Recent neural program induction approaches have attempted to address
this problem, but are typically limited to differentiable memory, and
consequently cannot scale beyond small synthetic tasks. In this work, we
propose the Manager-Programmer-Computer framework, which integrates neural
networks with non-differentiable memory to support abstract, scalable and
precise operations through a friendly neural computer interface. Specifically,
we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence
neural "programmer", and a non-differentiable "computer" that is a Lisp
interpreter with code assist. To successfully apply REINFORCE for training, we
augment it with approximate gold programs found by an iterative maximum
likelihood training process. NSM is able to learn a semantic parser from weak
supervision over a large knowledge base. It achieves new state-of-the-art
performance on WebQuestionsSP, a challenging semantic parsing dataset, with
weak supervision. Compared to previous approaches, NSM is end-to-end, therefore
does not rely on feature engineering or domain specific knowledge.
| [
{
"version": "v1",
"created": "Sun, 4 Dec 2016 22:29:32 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Liang",
"Chen",
""
],
[
"Berant",
"Jonathan",
""
],
[
"Le",
"Quoc",
""
],
[
"Forbus",
"Kenneth D.",
""
],
[
"Lao",
"Ni",
""
]
] | TITLE: Neural Symbolic Machines: Learning Semantic Parsers on Freebase with
Weak Supervision (Short Version)
ABSTRACT: Extending the success of deep neural networks to natural language
understanding and symbolic reasoning requires complex operations and external
memory. Recent neural program induction approaches have attempted to address
this problem, but are typically limited to differentiable memory, and
consequently cannot scale beyond small synthetic tasks. In this work, we
propose the Manager-Programmer-Computer framework, which integrates neural
networks with non-differentiable memory to support abstract, scalable and
precise operations through a friendly neural computer interface. Specifically,
we introduce a Neural Symbolic Machine, which contains a sequence-to-sequence
neural "programmer", and a non-differentiable "computer" that is a Lisp
interpreter with code assist. To successfully apply REINFORCE for training, we
augment it with approximate gold programs found by an iterative maximum
likelihood training process. NSM is able to learn a semantic parser from weak
supervision over a large knowledge base. It achieves new state-of-the-art
performance on WebQuestionsSP, a challenging semantic parsing dataset, with
weak supervision. Compared to previous approaches, NSM is end-to-end, therefore
does not rely on feature engineering or domain specific knowledge.
| no_new_dataset | 0.942665 |
1612.01225 | Chen Liu | Chen Liu, Jiajun Wu, Pushmeet Kohli, Yasutaka Furukawa | Deep Multi-Modal Image Correspondence Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference of correspondences between images from different modalities is an
extremely important perceptual ability that enables humans to understand and
recognize cross-modal concepts. In this paper, we consider an instance of this
problem that involves matching photographs of building interiors with their
corresponding floorplan. This is a particularly challenging problem because a
floorplan, as a stylized architectural drawing, is very different in appearance
from a color photograph. Furthermore, individual photographs by themselves
depict only a part of a floorplan (e.g., kitchen, bathroom, and living room).
We propose the use of a number of different neural network architectures for
this task, which are trained and evaluated on a novel large-scale dataset of 5
million floorplan images and 80 million associated photographs. Experimental
evaluation reveals that our neural network architectures are able to identify
visual cues that result in reliable matches across these two quite different
modalities. In fact, the trained networks are able to even outperform human
subjects in several challenging image matching problems. Our result implies
that neural networks are effective at perceptual tasks that require long
periods of reasoning even for humans to solve.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 02:16:09 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Liu",
"Chen",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Kohli",
"Pushmeet",
""
],
[
"Furukawa",
"Yasutaka",
""
]
] | TITLE: Deep Multi-Modal Image Correspondence Learning
ABSTRACT: Inference of correspondences between images from different modalities is an
extremely important perceptual ability that enables humans to understand and
recognize cross-modal concepts. In this paper, we consider an instance of this
problem that involves matching photographs of building interiors with their
corresponding floorplan. This is a particularly challenging problem because a
floorplan, as a stylized architectural drawing, is very different in appearance
from a color photograph. Furthermore, individual photographs by themselves
depict only a part of a floorplan (e.g., kitchen, bathroom, and living room).
We propose the use of a number of different neural network architectures for
this task, which are trained and evaluated on a novel large-scale dataset of 5
million floorplan images and 80 million associated photographs. Experimental
evaluation reveals that our neural network architectures are able to identify
visual cues that result in reliable matches across these two quite different
modalities. In fact, the trained networks are able to even outperform human
subjects in several challenging image matching problems. Our result implies
that neural networks are effective at perceptual tasks that require long
periods of reasoning even for humans to solve.
| new_dataset | 0.963712 |
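The paper compares several architectures for photo-floorplan matching; one plausible instance is a two-branch embedding network trained with a contrastive loss, sketched below in PyTorch. The encoders, embedding size, and margin are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchMatcher(nn.Module):
    """Separate CNN encoders map photos and floorplan crops into a shared
    space where matching pairs are pulled together by a contrastive loss."""
    def __init__(self, dim=128):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
        self.photo_net, self.plan_net = branch(), branch()

    def forward(self, photo, plan):
        return (F.normalize(self.photo_net(photo), dim=1),
                F.normalize(self.plan_net(plan), dim=1))

def contrastive_loss(a, b, match, margin=0.5):
    d = (a - b).pow(2).sum(1).sqrt()
    return (match * d.pow(2) + (1 - match) * F.relu(margin - d).pow(2)).mean()

model = TwoBranchMatcher()
p, f = model(torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64))
print(contrastive_loss(p, f, torch.randint(0, 2, (8,)).float()))
```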
1612.01253 | Yen-Chang Hsu | Yen-Chang Hsu, Zhaoyang Lv, Zsolt Kira | Deep Image Category Discovery using a Transferred Similarity Function | 13 pages, 9 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically discovering image categories in unlabeled natural images is one
of the important goals of unsupervised learning. However, the task is
challenging and even human beings define visual categories based on a large
amount of prior knowledge. In this paper, we similarly utilize prior knowledge
to facilitate the discovery of image categories. We present a novel end-to-end
network to map unlabeled images to categories as a clustering network. We
propose that this network can be learned with a contrastive loss that is based
only on weak binary pair-wise constraints. Such binary constraints can be
learned from datasets in other domains as transferred similarity functions,
which mimic a simple knowledge transfer. We first evaluate our experiments on
the MNIST dataset as a proof of concept, based on predicted similarities
trained on Omniglot, showing a 99\% accuracy which significantly outperforms
clustering based approaches. Then we evaluate the discovery performance on
Cifar-10, STL-10, and ImageNet, achieving state-of-the-art accuracy and
showing that the approach scales to various large natural images.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 05:41:26 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Hsu",
"Yen-Chang",
""
],
[
"Lv",
"Zhaoyang",
""
],
[
"Kira",
"Zsolt",
""
]
] | TITLE: Deep Image Category Discovery using a Transferred Similarity Function
ABSTRACT: Automatically discovering image categories in unlabeled natural images is one
of the important goals of unsupervised learning. However, the task is
challenging and even human beings define visual categories based on a large
amount of prior knowledge. In this paper, we similarly utilize prior knowledge
to facilitate the discovery of image categories. We present a novel end-to-end
network to map unlabeled images to categories as a clustering network. We
propose that this network can be learned with a contrastive loss that is based
only on weak binary pair-wise constraints. Such binary constraints can be
learned from datasets in other domains as transferred similarity functions,
which mimic a simple knowledge transfer. We first evaluate our experiments on
the MNIST dataset as a proof of concept, based on predicted similarities
trained on Omniglot, showing a 99\% accuracy which significantly outperforms
clustering based approaches. Then we evaluate the discovery performance on
Cifar-10, STL-10, and ImageNet, achieving state-of-the-art accuracy and
showing that the approach scales to various large natural images.
| no_new_dataset | 0.949342 |
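The pairwise constraint idea above can be sketched as a loss: each image gets a softmax over K cluster slots, the inner product of two such distributions acts as a similarity in [0, 1], and binary cross-entropy pushes it toward the transferred same/different label. This is one plausible reading of the contrastive objective, not necessarily the paper's exact form.

```python
import torch
import torch.nn.functional as F

def pairwise_cluster_loss(logits_a, logits_b, same):
    """Binary cross-entropy between the inner product of two predicted
    cluster distributions and a binary same/different pair label."""
    pa, pb = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)
    sim = (pa * pb).sum(dim=1).clamp(1e-7, 1 - 1e-7)
    return F.binary_cross_entropy(sim, same)

# `same` would come from a similarity function transferred from another
# labeled dataset (e.g., Omniglot), not from labels of the target data.
la = torch.randn(16, 10, requires_grad=True)
lb = torch.randn(16, 10)
loss = pairwise_cluster_loss(la, lb, torch.randint(0, 2, (16,)).float())
loss.backward()
```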
1612.01254 | Soheil Bahrampour | Shengdong Zhang and Soheil Bahrampour and Naveen Ramakrishnan and
Mohak Shah | Deep Symbolic Representation Learning for Heterogeneous Time-series
Classification | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the problem of event classification with
multi-variate time series data consisting of heterogeneous (continuous and
categorical) variables. The complex temporal dependencies between the variables
combined with the sparsity of the data make the event classification problem
particularly challenging. Most state-of-the-art approaches address this either by
designing hand-engineered features or breaking up the problem over homogeneous
variates. In this work, we propose and compare three representation learning
algorithms over symbolized sequences which enables classification of
heterogeneous time-series data using a deep architecture. The proposed
representations are trained jointly along with the rest of the network
architecture in an end-to-end fashion that makes the learned features
discriminative for the given task. Experiments on three real-world datasets
demonstrate the effectiveness of the proposed approaches.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 05:53:47 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Zhang",
"Shengdong",
""
],
[
"Bahrampour",
"Soheil",
""
],
[
"Ramakrishnan",
"Naveen",
""
],
[
"Shah",
"Mohak",
""
]
] | TITLE: Deep Symbolic Representation Learning for Heterogeneous Time-series
Classification
ABSTRACT: In this paper, we consider the problem of event classification with
multi-variate time series data consisting of heterogeneous (continuous and
categorical) variables. The complex temporal dependencies between the variables
combined with the sparsity of the data make the event classification problem
particularly challenging. Most state-of-the-art approaches address this either by
designing hand-engineered features or breaking up the problem over homogeneous
variates. In this work, we propose and compare three representation learning
algorithms over symbolized sequences, which enable the classification of
heterogeneous time-series data using a deep architecture. The proposed
representations are trained jointly along with the rest of the network
architecture in an end-to-end fashion that makes the learned features
discriminative for the given task. Experiments on three real-world datasets
demonstrate the effectiveness of the proposed approaches.
| no_new_dataset | 0.948965 |
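The abstract leaves the symbolization scheme unspecified; a common choice for continuous variables is SAX-style discretization, sketched below: z-normalize, average over equal-length segments, then bin the averages with Gaussian quantiles. Treat it as one plausible preprocessing step, not the paper's prescribed one.

```python
import numpy as np
from scipy.stats import norm

def symbolize(series, n_segments=20, alphabet=4):
    """SAX-style symbolization: z-normalize, piecewise-aggregate (PAA),
    then bin with breakpoints that are equiprobable under N(0, 1)."""
    x = (series - series.mean()) / (series.std() + 1e-8)
    usable = len(x) // n_segments * n_segments
    paa = x[:usable].reshape(n_segments, -1).mean(axis=1)
    breaks = norm.ppf(np.linspace(0, 1, alphabet + 1)[1:-1])
    return np.digitize(paa, breaks)           # integer symbols in [0, alphabet)

t = np.linspace(0, 4 * np.pi, 400)
print(symbolize(np.sin(t) + 0.1 * np.random.randn(400)))
```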
1612.01256 | Yasutaka Furukawa | Satoshi Ikehata and Ivaylo Boyadzhiev and Qi Shan and Yasutaka
Furukawa | Panoramic Structure from Motion via Geometric Relationship Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of Structure from Motion (SfM) for indoor
panoramic image streams, extremely challenging even for the state-of-the-art
due to the lack of textures and minimal parallax. The key idea is the fusion of
single-view and multi-view reconstruction techniques via geometric relationship
detection (e.g., detecting 2D lines as coplanar in 3D). Rough geometry suffices
to perform such detection, and our approach utilizes rough surface normal
estimates from an image-to-normal deep network to discover geometric
relationships among lines. The detected relationships provide exact geometric
constraints in our line-based linear SfM formulation. A constrained linear
least-squares problem is solved to reconstruct a 3D model and camera motions,
followed by bundle adjustment. We have validated our algorithm on challenging datasets,
outperforming various state-of-the-art reconstruction techniques.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 06:24:10 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Ikehata",
"Satoshi",
""
],
[
"Boyadzhiev",
"Ivaylo",
""
],
[
"Shan",
"Qi",
""
],
[
"Furukawa",
"Yasutaka",
""
]
] | TITLE: Panoramic Structure from Motion via Geometric Relationship Detection
ABSTRACT: This paper addresses the problem of Structure from Motion (SfM) for indoor
panoramic image streams, extremely challenging even for the state-of-the-art
due to the lack of textures and minimal parallax. The key idea is the fusion of
single-view and multi-view reconstruction techniques via geometric relationship
detection (e.g., detecting 2D lines as coplanar in 3D). Rough geometry suffices
to perform such detection, and our approach utilizes rough surface normal
estimates from an image-to-normal deep network to discover geometric
relationships among lines. The detected relationships provide exact geometric
constraints in our line-based linear SfM formulation. A constrained linear
least squares is used to reconstruct a 3D model and camera motions, followed by
the bundle adjustment. We have validated our algorithm on challenging datasets,
outperforming various state-of-the-art reconstruction techniques.
| no_new_dataset | 0.950549 |
1612.01316 | Konstantinos Sechidis | Konstantinos Sechidis, Emily Turner, Paul D. Metcalfe, James
Weatherall and Gavin Brown | Ranking Biomarkers Through Mutual Information | Accepted at NIPS 2016 Workshop on Machine Learning for Health | null | null | null | stat.ML cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study information theoretic methods for ranking biomarkers. In clinical
trials there are two, closely related, types of biomarkers: predictive and
prognostic, and disentangling them is a key challenge. Our first step is to
phrase biomarker ranking in terms of optimizing an information theoretic
quantity. This formalization of the problem will enable us to derive rankings
of predictive/prognostic biomarkers, by estimating different, high dimensional,
conditional mutual information terms. To estimate these terms, we suggest
efficient low dimensional approximations, and we derive an empirical Bayes
estimator, which is suitable for small or sparse datasets. Finally, we
introduce a new visualisation tool that captures the prognostic and the
predictive strength of a set of biomarkers. We believe this representation will
prove to be a powerful tool in biomarker discovery.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 11:44:32 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Sechidis",
"Konstantinos",
""
],
[
"Turner",
"Emily",
""
],
[
"Metcalfe",
"Paul D.",
""
],
[
"Weatherall",
"James",
""
],
[
"Brown",
"Gavin",
""
]
] | TITLE: Ranking Biomarkers Through Mutual Information
ABSTRACT: We study information theoretic methods for ranking biomarkers. In clinical
trials there are two, closely related, types of biomarkers: predictive and
prognostic, and disentangling them is a key challenge. Our first step is to
phrase biomarker ranking in terms of optimizing an information theoretic
quantity. This formalization of the problem will enable us to derive rankings
of predictive/prognostic biomarkers, by estimating different, high dimensional,
conditional mutual information terms. To estimate these terms, we suggest
efficient low dimensional approximations, and we derive an empirical Bayes
estimator, which is suitable for small or sparse datasets. Finally, we
introduce a new visualisation tool that captures the prognostic and the
predictive strength of a set of biomarkers. We believe this representation will
prove to be a powerful tool in biomarker discovery.
| no_new_dataset | 0.946448 |
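The high-dimensional conditional mutual information terms above reduce, in the simplest discrete case, to I(X; Y | Z) = sum_z p(z) I(X; Y | Z = z). The NumPy sketch below uses a plug-in estimator to score a candidate biomarker X against outcome Y given treatment Z; the paper's empirical Bayes estimator, better suited to small or sparse trials, is not reproduced here.

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in MI estimate for small discrete arrays (in nats)."""
    xs, ys = np.unique(x), np.unique(y)
    p = np.array([[np.mean((x == a) & (y == b)) for b in ys] for a in xs])
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def cond_mutual_info(x, y, z):
    """I(X;Y|Z) = sum_z p(z) I(X;Y|Z=z): high values flag predictive
    biomarkers, i.e., informative about outcome given the treatment arm."""
    return sum(np.mean(z == c) * mutual_info(x[z == c], y[z == c])
               for c in np.unique(z))

rng = np.random.default_rng(4)
treat = rng.integers(0, 2, 2000)                     # treatment arm Z
marker = rng.integers(0, 2, 2000)                    # candidate biomarker X
outcome = (marker & treat) ^ (rng.random(2000) < 0.1)  # effect only if treated
print(round(cond_mutual_info(marker, outcome, treat), 3),
      round(mutual_info(marker, outcome), 3))
```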
1612.01349 | Soumi Chaki | Soumi Chaki, Akhilesh Kumar Verma, Aurobinda Routray, William K.
Mohanty, Mamata Jenamani | A One class Classifier based Framework using SVDD: Application to an
Imbalanced Geological Dataset | presented at IEEE Students Technology Symposium (TechSym), 28
February to 2 March 2014, IIT Kharagpur, India. 6 pages, 7 figures, 2 tables | null | null | null | cs.LG stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluation of a hydrocarbon reservoir requires classification of petrophysical
properties from the available dataset. However, characterization of reservoir
attributes is difficult due to the nonlinear and heterogeneous nature of the
subsurface physical properties. In this context, the present study proposes a
generalized one class classification framework based on Support Vector Data
Description (SVDD) to classify a reservoir characteristic water saturation into
two classes (Class high and Class low) from four logs, namely gamma ray,
neutron porosity, bulk density, and P-sonic, using an imbalanced dataset. A
comparison is carried out between the proposed framework and different
supervised classification algorithms in terms of g-metric means and execution
time. Experimental results show that the proposed framework outperforms other
classifiers in terms of
these performance evaluators. It is envisaged that the classification analysis
performed in this study will be useful in further reservoir modeling.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 07:54:23 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Chaki",
"Soumi",
""
],
[
"Verma",
"Akhilesh Kumar",
""
],
[
"Routray",
"Aurobinda",
""
],
[
"Mohanty",
"William K.",
""
],
[
"Jenamani",
"Mamata",
""
]
] | TITLE: A One class Classifier based Framework using SVDD: Application to an
Imbalanced Geological Dataset
ABSTRACT: Evaluation of a hydrocarbon reservoir requires classification of petrophysical
properties from the available dataset. However, characterization of reservoir
attributes is difficult due to the nonlinear and heterogeneous nature of the
subsurface physical properties. In this context, the present study proposes a
generalized one class classification framework based on Support Vector Data
Description (SVDD) to classify a reservoir characteristic water saturation into
two classes (Class high and Class low) from four logs namely gamma ray, neutron
porosity, bulk density, and P sonic using an imbalanced dataset. A comparison
is carried out among proposed framework and different supervised classification
algorithms in terms of g metric means and execution time. Experimental results
show that proposed framework has outperformed other classifiers in terms of
these performance evaluators. It is envisaged that the classification analysis
performed in this study will be useful in further reservoir modeling.
| no_new_dataset | 0.949716 |
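A minimal sketch of one-class classification for an imbalanced problem like the record above. scikit-learn's OneClassSVM (the nu-SVM formulation) is used as a stand-in for SVDD; with an RBF kernel the two formulations are closely related. Data and class sizes are synthetic assumptions.

```python
# Fit a one-class boundary on the abundant class, then flag the rare class.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X_major = rng.normal(0, 1, size=(300, 4))            # abundant class (e.g. Class low)
X_minor = rng.normal(3, 1, size=(15, 4))             # rare class (e.g. Class high)

clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X_major)
pred = clf.predict(np.vstack([X_major, X_minor]))    # +1 inside boundary, -1 outside
y_true = np.r_[np.ones(300), -np.ones(15)]
print(confusion_matrix(y_true, pred, labels=[1, -1]))
```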
1612.01356 | Cheng Zhang | Cheng Zhang, Hedvig Kjellstrom, Bo C. Bertilson | Diagnostic Prediction Using Discomfort Drawings | NIPS 2016 Workshop on Machine Learning for Health | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore the possibility of applying machine learning to make
diagnostic predictions using discomfort drawings. A discomfort drawing is an
intuitive way for patients to express discomfort and pain-related symptoms.
These drawings have proven to be an effective method to collect patient data
and make diagnostic decisions in real-life practice. A dataset from real-world
patient cases is collected for which medical experts provide diagnostic labels.
Next, we extend a factorized multimodal topic model, Inter-Battery Topic Model
(IBTM), to train a system that can make diagnostic predictions given an unseen
discomfort drawing. Experimental results show reasonable predictions of
diagnostic labels given an unseen discomfort drawing. The positive result
indicates a significant potential of machine learning to be used for parts of
the pain diagnostic process and to be a decision support system for physicians
and other health care personnel.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 14:11:20 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Zhang",
"Cheng",
""
],
[
"Kjellstrom",
"Hedvig",
""
],
[
"Bertilson",
"Bo C.",
""
]
] | TITLE: Diagnostic Prediction Using Discomfort Drawings
ABSTRACT: In this paper, we explore the possibility of applying machine learning to make
diagnostic predictions using discomfort drawings. A discomfort drawing is an
intuitive way for patients to express discomfort and pain-related symptoms.
These drawings have proven to be an effective method to collect patient data
and make diagnostic decisions in real-life practice. A dataset from real-world
patient cases is collected for which medical experts provide diagnostic labels.
Next, we extend a factorized multimodal topic model, Inter-Battery Topic Model
(IBTM), to train a system that can make diagnostic predictions given an unseen
discomfort drawing. Experimental results show reasonable predictions of
diagnostic labels given an unseen discomfort drawing. The positive result
indicates a significant potential of machine learning to be used for parts of
the pain diagnostic process and to be a decision support system for physicians
and other health care personnel.
| no_new_dataset | 0.946794 |
1612.01445 | Suleiman Yerima | BooJoong Kang, Suleiman Y. Yerima, Sakir Sezer and Kieran McLaughlin | N-gram Opcode Analysis for Android Malware Detection | null | International Journal on Cyber Situational Awareness, Vol. 1, No.
1, pp231-255 (2016) | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Android malware has been on the rise in recent years due to the increasing
popularity of Android and the proliferation of third party application markets.
Emerging Android malware families are increasingly adopting sophisticated
detection avoidance techniques and this calls for more effective approaches for
Android malware detection. Hence, in this paper we present and evaluate an
n-gram opcode feature-based approach that utilizes machine learning to
identify and categorize Android malware. This approach enables automated
feature discovery without relying on prior expert or domain knowledge for
pre-determined features. Furthermore, by using a data segmentation technique
for feature selection, our analysis is able to scale up to 10-gram opcodes. Our
experiments on a dataset of 2520 samples showed an f-measure of 98% using the
n-gram opcode based approach. We also provide empirical findings that
illustrate factors that have probable impact on the overall n-gram opcodes
performance trends.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 17:33:23 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Kang",
"BooJoong",
""
],
[
"Yerima",
"Suleiman Y.",
""
],
[
"Sezer",
"Sakir",
""
],
[
"McLaughlin",
"Kieran",
""
]
] | TITLE: N-gram Opcode Analysis for Android Malware Detection
ABSTRACT: Android malware has been on the rise in recent years due to the increasing
popularity of Android and the proliferation of third party application markets.
Emerging Android malware families are increasingly adopting sophisticated
detection avoidance techniques and this calls for more effective approaches for
Android malware detection. Hence, in this paper we present and evaluate an
n-gram opcode feature-based approach that utilizes machine learning to
identify and categorize Android malware. This approach enables automated
feature discovery without relying on prior expert or domain knowledge for
pre-determined features. Furthermore, by using a data segmentation technique
for feature selection, our analysis is able to scale up to 10-gram opcodes. Our
experiments on a dataset of 2520 samples showed an f-measure of 98% using the
n-gram opcode based approach. We also provide empirical findings that
illustrate factors that are likely to impact overall n-gram opcode
performance trends.
| no_new_dataset | 0.94366 |
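A minimal sketch of the n-gram opcode featurization described above: treat each app as a "document" of opcode tokens and count 1- to 3-grams before fitting a linear classifier. The opcode strings and labels here are toy assumptions for illustration, not real malware samples.

```python
# Bag-of-n-grams over opcode sequences, then a linear SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

apps = [
    "move invoke return move invoke",      # toy benign opcode sequence
    "const goto invoke-static goto goto",  # toy malicious opcode sequence
]
labels = [0, 1]

# token_pattern keeps whole opcode tokens; ngram_range builds 1- to 3-grams.
vec = CountVectorizer(analyzer="word", token_pattern=r"\S+", ngram_range=(1, 3))
X = vec.fit_transform(apps)
clf = LinearSVC().fit(X, labels)
print(len(vec.vocabulary_), "n-gram features;", clf.predict(X))
```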
1612.01450 | Ting Wang | Xinyang Zhang and Dashun Wang and Ting Wang | Inspiration or Preparation? Explaining Creativity in Scientific
Enterprise | Published in CIKM'16 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human creativity is the ultimate driving force behind scientific progress.
While the building blocks of innovations are often embodied in existing
knowledge, it is creativity that blends seemingly disparate ideas. Existing
studies have made significant advances in quantifying the creativity of scientific
publications by investigating their citation relationships. Yet, little is
known hitherto about the underlying mechanisms governing scientific creative
processes, largely because a paper's references, at best, only partially
reflect its authors' actual information consumption. This work represents an
initial step towards fine-grained understanding of creative processes in
scientific enterprise. Specifically, using two web-scale longitudinal datasets
(120.1 million papers and 53.5 billion web requests spanning 4 years), we
directly contrast authors' information consumption behaviors against their
knowledge products. We find that, for 59.0\% of papers across all scientific
fields, 25.7\% of their creativity can be readily explained by information
consumed by their authors. Further, by leveraging these findings, we develop a
predictive framework that accurately identifies the most critical knowledge to
fostering target scientific innovations. We believe that our framework is of
fundamental importance to the study of scientific creativity. It promotes
strategies to stimulate and potentially automate creative processes, and
provides insights towards more effective designs of information recommendation
platforms.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 17:44:20 GMT"
}
] | 2016-12-06T00:00:00 | [
[
"Zhang",
"Xinyang",
""
],
[
"Wang",
"Dashun",
""
],
[
"Wang",
"Ting",
""
]
] | TITLE: Inspiration or Preparation? Explaining Creativity in Scientific
Enterprise
ABSTRACT: Human creativity is the ultimate driving force behind scientific progress.
While the building blocks of innovations are often embodied in existing
knowledge, it is creativity that blends seemingly disparate ideas. Existing
studies have made significant advances in quantifying the creativity of scientific
publications by investigating their citation relationships. Yet, little is
known hitherto about the underlying mechanisms governing scientific creative
processes, largely because a paper's references, at best, only partially
reflect its authors' actual information consumption. This work represents an
initial step towards fine-grained understanding of creative processes in
scientific enterprise. Specifically, using two web-scale longitudinal datasets
(120.1 million papers and 53.5 billion web requests spanning 4 years), we
directly contrast authors' information consumption behaviors against their
knowledge products. We find that, for 59.0\% of papers across all scientific
fields, 25.7\% of their creativity can be readily explained by information
consumed by their authors. Further, by leveraging these findings, we develop a
predictive framework that accurately identifies the most critical knowledge to
fostering target scientific innovations. We believe that our framework is of
fundamental importance to the study of scientific creativity. It promotes
strategies to stimulate and potentially automate creative processes, and
provides insights towards more effective designs of information recommendation
platforms.
| no_new_dataset | 0.942295 |
1603.01076 | Gabriela Csurka | Gabriela Csurka, Diane Larlus, Albert Gordo and Jon Almazan | What is the right way to represent document images? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article we study the problem of document image representation based
on visual features. We propose a comprehensive experimental study that compares
three types of visual document image representations: (1) traditional so-called
shallow features, such as the RunLength and the Fisher-Vector descriptors, (2)
deep features based on Convolutional Neural Networks, and (3) features
extracted from hybrid architectures that take inspiration from the two previous
ones.
We evaluate these features in several tasks (i.e. classification, clustering,
and retrieval) and in different setups (e.g. domain transfer) using several
public and in-house datasets. Our results show that deep features generally
outperform other types of features when there is no domain shift and the new
task is closely related to the one used to train the model. However, when a
large domain or task shift is present, the Fisher-Vector shallow features
generalize better and often obtain the best results.
| [
{
"version": "v1",
"created": "Thu, 3 Mar 2016 12:46:51 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Mar 2016 17:38:52 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Dec 2016 16:38:25 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Csurka",
"Gabriela",
""
],
[
"Larlus",
"Diane",
""
],
[
"Gordo",
"Albert",
""
],
[
"Almazan",
"Jon",
""
]
] | TITLE: What is the right way to represent document images?
ABSTRACT: In this article we study the problem of document image representation based
on visual features. We propose a comprehensive experimental study that compares
three types of visual document image representations: (1) traditional so-called
shallow features, such as the RunLength and the Fisher-Vector descriptors, (2)
deep features based on Convolutional Neural Networks, and (3) features
extracted from hybrid architectures that take inspiration from the two previous
ones.
We evaluate these features in several tasks (i.e. classification, clustering,
and retrieval) and in different setups (e.g. domain transfer) using several
public and in-house datasets. Our results show that deep features generally
outperform other types of features when there is no domain shift and the new
task is closely related to the one used to train the model. However, when a
large domain or task shift is present, the Fisher-Vector shallow features
generalize better and often obtain the best results.
| no_new_dataset | 0.947624 |
1605.03389 | Markus Oberweger | Markus Oberweger, Gernot Riegler, Paul Wohlhart, Vincent Lepetit | Efficiently Creating 3D Training Data for Fine Hand Pose Estimation | added link to source https://github.com/moberweger/semi-auto-anno.
Appears in Proc. of CVPR 2016 | null | null | null | cs.CV cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While many recent hand pose estimation methods critically rely on a training
set of labelled frames, the creation of such a dataset is a challenging task
that has been overlooked so far. As a result, existing datasets are limited to
a few sequences and individuals, with limited accuracy, and this prevents these
methods from delivering their full potential. We propose a semi-automated
method for efficiently and accurately labeling each frame of a hand depth video
with the corresponding 3D locations of the joints: The user is asked to provide
only an estimate of the 2D reprojections of the visible joints in some
reference frames, which are automatically selected to minimize the labeling
work by efficiently optimizing a sub-modular loss function. We then exploit
spatial, temporal, and appearance constraints to retrieve the full 3D poses of
the hand over the complete sequence. We show that this data can be used to
train a recent state-of-the-art hand pose estimation method, leading to
increased accuracy. The code and dataset can be found on our website
https://cvarlab.icg.tugraz.at/projects/hand_detection/
| [
{
"version": "v1",
"created": "Wed, 11 May 2016 11:40:27 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Dec 2016 15:45:38 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Oberweger",
"Markus",
""
],
[
"Riegler",
"Gernot",
""
],
[
"Wohlhart",
"Paul",
""
],
[
"Lepetit",
"Vincent",
""
]
] | TITLE: Efficiently Creating 3D Training Data for Fine Hand Pose Estimation
ABSTRACT: While many recent hand pose estimation methods critically rely on a training
set of labelled frames, the creation of such a dataset is a challenging task
that has been overlooked so far. As a result, existing datasets are limited to
a few sequences and individuals, with limited accuracy, and this prevents these
methods from delivering their full potential. We propose a semi-automated
method for efficiently and accurately labeling each frame of a hand depth video
with the corresponding 3D locations of the joints: The user is asked to provide
only an estimate of the 2D reprojections of the visible joints in some
reference frames, which are automatically selected to minimize the labeling
work by efficiently optimizing a sub-modular loss function. We then exploit
spatial, temporal, and appearance constraints to retrieve the full 3D poses of
the hand over the complete sequence. We show that this data can be used to
train a recent state-of-the-art hand pose estimation method, leading to
increased accuracy. The code and dataset can be found on our website
https://cvarlab.icg.tugraz.at/projects/hand_detection/
| new_dataset | 0.590794 |
1606.00897 | Stefan Bauer | Stefan Bauer and Nicolas Carion and Peter Sch\"uffler and Thomas Fuchs
and Peter Wild and Joachim M. Buhmann | Multi-Organ Cancer Classification and Survival Analysis | null | null | null | null | q-bio.QM cs.LG q-bio.TO stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate and robust cell nuclei classification is the cornerstone for a wider
range of tasks in digital and Computational Pathology. However, most machine
learning systems require extensive labeling from expert pathologists for each
individual problem at hand, with no or limited abilities for knowledge transfer
between datasets and organ sites. In this paper we implement and evaluate a
variety of deep neural network models and model ensembles for nuclei
classification in renal cell cancer (RCC) and prostate cancer (PCa). We propose
a convolutional neural network system based on residual learning which
significantly improves over the state-of-the-art in cell nuclei classification.
Finally, we show that the combination of tissue types during training increases
not only classification accuracy but also overall survival analysis.
| [
{
"version": "v1",
"created": "Thu, 2 Jun 2016 21:09:00 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Dec 2016 20:06:14 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Bauer",
"Stefan",
""
],
[
"Carion",
"Nicolas",
""
],
[
"Schüffler",
"Peter",
""
],
[
"Fuchs",
"Thomas",
""
],
[
"Wild",
"Peter",
""
],
[
"Buhmann",
"Joachim M.",
""
]
] | TITLE: Multi-Organ Cancer Classification and Survival Analysis
ABSTRACT: Accurate and robust cell nuclei classification is the cornerstone for a wider
range of tasks in digital and Computational Pathology. However, most machine
learning systems require extensive labeling from expert pathologists for each
individual problem at hand, with no or limited abilities for knowledge transfer
between datasets and organ sites. In this paper we implement and evaluate a
variety of deep neural network models and model ensembles for nuclei
classification in renal cell cancer (RCC) and prostate cancer (PCa). We propose
a convolutional neural network system based on residual learning which
significantly improves over the state-of-the-art in cell nuclei classification.
Finally, we show that the combination of tissue types during training increases
not only classification accuracy but also overall survival analysis.
| no_new_dataset | 0.95222 |
1606.04300 | Deng Cai | Deng Cai and Hai Zhao | Neural Word Segmentation Learning for Chinese | ACL2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most previous approaches to Chinese word segmentation formalize this problem
as a character-based sequence labeling task where only contextual information
within fixed-size local windows and simple interactions between adjacent tags
can be captured. In this paper, we propose a novel neural framework which
thoroughly eliminates context windows and can utilize complete segmentation
history. Our model employs a gated combination neural network over characters
to produce distributed representations of word candidates, which are then given
to a long short-term memory (LSTM) language scoring model. Experiments on the
benchmark datasets show that, without the help of feature engineering as in most
existing approaches, our models achieve competitive or better performance than
previous state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2016 10:52:21 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Dec 2016 08:06:10 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Cai",
"Deng",
""
],
[
"Zhao",
"Hai",
""
]
] | TITLE: Neural Word Segmentation Learning for Chinese
ABSTRACT: Most previous approaches to Chinese word segmentation formalize this problem
as a character-based sequence labeling task where only contextual information
within fixed-size local windows and simple interactions between adjacent tags
can be captured. In this paper, we propose a novel neural framework which
thoroughly eliminates context windows and can utilize complete segmentation
history. Our model employs a gated combination neural network over characters
to produce distributed representations of word candidates, which are then given
to a long short-term memory (LSTM) language scoring model. Experiments on the
benchmark datasets show that, without the help of feature engineering as in most
existing approaches, our models achieve competitive or better performance than
previous state-of-the-art methods.
| no_new_dataset | 0.948298 |
1611.06962 | Karl Ni | Karl Ni, Kyle Zaragoza, Charles Foster, Carmen Carrano, Barry Chen,
Yonas Tesfaye, Alex Gude | Sampled Image Tagging and Retrieval Methods on User Generated Content | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Traditional image tagging and retrieval algorithms have limited value as a
result of being trained with heavily curated datasets. These limitations are
most evident when arbitrary search words are used that do not intersect with
training set labels. Weak labels from user generated content (UGC) found in the
wild (e.g., Google Photos, FlickR, etc.) have an almost unlimited number of
unique words in the metadata tags. Prior work on word embeddings successfully
leveraged unstructured text with large vocabularies, and our proposed method
seeks to apply similar cost functions to open source imagery. Specifically, we
train a deep learning image tagging and retrieval system on large-scale user
generated content (UGC) using sampling methods and joint optimization of word
embeddings. By using the Yahoo! FlickR Creative Commons (YFCC100M) dataset,
such an approach builds robustness to common unstructured data issues that
include but are not limited to irrelevant tags, misspellings, multiple
languages, polysemy, and tag imbalance. As a result, the final proposed
algorithm will not only yield comparable results to state of the art in
conventional image tagging, but will enable new capability to train algorithms
on large-scale unstructured text in the YFCC100M dataset and outperform cited
work in zero-shot capability.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2016 19:24:58 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Nov 2016 01:32:21 GMT"
},
{
"version": "v3",
"created": "Fri, 2 Dec 2016 20:52:40 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Ni",
"Karl",
""
],
[
"Zaragoza",
"Kyle",
""
],
[
"Foster",
"Charles",
""
],
[
"Carrano",
"Carmen",
""
],
[
"Chen",
"Barry",
""
],
[
"Tesfaye",
"Yonas",
""
],
[
"Gude",
"Alex",
""
]
] | TITLE: Sampled Image Tagging and Retrieval Methods on User Generated Content
ABSTRACT: Traditional image tagging and retrieval algorithms have limited value as a
result of being trained with heavily curated datasets. These limitations are
most evident when arbitrary search words are used that do not intersect with
training set labels. Weak labels from user generated content (UGC) found in the
wild (e.g., Google Photos, FlickR, etc.) have an almost unlimited number of
unique words in the metadata tags. Prior work on word embeddings successfully
leveraged unstructured text with large vocabularies, and our proposed method
seeks to apply similar cost functions to open source imagery. Specifically, we
train a deep learning image tagging and retrieval system on large-scale user
generated content (UGC) using sampling methods and joint optimization of word
embeddings. By using the Yahoo! FlickR Creative Commons (YFCC100M) dataset,
such an approach builds robustness to common unstructured data issues that
include but are not limited to irrelevant tags, misspellings, multiple
languages, polysemy, and tag imbalance. As a result, the final proposed
algorithm will not only yield comparable results to state of the art in
conventional image tagging, but will enable new capability to train algorithms
on large-scale unstructured text in the YFCC100M dataset and outperform cited
work in zero-shot capability.
| no_new_dataset | 0.94545 |
1612.00478 | Noranart Vesdapunt | Jonathan Shen, Noranart Vesdapunt, Vishnu N. Boddeti, Kris M. Kitani | In Teacher We Trust: Learning Compressed Models for Pedestrian Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks continue to advance the state-of-the-art
in many domains as they grow bigger and more complex. It has been observed that
many of the parameters of a large network are redundant, allowing for the
possibility of learning a smaller network that mimics the outputs of the large
network through a process called Knowledge Distillation. We show, however, that
standard Knowledge Distillation is not effective for learning small models for
the task of pedestrian detection. To improve this process, we introduce a
higher-dimensional hint layer to increase information flow. We also estimate
the variance in the outputs of the large network and propose a loss function to
incorporate this uncertainty. Finally, we attempt to boost the complexity of
the small network without increasing its size by using as input hand-designed
features that have been demonstrated to be effective for pedestrian detection.
We succeed in training a model that contains $400\times$ fewer parameters than
the large network while outperforming AlexNet on the Caltech Pedestrian
Dataset.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 21:37:19 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Shen",
"Jonathan",
""
],
[
"Vesdapunt",
"Noranart",
""
],
[
"Boddeti",
"Vishnu N.",
""
],
[
"Kitani",
"Kris M.",
""
]
] | TITLE: In Teacher We Trust: Learning Compressed Models for Pedestrian Detection
ABSTRACT: Deep convolutional neural networks continue to advance the state-of-the-art
in many domains as they grow bigger and more complex. It has been observed that
many of the parameters of a large network are redundant, allowing for the
possibility of learning a smaller network that mimics the outputs of the large
network through a process called Knowledge Distillation. We show, however, that
standard Knowledge Distillation is not effective for learning small models for
the task of pedestrian detection. To improve this process, we introduce a
higher-dimensional hint layer to increase information flow. We also estimate
the variance in the outputs of the large network and propose a loss function to
incorporate this uncertainty. Finally, we attempt to boost the complexity of
the small network without increasing its size by using as input hand-designed
features that have been demonstrated to be effective for pedestrian detection.
We succeed in training a model that contains $400\times$ fewer parameters than
the large network while outperforming AlexNet on the Caltech Pedestrian
Dataset.
| no_new_dataset | 0.946101 |
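A minimal sketch of the temperature-scaled (Hinton-style) distillation loss that the record above builds on; the hint-layer and variance-weighted extensions the abstract describes are not reproduced here, and all tensor shapes are illustrative.

```python
# Blend softened teacher targets with the hard-label cross-entropy.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale gradients, as is standard
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

s = torch.randn(8, 2, requires_grad=True)       # student logits (pedestrian / not)
t = torch.randn(8, 2)                           # frozen teacher logits
y = torch.randint(0, 2, (8,))
distillation_loss(s, t, y).backward()
```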
1612.00500 | Ruohan Gao | Ruohan Gao, Dinesh Jayaraman, Kristen Grauman | Object-Centric Representation Learning from Unlabeled Videos | In Proceedings of the Asian Conference on Computer Vision (ACCV),
2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised (pre-)training currently yields state-of-the-art performance for
representation learning for visual recognition, yet it comes at the cost of (1)
intensive manual annotations and (2) an inherent restriction in the scope of
data relevant for learning. In this work, we explore unsupervised feature
learning from unlabeled video. We introduce a novel object-centric approach to
temporal coherence that encourages similar representations to be learned for
object-like regions segmented from nearby frames. Our framework relies on a
Siamese-triplet network to train a deep convolutional neural network (CNN)
representation. Compared to existing temporal coherence methods, our idea has
the advantage of lightweight preprocessing of the unlabeled video (no tracking
required) while still being able to extract object-level regions from which to
learn invariances. Furthermore, as we show in results on several standard
datasets, our method typically achieves substantial accuracy gains over
competing unsupervised methods for image classification and retrieval tasks.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 22:36:20 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Gao",
"Ruohan",
""
],
[
"Jayaraman",
"Dinesh",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Object-Centric Representation Learning from Unlabeled Videos
ABSTRACT: Supervised (pre-)training currently yields state-of-the-art performance for
representation learning for visual recognition, yet it comes at the cost of (1)
intensive manual annotations and (2) an inherent restriction in the scope of
data relevant for learning. In this work, we explore unsupervised feature
learning from unlabeled video. We introduce a novel object-centric approach to
temporal coherence that encourages similar representations to be learned for
object-like regions segmented from nearby frames. Our framework relies on a
Siamese-triplet network to train a deep convolutional neural network (CNN)
representation. Compared to existing temporal coherence methods, our idea has
the advantage of lightweight preprocessing of the unlabeled video (no tracking
required) while still being able to extract object-level regions from which to
learn invariances. Furthermore, as we show in results on several standard
datasets, our method typically achieves substantial accuracy gains over
competing unsupervised methods for image classification and retrieval tasks.
| no_new_dataset | 0.949902 |
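A minimal sketch of the Siamese-triplet objective behind the record above: embeddings of object-like regions from nearby frames (anchor/positive) are pulled together while a region from an unrelated frame (negative) is pushed away. The random tensors stand in for CNN region embeddings; sizes and margin are assumptions.

```python
# Triplet margin objective for temporal coherence over region embeddings.
import torch
import torch.nn.functional as F

def temporal_triplet_loss(anchor, positive, negative, margin=0.5):
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

emb = torch.nn.Linear(512, 128)                 # toy embedding head
a = emb(torch.randn(16, 512))                   # region at frame t
p = emb(torch.randn(16, 512))                   # same object, frame t+1
n = emb(torch.randn(16, 512))                   # region from a distant frame
loss = temporal_triplet_loss(a, p, n)
loss.backward()
```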
1612.00542 | Daniel L\'evy | Daniel L\'evy, Arzav Jain | Breast Mass Classification from Mammograms using Deep Convolutional
Neural Networks | NIPS 2016 ML4HC Workshop | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mammography is the most widely used method to screen breast cancer. Because
of its mostly manual nature, variability in mass appearance, and low
signal-to-noise ratio, a significant number of breast masses are missed or
misdiagnosed. In this work, we present how Convolutional Neural Networks can be
used to directly classify pre-segmented breast masses in mammograms as benign
or malignant, using a combination of transfer learning, careful pre-processing
and data augmentation to overcome limited training data. We achieve
state-of-the-art results on the DDSM dataset, surpassing human performance, and
show interpretability of our model.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 02:06:15 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Lévy",
"Daniel",
""
],
[
"Jain",
"Arzav",
""
]
] | TITLE: Breast Mass Classification from Mammograms using Deep Convolutional
Neural Networks
ABSTRACT: Mammography is the most widely used method to screen breast cancer. Because
of its mostly manual nature, variability in mass appearance, and low
signal-to-noise ratio, a significant number of breast masses are missed or
misdiagnosed. In this work, we present how Convolutional Neural Networks can be
used to directly classify pre-segmented breast masses in mammograms as benign
or malignant, using a combination of transfer learning, careful pre-processing
and data augmentation to overcome limited training data. We achieve
state-of-the-art results on the DDSM dataset, surpassing human performance, and
show interpretability of our model.
| no_new_dataset | 0.955775 |
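A minimal sketch of the transfer-learning recipe named in the record above: start from an ImageNet-pretrained CNN and retrain only a new final layer for the benign/malignant decision. The resnet18 backbone, the torchvision weights string (requires torchvision >= 0.13), and the random stand-in patches are illustrative assumptions, not the paper's configuration.

```python
# Freeze pretrained features, replace the classification head, fine-tune it.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                     # keep pretrained features fixed
model.fc = nn.Linear(model.fc.in_features, 2)   # new benign/malignant head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
x = torch.randn(4, 3, 224, 224)                 # stand-in mammogram patches
y = torch.randint(0, 2, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```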
1612.00585 | Soumi Chaki | Soumi Chaki, Aurobinda Routray, William K. Mohanty, Mamata Jenamani | Development of a hybrid learning system based on SVM, ANFIS and domain
knowledge: DKFIS | 6 pages, 5 figures, 3 tables. Presented at Indicon 2015 | null | null | null | cs.LG cs.CE stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the development of a hybrid learning system based on
Support Vector Machines (SVM), Adaptive Neuro-Fuzzy Inference System (ANFIS)
and domain knowledge to solve a prediction problem. The proposed two-stage Domain
Knowledge based Fuzzy Information System (DKFIS) improves the prediction
accuracy attained by ANFIS alone. The proposed framework has been implemented
on a noisy and incomplete dataset acquired from a hydrocarbon field located in
the western part of India. Here, oil saturation has been predicted from four
different well logs, i.e. gamma ray, resistivity, density, and clay volume. In
the first stage, depending on whether the oil saturation level is zero (or
near zero) or non-zero, the input vector is classified into two classes (Class
0 and Class 1) using SVM. The classification results have been further
fine-tuned by applying expert knowledge based on the relationship between the
predictor variables, i.e. well logs, and the target variable, oil saturation.
Second, an ANFIS is designed to predict non-zero (Class 1) oil saturation
values from predictor logs. The predicted output has been further refined
based on expert knowledge. It is apparent from the experimental results that
the expert intervention with qualitative judgment at each stage has rendered
the prediction within feasible and realistic ranges. The performance analysis
of the prediction in terms of four performance metrics, namely correlation
coefficient (CC), root mean square error (RMSE), absolute error mean (AEM),
and scatter index (SI), has
established DKFIS as a useful tool for reservoir characterization.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 07:56:23 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Chaki",
"Soumi",
""
],
[
"Routray",
"Aurobinda",
""
],
[
"Mohanty",
"William K.",
""
],
[
"Jenamani",
"Mamata",
""
]
] | TITLE: Development of a hybrid learning system based on SVM, ANFIS and domain
knowledge: DKFIS
ABSTRACT: This paper presents the development of a hybrid learning system based on
Support Vector Machines (SVM), Adaptive Neuro-Fuzzy Inference System (ANFIS)
and domain knowledge to solve a prediction problem. The proposed two-stage Domain
Knowledge based Fuzzy Information System (DKFIS) improves the prediction
accuracy attained by ANFIS alone. The proposed framework has been implemented
on a noisy and incomplete dataset acquired from a hydrocarbon field located in
the western part of India. Here, oil saturation has been predicted from four
different well logs, i.e. gamma ray, resistivity, density, and clay volume. In
the first stage, depending on whether the oil saturation level is zero (or
near zero) or non-zero, the input vector is classified into two classes (Class
0 and Class 1) using SVM. The classification results have been further
fine-tuned by applying expert knowledge based on the relationship between the
predictor variables, i.e. well logs, and the target variable, oil saturation.
Second, an ANFIS is designed to predict non-zero (Class 1) oil saturation
values from predictor logs. The predicted output has been further refined
based on expert knowledge. It is apparent from the experimental results that
the expert intervention with qualitative judgment at each stage has rendered
the prediction within feasible and realistic ranges. The performance analysis
of the prediction in terms of four performance metrics, namely correlation
coefficient (CC), root mean square error (RMSE), absolute error mean (AEM),
and scatter index (SI), has
established DKFIS as a useful tool for reservoir characterization.
| no_new_dataset | 0.947769 |
1612.00596 | Li Cheng | Yu Zhang, Chi Xu, Li Cheng | Learning to Search on Manifolds for 3D Pose Estimation of Articulated
Objects | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on the challenging problem of 3D pose estimation of a
diverse spectrum of articulated objects from single depth images. A novel
structured prediction approach is considered, where 3D poses are represented as
skeletal models that naturally operate on manifolds. Given an input depth
image, the problem of predicting the most proper articulation of underlying
skeletal model is thus formulated as sequentially searching for the optimal
skeletal configuration. This is subsequently addressed by convolutional neural
nets trained end-to-end to render sequential prediction of the joint locations
as regressing a set of tangent vectors of the underlying manifolds. Our
approach is examined on various articulated objects including human hand,
mouse, and fish benchmark datasets. Empirically it is shown to deliver highly
competitive performance with respect to the state of the art, while operating
in real-time (over 30 FPS).
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 08:54:28 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Zhang",
"Yu",
""
],
[
"Xu",
"Chi",
""
],
[
"Cheng",
"Li",
""
]
] | TITLE: Learning to Search on Manifolds for 3D Pose Estimation of Articulated
Objects
ABSTRACT: This paper focuses on the challenging problem of 3D pose estimation of a
diverse spectrum of articulated objects from single depth images. A novel
structured prediction approach is considered, where 3D poses are represented as
skeletal models that naturally operate on manifolds. Given an input depth
image, the problem of predicting the most proper articulation of underlying
skeletal model is thus formulated as sequentially searching for the optimal
skeletal configuration. This is subsequently addressed by convolutional neural
nets trained end-to-end to render sequential prediction of the joint locations
as regressing a set of tangent vectors of the underlying manifolds. Our
approach is examined on various articulated objects including human hand,
mouse, and fish benchmark datasets. Empirically it is shown to deliver highly
competitive performance with respect to the state of the art, while operating
in real-time (over 30 FPS).
| no_new_dataset | 0.947284 |
1612.00606 | Li Yi | Li Yi, Hao Su, Xingwen Guo, Leonidas Guibas | SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the problem of semantic annotation on 3D models that
are represented as shape graphs. A functional view is taken to represent
localized information on graphs, so that annotations such as part segment or
keypoint are nothing but 0-1 indicator vertex functions. Compared with images
that are 2D grids, shape graphs are irregular and non-isomorphic data
structures. To enable the prediction of vertex functions on them by
convolutional neural networks, we resort to spectral CNN method that enables
weight sharing by parameterizing kernels in the spectral domain spanned by
graph laplacian eigenbases. Under this setting, our network, named SyncSpecCNN,
strive to overcome two key challenges: how to share coefficients and conduct
multi-scale analysis in different parts of the graph for a single shape, and
how to share information across related but different shapes that may be
represented by very different graphs. Towards these goals, we introduce a
spectral parameterization of dilated convolutional kernels and a spectral
transformer network. Experimentally we tested our SyncSpecCNN on various tasks,
including 3D shape part segmentation and 3D keypoint prediction.
State-of-the-art performance has been achieved on all benchmark datasets.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 09:27:34 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Yi",
"Li",
""
],
[
"Su",
"Hao",
""
],
[
"Guo",
"Xingwen",
""
],
[
"Guibas",
"Leonidas",
""
]
] | TITLE: SyncSpecCNN: Synchronized Spectral CNN for 3D Shape Segmentation
ABSTRACT: In this paper, we study the problem of semantic annotation on 3D models that
are represented as shape graphs. A functional view is taken to represent
localized information on graphs, so that annotations such as part segment or
keypoint are nothing but 0-1 indicator vertex functions. Compared with images
that are 2D grids, shape graphs are irregular and non-isomorphic data
structures. To enable the prediction of vertex functions on them by
convolutional neural networks, we resort to the spectral CNN method that enables
weight sharing by parameterizing kernels in the spectral domain spanned by
graph Laplacian eigenbases. Under this setting, our network, named SyncSpecCNN,
strives to overcome two key challenges: how to share coefficients and conduct
multi-scale analysis in different parts of the graph for a single shape, and
how to share information across related but different shapes that may be
represented by very different graphs. Towards these goals, we introduce a
spectral parameterization of dilated convolutional kernels and a spectral
transformer network. Experimentally we tested our SyncSpecCNN on various tasks,
including 3D shape part segmentation and 3D keypoint prediction.
State-of-the-art performance has been achieved on all benchmark datasets.
| no_new_dataset | 0.946843 |
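A minimal sketch of the basic operation underlying spectral CNNs such as the record above: parameterize a filter in the graph-Laplacian eigenbasis and apply it to a vertex function. The toy graph, signal, and filter coefficients are assumptions; in a network the coefficients would be learned.

```python
# Spectral filtering of a vertex signal via the graph Fourier transform.
import numpy as np

def spectral_filter(W, f, theta):
    """W: symmetric adjacency, f: vertex signal, theta: spectral coefficients."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # combinatorial graph Laplacian
    lam, U = np.linalg.eigh(L)              # eigenbasis (graph Fourier modes)
    f_hat = U.T @ f                         # forward graph Fourier transform
    return U @ (theta * f_hat)              # filter spectrally, transform back

W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # toy 3-vertex shape graph
f = np.array([1.0, 0.0, -1.0])                           # indicator-like vertex function
theta = np.exp(-0.5 * np.linspace(0, 2, 3))              # learnable in a spectral CNN
print(spectral_filter(W, f, theta))
```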
1612.00611 | Yinchong Yang | Yinchong Yang, Peter A. Fasching, Markus Wallwiener, Tanja N. Fehm,
Sara Y. Brucker, Volker Tresp | Predictive Clinical Decision Support System with RNN Encoding and Tensor
Decoding | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the introduction of the Electric Health Records, large amounts of
digital data become available for analysis and decision support. When
physicians are prescribing treatments to a patient, they need to consider a
large range of data variety and volume, making decisions increasingly complex.
Machine learning based Clinical Decision Support systems can be a solution to
the data challenges. In this work we focus on a class of decision support in
which the physicians' decision is directly predicted. Concretely, the model
would assign higher probabilities to decisions that it presumes the physician
is more likely to make. Thus, the CDS system can provide physicians with
rational recommendations. We also address the problem of correlation in target
features: Often a physician is required to make multiple (sub-)decisions in a
block, and that these decisions are mutually dependent. We propose a solution
to the target correlation problem using a tensor factorization model. In order
to handle the patients' historical information as sequential data, we apply the
so-called Encoder-Decoder-Framework which is based on Recurrent Neural Networks
(RNN) as encoders and a tensor factorization model as a decoder, a combination
which is novel in machine learning. In experiments with real-world datasets
we show that the proposed model does achieve better prediction performance.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 10:03:09 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Yang",
"Yinchong",
""
],
[
"Fasching",
"Peter A.",
""
],
[
"Wallwiener",
"Markus",
""
],
[
"Fehm",
"Tanja N.",
""
],
[
"Brucker",
"Sara Y.",
""
],
[
"Tresp",
"Volker",
""
]
] | TITLE: Predictive Clinical Decision Support System with RNN Encoding and Tensor
Decoding
ABSTRACT: With the introduction of Electronic Health Records, large amounts of
digital data become available for analysis and decision support. When
physicians are prescribing treatments to a patient, they need to consider a
large range of data variety and volume, making decisions increasingly complex.
Machine learning based Clinical Decision Support systems can be a solution to
the data challenges. In this work we focus on a class of decision support in
which the physicians' decision is directly predicted. Concretely, the model
would assign higher probabilities to decisions that it presumes the physician
is more likely to make. Thus, the CDS system can provide physicians with
rational recommendations. We also address the problem of correlation in target
features: Often a physician is required to make multiple (sub-)decisions in a
block, and that these decisions are mutually dependent. We propose a solution
to the target correlation problem using a tensor factorization model. In order
to handle the patients' historical information as sequential data, we apply the
so-called Encoder-Decoder-Framework which is based on Recurrent Neural Networks
(RNN) as encoders and a tensor factorization model as a decoder, a combination
which is novel in machine learning. In experiments with real-world datasets
we show that the proposed model does achieve better prediction performance.
| no_new_dataset | 0.945551 |
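A minimal sketch of the combination named in the record above: an RNN encodes the patient's visit history into a state vector, and a low-rank (tensor-factorization-style) decoder jointly scores several mutually dependent sub-decisions. All sizes, names, and the specific factorization are illustrative assumptions, not the paper's configuration.

```python
# GRU encoder over visit sequences + low-rank decoder for correlated decisions.
import torch
import torch.nn as nn

class RNNTensorDecoder(nn.Module):
    def __init__(self, n_feat=20, hidden=32, n_decisions=3, n_options=4, rank=8):
        super().__init__()
        self.encoder = nn.GRU(n_feat, hidden, batch_first=True)
        self.U = nn.Linear(hidden, rank, bias=False)          # patient factor
        self.V = nn.Parameter(torch.randn(n_decisions, n_options, rank))

    def forward(self, visits):                                # (B, T, n_feat)
        _, h = self.encoder(visits)                           # h: (1, B, hidden)
        u = self.U(h.squeeze(0))                              # (B, rank)
        return torch.einsum("br,dor->bdo", u, self.V)         # joint decision logits

model = RNNTensorDecoder()
logits = model(torch.randn(5, 7, 20))                         # 5 patients, 7 visits
print(logits.shape)                                           # (5, 3, 4)
```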
1612.00637 | Nurjahan Begum | Nurjahan Begum, Liudmila Ulanova, Hoang Anh Dau, Jun Wang and Eamonn
Keogh | A General Framework for Density Based Time Series Clustering Exploiting
a Novel Admissible Pruning Strategy | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time Series Clustering is an important subroutine in many higher-level data
mining analyses, including data editing for classifiers, summarization, and
outlier detection. It is well known that for similarity search the superiority
of Dynamic Time Warping (DTW) over Euclidean distance gradually diminishes as
we consider ever larger datasets. However, as we shall show, the same is not
true for clustering. Clustering time series under DTW remains a computationally
expensive operation. In this work, we address this issue in two ways. We
propose a novel pruning strategy that exploits both the upper and lower bounds
to prune off a very large fraction of the expensive distance calculations. This
pruning strategy is admissible and gives us provably identical results to the
brute force algorithm, but is at least an order of magnitude faster. For
datasets where even this level of speedup is inadequate, we show that we can
use a simple heuristic to order the unavoidable calculations in a
most-useful-first ordering, thus casting the clustering into an anytime
framework. We demonstrate the utility of our ideas with both single and
multidimensional case studies in the domains of astronomy, speech physiology,
medicine and entomology. In addition, we show the generality of our clustering
framework to other domains by efficiently obtaining semantically significant
clusters in protein sequences using the Edit Distance, the discrete data
analogue of DTW.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 11:27:44 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Begum",
"Nurjahan",
""
],
[
"Ulanova",
"Liudmila",
""
],
[
"Dau",
"Hoang Anh",
""
],
[
"Wang",
"Jun",
""
],
[
"Keogh",
"Eamonn",
""
]
] | TITLE: A General Framework for Density Based Time Series Clustering Exploiting
a Novel Admissible Pruning Strategy
ABSTRACT: Time Series Clustering is an important subroutine in many higher-level data
mining analyses, including data editing for classifiers, summarization, and
outlier detection. It is well known that for similarity search the superiority
of Dynamic Time Warping (DTW) over Euclidean distance gradually diminishes as
we consider ever larger datasets. However, as we shall show, the same is not
true for clustering. Clustering time series under DTW remains a computationally
expensive operation. In this work, we address this issue in two ways. We
propose a novel pruning strategy that exploits both the upper and lower bounds
to prune off a very large fraction of the expensive distance calculations. This
pruning strategy is admissible and gives us provably identical results to the
brute force algorithm, but is at least an order of magnitude faster. For
datasets where even this level of speedup is inadequate, we show that we can
use a simple heuristic to order the unavoidable calculations in a
most-useful-first ordering, thus casting the clustering into an anytime
framework. We demonstrate the utility of our ideas with both single and
multidimensional case studies in the domains of astronomy, speech physiology,
medicine and entomology. In addition, we show the generality of our clustering
framework to other domains by efficiently obtaining semantically significant
clusters in protein sequences using the Edit Distance, the discrete data
analogue of DTW.
| no_new_dataset | 0.946399 |
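A minimal sketch of the kind of admissible pruning the record above relies on: a cheap lower bound on DTW (here the classic Keogh-style envelope bound) lets the expensive DTW computation be skipped whenever the bound already exceeds the current best distance, without changing the result. Series and window radius are toy assumptions.

```python
# Lower-bound-based pruning for DTW distance computations.
import numpy as np

def lb_keogh(q, c, r):
    """Envelope lower bound on DTW(q, c) under a warping window of radius r."""
    lb = 0.0
    for i, qi in enumerate(q):
        lo, hi = max(0, i - r), min(len(c), i + r + 1)
        u, l = c[lo:hi].max(), c[lo:hi].min()     # local envelope of the candidate
        if qi > u:
            lb += (qi - u) ** 2
        elif qi < l:
            lb += (qi - l) ** 2
    return np.sqrt(lb)

q = np.sin(np.linspace(0, 6, 64))
c = np.cos(np.linspace(0, 6, 64))
best_so_far = 1.0
if lb_keogh(q, c, r=5) >= best_so_far:
    print("pruned: true DTW cannot beat the current best")
```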
1612.00671 | Tirtharaj Dash | Siddharth Dinesh, Tirtharaj Dash | Reliable Evaluation of Neural Network for Multiclass Classification of
Real-world Data | null | null | null | TR-2016-STUDY-1 | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a systematic evaluation of Neural Network (NN) for
classification of real-world data. In the field of machine learning, it is
often seen that a single parameter, 'predictive accuracy', is used for
evaluating the performance of a classifier model. However, this parameter
might not be considered reliable given a dataset with a very high level of
skewness. To demonstrate such behavior, seven different types of datasets have
been used to evaluate a Multilayer Perceptron (MLP) using twelve (12) different
parameters, which include micro- and macro-level estimation. In the present
study, the most common prediction problem, 'multiclass' classification, has
been considered. The results obtained for the different parameters on each
dataset demonstrate interesting findings that support the usability of this
set of performance evaluation parameters.
| [
{
"version": "v1",
"created": "Wed, 30 Nov 2016 19:58:44 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Dinesh",
"Siddharth",
""
],
[
"Dash",
"Tirtharaj",
""
]
] | TITLE: Reliable Evaluation of Neural Network for Multiclass Classification of
Real-world Data
ABSTRACT: This paper presents a systematic evaluation of Neural Network (NN) for
classification of real-world data. In the field of machine learning, it is
often seen that a single parameter, 'predictive accuracy', is used for
evaluating the performance of a classifier model. However, this parameter
might not be considered reliable given a dataset with a very high level of
skewness. To demonstrate such behavior, seven different types of datasets have
been used to evaluate a Multilayer Perceptron (MLP) using twelve (12) different
parameters, which include micro- and macro-level estimation. In the present
study, the most common prediction problem, 'multiclass' classification, has
been considered. The results obtained for the different parameters on each
dataset demonstrate interesting findings that support the usability of this
set of performance evaluation parameters.
| no_new_dataset | 0.949059 |
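A minimal sketch of the point the record above makes about skewed data: on an imbalanced multiclass problem, raw accuracy can look healthy while macro-averaged metrics expose a classifier that ignores minority classes. The toy label distribution is an assumption for illustration.

```python
# Accuracy vs micro/macro F1 on a heavily skewed multiclass problem.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 90 + [1] * 5 + [2] * 5          # skewed class counts
y_pred = [0] * 100                              # classifier that ignores minorities

print("accuracy :", accuracy_score(y_true, y_pred))                       # 0.90
print("micro F1 :", f1_score(y_true, y_pred, average="micro"))            # 0.90
print("macro F1 :", f1_score(y_true, y_pred, average="macro",
                             zero_division=0))                            # ~0.32
```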
1612.00799 | David V\'azquez | David V\'azquez, Jorge Bernal, F. Javier S\'anchez, Gloria
Fern\'andez-Esparrach, Antonio M. L\'opez, Adriana Romero, Michal Drozdzal
and Aaron Courville | A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Colorectal cancer (CRC) is the third leading cause of cancer death worldwide.
Currently, the standard approach to reduce CRC-related mortality is to perform
regular screening in search of polyps, and colonoscopy is the screening tool of
choice. The main limitations of this screening procedure are polyp miss-rate
and inability to perform visual assessment of polyp malignancy. These drawbacks
can be reduced by designing Decision Support Systems (DSS) aiming to help
clinicians in the different stages of the procedure by providing endoluminal
scene segmentation. Thus, in this paper, we introduce an extended benchmark of
colonoscopy images, with the hope of establishing a new strong benchmark for
colonoscopy image analysis research. We provide new baselines on this dataset
by training standard fully convolutional networks (FCN) for semantic
segmentation and significantly outperforming, without any further
post-processing, prior results in endoluminal scene segmentation.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 19:25:44 GMT"
}
] | 2016-12-05T00:00:00 | [
[
"Vázquez",
"David",
""
],
[
"Bernal",
"Jorge",
""
],
[
"Sánchez",
"F. Javier",
""
],
[
"Fernández-Esparrach",
"Gloria",
""
],
[
"López",
"Antonio M.",
""
],
[
"Romero",
"Adriana",
""
],
[
"Drozdzal",
"Michal",
""
],
[
"Courville",
"Aaron",
""
]
] | TITLE: A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images
ABSTRACT: Colorectal cancer (CRC) is the third leading cause of cancer death worldwide.
Currently, the standard approach to reduce CRC-related mortality is to perform
regular screening in search of polyps, and colonoscopy is the screening tool of
choice. The main limitations of this screening procedure are polyp miss-rate
and inability to perform visual assessment of polyp malignancy. These drawbacks
can be reduced by designing Decision Support Systems (DSS) aiming to help
clinicians in the different stages of the procedure by providing endoluminal
scene segmentation. Thus, in this paper, we introduce an extended benchmark of
colonoscopy images, with the hope of establishing a new strong benchmark for
colonoscopy image analysis research. We provide new baselines on this dataset
by training standard fully convolutional networks (FCN) for semantic
segmentation and significantly outperforming, without any further
post-processing, prior results in endoluminal scene segmentation.
| new_dataset | 0.581957 |
1610.07258 | Zhiguang Wang | Zhiguang Wang, Wei Song, Lu Liu, Fan Zhang, Junxiao Xue, Yangdong Ye,
Ming Fan, Mingliang Xu | Representation Learning with Deconvolution for Multivariate Time Series
Classification and Visualization | arXiv admin note: text overlap with arXiv:1505.04366 by other authors | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new model based on the deconvolutional networks and SAX
discretization to learn the representation for multivariate time series.
Deconvolutional networks fully exploit the powerful
expressiveness of deep neural networks in the manner of unsupervised learning.
We design a network structure specifically to capture the cross-channel
correlation with deconvolution, forcing the pooling operation to perform the
dimension reduction along each position in the individual channel.
Discretization based on Symbolic Aggregate Approximation is applied on the
feature vectors to further extract the bag of features. We show how this
representation and bag of features help with classification. A full comparison
with the sequence distance based approach is provided to demonstrate the
effectiveness of our approach on the standard datasets. We further build the
Markov matrix from the discretized representation from the deconvolution to
visualize the time series as complex networks, which show more class-specific
statistical properties and clear structures with respect to different labels.
| [
{
"version": "v1",
"created": "Mon, 24 Oct 2016 01:53:12 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Oct 2016 00:17:45 GMT"
},
{
"version": "v3",
"created": "Sat, 26 Nov 2016 21:02:49 GMT"
}
] | 2016-12-04T00:00:00 | [
[
"Wang",
"Zhiguang",
""
],
[
"Song",
"Wei",
""
],
[
"Liu",
"Lu",
""
],
[
"Zhang",
"Fan",
""
],
[
"Xue",
"Junxiao",
""
],
[
"Ye",
"Yangdong",
""
],
[
"Fan",
"Ming",
""
],
[
"Xu",
"Mingliang",
""
]
] | TITLE: Representation Learning with Deconvolution for Multivariate Time Series
Classification and Visualization
ABSTRACT: We propose a new model based on the deconvolutional networks and SAX
discretization to learn the representation for multivariate time series.
Deconvolutional networks fully exploit the powerful
expressiveness of deep neural networks in the manner of unsupervised learning.
We design a network structure specifically to capture the cross-channel
correlation with deconvolution, forcing the pooling operation to perform the
dimension reduction along each position in the individual channel.
Discretization based on Symbolic Aggregate Approximation is applied on the
feature vectors to further extract the bag of features. We show how this
representation and bag of features help with classification. A full comparison
with the sequence distance based approach is provided to demonstrate the
effectiveness of our approach on the standard datasets. We further build the
Markov matrix from the discretized representation from the deconvolution to
visualize the time series as complex networks, which show more class-specific
statistical properties and clear structures with respect to different labels.
| no_new_dataset | 0.949153 |
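A minimal sketch of the SAX discretization step used in the record above: z-normalize the series, reduce it with piecewise aggregate approximation (PAA), then map segment means to symbols via Gaussian breakpoints (the values below are the standard breakpoints for an alphabet of size 4). Series length and segment count are toy assumptions.

```python
# SAX: z-normalize -> PAA -> quantize segment means into symbols.
import numpy as np

def sax(x, n_segments=8, breakpoints=(-0.6745, 0.0, 0.6745)):
    x = (x - x.mean()) / (x.std() + 1e-8)              # z-normalize
    paa = x.reshape(n_segments, -1).mean(axis=1)       # PAA (length divisible)
    return np.searchsorted(breakpoints, paa)           # symbol indices 0..3

t = np.linspace(0, 2 * np.pi, 64)
print(sax(np.sin(t)))                                  # e.g. [2 3 3 2 1 0 0 1]
```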
1611.09897 | Rushil Anirudh | Rushil Anirudh, Jayaraman J. Thiagarajan, Irene Kim, Wolfgang Polonik | Autism Spectrum Disorder Classification using Graph Kernels on
Multidimensional Time Series | Under review as a conference paper to BHI '17 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach to model time series data from resting state fMRI for
autism spectrum disorder (ASD) severity classification. We propose to adopt
kernel machines and employ graph kernels that define a kernel dot product
between two graphs. This enables us to take advantage of spatio-temporal
information to capture the dynamics of the brain network, as opposed to
aggregating them in the spatial or temporal dimension. In addition to the
conventional similarity graphs, we explore the use of L1 graph using sparse
coding, and the persistent homology of time delay embeddings, in the proposed
pipeline for ASD classification. In our experiments on two datasets from the
ABIDE collection, we demonstrate a consistent and significant advantage in
using graph kernels over traditional linear or non linear kernels for a variety
of time series features.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 21:39:23 GMT"
}
] | 2016-12-04T00:00:00 | [
[
"Anirudh",
"Rushil",
""
],
[
"Thiagarajan",
"Jayaraman J.",
""
],
[
"Kim",
"Irene",
""
],
[
"Polonik",
"Wolfgang",
""
]
] | TITLE: Autism Spectrum Disorder Classification using Graph Kernels on
Multidimensional Time Series
ABSTRACT: We present an approach to model time series data from resting state fMRI for
autism spectrum disorder (ASD) severity classification. We propose to adopt
kernel machines and employ graph kernels that define a kernel dot product
between two graphs. This enables us to take advantage of spatio-temporal
information to capture the dynamics of the brain network, as opposed to
aggregating it along the spatial or temporal dimension. In addition to
conventional similarity graphs, we explore the use of the L1 graph constructed
via sparse coding, and the persistent homology of time-delay embeddings, in the
proposed pipeline for ASD classification. In our experiments on two datasets
from the ABIDE collection, we demonstrate a consistent and significant
advantage in using graph kernels over traditional linear or nonlinear kernels for a variety
of time series features.
| no_new_dataset | 0.950457 |
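The kernel-machine idea in this record — score a pair of subjects by a kernel dot product between their brain-network graphs — can be sketched generically. The snippet below builds a thresholded correlation graph per subject and compares two graphs with a geometric random-walk kernel; the threshold, decay parameter, and function names are illustrative assumptions, not the authors' ABIDE pipeline:

```python
# Hedged sketch: correlation graphs from ROI time series + a random-walk kernel.
import numpy as np

def correlation_graph(ts, threshold=0.5):
    """ts: (time, rois) array -> binary adjacency from |correlation|."""
    A = np.abs(np.corrcoef(ts.T)) > threshold
    np.fill_diagonal(A, False)                 # no self-loops
    return A.astype(float)

def random_walk_kernel(A1, A2, lam=0.01):
    """k(G1, G2) = 1^T (I - lam * kron(A1, A2))^(-1) 1: a geometric series of common walks."""
    Ax = np.kron(A1, A2)
    n = Ax.shape[0]
    return np.ones(n) @ np.linalg.solve(np.eye(n) - lam * Ax, np.ones(n))

ts_a, ts_b = np.random.randn(100, 10), np.random.randn(100, 10)   # toy "fMRI" data
print(random_walk_kernel(correlation_graph(ts_a), correlation_graph(ts_b)))
```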
1612.00100 | Hongyang Zhang | Maria-Florina Balcan and Hongyang Zhang | Noise-Tolerant Life-Long Matrix Completion via Adaptive Sampling | 24 pages, 5 figures in NIPS 2016 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of recovering an incomplete $m\times n$ matrix of rank
$r$ with columns arriving online over time. This is known as the problem of
life-long matrix completion, and it is widely applied in recommendation
systems, computer vision, system identification, etc. The challenge is to
design provable algorithms that tolerate a large amount of noise with small
sample complexity. In this work, we give algorithms achieving strong guarantees
under two realistic noise models. Under bounded deterministic noise, an adversary can
add any bounded yet unstructured noise to each column. For this problem, we
present an algorithm that returns a matrix of a small error, with sample
complexity almost as small as the best prior results in the noiseless case. For
sparse random noise, where the corrupted columns are sparse and drawn randomly,
we give an algorithm that exactly recovers a $\mu_0$-incoherent matrix with
probability at least $1-\delta$ using sample complexity as small as
$O\left(\mu_0 r n \log (r/\delta)\right)$. This result advances the
state of the art and matches the lower bound in the worst case. We also
study the scenario where the hidden matrix lies on a mixture of subspaces and
show that the sample complexity can be even smaller. Our proposed algorithms
perform well experimentally in both synthetic and real-world datasets.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 01:10:07 GMT"
}
] | 2016-12-04T00:00:00 | [
[
"Balcan",
"Maria-Florina",
""
],
[
"Zhang",
"Hongyang",
""
]
] | TITLE: Noise-Tolerant Life-Long Matrix Completion via Adaptive Sampling
ABSTRACT: We study the problem of recovering an incomplete $m\times n$ matrix of rank
$r$ with columns arriving online over time. This is known as the problem of
life-long matrix completion, and it is widely applied in recommendation
systems, computer vision, system identification, etc. The challenge is to
design provable algorithms that tolerate a large amount of noise with small
sample complexity. In this work, we give algorithms achieving strong guarantees
under two realistic noise models. Under bounded deterministic noise, an adversary can
add any bounded yet unstructured noise to each column. For this problem, we
present an algorithm that returns a matrix of a small error, with sample
complexity almost as small as the best prior results in the noiseless case. For
sparse random noise, where the corrupted columns are sparse and drawn randomly,
we give an algorithm that exactly recovers a $\mu_0$-incoherent matrix with
probability at least $1-\delta$ using sample complexity as small as
$O\left(\mu_0 r n \log (r/\delta)\right)$. This result advances the
state of the art and matches the lower bound in the worst case. We also
study the scenario where the hidden matrix lies on a mixture of subspaces and
show that the sample complexity can be even smaller. Our proposed algorithms
perform well experimentally in both synthetic and real-world datasets.
| no_new_dataset | 0.944434 |
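The adaptive-sampling setting described above admits a compact sketch: for each arriving column, observe a few entries, test whether they are consistent with the current column-space estimate, and fully observe the column only when they are not. The sampling budget, tolerance, and basis-update rule below are illustrative assumptions, not the paper's exact procedure or constants:

```python
# Hedged sketch of online column recovery with adaptive sampling.
import numpy as np

def process_column(col, U, n_samples=20, tol=1e-6, rng=np.random):
    """If a few sampled entries fit span(U), complete the column from them;
    otherwise fully observe it and extend the basis with the residual."""
    m = col.shape[0]
    idx = rng.choice(m, size=min(n_samples, m), replace=False)
    if U.shape[1] > 0:
        coef, *_ = np.linalg.lstsq(U[idx], col[idx], rcond=None)
        if np.linalg.norm(U[idx] @ coef - col[idx]) <= tol:
            return U @ coef, U                          # cheap completion from samples
        resid = col - U @ np.linalg.lstsq(U, col, rcond=None)[0]
    else:
        resid = col
    U = np.hstack([U, (resid / np.linalg.norm(resid))[:, None]])
    return col, U

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 2))                        # hidden rank-2 column space
U = np.zeros((50, 0))
for _ in range(30):
    col = B @ rng.standard_normal(2)                    # column arriving online
    _, U = process_column(col, U, rng=rng)
print("basis size:", U.shape[1])                        # settles at 2 in this noiseless toy
```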
1604.00974 | Luiz Gustavo Hafemann | Luiz G. Hafemann, Robert Sabourin, Luiz S. Oliveira | Writer-independent Feature Learning for Offline Signature Verification
using Deep Convolutional Neural Networks | Accepted as a conference paper to The International Joint Conference
on Neural Networks (IJCNN) 2016 | null | 10.1109/IJCNN.2016.7727521 | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic Offline Handwritten Signature Verification has been researched over
the last few decades from several perspectives, using insights from graphology,
computer vision, and signal processing, among others. In spite of the advances
in the field, building classifiers that can distinguish between genuine
signatures and skilled forgeries (forgeries made targeting a particular
signature) is still hard. We propose approaching the problem from a feature learning
perspective. Our hypothesis is that, in the absence of a good model of the data
generation process, it is better to learn the features from data, instead of
using hand-crafted features that have no resemblance to the signature
generation process. To this end, we use Deep Convolutional Neural Networks to
learn features in a writer-independent format, and use this model to obtain a
feature representation on another set of users, where we train writer-dependent
classifiers. We tested our method on two datasets: GPDS-960 and Brazilian
PUC-PR. Our experimental results show that the features learned on a subset of
the users are discriminative for the other users, including across different
datasets, reaching close to the state of the art on the GPDS dataset and
improving the state of the art on the Brazilian PUC-PR dataset.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 18:26:48 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Hafemann",
"Luiz G.",
""
],
[
"Sabourin",
"Robert",
""
],
[
"Oliveira",
"Luiz S.",
""
]
] | TITLE: Writer-independent Feature Learning for Offline Signature Verification
using Deep Convolutional Neural Networks
ABSTRACT: Automatic Offline Handwritten Signature Verification has been researched over
the last few decades from several perspectives, using insights from graphology,
computer vision, and signal processing, among others. In spite of the advances
in the field, building classifiers that can distinguish between genuine
signatures and skilled forgeries (forgeries made targeting a particular
signature) is still hard. We propose approaching the problem from a feature learning
perspective. Our hypothesis is that, in the absence of a good model of the data
generation process, it is better to learn the features from data, instead of
using hand-crafted features that have no resemblance to the signature
generation process. To this end, we use Deep Convolutional Neural Networks to
learn features in a writer-independent format, and use this model to obtain a
feature representation on another set of users, where we train writer-dependent
classifiers. We tested our method on two datasets: GPDS-960 and Brazilian
PUC-PR. Our experimental results show that the features learned on a subset of
the users are discriminative for the other users, including across different
datasets, reaching close to the state of the art on the GPDS dataset and
improving the state of the art on the Brazilian PUC-PR dataset.
| no_new_dataset | 0.949106 |
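The two-stage pipeline in this record — writer-independent CNN features, then a writer-dependent classifier per user — can be sketched with off-the-shelf parts. The backbone below is a stand-in (any pretrained network exposing a feature vector would do), not the authors' architecture, and the random tensors are placeholders for real signature scans:

```python
# Hedged sketch of "CNN as fixed feature extractor + per-writer SVM".
# Assumes torchvision >= 0.13 for the weights="DEFAULT" API.
import torch
import torchvision
from sklearn.svm import SVC

backbone = torchvision.models.resnet18(weights="DEFAULT")   # ImageNet weights as a stand-in
backbone.fc = torch.nn.Identity()                           # expose 512-d features
backbone.eval()

@torch.no_grad()
def features(images):                                       # images: (N, 3, 224, 224)
    return backbone(images).numpy()

genuine = torch.randn(10, 3, 224, 224)                      # placeholders for real scans
forged = torch.randn(10, 3, 224, 224)
X = features(torch.cat([genuine, forged]))
y = [1] * 10 + [0] * 10                                     # genuine vs. forgery labels
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)            # writer-dependent stage
print(clf.predict(features(torch.randn(1, 3, 224, 224))))
```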
1605.06443 | Scott Yang | Corinna Cortes, Mehryar Mohri, Vitaly Kuznetsov, Scott Yang | Structured Prediction Theory Based on Factor Graph Complexity | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general theoretical analysis of structured prediction with a
series of new results. We give new data-dependent margin guarantees for
structured prediction for a very wide family of loss functions and a general
family of hypotheses, with an arbitrary factor graph decomposition. These are
the tightest margin bounds known for both standard multi-class and general
structured prediction problems. Our guarantees are expressed in terms of a
data-dependent complexity measure, factor graph complexity, which we show can
be estimated from data and bounded in terms of familiar quantities. We further
extend our theory by leveraging the principle of Voted Risk Minimization (VRM)
and show that learning is possible even with complex factor graphs. We present
new learning bounds for this advanced setting, which we use to design two new
algorithms, Voted Conditional Random Field (VCRF) and Voted Structured Boosting
(StructBoost). These algorithms can make use of complex features and factor
graphs and yet benefit from favorable learning guarantees. We also report the
results of experiments with VCRF on several datasets to validate our theory.
| [
{
"version": "v1",
"created": "Fri, 20 May 2016 17:21:17 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2016 17:02:48 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Cortes",
"Corinna",
""
],
[
"Mohri",
"Mehryar",
""
],
[
"Kuznetsov",
"Vitaly",
""
],
[
"Yang",
"Scott",
""
]
] | TITLE: Structured Prediction Theory Based on Factor Graph Complexity
ABSTRACT: We present a general theoretical analysis of structured prediction with a
series of new results. We give new data-dependent margin guarantees for
structured prediction for a very wide family of loss functions and a general
family of hypotheses, with an arbitrary factor graph decomposition. These are
the tightest margin bounds known for both standard multi-class and general
structured prediction problems. Our guarantees are expressed in terms of a
data-dependent complexity measure, factor graph complexity, which we show can
be estimated from data and bounded in terms of familiar quantities. We further
extend our theory by leveraging the principle of Voted Risk Minimization (VRM)
and show that learning is possible even with complex factor graphs. We present
new learning bounds for this advanced setting, which we use to design two new
algorithms, Voted Conditional Random Field (VCRF) and Voted Structured Boosting
(StructBoost). These algorithms can make use of complex features and factor
graphs and yet benefit from favorable learning guarantees. We also report the
results of experiments with VCRF on several datasets to validate our theory.
| no_new_dataset | 0.944689 |
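The abstract notes that its data-dependent complexity measure can be estimated from data. As a loose, generic illustration of that kind of estimate (it is not the paper's factor-graph quantity), the sketch below Monte Carlo-estimates the empirical Rademacher complexity of a finite hypothesis set:

```python
# Hedged sketch: Monte Carlo estimate of empirical Rademacher complexity.
import numpy as np

def empirical_rademacher(scores, n_draws=1000, seed=0):
    """scores: (n_hypotheses, m) matrix of h(x_i) values on a sample of size m."""
    rng = np.random.default_rng(seed)
    m = scores.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, m))      # Rademacher signs
    # For each sign draw, take the best-correlated hypothesis, then average.
    return np.mean(np.max(sigma @ scores.T / m, axis=1))

scores = np.sign(np.random.randn(20, 100))                  # 20 binary hypotheses, m = 100
print(empirical_rademacher(scores))
```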
1610.06912 | Prakhar Ojha | Prakhar Ojha, Partha Talukdar | KGEval: Estimating Accuracy of Automatically Constructed Knowledge
Graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic construction of large knowledge graphs (KG) by mining web-scale
text datasets has received considerable attention recently. Estimating accuracy
of such automatically constructed KGs is a challenging problem due to their
size and diversity. This important problem has largely been ignored in prior
research; we fill this gap and propose KGEval. KGEval binds the facts of a KG
using coupling constraints and crowdsources the facts from which the
correctness of large parts of the KG can be inferred. We demonstrate that the objective optimized by KGEval is
submodular and NP-hard, allowing guarantees for our approximation algorithm.
Through extensive experiments on real-world datasets, we demonstrate that
KGEval is able to estimate KG accuracy more accurately compared to other
competitive baselines, while requiring a significantly smaller number of human
evaluations.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2016 19:49:19 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2016 06:45:34 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Ojha",
"Prakhar",
""
],
[
"Talukdar",
"Partha",
""
]
] | TITLE: KGEval: Estimating Accuracy of Automatically Constructed Knowledge
Graphs
ABSTRACT: Automatic construction of large knowledge graphs (KG) by mining web-scale
text datasets has received considerable attention recently. Estimating accuracy
of such automatically constructed KGs is a challenging problem due to their
size and diversity. This important problem has largely been ignored in prior
research; we fill this gap and propose KGEval. KGEval binds the facts of a KG
using coupling constraints and crowdsources the facts from which the
correctness of large parts of the KG can be inferred. We demonstrate that the objective optimized by KGEval is
submodular and NP-hard, allowing guarantees for our approximation algorithm.
Through extensive experiments on real-world datasets, we demonstrate that
KGEval is able to estimate KG accuracy more accurately compared to other
competitive baselines, while requiring a significantly smaller number of human
evaluations.
| no_new_dataset | 0.944074 |
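Because the abstract argues the KGEval objective is submodular, the natural approximation is the standard greedy selector with its (1 - 1/e) guarantee. The loop below is that generic greedy; the toy coverage objective and budget are stand-ins for KGEval's coupling-constraint objective, not its actual implementation:

```python
# Hedged sketch: greedy maximization of a monotone submodular objective.
def greedy_submodular(ground_set, objective, budget):
    """Pick up to `budget` elements, each maximizing marginal gain."""
    selected = set()
    for _ in range(budget):
        gains = {e: objective(selected | {e}) - objective(selected)
                 for e in ground_set - selected}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        selected.add(best)
    return selected

# Toy stand-in: each evaluated "fact" certifies a set of KG beliefs.
covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}, 4: {"a", "c", "d"}}
coverage = lambda S: len(set().union(*(covers[e] for e in S)))
print(greedy_submodular(set(covers), coverage, budget=2))   # e.g. {4, 1} or {4, 2}
```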
1612.00085 | Woo Hyun Nam | Il Jun Ahn (1) and Woo Hyun Nam (1) ((1) Digital Media &
Communications R&D Center, Samsung Electronics, Seoul, Korea) | Texture Enhancement via High-Resolution Style Transfer for Single-Image
Super-Resolution | Il Jun Ahn and Woo Hyun Nam contributed equally to this work.
Submitted to IEEE Transactions on Consumer Electronics | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, various deep-neural-network (DNN)-based approaches have been
proposed for single-image super-resolution (SISR). Despite their promising
results on major structure regions such as edges and lines, they still suffer
from limited performance on texture regions that consist of very complex and
fine patterns. This is because, during the acquisition of a low-resolution (LR)
image via down-sampling, these regions lose most of the high frequency
information necessary to represent the texture details. In this paper, we
present a novel texture enhancement framework for SISR to effectively improve
the spatial resolution in the texture regions as well as edges and lines. We
call our method, high-resolution (HR) style transfer algorithm. Our framework
consists of three steps: (i) generate an initial HR image from an interpolated
LR image via an SISR algorithm, (ii) generate an HR style image from the
initial HR image via down-scaling and tiling, and (iii) combine the HR style
image with the initial HR image via a customized style transfer algorithm.
Here, the HR style image is obtained by down-scaling the initial HR image and
then repetitively tiling it into an image of the same size as the HR image.
This down-scaling and tiling process comes from the idea that texture regions
are often composed of small regions that are similar in appearance, albeit sometimes
different in scale. This process creates an HR style image that is rich in
details, which can be used to restore high-frequency texture details back into
the initial HR image via the style transfer algorithm. Experimental results on
a number of texture datasets show that our proposed HR style transfer algorithm
provides more visually pleasing results compared with competitive methods.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 00:15:02 GMT"
}
] | 2016-12-02T00:00:00 | [
[
"Ahn",
"Il Jun",
""
],
[
"Nam",
"Woo Hyun",
""
]
] | TITLE: Texture Enhancement via High-Resolution Style Transfer for Single-Image
Super-Resolution
ABSTRACT: Recently, various deep-neural-network (DNN)-based approaches have been
proposed for single-image super-resolution (SISR). Despite their promising
results on major structure regions such as edges and lines, they still suffer
from limited performance on texture regions that consist of very complex and
fine patterns. This is because, during the acquisition of a low-resolution (LR)
image via down-sampling, these regions lose most of the high-frequency
information necessary to represent the texture details. In this paper, we
present a novel texture enhancement framework for SISR to effectively improve
the spatial resolution in the texture regions as well as edges and lines. We
call our method the high-resolution (HR) style transfer algorithm. Our framework
consists of three steps: (i) generate an initial HR image from an interpolated
LR image via an SISR algorithm, (ii) generate an HR style image from the
initial HR image via down-scaling and tiling, and (iii) combine the HR style
image with the initial HR image via a customized style transfer algorithm.
Here, the HR style image is obtained by down-scaling the initial HR image and
then repetitively tiling it into an image of the same size as the HR image.
This down-scaling and tiling process comes from the idea that texture regions
are often composed of small regions that are similar in appearance, albeit sometimes
different in scale. This process creates an HR style image that is rich in
details, which can be used to restore high-frequency texture details back into
the initial HR image via the style transfer algorithm. Experimental results on
a number of texture datasets show that our proposed HR style transfer algorithm
provides more visually pleasing results compared with competitive methods.
| no_new_dataset | 0.954435 |
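Step (ii) of the framework above — build the "HR style image" by down-scaling the initial HR output and tiling the small copy back to full size — is straightforward to make concrete. The scale factor and resampling filter below are illustrative assumptions, and the function name is invented for the sketch:

```python
# Hedged sketch of the down-scale-and-tile step for the HR style image.
from PIL import Image

def hr_style_image(initial_hr: Image.Image, scale: float = 0.5) -> Image.Image:
    w, h = initial_hr.size
    small = initial_hr.resize((int(w * scale), int(h * scale)), Image.BICUBIC)
    tiled = Image.new(initial_hr.mode, (w, h))
    for x in range(0, w, small.width):
        for y in range(0, h, small.height):
            tiled.paste(small, (x, y))          # repeat the texture patch; overflow is cropped
    return tiled

# Usage (hypothetical file name): style = hr_style_image(Image.open("initial_hr.png"))
```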