id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1607.05140
|
Thanh-Toan Do
|
Thanh-Toan Do, Anh-Dzung Doan, Ngai-Man Cheung
|
Learning to Hash with Binary Deep Neural Network
|
Appearing in European Conference on Computer Vision (ECCV) 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work proposes deep network models and learning algorithms for
unsupervised and supervised binary hashing. Our novel network design constrains
one hidden layer to directly output the binary codes. This addresses a
challenging issue in some previous works: optimizing non-smooth objective
functions due to binarization. Moreover, we incorporate the independence and
balance properties in their direct and strict forms into the learning. Furthermore,
we include a similarity-preserving property in our objective function. Our
resulting optimization with these binary, independence, and balance constraints
is difficult to solve. We propose to attack it with alternating optimization
and careful relaxation. Experimental results on three benchmark datasets show
that our proposed methods compare favorably with the state of the art.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2016 15:48:58 GMT"
}
] | 2016-07-19T00:00:00 |
[
[
"Do",
"Thanh-Toan",
""
],
[
"Doan",
"Anh-Dzung",
""
],
[
"Cheung",
"Ngai-Man",
""
]
] |
TITLE: Learning to Hash with Binary Deep Neural Network
ABSTRACT: This work proposes deep network models and learning algorithms for
unsupervised and supervised binary hashing. Our novel network design constrains
one hidden layer to directly output the binary codes. This addresses a
challenging issue in some previous works: optimizing non-smooth objective
functions due to binarization. Moreover, we incorporate the independence and
balance properties in their direct and strict forms into the learning. Furthermore,
we include a similarity-preserving property in our objective function. Our
resulting optimization with these binary, independence, and balance constraints
is difficult to solve. We propose to attack it with alternating optimization
and careful relaxation. Experimental results on three benchmark datasets show
that our proposed methods compare favorably with the state of the art.
|
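The abstract above describes constraining a hidden layer to emit binary codes while enforcing balance (each bit splits the data evenly) and independence (bits are decorrelated). Below is a minimal numpy sketch of those two penalties, with a random projection standing in for the paper's trained network; it is an illustration of the constraints, not the paper's optimization procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))   # input descriptors (placeholder data)
W = rng.normal(size=(128, 32))     # hypothetical hash-layer weights

H = np.tanh(X @ W)                 # smooth relaxation of the codes
B = np.sign(H)                     # binary codes in {-1, +1}

# balance: each bit should split the data evenly, so column means ~ 0
balance = float(np.sum(B.mean(axis=0) ** 2))
# independence: bits should be decorrelated, so B^T B / n ~ identity
gram = (B.T @ B) / B.shape[0]
independence = float(np.linalg.norm(gram - np.eye(32), "fro") ** 2)
print(f"balance penalty: {balance:.3f}, independence penalty: {independence:.3f}")
```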
1607.05177
|
Aidean Sharghi
|
Aidean Sharghi, Boqing Gong, Mubarak Shah
|
Query-Focused Extractive Video Summarization
|
Accepted to ECCV 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video data is growing explosively. As a result of this "big video data",
intelligent algorithms for automatic video summarization have re-emerged as a
pressing need. We develop a probabilistic model, Sequential and Hierarchical
Determinantal Point Process (SH-DPP), for query-focused extractive video
summarization. Given a user query and a long video sequence, our algorithm
returns a summary by selecting key shots from the video. The decision to
include a shot in the summary depends on the shot's relevance to the user query
and importance in the context of the video, jointly. We verify our approach on
two densely annotated video datasets. The query-focused video summarization is
particularly useful for search engines, e.g., to display snippets of videos.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2016 16:49:19 GMT"
}
] | 2016-07-19T00:00:00 |
[
[
"Sharghi",
"Aidean",
""
],
[
"Gong",
"Boqing",
""
],
[
"Shah",
"Mubarak",
""
]
] |
TITLE: Query-Focused Extractive Video Summarization
ABSTRACT: Video data is growing explosively. As a result of this "big video data",
intelligent algorithms for automatic video summarization have re-emerged as a
pressing need. We develop a probabilistic model, Sequential and Hierarchical
Determinantal Point Process (SH-DPP), for query-focused extractive video
summarization. Given a user query and a long video sequence, our algorithm
returns a summary by selecting key shots from the video. The decision to
include a shot in the summary depends on the shot's relevance to the user query
and importance in the context of the video, jointly. We verify our approach on
two densely annotated video datasets. The query-focused video summarization is
particularly useful for search engines, e.g., to display snippets of videos.
|
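The SH-DPP abstract couples a shot's query relevance with its diversity against already-selected shots. A toy numpy sketch of that trade-off follows, using a flat quality-diversity DPP kernel and greedy MAP selection; the sequential, hierarchical structure of the actual model is not reproduced, and all features and relevance scores are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(50, 16))                  # per-shot features (toy)
D = np.square(F[:, None] - F[None, :]).sum(-1) # pairwise squared distances
S = np.exp(-0.5 * D / 16.0)                    # similarity kernel
q = rng.uniform(0.1, 1.0, size=50)             # hypothetical query relevance
L = q[:, None] * S * q[None, :]                # quality-diversity DPP kernel

selected = []
for _ in range(5):                             # greedily pick 5 shots
    best, best_gain = -1, -np.inf
    for i in range(50):
        if i in selected:
            continue
        idx = selected + [i]
        sub = L[np.ix_(idx, idx)] + 1e-9 * np.eye(len(idx))
        gain = np.linalg.slogdet(sub)[1]       # log det rewards diversity
        if gain > best_gain:
            best, best_gain = i, gain
    selected.append(best)
print("summary shots:", selected)
```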
1607.05194
|
Mohammad Havaei
|
Mohammad Havaei and Nicolas Guizard and Nicolas Chapados and Yoshua
Bengio
|
HeMIS: Hetero-Modal Image Segmentation
|
Accepted as an oral presentation at MICCAI 2016
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce a deep learning image segmentation framework that is extremely
robust to missing imaging modalities. Instead of attempting to impute or
synthesize missing data, the proposed approach learns, for each modality, an
embedding of the input image into a single latent vector space for which
arithmetic operations (such as taking the mean) are well defined. Points in
that space, which are averaged over modalities available at inference time, can
then be further processed to yield the desired segmentation. As such, any
combinatorial subset of available modalities can be provided as input, without
having to learn a combinatorial number of imputation models. Evaluated on two
neurological MRI datasets (brain tumors and MS lesions), the approach yields
state-of-the-art segmentation results when provided with all modalities;
moreover, its performance degrades remarkably gracefully when modalities are
removed, significantly more so than alternative mean-filling or other synthesis
approaches.
|
[
{
"version": "v1",
"created": "Mon, 18 Jul 2016 17:11:57 GMT"
}
] | 2016-07-19T00:00:00 |
[
[
"Havaei",
"Mohammad",
""
],
[
"Guizard",
"Nicolas",
""
],
[
"Chapados",
"Nicolas",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
TITLE: HeMIS: Hetero-Modal Image Segmentation
ABSTRACT: We introduce a deep learning image segmentation framework that is extremely
robust to missing imaging modalities. Instead of attempting to impute or
synthesize missing data, the proposed approach learns, for each modality, an
embedding of the input image into a single latent vector space for which
arithmetic operations (such as taking the mean) are well defined. Points in
that space, which are averaged over modalities available at inference time, can
then be further processed to yield the desired segmentation. As such, any
combinatorial subset of available modalities can be provided as input, without
having to learn a combinatorial number of imputation models. Evaluated on two
neurological MRI datasets (brain tumors and MS lesions), the approach yields
state-of-the-art segmentation results when provided with all modalities;
moreover, its performance degrades remarkably gracefully when modalities are
removed, significantly more so than alternative mean-filling or other synthesis
approaches.
|
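The key mechanism in the abstract above is an embedding per modality, fused with operations that remain well defined for any subset of modalities. Here is a schematic numpy sketch under that reading, with random linear maps standing in for the paper's trained CNN front-ends and mean/variance as the fusion moments.

```python
import numpy as np

rng = np.random.default_rng(2)
# random linear maps standing in for per-modality embedding networks
embed = {m: rng.normal(size=(32, 32)) for m in ["T1", "T2", "FLAIR"]}

def fuse(images):
    """images: dict of modality name -> feature vector; any subset works."""
    E = np.stack([embed[m] @ x for m, x in images.items()])
    # moments across available modalities are defined for any subset
    return np.concatenate([E.mean(axis=0), E.var(axis=0)])

full = fuse({m: rng.normal(size=32) for m in ["T1", "T2", "FLAIR"]})
partial = fuse({"T1": rng.normal(size=32)})  # two modalities missing
print(full.shape, partial.shape)             # same shape either way
```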
1512.01818
|
Filipe Nunes Ribeiro
|
Filipe Nunes Ribeiro, Matheus Ara\'ujo, Pollyanna Gon\c{c}alves,
Fabr\'icio Benevenuto, Marcos Andr\'e Gon\c{c}alves
|
SentiBench - a benchmark comparison of state-of-the-practice sentiment
analysis methods
| null | null | null | null |
cs.CL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the last few years thousands of scientific papers have investigated
sentiment analysis, several startups that measure opinions on real data have
emerged and a number of innovative products related to this theme have been
developed. There are multiple methods for measuring sentiments, including
lexical-based and supervised machine learning methods. Despite the vast
interest in the theme and the wide popularity of some methods, it is unclear which
one is better at identifying the polarity (i.e., positive or negative) of a
message. Accordingly, there is a strong need for a thorough
apples-to-apples comparison of sentiment analysis methods, \textit{as they are
used in practice}, across multiple datasets originating from different data
sources. Such a comparison is key for understanding the potential limitations,
advantages, and disadvantages of popular methods. This article aims at filling
this gap by presenting a benchmark comparison of twenty-four popular sentiment
analysis methods (which we call the state-of-the-practice methods). Our
evaluation is based on a benchmark of eighteen labeled datasets, covering
messages posted on social networks, movie and product reviews, as well as
opinions and comments in news articles. Our results highlight the extent to
which the prediction performance of these methods varies considerably across
datasets. To boost the development of this research area, we release the
methods' code and the datasets used in this article, deploying them in a benchmark
system, which provides an open API for accessing and comparing sentence-level
sentiment analysis methods.
|
[
{
"version": "v1",
"created": "Sun, 6 Dec 2015 18:52:51 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Dec 2015 00:32:23 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Feb 2016 18:54:51 GMT"
},
{
"version": "v4",
"created": "Sat, 4 Jun 2016 16:52:29 GMT"
},
{
"version": "v5",
"created": "Thu, 14 Jul 2016 22:51:39 GMT"
}
] | 2016-07-18T00:00:00 |
[
[
"Ribeiro",
"Filipe Nunes",
""
],
[
"Araújo",
"Matheus",
""
],
[
"Gonçalves",
"Pollyanna",
""
],
[
"Benevenuto",
"Fabrício",
""
],
[
"Gonçalves",
"Marcos André",
""
]
] |
TITLE: SentiBench - a benchmark comparison of state-of-the-practice sentiment
analysis methods
ABSTRACT: In the last few years thousands of scientific papers have investigated
sentiment analysis, several startups that measure opinions on real data have
emerged and a number of innovative products related to this theme have been
developed. There are multiple methods for measuring sentiments, including
lexical-based and supervised machine learning methods. Despite the vast
interest in the theme and the wide popularity of some methods, it is unclear which
one is better at identifying the polarity (i.e., positive or negative) of a
message. Accordingly, there is a strong need for a thorough
apples-to-apples comparison of sentiment analysis methods, \textit{as they are
used in practice}, across multiple datasets originating from different data
sources. Such a comparison is key for understanding the potential limitations,
advantages, and disadvantages of popular methods. This article aims at filling
this gap by presenting a benchmark comparison of twenty-four popular sentiment
analysis methods (which we call the state-of-the-practice methods). Our
evaluation is based on a benchmark of eighteen labeled datasets, covering
messages posted on social networks, movie and product reviews, as well as
opinions and comments in news articles. Our results highlight the extent to
which the prediction performance of these methods varies considerably across
datasets. To boost the development of this research area, we release the
methods' code and the datasets used in this article, deploying them in a benchmark
system, which provides an open API for accessing and comparing sentence-level
sentiment analysis methods.
|
1603.08120
|
Wenbin Li
|
Wenbin Li and Darren Cosker and Zhihan Lv and Matthew Brown
|
Nonrigid Optical Flow Ground Truth for Real-World Scenes with
Time-Varying Shading Effects
|
preprint of our paper accepted by RA-L'16
| null |
10.1109/LRA.2016.2592513
| null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper we present a dense ground truth dataset of nonrigidly deforming
real-world scenes. Our dataset contains both long and short video sequences,
and enables the quantitative evaluation of RGB-based tracking and
registration methods. To construct ground truth for the RGB sequences, we
simultaneously capture Near-Infrared (NIR) image sequences where dense markers
- visible only in NIR - represent ground truth positions. This allows for
comparison with automatically tracked RGB positions and the formation of error
metrics. Most previous datasets containing nonrigidly deforming sequences are
based on synthetic data. Our capture protocol enables us to acquire real-world
deforming objects with realistic photometric effects - such as blur and
illumination change - as well as occlusion and complex deformations. A public
evaluation website is constructed to allow for ranking of RGB image based
optical flow and other dense tracking algorithms, with various statistical
measures. Furthermore, we present an RGB-NIR multispectral optical flow model
allowing for energy optimization by adaptively combining feature information
from both the RGB and the complementary NIR channels. In our experiments we
evaluate eight existing RGB based optical flow methods on our new dataset. We
also evaluate our hybrid optical flow algorithm by comparing to two existing
multispectral approaches, as well as varying our input channels across RGB, NIR
and RGB-NIR.
|
[
{
"version": "v1",
"created": "Sat, 26 Mar 2016 16:08:13 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Jun 2016 14:57:38 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Jul 2016 12:39:03 GMT"
}
] | 2016-07-18T00:00:00 |
[
[
"Li",
"Wenbin",
""
],
[
"Cosker",
"Darren",
""
],
[
"Lv",
"Zhihan",
""
],
[
"Brown",
"Matthew",
""
]
] |
TITLE: Nonrigid Optical Flow Ground Truth for Real-World Scenes with
Time-Varying Shading Effects
ABSTRACT: In this paper we present a dense ground truth dataset of nonrigidly deforming
real-world scenes. Our dataset contains both long and short video sequences,
and enables the quantitative evaluation of RGB-based tracking and
registration methods. To construct ground truth for the RGB sequences, we
simultaneously capture Near-Infrared (NIR) image sequences where dense markers
- visible only in NIR - represent ground truth positions. This allows for
comparison with automatically tracked RGB positions and the formation of error
metrics. Most previous datasets containing nonrigidly deforming sequences are
based on synthetic data. Our capture protocol enables us to acquire real-world
deforming objects with realistic photometric effects - such as blur and
illumination change - as well as occlusion and complex deformations. A public
evaluation website is constructed to allow for ranking of RGB image based
optical flow and other dense tracking algorithms, with various statistical
measures. Furthermore, we present an RGB-NIR multispectral optical flow model
allowing for energy optimization by adaptively combining feature information
from both the RGB and the complementary NIR channels. In our experiments we
evaluate eight existing RGB based optical flow methods on our new dataset. We
also evaluate our hybrid optical flow algorithm by comparing to two existing
multispectral approaches, as well as varying our input channels across RGB, NIR
and RGB-NIR.
|
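The dataset's NIR markers allow error metrics to be formed against tracked RGB positions. A minimal sketch of one such metric, average endpoint error over a dense flow field, is shown below; the array names and sizes are illustrative assumptions, not the evaluation website's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(3)
flow_est = np.zeros((480, 640, 2))          # estimated (u, v) per pixel
flow_gt = rng.normal(size=(480, 640, 2))    # marker-derived ground truth
aepe = np.sqrt(((flow_est - flow_gt) ** 2).sum(axis=-1)).mean()
print(f"average endpoint error: {aepe:.3f} px")
```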
1607.04373
|
Ruining He
|
Ruining He, Chen Fang, Zhaowen Wang, Julian McAuley
|
Vista: A Visually, Socially, and Temporally-aware Model for Artistic
Recommendation
|
8 pages, 3 figures
| null |
10.1145/2959100.2959152
| null |
cs.IR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding users' interactions with highly subjective content---like
artistic images---is challenging due to the complex semantics that guide our
preferences. On the one hand, one has to overcome `standard' recommender systems
challenges, such as dealing with large, sparse, and long-tailed datasets. On
the other, several new challenges present themselves, such as the need to model
content in terms of its visual appearance, or even social dynamics, such as a
preference toward a particular artist that is independent of the art they
create.
In this paper we build large-scale recommender systems to model the dynamics
of a vibrant digital art community, Behance, consisting of tens of millions of
interactions (clicks and `appreciates') of users toward digital art.
Methodologically, our main contributions are to model (a) rich content,
especially in terms of its visual appearance; (b) temporal dynamics, in terms
of how users prefer `visually consistent' content within and across sessions;
and (c) social dynamics, in terms of how users exhibit preferences both towards
certain art styles, as well as the artists themselves.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2016 03:35:56 GMT"
}
] | 2016-07-18T00:00:00 |
[
[
"He",
"Ruining",
""
],
[
"Fang",
"Chen",
""
],
[
"Wang",
"Zhaowen",
""
],
[
"McAuley",
"Julian",
""
]
] |
TITLE: Vista: A Visually, Socially, and Temporally-aware Model for Artistic
Recommendation
ABSTRACT: Understanding users' interactions with highly subjective content---like
artistic images---is challenging due to the complex semantics that guide our
preferences. On the one hand, one has to overcome `standard' recommender systems
challenges, such as dealing with large, sparse, and long-tailed datasets. On
the other, several new challenges present themselves, such as the need to model
content in terms of its visual appearance, or even social dynamics, such as a
preference toward a particular artist that is independent of the art they
create.
In this paper we build large-scale recommender systems to model the dynamics
of a vibrant digital art community, Behance, consisting of tens of millions of
interactions (clicks and `appreciates') of users toward digital art.
Methodologically, our main contributions are to model (a) rich content,
especially in terms of its visual appearance; (b) temporal dynamics, in terms
of how users prefer `visually consistent' content within and across sessions;
and (c) social dynamics, in terms of how users exhibit preferences both towards
certain art styles, as well as the artists themselves.
|
1607.04378
|
Liping Jing Dr.
|
Liping Jing, Bo Liu, Jaeyoung Choi, Adam Janin, Julia Bernd, Michael
W. Mahoney, and Gerald Friedland
|
DCAR: A Discriminative and Compact Audio Representation to Improve Event
Detection
|
An abbreviated version of this paper will be published in ACM
Multimedia 2016
| null | null | null |
cs.SD cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel two-phase method for audio representation,
Discriminative and Compact Audio Representation (DCAR), and evaluates its
performance at detecting events in consumer-produced videos. In the first phase
of DCAR, each audio track is modeled using a Gaussian mixture model (GMM) that
includes several components to capture the variability within that track. The
second phase takes into account both global structure and local structure. In
this phase, the components are rendered more discriminative and compact by
formulating an optimization problem on Grassmannian manifolds, which we found
represents the structure of audio effectively.
Our experiments used the YLI-MED dataset (an open TRECVID-style video corpus
based on YFCC100M), which includes ten events. The results show that the
proposed DCAR representation consistently outperforms state-of-the-art audio
representations. DCAR's advantage over i-vector, mv-vector, and GMM
representations is significant for both easier and harder discrimination tasks.
We discuss how these performance differences across easy and hard cases follow
from how each type of model leverages (or doesn't leverage) the intrinsic
structure of the data. Furthermore, DCAR shows a particularly notable accuracy
advantage on events where humans have more difficulty classifying the videos,
i.e., events with lower mean annotator confidence.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2016 04:28:14 GMT"
}
] | 2016-07-18T00:00:00 |
[
[
"Jing",
"Liping",
""
],
[
"Liu",
"Bo",
""
],
[
"Choi",
"Jaeyoung",
""
],
[
"Janin",
"Adam",
""
],
[
"Bernd",
"Julia",
""
],
[
"Mahoney",
"Michael W.",
""
],
[
"Friedland",
"Gerald",
""
]
] |
TITLE: DCAR: A Discriminative and Compact Audio Representation to Improve Event
Detection
ABSTRACT: This paper presents a novel two-phase method for audio representation,
Discriminative and Compact Audio Representation (DCAR), and evaluates its
performance at detecting events in consumer-produced videos. In the first phase
of DCAR, each audio track is modeled using a Gaussian mixture model (GMM) that
includes several components to capture the variability within that track. The
second phase takes into account both global structure and local structure. In
this phase, the components are rendered more discriminative and compact by
formulating an optimization problem on Grassmannian manifolds, which we found
represents the structure of audio effectively.
Our experiments used the YLI-MED dataset (an open TRECVID-style video corpus
based on YFCC100M), which includes ten events. The results show that the
proposed DCAR representation consistently outperforms state-of-the-art audio
representations. DCAR's advantage over i-vector, mv-vector, and GMM
representations is significant for both easier and harder discrimination tasks.
We discuss how these performance differences across easy and hard cases follow
from how each type of model leverages (or doesn't leverage) the intrinsic
structure of the data. Furthermore, DCAR shows a particularly notable accuracy
advantage on events where humans have more difficulty classifying the videos,
i.e., events with lower mean annotator confidence.
|
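DCAR's first phase, as described above, fits a per-track Gaussian mixture over frame-level features. A short scikit-learn sketch of that phase follows, using random placeholder features; the second-phase Grassmannian refinement is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

frames = np.random.default_rng(4).normal(size=(500, 39))  # MFCC-like frames
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(frames)
print(gmm.means_.shape)  # (8, 39): one component per within-track mode
```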
1607.04379
|
Renzhi Cao
|
Renzhi Cao, Debswapna Bhattacharya, Jie Hou, and Jianlin Cheng
|
DeepQA: Improving the estimation of single protein model quality with
deep belief networks
|
19 pages, 1 figure, 4 tables
| null | null | null |
cs.AI cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Protein quality assessment (QA) by ranking and selecting protein models has
long been viewed as one of the major challenges for protein tertiary structure
prediction. Especially, estimating the quality of a single protein model, which
is important for selecting a few good models out of a large model pool
consisting of mostly low-quality models, is still a largely unsolved problem.
We introduce DeepQA, a novel single-model quality assessment method based on
deep belief networks that utilizes a number of selected features describing the
quality of a model from different perspectives, such as energy, physico-chemical
characteristics, and structural information. The deep belief network is trained
on several large datasets consisting of models from the Critical Assessment of
Protein Structure Prediction (CASP) experiments, several publicly available
datasets, and models generated by our in-house ab initio method. Our experiments
demonstrate that deep belief networks perform better than Support
Vector Machines and Neural Networks on the protein model quality assessment
problem, and our method DeepQA achieves state-of-the-art performance on the
CASP11 dataset. It also outperformed two well-established methods in selecting
good outlier models from a large set of models of mostly low quality generated
by ab initio modeling methods. DeepQA is a useful tool for protein single-model
quality assessment and protein structure prediction. The source code,
executable, documentation, and training/test datasets of DeepQA for Linux are freely
available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2016 04:28:55 GMT"
}
] | 2016-07-18T00:00:00 |
[
[
"Cao",
"Renzhi",
""
],
[
"Bhattacharya",
"Debswapna",
""
],
[
"Hou",
"Jie",
""
],
[
"Cheng",
"Jianlin",
""
]
] |
TITLE: DeepQA: Improving the estimation of single protein model quality with
deep belief networks
ABSTRACT: Protein quality assessment (QA) by ranking and selecting protein models has
long been viewed as one of the major challenges for protein tertiary structure
prediction. Especially, estimating the quality of a single protein model, which
is important for selecting a few good models out of a large model pool
consisting of mostly low-quality models, is still a largely unsolved problem.
We introduce DeepQA, a novel single-model quality assessment method based on
deep belief networks that utilizes a number of selected features describing the
quality of a model from different perspectives, such as energy, physico-chemical
characteristics, and structural information. The deep belief network is trained
on several large datasets consisting of models from the Critical Assessment of
Protein Structure Prediction (CASP) experiments, several publicly available
datasets, and models generated by our in-house ab initio method. Our experiments
demonstrate that deep belief networks perform better than Support
Vector Machines and Neural Networks on the protein model quality assessment
problem, and our method DeepQA achieves state-of-the-art performance on the
CASP11 dataset. It also outperformed two well-established methods in selecting
good outlier models from a large set of models of mostly low quality generated
by ab initio modeling methods. DeepQA is a useful tool for protein single-model
quality assessment and protein structure prediction. The source code,
executable, documentation, and training/test datasets of DeepQA for Linux are freely
available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/.
|
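The task described maps a per-model feature vector (energy, physico-chemical, and structural descriptors) to a quality score. Deep belief networks are not available in scikit-learn, so the sketch below substitutes a plain MLP regressor on synthetic features purely to show the input/output shape of the problem; it is not the paper's network or its training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 16))     # per-model quality features (synthetic)
y = rng.uniform(0, 1, size=200)    # quality targets, e.g. GDT-TS-like scores
scorer = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)
print(scorer.predict(X[:3]))       # predicted quality for three models
```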
1607.04593
|
Saurabh Prasad
|
Minshan Cui, Saurabh Prasad
|
Spatial Context based Angular Information Preserving Projection for
Hyperspectral Image Classification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Dimensionality reduction is a crucial preprocessing step for hyperspectral data
analysis - finding an appropriate subspace is often required for subsequent
image classification. In recent work, we proposed supervised angular
information based dimensionality reduction methods to find effective subspaces.
Since unlabeled data are often more readily available compared to labeled data,
we propose an unsupervised projection that finds a lower dimensional subspace
where local angular information is preserved. To exploit spatial information
from the hyperspectral images, we further extend our unsupervised projection to
incorporate spatial contextual information around each pixel in the image.
Additionally, we also propose a sparse representation based classifier which is
optimized to exploit spatial information during classification - we hence
assert that our proposed projection is particularly suitable for classifiers
where local similarity and spatial context are both important. Experimental
results with two real-world hyperspectral datasets demonstrate that our
proposed methods provide a robust classification performance.
|
[
{
"version": "v1",
"created": "Fri, 15 Jul 2016 17:38:34 GMT"
}
] | 2016-07-18T00:00:00 |
[
[
"Cui",
"Minshan",
""
],
[
"Prasad",
"Saurabh",
""
]
] |
TITLE: Spatial Context based Angular Information Preserving Projection for
Hyperspectral Image Classification
ABSTRACT: Dimensionality reduction is a crucial preprocessing step for hyperspectral data
analysis - finding an appropriate subspace is often required for subsequent
image classification. In recent work, we proposed supervised angular
information based dimensionality reduction methods to find effective subspaces.
Since unlabeled data are often more readily available compared to labeled data,
we propose an unsupervised projection that finds a lower dimensional subspace
where local angular information is preserved. To exploit spatial information
from the hyperspectral images, we further extend our unsupervised projection to
incorporate spatial contextual information around each pixel in the image.
Additionally, we also propose a sparse representation based classifier which is
optimized to exploit spatial information during classification - we hence
assert that our proposed projection is particularly suitable for classifiers
where local similarity and spatial context are both important. Experimental
results with two real-world hyperspectral datasets demonstrate that our
proposed methods provide a robust classification performance.
|
1412.7854
|
Seyedshams Feyzabadi
|
Seyedshams Feyzabadi
|
Joint Deep Learning for Car Detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Traditional object recognition approaches apply feature extraction, part
deformation handling, occlusion handling and classification sequentially, while
treating them as independent of each other. Ouyang and Wang proposed a model for
jointly learning all of the mentioned processes using one deep neural
network. We utilized and adapted their toolbox in order to apply it in car
detection scenarios where it had not been tested. Creating a single deep
architecture from these components improves the interaction between them and
can enhance the performance of the whole system. We believe that the approach
can be used as a general-purpose object detection toolbox. We tested the
algorithm on the UIUC car dataset and achieved an outstanding result. The
accuracy of our method was 97%, while previously reported results showed
accuracies of up to 91%. We strongly believe that an experiment on a larger
dataset would show the advantage of using deep models over shallow ones.
|
[
{
"version": "v1",
"created": "Thu, 25 Dec 2014 18:55:49 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Jul 2016 17:57:31 GMT"
}
] | 2016-07-15T00:00:00 |
[
[
"Feyzabadi",
"Seyedshams",
""
]
] |
TITLE: Joint Deep Learning for Car Detection
ABSTRACT: Traditional object recognition approaches apply feature extraction, part
deformation handling, occlusion handling and classification sequentially, while
treating them as independent of each other. Ouyang and Wang proposed a model for
jointly learning all of the mentioned processes using one deep neural
network. We utilized and adapted their toolbox in order to apply it in car
detection scenarios where it had not been tested. Creating a single deep
architecture from these components improves the interaction between them and
can enhance the performance of the whole system. We believe that the approach
can be used as a general-purpose object detection toolbox. We tested the
algorithm on the UIUC car dataset and achieved an outstanding result. The
accuracy of our method was 97%, while previously reported results showed
accuracies of up to 91%. We strongly believe that an experiment on a larger
dataset would show the advantage of using deep models over shallow ones.
|
1604.05096
|
Jonas Uhrig
|
Jonas Uhrig, Marius Cordts, Uwe Franke, Thomas Brox
|
Pixel-level Encoding and Depth Layering for Instance-level Semantic
Labeling
|
Accepted at GCPR 2016. Includes supplementary material
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent approaches for instance-aware semantic labeling have augmented
convolutional neural networks (CNNs) with complex multi-task architectures or
computationally expensive graphical models. We present a method that leverages
a fully convolutional network (FCN) to predict semantic labels, depth and an
instance-based encoding using each pixel's direction towards its corresponding
instance center. Subsequently, we apply low-level computer vision techniques to
generate state-of-the-art instance segmentation on the street scene datasets
KITTI and Cityscapes. Our approach outperforms existing works by a large margin
and can additionally predict absolute distances of individual instances from a
monocular image as well as a pixel-level semantic labeling.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2016 11:24:39 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Jul 2016 14:46:35 GMT"
}
] | 2016-07-15T00:00:00 |
[
[
"Uhrig",
"Jonas",
""
],
[
"Cordts",
"Marius",
""
],
[
"Franke",
"Uwe",
""
],
[
"Brox",
"Thomas",
""
]
] |
TITLE: Pixel-level Encoding and Depth Layering for Instance-level Semantic
Labeling
ABSTRACT: Recent approaches for instance-aware semantic labeling have augmented
convolutional neural networks (CNNs) with complex multi-task architectures or
computationally expensive graphical models. We present a method that leverages
a fully convolutional network (FCN) to predict semantic labels, depth and an
instance-based encoding using each pixel's direction towards its corresponding
instance center. Subsequently, we apply low-level computer vision techniques to
generate state-of-the-art instance segmentation on the street scene datasets
KITTI and Cityscapes. Our approach outperforms existing works by a large margin
and can additionally predict absolute distances of individual instances from a
monocular image as well as a pixel-level semantic labeling.
|
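The instance-based encoding in the abstract stores, at every pixel, the direction toward that pixel's instance center. A small numpy sketch below computes this encoding from a synthetic ground-truth mask; the paper predicts it with an FCN rather than deriving it from ground truth.

```python
import numpy as np

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 10:50] = True                 # one synthetic instance
ys, xs = np.nonzero(mask)
cy, cx = ys.mean(), xs.mean()             # instance center
d = np.stack([cy - ys, cx - xs], axis=1)  # vectors toward the center
d = d / (np.linalg.norm(d, axis=1, keepdims=True) + 1e-9)
print(d.shape)                            # (n_pixels, 2) unit directions
```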
1606.08733
|
Ond\v{r}ej Pl\'atek
|
Ond\v{r}ej Pl\'atek and Petr B\v{e}lohl\'avek and Vojt\v{e}ch
Hude\v{c}ek and Filip Jur\v{c}\'i\v{c}ek
|
Recurrent Neural Networks for Dialogue State Tracking
|
Accepted to slo-nlp 2016
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper discusses models for dialogue state tracking using recurrent
neural networks (RNN). We present experiments on the standard dialogue state
tracking (DST) dataset, DSTC2. On the one hand, RNN models have become the state of
the art in DST; on the other hand, most state-of-the-art models are only
turn-based and require dataset-specific preprocessing (e.g. DSTC2-specific) in
order to achieve such results. We implemented two architectures which can be
used in incremental settings and require almost no preprocessing. We compare
their performance to the benchmarks on DSTC2 and discuss their properties. With
only trivial preprocessing, the performance of our models is close to the
state-of-the-art results.
|
[
{
"version": "v1",
"created": "Tue, 28 Jun 2016 14:33:29 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jul 2016 21:42:58 GMT"
}
] | 2016-07-15T00:00:00 |
[
[
"Plátek",
"Ondřej",
""
],
[
"Bělohlávek",
"Petr",
""
],
[
"Hudeček",
"Vojtěch",
""
],
[
"Jurčíček",
"Filip",
""
]
] |
TITLE: Recurrent Neural Networks for Dialogue State Tracking
ABSTRACT: This paper discusses models for dialogue state tracking using recurrent
neural networks (RNN). We present experiments on the standard dialogue state
tracking (DST) dataset, DSTC2. On the one hand, RNN models have become the state of
the art in DST; on the other hand, most state-of-the-art models are only
turn-based and require dataset-specific preprocessing (e.g. DSTC2-specific) in
order to achieve such results. We implemented two architectures which can be
used in incremental settings and require almost no preprocessing. We compare
their performance to the benchmarks on DSTC2 and discuss their properties. With
only trivial preprocessing, the performance of our models is close to the
state-of-the-art results.
|
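An incremental tracker of the kind the abstract describes can consume words one at a time and expose a belief after each one. Below is a minimal PyTorch sketch with a GRU cell and a single-slot read-out head; the sizes, vocabulary, and one-slot head are illustrative assumptions, not the paper's architectures.

```python
import torch
import torch.nn as nn

vocab, emb, hid, n_values = 1000, 32, 64, 10   # illustrative sizes
embed = nn.Embedding(vocab, emb)
gru = nn.GRUCell(emb, hid)
head = nn.Linear(hid, n_values)                # one slot's value scores

h = torch.zeros(1, hid)
for word_id in [4, 17, 256, 3]:                # words arrive one at a time
    h = gru(embed(torch.tensor([word_id])), h)
    belief = head(h).softmax(dim=-1)           # updated belief after each word
print(belief.shape)                            # (1, n_values)
```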
1607.03990
|
Jerry Li
|
Jayadev Acharya, Ilias Diakonikolas, Jerry Li, Ludwig Schmidt
|
Fast Algorithms for Segmented Regression
|
27 pages, appeared in ICML 2016
| null | null | null |
cs.LG cs.DS math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the fixed design segmented regression problem: Given noisy samples
from a piecewise linear function $f$, we want to recover $f$ up to a desired
accuracy in mean-squared error.
Previous rigorous approaches for this problem rely on dynamic programming
(DP) and, while sample efficient, have running time quadratic in the sample
size. As our main contribution, we provide new sample near-linear time
algorithms for the problem that -- while not being minimax optimal -- achieve a
significantly better sample-time tradeoff on large datasets compared to the DP
approach. Our experimental evaluation shows that, compared with the DP
approach, our algorithms provide a convergence rate that is only off by a
factor of $2$ to $4$, while achieving speedups of three orders of magnitude.
|
[
{
"version": "v1",
"created": "Thu, 14 Jul 2016 04:52:53 GMT"
}
] | 2016-07-15T00:00:00 |
[
[
"Acharya",
"Jayadev",
""
],
[
"Diakonikolas",
"Ilias",
""
],
[
"Li",
"Jerry",
""
],
[
"Schmidt",
"Ludwig",
""
]
] |
TITLE: Fast Algorithms for Segmented Regression
ABSTRACT: We study the fixed design segmented regression problem: Given noisy samples
from a piecewise linear function $f$, we want to recover $f$ up to a desired
accuracy in mean-squared error.
Previous rigorous approaches for this problem rely on dynamic programming
(DP) and, while sample efficient, have running time quadratic in the sample
size. As our main contribution, we provide new sample near-linear time
algorithms for the problem that -- while not being minimax optimal -- achieve a
significantly better sample-time tradeoff on large datasets compared to the DP
approach. Our experimental evaluation shows that, compared with the DP
approach, our algorithms provide a convergence rate that is only off by a
factor of $2$ to $4$, while achieving speedups of three orders of magnitude.
|
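The quadratic-time dynamic program that the abstract uses as its baseline is easy to state: with err(i, j) the least-squares residual of one line fit on samples i..j-1, the optimal partition cost satisfies cost[j] = min_i (cost[i] + err(i, j) + penalty). A compact numpy sketch of that DP follows; the paper's contribution, a near-linear-time approximation of it, is not shown.

```python
import numpy as np

def seg_err(x, y, i, j):
    """Least-squares residual of one line fit on the window [i, j)."""
    A = np.stack([x[i:j], np.ones(j - i)], axis=1)
    res = np.linalg.lstsq(A, y[i:j], rcond=None)[1]
    return float(res[0]) if res.size else 0.0

def dp_segmented(x, y, penalty=1.0):
    n = len(x)
    cost = np.full(n + 1, np.inf)
    cost[0] = 0.0
    for j in range(1, n + 1):                  # O(n^2) subproblems
        for i in range(j):
            cost[j] = min(cost[j], cost[i] + seg_err(x, y, i, j) + penalty)
    return cost[n]

rng = np.random.default_rng(6)
x = np.linspace(0, 1, 40)
y = np.where(x < 0.5, x, 2 - 2 * x) + 0.05 * rng.normal(size=40)
print(f"DP objective: {dp_segmented(x, y):.3f}")
```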
1607.04186
|
Mathieu Acher
|
Mathieu Acher (DiverSe), Fran\c{c}ois Esnault (DiverSe)
|
Large-scale Analysis of Chess Games with Chess Engines: A Preliminary
Report
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The strength of chess engines together with the availability of numerous
chess games have attracted the attention of chess players, data scientists, and
researchers during the last decades. State-of-the-art engines now provide an
authoritative judgement that can be used in many applications like cheating
detection, intrinsic ratings computation, skill assessment, or the study of
human decision-making. A key issue for the research community is to gather a
large dataset of chess games together with the judgement of chess engines.
Unfortunately, analysing each move takes a great deal of time. In this paper, we
report our effort to analyse almost 5 million chess games with a computing
grid. During summer 2015, we processed 270 million unique positions
using the Stockfish engine at a relatively high depth (20). We populated a
database of more than one terabyte of chess evaluations, representing an estimated time
of 50 years of computation on a single machine. Our effort is a first step
towards the replication of research results, the supply of open data and
procedures for exploring new directions, and the investigation of software
engineering/scalability issues when computing billions of moves.
|
[
{
"version": "v1",
"created": "Thu, 28 Apr 2016 08:37:43 GMT"
}
] | 2016-07-15T00:00:00 |
[
[
"Acher",
"Mathieu",
"",
"DiverSe"
],
[
"Esnault",
"François",
"",
"DiverSe"
]
] |
TITLE: Large-scale Analysis of Chess Games with Chess Engines: A Preliminary
Report
ABSTRACT: The strength of chess engines together with the availability of numerous
chess games have attracted the attention of chess players, data scientists, and
researchers during the last decades. State-of-the-art engines now provide an
authoritative judgement that can be used in many applications like cheating
detection, intrinsic ratings computation, skill assessment, or the study of
human decision-making. A key issue for the research community is to gather a
large dataset of chess games together with the judgement of chess engines.
Unfortunately, analysing each move takes a great deal of time. In this paper, we
report our effort to analyse almost 5 million chess games with a computing
grid. During summer 2015, we processed 270 million unique positions
using the Stockfish engine at a relatively high depth (20). We populated a
database of more than one terabyte of chess evaluations, representing an estimated time
of 50 years of computation on a single machine. Our effort is a first step
towards the replication of research results, the supply of open data and
procedures for exploring new directions, and the investigation of software
engineering/scalability issues when computing billions of moves.
|
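A minimal sketch of the per-position analysis loop described above, using the python-chess bindings with a local Stockfish binary at depth 20. The engine path and the PGN file name are assumptions; the computing-grid infrastructure and the database layer are not shown.

```python
import chess.engine
import chess.pgn

engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # assumed binary path
with open("games.pgn") as f:                               # assumed PGN file
    game = chess.pgn.read_game(f)

board = game.board()
for move in game.mainline_moves():
    board.push(move)
    info = engine.analyse(board, chess.engine.Limit(depth=20))
    print(board.fen(), info["score"].white())              # score per position
engine.quit()
```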
1203.6276
|
Pekka Malo
|
Ankur Sinha, Pekka Malo, Timo Kuosmanen
|
A Multi-objective Exploratory Procedure for Regression Model Selection
|
in Journal of Computational and Graphical Statistics, Vol. 24, Iss.
1, 2015
| null |
10.1080/10618600.2014.899236
| null |
stat.CO cs.NE stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Variable selection is recognized as one of the most critical steps in
statistical modeling. The problems encountered in engineering and social
sciences are commonly characterized by over-abundance of explanatory variables,
non-linearities and unknown interdependencies between the regressors. An added
difficulty is that the analysts may have little or no prior knowledge of the
relative importance of the variables. To provide a robust method for model
selection, this paper introduces the Multi-objective Genetic Algorithm for
Variable Selection (MOGA-VS) that provides the user with an optimal set of
regression models for a given data-set. The algorithm considers the regression
problem as a two-objective task, and explores the Pareto-optimal (best-subset)
models by preferring models that have fewer
regression coefficients and better goodness of fit. The model exploration can
be performed based on in-sample or generalization error minimization. The model
selection is proposed to be performed in two steps. First, we generate the
frontier of Pareto-optimal regression models by eliminating the dominated
models without any user intervention. Second, a decision making process is
executed which allows the user to choose the most preferred model using
visualisations and simple metrics. The method has been evaluated on a recently
published real dataset on Communities and Crime within the United States.
|
[
{
"version": "v1",
"created": "Wed, 28 Mar 2012 14:15:24 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Sep 2012 15:54:33 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Jul 2013 22:34:01 GMT"
},
{
"version": "v4",
"created": "Wed, 13 Jul 2016 07:53:01 GMT"
}
] | 2016-07-14T00:00:00 |
[
[
"Sinha",
"Ankur",
""
],
[
"Malo",
"Pekka",
""
],
[
"Kuosmanen",
"Timo",
""
]
] |
TITLE: A Multi-objective Exploratory Procedure for Regression Model Selection
ABSTRACT: Variable selection is recognized as one of the most critical steps in
statistical modeling. The problems encountered in engineering and social
sciences are commonly characterized by over-abundance of explanatory variables,
non-linearities and unknown interdependencies between the regressors. An added
difficulty is that the analysts may have little or no prior knowledge of the
relative importance of the variables. To provide a robust method for model
selection, this paper introduces the Multi-objective Genetic Algorithm for
Variable Selection (MOGA-VS) that provides the user with an optimal set of
regression models for a given data-set. The algorithm considers the regression
problem as a two-objective task, and explores the Pareto-optimal (best-subset)
models by preferring models that have fewer
regression coefficients and better goodness of fit. The model exploration can
be performed based on in-sample or generalization error minimization. The model
selection is proposed to be performed in two steps. First, we generate the
frontier of Pareto-optimal regression models by eliminating the dominated
models without any user intervention. Second, a decision making process is
executed which allows the user to choose the most preferred model using
visualisations and simple metrics. The method has been evaluated on a recently
published real dataset on Communities and Crime within the United States.
|
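The two-objective view above treats every candidate model as a point (number of coefficients, goodness of fit) and keeps the non-dominated ones. The sketch below enumerates subsets and filters the Pareto frontier; exhaustive enumeration is viable only at this toy scale, where MOGA-VS would instead use a genetic search, and the synthetic data are assumptions.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 6))
y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=100)

points = []
for k in range(1, 7):
    for cols in combinations(range(6), k):
        sse = float(np.linalg.lstsq(X[:, cols], y, rcond=None)[1][0])
        points.append((k, sse, cols))

# keep models not dominated on both objectives (fewer coefficients, lower SSE)
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] < p[1] for q in points)]
for k, sse, cols in sorted(pareto):
    print(k, round(sse, 3), cols)
```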
1604.05472
|
Arpita Biswas
|
Ragavendran Gopalakrishnan, Arpita Biswas, Alefiya Lightwala, Skanda
Vasudevan, Partha Dutta, Abhishek Tripathi
|
Demand Prediction and Placement Optimization for Electric Vehicle
Charging Stations
|
Published in the proceedings of the 25th International Joint
Conference on Artificial Intelligence IJCAI 2016
| null | null | null |
cs.AI cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective placement of charging stations plays a key role in Electric Vehicle
(EV) adoption. In the placement problem, given a set of candidate sites, an
optimal subset needs to be selected with respect to the concerns of both (a)
the charging station service provider, such as the demand at the candidate
sites and the budget for deployment, and (b) the EV user, such as charging
station reachability and short waiting times at the station. This work
addresses these concerns, making the following three novel contributions: (i) a
supervised multi-view learning framework using Canonical Correlation Analysis
(CCA) for demand prediction at candidate sites, using multiple datasets such as
points of interest information, traffic density, and the historical usage at
existing charging stations; (ii) a mixed-packing-and-covering optimization
framework that models competing concerns of the service provider and EV users;
(iii) an iterative heuristic to solve these problems by alternately invoking
knapsack and set cover algorithms. The performance of the demand prediction
model and the placement optimization heuristic are evaluated using real world
data.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 08:51:03 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jul 2016 14:30:23 GMT"
}
] | 2016-07-14T00:00:00 |
[
[
"Gopalakrishnan",
"Ragavendran",
""
],
[
"Biswas",
"Arpita",
""
],
[
"Lightwala",
"Alefiya",
""
],
[
"Vasudevan",
"Skanda",
""
],
[
"Dutta",
"Partha",
""
],
[
"Tripathi",
"Abhishek",
""
]
] |
TITLE: Demand Prediction and Placement Optimization for Electric Vehicle
Charging Stations
ABSTRACT: Effective placement of charging stations plays a key role in Electric Vehicle
(EV) adoption. In the placement problem, given a set of candidate sites, an
optimal subset needs to be selected with respect to the concerns of both (a)
the charging station service provider, such as the demand at the candidate
sites and the budget for deployment, and (b) the EV user, such as charging
station reachability and short waiting times at the station. This work
addresses these concerns, making the following three novel contributions: (i) a
supervised multi-view learning framework using Canonical Correlation Analysis
(CCA) for demand prediction at candidate sites, using multiple datasets such as
points of interest information, traffic density, and the historical usage at
existing charging stations; (ii) a mixed-packing-and-covering optimization
framework that models competing concerns of the service provider and EV users;
(iii) an iterative heuristic to solve these problems by alternately invoking
knapsack and set cover algorithms. The performance of the demand prediction
model and the placement optimization heuristic are evaluated using real world
data.
|
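A sketch of the covering half of the heuristic described above: greedily pick candidate sites until every demand point is within reach of a chosen station. The knapsack step enforcing the provider's budget, and the alternation between the two, are omitted; all locations and the coverage radius are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
sites = rng.uniform(0, 10, size=(20, 2))    # candidate station locations
demand = rng.uniform(0, 10, size=(100, 2))  # EV user demand points
dist = np.linalg.norm(sites[:, None] - demand[None, :], axis=2)
reach = dist < 2.5                          # station i covers point j

uncovered = np.ones(100, dtype=bool)
chosen = []
while uncovered.any():
    gain = (reach & uncovered).sum(axis=1)  # newly covered points per site
    if gain.max() == 0:
        break                               # some demand is unreachable
    best = int(np.argmax(gain))
    chosen.append(best)
    uncovered &= ~reach[best]
print("stations chosen:", chosen)
```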
1607.03691
|
Gabriella Contardo
|
Gabriella Contardo, Ludovic Denoyer, Thierry Arti\`eres
|
Sequential Cost-Sensitive Feature Acquisition
|
12 pages, conference : accepted at IDA 2016
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a reinforcement learning based approach to tackle the
cost-sensitive learning problem where each input feature has a specific cost.
The acquisition process is handled through a stochastic policy which allows
features to be acquired in an adaptive way. The general architecture of our
approach relies on representation learning to enable performing prediction on
any partially observed sample, whichever subset of its features is observed.
The resulting model is an original mix of representation learning and of
reinforcement learning ideas. It is learned with policy gradient techniques to
minimize a budgeted inference cost. We demonstrate the effectiveness of our
proposed method with several experiments on a variety of datasets for the
sparse prediction problem where all features have the same cost, but also for
some cost-sensitive settings.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2016 12:10:08 GMT"
}
] | 2016-07-14T00:00:00 |
[
[
"Contardo",
"Gabriella",
""
],
[
"Denoyer",
"Ludovic",
""
],
[
"Artières",
"Thierry",
""
]
] |
TITLE: Sequential Cost-Sensitive Feature Acquisition
ABSTRACT: We propose a reinforcement learning based approach to tackle the
cost-sensitive learning problem where each input feature has a specific cost.
The acquisition process is handled through a stochastic policy which allows
features to be acquired in an adaptive way. The general architecture of our
approach relies on representation learning to enable performing prediction on
any partially observed sample, whichever subset of its features is observed.
The resulting model is an original mix of representation learning and of
reinforcement learning ideas. It is learned with policy gradient techniques to
minimize a budgeted inference cost. We demonstrate the effectiveness of our
proposed method with several experiments on a variety of datasets for the
sparse prediction problem where all features have the same cost, but also for
some cost-sensitive settings.
|
1607.03705
|
Philippe Leray
|
Maroua Haddad (LINA, LARODEC), Philippe Leray (LINA), Nahla Ben Amor
(LARODEC)
|
Possibilistic Networks: Parameters Learning from Imprecise Data and
Evaluation strategy
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There has been an ever-increasing interest in multidisciplinary research on
representing and reasoning with imperfect data. Possibilistic networks present
one of the powerful frameworks of interest for representing uncertain and
imprecise information. This paper covers the problem of their parameters
learning from imprecise datasets, i.e., containing multi-valued data. We
propose in the first part of this paper a possibilistic network sampling
process. In the second part, we propose a likelihood function which explores
the link between random sets theory and possibility theory. This function is
then deployed to parametrize possibilistic networks.
|
[
{
"version": "v1",
"created": "Wed, 13 Jul 2016 12:45:53 GMT"
}
] | 2016-07-14T00:00:00 |
[
[
"Haddad",
"Maroua",
"",
"LINA, LARODEC"
],
[
"Leray",
"Philippe",
"",
"LINA"
],
[
"Amor",
"Nahla Ben",
"",
"LARODEC"
]
] |
TITLE: Possibilistic Networks: Parameters Learning from Imprecise Data and
Evaluation strategy
ABSTRACT: There has been an ever-increasing interest in multidisciplinary research on
representing and reasoning with imperfect data. Possibilistic networks present
one of the powerful frameworks of interest for representing uncertain and
imprecise information. This paper covers the problem of their parameters
learning from imprecise datasets, i.e., containing multi-valued data. We
propose in the first part of this paper a possibilistic network sampling
process. In the second part, we propose a likelihood function which explores
the link between random sets theory and possibility theory. This function is
then deployed to parametrize possibilistic networks.
|
1603.01431
|
Devansh Arpit
|
Devansh Arpit, Yingbo Zhou, Bhargava U. Kota, Venu Govindaraju
|
Normalization Propagation: A Parametric Technique for Removing Internal
Covariate Shift in Deep Networks
|
11 pages, ICML 2016, appendix added to the last version
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While the authors of Batch Normalization (BN) identify and address an
important problem involved in training deep networks -- Internal Covariate
Shift -- the current solution has certain drawbacks. Specifically, BN depends on
batch statistics for layerwise input normalization during training which makes
the estimates of mean and standard deviation of input (distribution) to hidden
layers inaccurate for validation due to shifting parameter values (especially
during initial training epochs). Also, BN cannot be used with batch-size 1
during training. We address these drawbacks by proposing a non-adaptive
normalization technique for removing internal covariate shift, that we call
Normalization Propagation. Our approach does not depend on batch statistics,
but rather uses a data-independent parametric estimate of mean and
standard-deviation in every layer thus being computationally faster compared
with BN. We exploit the observation that the pre-activations before Rectified
Linear Units follow a Gaussian distribution in deep networks, and that once the
first and second order statistics of any given dataset are normalized, we can
forward propagate this normalization without the need for recalculating the
approximate statistics for hidden layers.
|
[
{
"version": "v1",
"created": "Fri, 4 Mar 2016 12:01:58 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Mar 2016 16:41:25 GMT"
},
{
"version": "v3",
"created": "Mon, 23 May 2016 23:01:55 GMT"
},
{
"version": "v4",
"created": "Mon, 30 May 2016 02:08:06 GMT"
},
{
"version": "v5",
"created": "Sun, 3 Jul 2016 20:17:44 GMT"
},
{
"version": "v6",
"created": "Tue, 12 Jul 2016 13:57:19 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Arpit",
"Devansh",
""
],
[
"Zhou",
"Yingbo",
""
],
[
"Kota",
"Bhargava U.",
""
],
[
"Govindaraju",
"Venu",
""
]
] |
TITLE: Normalization Propagation: A Parametric Technique for Removing Internal
Covariate Shift in Deep Networks
ABSTRACT: While the authors of Batch Normalization (BN) identify and address an
important problem involved in training deep networks -- Internal Covariate
Shift -- the current solution has certain drawbacks. Specifically, BN depends on
batch statistics for layerwise input normalization during training which makes
the estimates of mean and standard deviation of input (distribution) to hidden
layers inaccurate for validation due to shifting parameter values (especially
during initial training epochs). Also, BN cannot be used with batch-size 1
during training. We address these drawbacks by proposing a non-adaptive
normalization technique for removing internal covariate shift, that we call
Normalization Propagation. Our approach does not depend on batch statistics,
but rather uses a data-independent parametric estimate of mean and
standard-deviation in every layer thus being computationally faster compared
with BN. We exploit the observation that the pre-activations before Rectified
Linear Units follow a Gaussian distribution in deep networks, and that once the
first and second order statistics of any given dataset are normalized, we can
forward propagate this normalization without the need for recalculating the
approximate statistics for hidden layers.
|
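The data-independent estimate the abstract relies on has a closed form: if a pre-activation is N(0, 1), the post-ReLU mean is 1/sqrt(2*pi) and the variance is 1/2 - 1/(2*pi), so a layer can be normalized without batch statistics. A numpy sketch verifying this by sampling; it illustrates only this one ingredient, not the full Normalization Propagation scheme.

```python
import numpy as np

z = np.random.default_rng(9).normal(size=1_000_000)
relu = np.maximum(z, 0.0)

mean_cf = 1.0 / np.sqrt(2 * np.pi)          # closed-form post-ReLU mean
std_cf = np.sqrt(0.5 - 1.0 / (2 * np.pi))   # closed-form post-ReLU std
print(f"sampled mean={relu.mean():.4f} std={relu.std():.4f}")
print(f"closed  mean={mean_cf:.4f} std={std_cf:.4f}")

normalized = (relu - mean_cf) / std_cf      # no batch statistics needed
```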
1603.05587
|
Mohammad Ghasemi Hamed
|
Mohammad Ghasemi Hamed and Masoud Ebadi Kivaj
|
Reliable Prediction Intervals for Local Linear Regression
|
40 pages, 11 figures, 10 tables and 1 algorithm. arXiv admin note:
text overlap with arXiv:1402.5874
| null | null | null |
stat.ME cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces two methods for estimating reliable prediction
intervals for local linear least-squares regressions, named Bounded Oscillation
Prediction Intervals (BOPI). It also proposes a new measure for comparing
interval prediction models named Equivalent Gaussian Standard Deviation (EGSD).
The experimental results compare BOPI to other methods using coverage
probability, Mean Interval Size and the introduced EGSD measure. The results
were generally in favor of the BOPI on considered benchmark regression
datasets. It also reports simulation studies validating the BOPI method's
reliability.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2016 17:39:12 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2016 17:54:37 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Mar 2016 21:52:48 GMT"
},
{
"version": "v4",
"created": "Fri, 1 Apr 2016 10:23:38 GMT"
},
{
"version": "v5",
"created": "Tue, 12 Jul 2016 17:39:50 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Hamed",
"Mohammad Ghasemi",
""
],
[
"Kivaj",
"Masoud Ebadi",
""
]
] |
TITLE: Reliable Prediction Intervals for Local Linear Regression
ABSTRACT: This paper introduces two methods for estimating reliable prediction
intervals for local linear least-squares regressions, named Bounded Oscillation
Prediction Intervals (BOPI). It also proposes a new measure for comparing
interval prediction models named Equivalent Gaussian Standard Deviation (EGSD).
The experimental results compare BOPI to other methods using coverage
probability, Mean Interval Size and the introduced EGSD measure. The results
were generally in favor of the BOPI on considered benchmark regression
datasets. It also reports simulation studies validating the BOPI method's
reliability.
|
1604.07939
|
Andre Araujo
|
Andre Araujo, Jason Chaves, Haricharan Lakshman, Roland Angst, Bernd
Girod
|
Large-Scale Query-by-Image Video Retrieval Using Bloom Filters
| null | null | null | null |
cs.MM cs.DB cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of using image queries to retrieve videos from a
database. Our focus is on large-scale applications, where it is infeasible to
index each database video frame independently. Our main contribution is a
framework based on Bloom filters, which can be used to index long video
segments, enabling efficient image-to-video comparisons. Using this framework,
we investigate several retrieval architectures, by considering different types
of aggregation and different functions to encode visual information -- these
play a crucial role in achieving high performance. Extensive experiments show
that the proposed technique improves mean average precision by 24% on a public
dataset, while being 4X faster, compared to the previous state-of-the-art.
|
[
{
"version": "v1",
"created": "Wed, 27 Apr 2016 05:46:52 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2016 17:58:16 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Araujo",
"Andre",
""
],
[
"Chaves",
"Jason",
""
],
[
"Lakshman",
"Haricharan",
""
],
[
"Angst",
"Roland",
""
],
[
"Girod",
"Bernd",
""
]
] |
TITLE: Large-Scale Query-by-Image Video Retrieval Using Bloom Filters
ABSTRACT: We consider the problem of using image queries to retrieve videos from a
database. Our focus is on large-scale applications, where it is infeasible to
index each database video frame independently. Our main contribution is a
framework based on Bloom filters, which can be used to index long video
segments, enabling efficient image-to-video comparisons. Using this framework,
we investigate several retrieval architectures, by considering different types
of aggregation and different functions to encode visual information -- these
play a crucial role in achieving high performance. Extensive experiments show
that the proposed technique improves mean average precision by 24% on a public
dataset, while being 4X faster, compared to the previous state-of-the-art.
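To make the indexing scheme concrete, here is a toy sketch in which the quantized visual words of every frame in a long segment are inserted into a single Bloom filter, and an image query is scored by the fraction of its words the filter contains. The class name, sizes, and hashing scheme are illustrative assumptions, not the paper's exact design.

import hashlib
import numpy as np

class BloomFilter:
    """Minimal Bloom filter over integer visual-word ids."""
    def __init__(self, n_bits=1 << 16, n_hashes=4):
        self.bits = np.zeros(n_bits, dtype=bool)
        self.n_bits, self.n_hashes = n_bits, n_hashes

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def contains(self, item):
        return all(self.bits[p] for p in self._positions(item))

# Index one long video segment: insert the visual words of all of its frames.
segment_frames = [[3, 17, 99], [3, 42, 99], [5, 17, 256]]   # toy quantized descriptors
bf = BloomFilter()
for frame in segment_frames:
    for word in frame:
        bf.add(word)

# Image query: score the segment by the fraction of query words it contains.
query = [3, 99, 1000]
print(sum(bf.contains(w) for w in query) / len(query))      # 2/3, modulo rare false positives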
|
1607.03226
|
Xiaoyue Jiang
|
Xiaoyue Jiang, Dong Zhang and Xiaoyi Feng
|
Local feature hierarchy for face recognition across pose and
illumination
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Even though face recognition in frontal view and under normal lighting
conditions works very well, the performance degenerates sharply in extreme
conditions. Recently there have been many works dealing with pose and
illumination problems, respectively. However, lighting and pose variations are
often encountered at the same time. Accordingly, we propose an end-to-end face
recognition method that deals with pose and illumination simultaneously, based
on convolutional networks in which discriminative nonlinear features that are
invariant to pose and illumination are extracted. Normally the global structure
of images taken from different views is quite diverse. Therefore, we propose to
use 1*1 convolutional kernels to extract local features. Furthermore, a
parallel multi-stream multi-layer 1*1 convolution network is developed to
extract multi-hierarchy features. In the experiments we obtained an average
face recognition rate of 96.9% on the MultiPIE dataset, which improves the
state-of-the-art for face recognition across pose and illumination by 7.5%.
Especially for profile-wise positions, the average recognition rate of our
proposed network is 97.8%, which increases the state-of-the-art recognition
rate by 19%.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 03:52:30 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Jiang",
"Xiaoyue",
""
],
[
"Zhang",
"Dong",
""
],
[
"Feng",
"Xiaoyi",
""
]
] |
TITLE: Local feature hierarchy for face recognition across pose and
illumination
ABSTRACT: Even though face recognition in frontal view and under normal lighting
conditions works very well, the performance degenerates sharply in extreme
conditions. Recently there have been many works dealing with pose and
illumination problems, respectively. However, lighting and pose variations are
often encountered at the same time. Accordingly, we propose an end-to-end face
recognition method that deals with pose and illumination simultaneously, based
on convolutional networks in which discriminative nonlinear features that are
invariant to pose and illumination are extracted. Normally the global structure
of images taken from different views is quite diverse. Therefore, we propose to
use 1*1 convolutional kernels to extract local features. Furthermore, a
parallel multi-stream multi-layer 1*1 convolution network is developed to
extract multi-hierarchy features. In the experiments we obtained an average
face recognition rate of 96.9% on the MultiPIE dataset, which improves the
state-of-the-art for face recognition across pose and illumination by 7.5%.
Especially for profile-wise positions, the average recognition rate of our
proposed network is 97.8%, which increases the state-of-the-art recognition
rate by 19%.
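Since a 1*1 convolution is simply a per-pixel linear map over channels, the parallel multi-stream construction can be sketched in a few lines of NumPy. This is an illustrative reduction of the idea, not the authors' full network.

import numpy as np

def conv1x1(x, W):
    """1*1 convolution: x is (H, W, C_in), W is (C_out, C_in)."""
    return np.einsum('hwc,oc->hwo', x, W)

def multi_stream_1x1(x, weight_streams):
    """Apply several 1*1 streams in parallel and concatenate; every output
    feature remains a function of a single spatial location."""
    return np.concatenate([np.maximum(conv1x1(x, W), 0.0) for W in weight_streams], axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((56, 56, 64))                    # feature map from a CNN
streams = [0.1 * rng.standard_normal((16, 64)) for _ in range(3)]
print(multi_stream_1x1(x, streams).shape)                # (56, 56, 48)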
|
1607.03240
|
Sohil Shah
|
Sohil Shah, Kuldeep Kulkarni, Arijit Biswas, Ankit Gandhi, Om Deshmukh
and Larry Davis
|
Weakly Supervised Learning of Heterogeneous Concepts in Videos
|
To appear at ECCV 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Typical textual descriptions that accompany online videos are 'weak': i.e.,
they mention the main concepts in the video but not their corresponding
spatio-temporal locations. The concepts in the description are typically
heterogeneous (e.g., objects, persons, actions). Certain location constraints
on these concepts can also be inferred from the description. The goal of this
paper is to present a generalization of the Indian Buffet Process (IBP) that
can (a) systematically incorporate heterogeneous concepts in an integrated
framework, and (b) enforce location constraints, for efficient classification
and localization of the concepts in the videos. Finally, we develop posterior
inference for the proposed formulation using mean-field variational
approximation. Comparative evaluations on the Casablanca and the A2D datasets
show that the proposed approach significantly outperforms other
state-of-the-art techniques: 24% relative improvement for pairwise concept
classification in the Casablanca dataset and 9% relative improvement for
localization in the A2D dataset as compared to the most competitive baseline.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 06:49:49 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Shah",
"Sohil",
""
],
[
"Kulkarni",
"Kuldeep",
""
],
[
"Biswas",
"Arijit",
""
],
[
"Gandhi",
"Ankit",
""
],
[
"Deshmukh",
"Om",
""
],
[
"Davis",
"Larry",
""
]
] |
TITLE: Weakly Supervised Learning of Heterogeneous Concepts in Videos
ABSTRACT: Typical textual descriptions that accompany online videos are 'weak': i.e.,
they mention the main concepts in the video but not their corresponding
spatio-temporal locations. The concepts in the description are typically
heterogeneous (e.g., objects, persons, actions). Certain location constraints
on these concepts can also be inferred from the description. The goal of this
paper is to present a generalization of the Indian Buffet Process (IBP) that
can (a) systematically incorporate heterogeneous concepts in an integrated
framework, and (b) enforce location constraints, for efficient classification
and localization of the concepts in the videos. Finally, we develop posterior
inference for the proposed formulation using mean-field variational
approximation. Comparative evaluations on the Casablanca and the A2D datasets
show that the proposed approach significantly outperforms other
state-of-the-art techniques: 24% relative improvement for pairwise concept
classification in the Casablanca dataset and 9% relative improvement for
localization in the A2D dataset as compared to the most competitive baseline.
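For readers unfamiliar with the prior being generalized, a draw from the standard Indian Buffet Process can be simulated directly from its customers-and-dishes description. This is the vanilla IBP only; the paper's generalization adds heterogeneous concepts and location constraints on top.

import numpy as np

def sample_ibp(n_customers, alpha, rng):
    """Sample a binary feature-assignment matrix from the standard IBP."""
    dish_counts = []                  # how many customers took each dish so far
    rows = []
    for n in range(1, n_customers + 1):
        row = [rng.random() < c / n for c in dish_counts]   # revisit popular dishes
        for i, took in enumerate(row):
            dish_counts[i] += took
        k_new = int(rng.poisson(alpha / n))                 # sample some new dishes
        dish_counts += [1] * k_new
        rows.append(row + [True] * k_new)
    K = len(dish_counts)
    return np.array([r + [False] * (K - len(r)) for r in rows])

rng = np.random.default_rng(0)
print(sample_ibp(8, alpha=2.0, rng=rng).astype(int))        # customers x features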
|
1607.03250
|
Hengyuan Hu
|
Hengyuan Hu, Rui Peng, Yu-Wing Tai, Chi-Keung Tang
|
Network Trimming: A Data-Driven Neuron Pruning Approach towards
Efficient Deep Architectures
| null | null | null | null |
cs.NE cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art neural networks are getting deeper and wider. While their
performance increases with the increasing number of layers and neurons, it is
crucial to design an efficient deep architecture in order to reduce
computational and memory costs. Designing an efficient neural network, however,
is labor intensive, requiring many experiments and fine-tunings. In this paper,
we introduce network trimming which iteratively optimizes the network by
pruning unimportant neurons based on analysis of their outputs on a large
dataset. Our algorithm is inspired by an observation that the outputs of a
significant portion of neurons in a large network are mostly zero, regardless
of what inputs the network received. These zero activation neurons are
redundant, and can be removed without affecting the overall accuracy of the
network. After pruning the zero activation neurons, we retrain the network
using the weights before pruning as initialization. We alternate the pruning
and retraining to further reduce zero activations in a network. Our experiments
on LeNet and VGG-16 show that we can achieve a high compression ratio of
parameters without losing accuracy, or even achieve higher accuracy than the
original network.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 07:43:01 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Hu",
"Hengyuan",
""
],
[
"Peng",
"Rui",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Tang",
"Chi-Keung",
""
]
] |
TITLE: Network Trimming: A Data-Driven Neuron Pruning Approach towards
Efficient Deep Architectures
ABSTRACT: State-of-the-art neural networks are getting deeper and wider. While their
performance increases with the increasing number of layers and neurons, it is
crucial to design an efficient deep architecture in order to reduce
computational and memory costs. Designing an efficient neural network, however,
is labor intensive, requiring many experiments and fine-tunings. In this paper,
we introduce network trimming which iteratively optimizes the network by
pruning unimportant neurons based on analysis of their outputs on a large
dataset. Our algorithm is inspired by an observation that the outputs of a
significant portion of neurons in a large network are mostly zero, regardless
of what inputs the network received. These zero activation neurons are
redundant, and can be removed without affecting the overall accuracy of the
network. After pruning the zero activation neurons, we retrain the network
using the weights before pruning as initialization. We alternate the pruning
and retraining to further reduce zero activations in a network. Our experiments
on LeNet and VGG-16 show that we can achieve a high compression ratio of
parameters without losing accuracy, or even achieve higher accuracy than the
original network.
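Pruning is driven by a per-neuron zero-activation statistic (the fraction of inputs on which a neuron outputs exactly zero). A minimal NumPy sketch of one prune step follows; the trim_layer helper and the 0.9 threshold are illustrative assumptions.

import numpy as np

def apoz(activations):
    """Average percentage of zeros per neuron (activations: samples x neurons, post-ReLU)."""
    return (activations == 0).mean(axis=0)

def trim_layer(W_in, b, W_out, activations, threshold=0.9):
    """Drop neurons that are zero on most inputs; the slimmed weights then
    initialize the retraining pass."""
    keep = apoz(activations) < threshold
    return W_in[keep], b[keep], W_out[:, keep], keep

rng = np.random.default_rng(0)
acts = np.maximum(rng.standard_normal((1000, 8)) - 1.5, 0.0)   # mostly-zero toy activations
acts[:, :3] = np.maximum(rng.standard_normal((1000, 3)), 0.0)  # a few useful neurons
W_in, b, W_out = rng.standard_normal((8, 32)), np.zeros(8), rng.standard_normal((4, 8))
W_in2, b2, W_out2, keep = trim_layer(W_in, b, W_out, acts)
print(keep, W_in2.shape)                                       # high-APoZ neurons removed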
|
1607.03274
|
Maria Han Veiga
|
Maria Han Veiga and Carsten Eickhoff
|
A Cross-Platform Collection of Social Network Profiles
|
4 pages, 5 figures, SIGIR 2016, short paper. SIGIR 2016 Proceedings
of the 39th International ACM SIGIR conference on Research and Development in
Information Retrieval
| null |
10.1145/2911451.2914666
| null |
cs.IR cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The proliferation of Internet-enabled devices and services has led to a
shifting balance between digital and analogue aspects of our everyday lives. In
the face of this development there is a growing demand for the study of privacy
hazards, the potential for unique user de-anonymization and information leakage
between the various social media profiles many of us maintain. To enable the
structured study of such adversarial effects, this paper presents a dedicated
dataset of cross-platform social network personas (i.e., the same person has
accounts on multiple platforms). The corpus comprises 850 users who generate
predominantly English content. Each user object contains the online footprint
of the same person in three distinct social networks: Twitter, Instagram and
Foursquare. In total, it encompasses over 2.5M tweets, 340k check-ins and 42k
Instagram posts. We describe the collection methodology, characteristics of the
dataset, and how to obtain it. Finally, we discuss a common use case,
cross-platform user identification.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 09:09:58 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Veiga",
"Maria Han",
""
],
[
"Eickhoff",
"Carsten",
""
]
] |
TITLE: A Cross-Platform Collection of Social Network Profiles
ABSTRACT: The proliferation of Internet-enabled devices and services has led to a
shifting balance between digital and analogue aspects of our everyday lives. In
the face of this development there is a growing demand for the study of privacy
hazards, the potential for unique user de-anonymization and information leakage
between the various social media profiles many of us maintain. To enable the
structured study of such adversarial effects, this paper presents a dedicated
dataset of cross-platform social network personas (i.e., the same person has
accounts on multiple platforms). The corpus comprises 850 users who generate
predominantly English content. Each user object contains the online footprint
of the same person in three distinct social networks: Twitter, Instagram and
Foursquare. In total, it encompasses over 2.5M tweets, 340k check-ins and 42k
Instagram posts. We describe the collection methodology, characteristics of the
dataset, and how to obtain it. Finally, we discuss a common use case,
cross-platform user identification.
|
1607.03305
|
Martin Cadik
|
Martin Cadik and Jan Vasicek and Michal Hradis and Filip Radenovic and
Ondrej Chum
|
Camera Elevation Estimation from a Single Mountain Landscape Photograph
| null |
In Xianghua Xie, Mark W. Jones, and Gary K. L. Tam, editors,
Proceedings of the British Machine Vision Conference (BMVC), pages
30.1-30.12. BMVA Press, September 2015
|
10.5244/C.29.30
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work addresses the problem of camera elevation estimation from a single
photograph in an outdoor environment. We introduce a new benchmark dataset of
one-hundred thousand images with annotated camera elevation called Alps100K. We
propose and experimentally evaluate two automatic data-driven approaches to
camera elevation estimation: one based on convolutional neural networks, the
other on local features. To compare the proposed methods to human performance,
an experiment with 100 subjects is conducted. The experimental results show
that both proposed approaches outperform humans and that the best result is
achieved by their combination.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 10:47:51 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Cadik",
"Martin",
""
],
[
"Vasicek",
"Jan",
""
],
[
"Hradis",
"Michal",
""
],
[
"Radenovic",
"Filip",
""
],
[
"Chum",
"Ondrej",
""
]
] |
TITLE: Camera Elevation Estimation from a Single Mountain Landscape Photograph
ABSTRACT: This work addresses the problem of camera elevation estimation from a single
photograph in an outdoor environment. We introduce a new benchmark dataset of
one-hundred thousand images with annotated camera elevation called Alps100K. We
propose and experimentally evaluate two automatic data-driven approaches to
camera elevation estimation: one based on convolutional neural networks, the
other on local features. To compare the proposed methods to human performance,
an experiment with 100 subjects is conducted. The experimental results show
that both proposed approaches outperform humans and that the best result is
achieved by their combination.
|
1607.03380
|
Yangqing Li
|
Yangqing Li and Saurabh Prasad and Wei Chen and Changchuan Yin and Zhu
Han
|
An approximate message passing approach for compressive hyperspectral
imaging using a simultaneous low-rank and joint-sparsity prior
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers a compressive sensing (CS) approach for hyperspectral
data acquisition, which results in a practical compression ratio substantially
higher than the state-of-the-art. Applying a simultaneous low-rank and
joint-sparse (L&S) model to the hyperspectral data, we propose a novel
algorithm for joint reconstruction of hyperspectral data based on loopy belief
propagation that enables the exploitation of both structured sparsity and
amplitude correlations in the data. Experimental results with real
hyperspectral datasets demonstrate that the proposed algorithm outperforms the
state-of-the-art CS-based solutions with substantial reductions in
reconstruction error.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 14:49:05 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Li",
"Yangqing",
""
],
[
"Prasad",
"Saurabh",
""
],
[
"Chen",
"Wei",
""
],
[
"Yin",
"Changchuan",
""
],
[
"Han",
"Zhu",
""
]
] |
TITLE: An approximate message passing approach for compressive hyperspectral
imaging using a simultaneous low-rank and joint-sparsity prior
ABSTRACT: This paper considers a compressive sensing (CS) approach for hyperspectral
data acquisition, which results in a practical compression ratio substantially
higher than the state-of-the-art. Applying a simultaneous low-rank and
joint-sparse (L&S) model to the hyperspectral data, we propose a novel
algorithm for joint reconstruction of hyperspectral data based on loopy belief
propagation that enables the exploitation of both structured sparsity and
amplitude correlations in the data. Experimental results with real
hyperspectral datasets demonstrate that the proposed algorithm outperforms the
state-of-the-art CS-based solutions with substantial reductions in
reconstruction error.
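The paper's message passing is specialized to the L&S prior, but the generic AMP template it builds on is short: iterate a denoiser on pseudo-measurements and keep an Onsager correction in the residual. The sketch below is plain sparse-recovery AMP with a soft-threshold denoiser, not the proposed joint low-rank and joint-sparse variant.

import numpy as np

def soft(x, t):
    """Soft-threshold denoiser for sparse signals."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(y, A, iters=50, theta=0.05):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        x = soft(x + A.T @ z, theta)              # denoise the pseudo-measurement
        z = y - A @ x + z * (x != 0).sum() / m    # residual with Onsager correction
    return x

rng = np.random.default_rng(0)
m, n = 120, 400
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, 15, replace=False)] = rng.standard_normal(15)
print(np.linalg.norm(amp(A @ x0, A) - x0) / np.linalg.norm(x0))   # small relative error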
|
1607.03401
|
Qianqian Xu
|
Qianqian Xu, Jiechao Xiong, Xiaochun Cao, and Yuan Yao
|
Parsimonious Mixed-Effects HodgeRank for Crowdsourced Preference
Aggregation
|
10 pages, ACM Multimedia (full paper) accepted
| null | null | null |
cs.HC cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In crowdsourced preference aggregation, it is often assumed that all the
annotators are subject to a common preference or utility function which
generates their comparison behaviors in experiments. However, in reality
annotators are subject to variations due to multi-criteria, abnormal, or a
mixture of such behaviors. In this paper, we propose a parsimonious
mixed-effects model based on HodgeRank, which takes into account both the fixed
effect that the majority of annotators follows a common linear utility model,
and the random effect that a small subset of annotators might deviate from the
common significantly and exhibits strongly personalized preferences. HodgeRank
has been successfully applied to subjective quality evaluation of multimedia
and resolves pairwise crowdsourced ranking data into a global consensus ranking
and cyclic conflicts of interests. As an extension, our proposed methodology
further explores the conflicts of interests through the random effect in
annotator specific variations. The key algorithm in this paper establishes a
dynamic path from the common utility to individual variations, with different
levels of parsimony or sparsity on personalization, based on newly developed
Linearized Bregman Algorithms with the Inverse Scale Space method. Finally, the
validity of the methodology is supported by experiments with both simulated
examples and three real-world crowdsourcing datasets, which show that our
proposed method exhibits better performance (i.e. smaller test error) compared
with HodgeRank due to its parsimonious property.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 15:30:10 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Xu",
"Qianqian",
""
],
[
"Xiong",
"Jiechao",
""
],
[
"Cao",
"Xiaochun",
""
],
[
"Yao",
"Yuan",
""
]
] |
TITLE: Parsimonious Mixed-Effects HodgeRank for Crowdsourced Preference
Aggregation
ABSTRACT: In crowdsourced preference aggregation, it is often assumed that all the
annotators are subject to a common preference or utility function which
generates their comparison behaviors in experiments. However, in reality
annotators are subject to variations due to multi-criteria, abnormal, or a
mixture of such behaviors. In this paper, we propose a parsimonious
mixed-effects model based on HodgeRank, which takes into account both the fixed
effect that the majority of annotators follows a common linear utility model,
and the random effect that a small subset of annotators might deviate from the
common significantly and exhibits strongly personalized preferences. HodgeRank
has been successfully applied to subjective quality evaluation of multimedia
and resolves pairwise crowdsourced ranking data into a global consensus ranking
and cyclic conflicts of interests. As an extension, our proposed methodology
further explores the conflicts of interests through the random effect in
annotator specific variations. The key algorithm in this paper establishes a
dynamic path from the common utility to individual variations, with different
levels of parsimony or sparsity on personalization, based on newly developed
Linearized Bregman Algorithms with the Inverse Scale Space method. Finally, the
validity of the methodology is supported by experiments with both simulated
examples and three real-world crowdsourcing datasets, which show that our
proposed method exhibits better performance (i.e. smaller test error) compared
with HodgeRank due to its parsimonious property.
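The fixed-effect part, recovering one consensus score vector from pairwise comparisons, is an ordinary least-squares problem on the comparison graph. A minimal sketch of that step, without the paper's annotator-specific random effects or the Linearized Bregman path:

import numpy as np

def hodgerank_scores(n_items, comparisons):
    """Least-squares HodgeRank: solve min ||d s - y||^2 where (d s)_{ij} = s_i - s_j."""
    D, y = [], []
    for i, j, y_ij in comparisons:          # y_ij: degree to which i is preferred over j
        row = np.zeros(n_items)
        row[i], row[j] = 1.0, -1.0
        D.append(row)
        y.append(y_ij)
    s, *_ = np.linalg.lstsq(np.array(D), np.array(y), rcond=None)
    return s - s.mean()                     # scores are identifiable only up to a shift

# Toy crowdsourced preferences among 4 items (with a mild cyclic conflict).
data = [(0, 1, 1.0), (1, 2, 1.2), (0, 2, 2.0), (2, 3, 0.8), (0, 3, 3.1)]
print(hodgerank_scores(4, data))            # item 0 ranks highest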
|
1607.03425
|
Matthias Vestner
|
Matthias Vestner, Roee Litman, Alex Bronstein, Emanuele Rodol\`a and
Daniel Cremers
|
Bayesian Inference of Bijective Non-Rigid Shape Correspondence
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many algorithms for the computation of correspondences between deformable
shapes rely on some variant of nearest neighbor matching in a descriptor space.
Such are, for example, various point-wise correspondence recovery algorithms
used as a postprocessing stage in the functional correspondence framework. In
this paper, we show that such frequently used techniques in practice suffer
from a lack of accuracy and result in poor surjectivity. We propose an
alternative recovery technique guaranteeing a bijective correspondence and
producing significantly higher accuracy. We derive the proposed method from a
statistical framework of Bayesian inference and demonstrate its performance on
several challenging deformable 3D shape matching datasets.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 16:04:54 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Vestner",
"Matthias",
""
],
[
"Litman",
"Roee",
""
],
[
"Bronstein",
"Alex",
""
],
[
"Rodolà",
"Emanuele",
""
],
[
"Cremers",
"Daniel",
""
]
] |
TITLE: Bayesian Inference of Bijective Non-Rigid Shape Correspondence
ABSTRACT: Many algorithms for the computation of correspondences between deformable
shapes rely on some variant of nearest neighbor matching in a descriptor space.
Such are, for example, various point-wise correspondence recovery algorithms
used as a postprocessing stage in the functional correspondence framework. In
this paper, we show that such frequently used techniques in practice suffer
from a lack of accuracy and result in poor surjectivity. We propose an
alternative recovery technique guaranteeing a bijective correspondence and
producing significantly higher accuracy. We derive the proposed method from a
statistical framework of Bayesian inference and demonstrate its performance on
several challenging deformable 3D shape matching datasets.
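The surjectivity failure of nearest-neighbour recovery, and the effect of insisting on a bijection, can be seen with a toy stand-in that solves a linear assignment problem on the same cost matrix. The Hungarian step is only a simple proxy here, not the paper's Bayesian recovery.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
DX = rng.standard_normal((50, 2))                                  # descriptors on shape X
DY = DX[rng.permutation(50)] + 0.8 * rng.standard_normal((50, 2))  # noisy shape Y
cost = np.linalg.norm(DX[:, None] - DY[None, :], axis=-1)

nn = cost.argmin(axis=1)                  # plain NN matching: targets may repeat
rows, cols = linear_sum_assignment(cost)  # bijective matching on the same costs
print(len(set(nn)), len(set(cols)))       # NN typically covers fewer than 50 targets; assignment covers all 50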
|
1607.03456
|
Moshe Salhov
|
Amit Bermanis, Aviv Rotbart, Moshe Salhov and Amir Averbuch
|
Incomplete Pivoted QR-based Dimensionality Reduction
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-dimensional big data appears in many research fields such as image
recognition, biology and collaborative filtering. Often, the exploration of
such data with classic algorithms runs into difficulties due to the `curse
of dimensionality' phenomenon. Therefore, dimensionality reduction methods are
applied to the data prior to its analysis. Many of these methods are based on
principal components analysis, which is statistically driven, namely they map
the data into a low-dimensional subspace that preserves significant statistical
properties of the high-dimensional data. As a consequence, such methods do not
directly address the geometry of the data, reflected by the mutual distances
between multidimensional data points. Thus, operations such as classification,
anomaly detection or other machine learning tasks may be affected.
This work provides a dictionary-based framework for geometrically driven data
analysis that includes dimensionality reduction, out-of-sample extension and
anomaly detection. It embeds high-dimensional data in a low-dimensional
subspace. This embedding preserves the original high-dimensional geometry of
the data up to a user-defined distortion rate. In addition, it identifies a
subset of landmark data points that constitute a dictionary for the analyzed
dataset. The dictionary enables a natural extension of the
low-dimensional embedding to out-of-sample data points, which gives rise to a
distortion-based criterion for anomaly detection. The suggested method is
demonstrated on synthetic and real-world datasets and achieves good results for
classification, anomaly detection and out-of-sample tasks.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 18:20:23 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Bermanis",
"Amit",
""
],
[
"Rotbart",
"Aviv",
""
],
[
"Salhov",
"Moshe",
""
],
[
"Averbuch",
"Amir",
""
]
] |
TITLE: Incomplete Pivoted QR-based Dimensionality Reduction
ABSTRACT: High-dimensional big data appears in many research fields such as image
recognition, biology and collaborative filtering. Often, the exploration of
such data with classic algorithms runs into difficulties due to the `curse
of dimensionality' phenomenon. Therefore, dimensionality reduction methods are
applied to the data prior to its analysis. Many of these methods are based on
principal components analysis, which is statistically driven, namely they map
the data into a low-dimensional subspace that preserves significant statistical
properties of the high-dimensional data. As a consequence, such methods do not
directly address the geometry of the data, reflected by the mutual distances
between multidimensional data points. Thus, operations such as classification,
anomaly detection or other machine learning tasks may be affected.
This work provides a dictionary-based framework for geometrically driven data
analysis that includes dimensionality reduction, out-of-sample extension and
anomaly detection. It embeds high-dimensional data in a low-dimensional
subspace. This embedding preserves the original high-dimensional geometry of
the data up to a user-defined distortion rate. In addition, it identifies a
subset of landmark data points that constitute a dictionary for the analyzed
dataset. The dictionary enables a natural extension of the
low-dimensional embedding to out-of-sample data points, which gives rise to a
distortion-based criterion for anomaly detection. The suggested method is
demonstrated on synthetic and real-world datasets and achieves good results for
classification, anomaly detection and out-of-sample tasks.
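The landmark-selection step can be sketched with SciPy's column-pivoted QR: pivoting orders data points by residual norm, and truncating once the diagonal of R falls below a distortion threshold yields the dictionary. The stopping rule below is a simplified illustration, not the paper's exact criterion.

import numpy as np
from scipy.linalg import qr

def qr_dictionary(data, distortion=0.1):
    """Indices of landmark points chosen by incomplete column-pivoted QR."""
    Q, R, piv = qr(data.T, mode='economic', pivoting=True)   # columns = data points
    diag = np.abs(np.diag(R))                                # nonincreasing by pivoting
    k = int(np.searchsorted(-diag, -distortion * diag[0]))   # first 'small' pivot
    return piv[:k]

rng = np.random.default_rng(0)
basis = rng.standard_normal((5, 40))            # data living near a 5-dim subspace
X = rng.standard_normal((300, 5)) @ basis + 1e-6 * rng.standard_normal((300, 40))
print(len(qr_dictionary(X)))                    # a handful of landmarks suffice (~5)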
|
1607.03475
|
Ping Li
|
Ping Li
|
Nystrom Method for Approximating the GMM Kernel
| null | null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The GMM (generalized min-max) kernel was recently proposed (Li, 2016) as a
measure of data similarity and was demonstrated effective in machine learning
tasks. In order to use the GMM kernel for large-scale datasets, the prior work
resorted to the (generalized) consistent weighted sampling (GCWS) to convert
the GMM kernel to a linear kernel. We call this approach ``GMM-GCWS''.
In the machine learning literature, there is a popular algorithm which we
call ``RBF-RFF''. That is, one can use the ``random Fourier features'' (RFF) to
convert the ``radial basis function'' (RBF) kernel to a linear kernel. It was
empirically shown in (Li, 2016) that RBF-RFF typically requires substantially
more samples than GMM-GCWS in order to achieve comparable accuracies.
The Nystrom method is a general tool for computing nonlinear kernels, which
again converts nonlinear kernels into linear kernels. We apply the Nystrom
method for approximating the GMM kernel, a strategy which we name
``GMM-NYS''. In this study, our extensive experiments on a set of fairly large
datasets confirm that GMM-NYS is also a strong competitor of RBF-RFF.
|
[
{
"version": "v1",
"created": "Tue, 12 Jul 2016 19:42:40 GMT"
}
] | 2016-07-13T00:00:00 |
[
[
"Li",
"Ping",
""
]
] |
TITLE: Nystrom Method for Approximating the GMM Kernel
ABSTRACT: The GMM (generalized min-max) kernel was recently proposed (Li, 2016) as a
measure of data similarity and was demonstrated effective in machine learning
tasks. In order to use the GMM kernel for large-scale datasets, the prior work
resorted to the (generalized) consistent weighted sampling (GCWS) to convert
the GMM kernel to a linear kernel. We call this approach ``GMM-GCWS''.
In the machine learning literature, there is a popular algorithm which we
call ``RBF-RFF''. That is, one can use the ``random Fourier features'' (RFF) to
convert the ``radial basis function'' (RBF) kernel to a linear kernel. It was
empirically shown in (Li, 2016) that RBF-RFF typically requires substantially
more samples than GMM-GCWS in order to achieve comparable accuracies.
The Nystrom method is a general tool for computing nonlinear kernels, which
again converts nonlinear kernels into linear kernels. We apply the Nystrom
method for approximating the GMM kernel, a strategy which we name
``GMM-NYS''. In this study, our extensive experiments on a set of fairly large
datasets confirm that GMM-NYS is also a strong competitor of RBF-RFF.
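Both ingredients are compact: the generalized min-max kernel on (positive, negative)-split data, and the standard Nystrom feature map K_nm K_mm^{-1/2}. A small sketch of the GMM-NYS combination follows; the landmark count and data are illustrative.

import numpy as np

def gmm_split(X):
    """Split coordinates into positive and negative parts so min/max are well defined."""
    return np.hstack([np.maximum(X, 0), np.maximum(-X, 0)])

def gmm_kernel(U, V):
    """k(u, v) = sum_i min(u_i, v_i) / sum_i max(u_i, v_i) on nonnegative vectors."""
    mins = np.minimum(U[:, None, :], V[None, :, :]).sum(-1)
    maxs = np.maximum(U[:, None, :], V[None, :, :]).sum(-1)
    return mins / maxs

def nystrom_map(U, landmarks):
    """Nystrom features whose inner products approximate the GMM kernel."""
    K_mm = gmm_kernel(landmarks, landmarks)
    vals, vecs = np.linalg.eigh(K_mm)
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
    return gmm_kernel(U, landmarks) @ inv_sqrt

rng = np.random.default_rng(0)
U = gmm_split(rng.standard_normal((200, 10)))
Z = nystrom_map(U, U[rng.choice(200, 50, replace=False)])
print(np.abs(Z @ Z.T - gmm_kernel(U, U)).mean())     # small approximation error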
|
1607.00148
|
Pankaj Malhotra Mr.
|
Pankaj Malhotra, Anusha Ramakrishnan, Gaurangi Anand, Lovekesh Vig,
Puneet Agarwal, Gautam Shroff
|
LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection
|
Accepted at ICML 2016 Anomaly Detection Workshop, New York, NY, USA,
2016. Reference update in this version (v2)
| null | null | null |
cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mechanical devices such as engines, vehicles, aircraft, etc., are typically
instrumented with numerous sensors to capture the behavior and health of the
machine. However, there are often external factors or variables which are not
captured by sensors leading to time-series which are inherently unpredictable.
For instance, manual controls and/or unmonitored environmental conditions or
load may lead to inherently unpredictable time-series. Detecting anomalies in
such scenarios becomes challenging using standard approaches based on
mathematical models that rely on stationarity, or prediction models that
utilize prediction errors to detect anomalies. We propose a Long Short Term
Memory Networks based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD)
that learns to reconstruct 'normal' time-series behavior, and thereafter uses
reconstruction error to detect anomalies. We experiment with three publicly
available quasi-predictable time-series datasets: power demand, space shuttle,
and ECG, and two real-world engine datasets with both predictable and
unpredictable behavior. We show that EncDec-AD is robust and can detect
anomalies from predictable, unpredictable, periodic, aperiodic, and
quasi-periodic time-series. Further, we show that EncDec-AD is able to detect
anomalies from short time-series (length as small as 30) as well as long
time-series (length as large as 500).
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 08:25:48 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Jul 2016 09:33:48 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Malhotra",
"Pankaj",
""
],
[
"Ramakrishnan",
"Anusha",
""
],
[
"Anand",
"Gaurangi",
""
],
[
"Vig",
"Lovekesh",
""
],
[
"Agarwal",
"Puneet",
""
],
[
"Shroff",
"Gautam",
""
]
] |
TITLE: LSTM-based Encoder-Decoder for Multi-sensor Anomaly Detection
ABSTRACT: Mechanical devices such as engines, vehicles, aircraft, etc., are typically
instrumented with numerous sensors to capture the behavior and health of the
machine. However, there are often external factors or variables which are not
captured by sensors leading to time-series which are inherently unpredictable.
For instance, manual controls and/or unmonitored environmental conditions or
load may lead to inherently unpredictable time-series. Detecting anomalies in
such scenarios becomes challenging using standard approaches based on
mathematical models that rely on stationarity, or prediction models that
utilize prediction errors to detect anomalies. We propose a Long Short Term
Memory Networks based Encoder-Decoder scheme for Anomaly Detection (EncDec-AD)
that learns to reconstruct 'normal' time-series behavior, and thereafter uses
reconstruction error to detect anomalies. We experiment with three publicly
available quasi-predictable time-series datasets: power demand, space shuttle,
and ECG, and two real-world engine datasets with both predictable and
unpredictable behavior. We show that EncDec-AD is robust and can detect
anomalies from predictable, unpredictable, periodic, aperiodic, and
quasi-periodic time-series. Further, we show that EncDec-AD is able to detect
anomalies from short time-series (length as small as 30) as well as long
time-series (length as large as 500).
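Downstream of the encoder-decoder, the anomaly score is the Mahalanobis distance of a window's reconstruction-error vector under a Gaussian fitted on normal data. That scoring step in isolation, with synthetic errors standing in for LSTM reconstructions:

import numpy as np

def fit_error_model(errors):
    """Fit a Gaussian to reconstruction-error vectors from normal data."""
    mu = errors.mean(axis=0)
    cov = np.cov(errors, rowvar=False) + 1e-6 * np.eye(errors.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(e, mu, precision):
    """Score a = (e - mu)^T Sigma^{-1} (e - mu); large values flag anomalies."""
    d = e - mu
    return float(d @ precision @ d)

rng = np.random.default_rng(0)
normal_errors = 0.1 * rng.standard_normal((500, 4))        # errors on 'normal' windows
mu, precision = fit_error_model(normal_errors)
print(anomaly_score(0.1 * rng.standard_normal(4), mu, precision))     # low: normal-looking
print(anomaly_score(np.array([1.0, -1.2, 0.8, 1.5]), mu, precision))  # high: anomalous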
|
1607.02556
|
Jialin Wu
|
Jialin Wu, Gu Wang, Wukui Yang, Xiangyang Ji
|
Action Recognition with Joint Attention on Multi-Level Deep Features
|
13 pages, submitted to BMVC
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel deep supervised neural network for the task of action
recognition in videos, which implicitly takes advantage of visual tracking and
shares the robustness of both deep Convolutional Neural Network (CNN) and
Recurrent Neural Network (RNN). In our method, a multi-branch model is proposed
to suppress noise from background jitter. Specifically, we first extract
multi-level deep features from deep CNNs and feed them into a 3D convolutional
network. After that, we feed those feature cubes into our novel joint LSTM
module to predict labels and to generate attention regularization. We evaluate
our model on two challenging datasets: UCF101 and HMDB51. The results show that
our model achieves the state of the art using only convolutional features.
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2016 01:25:24 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Wu",
"Jialin",
""
],
[
"Wang",
"Gu",
""
],
[
"Yang",
"Wukui",
""
],
[
"Ji",
"Xiangyang",
""
]
] |
TITLE: Action Recognition with Joint Attention on Multi-Level Deep Features
ABSTRACT: We propose a novel deep supervised neural network for the task of action
recognition in videos, which implicitly takes advantage of visual tracking and
shares the robustness of both deep Convolutional Neural Network (CNN) and
Recurrent Neural Network (RNN). In our method, a multi-branch model is proposed
to suppress noise from background jitter. Specifically, we first extract
multi-level deep features from deep CNNs and feed them into a 3D convolutional
network. After that, we feed those feature cubes into our novel joint LSTM
module to predict labels and to generate attention regularization. We evaluate
our model on two challenging datasets: UCF101 and HMDB51. The results show that
our model achieves the state of the art using only convolutional features.
|
1607.02559
|
Xiaojun Chang
|
Sen Wang and Feiping Nie and Xiaojun Chang and Xue Li and Quan Z.
Sheng and Lina Yao
|
Uncovering Locally Discriminative Structure for Feature Analysis
|
Accepted by ECML/PKDD2016
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Manifold structure learning is often used to exploit geometric information
among data in semi-supervised feature learning algorithms. In this paper, we
find that local discriminative information is also of importance for
semi-supervised feature learning. We propose a method that utilizes both the
manifold structure of data and local discriminant information. Specifically, we
define a local clique for each data point. The k-Nearest Neighbors (kNN) is
used to determine the structural information within each clique. We then apply
a variant of the Fisher criterion model to each clique for local discriminant
evaluation and sum over all cliques for global integration into the framework. In
this way, local discriminant information is embedded. Labels are also utilized
to minimize distances between data from the same class. In addition, we use the
kernel method to extend our proposed model and facilitate feature learning in a
high-dimensional space after feature mapping. Experimental results show that
our method is superior to all other compared methods over a number of datasets.
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2016 02:29:53 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Wang",
"Sen",
""
],
[
"Nie",
"Feiping",
""
],
[
"Chang",
"Xiaojun",
""
],
[
"Li",
"Xue",
""
],
[
"Sheng",
"Quan Z.",
""
],
[
"Yao",
"Lina",
""
]
] |
TITLE: Uncovering Locally Discriminative Structure for Feature Analysis
ABSTRACT: Manifold structure learning is often used to exploit geometric information
among data in semi-supervised feature learning algorithms. In this paper, we
find that local discriminative information is also of importance for
semi-supervised feature learning. We propose a method that utilizes both the
manifold structure of data and local discriminant information. Specifically, we
define a local clique for each data point. The k-Nearest Neighbors (kNN) is
used to determine the structural information within each clique. We then apply
a variant of the Fisher criterion model to each clique for local discriminant
evaluation and sum over all cliques for global integration into the framework. In
this way, local discriminant information is embedded. Labels are also utilized
to minimize distances between data from the same class. In addition, we use the
kernel method to extend our proposed model and facilitate feature learning in a
high-dimensional space after feature mapping. Experimental results show that
our method is superior to all other compared methods over a number of datasets.
|
1607.02643
|
Mostafa Ibrahim Mostafa Ibrahim
|
Mostafa S. Ibrahim, Srikanth Muralidharan, Zhiwei Deng, Arash Vahdat,
Greg Mori
|
Hierarchical Deep Temporal Models for Group Activity Recognition
|
arXiv admin note: text overlap with arXiv:1511.06040
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present an approach for classifying the activity performed
by a group of people in a video sequence. This problem of group activity
recognition can be addressed by examining individual person actions and their
relations. Temporal dynamics exist both at the level of individual person
actions as well as at the level of group activity. Given a video sequence as
input, methods can be developed to capture these dynamics at both person-level
and group-level detail. We build a deep model to capture these dynamics based
on LSTM (long short-term memory) models. In order to model both person-level
and group-level dynamics, we present a 2-stage deep temporal model for the
group activity recognition problem. In our approach, one LSTM model is designed
to represent action dynamics of individual people in a video sequence and
another LSTM model is designed to aggregate person-level information for group
activity recognition. We collected a new dataset consisting of volleyball
videos labeled with individual and group activities in order to evaluate our
method. Experimental results on this new Volleyball Dataset and the standard
benchmark Collective Activity Dataset demonstrate the efficacy of the proposed
models.
|
[
{
"version": "v1",
"created": "Sat, 9 Jul 2016 18:23:36 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Ibrahim",
"Mostafa S.",
""
],
[
"Muralidharan",
"Srikanth",
""
],
[
"Deng",
"Zhiwei",
""
],
[
"Vahdat",
"Arash",
""
],
[
"Mori",
"Greg",
""
]
] |
TITLE: Hierarchical Deep Temporal Models for Group Activity Recognition
ABSTRACT: In this paper we present an approach for classifying the activity performed
by a group of people in a video sequence. This problem of group activity
recognition can be addressed by examining individual person actions and their
relations. Temporal dynamics exist both at the level of individual person
actions as well as at the level of group activity. Given a video sequence as
input, methods can be developed to capture these dynamics at both person-level
and group-level detail. We build a deep model to capture these dynamics based
on LSTM (long short-term memory) models. In order to model both person-level
and group-level dynamics, we present a 2-stage deep temporal model for the
group activity recognition problem. In our approach, one LSTM model is designed
to represent action dynamics of individual people in a video sequence and
another LSTM model is designed to aggregate person-level information for group
activity recognition. We collected a new dataset consisting of volleyball
videos labeled with individual and group activities in order to evaluate our
method. Experimental results on this new Volleyball Dataset and the standard
benchmark Collective Activity Dataset demonstrate the efficacy of the proposed
models.
|
1607.02678
|
Wei Li
|
Wei Li, Farnaz Abtahi, Christina Tsangouri, Zhigang Zhu
|
Towards an "In-the-Wild" Emotion Dataset Using a Game-based Framework
|
This paper is accepted at CVPR 2016 Workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to create an "in-the-wild" dataset of facial emotions with a large
number of balanced samples, this paper proposes a game-based data collection
framework. The framework mainly includes three components: a game engine, a
game interface, and a data collection and evaluation module. We use a deep
learning approach to build an emotion classifier as the game engine. Then an
emotion web game is built to allow gamers to enjoy playing, while the data
collection module obtains automatically-labelled emotion images. Using our
game, we have collected more than 15,000 images within a month of the test run
and built an emotion dataset "GaMo". To evaluate the dataset, we compared the
performance of two deep learning models trained on both GaMo and CIFE. The
results of our experiments show that, because it is large and balanced, GaMo
can be used to build a more robust emotion detector than the emotion detector
trained on CIFE, which was used in the game engine to collect the face images.
|
[
{
"version": "v1",
"created": "Sun, 10 Jul 2016 02:16:10 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Li",
"Wei",
""
],
[
"Abtahi",
"Farnaz",
""
],
[
"Tsangouri",
"Christina",
""
],
[
"Zhu",
"Zhigang",
""
]
] |
TITLE: Towards an "In-the-Wild" Emotion Dataset Using a Game-based Framework
ABSTRACT: In order to create an "in-the-wild" dataset of facial emotions with a large
number of balanced samples, this paper proposes a game-based data collection
framework. The framework mainly includes three components: a game engine, a
game interface, and a data collection and evaluation module. We use a deep
learning approach to build an emotion classifier as the game engine. Then an
emotion web game is built to allow gamers to enjoy playing, while the data
collection module obtains automatically-labelled emotion images. Using our
game, we have collected more than 15,000 images within a month of the test run
and built an emotion dataset "GaMo". To evaluate the dataset, we compared the
performance of two deep learning models trained on both GaMo and CIFE. The
results of our experiments show that, because it is large and balanced, GaMo
can be used to build a more robust emotion detector than the emotion detector
trained on CIFE, which was used in the game engine to collect the face images.
|
1607.02769
|
Gitit Kehat
|
Gitit Kehat and James Pustejovsky
|
Annotation Methodologies for Vision and Language Dataset Creation
|
in Scene Understanding Workshop (SUNw) in CVPR 2016
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Annotated datasets are commonly used in the training and evaluation of tasks
involving natural language and vision (image description generation, action
recognition and visual question answering). However, many of the existing
datasets reflect problems that emerge in the process of data selection and
annotation. Here we point out some of the difficulties and problems one
confronts when creating and validating annotated vision and language datasets.
|
[
{
"version": "v1",
"created": "Sun, 10 Jul 2016 18:11:27 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Kehat",
"Gitit",
""
],
[
"Pustejovsky",
"James",
""
]
] |
TITLE: Annotation Methodologies for Vision and Language Dataset Creation
ABSTRACT: Annotated datasets are commonly used in the training and evaluation of tasks
involving natural language and vision (image description generation, action
recognition and visual question answering). However, many of the existing
datasets reflect problems that emerge in the process of data selection and
annotation. Here we point out some of the difficulties and problems one
confronts when creating and validating annotated vision and language datasets.
|
1607.02802
|
Franck Dernoncourt
|
Franck Dernoncourt
|
Mapping distributional to model-theoretic semantic spaces: a baseline
| null | null | null | null |
cs.CL cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Word embeddings have been shown to be useful across state-of-the-art systems
in many natural language processing tasks, ranging from question answering
systems to dependency parsing. (Herbelot and Vecchi, 2015) explored word
embeddings and their utility for modeling language semantics. In particular,
they presented an approach to automatically map a standard distributional
semantic space onto a set-theoretic model using partial least squares
regression. We show in this paper that a simple baseline achieves a +51%
relative improvement compared to their model on one of the two datasets they
used, and yields competitive results on the second dataset.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2016 01:20:57 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Dernoncourt",
"Franck",
""
]
] |
TITLE: Mapping distributional to model-theoretic semantic spaces: a baseline
ABSTRACT: Word embeddings have been shown to be useful across state-of-the-art systems
in many natural language processing tasks, ranging from question answering
systems to dependency parsing. (Herbelot and Vecchi, 2015) explored word
embeddings and their utility for modeling language semantics. In particular,
they presented an approach to automatically map a standard distributional
semantic space onto a set-theoretic model using partial least squares
regression. We show in this paper that a simple baseline achieves a +51%
relative improvement compared to their model on one of the two datasets they
used, and yields competitive results on the second dataset.
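The mapping under comparison, a partial least squares regression from distributional vectors onto set-theoretic feature values, takes a few lines with scikit-learn. The data below is a synthetic stand-in for the two semantic spaces in Herbelot and Vecchi's setup.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
D = rng.standard_normal((300, 100))                   # distributional word vectors
M = 0.1 * (D @ rng.standard_normal((100, 20)))        # toy model-theoretic targets

pls = PLSRegression(n_components=30)
pls.fit(D[:250], M[:250])                             # learn the cross-space mapping
print(np.abs(pls.predict(D[250:]) - M[250:]).mean())  # held-out mapping error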
|
1607.02815
|
Chao-Yeh Chen
|
Chao-Yeh Chen and Kristen Grauman
|
Efficient Activity Detection in Untrimmed Video with Max-Subgraph Search
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an efficient approach for activity detection in video that unifies
activity categorization with space-time localization. The main idea is to pose
activity detection as a maximum-weight connected subgraph problem. Offline, we
learn a binary classifier for an activity category using positive video
exemplars that are "trimmed" in time to the activity of interest. Then, given a
novel \emph{untrimmed} video sequence, we decompose it into a 3D array of
space-time nodes, which are weighted based on the extent to which their
component features support the learned activity model. To perform detection, we
then directly localize instances of the activity by solving for the
maximum-weight connected subgraph in the test video's space-time graph. We show
that this detection strategy permits an efficient branch-and-cut solution for
the best-scoring---and possibly non-cubically shaped---portion of the video for
a given activity classifier. The upshot is a fast method that can search a
broader space of space-time region candidates than was previously practical,
which we find often leads to more accurate detection. We demonstrate the
proposed algorithm on four datasets, and we show its speed and accuracy
advantages over multiple existing search strategies.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2016 03:48:21 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Chen",
"Chao-Yeh",
""
],
[
"Grauman",
"Kristen",
""
]
] |
TITLE: Efficient Activity Detection in Untrimmed Video with Max-Subgraph Search
ABSTRACT: We propose an efficient approach for activity detection in video that unifies
activity categorization with space-time localization. The main idea is to pose
activity detection as a maximum-weight connected subgraph problem. Offline, we
learn a binary classifier for an activity category using positive video
exemplars that are "trimmed" in time to the activity of interest. Then, given a
novel \emph{untrimmed} video sequence, we decompose it into a 3D array of
space-time nodes, which are weighted based on the extent to which their
component features support the learned activity model. To perform detection, we
then directly localize instances of the activity by solving for the
maximum-weight connected subgraph in the test video's space-time graph. We show
that this detection strategy permits an efficient branch-and-cut solution for
the best-scoring---and possibly non-cubically shaped---portion of the video for
a given activity classifier. The upshot is a fast method that can search a
broader space of space-time region candidates than was previously practical,
which we find often leads to more accurate detection. We demonstrate the
proposed algorithm on four datasets, and we show its speed and accuracy
advantages over multiple existing search strategies.
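In the simplest setting, a purely temporal graph with one node per frame, the maximum-weight connected subgraph reduces to the classic maximum-sum subarray, solvable in one pass. This 1D special case illustrates what the branch-and-cut search generalizes to 3D space-time graphs.

def best_temporal_interval(node_scores):
    """Maximum-sum contiguous interval of per-frame classifier scores (Kadane)."""
    best, best_span = float('-inf'), (0, 0)
    run, lo = 0.0, 0
    for i, s in enumerate(node_scores):
        if run <= 0:                 # a negative prefix never helps; restart here
            run, lo = 0.0, i
        run += s
        if run > best:
            best, best_span = run, (lo, i)
    return best, best_span

scores = [-1.0, -0.5, 2.0, 1.5, -0.3, 2.2, -4.0, 0.5]   # per-frame activity scores
print(best_temporal_interval(scores))                    # (5.4, (2, 5)): the detected span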
|
1607.02858
|
Takuya Kitazawa
|
Takuya Kitazawa
|
Incremental Factorization Machines for Persistently Cold-starting Online
Item Recommendation
|
4 pages, 6 figures, The 1st Workshop on Profiling User Preferences
for Dynamic Online and Real-Time Recommendations, RecSys 2016
| null | null | null |
cs.LG cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-world item recommenders commonly suffer from a persistent cold-start
problem which is caused by dynamically changing users and items. In order to
overcome the problem, several context-aware recommendation techniques have been
recently proposed. In terms of both feasibility and performance, the
factorization machine (FM) is one of the most promising methods, as a
generalization of the conventional matrix factorization techniques. However,
since online algorithms are better suited to dynamic data, the static FMs are
still inadequate. Thus, this paper proposes incremental FMs (iFMs), a general
online factorization framework, and specifically extends iFMs into an online
item recommender. The
proposed framework can be a promising baseline for further development of the
production recommender systems. Evaluation is done empirically both on
synthetic and real-world unstable datasets.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2016 08:37:42 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Kitazawa",
"Takuya",
""
]
] |
TITLE: Incremental Factorization Machines for Persistently Cold-starting Online
Item Recommendation
ABSTRACT: Real-world item recommenders commonly suffer from a persistent cold-start
problem which is caused by dynamically changing users and items. In order to
overcome the problem, several context-aware recommendation techniques have been
recently proposed. In terms of both feasibility and performance, the
factorization machine (FM) is one of the most promising methods, as a
generalization of the conventional matrix factorization techniques. However,
since online algorithms are better suited to dynamic data, the static FMs are
still inadequate. Thus, this paper proposes incremental FMs (iFMs), a general
online factorization framework, and specifically extends iFMs into an online
item recommender. The
proposed framework can be a promising baseline for further development of the
production recommender systems. Evaluation is done empirically both on
synthetic and real-world unstable datasets.
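A second-order factorization machine updated one example at a time conveys the flavour of the online setting. The class below is an illustrative SGD sketch under squared loss, not the paper's exact update rules or regularization.

import numpy as np

class IncrementalFM:
    """Second-order FM: y = w0 + w.x + 0.5 * sum_f [(V[:,f].x)^2 - (V[:,f]^2).(x^2)]."""
    def __init__(self, n_features, k=8, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.w0, self.w = 0.0, np.zeros(n_features)
        self.V = 0.01 * rng.standard_normal((n_features, k))
        self.lr = lr

    def predict(self, x):
        s = self.V.T @ x
        return self.w0 + self.w @ x + 0.5 * (s @ s - ((self.V ** 2).T @ (x ** 2)).sum())

    def update(self, x, y):
        err = self.predict(x) - y                  # gradient of 0.5 * squared error
        s = self.V.T @ x
        self.w0 -= self.lr * err
        self.w -= self.lr * err * x
        self.V -= self.lr * err * (np.outer(x, s) - self.V * (x ** 2)[:, None])

rng = np.random.default_rng(1)
fm = IncrementalFM(n_features=10)
for _ in range(5000):                              # one pass over a data stream
    x = (rng.random(10) < 0.3).astype(float)
    y = 2.0 * x[0] * x[1] + x[2]                   # a pairwise interaction to learn
    fm.update(x, y)
print(fm.predict(np.eye(10)[0] + np.eye(10)[1]))   # roughly 2.0 once the interaction is learned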
|
1607.03050
|
Daniel Jiwoong Im
|
Daniel Jiwoong Im, Graham W. Taylor
|
Learning a metric for class-conditional KNN
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Naive Bayes Nearest Neighbour (NBNN) is a simple and effective framework
which addresses many of the pitfalls of K-Nearest Neighbour (KNN)
classification. It has yielded competitive results on several computer vision
benchmarks. Its central tenet is that during NN search, a query is not compared
to every example in a database, ignoring class information. Instead, NN
searches are performed within each class, generating a score per class. A key
problem with NN techniques, including NBNN, is that they fail when the data
representation does not capture perceptual (e.g.~class-based) similarity. NBNN
circumvents this by using independent engineered descriptors (e.g.~SIFT). To
extend its applicability outside of image-based domains, we propose to learn a
metric which captures perceptual similarity. Similar to how Neighbourhood
Components Analysis optimizes a differentiable form of KNN classification, we
propose "Class Conditional" metric learning (CCML), which optimizes a soft form
of the NBNN selection rule. Typical metric learning algorithms learn either a
global or local metric. However, our proposed method can be adjusted to a
particular level of locality by tuning a single parameter. An empirical
evaluation on classification and retrieval tasks demonstrates that our proposed
method clearly outperforms existing learned distance metrics across a variety
of image and non-image datasets.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2016 17:29:19 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Im",
"Daniel Jiwoong",
""
],
[
"Taylor",
"Graham W.",
""
]
] |
TITLE: Learning a metric for class-conditional KNN
ABSTRACT: Naive Bayes Nearest Neighbour (NBNN) is a simple and effective framework
which addresses many of the pitfalls of K-Nearest Neighbour (KNN)
classification. It has yielded competitive results on several computer vision
benchmarks. Its central tenet is that during NN search, a query is not compared
to every example in a database, ignoring class information. Instead, NN
searches are performed within each class, generating a score per class. A key
problem with NN techniques, including NBNN, is that they fail when the data
representation does not capture perceptual (e.g.~class-based) similarity. NBNN
circumvents this by using independent engineered descriptors (e.g.~SIFT). To
extend its applicability outside of image-based domains, we propose to learn a
metric which captures perceptual similarity. Similar to how Neighbourhood
Components Analysis optimizes a differentiable form of KNN classification, we
propose "Class Conditional" metric learning (CCML), which optimizes a soft form
of the NBNN selection rule. Typical metric learning algorithms learn either a
global or local metric. However, our proposed method can be adjusted to a
particular level of locality by tuning a single parameter. An empirical
evaluation on classification and retrieval tasks demonstrates that our proposed
method clearly outperforms existing learned distance metrics across a variety
of image and non-image datasets.
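The selection rule that CCML softens is easy to state in code: match every query descriptor to its nearest neighbour within each class and pick the class with the smallest total distance. A plain NBNN sketch on toy data, with no learned metric:

import numpy as np

def nbnn_classify(query_descriptors, class_databases):
    """NBNN: per-class sums of nearest-neighbour distances, smallest sum wins."""
    scores = {}
    for label, db in class_databases.items():
        d2 = ((query_descriptors[:, None, :] - db[None, :, :]) ** 2).sum(-1)
        scores[label] = d2.min(axis=1).sum()     # NN search restricted to this class
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(0)
databases = {"cat": rng.standard_normal((100, 8)) + 1.0,
             "dog": rng.standard_normal((100, 8)) - 1.0}
query = rng.standard_normal((20, 8)) + 1.0       # descriptors drawn near the 'cat' cluster
print(nbnn_classify(query, databases)[0])        # 'cat'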
|
1607.03057
|
Pedro Saleiro
|
Pedro Saleiro, Carlos Soares
|
Learning from the News: Predicting Entity Popularity on Twitter
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we tackle the problem of predicting entity popularity on
Twitter based on the news cycle. We apply a supervised learning approach and
extract four types of features: (i) signal, (ii) textual, (iii) sentiment and
(iv) semantic, which we use to predict whether the popularity of a given entity
will be high or low in the following hours. We run several experiments on six
different entities in a dataset of over 150M tweets and 5M news and obtained F1
scores over 0.70. Error analysis indicates that news performs better at
predicting entity popularity on Twitter when it is the primary information
source of the event, as opposed to events such as live TV broadcasts,
political debates or football matches.
|
[
{
"version": "v1",
"created": "Mon, 11 Jul 2016 17:53:27 GMT"
}
] | 2016-07-12T00:00:00 |
[
[
"Saleiro",
"Pedro",
""
],
[
"Soares",
"Carlos",
""
]
] |
TITLE: Learning from the News: Predicting Entity Popularity on Twitter
ABSTRACT: In this work, we tackle the problem of predicting entity popularity on
Twitter based on the news cycle. We apply a supervised learning approach and
extract four types of features: (i) signal, (ii) textual, (iii) sentiment and
(iv) semantic, which we use to predict whether the popularity of a given entity
will be high or low in the following hours. We run several experiments on six
different entities in a dataset of over 150M tweets and 5M news and obtained F1
scores over 0.70. Error analysis indicates that news performs better at
predicting entity popularity on Twitter when it is the primary information
source of the event, as opposed to events such as live TV broadcasts,
political debates or football matches.
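A minimal sketch of the supervised setup this abstract describes: predict whether an entity's popularity will be high or low from the four feature types. The concrete feature columns and the synthetic data below are placeholders for illustration only, not the paper's features.

```python
# Toy high/low popularity classifier over the four feature families.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.poisson(5, n),        # signal: entity mention counts in recent news
    rng.normal(0, 1, n),      # textual: e.g. a tf-idf summary score
    rng.uniform(-1, 1, n),    # sentiment of the mentions
    rng.integers(0, 2, n),    # semantic: e.g. an event-type indicator
]).astype(float)
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(0, 1, n) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
```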
|
1502.06470
|
Eric Tramel
|
Eric W. Tramel and Ang\'elique Dr\'emeau and Florent Krzakala
|
Approximate Message Passing with Restricted Boltzmann Machine Priors
| null |
J. Stat. Mech. (2016) 073401
|
10.1088/1742-5468/2016/07/073401
| null |
cs.IT cond-mat.dis-nn math.IT physics.data-an stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Approximate Message Passing (AMP) has been shown to be an excellent
statistical approach to signal inference and compressed sensing problems. The
AMP framework provides modularity in the choice of signal prior; here we
propose a hierarchical form of the Gauss-Bernoulli prior which utilizes a
Restricted Boltzmann Machine (RBM) trained on the signal support to push
reconstruction performance beyond that of simple iid priors for signals whose
support can be well represented by a trained binary RBM. We present and analyze
two methods of RBM factorization and demonstrate how these affect signal
reconstruction performance within our proposed algorithm. Finally, using the
MNIST handwritten digit dataset, we show experimentally that using an RBM
allows AMP to approach oracle-support performance.
|
[
{
"version": "v1",
"created": "Mon, 23 Feb 2015 15:51:07 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Jun 2015 14:05:45 GMT"
},
{
"version": "v3",
"created": "Thu, 10 Dec 2015 03:45:32 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Tramel",
"Eric W.",
""
],
[
"Drémeau",
"Angélique",
""
],
[
"Krzakala",
"Florent",
""
]
] |
TITLE: Approximate Message Passing with Restricted Boltzmann Machine Priors
ABSTRACT: Approximate Message Passing (AMP) has been shown to be an excellent
statistical approach to signal inference and compressed sensing problems. The
AMP framework provides modularity in the choice of signal prior; here we
propose a hierarchical form of the Gauss-Bernoulli prior which utilizes a
Restricted Boltzmann Machine (RBM) trained on the signal support to push
reconstruction performance beyond that of simple iid priors for signals whose
support can be well represented by a trained binary RBM. We present and analyze
two methods of RBM factorization and demonstrate how these affect signal
reconstruction performance within our proposed algorithm. Finally, using the
MNIST handwritten digit dataset, we show experimentally that using an RBM
allows AMP to approach oracle-support performance.
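To make the "modularity in the choice of signal prior" concrete, here is a generic AMP loop for y = Ax + noise with a scalar soft-thresholding denoiser (a simple iid sparse prior); the paper's idea amounts to replacing that denoising step with one informed by a trained RBM. The fixed threshold, problem sizes, and iteration count are illustrative assumptions, not tuned values.

```python
# Sketch of a generic AMP loop with a pluggable scalar denoiser.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def amp(A, y, n_iter=30, thresh=0.1):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(n_iter):
        r = x + A.T @ z                      # pseudo-data for the prior step
        x_new = soft_threshold(r, thresh)    # prior-dependent denoising step
        onsager = (z / m) * np.count_nonzero(x_new)  # <eta'> correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x

rng = np.random.default_rng(0)
m, n = 120, 400
x0 = np.zeros(n); x0[rng.choice(n, 15, replace=False)] = rng.normal(size=15)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_hat = amp(A, A @ x0 + 0.01 * rng.normal(size=m))
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```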
|
1511.02136
|
James Atwood
|
James Atwood and Don Towsley
|
Diffusion-Convolutional Neural Networks
|
Full paper
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present diffusion-convolutional neural networks (DCNNs), a new model for
graph-structured data. Through the introduction of a diffusion-convolution
operation, we show how diffusion-based representations can be learned from
graph-structured data and used as an effective basis for node classification.
DCNNs have several attractive qualities, including a latent representation for
graphical data that is invariant under isomorphism, as well as polynomial-time
prediction and learning that can be represented as tensor operations and
efficiently implemented on the GPU. Through several experiments with real
structured datasets, we demonstrate that DCNNs are able to outperform
probabilistic relational models and kernel-on-graph methods at relational node
classification tasks.
|
[
{
"version": "v1",
"created": "Fri, 6 Nov 2015 16:09:32 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Nov 2015 14:33:30 GMT"
},
{
"version": "v3",
"created": "Fri, 20 Nov 2015 14:38:08 GMT"
},
{
"version": "v4",
"created": "Thu, 7 Jan 2016 19:33:18 GMT"
},
{
"version": "v5",
"created": "Tue, 19 Jan 2016 20:36:29 GMT"
},
{
"version": "v6",
"created": "Fri, 8 Jul 2016 15:05:17 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Atwood",
"James",
""
],
[
"Towsley",
"Don",
""
]
] |
TITLE: Diffusion-Convolutional Neural Networks
ABSTRACT: We present diffusion-convolutional neural networks (DCNNs), a new model for
graph-structured data. Through the introduction of a diffusion-convolution
operation, we show how diffusion-based representations can be learned from
graph-structured data and used as an effective basis for node classification.
DCNNs have several attractive qualities, including a latent representation for
graphical data that is invariant under isomorphism, as well as polynomial-time
prediction and learning that can be represented as tensor operations and
efficiently implemented on the GPU. Through several experiments with real
structured datasets, we demonstrate that DCNNs are able to outperform
probabilistic relational models and kernel-on-graph methods at relational node
classification tasks.
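A minimal sketch of the diffusion-convolution operation as described: node features are diffused over 0..H-1 hops of a degree-normalized transition matrix, weighted elementwise, and passed through a nonlinearity. The tiny graph and unit weights are placeholders; the paper's model feeds the resulting per-node representation into a dense softmax layer.

```python
# Diffusion-convolution for node classification (toy sizes).
import numpy as np

def diffusion_conv(A, X, W):
    """A: (n, n) adjacency; X: (n, f) features; W: (H, f) hop weights."""
    H, f = W.shape
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # transition matrix
    PX = np.empty((H, A.shape[0], f))
    PX[0] = X
    for h in range(1, H):
        PX[h] = P @ PX[h - 1]                # h-hop diffusion of the features
    Z = np.tanh(W[:, None, :] * PX)          # (H, n, f) diffusion activations
    return Z.transpose(1, 0, 2).reshape(A.shape[0], -1)  # per-node embedding

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = np.ones((3, 2))                          # 3 hops, 2 features
print(diffusion_conv(A, X, W).shape)         # (3, 6) -> dense softmax layer
```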
|
1511.05065
|
Bumsub Ham
|
Bumsub Ham, Minsu Cho, Cordelia Schmid, Jean Ponce
|
Proposal Flow
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finding image correspondences remains a challenging problem in the presence
of intra-class variations and large changes in scene layout. Semantic flow
methods are designed to handle images depicting different instances of the same
object or scene category. We introduce a novel approach to semantic flow,
dubbed proposal flow, that establishes reliable correspondences using object
proposals. Unlike prevailing semantic flow approaches that operate on pixels or
regularly sampled local regions, proposal flow benefits from the
characteristics of modern object proposals, which exhibit high repeatability at
multiple scales, and can take advantage of both local and geometric consistency
constraints among proposals. We also show that proposal flow can effectively be
transformed into a conventional dense flow field. We introduce a new dataset
that can be used to evaluate both general semantic flow techniques and
region-based approaches such as proposal flow. We use this benchmark to compare
different matching algorithms, object proposals, and region features within
proposal flow, to the state of the art in semantic flow. This comparison, along
with experiments on standard datasets, demonstrates that proposal flow
significantly outperforms existing semantic flow methods in various settings.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 17:54:45 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Mar 2016 16:19:40 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jul 2016 18:32:37 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Ham",
"Bumsub",
""
],
[
"Cho",
"Minsu",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Ponce",
"Jean",
""
]
] |
TITLE: Proposal Flow
ABSTRACT: Finding image correspondences remains a challenging problem in the presence
of intra-class variations and large changes in scene layout. Semantic flow
methods are designed to handle images depicting different instances of the same
object or scene category. We introduce a novel approach to semantic flow,
dubbed proposal flow, that establishes reliable correspondences using object
proposals. Unlike prevailing semantic flow approaches that operate on pixels or
regularly sampled local regions, proposal flow benefits from the
characteristics of modern object proposals, which exhibit high repeatability at
multiple scales, and can take advantage of both local and geometric consistency
constraints among proposals. We also show that proposal flow can effectively be
transformed into a conventional dense flow field. We introduce a new dataset
that can be used to evaluate both general semantic flow techniques and
region-based approaches such as proposal flow. We use this benchmark to compare
different matching algorithms, object proposals, and region features within
proposal flow, to the state of the art in semantic flow. This comparison, along
with experiments on standard datasets, demonstrates that proposal flow
significantly outperforms existing semantic flow methods in various settings.
|
1511.05292
|
Jinghua Wang
|
Jinghua Wang, Gang Wang
|
Hierarchical Spatial Sum-Product Networks for Action Recognition in
Still Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing actions from still images has been widely studied recently. In this
paper, we model an action class as a flexible number of spatial configurations
of body parts by proposing a new spatial SPN (Sum-Product Networks). First, we
discover a set of parts in image collections via unsupervised learning. Then,
our new spatial SPN is applied to model the spatial relationship and also the
high-order correlations of parts. To learn robust networks, we further develop
a hierarchical spatial SPN method, which models pairwise spatial relationship
between parts inside sub-images and models the correlation of sub-images via
extra layers of SPN. Our method is shown to be effective on two benchmark
datasets.
|
[
{
"version": "v1",
"created": "Tue, 17 Nov 2015 07:21:20 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Nov 2015 07:29:25 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jul 2016 01:41:41 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Wang",
"Jinghua",
""
],
[
"Wang",
"Gang",
""
]
] |
TITLE: Hierarchical Spatial Sum-Product Networks for Action Recognition in
Still Images
ABSTRACT: Recognizing actions from still images has been widely studied recently. In this
paper, we model an action class as a flexible number of spatial configurations
of body parts by proposing a new spatial SPN (Sum-Product Networks). First, we
discover a set of parts in image collections via unsupervised learning. Then,
our new spatial SPN is applied to model the spatial relationship and also the
high-order correlations of parts. To learn robust networks, we further develop
a hierarchical spatial SPN method, which models pairwise spatial relationship
between parts inside sub-images and models the correlation of sub-images via
extra layers of SPN. Our method is shown to be effective on two benchmark
datasets.
|
1605.02112
|
Seyoung Park
|
Seyoung Park, Bruce Xiaohan Nie, Song-Chun Zhu
|
Attribute And-Or Grammar for Joint Parsing of Human Attributes, Part and
Pose
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an attribute and-or grammar (A-AOG) model for jointly
inferring human body pose and human attributes in a parse graph with attributes
augmented to nodes in the hierarchical representation. In contrast to other
popular methods in the current literature that train separate classifiers for
poses and individual attributes, our method explicitly represents the
decomposition and articulation of body parts, and accounts for the correlations
between poses and attributes. The A-AOG model is an amalgamation of three
traditional grammar formulations: (i) Phrase structure grammar representing the
hierarchical decomposition of the human body from whole to parts; (ii)
Dependency grammar modeling the geometric articulation by a kinematic graph of
the body pose; and (iii) Attribute grammar accounting for the compatibility
relations between different parts in the hierarchy so that their appearances
follow a consistent style. The parse graph outputs human detection, pose
estimation, and attribute prediction simultaneously, which are intuitive and
interpretable. We conduct experiments on two tasks on two datasets, and
experimental results demonstrate the advantage of joint modeling in comparison
with computing poses and attributes independently. Furthermore, our model
obtains better performance over existing methods for both pose estimation and
attribute prediction tasks.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2016 22:23:41 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2016 20:10:52 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Park",
"Seyoung",
""
],
[
"Nie",
"Bruce Xiaohan",
""
],
[
"Zhu",
"Song-Chun",
""
]
] |
TITLE: Attribute And-Or Grammar for Joint Parsing of Human Attributes, Part and
Pose
ABSTRACT: This paper presents an attribute and-or grammar (A-AOG) model for jointly
inferring human body pose and human attributes in a parse graph with attributes
augmented to nodes in the hierarchical representation. In contrast to other
popular methods in the current literature that train separate classifiers for
poses and individual attributes, our method explicitly represents the
decomposition and articulation of body parts, and accounts for the correlations
between poses and attributes. The A-AOG model is an amalgamation of three
traditional grammar formulations: (i) Phrase structure grammar representing the
hierarchical decomposition of the human body from whole to parts; (ii)
Dependency grammar modeling the geometric articulation by a kinematic graph of
the body pose; and (iii) Attribute grammar accounting for the compatibility
relations between different parts in the hierarchy so that their appearances
follow a consistent style. The parse graph outputs human detection, pose
estimation, and attribute prediction simultaneously, which are intuitive and
interpretable. We conduct experiments on two tasks on two datasets, and
experimental results demonstrate the advantage of joint modeling in comparison
with computing poses and attributes independently. Furthermore, our model
obtains better performance over existing methods for both pose estimation and
attribute prediction tasks.
|
1607.02171
|
Eric Nunes
|
Eric Nunes, Paulo Shakarian, Gerardo I. Simari, Andrew Ruef
|
Argumentation Models for Cyber Attribution
|
8 pages paper to be presented at International Symposium on
Foundations of Open Source Intelligence and Security Informatics (FOSINT-SI)
2016 In conjunction with ASONAM 2016 San Francisco, CA, USA, August 19-20,
2016
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A major challenge in cyber-threat analysis is combining information from
different sources to find the person or the group responsible for the
cyber-attack. It is one of the most important technical and policy challenges
in cyber-security. The lack of ground truth for an individual responsible for
an attack has limited previous studies. In this paper, we take a first step
towards overcoming this limitation by building a dataset from the
capture-the-flag event held at DEFCON, and propose an argumentation model based
on a formal reasoning framework called DeLP (Defeasible Logic Programming)
designed to aid an analyst in attributing a cyber-attack. We build models from
latent variables to reduce the search space of culprits (attackers), and show
that this reduction significantly improves the performance of
classification-based approaches from 37% to 62% in identifying the attacker.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2016 21:01:06 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Nunes",
"Eric",
""
],
[
"Shakarian",
"Paulo",
""
],
[
"Simari",
"Gerardo I.",
""
],
[
"Ruef",
"Andrew",
""
]
] |
TITLE: Argumentation Models for Cyber Attribution
ABSTRACT: A major challenge in cyber-threat analysis is combining information from
different sources to find the person or the group responsible for the
cyber-attack. It is one of the most important technical and policy challenges
in cyber-security. The lack of ground truth for an individual responsible for
an attack has limited previous studies. In this paper, we take a first step
towards overcoming this limitation by building a dataset from the
capture-the-flag event held at DEFCON, and propose an argumentation model based
on a formal reasoning framework called DeLP (Defeasible Logic Programming)
designed to aid an analyst in attributing a cyber-attack. We build models from
latent variables to reduce the search space of culprits (attackers), and show
that this reduction significantly improves the performance of
classification-based approaches from 37% to 62% in identifying the attacker.
|
1607.02174
|
Faiza Khattak Faiza Khattak
|
Faiza Khan Khattak, Ansaf Salleb-Aouissi
|
Toward a Robust Crowd-labeling Framework using Expert Evaluation and
Pairwise Comparison
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crowd-labeling emerged from the need to label large-scale and complex data, a
tedious, expensive, and time-consuming task. One of the main challenges in the
crowd-labeling task is to control for or determine in advance the proportion of
low-quality/malicious labelers. If that proportion grows too high, there is
often a phase transition leading to a steep, non-linear drop in labeling
accuracy as noted by Karger et al. [2014]. To address these challenges, we
propose a new framework called Expert Label Injected Crowd Estimation (ELICE)
and extend it to different versions and variants that delay phase transition
leading to a better labeling accuracy. ELICE automatically combines and boosts
bulk crowd labels supported by labels from experts for a limited number of
instances from the dataset. The expert-labels help to estimate the individual
ability of crowd labelers and difficulty of each instance, both of which are
used to aggregate the labels. Empirical evaluation shows the superiority of
ELICE as compared to other state-of-the-art methods. We also derive a lower
bound on the number of expert-labeled instances needed to estimate the crowd
ability and dataset difficulty as well as to get better quality labels.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2016 21:23:20 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Khattak",
"Faiza Khan",
""
],
[
"Salleb-Aouissi",
"Ansaf",
""
]
] |
TITLE: Toward a Robust Crowd-labeling Framework using Expert Evaluation and
Pairwise Comparison
ABSTRACT: Crowd-labeling emerged from the need to label large-scale and complex data, a
tedious, expensive, and time-consuming task. One of the main challenges in the
crowd-labeling task is to control for or determine in advance the proportion of
low-quality/malicious labelers. If that proportion grows too high, there is
often a phase transition leading to a steep, non-linear drop in labeling
accuracy as noted by Karger et al. [2014]. To address these challenges, we
propose a new framework called Expert Label Injected Crowd Estimation (ELICE)
and extend it to different versions and variants that delay phase transition
leading to a better labeling accuracy. ELICE automatically combines and boosts
bulk crowd labels supported by labels from experts for a limited number of
instances from the dataset. The expert-labels help to estimate the individual
ability of crowd labelers and difficulty of each instance, both of which are
used to aggregate the labels. Empirical evaluation shows the superiority of
ELICE as compared to other state-of-the-art methods. We also derive a lower
bound on the number of expert-labeled instances needed to estimate the crowd
ability and dataset difficulty as well as to get better quality labels.
|
1607.02257
|
Rigas Kouskouridas
|
Andreas Doumanoglou, Vassileios Balntas, Rigas Kouskouridas, Tae-Kyun
Kim
|
Siamese Regression Networks with Efficient mid-level Feature Extraction
for 3D Object Pose Estimation
|
9 pages, paper submitted to NIPS 2016, project page:
http://www.iis.ee.ic.ac.uk/rkouskou/research/SRN.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we tackle the problem of estimating the 3D pose of object
instances, using convolutional neural networks. State of the art methods
usually solve the challenging problem of regression in angle space indirectly,
focusing on learning discriminative features that are later fed into a separate
architecture for 3D pose estimation. In contrast, we propose an end-to-end
learning framework for directly regressing object poses by exploiting Siamese
Networks. For a given image pair, we enforce a similarity measure between the
representation of the sample images in the feature and pose space respectively,
that is shown to boost regression performance. Furthermore, we argue that our
pose-guided feature learning using our Siamese Regression Network generates
more discriminative features that outperform the state of the art. Lastly, our
feature learning formulation provides the ability to learn features that can
perform under severe occlusions, demonstrating high performance on our novel
hand-object dataset.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2016 07:25:47 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Doumanoglou",
"Andreas",
""
],
[
"Balntas",
"Vassileios",
""
],
[
"Kouskouridas",
"Rigas",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] |
TITLE: Siamese Regression Networks with Efficient mid-level Feature Extraction
for 3D Object Pose Estimation
ABSTRACT: In this paper we tackle the problem of estimating the 3D pose of object
instances, using convolutional neural networks. State of the art methods
usually solve the challenging problem of regression in angle space indirectly,
focusing on learning discriminative features that are later fed into a separate
architecture for 3D pose estimation. In contrast, we propose an end-to-end
learning framework for directly regressing object poses by exploiting Siamese
Networks. For a given image pair, we enforce a similarity measure between the
representation of the sample images in the feature and pose space respectively,
that is shown to boost regression performance. Furthermore, we argue that our
pose-guided feature learning using our Siamese Regression Network generates
more discriminative features that outperform the state of the art. Last, our
feature learning formulation provides the ability of learning features that can
perform under severe occlusions, demonstrating high performance on our novel
hand-object dataset.
|
1607.02329
|
Markus Wulfmeier
|
Markus Wulfmeier, Dominic Zeng Wang and Ingmar Posner
|
Watch This: Scalable Cost-Function Learning for Path Planning in Urban
Environments
|
Accepted for publication in the Proceedings of the 2016 IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS 2016)
| null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present an approach to learn cost maps for driving in
complex urban environments from a very large number of demonstrations of
driving behaviour by human experts. The learned cost maps are constructed
directly from raw sensor measurements, bypassing the effort of manually
designing cost maps as well as features. When deploying the learned cost maps,
the trajectories generated not only replicate human-like driving behaviour but
are also demonstrably robust against systematic errors in putative robot
configuration. To achieve this we deploy a Maximum Entropy based, non-linear
IRL framework which uses Fully Convolutional Neural Networks (FCNs) to
represent the cost model underlying expert driving behaviour. Using a deep,
parametric approach enables us to scale efficiently to large datasets and
complex behaviours by being run-time independent of dataset extent during
deployment. We demonstrate the scalability and the performance of the proposed
approach on an ambitious dataset collected over the course of one year
including more than 25k demonstration trajectories extracted from over 120km of
driving around pedestrianised areas in the city of Milton Keynes, UK. We
evaluate the resulting cost representations by showing the advantages over a
carefully manually designed cost map and, in addition, demonstrate its
robustness to systematic errors by learning precise cost-maps even in the
presence of system calibration perturbations.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2016 11:59:51 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Wulfmeier",
"Markus",
""
],
[
"Wang",
"Dominic Zeng",
""
],
[
"Posner",
"Ingmar",
""
]
] |
TITLE: Watch This: Scalable Cost-Function Learning for Path Planning in Urban
Environments
ABSTRACT: In this work, we present an approach to learn cost maps for driving in
complex urban environments from a very large number of demonstrations of
driving behaviour by human experts. The learned cost maps are constructed
directly from raw sensor measurements, bypassing the effort of manually
designing cost maps as well as features. When deploying the learned cost maps,
the trajectories generated not only replicate human-like driving behaviour but
are also demonstrably robust against systematic errors in putative robot
configuration. To achieve this we deploy a Maximum Entropy based, non-linear
IRL framework which uses Fully Convolutional Neural Networks (FCNs) to
represent the cost model underlying expert driving behaviour. Using a deep,
parametric approach enables us to scale efficiently to large datasets and
complex behaviours by being run-time independent of dataset extent during
deployment. We demonstrate the scalability and the performance of the proposed
approach on an ambitious dataset collected over the course of one year
including more than 25k demonstration trajectories extracted from over 120km of
driving around pedestrianised areas in the city of Milton Keynes, UK. We
evaluate the resulting cost representations by showing the advantages over a
carefully manually designed cost map and, in addition, demonstrate its
robustness to systematic errors by learning precise cost-maps even in the
presence of system calibration perturbations.
|
1607.02383
|
Yoonchang Han
|
Yoonchang Han, Kyogu Lee
|
Acoustic scene classification using convolutional neural network and
multiple-width frequency-delta data augmentation
|
11 pages, 5 figures, submitted to IEEE/ACM Transactions on Audio,
Speech, and Language Processing on 08-July-2016
| null | null | null |
cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, neural network approaches have shown superior performance to
conventional hand-made features in numerous application areas. In particular,
convolutional neural networks (ConvNets) exploit spatially local correlations
across input data to improve the performance of audio processing tasks, such as
speech recognition, musical chord recognition, and onset detection. Here we
apply ConvNet to acoustic scene classification, and show that the error rate
can be further decreased by using delta features in the frequency domain. We
propose a multiple-width frequency-delta (MWFD) data augmentation method that
uses static mel-spectrogram and frequency-delta features as individual input
examples. In addition, we describe a ConvNet output aggregation method designed
for MWFD augmentation, folded mean aggregation, which combines output
probabilities of static and MWFD features from the same analysis window using
multiplication first, rather than taking an average of all output
probabilities. We describe calculation results using the DCASE 2016 challenge
dataset, which shows that ConvNet outperforms both the baseline system with
hand-crafted features and a deep neural network approach by around 7%. The
performance was further improved (by 5.7%) using the MWFD augmentation together
with folded mean aggregation. The system exhibited a classification accuracy of
0.831 when classifying 15 acoustic scenes.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2016 14:33:58 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Han",
"Yoonchang",
""
],
[
"Lee",
"Kyogu",
""
]
] |
TITLE: Acoustic scene classification using convolutional neural network and
multiple-width frequency-delta data augmentation
ABSTRACT: In recent years, neural network approaches have shown superior performance to
conventional hand-made features in numerous application areas. In particular,
convolutional neural networks (ConvNets) exploit spatially local correlations
across input data to improve the performance of audio processing tasks, such as
speech recognition, musical chord recognition, and onset detection. Here we
apply ConvNet to acoustic scene classification, and show that the error rate
can be further decreased by using delta features in the frequency domain. We
propose a multiple-width frequency-delta (MWFD) data augmentation method that
uses static mel-spectrogram and frequency-delta features as individual input
examples. In addition, we describe a ConvNet output aggregation method designed
for MWFD augmentation, folded mean aggregation, which combines output
probabilities of static and MWFD features from the same analysis window using
multiplication first, rather than taking an average of all output
probabilities. We describe calculation results using the DCASE 2016 challenge
dataset, which shows that ConvNet outperforms both the baseline system with
hand-crafted features and a deep neural network approach by around 7%. The
performance was further improved (by 5.7%) using the MWFD augmentation together
with folded mean aggregation. The system exhibited a classification accuracy of
0.831 when classifying 15 acoustic scenes.
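A rough sketch of the MWFD idea: regression deltas of a mel spectrogram computed along the frequency axis at several window widths, each treated as an additional input example alongside the static feature. The widths and array sizes below are illustrative assumptions.

```python
# Multiple-width frequency-delta (MWFD) augmentation sketch.
import numpy as np

def frequency_delta(S, width):
    """S: (n_mels, n_frames) mel spectrogram; width: odd window size.
    Standard regression-based delta, applied along the frequency axis."""
    half = width // 2
    k = np.arange(-half, half + 1)
    Sp = np.pad(S, ((half, half), (0, 0)), mode="edge")
    out = np.zeros_like(S)
    for i, w in enumerate(k):
        out += w * Sp[i:i + S.shape[0], :]   # weighted sum over the window
    return out / np.sum(k ** 2)

S = np.random.rand(40, 500)                  # static (log-)mel feature
examples = [S] + [frequency_delta(S, w) for w in (3, 11, 19)]
print([e.shape for e in examples])           # one ConvNet input example each
```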
|
1607.02504
|
Xiao Yang
|
Xiao Yang, Roland Kwitt, Marc Niethammer
|
Fast Predictive Image Registration
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method to predict image deformations based on patch-wise image
appearance. Specifically, we design a patch-based deep encoder-decoder network
which learns the pixel/voxel-wise mapping between image appearance and
registration parameters. Our approach can predict general deformation
parameterizations; however, we focus on the large deformation diffeomorphic
metric mapping (LDDMM) registration model. By predicting the LDDMM
momentum-parameterization we retain the desirable theoretical properties of
LDDMM, while reducing computation time by orders of magnitude: combined with
patch pruning, we achieve a 1500x/66x speed up compared to GPU-based
optimization for 2D/3D image registration. Our approach has better prediction
accuracy than predicting deformation or velocity fields and results in
diffeomorphic transformations. Additionally, we create a Bayesian probabilistic
version of our network, which allows evaluation of deformation field
uncertainty through Monte Carlo sampling using dropout at test time. We show
that deformation uncertainty highlights areas of ambiguous deformations. We
test our method on the OASIS brain image dataset in 2D and 3D.
|
[
{
"version": "v1",
"created": "Fri, 8 Jul 2016 19:58:56 GMT"
}
] | 2016-07-11T00:00:00 |
[
[
"Yang",
"Xiao",
""
],
[
"Kwitt",
"Roland",
""
],
[
"Niethammer",
"Marc",
""
]
] |
TITLE: Fast Predictive Image Registration
ABSTRACT: We present a method to predict image deformations based on patch-wise image
appearance. Specifically, we design a patch-based deep encoder-decoder network
which learns the pixel/voxel-wise mapping between image appearance and
registration parameters. Our approach can predict general deformation
parameterizations; however, we focus on the large deformation diffeomorphic
metric mapping (LDDMM) registration model. By predicting the LDDMM
momentum-parameterization we retain the desirable theoretical properties of
LDDMM, while reducing computation time by orders of magnitude: combined with
patch pruning, we achieve a 1500x/66x speed up compared to GPU-based
optimization for 2D/3D image registration. Our approach has better prediction
accuracy than predicting deformation or velocity fields and results in
diffeomorphic transformations. Additionally, we create a Bayesian probabilistic
version of our network, which allows evaluation of deformation field
uncertainty through Monte Carlo sampling using dropout at test time. We show
that deformation uncertainty highlights areas of ambiguous deformations. We
test our method on the OASIS brain image dataset in 2D and 3D.
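The test-time uncertainty procedure described above, Monte Carlo sampling with dropout left active, can be sketched as below; the tiny fully-connected network stands in for the paper's patch-based encoder-decoder, and all sizes are placeholders.

```python
# Monte Carlo dropout at test time: sample predictions, report variance.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(27, 64), nn.ReLU(), nn.Dropout(p=0.3),
                    nn.Linear(64, 3))    # patch -> momentum vector (toy sizes)

def mc_dropout_predict(model, x, n_samples=50):
    model.train()                        # keep dropout stochastic at test time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)

patches = torch.randn(16, 27)            # e.g. flattened 3x3x3 image patches
mean, var = mc_dropout_predict(net, patches)
print(mean.shape, var.shape)             # high variance = ambiguous deformation
```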
|
1406.2139
|
Conrad Sanderson
|
Masoud Faraki, Maziar Palhang, Conrad Sanderson
|
Log-Euclidean Bag of Words for Human Action Recognition
| null |
IET Computer Vision, Vol. 9, No. 3, 2015
|
10.1049/iet-cvi.2014.0018
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Representing videos by densely extracted local space-time features has
recently become a popular approach for analysing actions. In this paper, we
tackle the problem of categorising human actions by devising Bag of Words (BoW)
models based on covariance matrices of spatio-temporal features, with the
features formed from histograms of optical flow. Since covariance matrices form
a special type of Riemannian manifold, the space of Symmetric Positive Definite
(SPD) matrices, non-Euclidean geometry should be taken into account while
discriminating between covariance matrices. To this end, we propose to embed
SPD manifolds to Euclidean spaces via a diffeomorphism and extend the BoW
approach to its Riemannian version. The proposed BoW approach takes into
account the manifold geometry of SPD matrices during the generation of the
codebook and histograms. Experiments on challenging human action datasets show
that the proposed method obtains notable improvements in discrimination
accuracy, in comparison to several state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 9 Jun 2014 11:14:03 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Jul 2014 09:33:58 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jul 2016 09:27:40 GMT"
}
] | 2016-07-08T00:00:00 |
[
[
"Faraki",
"Masoud",
""
],
[
"Palhang",
"Maziar",
""
],
[
"Sanderson",
"Conrad",
""
]
] |
TITLE: Log-Euclidean Bag of Words for Human Action Recognition
ABSTRACT: Representing videos by densely extracted local space-time features has
recently become a popular approach for analysing actions. In this paper, we
tackle the problem of categorising human actions by devising Bag of Words (BoW)
models based on covariance matrices of spatio-temporal features, with the
features formed from histograms of optical flow. Since covariance matrices form
a special type of Riemannian manifold, the space of Symmetric Positive Definite
(SPD) matrices, non-Euclidean geometry should be taken into account while
discriminating between covariance matrices. To this end, we propose to embed
SPD manifolds to Euclidean spaces via a diffeomorphism and extend the BoW
approach to its Riemannian version. The proposed BoW approach takes into
account the manifold geometry of SPD matrices during the generation of the
codebook and histograms. Experiments on challenging human action datasets show
that the proposed method obtains notable improvements in discrimination
accuracy, in comparison to several state-of-the-art methods.
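The log-Euclidean embedding at the heart of this approach can be sketched as: eigendecompose each SPD covariance descriptor, take the matrix logarithm, vectorize, and build the BoW codebook with ordinary k-means in the resulting Euclidean space. The data here is synthetic, and a full pipeline would also weight off-diagonal entries and pool histograms per video.

```python
# Log-Euclidean mapping of SPD covariance descriptors + k-means codebook.
import numpy as np
from sklearn.cluster import KMeans

def log_euclidean_vector(C):
    """C: (d, d) SPD covariance descriptor -> flattened matrix logarithm."""
    w, V = np.linalg.eigh(C)
    logC = (V * np.log(np.maximum(w, 1e-12))) @ V.T   # V diag(log w) V^T
    return logC[np.triu_indices_from(logC)]           # upper triangle

rng = np.random.default_rng(0)
covs = []
for _ in range(200):                          # per-video covariance of
    F = rng.normal(size=(50, 6))              # spatio-temporal features
    covs.append(np.cov(F, rowvar=False) + 1e-6 * np.eye(6))

embedded = np.array([log_euclidean_vector(C) for C in covs])
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(embedded)
hist = np.bincount(codebook.labels_, minlength=16)    # BoW histogram
print(hist)
```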
|
1607.01794
|
Zhenyang Li
|
Zhenyang Li, Efstratios Gavves, Mihir Jain, Cees G. M. Snoek
|
VideoLSTM Convolves, Attends and Flows for Action Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new architecture for end-to-end sequence learning of actions in
video, which we call VideoLSTM. Rather than adapting the video to the peculiarities
of established recurrent or convolutional architectures, we adapt the
architecture to fit the requirements of the video medium. Starting from the
soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video
has a spatial layout. To exploit the spatial correlation we hardwire
convolutions in the soft-Attention LSTM architecture. Second, motion not only
informs us about the action content, but also guides better the attention
towards the relevant spatio-temporal locations. We introduce motion-based
attention. And finally, we demonstrate how the attention from VideoLSTM can be
used for action localization by relying on just the action class label.
Experiments and comparisons on challenging datasets for action classification
and localization support our claims.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2016 20:00:20 GMT"
}
] | 2016-07-08T00:00:00 |
[
[
"Li",
"Zhenyang",
""
],
[
"Gavves",
"Efstratios",
""
],
[
"Jain",
"Mihir",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] |
TITLE: VideoLSTM Convolves, Attends and Flows for Action Recognition
ABSTRACT: We present a new architecture for end-to-end sequence learning of actions in
video, which we call VideoLSTM. Rather than adapting the video to the peculiarities
of established recurrent or convolutional architectures, we adapt the
architecture to fit the requirements of the video medium. Starting from the
soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video
has a spatial layout. To exploit the spatial correlation we hardwire
convolutions in the soft-Attention LSTM architecture. Second, motion not only
informs us about the action content, but also guides better the attention
towards the relevant spatio-temporal locations. We introduce motion-based
attention. And finally, we demonstrate how the attention from VideoLSTM can be
used for action localization by relying on just the action class label.
Experiments and comparisons on challenging datasets for action classification
and localization support our claims.
|
1607.01977
|
Yuchao Dai Dr.
|
Xibin Song, Yuchao Dai, Xueying Qin
|
Deep Depth Super-Resolution: Learning Depth Super-Resolution using Deep
Convolutional Neural Network
|
13 pages, 10 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depth image super-resolution is an extremely challenging task due to the
information loss in sub-sampling. Deep convolutional neural networks have been
widely applied to color image super-resolution. Quite surprisingly, this
success has not been matched to depth super-resolution. This is mainly due to
the inherent difference between color and depth images. In this paper, we
bridge up the gap and extend the success of deep convolutional neural network
to depth super-resolution. The proposed deep depth super-resolution method
learns the mapping from a low-resolution depth image to a high resolution one
in an end-to-end style. Furthermore, to better regularize the learned depth
map, we propose to exploit the depth field statistics and the local correlation
between depth image and color image. These priors are integrated in an energy
minimization formulation, where the deep neural network learns the unary term,
the depth field statistics works as global model constraint and the color-depth
correlation is utilized to enforce the local structure in depth images.
Extensive experiments on various depth super-resolution benchmark datasets show
that our method outperforms the state-of-the-art depth image super-resolution
methods by a clear margin.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2016 12:01:59 GMT"
}
] | 2016-07-08T00:00:00 |
[
[
"Song",
"Xibin",
""
],
[
"Dai",
"Yuchao",
""
],
[
"Qin",
"Xueying",
""
]
] |
TITLE: Deep Depth Super-Resolution: Learning Depth Super-Resolution using Deep
Convolutional Neural Network
ABSTRACT: Depth image super-resolution is an extremely challenging task due to the
information loss in sub-sampling. Deep convolutional neural networks have been
widely applied to color image super-resolution. Quite surprisingly, this
success has not been matched to depth super-resolution. This is mainly due to
the inherent difference between color and depth images. In this paper, we
bridge up the gap and extend the success of deep convolutional neural network
to depth super-resolution. The proposed deep depth super-resolution method
learns the mapping from a low-resolution depth image to a high resolution one
in an end-to-end style. Furthermore, to better regularize the learned depth
map, we propose to exploit the depth field statistics and the local correlation
between depth image and color image. These priors are integrated in an energy
minimization formulation, where the deep neural network learns the unary term,
the depth field statistics works as global model constraint and the color-depth
correlation is utilized to enforce the local structure in depth images.
Extensive experiments on various depth super-resolution benchmark datasets show
that our method outperforms the state-of-the-art depth image super-resolution
methods by a clear margin.
|
1607.02003
|
Mihir Jain
|
Mihir Jain, Jan van Gemert, Herv\'e J\'egou, Patrick Bouthemy, Cees
G.M. Snoek
|
Tubelets: Unsupervised action proposals from spatiotemporal super-voxels
|
submitted to International Journal of Computer Vision
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the problem of localizing actions in videos as
sequences of bounding boxes. The objective is to generate action proposals that
are likely to include the action of interest, ideally achieving high recall
with few proposals. Our contributions are threefold. First, inspired by
selective search for object proposals, we introduce an approach to generate
action proposals from spatiotemporal super-voxels in an unsupervised manner; we
call them Tubelets. Second, along with the static features from individual
frames, our approach advantageously exploits motion. We introduce independent
motion evidence as a feature to characterize how the action deviates from the
background and explicitly incorporate such motion information in various stages
of the proposal generation. Finally, we introduce spatiotemporal refinement of
Tubelets, for more precise localization of actions, and pruning to keep the
number of Tubelets limited. We demonstrate the suitability of our approach by
extensive experiments for action proposal quality and action localization on
three public datasets: UCF Sports, MSR-II and UCF101. For action proposal
quality, our unsupervised proposals beat all other existing approaches on the
three datasets. For action localization, we show top performance on both the
trimmed videos of UCF Sports and UCF101 as well as the untrimmed videos of
MSR-II.
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2016 13:30:17 GMT"
}
] | 2016-07-08T00:00:00 |
[
[
"Jain",
"Mihir",
""
],
[
"van Gemert",
"Jan",
""
],
[
"Jégou",
"Hervé",
""
],
[
"Bouthemy",
"Patrick",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] |
TITLE: Tubelets: Unsupervised action proposals from spatiotemporal super-voxels
ABSTRACT: This paper considers the problem of localizing actions in videos as
sequences of bounding boxes. The objective is to generate action proposals that
are likely to include the action of interest, ideally achieving high recall
with few proposals. Our contributions are threefold. First, inspired by
selective search for object proposals, we introduce an approach to generate
action proposals from spatiotemporal super-voxels in an unsupervised manner; we
call them Tubelets. Second, along with the static features from individual
frames, our approach advantageously exploits motion. We introduce independent
motion evidence as a feature to characterize how the action deviates from the
background and explicitly incorporate such motion information in various stages
of the proposal generation. Finally, we introduce spatiotemporal refinement of
Tubelets, for more precise localization of actions, and pruning to keep the
number of Tubelets limited. We demonstrate the suitability of our approach by
extensive experiments for action proposal quality and action localization on
three public datasets: UCF Sports, MSR-II and UCF101. For action proposal
quality, our unsupervised proposals beat all other existing approaches on the
three datasets. For action localization, we show top performance on both the
trimmed videos of UCF Sports and UCF101 as well as the untrimmed videos of
MSR-II.
|
1607.02062
|
Georg Groh
|
Christoph Fuchs and Akash Nayyar and Ruth Nussbaumer and Georg Groh
|
Estimating the Dissemination of Social and Mobile Search in Categories
of Information Needs Using Websites as Proxies
| null | null | null | null |
cs.CY cs.IR cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increasing popularity of social means to satisfy information needs
using Social Media (e.g., Social Media Question Asking, SMQA) or Social
Information Retrieval approaches, this paper tries to identify types of
information needs which are inherently social and therefore better suited for
those techniques. We describe an experiment where prominent websites from
various content categories are used to represent their respective content area
and allow to correlate attributes of the content areas. The underlying
assumption is that successful websites for focused content areas perfectly
align with the information seekers' requirements when satisfying information
needs in the respective content areas. Based on a manually collected dataset of
URLs from websites covering a broad range of topics taken from Alexa
(http://www.alexa.com, retrieved 2015-11-04), a company that publishes
statistics about web traffic, a crowdsourcing approach is employed to rate the
information needs that could get solved by the respective URLs according to
several dimensions (incl. sociality and mobility) to investigate possible
correlations with other attributes. Our results suggest that information needs
which do not require a certain formal expertise play an important role in
social information retrieval and that some content areas are better suited for
social information retrieval (e.g., Factual Knowledge & News, Games, Lifestyle)
than others (e.g., Health & Lifestyle).
|
[
{
"version": "v1",
"created": "Thu, 7 Jul 2016 16:01:41 GMT"
}
] | 2016-07-08T00:00:00 |
[
[
"Fuchs",
"Christoph",
""
],
[
"Nayyar",
"Akash",
""
],
[
"Nussbaumer",
"Ruth",
""
],
[
"Groh",
"Georg",
""
]
] |
TITLE: Estimating the Dissemination of Social and Mobile Search in Categories
of Information Needs Using Websites as Proxies
ABSTRACT: With the increasing popularity of social means to satisfy information needs
using Social Media (e.g., Social Media Question Asking, SMQA) or Social
Information Retrieval approaches, this paper tries to identify types of
information needs which are inherently social and therefore better suited for
those techniques. We describe an experiment where prominent websites from
various content categories are used to represent their respective content area
and to correlate attributes of the content areas. The underlying
assumption is that successful websites for focused content areas perfectly
align with the information seekers' requirements when satisfying information
needs in the respective content areas. Based on a manually collected dataset of
URLs from websites covering a broad range of topics taken from Alexa
(http://www.alexa.com, retrieved 2015-11-04), a company that publishes
statistics about web traffic, a crowdsourcing approach is employed to rate the
information needs that could get solved by the respective URLs according to
several dimensions (incl. sociality and mobility) to investigate possible
correlations with other attributes. Our results suggest that information needs
which do not require a certain formal expertise play an important role in
social information retrieval and that some content areas are better suited for
social information retrieval (e.g., Factual Knowledge & News, Games, Lifestyle)
than others (e.g., Health & Lifestyle).
|
1607.01462
|
Yingfei Wang
|
Yingfei Wang and Warren Powell
|
An optimal learning method for developing personalized treatment regimes
| null | null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A treatment regime is a function that maps individual patient information to
a recommended treatment, hence explicitly incorporating the heterogeneity in
need for treatment across individuals. Patient responses are dichotomous and
can be predicted through an unknown relationship that depends on the patient
information and the selected treatment. The goal is to find the treatments that
lead to the best patient responses on average. Each experiment is expensive,
forcing us to learn the most from each experiment. We adopt a Bayesian approach
both to incorporate possible prior information and to update our treatment
regime continuously as information accrues, with the potential to allow smaller
yet more informative trials and for patients to receive better treatment. By
formulating the problem as contextual bandits, we introduce a knowledge
gradient policy to guide the treatment assignment by maximizing the expected
value of information, for which an approximation method is used to overcome
computational challenges. We provide a detailed study on how to make sequential
medical decisions under uncertainty to reduce health care costs on a real world
knee replacement dataset. We use clustering and LASSO to deal with the
intrinsic sparsity in health datasets. We show experimentally that even though
the problem is sparse, through careful selection of physicians (versus picking
them at random), we can significantly improve the success rates.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2016 02:34:21 GMT"
}
] | 2016-07-07T00:00:00 |
[
[
"Wang",
"Yingfei",
""
],
[
"Powell",
"Warren",
""
]
] |
TITLE: An optimal learning method for developing personalized treatment regimes
ABSTRACT: A treatment regime is a function that maps individual patient information to
a recommended treatment, hence explicitly incorporating the heterogeneity in
need for treatment across individuals. Patient responses are dichotomous and
can be predicted through an unknown relationship that depends on the patient
information and the selected treatment. The goal is to find the treatments that
lead to the best patient responses on average. Each experiment is expensive,
forcing us to learn the most from each experiment. We adopt a Bayesian approach
both to incorporate possible prior information and to update our treatment
regime continuously as information accrues, with the potential to allow smaller
yet more informative trials and for patients to receive better treatment. By
formulating the problem as contextual bandits, we introduce a knowledge
gradient policy to guide the treatment assignment by maximizing the expected
value of information, for which an approximation method is used to overcome
computational challenges. We provide a detailed study on how to make sequential
medical decisions under uncertainty to reduce health care costs on a real world
knee replacement dataset. We use clustering and LASSO to deal with the
intrinsic sparsity in health datasets. We show experimentally that even though
the problem is sparse, through careful selection of physicians (versus picking
them at random), we can significantly improve the success rates.
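To illustrate the knowledge-gradient idea in its simplest form, independent normal beliefs over a finite set of treatments, here is a one-step look-ahead sketch using the standard KG formula; the paper's contextual, correlated-belief version is considerably more involved, so treat this only as the underlying intuition.

```python
# Knowledge-gradient step for independent normal beliefs (illustrative).
import numpy as np
from scipy.stats import norm

def kg_choice(mu, sigma2, noise_var):
    """mu, sigma2: posterior means/variances; returns treatment to try next."""
    # predictive reduction in posterior std if we measure alternative x
    sigma_tilde = sigma2 / np.sqrt(sigma2 + noise_var)
    kg = np.empty_like(mu)
    for x in range(len(mu)):
        best_other = np.max(np.delete(mu, x))
        z = -np.abs(mu[x] - best_other) / sigma_tilde[x]
        kg[x] = sigma_tilde[x] * (z * norm.cdf(z) + norm.pdf(z))
    return int(np.argmax(mu + kg))   # one-step expected value of information

mu = np.array([0.2, 0.5, 0.45])      # posterior mean treatment "scores"
sigma2 = np.array([1.0, 0.05, 0.8])  # uncertain arms can win via information
print(kg_choice(mu, sigma2, noise_var=1.0))
```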
|
1607.01577
|
Le Dong
|
Le Dong, Ling He, Gaipeng Kong, Qianni Zhang, Xiaochun Cao, and Ebroul
Izquierdo
|
CUNet: A Compact Unsupervised Network for Image Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a compact network called CUNet (compact
unsupervised network) to address the image classification challenge. Unlike
traditional convolutional neural networks, which learn filters by
time-consuming stochastic gradient descent, CUNet learns the filter bank from
diverse image patches with simple K-means, which avoids the need for scarce
labeled training images, reduces the training cost, and maintains high
discriminative ability. Besides, we propose
a new pooling method named weighted pooling considering the different weight
values of adjacent neurons, which helps to improve the robustness to small
image distortions. In the output layer, CUNet integrates the feature maps
gained in the last hidden layer, and straightforwardly computes histograms in
non-overlapped blocks. To reduce feature redundancy, we implement the
max-pooling operation on adjacent blocks to select the most competitive
features. Comprehensive experiments are conducted to demonstrate the
state-of-the-art classification performances with CUNet on CIFAR-10, STL-10,
MNIST and Caltech101 benchmark datasets.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2016 11:56:52 GMT"
}
] | 2016-07-07T00:00:00 |
[
[
"Dong",
"Le",
""
],
[
"He",
"Ling",
""
],
[
"Kong",
"Gaipeng",
""
],
[
"Zhang",
"Qianni",
""
],
[
"Cao",
"Xiaochun",
""
],
[
"Izquierdo",
"Ebroul",
""
]
] |
TITLE: CUNet: A Compact Unsupervised Network for Image Classification
ABSTRACT: In this paper, we propose a compact network called CUNet (compact
unsupervised network) to address the image classification challenge. Unlike
traditional convolutional neural networks, which learn filters by
time-consuming stochastic gradient descent, CUNet learns the filter bank from
diverse image patches with simple K-means, which avoids the need for scarce
labeled training images, reduces the training cost, and maintains high
discriminative ability. Besides, we propose
a new pooling method named weighted pooling considering the different weight
values of adjacent neurons, which helps to improve the robustness to small
image distortions. In the output layer, CUNet integrates the feature maps
gained in the last hidden layer, and straightforwardly computes histograms in
non-overlapped blocks. To reduce feature redundancy, we implement the
max-pooling operation on adjacent blocks to select the most competitive
features. Comprehensive experiments are conducted to demonstrate the
state-of-the-art classification performances with CUNet on CIFAR-10, STL-10,
MNIST and Caltech101 benchmark datasets.
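The K-means filter learning that replaces gradient descent here can be sketched as: sample and normalize random image patches, cluster them, and use the centroids as convolution filters. Patch size, filter count, and the random images below are illustrative assumptions.

```python
# Unsupervised filter bank: k-means centroids of normalized image patches.
import numpy as np
from sklearn.cluster import KMeans
from scipy.signal import convolve2d

def kmeans_filter_bank(images, patch=5, n_filters=8, n_patches=2000, seed=0):
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - patch)
        x = rng.integers(img.shape[1] - patch)
        p = img[y:y + patch, x:x + patch].ravel()
        patches.append((p - p.mean()) / (p.std() + 1e-8))  # normalize patch
    km = KMeans(n_clusters=n_filters, n_init=10, random_state=seed)
    km.fit(np.array(patches))
    return km.cluster_centers_.reshape(n_filters, patch, patch)

images = [np.random.rand(32, 32) for _ in range(20)]
filters = kmeans_filter_bank(images)
fmap = convolve2d(images[0], filters[0], mode="valid")     # one feature map
print(filters.shape, fmap.shape)
```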
|
1607.01582
|
Gerasimos Spanakis
|
Gerasimos Spanakis and Gerhard Weiss and Anne Roefs
|
Bagged Boosted Trees for Classification of Ecological Momentary
Assessment Data
|
to be presented at ECAI2016
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ecological Momentary Assessment (EMA) data is organized in multiple levels
(per-subject, per-day, etc.) and this particular structure should be taken into
account in machine learning algorithms used in EMA, such as decision trees and
their variants. We propose a new algorithm called BBT (standing for Bagged Boosted
Trees) that is enhanced by an over/under-sampling method and can provide better
estimates for the conditional class probability function. Experimental results
on a real-world dataset show that BBT can benefit EMA data classification
performance.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2016 12:10:29 GMT"
}
] | 2016-07-07T00:00:00 |
[
[
"Spanakis",
"Gerasimos",
""
],
[
"Weiss",
"Gerhard",
""
],
[
"Roefs",
"Anne",
""
]
] |
TITLE: Bagged Boosted Trees for Classification of Ecological Momentary
Assessment Data
ABSTRACT: Ecological Momentary Assessment (EMA) data is organized in multiple levels
(per-subject, per-day, etc.) and this particular structure should be taken into
account in machine learning algorithms used in EMA, such as decision trees and
their variants. We propose a new algorithm called BBT (standing for Bagged Boosted
Trees) that is enhanced by an over/under-sampling method and can provide better
estimates for the conditional class probability function. Experimental results
on a real-world dataset show that BBT can benefit EMA data classification
performance.
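A minimal sketch of the bagged-boosted-trees ensemble structure, bagging several gradient-boosted tree models with subsampling, using scikit-learn; the actual BBT additionally adapts the over/under-sampling and the probability estimates to the multilevel EMA structure, which this sketch does not attempt.

```python
# Bagging gradient-boosted trees: the structural core of BBT (illustrative).
import numpy as np
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 1.2).astype(int)

bbt = BaggingClassifier(
    # scikit-learn >= 1.2; older versions use base_estimator= instead
    estimator=GradientBoostingClassifier(n_estimators=50, max_depth=2),
    n_estimators=10,          # number of bagged boosted models
    max_samples=0.8,          # bootstrap-style subsampling per model
    random_state=0,
)
print(cross_val_score(bbt, X, y, cv=3, scoring="roc_auc").mean())
```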
|
1607.01690
|
Cen Wan
|
Cen Wan and Alex A. Freitas
|
A New Hierarchical Redundancy Eliminated Tree Augmented Naive Bayes
Classifier for Coping with Gene Ontology-based Features
|
International Conference on Machine Learning (ICML 2016)
Computational Biology Workshop
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Tree Augmented Naive Bayes classifier is a type of probabilistic
graphical model that can represent some feature dependencies. In this work, we
propose a Hierarchical Redundancy Eliminated Tree Augmented Naive Bayes
(HRE-TAN) algorithm, which considers removing the hierarchical redundancy
during the classifier learning process, when coping with data containing
hierarchically structured features. The experiments show that HRE-TAN obtains
significantly better predictive performance than the conventional Tree
Augmented Naive Bayes classifier, and enhances robustness against
imbalanced class distributions, in aging-related gene datasets with Gene
Ontology terms used as features.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2016 16:00:43 GMT"
}
] | 2016-07-07T00:00:00 |
[
[
"Wan",
"Cen",
""
],
[
"Freitas",
"Alex A.",
""
]
] |
TITLE: A New Hierarchical Redundancy Eliminated Tree Augmented Naive Bayes
Classifier for Coping with Gene Ontology-based Features
ABSTRACT: The Tree Augmented Naive Bayes classifier is a type of probabilistic
graphical model that can represent some feature dependencies. In this work, we
propose a Hierarchical Redundancy Eliminated Tree Augmented Naive Bayes
(HRE-TAN) algorithm, which considers removing the hierarchical redundancy
during the classifier learning process, when coping with data containing
hierarchically structured features. The experiments showed that HRE-TAN obtains
significantly better predictive performance than the conventional Tree
Augmented Naive Bayes classifier, and enhances the robustness against
imbalanced class distributions, in aging-related gene datasets with Gene
Ontology terms used as features.
|
1607.01719
|
Baochen Sun
|
Baochen Sun, Kate Saenko
|
Deep CORAL: Correlation Alignment for Deep Domain Adaptation
|
Extended Abstract
| null | null | null |
cs.CV cs.AI cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks are able to learn powerful representations from large
quantities of labeled input data; however, they cannot always generalize well
across changes in input distributions. Domain adaptation algorithms have been
proposed to compensate for the degradation in performance due to domain shift.
In this paper, we address the case when the target domain is unlabeled,
requiring unsupervised adaptation. CORAL is a "frustratingly easy" unsupervised
domain adaptation method that aligns the second-order statistics of the source
and target distributions with a linear transformation. Here, we extend CORAL to
learn a nonlinear transformation that aligns correlations of layer activations
in deep neural networks (Deep CORAL). Experiments on standard benchmark
datasets show state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2016 17:35:55 GMT"
}
] | 2016-07-07T00:00:00 |
[
[
"Sun",
"Baochen",
""
],
[
"Saenko",
"Kate",
""
]
] |
TITLE: Deep CORAL: Correlation Alignment for Deep Domain Adaptation
ABSTRACT: Deep neural networks are able to learn powerful representations from large
quantities of labeled input data; however, they cannot always generalize well
across changes in input distributions. Domain adaptation algorithms have been
proposed to compensate for the degradation in performance due to domain shift.
In this paper, we address the case when the target domain is unlabeled,
requiring unsupervised adaptation. CORAL is a "frustratingly easy" unsupervised
domain adaptation method that aligns the second-order statistics of the source
and target distributions with a linear transformation. Here, we extend CORAL to
learn a nonlinear transformation that aligns correlations of layer activations
in deep neural networks (Deep CORAL). Experiments on standard benchmark
datasets show state-of-the-art performance.
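The quantity being aligned is the CORAL loss: the squared Frobenius distance between source and target feature covariances, normalized by $4d^2$. A NumPy sketch of just this term follows (in Deep CORAL it is added to the classification loss and backpropagated through the network); the random activations stand in for real layer outputs.

```python
import numpy as np

def coral_loss(source, target):
    """CORAL loss: squared Frobenius distance between the feature covariances
    of two (n, d) activation batches, normalized by 4 d^2."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)
    ct = np.cov(target, rowvar=False)
    return np.sum((cs - ct) ** 2) / (4.0 * d * d)

rng = np.random.default_rng(0)
src = rng.normal(size=(64, 128))           # source-batch activations
tgt = rng.normal(loc=0.5, size=(64, 128))  # shifted target-batch activations
print(coral_loss(src, tgt))
```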
|
1511.04397
|
Ehsan Hosseini-Asl
|
Ehsan Hosseini-Asl, Angshuman Guha
|
Similarity-based Text Recognition by Deeply Supervised Siamese Network
|
Accepted for presenting at Future Technologies Conference - (FTC
2016) San Francisco, December 6-7, 2016
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a new text recognition model based on measuring the
visual similarity of text and predicting the content of unlabeled texts. First
a Siamese convolutional network is trained with deep supervision on a labeled
training dataset. This network projects texts into a similarity manifold. The
Deeply Supervised Siamese network learns visual similarity of texts. Then a
K-nearest neighbor classifier is used to predict unlabeled text based on
similarity distance to labeled texts. The performance of the model is evaluated
on three datasets of machine-print and hand-written text combined. We
demonstrate that the model reduces the cost of human estimation by $50\%-85\%$.
The error of the system is less than $0.5\%$. The proposed model outperforms
the conventional Siamese network by finding visually similar barely-readable and
readable text (e.g., machine-printed, handwritten), owing to deep supervision. The
results also demonstrate that the predicted labels are sometimes better than
human labels, e.g., through spelling correction.
|
[
{
"version": "v1",
"created": "Fri, 13 Nov 2015 18:46:01 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Nov 2015 20:59:10 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jan 2016 00:37:29 GMT"
},
{
"version": "v4",
"created": "Sun, 3 Jul 2016 16:38:35 GMT"
},
{
"version": "v5",
"created": "Tue, 5 Jul 2016 01:21:08 GMT"
}
] | 2016-07-06T00:00:00 |
[
[
"Hosseini-Asl",
"Ehsan",
""
],
[
"Guha",
"Angshuman",
""
]
] |
TITLE: Similarity-based Text Recognition by Deeply Supervised Siamese Network
ABSTRACT: In this paper, we propose a new text recognition model based on measuring the
visual similarity of text and predicting the content of unlabeled texts. First
a Siamese convolutional network is trained with deep supervision on a labeled
training dataset. This network projects texts into a similarity manifold. The
Deeply Supervised Siamese network learns visual similarity of texts. Then a
K-nearest neighbor classifier is used to predict unlabeled text based on
similarity distance to labeled texts. The performance of the model is evaluated
on three datasets of machine-print and hand-written text combined. We
demonstrate that the model reduces the cost of human estimation by $50\%-85\%$.
The error of the system is less than $0.5\%$. The proposed model outperforms
the conventional Siamese network by finding visually similar barely-readable and
readable text (e.g., machine-printed, handwritten), owing to deep supervision. The
results also demonstrate that the predicted labels are sometimes better than
human labels, e.g., through spelling correction.
|
1603.05614
|
Qilian Yu
|
Qilian Yu, Easton Li Xu, Shuguang Cui
|
Streaming Algorithms for News and Scientific Literature Recommendation:
Submodular Maximization with a d-Knapsack Constraint
|
11 pages, 5 figures
| null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Submodular maximization problems belong to the family of combinatorial
optimization problems and enjoy wide applications. In this paper, we focus on
the problem of maximizing a monotone submodular function subject to a
$d$-knapsack constraint, for which we propose a streaming algorithm that
achieves a $\left(\frac{1}{1+2d}-\epsilon\right)$-approximation of the optimal
value, while needing only a single pass through the dataset without storing
all the data in the memory. In our experiments, we extensively evaluate the
effectiveness of our proposed algorithm via two applications: news
recommendation and scientific literature recommendation. It is observed that
the proposed streaming algorithm achieves both execution speedup and memory
saving by several orders of magnitude, compared with existing approaches.
|
[
{
"version": "v1",
"created": "Thu, 17 Mar 2016 19:01:12 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Jul 2016 16:15:56 GMT"
},
{
"version": "v3",
"created": "Tue, 5 Jul 2016 00:43:45 GMT"
}
] | 2016-07-06T00:00:00 |
[
[
"Yu",
"Qilian",
""
],
[
"Xu",
"Easton Li",
""
],
[
"Cui",
"Shuguang",
""
]
] |
TITLE: Streaming Algorithms for News and Scientific Literature Recommendation:
Submodular Maximization with a d-Knapsack Constraint
ABSTRACT: Submodular maximization problems belong to the family of combinatorial
optimization problems and enjoy wide applications. In this paper, we focus on
the problem of maximizing a monotone submodular function subject to a
$d$-knapsack constraint, for which we propose a streaming algorithm that
achieves a $\left(\frac{1}{1+2d}-\epsilon\right)$-approximation of the optimal
value, while needing only a single pass through the dataset without storing
all the data in the memory. In our experiments, we extensively evaluate the
effectiveness of our proposed algorithm via two applications: news
recommendation and scientific literature recommendation. It is observed that
the proposed streaming algorithm achieves both execution speedup and memory
saving by several orders of magnitude, compared with existing approaches.
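For intuition only, here is a much-simplified single-pass selection rule in the spirit of the streaming setting described above; it is not the authors' algorithm and carries no $(\frac{1}{1+2d}-\epsilon)$ guarantee. The density threshold and the toy coverage function are assumptions.

```python
def stream_select(stream, f, costs, budgets, density_threshold):
    """Single pass: keep an element if its marginal gain per unit of its
    largest normalized cost clears a threshold and all d budgets still fit.
    `f` is a monotone submodular set function, `costs[e]` a d-vector."""
    S, used = [], [0.0] * len(budgets)
    for e in stream:
        gain = f(S + [e]) - f(S)
        max_ratio = max(c / b for c, b in zip(costs[e], budgets))
        if max_ratio > 0 and gain / max_ratio >= density_threshold:
            if all(u + c <= b for u, c, b in zip(used, costs[e], budgets)):
                S.append(e)
                used = [u + c for u, c in zip(used, costs[e])]
    return S

# Toy coverage instance: f(S) = number of distinct items covered.
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 2, 3, 4}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
costs = {e: [1.0, 1.0] for e in cover}          # d = 2 knapsacks
print(stream_select(list(cover), f, costs, budgets=[2.0, 2.0],
                    density_threshold=1.0))
```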
|
1604.07364
|
Sibasish Laha
|
Sibasish Laha, Francis P. Keenan, Gary J. Ferland, Catherine A.
Ramsbottom, and Kanti M. Aggarwal
|
Ultraviolet emission lines of Si II in quasars --- investigating the "Si
II disaster"
|
Accepted for publication in ApJ
| null |
10.3847/0004-637X/825/1/28
| null |
astro-ph.GA physics.atom-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The observed line intensity ratios of the Si II 1263 and 1307 \AA\ multiplets
to that of Si II 1814\,\AA\ in the broad line region of quasars are both an
order of magnitude larger than the theoretical values. This was first pointed
out by Baldwin et al. (1996), who termed it the "Si II disaster", and it has
remained unresolved. We investigate the problem in the light of newly-published
atomic data for Si II. Specifically, we perform broad line region calculations
using several different atomic datasets within the CLOUDY modeling code under
optically thick quasar cloud conditions. In addition, we test for selective
pumping by the source photons or intrinsic galactic reddening as possible
causes for the discrepancy, and also consider blending with other species.
However, we find that none of the options investigated resolves the Si II
disaster, with the potential exception of microturbulent velocity broadening
and line blending. We find that a larger microturbulent velocity ($\sim 500 \rm
\, kms^{-1}$) may solve the Si II disaster through continuum pumping and other
effects. The CLOUDY models indicate strong blending of the Si II 1307 \AA\
multiplet with emission lines of O I, although the predicted degree of blending
is incompatible with the observed 1263/1307 intensity ratios. Clearly, more
work is required on the quasar modelling of not just the Si II lines but also
nearby transitions (in particular those of O I) to fully investigate if
blending may be responsible for the Si II disaster.
|
[
{
"version": "v1",
"created": "Mon, 25 Apr 2016 19:01:58 GMT"
}
] | 2016-07-06T00:00:00 |
[
[
"Laha",
"Sibasish",
""
],
[
"Keenan",
"Francis P.",
""
],
[
"Ferland",
"Gary J.",
""
],
[
"Ramsbottom",
"Catherine A.",
""
],
[
"Aggarwal",
"Kanti M.",
""
]
] |
TITLE: Ultraviolet emission lines of Si II in quasars --- investigating the "Si
II disaster"
ABSTRACT: The observed line intensity ratios of the Si II 1263 and 1307 \AA\ multiplets
to that of Si II 1814\,\AA\ in the broad line region of quasars are both an
order of magnitude larger than the theoretical values. This was first pointed
out by Baldwin et al. (1996), who termed it the "Si II disaster", and it has
remained unresolved. We investigate the problem in the light of newly-published
atomic data for Si II. Specifically, we perform broad line region calculations
using several different atomic datasets within the CLOUDY modeling code under
optically thick quasar cloud conditions. In addition, we test for selective
pumping by the source photons or intrinsic galactic reddening as possible
causes for the discrepancy, and also consider blending with other species.
However, we find that none of the options investigated resolves the Si II
disaster, with the potential exception of microturbulent velocity broadening
and line blending. We find that a larger microturbulent velocity ($\sim 500 \rm
\, kms^{-1}$) may solve the Si II disaster through continuum pumping and other
effects. The CLOUDY models indicate strong blending of the Si II 1307 \AA\
multiplet with emission lines of O I, although the predicted degree of blending
is incompatible with the observed 1263/1307 intensity ratios. Clearly, more
work is required on the quasar modelling of not just the Si II lines but also
nearby transitions (in particular those of O I) to fully investigate if
blending may be responsible for the Si II disaster.
|
1606.09002
|
Cong Yao
|
Cong Yao, Xiang Bai, Nong Sang, Xinyu Zhou, Shuchang Zhou and Zhimin
Cao
|
Scene Text Detection via Holistic, Multi-Channel Prediction
|
10 pages, 9 figures, 5 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, scene text detection has become an active research topic in
computer vision and document analysis, because of its great importance and
significant challenge. However, the vast majority of existing methods detect
text within local regions, typically through extracting character, word or line
level candidates followed by candidate aggregation and false positive
elimination, which potentially exclude the effect of wide-scope and long-range
contextual cues in the scene. To take full advantage of the rich information
available in the whole natural image, we propose to localize text in a holistic
manner, by casting scene text detection as a semantic segmentation problem. The
proposed algorithm directly runs on full images and produces global, pixel-wise
prediction maps, in which detections are subsequently formed. To better make
use of the properties of text, three types of information regarding text
region, individual characters and their relationship are estimated, with a
single Fully Convolutional Network (FCN) model. With such predictions of text
properties, the proposed algorithm can simultaneously handle horizontal,
multi-oriented and curved text in real-world natural images. The experiments on
standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500,
demonstrate that the proposed algorithm substantially outperforms previous
state-of-the-art approaches. Moreover, we report the first baseline result on
the recently-released, large-scale dataset COCO-Text.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2016 08:45:17 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jul 2016 11:22:49 GMT"
}
] | 2016-07-06T00:00:00 |
[
[
"Yao",
"Cong",
""
],
[
"Bai",
"Xiang",
""
],
[
"Sang",
"Nong",
""
],
[
"Zhou",
"Xinyu",
""
],
[
"Zhou",
"Shuchang",
""
],
[
"Cao",
"Zhimin",
""
]
] |
TITLE: Scene Text Detection via Holistic, Multi-Channel Prediction
ABSTRACT: Recently, scene text detection has become an active research topic in
computer vision and document analysis, because of its great importance and
significant challenge. However, the vast majority of existing methods detect
text within local regions, typically through extracting character, word or line
level candidates followed by candidate aggregation and false positive
elimination, which potentially exclude the effect of wide-scope and long-range
contextual cues in the scene. To take full advantage of the rich information
available in the whole natural image, we propose to localize text in a holistic
manner, by casting scene text detection as a semantic segmentation problem. The
proposed algorithm directly runs on full images and produces global, pixel-wise
prediction maps, in which detections are subsequently formed. To better make
use of the properties of text, three types of information regarding text
region, individual characters and their relationship are estimated, with a
single Fully Convolutional Network (FCN) model. With such predictions of text
properties, the proposed algorithm can simultaneously handle horizontal,
multi-oriented and curved text in real-world natural images. The experiments on
standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500,
demonstrate that the proposed algorithm substantially outperforms previous
state-of-the-art approaches. Moreover, we report the first baseline result on
the recently-released, large-scale dataset COCO-Text.
|
1607.01115
|
Suyog Jain
|
Suyog Dutt Jain, Kristen Grauman
|
Click Carving: Segmenting Objects in Video with Point Clicks
|
A preliminary version of the material in this document was filed as
University of Texas technical report no. UT AI16-01
| null | null |
University of Texas Technical Report UT AI16-01
|
cs.CV cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel form of interactive video object segmentation where a few
clicks by the user help the system produce a full spatio-temporal segmentation
of the object of interest. Whereas conventional interactive pipelines take the
user's initialization as a starting point, we show the value in the system
taking the lead even in initialization. In particular, for a given video frame,
the system precomputes a ranked list of thousands of possible segmentation
hypotheses (also referred to as object region proposals) using image and motion
cues. Then, the user looks at the top ranked proposals, and clicks on the
object boundary to carve away erroneous ones. This process iterates (typically
2-3 times), and each time the system revises the top ranked proposal set, until
the user is satisfied with a resulting segmentation mask. Finally, the mask is
propagated across the video to produce a spatio-temporal object tube. On three
challenging datasets, we provide extensive comparisons with both existing work
and simpler alternative methods. In all, the proposed Click Carving approach
strikes an excellent balance of accuracy and human effort. It outperforms all
similarly fast methods, and is competitive or better than those requiring 2 to
12 times the effort.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2016 05:35:22 GMT"
}
] | 2016-07-06T00:00:00 |
[
[
"Jain",
"Suyog Dutt",
""
],
[
"Grauman",
"Kristen",
""
]
] |
TITLE: Click Carving: Segmenting Objects in Video with Point Clicks
ABSTRACT: We present a novel form of interactive video object segmentation where a few
clicks by the user help the system produce a full spatio-temporal segmentation
of the object of interest. Whereas conventional interactive pipelines take the
user's initialization as a starting point, we show the value in the system
taking the lead even in initialization. In particular, for a given video frame,
the system precomputes a ranked list of thousands of possible segmentation
hypotheses (also referred to as object region proposals) using image and motion
cues. Then, the user looks at the top ranked proposals, and clicks on the
object boundary to carve away erroneous ones. This process iterates (typically
2-3 times), and each time the system revises the top ranked proposal set, until
the user is satisfied with a resulting segmentation mask. Finally, the mask is
propagated across the video to produce a spatio-temporal object tube. On three
challenging datasets, we provide extensive comparisons with both existing work
and simpler alternative methods. In all, the proposed Click Carving approach
strikes an excellent balance of accuracy and human effort. It outperforms all
similarly fast methods, and is competitive or better than those requiring 2 to
12 times the effort.
|
1607.01152
|
Nicolas Goix
|
Nicolas Goix (LTCI)
|
How to Evaluate the Quality of Unsupervised Anomaly Detection
Algorithms?
| null | null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When sufficient labeled data are available, classical criteria based on
Receiver Operating Characteristic (ROC) or Precision-Recall (PR) curves can be
used to compare the performance of unsupervised anomaly detection algorithms.
However, in many situations, few or no data are labeled. This calls for
alternative criteria one can compute on non-labeled data. In this paper, two
criteria that do not require labels are empirically shown to discriminate
accurately (w.r.t. ROC or PR based criteria) between algorithms. These criteria
are based on existing Excess-Mass (EM) and Mass-Volume (MV) curves, which
generally cannot be well estimated in large dimension. A methodology based on
feature sub-sampling and aggregating is also described and tested, extending
the use of these criteria to high-dimensional datasets and solving major
drawbacks inherent to standard EM and MV curves.
|
[
{
"version": "v1",
"created": "Tue, 5 Jul 2016 08:58:44 GMT"
}
] | 2016-07-06T00:00:00 |
[
[
"Goix",
"Nicolas",
"",
"LTCI"
]
] |
TITLE: How to Evaluate the Quality of Unsupervised Anomaly Detection
Algorithms?
ABSTRACT: When sufficient labeled data are available, classical criteria based on
Receiver Operating Characteristic (ROC) or Precision-Recall (PR) curves can be
used to compare the performance of unsupervised anomaly detection algorithms.
However, in many situations, few or no data are labeled. This calls for
alternative criteria one can compute on non-labeled data. In this paper, two
criteria that do not require labels are empirically shown to discriminate
accurately (w.r.t. ROC or PR based criteria) between algorithms. These criteria
are based on existing Excess-Mass (EM) and Mass-Volume (MV) curves, which
generally cannot be well estimated in large dimension. A methodology based on
feature sub-sampling and aggregating is also described and tested, extending
the use of these criteria to high-dimensional datasets and solving major
drawbacks inherent to standard EM and MV curves.
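A rough NumPy sketch of a Mass-Volume-style criterion: for each mass level, threshold the scores on the data and estimate the level set's volume by uniform Monte Carlo sampling in the data's bounding box. This illustrates the idea only; the paper's exact EM/MV estimators and the feature sub-sampling extension differ.

```python
import numpy as np

def mass_volume_criterion(score, X, n_uniform=100_000):
    """Label-free, Mass-Volume-style criterion: average level-set volume over
    a range of mass levels. Lower values indicate a sharper scoring function."""
    rng = np.random.default_rng(0)
    lo, hi = X.min(axis=0), X.max(axis=0)
    U = rng.uniform(lo, hi, size=(n_uniform, X.shape[1]))  # bounding-box samples
    s_data, s_unif = score(X), score(U)
    box_volume = float(np.prod(hi - lo))
    volumes = []
    for alpha in np.linspace(0.90, 0.99, 10):
        t = np.quantile(s_data, 1.0 - alpha)       # mass >= alpha above t
        volumes.append(box_volume * np.mean(s_unif >= t))
    return float(np.mean(volumes))

# Toy scoring function: negative distance to the data mean.
X = np.random.default_rng(1).normal(size=(500, 2))
score = lambda Z: -np.linalg.norm(Z - X.mean(axis=0), axis=1)
print(mass_volume_criterion(score, X))
```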
|
1605.04797
|
Qingnan Zhou
|
Qingnan Zhou, Alec Jacobson
|
Thingi10K: A Dataset of 10,000 3D-Printing Models
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Empirically validating new 3D-printing related algorithms and implementations
requires testing data representative of inputs encountered \emph{in the wild}.
An ideal benchmarking dataset should not only draw from the same distribution
of shapes people print in terms of class (e.g., toys, mechanisms, jewelry),
representation type (e.g., triangle soup meshes) and complexity (e.g., number
of facets), but should also capture problems and artifacts endemic to 3D
printing models (e.g., self-intersections, non-manifoldness). We observe that
the contextual and geometric characteristics of 3D printing models differ
significantly from those used for computer graphics applications, not to
mention standard models (e.g., Stanford bunny, Armadillo, Fertility). We
present a new dataset of 10,000 models collected from an online 3D printing
model-sharing database. Via analysis of both geometric (e.g., triangle aspect
ratios, manifoldness) and contextual (e.g., licenses, tags, classes)
characteristics, we demonstrate that this dataset represents a more concise
summary of real-world models used for 3D printing compared to existing
datasets. To facilitate future research endeavors, we also present an online
query interface to select subsets of the dataset according to project-specific
characteristics. The complete dataset and per-model statistical data are freely
available to the public.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2016 15:09:19 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Jul 2016 03:15:10 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Zhou",
"Qingnan",
""
],
[
"Jacobson",
"Alec",
""
]
] |
TITLE: Thingi10K: A Dataset of 10,000 3D-Printing Models
ABSTRACT: Empirically validating new 3D-printing related algorithms and implementations
requires testing data representative of inputs encountered \emph{in the wild}.
An ideal benchmarking dataset should not only draw from the same distribution
of shapes people print in terms of class (e.g., toys, mechanisms, jewelry),
representation type (e.g., triangle soup meshes) and complexity (e.g., number
of facets), but should also capture problems and artifacts endemic to 3D
printing models (e.g., self-intersections, non-manifoldness). We observe that
the contextual and geometric characteristics of 3D printing models differ
significantly from those used for computer graphics applications, not to
mention standard models (e.g., Stanford bunny, Armadillo, Fertility). We
present a new dataset of 10,000 models collected from an online 3D printing
model-sharing database. Via analysis of both geometric (e.g., triangle aspect
ratios, manifoldness) and contextual (e.g., licenses, tags, classes)
characteristics, we demonstrate that this dataset represents a more concise
summary of real-world models used for 3D printing compared to existing
datasets. To facilitate future research endeavors, we also present an online
query interface to select subsets of the dataset according to project-specific
characteristics. The complete dataset and per-model statistical data are freely
available to the public.
|
1606.01994
|
Zihang Dai
|
Zihang Dai, Lei Li, Wei Xu
|
CFO: Conditional Focused Neural Question Answering with Large-scale
Knowledge Bases
|
Accepted by ACL 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can we enable computers to automatically answer questions like "Who
created the character Harry Potter"? Carefully built knowledge bases provide
rich sources of facts. However, it remains a challenge to answer factoid
questions raised in natural language due to numerous expressions of one
question. In particular, we focus on the most common questions --- ones that
can be answered with a single fact in the knowledge base. We propose CFO, a
Conditional Focused neural-network-based approach to answering factoid
questions with knowledge bases. Our approach first zooms in on a question to find
more probable candidate subject mentions, and infers the final answers with a
unified conditional probabilistic framework. Powered by deep recurrent neural
networks and neural embeddings, our proposed CFO achieves an accuracy of 75.7%
on a dataset of 108k questions - the largest public one to date. It outperforms
the current state of the art by an absolute margin of 11.8%.
|
[
{
"version": "v1",
"created": "Tue, 7 Jun 2016 01:36:07 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Jul 2016 03:04:38 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Dai",
"Zihang",
""
],
[
"Li",
"Lei",
""
],
[
"Xu",
"Wei",
""
]
] |
TITLE: CFO: Conditional Focused Neural Question Answering with Large-scale
Knowledge Bases
ABSTRACT: How can we enable computers to automatically answer questions like "Who
created the character Harry Potter"? Carefully built knowledge bases provide
rich sources of facts. However, it remains a challenge to answer factoid
questions raised in natural language due to numerous expressions of one
question. In particular, we focus on the most common questions --- ones that
can be answered with a single fact in the knowledge base. We propose CFO, a
Conditional Focused neural-network-based approach to answering factoid
questions with knowledge bases. Our approach first zooms in on a question to find
more probable candidate subject mentions, and infers the final answers with a
unified conditional probabilistic framework. Powered by deep recurrent neural
networks and neural embeddings, our proposed CFO achieves an accuracy of 75.7%
on a dataset of 108k questions - the largest public one to date. It outperforms
the current state of the art by an absolute margin of 11.8%.
|
1606.08998
|
Ernest C. H. Cheung
|
Ernest Cheung, Tsan Kwong Wong, Aniket Bera, Xiaogang Wang, and Dinesh
Manocha
|
LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior
Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel procedural framework to generate an arbitrary number of
labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to
design accurate algorithms or training models for crowded scene understanding.
Our overall approach is composed of two components: a procedural simulation
framework for generating crowd movements and behaviors, and a procedural
rendering framework to generate different videos or images. Each video or image
is automatically labeled based on the environment, number of pedestrians,
density, behavior, flow, lighting conditions, viewpoint, noise, etc.
Furthermore, we can increase the realism by combining synthetically-generated
behaviors with real-world background videos. We demonstrate the benefits of
LCrowdV over prior labeled crowd datasets by improving the accuracy of
pedestrian detection and crowd behavior classification algorithms. LCrowdV
will be released on the WWW.
|
[
{
"version": "v1",
"created": "Wed, 29 Jun 2016 08:30:44 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Jul 2016 05:33:48 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Cheung",
"Ernest",
""
],
[
"Wong",
"Tsan Kwong",
""
],
[
"Bera",
"Aniket",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
TITLE: LCrowdV: Generating Labeled Videos for Simulation-based Crowd Behavior
Learning
ABSTRACT: We present a novel procedural framework to generate an arbitrary number of
labeled crowd videos (LCrowdV). The resulting crowd video datasets are used to
design accurate algorithms or training models for crowded scene understanding.
Our overall approach is composed of two components: a procedural simulation
framework for generating crowd movements and behaviors, and a procedural
rendering framework to generate different videos or images. Each video or image
is automatically labeled based on the environment, number of pedestrians,
density, behavior, flow, lighting conditions, viewpoint, noise, etc.
Furthermore, we can increase the realism by combining synthetically-generated
behaviors with real-world background videos. We demonstrate the benefits of
LCrowdV over prior labeled crowd datasets by improving the accuracy of
pedestrian detection and crowd behavior classification algorithms. LCrowdV
will be released on the WWW.
|
1606.09446
|
Han Xiao
|
Han Xiao, Polina Rozenshtein, and Aristides Gionis
|
Discovering topically- and temporally-coherent events in interaction
networks
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increasing use of online communication platforms, such as email,
twitter, and messaging applications, we are faced with a growing amount of data
that combine content (what is said), time (when), and user (by whom)
information. An important computational challenge is to analyze these data,
discover meaningful patterns, and understand what is happening. We consider the
problem of mining online communication data and finding top-k temporal events.
We define a temporal event to be a coherent topic that is discussed frequently,
in a relatively short time span, while the information flow of the event respects
the underlying network structure. We construct our model for detecting temporal
events in two steps. We first introduce the notion of interaction meta-graph,
which connects associated interactions. Using this notion, we define a temporal
event to be a subset of interactions that (i) are topically and temporally
close and (ii) correspond to a tree that captures the information flow. Finding
the best temporal event leads to a budget version of the prize-collecting
Steiner-tree (PCST) problem, which we solve using three different methods: a
greedy approach, a dynamic-programming algorithm, and an adaptation to an
existing approximation algorithm. The problem of finding the top-k events
among a set of candidate events maps to the maximum set-cover problem, and thus
is solved greedily. We compare and analyze our algorithms in both synthetic and
real datasets, such as twitter and email communication. The results show that
our methods are able to detect meaningful temporal events.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2016 12:08:06 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Jul 2016 17:30:26 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Xiao",
"Han",
""
],
[
"Rozenshtein",
"Polina",
""
],
[
"Gionis",
"Aristides",
""
]
] |
TITLE: Discovering topically- and temporally-coherent events in interaction
networks
ABSTRACT: With the increasing use of online communication platforms, such as email,
twitter, and messaging applications, we are faced with a growing amount of data
that combine content (what is said), time (when), and user (by whom)
information. An important computational challenge is to analyze these data,
discover meaningful patterns, and understand what is happening. We consider the
problem of mining online communication data and finding top-k temporal events.
We define a temporal event to be a coherent topic that is discussed frequently,
in a relatively short time span, while the information flow of the event respects
the underlying network structure. We construct our model for detecting temporal
events in two steps. We first introduce the notion of interaction meta-graph,
which connects associated interactions. Using this notion, we define a temporal
event to be a subset of interactions that (i) are topically and temporally
close and (ii) correspond to a tree that captures the information flow. Finding
the best temporal event leads to a budget version of the prize-collecting
Steiner-tree (PCST) problem, which we solve using three different methods: a
greedy approach, a dynamic-programming algorithm, and an adaptation to an
existing approximation algorithm. The problem of finding the top-k events
among a set of candidate events maps to the maximum set-cover problem, and thus
is solved greedily. We compare and analyze our algorithms in both synthetic and
real datasets, such as twitter and email communication. The results show that
our methods are able to detect meaningful temporal events.
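The top-k step that the abstract maps to maximum set cover admits the classic greedy $(1 - 1/e)$-approximation. A minimal sketch, treating each candidate event as the set of interactions it covers (the toy events are assumptions):

```python
def top_k_events(candidates, k):
    """Greedy maximum coverage: repeatedly pick the candidate event covering
    the most interactions not covered yet. `candidates` maps an event id to
    its set of interactions."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(candidates, key=lambda e: len(candidates[e] - covered),
                   default=None)
        if best is None or not candidates[best] - covered:
            break
        chosen.append(best)
        covered |= candidates.pop(best)
    return chosen

events = {"e1": {1, 2, 3}, "e2": {3, 4}, "e3": {5}, "e4": {1, 2}}
print(top_k_events(dict(events), k=2))   # ['e1', 'e2']
```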
|
1607.00410
|
Yusuke Watanabe Dr.
|
Yusuke Watanabe, Kazuma Hashimoto, Yoshimasa Tsuruoka
|
Domain Adaptation for Neural Networks by Parameter Augmentation
|
9 pages. To appear in the first ACL Workshop on Representation
Learning for NLP
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a simple domain adaptation method for neural networks in a
supervised setting. Supervised domain adaptation is a way of improving the
generalization performance on the target domain by using the source domain
dataset, assuming that both of the datasets are labeled. Recently, recurrent
neural networks have been shown to be successful on a variety of NLP tasks such
as caption generation; however, the existing domain adaptation techniques are
limited to (1) tuning the model parameters with the target dataset after
training on the source dataset, or (2) designing the network to have dual output,
one for the source domain and the other for the target domain. Reformulating
the idea of the domain adaptation technique proposed by Daume (2007), we
propose a simple domain adaptation method, which can be applied to neural
networks trained with a cross-entropy loss. On captioning datasets, we show
performance improvements over other domain adaptation methods.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 21:24:21 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Watanabe",
"Yusuke",
""
],
[
"Hashimoto",
"Kazuma",
""
],
[
"Tsuruoka",
"Yoshimasa",
""
]
] |
TITLE: Domain Adaptation for Neural Networks by Parameter Augmentation
ABSTRACT: We propose a simple domain adaptation method for neural networks in a
supervised setting. Supervised domain adaptation is a way of improving the
generalization performance on the target domain by using the source domain
dataset, assuming that both of the datasets are labeled. Recently, recurrent
neural networks have been shown to be successful on a variety of NLP tasks such
as caption generation; however, the existing domain adaptation techniques are
limited to (1) tuning the model parameters with the target dataset after
training on the source dataset, or (2) designing the network to have dual output,
one for the source domain and the other for the target domain. Reformulating
the idea of the domain adaptation technique proposed by Daume (2007), we
propose a simple domain adaptation method, which can be applied to neural
networks trained with a cross-entropy loss. On captioning datasets, we show
performance improvements over other domain adaptation methods.
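For reference, the Daume (2007) technique being reformulated is "frustratingly easy" feature augmentation: replicate features into shared and domain-specific blocks so a single model learns shared weights plus per-domain corrections. A NumPy sketch of the original trick (the paper's neural reformulation acts on parameters instead):

```python
import numpy as np

def augment(X, domain):
    """Daume (2007) feature augmentation: source rows become [x, x, 0],
    target rows become [x, 0, x]."""
    zeros = np.zeros_like(X)
    if domain == "source":
        return np.hstack([X, X, zeros])
    return np.hstack([X, zeros, X])

Xs = np.ones((4, 3))          # toy source features
Xt = np.ones((2, 3)) * 2.0    # toy target features
X_all = np.vstack([augment(Xs, "source"), augment(Xt, "target")])
print(X_all.shape)            # (6, 9): shared block + two domain blocks
```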
|
1607.00417
|
Abir Das
|
Abir Das, Rameswar Panda and Amit K. Roy-Chowdhury
|
Continuous Adaptation of Multi-Camera Person Identification Models
through Sparse Non-redundant Representative Selection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of image-based person identification/recognition is to provide an
identity to the image of an individual based on learned models that describe
his/her appearance. Most traditional person identification systems rely on
learning a static model on tediously labeled training data. Though labeling
manually is an indispensable part of a supervised framework, for a large-scale
identification system labeling a huge amount of data is a significant overhead.
For large multi-sensor data as typically encountered in camera networks,
labeling a lot of samples does not always mean more information, as redundant
images are labeled several times. In this work, we propose a convex
optimization based iterative framework that progressively and judiciously
chooses a sparse but informative set of samples for labeling, with minimal
overlap with previously labeled images. We also use a structure preserving
sparse reconstruction based classifier to reduce the training burden typically
seen in discriminative classifiers. The two-stage approach leads to a novel
framework for online update of the classifiers involving only the incorporation
of new labeled data rather than any expensive training phase. We demonstrate
the effectiveness of our approach on multi-camera person re-identification
datasets, showing the feasibility of learning online classification
models in multi-camera big data applications. Using three benchmark datasets,
we validate our approach and demonstrate that our framework achieves superior
performance with significantly less amount of manual labeling.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 21:48:16 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Das",
"Abir",
""
],
[
"Panda",
"Rameswar",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
]
] |
TITLE: Continuous Adaptation of Multi-Camera Person Identification Models
through Sparse Non-redundant Representative Selection
ABSTRACT: The problem of image-based person identification/recognition is to provide an
identity to the image of an individual based on learned models that describe
his/her appearance. Most traditional person identification systems rely on
learning a static model on tediously labeled training data. Though labeling
manually is an indispensable part of a supervised framework, for a large-scale
identification system labeling a huge amount of data is a significant overhead.
For large multi-sensor data as typically encountered in camera networks,
labeling a lot of samples does not always mean more information, as redundant
images are labeled several times. In this work, we propose a convex
optimization based iterative framework that progressively and judiciously
chooses a sparse but informative set of samples for labeling, with minimal
overlap with previously labeled images. We also use a structure preserving
sparse reconstruction based classifier to reduce the training burden typically
seen in discriminative classifiers. The two-stage approach leads to a novel
framework for online update of the classifiers involving only the incorporation
of new labeled data rather than any expensive training phase. We demonstrate
the effectiveness of our approach on multi-camera person re-identification
datasets, showing the feasibility of learning online classification
models in multi-camera big data applications. Using three benchmark datasets,
we validate our approach and demonstrate that our framework achieves superior
performance with significantly less amount of manual labeling.
|
1607.00442
|
Yongqiang Huang
|
Yongqiang Huang and Yu Sun
|
Datasets on object manipulation and interaction: a survey
|
8 pages, 3 figures, 3 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A dataset is crucial for model learning and evaluation. Choosing the right
dataset to use or making a new dataset requires the knowledge of those that are
available. In this work, we provide that knowledge, by reviewing twenty
datasets that were published in the past six years and that are directly
related to object manipulation. We report on modalities, activities, and
annotations for each individual dataset and give our view on its use for object
manipulation. We also compare the datasets and summarize them. We conclude with
our suggestion on future datasets.
|
[
{
"version": "v1",
"created": "Sat, 2 Jul 2016 00:58:57 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Huang",
"Yongqiang",
""
],
[
"Sun",
"Yu",
""
]
] |
TITLE: Datasets on object manipulation and interaction: a survey
ABSTRACT: A dataset is crucial for model learning and evaluation. Choosing the right
dataset to use or making a new dataset requires the knowledge of those that are
available. In this work, we provide that knowledge, by reviewing twenty
datasets that were published in the past six years and that are directly
related to object manipulation. We report on modalities, activities, and
annotations for each individual dataset and give our view on its use for object
manipulation. We also compare the datasets and summarize them. We conclude with
our suggestion on future datasets.
|
1607.00464
|
Le Dong
|
Le Dong, Xiuyuan Chen, Mengdie Mao, Qianni Zhang
|
NIST: An Image Classification Network to Image Semantic Retrieval
|
4 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a classification network to image semantic retrieval
(NIST) framework to address the image retrieval challenge. Our approach
leverages the successful classification network GoogleNet based on
Convolutional Neural Networks to obtain the semantic feature matrix which
contains the serial number of classes and corresponding probabilities. Compared
with traditional image retrieval using feature matching to compute the
similarity between two images, NIST leverages the semantic information to
construct the semantic feature matrix and uses the semantic distance algorithm to
compute the similarity. Besides, the fusion strategy can significantly reduce
storage and time consumption due to less classes participating in the last
semantic distance computation. Experiments demonstrate that our NIST framework
produces state-of-the-art results in retrieval experiments on MIRFLICKR-25K
dataset.
|
[
{
"version": "v1",
"created": "Sat, 2 Jul 2016 04:39:24 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Dong",
"Le",
""
],
[
"Chen",
"Xiuyuan",
""
],
[
"Mao",
"Mengdie",
""
],
[
"Zhang",
"Qianni",
""
]
] |
TITLE: NIST: An Image Classification Network to Image Semantic Retrieval
ABSTRACT: This paper proposes a classification network to image semantic retrieval
(NIST) framework to address the image retrieval challenge. Our approach
leverages the successful classification network GoogleNet based on
Convolutional Neural Networks to obtain the semantic feature matrix which
contains the serial number of classes and corresponding probabilities. Compared
with traditional image retrieval using feature matching to compute the
similarity between two images, NIST leverages the semantic information to
construct the semantic feature matrix and uses the semantic distance algorithm to
compute the similarity. Besides, the fusion strategy can significantly reduce
storage and time consumption due to less classes participating in the last
semantic distance computation. Experiments demonstrate that our NIST framework
produces state-of-the-art results in retrieval experiments on MIRFLICKR-25K
dataset.
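One hypothetical reading of the semantic feature matrix and fusion strategy, sketched in NumPy: keep only each image's top-k class probabilities (so fewer classes enter the final distance computation) and compare the truncated distributions. The paper's actual semantic distance may differ.

```python
import numpy as np

def semantic_similarity(p1, p2, top_k=10):
    """Compare two images via their truncated class-probability vectors;
    truncation to the top-k classes plays the role of the fusion step."""
    def truncate(p):
        q = np.zeros_like(p)
        idx = np.argsort(p)[-top_k:]
        q[idx] = p[idx]
        return q / q.sum()
    q1, q2 = truncate(p1), truncate(p2)
    return float(q1 @ q2 / (np.linalg.norm(q1) * np.linalg.norm(q2)))

rng = np.random.default_rng(2)
probs = rng.dirichlet(np.ones(1000), size=2)   # stand-in classifier outputs
print(semantic_similarity(probs[0], probs[1]))
```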
|
1607.00501
|
Le Dong
|
Le Dong, Na Lv, Qianni Zhang, Shanshan Xie, Ling He, Mengdie Mao
|
A Distributed Deep Representation Learning Model for Big Image Data
Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes an effective and efficient image classification
framework termed the distributed deep representation learning model (DDRL). The
aim is to strike a balance between computationally intensive deep learning
approaches (tuned parameters), which are intended for distributed computing, and
approaches that focus on designed parameters but are often limited by
sequential computing and cannot scale up. In the evaluation of our approach, it
is shown that DDRL is able to achieve state-of-the-art classification accuracy
efficiently on both medium and large datasets. The result implies that our
approach is more efficient than the conventional deep learning approaches, and
can be applied to big data that is too complex for parameter designing focused
approaches. More specifically, DDRL contains two main components, i.e., feature
extraction and selection. A hierarchical distributed deep representation
learning algorithm is designed to extract image statistics and a nonlinear
mapping algorithm is used to map the inherent statistics into abstract
features. Both algorithms are carefully designed to avoid millions of
parameters tuning. This leads to a more compact solution for image
classification of big data. We note that the proposed approach is designed to
be friendly with parallel computing. It is generic and easy to be deployed to
different distributed computing resources. In the experiments, the large-scale
image datasets are classified with a DDRL implementation on Hadoop MapReduce,
which shows high scalability and resilience.
|
[
{
"version": "v1",
"created": "Sat, 2 Jul 2016 12:33:12 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Dong",
"Le",
""
],
[
"Lv",
"Na",
""
],
[
"Zhang",
"Qianni",
""
],
[
"Xie",
"Shanshan",
""
],
[
"He",
"Ling",
""
],
[
"Mao",
"Mengdie",
""
]
] |
TITLE: A Distributed Deep Representation Learning Model for Big Image Data
Classification
ABSTRACT: This paper describes an effective and efficient image classification
framework termed the distributed deep representation learning model (DDRL). The
aim is to strike a balance between computationally intensive deep learning
approaches (tuned parameters), which are intended for distributed computing, and
approaches that focus on designed parameters but are often limited by
sequential computing and cannot scale up. In the evaluation of our approach, it
is shown that DDRL is able to achieve state-of-the-art classification accuracy
efficiently on both medium and large datasets. The result implies that our
approach is more efficient than the conventional deep learning approaches, and
can be applied to big data that is too complex for parameter designing focused
approaches. More specifically, DDRL contains two main components, i.e., feature
extraction and selection. A hierarchical distributed deep representation
learning algorithm is designed to extract image statistics and a nonlinear
mapping algorithm is used to map the inherent statistics into abstract
features. Both algorithms are carefully designed to avoid millions of
parameters tuning. This leads to a more compact solution for image
classification of big data. We note that the proposed approach is designed to
be friendly with parallel computing. It is generic and easy to be deployed to
different distributed computing resources. In the experiments, the large-scale
image datasets are classified with a DDRL implementation on Hadoop MapReduce,
which shows high scalability and resilience.
|
1607.00509
|
Evangelos Psomakelis Mr
|
Evangelos Psomakelis, Fotis Aisopos, Antonios Litke, Konstantinos
Tserpes, Magdalini Kardara, Pablo Mart\'inez Campo
|
Big IoT and social networking data for smart cities: Algorithmic
improvements on Big Data Analysis in the context of RADICAL city applications
|
Conference
| null | null | null |
cs.CY cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a SOA (Service Oriented Architecture)-based
platform, enabling the retrieval and analysis of big datasets stemming from
social networking (SN) sites and Internet of Things (IoT) devices, collected by
smart city applications and socially-aware data aggregation services. A large
set of city applications in the areas of Participating Urbanism, Augmented
Reality and Sound-Mapping throughout participating cities is being applied,
resulting into produced sets of millions of user-generated events and online SN
reports fed into the RADICAL platform. Moreover, we study the application of
data analytics such as sentiment analysis to the combined IoT and SN data saved
into an SQL database, further investigating algorithmic improvements and configurations to
minimize delays in dataset processing and results retrieval.
|
[
{
"version": "v1",
"created": "Sat, 2 Jul 2016 13:35:02 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Psomakelis",
"Evangelos",
""
],
[
"Aisopos",
"Fotis",
""
],
[
"Litke",
"Antonios",
""
],
[
"Tserpes",
"Konstantinos",
""
],
[
"Kardara",
"Magdalini",
""
],
[
"Campo",
"Pablo Martínez",
""
]
] |
TITLE: Big IoT and social networking data for smart cities: Algorithmic
improvements on Big Data Analysis in the context of RADICAL city applications
ABSTRACT: In this paper we present a SOA (Service Oriented Architecture)-based
platform, enabling the retrieval and analysis of big datasets stemming from
social networking (SN) sites and Internet of Things (IoT) devices, collected by
smart city applications and socially-aware data aggregation services. A large
set of city applications in the areas of Participating Urbanism, Augmented
Reality and Sound-Mapping throughout participating cities is being applied,
resulting in sets of millions of user-generated events and online SN
reports fed into the RADICAL platform. Moreover, we study the application of
data analytics such as sentiment analysis to the combined IoT and SN data saved
into an SQL database, further investigating algorithmic improvements and configurations to
minimize delays in dataset processing and results retrieval.
|
1607.00548
|
Melanie Mitchell
|
Max H. Quinn, Anthony D. Rhodes, Melanie Mitchell
|
Active Object Localization in Visual Situations
|
14 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a method for performing active localization of objects in
instances of visual situations. A visual situation is an abstract
concept---e.g., "a boxing match", "a birthday party", "walking the dog",
"waiting for a bus"---whose image instantiations are linked more by their
common spatial and semantic structure than by low-level visual similarity. Our
system combines given and learned knowledge of the structure of a particular
situation, and adapts that knowledge to a new situation instance as it actively
searches for objects. More specifically, the system learns a set of probability
distributions describing spatial and other relationships among relevant
objects. The system uses those distributions to iteratively sample object
proposals on a test image, but also continually uses information from those
object proposals to adaptively modify the distributions based on what the
system has detected. We test our approach's ability to efficiently localize
objects, using a situation-specific image dataset created by our group. We
compare the results with several baselines and variations on our method, and
demonstrate the strong benefit of using situation knowledge and active
context-driven localization. Finally, we contrast our method with several other
approaches that use context as well as active search for object localization in
images.
|
[
{
"version": "v1",
"created": "Sat, 2 Jul 2016 18:43:07 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Quinn",
"Max H.",
""
],
[
"Rhodes",
"Anthony D.",
""
],
[
"Mitchell",
"Melanie",
""
]
] |
TITLE: Active Object Localization in Visual Situations
ABSTRACT: We describe a method for performing active localization of objects in
instances of visual situations. A visual situation is an abstract
concept---e.g., "a boxing match", "a birthday party", "walking the dog",
"waiting for a bus"---whose image instantiations are linked more by their
common spatial and semantic structure than by low-level visual similarity. Our
system combines given and learned knowledge of the structure of a particular
situation, and adapts that knowledge to a new situation instance as it actively
searches for objects. More specifically, the system learns a set of probability
distributions describing spatial and other relationships among relevant
objects. The system uses those distributions to iteratively sample object
proposals on a test image, but also continually uses information from those
object proposals to adaptively modify the distributions based on what the
system has detected. We test our approach's ability to efficiently localize
objects, using a situation-specific image dataset created by our group. We
compare the results with several baselines and variations on our method, and
demonstrate the strong benefit of using situation knowledge and active
context-driven localization. Finally, we contrast our method with several other
approaches that use context as well as active search for object localization in
images.
|
1607.00556
|
Ehsan Hosseini-Asl
|
Ehsan Hosseini-Asl, Georgy Gimel'farb, Ayman El-Baz
|
Alzheimer's Disease Diagnostics by a Deeply Supervised Adaptable 3D
Convolutional Network
| null | null | null | null |
cs.LG q-bio.NC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Early diagnosis, which plays an important role in preventing progression and
treating Alzheimer's disease (AD), is based on classification of features
extracted from brain images. The features have to accurately capture main
AD-related variations of anatomical brain structures, such as, e.g., ventricles
size, hippocampus shape, cortical thickness, and brain volume. This paper
proposes to predict the AD with a deep 3D convolutional neural network
(3D-CNN), which can learn generic features capturing AD biomarkers and adapt to
different domain datasets. The 3D-CNN is built upon a 3D convolutional
autoencoder, which is pre-trained to capture anatomical shape variations in
structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then
fine-tuned for each task-specific AD classification. Experiments on the
\emph{ADNI} MRI dataset with no skull-stripping preprocessing have shown our
3D-CNN outperforms several conventional classifiers by accuracy and robustness.
Abilities of the 3D-CNN to generalize the features learnt and adapt to other
domains have been validated on the \emph{CADDementia} dataset.
|
[
{
"version": "v1",
"created": "Sat, 2 Jul 2016 19:55:56 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Hosseini-Asl",
"Ehsan",
""
],
[
"Gimel'farb",
"Georgy",
""
],
[
"El-Baz",
"Ayman",
""
]
] |
TITLE: Alzheimer's Disease Diagnostics by a Deeply Supervised Adaptable 3D
Convolutional Network
ABSTRACT: Early diagnosis, which plays an important role in preventing progression and
treating Alzheimer's disease (AD), is based on classification of features
extracted from brain images. The features have to accurately capture main
AD-related variations of anatomical brain structures, such as, e.g., ventricles
size, hippocampus shape, cortical thickness, and brain volume. This paper
proposes to predict the AD with a deep 3D convolutional neural network
(3D-CNN), which can learn generic features capturing AD biomarkers and adapt to
different domain datasets. The 3D-CNN is built upon a 3D convolutional
autoencoder, which is pre-trained to capture anatomical shape variations in
structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then
fine-tuned for each task-specific AD classification. Experiments on the
\emph{ADNI} MRI dataset with no skull-stripping preprocessing have shown our
3D-CNN outperforms several conventional classifiers by accuracy and robustness.
Abilities of the 3D-CNN to generalize the features learnt and adapt to other
domains have been validated on the \emph{CADDementia} dataset.
|
1607.00577
|
Le Dong
|
Le Dong, Zhiyu Lin, Yan Liang, Ling He, Ning Zhang, Qi Chen, Xiaochun
Cao, Ebroul Izquierdo
|
A Hierarchical Distributed Processing Framework for Big Image Data
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces an effective processing framework termed ICP (Image
Cloud Processing) to powerfully cope with the data explosion in the image
processing field. While most previous research focuses on optimizing image
processing algorithms to gain higher efficiency, our work is dedicated to
providing a general framework for those image processing algorithms, which can
be implemented in parallel so as to achieve a boost in time efficiency without
compromising the results performance along with the increasing image scale. The
proposed ICP framework consists of two mechanisms, i.e. SICP (Static ICP) and
DICP (Dynamic ICP). Specifically, SICP is aimed at processing the big image
data pre-stored in the distributed system, while DICP is proposed for dynamic
input. To accomplish SICP, two novel data representations named P-Image and
Big-Image are designed to cooperate with MapReduce to achieve more optimized
configuration and higher efficiency. DICP is implemented through a parallel
processing procedure working with the traditional processing mechanism of the
distributed system. Representative results of comprehensive experiments on the
challenging ImageNet dataset are selected to validate the capacity of our
proposed ICP framework over the traditional state-of-the-art methods, both in
time efficiency and quality of results.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2016 02:16:49 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Dong",
"Le",
""
],
[
"Lin",
"Zhiyu",
""
],
[
"Liang",
"Yan",
""
],
[
"He",
"Ling",
""
],
[
"Zhang",
"Ning",
""
],
[
"Chen",
"Qi",
""
],
[
"Cao",
"Xiaochun",
""
],
[
"lzquierdo",
"Ebroul",
""
]
] |
TITLE: A Hierarchical Distributed Processing Framework for Big Image Data
ABSTRACT: This paper introduces an effective processing framework named ICP (Image
Cloud Processing) to cope with the data explosion in the image
processing field. While most previous research focuses on optimizing the image
processing algorithms themselves to gain higher efficiency, our work is dedicated to
providing a general framework for those image processing algorithms, which can
be implemented in parallel so as to achieve a boost in time efficiency without
compromising result quality as the image scale increases. The
proposed ICP framework consists of two mechanisms, i.e. SICP (Static ICP) and
DICP (Dynamic ICP). Specifically, SICP is aimed at processing the big image
data pre-stored in the distributed system, while DICP is proposed for dynamic
input. To accomplish SICP, two novel data representations named P-Image and
Big-Image are designed to cooperate with MapReduce to achieve more optimized
configuration and higher efficiency. DICP is implemented through a parallel
processing procedure working with the traditional processing mechanism of the
distributed system. Representative results of comprehensive experiments on the
challenging ImageNet dataset are selected to validate the capacity of our
proposed ICP framework over the traditional state-of-the-art methods, both in
time efficiency and quality of results.
|
1607.00582
|
Qi Dou
|
Qi Dou, Hao Chen, Yueming Jin, Lequan Yu, Jing Qin, Pheng-Ann Heng
|
3D Deeply Supervised Network for Automatic Liver Segmentation from CT
Volumes
|
Accepted to MICCAI 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic liver segmentation from CT volumes is a crucial prerequisite yet
challenging task for computer-aided hepatic disease diagnosis and treatment. In
this paper, we present a novel 3D deeply supervised network (3D DSN) to address
this challenging task. The proposed 3D DSN takes advantage of a fully
convolutional architecture which performs efficient end-to-end learning and
inference. More importantly, we introduce a deep supervision mechanism during
the learning process to combat potential optimization difficulties, and thus
the model can acquire a much faster convergence rate and more powerful
discrimination capability. On top of the high-quality score map produced by the
3D DSN, a conditional random field model is further employed to obtain refined
segmentation results. We evaluated our framework on the public MICCAI-SLiver07
dataset. Extensive experiments demonstrated that our method achieves
segmentation results competitive with state-of-the-art approaches at a much
faster processing speed.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2016 02:52:56 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Dou",
"Qi",
""
],
[
"Chen",
"Hao",
""
],
[
"Jin",
"Yueming",
""
],
[
"Yu",
"Lequan",
""
],
[
"Qin",
"Jing",
""
],
[
"Heng",
"Pheng-Ann",
""
]
] |
TITLE: 3D Deeply Supervised Network for Automatic Liver Segmentation from CT
Volumes
ABSTRACT: Automatic liver segmentation from CT volumes is a crucial prerequisite yet
challenging task for computer-aided hepatic disease diagnosis and treatment. In
this paper, we present a novel 3D deeply supervised network (3D DSN) to address
this challenging task. The proposed 3D DSN takes advantage of a fully
convolutional architecture which performs efficient end-to-end learning and
inference. More importantly, we introduce a deep supervision mechanism during
the learning process to combat potential optimization difficulties, and thus
the model can acquire a much faster convergence rate and more powerful
discrimination capability. On top of the high-quality score map produced by the
3D DSN, a conditional random field model is further employed to obtain refined
segmentation results. We evaluated our framework on the public MICCAI-SLiver07
dataset. Extensive experiments demonstrated that our method achieves
segmentation results competitive with state-of-the-art approaches at a much
faster processing speed.
|
1607.00598
|
Yuzhuo Ren
|
Yuzhuo Ren, Chen Chen, Shangwen Li, and C.-C. Jay Kuo
|
A Coarse-to-Fine Indoor Layout Estimation (CFILE) Method
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of estimating the spatial layout of cluttered indoor scenes from a
single RGB image is addressed in this work. Existing solutions to this problem
largely rely on hand-crafted features and vanishing lines, and they often fail in
highly cluttered indoor rooms. The proposed coarse-to-fine indoor layout
estimation (CFILE) method consists of two stages: 1) coarse layout estimation;
and 2) fine layout localization. In the first stage, we adopt a fully
convolutional neural network (FCN) to obtain a coarse-scale room layout
estimate that is close to the ground truth globally. The proposed FCN combines
the layout contour property and the surface property so as to provide
a robust estimate in the presence of cluttered objects. In the second stage, we
formulate an optimization framework that enforces several constraints such as
layout contour straightness, surface smoothness and geometric constraints for
layout detail refinement. Our proposed system offers the state-of-the-art
performance on two commonly used benchmark datasets.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2016 05:55:47 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Ren",
"Yuzhuo",
""
],
[
"Chen",
"Chen",
""
],
[
"Li",
"Shangwen",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
TITLE: A Coarse-to-Fine Indoor Layout Estimation (CFILE) Method
ABSTRACT: The task of estimating the spatial layout of cluttered indoor scenes from a
single RGB image is addressed in this work. Existing solutions to this problem
largely rely on hand-crafted features and vanishing lines, and they often fail in
highly cluttered indoor rooms. The proposed coarse-to-fine indoor layout
estimation (CFILE) method consists of two stages: 1) coarse layout estimation;
and 2) fine layout localization. In the first stage, we adopt a fully
convolutional neural network (FCN) to obtain a coarse-scale room layout
estimate that is close to the ground truth globally. The proposed FCN combines
the layout contour property and the surface property so as to provide
a robust estimate in the presence of cluttered objects. In the second stage, we
formulate an optimization framework that enforces several constraints such as
layout contour straightness, surface smoothness and geometric constraints for
layout detail refinement. Our proposed system offers the state-of-the-art
performance on two commonly used benchmark datasets.
|
1607.00659
|
Kha Gia Quach
|
Kha Gia Quach, Chi Nhan Duong, Khoa Luu and Tien D. Bui
|
Robust Deep Appearance Models
|
6 pages, 8 figures, submitted to ICPR 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents novel Robust Deep Appearance Models to learn the
non-linear correlation between shape and texture of face images. In this
approach, two crucial components of face images, i.e. shape and texture, are
represented by Deep Boltzmann Machines and Robust Deep Boltzmann Machines
(RDBM), respectively. The RDBM, an alternative form of Robust Boltzmann
Machines, can separate corrupted/occluded pixels in the texture modeling to
achieve better reconstruction results. The two models are connected by
Restricted Boltzmann Machines at the top layer to jointly learn and capture the
variations of both facial shapes and appearances. This paper also introduces
new fitting algorithms with occlusion awareness through the mask obtained from
the RDBM reconstruction. The proposed approach is evaluated in various
applications by using challenging face datasets, i.e. Labeled Face Parts in the
Wild (LFPW), Helen, EURECOM and AR databases, to demonstrate its robustness and
capabilities.
|
[
{
"version": "v1",
"created": "Sun, 3 Jul 2016 17:31:30 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Quach",
"Kha Gia",
""
],
[
"Duong",
"Chi Nhan",
""
],
[
"Luu",
"Khoa",
""
],
[
"Bui",
"Tien D.",
""
]
] |
TITLE: Robust Deep Appearance Models
ABSTRACT: This paper presents novel Robust Deep Appearance Models to learn the
non-linear correlation between shape and texture of face images. In this
approach, two crucial components of face images, i.e. shape and texture, are
represented by Deep Boltzmann Machines and Robust Deep Boltzmann Machines
(RDBM), respectively. The RDBM, an alternative form of Robust Boltzmann
Machines, can separate corrupted/occluded pixels in the texture modeling to
achieve better reconstruction results. The two models are connected by
Restricted Boltzmann Machines at the top layer to jointly learn and capture the
variations of both facial shapes and appearances. This paper also introduces
new fitting algorithms with occlusion awareness through the mask obtained from
the RDBM reconstruction. The proposed approach is evaluated in various
applications by using challenging face datasets, i.e. Labeled Face Parts in the
Wild (LFPW), Helen, EURECOM and AR databases, to demonstrate its robustness and
capabilities.
|
1607.00719
|
Le Dong
|
Gaipeng Kong, Le Dong, Wenpu Dong, Liang Zheng, Qi Tian
|
Coarse2Fine: Two-Layer Fusion For Image Retrieval
| null | null | null | null |
cs.MM cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the problem of large-scale image retrieval. We propose a
two-layer fusion method which takes advantage of global and local cues and
ranks database images from coarse to fine (C2F). Departing from the previous
methods that fuse multiple image descriptors simultaneously, C2F is characterized
by a layered procedure composed of filtering and refining. In particular, C2F
consists of three components. 1) Distractor filtering. With holistic
representations, noise images are filtered out from the database, so the number
of candidate images to be used for comparison with the query can be greatly
reduced. 2) Adaptive weighting. For a certain query, the similarity of
candidate images can be estimated by holistic similarity scores,
complementary to the local ones. 3) Candidate refining. Accurate retrieval is
conducted via local features, combining the pre-computed adaptive weights.
Experiments are presented on two benchmarks, \emph{i.e.,} Holidays and Ukbench
datasets. We show that our method outperforms recent fusion methods in terms of
storage consumption and computation complexity, and that the accuracy is
competitive with the state of the art.
|
[
{
"version": "v1",
"created": "Mon, 4 Jul 2016 01:56:20 GMT"
}
] | 2016-07-05T00:00:00 |
[
[
"Kong",
"Gaipeng",
""
],
[
"Dong",
"Le",
""
],
[
"Dong",
"Wenpu",
""
],
[
"Zheng",
"Liang",
""
],
[
"Tian",
"Qi",
""
]
] |
TITLE: Coarse2Fine: Two-Layer Fusion For Image Retrieval
ABSTRACT: This paper addresses the problem of large-scale image retrieval. We propose a
two-layer fusion method which takes advantage of global and local cues and
ranks database images from coarse to fine (C2F). Departing from the previous
methods that fuse multiple image descriptors simultaneously, C2F is characterized
by a layered procedure composed of filtering and refining. In particular, C2F
consists of three components. 1) Distractor filtering. With holistic
representations, noise images are filtered out from the database, so the number
of candidate images to be used for comparison with the query can be greatly
reduced. 2) Adaptive weighting. For a certain query, the similarity of
candidate images can be estimated by holistic similarity scores,
complementary to the local ones. 3) Candidate refining. Accurate retrieval is
conducted via local features, combining the pre-computed adaptive weights.
Experiments are presented on two benchmarks, \emph{i.e.,} Holidays and Ukbench
datasets. We show that our method outperforms recent fusion methods in terms of
storage consumption and computation complexity, and that the accuracy is
competitive with the state of the art.
|
1510.03519
|
Janarthanan Rajendran
|
Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, Balaraman
Ravindran
|
Bridge Correlational Neural Networks for Multilingual Multimodal
Representation Learning
|
Published at NAACL-HLT 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently there has been a lot of interest in learning common representations
for multiple views of data. Typically, such common representations are learned
using a parallel corpus between the two views (say, 1M images and their English
captions). In this work, we address a real-world scenario where no direct
parallel data is available between two views of interest (say, $V_1$ and $V_2$)
but parallel data is available between each of these views and a pivot view
($V_3$). We propose a model for learning a common representation for $V_1$,
$V_2$ and $V_3$ using only the parallel data available between $V_1V_3$ and
$V_2V_3$. The proposed model is generic and even works when there are $n$ views
of interest and only one pivot view which acts as a bridge between them. There
are two specific downstream applications that we focus on: (i) transfer learning
between languages $L_1$,$L_2$,...,$L_n$ using a pivot language $L$ and (ii)
cross modal access between images and a language $L_1$ using a pivot language
$L_2$. Our model achieves state-of-the-art performance in multilingual document
classification on the publicly available multilingual TED corpus and promising
results in multilingual multimodal retrieval on a new dataset created and
released as a part of this work.
|
[
{
"version": "v1",
"created": "Tue, 13 Oct 2015 03:25:18 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Feb 2016 07:44:01 GMT"
},
{
"version": "v3",
"created": "Fri, 1 Jul 2016 09:01:19 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Rajendran",
"Janarthanan",
""
],
[
"Khapra",
"Mitesh M.",
""
],
[
"Chandar",
"Sarath",
""
],
[
"Ravindran",
"Balaraman",
""
]
] |
TITLE: Bridge Correlational Neural Networks for Multilingual Multimodal
Representation Learning
ABSTRACT: Recently there has been a lot of interest in learning common representations
for multiple views of data. Typically, such common representations are learned
using a parallel corpus between the two views (say, 1M images and their English
captions). In this work, we address a real-world scenario where no direct
parallel data is available between two views of interest (say, $V_1$ and $V_2$)
but parallel data is available between each of these views and a pivot view
($V_3$). We propose a model for learning a common representation for $V_1$,
$V_2$ and $V_3$ using only the parallel data available between $V_1V_3$ and
$V_2V_3$. The proposed model is generic and even works when there are $n$ views
of interest and only one pivot view which acts as a bridge between them. There
are two specific downstream applications that we focus on: (i) transfer learning
between languages $L_1$,$L_2$,...,$L_n$ using a pivot language $L$ and (ii)
cross modal access between images and a language $L_1$ using a pivot language
$L_2$. Our model achieves state-of-the-art performance in multilingual document
classification on the publicly available multilingual TED corpus and promising
results in multilingual multimodal retrieval on a new dataset created and
released as a part of this work.
|
1511.05526
|
Matthew Walter
|
Zhengyang Wu and Mohit Bansal and Matthew R. Walter
|
Learning Articulated Motion Models from Visual and Lingual Signals
| null | null | null | null |
cs.RO cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order for robots to operate effectively in homes and workplaces, they must
be able to manipulate the articulated objects common within environments built
for and by humans. Previous work learns kinematic models that prescribe this
manipulation from visual demonstrations. Lingual signals, such as natural
language descriptions and instructions, offer a complementary means of
conveying knowledge of such manipulation models and are suitable to a wide
range of interactions (e.g., remote manipulation). In this paper, we present a
multimodal learning framework that incorporates both visual and lingual
information to estimate the structure and parameters that define kinematic
models of articulated objects. The visual signal takes the form of an RGB-D
image stream that opportunistically captures object motion in an unprepared
scene. Accompanying natural language descriptions of the motion constitute the
lingual signal. We present a probabilistic language model that uses word
embeddings to associate lingual verbs with their corresponding kinematic
structures. By exploiting the complementary nature of the visual and lingual
input, our method infers correct kinematic structures for various multiple-part
objects on which the previous state-of-the-art, visual-only system fails. We
evaluate our multimodal learning framework on a dataset comprised of a variety
of household objects, and demonstrate a 36% improvement in model accuracy over
the vision-only baseline.
|
[
{
"version": "v1",
"created": "Tue, 17 Nov 2015 19:55:34 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Jul 2016 14:53:28 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Wu",
"Zhengyang",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Walter",
"Matthew R.",
""
]
] |
TITLE: Learning Articulated Motion Models from Visual and Lingual Signals
ABSTRACT: In order for robots to operate effectively in homes and workplaces, they must
be able to manipulate the articulated objects common within environments built
for and by humans. Previous work learns kinematic models that prescribe this
manipulation from visual demonstrations. Lingual signals, such as natural
language descriptions and instructions, offer a complementary means of
conveying knowledge of such manipulation models and are suitable to a wide
range of interactions (e.g., remote manipulation). In this paper, we present a
multimodal learning framework that incorporates both visual and lingual
information to estimate the structure and parameters that define kinematic
models of articulated objects. The visual signal takes the form of an RGB-D
image stream that opportunistically captures object motion in an unprepared
scene. Accompanying natural language descriptions of the motion constitute the
lingual signal. We present a probabilistic language model that uses word
embeddings to associate lingual verbs with their corresponding kinematic
structures. By exploiting the complementary nature of the visual and lingual
input, our method infers correct kinematic structures for various multiple-part
objects on which the previous state-of-the-art, visual-only system fails. We
evaluate our multimodal learning framework on a dataset comprised of a variety
of household objects, and demonstrate a 36% improvement in model accuracy over
the vision-only baseline.
|
1603.07252
|
Jianpeng Cheng
|
Jianpeng Cheng, Mirella Lapata
|
Neural Summarization by Extracting Sentences and Words
|
ACL2016 conference paper with appendix
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional approaches to extractive summarization rely heavily on
human-engineered features. In this work we propose a data-driven approach based
on neural networks and continuous sentence features. We develop a general
framework for single-document summarization composed of a hierarchical document
encoder and an attention-based extractor. This architecture allows us to
develop different classes of summarization models which can extract sentences
or words. We train our models on large scale corpora containing hundreds of
thousands of document-summary pairs. Experimental results on two summarization
datasets demonstrate that our models obtain results comparable to the state of
the art without any access to linguistic annotation.
|
[
{
"version": "v1",
"created": "Wed, 23 Mar 2016 16:05:46 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Jun 2016 13:41:50 GMT"
},
{
"version": "v3",
"created": "Fri, 1 Jul 2016 03:16:03 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Cheng",
"Jianpeng",
""
],
[
"Lapata",
"Mirella",
""
]
] |
TITLE: Neural Summarization by Extracting Sentences and Words
ABSTRACT: Traditional approaches to extractive summarization rely heavily on
human-engineered features. In this work we propose a data-driven approach based
on neural networks and continuous sentence features. We develop a general
framework for single-document summarization composed of a hierarchical document
encoder and an attention-based extractor. This architecture allows us to
develop different classes of summarization models which can extract sentences
or words. We train our models on large scale corpora containing hundreds of
thousands of document-summary pairs. Experimental results on two summarization
datasets demonstrate that our models obtain results comparable to the state of
the art without any access to linguistic annotation.
|
1607.00067
|
Fariba Yousefi
|
Fariba Yousefi, Zhenwen Dai, Carl Henrik Ek, Neil Lawrence
|
Unsupervised Learning with Imbalanced Data via Structure Consolidation
Latent Variable Model
|
ICLR 2016 Workshop
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised learning on imbalanced data is challenging because, when given
imbalanced data, current models are often dominated by the majority category and
ignore the categories with small amounts of data. We develop a latent variable
model that can cope with imbalanced data by dividing the latent space into a
shared space and a private space. Based on Gaussian Process Latent Variable
Models, we propose a new kernel formulation that enables the separation of
latent space and derives an efficient variational inference method. The
performance of our model is demonstrated with an imbalanced medical image
dataset.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2016 22:25:20 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Yousefi",
"Fariba",
""
],
[
"Dai",
"Zhenwen",
""
],
[
"Ek",
"Carl Henrik",
""
],
[
"Lawrence",
"Neil",
""
]
] |
TITLE: Unsupervised Learning with Imbalanced Data via Structure Consolidation
Latent Variable Model
ABSTRACT: Unsupervised learning on imbalanced data is challenging because, when given
imbalanced data, current models are often dominated by the majority category and
ignore the categories with small amounts of data. We develop a latent variable
model that can cope with imbalanced data by dividing the latent space into a
shared space and a private space. Based on Gaussian Process Latent Variable
Models, we propose a new kernel formulation that enables the separation of the
latent space, and we derive an efficient variational inference method. The
performance of our model is demonstrated with an imbalanced medical image
dataset.
|
1607.00070
|
Layla El Asri
|
Layla El Asri and Jing He and Kaheer Suleman
|
A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue
Systems
|
Accepted for publication at Interspeech 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
User simulation is essential for generating enough data to train a
statistical spoken dialogue system. Previous models for user simulation suffer
from several drawbacks, such as the inability to take dialogue history into
account, the need for rigid structure to ensure coherent user behaviour, heavy
dependence on a specific domain, the inability to output several user
intentions during one dialogue turn, or the requirement of a summarized action
space for tractability. This paper introduces a data-driven user simulator
based on an encoder-decoder recurrent neural network. The model takes as input
a sequence of dialogue contexts and outputs a sequence of dialogue acts
corresponding to user intentions. The dialogue contexts include information
about the machine acts and the status of the user goal. We show on the Dialogue
State Tracking Challenge 2 (DSTC2) dataset that the sequence-to-sequence model
outperforms an agenda-based simulator and an n-gram simulator, according to
F-score. Furthermore, we show how this model can be used on the original action
space and thereby models user behaviour with finer granularity.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2016 22:51:00 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Asri",
"Layla El",
""
],
[
"He",
"Jing",
""
],
[
"Suleman",
"Kaheer",
""
]
] |
TITLE: A Sequence-to-Sequence Model for User Simulation in Spoken Dialogue
Systems
ABSTRACT: User simulation is essential for generating enough data to train a
statistical spoken dialogue system. Previous models for user simulation suffer
from several drawbacks, such as the inability to take dialogue history into
account, the need for rigid structure to ensure coherent user behaviour, heavy
dependence on a specific domain, the inability to output several user
intentions during one dialogue turn, or the requirement of a summarized action
space for tractability. This paper introduces a data-driven user simulator
based on an encoder-decoder recurrent neural network. The model takes as input
a sequence of dialogue contexts and outputs a sequence of dialogue acts
corresponding to user intentions. The dialogue contexts include information
about the machine acts and the status of the user goal. We show on the Dialogue
State Tracking Challenge 2 (DSTC2) dataset that the sequence-to-sequence model
outperforms an agenda-based simulator and an n-gram simulator, according to
F-score. Furthermore, we show how this model can be used on the original action
space, thereby modelling user behaviour with finer granularity.
|
1607.00110
|
Iman Alodah
|
Iman Alodah and Jennifer Neville
|
Combining Gradient Boosting Machines with Collective Inference to
Predict Continuous Values
|
7 pages, 3 Figures, Sixth International Workshop on Statistical
Relational AI
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gradient boosting of regression trees is a competitive procedure for learning
predictive models of continuous data that fits the data with an additive
non-parametric model. The classic version of gradient boosting assumes that the
data is independent and identically distributed. However, relational data with
interdependent, linked instances is now common and the dependencies in such
data can be exploited to improve predictive performance. Collective inference
is one approach to exploit relational correlation patterns and significantly
reduce classification error. However, much of the work on collective learning
and inference has focused on discrete prediction tasks rather than continuous.
In
this work, we investigate how to combine these two paradigms together to
improve regression in relational domains. Specifically, we propose a boosting
algorithm for learning a collective inference model that predicts a continuous
target variable. In the algorithm, we learn a basic relational model,
collectively infer the target values, and then iteratively learn relational
models to predict the residuals. We evaluate our proposed algorithm on a real
network dataset and show that it outperforms alternative boosting methods.
However, our investigation also revealed that the relational features interact
to produce better predictions.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 05:21:15 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Alodah",
"Iman",
""
],
[
"Neville",
"Jennifer",
""
]
] |
TITLE: Combining Gradient Boosting Machines with Collective Inference to
Predict Continuous Values
ABSTRACT: Gradient boosting of regression trees is a competitive procedure for learning
predictive models of continuous data that fits the data with an additive
non-parametric model. The classic version of gradient boosting assumes that the
data is independent and identically distributed. However, relational data with
interdependent, linked instances is now common and the dependencies in such
data can be exploited to improve predictive performance. Collective inference
is one approach to exploit relational correlation patterns and significantly
reduce classification error. However, much of the work on collective learning
and inference has focused on discrete prediction tasks rather than continuous.
In
this work, we investigate how to combine these two paradigms together to
improve regression in relational domains. Specifically, we propose a boosting
algorithm for learning a collective inference model that predicts a continuous
target variable. In the algorithm, we learn a basic relational model,
collectively infer the target values, and then iteratively learn relational
models to predict the residuals. We evaluate our proposed algorithm on a real
network dataset and show that it outperforms alternative boosting methods.
However, our investigation also revealed that the relational features interact
to produce better predictions.
|
1607.00136
|
Collins Leke
|
Collins Leke and Tshilidzi Marwala
|
Missing Data Estimation in High-Dimensional Datasets: A Swarm
Intelligence-Deep Neural Network Approach
|
12 pages, 3 figures
| null | null | null |
cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we examine the problem of missing data in high-dimensional
datasets by taking into consideration the Missing Completely at Random and
Missing at Random mechanisms, as well as the arbitrary missing pattern.
Additionally, this paper employs a methodology based on Deep Learning and Swarm
Intelligence algorithms in order to provide reliable estimates for missing
data. The deep learning technique is used to extract features from the input
data via an unsupervised learning approach by modeling the data distribution
based on the input. This deep learning technique is then used as part of the
objective function for the swarm intelligence technique in order to estimate
the missing data after a supervised fine-tuning phase by minimizing an error
function based on the interrelationship and correlation between features in the
dataset. The methodology investigated in this paper therefore has longer
running times; however, the promising potential outcomes justify the trade-off.
Also, basic knowledge of statistics is presumed.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 07:34:50 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Leke",
"Collins",
""
],
[
"Marwala",
"Tshilidzi",
""
]
] |
TITLE: Missing Data Estimation in High-Dimensional Datasets: A Swarm
Intelligence-Deep Neural Network Approach
ABSTRACT: In this paper, we examine the problem of missing data in high-dimensional
datasets by taking into consideration the Missing Completely at Random and
Missing at Random mechanisms, as well as the arbitrary missing pattern.
Additionally, this paper employs a methodology based on Deep Learning and Swarm
Intelligence algorithms in order to provide reliable estimates for missing
data. The deep learning technique is used to extract features from the input
data via an unsupervised learning approach by modeling the data distribution
based on the input. This deep learning technique is then used as part of the
objective function for the swarm intelligence technique in order to estimate
the missing data after a supervised fine-tuning phase by minimizing an error
function based on the interrelationship and correlation between features in the
dataset. The methodology investigated in this paper therefore has longer
running times; however, the promising potential outcomes justify the trade-off.
Also, basic knowledge of statistics is presumed.
|
1607.00137
|
Chunlei Peng
|
Chunlei Peng, Xinbo Gao, Nannan Wang, Jie Li
|
Sparse Graphical Representation based Discriminant Analysis for
Heterogeneous Face Recognition
|
13 pages, 17 figures, submitted to IEEE TNNLS
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face images captured in heterogeneous environments, e.g., sketches generated
by the artists or composite-generation software, photos taken by common cameras
and infrared images captured by corresponding infrared imaging devices, are
usually subject to large texture (i.e., style) differences. This results in heavily
degraded performance of conventional face recognition methods in comparison
with the performance on images captured in homogeneous environments. In this
paper, we propose a novel sparse graphical representation based discriminant
analysis (SGR-DA) approach to address the aforementioned face recognition in
heterogeneous scenarios. An adaptive sparse graphical representation scheme is
designed to represent heterogeneous face images, where a Markov network model
is constructed to generate adaptive sparse vectors. To handle the complex
facial structure and further improve the discriminability, a spatial
partition-based discriminant analysis framework is presented to refine the
adaptive sparse vectors for face matching. We conducted experiments on six
commonly used heterogeneous face datasets and experimental results illustrate
that our proposed SGR-DA approach achieves superior performance in comparison
with state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 07:41:25 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Peng",
"Chunlei",
""
],
[
"Gao",
"Xinbo",
""
],
[
"Wang",
"Nannan",
""
],
[
"Li",
"Jie",
""
]
] |
TITLE: Sparse Graphical Representation based Discriminant Analysis for
Heterogeneous Face Recognition
ABSTRACT: Face images captured in heterogeneous environments, e.g., sketches generated
by the artists or composite-generation software, photos taken by common cameras
and infrared images captured by corresponding infrared imaging devices, are
usually subject to large texture (i.e., style) differences. This results in heavily
degraded performance of conventional face recognition methods in comparison
with the performance on images captured in homogeneous environments. In this
paper, we propose a novel sparse graphical representation based discriminant
analysis (SGR-DA) approach to address the aforementioned face recognition in
heterogeneous scenarios. An adaptive sparse graphical representation scheme is
designed to represent heterogeneous face images, where a Markov network model
is constructed to generate adaptive sparse vectors. To handle the complex
facial structure and further improve the discriminability, a spatial
partition-based discriminant analysis framework is presented to refine the
adaptive sparse vectors for face matching. We conducted experiments on six
commonly used heterogeneous face datasets and experimental results illustrate
that our proposed SGR-DA approach achieves superior performance in comparison
with state-of-the-art methods.
|
1607.00146
|
Kush Bhatia
|
Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, Purushottam Kar
|
Efficient and Consistent Robust Time Series Analysis
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of robust time series analysis under the standard
auto-regressive (AR) time series model in the presence of arbitrary outliers.
We devise an efficient hard thresholding based algorithm which can obtain a
consistent estimate of the optimal AR model despite a large fraction of the
time series points being corrupted. Our algorithm alternately estimates the
corrupted set of points and the model parameters, and is inspired by recent
advances in robust regression and hard-thresholding methods. However, a direct
application of existing techniques is hindered by a critical difference in the
time-series domain: each point is correlated with all previous points rendering
existing tools inapplicable directly. We show how to overcome this hurdle using
novel proof techniques. Using our techniques, we are also able to provide the
first efficient and provably consistent estimator for the robust regression
problem where a standard linear observation model with white additive noise is
corrupted arbitrarily. We illustrate our methods on synthetic datasets and show
that our methods indeed are able to consistently recover the optimal parameters
despite a large fraction of points being corrupted.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 08:17:27 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Bhatia",
"Kush",
""
],
[
"Jain",
"Prateek",
""
],
[
"Kamalaruban",
"Parameswaran",
""
],
[
"Kar",
"Purushottam",
""
]
] |
TITLE: Efficient and Consistent Robust Time Series Analysis
ABSTRACT: We study the problem of robust time series analysis under the standard
auto-regressive (AR) time series model in the presence of arbitrary outliers.
We devise an efficient hard thresholding based algorithm which can obtain a
consistent estimate of the optimal AR model despite a large fraction of the
time series points being corrupted. Our algorithm alternately estimates the
corrupted set of points and the model parameters, and is inspired by recent
advances in robust regression and hard-thresholding methods. However, a direct
application of existing techniques is hindered by a critical difference in the
time-series domain: each point is correlated with all previous points rendering
existing tools inapplicable directly. We show how to overcome this hurdle using
novel proof techniques. Using our techniques, we are also able to provide the
first efficient and provably consistent estimator for the robust regression
problem where a standard linear observation model with white additive noise is
corrupted arbitrarily. We illustrate our methods on synthetic datasets and show
that our methods indeed are able to consistently recover the optimal parameters
despite a large fraction of points being corrupted.
|
1607.00225
|
Chris Emmery
|
St\'ephan Tulkens, Chris Emmery, Walter Daelemans
|
Evaluating Unsupervised Dutch Word Embeddings as a Linguistic Resource
|
in LREC 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Word embeddings have recently seen a strong increase in interest as a result
of strong performance gains on a variety of tasks. However, most of this
research also underlined the importance of benchmark datasets, and the
difficulty of constructing these for a variety of language-specific tasks.
Still, many of the datasets used in these tasks could prove to be fruitful
linguistic resources, allowing for unique observations into language use and
variability. In this paper we demonstrate the performance of multiple types of
embeddings, created with both count and prediction-based architectures on a
variety of corpora, in two language-specific tasks: relation evaluation, and
dialect identification. For the latter, we compare unsupervised methods with a
traditional, hand-crafted dictionary. With this research, we provide the
embeddings themselves, the relation evaluation task benchmark for use in
further research, and demonstrate how the benchmarked embeddings prove a useful
unsupervised linguistic resource, effectively used in a downstream task.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 12:48:35 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Tulkens",
"Stéphan",
""
],
[
"Emmery",
"Chris",
""
],
[
"Daelemans",
"Walter",
""
]
] |
TITLE: Evaluating Unsupervised Dutch Word Embeddings as a Linguistic Resource
ABSTRACT: Word embeddings have recently seen a strong increase in interest as a result
of strong performance gains on a variety of tasks. However, most of this
research also underlined the importance of benchmark datasets, and the
difficulty of constructing these for a variety of language-specific tasks.
Still, many of the datasets used in these tasks could prove to be fruitful
linguistic resources, allowing for unique observations into language use and
variability. In this paper we demonstrate the performance of multiple types of
embeddings, created with both count-based and prediction-based architectures on a
variety of corpora, in two language-specific tasks: relation evaluation, and
dialect identification. For the latter, we compare unsupervised methods with a
traditional, hand-crafted dictionary. With this research, we provide the
embeddings themselves, the relation evaluation task benchmark for use in
further research, and demonstrate how the benchmarked embeddings prove a useful
unsupervised linguistic resource, effectively used in a downstream task.
|
1607.00273
|
Pablo F. Alcantarilla Dr.
|
Pablo F. Alcantarilla and Oliver J. Woodford
|
Noise Models in Feature-based Stereo Visual Odometry
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feature-based visual structure and motion reconstruction pipelines, common in
visual odometry and large-scale reconstruction from photos, use the location of
corresponding features in different images to determine the 3D structure of the
scene, as well as the camera parameters associated with each image. The noise
model, which defines the likelihood of the location of each feature in each
image, is a key factor in the accuracy of such pipelines, alongside
the optimization strategy. Many different noise models have been proposed in the
literature; in this paper we investigate the performance of several. We
evaluate these models specifically w.r.t. stereo visual odometry, as this task
is both simple (camera intrinsics are constant and known; geometry can be
initialized reliably) and has datasets with ground truth readily available
(KITTI Odometry and New Tsukuba Stereo Dataset). Our evaluation shows that
noise models which are more adaptable to the varying nature of noise generally
perform better.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 15:02:38 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Alcantarilla",
"Pablo F.",
""
],
[
"Woodford",
"Oliver J.",
""
]
] |
TITLE: Noise Models in Feature-based Stereo Visual Odometry
ABSTRACT: Feature-based visual structure and motion reconstruction pipelines, common in
visual odometry and large-scale reconstruction from photos, use the location of
corresponding features in different images to determine the 3D structure of the
scene, as well as the camera parameters associated with each image. The noise
model, which defines the likelihood of the location of each feature in each
image, is a key factor in the accuracy of such pipelines, alongside
the optimization strategy. Many different noise models have been proposed in the
literature; in this paper we investigate the performance of several. We
evaluate these models specifically w.r.t. stereo visual odometry, as this task
is both simple (camera intrinsics are constant and known; geometry can be
initialized reliably) and has datasets with ground truth readily available
(KITTI Odometry and New Tsukuba Stereo Dataset). Our evaluation shows that
noise models which are more adaptable to the varying nature of noise generally
perform better.
|
1607.00315
|
Javier Turek Mr.
|
Eran Treister and Javier S. Turek and Irad Yavneh
|
A multilevel framework for sparse optimization with application to
inverse covariance estimation and logistic regression
|
To appear in the SISC journal
| null | null | null |
cs.NA math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Solving l1 regularized optimization problems is common in the fields of
computational biology, signal processing and machine learning. Such l1
regularization is utilized to find sparse minimizers of convex functions. A
well-known example is the LASSO problem, where the l1 norm regularizes a
quadratic function. A multilevel framework is presented for solving such l1
regularized sparse optimization problems efficiently. We take advantage of the
expected sparseness of the solution, and create a hierarchy of problems of
similar type, which is traversed in order to accelerate the optimization
process. This framework is applied for solving two problems: (1) the sparse
inverse covariance estimation problem, and (2) l1-regularized logistic
regression. In the first problem, the inverse of an unknown covariance matrix
of a multivariate normal distribution is estimated, under the assumption that
it is sparse. To this end, an l1 regularized log-determinant optimization
problem needs to be solved. This task is challenging especially for large-scale
datasets, due to time and memory limitations. In the second problem, the
l1-regularization is added to the logistic regression classification objective
to reduce overfitting to the data and obtain a sparse model. Numerical
experiments demonstrate the efficiency of the multilevel framework in
accelerating existing iterative solvers for both of these problems.
|
[
{
"version": "v1",
"created": "Fri, 1 Jul 2016 16:59:13 GMT"
}
] | 2016-07-04T00:00:00 |
[
[
"Treister",
"Eran",
""
],
[
"Turek",
"Javier S.",
""
],
[
"Yavneh",
"Irad",
""
]
] |
TITLE: A multilevel framework for sparse optimization with application to
inverse covariance estimation and logistic regression
ABSTRACT: Solving l1 regularized optimization problems is common in the fields of
computational biology, signal processing and machine learning. Such l1
regularization is utilized to find sparse minimizers of convex functions. A
well-known example is the LASSO problem, where the l1 norm regularizes a
quadratic function. A multilevel framework is presented for solving such l1
regularized sparse optimization problems efficiently. We take advantage of the
expected sparseness of the solution, and create a hierarchy of problems of
similar type, which is traversed in order to accelerate the optimization
process. This framework is applied for solving two problems: (1) the sparse
inverse covariance estimation problem, and (2) l1-regularized logistic
regression. In the first problem, the inverse of an unknown covariance matrix
of a multivariate normal distribution is estimated, under the assumption that
it is sparse. To this end, an l1 regularized log-determinant optimization
problem needs to be solved. This task is challenging especially for large-scale
datasets, due to time and memory limitations. In the second problem, the
l1-regularization is added to the logistic regression classification objective
to reduce overfitting to the data and obtain a sparse model. Numerical
experiments demonstrate the efficiency of the multilevel framework in
accelerating existing iterative solvers for both of these problems.
|
1510.02795
|
S{\o}ren Hauberg
|
S{\o}ren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John W.
Fisher III, Lars Kai Hansen
|
Dreaming More Data: Class-dependent Distributions over Diffeomorphisms
for Learned Data Augmentation
| null |
Proceedings of the 19th International Conference on Artificial
Intelligence and Statistics, pp. 342-350, 2016
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data augmentation is a key element in training high-dimensional models. In
this approach, one synthesizes new observations by applying pre-specified
transformations to the original training data; e.g.~new images are formed by
rotating old ones. Current augmentation schemes, however, rely on manual
specification of the applied transformations, making data augmentation an
implicit form of feature engineering. With an eye towards true end-to-end
learning, we suggest learning the applied transformations on a per-class basis.
Particularly, we align image pairs within each class under the assumption that
the spatial transformation between images belongs to a large class of
diffeomorphisms. We then learn class-specific probabilistic generative models
of the transformations in a Riemannian submanifold of the Lie group of
diffeomorphisms. We demonstrate significant performance improvements in
training deep neural nets over manually-specified augmentation schemes. Our
code and augmented datasets are available online.
|
[
{
"version": "v1",
"created": "Fri, 9 Oct 2015 20:00:47 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jun 2016 06:12:38 GMT"
}
] | 2016-07-01T00:00:00 |
[
[
"Hauberg",
"Søren",
""
],
[
"Freifeld",
"Oren",
""
],
[
"Larsen",
"Anders Boesen Lindbo",
""
],
[
"Fisher",
"John W.",
"III"
],
[
"Hansen",
"Lars Kai",
""
]
] |
TITLE: Dreaming More Data: Class-dependent Distributions over Diffeomorphisms
for Learned Data Augmentation
ABSTRACT: Data augmentation is a key element in training high-dimensional models. In
this approach, one synthesizes new observations by applying pre-specified
transformations to the original training data; e.g.~new images are formed by
rotating old ones. Current augmentation schemes, however, rely on manual
specification of the applied transformations, making data augmentation an
implicit form of feature engineering. With an eye towards true end-to-end
learning, we suggest learning the applied transformations on a per-class basis.
Particularly, we align image pairs within each class under the assumption that
the spatial transformation between images belongs to a large class of
diffeomorphisms. We then learn class-specific probabilistic generative models
of the transformations in a Riemannian submanifold of the Lie group of
diffeomorphisms. We demonstrate significant performance improvements in
training deep neural nets over manually-specified augmentation schemes. Our
code and augmented datasets are available online.
|
1606.09349
|
Yuzhong Xie
|
Zhong Ji, Yuzhong Xie, Yanwei Pang, Lei Chen, Zhongfei Zhang
|
Zero-Shot Learning with Multi-Battery Factor Analysis
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Zero-shot learning (ZSL) extends the conventional image classification
technique to a more challenging situation where the test image categories are
not seen in the training samples. Most studies on ZSL utilize side information
such as attributes or word vectors to bridge the relations between the seen
classes and the unseen classes. However, existing approaches on ZSL typically
exploit a shared space for each type of side information independently, which
cannot make full use of the complementary knowledge of different types of side
information. To this end, this paper presents an MBFA-ZSL approach to embed
different types of side information as well as the visual feature into one
shared space. Specifically, we first develop an algorithm named Multi-Battery
Factor Analysis (MBFA) to build a unified semantic space, and then employ
multiple types of side information in it to achieve the ZSL. The closed-form
solution makes MBFA-ZSL simple to implement and efficient to run on large
datasets. Extensive experiments on the popular AwA, CUB, and SUN datasets show
its significant superiority over the state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Thu, 30 Jun 2016 05:32:37 GMT"
}
] | 2016-07-01T00:00:00 |
[
[
"Ji",
"Zhong",
""
],
[
"Xie",
"Yuzhong",
""
],
[
"Pang",
"Yanwei",
""
],
[
"Chen",
"Lei",
""
],
[
"Zhang",
"Zhongfei",
""
]
] |
TITLE: Zero-Shot Learning with Multi-Battery Factor Analysis
ABSTRACT: Zero-shot learning (ZSL) extends the conventional image classification
technique to a more challenging situation where the test image categories are
not seen in the training samples. Most studies on ZSL utilize side information
such as attributes or word vectors to bridge the relations between the seen
classes and the unseen classes. However, existing approaches on ZSL typically
exploit a shared space for each type of side information independently, which
cannot make full use of the complementary knowledge of different types of side
information. To this end, this paper presents an MBFA-ZSL approach to embed
different types of side information as well as the visual feature into one
shared space. Specifically, we first develop an algorithm named Multi-Battery
Factor Analysis (MBFA) to build a unified semantic space, and then employ
multiple types of side information in it to achieve the ZSL. The closed-form
solution makes MBFA-ZSL simple to implement and efficient to run on large
datasets. Extensive experiments on the popular AwA, CUB, and SUN datasets show
its significant superiority over the state-of-the-art approaches.
|