id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1511.05643
|
Md Kamrul Hasan
|
Md Kamrul Hasan, Christopher J. Pal
|
A New Smooth Approximation to the Zero One Loss with a Probabilistic
Interpretation
|
32 pages, 7 figures, 15 tables
| null | null | null |
cs.CV cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We examine a new form of smooth approximation to the zero one loss in which
learning is performed using a reformulation of the widely used logistic
function. Our approach is based on using the posterior mean of a novel
generalized Beta-Bernoulli formulation. This leads to a generalized logistic
function that approximates the zero one loss, but retains a probabilistic
formulation conferring a number of useful properties. The approach is easily
generalized to kernel logistic regression and easily integrated into methods
for structured prediction. We present experiments in which we learn such models
using an optimization method consisting of a combination of gradient descent
and coordinate descent using localized grid search so as to escape from local
minima. Our experiments indicate that optimization quality is improved when
learning meta-parameters are themselves optimized using a validation set. Our
experiments show improved performance relative to widely used logistic and
hinge loss methods on a wide variety of problems ranging from standard UC
Irvine and libSVM evaluation datasets to product review predictions and a
visual information extraction task. We observe that the approach: 1) is more
robust to outliers compared to the logistic and hinge losses; 2) outperforms
comparable logistic and max margin models on larger scale benchmark problems;
3) when combined with a Gaussian-Laplacian mixture prior on parameters, the
kernelized version of our formulation yields sparser solutions than Support
Vector Machine classifiers; and 4) when integrated into a probabilistic
structured prediction technique our approach provides more accurate
probabilities yielding improved inference and increasing information extraction
performance.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2015 02:31:16 GMT"
}
] | 2015-11-19T00:00:00 |
[
[
"Hasan",
"Md Kamrul",
""
],
[
"Pal",
"Christopher J.",
""
]
] |
TITLE: A New Smooth Approximation to the Zero One Loss with a Probabilistic
Interpretation
ABSTRACT: We examine a new form of smooth approximation to the zero one loss in which
learning is performed using a reformulation of the widely used logistic
function. Our approach is based on using the posterior mean of a novel
generalized Beta-Bernoulli formulation. This leads to a generalized logistic
function that approximates the zero one loss, but retains a probabilistic
formulation conferring a number of useful properties. The approach is easily
generalized to kernel logistic regression and easily integrated into methods
for structured prediction. We present experiments in which we learn such models
using an optimization method consisting of a combination of gradient descent
and coordinate descent using localized grid search so as to escape from local
minima. Our experiments indicate that optimization quality is improved when
learning meta-parameters are themselves optimized using a validation set. Our
experiments show improved performance relative to widely used logistic and
hinge loss methods on a wide variety of problems ranging from standard UC
Irvine and libSVM evaluation datasets to product review predictions and a
visual information extraction task. We observe that the approach: 1) is more
robust to outliers compared to the logistic and hinge losses; 2) outperforms
comparable logistic and max margin models on larger scale benchmark problems;
3) when combined with a Gaussian-Laplacian mixture prior on parameters, the
kernelized version of our formulation yields sparser solutions than Support
Vector Machine classifiers; and 4) when integrated into a probabilistic
structured prediction technique our approach provides more accurate
probabilities yielding improved inference and increasing information extraction
performance.
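The core idea in this abstract, a smooth, tunable stand-in for the zero-one loss, can be sketched with a plain logistic curve whose steepness parameter drives it toward the hard step (this `gamma` is an illustrative stand-in for the paper's generalized Beta-Bernoulli construction, not its exact form):

```python
import math

def zero_one(margin):
    """Zero-one loss on the signed margin y * f(x)."""
    return 0.0 if margin > 0 else 1.0

def smooth_zero_one(margin, gamma=1.0):
    """Logistic approximation to the zero-one loss; larger gamma makes it
    steeper. Illustrative stand-in for the paper's Beta-Bernoulli form."""
    return 1.0 / (1.0 + math.exp(gamma * margin))

# As gamma grows, the smooth loss approaches the hard step at margin 0.
for gamma in (1.0, 10.0, 100.0):
    print(gamma, smooth_zero_one(0.5, gamma), smooth_zero_one(-0.5, gamma))
```

The smooth version stays differentiable everywhere, which is what makes gradient-based training possible while still penalizing misclassifications nearly like the zero-one loss at large `gamma`.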
|
1511.05650
|
Seungjin Choi
|
Juho Lee and Seungjin Choi
|
Tree-Guided MCMC Inference for Normalized Random Measure Mixture Models
|
12 pages, 10 figures, NIPS-2015
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Normalized random measures (NRMs) provide a broad class of discrete random
measures that are often used as priors for Bayesian nonparametric models.
The Dirichlet process is a well-known example of an NRM. Most posterior inference
methods for NRM mixture models rely on MCMC methods since they are easy to
implement and their convergence is well studied. However, MCMC often suffers
from slow convergence when the acceptance rate is low. Tree-based inference is
an alternative deterministic posterior inference method, where Bayesian
hierarchical clustering (BHC) or incremental Bayesian hierarchical clustering
(IBHC) have been developed for DP or NRM mixture (NRMM) models, respectively.
Although IBHC is a promising method for posterior inference for NRMM models due
to its efficiency and applicability to online inference, its convergence is not
guaranteed since it uses heuristics that simply select the best solution after
multiple trials are made. In this paper, we present a hybrid inference
algorithm for NRMM models, which combines the merits of both MCMC and IBHC.
Trees built by IBHC outline partitions of the data, which guide the
Metropolis-Hastings procedure to employ appropriate proposals. Inheriting the
nature of MCMC, our tree-guided MCMC (tgMCMC) is guaranteed to converge, and
enjoys fast convergence thanks to the effective proposals guided by trees.
Experiments on both synthetic and real-world datasets demonstrate the benefit
of our method.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2015 03:16:27 GMT"
}
] | 2015-11-19T00:00:00 |
[
[
"Lee",
"Juho",
""
],
[
"Choi",
"Seungjin",
""
]
] |
TITLE: Tree-Guided MCMC Inference for Normalized Random Measure Mixture Models
ABSTRACT: Normalized random measures (NRMs) provide a broad class of discrete random
measures that are often used as priors for Bayesian nonparametric models.
The Dirichlet process is a well-known example of an NRM. Most posterior inference
methods for NRM mixture models rely on MCMC methods since they are easy to
implement and their convergence is well studied. However, MCMC often suffers
from slow convergence when the acceptance rate is low. Tree-based inference is
an alternative deterministic posterior inference method, where Bayesian
hierarchical clustering (BHC) or incremental Bayesian hierarchical clustering
(IBHC) have been developed for DP or NRM mixture (NRMM) models, respectively.
Although IBHC is a promising method for posterior inference for NRMM models due
to its efficiency and applicability to online inference, its convergence is not
guaranteed since it uses heuristics that simply select the best solution after
multiple trials are made. In this paper, we present a hybrid inference
algorithm for NRMM models, which combines the merits of both MCMC and IBHC.
Trees built by IBHC outline partitions of the data, which guide the
Metropolis-Hastings procedure to employ appropriate proposals. Inheriting the
nature of MCMC, our tree-guided MCMC (tgMCMC) is guaranteed to converge, and
enjoys fast convergence thanks to the effective proposals guided by trees.
Experiments on both synthetic and real-world datasets demonstrate the benefit
of our method.
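The accept/reject rule that tgMCMC inherits from Metropolis-Hastings can be sketched in a few lines; the paper's contribution lies in *which* proposals the IBHC tree suggests, not in this rule itself (the normal target and random-walk proposal below are illustrative):

```python
import math
import random

random.seed(1)

def mh_step(x, logp, propose):
    """One Metropolis-Hastings step with a symmetric proposal:
    accept x_new with probability min(1, p(x_new)/p(x))."""
    x_new = propose(x)
    if math.log(random.random()) < logp(x_new) - logp(x):
        return x_new
    return x

# Sample from a standard normal via random-walk proposals.
logp = lambda x: -0.5 * x * x
xs, x = [], 0.0
for _ in range(20000):
    x = mh_step(x, logp, lambda x: x + random.gauss(0, 1))
    xs.append(x)
mean = sum(xs) / len(xs)  # should be near 0 for N(0, 1)
```

Better-matched proposals (e.g. those guided by tree partitions, per the abstract) raise acceptance rates and cut the slow mixing the abstract describes.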
|
1511.05659
|
Aiwen Jiang
|
Aiwen Jiang and Hanxi Li and Yi Li and Mingwen Wang
|
Learning Discriminative Representations for Semantic Cross Media
Retrieval
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The heterogeneous gap among different modalities has emerged as one of the
critical issues in modern AI problems. Unlike traditional uni-modal cases, where raw
features are extracted and directly measured, the heterogeneous nature of cross
modal tasks requires the intrinsic semantic representation to be compared in a
unified framework. This paper studies the learning of different representations
that can be retrieved across different modality contents. A novel approach for
mining cross-modal representations is proposed by incorporating explicit linear
semantic projecting in Hilbert space. The insight is that the discriminative
structures of different modality data can be linearly represented in
appropriate high-dimensional Hilbert spaces, where linear operations can be used
to approximate nonlinear decisions in the original spaces. As a result, an
efficient linear semantic down mapping is jointly learned for multimodal data,
leading to a common space where they can be compared. The mechanism of "feature
up-lifting and down-projecting" works seamlessly as a whole, which accomplishes
cross-modal retrieval tasks well. The proposed method, named shared
discriminative semantic representation learning (\textbf{SDSRL}), is tested on
two public multimodal datasets for both within- and inter-modal retrieval. The
experiments demonstrate that it outperforms several state-of-the-art methods in
most scenarios.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2015 05:20:32 GMT"
}
] | 2015-11-19T00:00:00 |
[
[
"Jiang",
"Aiwen",
""
],
[
"Li",
"Hanxi",
""
],
[
"Li",
"Yi",
""
],
[
"Wang",
"Mingwen",
""
]
] |
TITLE: Learning Discriminative Representations for Semantic Cross Media
Retrieval
ABSTRACT: The heterogeneous gap among different modalities has emerged as one of
the critical issues in modern AI problems. Unlike traditional uni-modal cases, where raw
features are extracted and directly measured, the heterogeneous nature of cross
modal tasks requires the intrinsic semantic representation to be compared in a
unified framework. This paper studies the learning of different representations
that can be retrieved across different modality contents. A novel approach for
mining cross-modal representations is proposed by incorporating explicit linear
semantic projecting in Hilbert space. The insight is that the discriminative
structures of different modality data can be linearly represented in
appropriate high-dimensional Hilbert spaces, where linear operations can be used
to approximate nonlinear decisions in the original spaces. As a result, an
efficient linear semantic down mapping is jointly learned for multimodal data,
leading to a common space where they can be compared. The mechanism of "feature
up-lifting and down-projecting" works seamlessly as a whole, which accomplishes
cross-modal retrieval tasks well. The proposed method, named shared
discriminative semantic representation learning (\textbf{SDSRL}), is tested on
two public multimodal datasets for both within- and inter-modal retrieval. The
experiments demonstrate that it outperforms several state-of-the-art methods in
most scenarios.
|
1511.05676
|
Aiwen Jiang
|
Aiwen Jiang and Fang Wang and Fatih Porikli and Yi Li
|
Compositional Memory for Visual Question Answering
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual Question Answering (VQA) has recently emerged as one of the most
fascinating topics in computer vision. Many state-of-the-art methods naively
feed holistic visual features together with language features into a Long
Short-Term Memory (LSTM) module, neglecting the sophisticated interaction between them. This coarse
modeling also blocks the possibilities of exploring finer-grained local
features that contribute to the question answering dynamically over time.
This paper addresses this fundamental problem by directly modeling the
temporal dynamics between language and all possible local image patches. When
traversing the question words sequentially, our end-to-end approach explicitly
fuses the features associated with the words and the ones available at multiple
local patches in an attention mechanism, and further combines the fused
information to generate dynamic messages, which we call episode. We then feed
the episodes to a standard question answering module together with the
contextual visual information and linguistic information. Motivated by recent
practices in deep learning, we use auxiliary loss functions during training to
improve the performance. Our experiments on two recent public datasets suggest
that our method has superior performance. Notably, on the DAQUAR dataset we
advanced the state of the art by 6$\%$, and we also evaluated our approach on
the most recent MSCOCO-VQA dataset.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2015 07:25:16 GMT"
}
] | 2015-11-19T00:00:00 |
[
[
"Jiang",
"Aiwen",
""
],
[
"Wang",
"Fang",
""
],
[
"Porikli",
"Fatih",
""
],
[
"Li",
"Yi",
""
]
] |
TITLE: Compositional Memory for Visual Question Answering
ABSTRACT: Visual Question Answering (VQA) has recently emerged as one of the most
fascinating topics in computer vision. Many state-of-the-art methods naively
feed holistic visual features together with language features into a Long
Short-Term Memory (LSTM) module, neglecting the sophisticated interaction between them. This coarse
modeling also blocks the possibilities of exploring finer-grained local
features that contribute to the question answering dynamically over time.
This paper addresses this fundamental problem by directly modeling the
temporal dynamics between language and all possible local image patches. When
traversing the question words sequentially, our end-to-end approach explicitly
fuses the features associated with the words and the ones available at multiple
local patches in an attention mechanism, and further combines the fused
information to generate dynamic messages, which we call episode. We then feed
the episodes to a standard question answering module together with the
contextual visual information and linguistic information. Motivated by recent
practices in deep learning, we use auxiliary loss functions during training to
improve the performance. Our experiments on two recent public datasets suggest
that our method has superior performance. Notably, on the DAQUAR dataset we
advanced the state of the art by 6$\%$, and we also evaluated our approach on
the most recent MSCOCO-VQA dataset.
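The word-by-patch fusion step this abstract describes follows the standard soft-attention pattern: score every local patch against the current word, softmax the scores, and mix the patches by those weights. A generic sketch (toy vectors, not the paper's exact architecture):

```python
import math

def attend(word_vec, patch_vecs):
    """Soft attention: dot-product score of each image patch against the
    current word embedding, softmax over patches, then return the
    attention-weighted patch mixture (one 'episode' ingredient)."""
    scores = [sum(w * p for w, p in zip(word_vec, pv)) for pv in patch_vecs]
    m = max(scores)                                # for numerical stability
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    dim = len(patch_vecs[0])
    return [sum(weights[i] * patch_vecs[i][d] for i in range(len(patch_vecs)))
            for d in range(dim)]

# A patch aligned with the word dominates the mixture.
episode = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

Repeating this per question word and feeding the resulting mixtures ("episodes") onward is the dynamic, finer-grained alternative to a single holistic image feature.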
|
1511.05862
|
Jeff Jones Dr
|
Jeff Jones, Richard Mayne, Andrew Adamatzky
|
Representation of Shape Mediated by Environmental Stimuli in Physarum
polycephalum and a Multi-agent Model
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The slime mould Physarum polycephalum is known to construct protoplasmic
transport networks which approximate proximity graphs by foraging for
nutrients during its plasmodial life cycle stage. In these networks, nodes are
represented by nutrients and edges are represented by protoplasmic tubes.
These networks have been shown to be efficient in terms of length and
resilience of the overall network to random damage. However, relatively little
research has examined the potential for Physarum transport networks to
approximate the overall shape of a dataset. In this paper we distinguish
between connectivity and shape of a planar point dataset and demonstrate, using
scoping experiments with plasmodia of P. polycephalum and a multi-agent model
of the organism, how we can generate representations of the external and
internal shapes of a set of points. As with proximity graphs formed by P.
polycephalum, the behaviour of the plasmodium (real and model) is mediated by
environmental stimuli. We further explore potential morphological computation
approaches with the multi-agent model, presenting methods which approximate the
Convex Hull and the Concave Hull. We demonstrate how a growth parameter in the
model can be used to transition between Convex and Concave Hulls. These results
suggest novel mechanisms of morphological computation mediated by environmental
stimuli.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2015 16:26:34 GMT"
}
] | 2015-11-19T00:00:00 |
[
[
"Jones",
"Jeff",
""
],
[
"Mayne",
"Richard",
""
],
[
"Adamatzky",
"Andrew",
""
]
] |
TITLE: Representation of Shape Mediated by Environmental Stimuli in Physarum
polycephalum and a Multi-agent Model
ABSTRACT: The slime mould Physarum polycephalum is known to construct
protoplasmic transport networks which approximate proximity graphs by foraging
for nutrients during its plasmodial life cycle stage. In these networks, nodes
are represented by nutrients and edges are represented by protoplasmic tubes.
These networks have been shown to be efficient in terms of length and
resilience of the overall network to random damage. However, relatively little
research has examined the potential for Physarum transport networks to
approximate the overall shape of a dataset. In this paper we distinguish
between connectivity and shape of a planar point dataset and demonstrate, using
scoping experiments with plasmodia of P. polycephalum and a multi-agent model
of the organism, how we can generate representations of the external and
internal shapes of a set of points. As with proximity graphs formed by P.
polycephalum, the behaviour of the plasmodium (real and model) is mediated by
environmental stimuli. We further explore potential morphological computation
approaches with the multi-agent model, presenting methods which approximate the
Convex Hull and the Concave Hull. We demonstrate how a growth parameter in the
model can be used to transition between Convex and Concave Hulls. These results
suggest novel mechanisms of morphological computation mediated by environmental
stimuli.
|
1511.05914
|
Daniel Barrett
|
Daniel Paul Barrett and Ran Xu and Haonan Yu and Jeffrey Mark Siskind
|
Collecting and Annotating the Large Continuous Action Dataset
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We make available to the community a new dataset to support
action-recognition research. This dataset is different from prior datasets in
several key ways. It is significantly larger. It contains streaming video with
long segments containing multiple action occurrences that often overlap in
space and/or time. All actions were filmed in the same collection of
backgrounds so that background gives little clue as to action class. We had
five humans replicate the annotation of temporal extent of action occurrences
labeled with their class and measured a surprisingly low level of intercoder
agreement. A baseline experiment shows that recent state-of-the-art methods
perform poorly on this dataset, suggesting that it will be a challenging
dataset to foster advances in action-recognition research. This manuscript
serves to describe the novel content and characteristics of the LCA dataset,
present the design decisions made when filming the dataset, and document the
novel methods employed to annotate the dataset.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2015 19:16:58 GMT"
}
] | 2015-11-19T00:00:00 |
[
[
"Barrett",
"Daniel Paul",
""
],
[
"Xu",
"Ran",
""
],
[
"Yu",
"Haonan",
""
],
[
"Siskind",
"Jeffrey Mark",
""
]
] |
TITLE: Collecting and Annotating the Large Continuous Action Dataset
ABSTRACT: We make available to the community a new dataset to support
action-recognition research. This dataset is different from prior datasets in
several key ways. It is significantly larger. It contains streaming video with
long segments containing multiple action occurrences that often overlap in
space and/or time. All actions were filmed in the same collection of
backgrounds so that background gives little clue as to action class. We had
five humans replicate the annotation of temporal extent of action occurrences
labeled with their class and measured a surprisingly low level of intercoder
agreement. A baseline experiment shows that recent state-of-the-art methods
perform poorly on this dataset, suggesting that it will be a challenging
dataset to foster advances in action-recognition research. This manuscript
serves to describe the novel content and characteristics of the LCA dataset,
present the design decisions made when filming the dataset, and document the
novel methods employed to annotate the dataset.
|
1511.05926
|
Thien Nguyen
|
Thien Huu Nguyen and Ralph Grishman
|
Combining Neural Networks and Log-linear Models to Improve Relation
Extraction
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The last decade has witnessed the success of the traditional feature-based
method at exploiting discrete structures such as words or lexical patterns
to extract relations from text. Recently, convolutional and recurrent neural
networks have provided very effective mechanisms to capture the hidden
structures within sentences via continuous representations, thereby
significantly advancing the performance of relation extraction. The advantage
of convolutional neural networks is their capacity to generalize the
consecutive k-grams in the sentences, while recurrent neural networks are
effective at encoding long-range sentence context. This paper proposes to
combine the traditional feature-based method, the convolutional and recurrent
neural networks to simultaneously benefit from their advantages. Our systematic
evaluation of different network architectures and combination methods
demonstrates the effectiveness of this approach and results in the
state-of-the-art performance on the ACE 2005 and SemEval datasets.
|
[
{
"version": "v1",
"created": "Wed, 18 Nov 2015 20:17:39 GMT"
}
] | 2015-11-19T00:00:00 |
[
[
"Nguyen",
"Thien Huu",
""
],
[
"Grishman",
"Ralph",
""
]
] |
TITLE: Combining Neural Networks and Log-linear Models to Improve Relation
Extraction
ABSTRACT: The last decade has witnessed the success of the traditional
feature-based method at exploiting discrete structures such as words or lexical
patterns to extract relations from text. Recently, convolutional and recurrent
neural networks have provided very effective mechanisms to capture the hidden
structures within sentences via continuous representations, thereby
significantly advancing the performance of relation extraction. The advantage
of convolutional neural networks is their capacity to generalize the
consecutive k-grams in the sentences, while recurrent neural networks are
effective at encoding long-range sentence context. This paper proposes to
combine the traditional feature-based method, the convolutional and recurrent
neural networks to simultaneously benefit from their advantages. Our systematic
evaluation of different network architectures and combination methods
demonstrates the effectiveness of this approach and results in the
state-of-the-art performance on the ACE 2005 and SemEval datasets.
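One simple way to combine per-class probabilities from a feature-based model, a CNN, and an RNN, as this abstract proposes, is weighted log-linear interpolation followed by renormalization (a generic fusion scheme; the paper evaluates several combination methods, and the class names and probabilities below are hypothetical):

```python
import math

def combine(dists, weights):
    """Log-linear combination of per-class probability distributions from
    several models: weighted sum of log-probabilities, then softmax."""
    classes = dists[0].keys()
    scores = {c: sum(w * math.log(d[c]) for w, d in zip(weights, dists))
              for c in classes}
    m = max(scores.values())                       # for numerical stability
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

# Hypothetical outputs of three relation classifiers on one sentence pair.
p_feat = {"PART-WHOLE": 0.6, "NONE": 0.4}
p_cnn = {"PART-WHOLE": 0.7, "NONE": 0.3}
p_rnn = {"PART-WHOLE": 0.5, "NONE": 0.5}
fused = combine([p_feat, p_cnn, p_rnn], [1.0, 1.0, 1.0])
```

With equal weights this reduces to a normalized product of the three distributions, so models agreeing on a class reinforce each other while a single dissenter is down-weighted.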
|
1511.05169
|
Siyuan Huang
|
Siyuan Huang, Jiwen Lu, Jie Zhou, Anil K. Jain
|
Nonlinear Local Metric Learning for Person Re-identification
|
Submitted to CVPR 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Person re-identification aims at matching pedestrians observed from
non-overlapping camera views. Feature descriptor and metric learning are two
significant problems in person re-identification. A discriminative metric
learning method should be capable of exploiting complex nonlinear
transformations due to the large variations in feature space. In this paper, we
propose a nonlinear local metric learning (NLML) method to improve the
state-of-the-art performance of person re-identification on public datasets.
Motivated by the fact that local metric learning has been introduced to handle
data that varies locally and that deep neural networks have shown outstanding
capability in exploiting the nonlinearity of samples, we utilize the merits of
both local metric learning and deep neural networks to learn multiple sets of
nonlinear transformations. By enforcing a margin between the distances of
positive pedestrian image pairs and distances of negative pairs in the
transformed feature subspace, discriminative information can be effectively
exploited in the developed neural networks. Our experiments show that the
proposed NLML method achieves the state-of-the-art results on the widely used
VIPeR, GRID, and CUHK 01 datasets.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 21:02:31 GMT"
}
] | 2015-11-18T00:00:00 |
[
[
"Huang",
"Siyuan",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zhou",
"Jie",
""
],
[
"Jain",
"Anil K.",
""
]
] |
TITLE: Nonlinear Local Metric Learning for Person Re-identification
ABSTRACT: Person re-identification aims at matching pedestrians observed from
non-overlapping camera views. Feature descriptor and metric learning are two
significant problems in person re-identification. A discriminative metric
learning method should be capable of exploiting complex nonlinear
transformations due to the large variations in feature space. In this paper, we
propose a nonlinear local metric learning (NLML) method to improve the
state-of-the-art performance of person re-identification on public datasets.
Motivated by the fact that local metric learning has been introduced to handle
data that varies locally and that deep neural networks have shown outstanding
capability in exploiting the nonlinearity of samples, we utilize the merits of
both local metric learning and deep neural networks to learn multiple sets of
nonlinear transformations. By enforcing a margin between the distances of
positive pedestrian image pairs and distances of negative pairs in the
transformed feature subspace, discriminative information can be effectively
exploited in the developed neural networks. Our experiments show that the
proposed NLML method achieves the state-of-the-art results on the widely used
VIPeR, GRID, and CUHK 01 datasets.
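The margin criterion this abstract describes, forcing positive pedestrian pairs to be closer than negative pairs in the transformed space, can be written as a hinge penalty (a generic margin loss in the spirit of the paper's objective, not its exact formulation):

```python
def pair_margin_loss(d_pos, d_neg, margin=1.0):
    """Hinge penalty that is zero only when the negative-pair distance
    exceeds the positive-pair distance by at least `margin`."""
    return max(0.0, margin + d_pos - d_neg)

# Well-separated pairs incur no loss; violations are penalized linearly.
print(pair_margin_loss(0.2, 2.0))  # margin satisfied
print(pair_margin_loss(1.5, 1.8))  # margin violated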
|
1511.05191
|
Mahdi Pakdaman Naeini
|
Mahdi Pakdaman Naeini, Gregory F. Cooper
|
Binary Classifier Calibration using an Ensemble of Near Isotonic
Regression Models
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning accurate probabilistic models from data is crucial in many practical
tasks in data mining. In this paper we present a new non-parametric calibration
method called \textit{ensemble of near isotonic regression} (ENIR). The method
can be considered as an extension of BBQ, a recently proposed calibration
method, as well as the commonly used calibration method based on isotonic
regression. ENIR is designed to address the key limitation of isotonic
regression, namely its monotonicity assumption on the predictions. Similar to
BBQ, the method post-processes the output of a binary classifier to obtain
calibrated probabilities. Thus it can be combined with many existing
classification models. We demonstrate the performance of ENIR on synthetic and
real datasets for the commonly used binary classification models. Experimental
results show that the method outperforms several common binary classifier
calibration methods. In particular on the real data, ENIR commonly performs
statistically significantly better than the other methods, and never worse. It
is able to improve the calibration power of classifiers, while retaining their
discrimination power. The method is also computationally tractable for large
scale datasets, as it is $O(N \log N)$ time, where $N$ is the number of
samples.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 21:46:40 GMT"
}
] | 2015-11-18T00:00:00 |
[
[
"Naeini",
"Mahdi Pakdaman",
""
],
[
"Cooper",
"Gregory F.",
""
]
] |
TITLE: Binary Classifier Calibration using an Ensemble of Near Isotonic
Regression Models
ABSTRACT: Learning accurate probabilistic models from data is crucial in many practical
tasks in data mining. In this paper we present a new non-parametric calibration
method called \textit{ensemble of near isotonic regression} (ENIR). The method
can be considered as an extension of BBQ, a recently proposed calibration
method, as well as the commonly used calibration method based on isotonic
regression. ENIR is designed to address the key limitation of isotonic
regression, namely its monotonicity assumption on the predictions. Similar to
BBQ, the method post-processes the output of a binary classifier to obtain
calibrated probabilities. Thus it can be combined with many existing
classification models. We demonstrate the performance of ENIR on synthetic and
real datasets for the commonly used binary classification models. Experimental
results show that the method outperforms several common binary classifier
calibration methods. In particular on the real data, ENIR commonly performs
statistically significantly better than the other methods, and never worse. It
is able to improve the calibration power of classifiers, while retaining their
discrimination power. The method is also computationally tractable for large
scale datasets, as it is $O(N \log N)$ time, where $N$ is the number of
samples.
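The isotonic-regression baseline that ENIR extends fits the best nondecreasing sequence to (sorted) classifier outputs; the classic pool-adjacent-violators algorithm does this in linear time after sorting. A minimal sketch of plain isotonic regression (ENIR itself relaxes the strict monotonicity shown here):

```python
def pav(y):
    """Pool-Adjacent-Violators: least-squares nondecreasing fit to y.
    Merges adjacent blocks whenever their means decrease, then expands
    each block back to its pooled mean."""
    blocks = []  # each block is [sum, count]
    for v in y:
        blocks.append([v, 1])
        # Merge while the previous block's mean exceeds this block's mean
        # (compared by cross-multiplication to avoid division).
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

# Calibration view: sort binary labels by classifier score, then the
# pooled means are the calibrated probabilities per score region.
print(pav([0, 1, 0, 1, 1]))
```

Mapping a new classifier score to the pooled mean of its region yields the calibrated probability; ENIR averages an ensemble of such (nearly monotone) fits instead of committing to one.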
|
1511.05266
|
Rana Forsati Dr.
|
Iman Barjasteh, Rana Forsati, Abdol-Hossein Esfahanian, Hayder Radha
|
Semi-supervised Collaborative Ranking with Push at Top
| null | null | null | null |
cs.LG cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing collaborative ranking based recommender systems tend to perform best
when there are enough observed ratings for each user and the observations are
made completely at random. Under this setting, recommender systems can properly
suggest a list of recommendations according to the user's interests. However,
when the observed ratings are extremely sparse (e.g. in the case of cold-start
users where no rating data is available), and are not sampled uniformly at
random, existing ranking methods fail to effectively leverage side information
to transduce the knowledge from existing ratings to unobserved ones. We propose
a semi-supervised collaborative ranking model, dubbed \texttt{S$^2$COR}, to
improve the quality of cold-start recommendation. \texttt{S$^2$COR} mitigates
the sparsity issue by leveraging side information about both observed and
missing ratings by collaboratively learning the ranking model. This enables it
to deal with the case of missing data not at random, but to also effectively
incorporate the available side information in transduction. We experimentally
evaluated our proposed algorithm on a number of challenging real-world datasets
and compared against state-of-the-art models for cold-start recommendation. We
report significantly higher quality recommendations with our algorithm compared
to the state-of-the-art.
|
[
{
"version": "v1",
"created": "Tue, 17 Nov 2015 04:02:26 GMT"
}
] | 2015-11-18T00:00:00 |
[
[
"Barjasteh",
"Iman",
""
],
[
"Forsati",
"Rana",
""
],
[
"Esfahanian",
"Abdol-Hossein",
""
],
[
"Radha",
"Hayder",
""
]
] |
TITLE: Semi-supervised Collaborative Ranking with Push at Top
ABSTRACT: Existing collaborative ranking based recommender systems tend to
perform best when there are enough observed ratings for each user and the
observations are made completely at random. Under this setting, recommender
systems can properly suggest a list of recommendations according to the user's
interests. However,
when the observed ratings are extremely sparse (e.g. in the case of cold-start
users where no rating data is available), and are not sampled uniformly at
random, existing ranking methods fail to effectively leverage side information
to transduce the knowledge from existing ratings to unobserved ones. We propose
a semi-supervised collaborative ranking model, dubbed \texttt{S$^2$COR}, to
improve the quality of cold-start recommendation. \texttt{S$^2$COR} mitigates
the sparsity issue by leveraging side information about both observed and
missing ratings while collaboratively learning the ranking model. This enables
it not only to deal with data missing not at random, but also to effectively
incorporate the available side information in transduction. We experimentally
evaluated our proposed algorithm on a number of challenging real-world datasets
and compared against state-of-the-art models for cold-start recommendation. We
report significantly higher quality recommendations with our algorithm compared
to the state-of-the-art.
|
1511.05371
|
Markus Schneider
|
Markus Schneider and Wolfgang Ertel and G\"unther Palm
|
Constant Time EXPected Similarity Estimation using Stochastic
Optimization
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new algorithm named EXPected Similarity Estimation (EXPoSE) was recently
proposed to solve the problem of large-scale anomaly detection. It is a
non-parametric and distribution free kernel method based on the Hilbert space
embedding of probability measures. Given a dataset of $n$ samples, EXPoSE needs
only $\mathcal{O}(n)$ (linear time) to build a model and $\mathcal{O}(1)$
(constant time) to make a prediction. In this work we improve the linear
computational complexity and show that an $\epsilon$-accurate model can be
estimated in constant time, which has significant implications for large-scale
learning problems. To achieve this goal, we cast the original EXPoSE
formulation into a stochastic optimization problem. It is crucial that this
approach allows us to determine the number of iterations based on a desired
accuracy $\epsilon$, independent of the dataset size $n$. We will show that the
proposed stochastic gradient descent algorithm works in general (possibly
infinite-dimensional) Hilbert spaces, is easy to implement and requires no
additional step-size parameters.
|
[
{
"version": "v1",
"created": "Tue, 17 Nov 2015 12:10:03 GMT"
}
] | 2015-11-18T00:00:00 |
[
[
"Schneider",
"Markus",
""
],
[
"Ertel",
"Wolfgang",
""
],
[
"Palm",
"Günther",
""
]
] |
TITLE: Constant Time EXPected Similarity Estimation using Stochastic
Optimization
ABSTRACT: A new algorithm named EXPected Similarity Estimation (EXPoSE) was recently
proposed to solve the problem of large-scale anomaly detection. It is a
non-parametric and distribution free kernel method based on the Hilbert space
embedding of probability measures. Given a dataset of $n$ samples, EXPoSE needs
only $\mathcal{O}(n)$ (linear time) to build a model and $\mathcal{O}(1)$
(constant time) to make a prediction. In this work we improve the linear
computational complexity and show that an $\epsilon$-accurate model can be
estimated in constant time, which has significant implications for large-scale
learning problems. To achieve this goal, we cast the original EXPoSE
formulation into a stochastic optimization problem. It is crucial that this
approach allows us to determine the number of iterations based on a desired
accuracy $\epsilon$, independent of the dataset size $n$. We will show that the
proposed stochastic gradient descent algorithm works in general (possibly
infinite-dimensional) Hilbert spaces, is easy to implement and requires no
additional step-size parameters.
|
1311.0966
|
Emre Neftci
|
Emre Neftci, Srinjoy Das, Bruno Pedroni, Kenneth Kreutz-Delgado, and
Gert Cauwenberghs
|
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
|
(Under review)
| null |
10.3389/fnins.2013.00272
| null |
cs.NE q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been
demonstrated to perform efficiently in a variety of applications, such as
dimensionality reduction, feature learning, and classification. Their
implementation on neuromorphic hardware platforms emulating large-scale
networks of spiking neurons can have significant advantages from the
perspectives of scalability, power dissipation and real-time interfacing with
the environment. However, the traditional RBM architecture and the commonly used
training algorithm known as Contrastive Divergence (CD) are based on discrete
updates and exact arithmetic, which do not directly map onto a dynamical neural
substrate. Here, we present an event-driven variation of CD to train an RBM
constructed with Integrate & Fire (I&F) neurons that is constrained by the
limitations of existing and near future neuromorphic hardware platforms. Our
strategy is based on neural sampling, which allows us to synthesize a spiking
neural network that samples from a target Boltzmann distribution. The recurrent
activity of the network replaces the discrete steps of the CD algorithm, while
Spike Time Dependent Plasticity (STDP) carries out the weight updates in an
online, asynchronous fashion. We demonstrate our approach by training an RBM
composed of leaky I&F neurons with STDP synapses to learn a generative model of
the MNIST hand-written digit dataset, and by testing it in recognition,
generation and cue integration tasks. Our results contribute to a machine
learning-driven approach for synthesizing networks of spiking neurons capable
of carrying out practical, high-level functionality.
|
[
{
"version": "v1",
"created": "Tue, 5 Nov 2013 04:53:11 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Nov 2013 19:45:07 GMT"
},
{
"version": "v3",
"created": "Mon, 9 Dec 2013 07:04:28 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Neftci",
"Emre",
""
],
[
"Das",
"Srinjoy",
""
],
[
"Pedroni",
"Bruno",
""
],
[
"Kreutz-Delgado",
"Kenneth",
""
],
[
"Cauwenberghs",
"Gert",
""
]
] |
TITLE: Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
ABSTRACT: Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been
demonstrated to perform efficiently in a variety of applications, such as
dimensionality reduction, feature learning, and classification. Their
implementation on neuromorphic hardware platforms emulating large-scale
networks of spiking neurons can have significant advantages from the
perspectives of scalability, power dissipation and real-time interfacing with
the environment. However, the traditional RBM architecture and the commonly used
training algorithm known as Contrastive Divergence (CD) are based on discrete
updates and exact arithmetic, which do not directly map onto a dynamical neural
substrate. Here, we present an event-driven variation of CD to train an RBM
constructed with Integrate & Fire (I&F) neurons that is constrained by the
limitations of existing and near future neuromorphic hardware platforms. Our
strategy is based on neural sampling, which allows us to synthesize a spiking
neural network that samples from a target Boltzmann distribution. The recurrent
activity of the network replaces the discrete steps of the CD algorithm, while
Spike Time Dependent Plasticity (STDP) carries out the weight updates in an
online, asynchronous fashion. We demonstrate our approach by training an RBM
composed of leaky I&F neurons with STDP synapses to learn a generative model of
the MNIST hand-written digit dataset, and by testing it in recognition,
generation and cue integration tasks. Our results contribute to a machine
learning-driven approach for synthesizing networks of spiking neurons capable
of carrying out practical, high-level functionality.
|
1410.4627
|
Carl Vondrick
|
Carl Vondrick, Hamed Pirsiavash, Aude Oliva, Antonio Torralba
|
Learning visual biases from human imagination
|
To appear at NIPS 2015
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although the human visual system can recognize many concepts under
challenging conditions, it still has some biases. In this paper, we investigate
whether we can extract these biases and transfer them into a machine
recognition system. We introduce a novel method that, inspired by well-known
tools in human psychophysics, estimates the biases that the human visual system
might use for recognition, but in computer vision feature spaces. Our
experiments are surprising, and suggest that classifiers from the human visual
system can be transferred into a machine with some success. Since these
classifiers seem to capture favorable biases in the human visual system, we
further present an SVM formulation that constrains the orientation of the SVM
hyperplane to agree with the bias from the human visual system. Our results suggest
that transferring this human bias into machines may help object recognition
systems generalize across datasets and perform better when very little training
data is available.
|
[
{
"version": "v1",
"created": "Fri, 17 Oct 2014 03:47:12 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Nov 2015 14:14:22 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Vondrick",
"Carl",
""
],
[
"Pirsiavash",
"Hamed",
""
],
[
"Oliva",
"Aude",
""
],
[
"Torralba",
"Antonio",
""
]
] |
TITLE: Learning visual biases from human imagination
ABSTRACT: Although the human visual system can recognize many concepts under
challenging conditions, it still has some biases. In this paper, we investigate
whether we can extract these biases and transfer them into a machine
recognition system. We introduce a novel method that, inspired by well-known
tools in human psychophysics, estimates the biases that the human visual system
might use for recognition, but in computer vision feature spaces. Our
experiments are surprising, and suggest that classifiers from the human visual
system can be transferred into a machine with some success. Since these
classifiers seem to capture favorable biases in the human visual system, we
further present an SVM formulation that constrains the orientation of the SVM
hyperplane to agree with the bias from the human visual system. Our results suggest
that transferring this human bias into machines may help object recognition
systems generalize across datasets and perform better when very little training
data is available.
|
1502.01540
|
Xun Xu
|
Xun Xu, Timothy Hospedales, Shaogang Gong
|
Semantic Embedding Space for Zero-Shot Action Recognition
|
5 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The number of categories for action recognition is growing rapidly. It is
thus becoming increasingly hard to collect sufficient training data to learn
conventional models for each category. This issue may be ameliorated by the
increasingly popular 'zero-shot learning' (ZSL) paradigm. In this framework a
mapping is constructed between visual features and a human interpretable
semantic description of each category, allowing categories to be recognised in
the absence of any training data. Existing ZSL studies focus primarily on image
data, and attribute-based semantic representations. In this paper, we address
zero-shot recognition in contemporary video action recognition tasks, using
semantic word vector space as the common space to embed videos and category
labels. This is more challenging because the mapping between the semantic space
and space-time features of videos containing complex actions is more complex
and harder to learn. We demonstrate that a simple self-training and data
augmentation strategy can significantly improve the efficacy of this mapping.
Experiments on human action datasets including HMDB51 and UCF101 demonstrate
that our approach achieves the state-of-the-art zero-shot action recognition
performance.
|
[
{
"version": "v1",
"created": "Thu, 5 Feb 2015 13:34:48 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Xu",
"Xun",
""
],
[
"Hospedales",
"Timothy",
""
],
[
"Gong",
"Shaogang",
""
]
] |
TITLE: Semantic Embedding Space for Zero-Shot Action Recognition
ABSTRACT: The number of categories for action recognition is growing rapidly. It is
thus becoming increasingly hard to collect sufficient training data to learn
conventional models for each category. This issue may be ameliorated by the
increasingly popular 'zero-shot learning' (ZSL) paradigm. In this framework a
mapping is constructed between visual features and a human interpretable
semantic description of each category, allowing categories to be recognised in
the absence of any training data. Existing ZSL studies focus primarily on image
data, and attribute-based semantic representations. In this paper, we address
zero-shot recognition in contemporary video action recognition tasks, using
semantic word vector space as the common space to embed videos and category
labels. This is more challenging because the mapping between the semantic space
and space-time features of videos containing complex actions is more complex
and harder to learn. We demonstrate that a simple self-training and data
augmentation strategy can significantly improve the efficacy of this mapping.
Experiments on human action datasets including HMDB51 and UCF101 demonstrate
that our approach achieves the state-of-the-art zero-shot action recognition
performance.
|
1511.01042
|
Junyoung Chung
|
Junyoung Chung and Jacob Devlin and Hany Hassan Awadalla
|
Detecting Interrogative Utterances with Recurrent Neural Networks
|
6 pages, accepted to NIPS 2015 Workshop on Machine Learning for
Spoken Language Understanding and Interaction
| null | null | null |
cs.CL cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we explore different neural network architectures that can
predict if a speaker of a given utterance is asking a question or making a
statement. We compare the outcomes of regularization methods that are
commonly used to train deep neural networks and study how different context
functions can affect the classification performance. We also compare the
efficacy of gated activation functions that are widely used in recurrent
neural networks and study how to combine multimodal inputs. We evaluate our
models on two multimodal datasets: MSR-Skype and CALLHOME.
|
[
{
"version": "v1",
"created": "Tue, 3 Nov 2015 19:26:16 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Nov 2015 03:54:19 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Chung",
"Junyoung",
""
],
[
"Devlin",
"Jacob",
""
],
[
"Awadalla",
"Hany Hassan",
""
]
] |
TITLE: Detecting Interrogative Utterances with Recurrent Neural Networks
ABSTRACT: In this paper, we explore different neural network architectures that can
predict if a speaker of a given utterance is asking a question or making a
statement. We compare the outcomes of regularization methods that are
commonly used to train deep neural networks and study how different context
functions can affect the classification performance. We also compare the
efficacy of gated activation functions that are widely used in recurrent
neural networks and study how to combine multimodal inputs. We evaluate our
models on two multimodal datasets: MSR-Skype and CALLHOME.
|
1511.02352
|
Wiharto Wiharto
|
Wiharto Wiharto, Hari Kusnanto, Herianto Herianto
|
Performance Analysis of Multiclass Support Vector Machine Classification
for Diagnosis of Coronary Heart Diseases
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic diagnosis of coronary heart disease helps support the doctor in
making a diagnosis. Coronary heart disease has several types or levels. In the
UCI Repository dataset, it is divided into four types or levels labeled 1-4
(low, medium, high and serious). Such diagnosis models can be analyzed with a
multiclass classification approach; one such approach is the support vector
machine (SVM), used due to its strong performance in binary classification.
This research studies the multiclass classification performance of SVMs in
diagnosing the type or level of coronary heart disease, with patient data taken
from the UCI Repository. The first stage of this study is preprocessing, which
consists of normalizing the data and dividing it into training and testing
sets; the next stages are multiclass classification and performance analysis.
This study uses the following multiclass SVM algorithms: Binary Tree Support
Vector Machine (BTSVM), One-Against-One (OAO), One-Against-All (OAA), Decision
Directed Acyclic Graph (DDAG) and Exhaustive Output Error Correction Code
(ECOC). The performance measures used are recall, precision, F-measure and
overall accuracy.
|
[
{
"version": "v1",
"created": "Sat, 7 Nov 2015 13:09:57 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Nov 2015 14:20:59 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Wiharto",
"Wiharto",
""
],
[
"Kusnanto",
"Hari",
""
],
[
"Herianto",
"Herianto",
""
]
] |
TITLE: Performance Analysis of Multiclass Support Vector Machine Classification
for Diagnosis of Coronary Heart Diseases
ABSTRACT: Automatic diagnosis of coronary heart disease helps support the
doctor in making a diagnosis. Coronary heart disease has several types or
levels. In the UCI Repository dataset, it is divided into four types or levels
labeled 1-4 (low, medium, high and serious). Such diagnosis models can be
analyzed with a multiclass classification approach; one such approach is the
support vector machine (SVM), used due to its strong performance in binary
classification. This research studies the multiclass classification performance
of SVMs in diagnosing the type or level of coronary heart disease, with patient
data taken from the UCI Repository. The first stage of this study is
preprocessing, which consists of normalizing the data and dividing it into
training and testing sets; the next stages are multiclass classification and
performance analysis. This study uses the following multiclass SVM algorithms:
Binary Tree Support Vector Machine (BTSVM), One-Against-One (OAO),
One-Against-All (OAA), Decision Directed Acyclic Graph (DDAG) and Exhaustive
Output Error Correction Code (ECOC). The performance measures used are recall,
precision, F-measure and overall accuracy.
|
1511.03042
|
Hamed Habibi Aghdam
|
Elnaz J. Heravi, Hamed H. Aghdam, Domenec Puig
|
Analyzing Stability of Convolutional Neural Networks in the Frequency
Domain
|
Under review as a conference paper at ICLR2016, minor changes in the
text
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the internal process of ConvNets is commonly done using
visualization techniques. However, these techniques do not usually provide a
tool for estimating the stability of a ConvNet against noise. In this paper, we
show how to analyze a ConvNet in the frequency domain using a 4-dimensional
visualization technique. Using the frequency domain analysis, we show the
reason that a ConvNet might be sensitive to a very low magnitude additive
noise. Our experiments on a few ConvNets trained on different datasets revealed
that convolution kernels of a trained ConvNet usually pass most of the
frequencies and they are not able to effectively eliminate the effect of high
frequencies. Our next experiments show that a convolution kernel which has a
more concentrated frequency response could be more stable. Finally, we show
that fine-tuning a ConvNet using a training set augmented with noisy images can
produce more stable ConvNets.
|
[
{
"version": "v1",
"created": "Tue, 10 Nov 2015 09:54:20 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Nov 2015 08:42:10 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Heravi",
"Elnaz J.",
""
],
[
"Aghdam",
"Hamed H.",
""
],
[
"Puig",
"Domenec",
""
]
] |
TITLE: Analyzing Stability of Convolutional Neural Networks in the Frequency
Domain
ABSTRACT: Understanding the internal process of ConvNets is commonly done using
visualization techniques. However, these techniques do not usually provide a
tool for estimating the stability of a ConvNet against noise. In this paper, we
show how to analyze a ConvNet in the frequency domain using a 4-dimensional
visualization technique. Using the frequency domain analysis, we show the
reason that a ConvNet might be sensitive to a very low magnitude additive
noise. Our experiments on a few ConvNets trained on different datasets revealed
that convolution kernels of a trained ConvNet usually pass most of the
frequencies and they are not able to effectively eliminate the effect of high
frequencies. Our next experiments show that a convolution kernel which has a
more concentrated frequency response could be more stable. Finally, we show
that fine-tuning a ConvNet using a training set augmented with noisy images can
produce more stable ConvNets.
|
1511.04472
|
Rui Yu
|
Rui Yu, Chris Russell, Lourdes Agapito
|
Solving Jigsaw Puzzles with Linear Programming
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel Linear Program (LP) based formulation for solving jigsaw
puzzles. We formulate jigsaw solving as a set of successive global convex
relaxations of the standard NP-hard formulation that can describe both
jigsaws with pieces of unknown position and puzzles of unknown position and
orientation. The main contribution and strength of our approach comes from the
LP assembly strategy. In contrast to existing greedy methods, our LP solver
exploits all the pairwise matches simultaneously, and computes the position of
each piece/component globally. The main advantages of our LP approach
include: (i) a reduced sensitivity to local minima compared to greedy
approaches, since our successive approximations are global and convex and (ii)
an increased robustness to the presence of mismatches in the pairwise matches
due to the use of a weighted L1 penalty. To demonstrate the effectiveness of
our approach, we test our algorithm on public jigsaw datasets and show that it
outperforms state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 13 Nov 2015 22:15:54 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Yu",
"Rui",
""
],
[
"Russell",
"Chris",
""
],
[
"Agapito",
"Lourdes",
""
]
] |
TITLE: Solving Jigsaw Puzzles with Linear Programming
ABSTRACT: We propose a novel Linear Program (LP) based formulation for solving
jigsaw puzzles. We formulate jigsaw solving as a set of successive global
convex relaxations of the standard NP-hard formulation that can describe both
jigsaws with pieces of unknown position and puzzles of unknown position and
orientation. The main contribution and strength of our approach comes from the
LP assembly strategy. In contrast to existing greedy methods, our LP solver
exploits all the pairwise matches simultaneously, and computes the position of
each piece/component globally. The main advantages of our LP approach
include: (i) a reduced sensitivity to local minima compared to greedy
approaches, since our successive approximations are global and convex and (ii)
an increased robustness to the presence of mismatches in the pairwise matches
due to the use of a weighted L1 penalty. To demonstrate the effectiveness of
our approach, we test our algorithm on public jigsaw datasets and show that it
outperforms state-of-the-art methods.
|
1511.04510
|
Xiaodan Liang
|
Xiaodan Liang and Xiaohui Shen and Donglai Xiang and Jiashi Feng and
Liang Lin and Shuicheng Yan
|
Semantic Object Parsing with Local-Global Long Short-Term Memory
|
10 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic object parsing is a fundamental task for understanding objects in
detail in the computer vision community, where incorporating multi-level contextual
information is critical for achieving such fine-grained pixel-level
recognition. Prior methods often leverage the contextual information through
post-processing predicted confidence maps. In this work, we propose a novel
deep Local-Global Long Short-Term Memory (LG-LSTM) architecture to seamlessly
incorporate short-distance and long-distance spatial dependencies into the
feature learning over all pixel positions. In each LG-LSTM layer, local
guidance from neighboring positions and global guidance from the whole image
are imposed on each position to better exploit complex local and global
contextual information. Individual LSTMs for distinct spatial dimensions are
also utilized to intrinsically capture various spatial layouts of semantic
parts in the images, yielding distinct hidden and memory cells of each position
for each dimension. In our parsing approach, several LG-LSTM layers are stacked
and appended to the intermediate convolutional layers to directly enhance
visual features, allowing network parameters to be learned in an end-to-end
way. The long chains of sequential computation by stacked LG-LSTM layers also
enable each pixel to sense a much larger region for inference benefiting from
the memorization of previous dependencies in all positions along all
dimensions. Comprehensive evaluations on three public datasets well demonstrate
the significant superiority of our LG-LSTM over other state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 14 Nov 2015 05:42:50 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Liang",
"Xiaodan",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Xiang",
"Donglai",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Lin",
"Liang",
""
],
[
"Yan",
"Shuicheng",
""
]
] |
TITLE: Semantic Object Parsing with Local-Global Long Short-Term Memory
ABSTRACT: Semantic object parsing is a fundamental task for understanding objects in
detail in the computer vision community, where incorporating multi-level contextual
information is critical for achieving such fine-grained pixel-level
recognition. Prior methods often leverage the contextual information through
post-processing predicted confidence maps. In this work, we propose a novel
deep Local-Global Long Short-Term Memory (LG-LSTM) architecture to seamlessly
incorporate short-distance and long-distance spatial dependencies into the
feature learning over all pixel positions. In each LG-LSTM layer, local
guidance from neighboring positions and global guidance from the whole image
are imposed on each position to better exploit complex local and global
contextual information. Individual LSTMs for distinct spatial dimensions are
also utilized to intrinsically capture various spatial layouts of semantic
parts in the images, yielding distinct hidden and memory cells of each position
for each dimension. In our parsing approach, several LG-LSTM layers are stacked
and appended to the intermediate convolutional layers to directly enhance
visual features, allowing network parameters to be learned in an end-to-end
way. The long chains of sequential computation by stacked LG-LSTM layers also
enable each pixel to sense a much larger region for inference benefiting from
the memorization of previous dependencies in all positions along all
dimensions. Comprehensive evaluations on three public datasets well demonstrate
the significant superiority of our LG-LSTM over other state-of-the-art methods.
|
1511.04670
|
Zhongwen Xu
|
Linchao Zhu, Zhongwen Xu, Yi Yang, Alexander G. Hauptmann
|
Uncovering Temporal Context for Video Question and Answering
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we introduce Video Question Answering in temporal domain to
infer the past, describe the present and predict the future. We present an
encoder-decoder approach using Recurrent Neural Networks to learn temporal
structures of videos and introduce a dual-channel ranking loss to answer
multiple-choice questions. We explore approaches for finer understanding of
video content using the question form of "fill-in-the-blank", and managed to
collect 109,895 video clips with duration over 1,000 hours from TACoS, MPII-MD,
MEDTest 14 datasets, while the corresponding 390,744 questions are generated
from annotations. Extensive experiments demonstrate that our approach
significantly outperforms the compared baselines.
|
[
{
"version": "v1",
"created": "Sun, 15 Nov 2015 07:57:41 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Zhu",
"Linchao",
""
],
[
"Xu",
"Zhongwen",
""
],
[
"Yang",
"Yi",
""
],
[
"Hauptmann",
"Alexander G.",
""
]
] |
TITLE: Uncovering Temporal Context for Video Question and Answering
ABSTRACT: In this work, we introduce Video Question Answering in temporal domain to
infer the past, describe the present and predict the future. We present an
encoder-decoder approach using Recurrent Neural Networks to learn temporal
structures of videos and introduce a dual-channel ranking loss to answer
multiple-choice questions. We explore approaches for finer understanding of
video content using the question form of "fill-in-the-blank", and managed to
collect 109,895 video clips with duration over 1,000 hours from TACoS, MPII-MD,
MEDTest 14 datasets, while the corresponding 390,744 questions are generated
from annotations. Extensive experiments demonstrate that our approach
significantly outperforms the compared baselines.
|
1511.04808
|
Mengyi Liu
|
Mengyi Liu, Ruiping Wang, Shiguang Shan, Xilin Chen
|
Learning Mid-level Words on Riemannian Manifold for Action Recognition
|
10 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human action recognition remains a challenging task due to the various
sources of video data and large intra-class variations. It thus becomes one of
the key issues in recent research to explore effective and robust
representation to handle such challenges. In this paper, we propose a novel
representation approach by constructing mid-level words in videos and encoding
them on Riemannian manifold. Specifically, we first conduct a global alignment
on the densely extracted low-level features to build a bank of corresponding
feature groups, each of which can be statistically modeled as a mid-level word
lying on some specific Riemannian manifold. Based on these mid-level words, we
construct intrinsic Riemannian codebooks by employing K-Karcher-means
clustering and Riemannian Gaussian Mixture Model, and consequently extend the
Riemannian manifold version of three well studied encoding methods in Euclidean
space, i.e. Bag of Visual Words (BoVW), Vector of Locally Aggregated
Descriptors (VLAD), and Fisher Vector (FV), to obtain the final action video
representations. Our method is evaluated in two tasks on four popular realistic
datasets: action recognition on YouTube, UCF50, HMDB51 databases, and action
similarity labeling on the ASLAN database. In all cases, the reported results
achieve very competitive performance compared with the most recent
state-of-the-art works.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 03:18:06 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Liu",
"Mengyi",
""
],
[
"Wang",
"Ruiping",
""
],
[
"Shan",
"Shiguang",
""
],
[
"Chen",
"Xilin",
""
]
] |
TITLE: Learning Mid-level Words on Riemannian Manifold for Action Recognition
ABSTRACT: Human action recognition remains a challenging task due to the various
sources of video data and large intra-class variations. It thus becomes one of
the key issues in recent research to explore effective and robust
representation to handle such challenges. In this paper, we propose a novel
representation approach by constructing mid-level words in videos and encoding
them on Riemannian manifold. Specifically, we first conduct a global alignment
on the densely extracted low-level features to build a bank of corresponding
feature groups, each of which can be statistically modeled as a mid-level word
lying on some specific Riemannian manifold. Based on these mid-level words, we
construct intrinsic Riemannian codebooks by employing K-Karcher-means
clustering and Riemannian Gaussian Mixture Model, and consequently extend the
Riemannian manifold version of three well studied encoding methods in Euclidean
space, i.e. Bag of Visual Words (BoVW), Vector of Locally Aggregated
Descriptors (VLAD), and Fisher Vector (FV), to obtain the final action video
representations. Our method is evaluated in two tasks on four popular realistic
datasets: action recognition on YouTube, UCF50, HMDB51 databases, and action
similarity labeling on the ASLAN database. In all cases, the reported results
achieve very competitive performance compared with the most recent
state-of-the-art works.
|
1511.04861
|
Hyoung-Joo Kim
|
Woo-Hyun Lee, Hee-Gook Jun, Hyoung-Joo Kim
|
Hadoop Mapreduce Performance Enhancement Using In-node Combiners
|
International Journal of Computer Science & Information Technology,
2015
| null |
10.5121/ijcsit.2015.7501
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While advanced analysis of large datasets is in high demand, data sizes have
surpassed the capabilities of conventional software and hardware. The Hadoop
framework distributes large datasets over multiple commodity servers and
performs parallel computations. We discuss the I/O bottlenecks of the Hadoop
framework and propose methods for enhancing I/O performance. A proven approach
is to cache data to maximize the memory-locality of all map tasks. We introduce
an approach to optimize I/O: the in-node combining design, which extends the
traditional combiner to the node level. The in-node combiner reduces the total
number of intermediate results and curtails network traffic between mappers and
reducers.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 08:27:58 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Lee",
"Woo-Hyun",
""
],
[
"Jun",
"Hee-Gook",
""
],
[
"Kim",
"Hyoung-Joo",
""
]
] |
TITLE: Hadoop Mapreduce Performance Enhancement Using In-node Combiners
ABSTRACT: While advanced analysis of large datasets is in high demand, data sizes have
surpassed the capabilities of conventional software and hardware. The Hadoop
framework distributes large datasets over multiple commodity servers and
performs parallel computations. We discuss the I/O bottlenecks of the Hadoop
framework and propose methods for enhancing I/O performance. A proven approach
is to cache data to maximize the memory-locality of all map tasks. We introduce
an approach to optimize I/O: the in-node combining design, which extends the
traditional combiner to the node level. The in-node combiner reduces the total
number of intermediate results and curtails network traffic between mappers and
reducers.
|
1511.04898
|
Bertrand Thirion
|
Bertrand Thirion (PARIETAL), Andr\'es Hoyos-Idrobo (NEUROSPIN,
PARIETAL), Jonas Kahn (LPP), Gael Varoquaux (NEUROSPIN, PARIETAL)
|
Fast clustering for scalable statistical analysis on structured images
|
ICML Workshop on Statistics, Machine Learning and Neuroscience
(Stamlins 2015), Jul 2015, Lille, France
| null | null | null |
stat.ML cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of brain images as markers for diseases or behavioral differences is
challenged by small effect sizes and the ensuing lack of power, an issue
that has incited researchers to rely more systematically on large cohorts.
Coupled with resolution increases, this leads to very large datasets. A
striking example in the case of brain imaging is that of the Human Connectome
Project: 20 Terabytes of data and growing. The resulting data deluge poses
severe challenges regarding the tractability of some processing steps
(discriminant analysis, multivariate models) due to the memory demands posed by
these data. In this work, we revisit dimension reduction approaches, such as
random projections, with the aim of replacing costly function evaluations by
cheaper ones while decreasing the memory requirements. Specifically, we
investigate the use of alternate schemes, based on fast clustering, that are
well suited for signals exhibiting a strong spatial structure, such as
anatomical and functional brain images. Our contribution is twofold: i) we
propose a linear-time clustering scheme that bypasses the percolation issues
inherent in these algorithms and thus provides compressions nearly as good as
traditional quadratic-complexity variance-minimizing clustering schemes, ii) we
show that cluster-based compression can have the virtuous effect of removing
high-frequency noise, actually improving subsequent estimation steps. As a
consequence, the proposed approach yields very accurate models on several
large-scale problems yet with impressive gains in computational efficiency,
making it possible to analyze large datasets.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 10:26:18 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Thirion",
"Bertrand",
"",
"PARIETAL"
],
[
"Hoyos-Idrobo",
"Andrés",
"",
"NEUROSPIN,\n PARIETAL"
],
[
"Kahn",
"Jonas",
"",
"LPP"
],
[
"Varoquaux",
"Gael",
"",
"NEUROSPIN, PARIETAL"
]
] |
TITLE: Fast clustering for scalable statistical analysis on structured images
ABSTRACT: The use of brain images as markers for diseases or behavioral differences is
challenged by small effect sizes and the ensuing lack of power, an issue
that has incited researchers to rely more systematically on large cohorts.
Coupled with resolution increases, this leads to very large datasets. A
striking example in the case of brain imaging is that of the Human Connectome
Project: 20 Terabytes of data and growing. The resulting data deluge poses
severe challenges regarding the tractability of some processing steps
(discriminant analysis, multivariate models) due to the memory demands posed by
these data. In this work, we revisit dimension reduction approaches, such as
random projections, with the aim of replacing costly function evaluations by
cheaper ones while decreasing the memory requirements. Specifically, we
investigate the use of alternate schemes, based on fast clustering, that are
well suited for signals exhibiting a strong spatial structure, such as
anatomical and functional brain images. Our contribution is twofold: i) we
propose a linear-time clustering scheme that bypasses the percolation issues
inherent in these algorithms and thus provides compressions nearly as good as
traditional quadratic-complexity variance-minimizing clustering schemes, ii) we
show that cluster-based compression can have the virtuous effect of removing
high-frequency noise, actually improving subsequent estimation steps. As a
consequence, the proposed approach yields very accurate models on several
large-scale problems yet with impressive gains in computational efficiency,
making it possible to analyze large datasets.
|
1511.04901
|
Erjin Zhou
|
Zhiao Huang, Erjin Zhou, Zhimin Cao
|
Coarse-to-fine Face Alignment with Multi-Scale Local Patch Regression
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial landmark localization plays an important role in face recognition and
analysis applications. In this paper, we give a brief introduction to a
coarse-to-fine pipeline with neural networks and sequential regression. First,
a global convolutional network is applied to the holistic facial image to give
an initial landmark prediction. A pyramid of multi-scale local image patches is
then cropped to feed to a new network for each landmark to refine the
prediction. As the refinement network outputs a more accurate position
estimate than its input, this procedure can be repeated several times until the
estimate converges. We evaluate our system on the 300-W dataset [11], where it
outperforms recent state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 10:31:18 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Huang",
"Zhiao",
""
],
[
"Zhou",
"Erjin",
""
],
[
"Cao",
"Zhimin",
""
]
] |
TITLE: Coarse-to-fine Face Alignment with Multi-Scale Local Patch Regression
ABSTRACT: Facial landmark localization plays an important role in face recognition and
analysis applications. In this paper, we give a brief introduction to a
coarse-to-fine pipeline with neural networks and sequential regression. First,
a global convolutional network is applied to the holistic facial image to give
an initial landmark prediction. A pyramid of multi-scale local image patches is
then cropped to feed to a new network for each landmark to refine the
prediction. As the refinement network outputs a more accurate position
estimate than its input, this procedure can be repeated several times until the
estimate converges. We evaluate our system on the 300-W dataset [11], where it
outperforms recent state-of-the-art methods.
|
1511.05049
|
Heng Yang
|
Heng Yang and Xuhui Jia and Chen Change Loy and Peter Robinson
|
An Empirical Study of Recent Face Alignment Methods
|
under review of a conference. Project page:
https://www.cl.cam.ac.uk/~hy306/FaceAlignment.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of face alignment has been intensively studied in the past years.
A large number of novel methods have been proposed and have reported very good
performance on benchmark datasets such as 300W. However, differences in the
experimental settings and evaluation metrics, together with missing details in
the descriptions of the methods, make it hard to reproduce the reported results
and evaluate the relative merits. For instance, most recent face alignment
methods are built on top of face detection but use different face detectors. In
this paper, we carry out a rigorous evaluation of these methods by making the
following contributions: 1) we propose a new evaluation metric for face
alignment on a set of images, i.e., the area under the error distribution curve
within a threshold, AUC$_\alpha$, given the fact that the traditional
evaluation measure (mean error) is very sensitive to large alignment errors. 2)
we extend the 300W database with more practical face detections to make fair
comparison possible. 3) we carry out a face alignment sensitivity analysis
w.r.t. face detection, on both synthetic and real data, using both
off-the-shelf and retrained models. 4) we study factors that are particularly
important for achieving good performance and provide suggestions for practical
applications. Most of the conclusions drawn from our comparative analysis
cannot be inferred from the original publications.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 17:26:27 GMT"
}
] | 2015-11-17T00:00:00 |
[
[
"Yang",
"Heng",
""
],
[
"Jia",
"Xuhui",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Robinson",
"Peter",
""
]
] |
TITLE: An Empirical Study of Recent Face Alignment Methods
ABSTRACT: The problem of face alignment has been intensively studied in the past years.
A large number of novel methods have been proposed and have reported very good
performance on benchmark datasets such as 300W. However, differences in the
experimental settings and evaluation metrics, together with missing details in
the descriptions of the methods, make it hard to reproduce the reported results
and evaluate the relative merits. For instance, most recent face alignment
methods are built on top of face detection but use different face detectors. In
this paper, we carry out a rigorous evaluation of these methods by making the
following contributions: 1) we propose a new evaluation metric for face
alignment on a set of images, i.e., the area under the error distribution curve
within a threshold, AUC$_\alpha$, given the fact that the traditional
evaluation measure (mean error) is very sensitive to large alignment errors. 2)
we extend the 300W database with more practical face detections to make fair
comparison possible. 3) we carry out a face alignment sensitivity analysis
w.r.t. face detection, on both synthetic and real data, using both
off-the-shelf and retrained models. 4) we study factors that are particularly
important for achieving good performance and provide suggestions for practical
applications. Most of the conclusions drawn from our comparative analysis
cannot be inferred from the original publications.
|
1411.4568
|
Yannick Verdie
|
Yannick Verdie, Kwang Moo Yi, Pascal Fua, Vincent Lepetit
|
TILDE: A Temporally Invariant Learned DEtector
| null | null |
10.1109/CVPR.2015.7299165
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a learning-based approach to detect repeatable keypoints under
drastic imaging changes of weather and lighting conditions to which
state-of-the-art keypoint detectors are surprisingly sensitive. We first
identify good keypoint candidates in multiple training images taken from the
same viewpoint. We then train a regressor to predict a score map whose maxima
are those points so that they can be found by simple non-maximum suppression.
As there are no standard datasets to test the influence of these kinds of
changes, we created our own, which we will make publicly available. We will
show that our method significantly outperforms the state-of-the-art methods in
such challenging conditions, while still achieving state-of-the-art performance
on the untrained standard Oxford dataset.
|
[
{
"version": "v1",
"created": "Mon, 17 Nov 2014 17:44:21 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Feb 2015 14:22:39 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Mar 2015 20:07:01 GMT"
}
] | 2015-11-16T00:00:00 |
[
[
"Verdie",
"Yannick",
""
],
[
"Yi",
"Kwang Moo",
""
],
[
"Fua",
"Pascal",
""
],
[
"Lepetit",
"Vincent",
""
]
] |
TITLE: TILDE: A Temporally Invariant Learned DEtector
ABSTRACT: We introduce a learning-based approach to detect repeatable keypoints under
drastic imaging changes of weather and lighting conditions to which
state-of-the-art keypoint detectors are surprisingly sensitive. We first
identify good keypoint candidates in multiple training images taken from the
same viewpoint. We then train a regressor to predict a score map whose maxima
are those points so that they can be found by simple non-maximum suppression.
As there are no standard datasets to test the influence of these kinds of
changes, we created our own, which we will make publicly available. We will
show that our method significantly outperforms the state-of-the-art methods in
such challenging conditions, while still achieving state-of-the-art performance
on the untrained standard Oxford dataset.
|
1506.00333
|
Lin Ma
|
Lin Ma, Zhengdong Lu, Hang Li
|
Learning to Answer Questions From Image Using Convolutional Neural
Network
|
7 pages, 4 figures. Accepted by AAAI 2016
| null | null | null |
cs.CL cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose to employ the convolutional neural network (CNN)
for image question answering (QA). Our proposed CNN provides an end-to-end
framework with convolutional architectures for learning not only the image and
question representations, but also their inter-modal interactions to produce
the answer. More specifically, our model consists of three CNNs: one image CNN
to encode the image content, one sentence CNN to compose the words of the
question, and one multimodal convolution layer to learn their joint
representation for classification in the space of candidate answer words. We
demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA
datasets, two benchmark datasets for image QA, on which it significantly
outperforms the state-of-the-art.
|
[
{
"version": "v1",
"created": "Mon, 1 Jun 2015 03:09:49 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Nov 2015 09:54:59 GMT"
}
] | 2015-11-16T00:00:00 |
[
[
"Ma",
"Lin",
""
],
[
"Lu",
"Zhengdong",
""
],
[
"Li",
"Hang",
""
]
] |
TITLE: Learning to Answer Questions From Image Using Convolutional Neural
Network
ABSTRACT: In this paper, we propose to employ the convolutional neural network (CNN)
for image question answering (QA). Our proposed CNN provides an end-to-end
framework with convolutional architectures for learning not only the image and
question representations, but also their inter-modal interactions to produce
the answer. More specifically, our model consists of three CNNs: one image CNN
to encode the image content, one sentence CNN to compose the words of the
question, and one multimodal convolution layer to learn their joint
representation for classification in the space of candidate answer words. We
demonstrate the efficacy of our proposed model on the DAQUAR and COCO-QA
datasets, two benchmark datasets for image QA, on which it significantly
outperforms the state-of-the-art.
|
1511.02462
|
Steven C.H. Hoi
|
Steven C.H. Hoi, Xiongwei Wu, Hantang Liu, Yue Wu, Huiqiong Wang, Hui
Xue, Qiang Wu
|
LOGO-Net: Large-scale Deep Logo Detection and Brand Recognition with
Deep Region-based Convolutional Networks
|
15 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Logo detection from images has many applications, particularly for brand
recognition and intellectual property protection. Most existing studies for
logo recognition and detection are based on small-scale datasets which are not
comprehensive enough when exploring emerging deep learning techniques. In this
paper, we introduce "LOGO-Net", a large-scale logo image database for logo
detection and brand recognition from real-world product images. To facilitate
research, LOGO-Net has two datasets: (i) "logos-18" consists of 18 logo classes,
10 brands, and 16,043 logo objects, and (ii) "logos-160" consists of 160 logo
classes, 100 brands, and 130,608 logo objects. We describe the ideas and
challenges for constructing such a large-scale database. Another key
contribution of this work is to apply emerging deep learning techniques for
logo detection and brand recognition tasks, and conduct extensive experiments
by exploring several state-of-the-art deep region-based convolutional networks
techniques for object detection tasks. LOGO-Net will be released at
http://logo-net.org/
|
[
{
"version": "v1",
"created": "Sun, 8 Nov 2015 09:44:45 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Nov 2015 12:57:05 GMT"
}
] | 2015-11-16T00:00:00 |
[
[
"Hoi",
"Steven C. H.",
""
],
[
"Wu",
"Xiongwei",
""
],
[
"Liu",
"Hantang",
""
],
[
"Wu",
"Yue",
""
],
[
"Wang",
"Huiqiong",
""
],
[
"Xue",
"Hui",
""
],
[
"Wu",
"Qiang",
""
]
] |
TITLE: LOGO-Net: Large-scale Deep Logo Detection and Brand Recognition with
Deep Region-based Convolutional Networks
ABSTRACT: Logo detection from images has many applications, particularly for brand
recognition and intellectual property protection. Most existing studies for
logo recognition and detection are based on small-scale datasets which are not
comprehensive enough when exploring emerging deep learning techniques. In this
paper, we introduce "LOGO-Net", a large-scale logo image database for logo
detection and brand recognition from real-world product images. To facilitate
research, LOGO-Net has two datasets: (i) "logos-18" consists of 18 logo classes,
10 brands, and 16,043 logo objects, and (ii) "logos-160" consists of 160 logo
classes, 100 brands, and 130,608 logo objects. We describe the ideas and
challenges for constructing such a large-scale database. Another key
contribution of this work is to apply emerging deep learning techniques for
logo detection and brand recognition tasks, and conduct extensive experiments
by exploring several state-of-the-art deep region-based convolutional networks
techniques for object detection tasks. LOGO-Net will be released at
http://logo-net.org/
|
1511.04134
|
Jisun An
|
Jisun An and Ingmar Weber
|
Whom Should We Sense in "Social Sensing" -- Analyzing Which Users Work
Best for Social Media Now-Casting
|
This is a pre-print of a forthcoming EPJ Data Science paper
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given the ever-increasing amount of publicly available social media data,
there is growing interest in using online data to study and quantify phenomena
in the offline "real" world. As social media data can be obtained in near
real-time and at low cost, it is often used for "now-casting" indices such as
levels of flu activity or unemployment. The term "social sensing" is often used
in this context to describe the idea that users act as "sensors", publicly
reporting their health status or job losses. Sensor activity during a time
period is then typically aggregated in a "one tweet, one vote" fashion by
simply counting. At the same time, researchers readily admit that social media
users are not a perfect representation of the actual population. Additionally,
users differ in the amount of detail about their personal lives that they reveal.
Intuitively, it should be possible to improve now-casting by assigning
different weights to different user groups.
In this paper, we ask "How does social sensing actually work?" or, more
precisely, "Whom should we sense--and whom not--for optimal results?". We
investigate how different sampling strategies affect the performance of
now-casting of two common offline indices: flu activity and unemployment rate.
We show that now-casting can be improved by 1) applying user filtering
techniques and 2) selecting users with complete profiles. We also find that,
using the right type of user groups, now-casting performance does not degrade,
even when drastically reducing the size of the dataset. More fundamentally, we
describe which type of users contribute most to the accuracy by asking if
"babblers are better". We conclude the paper by providing guidance on how to
select better user groups for more accurate now-casting.
|
[
{
"version": "v1",
"created": "Fri, 13 Nov 2015 01:13:48 GMT"
}
] | 2015-11-16T00:00:00 |
[
[
"An",
"Jisun",
""
],
[
"Weber",
"Ingmar",
""
]
] |
TITLE: Whom Should We Sense in "Social Sensing" -- Analyzing Which Users Work
Best for Social Media Now-Casting
ABSTRACT: Given the ever-increasing amount of publicly available social media data,
there is growing interest in using online data to study and quantify phenomena
in the offline "real" world. As social media data can be obtained in near
real-time and at low cost, it is often used for "now-casting" indices such as
levels of flu activity or unemployment. The term "social sensing" is often used
in this context to describe the idea that users act as "sensors", publicly
reporting their health status or job losses. Sensor activity during a time
period is then typically aggregated in a "one tweet, one vote" fashion by
simply counting. At the same time, researchers readily admit that social media
users are not a perfect representation of the actual population. Additionally,
users differ in the amount of detail about their personal lives that they reveal.
Intuitively, it should be possible to improve now-casting by assigning
different weights to different user groups.
In this paper, we ask "How does social sensing actually work?" or, more
precisely, "Whom should we sense--and whom not--for optimal results?". We
investigate how different sampling strategies affect the performance of
now-casting of two common offline indices: flu activity and unemployment rate.
We show that now-casting can be improved by 1) applying user filtering
techniques and 2) selecting users with complete profiles. We also find that,
using the right type of user groups, now-casting performance does not degrade,
even when drastically reducing the size of the dataset. More fundamentally, we
describe which type of users contribute most to the accuracy by asking if
"babblers are better". We conclude the paper by providing guidance on how to
select better user groups for more accurate now-casting.
|
1511.04145
|
Mehrdad Farajtabar
|
Mehrdad Farajtabar, Safoora Yousefi, Long Q. Tran, Le Song, Hongyuan
Zha
|
A Continuous-time Mutually-Exciting Point Process Framework for
Prioritizing Events in Social Media
| null | null | null | null |
cs.SI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The overwhelming amount and rate of information update in online social media
is making it increasingly difficult for users to allocate their attention to
their topics of interest, thus there is a strong need for prioritizing news
feeds. The attractiveness of a post to a user depends on many complex
contextual and temporal features of the post. For instance, the contents of the
post, the responsiveness of a third user, and the age of the post may all have
impact. So far, these static and dynamic features have not been incorporated in
a unified framework to tackle the post prioritization problem. In this paper,
we propose a novel approach for prioritizing posts based on a feature modulated
multi-dimensional point process. Our model is able to simultaneously capture
textual and sentiment features, and temporal features such as self-excitation,
mutual-excitation and bursty nature of social interaction. As an evaluation, we
also curated a real-world conversational benchmark dataset crawled from
Facebook. In our experiments, we demonstrate that our algorithm is able to
achieve state-of-the-art performance in terms of analyzing, predicting, and
prioritizing events. In terms of interpretability of our method, we observe
that features indicating individual user profile and linguistic characteristics
of the events work best for prediction and prioritization of new events.
|
[
{
"version": "v1",
"created": "Fri, 13 Nov 2015 02:56:32 GMT"
}
] | 2015-11-16T00:00:00 |
[
[
"Farajtabar",
"Mehrdad",
""
],
[
"Yousefi",
"Safoora",
""
],
[
"Tran",
"Long Q.",
""
],
[
"Song",
"Le",
""
],
[
"Zha",
"Hongyuan",
""
]
] |
TITLE: A Continuous-time Mutually-Exciting Point Process Framework for
Prioritizing Events in Social Media
ABSTRACT: The overwhelming amount and rate of information update in online social media
is making it increasingly difficult for users to allocate their attention to
their topics of interest, thus there is a strong need for prioritizing news
feeds. The attractiveness of a post to a user depends on many complex
contextual and temporal features of the post. For instance, the contents of the
post, the responsiveness of a third user, and the age of the post may all have
impact. So far, these static and dynamic features have not been incorporated in
a unified framework to tackle the post prioritization problem. In this paper,
we propose a novel approach for prioritizing posts based on a feature modulated
multi-dimensional point process. Our model is able to simultaneously capture
textual and sentiment features, and temporal features such as self-excitation,
mutual-excitation and bursty nature of social interaction. As an evaluation, we
also curated a real-world conversational benchmark dataset crawled from
Facebook. In our experiments, we demonstrate that our algorithm is able to
achieve state-of-the-art performance in terms of analyzing, predicting, and
prioritizing events. In terms of interpretability of our method, we observe
that features indicating individual user profile and linguistic characteristics
of the events work best for prediction and prioritization of new events.
|
1511.04242
|
Tommaso Cavallari
|
Tommaso Cavallari, Luigi Di Stefano
|
Volume-based Semantic Labeling with Signed Distance Functions
|
Submitted to PSIVT2015
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research on the two topics of Semantic Segmentation and SLAM
(Simultaneous Localization and Mapping) has been following separate tracks.
Here, we link them quite tightly by delineating a category label fusion
technique that allows for embedding semantic information into the dense map
created by a volume-based SLAM algorithm such as KinectFusion. Accordingly, our
approach is the first to provide a semantically labeled dense reconstruction of
the environment from a stream of RGB-D images. We validate our proposal using a
publicly available semantically annotated RGB-D dataset and a) employing ground
truth labels, b) corrupting such annotations with synthetic noise, c) deploying
a state of the art semantic segmentation algorithm based on Convolutional
Neural Networks.
|
[
{
"version": "v1",
"created": "Fri, 13 Nov 2015 11:25:50 GMT"
}
] | 2015-11-16T00:00:00 |
[
[
"Cavallari",
"Tommaso",
""
],
[
"Di Stefano",
"Luigi",
""
]
] |
TITLE: Volume-based Semantic Labeling with Signed Distance Functions
ABSTRACT: Research on the two topics of Semantic Segmentation and SLAM
(Simultaneous Localization and Mapping) has been following separate tracks.
Here, we link them quite tightly by delineating a category label fusion
technique that allows for embedding semantic information into the dense map
created by a volume-based SLAM algorithm such as KinectFusion. Accordingly, our
approach is the first to provide a semantically labeled dense reconstruction of
the environment from a stream of RGB-D images. We validate our proposal using a
publicly available semantically annotated RGB-D dataset and a) employing ground
truth labels, b) corrupting such annotations with synthetic noise, c) deploying
a state of the art semantic segmentation algorithm based on Convolutional
Neural Networks.
|
1502.07643
|
Ryan Robinson
|
Ryan Robinson
|
Dynamic Belief Fusion for Object Detection
|
The paper has been withdrawn and an updated paper has been uploaded
by a co-author: http://arxiv.org/pdf/1511.03183.pdf
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel approach for the fusion of detection scores from disparate object
detection methods is proposed. In order to effectively integrate the outputs of
multiple detectors, the level of ambiguity in each individual detection score
(called "uncertainty") is estimated using the precision/recall relationship of
the corresponding detector. The proposed fusion method, called Dynamic Belief
Fusion (DBF), dynamically assigns basic probabilities to propositions (target,
non-target, uncertain) based on confidence levels in the detection results of
individual approaches. A joint basic probability assignment, containing
information from all detectors, is determined using Dempster's combination
rule, and is easily reduced to a single fused detection score. Experiments on
ARL and PASCAL VOC 07 datasets demonstrate that the detection accuracy of DBF
is considerably greater than conventional fusion approaches as well as
state-of-the-art individual detectors.
|
[
{
"version": "v1",
"created": "Thu, 26 Feb 2015 17:31:15 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Mar 2015 15:40:11 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Nov 2015 04:19:40 GMT"
}
] | 2015-11-13T00:00:00 |
[
[
"Robinson",
"Ryan",
""
]
] |
TITLE: Dynamic Belief Fusion for Object Detection
ABSTRACT: A novel approach for the fusion of detection scores from disparate object
detection methods is proposed. In order to effectively integrate the outputs of
multiple detectors, the level of ambiguity in each individual detection score
(called "uncertainty") is estimated using the precision/recall relationship of
the corresponding detector. The proposed fusion method, called Dynamic Belief
Fusion (DBF), dynamically assigns basic probabilities to propositions (target,
non-target, uncertain) based on confidence levels in the detection results of
individual approaches. A joint basic probability assignment, containing
information from all detectors, is determined using Dempster's combination
rule, and is easily reduced to a single fused detection score. Experiments on
ARL and PASCAL VOC 07 datasets demonstrate that the detection accuracy of DBF
is considerably greater than conventional fusion approaches as well as
state-of-the-art individual detectors.
|
1507.07295
|
Kirill Dyagilev
|
Kirill Dyagilev, Suchi Saria
|
Learning (Predictive) Risk Scores in the Presence of Censoring due to
Interventions
| null |
Machine Learning Journal, Special Issue on on Machine Learning for
Health and Medicine, pp. 1-26, 2015
| null | null |
cs.AI stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A large and diverse set of measurements is regularly collected during a
patient's hospital stay to monitor their health status. Tools for integrating
these measurements into severity scores that accurately track changes in
illness severity can improve clinicians' ability to provide timely
interventions. Existing approaches for creating such scores either 1) rely on
experts to fully specify the severity score, or 2) train a predictive score,
using supervised learning, by regressing against a surrogate marker of severity
such as the presence of downstream adverse events. The first approach does not
extend to diseases where an accurate score cannot be elicited from experts. The
second approach often produces scores that suffer from bias due to
treatment-related censoring (Paxton, 2013). We propose a novel ranking-based
framework for disease severity score learning (DSSL). DSSL exploits the
following key observation: while it is challenging for experts to quantify the
disease severity at any given time, it is often easy to compare the disease
severity at two different times. Extending existing ranking algorithms, DSSL
learns a function that maps a vector of patient's measurements to a scalar
severity score such that the resulting score is temporally smooth and
consistent with the expert's ranking of pairs of disease states. We apply DSSL
to the problem of learning a sepsis severity score using a large, real-world
dataset. The learned scores significantly outperform state-of-the-art clinical
scores in ranking patient states by severity and in early detection of future
adverse events. We also show that the learned disease severity trajectories are
consistent with clinical expectations of disease evolution. Further, using
simulated datasets, we show that DSSL exhibits better generalization
performance to changes in treatment patterns compared to the above approaches.
|
[
{
"version": "v1",
"created": "Mon, 27 Jul 2015 03:56:37 GMT"
}
] | 2015-11-13T00:00:00 |
[
[
"Dyagilev",
"Kirill",
""
],
[
"Saria",
"Suchi",
""
]
] |
TITLE: Learning (Predictive) Risk Scores in the Presence of Censoring due to
Interventions
ABSTRACT: A large and diverse set of measurements are regularly collected during a
patient's hospital stay to monitor their health status. Tools for integrating
these measurements into severity scores that accurately track changes in
illness severity can improve clinicians' ability to provide timely
interventions. Existing approaches for creating such scores either 1) rely on
experts to fully specify the severity score, or 2) train a predictive score,
using supervised learning, by regressing against a surrogate marker of severity
such as the presence of downstream adverse events. The first approach does not
extend to diseases where an accurate score cannot be elicited from experts. The
second approach often produces scores that suffer from bias due to
treatment-related censoring (Paxton, 2013). We propose a novel ranking-based
framework for disease severity score learning (DSSL). DSSL exploits the
following key observation: while it is challenging for experts to quantify the
disease severity at any given time, it is often easy to compare the disease
severity at two different times. Extending existing ranking algorithms, DSSL
learns a function that maps a vector of patient's measurements to a scalar
severity score such that the resulting score is temporally smooth and
consistent with the expert's ranking of pairs of disease states. We apply DSSL
to the problem of learning a sepsis severity score using a large, real-world
dataset. The learned scores significantly outperform state-of-the-art clinical
scores in ranking patient states by severity and in early detection of future
adverse events. We also show that the learned disease severity trajectories are
consistent with clinical expectations of disease evolution. Further, using
simulated datasets, we show that DSSL exhibits better generalization
performance to changes in treatment patterns compared to the above approaches.
|
1511.02570
|
Chunhua Shen
|
Peng Wang, Qi Wu, Chunhua Shen, Anton van den Hengel, Anthony Dick
|
Explicit Knowledge-based Reasoning for Visual Question Answering
|
20 pages
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a method for visual question answering which is capable of
reasoning about contents of an image on the basis of information extracted from
a large-scale knowledge base. The method not only answers natural language
questions using concepts not contained in the image, but can also provide an
explanation of the reasoning by which it developed its answer. The method is
capable of answering far more complex questions than the predominant long
short-term memory-based approach, and outperforms it significantly in the
testing. We also provide a dataset and a protocol by which to evaluate such
methods, thus addressing one of the key issues in general visual question
answering.
|
[
{
"version": "v1",
"created": "Mon, 9 Nov 2015 05:25:57 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Nov 2015 01:10:38 GMT"
}
] | 2015-11-13T00:00:00 |
[
[
"Wang",
"Peng",
""
],
[
"Wu",
"Qi",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Dick",
"Anthony",
""
]
] |
TITLE: Explicit Knowledge-based Reasoning for Visual Question Answering
ABSTRACT: We describe a method for visual question answering which is capable of
reasoning about contents of an image on the basis of information extracted from
a large-scale knowledge base. The method not only answers natural language
questions using concepts not contained in the image, but can also provide an
explanation of the reasoning by which it developed its answer. The method is
capable of answering far more complex questions than the predominant long
short-term memory-based approach, and outperforms it significantly in the
testing. We also provide a dataset and a protocol by which to evaluate such
methods, thus addressing one of the key issues in general visual question
answering.
|
1511.03690
|
David Harwath
|
David Harwath and James Glass
|
Deep Multimodal Semantic Embeddings for Speech and Images
| null | null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a model which takes as input a corpus of images
with relevant spoken captions and finds a correspondence between the two
modalities. We employ a pair of convolutional neural networks to model visual
objects and speech signals at the word level, and tie the networks together
with an embedding and alignment model which learns a joint semantic space over
both modalities. We evaluate our model using image search and annotation tasks
on the Flickr8k dataset, which we augmented by collecting a corpus of 40,000
spoken captions using Amazon Mechanical Turk.
|
[
{
"version": "v1",
"created": "Wed, 11 Nov 2015 21:30:10 GMT"
}
] | 2015-11-13T00:00:00 |
[
[
"Harwath",
"David",
""
],
[
"Glass",
"James",
""
]
] |
TITLE: Deep Multimodal Semantic Embeddings for Speech and Images
ABSTRACT: In this paper, we present a model which takes as input a corpus of images
with relevant spoken captions and finds a correspondence between the two
modalities. We employ a pair of convolutional neural networks to model visual
objects and speech signals at the word level, and tie the networks together
with an embedding and alignment model which learns a joint semantic space over
both modalities. We evaluate our model using image search and annotation tasks
on the Flickr8k dataset, which we augmented by collecting a corpus of 40,000
spoken captions using Amazon Mechanical Turk.
|
1511.04048
|
Roozbeh Mottaghi
|
Roozbeh Mottaghi, Hessam Bagherinezhad, Mohammad Rastegari, Ali
Farhadi
|
Newtonian Image Understanding: Unfolding the Dynamics of Objects in
Static Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the challenging problem of predicting the dynamics of
objects in static images. Given a query object in an image, our goal is to
provide a physical understanding of the object in terms of the forces acting
upon it and its long-term motion in response to those forces. Direct and
explicit estimation of the forces and the motion of objects from a single image
is extremely challenging. We define intermediate physical abstractions called
Newtonian scenarios and introduce Newtonian Neural Network ($N^3$) that learns
to map a single image to a state in a Newtonian scenario. Our experimental
evaluations show that our method can reliably predict dynamics of a query
object from a single image. In addition, our approach can provide physical
reasoning that supports the predicted dynamics in terms of velocity and force
vectors. To spur research in this direction we compiled Visual Newtonian
Dynamics (VIND) dataset that includes 6806 videos aligned with Newtonian
scenarios represented using game engines, and 4516 still images with their
ground truth dynamics.
|
[
{
"version": "v1",
"created": "Thu, 12 Nov 2015 20:21:11 GMT"
}
] | 2015-11-13T00:00:00 |
[
[
"Mottaghi",
"Roozbeh",
""
],
[
"Bagherinezhad",
"Hessam",
""
],
[
"Rastegari",
"Mohammad",
""
],
[
"Farhadi",
"Ali",
""
]
] |
TITLE: Newtonian Image Understanding: Unfolding the Dynamics of Objects in
Static Images
ABSTRACT: In this paper, we study the challenging problem of predicting the dynamics of
objects in static images. Given a query object in an image, our goal is to
provide a physical understanding of the object in terms of the forces acting
upon it and its long-term motion in response to those forces. Direct and
explicit estimation of the forces and the motion of objects from a single image
is extremely challenging. We define intermediate physical abstractions called
Newtonian scenarios and introduce Newtonian Neural Network ($N^3$) that learns
to map a single image to a state in a Newtonian scenario. Our experimental
evaluations show that our method can reliably predict dynamics of a query
object from a single image. In addition, our approach can provide physical
reasoning that supports the predicted dynamics in terms of velocity and force
vectors. To spur research in this direction we compiled Visual Newtonian
Dynamics (VIND) dataset that includes 6806 videos aligned with Newtonian
scenarios represented using game engines, and 4516 still images with their
ground truth dynamics.
|
1511.04056
|
Mohammad Norouzi
|
Mohammad Norouzi, Maxwell D. Collins, Matthew Johnson, David J. Fleet,
Pushmeet Kohli
|
Efficient non-greedy optimization of decision trees
|
in NIPS 2015
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decision trees and randomized forests are widely used in computer vision and
machine learning. Standard algorithms for decision tree induction optimize the
split functions one node at a time according to some splitting criteria. This
greedy procedure often leads to suboptimal trees. In this paper, we present an
algorithm for optimizing the split functions at all levels of the tree jointly
with the leaf parameters, based on a global objective. We show that the problem
of finding optimal linear-combination (oblique) splits for decision trees is
related to structured prediction with latent variables, and we formulate a
convex-concave upper bound on the tree's empirical loss. The run-time of
computing the gradient of the proposed surrogate objective with respect to each
training exemplar is quadratic in the tree depth, and thus training deep
trees is feasible. The use of stochastic gradient descent for optimization
enables effective training with large datasets. Experiments on several
classification benchmarks demonstrate that the resulting non-greedy decision
trees outperform greedy decision tree baselines.
|
[
{
"version": "v1",
"created": "Thu, 12 Nov 2015 20:32:28 GMT"
}
] | 2015-11-13T00:00:00 |
[
[
"Norouzi",
"Mohammad",
""
],
[
"Collins",
"Maxwell D.",
""
],
[
"Johnson",
"Matthew",
""
],
[
"Fleet",
"David J.",
""
],
[
"Kohli",
"Pushmeet",
""
]
] |
TITLE: Efficient non-greedy optimization of decision trees
ABSTRACT: Decision trees and randomized forests are widely used in computer vision and
machine learning. Standard algorithms for decision tree induction optimize the
split functions one node at a time according to some splitting criteria. This
greedy procedure often leads to suboptimal trees. In this paper, we present an
algorithm for optimizing the split functions at all levels of the tree jointly
with the leaf parameters, based on a global objective. We show that the problem
of finding optimal linear-combination (oblique) splits for decision trees is
related to structured prediction with latent variables, and we formulate a
convex-concave upper bound on the tree's empirical loss. The run-time of
computing the gradient of the proposed surrogate objective with respect to each
training exemplar is quadratic in the tree depth, and thus training deep
trees is feasible. The use of stochastic gradient descent for optimization
enables effective training with large datasets. Experiments on several
classification benchmarks demonstrate that the resulting non-greedy decision
trees outperform greedy decision tree baselines.
|
1511.04067
|
Oncel Tuzel
|
Raviteja Vemulapalli and Oncel Tuzel and Ming-Yu Liu
|
Deep Gaussian Conditional Random Field Network: A Model-based Deep
Network for Discriminative Denoising
|
10 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel deep network architecture for image denoising based on a
Gaussian Conditional Random Field (GCRF) model. In contrast to the existing
discriminative denoising methods that train a separate model for each noise
level, the proposed deep network explicitly models the input noise variance and
hence is capable of handling a range of noise levels. Our deep network, which
we refer to as deep GCRF network, consists of two sub-networks: (i) a parameter
generation network that generates the pairwise potential parameters based on
the noisy input image, and (ii) an inference network whose layers perform the
computations involved in an iterative GCRF inference procedure. We train the
entire deep GCRF network (both parameter generation and inference networks)
discriminatively in an end-to-end fashion by maximizing the peak
signal-to-noise ratio measure. Experiments on Berkeley segmentation and
PASCAL VOC datasets show that the proposed deep GCRF network outperforms
state-of-the-art image denoising approaches for several noise levels.
|
[
{
"version": "v1",
"created": "Thu, 12 Nov 2015 20:49:20 GMT"
}
] | 2015-11-13T00:00:00 |
[
[
"Vemulapalli",
"Raviteja",
""
],
[
"Tuzel",
"Oncel",
""
],
[
"Liu",
"Ming-Yu",
""
]
] |
TITLE: Deep Gaussian Conditional Random Field Network: A Model-based Deep
Network for Discriminative Denoising
ABSTRACT: We propose a novel deep network architecture for image denoising based on a
Gaussian Conditional Random Field (GCRF) model. In contrast to the existing
discriminative denoising methods that train a separate model for each noise
level, the proposed deep network explicitly models the input noise variance and
hence is capable of handling a range of noise levels. Our deep network, which
we refer to as deep GCRF network, consists of two sub-networks: (i) a parameter
generation network that generates the pairwise potential parameters based on
the noisy input image, and (ii) an inference network whose layers perform the
computations involved in an iterative GCRF inference procedure. We train the
entire deep GCRF network (both parameter generation and inference networks)
discriminatively in an end-to-end fashion by maximizing the peak
signal-to-noise ratio measure. Experiments on Berkeley segmentation and
PASCAL VOC datasets show that the proposed deep GCRF network outperforms
state-of-the-art image denoising approaches for several noise levels.
|
1410.4175
|
Charles Brummitt
|
Charles D. Brummitt and George Barnett and Raissa M. D'Souza
|
Coupled catastrophes: sudden shifts cascade and hop among interdependent
systems
|
20 pages, 4 figures, plus a 6-page supplementary material that
contains 5 figures. Accepted at Journal of the Royal Society Interface
|
J. R. Soc. Interface 2015 12 20150712
|
10.1098/rsif.2015.0712
| null |
physics.soc-ph math.CA math.DS nlin.SI physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An important challenge in several disciplines is to understand how sudden
changes can propagate among coupled systems. Examples include the
synchronization of business cycles, population collapse in patchy ecosystems,
markets shifting to a new technology platform, collapses in prices and in
confidence in financial markets, and protests erupting in multiple countries. A
number of mathematical models of these phenomena have multiple equilibria
separated by saddle-node bifurcations. We study this behavior in its normal
form as fast--slow ordinary differential equations. In our model, a system
consists of multiple subsystems, such as countries in the global economy or
patches of an ecosystem. Each subsystem is described by a scalar quantity, such
as economic output or population, that undergoes sudden changes via saddle-node
bifurcations. The subsystems are coupled via their scalar quantity (e.g., trade
couples economic output; diffusion couples populations); that coupling moves
the locations of their bifurcations. The model demonstrates two ways in which
sudden changes can propagate: they can cascade (one causing the next), or they
can hop over subsystems. The latter is absent from classic models of cascades.
For an application, we study the Arab Spring protests. After connecting the
model to sociological theories that have bistability, we use socioeconomic data
to estimate relative proximities to tipping points and Facebook data to
estimate couplings among countries. We find that although protests tend to
spread locally, they also seem to "hop" over countries, as in the stylized
model; this result highlights a new class of temporal motifs in longitudinal
network datasets.
|
[
{
"version": "v1",
"created": "Wed, 15 Oct 2014 19:20:19 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Oct 2014 16:03:25 GMT"
},
{
"version": "v3",
"created": "Fri, 31 Oct 2014 17:26:23 GMT"
},
{
"version": "v4",
"created": "Fri, 16 Oct 2015 16:58:54 GMT"
}
] | 2015-11-12T00:00:00 |
[
[
"Brummitt",
"Charles D.",
""
],
[
"Barnett",
"George",
""
],
[
"D'Souza",
"Raissa M.",
""
]
] |
TITLE: Coupled catastrophes: sudden shifts cascade and hop among interdependent
systems
ABSTRACT: An important challenge in several disciplines is to understand how sudden
changes can propagate among coupled systems. Examples include the
synchronization of business cycles, population collapse in patchy ecosystems,
markets shifting to a new technology platform, collapses in prices and in
confidence in financial markets, and protests erupting in multiple countries. A
number of mathematical models of these phenomena have multiple equilibria
separated by saddle-node bifurcations. We study this behavior in its normal
form as fast--slow ordinary differential equations. In our model, a system
consists of multiple subsystems, such as countries in the global economy or
patches of an ecosystem. Each subsystem is described by a scalar quantity, such
as economic output or population, that undergoes sudden changes via saddle-node
bifurcations. The subsystems are coupled via their scalar quantity (e.g., trade
couples economic output; diffusion couples populations); that coupling moves
the locations of their bifurcations. The model demonstrates two ways in which
sudden changes can propagate: they can cascade (one causing the next), or they
can hop over subsystems. The latter is absent from classic models of cascades.
For an application, we study the Arab Spring protests. After connecting the
model to sociological theories that have bistability, we use socioeconomic data
to estimate relative proximities to tipping points and Facebook data to
estimate couplings among countries. We find that although protests tend to
spread locally, they also seem to "hop" over countries, as in the stylized
model; this result highlights a new class of temporal motifs in longitudinal
network datasets.
|
1511.03292
|
Yezhou Yang
|
Somak Aditya, Yezhou Yang, Chitta Baral, Cornelia Fermuller, Yiannis
Aloimonos
|
From Images to Sentences through Scene Description Graphs using
Commonsense Reasoning and Knowledge
| null | null | null | null |
cs.CV cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose the construction of linguistic descriptions of
images. This is achieved through the extraction of scene description graphs
(SDGs) from visual scenes using an automatically constructed knowledge base.
SDGs are constructed using both vision and reasoning. Specifically, commonsense
reasoning is applied on (a) detections obtained from existing perception
methods on given images, (b) a "commonsense" knowledge base constructed using
natural language processing of image annotations and (c) lexical ontological
knowledge from resources such as WordNet. Amazon Mechanical Turk (AMT)-based
evaluations on Flickr8k, Flickr30k and MS-COCO datasets show that in most
cases, sentences auto-constructed from SDGs obtained by our method give a more
relevant and thorough description of an image than a recent state-of-the-art
image caption based approach. Our Image-Sentence Alignment Evaluation results
are also comparable to those of the recent state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Tue, 10 Nov 2015 21:14:51 GMT"
}
] | 2015-11-12T00:00:00 |
[
[
"Aditya",
"Somak",
""
],
[
"Yang",
"Yezhou",
""
],
[
"Baral",
"Chitta",
""
],
[
"Fermuller",
"Cornelia",
""
],
[
"Aloimonos",
"Yiannis",
""
]
] |
TITLE: From Images to Sentences through Scene Description Graphs using
Commonsense Reasoning and Knowledge
ABSTRACT: In this paper we propose the construction of linguistic descriptions of
images. This is achieved through the extraction of scene description graphs
(SDGs) from visual scenes using an automatically constructed knowledge base.
SDGs are constructed using both vision and reasoning. Specifically, commonsense
reasoning is applied on (a) detections obtained from existing perception
methods on given images, (b) a "commonsense" knowledge base constructed using
natural language processing of image annotations and (c) lexical ontological
knowledge from resources such as WordNet. Amazon Mechanical Turk (AMT)-based
evaluations on Flickr8k, Flickr30k and MS-COCO datasets show that in most
cases, sentences auto-constructed from SDGs obtained by our method give a more
relevant and thorough description of an image than a recent state-of-the-art
image caption based approach. Our Image-Sentence Alignment Evaluation results
are also comparable to those of the recent state-of-the-art approaches.
|
1511.03361
|
Alexander Wong
|
Mohammad Javad Shafiee, Audrey G. Chung, Devinder Kumar, Farzad
Khalvati, Masoom Haider, and Alexander Wong
|
Discovery Radiomics via StochasticNet Sequencers for Cancer Detection
|
3 pages
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Radiomics has proven to be a powerful prognostic tool for cancer detection,
and has previously been applied in lung, breast, prostate, and head-and-neck
cancer studies with great success. However, these radiomics-driven methods rely
on pre-defined, hand-crafted radiomic feature sets that can limit their ability
to characterize unique cancer traits. In this study, we introduce a novel
discovery radiomics framework where we directly discover custom radiomic
features from the wealth of available medical imaging data. In particular, we
leverage novel StochasticNet radiomic sequencers for extracting custom radiomic
features tailored for characterizing unique cancer tissue phenotypes. Using
StochasticNet radiomic sequencers discovered using a wealth of lung CT data, we
perform binary classification on 42,340 lung lesions obtained from the CT scans
of 93 patients in the LIDC-IDRI dataset. Preliminary results show significant
improvement over previous state-of-the-art methods, indicating the potential of
the proposed discovery radiomics framework for improving cancer screening and
diagnosis.
|
[
{
"version": "v1",
"created": "Wed, 11 Nov 2015 02:27:23 GMT"
}
] | 2015-11-12T00:00:00 |
[
[
"Shafiee",
"Mohammad Javad",
""
],
[
"Chung",
"Audrey G.",
""
],
[
"Kumar",
"Devinder",
""
],
[
"Khalvati",
"Farzad",
""
],
[
"Haider",
"Masoom",
""
],
[
"Wong",
"Alexander",
""
]
] |
TITLE: Discovery Radiomics via StochasticNet Sequencers for Cancer Detection
ABSTRACT: Radiomics has proven to be a powerful prognostic tool for cancer detection,
and has previously been applied in lung, breast, prostate, and head-and-neck
cancer studies with great success. However, these radiomics-driven methods rely
on pre-defined, hand-crafted radiomic feature sets that can limit their ability
to characterize unique cancer traits. In this study, we introduce a novel
discovery radiomics framework where we directly discover custom radiomic
features from the wealth of available medical imaging data. In particular, we
leverage novel StochasticNet radiomic sequencers for extracting custom radiomic
features tailored for characterizing unique cancer tissue phenotypes. Using
StochasticNet radiomic sequencers discovered using a wealth of lung CT data, we
perform binary classification on 42,340 lung lesions obtained from the CT scans
of 93 patients in the LIDC-IDRI dataset. Preliminary results show significant
improvement over previous state-of-the-art methods, indicating the potential of
the proposed discovery radiomics framework for improving cancer screening and
diagnosis.
|
1511.03609
|
Andrei Costin
|
Andrei Costin and Apostolis Zarras and Aur\'elien Francillon
|
Automated Dynamic Firmware Analysis at Scale: A Case Study on Embedded
Web Interfaces
| null | null | null | null |
cs.CR cs.DC cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Embedded devices are becoming more widespread, interconnected, and
web-enabled than ever. However, recent studies showed that these devices are
far from being secure. Moreover, many embedded systems rely on web interfaces
for user interaction or administration. Unfortunately, web security is known to
be difficult, and therefore the web interfaces of embedded systems represent a
considerable attack surface.
In this paper, we present the first fully automated framework that applies
dynamic firmware analysis techniques to achieve, in a scalable manner,
automated vulnerability discovery within embedded firmware images. We apply our
framework to study the security of embedded web interfaces running in
Commercial Off-The-Shelf (COTS) embedded devices, such as routers, DSL/cable
modems, VoIP phones, and IP/CCTV cameras. We introduce a methodology and implement
a scalable framework for discovery of vulnerabilities in embedded web
interfaces regardless of the vendor, device, or architecture. To achieve this
goal, our framework performs full system emulation to achieve the execution of
firmware images in a software-only environment, i.e., without involving any
physical embedded devices. Then, we analyze the web interfaces within the
firmware using both static and dynamic tools. We also present some interesting
case-studies, and discuss the main challenges associated with the dynamic
analysis of firmware images and their web interfaces and network services. The
observations we make in this paper shed light on an important aspect of
embedded devices which was not previously studied at a large scale.
We validate our framework by testing it on 1925 firmware images from 54
different vendors. We discover important vulnerabilities in 185 firmware
images, affecting nearly a quarter of vendors in our dataset. These
experimental results demonstrate the effectiveness of our approach.
|
[
{
"version": "v1",
"created": "Wed, 11 Nov 2015 19:17:38 GMT"
}
] | 2015-11-12T00:00:00 |
[
[
"Costin",
"Andrei",
""
],
[
"Zarras",
"Apostolis",
""
],
[
"Francillon",
"Aurélien",
""
]
] |
TITLE: Automated Dynamic Firmware Analysis at Scale: A Case Study on Embedded
Web Interfaces
ABSTRACT: Embedded devices are becoming more widespread, interconnected, and
web-enabled than ever. However, recent studies showed that these devices are
far from being secure. Moreover, many embedded systems rely on web interfaces
for user interaction or administration. Unfortunately, web security is known to
be difficult, and therefore the web interfaces of embedded systems represent a
considerable attack surface.
In this paper, we present the first fully automated framework that applies
dynamic firmware analysis techniques to achieve, in a scalable manner,
automated vulnerability discovery within embedded firmware images. We apply our
framework to study the security of embedded web interfaces running in
Commercial Off-The-Shelf (COTS) embedded devices, such as routers, DSL/cable
modems, VoIP phones, and IP/CCTV cameras. We introduce a methodology and implement
a scalable framework for discovery of vulnerabilities in embedded web
interfaces regardless of the vendor, device, or architecture. To achieve this
goal, our framework performs full system emulation to achieve the execution of
firmware images in a software-only environment, i.e., without involving any
physical embedded devices. Then, we analyze the web interfaces within the
firmware using both static and dynamic tools. We also present some interesting
case-studies, and discuss the main challenges associated with the dynamic
analysis of firmware images and their web interfaces and network services. The
observations we make in this paper shed light on an important aspect of
embedded devices which was not previously studied at a large scale.
We validate our framework by testing it on 1925 firmware images from 54
different vendors. We discover important vulnerabilities in 185 firmware
images, affecting nearly a quarter of vendors in our dataset. These
experimental results demonstrate the effectiveness of our approach.
|
1503.04843
|
Thomas Steinke
|
Raef Bassily and Adam Smith and Thomas Steinke and Jonathan Ullman
|
More General Queries and Less Generalization Error in Adaptive Data
Analysis
|
This paper was merged with another manuscript and is now subsumed by
arXiv:1511.02513
| null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adaptivity is an important feature of data analysis---typically the choice of
questions asked about a dataset depends on previous interactions with the same
dataset. However, generalization error is typically bounded in a non-adaptive
model, where all questions are specified before the dataset is drawn. Recent
work by Dwork et al. (STOC '15) and Hardt and Ullman (FOCS '14) initiated the
formal study of this problem, and gave the first upper and lower bounds on the
achievable generalization error for adaptive data analysis.
Specifically, suppose there is an unknown distribution $\mathcal{P}$ and a
set of $n$ independent samples $x$ is drawn from $\mathcal{P}$. We seek an
algorithm that, given $x$ as input, "accurately" answers a sequence of
adaptively chosen "queries" about the unknown distribution $\mathcal{P}$. How
many samples $n$ must we draw from the distribution, as a function of the type
of queries, the number of queries, and the desired level of accuracy?
In this work we make two new contributions towards resolving this question:
*We give upper bounds on the number of samples $n$ that are needed to answer
statistical queries that improve over the bounds of Dwork et al.
*We prove the first upper bounds on the number of samples required to answer
more general families of queries. These include arbitrary low-sensitivity
queries and the important class of convex risk minimization queries.
As in Dwork et al., our algorithms are based on a connection between
differential privacy and generalization error, but we feel that our analysis is
simpler and more modular, which may be useful for studying these questions in
the future.
|
[
{
"version": "v1",
"created": "Mon, 16 Mar 2015 20:48:42 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Nov 2015 02:01:05 GMT"
}
] | 2015-11-11T00:00:00 |
[
[
"Bassily",
"Raef",
""
],
[
"Smith",
"Adam",
""
],
[
"Steinke",
"Thomas",
""
],
[
"Ullman",
"Jonathan",
""
]
] |
TITLE: More General Queries and Less Generalization Error in Adaptive Data
Analysis
ABSTRACT: Adaptivity is an important feature of data analysis---typically the choice of
questions asked about a dataset depends on previous interactions with the same
dataset. However, generalization error is typically bounded in a non-adaptive
model, where all questions are specified before the dataset is drawn. Recent
work by Dwork et al. (STOC '15) and Hardt and Ullman (FOCS '14) initiated the
formal study of this problem, and gave the first upper and lower bounds on the
achievable generalization error for adaptive data analysis.
Specifically, suppose there is an unknown distribution $\mathcal{P}$ and a
set of $n$ independent samples $x$ is drawn from $\mathcal{P}$. We seek an
algorithm that, given $x$ as input, "accurately" answers a sequence of
adaptively chosen "queries" about the unknown distribution $\mathcal{P}$. How
many samples $n$ must we draw from the distribution, as a function of the type
of queries, the number of queries, and the desired level of accuracy?
In this work we make two new contributions towards resolving this question:
*We give upper bounds on the number of samples $n$ that are needed to answer
statistical queries that improve over the bounds of Dwork et al.
*We prove the first upper bounds on the number of samples required to answer
more general families of queries. These include arbitrary low-sensitivity
queries and the important class of convex risk minimization queries.
As in Dwork et al., our algorithms are based on a connection between
differential privacy and generalization error, but we feel that our analysis is
simpler and more modular, which may be useful for studying these questions in
the future.
|
1508.05463
|
Alexander Wong
|
Mohammad Javad Shafiee, Parthipan Siva, and Alexander Wong
|
StochasticNet: Forming Deep Neural Networks via Stochastic Connectivity
|
8 pages
| null | null | null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks are a branch of machine learning that has seen a meteoric
rise in popularity due to their powerful ability to represent and model
high-level abstractions in highly complex data. One area in deep neural
networks that is ripe for exploration is neural connectivity formation. A
pivotal study on the brain tissue of rats found that synaptic formation for
specific functional connectivity in neocortical neural microcircuits can be
surprisingly well modeled and predicted as a random formation. Motivated by
this intriguing finding, we introduce the concept of StochasticNet, where deep
neural networks are formed via stochastic connectivity between neurons. As a
result, any type of deep neural network can be formed as a StochasticNet by
allowing the neuron connectivity to be stochastic. Stochastic synaptic
formations, in a deep neural network architecture, can allow for efficient
utilization of neurons for performing specific tasks. To evaluate the
feasibility of such a deep neural network architecture, we train a
StochasticNet using four different image datasets (CIFAR-10, MNIST, SVHN, and
STL-10). Experimental results show that a StochasticNet, using less than half
the number of neural connections as a conventional deep neural network,
achieves comparable accuracy and reduces overfitting on the CIFAR-10, MNIST and
SVHN datasets. Interestingly, a StochasticNet with less than half the number of
neural connections achieved a higher accuracy (a relative improvement in test
error rate of ~6% compared to ConvNet) on the STL-10 dataset than a
conventional deep neural network. Finally, StochasticNets have faster
operational speeds while achieving better or similar accuracy performances.
|
[
{
"version": "v1",
"created": "Sat, 22 Aug 2015 03:36:43 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Aug 2015 19:05:03 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Sep 2015 01:34:17 GMT"
},
{
"version": "v4",
"created": "Tue, 10 Nov 2015 20:30:05 GMT"
}
] | 2015-11-11T00:00:00 |
[
[
"Shafiee",
"Mohammad Javad",
""
],
[
"Siva",
"Parthipan",
""
],
[
"Wong",
"Alexander",
""
]
] |
TITLE: StochasticNet: Forming Deep Neural Networks via Stochastic Connectivity
ABSTRACT: Deep neural networks are a branch of machine learning that has seen a meteoric
rise in popularity due to their powerful ability to represent and model
high-level abstractions in highly complex data. One area in deep neural
networks that is ripe for exploration is neural connectivity formation. A
pivotal study on the brain tissue of rats found that synaptic formation for
specific functional connectivity in neocortical neural microcircuits can be
surprisingly well modeled and predicted as a random formation. Motivated by
this intriguing finding, we introduce the concept of StochasticNet, where deep
neural networks are formed via stochastic connectivity between neurons. As a
result, any type of deep neural network can be formed as a StochasticNet by
allowing the neuron connectivity to be stochastic. Stochastic synaptic
formations, in a deep neural network architecture, can allow for efficient
utilization of neurons for performing specific tasks. To evaluate the
feasibility of such a deep neural network architecture, we train a
StochasticNet using four different image datasets (CIFAR-10, MNIST, SVHN, and
STL-10). Experimental results show that a StochasticNet, using less than half
the number of neural connections as a conventional deep neural network,
achieves comparable accuracy and reduces overfitting on the CIFAR-10, MNIST and
SVHN datasets. Interestingly, a StochasticNet with less than half the number of
neural connections achieved a higher accuracy (a relative improvement in test
error rate of ~6% compared to ConvNet) on the STL-10 dataset than a
conventional deep neural network. Finally, StochasticNets have faster
operational speeds while achieving better or similar accuracy performances.
|
1511.02872
|
Hiroharu Kato
|
Hiroharu Kato and Tatsuya Harada
|
Visual Language Modeling on CNN Image Representations
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Measuring the naturalness of images is important to generate realistic images
or to detect unnatural regions in images. Additionally, a method to measure
naturalness can be complementary to Convolutional Neural Network (CNN) based
features, which are known to be insensitive to the naturalness of images.
However, most probabilistic image models have insufficient capability of
modeling the complex and abstract naturalness that we feel because they are
built directly on raw image pixels. In this work, we assume that naturalness
can be measured by the predictability of high-level features during eye
movement. Based on this assumption, we propose a novel method to evaluate the
naturalness by building a variant of Recurrent Neural Network Language Models
on pre-trained CNN representations. Our method is applied to two tasks,
demonstrating that 1) using our method as a regularizer enables us to generate
more understandable images from image features than existing approaches, and 2)
unnaturalness maps produced by our method achieve state-of-the-art eye fixation
prediction performance on two well-studied datasets.
|
[
{
"version": "v1",
"created": "Mon, 9 Nov 2015 21:00:08 GMT"
}
] | 2015-11-11T00:00:00 |
[
[
"Kato",
"Hiroharu",
""
],
[
"Harada",
"Tatsuya",
""
]
] |
TITLE: Visual Language Modeling on CNN Image Representations
ABSTRACT: Measuring the naturalness of images is important to generate realistic images
or to detect unnatural regions in images. Additionally, a method to measure
naturalness can be complementary to Convolutional Neural Network (CNN) based
features, which are known to be insensitive to the naturalness of images.
However, most probabilistic image models have insufficient capability of
modeling the complex and abstract naturalness that we feel because they are
built directly on raw image pixels. In this work, we assume that naturalness
can be measured by the predictability of high-level features during eye
movement. Based on this assumption, we propose a novel method to evaluate the
naturalness by building a variant of Recurrent Neural Network Language Models
on pre-trained CNN representations. Our method is applied to two tasks,
demonstrating that 1) using our method as a regularizer enables us to generate
more understandable images from image features than existing approaches, and 2)
unnaturalness maps produced by our method achieve state-of-the-art eye fixation
prediction performance on two well-studied datasets.
|
1511.03055
|
Olivier Mor\`ere
|
Jie Lin, Olivier Mor\`ere, Julie Petta, Vijay Chandrasekhar, Antoine
Veillard
|
Tiny Descriptors for Image Retrieval with Unsupervised Triplet Hashing
| null | null | null | null |
cs.IR cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A typical image retrieval pipeline starts with the comparison of global
descriptors from a large database to find a short list of candidate matches. A
good image descriptor is key to the retrieval pipeline and should reconcile two
contradictory requirements: providing recall rates as high as possible and
being as compact as possible for fast matching. Following the recent successes
of Deep Convolutional Neural Networks (DCNN) for large scale image
classification, descriptors extracted from DCNNs are increasingly used in place
of the traditional hand crafted descriptors such as Fisher Vectors (FV) with
better retrieval performances. Nevertheless, the dimensionality of a typical
DCNN descriptor --extracted either from the visual feature pyramid or the
fully-connected layers-- remains quite high at several thousands of scalar
values. In this paper, we propose Unsupervised Triplet Hashing (UTH), a fully
unsupervised method to compute extremely compact binary hashes --in the 32-256
bits range-- from high-dimensional global descriptors. UTH consists of two
successive deep learning steps. First, Stacked Restricted Boltzmann Machines
(SRBM), a type of unsupervised deep neural nets, are used to learn binary
embedding functions able to bring the descriptor size down to the desired
bitrate. SRBMs are typically able to ensure a very high compression rate at the
expense of losing some desirable metric properties of the original DCNN
descriptor space. Then, triplet networks, a rank learning scheme based on
weight-sharing nets, is used to fine-tune the binary embedding functions to
retain as much as possible of the useful metric properties of the original
space. A thorough empirical evaluation conducted on multiple publicly available
datasets using DCNN descriptors shows that our method is able to significantly
outperform state-of-the-art unsupervised schemes in the target bit range.
|
[
{
"version": "v1",
"created": "Tue, 10 Nov 2015 10:38:37 GMT"
}
] | 2015-11-11T00:00:00 |
[
[
"Lin",
"Jie",
""
],
[
"Morère",
"Olivier",
""
],
[
"Petta",
"Julie",
""
],
[
"Chandrasekhar",
"Vijay",
""
],
[
"Veillard",
"Antoine",
""
]
] |
TITLE: Tiny Descriptors for Image Retrieval with Unsupervised Triplet Hashing
ABSTRACT: A typical image retrieval pipeline starts with the comparison of global
descriptors from a large database to find a short list of candidate matches. A
good image descriptor is key to the retrieval pipeline and should reconcile two
contradictory requirements: providing recall rates as high as possible and
being as compact as possible for fast matching. Following the recent successes
of Deep Convolutional Neural Networks (DCNN) for large scale image
classification, descriptors extracted from DCNNs are increasingly used in place
of the traditional hand crafted descriptors such as Fisher Vectors (FV) with
better retrieval performances. Nevertheless, the dimensionality of a typical
DCNN descriptor --extracted either from the visual feature pyramid or the
fully-connected layers-- remains quite high at several thousands of scalar
values. In this paper, we propose Unsupervised Triplet Hashing (UTH), a fully
unsupervised method to compute extremely compact binary hashes --in the 32-256
bits range-- from high-dimensional global descriptors. UTH consists of two
successive deep learning steps. First, Stacked Restricted Boltzmann Machines
(SRBM), a type of unsupervised deep neural nets, are used to learn binary
embedding functions able to bring the descriptor size down to the desired
bitrate. SRBMs are typically able to ensure a very high compression rate at the
expense of losing some desirable metric properties of the original DCNN
descriptor space. Then, triplet networks, a rank learning scheme based on
weight-sharing nets, is used to fine-tune the binary embedding functions to
retain as much as possible of the useful metric properties of the original
space. A thorough empirical evaluation conducted on multiple publicly available
datasets using DCNN descriptors shows that our method is able to significantly
outperform state-of-the-art unsupervised schemes in the target bit range.
|
1511.03088
|
Isabelle Augenstein
|
Leon Derczynski and Isabelle Augenstein and Kalina Bontcheva
|
USFD: Twitter NER with Drift Compensation and Linked Data
|
Paper in ACL anthology:
https://aclweb.org/anthology/W/W15/W15-4306.bib
|
Proceedings of the ACL Workshop on Noisy User-generated Text
(2015), pp. 48--53
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a pilot NER system for Twitter, comprising the USFD
system entry to the W-NUT 2015 NER shared task. The goal is to correctly label
entities in a tweet dataset, using an inventory of ten types. We employ
structured learning, drawing on gazetteers taken from Linked Data, and on
unsupervised clustering features, and attempting to compensate for stylistic
and topic drift - a key challenge in social media text. Our result is
competitive; we provide an analysis of the components of our methodology, and
an examination of the target dataset in the context of this task.
|
[
{
"version": "v1",
"created": "Tue, 10 Nov 2015 12:34:47 GMT"
}
] | 2015-11-11T00:00:00 |
[
[
"Derczynski",
"Leon",
""
],
[
"Augenstein",
"Isabelle",
""
],
[
"Bontcheva",
"Kalina",
""
]
] |
TITLE: USFD: Twitter NER with Drift Compensation and Linked Data
ABSTRACT: This paper describes a pilot NER system for Twitter, comprising the USFD
system entry to the W-NUT 2015 NER shared task. The goal is to correctly label
entities in a tweet dataset, using an inventory of ten types. We employ
structured learning, drawing on gazetteers taken from Linked Data, and on
unsupervised clustering features, and attempting to compensate for stylistic
and topic drift - a key challenge in social media text. Our result is
competitive; we provide an analysis of the components of our methodology, and
an examination of the target dataset in the context of this task.
|
1511.03183
|
Hyungtae Lee
|
Hyungtae Lee, Heesung Kwon, Ryan M. Robinson, William d. Nothwang, and
Amar M. Marathe
|
Dynamic Belief Fusion for Object Detection
|
8 pages, 6 figures, 28 references. arXiv admin note: text overlap
with arXiv:1502.07643
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel approach for the fusion of heterogeneous object detection methods is
proposed. In order to effectively integrate the outputs of multiple detectors,
the level of ambiguity in each individual detection score is estimated using
the precision/recall relationship of the corresponding detector. The main
contribution of the proposed work is a novel fusion method, called Dynamic
Belief Fusion (DBF), which dynamically assigns probabilities to hypotheses
(target, non-target, intermediate state (target or non-target)) based on
confidence levels in the detection results conditioned on the prior performance
of individual detectors. In DBF, a joint basic probability assignment,
optimally fusing information from all detectors, is determined by the
Dempster's combination rule, and is easily reduced to a single fused detection
score. Experiments on ARL and PASCAL VOC 07 datasets demonstrate that the
detection accuracy of DBF is considerably greater than that of conventional fusion
approaches as well as individual detectors used for the fusion.
|
[
{
"version": "v1",
"created": "Tue, 10 Nov 2015 17:03:55 GMT"
}
] | 2015-11-11T00:00:00 |
[
[
"Lee",
"Hyungtae",
""
],
[
"Kwon",
"Heesung",
""
],
[
"Robinson",
"Ryan M.",
""
],
[
"Nothwang",
"William d.",
""
],
[
"Marathe",
"Amar M.",
""
]
] |
TITLE: Dynamic Belief Fusion for Object Detection
ABSTRACT: A novel approach for the fusion of heterogeneous object detection methods is
proposed. In order to effectively integrate the outputs of multiple detectors,
the level of ambiguity in each individual detection score is estimated using
the precision/recall relationship of the corresponding detector. The main
contribution of the proposed work is a novel fusion method, called Dynamic
Belief Fusion (DBF), which dynamically assigns probabilities to hypotheses
(target, non-target, intermediate state (target or non-target)) based on
confidence levels in the detection results conditioned on the prior performance
of individual detectors. In DBF, a joint basic probability assignment,
optimally fusing information from all detectors, is determined by the
Dempster's combination rule, and is easily reduced to a single fused detection
score. Experiments on ARL and PASCAL VOC 07 datasets demonstrate that the
detection accuracy of DBF is considerably greater than that of conventional fusion
approaches as well as individual detectors used for the fusion.
|
1511.03244
|
Ujwal Bonde
|
Ujwal Bonde, Vijay Badrinarayanan, Roberto Cipolla and Minh-Tri Pham
|
TemplateNet for Depth-Based Object Instance Recognition
|
10 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel deep architecture termed templateNet for depth based
object instance recognition. Using an intermediate template layer we exploit
prior knowledge of an object's shape to sparsify the feature maps. This has
three advantages: (i) the network is better regularised resulting in structured
filters; (ii) the sparse feature maps result in intuitive features being learnt
which can be visualized as the output of the template layer and (iii) the
resulting network achieves state-of-the-art performance. The network benefits
from this without any additional parametrization from the template layer. We
derive the weight updates needed to efficiently train this network in an
end-to-end manner. We benchmark the templateNet for depth based object instance
recognition using two publicly available datasets. The datasets present
multiple challenges of clutter, large pose variations and similar looking
distractors. Through our experiments we show that with the addition of a
template layer, a depth based CNN is able to outperform existing
state-of-the-art methods in the field.
|
[
{
"version": "v1",
"created": "Tue, 10 Nov 2015 20:03:36 GMT"
}
] | 2015-11-11T00:00:00 |
[
[
"Bonde",
"Ujwal",
""
],
[
"Badrinarayanan",
"Vijay",
""
],
[
"Cipolla",
"Roberto",
""
],
[
"Pham",
"Minh-Tri",
""
]
] |
TITLE: TemplateNet for Depth-Based Object Instance Recognition
ABSTRACT: We present a novel deep architecture termed templateNet for depth based
object instance recognition. Using an intermediate template layer we exploit
prior knowledge of an object's shape to sparsify the feature maps. This has
three advantages: (i) the network is better regularised resulting in structured
filters; (ii) the sparse feature maps result in intuitive features being learnt
which can be visualized as the output of the template layer and (iii) the
resulting network achieves state-of-the-art performance. The network benefits
from this without any additional parametrization from the template layer. We
derive the weight updates needed to efficiently train this network in an
end-to-end manner. We benchmark the templateNet for depth based object instance
recognition using two publicly available datasets. The datasets present
multiple challenges of clutter, large pose variations and similar looking
distractors. Through our experiments we show that with the addition of a
template layer, a depth based CNN is able to outperform existing
state-of-the-art methods in the field.
|
1511.03257
|
Fatih Cakir
|
Fatih Cakir, Sarah Adel Bargal, Stan Sclaroff
|
Online Supervised Hashing for Ever-Growing Datasets
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised hashing methods are widely-used for nearest neighbor search in
computer vision applications. Most state-of-the-art supervised hashing
approaches employ batch-learners. Unfortunately, batch-learning strategies can
be inefficient when confronted with large training datasets. Moreover, with
batch-learners, it is unclear how to adapt the hash functions as a dataset
continues to grow and diversify over time. Yet, in many practical scenarios the
dataset grows and diversifies; thus, both the hash functions and the indexing
must swiftly accommodate these changes. To address these issues, we propose an
online hashing method that is amenable to changes and expansions of the
datasets. Since it is an online algorithm, our approach offers linear
complexity with the dataset size. Our solution is supervised, in that we
incorporate available label information to preserve the semantic neighborhood.
Such an adaptive hashing method is attractive; but it requires recomputing the
hash table as the hash functions are updated. If the frequency of update is
high, then recomputing the hash table entries may cause inefficiencies in the
system, especially for large indexes. Thus, we also propose a framework to
reduce hash table updates. We compare our method to state-of-the-art solutions
on two benchmarks and demonstrate significant improvements over previous work.
|
[
{
"version": "v1",
"created": "Tue, 10 Nov 2015 20:37:41 GMT"
}
] | 2015-11-11T00:00:00 |
[
[
"Cakir",
"Fatih",
""
],
[
"Bargal",
"Sarah Adel",
""
],
[
"Sclaroff",
"Stan",
""
]
] |
TITLE: Online Supervised Hashing for Ever-Growing Datasets
ABSTRACT: Supervised hashing methods are widely-used for nearest neighbor search in
computer vision applications. Most state-of-the-art supervised hashing
approaches employ batch-learners. Unfortunately, batch-learning strategies can
be inefficient when confronted with large training datasets. Moreover, with
batch-learners, it is unclear how to adapt the hash functions as a dataset
continues to grow and diversify over time. Yet, in many practical scenarios the
dataset grows and diversifies; thus, both the hash functions and the indexing
must swiftly accommodate these changes. To address these issues, we propose an
online hashing method that is amenable to changes and expansions of the
datasets. Since it is an online algorithm, our approach offers linear
complexity with the dataset size. Our solution is supervised, in that we
incorporate available label information to preserve the semantic neighborhood.
Such an adaptive hashing method is attractive; but it requires recomputing the
hash table as the hash functions are updated. If the frequency of update is
high, then recomputing the hash table entries may cause inefficiencies in the
system, especially for large indexes. Thus, we also propose a framework to
reduce hash table updates. We compare our method to state-of-the-art solutions
on two benchmarks and demonstrate significant improvements over previous work.
|
1506.02897
|
Tomas Pfister
|
Tomas Pfister and James Charles and Andrew Zisserman
|
Flowing ConvNets for Human Pose Estimation in Videos
|
ICCV'15
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of this work is human pose estimation in videos, where multiple
frames are available. We investigate a ConvNet architecture that is able to
benefit from temporal context by combining information across the multiple
frames using optical flow.
To this end we propose a network architecture with the following novelties:
(i) a deeper network than previously investigated for regressing heatmaps; (ii)
spatial fusion layers that learn an implicit spatial model; (iii) optical flow
is used to align heatmap predictions from neighbouring frames; and (iv) a final
parametric pooling layer which learns to combine the aligned heatmaps into a
pooled confidence map.
We show that this architecture outperforms a number of others, including one
that uses optical flow solely at the input layers, one that regresses joint
coordinates directly, and one that predicts heatmaps without spatial fusion.
The new architecture outperforms the state of the art by a large margin on
three video pose estimation datasets, including the very challenging Poses in
the Wild dataset, and outperforms other deep methods that don't use a graphical
model on the single-image FLIC benchmark (and also Chen & Yuille and Tompson et
al. in the high precision region).
|
[
{
"version": "v1",
"created": "Tue, 9 Jun 2015 13:17:33 GMT"
},
{
"version": "v2",
"created": "Sun, 8 Nov 2015 16:52:59 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Pfister",
"Tomas",
""
],
[
"Charles",
"James",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
TITLE: Flowing ConvNets for Human Pose Estimation in Videos
ABSTRACT: The objective of this work is human pose estimation in videos, where multiple
frames are available. We investigate a ConvNet architecture that is able to
benefit from temporal context by combining information across the multiple
frames using optical flow.
To this end we propose a network architecture with the following novelties:
(i) a deeper network than previously investigated for regressing heatmaps; (ii)
spatial fusion layers that learn an implicit spatial model; (iii) optical flow
is used to align heatmap predictions from neighbouring frames; and (iv) a final
parametric pooling layer which learns to combine the aligned heatmaps into a
pooled confidence map.
We show that this architecture outperforms a number of others, including one
that uses optical flow solely at the input layers, one that regresses joint
coordinates directly, and one that predicts heatmaps without spatial fusion.
The new architecture outperforms the state of the art by a large margin on
three video pose estimation datasets, including the very challenging Poses in
the Wild dataset, and outperforms other deep methods that don't use a graphical
model on the single-image FLIC benchmark (and also Chen & Yuille and Tompson et
al. in the high precision region).
|
1511.01754
|
Bamdev Mishra
|
Vijay Badrinarayanan and Bamdev Mishra and Roberto Cipolla
|
Symmetry-invariant optimization in deep networks
|
Submitted to ICLR 2016. arXiv admin note: text overlap with
arXiv:1511.01029
| null | null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent works have highlighted scale invariance or symmetry that is present in
the weight space of a typical deep network and the adverse effect that it has
on the Euclidean gradient based stochastic gradient descent optimization. In
this work, we show that these and other commonly used deep networks, such as
those which use a max-pooling and sub-sampling layer, possess more complex
forms of symmetry arising from scaling based reparameterization of the network
weights. We then propose two symmetry-invariant gradient based weight updates
for stochastic gradient descent based learning. Our empirical evidence based on
the MNIST dataset shows that these updates improve the test performance without
sacrificing the computational efficiency of the weight updates. We also show
the results of training with one of the proposed weight updates on an image
segmentation problem.
|
[
{
"version": "v1",
"created": "Thu, 5 Nov 2015 14:17:40 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Nov 2015 19:01:03 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Badrinarayanan",
"Vijay",
""
],
[
"Mishra",
"Bamdev",
""
],
[
"Cipolla",
"Roberto",
""
]
] |
TITLE: Symmetry-invariant optimization in deep networks
ABSTRACT: Recent works have highlighted scale invariance or symmetry that is present in
the weight space of a typical deep network and the adverse effect that it has
on the Euclidean gradient based stochastic gradient descent optimization. In
this work, we show that these and other commonly used deep networks, such as
those which use a max-pooling and sub-sampling layer, possess more complex
forms of symmetry arising from scaling based reparameterization of the network
weights. We then propose two symmetry-invariant gradient based weight updates
for stochastic gradient descent based learning. Our empirical evidence based on
the MNIST dataset shows that these updates improve the test performance without
sacrificing the computational efficiency of the weight updates. We also show
the results of training with one of the proposed weight updates on an image
segmentation problem.
|
1511.02251
|
Armand Joulin
|
Armand Joulin, Laurens van der Maaten, Allan Jabri, Nicolas Vasilache
|
Learning Visual Features from Large Weakly Supervised Data
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional networks trained on large supervised datasets produce visual
features which form the basis for the state-of-the-art in many computer-vision
problems. Further improvements of these visual features will likely require
even larger manually labeled data sets, which severely limits the pace at which
progress can be made. In this paper, we explore the potential of leveraging
massive, weakly-labeled image collections for learning good visual features. We
train convolutional networks on a dataset of 100 million Flickr photos and
captions, and show that these networks produce features that perform well in a
range of vision problems. We also show that the networks appropriately capture
word similarity, and learn correspondences between different languages.
|
[
{
"version": "v1",
"created": "Fri, 6 Nov 2015 22:08:37 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Joulin",
"Armand",
""
],
[
"van der Maaten",
"Laurens",
""
],
[
"Jabri",
"Allan",
""
],
[
"Vasilache",
"Nicolas",
""
]
] |
TITLE: Learning Visual Features from Large Weakly Supervised Data
ABSTRACT: Convolutional networks trained on large supervised datasets produce visual
features which form the basis for the state-of-the-art in many computer-vision
problems. Further improvements of these visual features will likely require
even larger manually labeled data sets, which severely limits the pace at which
progress can be made. In this paper, we explore the potential of leveraging
massive, weakly-labeled image collections for learning good visual features. We
train convolutional networks on a dataset of 100 million Flickr photos and
captions, and show that these networks produce features that perform well in a
range of vision problems. We also show that the networks appropriately capture
word similarity, and learn correspondences between different languages.
|
1511.02254
|
Eric Heim
|
Eric Heim (1), Matthew Berger (2), Lee Seversky (2), Milos Hauskrecht
(1) ((1) University of Pittsburgh, (2) Air Force Research Laboratory,
Information Directorate)
|
Active Perceptual Similarity Modeling with Auxiliary Information
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning a model of perceptual similarity from a collection of objects is a
fundamental task in machine learning underlying numerous applications. A common
way to learn such a model is from relative comparisons in the form of triplets:
responses to queries of the form "Is object a more similar to b than it is to
c?". If no consideration is made in the determination of which queries to ask,
existing similarity learning methods can require a prohibitively large number
of responses. In this work, we consider the problem of actively learning from
triplets - finding which queries are most useful for learning. Different from
previous active triplet learning approaches, we incorporate auxiliary
information into our similarity model and introduce an active learning scheme
to find queries that are informative for quickly learning both the relevant
aspects of auxiliary data and the directly-learned similarity components.
Compared to prior approaches, we show that we can learn just as effectively
with far fewer queries. For evaluation, we introduce a new dataset of
exhaustive triplet comparisons obtained from humans and demonstrate improved
performance for different types of auxiliary information.
|
[
{
"version": "v1",
"created": "Fri, 6 Nov 2015 22:30:46 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Heim",
"Eric",
""
],
[
"Berger",
"Matthew",
""
],
[
"Seversky",
"Lee",
""
],
[
"Hauskrecht",
"Milos",
""
]
] |
TITLE: Active Perceptual Similarity Modeling with Auxiliary Information
ABSTRACT: Learning a model of perceptual similarity from a collection of objects is a
fundamental task in machine learning underlying numerous applications. A common
way to learn such a model is from relative comparisons in the form of triplets:
responses to queries of the form "Is object a more similar to b than it is to
c?". If no consideration is made in the determination of which queries to ask,
existing similarity learning methods can require a prohibitively large number
of responses. In this work, we consider the problem of actively learning from
triplets: finding which queries are most useful for learning. Different from
previous active triplet learning approaches, we incorporate auxiliary
information into our similarity model and introduce an active learning scheme
to find queries that are informative for quickly learning both the relevant
aspects of auxiliary data and the directly-learned similarity components.
Compared to prior approaches, we show that we can learn just as effectively
with far fewer queries. For evaluation, we introduce a new dataset of
exhaustive triplet comparisons obtained from humans and demonstrate improved
performance for different types of auxiliary information.
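The active scheme described above selects the triplet queries that are most informative for the similarity model. A minimal, generic sketch of uncertainty-based triplet selection; the logistic distance likelihood and the maximum-uncertainty criterion here are illustrative assumptions, not the authors' exact objective:

```python
import numpy as np

def triplet_probability(X, a, b, c):
    """P(a is closer to b than to c) under a simple logistic model
    on squared Euclidean distances (a common triplet likelihood)."""
    d_ab = np.sum((X[a] - X[b]) ** 2)
    d_ac = np.sum((X[a] - X[c]) ** 2)
    return 1.0 / (1.0 + np.exp(d_ab - d_ac))

def most_informative_triplet(X, candidates):
    """Pick the candidate triplet whose answer the current model is
    least sure about (probability closest to 0.5), i.e. the query
    with the highest expected information gain."""
    def uncertainty(t):
        p = triplet_probability(X, *t)
        return -abs(p - 0.5)
    return max(candidates, key=uncertainty)
```

With an embedding `X` of the objects, each round asks a human the returned query, then refits the embedding before selecting the next one.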
|
1511.02282
|
Lianwen Jin
|
Xiaorui Liu, Yichao Huang, Xin Zhang, Lianwen Jin
|
Fingertip in the Eye: A cascaded CNN pipeline for the real-time
fingertip detection in egocentric videos
|
5 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new pipeline for hand localization and fingertip detection.
For RGB images captured from an egocentric vision mobile camera, hand and
fingertip detection remains a challenging problem due to factors like
background complexity and hand shape variety. To address these issues
accurately and robustly, we build a large scale dataset named Ego-Fingertip and
propose a bi-level cascaded pipeline of convolutional neural networks, namely,
Attention-based Hand Detector as well as Multi-point Fingertip Detector. The
proposed method effectively addresses these challenges and achieves satisfactorily
accurate prediction and real-time performance compared to previous hand and
fingertip detection methods.
|
[
{
"version": "v1",
"created": "Sat, 7 Nov 2015 02:06:11 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Liu",
"Xiaorui",
""
],
[
"Huang",
"Yichao",
""
],
[
"Zhang",
"Xin",
""
],
[
"Jin",
"Lianwen",
""
]
] |
TITLE: Fingertip in the Eye: A cascaded CNN pipeline for the real-time
fingertip detection in egocentric videos
ABSTRACT: We introduce a new pipeline for hand localization and fingertip detection.
For RGB images captured from an egocentric vision mobile camera, hand and
fingertip detection remains a challenging problem due to factors like
background complexity and hand shape variety. To address these issues
accurately and robustly, we build a large scale dataset named Ego-Fingertip and
propose a bi-level cascaded pipeline of convolutional neural networks, namely,
Attention-based Hand Detector as well as Multi-point Fingertip Detector. The
proposed method effectively addresses these challenges and achieves satisfactorily
accurate prediction and real-time performance compared to previous hand and
fingertip detection methods.
|
1511.02426
|
Ehsan Lotfi
|
E. Lotfi
|
A Winner-Take-All Approach to Emotional Neural Networks with Universal
Approximation Property
|
Information Sciences (2015), Elsevier Publisher
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Here, we propose a brain-inspired winner-take-all emotional neural network
(WTAENN) and prove the universal approximation property for the novel
architecture. WTAENN is a single layered feedforward neural network that
benefits from the excitatory, inhibitory, and expandatory neural connections as
well as the winner-take-all (WTA) competitions in the human brain's nervous
system. The WTA competition increases the information capacity of the model
without adding hidden neurons. The universal approximation capability of the
proposed architecture is illustrated on two example functions, trained by a
genetic algorithm, and then applied to several competing recent and benchmark
problems such as in curve fitting, pattern recognition, classification and
prediction. In particular, it is tested on twelve UCI classification datasets,
a facial recognition problem, three real world prediction problems (2 chaotic
time series of geomagnetic activity indices and wind farm power generation
data), two synthetic case studies with constant and nonconstant noise variance
as well as k-selector and linear programming problems. Results indicate the
general applicability and often superiority of the approach in terms of higher
accuracy and lower model complexity, especially where low computational
complexity is imperative.
|
[
{
"version": "v1",
"created": "Sun, 8 Nov 2015 01:37:14 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Lotfi",
"E.",
""
]
] |
TITLE: A Winner-Take-All Approach to Emotional Neural Networks with Universal
Approximation Property
ABSTRACT: Here, we propose a brain-inspired winner-take-all emotional neural network
(WTAENN) and prove the universal approximation property for the novel
architecture. WTAENN is a single layered feedforward neural network that
benefits from the excitatory, inhibitory, and expandatory neural connections as
well as the winner-take-all (WTA) competitions in the human brain's nervous
system. The WTA competition increases the information capacity of the model
without adding hidden neurons. The universal approximation capability of the
proposed architecture is illustrated on two example functions, trained by a
genetic algorithm, and then applied to several competing recent and benchmark
problems such as in curve fitting, pattern recognition, classification and
prediction. In particular, it is tested on twelve UCI classification datasets,
a facial recognition problem, three real world prediction problems (2 chaotic
time series of geomagnetic activity indices and wind farm power generation
data), two synthetic case studies with constant and nonconstant noise variance
as well as k-selector and linear programming problems. Results indicate the
general applicability and often superiority of the approach in terms of higher
accuracy and lower model complexity, especially where low computational
complexity is imperative.
|
1511.02459
|
Lianwen Jin
|
Duorui Xie, Lingyu Liang, Lianwen Jin, Jie Xu, Mengru Li
|
SCUT-FBP: A Benchmark Dataset for Facial Beauty Perception
|
6 pages, 8 figures, 6 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a novel face dataset with attractiveness ratings, namely, the
SCUT-FBP dataset, is developed for automatic facial beauty perception. This
dataset provides a benchmark to evaluate the performance of different methods
for facial attractiveness prediction, including the state-of-the-art deep
learning method. The SCUT-FBP dataset contains face portraits of 500 Asian
female subjects with attractiveness ratings, all of which have been verified in
terms of rating distribution, standard deviation, consistency, and
self-consistency. Benchmark evaluations for facial attractiveness prediction
were performed with different combinations of facial geometrical features and
texture features using classical statistical learning methods and the deep
learning method. The best Pearson correlation (0.8187) was achieved by the CNN
model. Thus, the results of our experiments indicate that the SCUT-FBP dataset
provides a reliable benchmark for facial beauty perception.
|
[
{
"version": "v1",
"created": "Sun, 8 Nov 2015 09:21:32 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Xie",
"Duorui",
""
],
[
"Liang",
"Lingyu",
""
],
[
"Jin",
"Lianwen",
""
],
[
"Xu",
"Jie",
""
],
[
"Li",
"Mengru",
""
]
] |
TITLE: SCUT-FBP: A Benchmark Dataset for Facial Beauty Perception
ABSTRACT: In this paper, a novel face dataset with attractiveness ratings, namely, the
SCUT-FBP dataset, is developed for automatic facial beauty perception. This
dataset provides a benchmark to evaluate the performance of different methods
for facial attractiveness prediction, including the state-of-the-art deep
learning method. The SCUT-FBP dataset contains face portraits of 500 Asian
female subjects with attractiveness ratings, all of which have been verified in
terms of rating distribution, standard deviation, consistency, and
self-consistency. Benchmark evaluations for facial attractiveness prediction
were performed with different combinations of facial geometrical features and
texture features using classical statistical learning methods and the deep
learning method. The best Pearson correlation (0.8187) was achieved by the CNN
model. Thus, the results of our experiments indicate that the SCUT-FBP dataset
provides a reliable benchmark for facial beauty perception.
|
1511.02492
|
Amirhossein Habibian
|
Amirhossein Habibian, Thomas Mensink, Cees G.M. Snoek
|
VideoStory Embeddings Recognize Events when Examples are Scarce
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims for event recognition when video examples are scarce or even
completely absent. The key in such a challenging setting is a semantic video
representation. Rather than building the representation from individual
attribute detectors and their annotations, we propose to learn the entire
representation from freely available web videos and their descriptions using an
embedding between video features and term vectors. In our proposed embedding,
which we call VideoStory, the correlations between the terms are utilized to
learn a more effective representation by optimizing a joint objective balancing
descriptiveness and predictability. We show how learning the VideoStory using a
multimodal predictability loss, including appearance, motion and audio
features, results in a more predictable representation. We also propose a
variant of VideoStory to recognize an event in video from just the important
terms in a text query by introducing a term sensitive descriptiveness loss. Our
experiments on three challenging collections of web videos from the NIST
TRECVID Multimedia Event Detection and Columbia Consumer Videos datasets
demonstrate: i) the advantages of VideoStory over representations using
attributes or alternative embeddings, ii) the benefit of fusing video
modalities by an embedding over common strategies, iii) the complementarity of
term sensitive descriptiveness and multimodal predictability for event
recognition without examples. By its ability to improve predictability upon
any underlying video feature while at the same time maximizing semantic
descriptiveness, VideoStory leads to state-of-the-art accuracy for both few-
and zero-example recognition of events in video.
|
[
{
"version": "v1",
"created": "Sun, 8 Nov 2015 14:59:14 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Habibian",
"Amirhossein",
""
],
[
"Mensink",
"Thomas",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] |
TITLE: VideoStory Embeddings Recognize Events when Examples are Scarce
ABSTRACT: This paper aims for event recognition when video examples are scarce or even
completely absent. The key in such a challenging setting is a semantic video
representation. Rather than building the representation from individual
attribute detectors and their annotations, we propose to learn the entire
representation from freely available web videos and their descriptions using an
embedding between video features and term vectors. In our proposed embedding,
which we call VideoStory, the correlations between the terms are utilized to
learn a more effective representation by optimizing a joint objective balancing
descriptiveness and predictability. We show how learning the VideoStory using a
multimodal predictability loss, including appearance, motion and audio
features, results in a more predictable representation. We also propose a
variant of VideoStory to recognize an event in video from just the important
terms in a text query by introducing a term sensitive descriptiveness loss. Our
experiments on three challenging collections of web videos from the NIST
TRECVID Multimedia Event Detection and Columbia Consumer Videos datasets
demonstrate: i) the advantages of VideoStory over representations using
attributes or alternative embeddings, ii) the benefit of fusing video
modalities by an embedding over common strategies, iii) the complementarity of
term sensitive descriptiveness and multimodal predictability for event
recognition without examples. By its ability to improve predictability upon
any underlying video feature while at the same time maximizing semantic
descriptiveness, VideoStory leads to state-of-the-art accuracy for both few-
and zero-example recognition of events in video.
|
1511.02513
|
Thomas Steinke
|
Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer,
Jonathan Ullman
|
Algorithmic Stability for Adaptive Data Analysis
|
This work unifies and subsumes the two arXiv manuscripts
arXiv:1503.04843 and arXiv:1504.05800
| null | null | null |
cs.LG cs.CR cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adaptivity is an important feature of data analysis---the choice of questions
to ask about a dataset often depends on previous interactions with the same
dataset. However, statistical validity is typically studied in a nonadaptive
model, where all questions are specified before the dataset is drawn. Recent
work by Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014) initiated
the formal study of this problem, and gave the first upper and lower bounds on
the achievable generalization error for adaptive data analysis.
Specifically, suppose there is an unknown distribution $\mathbf{P}$ and a set
of $n$ independent samples $\mathbf{x}$ is drawn from $\mathbf{P}$. We seek an
algorithm that, given $\mathbf{x}$ as input, accurately answers a sequence of
adaptively chosen queries about the unknown distribution $\mathbf{P}$. How many
samples $n$ must we draw from the distribution, as a function of the type of
queries, the number of queries, and the desired level of accuracy?
In this work we make two new contributions:
(i) We give upper bounds on the number of samples $n$ that are needed to
answer statistical queries. The bounds improve and simplify the work of Dwork
et al. (STOC, 2015), and have been applied in subsequent work by those authors
(Science, 2015, NIPS, 2015).
(ii) We prove the first upper bounds on the number of samples required to
answer more general families of queries. These include arbitrary
low-sensitivity queries and an important class of optimization queries.
As in Dwork et al., our algorithms are based on a connection with algorithmic
stability in the form of differential privacy. We extend their work by giving a
quantitatively optimal, more general, and simpler proof of their main theorem
that stability implies low generalization error. We also study weaker stability
guarantees such as bounded KL divergence and total variation distance.
|
[
{
"version": "v1",
"created": "Sun, 8 Nov 2015 18:26:50 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Bassily",
"Raef",
""
],
[
"Nissim",
"Kobbi",
""
],
[
"Smith",
"Adam",
""
],
[
"Steinke",
"Thomas",
""
],
[
"Stemmer",
"Uri",
""
],
[
"Ullman",
"Jonathan",
""
]
] |
TITLE: Algorithmic Stability for Adaptive Data Analysis
ABSTRACT: Adaptivity is an important feature of data analysis---the choice of questions
to ask about a dataset often depends on previous interactions with the same
dataset. However, statistical validity is typically studied in a nonadaptive
model, where all questions are specified before the dataset is drawn. Recent
work by Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014) initiated
the formal study of this problem, and gave the first upper and lower bounds on
the achievable generalization error for adaptive data analysis.
Specifically, suppose there is an unknown distribution $\mathbf{P}$ and a set
of $n$ independent samples $\mathbf{x}$ is drawn from $\mathbf{P}$. We seek an
algorithm that, given $\mathbf{x}$ as input, accurately answers a sequence of
adaptively chosen queries about the unknown distribution $\mathbf{P}$. How many
samples $n$ must we draw from the distribution, as a function of the type of
queries, the number of queries, and the desired level of accuracy?
In this work we make two new contributions:
(i) We give upper bounds on the number of samples $n$ that are needed to
answer statistical queries. The bounds improve and simplify the work of Dwork
et al. (STOC, 2015), and have been applied in subsequent work by those authors
(Science, 2015, NIPS, 2015).
(ii) We prove the first upper bounds on the number of samples required to
answer more general families of queries. These include arbitrary
low-sensitivity queries and an important class of optimization queries.
As in Dwork et al., our algorithms are based on a connection with algorithmic
stability in the form of differential privacy. We extend their work by giving a
quantitatively optimal, more general, and simpler proof of their main theorem
that stability implies low generalization error. We also study weaker stability
guarantees such as bounded KL divergence and total variation distance.
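The stability-implies-generalization connection above runs through differential privacy. A minimal sketch of the classic Laplace mechanism for answering one statistical query privately; this is a textbook illustration of the stability notion involved, not the paper's actual algorithms or parameters:

```python
import numpy as np

def answer_statistical_query(sample, query, epsilon, rng=None):
    """Answer a statistical query q: X -> [0, 1] on a sample with the
    Laplace mechanism. The empirical mean has sensitivity 1/n, so
    adding Laplace noise of scale 1/(n * epsilon) makes the answer
    epsilon-differentially private -- the algorithmic-stability
    guarantee that the generalization bounds build on."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(sample)
    true_answer = np.mean([query(x) for x in sample])
    noise = rng.laplace(loc=0.0, scale=1.0 / (n * epsilon))
    return true_answer + noise
```

Because the noisy answer depends only weakly on any single sample point, an adaptive analyst who chooses the next query based on previous answers cannot easily overfit the dataset.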
|
1511.02583
|
Yong-Sheng Chen
|
Jia-Ren Chang and Yong-Sheng Chen
|
Batch-normalized Maxout Network in Network
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reports a novel deep architecture referred to as Maxout network In
Network (MIN), which can enhance model discriminability and facilitate the
process of information abstraction within the receptive field. The proposed
network adopts the framework of the recently developed Network In Network
structure, which slides a universal approximator, multilayer perceptron (MLP)
with rectifier units, to extract features. Instead of MLP, we employ maxout MLP
to learn a variety of piecewise linear activation functions and to alleviate the
problem of vanishing gradients that can occur when using rectifier units.
Moreover, batch normalization is applied to reduce the saturation of maxout
units by pre-conditioning the model and dropout is applied to prevent
overfitting. Finally, average pooling is used in all pooling layers to
regularize maxout MLP in order to facilitate information abstraction in every
receptive field while tolerating the change of object position. Because average
pooling preserves all features in the local patch, the proposed MIN model can
enforce the suppression of irrelevant information during training. Our
experiments demonstrated the state-of-the-art classification performance when
the MIN model was applied to MNIST, CIFAR-10, and CIFAR-100 datasets and
comparable performance for SVHN dataset.
|
[
{
"version": "v1",
"created": "Mon, 9 Nov 2015 07:09:57 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Chang",
"Jia-Ren",
""
],
[
"Chen",
"Yong-Sheng",
""
]
] |
TITLE: Batch-normalized Maxout Network in Network
ABSTRACT: This paper reports a novel deep architecture referred to as Maxout network In
Network (MIN), which can enhance model discriminability and facilitate the
process of information abstraction within the receptive field. The proposed
network adopts the framework of the recently developed Network In Network
structure, which slides a universal approximator, multilayer perceptron (MLP)
with rectifier units, to extract features. Instead of MLP, we employ maxout MLP
to learn a variety of piecewise linear activation functions and to alleviate the
problem of vanishing gradients that can occur when using rectifier units.
Moreover, batch normalization is applied to reduce the saturation of maxout
units by pre-conditioning the model and dropout is applied to prevent
overfitting. Finally, average pooling is used in all pooling layers to
regularize maxout MLP in order to facilitate information abstraction in every
receptive field while tolerating the change of object position. Because average
pooling preserves all features in the local patch, the proposed MIN model can
enforce the suppression of irrelevant information during training. Our
experiments demonstrated the state-of-the-art classification performance when
the MIN model was applied to MNIST, CIFAR-10, and CIFAR-100 datasets and
comparable performance for SVHN dataset.
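The two ingredients combined above can be sketched in a few lines. This is an illustrative NumPy sketch of a maxout unit followed by per-batch normalization, not the paper's full MIN architecture (no learned batch-norm scale/shift, no convolutional sliding):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature over the batch (training-mode sketch,
    without the learned scale and shift parameters)."""
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def maxout(x, weights, biases):
    """Maxout unit: the max over k affine pieces, which yields a
    learned piecewise-linear activation with no saturating region.
    weights has shape (k, d_in, d_out), biases (k, d_out)."""
    pieces = np.einsum('bi,kio->bko', x, weights) + biases  # (batch, k, d_out)
    return pieces.max(axis=1)
```

Maxout avoids the dead zones of rectifier units, while batch normalization keeps the pre-activations well conditioned, which is why the abstract pairs the two.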
|
1511.02682
|
Gedas Bertasius
|
Gedas Bertasius, Hyun Soo Park, Jianbo Shi
|
Exploiting Egocentric Object Prior for 3D Saliency Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
On a minute-to-minute basis people undergo numerous fluid interactions with
objects that barely register on a conscious level. Recent neuroscientific
research demonstrates that humans have a fixed size prior for salient objects.
This suggests that a salient object in 3D undergoes a consistent transformation
such that people's visual system perceives it with an approximately fixed size.
This finding indicates that there exists a consistent egocentric object prior
that can be characterized by shape, size, depth, and location in the first
person view.
In this paper, we develop an EgoObject Representation, which encodes these
characteristics by incorporating shape, location, size and depth features from
an egocentric RGBD image. We empirically show that this representation can
accurately characterize the egocentric object prior by testing it on an
egocentric RGBD dataset for three tasks: the 3D saliency detection, future
saliency prediction, and interaction classification. This representation is
evaluated on our new Egocentric RGBD Saliency dataset that includes various
activities such as cooking, dining, and shopping. By using our EgoObject
representation, we outperform previously proposed models for saliency detection
(relative 30% improvement for 3D saliency detection task) on our dataset.
Additionally, we demonstrate that this representation allows us to predict
future salient objects based on the gaze cue and classify people's interactions
with objects.
|
[
{
"version": "v1",
"created": "Mon, 9 Nov 2015 14:01:50 GMT"
}
] | 2015-11-10T00:00:00 |
[
[
"Bertasius",
"Gedas",
""
],
[
"Park",
"Hyun Soo",
""
],
[
"Shi",
"Jianbo",
""
]
] |
TITLE: Exploiting Egocentric Object Prior for 3D Saliency Detection
ABSTRACT: On a minute-to-minute basis people undergo numerous fluid interactions with
objects that barely register on a conscious level. Recent neuroscientific
research demonstrates that humans have a fixed size prior for salient objects.
This suggests that a salient object in 3D undergoes a consistent transformation
such that people's visual system perceives it with an approximately fixed size.
This finding indicates that there exists a consistent egocentric object prior
that can be characterized by shape, size, depth, and location in the first
person view.
In this paper, we develop an EgoObject Representation, which encodes these
characteristics by incorporating shape, location, size and depth features from
an egocentric RGBD image. We empirically show that this representation can
accurately characterize the egocentric object prior by testing it on an
egocentric RGBD dataset for three tasks: the 3D saliency detection, future
saliency prediction, and interaction classification. This representation is
evaluated on our new Egocentric RGBD Saliency dataset that includes various
activities such as cooking, dining, and shopping. By using our EgoObject
representation, we outperform previously proposed models for saliency detection
(relative 30% improvement for 3D saliency detection task) on our dataset.
Additionally, we demonstrate that this representation allows us to predict
future salient objects based on the gaze cue and classify people's interactions
with objects.
|
1511.02023
|
Mohammadamin Abbasnejad
|
Mohammadamin Abbasnejad, Mohammad Ali Masnadi-Shirazi
|
Facial Expression Recognition Using Sparse Gaussian Conditional Random
Field
|
http://waset.org/abstracts/computer-and-information-engineering/26245. arXiv
admin note: text overlap with arXiv:1509.01343 by other authors
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The analysis of expression and facial Action Units (AUs) detection are very
important tasks in fields of computer vision and Human Computer Interaction
(HCI) due to the wide range of applications in human life. Many works have been
done during the past few years, each with its own advantages and
disadvantages. In this work we present a new model based on Gaussian
Conditional Random Field. We solve our objective problem using ADMM and we show
how well the proposed model works. We train and test our work on two facial
expression datasets, CK+ and RU-FACS. Experimental evaluation shows that our
proposed approach outperforms state-of-the-art expression recognition methods.
|
[
{
"version": "v1",
"created": "Fri, 6 Nov 2015 10:29:09 GMT"
}
] | 2015-11-09T00:00:00 |
[
[
"Abbasnejad",
"Mohammadamin",
""
],
[
"Masnadi-Shirazi",
"Mohammad Ali",
""
]
] |
TITLE: Facial Expression Recognition Using Sparse Gaussian Conditional Random
Field
ABSTRACT: The analysis of expression and facial Action Units (AUs) detection are very
important tasks in fields of computer vision and Human Computer Interaction
(HCI) due to the wide range of applications in human life. Many works have been
done during the past few years, each with its own advantages and
disadvantages. In this work we present a new model based on Gaussian
Conditional Random Field. We solve our objective problem using ADMM and we show
how well the proposed model works. We train and test our work on two facial
expression datasets, CK+ and RU-FACS. Experimental evaluation shows that our
proposed approach outperforms state-of-the-art expression recognition methods.
|
1511.02058
|
Hung-Hsuan Chen
|
Hung-Hsuan Chen, Alexander G. Ororbia II, C. Lee Giles
|
ExpertSeer: a Keyphrase Based Expert Recommender for Digital Libraries
| null | null | null | null |
cs.DL cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
We describe ExpertSeer, a generic framework for expert recommendation based
on the contents of a digital library. Given a query term q, ExpertSeer
recommends experts of q by retrieving authors who published relevant papers
determined by related keyphrases and the quality of papers. The system is based
on a simple yet effective keyphrase extractor and the Bayes' rule for expert
recommendation. ExpertSeer is domain independent and can be applied to
different disciplines and applications since the system is automated and not
tailored to a specific discipline. Digital library providers can employ the
system to enrich their services and organizations can discover experts of
interest within an organization. To demonstrate the power of ExpertSeer, we
apply the framework to build two expert recommender systems. The first, CSSeer,
utilizes the CiteSeerX digital library to recommend experts primarily in
computer science. The second, ChemSeer, uses publicly available documents from
the Royal Society of Chemistry (RSC) to recommend experts in chemistry. Using
one thousand computer science terms as benchmark queries, we compared the top-n
experts (n=3, 5, 10) returned by CSSeer to two other expert recommenders --
Microsoft Academic Search and ArnetMiner -- and a simulator that imitates the
ranking function of Google Scholar. Although CSSeer, Microsoft Academic Search,
and ArnetMiner mostly return prestigious researchers who published several
papers related to the query term, it was found that different expert
recommenders return moderately different recommendations. To further study
their performance, we obtained a widely used benchmark dataset as the ground
truth for comparison. The results show that our system outperforms Microsoft
Academic Search and ArnetMiner in terms of Precision-at-k (P@k) for k=3, 5, 10.
We also conducted several case studies to validate the usefulness of our
system.
|
[
{
"version": "v1",
"created": "Fri, 6 Nov 2015 12:55:17 GMT"
}
] | 2015-11-09T00:00:00 |
[
[
"Chen",
"Hung-Hsuan",
""
],
[
"Ororbia",
"Alexander G.",
"II"
],
[
"Giles",
"C. Lee",
""
]
] |
TITLE: ExpertSeer: a Keyphrase Based Expert Recommender for Digital Libraries
ABSTRACT: We describe ExpertSeer, a generic framework for expert recommendation based
on the contents of a digital library. Given a query term q, ExpertSeer
recommends experts of q by retrieving authors who published relevant papers
determined by related keyphrases and the quality of papers. The system is based
on a simple yet effective keyphrase extractor and the Bayes' rule for expert
recommendation. ExpertSeer is domain independent and can be applied to
different disciplines and applications since the system is automated and not
tailored to a specific discipline. Digital library providers can employ the
system to enrich their services and organizations can discover experts of
interest within an organization. To demonstrate the power of ExpertSeer, we
apply the framework to build two expert recommender systems. The first, CSSeer,
utilizes the CiteSeerX digital library to recommend experts primarily in
computer science. The second, ChemSeer, uses publicly available documents from
the Royal Society of Chemistry (RSC) to recommend experts in chemistry. Using
one thousand computer science terms as benchmark queries, we compared the top-n
experts (n=3, 5, 10) returned by CSSeer to two other expert recommenders --
Microsoft Academic Search and ArnetMiner -- and a simulator that imitates the
ranking function of Google Scholar. Although CSSeer, Microsoft Academic Search,
and ArnetMiner mostly return prestigious researchers who published several
papers related to the query term, it was found that different expert
recommenders return moderately different recommendations. To further study
their performance, we obtained a widely used benchmark dataset as the ground
truth for comparison. The results show that our system outperforms Microsoft
Academic Search and ArnetMiner in terms of Precision-at-k (P@k) for k=3, 5, 10.
We also conducted several case studies to validate the usefulness of our
system.
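The recommendation step described above, ranking authors for a query keyphrase with Bayes' rule, can be sketched as follows. The keyphrase-frequency estimates of P(q|author) and P(author) are illustrative assumptions, not ExpertSeer's exact model:

```python
from collections import Counter

def rank_experts(author_keyphrases, query, top_n=3):
    """Rank authors for a query keyphrase q with Bayes' rule:
        P(author | q) proportional to P(q | author) * P(author),
    estimating P(q | author) as the fraction of the author's extracted
    keyphrases equal to q, and P(author) as the author's share of all
    keyphrases in the collection."""
    total = sum(len(v) for v in author_keyphrases.values())
    scores = {}
    for author, phrases in author_keyphrases.items():
        counts = Counter(phrases)
        p_q_given_a = counts[query] / len(phrases)
        p_a = len(phrases) / total
        scores[author] = p_q_given_a * p_a
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

In a real system the keyphrases would come from the extractor run over each author's papers, weighted by paper quality as the abstract notes.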
|
1511.02126
|
Yahong Han
|
Shichao Zhao, Yanbin Liu, Yahong Han, Richang Hong
|
Pooling the Convolutional Layers in Deep ConvNets for Action Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep ConvNets have shown good performance in image classification tasks.
However, deep video representation for action recognition remains a problem.
The problem comes from two aspects: on one hand, current video
ConvNets are relatively shallow compared with image ConvNets, which limits their
capability of capturing the complex video action information; on the other
hand, temporal information of videos is not properly utilized to pool and
encode the video sequences. Towards these issues, in this paper, we utilize two
state-of-the-art ConvNets, i.e., the very deep spatial net (VGGNet) and the
temporal net from Two-Stream ConvNets, for action representation. The
convolutional layers and the proposed new layer, called frame-diff layer, are
extracted and pooled with two temporal pooling strategies: trajectory pooling and
line pooling. The pooled local descriptors are then encoded with VLAD to form
the video representations. In order to verify the effectiveness of the proposed
framework, we conduct experiments on UCF101 and HMDB51 datasets. It achieves
the accuracy of 93.78\% on UCF101 which is the state-of-the-art and the
accuracy of 65.62\% on HMDB51 which is comparable to the state-of-the-art.
|
[
{
"version": "v1",
"created": "Fri, 6 Nov 2015 15:51:07 GMT"
}
] | 2015-11-09T00:00:00 |
[
[
"Zhao",
"Shichao",
""
],
[
"Liu",
"Yanbin",
""
],
[
"Han",
"Yahong",
""
],
[
"Hong",
"Richang",
""
]
] |
TITLE: Pooling the Convolutional Layers in Deep ConvNets for Action Recognition
ABSTRACT: Deep ConvNets have shown good performance in image classification tasks.
However, deep video representation for action recognition remains a problem.
The problem comes from two aspects: on one hand, current video ConvNets are
relatively shallow compared with image ConvNets, which limits their capability
of capturing complex video action information; on the other hand, the temporal
information of videos is not properly utilized to pool and encode the video
sequences. To address these issues, in this paper we utilize two
state-of-the-art ConvNets, i.e., the very deep spatial net (VGGNet) and the
temporal net from Two-Stream ConvNets, for action representation. The
convolutional layers and the proposed new layer, called the frame-diff layer,
are extracted and pooled with two temporal pooling strategies: trajectory
pooling and line pooling. The pooled local descriptors are then encoded with
VLAD to form the video representations. To verify the effectiveness of the
proposed framework, we conduct experiments on the UCF101 and HMDB51 datasets.
Our method achieves an accuracy of 93.78\% on UCF101, which is the
state-of-the-art, and an accuracy of 65.62\% on HMDB51, which is comparable to
the state-of-the-art.
|
1511.02196
|
Haohan Wang
|
Haohan Wang, Madhavi K. Ganapathiraju
|
Evaluating Protein-protein Interaction Predictors with a Novel
3-Dimensional Metric
|
This article is an extended version of a poster presented in AMIA TBI
2015
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order for the predicted interactions to be directly adopted by biologists,
the machine learning predictions have to be of high precision, regardless of
recall. This aspect cannot be evaluated or numerically represented well by
traditional metrics such as accuracy, the ROC curve, or the precision-recall
curve. In this work, we start from the alignment between the sensitivity of the
ROC curve and the recall of the precision-recall curve, and propose an
evaluation metric focusing on the ability of a model to be adopted by
biologists. This metric evaluates the ability of a machine learning algorithm
to predict only new interactions, while eliminating the influence of the test
dataset. In experiments evaluating different classifiers on the same dataset
and evaluating the same predictor on different datasets, our new metric
fulfills the evaluation task of interest, while two widely recognized metrics,
the ROC curve and the precision-recall curve, fail the tasks for different
reasons.
|
[
{
"version": "v1",
"created": "Fri, 6 Nov 2015 19:14:09 GMT"
}
] | 2015-11-09T00:00:00 |
[
[
"Wang",
"Haohan",
""
],
[
"Ganapathiraju",
"Madhavi K.",
""
]
] |
TITLE: Evaluating Protein-protein Interaction Predictors with a Novel
3-Dimensional Metric
ABSTRACT: In order for the predicted interactions to be directly adopted by biologists,
the machine learning predictions have to be of high precision, regardless of
recall. This aspect cannot be evaluated or numerically represented well by
traditional metrics such as accuracy, the ROC curve, or the precision-recall
curve. In this work, we start from the alignment between the sensitivity of the
ROC curve and the recall of the precision-recall curve, and propose an
evaluation metric focusing on the ability of a model to be adopted by
biologists. This metric evaluates the ability of a machine learning algorithm
to predict only new interactions, while eliminating the influence of the test
dataset. In experiments evaluating different classifiers on the same dataset
and evaluating the same predictor on different datasets, our new metric
fulfills the evaluation task of interest, while two widely recognized metrics,
the ROC curve and the precision-recall curve, fail the tasks for different
reasons.
|
1511.02222
|
Andrew Wilson
|
Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, Eric P. Xing
|
Deep Kernel Learning
|
19 pages, 6 figures
| null | null | null |
cs.LG cs.AI stat.ME stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce scalable deep kernels, which combine the structural properties
of deep learning architectures with the non-parametric flexibility of kernel
methods. Specifically, we transform the inputs of a spectral mixture base
kernel with a deep architecture, using local kernel interpolation, inducing
points, and structure-exploiting (Kronecker and Toeplitz) algebra for a
scalable kernel representation. These closed-form kernels can be used as
drop-in replacements for standard kernels, with benefits in expressive power
and scalability. We jointly learn the properties of these kernels through the
marginal likelihood of a Gaussian process. Inference and learning cost $O(n)$
for $n$ training points, and predictions cost $O(1)$ per test point. On a large
and diverse collection of applications, including a dataset with 2 million
examples, we show improved performance over scalable Gaussian processes with
flexible kernel learning models, and stand-alone deep architectures.
|
[
{
"version": "v1",
"created": "Fri, 6 Nov 2015 20:38:08 GMT"
}
] | 2015-11-09T00:00:00 |
[
[
"Wilson",
"Andrew Gordon",
""
],
[
"Hu",
"Zhiting",
""
],
[
"Salakhutdinov",
"Ruslan",
""
],
[
"Xing",
"Eric P.",
""
]
] |
TITLE: Deep Kernel Learning
ABSTRACT: We introduce scalable deep kernels, which combine the structural properties
of deep learning architectures with the non-parametric flexibility of kernel
methods. Specifically, we transform the inputs of a spectral mixture base
kernel with a deep architecture, using local kernel interpolation, inducing
points, and structure-exploiting (Kronecker and Toeplitz) algebra for a
scalable kernel representation. These closed-form kernels can be used as
drop-in replacements for standard kernels, with benefits in expressive power
and scalability. We jointly learn the properties of these kernels through the
marginal likelihood of a Gaussian process. Inference and learning cost $O(n)$
for $n$ training points, and predictions cost $O(1)$ per test point. On a large
and diverse collection of applications, including a dataset with 2 million
examples, we show improved performance over scalable Gaussian processes with
flexible kernel learning models, and stand-alone deep architectures.
|
1507.01784
|
Anastasia Podosinnikova
|
Anastasia Podosinnikova, Francis Bach, and Simon Lacoste-Julien
|
Rethinking LDA: moment matching for discrete ICA
|
30 pages; added plate diagrams and clarifications, changed style,
corrected typos, updated figures. in Proceedings of the 29-th Conference on
Neural Information Processing Systems (NIPS), 2015
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider moment matching techniques for estimation in Latent Dirichlet
Allocation (LDA). By drawing explicit links between LDA and discrete versions
of independent component analysis (ICA), we first derive a new set of
cumulant-based tensors, with an improved sample complexity. Moreover, we reuse
standard ICA techniques such as joint diagonalization of tensors to improve
over existing methods based on the tensor power method. In an extensive set of
experiments on both synthetic and real datasets, we show that our new
combination of tensors and orthogonal joint diagonalization techniques
outperforms existing moment matching methods.
|
[
{
"version": "v1",
"created": "Tue, 7 Jul 2015 12:48:30 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Nov 2015 20:16:04 GMT"
}
] | 2015-11-06T00:00:00 |
[
[
"Podosinnikova",
"Anastasia",
""
],
[
"Bach",
"Francis",
""
],
[
"Lacoste-Julien",
"Simon",
""
]
] |
TITLE: Rethinking LDA: moment matching for discrete ICA
ABSTRACT: We consider moment matching techniques for estimation in Latent Dirichlet
Allocation (LDA). By drawing explicit links between LDA and discrete versions
of independent component analysis (ICA), we first derive a new set of
cumulant-based tensors, with an improved sample complexity. Moreover, we reuse
standard ICA techniques such as joint diagonalization of tensors to improve
over existing methods based on the tensor power method. In an extensive set of
experiments on both synthetic and real datasets, we show that our new
combination of tensors and orthogonal joint diagonalization techniques
outperforms existing moment matching methods.
|
1511.01764
|
Meisam Razaviyayn
|
Meisam Razaviyayn, Farzan Farnia, David Tse
|
Discrete R\'enyi Classifiers
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consider the binary classification problem of predicting a target variable
$Y$ from a discrete feature vector $X = (X_1,...,X_d)$. When the probability
distribution $\mathbb{P}(X,Y)$ is known, the optimal classifier, leading to the
minimum misclassification rate, is given by the Maximum A-posteriori
Probability decision rule. However, estimating the complete joint distribution
$\mathbb{P}(X,Y)$ is computationally and statistically impossible for large
values of $d$. An alternative approach is to first estimate some low order
marginals of $\mathbb{P}(X,Y)$ and then design the classifier based on the
estimated low order marginals. This approach is also helpful when the complete
training data instances are not available due to privacy concerns. In this
work, we consider the problem of finding the optimum classifier based on some
estimated low order marginals of $(X,Y)$. We prove that for a given set of
marginals, the minimum Hirschfeld-Gebelein-Renyi (HGR) correlation principle
introduced in [1] leads to a randomized classification rule which is shown to
have a misclassification rate no larger than twice the misclassification rate
of the optimal classifier. Then, under a separability condition, we show that
the proposed algorithm is equivalent to a randomized linear regression
approach. In addition, this method naturally results in a robust feature
selection method selecting a subset of features having the maximum worst case
HGR correlation with the target variable. Our theoretical upper bound is
similar to that of the recent Discrete Chebyshev Classifier (DCC) approach [2],
while the proposed algorithm has significant computational advantages since it
only requires solving a least-squares optimization problem. Finally, we
numerically compare our proposed algorithm with the DCC classifier and show
that it achieves a better misclassification rate on various datasets.
|
[
{
"version": "v1",
"created": "Thu, 5 Nov 2015 14:47:04 GMT"
}
] | 2015-11-06T00:00:00 |
[
[
"Razaviyayn",
"Meisam",
""
],
[
"Farnia",
"Farzan",
""
],
[
"Tse",
"David",
""
]
] |
TITLE: Discrete R\'enyi Classifiers
ABSTRACT: Consider the binary classification problem of predicting a target variable
$Y$ from a discrete feature vector $X = (X_1,...,X_d)$. When the probability
distribution $\mathbb{P}(X,Y)$ is known, the optimal classifier, leading to the
minimum misclassification rate, is given by the Maximum A-posteriori
Probability decision rule. However, estimating the complete joint distribution
$\mathbb{P}(X,Y)$ is computationally and statistically impossible for large
values of $d$. An alternative approach is to first estimate some low order
marginals of $\mathbb{P}(X,Y)$ and then design the classifier based on the
estimated low order marginals. This approach is also helpful when the complete
training data instances are not available due to privacy concerns. In this
work, we consider the problem of finding the optimum classifier based on some
estimated low order marginals of $(X,Y)$. We prove that for a given set of
marginals, the minimum Hirschfeld-Gebelein-Renyi (HGR) correlation principle
introduced in [1] leads to a randomized classification rule which is shown to
have a misclassification rate no larger than twice the misclassification rate
of the optimal classifier. Then, under a separability condition, we show that
the proposed algorithm is equivalent to a randomized linear regression
approach. In addition, this method naturally results in a robust feature
selection method selecting a subset of features having the maximum worst case
HGR correlation with the target variable. Our theoretical upper bound is
similar to that of the recent Discrete Chebyshev Classifier (DCC) approach [2],
while the proposed algorithm has significant computational advantages since it
only requires solving a least-squares optimization problem. Finally, we
numerically compare our proposed algorithm with the DCC classifier and show
that it achieves a better misclassification rate on various datasets.
|
1410.5919
|
Yonghui Xiao
|
Yonghui Xiao and Li Xiong
|
Protecting Locations with Differential Privacy under Temporal
Correlations
|
Final version Nov-04-2015
| null |
10.1145/2810103.2813640
| null |
cs.DB cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concerns about location privacy frequently arise with the rapid development of
GPS-enabled devices and location-based applications. While spatial
transformation techniques such as location perturbation or generalization have
been studied extensively, most techniques rely on syntactic privacy models
without rigorous privacy guarantees. Many of them only consider static scenarios
or perturb the location at single timestamps without considering temporal
correlations of a moving user's locations, and hence are vulnerable to various
inference attacks. While differential privacy has been accepted as a standard
for privacy protection, applying differential privacy in location based
applications presents new challenges, as the protection needs to be enforced on
the fly for a single user and needs to incorporate temporal correlations
between a user's locations.
In this paper, we propose a systematic solution to preserve location privacy
with rigorous privacy guarantees. First, we propose a new definition,
"$\delta$-location set" based differential privacy, to account for the temporal
correlations in location data. Second, we show that the well-known
$\ell_1$-norm sensitivity fails to capture the geometric sensitivity in
multidimensional space and propose a new notion, sensitivity hull, based on
which the error of differential privacy is bounded. Third, to obtain the
optimal utility we present a planar isotropic mechanism (PIM) for location
perturbation, which is the first mechanism achieving the lower bound of
differential privacy. Experiments on real-world datasets also demonstrate that
PIM significantly outperforms baseline approaches in data utility.
|
[
{
"version": "v1",
"created": "Wed, 22 Oct 2014 05:23:04 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Feb 2015 17:24:37 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Aug 2015 20:11:43 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Oct 2015 18:57:05 GMT"
},
{
"version": "v5",
"created": "Wed, 4 Nov 2015 16:36:51 GMT"
}
] | 2015-11-05T00:00:00 |
[
[
"Xiao",
"Yonghui",
""
],
[
"Xiong",
"Li",
""
]
] |
TITLE: Protecting Locations with Differential Privacy under Temporal
Correlations
ABSTRACT: Concerns about location privacy frequently arise with the rapid development of
GPS-enabled devices and location-based applications. While spatial
transformation techniques such as location perturbation or generalization have
been studied extensively, most techniques rely on syntactic privacy models
without rigorous privacy guarantees. Many of them only consider static scenarios
or perturb the location at single timestamps without considering temporal
correlations of a moving user's locations, and hence are vulnerable to various
inference attacks. While differential privacy has been accepted as a standard
for privacy protection, applying differential privacy in location based
applications presents new challenges, as the protection needs to be enforced on
the fly for a single user and needs to incorporate temporal correlations
between a user's locations.
In this paper, we propose a systematic solution to preserve location privacy
with rigorous privacy guarantees. First, we propose a new definition,
"$\delta$-location set" based differential privacy, to account for the temporal
correlations in location data. Second, we show that the well-known
$\ell_1$-norm sensitivity fails to capture the geometric sensitivity in
multidimensional space and propose a new notion, sensitivity hull, based on
which the error of differential privacy is bounded. Third, to obtain the
optimal utility we present a planar isotropic mechanism (PIM) for location
perturbation, which is the first mechanism achieving the lower bound of
differential privacy. Experiments on real-world datasets also demonstrate that
PIM significantly outperforms baseline approaches in data utility.
|
1509.05382
|
Symeon Meichanetzoglou
|
Symeon Meichanetzoglou, Sotiris Ioannidis, Nikolaos Laoutaris
|
Testing for common sense (violation) in airline pricing or how
complexity asymmetry defeated you and the web
|
8 pages, 13 figures
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have collected and analysed prices for more than 1.4 million flight
tickets involving 63 destinations and 125 airlines and have found that common
sense violations, i.e., discrepancies between what consumers would expect and
what truly holds for those prices, are far more frequent than one would think.
For example, oftentimes the price of a single-leg flight is higher than that of
two-leg flights that include it under similar terms of travel (class, luggage
allowance, etc.). This happened for up to 24.5% of available fares on a
specific route in our dataset, invalidating the common expectation that
"further is more expensive". Likewise, we found several two-leg fares where
buying each leg independently leads to a lower overall cost than buying them
together as a single ticket. This happened for up to 37% of available fares on
a specific route, invalidating the common expectation that "bundling saves
money". Last, several single-stop tickets in which the two legs were separated
by 1-5 days (called multicity fares) were oftentimes found to cost more than
corresponding back-to-back fares with a small transit time. This occurred in up
to 7.5% of fares on a specific route, invalidating the expectation that "a
short transit is better than a longer one".
|
[
{
"version": "v1",
"created": "Thu, 17 Sep 2015 19:23:14 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Sep 2015 04:08:39 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Nov 2015 07:30:45 GMT"
},
{
"version": "v4",
"created": "Wed, 4 Nov 2015 19:52:57 GMT"
}
] | 2015-11-05T00:00:00 |
[
[
"Meichanetzoglou",
"Symeon",
""
],
[
"Ioannidis",
"Sotiris",
""
],
[
"Laoutaris",
"Nikolaos",
""
]
] |
TITLE: Testing for common sense (violation) in airline pricing or how
complexity asymmetry defeated you and the web
ABSTRACT: We have collected and analysed prices for more than 1.4 million flight
tickets involving 63 destinations and 125 airlines and have found that common
sense violations, i.e., discrepancies between what consumers would expect and
what truly holds for those prices, are far more frequent than one would think.
For example, oftentimes the price of a single-leg flight is higher than that of
two-leg flights that include it under similar terms of travel (class, luggage
allowance, etc.). This happened for up to 24.5% of available fares on a
specific route in our dataset, invalidating the common expectation that
"further is more expensive". Likewise, we found several two-leg fares where
buying each leg independently leads to a lower overall cost than buying them
together as a single ticket. This happened for up to 37% of available fares on
a specific route, invalidating the common expectation that "bundling saves
money". Last, several single-stop tickets in which the two legs were separated
by 1-5 days (called multicity fares) were oftentimes found to cost more than
corresponding back-to-back fares with a small transit time. This occurred in up
to 7.5% of fares on a specific route, invalidating the expectation that "a
short transit is better than a longer one".
|
1511.01282
|
Phong Nguyen
|
Phong Nguyen and Jun Wang and Alexandros Kalousis
|
Factorizing LambdaMART for cold start recommendations
| null | null | null | null |
cs.LG cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommendation systems often rely on point-wise loss metrics such as the mean
squared error. However, in real recommendation settings only a few items are
presented to a user. This observation has recently encouraged the use of
rank-based metrics. LambdaMART is the state-of-the-art algorithm in learning to
rank that relies on such a metric. Despite its success, it does not have a
principled regularization mechanism, relying instead on empirical approaches to
control model complexity, which leaves it prone to overfitting.
Motivated by the fact that very often the users' and items' descriptions as
well as the preference behavior can be well summarized by a small number of
hidden factors, we propose a novel algorithm, LambdaMART Matrix Factorization
(LambdaMART-MF), that learns a low-rank latent representation of users and
items using gradient boosted trees. The algorithm factorizes LambdaMART by
defining relevance scores as the inner product of the learned representations
of the users and items. The low rank essentially acts as a model complexity
controller; on top of it, we propose additional regularizers to constrain the
learned latent representations so that they reflect the user and item
manifolds as defined by their original feature-based descriptors and the
preference behavior. Finally, we also propose to use a weighted variant of
NDCG to reduce the penalty for similar items with large rating discrepancies.
We experiment on two very different recommendation datasets, meta-mining and
movies-users, and evaluate the performance of LambdaMART-MF, with and without
regularization, in the cold start setting as well as in the simpler matrix
completion setting. In both cases it significantly outperforms current
state-of-the-art algorithms.
|
[
{
"version": "v1",
"created": "Wed, 4 Nov 2015 10:49:15 GMT"
}
] | 2015-11-05T00:00:00 |
[
[
"Nguyen",
"Phong",
""
],
[
"Wang",
"Jun",
""
],
[
"Kalousis",
"Alexandros",
""
]
] |
TITLE: Factorizing LambdaMART for cold start recommendations
ABSTRACT: Recommendation systems often rely on point-wise loss metrics such as the mean
squared error. However, in real recommendation settings only a few items are
presented to a user. This observation has recently encouraged the use of
rank-based metrics. LambdaMART is the state-of-the-art algorithm in learning to
rank that relies on such a metric. Despite its success, it does not have a
principled regularization mechanism, relying instead on empirical approaches to
control model complexity, which leaves it prone to overfitting.
Motivated by the fact that very often the users' and items' descriptions as
well as the preference behavior can be well summarized by a small number of
hidden factors, we propose a novel algorithm, LambdaMART Matrix Factorization
(LambdaMART-MF), that learns a low-rank latent representation of users and
items using gradient boosted trees. The algorithm factorizes LambdaMART by
defining relevance scores as the inner product of the learned representations
of the users and items. The low rank essentially acts as a model complexity
controller; on top of it, we propose additional regularizers to constrain the
learned latent representations so that they reflect the user and item
manifolds as defined by their original feature-based descriptors and the
preference behavior. Finally, we also propose to use a weighted variant of
NDCG to reduce the penalty for similar items with large rating discrepancies.
We experiment on two very different recommendation datasets, meta-mining and
movies-users, and evaluate the performance of LambdaMART-MF, with and without
regularization, in the cold start setting as well as in the simpler matrix
completion setting. In both cases it significantly outperforms current
state-of-the-art algorithms.
|
1505.05612
|
Junhua Mao
|
Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu
|
Are You Talking to a Machine? Dataset and Methods for Multilingual Image
Question Answering
|
Dataset released on the project page, see
http://idl.baidu.com/FM-IQA.html ; NIPS 2015 camera ready version
| null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present the mQA model, which is able to answer questions
about the content of an image. The answer can be a sentence, a phrase or a
single word. Our model contains four components: a Long Short-Term Memory
(LSTM) to extract the question representation, a Convolutional Neural Network
(CNN) to extract the visual representation, an LSTM for storing the linguistic
context in an answer, and a fusing component to combine the information from
the first three components and generate the answer. We construct a Freestyle
Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate
our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese
question-answer pairs and their English translations. The quality of the
generated answers of our mQA model on this dataset is evaluated by human judges
through a Turing test. Specifically, we mix the answers provided by humans and
our model. The human judges need to distinguish our model from the humans. They
also provide a score (i.e., 0, 1, or 2; the larger the better) indicating the
quality of the answer. We propose strategies to monitor the quality of this
evaluation process. The experiments show that in 64.7% of cases, the human
judges cannot distinguish our model from humans. The average score is 1.454
(1.918 for human). The details of this work, including the FM-IQA dataset, can
be found on the project page: http://idl.baidu.com/FM-IQA.html
|
[
{
"version": "v1",
"created": "Thu, 21 May 2015 06:09:36 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Oct 2015 07:45:46 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Nov 2015 21:12:15 GMT"
}
] | 2015-11-04T00:00:00 |
[
[
"Gao",
"Haoyuan",
""
],
[
"Mao",
"Junhua",
""
],
[
"Zhou",
"Jie",
""
],
[
"Huang",
"Zhiheng",
""
],
[
"Wang",
"Lei",
""
],
[
"Xu",
"Wei",
""
]
] |
TITLE: Are You Talking to a Machine? Dataset and Methods for Multilingual Image
Question Answering
ABSTRACT: In this paper, we present the mQA model, which is able to answer questions
about the content of an image. The answer can be a sentence, a phrase or a
single word. Our model contains four components: a Long Short-Term Memory
(LSTM) to extract the question representation, a Convolutional Neural Network
(CNN) to extract the visual representation, an LSTM for storing the linguistic
context in an answer, and a fusing component to combine the information from
the first three components and generate the answer. We construct a Freestyle
Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate
our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese
question-answer pairs and their English translations. The quality of the
generated answers of our mQA model on this dataset is evaluated by human judges
through a Turing test. Specifically, we mix the answers provided by humans and
our model. The human judges need to distinguish our model from the humans. They
also provide a score (i.e., 0, 1, or 2; the larger the better) indicating the
quality of the answer. We propose strategies to monitor the quality of this
evaluation process. The experiments show that in 64.7% of cases, the human
judges cannot distinguish our model from humans. The average score is 1.454
(1.918 for human). The details of this work, including the FM-IQA dataset, can
be found on the project page: http://idl.baidu.com/FM-IQA.html
|
1506.03504
|
Philip Bachman
|
Philip Bachman and Doina Precup
|
Data Generation as Sequential Decision Making
|
Accepted for publication at Advances in Neural Information Processing
Systems (NIPS) 2015
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We connect a broad class of generative models through their shared reliance
on sequential decision making. Motivated by this view, we develop extensions to
an existing model, and then explore the idea further in the context of data
imputation -- perhaps the simplest setting in which to investigate the relation
between unconditional and conditional generative modelling. We formulate data
imputation as an MDP and develop models capable of representing effective
policies for it. We construct the models using neural networks and train them
using a form of guided policy search. Our models generate predictions through
an iterative process of feedback and refinement. We show that this approach can
learn effective policies for imputation problems of varying difficulty and
across multiple datasets.
|
[
{
"version": "v1",
"created": "Wed, 10 Jun 2015 23:17:24 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Nov 2015 00:31:11 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Nov 2015 01:16:31 GMT"
}
] | 2015-11-04T00:00:00 |
[
[
"Bachman",
"Philip",
""
],
[
"Precup",
"Doina",
""
]
] |
TITLE: Data Generation as Sequential Decision Making
ABSTRACT: We connect a broad class of generative models through their shared reliance
on sequential decision making. Motivated by this view, we develop extensions to
an existing model, and then explore the idea further in the context of data
imputation -- perhaps the simplest setting in which to investigate the relation
between unconditional and conditional generative modelling. We formulate data
imputation as an MDP and develop models capable of representing effective
policies for it. We construct the models using neural networks and train them
using a form of guided policy search. Our models generate predictions through
an iterative process of feedback and refinement. We show that this approach can
learn effective policies for imputation problems of varying difficulty and
across multiple datasets.
|
1510.03753
|
Rudolf Kadlec
|
Rudolf Kadlec, Martin Schmid, Jan Kleindienst
|
Improved Deep Learning Baselines for Ubuntu Corpus Dialogs
|
Accepted to Machine Learning for SLU & Interaction NIPS 2015 Workshop
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents results of our experiments for the next utterance ranking
on the Ubuntu Dialog Corpus -- the largest publicly available multi-turn dialog
corpus. First, we use an in-house implementation of previously reported models
to do an independent evaluation using the same data. Second, we evaluate the
performances of various LSTMs, Bi-LSTMs and CNNs on the dataset. Third, we
create an ensemble by averaging predictions of multiple models. The ensemble
further improves the performance and it achieves a state-of-the-art result for
the next utterance ranking on this dataset. Finally, we discuss our future
plans using this corpus.
|
[
{
"version": "v1",
"created": "Tue, 13 Oct 2015 15:56:26 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Nov 2015 08:23:50 GMT"
}
] | 2015-11-04T00:00:00 |
[
[
"Kadlec",
"Rudolf",
""
],
[
"Schmid",
"Martin",
""
],
[
"Kleindienst",
"Jan",
""
]
] |
TITLE: Improved Deep Learning Baselines for Ubuntu Corpus Dialogs
ABSTRACT: This paper presents results of our experiments for the next utterance ranking
on the Ubuntu Dialog Corpus -- the largest publicly available multi-turn dialog
corpus. First, we use an in-house implementation of previously reported models
to do an independent evaluation using the same data. Second, we evaluate the
performances of various LSTMs, Bi-LSTMs and CNNs on the dataset. Third, we
create an ensemble by averaging predictions of multiple models. The ensemble
further improves the performance and it achieves a state-of-the-art result for
the next utterance ranking on this dataset. Finally, we discuss our future
plans using this corpus.
|
1510.05024
|
Patrick Huck
|
Patrick Huck, Anubhav Jain, Dan Gunter, Donald Winston, Kristin
Persson
|
A Community Contribution Framework for Sharing Materials Data with
Materials Project
|
7 pages, 3 figures, Proceedings of 2015 IEEE 11th International
Conference on eScience, to be published in IEEE Computer Society
| null |
10.1109/eScience.2015.75
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As scientific discovery becomes increasingly data-driven, software platforms
are needed to efficiently organize and disseminate data from disparate sources.
This is certainly the case in the field of materials science. For example,
Materials Project has generated computational data on over 60,000 chemical
compounds and has made that data available through a web portal and REST
interface. However, such portals must seek to incorporate community submissions
to expand the scope of scientific data sharing. In this paper, we describe
MPContribs, a computing/software infrastructure to integrate and organize
contributions of simulated or measured materials data from users. Our solution
supports complex submissions and provides interfaces that allow contributors to
share analyses and graphs. A RESTful API exposes mechanisms for book-keeping,
retrieval and aggregation of submitted entries, as well as persistent URIs or
DOIs that can be used to reference the data in publications. Our approach
isolates contributed data from a host project's quality-controlled core data
and yet enables analyses across the entire dataset, programmatically or through
customized web apps. We expect the developed framework to enhance collaborative
determination of material properties and to maximize the impact of each
contributor's dataset. In the long-term, MPContribs seeks to make Materials
Project an institutional, and thus community-wide, memory for computational and
experimental materials science.
|
[
{
"version": "v1",
"created": "Fri, 16 Oct 2015 21:01:50 GMT"
}
] | 2015-11-04T00:00:00 |
[
[
"Huck",
"Patrick",
""
],
[
"Jain",
"Anubhav",
""
],
[
"Gunter",
"Dan",
""
],
[
"Winston",
"Donald",
""
],
[
"Persson",
"Kristin",
""
]
] |
TITLE: A Community Contribution Framework for Sharing Materials Data with
Materials Project
ABSTRACT: As scientific discovery becomes increasingly data-driven, software platforms
are needed to efficiently organize and disseminate data from disparate sources.
This is certainly the case in the field of materials science. For example,
Materials Project has generated computational data on over 60,000 chemical
compounds and has made that data available through a web portal and REST
interface. However, such portals must seek to incorporate community submissions
to expand the scope of scientific data sharing. In this paper, we describe
MPContribs, a computing/software infrastructure to integrate and organize
contributions of simulated or measured materials data from users. Our solution
supports complex submissions and provides interfaces that allow contributors to
share analyses and graphs. A RESTful API exposes mechanisms for book-keeping,
retrieval and aggregation of submitted entries, as well as persistent URIs or
DOIs that can be used to reference the data in publications. Our approach
isolates contributed data from a host project's quality-controlled core data
and yet enables analyses across the entire dataset, programmatically or through
customized web apps. We expect the developed framework to enhance collaborative
determination of material properties and to maximize the impact of each
contributor's dataset. In the long-term, MPContribs seeks to make Materials
Project an institutional, and thus community-wide, memory for computational and
experimental materials science.
|
1511.00871
|
Brijnesh Jain
|
Brijnesh J. Jain
|
Properties of the Sample Mean in Graph Spaces and the
Majorize-Minimize-Mean Algorithm
| null | null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the most fundamental concepts in statistics is the concept of sample
mean. Properties of the sample mean that are well-defined in Euclidean spaces
become unwieldy or even unclear in graph spaces. Open problems related to the
sample mean of graphs include: non-existence, non-uniqueness, statistical
inconsistency, lack of convergence results of mean algorithms, non-existence of
midpoints, and disparity to midpoints. We present conditions to resolve all six
problems and propose a Majorize-Minimize-Mean (MMM) Algorithm. Experiments on
graph datasets representing images and molecules show that the MMM-Algorithm
best approximates a sample mean of graphs compared to six other mean
algorithms.
|
[
{
"version": "v1",
"created": "Tue, 3 Nov 2015 12:09:26 GMT"
}
] | 2015-11-04T00:00:00 |
[
[
"Jain",
"Brijnesh J.",
""
]
] |
TITLE: Properties of the Sample Mean in Graph Spaces and the
Majorize-Minimize-Mean Algorithm
ABSTRACT: One of the most fundamental concepts in statistics is the concept of sample
mean. Properties of the sample mean that are well-defined in Euclidean spaces
become unwieldy or even unclear in graph spaces. Open problems related to the
sample mean of graphs include: non-existence, non-uniqueness, statistical
inconsistency, lack of convergence results of mean algorithms, non-existence of
midpoints, and disparity to midpoints. We present conditions to resolve all six
problems and propose a Majorize-Minimize-Mean (MMM) Algorithm. Experiments on
graph datasets representing images and molecules show that the MMM-Algorithm
best approximates a sample mean of graphs compared to six other mean
algorithms.
|
1511.00971
|
Diego Marron
|
Diego Marr\'on ([email protected]) and Jesse Read
([email protected]) and Albert Bifet ([email protected])
and Nacho Navarro ([email protected])
|
Data Stream Classification using Random Feature Functions and Novel
Method Combinations
|
20 pages, journal
| null | null | null |
cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Big Data streams are being generated in a faster, bigger, and more
commonplace fashion. In this scenario, Hoeffding Trees are an established method for
classification. Several extensions exist, including high-performing ensemble
setups such as online and leveraging bagging. Also, $k$-nearest neighbors is a
popular choice, with most extensions dealing with the inherent performance
limitations over a potentially-infinite stream.
At the same time, gradient descent methods are becoming increasingly popular,
owing in part to the successes of deep learning. Although deep neural networks
can learn incrementally, they have so far proved too sensitive to
hyper-parameter options and initial conditions to be considered an effective
`off-the-shelf' data-streams solution.
In this work, we look at combinations of Hoeffding-trees, nearest neighbour,
and gradient descent methods with a streaming preprocessing approach in the
form of a random feature functions filter for additional predictive power.
We further extend the investigation to implementing methods on GPUs, which we
test on some large real-world datasets, and show the benefits of using GPUs for
data-stream learning due to their high scalability.
Our empirical evaluation yields positive results for the novel approaches
that we experiment with, highlighting important issues, and shedding light on
promising future directions in approaches to data-stream classification.
|
[
{
"version": "v1",
"created": "Tue, 3 Nov 2015 16:29:57 GMT"
}
] | 2015-11-04T00:00:00 |
[
[
"Marrón",
"Diego",
"",
"[email protected]"
],
[
"Read",
"Jesse",
"",
"[email protected]"
],
[
"Bifet",
"Albert",
"",
"[email protected]"
],
[
"Navarro",
"Nacho",
"",
"[email protected]"
]
] |
TITLE: Data Stream Classification using Random Feature Functions and Novel
Method Combinations
ABSTRACT: Big Data streams are being generated in a faster, bigger, and more
commonplace. In this scenario, Hoeffding Trees are an established method for
classification. Several extensions exist, including high-performing ensemble
setups such as online and leveraging bagging. Also, $k$-nearest neighbors is a
popular choice, with most extensions dealing with the inherent performance
limitations over a potentially-infinite stream.
At the same time, gradient descent methods are becoming increasingly popular,
owing in part to the successes of deep learning. Although deep neural networks
can learn incrementally, they have so far proved too sensitive to
hyper-parameter options and initial conditions to be considered an effective
`off-the-shelf' data-streams solution.
In this work, we look at combinations of Hoeffding-trees, nearest neighbour,
and gradient descent methods with a streaming preprocessing approach in the
form of a random feature functions filter for additional predictive power.
We further extend the investigation to implementing methods on GPUs, which we
test on some large real-world datasets, and show the benefits of using GPUs for
data-stream learning due to their high scalability.
Our empirical evaluation yields positive results for the novel approaches
that we experiment with, highlighting important issues, and shedding light on
promising future directions in approaches to data-stream classification.
|
1511.01029
|
Vijay Badrinarayanan
|
Vijay Badrinarayanan and Bamdev Mishra and Roberto Cipolla
|
Understanding symmetries in deep networks
|
Accepted at the 8th NIPS Workshop on Optimization for Machine
Learning (OPT2015) to be held at Montreal, Canada on December 11, 2015
| null | null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent works have highlighted scale invariance or symmetry present in the
weight space of a typical deep network and the adverse effect it has on the
Euclidean gradient based stochastic gradient descent optimization. In this
work, we show that a commonly used deep network, which uses a convolution, batch
normalization, ReLU, max-pooling, and sub-sampling pipeline, possesses more
complex forms of symmetry arising from scaling-based reparameterization of the
network weights. We propose to tackle the issue of the weight space symmetry by
constraining the filters to lie on the unit-norm manifold. Consequently,
training the network boils down to using stochastic gradient descent updates on
the unit-norm manifold. Our empirical evidence based on the MNIST dataset shows
that the proposed updates improve the test performance beyond what is achieved
with batch normalization and without sacrificing the computational efficiency
of the weight updates.
|
[
{
"version": "v1",
"created": "Tue, 3 Nov 2015 18:50:03 GMT"
}
] | 2015-11-04T00:00:00 |
[
[
"Badrinarayanan",
"Vijay",
""
],
[
"Mishra",
"Bamdev",
""
],
[
"Cipolla",
"Roberto",
""
]
] |
TITLE: Understanding symmetries in deep networks
ABSTRACT: Recent works have highlighted scale invariance or symmetry present in the
weight space of a typical deep network and the adverse effect it has on the
Euclidean gradient based stochastic gradient descent optimization. In this
work, we show that a commonly used deep network, which uses a convolution, batch
normalization, ReLU, max-pooling, and sub-sampling pipeline, possesses more
complex forms of symmetry arising from scaling-based reparameterization of the
network weights. We propose to tackle the issue of the weight space symmetry by
constraining the filters to lie on the unit-norm manifold. Consequently,
training the network boils down to using stochastic gradient descent updates on
the unit-norm manifold. Our empirical evidence based on the MNIST dataset shows
that the proposed updates improve the test performance beyond what is achieved
with batch normalization and without sacrificing the computational efficiency
of the weight updates.
|
1505.04972
|
Arthur Ryman
|
Arthur Ryman
|
Recursion in RDF Data Shape Languages
|
31 pages, 2 figures, invited expert contribution to the W3C RDF Data
Shapes Working Group
| null | null | null |
cs.DB cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An RDF data shape is a description of the expected contents of an RDF
document (aka graph) or dataset. A major part of this description is the set of
constraints that the document or dataset is required to satisfy. W3C recently
(2014) chartered the RDF Data Shapes Working Group to define SHACL, a standard
RDF data shape language. We refer to the ability to name and reference shape
language elements as recursion. This article provides a precise definition of
the meaning of recursion as used in Resource Shape 2.0. The definition of
recursion presented in this article is largely independent of language-specific
details. We speculate that it also applies to ShEx and to all three of the
current proposals for SHACL. In particular, recursion is not permitted in the
SHACL-SPARQL proposal, but we conjecture that recursion could be added by using
the definition proposed here as a top-level control structure.
|
[
{
"version": "v1",
"created": "Tue, 19 May 2015 12:45:59 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Nov 2015 22:27:03 GMT"
}
] | 2015-11-03T00:00:00 |
[
[
"Ryman",
"Arthur",
""
]
] |
TITLE: Recursion in RDF Data Shape Languages
ABSTRACT: An RDF data shape is a description of the expected contents of an RDF
document (aka graph) or dataset. A major part of this description is the set of
constraints that the document or dataset is required to satisfy. W3C recently
(2014) chartered the RDF Data Shapes Working Group to define SHACL, a standard
RDF data shape language. We refer to the ability to name and reference shape
language elements as recursion. This article provides a precise definition of
the meaning of recursion as used in Resource Shape 2.0. The definition of
recursion presented in this article is largely independent of language-specific
details. We speculate that it also applies to ShEx and to all three of the
current proposals for SHACL. In particular, recursion is not permitted in the
SHACL-SPARQL proposal, but we conjecture that recursion could be added by using
the definition proposed here as a top-level control structure.
|
1506.02626
|
Song Han
|
Song Han, Jeff Pool, John Tran, William J. Dally
|
Learning both Weights and Connections for Efficient Neural Networks
|
Published as a conference paper at NIPS 2015
| null | null | null |
cs.NE cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy.
|
[
{
"version": "v1",
"created": "Mon, 8 Jun 2015 19:28:43 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jul 2015 22:27:31 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Oct 2015 23:29:27 GMT"
}
] | 2015-11-03T00:00:00 |
[
[
"Han",
"Song",
""
],
[
"Pool",
"Jeff",
""
],
[
"Tran",
"John",
""
],
[
"Dally",
"William J.",
""
]
] |
TITLE: Learning both Weights and Connections for Efficient Neural Networks
ABSTRACT: Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy.
|
1508.00330
|
Zhibin Liao
|
Zhibin Liao, Gustavo Carneiro
|
On the Importance of Normalisation Layers in Deep Learning with
Piecewise Linear Activation Units
| null | null | null | null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep feedforward neural networks with piecewise linear activations are
currently producing the state-of-the-art results in several public datasets.
The combination of deep learning models and piecewise linear activation
functions allows for the estimation of exponentially complex functions with the
use of a large number of subnetworks specialized in the classification of
similar input examples. During the training process, these subnetworks avoid
overfitting with an implicit regularization scheme based on the fact that they
must share their parameters with other subnetworks. Using this framework, we
have made an empirical observation that can further improve the performance
of such models. We notice that these models assume a balanced initial
distribution of data points with respect to the domain of the piecewise linear
activation function. If that assumption is violated, then the piecewise linear
activation units can degenerate into purely linear activation units, which can
result in a significant reduction of their capacity to learn complex functions.
Furthermore, as the number of model layers increases, this unbalanced initial
distribution makes the model ill-conditioned. Therefore, we propose the
introduction of batch normalisation units into deep feedforward neural networks
with piecewise linear activations, which drives a more balanced use of these
activation units, where each region of the activation function is trained with
a relatively large proportion of training samples. Also, this batch
normalisation promotes the pre-conditioning of very deep learning models. We
show that introducing maxout and batch normalisation units into the
network-in-network model results in a model that produces classification results that are
better than or comparable to the current state of the art in CIFAR-10,
CIFAR-100, MNIST, and SVHN datasets.
|
[
{
"version": "v1",
"created": "Mon, 3 Aug 2015 07:24:07 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Nov 2015 06:44:10 GMT"
}
] | 2015-11-03T00:00:00 |
[
[
"Liao",
"Zhibin",
""
],
[
"Carneiro",
"Gustavo",
""
]
] |
TITLE: On the Importance of Normalisation Layers in Deep Learning with
Piecewise Linear Activation Units
ABSTRACT: Deep feedforward neural networks with piecewise linear activations are
currently producing the state-of-the-art results in several public datasets.
The combination of deep learning models and piecewise linear activation
functions allows for the estimation of exponentially complex functions with the
use of a large number of subnetworks specialized in the classification of
similar input examples. During the training process, these subnetworks avoid
overfitting with an implicit regularization scheme based on the fact that they
must share their parameters with other subnetworks. Using this framework, we
have made an empirical observation that can further improve the performance
of such models. We notice that these models assume a balanced initial
distribution of data points with respect to the domain of the piecewise linear
activation function. If that assumption is violated, then the piecewise linear
activation units can degenerate into purely linear activation units, which can
result in a significant reduction of their capacity to learn complex functions.
Furthermore, as the number of model layers increases, this unbalanced initial
distribution makes the model ill-conditioned. Therefore, we propose the
introduction of batch normalisation units into deep feedforward neural networks
with piecewise linear activations, which drives a more balanced use of these
activation units, where each region of the activation function is trained with
a relatively large proportion of training samples. Also, this batch
normalisation promotes the pre-conditioning of very deep learning models. We
show that introducing maxout and batch normalisation units into the
network-in-network model results in a model that produces classification results that are
better than or comparable to the current state of the art in CIFAR-10,
CIFAR-100, MNIST, and SVHN datasets.
|
1511.00054
|
David Moore
|
David A. Moore and Stuart J. Russell
|
Gaussian Process Random Fields
|
Advances in Neural Information Processing Systems (NIPS), 2015
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gaussian processes have been successful in both supervised and unsupervised
machine learning tasks, but their computational complexity has constrained
practical applications. We introduce a new approximation for large-scale
Gaussian processes, the Gaussian Process Random Field (GPRF), in which local
GPs are coupled via pairwise potentials. The GPRF likelihood is a simple,
tractable, and parallelizable approximation to the full GP marginal
likelihood, enabling latent variable modeling and hyperparameter selection on
large datasets. We demonstrate its effectiveness on synthetic spatial data as
well as a real-world application to seismic event location.
|
[
{
"version": "v1",
"created": "Sat, 31 Oct 2015 01:02:14 GMT"
}
] | 2015-11-03T00:00:00 |
[
[
"Moore",
"David A.",
""
],
[
"Russell",
"Stuart J.",
""
]
] |
TITLE: Gaussian Process Random Fields
ABSTRACT: Gaussian processes have been successful in both supervised and unsupervised
machine learning tasks, but their computational complexity has constrained
practical applications. We introduce a new approximation for large-scale
Gaussian processes, the Gaussian Process Random Field (GPRF), in which local
GPs are coupled via pairwise potentials. The GPRF likelihood is a simple,
tractable, and parallelizable approximation to the full GP marginal
likelihood, enabling latent variable modeling and hyperparameter selection on
large datasets. We demonstrate its effectiveness on synthetic spatial data as
well as a real-world application to seismic event location.
|
1511.00099
|
Anurag Mittal
|
Sarthak Parui and Anurag Mittal
|
Sketch-based Image Retrieval from Millions of Images under Rotation,
Translation and Scale Variations
|
submitted to IJCV, April 2015
| null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proliferation of touch-based devices has made sketch-based image retrieval
practical. While many methods exist for sketch-based object detection/image
retrieval on small datasets, relatively less work has been done on large
(web)-scale image retrieval. In this paper, we present an efficient approach
for image retrieval from millions of images based on user-drawn sketches.
Unlike existing methods for this problem which are sensitive to even
translation or scale variations, our method handles rotation, translation,
scale (i.e. a similarity transformation) and small deformations. The object
boundaries are represented as chains of connected segments and the database
images are pre-processed to obtain such chains that have a high chance of
containing the object. This is accomplished using two approaches in this work:
a) extracting long chains in contour segment networks and b) extracting
boundaries of segmented object proposals. These chains are then represented by
similarity-invariant variable length descriptors. Descriptor similarities are
computed by a fast Dynamic Programming-based partial matching algorithm. This
matching mechanism is used to generate a hierarchical k-medoids based indexing
structure for the extracted chains of all database images in an offline process
which is used to efficiently retrieve a small set of possible matched images
for query chains. Finally, a geometric verification step is employed to test
geometric consistency of multiple chain matches to improve results. Qualitative
and quantitative results clearly demonstrate superiority of the approach over
existing methods.
|
[
{
"version": "v1",
"created": "Sat, 31 Oct 2015 08:50:43 GMT"
}
] | 2015-11-03T00:00:00 |
[
[
"Parui",
"Sarthak",
""
],
[
"Mittal",
"Anurag",
""
]
] |
TITLE: Sketch-based Image Retrieval from Millions of Images under Rotation,
Translation and Scale Variations
ABSTRACT: Proliferation of touch-based devices has made sketch-based image retrieval
practical. While many methods exist for sketch-based object detection/image
retrieval on small datasets, relatively less work has been done on large
(web)-scale image retrieval. In this paper, we present an efficient approach
for image retrieval from millions of images based on user-drawn sketches.
Unlike existing methods for this problem which are sensitive to even
translation or scale variations, our method handles rotation, translation,
scale (i.e. a similarity transformation) and small deformations. The object
boundaries are represented as chains of connected segments and the database
images are pre-processed to obtain such chains that have a high chance of
containing the object. This is accomplished using two approaches in this work:
a) extracting long chains in contour segment networks and b) extracting
boundaries of segmented object proposals. These chains are then represented by
similarity-invariant variable length descriptors. Descriptor similarities are
computed by a fast Dynamic Programming-based partial matching algorithm. This
matching mechanism is used to generate a hierarchical k-medoids based indexing
structure for the extracted chains of all database images in an offline process
which is used to efficiently retrieve a small set of possible matched images
for query chains. Finally, a geometric verification step is employed to test
geometric consistency of multiple chain matches to improve results. Qualitative
and quantitative results clearly demonstrate superiority of the approach over
existing methods.
|
1507.01206
|
Paolo Napoletano
|
Daniela Micucci, Marco Mobilio, Paolo Napoletano, Francesco Tisato
|
Falls as anomalies? An experimental evaluation using smartphone
accelerometer data
|
submitted to the Journal of Ambient Intelligence and Humanized
Computing (Springer)
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Life expectancy keeps growing and, among elderly people, accidental falls
occur frequently. A system able to promptly detect falls would help in reducing
the injuries that a fall could cause. Such a system should meet the needs of
the people for whom it is designed, so that it is actually used. In particular,
the system should be minimally invasive and inexpensive. Since most smartphones
embed accelerometers and a powerful processing unit, they are good candidates
both as data acquisition devices and as platforms to host fall detection
systems. For this reason, in recent years several fall detection methods have
been evaluated on smartphone accelerometer data. Most
of them have been tuned with simulated falls because, to date, datasets of
real-world falls are not available. This article evaluates the effectiveness of
methods that detect falls as anomalies. To this end, we compared traditional
approaches with anomaly detectors. In particular, we evaluated the kNN and
the SVM methods using both one-class and two-class configurations. The
comparison involved three different collections of accelerometer data, and four
different data representations. Empirical results demonstrated that, in most
cases, falls are not required to design an effective fall detector.
|
[
{
"version": "v1",
"created": "Sun, 5 Jul 2015 11:49:34 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Oct 2015 13:52:08 GMT"
}
] | 2015-11-02T00:00:00 |
[
[
"Micucci",
"Daniela",
""
],
[
"Mobilio",
"Marco",
""
],
[
"Napoletano",
"Paolo",
""
],
[
"Tisato",
"Francesco",
""
]
] |
TITLE: Falls as anomalies? An experimental evaluation using smartphone
accelerometer data
ABSTRACT: Life expectancy keeps growing and, among elderly people, accidental falls
occur frequently. A system able to promptly detect falls would help in reducing
the injuries that a fall could cause. Such a system should meet the needs of
the people for whom it is designed, so that it is actually used. In particular,
the system should be minimally invasive and inexpensive. Since most smartphones
embed accelerometers and a powerful processing unit, they are good candidates
both as data acquisition devices and as platforms to host fall detection
systems. For this reason, in recent years several fall detection methods have
been evaluated on smartphone accelerometer data. Most
of them have been tuned with simulated falls because, to date, datasets of
real-world falls are not available. This article evaluates the effectiveness of
methods that detect falls as anomalies. To this end, we compared traditional
approaches with anomaly detectors. In particular, we evaluated the kNN and
the SVM methods using both one-class and two-class configurations. The
comparison involved three different collections of accelerometer data, and four
different data representations. Empirical results demonstrated that, in most
cases, falls are not required to design an effective fall detector.
|
1510.08789
|
Travis Johnston
|
Travis Johnston, Boyu Zhang, Adam Liwo, Silvia Crivelli, Michela
Taufer
|
In-Situ Data Analysis of Protein Folding Trajectories
|
40 pages, 15 figures, this paper is presently in the format request
of the journal to which it was submitted for publication
| null | null | null |
cs.CE cs.DC q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The transition from petascale to exascale computers is characterized by
substantial changes in the computer architectures and technologies. The
research community relying on computational simulations is being forced to
revisit the algorithms for data generation and analysis due to various
concerns, such as higher degrees of concurrency, deeper memory hierarchies,
substantial I/O and communication constraints. Simulations today typically save
all data to analyze later. Simulations at the exascale will require us to
analyze data as it is generated and save only what is really needed for
analysis, which must be performed predominantly in-situ, i.e., executed
sufficiently fast locally, limiting memory and disk usage, and avoiding the
need to move large data across nodes.
In this paper, we present a distributed method that enables in-situ data
analysis for large protein folding trajectory datasets. Traditional trajectory
analysis methods currently follow a centralized approach that moves the
trajectory datasets to a centralized node and processes the data only after
simulations have been completed. Our method, on the other hand, captures
conformational information in-situ using local data only while reducing the
storage space needed for the part of the trajectory under consideration. This
method processes the input trajectory data in one pass, breaks from the
centralized approach of traditional analysis, avoids the movement of trajectory
data, and still builds the global knowledge on the formation of individual
$\alpha$-helices or $\beta$-strands as trajectory frames are generated.
|
[
{
"version": "v1",
"created": "Thu, 29 Oct 2015 17:34:57 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Oct 2015 15:41:25 GMT"
}
] | 2015-11-02T00:00:00 |
[
[
"Johnston",
"Travis",
""
],
[
"Zhang",
"Boyu",
""
],
[
"Liwo",
"Adam",
""
],
[
"Crivelli",
"Silvia",
""
],
[
"Taufer",
"Michela",
""
]
] |
TITLE: In-Situ Data Analysis of Protein Folding Trajectories
ABSTRACT: The transition from petascale to exascale computers is characterized by
substantial changes in the computer architectures and technologies. The
research community relying on computational simulations is being forced to
revisit the algorithms for data generation and analysis due to various
concerns, such as higher degrees of concurrency, deeper memory hierarchies,
substantial I/O and communication constraints. Simulations today typically save
all data to analyze later. Simulations at the exascale will require us to
analyze data as it is generated and save only what is really needed for
analysis, which must be performed predominantly in-situ, i.e., executed
sufficiently fast locally, limiting memory and disk usage, and avoiding the
need to move large data across nodes.
In this paper, we present a distributed method that enables in-situ data
analysis for large protein folding trajectory datasets. Traditional trajectory
analysis methods currently follow a centralized approach that moves the
trajectory datasets to a centralized node and processes the data only after
simulations have been completed. Our method, on the other hand, captures
conformational information in-situ using local data only while reducing the
storage space needed for the part of the trajectory under consideration. This
method processes the input trajectory data in one pass, breaks from the
centralized approach of traditional analysis, avoids the movement of trajectory
data, and still builds the global knowledge on the formation of individual
$\alpha$-helices or $\beta$-strands as trajectory frames are generated.
|
1510.08893
|
Lorenzo Baraldi
|
Lorenzo Baraldi, Costantino Grana and Rita Cucchiara
|
A Deep Siamese Network for Scene Detection in Broadcast Videos
|
ACM Multimedia 2015
| null |
10.1145/2733373.2806316
| null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a model that automatically divides broadcast videos into coherent
scenes by learning a distance measure between shots. Experiments are performed
to demonstrate the effectiveness of our approach by comparing our algorithm
against recent proposals for automatic scene segmentation. We also propose an
improved performance measure that aims to reduce the gap between numerical
evaluation and expected results, and propose and release a new benchmark
dataset.
|
[
{
"version": "v1",
"created": "Thu, 29 Oct 2015 20:34:15 GMT"
}
] | 2015-11-02T00:00:00 |
[
[
"Baraldi",
"Lorenzo",
""
],
[
"Grana",
"Costantino",
""
],
[
"Cucchiara",
"Rita",
""
]
] |
TITLE: A Deep Siamese Network for Scene Detection in Broadcast Videos
ABSTRACT: We present a model that automatically divides broadcast videos into coherent
scenes by learning a distance measure between shots. Experiments are performed
to demonstrate the effectiveness of our approach by comparing our algorithm
against recent proposals for automatic scene segmentation. We also propose an
improved performance measure that aims to reduce the gap between numerical
evaluation and expected results, and propose and release a new benchmark
dataset.
|
1510.08897
|
Kyriaki Dimitriadou
|
Kyriaki Dimitriadou and Olga Papaemmanouil and Yanlei Diao
|
AIDE: An Automated Sample-based Approach for Interactive Data
Exploration
|
14 pages
| null | null | null |
cs.DB cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we argue that database systems be augmented with an automated
data exploration service that methodically steers users through the data in a
meaningful way. Such an automated system is crucial for deriving insights from
complex datasets found in many big data applications such as scientific and
healthcare applications as well as for reducing the human effort of data
exploration. Towards this end, we present AIDE, an Automatic Interactive Data
Exploration framework that assists users in discovering new interesting data
patterns and eliminating expensive ad-hoc exploratory queries.
AIDE relies on a seamless integration of classification algorithms and data
management optimization techniques that collectively strive to accurately learn
the user interests based on his relevance feedback on strategically collected
samples. We present a number of exploration techniques as well as optimizations
that minimize the number of samples presented to the user while offering
interactive performance. AIDE can deliver highly accurate query predictions for
very common conjunctive queries with small user effort while, given a
reasonable number of samples, it can predict with high accuracy complex
disjunctive queries. It provides interactive performance as it limits the user
wait time per iteration of exploration to less than a few seconds.
|
[
{
"version": "v1",
"created": "Thu, 29 Oct 2015 20:50:05 GMT"
}
] | 2015-11-02T00:00:00 |
[
[
"Dimitriadou",
"Kyriaki",
""
],
[
"Papaemmanouil",
"Olga",
""
],
[
"Diao",
"Yanlei",
""
]
] |
TITLE: AIDE: An Automated Sample-based Approach for Interactive Data
Exploration
ABSTRACT: In this paper, we argue that database systems be augmented with an automated
data exploration service that methodically steers users through the data in a
meaningful way. Such an automated system is crucial for deriving insights from
complex datasets found in many big data applications such as scientific and
healthcare applications as well as for reducing the human effort of data
exploration. Towards this end, we present AIDE, an Automatic Interactive Data
Exploration framework that assists users in discovering new interesting data
patterns and eliminating expensive ad-hoc exploratory queries.
AIDE relies on a seamless integration of classification algorithms and data
management optimization techniques that collectively strive to accurately learn
the user interests based on his relevance feedback on strategically collected
samples. We present a number of exploration techniques as well as optimizations
that minimize the number of samples presented to the user while offering
interactive performance. AIDE can deliver highly accurate query predictions for
very common conjunctive queries with small user effort while, given a
reasonable number of samples, it can predict with high accuracy complex
disjunctive queries. It provides interactive performance as it limits the user
wait time per iteration of exploration to less than a few seconds.
|
1510.08973
|
Fereshteh Sadeghi
|
Fereshteh Sadeghi, C. Lawrence Zitnick, Ali Farhadi
|
VISALOGY: Answering Visual Analogy Questions
|
To appear in NIPS 2015
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the problem of answering visual analogy questions.
These questions take the form of image A is to image B as image C is to what.
Answering these questions entails discovering the mapping from image A to image
B and then extending the mapping to image C and searching for the image D such
that the relation from A to B holds for C to D. We pose this problem as
learning an embedding that encourages pairs of analogous images with similar
transformations to be close together using convolutional neural networks with a
quadruple Siamese architecture. We introduce a dataset of visual analogy
questions in natural images, and show first results of its kind on solving
analogy questions on natural images.
|
[
{
"version": "v1",
"created": "Fri, 30 Oct 2015 05:43:41 GMT"
}
] | 2015-11-02T00:00:00 |
[
[
"Sadeghi",
"Fereshteh",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Farhadi",
"Ali",
""
]
] |
TITLE: VISALOGY: Answering Visual Analogy Questions
ABSTRACT: In this paper, we study the problem of answering visual analogy questions.
These questions take the form of image A is to image B as image C is to what.
Answering these questions entails discovering the mapping from image A to image
B and then extending the mapping to image C and searching for the image D such
that the relation from A to B holds for C to D. We pose this problem as
learning an embedding that encourages pairs of analogous images with similar
transformations to be close together using convolutional neural networks with a
quadruple Siamese architecture. We introduce a dataset of visual analogy
questions in natural images, and show first results of its kind on solving
analogy questions on natural images.
|
1510.09171
|
Hang Chu
|
Hang Chu, Hongyuan Mei, Mohit Bansal, Matthew R. Walter
|
Accurate Vision-based Vehicle Localization using Satellite Imagery
|
9 pages, 8 figures. Full version is submitted to ICRA 2016. Short
version is to appear at NIPS 2015 Workshop on Transfer and Multi-Task
Learning
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for accurately localizing ground vehicles with the aid of
satellite imagery. Our approach takes a ground image as input, and outputs the
location from which it was taken on a georeferenced satellite image. We perform
visual localization by estimating the co-occurrence probabilities between the
ground and satellite images based on a ground-satellite feature dictionary. The
method is able to estimate likelihoods over arbitrary locations without the
need for a dense ground image database. We present a ranking-loss based
algorithm that learns location-discriminative feature projection matrices that
result in further improvements in accuracy. We evaluate our method on the
Malaga and KITTI public datasets and demonstrate significant improvements over
a baseline that performs exhaustive search.
|
[
{
"version": "v1",
"created": "Fri, 30 Oct 2015 17:35:23 GMT"
}
] | 2015-11-02T00:00:00 |
[
[
"Chu",
"Hang",
""
],
[
"Mei",
"Hongyuan",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Walter",
"Matthew R.",
""
]
] |
TITLE: Accurate Vision-based Vehicle Localization using Satellite Imagery
ABSTRACT: We propose a method for accurately localizing ground vehicles with the aid of
satellite imagery. Our approach takes a ground image as input, and outputs the
location from which it was taken on a georeferenced satellite image. We perform
visual localization by estimating the co-occurrence probabilities between the
ground and satellite images based on a ground-satellite feature dictionary. The
method is able to estimate likelihoods over arbitrary locations without the
need for a dense ground image database. We present a ranking-loss based
algorithm that learns location-discriminative feature projection matrices that
result in further improvements in accuracy. We evaluate our method on the
Malaga and KITTI public datasets and demonstrate significant improvements over
a baseline that performs exhaustive search.
|
1502.07162
|
Dimitar Nikolov
|
Dimitar Nikolov, Diego F. M. Oliveira, Alessandro Flammini, Filippo
Menczer
|
Measuring Online Social Bubbles
| null | null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media have quickly become a prevalent channel to access information,
spread ideas, and influence opinions. However, it has been suggested that
social and algorithmic filtering may cause exposure to less diverse points of
view, and even foster polarization and misinformation. Here we explore and
validate this hypothesis quantitatively for the first time, at the collective
and individual levels, by mining three massive datasets of web traffic, search
logs, and Twitter posts. Our analysis shows that collectively, people access
information from a significantly narrower spectrum of sources through social
media and email, compared to search. The significance of this finding for
individual exposure is revealed by investigating the relationship between the
diversity of information sources experienced by users at the collective and
individual level. There is a strong correlation between collective and
individual diversity, supporting the notion that when we use social media we
find ourselves inside "social bubbles". Our results could lead to a deeper
understanding of how technology biases our exposure to new information.
|
[
{
"version": "v1",
"created": "Wed, 25 Feb 2015 13:29:17 GMT"
},
{
"version": "v2",
"created": "Fri, 1 May 2015 20:08:36 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Oct 2015 20:36:49 GMT"
}
] | 2015-10-30T00:00:00 |
[
[
"Nikolov",
"Dimitar",
""
],
[
"Oliveira",
"Diego F. M.",
""
],
[
"Flammini",
"Alessandro",
""
],
[
"Menczer",
"Filippo",
""
]
] |
TITLE: Measuring Online Social Bubbles
ABSTRACT: Social media have quickly become a prevalent channel to access information,
spread ideas, and influence opinions. However, it has been suggested that
social and algorithmic filtering may cause exposure to less diverse points of
view, and even foster polarization and misinformation. Here we explore and
validate this hypothesis quantitatively for the first time, at the collective
and individual levels, by mining three massive datasets of web traffic, search
logs, and Twitter posts. Our analysis shows that collectively, people access
information from a significantly narrower spectrum of sources through social
media and email, compared to search. The significance of this finding for
individual exposure is revealed by investigating the relationship between the
diversity of information sources experienced by users at the collective and
individual level. There is a strong correlation between collective and
individual diversity, supporting the notion that when we use social media we
find ourselves inside "social bubbles". Our results could lead to a deeper
understanding of how technology biases our exposure to new information.
|
1507.07851
|
Jagdish Achara
|
Jagdish Prasad Achara, Gergely Acs and Claude Castelluccia
|
On the Unicity of Smartphone Applications
|
10 pages, 9 Figures, Appeared at ACM CCS Workshop on Privacy in
Electronic Society (WPES) 2015
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior works have shown that the list of apps installed by a user reveals a lot
about user interests and behavior. These works rely on the semantics of the
installed apps and show that various user traits could be learnt automatically
using off-the-shelf machine-learning techniques. In this work, we focus on the
re-identifiability issue and thoroughly study the unicity of smartphone apps on
a dataset containing 54,893 Android users collected over a period of 7 months.
Our study finds that any 4 apps installed by a user are enough (more than 95%
of the time) for the re-identification of the user in our dataset. As the
complete
list of installed apps is unique for 99% of the users in our dataset, it can be
easily used to track/profile the users by a service such as Twitter that has
access to the whole list of installed apps of users. As our analyzed dataset is
small as compared to the total population of Android users, we also study how
unicity would vary with larger datasets. This work emphasizes the need of
better privacy guards against collection, use and release of the list of
installed apps.
|
[
{
"version": "v1",
"created": "Tue, 28 Jul 2015 17:07:00 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Oct 2015 09:57:54 GMT"
}
] | 2015-10-30T00:00:00 |
[
[
"Achara",
"Jagdish Prasad",
""
],
[
"Acs",
"Gergely",
""
],
[
"Castelluccia",
"Claude",
""
]
] |
TITLE: On the Unicity of Smartphone Applications
ABSTRACT: Prior works have shown that the list of apps installed by a user reveals a lot
about user interests and behavior. These works rely on the semantics of the
installed apps and show that various user traits could be learnt automatically
using off-the-shelf machine-learning techniques. In this work, we focus on the
re-identifiability issue and thoroughly study the unicity of smartphone apps on
a dataset containing 54,893 Android users collected over a period of 7 months.
Our study finds that any 4 apps installed by a user are enough (more than 95%
of the time) for the re-identification of the user in our dataset. As the
complete
list of installed apps is unique for 99% of the users in our dataset, it can be
easily used to track/profile the users by a service such as Twitter that has
access to the whole list of installed apps of users. As our analyzed dataset is
small as compared to the total population of Android users, we also study how
unicity would vary with larger datasets. This work emphasizes the need of
better privacy guards against collection, use and release of the list of
installed apps.
|
1510.08484
|
David Snyder
|
David Snyder, Guoguo Chen, Daniel Povey
|
MUSAN: A Music, Speech, and Noise Corpus
| null | null | null | null |
cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report introduces a new corpus of music, speech, and noise. This dataset
is suitable for training models for voice activity detection (VAD) and
music/speech discrimination. Our corpus is released under a flexible Creative
Commons license. The dataset consists of music from several genres, speech from
twelve languages, and a wide assortment of technical and non-technical noises.
We demonstrate use of this corpus for music/speech discrimination on broadcast
news and VAD for speaker identification.
|
[
{
"version": "v1",
"created": "Wed, 28 Oct 2015 20:59:04 GMT"
}
] | 2015-10-30T00:00:00 |
[
[
"Snyder",
"David",
""
],
[
"Chen",
"Guoguo",
""
],
[
"Povey",
"Daniel",
""
]
] |
TITLE: MUSAN: A Music, Speech, and Noise Corpus
ABSTRACT: This report introduces a new corpus of music, speech, and noise. This dataset
is suitable for training models for voice activity detection (VAD) and
music/speech discrimination. Our corpus is released under a flexible Creative
Commons license. The dataset consists of music from several genres, speech from
twelve languages, and a wide assortment of technical and non-technical noises.
We demonstrate use of this corpus for music/speech discrimination on broadcast
news and VAD for speaker identification.
|
1510.08829
|
Eric Hunsberger
|
Eric Hunsberger and Chris Eliasmith
|
Spiking Deep Networks with LIF Neurons
| null | null | null | null |
cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We train spiking deep networks using leaky integrate-and-fire (LIF) neurons,
and achieve state-of-the-art results for spiking networks on the CIFAR-10 and
MNIST datasets. This demonstrates that biologically-plausible spiking LIF
neurons can be integrated into deep networks and perform as well as other
spiking models (e.g. integrate-and-fire). We achieved this result by softening
the LIF response function, such that its derivative remains bounded, and by
training the network with noise to provide robustness against the variability
introduced by spikes. Our method is general and could be applied to other
neuron types, including those used on modern neuromorphic hardware. Our work
brings more biological realism into modern image classification models, with
the hope that these models can inform how the brain performs this difficult
task. It also provides new methods for training deep networks to run on
neuromorphic hardware, with the aim of fast, power-efficient image
classification for robotics applications.
|
[
{
"version": "v1",
"created": "Thu, 29 Oct 2015 19:24:03 GMT"
}
] | 2015-10-30T00:00:00 |
[
[
"Hunsberger",
"Eric",
""
],
[
"Eliasmith",
"Chris",
""
]
] |
TITLE: Spiking Deep Networks with LIF Neurons
ABSTRACT: We train spiking deep networks using leaky integrate-and-fire (LIF) neurons,
and achieve state-of-the-art results for spiking networks on the CIFAR-10 and
MNIST datasets. This demonstrates that biologically-plausible spiking LIF
neurons can be integrated into deep networks and perform as well as other
spiking models (e.g. integrate-and-fire). We achieved this result by softening
the LIF response function, such that its derivative remains bounded, and by
training the network with noise to provide robustness against the variability
introduced by spikes. Our method is general and could be applied to other
neuron types, including those used on modern neuromorphic hardware. Our work
brings more biological realism into modern image classification models, with
the hope that these models can inform how the brain performs this difficult
task. It also provides new methods for training deep networks to run on
neuromorphic hardware, with the aim of fast, power-efficient image
classification for robotics applications.
|
1505.03824
|
Jacopo Iacovacci
|
Jacopo Iacovacci, Zhihao Wu, Ginestra Bianconi
|
Mesoscopic Structures Reveal the Network Between the Layers of Multiplex
Datasets
|
11 pages, 7 figures
|
Phys. Rev. E 92, 042806 (2015)
|
10.1103/PhysRevE.92.042806
| null |
physics.soc-ph cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiplex networks describe a large variety of complex systems, whose
elements (nodes) can be connected by different types of interactions forming
different layers (networks) of the multiplex. Multiplex networks include social
networks, transportation networks or biological networks in the cell or in the
brain. Extracting relevant information from these networks is of crucial
importance for solving challenging inference problems and for characterizing
the multiplex network's microscopic and mesoscopic structure. Here we propose an
information theory method to extract the network between the layers of
multiplex datasets, forming a "network of networks". We build an indicator
function, based on the entropy of network ensembles, to characterize the
mesoscopic similarities between the layers of a multiplex network and we use
clustering techniques to characterize the communities present in this network
of networks. We apply the proposed method to study the Multiplex Collaboration
Network formed by scientists collaborating on different subjects and publishing
in the American Physical Society (APS) journals. The analysis of this dataset
reveals the interplay between the collaboration networks and the organization
of knowledge in physics.
|
[
{
"version": "v1",
"created": "Thu, 14 May 2015 18:18:34 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Oct 2015 16:43:56 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Oct 2015 16:32:11 GMT"
}
] | 2015-10-29T00:00:00 |
[
[
"Iacovacci",
"Jacopo",
""
],
[
"Wu",
"Zhihao",
""
],
[
"Bianconi",
"Ginestra",
""
]
] |
TITLE: Mesoscopic Structures Reveal the Network Between the Layers of Multiplex
Datasets
ABSTRACT: Multiplex networks describe a large variety of complex systems, whose
elements (nodes) can be connected by different types of interactions forming
different layers (networks) of the multiplex. Multiplex networks include social
networks, transportation networks or biological networks in the cell or in the
brain. Extracting relevant information from these networks is of crucial
importance for solving challenging inference problems and for characterizing
the multiplex network's microscopic and mesoscopic structure. Here we propose an
information theory method to extract the network between the layers of
multiplex datasets, forming a "network of networks". We build an indicator
function, based on the entropy of network ensembles, to characterize the
mesoscopic similarities between the layers of a multiplex network and we use
clustering techniques to characterize the communities present in this network
of networks. We apply the proposed method to study the Multiplex Collaboration
Network formed by scientists collaborating on different subjects and publishing
in the American Physical Society (APS) journals. The analysis of this dataset
reveals the interplay between the collaboration networks and the organization
of knowledge in physics.
|
1506.02089
|
Nemanja Spasojevic
|
Nemanja Spasojevic, Zhisheng Li, Adithya Rao, Prantik Bhattacharyya
|
When-To-Post on Social Networks
|
10 pages, to appear in KDD2015
| null |
10.1145/2783258.2788584
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For many users on social networks, one of the goals when broadcasting content
is to reach a large audience. The probability of receiving reactions to a
message differs for each user and depends on various factors, such as location,
daily and weekly behavior patterns and the visibility of the message. While
previous work has focused on overall network dynamics and message flow
cascades, the problem of recommending personalized posting times has remained
an underexplored topic of research. In this study, we formulate a when-to-post
problem, where the objective is to find the best times for a user to post on
social networks in order to maximize the probability of audience responses. To
understand the complexity of the problem, we examine user behavior in terms of
post-to-reaction times, and compare cross-network and cross-city weekly
reaction behavior for users in different cities, on both Twitter and Facebook.
We perform this analysis on over a billion posted messages and observed
reactions, and propose multiple approaches for generating personalized posting
schedules. We empirically assess these schedules on a sampled user set of 0.5
million active users and more than 25 million messages observed over a 56 day
period. We show that users see a reaction gain of up to 17% on Facebook and 4%
on Twitter when the recommended posting times are used. We open the dataset
used in this study, which includes timestamps for over 144 million posts and
over 1.1 billion reactions. The personalized schedules derived here are used in
a fully deployed production system to recommend posting times for millions of
users every day.
|
[
{
"version": "v1",
"created": "Fri, 5 Jun 2015 23:59:31 GMT"
}
] | 2015-10-29T00:00:00 |
[
[
"Spasojevic",
"Nemanja",
""
],
[
"Li",
"Zhisheng",
""
],
[
"Rao",
"Adithya",
""
],
[
"Bhattacharyya",
"Prantik",
""
]
] |
TITLE: When-To-Post on Social Networks
ABSTRACT: For many users on social networks, one of the goals when broadcasting content
is to reach a large audience. The probability of receiving reactions to a
message differs for each user and depends on various factors, such as location,
daily and weekly behavior patterns and the visibility of the message. While
previous work has focused on overall network dynamics and message flow
cascades, the problem of recommending personalized posting times has remained
an underexplored topic of research. In this study, we formulate a when-to-post
problem, where the objective is to find the best times for a user to post on
social networks in order to maximize the probability of audience responses. To
understand the complexity of the problem, we examine user behavior in terms of
post-to-reaction times, and compare cross-network and cross-city weekly
reaction behavior for users in different cities, on both Twitter and Facebook.
We perform this analysis on over a billion posted messages and observed
reactions, and propose multiple approaches for generating personalized posting
schedules. We empirically assess these schedules on a sampled user set of 0.5
million active users and more than 25 million messages observed over a 56 day
period. We show that users see a reaction gain of up to 17% on Facebook and 4%
on Twitter when the recommended posting times are used. We open the dataset
used in this study, which includes timestamps for over 144 million posts and
over 1.1 billion reactions. The personalized schedules derived here are used in
a fully deployed production system to recommend posting times for millions of
users every day.
|
1510.05711
|
Andrew Simpson
|
Andrew J.R. Simpson
|
Qualitative Projection Using Deep Neural Networks
| null | null | null | null |
cs.NE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks (DNN) abstract by demodulating the output of linear
filters. In this article, we refine this definition of abstraction to show that
the inputs of a DNN are abstracted with respect to the filters. Or, to restate,
the abstraction is qualified by the filters. This leads us to introduce the
notion of qualitative projection. We use qualitative projection to abstract
MNIST hand-written digits with respect to the various dogs, horses, planes and
cars of the CIFAR dataset. We then classify the MNIST digits according to the
magnitude of their dogness, horseness, planeness and carness qualities,
illustrating the generality of qualitative projection.
|
[
{
"version": "v1",
"created": "Mon, 19 Oct 2015 22:38:09 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Oct 2015 08:42:54 GMT"
}
] | 2015-10-29T00:00:00 |
[
[
"Simpson",
"Andrew J. R.",
""
]
] |
TITLE: Qualitative Projection Using Deep Neural Networks
ABSTRACT: Deep neural networks (DNN) abstract by demodulating the output of linear
filters. In this article, we refine this definition of abstraction to show that
the inputs of a DNN are abstracted with respect to the filters. Or, to restate,
the abstraction is qualified by the filters. This leads us to introduce the
notion of qualitative projection. We use qualitative projection to abstract
MNIST hand-written digits with respect to the various dogs, horses, planes and
cars of the CIFAR dataset. We then classify the MNIST digits according to the
magnitude of their dogness, horseness, planeness and carness qualities,
illustrating the generality of qualitative projection.
|
1404.3606
|
Tsung-Han Chan
|
Tsung-Han Chan, Kui Jia, Shenghua Gao, Jiwen Lu, Zinan Zeng and Yi Ma
|
PCANet: A Simple Deep Learning Baseline for Image Classification?
| null | null |
10.1109/TIP.2015.2475625
| null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a very simple deep learning network for image
classification which comprises only the very basic data processing components:
cascaded principal component analysis (PCA), binary hashing, and block-wise
histograms. In the proposed architecture, PCA is employed to learn multistage
filter banks. It is followed by simple binary hashing and block histograms for
indexing and pooling. This architecture is thus named the PCA network (PCANet)
and can be designed and learned extremely easily and efficiently. For
comparison and better understanding, we also introduce and study two simple
variations to the PCANet, namely the RandNet and LDANet. They share the same
topology of PCANet but their cascaded filters are either selected randomly or
learned from LDA. We have tested these basic networks extensively on many
benchmark visual datasets for different tasks, such as LFW for face
verification, MultiPIE, Extended Yale B, AR, FERET datasets for face
recognition, as well as MNIST for hand-written digits recognition.
Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with
the state of the art features, either prefixed, highly hand-crafted or
carefully learned (by DNNs). Even more surprisingly, it sets new records for
many classification tasks in Extended Yale B, AR, FERET datasets, and MNIST
variations. Additional experiments on other public datasets also demonstrate
the potential of the PCANet serving as a simple but highly competitive baseline
for texture classification and object recognition.
|
[
{
"version": "v1",
"created": "Mon, 14 Apr 2014 15:02:17 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Aug 2014 15:20:44 GMT"
}
] | 2015-10-28T00:00:00 |
[
[
"Chan",
"Tsung-Han",
""
],
[
"Jia",
"Kui",
""
],
[
"Gao",
"Shenghua",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zeng",
"Zinan",
""
],
[
"Ma",
"Yi",
""
]
] |
TITLE: PCANet: A Simple Deep Learning Baseline for Image Classification?
ABSTRACT: In this work, we propose a very simple deep learning network for image
classification which comprises only the very basic data processing components:
cascaded principal component analysis (PCA), binary hashing, and block-wise
histograms. In the proposed architecture, PCA is employed to learn multistage
filter banks. It is followed by simple binary hashing and block histograms for
indexing and pooling. This architecture is thus named a PCA network (PCANet)
and can be designed and learned extremely easily and efficiently. For
comparison and better understanding, we also introduce and study two simple
variations to the PCANet, namely the RandNet and LDANet. They share the same
topology as the PCANet, but their cascaded filters are either selected randomly or
learned from LDA. We have tested these basic networks extensively on many
benchmark visual datasets for different tasks, such as LFW for face
verification, MultiPIE, Extended Yale B, AR, FERET datasets for face
recognition, as well as MNIST for handwritten digit recognition.
Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with
state-of-the-art features, whether prefixed, highly hand-crafted, or
carefully learned (by DNNs). Even more surprisingly, it sets new records for
many classification tasks in Extended Yale B, AR, FERET datasets, and MNIST
variations. Additional experiments on other public datasets also demonstrate
the potential of the PCANet serving as a simple but highly competitive baseline
for texture classification and object recognition.
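The three processing stages the PCANet abstract names — PCA filter learning, binary hashing, and block-wise histograms — can be sketched in a few lines of NumPy. All function names, patch shapes, and parameters below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def pca_filters(patches, num_filters):
    """Learn a PCA filter bank: the leading principal components of
    mean-removed patch vectors (columns of `patches`)."""
    patches = patches - patches.mean(axis=1, keepdims=True)
    # Eigenvectors of the patch covariance, largest eigenvalues first
    cov = patches @ patches.T
    _, eigvecs = np.linalg.eigh(cov)          # ascending order
    return eigvecs[:, ::-1][:, :num_filters]  # columns are filters

def binary_hash(responses):
    """Binarize per-filter response maps and pack them into one integer
    map: H = sum_l 2^(l-1) * [response_l > 0]."""
    bits = (responses > 0).astype(np.int64)
    weights = 2 ** np.arange(bits.shape[0])
    return np.tensordot(weights, bits, axes=1)

def block_histogram(hash_map, num_filters, block_size):
    """Block-wise histograms of the hashed map form the final feature."""
    B, (H, W) = block_size, hash_map.shape
    feats = []
    for i in range(0, H - B + 1, B):
        for j in range(0, W - B + 1, B):
            block = hash_map[i:i + B, j:j + B]
            hist, _ = np.histogram(block, bins=2 ** num_filters,
                                   range=(0, 2 ** num_filters))
            feats.append(hist)
    return np.concatenate(feats)
```

A second PCANet stage would simply re-apply `pca_filters` to patches of the first stage's response maps before hashing.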
|
1407.6071
|
Pin-Yu Chen
|
Pin-Yu Chen and Alfred O. Hero
|
Deep Community Detection
|
15 pages, 13 figures, journal submission and supplementary file
(Figures 11-13), to appear in IEEE Transactions on Signal Processing
| null |
10.1109/TSP.2015.2458782
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A deep community in a graph is a connected component that can only be seen
after removal of nodes or edges from the rest of the graph. This paper
formulates the problem of detecting deep communities as multi-stage node
removal that maximizes a new centrality measure, called the local Fiedler
vector centrality (LFVC), at each stage. The LFVC is associated with the
sensitivity of algebraic connectivity to node or edge removals. We prove that a
greedy node/edge removal strategy, based on successive maximization of LFVC,
has bounded performance loss relative to the optimal, but intractable,
combinatorial batch removal strategy. Under a stochastic block model framework,
we show that the greedy LFVC strategy can extract deep communities with
probability one as the number of observations becomes large. We apply the
greedy LFVC strategy to real-world social network datasets. Compared with
conventional community detection methods we demonstrate improved ability to
identify important communities and key members in the network.
|
[
{
"version": "v1",
"created": "Tue, 22 Jul 2014 23:39:48 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Mar 2015 01:34:39 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Jun 2015 01:21:33 GMT"
},
{
"version": "v4",
"created": "Sun, 12 Jul 2015 03:00:57 GMT"
},
{
"version": "v5",
"created": "Wed, 15 Jul 2015 20:41:52 GMT"
}
] | 2015-10-28T00:00:00 |
[
[
"Chen",
"Pin-Yu",
""
],
[
"Hero",
"Alfred O.",
""
]
] |
TITLE: Deep Community Detection
ABSTRACT: A deep community in a graph is a connected component that can only be seen
after removal of nodes or edges from the rest of the graph. This paper
formulates the problem of detecting deep communities as multi-stage node
removal that maximizes a new centrality measure, called the local Fiedler
vector centrality (LFVC), at each stage. The LFVC is associated with the
sensitivity of algebraic connectivity to node or edge removals. We prove that a
greedy node/edge removal strategy, based on successive maximization of LFVC,
has bounded performance loss relative to the optimal, but intractable,
combinatorial batch removal strategy. Under a stochastic block model framework,
we show that the greedy LFVC strategy can extract deep communities with
probability one as the number of observations becomes large. We apply the
greedy LFVC strategy to real-world social network datasets. Compared with
conventional community detection methods we demonstrate improved ability to
identify important communities and key members in the network.
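The greedy strategy the abstract describes — score nodes by the sensitivity of algebraic connectivity to their removal, then remove the maximizer — can be illustrated with plain NumPy. The LFVC expression used here (sum over a node's neighbors of squared Fiedler-vector differences) is our reading of the abstract, not a formula taken from the paper:

```python
import numpy as np

def fiedler(A):
    """Second-smallest Laplacian eigenvalue (algebraic connectivity)
    and its eigenvector (the Fiedler vector), from adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)  # ascending eigenvalues
    return vals[1], vecs[:, 1]

def lfvc_scores(A):
    """Per-node LFVC (our reading): approximate sensitivity of algebraic
    connectivity to removing node i, sum_{j in N(i)} (y_i - y_j)^2
    where y is the Fiedler vector."""
    _, y = fiedler(A)
    diff = (y[:, None] - y[None, :]) ** 2
    return (A * diff).sum(axis=1)
```

On a barbell graph (two cliques joined by a bridge node), the bridge gets the top score, and removing it drops the algebraic connectivity of the remainder to zero — a "deep community" split.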
|
1408.3698
|
Salman Salamatian
|
Salman Salamatian, Amy Zhang, Flavio du Pin Calmon, Sandilya
Bhamidipati, Nadia Fawaz, Branislav Kveton, Pedro Oliveira, Nina Taft
|
Managing your Private and Public Data: Bringing down Inference Attacks
against your Privacy
| null | null |
10.1109/JSTSP.2015.2442227
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a practical methodology to protect a user's private data, when he
wishes to publicly release data that is correlated with his private data, in
the hope of getting some utility. Our approach relies on a general statistical
inference framework that captures the privacy threat under inference attacks,
given utility constraints. Under this framework, data is distorted before it is
released, according to a privacy-preserving probabilistic mapping. This mapping
is obtained by solving a convex optimization problem, which minimizes
information leakage under a distortion constraint. We address practical
challenges encountered when applying this theoretical framework to real world
data. On one hand, the design of optimal privacy-preserving mechanisms requires
knowledge of the prior distribution linking private data and data to be
released, which is often unavailable in practice. On the other hand, the
optimization may become intractable and face scalability issues when data
assumes values in large alphabets, or is high dimensional. Our work makes
three major contributions. First, we provide bounds on the impact on the
privacy-utility tradeoff of a mismatched prior. Second, we show how to reduce
the optimization size by introducing a quantization step, and how to generate
privacy mappings under quantization. Third, we evaluate our method on three
datasets, including a new dataset that we collected, showing correlations
between political convictions and TV viewing habits. We demonstrate that good
privacy properties can be achieved with limited distortion so as not to
undermine the original purpose of the publicly released data, e.g.
recommendations.
|
[
{
"version": "v1",
"created": "Sat, 16 Aug 2014 03:37:54 GMT"
}
] | 2015-10-28T00:00:00 |
[
[
"Salamatian",
"Salman",
""
],
[
"Zhang",
"Amy",
""
],
[
"Calmon",
"Flavio du Pin",
""
],
[
"Bhamidipati",
"Sandilya",
""
],
[
"Fawaz",
"Nadia",
""
],
[
"Kveton",
"Branislav",
""
],
[
"Oliveira",
"Pedro",
""
],
[
"Taft",
"Nina",
""
]
] |
TITLE: Managing your Private and Public Data: Bringing down Inference Attacks
against your Privacy
ABSTRACT: We propose a practical methodology to protect a user's private data, when he
wishes to publicly release data that is correlated with his private data, in
the hope of getting some utility. Our approach relies on a general statistical
inference framework that captures the privacy threat under inference attacks,
given utility constraints. Under this framework, data is distorted before it is
released, according to a privacy-preserving probabilistic mapping. This mapping
is obtained by solving a convex optimization problem, which minimizes
information leakage under a distortion constraint. We address practical
challenges encountered when applying this theoretical framework to real world
data. On one hand, the design of optimal privacy-preserving mechanisms requires
knowledge of the prior distribution linking private data and data to be
released, which is often unavailable in practice. On the other hand, the
optimization may become intractable and face scalability issues when data
assumes values in large alphabets, or is high dimensional. Our work makes
three major contributions. First, we provide bounds on the impact on the
privacy-utility tradeoff of a mismatched prior. Second, we show how to reduce
the optimization size by introducing a quantization step, and how to generate
privacy mappings under quantization. Third, we evaluate our method on three
datasets, including a new dataset that we collected, showing correlations
between political convictions and TV viewing habits. We demonstrate that good
privacy properties can be achieved with limited distortion so as not to
undermine the original purpose of the publicly released data, e.g.
recommendations.
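The leakage-versus-distortion tradeoff behind the privacy mapping can be seen on a toy binary example: restrict the mapping to symmetric bit flips and pick the flip rate that minimizes mutual information within the distortion budget. This grid search over one parameter stands in for the paper's convex program; the distributions and names are our own:

```python
import numpy as np

def mutual_info(p_joint):
    """I(S;Y) in bits from a joint distribution matrix p[s, y]."""
    ps = p_joint.sum(axis=1, keepdims=True)
    py = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask]
                  * np.log2(p_joint[mask] / (ps @ py)[mask])).sum())

def best_flip(p_sx, dist_budget, grid=101):
    """Among mappings Y = X xor Bernoulli(eps) with expected distortion
    eps <= budget, return the (eps, leakage) minimizing I(S;Y)."""
    best = None
    for eps in np.linspace(0.0, dist_budget, grid):
        F = np.array([[1 - eps, eps], [eps, 1 - eps]])  # p(y|x)
        leak = mutual_info(p_sx @ F)                    # joint p(s, y)
        if best is None or leak < best[1]:
            best = (eps, leak)
    return best
```

Since leakage decreases monotonically in the flip rate (for rates below 1/2), the optimum here always spends the full distortion budget.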
|
1410.0226
|
Xiaohao Cai
|
Juheon Lee, Xiaohao Cai, Carola-Bibiane Schonlieb, David Coomes
|
Non-parametric Image Registration of Airborne LiDAR, Hyperspectral and
Photographic Imagery of Forests
|
11 pages, 5 figures
| null |
10.1109/TGRS.2015.2431692
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is much current interest in using multi-sensor airborne remote sensing
to monitor the structure and biodiversity of forests. This paper addresses the
application of non-parametric image registration techniques to precisely align
images obtained from multimodal imaging, which is critical for the successful
identification of individual trees using object recognition approaches.
Non-parametric image registration, in particular the technique of optimizing
one objective function containing data fidelity and regularization terms,
provides flexible algorithms for image registration. Using a survey of
woodlands in southern Spain as an example, we show that non-parametric image
registration can be successful at fusing datasets when there is little prior
knowledge about how the datasets are interrelated (i.e. in the absence of
ground control points). The validity of non-parametric registration methods in
airborne remote sensing is demonstrated by a series of experiments. Precise
data fusion is a prerequisite to accurate recognition of objects within
airborne imagery, so non-parametric image registration could make a valuable
contribution to the analysis pipeline.
|
[
{
"version": "v1",
"created": "Mon, 28 Jul 2014 11:21:57 GMT"
}
] | 2015-10-28T00:00:00 |
[
[
"Lee",
"Juheon",
""
],
[
"Cai",
"Xiaohao",
""
],
[
"Schonlieb",
"Carola-Bibiane",
""
],
[
"Coomes",
"David",
""
]
] |
TITLE: Non-parametric Image Registration of Airborne LiDAR, Hyperspectral and
Photographic Imagery of Forests
ABSTRACT: There is much current interest in using multi-sensor airborne remote sensing
to monitor the structure and biodiversity of forests. This paper addresses the
application of non-parametric image registration techniques to precisely align
images obtained from multimodal imaging, which is critical for the successful
identification of individual trees using object recognition approaches.
Non-parametric image registration, in particular the technique of optimizing
one objective function containing data fidelity and regularization terms,
provides flexible algorithms for image registration. Using a survey of
woodlands in southern Spain as an example, we show that non-parametric image
registration can be successful at fusing datasets when there is little prior
knowledge about how the datasets are interrelated (i.e. in the absence of
ground control points). The validity of non-parametric registration methods in
airborne remote sensing is demonstrated by a series of experiments. Precise
data fusion is a prerequisite to accurate recognition of objects within
airborne imagery, so non-parametric image registration could make a valuable
contribution to the analysis pipeline.
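The registration objective the abstract describes — one energy combining a data-fidelity term with a regularization term over a free-form displacement field — can be sketched in 1-D with gradient descent. This is a minimal illustration of the variational idea, not the authors' algorithm; parameters and the squared-gradient regularizer are assumptions:

```python
import numpy as np

def register_1d(fixed, moving, lam=0.1, steps=2000, lr=0.25):
    """Toy non-parametric registration: find a displacement field u
    minimizing E(u) = sum_x (moving(x + u) - fixed(x))^2
                      + lam * sum_x (u(x+1) - u(x))^2
    by gradient descent."""
    x = np.arange(len(fixed), dtype=float)
    u = np.zeros_like(x)
    for _ in range(steps):
        warped = np.interp(x + u, x, moving)
        grad_m = np.interp(x + u, x, np.gradient(moving))
        # Data-term gradient, plus a discrete Laplacian from the
        # smoothness term
        g_data = 2 * (warped - fixed) * grad_m
        lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u
        u -= lr * (g_data - 2 * lam * lap)
    return u
```

With two Gaussian bumps offset by a few pixels, the recovered field warps the moving image onto the fixed one and sharply reduces the residual, which is the alignment precision the paper needs before per-tree object recognition.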
|