id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1608.05842 | Jason Yu | Jason J. Yu, Adam W. Harley and Konstantinos G. Derpanis | Back to Basics: Unsupervised Learning of Optical Flow via Brightness
Constancy and Motion Smoothness | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, convolutional networks (convnets) have proven useful for predicting
optical flow. Much of this success is predicated on the availability of large
datasets that require expensive and involved data acquisition and laborious
labeling. To bypass these challenges, we propose an unsupervised approach
(i.e., without leveraging groundtruth flow) to train a convnet end-to-end for
predicting optical flow between two images. We use a loss function that
combines a data term that measures photometric constancy over time with a
spatial term that models the expected variation of flow across the image.
Together these losses form a proxy measure for losses based on the groundtruth
flow. Empirically, we show that a strong convnet baseline trained with the
proposed unsupervised approach outperforms the same network trained with
supervision on the KITTI dataset.
| [
{
"version": "v1",
"created": "Sat, 20 Aug 2016 15:25:31 GMT"
}
] | 2016-08-23T00:00:00 | [
[
"Yu",
"Jason J.",
""
],
[
"Harley",
"Adam W.",
""
],
[
"Derpanis",
"Konstantinos G.",
""
]
] | TITLE: Back to Basics: Unsupervised Learning of Optical Flow via Brightness
Constancy and Motion Smoothness
ABSTRACT: Recently, convolutional networks (convnets) have proven useful for predicting
optical flow. Much of this success is predicated on the availability of large
datasets that require expensive and involved data acquisition and laborious
labeling. To bypass these challenges, we propose an unsupervised approach
(i.e., without leveraging groundtruth flow) to train a convnet end-to-end for
predicting optical flow between two images. We use a loss function that
combines a data term that measures photometric constancy over time with a
spatial term that models the expected variation of flow across the image.
Together these losses form a proxy measure for losses based on the groundtruth
flow. Empirically, we show that a strong convnet baseline trained with the
proposed unsupervised approach outperforms the same network trained with
supervision on the KITTI dataset.
| no_new_dataset | 0.950041 |
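
The proxy loss in the abstract above pairs a brightness-constancy data term with a flow-smoothness term, so no groundtruth flow is needed. The NumPy sketch below illustrates that combination under simplifying assumptions (plain L1 penalties, a bilinear backward warp, and an illustrative `smooth_weight`); the paper's exact penalty functions and weighting are not reproduced here.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp a grayscale image (H, W) by flow (H, W, 2) via bilinear sampling."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    x_src = np.clip(xs + flow[..., 0], 0, w - 1)
    y_src = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(x_src).astype(int), np.floor(y_src).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = x_src - x0, y_src - y0
    return ((1 - wx) * (1 - wy) * image[y0, x0] + wx * (1 - wy) * image[y0, x1] +
            (1 - wx) * wy * image[y1, x0] + wx * wy * image[y1, x1])

def unsupervised_flow_loss(im1, im2, flow, smooth_weight=0.5):
    """Photometric-constancy data term plus a smoothness term on the flow field."""
    photometric = np.abs(warp(im2, flow) - im1).mean()   # brightness constancy
    dx = np.abs(np.diff(flow, axis=1)).mean()            # horizontal flow variation
    dy = np.abs(np.diff(flow, axis=0)).mean()            # vertical flow variation
    return photometric + smooth_weight * (dx + dy)
```

Minimizing such a loss over a convnet's predicted flow is what lets training proceed without labels: the data term rewards warps that explain the second frame, and the smoothness term regularizes where brightness constancy is uninformative.
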
1608.06019 | Konstantinos Bousmalis | Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip
Krishnan, Dumitru Erhan | Domain Separation Networks | This work will be presented at NIPS 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cost of large scale data collection and annotation often makes the
application of machine learning algorithms to new tasks or datasets
prohibitively expensive. One approach circumventing this cost is training
models on synthetic data where annotations are provided automatically. Despite
their appeal, such models often fail to generalize from synthetic to real
images, necessitating domain adaptation algorithms to manipulate these models
before they can be successfully applied. Existing approaches focus either on
mapping representations from one domain to the other, or on learning to extract
features that are invariant to the domain from which they were extracted.
However, by focusing only on creating a mapping or shared representation
between the two domains, they ignore the individual characteristics of each
domain. We suggest that explicitly modeling what is unique to each domain can
improve a model's ability to extract domain-invariant features. Inspired by
work on private-shared component analysis, we explicitly learn to extract image
representations that are partitioned into two subspaces: one component which is
private to each domain and one which is shared across domains. Our model is
trained not only to perform the task we care about in the source domain, but
also to use the partitioned representation to reconstruct the images from both
domains. Our novel architecture results in a model that outperforms the
state-of-the-art on a range of unsupervised domain adaptation scenarios and
additionally produces visualizations of the private and shared representations
enabling interpretation of the domain adaptation process.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2016 00:12:27 GMT"
}
] | 2016-08-23T00:00:00 | [
[
"Bousmalis",
"Konstantinos",
""
],
[
"Trigeorgis",
"George",
""
],
[
"Silberman",
"Nathan",
""
],
[
"Krishnan",
"Dilip",
""
],
[
"Erhan",
"Dumitru",
""
]
] | TITLE: Domain Separation Networks
ABSTRACT: The cost of large scale data collection and annotation often makes the
application of machine learning algorithms to new tasks or datasets
prohibitively expensive. One approach circumventing this cost is training
models on synthetic data where annotations are provided automatically. Despite
their appeal, such models often fail to generalize from synthetic to real
images, necessitating domain adaptation algorithms to manipulate these models
before they can be successfully applied. Existing approaches focus either on
mapping representations from one domain to the other, or on learning to extract
features that are invariant to the domain from which they were extracted.
However, by focusing only on creating a mapping or shared representation
between the two domains, they ignore the individual characteristics of each
domain. We suggest that explicitly modeling what is unique to each domain can
improve a model's ability to extract domain-invariant features. Inspired by
work on private-shared component analysis, we explicitly learn to extract image
representations that are partitioned into two subspaces: one component which is
private to each domain and one which is shared across domains. Our model is
trained not only to perform the task we care about in the source domain, but
also to use the partitioned representation to reconstruct the images from both
domains. Our novel architecture results in a model that outperforms the
state-of-the-art on a range of unsupervised domain adaptation scenarios and
additionally produces visualizations of the private and shared representations
enabling interpretation of the domain adaptation process.
| no_new_dataset | 0.946101 |
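
Two of the penalties sketched in the abstract above are easy to state directly: a difference term pushing each domain's private and shared representations toward orthogonality, and a similarity term pulling the two domains' shared representations together. The snippet below is a minimal NumPy illustration; the mean-feature distance is only a stand-in for the paper's similarity losses, and the task and reconstruction terms are omitted.

```python
import numpy as np

def dsn_style_penalties(shared_src, private_src, shared_tgt, private_tgt):
    """Difference and similarity penalties on (batch, dim) representation matrices."""
    def soft_orthogonality(a, b):
        # Squared Frobenius norm of A^T B: zero when the two subspaces are orthogonal.
        return np.sum((a.T @ b) ** 2)
    difference = (soft_orthogonality(shared_src, private_src) +
                  soft_orthogonality(shared_tgt, private_tgt))
    # Stand-in similarity loss: distance between mean shared features per domain.
    similarity = np.sum((shared_src.mean(axis=0) - shared_tgt.mean(axis=0)) ** 2)
    return difference, similarity
```
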
1608.06048 | Ajinkya More | Ajinkya More | Survey of resampling techniques for improving classification performance
in unbalanced datasets | null | null | null | null | stat.AP cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of classification problems need to deal with data imbalance between
classes. Often it is desired to have a high recall on the minority class while
maintaining a high precision on the majority class. In this paper, we review a
number of resampling techniques proposed in the literature to handle unbalanced
datasets and study their effect on classification performance.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2016 04:27:28 GMT"
}
] | 2016-08-23T00:00:00 | [
[
"More",
"Ajinkya",
""
]
] | TITLE: Survey of resampling techniques for improving classification performance
in unbalanced datasets
ABSTRACT: A number of classification problems need to deal with data imbalance between
classes. Often it is desired to have a high recall on the minority class while
maintaining a high precision on the majority class. In this paper, we review a
number of resampling techniques proposed in the literature to handle unbalanced
datasets and study their effect on classification performance.
| no_new_dataset | 0.951142 |
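
The two simplest members of the resampling family surveyed above are random oversampling of the minority class and random undersampling of the majority class; a generic NumPy sketch (not tied to any specific method from the survey) is:

```python
import numpy as np

def random_oversample(X, y, minority_label, rng=None):
    """Duplicate random minority rows until both classes have equal counts.

    Assumes the minority class really is the smaller one."""
    rng = rng or np.random.default_rng(0)
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    extra = rng.choice(minority, size=len(majority) - len(minority), replace=True)
    idx = np.concatenate([majority, minority, extra])
    return X[idx], y[idx]

def random_undersample(X, y, minority_label, rng=None):
    """Drop random majority rows until both classes have equal counts."""
    rng = rng or np.random.default_rng(0)
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    keep = rng.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([keep, minority])
    return X[idx], y[idx]
```

Oversampling preserves all majority information at the risk of overfitting duplicated minority rows; undersampling does the opposite, which is the recall/precision trade-off the abstract alludes to.
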
1608.06079 | Spiros Denaxas | Christiana McMahon and Spiros Denaxas | A novel framework for assessing metadata quality in epidemiological and
public health research settings | American Medical Informatics Association (AMIA) Joint Summits on
Translational Science 2015 | null | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metadata are critical in epidemiological and public health research. However,
a lack of biomedical metadata quality frameworks and limited awareness of the
implications of poor-quality metadata render data analyses problematic. In
this study, we created and evaluated a novel framework to assess metadata
quality of epidemiological and public health research datasets. We performed a
literature review and surveyed stakeholders to enhance our understanding of
biomedical metadata quality assessment. The review identified 11 studies and
nine quality dimensions, none of which were specifically aimed at biomedical
metadata. 96 individuals completed the survey; of those who submitted data,
most only assessed metadata quality sometimes, and eight did not at all. Our
framework has four sections: a) general information; b) tools and technologies;
c) usability; and d) management and curation. We evaluated the framework using
three test cases and sought expert feedback. The framework can assess
biomedical metadata quality systematically and robustly.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2016 08:27:24 GMT"
}
] | 2016-08-23T00:00:00 | [
[
"McMahon",
"Christiana",
""
],
[
"Denaxas",
"Spiros",
""
]
] | TITLE: A novel framework for assessing metadata quality in epidemiological and
public health research settings
ABSTRACT: Metadata are critical in epidemiological and public health research. However,
a lack of biomedical metadata quality frameworks and limited awareness of the
implications of poor-quality metadata render data analyses problematic. In
this study, we created and evaluated a novel framework to assess metadata
quality of epidemiological and public health research datasets. We performed a
literature review and surveyed stakeholders to enhance our understanding of
biomedical metadata quality assessment. The review identified 11 studies and
nine quality dimensions, none of which were specifically aimed at biomedical
metadata. 96 individuals completed the survey; of those who submitted data,
most only assessed metadata quality sometimes, and eight did not at all. Our
framework has four sections: a) general information; b) tools and technologies;
c) usability; and d) management and curation. We evaluated the framework using
three test cases and sought expert feedback. The framework can assess
biomedical metadata quality systematically and robustly.
| no_new_dataset | 0.954478 |
1608.06154 | Pankaj Malhotra Mr. | Pankaj Malhotra, Vishnu TV, Anusha Ramakrishnan, Gaurangi Anand,
Lovekesh Vig, Puneet Agarwal, Gautam Shroff | Multi-Sensor Prognostics using an Unsupervised Health Index based on
LSTM Encoder-Decoder | Presented at 1st ACM SIGKDD Workshop on Machine Learning for
Prognostics and Health Management, San Francisco, CA, USA, 2016. 10 pages | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many approaches for estimation of Remaining Useful Life (RUL) of a machine,
using its operational sensor data, make assumptions about how a system degrades
or a fault evolves, e.g., exponential degradation. However, in many domains
degradation may not follow a pattern. We propose a Long Short Term Memory based
Encoder-Decoder (LSTM-ED) scheme to obtain an unsupervised health index (HI)
for a system using multi-sensor time-series data. LSTM-ED is trained to
reconstruct the time-series corresponding to the healthy state of a system. The
reconstruction error is used to compute HI which is then used for RUL
estimation. We evaluate our approach on publicly available Turbofan Engine and
Milling Machine datasets. We also present results on a real-world industry
dataset from a pulverizer mill where we find significant correlation between
LSTM-ED based HI and maintenance costs.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2016 12:59:31 GMT"
}
] | 2016-08-23T00:00:00 | [
[
"Malhotra",
"Pankaj",
""
],
[
"TV",
"Vishnu",
""
],
[
"Ramakrishnan",
"Anusha",
""
],
[
"Anand",
"Gaurangi",
""
],
[
"Vig",
"Lovekesh",
""
],
[
"Agarwal",
"Puneet",
""
],
[
"Shroff",
"Gautam",
""
]
] | TITLE: Multi-Sensor Prognostics using an Unsupervised Health Index based on
LSTM Encoder-Decoder
ABSTRACT: Many approaches for estimation of Remaining Useful Life (RUL) of a machine,
using its operational sensor data, make assumptions about how a system degrades
or a fault evolves, e.g., exponential degradation. However, in many domains
degradation may not follow a pattern. We propose a Long Short Term Memory based
Encoder-Decoder (LSTM-ED) scheme to obtain an unsupervised health index (HI)
for a system using multi-sensor time-series data. LSTM-ED is trained to
reconstruct the time-series corresponding to the healthy state of a system. The
reconstruction error is used to compute HI which is then used for RUL
estimation. We evaluate our approach on publicly available Turbofan Engine and
Milling Machine datasets. We also present results on a real-world industry
dataset from a pulverizer mill where we find significant correlation between
LSTM-ED based HI and maintenance costs.
| no_new_dataset | 0.946794 |
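
The health index above is built from reconstruction error rather than from the LSTM internals, so it can be sketched with the trained encoder-decoder treated as a black box. In the hypothetical helper below, `reconstruct` stands for that model, and `err_lo`/`err_hi` are error levels observed on healthy and near-failure data:

```python
import numpy as np

def health_index(window, reconstruct, err_lo, err_hi):
    """Map reconstruction error of a (T, n_sensors) window to HI in [0, 1].

    HI = 1 means healthy (small error); HI = 0 means degraded (large error).
    `reconstruct` is a stand-in for a trained seq2seq model such as LSTM-ED,
    returning an array of the same shape as `window`.
    """
    err = np.mean((window - reconstruct(window)) ** 2)
    hi = (err_hi - err) / (err_hi - err_lo)
    return float(np.clip(hi, 0.0, 1.0))
```

RUL can then be estimated by matching a test instance's HI curve against the HI curves of run-to-failure training instances.
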
1608.06192 | Alban Desmaison | Alban Desmaison, Rudy Bunel, Pushmeet Kohli, Philip H.S. Torr and M.
Pawan Kumar | Efficient Continuous Relaxations for Dense CRF | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Dense conditional random fields (CRF) with Gaussian pairwise potentials have
emerged as a popular framework for several computer vision applications such as
stereo correspondence and semantic segmentation. By modeling long-range
interactions, dense CRFs provide a more detailed labelling compared to their
sparse counterparts. Variational inference in these dense models is performed
using a filtering-based mean-field algorithm in order to obtain a
fully-factorized distribution minimising the Kullback-Leibler divergence to the
true distribution. In contrast to the continuous relaxation-based energy
minimisation algorithms used for sparse CRFs, the mean-field algorithm fails to
provide strong theoretical guarantees on the quality of its solutions. To
address this deficiency, we show that it is possible to use the same filtering
approach to speed up the optimisation of several continuous relaxations.
Specifically, we solve a convex quadratic programming (QP) relaxation using the
efficient Frank-Wolfe algorithm. This also allows us to solve
difference-of-convex relaxations via the iterative concave-convex procedure
where each iteration requires solving a convex QP. Finally, we develop a novel
divide-and-conquer method to compute the subgradients of a linear programming
relaxation that provides the best theoretical bounds for energy minimisation.
We demonstrate the advantage of continuous relaxations over the widely used
mean-field algorithm on publicly available datasets.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2016 15:24:25 GMT"
}
] | 2016-08-23T00:00:00 | [
[
"Desmaison",
"Alban",
""
],
[
"Bunel",
"Rudy",
""
],
[
"Kohli",
"Pushmeet",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Kumar",
"M. Pawan",
""
]
] | TITLE: Efficient Continuous Relaxations for Dense CRF
ABSTRACT: Dense conditional random fields (CRF) with Gaussian pairwise potentials have
emerged as a popular framework for several computer vision applications such as
stereo correspondence and semantic segmentation. By modeling long-range
interactions, dense CRFs provide a more detailed labelling compared to their
sparse counterparts. Variational inference in these dense models is performed
using a filtering-based mean-field algorithm in order to obtain a
fully-factorized distribution minimising the Kullback-Leibler divergence to the
true distribution. In contrast to the continuous relaxation-based energy
minimisation algorithms used for sparse CRFs, the mean-field algorithm fails to
provide strong theoretical guarantees on the quality of its solutions. To
address this deficiency, we show that it is possible to use the same filtering
approach to speed up the optimisation of several continuous relaxations.
Specifically, we solve a convex quadratic programming (QP) relaxation using the
efficient Frank-Wolfe algorithm. This also allows us to solve
difference-of-convex relaxations via the iterative concave-convex procedure
where each iteration requires solving a convex QP. Finally, we develop a novel
divide-and-conquer method to compute the subgradients of a linear programming
relaxation that provides the best theoretical bounds for energy minimisation.
We demonstrate the advantage of continuous relaxations over the widely used
mean-field algorithm on publicly available datasets.
| no_new_dataset | 0.945751 |
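
The Frank-Wolfe algorithm mentioned above needs only a linear-minimization oracle over the feasible set; for a QP over a probability simplex that oracle is an argmin over coordinates. The sketch below is the textbook method on a toy problem, not the paper's filtering-accelerated version:

```python
import numpy as np

def frank_wolfe_simplex_qp(Q, c, iters=200):
    """Minimize 0.5 x^T Q x + c^T x over the probability simplex."""
    n = len(c)
    x = np.full(n, 1.0 / n)                 # feasible starting point
    for k in range(iters):
        grad = Q @ x + c
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0            # vertex minimizing the linearization
        gamma = 2.0 / (k + 2.0)             # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

# Example: a small convex QP (Q positive definite).
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, 0.0])
print(frank_wolfe_simplex_qp(Q, c))
```
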
1608.06197 | Srinivas S S Kruthiventi | Lokesh Boominathan, Srinivas S S Kruthiventi and R. Venkatesh Babu | CrowdNet: A Deep Convolutional Network for Dense Crowd Counting | Accepted at ACM Multimedia (MM) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our work proposes a novel deep learning framework for estimating crowd
density from static images of highly dense crowds. We use a combination of deep
and shallow, fully convolutional networks to predict the density map for a
given crowd image. Such a combination is used for effectively capturing both
the high-level semantic information (face/body detectors) and the low-level
features (blob detectors), that are necessary for crowd counting under large
scale variations. As most crowd datasets have limited training samples (<100
images) and deep learning based approaches require large amounts of training
data, we perform multi-scale data augmentation. Augmenting the training samples
in such a manner helps in guiding the CNN to learn scale invariant
representations. Our method is tested on the challenging UCF_CC_50 dataset, and
shown to outperform state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2016 15:43:29 GMT"
}
] | 2016-08-23T00:00:00 | [
[
"Boominathan",
"Lokesh",
""
],
[
"Kruthiventi",
"Srinivas S S",
""
],
[
"Babu",
"R. Venkatesh",
""
]
] | TITLE: CrowdNet: A Deep Convolutional Network for Dense Crowd Counting
ABSTRACT: Our work proposes a novel deep learning framework for estimating crowd
density from static images of highly dense crowds. We use a combination of deep
and shallow, fully convolutional networks to predict the density map for a
given crowd image. Such a combination is used for effectively capturing both
the high-level semantic information (face/body detectors) and the low-level
features (blob detectors), that are necessary for crowd counting under large
scale variations. As most crowd datasets have limited training samples (<100
images) and deep learning based approaches require large amounts of training
data, we perform multi-scale data augmentation. Augmenting the training samples
in such a manner helps in guiding the CNN to learn scale invariant
representations. Our method is tested on the challenging UCF_CC_50 dataset, and
shown to outperform state-of-the-art methods.
| no_new_dataset | 0.953579 |
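
Multi-scale augmentation of the kind described above can be sketched as sampling crops at several scales and resizing each to the network's fixed input size; the scales, output size, and nearest-neighbour resize here are illustrative choices rather than the paper's settings:

```python
import numpy as np

def multiscale_crops(image, out=224, scales=(0.5, 0.75, 1.0), rng=None):
    """Yield fixed-size crops sampled at several scales (nearest-neighbour resize)."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    for s in scales:
        ch, cw = int(h * s), int(w * s)
        y = rng.integers(0, h - ch + 1)     # random top-left corner at this scale
        x = rng.integers(0, w - cw + 1)
        crop = image[y:y + ch, x:x + cw]
        yi = (np.arange(out) * ch / out).astype(int)   # nearest-neighbour row indices
        xi = (np.arange(out) * cw / out).astype(int)   # nearest-neighbour column indices
        yield crop[np.ix_(yi, xi)]
```

Sampling the same scene at several scales is what exposes the network to the scale variation the abstract mentions, even when only a few training images are available.
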
1608.06203 | Sewoong Oh | Ashish Khetan, Sewoong Oh | Computational and Statistical Tradeoffs in Learning to Rank | 30 pages 5 figures | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For massive and heterogeneous modern datasets, it is of fundamental interest
to provide guarantees on the accuracy of estimation when computational
resources are limited. In the application of learning to rank, we provide a
hierarchy of rank-breaking mechanisms ordered by the complexity of the
sketch of the data thus generated. This allows the number of data points collected
to be gracefully traded off against computational resources available, while
guaranteeing the desired level of accuracy. Theoretical guarantees on the
proposed generalized rank-breaking implicitly provide such trade-offs, which
can be explicitly characterized under certain canonical scenarios on the
structure of the data.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2016 15:58:31 GMT"
}
] | 2016-08-23T00:00:00 | [
[
"Khetan",
"Ashish",
""
],
[
"Oh",
"Sewoong",
""
]
] | TITLE: Computational and Statistical Tradeoffs in Learning to Rank
ABSTRACT: For massive and heterogeneous modern datasets, it is of fundamental interest
to provide guarantees on the accuracy of estimation when computational
resources are limited. In the application of learning to rank, we provide a
hierarchy of rank-breaking mechanisms ordered by the complexity of the
sketch of the data thus generated. This allows the number of data points collected
to be gracefully traded off against computational resources available, while
guaranteeing the desired level of accuracy. Theoretical guarantees on the
proposed generalized rank-breaking implicitly provide such trade-offs, which
can be explicitly characterized under certain canonical scenarios on the
structure of the data.
| no_new_dataset | 0.946349 |
1608.06253 | Christina Lioma Assoc. Prof | Brian Brost and Yevgeny Seldin and Ingemar J. Cox and Christina Lioma | Multi-Dueling Bandits and Their Application to Online Ranker Evaluation | null | null | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | New ranking algorithms are continually being developed and refined,
necessitating the development of efficient methods for evaluating these
rankers. Online ranker evaluation focuses on the challenge of efficiently
determining, from implicit user feedback, which ranker out of a finite set of
rankers is the best. Online ranker evaluation can be modeled by dueling
bandits, a mathematical model for online learning under limited feedback from
pairwise comparisons. Comparisons of pairs of rankers are performed by
interleaving their result sets and examining which documents users click on.
The dueling bandits model addresses the key issue of which pair of rankers to
compare at each iteration, thereby providing a solution to the
exploration-exploitation trade-off. Recently, methods for simultaneously
comparing more than two rankers have been developed. However, the question of
which rankers to compare at each iteration was left open. We address this
question by proposing a generalization of the dueling bandits model that uses
simultaneous comparisons of an unrestricted number of rankers. We evaluate our
algorithm on synthetic data and several standard large-scale online ranker
evaluation datasets. Our experimental results show that the algorithm yields
orders of magnitude improvement in performance compared to state-of-the-art
dueling bandit algorithms.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2016 18:20:18 GMT"
}
] | 2016-08-23T00:00:00 | [
[
"Brost",
"Brian",
""
],
[
"Seldin",
"Yevgeny",
""
],
[
"Cox",
"Ingemar J.",
""
],
[
"Lioma",
"Christina",
""
]
] | TITLE: Multi-Dueling Bandits and Their Application to Online Ranker Evaluation
ABSTRACT: New ranking algorithms are continually being developed and refined,
necessitating the development of efficient methods for evaluating these
rankers. Online ranker evaluation focuses on the challenge of efficiently
determining, from implicit user feedback, which ranker out of a finite set of
rankers is the best. Online ranker evaluation can be modeled by dueling
bandits, a mathematical model for online learning under limited feedback from
pairwise comparisons. Comparisons of pairs of rankers are performed by
interleaving their result sets and examining which documents users click on.
The dueling bandits model addresses the key issue of which pair of rankers to
compare at each iteration, thereby providing a solution to the
exploration-exploitation trade-off. Recently, methods for simultaneously
comparing more than two rankers have been developed. However, the question of
which rankers to compare at each iteration was left open. We address this
question by proposing a generalization of the dueling bandits model that uses
simultaneous comparisons of an unrestricted number of rankers. We evaluate our
algorithm on synthetic data and several standard large-scale online ranker
evaluation datasets. Our experimental results show that the algorithm yields
orders of magnitude improvement in performance compared to state-of-the-art
dueling bandit algorithms.
| no_new_dataset | 0.949106 |
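
As a toy illustration of choosing *which* rankers to compare, one generic heuristic is to interleave the subset of rankers that are still plausibly best under confidence bounds on their aggregate pairwise win-rates. This UCB-style sketch is for exposition only and is not the algorithm proposed in the paper:

```python
import numpy as np

def plausible_best(wins, plays, t, alpha=2.0):
    """Indices of rankers whose UCB on mean pairwise win-rate reaches the best LCB."""
    rate = wins / np.maximum(plays, 1)
    bonus = np.sqrt(alpha * np.log(t + 1) / np.maximum(plays, 1))
    ucb, lcb = rate + bonus, rate - bonus
    return np.flatnonzero(ucb >= lcb.max())

# wins[i] / plays[i]: aggregated pairwise outcomes for ranker i so far.
wins = np.array([260.0, 230.0, 60.0, 55.0])
plays = np.array([400.0, 400.0, 300.0, 300.0])
print(plausible_best(wins, plays, t=1000))   # [0 1]: the subset to interleave next
```
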
1509.02314 | Shenjian Zhao | Shenjian Zhao, Cong Xie, Zhihua Zhang | A Scalable and Extensible Framework for Superposition-Structured Models | null | AAAI 2016: 2372-2378 | null | null | cs.NA math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many learning tasks, structural models usually lead to better
interpretability and higher generalization performance. In recent years,
however, simple structural models such as the lasso frequently prove to be
insufficient. Accordingly, there has been a lot of work on
"superposition-structured" models where multiple structural constraints are
imposed. To efficiently solve these "superposition-structured" statistical
models, we develop a framework based on a proximal Newton-type method.
Employing the smoothed conic dual approach with the LBFGS updating formula, we
propose a scalable and extensible proximal quasi-Newton (SEP-QN) framework.
Empirical analysis on various datasets shows that our framework is potentially
powerful, and achieves a super-linear convergence rate for optimizing some
popular "superposition-structured" statistical models such as the fused sparse
group lasso.
| [
{
"version": "v1",
"created": "Tue, 8 Sep 2015 10:33:27 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Mar 2016 04:29:43 GMT"
}
] | 2016-08-22T00:00:00 | [
[
"Zhao",
"Shenjian",
""
],
[
"Xie",
"Cong",
""
],
[
"Zhang",
"Zhihua",
""
]
] | TITLE: A Scalable and Extensible Framework for Superposition-Structured Models
ABSTRACT: In many learning tasks, structural models usually lead to better
interpretability and higher generalization performance. In recent years,
however, the simple structural models such as lasso are frequently proved to be
insufficient. Accordingly, there has been a lot of work on
"superposition-structured" models where multiple structural constraints are
imposed. To efficiently solve these "superposition-structured" statistical
models, we develop a framework based on a proximal Newton-type method.
Employing the smoothed conic dual approach with the LBFGS updating formula, we
propose a scalable and extensible proximal quasi-Newton (SEP-QN) framework.
Empirical analysis on various datasets shows that our framework is potentially
powerful, and achieves a super-linear convergence rate for optimizing some
popular "superposition-structured" statistical models such as the fused sparse
group lasso.
| no_new_dataset | 0.94743 |
1511.04512 | Ziming Zhang | Ziming Zhang and Venkatesh Saligrama | Zero-Shot Learning via Joint Latent Similarity Embedding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot recognition (ZSR) deals with the problem of predicting class labels
for target domain instances based on source domain side information (e.g.
attributes) of unseen classes. We formulate ZSR as a binary prediction problem.
Our resulting classifier is class-independent. It takes an arbitrary pair of
source and target domain instances as input and predicts whether or not they
come from the same class, i.e. whether there is a match. We model the posterior
probability of a match since it is a sufficient statistic and propose a latent
probabilistic model in this context. We develop a joint discriminative learning
framework based on dictionary learning to jointly learn the parameters of our
model for both domains, which ultimately leads to our class-independent
classifier. Many of the existing embedding methods can be viewed as special
cases of our probabilistic model. On ZSR our method shows 4.90\% improvement
over the state-of-the-art in accuracy averaged across four benchmark datasets.
We also adapt the ZSR method for zero-shot retrieval and show a corresponding
22.45\% improvement in mean average precision (mAP).
| [
{
"version": "v1",
"created": "Sat, 14 Nov 2015 05:53:30 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2016 22:14:15 GMT"
},
{
"version": "v3",
"created": "Wed, 17 Aug 2016 16:29:51 GMT"
}
] | 2016-08-22T00:00:00 | [
[
"Zhang",
"Ziming",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: Zero-Shot Learning via Joint Latent Similarity Embedding
ABSTRACT: Zero-shot recognition (ZSR) deals with the problem of predicting class labels
for target domain instances based on source domain side information (e.g.
attributes) of unseen classes. We formulate ZSR as a binary prediction problem.
Our resulting classifier is class-independent. It takes an arbitrary pair of
source and target domain instances as input and predicts whether or not they
come from the same class, i.e. whether there is a match. We model the posterior
probability of a match since it is a sufficient statistic and propose a latent
probabilistic model in this context. We develop a joint discriminative learning
framework based on dictionary learning to jointly learn the parameters of our
model for both domains, which ultimately leads to our class-independent
classifier. Many of the existing embedding methods can be viewed as special
cases of our probabilistic model. On ZSR our method shows 4.90\% improvement
over the state-of-the-art in accuracy averaged across four benchmark datasets.
We also adapt the ZSR method for zero-shot retrieval and show a corresponding
22.45\% improvement in mean average precision (mAP).
| no_new_dataset | 0.947381 |
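
The class-independent match formulation above can be caricatured with a score on (source, target) pairs; here a simple bilinear logistic score is an illustrative stand-in for the paper's learned latent probabilistic model:

```python
import numpy as np

def match_probability(a, x, W):
    """P(match) for attribute vector `a` and image feature `x` via a bilinear score."""
    return 1.0 / (1.0 + np.exp(-(a @ W @ x)))

def zero_shot_predict(x, class_attributes, W):
    """Assign the unseen class whose side information matches the instance best."""
    scores = [match_probability(a, x, W) for a in class_attributes]
    return int(np.argmax(scores))
```

Because the scorer takes an arbitrary (source, target) pair, it never needs training examples of the unseen classes themselves, which is the point of the binary-prediction view.
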
1601.06759 | Aäron van den Oord | Aaron van den Oord, Nal Kalchbrenner, Koray Kavukcuoglu | Pixel Recurrent Neural Networks | null | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent.
| [
{
"version": "v1",
"created": "Mon, 25 Jan 2016 20:34:24 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Feb 2016 15:32:16 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Aug 2016 14:10:16 GMT"
}
] | 2016-08-22T00:00:00 | [
[
"Oord",
"Aaron van den",
""
],
[
"Kalchbrenner",
"Nal",
""
],
[
"Kavukcuoglu",
"Koray",
""
]
] | TITLE: Pixel Recurrent Neural Networks
ABSTRACT: Modeling the distribution of natural images is a landmark problem in
unsupervised learning. This task requires an image model that is at once
expressive, tractable and scalable. We present a deep neural network that
sequentially predicts the pixels in an image along the two spatial dimensions.
Our method models the discrete probability of the raw pixel values and encodes
the complete set of dependencies in the image. Architectural novelties include
fast two-dimensional recurrent layers and an effective use of residual
connections in deep recurrent networks. We achieve log-likelihood scores on
natural images that are considerably better than the previous state of the art.
Our main results also provide benchmarks on the diverse ImageNet dataset.
Samples generated from the model appear crisp, varied and globally coherent.
| no_new_dataset | 0.948106 |
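
The sequential factorization at the heart of the model above writes the image likelihood as a product of per-pixel conditionals in raster-scan order. A sketch of evaluating that log-likelihood, with `cond_dist` as a stand-in for the trained network:

```python
import numpy as np

def raster_log_likelihood(image, cond_dist):
    """Sum of log p(x_i | x_<i) over pixels in raster-scan order.

    `cond_dist(prefix)` stands in for the trained network: it returns a
    length-256 probability vector over the next 8-bit pixel value given the
    flattened raster prefix of previously seen pixels.
    """
    flat = image.ravel()
    ll = 0.0
    for i, v in enumerate(flat):
        p = cond_dist(flat[:i])
        ll += np.log(p[v])
    return ll

# With a uniform stand-in model, any 8-bit image scores n_pixels * log(1/256).
uniform = lambda prefix: np.full(256, 1.0 / 256)
img = np.zeros((2, 2), dtype=int)
print(raster_log_likelihood(img, uniform))   # 4 * log(1/256) ≈ -22.18
```
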
1608.05457 | Takeshi Onishi | Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel and David
McAllester | Who did What: A Large-Scale Person-Centered Cloze Dataset | To appear at EMNLP 2016. Our dataset is available at
tticnlp.github.io/who_did_what | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have constructed a new "Who-did-What" dataset of over 200,000
fill-in-the-gap (cloze) multiple choice reading comprehension problems
constructed from the LDC English Gigaword newswire corpus. The WDW dataset has
a variety of novel features. First, in contrast with the CNN and Daily Mail
datasets (Hermann et al., 2015) we avoid using article summaries for question
formation. Instead, each problem is formed from two independent articles --- an
article given as the passage to be read and a separate article on the same
events used to form the question. Second, we avoid anonymization --- each
choice is a person named entity. Third, the problems have been filtered to
remove a fraction that are easily solved by simple baselines, while remaining
84% solvable by humans. We report performance benchmarks of standard systems
and propose the WDW dataset as a challenge task for the community.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2016 00:13:10 GMT"
}
] | 2016-08-22T00:00:00 | [
[
"Onishi",
"Takeshi",
""
],
[
"Wang",
"Hai",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Gimpel",
"Kevin",
""
],
[
"McAllester",
"David",
""
]
] | TITLE: Who did What: A Large-Scale Person-Centered Cloze Dataset
ABSTRACT: We have constructed a new "Who-did-What" dataset of over 200,000
fill-in-the-gap (cloze) multiple choice reading comprehension problems
constructed from the LDC English Gigaword newswire corpus. The WDW dataset has
a variety of novel features. First, in contrast with the CNN and Daily Mail
datasets (Hermann et al., 2015) we avoid using article summaries for question
formation. Instead, each problem is formed from two independent articles --- an
article given as the passage to be read and a separate article on the same
events used to form the question. Second, we avoid anonymization --- each
choice is a person named entity. Third, the problems have been filtered to
remove a fraction that are easily solved by simple baselines, while remaining
84% solvable by humans. We report performance benchmarks of standard systems
and propose the WDW dataset as a challenge task for the community.
| new_dataset | 0.957794 |
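
A toy version of the cloze construction described above: blank a person named entity in a sentence from one article and offer the person entities from the paired article as choices. The helper below is hypothetical and glosses over the NER and baseline-filtering steps:

```python
def make_cloze(question_sentence, passage_entities, answer):
    """Build a person-centered cloze item.

    Blanks the answer entity in the question sentence and offers the paired
    passage's person entities as choices (toy construction; the real pipeline
    runs NER over Gigaword article pairs and filters easy items).
    """
    assert answer in passage_entities
    question = question_sentence.replace(answer, "XXX", 1)
    return {"question": question,
            "choices": sorted(passage_entities),
            "answer": answer}

item = make_cloze("Jane Smith met John Doe in Geneva.",
                  {"Jane Smith", "John Doe"}, "Jane Smith")
print(item["question"])   # "XXX met John Doe in Geneva."
```
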
1608.05562 | Enzo Ferrante | Roque Porchetto (1), Franco Stramana (1), Nikos Paragios (2), Enzo
Ferrante (2) ((1) UNICEN University, Tandil Argentina, (2) CVN,
CentraleSupelec-INRIA, Universite Paris-Saclay, France) | Rigid Slice-To-Volume Medical Image Registration through Markov Random
Fields | Bayesian and Graphical Models for Biomedical Imaging Workshop, BAMBI
(MICCAI 2016, Athens, Greece) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rigid slice-to-volume registration is a challenging task, which finds
application in medical imaging problems like image fusion for image guided
surgeries and motion correction for volume reconstruction. It is usually
formulated as an optimization problem and solved using standard continuous
methods. In this paper, we discuss how this task can be formulated as a discrete
labeling problem on a graph. Inspired by previous works on discrete estimation
of linear transformations using Markov Random Fields (MRFs), we model it using
a pairwise MRF, where the nodes are associated with the rigid parameters, and the
edges encode the relation between the variables. We compare the performance of
the proposed method to a continuous formulation optimized using simplex, and we
discuss how it can be used to further improve the accuracy of our approach.
Promising results are obtained using a monomodal dataset composed of magnetic
resonance images (MRI) of a beating heart.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2016 10:29:50 GMT"
}
] | 2016-08-22T00:00:00 | [
[
"Porchetto",
"Roque",
""
],
[
"Stramana",
"Franco",
""
],
[
"Paragios",
"Nikos",
""
],
[
"Ferrante",
"Enzo",
""
]
] | TITLE: Rigid Slice-To-Volume Medical Image Registration through Markov Random
Fields
ABSTRACT: Rigid slice-to-volume registration is a challenging task, which finds
application in medical imaging problems like image fusion for image guided
surgeries and motion correction for volume reconstruction. It is usually
formulated as an optimization problem and solved using standard continuous
methods. In this paper, we discuss how this task can be formulated as a discrete
labeling problem on a graph. Inspired by previous works on discrete estimation
of linear transformations using Markov Random Fields (MRFs), we model it using
a pairwise MRF, where the nodes are associated with the rigid parameters, and the
edges encode the relation between the variables. We compare the performance of
the proposed method to a continuous formulation optimized using simplex, and we
discuss how it can be used to further improve the accuracy of our approach.
Promising results are obtained using a monomodal dataset composed of magnetic
resonance images (MRI) of a beating heart.
| new_dataset | 0.917414 |
1608.05605 | Stéphan Tulkens | Stéphan Tulkens and Simon Šuster and Walter Daelemans | Using Distributed Representations to Disambiguate Biomedical and
Clinical Concepts | 6 pages, 1 figure, presented at the 15th Workshop on Biomedical
Natural Language Processing, Berlin 2016 | Proceedings of the 15th Workshop on Biomedical Natural Language
Processing, Berlin, Germany, 2016, pages 77-82. Association for Computational
Linguistics | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we report a knowledge-based method for Word Sense
Disambiguation in the domains of biomedical and clinical text. We combine word
representations created on large corpora with a small number of definitions
from the UMLS to create concept representations, which we then compare to
representations of the context of ambiguous terms. Using no relational
information, we obtain comparable performance to previous approaches on the
MSH-WSD dataset, which is a well-known dataset in the biomedical domain.
Additionally, our method is fast and easy to set up and extend to other
domains. Supplementary materials, including source code, can be found at
https://github.com/clips/yarn
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2016 14:05:03 GMT"
}
] | 2016-08-22T00:00:00 | [
[
"Tulkens",
"Stéphan",
""
],
[
"Šuster",
"Simon",
""
],
[
"Daelemans",
"Walter",
""
]
] | TITLE: Using Distributed Representations to Disambiguate Biomedical and
Clinical Concepts
ABSTRACT: In this paper, we report a knowledge-based method for Word Sense
Disambiguation in the domains of biomedical and clinical text. We combine word
representations created on large corpora with a small number of definitions
from the UMLS to create concept representations, which we then compare to
representations of the context of ambiguous terms. Using no relational
information, we obtain comparable performance to previous approaches on the
MSH-WSD dataset, which is a well-known dataset in the biomedical domain.
Additionally, our method is fast and easy to set up and extend to other
domains. Supplementary materials, including source code, can be found at
https://github.com/clips/yarn
| no_new_dataset | 0.948489 |
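
The disambiguation method above reduces to comparing a concept vector, averaged from definition words, against a context vector averaged from words around the ambiguous term. A minimal sketch, assuming a dict of pre-trained word vectors and that at least one word per bag is in the vocabulary:

```python
import numpy as np

def average_vector(words, vectors):
    """Mean of the known word vectors; `vectors` maps word -> 1-D array."""
    known = [vectors[w] for w in words if w in vectors]
    return np.mean(known, axis=0)

def disambiguate(context_words, sense_definitions, vectors):
    """Pick the sense whose definition vector is most cosine-similar to the context."""
    ctx = average_vector(context_words, vectors)
    best, best_sim = None, -np.inf
    for sense, definition in sense_definitions.items():
        cv = average_vector(definition, vectors)
        sim = ctx @ cv / (np.linalg.norm(ctx) * np.linalg.norm(cv))
        if sim > best_sim:
            best, best_sim = sense, sim
    return best
```
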
1608.05684 | Menghua Zhai | Menghua Zhai, Scott Workman, Nathan Jacobs | Detecting Vanishing Points using Global Image Context in a Non-Manhattan
World | IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel method for detecting horizontal vanishing points and the
zenith vanishing point in man-made environments. The dominant trend in existing
methods is to first find candidate vanishing points, then remove outliers by
enforcing mutual orthogonality. Our method reverses this process: we propose a
set of horizon line candidates and score each based on the vanishing points it
contains. A key element of our approach is the use of global image context,
extracted with a deep convolutional network, to constrain the set of candidates
under consideration. Our method does not make a Manhattan-world assumption and
can operate effectively on scenes with only a single horizontal vanishing
point. We evaluate our approach on three benchmark datasets and achieve
state-of-the-art performance on each. In addition, our approach is
significantly faster than the previous best method.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2016 18:08:55 GMT"
}
] | 2016-08-22T00:00:00 | [
[
"Zhai",
"Menghua",
""
],
[
"Workman",
"Scott",
""
],
[
"Jacobs",
"Nathan",
""
]
] | TITLE: Detecting Vanishing Points using Global Image Context in a Non-Manhattan
World
ABSTRACT: We propose a novel method for detecting horizontal vanishing points and the
zenith vanishing point in man-made environments. The dominant trend in existing
methods is to first find candidate vanishing points, then remove outliers by
enforcing mutual orthogonality. Our method reverses this process: we propose a
set of horizon line candidates and score each based on the vanishing points it
contains. A key element of our approach is the use of global image context,
extracted with a deep convolutional network, to constrain the set of candidates
under consideration. Our method does not make a Manhattan-world assumption and
can operate effectively on scenes with only a single horizontal vanishing
point. We evaluate our approach on three benchmark datasets and achieve
state-of-the-art performance on each. In addition, our approach is
significantly faster than the previous best method.
| no_new_dataset | 0.950824 |
1608.05104 | Nasim Souly | Nasim Souly and Mubarak Shah | Scene Labeling Through Knowledge-Based Rules Employing Constrained
Integer Linear Programing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene labeling task is to segment the image into meaningful regions and
categorize them into classes of objects which comprised the image. Commonly
used methods typically find the local features for each segment and label them
using classifiers. Afterward, labeling is smoothed in order to make sure that
neighboring regions receive similar labels. However, they ignore expressive and
non-local dependencies among regions due to expensive training and inference.
In this paper, we propose to use high-level knowledge regarding rules in the
inference to incorporate dependencies among regions in the image to improve
scores of classification. Towards this aim, we extract these rules from data
and transform them into constraints for Integer Programming to optimize the
structured problem of assigning labels to super-pixels (consequently pixels) of
an image. In addition, we propose to use soft constraints in some scenarios,
allowing a constraint to be violated at the cost of a penalty, to make the model more
flexible. We assessed our approach on three datasets and obtained promising
results.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 21:14:51 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Souly",
"Nasim",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Scene Labeling Through Knowledge-Based Rules Employing Constrained
Integer Linear Programing
ABSTRACT: The scene labeling task is to segment an image into meaningful regions and
categorize them into classes of objects that comprise the image. Commonly
used methods typically find the local features for each segment and label them
using classifiers. Afterward, labeling is smoothed in order to make sure that
neighboring regions receive similar labels. However, they ignore expressive and
non-local dependencies among regions due to expensive training and inference.
In this paper, we propose to use high-level knowledge regarding rules in the
inference to incorporate dependencies among regions in the image to improve
scores of classification. Towards this aim, we extract these rules from data
and transform them into constraints for Integer Programming to optimize the
structured problem of assigning labels to super-pixels (consequently pixels) of
an image. In addition, we propose to use soft constraints in some scenarios,
allowing a constraint to be violated at the cost of a penalty, to make the model more
flexible. We assessed our approach on three datasets and obtained promising
results.
| no_new_dataset | 0.947478 |
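
The soft-constraint idea above can be illustrated by exhaustively scoring label assignments for a handful of regions, with a mined rule enforced as a penalty rather than a hard constraint; this brute-force search is only a stand-in for the Integer Programming solver and is viable only for tiny problems:

```python
import itertools
import numpy as np

def best_labeling(unary, forbidden_pairs, penalty=1.0):
    """Maximize classifier scores minus penalties for rule violations.

    unary: (n_regions, n_labels) classifier scores.
    forbidden_pairs: set of (label_a, label_b) that adjacent regions should
    not take together (a stand-in for rules mined from data).
    """
    n, k = unary.shape
    best, best_score = None, -np.inf
    for labels in itertools.product(range(k), repeat=n):
        score = sum(unary[i, l] for i, l in enumerate(labels))
        for i in range(n - 1):                    # adjacency = consecutive regions here
            if (labels[i], labels[i + 1]) in forbidden_pairs:
                score -= penalty                  # soft constraint as a penalty
        if score > best_score:
            best, best_score = labels, score
    return best, best_score
```

Raising `penalty` toward infinity recovers the hard-constraint behaviour, which is exactly the flexibility the abstract attributes to soft constraints.
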
1608.05117 | Saeed Mohajeryami | Saeed Mohajeryami | An Investigation of Randomized Controlled Trial (RCT) Method as a
Customer Baseline Load (CBL) Calculation for Residential Customers | null | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | FERC Order 745 allows demand response owners to sell their load reduction in
the wholesale market. However, in order to be able to sell the load reduction,
some implementation challenges must be addressed, one of which is to establish
Customer Baseline Load (CBL) calculation methods with acceptable error
performance, which has proven to be very challenging so far. In this paper, the
error and financial performance of Randomized Controlled Trial (RCT) method,
applied to both granular and aggregated forms of the consumption load, are
investigated for a hypothetical demand response program offered to a real
dataset of residential customers.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 22:16:26 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Mohajeryami",
"Saeed",
""
]
] | TITLE: An Investigation of Randomized Controlled Trial (RCT) Method as a
Customer Baseline Load (CBL) Calculation for Residential Customers
ABSTRACT: FERC Order 745 allows demand response owners to sell their load reduction in
the wholesale market. However, in order to be able to sell the load reduction,
some implementation challenges must be addressed, one of which is to establish
Customer Baseline Load (CBL) calculation methods with acceptable error
performance, which has proven to be very challenging so far. In this paper, the
error and financial performance of Randomized Controlled Trial (RCT) method,
applied to both granular and aggregated forms of the consumption load, are
investigated for a hypothetical demand response program offered to a real
dataset of residential customers.
| no_new_dataset | 0.94625 |
1608.05159 | Jianan Li | Jianan Li, Xiaodan Liang, Jianshu Li, Tingfa Xu, Jiashi Feng,
Shuicheng Yan | Multi-stage Object Detection with Group Recursive Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most existing detection pipelines treat object proposals independently and
predict bounding box locations and classification scores over them separately.
However, the important semantic and spatial layout correlations among proposals
are often ignored, which are actually useful for more accurate object
detection. In this work, we propose a new EM-like group recursive learning
approach to iteratively refine object proposals by incorporating such context
of surrounding proposals and provide an optimal spatial configuration of object
detections. In addition, we propose to incorporate the weakly-supervised object
segmentation cues and region-based object detection into a multi-stage
architecture in order to fully exploit the learned segmentation features for
better object detection in an end-to-end way. The proposed architecture
consists of three cascaded networks which respectively learn to perform
weakly-supervised object segmentation, object proposal generation and recursive
detection refinement. Combining the group recursive learning and the
multi-stage architecture provides competitive mAPs of 78.6% and 74.9% on the
PASCAL VOC2007 and VOC2012 datasets respectively, which outperforms many
well-established baselines [10] [20] significantly.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 02:37:28 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Li",
"Jianan",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Li",
"Jianshu",
""
],
[
"Xu",
"Tingfa",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Multi-stage Object Detection with Group Recursive Learning
ABSTRACT: Most existing detection pipelines treat object proposals independently and
predict bounding box locations and classification scores over them separately.
However, the important semantic and spatial layout correlations among proposals
are often ignored, which are actually useful for more accurate object
detection. In this work, we propose a new EM-like group recursive learning
approach to iteratively refine object proposals by incorporating such context
of surrounding proposals and provide an optimal spatial configuration of object
detections. In addition, we propose to incorporate the weakly-supervised object
segmentation cues and region-based object detection into a multi-stage
architecture in order to fully exploit the learned segmentation features for
better object detection in an end-to-end way. The proposed architecture
consists of three cascaded networks which respectively learn to perform
weakly-supervised object segmentation, object proposal generation and recursive
detection refinement. Combining the group recursive learning and the
multi-stage architecture provides competitive mAPs of 78.6% and 74.9% on the
PASCAL VOC2007 and VOC2012 datasets respectively, which outperforms many
well-established baselines [10] [20] significantly.
| no_new_dataset | 0.947575 |
1608.05174 | Cory James Kleinheksel | Cory J. Kleinheksel and Arun K. Somani | Scaling Distributed All-Pairs Algorithms: Manage Computation and Limit
Data Replication with Quorums | Chapter Information Science and Applications (ICISA) 2016 Volume 376
of the series Lecture Notes in Electrical Engineering pp 247-257 Date: 16
February 2016 | Kleinheksel, Cory J., and Arun K. Somani. "Scaling Distributed
All-Pairs Algorithms." Information Science and Applications (ICISA) 2016.
Springer Singapore, 2016. 247-257 | 10.1007/978-981-10-0557-2_25 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose and prove that cyclic quorum sets can efficiently
manage all-pairs computations and data replication. The quorums are
O(N/sqrt(P)) in size, up to 50% smaller than the dual N/sqrt(P) array
implementations, and significantly smaller than solutions requiring all data.
Implementation evaluation demonstrated scalability on real datasets with a 7x
speedup on 8 nodes with 1/3rd the memory usage per process. The all-pairs
problem requires all data elements to be paired with all other data elements.
These all-pair problems occur in many science fields, which has led to their
continued interest. Additionally, as datasets grow in size, new methods like
these that can reduce memory footprints and distribute work equally across
compute nodes will be in demand.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 04:51:38 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Kleinheksel",
"Cory J.",
""
],
[
"Somani",
"Arun K.",
""
]
] | TITLE: Scaling Distributed All-Pairs Algorithms: Manage Computation and Limit
Data Replication with Quorums
ABSTRACT: In this paper we propose and prove that cyclic quorum sets can efficiently
manage all-pairs computations and data replication. The quorums are
O(N/sqrt(P)) in size, up to 50% smaller than the dual N/sqrt(P) array
implementations, and significantly smaller than solutions requiring all data.
Implementation evaluation demonstrated scalability on real datasets with a 7x
speedup on 8 nodes with 1/3rd the memory usage per process. The all-pairs
problem requires all data elements to be paired with all other data elements.
These all-pair problems occur in many science fields, which has led to their
continued interest. Additionally, as datasets grow in size, new methods like
these that can reduce memory footprints and distribute work equally across
compute nodes will be in demand.
| no_new_dataset | 0.944074 |
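
Cyclic quorums of the kind proposed above come from a difference cover D of Z_N: node i holds the blocks {(i + d) mod N : d in D}, and because every residue is a difference of two elements of D, any two nodes' quorums intersect, so every pair of data elements meets on some node. The greedy construction below only illustrates that property; the paper relies on known small covers to reach the O(N/sqrt(P)) quorum sizes:

```python
def greedy_difference_cover(n):
    """Greedily grow D so every residue mod n equals (a - b) mod n for a, b in D."""
    cover, covered = [0], {0}
    while len(covered) < n:
        def gain(c):
            new = {(c - d) % n for d in cover} | {(d - c) % n for d in cover}
            return len(new - covered)
        cover.append(max(range(n), key=gain))
        covered = {(a - b) % n for a in cover for b in cover}
    return cover

def cyclic_quorum(i, cover, n):
    """Quorum of node i; any two quorums intersect by the difference-cover property."""
    return {(i + d) % n for d in cover}

# All node pairs share a quorum member, so every data pair meets on some node.
cover = greedy_difference_cover(8)          # e.g. [0, 1, 3, 4] for n = 8
assert all(cyclic_quorum(i, cover, 8) & cyclic_quorum(j, cover, 8)
           for i in range(8) for j in range(8))
```
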
1608.05177 | Youbao Tang | Youbao Tang, Xiangqian Wu, and Wei Bu | Deeply-Supervised Recurrent Convolutional Neural Network for Saliency
Detection | 5 pages, 5 figures, accepted by ACMMM 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel saliency detection method by developing a
deeply-supervised recurrent convolutional neural network (DSRCNN), which
performs a full image-to-image saliency prediction. For saliency detection, the
local, global, and contextual information of salient objects is important to
obtain a high quality salient map. To achieve this goal, the DSRCNN is designed
based on VGGNet-16. Firstly, the recurrent connections are incorporated into
each convolutional layer, which can make the model more powerful for learning
the contextual information. Secondly, side-output layers are added to conduct
the deeply-supervised operation, which can make the model learn more
discriminative and robust features by affecting the intermediate layers.
Finally, all of the side-outputs are fused to integrate the local and global
information to get the final saliency detection results. Therefore, the DSRCNN
combines the advantages of recurrent convolutional neural networks and
deeply-supervised nets. The DSRCNN model is tested on five benchmark datasets,
and experimental results demonstrate that the proposed method significantly
outperforms the state-of-the-art saliency detection approaches on all test
datasets.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 05:08:16 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Tang",
"Youbao",
""
],
[
"Wu",
"Xiangqian",
""
],
[
"Bu",
"Wei",
""
]
] | TITLE: Deeply-Supervised Recurrent Convolutional Neural Network for Saliency
Detection
ABSTRACT: This paper proposes a novel saliency detection method by developing a
deeply-supervised recurrent convolutional neural network (DSRCNN), which
performs a full image-to-image saliency prediction. For saliency detection, the
local, global, and contextual information of salient objects is important to
obtain a high quality salient map. To achieve this goal, the DSRCNN is designed
based on VGGNet-16. Firstly, the recurrent connections are incorporated into
each convolutional layer, which can make the model more powerful for learning
the contextual information. Secondly, side-output layers are added to conduct
the deeply-supervised operation, which can make the model learn more
discriminative and robust features by affecting the intermediate layers.
Finally, all of the side-outputs are fused to integrate the local and global
information to get the final saliency detection results. Therefore, the DSRCNN
combines the advantages of recurrent convolutional neural networks and
deeply-supervised nets. The DSRCNN model is tested on five benchmark datasets,
and experimental results demonstrate that the proposed method significantly
outperforms the state-of-the-art saliency detection approaches on all test
datasets.
| no_new_dataset | 0.948106 |
1608.05186 | Youbao Tang | Youbao Tang, Xiangqian Wu | Saliency Detection via Combining Region-Level and Pixel-Level
Predictions with CNNs | 18 pages, 9 figures, accepted by ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a novel saliency detection method by combining
region-level saliency estimation and pixel-level saliency prediction with CNNs
(denoted as CRPSD). For pixel-level saliency prediction, a fully convolutional
neural network (called pixel-level CNN) is constructed by modifying the VGGNet
architecture to perform multi-scale feature learning, based on which an
image-to-image prediction is conducted to accomplish the pixel-level saliency
detection. For region-level saliency estimation, an adaptive superpixel based
region generation technique is first designed to partition an image into
regions, based on which the region-level saliency is estimated by using a CNN
model (called region-level CNN). The pixel-level and region-level saliencies
are fused to form the final salient map by using another CNN (called fusion
CNN). The pixel-level CNN and fusion CNN are jointly learned. Extensive
quantitative and qualitative experiments on four public benchmark datasets
demonstrate that the proposed method greatly outperforms the state-of-the-art
saliency detection approaches.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 06:00:18 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Tang",
"Youbao",
""
],
[
"Wu",
"Xiangqian",
""
]
] | TITLE: Saliency Detection via Combining Region-Level and Pixel-Level
Predictions with CNNs
ABSTRACT: This paper proposes a novel saliency detection method by combining
region-level saliency estimation and pixel-level saliency prediction with CNNs
(denoted as CRPSD). For pixel-level saliency prediction, a fully convolutional
neural network (called pixel-level CNN) is constructed by modifying the VGGNet
architecture to perform multi-scale feature learning, based on which an
image-to-image prediction is conducted to accomplish the pixel-level saliency
detection. For region-level saliency estimation, an adaptive superpixel-based
region generation technique is first designed to partition an image into
regions, based on which the region-level saliency is estimated by using a CNN
model (called region-level CNN). The pixel-level and region-level saliencies
are fused to form the final salient map by using another CNN (called fusion
CNN). The pixel-level CNN and fusion CNN are jointly learned. Extensive
quantitative and qualitative experiments on four public benchmark datasets
demonstrate that the proposed method greatly outperforms the state-of-the-art
saliency detection approaches.
| no_new_dataset | 0.951006 |
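To make the region-level step above concrete, here is a toy sketch of superpixel-based region scoring and pixel/region fusion; the saliency maps and the averaging fusion are stand-ins for the trained CNNs of the paper:

```python
# Illustrative only: SLIC superpixels give regions; both saliency maps and
# the fusion rule are placeholders for the pixel-level, region-level and
# fusion CNNs described in the abstract.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
pixel_saliency = rng.random((64, 64))        # stand-in: pixel-level CNN output

segments = slic(image, n_segments=50, compactness=10)
region_saliency = np.zeros_like(pixel_saliency)
for label in np.unique(segments):
    mask = segments == label
    region_saliency[mask] = pixel_saliency[mask].mean()   # stand-in: region CNN

final_map = 0.5 * pixel_saliency + 0.5 * region_saliency  # stand-in: fusion CNN
print(final_map.shape)
```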
1608.05203 | Yusuke Sugano | Yusuke Sugano, Andreas Bulling | Seeing with Humans: Gaze-Assisted Neural Image Captioning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaze reflects how humans process visual scenes and is therefore increasingly
used in computer vision systems. Previous works demonstrated the potential of
gaze for object-centric tasks, such as object localization and recognition, but
it remains unclear if gaze can also be beneficial for scene-centric tasks, such
as image captioning. We present a new perspective on gaze-assisted image
captioning by studying the interplay between human gaze and the attention
mechanism of deep neural networks. Using a public large-scale gaze dataset, we
first assess the relationship between state-of-the-art object and scene
recognition models, bottom-up visual saliency, and human gaze. We then propose
a novel split attention model for image captioning. Our model integrates human
gaze information into an attention-based long short-term memory architecture,
and allows the algorithm to allocate attention selectively to both fixated and
non-fixated image regions. Through evaluation on the COCO/SALICON datasets we
show that our method improves image captioning performance and that gaze can
complement machine attention for semantic scene understanding tasks.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 08:13:22 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Sugano",
"Yusuke",
""
],
[
"Bulling",
"Andreas",
""
]
] | TITLE: Seeing with Humans: Gaze-Assisted Neural Image Captioning
ABSTRACT: Gaze reflects how humans process visual scenes and is therefore increasingly
used in computer vision systems. Previous works demonstrated the potential of
gaze for object-centric tasks, such as object localization and recognition, but
it remains unclear if gaze can also be beneficial for scene-centric tasks, such
as image captioning. We present a new perspective on gaze-assisted image
captioning by studying the interplay between human gaze and the attention
mechanism of deep neural networks. Using a public large-scale gaze dataset, we
first assess the relationship between state-of-the-art object and scene
recognition models, bottom-up visual saliency, and human gaze. We then propose
a novel split attention model for image captioning. Our model integrates human
gaze information into an attention-based long short-term memory architecture,
and allows the algorithm to allocate attention selectively to both fixated and
non-fixated image regions. Through evaluation on the COCO/SALICON datasets we
show that our method improves image captioning performance and that gaze can
complement machine attention for semantic scene understanding tasks.
| no_new_dataset | 0.947624 |
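One simple way to picture "allocating attention to both fixated and non-fixated regions" is to gate a machine-attention distribution with a gaze prior; the toy sketch below is an assumption-laden caricature, not the authors' split attention model:

```python
# Toy illustration: mix machine attention with a human-gaze prior over
# regions. The gate value would be learned in a real model.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

scores = np.array([1.2, 0.3, -0.5, 0.9])   # machine attention logits, 4 regions
gaze = np.array([0.7, 0.1, 0.1, 0.1])      # normalised fixation density
gate = 0.5                                  # hypothetical learned mixing weight

attention = gate * gaze + (1 - gate) * softmax(scores)
attention /= attention.sum()                # still a distribution over regions
print(attention.round(3))                   # non-fixated regions keep some mass
```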
1608.05209 | Felix J\"aremo Lawin | Felix J\"aremo Lawin, Per-Erik Forss\'en and Hannes Ovr\'en | Efficient Multi-Frequency Phase Unwrapping using Kernel Density
Estimation | Accepted at ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we introduce an efficient method to unwrap multi-frequency
phase estimates for time-of-flight ranging. The algorithm generates multiple
depth hypotheses and uses a spatial kernel density estimate (KDE) to rank them.
The confidence produced by the KDE is also an effective means to detect
outliers. We also introduce a new closed-form expression for phase noise
prediction that better fits real data. The method is applied to depth decoding
for the Kinect v2 sensor, and compared to the Microsoft Kinect SDK and to the
open source driver libfreenect2. The intended Kinect v2 use case is scenes with
less than 8m range, and for such cases we observe consistent improvements,
while maintaining real-time performance. When extending the depth range to the
maximal value of 8.75m, we get about 52% more valid measurements than
libfreenect2. The effect is that the sensor can now be used in large depth
scenes, where it was previously not a good choice. Code and supplementary
material are available at
http://www.cvl.isy.liu.se/research/datasets/kinect2-dataset.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 08:49:13 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Lawin",
"Felix Järemo",
""
],
[
"Forssén",
"Per-Erik",
""
],
[
"Ovrén",
"Hannes",
""
]
] | TITLE: Efficient Multi-Frequency Phase Unwrapping using Kernel Density
Estimation
ABSTRACT: In this paper we introduce an efficient method to unwrap multi-frequency
phase estimates for time-of-flight ranging. The algorithm generates multiple
depth hypotheses and uses a spatial kernel density estimate (KDE) to rank them.
The confidence produced by the KDE is also an effective means to detect
outliers. We also introduce a new closed-form expression for phase noise
prediction that better fits real data. The method is applied to depth decoding
for the Kinect v2 sensor, and compared to the Microsoft Kinect SDK and to the
open source driver libfreenect2. The intended Kinect v2 use case is scenes with
less than 8m range, and for such cases we observe consistent improvements,
while maintaining real-time performance. When extending the depth range to the
maximal value of 8.75m, we get about 52% more valid measurements than
libfreenect2. The effect is that the sensor can now be used in large depth
scenes, where it was previously not a good choice. Code and supplementary
material are available at
http://www.cvl.isy.liu.se/research/datasets/kinect2-dataset.
| no_new_dataset | 0.940079 |
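The hypothesis-and-KDE idea can be sketched in a few lines. The frequencies, noise level, and 1-D (rather than spatial) KDE below are assumptions for illustration, not the paper's implementation:

```python
# Sketch: enumerate unwrapping hypotheses for two modulation frequencies,
# then rank them by a Gaussian KDE -- depths supported by both frequencies
# accumulate the most density.
import numpy as np

C = 3e8                          # speed of light (m/s)
freqs = [16e6, 80e6]             # illustrative ToF modulation frequencies
true_depth = 6.3                 # metres
rng = np.random.default_rng(0)

# Wrapped phase of the round trip: phi = (4*pi*f*d/c) mod 2*pi, plus noise.
phases = [(4 * np.pi * f * true_depth / C + rng.normal(0, 0.02)) % (2 * np.pi)
          for f in freqs]

hypotheses = []
for f, phi in zip(freqs, phases):
    ambiguity = C / (2 * f)                        # range covered by one wrap
    for n in range(int(8.75 / ambiguity) + 1):     # up to the 8.75 m maximum
        hypotheses.append(phi * C / (4 * np.pi * f) + n * ambiguity)
hypotheses = np.array(hypotheses)

def kde_score(d, bandwidth=0.05):
    return np.exp(-0.5 * ((hypotheses - d) / bandwidth) ** 2).sum()

scores = np.array([kde_score(d) for d in hypotheses])
print(f"estimated depth: {hypotheses[scores.argmax()]:.2f} m "
      f"(true {true_depth} m); the max score also serves as a confidence")
```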
1608.05266 | Juste Raimbault | Juste Raimbault | Investigating the Empirical Existence of Static User Equilibrium | 9 pages, 5 figures. Forthcoming in Transportation Research Procedia,
EWGT2016, 5-7 September 2016, Istanbul | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Static User Equilibrium is a powerful framework for the theoretical study
of traffic. Despite the restricting assumption of stationary flows that
intuitively limits its application to real traffic systems, many operational
models implementing it are still used without an empirical validation of the
existence of the equilibrium. We investigate its existence on a three-month
traffic dataset for the region of Paris, FR. The implementation of an
application for interactive spatio-temporal data exploration allows us to
hypothesize a high spatial and temporal heterogeneity, and to guide further
quantitative work. The assumption of locally stationary flows is invalidated to
a first approximation by empirical results, as shown by a strong spatial and
temporal variability in shortest paths and in network topological measures such
as betweenness centrality. Furthermore, the behavior of the spatial
autocorrelation index of congestion patterns at different spatial ranges
suggests a chaotic
evolution at the local scale, especially during peak hours. We finally discuss
the implications of these empirical findings and describe further possible
developments based on the estimation of Lyapunov dynamical stability of traffic
flows.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 14:02:44 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Raimbault",
"Juste",
""
]
] | TITLE: Investigating the Empirical Existence of Static User Equilibrium
ABSTRACT: The Static User Equilibrium is a powerful framework for the theoretical study
of traffic. Despite the restricting assumption of stationary flows that
intuitively limits its application to real traffic systems, many operational
models implementing it are still used without an empirical validation of the
existence of the equilibrium. We investigate its existence on a three-month
traffic dataset for the region of Paris, FR. The implementation of an
application for interactive spatio-temporal data exploration allows us to
hypothesize a high spatial and temporal heterogeneity, and to guide further
quantitative work. The assumption of locally stationary flows is invalidated to
a first approximation by empirical results, as shown by a strong spatial and
temporal variability in shortest paths and in network topological measures such
as betweenness centrality. Furthermore, the behavior of the spatial
autocorrelation index of congestion patterns at different spatial ranges
suggests a chaotic
evolution at the local scale, especially during peak hours. We finally discuss
the implications of these empirical findings and describe further possible
developments based on the estimation of Lyapunov dynamical stability of traffic
flows.
| no_new_dataset | 0.945197 |
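A small sketch of the kind of stationarity check the abstract describes: if flows were static, betweenness centrality on the travel-time-weighted network should barely move between time windows. The graph and travel times are toy values:

```python
# Toy illustration with networkx: compare betweenness centrality between
# an off-peak and a peak travel-time assignment on the same road graph.
import networkx as nx

def betweenness(travel_times):
    G = nx.DiGraph()
    for (u, v), t in travel_times.items():
        G.add_edge(u, v, time=t)
    return nx.betweenness_centrality(G, weight='time')

off_peak = {('a', 'b'): 5, ('b', 'c'): 5, ('a', 'd'): 7, ('d', 'c'): 7}
peak     = {('a', 'b'): 20, ('b', 'c'): 20, ('a', 'd'): 7, ('d', 'c'): 7}

bc_off, bc_peak = betweenness(off_peak), betweenness(peak)
for node in sorted(bc_off):
    print(node, round(bc_off[node], 2), '->', round(bc_peak[node], 2))
# Node b carries the a->c shortest path off-peak but loses it at peak when
# its links congest; large swings like this argue against a static equilibrium.
```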
1608.05275 | Elad Mezuman | Elad Mezuman and Yair Weiss | A Tight Convex Upper Bound on the Likelihood of a Finite Mixture | icpr 2016 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The likelihood function of a finite mixture model is a non-convex function
with multiple local maxima and commonly used iterative algorithms such as EM
will converge to different solutions depending on initial conditions. In this
paper we ask: is it possible to assess how far we are from the global maximum
of the likelihood? Since the likelihood of a finite mixture model can grow
unboundedly by centering a Gaussian on a single datapoint and shrinking the
covariance, we constrain the problem by assuming that the parameters of the
individual models are members of a large discrete set (e.g. estimating a
mixture of two Gaussians where the means and variances of both Gaussians are
members of a set of a million possible means and variances). For this setting
we show that a simple upper bound on the likelihood can be computed using
convex optimization and we analyze conditions under which the bound is
guaranteed to be tight. This bound can then be used to assess the quality of
solutions found by EM (where the final result is projected on the discrete set)
or any other mixture estimation algorithm. For any dataset our method allows us
to find a finite mixture model together with a dataset-specific bound on how
far the likelihood of this mixture is from the global optimum of the likelihood.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 14:27:45 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Mezuman",
"Elad",
""
],
[
"Weiss",
"Yair",
""
]
] | TITLE: A Tight Convex Upper Bound on the Likelihood of a Finite Mixture
ABSTRACT: The likelihood function of a finite mixture model is a non-convex function
with multiple local maxima and commonly used iterative algorithms such as EM
will converge to different solutions depending on initial conditions. In this
paper we ask: is it possible to assess how far we are from the global maximum
of the likelihood? Since the likelihood of a finite mixture model can grow
unboundedly by centering a Gaussian on a single datapoint and shrinking the
covariance, we constrain the problem by assuming that the parameters of the
individual models are members of a large discrete set (e.g. estimating a
mixture of two Gaussians where the means and variances of both Gaussians are
members of a set of a million possible means and variances). For this setting
we show that a simple upper bound on the likelihood can be computed using
convex optimization and we analyze conditions under which the bound is
guaranteed to be tight. This bound can then be used to assess the quality of
solutions found by EM (where the final result is projected on the discrete set)
or any other mixture estimation algorithm. For any dataset our method allows us
to find a finite mixture model together with a dataset-specific bound on how
far the likelihood of this mixture is from the global optimum of the likelihood.
| no_new_dataset | 0.942454 |
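The key observation can be reproduced on a toy scale: once components are restricted to a finite grid, the mixture log-likelihood is concave in the weight vector, so its maximum over the simplex upper-bounds any mixture built from grid components. This sketch uses a simple exponentiated-gradient ascent and conveys only the flavor of the paper's bound, not its algorithm:

```python
# Toy sketch: upper bound on the log-likelihood of any mixture whose
# components come from a fixed grid of Gaussians.
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])

means = np.linspace(-4, 4, 33)          # discrete set of candidate components
sigma = 0.5
F = np.exp(-0.5 * ((x[:, None] - means[None, :]) / sigma) ** 2) \
    / (sigma * np.sqrt(2 * np.pi))      # per-point, per-component densities

# Maximise the concave objective sum_i log(F @ w) over the simplex.
w = np.full(len(means), 1.0 / len(means))
for _ in range(500):
    p = F @ w                            # mixture density at each point
    grad = (F / p[:, None]).sum(axis=0)  # gradient of the log-likelihood
    w *= np.exp(0.01 * grad / len(x))    # exponentiated-gradient step
    w /= w.sum()

print(f"upper bound on the grid-restricted log-likelihood: {np.log(F @ w).sum():.1f}")
```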
1608.05346 | Zaiqiao Meng | Zaiqiao Meng and Hong Shen | Diversified Top-k Similarity Search in Large Attributed Networks | 9 pages, 4 figures, conference | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a large network and a query node, finding its top-k similar nodes is a
primitive operation in many graph-based applications. Recently enhancing search
results with diversification have received much attention. In this paper, we
explore an novel problem of searching for top-k diversified similar nodes in
attributed networks, with the motivation that modeling diversification in an
attributed network should consider both the emergence of network links and the
attribute features of nodes such as user profile information. We formulate this
practical problem as two optimization problems: the Attributed Coverage
Diversification (ACD) problem and the r-Dissimilar Attributed Coverage
Diversification (r-DACD) problem. Based on the submodularity and the
monotonicity of ACD, we propose an efficient greedy algorithm achieving a tight
approximation guarantee of 1-1/e. Unlike the expension based methods only
considering nodes' neighborhood, ACD generalize the definition of
diversification to nodes' own features. To capture diversification in
topological structure of networks, the r-DACD problem introduce a dissimilarity
constraint. We refer to this problem as the Dissimilarity Constrained
Non-monotone Submodular Maximization (DCNSM) problem. We prove that there is no
constant-factor approximation for DCNSM, and also present an efficient greedy
algorithms achieving $1/\rho$ approximation, where $\rho\le\Delta$, $\Delta$ is
the maximum degree of its dissimilarity based graph. To the best of our
knowledge, it is the first approximation algorithm for the Submodular
Maximization problem with a distance constraint. The experimental results on
real-world attributed network datasets demonstrate the effectiveness of our
methods, and confirm that adding dissimilarity constraint can significantly
enhance the performance of diversification.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 17:45:45 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Meng",
"Zaiqiao",
""
],
[
"Shen",
"Hong",
""
]
] | TITLE: Diversified Top-k Similarity Search in Large Attributed Networks
ABSTRACT: Given a large network and a query node, finding its top-k similar nodes is a
primitive operation in many graph-based applications. Recently, enhancing
search results with diversification has received much attention. In this paper,
we explore a novel problem of searching for top-k diversified similar nodes in
attributed networks, with the motivation that modeling diversification in an
attributed network should consider both the emergence of network links and the
attribute features of nodes such as user profile information. We formulate this
practical problem as two optimization problems: the Attributed Coverage
Diversification (ACD) problem and the r-Dissimilar Attributed Coverage
Diversification (r-DACD) problem. Based on the submodularity and the
monotonicity of ACD, we propose an efficient greedy algorithm achieving a tight
approximation guarantee of 1-1/e. Unlike expansion-based methods that consider
only nodes' neighborhoods, ACD generalizes the definition of
diversification to nodes' own features. To capture diversification in
topological structure of networks, the r-DACD problem introduces a dissimilarity
constraint. We refer to this problem as the Dissimilarity Constrained
Non-monotone Submodular Maximization (DCNSM) problem. We prove that there is no
constant-factor approximation for DCNSM, and also present an efficient greedy
algorithm achieving a $1/\rho$ approximation, where $\rho\le\Delta$, $\Delta$ is
the maximum degree of its dissimilarity-based graph. To the best of our
knowledge, it is the first approximation algorithm for the Submodular
Maximization problem with a distance constraint. The experimental results on
real-world attributed network datasets demonstrate the effectiveness of our
methods, and confirm that adding a dissimilarity constraint can significantly
enhance the performance of diversification.
| no_new_dataset | 0.950869 |
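For the monotone submodular part, the classic greedy routine with the 1-1/e guarantee looks like the sketch below; the attribute-coverage objective is an illustrative stand-in for the ACD objective:

```python
# Generic greedy maximisation of a monotone submodular set function
# (here: number of attributes covered by the selected nodes).
def greedy_topk(candidates, attributes, k):
    """attributes: dict node -> set of attribute ids; returns k nodes."""
    selected, covered = [], set()
    for _ in range(k):
        best = max((c for c in candidates if c not in selected),
                   key=lambda c: len(attributes[c] - covered))  # marginal gain
        selected.append(best)
        covered |= attributes[best]
    return selected

attrs = {'u1': {1, 2, 3}, 'u2': {3, 4}, 'u3': {1, 2}, 'u4': {4, 5}}
print(greedy_topk(list(attrs), attrs, 2))   # -> ['u1', 'u4']
```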
1608.05374 | Srikanth Ronanki | Srikanth Ronanki and Siva Reddy and Bajibabu Bollepalli and Simon King | DNN-based Speech Synthesis for Indian Languages from ASCII text | 6 pages, 5 figures -- Accepted in 9th ISCA Speech Synthesis Workshop | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Text-to-Speech synthesis in Indian languages has seen a lot of progress over
the past decade, partly due to the annual Blizzard challenges. These systems assume
the text to be written in Devanagari or Dravidian scripts which are nearly
phonemic orthography scripts. However, the most common form of computer
interaction among Indians is ASCII written transliterated text. Such text is
generally noisy with many variations in spelling for the same word. In this
paper we evaluate three approaches to synthesize speech from such noisy ASCII
text: a naive Uni-Grapheme approach, a Multi-Grapheme approach, and a
supervised Grapheme-to-Phoneme (G2P) approach. These methods first convert the
ASCII text to a phonetic script, and then learn a Deep Neural Network to
synthesize speech from that. We train and test our models on Blizzard Challenge
datasets that were transliterated to ASCII using crowdsourcing. Our experiments
on Hindi, Tamil and Telugu demonstrate that our models generate speech of
competetive quality from ASCII text compared to the speech synthesized from the
native scripts. All the accompanying transliterated datasets are released for
public access.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 18:58:39 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Ronanki",
"Srikanth",
""
],
[
"Reddy",
"Siva",
""
],
[
"Bollepalli",
"Bajibabu",
""
],
[
"King",
"Simon",
""
]
] | TITLE: DNN-based Speech Synthesis for Indian Languages from ASCII text
ABSTRACT: Text-to-Speech synthesis in Indian languages has seen a lot of progress over
the past decade, partly due to the annual Blizzard challenges. These systems assume
the text to be written in Devanagari or Dravidian scripts which are nearly
phonemic orthography scripts. However, the most common form of computer
interaction among Indians is ASCII written transliterated text. Such text is
generally noisy with many variations in spelling for the same word. In this
paper we evaluate three approaches to synthesize speech from such noisy ASCII
text: a naive Uni-Grapheme approach, a Multi-Grapheme approach, and a
supervised Grapheme-to-Phoneme (G2P) approach. These methods first convert the
ASCII text to a phonetic script, and then learn a Deep Neural Network to
synthesize speech from that. We train and test our models on Blizzard Challenge
datasets that were transliterated to ASCII using crowdsourcing. Our experiments
on Hindi, Tamil and Telugu demonstrate that our models generate speech of
competetive quality from ASCII text compared to the speech synthesized from the
native scripts. All the accompanying transliterated datasets are released for
public access.
| no_new_dataset | 0.942823 |
1608.05380 | Amira Ghenai Amira Ghenai | Amira Ghenai, Moustafa M.Ghanem | Exploring Trust-Aware Neighbourhood in Trust-based Recommendation | null | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional Recommender Systems (RS) do not consider any personal user
information beyond rating history. Such information, on the other hand, is
widely available on social networking sites (Facebook, Twitter). As a result,
social networks have recently been used in recommendation systems. In this
paper, we propose an efficient method for incorporating social signals into the
recommendation process by building a trust network which supplements the users'
rating profiles. We first show the effect of different cold-start user types
on the Collaborative Filtering (CF) technique in several real-world datasets.
Later, we propose a "Trust-Aware Neighbourhood" algorithm which addresses a
performance issue of the former by limiting the trusted neighbourhood. We show
a doubling of the rating coverage compared to the traditional CF technique,
and a significant improvement in the accuracy for some datasets. Focusing
specifically on cold-start users, we propose a "Hybrid Trust-Aware
Neighbourhood" algorithm which expands the neighbourhood by considering both
trust and rating history of the users. We show near-complete coverage with a
rich trust-network dataset, Flixster. We conclude by discussing the potential
implementation of this algorithm in a budget-constrained cloud environment.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 19:21:08 GMT"
}
] | 2016-08-19T00:00:00 | [
[
"Ghenai",
"Amira",
""
],
[
"Ghanem",
"Moustafa M.",
""
]
] | TITLE: Exploring Trust-Aware Neighbourhood in Trust-based Recommendation
ABSTRACT: Traditional Recommender Systems (RS) do not consider any personal user
information beyond rating history. Such information, on the other hand, is
widely available on social networking sites (Facebook, Twitter). As a result,
social networks have recently been used in recommendation systems. In this
paper, we propose an efficient method for incorporating social signals into the
recommendation process by building a trust network which supplements the users'
rating profiles. We first show the effect of different cold-start user types
on the Collaborative Filtering (CF) technique in several real-world datasets.
Later, we propose a "Trust-Aware Neighbourhood" algorithm which addresses a
performance issue of the former by limiting the trusted neighbourhood. We show
a doubling of the rating coverage compared to the traditional CF technique,
and a significant improvement in the accuracy for some datasets. Focusing
specifically on cold-start users, we propose a "Hybrid Trust-Aware
Neighbourhood" algorithm which expands the neighbourhood by considering both
trust and rating history of the users. We show near-complete coverage with a
rich trust-network dataset, Flixster. We conclude by discussing the potential
implementation of this algorithm in a budget-constrained cloud environment.
| no_new_dataset | 0.947284 |
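A minimal sketch of the trust-aware neighbourhood idea on invented toy data: ratings come only from users the target user trusts, with the neighbourhood size capped:

```python
# Toy illustration: predict a rating from the trusted neighbourhood only.
ratings = {                       # user -> {item: rating}
    'alice': {'i1': 4.0},
    'bob':   {'i1': 5.0, 'i2': 2.0},
    'carol': {'i2': 4.5},
}
trust = {'dave': ['alice', 'bob']}   # dave's trusted neighbourhood

def predict(user, item, max_neighbours=10):
    neighbours = [v for v in trust.get(user, []) if item in ratings.get(v, {})]
    neighbours = neighbours[:max_neighbours]   # limit the neighbourhood size
    if not neighbours:
        return None                            # cold start: no coverage
    return sum(ratings[v][item] for v in neighbours) / len(neighbours)

print(predict('dave', 'i1'))   # 4.5: mean of alice's and bob's ratings
print(predict('dave', 'i2'))   # 2.0: carol rated i2 but is not trusted
```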
1412.0477 | Luca Del Pero | Luca Del Pero and Susanna Ricco and Rahul Sukthankar and Vittorio
Ferrari | Recovering Spatiotemporal Correspondence between Deformable Objects by
Exploiting Consistent Foreground Motion in Video | 9 pages, 14 figures. This article is obsolete. Its contents are now
covered in arXiv:1511.09319, where we discuss a comprehensive system for
behavior discovery and spatial alignment of articulated object classes from
unstructured video (available at https://arxiv.org/abs/1511.09319) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given unstructured videos of deformable objects, we automatically recover
spatiotemporal correspondences to map one object to another (such as animals in
the wild). While traditional methods based on appearance fail in such
challenging conditions, we exploit consistency in object motion between
instances. Our approach discovers pairs of short video intervals where the
object moves in a consistent manner and uses these candidates as seeds for
spatial alignment. We model the spatial correspondence between the point
trajectories on the object in one interval to those in the other using a
time-varying Thin Plate Spline deformation model. On a large dataset of tiger
and horse videos, our method automatically aligns thousands of pairs of frames
to a high accuracy, and outperforms the popular SIFT Flow algorithm.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2014 13:47:52 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Apr 2015 22:52:04 GMT"
},
{
"version": "v3",
"created": "Tue, 16 Aug 2016 22:33:33 GMT"
}
] | 2016-08-18T00:00:00 | [
[
"Del Pero",
"Luca",
""
],
[
"Ricco",
"Susanna",
""
],
[
"Sukthankar",
"Rahul",
""
],
[
"Ferrari",
"Vittorio",
""
]
] | TITLE: Recovering Spatiotemporal Correspondence between Deformable Objects by
Exploiting Consistent Foreground Motion in Video
ABSTRACT: Given unstructured videos of deformable objects, we automatically recover
spatiotemporal correspondences to map one object to another (such as animals in
the wild). While traditional methods based on appearance fail in such
challenging conditions, we exploit consistency in object motion between
instances. Our approach discovers pairs of short video intervals where the
object moves in a consistent manner and uses these candidates as seeds for
spatial alignment. We model the spatial correspondence between the point
trajectories on the object in one interval to those in the other using a
time-varying Thin Plate Spline deformation model. On a large dataset of tiger
and horse videos, our method automatically aligns thousands of pairs of frames
to a high accuracy, and outperforms the popular SIFT Flow algorithm.
| no_new_dataset | 0.95594 |
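A single-time-slice sketch of the deformation model (the paper's spline is time-varying): fit a thin plate spline through a handful of control correspondences, then warp other trajectory points with it. The correspondences here are invented:

```python
# Illustration with SciPy's thin-plate RBF: map points from one object
# instance onto another through control-point correspondences.
import numpy as np
from scipy.interpolate import Rbf

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], dtype=float)
dst = src * 1.1 + np.array([0.2, -0.1])        # invented correspondences

warp_x = Rbf(src[:, 0], src[:, 1], dst[:, 0], function='thin_plate')
warp_y = Rbf(src[:, 0], src[:, 1], dst[:, 1], function='thin_plate')

points = np.array([[0.25, 0.75], [0.8, 0.2]])  # trajectory points to map
mapped = np.stack([warp_x(points[:, 0], points[:, 1]),
                   warp_y(points[:, 0], points[:, 1])], axis=1)
print(mapped)   # approximately points * 1.1 + [0.2, -0.1]
```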
1509.05371 | Felix Trier | Peter Burkert, Felix Trier, Muhammad Zeshan Afzal, Andreas Dengel,
Marcus Liwicki | DeXpression: Deep Convolutional Neural Network for Expression
Recognition | Under consideration for publication in Pattern Recognition Letters | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a convolutional neural network (CNN) architecture for facial
expression recognition. The proposed architecture is independent of any
hand-crafted feature extraction and performs better than the earlier proposed
convolutional neural network based approaches. We visualize the automatically
extracted features which have been learned by the network in order to provide a
better understanding. The standard datasets, i.e. Extended Cohn-Kanade (CKP)
and MMI Facial Expression Database, are used for the quantitative evaluation. On
the CKP set the current state of the art approach, using CNNs, achieves an
accuracy of 99.2%. For the MMI dataset, currently the best accuracy for emotion
recognition is 93.33%. The proposed architecture achieves 99.6% for CKP and
98.63% for MMI, therefore performing better than the state of the art using
CNNs. Automatic facial expression recognition has a broad spectrum of
applications such as human-computer interaction and safety systems. This is due
to the fact that non-verbal cues are important forms of communication and play
a pivotal role in interpersonal communication. The performance of the proposed
architecture endorses the efficacy and reliable usage of the proposed work for
real world applications.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2015 18:49:10 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Aug 2016 19:34:55 GMT"
}
] | 2016-08-18T00:00:00 | [
[
"Burkert",
"Peter",
""
],
[
"Trier",
"Felix",
""
],
[
"Afzal",
"Muhammad Zeshan",
""
],
[
"Dengel",
"Andreas",
""
],
[
"Liwicki",
"Marcus",
""
]
] | TITLE: DeXpression: Deep Convolutional Neural Network for Expression
Recognition
ABSTRACT: We propose a convolutional neural network (CNN) architecture for facial
expression recognition. The proposed architecture is independent of any
hand-crafted feature extraction and performs better than the earlier proposed
convolutional neural network based approaches. We visualize the automatically
extracted features which have been learned by the network in order to provide a
better understanding. The standard datasets, i.e. Extended Cohn-Kanade (CKP)
and MMI Facial Expression Database, are used for the quantitative evaluation. On
the CKP set the current state of the art approach, using CNNs, achieves an
accuracy of 99.2%. For the MMI dataset, currently the best accuracy for emotion
recognition is 93.33%. The proposed architecture achieves 99.6% for CKP and
98.63% for MMI, therefore performing better than the state of the art using
CNNs. Automatic facial expression recognition has a broad spectrum of
applications such as human-computer interaction and safety systems. This is due
to the fact that non-verbal cues are important forms of communication and play
a pivotal role in interpersonal communication. The performance of the proposed
architecture endorses the efficacy and reliable usage of the proposed work for
real world applications.
| no_new_dataset | 0.946892 |
1603.06182 | Haimin Zhang | Haimin Zhang | Modelling Temporal Information Using Discrete Fourier Transform for
Video Classification | to be revised | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, video classification has attracted intensive research efforts. However,
most existing works are based on frame-level visual features, which might fail
to model temporal information, e.g. characteristics accumulated over time.
In order to capture video temporal information, we propose to analyse features
in frequency domain transformed by discrete Fourier transform (DFT features).
Frame-level features are firstly extract by a pre-trained deep convolutional
neural network (CNN). Then, time domain features are transformed and
interpolated into DFT features. CNN and DFT features are further encoded by
using different pooling methods and fused for video classification. In this
way, static image features extracted from a pre-trained deep CNN and temporal
information represented by DFT features are jointly considered for video
classification. We test our method for video emotion classification and action
recognition. Experimental results demonstrate that combining DFT features can
effectively capture temporal information and therefore improve the performance
of both video emotion classification and action recognition. Our approach has
achieved a state-of-the-art performance on the largest video emotion dataset
(VideoEmotion-8 dataset) and competitive results on UCF-101.
| [
{
"version": "v1",
"created": "Sun, 20 Mar 2016 04:28:21 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2016 00:42:37 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Jul 2016 07:29:40 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Jul 2016 01:17:17 GMT"
},
{
"version": "v5",
"created": "Wed, 17 Aug 2016 00:48:55 GMT"
}
] | 2016-08-18T00:00:00 | [
[
"Zhang",
"Haimin",
""
]
] | TITLE: Modelling Temporal Information Using Discrete Fourier Transform for
Video Classification
ABSTRACT: Recently, video classification has attracted intensive research efforts. However,
most existing works are based on frame-level visual features, which might fail
to model temporal information, e.g. characteristics accumulated over time.
In order to capture video temporal information, we propose to analyse features
in frequency domain transformed by discrete Fourier transform (DFT features).
Frame-level features are firstly extract by a pre-trained deep convolutional
neural network (CNN). Then, time domain features are transformed and
interpolated into DFT features. CNN and DFT features are further encoded by
using different pooling methods and fused for video classification. In this
way, static image features extracted from a pre-trained deep CNN and temporal
information represented by DFT features are jointly considered for video
classification. We test our method for video emotion classification and action
recognition. Experimental results demonstrate that combining DFT features can
effectively capture temporal information and therefore improve the performance
of both video emotion classification and action recognition. Our approach has
achieved a state-of-the-art performance on the largest video emotion dataset
(VideoEmotion-8 dataset) and competitive results on UCF-101.
| no_new_dataset | 0.946448 |
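The transform-and-interpolate step is easy to sketch; the fixed temporal length and coefficient count below are assumptions, not values from the paper:

```python
# Sketch: per feature dimension, resample the temporal profile of
# frame-level CNN features and keep low-frequency DFT magnitudes.
import numpy as np

def dft_features(frame_feats, target_len=64, n_coeffs=8):
    """frame_feats: (T, D) array of per-frame CNN features."""
    T, D = frame_feats.shape
    src = np.linspace(0.0, 1.0, T)
    dst = np.linspace(0.0, 1.0, target_len)
    resampled = np.stack([np.interp(dst, src, frame_feats[:, d])
                          for d in range(D)], axis=1)
    spectrum = np.abs(np.fft.rfft(resampled, axis=0))  # (target_len//2+1, D)
    return spectrum[:n_coeffs].reshape(-1)             # slow temporal trends

video = np.random.rand(37, 128)        # 37 frames of 128-d features
print(dft_features(video).shape)       # (1024,) video-level descriptor
```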
1608.04783 | Aileme Omogbai Aileme Omogbai | Aileme Omogbai | Application of multiview techniques to NHANES dataset | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Disease prediction or classification using health datasets involves using
well-known predictors associated with the disease as features for the models.
This study considers multiple data components of an individual's health, using
the relationship between variables to generate features that may improve the
performance of disease classification models. In order to capture information
from different aspects of the data, this project uses a multiview learning
approach, using Canonical Correlation Analysis (CCA), a technique that finds
projections with maximum correlations between two data views. Data categories
collected from the NHANES survey (1999-2014) are used as views to learn the
multiview representations. The usefulness of the representations is
demonstrated by applying them as features in a Diabetes classification task.
| [
{
"version": "v1",
"created": "Tue, 16 Aug 2016 21:20:30 GMT"
}
] | 2016-08-18T00:00:00 | [
[
"Omogbai",
"Aileme",
""
]
] | TITLE: Application of multiview techniques to NHANES dataset
ABSTRACT: Disease prediction or classification using health datasets involves using
well-known predictors associated with the disease as features for the models.
This study considers multiple data components of an individual's health, using
the relationship between variables to generate features that may improve the
performance of disease classification models. In order to capture information
from different aspects of the data, this project uses a multiview learning
approach, using Canonical Correlation Analysis (CCA), a technique that finds
projections with maximum correlations between two data views. Data categories
collected from the NHANES survey (1999-2014) are used as views to learn the
multiview representations. The usefulness of the representations is
demonstrated by applying them as features in a Diabetes classification task.
| no_new_dataset | 0.94801 |
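The multiview step can be sketched with scikit-learn's CCA on synthetic stand-ins for two NHANES data categories:

```python
# Sketch: CCA projects two views onto maximally correlated components,
# whose concatenation then serves as features for a classifier.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))                    # shared health factor
view_a = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(200, 6))
view_b = latent @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(200, 4))

cca = CCA(n_components=2)
za, zb = cca.fit_transform(view_a, view_b)
features = np.hstack([za, zb])                        # classifier input
print(np.corrcoef(za[:, 0], zb[:, 0])[0, 1])          # close to 1
```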
1608.04830 | Truyen Tran | Kien Do, Truyen Tran, Dinh Phung and Svetha Venkatesh | Outlier Detection on Mixed-Type Data: An Energy-based Approach | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Outlier detection amounts to finding data points that differ significantly
from the norm. Classic outlier detection methods are largely designed for
a single data type, such as continuous or discrete. However, real-world data is
increasingly heterogeneous, where a data point can have both discrete and
continuous attributes. Handling mixed-type data in a disciplined way remains a
great challenge. In this paper, we propose a new unsupervised outlier detection
method for mixed-type data based on Mixed-variate Restricted Boltzmann Machine
(Mv.RBM). The Mv.RBM is a principled probabilistic method that models data
density. We propose to use \emph{free-energy} derived from Mv.RBM as outlier
score to detect outliers as those data points lying in low density regions. The
method is fast to learn and compute, and is scalable to massive datasets. At the
same time, the outlier score is identical to the data negative log-density up to an
additive constant. We evaluate the proposed method on synthetic and real-world
datasets and demonstrate that (a) proper handling of mixed types is necessary in
outlier detection, and (b) the free energy of Mv.RBM is a powerful and efficient
outlier scoring method, which is highly competitive against state-of-the-arts.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 01:41:40 GMT"
}
] | 2016-08-18T00:00:00 | [
[
"Do",
"Kien",
""
],
[
"Tran",
"Truyen",
""
],
[
"Phung",
"Dinh",
""
],
[
"Venkatesh",
"Svetha",
""
]
] | TITLE: Outlier Detection on Mixed-Type Data: An Energy-based Approach
ABSTRACT: Outlier detection amounts to finding data points that differ significantly
from the norm. Classic outlier detection methods are largely designed for
a single data type, such as continuous or discrete. However, real-world data is
increasingly heterogeneous, where a data point can have both discrete and
continuous attributes. Handling mixed-type data in a disciplined way remains a
great challenge. In this paper, we propose a new unsupervised outlier detection
method for mixed-type data based on Mixed-variate Restricted Boltzmann Machine
(Mv.RBM). The Mv.RBM is a principled probabilistic method that models data
density. We propose to use \emph{free-energy} derived from Mv.RBM as outlier
score to detect outliers as those data points lying in low density regions. The
method is fast to learn and compute, and is scalable to massive datasets. At the
same time, the outlier score is identical to the data negative log-density up to an
additive constant. We evaluate the proposed method on synthetic and real-world
datasets and demonstrate that (a) proper handling of mixed types is necessary in
outlier detection, and (b) the free energy of Mv.RBM is a powerful and efficient
outlier scoring method, which is highly competitive against state-of-the-arts.
| no_new_dataset | 0.948489 |
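The scoring step itself is compact. The Mv.RBM handles mixed types; for brevity this sketch computes the free energy of a plain Bernoulli RBM with untrained weights, which still shows how the score relates to negative log-density:

```python
# Sketch: rank points by RBM free energy (high free energy = low density).
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 10, 6
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
a = np.zeros(n_visible)            # visible biases
b = np.zeros(n_hidden)             # hidden biases

def free_energy(v):
    """F(v) = -a.v - sum_j log(1 + exp(b_j + (v W)_j))."""
    return -v @ a - np.log1p(np.exp(v @ W + b)).sum(axis=-1)

data = rng.integers(0, 2, size=(5, n_visible)).astype(float)
scores = free_energy(data)            # equals -log p(v) up to a constant
print(scores.round(3), np.argsort(-scores))   # highest free energy first
```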
1608.04875 | Sandipan Sikdar | Sandipan Sikdar, Matteo Marsili, Niloy Ganguly, Animesh Mukherjee | Anomalies in the peer-review system: A case study of the journal of High
Energy Physics | 25th ACM International Conference on Information and Knowledge
Management (CIKM 2016) | null | 10.1145/2983323.2983675 | null | cs.DL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The peer-review system has long been relied upon for bringing quality research to
the notice of the scientific community and also preventing flawed research from
entering into the literature. The need for the peer-review system has often
been debated as in numerous cases it has failed in its task and in most of
these cases editors and the reviewers were thought to be responsible for not
being able to correctly judge the quality of the work. This raises a question
"Can the peer-review system be improved?" Since editors and reviewers are the
most important pillars of a reviewing system, we in this work, attempt to
address a related question - given the editing/reviewing history of the editors
or re- viewers "can we identify the under-performing ones?", with citations
received by the edited/reviewed papers being used as proxy for quantifying
performance. We term such review- ers and editors as anomalous and we believe
identifying and removing them shall improve the performance of the peer- review
system. Using a massive dataset of Journal of High Energy Physics (JHEP)
consisting of 29k papers submitted between 1997 and 2015 with 95 editors and
4035 reviewers and their review history, we identify several factors which
point to anomalous behavior of referees and editors. In fact, the anomalous
editors and reviewers account for 26.8% and 14.5% of the total editors and
reviewers, respectively, and for most of these anomalous reviewers the
performance degrades alarmingly over time.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 06:48:08 GMT"
}
] | 2016-08-18T00:00:00 | [
[
"Sikdar",
"Sandipan",
""
],
[
"Marsili",
"Matteo",
""
],
[
"Ganguly",
"Niloy",
""
],
[
"Mukherjee",
"Animesh",
""
]
] | TITLE: Anomalies in the peer-review system: A case study of the journal of High
Energy Physics
ABSTRACT: The peer-review system has long been relied upon for bringing quality research to
the notice of the scientific community and also preventing flawed research from
entering into the literature. The need for the peer-review system has often
been debated as in numerous cases it has failed in its task and in most of
these cases editors and the reviewers were thought to be responsible for not
being able to correctly judge the quality of the work. This raises a question
"Can the peer-review system be improved?" Since editors and reviewers are the
most important pillars of a reviewing system, we, in this work, attempt to
address a related question - given the editing/reviewing history of the editors
or reviewers, "can we identify the under-performing ones?", with citations
received by the edited/reviewed papers being used as a proxy for quantifying
performance. We term such reviewers and editors as anomalous and we believe
identifying and removing them shall improve the performance of the peer-review
system. Using a massive dataset of Journal of High Energy Physics (JHEP)
consisting of 29k papers submitted between 1997 and 2015 with 95 editors and
4035 reviewers and their review history, we identify several factors which
point to anomalous behavior of referees and editors. In fact, the anomalous
editors and reviewers account for 26.8% and 14.5% of the total editors and
reviewers, respectively, and for most of these anomalous reviewers the
performance degrades alarmingly over time.
| no_new_dataset | 0.916931 |
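The flagging logic can be caricatured in a few lines; the records and the 0.5 threshold below are invented, and the paper's actual criteria are richer:

```python
# Toy sketch: flag editors whose median citations-per-edited-paper falls
# well below the journal-wide median.
import statistics

papers = [  # (editor, citations of the edited paper) -- invented
    ('e1', 30), ('e1', 22), ('e1', 41),
    ('e2', 2),  ('e2', 0),  ('e2', 5),
    ('e3', 18), ('e3', 25),
]
overall = statistics.median(c for _, c in papers)

by_editor = {}
for editor, c in papers:
    by_editor.setdefault(editor, []).append(c)

anomalous = [e for e, cs in by_editor.items()
             if statistics.median(cs) < 0.5 * overall]
print(anomalous)   # ['e2']
```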
1608.04959 | Rakshith Shetty | Rakshith Shetty and Jorma Laaksonen | Frame- and Segment-Level Features and Candidate Pool Evaluation for
Video Caption Generation | null | null | 10.1145/2964284.2984062 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present our submission to the Microsoft Video to Language Challenge of
generating short captions describing videos in the challenge dataset. Our model
is based on the encoder--decoder pipeline, popular in image and video
captioning systems. We propose to utilize two different kinds of video
features, one to capture the video content in terms of objects and attributes,
and the other to capture the motion and action information. Using these diverse
features we train models specializing in two separate input sub-domains. We
then train an evaluator model which is used to pick the best caption from the
pool of candidates generated by these domain expert models. We argue that this
approach is better suited for the current video captioning task, compared to
using a single model, due to the diversity in the dataset.
The efficacy of our method is proven by the fact that it was rated best in the
MSR Video to Language Challenge, as per human evaluation. Additionally, we were
ranked second in the table based on automatic evaluation metrics.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 13:30:06 GMT"
}
] | 2016-08-18T00:00:00 | [
[
"Shetty",
"Rakshith",
""
],
[
"Laaksonen",
"Jorma",
""
]
] | TITLE: Frame- and Segment-Level Features and Candidate Pool Evaluation for
Video Caption Generation
ABSTRACT: We present our submission to the Microsoft Video to Language Challenge of
generating short captions describing videos in the challenge dataset. Our model
is based on the encoder--decoder pipeline, popular in image and video
captioning systems. We propose to utilize two different kinds of video
features, one to capture the video content in terms of objects and attributes,
and the other to capture the motion and action information. Using these diverse
features we train models specializing in two separate input sub-domains. We
then train an evaluator model which is used to pick the best caption from the
pool of candidates generated by these domain expert models. We argue that this
approach is better suited for the current video captioning task, compared to
using a single model, due to the diversity in the dataset.
The efficacy of our method is proven by the fact that it was rated best in the
MSR Video to Language Challenge, as per human evaluation. Additionally, we were
ranked second in the table based on automatic evaluation metrics.
| no_new_dataset | 0.950365 |
1608.05054 | Muhammet Bastan | Muhammet Bastan and Hilal Kandemir and Busra Canturk | MT3S: Mobile Turkish Scene Text-to-Speech System for the Visually
Impaired | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reading text is one of the essential needs of visually impaired people.
We developed a mobile system that can read Turkish scene and book text, using a
fast gradient-based multi-scale text detection algorithm for real-time
operation and Tesseract OCR engine for character recognition. We evaluated the
OCR accuracy and running time of our system on a new, publicly available mobile
Turkish scene text dataset we constructed and also compared with
state-of-the-art systems. Our system proved to be much faster, able to run on a
mobile device, with OCR accuracy comparable to the state-of-the-art.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 19:24:23 GMT"
}
] | 2016-08-18T00:00:00 | [
[
"Bastan",
"Muhammet",
""
],
[
"Kandemir",
"Hilal",
""
],
[
"Canturk",
"Busra",
""
]
] | TITLE: MT3S: Mobile Turkish Scene Text-to-Speech System for the Visually
Impaired
ABSTRACT: Reading text is one of the essential needs of visually impaired people.
We developed a mobile system that can read Turkish scene and book text, using a
fast gradient-based multi-scale text detection algorithm for real-time
operation and Tesseract OCR engine for character recognition. We evaluated the
OCR accuracy and running time of our system on a new, publicly available mobile
Turkish scene text dataset we constructed and also compared with
state-of-the-art systems. Our system proved to be much faster, able to run on a
mobile device, with OCR accuracy comparable to the state-of-the-art.
| new_dataset | 0.962532 |
1604.02129 | Scott Workman | Scott Workman, Menghua Zhai, Nathan Jacobs | Horizon Lines in the Wild | British Machine Vision Conference (BMVC) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The horizon line is an important contextual attribute for a wide variety of
image understanding tasks. As such, many methods have been proposed to estimate
its location from a single image. These methods typically require the image to
contain specific cues, such as vanishing points, coplanar circles, and regular
textures, thus limiting their real-world applicability. We introduce a large,
realistic evaluation dataset, Horizon Lines in the Wild (HLW), containing
natural images with labeled horizon lines. Using this dataset, we investigate
the application of convolutional neural networks for directly estimating the
horizon line, without requiring any explicit geometric constraints or other
special cues. An extensive evaluation shows that using our CNNs, either in
isolation or in conjunction with a previous geometric approach, we achieve
state-of-the-art results on the challenging HLW dataset and two existing
benchmark datasets.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 19:38:24 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2016 18:48:57 GMT"
}
] | 2016-08-17T00:00:00 | [
[
"Workman",
"Scott",
""
],
[
"Zhai",
"Menghua",
""
],
[
"Jacobs",
"Nathan",
""
]
] | TITLE: Horizon Lines in the Wild
ABSTRACT: The horizon line is an important contextual attribute for a wide variety of
image understanding tasks. As such, many methods have been proposed to estimate
its location from a single image. These methods typically require the image to
contain specific cues, such as vanishing points, coplanar circles, and regular
textures, thus limiting their real-world applicability. We introduce a large,
realistic evaluation dataset, Horizon Lines in the Wild (HLW), containing
natural images with labeled horizon lines. Using this dataset, we investigate
the application of convolutional neural networks for directly estimating the
horizon line, without requiring any explicit geometric constraints or other
special cues. An extensive evaluation shows that using our CNNs, either in
isolation or in conjunction with a previous geometric approach, we achieve
state-of-the-art results on the challenging HLW dataset and two existing
benchmark datasets.
| new_dataset | 0.968261 |
1606.06204 | Richard Barnes | Richard Barnes | Parallel Priority-Flood Depression Filling For Trillion Cell Digital
Elevation Models On Desktops Or Clusters | 21 pages, 4 tables, 8 figures | Computers and Geosciences, Volume 96, November 2016, pp. 56-68 | 10.1016/j.cageo.2016.07.001 | null | cs.DC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithms for extracting hydrologic features and properties from digital
elevation models (DEMs) are challenged by large datasets, which often cannot
fit within a computer's RAM. Depression filling is an important preconditioning
step to many of these algorithms. Here, I present a new, linearly-scaling
algorithm which parallelizes the Priority-Flood depression-filling algorithm by
subdividing a DEM into tiles. Using a single-producer, multi-consumer design,
the new algorithm works equally well on one core, multiple cores, or multiple
machines and can take advantage of large memories or cope with small ones.
Unlike previous algorithms, the new algorithm guarantees a fixed number of
memory access and communication events per subdivision of the DEM. In
comparison testing, this results in the new algorithm running generally faster
while using fewer resources than previous algorithms. For moderately sized
tiles, the algorithm exhibits ~60% strong and weak scaling efficiencies up to
48 cores, and linear time scaling across datasets ranging over three orders of
magnitude. The largest dataset on which I run the algorithm has 2 trillion
(2*10^12) cells. With 48 cores, processing required 4.8 hours wall-time (9.3
compute-days). This test is three orders of magnitude larger than any
previously performed in the literature. Complete, well-commented source code
and correctness tests are available for download from a repository.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2016 16:52:12 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2016 22:35:43 GMT"
}
] | 2016-08-17T00:00:00 | [
[
"Barnes",
"Richard",
""
]
] | TITLE: Parallel Priority-Flood Depression Filling For Trillion Cell Digital
Elevation Models On Desktops Or Clusters
ABSTRACT: Algorithms for extracting hydrologic features and properties from digital
elevation models (DEMs) are challenged by large datasets, which often cannot
fit within a computer's RAM. Depression filling is an important preconditioning
step to many of these algorithms. Here, I present a new, linearly-scaling
algorithm which parallelizes the Priority-Flood depression-filling algorithm by
subdividing a DEM into tiles. Using a single-producer, multi-consumer design,
the new algorithm works equally well on one core, multiple cores, or multiple
machines and can take advantage of large memories or cope with small ones.
Unlike previous algorithms, the new algorithm guarantees a fixed number of
memory access and communication events per subdivision of the DEM. In
comparison testing, this results in the new algorithm running generally faster
while using fewer resources than previous algorithms. For moderately sized
tiles, the algorithm exhibits ~60% strong and weak scaling efficiencies up to
48 cores, and linear time scaling across datasets ranging over three orders of
magnitude. The largest dataset on which I run the algorithm has 2 trillion
(2*10^12) cells. With 48 cores, processing required 4.8 hours wall-time (9.3
compute-days). This test is three orders of magnitude larger than any
previously performed in the literature. Complete, well-commented source code
and correctness tests are available for download from a repository.
| no_new_dataset | 0.950686 |
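For reference, the sequential core that the paper parallelises over tiles fits in a short function; this sketch uses 4-connectivity on a tiny toy grid:

```python
# Sequential Priority-Flood sketch: seed the priority queue with edge
# cells, grow inwards, and raise any cell below its spill elevation.
import heapq
import numpy as np

def priority_flood_fill(dem):
    filled = dem.astype(float).copy()
    rows, cols = filled.shape
    visited = np.zeros_like(filled, dtype=bool)
    pq = []
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):   # boundary cells
                heapq.heappush(pq, (filled[r, c], r, c))
                visited[r, c] = True
    while pq:
        elev, r, c = heapq.heappop(pq)                     # lowest cell first
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                visited[nr, nc] = True
                filled[nr, nc] = max(filled[nr, nc], elev)  # fill depression
                heapq.heappush(pq, (filled[nr, nc], nr, nc))
    return filled

dem = np.array([[5, 5, 5, 5],
                [5, 1, 2, 5],
                [5, 2, 1, 4],
                [5, 5, 5, 5]])
print(priority_flood_fill(dem))   # the interior pit rises to its spill level, 4
```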
1608.00075 | Renbo Zhao | Renbo Zhao, Vincent Y. F. Tan, Huan Xu | Online Nonnegative Matrix Factorization with General Divergences | null | null | null | null | stat.ML cs.IT cs.NA math.IT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a unified and systematic framework for performing online
nonnegative matrix factorization under a wide variety of important divergences.
The online nature of our algorithm makes it particularly amenable to
large-scale data. We prove that the sequence of learned dictionaries converges
almost surely to the set of critical points of the expected loss function. We
do so by leveraging the theory of stochastic approximations and projected
dynamical systems. This result substantially generalizes the previous results
obtained only for the squared-$\ell_2$ loss. Moreover, the novel techniques
involved in our analysis open new avenues for analyzing similar matrix
factorization problems. The computational efficiency and the quality of the
learned dictionary of our algorithm are verified empirically on both synthetic
and real datasets. In particular, on the tasks of topic learning, shadow
removal and image denoising, our algorithm achieves superior trade-offs between
the quality of learned dictionary and running time over the batch and other
online NMF algorithms.
| [
{
"version": "v1",
"created": "Sat, 30 Jul 2016 06:07:38 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Aug 2016 02:36:50 GMT"
}
] | 2016-08-17T00:00:00 | [
[
"Zhao",
"Renbo",
""
],
[
"Tan",
"Vincent Y. F.",
""
],
[
"Xu",
"Huan",
""
]
] | TITLE: Online Nonnegative Matrix Factorization with General Divergences
ABSTRACT: We develop a unified and systematic framework for performing online
nonnegative matrix factorization under a wide variety of important divergences.
The online nature of our algorithm makes it particularly amenable to
large-scale data. We prove that the sequence of learned dictionaries converges
almost surely to the set of critical points of the expected loss function. We
do so by leveraging the theory of stochastic approximations and projected
dynamical systems. This result substantially generalizes the previous results
obtained only for the squared-$\ell_2$ loss. Moreover, the novel techniques
involved in our analysis open new avenues for analyzing similar matrix
factorization problems. The computational efficiency and the quality of the
learned dictionary of our algorithm are verified empirically on both synthetic
and real datasets. In particular, on the tasks of topic learning, shadow
removal and image denoising, our algorithm achieves superior trade-offs between
the quality of learned dictionary and running time over the batch and other
online NMF algorithms.
| no_new_dataset | 0.943504 |
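A toy sketch of the online scheme with sufficient statistics; the paper covers general divergences, while this uses the squared-l2 loss and multiplicative updates for simplicity:

```python
# Online NMF sketch: per-sample nonnegative coding, accumulated
# statistics, and a multiplicative dictionary update.
import numpy as np

rng = np.random.default_rng(0)
d, k, eps = 20, 5, 1e-8
W = np.abs(rng.normal(size=(d, k)))     # nonnegative dictionary
A = np.zeros((k, k))                    # sufficient statistic: sum of h h^T
B = np.zeros((d, k))                    # sufficient statistic: sum of x h^T

def encode(x, W, iters=50):
    """Nonnegative coefficients h >= 0 via multiplicative updates."""
    h = np.full(W.shape[1], 0.1)
    for _ in range(iters):
        h *= (W.T @ x) / (W.T @ W @ h + eps)
    return h

for _ in range(2000):                   # stream of data points
    x = np.abs(rng.normal(size=d))
    h = encode(x, W)
    A += np.outer(h, h)
    B += np.outer(x, h)
    W *= B / (W @ A + eps)              # multiplicative dictionary step
    W = np.maximum(W, eps)

print("reconstruction error:", np.linalg.norm(x - W @ encode(x, W)))
```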
1608.03793 | Rajiv Shah | Rajiv Shah and Rob Romijnders | Applying Deep Learning to Basketball Trajectories | KDD 2016, Large Scale Sports Analytic Workshop | null | null | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the emerging trends for sports analytics is the growing use of player
and ball tracking data. A parallel development is deep learning predictive
approaches that use vast quantities of data with less reliance on feature
engineering. This paper applies recurrent neural networks in the form of
sequence modeling to predict whether a three-point shot is successful. The
models are capable of learning the trajectory of a basketball without any
knowledge of physics. For comparison, a baseline static machine learning model
with a full set of features, such as angle and velocity, in addition to the
positional data is also tested. Using a dataset of over 20,000 three-pointers
from NBA SportVu data, the models based simply on sequential positional data
outperform a static, feature-rich machine learning model in predicting whether a
three-point shot is successful. This suggests deep learning models may offer an
improvement to traditional feature based machine learning methods for tracking
data.
| [
{
"version": "v1",
"created": "Fri, 12 Aug 2016 13:50:24 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2016 18:36:44 GMT"
}
] | 2016-08-17T00:00:00 | [
[
"Shah",
"Rajiv",
""
],
[
"Romijnders",
"Rob",
""
]
] | TITLE: Applying Deep Learning to Basketball Trajectories
ABSTRACT: One of the emerging trends for sports analytics is the growing use of player
and ball tracking data. A parallel development is deep learning predictive
approaches that use vast quantities of data with less reliance on feature
engineering. This paper applies recurrent neural networks in the form of
sequence modeling to predict whether a three-point shot is successful. The
models are capable of learning the trajectory of a basketball without any
knowledge of physics. For comparison, a baseline static machine learning model
with a full set of features, such as angle and velocity, in addition to the
positional data is also tested. Using a dataset of over 20,000 three-pointers
from NBA SportVu data, the models based simply on sequential positional data
outperform a static, feature-rich machine learning model in predicting whether a
three-point shot is successful. This suggests deep learning models may offer an
improvement to traditional feature based machine learning methods for tracking
data.
| no_new_dataset | 0.943504 |
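The abstract above models raw (x, y, z) ball positions with a recurrent network. A minimal sketch of such a sequence classifier in PyTorch, assuming fixed-length trajectories and random placeholder data; the architecture and names are illustrative, not the paper's exact model:

```python
import torch
import torch.nn as nn

class ShotClassifier(nn.Module):
    """Sequence model over (x, y, z) ball positions -> P(shot is made)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, traj):              # traj: (batch, timesteps, 3)
        _, (h_n, _) = self.lstm(traj)     # h_n: (1, batch, hidden)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

model = ShotClassifier()
traj = torch.randn(8, 40, 3)              # 8 fake 40-step trajectories
made = (torch.rand(8) > 0.5).float()      # fake made/missed labels
loss = nn.functional.binary_cross_entropy(model(traj), made)
loss.backward()                           # an optimizer step would follow
print(float(loss))
```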
1608.04245 | Mike Gartrell | Mike Gartrell, Ulrich Paquet, Noam Koenigstein | The Bayesian Low-Rank Determinantal Point Process Mixture Model | 9 pages, 6 figures. This article draws heavily from arXiv:1602.05436 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Determinantal point processes (DPPs) are an elegant model for encoding
probabilities over subsets, such as shopping baskets, of a ground set, such as
an item catalog. They are useful for a number of machine learning tasks,
including product recommendation. DPPs are parametrized by a positive
semi-definite kernel matrix. Recent work has shown that using a low-rank
factorization of this kernel provides remarkable scalability improvements that
open the door to training on large-scale datasets and computing online
recommendations, both of which are infeasible with standard DPP models that use
a full-rank kernel. In this paper we present a low-rank DPP mixture model that
allows us to represent the latent structure present in observed subsets as a
mixture of a number of component low-rank DPPs, where each component DPP is
responsible for representing a portion of the observed data. The mixture model
allows us to effectively address the capacity constraints of the low-rank DPP
model. We present an efficient and scalable Markov Chain Monte Carlo (MCMC)
learning algorithm for our model that uses Gibbs sampling and stochastic
gradient Hamiltonian Monte Carlo (SGHMC). Using an evaluation on several
real-world product recommendation datasets, we show that our low-rank DPP
mixture model provides substantially better predictive performance than is
possible with a single low-rank or full-rank DPP, and significantly better
performance than several other competing recommendation methods in many cases.
| [
{
"version": "v1",
"created": "Mon, 15 Aug 2016 11:42:51 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2016 10:41:32 GMT"
}
] | 2016-08-17T00:00:00 | [
[
"Gartrell",
"Mike",
""
],
[
"Paquet",
"Ulrich",
""
],
[
"Koenigstein",
"Noam",
""
]
] | TITLE: The Bayesian Low-Rank Determinantal Point Process Mixture Model
ABSTRACT: Determinantal point processes (DPPs) are an elegant model for encoding
probabilities over subsets, such as shopping baskets, of a ground set, such as
an item catalog. They are useful for a number of machine learning tasks,
including product recommendation. DPPs are parametrized by a positive
semi-definite kernel matrix. Recent work has shown that using a low-rank
factorization of this kernel provides remarkable scalability improvements that
open the door to training on large-scale datasets and computing online
recommendations, both of which are infeasible with standard DPP models that use
a full-rank kernel. In this paper we present a low-rank DPP mixture model that
allows us to represent the latent structure present in observed subsets as a
mixture of a number of component low-rank DPPs, where each component DPP is
responsible for representing a portion of the observed data. The mixture model
allows us to effectively address the capacity constraints of the low-rank DPP
model. We present an efficient and scalable Markov Chain Monte Carlo (MCMC)
learning algorithm for our model that uses Gibbs sampling and stochastic
gradient Hamiltonian Monte Carlo (SGHMC). Using an evaluation on several
real-world product recommendation datasets, we show that our low-rank DPP
mixture model provides substantially better predictive performance than is
possible with a single low-rank or full-rank DPP, and significantly better
performance than several other competing recommendation methods in many cases.
| no_new_dataset | 0.950134 |
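The low-rank factorization described above makes subset probabilities cheap: for a kernel $L = BB^\top$, the normalizer satisfies $\det(I_n + L) = \det(I_k + B^\top B)$. A minimal sketch of this computation (illustrative names; the mixture model and MCMC learning are omitted):

```python
import numpy as np

def low_rank_dpp_logprob(B, Y):
    """log P(Y) for a DPP with low-rank kernel L = B @ B.T.

    B : (n, k) factor matrix, Y : list of item indices.
    Uses det(I_n + L) = det(I_k + B.T @ B) to stay low-rank.
    """
    L_Y = B[Y] @ B[Y].T
    log_num = np.linalg.slogdet(L_Y)[1]
    log_den = np.linalg.slogdet(np.eye(B.shape[1]) + B.T @ B)[1]
    return log_num - log_den

rng = np.random.default_rng(1)
B = rng.normal(size=(100, 10))   # a catalog of 100 items, rank-10 kernel
print(low_rank_dpp_logprob(B, [3, 17, 42]))
```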
1608.04314 | Miaojing Shi | Miaojing Shi and Vittorio Ferrari | Weakly Supervised Object Localization Using Size Estimates | ECCV 2016 camera-ready | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a technique for weakly supervised object localization (WSOL),
building on the observation that WSOL algorithms usually work better on images
with bigger objects. Instead of training the object detector on the entire
training set at the same time, we propose a curriculum learning strategy to
feed training images into the WSOL learning loop in an order from images
containing bigger objects down to smaller ones. To automatically determine the
order, we train a regressor to estimate the size of the object given the whole
image as input. Furthermore, we use these size estimates to further improve the
re-localization step of WSOL by assigning weights to object proposals according
to how close their size matches the estimated object size. We demonstrate the
effectiveness of using size order and size weighting on the challenging PASCAL
VOC 2007 dataset, where we achieve a significant improvement over existing
state-of-the-art WSOL techniques.
| [
{
"version": "v1",
"created": "Mon, 15 Aug 2016 16:07:24 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Aug 2016 11:31:41 GMT"
}
] | 2016-08-17T00:00:00 | [
[
"Shi",
"Miaojing",
""
],
[
"Ferrari",
"Vittorio",
""
]
] | TITLE: Weakly Supervised Object Localization Using Size Estimates
ABSTRACT: We present a technique for weakly supervised object localization (WSOL),
building on the observation that WSOL algorithms usually work better on images
with bigger objects. Instead of training the object detector on the entire
training set at the same time, we propose a curriculum learning strategy to
feed training images into the WSOL learning loop in an order from images
containing bigger objects down to smaller ones. To automatically determine the
order, we train a regressor to estimate the size of the object given the whole
image as input. Furthermore, we use these size estimates to further improve the
re-localization step of WSOL by assigning weights to object proposals according
to how close their size matches the estimated object size. We demonstrate the
effectiveness of using size order and size weighting on the challenging PASCAL
VOC 2007 dataset, where we achieve a significant improvement over existing
state-of-the-art WSOL techniques.
| no_new_dataset | 0.951997 |
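The proposal weighting described above can be sketched in a few lines; the Gaussian-in-log-area form below is an assumption for illustration, not necessarily the paper's exact weighting:

```python
import numpy as np

def size_weights(proposal_areas, estimated_area, sigma=0.25):
    """Weight object proposals by how well their area matches the
    regressor's size estimate (Gaussian weighting in log-area space)."""
    d = np.log(proposal_areas) - np.log(estimated_area)
    return np.exp(-0.5 * (d / sigma) ** 2)

areas = np.array([500.0, 2000.0, 2100.0, 9000.0])
print(size_weights(areas, estimated_area=2000.0))  # close sizes weigh most
```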
1608.04442 | Mayank Kejriwal | Mayank Kejriwal, Daniel P. Miranker | Experience: Type alignment on DBpedia and Freebase | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linked Open Data exhibits growth in both volume and variety of published
data. Due to this variety, instances of many different types (e.g. Person) can
be found in published datasets. Type alignment is the problem of automatically
matching types (in a possibly many-to-many fashion) between two such datasets.
Type alignment is an important preprocessing step in instance matching.
Instance matching concerns identifying pairs of instances referring to the same
underlying entity. By performing type alignment a priori, only instances
conforming to aligned types are processed together, leading to significant
savings. This article describes a type alignment experience with two
large-scale cross-domain RDF knowledge graphs, DBpedia and Freebase, that
contain hundreds, or even thousands, of unique types. Specifically, we present
a MapReduce-based type alignment algorithm and show that there are at least
three reasonable ways of evaluating type alignment within the larger context of
instance matching. We comment on the consistency of those results, and note
some general observations for researchers evaluating similar algorithms on
cross-domain graphs.
| [
{
"version": "v1",
"created": "Mon, 15 Aug 2016 23:56:08 GMT"
}
] | 2016-08-17T00:00:00 | [
[
"Kejriwal",
"Mayank",
""
],
[
"Miranker",
"Daniel P.",
""
]
] | TITLE: Experience: Type alignment on DBpedia and Freebase
ABSTRACT: Linked Open Data exhibits growth in both volume and variety of published
data. Due to this variety, instances of many different types (e.g. Person) can
be found in published datasets. Type alignment is the problem of automatically
matching types (in a possibly many-to-many fashion) between two such datasets.
Type alignment is an important preprocessing step in instance matching.
Instance matching concerns identifying pairs of instances referring to the same
underlying entity. By performing type alignment a priori, only instances
conforming to aligned types are processed together, leading to significant
savings. This article describes a type alignment experience with two
large-scale cross-domain RDF knowledge graphs, DBpedia and Freebase, that
contain hundreds, or even thousands, of unique types. Specifically, we present
a MapReduce-based type alignment algorithm and show that there are at least
three reasonable ways of evaluating type alignment within the larger context of
instance matching. We comment on the consistency of those results, and note
some general observations for researchers evaluating similar algorithms on
cross-domain graphs.
| no_new_dataset | 0.951863 |
1608.04689 | Hongyu Guo | Martin Renqiang Min, Hongyu Guo, Dongjin Song | A Shallow High-Order Parametric Approach to Data Visualization and
Compression | null | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explicit high-order feature interactions efficiently capture essential
structural knowledge about the data of interest and have been used for
constructing generative models. We present a supervised discriminative
High-Order Parametric Embedding (HOPE) approach to data visualization and
compression. Compared to deep embedding models with complicated deep
architectures, HOPE generates more effective high-order feature mapping through
an embarrassingly simple shallow model. Furthermore, two approaches to
generating a small number of exemplars conveying high-order interactions to
represent large-scale data sets are proposed. These exemplars in combination
with the feature mapping learned by HOPE effectively capture essential data
variations. Moreover, through HOPE, these exemplars are employed to increase
the computational efficiency of kNN classification for fast information
retrieval by thousands of times. For classification in two-dimensional
embedding space on MNIST and USPS datasets, our shallow method HOPE with simple
Sigmoid transformations significantly outperforms state-of-the-art supervised
deep embedding models based on deep neural networks, and even achieves a
historically low test error rate of 0.65% in two-dimensional space on MNIST,
which demonstrates the representational efficiency and power of supervised
shallow models with high-order feature interactions.
| [
{
"version": "v1",
"created": "Tue, 16 Aug 2016 17:54:40 GMT"
}
] | 2016-08-17T00:00:00 | [
[
"Min",
"Martin Renqiang",
""
],
[
"Guo",
"Hongyu",
""
],
[
"Song",
"Dongjin",
""
]
] | TITLE: A Shallow High-Order Parametric Approach to Data Visualization and
Compression
ABSTRACT: Explicit high-order feature interactions efficiently capture essential
structural knowledge about the data of interest and have been used for
constructing generative models. We present a supervised discriminative
High-Order Parametric Embedding (HOPE) approach to data visualization and
compression. Compared to deep embedding models with complicated deep
architectures, HOPE generates more effective high-order feature mapping through
an embarrassingly simple shallow model. Furthermore, two approaches to
generating a small number of exemplars conveying high-order interactions to
represent large-scale data sets are proposed. These exemplars in combination
with the feature mapping learned by HOPE effectively capture essential data
variations. Moreover, through HOPE, these exemplars are employed to increase
the computational efficiency of kNN classification for fast information
retrieval by thousands of times. For classification in two-dimensional
embedding space on MNIST and USPS datasets, our shallow method HOPE with simple
Sigmoid transformations significantly outperforms state-of-the-art supervised
deep embedding models based on deep neural networks, and even achieves a
historically low test error rate of 0.65% in two-dimensional space on MNIST,
which demonstrates the representational efficiency and power of supervised
shallow models with high-order feature interactions.
| no_new_dataset | 0.950041 |
1608.04698 | Dan Garant | Dan Garant, David Jensen | Evaluating Causal Models by Comparing Interventional Distributions | null | null | null | null | cs.AI stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The predominant method for evaluating the quality of causal models is to
measure the graphical accuracy of the learned model structure. We present an
alternative method for evaluating causal models that directly measures the
accuracy of estimated interventional distributions. We contrast such
distributional measures with structural measures, such as structural Hamming
distance and structural intervention distance, showing that structural measures
often correspond poorly to the accuracy of estimated interventional
distributions. We use a number of real and synthetic datasets to illustrate
various scenarios in which structural measures provide misleading results with
respect to algorithm selection and parameter tuning, and we recommend that
distributional measures become the new standard for evaluating causal models.
| [
{
"version": "v1",
"created": "Tue, 16 Aug 2016 18:32:24 GMT"
}
] | 2016-08-17T00:00:00 | [
[
"Garant",
"Dan",
""
],
[
"Jensen",
"David",
""
]
] | TITLE: Evaluating Causal Models by Comparing Interventional Distributions
ABSTRACT: The predominant method for evaluating the quality of causal models is to
measure the graphical accuracy of the learned model structure. We present an
alternative method for evaluating causal models that directly measures the
accuracy of estimated interventional distributions. We contrast such
distributional measures with structural measures, such as structural Hamming
distance and structural intervention distance, showing that structural measures
often correspond poorly to the accuracy of estimated interventional
distributions. We use a number of real and synthetic datasets to illustrate
various scenarios in which structural measures provide misleading results with
respect to algorithm selection and parameter tuning, and we recommend that
distributional measures become the new standard for evaluating causal models.
| no_new_dataset | 0.954009 |
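A distributional measure of the kind advocated above can be as simple as the total-variation distance between the true and estimated interventional distributions (a sketch; the paper's measures may differ):

```python
import numpy as np

def interventional_tv(p_true, p_est):
    """Total-variation distance between two interventional distributions
    P(Y | do(X=x)), given as probability vectors over Y's support."""
    return 0.5 * np.abs(np.asarray(p_true) - np.asarray(p_est)).sum()

# P(Y | do(X=1)) under the true model vs. under a learned model.
print(interventional_tv([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))   # 0.1
```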
1408.1664 | Yetian Chen | Yetian Chen, Jin Tian, Olga Nikolova and Srinivas Aluru | A Parallel Algorithm for Exact Bayesian Structure Discovery in Bayesian
Networks | 32 pages, 12 figures | null | null | null | cs.AI cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exact Bayesian structure discovery in Bayesian networks requires exponential
time and space. Using dynamic programming (DP), the fastest known sequential
algorithm computes the exact posterior probabilities of structural features in
$O(2(d+1)n2^n)$ time and space, if the number of nodes (variables) in the
Bayesian network is $n$ and the in-degree (the number of parents) per node is
bounded by a constant $d$. Here we present a parallel algorithm capable of
computing the exact posterior probabilities for all $n(n-1)$ edges with optimal
parallel space efficiency and nearly optimal parallel time efficiency. That is,
if $p=2^k$ processors are used, the run-time reduces to
$O(5(d+1)n2^{n-k}+k(n-k)^d)$ and the space usage becomes $O(n2^{n-k})$ per
processor. Our algorithm is based on the observation that the subproblems in
the sequential DP algorithm constitute an $n$-$D$ hypercube. We coordinate the
computation of correlated DP procedures carefully so that a large amount of
data exchange is suppressed. Further, we develop parallel techniques
for two variants of the well-known \emph{zeta transform}, which have
applications outside the context of Bayesian networks. We demonstrate the
capability of our algorithm on datasets with up to 33 variables and its
scalability on up to 2048 processors. We apply our algorithm to a biological
data set for discovering the yeast pheromone response pathways.
| [
{
"version": "v1",
"created": "Thu, 7 Aug 2014 17:40:36 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Aug 2014 04:12:09 GMT"
},
{
"version": "v3",
"created": "Sat, 13 Aug 2016 04:25:55 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Chen",
"Yetian",
""
],
[
"Tian",
"Jin",
""
],
[
"Nikolova",
"Olga",
""
],
[
"Aluru",
"Srinivas",
""
]
] | TITLE: A Parallel Algorithm for Exact Bayesian Structure Discovery in Bayesian
Networks
ABSTRACT: Exact Bayesian structure discovery in Bayesian networks requires exponential
time and space. Using dynamic programming (DP), the fastest known sequential
algorithm computes the exact posterior probabilities of structural features in
$O(2(d+1)n2^n)$ time and space, if the number of nodes (variables) in the
Bayesian network is $n$ and the in-degree (the number of parents) per node is
bounded by a constant $d$. Here we present a parallel algorithm capable of
computing the exact posterior probabilities for all $n(n-1)$ edges with optimal
parallel space efficiency and nearly optimal parallel time efficiency. That is,
if $p=2^k$ processors are used, the run-time reduces to
$O(5(d+1)n2^{n-k}+k(n-k)^d)$ and the space usage becomes $O(n2^{n-k})$ per
processor. Our algorithm is based on the observation that the subproblems in
the sequential DP algorithm constitute an $n$-$D$ hypercube. We coordinate the
computation of correlated DP procedures carefully so that a large amount of
data exchange is suppressed. Further, we develop parallel techniques
for two variants of the well-known \emph{zeta transform}, which have
applications outside the context of Bayesian networks. We demonstrate the
capability of our algorithm on datasets with up to 33 variables and its
scalability on up to 2048 processors. We apply our algorithm to a biological
data set for discovering the yeast pheromone response pathways.
| no_new_dataset | 0.947962 |
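The zeta transform mentioned above is the key subset-DP primitive. A minimal sequential sketch (the paper's contribution is parallelizing it, which this omits):

```python
def zeta_transform(f, n):
    """In-place subset-sum (zeta) transform:
    g[S] = sum of f[T] over all subsets T of S.

    f is a list of length 2**n indexed by bitmask; runs in O(n * 2**n),
    the same primitive the abstract's DP over node subsets relies on.
    """
    for i in range(n):
        for S in range(1 << n):
            if S & (1 << i):
                f[S] += f[S ^ (1 << i)]
    return f

# Example over 3 elements: f[T] = 1 for every T, so g[S] = 2**|S|.
g = zeta_transform([1] * 8, 3)
print(g)   # [1, 2, 2, 4, 2, 4, 4, 8]
```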
1601.00863 | Ming Yan | Zhimin Peng, Tianyu Wu, Yangyang Xu, Ming Yan, Wotao Yin | Coordinate Friendly Structures, Algorithms and Applications | null | Annals of Mathematical Sciences and Applications, 1 (2016), 57-119 | 10.4310/AMSA.2016.v1.n1.a2 | UCLA CAM Report 16-13 | math.OC cs.CE cs.DC math.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on coordinate update methods, which are useful for solving
problems involving large or high-dimensional datasets. They decompose a problem
into simple subproblems, where each updates one, or a small block of, variables
while fixing others. These methods can deal with linear and nonlinear mappings,
smooth and nonsmooth functions, as well as convex and nonconvex problems. In
addition, they are easy to parallelize.
The great performance of coordinate update methods depends on solving simple
subproblems. To derive simple subproblems for several new classes of
applications, this paper systematically studies coordinate-friendly operators
that perform low-cost coordinate updates.
Based on the discovered coordinate-friendly operators, as well as operator
splitting techniques, we obtain new coordinate update algorithms for a variety
of problems in machine learning, image processing, as well as sub-areas of
optimization. Several problems are treated with coordinate update for the first
time in history. The obtained algorithms are scalable to large instances
through parallel and even asynchronous computing. We present numerical examples
to illustrate how effective these algorithms are.
| [
{
"version": "v1",
"created": "Tue, 5 Jan 2016 15:33:05 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Mar 2016 23:05:07 GMT"
},
{
"version": "v3",
"created": "Sun, 14 Aug 2016 14:29:53 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Peng",
"Zhimin",
""
],
[
"Wu",
"Tianyu",
""
],
[
"Xu",
"Yangyang",
""
],
[
"Yan",
"Ming",
""
],
[
"Yin",
"Wotao",
""
]
] | TITLE: Coordinate Friendly Structures, Algorithms and Applications
ABSTRACT: This paper focuses on coordinate update methods, which are useful for solving
problems involving large or high-dimensional datasets. They decompose a problem
into simple subproblems, where each updates one, or a small block of, variables
while fixing others. These methods can deal with linear and nonlinear mappings,
smooth and nonsmooth functions, as well as convex and nonconvex problems. In
addition, they are easy to parallelize.
The great performance of coordinate update methods depends on solving simple
subproblems. To derive simple subproblems for several new classes of
applications, this paper systematically studies coordinate-friendly operators
that perform low-cost coordinate updates.
Based on the discovered coordinate-friendly operators, as well as operator
splitting techniques, we obtain new coordinate update algorithms for a variety
of problems in machine learning, image processing, as well as sub-areas of
optimization. Several problems are treated with coordinate update for the first
time in history. The obtained algorithms are scalable to large instances
through parallel and even asynchronous computing. We present numerical examples
to illustrate how effective these algorithms are.
| no_new_dataset | 0.941922 |
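A minimal sketch of a coordinate update with a coordinate-friendly structure — here exact coordinate minimization for least squares with O(m) residual maintenance; illustrative only, and far simpler than the operator-splitting settings the paper treats:

```python
import numpy as np

def coordinate_descent_lstsq(A, b, sweeps=100):
    """Minimize ||A x - b||^2 by exact coordinate minimization.

    Each update touches a single coordinate; maintaining the residual
    makes the per-coordinate cost O(m), i.e. 'coordinate friendly'.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b.copy()                        # residual b - A x
    col_sq = (A ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(n):
            step = A[:, j] @ r / col_sq[j]   # exact 1-D minimizer offset
            x[j] += step
            r -= step * A[:, j]              # cheap residual update
    return x

rng = np.random.default_rng(2)
A, x_true = rng.normal(size=(50, 5)), np.arange(1.0, 6.0)
print(coordinate_descent_lstsq(A, A @ x_true))  # close to [1 .. 5]
```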
1602.03202 | Mohammad Abu Alsheikh | Dusit Niyato, Mohammad Abu Alsheikh, Ping Wang, Dong In Kim, and Zhu
Han | Market Model and Optimal Pricing Scheme of Big Data and Internet of
Things (IoT) | null | null | 10.1109/ICC.2016.7510922 | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Big data has been emerging as a new approach to utilizing large datasets to
optimize complex system operations. Big data is fueled by Internet-of-Things
(IoT) services that generate immense sensory data from numerous sensors and
devices. While most current big data research focuses on machine learning
and resource management design, economic modeling and analysis have been
largely overlooked. This paper thus investigates the big data market model and
optimal pricing scheme. We first study the utility of data from the data
science perspective, i.e., using the machine learning methods. We then
introduce the market model and develop an optimal pricing scheme afterward. The
case study shows clearly the suitability of the proposed data utility
functions. The numerical examples demonstrate that big data and IoT service
provider can achieve the maximum profit through the proposed market model.
| [
{
"version": "v1",
"created": "Sun, 7 Feb 2016 04:57:17 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Niyato",
"Dusit",
""
],
[
"Alsheikh",
"Mohammad Abu",
""
],
[
"Wang",
"Ping",
""
],
[
"Kim",
"Dong In",
""
],
[
"Han",
"Zhu",
""
]
] | TITLE: Market Model and Optimal Pricing Scheme of Big Data and Internet of
Things (IoT)
ABSTRACT: Big data has been emerging as a new approach to utilizing large datasets to
optimize complex system operations. Big data is fueled by Internet-of-Things
(IoT) services that generate immense sensory data from numerous sensors and
devices. While most current big data research focuses on machine learning
and resource management design, economic modeling and analysis have been
largely overlooked. This paper thus investigates the big data market model and
optimal pricing scheme. We first study the utility of data from the data
science perspective, i.e., using the machine learning methods. We then
introduce the market model and develop an optimal pricing scheme afterward. The
case study shows clearly the suitability of the proposed data utility
functions. The numerical examples demonstrate that big data and IoT service
provider can achieve the maximum profit through the proposed market model.
| no_new_dataset | 0.94801 |
1602.07031 | Mohammad Abu Alsheikh | Mohammad Abu Alsheikh, Dusit Niyato, Shaowei Lin, Hwee-Pink Tan, and
Zhu Han | Mobile Big Data Analytics Using Deep Learning and Apache Spark | null | IEEE Network, vol. 30, no. 3, pp. 22-29, June 2016 | 10.1109/MNET.2016.7474340 | null | cs.DC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proliferation of mobile devices, such as smartphones and Internet of
Things (IoT) gadgets, results in the recent mobile big data (MBD) era.
Collecting MBD is unprofitable unless suitable analytics and learning methods
are utilized for extracting meaningful information and hidden patterns from
data. This article presents an overview and brief tutorial of deep learning in
MBD analytics and discusses a scalable learning framework over Apache Spark.
Specifically, a distributed deep learning is executed as an iterative MapReduce
computing on many Spark workers. Each Spark worker learns a partial deep model
on a partition of the overall MBD, and a master deep model is then built by
averaging the parameters of all partial models. This Spark-based framework
speeds up the learning of deep models consisting of many hidden layers and
millions of parameters. We use a context-aware activity recognition application
with a real-world dataset containing millions of samples to validate our
framework and assess its speedup effectiveness.
| [
{
"version": "v1",
"created": "Tue, 23 Feb 2016 04:32:02 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Alsheikh",
"Mohammad Abu",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Lin",
"Shaowei",
""
],
[
"Tan",
"Hwee-Pink",
""
],
[
"Han",
"Zhu",
""
]
] | TITLE: Mobile Big Data Analytics Using Deep Learning and Apache Spark
ABSTRACT: The proliferation of mobile devices, such as smartphones and Internet of
Things (IoT) gadgets, results in the recent mobile big data (MBD) era.
Collecting MBD is unprofitable unless suitable analytics and learning methods
are utilized for extracting meaningful information and hidden patterns from
data. This article presents an overview and brief tutorial of deep learning in
MBD analytics and discusses a scalable learning framework over Apache Spark.
Specifically, distributed deep learning is executed as an iterative MapReduce
computing on many Spark workers. Each Spark worker learns a partial deep model
on a partition of the overall MBD, and a master deep model is then built by
averaging the parameters of all partial models. This Spark-based framework
speeds up the learning of deep models consisting of many hidden layers and
millions of parameters. We use a context-aware activity recognition application
with a real-world dataset containing millions of samples to validate our
framework and assess its speedup effectiveness.
| no_new_dataset | 0.944125 |
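The aggregation step described above — a master model built from the elementwise mean of the partial models' parameters — can be sketched locally (the paper executes it over Spark workers; this simulation only illustrates the averaging):

```python
import numpy as np

def average_models(worker_weights):
    """Master model = elementwise mean of each layer's parameters
    across workers' partial models."""
    return [np.mean(layer, axis=0) for layer in zip(*worker_weights)]

# Three workers, each holding a two-layer model's weights.
rng = np.random.default_rng(3)
workers = [[rng.normal(size=(4, 3)), rng.normal(size=3)] for _ in range(3)]
master = average_models(workers)
print([w.shape for w in master])   # [(4, 3), (3,)]
```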
1604.00736 | Mohammad Abu Alsheikh | Mohammad Abu Alsheikh, Shaowei Lin, Dusit Niyato, Hwee-Pink Tan | Rate-distortion Balanced Data Compression for Wireless Sensor Networks | arXiv admin note: text overlap with arXiv:1408.2948 | IEEE Sensors Journal, vol. 16, no. 12, pp. 5072-5083, June15, 2016 | 10.1109/JSEN.2016.2550599 | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a data compression algorithm with error bound guarantee
for wireless sensor networks (WSNs) using compressing neural networks. The
proposed algorithm minimizes data congestion and reduces energy consumption by
exploring spatio-temporal correlations among data samples. The adaptive
rate-distortion feature balances the compressed data size (data rate) with the
required error bound guarantee (distortion level). This compression relieves
the strain on energy and bandwidth resources while collecting WSN data within
tolerable error margins, thereby increasing the scale of WSNs. The algorithm is
evaluated using real-world datasets and compared with conventional methods for
temporal and spatial data compression. The experimental validation reveals that
the proposed algorithm outperforms several existing WSN data compression
methods in terms of compression efficiency and signal reconstruction. Moreover,
an energy analysis shows that compressing the data can reduce the energy
expenditure, and hence extend the service lifespan severalfold.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 04:14:21 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Alsheikh",
"Mohammad Abu",
""
],
[
"Lin",
"Shaowei",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Tan",
"Hwee-Pink",
""
]
] | TITLE: Rate-distortion Balanced Data Compression for Wireless Sensor Networks
ABSTRACT: This paper presents a data compression algorithm with error bound guarantee
for wireless sensor networks (WSNs) using compressing neural networks. The
proposed algorithm minimizes data congestion and reduces energy consumption by
exploring spatio-temporal correlations among data samples. The adaptive
rate-distortion feature balances the compressed data size (data rate) with the
required error bound guarantee (distortion level). This compression relieves
the strain on energy and bandwidth resources while collecting WSN data within
tolerable error margins, thereby increasing the scale of WSNs. The algorithm is
evaluated using real-world datasets and compared with conventional methods for
temporal and spatial data compression. The experimental validation reveals that
the proposed algorithm outperforms several existing WSN data compression
methods in terms of compression efficiency and signal reconstruction. Moreover,
an energy analysis shows that compressing the data can reduce the energy
expenditure, and hence extend the service lifespan severalfold.
| no_new_dataset | 0.950273 |
1607.06986 | Shervin Ardeshir | Shervin Ardeshir, Ali Borji | Ego2Top: Matching Viewers in Egocentric and Top-view Videos | European Conference on Computer Vision (ECCV) 2016. Amsterdam, the
Netherlands | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Egocentric cameras are becoming increasingly popular and provide us with
large amounts of videos, captured from the first person perspective. At the
same time, surveillance cameras and drones offer an abundance of visual
information, often captured from top-view. Although these two sources of
information have been separately studied in the past, they have not been
collectively studied and related. Having a set of egocentric cameras and a
top-view camera capturing the same area, we propose a framework to identify the
egocentric viewers in the top-view video. We utilize two types of features for
our assignment procedure. Unary features encode what a viewer (seen from
top-view or recording an egocentric video) visually experiences over time.
Pairwise features encode the relationship between the visual content of a pair
of viewers. Modeling each view (egocentric or top) by a graph, the assignment
process is formulated as spectral graph matching. Evaluating our method over a
dataset of 50 top-view and 188 egocentric videos taken in different scenarios
demonstrates the efficiency of the proposed approach in assigning egocentric
viewers to identities present in the top-view camera. We also study the effect
of
different parameters such as the number of egocentric viewers and visual
features.
| [
{
"version": "v1",
"created": "Sun, 24 Jul 2016 00:28:01 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Aug 2016 21:49:56 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Ardeshir",
"Shervin",
""
],
[
"Borji",
"Ali",
""
]
] | TITLE: Ego2Top: Matching Viewers in Egocentric and Top-view Videos
ABSTRACT: Egocentric cameras are becoming increasingly popular and provide us with
large amounts of videos, captured from the first person perspective. At the
same time, surveillance cameras and drones offer an abundance of visual
information, often captured from top-view. Although these two sources of
information have been separately studied in the past, they have not been
collectively studied and related. Having a set of egocentric cameras and a
top-view camera capturing the same area, we propose a framework to identify the
egocentric viewers in the top-view video. We utilize two types of features for
our assignment procedure. Unary features encode what a viewer (seen from
top-view or recording an egocentric video) visually experiences over time.
Pairwise features encode the relationship between the visual content of a pair
of viewers. Modeling each view (egocentric or top) by a graph, the assignment
process is formulated as spectral graph matching. Evaluating our method over a
dataset of 50 top-view and 188 egocentric videos taken in different scenarios
demonstrates the efficiency of the proposed approach in assigning egocentric
viewers to identities present in the top-view camera. We also study the effect
of
different parameters such as the number of egocentric viewers and visual
features.
| no_new_dataset | 0.948537 |
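Spectral graph matching as used above relaxes the assignment problem to the principal eigenvector of a pairwise affinity matrix and then discretizes greedily. A minimal sketch with a random affinity matrix (illustrative only, not the paper's unary/pairwise features):

```python
import numpy as np

def spectral_match(M, n_egocentric, n_topview):
    """Greedy assignment from the leading eigenvector of a pairwise
    affinity matrix M of shape ((n_e * n_t), (n_e * n_t)) -- the classic
    spectral matching relaxation."""
    vals, vecs = np.linalg.eigh(M)
    v = np.abs(vecs[:, -1])                 # principal eigenvector
    assign, used_e, used_t = [], set(), set()
    for idx in np.argsort(-v):              # discretize greedily
        e, t = divmod(idx, n_topview)
        if e not in used_e and t not in used_t:
            assign.append((e, t))
            used_e.add(e)
            used_t.add(t)
    return assign

rng = np.random.default_rng(4)
A = rng.random((6, 6))
M = (A + A.T) / 2          # symmetric affinities: 2 viewers x 3 identities
print(spectral_match(M, n_egocentric=2, n_topview=3))
```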
1608.01745 | Alireza Shafaei | Alireza Shafaei and James J. Little and Mark Schmidt | Play and Learn: Using Video Games to Train Computer Vision Models | To appear in the British Machine Vision Conference (BMVC), September
2016. -v2: fixed a typo in the references | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video games are a compelling source of annotated data as they can readily
provide fine-grained groundtruth for diverse tasks. However, it is not clear
whether the synthetically generated data has enough resemblance to the
real-world images to improve the performance of computer vision models in
practice. We present experiments assessing the effectiveness on real-world data
of systems trained on synthetic RGB images that are extracted from a video
game. We collected over 60,000 synthetic samples from a modern video game with
similar conditions to the real-world CamVid and Cityscapes datasets. We provide
several experiments to demonstrate that the synthetically generated RGB images
can be used to improve the performance of deep neural networks on both image
segmentation and depth estimation. These results show that a convolutional
network trained on synthetic data achieves a similar test error to a network
that is trained on real-world data for dense image classification. Furthermore,
the synthetically generated RGB images can provide similar or better results
compared to the real-world datasets if a simple domain adaptation technique is
applied. Our results suggest that collaboration with game developers for an
accessible interface to gather data is potentially a fruitful direction for
future work in computer vision.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 03:16:07 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Aug 2016 19:41:47 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Shafaei",
"Alireza",
""
],
[
"Little",
"James J.",
""
],
[
"Schmidt",
"Mark",
""
]
] | TITLE: Play and Learn: Using Video Games to Train Computer Vision Models
ABSTRACT: Video games are a compelling source of annotated data as they can readily
provide fine-grained groundtruth for diverse tasks. However, it is not clear
whether the synthetically generated data has enough resemblance to the
real-world images to improve the performance of computer vision models in
practice. We present experiments assessing the effectiveness on real-world data
of systems trained on synthetic RGB images that are extracted from a video
game. We collected over 60,000 synthetic samples from a modern video game with
similar conditions to the real-world CamVid and Cityscapes datasets. We provide
several experiments to demonstrate that the synthetically generated RGB images
can be used to improve the performance of deep neural networks on both image
segmentation and depth estimation. These results show that a convolutional
network trained on synthetic data achieves a similar test error to a network
that is trained on real-world data for dense image classification. Furthermore,
the synthetically generated RGB images can provide similar or better results
compared to the real-world datasets if a simple domain adaptation technique is
applied. Our results suggest that collaboration with game developers for an
accessible interface to gather data is potentially a fruitful direction for
future work in computer vision.
| no_new_dataset | 0.947624 |
1608.03507 | Ramin Rahnamoun | Ramin Rahnamoun, Reza Rawassizadeh, Arash Maskooki | Learning Mobile App Usage Routine through Learning Automata | 5 pages, 2 figures | null | null | null | cs.AI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since its conception, the smart app market has grown exponentially. Success in
the app market depends on many factors, among which app quality, including
energy use, is a significant contributor. Nevertheless, smartphones, as a
subset of mobile computing devices, inherit the limited power resource
constraint. Therefore, there is a challenge of maintaining the resource while
increasing the target app quality. This paper introduces Learning Automata (LA)
as an online learning method to learn and predict the app usage routines of the
users. Such prediction can leverage the app cache functionality of the
operating system and thus (i) decrease app launch time and (ii) preserve
battery. Our algorithm, which is an online learning approach, updates and
improves its internal states over time. In particular, it learns the
transition probabilities between app launches. Each app launch updates the
transition probabilities related to that app, and this improves the
prediction. We use a real-world lifelogging dataset, and our experimental
results show considerable success with respect to the two baseline methods
currently used for smartphone app prediction.
| [
{
"version": "v1",
"created": "Thu, 11 Aug 2016 15:43:55 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Aug 2016 08:08:35 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Rahnamoun",
"Ramin",
""
],
[
"Rawassizadeh",
"Reza",
""
],
[
"Maskooki",
"Arash",
""
]
] | TITLE: Learning Mobile App Usage Routine through Learning Automata
ABSTRACT: Since its conception, the smart app market has grown exponentially. Success in
the app market depends on many factors, among which app quality, including
energy use, is a significant contributor. Nevertheless, smartphones, as a
subset of mobile computing devices, inherit the limited power resource
constraint. Therefore, there is a challenge of maintaining the resource while
increasing the target app quality. This paper introduces Learning Automata (LA)
as an online learning method to learn and predict the app usage routines of the
users. Such prediction can leverage the app cache functionality of the
operating system and thus (i) decrease app launch time and (ii) preserve
battery. Our algorithm, which is an online learning approach, updates and
improves its internal states over time. In particular, it learns the
transition probabilities between app launches. Each app launch updates the
transition probabilities related to that app, and this improves the
prediction. We use a real-world lifelogging dataset, and our experimental
results show considerable success with respect to the two baseline methods
currently used for smartphone app prediction.
| no_new_dataset | 0.947721 |
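The transition-probability updates described above can be approximated by a simple first-order transition-count predictor; this is a simplified sketch, not the learning-automaton update itself:

```python
from collections import defaultdict

class AppPredictor:
    """First-order Markov predictor over app launches: maintains
    transition counts online and predicts the most likely next app."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, app):
        if self.prev is not None:
            self.counts[self.prev][app] += 1   # online update
        self.prev = app

    def predict_next(self):
        nxt = self.counts.get(self.prev)
        return max(nxt, key=nxt.get) if nxt else None

p = AppPredictor()
for app in ["mail", "maps", "mail", "maps", "mail", "music", "mail"]:
    p.observe(app)
print(p.predict_next())   # 'maps': mail->maps seen twice, mail->music once
```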
1608.03914 | Sirion Vittayakorn | Sirion Vittayakorn, Alexander C. Berg, Tamara L. Berg | When was that made? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore deep learning methods for estimating when objects
were made. Automatic methods for this task could potentially be useful for
historians, collectors, or any individual interested in estimating when their
artifact was created. Direct applications include large-scale data organization
or retrieval. Toward this goal, we utilize features from existing deep networks
and also fine-tune new networks for temporal estimation. In addition, we create
two new datasets of 67,771 dated clothing items from Flickr and museum
collections. Our method outperforms both a color-based baseline and previous
state of the art methods for temporal estimation. We also provide several
analyses of what our networks have learned, and demonstrate applications to
identifying temporal inspiration in fashion collections.
| [
{
"version": "v1",
"created": "Fri, 12 Aug 2016 22:03:38 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Vittayakorn",
"Sirion",
""
],
[
"Berg",
"Alexander C.",
""
],
[
"Berg",
"Tamara L.",
""
]
] | TITLE: When was that made?
ABSTRACT: In this paper, we explore deep learning methods for estimating when objects
were made. Automatic methods for this task could potentially be useful for
historians, collectors, or any individual interested in estimating when their
artifact was created. Direct applications include large-scale data organization
or retrieval. Toward this goal, we utilize features from existing deep networks
and also fine-tune new networks for temporal estimation. In addition, we create
two new datasets of 67,771 dated clothing items from Flickr and museum
collections. Our method outperforms both a color-based baseline and previous
state of the art methods for temporal estimation. We also provide several
analyses of what our networks have learned, and demonstrate applications to
identifying temporal inspiration in fashion collections.
| new_dataset | 0.953535 |
1608.03932 | Liang Lin | Keze Wang and Shengfu Zhai and Hui Cheng and Xiaodan Liang and Liang
Lin | Human Pose Estimation from Depth Images via Inference Embedded
Multi-task Learning | To appear in ACM Multimedia 2016, full paper (oral), 10 pages, 11
figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human pose estimation (i.e., locating the body parts / joints of a person) is
a fundamental problem in human-computer interaction and multimedia
applications. Significant progress has been made based on the development of
depth sensors, i.e., accessible human pose prediction from still depth images
[32]. However, most of the existing approaches to this problem involve several
components/models that are independently designed and optimized, leading to
suboptimal performance. In this paper, we propose a novel inference-embedded
multi-task learning framework for predicting human pose from still depth
images, which is implemented with a deep architecture of neural networks.
Specifically, we handle two cascaded tasks: i) generating the heat (confidence)
maps of body parts via a fully convolutional network (FCN); ii) seeking the
optimal configuration of body parts based on the detected body part proposals
via an inference built-in MatchNet [10], which measures the appearance and
geometric kinematic compatibility of body parts and embodies the dynamic
programming inference as an extra network layer. These two tasks are jointly
optimized. Our extensive experiments show that the proposed deep model
significantly improves the accuracy of human pose estimation over several other
state-of-the-art methods or SDKs. We also release a large-scale dataset for
comparison, which includes 100K depth images under challenging scenarios.
| [
{
"version": "v1",
"created": "Sat, 13 Aug 2016 03:16:47 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Wang",
"Keze",
""
],
[
"Zhai",
"Shengfu",
""
],
[
"Cheng",
"Hui",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Lin",
"Liang",
""
]
] | TITLE: Human Pose Estimation from Depth Images via Inference Embedded
Multi-task Learning
ABSTRACT: Human pose estimation (i.e., locating the body parts / joints of a person) is
a fundamental problem in human-computer interaction and multimedia
applications. Significant progress has been made based on the development of
depth sensors, i.e., accessible human pose prediction from still depth images
[32]. However, most of the existing approaches to this problem involve several
components/models that are independently designed and optimized, leading to
suboptimal performance. In this paper, we propose a novel inference-embedded
multi-task learning framework for predicting human pose from still depth
images, which is implemented with a deep architecture of neural networks.
Specifically, we handle two cascaded tasks: i) generating the heat (confidence)
maps of body parts via a fully convolutional network (FCN); ii) seeking the
optimal configuration of body parts based on the detected body part proposals
via an inference built-in MatchNet [10], which measures the appearance and
geometric kinematic compatibility of body parts and embodies the dynamic
programming inference as an extra network layer. These two tasks are jointly
optimized. Our extensive experiments show that the proposed deep model
significantly improves the accuracy of human pose estimation over several other
state-of-the-art methods or SDKs. We also release a large-scale dataset for
comparison, which includes 100K depth images under challenging scenarios.
| new_dataset | 0.960435 |
1608.03938 | Christopher Thompson | Christopher Thompson, Josh Introne, and Clint Young | Determining Health Utilities through Data Mining of Social Media | 8 pages, 2 figures, 3 tables | null | null | null | cs.CL cs.AI cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 'Health utilities' measure patient preferences for perfect health compared to
specific unhealthy states, such as asthma, a fractured hip, or colon cancer.
When integrated over time, these estimations are called quality adjusted life
years (QALYs). Until now, characterizing health utilities (HUs) required
detailed patient interviews or written surveys. While reliable and specific,
this data remained costly due to efforts to locate, enlist and coordinate
participants. Thus the scope, context, and temporality of diseases examined
have remained limited.
Now that more than a billion people use social media, we propose a novel
strategy: use natural language processing to analyze public online
conversations for signals of the severity of medical conditions and correlate
these to known HUs using machine learning. In this work, we filter a dataset
that originally contained 2 billion tweets for relevant content on 60 diseases.
Using this data, our algorithm successfully distinguished mild from severe
diseases, which had previously been categorized only by traditional techniques.
This represents progress towards two related applications: first, predicting
HUs where such information is nonexistent; and second, (where rich HU data
already exists) estimating temporal or geographic patterns of disease severity
through data mining.
| [
{
"version": "v1",
"created": "Sat, 13 Aug 2016 04:02:38 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Thompson",
"Christopher",
""
],
[
"Introne",
"Josh",
""
],
[
"Young",
"Clint",
""
]
] | TITLE: Determining Health Utilities through Data Mining of Social Media
ABSTRACT: 'Health utilities' measure patient preferences for perfect health compared to
specific unhealthy states, such as asthma, a fractured hip, or colon cancer.
When integrated over time, these estimations are called quality adjusted life
years (QALYs). Until now, characterizing health utilities (HUs) required
detailed patient interviews or written surveys. While reliable and specific,
this data remained costly due to efforts to locate, enlist and coordinate
participants. Thus the scope, context, and temporality of diseases examined
have remained limited.
Now that more than a billion people use social media, we propose a novel
strategy: use natural language processing to analyze public online
conversations for signals of the severity of medical conditions and correlate
these to known HUs using machine learning. In this work, we filter a dataset
that originally contained 2 billion tweets for relevant content on 60 diseases.
Using this data, our algorithm successfully distinguished mild from severe
diseases, which had previously been categorized only by traditional techniques.
This represents progress towards two related applications: first, predicting
HUs where such information is nonexistent; and second, (where rich HU data
already exists) estimating temporal or geographic patterns of disease severity
through data mining.
| no_new_dataset | 0.943867 |
1608.03974 | Giovanni Montana | Rudra P K Poudel and Pablo Lamata and Giovanni Montana | Recurrent Fully Convolutional Neural Networks for Multi-slice MRI
Cardiac Segmentation | MICCAI Workshop RAMBO 2016 | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In cardiac magnetic resonance imaging, fully-automatic segmentation of the
heart enables precise structural and functional measurements to be taken, e.g.
from short-axis MR images of the left-ventricle. In this work we propose a
recurrent fully-convolutional network (RFCN) that learns image representations
from the full stack of 2D slices and has the ability to leverage inter-slice
spatial dependences through internal memory units. RFCN combines anatomical
detection and segmentation into a single architecture that is trained
end-to-end thus significantly reducing computational time, simplifying the
segmentation pipeline, and potentially enabling real-time applications. We
report on an investigation of RFCN using two datasets, including the publicly
available MICCAI 2009 Challenge dataset. Comparisons have been carried out
between fully convolutional networks and deep restricted Boltzmann machines,
including a recurrent version that leverages inter-slice spatial correlation.
Our studies suggest that RFCN produces state-of-the-art results and can
substantially improve the delineation of contours near the apex of the heart.
| [
{
"version": "v1",
"created": "Sat, 13 Aug 2016 11:19:22 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Poudel",
"Rudra P K",
""
],
[
"Lamata",
"Pablo",
""
],
[
"Montana",
"Giovanni",
""
]
] | TITLE: Recurrent Fully Convolutional Neural Networks for Multi-slice MRI
Cardiac Segmentation
ABSTRACT: In cardiac magnetic resonance imaging, fully-automatic segmentation of the
heart enables precise structural and functional measurements to be taken, e.g.
from short-axis MR images of the left-ventricle. In this work we propose a
recurrent fully-convolutional network (RFCN) that learns image representations
from the full stack of 2D slices and has the ability to leverage inter-slice
spatial dependences through internal memory units. RFCN combines anatomical
detection and segmentation into a single architecture that is trained
end-to-end thus significantly reducing computational time, simplifying the
segmentation pipeline, and potentially enabling real-time applications. We
report on an investigation of RFCN using two datasets, including the publicly
available MICCAI 2009 Challenge dataset. Comparisons have been carried out
between fully convolutional networks and deep restricted Boltzmann machines,
including a recurrent version that leverages inter-slice spatial correlation.
Our studies suggest that RFCN produces state-of-the-art results and can
substantially improve the delineation of contours near the apex of the heart.
| no_new_dataset | 0.949856 |
1608.04037 | Davi Frossard | Davi E. N. Frossard, Igor O. Nunes, Renato A. Krohling | An approach to dealing with missing values in heterogeneous data using
k-nearest neighbors | null | null | null | cs.LG cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Techniques such as clustering, neural networks and decision making
usually rely on algorithms that are not well suited to deal with missing
values. However, real-world data frequently contains such cases. The simplest
solution is to either substitute them with a best-guess value or completely
disregard the missing values. Unfortunately, both approaches can lead to biased
results. In this paper, we propose a technique for dealing with missing values
in heterogeneous data using imputation based on the k-nearest neighbors
algorithm. It can handle real (which we refer to as crisp henceforward),
interval and fuzzy data. The effectiveness of the algorithm is tested on
several datasets and the numerical results are promising.
| [
{
"version": "v1",
"created": "Sat, 13 Aug 2016 23:45:21 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Frossard",
"Davi E. N.",
""
],
[
"Nunes",
"Igor O.",
""
],
[
"Krohling",
"Renato A.",
""
]
] | TITLE: An approach to dealing with missing values in heterogeneous data using
k-nearest neighbors
ABSTRACT: Techniques such as clustering, neural networks and decision making
usually rely on algorithms that are not well suited to deal with missing
values. However, real-world data frequently contains such cases. The simplest
solution is to either substitute them with a best-guess value or completely
disregard the missing values. Unfortunately, both approaches can lead to biased
results. In this paper, we propose a technique for dealing with missing values
in heterogeneous data using imputation based on the k-nearest neighbors
algorithm. It can handle real (which we refer to as crisp henceforward),
interval and fuzzy data. The effectiveness of the algorithm is tested on
several datasets and the numerical results are promising.
| no_new_dataset | 0.952442 |
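A minimal sketch of kNN imputation for crisp (real-valued) features only; the paper's handling of interval and fuzzy data is omitted, and the distance over jointly observed columns is an assumption for illustration:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill each NaN with the mean of the k nearest rows.

    Distances are computed over the columns the incomplete row has
    observed; rows are processed in place, one at a time.
    """
    X = X.astype(float).copy()
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        d = np.nanmean((X[:, obs] - row[obs]) ** 2, axis=1)
        d[i] = np.inf                        # exclude the row itself
        neighbors = np.argsort(d)[:k]
        row[miss] = np.nanmean(X[neighbors][:, miss], axis=0)
    return X

X = np.array([[1.0, 2.0, np.nan],
              [1.1, 1.9, 3.0],
              [0.9, 2.1, 2.8],
              [8.0, 9.0, 9.5]])
print(knn_impute(X, k=2))   # the NaN becomes ~2.9, from the two close rows
```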
1608.04045 | Kyle Simek | Kyle Simek, Ravishankar Palanivelu, Kobus Barnard | Branching Gaussian Processes with Applications to Spatiotemporal
Reconstruction of 3D Trees | ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a robust method for estimating dynamic 3D curvilinear branching
structure from monocular images. While 3D reconstruction from images has been
widely studied, estimating thin structure has received less attention. This
problem becomes more challenging in the presence of camera error, scene motion,
and a constraint that curves are attached in a branching structure. We propose
a new general-purpose prior, a branching Gaussian process (BGP), that models
spatial smoothness and temporal dynamics of curves while enforcing attachment
between them. We apply this prior to fit 3D trees directly to image data, using
an efficient scheme for approximate inference based on expectation propagation.
The BGP prior's Gaussian form allows us to approximately marginalize over 3D
trees with a given model structure, enabling principled comparison between tree
models with varying complexity. We test our approach on a novel multi-view
dataset depicting plants with known 3D structures and topologies undergoing
small nonrigid motion. Our method outperforms a state-of-the-art 3D
reconstruction method designed for non-moving thin structure. We evaluate under
several common measures, and we propose a new measure for reconstructions of
branching multi-part 3D scenes under motion.
| [
{
"version": "v1",
"created": "Sun, 14 Aug 2016 01:41:07 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Simek",
"Kyle",
""
],
[
"Palanivelu",
"Ravishankar",
""
],
[
"Barnard",
"Kobus",
""
]
] | TITLE: Branching Gaussian Processes with Applications to Spatiotemporal
Reconstruction of 3D Trees
ABSTRACT: We propose a robust method for estimating dynamic 3D curvilinear branching
structure from monocular images. While 3D reconstruction from images has been
widely studied, estimating thin structure has received less attention. This
problem becomes more challenging in the presence of camera error, scene motion,
and a constraint that curves are attached in a branching structure. We propose
a new general-purpose prior, a branching Gaussian process (BGP), that models
spatial smoothness and temporal dynamics of curves while enforcing attachment
between them. We apply this prior to fit 3D trees directly to image data, using
an efficient scheme for approximate inference based on expectation propagation.
The BGP prior's Gaussian form allows us to approximately marginalize over 3D
trees with a given model structure, enabling principled comparison between tree
models with varying complexity. We test our approach on a novel multi-view
dataset depicting plants with known 3D structures and topologies undergoing
small nonrigid motion. Our method outperforms a state-of-the-art 3D
reconstruction method designed for non-moving thin structure. We evaluate under
several common measures, and we propose a new measure for reconstructions of
branching multi-part 3D scenes under motion.
| new_dataset | 0.96859 |
1608.04064 | Ihsan Ullah | Ihsan Ullah and Alfredo Petrosino | About Pyramid Structure in Convolutional Neural Networks | Published in 2016 International Joint Conference on Neural Networks
(IJCNN) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks (CNNs) have undoubtedly brought a revolution
to various challenging tasks, mainly in computer vision. However, their model
design still requires attention to reduce the number of learnable parameters,
with no meaningful reduction in performance. In this paper we investigate to
what extent CNNs may take advantage of the pyramid structure typical of
biological neurons. A generalized statement over convolutional layers from the
input to the fully connected layer is introduced that helps further in
understanding and designing a successful deep network. It reduces ambiguity,
the number of parameters, and their size on disk without degrading overall
accuracy. Performance is shown on state-of-the-art models for MNIST, Cifar-10,
Cifar-100, and ImageNet-12 datasets. Despite a more than 80% reduction in
parameters for Caffe_LENET, challenging results are obtained. Further, despite
a 10-20% reduction in training data along with a 10-40% reduction in parameters
for the AlexNet model and its variations, competitive results are achieved when
compared to similar well-engineered deeper architectures.
| [
{
"version": "v1",
"created": "Sun, 14 Aug 2016 06:03:09 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Ullah",
"Ihsan",
""
],
[
"Petrosino",
"Alfredo",
""
]
] | TITLE: About Pyramid Structure in Convolutional Neural Networks
ABSTRACT: Deep convolutional neural networks (CNNs) have without doubt revolutionized
various challenging tasks, mainly in computer vision. However, designing their
models still requires attention to reducing the number of learnable parameters
without any meaningful loss in performance. In this paper we investigate to
what extent a CNN may take advantage of the pyramid structure typical of
biological neurons. A generalized statement over the convolutional layers, from
the input up to the fully connected layer, is introduced that further helps in
understanding and designing a successful deep network. It reduces ambiguity,
the number of parameters, and their size on disk without degrading overall
accuracy. Performance is shown on state-of-the-art models for the MNIST,
CIFAR-10, CIFAR-100, and ImageNet-12 datasets. Despite a more than 80%
reduction in parameters for Caffe_LENET, strong results are obtained. Further,
despite a 10-20% reduction in training data along with a 10-40% reduction in
parameters for the AlexNet model and its variations, competitive results are
achieved when compared to similar well-engineered deeper architectures.
| no_new_dataset | 0.94699 |
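The pyramid-structure record above is about shrinking parameter counts across convolutional layers. As a back-of-the-envelope illustration only (the paper's generalized statement is not reproduced, and the layer widths below are invented), tapering channel widths reduces the weight count of a conv stack as follows:

```python
def conv_params(widths, k=3):
    # Weight count of a stack of k x k conv layers (biases ignored):
    # each layer contributes k * k * C_in * C_out parameters.
    return sum(k * k * c_in * c_out for c_in, c_out in zip(widths, widths[1:]))

flat = [64, 64, 64, 64, 64]      # constant-width stack
pyramid = [64, 48, 32, 24, 16]   # hypothetical pyramid-shaped taper

p_flat, p_pyr = conv_params(flat), conv_params(pyramid)
print(p_flat, p_pyr, 1.0 - p_pyr / p_flat)  # relative parameter reduction
```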
1608.04274 | Andrew Calway Dr | Pilailuck Panphattarasap and Andrew Calway | Visual place recognition using landmark distribution descriptors | 13 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work by Suenderhauf et al. [1] demonstrated improved visual place
recognition using proposal regions coupled with features from convolutional
neural networks (CNN) to match landmarks between views. In this work we extend
the approach by introducing descriptors built from landmark features which also
encode the spatial distribution of the landmarks within a view. Matching
descriptors then enforces consistency of the relative positions of landmarks
between views. This has a significant impact on performance. For example, in
experiments on 10 image-pair datasets, each consisting of 200 urban locations
with significant differences in viewing positions and conditions, we recorded
average precision of around 70% (at 100% recall), compared with 58% obtained
using whole-image CNN features and 50% for the method in [1].
| [
{
"version": "v1",
"created": "Mon, 15 Aug 2016 14:13:27 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Panphattarasap",
"Pilailuck",
""
],
[
"Calway",
"Andrew",
""
]
] | TITLE: Visual place recognition using landmark distribution descriptors
ABSTRACT: Recent work by Suenderhauf et al. [1] demonstrated improved visual place
recognition using proposal regions coupled with features from convolutional
neural networks (CNN) to match landmarks between views. In this work we extend
the approach by introducing descriptors built from landmark features which also
encode the spatial distribution of the landmarks within a view. Matching
descriptors then enforces consistency of the relative positions of landmarks
between views. This has a significant impact on performance. For example, in
experiments on 10 image-pair datasets, each consisting of 200 urban locations
with significant differences in viewing positions and conditions, we recorded
average precision of around 70% (at 100% recall), compared with 58% obtained
using whole-image CNN features and 50% for the method in [1].
| no_new_dataset | 0.95388 |
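A rough sketch of the idea behind landmark distribution descriptors: pool per-landmark CNN features over a coarse spatial grid so that matching also compares where landmarks sit within the view. This is an assumption-laden reconstruction rather than the paper's exact descriptor; the grid size, max-pooling rule, and names are invented for illustration.

```python
import numpy as np

def landmark_distribution_descriptor(boxes, feats, img_w, img_h, grid=4):
    """Concatenate a grid occupancy histogram of landmark centres with a
    max-pooled CNN feature per cell (zeros where a cell holds no landmark).

    boxes: (N, 4) landmark boxes (x1, y1, x2, y2); feats: (N, D) CNN features.
    """
    D = feats.shape[1]
    occupancy = np.zeros((grid, grid))
    pooled = np.zeros((grid, grid, D))
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0
    gx = np.minimum((cx / img_w * grid).astype(int), grid - 1)
    gy = np.minimum((cy / img_h * grid).astype(int), grid - 1)
    for i in range(len(boxes)):
        occupancy[gy[i], gx[i]] += 1
        pooled[gy[i], gx[i]] = np.maximum(pooled[gy[i], gx[i]], feats[i])
    return np.concatenate([occupancy.ravel(), pooled.ravel()])
```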
1608.04307 | Zhangjie Cao | Zhangjie Cao, Mingsheng Long, Qiang Yang | Transitive Hashing Network for Heterogeneous Multimedia Retrieval | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hashing has been widely applied to large-scale multimedia retrieval due to
its storage and retrieval efficiency. Cross-modal hashing enables efficient
retrieval from a database of one modality in response to a query of another
modality. Existing work on cross-modal hashing assumes a heterogeneous
relationship across modalities for hash function learning. In this paper, we
relax this strong assumption by only requiring such a heterogeneous
relationship in an auxiliary dataset different from the query/database domain.
We craft a hybrid deep architecture to simultaneously learn the cross-modal
correlation from the auxiliary dataset and align the dataset distributions
between the auxiliary dataset and the query/database domain, which generates
transitive hash codes for heterogeneous multimedia retrieval. Extensive
experiments show that the proposed approach yields state-of-the-art multimedia
retrieval performance on public datasets, i.e., NUS-WIDE and ImageNet-YahooQA.
| [
{
"version": "v1",
"created": "Mon, 15 Aug 2016 15:36:41 GMT"
}
] | 2016-08-16T00:00:00 | [
[
"Cao",
"Zhangjie",
""
],
[
"Long",
"Mingsheng",
""
],
[
"Yang",
"Qiang",
""
]
] | TITLE: Transitive Hashing Network for Heterogeneous Multimedia Retrieval
ABSTRACT: Hashing has been widely applied to large-scale multimedia retrieval due to
its storage and retrieval efficiency. Cross-modal hashing enables efficient
retrieval from a database of one modality in response to a query of another
modality. Existing work on cross-modal hashing assumes a heterogeneous
relationship across modalities for hash function learning. In this paper, we
relax this strong assumption by only requiring such a heterogeneous
relationship in an auxiliary dataset different from the query/database domain.
We craft a hybrid deep architecture to simultaneously learn the cross-modal
correlation from the auxiliary dataset and align the dataset distributions
between the auxiliary dataset and the query/database domain, which generates
transitive hash codes for heterogeneous multimedia retrieval. Extensive
experiments show that the proposed approach yields state-of-the-art multimedia
retrieval performance on public datasets, i.e., NUS-WIDE and ImageNet-YahooQA.
| no_new_dataset | 0.945045 |
1602.08780 | Dirk Tasche | Dirk Tasche | Does quantification without adjustments work? | 20 pages, 2 figures, major update | null | null | null | stat.ML cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classification is the task of predicting the class labels of objects based on
the observation of their features. In contrast, quantification has been defined
as the task of determining the prevalences of the different sorts of class
labels in a target dataset. The simplest approach to quantification is Classify
& Count where a classifier is optimised for classification on a training set
and applied to the target dataset for the prediction of class labels. In the
case of binary quantification, the number of predicted positive labels is then
used as an estimate of the prevalence of the positive class in the target
dataset. Since the performance of Classify & Count for quantification is known
to be inferior its results typically are subject to adjustments. However, some
researchers recently have suggested that Classify & Count might actually work
without adjustments if it is based on a classifer that was specifically trained
for quantification. We discuss the theoretical foundation for this claim and
explore its potential and limitations with a numerical example based on the
binormal model with equal variances. In order to identify an optimal quantifier
in the binormal setting, we introduce the concept of local Bayes optimality. As
a side remark, we present a complete proof of a theorem by Ye et al. (2012).
| [
{
"version": "v1",
"created": "Sun, 28 Feb 2016 22:29:25 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Aug 2016 16:24:05 GMT"
}
] | 2016-08-15T00:00:00 | [
[
"Tasche",
"Dirk",
""
]
] | TITLE: Does quantification without adjustments work?
ABSTRACT: Classification is the task of predicting the class labels of objects based on
the observation of their features. In contrast, quantification has been defined
as the task of determining the prevalences of the different sorts of class
labels in a target dataset. The simplest approach to quantification is Classify
& Count where a classifier is optimised for classification on a training set
and applied to the target dataset for the prediction of class labels. In the
case of binary quantification, the number of predicted positive labels is then
used as an estimate of the prevalence of the positive class in the target
dataset. Since the performance of Classify & Count for quantification is known
to be inferior, its results are typically subject to adjustments. However, some
researchers have recently suggested that Classify & Count might actually work
without adjustments if it is based on a classifier that was specifically trained
for quantification. We discuss the theoretical foundation for this claim and
explore its potential and limitations with a numerical example based on the
binormal model with equal variances. In order to identify an optimal quantifier
in the binormal setting, we introduce the concept of local Bayes optimality. As
a side remark, we present a complete proof of a theorem by Ye et al. (2012).
| no_new_dataset | 0.946745 |
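The unadjusted Classify & Count estimator discussed in the quantification record is simple enough to state in a few lines, alongside the standard adjusted variant it is contrasted with. A minimal sketch assuming a scikit-learn-style classifier with a `predict` method; `tpr` and `fpr` would come from held-out validation data.

```python
import numpy as np

def classify_and_count(classifier, X_target):
    # Unadjusted Classify & Count: the prevalence estimate is simply the
    # fraction of target items the classifier labels positive.
    y_pred = classifier.predict(X_target)
    return float(np.mean(y_pred == 1))

def adjusted_count(p_cc, tpr, fpr):
    # Standard adjustment: invert p_cc = tpr * p + fpr * (1 - p)
    # for the true prevalence p, then clip to [0, 1].
    return float(np.clip((p_cc - fpr) / (tpr - fpr), 0.0, 1.0))
```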
1603.03235 | Amir Soleimani | Amir Soleimani, Kazim Fouladi, Babak N. Araabi | UTSig: A Persian Offline Signature Dataset | 15 pages, 6 figures | null | 10.1049/iet-bmt.2015.0058 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The pivotal role of datasets in signature verification systems motivates
researchers to collect signature samples. The distinct characteristics of
Persian signatures demand richer, culture-dependent offline signature datasets.
This paper introduces a new and public Persian offline signature dataset,
UTSig, that consists of 8280 images from 115 classes. Each class has 27 genuine
signatures, 3 opposite-hand signatures, and 42 skilled forgeries made by 6
forgers. Compared with the other public datasets, UTSig has more samples, more
classes, and more forgers. We considered various variables including signing
period, writing instrument, signature box size, and number of observable
samples for forgers in the data collection procedure. By careful examination of
main characteristics of offline signature datasets, we observe that Persian
signatures have fewer branch points and end points. We propose and
evaluate four different training and test setups for UTSig. Results of our
experiments show that training genuine samples along with opposite-hand samples
and random forgeries can improve the performance in terms of equal error rate
and minimum cost of log likelihood ratio.
| [
{
"version": "v1",
"created": "Thu, 10 Mar 2016 12:23:03 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2016 12:55:39 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Apr 2016 04:21:25 GMT"
},
{
"version": "v4",
"created": "Fri, 12 Aug 2016 06:58:59 GMT"
}
] | 2016-08-15T00:00:00 | [
[
"Soleimani",
"Amir",
""
],
[
"Fouladi",
"Kazim",
""
],
[
"Araabi",
"Babak N.",
""
]
] | TITLE: UTSig: A Persian Offline Signature Dataset
ABSTRACT: The pivotal role of datasets in signature verification systems motivates
researchers to collect signature samples. The distinct characteristics of
Persian signatures demand richer, culture-dependent offline signature datasets.
This paper introduces a new and public Persian offline signature dataset,
UTSig, that consists of 8280 images from 115 classes. Each class has 27 genuine
signatures, 3 opposite-hand signatures, and 42 skilled forgeries made by 6
forgers. Compared with the other public datasets, UTSig has more samples, more
classes, and more forgers. We considered various variables including signing
period, writing instrument, signature box size, and number of observable
samples for forgers in the data collection procedure. By careful examination of
main characteristics of offline signature datasets, we observe that Persian
signatures have fewer branch points and end points. We propose and
evaluate four different training and test setups for UTSig. Results of our
experiments show that training genuine samples along with opposite-hand samples
and random forgeries can improve the performance in terms of equal error rate
and minimum cost of log likelihood ratio.
| new_dataset | 0.955527 |
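The UTSig setups are evaluated with, among other measures, the equal error rate. A generic way to compute EER from genuine and forgery match scores (a sketch, not the authors' evaluation code; the threshold sweep assumes higher scores mean "more likely genuine"):

```python
import numpy as np

def equal_error_rate(genuine_scores, forgery_scores):
    """EER: the operating point where false accepts equal false rejects."""
    thresholds = np.sort(np.concatenate([genuine_scores, forgery_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        frr = np.mean(genuine_scores < t)   # genuine samples rejected
        far = np.mean(forgery_scores >= t)  # forgeries accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer
```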
1608.03609 | Evan Shelhamer | Evan Shelhamer, Kate Rakelly, Judy Hoffman, Trevor Darrell | Clockwork Convnets for Video Semantic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have seen tremendous progress in still-image segmentation;
however the na\"ive application of these state-of-the-art algorithms to every
video frame requires considerable computation and ignores the temporal
continuity inherent in video. We propose a video recognition framework that
relies on two key observations: 1) while pixels may change rapidly from frame
to frame, the semantic content of a scene evolves more slowly, and 2) execution
can be viewed as an aspect of architecture, yielding purpose-fit computation
schedules for networks. We define a novel family of "clockwork" convnets driven
by fixed or adaptive clock signals that schedule the processing of different
layers at different update rates according to their semantic stability. We
design a pipeline schedule to reduce latency for real-time recognition and a
fixed-rate schedule to reduce overall computation. Finally, we extend clockwork
scheduling to adaptive video processing by incorporating data-driven clocks
that can be tuned on unlabeled video. The accuracy and efficiency of clockwork
convnets are evaluated on the Youtube-Objects, NYUD, and Cityscapes video
datasets.
| [
{
"version": "v1",
"created": "Thu, 11 Aug 2016 20:32:55 GMT"
}
] | 2016-08-15T00:00:00 | [
[
"Shelhamer",
"Evan",
""
],
[
"Rakelly",
"Kate",
""
],
[
"Hoffman",
"Judy",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Clockwork Convnets for Video Semantic Segmentation
ABSTRACT: Recent years have seen tremendous progress in still-image segmentation;
however the na\"ive application of these state-of-the-art algorithms to every
video frame requires considerable computation and ignores the temporal
continuity inherent in video. We propose a video recognition framework that
relies on two key observations: 1) while pixels may change rapidly from frame
to frame, the semantic content of a scene evolves more slowly, and 2) execution
can be viewed as an aspect of architecture, yielding purpose-fit computation
schedules for networks. We define a novel family of "clockwork" convnets driven
by fixed or adaptive clock signals that schedule the processing of different
layers at different update rates according to their semantic stability. We
design a pipeline schedule to reduce latency for real-time recognition and a
fixed-rate schedule to reduce overall computation. Finally, we extend clockwork
scheduling to adaptive video processing by incorporating data-driven clocks
that can be tuned on unlabeled video. The accuracy and efficiency of clockwork
convnets are evaluated on the Youtube-Objects, NYUD, and Cityscapes video
datasets.
| no_new_dataset | 0.945601 |
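The fixed-rate schedule in the clockwork record amounts to caching stage outputs and recomputing stage i only every rates[i] frames. The toy driver below illustrates just that scheduling idea under our own simplifying assumptions; the pipeline schedule and the adaptive, data-driven clocks are not shown.

```python
def clockwork_forward(frames, stages, rates):
    """Run a staged network over a video, firing stage i only when its
    clock ticks (every rates[i] frames) and reusing its cached output
    otherwise.

    stages: list of callables, shallowest first; rates: period per stage.
    """
    cache = [None] * len(stages)
    outputs = []
    for t, frame in enumerate(frames):
        x = frame
        for i, stage in enumerate(stages):
            if cache[i] is None or t % rates[i] == 0:
                cache[i] = stage(x)  # this stage's clock fires
            x = cache[i]             # otherwise reuse the stale output
        outputs.append(x)
    return outputs
```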
1608.03639 | Truyen Tran | Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh | Faster Training of Very Deep Networks Via p-Norm Gates | To appear in ICPR'16 | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major contributing factor to the recent advances in deep neural networks is
structural units that let sensory information and gradients propagate
easily. Gating is one such structure that acts as a flow control. Gates are
employed in many recent state-of-the-art recurrent models such as LSTM and GRU,
and feedforward models such as Residual Nets and Highway Networks. This enables
learning in very deep networks with hundreds of layers and helps achieve
record-breaking results in vision (e.g., ImageNet with Residual Nets) and NLP
(e.g., machine translation with GRU). However, there is limited work in
analysing the role of gating in the learning process. In this paper, we propose
a flexible $p$-norm gating scheme, which allows user-controllable flow and, as
a consequence, improves the learning speed. This scheme subsumes other existing
gating schemes, including those in GRU, Highway Networks and Residual Nets as
special cases. Experiments on large sequence and vector datasets demonstrate
that the proposed gating scheme helps improve the learning speed significantly
without extra overhead.
| [
{
"version": "v1",
"created": "Thu, 11 Aug 2016 23:48:44 GMT"
}
] | 2016-08-15T00:00:00 | [
[
"Pham",
"Trang",
""
],
[
"Tran",
"Truyen",
""
],
[
"Phung",
"Dinh",
""
],
[
"Venkatesh",
"Svetha",
""
]
] | TITLE: Faster Training of Very Deep Networks Via p-Norm Gates
ABSTRACT: A major contributing factor to the recent advances in deep neural networks is
structural units that let sensory information and gradients propagate
easily. Gating is one such structure that acts as a flow control. Gates are
employed in many recent state-of-the-art recurrent models such as LSTM and GRU,
and feedforward models such as Residual Nets and Highway Networks. This enables
learning in very deep networks with hundreds of layers and helps achieve
record-breaking results in vision (e.g., ImageNet with Residual Nets) and NLP
(e.g., machine translation with GRU). However, there is limited work in
analysing the role of gating in the learning process. In this paper, we propose
a flexible $p$-norm gating scheme, which allows user-controllable flow and, as
a consequence, improves the learning speed. This scheme subsumes other existing
gating schemes, including those in GRU, Highway Networks and Residual Nets as
special cases. Experiments on large sequence and vector datasets demonstrate
that the proposed gating scheme helps improve the learning speed significantly
without extra overhead.
| no_new_dataset | 0.95253 |
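One plausible reading of the p-norm gating scheme, offered here as our own assumption rather than a verbatim reproduction of the paper's equations, ties the carry gate to the transform gate through t**p + c**p = 1: p = 1 recovers the Highway coupling c = 1 - t, while large p lets both gates stay near 1 at once, approaching the Residual-style identity path.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_norm_gate_layer(x, W_h, W_t, p=2.0):
    """y = t * H(x) + c * x with gates tied by t**p + c**p = 1 (assumed form)."""
    h = np.tanh(x @ W_h)              # transform H(x)
    t = sigmoid(x @ W_t)              # learned transform gate in (0, 1)
    c = (1.0 - t ** p) ** (1.0 / p)   # carry gate implied by the p-norm tie
    return t * h + c * x
```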
1608.03658 | Yadong Mu | Yadong Mu and Zhu Liu | Deep Hashing: A Joint Approach for Image Signature Learning | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity-based image hashing represents a crucial technique for visual data
storage reduction and expedited image search. Conventional hashing schemes
typically feed hand-crafted features into hash functions, which separates the
procedures of feature extraction and hash function learning. In this paper, we
propose a novel algorithm that concurrently performs feature engineering and
non-linear supervised hashing function learning. Our technical contributions in
this paper are twofold: 1) deep network optimization is often achieved by
gradient propagation, which critically requires a smooth objective function.
The discrete nature of hash codes makes them not amenable to gradient-based
optimization. To address this issue, we propose an exponentiated hashing loss
function and its bilinear smooth approximation. Effective gradient calculation
and propagation are thereby enabled; 2) pre-training is an important trick in
supervised deep learning. The impact of pre-training on the hash code quality
has never been discussed in current deep hashing literature. We propose a
pre-training scheme inspired by recent advances in deep-network-based image
classification, and experimentally demonstrate its effectiveness. Comprehensive
quantitative evaluations are conducted on several widely-used image benchmarks.
On all benchmarks, our proposed deep hashing algorithm outperforms all
state-of-the-art competitors by significant margins. In particular, our
algorithm achieves a near-perfect 0.99 in terms of Hamming ranking accuracy
with only 12 bits on MNIST, and a new record of 0.74 on the CIFAR10 dataset. In
comparison, the best accuracies obtained on CIFAR10 by existing hashing
algorithms without or with deep networks are known to be 0.36 and 0.58
respectively.
| [
{
"version": "v1",
"created": "Fri, 12 Aug 2016 02:00:08 GMT"
}
] | 2016-08-15T00:00:00 | [
[
"Mu",
"Yadong",
""
],
[
"Liu",
"Zhu",
""
]
] | TITLE: Deep Hashing: A Joint Approach for Image Signature Learning
ABSTRACT: Similarity-based image hashing represents a crucial technique for visual data
storage reduction and expedited image search. Conventional hashing schemes
typically feed hand-crafted features into hash functions, which separates the
procedures of feature extraction and hash function learning. In this paper, we
propose a novel algorithm that concurrently performs feature engineering and
non-linear supervised hashing function learning. Our technical contributions in
this paper are twofold: 1) deep network optimization is often achieved by
gradient propagation, which critically requires a smooth objective function.
The discrete nature of hash codes makes them not amenable to gradient-based
optimization. To address this issue, we propose an exponentiated hashing loss
function and its bilinear smooth approximation. Effective gradient calculation
and propagation are thereby enabled; 2) pre-training is an important trick in
supervised deep learning. The impact of pre-training on the hash code quality
has never been discussed in current deep hashing literature. We propose a
pre-training scheme inspired by recent advances in deep-network-based image
classification, and experimentally demonstrate its effectiveness. Comprehensive
quantitative evaluations are conducted on several widely-used image benchmarks.
On all benchmarks, our proposed deep hashing algorithm outperforms all
state-of-the-art competitors by significant margins. In particular, our
algorithm achieves a near-perfect 0.99 in terms of Hamming ranking accuracy
with only 12 bits on MNIST, and a new record of 0.74 on the CIFAR10 dataset. In
comparison, the best accuracies obtained on CIFAR10 by existing hashing
algorithms without or with deep networks are known to be 0.36 and 0.58
respectively.
| no_new_dataset | 0.940298 |
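The core difficulty named in the deep-hashing record, non-differentiable binary codes, is usually handled with a smooth surrogate. The sketch below shows only that generic relaxation trick (tanh in place of sign plus a pairwise similarity loss); it is not the paper's exponentiated loss or its bilinear smoothing, and all shapes and names are illustrative.

```python
import numpy as np

def relaxed_hash_codes(features, W):
    # tanh is a smooth stand-in for sign(); codes approach {-1, +1}.
    return np.tanh(features @ W)

def pairwise_hash_loss(B, S):
    """Push code inner products toward +nbits for similar pairs (S=1)
    and -nbits for dissimilar pairs (S=0). B: (N, nbits) relaxed codes."""
    nbits = B.shape[1]
    target = (2.0 * S - 1.0) * nbits
    return np.mean((B @ B.T - target) ** 2)
```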
1608.03889 | Hao Wu | Hao Wu, Maoyuan Sun, Jilles Vreeken, Nikolaj Tatti, Chris North, Naren
Ramakrishnan | Interactive and Iterative Discovery of Entity Network Subgraphs | null | null | null | null | cs.SI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph mining to extract interesting components has been studied in various
guises, e.g., communities, dense subgraphs, cliques. However, most existing
works are based on notions of frequency and connectivity and do not capture
subjective interestingness from a user's viewpoint. Furthermore, existing
approaches to mine graphs are not interactive and cannot incorporate user
feedback in any natural manner. In this paper, we address these gaps by
proposing a graph maximum entropy model to discover surprising connected
subgraph patterns from entity graphs. This model is embedded in an interactive
visualization framework to enable human-in-the-loop, model-guided data
exploration. Using case studies on real datasets, we demonstrate how
interactions between users and the maximum entropy model lead to faster and
explainable conclusions.
| [
{
"version": "v1",
"created": "Fri, 12 Aug 2016 19:56:14 GMT"
}
] | 2016-08-15T00:00:00 | [
[
"Wu",
"Hao",
""
],
[
"Sun",
"Maoyuan",
""
],
[
"Vreeken",
"Jilles",
""
],
[
"Tatti",
"Nikolaj",
""
],
[
"North",
"Chris",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Interactive and Iterative Discovery of Entity Network Subgraphs
ABSTRACT: Graph mining to extract interesting components has been studied in various
guises, e.g., communities, dense subgraphs, cliques. However, most existing
works are based on notions of frequency and connectivity and do not capture
subjective interestingness from a user's viewpoint. Furthermore, existing
approaches to mine graphs are not interactive and cannot incorporate user
feedback in any natural manner. In this paper, we address these gaps by
proposing a graph maximum entropy model to discover surprising connected
subgraph patterns from entity graphs. This model is embedded in an interactive
visualization framework to enable human-in-the-loop, model-guided data
exploration. Using case studies on real datasets, we demonstrate how
interactions between users and the maximum entropy model lead to faster and
explainable conclusions.
| no_new_dataset | 0.946101 |
1102.5597 | Radim v{R}eh{u}v{r}ek | Radim \v{R}eh{\r{u}}\v{r}ek | Fast and Faster: A Comparison of Two Streamed Matrix Decomposition
Algorithms | null | NIPS Workshop on Low-Rank Methods for Large-Scale Machine
Learning, 2010 | null | null | cs.NA cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the explosion of the size of digital datasets, the limiting factor for
decomposition algorithms is the \emph{number of passes} over the input, as the
input is often stored out-of-core or even off-site. Moreover, we're only
interested in algorithms that operate in \emph{constant memory} w.r.t. the
input size, so that arbitrarily large input can be processed. In this paper, we
present a practical comparison of two such algorithms: a distributed method
that operates in a single pass over the input vs. a streamed two-pass
stochastic algorithm. The experiments track the effect of distributed
computing, oversampling and memory trade-offs on the accuracy and performance
of the two algorithms. To ensure meaningful results, we choose the input to be
a real dataset, namely the whole of the English Wikipedia, in the application
settings of Latent Semantic Analysis.
| [
{
"version": "v1",
"created": "Mon, 28 Feb 2011 05:26:58 GMT"
}
] | 2016-08-14T00:00:00 | [
[
"Řeh{ů}řek",
"Radim",
""
]
] | TITLE: Fast and Faster: A Comparison of Two Streamed Matrix Decomposition
Algorithms
ABSTRACT: With the explosion of the size of digital datasets, the limiting factor for
decomposition algorithms is the \emph{number of passes} over the input, as the
input is often stored out-of-core or even off-site. Moreover, we're only
interested in algorithms that operate in \emph{constant memory} w.r.t. the
input size, so that arbitrarily large input can be processed. In this paper, we
present a practical comparison of two such algorithms: a distributed method
that operates in a single pass over the input vs. a streamed two-pass
stochastic algorithm. The experiments track the effect of distributed
computing, oversampling and memory trade-offs on the accuracy and performance
of the two algorithms. To ensure meaningful results, we choose the input to be
a real dataset, namely the whole of the English Wikipedia, in the application
settings of Latent Semantic Analysis.
| no_new_dataset | 0.945901 |
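In the spirit of the streamed two-pass stochastic algorithm compared in the record above, here is a Halko-style randomized truncated SVD that touches the rows of the input exactly twice: pass one accumulates a random sketch, pass two projects onto its orthonormal basis. The chunked in-memory rows stand in for out-of-core storage, and the whole thing is a generic sketch rather than the paper's implementation.

```python
import numpy as np

def streamed_two_pass_svd(row_chunks, k, oversample=10, seed=0):
    """Randomized rank-k SVD with two streamed passes over the row chunks.
    Working memory is O((n_rows + n_cols) * (k + oversample)), never the
    full matrix."""
    n_cols = row_chunks[0].shape[1]
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n_cols, k + oversample))
    Y = np.vstack([chunk @ Omega for chunk in row_chunks])  # pass 1: Y = A @ Omega
    Q, _ = np.linalg.qr(Y)
    B = np.zeros((Q.shape[1], n_cols))                      # pass 2: B = Q.T @ A
    row = 0
    for chunk in row_chunks:
        B += Q[row:row + len(chunk)].T @ chunk
        row += len(chunk)
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```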
1511.09319 | Luca Del Pero | Luca Del Pero, Susanna Ricco, Rahul Sukthankar, Vittorio Ferrari | Behavior Discovery and Alignment of Articulated Object Classes from
Unstructured Video | 19 pages, 19 figure, 3 tables. arXiv admin note: substantial text
overlap with arXiv:1411.7883 | International Journal of Computer Vision (IJCV), July 2016 | 10.1007/S11263-016-0939-9 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an automatic system for organizing the content of a collection of
unstructured videos of an articulated object class (e.g. tiger, horse). By
exploiting the recurring motion patterns of the class across videos, our
system: 1) identifies its characteristic behaviors; and 2) recovers
pixel-to-pixel alignments across different instances. Our system can be useful
for organizing video collections for indexing and retrieval. Moreover, it can
be a platform for learning the appearance or behaviors of object classes from
Internet video. Traditional supervised techniques cannot exploit this wealth of
data directly, as they require a large amount of time-consuming manual
annotations.
The behavior discovery stage generates temporal video intervals, each
automatically trimmed to one instance of the discovered behavior, clustered by
type. It relies on our novel motion representation for articulated motion based
on the displacement of ordered pairs of trajectories (PoTs). The alignment
stage aligns hundreds of instances of the class with great accuracy despite
considerable appearance variations (e.g. an adult tiger and a cub). It uses a
flexible Thin Plate Spline deformation model that can vary through time. We
carefully evaluate each step of our system on a new, fully annotated dataset.
On behavior discovery, we outperform the state-of-the-art Improved DTF
descriptor. On spatial alignment, we outperform the popular SIFT Flow
algorithm.
| [
{
"version": "v1",
"created": "Mon, 30 Nov 2015 14:22:52 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Aug 2016 01:29:20 GMT"
}
] | 2016-08-12T00:00:00 | [
[
"Del Pero",
"Luca",
""
],
[
"Ricco",
"Susanna",
""
],
[
"Sukthankar",
"Rahul",
""
],
[
"Ferrari",
"Vittorio",
""
]
] | TITLE: Behavior Discovery and Alignment of Articulated Object Classes from
Unstructured Video
ABSTRACT: We propose an automatic system for organizing the content of a collection of
unstructured videos of an articulated object class (e.g. tiger, horse). By
exploiting the recurring motion patterns of the class across videos, our
system: 1) identifies its characteristic behaviors; and 2) recovers
pixel-to-pixel alignments across different instances. Our system can be useful
for organizing video collections for indexing and retrieval. Moreover, it can
be a platform for learning the appearance or behaviors of object classes from
Internet video. Traditional supervised techniques cannot exploit this wealth of
data directly, as they require a large amount of time-consuming manual
annotations.
The behavior discovery stage generates temporal video intervals, each
automatically trimmed to one instance of the discovered behavior, clustered by
type. It relies on our novel motion representation for articulated motion based
on the displacement of ordered pairs of trajectories (PoTs). The alignment
stage aligns hundreds of instances of the class with great accuracy despite
considerable appearance variations (e.g. an adult tiger and a cub). It uses a
flexible Thin Plate Spline deformation model that can vary through time. We
carefully evaluate each step of our system on a new, fully annotated dataset.
On behavior discovery, we outperform the state-of-the-art Improved DTF
descriptor. On spatial alignment, we outperform the popular SIFT Flow
algorithm.
| no_new_dataset | 0.948394 |
1608.02537 | Heng Fan | Heng Fan, Xue Mei, Danil Prokhorov and Haibin Ling | Multi-level Contextual RNNs with Attention Model for Scene Labeling | 8 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Context in an image is crucial for scene labeling, yet existing methods only
exploit local context generated from a small area surrounding an image patch
or a pixel; by contrast, long-range and global contextual information is
ignored. To handle this issue, in this work we propose a novel approach for
scene labeling by exploring multi-level contextual recurrent neural networks
(ML-CRNNs). Specifically, we encode three kinds of contextual cues, i.e., local
context, global context and image topic context in structural recurrent neural
networks (RNNs) to model long-range local and global dependencies in image. In
this way, our method is able to `see' the image in terms of both long-range
local and holistic views, and make a more reliable inference for image
labeling. Besides, we integrate the proposed contextual RNNs into hierarchical
convolutional neural networks (CNNs), and exploit dependence relationships in
multiple levels to provide rich spatial and semantic information. Moreover, we
adopt a novel attention model to effectively merge the multiple levels and show
that it outperforms average- or max-pooling fusion strategies. Extensive
experiments demonstrate that the proposed approach achieves new
state-of-the-art results on the CamVid, SiftFlow and Stanford-background
datasets.
| [
{
"version": "v1",
"created": "Fri, 8 Jul 2016 21:51:53 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Aug 2016 21:15:51 GMT"
}
] | 2016-08-12T00:00:00 | [
[
"Fan",
"Heng",
""
],
[
"Mei",
"Xue",
""
],
[
"Prokhorov",
"Danil",
""
],
[
"Ling",
"Haibin",
""
]
] | TITLE: Multi-level Contextual RNNs with Attention Model for Scene Labeling
ABSTRACT: Context in an image is crucial for scene labeling, yet existing methods only
exploit local context generated from a small area surrounding an image patch
or a pixel; by contrast, long-range and global contextual information is
ignored. To handle this issue, in this work we propose a novel approach for
scene labeling by exploring multi-level contextual recurrent neural networks
(ML-CRNNs). Specifically, we encode three kinds of contextual cues, i.e., local
context, global context and image topic context in structural recurrent neural
networks (RNNs) to model long-range local and global dependencies in image. In
this way, our method is able to `see' the image in terms of both long-range
local and holistic views, and make a more reliable inference for image
labeling. Besides, we integrate the proposed contextual RNNs into hierarchical
convolutional neural networks (CNNs), and exploit dependence relationships in
multiple levels to provide rich spatial and semantic information. Moreover, we
adopt a novel attention model to effectively merge the multiple levels and show
that it outperforms average- or max-pooling fusion strategies. Extensive
experiments demonstrate that the proposed approach achieves new
state-of-the-art results on the CamVid, SiftFlow and Stanford-background
datasets.
| no_new_dataset | 0.950227 |
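The attention-based merging of levels mentioned in the ML-CRNN record can be sketched as a per-pixel softmax over levels, which is what makes it strictly more expressive than average- or max-pooling fusion. The single scoring vector w below is our simplifying assumption, not the paper's exact attention parameterization.

```python
import numpy as np

def softmax(z, axis=0):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(level_maps, w):
    """Per-pixel soft selection among L feature levels.

    level_maps: (L, H, W, C) maps from different network levels;
    w: (C,) parameters scoring each level's features at each pixel.
    """
    scores = np.tensordot(level_maps, w, axes=([3], [0]))  # (L, H, W)
    alpha = softmax(scores, axis=0)                        # attention over levels
    return (alpha[..., None] * level_maps).sum(axis=0)     # fused (H, W, C)
```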
1608.02341 | Nicola Di Mauro | Antonio Vergari and Nicola Di Mauro and Floriana Esposito | Towards Representation Learning with Tractable Probabilistic Models | 10 pages, submitted to ECML-PKDD 2016 Doctoral Consortium | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | Probabilistic models learned as density estimators can be exploited in
representation learning, besides merely being toolboxes for answering
inference queries. However, how to extract useful representations highly
depends on the
particular model involved. We argue that tractable inference, i.e. inference
that can be computed in polynomial time, can enable general schemes to extract
features from black box models. We plan to investigate how Tractable
Probabilistic Models (TPMs) can be exploited to generate embeddings by random
query evaluations. We devise two experimental designs to assess and compare
different TPMs as feature extractors in an unsupervised representation learning
framework. We show some experimental results on standard image datasets by
applying such a method to Sum-Product Networks and Mixture of Trees as
tractable models generating embeddings.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 07:44:24 GMT"
}
] | 2016-08-12T00:00:00 | [
[
"Vergari",
"Antonio",
""
],
[
"Di Mauro",
"Nicola",
""
],
[
"Esposito",
"Floriana",
""
]
] | TITLE: Towards Representation Learning with Tractable Probabilistic Models
ABSTRACT: Probabilistic models learned as density estimators can be exploited in
representation learning, besides merely being toolboxes for answering
inference queries. However, how to extract useful representations highly
depends on the
particular model involved. We argue that tractable inference, i.e. inference
that can be computed in polynomial time, can enable general schemes to extract
features from black box models. We plan to investigate how Tractable
Probabilistic Models (TPMs) can be exploited to generate embeddings by random
query evaluations. We devise two experimental designs to assess and compare
different TPMs as feature extractors in an unsupervised representation learning
framework. We show some experimental results on standard image datasets by
applying such a method to Sum-Product Networks and Mixture of Trees as
tractable models generating embeddings.
| no_new_dataset | 0.944842 |
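The proposal in the TPM record, embeddings from random query evaluations, can be sketched with any model exposing cheap marginals. The toy below uses independent Bernoullis purely so the snippet runs; they stand in for SPNs or mixtures of trees, and the query distribution and sizes are our illustrative assumptions.

```python
import numpy as np

def random_marginal_embeddings(X, marginal_fn, n_queries=64, seed=0):
    """Embed each binary instance by the model's log-probabilities of
    random partial-evidence (marginal) queries -- cheap precisely because
    the model is tractable. marginal_fn(x, mask) -> log P(X[mask] = x[mask])."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    masks = rng.random((n_queries, d)) < 0.5  # variables each query keeps
    E = np.empty((n, n_queries))
    for q, mask in enumerate(masks):
        for i in range(n):
            E[i, q] = marginal_fn(X[i], mask)
    return E

# Toy tractable model: independent Bernoullis, whose marginals factorize.
theta = np.full(10, 0.3)
def bernoulli_marginal(x, mask):
    p = np.where(x[mask] == 1, theta[mask], 1.0 - theta[mask])
    return float(np.log(p).sum())

X = (np.random.default_rng(1).random((5, 10)) < 0.5).astype(int)
E = random_marginal_embeddings(X, bernoulli_marginal)
```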
1608.03344 | Chenwei Zhang | Chenwei Zhang, Sihong Xie, Yaliang Li, Jing Gao, Wei Fan, Philip S. Yu | Multi-source Hierarchical Prediction Consolidation | null | null | null | null | cs.DB cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In big data applications such as healthcare data mining, due to privacy
concerns, it is necessary to collect predictions from multiple information
sources for the same instance, with raw features being discarded or withheld
when aggregating multiple predictions. Besides, crowd-sourced labels need to be
aggregated to estimate the ground truth of the data. Because of the imperfect
predictive models or human crowdsourcing workers, noisy and conflicting
information is ubiquitous and inevitable. Although state-of-the-art aggregation
methods have been proposed to handle label spaces with flat structures, as the
label space is becoming more and more complicated, aggregation under a label
hierarchical structure becomes necessary but has been largely ignored. These
label hierarchies can be quite informative as they are usually created by
domain experts to make sense of highly complex label correlations for many
real-world cases like protein functionality interactions or disease
relationships.
We propose a novel multi-source hierarchical prediction consolidation method
that effectively exploits the complicated hierarchical label structures to
resolve the noisy and conflicting information that inherently originates from
multiple imperfect sources. We formulate the problem as an optimization problem
with a closed-form solution. The proposed method captures the smoothness
over all information sources while penalizing any consolidation result that
violates the constraints derived from the label hierarchy. The hierarchical
instance similarity, as well as the consolidation result, are inferred in a
totally unsupervised, iterative fashion. Experimental results on both synthetic
and real-world datasets show the effectiveness of the proposed method over
existing alternatives.
| [
{
"version": "v1",
"created": "Thu, 11 Aug 2016 01:55:04 GMT"
}
] | 2016-08-12T00:00:00 | [
[
"Zhang",
"Chenwei",
""
],
[
"Xie",
"Sihong",
""
],
[
"Li",
"Yaliang",
""
],
[
"Gao",
"Jing",
""
],
[
"Fan",
"Wei",
""
],
[
"Yu",
"Philip S.",
""
]
] | TITLE: Multi-source Hierarchical Prediction Consolidation
ABSTRACT: In big data applications such as healthcare data mining, due to privacy
concerns, it is necessary to collect predictions from multiple information
sources for the same instance, with raw features being discarded or withheld
when aggregating multiple predictions. Besides, crowd-sourced labels need to be
aggregated to estimate the ground truth of the data. Because of the imperfect
predictive models or human crowdsourcing workers, noisy and conflicting
information is ubiquitous and inevitable. Although state-of-the-art aggregation
methods have been proposed to handle label spaces with flat structures, as the
label space is becoming more and more complicated, aggregation under a label
hierarchical structure becomes necessary but has been largely ignored. These
label hierarchies can be quite informative as they are usually created by
domain experts to make sense of highly complex label correlations for many
real-world cases like protein functionality interactions or disease
relationships.
We propose a novel multi-source hierarchical prediction consolidation method
that effectively exploits the complicated hierarchical label structures to
resolve the noisy and conflicting information that inherently originates from
multiple imperfect sources. We formulate the problem as an optimization problem
with a closed-form solution. The proposed method captures the smoothness
over all information sources while penalizing any consolidation result that
violates the constraints derived from the label hierarchy. The hierarchical
instance similarity, as well as the consolidation result, are inferred in a
totally unsupervised, iterative fashion. Experimental results on both synthetic
and real-world datasets show the effectiveness of the proposed method over
existing alternatives.
| no_new_dataset | 0.95253 |
1608.03410 | Tatiana Tommasi | Tatiana Tommasi, Arun Mallya, Bryan Plummer, Svetlana Lazebnik,
Alexander C. Berg, Tamara L. Berg | Solving Visual Madlibs with Multiple Cues | accepted at BMVC 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on answering fill-in-the-blank style multiple choice
questions from the Visual Madlibs dataset. Previous approaches to Visual
Question Answering (VQA) have mainly used generic image features from networks
trained on the ImageNet dataset, despite the wide scope of questions. In
contrast, our approach employs features derived from networks trained for
specialized tasks of scene classification, person activity prediction, and
person and object attribute prediction. We also present a method for selecting
sub-regions of an image that are relevant for evaluating the appropriateness of
a putative answer. Visual features are computed both from the whole image and
from local regions, while sentences are mapped to a common space using a simple
normalized canonical correlation analysis (CCA) model. Our results show a
significant improvement over the previous state of the art, and indicate that
answering different question types benefits from examining a variety of image
cues and carefully choosing informative image sub-regions.
| [
{
"version": "v1",
"created": "Thu, 11 Aug 2016 09:51:21 GMT"
}
] | 2016-08-12T00:00:00 | [
[
"Tommasi",
"Tatiana",
""
],
[
"Mallya",
"Arun",
""
],
[
"Plummer",
"Bryan",
""
],
[
"Lazebnik",
"Svetlana",
""
],
[
"Berg",
"Alexander C.",
""
],
[
"Berg",
"Tamara L.",
""
]
] | TITLE: Solving Visual Madlibs with Multiple Cues
ABSTRACT: This paper focuses on answering fill-in-the-blank style multiple choice
questions from the Visual Madlibs dataset. Previous approaches to Visual
Question Answering (VQA) have mainly used generic image features from networks
trained on the ImageNet dataset, despite the wide scope of questions. In
contrast, our approach employs features derived from networks trained for
specialized tasks of scene classification, person activity prediction, and
person and object attribute prediction. We also present a method for selecting
sub-regions of an image that are relevant for evaluating the appropriateness of
a putative answer. Visual features are computed both from the whole image and
from local regions, while sentences are mapped to a common space using a simple
normalized canonical correlation analysis (CCA) model. Our results show a
significant improvement over the previous state of the art, and indicate that
answering different question types benefits from examining a variety of image
cues and carefully choosing informative image sub-regions.
| no_new_dataset | 0.948489 |
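The matching step in the Visual Madlibs record maps images and sentences into a common space with a simple normalized CCA and scores candidate answers there. As a rough stand-in only (plain scikit-learn CCA on pre-extracted features rather than the paper's exact normalized-CCA variant; the dimensionality is illustrative):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def fit_cca(image_feats, sent_feats, dim=64):
    # Learn projections of paired image and sentence features into a
    # shared dim-dimensional space.
    cca = CCA(n_components=dim, max_iter=1000)
    cca.fit(image_feats, sent_feats)
    return cca

def pick_answer(cca, image_feat, candidate_feats):
    """Score each candidate sentence by cosine similarity to the image
    in the learned space; return the best candidate's index."""
    img, cands = cca.transform(image_feat[None, :], candidate_feats)
    img = img / np.linalg.norm(img)
    cands = cands / np.linalg.norm(cands, axis=1, keepdims=True)
    return int(np.argmax(cands @ img.ravel()))
```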
1608.03474 | Buyu Liu | Buyu Liu and Xuming He | Learning Dynamic Hierarchical Models for Anytime Scene Labeling | Accepted by ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With increasing demand for efficient image and video analysis, test-time cost
of scene parsing becomes critical for many large-scale or time-sensitive vision
applications. We propose a dynamic hierarchical model for anytime scene
labeling that allows us to achieve flexible trade-offs between efficiency and
accuracy in pixel-level prediction. In particular, our approach incorporates
the cost of feature computation and model inference, and optimizes the model
performance for any given test-time budget by learning a sequence of
image-adaptive hierarchical models. We formulate this anytime representation
learning as a Markov Decision Process with a discrete-continuous state-action
space. A high-quality policy of feature and model selection is learned based on
an approximate policy iteration method with action proposal mechanism. We
demonstrate the advantages of our dynamic non-myopic anytime scene parsing on
three semantic segmentation datasets, which achieves $90\%$ of the
state-of-the-art performances by using $15\%$ of their overall costs.
| [
{
"version": "v1",
"created": "Thu, 11 Aug 2016 14:19:31 GMT"
}
] | 2016-08-12T00:00:00 | [
[
"Liu",
"Buyu",
""
],
[
"He",
"Xuming",
""
]
] | TITLE: Learning Dynamic Hierarchical Models for Anytime Scene Labeling
ABSTRACT: With increasing demand for efficient image and video analysis, test-time cost
of scene parsing becomes critical for many large-scale or time-sensitive vision
applications. We propose a dynamic hierarchical model for anytime scene
labeling that allows us to achieve flexible trade-offs between efficiency and
accuracy in pixel-level prediction. In particular, our approach incorporates
the cost of feature computation and model inference, and optimizes the model
performance for any given test-time budget by learning a sequence of
image-adaptive hierarchical models. We formulate this anytime representation
learning as a Markov Decision Process with a discrete-continuous state-action
space. A high-quality policy of feature and model selection is learned based on
an approximate policy iteration method with action proposal mechanism. We
demonstrate the advantages of our dynamic non-myopic anytime scene parsing on
three semantic segmentation datasets, which achieves $90\%$ of the
state-of-the-art performances by using $15\%$ of their overall costs.
| no_new_dataset | 0.947381 |
1608.03556 | Nikos Bikakis | Nikos Bikakis, Chrisa Tsinaraki, Nektarios Gioldasis, Ioannis
Stavrakantonakis, Stavros Christodoulakis | The XML and Semantic Web Worlds: Technologies, Interoperability and
Integration. A Survey of the State of the Art | This paper appears in "Semantic Hyper/Multi-media Adaptation: Schemes
and Applications", Springer 2013. arXiv admin note: text overlap with
arXiv:1311.0536 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of the emergent Web of Data, a large number of organizations,
institutes and companies (e.g., DBpedia, Geonames, PubMed ACM, IEEE, NASA, BBC)
adopt the Linked Data practices and publish their data utilizing Semantic Web
(SW) technologies. On the other hand, the dominant standard for information
exchange in the Web today is XML. Many international standards (e.g., Dublin
Core, MPEG-7, METS, TEI, IEEE LOM) have been expressed in XML Schema resulting
in a large number of XML datasets. The SW and XML worlds and their developed
infrastructures are based on different data models, semantics and query
languages. Thus, it is crucial to provide interoperability and integration
mechanisms to bridge the gap between the SW and XML worlds. In this chapter, we
give an overview and a comparison of the technologies and the standards adopted
by the XML and SW worlds. In addition, we outline the latest efforts from the
W3C groups, including the latest working drafts and recommendations (e.g., OWL
2, SPARQL 1.1, XML Schema 1.1). Moreover, we present a survey of the research
approaches which aim to provide interoperability and integration between the
XML and SW worlds. Finally, we present the SPARQL2XQuery and XS2OWL Frameworks,
which bridge the gap and create an interoperable environment between the two
worlds. These Frameworks provide mechanisms for: (a) Query translation (SPARQL
to XQuery translation); (b) Mapping specification and generation (Ontology to
XML Schema mapping); and (c) Schema transformation (XML Schema to OWL
transformation).
| [
{
"version": "v1",
"created": "Thu, 11 Aug 2016 18:03:04 GMT"
}
] | 2016-08-12T00:00:00 | [
[
"Bikakis",
"Nikos",
""
],
[
"Tsinaraki",
"Chrisa",
""
],
[
"Gioldasis",
"Nektarios",
""
],
[
"Stavrakantonakis",
"Ioannis",
""
],
[
"Christodoulakis",
"Stavros",
""
]
] | TITLE: The XML and Semantic Web Worlds: Technologies, Interoperability and
Integration. A Survey of the State of the Art
ABSTRACT: In the context of the emergent Web of Data, a large number of organizations,
institutes and companies (e.g., DBpedia, Geonames, PubMed ACM, IEEE, NASA, BBC)
adopt the Linked Data practices and publish their data utilizing Semantic Web
(SW) technologies. On the other hand, the dominant standard for information
exchange in the Web today is XML. Many international standards (e.g., Dublin
Core, MPEG-7, METS, TEI, IEEE LOM) have been expressed in XML Schema resulting
in a large number of XML datasets. The SW and XML worlds and their developed
infrastructures are based on different data models, semantics and query
languages. Thus, it is crucial to provide interoperability and integration
mechanisms to bridge the gap between the SW and XML worlds. In this chapter, we
give an overview and a comparison of the technologies and the standards adopted
by the XML and SW worlds. In addition, we outline the latest efforts from the
W3C groups, including the latest working drafts and recommendations (e.g., OWL
2, SPARQL 1.1, XML Schema 1.1). Moreover, we present a survey of the research
approaches which aim to provide interoperability and integration between the
XML and SW worlds. Finally, we present the SPARQL2XQuery and XS2OWL Frameworks,
which bridge the gap and create an interoperable environment between the two
worlds. These Frameworks provide mechanisms for: (a) Query translation (SPARQL
to XQuery translation); (b) Mapping specification and generation (Ontology to
XML Schema mapping); and (c) Schema transformation (XML Schema to OWL
transformation).
| no_new_dataset | 0.949576 |
1303.7474 | Matthew Anderson | Matthew Anderson, Geng-Shen Fu, Ronald Phlypo, and T\"ulay Adal{\i} | Independent Vector Analysis: Identification Conditions and Performance
Bounds | 14 pages, 5 figures, in review for IEEE Trans. on Signal Processing | null | 10.1109/TSP.2014.2333554 | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, an extension of independent component analysis (ICA) from one to
multiple datasets, termed independent vector analysis (IVA), has been the
subject of significant research interest. IVA has also been shown to be a
generalization of Hotelling's canonical correlation analysis. In this paper, we
provide the identification conditions for a general IVA formulation, which
accounts for linear, nonlinear, and sample-to-sample dependencies. The
identification conditions are a generalization of previous results for ICA and
for IVA when samples are independently and identically distributed.
Furthermore, a principal aim of IVA is the identification of dependent sources
between datasets. Thus, we provide the additional conditions for when the
arbitrary ordering of the sources within each dataset is common. Performance
bounds in terms of the Cramer-Rao lower bound are also provided for the
demixing matrices and interference to source ratio. The performance of two IVA
algorithms is compared to the theoretical bounds.
| [
{
"version": "v1",
"created": "Fri, 29 Mar 2013 19:52:31 GMT"
}
] | 2016-08-11T00:00:00 | [
[
"Anderson",
"Matthew",
""
],
[
"Fu",
"Geng-Shen",
""
],
[
"Phlypo",
"Ronald",
""
],
[
"Adalı",
"Tülay",
""
]
] | TITLE: Independent Vector Analysis: Identification Conditions and Performance
Bounds
ABSTRACT: Recently, an extension of independent component analysis (ICA) from one to
multiple datasets, termed independent vector analysis (IVA), has been the
subject of significant research interest. IVA has also been shown to be a
generalization of Hotelling's canonical correlation analysis. In this paper, we
provide the identification conditions for a general IVA formulation, which
accounts for linear, nonlinear, and sample-to-sample dependencies. The
identification conditions are a generalization of previous results for ICA and
for IVA when samples are independently and identically distributed.
Furthermore, a principal aim of IVA is the identification of dependent sources
between datasets. Thus, we provide the additional conditions for when the
arbitrary ordering of the sources within each dataset is common. Performance
bounds in terms of the Cramer-Rao lower bound are also provided for the
demixing matrices and interference to source ratio. The performance of two IVA
algorithms is compared to the theoretical bounds.
| no_new_dataset | 0.950641 |
1507.01073 | Makoto Yamada | Makoto Yamada, Wenzhao Lian, Amit Goyal, Jianhui Chen, Kishan
Wimalawarne, Suleiman A Khan, Samuel Kaski, Hiroshi Mamitsuka, Yi Chang | Convex Factorization Machine for Regression | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the convex factorization machine (CFM), which is a convex variant
of the widely used Factorization Machines (FMs). Specifically, we employ a
linear+quadratic model and regularize the linear term with the
$\ell_2$-regularizer and the quadratic term with the trace norm regularizer.
Then, we formulate the CFM optimization as a semidefinite programming problem
and propose an efficient optimization procedure with Hazan's algorithm. A key
advantage of CFM over existing FMs is that it can find a globally optimal
solution, while FMs may get a poor locally optimal solution since the objective
function of FMs is non-convex. In addition, the proposed algorithm is simple
yet effective and can be implemented easily. Finally, CFM is a general
factorization method and can also be used for other factorization problems
including multi-view matrix factorization and tensor completion
problems. Through synthetic and MovieLens datasets, we first show that the
proposed CFM achieves results competitive with FMs. Furthermore, in a
toxicogenomics prediction task, we show that CFM outperforms a state-of-the-art
tensor factorization method.
| [
{
"version": "v1",
"created": "Sat, 4 Jul 2015 05:54:29 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Aug 2015 17:17:17 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Dec 2015 08:52:42 GMT"
},
{
"version": "v4",
"created": "Mon, 8 Aug 2016 14:55:49 GMT"
},
{
"version": "v5",
"created": "Wed, 10 Aug 2016 01:23:56 GMT"
}
] | 2016-08-11T00:00:00 | [
[
"Yamada",
"Makoto",
""
],
[
"Lian",
"Wenzhao",
""
],
[
"Goyal",
"Amit",
""
],
[
"Chen",
"Jianhui",
""
],
[
"Wimalawarne",
"Kishan",
""
],
[
"Khan",
"Suleiman A",
""
],
[
"Kaski",
"Samuel",
""
],
[
"Mamitsuka",
"Hiroshi",
""
],
[
"Chang",
"Yi",
""
]
] | TITLE: Convex Factorization Machine for Regression
ABSTRACT: We propose the convex factorization machine (CFM), which is a convex variant
of the widely used Factorization Machines (FMs). Specifically, we employ a
linear+quadratic model and regularize the linear term with the
$\ell_2$-regularizer and the quadratic term with the trace norm regularizer.
Then, we formulate the CFM optimization as a semidefinite programming problem
and propose an efficient optimization procedure with Hazan's algorithm. A key
advantage of CFM over existing FMs is that it can find a globally optimal
solution, while FMs may get a poor locally optimal solution since the objective
function of FMs is non-convex. In addition, the proposed algorithm is simple
yet effective and can be implemented easily. Finally, CFM is a general
factorization method and can also be used for other factorization problems
including multi-view matrix factorization and tensor completion
problems. Through synthetic and MovieLens datasets, we first show that the
proposed CFM achieves results competitive with FMs. Furthermore, in a
toxicogenomics prediction task, we show that CFM outperforms a state-of-the-art
tensor factorization method.
| no_new_dataset | 0.9434 |
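The CFM optimization above, squared loss over a trace-norm ball solved with Hazan's algorithm, is a Frank-Wolfe method whose linear oracle is a top singular pair. The sketch below fixes the linear weights w and updates only the interaction matrix Z; the joint updates, step-size details, and other losses from the paper are omitted, so treat it as an assumption-laden outline.

```python
import numpy as np

def cfm_predict(X, w, Z):
    # Convex FM: linear term plus a quadratic term whose interaction
    # matrix Z lives in a trace-norm ball (hence a convex model class).
    return X @ w + np.einsum('ni,ij,nj->n', X, Z, X)

def frank_wolfe_Z(X, y, w, tau=10.0, steps=50):
    """Frank-Wolfe over {Z : ||Z||_* <= tau} for the squared loss."""
    n, d = X.shape
    Z = np.zeros((d, d))
    for t in range(steps):
        r = cfm_predict(X, w, Z) - y               # residuals
        G = np.einsum('n,ni,nj->ij', r, X, X) / n  # gradient w.r.t. Z
        U, s, Vt = np.linalg.svd(-G)               # linear minimization oracle:
        S = tau * np.outer(U[:, 0], Vt[0])         # extreme point of the ball
        eta = 2.0 / (t + 2.0)
        Z = (1.0 - eta) * Z + eta * S
    return Z
```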
1608.00272 | Licheng Yu | Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, Tamara L.
Berg | Modeling Context in Referring Expressions | 19 pages, 6 figures, in ECCV 2016; authors, references and
acknowledgement updated | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans refer to objects in their environments all the time, especially in
dialogue with other people. We explore generating and comprehending natural
language referring expressions for objects in images. In particular, we focus
on incorporating better measures of visual context into referring expression
models and find that visual comparison to other objects within an image helps
improve performance significantly. We also develop methods to tie the language
generation process together, so that we generate expressions for all objects of
a particular category jointly. Evaluation on three recent datasets -- RefCOCO,
RefCOCO+, and RefCOCOg -- shows the advantages of our methods for both referring
expression generation and comprehension.
| [
{
"version": "v1",
"created": "Sun, 31 Jul 2016 22:21:42 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Aug 2016 22:52:17 GMT"
},
{
"version": "v3",
"created": "Wed, 10 Aug 2016 19:01:37 GMT"
}
] | 2016-08-11T00:00:00 | [
[
"Yu",
"Licheng",
""
],
[
"Poirson",
"Patrick",
""
],
[
"Yang",
"Shan",
""
],
[
"Berg",
"Alexander C.",
""
],
[
"Berg",
"Tamara L.",
""
]
] | TITLE: Modeling Context in Referring Expressions
ABSTRACT: Humans refer to objects in their environments all the time, especially in
dialogue with other people. We explore generating and comprehending natural
language referring expressions for objects in images. In particular, we focus
on incorporating better measures of visual context into referring expression
models and find that visual comparison to other objects within an image helps
improve performance significantly. We also develop methods to tie the language
generation process together, so that we generate expressions for all objects of
a particular category jointly. Evaluation on three recent datasets -- RefCOCO,
RefCOCO+, and RefCOCOg -- shows the advantages of our methods for both referring
expression generation and comprehension.
| no_new_dataset | 0.949995 |
1608.03049 | Ziwei Liu | Ziwei Liu, Sijie Yan, Ping Luo, Xiaogang Wang, Xiaoou Tang | Fashion Landmark Detection in the Wild | To appear in European Conference on Computer Vision (ECCV) 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual fashion analysis has attracted much attention in recent years.
Previous work represented clothing regions by either bounding boxes or human
joints. This work presents fashion landmark detection or fashion alignment,
which is to predict the positions of functional key points defined on the
fashion items, such as the corners of neckline, hemline, and cuff. To encourage
future studies, we introduce a fashion landmark dataset with over 120K images,
where each image is labeled with eight landmarks. With this dataset, we study
fashion alignment by cascading multiple convolutional neural networks in three
stages. These stages gradually improve the accuracies of landmark predictions.
Extensive experiments demonstrate the effectiveness of the proposed method, as
well as its generalization ability to pose estimation. Fashion landmark is also
compared to clothing bounding boxes and human joints in two applications,
fashion attribute prediction and clothes retrieval, showing that fashion
landmark is a more discriminative representation to understand fashion images.
| [
{
"version": "v1",
"created": "Wed, 10 Aug 2016 05:07:10 GMT"
}
] | 2016-08-11T00:00:00 | [
[
"Liu",
"Ziwei",
""
],
[
"Yan",
"Sijie",
""
],
[
"Luo",
"Ping",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Fashion Landmark Detection in the Wild
ABSTRACT: Visual fashion analysis has attracted much attention in recent years.
Previous work represented clothing regions by either bounding boxes or human
joints. This work presents fashion landmark detection or fashion alignment,
which is to predict the positions of functional key points defined on the
fashion items, such as the corners of neckline, hemline, and cuff. To encourage
future studies, we introduce a fashion landmark dataset with over 120K images,
where each image is labeled with eight landmarks. With this dataset, we study
fashion alignment by cascading multiple convolutional neural networks in three
stages. These stages gradually improve the accuracies of landmark predictions.
Extensive experiments demonstrate the effectiveness of the proposed method, as
well as its generalization ability to pose estimation. Fashion landmark is also
compared to clothing bounding boxes and human joints in two applications,
fashion attribute prediction and clothes retrieval, showing that fashion
landmark is a more discriminative representation to understand fashion images.
| new_dataset | 0.963472 |
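The cascading idea in the abstract above is easiest to see as stage-wise residual refinement: each stage predicts offsets to the current estimate of the eight landmark positions, so later stages only correct small errors. A minimal sketch, with toy stage callables standing in for the paper's convolutional networks:

```python
import numpy as np

def cascaded_landmarks(image_feats, stages, init_landmarks):
    """Stage-wise refinement: each stage predicts offsets to the current
    estimate of the eight landmark positions."""
    landmarks = init_landmarks.copy()
    for stage in stages:
        landmarks = landmarks + stage(image_feats, landmarks)
    return landmarks

# Toy usage: two "stages" that each halve the remaining error to a target.
target = np.tile([0.3, 0.4], (8, 1))
stages = [lambda feats, lm: 0.5 * (target - lm)] * 2
print(cascaded_landmarks(None, stages, np.zeros((8, 2))))
```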
1608.03066 | Benjamin Drayer | Benjamin Drayer and Thomas Brox | Object Detection, Tracking, and Motion Segmentation for Object-level
Video Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach for object segmentation in videos that combines
frame-level object detection with concepts from object tracking and motion
segmentation. The approach extracts temporally consistent object tubes based on
an off-the-shelf detector. Besides the class label for each tube, this provides
a location prior that is independent of motion. For the final video
segmentation, we combine this information with motion cues. The method
overcomes the typical problems of weakly supervised/unsupervised video
segmentation, such as scenes with no motion, dominant camera motion, and
objects that move as a unit. In contrast to most tracking methods, it provides
an accurate, temporally consistent segmentation of each object. We report
results on four video segmentation datasets: YouTube Objects, SegTrackv2,
egoMotion, and FBMS.
| [
{
"version": "v1",
"created": "Wed, 10 Aug 2016 07:46:56 GMT"
}
] | 2016-08-11T00:00:00 | [
[
"Drayer",
"Benjamin",
""
],
[
"Brox",
"Thomas",
""
]
] | TITLE: Object Detection, Tracking, and Motion Segmentation for Object-level
Video Segmentation
ABSTRACT: We present an approach for object segmentation in videos that combines
frame-level object detection with concepts from object tracking and motion
segmentation. The approach extracts temporally consistent object tubes based on
an off-the-shelf detector. Besides the class label for each tube, this provides
a location prior that is independent of motion. For the final video
segmentation, we combine this information with motion cues. The method
overcomes the typical problems of weakly supervised/unsupervised video
segmentation, such as scenes with no motion, dominant camera motion, and
objects that move as a unit. In contrast to most tracking methods, it provides
an accurate, temporally consistent segmentation of each object. We report
results on four video segmentation datasets: YouTube Objects, SegTrackv2,
egoMotion, and FBMS.
| no_new_dataset | 0.956391 |
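As a rough illustration of how the frame-level detections described above can be linked into temporally consistent tubes, here is a greedy IoU-matching sketch. The threshold and the greedy strategy are simplifying assumptions; the paper's tube extraction is more sophisticated.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def link_tubes(frame_dets, iou_thr=0.5):
    """Greedily extend each tube with the best-overlapping detection in the
    next frame; unmatched detections start new tubes."""
    tubes = [[d] for d in frame_dets[0]]
    for dets in frame_dets[1:]:
        unmatched = list(dets)
        for tube in tubes:
            if not unmatched:
                break
            best = max(unmatched, key=lambda d: iou(tube[-1], d))
            if iou(tube[-1], best) >= iou_thr:
                tube.append(best)
                unmatched.remove(best)
        tubes.extend([d] for d in unmatched)
    return tubes

frames = [[(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)]]
print(link_tubes(frames))  # one extended tube, one new tube
```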
1608.03217 | Ali Diba | Ali Diba, Ali Mohammad Pazandeh, Hamed Pirsiavash, Luc Van Gool | DeepCAMP: Deep Convolutional Action & Attribute Mid-Level Patterns | in CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recognition of human actions and the determination of human attributes
are two tasks that call for fine-grained classification. Indeed, often rather
small and inconspicuous objects and features have to be detected to tell their
classes apart. In order to deal with this challenge, we propose a novel
convolutional neural network that mines mid-level image patches that are
sufficiently dedicated to resolve the corresponding subtleties. In particular,
we train a newly designed CNN (DeepPattern) that learns discriminative patch
groups. There are two innovative aspects to this. On the one hand we pay
attention to contextual information in an original fashion. On the other
hand, we let an iteration of feature learning and patch clustering purify the
set of dedicated patches that we use. We validate our method for action
classification on two challenging datasets: PASCAL VOC 2012 Action and
Stanford 40 Actions, and for attribute recognition we use the Berkeley
Attributes of People dataset. Our discriminative mid-level mining CNN obtains
state-of-the-art results on these datasets, without a need for annotations
about parts and poses.
| [
{
"version": "v1",
"created": "Wed, 10 Aug 2016 15:43:10 GMT"
}
] | 2016-08-11T00:00:00 | [
[
"Diba",
"Ali",
""
],
[
"Pazandeh",
"Ali Mohammad",
""
],
[
"Pirsiavash",
"Hamed",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: DeepCAMP: Deep Convolutional Action & Attribute Mid-Level Patterns
ABSTRACT: The recognition of human actions and the determination of human attributes
are two tasks that call for fine-grained classification. Indeed, often rather
small and inconspicuous objects and features have to be detected to tell their
classes apart. In order to deal with this challenge, we propose a novel
convolutional neural network that mines mid-level image patches that are
sufficiently dedicated to resolve the corresponding subtleties. In particular,
we train a newly designed CNN (DeepPattern) that learns discriminative patch
groups. There are two innovative aspects to this. On the one hand we pay
attention to contextual information in an original fashion. On the other
hand, we let an iteration of feature learning and patch clustering purify the
set of dedicated patches that we use. We validate our method for action
classification on two challenging datasets: PASCAL VOC 2012 Action and
Stanford 40 Actions, and for attribute recognition we use the Berkeley
Attributes of People dataset. Our discriminative mid-level mining CNN obtains
state-of-the-art results on these datasets, without a need for annotations
about parts and poses.
| no_new_dataset | 0.947527 |
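The iterate-and-purify loop mentioned above can be sketched as alternating clustering and pruning over patch features. The tiny k-means routine, the median-distance purification rule, and the feature re-standardization standing in for CNN retraining are all simplifying assumptions:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: returns cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                C[j] = X[labels == j].mean(axis=0)
    return labels, C

def mine_patterns(patch_feats, k=5, rounds=3):
    """Alternate clustering and purification: keep only patches close to their
    centroid, then renormalize the features and cluster again."""
    feats = patch_feats.copy()
    keep = np.arange(len(patch_feats))
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(rounds):
        labels, C = kmeans(feats, k)
        dist = np.linalg.norm(feats - C[labels], axis=1)
        mask = dist < np.median(dist)        # purify: drop the farthest patches
        feats, keep, labels = feats[mask], keep[mask], labels[mask]
        feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)
    return keep, labels

patch_feats = np.random.default_rng(1).normal(size=(400, 16))
kept_idx, cluster_ids = mine_patterns(patch_feats)
```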
1409.2585 | Georgios Skoumas | Georgios Skoumas and Klaus Arthur Schmid and Gregor Joss\'e and
Andreas Z\"ufle and Mario A. Nascimento and Matthias Renz and Dieter Pfoser | Towards Knowledge-Enriched Path Computation | Accepted as a short paper at ACM SIGSPATIAL GIS 2014 | null | 10.1145/2666310.2666485 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Directions and paths, as commonly provided by navigation systems, are usually
derived considering absolute metrics, e.g., finding the shortest path within an
underlying road network. With the aid of crowdsourced geospatial data we aim at
obtaining paths that do not only minimize distance but also lead through more
popular areas using knowledge generated by users. We extract spatial relations
such as "nearby" or "next to" from travel blogs, that define closeness between
pairs of points of interest (PoIs) and quantify each of these relations using a
probabilistic model. Subsequently, we create a relationship graph where each
node corresponds to a PoI and each edge describes the spatial connection
between the respective PoIs. Using Bayesian inference we obtain a probabilistic
measure of spatial closeness according to the crowd. Applying this measure to
the corresponding road network, we obtain an altered cost function which does
not exclusively rely on distance, and enriches actual road networks by taking
crowdsourced spatial relations into account. Finally, we propose two routing
algorithms on the enriched road networks. To evaluate our approach, we use
Flickr photo data as a ground truth for popularity. Our experimental results --
based on real-world datasets -- show that the paths computed w.r.t. our
alternative cost function yield competitive solutions in terms of path length
while also providing more "popular" paths, making routing easier and more
informative for the user.
| [
{
"version": "v1",
"created": "Tue, 9 Sep 2014 09:51:01 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Skoumas",
"Georgios",
""
],
[
"Schmid",
"Klaus Arthur",
""
],
[
"Jossé",
"Gregor",
""
],
[
"Züfle",
"Andreas",
""
],
[
"Nascimento",
"Mario A.",
""
],
[
"Renz",
"Matthias",
""
],
[
"Pfoser",
"Dieter",
""
]
] | TITLE: Towards Knowledge-Enriched Path Computation
ABSTRACT: Directions and paths, as commonly provided by navigation systems, are usually
derived considering absolute metrics, e.g., finding the shortest path within an
underlying road network. With the aid of crowdsourced geospatial data we aim at
obtaining paths that do not only minimize distance but also lead through more
popular areas using knowledge generated by users. We extract spatial relations
such as "nearby" or "next to" from travel blogs, that define closeness between
pairs of points of interest (PoIs) and quantify each of these relations using a
probabilistic model. Subsequently, we create a relationship graph where each
node corresponds to a PoI and each edge describes the spatial connection
between the respective PoIs. Using Bayesian inference we obtain a probabilistic
measure of spatial closeness according to the crowd. Applying this measure to
the corresponding road network, we obtain an altered cost function which does
not exclusively rely on distance, and enriches actual road networks by taking
crowdsourced spatial relations into account. Finally, we propose two routing
algorithms on the enriched road networks. To evaluate our approach, we use
Flickr photo data as a ground truth for popularity. Our experimental results --
based on real-world datasets -- show that the paths computed w.r.t. our
alternative cost function yield competitive solutions in terms of path length
while also providing more "popular" paths, making routing easier and more
informative for the user.
| no_new_dataset | 0.951369 |
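One way to picture the altered cost function above is a shortest-path search whose edge cost blends physical length with a crowd-derived popularity penalty. The blending weight `alpha` and the toy graph are assumptions; the paper derives its closeness measure via Bayesian inference over extracted spatial relations.

```python
import heapq

def popularity_dijkstra(graph, src, dst, alpha=0.5):
    """Dijkstra on an 'enriched' network: each edge cost blends distance with
    a popularity penalty (lower penalty = more popular area).
    `graph` maps node -> list of (neighbor, distance, popularity_penalty)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, length, penalty in graph.get(u, []):
            cost = d + (1 - alpha) * length + alpha * penalty
            if cost < dist.get(v, float('inf')):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    # Reconstruct the path back from the destination.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

g = {'a': [('b', 1.0, 0.9), ('c', 1.4, 0.1)], 'b': [('d', 1.0, 0.9)],
     'c': [('d', 1.1, 0.2)], 'd': []}
print(popularity_dijkstra(g, 'a', 'd'))  # prefers the slightly longer, popular route
```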
1505.04382 | Lei Zhang | Lei Zhang and David Zhang | Robust Visual Knowledge Transfer via EDA | This paper has been accepted for publication in IEEE Transactions on
Image Processing | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of visual knowledge adaptation by leveraging labeled
patterns from the source domain and a very limited number of labeled instances
in the target domain to learn a robust classifier for visual categorization.
This paper proposes a new extreme learning machine based cross-domain network
learning framework, called Extreme Learning Machine (ELM) based Domain
Adaptation (EDA). It allows us to learn a category transformation and an ELM
classifier with random projection by minimizing the l_(2,1)-norm of the network
output weights and the learning error simultaneously. The unlabeled target
data, as useful knowledge, is also integrated as a fidelity term to guarantee
stability during cross-domain learning. It minimizes the matching error
between the learned classifier and a base classifier, such that many existing
classifiers can be readily incorporated as base classifiers. The network output
weights can not only be determined analytically but are also transferable.
Additionally, a manifold regularization with a Laplacian graph is incorporated,
which is beneficial to semi-supervised learning. Furthermore, we propose a
multi-view extension, referred to as MvEDA. Experiments on benchmark
visual datasets for video event recognition and object recognition demonstrate
that our EDA methods outperform existing cross-domain learning methods.
| [
{
"version": "v1",
"created": "Sun, 17 May 2015 11:23:12 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2016 07:22:34 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Zhang",
"Lei",
""
],
[
"Zhang",
"David",
""
]
] | TITLE: Robust Visual Knowledge Transfer via EDA
ABSTRACT: We address the problem of visual knowledge adaptation by leveraging labeled
patterns from the source domain and a very limited number of labeled instances
in the target domain to learn a robust classifier for visual categorization.
This paper proposes a new extreme learning machine based cross-domain network
learning framework, called Extreme Learning Machine (ELM) based Domain
Adaptation (EDA). It allows us to learn a category transformation and an ELM
classifier with random projection by minimizing the l_(2,1)-norm of the network
output weights and the learning error simultaneously. The unlabeled target
data, as useful knowledge, is also integrated as a fidelity term to guarantee
stability during cross-domain learning. It minimizes the matching error
between the learned classifier and a base classifier, such that many existing
classifiers can be readily incorporated as base classifiers. The network output
weights can not only be determined analytically but are also transferable.
Additionally, a manifold regularization with a Laplacian graph is incorporated,
which is beneficial to semi-supervised learning. Furthermore, we propose a
multi-view extension, referred to as MvEDA. Experiments on benchmark
visual datasets for video event recognition and object recognition demonstrate
that our EDA methods outperform existing cross-domain learning methods.
| no_new_dataset | 0.946941 |
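For intuition about the ELM backbone of the method above, here is a minimal sketch of an extreme learning machine: a fixed random projection followed by a closed-form ridge solve for the output weights. The l_(2,1)-norm, fidelity, and manifold regularization terms of EDA are deliberately omitted, so this is only the starting point of the method.

```python
import numpy as np

def elm_train(X, Y, n_hidden=64, lam=1e-2, seed=0):
    """Random hidden layer + closed-form ridge solve for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random projection
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy usage: one-hot labels for a two-class problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
Y = np.eye(2)[(X[:, 0] > 0).astype(int)]
model = elm_train(X, Y)
pred = elm_predict(model, X).argmax(axis=1)
```

The analytic solve is what makes the output weights cheap to compute and, as the abstract notes, easy to transfer across domains.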
1512.02413 | Julian Yarkony | Shaofei Wang, Steffen Wolf, Charless Fowlkes, Julian Yarkony | Tracking Objects with Higher Order Interactions using Delayed Column
Generation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of multi-target tracking and data association in video.
We formulate this in terms of selecting a subset of high-quality tracks subject
to the constraint that no pair of selected tracks is associated with a common
detection (of an object). This objective is equivalent to the classic NP-hard
problem of finding a maximum-weight set packing (MWSP) where tracks correspond
to sets and is made further difficult since the number of candidate tracks
grows exponentially in the number of detections. We present a relaxation of
this combinatorial problem that uses a column generation formulation where the
pricing problem is solved via dynamic programming to efficiently explore the
space of tracks. We employ row generation to tighten the bound in such a way as
to preserve efficient inference in the pricing problem. We show the practical
utility of this algorithm for tracking problems in natural and biological video
datasets.
| [
{
"version": "v1",
"created": "Tue, 8 Dec 2015 11:41:30 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Jan 2016 04:10:10 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Aug 2016 05:44:51 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Wang",
"Shaofei",
""
],
[
"Wolf",
"Steffen",
""
],
[
"Fowlkes",
"Charless",
""
],
[
"Yarkony",
"Julian",
""
]
] | TITLE: Tracking Objects with Higher Order Interactions using Delayed Column
Generation
ABSTRACT: We study the problem of multi-target tracking and data association in video.
We formulate this in terms of selecting a subset of high-quality tracks subject
to the constraint that no pair of selected tracks is associated with a common
detection (of an object). This objective is equivalent to the classic NP-hard
problem of finding a maximum-weight set packing (MWSP) where tracks correspond
to sets and is made further difficult since the number of candidate tracks
grows exponentially in the number of detections. We present a relaxation of
this combinatorial problem that uses a column generation formulation where the
pricing problem is solved via dynamic programming to efficiently explore the
space of tracks. We employ row generation to tighten the bound in such a way as
to preserve efficient inference in the pricing problem. We show the practical
utility of this algorithm for tracking problems in natural and biological video
datasets.
| no_new_dataset | 0.946547 |
1606.02858 | Danqi Chen | Danqi Chen, Jason Bolton, Christopher D. Manning | A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task | ACL 2016, updated results | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enabling a computer to understand a document so that it can answer
comprehension questions is a central, yet unsolved goal of NLP. A key factor
impeding its solution by machine learned systems is the limited availability of
human-annotated data. Hermann et al. (2015) seek to solve this problem by
creating over a million training examples by pairing CNN and Daily Mail news
articles with their summarized bullet points, and show that a neural network
can then be trained to give good performance on this task. In this paper, we
conduct a thorough examination of this new reading comprehension task. Our
primary aim is to understand what depth of language understanding is required
to do well on this task. We approach this from one side by doing a careful
hand-analysis of a small subset of the problems and from the other by showing
that simple, carefully designed systems can obtain accuracies of 73.6% and
76.6% on these two datasets, exceeding current state-of-the-art results by
7-10% and approaching what we believe is the ceiling for performance on this
task.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2016 08:19:16 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2016 21:21:19 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Chen",
"Danqi",
""
],
[
"Bolton",
"Jason",
""
],
[
"Manning",
"Christopher D.",
""
]
] | TITLE: A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task
ABSTRACT: Enabling a computer to understand a document so that it can answer
comprehension questions is a central, yet unsolved goal of NLP. A key factor
impeding its solution by machine learned systems is the limited availability of
human-annotated data. Hermann et al. (2015) seek to solve this problem by
creating over a million training examples by pairing CNN and Daily Mail news
articles with their summarized bullet points, and show that a neural network
can then be trained to give good performance on this task. In this paper, we
conduct a thorough examination of this new reading comprehension task. Our
primary aim is to understand what depth of language understanding is required
to do well on this task. We approach this from one side by doing a careful
hand-analysis of a small subset of the problems and from the other by showing
that simple, carefully designed systems can obtain accuracies of 73.6% and
76.6% on these two datasets, exceeding current state-of-the-art results by
7-10% and approaching what we believe is the ceiling for performance on this
task.
| no_new_dataset | 0.940898 |
1606.03676 | Benoit Sagot | Beno\^it Sagot (ALPAGE) | External Lexical Information for Multilingual Part-of-Speech Tagging | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Morphosyntactic lexicons and word vector representations have both proven
useful for improving the accuracy of statistical part-of-speech taggers. Here
we compare the performances of four systems on datasets covering 16 languages,
two of these systems being feature-based (MEMMs and CRFs) and two of them being
neural-based (bi-LSTMs). We show that, on average, all four approaches perform
similarly and reach state-of-the-art results. Yet better performances are
obtained with our feature-based models on lexically richer datasets (e.g. for
morphologically rich languages), whereas neural-based results are higher on
datasets with less lexical variability (e.g. for English). These conclusions
hold in particular for the MEMM models relying on our system MElt, which
benefited from newly designed features. This shows that, under certain
conditions, feature-based approaches enriched with morphosyntactic lexicons are
competitive with respect to neural methods.
| [
{
"version": "v1",
"created": "Sun, 12 Jun 2016 08:06:55 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2016 08:41:46 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Sagot",
"Benoît",
"",
"ALPAGE"
]
] | TITLE: External Lexical Information for Multilingual Part-of-Speech Tagging
ABSTRACT: Morphosyntactic lexicons and word vector representations have both proven
useful for improving the accuracy of statistical part-of-speech taggers. Here
we compare the performances of four systems on datasets covering 16 languages,
two of these systems being feature-based (MEMMs and CRFs) and two of them being
neural-based (bi-LSTMs). We show that, on average, all four approaches perform
similarly and reach state-of-the-art results. Yet better performances are
obtained with our feature-based models on lexically richer datasets (e.g. for
morphologically rich languages), whereas neural-based results are higher on
datasets with less lexical variability (e.g. for English). These conclusions
hold in particular for the MEMM models relying on our system MElt, which
benefited from newly designed features. This shows that, under certain
conditions, feature-based approaches enriched with morphosyntactic lexicons are
competitive with respect to neural methods.
| no_new_dataset | 0.948822 |
1608.01198 | Dong Huang | Dong Huang, Chang-Dong Wang, Jian-Huang Lai, Yun Liang, Shan Bian, Yu
Chen | Ensemble-driven support vector clustering: From ensemble learning to
automatic parameter estimation | To appear in ICPR 2016 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Support vector clustering (SVC) is a versatile clustering technique that is
able to identify clusters of arbitrary shapes by exploiting the kernel trick.
However, one hurdle that restricts the application of SVC lies in its
sensitivity to the kernel parameter and the trade-off parameter. Although many
extensions of SVC have been developed, to the best of our knowledge, there is
still no algorithm that is able to effectively estimate the two crucial
parameters in SVC without supervision. In this paper, we propose a novel
support vector clustering approach termed ensemble-driven support vector
clustering (EDSVC), which for the first time tackles the automatic parameter
estimation problem for SVC based on ensemble learning, and is capable of
producing robust clustering results in a purely unsupervised manner.
Experimental results on multiple real-world datasets demonstrate the
effectiveness of our approach.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2016 14:19:00 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Aug 2016 15:28:15 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Huang",
"Dong",
""
],
[
"Wang",
"Chang-Dong",
""
],
[
"Lai",
"Jian-Huang",
""
],
[
"Liang",
"Yun",
""
],
[
"Bian",
"Shan",
""
],
[
"Chen",
"Yu",
""
]
] | TITLE: Ensemble-driven support vector clustering: From ensemble learning to
automatic parameter estimation
ABSTRACT: Support vector clustering (SVC) is a versatile clustering technique that is
able to identify clusters of arbitrary shapes by exploiting the kernel trick.
However, one hurdle that restricts the application of SVC lies in its
sensitivity to the kernel parameter and the trade-off parameter. Although many
extensions of SVC have been developed, to the best of our knowledge, there is
still no algorithm that is able to effectively estimate the two crucial
parameters in SVC without supervision. In this paper, we propose a novel
support vector clustering approach termed ensemble-driven support vector
clustering (EDSVC), which for the first time tackles the automatic parameter
estimation problem for SVC based on ensemble learning, and is capable of
producing robust clustering results in a purely unsupervised manner.
Experimental results on multiple real-world datasets demonstrate the
effectiveness of our approach.
| no_new_dataset | 0.948728 |
1608.02639 | Boxiang Dong | Boxiang Dong, Zhengzhang Chen, Hui Wang, Lu-An Tang, Kai Zhang, Ying
Lin, Haifeng Chen, Guofei Jiang | GID: Graph-based Intrusion Detection on Massive Process Traces for
Enterprise Security Systems | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An intrusion detection system (IDS) is an important part of enterprise security
system architecture. In particular, anomaly-based IDS has been widely applied
to detect abnormal process behaviors that deviate from the majority. However,
such abnormal behavior usually consists of a series of low-level heterogeneous
events. The gap between the low-level events and the high-level abnormal
behaviors makes it hard to infer which single events are related to the real
abnormal activities, especially considering that there are massive "noisy"
low-level events happening in between. Hence, existing work that focuses on
detecting single entities/events can hardly achieve high detection accuracy.
Different from previous work, we design and implement GID, an efficient
graph-based intrusion detection technique that can identify abnormal event
sequences from massive heterogeneous process traces with high accuracy. GID
first builds a compact graph structure to capture the interactions between
different system entities. The suspiciousness or anomaly score of process paths
is then measured by applying a random walk technique to the constructed
directed acyclic graph. To eliminate the score bias from the path length, the Box-Cox
power transformation based approach is introduced to normalize the anomaly
scores so that the scores of paths of different lengths have the same
distribution. The efficiency of suspicious path discovery is further improved
by the proposed optimization scheme. We fully implement our GID algorithm and
deploy it into a real enterprise security system, and it greatly helps detect
advanced threats and optimize incident response. Executing GID on
system monitoring datasets shows that GID is efficient (about 2 million
records per minute) and accurate (higher than 80% in terms of detection rate).
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 22:09:26 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Dong",
"Boxiang",
""
],
[
"Chen",
"Zhengzhang",
""
],
[
"Wang",
"Hui",
""
],
[
"Tang",
"Lu-An",
""
],
[
"Zhang",
"Kai",
""
],
[
"Lin",
"Ying",
""
],
[
"Chen",
"Haifeng",
""
],
[
"Jiang",
"Guofei",
""
]
] | TITLE: GID: Graph-based Intrusion Detection on Massive Process Traces for
Enterprise Security Systems
ABSTRACT: An intrusion detection system (IDS) is an important part of enterprise security
system architecture. In particular, anomaly-based IDS has been widely applied
to detect abnormal process behaviors that deviate from the majority. However,
such abnormal behavior usually consists of a series of low-level heterogeneous
events. The gap between the low-level events and the high-level abnormal
behaviors makes it hard to infer which single events are related to the real
abnormal activities, especially considering that there are massive "noisy"
low-level events happening in between. Hence, existing work that focuses on
detecting single entities/events can hardly achieve high detection accuracy.
Different from previous work, we design and implement GID, an efficient
graph-based intrusion detection technique that can identify abnormal event
sequences from massive heterogeneous process traces with high accuracy. GID
first builds a compact graph structure to capture the interactions between
different system entities. The suspiciousness or anomaly score of process paths
is then measured by applying a random walk technique to the constructed
directed acyclic graph. To eliminate the score bias from the path length, the Box-Cox
power transformation based approach is introduced to normalize the anomaly
scores so that the scores of paths of different lengths have the same
distribution. The efficiency of suspicious path discovery is further improved
by the proposed optimization scheme. We fully implement our GID algorithm and
deploy it into a real enterprise security system, and it greatly helps detect
advanced threats and optimize incident response. Executing GID on
system monitoring datasets shows that GID is efficient (about 2 million
records per minute) and accurate (higher than 80% in terms of detection rate).
| no_new_dataset | 0.948585 |
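The length-debiasing step described above lends itself to a short sketch: apply a Box-Cox power transformation to the raw random-walk scores of paths of each length, then standardize them onto a common scale. The fixed lambda and the toy scores are assumptions; in practice the transformation parameter would be estimated from the data.

```python
import numpy as np

def boxcox(x, lmbda):
    """Box-Cox power transformation (x must be strictly positive)."""
    return np.log(x) if lmbda == 0 else (x ** lmbda - 1) / lmbda

def normalize_scores(scores_by_length, lmbda=0.2):
    """Transform raw anomaly scores per path length with Box-Cox, then
    standardize so scores of different lengths share a common scale."""
    out = {}
    for length, scores in scores_by_length.items():
        t = boxcox(np.asarray(scores, dtype=float), lmbda)
        out[length] = (t - t.mean()) / (t.std() + 1e-12)
    return out

raw = {2: [0.9, 0.8, 0.01], 3: [0.5, 0.4, 0.001]}
print(normalize_scores(raw))  # anomalous paths stand out regardless of length
```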
1608.02657 | Bin Guo | Yan Liu, Bin Guo, Yang Wang, Wenle Wu, Zhiwen Yu, Daqing Zhang | TaskMe: Multi-Task Allocation in Mobile Crowd Sensing | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Task allocation or participant selection is a key issue in Mobile Crowd
Sensing (MCS). While previous participant selection approaches mainly focus on
selecting a proper subset of users for a single MCS task, multi-task-oriented
participant selection is essential and useful for the efficiency of large-scale
MCS platforms. This paper proposes TaskMe, a participant selection framework
for multi-task MCS environments. In particular, two typical multi-task
allocation situations with bi-objective optimization goals are studied: (1) For
FPMT (few participants, more tasks), each participant is required to complete
multiple tasks and the optimization goal is to maximize the total number of
accomplished tasks while minimizing the total movement distance. (2) For MPFT
(more participants, few tasks), each participant is selected to perform one
task based on pre-registered working areas in view of privacy, and the
optimization objective is to minimize total incentive payments while minimizing
the total traveling distance. Two optimal algorithms based on the Minimum Cost
Maximum Flow theory are proposed for FPMT, and two algorithms based on the
multi-objective optimization theory are proposed for MPFT. Experiments verify
that the proposed algorithms outperform baselines based on a large-scale
real-word dataset under different experiment settings (the number of tasks,
various task distributions, etc.).
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 23:43:15 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Liu",
"Yan",
""
],
[
"Guo",
"Bin",
""
],
[
"Wang",
"Yang",
""
],
[
"Wu",
"Wenle",
""
],
[
"Yu",
"Zhiwen",
""
],
[
"Zhang",
"Daqing",
""
]
] | TITLE: TaskMe: Multi-Task Allocation in Mobile Crowd Sensing
ABSTRACT: Task allocation or participant selection is a key issue in Mobile Crowd
Sensing (MCS). While previous participant selection approaches mainly focus on
selecting a proper subset of users for a single MCS task, multi-task-oriented
participant selection is essential and useful for the efficiency of large-scale
MCS platforms. This paper proposes TaskMe, a participant selection framework
for multi-task MCS environments. In particular, two typical multi-task
allocation situations with bi-objective optimization goals are studied: (1) For
FPMT (few participants, more tasks), each participant is required to complete
multiple tasks and the optimization goal is to maximize the total number of
accomplished tasks while minimizing the total movement distance. (2) For MPFT
(more participants, few tasks), each participant is selected to perform one
task based on pre-registered working areas in view of privacy, and the
optimization objective is to minimize total incentive payments while minimizing
the total traveling distance. Two optimal algorithms based on the Minimum Cost
Maximum Flow theory are proposed for FPMT, and two algorithms based on the
multi-objective optimization theory are proposed for MPFT. Experiments verify
that the proposed algorithms outperform baselines based on a large-scale
real-world dataset under different experiment settings (the number of tasks,
various task distributions, etc.).
| no_new_dataset | 0.949482 |
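The FPMT allocation above can be sketched as a min-cost max-flow instance with networkx: source-to-worker edges cap the tasks per participant, worker-to-task edges carry travel distance as cost, and task-to-sink edges ensure each task is done at most once. The graph construction and the integer cost scaling are illustrative assumptions rather than the paper's exact formulation.

```python
import networkx as nx

def assign_participants(distances, max_tasks_per_worker=2):
    """Min-cost max-flow allocation: maximize completed tasks while the edge
    costs push the total movement distance down."""
    G = nx.DiGraph()
    for worker, row in distances.items():
        G.add_edge('s', worker, capacity=max_tasks_per_worker, weight=0)
        for task, d in row.items():
            G.add_edge(worker, task, capacity=1, weight=int(d * 100))  # integer costs
            G.add_edge(task, 't', capacity=1, weight=0)
    flow = nx.max_flow_min_cost(G, 's', 't')
    return [(worker, task) for worker, row in distances.items()
            for task in row if flow[worker].get(task, 0) > 0]

dist = {'w1': {'t1': 1.2, 't2': 3.0}, 'w2': {'t1': 0.5, 't3': 2.0}}
print(assign_participants(dist))  # e.g. w2 takes the nearby t1, w1 takes t2
```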
1608.02659 | Mohamed Ali Mahjoub | Anis Elbahi, Mohamed Nazih Omri, Mohamed Ali Mahjoub, Kamel Garrouch | Mouse Movement and Probabilistic Graphical Models Based E-Learning
Activity Recognition Improvement Possibilistic Model | in AJSE 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically recognizing the e-learning activities is an important task for
improving the online learning process. Probabilistic graphical models such as
hidden Markov models and conditional random fields have been successfully used
in order to identify a Web user's activity. For such models, the sequences of
observation are crucial for training and inference processes. Despite the
efficiency of these probabilistic graphical models in segmenting and labeling
stochastic sequences, their performance is adversely affected by the imperfect
quality of data used for the construction of sequences of observation. In this
paper, a formalism of the possibilistic theory will be used in order to propose
a new approach for observation sequences preparation. The eminent contribution
of our approach is to evaluate the effect of possibilistic reasoning during the
generation of observation sequences on the effectiveness of hidden Markov
models and conditional random fields models. Using a dataset containing 51 real
manipulations related to three types of learners tasks, the preliminary
experiments demonstrate that the sequences of observation obtained based on
possibilistic reasoning significantly improve the performance of hidden Marvov
models and conditional random fields models in the automatic recognition of the
e-learning activities.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 23:48:19 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Elbahi",
"Anis",
""
],
[
"Omri",
"Mohamed Nazih",
""
],
[
"Mahjoub",
"Mohamed Ali",
""
],
[
"Garrouch",
"Kamel",
""
]
] | TITLE: Mouse Movement and Probabilistic Graphical Models Based E-Learning
Activity Recognition Improvement Possibilistic Model
ABSTRACT: Automatically recognizing the e-learning activities is an important task for
improving the online learning process. Probabilistic graphical models such as
hidden Markov models and conditional random fields have been successfully used
in order to identify a Web user's activity. For such models, the sequences of
observation are crucial for training and inference processes. Despite the
efficiency of these probabilistic graphical models in segmenting and labeling
stochastic sequences, their performance is adversely affected by the imperfect
quality of data used for the construction of sequences of observation. In this
paper, a formalism of the possibilistic theory will be used in order to propose
a new approach for observation sequences preparation. The eminent contribution
of our approach is to evaluate the effect of possibilistic reasoning during the
generation of observation sequences on the effectiveness of hidden Markov
models and conditional random fields models. Using a dataset containing 51 real
manipulations related to three types of learners' tasks, the preliminary
experiments demonstrate that the sequences of observation obtained based on
possibilistic reasoning significantly improve the performance of hidden Markov
models and conditional random fields models in the automatic recognition of the
e-learning activities.
| no_new_dataset | 0.914444 |
1608.02676 | Krishna Kumar Singh | Krishna Kumar Singh and Yong Jae Lee | End-to-End Localization and Ranking for Relative Attributes | Appears in European Conference on Computer Vision (ECCV), 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an end-to-end deep convolutional network to simultaneously
localize and rank relative visual attributes, given only weakly-supervised
pairwise image comparisons. Unlike previous methods, our network jointly learns
the attribute's features, localization, and ranker. The localization module of
our network discovers the most informative image region for the attribute,
which is then used by the ranking module to learn a ranking model of the
attribute. Our end-to-end framework also significantly speeds up processing and
is much faster than previous methods. We show state-of-the-art ranking results
on various relative attribute datasets, and our qualitative localization
results clearly demonstrate our network's ability to learn meaningful image
patches.
| [
{
"version": "v1",
"created": "Tue, 9 Aug 2016 02:19:37 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Singh",
"Krishna Kumar",
""
],
[
"Lee",
"Yong Jae",
""
]
] | TITLE: End-to-End Localization and Ranking for Relative Attributes
ABSTRACT: We propose an end-to-end deep convolutional network to simultaneously
localize and rank relative visual attributes, given only weakly-supervised
pairwise image comparisons. Unlike previous methods, our network jointly learns
the attribute's features, localization, and ranker. The localization module of
our network discovers the most informative image region for the attribute,
which is then used by the ranking module to learn a ranking model of the
attribute. Our end-to-end framework also significantly speeds up processing and
is much faster than previous methods. We show state-of-the-art ranking results
on various relative attribute datasets, and our qualitative localization
results clearly demonstrate our network's ability to learn meaningful image
patches.
| no_new_dataset | 0.954816 |
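The weak pairwise supervision used by the ranking module above can be sketched with a logistic pairwise ranking loss over a linear scorer standing in for the network; the plain gradient-descent trainer and all names are assumptions.

```python
import numpy as np

def pairwise_rank_loss(w, feats, pairs):
    """For each pair (i, j) with image i exhibiting more of the attribute than
    image j, penalize score(i) <= score(j) with a logistic loss."""
    s = feats @ w
    margins = np.array([s[i] - s[j] for i, j in pairs])
    return np.log1p(np.exp(-margins)).mean()

def train_ranker(feats, pairs, lr=0.1, steps=200):
    """Plain gradient descent on the logistic pairwise ranking loss."""
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        s = feats @ w
        grad = np.zeros_like(w)
        for i, j in pairs:
            sig = 1.0 / (1.0 + np.exp(s[i] - s[j]))   # d loss / d margin
            grad += -sig * (feats[i] - feats[j])
        w -= lr * grad / len(pairs)
    return w

# Toy usage: the first ten items are labeled as ranking above the last ten.
rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 5))
pairs = [(i, j) for i in range(10) for j in range(10, 20)]
w = train_ranker(feats, pairs)
print(pairwise_rank_loss(w, feats, pairs))
```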
1608.02778 | Ke Yu | Ke Yu, Chao Dong, Chen Change Loy, Xiaoou Tang | Deep Convolution Networks for Compression Artifacts Reduction | 13 pages, 19 figures, an extension of our ICCV 2015 paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lossy compression introduces complex compression artifacts, particularly
blocking artifacts, ringing effects and blurring. Existing algorithms either
focus on removing blocking artifacts and produce blurred output, or restore
sharpened images that are accompanied with ringing effects. Inspired by the
success of deep convolutional networks (DCN) on superresolution, we formulate a
compact and efficient network for seamless attenuation of different compression
artifacts. To meet the speed requirement of real-world applications, we further
accelerate the proposed baseline model by layer decomposition and joint use of
large-stride convolutional and deconvolutional layers. This also leads to a
more general CNN framework that has a close relationship with the conventional
Multi-Layer Perceptron (MLP). Finally, the modified network achieves a speed up
of 7.5 times with almost no performance loss compared to the baseline model. We
also demonstrate that a deeper model can be effectively trained with features
learned in a shallow network. Following a similar "easy to hard" idea, we
systematically investigate three practical transfer settings and show the
effectiveness of transfer learning in low-level vision problems. Our method
shows performance superior to the state-of-the-art methods both on benchmark
datasets and a real-world use case.
| [
{
"version": "v1",
"created": "Tue, 9 Aug 2016 12:11:51 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Yu",
"Ke",
""
],
[
"Dong",
"Chao",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Deep Convolution Networks for Compression Artifacts Reduction
ABSTRACT: Lossy compression introduces complex compression artifacts, particularly
blocking artifacts, ringing effects and blurring. Existing algorithms either
focus on removing blocking artifacts and produce blurred output, or restore
sharpened images that are accompanied with ringing effects. Inspired by the
success of deep convolutional networks (DCN) on superresolution, we formulate a
compact and efficient network for seamless attenuation of different compression
artifacts. To meet the speed requirement of real-world applications, we further
accelerate the proposed baseline model by layer decomposition and joint use of
large-stride convolutional and deconvolutional layers. This also leads to a
more general CNN framework that has a close relationship with the conventional
Multi-Layer Perceptron (MLP). Finally, the modified network achieves a speed up
of 7.5 times with almost no performance loss compared to the baseline model. We
also demonstrate that a deeper model can be effectively trained with features
learned in a shallow network. Following a similar "easy to hard" idea, we
systematically investigate three practical transfer settings and show the
effectiveness of transfer learning in low-level vision problems. Our method
shows performance superior to the state-of-the-art methods both on benchmark
datasets and a real-world use case.
| no_new_dataset | 0.948489 |
1608.02797 | Sharon Lee | Sharon X Lee, Kaleb L Leemaqz, Geoffrey J McLachlan | A block EM algorithm for multivariate skew normal and skew t-mixture
models | null | null | null | null | stat.CO cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finite mixtures of skew distributions provide a flexible tool for modelling
heterogeneous data with asymmetric distributional features. However, parameter
estimation via the Expectation-Maximization (EM) algorithm can become very
time-consuming due to the complicated expressions involved in the E-step that
are numerically expensive to evaluate. A more time-efficient implementation of
the EM algorithm was recently proposed which allows each component of the
mixture model to be evaluated in parallel. In this paper, we develop a block
implementation of the EM algorithm that allows the calculations in the E-
and M-steps to be spread across a larger number of threads. We focus on the
fitting of finite mixtures of multivariate skew normal and skew
t-distributions, and show that both the E- and M-steps in the EM algorithm can
be modified to allow the data to be split into blocks. The approach can be
easily implemented for use by multicore and multi-processor machines. It can
also be applied concurrently with the recently proposed multithreaded EM
algorithm to achieve further reduction in computation time. The improvement in
time performance is illustrated on some real datasets.
| [
{
"version": "v1",
"created": "Tue, 9 Aug 2016 13:28:38 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Lee",
"Sharon X",
""
],
[
"Leemaqz",
"Kaleb L",
""
],
[
"McLachlan",
"Geoffrey J",
""
]
] | TITLE: A block EM algorithm for multivariate skew normal and skew t-mixture
models
ABSTRACT: Finite mixtures of skew distributions provide a flexible tool for modelling
heterogeneous data with asymmetric distributional features. However, parameter
estimation via the Expectation-Maximization (EM) algorithm can become very
time-consuming due to the complicated expressions involved in the E-step that
are numerically expensive to evaluate. A more time-efficient implementation of
the EM algorithm was recently proposed which allows each component of the
mixture model to be evaluated in parallel. In this paper, we develop a block
implementation of the EM algorithm that allows the calculations in the E-
and M-steps to be spread across a larger number of threads. We focus on the
fitting of finite mixtures of multivariate skew normal and skew
t-distributions, and show that both the E- and M-steps in the EM algorithm can
be modified to allow the data to be split into blocks. The approach can be
easily implemented for use by multicore and multi-processor machines. It can
also be applied concurrently with the recently proposed multithreaded EM
algorithm to achieve further reduction in computation time. The improvement in
time performance is illustrated on some real datasets.
| no_new_dataset | 0.945601 |
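A minimal sketch of the block idea above for a mixture model: each data block computes its own responsibilities and contributes sufficient statistics independently, so the blocks can be farmed out to threads or processes. Plain Gaussian components stand in for the skew normal and skew t densities, whose E-step is considerably more involved.

```python
import numpy as np

def e_step_block(Xb, means, covs, weights):
    """Responsibilities for one data block under a Gaussian mixture."""
    k = len(weights)
    dens = np.empty((len(Xb), k))
    for j in range(k):
        d = Xb - means[j]
        inv = np.linalg.inv(covs[j])
        logdet = np.linalg.slogdet(covs[j])[1]
        quad = np.einsum('ij,jk,ik->i', d, inv, d)
        dens[:, j] = weights[j] * np.exp(
            -0.5 * (quad + logdet + Xb.shape[1] * np.log(2 * np.pi)))
    return dens / dens.sum(axis=1, keepdims=True)

def em_iteration(X, means, covs, weights, n_blocks=4):
    """One EM iteration with block-wise accumulation of E-step statistics;
    each block is independent and could run on its own thread."""
    k = len(weights)
    Nk, Sx = np.zeros(k), np.zeros((k, X.shape[1]))
    resp_blocks = []
    for Xb in np.array_split(X, n_blocks):
        R = e_step_block(Xb, means, covs, weights)
        resp_blocks.append((Xb, R))
        Nk += R.sum(0)
        Sx += R.T @ Xb
    means = Sx / Nk[:, None]
    weights = Nk / len(X)
    covs = [sum(((Xb - means[j]) * R[:, [j]]).T @ (Xb - means[j])
                for Xb, R in resp_blocks) / Nk[j] for j in range(k)]
    return means, covs, weights

# Toy usage with two components.
X = np.random.default_rng(2).normal(size=(300, 2))
m, c, p = em_iteration(X, np.array([[-1.0, 0.0], [1.0, 0.0]]),
                       [np.eye(2), np.eye(2)], np.array([0.5, 0.5]))
```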
1608.02858 | Jan Drchal | Jan Mrkos, Jan Drchal, Malcolm Egan, Michal Jakob | Liftago On-Demand Transport Dataset and Market Formation Algorithm Based
on Machine Learning | 9 pages, 2 figures, supplemental information for a journal paper | null | null | null | cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This document serves as a technical report for the analysis of an on-demand
transport dataset. Moreover, we show how the dataset can be used to develop a
market formation algorithm based on machine learning. Data used in this work
comes from Liftago, a Prague based company which connects taxi drivers and
customers through a smartphone app. The dataset is analysed from the
machine-learning perspective: we give an overview of features available as well
as results of feature ranking. Later we propose the SImple Data-driven MArket
Formation (SIDMAF) algorithm which aims to improve a relevance while connecting
customers with relevant drivers. We compare the heuristics currently used by
Liftago with SIDMAF using two key performance indicators.
| [
{
"version": "v1",
"created": "Tue, 9 Aug 2016 16:33:03 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Mrkos",
"Jan",
""
],
[
"Drchal",
"Jan",
""
],
[
"Egan",
"Malcolm",
""
],
[
"Jakob",
"Michal",
""
]
] | TITLE: Liftago On-Demand Transport Dataset and Market Formation Algorithm Based
on Machine Learning
ABSTRACT: This document serves as a technical report for the analysis of an on-demand
transport dataset. Moreover, we show how the dataset can be used to develop a
market formation algorithm based on machine learning. Data used in this work
comes from Liftago, a Prague based company which connects taxi drivers and
customers through a smartphone app. The dataset is analysed from the
machine-learning perspective: we give an overview of features available as well
as results of feature ranking. Later we propose the SImple Data-driven MArket
Formation (SIDMAF) algorithm, which aims to improve relevance when connecting
customers with suitable drivers. We compare the heuristics currently used by
Liftago with SIDMAF using two key performance indicators.
| no_new_dataset | 0.951323 |
1608.02888 | Ayad Ghany Ismaeel | Ayad Ghany Ismaeel, Dina Yousif Mikhail | Effective Data Mining Technique for Classification Cancers via Mutations
in Gene using Neural Network | 8 pages, 8 figures, 1 Table | (IJACSA) International Journal of Advanced Computer Science and
Applications, Vol. 7, No. 7, 2016. Pages 69-76 | 10.14569/IJACSA.2016.070710 | null | cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Prediction plays an important role in enabling effective protection against
and therapy of cancer. Predicting mutations in a gene requires diagnosis and
classification based on the whole database (a big dataset) in order to reach
sufficiently accurate results. Mutations occurring in the TP53 gene are
involved in approximately fifty percent of all human tumors, since p53 acts as
a tumor suppressor in cells. This paper therefore focuses on tumor protein
p53. The problem is that the several existing primitive (Excel) databases
containing datasets of the TP53 gene and its tumor protein p53, although they
are rich datasets covering all mutations and the diseases (cancers) they
cause, cannot by themselves predict and diagnose cancers; that is, the big
datasets lack an efficient data mining method that can predict and diagnose
the mutation and classify the patient's cancer. The goal of this paper is to
develop a data mining technique that employs a neural network and is based on
the big datasets, and that offers user-friendly predictions and flexible,
effective cancer classification in order to overcome the drawbacks of previous
techniques. The proposed technique uses two approaches: first, bioinformatics
techniques (BLAST, CLUSTALW, etc.) to determine whether malignant mutations
are present; second, data mining using a neural network, for which 12 out of
53 TP53 gene database fields are selected. To clarify, one of these 12 fields
(the gene location field) did not exist in the TP53 gene database; it is
therefore added to the TP53 gene database for training and testing the
back-propagation algorithm, in order to classify the types of cancers
specifically. Feed-forward back propagation supports this data mining method
with a data training rate of 1 and a Mean Square Error (MSE) of
0.00000000000001. This effective technique allows the type of cancer to be
classified in a quick, accurate, and easy way.
| [
{
"version": "v1",
"created": "Sat, 6 Aug 2016 12:48:40 GMT"
}
] | 2016-08-10T00:00:00 | [
[
"Ismaeel",
"Ayad Ghany",
""
],
[
"Mikhail",
"Dina Yousif",
""
]
] | TITLE: Effective Data Mining Technique for Classification Cancers via Mutations
in Gene using Neural Network
ABSTRACT: Prediction plays an important role in enabling effective protection against
and therapy of cancer. Predicting mutations in a gene requires diagnosis and
classification based on the whole database (a big dataset) in order to reach
sufficiently accurate results. Mutations occurring in the TP53 gene are
involved in approximately fifty percent of all human tumors, since p53 acts as
a tumor suppressor in cells. This paper therefore focuses on tumor protein
p53. The problem is that the several existing primitive (Excel) databases
containing datasets of the TP53 gene and its tumor protein p53, although they
are rich datasets covering all mutations and the diseases (cancers) they
cause, cannot by themselves predict and diagnose cancers; that is, the big
datasets lack an efficient data mining method that can predict and diagnose
the mutation and classify the patient's cancer. The goal of this paper is to
develop a data mining technique that employs a neural network and is based on
the big datasets, and that offers user-friendly predictions and flexible,
effective cancer classification in order to overcome the drawbacks of previous
techniques. The proposed technique uses two approaches: first, bioinformatics
techniques (BLAST, CLUSTALW, etc.) to determine whether malignant mutations
are present; second, data mining using a neural network, for which 12 out of
53 TP53 gene database fields are selected. To clarify, one of these 12 fields
(the gene location field) did not exist in the TP53 gene database; it is
therefore added to the TP53 gene database for training and testing the
back-propagation algorithm, in order to classify the types of cancers
specifically. Feed-forward back propagation supports this data mining method
with a data training rate of 1 and a Mean Square Error (MSE) of
0.00000000000001. This effective technique allows the type of cancer to be
classified in a quick, accurate, and easy way.
| no_new_dataset | 0.944944 |
1506.03475 | Yuqing Hou | Yuqing Hou, Zhouchen Lin | Image Tag Completion and Refinement by Subspace Clustering and Matrix
Completion | This paper has been withdrawn by the author due to a error in the
model formulation | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tag-based image retrieval (TBIR) has drawn much attention in recent years due
to the explosive amount of digital images and crowdsourcing tags. However, the
TBIR applications still suffer from the deficient and inaccurate tags provided
by users. Inspired by the subspace clustering methods, we formulate the tag
completion problem in a subspace clustering model which assumes that images are
sampled from subspaces, and complete the tags using the state-of-the-art Low
Rank Representation (LRR) method. We then propose a matrix completion algorithm
to further refine the tags. Our empirical results on multiple benchmark
datasets for image annotation show that the proposed algorithm outperforms
state-of-the-art approaches when handling missing and noisy tags.
| [
{
"version": "v1",
"created": "Wed, 10 Jun 2015 20:42:50 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2016 02:14:37 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Hou",
"Yuqing",
""
],
[
"Lin",
"Zhouchen",
""
]
] | TITLE: Image Tag Completion and Refinement by Subspace Clustering and Matrix
Completion
ABSTRACT: Tag-based image retrieval (TBIR) has drawn much attention in recent years due
to the explosive amount of digital images and crowdsourcing tags. However, the
TBIR applications still suffer from the deficient and inaccurate tags provided
by users. Inspired by the subspace clustering methods, we formulate the tag
completion problem in a subspace clustering model which assumes that images are
sampled from subspaces, and complete the tags using the state-of-the-art Low
Rank Representation (LRR) method. We then propose a matrix completion algorithm
to further refine the tags. Our empirical results on multiple benchmark
datasets for image annotation show that the proposed algorithm outperforms
state-of-the-art approaches when handling missing and noisy tags.
| no_new_dataset | 0.947284 |
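For the matrix-completion refinement step mentioned above, here is a minimal singular-value-thresholding (SVT) sketch for low-rank completion of a tag matrix; the SVT choice and its parameters are assumptions, not necessarily the paper's exact refinement model.

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, step=1.0, n_iters=100):
    """Singular value thresholding: observed entries (mask == 1) are kept
    consistent while nuclear-norm shrinkage fills in the missing tags."""
    Y = np.zeros_like(M, dtype=float)
    X = Y
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0)) @ Vt        # shrink singular values
        Y = Y + step * mask * (M - X)                # enforce observed entries
    return X

# Toy usage: recover a rank-2 "tag matrix" from half its entries.
rng = np.random.default_rng(3)
M_true = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))
mask = (rng.random(M_true.shape) < 0.5).astype(float)
M_hat = svt_complete(M_true * mask, mask)
```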
1508.07468 | Yuqing Hou | Yuqing Hou | Image Annotation Incorporating Low-Rankness, Tag and Visual Correlation
and Inhomogeneous Errors | This paper has been withdrawn by the author to update more
experiments and some errors in the algorithm | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tag-based image retrieval (TBIR) has drawn much attention in recent years due
to the explosive amount of digital images and crowdsourcing tags. However, TBIR
is still suffering from the incomplete and inaccurate tags provided by users,
posing a great challenge for tag-based image management applications. In this
work, we proposed a novel method for image annotation, incorporating several
priors: Low-Rankness, Tag and Visual Correlation and Inhomogeneous Errors.
Highly representative CNN feature vectors are adopt to model the tag-visual
correlation and narrow the semantic gap. And we extract word vectors for tags
to measure similarity between tags in the semantic level, which is more
accurate than traditional frequency-based or graph-based methods. We utilize
the accelerated proximal gradient (APG) method to solve our model efficiently.
Extensive experiments conducted on multiple benchmark datasets demonstrate the
effectiveness and robustness of the proposed method.
| [
{
"version": "v1",
"created": "Sat, 29 Aug 2015 15:47:20 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2016 04:43:51 GMT"
},
{
"version": "v3",
"created": "Mon, 8 Aug 2016 02:15:36 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Hou",
"Yuqing",
""
]
] | TITLE: Image Annotation Incorporating Low-Rankness, Tag and Visual Correlation
and Inhomogeneous Errors
ABSTRACT: Tag-based image retrieval (TBIR) has drawn much attention in recent years due
to the explosive growth of digital images and crowdsourced tags. However, TBIR
still suffers from the incomplete and inaccurate tags provided by users,
posing a great challenge for tag-based image management applications. In this
work, we propose a novel method for image annotation, incorporating several
priors: Low-Rankness, Tag and Visual Correlation and Inhomogeneous Errors.
Highly representative CNN feature vectors are adopted to model the tag-visual
correlation and narrow the semantic gap. We also extract word vectors for tags
to measure similarity between tags at the semantic level, which is more
accurate than traditional frequency-based or graph-based methods. We utilize
the accelerated proximal gradient (APG) method to solve our model efficiently.
Extensive experiments conducted on multiple benchmark datasets demonstrate the
effectiveness and robustness of the proposed method.
| no_new_dataset | 0.953232 |
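The abstract above solves its model with the accelerated proximal gradient (APG) method. A minimal FISTA-style sketch for a simplified, nuclear-norm-regularized objective is shown below; the paper's tag/visual-correlation and inhomogeneous-error terms are deliberately omitted, so this is a generic illustration rather than the authors' solver.

```python
import numpy as np

def prox_nuclear(A, t):
    """Proximal operator of t * nuclear norm (singular-value shrinkage)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def apg(T, M, lam=1.0, L=1.0, iters=300):
    """APG/FISTA for min_A 0.5*||M*(A - T)||_F^2 + lam*||A||_*,
    where M is a 0/1 mask of observed tags (a simplifying assumption)."""
    A = np.zeros_like(T)
    Z = A.copy()
    theta = 1.0
    for _ in range(iters):
        grad = M * (Z - T)                        # gradient of the smooth term
        A_next = prox_nuclear(Z - grad / L, lam / L)
        theta_next = (1.0 + np.sqrt(1.0 + 4.0 * theta**2)) / 2.0
        Z = A_next + ((theta - 1.0) / theta_next) * (A_next - A)  # momentum
        A, theta = A_next, theta_next
    return A
```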
1510.05237 | Vijay Gadepally | Brendan Gavin and Vijay Gadepally and Jeremy Kepner | Large Enforced Sparse Non-Negative Matrix Factorization | 9 pages | null | 10.1109/IPDPSW.2016.58 | null | cs.LG cs.NA cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-negative matrix factorization (NMF) is a common method for generating
topic models from text data. NMF is widely accepted for producing good results
despite its relative simplicity of implementation and ease of computation. One
challenge with applying NMF to large datasets is that intermediate matrix
products often become dense, stressing the memory and compute elements of a
system. In this article, we investigate a simple but powerful modification of a
common NMF algorithm that enforces the generation of sparse intermediate and
output matrices. This method enables the application of NMF to large datasets
through improved memory and compute performance. Further, we demonstrate
empirically that this method of enforcing sparsity in the NMF either preserves
or improves both the accuracy of the resulting topic model and the convergence
rate of the underlying algorithm.
| [
{
"version": "v1",
"created": "Sun, 18 Oct 2015 12:53:38 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Gavin",
"Brendan",
""
],
[
"Gadepally",
"Vijay",
""
],
[
"Kepner",
"Jeremy",
""
]
] | TITLE: Large Enforced Sparse Non-Negative Matrix Factorization
ABSTRACT: Non-negative matrix factorization (NMF) is a common method for generating
topic models from text data. NMF is widely accepted for producing good results
despite its relative simplicity of implementation and ease of computation. One
challenge with applying NMF to large datasets is that intermediate matrix
products often become dense, stressing the memory and compute elements of a
system. In this article, we investigate a simple but powerful modification of a
common NMF algorithm that enforces the generation of sparse intermediate and
output matrices. This method enables the application of NMF to large datasets
through improved memory and compute performance. Further, we demonstrate
empirically that this method of enforcing sparsity in the NMF either preserves
or improves both the accuracy of the resulting topic model and the convergence
rate of the underlying algorithm.
| no_new_dataset | 0.94545 |
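To make the sparsity-enforcing idea in the record above concrete, here is a hedged sketch: standard multiplicative NMF updates followed by zeroing small factor entries, so the factors and their products stay sparse (a zeroed entry remains zero under multiplicative updates). The threshold rule is an assumption standing in for the paper's specific scheme.

```python
import numpy as np

def sparse_nmf(A, k=10, iters=100, thresh=1e-3, eps=1e-12):
    """Factor a nonnegative matrix A ~ W @ H with enforced sparsity."""
    rng = np.random.default_rng(0)
    m, n = A.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)  # Lee-Seung update for H
        W *= (A @ H.T) / (W @ H @ H.T + eps)  # Lee-Seung update for W
        W[W < thresh] = 0.0                   # enforce sparse factors;
        H[H < thresh] = 0.0                   # zeros persist under the updates
    return W, H

# Toy example: a random nonnegative matrix stands in for term-document counts.
A = np.random.default_rng(2).random((200, 80))
W, H = sparse_nmf(A, k=8)
```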