id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
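Given the schema above, the sketch below shows one way to load and filter this table with the Hugging Face `datasets` library. It is a minimal example, not part of the dataset itself: the repository id is a placeholder assumption, while the column names (`title`, `label`, `prob`) follow the header row.

```python
from datasets import load_dataset

# Placeholder repo id (an assumption) -- substitute the actual dataset path.
ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

# Each row carries arXiv metadata plus a prompt built from the title and
# abstract, a binary label ("new_dataset" / "no_new_dataset"), and the
# probability assigned to that label.
row = ds[0]
print(row["title"], row["label"], row["prob"])

# Keep only high-confidence rows labeled as introducing a new dataset.
confident_new = ds.filter(lambda r: r["label"] == "new_dataset" and r["prob"] > 0.9)
print(len(confident_new))
```

If the table lives only as a local file, the same columns can be loaded with `load_dataset("csv", data_files=...)` instead.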
1505.01709 | Piotr Br\'odka | Stanis{\l}aw Saganowski, Bogdan Gliwa, Piotr Br\'odka, Anna Zygmunt,
Przemys{\l}aw Kazienko, Jaros{\l}aw Ko\'zlak | Predicting Community Evolution in Social Networks | Entropy 2015, 17, 1-x manuscripts; doi:10.3390/e170x000x 46 pages | Entropy 2015, 17, 3053-3096 | 10.3390/e17053053 | null | cs.SI physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | Nowadays, sustained development of different social media can be observed
worldwide. One of the relevant research domains intensively explored recently
is analysis of social communities existing in social media as well as
prediction of their future evolution taking into account collected historical
evolution chains. These evolution chains, proposed in the paper, contain group
states in the previous time frames and their historical transitions that were
identified using one out of two methods: Stable Group Changes Identification
(SGCI) and Group Evolution Discovery (GED). Based on the observed evolution
chains of various lengths, structural network features are extracted, validated
and selected as well as used to learn classification models. The experimental
studies were performed on three real datasets with different profiles: DBLP,
Facebook and Polish blogosphere. The process of group prediction was analysed
with respect to different classifiers as well as various descriptive feature
sets extracted from evolution chains of different length. The results revealed
that, in general, the longer the evolution chains, the better the predictive abilities
of the classification models. However, chains of length 3 to 7 enabled the
GED-based method to almost reach its maximum possible prediction quality. For
SGCI, this value was at the level of 3 to 5 last periods.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 14:03:47 GMT"
}
] | 2015-05-12T00:00:00 | [
[
"Saganowski",
"Stanisław",
""
],
[
"Gliwa",
"Bogdan",
""
],
[
"Bródka",
"Piotr",
""
],
[
"Zygmunt",
"Anna",
""
],
[
"Kazienko",
"Przemysław",
""
],
[
"Koźlak",
"Jarosław",
""
]
] | TITLE: Predicting Community Evolution in Social Networks
ABSTRACT: Nowadays, sustained development of different social media can be observed
worldwide. One of the relevant research domains intensively explored recently
is analysis of social communities existing in social media as well as
prediction of their future evolution taking into account collected historical
evolution chains. These evolution chains, proposed in the paper, contain group
states in the previous time frames and their historical transitions that were
identified using one out of two methods: Stable Group Changes Identification
(SGCI) and Group Evolution Discovery (GED). Based on the observed evolution
chains of various lengths, structural network features are extracted, validated
and selected as well as used to learn classification models. The experimental
studies were performed on three real datasets with different profiles: DBLP,
Facebook and Polish blogosphere. The process of group prediction was analysed
with respect to different classifiers as well as various descriptive feature
sets extracted from evolution chains of different length. The results revealed
that, in general, the longer the evolution chains, the better the predictive abilities
of the classification models. However, chains of length 3 to 7 enabled the
GED-based method to almost reach its maximum possible prediction quality. For
SGCI, this value was at the level of 3 to 5 last periods.
| no_new_dataset | 0.945851 |
1505.02269 | Zongyuan Ge | Zongyuan Ge and Christopher Mccool and Conrad Sanderson and Peter
Corke | Subset Feature Learning for Fine-Grained Category Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-grained categorisation has been a challenging problem due to small
inter-class variation, large intra-class variation and a low number of training
images. We propose a learning system which first clusters visually similar
classes and then learns deep convolutional neural network features specific to
each subset. Experiments on the popular fine-grained Caltech-UCSD bird dataset
show that the proposed method outperforms recent fine-grained categorisation
methods under the most difficult setting: no bounding boxes are presented at
test time. It achieves a mean accuracy of 77.5%, compared to the previous best
performance of 73.2%. We also show that progressive transfer learning allows us
to first learn domain-generic features (for bird classification) which can then
be adapted to a specific set of bird classes, yielding improvements in accuracy.
| [
{
"version": "v1",
"created": "Sat, 9 May 2015 13:25:24 GMT"
}
] | 2015-05-12T00:00:00 | [
[
"Ge",
"Zongyuan",
""
],
[
"Mccool",
"Christopher",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"Corke",
"Peter",
""
]
] | TITLE: Subset Feature Learning for Fine-Grained Category Classification
ABSTRACT: Fine-grained categorisation has been a challenging problem due to small
inter-class variation, large intra-class variation and a low number of training
images. We propose a learning system which first clusters visually similar
classes and then learns deep convolutional neural network features specific to
each subset. Experiments on the popular fine-grained Caltech-UCSD bird dataset
show that the proposed method outperforms recent fine-grained categorisation
methods under the most difficult setting: no bounding boxes are presented at
test time. It achieves a mean accuracy of 77.5%, compared to the previous best
performance of 73.2%. We also show that progressive transfer learning allows us
to first learn domain-generic features (for bird classification) which can then
be adapted to a specific set of bird classes, yielding improvements in accuracy.
| no_new_dataset | 0.951997 |
1505.02274 | Takayuki Mizuno | Takayuki Mizuno, Takaaki Ohnishi and Tsutomu Watanabe | Structure of global buyer-supplier networks and its implications for
conflict minerals regulations | 18 pages, 7 tables, 6 figures | null | null | null | physics.soc-ph q-fin.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the structure of global inter-firm linkages using a dataset
that contains information on business partners for about 400,000 firms
worldwide, including all the firms listed on the major stock exchanges. Among
the firms, we examine three networks, which are based on customer-supplier,
licensee-licensor, and strategic alliance relationships. First, we show that
these networks all have scale-free topology and that the degree distribution
for each follows a power law with an exponent of 1.5. The shortest path length
is around six for all three networks. Second, we show through community
structure analysis that the firms comprise a community with those firms that
belong to the same industry but different home countries, indicating the
globalization of firms' production activities. Finally, we discuss what such
production globalization implies for the proliferation of conflict minerals
(i.e., minerals extracted from conflict zones and sold to firms in other
countries to perpetuate fighting) through global buyer-supplier linkages. We
show that a limited number of firms belonging to some specific industries and
countries play an important role in the global proliferation of conflict
minerals. Our numerical simulation shows that regulations on the purchases of
conflict minerals by those firms would substantially reduce their worldwide
use.
| [
{
"version": "v1",
"created": "Sat, 9 May 2015 13:58:27 GMT"
}
] | 2015-05-12T00:00:00 | [
[
"Mizuno",
"Takayuki",
""
],
[
"Ohnishi",
"Takaaki",
""
],
[
"Watanabe",
"Tsutomu",
""
]
] | TITLE: Structure of global buyer-supplier networks and its implications for
conflict minerals regulations
ABSTRACT: We investigate the structure of global inter-firm linkages using a dataset
that contains information on business partners for about 400,000 firms
worldwide, including all the firms listed on the major stock exchanges. Among
the firms, we examine three networks, which are based on customer-supplier,
licensee-licensor, and strategic alliance relationships. First, we show that
these networks all have scale-free topology and that the degree distribution
for each follows a power law with an exponent of 1.5. The shortest path length
is around six for all three networks. Second, we show through community
structure analysis that the firms comprise a community with those firms that
belong to the same industry but different home countries, indicating the
globalization of firms' production activities. Finally, we discuss what such
production globalization implies for the proliferation of conflict minerals
(i.e., minerals extracted from conflict zones and sold to firms in other
countries to perpetuate fighting) through global buyer-supplier linkages. We
show that a limited number of firms belonging to some specific industries and
countries play an important role in the global proliferation of conflict
minerals. Our numerical simulation shows that regulations on the purchases of
conflict minerals by those firms would substantially reduce their worldwide
use.
| no_new_dataset | 0.908456 |
1505.02377 | Renjie Liao | Renjie Liao, Jianping Shi, Ziyang Ma, Jun Zhu and Jiaya Jia | Bounded-Distortion Metric Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metric learning aims to embed one metric space into another to benefit tasks
like classification and clustering. Although a greatly distorted metric space
has a high degree of freedom to fit training data, it is prone to overfitting
and numerical inaccuracy. This paper presents {\it bounded-distortion metric
learning} (BDML), a new metric learning framework which amounts to finding an
optimal Mahalanobis metric space with a bounded-distortion constraint. An
efficient solver based on the multiplicative weights update method is proposed.
Moreover, we generalize BDML to pseudo-metric learning and devise the
semidefinite relaxation and a randomized algorithm to approximately solve it.
We further provide theoretical analysis to show that distortion is a key
ingredient for stability and generalization ability of our BDML algorithm.
Extensive experiments on several benchmark datasets yield promising results.
| [
{
"version": "v1",
"created": "Sun, 10 May 2015 13:27:36 GMT"
}
] | 2015-05-12T00:00:00 | [
[
"Liao",
"Renjie",
""
],
[
"Shi",
"Jianping",
""
],
[
"Ma",
"Ziyang",
""
],
[
"Zhu",
"Jun",
""
],
[
"Jia",
"Jiaya",
""
]
] | TITLE: Bounded-Distortion Metric Learning
ABSTRACT: Metric learning aims to embed one metric space into another to benefit tasks
like classification and clustering. Although a greatly distorted metric space
has a high degree of freedom to fit training data, it is prone to overfitting
and numerical inaccuracy. This paper presents {\it bounded-distortion metric
learning} (BDML), a new metric learning framework which amounts to finding an
optimal Mahalanobis metric space with a bounded-distortion constraint. An
efficient solver based on the multiplicative weights update method is proposed.
Moreover, we generalize BDML to pseudo-metric learning and devise the
semidefinite relaxation and a randomized algorithm to approximately solve it.
We further provide theoretical analysis to show that distortion is a key
ingredient for stability and generalization ability of our BDML algorithm.
Extensive experiments on several benchmark datasets yield promising results.
| no_new_dataset | 0.947235 |
1505.02496 | Liwei Wang | Liwei Wang, Chen-Yu Lee, Zhuowen Tu, Svetlana Lazebnik | Training Deeper Convolutional Networks with Deep Supervision | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most promising ways of improving the performance of deep
convolutional neural networks is by increasing the number of convolutional
layers. However, adding layers makes training more difficult and
computationally expensive. In order to train deeper networks, we propose to add
auxiliary supervision branches after certain intermediate layers during
training. We formulate a simple rule of thumb to determine where these branches
should be added. The resulting deeply supervised structure makes the training
much easier and also produces better classification results on ImageNet and the
recently released, larger MIT Places dataset.
| [
{
"version": "v1",
"created": "Mon, 11 May 2015 06:26:46 GMT"
}
] | 2015-05-12T00:00:00 | [
[
"Wang",
"Liwei",
""
],
[
"Lee",
"Chen-Yu",
""
],
[
"Tu",
"Zhuowen",
""
],
[
"Lazebnik",
"Svetlana",
""
]
] | TITLE: Training Deeper Convolutional Networks with Deep Supervision
ABSTRACT: One of the most promising ways of improving the performance of deep
convolutional neural networks is by increasing the number of convolutional
layers. However, adding layers makes training more difficult and
computationally expensive. In order to train deeper networks, we propose to add
auxiliary supervision branches after certain intermediate layers during
training. We formulate a simple rule of thumb to determine where these branches
should be added. The resulting deeply supervised structure makes the training
much easier and also produces better classification results on ImageNet and the
recently released, larger MIT Places dataset.
| no_new_dataset | 0.955775 |
1505.02505 | Lihua Guo | Guo Lihua and Guo Chenggan | A Two-Layer Local Constrained Sparse Coding Method for Fine-Grained
Visual Categorization | 19 pages, 12 figures, 8 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | Fine-grained categories are more difficult to distinguish than generic
categories due to inter-class similarity and intra-class diversity.
Therefore, fine-grained visual categorization (FGVC) has recently been
considered one of the challenging problems in computer vision. A new
feature learning framework, which is based on a two-layer local constrained
sparse coding architecture, is proposed in this paper. The two-layer
architecture is introduced for learning intermediate-level features, and the
local constrained term is applied to guarantee the local smoothness of coding
coefficients. For extracting more discriminative information, local orientation
histograms are the input of sparse coding instead of raw pixels. Moreover, a
quick dictionary updating process is derived to further improve the training
speed. Two experimental results show that our method achieves 85.29% accuracy
on the Oxford 102 flowers dataset and 67.8% accuracy on the CUB-200-2011 bird
dataset, and the performance of our framework is highly competitive with
the existing literature.
| [
{
"version": "v1",
"created": "Mon, 11 May 2015 07:34:35 GMT"
}
] | 2015-05-12T00:00:00 | [
[
"Lihua",
"Guo",
""
],
[
"Chenggan",
"Guo",
""
]
] | TITLE: A Two-Layer Local Constrained Sparse Coding Method for Fine-Grained
Visual Categorization
ABSTRACT: Fine-grained categories are more difficult to distinguish than generic
categories due to inter-class similarity and intra-class diversity.
Therefore, fine-grained visual categorization (FGVC) has recently been
considered one of the challenging problems in computer vision. A new
feature learning framework, which is based on a two-layer local constrained
sparse coding architecture, is proposed in this paper. The two-layer
architecture is introduced for learning intermediate-level features, and the
local constrained term is applied to guarantee the local smoothness of coding
coefficients. For extracting more discriminative information, local orientation
histograms are the input of sparse coding instead of raw pixels. Moreover, a
quick dictionary updating process is derived to further improve the training
speed. Two experimental results show that our method achieves 85.29% accuracy
on the Oxford 102 flowers dataset and 67.8% accuracy on the CUB-200-2011 bird
dataset, and the performance of our framework is highly competitive with
the existing literature.
| no_new_dataset | 0.95222 |
1505.02729 | Nakul Verma | Nakul Verma and Kristin Branson | Sample complexity of learning Mahalanobis distance metrics | 26 pages, 1 figure | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metric learning seeks a transformation of the feature space that enhances
prediction quality for the given task at hand. In this work we provide
PAC-style sample complexity rates for supervised metric learning. We give
matching lower- and upper-bounds showing that the sample complexity scales with
the representation dimension when no assumptions are made about the underlying
data distribution. However, by leveraging the structure of the data
distribution, we show that one can achieve rates that are fine-tuned to a
specific notion of intrinsic complexity for a given dataset. Our analysis
reveals that augmenting the metric learning optimization criterion with a
simple norm-based regularization can help adapt to a dataset's intrinsic
complexity, yielding better generalization. Experiments on benchmark datasets
validate our analysis and show that regularizing the metric can help discern
the signal even when the data contains high amounts of noise.
| [
{
"version": "v1",
"created": "Mon, 11 May 2015 18:55:42 GMT"
}
] | 2015-05-12T00:00:00 | [
[
"Verma",
"Nakul",
""
],
[
"Branson",
"Kristin",
""
]
] | TITLE: Sample complexity of learning Mahalanobis distance metrics
ABSTRACT: Metric learning seeks a transformation of the feature space that enhances
prediction quality for the given task at hand. In this work we provide
PAC-style sample complexity rates for supervised metric learning. We give
matching lower- and upper-bounds showing that the sample complexity scales with
the representation dimension when no assumptions are made about the underlying
data distribution. However, by leveraging the structure of the data
distribution, we show that one can achieve rates that are fine-tuned to a
specific notion of intrinsic complexity for a given dataset. Our analysis
reveals that augmenting the metric learning optimization criterion with a
simple norm-based regularization can help adapt to a dataset's intrinsic
complexity, yielding better generalization. Experiments on benchmark datasets
validate our analysis and show that regularizing the metric can help discern
the signal even when the data contains high amounts of noise.
| no_new_dataset | 0.946399 |
1503.04144 | Shengxin Zha | Shengxin Zha, Florian Luisier, Walter Andrews, Nitish Srivastava,
Ruslan Salakhutdinov | Exploiting Image-trained CNN Architectures for Unconstrained Video
Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We conduct an in-depth exploration of different strategies for doing event
detection in videos using convolutional neural networks (CNNs) trained for
image classification. We study different ways of performing spatial and
temporal pooling, feature normalization, choice of CNN layers as well as choice
of classifiers. Making judicious choices along these dimensions led to a very
significant increase in performance over more naive approaches that have been
used till now. We evaluate our approach on the challenging TRECVID MED'14
dataset with two popular CNN architectures pretrained on ImageNet. On this
MED'14 dataset, our methods, based entirely on image-trained CNN features, can
outperform several state-of-the-art non-CNN models. Our proposed late fusion of
CNN- and motion-based features can further increase the mean average precision
(mAP) on MED'14 from 34.95% to 38.74%. The fusion approach achieves the
state-of-the-art classification performance on the challenging UCF-101 dataset.
| [
{
"version": "v1",
"created": "Fri, 13 Mar 2015 17:00:53 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Mar 2015 00:53:49 GMT"
},
{
"version": "v3",
"created": "Fri, 8 May 2015 01:54:08 GMT"
}
] | 2015-05-11T00:00:00 | [
[
"Zha",
"Shengxin",
""
],
[
"Luisier",
"Florian",
""
],
[
"Andrews",
"Walter",
""
],
[
"Srivastava",
"Nitish",
""
],
[
"Salakhutdinov",
"Ruslan",
""
]
] | TITLE: Exploiting Image-trained CNN Architectures for Unconstrained Video
Classification
ABSTRACT: We conduct an in-depth exploration of different strategies for doing event
detection in videos using convolutional neural networks (CNNs) trained for
image classification. We study different ways of performing spatial and
temporal pooling, feature normalization, choice of CNN layers as well as choice
of classifiers. Making judicious choices along these dimensions led to a very
significant increase in performance over more naive approaches that have been
used till now. We evaluate our approach on the challenging TRECVID MED'14
dataset with two popular CNN architectures pretrained on ImageNet. On this
MED'14 dataset, our methods, based entirely on image-trained CNN features, can
outperform several state-of-the-art non-CNN models. Our proposed late fusion of
CNN- and motion-based features can further increase the mean average precision
(mAP) on MED'14 from 34.95% to 38.74%. The fusion approach achieves the
state-of-the-art classification performance on the challenging UCF-101 dataset.
| no_new_dataset | 0.949623 |
1505.01866 | K. V. Rashmi | K. V. Rashmi and Ran Gilad-Bachrach | DART: Dropouts meet Multiple Additive Regression Trees | AIStats 2015 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple Additive Regression Trees (MART), an ensemble model of boosted
regression trees, is known to deliver high prediction accuracy for diverse
tasks, and it is widely used in practice. However, it suffers from an issue which we
call over-specialization, wherein trees added at later iterations tend to
impact the prediction of only a few instances, and make negligible contribution
towards the remaining instances. This negatively affects the performance of the
model on unseen data, and also makes the model over-sensitive to the
contributions of the few, initially added trees. We show that the commonly used
tool to address this issue, that of shrinkage, alleviates the problem only to a
certain extent and the fundamental issue of over-specialization still remains.
In this work, we explore a different approach to address the problem that of
employing dropouts, a tool that has been recently proposed in the context of
learning deep neural networks. We propose a novel way of employing dropouts in
MART, resulting in the DART algorithm. We evaluate DART on ranking, regression
and classification tasks, using large scale, publicly available datasets, and
show that DART outperforms MART in each of the tasks, with a significant
margin. We also show that DART overcomes the issue of over-specialization to a
considerable extent.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 20:38:48 GMT"
}
] | 2015-05-11T00:00:00 | [
[
"Rashmi",
"K. V.",
""
],
[
"Gilad-Bachrach",
"Ran",
""
]
] | TITLE: DART: Dropouts meet Multiple Additive Regression Trees
ABSTRACT: Multiple Additive Regression Trees (MART), an ensemble model of boosted
regression trees, is known to deliver high prediction accuracy for diverse
tasks, and it is widely used in practice. However, it suffers from an issue which we
call over-specialization, wherein trees added at later iterations tend to
impact the prediction of only a few instances, and make negligible contribution
towards the remaining instances. This negatively affects the performance of the
model on unseen data, and also makes the model over-sensitive to the
contributions of the few, initially added trees. We show that the commonly used
tool to address this issue, that of shrinkage, alleviates the problem only to a
certain extent and the fundamental issue of over-specialization still remains.
In this work, we explore a different approach to address the problem that of
employing dropouts, a tool that has been recently proposed in the context of
learning deep neural networks. We propose a novel way of employing dropouts in
MART, resulting in the DART algorithm. We evaluate DART on ranking, regression
and classification tasks, using large scale, publicly available datasets, and
show that DART outperforms MART in each of the tasks, with a significant
margin. We also show that DART overcomes the issue of over-specialization to a
considerable extent.
| no_new_dataset | 0.949763 |
1505.02000 | Matthew Lai | Matthew Lai | Deep Learning for Medical Image Segmentation | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report provides an overview of the current state of the art deep
learning architectures and optimisation techniques, and uses the ADNI
hippocampus MRI dataset as an example to compare the effectiveness and
efficiency of different convolutional architectures on the task of patch-based
3-dimensional hippocampal segmentation, which is important in the diagnosis of
Alzheimer's Disease. We found that a slightly unconventional "stacked 2D"
approach provides much better classification performance than simple 2D patches
without requiring significantly more computational power. We also examined the
popular "tri-planar" approach used in some recently published studies, and
found that it provides much better results than the 2D approaches, but also
with a moderate increase in computational power requirement. Finally, we
evaluated a full 3D convolutional architecture, and found that it provides
marginally better results than the tri-planar approach, but at the cost of a
very significant increase in computational power requirement.
| [
{
"version": "v1",
"created": "Fri, 8 May 2015 11:35:53 GMT"
}
] | 2015-05-11T00:00:00 | [
[
"Lai",
"Matthew",
""
]
] | TITLE: Deep Learning for Medical Image Segmentation
ABSTRACT: This report provides an overview of the current state of the art deep
learning architectures and optimisation techniques, and uses the ADNI
hippocampus MRI dataset as an example to compare the effectiveness and
efficiency of different convolutional architectures on the task of patch-based
3-dimensional hippocampal segmentation, which is important in the diagnosis of
Alzheimer's Disease. We found that a slightly unconventional "stacked 2D"
approach provides much better classification performance than simple 2D patches
without requiring significantly more computational power. We also examined the
popular "tri-planar" approach used in some recently published studies, and
found that it provides much better results than the 2D approaches, but also
with a moderate increase in computational power requirement. Finally, we
evaluated a full 3D convolutional architecture, and found that it provides
marginally better results than the tri-planar approach, but at the cost of a
very significant increase in computational power requirement.
| no_new_dataset | 0.953405 |
1505.02056 | Junchen Jiang | Junchen Jiang and Vyas Sekar and Yi Sun | DDA: Cross-Session Throughput Prediction with Applications to Video
Bitrate Selection | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | User experience of video streaming could be greatly improved by selecting a
high-yet-sustainable initial video bitrate, and it is therefore critical to
accurately predict throughput before a video session starts. Inspired by
previous studies that show similarity among throughput of similar sessions
(e.g., those sharing same bottleneck link), we argue for a cross-session
prediction approach, where throughput measured on other sessions is used to
predict the throughput of a new session. In this paper, we study the challenges
of cross-session throughput prediction, develop an accurate throughput
predictor called DDA, and evaluate the performance of the predictor with
real-world datasets. We show that DDA can predict throughput more accurately
than simple predictors and conventional machine learning algorithms; e.g.,
DDA's 80%ile prediction error is > 50% lower than that of other algorithms. We
also show that this improved accuracy enables video players to select a higher
sustainable initial bitrate; e.g., compared to initial bitrate without
prediction, DDA leads to 4x higher average bitrate.
| [
{
"version": "v1",
"created": "Fri, 8 May 2015 14:51:12 GMT"
}
] | 2015-05-11T00:00:00 | [
[
"Jiang",
"Junchen",
""
],
[
"Sekar",
"Vyas",
""
],
[
"Sun",
"Yi",
""
]
] | TITLE: DDA: Cross-Session Throughput Prediction with Applications to Video
Bitrate Selection
ABSTRACT: User experience of video streaming could be greatly improved by selecting a
high-yet-sustainable initial video bitrate, and it is therefore critical to
accurately predict throughput before a video session starts. Inspired by
previous studies that show similarity among throughput of similar sessions
(e.g., those sharing same bottleneck link), we argue for a cross-session
prediction approach, where throughput measured on other sessions is used to
predict the throughput of a new session. In this paper, we study the challenges
of cross-session throughput prediction, develop an accurate throughput
predictor called DDA, and evaluate the performance of the predictor with
real-world datasets. We show that DDA can predict throughput more accurately
than simple predictors and conventional machine learning algorithms; e.g.,
DDA's 80%ile prediction error is > 50% lower than that of other algorithms. We
also show that this improved accuracy enables video players to select a higher
sustainable initial bitrate; e.g., compared to initial bitrate without
prediction, DDA leads to 4x higher average bitrate.
| no_new_dataset | 0.948822 |
1411.6069 | Abhishek Kar | Abhishek Kar, Shubham Tulsiani, Jo\~ao Carreira, Jitendra Malik | Category-Specific Object Reconstruction from a Single Image | First two authors contributed equally. To appear at CVPR 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object reconstruction from a single image -- in the wild -- is a problem
where we can make progress and get meaningful results today. This is the main
message of this paper, which introduces an automated pipeline with pixels as
inputs and 3D surfaces of various rigid categories as outputs in images of
realistic scenes. At the core of our approach are deformable 3D models that can
be learned from 2D annotations available in existing object detection datasets,
that can be driven by noisy automatic object segmentations and which we
complement with a bottom-up module for recovering high-frequency shape details.
We perform a comprehensive quantitative analysis and ablation study of our
approach using the recently introduced PASCAL 3D+ dataset and show very
encouraging automatic reconstructions on PASCAL VOC.
| [
{
"version": "v1",
"created": "Sat, 22 Nov 2014 03:15:29 GMT"
},
{
"version": "v2",
"created": "Wed, 6 May 2015 21:42:41 GMT"
}
] | 2015-05-08T00:00:00 | [
[
"Kar",
"Abhishek",
""
],
[
"Tulsiani",
"Shubham",
""
],
[
"Carreira",
"João",
""
],
[
"Malik",
"Jitendra",
""
]
] | TITLE: Category-Specific Object Reconstruction from a Single Image
ABSTRACT: Object reconstruction from a single image -- in the wild -- is a problem
where we can make progress and get meaningful results today. This is the main
message of this paper, which introduces an automated pipeline with pixels as
inputs and 3D surfaces of various rigid categories as outputs in images of
realistic scenes. At the core of our approach are deformable 3D models that can
be learned from 2D annotations available in existing object detection datasets,
that can be driven by noisy automatic object segmentations and which we
complement with a bottom-up module for recovering high-frequency shape details.
We perform a comprehensive quantitative analysis and ablation study of our
approach using the recently introduced PASCAL 3D+ dataset and show very
encouraging automatic reconstructions on PASCAL VOC.
| no_new_dataset | 0.813609 |
1504.06378 | James Supancic III | James Steven Supancic III, Gregory Rogez, Yi Yang, Jamie Shotton, Deva
Ramanan | Depth-based hand pose estimation: methods, data, and challenges | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hand pose estimation has matured rapidly in recent years. The introduction of
commodity depth sensors and a multitude of practical applications have spurred
new advances. We provide an extensive analysis of the state-of-the-art,
focusing on hand pose estimation from a single depth frame. To do so, we have
implemented a considerable number of systems, and will release all software and
evaluation code. We summarize important conclusions here: (1) Pose estimation
appears roughly solved for scenes with isolated hands. However, methods still
struggle to analyze cluttered scenes where hands may be interacting with nearby
objects and surfaces. To spur further progress we introduce a challenging new
dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves
with disparate criteria, making comparisons difficult. We define a consistent
evaluation criterion, rigorously motivated by human experiments. (3) We
introduce a simple nearest-neighbor baseline that outperforms most existing
systems. This implies that most systems do not generalize beyond their training
sets. This also reinforces the under-appreciated point that training data is as
important as the model itself. We conclude with directions for future progress.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 02:37:37 GMT"
},
{
"version": "v2",
"created": "Wed, 6 May 2015 20:31:57 GMT"
}
] | 2015-05-08T00:00:00 | [
[
"Supancic",
"James Steven",
"III"
],
[
"Rogez",
"Gregory",
""
],
[
"Yang",
"Yi",
""
],
[
"Shotton",
"Jamie",
""
],
[
"Ramanan",
"Deva",
""
]
] | TITLE: Depth-based hand pose estimation: methods, data, and challenges
ABSTRACT: Hand pose estimation has matured rapidly in recent years. The introduction of
commodity depth sensors and a multitude of practical applications have spurred
new advances. We provide an extensive analysis of the state-of-the-art,
focusing on hand pose estimation from a single depth frame. To do so, we have
implemented a considerable number of systems, and will release all software and
evaluation code. We summarize important conclusions here: (1) Pose estimation
appears roughly solved for scenes with isolated hands. However, methods still
struggle to analyze cluttered scenes where hands may be interacting with nearby
objects and surfaces. To spur further progress we introduce a challenging new
dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves
with disparate criteria, making comparisons difficult. We define a consistent
evaluation criterion, rigorously motivated by human experiments. (3) We
introduce a simple nearest-neighbor baseline that outperforms most existing
systems. This implies that most systems do not generalize beyond their training
sets. This also reinforces the under-appreciated point that training data is as
important as the model itself. We conclude with directions for future progress.
| new_dataset | 0.961534 |
1505.01547 | Gordon J Ross | Gordon J Ross and Tim Jones | Understanding the Heavy Tailed Dynamics in Human Behavior | 9 pages in Physical Review E, 2015 | null | null | null | physics.soc-ph cs.SI stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent availability of electronic datasets containing large volumes of
communication data has made it possible to study human behavior on a larger
scale than ever before. From this, it has been discovered that across a diverse
range of data sets, the inter-event times between consecutive communication
events obey heavy tailed power law dynamics. Explaining this has proved
controversial, and two distinct hypotheses have emerged. The first holds that
these power laws are fundamental, and arise from the mechanisms such as
priority queuing that humans use to schedule tasks. The second holds that they
are a statistical artifact which only occurs in aggregated data when features
such as circadian rhythms and burstiness are ignored. We use a large social
media data set to test these hypotheses, and find that although models that
incorporate circadian rhythms and burstiness do explain part of the observed
heavy tails, there is residual unexplained heavy tail behavior which suggests a
more fundamental cause. Based on this, we develop a new quantitative model of
human behavior which improves on existing approaches, and gives insight into
the mechanisms underlying human interactions.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 00:12:24 GMT"
}
] | 2015-05-08T00:00:00 | [
[
"Ross",
"Gordon J",
""
],
[
"Jones",
"Tim",
""
]
] | TITLE: Understanding the Heavy Tailed Dynamics in Human Behavior
ABSTRACT: The recent availability of electronic datasets containing large volumes of
communication data has made it possible to study human behavior on a larger
scale than ever before. From this, it has been discovered that across a diverse
range of data sets, the inter-event times between consecutive communication
events obey heavy tailed power law dynamics. Explaining this has proved
controversial, and two distinct hypotheses have emerged. The first holds that
these power laws are fundamental, and arise from the mechanisms such as
priority queuing that humans use to schedule tasks. The second holds that they
are a statistical artifact which only occurs in aggregated data when features
such as circadian rhythms and burstiness are ignored. We use a large social
media data set to test these hypotheses, and find that although models that
incorporate circadian rhythms and burstiness do explain part of the observed
heavy tails, there is residual unexplained heavy tail behavior which suggests a
more fundamental cause. Based on this, we develop a new quantitative model of
human behavior which improves on existing approaches, and gives insight into
the mechanisms underlying human interactions.
| no_new_dataset | 0.949809 |
1505.01560 | Tam Nguyen | Tam V. Nguyen, Canyi Lu, Jose Sepulveda, Shuicheng Yan | Adaptive Nonparametric Image Parsing | 11 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/3.0/ | In this paper, we present an adaptive nonparametric solution to the image
parsing task, namely annotating each image pixel with its corresponding
category label. For a given test image, first, a locality-aware retrieval set
is extracted from the training data based on super-pixel matching similarities,
which are augmented with feature extraction for better differentiation of local
super-pixels. Then, the category of each super-pixel is initialized by the
majority vote of the $k$-nearest-neighbor super-pixels in the retrieval set.
Instead of fixing $k$ as in traditional non-parametric approaches, here we
propose a novel adaptive nonparametric approach which determines the
sample-specific $k$ for each test image. In particular, $k$ is adaptively set to
be the number of the fewest nearest super-pixels which the images in the
retrieval set can use to get the best category prediction. Finally, the initial
super-pixel labels are further refined by contextual smoothing. Extensive
experiments on challenging datasets demonstrate the superiority of the new
solution over other state-of-the-art nonparametric solutions.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 02:28:32 GMT"
}
] | 2015-05-08T00:00:00 | [
[
"Nguyen",
"Tam V.",
""
],
[
"Lu",
"Canyi",
""
],
[
"Sepulveda",
"Jose",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Adaptive Nonparametric Image Parsing
ABSTRACT: In this paper, we present an adaptive nonparametric solution to the image
parsing task, namely annotating each image pixel with its corresponding
category label. For a given test image, first, a locality-aware retrieval set
is extracted from the training data based on super-pixel matching similarities,
which are augmented with feature extraction for better differentiation of local
super-pixels. Then, the category of each super-pixel is initialized by the
majority vote of the $k$-nearest-neighbor super-pixels in the retrieval set.
Instead of fixing $k$ as in traditional non-parametric approaches, here we
propose a novel adaptive nonparametric approach which determines the
sample-specific $k$ for each test image. In particular, $k$ is adaptively set to
be the number of the fewest nearest super-pixels which the images in the
retrieval set can use to get the best category prediction. Finally, the initial
super-pixel labels are further refined by contextual smoothing. Extensive
experiments on challenging datasets demonstrate the superiority of the new
solution over other state-of-the-art nonparametric solutions.
| no_new_dataset | 0.948632 |
1505.01802 | Nagarajan Natarajan | Nagarajan Natarajan, Oluwasanmi Koyejo, Pradeep Ravikumar, Inderjit S.
Dhillon | Optimal Decision-Theoretic Classification Using Non-Decomposable
Performance Metrics | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide a general theoretical analysis of expected out-of-sample utility,
also referred to as decision-theoretic classification, for non-decomposable
binary classification metrics such as F-measure and Jaccard coefficient. Our
key result is that the expected out-of-sample utility for many performance
metrics is provably optimized by a classifier which is equivalent to a signed
thresholding of the conditional probability of the positive class. Our analysis
bridges a gap in the literature on binary classification, revealed in light of
recent results for non-decomposable metrics in population utility maximization
style classification. Our results identify checkable properties of a
performance metric which are sufficient to guarantee a probability ranking
principle. We propose consistent estimators for optimal expected out-of-sample
classification. As a consequence of the probability ranking principle,
computational requirements can be reduced from exponential to cubic complexity
in the general case, and further reduced to quadratic complexity in special
cases. We provide empirical results on simulated and benchmark datasets
evaluating the performance of the proposed algorithms for decision-theoretic
classification and comparing them to baseline and state-of-the-art methods in
population utility maximization for non-decomposable metrics.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 18:19:24 GMT"
}
] | 2015-05-08T00:00:00 | [
[
"Natarajan",
"Nagarajan",
""
],
[
"Koyejo",
"Oluwasanmi",
""
],
[
"Ravikumar",
"Pradeep",
""
],
[
"Dhillon",
"Inderjit S.",
""
]
] | TITLE: Optimal Decision-Theoretic Classification Using Non-Decomposable
Performance Metrics
ABSTRACT: We provide a general theoretical analysis of expected out-of-sample utility,
also referred to as decision-theoretic classification, for non-decomposable
binary classification metrics such as F-measure and Jaccard coefficient. Our
key result is that the expected out-of-sample utility for many performance
metrics is provably optimized by a classifier which is equivalent to a signed
thresholding of the conditional probability of the positive class. Our analysis
bridges a gap in the literature on binary classification, revealed in light of
recent results for non-decomposable metrics in population utility maximization
style classification. Our results identify checkable properties of a
performance metric which are sufficient to guarantee a probability ranking
principle. We propose consistent estimators for optimal expected out-of-sample
classification. As a consequence of the probability ranking principle,
computational requirements can be reduced from exponential to cubic complexity
in the general case, and further reduced to quadratic complexity in special
cases. We provide empirical results on simulated and benchmark datasets
evaluating the performance of the proposed algorithms for decision-theoretic
classification and comparing them to baseline and state-of-the-art methods in
population utility maximization for non-decomposable metrics.
| no_new_dataset | 0.945951 |
1310.3567 | Adam Vaughan | Adam Vaughan and Stanislav V. Bohac | An Extreme Learning Machine Approach to Predicting Near Chaotic HCCI
Combustion Phasing in Real-Time | 11 pages, 7 figures, minor revision (added implementation details and
video link), submitted to Neural Networks | null | null | null | cs.LG cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fuel efficient Homogeneous Charge Compression Ignition (HCCI) engine
combustion timing predictions must contend with non-linear chemistry,
non-linear physics, period doubling bifurcation(s), turbulent mixing, model
parameters that can drift day-to-day, and air-fuel mixture state information
that cannot typically be resolved on a cycle-to-cycle basis, especially during
transients. In previous work, an abstract cycle-to-cycle mapping function
coupled with $\epsilon$-Support Vector Regression was shown to predict
experimentally observed cycle-to-cycle combustion timing over a wide range of
engine conditions, despite some of the aforementioned difficulties. The main
limitation of the previous approach was that a partially acausal, randomly
sampled training dataset was used to train proof of concept offline
predictions. The objective of this paper is to address this limitation by
proposing a new online adaptive Extreme Learning Machine (ELM) extension named
Weighted Ring-ELM. This extension enables fully causal combustion timing
predictions at randomly chosen engine set points, and is shown to achieve
results that are as good as or better than the previous offline method. The
broader objective of this approach is to enable a new class of real-time model
predictive control strategies for high variability HCCI and, ultimately, to
bring HCCI's low engine-out NOx and reduced CO2 emissions to production
engines.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2013 06:00:31 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Sep 2014 16:52:27 GMT"
},
{
"version": "v3",
"created": "Tue, 5 May 2015 20:23:49 GMT"
}
] | 2015-05-07T00:00:00 | [
[
"Vaughan",
"Adam",
""
],
[
"Bohac",
"Stanislav V.",
""
]
] | TITLE: An Extreme Learning Machine Approach to Predicting Near Chaotic HCCI
Combustion Phasing in Real-Time
ABSTRACT: Fuel efficient Homogeneous Charge Compression Ignition (HCCI) engine
combustion timing predictions must contend with non-linear chemistry,
non-linear physics, period doubling bifurcation(s), turbulent mixing, model
parameters that can drift day-to-day, and air-fuel mixture state information
that cannot typically be resolved on a cycle-to-cycle basis, especially during
transients. In previous work, an abstract cycle-to-cycle mapping function
coupled with $\epsilon$-Support Vector Regression was shown to predict
experimentally observed cycle-to-cycle combustion timing over a wide range of
engine conditions, despite some of the aforementioned difficulties. The main
limitation of the previous approach was that a partially acausal, randomly
sampled training dataset was used to train proof of concept offline
predictions. The objective of this paper is to address this limitation by
proposing a new online adaptive Extreme Learning Machine (ELM) extension named
Weighted Ring-ELM. This extension enables fully causal combustion timing
predictions at randomly chosen engine set points, and is shown to achieve
results that are as good as or better than the previous offline method. The
broader objective of this approach is to enable a new class of real-time model
predictive control strategies for high variability HCCI and, ultimately, to
bring HCCI's low engine-out NOx and reduced CO2 emissions to production
engines.
| no_new_dataset | 0.947088 |
1408.0369 | Jean Golay | Jean Golay and Mikhail Kanevski | A New Estimator of Intrinsic Dimension Based on the Multipoint Morisita
Index | null | null | null | null | physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The size of datasets has been increasing rapidly both in terms of number of
variables and number of events. As a result, the empty space phenomenon and the
curse of dimensionality complicate the extraction of useful information. But,
in general, data lie on non-linear manifolds of much lower dimension than that
of the spaces in which they are embedded. In many pattern recognition tasks,
learning these manifolds is a key issue and it requires the knowledge of their
true intrinsic dimension. This paper introduces a new estimator of intrinsic
dimension based on the multipoint Morisita index. It is applied to both
synthetic and real datasets of varying complexities and comparisons with other
existing estimators are carried out. The proposed estimator turns out to be
fairly robust to sample size and noise, unaffected by edge effects, able to
handle large datasets and computationally efficient.
| [
{
"version": "v1",
"created": "Sat, 2 Aug 2014 12:59:28 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Aug 2014 12:44:03 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Nov 2014 20:43:50 GMT"
},
{
"version": "v4",
"created": "Mon, 10 Nov 2014 14:51:22 GMT"
},
{
"version": "v5",
"created": "Mon, 1 Dec 2014 20:48:09 GMT"
},
{
"version": "v6",
"created": "Mon, 8 Dec 2014 16:19:48 GMT"
},
{
"version": "v7",
"created": "Wed, 6 May 2015 15:20:24 GMT"
}
] | 2015-05-07T00:00:00 | [
[
"Golay",
"Jean",
""
],
[
"Kanevski",
"Mikhail",
""
]
] | TITLE: A New Estimator of Intrinsic Dimension Based on the Multipoint Morisita
Index
ABSTRACT: The size of datasets has been increasing rapidly both in terms of number of
variables and number of events. As a result, the empty space phenomenon and the
curse of dimensionality complicate the extraction of useful information. But,
in general, data lie on non-linear manifolds of much lower dimension than that
of the spaces in which they are embedded. In many pattern recognition tasks,
learning these manifolds is a key issue and it requires the knowledge of their
true intrinsic dimension. This paper introduces a new estimator of intrinsic
dimension based on the multipoint Morisita index. It is applied to both
synthetic and real datasets of varying complexities and comparisons with other
existing estimators are carried out. The proposed estimator turns out to be
fairly robust to sample size and noise, unaffected by edge effects, able to
handle large datasets and computationally efficient.
| no_new_dataset | 0.948585 |
1412.6505 | Michael S. Ryoo | M. S. Ryoo, Brandon Rothrock, Larry Matthies | Pooled Motion Features for First-Person Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a new feature representation for first-person
videos. In first-person video understanding (e.g., activity recognition), it is
very important to capture both entire scene dynamics (i.e., egomotion) and
salient local motion observed in videos. We describe a representation framework
based on time series pooling, which is designed to abstract
short-term/long-term changes in feature descriptor elements. The idea is to
keep track of how descriptor values are changing over time and summarize them
to represent motion in the activity video. The framework is general, handling
any type of per-frame feature descriptor, including conventional motion
descriptors like histogram of optical flows (HOF) as well as appearance
descriptors from more recent convolutional neural networks (CNN). We
experimentally confirm that our approach clearly outperforms previous feature
representations including bag-of-visual-words and improved Fisher vector (IFV)
when using identical underlying feature descriptors. We also confirm that our
feature representation has superior performance to existing state-of-the-art
features like local spatio-temporal features and Improved Trajectory Features
(originally developed for 3rd-person videos) when handling first-person videos.
Multiple first-person activity datasets were tested under various settings to
confirm these findings.
| [
{
"version": "v1",
"created": "Fri, 19 Dec 2014 20:03:00 GMT"
},
{
"version": "v2",
"created": "Wed, 6 May 2015 19:16:08 GMT"
}
] | 2015-05-07T00:00:00 | [
[
"Ryoo",
"M. S.",
""
],
[
"Rothrock",
"Brandon",
""
],
[
"Matthies",
"Larry",
""
]
] | TITLE: Pooled Motion Features for First-Person Videos
ABSTRACT: In this paper, we present a new feature representation for first-person
videos. In first-person video understanding (e.g., activity recognition), it is
very important to capture both entire scene dynamics (i.e., egomotion) and
salient local motion observed in videos. We describe a representation framework
based on time series pooling, which is designed to abstract
short-term/long-term changes in feature descriptor elements. The idea is to
keep track of how descriptor values are changing over time and summarize them
to represent motion in the activity video. The framework is general, handling
any type of per-frame feature descriptor, including conventional motion
descriptors like histogram of optical flows (HOF) as well as appearance
descriptors from more recent convolutional neural networks (CNN). We
experimentally confirm that our approach clearly outperforms previous feature
representations including bag-of-visual-words and improved Fisher vector (IFV)
when using identical underlying feature descriptors. We also confirm that our
feature representation has superior performance to existing state-of-the-art
features like local spatio-temporal features and Improved Trajectory Features
(originally developed for 3rd-person videos) when handling first-person videos.
Multiple first-person activity datasets were tested under various settings to
confirm these findings.
| no_new_dataset | 0.949342 |
1505.01257 | Tatiana Tommasi | Tatiana Tommasi, Novi Patricia, Barbara Caputo, Tinne Tuytelaars | A Deeper Look at Dataset Bias | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The presence of a bias in each image data collection has recently attracted a
lot of attention in the computer vision community showing the limits in
generalization of any learning method trained on a specific dataset. At the
same time, with the rapid development of deep learning architectures, the
activation values of Convolutional Neural Networks (CNN) are emerging as
reliable and robust image descriptors. In this paper we propose to verify the
potential of the DeCAF features when facing the dataset bias problem. We
conduct a series of analyses looking at how existing datasets differ among each
other and verifying the performance of existing debiasing methods under
different representations. We learn important lessons on which part of the
dataset bias problem can be considered solved and which open questions still
need to be tackled.
| [
{
"version": "v1",
"created": "Wed, 6 May 2015 06:19:23 GMT"
}
] | 2015-05-07T00:00:00 | [
[
"Tommasi",
"Tatiana",
""
],
[
"Patricia",
"Novi",
""
],
[
"Caputo",
"Barbara",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] | TITLE: A Deeper Look at Dataset Bias
ABSTRACT: The presence of a bias in each image data collection has recently attracted a
lot of attention in the computer vision community showing the limits in
generalization of any learning method trained on a specific dataset. At the
same time, with the rapid development of deep learning architectures, the
activation values of Convolutional Neural Networks (CNN) are emerging as
reliable and robust image descriptors. In this paper we propose to verify the
potential of the DeCAF features when facing the dataset bias problem. We
conduct a series of analyses looking at how existing datasets differ among each
other and verifying the performance of existing debiasing methods under
different representations. We learn important lessons on which part of the
dataset bias problem can be considered solved and which open questions still
need to be tackled.
| no_new_dataset | 0.942665 |
1505.01350 | Ozgur Yilmaz | Ozgur Yilmaz | Classification of Occluded Objects using Fast Recurrent Processing | arXiv admin note: text overlap with arXiv:1409.8576 by other authors | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural networks are powerful tools for handling incomplete data
problems in computer vision, thanks to their significant generative
capabilities. However, the computational demand for these algorithms is too
high to work in real time, without specialized hardware or software solutions.
In this paper, we propose a framework for augmenting recurrent processing
capabilities into a feedforward network without sacrificing much from
computational efficiency. We assume a mixture model and generate samples of the
last hidden layer according to the class decisions of the output layer, modify
the hidden layer activity using the samples, and propagate to lower layers. For
the visual occlusion problem, the iterative procedure emulates a
feedforward-feedback loop, filling in the missing hidden layer activity with meaningful
representations. The proposed algorithm is tested on a widely used dataset, and
shown to achieve 2$\times$ improvement in classification accuracy for occluded
objects. When compared to Restricted Boltzmann Machines, our algorithm shows
superior performance for occluded object classification.
| [
{
"version": "v1",
"created": "Wed, 6 May 2015 12:58:58 GMT"
}
] | 2015-05-07T00:00:00 | [
[
"Yilmaz",
"Ozgur",
""
]
] | TITLE: Classification of Occluded Objects using Fast Recurrent Processing
ABSTRACT: Recurrent neural networks are powerful tools for handling incomplete data
problems in computer vision, thanks to their significant generative
capabilities. However, the computational demand for these algorithms is too
high to work in real time, without specialized hardware or software solutions.
In this paper, we propose a framework for augmenting recurrent processing
capabilities into a feedforward network without sacrificing much from
computational efficiency. We assume a mixture model and generate samples of the
last hidden layer according to the class decisions of the output layer, modify
the hidden layer activity using the samples, and propagate to lower layers. For
the visual occlusion problem, the iterative procedure emulates a
feedforward-feedback loop, filling in the missing hidden layer activity with meaningful
representations. The proposed algorithm is tested on a widely used dataset, and
shown to achieve 2$\times$ improvement in classification accuracy for occluded
objects. When compared to Restricted Boltzmann Machines, our algorithm shows
superior performance for occluded object classification.
| no_new_dataset | 0.949059 |
1301.3516 | Mateusz Malinowski | Mateusz Malinowski and Mario Fritz | Learnable Pooling Regions for Image Classification | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biologically inspired, from the early HMAX model to Spatial Pyramid Matching,
pooling has played an important role in visual recognition pipelines. Spatial
pooling, by grouping of local codes, equips these methods with a certain degree
of robustness to translation and deformation yet preserving important spatial
information. Despite the predominance of this approach in current recognition
systems, we have seen little progress in fully adapting the pooling strategy to
the task at hand. This paper proposes a model for learning a task-dependent
pooling scheme -- including previously proposed hand-crafted pooling schemes as
a particular instantiation. In our work, we investigate the role of different
regularization terms showing that the smooth regularization term is crucial to
achieve strong performance using the presented architecture. Finally, we
propose an efficient and parallel method to train the model. Our experiments
show improved performance over hand-crafted pooling schemes on the CIFAR-10 and
CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on
the latter.
| [
{
"version": "v1",
"created": "Tue, 15 Jan 2013 22:15:06 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Aug 2013 13:51:04 GMT"
},
{
"version": "v3",
"created": "Tue, 5 May 2015 18:12:46 GMT"
}
] | 2015-05-06T00:00:00 | [
[
"Malinowski",
"Mateusz",
""
],
[
"Fritz",
"Mario",
""
]
] | TITLE: Learnable Pooling Regions for Image Classification
ABSTRACT: Biologically inspired, from the early HMAX model to Spatial Pyramid Matching,
pooling has played an important role in visual recognition pipelines. Spatial
pooling, by grouping of local codes, equips these methods with a certain degree
of robustness to translation and deformation yet preserving important spatial
information. Despite the predominance of this approach in current recognition
systems, we have seen little progress in fully adapting the pooling strategy to
the task at hand. This paper proposes a model for learning a task-dependent
pooling scheme -- including previously proposed hand-crafted pooling schemes as
a particular instantiation. In our work, we investigate the role of different
regularization terms showing that the smooth regularization term is crucial to
achieve strong performance using the presented architecture. Finally, we
propose an efficient and parallel method to train the model. Our experiments
show improved performance over hand-crafted pooling schemes on the CIFAR-10 and
CIFAR-100 datasets -- in particular improving the state-of-the-art to 56.29% on
the latter.
| no_new_dataset | 0.943138 |
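The record above casts hand-crafted spatial pooling as one point in a family of learnable poolings. A minimal numpy sketch of that view, with the grid shapes and the quadratic smoothness penalty as illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def learnable_pool(codes, W):
    """codes: (H, Wc, D) local codes on a spatial grid; W: (H, Wc, P)
    nonnegative pooling weights, one map per output region. A hand-crafted
    spatial pyramid is the special case where each W[..., p] is the 0/1
    indicator of one pyramid cell."""
    H, Wc, D = codes.shape
    pooled = W.reshape(H * Wc, -1).T @ codes.reshape(H * Wc, D)  # (P, D)
    return pooled.ravel()  # concatenated image descriptor

def smoothness_penalty(W):
    """Quadratic penalty pulling neighboring cells toward shared weights;
    the abstract reports a smooth regularizer of this flavor as crucial."""
    return (np.diff(W, axis=0) ** 2).sum() + (np.diff(W, axis=1) ** 2).sum()
```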
1411.5190 | Mateusz Malinowski | Mateusz Malinowski and Mario Fritz | A Pooling Approach to Modelling Spatial Relations for Image Retrieval
and Annotation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the last two decades we have witnessed strong progress on modeling
visual object classes, scenes and attributes that have significantly
contributed to automated image understanding. On the other hand, surprisingly
little progress has been made on incorporating a spatial representation and
reasoning in the inference process. In this work, we propose a pooling
interpretation of spatial relations and show how it improves image retrieval
and annotation tasks involving spatial language. Due to the complexity of the
spatial language, we argue for a learning-based approach that acquires a
representation of spatial relations by learning parameters of the pooling
operator. We show improvements on previous work on two datasets and two
different tasks as well as provide additional insights on a new dataset with an
explicit focus on spatial relations.
| [
{
"version": "v1",
"created": "Wed, 19 Nov 2014 11:44:24 GMT"
},
{
"version": "v2",
"created": "Tue, 5 May 2015 17:55:23 GMT"
}
] | 2015-05-06T00:00:00 | [
[
"Malinowski",
"Mateusz",
""
],
[
"Fritz",
"Mario",
""
]
] | TITLE: A Pooling Approach to Modelling Spatial Relations for Image Retrieval
and Annotation
ABSTRACT: Over the last two decades we have witnessed strong progress on modeling
visual object classes, scenes and attributes that have significantly
contributed to automated image understanding. On the other hand, surprisingly
little progress has been made on incorporating a spatial representation and
reasoning in the inference process. In this work, we propose a pooling
interpretation of spatial relations and show how it improves image retrieval
and annotation tasks involving spatial language. Due to the complexity of the
spatial language, we argue for a learning-based approach that acquires a
representation of spatial relations by learning parameters of the pooling
operator. We show improvements on previous work on two datasets and two
different tasks as well as provide additional insights on a new dataset with an
explicit focus on spatial relations.
| new_dataset | 0.608798 |
1504.06451 | Marios Meimaris | Marios Meimaris, George Papastefanatos, Christos Pateritsas, Theodora
Galani and Yannis Stavrakas | A Framework for Managing Evolving Information Resources on the Data Web | arXiv admin note: text overlap with arXiv:1504.01891 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The web of data has brought forth the need to preserve and sustain evolving
information within linked datasets; however, a basic requirement of data
preservation is the maintenance of the datasets' structural characteristics as
well. As open data are often found using different and/or heterogeneous data
models and schemata from one source to another, there is a need to reconcile
these mismatches and provide common denominations of interpretation on a
multitude of levels, in order to be able to preserve and manage the evolution
of the generated resources. In this paper, we present a linked data approach
for the preservation and archiving of open heterogeneous datasets that evolve
through time, at both the structural and the semantic layer. We first propose a
set of requirements for modelling evolving linked datasets. We then proceed to
conceptualize a modelling framework for evolving entities and place these in
a 2x2 model space that consists of the semantic and the temporal dimensions.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 10:02:01 GMT"
},
{
"version": "v2",
"created": "Tue, 5 May 2015 14:43:54 GMT"
}
] | 2015-05-06T00:00:00 | [
[
"Meimaris",
"Marios",
""
],
[
"Papastefanatos",
"George",
""
],
[
"Pateritsas",
"Christos",
""
],
[
"Galani",
"Theodora",
""
],
[
"Stavrakas",
"Yannis",
""
]
] | TITLE: A Framework for Managing Evolving Information Resources on the Data Web
ABSTRACT: The web of data has brought forth the need to preserve and sustain evolving
information within linked datasets; however, a basic requirement of data
preservation is the maintenance of the datasets' structural characteristics as
well. As open data are often found using different and/or heterogeneous data
models and schemata from one source to another, there is a need to reconcile
these mismatches and provide common denominations of interpretation on a
multitude of levels, in order to be able to preserve and manage the evolution
of the generated resources. In this paper, we present a linked data approach
for the preservation and archiving of open heterogeneous datasets that evolve
through time, at both the structural and the semantic layer. We first propose a
set of requirements for modelling evolving linked datasets. We then proceed to
conceptualize a modelling framework for evolving entities and place these in
a 2x2 model space that consists of the semantic and the temporal dimensions.
| no_new_dataset | 0.943034 |
1505.00824 | Eva Dyer | Eva L. Dyer, Tom A. Goldstein, Raajen Patel, Konrad P. Kording, and
Richard G. Baraniuk | Self-Expressive Decompositions for Matrix Approximation and Clustering | 11 pages, 7 figures | null | null | null | cs.IT cs.CV cs.LG math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-aware methods for dimensionality reduction and matrix decomposition aim
to find low-dimensional structure in a collection of data. Classical approaches
discover such structure by learning a basis that can efficiently express the
collection. Recently, "self expression", the idea of using a small subset of
data vectors to represent the full collection, has been developed as an
alternative to learning. Here, we introduce a scalable method for computing
sparse SElf-Expressive Decompositions (SEED). SEED is a greedy method that
constructs a basis by sequentially selecting incoherent vectors from the
dataset. After forming a basis from a subset of vectors in the dataset, SEED
then computes a sparse representation of the dataset with respect to this
basis. We develop sufficient conditions under which SEED exactly represents low
rank matrices and vectors sampled from unions of independent subspaces. We
show how SEED can be used in applications ranging from matrix approximation and
denoising to clustering, and apply it to numerous real-world datasets. Our
results demonstrate that SEED is an attractive low-complexity alternative to
other sparse matrix factorization approaches such as sparse PCA and
self-expressive methods for clustering.
| [
{
"version": "v1",
"created": "Mon, 4 May 2015 21:56:54 GMT"
}
] | 2015-05-06T00:00:00 | [
[
"Dyer",
"Eva L.",
""
],
[
"Goldstein",
"Tom A.",
""
],
[
"Patel",
"Raajen",
""
],
[
"Kording",
"Konrad P.",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] | TITLE: Self-Expressive Decompositions for Matrix Approximation and Clustering
ABSTRACT: Data-aware methods for dimensionality reduction and matrix decomposition aim
to find low-dimensional structure in a collection of data. Classical approaches
discover such structure by learning a basis that can efficiently express the
collection. Recently, "self expression", the idea of using a small subset of
data vectors to represent the full collection, has been developed as an
alternative to learning. Here, we introduce a scalable method for computing
sparse SElf-Expressive Decompositions (SEED). SEED is a greedy method that
constructs a basis by sequentially selecting incoherent vectors from the
dataset. After forming a basis from a subset of vectors in the dataset, SEED
then computes a sparse representation of the dataset with respect to this
basis. We develop sufficient conditions under which SEED exactly represents low
rank matrices and vectors sampled from unions of independent subspaces. We
show how SEED can be used in applications ranging from matrix approximation and
denoising to clustering, and apply it to numerous real-world datasets. Our
results demonstrate that SEED is an attractive low-complexity alternative to
other sparse matrix factorization approaches such as sparse PCA and
self-expressive methods for clustering.
| no_new_dataset | 0.941061 |
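Per the record above, SEED builds its basis greedily from mutually incoherent data columns and then sparsely codes the whole matrix against that sub-basis. A minimal sketch of the two-stage idea; the coherence threshold and the OMP coder are assumptions, not the paper's exact procedure:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def seed_like(X, k, mu=0.8):
    """X: (d, n) data matrix with samples as columns. Greedily admit up to
    k columns that stay incoherent with those already chosen, then code X
    sparsely against them, so X is approximately B @ C."""
    Xn = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    chosen = [0]
    for j in range(1, X.shape[1]):
        if len(chosen) == k:
            break
        if np.max(np.abs(Xn[:, chosen].T @ Xn[:, j])) < mu:  # incoherent?
            chosen.append(j)
    B = X[:, chosen]  # self-expressive sub-basis drawn from the data itself
    C = orthogonal_mp(B, X, n_nonzero_coefs=min(5, len(chosen)))
    return B, C, chosen
```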
1505.00862 | Shuangyong Song | Shuangyong Song and Yao Meng | Classifying and Ranking Microblogging Hashtags with News Categories | 2 pages, no figure, to appear at RCIS 2015 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In microblogging, hashtags are used as topical markers, and they are
adopted by users that contribute similar content or express a related idea.
However, hashtags are created in a free-form style and there is no domain category
information about them, which makes it hard for users to access an organized
presentation of hashtags. In this paper, we propose an approach that classifies
hashtags with news categories and then carries out a domain-sensitive popularity
ranking to get hot hashtags in each domain. The proposed approach first trains
a domain classification model with news content and news category information,
then detects microblogs related to a hashtag to be its representative text,
based on which we can classify this hashtag with a domain. Finally, we
calculate the domain-sensitive popularity of each hashtag with multiple
factors, to get the most hotly discussed hashtags in each domain. Preliminary
experimental results on a dataset from Sina Weibo, one of the largest Chinese
microblogging websites, show the usefulness of the proposed approach in describing
hashtags.
| [
{
"version": "v1",
"created": "Tue, 5 May 2015 02:02:23 GMT"
}
] | 2015-05-06T00:00:00 | [
[
"Song",
"Shuangyong",
""
],
[
"Meng",
"Yao",
""
]
] | TITLE: Classifying and Ranking Microblogging Hashtags with News Categories
ABSTRACT: In microblogging, hashtags are used as topical markers, and they are
adopted by users that contribute similar content or express a related idea.
However, hashtags are created in a free-form style and there is no domain category
information about them, which makes it hard for users to access an organized
presentation of hashtags. In this paper, we propose an approach that classifies
hashtags with news categories and then carries out a domain-sensitive popularity
ranking to get hot hashtags in each domain. The proposed approach first trains
a domain classification model with news content and news category information,
then detects microblogs related to a hashtag to be its representative text,
based on which we can classify this hashtag with a domain. Finally, we
calculate the domain-sensitive popularity of each hashtag with multiple
factors, to get the most hotly discussed hashtags in each domain. Preliminary
experimental results on a dataset from Sina Weibo, one of the largest Chinese
microblogging websites, show the usefulness of the proposed approach in describing
hashtags.
| no_new_dataset | 0.95511 |
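The pipeline in the record above first learns a news-domain classifier and then labels each hashtag through the microblogs that use it. A toy sketch of that flow; the example texts, domain labels, and the choice to concatenate a hashtag's posts into one document are illustrative assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Step 1: train a domain classifier on labelled news articles (toy data).
news = ["team wins the championship final", "stocks rally as markets open",
        "striker scores twice in the derby", "central bank raises interest rates"]
domains = ["sports", "finance", "sports", "finance"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(news, domains)

# Step 2: classify each hashtag via its representative microblog text.
hashtag_posts = {"#DerbyDay": ["what a goal tonight", "great match by both teams"]}
for tag, posts in hashtag_posts.items():
    print(tag, "->", clf.predict([" ".join(posts)])[0])
```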
1505.00914 | Jose Cadenas | Jos\'e O. Cadenas, Graham Megson | An Empirical Evaluation of Preconditioning Data for Accelerating Convex
Hull Computations | 20 pages, 11 figures | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The convex hull describes the extent or shape of a set of data and is used
ubiquitously in computational geometry. Common algorithms to construct the
convex hull on a finite set of n points (x,y) range from O(nlogn) time to O(n)
time. However, it is often the case that a heuristic procedure is applied to
reduce the original set of n points to a set of s < n points which contains the
hull and so accelerates the final hull finding procedure. We present an
algorithm to precondition data before building a 2D convex hull with integer
coordinates, with three distinct advantages. First, for all practical purposes,
it is linear; second, no explicit sorting of data is required and third, the
reduced set of s points is constructed such that it forms an ordered set that
can be directly pipelined into an O(n) time convex hull algorithm. Under these
criteria a fast (or O(n)) pre-conditioner in principle creates a fast convex
hull (approximately O(n)) for an arbitrary set of points. The paper empirically
evaluates and quantifies the acceleration generated by the method against the
most common convex hull algorithms. An extra acceleration of at least four
times when compared to existing preconditioning methods is found from
experiments on a dataset.
| [
{
"version": "v1",
"created": "Tue, 5 May 2015 08:31:48 GMT"
}
] | 2015-05-06T00:00:00 | [
[
"Cadenas",
"José O.",
""
],
[
"Megson",
"Graham",
""
]
] | TITLE: An Empirical Evaluation of Preconditioning Data for Accelerating Convex
Hull Computations
ABSTRACT: The convex hull describes the extent or shape of a set of data and is used
ubiquitously in computational geometry. Common algorithms to construct the
convex hull on a finite set of n points (x,y) range from O(nlogn) time to O(n)
time. However, it is often the case that a heuristic procedure is applied to
reduce the original set of n points to a set of s < n points which contains the
hull and so accelerates the final hull finding procedure. We present an
algorithm to precondition data before building a 2D convex hull with integer
coordinates, with three distinct advantages. First, for all practical purposes,
it is linear; second, no explicit sorting of data is required and third, the
reduced set of s points is constructed such that it forms an ordered set that
can be directly pipelined into an O(n) time convex hull algorithm. Under these
criteria a fast (or O(n)) pre-conditioner in principle creates a fast convex
hull (approximately O(n)) for an arbitrary set of points. The paper empirically
evaluates and quantifies the acceleration generated by the method against the
most common convex hull algorithms. An extra acceleration of at least four
times when compared to existing preconditioning methods is found from
experiments on a dataset.
| no_new_dataset | 0.948965 |
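The preconditioner in the record above belongs to the family of throw-away filters that discard points provably interior to the hull before the hull algorithm runs. A minimal sketch of the classic quadrilateral (Akl-Toussaint) variant for orientation; the paper's own preconditioner additionally emits survivors in a pipeline-ready order, which this sketch does not attempt:

```python
def precondition(points):
    """Keep only points that may lie on the convex hull: discard anything
    strictly inside the quadrilateral spanned by the extreme points in
    x+y and x-y. The hull of the survivors equals the hull of the input."""
    a = min(points, key=lambda p: p[0] + p[1])  # bottom-left extreme
    b = max(points, key=lambda p: p[0] - p[1])  # bottom-right extreme
    c = max(points, key=lambda p: p[0] + p[1])  # top-right extreme
    d = min(points, key=lambda p: p[0] - p[1])  # top-left extreme
    quad = [a, b, c, d]  # counter-clockwise

    def strictly_inside(p):
        for (x1, y1), (x2, y2) in zip(quad, quad[1:] + quad[:1]):
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                return False  # on or right of an edge: not strictly inside
        return True

    return [p for p in points if not strictly_inside(p)]
```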
1204.2310 | Yue Wu | Yue Wu, Yicong Zhou, Joseph P. Noonan, Sos Agaian, and C. L. Philip
Chen | A Novel Latin Square Image Cipher | 26 pages, 17 figures, and 7 tables | Information Sciences 264 (2014): 317-339 | 10.1016/j.ins.2013.11.027 | null | cs.CR cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a symmetric-key Latin square image cipher (LSIC)
for grayscale and color images. Our contributions to the image encryption
community include 1) we develop new Latin square image encryption primitives
including Latin Square Whitening, Latin Square S-box and Latin Square P-box ;
2) we provide a new way of integrating probabilistic encryption in image
encryption by embedding random noise in the least significant image bit-plane;
and 3) we construct LSIC with these Latin square image encryption primitives
all on one keyed Latin square in a new loom-like substitution-permutation
network. Consequently, the proposed LSIC achieves many desired properties of a
secure cipher including a large key space, high key sensitivities, uniformly
distributed ciphertext, excellent confusion and diffusion properties,
semantic security, and robustness against channel noise. Theoretical analysis
shows that the LSIC has good resistance to many attack models including
brute-force attacks, ciphertext-only attacks, known-plaintext attacks and
chosen-plaintext attacks. Experimental analysis under extensive simulation
results using the complete USC-SIPI Miscellaneous image dataset demonstrate
that LSIC outperforms or matches the state of the art set by many peer
algorithms. All these analyses and results demonstrate that the LSIC is very
suitable for digital image encryption. Finally, we open source the LSIC MATLAB
code under webpage https://sites.google.com/site/tuftsyuewu/source-code.
| [
{
"version": "v1",
"created": "Wed, 11 Apr 2012 00:54:13 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Wu",
"Yue",
""
],
[
"Zhou",
"Yicong",
""
],
[
"Noonan",
"Joseph P.",
""
],
[
"Agaian",
"Sos",
""
],
[
"Chen",
"C. L. Philip",
""
]
] | TITLE: A Novel Latin Square Image Cipher
ABSTRACT: In this paper, we introduce a symmetric-key Latin square image cipher (LSIC)
for grayscale and color images. Our contributions to the image encryption
community include 1) we develop new Latin square image encryption primitives
including Latin Square Whitening, Latin Square S-box and Latin Square P-box ;
2) we provide a new way of integrating probabilistic encryption in image
encryption by embedding random noise in the least significant image bit-plane;
and 3) we construct LSIC with these Latin square image encryption primitives
all on one keyed Latin square in a new loom-like substitution-permutation
network. Consequently, the proposed LSIC achieves many desired properties of a
secure cipher including a large key space, high key sensitivities, uniformly
distributed ciphertext, excellent confusion and diffusion properties,
semantic security, and robustness against channel noise. Theoretical analysis
shows that the LSIC has good resistance to many attack models including
brute-force attacks, ciphertext-only attacks, known-plaintext attacks and
chosen-plaintext attacks. Experimental analysis under extensive simulation
results using the complete USC-SIPI Miscellaneous image dataset demonstrate
that LSIC outperforms or matches the state of the art set by many peer
algorithms. All these analyses and results demonstrate that the LSIC is very
suitable for digital image encryption. Finally, we open source the LSIC MATLAB
code under webpage https://sites.google.com/site/tuftsyuewu/source-code.
| new_dataset | 0.965446 |
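A Latin square of order n contains each of n symbols exactly once per row and per column. A minimal sketch of deriving one square from a key, in the spirit of the keyed construction above; the key-to-permutation mapping is an illustrative assumption and not the LSIC generation algorithm:

```python
import hashlib
import random

def keyed_latin_square(key: bytes, n: int = 8):
    """Build an order-n Latin square by key-seeded row, column, and symbol
    permutations of the cyclic square L[i][j] = (i + j) mod n; permuting
    rows, columns, or symbols preserves the Latin property."""
    rng = random.Random(hashlib.sha256(key).digest())
    rows, cols, syms = list(range(n)), list(range(n)), list(range(n))
    for perm in (rows, cols, syms):
        rng.shuffle(perm)
    return [[syms[(rows[i] + cols[j]) % n] for j in range(n)] for i in range(n)]

for row in keyed_latin_square(b"secret key", 4):
    print(row)
```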
1404.5065 | Eleftherios Spyromitros-Xioufis | Grigorios Tsoumakas, Eleftherios Spyromitros-Xioufis, Aikaterini
Vrekou, Ioannis Vlahavas | Multi-Target Regression via Random Linear Target Combinations | null | ECML PKDD Proceedings, Part III (2014) 225-240 | 10.1007/978-3-662-44845-8_15 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-target regression is concerned with the simultaneous prediction of
multiple continuous target variables based on the same set of input variables.
It arises in several interesting industrial and environmental application
domains, such as ecological modelling and energy forecasting. This paper
presents an ensemble method for multi-target regression that constructs new
target variables via random linear combinations of existing targets. We discuss
the connection of our approach with multi-label classification algorithms, in
particular RA$k$EL, which originally inspired this work, and a family of recent
multi-label classification algorithms that involve output coding. Experimental
results on 12 multi-target datasets show that it performs significantly better
than a strong baseline that learns a single model for each target using
gradient boosting and compares favourably to multi-objective random forest
approach, which is a state-of-the-art approach. The experiments further show
that our approach improves more when stronger unconditional dependencies exist
among the targets.
| [
{
"version": "v1",
"created": "Sun, 20 Apr 2014 19:17:23 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Tsoumakas",
"Grigorios",
""
],
[
"Spyromitros-Xioufis",
"Eleftherios",
""
],
[
"Vrekou",
"Aikaterini",
""
],
[
"Vlahavas",
"Ioannis",
""
]
] | TITLE: Multi-Target Regression via Random Linear Target Combinations
ABSTRACT: Multi-target regression is concerned with the simultaneous prediction of
multiple continuous target variables based on the same set of input variables.
It arises in several interesting industrial and environmental application
domains, such as ecological modelling and energy forecasting. This paper
presents an ensemble method for multi-target regression that constructs new
target variables via random linear combinations of existing targets. We discuss
the connection of our approach with multi-label classification algorithms, in
particular RA$k$EL, which originally inspired this work, and a family of recent
multi-label classification algorithms that involve output coding. Experimental
results on 12 multi-target datasets show that it performs significantly better
than a strong baseline that learns a single model for each target using
gradient boosting and compares favourably to multi-objective random forest
approach, which is a state-of-the-art approach. The experiments further show
that our approach improves more when stronger unconditional dependencies exist
among the targets.
| no_new_dataset | 0.942507 |
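The ensemble described above fits one single-target regressor per random linear combination of the original targets and recovers the targets at prediction time by inverting the combinations. A compact sketch of that construction; the number of combinations, the Gaussian weights, and the gradient-boosting base learner are illustrative choices:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_random_combinations(X, Y, m=20, seed=0):
    """X: (n, p) inputs, Y: (n, t) targets. Fit m regressors, one per
    random linear combination of the t targets."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(m, Y.shape[1]))  # combination matrix
    Z = Y @ A.T                            # combined targets, shape (n, m)
    models = [GradientBoostingRegressor().fit(X, Z[:, i]) for i in range(m)]
    return models, A

def predict_targets(models, A, X):
    """Predict the combined targets, then solve A @ y = z per sample by
    least squares to recover the original targets."""
    Zhat = np.column_stack([mdl.predict(X) for mdl in models])
    return np.linalg.lstsq(A, Zhat.T, rcond=None)[0].T  # shape (n, t)
```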
1411.2861 | Xiaodan Liang | Xiaodan Liang, Si Liu, Yunchao Wei, Luoqi Liu, Liang Lin, Shuicheng
Yan | Computational Baby Learning | 9 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intuitive observations show that a baby may inherently possess the capability
of recognizing a new visual concept (e.g., chair, dog) by learning from only
very few positive instances taught by parent(s) or others, and this recognition
capability can be gradually further improved by exploring and/or interacting
with the real instances in the physical world. Inspired by these observations,
we propose a computational model for slightly-supervised object detection,
based on prior knowledge modelling, exemplar learning and learning with video
contexts. The prior knowledge is modeled with a pre-trained Convolutional
Neural Network (CNN). When very few instances of a new concept are given, an
initial concept detector is built by exemplar learning over the deep features
from the pre-trained CNN. Simulating the baby's interaction with the physical
world, the well-designed tracking solution is then used to discover more
diverse instances from the massive online unlabeled videos. Once a positive
instance is detected/identified with high score in each video, more variable
instances possibly from different view-angles and/or different distances are
tracked and accumulated. Then the concept detector can be fine-tuned based on
these new instances. This process can be repeated again and again till we
obtain a very mature concept detector. Extensive experiments on Pascal
VOC-07/10/12 object detection datasets clearly demonstrate the effectiveness of
our framework. It can beat the state-of-the-art full-training based
performances by learning from very few samples for each object category, along
with about 20,000 unlabeled videos.
| [
{
"version": "v1",
"created": "Tue, 11 Nov 2014 16:00:59 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Nov 2014 13:59:59 GMT"
},
{
"version": "v3",
"created": "Mon, 4 May 2015 02:33:26 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Liang",
"Xiaodan",
""
],
[
"Liu",
"Si",
""
],
[
"Wei",
"Yunchao",
""
],
[
"Liu",
"Luoqi",
""
],
[
"Lin",
"Liang",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Computational Baby Learning
ABSTRACT: Intuitive observations show that a baby may inherently possess the capability
of recognizing a new visual concept (e.g., chair, dog) by learning from only
very few positive instances taught by parent(s) or others, and this recognition
capability can be gradually further improved by exploring and/or interacting
with the real instances in the physical world. Inspired by these observations,
we propose a computational model for slightly-supervised object detection,
based on prior knowledge modelling, exemplar learning and learning with video
contexts. The prior knowledge is modeled with a pre-trained Convolutional
Neural Network (CNN). When very few instances of a new concept are given, an
initial concept detector is built by exemplar learning over the deep features
from the pre-trained CNN. Simulating the baby's interaction with the physical
world, the well-designed tracking solution is then used to discover more
diverse instances from the massive online unlabeled videos. Once a positive
instance is detected/identified with high score in each video, more variable
instances possibly from different view-angles and/or different distances are
tracked and accumulated. Then the concept detector can be fine-tuned based on
these new instances. This process can be repeated again and again till we
obtain a very mature concept detector. Extensive experiments on Pascal
VOC-07/10/12 object detection datasets clearly demonstrate the effectiveness of
our framework. It can beat the state-of-the-art full-training based
performances by learning from very few samples for each object category, along
with about 20,000 unlabeled videos.
| no_new_dataset | 0.947478 |
1411.6718 | Mohamed Aly | Mahmoud Nabil, Mohamed Aly, Amir Atiya | LABR: A Large Scale Arabic Sentiment Analysis Benchmark | 10 pages | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce LABR, the largest sentiment analysis dataset to date for the
Arabic language. It consists of over 63,000 book reviews, each rated on a scale
of 1 to 5 stars. We investigate the properties of the dataset, and present its
statistics. We explore using the dataset for two tasks: (1) sentiment polarity
classification; and (2) ratings classification. Moreover, we provide standard
splits of the dataset into training, validation and testing, for both polarity
and ratings classification, in both balanced and unbalanced settings. We extend
our previous work by performing a comprehensive analysis on the dataset. In
particular, we perform an extended survey of the different classifiers
typically used for the sentiment polarity classification problem. We also
construct a sentiment lexicon from the dataset that contains both single and
compound sentiment words and we explore its effectiveness. We make the dataset
and experimental details publicly available.
| [
{
"version": "v1",
"created": "Tue, 25 Nov 2014 03:48:56 GMT"
},
{
"version": "v2",
"created": "Sun, 3 May 2015 08:35:59 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Nabil",
"Mahmoud",
""
],
[
"Aly",
"Mohamed",
""
],
[
"Atiya",
"Amir",
""
]
] | TITLE: LABR: A Large Scale Arabic Sentiment Analysis Benchmark
ABSTRACT: We introduce LABR, the largest sentiment analysis dataset to date for the
Arabic language. It consists of over 63,000 book reviews, each rated on a scale
of 1 to 5 stars. We investigate the properties of the dataset, and present its
statistics. We explore using the dataset for two tasks: (1) sentiment polarity
classification; and (2) ratings classification. Moreover, we provide standard
splits of the dataset into training, validation and testing, for both polarity
and ratings classification, in both balanced and unbalanced settings. We extend
our previous work by performing a comprehensive analysis on the dataset. In
particular, we perform an extended survey of the different classifiers
typically used for the sentiment polarity classification problem. We also
construct a sentiment lexicon from the dataset that contains both single and
compound sentiment words and we explore its effectiveness. We make the dataset
and experimental details publicly available.
| new_dataset | 0.9601 |
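A minimal polarity-classification sketch over a LABR-style file; the file name, column layout, and the rating-based polarity mapping are assumptions for illustration (the released benchmark ships standard train/validation/test splits, which a real experiment should use instead of an ad-hoc split):

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical layout: tab-separated (rating, review) pairs, ratings 1-5.
df = pd.read_csv("labr_reviews.tsv", sep="\t", names=["rating", "review"])
df = df[df.rating != 3]                       # drop neutral 3-star reviews
y = (df.rating > 3).astype(int)               # polarity: positive vs negative
Xtr, Xte, ytr, yte = train_test_split(df.review, y, test_size=0.2,
                                      random_state=0)

vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(Xtr), ytr)
print("accuracy:", accuracy_score(yte, clf.predict(vec.transform(Xte))))
```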
1501.06170 | Minsu Cho | Minsu Cho, Suha Kwak, Cordelia Schmid, Jean Ponce | Unsupervised Object Discovery and Localization in the Wild: Part-based
Matching with Bottom-up Region Proposals | CVPR 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses unsupervised discovery and localization of dominant
objects from a noisy image collection with multiple object classes. The setting
of this problem is fully unsupervised, without even image-level annotations or
any assumption of a single dominant class. This is far more general than
typical colocalization, cosegmentation, or weakly-supervised localization
tasks. We tackle the discovery and localization problem using a part-based
region matching approach: We use off-the-shelf region proposals to form a set
of candidate bounding boxes for objects and object parts. These regions are
efficiently matched across images using a probabilistic Hough transform that
evaluates the confidence for each candidate correspondence considering both
appearance and spatial consistency. Dominant objects are discovered and
localized by comparing the scores of candidate regions and selecting those that
stand out over other regions containing them. Extensive experimental
evaluations on standard benchmarks demonstrate that the proposed approach
significantly outperforms the current state of the art in colocalization, and
achieves robust object discovery in challenging mixed-class datasets.
| [
{
"version": "v1",
"created": "Sun, 25 Jan 2015 15:09:23 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Jan 2015 17:36:52 GMT"
},
{
"version": "v3",
"created": "Mon, 4 May 2015 16:18:58 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Cho",
"Minsu",
""
],
[
"Kwak",
"Suha",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Ponce",
"Jean",
""
]
] | TITLE: Unsupervised Object Discovery and Localization in the Wild: Part-based
Matching with Bottom-up Region Proposals
ABSTRACT: This paper addresses unsupervised discovery and localization of dominant
objects from a noisy image collection with multiple object classes. The setting
of this problem is fully unsupervised, without even image-level annotations or
any assumption of a single dominant class. This is far more general than
typical colocalization, cosegmentation, or weakly-supervised localization
tasks. We tackle the discovery and localization problem using a part-based
region matching approach: We use off-the-shelf region proposals to form a set
of candidate bounding boxes for objects and object parts. These regions are
efficiently matched across images using a probabilistic Hough transform that
evaluates the confidence for each candidate correspondence considering both
appearance and spatial consistency. Dominant objects are discovered and
localized by comparing the scores of candidate regions and selecting those that
stand out over other regions containing them. Extensive experimental
evaluations on standard benchmarks demonstrate that the proposed approach
significantly outperforms the current state of the art in colocalization, and
achieves robust object discovery in challenging mixed-class datasets.
| no_new_dataset | 0.951006 |
1504.01044 | Heng Wang | Heng Wang and Zubin Abraham | Concept Drift Detection for Streaming Data | 9 pages, accepted in the International Joint Conference of Neural
Networks 2015 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Common statistical prediction models often require and assume stationarity in
the data. However, in many practical applications, changes in the relationship
of the response and predictor variables are regularly observed over time,
resulting in the deterioration of the predictive performance of these models.
This paper presents Linear Four Rates (LFR), a framework for detecting these
concept drifts and subsequently identifying the data points that belong to the
new concept (for relearning the model). Unlike conventional concept drift
detection approaches, LFR can be applied to both batch and stream data; is not
limited by the distribution properties of the response variable (e.g., datasets
with imbalanced labels); is independent of the underlying statistical-model;
and uses user-specified parameters that are intuitively comprehensible. The
performance of LFR is compared to benchmark approaches using both simulated and
commonly used public datasets that span the gamut of concept drift types. The
results show LFR significantly outperforms benchmark approaches in terms of
recall, accuracy and delay in detection of concept drifts across datasets.
| [
{
"version": "v1",
"created": "Sat, 4 Apr 2015 19:55:35 GMT"
},
{
"version": "v2",
"created": "Sun, 3 May 2015 22:11:21 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Wang",
"Heng",
""
],
[
"Abraham",
"Zubin",
""
]
] | TITLE: Concept Drift Detection for Streaming Data
ABSTRACT: Common statistical prediction models often require and assume stationarity in
the data. However, in many practical applications, changes in the relationship
of the response and predictor variables are regularly observed over time,
resulting in the deterioration of the predictive performance of these models.
This paper presents Linear Four Rates (LFR), a framework for detecting these
concept drifts and subsequently identifying the data points that belong to the
new concept (for relearning the model). Unlike conventional concept drift
detection approaches, LFR can be applied to both batch and stream data; is not
limited by the distribution properties of the response variable (e.g., datasets
with imbalanced labels); is independent of the underlying statistical-model;
and uses user-specified parameters that are intuitively comprehensible. The
performance of LFR is compared to benchmark approaches using both simulated and
commonly used public datasets that span the gamut of concept drift types. The
results show LFR significantly outperforms benchmark approaches in terms of
recall, accuracy and delay in detection of concept drifts across datasets.
| no_new_dataset | 0.953319 |
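Linear Four Rates, as summarized above, monitors running estimates of a classifier's four confusion-matrix rates and signals drift when they shift significantly. A simplified sketch tracking exponentially decayed rates against a frozen baseline; the fixed band stands in for the method's per-rate statistical bounds, and the warm-up length is an assumption:

```python
def lfr_monitor(stream, eta=0.05, band=0.15, warmup=200):
    """stream yields (y_true, y_pred) pairs from a binary classifier.
    Tracks decayed TPR, TNR, PPV, NPV and yields (t, rates) whenever any
    rate drifts more than `band` from its post-warm-up baseline."""
    rates = {"tpr": 0.5, "tnr": 0.5, "ppv": 0.5, "npv": 0.5}
    baseline = dict(rates)
    for t, (y, yhat) in enumerate(stream):
        updates = {"tpr": (y == 1, yhat == 1), "tnr": (y == 0, yhat == 0),
                   "ppv": (yhat == 1, y == 1), "npv": (yhat == 0, y == 0)}
        for name, (relevant, correct) in updates.items():
            if relevant:  # update a rate only when its denominator grows
                rates[name] = (1 - eta) * rates[name] + eta * float(correct)
        if t == warmup:
            baseline = dict(rates)  # freeze the reference concept
        elif t > warmup and any(abs(rates[k] - baseline[k]) > band
                                for k in rates):
            yield t, dict(rates)
```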
1504.08168 | Jan \v{Z}egklitz | Jan \v{Z}egklitz and Petr Po\v{s}\'ik | Model Selection and Overfitting in Genetic Programming: Empirical Study
[Extended Version] | 8 pages, 12 figures, full paper for GECCO 2015 (accepted as poster,
this is the original paper submitted to the conference); added subtitle and
removed copyright text at the first page, fixed some typography | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic Programming has been very successful in solving a large area of
problems but its use as a machine learning algorithm has been limited so far.
One of the reasons is the problem of overfitting which cannot be solved or
suppresed as easily as in more traditional approaches. Another problem, closely
related to overfitting, is the selection of the final model from the
population.
In this article we present our research that addresses both problems:
overfitting and model selection. We compare several ways of dealing with
overfitting, based on the Random Sampling Technique (RST) and on using a validation
set, all with an emphasis on model selection. We subject each approach to a
thorough testing on artificial and real-world datasets and compare them with
the standard approach, which uses the full training data, as a baseline.
| [
{
"version": "v1",
"created": "Thu, 30 Apr 2015 11:12:52 GMT"
},
{
"version": "v2",
"created": "Mon, 4 May 2015 14:29:34 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Žegklitz",
"Jan",
""
],
[
"Pošík",
"Petr",
""
]
] | TITLE: Model Selection and Overfitting in Genetic Programming: Empirical Study
[Extended Version]
ABSTRACT: Genetic Programming has been very successful in solving a large area of
problems but its use as a machine learning algorithm has been limited so far.
One of the reasons is the problem of overfitting which cannot be solved or
suppresed as easily as in more traditional approaches. Another problem, closely
related to overfitting, is the selection of the final model from the
population.
In this article we present our research that addresses both problems:
overfitting and model selection. We compare several ways of dealing with
overfitting, based on the Random Sampling Technique (RST) and on using a validation
set, all with an emphasis on model selection. We subject each approach to a
thorough testing on artificial and real-world datasets and compare them with
the standard approach, which uses the full training data, as a baseline.
| no_new_dataset | 0.947914 |
1505.00276 | Peng Wang | Peng Wang, Xiaohui Shen, Zhe Lin, Scott Cohen, Brian Price, Alan
Yuille | Joint Object and Part Segmentation using Deep Learned Potentials | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segmenting semantic objects from images and parsing them into their
respective semantic parts are fundamental steps towards detailed object
understanding in computer vision. In this paper, we propose a joint solution
that tackles semantic object and part segmentation simultaneously, in which
higher object-level context is provided to guide part segmentation, and more
detailed part-level localization is utilized to refine object segmentation.
Specifically, we first introduce the concept of semantic compositional parts
(SCP) in which similar semantic parts are grouped and shared among different
objects. A two-channel fully convolutional network (FCN) is then trained to
provide the SCP and object potentials at each pixel. At the same time, a
compact set of segments can also be obtained from the SCP predictions of the
network. Given the potentials and the generated segments, in order to explore
long-range context, we finally construct an efficient fully connected
conditional random field (FCRF) to jointly predict the final object and part
labels. Extensive evaluation on three different datasets shows that our
approach can mutually enhance the performance of object and part segmentation,
and outperforms the current state-of-the-art on both tasks.
| [
{
"version": "v1",
"created": "Fri, 1 May 2015 20:35:24 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Wang",
"Peng",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Lin",
"Zhe",
""
],
[
"Cohen",
"Scott",
""
],
[
"Price",
"Brian",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Joint Object and Part Segmentation using Deep Learned Potentials
ABSTRACT: Segmenting semantic objects from images and parsing them into their
respective semantic parts are fundamental steps towards detailed object
understanding in computer vision. In this paper, we propose a joint solution
that tackles semantic object and part segmentation simultaneously, in which
higher object-level context is provided to guide part segmentation, and more
detailed part-level localization is utilized to refine object segmentation.
Specifically, we first introduce the concept of semantic compositional parts
(SCP) in which similar semantic parts are grouped and shared among different
objects. A two-channel fully convolutional network (FCN) is then trained to
provide the SCP and object potentials at each pixel. At the same time, a
compact set of segments can also be obtained from the SCP predictions of the
network. Given the potentials and the generated segments, in order to explore
long-range context, we finally construct an efficient fully connected
conditional random field (FCRF) to jointly predict the final object and part
labels. Extensive evaluation on three different datasets shows that our
approach can mutually enhance the performance of object and part segmentation,
and outperforms the current state-of-the-art on both tasks.
| no_new_dataset | 0.947332 |
1505.00277 | Dana Movshovitz-Attias | Dana Movshovitz-Attias, William W. Cohen | Grounded Discovery of Coordinate Term Relationships between Software
Entities | null | null | null | null | cs.CL cs.AI cs.LG cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach for the detection of coordinate-term relationships
between entities from the software domain that refer to Java classes. Usually,
relations are found by examining corpus statistics associated with text
entities. In some technical domains, however, we have access to additional
information about the real-world objects named by the entities, suggesting that
coupling information about the "grounded" entities with corpus statistics might
lead to improved methods for relation discovery. To this end, we develop a
similarity measure for Java classes using distributional information about how
they are used in software, which we combine with corpus statistics on the
distribution of contexts in which the classes appear in text. Using our
approach, cross-validation accuracy on this dataset can be improved
dramatically, from around 60% to 88%. Human labeling results show that our
classifier has an F1 score of 86% over the top 1000 predicted pairs.
| [
{
"version": "v1",
"created": "Fri, 1 May 2015 20:40:00 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Movshovitz-Attias",
"Dana",
""
],
[
"Cohen",
"William W.",
""
]
] | TITLE: Grounded Discovery of Coordinate Term Relationships between Software
Entities
ABSTRACT: We present an approach for the detection of coordinate-term relationships
between entities from the software domain that refer to Java classes. Usually,
relations are found by examining corpus statistics associated with text
entities. In some technical domains, however, we have access to additional
information about the real-world objects named by the entities, suggesting that
coupling information about the "grounded" entities with corpus statistics might
lead to improved methods for relation discovery. To this end, we develop a
similarity measure for Java classes using distributional information about how
they are used in software, which we combine with corpus statistics on the
distribution of contexts in which the classes appear in text. Using our
approach, cross-validation accuracy on this dataset can be improved
dramatically, from around 60% to 88%. Human labeling results show that our
classifier has an F1 score of 86% over the top 1000 predicted pairs.
| no_new_dataset | 0.944125 |
1505.00308 | Tejaswi Nimmagadda | Tejaswi Nimmagadda and Anima Anandkumar | Multi-Object Classification and Unsupervised Scene Understanding Using
Deep Learning Features and Latent Tree Probabilistic Models | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has shown state-of-the-art classification performance on datasets
such as ImageNet, which contain a single object in each image. However,
multi-object classification is far more challenging. We present a unified
framework which leverages the strengths of multiple machine learning methods,
viz deep learning, probabilistic models and kernel methods to obtain
state-of-the-art performance on Microsoft COCO, consisting of non-iconic images. We
incorporate contextual information in natural images through a conditional
latent tree probabilistic model (CLTM), where the object co-occurrences are
conditioned on the extracted fc7 features from a pre-trained ImageNet CNN as
input. We learn the CLTM tree structure using conditional pairwise
probabilities for object co-occurrences, estimated through kernel methods, and
we learn its node and edge potentials by training a new 3-layer neural network,
which takes fc7 features as input. Object classification is carried out via
inference on the learnt conditional tree model, and we obtain significant gain
in precision-recall and F-measures on MS-COCO, especially for difficult object
categories. Moreover, the latent variables in the CLTM capture scene
information: the images with top activations for a latent node have common
themes such as being a grassland or a food scene, and so on. In addition, we
show that a simple k-means clustering of the inferred latent nodes alone
significantly improves scene classification performance on the MIT-Indoor
dataset, without the need for any retraining, and without using scene labels
during training. Thus, we present a unified framework for multi-object
classification and unsupervised scene understanding.
| [
{
"version": "v1",
"created": "Sat, 2 May 2015 03:23:46 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Nimmagadda",
"Tejaswi",
""
],
[
"Anandkumar",
"Anima",
""
]
] | TITLE: Multi-Object Classification and Unsupervised Scene Understanding Using
Deep Learning Features and Latent Tree Probabilistic Models
ABSTRACT: Deep learning has shown state-of-the-art classification performance on datasets
such as ImageNet, which contain a single object in each image. However,
multi-object classification is far more challenging. We present a unified
framework which leverages the strengths of multiple machine learning methods,
viz deep learning, probabilistic models and kernel methods to obtain
state-of-the-art performance on Microsoft COCO, consisting of non-iconic images. We
incorporate contextual information in natural images through a conditional
latent tree probabilistic model (CLTM), where the object co-occurrences are
conditioned on the extracted fc7 features from a pre-trained ImageNet CNN as
input. We learn the CLTM tree structure using conditional pairwise
probabilities for object co-occurrences, estimated through kernel methods, and
we learn its node and edge potentials by training a new 3-layer neural network,
which takes fc7 features as input. Object classification is carried out via
inference on the learnt conditional tree model, and we obtain significant gain
in precision-recall and F-measures on MS-COCO, especially for difficult object
categories. Moreover, the latent variables in the CLTM capture scene
information: the images with top activations for a latent node have common
themes such as being a grassland or a food scene, and so on. In addition, we
show that a simple k-means clustering of the inferred latent nodes alone
significantly improves scene classification performance on the MIT-Indoor
dataset, without the need for any retraining, and without using scene labels
during training. Thus, we present a unified framework for multi-object
classification and unsupervised scene understanding.
| no_new_dataset | 0.954942 |
1505.00423 | Josif Grabocka | Josif Grabocka and Nicolas Schilling and Lars Schmidt-Thieme | Optimal Time-Series Motifs | Submitted to KDD2015 | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motifs are the most repetitive/frequent patterns of a time-series. The
discovery of motifs is crucial for practitioners in order to understand and
interpret the phenomena occurring in sequential data. Currently, motifs are
searched among series sub-sequences, aiming at selecting the most frequently
occurring ones. Search-based methods, which try out series sub-sequence as
motif candidates, are currently believed to be the best methods in finding the
most frequent patterns.
However, this paper proposes an entirely new perspective in finding motifs.
We demonstrate that searching is non-optimal since the domain of motifs is
restricted, and instead we propose a principled optimization approach able to
find optimal motifs. We treat the occurrence frequency as a function and
time-series motifs as its parameters, therefore we \textit{learn} the optimal
motifs that maximize the frequency function. In contrast to searching, our
method is able to discover the most repetitive patterns (hence optimal), even
in cases where they do not explicitly occur as sub-sequences. Experiments on
several real-life time-series datasets show that the motifs found by our method
are substantially more frequent than the ones found through searching, for exactly the
same distance threshold.
| [
{
"version": "v1",
"created": "Sun, 3 May 2015 12:11:43 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Grabocka",
"Josif",
""
],
[
"Schilling",
"Nicolas",
""
],
[
"Schmidt-Thieme",
"Lars",
""
]
] | TITLE: Optimal Time-Series Motifs
ABSTRACT: Motifs are the most repetitive/frequent patterns of a time-series. The
discovery of motifs is crucial for practitioners in order to understand and
interpret the phenomena occurring in sequential data. Currently, motifs are
searched among series sub-sequences, aiming at selecting the most frequently
occurring ones. Search-based methods, which try out series sub-sequences as
motif candidates, are currently believed to be the best methods in finding the
most frequent patterns.
However, this paper proposes an entirely new perspective in finding motifs.
We demonstrate that searching is non-optimal since the domain of motifs is
restricted, and instead we propose a principled optimization approach able to
find optimal motifs. We treat the occurrence frequency as a function and
time-series motifs as its parameters, therefore we \textit{learn} the optimal
motifs that maximize the frequency function. In contrast to searching, our
method is able to discover the most repetitive patterns (hence optimal), even
in cases where they do not explicitly occur as sub-sequences. Experiments on
several real-life time-series datasets show that the motifs found by our method
are substantially more frequent than the ones found through searching, for exactly the
same distance threshold.
| no_new_dataset | 0.955651 |
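The record above replaces subsequence search with direct optimization of the motif itself. A minimal gradient-ascent sketch that maximizes a Gaussian-smoothed occurrence frequency over all windows of the series; the smoothing width, step rule, and iteration count are illustrative assumptions:

```python
import numpy as np

def learn_motif(series, L=16, steps=200, lr=0.1, gamma=5.0, seed=0):
    """Learn a length-L motif m maximizing the soft frequency
    F(m) = sum_i exp(-gamma * ||s_i - m||^2) over all windows s_i,
    so m need not coincide with any actual subsequence."""
    S = np.lib.stride_tricks.sliding_window_view(series, L).astype(float)
    rng = np.random.default_rng(seed)
    m = S[rng.integers(len(S))].copy()        # start from a random window
    for _ in range(steps):
        d = S - m                              # (num_windows, L)
        w = np.exp(-gamma * (d ** 2).sum(axis=1))
        grad = 2 * gamma * (w[:, None] * d).sum(axis=0)   # dF/dm
        m += lr * grad / (np.abs(grad).max() + 1e-12)     # normalized step
    return m
```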
1505.00519 | Cameron Summers | Cameron Summers and Phillip Popp | Large Scale Discovery of Seasonal Music From User Data | 4 pages, 1 figure | null | null | null | cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The consumption history of online media content such as music and video
offers a rich source of data from which to mine information. Trends in this
data are of particular interest because they reflect user preferences as well
as associated cultural contexts that can be exploited in systems such as
recommendation or search. This paper classifies songs as seasonal using a
large, real-world dataset of user listening data. Results show strong
performance of classification of Christmas music with Gaussian Mixture Models.
| [
{
"version": "v1",
"created": "Mon, 4 May 2015 03:38:04 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Summers",
"Cameron",
""
],
[
"Popp",
"Phillip",
""
]
] | TITLE: Large Scale Discovery of Seasonal Music From User Data
ABSTRACT: The consumption history of online media content such as music and video
offers a rich source of data from which to mine information. Trends in this
data are of particular interest because they reflect user preferences as well
as associated cultural contexts that can be exploited in systems such as
recommendation or search. This paper classifies songs as seasonal using a
large, real-world dataset of user listening data. Results show strong
performance of classification of Christmas music with Gaussian Mixture Models.
| no_new_dataset | 0.91708 |
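A sketch in the spirit of the record above: summarize each song's listening history as a normalized monthly profile and let a two-component Gaussian mixture separate December-peaked (seasonal) songs from the rest. The feature choice and the December heuristic are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def flag_seasonal(monthly_counts):
    """monthly_counts: (n_songs, 12) play counts per calendar month.
    Fit a 2-component GMM on normalized profiles and flag as seasonal
    the component whose mean December share is highest."""
    profiles = monthly_counts / (monthly_counts.sum(axis=1, keepdims=True)
                                 + 1e-12)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(profiles)
    seasonal_comp = int(np.argmax(gmm.means_[:, 11]))  # December column
    return gmm.predict(profiles) == seasonal_comp
```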
1505.00720 | Vasilis Syrgkanis | Denis Nekipelov, Vasilis Syrgkanis, Eva Tardos | Econometrics for Learning Agents | null | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main goal of this paper is to develop a theory of inference of player
valuations from observed data in the generalized second price auction without
relying on the Nash equilibrium assumption. Existing work in Economics on
inferring agent values from data relies on the assumption that all participant
strategies are best responses of the observed play of other players, i.e. they
constitute a Nash equilibrium. In this paper, we show how to perform inference
relying on a weaker assumption instead: assuming that players are using some
form of no-regret learning. Learning outcomes emerged in recent years as an
attractive alternative to Nash equilibrium in analyzing game outcomes, modeling
players who haven't reached a stable equilibrium, but rather use algorithmic
learning, aiming to learn the best way to play from previous observations. In
this paper we show how to infer values of players who use algorithmic learning
strategies. Such inference is an important first step before we move to testing
any learning theoretic behavioral model on auction data. We apply our
techniques to a dataset from Microsoft's sponsored search ad auction system.
| [
{
"version": "v1",
"created": "Mon, 4 May 2015 17:28:47 GMT"
}
] | 2015-05-05T00:00:00 | [
[
"Nekipelov",
"Denis",
""
],
[
"Syrgkanis",
"Vasilis",
""
],
[
"Tardos",
"Eva",
""
]
] | TITLE: Econometrics for Learning Agents
ABSTRACT: The main goal of this paper is to develop a theory of inference of player
valuations from observed data in the generalized second price auction without
relying on the Nash equilibrium assumption. Existing work in Economics on
inferring agent values from data relies on the assumption that all participant
strategies are best responses of the observed play of other players, i.e. they
constitute a Nash equilibrium. In this paper, we show how to perform inference
relying on a weaker assumption instead: assuming that players are using some
form of no-regret learning. Learning outcomes emerged in recent years as an
attractive alternative to Nash equilibrium in analyzing game outcomes, modeling
players who haven't reached a stable equilibrium, but rather use algorithmic
learning, aiming to learn the best way to play from previous observations. In
this paper we show how to infer values of players who use algorithmic learning
strategies. Such inference is an important first step before we move to testing
any learning theoretic behavioral model on auction data. We apply our
techniques to a dataset from Microsoft's sponsored search ad auction system.
| no_new_dataset | 0.94868 |
1410.2834 | Ubiratam de Paula Junior | Ubiratam de Paula Junior, L\'ucia M. A. Drummond, Daniel de Oliveira,
Yuri Frota, Valmir C. Barbosa | Handling Flash-Crowd Events to Improve the Performance of Web
Applications | Submitted to the 30th Symposium On Applied Computing (2015) | Proceedings of the 30th ACM/SIGAPP Symposium on Applied Computing,
769-774, 2015 | 10.1145/2695664.2695839 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud computing can offer a set of computing resources according to users'
demand. It is well suited to handling flash-crowd events in Web
applications due to its elasticity and on-demand characteristics. Thus, when
Web applications need more computing or storage capacity, they just instantiate
new resources. However, providers have to estimate the amount of resources to
instantiate to handle with the flash-crowd event. This estimation is far from
trivial since each cloud environment provides several kinds of heterogeneous
resources, each one with its own characteristics such as bandwidth, CPU, memory
and financial cost. In this paper, the Flash Crowd Handling Problem (FCHP) is
precisely defined and formulated as an integer programming problem. A new
algorithm for handling with a flash crowd named FCHP-ILS is also proposed. With
FCHP-ILS the Web applications can replicate contents in the already
instantiated resources and define the types and amount of resources to
instantiate in the cloud during a flash crowd. Our approach is evaluated
considering real flash crowd traces obtained from the related literature. We
also present a case study, based on a synthetic dataset representing
flash-crowd events in small scenarios aimed at comparing the proposed
approach against Amazon's Auto-Scale mechanism.
| [
{
"version": "v1",
"created": "Fri, 10 Oct 2014 16:36:09 GMT"
}
] | 2015-05-04T00:00:00 | [
[
"Junior",
"Ubiratam de Paula",
""
],
[
"Drummond",
"Lúcia M. A.",
""
],
[
"de Oliveira",
"Daniel",
""
],
[
"Frota",
"Yuri",
""
],
[
"Barbosa",
"Valmir C.",
""
]
] | TITLE: Handling Flash-Crowd Events to Improve the Performance of Web
Applications
ABSTRACT: Cloud computing can offer a set of computing resources according to users'
demand. It is well suited to handling flash-crowd events in Web
applications due to its elasticity and on-demand characteristics. Thus, when
Web applications need more computing or storage capacity, they just instantiate
new resources. However, providers have to estimate the amount of resources to
instantiate to handle the flash-crowd event. This estimation is far from
trivial since each cloud environment provides several kinds of heterogeneous
resources, each one with its own characteristics such as bandwidth, CPU, memory
and financial cost. In this paper, the Flash Crowd Handling Problem (FCHP) is
precisely defined and formulated as an integer programming problem. A new
algorithm for handling a flash crowd, named FCHP-ILS, is also proposed. With
FCHP-ILS the Web applications can replicate contents in the already
instantiated resources and define the types and amount of resources to
instantiate in the cloud during a flash crowd. Our approach is evaluated
considering real flash crowd traces obtained from the related literature. We
also present a case study, based on a synthetic dataset representing
flash-crowd events in small scenarios aimed at comparing the proposed
approach against Amazon's Auto-Scale mechanism.
| new_dataset | 0.712482 |
1504.08175 | Jo\~ao Vinagre | Jo\~ao Vinagre, Al\'ipio M\'ario Jorge, Jo\~ao Gama | Evaluation of recommender systems in streaming environments | Workshop on 'Recommender Systems Evaluation: Dimensions and Design'
(REDD 2014), held in conjunction with RecSys 2014. October 10, 2014, Silicon
Valley, United States | null | 10.13140/2.1.4381.5367 | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Evaluation of recommender systems is typically done with finite datasets.
This means that conventional evaluation methodologies are only applicable in
offline experiments, where data and models are stationary. However, in real
world systems, user feedback is continuously generated, at unpredictable rates.
Given this setting, one important issue is how to evaluate algorithms in such a
streaming data environment. In this paper we propose a prequential evaluation
protocol for recommender systems, suitable for streaming data environments, but
also applicable in stationary settings. Using this protocol we are able to
monitor the evolution of algorithms' accuracy over time. Furthermore, we are
able to perform reliable comparative assessments of algorithms by computing
significance tests over a sliding window. We argue that besides being suitable
for streaming data, prequential evaluation allows the detection of phenomena
that would otherwise remain unnoticed in the evaluation of both offline and
online recommender systems.
| [
{
"version": "v1",
"created": "Thu, 30 Apr 2015 11:41:49 GMT"
}
] | 2015-05-04T00:00:00 | [
[
"Vinagre",
"João",
""
],
[
"Jorge",
"Alípio Mário",
""
],
[
"Gama",
"João",
""
]
] | TITLE: Evaluation of recommender systems in streaming environments
ABSTRACT: Evaluation of recommender systems is typically done with finite datasets.
This means that conventional evaluation methodologies are only applicable in
offline experiments, where data and models are stationary. However, in real
world systems, user feedback is continuously generated, at unpredictable rates.
Given this setting, one important issue is how to evaluate algorithms in such a
streaming data environment. In this paper we propose a prequential evaluation
protocol for recommender systems, suitable for streaming data environments, but
also applicable in stationary settings. Using this protocol we are able to
monitor the evolution of algorithms' accuracy over time. Furthermore, we are
able to perform reliable comparative assessments of algorithms by computing
significance tests over a sliding window. We argue that besides being suitable
for streaming data, prequential evaluation allows the detection of phenomena
that would otherwise remain unnoticed in the evaluation of both offline and
online recommender systems.
| no_new_dataset | 0.94743 |
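A minimal sketch of the prequential protocol from the preceding entry (1504.08175): for every incoming event the model is first tested, then updated, and accuracy is tracked over a sliding window. The recommender interface (`recommend`, `update`) is a hypothetical stand-in, not an API from the paper.

```python
from collections import deque

def prequential_eval(model, stream, k=10, window=1000):
    """Test-then-learn loop over a stream of (user, item) events.

    Returns the sliding-window hit rate after each event, so accuracy
    can be monitored as it evolves over time."""
    hits = deque(maxlen=window)
    rates = []
    for user, item in stream:
        recs = model.recommend(user, k=k)   # 1) test on the new event
        hits.append(1 if item in recs else 0)
        model.update(user, item)            # 2) then learn from it
        rates.append(sum(hits) / len(hits))
    return rates
```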
1505.00036 | Yair Zick Dr. | Amit Datta and Anupam Datta and Ariel D. Procaccia and Yair Zick | Influence in Classification via Cooperative Game Theory | accepted to IJCAI 2015 | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A dataset has been classified by some unknown classifier into two types of
points. What were the most important factors in determining the classification
outcome? In this work, we employ an axiomatic approach in order to uniquely
characterize an influence measure: a function that, given a set of classified
points, outputs a value for each feature corresponding to its influence in
determining the classification outcome. We show that our influence measure
takes on an intuitive form when the unknown classifier is linear. Finally, we
employ our influence measure in order to analyze the effects of user profiling
on Google's online display advertising.
| [
{
"version": "v1",
"created": "Thu, 30 Apr 2015 21:22:36 GMT"
}
] | 2015-05-04T00:00:00 | [
[
"Datta",
"Amit",
""
],
[
"Datta",
"Anupam",
""
],
[
"Procaccia",
"Ariel D.",
""
],
[
"Zick",
"Yair",
""
]
] | TITLE: Influence in Classification via Cooperative Game Theory
ABSTRACT: A dataset has been classified by some unknown classifier into two types of
points. What were the most important factors in determining the classification
outcome? In this work, we employ an axiomatic approach in order to uniquely
characterize an influence measure: a function that, given a set of classified
points, outputs a value for each feature corresponding to its influence in
determining the classification outcome. We show that our influence measure
takes on an intuitive form when the unknown classifier is linear. Finally, we
employ our influence measure in order to analyze the effects of user profiling
on Google's online display advertising.
| no_new_dataset | 0.949201 |
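The preceding entry (1505.00036) characterizes its influence measure axiomatically rather than algorithmically, so the sketch below shows only a related game-theoretic quantity: a Monte-Carlo, Shapley-style influence of each feature on a black-box score. The baseline convention and sampling scheme are assumptions for illustration.

```python
import numpy as np

def shapley_style_influence(score, x, baseline, n_samples=1000, seed=0):
    """Average marginal contribution of each feature of x to score(.),
    over random feature orderings; 'absent' features take baseline values."""
    rng = np.random.default_rng(seed)
    d = len(x)
    influence = np.zeros(d)
    for _ in range(n_samples):
        z = baseline.astype(float).copy()
        prev = score(z)
        for j in rng.permutation(d):
            z[j] = x[j]                 # add feature j to the coalition
            cur = score(z)
            influence[j] += cur - prev  # marginal contribution of j
            prev = cur
    return influence / n_samples
```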
1505.00161 | Danushka Bollegala | Danushka Bollegala, Takanori Maehara, Ken-ichi Kawarabayashi | Embedding Semantic Relations into Word Representations | International Joint Conferences in AI (IJCAI) 2015 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning representations for semantic relations is important for various
tasks such as analogy detection, relational search, and relation
classification. Although there have been several proposals for learning
representations for individual words, learning word representations that
explicitly capture the semantic relations between words remains
underdeveloped. We propose an unsupervised method for learning vector
representations for words such that the learnt representations are sensitive to
the semantic relations that exist between two words. First, we extract lexical
patterns from the co-occurrence contexts of two words in a corpus to represent
the semantic relations that exist between those two words. Second, we represent
a lexical pattern as the weighted sum of the representations of the words that
co-occur with that lexical pattern. Third, we train a binary classifier to
detect relationally similar vs. non-similar lexical pattern pairs. The proposed
method is unsupervised in the sense that the lexical pattern pairs we use as
training data are automatically sampled from a corpus, without requiring any
manual intervention. Our proposed method statistically significantly
outperforms the current state-of-the-art word representations on three
benchmark datasets for proportional analogy detection, demonstrating its
ability to accurately capture the semantic relations among words.
| [
{
"version": "v1",
"created": "Fri, 1 May 2015 11:43:34 GMT"
}
] | 2015-05-04T00:00:00 | [
[
"Bollegala",
"Danushka",
""
],
[
"Maehara",
"Takanori",
""
],
[
"Kawarabayashi",
"Ken-ichi",
""
]
] | TITLE: Embedding Semantic Relations into Word Representations
ABSTRACT: Learning representations for semantic relations is important for various
tasks such as analogy detection, relational search, and relation
classification. Although there have been several proposals for learning
representations for individual words, learning word representations that
explicitly capture the semantic relations between words remains
underdeveloped. We propose an unsupervised method for learning vector
representations for words such that the learnt representations are sensitive to
the semantic relations that exist between two words. First, we extract lexical
patterns from the co-occurrence contexts of two words in a corpus to represent
the semantic relations that exist between those two words. Second, we represent
a lexical pattern as the weighted sum of the representations of the words that
co-occur with that lexical pattern. Third, we train a binary classifier to
detect relationally similar vs. non-similar lexical pattern pairs. The proposed
method is unsupervised in the sense that the lexical pattern pairs we use as
training data are automatically sampled from a corpus, without requiring any
manual intervention. Our proposed method statistically significantly
outperforms the current state-of-the-art word representations on three
benchmark datasets for proportional analogy detection, demonstrating its
ability to accurately capture the semantic relations among words.
| no_new_dataset | 0.945197 |
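The second step of the preceding entry (1505.00161) — representing a lexical pattern as a weighted sum of the vectors of words co-occurring with it — admits a direct sketch; the count-normalized weighting below is an assumption, as the paper may weight differently.

```python
import numpy as np

def pattern_embedding(cooccur_counts, word_vecs):
    """Embed a lexical pattern as the normalized, count-weighted sum of
    the embeddings of its co-occurring words.

    cooccur_counts: {word: count} for words seen with the pattern.
    word_vecs: {word: np.ndarray} pretrained word representations."""
    dim = next(iter(word_vecs.values())).shape[0]
    vec = np.zeros(dim)
    total = sum(cooccur_counts.values())
    for word, count in cooccur_counts.items():
        vec += (count / total) * word_vecs[word]
    return vec
```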
1412.4729 | Subhashini Venugopalan | Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach,
Raymond Mooney, Kate Saenko | Translating Videos to Natural Language Using Deep Recurrent Neural
Networks | NAACL-HLT 2015 camera ready | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solving the visual symbol grounding problem has long been a goal of
artificial intelligence. The field appears to be advancing closer to this goal
with recent breakthroughs in deep learning for natural language grounding in
static images. In this paper, we propose to translate videos directly to
sentences using a unified deep neural network with both convolutional and
recurrent structure. Described video datasets are scarce, and most existing
methods have been applied to toy domains with a small vocabulary of possible
words. By transferring knowledge from 1.2M+ images with category labels and
100,000+ images with captions, our method is able to create sentence
descriptions of open-domain videos with large vocabularies. We compare our
approach with recent work using language generation metrics, subject, verb, and
object prediction accuracy, and a human evaluation.
| [
{
"version": "v1",
"created": "Mon, 15 Dec 2014 19:21:50 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Dec 2014 00:58:38 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Apr 2015 04:22:06 GMT"
}
] | 2015-05-01T00:00:00 | [
[
"Venugopalan",
"Subhashini",
""
],
[
"Xu",
"Huijuan",
""
],
[
"Donahue",
"Jeff",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Mooney",
"Raymond",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Translating Videos to Natural Language Using Deep Recurrent Neural
Networks
ABSTRACT: Solving the visual symbol grounding problem has long been a goal of
artificial intelligence. The field appears to be advancing closer to this goal
with recent breakthroughs in deep learning for natural language grounding in
static images. In this paper, we propose to translate videos directly to
sentences using a unified deep neural network with both convolutional and
recurrent structure. Described video datasets are scarce, and most existing
methods have been applied to toy domains with a small vocabulary of possible
words. By transferring knowledge from 1.2M+ images with category labels and
100,000+ images with captions, our method is able to create sentence
descriptions of open-domain videos with large vocabularies. We compare our
approach with recent work using language generation metrics, subject, verb, and
object prediction accuracy, and a human evaluation.
| no_new_dataset | 0.951278 |
1504.05133 | Joe Yue-Hei Ng | Joe Yue-Hei Ng, Fan Yang, Larry S. Davis | Exploiting Local Features from Deep Networks for Image Retrieval | CVPR DeepVision Workshop 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks have been successfully applied to image
classification tasks. When these same networks have been applied to image
retrieval, the assumption has been made that the last layers would give the
best performance, as they do in classification. We show that for instance-level
image retrieval, lower layers often perform better than the last layers in
convolutional neural networks. We present an approach for extracting
convolutional features from different layers of the networks, and adopt VLAD
encoding to encode features into a single vector for each image. We investigate
the effect of different layers and scales of input images on the performance of
convolutional features using the recent deep networks OxfordNet and GoogLeNet.
Experiments demonstrate that intermediate layers or higher layers with finer
scales produce better results for image retrieval, compared to the last layer.
When using compressed 128-D VLAD descriptors, our method obtains
state-of-the-art results and outperforms other VLAD- and CNN-based approaches on
two out of three test datasets. Our work provides guidance for transferring
deep networks trained on image classification to image retrieval tasks.
| [
{
"version": "v1",
"created": "Mon, 20 Apr 2015 17:41:46 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Apr 2015 03:36:25 GMT"
}
] | 2015-05-01T00:00:00 | [
[
"Ng",
"Joe Yue-Hei",
""
],
[
"Yang",
"Fan",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: Exploiting Local Features from Deep Networks for Image Retrieval
ABSTRACT: Deep convolutional neural networks have been successfully applied to image
classification tasks. When these same networks have been applied to image
retrieval, the assumption has been made that the last layers would give the
best performance, as they do in classification. We show that for instance-level
image retrieval, lower layers often perform better than the last layers in
convolutional neural networks. We present an approach for extracting
convolutional features from different layers of the networks, and adopt VLAD
encoding to encode features into a single vector for each image. We investigate
the effect of different layers and scales of input images on the performance of
convolutional features using the recent deep networks OxfordNet and GoogLeNet.
Experiments demonstrate that intermediate layers or higher layers with finer
scales produce better results for image retrieval, compared to the last layer.
When using compressed 128-D VLAD descriptors, our method obtains
state-of-the-art results and outperforms other VLAD- and CNN-based approaches on
two out of three test datasets. Our work provides guidance for transferring
deep networks trained on image classification to image retrieval tasks.
| no_new_dataset | 0.949389 |
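A compact sketch of the VLAD encoding step from the preceding entry (1504.05133), treating convolutional activations at spatial positions as local descriptors; codebook size, normalization details, and the use of scikit-learn's KMeans are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def vlad_encode(local_feats, kmeans):
    """Sum residuals of local descriptors to their nearest visual word,
    then power- and L2-normalize into a single image vector.

    local_feats: (n, d) array of local conv features for one image.
    kmeans: a fitted KMeans codebook with k centers."""
    centers = kmeans.cluster_centers_
    assign = kmeans.predict(local_feats)
    vlad = np.zeros_like(centers)
    for i in range(centers.shape[0]):
        members = local_feats[assign == i]
        if len(members):
            vlad[i] = (members - centers[i]).sum(axis=0)
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))  # power normalization
    return vlad / (np.linalg.norm(vlad) + 1e-12)  # L2 normalization
```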
1504.07575 | Oisin Mac Aodha | Edward Johns and Oisin Mac Aodha and Gabriel J. Brostow | Becoming the Expert - Interactive Multi-Class Machine Teaching | CVPR 2015 | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compared to machines, humans are extremely good at classifying images into
categories, especially when they possess prior knowledge of the categories at
hand. If this prior information is not available, supervision in the form of
teaching images is required. To learn categories more quickly, people should
see important and representative images first, followed by less important
images later - or not at all. However, image-importance is individual-specific,
i.e. a teaching image is important to a student if it changes their overall
ability to discriminate between classes. Further, students keep learning, so
while image-importance depends on their current knowledge, it also varies with
time.
In this work we propose an Interactive Machine Teaching algorithm that
enables a computer to teach challenging visual concepts to a human. Our
adaptive algorithm chooses, online, which labeled images from a teaching set
should be shown to the student as they learn. We show that a teaching strategy
that probabilistically models the student's ability and progress, based on
their correct and incorrect answers, produces better 'experts'. We present
results using real human participants across several varied and challenging
real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 28 Apr 2015 17:22:29 GMT"
}
] | 2015-05-01T00:00:00 | [
[
"Johns",
"Edward",
""
],
[
"Mac Aodha",
"Oisin",
""
],
[
"Brostow",
"Gabriel J.",
""
]
] | TITLE: Becoming the Expert - Interactive Multi-Class Machine Teaching
ABSTRACT: Compared to machines, humans are extremely good at classifying images into
categories, especially when they possess prior knowledge of the categories at
hand. If this prior information is not available, supervision in the form of
teaching images is required. To learn categories more quickly, people should
see important and representative images first, followed by less important
images later - or not at all. However, image-importance is individual-specific,
i.e. a teaching image is important to a student if it changes their overall
ability to discriminate between classes. Further, students keep learning, so
while image-importance depends on their current knowledge, it also varies with
time.
In this work we propose an Interactive Machine Teaching algorithm that
enables a computer to teach challenging visual concepts to a human. Our
adaptive algorithm chooses, online, which labeled images from a teaching set
should be shown to the student as they learn. We show that a teaching strategy
that probabilistically models the student's ability and progress, based on
their correct and incorrect answers, produces better 'experts'. We present
results using real human participants across several varied and challenging
real-world datasets.
| no_new_dataset | 0.945147 |
1504.08022 | Hongyu Guo Ph.D | Hongyu Guo, Xiaodan Zhu, Martin Renqiang Min | A Deep Learning Model for Structured Outputs with High-order Interaction | null | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real-world applications are associated with structured data, where not
only input but also output has interplay. However, typical classification and
regression models often lack the ability to simultaneously explore high-order
interactions within the input and within the output. In this paper, we present a
deep learning model aiming to generate a powerful nonlinear functional mapping
from structured input to structured output. More specifically, we propose to
integrate high-order hidden units, guided discriminative pretraining, and
high-order auto-encoders for this purpose. We evaluate the model with three
datasets, and obtain state-of-the-art performances among competitive methods.
Our current work focuses on structured output regression, which is a less
explored area, although the model can be extended to handle structured label
classification.
| [
{
"version": "v1",
"created": "Wed, 29 Apr 2015 20:58:52 GMT"
}
] | 2015-05-01T00:00:00 | [
[
"Guo",
"Hongyu",
""
],
[
"Zhu",
"Xiaodan",
""
],
[
"Min",
"Martin Renqiang",
""
]
] | TITLE: A Deep Learning Model for Structured Outputs with High-order Interaction
ABSTRACT: Many real-world applications are associated with structured data, where not
only input but also output has interplay. However, typical classification and
regression models often lack the ability to simultaneously explore high-order
interactions within the input and within the output. In this paper, we present a
deep learning model aiming to generate a powerful nonlinear functional mapping
from structured input to structured output. More specifically, we propose to
integrate high-order hidden units, guided discriminative pretraining, and
high-order auto-encoders for this purpose. We evaluate the model with three
datasets, and obtain state-of-the-art performances among competitive methods.
Our current work focuses on structured output regression, which is a less
explored area, although the model can be extended to handle structured label
classification.
| no_new_dataset | 0.946745 |
1504.08050 | Shuangyong Song | Shuangyong Song and Yao Meng | Detecting Concept-level Emotion Cause in Microblogging | 2 pages, 2 figures, to appear on WWW 2015 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a Concept-level Emotion Cause Model (CECM), instead
of the mere word-level models, to discover causes of microblogging users'
diversified emotions on a specific hot event. A modified topic-supervised biterm
topic model is utilized in CECM to detect 'emotion topics' in event-related
tweets, and then context-sensitive topical PageRank is utilized to detect
meaningful multiword expressions as emotion causes. Experimental results on a
dataset from Sina Weibo, one of the largest microblogging websites in China,
show CECM can better detect emotion causes than baseline methods.
| [
{
"version": "v1",
"created": "Thu, 30 Apr 2015 00:35:32 GMT"
}
] | 2015-05-01T00:00:00 | [
[
"Song",
"Shuangyong",
""
],
[
"Meng",
"Yao",
""
]
] | TITLE: Detecting Concept-level Emotion Cause in Microblogging
ABSTRACT: In this paper, we propose a Concept-level Emotion Cause Model (CECM), instead
of the mere word-level models, to discover causes of microblogging users'
diversified emotions on a specific hot event. A modified topic-supervised biterm
topic model is utilized in CECM to detect 'emotion topics' in event-related
tweets, and then context-sensitive topical PageRank is utilized to detect
meaningful multiword expressions as emotion causes. Experimental results on a
dataset from Sina Weibo, one of the largest microblogging websites in China,
show CECM can better detect emotion causes than baseline methods.
| no_new_dataset | 0.952086 |
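As a rough stand-in for the context-sensitive topical PageRank used in the preceding entry (1504.08050) to rank candidate cause expressions, the sketch below runs PageRank on a word co-occurrence graph with teleportation biased toward words relevant to an emotion topic; the graph and topic weights are hypothetical inputs.

```python
import numpy as np

def topical_pagerank(A, topic_weights, damping=0.85, iters=100):
    """PageRank with topic-biased teleportation.

    A: (n, n) nonnegative co-occurrence matrix with positive column sums.
    topic_weights: nonnegative relevance of each word to the topic."""
    P = A / A.sum(axis=0, keepdims=True)     # column-stochastic transitions
    t = topic_weights / topic_weights.sum()  # teleportation distribution
    r = np.full(len(t), 1.0 / len(t))
    for _ in range(iters):
        r = damping * (P @ r) + (1 - damping) * t
    return r                                 # topic-sensitive word scores
```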
1504.08219 | Oisin Mac Aodha | Oisin Mac Aodha and Neill D.F. Campbell and Jan Kautz and Gabriel J.
Brostow | Hierarchical Subquery Evaluation for Active Learning on a Graph | CVPR 2014 | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To train good supervised and semi-supervised object classifiers, it is
critical that we not waste the time of the human experts who are providing the
training labels. Existing active learning strategies can have uneven
performance, being efficient on some datasets but wasteful on others, or
inconsistent just between runs on the same dataset. We propose perplexity-based
graph construction and a new hierarchical subquery evaluation algorithm to
combat this variability, and to release the potential of Expected Error
Reduction.
Under some specific circumstances, Expected Error Reduction has been one of
the strongest-performing informativeness criteria for active learning. Until
now, it has also been prohibitively costly to compute for sizeable datasets. We
demonstrate our highly practical algorithm, comparing it to other active
learning measures on classification datasets that vary in sparsity,
dimensionality, and size. Our algorithm is consistent over multiple runs and
achieves high accuracy, while querying the human expert for labels at a
frequency that matches their desired time budget.
| [
{
"version": "v1",
"created": "Thu, 30 Apr 2015 13:35:59 GMT"
}
] | 2015-05-01T00:00:00 | [
[
"Mac Aodha",
"Oisin",
""
],
[
"Campbell",
"Neill D. F.",
""
],
[
"Kautz",
"Jan",
""
],
[
"Brostow",
"Gabriel J.",
""
]
] | TITLE: Hierarchical Subquery Evaluation for Active Learning on a Graph
ABSTRACT: To train good supervised and semi-supervised object classifiers, it is
critical that we not waste the time of the human experts who are providing the
training labels. Existing active learning strategies can have uneven
performance, being efficient on some datasets but wasteful on others, or
inconsistent just between runs on the same dataset. We propose perplexity-based
graph construction and a new hierarchical subquery evaluation algorithm to
combat this variability, and to release the potential of Expected Error
Reduction.
Under some specific circumstances, Expected Error Reduction has been one of
the strongest-performing informativeness criteria for active learning. Until
now, it has also been prohibitively costly to compute for sizeable datasets. We
demonstrate our highly practical algorithm, comparing it to other active
learning measures on classification datasets that vary in sparsity,
dimensionality, and size. Our algorithm is consistent over multiple runs and
achieves high accuracy, while querying the human expert for labels at a
frequency that matches their desired time budget.
| no_new_dataset | 0.946892 |
1501.00901 | Yubin Deng | Yubin Deng, Ping Luo, Chen Change Loy, Xiaoou Tang | Learning to Recognize Pedestrian Attribute | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning to recognize pedestrian attributes at far distance is a challenging
problem in visual surveillance since face and body close-shots are hardly
available; instead, only far-view image frames of pedestrian are given. In this
study, we present an alternative approach that exploits the context of
neighboring pedestrian images for improved attribute inference compared to the
conventional SVM-based method. In addition, we conduct extensive experiments to
evaluate the informativeness of background and foreground features for
attribute recognition. Experiments are based on our newly released pedestrian
attribute dataset, which is by far the largest and most diverse of its kind.
| [
{
"version": "v1",
"created": "Mon, 5 Jan 2015 15:53:01 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Apr 2015 06:35:50 GMT"
}
] | 2015-04-30T00:00:00 | [
[
"Deng",
"Yubin",
""
],
[
"Luo",
"Ping",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Learning to Recognize Pedestrian Attribute
ABSTRACT: Learning to recognize pedestrian attributes at far distance is a challenging
problem in visual surveillance since face and body close-shots are hardly
available; instead, only far-view image frames of pedestrians are given. In this
study, we present an alternative approach that exploits the context of
neighboring pedestrian images for improved attribute inference compared to the
conventional SVM-based method. In addition, we conduct extensive experiments to
evaluate the informativeness of background and foreground features for
attribute recognition. Experiments are based on our newly released pedestrian
attribute dataset, which is by far the largest and most diverse of its kind.
| new_dataset | 0.953665 |
1504.01777 | Junbin Gao Professor | Yanfeng Sun and Junbin Gao and Xia Hong and Bamdev Mishra and Baocai
Yin | Heterogeneous Tensor Decomposition for Clustering via Manifold
Optimization | 12 pages, 2 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tensors or multiarray data are generalizations of matrices. Tensor clustering
has become a very important research topic due to the intrinsically rich
structures in real-world multiarray datasets. Subspace clustering based on
vectorizing multiarray data has been extensively researched. However,
vectorization of tensorial data does not exploit complete structure
information. In this paper, we propose a subspace clustering algorithm without
adopting any vectorization process. Our approach is based on a novel
heterogeneous Tucker decomposition model. In contrast to existing techniques,
we propose a new clustering algorithm that alternates between different modes
of the proposed heterogeneous tensor model. All but the last mode have
closed-form updates. Updating the last mode reduces to optimizing over the
so-called multinomial manifold, for which we investigate second order
Riemannian geometry and propose a trust-region algorithm. Numerical experiments
show that our proposed algorithm compete effectively with state-of-the-art
clustering algorithms that are based on tensor factorization.
| [
{
"version": "v1",
"created": "Tue, 7 Apr 2015 23:18:34 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Apr 2015 02:53:10 GMT"
}
] | 2015-04-30T00:00:00 | [
[
"Sun",
"Yanfeng",
""
],
[
"Gao",
"Junbin",
""
],
[
"Hong",
"Xia",
""
],
[
"Mishra",
"Bamdev",
""
],
[
"Yin",
"Baocai",
""
]
] | TITLE: Heterogeneous Tensor Decomposition for Clustering via Manifold
Optimization
ABSTRACT: Tensors or multiarray data are generalizations of matrices. Tensor clustering
has become a very important research topic due to the intrinsically rich
structures in real-world multiarray datasets. Subspace clustering based on
vectorizing multiarray data has been extensively researched. However,
vectorization of tensorial data does not exploit complete structure
information. In this paper, we propose a subspace clustering algorithm without
adopting any vectorization process. Our approach is based on a novel
heterogeneous Tucker decomposition model. In contrast to existing techniques,
we propose a new clustering algorithm that alternates between different modes
of the proposed heterogeneous tensor model. All but the last mode have
closed-form updates. Updating the last mode reduces to optimizing over the
so-called multinomial manifold, for which we investigate second order
Riemannian geometry and propose a trust-region algorithm. Numerical experiments
show that our proposed algorithm competes effectively with state-of-the-art
clustering algorithms that are based on tensor factorization.
| no_new_dataset | 0.949949 |
1504.07678 | Hongzhao Huang | Hongzhao Huang and Larry Heck and Heng Ji | Leveraging Deep Neural Networks and Knowledge Graphs for Entity
Disambiguation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Entity Disambiguation aims to link mentions of ambiguous entities to a
knowledge base (e.g., Wikipedia). Modeling topical coherence is crucial for
this task based on the assumption that information from the same semantic
context tends to belong to the same topic. This paper presents a novel deep
semantic relatedness model (DSRM) based on deep neural networks (DNN) and
semantic knowledge graphs (KGs) to measure entity semantic relatedness for
topical coherence modeling. The DSRM is directly trained on large-scale KGs and
it maps heterogeneous types of knowledge of an entity from KGs to numerical
feature vectors in a latent space such that the distance between two
semantically-related entities is minimized. Compared with the state-of-the-art
relatedness approach proposed by (Milne and Witten, 2008a), the DSRM obtains
19.4% and 24.5% reductions in entity disambiguation errors on two publicly
available datasets respectively.
| [
{
"version": "v1",
"created": "Tue, 28 Apr 2015 22:47:25 GMT"
}
] | 2015-04-30T00:00:00 | [
[
"Huang",
"Hongzhao",
""
],
[
"Heck",
"Larry",
""
],
[
"Ji",
"Heng",
""
]
] | TITLE: Leveraging Deep Neural Networks and Knowledge Graphs for Entity
Disambiguation
ABSTRACT: Entity Disambiguation aims to link mentions of ambiguous entities to a
knowledge base (e.g., Wikipedia). Modeling topical coherence is crucial for
this task based on the assumption that information from the same semantic
context tends to belong to the same topic. This paper presents a novel deep
semantic relatedness model (DSRM) based on deep neural networks (DNN) and
semantic knowledge graphs (KGs) to measure entity semantic relatedness for
topical coherence modeling. The DSRM is directly trained on large-scale KGs and
it maps heterogeneous types of knowledge of an entity from KGs to numerical
feature vectors in a latent space such that the distance between two
semantically-related entities is minimized. Compared with the state-of-the-art
relatedness approach proposed by (Milne and Witten, 2008a), the DSRM obtains
19.4% and 24.5% reductions in entity disambiguation errors on two publicly
available datasets respectively.
| no_new_dataset | 0.950686 |
1504.07758 | Jeremy Debattista | Jeremy Debattista, Christoph Lange, S\"oren Auer | Luzzu Quality Metric Language -- A DSL for Linked Data Quality
Assessment | arXiv admin note: text overlap with arXiv:1412.3750 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The steadily growing number of linked open datasets has brought about a number of
reservations amongst data consumers with regard to the datasets' quality.
Quality assessment requires significant effort and consideration, including the
definition of data quality metrics and a process to assess datasets based on
these definitions. Luzzu is a quality assessment framework for linked data that
allows domain-specific metrics to be plugged in. LQML offers notations,
abstractions and expressive power, focusing on the representation of quality
metrics. It provides expressive power for defining sophisticated quality
metrics. Its integration with Luzzu enables their efficient processing and
execution and thus the comprehensive assessment of extremely large datasets in
a streaming way. We also describe a novel ontology that enables the reuse,
sharing and querying of such definitions. Finally, we evaluate the proposed DSL
against the cognitive dimensions of notation framework.
| [
{
"version": "v1",
"created": "Wed, 29 Apr 2015 08:17:20 GMT"
}
] | 2015-04-30T00:00:00 | [
[
"Debattista",
"Jeremy",
""
],
[
"Lange",
"Christoph",
""
],
[
"Auer",
"Sören",
""
]
] | TITLE: Luzzu Quality Metric Language -- A DSL for Linked Data Quality
Assessment
ABSTRACT: The steadily growing number of linked open datasets has brought about a number of
reservations amongst data consumers with regard to the datasets' quality.
Quality assessment requires significant effort and consideration, including the
definition of data quality metrics and a process to assess datasets based on
these definitions. Luzzu is a quality assessment framework for linked data that
allows domain-specific metrics to be plugged in. LQML offers notations,
abstractions and expressive power, focusing on the representation of quality
metrics. It provides expressive power for defining sophisticated quality
metrics. Its integration with Luzzu enables their efficient processing and
execution and thus the comprehensive assessment of extremely large datasets in
a streaming way. We also describe a novel ontology that enables the reuse,
sharing and querying of such definitions. Finally, we evaluate the proposed DSL
against the cognitive dimensions of notation framework.
| no_new_dataset | 0.947962 |
1504.07890 | Diego Fabregat-Traver | Alvaro Frank, Diego Fabregat-Traver and Paolo Bientinesi | Large-scale linear regression: Development of high-performance routines | null | null | null | null | cs.CE cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In statistics, series of ordinary least squares problems (OLS) are used to
study the linear correlation among sets of variables of interest; in many
studies, the number of such variables is at least in the millions, and the
corresponding datasets occupy terabytes of disk space. As the availability of
large-scale datasets increases regularly, so does the challenge in dealing with
them. Indeed, traditional solvers---which rely on the use of "black-box"
routines optimized for one single OLS---are highly inefficient and fail to
provide a viable solution for big-data analyses. As a case study, in this paper
we consider a linear regression consisting of two-dimensional grids of related
OLS problems that arise in the context of genome-wide association analyses, and
give a careful walkthrough for the development of {\sc ols-grid}, a
high-performance routine for shared-memory architectures; analogous steps are
relevant for tailoring OLS solvers to other applications. In particular, we
first illustrate the design of efficient algorithms that exploit the structure
of the OLS problems and eliminate redundant computations; then, we show how to
effectively deal with datasets that do not fit in main memory; finally, we
discuss how to cast the computation in terms of efficient kernels and how to
achieve scalability. Importantly, each design decision along the way is
justified by simple performance models. {\sc ols-grid} enables the solution of
$10^{11}$ correlated OLS problems operating on terabytes of data in a matter of
hours.
| [
{
"version": "v1",
"created": "Wed, 29 Apr 2015 15:24:33 GMT"
}
] | 2015-04-30T00:00:00 | [
[
"Frank",
"Alvaro",
""
],
[
"Fabregat-Traver",
"Diego",
""
],
[
"Bientinesi",
"Paolo",
""
]
] | TITLE: Large-scale linear regression: Development of high-performance routines
ABSTRACT: In statistics, series of ordinary least squares problems (OLS) are used to
study the linear correlation among sets of variables of interest; in many
studies, the number of such variables is at least in the millions, and the
corresponding datasets occupy terabytes of disk space. As the availability of
large-scale datasets increases regularly, so does the challenge in dealing with
them. Indeed, traditional solvers---which rely on the use of "black-box"
routines optimized for one single OLS---are highly inefficient and fail to
provide a viable solution for big-data analyses. As a case study, in this paper
we consider a linear regression consisting of two-dimensional grids of related
OLS problems that arise in the context of genome-wide association analyses, and
give a careful walkthrough for the development of {\sc ols-grid}, a
high-performance routine for shared-memory architectures; analogous steps are
relevant for tailoring OLS solvers to other applications. In particular, we
first illustrate the design of efficient algorithms that exploit the structure
of the OLS problems and eliminate redundant computations; then, we show how to
effectively deal with datasets that do not fit in main memory; finally, we
discuss how to cast the computation in terms of efficient kernels and how to
achieve scalability. Importantly, each design decision along the way is
justified by simple performance models. {\sc ols-grid} enables the solution of
$10^{11}$ correlated OLS problems operating on terabytes of data in a matter of
hours.
| no_new_dataset | 0.941385 |
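One concrete instance of the structure exploitation the preceding entry (1504.07890) argues for: when every OLS problem in the grid shares a fixed block of covariates, that block can be orthogonalized once (Frisch-Waugh-Lovell style), reducing each grid cell to a cheap one-dimensional regression. This is a simplified sketch, not the ols-grid routine itself.

```python
import numpy as np

def grid_ols(X_fixed, snps, ys):
    """Per-SNP, per-trait OLS coefficients for models y ~ [X_fixed, s].

    X_fixed: (n, p) covariates shared by all problems.
    snps: (n, m) candidate regressors, one problem per column.
    ys: (n, t) traits (right-hand sides)."""
    Q, _ = np.linalg.qr(X_fixed)
    ys_r = ys - Q @ (Q.T @ ys)                   # residualize traits once
    betas = np.empty((snps.shape[1], ys.shape[1]))
    for i in range(snps.shape[1]):
        s = snps[:, i] - Q @ (Q.T @ snps[:, i])  # residualize the SNP
        betas[i] = (s @ ys_r) / (s @ s)          # 1-D regression per trait
    return betas
```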
1504.07912 | Adam Smith | Sofya Raskhodnikova, Adam Smith | Efficient Lipschitz Extensions for High-Dimensional Graph Statistics and
Node Private Degree Distributions | 23 pages, 2 figures | null | null | null | cs.CR cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lipschitz extensions were recently proposed as a tool for designing node
differentially private algorithms. However, efficiently computable Lipschitz
extensions were known only for 1-dimensional functions (that is, functions that
output a single real value). In this paper, we study efficiently computable
Lipschitz extensions for multi-dimensional (that is, vector-valued) functions
on graphs. We show that, unlike for 1-dimensional functions, Lipschitz
extensions of higher-dimensional functions on graphs do not always exist, even
with a non-unit stretch. We design Lipschitz extensions with small stretch for
the sorted degree list and for the degree distribution of a graph. Crucially,
our extensions are efficiently computable.
We also develop new tools for employing Lipschitz extensions in the design of
differentially private algorithms. Specifically, we generalize the exponential
mechanism, a widely used tool in data privacy. The exponential mechanism is
given a collection of score functions that map datasets to real values. It
attempts to return the name of the function with nearly minimum value on the
data set. Our generalized exponential mechanism provides better accuracy when
the sensitivity of an optimal score function is much smaller than the maximum
sensitivity of score functions.
We use our Lipschitz extension and the generalized exponential mechanism to
design a node-differentially private algorithm for releasing an approximation
to the degree distribution of a graph. Our algorithm is much more accurate than
algorithms from previous work.
| [
{
"version": "v1",
"created": "Wed, 29 Apr 2015 16:08:57 GMT"
}
] | 2015-04-30T00:00:00 | [
[
"Raskhodnikova",
"Sofya",
""
],
[
"Smith",
"Adam",
""
]
] | TITLE: Efficient Lipschitz Extensions for High-Dimensional Graph Statistics and
Node Private Degree Distributions
ABSTRACT: Lipschitz extensions were recently proposed as a tool for designing node
differentially private algorithms. However, efficiently computable Lipschitz
extensions were known only for 1-dimensional functions (that is, functions that
output a single real value). In this paper, we study efficiently computable
Lipschitz extensions for multi-dimensional (that is, vector-valued) functions
on graphs. We show that, unlike for 1-dimensional functions, Lipschitz
extensions of higher-dimensional functions on graphs do not always exist, even
with a non-unit stretch. We design Lipschitz extensions with small stretch for
the sorted degree list and for the degree distribution of a graph. Crucially,
our extensions are efficiently computable.
We also develop new tools for employing Lipschitz extensions in the design of
differentially private algorithms. Specifically, we generalize the exponential
mechanism, a widely used tool in data privacy. The exponential mechanism is
given a collection of score functions that map datasets to real values. It
attempts to return the name of the function with nearly minimum value on the
data set. Our generalized exponential mechanism provides better accuracy when
the sensitivity of an optimal score function is much smaller than the maximum
sensitivity of score functions.
We use our Lipschitz extension and the generalized exponential mechanism to
design a node-differentially private algorithm for releasing an approximation
to the degree distribution of a graph. Our algorithm is much more accurate than
algorithms from previous work.
| no_new_dataset | 0.94801 |
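For reference, a sketch of the classical exponential mechanism that the preceding entry (1504.07912) generalizes, here selecting a score function of nearly minimum value; the paper's generalized variant with per-function sensitivities is not reproduced.

```python
import numpy as np

def exponential_mechanism(scores, sensitivity, epsilon, rng):
    """Sample index i with probability proportional to
    exp(-epsilon * scores[i] / (2 * sensitivity)), favoring low scores."""
    logits = -epsilon * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()                  # numerical stability
    p = np.exp(logits)
    return rng.choice(len(p), p=p / p.sum())

rng = np.random.default_rng(0)
choice = exponential_mechanism([3.2, 0.5, 4.1], sensitivity=1.0,
                               epsilon=1.0, rng=rng)
```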
1412.7272 | Maruan Al-Shedivat | Maruan Al-Shedivat, Emre Neftci and Gert Cauwenberghs | Learning Non-deterministic Representations with Energy-based Ensembles | 9 pages, 3 figures, ICLR-15 workshop contribution | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of a generative model is to capture the distribution underlying the
data, typically through latent variables. After training, these variables are
often used as a new representation, more effective than the original features
in a variety of learning tasks. However, the representations constructed by
contemporary generative models are usually point-wise deterministic mappings
from the original feature space. Thus, even with representations robust to
class-specific transformations, statistically driven models trained on them
would not be able to generalize when the labeled data is scarce. Inspired by
the stochasticity of the synaptic connections in the brain, we introduce
Energy-based Stochastic Ensembles. These ensembles can learn non-deterministic
representations, i.e., mappings from the feature space to a family of
distributions in the latent space. These mappings are encoded in a distribution
over a (possibly infinite) collection of models. By conditionally sampling
models from the ensemble, we obtain multiple representations for every input
example and effectively augment the data. We propose an algorithm similar to
contrastive divergence for training restricted Boltzmann stochastic ensembles.
Finally, we demonstrate the concept of the stochastic representations on a
synthetic dataset as well as test them in the one-shot learning scenario on
MNIST.
| [
{
"version": "v1",
"created": "Tue, 23 Dec 2014 07:06:55 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Apr 2015 10:04:49 GMT"
}
] | 2015-04-29T00:00:00 | [
[
"Al-Shedivat",
"Maruan",
""
],
[
"Neftci",
"Emre",
""
],
[
"Cauwenberghs",
"Gert",
""
]
] | TITLE: Learning Non-deterministic Representations with Energy-based Ensembles
ABSTRACT: The goal of a generative model is to capture the distribution underlying the
data, typically through latent variables. After training, these variables are
often used as a new representation, more effective than the original features
in a variety of learning tasks. However, the representations constructed by
contemporary generative models are usually point-wise deterministic mappings
from the original feature space. Thus, even with representations robust to
class-specific transformations, statistically driven models trained on them
would not be able to generalize when the labeled data is scarce. Inspired by
the stochasticity of the synaptic connections in the brain, we introduce
Energy-based Stochastic Ensembles. These ensembles can learn non-deterministic
representations, i.e., mappings from the feature space to a family of
distributions in the latent space. These mappings are encoded in a distribution
over a (possibly infinite) collection of models. By conditionally sampling
models from the ensemble, we obtain multiple representations for every input
example and effectively augment the data. We propose an algorithm similar to
contrastive divergence for training restricted Boltzmann stochastic ensembles.
Finally, we demonstrate the concept of the stochastic representations on a
synthetic dataset as well as test them in the one-shot learning scenario on
MNIST.
| no_new_dataset | 0.946646 |
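The preceding entry (1412.7272) trains its ensembles with an algorithm similar to contrastive divergence; below is a sketch of one plain CD-1 update for a single binary RBM (the building block), not the stochastic-ensemble variant.

```python
import numpy as np

def cd1_step(W, b, c, v0, lr, rng):
    """One CD-1 update for a binary RBM with weights W (n_v, n_h),
    visible biases b (n_v,) and hidden biases c (n_h,), given one
    training vector v0. Parameters are updated in place."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    h0p = sigmoid(v0 @ W + c)                         # hidden probabilities
    h0 = (rng.random(h0p.shape) < h0p).astype(float)  # hidden sample
    v1 = sigmoid(h0 @ W.T + b)                        # one-step reconstruction
    h1p = sigmoid(v1 @ W + c)
    W += lr * (np.outer(v0, h0p) - np.outer(v1, h1p))
    b += lr * (v0 - v1)
    c += lr * (h0p - h1p)
```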
1504.07235 | Ping Li | Ping Li | Sign Stable Random Projections for Large-Scale Learning | null | null | null | null | stat.ML cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the use of "sign $\alpha$-stable random projections" (where
$0<\alpha\leq 2$) for building basic data processing tools in the context of
large-scale machine learning applications (e.g., classification, regression,
clustering, and near-neighbor search). After the processing by sign stable
random projections, the inner products of the processed data approximate
various types of nonlinear kernels depending on the value of $\alpha$. Thus,
this approach provides an effective strategy for approximating nonlinear
learning algorithms essentially at the cost of linear learning. When $\alpha
=2$, it is known that the corresponding nonlinear kernel is the arc-cosine
kernel. When $\alpha=1$, the procedure approximates the arc-cos-$\chi^2$ kernel
(under certain condition). When $\alpha\rightarrow0+$, it corresponds to the
resemblance kernel.
From practitioners' perspective, the method of sign $\alpha$-stable random
projections is ready to be tested for large-scale learning applications, where
$\alpha$ can be simply viewed as a tuning parameter. What is missing in the
literature is an extensive empirical study to show the effectiveness of sign
stable random projections, especially for $\alpha\neq 2$ or 1. The paper
supplies such a study on a wide variety of classification datasets. In
particular, we compare shoulder-by-shoulder sign stable random projections with
the recently proposed "0-bit consistent weighted sampling (CWS)" (Li 2015).
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 19:50:40 GMT"
}
] | 2015-04-29T00:00:00 | [
[
"Li",
"Ping",
""
]
] | TITLE: Sign Stable Random Projections for Large-Scale Learning
ABSTRACT: We study the use of "sign $\alpha$-stable random projections" (where
$0<\alpha\leq 2$) for building basic data processing tools in the context of
large-scale machine learning applications (e.g., classification, regression,
clustering, and near-neighbor search). After the processing by sign stable
random projections, the inner products of the processed data approximate
various types of nonlinear kernels depending on the value of $\alpha$. Thus,
this approach provides an effective strategy for approximating nonlinear
learning algorithms essentially at the cost of linear learning. When $\alpha
=2$, it is known that the corresponding nonlinear kernel is the arc-cosine
kernel. When $\alpha=1$, the procedure approximates the arc-cos-$\chi^2$ kernel
(under certain condition). When $\alpha\rightarrow0+$, it corresponds to the
resemblance kernel.
From practitioners' perspective, the method of sign $\alpha$-stable random
projections is ready to be tested for large-scale learning applications, where
$\alpha$ can be simply viewed as a tuning parameter. What is missing in the
literature is an extensive empirical study to show the effectiveness of sign
stable random projections, especially for $\alpha\neq 2$ or 1. The paper
supplies such a study on a wide variety of classification datasets. In
particular, we compare shoulder-by-shoulder sign stable random projections with
the recently proposed "0-bit consistent weighted sampling (CWS)" (Li 2015).
| no_new_dataset | 0.945851 |
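A sketch of producing sign α-stable random projections as in the preceding entry (1504.07235): draw symmetric α-stable entries with the Chambers-Mallows-Stuck sampler, project, and keep only the signs. The target dimension and seeding are illustrative choices.

```python
import numpy as np

def stable_rvs(alpha, size, rng):
    """Symmetric alpha-stable samples (0 < alpha <= 2) via the
    Chambers-Mallows-Stuck construction."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

def sign_stable_projection(X, k, alpha=1.0, seed=0):
    """Project rows of X with an alpha-stable random matrix and keep
    only the signs; alpha acts as the tuning parameter of the method."""
    rng = np.random.default_rng(seed)
    R = stable_rvs(alpha, (X.shape[1], k), rng)
    return np.sign(X @ R)
```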
1504.07269 | Narapureddy Dinesh Reddy | N. Dinesh Reddy, Prateek Singhal, Visesh Chari and K. Madhava Krishna | Dynamic Body VSLAM with Semantic Constraints | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image based reconstruction of urban environments is a challenging problem
that deals with optimization of large number of variables, and has several
sources of errors like the presence of dynamic objects. Since most large scale
approaches make the assumption of observing static scenes, dynamic objects are
relegated to the noise modeling section of such systems. This is an approach of
convenience since the RANSAC-based framework used to compute most multiview
geometric quantities for static scenes naturally confines dynamic objects to the
class of outlier measurements. However, reconstructing dynamic objects along
with the static environment helps us get a complete picture of an urban
environment. Such understanding can then be used for important robotic tasks
like path planning for autonomous navigation, obstacle tracking and avoidance,
and other areas. In this paper, we propose a system for robust SLAM that works
in both static and dynamic environments. To overcome the challenge of dynamic
objects in the scene, we propose a new model to incorporate semantic
constraints into the reconstruction algorithm. While some of these constraints
are based on multi-layered dense CRFs trained over appearance as well as motion
cues, other proposed constraints can be expressed as additional terms in the
bundle adjustment optimization process that does iterative refinement of 3D
structure and camera / object motion trajectories. We show results on the
challenging KITTI urban dataset for accuracy of motion segmentation and
reconstruction of the trajectory and shape of moving objects relative to ground
truth. We are able to show average relative error reduction by a significant
amount for moving object trajectory reconstruction relative to state-of-the-art
methods like VISO 2, as well as standard bundle adjustment algorithms.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 20:30:04 GMT"
}
] | 2015-04-29T00:00:00 | [
[
"Reddy",
"N. Dinesh",
""
],
[
"Singhal",
"Prateek",
""
],
[
"Chari",
"Visesh",
""
],
[
"Krishna",
"K. Madhava",
""
]
] | TITLE: Dynamic Body VSLAM with Semantic Constraints
ABSTRACT: Image based reconstruction of urban environments is a challenging problem
that deals with optimization of large number of variables, and has several
sources of errors like the presence of dynamic objects. Since most large scale
approaches make the assumption of observing static scenes, dynamic objects are
relegated to the noise modeling section of such systems. This is an approach of
convenience since the RANSAC based framework used to compute most multiview
geometric quantities for static scenes naturally confine dynamic objects to the
class of outlier measurements. However, reconstructing dynamic objects along
with the static environment helps us get a complete picture of an urban
environment. Such understanding can then be used for important robotic tasks
like path planning for autonomous navigation, obstacle tracking and avoidance,
and other areas. In this paper, we propose a system for robust SLAM that works
in both static and dynamic environments. To overcome the challenge of dynamic
objects in the scene, we propose a new model to incorporate semantic
constraints into the reconstruction algorithm. While some of these constraints
are based on multi-layered dense CRFs trained over appearance as well as motion
cues, other proposed constraints can be expressed as additional terms in the
bundle adjustment optimization process that does iterative refinement of 3D
structure and camera / object motion trajectories. We show results on the
challenging KITTI urban dataset for accuracy of motion segmentation and
reconstruction of the trajectory and shape of moving objects relative to ground
truth. We are able to show average relative error reduction by a significant
amount for moving object trajectory reconstruction relative to state-of-the-art
methods like VISO 2, as well as standard bundle adjustment algorithms.
| no_new_dataset | 0.945951 |
1504.07459 | Marian-Andrei Rizoiu | Marian-Andrei Rizoiu, Adrien Guille and Julien Velcin | CommentWatcher: An Open Source Web-based platform for analyzing
discussions on web forums | null | null | null | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present CommentWatcher, an open source tool aimed at analyzing discussions
on web forums. Constructed as a web platform, CommentWatcher features automatic
mass fetching of user posts from forum on multiple sites, extracting topics,
visualizing the topics as an expression cloud and exploring their temporal
evolution. The underlying social network of users is simultaneously constructed
using the citation relations between users and visualized as a graph structure.
Our platform addresses the issues of the diversity and dynamics of structures
of webpages hosting the forums by implementing a parser architecture that is
independent of the HTML structure of webpages. This allows easy on-the-fly
addition of new websites. Two types of users are targeted: end users who seek to
study the discussed topics and their temporal evolution, and researchers in
need of establishing a forum benchmark dataset and comparing the performances
of analysis tools.
| [
{
"version": "v1",
"created": "Tue, 28 Apr 2015 13:18:00 GMT"
}
] | 2015-04-29T00:00:00 | [
[
"Rizoiu",
"Marian-Andrei",
""
],
[
"Guille",
"Adrien",
""
],
[
"Velcin",
"Julien",
""
]
] | TITLE: CommentWatcher: An Open Source Web-based platform for analyzing
discussions on web forums
ABSTRACT: We present CommentWatcher, an open source tool aimed at analyzing discussions
on web forums. Constructed as a web platform, CommentWatcher features automatic
mass fetching of user posts from forums on multiple sites, extracting topics,
visualizing the topics as an expression cloud and exploring their temporal
evolution. The underlying social network of users is simultaneously constructed
using the citation relations between users and visualized as a graph structure.
Our platform addresses the issues of the diversity and dynamics of structures
of webpages hosting the forums by implementing a parser architecture that is
independent of the HTML structure of webpages. This allows easy on-the-fly
addition of new websites. Two types of users are targeted: end users who seek to
study the discussed topics and their temporal evolution, and researchers in
need of establishing a forum benchmark dataset and comparing the performances
of analysis tools.
| no_new_dataset | 0.798108 |
1504.07460 | Alexander Kolesnikov | Alexander Kolesnikov and Christoph H. Lampert | Identifying Reliable Annotations for Large Scale Image Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Challenging computer vision tasks, in particular semantic image segmentation,
require large training sets of annotated images. While obtaining the actual
images is often unproblematic, creating the necessary annotation is a tedious
and costly process. Therefore, one often has to work with unreliable annotation
sources, such as Amazon Mechanical Turk or (semi-)automatic algorithmic
techniques. In this work, we present a Gaussian process (GP) based technique
for simultaneously identifying which images of a training set have unreliable
annotation and learning a segmentation model in which the negative effect of
these images is suppressed. Alternatively, the model can also just be used to
identify the most reliably annotated images from the training set, which can
then be used for training any other segmentation method. By relying on "deep
features" in combination with a linear covariance function, our GP can be
learned and its hyperparameter determined efficiently using only matrix
operations and gradient-based optimization. This makes our method scalable even
to large datasets with several million training instances.
| [
{
"version": "v1",
"created": "Tue, 28 Apr 2015 13:19:21 GMT"
}
] | 2015-04-29T00:00:00 | [
[
"Kolesnikov",
"Alexander",
""
],
[
"Lampert",
"Christoph H.",
""
]
] | TITLE: Identifying Reliable Annotations for Large Scale Image Segmentation
ABSTRACT: Challenging computer vision tasks, in particular semantic image segmentation,
require large training sets of annotated images. While obtaining the actual
images is often unproblematic, creating the necessary annotation is a tedious
and costly process. Therefore, one often has to work with unreliable annotation
sources, such as Amazon Mechanical Turk or (semi-)automatic algorithmic
techniques. In this work, we present a Gaussian process (GP) based technique
for simultaneously identifying which images of a training set have unreliable
annotation and learning a segmentation model in which the negative effect of
these images is suppressed. Alternatively, the model can also just be used to
identify the most reliably annotated images from the training set, which can
then be used for training any other segmentation method. By relying on "deep
features" in combination with a linear covariance function, our GP can be
learned and its hyperparameter determined efficiently using only matrix
operations and gradient-based optimization. This makes our method scalable even
to large datasets with several million training instances.
| no_new_dataset | 0.949295 |
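With a linear covariance on deep features, the GP above reduces to plain matrix algebra, and leave-one-out predictions flag suspect annotations in closed form. The sketch below illustrates this general idea; the scoring rule, variable names, and noise level are our own illustrative choices, not the paper's exact model:

```python
import numpy as np

def linear_gp_reliability(X, y, noise=0.1):
    """Score each training image by how well a linear-kernel GP, fit on all
    other images, agrees with that image's annotation. Low scores suggest
    unreliable labels. X: (n, d) deep features; y: (n,) labels in {-1, +1}."""
    K = X @ X.T                          # linear covariance k(x, x') = x . x'
    A_inv = np.linalg.inv(K + noise * np.eye(len(y)))
    alpha = A_inv @ y
    # Closed-form leave-one-out predictive mean and variance (GPML, eq. 5.12):
    loo_mean = y - alpha / np.diag(A_inv)
    loo_var = 1.0 / np.diag(A_inv)
    return (loo_mean * y) / np.sqrt(loo_var)   # signed agreement with label

# Toy check: labels flipped on the first 10 samples should score lowest.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
y[:10] *= -1
suspect = np.argsort(linear_gp_reliability(X, y))[:10]
```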
1504.06658 | Arvind Neelakantan | Arvind Neelakantan and Ming-Wei Chang | Inferring Missing Entity Type Instances for Knowledge Base Completion:
New Dataset and Methods | North American Chapter of the Association for Computational
Linguistics- Human Language Technologies, 2015 | null | null | null | cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most previous work in knowledge base (KB) completion has focused on the
problem of relation extraction. In this work, we focus on the task of inferring
missing entity type instances in a KB, a fundamental task for KB completion
that has received little attention. Due to the novelty of this task, we construct a
large-scale dataset and design an automatic evaluation methodology. Our
knowledge base completion method uses information within the existing KB and
external information from Wikipedia. We show that individual methods trained
with a global objective that considers unobserved cells from both the entity
and the type side give consistently higher-quality predictions compared to
baseline methods. We also perform manual evaluation on a small subset of the
data to verify the effectiveness of our knowledge base completion methods and
the correctness of our proposed automatic evaluation method.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 22:32:40 GMT"
}
] | 2015-04-28T00:00:00 | [
[
"Neelakantan",
"Arvind",
""
],
[
"Chang",
"Ming-Wei",
""
]
] | TITLE: Inferring Missing Entity Type Instances for Knowledge Base Completion:
New Dataset and Methods
ABSTRACT: Most previous work in knowledge base (KB) completion has focused on the
problem of relation extraction. In this work, we focus on the task of inferring
missing entity type instances in a KB, a fundamental task for KB completion
that has received little attention. Due to the novelty of this task, we construct a
large-scale dataset and design an automatic evaluation methodology. Our
knowledge base completion method uses information within the existing KB and
external information from Wikipedia. We show that individual methods trained
with a global objective that considers unobserved cells from both the entity
and the type side give consistently higher-quality predictions compared to
baseline methods. We also perform manual evaluation on a small subset of the
data to verify the effectiveness of our knowledge base completion methods and
the correctness of our proposed automatic evaluation method.
| new_dataset | 0.961606 |
1504.06678 | Guo-Jun Qi | Vivek Veeriah and Naifan Zhuang and Guo-Jun Qi | Differential Recurrent Neural Networks for Action Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The long short-term memory (LSTM) neural network is capable of processing
complex sequential information since it utilizes special gating schemes for
learning representations from long input sequences. It has the potential to
model any sequential time-series data, where the current hidden state has to be
considered in the context of the past hidden states. This property makes LSTM
an ideal choice to learn the complex dynamics of various actions.
Unfortunately, the conventional LSTMs do not consider the impact of
spatio-temporal dynamics corresponding to the given salient motion patterns,
when they gate the information that ought to be memorized through time. To
address this problem, we propose a differential gating scheme for the LSTM
neural network, which emphasizes the change in information gain caused by
the salient motions between the successive frames. This change in information
gain is quantified by Derivative of States (DoS), and thus the proposed LSTM
model is termed as differential Recurrent Neural Network (dRNN). We demonstrate
the effectiveness of the proposed model by automatically recognizing actions
from the real-world 2D and 3D human action datasets. Our study is one of the
first works towards demonstrating the potential of learning complex time-series
representations via high-order derivatives of states.
| [
{
"version": "v1",
"created": "Sat, 25 Apr 2015 03:59:14 GMT"
}
] | 2015-04-28T00:00:00 | [
[
"Veeriah",
"Vivek",
""
],
[
"Zhuang",
"Naifan",
""
],
[
"Qi",
"Guo-Jun",
""
]
] | TITLE: Differential Recurrent Neural Networks for Action Recognition
ABSTRACT: The long short-term memory (LSTM) neural network is capable of processing
complex sequential information since it utilizes special gating schemes for
learning representations from long input sequences. It has the potential to
model any sequential time-series data, where the current hidden state has to be
considered in the context of the past hidden states. This property makes LSTM
an ideal choice to learn the complex dynamics of various actions.
Unfortunately, the conventional LSTMs do not consider the impact of
spatio-temporal dynamics corresponding to the given salient motion patterns,
when they gate the information that ought to be memorized through time. To
address this problem, we propose a differential gating scheme for the LSTM
neural network, which emphasizes the change in information gain caused by
the salient motions between the successive frames. This change in information
gain is quantified by Derivative of States (DoS), and thus the proposed LSTM
model is termed as differential Recurrent Neural Network (dRNN). We demonstrate
the effectiveness of the proposed model by automatically recognizing actions
from the real-world 2D and 3D human action datasets. Our study is one of the
first works towards demonstrating the potential of learning complex time-series
representations via high-order derivatives of states.
| no_new_dataset | 0.944022 |
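The differential gating idea lends itself to a compact sketch: the gates of an otherwise standard LSTM cell also see the first-order Derivative of States dS = s_{t-1} - s_{t-2}, so salient motion (a large state change) modulates what is memorized. The per-gate parameter dictionaries below are our own illustrative parameterization, not the authors' exact one:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def drnn_step(x_t, h_prev, s_prev, s_prev2, W, U, D, b):
    """One step of an LSTM with first-order differential gating (a sketch of
    the dRNN idea). W, U, D, b map gate keys 'i', 'f', 'o', 'g' to weights."""
    dS = s_prev - s_prev2                              # Derivative of States
    pre = lambda k: W[k] @ x_t + U[k] @ h_prev + D[k] @ dS + b[k]
    i, f, o = sigmoid(pre('i')), sigmoid(pre('f')), sigmoid(pre('o'))
    g = np.tanh(pre('g'))                              # candidate state
    s_t = f * s_prev + i * g                           # internal (cell) state
    return o * np.tanh(s_t), s_t                       # hidden output, state

# Tiny usage with random weights (hidden size 8, input size 16):
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.normal(size=(8, 16)) for k in 'ifog'}
U = {k: 0.1 * rng.normal(size=(8, 8)) for k in 'ifog'}
D = {k: 0.1 * rng.normal(size=(8, 8)) for k in 'ifog'}
b = {k: np.zeros(8) for k in 'ifog'}
h, s = drnn_step(rng.normal(size=16), np.zeros(8), np.zeros(8), np.zeros(8),
                 W, U, D, b)
```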
1504.06825 | Patrick O. Glauner | Patrick O. Glauner | Comparison of Training Methods for Deep Neural Networks | 50 pages, 13 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report describes the difficulties of training neural networks and in
particular deep neural networks. It then provides a literature review of
training methods for deep neural networks, with a focus on pre-training. It
focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and
Stacked Autoencoders and provides an outreach on further and alternative
approaches. It also includes related practical recommendations from the
literature on training them. In the second part, initial experiments using some
of the covered methods are performed on two databases. In particular,
experiments are performed on the MNIST hand-written digit dataset and on facial
emotion data from a Kaggle competition. The results are discussed in the
context of results reported in other research papers. An error rate lower than
the best contribution to the Kaggle competition is achieved using an optimized
Stacked Autoencoder.
| [
{
"version": "v1",
"created": "Sun, 26 Apr 2015 14:09:17 GMT"
}
] | 2015-04-28T00:00:00 | [
[
"Glauner",
"Patrick O.",
""
]
] | TITLE: Comparison of Training Methods for Deep Neural Networks
ABSTRACT: This report describes the difficulties of training neural networks and in
particular deep neural networks. It then provides a literature review of
training methods for deep neural networks, with a focus on pre-training. It
focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and
Stacked Autoencoders and provides an outlook on further and alternative
approaches. It also includes related practical recommendations from the
literature on training them. In the second part, initial experiments using some
of the covered methods are performed on two databases. In particular,
experiments are performed on the MNIST hand-written digit dataset and on facial
emotion data from a Kaggle competition. The results are discussed in the
context of results reported in other research papers. An error rate lower than
the best contribution to the Kaggle competition is achieved using an optimized
Stacked Autoencoder.
| no_new_dataset | 0.950134 |
1504.06868 | Gordon Cormack | Gordon V. Cormack and Maura R. Grossman | Autonomy and Reliability of Continuous Active Learning for
Technology-Assisted Review | null | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We enhance the autonomy of the continuous active learning method shown by
Cormack and Grossman (SIGIR 2014) to be effective for technology-assisted
review, in which documents from a collection are retrieved and reviewed, using
relevance feedback, until substantially all of the relevant documents have been
reviewed. Autonomy is enhanced through the elimination of topic-specific and
dataset-specific tuning parameters, so that the sole input required by the user
is, at the outset, a short query, topic description, or single relevant
document; and, throughout the review, ongoing relevance assessments of the
retrieved documents. We show that our enhancements consistently yield superior
results to Cormack and Grossman's version of continuous active learning, and
other methods, not only on average, but on the vast majority of topics from
four separate sets of tasks: the legal datasets examined by Cormack and
Grossman, the Reuters RCV1-v2 subject categories, the TREC 6 AdHoc task, and
the construction of the TREC 2002 filtering test collection.
| [
{
"version": "v1",
"created": "Sun, 26 Apr 2015 19:19:01 GMT"
}
] | 2015-04-28T00:00:00 | [
[
"Cormack",
"Gordon V.",
""
],
[
"Grossman",
"Maura R.",
""
]
] | TITLE: Autonomy and Reliability of Continuous Active Learning for
Technology-Assisted Review
ABSTRACT: We enhance the autonomy of the continuous active learning method shown by
Cormack and Grossman (SIGIR 2014) to be effective for technology-assisted
review, in which documents from a collection are retrieved and reviewed, using
relevance feedback, until substantially all of the relevant documents have been
reviewed. Autonomy is enhanced through the elimination of topic-specific and
dataset-specific tuning parameters, so that the sole input required by the user
is, at the outset, a short query, topic description, or single relevant
document; and, throughout the review, ongoing relevance assessments of the
retrieved documents. We show that our enhancements consistently yield superior
results to Cormack and Grossman's version of continuous active learning, and
other methods, not only on average, but on the vast majority of topics from
four separate sets of tasks: the legal datasets examined by Cormack and
Grossman, the Reuters RCV1-v2 subject categories, the TREC 6 AdHoc task, and
the construction of the TREC 2002 filtering test collection.
| no_new_dataset | 0.949342 |
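The review protocol described above fits in a short loop: seed with the query, retrain on all assessments gathered so far, and route the top-scoring unreviewed documents to the reviewer. This sketch uses TF-IDF features with logistic regression and an illustrative batch size; the paper's tuned, parameter-free protocol differs in its details:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def continuous_active_learning(docs, seed_query, assess, batch=100, rounds=20):
    """Continuous active learning sketch. `docs` is a list of strings;
    `assess(i)` returns the human relevance label (0/1) for docs[i]."""
    vec = TfidfVectorizer(sublinear_tf=True)
    X = vec.fit_transform(docs + [seed_query])
    labeled = {len(docs): 1}               # the seed query acts as a relevant doc
    for _ in range(rounds):
        idx = np.fromiter(labeled.keys(), dtype=int)
        y = np.fromiter(labeled.values(), dtype=int)
        if len(set(y)) < 2:                # need both classes before fitting
            for j in np.random.choice(len(docs), 3, replace=False):
                labeled.setdefault(int(j), assess(int(j)))
            continue
        clf = LogisticRegression(max_iter=1000).fit(X[idx], y)
        scores = clf.decision_function(X[:len(docs)])
        pool = [int(i) for i in np.argsort(-scores)
                if int(i) not in labeled][:batch]
        if not pool:
            break
        for i in pool:                     # reviewer assesses the next batch
            labeled[i] = assess(i)
    return labeled
```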
1504.06993 | Chao Dong | Chao Dong and Yubin Deng and Chen Change Loy and Xiaoou Tang | Compression Artifacts Reduction by a Deep Convolutional Network | 9 pages, 12 figures, conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lossy compression introduces complex compression artifacts, particularly the
blocking artifacts, ringing effects and blurring. Existing algorithms either
focus on removing blocking artifacts and produce blurred output, or restores
sharpened images that are accompanied with ringing effects. Inspired by the
deep convolutional networks (DCN) on super-resolution, we formulate a compact
and efficient network for seamless attenuation of different compression
artifacts. We also demonstrate that a deeper model can be effectively trained
with the features learned in a shallow network. Following a similar "easy to
hard" idea, we systematically investigate several practical transfer settings
and show the effectiveness of transfer learning in low-level vision problems.
Our method shows superior performance to the state of the art both on the
benchmark datasets and on the real-world use case (i.e., Twitter). In addition, we
show that our method can be applied as pre-processing to facilitate other
low-level vision routines when they take compressed images as input.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 09:30:30 GMT"
}
] | 2015-04-28T00:00:00 | [
[
"Dong",
"Chao",
""
],
[
"Deng",
"Yubin",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Compression Artifacts Reduction by a Deep Convolutional Network
ABSTRACT: Lossy compression introduces complex compression artifacts, particularly the
blocking artifacts, ringing effects and blurring. Existing algorithms either
focus on removing blocking artifacts and produce blurred output, or restore
sharpened images that are accompanied by ringing effects. Inspired by the
deep convolutional networks (DCN) on super-resolution, we formulate a compact
and efficient network for seamless attenuation of different compression
artifacts. We also demonstrate that a deeper model can be effectively trained
with the features learned in a shallow network. Following a similar "easy to
hard" idea, we systematically investigate several practical transfer settings
and show the effectiveness of transfer learning in low-level vision problems.
Our method shows superior performance to the state of the art both on the
benchmark datasets and on the real-world use case (i.e., Twitter). In addition, we
show that our method can be applied as pre-processing to facilitate other
low-level vision routines when they take compressed images as input.
| no_new_dataset | 0.949716 |
1504.06998 | Mohammad Alaggan | Mohammad Alaggan, S\'ebastien Gambs, Anne-Marie Kermarrec | Heterogeneous Differential Privacy | 27 pages, 3 figures, presented at the first workshop on theory and
practice of differential privacy (TPDP 2015) at London, UK | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The massive collection of personal data by personalization systems has
rendered the preservation of privacy of individuals more and more difficult.
Most of the proposed approaches to preserve privacy in personalization systems
usually address this issue uniformly across users, thus ignoring the fact that
users have different privacy attitudes and expectations (even among their own
personal data). In this paper, we propose to account for this non-uniformity of
privacy expectations by introducing the concept of heterogeneous differential
privacy. This notion captures both the variation of privacy expectations among
users as well as across different pieces of information related to the same
user. We also describe an explicit mechanism achieving heterogeneous
differential privacy, which is a modification of the Laplacian mechanism by
Dwork, McSherry, Nissim, and Smith. In a nutshell, this mechanism achieves
heterogeneous differential privacy by manipulating the sensitivity of the
function using a linear transformation on the input domain. Finally, we
evaluate on real datasets the impact of the proposed mechanism with respect to
a semantic clustering task. The results of our experiments demonstrate that
heterogeneous differential privacy can account for different privacy attitudes
while sustaining a good level of utility as measured by the recall for the
semantic clustering task.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 09:35:46 GMT"
}
] | 2015-04-28T00:00:00 | [
[
"Alaggan",
"Mohammad",
""
],
[
"Gambs",
"Sébastien",
""
],
[
"Kermarrec",
"Anne-Marie",
""
]
] | TITLE: Heterogeneous Differential Privacy
ABSTRACT: The massive collection of personal data by personalization systems has
rendered the preservation of privacy of individuals more and more difficult.
Most of the proposed approaches to preserve privacy in personalization systems
usually address this issue uniformly across users, thus ignoring the fact that
users have different privacy attitudes and expectations (even among their own
personal data). In this paper, we propose to account for this non-uniformity of
privacy expectations by introducing the concept of heterogeneous differential
privacy. This notion captures both the variation of privacy expectations among
users as well as across different pieces of information related to the same
user. We also describe an explicit mechanism achieving heterogeneous
differential privacy, which is a modification of the Laplacian mechanism by
Dwork, McSherry, Nissim, and Smith. In a nutshell, this mechanism achieves
heterogeneous differential privacy by manipulating the sensitivity of the
function using a linear transformation on the input domain. Finally, we
evaluate on real datasets the impact of the proposed mechanism with respect to
a semantic clustering task. The results of our experiments demonstrate that
heterogeneous differential privacy can account for different privacy attitudes
while sustaining a good level of utility as measured by the recall for the
semantic clustering task.
| no_new_dataset | 0.949153 |
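The core of the mechanism — shrinking each record by its privacy weight via a linear transformation, then adding ordinary Laplace noise — can be sketched for a simple sum query. The weight semantics and the sensitivity bookkeeping below are an illustrative reading of the paper, not a verbatim implementation:

```python
import numpy as np

def heterogeneous_laplace_sum(x, weights, epsilon):
    """Noisy sum with per-user privacy weights. Each value x_i in [0, 1] is
    stretched by w_i in [0, 1] (w_i = 1: full epsilon; smaller w_i: stronger
    protection, since changing that record moves the sum by at most w_i).
    The Laplace scale is calibrated to the global sensitivity max_i w_i <= 1."""
    x, w = np.asarray(x, float), np.asarray(weights, float)
    stretched = (w * x).sum()                 # linear transform of the input
    return stretched + np.random.laplace(scale=1.0 / epsilon)
```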
1504.07004 | Moitreya Chatterjee | Moitreya Chatterjee and Anton Leuski | An Active Learning Based Approach For Effective Video Annotation And
Retrieval | 5 pages, 3 figures, Compressed version published at ACM ICMR 2015 | null | null | null | cs.MM cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conventional multimedia annotation/retrieval systems such as Normalized
Continuous Relevance Model (NormCRM) [16] require fully labeled training data
for good performance. Active Learning, by determining an order for labeling
the training data, allows for good performance even before the training data
is fully annotated. In this work we propose an active learning algorithm, which
combines a novel measure of sample uncertainty with a novel clustering-based
approach for determining sample density and diversity and integrate it with
NormCRM. The clusters are also iteratively refined to ensure both feature and
label-level agreement among samples. We show that our approach outperforms
multiple baselines both on a recent, open character animation dataset and on
the popular TRECVID corpus at both the tasks of annotation and text-based
retrieval of videos.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 09:44:30 GMT"
}
] | 2015-04-28T00:00:00 | [
[
"Chatterjee",
"Moitreya",
""
],
[
"Leuski",
"Anton",
""
]
] | TITLE: An Active Learning Based Approach For Effective Video Annotation And
Retrieval
ABSTRACT: Conventional multimedia annotation/retrieval systems such as Normalized
Continuous Relevance Model (NormCRM) [16] require fully labeled training data
for good performance. Active Learning, by determining an order for labeling
the training data, allows for good performance even before the training data
is fully annotated. In this work we propose an active learning algorithm, which
combines a novel measure of sample uncertainty with a novel clustering-based
approach for determining sample density and diversity and integrate it with
NormCRM. The clusters are also iteratively refined to ensure both feature and
label-level agreement among samples. We show that our approach outperforms
multiple baselines both on a recent, open character animation dataset and on
the popular TRECVID corpus at both the tasks of annotation and text-based
retrieval of videos.
| no_new_dataset | 0.9455 |
1504.07082 | Bharathi Pilar | B.H.Shekar, Bharathi Pilar | Shape Representation and Classification through Pattern Spectrum and
Local Binary Pattern - A Decision Level Fusion Approach | Fifth International Conference on Signals and Image Processing
(ICSIP) 2014 | null | 10.1109/ICSIP.2014.41 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a decision level fused local Morphological Pattern
Spectrum (PS) and Local Binary Pattern (LBP) approach for efficient shape
representation and classification. This method makes use of the Earth Mover's
Distance (EMD) as the measure in the feature matching and shape retrieval
process. The proposed approach has three major phases: Feature Extraction,
Construction of a hybrid spectrum knowledge base, and Classification. In the
first phase, feature extraction of the shape is done using the pattern spectrum
and the local binary pattern method. In the second phase, the histograms of both
the pattern spectrum and the local binary pattern are fused and stored in the
knowledge base. In the third phase, the comparison and matching of the features,
which are represented in the form of histograms, is done using the Earth Mover's
Distance (EMD) as the metric. The top-n shapes are retrieved for each query
shape. The accuracy is tested by means of the standard bull's eye score method.
The experiments are conducted on publicly available shape datasets like
Kimia-99, Kimia-216 and MPEG-7. A comparative study with well-known approaches
is also provided to exhibit the retrieval accuracy of the proposed approach.
| [
{
"version": "v1",
"created": "Mon, 27 Apr 2015 13:38:20 GMT"
}
] | 2015-04-28T00:00:00 | [
[
"Shekar",
"B. H.",
""
],
[
"Pilar",
"Bharathi",
""
]
] | TITLE: Shape Representation and Classification through Pattern Spectrum and
Local Binary Pattern - A Decision Level Fusion Approach
ABSTRACT: In this paper, we present a decision level fused local Morphological Pattern
Spectrum (PS) and Local Binary Pattern (LBP) approach for efficient shape
representation and classification. This method makes use of the Earth Mover's
Distance (EMD) as the measure in the feature matching and shape retrieval
process. The proposed approach has three major phases: Feature Extraction,
Construction of a hybrid spectrum knowledge base, and Classification. In the
first phase, feature extraction of the shape is done using the pattern spectrum
and the local binary pattern method. In the second phase, the histograms of both
the pattern spectrum and the local binary pattern are fused and stored in the
knowledge base. In the third phase, the comparison and matching of the features,
which are represented in the form of histograms, is done using the Earth Mover's
Distance (EMD) as the metric. The top-n shapes are retrieved for each query
shape. The accuracy is tested by means of the standard bull's eye score method.
The experiments are conducted on publicly available shape datasets like
Kimia-99, Kimia-216 and MPEG-7. A comparative study with well-known approaches
is also provided to exhibit the retrieval accuracy of the proposed approach.
| no_new_dataset | 0.953362 |
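Since the descriptors are histograms compared under the Earth Mover's Distance, retrieval reduces to a ranking by 1-D EMD, which scipy computes directly. The concatenation below is one simple way to combine the two histograms and stands in for the paper's decision-level fusion:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def fused_descriptor(ps_hist, lbp_hist):
    """Normalize and concatenate the pattern-spectrum and LBP histograms."""
    ps = ps_hist / max(ps_hist.sum(), 1e-12)
    lbp = lbp_hist / max(lbp_hist.sum(), 1e-12)
    return np.concatenate([ps, lbp])

def emd_retrieve(query, database, top_n=10):
    """Rank database descriptors by 1-D EMD; wasserstein_distance with bin
    positions as values and histograms as weights is exactly this distance."""
    bins = np.arange(len(query))
    dists = [wasserstein_distance(bins, bins, query, d) for d in database]
    return np.argsort(dists)[:top_n]
```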
1411.6228 | Pedro O. Pinheiro | Pedro O. Pinheiro and Ronan Collobert | From Image-level to Pixel-level Labeling with Convolutional Networks | CVPR2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are interested in inferring object segmentation by leveraging only object
class information, and by considering only minimal priors on the object
segmentation task. This problem could be viewed as a kind of weakly supervised
segmentation task, and naturally fits the Multiple Instance Learning (MIL)
framework: every training image is known to have (or not) at least one pixel
corresponding to the image class label, and the segmentation task can be
rewritten as inferring the pixels belonging to the class of the object (given
one image, and its object class). We propose a Convolutional Neural
Network-based model, which is constrained during training to put more weight on
pixels which are important for classifying the image. We show that at test
time, the model has learned to discriminate the right pixels well enough, such
that it performs very well on an existing segmentation benchmark, by adding
only a few smoothing priors. Our system is trained using a subset of the Imagenet
dataset and the segmentation experiments are performed on the challenging
Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model
beats the state-of-the-art results in the weakly supervised object segmentation
task by a large margin. We also compare the performance of our model with state
of the art fully-supervised segmentation approaches.
| [
{
"version": "v1",
"created": "Sun, 23 Nov 2014 12:06:36 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jan 2015 13:11:43 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Apr 2015 07:26:01 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Pinheiro",
"Pedro O.",
""
],
[
"Collobert",
"Ronan",
""
]
] | TITLE: From Image-level to Pixel-level Labeling with Convolutional Networks
ABSTRACT: We are interested in inferring object segmentation by leveraging only object
class information, and by considering only minimal priors on the object
segmentation task. This problem could be viewed as a kind of weakly supervised
segmentation task, and naturally fits the Multiple Instance Learning (MIL)
framework: every training image is known to have (or not) at least one pixel
corresponding to the image class label, and the segmentation task can be
rewritten as inferring the pixels belonging to the class of the object (given
one image, and its object class). We propose a Convolutional Neural
Network-based model, which is constrained during training to put more weight on
pixels which are important for classifying the image. We show that at test
time, the model has learned to discriminate the right pixels well enough, such
that it performs very well on an existing segmentation benchmark, by adding
only a few smoothing priors. Our system is trained using a subset of the Imagenet
dataset and the segmentation experiments are performed on the challenging
Pascal VOC dataset (with no fine-tuning of the model on Pascal VOC). Our model
beats the state-of-the-art results in the weakly supervised object segmentation
task by a large margin. We also compare the performance of our model with state
of the art fully-supervised segmentation approaches.
| no_new_dataset | 0.946843 |
1411.7883 | Luca Del Pero | Luca Del Pero, Susanna Ricco, Rahul Sukthankar, Vittorio Ferrari | Articulated motion discovery using pairs of trajectories | 10 pages, 5 figures, 2 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an unsupervised approach for discovering characteristic motion
patterns in videos of highly articulated objects performing natural, unscripted
behaviors, such as tigers in the wild. We discover consistent patterns in a
bottom-up manner by analyzing the relative displacements of large numbers of
ordered trajectory pairs through time, such that each trajectory is attached to
a different moving part on the object. The pairs of trajectories descriptor
relies entirely on motion and is more discriminative than state-of-the-art
features that employ single trajectories. Our method generates temporal video
intervals, each automatically trimmed to one instance of the discovered
behavior, and clusters them by type (e.g., running, turning head, drinking
water). We present experiments on two datasets: dogs from YouTube-Objects and a
new dataset of National Geographic tiger videos. Results confirm that our
proposed descriptor outperforms existing appearance- and trajectory-based
descriptors (e.g., HOG and DTFs) on both datasets and enables us to segment
unconstrained animal video into intervals containing single behaviors.
| [
{
"version": "v1",
"created": "Fri, 28 Nov 2014 14:43:03 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Dec 2014 13:56:07 GMT"
},
{
"version": "v3",
"created": "Fri, 24 Apr 2015 15:29:06 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Del Pero",
"Luca",
""
],
[
"Ricco",
"Susanna",
""
],
[
"Sukthankar",
"Rahul",
""
],
[
"Ferrari",
"Vittorio",
""
]
] | TITLE: Articulated motion discovery using pairs of trajectories
ABSTRACT: We propose an unsupervised approach for discovering characteristic motion
patterns in videos of highly articulated objects performing natural, unscripted
behaviors, such as tigers in the wild. We discover consistent patterns in a
bottom-up manner by analyzing the relative displacements of large numbers of
ordered trajectory pairs through time, such that each trajectory is attached to
a different moving part on the object. The pairs of trajectories descriptor
relies entirely on motion and is more discriminative than state-of-the-art
features that employ single trajectories. Our method generates temporal video
intervals, each automatically trimmed to one instance of the discovered
behavior, and clusters them by type (e.g., running, turning head, drinking
water). We present experiments on two datasets: dogs from YouTube-Objects and a
new dataset of National Geographic tiger videos. Results confirm that our
proposed descriptor outperforms existing appearance- and trajectory-based
descriptors (e.g., HOG and DTFs) on both datasets and enables us to segment
unconstrained animal video into intervals containing single behaviors.
| new_dataset | 0.95803 |
1501.06783 | Cl\'ement Canonne | Cl\'ement L. Canonne | Big Data on the Rise: Testing monotonicity of distributions | null | null | null | null | cs.DS cs.DM math.PR math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of property testing of probability distributions, or distribution
testing, aims to provide fast and (most likely) correct answers to questions
pertaining to specific aspects of very large datasets. In this work, we
consider a property of particular interest, monotonicity of distributions. We
focus on the complexity of monotonicity testing across different models of
access to the distributions; and obtain results in these new settings that
differ significantly from the known bounds in the standard sampling model.
| [
{
"version": "v1",
"created": "Tue, 27 Jan 2015 15:02:35 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Apr 2015 20:58:39 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Canonne",
"Clément L.",
""
]
] | TITLE: Big Data on the Rise: Testing monotonicity of distributions
ABSTRACT: The field of property testing of probability distributions, or distribution
testing, aims to provide fast and (most likely) correct answers to questions
pertaining to specific aspects of very large datasets. In this work, we
consider a property of particular interest, monotonicity of distributions. We
focus on the complexity of monotonicity testing across different models of
access to the distributions; and obtain results in these new settings that
differ significantly from the known bounds in the standard sampling model.
| no_new_dataset | 0.948394 |
1503.00783 | Davide Modolo | Davide Modolo, Alexander Vezhnevets, Olga Russakovsky, Vittorio
Ferrari | Joint calibration of Ensemble of Exemplar SVMs | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for calibrating the Ensemble of Exemplar SVMs model.
Unlike the standard approach, which calibrates each SVM independently, our
method optimizes their joint performance as an ensemble. We formulate joint
calibration as a constrained optimization problem and devise an efficient
optimization algorithm to find its global optimum. The algorithm dynamically
discards parts of the solution space that cannot contain the optimum early on,
making the optimization computationally feasible. We experiment with EE-SVM
trained on state-of-the-art CNN descriptors. Results on the ILSVRC 2014 and
PASCAL VOC 2007 datasets show that (i) our joint calibration procedure
outperforms independent calibration on the task of classifying windows as
belonging to an object class or not; and (ii) this improved window classifier
leads to better performance on the object detection task.
| [
{
"version": "v1",
"created": "Mon, 2 Mar 2015 23:59:50 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Apr 2015 16:42:51 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Modolo",
"Davide",
""
],
[
"Vezhnevets",
"Alexander",
""
],
[
"Russakovsky",
"Olga",
""
],
[
"Ferrari",
"Vittorio",
""
]
] | TITLE: Joint calibration of Ensemble of Exemplar SVMs
ABSTRACT: We present a method for calibrating the Ensemble of Exemplar SVMs model.
Unlike the standard approach, which calibrates each SVM independently, our
method optimizes their joint performance as an ensemble. We formulate joint
calibration as a constrained optimization problem and devise an efficient
optimization algorithm to find its global optimum. The algorithm dynamically
discards parts of the solution space that cannot contain the optimum early on,
making the optimization computationally feasible. We experiment with EE-SVM
trained on state-of-the-art CNN descriptors. Results on the ILSVRC 2014 and
PASCAL VOC 2007 datasets show that (i) our joint calibration procedure
outperforms independent calibration on the task of classifying windows as
belonging to an object class or not; and (ii) this improved window classifier
leads to better performance on the object detection task.
| no_new_dataset | 0.949576 |
1504.06394 | Jing Wang | Jing Wang and Jie Shen and Huan Xu | Social Trust Prediction via Max-norm Constrained 1-bit Matrix Completion | null | null | null | null | cs.SI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social trust prediction addresses the significant problem of exploring
interactions among users in social networks. Naturally, this problem can be
formulated in the matrix completion framework, with each entry indicating the
trustness or distrustness. However, there are two challenges for the social
trust problem: 1) the observed data come as sign (1-bit) measurements; 2) they
are typically sampled non-uniformly. Most of the previous matrix completion
methods do not handle these two issues well. Motivated by the recent progress of
max-norm, we propose to solve the problem with a 1-bit max-norm constrained
formulation. Since max-norm is not easy to optimize, we utilize a reformulation
of max-norm which facilitates an efficient projected gradient descent algorithm.
We demonstrate the superiority of our formulation on two benchmark datasets.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 05:01:12 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Wang",
"Jing",
""
],
[
"Shen",
"Jie",
""
],
[
"Xu",
"Huan",
""
]
] | TITLE: Social Trust Prediction via Max-norm Constrained 1-bit Matrix Completion
ABSTRACT: Social trust prediction addresses the significant problem of exploring
interactions among users in social networks. Naturally, this problem can be
formulated in the matrix completion framework, with each entry indicating the
trustness or distrustness. However, there are two challenges for the social
trust problem: 1) the observed data come as sign (1-bit) measurements; 2) they
are typically sampled non-uniformly. Most of the previous matrix completion
methods do not handle these two issues well. Motivated by the recent progress of
max-norm, we propose to solve the problem with a 1-bit max-norm constrained
formulation. Since max-norm is not easy to optimize, we utilize a reformulation
of max-norm which facilitates an efficient projected gradient descent algorithm.
We demonstrate the superiority of our formulation on two benchmark datasets.
| no_new_dataset | 0.944382 |
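The max-norm reformulation that makes projected gradient descent workable is the usual factorization trick: with X = U V^T and every row of U and V kept inside a ball of radius sqrt(R), the max-norm of X is at most R, and the projection is a per-row rescaling. The sketch below fits observed +/-1 entries with a logistic loss; all hyperparameters are illustrative, not the paper's:

```python
import numpy as np

def maxnorm_1bit_complete(M, mask, R=1.0, rank=10, lr=0.05, iters=500):
    """1-bit matrix completion under a max-norm constraint (sketch).
    M: matrix with +/-1 in observed cells; mask: boolean observation mask."""
    n, m = M.shape
    rng = np.random.default_rng(0)
    U = 0.1 * rng.normal(size=(n, rank))
    V = 0.1 * rng.normal(size=(m, rank))
    rows, cols = np.nonzero(mask)
    s = M[rows, cols]                              # observed signs
    for _ in range(iters):
        z = np.einsum('ij,ij->i', U[rows], V[cols])
        g = -s / (1.0 + np.exp(s * z))             # logistic loss gradient in z
        gU, gV = np.zeros_like(U), np.zeros_like(V)
        np.add.at(gU, rows, g[:, None] * V[cols])
        np.add.at(gV, cols, g[:, None] * U[rows])
        U -= lr * gU
        V -= lr * gV
        for F in (U, V):                           # project: rescale long rows
            scale = np.maximum(
                np.linalg.norm(F, axis=1, keepdims=True) / np.sqrt(R), 1.0)
            F /= scale
    return U @ V.T
```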
1504.06434 | Jasper Uijlings | Jasper Uijlings and Vittorio Ferrari | Situational Object Boundary Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intuitively, the appearance of true object boundaries varies from image to
image. Hence the usual monolithic approach of training a single boundary
predictor and applying it to all images regardless of their content is bound to
be suboptimal. In this paper we therefore propose situational object boundary
detection: We first define a variety of situations and train a specialized
object boundary detector for each of them using [Dollar and Zitnick 2013]. Then
given a test image, we classify it into these situations using its context,
which we model by global image appearance. We apply the corresponding
situational object boundary detectors, and fuse them based on the
classification probabilities. In experiments on ImageNet, Microsoft COCO, and
Pascal VOC 2012 segmentation we show that our situational object boundary
detection gives significant improvements over a monolithic approach.
Additionally, our method substantially outperforms [Hariharan et al. 2011] on
semantic contour detection on their SBD dataset.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 09:15:33 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Uijlings",
"Jasper",
""
],
[
"Ferrari",
"Vittorio",
""
]
] | TITLE: Situational Object Boundary Detection
ABSTRACT: Intuitively, the appearance of true object boundaries varies from image to
image. Hence the usual monolithic approach of training a single boundary
predictor and applying it to all images regardless of their content is bound to
be suboptimal. In this paper we therefore propose situational object boundary
detection: We first define a variety of situations and train a specialized
object boundary detector for each of them using [Dollar and Zitnick 2013]. Then
given a test image, we classify it into these situations using its context,
which we model by global image appearance. We apply the corresponding
situational object boundary detectors, and fuse them based on the
classification probabilities. In experiments on ImageNet, Microsoft COCO, and
Pascal VOC 2012 segmentation we show that our situational object boundary
detection gives significant improvements over a monolithic approach.
Additionally, our method substantially outperforms [Hariharan et al. 2011] on
semantic contour detection on their SBD dataset.
| no_new_dataset | 0.948202 |
1504.06464 | Tega Edo | Tega Boro Edo | The role of the Wigner distribution function in iterative ptychography | null | null | null | null | physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ptychography employs a set of diffraction patterns that capture redundant
information about an illuminated specimen as a localized beam is moved over the
specimen. The robustness of this method comes from the redundancy of the
dataset that in turn depends on the amount of oversampling and the form of the
illumination. Although the role of oversampling in ptychography is fairly well
understood, the same cannot be said of the illumination structure. This paper
provides a vector space model of ptychography that accounts for the
illumination structure in a way that highlights the role of the Wigner
distribution function in iterative ptychography.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 10:43:35 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Edo",
"Tega Boro",
""
]
] | TITLE: The role of the Wigner distribution function in iterative ptychography
ABSTRACT: Ptychography employs a set of diffraction patterns that capture redundant
information about an illuminated specimen as a localized beam is moved over the
specimen. The robustness of this method comes from the redundancy of the
dataset that in turn depends on the amount of oversampling and the form of the
illumination. Although the role of oversampling in ptychography is fairly well
understood, the same cannot be said of the illumination structure. This paper
provides a vector space model of ptychography that accounts for the
illumination structure in a way that highlights the role of the Wigner
distribution function in iterative ptychography.
| no_new_dataset | 0.952086 |
1504.06494 | Konstantinos Georgatzis | Konstantinos Georgatzis, Christopher K. I. Williams | Discriminative Switching Linear Dynamical Systems applied to
Physiological Condition Monitoring | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a Discriminative Switching Linear Dynamical System (DSLDS) applied
to patient monitoring in Intensive Care Units (ICUs). Our approach is based on
identifying the state-of-health of a patient given their observed vital signs
using a discriminative classifier, and then inferring their underlying
physiological values conditioned on this status. The work builds on the
Factorial Switching Linear Dynamical System (FSLDS) (Quinn et al., 2009) which
has been previously used in a similar setting. The FSLDS is a generative model,
whereas the DSLDS is a discriminative model. We demonstrate on two real-world
datasets that the DSLDS is able to outperform the FSLDS in most cases of
interest, and that an $\alpha$-mixture of the two models achieves higher
performance than either of the two models separately.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 13:23:40 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Georgatzis",
"Konstantinos",
""
],
[
"Williams",
"Christopher K. I.",
""
]
] | TITLE: Discriminative Switching Linear Dynamical Systems applied to
Physiological Condition Monitoring
ABSTRACT: We present a Discriminative Switching Linear Dynamical System (DSLDS) applied
to patient monitoring in Intensive Care Units (ICUs). Our approach is based on
identifying the state-of-health of a patient given their observed vital signs
using a discriminative classifier, and then inferring their underlying
physiological values conditioned on this status. The work builds on the
Factorial Switching Linear Dynamical System (FSLDS) (Quinn et al., 2009) which
has been previously used in a similar setting. The FSLDS is a generative model,
whereas the DSLDS is a discriminative model. We demonstrate on two real-world
datasets that the DSLDS is able to outperform the FSLDS in most cases of
interest, and that an $\alpha$-mixture of the two models achieves higher
performance than either of the two models separately.
| no_new_dataset | 0.951142 |
1504.06587 | Dinesh Reddy Narapureddy | N. Dinesh Reddy, Prateek Singhal, K. Madhava Krishna | Semantic Motion Segmentation Using Dense CRF Formulation | null | null | 10.1145/2683483.2683539 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the literature has been fairly dense in the areas of scene
understanding and semantic labeling, there have been few works that make use of
motion cues to embellish semantic performance and vice versa. In this paper, we
address the problem of semantic motion segmentation, and show how semantic and
motion priors augment performance. We propose an algorithm that jointly
infers the semantic class and motion labels of an object. Integrating semantic,
geometric and optical flow based constraints into a dense CRF model, we infer
both the object class and the motion class for each pixel. We found an
improvement in performance using a fully connected CRF as compared to standard
clique-based CRFs. For inference, we use a Mean Field approximation based
algorithm. Our method outperforms recently proposed motion detection
algorithms and also improves the semantic labeling compared to the
state-of-the-art Automatic Labeling Environment algorithm on the challenging
KITTI dataset, especially for object classes such as pedestrians and cars that
are critical to an outdoor robotic navigation scenario.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 18:06:50 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Reddy",
"N. Dinesh",
""
],
[
"Singhal",
"Prateek",
""
],
[
"Krishna",
"K. Madhava",
""
]
] | TITLE: Semantic Motion Segmentation Using Dense CRF Formulation
ABSTRACT: While the literature has been fairly dense in the areas of scene
understanding and semantic labeling, there have been few works that make use of
motion cues to embellish semantic performance and vice versa. In this paper, we
address the problem of semantic motion segmentation, and show how semantic and
motion priors augment performance. We propose an algorithm that jointly
infers the semantic class and motion labels of an object. Integrating semantic,
geometric and optical flow based constraints into a dense CRF model, we infer
both the object class and the motion class for each pixel. We found an
improvement in performance using a fully connected CRF as compared to standard
clique-based CRFs. For inference, we use a Mean Field approximation based
algorithm. Our method outperforms recently proposed motion detection
algorithms and also improves the semantic labeling compared to the
state-of-the-art Automatic Labeling Environment algorithm on the challenging
KITTI dataset, especially for object classes such as pedestrians and cars that
are critical to an outdoor robotic navigation scenario.
| no_new_dataset | 0.948489 |
1504.06591 | Konda Reddy Mopuri | Konda Reddy Mopuri and R. Venkatesh Babu | Object Level Deep Feature Pooling for Compact Image Representation | Deep Vision 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Network (CNN) features have been successfully employed
in recent works as an image descriptor for various vision tasks. But the
inability of the deep CNN features to exhibit invariance to geometric
transformations and object compositions poses a great challenge for image
search. In this work, we demonstrate the effectiveness of the objectness prior
over the deep CNN features of image regions for obtaining an invariant image
representation. The proposed approach represents the image as a vector of
pooled CNN features describing the underlying objects. This representation
provides robustness to spatial layout of the objects in the scene and achieves
invariance to general geometric transformations, such as translation, rotation
and scaling. The proposed approach also leads to a compact representation of
the scene, making each image occupy a smaller memory footprint. Experiments
show that the proposed representation achieves state of the art retrieval
results on a set of challenging benchmark image datasets, while maintaining a
compact representation.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 18:27:25 GMT"
}
] | 2015-04-27T00:00:00 | [
[
"Mopuri",
"Konda Reddy",
""
],
[
"Babu",
"R. Venkatesh",
""
]
] | TITLE: Object Level Deep Feature Pooling for Compact Image Representation
ABSTRACT: Convolutional Neural Network (CNN) features have been successfully employed
in recent works as an image descriptor for various vision tasks. But the
inability of the deep CNN features to exhibit invariance to geometric
transformations and object compositions poses a great challenge for image
search. In this work, we demonstrate the effectiveness of the objectness prior
over the deep CNN features of image regions for obtaining an invariant image
representation. The proposed approach represents the image as a vector of
pooled CNN features describing the underlying objects. This representation
provides robustness to spatial layout of the objects in the scene and achieves
invariance to general geometric transformations, such as translation, rotation
and scaling. The proposed approach also leads to a compact representation of
the scene, making each image occupy a smaller memory footprint. Experiments
show that the proposed representation achieves state of the art retrieval
results on a set of challenging benchmark image datasets, while maintaining a
compact representation.
| no_new_dataset | 0.950134 |
1504.05277 | Jianxin Wu | Bin-Bin Gao and Xiu-Shen Wei and Jianxin Wu and Weiyao Lin | Deep Spatial Pyramid: The Devil is Once Again in the Details | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we show that by carefully making good choices for various
detailed but important factors in a visual recognition framework using deep
learning features, one can achieve a simple, efficient, yet highly accurate
image classification system. We first list 5 important factors, based on both
existing research and ideas proposed in this paper. These important detailed
factors include: 1) $\ell_2$ matrix normalization is more effective than
unnormalized or $\ell_2$ vector normalization, 2) the proposed natural deep
spatial pyramid is very effective, and 3) a very small $K$ in Fisher Vectors
surprisingly achieves higher accuracy than normally used large $K$ values.
Along with other choices (convolutional activations and multiple scales), the
proposed DSP framework is not only intuitive and efficient, but also achieves
excellent classification accuracy on many benchmark datasets. For example,
DSP's accuracy on SUN397 is 59.78%, significantly higher than previous
state-of-the-art (53.86%).
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2015 02:13:44 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Apr 2015 02:20:26 GMT"
}
] | 2015-04-24T00:00:00 | [
[
"Gao",
"Bin-Bin",
""
],
[
"Wei",
"Xiu-Shen",
""
],
[
"Wu",
"Jianxin",
""
],
[
"Lin",
"Weiyao",
""
]
] | TITLE: Deep Spatial Pyramid: The Devil is Once Again in the Details
ABSTRACT: In this paper we show that by carefully making good choices for various
detailed but important factors in a visual recognition framework using deep
learning features, one can achieve a simple, efficient, yet highly accurate
image classification system. We first list 5 important factors, based on both
existing research and ideas proposed in this paper. These important detailed
factors include: 1) $\ell_2$ matrix normalization is more effective than
unnormalized or $\ell_2$ vector normalization, 2) the proposed natural deep
spatial pyramid is very effective, and 3) a very small $K$ in Fisher Vectors
surprisingly achieves higher accuracy than normally used large $K$ values.
Along with other choices (convolutional activations and multiple scales), the
proposed DSP framework is not only intuitive and efficient, but also achieves
excellent classification accuracy on many benchmark datasets. For example,
DSP's accuracy on SUN397 is 59.78%, significantly higher than previous
state-of-the-art (53.86%).
| no_new_dataset | 0.949809 |
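Factor 1) hinges on the difference between normalizing a feature matrix by its matrix 2-norm and flattening it and normalizing by the Euclidean norm. Our reading of "ℓ2 matrix normalization" as division by the largest singular value is an assumption; the sketch only contrasts the two operations:

```python
import numpy as np

def l2_matrix_normalize(F):
    """Divide a d x K feature matrix by its matrix 2-norm (largest singular
    value) -- our reading of 'l2 matrix normalization', stated as an
    assumption rather than the paper's definition."""
    return F / max(np.linalg.norm(F, ord=2), 1e-12)

def l2_vector_normalize(F):
    """Baseline: flatten and divide by the Euclidean (Frobenius) norm."""
    return F / max(np.linalg.norm(F), 1e-12)
```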
1504.05997 | Dong Su | Dong Su, Jianneng Cao, Ninghui Li | Differentially Private Projected Histograms of Multi-Attribute Data for
Classification | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/3.0/ | In this paper, we tackle the problem of constructing a differentially private
synopsis for classification analyses. Several state-of-the-art methods
follow the structure of existing classification algorithms and are all
iterative, which is suboptimal due to the locally optimal choices and the
over-divided privacy budget among many sequentially composed steps. Instead, we
propose a new approach, PrivPfC, a differentially private method for
releasing data for classification. The key idea is to privately select an
optimal partition of the underlying dataset using the given privacy budget in
one step. Given one dataset and the privacy budget, PrivPfC constructs a pool
of candidate grids where the number of cells of each grid is under a data-aware
and privacy-budget-aware threshold. After that, PrivPfC selects an optimal grid
via the exponential mechanism by using a novel quality function which minimizes
the expected number of misclassified records on which a histogram classifier is
constructed using the published grid. Finally, PrivPfC injects noise into each
cell of the selected grid and releases the noisy grid as the private synopsis
of the data. If the size of the candidate grid pool is larger than the
processing capability threshold set by the data curator, we add a step in the
beginning of PrivPfC to prune the set of attributes privately. We introduce a
modified $\chi^2$ quality function with low sensitivity and use it to evaluate
an attribute's relevance to the classification label variable. Through
extensive experiments on real datasets, we demonstrate PrivPfC's superiority
over the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 22 Apr 2015 22:20:26 GMT"
}
] | 2015-04-24T00:00:00 | [
[
"Su",
"Dong",
""
],
[
"Cao",
"Jianneng",
""
],
[
"Li",
"Ninghui",
""
]
] | TITLE: Differentially Private Projected Histograms of Multi-Attribute Data for
Classification
ABSTRACT: In this paper, we tackle the problem of constructing a differentially private
synopsis for the classification analyses. Several the state-of-the-art methods
follow the structure of existing classification algorithms and are all
iterative, which is suboptimal due to the locally optimal choices and the
over-divided privacy budget among many sequentially composed steps. Instead, we
propose a new approach, PrivPfC, a differentially private method for
releasing data for classification. The key idea is to privately select an
optimal partition of the underlying dataset using the given privacy budget in
one step. Given one dataset and the privacy budget, PrivPfC constructs a pool
of candidate grids where the number of cells of each grid is under a data-aware
and privacy-budget-aware threshold. After that, PrivPfC selects an optimal grid
via the exponential mechanism by using a novel quality function which minimizes
the expected number of records misclassified by a histogram classifier
constructed over the published grid. Finally, PrivPfC injects noise into each
cell of the selected grid and releases the noisy grid as the private synopsis
of the data. If the size of the candidate grid pool is larger than the
processing capability threshold set by the data curator, we add a step in the
beginning of PrivPfC to prune the set of attributes privately. We introduce a
modified $\chi^2$ quality function with low sensitivity and use it to evaluate
an attribute's relevance to the classification label variable. Through
extensive experiments on real datasets, we demonstrate PrivPfC's superiority
over the state-of-the-art methods.
| no_new_dataset | 0.946695 |
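A minimal sketch of the two private primitives the PrivPfC abstract relies on: grid selection via the exponential mechanism and Laplace perturbation of the selected grid's cell counts. The quality function, its sensitivity of 1, and the budget handling are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def exponential_mechanism(grids, quality, eps, sensitivity=1.0):
    # Pick one candidate grid with probability proportional to
    # exp(eps * q(grid) / (2 * sensitivity)) -- the exponential mechanism.
    scores = np.array([quality(g) for g in grids], dtype=float)
    scores -= scores.max()  # shift for numerical stability (probs unchanged)
    probs = np.exp(eps * scores / (2.0 * sensitivity))
    probs /= probs.sum()
    return grids[np.random.choice(len(grids), p=probs)]

def release_noisy_grid(cell_counts, eps):
    # Perturb each cell count with Laplace noise of scale 1/eps
    # (sensitivity 1: one record falls in exactly one cell).
    return cell_counts + np.random.laplace(0.0, 1.0 / eps, size=cell_counts.shape)
```

In PrivPfC the quality of a grid is tied to the expected misclassifications of a histogram classifier built on it; here `quality` is left abstract.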
1504.05998 | Dong Su | Dong Su, Jianneng Cao, Ninghui Li, Elisa Bertino, Hongxia Jin | Differentially Private $k$-Means Clustering | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/3.0/ | There are two broad approaches for differentially private data analysis. The
interactive approach aims at developing customized differentially private
algorithms for various data mining tasks. The non-interactive approach aims at
developing differentially private algorithms that can output a synopsis of the
input dataset, which can then be used to support various data mining tasks. In
this paper we study the tradeoff of interactive vs. non-interactive approaches
and propose a hybrid approach that combines the interactive and non-interactive approaches,
using $k$-means clustering as an example. In the hybrid approach to
differentially private $k$-means clustering, one first uses a non-interactive
mechanism to publish a synopsis of the input dataset, then applies the standard
$k$-means clustering algorithm to learn $k$ cluster centroids, and finally uses
an interactive approach to further improve these cluster centroids. We analyze
the error behavior of both non-interactive and interactive approaches and use
such analysis to decide how to allocate privacy budget between the
non-interactive step and the interactive step. Results from extensive
experiments support our analysis and demonstrate the effectiveness of our
approach.
| [
{
"version": "v1",
"created": "Wed, 22 Apr 2015 22:21:30 GMT"
}
] | 2015-04-24T00:00:00 | [
[
"Su",
"Dong",
""
],
[
"Cao",
"Jianneng",
""
],
[
"Li",
"Ninghui",
""
],
[
"Bertino",
"Elisa",
""
],
[
"Jin",
"Hongxia",
""
]
] | TITLE: Differentially Private $k$-Means Clustering
ABSTRACT: There are two broad approaches for differentially private data analysis. The
interactive approach aims at developing customized differentially private
algorithms for various data mining tasks. The non-interactive approach aims at
developing differentially private algorithms that can output a synopsis of the
input dataset, which can then be used to support various data mining tasks. In
this paper we study the tradeoff of interactive vs. non-interactive approaches
and propose a hybrid approach that combines the interactive and non-interactive approaches,
using $k$-means clustering as an example. In the hybrid approach to
differentially private $k$-means clustering, one first uses a non-interactive
mechanism to publish a synopsis of the input dataset, then applies the standard
$k$-means clustering algorithm to learn $k$ cluster centroids, and finally uses
an interactive approach to further improve these cluster centroids. We analyze
the error behavior of both non-interactive and interactive approaches and use
such analysis to decide how to allocate privacy budget between the
non-interactive step and the interactive step. Results from extensive
experiments support our analysis and demonstrate the effectiveness of our
approach.
| no_new_dataset | 0.947817 |
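The three-stage hybrid described above can be condensed into a sketch: noisy histogram synopsis, ordinary (weighted) Lloyd iterations on the synopsis, then one interactive refinement with noisy sums and counts. Everything here is a simplifying assumption: data scaled to the unit cube, an even budget split instead of the paper's analysis-driven allocation, and loose budget accounting inside the refinement.

```python
import numpy as np

def dp_kmeans_hybrid(X, k, eps, bins=10, iters=10):
    eps_syn, eps_ref = eps / 2.0, eps / 2.0  # assumed even split

    # Non-interactive step: publish a noisy histogram synopsis (sensitivity 1).
    H, edges = np.histogramdd(X, bins=bins)
    H = np.maximum(H + np.random.laplace(0, 1.0 / eps_syn, H.shape), 0)
    cells = np.stack(np.meshgrid(*[(e[:-1] + e[1:]) / 2 for e in edges],
                                 indexing="ij"), -1).reshape(-1, X.shape[1])
    w = H.ravel()

    # Standard k-means on the synopsis: weighted Lloyd iterations over cells.
    C = cells[np.random.choice(len(w), k, replace=False, p=w / w.sum())]
    for _ in range(iters):
        a = np.argmin(((cells[:, None] - C[None]) ** 2).sum(-1), 1)
        for j in range(k):
            if w[a == j].sum() > 0:
                C[j] = np.average(cells[a == j], 0, w[a == j])

    # Interactive step: one Lloyd update on the real data with noisy answers
    # (scale 2/eps_ref roughly splits eps_ref between count and sum queries).
    a = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), 1)
    for j in range(k):
        n = (a == j).sum() + np.random.laplace(0, 2.0 / eps_ref)
        s = X[a == j].sum(0) + np.random.laplace(0, 2.0 / eps_ref, X.shape[1])
        if n > 1:
            C[j] = np.clip(s / n, 0, 1)
    return C
```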
1504.06055 | Naiyan Wang | Naiyan Wang, Jianping Shi, Dit-Yan Yeung, Jiaya Jia | Understanding and Diagnosing Visual Tracking Systems | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several benchmark datasets for visual tracking research have been proposed in
recent years. Despite their usefulness, whether they are sufficient for
understanding and diagnosing the strengths and weaknesses of different trackers
remains questionable. To address this issue, we propose a framework by breaking
a tracker down into five constituent parts, namely, motion model, feature
extractor, observation model, model updater, and ensemble post-processor. We
then conduct ablative experiments on each component to study how it affects the
overall result. Surprisingly, our findings are at odds with some common
beliefs in the visual tracking research community. We find that the feature
extractor plays the most important role in a tracker. On the other hand,
although the observation model is the focus of many studies, we find that it
often brings no significant improvement. Moreover, the motion model and model
updater contain many details that could affect the result. Also, the ensemble
post-processor can improve the result substantially when the constituent
trackers have high diversity. Based on our findings, we put together some very
elementary building blocks to give a basic tracker which is competitive in
performance to the state-of-the-art trackers. We believe our framework can
provide a solid baseline when conducting controlled experiments for visual
tracking research.
| [
{
"version": "v1",
"created": "Thu, 23 Apr 2015 06:37:29 GMT"
}
] | 2015-04-24T00:00:00 | [
[
"Wang",
"Naiyan",
""
],
[
"Shi",
"Jianping",
""
],
[
"Yeung",
"Dit-Yan",
""
],
[
"Jia",
"Jiaya",
""
]
] | TITLE: Understanding and Diagnosing Visual Tracking Systems
ABSTRACT: Several benchmark datasets for visual tracking research have been proposed in
recent years. Despite their usefulness, whether they are sufficient for
understanding and diagnosing the strengths and weaknesses of different trackers
remains questionable. To address this issue, we propose a framework by breaking
a tracker down into five constituent parts, namely, motion model, feature
extractor, observation model, model updater, and ensemble post-processor. We
then conduct ablative experiments on each component to study how it affects the
overall result. Surprisingly, our findings are at odds with some common
beliefs in the visual tracking research community. We find that the feature
extractor plays the most important role in a tracker. On the other hand,
although the observation model is the focus of many studies, we find that it
often brings no significant improvement. Moreover, the motion model and model
updater contain many details that could affect the result. Also, the ensemble
post-processor can improve the result substantially when the constituent
trackers have high diversity. Based on our findings, we put together some very
elementary building blocks to give a basic tracker which is competitive in
performance to the state-of-the-art trackers. We believe our framework can
provide a solid baseline when conducting controlled experiments for visual
tracking research.
| no_new_dataset | 0.943712 |
1504.06078 | Nicolas Turenne | Nicolas Turenne, Tien Phan | x.ent: R Package for Entities and Relations Extraction based on
Unsupervised Learning and Document Structure | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relation extraction with high precision is still a challenge when
processing full-text databases. We propose an approach based on co-occurrence
analysis within each document, using document organization to improve the
accuracy of relation extraction. This approach is implemented in an R package
called \emph{x.ent}. Another facet of extraction relies on feeding the
extracted relations into a querying system for expert end-users. Two datasets
were used. One of them is of interest to specialists in plant epidemiology;
for this dataset, usage is dedicated to plant-disease exploration through
agricultural news. An open-data platform exploits exports from \emph{x.ent}
and is publicly available.
| [
{
"version": "v1",
"created": "Thu, 23 Apr 2015 08:28:01 GMT"
}
] | 2015-04-24T00:00:00 | [
[
"Turenne",
"Nicolas",
""
],
[
"Phan",
"Tien",
""
]
] | TITLE: x.ent: R Package for Entities and Relations Extraction based on
Unsupervised Learning and Document Structure
ABSTRACT: Relation extraction with high precision is still a challenge when
processing full-text databases. We propose an approach based on co-occurrence
analysis within each document, using document organization to improve the
accuracy of relation extraction. This approach is implemented in an R package
called \emph{x.ent}. Another facet of extraction relies on feeding the
extracted relations into a querying system for expert end-users. Two datasets
were used. One of them is of interest to specialists in plant epidemiology;
for this dataset, usage is dedicated to plant-disease exploration through
agricultural news. An open-data platform exploits exports from \emph{x.ent}
and is publicly available.
| no_new_dataset | 0.941007 |
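\emph{x.ent} is an R package; purely to illustrate the document-level co-occurrence idea (with sections standing in for the "document organization" signal), here is a Python sketch with hypothetical entity lists:

```python
from collections import Counter
from itertools import product

def cooccurrence_relations(docs, plants, diseases):
    # Count plant-disease pairs that co-occur in the same section of the
    # same document; restricting to sections is a crude stand-in for using
    # document structure to improve extraction accuracy.
    pairs = Counter()
    for doc in docs:  # each doc is a list of section strings
        for section in doc:
            text = section.lower()
            found_p = [p for p in plants if p in text]
            found_d = [d for d in diseases if d in text]
            for p, d in product(found_p, found_d):
                pairs[(p, d)] += 1
    return pairs

docs = [["wheat shows symptoms of septoria in northern fields"]]
print(cooccurrence_relations(docs, ["wheat"], ["septoria"]))
# Counter({('wheat', 'septoria'): 1})
```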
1504.06133 | Anguelos Nicolaou | Anguelos Nicolaou, Andrew D. Bagdanov, Marcus Liwicki, Dimosthenis
Karatzas | Sparse Radial Sampling LBP for Writer Identification | Submitted to the 13th International Conference on Document Analysis
and Recognition (ICDAR 2015) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present the use of Sparse Radial Sampling Local Binary
Patterns, a variant of Local Binary Patterns (LBP) for text-as-texture
classification. By adapting and extending the standard LBP operator to the
particularities of text we get a generic text-as-texture classification scheme
and apply it to writer identification. In experiments on CVL and ICDAR 2013
datasets, the proposed feature-set demonstrates State-Of-the-Art (SOA)
performance. Among the SOA, the proposed method is the only one that is based
on dense extraction of a single local feature descriptor. This makes it fast
and applicable at the earliest stages in a DIA pipeline without the need for
segmentation, binarization, or extraction of multiple features.
| [
{
"version": "v1",
"created": "Thu, 23 Apr 2015 11:51:53 GMT"
}
] | 2015-04-24T00:00:00 | [
[
"Nicolaou",
"Anguelos",
""
],
[
"Bagdanov",
"Andrew D.",
""
],
[
"Liwicki",
"Marcus",
""
],
[
"Karatzas",
"Dimosthenis",
""
]
] | TITLE: Sparse Radial Sampling LBP for Writer Identification
ABSTRACT: In this paper we present the use of Sparse Radial Sampling Local Binary
Patterns, a variant of Local Binary Patterns (LBP) for text-as-texture
classification. By adapting and extending the standard LBP operator to the
particularities of text we get a generic text-as-texture classification scheme
and apply it to writer identification. In experiments on CVL and ICDAR 2013
datasets, the proposed feature-set demonstrates State-Of-the-Art (SOA)
performance. Among the SOA, the proposed method is the only one that is based
on dense extraction of a single local feature descriptor. This makes it fast
and applicable at the earliest stages in a DIA pipeline without the need for
segmentation, binarization, or extraction of multiple features.
| no_new_dataset | 0.953101 |
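For orientation, here is the standard dense 3x3 LBP operator that the paper's Sparse Radial Sampling variant generalizes (the variant samples sparse points at multiple radii, which is not shown here):

```python
import numpy as np

def lbp_histogram(img):
    # img: 2-D grayscale array. Threshold each pixel's 8 neighbors against
    # the center pixel and histogram the resulting 8-bit codes.
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        nbr = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (nbr >= c) * (1 << bit)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()  # normalized 256-bin texture descriptor
```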
1504.06151 | Nauman Shahid | Nauman Shahid, Vassilis Kalofolias, Xavier Bresson, Michael Bronstein
and Pierre Vandergheynst | Robust Principal Component Analysis on Graphs | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Principal Component Analysis (PCA) is the most widely used tool for linear
dimensionality reduction and clustering. Still, it is highly sensitive to
outliers and does not scale well with respect to the number of data samples.
Robust PCA solves the first issue with a sparse penalty term. The second issue
can be handled with the matrix factorization model, which is however
non-convex. Besides, PCA based clustering can also be enhanced by using a graph
of data similarity. In this article, we introduce a new model called "Robust
PCA on Graphs" which incorporates spectral graph regularization into the Robust
PCA framework. Our proposed model benefits from 1) the robustness of principal
components to occlusions and missing values, 2) enhanced low-rank recovery, 3)
improved clustering property due to the graph smoothness assumption on the
low-rank matrix, and 4) convexity of the resulting optimization problem.
Extensive experiments on 8 benchmark, 3 video and 2 artificial datasets with
corruptions clearly reveal that our model outperforms 10 other state-of-the-art
models in its clustering and low-rank recovery tasks.
| [
{
"version": "v1",
"created": "Thu, 23 Apr 2015 12:39:40 GMT"
}
] | 2015-04-24T00:00:00 | [
[
"Shahid",
"Nauman",
""
],
[
"Kalofolias",
"Vassilis",
""
],
[
"Bresson",
"Xavier",
""
],
[
"Bronstein",
"Michael",
""
],
[
"Vandergheynst",
"Pierre",
""
]
] | TITLE: Robust Principal Component Analysis on Graphs
ABSTRACT: Principal Component Analysis (PCA) is the most widely used tool for linear
dimensionality reduction and clustering. Still, it is highly sensitive to
outliers and does not scale well with respect to the number of data samples.
Robust PCA solves the first issue with a sparse penalty term. The second issue
can be handled with the matrix factorization model, which is however
non-convex. Besides, PCA based clustering can also be enhanced by using a graph
of data similarity. In this article, we introduce a new model called "Robust
PCA on Graphs" which incorporates spectral graph regularization into the Robust
PCA framework. Our proposed model benefits from 1) the robustness of principal
components to occlusions and missing values, 2) enhanced low-rank recovery, 3)
improved clustering property due to the graph smoothness assumption on the
low-rank matrix, and 4) convexity of the resulting optimization problem.
Extensive experiments on 8 benchmark, 3 video and 2 artificial datasets with
corruptions clearly reveal that our model outperforms 10 other state-of-the-art
models in its clustering and low-rank recovery tasks.
| no_new_dataset | 0.951142 |
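Written out, a plausible form of the objective the abstract describes (the exact notation is assumed here, not taken from the paper): with data $X$, low-rank part $L$, sparse outliers $S$, and graph Laplacian $\Phi$ of the data-similarity graph,

$$ \min_{L,\,S}\; \|L\|_{*} + \lambda \|S\|_{1} + \gamma\, \mathrm{tr}\!\left(L \Phi L^{\top}\right) \quad \text{s.t.} \quad X = L + S. $$

The nuclear norm handles low-rank recovery, the $\ell_1$ term absorbs occlusions and outliers, and the Laplacian quadratic enforces graph smoothness of the low-rank part; all three terms are convex, matching the abstract's fourth claimed benefit.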
1504.06165 | Nitish Gupta | Nitish Gupta, Sameer Singh | Collectively Embedding Multi-Relational Data for Predicting User
Preferences | 10 pages, 5 figures | null | null | null | cs.LG cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matrix factorization has found incredible success and widespread application
as a collaborative filtering based approach to recommendations. Unfortunately,
incorporating additional sources of evidence, especially ones that are
incomplete and noisy, is quite difficult to achieve in such models; however, it
is often crucial for obtaining further gains in accuracy. For example, additional
information about businesses from reviews, categories, and attributes should be
leveraged for predicting user preferences, even though this information is
often inaccurate and partially-observed. Instead of creating customized methods
that are specific to each type of evidences, in this paper we present a generic
approach to factorization of relational data that collectively models all the
relations in the database. By learning a set of embeddings that are shared
across all the relations, the model is able to incorporate observed information
from all the relations, while also predicting all the relations of interest.
Our evaluation on multiple Amazon and Yelp datasets demonstrates effective
utilization of additional information for held-out preference prediction, but
further, we present accurate models even for the cold-starting businesses and
products for which we do not observe any ratings or reviews. We also illustrate
the capability of the model in imputing missing information and jointly
visualizing words, categories, and attribute factors.
| [
{
"version": "v1",
"created": "Thu, 23 Apr 2015 13:07:24 GMT"
}
] | 2015-04-24T00:00:00 | [
[
"Gupta",
"Nitish",
""
],
[
"Singh",
"Sameer",
""
]
] | TITLE: Collectively Embedding Multi-Relational Data for Predicting User
Preferences
ABSTRACT: Matrix factorization has found incredible success and widespread application
as a collaborative filtering based approach to recommendations. Unfortunately,
incorporating additional sources of evidence, especially ones that are
incomplete and noisy, is quite difficult to achieve in such models; however, it
is often crucial for obtaining further gains in accuracy. For example, additional
information about businesses from reviews, categories, and attributes should be
leveraged for predicting user preferences, even though this information is
often inaccurate and partially-observed. Instead of creating customized methods
that are specific to each type of evidence, in this paper we present a generic
approach to factorization of relational data that collectively models all the
relations in the database. By learning a set of embeddings that are shared
across all the relations, the model is able to incorporate observed information
from all the relations, while also predicting all the relations of interest.
Our evaluation on multiple Amazon and Yelp datasets demonstrates effective
utilization of additional information for held-out preference prediction, but
further, we present accurate models even for the cold-starting businesses and
products for which we do not observe any ratings or reviews. We also illustrate
the capability of the model in imputing missing information and jointly
visualizing words, categories, and attribute factors.
| no_new_dataset | 0.9463 |
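A stripped-down sketch of the shared-embedding idea: every relation in the database (user rates business, business has category, review uses word, ...) is scored from the same entity embeddings, so observing any relation updates the representations used to predict all of them. Relation-specific weights, biases, and the paper's actual loss are omitted, and all names are hypothetical.

```python
import numpy as np

def sgd_step(E, triples, lr=0.01, reg=0.01):
    # E: entity id -> shared embedding; triples: (head, tail, observed value)
    # pooled across *all* relations. Squared loss on the dot-product score.
    for h, t, y in triples:
        err = E[h] @ E[t] - y
        gh = err * E[t] + reg * E[h]
        gt = err * E[h] + reg * E[t]
        E[h] -= lr * gh
        E[t] -= lr * gt

rng = np.random.default_rng(0)
E = {e: rng.normal(0, 0.1, 16) for e in ["user1", "biz1", "cat:pizza"]}
triples = [("user1", "biz1", 5.0),      # a rating relation
           ("biz1", "cat:pizza", 1.0)]  # a category relation
for _ in range(200):
    sgd_step(E, triples)
print(E["user1"] @ E["biz1"])  # held-out-style preference score
```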
1504.06266 | Hamid Tizhoosh | Ahmed Othman, Hamid R. Tizhoosh, Farzad Khalvati | Evolving Fuzzy Image Segmentation with Self-Configuration | Benchmark data (35 breast ultrasound images with gold standard
segments) available; 11 pages, 4 algorithms, 6 figures, 5 tables; | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Current image segmentation techniques usually require that the user tune
several parameters in order to obtain maximum segmentation accuracy, a
computationally inefficient approach, especially when a large number of images
must be processed sequentially in daily practice. The use of evolving fuzzy
systems for designing a method that automatically adjusts parameters to segment
medical images according to the quality expectation of expert users has been
proposed recently (Evolving fuzzy image segmentation EFIS). However, EFIS
suffers from a few limitations when used in practice mainly due to some fixed
parameters. For instance, EFIS depends on auto-detection of the object of
interest for feature calculation, a task that is highly application-dependent.
This shortcoming limits the applicability of EFIS, which was proposed with the
ultimate goal of offering a generic but adjustable segmentation scheme. In this
paper, a new version of EFIS is proposed to overcome these limitations. The new
EFIS, called self-configuring EFIS (SC-EFIS), uses available training data to
self-estimate the parameters that are fixed in EFIS. As well, the proposed
SC-EFIS relies on a feature selection process that does not require
auto-detection of an ROI. The proposed SC-EFIS was evaluated using the same
segmentation algorithms and the same dataset as for EFIS. The results show that
SC-EFIS can provide the same results as EFIS but with a higher level of
automation.
| [
{
"version": "v1",
"created": "Thu, 23 Apr 2015 17:23:09 GMT"
}
] | 2015-04-24T00:00:00 | [
[
"Othman",
"Ahmed",
""
],
[
"Tizhoosh",
"Hamid R.",
""
],
[
"Khalvati",
"Farzad",
""
]
] | TITLE: Evolving Fuzzy Image Segmentation with Self-Configuration
ABSTRACT: Current image segmentation techniques usually require that the user tune
several parameters in order to obtain maximum segmentation accuracy, a
computationally inefficient approach, especially when a large number of images
must be processed sequentially in daily practice. The use of evolving fuzzy
systems for designing a method that automatically adjusts parameters to segment
medical images according to the quality expectation of expert users has been
proposed recently (evolving fuzzy image segmentation, EFIS). However, EFIS
suffers from a few limitations when used in practice mainly due to some fixed
parameters. For instance, EFIS depends on auto-detection of the object of
interest for feature calculation, a task that is highly application-dependent.
This shortcoming limits the applicability of EFIS, which was proposed with the
ultimate goal of offering a generic but adjustable segmentation scheme. In this
paper, a new version of EFIS is proposed to overcome these limitations. The new
EFIS, called self-configuring EFIS (SC-EFIS), uses available training data to
self-estimate the parameters that are fixed in EFIS. As well, the proposed
SC-EFIS relies on a feature selection process that does not require
auto-detection of an ROI. The proposed SC-EFIS was evaluated using the same
segmentation algorithms and the same dataset as for EFIS. The results show that
SC-EFIS can provide the same results as EFIS but with a higher level of
automation.
| no_new_dataset | 0.950732 |
1504.05880 | Shiva Kasiviswanathan | Shiva Prasad Kasiviswanathan and Mark Rudelson | Spectral Norm of Random Kernel Matrices with Applications to Privacy | 16 pages, 1 Figure | null | null | null | stat.ML cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kernel methods are an extremely popular set of techniques used for many
important machine learning and data analysis applications. In addition to
having good practical performance, these methods are supported by a
well-developed theory. Kernel methods use an implicit mapping of the input data
into a high dimensional feature space defined by a kernel function, i.e., a
function returning the inner product between the images of two data points in
the feature space. Central to any kernel method is the kernel matrix, which is
built by evaluating the kernel function on a given sample dataset.
In this paper, we initiate the study of non-asymptotic spectral theory of
random kernel matrices. These are $n \times n$ random matrices whose $(i,j)$th entry is
obtained by evaluating the kernel function on $x_i$ and $x_j$, where
$x_1,...,x_n$ are a set of n independent random high-dimensional vectors. Our
main contribution is to obtain tight upper bounds on the spectral norm (largest
eigenvalue) of random kernel matrices constructed by commonly used kernel
functions based on polynomials and Gaussian radial basis.
As an application of these results, we provide lower bounds on the distortion
needed for releasing the coefficients of kernel ridge regression under
attribute privacy, a general privacy notion which captures a large class of
privacy definitions. Kernel ridge regression is a standard method for performing
non-parametric regression that regularly outperforms traditional regression
approaches in various domains. Our privacy distortion lower bounds are the
first for any kernel technique, and our analysis assumes realistic scenarios
for the input, unlike all previous lower bounds for other release problems
which only hold under very restrictive input settings.
| [
{
"version": "v1",
"created": "Wed, 22 Apr 2015 16:54:48 GMT"
}
] | 2015-04-23T00:00:00 | [
[
"Kasiviswanathan",
"Shiva Prasad",
""
],
[
"Rudelson",
"Mark",
""
]
] | TITLE: Spectral Norm of Random Kernel Matrices with Applications to Privacy
ABSTRACT: Kernel methods are an extremely popular set of techniques used for many
important machine learning and data analysis applications. In addition to
having good practical performance, these methods are supported by a
well-developed theory. Kernel methods use an implicit mapping of the input data
into a high dimensional feature space defined by a kernel function, i.e., a
function returning the inner product between the images of two data points in
the feature space. Central to any kernel method is the kernel matrix, which is
built by evaluating the kernel function on a given sample dataset.
In this paper, we initiate the study of non-asymptotic spectral theory of
random kernel matrices. These are $n \times n$ random matrices whose $(i,j)$th entry is
obtained by evaluating the kernel function on $x_i$ and $x_j$, where
$x_1,...,x_n$ are a set of n independent random high-dimensional vectors. Our
main contribution is to obtain tight upper bounds on the spectral norm (largest
eigenvalue) of random kernel matrices constructed by commonly used kernel
functions based on polynomials and Gaussian radial basis.
As an application of these results, we provide lower bounds on the distortion
needed for releasing the coefficients of kernel ridge regression under
attribute privacy, a general privacy notion which captures a large class of
privacy definitions. Kernel ridge regression is a standard method for performing
non-parametric regression that regularly outperforms traditional regression
approaches in various domains. Our privacy distortion lower bounds are the
first for any kernel technique, and our analysis assumes realistic scenarios
for the input, unlike all previous lower bounds for other release problems
which only hold under very restrictive input settings.
| no_new_dataset | 0.946745 |
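The central object is easy to instantiate empirically: build a Gaussian RBF kernel matrix from $n$ independent high-dimensional vectors and compute its spectral norm (for this symmetric PSD matrix, the largest eigenvalue). A quick check of the quantity the paper bounds:

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2), the Gaussian RBF kernel.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

n, d = 200, 50
X = np.random.randn(n, d) / np.sqrt(d)  # independent high-dimensional samples
K = rbf_kernel_matrix(X)
print(np.linalg.norm(K, ord=2))  # spectral norm = largest eigenvalue
```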
1411.4555 | Samy Bengio | Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan | Show and Tell: A Neural Image Caption Generator | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically describing the content of an image is a fundamental problem in
artificial intelligence that connects computer vision and natural language
processing. In this paper, we present a generative model based on a deep
recurrent architecture that combines recent advances in computer vision and
machine translation and that can be used to generate natural sentences
describing an image. The model is trained to maximize the likelihood of the
target description sentence given the training image. Experiments on several
datasets show the accuracy of the model and the fluency of the language it
learns solely from image descriptions. Our model is often quite accurate, which
we verify both qualitatively and quantitatively. For instance, while the
current state-of-the-art BLEU-1 score (the higher the better) on the Pascal
dataset is 25, our approach yields 59, to be compared to human performance
around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66,
and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we
achieve a BLEU-4 of 27.7, which is the current state-of-the-art.
| [
{
"version": "v1",
"created": "Mon, 17 Nov 2014 17:15:41 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Apr 2015 22:26:11 GMT"
}
] | 2015-04-22T00:00:00 | [
[
"Vinyals",
"Oriol",
""
],
[
"Toshev",
"Alexander",
""
],
[
"Bengio",
"Samy",
""
],
[
"Erhan",
"Dumitru",
""
]
] | TITLE: Show and Tell: A Neural Image Caption Generator
ABSTRACT: Automatically describing the content of an image is a fundamental problem in
artificial intelligence that connects computer vision and natural language
processing. In this paper, we present a generative model based on a deep
recurrent architecture that combines recent advances in computer vision and
machine translation and that can be used to generate natural sentences
describing an image. The model is trained to maximize the likelihood of the
target description sentence given the training image. Experiments on several
datasets show the accuracy of the model and the fluency of the language it
learns solely from image descriptions. Our model is often quite accurate, which
we verify both qualitatively and quantitatively. For instance, while the
current state-of-the-art BLEU-1 score (the higher the better) on the Pascal
dataset is 25, our approach yields 59, to be compared to human performance
around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66,
and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we
achieve a BLEU-4 of 27.7, which is the current state-of-the-art.
| no_new_dataset | 0.936401 |
1502.02766 | Sachin Sudhakar Farfade | Sachin Sudhakar Farfade, Mohammad Saberian, Li-Jia Li | Multi-view Face Detection Using Deep Convolutional Neural Networks | in International Conference on Multimedia Retrieval 2015 (ICMR) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In this paper we consider the problem of multi-view face detection. While
there has been significant research on this problem, current state-of-the-art
approaches for this task require annotation of facial landmarks, e.g. TSM [25],
or annotation of face poses [28, 22]. They also require training dozens of
models to fully capture faces in all orientations, e.g. 22 models in HeadHunter
method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method
that does not require pose/landmark annotation and is able to detect faces in a
wide range of orientations using a single model based on deep convolutional
neural networks. The proposed method has minimal complexity; unlike other
recent deep learning object detection methods [9], it does not require
additional components such as segmentation, bounding-box regression, or SVM
classifiers. Furthermore, we analyzed scores of the proposed face detector for
faces in different orientations and found that 1) the proposed method is able
to detect faces from different angles and can handle occlusion to some extent,
2) there seems to be a correlation between the distribution of positive examples
in the training set and scores of the proposed face detector. The latter
suggests that the proposed method's performance can be further improved by using
better sampling strategies and more sophisticated data augmentation techniques.
Evaluations on popular face detection benchmark datasets show that our
single-model face detector algorithm has similar or better performance compared
to the previous methods, which are more complex and require annotations of
either different poses or facial landmarks.
| [
{
"version": "v1",
"created": "Tue, 10 Feb 2015 03:15:21 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Mar 2015 10:07:20 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Apr 2015 20:18:57 GMT"
}
] | 2015-04-22T00:00:00 | [
[
"Farfade",
"Sachin Sudhakar",
""
],
[
"Saberian",
"Mohammad",
""
],
[
"Li",
"Li-Jia",
""
]
] | TITLE: Multi-view Face Detection Using Deep Convolutional Neural Networks
ABSTRACT: In this paper we consider the problem of multi-view face detection. While
there has been significant research on this problem, current state-of-the-art
approaches for this task require annotation of facial landmarks, e.g. TSM [25],
or annotation of face poses [28, 22]. They also require training dozens of
models to fully capture faces in all orientations, e.g. 22 models in HeadHunter
method [22]. In this paper we propose Deep Dense Face Detector (DDFD), a method
that does not require pose/landmark annotation and is able to detect faces in a
wide range of orientations using a single model based on deep convolutional
neural networks. The proposed method has minimal complexity; unlike other
recent deep learning object detection methods [9], it does not require
additional components such as segmentation, bounding-box regression, or SVM
classifiers. Furthermore, we analyzed scores of the proposed face detector for
faces in different orientations and found that 1) the proposed method is able
to detect faces from different angles and can handle occlusion to some extent,
2) there seems to be a correlation between the distribution of positive examples
in the training set and scores of the proposed face detector. The latter
suggests that the proposed method's performance can be further improved by using
better sampling strategies and more sophisticated data augmentation techniques.
Evaluations on popular face detection benchmark datasets show that our
single-model face detector algorithm has similar or better performance compared
to the previous methods, which are more complex and require annotations of
either different poses or facial landmarks.
| no_new_dataset | 0.944022 |
1504.05150 | Mark Kaminski | Mark Kaminski, Bernardo Cuenca Grau | Computing Horn Rewritings of Description Logics Ontologies | 15 pages. To appear in IJCAI-15 | null | null | null | cs.AI cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of rewriting an ontology O1 expressed in a DL L1 into an
ontology O2 in a Horn DL L2 such that O1 and O2 are equisatisfiable when
extended with an arbitrary dataset. Ontologies that admit such rewritings are
amenable to reasoning techniques ensuring tractability in data complexity.
After showing undecidability whenever L1 extends ALCF, we focus on devising
efficiently checkable conditions that ensure existence of a Horn rewriting. By
lifting existing techniques for rewriting Disjunctive Datalog programs into
plain Datalog to the case of arbitrary first-order programs with function
symbols, we identify a class of ontologies that admit Horn rewritings of
polynomial size. Our experiments indicate that many real-world ontologies
satisfy our sufficient conditions and thus admit polynomial Horn rewritings.
| [
{
"version": "v1",
"created": "Mon, 20 Apr 2015 18:39:27 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Apr 2015 10:59:25 GMT"
}
] | 2015-04-22T00:00:00 | [
[
"Kaminski",
"Mark",
""
],
[
"Grau",
"Bernardo Cuenca",
""
]
] | TITLE: Computing Horn Rewritings of Description Logics Ontologies
ABSTRACT: We study the problem of rewriting an ontology O1 expressed in a DL L1 into an
ontology O2 in a Horn DL L2 such that O1 and O2 are equisatisfiable when
extended with an arbitrary dataset. Ontologies that admit such rewritings are
amenable to reasoning techniques ensuring tractability in data complexity.
After showing undecidability whenever L1 extends ALCF, we focus on devising
efficiently checkable conditions that ensure existence of a Horn rewriting. By
lifting existing techniques for rewriting Disjunctive Datalog programs into
plain Datalog to the case of arbitrary first-order programs with function
symbols, we identify a class of ontologies that admit Horn rewritings of
polynomial size. Our experiments indicate that many real-world ontologies
satisfy our sufficient conditions and thus admit polynomial Horn rewritings.
| no_new_dataset | 0.947381 |
1504.05473 | Yury Kashnitsky | Yury Kashnitsky, Dmitry I. Ignatov | Can FCA-based Recommender System Suggest a Proper Classifier? | 10 pages, 1 figure, 4 tables, ECAI 2014, workshop "What FCA can do
for "Artifficial Intelligence" | CEUR Workshop Proceedings, 1257, pp. 17-26 (2014) | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper briefly introduces multiple classifier systems and describes a new
algorithm, which improves classification accuracy by recommending a proper
classification algorithm for each object. This recommendation is done
assuming that a classifier is likely to predict the label of the object
correctly if it has correctly classified its neighbors. The process of
assigning a classifier to each object is based on Formal Concept Analysis. We
explain the idea of the algorithm with a toy example and describe our first
experiments with real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2015 15:38:23 GMT"
}
] | 2015-04-22T00:00:00 | [
[
"Kashnitsky",
"Yury",
""
],
[
"Ignatov",
"Dmitry I.",
""
]
] | TITLE: Can FCA-based Recommender System Suggest a Proper Classifier?
ABSTRACT: The paper briefly introduces multiple classifier systems and describes a new
algorithm, which improves classification accuracy by means of recommendation of
a proper algorithm to an object classification. This recommendation is done
assuming that a classifier is likely to predict the label of the object
correctly if it has correctly classified its neighbors. The process of
assigning a classifier to each object is based on Formal Concept Analysis. We
explain the idea of the algorithm with a toy example and describe our first
experiments with real-world datasets.
| no_new_dataset | 0.950273 |
1504.05524 | Dan Oneata | Heng Wang, Dan Oneata, Jakob Verbeek, Cordelia Schmid | A robust and efficient video representation for action recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a state-of-the-art video representation and applies it
to efficient action recognition and detection. We first propose to improve the
popular dense trajectory features by explicit camera motion estimation. More
specifically, we extract feature point matches between frames using SURF
descriptors and dense optical flow. The matches are used to estimate a
homography with RANSAC. To improve the robustness of homography estimation, a
human detector is employed to remove outlier matches from the human body as
human motion is not constrained by the camera. Trajectories consistent with the
homography are considered as due to camera motion, and thus removed. We also
use the homography to cancel out camera motion from the optical flow. This
results in significant improvement on motion-based HOF and MBH descriptors. We
further explore the recent Fisher vector as an alternative feature encoding
approach to the standard bag-of-words histogram, and consider different ways to
include spatial layout information in these encodings. We present a large and
varied set of evaluations, considering (i) classification of short basic
actions on six datasets, (ii) localization of such actions in feature-length
movies, and (iii) large-scale recognition of complex events. We find that our
improved trajectory features significantly outperform previous dense
trajectories, and that Fisher vectors are superior to bag-of-words encodings
for video recognition tasks. In all three tasks, we show substantial
improvements over the state-of-the-art results.
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2015 17:44:07 GMT"
}
] | 2015-04-22T00:00:00 | [
[
"Wang",
"Heng",
""
],
[
"Oneata",
"Dan",
""
],
[
"Verbeek",
"Jakob",
""
],
[
"Schmid",
"Cordelia",
""
]
] | TITLE: A robust and efficient video representation for action recognition
ABSTRACT: This paper introduces a state-of-the-art video representation and applies it
to efficient action recognition and detection. We first propose to improve the
popular dense trajectory features by explicit camera motion estimation. More
specifically, we extract feature point matches between frames using SURF
descriptors and dense optical flow. The matches are used to estimate a
homography with RANSAC. To improve the robustness of homography estimation, a
human detector is employed to remove outlier matches from the human body as
human motion is not constrained by the camera. Trajectories consistent with the
homography are considered as due to camera motion, and thus removed. We also
use the homography to cancel out camera motion from the optical flow. This
results in significant improvement on motion-based HOF and MBH descriptors. We
further explore the recent Fisher vector as an alternative feature encoding
approach to the standard bag-of-words histogram, and consider different ways to
include spatial layout information in these encodings. We present a large and
varied set of evaluations, considering (i) classification of short basic
actions on six datasets, (ii) localization of such actions in feature-length
movies, and (iii) large-scale recognition of complex events. We find that our
improved trajectory features significantly outperform previous dense
trajectories, and that Fisher vectors are superior to bag-of-words encodings
for video recognition tasks. In all three tasks, we show substantial
improvements over the state-of-the-art results.
| no_new_dataset | 0.948917 |
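A condensed OpenCV sketch of the camera-motion step described above: fit a homography with RANSAC to frame-to-frame matches (optionally dropping matches that fall on detected humans), warp the previous frame, and recompute flow so camera motion is cancelled. Farneback flow stands in for the paper's SURF-plus-dense-flow matching, and the mask handling is simplified.

```python
import cv2
import numpy as np

def motion_compensated_flow(prev_gray, curr_gray, pts_prev, pts_curr,
                            human_mask=None):
    # pts_prev/pts_curr: float32 (N, 2) matched point coordinates (x, y).
    if human_mask is not None:  # discard matches on the human body
        keep = ~human_mask[pts_prev[:, 1].astype(int),
                           pts_prev[:, 0].astype(int)]
        pts_prev, pts_curr = pts_prev[keep], pts_curr[keep]
    # RANSAC homography: inliers ~ camera motion, outliers ~ object motion.
    H, inliers = cv2.findHomography(pts_prev, pts_curr, cv2.RANSAC, 3.0)
    h, w = prev_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    # Flow between the warped previous frame and the current frame has the
    # camera motion largely cancelled out.
    flow = cv2.calcOpticalFlowFarneback(warped_prev, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return H, flow
```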
1406.3407 | Gang Chen | Gang Chen and Sargur H. Srihari | Restricted Boltzmann Machine for Classification with Hierarchical
Correlated Prior | 13 pages, 5 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Restricted Boltzmann machines (RBMs) and their variants have recently become
hot research topics and are widely applied to many classification problems, such
as character recognition and document categorization. Often, the classification
RBM ignores the interclass relationship or prior knowledge of sharing information
among classes. In this paper, we are interested in RBMs with a hierarchical
prior over classes. We assume the parameters for nearby nodes are correlated in
the hierarchical tree, and further require the parameters at each node of the
tree to be orthogonal to those at its ancestors. We propose a hierarchical
correlated RBM for the classification problem, which generalizes the
classification RBM with
sharing information among different classes. In order to reduce the redundancy
between node parameters in the hierarchy, we also introduce orthogonal
restrictions to our objective function. We test our method on challenging
datasets, and show promising results compared to competitive baselines.
| [
{
"version": "v1",
"created": "Fri, 13 Jun 2014 02:19:26 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Apr 2015 18:39:18 GMT"
}
] | 2015-04-21T00:00:00 | [
[
"Chen",
"Gang",
""
],
[
"Srihari",
"Sargur H.",
""
]
] | TITLE: Restricted Boltzmann Machine for Classification with Hierarchical
Correlated Prior
ABSTRACT: Restricted Boltzmann machines (RBMs) and their variants have recently
become hot research topics and are widely applied to many classification
problems, such as character recognition and document categorization. Often, the
classification RBM ignores the interclass relationship or prior knowledge of
sharing information
among classes. In this paper, we are interested in RBMs with a hierarchical
prior over classes. We assume the parameters for nearby nodes are correlated in
the hierarchical tree, and further require the parameters at each node of the
tree to be orthogonal to those at its ancestors. We propose a hierarchical
correlated RBM for the classification problem, which generalizes the
classification RBM with
sharing information among different classes. In order to reduce the redundancy
between node parameters in the hierarchy, we also introduce orthogonal
restrictions to our objective function. We test our method on challenging
datasets, and show promising results compared to competitive baselines.
| no_new_dataset | 0.952486 |
1406.5266 | Yaniv Taigman | Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf | Web-Scale Training for Face Identification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Scaling machine learning methods to very large datasets has attracted
considerable attention in recent years, thanks to easy access to ubiquitous
sensing and data from the web. We study face recognition and show that three
distinct properties have surprising effects on the transferability of deep
convolutional networks (CNN): (1) The bottleneck of the network serves as an
important transfer learning regularizer, and (2) in contrast to the common
wisdom, performance saturation may exist in CNNs (as the number of training
samples grows); we propose a solution for alleviating this by replacing the
naive random subsampling of the training set with a bootstrapping process.
Moreover, (3) we find a link between the representation norm and the ability to
discriminate in a target domain, which sheds light on how such networks
represent faces. Based on these discoveries, we are able to improve face
recognition accuracy on the widely used LFW benchmark, both in the verification
(1:1) and identification (1:N) protocols, and directly compare, for the first
time, with the state of the art Commercially-Off-The-Shelf system and show a
sizable leap in performance.
| [
{
"version": "v1",
"created": "Fri, 20 Jun 2014 02:51:31 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Apr 2015 09:18:19 GMT"
}
] | 2015-04-21T00:00:00 | [
[
"Taigman",
"Yaniv",
""
],
[
"Yang",
"Ming",
""
],
[
"Ranzato",
"Marc'Aurelio",
""
],
[
"Wolf",
"Lior",
""
]
] | TITLE: Web-Scale Training for Face Identification
ABSTRACT: Scaling machine learning methods to very large datasets has attracted
considerable attention in recent years, thanks to easy access to ubiquitous
sensing and data from the web. We study face recognition and show that three
distinct properties have surprising effects on the transferability of deep
convolutional networks (CNN): (1) The bottleneck of the network serves as an
important transfer learning regularizer, and (2) in contrast to the common
wisdom, performance saturation may exist in CNNs (as the number of training
samples grows); we propose a solution for alleviating this by replacing the
naive random subsampling of the training set with a bootstrapping process.
Moreover, (3) we find a link between the representation norm and the ability to
discriminate in a target domain, which sheds light on how such networks
represent faces. Based on these discoveries, we are able to improve face
recognition accuracy on the widely used LFW benchmark, both in the verification
(1:1) and identification (1:N) protocols, and directly compare, for the first
time, with the state of the art Commercially-Off-The-Shelf system and show a
sizable leap in performance.
| no_new_dataset | 0.948346 |
1410.4355 | Erik Ferragut | Robert A. Bridges, John Collins, Erik M. Ferragut, Jason Laska, Blair
D. Sullivan | Multi-Level Anomaly Detection on Time-Varying Graph Data | 8 pages. Updated paper to address reviewer comments | null | null | null | cs.SI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents a novel modeling and analysis framework for graph
sequences which addresses the challenge of detecting and contextualizing
anomalies in labelled, streaming graph data. We introduce a generalization of
the BTER model of Seshadhri et al. by adding flexibility to community
structure, and use this model to perform multi-scale graph anomaly detection.
Specifically, probability models describing coarse subgraphs are built by
aggregating probabilities at finer levels, and these closely related
hierarchical models simultaneously detect deviations from expectation. This
technique provides insight into a graph's structure and internal context that
may shed light on a detected event. Additionally, this multi-scale analysis
facilitates intuitive visualizations by allowing users to narrow focus from an
anomalous graph to particular subgraphs or nodes causing the anomaly.
For evaluation, two hierarchical anomaly detectors are tested against a
baseline Gaussian method on a series of sampled graphs. We demonstrate that our
graph statistics-based approach outperforms both a distribution-based detector
and the baseline in a labeled setting with community structure, and it
accurately detects anomalies in synthetic and real-world datasets at the node,
subgraph, and graph levels. To illustrate the accessibility of information made
possible via this technique, the anomaly detector and an associated interactive
visualization tool are tested on NCAA football data, where teams and
conferences that moved within the league are identified with perfect recall,
and precision greater than 0.786.
| [
{
"version": "v1",
"created": "Thu, 16 Oct 2014 09:57:20 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Oct 2014 19:08:37 GMT"
},
{
"version": "v3",
"created": "Fri, 17 Apr 2015 16:58:08 GMT"
},
{
"version": "v4",
"created": "Mon, 20 Apr 2015 11:55:53 GMT"
}
] | 2015-04-21T00:00:00 | [
[
"Bridges",
"Robert A.",
""
],
[
"Collins",
"John",
""
],
[
"Ferragut",
"Erik M.",
""
],
[
"Laska",
"Jason",
""
],
[
"Sullivan",
"Blair D.",
""
]
] | TITLE: Multi-Level Anomaly Detection on Time-Varying Graph Data
ABSTRACT: This work presents a novel modeling and analysis framework for graph
sequences which addresses the challenge of detecting and contextualizing
anomalies in labelled, streaming graph data. We introduce a generalization of
the BTER model of Seshadhri et al. by adding flexibility to community
structure, and use this model to perform multi-scale graph anomaly detection.
Specifically, probability models describing coarse subgraphs are built by
aggregating probabilities at finer levels, and these closely related
hierarchical models simultaneously detect deviations from expectation. This
technique provides insight into a graph's structure and internal context that
may shed light on a detected event. Additionally, this multi-scale analysis
facilitates intuitive visualizations by allowing users to narrow focus from an
anomalous graph to particular subgraphs or nodes causing the anomaly.
For evaluation, two hierarchical anomaly detectors are tested against a
baseline Gaussian method on a series of sampled graphs. We demonstrate that our
graph statistics-based approach outperforms both a distribution-based detector
and the baseline in a labeled setting with community structure, and it
accurately detects anomalies in synthetic and real-world datasets at the node,
subgraph, and graph levels. To illustrate the accessibility of information made
possible via this technique, the anomaly detector and an associated interactive
visualization tool are tested on NCAA football data, where teams and
conferences that moved within the league are identified with perfect recall,
and precision greater than 0.786.
| no_new_dataset | 0.950503 |
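The multi-scale scoring can be pictured with a tiny sketch: fine-grained probabilities become log-loss surprise scores that are summed up the hierarchy, so the same model flags deviations at node, subgraph, and graph level. The score definition below is an illustrative assumption, not the paper's exact detector.

```python
import numpy as np

def anomaly_scores(node_probs, communities):
    # node_probs: model probability of each node's observed behavior.
    # communities: name -> index array; coarse scores are aggregates of fine
    # ones, mirroring the hierarchical models built by aggregation.
    node_score = -np.log(node_probs + 1e-12)          # per-node surprise
    comm_score = {c: node_score[idx].sum() for c, idx in communities.items()}
    graph_score = node_score.sum()                    # whole-graph surprise
    return node_score, comm_score, graph_score
```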
1412.6645 | Gabriel Synnaeve | Gabriel Synnaeve, Emmanuel Dupoux | Weakly Supervised Multi-Embeddings Learning of Acoustic Models | 6 pages, 3 figures | null | null | null | cs.SD cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We trained a Siamese network with multi-task same/different information on a
speech dataset, and found that it was possible to share a network for both
tasks without a loss in performance. The first task was to discriminate between
two same or different words, and the second was to discriminate between two
same or different talkers.
| [
{
"version": "v1",
"created": "Sat, 20 Dec 2014 11:54:41 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Feb 2015 10:09:09 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Apr 2015 12:35:32 GMT"
}
] | 2015-04-21T00:00:00 | [
[
"Synnaeve",
"Gabriel",
""
],
[
"Dupoux",
"Emmanuel",
""
]
] | TITLE: Weakly Supervised Multi-Embeddings Learning of Acoustic Models
ABSTRACT: We trained a Siamese network with multi-task same/different information on a
speech dataset, and found that it was possible to share a network for both
tasks without a loss in performance. The first task was to discriminate between
two same or different words, and the second was to discriminate between two
same or different talkers.
| no_new_dataset | 0.945096 |
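The abstract does not spell out the training loss; a common choice for same/different supervision is the contrastive loss, sketched below with the two tasks sharing one embedding network (hence a summed loss). Purely illustrative:

```python
import numpy as np

def contrastive_loss(e1, e2, same, margin=1.0):
    # e1, e2: embeddings from the shared Siamese network; same: 1 if the pair
    # matches (same word, or same talker), else 0. Matching pairs are pulled
    # together; non-matching pairs are pushed at least `margin` apart.
    d = np.linalg.norm(e1 - e2, axis=1)
    return np.mean(same * d**2 + (1 - same) * np.maximum(margin - d, 0) ** 2)

e1, e2 = np.random.randn(8, 32), np.random.randn(8, 32)
same_word = np.random.randint(0, 2, 8)    # task 1: same/different word
same_talker = np.random.randint(0, 2, 8)  # task 2: same/different talker
loss = contrastive_loss(e1, e2, same_word) + contrastive_loss(e1, e2, same_talker)
```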
1501.06272 | Fang Zhao | Fang Zhao, Yongzhen Huang, Liang Wang, Tieniu Tan | Deep Semantic Ranking Based Hashing for Multi-Label Image Retrieval | CVPR 2015 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid growth of web images, hashing has received increasing
interest in large-scale image retrieval. Research efforts have been devoted to
learning compact binary codes that preserve semantic similarity based on
labels. However, most of these hashing methods are designed to handle simple
binary similarity. The complex multilevel semantic structure of images
associated with multiple labels has not yet been well explored. Here we
propose a deep semantic ranking based method for learning hash functions that
preserve multilevel semantic similarity between multi-label images. In our
approach, deep convolutional neural network is incorporated into hash functions
to jointly learn feature representations and mappings from them to hash codes,
which avoids the limitation of semantic representation power of hand-crafted
features. Meanwhile, a ranking list that encodes the multilevel similarity
information is employed to guide the learning of such deep hash functions. An
effective scheme based on surrogate loss is used to solve the intractable
optimization problem of nonsmooth and multivariate ranking measures involved in
the learning procedure. Experimental results show the superiority of our
proposed approach over several state-of-the-art hashing methods in terms of
ranking evaluation metrics when tested on multi-label image datasets.
| [
{
"version": "v1",
"created": "Mon, 26 Jan 2015 07:33:40 GMT"
},
{
"version": "v2",
"created": "Sun, 19 Apr 2015 04:28:58 GMT"
}
] | 2015-04-21T00:00:00 | [
[
"Zhao",
"Fang",
""
],
[
"Huang",
"Yongzhen",
""
],
[
"Wang",
"Liang",
""
],
[
"Tan",
"Tieniu",
""
]
] | TITLE: Deep Semantic Ranking Based Hashing for Multi-Label Image Retrieval
ABSTRACT: With the rapid growth of web images, hashing has received increasing
interest in large-scale image retrieval. Research efforts have been devoted to
learning compact binary codes that preserve semantic similarity based on
labels. However, most of these hashing methods are designed to handle simple
binary similarity. The complex multilevel semantic structure of images
associated with multiple labels has not yet been well explored. Here we
propose a deep semantic ranking based method for learning hash functions that
preserve multilevel semantic similarity between multi-label images. In our
approach, deep convolutional neural network is incorporated into hash functions
to jointly learn feature representations and mappings from them to hash codes,
which avoids the limitation of semantic representation power of hand-crafted
features. Meanwhile, a ranking list that encodes the multilevel similarity
information is employed to guide the learning of such deep hash functions. An
effective scheme based on surrogate loss is used to solve the intractable
optimization problem of nonsmooth and multivariate ranking measures involved in
the learning procedure. Experimental results show the superiority of our
proposed approach over several state-of-the-art hashing methods in terms of
ranking evaluation metrics when tested on multi-label image datasets.
| no_new_dataset | 0.947235 |
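A generic surrogate of the kind the abstract alludes to: a triplet-style hinge on hash-layer activations, with binarization deferred to test time. This is a common stand-in, not the paper's exact surrogate loss:

```python
import numpy as np

def triplet_surrogate_loss(h_q, h_pos, h_neg, margin=2.0):
    # The query activation h_q should be closer to h_pos (an image sharing
    # more labels with the query) than to h_neg (sharing fewer), by `margin`.
    d_pos = ((h_q - h_pos) ** 2).sum()
    d_neg = ((h_q - h_neg) ** 2).sum()
    return max(0.0, margin + d_pos - d_neg)

def binarize(h):
    # Hash codes are obtained by thresholding activations at test time.
    return (h > 0).astype(np.uint8)
```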
1504.04558 | Quanzeng You | Quanzeng You, Sumit Bhatia, Jiebo Luo | A Picture Tells a Thousand Words -- About You! User Interest Profiling
from User Generated Visual Content | 7 pages, 6 Figures, 4 Tables | null | null | null | cs.SI cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference of online social network users' attributes and interests has been
an active research topic. Accurate identification of users' attributes and
interests is crucial for improving the performance of personalization and
recommender systems. Most of the existing works have focused on textual content
generated by the users and have successfully used it for predicting users'
interests and other identifying attributes. However, little attention has been
paid to user generated visual content (images) that is becoming increasingly
popular and pervasive in recent times. We posit that images posted by users on
online social networks are a reflection of topics they are interested in and
propose an approach to infer user attributes from images posted by them. We
analyze the content of individual images and then aggregate the image-level
knowledge to infer user-level interest distribution. We employ image-level
similarity to propagate the label information between images, as well as
utilize the image category information derived from the user created
organization structure to further propagate the category-level knowledge for
all images. A real life social network dataset created from Pinterest is used
for evaluation and the experimental results demonstrate the effectiveness of
our proposed approach.
| [
{
"version": "v1",
"created": "Fri, 17 Apr 2015 16:28:35 GMT"
}
] | 2015-04-21T00:00:00 | [
[
"You",
"Quanzeng",
""
],
[
"Bhatia",
"Sumit",
""
],
[
"Luo",
"Jiebo",
""
]
] | TITLE: A Picture Tells a Thousand Words -- About You! User Interest Profiling
from User Generated Visual Content
ABSTRACT: Inference of online social network users' attributes and interests has been
an active research topic. Accurate identification of users' attributes and
interests is crucial for improving the performance of personalization and
recommender systems. Most of the existing works have focused on textual content
generated by the users and have successfully used it for predicting users'
interests and other identifying attributes. However, little attention has been
paid to user generated visual content (images) that is becoming increasingly
popular and pervasive in recent times. We posit that images posted by users on
online social networks are a reflection of topics they are interested in and
propose an approach to infer user attributes from images posted by them. We
analyze the content of individual images and then aggregate the image-level
knowledge to infer user-level interest distribution. We employ image-level
similarity to propagate the label information between images, as well as
utilize the image category information derived from the user created
organization structure to further propagate the category-level knowledge for
all images. A real-life social network dataset created from Pinterest is used
for evaluation and the experimental results demonstrate the effectiveness of
our proposed approach.
| no_new_dataset | 0.914901 |
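The image-level-to-user-level pipeline above can be approximated with classic graph label propagation (Zhou et al.-style normalization) as a stand-in for the paper's propagation scheme; `S`, `Y`, and the per-user aggregation below are illustrative assumptions.

```python
import numpy as np

def propagate_topic_scores(S, Y, alpha=0.9, iters=50):
    # S: image-image similarity matrix; Y: initial per-image topic scores.
    # Iterate F <- alpha * S_norm @ F + (1 - alpha) * Y to spread label
    # information between similar images.
    d = S.sum(1)
    S_norm = S / (np.sqrt(np.outer(d, d)) + 1e-12)
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S_norm @ F + (1 - alpha) * Y
    return F

def user_interest(F, owner):
    # Aggregate image-level scores into a user-level interest distribution.
    return np.stack([F[owner == u].mean(0) for u in np.unique(owner)])
```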