id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1604.03540 | Abhinav Shrivastava | Abhinav Shrivastava, Abhinav Gupta, Ross Girshick | Training Region-based Object Detectors with Online Hard Example Mining | To appear in Proceedings of IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2016. (oral) | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of object detection has made significant advances riding on the
wave of region-based ConvNets, but their training procedure still includes many
heuristics and hyperparameters that are costly to tune. We present a simple yet
surprisingly effective online hard example mining (OHEM) algorithm for training
region-based ConvNet detectors. Our motivation is the same as it has always
been -- detection datasets contain an overwhelming number of easy examples and
a small number of hard examples. Automatic selection of these hard examples can
make training more effective and efficient. OHEM is a simple and intuitive
algorithm that eliminates several heuristics and hyperparameters in common use.
But more importantly, it yields consistent and significant boosts in detection
performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness
increases as datasets become larger and more difficult, as demonstrated by the
results on the MS COCO dataset. Moreover, combined with complementary advances
in the field, OHEM leads to state-of-the-art results of 78.9% and 76.3% mAP on
PASCAL VOC 2007 and 2012 respectively.
| [
{
"version": "v1",
"created": "Tue, 12 Apr 2016 19:44:13 GMT"
}
] | 2016-04-13T00:00:00 | [
[
"Shrivastava",
"Abhinav",
""
],
[
"Gupta",
"Abhinav",
""
],
[
"Girshick",
"Ross",
""
]
] | TITLE: Training Region-based Object Detectors with Online Hard Example Mining
ABSTRACT: The field of object detection has made significant advances riding on the
wave of region-based ConvNets, but their training procedure still includes many
heuristics and hyperparameters that are costly to tune. We present a simple yet
surprisingly effective online hard example mining (OHEM) algorithm for training
region-based ConvNet detectors. Our motivation is the same as it has always
been -- detection datasets contain an overwhelming number of easy examples and
a small number of hard examples. Automatic selection of these hard examples can
make training more effective and efficient. OHEM is a simple and intuitive
algorithm that eliminates several heuristics and hyperparameters in common use.
But more importantly, it yields consistent and significant boosts in detection
performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness
increases as datasets become larger and more difficult, as demonstrated by the
results on the MS COCO dataset. Moreover, combined with complementary advances
in the field, OHEM leads to state-of-the-art results of 78.9% and 76.3% mAP on
PASCAL VOC 2007 and 2012 respectively.
| no_new_dataset | 0.951908 |
1407.6810 | Ehsan Elhamifar | Ehsan Elhamifar, Guillermo Sapiro and S. Shankar Sastry | Dissimilarity-based Sparse Subset Selection | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding an informative subset of a large collection of data points or models
is at the center of many problems in computer vision, recommender systems,
bio/health informatics as well as image and natural language processing. Given
pairwise dissimilarities between the elements of a `source set' and a `target
set,' we consider the problem of finding a subset of the source set, called
representatives or exemplars, that can efficiently describe the target set. We
formulate the problem as a row-sparsity regularized trace minimization problem.
Since the proposed formulation is, in general, NP-hard, we consider a convex
relaxation. The solution of our optimization finds representatives and the
assignment of each element of the target set to each representative, hence,
obtaining a clustering. We analyze the solution of our proposed optimization as
a function of the regularization parameter. We show that when the two sets
jointly partition into multiple groups, our algorithm finds representatives
from all groups and reveals clustering of the sets. In addition, we show that
the proposed framework can effectively deal with outliers. Our algorithm works
with arbitrary dissimilarities, which can be asymmetric or violate the triangle
inequality. To efficiently implement our algorithm, we consider an Alternating
Direction Method of Multipliers (ADMM) framework, which results in quadratic
complexity in the problem size. We show that the ADMM implementation allows us to parallelize the algorithm, hence further reducing the computational time.
Finally, by experiments on real-world datasets, we show that our proposed
algorithm improves the state of the art on the two problems of scene
categorization using representative images and time-series modeling and
segmentation using representative models.
| [
{
"version": "v1",
"created": "Fri, 25 Jul 2014 08:30:04 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Apr 2016 03:09:18 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Elhamifar",
"Ehsan",
""
],
[
"Sapiro",
"Guillermo",
""
],
[
"Sastry",
"S. Shankar",
""
]
] | TITLE: Dissimilarity-based Sparse Subset Selection
ABSTRACT: Finding an informative subset of a large collection of data points or models
is at the center of many problems in computer vision, recommender systems,
bio/health informatics as well as image and natural language processing. Given
pairwise dissimilarities between the elements of a `source set' and a `target
set,' we consider the problem of finding a subset of the source set, called
representatives or exemplars, that can efficiently describe the target set. We
formulate the problem as a row-sparsity regularized trace minimization problem.
Since the proposed formulation is, in general, NP-hard, we consider a convex
relaxation. The solution of our optimization finds representatives and the
assignment of each element of the target set to each representative, hence,
obtaining a clustering. We analyze the solution of our proposed optimization as
a function of the regularization parameter. We show that when the two sets
jointly partition into multiple groups, our algorithm finds representatives
from all groups and reveals clustering of the sets. In addition, we show that
the proposed framework can effectively deal with outliers. Our algorithm works
with arbitrary dissimilarities, which can be asymmetric or violate the triangle
inequality. To efficiently implement our algorithm, we consider an Alternating
Direction Method of Multipliers (ADMM) framework, which results in quadratic
complexity in the problem size. We show that the ADMM implementation allows us to parallelize the algorithm, hence further reducing the computational time.
Finally, by experiments on real-world datasets, we show that our proposed
algorithm improves the state of the art on the two problems of scene
categorization using representative images and time-series modeling and
segmentation using representative models.
| no_new_dataset | 0.942295 |
1412.4320 | Daniel Lupei | Christoph Koch, Daniel Lupei, Val Tannen | Incremental View Maintenance For Collection Programming | 24 pages (12 pages plus appendix) | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of incremental view maintenance (IVM), delta query derivation
is an essential technique for speeding up the processing of large, dynamic
datasets. The goal is to generate delta queries that, given a small change in
the input, can update the materialized view more efficiently than via
recomputation. In this work we propose the first solution for the efficient
incrementalization of positive nested relational calculus (NRC+) on bags (with
integer multiplicities). More precisely, we model the cost of NRC+ operators
and classify queries as efficiently incrementalizable if their delta has a
strictly lower cost than full re-evaluation. Then, we identify IncNRC+, a large fragment of NRC+ that is efficiently incrementalizable, and we provide a
semantics-preserving translation that takes any NRC+ query to a collection of
IncNRC+ queries. Furthermore, we prove that incremental maintenance for NRC+ is
within the complexity class NC0 and we showcase how recursive IVM, a technique
that has provided significant speedups over traditional IVM in the case of flat
queries [25], can also be applied to IncNRC+.
| [
{
"version": "v1",
"created": "Sun, 14 Dec 2014 06:12:32 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2016 05:07:14 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Koch",
"Christoph",
""
],
[
"Lupei",
"Daniel",
""
],
[
"Tannen",
"Val",
""
]
] | TITLE: Incremental View Maintenance For Collection Programming
ABSTRACT: In the context of incremental view maintenance (IVM), delta query derivation
is an essential technique for speeding up the processing of large, dynamic
datasets. The goal is to generate delta queries that, given a small change in
the input, can update the materialized view more efficiently than via
recomputation. In this work we propose the first solution for the efficient
incrementalization of positive nested relational calculus (NRC+) on bags (with
integer multiplicities). More precisely, we model the cost of NRC+ operators
and classify queries as efficiently incrementalizable if their delta has a
strictly lower cost than full re-evaluation. Then, we identify IncNRC+, a large fragment of NRC+ that is efficiently incrementalizable, and we provide a
semantics-preserving translation that takes any NRC+ query to a collection of
IncNRC+ queries. Furthermore, we prove that incremental maintenance for NRC+ is
within the complexity class NC0 and we showcase how recursive IVM, a technique
that has provided significant speedups over traditional IVM in the case of flat
queries [25], can also be applied to IncNRC+.
| no_new_dataset | 0.940953 |
1509.07313 | Soumya Banerjee | Soumya Banerjee | Analysis of a Planetary Scale Scientific Collaboration Dataset Reveals
Novel Patterns | Proceedings of the Complex Systems Digital Campus 2015 World
eConference Conference on Complex Systems | null | null | null | cs.SI cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientific collaboration networks are an important component of scientific
output and contribute significantly to expanding our knowledge and to the
economy and gross domestic product of nations. Here we examine a dataset from
the Mendeley scientific collaboration network. We analyze this data using a
combination of machine learning techniques and dynamical models. We find
interesting clusters of countries with different characteristics of
collaboration. Some of these clusters are dominated by developed countries that
have higher number of self connections compared with connections to other
countries. Another cluster is dominated by impoverished nations that have
mostly connections and collaborations with other countries but fewer self
connections. We also propose a complex systems dynamical model that explains
these characteristics. Our model explains how the scientific collaboration
networks of impoverished and developing nations change over time. We also find
interesting patterns in the behaviour of countries that may reflect past
foreign policies and contemporary geopolitics. Our model and analysis give insights and guidelines on how the scientific development of developing countries
can be guided. This is intimately related to fostering economic development of
impoverished nations and creating a richer and more prosperous society.
| [
{
"version": "v1",
"created": "Thu, 24 Sep 2015 11:10:01 GMT"
},
{
"version": "v2",
"created": "Sat, 9 Apr 2016 13:45:48 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Banerjee",
"Soumya",
""
]
] | TITLE: Analysis of a Planetary Scale Scientific Collaboration Dataset Reveals
Novel Patterns
ABSTRACT: Scientific collaboration networks are an important component of scientific
output and contribute significantly to expanding our knowledge and to the
economy and gross domestic product of nations. Here we examine a dataset from
the Mendeley scientific collaboration network. We analyze this data using a
combination of machine learning techniques and dynamical models. We find
interesting clusters of countries with different characteristics of
collaboration. Some of these clusters are dominated by developed countries that
have higher number of self connections compared with connections to other
countries. Another cluster is dominated by impoverished nations that have
mostly connections and collaborations with other countries but fewer self
connections. We also propose a complex systems dynamical model that explains
these characteristics. Our model explains how the scientific collaboration
networks of impoverished and developing nations change over time. We also find
interesting patterns in the behaviour of countries that may reflect past
foreign policies and contemporary geopolitics. Our model and analysis give insights and guidelines on how the scientific development of developing countries
can be guided. This is intimately related to fostering economic development of
impoverished nations and creating a richer and more prosperous society.
| no_new_dataset | 0.941223 |
1511.02283 | Junhua Mao | Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan
Yuille, Kevin Murphy | Generation and Comprehension of Unambiguous Object Descriptions | We have released the Google Refexp dataset together with a toolbox
for visualization and evaluation, see
https://github.com/mjhucla/Google_Refexp_toolbox. Camera ready version for
CVPR 2016 | null | null | null | cs.CV cs.CL cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method that can generate an unambiguous description (known as a
referring expression) of a specific object or region in an image, and which can
also comprehend or interpret such an expression to infer which object is being
described. We show that our method outperforms previous methods that generate
descriptions of objects without taking into account other potentially ambiguous
objects in the scene. Our model is inspired by recent successes of deep
learning methods for image captioning, but while image captioning is difficult
to evaluate, our task allows for easy objective evaluation. We also present a
new large-scale dataset for referring expressions, based on MS-COCO. We have
released the dataset and a toolbox for visualization and evaluation, see
https://github.com/mjhucla/Google_Refexp_toolbox
| [
{
"version": "v1",
"created": "Sat, 7 Nov 2015 02:17:36 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Nov 2015 08:58:08 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Apr 2016 01:11:56 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Mao",
"Junhua",
""
],
[
"Huang",
"Jonathan",
""
],
[
"Toshev",
"Alexander",
""
],
[
"Camburu",
"Oana",
""
],
[
"Yuille",
"Alan",
""
],
[
"Murphy",
"Kevin",
""
]
] | TITLE: Generation and Comprehension of Unambiguous Object Descriptions
ABSTRACT: We propose a method that can generate an unambiguous description (known as a
referring expression) of a specific object or region in an image, and which can
also comprehend or interpret such an expression to infer which object is being
described. We show that our method outperforms previous methods that generate
descriptions of objects without taking into account other potentially ambiguous
objects in the scene. Our model is inspired by recent successes of deep
learning methods for image captioning, but while image captioning is difficult
to evaluate, our task allows for easy objective evaluation. We also present a
new large-scale dataset for referring expressions, based on MS-COCO. We have
released the dataset and a toolbox for visualization and evaluation, see
https://github.com/mjhucla/Google_Refexp_toolbox
| new_dataset | 0.953708 |
1511.02841 | Galin Georgiev | Galin Georgiev | Symmetries and control in generative neural nets | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study generative nets which can control and modify observations, after
being trained on real-life datasets. In order to zoom in on an object, some spatial, color and other attributes are learned by classifiers in specialized attention nets. In field-theoretical terms, these learned symmetry statistics form the gauge group of the data set. Plugging them into the generative layers of
auto-classifiers-encoders (ACE) appears to be the most direct way to
simultaneously: i) generate new observations with arbitrary attributes, from a
given class, ii) describe the low-dimensional manifold encoding the "essence"
of the data, after superfluous attributes are factored out, and iii)
organically control, i.e., move or modify objects within given observations. We
demonstrate the sharp improvement of the generative qualities of shallow ACE,
with added spatial and color symmetry statistics, on the distorted MNIST and
CIFAR10 datasets.
| [
{
"version": "v1",
"created": "Mon, 9 Nov 2015 20:49:03 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Nov 2015 17:49:51 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Apr 2016 21:38:31 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Georgiev",
"Galin",
""
]
] | TITLE: Symmetries and control in generative neural nets
ABSTRACT: We study generative nets which can control and modify observations, after
being trained on real-life datasets. In order to zoom in on an object, some spatial, color and other attributes are learned by classifiers in specialized attention nets. In field-theoretical terms, these learned symmetry statistics form the gauge group of the data set. Plugging them into the generative layers of
auto-classifiers-encoders (ACE) appears to be the most direct way to
simultaneously: i) generate new observations with arbitrary attributes, from a
given class, ii) describe the low-dimensional manifold encoding the "essence"
of the data, after superfluous attributes are factored out, and iii)
organically control, i.e., move or modify objects within given observations. We
demonstrate the sharp improvement of the generative qualities of shallow ACE,
with added spatial and color symmetry statistics, on the distorted MNIST and
CIFAR10 datasets.
| no_new_dataset | 0.950686 |
1511.04164 | Ronghang Hu | Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko,
Trevor Darrell | Natural Language Object Retrieval | Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2016 | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the task of natural language object retrieval, to
localize a target object within a given image based on a natural language query
of the object. Natural language object retrieval differs from the text-based image retrieval task as it involves spatial information about objects within the
scene and global scene context. To address this issue, we propose a novel
Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate
boxes for object retrieval, integrating spatial configurations and global
scene-level contextual information into the network. Our model processes query
text, local image descriptors, spatial configurations and global context
features through a recurrent network, outputs the probability of the query text
conditioned on each candidate box as a score for the box, and can transfer
visual-linguistic knowledge from image captioning domain to our task.
Experimental results demonstrate that our method effectively utilizes both
local and global information, outperforming previous baseline methods
significantly on different datasets and scenarios, and can exploit large scale
vision and language datasets for knowledge transfer.
| [
{
"version": "v1",
"created": "Fri, 13 Nov 2015 05:53:37 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Mar 2016 20:12:44 GMT"
},
{
"version": "v3",
"created": "Mon, 11 Apr 2016 03:36:58 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Hu",
"Ronghang",
""
],
[
"Xu",
"Huazhe",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Saenko",
"Kate",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Natural Language Object Retrieval
ABSTRACT: In this paper, we address the task of natural language object retrieval, to
localize a target object within a given image based on a natural language query
of the object. Natural language object retrieval differs from the text-based image retrieval task as it involves spatial information about objects within the
scene and global scene context. To address this issue, we propose a novel
Spatial Context Recurrent ConvNet (SCRC) model as scoring function on candidate
boxes for object retrieval, integrating spatial configurations and global
scene-level contextual information into the network. Our model processes query
text, local image descriptors, spatial configurations and global context
features through a recurrent network, outputs the probability of the query text
conditioned on each candidate box as a score for the box, and can transfer
visual-linguistic knowledge from image captioning domain to our task.
Experimental results demonstrate that our method effectively utilizes both
local and global information, outperforming previous baseline methods
significantly on different datasets and scenarios, and can exploit large scale
vision and language datasets for knowledge transfer.
| no_new_dataset | 0.952264 |
1511.04273 | Kwang Yi | Kwang Moo Yi, Yannick Verdie, Pascal Fua, Vincent Lepetit | Learning to Assign Orientations to Feature Points | Accepted as Oral presentation in Computer Vision and Pattern
Recognition, 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show how to train a Convolutional Neural Network to assign a canonical
orientation to feature points given an image patch centered on the feature
point. Our method improves feature point matching upon the state of the art and
can be used in conjunction with any existing rotation sensitive descriptors. To
avoid the tedious and almost impossible task of finding a target orientation to
learn, we propose to use Siamese networks which implicitly find the optimal
orientations during training. We also propose a new type of activation function
for Neural Networks that generalizes the popular ReLU, maxout, and PReLU
activation functions. This novel activation performs better for our task. We
validate the effectiveness of our method extensively with four existing
datasets, including two non-planar datasets, as well as our own dataset. We
show that we outperform the state-of-the-art without the need of retraining for
each dataset.
| [
{
"version": "v1",
"created": "Fri, 13 Nov 2015 13:23:09 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2016 14:03:54 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Yi",
"Kwang Moo",
""
],
[
"Verdie",
"Yannick",
""
],
[
"Fua",
"Pascal",
""
],
[
"Lepetit",
"Vincent",
""
]
] | TITLE: Learning to Assign Orientations to Feature Points
ABSTRACT: We show how to train a Convolutional Neural Network to assign a canonical
orientation to feature points given an image patch centered on the feature
point. Our method improves feature point matching upon the state of the art and
can be used in conjunction with any existing rotation sensitive descriptors. To
avoid the tedious and almost impossible task of finding a target orientation to
learn, we propose to use Siamese networks which implicitly find the optimal
orientations during training. We also propose a new type of activation function
for Neural Networks that generalizes the popular ReLU, maxout, and PReLU
activation functions. This novel activation performs better for our task. We
validate the effectiveness of our method extensively with four existing
datasets, including two non-planar datasets, as well as our own dataset. We
show that we outperform the state-of-the-art without the need of retraining for
each dataset.
| no_new_dataset | 0.932207 |
1512.05227 | Yin Cui | Yin Cui, Feng Zhou, Yuanqing Lin, Serge Belongie | Fine-grained Categorization and Dataset Bootstrapping using Deep Metric
Learning with Humans in the Loop | 10 pages, 9 figures, CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing fine-grained visual categorization methods often suffer from three
challenges: lack of training data, large number of fine-grained categories, and
high intraclass vs. low inter-class variance. In this work we propose a generic
iterative framework for fine-grained categorization and dataset bootstrapping
that handles these three challenges. Using deep metric learning with humans in
the loop, we learn a low dimensional feature embedding with anchor points on
manifolds for each category. These anchor points capture intra-class variances
and remain discriminative between classes. In each round, images with high
confidence scores from our model are sent to humans for labeling. By comparing
with exemplar images, labelers mark each candidate image as either a "true
positive" or a "false positive". True positives are added into our current
dataset and false positives are regarded as "hard negatives" for our metric
learning model. Then the model is retrained with an expanded dataset and hard
negatives for the next round. To demonstrate the effectiveness of the proposed
framework, we bootstrap a fine-grained flower dataset with 620 categories from
Instagram images. The proposed deep metric learning scheme is evaluated on both
our dataset and the CUB-200-2001 Birds dataset. Experimental evaluations show
significant performance gain using dataset bootstrapping and demonstrate
state-of-the-art results achieved by the proposed deep metric learning methods.
| [
{
"version": "v1",
"created": "Wed, 16 Dec 2015 16:14:22 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2016 04:34:13 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Cui",
"Yin",
""
],
[
"Zhou",
"Feng",
""
],
[
"Lin",
"Yuanqing",
""
],
[
"Belongie",
"Serge",
""
]
] | TITLE: Fine-grained Categorization and Dataset Bootstrapping using Deep Metric
Learning with Humans in the Loop
ABSTRACT: Existing fine-grained visual categorization methods often suffer from three
challenges: lack of training data, large number of fine-grained categories, and
high intraclass vs. low inter-class variance. In this work we propose a generic
iterative framework for fine-grained categorization and dataset bootstrapping
that handles these three challenges. Using deep metric learning with humans in
the loop, we learn a low dimensional feature embedding with anchor points on
manifolds for each category. These anchor points capture intra-class variances
and remain discriminative between classes. In each round, images with high
confidence scores from our model are sent to humans for labeling. By comparing
with exemplar images, labelers mark each candidate image as either a "true
positive" or a "false positive". True positives are added into our current
dataset and false positives are regarded as "hard negatives" for our metric
learning model. Then the model is retrained with an expanded dataset and hard
negatives for the next round. To demonstrate the effectiveness of the proposed
framework, we bootstrap a fine-grained flower dataset with 620 categories from
Instagram images. The proposed deep metric learning scheme is evaluated on both
our dataset and the CUB-200-2001 Birds dataset. Experimental evaluations show
significant performance gain using dataset bootstrapping and demonstrate
state-of-the-art results achieved by the proposed deep metric learning methods.
| no_new_dataset | 0.928862 |
1603.07057 | Tal Hassner | Iacopo Masi, Anh Tuan Tran, Jatuporn Toy Leksut, Tal Hassner and
Gerard Medioni | Do We Really Need to Collect Millions of Faces for Effective Face
Recognition? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face recognition capabilities have recently made extraordinary leaps. Though
this progress is at least partially due to ballooning training set sizes --
huge numbers of face images downloaded and labeled for identity -- it is not
clear if the formidable task of collecting so many images is truly necessary.
We propose a far more accessible means of increasing training data sizes for
face recognition systems. Rather than manually harvesting and labeling more
faces, we simply synthesize them. We describe novel methods of enriching an
existing dataset with important facial appearance variations by manipulating
the faces it contains. We further apply this synthesis approach when matching
query images represented using a standard convolutional neural network. The
effect of training and testing with synthesized images is extensively tested on
the LFW and IJB-A (verification and identification) benchmarks and Janus CS2.
The performances obtained by our approach match state of the art results
reported by systems trained on millions of downloaded images.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 02:57:15 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Apr 2016 02:25:35 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Masi",
"Iacopo",
""
],
[
"Tran",
"Anh Tuan",
""
],
[
"Leksut",
"Jatuporn Toy",
""
],
[
"Hassner",
"Tal",
""
],
[
"Medioni",
"Gerard",
""
]
] | TITLE: Do We Really Need to Collect Millions of Faces for Effective Face
Recognition?
ABSTRACT: Face recognition capabilities have recently made extraordinary leaps. Though
this progress is at least partially due to ballooning training set sizes --
huge numbers of face images downloaded and labeled for identity -- it is not
clear if the formidable task of collecting so many images is truly necessary.
We propose a far more accessible means of increasing training data sizes for
face recognition systems. Rather than manually harvesting and labeling more
faces, we simply synthesize them. We describe novel methods of enriching an
existing dataset with important facial appearance variations by manipulating
the faces it contains. We further apply this synthesis approach when matching
query images represented using a standard convolutional neural network. The
effect of training and testing with synthesized images is extensively tested on
the LFW and IJB-A (verification and identification) benchmarks and Janus CS2.
The performances obtained by our approach match state of the art results
reported by systems trained on millions of downloaded images.
| no_new_dataset | 0.950088 |
1603.08895 | Zeynep Akata PhD | Yongqin Xian and Zeynep Akata and Gaurav Sharma and Quynh Nguyen and
Matthias Hein and Bernt Schiele | Latent Embeddings for Zero-shot Classification | 2016 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel latent embedding model for learning a compatibility
function between image and class embeddings, in the context of zero-shot
classification. The proposed method augments the state-of-the-art bilinear
compatibility model by incorporating latent variables. Instead of learning a
single bilinear map, it learns a collection of maps, with the selection of which map to use being a latent variable for the current image-class pair. We
train the model with a ranking based objective function which penalizes
incorrect rankings of the true class for a given image. We empirically
demonstrate that our model improves the state-of-the-art for various class
embeddings consistently on three challenging publicly available datasets for
the zero-shot setting. Moreover, our method leads to visually highly
interpretable results with clear clusters of different fine-grained object
properties that correspond to different latent variable maps.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 19:24:38 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Apr 2016 10:33:02 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Xian",
"Yongqin",
""
],
[
"Akata",
"Zeynep",
""
],
[
"Sharma",
"Gaurav",
""
],
[
"Nguyen",
"Quynh",
""
],
[
"Hein",
"Matthias",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Latent Embeddings for Zero-shot Classification
ABSTRACT: We present a novel latent embedding model for learning a compatibility
function between image and class embeddings, in the context of zero-shot
classification. The proposed method augments the state-of-the-art bilinear
compatibility model by incorporating latent variables. Instead of learning a
single bilinear map, it learns a collection of maps, with the selection of which map to use being a latent variable for the current image-class pair. We
train the model with a ranking based objective function which penalizes
incorrect rankings of the true class for a given image. We empirically
demonstrate that our model improves the state-of-the-art for various class
embeddings consistently on three challenging publicly available datasets for
the zero-shot setting. Moreover, our method leads to visually highly
interpretable results with clear clusters of different fine-grained object
properties that correspond to different latent variable maps.
| no_new_dataset | 0.950732 |
1604.02605 | Mohammed El-Kebir | Mohammed El-Kebir and Gryte Satas and Layla Oesper and Benjamin J.
Raphael | Multi-State Perfect Phylogeny Mixture Deconvolution and Applications to
Cancer Sequencing | RECOMB 2016 | null | null | null | cs.DS q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reconstruction of phylogenetic trees from mixed populations has become
important in the study of cancer evolution, as sequencing is often performed on
bulk tumor tissue containing mixed populations of cells. Recent work has shown
how to reconstruct a perfect phylogeny tree from samples that contain mixtures
of two-state characters, where each character/locus is either mutated or not.
However, most cancers contain more complex mutations, such as copy-number
aberrations, that exhibit more than two states. We formulate the Multi-State
Perfect Phylogeny Mixture Deconvolution Problem of reconstructing a multi-state
perfect phylogeny tree given mixtures of the leaves of the tree. We
characterize the solutions of this problem as a restricted class of spanning
trees in a graph constructed from the input data, and prove that the problem is
NP-complete. We derive an algorithm to enumerate such trees in the important
special case of cladistic characters, where the ordering of the states of each
character is given. We apply our algorithm to simulated data and to two cancer
datasets. On simulated data, we find that for a small number of samples, the
Multi-State Perfect Phylogeny Mixture Deconvolution Problem often has many
solutions, but that this ambiguity declines quickly as the number of samples
increases. On real data, we recover copy-neutral loss of heterozygosity,
single-copy amplification and single-copy deletion events, as well as their
interactions with single-nucleotide variants.
| [
{
"version": "v1",
"created": "Sat, 9 Apr 2016 20:00:07 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"El-Kebir",
"Mohammed",
""
],
[
"Satas",
"Gryte",
""
],
[
"Oesper",
"Layla",
""
],
[
"Raphael",
"Benjamin J.",
""
]
] | TITLE: Multi-State Perfect Phylogeny Mixture Deconvolution and Applications to
Cancer Sequencing
ABSTRACT: The reconstruction of phylogenetic trees from mixed populations has become
important in the study of cancer evolution, as sequencing is often performed on
bulk tumor tissue containing mixed populations of cells. Recent work has shown
how to reconstruct a perfect phylogeny tree from samples that contain mixtures
of two-state characters, where each character/locus is either mutated or not.
However, most cancers contain more complex mutations, such as copy-number
aberrations, that exhibit more than two states. We formulate the Multi-State
Perfect Phylogeny Mixture Deconvolution Problem of reconstructing a multi-state
perfect phylogeny tree given mixtures of the leaves of the tree. We
characterize the solutions of this problem as a restricted class of spanning
trees in a graph constructed from the input data, and prove that the problem is
NP-complete. We derive an algorithm to enumerate such trees in the important
special case of cladistic characters, where the ordering of the states of each
character is given. We apply our algorithm to simulated data and to two cancer
datasets. On simulated data, we find that for a small number of samples, the
Multi-State Perfect Phylogeny Mixture Deconvolution Problem often has many
solutions, but that this ambiguity declines quickly as the number of samples
increases. On real data, we recover copy-neutral loss of heterozygosity,
single-copy amplification and single-copy deletion events, as well as their
interactions with single-nucleotide variants.
| no_new_dataset | 0.946547 |
1604.02612 | Mois\'es Pereira | Mois\'es H. R. Pereira, Fl\'avio L. C. P\'adua, Adriano C. M. Pereira,
Fabr\'icio Benevenuto, Daniel H. Dalip | Fusing Audio, Textual and Visual Features for Sentiment Analysis of News
Videos | 5 pages, 1 figure, International AAAI Conference on Web and Social
Media | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel approach to perform sentiment analysis of news
videos, based on the fusion of audio, textual and visual clues extracted from
their contents. The proposed approach aims at contributing to the
semiodiscoursive study regarding the construction of the ethos (identity) of
this media universe, which has become a central part of the modern-day lives of
millions of people. To achieve this goal, we apply state-of-the-art
computational methods for (1) automatic emotion recognition from facial
expressions, (2) extraction of modulations in the participants' speeches and
(3) sentiment analysis from the closed caption associated with the videos of interest. More specifically, we compute features such as visual intensities of recognized emotions, field sizes of participants, voicing probability, sound
loudness, speech fundamental frequencies and the sentiment scores (polarities)
from text sentences in the closed caption. Experimental results with a dataset
containing 520 annotated news videos from three Brazilian and one American
popular TV newscasts show that our approach achieves an accuracy of up to 84%
in the sentiments (tension levels) classification task, thus demonstrating its
high potential to be used by media analysts in several applications,
especially, in the journalistic domain.
| [
{
"version": "v1",
"created": "Sat, 9 Apr 2016 22:00:27 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Pereira",
"Moisés H. R.",
""
],
[
"Pádua",
"Flávio L. C.",
""
],
[
"Pereira",
"Adriano C. M.",
""
],
[
"Benevenuto",
"Fabrício",
""
],
[
"Dalip",
"Daniel H.",
""
]
] | TITLE: Fusing Audio, Textual and Visual Features for Sentiment Analysis of News
Videos
ABSTRACT: This paper presents a novel approach to perform sentiment analysis of news
videos, based on the fusion of audio, textual and visual clues extracted from
their contents. The proposed approach aims at contributing to the
semiodiscoursive study regarding the construction of the ethos (identity) of
this media universe, which has become a central part of the modern-day lives of
millions of people. To achieve this goal, we apply state-of-the-art
computational methods for (1) automatic emotion recognition from facial
expressions, (2) extraction of modulations in the participants' speeches and
(3) sentiment analysis from the closed caption associated with the videos of interest. More specifically, we compute features such as visual intensities of recognized emotions, field sizes of participants, voicing probability, sound
loudness, speech fundamental frequencies and the sentiment scores (polarities)
from text sentences in the closed caption. Experimental results with a dataset
containing 520 annotated news videos from three Brazilian and one American
popular TV newscasts show that our approach achieves an accuracy of up to 84%
in the sentiments (tension levels) classification task, thus demonstrating its
high potential to be used by media analysts in several applications,
especially, in the journalistic domain.
| new_dataset | 0.784773 |
1604.02647 | Shunsuke Saito | Shunsuke Saito, Tianye Li, Hao Li | Real-Time Facial Segmentation and Performance Capture from RGB Input | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the concept of unconstrained real-time 3D facial performance
capture through explicit semantic segmentation in the RGB input. To ensure
robustness, cutting edge supervised learning approaches rely on large training
datasets of face images captured in the wild. While impressive tracking quality
has been demonstrated for faces that are largely visible, any occlusion due to
hair, accessories, or hand-to-face gestures would result in significant visual
artifacts and loss of tracking accuracy. The modeling of occlusions has been
mostly avoided due to its immense space of appearance variability. To address
this curse of high dimensionality, we perform tracking in unconstrained images
assuming non-face regions can be fully masked out. Along with recent
breakthroughs in deep learning, we demonstrate that pixel-level facial
segmentation is possible in real-time by repurposing convolutional neural
networks designed originally for general semantic segmentation. We develop an
efficient architecture based on a two-stream deconvolution network with
complementary characteristics, and introduce carefully designed training
samples and data augmentation strategies for improved segmentation accuracy and
robustness. We adopt a state-of-the-art regression-based facial tracking
framework with segmented face images as training, and demonstrate accurate and
uninterrupted facial performance capture in the presence of extreme occlusion
and even side views. Furthermore, the resulting segmentation can be directly
used to composite partial 3D face models on the input images and enable
seamless facial manipulation tasks, such as virtual make-up or face
replacement.
| [
{
"version": "v1",
"created": "Sun, 10 Apr 2016 07:04:47 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Saito",
"Shunsuke",
""
],
[
"Li",
"Tianye",
""
],
[
"Li",
"Hao",
""
]
] | TITLE: Real-Time Facial Segmentation and Performance Capture from RGB Input
ABSTRACT: We introduce the concept of unconstrained real-time 3D facial performance
capture through explicit semantic segmentation in the RGB input. To ensure
robustness, cutting edge supervised learning approaches rely on large training
datasets of face images captured in the wild. While impressive tracking quality
has been demonstrated for faces that are largely visible, any occlusion due to
hair, accessories, or hand-to-face gestures would result in significant visual
artifacts and loss of tracking accuracy. The modeling of occlusions has been
mostly avoided due to its immense space of appearance variability. To address
this curse of high dimensionality, we perform tracking in unconstrained images
assuming non-face regions can be fully masked out. Along with recent
breakthroughs in deep learning, we demonstrate that pixel-level facial
segmentation is possible in real-time by repurposing convolutional neural
networks designed originally for general semantic segmentation. We develop an
efficient architecture based on a two-stream deconvolution network with
complementary characteristics, and introduce carefully designed training
samples and data augmentation strategies for improved segmentation accuracy and
robustness. We adopt a state-of-the-art regression-based facial tracking
framework with segmented face images as training, and demonstrate accurate and
uninterrupted facial performance capture in the presence of extreme occlusion
and even side views. Furthermore, the resulting segmentation can be directly
used to composite partial 3D face models on the input images and enable
seamless facial manipulation tasks, such as virtual make-up or face
replacement.
| no_new_dataset | 0.949669 |
1604.02657 | Chengde Wan Mr | Chengde Wan, Angela Yao, Luc Van Gool | Direction matters: hand pose estimation from local surface normals | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a hierarchical regression framework for estimating hand joint
positions from single depth images based on local surface normals. The
hierarchical regression follows the tree-structured topology of the hand from wrist to fingertips. We propose a conditional regression forest, i.e., the Frame
Conditioned Regression Forest (FCRF) which uses a new normal difference
feature. At each stage of the regression, the frame of reference is established
from either the local surface normal or previously estimated hand joints. By
making the regression with respect to the local frame, the pose estimation is
more robust to rigid transformations. We also introduce a new efficient
approximation to estimate surface normals. We verify the effectiveness of our
method by conducting experiments on two challenging real-world datasets and
show consistent improvements over previous discriminative pose estimation
methods.
| [
{
"version": "v1",
"created": "Sun, 10 Apr 2016 09:16:28 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Wan",
"Chengde",
""
],
[
"Yao",
"Angela",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Direction matters: hand pose estimation from local surface normals
ABSTRACT: We present a hierarchical regression framework for estimating hand joint
positions from single depth images based on local surface normals. The
hierarchical regression follows the tree-structured topology of the hand from wrist to fingertips. We propose a conditional regression forest, i.e., the Frame
Conditioned Regression Forest (FCRF) which uses a new normal difference
feature. At each stage of the regression, the frame of reference is established
from either the local surface normal or previously estimated hand joints. By
making the regression with respect to the local frame, the pose estimation is
more robust to rigid transformations. We also introduce a new efficient
approximation to estimate surface normals. We verify the effectiveness of our
method by conducting experiments on two challenging real-world datasets and
show consistent improvements over previous discriminative pose estimation
methods.
| no_new_dataset | 0.953665 |
1604.02694 | Hao Fu | Hao Fu, Xing Xie, Yong Rui, Defu Lian, Guangzhong Sun, Enhong Chen | Predicting Social Status via Social Networks: A Case Study on
University, Occupation, and Region | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social status refers to the relative position within the society. It is an
important notion in sociology and related research. The problem of measuring
social status has been studied for many years. Various indicators are proposed
to assess social status of individuals, including educational attainment,
occupation, and income/wealth. However, these indicators are sometimes
difficult to collect or measure.
We investigate social networks for alternative measures of social status.
Online activities expose certain traits of users in the real world. We are
interested in how these activities are related to social status, and how social
status can be predicted with social network data. To the best of our knowledge,
this is the first study on connecting online activities with social status in
reality.
In particular, we focus on the network structure of microblogs in this study.
A user following another implies some kind of status. We cast the predicted
social status of users to the "status" of real-world entities, e.g.,
universities, occupations, and regions, so that we can compare and validate
predicted results with facts in the real world. We propose an efficient
algorithm for this task and evaluate it on a dataset consisting of 3.4 million
users from Sina Weibo. The result shows that it is possible to predict social
status with reasonable accuracy using social network data. We also point out
challenges and limitations of this approach, e.g., the inconsistency between online
popularity and real-world status for certain users. Our findings provide
insights on analyzing online social status and future designs of ranking
schemes for social networks.
| [
{
"version": "v1",
"created": "Sun, 10 Apr 2016 14:21:29 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Fu",
"Hao",
""
],
[
"Xie",
"Xing",
""
],
[
"Rui",
"Yong",
""
],
[
"Lian",
"Defu",
""
],
[
"Sun",
"Guangzhong",
""
],
[
"Chen",
"Enhong",
""
]
] | TITLE: Predicting Social Status via Social Networks: A Case Study on
University, Occupation, and Region
ABSTRACT: Social status refers to the relative position within the society. It is an
important notion in sociology and related research. The problem of measuring
social status has been studied for many years. Various indicators are proposed
to assess social status of individuals, including educational attainment,
occupation, and income/wealth. However, these indicators are sometimes
difficult to collect or measure.
We investigate social networks for alternative measures of social status.
Online activities expose certain traits of users in the real world. We are
interested in how these activities are related to social status, and how social
status can be predicted with social network data. To the best of our knowledge,
this is the first study on connecting online activities with social status in
reality.
In particular, we focus on the network structure of microblogs in this study.
A user following another implies some kind of status. We cast the predicted
social status of users to the "status" of real-world entities, e.g.,
universities, occupations, and regions, so that we can compare and validate
predicted results with facts in the real world. We propose an efficient
algorithm for this task and evaluate it on a dataset consisting of 3.4 million
users from Sina Weibo. The result shows that it is possible to predict social
status with reasonable accuracy using social network data. We also point out
challenges and limitations of this approach, e.g., the inconsistency between online
popularity and real-world status for certain users. Our findings provide
insights on analyzing online social status and future designs of ranking
schemes for social networks.
| no_new_dataset | 0.933613 |
1604.02808 | Amir Shahroudy | Amir Shahroudy, Jun Liu, Tian-Tsong Ng, Gang Wang | NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent approaches in depth-based human activity analysis achieved outstanding
performance and proved the effectiveness of 3D representation for
classification of action classes. Currently available depth-based and
RGB+D-based action recognition benchmarks have a number of limitations,
including the lack of training samples, distinct class labels, camera views and
variety of subjects. In this paper we introduce a large-scale dataset for RGB+D
human action recognition with more than 56 thousand video samples and 4 million
frames, collected from 40 distinct subjects. Our dataset contains 60 different
action classes including daily, mutual, and health-related actions. In
addition, we propose a new recurrent neural network structure to model the
long-term temporal correlation of the features for each body part, and utilize
them for better action classification. Experimental results show the advantages
of applying deep learning methods over state-of-the-art hand-crafted features
on the suggested cross-subject and cross-view evaluation criteria for our
dataset. The introduction of this large scale dataset will enable the community
to apply, develop and adapt various data-hungry learning techniques for the
task of depth-based and RGB+D-based human activity analysis.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 06:44:53 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Shahroudy",
"Amir",
""
],
[
"Liu",
"Jun",
""
],
[
"Ng",
"Tian-Tsong",
""
],
[
"Wang",
"Gang",
""
]
] | TITLE: NTU RGB+D: A Large Scale Dataset for 3D Human Activity Analysis
ABSTRACT: Recent approaches in depth-based human activity analysis achieved outstanding
performance and proved the effectiveness of 3D representation for
classification of action classes. Currently available depth-based and
RGB+D-based action recognition benchmarks have a number of limitations,
including the lack of training samples, distinct class labels, camera views and
variety of subjects. In this paper we introduce a large-scale dataset for RGB+D
human action recognition with more than 56 thousand video samples and 4 million
frames, collected from 40 distinct subjects. Our dataset contains 60 different
action classes including daily, mutual, and health-related actions. In
addition, we propose a new recurrent neural network structure to model the
long-term temporal correlation of the features for each body part, and utilize
them for better action classification. Experimental results show the advantages
of applying deep learning methods over state-of-the-art hand-crafted features
on the suggested cross-subject and cross-view evaluation criteria for our
dataset. The introduction of this large scale dataset will enable the community
to apply, develop and adapt various data-hungry learning techniques for the
task of depth-based and RGB+D-based human activity analysis.
| new_dataset | 0.960249 |
1604.02907 | Hossein Nourikhah | Hossein Nourikhah, Mohammad Kazem Akbari, Mohammad Kalantari | Modeling and predicting measured response time of cloud-based web
services using long-memory time series | null | The Journal of Supercomputing, February 2015, Volume 71, Issue 2,
pp 673-696 | 10.1007/s11227-014-1317-4 | null | cs.NI cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting cloud performance from user's perspective is a complex task,
because of several factors involved in providing the service to the consumer.
In this work, the response time of 10 real-world services is analyzed. We have
observed long memory in terms of the measured response time of the
CPU-intensive services and statistically verified this observation using
estimators of the Hurst exponent. Then, naïve, mean, autoregressive
integrated moving average (ARIMA) and autoregressive fractionally integrated
moving average (ARFIMA) methods are used to forecast the future values of
quality of service (QoS) at runtime. Results of the cross-validation over the
10 datasets show that the long-memory ARFIMA model provides the mean of 37.5 %
and the maximum of 57.8 % reduction in the forecast error when compared to the
short-memory ARIMA model according to the standard error measure of mean
absolute percentage error. Our work implies that consideration of the
long-range dependence in QoS data can help to improve the selection of services
according to their possible future QoS values.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 12:07:20 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Nourikhah",
"Hossein",
""
],
[
"Akbari",
"Mohammad Kazem",
""
],
[
"Kalantari",
"Mohammad",
""
]
] | TITLE: Modeling and predicting measured response time of cloud-based web
services using long-memory time series
ABSTRACT: Predicting cloud performance from user's perspective is a complex task,
because of several factors involved in providing the service to the consumer.
In this work, the response time of 10 real-world services is analyzed. We have
observed long memory in terms of the measured response time of the
CPU-intensive services and statistically verified this observation using
estimators of the Hurst exponent. Then, naïve, mean, autoregressive
integrated moving average (ARIMA) and autoregressive fractionally integrated
moving average (ARFIMA) methods are used to forecast the future values of
quality of service (QoS) at runtime. Results of the cross-validation over the
10 datasets show that the long-memory ARFIMA model provides a mean of 37.5%
and a maximum of 57.8% reduction in the forecast error when compared to the
short-memory ARIMA model according to the standard error measure of mean
absolute percentage error. Our work implies that consideration of the
long-range dependence in QoS data can help to improve the selection of services
according to their possible future QoS values.
| no_new_dataset | 0.946941 |
1604.02935 | Nathan Hodas | Nathan Oken Hodas, Alex Endert | Adding Semantic Information into Data Models by Learning Domain
Expertise from User Interaction | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interactive visual analytic systems enable users to discover insights from
complex data. Users can express and test hypotheses via user interaction,
leveraging their domain expertise and prior knowledge to guide and steer the
analytic models in the system. For example, semantic interaction techniques
enable systems to learn from the user's interactions and steer the underlying
analytic models based on the user's analytical reasoning. However, an open
challenge is how to not only steer models based on the dimensions or features
of the data, but how to add dimensions or attributes to the data based on the
domain expertise of the user. In this paper, we present a technique for
inferring and appending dimensions onto the dataset based on the prior
expertise of the user expressed via user interactions. Our technique enables
users to directly manipulate a spatial organization of data, from which both
the dimensions of the data are weighted, and also dimensions created to
represent the prior knowledge the user brings to the system. We describe this
technique and demonstrate its utility via a use case.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 18:15:49 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Hodas",
"Nathan Oken",
""
],
[
"Endert",
"Alex",
""
]
] | TITLE: Adding Semantic Information into Data Models by Learning Domain
Expertise from User Interaction
ABSTRACT: Interactive visual analytic systems enable users to discover insights from
complex data. Users can express and test hypotheses via user interaction,
leveraging their domain expertise and prior knowledge to guide and steer the
analytic models in the system. For example, semantic interaction techniques
enable systems to learn from the user's interactions and steer the underlying
analytic models based on the user's analytical reasoning. However, an open
challenge is how to not only steer models based on the dimensions or features
of the data, but how to add dimensions or attributes to the data based on the
domain expertise of the user. In this paper, we present a technique for
inferring and appending dimensions onto the dataset based on the prior
expertise of the user expressed via user interactions. Our technique enables
users to directly manipulate a spatial organization of data, from which both
the dimensions of the data are weighted, and also dimensions created to
represent the prior knowledge the user brings to the system. We describe this
technique and demonstrate its utility via a use case.
| no_new_dataset | 0.95388 |
1604.02975 | Binod Bhattarai | Binod Bhattarai, Gaurav Sharma, Frederic Jurie | CP-mtML: Coupled Projection multi-task Metric Learning for Large Scale
Face Retrieval | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel Coupled Projection multi-task Metric Learning (CP-mtML)
method for large scale face retrieval. In contrast to previous works which were
limited to low dimensional features and small datasets, the proposed method
scales to large datasets with high dimensional face descriptors. It utilises
pairwise (dis-)similarity constraints as supervision and hence does not require
exhaustive class annotation for every training image. While, traditionally,
multi-task learning methods have been validated on the same dataset but different
tasks, we work on the more challenging setting with heterogeneous datasets and
different tasks. We show empirical validation on multiple face image datasets
of different facial traits, e.g. identity, age and expression. We use classic
Local Binary Pattern (LBP) descriptors along with the recent Deep Convolutional
Neural Network (CNN) features. The experiments clearly demonstrate the
scalability and improved performance of the proposed method on the tasks of
identity and age based face image retrieval compared to competitive existing
methods, on the standard datasets and with the presence of a million distractor
face images.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 14:30:38 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Bhattarai",
"Binod",
""
],
[
"Sharma",
"Gaurav",
""
],
[
"Jurie",
"Frederic",
""
]
] | TITLE: CP-mtML: Coupled Projection multi-task Metric Learning for Large Scale
Face Retrieval
ABSTRACT: We propose a novel Coupled Projection multi-task Metric Learning (CP-mtML)
method for large scale face retrieval. In contrast to previous works which were
limited to low dimensional features and small datasets, the proposed method
scales to large datasets with high dimensional face descriptors. It utilises
pairwise (dis-)similarity constraints as supervision and hence does not require
exhaustive class annotation for every training image. While, traditionally,
multi-task learning methods have been validated on the same dataset but different
tasks, we work on the more challenging setting with heterogeneous datasets and
different tasks. We show empirical validation on multiple face image datasets
of different facial traits, e.g. identity, age and expression. We use classic
Local Binary Pattern (LBP) descriptors along with the recent Deep Convolutional
Neural Network (CNN) features. The experiments clearly demonstrate the
scalability and improved performance of the proposed method on the tasks of
identity and age based face image retrieval compared to competitive existing
methods, on the standard datasets and with the presence of a million distractor
face images.
| no_new_dataset | 0.948346 |
1604.03034 | Dezhi Fang | Dezhi Fang, Duen Horng Chau | M3: Scaling Up Machine Learning via Memory Mapping | 2 pages, 1 figure, 1 table | null | 10.1145/1235 | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To process data that do not fit in RAM, conventional wisdom would suggest
using distributed approaches. However, recent research has demonstrated virtual
memory's strong potential in scaling up graph mining algorithms on a single
machine. We propose to use a similar approach for general machine learning. We
contribute: (1) our latest finding that memory mapping is also a feasible
technique for scaling up general machine learning algorithms like logistic
regression and k-means, when data fits in or exceeds RAM (we tested datasets up
to 190GB); (2) an approach, called M3, that enables existing machine learning
algorithms to work with out-of-core datasets through memory mapping, achieving
a speed that is significantly faster than a 4-instance Spark cluster, and
comparable to an 8-instance cluster.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 17:12:14 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Fang",
"Dezhi",
""
],
[
"Chau",
"Duen Horng",
""
]
] | TITLE: M3: Scaling Up Machine Learning via Memory Mapping
ABSTRACT: To process data that do not fit in RAM, conventional wisdom would suggest
using distributed approaches. However, recent research has demonstrated virtual
memory's strong potential in scaling up graph mining algorithms on a single
machine. We propose to use a similar approach for general machine learning. We
contribute: (1) our latest finding that memory mapping is also a feasible
technique for scaling up general machine learning algorithms like logistic
regression and k-means, when data fits in or exceeds RAM (we tested datasets up
to 190GB); (2) an approach, called M3, that enables existing machine learning
algorithms to work with out-of-core datasets through memory mapping, achieving
a speed that is significantly faster than a 4-instance Spark cluster, and
comparable to an 8-instance cluster.
| no_new_dataset | 0.950503 |
1604.03044 | Diego Saez-Trumper | Ricardo Baeza-Yates, Diego Saez-Trumper | Wisdom of the Crowd or Wisdom of a Few? An Analysis of Users' Content
Generation | null | Proceedings of the 26th ACM Conference on Hypertext & Social
Media, 2015 | 10.1145/2700171.2791056 | null | cs.CY cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we analyze how user generated content (UGC) is created,
challenging the well known {\it wisdom of crowds} concept. Although it is known
that user activity in most settings follows a power law, that is, few people do
a lot, while most do nothing, there are few studies that characterize this
activity well. In our analysis of datasets from two different social networks,
Facebook and Twitter, we find that a small percentage of active users and much
less of all users represent 50\% of the UGC. We also analyze the dynamic
behavior of the generation of this content to find that the set of most active
users is quite stable in time. Moreover, we study the social graph, finding
that those active users are highly connected among them. This implies that most
of the wisdom comes from a few users, challenging the independence assumption
needed to have a wisdom of crowds. We also address the content that is never
seen by any people, which we call digital desert, that challenges the
assumption that the content of every person should be taken into account in a
collective decision. We also compare our results with Wikipedia data and we
address the quality of UGC content using an Amazon dataset. In the end, our
results are not surprising, as the Web is a reflection of our own society,
where economic or political power is also in the hands of minorities.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 17:53:28 GMT"
}
] | 2016-04-12T00:00:00 | [
[
"Baeza-Yates",
"Ricardo",
""
],
[
"Saez-Trumper",
"Diego",
""
]
] | TITLE: Wisdom of the Crowd or Wisdom of a Few? An Analysis of Users' Content
Generation
ABSTRACT: In this paper we analyze how user generated content (UGC) is created,
challenging the well known {\it wisdom of crowds} concept. Although it is known
that user activity in most settings follows a power law, that is, few people do
a lot, while most do nothing, there are few studies that characterize this
activity well. In our analysis of datasets from two different social networks,
Facebook and Twitter, we find that a small percentage of active users and much
less of all users represent 50\% of the UGC. We also analyze the dynamic
behavior of the generation of this content to find that the set of most active
users is quite stable in time. Moreover, we study the social graph, finding
that those active users are highly connected among them. This implies that most
of the wisdom comes from a few users, challenging the independence assumption
needed to have a wisdom of crowds. We also address the content that is never
seen by any people, which we call digital desert, that challenges the
assumption that the content of every person should be taken into account in a
collective decision. We also compare our results with Wikipedia data and we
address the quality of UGC content using an Amazon dataset. In the end, our
results are not surprising, as the Web is a reflection of our own society,
where economic or political power is also in the hands of minorities.
| no_new_dataset | 0.928018 |
1602.06688 | Hiroshi Sakamoto | Yoshimasa Takabatake, Kenta Nakashima, Tetsuji Kuboyama, Yasuo Tabei,
Hiroshi Sakamoto | siEDM: an efficient string index and search algorithm for edit distance
with moves | 23 pages | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although several self-indexes for highly repetitive text collections exist,
developing an index and search algorithm with editing operations remains a
challenge. Edit distance with moves (EDM) is a string-to-string distance
measure that includes substring moves in addition to ordinal editing operations
to turn one string into another. Although the problem of computing EDM is
intractable, it has a wide range of potential applications, especially in
approximate string retrieval. Despite the importance of computing EDM, there
has been no efficient method for indexing and searching large text collections
based on the EDM measure. We propose the first algorithm, named string index
for edit distance with moves (siEDM), for indexing and searching strings with
EDM. The siEDM algorithm builds an index structure by leveraging the idea
behind edit sensitive parsing (ESP), an efficient algorithm for approximately
computing EDM with guarantees of upper and lower bounds for the
exact EDM. siEDM efficiently prunes the space for searching query strings by
the proposed method, which enables fast query searches with the same guarantee
as ESP. We experimentally tested the ability of siEDM to index and search
strings on benchmark datasets, and we showed siEDM's efficiency.
| [
{
"version": "v1",
"created": "Mon, 22 Feb 2016 09:02:44 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Apr 2016 05:23:27 GMT"
}
] | 2016-04-11T00:00:00 | [
[
"Takabatake",
"Yoshimasa",
""
],
[
"Nakashima",
"Kenta",
""
],
[
"Kuboyama",
"Tetsuji",
""
],
[
"Tabei",
"Yasuo",
""
],
[
"Sakamoto",
"Hiroshi",
""
]
] | TITLE: siEDM: an efficient string index and search algorithm for edit distance
with moves
ABSTRACT: Although several self-indexes for highly repetitive text collections exist,
developing an index and search algorithm with editing operations remains a
challenge. Edit distance with moves (EDM) is a string-to-string distance
measure that includes substring moves in addition to ordinal editing operations
to turn one string into another. Although the problem of computing EDM is
intractable, it has a wide range of potential applications, especially in
approximate string retrieval. Despite the importance of computing EDM, there
has been no efficient method for indexing and searching large text collections
based on the EDM measure. We propose the first algorithm, named string index
for edit distance with moves (siEDM), for indexing and searching strings with
EDM. The siEDM algorithm builds an index structure by leveraging the idea
behind edit sensitive parsing (ESP), an efficient algorithm for approximately
computing EDM with guarantees of upper and lower bounds for the
exact EDM. siEDM efficiently prunes the space for searching query strings by
the proposed method, which enables fast query searches with the same guarantee
as ESP. We experimentally tested the ability of siEDM to index and search
strings on benchmark datasets, and we showed siEDM's efficiency.
| no_new_dataset | 0.941547 |
1604.02264 | Frank-Michael Schleif | Frank-Michael Schleif and Andrej Gisbrecht and Peter Tino | Probabilistic classifiers with low rank indefinite kernels | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Indefinite similarity measures can be frequently found in bio-informatics by
means of alignment scores, but are also common in other fields like shape
measures in image retrieval. Lacking an underlying vector space, the data are
given as pairwise similarities only. The few algorithms available for such data
do not scale to larger datasets. Focusing on probabilistic batch classifiers,
the Indefinite Kernel Fisher Discriminant (iKFD) and the Probabilistic
Classification Vector Machine (PCVM) are both effective algorithms for this
type of data, but with cubic complexity. Here we propose an extension of iKFD
and PCVM such that linear runtime and memory complexity is achieved for low
rank indefinite kernels. Employing the Nystr\"om approximation for indefinite
kernels, we also propose a new almost parameter free approach to identify the
landmarks, restricted to a supervised learning problem. Evaluations on several
larger similarity datasets from various domains show that the proposed methods
provide similar generalization capabilities while being easier to parametrize
and substantially faster for large scale data.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 07:58:36 GMT"
}
] | 2016-04-11T00:00:00 | [
[
"Schleif",
"Frank-Michael",
""
],
[
"Gisbrecht",
"Andrej",
""
],
[
"Tino",
"Peter",
""
]
] | TITLE: Probabilistic classifiers with low rank indefinite kernels
ABSTRACT: Indefinite similarity measures can be frequently found in bio-informatics by
means of alignment scores, but are also common in other fields like shape
measures in image retrieval. Lacking an underlying vector space, the data are
given as pairwise similarities only. The few algorithms available for such data
do not scale to larger datasets. Focusing on probabilistic batch classifiers,
the Indefinite Kernel Fisher Discriminant (iKFD) and the Probabilistic
Classification Vector Machine (PCVM) are both effective algorithms for this
type of data, but with cubic complexity. Here we propose an extension of iKFD
and PCVM such that linear runtime and memory complexity is achieved for low
rank indefinite kernels. Employing the Nystr\"om approximation for indefinite
kernels, we also propose a new almost parameter free approach to identify the
landmarks, restricted to a supervised learning problem. Evaluations on several
larger similarity datasets from various domains show that the proposed methods
provide similar generalization capabilities while being easier to parametrize
and substantially faster for large scale data.
| no_new_dataset | 0.950088 |
1604.02275 | Rocco De Rosa rd | Rocco De Rosa, Thomas Mensink and Barbara Caputo | Online Open World Recognition | keywords{Open world recognition, Open set, Incremental Learning,
Metric Learning, Nonparametric methods, Classification confidence} | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As we enter into the big data age and an avalanche of images have become
readily available, recognition systems face the need to move from closed lab
settings where the number of classes and training data are fixed, to dynamic
scenarios where the number of categories to be recognized grows continuously
over time, as well as new data providing useful information to update the
system. Recent attempts, like the open world recognition framework, tried to
inject dynamics into the system by detecting new unknown classes and adding
them incrementally, while at the same time continuously updating the models for
the known classes. In this paper we argue that to properly capture the
intrinsic dynamics of open world recognition, it is necessary to add to these
aspects (a) the incremental learning of the underlying metric, (b) the
incremental estimate of confidence thresholds for the unknown classes, and (c)
the use of local learning to precisely describe the space of classes. We extend
three existing metric learning algorithms towards these goals by using online
metric learning. Experimentally we validate our approach on two large-scale
datasets in different learning scenarios. For all these scenarios our proposed
methods outperform their non-online counterparts. We conclude that local and
online learning is important to capture the full dynamics of open world
recognition.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 08:43:15 GMT"
}
] | 2016-04-11T00:00:00 | [
[
"De Rosa",
"Rocco",
""
],
[
"Mensink",
"Thomas",
""
],
[
"Caputo",
"Barbara",
""
]
] | TITLE: Online Open World Recognition
ABSTRACT: As we enter into the big data age and an avalanche of images have become
readily available, recognition systems face the need to move from closed lab
settings where the number of classes and training data are fixed, to dynamic
scenarios where the number of categories to be recognized grows continuously
over time, as well as new data providing useful information to update the
system. Recent attempts, like the open world recognition framework, tried to
inject dynamics into the system by detecting new unknown classes and adding
them incrementally, while at the same time continuously updating the models for
the known classes. In this paper we argue that to properly capture the
intrinsic dynamics of open world recognition, it is necessary to add to these
aspects (a) the incremental learning of the underlying metric, (b) the
incremental estimate of confidence thresholds for the unknown classes, and (c)
the use of local learning to precisely describe the space of classes. We extend
three existing metric learning algorithms towards these goals by using online
metric learning. Experimentally we validate our approach on two large-scale
datasets in different learning scenarios. For all these scenarios our proposed
methods outperform their non-online counterparts. We conclude that local and
online learning is important to capture the full dynamics of open world
recognition.
| no_new_dataset | 0.95222 |
1604.02287 | David Garcia | David Garcia and Markus Strohmaier | The QWERTY effect on the web: How typing shapes the meaning of words in
online human-computer interaction | In International WWW Conference, 2016. April 11-15, 2016, Montreal,
Quebec, Canada. 978-1-4503-4143-1/16/04 | null | 10.1145/2872427.2883019 | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The QWERTY effect postulates that the keyboard layout influences word
meanings by linking positivity to the use of the right hand and negativity to
the use of the left hand. For example, previous research has established that
words with more right hand letters are rated more positively than words with
more left hand letters by human subjects in small scale experiments. In this
paper, we perform large scale investigations of the QWERTY effect on the web.
Using data from eleven web platforms related to products, movies, books, and
videos, we conduct observational tests of whether a hand-meaning relationship can
be found in decoding text on the web. Furthermore, we investigate whether
encoding text on the web exhibits the QWERTY effect as well, by analyzing the
relationship between the text of online reviews and their star ratings in four
additional datasets. Overall, we find robust evidence for the QWERTY effect
both at the point of text interpretation (decoding) and at the point of text
creation (encoding). We also find under which conditions the effect might not
hold. Our findings have implications for any algorithmic method aiming to
evaluate the meaning of words on the web, including for example semantic or
sentiment analysis, and show the existence of "dactilar onomatopoeias" that
shape the dynamics of word-meaning associations. To the best of our knowledge,
this is the first work to reveal the extent to which the QWERTY effect exists
in large scale human-computer interaction on the web.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 09:54:36 GMT"
}
] | 2016-04-11T00:00:00 | [
[
"Garcia",
"David",
""
],
[
"Strohmaier",
"Markus",
""
]
] | TITLE: The QWERTY effect on the web: How typing shapes the meaning of words in
online human-computer interaction
ABSTRACT: The QWERTY effect postulates that the keyboard layout influences word
meanings by linking positivity to the use of the right hand and negativity to
the use of the left hand. For example, previous research has established that
words with more right hand letters are rated more positively than words with
more left hand letters by human subjects in small scale experiments. In this
paper, we perform large scale investigations of the QWERTY effect on the web.
Using data from eleven web platforms related to products, movies, books, and
videos, we conduct observational tests of whether a hand-meaning relationship can
be found in decoding text on the web. Furthermore, we investigate whether
encoding text on the web exhibits the QWERTY effect as well, by analyzing the
relationship between the text of online reviews and their star ratings in four
additional datasets. Overall, we find robust evidence for the QWERTY effect
both at the point of text interpretation (decoding) and at the point of text
creation (encoding). We also find under which conditions the effect might not
hold. Our findings have implications for any algorithmic method aiming to
evaluate the meaning of words on the web, including for example semantic or
sentiment analysis, and show the existence of "dactilar onomatopoeias" that
shape the dynamics of word-meaning associations. To the best of our knowledge,
this is the first work to reveal the extent to which the QWERTY effect exists
in large scale human-computer interaction on the web.
| no_new_dataset | 0.940353 |
1604.02354 | Dong Wang | Dong Wang, Xiaoyang Tan | Bayesian Neighbourhood Component Analysis | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning a good distance metric in feature space potentially improves the
performance of the KNN classifier and is useful in many real-world
applications. Many metric learning algorithms are however based on the point
estimation of a quadratic optimization problem, which is time-consuming,
susceptible to overfitting, and lacks a natural mechanism to reason with
parameter uncertainty, an important property useful especially when the
training set is small and/or noisy. To deal with these issues, we present a
novel Bayesian metric learning method, called Bayesian NCA, based on the
well-known Neighbourhood Component Analysis method, in which the metric
posterior is characterized by the local label consistency constraints of
observations, encoded with a similarity graph instead of independent pairwise
constraints. For efficient Bayesian optimization, we explore the variational
lower bound over the log-likelihood of the original NCA objective. Experiments
on several publicly available datasets demonstrate that the proposed method is
able to learn robust metric measures from small-sized datasets and/or from
challenging training sets with labels contaminated by errors. The proposed
method is also shown to outperform a previous pairwise constrained Bayesian
metric learning method.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 13:35:03 GMT"
}
] | 2016-04-11T00:00:00 | [
[
"Wang",
"Dong",
""
],
[
"Tan",
"Xiaoyang",
""
]
] | TITLE: Bayesian Neighbourhood Component Analysis
ABSTRACT: Learning a good distance metric in feature space potentially improves the
performance of the KNN classifier and is useful in many real-world
applications. Many metric learning algorithms are however based on the point
estimation of a quadratic optimization problem, which is time-consuming,
susceptible to overfitting, and lacks a natural mechanism to reason with
parameter uncertainty, an important property useful especially when the
training set is small and/or noisy. To deal with these issues, we present a
novel Bayesian metric learning method, called Bayesian NCA, based on the
well-known Neighbourhood Component Analysis method, in which the metric
posterior is characterized by the local label consistency constraints of
observations, encoded with a similarity graph instead of independent pairwise
constraints. For efficient Bayesian optimization, we explore the variational
lower bound over the log-likelihood of the original NCA objective. Experiments
on several publicly available datasets demonstrate that the proposed method is
able to learn robust metric measures from small-sized datasets and/or from
challenging training sets with labels contaminated by errors. The proposed
method is also shown to outperform a previous pairwise constrained Bayesian
metric learning method.
| no_new_dataset | 0.952618 |
1604.02363 | Tanmoy Chakraborty | Dinesh Pradhan, Partha Sarathi Paul, Umesh Maheswari, Subrata Nandi,
Tanmoy Chakraborty | $C^3$-index: Revisiting Authors' Performance Measure | 2 Figures, 1 Table, WebSci 2016, May 22-25, 2016, Hannover, Germany | null | null | null | cs.DL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Author performance indices (such as h-index and its variants) fail to resolve
ties while ranking authors with low index values (the majority in number), a
group that includes young researchers. In this work we leverage the citations as well
as the collaboration profile of an author in a novel way using a weighted
multi-layered network and propose a variant of the PageRank algorithm to obtain a
new author performance measure, $C^3$-index. Experiments on a massive
publication dataset reveal several interesting characteristics of our metric:
(i) we observe that $C^3$-index is consistent over time, (ii) $C^3$-index has
high potential to break ties among low rank authors, (iii) $C^3$-index can
effectively be used to predict future achievers at the early stage of their
career.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 14:50:11 GMT"
}
] | 2016-04-11T00:00:00 | [
[
"Pradhan",
"Dinesh",
""
],
[
"Paul",
"Partha Sarathi",
""
],
[
"Maheswari",
"Umesh",
""
],
[
"Nandi",
"Subrata",
""
],
[
"Chakraborty",
"Tanmoy",
""
]
] | TITLE: $C^3$-index: Revisiting Authors' Performance Measure
ABSTRACT: Author performance indices (such as h-index and its variants) fail to resolve
ties while ranking authors with low index values (the majority in number), a
group that includes young researchers. In this work we leverage the citations as well
as the collaboration profile of an author in a novel way using a weighted
multi-layered network and propose a variant of the PageRank algorithm to obtain a
new author performance measure, $C^3$-index. Experiments on a massive
publication dataset reveal several interesting characteristics of our metric:
(i) we observe that $C^3$-index is consistent over time, (ii) $C^3$-index has
high potential to break ties among low rank authors, (iii) $C^3$-index can
effectively be used to predict future achievers at the early stage of their
career.
| no_new_dataset | 0.946399 |
1412.2404 | Devansh Arpit | Devansh Arpit, Ifeoma Nwogu, Venu Govindaraju | Dimensionality Reduction with Subspace Structure Preservation | Published in NIPS 2014; v2: minor updates to the algorithm and added
a few lines addressing application to large-scale/high-dimensional data | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling data as being sampled from a union of independent subspaces has been
widely applied to a number of real world applications. However, dimensionality
reduction approaches that theoretically preserve this independence assumption
have not been well studied. Our key contribution is to show that $2K$
projection vectors are sufficient for the independence preservation of any $K$
class data sampled from a union of independent subspaces. It is this
non-trivial observation that we use for designing our dimensionality reduction
technique. In this paper, we propose a novel dimensionality reduction algorithm
that theoretically preserves this structure for a given dataset. We support our
theoretical analysis with empirical results on both synthetic and real world
data achieving \textit{state-of-the-art} results compared to popular
dimensionality reduction techniques.
| [
{
"version": "v1",
"created": "Sun, 7 Dec 2014 22:02:33 GMT"
},
{
"version": "v2",
"created": "Sun, 31 May 2015 22:30:47 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Apr 2016 23:11:46 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Arpit",
"Devansh",
""
],
[
"Nwogu",
"Ifeoma",
""
],
[
"Govindaraju",
"Venu",
""
]
] | TITLE: Dimensionality Reduction with Subspace Structure Preservation
ABSTRACT: Modeling data as being sampled from a union of independent subspaces has been
widely applied to a number of real world applications. However, dimensionality
reduction approaches that theoretically preserve this independence assumption
have not been well studied. Our key contribution is to show that $2K$
projection vectors are sufficient for the independence preservation of any $K$
class data sampled from a union of independent subspaces. It is this
non-trivial observation that we use for designing our dimensionality reduction
technique. In this paper, we propose a novel dimensionality reduction algorithm
that theoretically preserves this structure for a given dataset. We support our
theoretical analysis with empirical results on both synthetic and real world
data achieving \textit{state-of-the-art} results compared to popular
dimensionality reduction techniques.
| no_new_dataset | 0.947478 |
1506.02216 | Junyoung Chung | Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron
Courville, Yoshua Bengio | A Recurrent Latent Variable Model for Sequential Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore the inclusion of latent random variables into the
dynamic hidden state of a recurrent neural network (RNN) by combining elements
of the variational autoencoder. We argue that through the use of high-level
latent random variables, the variational RNN (VRNN) can model the kind of
variability observed in highly structured sequential data such as natural
speech. We empirically evaluate the proposed model against related sequential
models on four speech datasets and one handwriting dataset. Our results show
the important roles that latent random variables can play in the RNN dynamic
hidden state.
| [
{
"version": "v1",
"created": "Sun, 7 Jun 2015 04:23:50 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Jun 2015 02:25:53 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Jun 2015 04:57:00 GMT"
},
{
"version": "v4",
"created": "Thu, 15 Oct 2015 18:10:41 GMT"
},
{
"version": "v5",
"created": "Mon, 2 Nov 2015 18:56:13 GMT"
},
{
"version": "v6",
"created": "Wed, 6 Apr 2016 20:52:32 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Chung",
"Junyoung",
""
],
[
"Kastner",
"Kyle",
""
],
[
"Dinh",
"Laurent",
""
],
[
"Goel",
"Kratarth",
""
],
[
"Courville",
"Aaron",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: A Recurrent Latent Variable Model for Sequential Data
ABSTRACT: In this paper, we explore the inclusion of latent random variables into the
dynamic hidden state of a recurrent neural network (RNN) by combining elements
of the variational autoencoder. We argue that through the use of high-level
latent random variables, the variational RNN (VRNN) can model the kind of
variability observed in highly structured sequential data such as natural
speech. We empirically evaluate the proposed model against related sequential
models on four speech datasets and one handwriting dataset. Our results show
the important roles that latent random variables can play in the RNN dynamic
hidden state.
| no_new_dataset | 0.953535 |
1510.00041 | Michael Kane | Taylor Arnold, Michael Kane, and Simon Urbanek | iotools: High-Performance I/O Tools for R | 8 pages, 2 figures | null | null | null | stat.CO cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The iotools package provides a set of tools for Input/Output (I/O) intensive
dataset processing in R (R Core Team, 2014). Efficient parsing methods are
included which minimize copying and avoid the use of intermediate string
representations whenever possible. Functions for applying chunk-wise operations
allow for computing on streaming input as well as arbitrarily large files. We
present a set of example use cases for iotools, as well as extensive benchmarks
comparing against comparable functions provided in both core R and other
contributed packages.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2015 21:31:42 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2016 17:49:16 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Arnold",
"Taylor",
""
],
[
"Kane",
"Michael",
""
],
[
"Urbanek",
"Simon",
""
]
] | TITLE: iotools: High-Performance I/O Tools for R
ABSTRACT: The iotools package provides a set of tools for Input/Output (I/O) intensive
dataset processing in R (R Core Team, 2014). Efficient parsing methods are
included which minimize copying and avoid the use of intermediate string
representations whenever possible. Functions for applying chunk-wise operations
allow for computing on streaming input as well as arbitrarily large files. We
present a set of example use cases for iotools, as well as extensive benchmarks
comparing against comparable functions provided in both core R and other
contributed packages.
| no_new_dataset | 0.940517 |
1603.08458 | Shaodian Zhang | Shaodian Zhang, Edouard Grave, Elizabeth Sklar, Noemie Elhadad | Longitudinal Analysis of Discussion Topics in an Online Breast Cancer
Community using Convolutional Neural Networks | null | null | null | null | cs.CL cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying topics of discussions in online health communities (OHC) is
critical to various applications, but can be difficult because topics of OHC
content are usually heterogeneous and domain-dependent. In this paper, we
provide a multi-class schema, an annotated dataset, and supervised classifiers
based on convolutional neural network (CNN) and other models for the task of
classifying discussion topics. We apply the CNN classifier to the most popular
breast cancer online community, and carry out a longitudinal analysis to show
topic distributions and topic changes throughout members' participation. Our
experimental results suggest that CNN outperforms other classifiers in the task
of topic classification, and that certain trajectories can be detected with
respect to topic changes.
| [
{
"version": "v1",
"created": "Mon, 28 Mar 2016 17:47:42 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Apr 2016 22:46:39 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Apr 2016 15:09:05 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Zhang",
"Shaodian",
""
],
[
"Grave",
"Edouard",
""
],
[
"Sklar",
"Elizabeth",
""
],
[
"Elhadad",
"Noemie",
""
]
] | TITLE: Longitudinal Analysis of Discussion Topics in an Online Breast Cancer
Community using Convolutional Neural Networks
ABSTRACT: Identifying topics of discussions in online health communities (OHC) is
critical to various applications, but can be difficult because topics of OHC
content are usually heterogeneous and domain-dependent. In this paper, we
provide a multi-class schema, an annotated dataset, and supervised classifiers
based on convolutional neural network (CNN) and other models for the task of
classifying discussion topics. We apply the CNN classifier to the most popular
breast cancer online community, and carry out a longitudinal analysis to show
topic distributions and topic changes throughout members' participation. Our
experimental results suggest that CNN outperforms other classifiers in the task
of topic classification, and that certain trajectories can be detected with
respect to topic changes.
| new_dataset | 0.956104 |
1604.01685 | Marius Cordts | Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus
Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, Bernt Schiele | The Cityscapes Dataset for Semantic Urban Scene Understanding | Includes supplemental material | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual understanding of complex urban street scenes is an enabling factor for
a wide range of applications. Object detection has benefited enormously from
large-scale datasets, especially in the context of deep learning. For semantic
urban scene understanding, however, no current dataset adequately captures the
complexity of real-world urban scenes.
To address this, we introduce Cityscapes, a benchmark suite and large-scale
dataset to train and test approaches for pixel-level and instance-level
semantic labeling. Cityscapes is comprised of a large, diverse set of stereo
video sequences recorded in streets from 50 different cities. 5000 of these
images have high quality pixel-level annotations; 20000 additional images have
coarse annotations to enable methods that leverage large volumes of
weakly-labeled data. Crucially, our effort exceeds previous attempts in terms
of dataset size, annotation richness, scene variability, and complexity. Our
accompanying empirical study provides an in-depth analysis of the dataset
characteristics, as well as a performance evaluation of several
state-of-the-art approaches based on our benchmark.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 16:34:33 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Apr 2016 15:39:22 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Cordts",
"Marius",
""
],
[
"Omran",
"Mohamed",
""
],
[
"Ramos",
"Sebastian",
""
],
[
"Rehfeld",
"Timo",
""
],
[
"Enzweiler",
"Markus",
""
],
[
"Benenson",
"Rodrigo",
""
],
[
"Franke",
"Uwe",
""
],
[
"Roth",
"Stefan",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: The Cityscapes Dataset for Semantic Urban Scene Understanding
ABSTRACT: Visual understanding of complex urban street scenes is an enabling factor for
a wide range of applications. Object detection has benefited enormously from
large-scale datasets, especially in the context of deep learning. For semantic
urban scene understanding, however, no current dataset adequately captures the
complexity of real-world urban scenes.
To address this, we introduce Cityscapes, a benchmark suite and large-scale
dataset to train and test approaches for pixel-level and instance-level
semantic labeling. Cityscapes is comprised of a large, diverse set of stereo
video sequences recorded in streets from 50 different cities. 5000 of these
images have high quality pixel-level annotations; 20000 additional images have
coarse annotations to enable methods that leverage large volumes of
weakly-labeled data. Crucially, our effort exceeds previous attempts in terms
of dataset size, annotation richness, scene variability, and complexity. Our
accompanying empirical study provides an in-depth analysis of the dataset
characteristics, as well as a performance evaluation of several
state-of-the-art approaches based on our benchmark.
| new_dataset | 0.966945 |
1604.01787 | Yanwei Cui | Yanwei Cui, Laetitia Chapel, S\'ebastien Lef\`evre | A Subpath Kernel for Learning Hierarchical Image Representations | 10th IAPR-TC-15 International Workshop, GbRPR 2015, Beijing, China,
May 13-15, 2015. Proceedings | null | 10.1007/978-3-319-18224-7_4 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tree kernels have demonstrated their ability to deal with hierarchical data,
as the intrinsic tree structure often plays a discriminative role. While such
kernels have been successfully applied to various domains such as natural
language processing and bioinformatics, they mostly concentrate on ordered
trees whose nodes are described by symbolic data. Meanwhile, hierarchical
representations have gained increasing interest to describe image content. This
is particularly true in remote sensing, where such representations allow for
revealing different objects of interest at various scales through a tree
structure. However, the induced trees are unordered and the nodes are equipped
with numerical features. In this paper, we propose a new structured kernel for
hierarchical image representations which is built on the concept of subpath
kernel. Experimental results on both artificial and remote sensing datasets
show that the proposed kernel manages to deal with the hierarchical nature of
the data, leading to better classification rates.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 20:04:17 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Cui",
"Yanwei",
""
],
[
"Chapel",
"Laetitia",
""
],
[
"Lefèvre",
"Sébastien",
""
]
] | TITLE: A Subpath Kernel for Learning Hierarchical Image Representations
ABSTRACT: Tree kernels have demonstrated their ability to deal with hierarchical data,
as the intrinsic tree structure often plays a discriminative role. While such
kernels have been successfully applied to various domains such as natural
language processing and bioinformatics, they mostly concentrate on ordered
trees whose nodes are described by symbolic data. Meanwhile, hierarchical
representations have gained increasing interest to describe image content. This
is particularly true in remote sensing, where such representations allow for
revealing different objects of interest at various scales through a tree
structure. However, the induced trees are unordered and the nodes are equipped
with numerical features. In this paper, we propose a new structured kernel for
hierarchical image representations which is built on the concept of subpath
kernel. Experimental results on both artificial and remote sensing datasets
show that the proposed kernel manages to deal with the hierarchical nature of
the data, leading to better classification rates.
| no_new_dataset | 0.949248 |
1604.01806 | Srikanth Cherla | Srikanth Cherla and Son N Tran and Tillman Weyde and Artur d'Avila
Garcez | Generalising the Discriminative Restricted Boltzmann Machine | Submitted to ECML 2016 conference track | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel theoretical result that generalises the Discriminative
Restricted Boltzmann Machine (DRBM). While originally the DRBM was defined
assuming the {0, 1}-Bernoulli distribution in each of its hidden units, this
result makes it possible to derive cost functions for variants of the DRBM that
utilise other distributions, including some that are often encountered in the
literature. This is illustrated with the Binomial and {-1, +1}-Bernoulli
distributions here. We evaluate these two DRBM variants and compare them with
the original one on three benchmark datasets, namely the MNIST and USPS digit
classification datasets, and the 20 Newsgroups document classification dataset.
Results show that each of the three compared models outperforms the remaining
two in one of the three datasets, thus indicating that the proposed theoretical
generalisation of the DRBM may be valuable in practice.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 21:01:35 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Cherla",
"Srikanth",
""
],
[
"Tran",
"Son N",
""
],
[
"Weyde",
"Tillman",
""
],
[
"Garcez",
"Artur d'Avila",
""
]
] | TITLE: Generalising the Discriminative Restricted Boltzmann Machine
ABSTRACT: We present a novel theoretical result that generalises the Discriminative
Restricted Boltzmann Machine (DRBM). While originally the DRBM was defined
assuming the {0, 1}-Bernoulli distribution in each of its hidden units, this
result makes it possible to derive cost functions for variants of the DRBM that
utilise other distributions, including some that are often encountered in the
literature. This is illustrated with the Binomial and {-1, +1}-Bernoulli
distributions here. We evaluate these two DRBM variants and compare them with
the original one on three benchmark datasets, namely the MNIST and USPS digit
classification datasets, and the 20 Newsgroups document classification dataset.
Results show that each of the three compared models outperforms the remaining
two in one of the three datasets, thus indicating that the proposed theoretical
generalisation of the DRBM may be valuable in practice.
| no_new_dataset | 0.949389 |
1604.01841 | Miao Sun | Miao Sun, Tony X. Han, Zhihai He | A Classification Leveraged Object Detector | Work in 2013, which contained some detailed algorithms for PASCAL VOC
2012 detection competition | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Currently, the state-of-the-art image classification algorithms outperform
the best available object detector by a big margin in terms of average
precision. We, therefore, propose a simple yet principled approach that allows
us to leverage object detection through image classification on supporting
regions specified by a preliminary object detector. Using a simple bag-of-
words model based image classification algorithm, we improved the performance
of the deformable model detector from 35.9% to 39.5% in average precision over
20 categories on the standard PASCAL VOC 2007 detection dataset.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 01:11:50 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Sun",
"Miao",
""
],
[
"Han",
"Tony X.",
""
],
[
"He",
"Zhihai",
""
]
] | TITLE: A Classification Leveraged Object Detector
ABSTRACT: Currently, the state-of-the-art image classification algorithms outperform
the best available object detector by a big margin in terms of average
precision. We, therefore, propose a simple yet principled approach that allows
us to leverage object detection through image classification on supporting
regions specified by a preliminary object detector. Using a simple bag-of-
words model based image classification algorithm, we improved the performance
of the deformable model detector from 35.9% to 39.5% in average precision over
20 categories on the standard PASCAL VOC 2007 detection dataset.
| no_new_dataset | 0.95253 |
1604.01891 | Xiaohang Ren | Xiaohang Ren, Kai Chen and Jun Sun | A CNN Based Scene Chinese Text Recognition Algorithm With Synthetic Data
Engine | 2 pages, DAS 2016 short paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene text recognition plays an important role in many computer vision
applications. The small size of publicly available scene text datasets
is the main challenge when training a text recognition CNN model. In this
paper, we propose a CNN based Chinese text recognition algorithm. To enlarge
the dataset for training the CNN model, we design a synthetic data engine for
Chinese scene character generation, which generates representative character
images according to the font usage frequency of Chinese texts. As Chinese
text is more complex, the English text recognition CNN architecture is modified
for Chinese text. To ensure that the small natural character dataset and the
large artificial character dataset are comparable in training, the CNN
model is trained progressively. The proposed Chinese text recognition
algorithm is evaluated with two Chinese text datasets. The algorithm achieves
better recognition accuracy compared to the baseline methods.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 07:08:25 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Ren",
"Xiaohang",
""
],
[
"Chen",
"Kai",
""
],
[
"Sun",
"Jun",
""
]
] | TITLE: A CNN Based Scene Chinese Text Recognition Algorithm With Synthetic Data
Engine
ABSTRACT: Scene text recognition plays an important role in many computer vision
applications. The small size of publicly available scene text datasets
is the main challenge when training a text recognition CNN model. In this
paper, we propose a CNN based Chinese text recognition algorithm. To enlarge
the dataset for training the CNN model, we design a synthetic data engine for
Chinese scene character generation, which generates representative character
images according to the font usage frequency of Chinese texts. As Chinese
text is more complex, the English text recognition CNN architecture is modified
for Chinese text. To ensure that the small natural character dataset and the
large artificial character dataset are comparable in training, the CNN
model is trained progressively. The proposed Chinese text recognition
algorithm is evaluated with two Chinese text datasets. The algorithm achieves
better recognition accuracy compared to the baseline methods.
| no_new_dataset | 0.953579 |
1604.01894 | Xiaohang Ren | Xiaohang Ren, Kai Chen, Jun Sun | A Novel Scene Text Detection Algorithm Based On Convolutional Neural
Network | 5 pages, IWPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Candidate text region extraction plays a critical role in convolutional
neural network (CNN) based text detection from natural images. In this paper,
we propose a CNN based scene text detection algorithm with a new text region
extractor. The so called candidate text region extractor I-MSER is based on
Maximally Stable Extremal Region (MSER), which can improve the independency and
completeness of the extracted candidate text regions. Design of I-MSER is
motivated by the observation that text MSERs have high similarity and are close
to each other. The independency of candidate text regions obtained by I-MSER is
guaranteed by selecting the most representative regions from a MSER tree which
is generated according to the spatial overlapping relationship among the MSERs.
A multi-layer CNN model is trained to score the confidence value of the
regions extracted by I-MSER for text detection. The new text
detection algorithm based on I-MSER is evaluated on the widely used ICDAR 2011 and
2013 datasets and shows improved detection performance compared to the existing
algorithms.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 07:16:35 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Ren",
"Xiaohang",
""
],
[
"Chen",
"Kai",
""
],
[
"Sun",
"Jun",
""
]
] | TITLE: A Novel Scene Text Detection Algorithm Based On Convolutional Neural
Network
ABSTRACT: Candidate text region extraction plays a critical role in convolutional
neural network (CNN) based text detection from natural images. In this paper,
we propose a CNN based scene text detection algorithm with a new text region
extractor. The so called candidate text region extractor I-MSER is based on
Maximally Stable Extremal Region (MSER), which can improve the independency and
completeness of the extracted candidate text regions. Design of I-MSER is
motivated by the observation that text MSERs have high similarity and are close
to each other. The independency of candidate text regions obtained by I-MSER is
guaranteed by selecting the most representative regions from a MSER tree which
is generated according to the spatial overlapping relationship among the MSERs.
A multi-layer CNN model is trained to score the confidence value of the
regions extracted by I-MSER for text detection. The new text
detection algorithm based on I-MSER is evaluated on the widely used ICDAR 2011 and
2013 datasets and shows improved detection performance compared to the existing
algorithms.
| no_new_dataset | 0.949669 |
1604.02115 | Suriya Singh | Suriya Singh, Chetan Arora, C. V. Jawahar | Trajectory Aligned Features For First Person Action Recognition | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Egocentric videos are characterised by their ability to have the first person
view. With the popularity of Google Glass and GoPro, use of egocentric videos
is on the rise. Recognizing the action of the wearer from egocentric videos is an
important problem. Unstructured movement of the camera due to natural head
motion of the wearer causes sharp changes in the visual field of the egocentric
camera causing many standard third person action recognition techniques to
perform poorly on such videos. Objects present in the scene and hand gestures
of the wearer are the most important cues for first person action recognition
but are difficult to segment and recognize in an egocentric video. We propose a
novel representation of the first person actions derived from feature
trajectories. The features are simple to compute using standard point tracking
and do not assume segmentation of hand/objects or recognizing object or hand
pose unlike in many previous approaches. We train a bag of words classifier
with the proposed features and report a performance improvement of more than
11% on publicly available datasets. Although not designed for the particular
case, we show that our technique can also recognize wearer's actions when hands
or objects are not visible.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 19:09:07 GMT"
}
] | 2016-04-08T00:00:00 | [
[
"Singh",
"Suriya",
""
],
[
"Arora",
"Chetan",
""
],
[
"Jawahar",
"C. V.",
""
]
] | TITLE: Trajectory Aligned Features For First Person Action Recognition
ABSTRACT: Egocentric videos are characterised by their ability to have the first person
view. With the popularity of Google Glass and GoPro, use of egocentric videos
is on the rise. Recognizing the action of the wearer from egocentric videos is an
important problem. Unstructured movement of the camera due to natural head
motion of the wearer causes sharp changes in the visual field of the egocentric
camera causing many standard third person action recognition techniques to
perform poorly on such videos. Objects present in the scene and hand gestures
of the wearer are the most important cues for first person action recognition
but are difficult to segment and recognize in an egocentric video. We propose a
novel representation of the first person actions derived from feature
trajectories. The features are simple to compute using standard point tracking
and do not assume segmentation of hands/objects or recognition of object or hand
pose, unlike many previous approaches. We train a bag of words classifier
with the proposed features and report a performance improvement of more than
11% on publicly available datasets. Although not designed for the particular
case, we show that our technique can also recognize wearer's actions when hands
or objects are not visible.
| no_new_dataset | 0.946498 |
1510.07712 | Haonan Yu | Haonan Yu and Jiang Wang and Zhiheng Huang and Yi Yang and Wei Xu | Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks | In CVPR2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach that exploits hierarchical Recurrent Neural Networks
(RNNs) to tackle the video captioning problem, i.e., generating one or multiple
sentences to describe a realistic video. Our hierarchical framework contains a
sentence generator and a paragraph generator. The sentence generator produces
one simple short sentence that describes a specific short video interval. It
exploits both temporal- and spatial-attention mechanisms to selectively focus
on visual elements during generation. The paragraph generator captures the
inter-sentence dependency by taking as input the sentential embedding produced
by the sentence generator, combining it with the paragraph history, and
outputting the new initial state for the sentence generator. We evaluate our
approach on two large-scale benchmark datasets: YouTubeClips and
TACoS-MultiLevel. The experiments demonstrate that our approach significantly
outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and
0.305 respectively.
| [
{
"version": "v1",
"created": "Mon, 26 Oct 2015 22:47:00 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2016 02:24:35 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Yu",
"Haonan",
""
],
[
"Wang",
"Jiang",
""
],
[
"Huang",
"Zhiheng",
""
],
[
"Yang",
"Yi",
""
],
[
"Xu",
"Wei",
""
]
] | TITLE: Video Paragraph Captioning Using Hierarchical Recurrent Neural Networks
ABSTRACT: We present an approach that exploits hierarchical Recurrent Neural Networks
(RNNs) to tackle the video captioning problem, i.e., generating one or multiple
sentences to describe a realistic video. Our hierarchical framework contains a
sentence generator and a paragraph generator. The sentence generator produces
one simple short sentence that describes a specific short video interval. It
exploits both temporal- and spatial-attention mechanisms to selectively focus
on visual elements during generation. The paragraph generator captures the
inter-sentence dependency by taking as input the sentential embedding produced
by the sentence generator, combining it with the paragraph history, and
outputting the new initial state for the sentence generator. We evaluate our
approach on two large-scale benchmark datasets: YouTubeClips and
TACoS-MultiLevel. The experiments demonstrate that our approach significantly
outperforms the current state-of-the-art methods with BLEU@4 scores 0.499 and
0.305 respectively.
| no_new_dataset | 0.945298 |
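A speculative PyTorch sketch of the two-level generator structure described above: a sentence-level LSTM emits words while a paragraph-level LSTM carries inter-sentence state. Attention over video features is omitted, and all layer sizes, names and the pooled video feature are assumptions rather than the authors' architecture.

```python
# Speculative sketch of a hierarchical captioner (attention omitted for brevity).
import torch
import torch.nn as nn

class HierarchicalCaptioner(nn.Module):
    def __init__(self, vid_dim=512, embed_dim=256, hidden=512, vocab=10000):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed_dim)
        self.sentence_rnn = nn.LSTMCell(embed_dim + vid_dim, hidden)   # word-by-word generator
        self.paragraph_rnn = nn.LSTMCell(hidden, hidden)               # inter-sentence state
        self.word_out = nn.Linear(hidden, vocab)

    def forward(self, vid_feat, sentences):
        # vid_feat: (batch, vid_dim) pooled video feature
        # sentences: (batch, n_sent, sent_len) word indices
        b, n_sent, sent_len = sentences.shape
        h_p = c_p = vid_feat.new_zeros(b, self.paragraph_rnn.hidden_size)
        logits = []
        for s in range(n_sent):
            h_s, c_s = h_p.clone(), c_p.clone()    # paragraph state initialises the sentence RNN
            words = self.embed(sentences[:, s])    # (batch, sent_len, embed_dim)
            for t in range(sent_len):
                inp = torch.cat([words[:, t], vid_feat], dim=1)
                h_s, c_s = self.sentence_rnn(inp, (h_s, c_s))
                logits.append(self.word_out(h_s))
            # the sentence embedding (last hidden state) updates the paragraph state
            h_p, c_p = self.paragraph_rnn(h_s, (h_p, c_p))
        return torch.stack(logits, dim=1)
```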
1511.06040 | Srikanth Muralidharan | Moustafa Ibrahim, Srikanth Muralidharan, Zhiwei Deng, Arash Vahdat,
Greg Mori | A Hierarchical Deep Temporal Model for Group Activity Recognition | cs.cv Accepted to CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In group activity recognition, the temporal dynamics of the whole activity
can be inferred based on the dynamics of the individual people representing the
activity. We build a deep model to capture these dynamics based on LSTM
(long short-term memory) models. To make use of these observations, we
present a 2-stage deep temporal model for the group activity recognition
problem. In our model, an LSTM model is designed to represent the action dynamics of
individual people in a sequence and another LSTM model is designed to
aggregate human-level information for whole activity understanding. We evaluate
our model over two datasets: the collective activity dataset and a new
volleyball dataset. Experimental results demonstrate that our proposed model improves
group activity recognition performance compared to baseline methods.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 01:33:35 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2016 20:43:53 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Ibrahim",
"Moustafa",
""
],
[
"Muralidharan",
"Srikanth",
""
],
[
"Deng",
"Zhiwei",
""
],
[
"Vahdat",
"Arash",
""
],
[
"Mori",
"Greg",
""
]
] | TITLE: A Hierarchical Deep Temporal Model for Group Activity Recognition
ABSTRACT: In group activity recognition, the temporal dynamics of the whole activity
can be inferred based on the dynamics of the individual people representing the
activity. We build a deep model to capture these dynamics based on LSTM
(long short-term memory) models. To make use of these observations, we
present a 2-stage deep temporal model for the group activity recognition
problem. In our model, an LSTM model is designed to represent the action dynamics of
individual people in a sequence and another LSTM model is designed to
aggregate human-level information for whole activity understanding. We evaluate
our model over two datasets: the collective activity dataset and a new
volleyball dataset. Experimental results demonstrate that our proposed model improves
group activity recognition performance compared to baseline methods.
| new_dataset | 0.952574 |
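A rough PyTorch sketch of the 2-stage temporal idea described above: one LSTM over each person's feature sequence, pooling across people, and a second LSTM over the pooled sequence. The pooling choice, layer sizes and class count are assumptions, not the paper's exact design.

```python
# Speculative sketch of a 2-stage group activity model (person-level LSTM, pooling, group-level LSTM).
import torch
import torch.nn as nn

class TwoStageGroupActivity(nn.Module):
    def __init__(self, feat_dim=256, person_hidden=128, group_hidden=128, n_classes=8):
        super().__init__()
        self.person_lstm = nn.LSTM(feat_dim, person_hidden, batch_first=True)
        self.group_lstm = nn.LSTM(person_hidden, group_hidden, batch_first=True)
        self.classifier = nn.Linear(group_hidden, n_classes)

    def forward(self, person_feats):
        # person_feats: (batch, n_people, n_frames, feat_dim)
        b, p, t, d = person_feats.shape
        x = person_feats.reshape(b * p, t, d)
        person_out, _ = self.person_lstm(x)              # per-person temporal dynamics
        person_out = person_out.reshape(b, p, t, -1)
        pooled = person_out.max(dim=1).values            # pool over people at each frame
        group_out, _ = self.group_lstm(pooled)           # group-level temporal dynamics
        return self.classifier(group_out[:, -1])         # classify from the last time step
```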
1601.00917 | Jie Fu | Jie Fu, Hongyin Luo, Jiashi Feng, Kian Hsiang Low, Tat-Seng Chua | DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing
Hyperparameters of Deep Neural Networks | International Joint Conference on Artificial Intelligence, IJCAI,
2016 | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The performance of deep neural networks is well-known to be sensitive to the
setting of their hyperparameters. Recent advances in reverse-mode automatic
differentiation allow for optimizing hyperparameters with gradients. The
standard way of computing these gradients involves a forward and backward pass
of computations. However, the backward pass usually needs to consume
unaffordable memory to store all the intermediate variables to exactly reverse
the forward training procedure. In this work we propose a simple but effective
method, DrMAD, to distill the knowledge of the forward pass into a shortcut
path, through which we approximately reverse the training trajectory.
Experiments on several image benchmark datasets show that DrMAD is at least 45
times faster and consumes 100 times less memory compared to state-of-the-art
methods for optimizing hyperparameters with minimal compromise to its
effectiveness. To the best of our knowledge, DrMAD is the first research
attempt to make it practical to automatically tune thousands of hyperparameters
of deep neural networks. The code can be downloaded from
https://github.com/bigaidream-projects/drmad
| [
{
"version": "v1",
"created": "Tue, 5 Jan 2016 17:43:15 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jan 2016 05:57:51 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Jan 2016 11:43:31 GMT"
},
{
"version": "v4",
"created": "Fri, 5 Feb 2016 05:45:35 GMT"
},
{
"version": "v5",
"created": "Wed, 6 Apr 2016 15:55:19 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Fu",
"Jie",
""
],
[
"Luo",
"Hongyin",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Low",
"Kian Hsiang",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | TITLE: DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing
Hyperparameters of Deep Neural Networks
ABSTRACT: The performance of deep neural networks is well-known to be sensitive to the
setting of their hyperparameters. Recent advances in reverse-mode automatic
differentiation allow for optimizing hyperparameters with gradients. The
standard way of computing these gradients involves a forward and backward pass
of computations. However, the backward pass usually needs to consume
unaffordable memory to store all the intermediate variables to exactly reverse
the forward training procedure. In this work we propose a simple but effective
method, DrMAD, to distill the knowledge of the forward pass into a shortcut
path, through which we approximately reverse the training trajectory.
Experiments on several image benchmark datasets show that DrMAD is at least 45
times faster and consumes 100 times less memory compared to state-of-the-art
methods for optimizing hyperparameters with minimal compromise to its
effectiveness. To the best of our knowledge, DrMAD is the first research
attempt to make it practical to automatically tune thousands of hyperparameters
of deep neural networks. The code can be downloaded from
https://github.com/bigaidream-projects/drmad
| no_new_dataset | 0.945951 |
1603.03958 | Jeffrey Byrne | Nate Crosswhite, Jeffrey Byrne, Omkar M. Parkhi, Chris Stauffer, Qiong
Cao and Andrew Zisserman | Template Adaptation for Face Verification and Identification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face recognition performance evaluation has traditionally focused on
one-to-one verification, popularized by the Labeled Faces in the Wild dataset
for imagery and the YouTubeFaces dataset for videos. In contrast, the newly
released IJB-A face recognition dataset unifies evaluation of one-to-many face
identification with one-to-one face verification over templates, or sets of
imagery and videos for a subject. In this paper, we study the problem of
template adaptation, a form of transfer learning to the set of media in a
template. Extensive performance evaluations on IJB-A show a surprising result,
that perhaps the simplest method of template adaptation, combining deep
convolutional network features with template specific linear SVMs, outperforms
the state-of-the-art by a wide margin. We study the effects of template size,
negative set construction and classifier fusion on performance, then compare
template adaptation to convolutional networks with metric learning, 2D and 3D
alignment. Our unexpected conclusion is that these other methods, when combined
with template adaptation, all achieve nearly the same top performance on IJB-A
for template-based face verification and identification.
| [
{
"version": "v1",
"created": "Sat, 12 Mar 2016 19:57:17 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Mar 2016 19:56:52 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Apr 2016 02:11:02 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Crosswhite",
"Nate",
""
],
[
"Byrne",
"Jeffrey",
""
],
[
"Parkhi",
"Omkar M.",
""
],
[
"Stauffer",
"Chris",
""
],
[
"Cao",
"Qiong",
""
],
[
"Zisserman",
"Andrew",
""
]
] | TITLE: Template Adaptation for Face Verification and Identification
ABSTRACT: Face recognition performance evaluation has traditionally focused on
one-to-one verification, popularized by the Labeled Faces in the Wild dataset
for imagery and the YouTubeFaces dataset for videos. In contrast, the newly
released IJB-A face recognition dataset unifies evaluation of one-to-many face
identification with one-to-one face verification over templates, or sets of
imagery and videos for a subject. In this paper, we study the problem of
template adaptation, a form of transfer learning to the set of media in a
template. Extensive performance evaluations on IJB-A show a surprising result,
that perhaps the simplest method of template adaptation, combining deep
convolutional network features with template specific linear SVMs, outperforms
the state-of-the-art by a wide margin. We study the effects of template size,
negative set construction and classifier fusion on performance, then compare
template adaptation to convolutional networks with metric learning, 2D and 3D
alignment. Our unexpected conclusion is that these other methods, when combined
with template adaptation, all achieve nearly the same top performance on IJB-A
for template-based face verification and identification.
| new_dataset | 0.950134 |
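A hedged sketch of the template adaptation idea described above: a template-specific linear SVM trained on the template's (assumed precomputed) CNN features against a fixed negative set, with a symmetric score for verification. scikit-learn, the value of C and the exact scoring scheme are assumptions.

```python
# Illustrative sketch of per-template linear SVMs on precomputed CNN features.
import numpy as np
from sklearn.svm import LinearSVC

def adapt_template(template_feats, negative_feats, C=10.0):
    """Train a template-specific linear SVM; returns (w, b)."""
    X = np.vstack([template_feats, negative_feats])
    y = np.concatenate([np.ones(len(template_feats)), -np.ones(len(negative_feats))])
    clf = LinearSVC(C=C).fit(X, y)
    return clf.coef_.ravel(), float(clf.intercept_[0])

def verify(template_a, template_b, negative_feats):
    """Symmetric verification score: each template's SVM scores the other template's media."""
    wa, ba = adapt_template(template_a, negative_feats)
    wb, bb = adapt_template(template_b, negative_feats)
    return 0.5 * (np.mean(template_b @ wa + ba) + np.mean(template_a @ wb + bb))
```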
1603.06708 | Changsheng Li | Changsheng Li and Fan Wei and Junchi Yan and Weishan Dong and Qingshan
Liu and Xiaoyu Zhang and Hongyuan Zha | A Self-Paced Regularization Framework for Multi-Label Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel multi-label learning framework, called
Multi-Label Self-Paced Learning (MLSPL), in an attempt to incorporate the
self-paced learning strategy into the multi-label learning regime. In light of the
benefits of adopting the easy-to-hard strategy proposed by self-paced learning,
the devised MLSPL aims to learn multiple labels jointly by gradually including
label learning tasks and instances into model training from the easy to the
hard. We first introduce a self-paced function as a regularizer in the
multi-label learning formulation, so as to simultaneously rank priorities of
the label learning tasks and the instances in each learning iteration.
Considering that different multi-label learning scenarios often need different
self-paced schemes during optimization, we thus propose a general way to find
the desired self-paced functions. Experimental results on three benchmark
datasets suggest the state-of-the-art performance of our approach.
| [
{
"version": "v1",
"created": "Tue, 22 Mar 2016 09:03:40 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2016 14:54:28 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Li",
"Changsheng",
""
],
[
"Wei",
"Fan",
""
],
[
"Yan",
"Junchi",
""
],
[
"Dong",
"Weishan",
""
],
[
"Liu",
"Qingshan",
""
],
[
"Zhang",
"Xiaoyu",
""
],
[
"Zha",
"Hongyuan",
""
]
] | TITLE: A Self-Paced Regularization Framework for Multi-Label Learning
ABSTRACT: In this paper, we propose a novel multi-label learning framework, called
Multi-Label Self-Paced Learning (MLSPL), in an attempt to incorporate the
self-paced learning strategy into the multi-label learning regime. In light of the
benefits of adopting the easy-to-hard strategy proposed by self-paced learning,
the devised MLSPL aims to learn multiple labels jointly by gradually including
label learning tasks and instances into model training from the easy to the
hard. We first introduce a self-paced function as a regularizer in the
multi-label learning formulation, so as to simultaneously rank priorities of
the label learning tasks and the instances in each learning iteration.
Considering that different multi-label learning scenarios often need different
self-paced schemes during optimization, we thus propose a general way to find
the desired self-paced functions. Experimental results on three benchmark
datasets suggest the state-of-the-art performance of our approach.
| no_new_dataset | 0.944022 |
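An illustrative sketch of the classic self-paced weighting step that a self-paced regularizer induces, shown for a single-task regression model rather than the paper's multi-label formulation: with the hard regularizer -lambda * sum(v), the optimal instance weights have a closed form. The threshold schedule, ridge model and toy losses are assumptions.

```python
# Sketch of alternating self-paced optimization with the hard self-paced regularizer.
import numpy as np

def self_paced_weights(losses, lam):
    """Closed-form solution of min_v sum(v*loss) - lam*sum(v), with v in {0, 1}."""
    return (losses < lam).astype(float)

def train_self_paced(X, y, lam=0.5, growth=1.3, rounds=5, ridge=1e-2):
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        losses = (X @ w - y) ** 2                 # per-instance squared losses
        v = self_paced_weights(losses, lam)       # select the currently "easy" instances
        if v.sum() == 0:                          # nothing selected yet: grow the threshold
            lam *= growth
            continue
        Xw = X * v[:, None]                       # weighted ridge regression on easy instances
        w = np.linalg.solve(Xw.T @ X + ridge * np.eye(X.shape[1]), Xw.T @ y)
        lam *= growth                             # gradually admit harder instances
    return w
```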
1603.09446 | Wei Shen | Wei Shen, Kai Zhao, Yuan Jiang, Yan Wang, Zhijiang Zhang, Xiang Bai | Object Skeleton Extraction in Natural Images by Fusing Scale-associated
Deep Side Outputs | Accepted by CVPR2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object skeleton is a useful cue for object detection, complementary to the
object contour, as it provides a structural representation to describe the
relationship among object parts. While object skeleton extraction in natural
images is a very challenging problem, as it requires the extractor to be able
to capture both local and global image context to determine the intrinsic scale
of each skeleton pixel. Existing methods rely on per-pixel based multi-scale
feature computation, which results in difficult modeling and high time
consumption. In this paper, we present a fully convolutional network with
multiple scale-associated side outputs to address this problem. By observing
the relationship between the receptive field sizes of the sequential stages in
the network and the skeleton scales they can capture, we introduce a
scale-associated side output to each stage. We impose supervision to different
stages by guiding the scale-associated side outputs toward groundtruth
skeletons of different scales. The responses of the multiple scale-associated
side outputs are then fused in a scale-specific way to localize skeleton pixels
with multiple scales effectively. Our method achieves promising results on two
skeleton extraction datasets, and significantly outperforms other competitors.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 03:21:33 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Apr 2016 05:51:33 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Shen",
"Wei",
""
],
[
"Zhao",
"Kai",
""
],
[
"Jiang",
"Yuan",
""
],
[
"Wang",
"Yan",
""
],
[
"Zhang",
"Zhijiang",
""
],
[
"Bai",
"Xiang",
""
]
] | TITLE: Object Skeleton Extraction in Natural Images by Fusing Scale-associated
Deep Side Outputs
ABSTRACT: Object skeleton is a useful cue for object detection, complementary to the
object contour, as it provides a structural representation to describe the
relationship among object parts. However, object skeleton extraction in natural
images is a very challenging problem, as it requires the extractor to be able
to capture both local and global image context to determine the intrinsic scale
of each skeleton pixel. Existing methods rely on per-pixel based multi-scale
feature computation, which results in difficult modeling and high time
consumption. In this paper, we present a fully convolutional network with
multiple scale-associated side outputs to address this problem. By observing
the relationship between the receptive field sizes of the sequential stages in
the network and the skeleton scales they can capture, we introduce a
scale-associated side output to each stage. We impose supervision to different
stages by guiding the scale-associated side outputs toward groundtruth
skeletons of different scales. The responses of the multiple scale-associated
side outputs are then fused in a scale-specific way to localize skeleton pixels
with multiple scales effectively. Our method achieves promising results on two
skeleton extraction datasets, and significantly outperforms other competitors.
| no_new_dataset | 0.951549 |
1604.01420 | Ognjen Arandjelovi\'c PhD | Reza Shoja Ghiass and Ognjen Arandjelovic | Highly accurate gaze estimation using a consumer RGB-depth sensor | International Joint Conference on Artificial Intelligence, 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Determining the direction in which a person is looking is an important
problem in a wide range of HCI applications. In this paper we describe a highly
accurate algorithm that performs gaze estimation using an affordable and widely
available device such as Kinect. The method we propose starts by performing
accurate head pose estimation achieved by fitting a person specific morphable
model of the face to depth data. The ordinarily competing requirements of high
accuracy and high speed are met concurrently by formulating the fitting
objective function as a combination of terms which excel either in accurate or
fast fitting, and then by adaptively adjusting their relative contributions
throughout fitting. Following pose estimation, pose normalization is done by
re-rendering the fitted model as a frontal face. Finally gaze estimates are
obtained through regression from the appearance of the eyes in synthetic,
normalized images. Using EYEDIAP, the standard public dataset for the
evaluation of gaze estimation algorithms from RGB-D data, we demonstrate that
our method greatly outperforms the state of the art.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 20:50:40 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Ghiass",
"Reza Shoja",
""
],
[
"Arandjelovic",
"Ognjen",
""
]
] | TITLE: Highly accurate gaze estimation using a consumer RGB-depth sensor
ABSTRACT: Determining the direction in which a person is looking is an important
problem in a wide range of HCI applications. In this paper we describe a highly
accurate algorithm that performs gaze estimation using an affordable and widely
available device such as Kinect. The method we propose starts by performing
accurate head pose estimation achieved by fitting a person specific morphable
model of the face to depth data. The ordinarily competing requirements of high
accuracy and high speed are met concurrently by formulating the fitting
objective function as a combination of terms which excel either in accurate or
fast fitting, and then by adaptively adjusting their relative contributions
throughout fitting. Following pose estimation, pose normalization is done by
re-rendering the fitted model as a frontal face. Finally gaze estimates are
obtained through regression from the appearance of the eyes in synthetic,
normalized images. Using EYEDIAP, the standard public dataset for the
evaluation of gaze estimation algorithms from RGB-D data, we demonstrate that
our method greatly outperforms the state of the art.
| no_new_dataset | 0.944995 |
1604.01485 | Ilija Ilievski | Ilija Ilievski, Shuicheng Yan, Jiashi Feng | A Focused Dynamic Attention Model for Visual Question Answering | Submitted to ECCV 2016 | null | null | null | cs.CV cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Question and Answering (VQA) problems are attracting increasing
interest from multiple research disciplines. Solving VQA problems requires
techniques from both computer vision for understanding the visual contents of a
presented image or video, as well as the ones from natural language processing
for understanding semantics of the question and generating the answers.
Regarding visual content modeling, most existing VQA methods adopt the
strategy of extracting global features from the image or video, which
inevitably fails in capturing fine-grained information such as spatial
configuration of multiple objects. Extracting features from auto-generated
regions -- as some region-based image recognition methods do -- cannot
essentially address this problem and may introduce some overwhelming irrelevant
features with the question. In this work, we propose a novel Focused Dynamic
Attention (FDA) model to provide better aligned image content representation
with proposed questions. Being aware of the key words in the question, FDA
employs an off-the-shelf object detector to identify important regions and fuses
the information from the regions and global features via an LSTM unit. Such
question-driven representations are then combined with question representation
and fed into a reasoning unit for generating the answers. Extensive evaluation
on a large-scale benchmark dataset, VQA, clearly demonstrates the superior
performance of FDA over well-established baselines.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 05:16:10 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Ilievski",
"Ilija",
""
],
[
"Yan",
"Shuicheng",
""
],
[
"Feng",
"Jiashi",
""
]
] | TITLE: A Focused Dynamic Attention Model for Visual Question Answering
ABSTRACT: Visual Question and Answering (VQA) problems are attracting increasing
interest from multiple research disciplines. Solving VQA problems requires
techniques from both computer vision for understanding the visual contents of a
presented image or video, as well as the ones from natural language processing
for understanding semantics of the question and generating the answers.
Regarding visual content modeling, most existing VQA methods adopt the
strategy of extracting global features from the image or video, which
inevitably fails in capturing fine-grained information such as spatial
configuration of multiple objects. Extracting features from auto-generated
regions -- as some region-based image recognition methods do -- cannot
essentially address this problem and may introduce some overwhelming irrelevant
features with the question. In this work, we propose a novel Focused Dynamic
Attention (FDA) model to provide better aligned image content representation
with proposed questions. Being aware of the key words in the question, FDA
employs an off-the-shelf object detector to identify important regions and fuses
the information from the regions and global features via an LSTM unit. Such
question-driven representations are then combined with question representation
and fed into a reasoning unit for generating the answers. Extensive evaluation
on a large-scale benchmark dataset, VQA, clearly demonstrates the superior
performance of FDA over well-established baselines.
| no_new_dataset | 0.947284 |
1604.01500 | Karan Sikka | Karan Sikka, Gaurav Sharma and Marian Bartlett | LOMo: Latent Ordinal Model for Facial Analysis in Videos | 2016 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of facial analysis in videos. We propose a novel weakly
supervised learning method that models the video event (expression, pain etc.)
as a sequence of automatically mined, discriminative sub-events (eg. onset and
offset phase for smile, brow lower and cheek raise for pain). The proposed
model is inspired by the recent works on Multiple Instance Learning and latent
SVM/HCRF; it extends such frameworks to approximately model the ordinal or
temporal aspect of the videos. We obtain consistent improvements over relevant
competitive baselines on four challenging and publicly available video based
facial analysis datasets for prediction of expression, clinical pain and intent
in dyadic conversations. In combination with complementary features, we report
state-of-the-art results on these datasets.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 06:14:58 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Sikka",
"Karan",
""
],
[
"Sharma",
"Gaurav",
""
],
[
"Bartlett",
"Marian",
""
]
] | TITLE: LOMo: Latent Ordinal Model for Facial Analysis in Videos
ABSTRACT: We study the problem of facial analysis in videos. We propose a novel weakly
supervised learning method that models the video event (expression, pain etc.)
as a sequence of automatically mined, discriminative sub-events (eg. onset and
offset phase for smile, brow lower and cheek raise for pain). The proposed
model is inspired by the recent works on Multiple Instance Learning and latent
SVM/HCRF; it extends such frameworks to approximately model the ordinal or
temporal aspect of the videos. We obtain consistent improvements over relevant
competitive baselines on four challenging and publicly available video based
facial analysis datasets for prediction of expression, clinical pain and intent
in dyadic conversations. In combination with complementary features, we report
state-of-the-art results on these datasets.
| no_new_dataset | 0.951908 |
1604.01518 | Xinxing Xu | Xinxing Xu, Joey Tianyi Zhou, IvorW. Tsang, Zheng Qin, Rick Siow Mong
Goh and Yong Liu | Simple and Efficient Learning using Privileged Information | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Support Vector Machine using Privileged Information (SVM+) has been
proposed to train a classifier to utilize the additional privileged information
that is only available in the training phase but not available in the test
phase. In this work, we propose an efficient solution for SVM+ by simply
utilizing the squared hinge loss instead of the hinge loss as in the existing
SVM+ formulation, which interestingly leads to a dual form with less variables
and in the same form with the dual of the standard SVM. The proposed algorithm
is utilized to leverage the additional web knowledge that is only available
during training for the image categorization tasks. The extensive experimental
results on both the Caltech101 and WebQueries datasets show that our proposed method
can achieve up to a hundred times speedup with comparable
accuracy when compared with the existing SVM+ method.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 07:33:55 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Xu",
"Xinxing",
""
],
[
"Zhou",
"Joey Tianyi",
""
],
[
"Tsang",
"IvorW.",
""
],
[
"Qin",
"Zheng",
""
],
[
"Goh",
"Rick Siow Mong",
""
],
[
"Liu",
"Yong",
""
]
] | TITLE: Simple and Efficient Learning using Privileged Information
ABSTRACT: The Support Vector Machine using Privileged Information (SVM+) has been
proposed to train a classifier to utilize the additional privileged information
that is only available in the training phase but not available in the test
phase. In this work, we propose an efficient solution for SVM+ by simply
utilizing the squared hinge loss instead of the hinge loss as in the existing
SVM+ formulation, which interestingly leads to a dual form with fewer variables
that has the same form as the dual of the standard SVM. The proposed algorithm
is utilized to leverage the additional web knowledge that is only available
during training for the image categorization tasks. The extensive experimental
results on both the Caltech101 and WebQueries datasets show that our proposed method
can achieve up to a hundred times speedup with comparable
accuracy when compared with the existing SVM+ method.
| no_new_dataset | 0.953492 |
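A minimal sketch of the squared hinge loss substitution discussed above, shown for a plain linear SVM rather than the full SVM+ with privileged information; the toy data, learning rate and regularization strength are illustrative assumptions.

```python
# Sketch: linear SVM trained with the squared hinge loss via gradient descent.
import numpy as np

def train_l2svm(X, y, lam=1e-2, lr=1e-2, epochs=200):
    """min_w lam/2 ||w||^2 + mean(max(0, 1 - y * (X @ w))^2)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = 1.0 - y * (X @ w)          # functional margins
        active = margins > 0                 # examples violating the margin
        # gradient of the squared hinge term on the active examples
        grad = lam * w - 2.0 * (X[active].T @ (margins[active] * y[active])) / n
        w -= lr * grad
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
    w = train_l2svm(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))
```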
1604.01545 | German Ros | German Ros, Simon Stent, Pablo F. Alcantarilla and Tomoki Watanabe | Training Constrained Deconvolutional Networks for Road Scene Semantic
Segmentation | submitted as a conference paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we investigate the problem of road scene semantic segmentation
using Deconvolutional Networks (DNs). Several constraints limit the practical
performance of DNs in this context: firstly, the paucity of existing pixel-wise
labelled training data, and secondly, the memory constraints of embedded
hardware, which rule out the practical use of state-of-the-art DN architectures
such as fully convolutional networks (FCN). To address the first constraint, we
introduce a Multi-Domain Road Scene Semantic Segmentation (MDRS3) dataset,
aggregating data from six existing densely and sparsely labelled datasets for
training our models, and two existing, separate datasets for testing their
generalisation performance. We show that, while MDRS3 offers a greater volume
and variety of data, end-to-end training of a memory efficient DN does not
yield satisfactory performance. We propose a new training strategy to overcome
this, based on (i) the creation of a best-possible source network (S-Net) from
the aggregated data, ignoring time and memory constraints; and (ii) the
transfer of knowledge from S-Net to the memory-efficient target network
(T-Net). We evaluate different techniques for S-Net creation and T-Net
transferral, and demonstrate that training a constrained deconvolutional
network in this manner can unlock better performance than existing training
approaches. Specifically, we show that a target network can be trained to
achieve improved accuracy versus an FCN despite using less than 1\% of the
memory. We believe that our approach can be useful beyond automotive scenarios
where labelled data is similarly scarce or fragmented and where practical
constraints exist on the desired model size. We make available our network
models and aggregated multi-domain dataset for reproducibility.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 09:02:50 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Ros",
"German",
""
],
[
"Stent",
"Simon",
""
],
[
"Alcantarilla",
"Pablo F.",
""
],
[
"Watanabe",
"Tomoki",
""
]
] | TITLE: Training Constrained Deconvolutional Networks for Road Scene Semantic
Segmentation
ABSTRACT: In this work we investigate the problem of road scene semantic segmentation
using Deconvolutional Networks (DNs). Several constraints limit the practical
performance of DNs in this context: firstly, the paucity of existing pixel-wise
labelled training data, and secondly, the memory constraints of embedded
hardware, which rule out the practical use of state-of-the-art DN architectures
such as fully convolutional networks (FCN). To address the first constraint, we
introduce a Multi-Domain Road Scene Semantic Segmentation (MDRS3) dataset,
aggregating data from six existing densely and sparsely labelled datasets for
training our models, and two existing, separate datasets for testing their
generalisation performance. We show that, while MDRS3 offers a greater volume
and variety of data, end-to-end training of a memory efficient DN does not
yield satisfactory performance. We propose a new training strategy to overcome
this, based on (i) the creation of a best-possible source network (S-Net) from
the aggregated data, ignoring time and memory constraints; and (ii) the
transfer of knowledge from S-Net to the memory-efficient target network
(T-Net). We evaluate different techniques for S-Net creation and T-Net
transferral, and demonstrate that training a constrained deconvolutional
network in this manner can unlock better performance than existing training
approaches. Specifically, we show that a target network can be trained to
achieve improved accuracy versus an FCN despite using less than 1\% of the
memory. We believe that our approach can be useful beyond automotive scenarios
where labelled data is similarly scarce or fragmented and where practical
constraints exist on the desired model size. We make available our network
models and aggregated multi-domain dataset for reproducibility.
| no_new_dataset | 0.946498 |
1604.01684 | Lakshmi Prabha Nattamai Sekar | N. S. Lakshmiprabha | Face Image Analysis using AAM, Gabor, LBP and WD features for Gender,
Age, Expression and Ethnicity Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growth in electronic transactions and human machine interactions rely on
the information such as gender, age, expression and ethnicity provided by the
face image. In order to obtain this information, feature extraction plays a
major role. In this paper, retrieval of age, gender, expression and race
information from an individual face image is analysed using different feature
extraction methods. The performance of four major feature extraction methods
such as Active Appearance Model (AAM), Gabor wavelets, Local Binary Pattern
(LBP) and Wavelet Decomposition (WD) is analyzed for gender recognition, age
estimation, expression recognition and racial recognition in terms of accuracy
(recognition rate), time for feature extraction, neural training and time to
test an image. Each of these recognition systems is compared with the four feature
extractors on the same dataset (training and validation set) to get a better
understanding of its performance. Experiments carried out on the FG-NET,
Cohn-Kanade and PAL face databases show that each method has its own merits and
demerits. Hence it is practically impossible to define a method which is best
in all circumstances with less computational complexity. Further, a detailed
comparison of age estimation and age estimation using gender information is
provided, along with a solution to overcome the aging effect in the case of gender
recognition. An attempt has been made to obtain all (i.e. gender, age range,
expression and ethnicity) information from a test image in a single go.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 17:49:14 GMT"
}
] | 2016-04-07T00:00:00 | [
[
"Lakshmiprabha",
"N. S.",
""
]
] | TITLE: Face Image Analysis using AAM, Gabor, LBP and WD features for Gender,
Age, Expression and Ethnicity Classification
ABSTRACT: The growth in electronic transactions and human machine interactions rely on
the information such as gender, age, expression and ethnicity provided by the
face image. In order to obtain this information, feature extraction plays a
major role. In this paper, retrieval of age, gender, expression and race
information from an individual face image is analysed using different feature
extraction methods. The performance of four major feature extraction methods
such as Active Appearance Model (AAM), Gabor wavelets, Local Binary Pattern
(LBP) and Wavelet Decomposition (WD) is analyzed for gender recognition, age
estimation, expression recognition and racial recognition in terms of accuracy
(recognition rate), time for feature extraction, neural training and time to
test an image. Each of these recognition systems is compared with the four feature
extractors on the same dataset (training and validation set) to get a better
understanding of its performance. Experiments carried out on the FG-NET,
Cohn-Kanade and PAL face databases show that each method has its own merits and
demerits. Hence it is practically impossible to define a method which is best
in all circumstances with less computational complexity. Further, a detailed
comparison of age estimation and age estimation using gender information is
provided, along with a solution to overcome the aging effect in the case of gender
recognition. An attempt has been made to obtain all (i.e. gender, age range,
expression and ethnicity) information from a test image in a single go.
| no_new_dataset | 0.949201 |
1408.6027 | Xin Geng | Xin Geng | Label Distribution Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although multi-label learning can deal with many problems with label
ambiguity, it does not fit some real applications well where the overall
distribution of the importance of the labels matters. This paper proposes a
novel learning paradigm named \emph{label distribution learning} (LDL) for such
kind of applications. The label distribution covers a certain number of labels,
representing the degree to which each label describes the instance. LDL is a
more general learning framework which includes both single-label and
multi-label learning as its special cases. This paper proposes six working LDL
algorithms in three ways: problem transformation, algorithm adaptation, and
specialized algorithm design. In order to compare the performance of the LDL
algorithms, six representative and diverse evaluation measures are selected via
a clustering analysis, and the first batch of label distribution datasets are
collected and made publicly available. Experimental results on one artificial
and fifteen real-world datasets show clear advantages of the specialized
algorithms, which indicates the importance of special design for the
characteristics of the LDL problem.
| [
{
"version": "v1",
"created": "Tue, 26 Aug 2014 06:48:58 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2016 09:47:09 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Geng",
"Xin",
""
]
] | TITLE: Label Distribution Learning
ABSTRACT: Although multi-label learning can deal with many problems with label
ambiguity, it does not fit some real applications well where the overall
distribution of the importance of the labels matters. This paper proposes a
novel learning paradigm named \emph{label distribution learning} (LDL) for such
kind of applications. The label distribution covers a certain number of labels,
representing the degree to which each label describes the instance. LDL is a
more general learning framework which includes both single-label and
multi-label learning as its special cases. This paper proposes six working LDL
algorithms in three ways: problem transformation, algorithm adaptation, and
specialized algorithm design. In order to compare the performance of the LDL
algorithms, six representative and diverse evaluation measures are selected via
a clustering analysis, and the first batch of label distribution datasets are
collected and made publicly available. Experimental results on one artificial
and fifteen real-world datasets show clear advantages of the specialized
algorithms, which indicates the importance of special design for the
characteristics of the LDL problem.
| no_new_dataset | 0.921428 |
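A hedged sketch of the core idea behind specialized LDL algorithms: fit a maximum-entropy (softmax) model so that the predicted label distribution matches the ground-truth distribution under KL divergence. The plain gradient-descent optimizer below is an assumption; the paper's specialized algorithms use more elaborate optimization.

```python
# Sketch: softmax regression minimizing mean KL(D || P) over label distributions.
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_ldl(X, D, lr=0.1, epochs=500):
    """X: (n, d) features; D: (n, L) label distributions, each row summing to 1."""
    n, d = X.shape
    L = D.shape[1]
    W = np.zeros((d, L))
    for _ in range(epochs):
        P = softmax(X @ W)              # predicted label distributions
        W -= lr * (X.T @ (P - D)) / n   # gradient of mean KL(D || P) w.r.t. W
    return W
```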
1409.5686 | Zhaohong Deng | Zhaohong Deng, Yizhang Jiang, Fu-Lai Chung, Hisao Ishibuchi, Kup-Sze
Choi, Shitong Wang | Transfer Prototype-based Fuzzy Clustering | The manuscript has been accepted by IEEE Trans. Fuzzy Systems in 2015 | null | 10.1109/TFUZZ.2015.2505330 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The traditional prototype based clustering methods, such as the well-known
fuzzy c-means (FCM) algorithm, usually need sufficient data to find a good
clustering partition. If the available data is limited or scarce, most of the
existing prototype based clustering algorithms will no longer be effective.
While the data for the current clustering task may be scarce, there is usually
some useful knowledge available in the related scenes/domains. In this study,
the concept of transfer learning is applied to prototype based fuzzy clustering
(PFC). Specifically, the idea of leveraging knowledge from the source domain is
exploited to develop a set of transfer prototype based fuzzy clustering (TPFC)
algorithms. Three prototype based fuzzy clustering algorithms, namely, FCM,
fuzzy k-plane clustering (FKPC) and fuzzy subspace clustering (FSC), have been
chosen and combined with a knowledge leveraging mechanism to develop the
corresponding transfer clustering algorithms. Novel objective functions are
proposed to integrate the knowledge of source domain with the data of target
domain for clustering in the target domain. The proposed algorithms have been
validated on different synthetic and real-world datasets and the results
demonstrate their effectiveness when compared with both the original prototype
based fuzzy clustering algorithms and the related clustering algorithms like
multi-task clustering and co-clustering.
| [
{
"version": "v1",
"created": "Fri, 19 Sep 2014 14:58:56 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2016 09:43:45 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Deng",
"Zhaohong",
""
],
[
"Jiang",
"Yizhang",
""
],
[
"Chung",
"Fu-Lai",
""
],
[
"Ishibuchi",
"Hisao",
""
],
[
"Choi",
"Kup-Sze",
""
],
[
"Wang",
"Shitong",
""
]
] | TITLE: Transfer Prototype-based Fuzzy Clustering
ABSTRACT: The traditional prototype based clustering methods, such as the well-known
fuzzy c-means (FCM) algorithm, usually need sufficient data to find a good
clustering partition. If the available data is limited or scarce, most of the
existing prototype based clustering algorithms will no longer be effective.
While the data for the current clustering task may be scarce, there is usually
some useful knowledge available in the related scenes/domains. In this study,
the concept of transfer learning is applied to prototype based fuzzy clustering
(PFC). Specifically, the idea of leveraging knowledge from the source domain is
exploited to develop a set of transfer prototype based fuzzy clustering (TPFC)
algorithms. Three prototype based fuzzy clustering algorithms, namely, FCM,
fuzzy k-plane clustering (FKPC) and fuzzy subspace clustering (FSC), have been
chosen and combined with a knowledge leveraging mechanism to develop the
corresponding transfer clustering algorithms. Novel objective functions are
proposed to integrate the knowledge of source domain with the data of target
domain for clustering in the target domain. The proposed algorithms have been
validated on different synthetic and real-world datasets and the results
demonstrate their effectiveness when compared with both the original prototype
based fuzzy clustering algorithms and the related clustering algorithms like
multi-task clustering and co-clustering.
| no_new_dataset | 0.952264 |
1506.09115 | Elisa Omodei | Elisa Omodei, Manlio De Domenico, and Alex Arenas | Characterizing interactions in online social networks during exceptional
events | null | Frontiers in Physics 3:59 (2015) | 10.3389/fphy.2015.00059 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, millions of people interact on a daily basis on online social media
like Facebook and Twitter, where they share and discuss information about a
wide variety of topics. In this paper, we focus on a specific online social
network, Twitter, and we analyze multiple datasets each one consisting of
individuals' online activity before, during and after an exceptional event in
terms of volume of the communications registered. We consider important events
that occurred in different arenas that range from policy to culture or science.
For each dataset, the users' online activities are modeled by a multilayer
network in which each layer conveys a different kind of interaction,
specifically: retweeting, mentioning and replying. This representation allows
us to unveil that these distinct types of interaction produce networks with
different statistical properties, in particular concerning the degree
distribution and the clustering structure. These results suggest that models
of online activity cannot discard the information carried by this multilayer
representation of the system, and should account for the different processes
generated by the different kinds of interactions. Secondly, our analysis
unveils the presence of statistical regularities among the different events,
suggesting that the non-trivial topological patterns that we observe may
represent universal features of the social dynamics on online social networks
during exceptional events.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 15:21:54 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Omodei",
"Elisa",
""
],
[
"De Domenico",
"Manlio",
""
],
[
"Arenas",
"Alex",
""
]
] | TITLE: Characterizing interactions in online social networks during exceptional
events
ABSTRACT: Nowadays, millions of people interact on a daily basis on online social media
like Facebook and Twitter, where they share and discuss information about a
wide variety of topics. In this paper, we focus on a specific online social
network, Twitter, and we analyze multiple datasets each one consisting of
individuals' online activity before, during and after an exceptional event in
terms of volume of the communications registered. We consider important events
that occurred in different arenas that range from policy to culture or science.
For each dataset, the users' online activities are modeled by a multilayer
network in which each layer conveys a different kind of interaction,
specifically: retweeting, mentioning and replying. This representation allows
us to unveil that these distinct types of interaction produce networks with
different statistical properties, in particular concerning the degree
distribution and the clustering structure. These results suggest that models
of online activity cannot discard the information carried by this multilayer
representation of the system, and should account for the different processes
generated by the different kinds of interactions. Secondly, our analysis
unveils the presence of statistical regularities among the different events,
suggesting that the non-trivial topological patterns that we observe may
represent universal features of the social dynamics on online social networks
during exceptional events.
| no_new_dataset | 0.942029 |
1511.02821 | Chao Chen | Chao Chen, Alina Zare, and J. Tory Cobb | Partial Membership Latent Dirichlet Allocation | cut to 6 pages, add sunset results | null | null | null | stat.ML cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Topic models (e.g., pLSA, LDA, SLDA) have been widely used for segmenting
imagery. These models are confined to crisp segmentation. Yet, there are many
images in which some regions cannot be assigned a crisp label (e.g., transition
regions between a foggy sky and the ground or between sand and water at a
beach). In these cases, a visual word is best represented with partial
memberships across multiple topics. To address this, we present a partial
membership latent Dirichlet allocation (PM-LDA) model and associated parameter
estimation algorithms. Experimental results on two natural image datasets and
one SONAR image dataset show that PM-LDA can produce both crisp and soft
semantic image segmentations; a capability existing methods do not have.
| [
{
"version": "v1",
"created": "Mon, 9 Nov 2015 20:04:56 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2016 03:59:15 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Chen",
"Chao",
""
],
[
"Zare",
"Alina",
""
],
[
"Cobb",
"J. Tory",
""
]
] | TITLE: Partial Membership Latent Dirichlet Allocation
ABSTRACT: Topic models (e.g., pLSA, LDA, SLDA) have been widely used for segmenting
imagery. These models are confined to crisp segmentation. Yet, there are many
images in which some regions cannot be assigned a crisp label (e.g., transition
regions between a foggy sky and the ground or between sand and water at a
beach). In these cases, a visual word is best represented with partial
memberships across multiple topics. To address this, we present a partial
membership latent Dirichlet allocation (PM-LDA) model and associated parameter
estimation algorithms. Experimental results on two natural image datasets and
one SONAR image dataset show that PM-LDA can produce both crisp and soft
semantic image segmentations; a capability existing methods do not have.
| no_new_dataset | 0.948106 |
1511.06442 | Henry Gouk | Henry Gouk, Bernhard Pfahringer, Michael Cree | Fast Metric Learning For Deep Neural Networks | null | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity metrics are a core component of many information retrieval and
machine learning systems. In this work we propose a method capable of learning
a similarity metric from data equipped with a binary relation. By considering
only the similarity constraints, and initially ignoring the features, we are
able to learn target vectors for each instance using one of several
appropriately designed loss functions. A regression model can then be
constructed that maps novel feature vectors to the same target vector space,
resulting in a feature extractor that computes vectors for which a predefined
metric is a meaningful measure of similarity. We present results on both
multiclass and multi-label classification datasets that demonstrate
considerably faster convergence, as well as higher accuracy on the majority of
the intrinsic evaluation tasks and all extrinsic evaluation tasks.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 23:10:00 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Nov 2015 06:05:30 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Dec 2015 15:27:11 GMT"
},
{
"version": "v4",
"created": "Wed, 17 Feb 2016 02:11:00 GMT"
},
{
"version": "v5",
"created": "Tue, 5 Apr 2016 07:29:48 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Gouk",
"Henry",
""
],
[
"Pfahringer",
"Bernhard",
""
],
[
"Cree",
"Michael",
""
]
] | TITLE: Fast Metric Learning For Deep Neural Networks
ABSTRACT: Similarity metrics are a core component of many information retrieval and
machine learning systems. In this work we propose a method capable of learning
a similarity metric from data equipped with a binary relation. By considering
only the similarity constraints, and initially ignoring the features, we are
able to learn target vectors for each instance using one of several
appropriately designed loss functions. A regression model can then be
constructed that maps novel feature vectors to the same target vector space,
resulting in a feature extractor that computes vectors for which a predefined
metric is a meaningful measure of similarity. We present results on both
multiclass and multi-label classification datasets that demonstrate
considerably faster convergence, as well as higher accuracy on the majority of
the intrinsic evaluation tasks and all extrinsic evaluation tasks.
| no_new_dataset | 0.949012 |
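A rough sketch of the two-stage idea described above: learn a target vector per instance from the binary similarity relation with a contrastive-style pair loss, then fit a regressor from features to those targets. The specific pair loss, ridge regressor and all hyperparameters are assumptions; the paper explores several loss functions and uses deep networks as the regressor.

```python
# Sketch: target vectors from a binary relation, then a linear feature-to-target map.
import numpy as np

def learn_targets(pairs, sims, n, dim=8, margin=1.0, lr=0.05, epochs=300, seed=0):
    """pairs: (m, 2) index pairs; sims: (m,) 1 if similar, 0 otherwise."""
    rng = np.random.default_rng(seed)
    T = rng.normal(scale=0.1, size=(n, dim))
    for _ in range(epochs):
        i, j = pairs[:, 0], pairs[:, 1]
        diff = T[i] - T[j]
        dist = np.linalg.norm(diff, axis=1) + 1e-9
        # similar pairs are pulled together; dissimilar pairs pushed apart up to the margin
        coef = np.where(sims == 1, 1.0, -np.maximum(margin - dist, 0) / dist)
        grad = coef[:, None] * diff
        np.add.at(T, i, -lr * grad)
        np.add.at(T, j, lr * grad)
    return T

def fit_regressor(X, T, ridge=1e-2):
    """Linear map from feature vectors to the learned target vectors (ridge regression)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(d), X.T @ T)
```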
1604.01105 | Amit Sharma | Amit Sharma, Dan Cosley | Distinguishing between Personal Preferences and Social Influence in
Online Activity Feeds | 13 pages, ACM CSCW 2016 | null | 10.1145/2818048.2819982 | null | cs.SI cs.HC stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many online social networks thrive on automatic sharing of friends'
activities to a user through activity feeds, which may influence the user's
next actions. However, identifying such social influence is tricky because
these activities are simultaneously impacted by influence and homophily. We
propose a statistical procedure that uses commonly available network and
observational data about people's actions to estimate the extent of
copy-influence---mimicking others' actions that appear in a feed. We assume
that non-friends don't influence users; thus, comparing how a user's activity
correlates with friends versus non-friends who have similar preferences can
help tease out the effect of copy-influence.
Experiments on datasets from multiple social networks show that estimates
that don't account for homophily overestimate copy-influence by varying, often
large amounts. Further, copy-influence estimates fall below 1% of total actions
in all networks: most people, and almost all actions, are not affected by the
feed. Our results question common perceptions around the extent of
copy-influence in online social networks and suggest improvements to diffusion
and recommendation models.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 01:16:30 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Sharma",
"Amit",
""
],
[
"Cosley",
"Dan",
""
]
] | TITLE: Distinguishing between Personal Preferences and Social Influence in
Online Activity Feeds
ABSTRACT: Many online social networks thrive on automatic sharing of friends'
activities to a user through activity feeds, which may influence the user's
next actions. However, identifying such social influence is tricky because
these activities are simultaneously impacted by influence and homophily. We
propose a statistical procedure that uses commonly available network and
observational data about people's actions to estimate the extent of
copy-influence---mimicking others' actions that appear in a feed. We assume
that non-friends don't influence users; thus, comparing how a user's activity
correlates with friends versus non-friends who have similar preferences can
help tease out the effect of copy-influence.
Experiments on datasets from multiple social networks show that estimates
that don't account for homophily overestimate copy-influence by varying, often
large amounts. Further, copy-influence estimates fall below 1% of total actions
in all networks: most people, and almost all actions, are not affected by the
feed. Our results question common perceptions around the extent of
copy-influence in online social networks and suggest improvements to diffusion
and recommendation models.
| no_new_dataset | 0.942981 |
1604.01131 | Khushnood Abbas | Khushnood Abbas, Shang Mingsheng and Luo Xin | Discovering items with potential popularity on social media | 7 pages in ACM style.7 figures and 1 table | null | null | null | cs.IR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the future popularity of online content is highly important in
many applications. The preferential attachment phenomenon is encountered in scale-free
networks. Under its influence, popular items get more popular, thereby
resulting in the long-tailed distribution problem. Consequently, new items which
can be popular (potential ones) are suppressed by the already popular items.
This paper proposes a novel model which is able to identify potential items. It
identifies potentially popular items by considering the number of links or
ratings an item has received in the recent past along with its popularity decay. To
obtain an efficient model we consider only temporal features of the content,
avoiding the cost of extracting other features. We have found that people
follow the recent behaviours of their peers. In the presence of fit or quality items,
already popular items lose their popularity. Prediction accuracy is measured on
three industrial datasets, namely Movielens, Netflix and Facebook wall posts.
Experimental results show that, compared to the state-of-the-art model, our model has
better prediction accuracy.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 04:27:22 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Abbas",
"Khushnood",
""
],
[
"Mingsheng",
"Shang",
""
],
[
"Xin",
"Luo",
""
]
] | TITLE: Discovering items with potential popularity on social media
ABSTRACT: Predicting the future popularity of online content is highly important in
many applications. The preferential attachment phenomenon is encountered in scale-free
networks. Under its influence, popular items get more popular, thereby
resulting in the long-tailed distribution problem. Consequently, new items which
can be popular (potential ones) are suppressed by the already popular items.
This paper proposes a novel model which is able to identify potential items. It
identifies potentially popular items by considering the number of links or
ratings an item has received in the recent past along with its popularity decay. To
obtain an efficient model we consider only temporal features of the content,
avoiding the cost of extracting other features. We have found that people
follow the recent behaviours of their peers. In the presence of fit or quality items,
already popular items lose their popularity. Prediction accuracy is measured on
three industrial datasets, namely Movielens, Netflix and Facebook wall posts.
Experimental results show that, compared to the state-of-the-art model, our model has
better prediction accuracy.
| no_new_dataset | 0.950227 |
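A minimal sketch of the general idea described in the abstract above: score items by the links or ratings received in a recent window, discounted by a popularity-decay term. The window length, decay rate, and function names are illustrative assumptions, not the authors' actual model.

    import math
    from collections import defaultdict

    def potential_popularity_scores(events, now, window=7.0, decay_rate=0.1):
        """Score items by recent links/ratings, discounted by their age.

        events: iterable of (item_id, timestamp) pairs, one per link/rating.
        now: current time, in the same unit as the timestamps (e.g. days).
        window: only events newer than now - window count as recent.
        decay_rate: exponential decay applied to each event's age.
        """
        scores = defaultdict(float)
        for item, t in events:
            age = now - t
            if 0.0 <= age <= window:
                # Recent feedback counts more; older feedback decays exponentially.
                scores[item] += math.exp(-decay_rate * age)
        return dict(scores)

    # Toy usage: item "b" is newer and gathering feedback faster than "a".
    events = [("a", 1.0), ("a", 2.0), ("b", 9.0), ("b", 9.5), ("b", 10.0)]
    print(potential_popularity_scores(events, now=10.0))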
1604.01146 | Chunhua Shen | Ruizhi Qiao, Lingqiao Liu, Chunhua Shen, Anton van den Hengel | Less is more: zero-shot learning from online textual documents with
noise suppression | Accepted to Int. Conf. Computer Vision and Pattern Recognition (CVPR)
2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classifying a visual concept merely from its associated online textual
source, such as a Wikipedia article, is an attractive research topic in
zero-shot learning because it alleviates the burden of manually collecting
semantic attributes. Several recent works have pursued this approach by
exploring various ways of connecting the visual and text domains. This paper
revisits this idea by stepping further to consider one important factor: the
textual representation is usually too noisy for the zero-shot learning
application. This consideration motivates us to design a simple-but-effective
zero-shot learning method capable of suppressing noise in the text.
More specifically, we propose an $l_{2,1}$-norm based objective function
which can simultaneously suppress the noisy signal in the text and learn a
function to match the text document and visual features. We also develop an
optimization algorithm to efficiently solve the resulting problem. By
conducting experiments on two large datasets, we demonstrate that the proposed
method significantly outperforms the competing methods which rely on online
information sources but without explicit noise suppression. We further make an
in-depth analysis of the proposed method and provide insight as to what kind of
information in documents is useful for zero-shot learning.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 06:13:06 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Qiao",
"Ruizhi",
""
],
[
"Liu",
"Lingqiao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Less is more: zero-shot learning from online textual documents with
noise suppression
ABSTRACT: Classifying a visual concept merely from its associated online textual
source, such as a Wikipedia article, is an attractive research topic in
zero-shot learning because it alleviates the burden of manually collecting
semantic attributes. Several recent works have pursued this approach by
exploring various ways of connecting the visual and text domains. This paper
revisits this idea by stepping further to consider one important factor: the
textual representation is usually too noisy for the zero-shot learning
application. This consideration motivates us to design a simple-but-effective
zero-shot learning method capable of suppressing noise in the text.
More specifically, we propose an $l_{2,1}$-norm based objective function
which can simultaneously suppress the noisy signal in the text and learn a
function to match the text document and visual features. We also develop an
optimization algorithm to efficiently solve the resulting problem. By
conducting experiments on two large datasets, we demonstrate that the proposed
method significantly outperforms the competing methods which rely on online
information sources but without explicit noise suppression. We further make an
in-depth analysis of the proposed method and provide insight as to what kind of
information in documents is useful for zero-shot learning.
| no_new_dataset | 0.943243 |
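For reference, the l_{2,1} norm used in the objective above is the sum of the Euclidean norms of a matrix's rows; minimizing it pushes entire rows (e.g., noisy text dimensions) toward zero. The snippet below only computes the norm on a random matrix and is not the authors' full objective or optimizer.

    import numpy as np

    def l21_norm(W):
        """Sum of the l2 norms of the rows of W: sum_i ||W[i, :]||_2."""
        return np.sqrt((W ** 2).sum(axis=1)).sum()

    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 3))
    print(l21_norm(W))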
1604.01304 | Li Li | Li Li and Houfeng Wang | Towards Label Imbalance in Multi-label Classification with Many Labels | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multi-label classification, an instance may be associated with a set of
labels simultaneously. Recently, the research on multi-label classification has
largely shifted its focus to the other end of the spectrum where the number of
labels is assumed to be extremely large. The existing works focus on how to
design scalable algorithms that offer fast training procedures and have a small
memory footprint. However, they ignore and even compound another challenge - the
label imbalance problem. To address this drawback, we propose a novel
Representation-based Multi-label Learning with Sampling (RMLS) approach. To the
best of our knowledge, we are the first to tackle the imbalance problem in
multi-label classification with many labels. Our experiments with
real-world datasets demonstrate the effectiveness of the proposed approach.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 15:44:33 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Li",
"Li",
""
],
[
"Wang",
"Houfeng",
""
]
] | TITLE: Towards Label Imbalance in Multi-label Classification with Many Labels
ABSTRACT: In multi-label classification, an instance may be associated with a set of
labels simultaneously. Recently, the research on multi-label classification has
largely shifted its focus to the other end of the spectrum where the number of
labels is assumed to be extremely large. The existing works focus on how to
design scalable algorithms that offer fast training procedures and have a small
memory footprint. However, they ignore and even compound another challenge - the
label imbalance problem. To address this drawback, we propose a novel
Representation-based Multi-label Learning with Sampling (RMLS) approach. To the
best of our knowledge, we are the first to tackle the imbalance problem in
multi-label classification with many labels. Our experiments with
real-world datasets demonstrate the effectiveness of the proposed approach.
| no_new_dataset | 0.944638 |
1604.01347 | Aayush Bansal | Aayush Bansal, Bryan Russell, Abhinav Gupta | Marr Revisited: 2D-3D Alignment via Surface Normal Prediction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an approach that leverages surface normal predictions, along
with appearance cues, to retrieve 3D models for objects depicted in 2D still
images from a large CAD object library. Critical to the success of our approach
is the ability to recover accurate surface normals for objects in the depicted
scene. We introduce a skip-network model built on the pre-trained Oxford VGG
convolutional neural network (CNN) for surface normal prediction. Our model
achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset for surface
normal prediction, and recovers fine object detail compared to previous
methods. Furthermore, we develop a two-stream network over the input image and
predicted surface normals that jointly learns pose and style for CAD model
retrieval. When using the predicted surface normals, our two-stream network
matches prior work using surface normals computed from RGB-D images on the task
of pose prediction, and achieves state of the art when using RGB-D input.
Finally, our two-stream network allows us to retrieve CAD models that better
match the style and pose of a depicted object compared with baseline
approaches.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 17:51:39 GMT"
}
] | 2016-04-06T00:00:00 | [
[
"Bansal",
"Aayush",
""
],
[
"Russell",
"Bryan",
""
],
[
"Gupta",
"Abhinav",
""
]
] | TITLE: Marr Revisited: 2D-3D Alignment via Surface Normal Prediction
ABSTRACT: We introduce an approach that leverages surface normal predictions, along
with appearance cues, to retrieve 3D models for objects depicted in 2D still
images from a large CAD object library. Critical to the success of our approach
is the ability to recover accurate surface normals for objects in the depicted
scene. We introduce a skip-network model built on the pre-trained Oxford VGG
convolutional neural network (CNN) for surface normal prediction. Our model
achieves state-of-the-art accuracy on the NYUv2 RGB-D dataset for surface
normal prediction, and recovers fine object detail compared to previous
methods. Furthermore, we develop a two-stream network over the input image and
predicted surface normals that jointly learns pose and style for CAD model
retrieval. When using the predicted surface normals, our two-stream network
matches prior work using surface normals computed from RGB-D images on the task
of pose prediction, and achieves state of the art when using RGB-D input.
Finally, our two-stream network allows us to retrieve CAD models that better
match the style and pose of a depicted object compared with baseline
approaches.
| no_new_dataset | 0.949248 |
1502.01710 | Xiang Zhang | Xiang Zhang, Yann LeCun | Text Understanding from Scratch | This technical report is superseded by a paper entitled
"Character-level Convolutional Networks for Text Classification",
arXiv:1509.01626. It has considerably more experimental results and a
rewritten introduction | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article demonstrates that we can apply deep learning to text
understanding from character-level inputs all the way up to abstract text
concepts, using temporal convolutional networks (ConvNets). We apply ConvNets
to various large-scale datasets, including ontology classification, sentiment
analysis, and text categorization. We show that temporal ConvNets can achieve
astonishing performance without the knowledge of words, phrases, sentences and
any other syntactic or semantic structures with regards to a human language.
Evidence shows that our models can work for both English and Chinese.
| [
{
"version": "v1",
"created": "Thu, 5 Feb 2015 20:45:19 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Apr 2015 21:32:01 GMT"
},
{
"version": "v3",
"created": "Sun, 7 Jun 2015 03:45:02 GMT"
},
{
"version": "v4",
"created": "Tue, 8 Sep 2015 04:42:29 GMT"
},
{
"version": "v5",
"created": "Mon, 4 Apr 2016 02:40:48 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Zhang",
"Xiang",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Text Understanding from Scratch
ABSTRACT: This article demonstrates that we can apply deep learning to text
understanding from character-level inputs all the way up to abstract text
concepts, using temporal convolutional networks (ConvNets). We apply ConvNets
to various large-scale datasets, including ontology classification, sentiment
analysis, and text categorization. We show that temporal ConvNets can achieve
astonishing performance without the knowledge of words, phrases, sentences and
any other syntactic or semantic structures with regards to a human language.
Evidence shows that our models can work for both English and Chinese.
| no_new_dataset | 0.949059 |
1509.01626 | Xiang Zhang | Xiang Zhang, Junbo Zhao, Yann LeCun | Character-level Convolutional Networks for Text Classification | An early version of this work entitled "Text Understanding from
Scratch" was posted in Feb 2015 as arXiv:1502.01710. The present paper has
considerably more experimental results and a rewritten introduction, Advances
in Neural Information Processing Systems 28 (NIPS 2015) | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article offers an empirical exploration on the use of character-level
convolutional networks (ConvNets) for text classification. We constructed
several large-scale datasets to show that character-level convolutional
networks could achieve state-of-the-art or competitive results. Comparisons are
offered against traditional models such as bag of words, n-grams and their
TFIDF variants, and deep learning models such as word-based ConvNets and
recurrent neural networks.
| [
{
"version": "v1",
"created": "Fri, 4 Sep 2015 22:31:53 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Sep 2015 17:12:43 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Apr 2016 02:34:30 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Zhang",
"Xiang",
""
],
[
"Zhao",
"Junbo",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Character-level Convolutional Networks for Text Classification
ABSTRACT: This article offers an empirical exploration on the use of character-level
convolutional networks (ConvNets) for text classification. We constructed
several large-scale datasets to show that character-level convolutional
networks could achieve state-of-the-art or competitive results. Comparisons are
offered against traditional models such as bag of words, n-grams and their
TFIDF variants, and deep learning models such as word-based ConvNets and
recurrent neural networks.
| no_new_dataset | 0.853486 |
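A common way to feed raw characters to such a ConvNet is to quantize each character into a one-hot vector over a fixed alphabet, producing an (alphabet_size x max_length) input matrix. The alphabet and maximum length below are illustrative assumptions and not necessarily the exact configuration used in the paper.

    import numpy as np

    ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+=<>()[]{}"
    CHAR_INDEX = {c: i for i, c in enumerate(ALPHABET)}

    def quantize(text, max_length=1014):
        """One-hot encode the first max_length characters of text.

        Characters outside the alphabet map to an all-zero column, a common
        convention in character-level ConvNet pipelines.
        """
        mat = np.zeros((len(ALPHABET), max_length), dtype=np.float32)
        for pos, ch in enumerate(text.lower()[:max_length]):
            idx = CHAR_INDEX.get(ch)
            if idx is not None:
                mat[idx, pos] = 1.0
        return mat

    x = quantize("Character-level ConvNets read raw text.")
    print(x.shape, int(x.sum()))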
1511.04891 | Mohamed Elhoseiny Mohamed Elhoseiny | Mohamed Elhoseiny, Scott Cohen, Walter Chang, Brian Price, Ahmed
Elgammal | Sherlock: Scalable Fact Learning in Images | Jan 7 Update | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study scalable and uniform understanding of facts in images. Existing
visual recognition systems are typically modeled differently for each fact type
such as objects, actions, and interactions. We propose a setting where all
these facts can be modeled simultaneously with a capacity to understand
unbounded number of facts in a structured way. The training data comes as
structured facts in images, including (1) objects (e.g., $<$boy$>$), (2)
attributes (e.g., $<$boy, tall$>$), (3) actions (e.g., $<$boy, playing$>$), and
(4) interactions (e.g., $<$boy, riding, a horse $>$). Each fact has a semantic
language view (e.g., $<$ boy, playing$>$) and a visual view (an image with this
fact). We show that learning visual facts in a structured way enables not only
a uniform but also generalizable visual understanding. We propose and
investigate recent and strong approaches from the multiview learning literature
and also introduce two learning representation models as potential baselines.
We applied the investigated methods on several datasets that we augmented with
structured facts and a large scale dataset of more than 202,000 facts and
814,000 images. Our experiments show the advantage of relating facts by the
structure by the proposed models compared to the designed baselines on
bidirectional fact retrieval.
| [
{
"version": "v1",
"created": "Mon, 16 Nov 2015 09:56:04 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Nov 2015 22:36:55 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jan 2016 02:56:24 GMT"
},
{
"version": "v4",
"created": "Sat, 2 Apr 2016 05:26:39 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Elhoseiny",
"Mohamed",
""
],
[
"Cohen",
"Scott",
""
],
[
"Chang",
"Walter",
""
],
[
"Price",
"Brian",
""
],
[
"Elgammal",
"Ahmed",
""
]
] | TITLE: Sherlock: Scalable Fact Learning in Images
ABSTRACT: We study scalable and uniform understanding of facts in images. Existing
visual recognition systems are typically modeled differently for each fact type
such as objects, actions, and interactions. We propose a setting where all
these facts can be modeled simultaneously with a capacity to understand
unbounded number of facts in a structured way. The training data comes as
structured facts in images, including (1) objects (e.g., $<$boy$>$), (2)
attributes (e.g., $<$boy, tall$>$), (3) actions (e.g., $<$boy, playing$>$), and
(4) interactions (e.g., $<$boy, riding, a horse $>$). Each fact has a semantic
language view (e.g., $<$ boy, playing$>$) and a visual view (an image with this
fact). We show that learning visual facts in a structured way enables not only
a uniform but also generalizable visual understanding. We propose and
investigate recent and strong approaches from the multiview learning literature
and also introduce two learning representation models as potential baselines.
We applied the investigated methods on several datasets that we augmented with
structured facts and a large scale dataset of more than 202,000 facts and
814,000 images. Our experiments show the advantage of relating facts by the
structure by the proposed models compared to the designed baselines on
bidirectional fact retrieval.
| no_new_dataset | 0.509128 |
1511.05960 | Kan Chen | Kan Chen, Jiang Wang, Liang-Chieh Chen, Haoyuan Gao, Wei Xu, Ram
Nevatia | ABC-CNN: An Attention Based Convolutional Neural Network for Visual
Question Answering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel attention based deep learning architecture for visual
question answering task (VQA). Given an image and an image related natural
language question, VQA generates the natural language answer for the question.
Generating the correct answers requires the model's attention to focus on the
regions corresponding to the question, because different questions inquire
about the attributes of different image regions. We introduce an attention
based configurable convolutional neural network (ABC-CNN) to learn such
question-guided attention. ABC-CNN determines an attention map for an
image-question pair by convolving the image feature map with configurable
convolutional kernels derived from the question's semantics. We evaluate the
ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR,
and VQA dataset. ABC-CNN model achieves significant improvements over
state-of-the-art methods on these datasets. The question-guided attention
generated by ABC-CNN is also shown to reflect the regions that are highly
relevant to the questions.
| [
{
"version": "v1",
"created": "Wed, 18 Nov 2015 20:59:50 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Apr 2016 22:47:38 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Chen",
"Kan",
""
],
[
"Wang",
"Jiang",
""
],
[
"Chen",
"Liang-Chieh",
""
],
[
"Gao",
"Haoyuan",
""
],
[
"Xu",
"Wei",
""
],
[
"Nevatia",
"Ram",
""
]
] | TITLE: ABC-CNN: An Attention Based Convolutional Neural Network for Visual
Question Answering
ABSTRACT: We propose a novel attention based deep learning architecture for visual
question answering task (VQA). Given an image and an image related natural
language question, VQA generates the natural language answer for the question.
Generating the correct answers requires the model's attention to focus on the
regions corresponding to the question, because different questions inquire
about the attributes of different image regions. We introduce an attention
based configurable convolutional neural network (ABC-CNN) to learn such
question-guided attention. ABC-CNN determines an attention map for an
image-question pair by convolving the image feature map with configurable
convolutional kernels derived from the question's semantics. We evaluate the
ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR,
and VQA dataset. ABC-CNN model achieves significant improvements over
state-of-the-art methods on these datasets. The question-guided attention
generated by ABC-CNN is also shown to reflect the regions that are highly
relevant to the questions.
| no_new_dataset | 0.948202 |
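The core mechanism sketched in the abstract — convolving an image feature map with a kernel derived from the question to obtain an attention map — can be illustrated as follows. The shapes, the 1x1 kernel, and the spatial softmax are illustrative assumptions, not the paper's exact architecture.

    import numpy as np

    def question_guided_attention(feat_map, q_kernel):
        """Compute an (H, W) attention map from image features and a question kernel.

        feat_map: (C, H, W) convolutional image features.
        q_kernel: (C,) kernel configured from the question embedding (here 1x1).
        Every spatial position is correlated with the question kernel, then a
        spatial softmax normalizes the scores into an attention distribution.
        """
        scores = np.tensordot(q_kernel, feat_map, axes=([0], [0]))  # (H, W)
        scores = scores - scores.max()
        att = np.exp(scores)
        return att / att.sum()

    rng = np.random.default_rng(0)
    feat = rng.normal(size=(8, 4, 4))   # stand-in for CNN feature maps
    q = rng.normal(size=(8,))           # stand-in for a projected question embedding
    att = question_guided_attention(feat, q)
    print(att.shape, round(float(att.sum()), 4))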
1512.00103 | Daniel Gillick | Dan Gillick, Cliff Brunk, Oriol Vinyals, Amarnag Subramanya | Multilingual Language Processing From Bytes | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads
text as bytes and outputs span annotations of the form [start, length, label]
where start positions, lengths, and labels are separate entries in our
vocabulary. Because we operate directly on unicode bytes rather than
language-specific words or characters, we can analyze text in many languages
with a single model. Due to the small vocabulary size, these multilingual
models are very compact, but produce results similar to or better than the
state-of- the-art in Part-of-Speech tagging and Named Entity Recognition that
use only the provided training datasets (no external data sources). Our models
are learning "from scratch" in that they do not rely on any elements of the
standard pipeline in Natural Language Processing (including tokenization), and
thus can run in standalone fashion on raw text.
| [
{
"version": "v1",
"created": "Tue, 1 Dec 2015 00:23:44 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Apr 2016 16:26:23 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Gillick",
"Dan",
""
],
[
"Brunk",
"Cliff",
""
],
[
"Vinyals",
"Oriol",
""
],
[
"Subramanya",
"Amarnag",
""
]
] | TITLE: Multilingual Language Processing From Bytes
ABSTRACT: We describe an LSTM-based model which we call Byte-to-Span (BTS) that reads
text as bytes and outputs span annotations of the form [start, length, label]
where start positions, lengths, and labels are separate entries in our
vocabulary. Because we operate directly on unicode bytes rather than
language-specific words or characters, we can analyze text in many languages
with a single model. Due to the small vocabulary size, these multilingual
models are very compact, but produce results similar to or better than the
state-of- the-art in Part-of-Speech tagging and Named Entity Recognition that
use only the provided training datasets (no external data sources). Our models
are learning "from scratch" in that they do not rely on any elements of the
standard pipeline in Natural Language Processing (including tokenization), and
thus can run in standalone fashion on raw text.
| no_new_dataset | 0.949153 |
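Reading text as raw bytes, as the abstract above describes, keeps the vocabulary tiny (256 symbols) and language independent. A minimal sketch of turning a string into a byte-id sequence and back:

    def text_to_byte_ids(text):
        """Encode a string as UTF-8 and return its byte values (integers 0-255)."""
        return list(text.encode("utf-8"))

    def byte_ids_to_text(ids):
        """Inverse of text_to_byte_ids."""
        return bytes(ids).decode("utf-8")

    sample = "Multilingual text: München, naïve, 東京"
    ids = text_to_byte_ids(sample)
    print(len(sample), len(ids))              # character count vs. byte count
    print(byte_ids_to_text(ids) == sample)    # round trip is lossless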
1512.05830 | Zhouchen Lin | Li Shen and Zhouchen Lin and Qingming Huang | Relay Backpropagation for Effective Learning of Deep Convolutional
Neural Networks | Technical report for our submissions to the ILSVRC 2015 Scene
Classification Challenge, where we won the first place | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning deeper convolutional neural networks has become a trend in recent
years. However, much empirical evidence suggests that performance improvement
cannot be gained by simply stacking more layers. In this paper, we consider the
issue from an information theoretical perspective, and propose a novel method
Relay Backpropagation, that encourages the propagation of effective information
through the network in the training stage. By virtue of the method, we achieved the
first place in ILSVRC 2015 Scene Classification Challenge. Extensive
experiments on two challenging large scale datasets demonstrate the
effectiveness of our method is not restricted to a specific dataset or network
architecture. Our models will be available to the research community later.
| [
{
"version": "v1",
"created": "Fri, 18 Dec 2015 00:13:10 GMT"
},
{
"version": "v2",
"created": "Sun, 3 Apr 2016 07:47:28 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Shen",
"Li",
""
],
[
"Lin",
"Zhouchen",
""
],
[
"Huang",
"Qingming",
""
]
] | TITLE: Relay Backpropagation for Effective Learning of Deep Convolutional
Neural Networks
ABSTRACT: Learning deeper convolutional neural networks has become a trend in recent
years. However, much empirical evidence suggests that performance improvement
cannot be gained by simply stacking more layers. In this paper, we consider the
issue from an information theoretical perspective, and propose a novel method
Relay Backpropagation, that encourages the propagation of effective information
through the network in the training stage. By virtue of the method, we achieved the
first place in ILSVRC 2015 Scene Classification Challenge. Extensive
experiments on two challenging large scale datasets demonstrate the
effectiveness of our method is not restricted to a specific dataset or network
architecture. Our models will be available to the research community later.
| no_new_dataset | 0.94868 |
1603.00391 | \c{C}a\u{g}lar G\"ul\c{c}ehre | Caglar Gulcehre, Marcin Moczulski, Misha Denil and Yoshua Bengio | Noisy Activation Functions | null | null | null | null | cs.LG cs.NE stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Common nonlinear activation functions used in neural networks can cause
training difficulties due to the saturation behavior of the activation
function, which may hide dependencies that are not visible to vanilla-SGD
(using first order gradients only). Gating mechanisms that use softly
saturating activation functions to emulate the discrete switching of digital
logic circuits are good examples of this. We propose to exploit the injection
of appropriate noise so that the gradients may flow easily, even if the
noiseless application of the activation function would yield zero gradient.
Large noise will dominate the noise-free gradient and allow stochastic gradient
descent to explore more. By adding noise only to the problematic parts of the
activation function, we allow the optimization procedure to explore the
boundary between the degenerate (saturating) and the well-behaved parts of the
activation function. We also establish connections to simulated annealing, when
the amount of noise is annealed down, making it easier to optimize hard
objective functions. We find experimentally that replacing such saturating
activation functions by noisy variants helps training in many contexts,
yielding state-of-the-art or competitive results on different datasets and
tasks, especially when training seems to be the most difficult, e.g., when
curriculum learning is necessary to obtain good results.
| [
{
"version": "v1",
"created": "Tue, 1 Mar 2016 18:30:15 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Mar 2016 20:51:57 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Apr 2016 21:41:47 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Gulcehre",
"Caglar",
""
],
[
"Moczulski",
"Marcin",
""
],
[
"Denil",
"Misha",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Noisy Activation Functions
ABSTRACT: Common nonlinear activation functions used in neural networks can cause
training difficulties due to the saturation behavior of the activation
function, which may hide dependencies that are not visible to vanilla-SGD
(using first order gradients only). Gating mechanisms that use softly
saturating activation functions to emulate the discrete switching of digital
logic circuits are good examples of this. We propose to exploit the injection
of appropriate noise so that the gradients may flow easily, even if the
noiseless application of the activation function would yield zero gradient.
Large noise will dominate the noise-free gradient and allow stochastic gradient
descent to explore more. By adding noise only to the problematic parts of the
activation function, we allow the optimization procedure to explore the
boundary between the degenerate (saturating) and the well-behaved parts of the
activation function. We also establish connections to simulated annealing, when
the amount of noise is annealed down, making it easier to optimize hard
objective functions. We find experimentally that replacing such saturating
activation functions by noisy variants helps training in many contexts,
yielding state-of-the-art or competitive results on different datasets and
tasks, especially when training seems to be the most difficult, e.g., when
curriculum learning is necessary to obtain good results.
| no_new_dataset | 0.946745 |
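One simple instance of the idea above — injecting noise where a saturating nonlinearity has zero gradient so that optimization can still explore — is sketched below with a hard sigmoid. The noise scaling is an illustrative assumption; the paper derives more careful variants.

    import numpy as np

    def hard_sigmoid(x):
        return np.clip(0.25 * x + 0.5, 0.0, 1.0)

    def noisy_hard_sigmoid(x, noise_std=0.3, rng=None):
        """Hard sigmoid whose saturated regions receive additive noise.

        In the linear region the unit behaves normally; in the saturated
        regions (|x| > 2 for this hard sigmoid, where the gradient is exactly
        zero) Gaussian noise scaled by how far x lies past the threshold is
        added, so stochastic gradient descent can still explore.
        """
        rng = rng or np.random.default_rng()
        h = hard_sigmoid(x)
        overshoot = np.maximum(np.abs(x) - 2.0, 0.0)
        noise = rng.normal(scale=noise_std, size=np.shape(x)) * overshoot
        return h + noise

    x = np.linspace(-6, 6, 7)
    print(hard_sigmoid(x))
    print(noisy_hard_sigmoid(x, rng=np.random.default_rng(0)))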
1603.06201 | Gong Cheng | Gong Cheng, Junwei Han | A Survey on Object Detection in Optical Remote Sensing Images | This manuscript is the accepted version for ISPRS Journal of
Photogrammetry and Remote Sensing | ISPRS Journal of Photogrammetry and Remote Sensing, 117: 11-28,
2016 | 10.1016/j.isprsjprs.2016.03.014 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Object detection in optical remote sensing images, being a fundamental but
challenging problem in the field of aerial and satellite image analysis, plays
an important role for a wide range of applications and is receiving significant
attention in recent years. While numerous methods exist, a deep review of the
literature concerning generic object detection is still lacking. This paper
aims to provide a review of the recent progress in this field. Different from
several previously published surveys that focus on a specific object class such
as building and road, we concentrate on more generic object categories
including, but are not limited to, road, building, tree, vehicle, ship,
airport, urban-area. Covering about 270 publications we survey 1) template
matching-based object detection methods, 2) knowledge-based object detection
methods, 3) object-based image analysis (OBIA)-based object detection methods,
4) machine learning-based object detection methods, and 5) five publicly
available datasets and three standard evaluation metrics. We also discuss the
challenges of current studies and propose two promising research directions,
namely deep learning-based feature representation and weakly supervised
learning-based geospatial object detection. It is our hope that this survey
will be beneficial for the researchers to have better understanding of this
research field.
| [
{
"version": "v1",
"created": "Sun, 20 Mar 2016 11:09:30 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Mar 2016 03:13:29 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Cheng",
"Gong",
""
],
[
"Han",
"Junwei",
""
]
] | TITLE: A Survey on Object Detection in Optical Remote Sensing Images
ABSTRACT: Object detection in optical remote sensing images, being a fundamental but
challenging problem in the field of aerial and satellite image analysis, plays
an important role for a wide range of applications and is receiving significant
attention in recent years. While numerous methods exist, a deep review of the
literature concerning generic object detection is still lacking. This paper
aims to provide a review of the recent progress in this field. Different from
several previously published surveys that focus on a specific object class such
as building and road, we concentrate on more generic object categories
including, but are not limited to, road, building, tree, vehicle, ship,
airport, urban-area. Covering about 270 publications we survey 1) template
matching-based object detection methods, 2) knowledge-based object detection
methods, 3) object-based image analysis (OBIA)-based object detection methods,
4) machine learning-based object detection methods, and 5) five publicly
available datasets and three standard evaluation metrics. We also discuss the
challenges of current studies and propose two promising research directions,
namely deep learning-based feature representation and weakly supervised
learning-based geospatial object detection. It is our hope that this survey
will be beneficial for the researchers to have better understanding of this
research field.
| no_new_dataset | 0.943867 |
1604.00427 | Yu-Chuan Su | Yu-Chuan Su, Kristen Grauman | Leaving Some Stones Unturned: Dynamic Feature Prioritization for
Activity Detection in Streaming Video | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current approaches for activity recognition often ignore constraints on
computational resources: 1) they rely on extensive feature computation to
obtain rich descriptors on all frames, and 2) they assume batch-mode access to
the entire test video at once. We propose a new active approach to activity
recognition that prioritizes "what to compute when" in order to make timely
predictions. The main idea is to learn a policy that dynamically schedules the
sequence of features to compute on selected frames of a given test video. In
contrast to traditional static feature selection, our approach continually
re-prioritizes computation based on the accumulated history of observations and
accounts for the transience of those observations in ongoing video. We develop
variants to handle both the batch and streaming settings. On two challenging
datasets, our method provides significantly better accuracy than alternative
techniques for a wide range of computational budgets.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 22:37:28 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Su",
"Yu-Chuan",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Leaving Some Stones Unturned: Dynamic Feature Prioritization for
Activity Detection in Streaming Video
ABSTRACT: Current approaches for activity recognition often ignore constraints on
computational resources: 1) they rely on extensive feature computation to
obtain rich descriptors on all frames, and 2) they assume batch-mode access to
the entire test video at once. We propose a new active approach to activity
recognition that prioritizes "what to compute when" in order to make timely
predictions. The main idea is to learn a policy that dynamically schedules the
sequence of features to compute on selected frames of a given test video. In
contrast to traditional static feature selection, our approach continually
re-prioritizes computation based on the accumulated history of observations and
accounts for the transience of those observations in ongoing video. We develop
variants to handle both the batch and streaming settings. On two challenging
datasets, our method provides significantly better accuracy than alternative
techniques for a wide range of computational budgets.
| no_new_dataset | 0.943348 |
1604.00470 | Raghvendra Kannao | Raghvendra Kannao and Prithwijit Guha | Overlay Text Extraction From TV News Broadcast | Published in INDICON 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The text data present in overlaid bands convey brief descriptions of news
events in broadcast videos. The process of text extraction becomes challenging
as overlay text is presented in widely varying formats and often with animation
effects. We note that existing edge density based methods are well suited for
our application on account of their simplicity and speed of operation. However,
these methods are sensitive to thresholds and have high false positive rates.
In this paper, we present a contrast enhancement based preprocessing stage for
overlay text detection and a parameter free edge density based scheme for
efficient text band detection. The second contribution of this paper is a novel
approach for multiple text region tracking with a formal identification of all
possible detection failure cases. The tracking stage enables us to establish
the temporal presence of text bands and their linking over time. The third
contribution is the adoption of Tesseract OCR for the specific task of overlay
text recognition using web news articles. The proposed approach is tested and
found superior on news videos acquired from three Indian English television
news channels along with benchmark datasets.
| [
{
"version": "v1",
"created": "Sat, 2 Apr 2016 07:28:23 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Kannao",
"Raghvendra",
""
],
[
"Guha",
"Prithwijit",
""
]
] | TITLE: Overlay Text Extraction From TV News Broadcast
ABSTRACT: The text data present in overlaid bands convey brief descriptions of news
events in broadcast videos. The process of text extraction becomes challenging
as overlay text is presented in widely varying formats and often with animation
effects. We note that existing edge density based methods are well suited for
our application on account of their simplicity and speed of operation. However,
these methods are sensitive to thresholds and have high false positive rates.
In this paper, we present a contrast enhancement based preprocessing stage for
overlay text detection and a parameter free edge density based scheme for
efficient text band detection. The second contribution of this paper is a novel
approach for multiple text region tracking with a formal identification of all
possible detection failure cases. The tracking stage enables us to establish
the temporal presence of text bands and their linking over time. The third
contribution is the adoption of Tesseract OCR for the specific task of overlay
text recognition using web news articles. The proposed approach is tested and
found superior on news videos acquired from three Indian English television
news channels along with benchmark datasets.
| no_new_dataset | 0.950411 |
1604.00606 | Yuzhuo Ren | Yuzhuo Ren, Chen Chen, Shangwen Li, and C.-C. Jay Kuo | GAL: A Global-Attributes Assisted Labeling System for Outdoor Scenes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An approach that extracts global attributes from outdoor images to facilitate
geometric layout labeling is investigated in this work. The proposed
Global-attributes Assisted Labeling (GAL) system exploits both local features
and global attributes. First, by following a classical method, we use local
features to provide initial labels for all super-pixels. Then, we develop a set
of techniques to extract global attributes from 2D outdoor images. They include
sky lines, ground lines, vanishing lines, etc. Finally, we propose the GAL
system that integrates global attributes in the conditional random field (CRF)
framework to improve initial labels so as to offer a more robust labeling
result. The performance of the proposed GAL system is demonstrated and
benchmarked with several state-of-the-art algorithms against a popular outdoor
scene layout dataset.
| [
{
"version": "v1",
"created": "Sun, 3 Apr 2016 07:36:50 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Ren",
"Yuzhuo",
""
],
[
"Chen",
"Chen",
""
],
[
"Li",
"Shangwen",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] | TITLE: GAL: A Global-Attributes Assisted Labeling System for Outdoor Scenes
ABSTRACT: An approach that extracts global attributes from outdoor images to facilitate
geometric layout labeling is investigated in this work. The proposed
Global-attributes Assisted Labeling (GAL) system exploits both local features
and global attributes. First, by following a classical method, we use local
features to provide initial labels for all super-pixels. Then, we develop a set
of techniques to extract global attributes from 2D outdoor images. They include
sky lines, ground lines, vanishing lines, etc. Finally, we propose the GAL
system that integrates global attributes in the conditional random field (CRF)
framework to improve initial labels so as to offer a more robust labeling
result. The performance of the proposed GAL system is demonstrated and
benchmarked with several state-of-the-art algorithms against a popular outdoor
scene layout dataset.
| no_new_dataset | 0.951278 |
1604.00647 | Ernesto Diaz-Aviles | Lucas Drumond, Ernesto Diaz-Aviles, and Lars Schmidt-Thieme | Multi-Relational Learning at Scale with ADMM | Keywords: Multi-Relational Learning, Distributed Learning,
Factorization Models, ADMM | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning from multi-relational data which contains noise, ambiguities, or
duplicate entities is essential to a wide range of applications such as
statistical inference based on Web Linked Data, recommender systems,
computational biology, and natural language processing. These tasks usually
require working with very large and complex datasets - e.g., the Web graph -
however, current approaches to multi-relational learning are not practical for
such scenarios due to their high computational complexity and poor scalability
on large data.
In this paper, we propose a novel and scalable approach for multi-relational
factorization based on consensus optimization. Our model, called ConsMRF, is
based on the Alternating Direction Method of Multipliers (ADMM) framework,
which enables us to optimize each target relation using a smaller set of
parameters than the state-of-the-art competitors in this task.
Due to ADMM's nature, ConsMRF can be easily parallelized which makes it
suitable for large multi-relational data. Experiments on large Web datasets -
derived from DBpedia, Wikipedia and YAGO - show the efficiency and performance
improvement of ConsMRF over strong competitors. In addition, ConsMRF's
near-linear scalability indicates great potential to tackle Web-scale problem
sizes.
| [
{
"version": "v1",
"created": "Sun, 3 Apr 2016 15:42:36 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Drumond",
"Lucas",
""
],
[
"Diaz-Aviles",
"Ernesto",
""
],
[
"Schmidt-Thieme",
"Lars",
""
]
] | TITLE: Multi-Relational Learning at Scale with ADMM
ABSTRACT: Learning from multi-relational data which contains noise, ambiguities, or
duplicate entities is essential to a wide range of applications such as
statistical inference based on Web Linked Data, recommender systems,
computational biology, and natural language processing. These tasks usually
require working with very large and complex datasets - e.g., the Web graph -
however, current approaches to multi-relational learning are not practical for
such scenarios due to their high computational complexity and poor scalability
on large data.
In this paper, we propose a novel and scalable approach for multi-relational
factorization based on consensus optimization. Our model, called ConsMRF, is
based on the Alternating Direction Method of Multipliers (ADMM) framework,
which enables us to optimize each target relation using a smaller set of
parameters than the state-of-the-art competitors in this task.
Due to ADMM's nature, ConsMRF can be easily parallelized which makes it
suitable for large multi-relational data. Experiments on large Web datasets -
derived from DBpedia, Wikipedia and YAGO - show the efficiency and performance
improvement of ConsMRF over strong competitors. In addition, ConsMRF's
near-linear scalability indicates great potential to tackle Web-scale problem
sizes.
| no_new_dataset | 0.943295 |
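The consensus-ADMM pattern underlying the model described above — local parameter copies per relation, pulled toward a shared consensus variable via dual updates — can be illustrated on a toy problem (averaging local least-squares targets). The objective, step size, and update forms below are assumptions for illustration, not the ConsMRF updates themselves.

    import numpy as np

    def consensus_admm(targets, rho=1.0, iters=50):
        """Minimize sum_i (x_i - targets[i])^2 subject to x_i = z for all i.

        Each worker i holds a local variable x_i and a scaled dual u_i; z is
        the shared consensus variable. The x-update has a closed form because
        each local objective is a simple quadratic.
        """
        n = len(targets)
        z = 0.0
        u = np.zeros(n)
        x = np.zeros(n)
        for _ in range(iters):
            # Local updates (could run in parallel, one per relation/worker).
            x = (targets + rho * (z - u)) / (1.0 + rho)
            # Consensus update: average the local variables plus duals.
            z = np.mean(x + u)
            # Dual updates.
            u = u + x - z
        return z, x

    targets = np.array([1.0, 2.0, 6.0])
    z, x = consensus_admm(targets)
    print(round(float(z), 3), np.round(x, 3))   # all x_i converge to the mean, 3.0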
1604.00734 | Matthew Francis-Landau | Matthew Francis-Landau, Greg Durrett and Dan Klein | Capturing Semantic Similarity for Entity Linking with Convolutional
Neural Networks | Accepted at NAACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key challenge in entity linking is making effective use of contextual
information to disambiguate mentions that might refer to different entities in
different contexts. We present a model that uses convolutional neural networks
to capture semantic correspondence between a mention's context and a proposed
target entity. These convolutional networks operate at multiple granularities
to exploit various kinds of topic information, and their rich parameterization
gives them the capacity to learn which n-grams characterize different topics.
We combine these networks with a sparse linear model to achieve
state-of-the-art performance on multiple entity linking datasets, outperforming
the prior systems of Durrett and Klein (2014) and Nguyen et al. (2014).
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 03:58:31 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Francis-Landau",
"Matthew",
""
],
[
"Durrett",
"Greg",
""
],
[
"Klein",
"Dan",
""
]
] | TITLE: Capturing Semantic Similarity for Entity Linking with Convolutional
Neural Networks
ABSTRACT: A key challenge in entity linking is making effective use of contextual
information to disambiguate mentions that might refer to different entities in
different contexts. We present a model that uses convolutional neural networks
to capture semantic correspondence between a mention's context and a proposed
target entity. These convolutional networks operate at multiple granularities
to exploit various kinds of topic information, and their rich parameterization
gives them the capacity to learn which n-grams characterize different topics.
We combine these networks with a sparse linear model to achieve
state-of-the-art performance on multiple entity linking datasets, outperforming
the prior systems of Durrett and Klein (2014) and Nguyen et al. (2014).
| no_new_dataset | 0.949856 |
1604.00783 | Divya Padmanabhan | Divya Padmanabhan, Satyanath Bhat, Shirish Shevade, Y. Narahari | Topic Model Based Multi-Label Classification from the Crowd | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-label classification is a common supervised machine learning problem
where each instance is associated with multiple classes. The key challenge in
this problem is learning the correlations between the classes. An additional
challenge arises when the labels of the training instances are provided by
noisy, heterogeneous crowdworkers with unknown qualities. We first assume
labels from a perfect source and propose a novel topic model where the present
as well as the absent classes generate the latent topics and hence the words.
We non-trivially extend our topic model to the scenario where the labels are
provided by noisy crowdworkers. Extensive experimentation on real world
datasets reveals the superior performance of the proposed model. The proposed
model learns the qualities of the annotators as well, even with minimal
training data.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 09:24:12 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Padmanabhan",
"Divya",
""
],
[
"Bhat",
"Satyanath",
""
],
[
"Shevade",
"Shirish",
""
],
[
"Narahari",
"Y.",
""
]
] | TITLE: Topic Model Based Multi-Label Classification from the Crowd
ABSTRACT: Multi-label classification is a common supervised machine learning problem
where each instance is associated with multiple classes. The key challenge in
this problem is learning the correlations between the classes. An additional
challenge arises when the labels of the training instances are provided by
noisy, heterogeneous crowdworkers with unknown qualities. We first assume
labels from a perfect source and propose a novel topic model where the present
as well as the absent classes generate the latent topics and hence the words.
We non-trivially extend our topic model to the scenario where the labels are
provided by noisy crowdworkers. Extensive experimentation on real world
datasets reveals the superior performance of the proposed model. The proposed
model learns the qualities of the annotators as well, even with minimal
training data.
| no_new_dataset | 0.950595 |
1604.00825 | Wojciech Samek | Alexander Binder and Gr\'egoire Montavon and Sebastian Bach and
Klaus-Robert M\"uller and Wojciech Samek | Layer-wise Relevance Propagation for Neural Networks with Local
Renormalization Layers | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Layer-wise relevance propagation is a framework which allows one to decompose the
prediction of a deep neural network computed over a sample, e.g. an image, down
to relevance scores for the single input dimensions of the sample such as
subpixels of an image. While this approach can be applied directly to
generalized linear mappings, product type non-linearities are not covered. This
paper proposes an approach to extend layer-wise relevance propagation to neural
networks with local renormalization layers, which is a very common product-type
non-linearity in convolutional neural networks. We evaluate the proposed method
for local renormalization layers on the CIFAR-10, Imagenet and MIT Places
datasets.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 11:52:07 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Binder",
"Alexander",
""
],
[
"Montavon",
"Grégoire",
""
],
[
"Bach",
"Sebastian",
""
],
[
"Müller",
"Klaus-Robert",
""
],
[
"Samek",
"Wojciech",
""
]
] | TITLE: Layer-wise Relevance Propagation for Neural Networks with Local
Renormalization Layers
ABSTRACT: Layer-wise relevance propagation is a framework which allows one to decompose the
prediction of a deep neural network computed over a sample, e.g. an image, down
to relevance scores for the single input dimensions of the sample such as
subpixels of an image. While this approach can be applied directly to
generalized linear mappings, product type non-linearities are not covered. This
paper proposes an approach to extend layer-wise relevance propagation to neural
networks with local renormalization layers, which is a very common product-type
non-linearity in convolutional neural networks. We evaluate the proposed method
for local renormalization layers on the CIFAR-10, Imagenet and MIT Places
datasets.
| no_new_dataset | 0.948394 |
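The basic relevance-redistribution step of layer-wise relevance propagation for a plain linear layer — the starting point that the paper extends to local renormalization layers — looks roughly like the epsilon rule below. The epsilon value and layer sizes are illustrative.

    import numpy as np

    def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
        """Redistribute output relevance R_out of a dense layer onto its inputs.

        a: (n_in,) input activations, W: (n_in, n_out) weights, b: (n_out,) biases,
        R_out: (n_out,) relevance of the output neurons.
        Each input receives relevance in proportion to its contribution
        a_i * w_ij to the pre-activation z_j; eps stabilizes the division.
        """
        z = a @ W + b                          # (n_out,)
        s = R_out / (z + eps * np.sign(z))     # (n_out,)
        return a * (W @ s)                     # (n_in,)

    rng = np.random.default_rng(0)
    a = rng.random(4)
    W = rng.normal(size=(4, 3))
    b = np.zeros(3)
    R_out = np.array([0.2, 0.5, 0.3])
    R_in = lrp_epsilon_dense(a, W, b, R_out)
    print(np.round(R_in, 3), round(float(R_in.sum()), 3))  # relevance is (approximately) conserved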
1604.00837 | Dominik Kowald | Kowald Dominik and Lex Elisabeth | The Influence of Frequency, Recency and Semantic Context on the Reuse of
Tags in Social Tagging Systems | Accepted by Hypertext 2016 conference as short paper | null | null | null | cs.SI cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study factors that influence tag reuse behavior in social
tagging systems. Our work is guided by the activation equation of the cognitive
model ACT-R, which states that the usefulness of information in human memory
depends on the three factors usage frequency, recency and semantic context. It
is our aim to shed light on the influence of these factors on tag reuse. In our
experiments, we utilize six datasets from the social tagging systems Flickr,
CiteULike, BibSonomy, Delicious, LastFM and MovieLens, covering a range of
various tagging settings. Our results confirm that frequency, recency and
semantic context positively influence the reuse probability of tags. However,
the extent to which each factor individually influences tag reuse strongly
depends on the type of folksonomy present in a social tagging system. Our work
can serve as guideline for researchers and developers of tag-based recommender
systems when designing algorithms for social tagging environments.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 12:49:02 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Dominik",
"Kowald",
""
],
[
"Elisabeth",
"Lex",
""
]
] | TITLE: The Influence of Frequency, Recency and Semantic Context on the Reuse of
Tags in Social Tagging Systems
ABSTRACT: In this paper, we study factors that influence tag reuse behavior in social
tagging systems. Our work is guided by the activation equation of the cognitive
model ACT-R, which states that the usefulness of information in human memory
depends on the three factors usage frequency, recency and semantic context. It
is our aim to shed light on the influence of these factors on tag reuse. In our
experiments, we utilize six datasets from the social tagging systems Flickr,
CiteULike, BibSonomy, Delicious, LastFM and MovieLens, covering a range of
various tagging settings. Our results confirm that frequency, recency and
semantic context positively influence the reuse probability of tags. However,
the extent to which each factor individually influences tag reuse strongly
depends on the type of folksonomy present in a social tagging system. Our work
can serve as guideline for researchers and developers of tag-based recommender
systems when designing algorithms for social tagging environments.
| no_new_dataset | 0.950319 |
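The frequency/recency part of the ACT-R activation equation mentioned above is commonly written as the base-level learning (BLL) equation B = ln(sum_j t_j^{-d}), where t_j is the time elapsed since the j-th past usage of an item (e.g., a tag) and d is a decay exponent. A small sketch with an illustrative decay value:

    import math

    def bll_activation(usage_times, now, decay=0.5):
        """Base-level activation of an item from its usage history.

        usage_times: timestamps at which the item (e.g., a tag) was used.
        now: current time; decay: power-law forgetting exponent d.
        Both frequent and recent usage raise the activation.
        """
        return math.log(sum((now - t) ** (-decay) for t in usage_times if t < now))

    # Tag "a": used often but long ago.  Tag "b": used less often but recently.
    print(bll_activation([1, 2, 3, 4], now=100))
    print(bll_activation([98, 99], now=100))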
1604.00906 | Yu-Chuan Su | Yu-Chuan Su and Kristen Grauman | Detecting Engagement in Egocentric Video | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a wearable camera video, we see what the camera wearer sees. While this
makes it easy to know roughly what he chose to look at, it does not immediately
reveal when he was engaged with the environment. Specifically, at what moments
did his focus linger, as he paused to gather more information about something
he saw? Knowing this answer would benefit various applications in video
summarization and augmented reality, yet prior work focuses solely on the
"what" question (estimating saliency, gaze) without considering the "when"
(engagement). We propose a learning-based approach that uses long-term
egomotion cues to detect engagement, specifically in browsing scenarios where
one frequently takes in new visual information (e.g., shopping, touring). We
introduce a large, richly annotated dataset for ego-engagement that is the
first of its kind. Our approach outperforms a wide array of existing methods.
We show engagement can be detected well independent of both scene appearance
and the camera wearer's identity.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 15:21:16 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Su",
"Yu-Chuan",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Detecting Engagement in Egocentric Video
ABSTRACT: In a wearable camera video, we see what the camera wearer sees. While this
makes it easy to know roughly what he chose to look at, it does not immediately
reveal when he was engaged with the environment. Specifically, at what moments
did his focus linger, as he paused to gather more information about something
he saw? Knowing this answer would benefit various applications in video
summarization and augmented reality, yet prior work focuses solely on the
"what" question (estimating saliency, gaze) without considering the "when"
(engagement). We propose a learning-based approach that uses long-term
egomotion cues to detect engagement, specifically in browsing scenarios where
one frequently takes in new visual information (e.g., shopping, touring). We
introduce a large, richly annotated dataset for ego-engagement that is the
first of its kind. Our approach outperforms a wide array of existing methods.
We show engagement can be detected well independent of both scene appearance
and the camera wearer's identity.
| new_dataset | 0.958343 |
1604.00989 | Charles Otto | Charles Otto, Dayong Wang, Anil K. Jain | Clustering Millions of Faces by Identity | null | null | null | MSU-CSE-16-3 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we attempt to address the following problem: Given a large
number of unlabeled face images, cluster them into the individual identities
present in this data. We consider this a relevant problem in different
application scenarios ranging from social media to law enforcement. In
large-scale scenarios the number of faces in the collection can be of the order
of hundreds of millions, while the number of clusters can range from a few
thousand to millions--leading to difficulties in terms of both run-time
complexity and evaluating clustering and per-cluster quality. An efficient and
effective Rank-Order clustering algorithm is developed to achieve the desired
scalability, and better clustering accuracy than other well-known algorithms
such as k-means and spectral clustering. We cluster up to 123 million face
images into over 10 million clusters, and analyze the results in terms of both
external cluster quality measures (known face labels) and internal cluster
quality measures (unknown face labels) and run-time. Our algorithm achieves an
F-measure of 0.87 on a benchmark unconstrained face dataset (LFW, consisting of
13K faces), and 0.27 on the largest dataset considered (13K images in LFW, plus
123M distractor images). Additionally, we present preliminary work on video
frame clustering (achieving 0.71 F-measure when clustering all frames in the
benchmark YouTube Faces dataset). A per-cluster quality measure is developed
which can be used to rank individual clusters and to automatically identify a
subset of good quality clusters for manual exploration.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 18:53:12 GMT"
}
] | 2016-04-05T00:00:00 | [
[
"Otto",
"Charles",
""
],
[
"Wang",
"Dayong",
""
],
[
"Jain",
"Anil K.",
""
]
] | TITLE: Clustering Millions of Faces by Identity
ABSTRACT: In this work, we attempt to address the following problem: Given a large
number of unlabeled face images, cluster them into the individual identities
present in this data. We consider this a relevant problem in different
application scenarios ranging from social media to law enforcement. In
large-scale scenarios the number of faces in the collection can be of the order
of hundreds of millions, while the number of clusters can range from a few
thousand to millions--leading to difficulties in terms of both run-time
complexity and evaluating clustering and per-cluster quality. An efficient and
effective Rank-Order clustering algorithm is developed to achieve the desired
scalability, and better clustering accuracy than other well-known algorithms
such as k-means and spectral clustering. We cluster up to 123 million face
images into over 10 million clusters, and analyze the results in terms of both
external cluster quality measures (known face labels) and internal cluster
quality measures (unknown face labels) and run-time. Our algorithm achieves an
F-measure of 0.87 on a benchmark unconstrained face dataset (LFW, consisting of
13K faces), and 0.27 on the largest dataset considered (13K images in LFW, plus
123M distractor images). Additionally, we present preliminary work on video
frame clustering (achieving 0.71 F-measure when clustering all frames in the
benchmark YouTube Faces dataset). A per-cluster quality measure is developed
which can be used to rank individual clusters and to automatically identify a
subset of good quality clusters for manual exploration.
| no_new_dataset | 0.944125 |
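The rank-order distance at the heart of such clustering compares two faces by where each appears in the other's nearest-neighbor ranking rather than by raw feature distance. Below is a small, unoptimized sketch of a symmetric rank-order distance; the neighbor handling and normalization are simplified assumptions, and the paper's approximate variant differs for scalability.

    import numpy as np

    def neighbor_ranks(dist_row):
        """Return (order, ranks): order sorts neighbors ascending; ranks[j] is j's position."""
        order = np.argsort(dist_row)
        ranks = np.empty_like(order)
        ranks[order] = np.arange(len(order))
        return order, ranks

    def rank_order_distance(i, j, dists):
        """Symmetric rank-order distance between samples i and j.

        The asymmetric term sums, over i's neighbors up to j's position in
        i's list, the rank each of those neighbors has in j's list; the two
        asymmetric terms are added and normalized by the smaller cross-rank.
        """
        order_i, ranks_i = neighbor_ranks(dists[i])
        order_j, ranks_j = neighbor_ranks(dists[j])

        def asym(order_a, ranks_b, pos):
            return sum(int(ranks_b[order_a[k]]) for k in range(pos + 1))

        d_ij = asym(order_i, ranks_j, int(ranks_i[j]))
        d_ji = asym(order_j, ranks_i, int(ranks_j[i]))
        return (d_ij + d_ji) / max(min(int(ranks_i[j]), int(ranks_j[i])), 1)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 4))                                   # 6 toy face embeddings
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)    # pairwise distances
    print(round(rank_order_distance(0, 1, D), 3))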
1408.1228 | Yang Zhang | Jun Pang and Yang Zhang | Location Prediction: Communities Speak Louder than Friends | ACM Conference on Online Social Networks 2015, COSN 2015 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans are social animals; they interact with different communities of
friends to conduct different activities. The literature shows that human
mobility is constrained by their social relations. In this paper, we
investigate the social impact of a person's communities on his mobility,
instead of all friends from his online social networks. This study can be
particularly useful, as certain social behaviors are influenced by specific
communities but not all friends. To achieve our goal, we first develop a
measure to characterize a person's social diversity, which we term `community
entropy'. Through analysis of two real-life datasets, we demonstrate that a
person's mobility is influenced only by a small fraction of his communities and
the influence depends on the social contexts of the communities. We then
exploit machine learning techniques to predict users' future movement based on
their communities' information. Extensive experiments demonstrate the
prediction's effectiveness.
| [
{
"version": "v1",
"created": "Wed, 6 Aug 2014 09:52:13 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Mar 2015 10:25:36 GMT"
},
{
"version": "v3",
"created": "Fri, 1 Apr 2016 09:00:05 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Pang",
"Jun",
""
],
[
"Zhang",
"Yang",
""
]
] | TITLE: Location Prediction: Communities Speak Louder than Friends
ABSTRACT: Humans are social animals; they interact with different communities of
friends to conduct different activities. The literature shows that human
mobility is constrained by their social relations. In this paper, we
investigate the social impact of a person's communities on his mobility,
instead of all friends from his online social networks. This study can be
particularly useful, as certain social behaviors are influenced by specific
communities but not all friends. To achieve our goal, we first develop a
measure to characterize a person's social diversity, which we term `community
entropy'. Through analysis of two real-life datasets, we demonstrate that a
person's mobility is influenced only by a small fraction of his communities and
the influence depends on the social contexts of the communities. We then
exploit machine learning techniques to predict users' future movement based on
their communities' information. Extensive experiments demonstrate the
prediction's effectiveness.
| no_new_dataset | 0.945801 |
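Illustrative note on the record above: "community entropy" is described only informally in the abstract. A plausible reading is the Shannon entropy of a user's interaction distribution over their communities; the sketch below assumes that reading, with hypothetical interaction counts.

```python
import math

def community_entropy(interaction_counts):
    """Shannon entropy of a user's interaction distribution over communities.

    interaction_counts: {community_id: number of interactions with that community}.
    Higher entropy means the user's activity is spread over more communities.
    """
    total = sum(interaction_counts.values())
    entropy = 0.0
    for count in interaction_counts.values():
        if count > 0:
            p = count / total
            entropy -= p * math.log2(p)
    return entropy

# Hypothetical user interacting mostly with one community vs. evenly with four.
print(community_entropy({"family": 8, "coworkers": 1, "gym": 1}))   # low social diversity
print(community_entropy({"a": 5, "b": 5, "c": 5, "d": 5}))          # high social diversity
```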
1603.02727 | Boxiang Dong | Boxiang Dong, Hui Wang | Efficient Authentication of Outsourced String Similarity Search | null | null | null | null | cs.CR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud computing enables the outsourcing of big data analytics, where a third
party server is responsible for data storage and processing. In this paper, we
consider the outsourcing model that provides string similarity search as the
service. In particular, given a similarity search query, the service provider
returns all strings from the outsourced dataset that are similar to the query
string. A major security concern of the outsourcing paradigm is to authenticate
whether the service provider returns sound and complete search results. In this
paper, we design AutoS3, an authentication mechanism of outsourced string
similarity search. The key idea of AutoS3 is that the server returns a
verification object VO to prove the result correctness. First, we design an
authenticated string indexing structure named MBtree for VO construction.
Second, we design two lightweight authentication methods named VS2 and EVS2
that can catch various cheating behaviors of the service provider at low
verification cost. Moreover, we generalize our solution to top-k string
similarity search. We perform an extensive set of experiments on real-world
datasets to demonstrate the efficiency of our approach.
| [
{
"version": "v1",
"created": "Tue, 8 Mar 2016 22:40:41 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Dong",
"Boxiang",
""
],
[
"Wang",
"Hui",
""
]
] | TITLE: Efficient Authentication of Outsourced String Similarity Search
ABSTRACT: Cloud computing enables the outsourcing of big data analytics, where a third
party server is responsible for data storage and processing. In this paper, we
consider the outsourcing model that provides string similarity search as the
service. In particular, given a similarity search query, the service provider
returns all strings from the outsourced dataset that are similar to the query
string. A major security concern of the outsourcing paradigm is to authenticate
whether the service provider returns sound and complete search results. In this
paper, we design AutoS3, an authentication mechanism of outsourced string
similarity search. The key idea of AutoS3 is that the server returns a
verification object VO to prove the result correctness. First, we design an
authenticated string indexing structure named MBtree for VO construction.
Second, we design two lightweight authentication methods named VS2 and EVS2
that can catch various cheating behaviors of the service provider at low
verification cost. Moreover, we generalize our solution to top-k string
similarity search. We perform an extensive set of experiments on real-world
datasets to demonstrate the efficiency of our approach.
| no_new_dataset | 0.94743 |
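Illustrative note on the record above: setting the authentication machinery (MBtree, VO) aside, the result set being certified is a threshold edit-distance search over the outsourced strings. The brute-force reference below defines that result set; the names and data are illustrative only, not the paper's code.

```python
def edit_distance(a, b):
    """Classic single-row dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,            # delete ca from a
                        dp[j - 1] + 1,        # insert cb into a
                        prev + (ca != cb))    # substitute / match
            prev = cur
    return dp[-1]

def similarity_search(dataset, query, tau):
    """All strings within edit distance tau of the query: the 'sound and complete'
    answer that a verification object would have to certify."""
    return [s for s in dataset if edit_distance(s, query) <= tau]

print(similarity_search(["apple", "apply", "maple", "orange"], "apple", 1))
```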
1603.09439 | Phuc Nguyen X | Phuc Xuan Nguyen, Gregory Rogez, Charless Fowlkes, Deva Ramanan | The Open World of Micro-Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Micro-videos are six-second videos popular on social media networks with
several unique properties. Firstly, because of the authoring process, they
contain significantly more diversity and narrative structure than existing
collections of video "snippets". Secondly, because they are often captured by
hand-held mobile cameras, they contain specialized viewpoints including
third-person, egocentric, and self-facing views seldom seen in traditional
produced video. Thirdly, due to their continuous production and publication
on social networks, aggregate micro-video content contains interesting
open-world dynamics that reflects the temporal evolution of tag topics. These
aspects make micro-videos an appealing well of visual data for developing
large-scale models for video understanding. We analyze a novel dataset of
micro-videos labeled with 58 thousand tags. To analyze this data, we introduce
viewpoint-specific and temporally-evolving models for video understanding,
defined over state-of-the-art motion and deep visual features. We conclude that
our dataset opens up new research opportunities for large-scale video analysis,
novel viewpoints, and open-world dynamics.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 02:19:53 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2016 01:53:32 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Nguyen",
"Phuc Xuan",
""
],
[
"Rogez",
"Gregory",
""
],
[
"Fowlkes",
"Charless",
""
],
[
"Ramanan",
"Deva",
""
]
] | TITLE: The Open World of Micro-Videos
ABSTRACT: Micro-videos are six-second videos popular on social media networks with
several unique properties. Firstly, because of the authoring process, they
contain significantly more diversity and narrative structure than existing
collections of video "snippets". Secondly, because they are often captured by
hand-held mobile cameras, they contain specialized viewpoints including
third-person, egocentric, and self-facing views seldom seen in traditional
produced video. Thirdly, due to their continuous production and publication
on social networks, aggregate micro-video content contains interesting
open-world dynamics that reflects the temporal evolution of tag topics. These
aspects make micro-videos an appealing well of visual data for developing
large-scale models for video understanding. We analyze a novel dataset of
micro-videos labeled with 58 thousand tags. To analyze this data, we introduce
viewpoint-specific and temporally-evolving models for video understanding,
defined over state-of-the-art motion and deep visual features. We conclude that
our dataset opens up new research opportunities for large-scale video analysis,
novel viewpoints, and open-world dynamics.
| new_dataset | 0.955152 |
1603.09540 | Paolo Boldi | Paolo Boldi, Corrado Monti | LlamaFur: Learning Latent Category Matrix to Find Unexpected Relations
in Wikipedia | Short version appeared in Proc. WebSci '16, May 22-25, 2016,
Hannover, Germany | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Besides finding trends and unveiling typical patterns, modern information
retrieval is increasingly more interested in the discovery of surprising
information in textual datasets. In this work we focus on finding "unexpected
links" in hyperlinked document corpora when documents are assigned to
categories. To achieve this goal, we model the hyperlinks graph through node
categories: the presence of an arc is fostered or discouraged by the categories
of the head and the tail of the arc. Specifically, we determine a latent
category matrix that explains common links. The matrix is built using a
margin-based online learning algorithm (Passive-Aggressive), which makes us
able to process graphs with $10^{8}$ links in less than $10$ minutes. We show
that our method provides better accuracy than most existing text-based
techniques, with higher efficiency and relying on a much smaller amount of
information. It also provides higher precision than standard link prediction,
especially at low recall levels; the two methods are in fact shown to be
orthogonal to each other and can therefore be fruitfully combined.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 11:49:39 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2016 09:34:32 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Boldi",
"Paolo",
""
],
[
"Monti",
"Corrado",
""
]
] | TITLE: LlamaFur: Learning Latent Category Matrix to Find Unexpected Relations
in Wikipedia
ABSTRACT: Besides finding trends and unveiling typical patterns, modern information
retrieval is increasingly more interested in the discovery of surprising
information in textual datasets. In this work we focus on finding "unexpected
links" in hyperlinked document corpora when documents are assigned to
categories. To achieve this goal, we model the hyperlinks graph through node
categories: the presence of an arc is fostered or discouraged by the categories
of the head and the tail of the arc. Specifically, we determine a latent
category matrix that explains common links. The matrix is built using a
margin-based online learning algorithm (Passive-Aggressive), which makes us
able to process graphs with $10^{8}$ links in less than $10$ minutes. We show
that our method provides better accuracy than most existing text-based
techniques, with higher efficiency and relying on a much smaller amount of
information. It also provides higher precision than standard link prediction,
especially at low recall levels; the two methods are in fact shown to be
orthogonal to each other and can therefore be fruitfully combined.
| no_new_dataset | 0.944689 |
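Illustrative note on the record above: the latent category matrix is trained with a Passive-Aggressive online learner over category pairs of linked pages. The sketch below shows one plausible instantiation of a PA-I update in which the implicit feature vector is the outer product of head- and tail-category indicators; the exact loss and negative sampling used in LlamaFur are not reproduced here.

```python
import numpy as np

def pa_update(W, head_cats, tail_cats, label, C=1.0):
    """One Passive-Aggressive (PA-I) step on a latent category matrix W.

    head_cats / tail_cats: binary indicator vectors over page categories.
    label: +1 if the arc (head -> tail) is observed, -1 for a sampled non-arc.
    The score of an arc is head_cats @ W @ tail_cats.
    """
    score = head_cats @ W @ tail_cats
    loss = max(0.0, 1.0 - label * score)            # hinge loss with unit margin
    if loss > 0.0:
        sq_norm = (head_cats @ head_cats) * (tail_cats @ tail_cats)
        tau = min(C, loss / sq_norm)                # PA-I aggressiveness, capped at C
        W += tau * label * np.outer(head_cats, tail_cats)
    return W

# Tiny illustration with 4 hypothetical categories.
W = np.zeros((4, 4))
W = pa_update(W, np.array([1.0, 0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0, 0.0]), label=+1)
print(W)
```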
1604.00036 | Jos\'e Oramas | Jose Oramas, Tinne Tuytelaars | Modeling Visual Compatibility through Hierarchical Mid-level Elements | 29 pages, 19 Figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a hierarchical method to discover mid-level elements
with the objective of modeling visual compatibility between objects. At the
base-level, our method identifies patterns of CNN activations with the aim of
modeling different variations/styles in which objects of the classes of
interest may occur. At the top-level, the proposed method discovers patterns of
co-occurring activations of base-level elements that define visual
compatibility between pairs of object classes. Experiments on the massive
Amazon dataset show the strength of our method at describing object classes and
the characteristics that drive the compatibility between them.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 20:18:16 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Oramas",
"Jose",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] | TITLE: Modeling Visual Compatibility through Hierarchical Mid-level Elements
ABSTRACT: In this paper we present a hierarchical method to discover mid-level elements
with the objective of modeling visual compatibility between objects. At the
base-level, our method identifies patterns of CNN activations with the aim of
modeling different variations/styles in which objects of the classes of
interest may occur. At the top-level, the proposed method discovers patterns of
co-occurring activations of base-level elements that define visual
compatibility between pairs of object classes. Experiments on the massive
Amazon dataset show the strength of our method at describing object classes and
the characteristics that drive the compatibility between them.
| no_new_dataset | 0.949248 |
1604.00300 | Benjamin Negrevergne | R\'emi Coletta and Benjamin Negrevergne | A SAT model to mine flexible sequences in transactional datasets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional pattern mining algorithms generally suffer from a lack of
flexibility. In this paper, we propose a SAT formulation of the problem to
successfully mine frequent flexible sequences occurring in transactional
datasets. Our SAT-based approach can easily be extended with extra constraints
to address a broad range of pattern mining applications. To demonstrate this
claim, we formulate and add several constraints, such as gap and span
constraints, to our model in order to extract more specific patterns. We also
use interactive solving to perform important derived tasks, such as closed
pattern mining or maximal pattern mining. Finally, we prove the practical
feasibility of our SAT model by running experiments on two real datasets.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 15:49:51 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Coletta",
"Rémi",
""
],
[
"Negrevergne",
"Benjamin",
""
]
] | TITLE: A SAT model to mine flexible sequences in transactional datasets
ABSTRACT: Traditional pattern mining algorithms generally suffer from a lack of
flexibility. In this paper, we propose a SAT formulation of the problem to
successfully mine frequent flexible sequences occurring in transactional
datasets. Our SAT-based approach can easily be extended with extra constraints
to address a broad range of pattern mining applications. To demonstrate this
claim, we formulate and add several constraints, such as gap and span
constraints, to our model in order to extract more specific patterns. We also
use interactive solving to perform important derived tasks, such as closed
pattern mining or maximal pattern mining. Finally, we prove the practical
feasibility of our SAT model by running experiments on two real datasets.
| no_new_dataset | 0.954732 |
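Illustrative note on the record above: independently of the SAT encoding itself, the primitive being modeled is whether a flexible (gappy) sequence occurs in a transaction under a gap constraint. The sketch below makes that occurrence test and the induced support count concrete; it is not the paper's SAT model, and the toy data is made up.

```python
def occurs_with_max_gap(pattern, transaction, max_gap):
    """True if `pattern` occurs as a subsequence of `transaction` with at most
    `max_gap` unmatched items between consecutive matched symbols."""
    def match(p_idx, start):
        if p_idx == len(pattern):
            return True
        # The first symbol may match anywhere; later symbols only inside the gap window.
        end = len(transaction) if p_idx == 0 else min(len(transaction), start + max_gap + 1)
        for t_idx in range(start, end):
            if transaction[t_idx] == pattern[p_idx] and match(p_idx + 1, t_idx + 1):
                return True
        return False
    return match(0, 0)

def support(pattern, transactions, max_gap):
    """Number of transactions containing the flexible pattern under the gap constraint."""
    return sum(occurs_with_max_gap(pattern, t, max_gap) for t in transactions)

db = ["axbxc", "abcxx", "acb"]
print(support("abc", db, max_gap=1))   # 2: the third transaction has no 'c' after its 'b'
```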
1604.00317 | Ehud Ben-Reuven | Ehud Ben-Reuven and Jacob Goldberger | A Semisupervised Approach for Language Identification based on Ladder
Networks | null | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study we address the problem of training a neural network for language
identification using both labeled and unlabeled speech samples in the form of
i-vectors. We propose a neural network architecture that can also handle
out-of-set languages. We utilize a modified version of the recently proposed
Ladder Network semisupervised training procedure that optimizes the
reconstruction costs of a stack of denoising autoencoders. We show that this
approach can be successfully applied to the case where the training dataset is
composed of both labeled and unlabeled acoustic data. The results show enhanced
language identification on the NIST 2015 language identification dataset.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 16:26:57 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Ben-Reuven",
"Ehud",
""
],
[
"Goldberger",
"Jacob",
""
]
] | TITLE: A Semisupervised Approach for Language Identification based on Ladder
Networks
ABSTRACT: In this study we address the problem of training a neural network for language
identification using both labeled and unlabeled speech samples in the form of
i-vectors. We propose a neural network architecture that can also handle
out-of-set languages. We utilize a modified version of the recently proposed
Ladder Network semisupervised training procedure that optimizes the
reconstruction costs of a stack of denoising autoencoders. We show that this
approach can be successfully applied to the case where the training dataset is
composed of both labeled and unlabeled acoustic data. The results show enhanced
language identification on the NIST 2015 language identification dataset.
| no_new_dataset | 0.946001 |
1604.00326 | Ziad Al-Halah | Ziad Al-Halah and Rainer Stiefelhagen | How to Transfer? Zero-Shot Object Recognition via Hierarchical Transfer
of Semantic Attributes | Published as a conference paper at WACV 2015, modifications include
new results with GoogLeNet features | null | 10.1109/WACV.2015.116 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attribute based knowledge transfer has proven very successful in visual
object analysis and learning previously unseen classes. However, the common
approach learns and transfers attributes without taking into consideration the
embedded structure between the categories in the source set. Such information
provides important cues on the intra-attribute variations. We propose to
capture these variations in a hierarchical model that expands the knowledge
source with additional abstraction levels of attributes. We also provide a
novel transfer approach that can choose the appropriate attributes to be shared
with an unseen class. We evaluate our approach on three public datasets:
aPascal, Animals with Attributes and CUB-200-2011 Birds. The experiments
demonstrate the effectiveness of our model with significant improvement over
state-of-the-art.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 16:51:56 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Al-Halah",
"Ziad",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] | TITLE: How to Transfer? Zero-Shot Object Recognition via Hierarchical Transfer
of Semantic Attributes
ABSTRACT: Attribute based knowledge transfer has proven very successful in visual
object analysis and learning previously unseen classes. However, the common
approach learns and transfers attributes without taking into consideration the
embedded structure between the categories in the source set. Such information
provides important cues on the intra-attribute variations. We propose to
capture these variations in a hierarchical model that expands the knowledge
source with additional abstraction levels of attributes. We also provide a
novel transfer approach that can choose the appropriate attributes to be shared
with an unseen class. We evaluate our approach on three public datasets:
aPascal, Animals with Attributes and CUB-200-2011 Birds. The experiments
demonstrate the effectiveness of our model with significant improvement over
state-of-the-art.
| no_new_dataset | 0.951953 |
1604.00367 | Mengran Gou | Mengran Gou, Xikang Zhang, Angels Rates-Borras, Sadjad
Asghari-Esfeden, Mario Sznaier, Octavia Camps | Person Re-identification in Appearance Impaired Scenarios | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification is critical in surveillance applications. Current
approaches rely on appearance based features extracted from a single or
multiple shots of the target and candidate matches. These approaches are at a
disadvantage when trying to distinguish between candidates dressed in similar
colors or when targets change their clothing. In this paper we propose a
dynamics-based feature to overcome this limitation. The main idea is to capture
soft biometrics from gait and motion patterns by gathering dense short
trajectories (tracklets) which are Fisher vector encoded. To illustrate the
merits of the proposed features we introduce three new "appearance-impaired"
datasets. Our experiments on the original and the appearance impaired datasets
demonstrate the benefits of incorporating dynamics-based information with
appearance-based information to re-identification algorithms.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 19:20:03 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Gou",
"Mengran",
""
],
[
"Zhang",
"Xikang",
""
],
[
"Rates-Borras",
"Angels",
""
],
[
"Asghari-Esfeden",
"Sadjad",
""
],
[
"Sznaier",
"Mario",
""
],
[
"Camps",
"Octavia",
""
]
] | TITLE: Person Re-identification in Appearance Impaired Scenarios
ABSTRACT: Person re-identification is critical in surveillance applications. Current
approaches rely on appearance based features extracted from a single or
multiple shots of the target and candidate matches. These approaches are at a
disadvantage when trying to distinguish between candidates dressed in similar
colors or when targets change their clothing. In this paper we propose a
dynamics-based feature to overcome this limitation. The main idea is to capture
soft biometrics from gait and motion patterns by gathering dense short
trajectories (tracklets) which are Fisher vector encoded. To illustrate the
merits of the proposed features we introduce three new "appearance-impaired"
datasets. Our experiments on the original and the appearance impaired datasets
demonstrate the benefits of incorporating dynamics-based information with
appearance-based information to re-identification algorithms.
| new_dataset | 0.960175 |
1604.00385 | Stephen Plaza | Stephen M. Plaza and Stuart E. Berg | Large-Scale Electron Microscopy Image Segmentation in Spark | null | null | null | null | q-bio.QM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emerging field of connectomics aims to unlock the mysteries of the brain
by understanding the connectivity between neurons. To map this connectivity, we
acquire thousands of electron microscopy (EM) images with nanometer-scale
resolution. After aligning these images, the resulting dataset has the
potential to reveal the shapes of neurons and the synaptic connections between
them. However, imaging the brain of even a tiny organism like the fruit fly
yields terabytes of data. It can take years of manual effort to examine such
image volumes and trace their neuronal connections. One solution is to apply
image segmentation algorithms to help automate the tracing tasks. In this
paper, we propose a novel strategy to apply such segmentation on very large
datasets that exceed the capacity of a single machine. Our solution is robust
to potential segmentation errors which could otherwise severely compromise the
quality of the overall segmentation, for example those due to poor classifier
generalizability or anomalies in the image dataset. We implement our algorithms
in a Spark application which minimizes disk I/O, and apply them to a few large
EM datasets, revealing both their effectiveness and scalability. We hope this
work will encourage external contributions to EM segmentation by providing 1) a
flexible plugin architecture that deploys easily on different cluster
environments and 2) an in-memory representation of segmentation that could be
conducive to new advances.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 19:53:30 GMT"
}
] | 2016-04-04T00:00:00 | [
[
"Plaza",
"Stephen M.",
""
],
[
"Berg",
"Stuart E.",
""
]
] | TITLE: Large-Scale Electron Microscopy Image Segmentation in Spark
ABSTRACT: The emerging field of connectomics aims to unlock the mysteries of the brain
by understanding the connectivity between neurons. To map this connectivity, we
acquire thousands of electron microscopy (EM) images with nanometer-scale
resolution. After aligning these images, the resulting dataset has the
potential to reveal the shapes of neurons and the synaptic connections between
them. However, imaging the brain of even a tiny organism like the fruit fly
yields terabytes of data. It can take years of manual effort to examine such
image volumes and trace their neuronal connections. One solution is to apply
image segmentation algorithms to help automate the tracing tasks. In this
paper, we propose a novel strategy to apply such segmentation on very large
datasets that exceed the capacity of a single machine. Our solution is robust
to potential segmentation errors which could otherwise severely compromise the
quality of the overall segmentation, for example those due to poor classifier
generalizability or anomalies in the image dataset. We implement our algorithms
in a Spark application which minimizes disk I/O, and apply them to a few large
EM datasets, revealing both their effectiveness and scalability. We hope this
work will encourage external contributions to EM segmentation by providing 1) a
flexible plugin architecture that deploys easily on different cluster
environments and 2) an in-memory representation of segmentation that could be
conducive to new advances.
| no_new_dataset | 0.927429 |
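Illustrative note on the record above: the system distributes segmentation of a large EM volume across Spark workers. The PySpark sketch below shows only the basic pattern (partition into blocks, segment each block independently, collect or write back); the per-block segmenter is a trivial stand-in, since the real voxel classifier, agglomeration, and stitching are the paper's contribution.

```python
from pyspark import SparkContext
import numpy as np

def segment_block(block):
    """Stand-in per-block segmenter: a real pipeline would run the trained voxel
    classifier and agglomeration here instead of a simple intensity threshold."""
    block_id, volume = block
    labels = (volume > volume.mean()).astype(np.int32)
    return block_id, labels

if __name__ == "__main__":
    sc = SparkContext(appName="em-segmentation-sketch")
    # Hypothetical: 8 blocks of a larger aligned volume, generated here rather than
    # read from shared storage, to keep the sketch self-contained.
    blocks = [(i, np.random.rand(64, 64, 64)) for i in range(8)]
    segmented = (sc.parallelize(blocks, numSlices=8)
                   .map(segment_block)
                   .collect())                 # in practice, write each block back to storage
    print([(bid, int(lab.sum())) for bid, lab in segmented])
    sc.stop()
```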
1511.06909 | Shihao Ji | Shihao Ji, S. V. N. Vishwanathan, Nadathur Satish, Michael J. Anderson
and Pradeep Dubey | BlackOut: Speeding up Recurrent Neural Network Language Models With Very
Large Vocabularies | Published as a conference paper at ICLR 2016 | null | null | null | cs.LG cs.CL cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose BlackOut, an approximation algorithm to efficiently train massive
recurrent neural network language models (RNNLMs) with million word
vocabularies. BlackOut is motivated by using a discriminative loss, and we
describe a new sampling strategy which significantly reduces computation while
improving stability, sample efficiency, and rate of convergence. One way to
understand BlackOut is to view it as an extension of the DropOut strategy to
the output layer, wherein we use a discriminative training loss and a weighted
sampling scheme. We also establish close connections between BlackOut,
importance sampling, and noise contrastive estimation (NCE). Our experiments,
on the recently released one billion word language modeling benchmark,
demonstrate scalability and accuracy of BlackOut; we outperform the
state-of-the-art, and achieve the lowest perplexity scores on this dataset.
Moreover, unlike other established methods which typically require GPUs or CPU
clusters, we show that a carefully implemented version of BlackOut requires
only 1-10 days on a single machine to train a RNNLM with a million word
vocabulary and billions of parameters on one billion words. Although we
describe BlackOut in the context of RNNLM training, it can be applied to any
network with a large softmax output layer.
| [
{
"version": "v1",
"created": "Sat, 21 Nov 2015 17:49:30 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Nov 2015 07:09:16 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Dec 2015 06:08:54 GMT"
},
{
"version": "v4",
"created": "Mon, 21 Dec 2015 04:40:55 GMT"
},
{
"version": "v5",
"created": "Wed, 6 Jan 2016 21:57:56 GMT"
},
{
"version": "v6",
"created": "Sun, 21 Feb 2016 16:40:26 GMT"
},
{
"version": "v7",
"created": "Thu, 31 Mar 2016 17:37:25 GMT"
}
] | 2016-04-01T00:00:00 | [
[
"Ji",
"Shihao",
""
],
[
"Vishwanathan",
"S. V. N.",
""
],
[
"Satish",
"Nadathur",
""
],
[
"Anderson",
"Michael J.",
""
],
[
"Dubey",
"Pradeep",
""
]
] | TITLE: BlackOut: Speeding up Recurrent Neural Network Language Models With Very
Large Vocabularies
ABSTRACT: We propose BlackOut, an approximation algorithm to efficiently train massive
recurrent neural network language models (RNNLMs) with million word
vocabularies. BlackOut is motivated by using a discriminative loss, and we
describe a new sampling strategy which significantly reduces computation while
improving stability, sample efficiency, and rate of convergence. One way to
understand BlackOut is to view it as an extension of the DropOut strategy to
the output layer, wherein we use a discriminative training loss and a weighted
sampling scheme. We also establish close connections between BlackOut,
importance sampling, and noise contrastive estimation (NCE). Our experiments,
on the recently released one billion word language modeling benchmark,
demonstrate scalability and accuracy of BlackOut; we outperform the
state-of-the-art, and achieve the lowest perplexity scores on this dataset.
Moreover, unlike other established methods which typically require GPUs or CPU
clusters, we show that a carefully implemented version of BlackOut requires
only 1-10 days on a single machine to train a RNNLM with a million word
vocabulary and billions of parameters on one billion words. Although we
describe BlackOut in the context of RNNLM training, it can be applied to any
network with a large softmax output layer.
| no_new_dataset | 0.813238 |
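Illustrative note on the record above: BlackOut's exact discriminative objective is not reproduced here. The NumPy sketch below shows the generic idea it builds on, a weighted sampled softmax in which k negatives drawn from a skewed unigram proposal replace the full |V|-way normalization; all quantities are randomly generated stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, k = 10000, 32, 20                       # vocabulary size, hidden size, negatives per step
unigram = rng.random(V)
unigram /= unigram.sum()
q = unigram ** 0.75
q /= q.sum()                                  # skewed proposal distribution over words

W = 0.01 * rng.standard_normal((V, d))        # output word embeddings
h = rng.standard_normal(d)                    # hidden state from the RNN at one position
target = 42                                   # index of the true next word

# Score only k negatives plus the target instead of all V words.
neg = rng.choice(V, size=k, replace=False, p=q)
neg = neg[neg != target]
cand = np.concatenate(([target], neg))
logits = W[cand] @ h

# Importance-weighted softmax over the candidate set only.
weights = 1.0 / q[cand]
scores = weights * np.exp(logits - logits.max())
probs = scores / scores.sum()
loss = -np.log(probs[0])
print(f"approximate loss over {len(cand)} candidates instead of {V}: {loss:.3f}")
```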
1603.08538 | {\L}ukasz Olech Piotr | Pawe{\l} B. Myszkowski and Marek E. Skowro\'nski and {\L}ukasz P.
Olech and Krzysztof O\'sliz{\l}o | Hybrid Ant Colony Optimization in solving Multi-Skill
Resource-Constrained Project Scheduling Problem | The final publication is available at Springer via
http://dx.doi.org/10.1007/s00500-014-1455-x | Soft Computing 19(12), 3599-3619 (2014) | 10.1007/s00500-014-1455-x | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper Hybrid Ant Colony Optimization (HAntCO) approach in solving
Multi--Skill Resource Constrained Project Scheduling Problem (MS--RCPSP) has
been presented. We have proposed a hybrid approach that links classical heuristic
priority rules for project scheduling with Ant Colony Optimization (ACO).
Furthermore, a novel approach for updating pheromone value has been proposed,
based on both the best and worst solutions stored by ants. The objective of
this paper is to research the usability and robustness of ACO and its hybrids
with priority rules in solving MS--RCPSP. Experiments have been performed using
artificially created dataset instances, based on real--world ones. We published
those instances, which can be used as a benchmark. The presented results show that
the ACO--based hybrid method is an efficient approach. The more directed search process
of the hybrids makes this approach more stable and provides mostly better results
than classical ACO.
| [
{
"version": "v1",
"created": "Mon, 28 Mar 2016 20:15:53 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2016 07:19:03 GMT"
}
] | 2016-04-01T00:00:00 | [
[
"Myszkowski",
"Paweł B.",
""
],
[
"Skowroński",
"Marek E.",
""
],
[
"Olech",
"Łukasz P.",
""
],
[
"Oślizło",
"Krzysztof",
""
]
] | TITLE: Hybrid Ant Colony Optimization in solving Multi-Skill
Resource-Constrained Project Scheduling Problem
ABSTRACT: In this paper Hybrid Ant Colony Optimization (HAntCO) approach in solving
Multi--Skill Resource Constrained Project Scheduling Problem (MS--RCPSP) has
been presented. We have proposed a hybrid approach that links classical heuristic
priority rules for project scheduling with Ant Colony Optimization (ACO).
Furthermore, a novel approach for updating pheromone value has been proposed,
based on both the best and worst solutions stored by ants. The objective of
this paper is to research the usability and robustness of ACO and its hybrids
with priority rules in solving MS--RCPSP. Experiments have been performed using
artificially created dataset instances, based on real--world ones. We published
those instances, which can be used as a benchmark. The presented results show that
the ACO--based hybrid method is an efficient approach. The more directed search process
of the hybrids makes this approach more stable and provides mostly better results
than classical ACO.
| new_dataset | 0.962497 |
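Illustrative note on the record above: the claimed novelty is a pheromone update driven by both the best and the worst stored solutions. The paper's exact formula is not reproduced here; the sketch shows one plausible form (evaporation, reward on best-solution components, penalty on worst-only components) with made-up dimensions.

```python
import numpy as np

def update_pheromone(tau, best_solution, worst_solution, rho=0.1, reward=1.0, penalty=0.5):
    """Evaporate all pheromone, reinforce decisions used by the best ant, and weaken
    decisions used only by the worst ant. `tau` is an (n x n) matrix over
    (task, resource/position) decisions; solutions are lists of (i, j) pairs."""
    best_set = set(best_solution)
    tau *= (1.0 - rho)                               # evaporation
    for i, j in best_solution:
        tau[i, j] += rho * reward                    # reward components of the best solution
    for i, j in worst_solution:
        if (i, j) not in best_set:
            tau[i, j] = max(tau[i, j] - rho * penalty, 1e-6)   # penalize worst-only components
    return tau

tau = np.ones((5, 5))
tau = update_pheromone(tau, best_solution=[(0, 1), (1, 2)], worst_solution=[(0, 3), (1, 2)])
print(tau.round(3))
```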
1603.09016 | Kenneth Tran | Kenneth Tran, Xiaodong He, Lei Zhang, Jian Sun, Cornelia Carapcea,
Chris Thrasher, Chris Buehler, Chris Sienkiewicz | Rich Image Captioning in the Wild | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an image caption system that addresses new challenges of
automatically describing images in the wild. The challenges include high
caption quality with respect to human judgments, out-of-domain data
handling, and low latency required in many applications. Built on top of a
state-of-the-art framework, we developed a deep vision model that detects a
broad range of visual concepts, an entity recognition model that identifies
celebrities and landmarks, and a confidence model for the caption output.
Experimental results show that our caption engine outperforms previous
state-of-the-art systems significantly on both the in-domain dataset (i.e. MS COCO)
and out-of-domain datasets.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 01:55:33 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2016 01:45:31 GMT"
}
] | 2016-04-01T00:00:00 | [
[
"Tran",
"Kenneth",
""
],
[
"He",
"Xiaodong",
""
],
[
"Zhang",
"Lei",
""
],
[
"Sun",
"Jian",
""
],
[
"Carapcea",
"Cornelia",
""
],
[
"Thrasher",
"Chris",
""
],
[
"Buehler",
"Chris",
""
],
[
"Sienkiewicz",
"Chris",
""
]
] | TITLE: Rich Image Captioning in the Wild
ABSTRACT: We present an image caption system that addresses new challenges of
automatically describing images in the wild. The challenges include high
caption quality with respect to human judgments, out-of-domain data
handling, and low latency required in many applications. Built on top of a
state-of-the-art framework, we developed a deep vision model that detects a
broad range of visual concepts, an entity recognition model that identifies
celebrities and landmarks, and a confidence model for the caption output.
Experimental results show that our caption engine outperforms previous
state-of-the-art systems significantly on both the in-domain dataset (i.e. MS COCO)
and out-of-domain datasets.
| no_new_dataset | 0.956104 |
1603.09405 | Peng Li | Peng Li and Heng Huang | Enhancing Sentence Relation Modeling with Auxiliary Character-level
Embedding | null | null | null | null | cs.CL cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural network based approaches for sentence relation modeling automatically
generate hidden matching features from raw sentence pairs. However, the quality
of the matching feature representation may not be satisfactory due to complex semantic
relations such as entailment or contradiction. To address this challenge, we
propose a new deep neural network architecture that jointly leverages
pre-trained word embeddings and auxiliary character embeddings to learn sentence
meanings. The two kinds of word sequence representations are fed as inputs into a
multi-layer bidirectional LSTM to learn an enhanced sentence representation. After
that, we construct matching features followed by another temporal CNN to learn
high-level hidden matching feature representations. Experimental results
demonstrate that our approach consistently outperforms the existing methods on
standard evaluation datasets.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 22:39:59 GMT"
}
] | 2016-04-01T00:00:00 | [
[
"Li",
"Peng",
""
],
[
"Huang",
"Heng",
""
]
] | TITLE: Enhancing Sentence Relation Modeling with Auxiliary Character-level
Embedding
ABSTRACT: Neural network based approaches for sentence relation modeling automatically
generate hidden matching features from raw sentence pairs. However, the quality
of the matching feature representation may not be satisfactory due to complex semantic
relations such as entailment or contradiction. To address this challenge, we
propose a new deep neural network architecture that jointly leverages
pre-trained word embeddings and auxiliary character embeddings to learn sentence
meanings. The two kinds of word sequence representations are fed as inputs into a
multi-layer bidirectional LSTM to learn an enhanced sentence representation. After
that, we construct matching features followed by another temporal CNN to learn
high-level hidden matching feature representations. Experimental results
demonstrate that our approach consistently outperforms the existing methods on
standard evaluation datasets.
| no_new_dataset | 0.944689 |
1603.09436 | Amit Sharma | Benjamin Shulman, Amit Sharma, Dan Cosley | Predictability of Popularity: Gaps between Prediction and Understanding | 10 pages, ICWSM 2016 | null | null | null | cs.SI stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can we predict the future popularity of a song, movie or tweet? Recent work
suggests that although it may be hard to predict an item's popularity when it
is first introduced, peeking into its early adopters and properties of their
social network makes the problem easier. We test the robustness of such claims
by using data from social networks spanning music, books, photos, and URLs. We
find a stronger result: not only do predictive models with peeking achieve high
accuracy on all datasets, they also generalize well, so much so that models
trained on any one dataset perform with comparable accuracy on items from other
datasets.
Though practically useful, our models (and those in other work) are
intellectually unsatisfying because common formulations of the problem, which
involve peeking at the first small-k adopters and predicting whether items end
up in the top half of popular items, are both too sensitive to the speed of
early adoption and too easy. Most of the predictive power comes from looking at
how quickly items reach their first few adopters, while for other features of
early adopters and their networks, even the direction of correlation with
popularity is not consistent across domains. Problem formulations that examine
items that reach k adopters in about the same amount of time reduce the
importance of temporal features, but also overall accuracy, highlighting that
we understand little about why items become popular while providing a context
in which we might build that understanding.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 01:52:34 GMT"
}
] | 2016-04-01T00:00:00 | [
[
"Shulman",
"Benjamin",
""
],
[
"Sharma",
"Amit",
""
],
[
"Cosley",
"Dan",
""
]
] | TITLE: Predictability of Popularity: Gaps between Prediction and Understanding
ABSTRACT: Can we predict the future popularity of a song, movie or tweet? Recent work
suggests that although it may be hard to predict an item's popularity when it
is first introduced, peeking into its early adopters and properties of their
social network makes the problem easier. We test the robustness of such claims
by using data from social networks spanning music, books, photos, and URLs. We
find a stronger result: not only do predictive models with peeking achieve high
accuracy on all datasets, they also generalize well, so much so that models
trained on any one dataset perform with comparable accuracy on items from other
datasets.
Though practically useful, our models (and those in other work) are
intellectually unsatisfying because common formulations of the problem, which
involve peeking at the first small-k adopters and predicting whether items end
up in the top half of popular items, are both too sensitive to the speed of
early adoption and too easy. Most of the predictive power comes from looking at
how quickly items reach their first few adopters, while for other features of
early adopters and their networks, even the direction of correlation with
popularity is not consistent across domains. Problem formulations that examine
items that reach k adopters in about the same amount of time reduce the
importance of temporal features, but also overall accuracy, highlighting that
we understand little about why items become popular while providing a context
in which we might build that understanding.
| no_new_dataset | 0.936052 |
1603.09596 | Georgios Samaras | Yannis Avrithis, Ioannis Z. Emiris, and Georgios Samaras | High-dimensional approximate nearest neighbor: k-d Generalized
Randomized Forests | null | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new data-structure, the generalized randomized kd forest, or
kgeraf, for approximate nearest neighbor searching in high dimensions. In
particular, we introduce new randomization techniques to specify a set of
independently constructed trees where search is performed simultaneously, hence
increasing accuracy. We omit backtracking, and we optimize distance
computations, thus accelerating queries. We release public domain software
geraf and we compare it to existing implementations of state-of-the-art methods
including BBD-trees, Locality Sensitive Hashing, randomized kd forests, and
product quantization. Experimental results indicate that our method would be
the method of choice in dimensions around 1,000, and probably up to 10,000, and
pointsets of cardinality up to a few hundred thousand or even one million;
this range of inputs is encountered in many critical applications today. For
instance, we handle a real dataset of $10^6$ images represented in 960
dimensions with a query time of less than $1$sec on average and 90\% responses
being true nearest neighbors.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 14:04:30 GMT"
}
] | 2016-04-01T00:00:00 | [
[
"Avrithis",
"Yannis",
""
],
[
"Emiris",
"Ioannis Z.",
""
],
[
"Samaras",
"Georgios",
""
]
] | TITLE: High-dimensional approximate nearest neighbor: k-d Generalized
Randomized Forests
ABSTRACT: We propose a new data-structure, the generalized randomized kd forest, or
kgeraf, for approximate nearest neighbor searching in high dimensions. In
particular, we introduce new randomization techniques to specify a set of
independently constructed trees where search is performed simultaneously, hence
increasing accuracy. We omit backtracking, and we optimize distance
computations, thus accelerating queries. We release public domain software
geraf and we compare it to existing implementations of state-of-the-art methods
including BBD-trees, Locality Sensitive Hashing, randomized kd forests, and
product quantization. Experimental results indicate that our method would be
the method of choice in dimensions around 1,000, and probably up to 10,000, and
pointsets of cardinality up to a few hundred thousand or even one million;
this range of inputs is encountered in many critical applications today. For
instance, we handle a real dataset of $10^6$ images represented in 960
dimensions with a query time of less than $1$sec on average and 90\% responses
being true nearest neighbors.
| no_new_dataset | 0.939692 |
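Illustrative note on the record above: the forest builds several independently randomized kd-trees and searches them simultaneously without backtracking. The sketch below implements a toy version of that idea (split dimension drawn at random among the highest-variance ones, one root-to-leaf descent per tree, candidates merged and re-ranked exactly); it is a stand-in for the released geraf code, not a reimplementation of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_tree(X, idx, leaf_size=32, top_dims=5):
    """Randomized kd-tree node: the split dimension is drawn at random from the
    few highest-variance dimensions, so each tree partitions the data differently."""
    if len(idx) <= leaf_size:
        return ("leaf", idx)
    var = X[idx].var(axis=0)
    dim = int(rng.choice(np.argsort(var)[-top_dims:]))
    med = np.median(X[idx, dim])
    left, right = idx[X[idx, dim] <= med], idx[X[idx, dim] > med]
    if len(left) == 0 or len(right) == 0:            # degenerate split -> stop
        return ("leaf", idx)
    return ("node", dim, med, build_tree(X, left, leaf_size, top_dims),
            build_tree(X, right, leaf_size, top_dims))

def descend(tree, q):
    """Follow the query down to a single leaf -- no backtracking."""
    while tree[0] == "node":
        _, dim, med, left, right = tree
        tree = left if q[dim] <= med else right
    return tree[1]

n, d, n_trees = 20000, 64, 8
X = rng.standard_normal((n, d))
q = rng.standard_normal(d)

forest = [build_tree(X, np.arange(n)) for _ in range(n_trees)]
candidates = np.unique(np.concatenate([descend(t, q) for t in forest]))
best = candidates[np.argmin(np.linalg.norm(X[candidates] - q, axis=1))]
print("approximate NN:", int(best), "from", len(candidates), "candidates")
```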
1603.09727 | Ziang Xie | Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, Andrew Y. Ng | Neural Language Correction with Character-Based Attention | 10 pages | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural language correction has the potential to help language learners
improve their writing skills. While approaches with separate classifiers for
different error types have high precision, they do not flexibly handle errors
such as redundancy or non-idiomatic phrasing. On the other hand, word and
phrase-based machine translation methods are not designed to cope with
orthographic errors, and have recently been outpaced by neural models.
Motivated by these issues, we present a neural network-based approach to
language correction. The core component of our method is an encoder-decoder
recurrent neural network with an attention mechanism. By operating at the
character level, the network avoids the problem of out-of-vocabulary words. We
illustrate the flexibility of our approach on a dataset of noisy, user-generated
text collected from an English learner forum. When combined with a language
model, our method achieves a state-of-the-art $F_{0.5}$-score on the CoNLL 2014
Shared Task. We further demonstrate that training the network on additional
data with synthesized errors can improve performance.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 19:16:54 GMT"
}
] | 2016-04-01T00:00:00 | [
[
"Xie",
"Ziang",
""
],
[
"Avati",
"Anand",
""
],
[
"Arivazhagan",
"Naveen",
""
],
[
"Jurafsky",
"Dan",
""
],
[
"Ng",
"Andrew Y.",
""
]
] | TITLE: Neural Language Correction with Character-Based Attention
ABSTRACT: Natural language correction has the potential to help language learners
improve their writing skills. While approaches with separate classifiers for
different error types have high precision, they do not flexibly handle errors
such as redundancy or non-idiomatic phrasing. On the other hand, word and
phrase-based machine translation methods are not designed to cope with
orthographic errors, and have recently been outpaced by neural models.
Motivated by these issues, we present a neural network-based approach to
language correction. The core component of our method is an encoder-decoder
recurrent neural network with an attention mechanism. By operating at the
character level, the network avoids the problem of out-of-vocabulary words. We
illustrate the flexibility of our approach on a dataset of noisy, user-generated
text collected from an English learner forum. When combined with a language
model, our method achieves a state-of-the-art $F_{0.5}$-score on the CoNLL 2014
Shared Task. We further demonstrate that training the network on additional
data with synthesized errors can improve performance.
| no_new_dataset | 0.929568 |
1603.09739 | Prithwish Chakraborty | Prithwish Chakraborty and Sathappan Muthiah and Ravi Tandon and Naren
Ramakrishnan | Hierarchical Quickest Change Detection via Surrogates | Submitted to a journal. See demo at
https://prithwi.github.io/hqcd_supplementary | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Change detection (CD) in time series data is a critical problem as it reveals
changes in the underlying generative processes driving the time series. Despite
having received significant attention, one important unexplored aspect is how
to efficiently utilize additional correlated information to improve the
detection and the understanding of changepoints. We propose hierarchical
quickest change detection (HQCD), a framework that formalizes the process of
incorporating additional correlated sources for early changepoint detection.
The core ideas behind HQCD are rooted in the theory of quickest detection and
HQCD can be regarded as its novel generalization to a hierarchical setting. The
sources are classified into targets and surrogates, and HQCD leverages this
structure to systematically assimilate observed data to update changepoint
statistics across layers. The decision on actual changepoints is provided by
minimizing the delay while still maintaining reliability bounds. In addition,
HQCD also uncovers interesting relations between changes at targets from
changes across surrogates. We validate HQCD for reliability and performance
against several state-of-the-art methods for both synthetic dataset (known
changepoints) and several real-life examples (unknown changepoints). Our
experiments indicate that we gain significant robustness without loss of
detection delay through HQCD. Our real-life experiments also showcase the
usefulness of the hierarchical setting by connecting the surrogate sources
(such as Twitter chatter) to target sources (such as Employment related
protests that ultimately lead to major uprisings).
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 19:50:45 GMT"
}
] | 2016-04-01T00:00:00 | [
[
"Chakraborty",
"Prithwish",
""
],
[
"Muthiah",
"Sathappan",
""
],
[
"Tandon",
"Ravi",
""
],
[
"Ramakrishnan",
"Naren",
""
]
] | TITLE: Hierarchical Quickest Change Detection via Surrogates
ABSTRACT: Change detection (CD) in time series data is a critical problem as it reveals
changes in the underlying generative processes driving the time series. Despite
having received significant attention, one important unexplored aspect is how
to efficiently utilize additional correlated information to improve the
detection and the understanding of changepoints. We propose hierarchical
quickest change detection (HQCD), a framework that formalizes the process of
incorporating additional correlated sources for early changepoint detection.
The core ideas behind HQCD are rooted in the theory of quickest detection and
HQCD can be regarded as its novel generalization to a hierarchical setting. The
sources are classified into targets and surrogates, and HQCD leverages this
structure to systematically assimilate observed data to update changepoint
statistics across layers. The decision on actual changepoints is provided by
minimizing the delay while still maintaining reliability bounds. In addition,
HQCD also uncovers interesting relations between changes at targets from
changes across surrogates. We validate HQCD for reliability and performance
against several state-of-the-art methods for both synthetic dataset (known
changepoints) and several real-life examples (unknown changepoints). Our
experiments indicate that we gain significant robustness without loss of
detection delay through HQCD. Our real-life experiments also showcase the
usefulness of the hierarchical setting by connecting the surrogate sources
(such as Twitter chatter) to target sources (such as Employment related
protests that ultimately lead to major uprisings).
| no_new_dataset | 0.942665 |
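Illustrative note on the record above: HQCD generalizes classical quickest detection to a hierarchy of targets and surrogates. As background, the sketch below shows the scalar CUSUM statistic such frameworks build on, assuming a known Gaussian mean shift; the hierarchical assimilation across layers is the paper's contribution and is not shown.

```python
import numpy as np

def cusum(x, mu0=0.0, mu1=1.0, sigma=1.0, threshold=5.0):
    """Page's CUSUM test for a mean shift from mu0 to mu1 in Gaussian noise.
    Returns the first index at which the statistic crosses the threshold (or None)."""
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2.0)   # per-sample log-likelihood ratio
    s = 0.0
    for t, step in enumerate(llr):
        s = max(0.0, s + step)       # reset at zero: only accumulate evidence for a change
        if s >= threshold:
            return t
    return None

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 100)])  # change at t=200
print("change declared at t =", cusum(x))
```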
1603.09035 | Ignacio Cano | Ignacio Cano, Markus Weimer, Dhruv Mahajan, Carlo Curino and Giovanni
Matteo Fumarola | Towards Geo-Distributed Machine Learning | null | null | null | null | cs.LG cs.DC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Latency to end-users and regulatory requirements push large companies to
build data centers all around the world. The resulting data is "born"
geographically distributed. On the other hand, many machine learning
applications require a global view of such data in order to achieve the best
results. These types of applications form a new class of learning problems,
which we call Geo-Distributed Machine Learning (GDML). Such applications need
to cope with: 1) scarce and expensive cross-data center bandwidth, and 2)
growing privacy concerns that are pushing for stricter data sovereignty
regulations. Current solutions to learning from geo-distributed data sources
revolve around the idea of first centralizing the data in one data center, and
then training locally. As machine learning algorithms are
communication-intensive, the cost of centralizing the data is thought to be
offset by the lower cost of intra-data center communication during training. In
this work, we show that the current centralized practice can be far from
optimal, and propose a system for doing geo-distributed training. Furthermore,
we argue that the geo-distributed approach is structurally more amenable to
dealing with regulatory constraints, as raw data never leaves the source data
center. Our empirical evaluation on three real datasets confirms the general
validity of our approach, and shows that GDML is not only possible but also
advisable in many scenarios.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 04:05:29 GMT"
}
] | 2016-03-31T00:00:00 | [
[
"Cano",
"Ignacio",
""
],
[
"Weimer",
"Markus",
""
],
[
"Mahajan",
"Dhruv",
""
],
[
"Curino",
"Carlo",
""
],
[
"Fumarola",
"Giovanni Matteo",
""
]
] | TITLE: Towards Geo-Distributed Machine Learning
ABSTRACT: Latency to end-users and regulatory requirements push large companies to
build data centers all around the world. The resulting data is "born"
geographically distributed. On the other hand, many machine learning
applications require a global view of such data in order to achieve the best
results. These types of applications form a new class of learning problems,
which we call Geo-Distributed Machine Learning (GDML). Such applications need
to cope with: 1) scarce and expensive cross-data center bandwidth, and 2)
growing privacy concerns that are pushing for stricter data sovereignty
regulations. Current solutions to learning from geo-distributed data sources
revolve around the idea of first centralizing the data in one data center, and
then training locally. As machine learning algorithms are
communication-intensive, the cost of centralizing the data is thought to be
offset by the lower cost of intra-data center communication during training. In
this work, we show that the current centralized practice can be far from
optimal, and propose a system for doing geo-distributed training. Furthermore,
we argue that the geo-distributed approach is structurally more amenable to
dealing with regulatory constraints, as raw data never leaves the source data
center. Our empirical evaluation on three real datasets confirms the general
validity of our approach, and shows that GDML is not only possible but also
advisable in many scenarios.
| no_new_dataset | 0.947575 |
1603.09065 | Xiao Chu | Xiao Chu, Wanli Ouyang, Hongsheng Li, and Xiaogang Wang | Structured Feature Learning for Pose Estimation | Accepted by CVPR2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a structured feature learning framework to reason
the correlations among body joints at the feature level in human pose
estimation. Different from existing approaches of modelling structures on score
maps or predicted labels, feature maps preserve substantially richer
descriptions of body joints. The relationships between feature maps of joints
are captured with the introduced geometrical transform kernels, which can be
easily implemented with a convolution layer. Features and their relationships
are jointly learned in an end-to-end learning system. A bi-directional tree
structured model is proposed, so that the feature channels at a body joint can
well receive information from other joints. The proposed framework improves
feature learning substantially. With very simple post processing, it reaches
the best mean PCP on the LSP and FLIC datasets. Compared with the baseline of
learning features at each joint separately with ConvNet, the mean PCP has been
improved by 18% on FLIC. The code is released to the public.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 07:52:22 GMT"
}
] | 2016-03-31T00:00:00 | [
[
"Chu",
"Xiao",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Wang",
"Xiaogang",
""
]
] | TITLE: Structured Feature Learning for Pose Estimation
ABSTRACT: In this paper, we propose a structured feature learning framework to reason
the correlations among body joints at the feature level in human pose
estimation. Different from existing approaches of modelling structures on score
maps or predicted labels, feature maps preserve substantially richer
descriptions of body joints. The relationships between feature maps of joints
are captured with the introduced geometrical transform kernels, which can be
easily implemented with a convolution layer. Features and their relationships
are jointly learned in an end-to-end learning system. A bi-directional tree
structured model is proposed, so that the feature channels at a body joint can
well receive information from other joints. The proposed framework improves
feature learning substantially. With very simple post processing, it reaches
the best mean PCP on the LSP and FLIC datasets. Compared with the baseline of
learning features at each joint separately with ConvNet, the mean PCP has been
improved by 18% on FLIC. The code is released to the public.
| no_new_dataset | 0.945045 |
1603.09164 | Swati Agarwal | Swati Agarwal, Ashish Sureka | Spider and the Flies : Focused Crawling on Tumblr to Detect Hate
Promoting Communities | 8 Pages, 7 Figures including 9 images, 2 Tables, 3 Algorithms,
Extended version of our work Agarwal et al., Micropost 2015 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumblr is one of the largest and most popular microblogging websites on the
Internet. Studies show that due to high reachability among viewers, low
publication barriers and social networking connectivity, microblogging websites
are being misused as a platform to post hateful speech and recruiting new
members by existing extremist groups. Manual identification of such posts and
communities is overwhelmingly impractical due to the large number of posts and
blogs being published every day. We propose a topic based web crawler primarily
consisting of multiple phases: training a text classifier model consisting of
examples of only hate promoting users, extracting posts of an unknown tumblr
micro-blogger, classifying hate promoting bloggers based on their activity
feeds, crawling through the external links to other bloggers and performing a
social network analysis on connected extremist bloggers. To investigate the
effectiveness of our approach, we conduct experiments on large real world
dataset. Experimental results reveals that the proposed approach is an
effective method and has an F-score of 0.80. We apply social network analysis
based techniques and identify influential and core bloggers in a community.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 13:00:15 GMT"
}
] | 2016-03-31T00:00:00 | [
[
"Agarwal",
"Swati",
""
],
[
"Sureka",
"Ashish",
""
]
] | TITLE: Spider and the Flies : Focused Crawling on Tumblr to Detect Hate
Promoting Communities
ABSTRACT: Tumblr is one of the largest and most popular microblogging websites on the
Internet. Studies show that due to high reachability among viewers, low
publication barriers and social networking connectivity, microblogging websites
are being misused as a platform to post hateful speech and recruiting new
members by existing extremist groups. Manual identification of such posts and
communities is overwhelmingly impractical due to the large number of posts and
blogs being published every day. We propose a topic based web crawler primarily
consisting of multiple phases: training a text classifier model consisting of
examples of only hate promoting users, extracting posts of an unknown tumblr
micro-blogger, classifying hate promoting bloggers based on their activity
feeds, crawling through the external links to other bloggers and performing a
social network analysis on connected extremist bloggers. To investigate the
effectiveness of our approach, we conduct experiments on a large real-world
dataset. Experimental results reveal that the proposed approach is an
effective method and has an F-score of 0.80. We apply social network analysis
based techniques and identify influential and core bloggers in a community.
| no_new_dataset | 0.944842 |
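Illustrative note on the record above: the crawler alternates between classifying a blogger's activity feed and expanding the frontier through links found on flagged blogs. The sketch below shows that loop with a TF-IDF plus logistic-regression stand-in for the classifier; fetch_posts and fetch_links are hypothetical placeholders for the Tumblr API calls, and the toy training data is made up.

```python
from collections import deque
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_classifier(texts, labels):
    """TF-IDF + logistic regression stand-in for the hate-promotion classifier."""
    vec = TfidfVectorizer()
    clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(texts), labels)
    return vec, clf

def focused_crawl(seed_blogs, fetch_posts, fetch_links, vec, clf, max_blogs=100, threshold=0.5):
    """Breadth-first crawl that only expands bloggers the classifier flags as relevant."""
    frontier, visited, flagged = deque(seed_blogs), set(), []
    while frontier and len(visited) < max_blogs:
        blog = frontier.popleft()
        if blog in visited:
            continue
        visited.add(blog)
        text = " ".join(fetch_posts(blog))                    # hypothetical API wrapper
        score = clf.predict_proba(vec.transform([text]))[0, 1]
        if score >= threshold:
            flagged.append((blog, round(score, 3)))
            frontier.extend(fetch_links(blog))                # follow links only from flagged blogs
    return flagged

# Tiny demo with dummy fetchers and a made-up two-document training set.
vec, clf = train_classifier(["hateful slur propaganda", "cute cats and recipes"], [1, 0])
posts = {"b1": ["hateful slur propaganda post"], "b2": ["cute cats and recipes"]}
links = {"b1": ["b2"], "b2": []}
print(focused_crawl(["b1"], posts.get, links.get, vec, clf))
```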