id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1511.03855
|
Wu-Jun Li
|
Wu-Jun Li, Sheng Wang, and Wang-Cheng Kang
|
Feature Learning based Deep Supervised Hashing with Pairwise Labels
|
IJCAI 2016
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent years have witnessed wide application of hashing for large-scale image
retrieval. However, most existing hashing methods are based on hand-crafted
features which might not be optimally compatible with the hashing procedure.
Recently, deep hashing methods have been proposed to perform simultaneous
feature learning and hash-code learning with deep neural networks, which have
shown better performance than traditional hashing methods with hand-crafted
features. Most of these deep hashing methods are supervised, with the supervision
given in the form of triplet labels. For another common application scenario with
pairwise labels, no existing method performs simultaneous feature learning and
hash-code learning. In this paper, we propose a novel deep hashing method, called
deep pairwise-supervised hashing (DPSH), to perform simultaneous feature learning
and hash-code learning for applications with pairwise labels. Experiments on real
datasets show that our DPSH method outperforms other methods and achieves
state-of-the-art performance in image retrieval applications.
|
[
{
"version": "v1",
"created": "Thu, 12 Nov 2015 11:11:42 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2016 09:27:38 GMT"
}
] | 2016-04-22T00:00:00 |
[
[
"Li",
"Wu-Jun",
""
],
[
"Wang",
"Sheng",
""
],
[
"Kang",
"Wang-Cheng",
""
]
] |
TITLE: Feature Learning based Deep Supervised Hashing with Pairwise Labels
ABSTRACT: Recent years have witnessed wide application of hashing for large-scale image
retrieval. However, most existing hashing methods are based on hand-crafted
features which might not be optimally compatible with the hashing procedure.
Recently, deep hashing methods have been proposed to perform simultaneous
feature learning and hash-code learning with deep neural networks, which have
shown better performance than traditional hashing methods with hand-crafted
features. Most of these deep hashing methods are supervised, with the supervision
given in the form of triplet labels. For another common application scenario with
pairwise labels, no existing method performs simultaneous feature learning and
hash-code learning. In this paper, we propose a novel deep hashing method, called
deep pairwise-supervised hashing (DPSH), to perform simultaneous feature learning
and hash-code learning for applications with pairwise labels. Experiments on real
datasets show that our DPSH method outperforms other methods and achieves
state-of-the-art performance in image retrieval applications.
|
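The DPSH abstract above turns pairwise similarity labels into a likelihood over inner products of codes. Below is a minimal NumPy sketch of such a pairwise-supervised hashing objective (a sigmoid likelihood over code inner products plus a quantization penalty); it is only an illustration in the spirit of DPSH, not the paper's exact loss or network, and the function name and toy data are invented for the example.

```python
import numpy as np

def pairwise_hashing_loss(U, S, eta=0.1):
    """Sketch of a DPSH-style pairwise loss.

    U   : (n, k) real-valued codes produced by the feature-learning network.
    S   : (n, n) 0/1 pairwise similarity labels (S[i, j] = 1 if i and j match).
    eta : weight of the quantization penalty pulling U toward binary codes.
    """
    theta = 0.5 * U @ U.T                      # pairwise inner products
    # negative log-likelihood of the pairwise labels under a sigmoid model
    nll = np.sum(np.logaddexp(0.0, theta) - S * theta)
    B = np.sign(U)                             # binary codes via sign()
    quant = eta * np.sum((B - U) ** 2)         # quantization error
    return nll + quant

# toy usage: 4 items, 8-bit codes, items 0/1 similar and 2/3 similar
rng = np.random.default_rng(0)
U = rng.normal(size=(4, 8))
S = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]])
print(pairwise_hashing_loss(U, S))
```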
1511.03908
|
Natalia Neverova
|
Natalia Neverova, Christian Wolf, Griffin Lacey, Lex Fridman, Deepak
Chandra, Brandon Barbello, Graham Taylor
|
Learning Human Identity from Motion Patterns
|
10 pages, 6 figures, 2 tables
| null | null | null |
cs.LG cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a large-scale study exploring the capability of temporal deep
neural networks to interpret natural human kinematics and introduce the first
method for active biometric authentication with mobile inertial sensors. At
Google, we have created a first-of-its-kind dataset of human movements,
passively collected by 1500 volunteers using their smartphones daily over
several months. We (1) compare several neural architectures for efficient
learning of temporal multi-modal data representations, (2) propose an optimized
shift-invariant dense convolutional mechanism (DCWRNN), and (3) incorporate the
discriminatively-trained dynamic features in a probabilistic generative
framework taking into account temporal characteristics. Our results demonstrate
that human kinematics convey important information about user identity and can
serve as a valuable component of multi-modal authentication systems.
|
[
{
"version": "v1",
"created": "Thu, 12 Nov 2015 14:48:53 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Dec 2015 15:23:06 GMT"
},
{
"version": "v3",
"created": "Wed, 9 Dec 2015 01:59:58 GMT"
},
{
"version": "v4",
"created": "Thu, 21 Apr 2016 16:04:00 GMT"
}
] | 2016-04-22T00:00:00 |
[
[
"Neverova",
"Natalia",
""
],
[
"Wolf",
"Christian",
""
],
[
"Lacey",
"Griffin",
""
],
[
"Fridman",
"Lex",
""
],
[
"Chandra",
"Deepak",
""
],
[
"Barbello",
"Brandon",
""
],
[
"Taylor",
"Graham",
""
]
] |
TITLE: Learning Human Identity from Motion Patterns
ABSTRACT: We present a large-scale study exploring the capability of temporal deep
neural networks to interpret natural human kinematics and introduce the first
method for active biometric authentication with mobile inertial sensors. At
Google, we have created a first-of-its-kind dataset of human movements,
passively collected by 1500 volunteers using their smartphones daily over
several months. We (1) compare several neural architectures for efficient
learning of temporal multi-modal data representations, (2) propose an optimized
shift-invariant dense convolutional mechanism (DCWRNN), and (3) incorporate the
discriminatively-trained dynamic features in a probabilistic generative
framework taking into account temporal characteristics. Our results demonstrate
that human kinematics convey important information about user identity and can
serve as a valuable component of multi-modal authentication systems.
|
1511.06522
|
Yan Zhang
|
Yan Zhang, Mete Ozay, Xing Liu, Takayuki Okatani
|
Integrating Deep Features for Material Recognition
|
6 pages
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for integration of features extracted using deep
representations of Convolutional Neural Networks (CNNs) each of which is
learned using a different image dataset of objects and materials for material
recognition. Given a set of representations of multiple pre-trained CNNs, we
first compute activations of features using the representations on the images
to select a set of samples which are best represented by the features. Then, we
measure the uncertainty of the features by computing the entropy of class
distributions for each sample set. Finally, we compute the contribution of each
feature to representation of classes for feature selection and integration. We
examine the proposed method on three benchmark datasets for material
recognition. Experimental results show that the proposed method achieves
state-of-the-art performance by integrating deep features. Additionally, we
introduce a new material dataset called EFMD by extending Flickr Material
Database (FMD). By the employment of the EFMD with transfer learning for
updating the learned CNN models, we achieve 84.0%+/-1.8% accuracy on the FMD
dataset which is close to human performance that is 84.9%.
|
[
{
"version": "v1",
"created": "Fri, 20 Nov 2015 08:31:00 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Nov 2015 14:21:28 GMT"
},
{
"version": "v3",
"created": "Sun, 13 Dec 2015 13:39:24 GMT"
},
{
"version": "v4",
"created": "Mon, 22 Feb 2016 14:36:36 GMT"
},
{
"version": "v5",
"created": "Tue, 5 Apr 2016 09:18:49 GMT"
},
{
"version": "v6",
"created": "Thu, 21 Apr 2016 10:19:56 GMT"
}
] | 2016-04-22T00:00:00 |
[
[
"Zhang",
"Yan",
""
],
[
"Ozay",
"Mete",
""
],
[
"Liu",
"Xing",
""
],
[
"Okatani",
"Takayuki",
""
]
] |
TITLE: Integrating Deep Features for Material Recognition
ABSTRACT: We propose a method for integration of features extracted using deep
representations of Convolutional Neural Networks (CNNs) each of which is
learned using a different image dataset of objects and materials for material
recognition. Given a set of representations of multiple pre-trained CNNs, we
first compute activations of features using the representations on the images
to select a set of samples which are best represented by the features. Then, we
measure the uncertainty of the features by computing the entropy of class
distributions for each sample set. Finally, we compute the contribution of each
feature to representation of classes for feature selection and integration. We
examine the proposed method on three benchmark datasets for material
recognition. Experimental results show that the proposed method achieves
state-of-the-art performance by integrating deep features. Additionally, we
introduce a new material dataset called EFMD by extending Flickr Material
Database (FMD). By the employment of the EFMD with transfer learning for
updating the learned CNN models, we achieve 84.0%+/-1.8% accuracy on the FMD
dataset which is close to human performance that is 84.9%.
|
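The material-recognition abstract above scores feature uncertainty by the entropy of the class distribution over the samples a feature represents best. A minimal sketch of that entropy computation, with a hypothetical label array standing in for the selected sample set:

```python
import numpy as np

def class_entropy(labels, num_classes):
    """Shannon entropy (in nats) of the empirical class distribution
    over the set of samples assigned to one feature."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log(p)))

# toy usage: a feature whose best-represented samples are mostly class 2
labels = np.array([2, 2, 2, 0, 2, 1, 2])
print(class_entropy(labels, num_classes=3))                        # low entropy -> discriminative
print(class_entropy(np.array([0, 1, 2, 0, 1, 2]), num_classes=3))  # high entropy -> uninformative
```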
1604.06002
|
Travis Gagie
|
Djamal Belazzougui, Fabio Cunial, Travis Gagie, Nicola Prezza, Mathieu
Raffinot
|
Practical combinations of repetition-aware data structures
|
arXiv admin note: text overlap with arXiv:1502.05937
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Highly-repetitive collections of strings are increasingly being amassed by
genome sequencing and genetic variation experiments, as well as by storing all
versions of human-generated files, like webpages and source code. Existing
indexes for locating all the exact occurrences of a pattern in a
highly-repetitive string take advantage of a single measure of repetition.
However, multiple, distinct measures of repetition all grow sublinearly in the
length of a highly-repetitive string. In this paper we explore the practical
advantages of combining data structures whose size depends on distinct measures
of repetition. The main ingredient of our structures is the run-length encoded
BWT (RLBWT), which takes space proportional to the number of runs in the
Burrows-Wheeler transform of a string. We describe a range of practical
variants that combine RLBWT with the set of boundaries of the Lempel-Ziv 77
factors of a string, which take space proportional to the number of factors.
Such variants use, respectively, the RLBWT of a string and the RLBWT of its
reverse, or just one RLBWT inside a bidirectional index, or just one RLBWT with
support for unidirectional extraction. We also study the practical advantages
of combining RLBWT with the compact directed acyclic word graph of a string, a
data structure that takes space proportional to the number of one-character
extensions of maximal repeats. Our approaches are easy to implement, and
provide competitive tradeoffs on significant datasets.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2016 15:30:36 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Apr 2016 14:31:16 GMT"
}
] | 2016-04-22T00:00:00 |
[
[
"Belazzougui",
"Djamal",
""
],
[
"Cunial",
"Fabio",
""
],
[
"Gagie",
"Travis",
""
],
[
"Prezza",
"Nicola",
""
],
[
"Raffinot",
"Mathieu",
""
]
] |
TITLE: Practical combinations of repetition-aware data structures
ABSTRACT: Highly-repetitive collections of strings are increasingly being amassed by
genome sequencing and genetic variation experiments, as well as by storing all
versions of human-generated files, like webpages and source code. Existing
indexes for locating all the exact occurrences of a pattern in a
highly-repetitive string take advantage of a single measure of repetition.
However, multiple, distinct measures of repetition all grow sublinearly in the
length of a highly-repetitive string. In this paper we explore the practical
advantages of combining data structures whose size depends on distinct measures
of repetition. The main ingredient of our structures is the run-length encoded
BWT (RLBWT), which takes space proportional to the number of runs in the
Burrows-Wheeler transform of a string. We describe a range of practical
variants that combine RLBWT with the set of boundaries of the Lempel-Ziv 77
factors of a string, which take space proportional to the number of factors.
Such variants use, respectively, the RLBWT of a string and the RLBWT of its
reverse, or just one RLBWT inside a bidirectional index, or just one RLBWT with
support for unidirectional extraction. We also study the practical advantages
of combining RLBWT with the compact directed acyclic word graph of a string, a
data structure that takes space proportional to the number of one-character
extensions of maximal repeats. Our approaches are easy to implement, and
provide competitive tradeoffs on significant datasets.
|
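The structures described above are built around the run-length encoded BWT, whose size is proportional to the number of runs in the Burrows-Wheeler transform. As a toy illustration (not the paper's construction, which targets huge collections), the sketch below builds the BWT of a short string naively via sorted rotations and run-length encodes it; on repetitive input the number of runs is much smaller than the string length.

```python
def bwt(s):
    """Naive Burrows-Wheeler transform via sorted rotations ('$' terminator)."""
    s = s + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def run_length_encode(s):
    """Collapse maximal runs of equal characters into (char, length) pairs."""
    runs = []
    for c in s:
        if runs and runs[-1][0] == c:
            runs[-1][1] += 1
        else:
            runs.append([c, 1])
    return [(c, n) for c, n in runs]

text = "abracadabraabracadabra"          # highly repetitive input
transformed = bwt(text)
print(transformed)
print(run_length_encode(transformed))    # far fewer runs than characters
```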
1604.06153
|
Chaobing Song
|
Chaobing Song, Shu-Tao Xia
|
Nonextensive information theoretical machine
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a new discriminative model named \emph{nonextensive
information theoretical machine (NITM)} based on nonextensive generalization of
Shannon information theory. In NITM, weight parameters are treated as random
variables. Tsallis divergence is used to regularize the distribution of weight
parameters and maximum unnormalized Tsallis entropy distribution is used to
evaluate fitting effect. On the one hand, it is showed that some well-known
margin-based loss functions such as $\ell_{0/1}$ loss, hinge loss, squared
hinge loss and exponential loss can be unified by unnormalized Tsallis entropy.
On the other hand, Gaussian prior regularization is generalized to Student-t
prior regularization with similar computational complexity. The model can be
solved efficiently by gradient-based convex optimization and its performance is
illustrated on standard datasets.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2016 01:29:56 GMT"
}
] | 2016-04-22T00:00:00 |
[
[
"Song",
"Chaobing",
""
],
[
"Xia",
"Shu-Tao",
""
]
] |
TITLE: Nonextensive information theoretical machine
ABSTRACT: In this paper, we propose a new discriminative model named \emph{nonextensive
information theoretical machine (NITM)} based on nonextensive generalization of
Shannon information theory. In NITM, weight parameters are treated as random
variables. Tsallis divergence is used to regularize the distribution of weight
parameters and maximum unnormalized Tsallis entropy distribution is used to
evaluate fitting effect. On the one hand, it is showed that some well-known
margin-based loss functions such as $\ell_{0/1}$ loss, hinge loss, squared
hinge loss and exponential loss can be unified by unnormalized Tsallis entropy.
On the other hand, Gaussian prior regularization is generalized to Student-t
prior regularization with similar computational complexity. The model can be
solved efficiently by gradient-based convex optimization and its performance is
illustrated on standard datasets.
|
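The NITM abstract above relies on Tsallis entropy and Tsallis divergence, the nonextensive generalizations of Shannon entropy and KL divergence. Below is a small sketch of the standard formulas S_q(p) = (1 - sum_i p_i^q)/(q - 1) and D_q(p||r) = (sum_i p_i^q r_i^(1-q) - 1)/(q - 1), which recover the Shannon and KL quantities as q -> 1; the paper's regularization scheme itself is not reproduced here.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i^q) / (q - 1);
    recovers Shannon entropy in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

def tsallis_divergence(p, r, q):
    """Tsallis relative entropy D_q(p || r); tends to KL divergence as q -> 1."""
    p, r = np.asarray(p, float), np.asarray(r, float)
    if np.isclose(q, 1.0):
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / r[mask])))
    return float((np.sum(p ** q * r ** (1.0 - q)) - 1.0) / (q - 1.0))

p = np.array([0.7, 0.2, 0.1])
r = np.array([1 / 3, 1 / 3, 1 / 3])
for q in (0.5, 0.999, 1.5):
    print(q, tsallis_entropy(p, q), tsallis_divergence(p, r, q))
```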
1604.06243
|
Sounak Dey
|
Sounak Dey, Anguelos Nicolaou, Josep Llados, and Umapada Pal
|
Evaluation of the Effect of Improper Segmentation on Word Spotting
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Word spotting is an important recognition task in historical document
analysis. In most cases methods are developed and evaluated assuming perfect
word segmentations. In this paper we propose an experimental framework to
quantify the effect that the quality of word segmentation has on the performance
achieved by word spotting methods under identical, unbiased conditions. The
framework consists of generating systematic distortions on segmentation and
retrieving the original queries from the distorted dataset. We apply the
framework on the George Washington and Barcelona Marriage Dataset and on
several established and state-of-the-art methods. The experiments allow for an
estimate of the end-to-end performance of word spotting methods.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2016 10:20:12 GMT"
}
] | 2016-04-22T00:00:00 |
[
[
"Dey",
"Sounak",
""
],
[
"Nicolaou",
"Anguelos",
""
],
[
"Llados",
"Josep",
""
],
[
"Pal",
"Umapada",
""
]
] |
TITLE: Evaluation of the Effect of Improper Segmentation on Word Spotting
ABSTRACT: Word spotting is an important recognition task in historical document
analysis. In most cases methods are developed and evaluated assuming perfect
word segmentations. In this paper we propose an experimental framework to
quantify the effect that the quality of word segmentation has on the performance
achieved by word spotting methods under identical, unbiased conditions. The
framework consists of generating systematic distortions on segmentation and
retrieving the original queries from the distorted dataset. We apply the
framework on the George Washington and Barcelona Marriage Dataset and on
several established and state-of-the-art methods. The experiments allow for an
estimate of the end-to-end performance of word spotting methods.
|
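The framework above generates systematic distortions of word segmentations and measures their effect on retrieval. The sketch below shows one plausible way to parameterize such distortions (shifting and growing a bounding box) and to quantify their severity with intersection-over-union; the specific distortion parameters are illustrative assumptions, not the protocol used in the paper.

```python
def distort_box(box, dx, dy, grow):
    """Shift a word box by (dx, dy) pixels and grow/shrink it by `grow`
    pixels on every side; box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return (x1 + dx - grow, y1 + dy - grow, x2 + dx + grow, y2 + dy + grow)

def iou(a, b):
    """Intersection-over-union between two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

gt = (100, 50, 260, 90)                 # ground-truth word segmentation
for grow in (0, 5, 15, 40):             # increasingly severe distortion
    d = distort_box(gt, dx=3, dy=-2, grow=grow)
    print(grow, round(iou(gt, d), 3))
```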
1604.06270
|
Shuxin Wang
|
Shuxin Wang, Xin Jiang, Hang Li, Jun Xu and Bin Wang
|
Incorporating Semantic Knowledge into Latent Matching Model in Search
|
24 pages
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The relevance between a query and a document in search can be represented as
the matching degree between the two objects. Latent space models, which are often
trained with click-through data, have been proven effective for the task.
One technical challenge with the approach is that it is hard to train a model
for tail queries and tail documents for which there are not enough clicks. In
this paper, we propose to address the challenge by learning a latent matching
model, using not only click-through data but also semantic knowledge. The
semantic knowledge can be categories of queries and documents as well as
synonyms of words, manually or automatically created. Specifically, we
incorporate semantic knowledge into the objective function by including
regularization terms. We develop two methods to solve the learning task on the
basis of coordinate descent and gradient descent respectively, which can be
employed in different settings. Experimental results on two datasets from an
app search engine demonstrate that our model can make effective use of semantic
knowledge, and thus can significantly enhance the accuracies of latent matching
models, particularly for tail queries.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2016 12:17:42 GMT"
}
] | 2016-04-22T00:00:00 |
[
[
"Wang",
"Shuxin",
""
],
[
"Jiang",
"Xin",
""
],
[
"Li",
"Hang",
""
],
[
"Xu",
"Jun",
""
],
[
"Wang",
"Bin",
""
]
] |
TITLE: Incorporating Semantic Knowledge into Latent Matching Model in Search
ABSTRACT: The relevance between a query and a document in search can be represented as
the matching degree between the two objects. Latent space models, which are often
trained with click-through data, have been proven effective for the task.
One technical challenge with the approach is that it is hard to train a model
for tail queries and tail documents for which there are not enough clicks. In
this paper, we propose to address the challenge by learning a latent matching
model, using not only click-through data but also semantic knowledge. The
semantic knowledge can be categories of queries and documents as well as
synonyms of words, manually or automatically created. Specifically, we
incorporate semantic knowledge into the objective function by including
regularization terms. We develop two methods to solve the learning task on the
basis of coordinate descent and gradient descent respectively, which can be
employed in different settings. Experimental results on two datasets from an
app search engine demonstrate that our model can make effective use of semantic
knowledge, and thus can significantly enhance the accuracies of latent matching
models, particularly for tail queries.
|
1604.06412
|
Paolo Missier
|
Paolo Missier and Jacek Cala and Eldarina Wijaya
|
The data, they are a-changin'
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The cost of deriving actionable knowledge from large datasets has been
decreasing thanks to a convergence of positive factors: low cost data
generation, inexpensively scalable storage and processing infrastructure
(cloud), software frameworks and tools for massively distributed data
processing, and parallelisable data analytics algorithms. One observation that
is often overlooked, however, is that each of these elements is not immutable,
rather they all evolve over time. This suggests that the value of such
derivative knowledge may decay over time, unless it is preserved by reacting to
those changes. Our broad research goal is to develop models, methods, and tools
for selectively reacting to changes by balancing costs and benefits, i.e.
through complete or partial re-computation of some of the underlying processes.
In this paper we present an initial model for reasoning about change and
re-computations, and show how analysis of detailed provenance of derived
knowledge informs re-computation decisions. We illustrate the main ideas
through a real-world case study in genomics, namely on the interpretation of
human variants in support of genetic diagnosis.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2016 18:40:20 GMT"
}
] | 2016-04-22T00:00:00 |
[
[
"Missier",
"Paolo",
""
],
[
"Cala",
"Jacek",
""
],
[
"Wijaya",
"Eldarina",
""
]
] |
TITLE: The data, they are a-changin'
ABSTRACT: The cost of deriving actionable knowledge from large datasets has been
decreasing thanks to a convergence of positive factors: low cost data
generation, inexpensively scalable storage and processing infrastructure
(cloud), software frameworks and tools for massively distributed data
processing, and parallelisable data analytics algorithms. One observation that
is often overlooked, however, is that each of these elements is not immutable,
rather they all evolve over time. This suggests that the value of such
derivative knowledge may decay over time, unless it is preserved by reacting to
those changes. Our broad research goal is to develop models, methods, and tools
for selectively reacting to changes by balancing costs and benefits, i.e.
through complete or partial re-computation of some of the underlying processes.
In this paper we present an initial model for reasoning about change and
re-computations, and show how analysis of detailed provenance of derived
knowledge informs re-computation decisions. We illustrate the main ideas
through a real-world case study in genomics, namely on the interpretation of
human variants in support of genetic diagnosis.
|
1409.8403
|
Zeynep Akata
|
Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, Bernt Schiele
|
Evaluation of Output Embeddings for Fine-Grained Image Classification
|
@inproceedings {ARWLS15, title = {Evaluation of Output Embeddings for
Fine-Grained Image Classification}, booktitle = {IEEE Computer Vision and
Pattern Recognition}, year = {2015}, author = {Zeynep Akata and Scott Reed
and Daniel Walter and Honglak Lee and Bernt Schiele} }
| null |
10.1109/CVPR.2015.7298911
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image classification has advanced significantly in recent years with the
availability of large-scale image sets. However, fine-grained classification
remains a major challenge due to the annotation cost of large numbers of
fine-grained categories. This project shows that compelling classification
performance can be achieved on such categories even without labeled training
data. Given image and class embeddings, we learn a compatibility function such
that matching embeddings are assigned a higher score than mismatching ones;
zero-shot classification of an image proceeds by finding the label yielding the
highest joint compatibility score. We use state-of-the-art image features and
focus on different supervised attributes and unsupervised output embeddings
either derived from hierarchies or learned from unlabeled text corpora. We
establish a substantially improved state-of-the-art on the Animals with
Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate
that purely unsupervised output embeddings (learned from Wikipedia and improved
with fine-grained text) achieve compelling results, even outperforming the
previous supervised state-of-the-art. By combining different output embeddings,
we further improve results.
|
[
{
"version": "v1",
"created": "Tue, 30 Sep 2014 06:49:53 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Aug 2015 09:00:48 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Akata",
"Zeynep",
""
],
[
"Reed",
"Scott",
""
],
[
"Walter",
"Daniel",
""
],
[
"Lee",
"Honglak",
""
],
[
"Schiele",
"Bernt",
""
]
] |
TITLE: Evaluation of Output Embeddings for Fine-Grained Image Classification
ABSTRACT: Image classification has advanced significantly in recent years with the
availability of large-scale image sets. However, fine-grained classification
remains a major challenge due to the annotation cost of large numbers of
fine-grained categories. This project shows that compelling classification
performance can be achieved on such categories even without labeled training
data. Given image and class embeddings, we learn a compatibility function such
that matching embeddings are assigned a higher score than mismatching ones;
zero-shot classification of an image proceeds by finding the label yielding the
highest joint compatibility score. We use state-of-the-art image features and
focus on different supervised attributes and unsupervised output embeddings
either derived from hierarchies or learned from unlabeled text corpora. We
establish a substantially improved state-of-the-art on the Animals with
Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate
that purely unsupervised output embeddings (learned from Wikipedia and improved
with fine-grained text) achieve compelling results, even outperforming the
previous supervised state-of-the-art. By combining different output embeddings,
we further improve results.
|
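The zero-shot method above scores an image against each class through a learned compatibility function between image features and class (output) embeddings. A minimal sketch of the inference step with a bilinear compatibility F(x, y) = theta(x)^T W phi(y), using random, untrained parameters purely to show the shapes involved:

```python
import numpy as np

def zero_shot_predict(img_feat, W, class_embeddings):
    """Bilinear compatibility F(x, y) = theta(x)^T W phi(y); predict the class
    whose output embedding yields the highest compatibility score.

    img_feat         : (d,)   image feature theta(x)
    W                : (d, e) learned compatibility matrix
    class_embeddings : (C, e) output embeddings phi(y), one row per class
    """
    scores = class_embeddings @ (W.T @ img_feat)   # (C,) compatibility scores
    return int(np.argmax(scores)), scores

# toy usage with random (untrained) parameters, just to show the shapes
rng = np.random.default_rng(1)
d, e, C = 16, 8, 5
W = rng.normal(size=(d, e))
phi = rng.normal(size=(C, e))        # e.g. attribute or word-vector class embeddings
x = rng.normal(size=d)
label, scores = zero_shot_predict(x, W, phi)
print(label, np.round(scores, 2))
```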
1508.00715
|
Zhilin Yang
|
Zhilin Yang, Jie Tang, William Cohen
|
Multi-Modal Bayesian Embeddings for Learning Social Knowledge Graphs
| null | null | null | null |
cs.CL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the extent to which online social networks can be connected to open
knowledge bases. The problem is referred to as learning social knowledge
graphs. We propose a multi-modal Bayesian embedding model, GenVector, to learn
latent topics that generate word and network embeddings. GenVector leverages
large-scale unlabeled data with embeddings and represents data of two
modalities---i.e., social network users and knowledge concepts---in a shared
latent topic space. Experiments on three datasets show that the proposed method
clearly outperforms state-of-the-art methods. We then deploy the method on
AMiner, a large-scale online academic search system with a network of
38,049,189 researchers and a knowledge base with 35,415,011 concepts. Our
method significantly decreases the error rate in an online A/B test with live
users.
|
[
{
"version": "v1",
"created": "Tue, 4 Aug 2015 09:34:22 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2016 19:57:37 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Yang",
"Zhilin",
""
],
[
"Tang",
"Jie",
""
],
[
"Cohen",
"William",
""
]
] |
TITLE: Multi-Modal Bayesian Embeddings for Learning Social Knowledge Graphs
ABSTRACT: We study the extent to which online social networks can be connected to open
knowledge bases. The problem is referred to as learning social knowledge
graphs. We propose a multi-modal Bayesian embedding model, GenVector, to learn
latent topics that generate word and network embeddings. GenVector leverages
large-scale unlabeled data with embeddings and represents data of two
modalities---i.e., social network users and knowledge concepts---in a shared
latent topic space. Experiments on three datasets show that the proposed method
clearly outperforms state-of-the-art methods. We then deploy the method on
AMiner, a large-scale online academic search system with a network of
38,049,189 researchers and a knowledge base with 35,415,011 concepts. Our
method significantly decreases the error rate in an online A/B test with live
users.
|
1602.09065
|
Amir Ghaderi
|
Srujana Gattupalli, Amir Ghaderi, Vassilis Athitsos
|
Evaluation of Deep Learning based Pose Estimation for Sign Language
Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human body pose estimation and hand detection are two important tasks for
systems that perform computer vision-based sign language recognition (SLR).
However, both tasks are challenging, especially when the input is color videos,
with no depth information. Many algorithms have been proposed in the literature
for these tasks, and some of the most successful recent algorithms are based on
deep learning. In this paper, we introduce a dataset for human pose estimation
for the SLR domain. We evaluate the performance of two deep learning based pose
estimation methods, by performing user-independent experiments on our dataset.
We also perform transfer learning, and we obtain results that demonstrate that
transfer learning can improve pose estimation accuracy. The dataset and results
from these methods can create a useful baseline for future work.
|
[
{
"version": "v1",
"created": "Mon, 29 Feb 2016 17:45:10 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Apr 2016 16:56:41 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Apr 2016 23:43:10 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Gattupalli",
"Srujana",
""
],
[
"Ghaderi",
"Amir",
""
],
[
"Athitsos",
"Vassilis",
""
]
] |
TITLE: Evaluation of Deep Learning based Pose Estimation for Sign Language
Recognition
ABSTRACT: Human body pose estimation and hand detection are two important tasks for
systems that perform computer vision-based sign language recognition (SLR).
However, both tasks are challenging, especially when the input is color videos,
with no depth information. Many algorithms have been proposed in the literature
for these tasks, and some of the most successful recent algorithms are based on
deep learning. In this paper, we introduce a dataset for human pose estimation
for the SLR domain. We evaluate the performance of two deep learning based pose
estimation methods, by performing user-independent experiments on our dataset.
We also perform transfer learning, and we obtain results that demonstrate that
transfer learning can improve pose estimation accuracy. The dataset and results
from these methods can create a useful baseline for future work.
|
1604.05747
|
Francesco Maria Elia
|
Francesco Elia
|
Syntactic and semantic classification of verb arguments using
dependency-based and rich semantic features
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Corpus Pattern Analysis (CPA) has been the topic of Semeval 2015 Task 15,
aimed at producing a system that can aid lexicographers in their efforts to
build a dictionary of meanings for English verbs using the CPA annotation
process. CPA parsing is one of the subtasks that make up this annotation process,
and it is the focus of this report. A supervised machine-learning
approach has been implemented, in which syntactic features derived from parse
trees and semantic features derived from WordNet and word embeddings are used.
It is shown that this approach performs well, even with the data sparsity
issues that characterize the dataset, and can obtain better results than other
systems by a margin of about 4% F-score.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 20:59:32 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Elia",
"Francesco",
""
]
] |
TITLE: Syntactic and semantic classification of verb arguments using
dependency-based and rich semantic features
ABSTRACT: Corpus Pattern Analysis (CPA) has been the topic of Semeval 2015 Task 15,
aimed at producing a system that can aid lexicographers in their efforts to
build a dictionary of meanings for English verbs using the CPA annotation
process. CPA parsing is one of the subtasks that make up this annotation process,
and it is the focus of this report. A supervised machine-learning
approach has been implemented, in which syntactic features derived from parse
trees and semantic features derived from WordNet and word embeddings are used.
It is shown that this approach performs well, even with the data sparsity
issues that characterize the dataset, and can obtain better results than other
systems by a margin of about 4% F-score.
|
1604.05766
|
Krishna Kumar Singh
|
Krishna Kumar Singh, Fanyi Xiao, Yong Jae Lee
|
Track and Transfer: Watching Videos to Simulate Strong Human Supervision
for Weakly-Supervised Object Detection
|
IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The status quo approach to training object detectors requires expensive
bounding box annotations. Our framework takes a markedly different direction:
we transfer tracked object boxes from weakly-labeled videos to weakly-labeled
images to automatically generate pseudo ground-truth boxes, which replace
manually annotated bounding boxes. We first mine discriminative regions in the
weakly-labeled image collection that frequently/rarely appear in the
positive/negative images. We then match those regions to videos and retrieve
the corresponding tracked object boxes. Finally, we design a hough transform
algorithm to vote for the best box to serve as the pseudo GT for each image,
and use them to train an object detector. Together, these lead to
state-of-the-art weakly-supervised detection results on the PASCAL 2007 and
2010 datasets.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 22:23:29 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Singh",
"Krishna Kumar",
""
],
[
"Xiao",
"Fanyi",
""
],
[
"Lee",
"Yong Jae",
""
]
] |
TITLE: Track and Transfer: Watching Videos to Simulate Strong Human Supervision
for Weakly-Supervised Object Detection
ABSTRACT: The status quo approach to training object detectors requires expensive
bounding box annotations. Our framework takes a markedly different direction:
we transfer tracked object boxes from weakly-labeled videos to weakly-labeled
images to automatically generate pseudo ground-truth boxes, which replace
manually annotated bounding boxes. We first mine discriminative regions in the
weakly-labeled image collection that frequently/rarely appear in the
positive/negative images. We then match those regions to videos and retrieve
the corresponding tracked object boxes. Finally, we design a hough transform
algorithm to vote for the best box to serve as the pseudo GT for each image,
and use them to train an object detector. Together, these lead to
state-of-the-art weakly-supervised detection results on the PASCAL 2007 and
2010 datasets.
|
1604.05813
|
Ruining He
|
Ruining He, Chunbin Lin, Jianguo Wang, Julian McAuley
|
Sherlock: Sparse Hierarchical Embeddings for Visually-aware One-class
Collaborative Filtering
|
7 pages, 3 figures
| null | null | null |
cs.IR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Building successful recommender systems requires uncovering the underlying
dimensions that describe the properties of items as well as users' preferences
toward them. In domains like clothing recommendation, explaining users'
preferences requires modeling the visual appearance of the items in question.
This makes recommendation especially challenging, due to both the complexity
and subtlety of people's 'visual preferences,' as well as the scale and
dimensionality of the data and features involved. Ultimately, a successful
model should be capable of capturing considerable variance across different
categories and styles, while still modeling the commonalities explained by
`global' structures in order to combat the sparsity (e.g. cold-start),
variability, and scale of real-world datasets. Here, we address these
challenges by building such structures to model the visual dimensions across
different product categories. With a novel hierarchical embedding architecture,
our method accounts for both high-level (colorfulness, darkness, etc.) and
subtle (e.g. casualness) visual characteristics simultaneously.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2016 04:36:57 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"He",
"Ruining",
""
],
[
"Lin",
"Chunbin",
""
],
[
"Wang",
"Jianguo",
""
],
[
"McAuley",
"Julian",
""
]
] |
TITLE: Sherlock: Sparse Hierarchical Embeddings for Visually-aware One-class
Collaborative Filtering
ABSTRACT: Building successful recommender systems requires uncovering the underlying
dimensions that describe the properties of items as well as users' preferences
toward them. In domains like clothing recommendation, explaining users'
preferences requires modeling the visual appearance of the items in question.
This makes recommendation especially challenging, due to both the complexity
and subtlety of people's 'visual preferences,' as well as the scale and
dimensionality of the data and features involved. Ultimately, a successful
model should be capable of capturing considerable variance across different
categories and styles, while still modeling the commonalities explained by
`global' structures in order to combat the sparsity (e.g. cold-start),
variability, and scale of real-world datasets. Here, we address these
challenges by building such structures to model the visual dimensions across
different product categories. With a novel hierarchical embedding architecture,
our method accounts for both high-level (colorfulness, darkness, etc.) and
subtle (e.g. casualness) visual characteristics simultaneously.
|
1604.05875
|
Tiep Mai
|
Tiep Mai, Bichen Shi, Patrick K. Nicholson, Deepak Ajwani, Alessandra
Sala
|
Distributed Entity Disambiguation with Per-Mention Learning
| null | null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Entity disambiguation, or mapping a phrase to its canonical representation in
a knowledge base, is a fundamental step in many natural language processing
applications. Existing techniques based on global ranking models fail to
capture the individual peculiarities of the words and hence, either struggle to
meet the accuracy requirements of many real-world applications or they are too
complex to satisfy real-time constraints of applications.
In this paper, we propose a new disambiguation system that learns specialized
features and models for disambiguating each ambiguous phrase in the English
language. To train and validate the hundreds of thousands of learning models
for this purpose, we use a Wikipedia hyperlink dataset with more than 170
million labelled annotations. We provide an extensive experimental evaluation
to show that the accuracy of our approach compares favourably with respect to
many state-of-the-art disambiguation systems. The training required for our
approach can be easily distributed over a cluster. Furthermore, updating our
system for new entities or calibrating it for special ones is a computationally
fast process, that does not affect the disambiguation of the other entities.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2016 09:53:42 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Mai",
"Tiep",
""
],
[
"Shi",
"Bichen",
""
],
[
"Nicholson",
"Patrick K.",
""
],
[
"Ajwani",
"Deepak",
""
],
[
"Sala",
"Alessandra",
""
]
] |
TITLE: Distributed Entity Disambiguation with Per-Mention Learning
ABSTRACT: Entity disambiguation, or mapping a phrase to its canonical representation in
a knowledge base, is a fundamental step in many natural language processing
applications. Existing techniques based on global ranking models fail to
capture the individual peculiarities of the words and hence, either struggle to
meet the accuracy requirements of many real-world applications or they are too
complex to satisfy real-time constraints of applications.
In this paper, we propose a new disambiguation system that learns specialized
features and models for disambiguating each ambiguous phrase in the English
language. To train and validate the hundreds of thousands of learning models
for this purpose, we use a Wikipedia hyperlink dataset with more than 170
million labelled annotations. We provide an extensive experimental evaluation
to show that the accuracy of our approach compares favourably with respect to
many state-of-the-art disambiguation systems. The training required for our
approach can be easily distributed over a cluster. Furthermore, updating our
system for new entities or calibrating it for special ones is a computationally
fast process, that does not affect the disambiguation of the other entities.
|
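The system above trains a specialized model per ambiguous phrase rather than one global ranker. The sketch below illustrates that per-mention structure with a deliberately simple stand-in (a nearest-centroid classifier over toy context features); the class name, the "jaguar" example and its features are invented for illustration and are not from the paper.

```python
import numpy as np

class PerMentionDisambiguator:
    """One tiny model per ambiguous phrase: here a nearest-centroid classifier
    over bag-of-context features, as a stand-in for the per-mention models
    described in the paper."""

    def __init__(self):
        self.models = {}                       # phrase -> (entities, centroids)

    def fit(self, phrase, X, y):
        entities = sorted(set(y))
        centroids = np.stack([X[np.array(y) == e].mean(axis=0) for e in entities])
        self.models[phrase] = (entities, centroids)

    def predict(self, phrase, x):
        entities, centroids = self.models[phrase]   # only this phrase's model is used
        return entities[int(np.argmin(np.linalg.norm(centroids - x, axis=1)))]

# toy usage: the phrase "jaguar" with 2-d context features (e.g. counts of
# "engine" and "rainforest" words in the surrounding sentence)
d = PerMentionDisambiguator()
X = np.array([[5, 0], [4, 1], [0, 6], [1, 5]], dtype=float)
y = ["Jaguar_Cars", "Jaguar_Cars", "Jaguar_(animal)", "Jaguar_(animal)"]
d.fit("jaguar", X, y)
print(d.predict("jaguar", np.array([3.0, 0.0])))   # -> Jaguar_Cars
```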
1604.05878
|
Johannes Welbl
|
Johannes Welbl, Guillaume Bouchard, Sebastian Riedel
|
A Factorization Machine Framework for Testing Bigram Embeddings in
Knowledgebase Completion
|
accepted for AKBC 2016 workshop, 6pages
| null | null | null |
cs.CL cs.AI cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Embedding-based Knowledge Base Completion models have so far mostly combined
distributed representations of individual entities or relations to compute
truth scores of missing links. Facts can however also be represented using
pairwise embeddings, i.e. embeddings for pairs of entities and relations. In
this paper we explore such bigram embeddings with a flexible Factorization
Machine model and several ablations from it. We investigate the relevance of
various bigram types on the fb15k237 dataset and find relative improvements
compared to a compositional model.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2016 09:58:56 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Welbl",
"Johannes",
""
],
[
"Bouchard",
"Guillaume",
""
],
[
"Riedel",
"Sebastian",
""
]
] |
TITLE: A Factorization Machine Framework for Testing Bigram Embeddings in
Knowledgebase Completion
ABSTRACT: Embedding-based Knowledge Base Completion models have so far mostly combined
distributed representations of individual entities or relations to compute
truth scores of missing links. Facts can however also be represented using
pairwise embeddings, i.e. embeddings for pairs of entities and relations. In
this paper we explore such bigram embeddings with a flexible Factorization
Machine model and several ablations from it. We investigate the relevance of
various bigram types on the fb15k237 dataset and find relative improvements
compared to a compositional model.
|
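The work above plugs bigram (pair) embeddings into a Factorization Machine. For reference, a second-order Factorization Machine scores a feature vector as y(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j, which can be evaluated in O(kn) time. A minimal sketch of that scoring function with random toy parameters, not the trained model from the paper:

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order Factorization Machine score
    y(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j,
    using the O(k*n) identity
    sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2].

    x : (n,) feature vector (e.g. one-hot entity-pair and relation indicators)
    w0: bias, w: (n,) linear weights, V: (n, k) factor matrix
    """
    linear = w0 + w @ x
    s = V.T @ x                       # (k,)
    s2 = (V ** 2).T @ (x ** 2)        # (k,)
    pairwise = 0.5 * np.sum(s ** 2 - s2)
    return float(linear + pairwise)

# toy usage: 6 sparse indicator features, rank-3 factors
rng = np.random.default_rng(2)
n, k = 6, 3
x = np.array([1.0, 0, 0, 1.0, 0, 1.0])
print(fm_score(x, w0=0.1, w=rng.normal(size=n), V=rng.normal(size=(n, k))))
```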
1604.06020
|
Stefano Teso
|
Stefano Teso, Andrea Passerini, Paolo Viappiani
|
Constructive Preference Elicitation by Setwise Max-margin Learning
|
7 pages. A conference version of this work is accepted by the 25th
International Joint Conference on Artificial Intelligence (IJCAI-16)
| null | null | null |
stat.ML cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose an approach to preference elicitation that is
suitable to large configuration spaces beyond the reach of existing
state-of-the-art approaches. Our setwise max-margin method can be viewed as a
generalization of max-margin learning to sets, and can produce a set of
"diverse" items that can be used to ask informative queries to the user.
Moreover, the approach can encourage sparsity in the parameter space, in order
to favor the assessment of utility towards combinations of weights that
concentrate on just few features. We present a mixed integer linear programming
formulation and show how our approach compares favourably with Bayesian
preference elicitation alternatives and easily scales to realistic datasets.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2016 16:22:01 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Teso",
"Stefano",
""
],
[
"Passerini",
"Andrea",
""
],
[
"Viappiani",
"Paolo",
""
]
] |
TITLE: Constructive Preference Elicitation by Setwise Max-margin Learning
ABSTRACT: In this paper we propose an approach to preference elicitation that is
suitable to large configuration spaces beyond the reach of existing
state-of-the-art approaches. Our setwise max-margin method can be viewed as a
generalization of max-margin learning to sets, and can produce a set of
"diverse" items that can be used to ask informative queries to the user.
Moreover, the approach can encourage sparsity in the parameter space, in order
to favor the assessment of utility towards combinations of weights that
concentrate on just few features. We present a mixed integer linear programming
formulation and show how our approach compares favourably with Bayesian
preference elicitation alternatives and easily scales to realistic datasets.
|
1604.06076
|
Daniel Khashabi Mr.
|
Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Peter Clark, Oren
Etzioni and Dan Roth
|
Question Answering via Integer Programming over Semi-Structured
Knowledge
|
Extended version of the paper accepted to IJCAI'16
| null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Answering science questions posed in natural language is an important AI
challenge. Answering such questions often requires non-trivial inference and
knowledge that goes beyond factoid retrieval. Yet, most systems for this task
are based on relatively shallow Information Retrieval (IR) and statistical
correlation techniques operating on large unstructured corpora. We propose a
structured inference system for this task, formulated as an Integer Linear
Program (ILP), that answers natural language questions using a semi-structured
knowledge base derived from text, including questions requiring multi-step
inference and a combination of multiple facts. On a dataset of real, unseen
science questions, our system significantly outperforms (+14%) the best
previous attempt at structured reasoning for this task, which used Markov Logic
Networks (MLNs). It also improves upon a previous ILP formulation by 17.7%.
When combined with unstructured inference methods, the ILP system significantly
boosts overall performance (+10%). Finally, we show our approach is
substantially more robust to a simple answer perturbation compared to
statistical correlation methods.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2016 19:48:07 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Khashabi",
"Daniel",
""
],
[
"Khot",
"Tushar",
""
],
[
"Sabharwal",
"Ashish",
""
],
[
"Clark",
"Peter",
""
],
[
"Etzioni",
"Oren",
""
],
[
"Roth",
"Dan",
""
]
] |
TITLE: Question Answering via Integer Programming over Semi-Structured
Knowledge
ABSTRACT: Answering science questions posed in natural language is an important AI
challenge. Answering such questions often requires non-trivial inference and
knowledge that goes beyond factoid retrieval. Yet, most systems for this task
are based on relatively shallow Information Retrieval (IR) and statistical
correlation techniques operating on large unstructured corpora. We propose a
structured inference system for this task, formulated as an Integer Linear
Program (ILP), that answers natural language questions using a semi-structured
knowledge base derived from text, including questions requiring multi-step
inference and a combination of multiple facts. On a dataset of real, unseen
science questions, our system significantly outperforms (+14%) the best
previous attempt at structured reasoning for this task, which used Markov Logic
Networks (MLNs). It also improves upon a previous ILP formulation by 17.7%.
When combined with unstructured inference methods, the ILP system significantly
boosts overall performance (+10%). Finally, we show our approach is
substantially more robust to a simple answer perturbation compared to
statistical correlation methods.
|
1604.06083
|
Bernardete Ribeiro Prof
|
Gonçalo Oliveira, Xavier Frazão, André Pimentel, Bernardete
Ribeiro
|
Automatic Graphic Logo Detection via Fast Region-based Convolutional
Networks
|
7 pages, 9 figures, IJCNN 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Brand recognition is a very challenging topic with many useful applications
in localization recognition, advertisement and marketing. In this paper we
present an automatic graphic logo detection system that robustly handles
unconstrained imaging conditions. Our approach is based on Fast Region-based
Convolutional Networks (FRCN) proposed by Ross Girshick, which have shown
state-of-the-art performance in several generic object recognition tasks
(PASCAL Visual Object Classes challenges). In particular, we use two CNN models
pre-trained with the ILSVRC ImageNet dataset and we look at the selective
search of windows `proposals' in the pre-processing stage and data augmentation
to enhance the logo recognition rate. The novelty lies in the use of transfer
learning to leverage powerful Convolutional Neural Network models trained with
large-scale datasets and repurpose them in the context of graphic logo
detection. Another benefit of this framework is that it allows for multiple
detections of graphic logos using regions that are likely to have an object.
Experimental results with the FlickrLogos-32 dataset show not only the
promising performance of our developed models with respect to noise and other
transformations a graphic logo can be subject to, but also its superiority over
state-of-the-art systems with hand-crafted models and features.
|
[
{
"version": "v1",
"created": "Wed, 20 Apr 2016 19:54:01 GMT"
}
] | 2016-04-21T00:00:00 |
[
[
"Oliveira",
"Gonçalo",
""
],
[
"Frazão",
"Xavier",
""
],
[
"Pimentel",
"André",
""
],
[
"Ribeiro",
"Bernardete",
""
]
] |
TITLE: Automatic Graphic Logo Detection via Fast Region-based Convolutional
Networks
ABSTRACT: Brand recognition is a very challenging topic with many useful applications
in localization recognition, advertisement and marketing. In this paper we
present an automatic graphic logo detection system that robustly handles
unconstrained imaging conditions. Our approach is based on Fast Region-based
Convolutional Networks (FRCN) proposed by Ross Girshick, which have shown
state-of-the-art performance in several generic object recognition tasks
(PASCAL Visual Object Classes challenges). In particular, we use two CNN models
pre-trained with the ILSVRC ImageNet dataset and we look at the selective
search of windows `proposals' in the pre-processing stage and data augmentation
to enhance the logo recognition rate. The novelty lies in the use of transfer
learning to leverage powerful Convolutional Neural Network models trained with
large-scale datasets and repurpose them in the context of graphic logo
detection. Another benefit of this framework is that it allows for multiple
detections of graphic logos using regions that are likely to have an object.
Experimental results with the FlickrLogos-32 dataset show not only the
promising performance of our developed models with respect to noise and other
transformations a graphic logo can be subject to, but also its superiority over
state-of-the-art systems with hand-crafted models and features.
|
1502.03044
|
Kelvin Xu
|
Kelvin Xu and Jimmy Ba and Ryan Kiros and Kyunghyun Cho and Aaron
Courville and Ruslan Salakhutdinov and Richard Zemel and Yoshua Bengio
|
Show, Attend and Tell: Neural Image Caption Generation with Visual
Attention
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inspired by recent work in machine translation and object detection, we
introduce an attention based model that automatically learns to describe the
content of images. We describe how we can train this model in a deterministic
manner using standard backpropagation techniques and stochastically by
maximizing a variational lower bound. We also show through visualization how
the model is able to automatically learn to fix its gaze on salient objects
while generating the corresponding words in the output sequence. We validate
the use of attention with state-of-the-art performance on three benchmark
datasets: Flickr8k, Flickr30k and MS COCO.
|
[
{
"version": "v1",
"created": "Tue, 10 Feb 2015 19:18:29 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Feb 2015 02:58:54 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Apr 2016 16:43:09 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Xu",
"Kelvin",
""
],
[
"Ba",
"Jimmy",
""
],
[
"Kiros",
"Ryan",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Courville",
"Aaron",
""
],
[
"Salakhutdinov",
"Ruslan",
""
],
[
"Zemel",
"Richard",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
TITLE: Show, Attend and Tell: Neural Image Caption Generation with Visual
Attention
ABSTRACT: Inspired by recent work in machine translation and object detection, we
introduce an attention based model that automatically learns to describe the
content of images. We describe how we can train this model in a deterministic
manner using standard backpropagation techniques and stochastically by
maximizing a variational lower bound. We also show through visualization how
the model is able to automatically learn to fix its gaze on salient objects
while generating the corresponding words in the output sequence. We validate
the use of attention with state-of-the-art performance on three benchmark
datasets: Flickr8k, Flickr30k and MS COCO.
|
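The captioning model above computes, at each decoding step, attention weights over spatial CNN features and a weighted context vector. The sketch below shows a generic deterministic ("soft") attention step of that form; the particular scoring MLP and all parameter shapes are illustrative assumptions rather than the paper's exact architecture.

```python
import numpy as np

def soft_attention(annotations, h, Wa, Wh, v):
    """Deterministic ('soft') attention step: score each spatial feature
    against the decoder state, softmax the scores, and return the weighted
    context vector used to predict the next word.

    annotations : (L, D) CNN features, one per image location
    h           : (H,)   current decoder hidden state
    Wa, Wh, v   : attention parameters, shapes (A, D), (A, H), (A,)
    """
    e = np.tanh(annotations @ Wa.T + Wh @ h) @ v        # (L,) unnormalized scores
    alpha = np.exp(e - e.max()); alpha /= alpha.sum()   # softmax attention weights
    context = alpha @ annotations                       # (D,) expected annotation
    return context, alpha

# toy usage: 14x14 = 196 locations with 8-d features, 16-d decoder state
rng = np.random.default_rng(3)
L, D, H, A = 196, 8, 16, 12
ctx, alpha = soft_attention(rng.normal(size=(L, D)), rng.normal(size=H),
                            rng.normal(size=(A, D)), rng.normal(size=(A, H)),
                            rng.normal(size=A))
print(ctx.shape, alpha.sum())   # (8,) and weights summing to 1
```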
1510.06335
|
Matteo Venanzi
|
Matteo Venanzi, John Guiver, Pushmeet Kohli, Nick Jennings
|
Time-Sensitive Bayesian Information Aggregation for Crowdsourcing
Systems
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crowdsourcing systems commonly face the problem of aggregating multiple
judgments provided by potentially unreliable workers. In addition, several
aspects of the design of efficient crowdsourcing processes, such as defining
worker's bonuses, fair prices and time limits of the tasks, involve knowledge
of the likely duration of the task at hand. Bringing this together, in this
work we introduce a new time-sensitive Bayesian aggregation method that
simultaneously estimates a task's duration and obtains reliable aggregations of
crowdsourced judgments. Our method, called BCCTime, builds on the key insight
that the time taken by a worker to perform a task is an important indicator of
the likely quality of the produced judgment. To capture this, BCCTime uses
latent variables to represent the uncertainty about the workers' completion
time, the tasks' duration and the workers' accuracy. To relate the quality of a
judgment to the time a worker spends on a task, our model assumes that each
task is completed within a latent time window within which all workers with a
propensity to genuinely attempt the labelling task (i.e., no spammers) are
expected to submit their judgments. In contrast, workers with a lower
propensity to valid labeling, such as spammers, bots or lazy labelers, are
assumed to perform tasks considerably faster or slower than the time required
by normal workers. Specifically, we use efficient message-passing Bayesian
inference to learn approximate posterior probabilities of (i) the confusion
matrix of each worker, (ii) the propensity to valid labeling of each worker,
(iii) the unbiased duration of each task and (iv) the true label of each task.
Using two real-world public datasets for entity linking tasks, we show that
BCCTime produces up to 11% more accurate classifications and up to 100% more
informative estimates of a task's duration compared to state-of-the-art
methods.
|
[
{
"version": "v1",
"created": "Wed, 21 Oct 2015 16:42:55 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Apr 2016 21:09:58 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Venanzi",
"Matteo",
""
],
[
"Guiver",
"John",
""
],
[
"Kohli",
"Pushmeet",
""
],
[
"Jennings",
"Nick",
""
]
] |
TITLE: Time-Sensitive Bayesian Information Aggregation for Crowdsourcing
Systems
ABSTRACT: Crowdsourcing systems commonly face the problem of aggregating multiple
judgments provided by potentially unreliable workers. In addition, several
aspects of the design of efficient crowdsourcing processes, such as defining
worker's bonuses, fair prices and time limits of the tasks, involve knowledge
of the likely duration of the task at hand. Bringing this together, in this
work we introduce a new time-sensitive Bayesian aggregation method that
simultaneously estimates a task's duration and obtains reliable aggregations of
crowdsourced judgments. Our method, called BCCTime, builds on the key insight
that the time taken by a worker to perform a task is an important indicator of
the likely quality of the produced judgment. To capture this, BCCTime uses
latent variables to represent the uncertainty about the workers' completion
time, the tasks' duration and the workers' accuracy. To relate the quality of a
judgment to the time a worker spends on a task, our model assumes that each
task is completed within a latent time window within which all workers with a
propensity to genuinely attempt the labelling task (i.e., no spammers) are
expected to submit their judgments. In contrast, workers with a lower
propensity to valid labeling, such as spammers, bots or lazy labelers, are
assumed to perform tasks considerably faster or slower than the time required
by normal workers. Specifically, we use efficient message-passing Bayesian
inference to learn approximate posterior probabilities of (i) the confusion
matrix of each worker, (ii) the propensity to valid labeling of each worker,
(iii) the unbiased duration of each task and (iv) the true label of each task.
Using two real-world public datasets for entity linking tasks, we show that
BCCTime produces up to 11% more accurate classifications and up to 100% more
informative estimates of a task's duration compared to state-of-the-art
methods.
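For illustration, the sketch below aggregates crowd judgments with per-worker
confusion matrices estimated by EM (a Dawid-Skene-style baseline). It is not
BCCTime itself: the latent completion-time window, the propensity variables and
the message-passing inference of the paper are omitted, and the toy data and
function names are hypothetical.

# Minimal Dawid-Skene-style EM aggregation of crowd labels (illustrative only;
# BCCTime additionally models task duration and worker completion times).
import numpy as np

def aggregate(judgments, n_tasks, n_workers, n_classes, n_iter=50):
    # judgments: list of (task_id, worker_id, observed_label) triples.
    # Initialise the posterior over true labels from vote counts.
    counts = np.zeros((n_tasks, n_classes))
    for t, w, l in judgments:
        counts[t, l] += 1
    post = (counts + 1e-6) / (counts + 1e-6).sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: per-worker confusion matrices and the class prior.
        conf = np.full((n_workers, n_classes, n_classes), 1e-6)
        for t, w, l in judgments:
            conf[w, :, l] += post[t]
        conf /= conf.sum(axis=2, keepdims=True)
        prior = post.mean(axis=0)

        # E-step: posterior over each task's true label given the judgments.
        log_post = np.tile(np.log(prior), (n_tasks, 1))
        for t, w, l in judgments:
            log_post[t] += np.log(conf[w, :, l])
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1), post

# Hypothetical toy data: 3 tasks, 3 workers, binary labels.
labels, posterior = aggregate(
    [(0, 0, 1), (0, 1, 1), (0, 2, 0),
     (1, 0, 0), (1, 1, 0), (1, 2, 0),
     (2, 0, 1), (2, 1, 1), (2, 2, 0)],
    n_tasks=3, n_workers=3, n_classes=2)
print(labels)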
|
1511.01512
|
Iacopo Mastromatteo
|
Emmanuel Bacry, St\'ephane Ga\"iffas, Iacopo Mastromatteo and
Jean-Fran\c{c}ois Muzy
|
Mean-field inference of Hawkes point processes
|
29 pages, 8 figures
| null |
10.1088/1751-8113/49/17/174006
| null |
cs.LG cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a fast and efficient estimation method that is able to accurately
recover the parameters of a d-dimensional Hawkes point-process from a set of
observations. We exploit a mean-field approximation that is valid when the
fluctuations of the stochastic intensity are small. We show that this is
notably the case in situations when interactions are sufficiently weak, when
the dimension of the system is high or when the fluctuations are self-averaging
due to the large number of past events they involve. In such a regime the
estimation of a Hawkes process can be mapped on a least-squares problem for
which we provide an analytic solution. Though this estimator is biased, we show
that its precision can be comparable to that of the Maximum Likelihood
Estimator while its computation speed is shown to be improved considerably. We
give a theoretical control on the accuracy of our new approach and illustrate
its efficiency using synthetic datasets, in order to assess the statistical
estimation error of the parameters.
|
[
{
"version": "v1",
"created": "Wed, 4 Nov 2015 21:09:33 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Bacry",
"Emmanuel",
""
],
[
"Gaïffas",
"Stéphane",
""
],
[
"Mastromatteo",
"Iacopo",
""
],
[
"Muzy",
"Jean-François",
""
]
] |
TITLE: Mean-field inference of Hawkes point processes
ABSTRACT: We propose a fast and efficient estimation method that is able to accurately
recover the parameters of a d-dimensional Hawkes point-process from a set of
observations. We exploit a mean-field approximation that is valid when the
fluctuations of the stochastic intensity are small. We show that this is
notably the case in situations when interactions are sufficiently weak, when
the dimension of the system is high or when the fluctuations are self-averaging
due to the large number of past events they involve. In such a regime the
estimation of a Hawkes process can be mapped on a least-squares problem for
which we provide an analytic solution. Though this estimator is biased, we show
that its precision can be comparable to that of the Maximum Likelihood
Estimator while its computation speed is shown to be improved considerably. We
give a theoretical control on the accuracy of our new approach and illustrate
its efficiency using synthetic datasets, in order to assess the statistical
estimation error of the parameters.
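As a rough illustration of casting Hawkes estimation as least squares, the
sketch below fits a one-dimensional process by regressing binned event rates on
an exponentially smoothed count of past events. The exponential kernel, the
discretization and the synthetic data are assumptions; the paper's mean-field
estimator handles the d-dimensional case analytically.

# Illustrative discretized least-squares fit of a 1-D Hawkes process with an
# assumed exponential kernel g(t) = alpha * beta * exp(-beta t).
import numpy as np

def fit_hawkes_ls(event_times, T, dt=0.1, beta=1.0):
    bins = np.arange(0.0, T + dt, dt)
    counts, _ = np.histogram(event_times, bins=bins)      # events per bin
    # Exponentially smoothed count of past events approximates the
    # self-excitation term of the intensity.
    excite = np.zeros(len(counts))
    for k in range(1, len(counts)):
        excite[k] = np.exp(-beta * dt) * (excite[k - 1] + counts[k - 1])
    # Regress the empirical rate counts/dt on [1, excite] to recover the
    # baseline mu and the excitation weight alpha*beta.
    X = np.column_stack([np.ones_like(excite), excite])
    y = counts / dt
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    mu, alpha_beta = coef
    return mu, alpha_beta / beta   # (baseline, alpha)

# Hypothetical usage with synthetic event times on [0, 100).
rng = np.random.default_rng(0)
mu_hat, alpha_hat = fit_hawkes_ls(np.sort(rng.uniform(0, 100, size=300)), T=100.0)
print(mu_hat, alpha_hat)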
|
1511.05099
|
Peng Zhang
|
Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, Devi Parikh
|
Yin and Yang: Balancing and Answering Binary Visual Questions
| null | null | null | null |
cs.CL cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The complex compositional structure of language makes problems at the
intersection of vision and language challenging. But language also provides a
strong prior that can result in good superficial performance, without the
underlying models truly understanding the visual content. This can hinder
progress in pushing the state of the art in the computer vision aspects of multi-modal
AI. In this paper, we address binary Visual Question Answering (VQA) on
abstract scenes. We formulate this problem as visual verification of concepts
inquired in the questions. Specifically, we convert the question to a tuple
that concisely summarizes the visual concept to be detected in the image. If
the concept can be found in the image, the answer to the question is "yes", and
otherwise "no". Abstract scenes play two roles: (1) They allow us to focus on
the high-level semantics of the VQA task as opposed to the low-level
recognition problems, and perhaps more importantly, (2) They provide us the
modality to balance the dataset such that language priors are controlled, and
the role of vision is essential. In particular, we collect fine-grained pairs
of scenes for every question, such that the answer to the question is "yes" for
one scene, and "no" for the other for the exact same question. Indeed, language
priors alone do not perform better than chance on our balanced dataset.
Moreover, our proposed approach matches the performance of a state-of-the-art
VQA approach on the unbalanced dataset, and outperforms it on the balanced
dataset.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 19:38:14 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Nov 2015 20:54:47 GMT"
},
{
"version": "v3",
"created": "Sun, 22 Nov 2015 20:54:35 GMT"
},
{
"version": "v4",
"created": "Sun, 31 Jan 2016 20:58:39 GMT"
},
{
"version": "v5",
"created": "Tue, 19 Apr 2016 19:30:00 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Zhang",
"Peng",
""
],
[
"Goyal",
"Yash",
""
],
[
"Summers-Stay",
"Douglas",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Parikh",
"Devi",
""
]
] |
TITLE: Yin and Yang: Balancing and Answering Binary Visual Questions
ABSTRACT: The complex compositional structure of language makes problems at the
intersection of vision and language challenging. But language also provides a
strong prior that can result in good superficial performance, without the
underlying models truly understanding the visual content. This can hinder
progress in pushing the state of the art in the computer vision aspects of multi-modal
AI. In this paper, we address binary Visual Question Answering (VQA) on
abstract scenes. We formulate this problem as visual verification of concepts
inquired in the questions. Specifically, we convert the question to a tuple
that concisely summarizes the visual concept to be detected in the image. If
the concept can be found in the image, the answer to the question is "yes", and
otherwise "no". Abstract scenes play two roles: (1) They allow us to focus on
the high-level semantics of the VQA task as opposed to the low-level
recognition problems, and perhaps more importantly, (2) They provide us the
modality to balance the dataset such that language priors are controlled, and
the role of vision is essential. In particular, we collect fine-grained pairs
of scenes for every question, such that the answer to the question is "yes" for
one scene, and "no" for the other for the exact same question. Indeed, language
priors alone do not perform better than chance on our balanced dataset.
Moreover, our proposed approach matches the performance of a state-of-the-art
VQA approach on the unbalanced dataset, and outperforms it on the balanced
dataset.
|
1511.05175
|
Mohamed Elhoseiny Mohamed Elhoseiny
|
Mohamed Elhoseiny, Tarek El-Gaaly, Amr Bakry, Ahmed Elgammal
|
Convolutional Models for Joint Object Categorization and Pose Estimation
|
only for workshop presentation at ICLR
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the task of Object Recognition, there exists a dichotomy between the
categorization of objects and estimating object pose, where the former
necessitates a view-invariant representation, while the latter requires a
representation capable of capturing pose information over different categories
of objects. With the rise of deep architectures, the prime focus has been on
object category recognition. Deep learning methods have achieved wide success
in this task. In contrast, object pose regression using these approaches has
received relatively little attention. In this paper we show how deep
architectures, specifically Convolutional Neural Networks (CNN), can be adapted
to the task of simultaneous categorization and pose estimation of objects. We
investigate and analyze the layers of various CNN models and extensively
compare between them with the goal of discovering how the layers of distributed
representations of CNNs represent object pose information and how this
contrasts with object category representations. We extensively experiment on
two recent large and challenging multi-view datasets. Our models achieve better
than state-of-the-art performance on both datasets.
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 21:08:22 GMT"
},
{
"version": "v2",
"created": "Thu, 19 Nov 2015 23:17:11 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jan 2016 23:40:23 GMT"
},
{
"version": "v4",
"created": "Wed, 20 Jan 2016 22:41:19 GMT"
},
{
"version": "v5",
"created": "Mon, 22 Feb 2016 23:54:23 GMT"
},
{
"version": "v6",
"created": "Tue, 19 Apr 2016 17:56:34 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Elhoseiny",
"Mohamed",
""
],
[
"El-Gaaly",
"Tarek",
""
],
[
"Bakry",
"Amr",
""
],
[
"Elgammal",
"Ahmed",
""
]
] |
TITLE: Convolutional Models for Joint Object Categorization and Pose Estimation
ABSTRACT: In the task of Object Recognition, there exists a dichotomy between the
categorization of objects and estimating object pose, where the former
necessitates a view-invariant representation, while the latter requires a
representation capable of capturing pose information over different categories
of objects. With the rise of deep architectures, the prime focus has been on
object category recognition. Deep learning methods have achieved wide success
in this task. In contrast, object pose regression using these approaches has
received relatively little attention. In this paper we show how deep
architectures, specifically Convolutional Neural Networks (CNN), can be adapted
to the task of simultaneous categorization and pose estimation of objects. We
investigate and analyze the layers of various CNN models and extensively
compare between them with the goal of discovering how the layers of distributed
representations of CNNs represent object pose information and how this
contrasts with object category representations. We extensively experiment on
two recent large and challenging multi-view datasets. Our models achieve better
than state-of-the-art performance on both datasets.
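A minimal PyTorch sketch of the general idea, a shared convolutional trunk
feeding a category-classification head and a pose-regression head, is given
below. The architecture, pose parametrization and loss weighting are
illustrative assumptions, not the CNN models analysed in the paper.

# Minimal PyTorch sketch of a shared-trunk CNN with a category head and a
# pose head (assumed architecture, not the models analysed in the paper).
import torch
import torch.nn as nn

class CatPoseNet(nn.Module):
    def __init__(self, n_categories=10):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.category_head = nn.Linear(64 * 4 * 4, n_categories)
        # Predict pose as (cos, sin) of the azimuth to avoid wrap-around.
        self.pose_head = nn.Linear(64 * 4 * 4, 2)

    def forward(self, x):
        h = self.trunk(x)
        return self.category_head(h), self.pose_head(h)

model = CatPoseNet()
images = torch.randn(8, 3, 64, 64)                   # hypothetical batch
cat_logits, pose = model(images)
cat_loss = nn.functional.cross_entropy(cat_logits, torch.randint(0, 10, (8,)))
pose_loss = nn.functional.mse_loss(pose, torch.randn(8, 2))
loss = cat_loss + 0.5 * pose_loss                    # assumed loss weighting
loss.backward()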
|
1511.06931
|
Jason Weston
|
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra,
Alexander Miller, Arthur Szlam, Jason Weston
|
Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A long-term goal of machine learning is to build intelligent conversational
agents. One recent popular approach is to train end-to-end models on a large
amount of real dialog transcripts between humans (Sordoni et al., 2015; Vinyals
& Le, 2015; Shang et al., 2015). However, this approach leaves many questions
unanswered as an understanding of the precise successes and shortcomings of
each model is hard to assess. A contrasting recent proposal is the bAbI tasks
(Weston et al., 2015b) which are synthetic data that measure the ability of
learning machines at various reasoning tasks over toy language. Unfortunately,
those tests are very small and hence may encourage methods that do not scale.
In this work, we propose a suite of new tasks of a much larger scale that
attempt to bridge the gap between the two regimes. Choosing the domain of
movies, we provide tasks that test the ability of models to answer factual
questions (utilizing OMDB), provide personalization (utilizing MovieLens),
carry short conversations about the two, and finally to perform on natural
dialogs from Reddit. We provide a dataset covering 75k movie entities and with
3.5M training examples. We present results of various models on these tasks,
and evaluate their performance.
|
[
{
"version": "v1",
"created": "Sat, 21 Nov 2015 22:26:49 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Dec 2015 09:31:59 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jan 2016 04:51:54 GMT"
},
{
"version": "v4",
"created": "Fri, 1 Apr 2016 06:22:44 GMT"
},
{
"version": "v5",
"created": "Fri, 15 Apr 2016 20:22:13 GMT"
},
{
"version": "v6",
"created": "Tue, 19 Apr 2016 15:30:29 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Dodge",
"Jesse",
""
],
[
"Gane",
"Andreea",
""
],
[
"Zhang",
"Xiang",
""
],
[
"Bordes",
"Antoine",
""
],
[
"Chopra",
"Sumit",
""
],
[
"Miller",
"Alexander",
""
],
[
"Szlam",
"Arthur",
""
],
[
"Weston",
"Jason",
""
]
] |
TITLE: Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems
ABSTRACT: A long-term goal of machine learning is to build intelligent conversational
agents. One recent popular approach is to train end-to-end models on a large
amount of real dialog transcripts between humans (Sordoni et al., 2015; Vinyals
& Le, 2015; Shang et al., 2015). However, this approach leaves many questions
unanswered as an understanding of the precise successes and shortcomings of
each model is hard to assess. A contrasting recent proposal is the bAbI tasks
(Weston et al., 2015b) which are synthetic data that measure the ability of
learning machines at various reasoning tasks over toy language. Unfortunately,
those tests are very small and hence may encourage methods that do not scale.
In this work, we propose a suite of new tasks of a much larger scale that
attempt to bridge the gap between the two regimes. Choosing the domain of
movies, we provide tasks that test the ability of models to answer factual
questions (utilizing OMDB), provide personalization (utilizing MovieLens),
carry short conversations about the two, and finally to perform on natural
dialogs from Reddit. We provide a dataset covering 75k movie entities and with
3.5M training examples. We present results of various models on these tasks,
and evaluate their performance.
|
1512.07506
|
Rigas Kouskouridas
|
Andreas Doumanoglou, Rigas Kouskouridas, Sotiris Malassiotis, Tae-Kyun
Kim
|
Recovering 6D Object Pose and Predicting Next-Best-View in the Crowd
|
CVPR 2016 accepted paper, project page:
http://www.iis.ee.ic.ac.uk/rkouskou/6D_NBV.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection and 6D pose estimation in the crowd (scenes with multiple
object instances, severe foreground occlusions and background distractors) has
become an important problem in many rapidly evolving technological areas such
as robotics and augmented reality. Single shot-based 6D pose estimators with
manually designed features are still unable to tackle the above challenges,
motivating the research towards unsupervised feature learning and
next-best-view estimation. In this work, we present a complete framework for
both single shot-based 6D object pose estimation and next-best-view prediction
based on Hough Forests, the state of the art object pose estimator that
performs classification and regression jointly. Rather than using manually
designed features we a) propose an unsupervised feature learnt from
depth-invariant patches using a Sparse Autoencoder and b) offer an extensive
evaluation of various state of the art features. Furthermore, taking advantage
of the clustering performed in the leaf nodes of Hough Forests, we learn to
estimate the reduction of uncertainty in other views, formulating the problem
of selecting the next-best-view. To further improve pose estimation, we propose
an improved joint registration and hypotheses verification module as a final
refinement step to reject false detections. We provide two additional
challenging datasets inspired from realistic scenarios to extensively evaluate
the state of the art and our framework. One is related to domestic environments
and the other depicts a bin-picking scenario mostly found in industrial
settings. We show that our framework significantly outperforms state of the art
both on public and on our datasets.
|
[
{
"version": "v1",
"created": "Wed, 23 Dec 2015 15:06:05 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Apr 2016 17:31:56 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Doumanoglou",
"Andreas",
""
],
[
"Kouskouridas",
"Rigas",
""
],
[
"Malassiotis",
"Sotiris",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] |
TITLE: Recovering 6D Object Pose and Predicting Next-Best-View in the Crowd
ABSTRACT: Object detection and 6D pose estimation in the crowd (scenes with multiple
object instances, severe foreground occlusions and background distractors) has
become an important problem in many rapidly evolving technological areas such
as robotics and augmented reality. Single shot-based 6D pose estimators with
manually designed features are still unable to tackle the above challenges,
motivating the research towards unsupervised feature learning and
next-best-view estimation. In this work, we present a complete framework for
both single shot-based 6D object pose estimation and next-best-view prediction
based on Hough Forests, the state of the art object pose estimator that
performs classification and regression jointly. Rather than using manually
designed features we a) propose an unsupervised feature learnt from
depth-invariant patches using a Sparse Autoencoder and b) offer an extensive
evaluation of various state of the art features. Furthermore, taking advantage
of the clustering performed in the leaf nodes of Hough Forests, we learn to
estimate the reduction of uncertainty in other views, formulating the problem
of selecting the next-best-view. To further improve pose estimation, we propose
an improved joint registration and hypotheses verification module as a final
refinement step to reject false detections. We provide two additional
challenging datasets inspired from realistic scenarios to extensively evaluate
the state of the art and our framework. One is related to domestic environments
and the other depicts a bin-picking scenario mostly found in industrial
settings. We show that our framework significantly outperforms state of the art
both on public and on our datasets.
|
1602.01890
|
Archith Bency
|
Archith J. Bency, S. Karthikeyan, Carter De Leo, Santhoshkumar
Sunderrajan and B. S. Manjunath
|
Search Tracker: Human-derived object tracking in-the-wild through
large-scale search and retrieval
|
Under review with the IEEE Transactions on Circuits and Systems for
Video Technology
| null |
10.1109/TCSVT.2016.2555718
| null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans use context and scene knowledge to easily localize moving objects in
conditions of complex illumination changes, scene clutter and occlusions. In
this paper, we present a method to leverage human knowledge in the form of
annotated video libraries in a novel search and retrieval based setting to
track objects in unseen video sequences. For every video sequence, a document
that represents motion information is generated. Documents of the unseen video
are queried against the library at multiple scales to find videos with similar
motion characteristics. This provides us with coarse localization of objects in
the unseen video. We further adapt these retrieved object locations to the new
video using an efficient warping scheme. The proposed method is validated on
in-the-wild video surveillance datasets where we outperform state-of-the-art
appearance-based trackers. We also introduce a new challenging dataset with
complex object appearance changes.
|
[
{
"version": "v1",
"created": "Fri, 5 Feb 2016 00:01:13 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Bency",
"Archith J.",
""
],
[
"Karthikeyan",
"S.",
""
],
[
"De Leo",
"Carter",
""
],
[
"Sunderrajan",
"Santhoshkumar",
""
],
[
"Manjunath",
"B. S.",
""
]
] |
TITLE: Search Tracker: Human-derived object tracking in-the-wild through
large-scale search and retrieval
ABSTRACT: Humans use context and scene knowledge to easily localize moving objects in
conditions of complex illumination changes, scene clutter and occlusions. In
this paper, we present a method to leverage human knowledge in the form of
annotated video libraries in a novel search and retrieval based setting to
track objects in unseen video sequences. For every video sequence, a document
that represents motion information is generated. Documents of the unseen video
are queried against the library at multiple scales to find videos with similar
motion characteristics. This provides us with coarse localization of objects in
the unseen video. We further adapt these retrieved object locations to the new
video using an efficient warping scheme. The proposed method is validated on
in-the-wild video surveillance datasets where we outperform state-of-the-art
appearance-based trackers. We also introduce a new challenging dataset with
complex object appearance changes.
|
1604.05377
|
Artit Wangperawong
|
Artit Wangperawong, Cyrille Brun, Olav Laudy, Rujikorn Pavasuthipaisit
|
Churn analysis using deep convolutional neural networks and autoencoders
| null | null | null | null |
stat.ML cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Customer temporal behavioral data was represented as images in order to
perform churn prediction by leveraging deep learning architectures prominent in
image classification. Supervised learning was performed on labeled data of over
6 million customers using deep convolutional neural networks, which achieved an
AUC of 0.743 on the test dataset using no more than 12 temporal features for
each customer. Unsupervised learning was conducted using autoencoders to better
understand the reasons for customer churn. Images that maximally activate the
hidden units of an autoencoder trained with churned customers reveal ample
opportunities for action to be taken to prevent churn among strong-data,
no-voice users.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2016 23:18:23 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Wangperawong",
"Artit",
""
],
[
"Brun",
"Cyrille",
""
],
[
"Laudy",
"Olav",
""
],
[
"Pavasuthipaisit",
"Rujikorn",
""
]
] |
TITLE: Churn analysis using deep convolutional neural networks and autoencoders
ABSTRACT: Customer temporal behavioral data was represented as images in order to
perform churn prediction by leveraging deep learning architectures prominent in
image classification. Supervised learning was performed on labeled data of over
6 million customers using deep convolutional neural networks, which achieved an
AUC of 0.743 on the test dataset using no more than 12 temporal features for
each customer. Unsupervised learning was conducted using autoencoders to better
understand the reasons for customer churn. Images that maximally activate the
hidden units of an autoencoder trained with churned customers reveal ample
opportunities for action to be taken to prevent churn among strong-data,
no-voice users.
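The sketch below illustrates the representation described in the abstract: each
customer's temporal behaviour is arranged as a small single-channel image (12
features from the abstract, an assumed 30-day window) and classified with a
compact CNN. The window length, architecture and data are placeholders, not the
production model.

# Minimal sketch: customer behaviour as a 1x12x30 "image" fed to a small CNN
# for binary churn prediction (window length and architecture are assumed).
import torch
import torch.nn as nn

churn_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                  # -> 16 x 6 x 15
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),            # -> 32
    nn.Linear(32, 1),                                 # churn logit
)

batch = torch.randn(64, 1, 12, 30)                    # hypothetical mini-batch
labels = torch.randint(0, 2, (64, 1)).float()
loss = nn.functional.binary_cross_entropy_with_logits(churn_net(batch), labels)
loss.backward()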
|
1604.05413
|
Hariharan Ramasangu Dr
|
Hariharan Ramasangu, Neelam Sinha
|
Cognitive state classification using transformed fMRI data
|
5 pages, Conference-SPCOM14
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One approach to understanding human brain functioning is to analyze the
changes in the brain while subjects perform cognitive tasks. Towards this,
Functional Magnetic Resonance (fMR) images of subjects performing well-defined
tasks are widely utilized for task-specific analyses. In this work, we propose
a procedure to enable classification between two chosen cognitive tasks, using
their respective fMR image sequences. The time series of expert-marked,
anatomically-mapped relevant voxels are processed and fed as input to the
classical Naive Bayesian and SVM classifiers. The processing involves the use
of a random sieve function and of phase information from the data transformed
using the Fourier and Hilbert transforms. This processing results in improved
classification compared with using the voxel intensities directly, as
illustrated. The novelty of the proposed method lies in utilizing the phase
information in the transformed domain for classifying between the cognitive
tasks, along with a random sieve function chosen with a particular probability
distribution. The proposed classification procedure is applied to a publicly
available dataset, the StarPlus data, with 6 subjects performing the two
distinct cognitive tasks of watching either a picture or a sentence. The
classification accuracy stands at an average of 65.6% (using the Naive Bayes
classifier) and 76.4% (using the SVM classifier) for raw data. The
corresponding classification accuracy stands at 96.8% and 97.5% for Fourier
transformed data. For Hilbert transformed data, it is 93.7% and 99% for 6
subjects on 2 cognitive tasks.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 02:52:31 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Ramasangu",
"Hariharan",
""
],
[
"Sinha",
"Neelam",
""
]
] |
TITLE: Cognitive state classification using transformed fMRI data
ABSTRACT: One approach to understanding human brain functioning is to analyze the
changes in the brain while subjects perform cognitive tasks. Towards this,
Functional Magnetic Resonance (fMR) images of subjects performing well-defined
tasks are widely utilized for task-specific analyses. In this work, we propose
a procedure to enable classification between two chosen cognitive tasks, using
their respective fMR image sequences. The time series of expert-marked,
anatomically-mapped relevant voxels are processed and fed as input to the
classical Naive Bayesian and SVM classifiers. The processing involves the use
of a random sieve function and of phase information from the data transformed
using the Fourier and Hilbert transforms. This processing results in improved
classification compared with using the voxel intensities directly, as
illustrated. The novelty of the proposed method lies in utilizing the phase
information in the transformed domain for classifying between the cognitive
tasks, along with a random sieve function chosen with a particular probability
distribution. The proposed classification procedure is applied to a publicly
available dataset, the StarPlus data, with 6 subjects performing the two
distinct cognitive tasks of watching either a picture or a sentence. The
classification accuracy stands at an average of 65.6% (using the Naive Bayes
classifier) and 76.4% (using the SVM classifier) for raw data. The
corresponding classification accuracy stands at 96.8% and 97.5% for Fourier
transformed data. For Hilbert transformed data, it is 93.7% and 99% for 6
subjects on 2 cognitive tasks.
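A minimal sketch of the feature pipeline is shown below: phase is extracted
from voxel time series with the Fourier or Hilbert transform, a random mask
stands in for the paper's random sieve function, and the features are
classified with Naive Bayes and an SVM under cross-validation. Array shapes and
the synthetic data are assumptions.

# Minimal sketch: phase features from voxel time series (Fourier / Hilbert)
# classified with Naive Bayes and an SVM; shapes and the random mask (standing
# in for the paper's random sieve function) are illustrative assumptions.
import numpy as np
from scipy.signal import hilbert
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_time = 80, 50, 16
X_raw = rng.standard_normal((n_trials, n_voxels, n_time))  # hypothetical data
y = rng.integers(0, 2, n_trials)                            # picture vs sentence

def phase_features(trials, mode="hilbert", keep=0.8):
    if mode == "fourier":
        phase = np.angle(np.fft.rfft(trials, axis=-1))
    else:
        phase = np.angle(hilbert(trials, axis=-1))
    feats = phase.reshape(len(trials), -1)
    mask = rng.random(feats.shape[1]) < keep                # crude random sieve
    return feats[:, mask]

X = phase_features(X_raw)
for clf in (GaussianNB(), SVC(kernel="rbf")):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())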
|
1604.05429
|
Nadia Kanwal
|
Nadia Kanwal and Erkan Bostanci
|
Comparative Study of Instance Based Learning and Back Propagation for
Classification Problems
|
15 pages
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper presents a comparative study of the performance of Back Propagation
and the Instance Based Learning algorithm for classification tasks. The study
is carried out through a series of experiments with all possible combinations
of parameter values for the algorithms under evaluation. The algorithms'
classification accuracy is compared over a range of datasets, and measures
such as Cross Validation, Kappa Statistics, Root Mean Squared Value and the
True Positive vs False Positive rate have been used to evaluate their
performance. Along with the performance comparison, techniques for handling
missing values have also been compared, including Mean or Mode replacement and
Multiple Imputation. The results showed that parameter adjustment plays a
vital role in improving an algorithm's accuracy and, therefore, Back
Propagation has shown better results compared to Instance Based Learning.
Furthermore, the problem of missing values was better handled by the Multiple
Imputation method, which is, however, not suitable for small amounts of data.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 04:31:55 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Kanwal",
"Nadia",
""
],
[
"Bostanci",
"Erkan",
""
]
] |
TITLE: Comparative Study of Instance Based Learning and Back Propagation for
Classification Problems
ABSTRACT: The paper presents a comparative study of the performance of Back Propagation
and the Instance Based Learning algorithm for classification tasks. The study
is carried out through a series of experiments with all possible combinations
of parameter values for the algorithms under evaluation. The algorithms'
classification accuracy is compared over a range of datasets, and measures
such as Cross Validation, Kappa Statistics, Root Mean Squared Value and the
True Positive vs False Positive rate have been used to evaluate their
performance. Along with the performance comparison, techniques for handling
missing values have also been compared, including Mean or Mode replacement and
Multiple Imputation. The results showed that parameter adjustment plays a
vital role in improving an algorithm's accuracy and, therefore, Back
Propagation has shown better results compared to Instance Based Learning.
Furthermore, the problem of missing values was better handled by the Multiple
Imputation method, which is, however, not suitable for small amounts of data.
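A minimal scikit-learn sketch of such a comparison is given below: an
instance-based learner (k-NN) and a back-propagation network (MLP) are
evaluated with cross-validation after mean imputation and scaling. The dataset
and parameter values are placeholders for the paper's experimental grid.

# Minimal sketch: instance-based learning (k-NN) vs back propagation (MLP)
# compared with cross-validation after mean imputation of missing values.
# Dataset and parameter values are placeholders, not the paper's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

models = {
    "IBL (k-NN, k=5)": KNeighborsClassifier(n_neighbors=5),
    "Back propagation (MLP)": MLPClassifier(hidden_layer_sizes=(32,),
                                            max_iter=1000, random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")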
|
1604.05449
|
Dacheng Tao
|
Shan You, Chang Xu, Yunhe Wang, Chao Xu and Dacheng Tao
|
Streaming Label Learning for Modeling Labels on the Fly
| null | null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is challenging to handle a large volume of labels in multi-label learning.
However, existing approaches explicitly or implicitly assume that all the
labels in the learning process are given, which could be easily violated in
changing environments. In this paper, we define and study streaming label
learning (SLL), i.e., labels arrive on the fly, to model newly arrived
labels with the help of the knowledge learned from past labels. The core of SLL
is to explore and exploit the relationships between new labels and past labels
and then inherit the relationship into hypotheses of labels to boost the
performance of new classifiers. Specifically, we use label
self-representation to model the label relationship, and SLL is divided
into two steps: a regression problem and an empirical risk minimization (ERM)
problem. Both problems are simple and can be efficiently solved. We further
show that SLL can generate a tighter generalization error bound for new labels
than the general ERM framework with trace norm or Frobenius norm
regularization. Finally, we conduct extensive experiments on various
benchmark datasets to validate the new setting. The results show that SLL can
effectively handle the constantly emerging new labels and provide excellent
classification performance.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 07:12:29 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"You",
"Shan",
""
],
[
"Xu",
"Chang",
""
],
[
"Wang",
"Yunhe",
""
],
[
"Xu",
"Chao",
""
],
[
"Tao",
"Dacheng",
""
]
] |
TITLE: Streaming Label Learning for Modeling Labels on the Fly
ABSTRACT: It is challenging to handle a large volume of labels in multi-label learning.
However, existing approaches explicitly or implicitly assume that all the
labels in the learning process are given, which could be easily violated in
changing environments. In this paper, we define and study streaming label
learning (SLL), i.e., labels arrive on the fly, to model newly arrived
labels with the help of the knowledge learned from past labels. The core of SLL
is to explore and exploit the relationships between new labels and past labels
and then inherit the relationship into hypotheses of labels to boost the
performance of new classifiers. Specifically, we use label
self-representation to model the label relationship, and SLL is divided
into two steps: a regression problem and an empirical risk minimization (ERM)
problem. Both problems are simple and can be efficiently solved. We further
show that SLL can generate a tighter generalization error bound for new labels
than the general ERM framework with trace norm or Frobenius norm
regularization. Finally, we conduct extensive experiments on various
benchmark datasets to validate the new setting. The results show that SLL can
effectively handle the constantly emerging new labels and provide excellent
classification performance.
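The regression step of the label self-representation idea can be sketched as a
ridge problem, as below: a newly arrived label column is expressed as a linear
combination of past label columns, and the learned coefficients could
warm-start the new label's classifier before the ERM step. Shapes, the toy
labels and the warm-start rule are illustrative assumptions.

# Minimal sketch of label self-representation for a newly arrived label:
# regress the new label column on the past label matrix (ridge-regularised),
# then reuse the relationship to warm-start the new label's classifier.
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 20                               # instances, past labels
Y_past = rng.integers(0, 2, (n, k)).astype(float)
y_new = (Y_past[:, 0] * Y_past[:, 3] > 0).astype(float)   # toy new label

lam = 1.0
# Ridge regression: s = argmin ||Y_past s - y_new||^2 + lam ||s||^2
s = np.linalg.solve(Y_past.T @ Y_past + lam * np.eye(k), Y_past.T @ y_new)

# Past classifiers W_past (d x k) would then give a warm start for the new
# label's weight vector, w_new ~ W_past @ s, before running ERM on y_new.
d = 30
W_past = rng.standard_normal((d, k))         # hypothetical trained weights
w_new_init = W_past @ s
print(s.round(2), w_new_init.shape)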
|
1604.05451
|
Dacheng Tao
|
Yunhe Wang, Chang Xu, Shan You, Dacheng Tao and Chao Xu
|
Parts for the Whole: The DCT Norm for Extreme Visual Recovery
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Here we study the extreme visual recovery problem, in which over 90\% of
pixel values in a given image are missing. Existing low rank-based algorithms
are only effective for recovering data with at most 90\% missing values. Thus,
we exploit visual data's smoothness property to help solve this challenging
extreme visual recovery problem. Based on the Discrete Cosine Transformation
(DCT), we propose a novel DCT norm that involves all pixels and produces smooth
estimations in any view. Our theoretical analysis shows that the total
variation (TV) norm, which only achieves local smoothness, is a special case of
the proposed DCT norm. We also develop a new visual recovery algorithm by
minimizing the DCT and nuclear norms to achieve a more visually pleasing
estimation. Experimental results on a benchmark image dataset demonstrate that
the proposed approach is superior to state-of-the-art methods in terms of peak
signal-to-noise ratio and structural similarity.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 07:13:50 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Wang",
"Yunhe",
""
],
[
"Xu",
"Chang",
""
],
[
"You",
"Shan",
""
],
[
"Tao",
"Dacheng",
""
],
[
"Xu",
"Chao",
""
]
] |
TITLE: Parts for the Whole: The DCT Norm for Extreme Visual Recovery
ABSTRACT: Here we study the extreme visual recovery problem, in which over 90\% of
pixel values in a given image are missing. Existing low rank-based algorithms
are only effective for recovering data with at most 90\% missing values. Thus,
we exploit visual data's smoothness property to help solve this challenging
extreme visual recovery problem. Based on the Discrete Cosine Transformation
(DCT), we propose a novel DCT norm that involves all pixels and produces smooth
estimations in any view. Our theoretical analysis shows that the total
variation (TV) norm, which only achieves local smoothness, is a special case of
the proposed DCT norm. We also develop a new visual recovery algorithm by
minimizing the DCT and nuclear norms to achieve a more visually pleasing
estimation. Experimental results on a benchmark image dataset demonstrate that
the proposed approach is superior to state-of-the-art methods in terms of peak
signal-to-noise ratio and structural similarity.
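As a rough stand-in for the role of the DCT norm, the sketch below recovers an
image with very few observed pixels by alternating soft-thresholding of its 2-D
DCT coefficients with projection onto the observed values. This is an
illustrative DCT-domain smoothing baseline, not the paper's
DCT-plus-nuclear-norm algorithm.

# Illustrative DCT-domain recovery of an image with >90% missing pixels:
# alternate soft-thresholding of 2-D DCT coefficients with projection onto
# the observed pixels (a stand-in for the paper's objective, not its method).
import numpy as np
from scipy.fft import dctn, idctn

def recover(image, mask, n_iter=200, thresh=5.0):
    x = np.where(mask, image, image[mask].mean())      # crude initialisation
    for _ in range(n_iter):
        c = dctn(x, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - thresh, 0.0)  # promote smoothness
        x = idctn(c, norm="ortho")
        x[mask] = image[mask]                           # keep observed pixels
    return x

rng = np.random.default_rng(0)
img = rng.random((64, 64)) * 255.0                      # hypothetical image
mask = rng.random(img.shape) < 0.05                     # only 5% observed
est = recover(img, mask)
print(np.abs(est[mask] - img[mask]).max())              # observed pixels preserved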
|
1604.05499
|
Yijia Liu
|
Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, Ting Liu
|
Exploring Segment Representations for Neural Segmentation Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many natural language processing (NLP) tasks can be generalized into a
segmentation problem. In this paper, we combine the semi-CRF with neural networks to
solve NLP segmentation tasks. Our model represents a segment both by composing
the input units and embedding the entire segment. We thoroughly study different
composition functions and different segment embeddings. We conduct extensive
experiments on two typical segmentation tasks: named entity recognition (NER)
and Chinese word segmentation (CWS). Experimental results show that our neural
semi-CRF model benefits from representing the entire segment and achieves the
state-of-the-art performance on the CWS benchmark dataset and competitive results
on the CoNLL03 dataset.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 10:08:49 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Liu",
"Yijia",
""
],
[
"Che",
"Wanxiang",
""
],
[
"Guo",
"Jiang",
""
],
[
"Qin",
"Bing",
""
],
[
"Liu",
"Ting",
""
]
] |
TITLE: Exploring Segment Representations for Neural Segmentation Models
ABSTRACT: Many natural language processing (NLP) tasks can be generalized into a
segmentation problem. In this paper, we combine the semi-CRF with neural networks to
solve NLP segmentation tasks. Our model represents a segment both by composing
the input units and embedding the entire segment. We thoroughly study different
composition functions and different segment embeddings. We conduct extensive
experiments on two typical segmentation tasks: named entity recognition (NER)
and Chinese word segmentation (CWS). Experimental results show that our neural
semi-CRF model benefits from representing the entire segment and achieves the
state-of-the-art performance on the CWS benchmark dataset and competitive results
on the CoNLL03 dataset.
|
1604.05525
|
Sonse Shimaoka
|
Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, Sebastian Riedel
|
An Attentive Neural Architecture for Fine-grained Entity Type
Classification
|
6 pages, 2 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we propose a novel attention-based neural network model for the
task of fine-grained entity type classification that, unlike previously
proposed models, recursively composes representations of entity mention contexts. Our
model achieves state-of-the-art performance with 74.94% loose micro F1-score on
the well-established FIGER dataset, a relative improvement of 2.59%. We also
investigate the behavior of the attention mechanism of our model and observe
that it can learn contextual linguistic expressions that indicate the
fine-grained category memberships of an entity.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 11:39:53 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Shimaoka",
"Sonse",
""
],
[
"Stenetorp",
"Pontus",
""
],
[
"Inui",
"Kentaro",
""
],
[
"Riedel",
"Sebastian",
""
]
] |
TITLE: An Attentive Neural Architecture for Fine-grained Entity Type
Classification
ABSTRACT: In this work we propose a novel attention-based neural network model for the
task of fine-grained entity type classification that, unlike previously
proposed models, recursively composes representations of entity mention contexts. Our
model achieves state-of-the-art performance with 74.94% loose micro F1-score on
the well-established FIGER dataset, a relative improvement of 2.59%. We also
investigate the behavior of the attention mechanism of our model and observe
that it can learn contextual linguistic expressions that indicate the
fine-grained category memberships of an entity.
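A minimal PyTorch sketch of attention over context token vectors, combined with
an averaged mention representation for multi-label type classification, is
given below. The encoder, dimensions and number of types are assumptions rather
than the paper's architecture.

# Minimal PyTorch sketch: attention over context token vectors, concatenated
# with the averaged mention vector, for multi-label fine-grained typing.
import torch
import torch.nn as nn

class AttentiveTyper(nn.Module):
    def __init__(self, dim=100, n_types=112):
        super().__init__()
        self.score = nn.Linear(dim, 1)                 # attention scorer
        self.classify = nn.Linear(2 * dim, n_types)    # mention + context

    def forward(self, mention_vecs, context_vecs):
        # mention_vecs: (B, Lm, d); context_vecs: (B, Lc, d)
        attn = torch.softmax(self.score(context_vecs).squeeze(-1), dim=1)
        context = (attn.unsqueeze(-1) * context_vecs).sum(dim=1)
        mention = mention_vecs.mean(dim=1)
        return self.classify(torch.cat([mention, context], dim=-1)), attn

model = AttentiveTyper()
logits, attn = model(torch.randn(4, 3, 100), torch.randn(4, 10, 100))
loss = nn.functional.binary_cross_entropy_with_logits(
    logits, torch.randint(0, 2, (4, 112)).float())
loss.backward()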
|
1604.05576
|
Claudio Gennaro
|
Giuseppe Amato, Paolo Bolettieri, Fabrizio Falchi, Claudio Gennaro,
Lucia Vadicamo
|
Using Apache Lucene to Search Vector of Locally Aggregated Descriptors
|
In Proceedings of the 11th Joint Conference on Computer Vision,
Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2016) -
Volume 4: VISAPP, p. 383-392
| null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surrogate Text Representation (STR) is a profitable solution to efficient
similarity search in metric spaces using conventional text search engines, such
as Apache Lucene. This technique is based on comparing the permutations of some
reference objects in place of the original metric distance. However, the
Achilles heel of the STR approach is the need to reorder the result set of the
search according to the metric distance. This forces the use of a support database
to store the original objects, which requires efficient random I/O on a fast
secondary memory (such as flash-based storages). In this paper, we propose to
extend the Surrogate Text Representation to specifically address a class of
visual metric objects known as Vector of Locally Aggregated Descriptors (VLAD).
This approach is based on representing the individual sub-vectors forming the
VLAD vector with the STR, providing a finer representation of the vector and
enabling us to get rid of the reordering phase. The experiments on a publicly
available dataset show that the extended STR outperforms the baseline STR
achieving satisfactory performance near to the one obtained with the original
VLAD vectors.
|
[
{
"version": "v1",
"created": "Tue, 19 Apr 2016 14:08:34 GMT"
}
] | 2016-04-20T00:00:00 |
[
[
"Amato",
"Giuseppe",
""
],
[
"Bolettieri",
"Paolo",
""
],
[
"Falchi",
"Fabrizio",
""
],
[
"Gennaro",
"Claudio",
""
],
[
"Vadicamo",
"Lucia",
""
]
] |
TITLE: Using Apache Lucene to Search Vector of Locally Aggregated Descriptors
ABSTRACT: Surrogate Text Representation (STR) is a profitable solution to efficient
similarity search in metric spaces using conventional text search engines, such
as Apache Lucene. This technique is based on comparing the permutations of some
reference objects in place of the original metric distance. However, the
Achilles heel of the STR approach is the need to reorder the result set of the
search according to the metric distance. This forces the use of a support database
to store the original objects, which requires efficient random I/O on a fast
secondary memory (such as flash-based storages). In this paper, we propose to
extend the Surrogate Text Representation to specifically address a class of
visual metric objects known as Vector of Locally Aggregated Descriptors (VLAD).
This approach is based on representing the individual sub-vectors forming the
VLAD vector with the STR, providing a finer representation of the vector and
enabling us to get rid of the reordering phase. The experiments on a publicly
available dataset show that the extended STR outperforms the baseline STR
achieving satisfactory performance near to the one obtained with the original
VLAD vectors.
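The surrogate-text idea can be sketched as follows: each vector is encoded as a
text document whose terms are the identifiers of its nearest reference objects,
repeated more often for closer ones, so that term overlap in a text engine
approximates metric similarity. The reference vectors and repetition rule below
are illustrative, and indexing the strings with Apache Lucene is not shown.

# Minimal sketch of Surrogate Text Representation: turn a vector into a text
# document whose terms are the nearest reference objects, repeated more for
# closer ones, so term overlap in a text engine mimics metric similarity.
import numpy as np

rng = np.random.default_rng(0)
references = rng.standard_normal((100, 128))      # hypothetical reference objects

def surrogate_text(vec, k=10):
    dists = np.linalg.norm(references - vec, axis=1)
    nearest = np.argsort(dists)[:k]               # permutation prefix
    terms = []
    for rank, ref_id in enumerate(nearest):
        terms.extend([f"R{ref_id}"] * (k - rank)) # closer refs repeated more
    return " ".join(terms)

query = rng.standard_normal(128)
print(surrogate_text(query)[:80], "...")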
|
1412.7282
|
Jundong Li
|
Jundong Li, Aibek Adilmagambetovm, Mohomed Shazan Mohomed Jabbar,
Osmar R. Zaiane, Alvaro Osornio-Vargas, Osnat Wine
|
On Discovering Co-Location Patterns in Datasets: A Case Study of
Pollutants and Child Cancers
|
In GeoInformatica, 2016
|
GeoInformatica 2016
|
10.1007/s10707-016-0254-1
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We intend to identify relationships between cancer cases and pollutant
emissions and attempt to understand whether cancer in children is typically
located together with some specific chemical combinations or is independent.
Co-location pattern analysis seems to be the appropriate investigation to
perform. Co-location mining is one of the tasks of spatial data mining which
focuses on the detection of co-location patterns, the sets of spatial features
frequently located in close proximity of each other. Most previous works are
based on transaction-free apriori-like algorithms which are dependent on
user-defined thresholds and are designed for boolean data points. Due to the
absence of a clear notion of transactions, it is nontrivial to use association
rule mining techniques to tackle the co-location mining problem. The approach
we propose is based on a grid "transactionization" of the geographic space and
is designed to mine datasets with extended spatial objects. Uncertainty of the
feature presence in transactions is taken into account in our model. A
statistical test is used instead of global thresholds to detect significant
co-location patterns and rules. We evaluate our approach on synthetic and real
datasets. This approach can be used by researchers looking for spatial
associations between environmental and health factors. In addition, we explain
the data modelling framework which is used on real datasets of pollutants
(PRTR/NPRI) and childhood cancer cases.
|
[
{
"version": "v1",
"created": "Tue, 23 Dec 2014 07:59:09 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jan 2016 08:36:17 GMT"
},
{
"version": "v3",
"created": "Thu, 31 Mar 2016 18:56:07 GMT"
},
{
"version": "v4",
"created": "Fri, 1 Apr 2016 20:34:34 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Li",
"Jundong",
""
],
[
"Adilmagambetovm",
"Aibek",
""
],
[
"Jabbar",
"Mohomed Shazan Mohomed",
""
],
[
"Zaiane",
"Osmar R.",
""
],
[
"Osornio-Vargas",
"Alvaro",
""
],
[
"Wine",
"Osnat",
""
]
] |
TITLE: On Discovering Co-Location Patterns in Datasets: A Case Study of
Pollutants and Child Cancers
ABSTRACT: We intend to identify relationships between cancer cases and pollutant
emissions and attempt to understand whether cancer in children is typically
located together with some specific chemical combinations or is independent.
Co-location pattern analysis seems to be the appropriate investigation to
perform. Co-location mining is one of the tasks of spatial data mining which
focuses on the detection of co-location patterns, the sets of spatial features
frequently located in close proximity of each other. Most previous works are
based on transaction-free apriori-like algorithms which are dependent on
user-defined thresholds and are designed for boolean data points. Due to the
absence of a clear notion of transactions, it is nontrivial to use association
rule mining techniques to tackle the co-location mining problem. The approach
we propose is based on a grid "transactionization" of the geographic space and
is designed to mine datasets with extended spatial objects. Uncertainty of the
feature presence in transactions is taken into account in our model. A
statistical test is used instead of global thresholds to detect significant
co-location patterns and rules. We evaluate our approach on synthetic and real
datasets. This approach can be used by researchers looking for spatial
associations between environmental and health factors. In addition, we explain
the data modelling framework which is used on real datasets of pollutants
(PRTR/NPRI) and childhood cancer cases.
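The grid "transactionization" step can be sketched as follows: a grid is
overlaid on the study region and each cell centre becomes a transaction listing
the features whose buffered extent covers it. Coordinates, buffer radii and
feature names below are hypothetical, and the paper's uncertainty modelling and
statistical test are not shown.

# Minimal sketch of grid "transactionization": each grid-cell centre becomes a
# transaction listing every spatial feature whose (buffered) extent covers it.
import numpy as np

# Hypothetical point features: (feature_name, x, y, influence_radius)
features = [
    ("pollutant_A", 1.0, 1.2, 0.8),
    ("pollutant_B", 3.1, 0.9, 0.6),
    ("child_cancer", 1.3, 1.0, 0.3),
]

def transactionize(features, xmax=4.0, ymax=2.0, cell=0.5):
    transactions = []
    for cx in np.arange(cell / 2, xmax, cell):
        for cy in np.arange(cell / 2, ymax, cell):
            items = {name for name, x, y, r in features
                     if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2}
            if items:
                transactions.append(items)
    return transactions

for t in transactionize(features):
    print(sorted(t))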
|
1507.05150
|
Amandianeze Nwana
|
Amandianeze O. Nwana and Tshuan Chen
|
Towards Understanding User Preferences from User Tagging Behavior for
Personalization
|
6 pages
| null |
10.1109/ISM.2015.79
| null |
cs.MM cs.HC cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Personalizing image tags is a relatively new and growing area of research,
and in order to advance this research community, we must review and challenge
the de-facto standard of defining tag importance. We believe that for greater
progress to be made, we must go beyond tags that merely describe objects that
are visually represented in the image, towards more user-centric and subjective
notions such as emotion, sentiment, and preferences.
We focus on the notion of user preferences and show that the order that users
list tags on images is correlated to the order of preference over the tags that
they provided for the image. While this observation is not completely
surprising, to our knowledge, we are the first to explore this aspect of user
tagging behavior systematically and report empirical results to support this
observation. We argue that this observation can be exploited to help advance
the image tagging (and related) communities.
Our contributions include: 1.) conducting a user study demonstrating this
observation, and 2.) collecting a dataset in which user tag preferences are
explicitly recorded.
|
[
{
"version": "v1",
"created": "Sat, 18 Jul 2015 05:55:37 GMT"
},
{
"version": "v2",
"created": "Fri, 20 Nov 2015 19:56:36 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Nwana",
"Amandianeze O.",
""
],
[
"Chen",
"Tshuan",
""
]
] |
TITLE: Towards Understanding User Preferences from User Tagging Behavior for
Personalization
ABSTRACT: Personalizing image tags is a relatively new and growing area of research,
and in order to advance this research community, we must review and challenge
the de-facto standard of defining tag importance. We believe that for greater
progress to be made, we must go beyond tags that merely describe objects that
are visually represented in the image, towards more user-centric and subjective
notions such as emotion, sentiment, and preferences.
We focus on the notion of user preferences and show that the order that users
list tags on images is correlated to the order of preference over the tags that
they provided for the image. While this observation is not completely
surprising, to our knowledge, we are the first to explore this aspect of user
tagging behavior systematically and report empirical results to support this
observation. We argue that this observation can be exploited to help advance
the image tagging (and related) communities.
Our contributions include: 1.) conducting a user study demonstrating this
observation, and 2.) collecting a dataset in which user tag preferences are
explicitly recorded.
|
1511.07356
|
Sina Honari
|
Sina Honari, Jason Yosinski, Pascal Vincent, Christopher Pal
|
Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation
|
accepted in CVPR 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks with alternating convolutional, max-pooling and
decimation layers are widely used in state of the art architectures for
computer vision. Max-pooling purposefully discards precise spatial information
in order to create features that are more robust, and typically organized as
lower resolution spatial feature maps. On some tasks, such as whole-image
classification, max-pooling derived features are well suited; however, for
tasks requiring precise localization, such as pixel level prediction and
segmentation, max-pooling destroys exactly the information required to perform
well. Precise localization may be preserved by shallow convnets without pooling
but at the expense of robustness. Can we have our max-pooled multi-layered cake
and eat it too? Several papers have proposed summation and concatenation based
methods for combining upsampled coarse, abstract features with finer features
to produce robust pixel level predictions. Here we introduce another model ---
dubbed Recombinator Networks --- where coarse features inform finer features
early in their formation such that finer features can make use of several
layers of computation in deciding how to use coarse features. The model is
trained once, end-to-end and performs better than summation-based
architectures, reducing the error from the previous state of the art on two
facial keypoint datasets, AFW and AFLW, by 30\% and beating the current
state-of-the-art on 300W without using extra data. We improve performance even
further by adding a denoising prediction model based on a novel convnet
formulation.
|
[
{
"version": "v1",
"created": "Mon, 23 Nov 2015 18:42:36 GMT"
},
{
"version": "v2",
"created": "Sun, 17 Apr 2016 23:29:25 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Honari",
"Sina",
""
],
[
"Yosinski",
"Jason",
""
],
[
"Vincent",
"Pascal",
""
],
[
"Pal",
"Christopher",
""
]
] |
TITLE: Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation
ABSTRACT: Deep neural networks with alternating convolutional, max-pooling and
decimation layers are widely used in state of the art architectures for
computer vision. Max-pooling purposefully discards precise spatial information
in order to create features that are more robust, and typically organized as
lower resolution spatial feature maps. On some tasks, such as whole-image
classification, max-pooling derived features are well suited; however, for
tasks requiring precise localization, such as pixel level prediction and
segmentation, max-pooling destroys exactly the information required to perform
well. Precise localization may be preserved by shallow convnets without pooling
but at the expense of robustness. Can we have our max-pooled multi-layered cake
and eat it too? Several papers have proposed summation and concatenation based
methods for combining upsampled coarse, abstract features with finer features
to produce robust pixel level predictions. Here we introduce another model ---
dubbed Recombinator Networks --- where coarse features inform finer features
early in their formation such that finer features can make use of several
layers of computation in deciding how to use coarse features. The model is
trained once, end-to-end and performs better than summation-based
architectures, reducing the error from the previous state of the art on two
facial keypoint datasets, AFW and AFLW, by 30\% and beating the current
state-of-the-art on 300W without using extra data. We improve performance even
further by adding a denoising prediction model based on a novel convnet
formulation.
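A minimal PyTorch sketch of the recombination pattern, where upsampled coarse
features are concatenated with the input before the fine branch's remaining
convolutions so that fine features are computed with knowledge of coarse
context, is given below. It illustrates the wiring only; the paper's
multi-branch keypoint architecture and denoising model are not reproduced.

# Minimal PyTorch sketch of coarse-to-fine recombination: upsampled coarse
# features are concatenated with the full-resolution input before the fine
# branch predicts per-pixel keypoint maps (illustrative wiring only).
import torch
import torch.nn as nn

class TinyRecombinator(nn.Module):
    def __init__(self, c=16, n_keypoints=5):
        super().__init__()
        self.coarse = nn.Sequential(nn.MaxPool2d(2),
                                    nn.Conv2d(3, c, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.fine = nn.Sequential(nn.Conv2d(3 + c, c, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(c, n_keypoints, 1))

    def forward(self, x):
        coarse = self.up(self.coarse(x))      # coarse features at full resolution
        return self.fine(torch.cat([x, coarse], dim=1))

heatmaps = TinyRecombinator()(torch.randn(2, 3, 64, 64))
print(heatmaps.shape)                          # (2, 5, 64, 64)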
|
1512.02497
|
Francisco Massa
|
Francisco Massa, Bryan Russell, Mathieu Aubry
|
Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views
|
To appear in CVPR 2016
| null | null | null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an end-to-end convolutional neural network (CNN) for
2D-3D exemplar detection. We demonstrate that the ability to adapt the features
of natural images to better align with those of CAD rendered views is critical
to the success of our technique. We show that the adaptation can be learned by
compositing rendered views of textured object models on natural images. Our
approach can be naturally incorporated into a CNN detection pipeline and
extends the accuracy and speed benefits from recent advances in deep learning
to 2D-3D exemplar detection. We applied our method to two tasks: instance
detection, where we evaluated on the IKEA dataset, and object category
detection, where we outperform Aubry et al. for "chair" detection on a subset
of the Pascal VOC dataset.
|
[
{
"version": "v1",
"created": "Tue, 8 Dec 2015 15:04:46 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Apr 2016 13:14:22 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Massa",
"Francisco",
""
],
[
"Russell",
"Bryan",
""
],
[
"Aubry",
"Mathieu",
""
]
] |
TITLE: Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views
ABSTRACT: This paper presents an end-to-end convolutional neural network (CNN) for
2D-3D exemplar detection. We demonstrate that the ability to adapt the features
of natural images to better align with those of CAD rendered views is critical
to the success of our technique. We show that the adaptation can be learned by
compositing rendered views of textured object models on natural images. Our
approach can be naturally incorporated into a CNN detection pipeline and
extends the accuracy and speed benefits from recent advances in deep learning
to 2D-3D exemplar detection. We applied our method to two tasks: instance
detection, where we evaluated on the IKEA dataset, and object category
detection, where we outperform Aubry et al. for "chair" detection on a subset
of the Pascal VOC dataset.
|
1603.09303
|
Salman Habib
|
Salman Habib, Robert Roser (HEP Leads), Richard Gerber, Katie Antypas,
Katherine Riley, Tim Williams, Jack Wells, Tjerk Straatsma (ASCR Leads), A.
Almgren, J. Amundson, S. Bailey, D. Bard, K. Bloom, B. Bockelman, A.
Borgland, J. Borrill, R. Boughezal, R. Brower, B. Cowan, H. Finkel, N.
Frontiere, S. Fuess, L. Ge, N. Gnedin, S. Gottlieb, O. Gutsche, T. Han, K.
Heitmann, S. Hoeche, K. Ko, O. Kononenko, T. LeCompte, Z. Li, Z. Lukic, W.
Mori, P. Nugent, C.-K. Ng, G. Oleynik, B. O'Shea, N. Padmanabhan, D.
Petravick, F.J. Petriello, J. Power, J. Qiang, L. Reina, T.J. Rizzo, R. Ryne,
M. Schram, P. Spentzouris, D. Toussaint, J.-L. Vay, B. Viren, F. Wurthwein,
L. Xiao
|
ASCR/HEP Exascale Requirements Review Report
|
77 pages, 13 Figures; draft report, subject to further revision
| null | null | null |
physics.comp-ph astro-ph.CO hep-ex hep-lat hep-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This draft report summarizes and details the findings, results, and
recommendations derived from the ASCR/HEP Exascale Requirements Review meeting
held in June, 2015. The main conclusions are as follows. 1) Larger, more
capable computing and data facilities are needed to support HEP science goals
in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of
the demand at the 2025 timescale is at least two orders of magnitude -- and in
some cases greater -- than that available currently. 2) The growth rate of data
produced by simulations is overwhelming the current ability, of both facilities
and researchers, to store and analyze it. Additional resources and new
techniques for data analysis are urgently needed. 3) Data rates and volumes
from HEP experimental facilities are also straining the ability to store and
analyze large and complex data volumes. Appropriately configured
leadership-class facilities can play a transformational role in enabling
scientific discovery from these datasets. 4) A close integration of HPC
simulation and data analysis will aid greatly in interpreting results from HEP
experiments. Such an integration will minimize data movement and facilitate
interdependent workflows. 5) Long-range planning between HEP and ASCR will be
required to meet HEP's research needs. To best use ASCR HPC resources the
experimental HEP program needs a) an established long-term plan for access to
ASCR computational and data resources, b) an ability to map workflows onto HPC
resources, c) the ability for ASCR facilities to accommodate workflows run by
collaborations that can have thousands of individual members, d) to transition
codes to the next-generation HPC platforms that will be available at ASCR
facilities, e) to build up and train a workforce capable of developing and
using simulations and analysis to support HEP scientific research on
next-generation systems.
|
[
{
"version": "v1",
"created": "Wed, 30 Mar 2016 18:34:28 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Mar 2016 20:52:37 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Habib",
"Salman",
"",
"HEP Leads"
],
[
"Roser",
"Robert",
"",
"HEP Leads"
],
[
"Gerber",
"Richard",
"",
"ASCR Leads"
],
[
"Antypas",
"Katie",
"",
"ASCR Leads"
],
[
"Riley",
"Katherine",
"",
"ASCR Leads"
],
[
"Williams",
"Tim",
"",
"ASCR Leads"
],
[
"Wells",
"Jack",
"",
"ASCR Leads"
],
[
"Straatsma",
"Tjerk",
"",
"ASCR Leads"
],
[
"Almgren",
"A.",
""
],
[
"Amundson",
"J.",
""
],
[
"Bailey",
"S.",
""
],
[
"Bard",
"D.",
""
],
[
"Bloom",
"K.",
""
],
[
"Bockelman",
"B.",
""
],
[
"Borgland",
"A.",
""
],
[
"Borrill",
"J.",
""
],
[
"Boughezal",
"R.",
""
],
[
"Brower",
"R.",
""
],
[
"Cowan",
"B.",
""
],
[
"Finkel",
"H.",
""
],
[
"Frontiere",
"N.",
""
],
[
"Fuess",
"S.",
""
],
[
"Ge",
"L.",
""
],
[
"Gnedin",
"N.",
""
],
[
"Gottlieb",
"S.",
""
],
[
"Gutsche",
"O.",
""
],
[
"Han",
"T.",
""
],
[
"Heitmann",
"K.",
""
],
[
"Hoeche",
"S.",
""
],
[
"Ko",
"K.",
""
],
[
"Kononenko",
"O.",
""
],
[
"LeCompte",
"T.",
""
],
[
"Li",
"Z.",
""
],
[
"Lukic",
"Z.",
""
],
[
"Mori",
"W.",
""
],
[
"Nugent",
"P.",
""
],
[
"Ng",
"C. -K.",
""
],
[
"Oleynik",
"G.",
""
],
[
"O'Shea",
"B.",
""
],
[
"Padmanabhan",
"N.",
""
],
[
"Petravick",
"D.",
""
],
[
"Petriello",
"F. J.",
""
],
[
"Power",
"J.",
""
],
[
"Qiang",
"J.",
""
],
[
"Reina",
"L.",
""
],
[
"Rizzo",
"T. J.",
""
],
[
"Ryne",
"R.",
""
],
[
"Schram",
"M.",
""
],
[
"Spentzouris",
"P.",
""
],
[
"Toussaint",
"D.",
""
],
[
"Vay",
"J. -L.",
""
],
[
"Viren",
"B.",
""
],
[
"Wurthwein",
"F.",
""
],
[
"Xiao",
"L.",
""
]
] |
TITLE: ASCR/HEP Exascale Requirements Review Report
ABSTRACT: This draft report summarizes and details the findings, results, and
recommendations derived from the ASCR/HEP Exascale Requirements Review meeting
held in June, 2015. The main conclusions are as follows. 1) Larger, more
capable computing and data facilities are needed to support HEP science goals
in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of
the demand at the 2025 timescale is at least two orders of magnitude -- and in
some cases greater -- than that available currently. 2) The growth rate of data
produced by simulations is overwhelming the current ability, of both facilities
and researchers, to store and analyze it. Additional resources and new
techniques for data analysis are urgently needed. 3) Data rates and volumes
from HEP experimental facilities are also straining the ability to store and
analyze large and complex data volumes. Appropriately configured
leadership-class facilities can play a transformational role in enabling
scientific discovery from these datasets. 4) A close integration of HPC
simulation and data analysis will aid greatly in interpreting results from HEP
experiments. Such an integration will minimize data movement and facilitate
interdependent workflows. 5) Long-range planning between HEP and ASCR will be
required to meet HEP's research needs. To best use ASCR HPC resources the
experimental HEP program needs a) an established long-term plan for access to
ASCR computational and data resources, b) an ability to map workflows onto HPC
resources, c) the ability for ASCR facilities to accommodate workflows run by
collaborations that can have thousands of individual members, d) to transition
codes to the next-generation HPC platforms that will be available at ASCR
facilities, e) to build up and train a workforce capable of developing and
using simulations and analysis to support HEP scientific research on
next-generation systems.
|
1604.02796
|
Chih-Hang Wang
|
Chih-Hang Wang, Po-Shun Huang, De-Nian Yang, Wen-Tsuen Chen
|
Cross-Layer Design of Influence Maximization in Mobile Social Networks
|
8 pages, 6 figures
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most prior algorithms for influence maximization are designed for
Online Social Networks (OSNs) and require centralized computation. Directly
deploying the above algorithms in distributed Mobile Social Networks (MSNs)
will overwhelm the networks due to an enormous number of messages required for
seed selection. In this paper, therefore, we design a new cross-layer strategy
to jointly examine MSN and mobile ad hoc networks (MANETs) to facilitate
efficient seed selection, by extracting a subset of nodes as agents to
represent nearby friends during the distributed computation. Specifically, we
formulate a new optimization problem, named Agent Selection Problem (ASP), to
minimize the message overhead transmitted in MANET. We prove that ASP is
NP-Hard and design an effective distributed algorithm. Simulation results on
real and synthetic datasets show that the message overhead can be
significantly reduced compared with the existing approaches.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2016 05:43:25 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Apr 2016 06:51:22 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Wang",
"Chih-Hang",
""
],
[
"Huang",
"Po-Shun",
""
],
[
"Yang",
"De-Nian",
""
],
[
"Chen",
"Wen-Tsuen",
""
]
] |
TITLE: Cross-Layer Design of Influence Maximization in Mobile Social Networks
ABSTRACT: Most prior algorithms for influence maximization are designed for
Online Social Networks (OSNs) and require centralized computation. Directly
deploying the above algorithms in distributed Mobile Social Networks (MSNs)
will overwhelm the networks due to an enormous number of messages required for
seed selection. In this paper, therefore, we design a new cross-layer strategy
to jointly examine MSN and mobile ad hoc networks (MANETs) to facilitate
efficient seed selection, by extracting a subset of nodes as agents to
represent nearby friends during the distributed computation. Specifically, we
formulate a new optimization problem, named Agent Selection Problem (ASP), to
minimize the message overhead transmitted in MANET. We prove that ASP is
NP-Hard and design an effective distributed algorithm. Simulation results on
real and synthetic datasets show that the message overhead can be
significantly reduced compared with the existing approaches.
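
As a rough illustration of the agent idea (not the paper's ASP formulation or its distributed algorithm), the following greedy coverage heuristic picks agents until every node is represented by itself or by a neighboring agent, which is the kind of reduction in message-exchanging nodes the abstract describes; the graph here is a random stand-in for a MANET topology.

    # Rough illustration only: greedily pick agents until every node is covered by
    # itself or a neighboring agent, so fewer nodes exchange messages during
    # distributed seed selection. Not the paper's ASP algorithm.
    import networkx as nx

    def greedy_agents(G):
        uncovered = set(G.nodes)
        agents = []
        while uncovered:
            # pick the node that newly covers the most uncovered nodes
            best = max(G.nodes,
                       key=lambda v: len(({v} | set(G.neighbors(v))) & uncovered))
            agents.append(best)
            uncovered -= {best} | set(G.neighbors(best))
        return agents

    G = nx.erdos_renyi_graph(50, 0.08, seed=1)
    print(len(greedy_agents(G)), "agents represent", G.number_of_nodes(), "nodes")
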
|
1604.04639
|
Dylan Hutchison
|
Dylan Hutchison
|
ModelWizard: Toward Interactive Model Construction
|
Master's Thesis
| null | null | null |
cs.PL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data scientists engage in model construction to discover machine learning
models that well explain a dataset, in terms of predictiveness,
understandability and generalization across domains. Questions such as "what if
we model common cause Z" and "what if Y's dependence on X reverses" inspire
many candidate models to consider and compare, yet current tools emphasize
constructing a final model all at once.
To more naturally reflect exploration when debating numerous models, we
propose an interactive model construction framework grounded in composable
operations. Primitive operations capture core steps refining data and model
that, when verified, form an inductive basis to prove model validity. Derived,
composite operations enable advanced model families, both generic and
specialized, abstracted away from low-level details.
We prototype our envisioned framework in ModelWizard, a domain-specific
language embedded in F# to construct Tabular models. We present the language
design and demonstrate its use through several applications, emphasizing how
language may facilitate creation of complex models. To future engineers
designing data science languages and tools, we offer ModelWizard's design as a
new model construction paradigm, speeding discovery of our universe's
structure.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2016 20:43:20 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Hutchison",
"Dylan",
""
]
] |
TITLE: ModelWizard: Toward Interactive Model Construction
ABSTRACT: Data scientists engage in model construction to discover machine learning
models that well explain a dataset, in terms of predictiveness,
understandability and generalization across domains. Questions such as "what if
we model common cause Z" and "what if Y's dependence on X reverses" inspire
many candidate models to consider and compare, yet current tools emphasize
constructing a final model all at once.
To more naturally reflect exploration when debating numerous models, we
propose an interactive model construction framework grounded in composable
operations. Primitive operations capture core steps refining data and model
that, when verified, form an inductive basis to prove model validity. Derived,
composite operations enable advanced model families, both generic and
specialized, abstracted away from low-level details.
We prototype our envisioned framework in ModelWizard, a domain-specific
language embedded in F# to construct Tabular models. We present the language
design and demonstrate its use through several applications, emphasizing how
language may facilitate creation of complex models. To future engineers
designing data science languages and tools, we offer ModelWizard's design as a
new model construction paradigm, speeding discovery of our universe's
structure.
|
1604.04673
|
Hamid Tizhoosh
|
Hamid R. Tizhoosh, Shahryar Rahnamayan
|
Evolutionary Projection Selection for Radon Barcodes
|
To appear in proceedings of The 2016 IEEE Congress on Evolutionary
Computation (IEEE CEC 2016), July 24-29, 2016, Vancouver, Canada
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Radon transformation has been used to generate barcodes for tagging
medical images. The under-sampled image is projected in certain directions, and
each projection is binarized using a local threshold. The concatenation of the
thresholded projections creates a barcode that can be used for tagging or
annotating medical images. A small number of equidistant projections, e.g., 4
or 8, is generally used to generate short barcodes. However, due to the diverse
nature of digital images, and since we are only working with a small number of
projections (to keep the barcode short), taking equidistant projections may not
be the best course of action. In this paper, we propose to find $n$ optimal
projections, where $n\!<\!180$, in order to increase the expressiveness of
Radon barcodes. We show examples of the exhaustive search for the simple case
of finding the 4 best projections out of 16 equidistant projections
and compare it with the evolutionary approach in order to establish the benefit
of the latter when operating on a small population size as in the case of
micro-DE. We randomly selected 10 different classes from IRMA dataset (14,400
x-ray images in 58 classes) and further randomly selected 5 images per class
for our tests.
|
[
{
"version": "v1",
"created": "Sat, 16 Apr 2016 00:48:52 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Tizhoosh",
"Hamid R.",
""
],
[
"Rahnamayan",
"Shahryar",
""
]
] |
TITLE: Evolutionary Projection Selection for Radon Barcodes
ABSTRACT: Recently, Radon transformation has been used to generate barcodes for tagging
medical images. The under-sampled image is projected in certain directions, and
each projection is binarized using a local threshold. The concatenation of the
thresholded projections creates a barcode that can be used for tagging or
annotating medical images. A small number of equidistant projections, e.g., 4
or 8, is generally used to generate short barcodes. However, due to the diverse
nature of digital images, and since we are only working with a small number of
projections (to keep the barcode short), taking equidistant projections may not
be the best course of action. In this paper, we propose to find $n$ optimal
projections, where $n\!<\!180$, in order to increase the expressiveness of
Radon barcodes. We show examples of the exhaustive search for the simple case
of finding the 4 best projections out of 16 equidistant projections
and compare it with the evolutionary approach in order to establish the benefit
of the latter when operating on a small population size as in the case of
micro-DE. We randomly selected 10 different classes from IRMA dataset (14,400
x-ray images in 58 classes) and further randomly selected 5 images per class
for our tests.
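
A minimal Python sketch of the setup: Radon barcodes are built by binarizing each projection with a local (here, median) threshold, and a subset of projection angles is scored and searched exhaustively for the 4-out-of-16 case mentioned above. The median threshold and the pairwise-spread fitness are illustrative assumptions, and the (micro-)differential-evolution search is replaced by brute force.

    # Sketch: per-angle binarized Radon projections, barcodes from a chosen angle
    # subset, and an exhaustive search over 4-of-16 subsets. Threshold and fitness
    # are assumptions; the evolutionary (micro-DE) search is not shown.
    import numpy as np
    from itertools import combinations
    from skimage.transform import radon, resize

    CANDIDATE_ANGLES = np.linspace(0.0, 180.0, 16, endpoint=False)

    def projection_bits(image):
        small = resize(image, (32, 32), anti_aliasing=True)
        sinogram = radon(small, theta=CANDIDATE_ANGLES, circle=False)
        return [(col > np.median(col)).astype(np.uint8) for col in sinogram.T]

    def subset_score(all_bits, chosen):
        codes = [np.concatenate([b[i] for i in chosen]) for b in all_bits]
        return np.mean([np.count_nonzero(a != c) for a, c in combinations(codes, 2)])

    rng = np.random.default_rng(0)
    all_bits = [projection_bits(rng.random((64, 64))) for _ in range(5)]
    best = max(combinations(range(16), 4), key=lambda s: subset_score(all_bits, s))
    print("selected angles:", CANDIDATE_ANGLES[list(best)])
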
|
1604.04675
|
Hamid Tizhoosh
|
Shujin Zhu, H.R.Tizhoosh
|
Radon Features and Barcodes for Medical Image Retrieval via SVM
|
To appear in proceedings of The 2016 IEEE International Joint
Conference on Neural Networks (IJCNN 2016), July 24-29, 2016, Vancouver,
Canada
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For more than two decades, research has been performed on content-based image
retrieval (CBIR). By combining Radon projections and support vector
machines (SVM), a content-based medical image retrieval method is presented in
this work. The proposed approach employs the normalized Radon projections with
corresponding image category labels to build an SVM classifier, and the Radon
barcode database which encodes every image in a binary format is also generated
simultaneously to tag all images. To retrieve similar images when a query image
is given, Radon projections and the barcode of the query image are generated.
Subsequently, the k-nearest neighbor search method is applied to find the
images with minimum Hamming distance of the Radon barcode within the same class
predicted by the trained SVM classifier that uses Radon features. The
performance of the proposed method is validated by using the IRMA 2009 dataset
with 14,410 x-ray images in 57 categories. The results demonstrate that our
method has the capacity to retrieve similar responses for the correctly
identified query image and even for those mistakenly classified by SVM. The
approach is also very fast and has a low memory requirement.
|
[
{
"version": "v1",
"created": "Sat, 16 Apr 2016 01:13:23 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Zhu",
"Shujin",
""
],
[
"Tizhoosh",
"H. R.",
""
]
] |
TITLE: Radon Features and Barcodes for Medical Image Retrieval via SVM
ABSTRACT: For more than two decades, research has been performed on content-based image
retrieval (CBIR). By combining Radon projections and support vector
machines (SVM), a content-based medical image retrieval method is presented in
this work. The proposed approach employs the normalized Radon projections with
corresponding image category labels to build an SVM classifier, and the Radon
barcode database which encodes every image in a binary format is also generated
simultaneously to tag all images. To retrieve similar images when a query image
is given, Radon projections and the barcode of the query image are generated.
Subsequently, the k-nearest neighbor search method is applied to find the
images with minimum Hamming distance of the Radon barcode within the same class
predicted by the trained SVM classifier that uses Radon features. The
performance of the proposed method is validated by using the IRMA 2009 dataset
with 14,410 x-ray images in 57 categories. The results demonstrate that our
method has the capacity to retrieve similar responses for the correctly
identified query image and even for those mistakenly classified by SVM. The
approach is also very fast and has a low memory requirement.
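
A minimal sketch of the retrieval pipeline described above, with random arrays standing in for the normalized Radon projections and Radon barcodes: an SVM predicts the query's class from its Radon features, and a Hamming-distance k-NN search over barcodes is then restricted to that class.

    # Sketch of the pipeline: SVM class prediction on Radon features, then Hamming
    # k-NN over Radon barcodes within the predicted class. Features and barcodes
    # are random stand-ins here.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n, d_feat, d_code, n_classes = 200, 32, 128, 5
    features = rng.random((n, d_feat))                          # Radon projections
    barcodes = rng.integers(0, 2, (n, d_code), dtype=np.uint8)  # Radon barcodes
    labels = rng.integers(0, n_classes, n)

    clf = SVC(kernel="rbf").fit(features, labels)

    def retrieve(query_feat, query_code, k=5):
        cls = clf.predict(query_feat.reshape(1, -1))[0]   # restrict to this class
        idx = np.flatnonzero(labels == cls)
        ham = np.count_nonzero(barcodes[idx] != query_code, axis=1)
        return idx[np.argsort(ham)[:k]]                   # k nearest by Hamming

    print(retrieve(features[0], barcodes[0]))
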
|
1604.04676
|
Hamid Tizhoosh
|
Xinran Liu, Hamid R. Tizhoosh, Jonathan Kofman
|
Generating Binary Tags for Fast Medical Image Retrieval Based on
Convolutional Nets and Radon Transform
|
To appear in proceedings of The 2016 IEEE International Joint
Conference on Neural Networks (IJCNN 2016), July 24-29, 2016, Vancouver,
Canada
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Content-based image retrieval (CBIR) in large medical image archives is a
challenging and necessary task. Generally, different feature extraction methods
are used to assign expressive and invariant features to each image such that
the search for similar images comes down to feature classification and/or
matching. The present work introduces a new image retrieval method for medical
applications that employs a convolutional neural network (CNN) with recently
introduced Radon barcodes. We combine neural codes for global classification
with Radon barcodes for the final retrieval. We also examine image search based
on regions of interest (ROI) matching after image retrieval. The IRMA dataset
with more than 14,000 x-ray images is used to evaluate the performance of our
method. Experimental results show that our approach is superior to many
published works.
|
[
{
"version": "v1",
"created": "Sat, 16 Apr 2016 01:30:01 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Liu",
"Xinran",
""
],
[
"Tizhoosh",
"Hamid R.",
""
],
[
"Kofman",
"Jonathan",
""
]
] |
TITLE: Generating Binary Tags for Fast Medical Image Retrieval Based on
Convolutional Nets and Radon Transform
ABSTRACT: Content-based image retrieval (CBIR) in large medical image archives is a
challenging and necessary task. Generally, different feature extraction methods
are used to assign expressive and invariant features to each image such that
the search for similar images comes down to feature classification and/or
matching. The present work introduces a new image retrieval method for medical
applications that employs a convolutional neural network (CNN) with recently
introduced Radon barcodes. We combine neural codes for global classification
with Radon barcodes for the final retrieval. We also examine image search based
on regions of interest (ROI) matching after image retrieval. The IRMA dataset
with more than 14,000 x-ray images is used to evaluate the performance of our
method. Experimental results show that our approach is superior to many
published works.
|
1604.04724
|
Shanmuganathan Raman
|
Sri Raghu Malireddi, Shanmuganathan Raman
|
Automatic Segmentation of Dynamic Objects from an Image Pair
|
8 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic segmentation of objects from a single image is a challenging
problem which generally requires training on a large number of images. We
consider the problem of automatically segmenting only the dynamic objects from
a given pair of images of a scene captured from different positions. We exploit
dense correspondences along with saliency measures in order to first localize
the interest points on the dynamic objects from the two images. We propose a
novel approach based on techniques from computational geometry in order to
automatically segment the dynamic objects from both the images using a top-down
segmentation strategy. We discuss how the proposed approach differs from
other state-of-the-art segmentation algorithms. We show
that the proposed approach for segmentation is efficient in handling large
motions and is able to achieve very good segmentation of the objects for
different scenes. We analyse the results with respect to the manually marked
ground truth segmentation masks created using our own dataset and provide key
observations to guide future improvements.
|
[
{
"version": "v1",
"created": "Sat, 16 Apr 2016 11:00:24 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Malireddi",
"Sri Raghu",
""
],
[
"Raman",
"Shanmuganathan",
""
]
] |
TITLE: Automatic Segmentation of Dynamic Objects from an Image Pair
ABSTRACT: Automatic segmentation of objects from a single image is a challenging
problem which generally requires training on a large number of images. We
consider the problem of automatically segmenting only the dynamic objects from
a given pair of images of a scene captured from different positions. We exploit
dense correspondences along with saliency measures in order to first localize
the interest points on the dynamic objects from the two images. We propose a
novel approach based on techniques from computational geometry in order to
automatically segment the dynamic objects from both the images using a top-down
segmentation strategy. We discuss how the proposed approach differs from
other state-of-the-art segmentation algorithms. We show
that the proposed approach for segmentation is efficient in handling large
motions and is able to achieve very good segmentation of the objects for
different scenes. We analyse the results with respect to the manually marked
ground truth segmentation masks created using our own dataset and provide key
observations to guide future improvements.
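
A rough sketch of the localization step, with Farneback optical flow standing in for the dense correspondences and gradient magnitude for the saliency measure (both simplifying assumptions); the computational-geometry segmentation stage is not reproduced.

    # Rough sketch: score pixels by correspondence displacement times a crude
    # saliency proxy and keep the top-scoring points as candidates on dynamic
    # objects. Flow and saliency choices are stand-ins, not the paper's measures.
    import cv2
    import numpy as np

    def dynamic_points(img1, img2, top_k=200):
        g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
        g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        motion = np.linalg.norm(flow, axis=2)
        gx = cv2.Sobel(g1, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(g1, cv2.CV_32F, 0, 1)
        saliency = np.hypot(gx, gy)
        score = motion * (saliency / (saliency.max() + 1e-6))
        ys, xs = np.unravel_index(np.argsort(score, axis=None)[-top_k:], score.shape)
        return np.stack([xs, ys], axis=1)                 # candidate (x, y) points

    a = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
    b = np.roll(a, 5, axis=1)                             # fake camera/object motion
    print(dynamic_points(a, b)[:5])
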
|
1604.04784
|
Jiyang Gao
|
Jiyang Gao, Chen Sun, Ram Nevatia
|
ACD: Action Concept Discovery from Image-Sentence Corpora
|
8 pages, accepted by ICMR 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Action classification in still images is an important task in computer
vision. It is challenging as the appearances of actions may vary depending on
their context (e.g., associated objects). Manual labeling of context
information would be time consuming and difficult to scale up. To address this
challenge, we propose a method to automatically discover and cluster action
concepts, and learn their classifiers from weakly supervised image-sentence
corpora. It obtains candidate action concepts by extracting verb-object pairs
from sentences and verifies their visualness with the associated images.
Candidate action concepts are then clustered by using a multi-modal
representation with image embeddings from deep convolutional networks and text
embeddings from word2vec. More than one hundred human action concept
classifiers are learned from the Flickr 30k dataset with no additional human
effort and promising classification results are obtained. We further apply the
AdaBoost algorithm to automatically select and combine relevant action concepts
given an action query. Promising results have been shown on the PASCAL VOC 2012
action classification benchmark, which has zero overlap with Flickr30k.
|
[
{
"version": "v1",
"created": "Sat, 16 Apr 2016 18:26:13 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Gao",
"Jiyang",
""
],
[
"Sun",
"Chen",
""
],
[
"Nevatia",
"Ram",
""
]
] |
TITLE: ACD: Action Concept Discovery from Image-Sentence Corpora
ABSTRACT: Action classification in still images is an important task in computer
vision. It is challenging as the appearances of actions may vary depending on
their context (e.g., associated objects). Manual labeling of context
information would be time consuming and difficult to scale up. To address this
challenge, we propose a method to automatically discover and cluster action
concepts, and learn their classifiers from weakly supervised image-sentence
corpora. It obtains candidate action concepts by extracting verb-object pairs
from sentences and verifies their visualness with the associated images.
Candidate action concepts are then clustered by using a multi-modal
representation with image embeddings from deep convolutional networks and text
embeddings from word2vec. More than one hundred human action concept
classifiers are learned from the Flickr 30k dataset with no additional human
effort and promising classification results are obtained. We further apply the
AdaBoost algorithm to automatically select and combine relevant action concepts
given an action query. Promising results have been shown on the PASCAL VOC 2012
action classification benchmark, which has zero overlap with Flickr30k.
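
A minimal sketch of the candidate-concept step using spaCy's dependency parse to pull verb-object pairs from captions; the visualness verification and the multi-modal (CNN image embedding plus word2vec) clustering are beyond this snippet, and the spaCy model name is an assumption.

    # Candidate action concepts as verb-object pairs extracted from captions.
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy
    from collections import Counter

    nlp = spacy.load("en_core_web_sm")

    def verb_object_pairs(sentences):
        pairs = Counter()
        for doc in nlp.pipe(sentences):
            for tok in doc:
                if tok.dep_ == "dobj" and tok.head.pos_ == "VERB":
                    pairs[(tok.head.lemma_, tok.lemma_)] += 1
        return pairs

    caps = ["A man is riding a horse on the beach.",
            "Two kids play guitar in the park.",
            "A woman rides a bicycle down the street."]
    print(verb_object_pairs(caps).most_common())
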
|
1604.04842
|
Chao-Yeh Chen
|
Chao-Yeh Chen and Kristen Grauman
|
Subjects and Their Objects: Localizing Interactees for a Person-Centric
View of Importance
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding images with people often entails understanding their
\emph{interactions} with other objects or people. As such, given a novel image,
a vision system ought to infer which other objects/people play an important
role in a given person's activity. However, existing methods are limited to
learning action-specific interactions (e.g., how the pose of a tennis player
relates to the position of his racquet when serving the ball) for improved
recognition, making them unequipped to reason about novel interactions with
actions or objects unobserved in the training data.
We propose to predict the "interactee" in novel images---that is, to localize
the \emph{object} of a person's action. Given an arbitrary image with a
detected person, the goal is to produce a saliency map indicating the most
likely positions and scales where that person's interactee would be found. To
that end, we explore ways to learn the generic, action-independent connections
between (a) representations of a person's pose, gaze, and scene cues and (b)
the interactee object's position and scale. We provide results on a newly
collected UT Interactee dataset spanning more than 10,000 images from SUN,
PASCAL, and COCO. We show that the proposed interaction-informed saliency
metric has practical utility for four tasks: contextual object detection, image
retargeting, predicting object importance, and data-driven natural language
scene description. All four scenarios reveal the value in linking the subject
to its object in order to understand the story of an image.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2016 08:26:31 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Chen",
"Chao-Yeh",
""
],
[
"Grauman",
"Kristen",
""
]
] |
TITLE: Subjects and Their Objects: Localizing Interactees for a Person-Centric
View of Importance
ABSTRACT: Understanding images with people often entails understanding their
\emph{interactions} with other objects or people. As such, given a novel image,
a vision system ought to infer which other objects/people play an important
role in a given person's activity. However, existing methods are limited to
learning action-specific interactions (e.g., how the pose of a tennis player
relates to the position of his racquet when serving the ball) for improved
recognition, making them unequipped to reason about novel interactions with
actions or objects unobserved in the training data.
We propose to predict the "interactee" in novel images---that is, to localize
the \emph{object} of a person's action. Given an arbitrary image with a
detected person, the goal is to produce a saliency map indicating the most
likely positions and scales where that person's interactee would be found. To
that end, we explore ways to learn the generic, action-independent connections
between (a) representations of a person's pose, gaze, and scene cues and (b)
the interactee object's position and scale. We provide results on a newly
collected UT Interactee dataset spanning more than 10,000 images from SUN,
PASCAL, and COCO. We show that the proposed interaction-informed saliency
metric has practical utility for four tasks: contextual object detection, image
retargeting, predicting object importance, and data-driven natural language
scene description. All four scenarios reveal the value in linking the subject
to its object in order to understand the story of an image.
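
Purely to illustrate the form of the output, the snippet below renders a saliency map of the kind described above, placing a Gaussian at a hypothetical predicted interactee position whose spread reflects the predicted scale; the learned mapping from pose, gaze and scene cues to that prediction is not reproduced.

    # Illustration of the output form only: a normalized saliency map with a
    # Gaussian at a hypothetical predicted interactee position/scale.
    import numpy as np

    def interactee_saliency(h, w, pred_xy, pred_scale):
        ys, xs = np.mgrid[0:h, 0:w]
        d2 = (xs - pred_xy[0]) ** 2 + (ys - pred_xy[1]) ** 2
        sal = np.exp(-d2 / (2.0 * pred_scale ** 2))
        return sal / sal.sum()

    sal = interactee_saliency(480, 640, pred_xy=(400, 260), pred_scale=40.0)
    print(sal.shape, float(sal.max()))
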
|
1604.04879
|
Jorge Luis Rivero Jlrivero
|
Jorge Luis Rivero Perez, Bernardete Ribeiro, Carlos Morell Perez
|
Mahalanobis Distance Metric Learning Algorithm for Instance-based Data
Stream Classification
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With today's massive data challenges and the rapid growth of
technology, stream mining has recently received considerable attention. To
address the large number of scenarios in which this phenomenon manifests itself,
suitable tools are required in various research fields. Instance-based data
stream algorithms generally employ the Euclidean distance for the
classification task underlying this problem. A novel way to look into this
issue is to take advantage of a more flexible metric due to the increased
requirements imposed by the data stream scenario. In this paper we present a
new algorithm that learns a Mahalanobis metric using similarity and
dissimilarity constraints in an online manner. This approach hybridizes a
Mahalanobis distance metric learning algorithm and a k-NN data stream
classification algorithm with concept drift detection. First, some basic
aspects of Mahalanobis distance metric learning are described taking into
account key properties as well as online distance metric learning algorithms.
Second, we implement specific evaluation methodologies and comparative metrics
such as Q statistic for data stream classification algorithms. Finally, our
algorithm is evaluated on different datasets by comparing its results with one
of the best state-of-the-art instance-based data stream classification
algorithms. The results demonstrate that our proposal performs better.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2016 15:01:51 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Perez",
"Jorge Luis Rivero",
""
],
[
"Ribeiro",
"Bernardete",
""
],
[
"Perez",
"Carlos Morell",
""
]
] |
TITLE: Mahalanobis Distance Metric Learning Algorithm for Instance-based Data
Stream Classification
ABSTRACT: With today's massive data challenges and the rapid growth of
technology, stream mining has recently received considerable attention. To
address the large number of scenarios in which this phenomenon manifests itself,
suitable tools are required in various research fields. Instance-based data
stream algorithms generally employ the Euclidean distance for the
classification task underlying this problem. A novel way to look into this
issue is to take advantage of a more flexible metric due to the increased
requirements imposed by the data stream scenario. In this paper we present a
new algorithm that learns a Mahalanobis metric using similarity and
dissimilarity constraints in an online manner. This approach hybridizes a
Mahalanobis distance metric learning algorithm and a k-NN data stream
classification algorithm with concept drift detection. First, some basic
aspects of Mahalanobis distance metric learning are described taking into
account key properties as well as online distance metric learning algorithms.
Second, we implement specific evaluation methodologies and comparative metrics
such as Q statistic for data stream classification algorithms. Finally, our
algorithm is evaluated on different datasets by comparing its results with one
of the best state-of-the-art instance-based data stream classification
algorithms. The results demonstrate that our proposal performs better.
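
A generic sketch of the two ingredients, not the paper's exact algorithm: a Mahalanobis matrix M is updated online from similarity/dissimilarity constraints with a projected gradient step, and stream instances are classified by k-NN under d_M(x, y) = sqrt((x - y)^T M (x - y)); concept-drift detection is omitted.

    # Generic online Mahalanobis learning + k-NN under the learned metric.
    # The update rule and step size are illustrative; drift detection is omitted.
    import numpy as np

    def update_metric(M, x, y, similar, eta=0.05):
        diff = (x - y).reshape(-1, 1)
        grad = diff @ diff.T
        M = M - eta * grad if similar else M + eta * grad
        w, V = np.linalg.eigh(M)                       # project back to the PSD cone
        return V @ np.diag(np.clip(w, 1e-8, None)) @ V.T

    def knn_predict(M, X, y, query, k=3):
        d = np.einsum("ij,jk,ik->i", X - query, M, X - query)
        return np.bincount(y[np.argsort(d)[:k]]).argmax()

    rng = np.random.default_rng(0)
    M = np.eye(2)
    X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(3, 1, (30, 2))])
    y = np.array([0] * 30 + [1] * 30)
    for _ in range(100):                               # stream of pairwise constraints
        i, j = rng.integers(0, len(X), 2)
        M = update_metric(M, X[i], X[j], similar=(y[i] == y[j]))
    print(knn_predict(M, X, y, query=np.array([2.8, 3.1])))
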
|
1604.04892
|
Joshua Joy
|
Josh Joy, Mario Gerla
|
PAS-MC: Privacy-preserving Analytics Stream for the Mobile Cloud
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In today's digital world, personal data is being continuously collected and
analyzed without data owners' consent and choice. As data owners constantly
generate data on their personal devices, the tension of storing private data on
their own devices yet allowing third party analysts to perform aggregate
analytics yields an interesting dilemma.
This paper introduces PAS-MC, the first practical privacy-preserving and
anonymity stream analytics system. PAS-MC ensures that each data owner locally
privatizes their sensitive data before responding to analysts' queries. PAS-MC
also protects against traffic analysis attacks with minimal trust
vulnerabilities. We evaluate the scheme over the California Transportation
Dataset and show that we can privately and anonymously stream vehicular
location updates yet preserve high accuracy.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2016 16:24:19 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Joy",
"Josh",
""
],
[
"Gerla",
"Mario",
""
]
] |
TITLE: PAS-MC: Privacy-preserving Analytics Stream for the Mobile Cloud
ABSTRACT: In today's digital world, personal data is being continuously collected and
analyzed without data owners' consent and choice. As data owners constantly
generate data on their personal devices, the tension of storing private data on
their own devices yet allowing third party analysts to perform aggregate
analytics yields an interesting dilemma.
This paper introduces PAS-MC, the first practical privacy-preserving and
anonymity stream analytics system. PAS-MC ensures that each data owner locally
privatizes their sensitive data before responding to analysts' queries. PAS-MC
also protects against traffic analysis attacks with minimal trust
vulnerabilities. We evaluate the scheme over the California Transportation
Dataset and show that we can privately and anonymously stream vehicular
location updates yet preserve high accuracy.
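
The abstract does not spell out the privatization mechanism, so as a generic illustration of owner-side privatization the snippet below applies k-ary randomized response to a coarse location bucket, which satisfies epsilon-local differential privacy before any data leave the device; it is not necessarily PAS-MC's exact scheme.

    # Generic owner-side privatization: k-ary randomized response over location
    # buckets (epsilon-local differential privacy). Illustrative, not PAS-MC's
    # exact mechanism.
    import math
    import random

    def randomized_response(true_bucket, k, epsilon):
        p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
        if random.random() < p_true:
            return true_bucket
        return random.choice([b for b in range(k) if b != true_bucket])

    # each vehicle perturbs its road-segment id locally before reporting
    print([randomized_response(true_bucket=7, k=100, epsilon=2.0) for _ in range(10)])
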
|
1604.04893
|
Fouad Khan
|
Fouad Khan
|
An Initial Seed Selection Algorithm for K-means Clustering of
Georeferenced Data to Improve Replicability of Cluster Assignments for
Mapping Application
|
Applied Soft Computing 12 (2012)
| null |
10.1016/j.asoc.2012.07.021
| null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
K-means is one of the most widely used clustering algorithms in various
disciplines, especially for large datasets. However, the method is known to be
highly sensitive to initial seed selection of cluster centers. K-means++ has
been proposed to overcome this problem and has been shown to have better
accuracy and computational efficiency than k-means. In many clustering problems
though -such as when classifying georeferenced data for mapping applications-
standardization of clustering methodology, specifically, the ability to arrive
at the same cluster assignment for every run of the method i.e. replicability
of the methodology, may be of greater significance than any perceived measure
of accuracy, especially when the solution is known to be non-unique, as in the
case of k-means clustering. Here we propose a simple initial seed selection
algorithm for k-means clustering along one attribute that draws initial cluster
boundaries along the 'deepest valleys', or greatest gaps, in the dataset. Thus, it
incorporates a measure to maximize distance between consecutive cluster centers
which augments the conventional k-means optimization for minimum distance
between cluster center and cluster members. Unlike existing initialization
methods, no additional parameters or degrees of freedom are introduced to the
clustering algorithm. This improves the replicability of cluster assignments by
as much as 100% over k-means and k-means++, virtually reducing the variance
over different runs to zero, without introducing any additional parameters to
the clustering process. Further, the proposed method is more computationally
efficient than k-means++ and in some cases, more accurate.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2016 16:25:15 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Khan",
"Fouad",
""
]
] |
TITLE: An Initial Seed Selection Algorithm for K-means Clustering of
Georeferenced Data to Improve Replicability of Cluster Assignments for
Mapping Application
ABSTRACT: K-means is one of the most widely used clustering algorithms in various
disciplines, especially for large datasets. However, the method is known to be
highly sensitive to initial seed selection of cluster centers. K-means++ has
been proposed to overcome this problem and has been shown to have better
accuracy and computational efficiency than k-means. In many clustering problems
though -such as when classifying georeferenced data for mapping applications-
standardization of clustering methodology, specifically, the ability to arrive
at the same cluster assignment for every run of the method i.e. replicability
of the methodology, may be of greater significance than any perceived measure
of accuracy, especially when the solution is known to be non-unique, as in the
case of k-means clustering. Here we propose a simple initial seed selection
algorithm for k-means clustering along one attribute that draws initial cluster
boundaries along the 'deepest valleys', or greatest gaps, in the dataset. Thus, it
incorporates a measure to maximize distance between consecutive cluster centers
which augments the conventional k-means optimization for minimum distance
between cluster center and cluster members. Unlike existing initialization
methods, no additional parameters or degrees of freedom are introduced to the
clustering algorithm. This improves the replicability of cluster assignments by
as much as 100% over k-means and k-means++, virtually reducing the variance
over different runs to zero, without introducing any additional parameters to
the clustering process. Further, the proposed method is more computationally
efficient than k-means++ and in some cases, more accurate.
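
A minimal sketch of the gap-based seeding for a single attribute: sort the values, cut at the k-1 largest gaps (the 'deepest valleys'), and hand the resulting segment means to k-means as a deterministic initialization. The use of segment means as the seed positions is an assumption for illustration.

    # Gap-based seeding along one attribute, used as a deterministic k-means init.
    import numpy as np
    from sklearn.cluster import KMeans

    def gap_seeds(values, k):
        x = np.sort(np.asarray(values, dtype=float))
        cut_after = np.sort(np.argsort(np.diff(x))[-(k - 1):])  # k-1 largest gaps
        segments = np.split(x, cut_after + 1)
        return np.array([s.mean() for s in segments]).reshape(-1, 1)

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(m, 0.3, 50) for m in (0, 5, 11)]).reshape(-1, 1)
    km = KMeans(n_clusters=3, init=gap_seeds(data.ravel(), 3), n_init=1).fit(data)
    print(sorted(km.cluster_centers_.ravel()))          # same result on every run
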
|
1604.04894
|
Yahia Lebbah
|
Mehdi Maamar, Nadjib Lazaar, Samir Loudni, Yahia Lebbah
|
A global constraint for closed itemset mining
| null | null | null | null |
cs.AI cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Discovering the set of closed frequent patterns is one of the fundamental
problems in Data Mining. Recent Constraint Programming (CP) approaches for
declarative itemset mining have proven their usefulness and flexibility. But
the wide use of reified constraints in current CP approaches raises many
difficulties when coping with high-dimensional datasets. This paper proposes the
CLOSED-PATTERN global constraint, which does not require any reified constraints
or any extra variables to efficiently encode the Closed Frequent Pattern Mining
(CFPM) constraint. CLOSED-PATTERN captures the particular semantics of the CFPM
problem in order to provide a polynomial pruning algorithm ensuring domain
consistency. The computational properties of our constraint are analyzed and
their practical effectiveness is experimentally evaluated.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2016 16:32:27 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Maamar",
"Mehdi",
""
],
[
"Lazaar",
"Nadjib",
""
],
[
"Loudni",
"Samir",
""
],
[
"Lebbah",
"Yahia",
""
]
] |
TITLE: A global constraint for closed itemset mining
ABSTRACT: Discovering the set of closed frequent patterns is one of the fundamental
problems in Data Mining. Recent Constraint Programming (CP) approaches for
declarative itemset mining have proven their usefulness and flexibility. But
the wide use of reified constraints in current CP approaches raises many
difficulties when coping with high-dimensional datasets. This paper proposes the
CLOSED-PATTERN global constraint, which does not require any reified constraints
or any extra variables to efficiently encode the Closed Frequent Pattern Mining
(CFPM) constraint. CLOSED-PATTERN captures the particular semantics of the CFPM
problem in order to provide a polynomial pruning algorithm ensuring domain
consistency. The computational properties of our constraint are analyzed and
their practical effectiveness is experimentally evaluated.
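
To make the target notion concrete, the brute-force miner below enumerates frequent itemsets over a toy transaction database and keeps only the closed ones, i.e. those with no proper superset of equal support; it states the problem the CLOSED-PATTERN constraint encodes, not the propagator itself.

    # Brute-force closed frequent itemset mining on a toy database (problem
    # definition only; not the CLOSED-PATTERN propagator).
    from itertools import combinations

    def closed_frequent_itemsets(transactions, min_support):
        items = sorted({i for t in transactions for i in t})
        support = {}
        for r in range(1, len(items) + 1):
            for cand in combinations(items, r):
                s = sum(1 for t in transactions if set(cand) <= t)
                if s >= min_support:
                    support[frozenset(cand)] = s
        return {p: s for p, s in support.items()
                if not any(p < q and s == sq for q, sq in support.items())}

    db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
    for itemset, s in closed_frequent_itemsets(db, min_support=2).items():
        print(sorted(itemset), s)
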
|
1604.04895
|
Fouad Khan
|
Fouad Khan, Laszlo Pinter
|
Scaling indicator and planning plane: an indicator and a visual tool for
exploring the relationship between urban form, energy efficiency and carbon
emissions
| null | null |
10.1016/j.ecolind.2016.02.046
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ecosystems and other naturally resilient systems exhibit allometric scaling
in the distribution of sizes of their elements. In this paper we define an
allometry inspired scaling indicator for cities that is a first step towards
quantifying the resilience borne of a complex system's hierarchical structural
composition. The scaling indicator is calculated using large census datasets
and is analogous to fractal dimension in spatial analysis. Lack of numerical
rigor and the resulting variation in scaling indicators -inherent in the use of
box counting mechanism for fractal dimension calculation for cities- has been
one of the hindrances in the adoption of fractal dimension as an urban
indicator of note. The intra-urban indicator of scaling in population density
distribution developed here is calculated for 58 US cities using a methodology
that produces replicable results, employing large census-block wise population
datasets from the 2010 US Census and the 2007 US Economic Census. We show
that rising disparity -as measured by the proposed indicator of population
density distribution in census blocks in metropolitan statistical areas (using
US Census 2010 data)- adversely affects energy consumption efficiency and carbon
emissions in cities and leads to a higher urban carbon footprint. We then
define a planning plane as a visual and analytic tool for incorporation of
scaling indicator analysis into policy and decision-making.
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2016 16:40:05 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Khan",
"Fouad",
""
],
[
"Pinter",
"Laszlo",
""
]
] |
TITLE: Scaling indicator and planning plane: an indicator and a visual tool for
exploring the relationship between urban form, energy efficiency and carbon
emissions
ABSTRACT: Ecosystems and other naturally resilient systems exhibit allometric scaling
in the distribution of sizes of their elements. In this paper we define an
allometry inspired scaling indicator for cities that is a first step towards
quantifying the resilience borne of a complex system's hierarchical structural
composition. The scaling indicator is calculated using large census datasets
and is analogous to fractal dimension in spatial analysis. Lack of numerical
rigor and the resulting variation in scaling indicators -inherent in the use of
box counting mechanism for fractal dimension calculation for cities- has been
one of the hindrances in the adoption of fractal dimension as an urban
indicator of note. The intra-urban indicator of scaling in population density
distribution developed here is calculated for 58 US cities using a methodology
that produces replicable results, employing large census-block wise population
datasets from the 2010 US Census and the 2007 US Economic Census. We show
that rising disparity -as measured by the proposed indicator of population
density distribution in census blocks in metropolitan statistical areas (using
US Census 2010 data)- adversely affects energy consumption efficiency and carbon
emissions in cities and leads to a higher urban carbon footprint. We then
define a planning plane as a visual and analytic tool for incorporation of
scaling indicator analysis into policy and decision-making.
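
The abstract does not give the indicator's exact formula, so as an analogous illustration of measuring allometric scaling in a size distribution, the snippet below estimates the slope of a log-log rank-size fit over (synthetic) census-block populations; treat it as a proxy in the same spirit, not the paper's definition.

    # Illustrative proxy for an allometric scaling indicator: slope of a log-log
    # rank-size fit over census-block populations (synthetic data here).
    import numpy as np

    def rank_size_slope(block_populations):
        sizes = np.sort(np.asarray(block_populations, dtype=float))[::-1]
        sizes = sizes[sizes > 0]
        ranks = np.arange(1, len(sizes) + 1)
        slope, _ = np.polyfit(np.log(ranks), np.log(sizes), 1)
        return slope                       # more negative => steeper disparity

    rng = np.random.default_rng(0)
    blocks = rng.pareto(1.5, 5000) * 100 + 1
    print(round(rank_size_slope(blocks), 3))
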
|
1604.04896
|
Daniele Rotolo
|
Nicola Grassano, Daniele Rotolo, Josh Hutton, Fr\'ed\'erique Lang,
Michael M. Hopkins
|
Funding Data from Publication Acknowledgements: Coverage, Uses and
Limitations
|
in press, Journal of the Association for Information Science and
Technology 2016
| null |
10.1002/jasist.23737
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article contributes to the development of methods for analysing research
funding systems by exploring the robustness and comparability of emerging
approaches to generate funding landscapes useful for policy making. We use a
novel dataset of manually extracted and coded data on the funding
acknowledgements of 7,510 publications representing UK cancer research in the
year 2011 and compare these 'reference data' with funding data provided by Web
of Science (WoS) and MEDLINE/PubMed. Findings show high recall (about 93%) of
WoS funding data. By contrast, MEDLINE/PubMed data retrieved less than half of
the UK cancer publications acknowledging at least one funder. Conversely, both
databases have high precision (+90%): i.e. few cases of publications with no
acknowledgement to funders are identified as having funding data. Nonetheless,
funders acknowledged in UK cancer publications were not correctly listed by
MEDLINE/PubMed and WoS in about 75% and 32% of the cases, respectively.
'Reference data' on the UK cancer research funding system are then used as a
case-study to demonstrate the utility of funding data for strategic
intelligence applications (e.g. mapping of funding landscape, comparison of
funders' research portfolios).
|
[
{
"version": "v1",
"created": "Sun, 17 Apr 2016 16:45:07 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Grassano",
"Nicola",
""
],
[
"Rotolo",
"Daniele",
""
],
[
"Hutton",
"Josh",
""
],
[
"Lang",
"Frédérique",
""
],
[
"Hopkins",
"Michael M.",
""
]
] |
TITLE: Funding Data from Publication Acknowledgements: Coverage, Uses and
Limitations
ABSTRACT: This article contributes to the development of methods for analysing research
funding systems by exploring the robustness and comparability of emerging
approaches to generate funding landscapes useful for policy making. We use a
novel dataset of manually extracted and coded data on the funding
acknowledgements of 7,510 publications representing UK cancer research in the
year 2011 and compare these 'reference data' with funding data provided by Web
of Science (WoS) and MEDLINE/PubMed. Findings show high recall (about 93%) of
WoS funding data. By contrast, MEDLINE/PubMed data retrieved less than half of
the UK cancer publications acknowledging at least one funder. Conversely, both
databases have high precision (+90%): i.e. few cases of publications with no
acknowledgement to funders are identified as having funding data. Nonetheless,
funders acknowledged in UK cancer publications were not correctly listed by
MEDLINE/PubMed and WoS in about 75% and 32% of the cases, respectively.
'Reference data' on the UK cancer research funding system are then used as a
case-study to demonstrate the utility of funding data for strategic
intelligence applications (e.g. mapping of funding landscape, comparison of
funders' research portfolios).
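
For clarity on how coverage is scored in this setting, a small sketch of the recall/precision computation against the hand-coded reference data; the publication identifiers are hypothetical.

    # Recall: share of reference-funded publications the database also flags.
    # Precision: share of database-flagged publications that truly carry an
    # acknowledgement. Publication ids are hypothetical.
    reference_funded = {"pub1", "pub2", "pub3", "pub4"}   # manual coding
    db_funded = {"pub2", "pub3", "pub4", "pub9"}          # WoS or MEDLINE/PubMed flag

    hits = reference_funded & db_funded
    print(f"recall={len(hits) / len(reference_funded):.2f} "
          f"precision={len(hits) / len(db_funded):.2f}")
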
|
1604.04960
|
Seungjin Choi
|
Suwon Suh and Seungjin Choi
|
Gaussian Copula Variational Autoencoders for Mixed Data
|
21 pages
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The variational autoencoder (VAE) is a generative model with continuous
latent variables where a pair of probabilistic encoder (bottom-up) and decoder
(top-down) is jointly learned by stochastic gradient variational Bayes. We
first elaborate Gaussian VAE, approximating the local covariance matrix of the
decoder as an outer product of the principal direction at a position determined
by a sample drawn from a Gaussian distribution. We show that this model, referred
to as VAE-ROC, better captures the data manifold, compared to the standard
Gaussian VAE where independent multivariate Gaussian was used to model the
decoder. Then we extend the VAE-ROC to handle mixed categorical and continuous
data. To this end, we employ Gaussian copula to model the local dependency in
mixed categorical and continuous data, leading to {\em Gaussian copula
variational autoencoder} (GCVAE). As in VAE-ROC, we use the rank-one
approximation for the covariance in the Gaussian copula, to capture the local
dependency structure in the mixed data. Experiments on various datasets
demonstrate the useful behaviour of VAE-ROC and GCVAE, compared to the standard
VAE.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2016 02:14:07 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Suh",
"Suwon",
""
],
[
"Choi",
"Seungjin",
""
]
] |
TITLE: Gaussian Copula Variational Autoencoders for Mixed Data
ABSTRACT: The variational autoencoder (VAE) is a generative model with continuous
latent variables where a pair of probabilistic encoder (bottom-up) and decoder
(top-down) is jointly learned by stochastic gradient variational Bayes. We
first elaborate Gaussian VAE, approximating the local covariance matrix of the
decoder as an outer product of the principal direction at a position determined
by a sample drawn from a Gaussian distribution. We show that this model, referred
to as VAE-ROC, better captures the data manifold, compared to the standard
Gaussian VAE where independent multivariate Gaussian was used to model the
decoder. Then we extend the VAE-ROC to handle mixed categorical and continuous
data. To this end, we employ Gaussian copula to model the local dependency in
mixed categorical and continuous data, leading to {\em Gaussian copula
variational autoencoder} (GCVAE). As in VAE-ROC, we use the rank-one
approximation for the covariance in the Gaussian copula, to capture the local
dependency structure in the mixed data. Experiments on various datasets
demonstrate the useful behaviour of VAE-ROC and GCVAE, compared to the standard
VAE.
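
A numeric sketch of the rank-one covariance idea behind VAE-ROC/GCVAE: if the decoder outputs a mean mu, a principal direction d and a noise scale sigma, then x = mu + sigma*eps + z*d with eps ~ N(0, I) and z ~ N(0, 1) has covariance sigma^2 I + d d^T. The encoder/decoder networks and training loop are omitted.

    # Verify empirically that mu + sigma*eps + z*d has covariance sigma^2*I + d d^T.
    import numpy as np

    rng = np.random.default_rng(0)
    D = 4
    mu = np.zeros(D)
    d = np.array([1.0, 0.5, 0.0, -0.5])       # principal direction at this position
    sigma = 0.1

    samples = mu + sigma * rng.standard_normal((100000, D)) \
                 + rng.standard_normal((100000, 1)) * d
    print(np.round(np.cov(samples, rowvar=False), 2))
    print(np.round(sigma ** 2 * np.eye(D) + np.outer(d, d), 2))
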
|
1604.05132
|
Christian Mostegel
|
Christian Mostegel, Markus Rumpler, Friedrich Fraundorfer and Horst
Bischof
|
Using Self-Contradiction to Learn Confidence Measures in Stereo Vision
|
This paper was accepted to the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2016. The copyright was transfered to IEEE
(https://www.ieee.org). The official version of the paper will be made
available on IEEE Xplore (R) (http://ieeexplore.ieee.org). This version of
the paper also contains the supplementary material, which will not appear
IEEE Xplore (R)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learned confidence measures gain increasing importance for outlier removal
and quality improvement in stereo vision. However, acquiring the necessary
training data is typically a tedious and time consuming task that involves
manual interaction, active sensing devices and/or synthetic scenes. To overcome
this problem, we propose a new, flexible, and scalable way for generating
training data that only requires a set of stereo images as input. The key idea
of our approach is to use different view points for reasoning about
contradictions and consistencies between multiple depth maps generated with the
same stereo algorithm. This enables us to generate a huge amount of training
data in a fully automated manner. Among other experiments, we demonstrate the
potential of our approach by boosting the performance of three learned
confidence measures on the KITTI2012 dataset by simply training them on a vast
amount of automatically generated training data rather than a limited amount of
laser ground truth data.
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2016 13:26:46 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Mostegel",
"Christian",
""
],
[
"Rumpler",
"Markus",
""
],
[
"Fraundorfer",
"Friedrich",
""
],
[
"Bischof",
"Horst",
""
]
] |
TITLE: Using Self-Contradiction to Learn Confidence Measures in Stereo Vision
ABSTRACT: Learned confidence measures gain increasing importance for outlier removal
and quality improvement in stereo vision. However, acquiring the necessary
training data is typically a tedious and time consuming task that involves
manual interaction, active sensing devices and/or synthetic scenes. To overcome
this problem, we propose a new, flexible, and scalable way for generating
training data that only requires a set of stereo images as input. The key idea
of our approach is to use different view points for reasoning about
contradictions and consistencies between multiple depth maps generated with the
same stereo algorithm. This enables us to generate a huge amount of training
data in a fully automated manner. Among other experiments, we demonstrate the
potential of our approach by boosting the performance of three learned
confidence measures on the KITTI2012 dataset by simply training them on a vast
amount of automatically generated training data rather than a limited amount of
laser ground truth data.
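
A simplified two-view version of the labeling idea: each left-image pixel's disparity is checked against the right image's disparity at the matched location and labeled consistent or contradictory, yielding training labels without ground truth. The paper reasons over many depth maps and viewpoints; this left-right check only conveys the principle.

    # Left-right consistency check as a stand-in for multi-view contradiction
    # reasoning: +1 = consistent, -1 = contradictory, 0 = no match available.
    import numpy as np

    def consistency_labels(disp_left, disp_right, thresh=1.0):
        h, w = disp_left.shape
        labels = np.zeros((h, w), dtype=np.int8)
        xs = np.arange(w)
        for y in range(h):
            xr = np.round(xs - disp_left[y]).astype(int)   # matched right column
            valid = (xr >= 0) & (xr < w)
            diff = np.abs(disp_left[y, valid] - disp_right[y, xr[valid]])
            labels[y, valid] = np.where(diff < thresh, 1, -1).astype(np.int8)
        return labels

    dl = np.full((4, 8), 2.0)
    dr = np.full((4, 8), 2.0)
    dr[:, :3] = 5.0                                        # injected contradiction
    print(consistency_labels(dl, dr))
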
|
1604.05144
|
Jifeng Dai
|
Di Lin, Jifeng Dai, Jiaya Jia, Kaiming He, Jian Sun
|
ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic
Segmentation
|
accepted by CVPR 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large-scale data is of crucial importance for learning semantic segmentation
models, but annotating per-pixel masks is a tedious and inefficient procedure.
We note that for the topic of interactive image segmentation, scribbles are
very widely used in academic research and commercial software, and are
recognized as one of the most user-friendly ways of interacting. In this paper,
we propose to use scribbles to annotate images, and develop an algorithm to
train convolutional networks for semantic segmentation supervised by scribbles.
Our algorithm is based on a graphical model that jointly propagates information
from scribbles to unmarked pixels and learns network parameters. We present
competitive object semantic segmentation results on the PASCAL VOC dataset by
using scribbles as annotations. Scribbles are also favored for annotating stuff
(e.g., water, sky, grass) that has no well-defined shape, and our method shows
excellent results on the PASCAL-CONTEXT dataset thanks to extra inexpensive
scribble annotations. Our scribble annotations on PASCAL VOC are available at
http://research.microsoft.com/en-us/um/people/jifdai/downloads/scribble_sup
|
[
{
"version": "v1",
"created": "Mon, 18 Apr 2016 13:46:23 GMT"
}
] | 2016-04-19T00:00:00 |
[
[
"Lin",
"Di",
""
],
[
"Dai",
"Jifeng",
""
],
[
"Jia",
"Jiaya",
""
],
[
"He",
"Kaiming",
""
],
[
"Sun",
"Jian",
""
]
] |
TITLE: ScribbleSup: Scribble-Supervised Convolutional Networks for Semantic
Segmentation
ABSTRACT: Large-scale data is of crucial importance for learning semantic segmentation
models, but annotating per-pixel masks is a tedious and inefficient procedure.
We note that for the topic of interactive image segmentation, scribbles are
very widely used in academic research and commercial software, and are
recognized as one of the most user-friendly ways of interacting. In this paper,
we propose to use scribbles to annotate images, and develop an algorithm to
train convolutional networks for semantic segmentation supervised by scribbles.
Our algorithm is based on a graphical model that jointly propagates information
from scribbles to unmarked pixels and learns network parameters. We present
competitive object semantic segmentation results on the PASCAL VOC dataset by
using scribbles as annotations. Scribbles are also favored for annotating stuff
(e.g., water, sky, grass) that has no well-defined shape, and our method shows
excellent results on the PASCAL-CONTEXT dataset thanks to extra inexpensive
scribble annotations. Our scribble annotations on PASCAL VOC are available at
http://research.microsoft.com/en-us/um/people/jifdai/downloads/scribble_sup
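
A much-simplified stand-in for the propagation step: every unmarked pixel takes the label of its nearest scribble pixel in a joint (position, color) feature space. ScribbleSup couples such propagation with CNN training inside a graphical model; that alternating optimization, and the network itself, are not shown, and the feature weighting below is an assumption.

    # Simplified scribble propagation: nearest-scribble labels in (position, color)
    # feature space. scribble_mask uses -1 for unmarked pixels, class id otherwise.
    import numpy as np
    from scipy.spatial import cKDTree

    def propagate_scribbles(image, scribble_mask, color_weight=0.5):
        h, w, _ = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pos = np.stack([xs / w, ys / h], axis=-1)
        feat = np.concatenate([pos, color_weight * image / 255.0],
                              axis=-1).reshape(-1, 5)
        flat = scribble_mask.reshape(-1)
        marked = flat >= 0
        _, nn = cKDTree(feat[marked]).query(feat[~marked])
        labels = flat.copy()
        labels[~marked] = flat[marked][nn]
        return labels.reshape(h, w)

    img = (np.random.rand(40, 60, 3) * 255).astype(np.float32)
    scr = -np.ones((40, 60), dtype=np.int64)
    scr[5, 5:20] = 0                                   # "background" scribble
    scr[30, 10:50] = 1                                 # "object" scribble
    print(np.bincount(propagate_scribbles(img, scr).ravel()))
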
|
1506.01333
|
Praveen Rao
|
Vasil Slavov, Anas Katib, Praveen Rao, Srivenu Paturi, Dinesh
Barenkala
|
Fast Processing of SPARQL Queries on RDF Quadruples
|
This paper was published in the 17th International Workshop on the
Web and Databases (WebDB 2014), Snowbird, UT
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we propose a new approach for fast processing of SPARQL
queries on large RDF datasets containing RDF quadruples (or quads). Our
approach called RIQ employs a decrease-and-conquer strategy: Rather than
indexing the entire RDF dataset, RIQ identifies groups of similar RDF graphs
and indexes each group separately. During query processing, RIQ uses a novel
filtering index to first identify candidate groups that may contain matches for
the query. On these candidates, it executes optimized queries using a
conventional SPARQL processor to produce the final results. Our initial
performance evaluation results are promising: Using a synthetic and a real
dataset, each containing about 1.4 billion quads, we show that RIQ outperforms
RDF-3X and Jena TDB on a variety of SPARQL queries.
|
[
{
"version": "v1",
"created": "Wed, 3 Jun 2015 17:50:35 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2016 22:40:43 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Slavov",
"Vasil",
""
],
[
"Katib",
"Anas",
""
],
[
"Rao",
"Praveen",
""
],
[
"Paturi",
"Srivenu",
""
],
[
"Barenkala",
"Dinesh",
""
]
] |
TITLE: Fast Processing of SPARQL Queries on RDF Quadruples
ABSTRACT: In this paper, we propose a new approach for fast processing of SPARQL
queries on large RDF datasets containing RDF quadruples (or quads). Our
approach called RIQ employs a decrease-and-conquer strategy: Rather than
indexing the entire RDF dataset, RIQ identifies groups of similar RDF graphs
and indexes each group separately. During query processing, RIQ uses a novel
filtering index to first identify candidate groups that may contain matches for
the query. On these candidates, it executes optimized queries using a
conventional SPARQL processor to produce the final results. Our initial
performance evaluation results are promising: Using a synthetic and a real
dataset, each containing about 1.4 billion quads, we show that RIQ outperforms
RDF-3X and Jena TDB on a variety of SPARQL queries.
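As an illustration of the decrease-and-conquer filtering described above, the following sketch (not the RIQ implementation; the grouping map and the predicate-only index are simplifying assumptions) keeps one coarse index per group of named graphs and prunes groups that cannot possibly match a query's constant predicates.

```python
# Group-level filtering index for RDF quads (illustrative over-approximation).
from collections import defaultdict

def build_group_index(quads, group_of_graph):
    """quads: iterable of (s, p, o, g); group_of_graph: graph name -> group id."""
    index = defaultdict(set)                  # group id -> set of predicates seen
    for _s, p, _o, g in quads:
        index[group_of_graph[g]].add(p)
    return index

def candidate_groups(index, query_patterns):
    """Keep only groups whose index covers every constant predicate in the query.
    Candidates may still contain no real match; a SPARQL processor decides."""
    needed = {p for (_s, p, _o) in query_patterns if not p.startswith("?")}
    return [gid for gid, preds in index.items() if needed <= preds]
```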
|
1602.08409
|
Miguel Guevara
|
Miguel R. Guevara, Dominik Hartmann, Manuel Aristar\'an, Marcelo
Mendoza, C\'esar A. Hidalgo
|
The Research Space: using the career paths of scholars to predict the
evolution of the research output of individuals, institutions, and nations
| null | null | null | null |
cs.DL cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years scholars have built maps of science by connecting the
academic fields that cite each other, are cited together, or that cite a
similar literature. But since scholars cannot always publish in the fields they
cite, or that cite them, these science maps are only rough proxies for the
potential of a scholar, organization, or country, to enter a new academic
field. Here we use a large dataset of scholarly publications disambiguated at
the individual level to create a map of science-or research space-where links
connect pairs of fields based on the probability that an individual has
published in both of them. We find that the research space is a significantly
more accurate predictor of the fields that individuals and organizations will
enter in the future than citation based science maps. At the country level,
however, the research space and citations based science maps are equally
accurate. These findings show that data on career trajectories-the set of
fields that individuals have previously published in-provide more accurate
predictors of future research output for more focalized units-such as
individuals or organizations-than citation based science maps.
|
[
{
"version": "v1",
"created": "Fri, 26 Feb 2016 17:31:04 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Feb 2016 16:26:26 GMT"
},
{
"version": "v3",
"created": "Thu, 14 Apr 2016 20:21:12 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Guevara",
"Miguel R.",
""
],
[
"Hartmann",
"Dominik",
""
],
[
"Aristarán",
"Manuel",
""
],
[
"Mendoza",
"Marcelo",
""
],
[
"Hidalgo",
"César A.",
""
]
] |
TITLE: The Research Space: using the career paths of scholars to predict the
evolution of the research output of individuals, institutions, and nations
ABSTRACT: In recent years scholars have built maps of science by connecting the
academic fields that cite each other, are cited together, or that cite a
similar literature. But since scholars cannot always publish in the fields they
cite, or that cite them, these science maps are only rough proxies for the
potential of a scholar, organization, or country, to enter a new academic
field. Here we use a large dataset of scholarly publications disambiguated at
the individual level to create a map of science-or research space-where links
connect pairs of fields based on the probability that an individual has
published in both of them. We find that the research space is a significantly
more accurate predictor of the fields that individuals and organizations will
enter in the future than citation based science maps. At the country level,
however, the research space and citations based science maps are equally
accurate. These findings show that data on career trajectories-the set of
fields that individuals have previously published in-provide more accurate
predictors of future research output for more focalized units-such as
individuals or organizations-than citation based science maps.
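The map itself can be assembled from author-level records along the lines sketched below; the min-of-conditional-probabilities proximity is an assumption chosen for illustration and may differ from the measure used in the paper.

```python
# Build a research-space proximity matrix from per-author publication fields.
import itertools
from collections import Counter

def research_space(author_fields):
    """author_fields: dict mapping author id -> set of fields they published in."""
    field_count = Counter()
    pair_count = Counter()
    for fields in author_fields.values():
        field_count.update(fields)
        for a, b in itertools.combinations(sorted(fields), 2):
            pair_count[(a, b)] += 1
    proximity = {}
    for (a, b), n_ab in pair_count.items():
        # Conservative proximity: min of the two conditional co-publication probabilities.
        proximity[(a, b)] = min(n_ab / field_count[a], n_ab / field_count[b])
    return proximity
```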
|
1604.03169
|
Marcel Salathe
|
Sharada Prasanna Mohanty, David Hughes, Marcel Salathe
|
Using Deep Learning for Image-Based Plant Disease Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crop diseases are a major threat to food security, but their rapid
identification remains difficult in many parts of the world due to the lack of
the necessary infrastructure. The combination of increasing global smartphone
penetration and recent advances in computer vision made possible by deep
learning has paved the way for smartphone-assisted disease diagnosis. Using a
public dataset of 54,306 images of diseased and healthy plant leaves collected
under controlled conditions, we train a deep convolutional neural network to
identify 14 crop species and 26 diseases (or absence thereof). The trained
model achieves an accuracy of 99.35% on a held-out test set, demonstrating the
feasibility of this approach. When testing the model on a set of images
collected from trusted online sources - i.e. taken under conditions different
from the images used for training - the model still achieves an accuracy of
31.4%. While this accuracy is much higher than the one based on random
selection (2.6%), a more diverse set of training data is needed to improve the
general accuracy. Overall, the approach of training deep learning models on
increasingly large and publicly available image datasets presents a clear path
towards smartphone-assisted crop disease diagnosis on a massive global scale.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2016 22:44:20 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Apr 2016 14:05:34 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Mohanty",
"Sharada Prasanna",
""
],
[
"Hughes",
"David",
""
],
[
"Salathe",
"Marcel",
""
]
] |
TITLE: Using Deep Learning for Image-Based Plant Disease Detection
ABSTRACT: Crop diseases are a major threat to food security, but their rapid
identification remains difficult in many parts of the world due to the lack of
the necessary infrastructure. The combination of increasing global smartphone
penetration and recent advances in computer vision made possible by deep
learning has paved the way for smartphone-assisted disease diagnosis. Using a
public dataset of 54,306 images of diseased and healthy plant leaves collected
under controlled conditions, we train a deep convolutional neural network to
identify 14 crop species and 26 diseases (or absence thereof). The trained
model achieves an accuracy of 99.35% on a held-out test set, demonstrating the
feasibility of this approach. When testing the model on a set of images
collected from trusted online sources - i.e. taken under conditions different
from the images used for training - the model still achieves an accuracy of
31.4%. While this accuracy is much higher than the one based on random
selection (2.6%), a more diverse set of training data is needed to improve the
general accuracy. Overall, the approach of training deep learning models on
increasingly large and publicly available image datasets presents a clear path
towards smartphone-assisted crop disease diagnosis on a massive global scale.
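A minimal transfer-learning sketch in the spirit of this approach (not the authors' exact training setup): fine-tune an ImageNet-pretrained AlexNet for the 38 crop/disease classes. The dataset path, hyperparameters, single-pass loop, and omitted ImageNet normalization are assumptions.

```python
# Fine-tune a pretrained CNN on leaf images organised in an ImageFolder layout.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
train_set = datasets.ImageFolder("plantvillage/train", transform=tfm)  # hypothetical path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.alexnet(pretrained=True)
model.classifier[6] = nn.Linear(4096, 38)        # replace the 1000-way head with 38 classes
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:                    # one pass shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```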
|
1604.04326
|
Stephan Zheng
|
Stephan Zheng, Yang Song, Thomas Leung, Ian Goodfellow
|
Improving the Robustness of Deep Neural Networks via Stability Training
|
Published in CVPR 2016
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we address the issue of output instability of deep neural
networks: small perturbations in the visual input can significantly distort the
feature embeddings and output of a neural network. Such instability affects
many deep architectures with state-of-the-art performance on a wide range of
computer vision tasks. We present a general stability training method to
stabilize deep networks against small input distortions that result from
various types of common image processing, such as compression, rescaling, and
cropping. We validate our method by stabilizing the state-of-the-art Inception
architecture against these types of distortions. In addition, we demonstrate
that our stabilized model gives robust state-of-the-art performance on
large-scale near-duplicate detection, similar-image ranking, and classification
on noisy datasets.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2016 01:15:18 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Zheng",
"Stephan",
""
],
[
"Song",
"Yang",
""
],
[
"Leung",
"Thomas",
""
],
[
"Goodfellow",
"Ian",
""
]
] |
TITLE: Improving the Robustness of Deep Neural Networks via Stability Training
ABSTRACT: In this paper we address the issue of output instability of deep neural
networks: small perturbations in the visual input can significantly distort the
feature embeddings and output of a neural network. Such instability affects
many deep architectures with state-of-the-art performance on a wide range of
computer vision tasks. We present a general stability training method to
stabilize deep networks against small input distortions that result from
various types of common image processing, such as compression, rescaling, and
cropping. We validate our method by stabilizing the state-of-the-art Inception
architecture against these types of distortions. In addition, we demonstrate
that our stabilized model gives robust state-of-the-art performance on
large-scale near-duplicate detection, similar-image ranking, and classification
on noisy datasets.
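The core of the method can be summarised as an auxiliary loss that penalises movement of the network output under small perturbations of the input. The sketch below uses additive Gaussian noise as the perturbation and treats `model`, `task_loss_fn`, and the weight `alpha` as placeholders; it is an illustration, not the paper's implementation.

```python
# Stability training: keep outputs close for a clean and a perturbed copy of the input.
import torch
import torch.nn.functional as F

def stability_loss(model, x, y, task_loss_fn, alpha=0.01, noise_std=0.04):
    x_perturbed = x + noise_std * torch.randn_like(x)   # small input distortion
    out_clean = model(x)
    out_perturbed = model(x_perturbed)
    task_loss = task_loss_fn(out_clean, y)              # original training objective
    stability = F.mse_loss(out_clean, out_perturbed)    # outputs should not move
    return task_loss + alpha * stability
```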
|
1604.04339
|
Chunhua Shen
|
Zifeng Wu, Chunhua Shen, Anton van den Hengel
|
High-performance Semantic Segmentation Using Very Deep Fully
Convolutional Networks
|
11 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for high-performance semantic image segmentation (or
semantic pixel labelling) based on very deep residual networks, which achieves
the state-of-the-art performance. A few design factors are carefully considered
to this end.
We make the following contributions. (i) First, we evaluate different
variations of a fully convolutional residual network so as to find the best
configuration, including the number of layers, the resolution of feature maps,
and the size of field-of-view. Our experiments show that further enlarging the
field-of-view and increasing the resolution of feature maps are typically
beneficial, which however inevitably leads to a higher demand for GPU memories.
To work around this limitation, we propose a new method to simulate a high
resolution network with a low resolution network, which can be applied during
training and/or testing. (ii) Second, we propose an online bootstrapping method
for training. We demonstrate that online bootstrapping is critically important
for achieving good accuracy. (iii) Third, we apply the traditional dropout to
some of the residual blocks, which further improves the performance. (iv)
Finally, our method achieves the currently best mean intersection-over-union
78.3\% on the PASCAL VOC 2012 dataset, as well as on the recent dataset
Cityscapes.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2016 02:52:46 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Wu",
"Zifeng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] |
TITLE: High-performance Semantic Segmentation Using Very Deep Fully
Convolutional Networks
ABSTRACT: We propose a method for high-performance semantic image segmentation (or
semantic pixel labelling) based on very deep residual networks, which achieves
the state-of-the-art performance. A few design factors are carefully considered
to this end.
We make the following contributions. (i) First, we evaluate different
variations of a fully convolutional residual network so as to find the best
configuration, including the number of layers, the resolution of feature maps,
and the size of field-of-view. Our experiments show that further enlarging the
field-of-view and increasing the resolution of feature maps are typically
beneficial, which however inevitably leads to a higher demand for GPU memories.
To work around this limitation, we propose a new method to simulate a high
resolution network with a low resolution network, which can be applied during
training and/or testing. (ii) Second, we propose an online bootstrapping method
for training. We demonstrate that online bootstrapping is critically important
for achieving good accuracy. (iii) Third, we apply the traditional dropout to
some of the residual blocks, which further improves the performance. (iv)
Finally, our method achieves the currently best mean intersection-over-union
78.3\% on the PASCAL VOC 2012 dataset, as well as on the recent dataset
Cityscapes.
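Online bootstrapping amounts to hard-pixel mining inside the loss. A minimal sketch, assuming per-image top-K selection with an arbitrary K (which must not exceed the number of pixels) rather than the paper's exact criterion:

```python
# Bootstrapped cross-entropy: average only the K highest-loss pixels per image.
import torch.nn.functional as F

def bootstrapped_ce(logits, target, k=4096, ignore_index=255):
    """logits: (B, C, H, W); target: (B, H, W) with class ids."""
    per_pixel = F.cross_entropy(logits, target, reduction="none",
                                ignore_index=ignore_index)      # (B, H, W)
    per_pixel = per_pixel.view(per_pixel.size(0), -1)
    topk, _ = per_pixel.topk(k, dim=1)                          # hardest pixels only
    return topk.mean()
```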
|
1604.04377
|
Guangrun Wang
|
Guangrun Wang, Liang Lin, Shengyong Ding, Ya Li and Qing Wang
|
DARI: Distance metric And Representation Integration for Person
Verification
|
To appear in Proceedings of AAAI Conference on Artificial
Intelligence (AAAI), 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The past decade has witnessed the rapid development of feature representation
learning and distance metric learning, whereas the two steps are often
discussed separately. To explore their interaction, this work proposes an
end-to-end learning framework called DARI, i.e. Distance metric And
Representation Integration, and validates the effectiveness of DARI in the
challenging task of person verification. Given the training images annotated
with the labels, we first produce a large number of triplet units, and each one
contains three images, i.e. one person and the matched/mismatch references. For
each triplet unit, the distance disparity between the matched pair and the
mismatched pair tends to be maximized. We solve this objective by building a
deep architecture of convolutional neural networks. In particular, the
Mahalanobis distance matrix is naturally factorized as one top fully-connected
layer that is seamlessly integrated with other bottom layers representing the
image feature. The image feature and the distance metric can be thus
simultaneously optimized via the one-shot backward propagation. On several
public datasets, DARI shows very promising performance in re-identifying
individuals across cameras under various challenging conditions, and outperforms
other state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2016 07:21:26 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Wang",
"Guangrun",
""
],
[
"Lin",
"Liang",
""
],
[
"Ding",
"Shengyong",
""
],
[
"Li",
"Ya",
""
],
[
"Wang",
"Qing",
""
]
] |
TITLE: DARI: Distance metric And Representation Integration for Person
Verification
ABSTRACT: The past decade has witnessed the rapid development of feature representation
learning and distance metric learning, whereas the two steps are often
discussed separately. To explore their interaction, this work proposes an
end-to-end learning framework called DARI, i.e. Distance metric And
Representation Integration, and validates the effectiveness of DARI in the
challenging task of person verification. Given the training images annotated
with the labels, we first produce a large number of triplet units, and each one
contains three images, i.e. one person and the matched/mismatch references. For
each triplet unit, the distance disparity between the matched pair and the
mismatched pair tends to be maximized. We solve this objective by building a
deep architecture of convolutional neural networks. In particular, the
Mahalanobis distance matrix is naturally factorized as one top fully-connected
layer that is seamlessly integrated with other bottom layers representing the
image feature. The image feature and the distance metric can be thus
simultaneously optimized via the one-shot backward propagation. On several
public datasets, DARI shows very promising performance in re-identifying
individuals across cameras under various challenging conditions, and outperforms
other state-of-the-art approaches.
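The factorisation of the Mahalanobis metric into a top linear layer can be written down compactly; the sketch below (feature dimensions and margin are assumptions, not the authors' values) shows the resulting triplet objective.

```python
# Triplet loss with a Mahalanobis metric M = W^T W realised as a linear layer.
import torch.nn as nn
import torch.nn.functional as F

class DariHead(nn.Module):
    def __init__(self, feat_dim=256, metric_dim=128):
        super().__init__()
        self.W = nn.Linear(feat_dim, metric_dim, bias=False)  # factorised metric

    def forward(self, f):                                     # f: CNN image feature
        return self.W(f)

def triplet_metric_loss(head, f_anchor, f_match, f_mismatch, margin=1.0):
    a, p, n = head(f_anchor), head(f_match), head(f_mismatch)
    d_pos = (a - p).pow(2).sum(dim=1)      # squared Mahalanobis distance via W
    d_neg = (a - n).pow(2).sum(dim=1)
    return F.relu(margin + d_pos - d_neg).mean()   # widen the matched/mismatched gap
```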
|
1604.04473
|
Xiaopeng Hong
|
Xiaopeng Hong, Xianbiao Qi, Guoying Zhao, Matti Pietik\"ainen
|
Probing the Intra-Component Correlations within Fisher Vector for
Material Classification
|
This manuscript was submitted to Neurocomputing at the end of April 2015 (!).
One year has passed but we have received no review comments yet!
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fisher vector (FV) has become a popular image representation. One notable
underlying assumption of the FV framework is that local descriptors are well
decorrelated within each cluster so that the covariance matrix for each
Gaussian can be simplified to be diagonal. Though the FV usually relies on the
Principal Component Analysis (PCA) to decorrelate local features, the PCA is
applied to the entire training data and hence it only diagonalizes the
\textit{universal} covariance matrix, rather than those w.r.t. the local
components. As a result, the local decorrelation assumption is usually not
supported in practice.
To relax this assumption, this paper proposes a completed model of the Fisher
vector, which is termed as the Completed Fisher vector (CFV). The CFV is a more
general framework of the FV, since it encodes not only the variances but also
the correlations of the whitened local descriptors. The CFV thus leads to
improved discriminative power. We take the task of material categorization as
an example and experimentally show that: 1) the CFV outperforms the FV under
all parameter settings; 2) the CFV is robust to the changes in the number of
components in the mixture; 3) even with a relatively small visual vocabulary
the CFV still works well on two challenging datasets.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2016 12:55:00 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Hong",
"Xiaopeng",
""
],
[
"Qi",
"Xianbiao",
""
],
[
"Zhao",
"Guoying",
""
],
[
"Pietikäinen",
"Matti",
""
]
] |
TITLE: Probing the Intra-Component Correlations within Fisher Vector for
Material Classification
ABSTRACT: Fisher vector (FV) has become a popular image representation. One notable
underlying assumption of the FV framework is that local descriptors are well
decorrelated within each cluster so that the covariance matrix for each
Gaussian can be simplified to be diagonal. Though the FV usually relies on the
Principal Component Analysis (PCA) to decorrelate local features, the PCA is
applied to the entire training data and hence it only diagonalizes the
\textit{universal} covariance matrix, rather than those w.r.t. the local
components. As a result, the local decorrelation assumption is usually not
supported in practice.
To relax this assumption, this paper proposes a completed model of the Fisher
vector, which is termed as the Completed Fisher vector (CFV). The CFV is a more
general framework of the FV, since it encodes not only the variances but also
the correlations of the whitened local descriptors. The CFV thus leads to
improved discriminative power. We take the task of material categorization as
an example and experimentally show that: 1) the CFV outperforms the FV under
all parameter settings; 2) the CFV is robust to the changes in the number of
components in the mixture; 3) even with a relatively small visual vocabulary
the CFV still works well on two challenging datasets.
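To make the construction concrete, a compact numerical sketch follows: on top of the usual first- and second-order FV statistics, it also accumulates the off-diagonal correlations of the whitened descriptors per Gaussian. The usual 1/sqrt(w_k) normalisation and power/L2 post-processing are omitted, and the code is an illustration rather than the authors' implementation.

```python
# Completed-Fisher-vector style encoding with per-component correlation terms.
import numpy as np
from sklearn.mixture import GaussianMixture

def completed_fisher_vector(X, gmm: GaussianMixture):
    """X: (N, D) local descriptors; gmm fitted with covariance_type='diag'."""
    gamma = gmm.predict_proba(X)                       # (N, K) soft assignments
    iu = np.triu_indices(X.shape[1], k=1)              # off-diagonal index pairs
    parts = []
    for k in range(gmm.n_components):
        u = (X - gmm.means_[k]) / np.sqrt(gmm.covariances_[k])   # whitened descriptors
        g = gamma[:, k:k + 1]
        phi_mu = (g * u).mean(axis=0)                            # first-order statistics
        phi_var = (g * (u ** 2 - 1)).mean(axis=0)                # second-order, diagonal
        corr = (u[:, :, None] * u[:, None, :] * g[:, :, None]).mean(axis=0)
        phi_corr = corr[iu]                                      # correlation statistics
        parts += [phi_mu, phi_var, phi_corr]
    return np.concatenate(parts)
```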
|
1604.04573
|
Jiang Wang Mr.
|
Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, Wei Xu
|
CNN-RNN: A Unified Framework for Multi-label Image Classification
|
CVPR 2016
| null | null | null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While deep convolutional neural networks (CNNs) have shown a great success in
single-label image classification, it is important to note that real world
images generally contain multiple labels, which could correspond to different
objects, scenes, actions and attributes in an image. Traditional approaches to
multi-label image classification learn independent classifiers for each
category and employ ranking or thresholding on the classification results.
These techniques, although working well, fail to explicitly exploit the label
dependencies in an image. In this paper, we utilize recurrent neural networks
(RNNs) to address this problem. Combined with CNNs, the proposed CNN-RNN
framework learns a joint image-label embedding to characterize the semantic
label dependency as well as the image-label relevance, and it can be trained
end-to-end from scratch to integrate both information in a unified framework.
Experimental results on public benchmark datasets demonstrate that the proposed
architecture achieves better performance than the state-of-the-art multi-label
classification models.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2016 17:10:54 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Wang",
"Jiang",
""
],
[
"Yang",
"Yi",
""
],
[
"Mao",
"Junhua",
""
],
[
"Huang",
"Zhiheng",
""
],
[
"Huang",
"Chang",
""
],
[
"Xu",
"Wei",
""
]
] |
TITLE: CNN-RNN: A Unified Framework for Multi-label Image Classification
ABSTRACT: While deep convolutional neural networks (CNNs) have shown a great success in
single-label image classification, it is important to note that real world
images generally contain multiple labels, which could correspond to different
objects, scenes, actions and attributes in an image. Traditional approaches to
multi-label image classification learn independent classifiers for each
category and employ ranking or thresholding on the classification results.
These techniques, although working well, fail to explicitly exploit the label
dependencies in an image. In this paper, we utilize recurrent neural networks
(RNNs) to address this problem. Combined with CNNs, the proposed CNN-RNN
framework learns a joint image-label embedding to characterize the semantic
label dependency as well as the image-label relevance, and it can be trained
end-to-end from scratch to integrate both information in a unified framework.
Experimental results on public benchmark datasets demonstrate that the proposed
architecture achieves better performance than the state-of-the-art multi-label
classification models.
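One way to realise such a joint image-label embedding with an RNN is sketched below; the dimensions, the START token, and the additive fusion of image and hidden state are assumptions rather than the paper's exact architecture.

```python
# CNN-RNN style head: an LSTM over previously predicted labels scores the next label.
import torch
import torch.nn as nn

class CnnRnnHead(nn.Module):
    def __init__(self, n_labels, img_dim=2048, emb_dim=256):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels + 1, emb_dim)   # +1 for a START token
        self.rnn = nn.LSTMCell(emb_dim, emb_dim)
        self.img_proj = nn.Linear(img_dim, emb_dim)
        self.out = nn.Linear(emb_dim, n_labels)

    def forward(self, img_feat, label_seq):
        """img_feat: (B, img_dim); label_seq: (B, T) indices of previous labels."""
        h = c = img_feat.new_zeros(img_feat.size(0), self.out.in_features)
        joint_img = self.img_proj(img_feat)
        scores = []
        for t in range(label_seq.size(1)):
            h, c = self.rnn(self.label_emb(label_seq[:, t]), (h, c))
            scores.append(self.out(torch.tanh(h + joint_img)))  # joint image-label embedding
        return torch.stack(scores, dim=1)                       # (B, T, n_labels)
```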
|
1604.04574
|
Jonghyun Choi
|
Mahmudul Hasan, Jonghyun Choi, Jan Neumann, Amit K. Roy-Chowdhury,
Larry S. Davis
|
Learning Temporal Regularity in Video Sequences
|
CVPR 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perceiving meaningful activities in a long video sequence is a challenging
problem due to the ambiguous definition of 'meaningfulness' as well as clutter in
the scene. We approach this problem by learning a generative model for regular
motion patterns, termed as regularity, using multiple sources with very limited
supervision. Specifically, we propose two methods that are built upon the
autoencoders for their ability to work with little to no supervision. We first
leverage the conventional handcrafted spatio-temporal local features and learn
a fully connected autoencoder on them. Second, we build a fully convolutional
feed-forward autoencoder to learn both the local features and the classifiers
as an end-to-end learning framework. Our model can capture the regularities
from multiple datasets. We evaluate our methods in both qualitative and
quantitative ways - showing the learned regularity of videos in various aspects
and demonstrating competitive performance on anomaly detection datasets as an
application.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2016 17:20:01 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Hasan",
"Mahmudul",
""
],
[
"Choi",
"Jonghyun",
""
],
[
"Neumann",
"Jan",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
],
[
"Davis",
"Larry S.",
""
]
] |
TITLE: Learning Temporal Regularity in Video Sequences
ABSTRACT: Perceiving meaningful activities in a long video sequence is a challenging
problem due to the ambiguous definition of 'meaningfulness' as well as clutter in
the scene. We approach this problem by learning a generative model for regular
motion patterns, termed as regularity, using multiple sources with very limited
supervision. Specifically, we propose two methods that are built upon the
autoencoders for their ability to work with little to no supervision. We first
leverage the conventional handcrafted spatio-temporal local features and learn
a fully connected autoencoder on them. Second, we build a fully convolutional
feed-forward autoencoder to learn both the local features and the classifiers
as an end-to-end learning framework. Our model can capture the regularities
from multiple datasets. We evaluate our methods in both qualitative and
quantitative ways - showing the learned regularity of videos in various aspects
and demonstrating competitive performance on anomaly detection datasets as an
application.
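A minimal sketch of the fully convolutional route follows: a convolutional autoencoder reconstructs a short stack of frames, and the reconstruction error is turned into a regularity score. Layer sizes and the 10-frame window are assumptions.

```python
# Convolutional autoencoder over a temporal stack of frames; high reconstruction
# error signals irregular (potentially anomalous) motion.
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, t=10):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(t, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, t, 4, stride=2, padding=1))

    def forward(self, clip):                  # clip: (B, t, H, W) grayscale frames
        return self.dec(self.enc(clip))

def regularity_score(model, clip):
    recon = model(clip)
    err = (recon - clip).pow(2).mean(dim=(1, 2, 3))   # per-clip reconstruction error
    return 1.0 - (err - err.min()) / (err.max() - err.min() + 1e-8)
```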
|
1604.04618
|
Thomas Steinke
|
Mark Bun, Thomas Steinke, Jonathan Ullman
|
Make Up Your Mind: The Price of Online Queries in Differential Privacy
| null | null | null | null |
cs.CR cs.DS cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of answering queries about a sensitive dataset
subject to differential privacy. The queries may be chosen adversarially from a
larger set Q of allowable queries in one of three ways, which we list in order
from easiest to hardest to answer:
Offline: The queries are chosen all at once and the differentially private
mechanism answers the queries in a single batch.
Online: The queries are chosen all at once, but the mechanism only receives
the queries in a streaming fashion and must answer each query before seeing the
next query.
Adaptive: The queries are chosen one at a time and the mechanism must answer
each query before the next query is chosen. In particular, each query may
depend on the answers given to previous queries.
Many differentially private mechanisms are just as efficient in the adaptive
model as they are in the offline model. Meanwhile, most lower bounds for
differential privacy hold in the offline setting. This suggests that the three
models may be equivalent.
We prove that these models are all, in fact, distinct. Specifically, we show
that there is a family of statistical queries such that exponentially more
queries from this family can be answered in the offline model than in the
online model. We also exhibit a family of search queries such that
exponentially more queries from this family can be answered in the online model
than in the adaptive model. We also investigate whether such separations might
hold for simple queries like threshold queries over the real line.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2016 19:55:26 GMT"
}
] | 2016-04-18T00:00:00 |
[
[
"Bun",
"Mark",
""
],
[
"Steinke",
"Thomas",
""
],
[
"Ullman",
"Jonathan",
""
]
] |
TITLE: Make Up Your Mind: The Price of Online Queries in Differential Privacy
ABSTRACT: We consider the problem of answering queries about a sensitive dataset
subject to differential privacy. The queries may be chosen adversarially from a
larger set Q of allowable queries in one of three ways, which we list in order
from easiest to hardest to answer:
Offline: The queries are chosen all at once and the differentially private
mechanism answers the queries in a single batch.
Online: The queries are chosen all at once, but the mechanism only receives
the queries in a streaming fashion and must answer each query before seeing the
next query.
Adaptive: The queries are chosen one at a time and the mechanism must answer
each query before the next query is chosen. In particular, each query may
depend on the answers given to previous queries.
Many differentially private mechanisms are just as efficient in the adaptive
model as they are in the offline model. Meanwhile, most lower bounds for
differential privacy hold in the offline setting. This suggests that the three
models may be equivalent.
We prove that these models are all, in fact, distinct. Specifically, we show
that there is a family of statistical queries such that exponentially more
queries from this family can be answered in the offline model than in the
online model. We also exhibit a family of search queries such that
exponentially more queries from this family can be answered in the online model
than in the adaptive model. We also investigate whether such separations might
hold for simple queries like threshold queries over the real line.
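For orientation, the adaptive setting can be pictured with the basic Laplace mechanism under naive budget splitting (this is only an illustration of the interaction model, not one of the separating constructions from the paper):

```python
# Online/adaptive counting queries answered with per-query Laplace noise.
import numpy as np

class OnlineLaplaceMechanism:
    def __init__(self, dataset, epsilon_total, k_queries):
        self.data = dataset                               # list/array of records
        self.eps_per_query = epsilon_total / k_queries    # simple composition
        self.remaining = k_queries

    def answer(self, predicate):
        """predicate: record -> bool; counting queries have sensitivity 1."""
        if self.remaining == 0:
            raise RuntimeError("privacy budget exhausted")
        self.remaining -= 1
        true_count = sum(predicate(r) for r in self.data)
        return true_count + np.random.laplace(scale=1.0 / self.eps_per_query)

# Adaptive model: each new predicate may depend on all previous noisy answers.
mech = OnlineLaplaceMechanism(dataset=list(range(100)), epsilon_total=1.0, k_queries=10)
a1 = mech.answer(lambda r: r > 50)
a2 = mech.answer(lambda r: r > a1)                        # query chosen after seeing a1
```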
|
1506.04714
|
Dinesh Jayaraman
|
Dinesh Jayaraman and Kristen Grauman
|
Slow and steady feature analysis: higher order temporal coherence in
video
|
in Computer Vision and Pattern Recognition (CVPR) 2016, Las Vegas,
NV, June 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can unlabeled video augment visual learning? Existing methods perform
"slow" feature analysis, encouraging the representations of temporally close
frames to exhibit only small differences. While this standard approach captures
the fact that high-level visual signals change slowly over time, it fails to
capture *how* the visual content changes. We propose to generalize slow feature
analysis to "steady" feature analysis. The key idea is to impose a prior that
higher order derivatives in the learned feature space must be small. To this
end, we train a convolutional neural network with a regularizer on tuples of
sequential frames from unlabeled video. It encourages feature changes over time
to be smooth, i.e., similar to the most recent changes. Using five diverse
datasets, including unlabeled YouTube and KITTI videos, we demonstrate our
method's impact on object, scene, and action recognition tasks. We further show
that our features learned from unlabeled video can even surpass a standard
heavily supervised pretraining approach.
|
[
{
"version": "v1",
"created": "Mon, 15 Jun 2015 19:26:38 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2016 18:37:33 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Jayaraman",
"Dinesh",
""
],
[
"Grauman",
"Kristen",
""
]
] |
TITLE: Slow and steady feature analysis: higher order temporal coherence in
video
ABSTRACT: How can unlabeled video augment visual learning? Existing methods perform
"slow" feature analysis, encouraging the representations of temporally close
frames to exhibit only small differences. While this standard approach captures
the fact that high-level visual signals change slowly over time, it fails to
capture *how* the visual content changes. We propose to generalize slow feature
analysis to "steady" feature analysis. The key idea is to impose a prior that
higher order derivatives in the learned feature space must be small. To this
end, we train a convolutional neural network with a regularizer on tuples of
sequential frames from unlabeled video. It encourages feature changes over time
to be smooth, i.e., similar to the most recent changes. Using five diverse
datasets, including unlabeled YouTube and KITTI videos, we demonstrate our
method's impact on object, scene, and action recognition tasks. We further show
that our features learned from unlabeled video can even surpass a standard
heavily supervised pretraining approach.
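The "steady" prior reduces to penalising the discrete second temporal derivative of the embedding. A minimal sketch of the unsupervised part of the loss (the paper additionally uses contrastive terms with negative tuples, which are omitted here, and `embed` stands for the CNN being trained):

```python
# Slow (first-order) plus steady (second-order) temporal coherence on frame triples.
def slow_and_steady_loss(embed, frame1, frame2, frame3, lam=1.0):
    z1, z2, z3 = embed(frame1), embed(frame2), embed(frame3)
    slow = (z2 - z1).pow(2).sum(dim=1).mean()                  # changes should be small
    steady = ((z3 - z2) - (z2 - z1)).pow(2).sum(dim=1).mean()  # and similar to recent changes
    return slow + lam * steady
```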
|
1511.06078
|
Liwei Wang
|
Liwei Wang, Yin Li, Svetlana Lazebnik
|
Learning Deep Structure-Preserving Image-Text Embeddings
| null | null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a method for learning joint embeddings of images and text
using a two-branch neural network with multiple layers of linear projections
followed by nonlinearities. The network is trained using a large margin
objective that combines cross-view ranking constraints with within-view
neighborhood structure preservation constraints inspired by metric learning
literature. Extensive experiments show that our approach gains significant
improvements in accuracy for image-to-text and text-to-image retrieval. Our
method achieves new state-of-the-art results on the Flickr30K and MSCOCO
image-sentence datasets and shows promise on the new task of phrase
localization on the Flickr30K Entities dataset.
|
[
{
"version": "v1",
"created": "Thu, 19 Nov 2015 07:17:49 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2016 03:10:04 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Wang",
"Liwei",
""
],
[
"Li",
"Yin",
""
],
[
"Lazebnik",
"Svetlana",
""
]
] |
TITLE: Learning Deep Structure-Preserving Image-Text Embeddings
ABSTRACT: This paper proposes a method for learning joint embeddings of images and text
using a two-branch neural network with multiple layers of linear projections
followed by nonlinearities. The network is trained using a large margin
objective that combines cross-view ranking constraints with within-view
neighborhood structure preservation constraints inspired by metric learning
literature. Extensive experiments show that our approach gains significant
improvements in accuracy for image-to-text and text-to-image retrieval. Our
method achieves new state-of-the-art results on the Flickr30K and MSCOCO
image-sentence datasets and shows promise on the new task of phrase
localization on the Flickr30K Entities dataset.
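The two-branch network and the cross-view ranking part of the objective can be sketched as below; the within-view neighborhood-preservation term and the exact sampling strategy are omitted, and all dimensions are assumptions.

```python
# Two-branch embedding with a bidirectional margin-based ranking loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch(nn.Module):
    def __init__(self, in_dim, out_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU(),
                                 nn.Linear(1024, out_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)        # unit-length embeddings

def cross_view_ranking_loss(img_emb, txt_emb, margin=0.2):
    sim = img_emb @ txt_emb.t()                       # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                     # matched pairs on the diagonal
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    loss_i2t = F.relu(margin + sim - pos).masked_fill(mask, 0).mean()
    loss_t2i = F.relu(margin + sim.t() - pos).masked_fill(mask, 0).mean()
    return loss_i2t + loss_t2i
```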
|
1511.06973
|
Chunhua Shen
|
Qi Wu, Peng Wang, Chunhua Shen, Anthony Dick, Anton van den Hengel
|
Ask Me Anything: Free-form Visual Question Answering Based on Knowledge
from External Sources
|
Accepted to IEEE Conf. Computer Vision and Pattern Recognition
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for visual question answering which combines an internal
representation of the content of an image with information extracted from a
general knowledge base to answer a broad range of image-based questions. This
allows more complex questions to be answered using the predominant neural
network-based approach than has previously been possible. It particularly
allows questions to be asked about the contents of an image, even when the
image itself does not contain the whole answer. The method constructs a textual
representation of the semantic content of an image, and merges it with textual
information sourced from a knowledge base, to develop a deeper understanding of
the scene viewed. Priming a recurrent neural network with this combined
information, and the submitted question, leads to a very flexible visual
question answering approach. We are specifically able to answer questions posed
in natural language, that refer to information not contained in the image. We
demonstrate the effectiveness of our model on two publicly available datasets,
Toronto COCO-QA and MS COCO-VQA and show that it produces the best reported
results in both cases.
|
[
{
"version": "v1",
"created": "Sun, 22 Nov 2015 07:08:14 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2016 08:09:08 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Wu",
"Qi",
""
],
[
"Wang",
"Peng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Dick",
"Anthony",
""
],
[
"Hengel",
"Anton van den",
""
]
] |
TITLE: Ask Me Anything: Free-form Visual Question Answering Based on Knowledge
from External Sources
ABSTRACT: We propose a method for visual question answering which combines an internal
representation of the content of an image with information extracted from a
general knowledge base to answer a broad range of image-based questions. This
allows more complex questions to be answered using the predominant neural
network-based approach than has previously been possible. It particularly
allows questions to be asked about the contents of an image, even when the
image itself does not contain the whole answer. The method constructs a textual
representation of the semantic content of an image, and merges it with textual
information sourced from a knowledge base, to develop a deeper understanding of
the scene viewed. Priming a recurrent neural network with this combined
information, and the submitted question, leads to a very flexible visual
question answering approach. We are specifically able to answer questions posed
in natural language, that refer to information not contained in the image. We
demonstrate the effectiveness of our model on two publicly available datasets,
Toronto COCO-QA and MS COCO-VQA and show that it produces the best reported
results in both cases.
|
1601.05150
|
Wanli Ouyang
|
Wanli Ouyang, Xiaogang Wang, Cong Zhang, Xiaokang Yang
|
Factors in Finetuning Deep Model for object detection
|
CVPR2016 camera ready version. Our ImageNet large scale recognition
challenge (ILSVRC15) object detection results (rank 3rd for provided data and
2nd for external data) are based on this method. Code available later on
http://www.ee.cuhk.edu.hk/~wlouyang/projects/ImageNetFactors/CVPR16.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finetuning from a pretrained deep model is found to yield state-of-the-art
performance for many vision tasks. This paper investigates many factors that
influence the performance in finetuning for object detection. There is a
long-tailed distribution of sample numbers for classes in object detection. Our
analysis and empirical results show that classes with more samples have higher
impact on the feature learning. And it is better to make the sample number more
uniform across classes. Generic object detection can be considered as multiple
equally important tasks. Detection of each class is a task. These classes/tasks
have their individuality in discriminative visual appearance representation.
Taking this individuality into account, we cluster objects into visually
similar class groups and learn deep representations for these groups
separately. A hierarchical feature learning scheme is proposed. In this scheme,
the knowledge from the group with large number of classes is transferred for
learning features in its sub-groups. Finetuned on the GoogLeNet model,
experimental results show 4.7% absolute mAP improvement of our approach on the
ImageNet object detection dataset without increasing much computational cost at
the testing stage.
|
[
{
"version": "v1",
"created": "Wed, 20 Jan 2016 02:19:48 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2016 01:15:12 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Ouyang",
"Wanli",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Zhang",
"Cong",
""
],
[
"Yang",
"Xiaokang",
""
]
] |
TITLE: Factors in Finetuning Deep Model for object detection
ABSTRACT: Finetuning from a pretrained deep model is found to yield state-of-the-art
performance for many vision tasks. This paper investigates many factors that
influence the performance in finetuning for object detection. There is a
long-tailed distribution of sample numbers for classes in object detection. Our
analysis and empirical results show that classes with more samples have higher
impact on the feature learning. And it is better to make the sample number more
uniform across classes. Generic object detection can be considered as multiple
equally important tasks. Detection of each class is a task. These classes/tasks
have their individuality in discriminative visual appearance representation.
Taking this individuality into account, we cluster objects into visually
similar class groups and learn deep representations for these groups
separately. A hierarchical feature learning scheme is proposed. In this scheme,
the knowledge from the group with large number of classes is transferred for
learning features in its sub-groups. Finetuned on the GoogLeNet model,
experimental results show 4.7% absolute mAP improvement of our approach on the
ImageNet object detection dataset without increasing much computational cost at
the testing stage.
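One practical takeaway, making the effective number of samples per class more uniform, can be approximated with inverse-frequency sampling; the sketch below is an illustration and does not show the paper's hierarchical grouping of visually similar classes.

```python
# Class-balanced sampling so rare classes contribute more evenly to feature learning.
from collections import Counter
from torch.utils.data import WeightedRandomSampler

def balanced_sampler(labels):
    """labels: list of class ids, one per training sample."""
    freq = Counter(labels)
    weights = [1.0 / freq[y] for y in labels]        # rare classes sampled more often
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

# sampler = balanced_sampler(train_labels)
# loader = DataLoader(train_set, batch_size=32, sampler=sampler)
```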
|
1603.04595
|
Olivier Mor\`ere
|
Olivier Mor\`ere, Jie Lin, Antoine Veillard, Vijay Chandrasekhar,
Tomaso Poggio
|
Nested Invariance Pooling and RBM Hashing for Image Instance Retrieval
|
Image Instance Retrieval, CNN, Invariant Representation, Hashing,
Unsupervised Learning, Regularization. arXiv admin note: text overlap with
arXiv:1601.02093
| null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of this work is the computation of very compact binary hashes for
image instance retrieval. Our approach has two novel contributions. The first
one is Nested Invariance Pooling (NIP), a method inspired by i-theory, a
mathematical theory for computing group invariant transformations with
feed-forward neural networks. NIP is able to produce compact and
well-performing descriptors with visual representations extracted from
convolutional neural networks. We specifically incorporate scale, translation
and rotation invariances but the scheme can be extended to any arbitrary sets
of transformations. We also show that using moments of increasing order
throughout nesting is important. The NIP descriptors are then hashed to the
target code size (32-256 bits) with a Restricted Boltzmann Machine with a novel
batch-level regularization scheme specifically designed for the purpose of
hashing (RBMH). A thorough empirical evaluation with state-of-the-art shows
that the results obtained both with the NIP descriptors and the NIP+RBMH hashes
are consistently outstanding across a wide range of datasets.
|
[
{
"version": "v1",
"created": "Tue, 15 Mar 2016 08:56:33 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Apr 2016 14:11:18 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Morère",
"Olivier",
""
],
[
"Lin",
"Jie",
""
],
[
"Veillard",
"Antoine",
""
],
[
"Chandrasekhar",
"Vijay",
""
],
[
"Poggio",
"Tomaso",
""
]
] |
TITLE: Nested Invariance Pooling and RBM Hashing for Image Instance Retrieval
ABSTRACT: The goal of this work is the computation of very compact binary hashes for
image instance retrieval. Our approach has two novel contributions. The first
one is Nested Invariance Pooling (NIP), a method inspired by i-theory, a
mathematical theory for computing group invariant transformations with
feed-forward neural networks. NIP is able to produce compact and
well-performing descriptors with visual representations extracted from
convolutional neural networks. We specifically incorporate scale, translation
and rotation invariances but the scheme can be extended to any arbitrary sets
of transformations. We also show that using moments of increasing order
throughout nesting is important. The NIP descriptors are then hashed to the
target code size (32-256 bits) with a Restricted Boltzmann Machine with a novel
batch-level regularization scheme specifically designed for the purpose of
hashing (RBMH). A thorough empirical evaluation with state-of-the-art shows
that the results obtained both with the NIP descriptors and the NIP+RBMH hashes
are consistently outstanding across a wide range of datasets.
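Nested pooling over transformation groups with moments of increasing order can be pictured as follows; the particular groups, their ordering, and the moments used here are assumptions for illustration.

```python
# Nested invariance pooling: pool CNN descriptors of transformed image copies,
# one transformation group at a time, with moments of increasing order.
import numpy as np

def nested_invariance_pooling(feats):
    """feats: (n_rotations, n_scales, n_translations, D) array holding one CNN
    descriptor per transformed copy of the image."""
    f = feats.mean(axis=2)                         # order-1 moment over translations
    f = np.sqrt((f ** 2).mean(axis=1))             # order-2 moment over scales
    f = (np.abs(f) ** 3).mean(axis=0) ** (1 / 3)   # order-3 moment over rotations
    return f / (np.linalg.norm(f) + 1e-12)         # final descriptor, ready for hashing
```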
|
1604.01219
|
Yuting Qiang
|
Yuting Qiang, Yanwei Fu, Yanwen Guo, Zhi-Hua Zhou and Leonid Sigal
|
Learning to Generate Posters of Scientific Papers
|
in Proceedings of the 30th AAAI Conference on Artificial Intelligence
(AAAI'16), Phoenix, AZ, 2016
| null | null | null |
cs.AI cs.CL cs.HC cs.MM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Researchers often summarize their work in the form of posters. Posters
provide a coherent and efficient way to convey core ideas from scientific
papers. Generating a good scientific poster, however, is a complex and time
consuming cognitive task, since such posters need to be readable, informative,
and visually aesthetic. In this paper, for the first time, we study the
challenging problem of learning to generate posters from scientific papers. To
this end, a data-driven framework, that utilizes graphical models, is proposed.
Specifically, given content to display, the key elements of a good poster,
including panel layout and attributes of each panel, are learned and inferred
from data. Then, given inferred layout and attributes, composition of graphical
elements within each panel is synthesized. To learn and validate our model, we
collect and make public a Poster-Paper dataset, which consists of scientific
papers and corresponding posters with exhaustively labelled panels and
attributes. Qualitative and quantitative results indicate the effectiveness of
our approach.
|
[
{
"version": "v1",
"created": "Tue, 5 Apr 2016 11:18:04 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Qiang",
"Yuting",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Guo",
"Yanwen",
""
],
[
"Zhou",
"Zhi-Hua",
""
],
[
"Sigal",
"Leonid",
""
]
] |
TITLE: Learning to Generate Posters of Scientific Papers
ABSTRACT: Researchers often summarize their work in the form of posters. Posters
provide a coherent and efficient way to convey core ideas from scientific
papers. Generating a good scientific poster, however, is a complex and time
consuming cognitive task, since such posters need to be readable, informative,
and visually aesthetic. In this paper, for the first time, we study the
challenging problem of learning to generate posters from scientific papers. To
this end, a data-driven framework, that utilizes graphical models, is proposed.
Specifically, given content to display, the key elements of a good poster,
including panel layout and attributes of each panel, are learned and inferred
from data. Then, given inferred layout and attributes, composition of graphical
elements within each panel is synthesized. To learn and validate our model, we
collect and make public a Poster-Paper dataset, which consists of scientific
papers and corresponding posters with exhaustively labelled panels and
attributes. Qualitative and quantitative results indicate the effectiveness of
our approach.
|
1604.03968
|
Francis Ferraro
|
Ting-Hao (Kenneth) Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan
Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet
Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende,
Michel Galley, Margaret Mitchell
|
Visual Storytelling
|
to appear in NAACL 2016
| null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the first dataset for sequential vision-to-language, and explore
how this data may be used for the task of visual storytelling. The first
release of this dataset, SIND v.1, includes 81,743 unique photos in 20,211
sequences, aligned to both descriptive (caption) and story language. We
establish several strong baselines for the storytelling task, and motivate an
automatic metric to benchmark progress. Modelling concrete description as well
as figurative and social language, as provided in this dataset and the
storytelling task, has the potential to move artificial intelligence from basic
understandings of typical visual scenes towards more and more human-like
understanding of grounded event structure and subjective expression.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2016 20:27:43 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Huang",
"Ting-Hao (Kenneth)",
""
],
[
"Ferraro",
"Francis",
""
],
[
"Mostafazadeh",
"Nasrin",
""
],
[
"Misra",
"Ishan",
""
],
[
"Agrawal",
"Aishwarya",
""
],
[
"Devlin",
"Jacob",
""
],
[
"Girshick",
"Ross",
""
],
[
"He",
"Xiaodong",
""
],
[
"Kohli",
"Pushmeet",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Parikh",
"Devi",
""
],
[
"Vanderwende",
"Lucy",
""
],
[
"Galley",
"Michel",
""
],
[
"Mitchell",
"Margaret",
""
]
] |
TITLE: Visual Storytelling
ABSTRACT: We introduce the first dataset for sequential vision-to-language, and explore
how this data may be used for the task of visual storytelling. The first
release of this dataset, SIND v.1, includes 81,743 unique photos in 20,211
sequences, aligned to both descriptive (caption) and story language. We
establish several strong baselines for the storytelling task, and motivate an
automatic metric to benchmark progress. Modelling concrete description as well
as figurative and social language, as provided in this dataset and the
storytelling task, has the potential to move artificial intelligence from basic
understandings of typical visual scenes towards more and more human-like
understanding of grounded event structure and subjective expression.
|
1604.04007
|
Haibing Wu
|
Haibing Wu, Xiaodong Gu
|
Balancing Between Over-Weighting and Under-Weighting in Supervised Term
Weighting
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised term weighting could improve the performance of text
categorization. A way proven to be effective is to give more weight to terms
with more imbalanced distributions across categories. This paper shows that
supervised term weighting should not just assign large weights to imbalanced
terms, but should also control the trade-off between over-weighting and
under-weighting. Over-weighting, a new concept proposed in this paper, is
caused by the improper handling of singular terms and too large ratios between
term weights. To prevent over-weighting, we present three regularization
techniques: add-one smoothing, sublinear scaling and bias term. Add-one
smoothing is used to handle singular terms. Sublinear scaling and bias term
shrink the ratios between term weights. However, if sublinear functions scale
down term weights too much, or the bias term is too large, under-weighting
would occur and harm the performance. It is therefore critical to balance
between over-weighting and under-weighting. Inspired by this insight, we also
propose a new supervised term weighting scheme, regularized entropy (re). Our
re employs entropy to measure term distribution, and introduces the bias term
to control over-weighting and under-weighting. Empirical evaluations on topical
and sentiment classification datasets indicate that sublinear scaling and bias
term greatly influence the performance of supervised term weighting, and our re
enjoys the best results in comparison with existing schemes.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2016 01:29:52 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Wu",
"Haibing",
""
],
[
"Gu",
"Xiaodong",
""
]
] |
TITLE: Balancing Between Over-Weighting and Under-Weighting in Supervised Term
Weighting
ABSTRACT: Supervised term weighting could improve the performance of text
categorization. A way proven to be effective is to give more weight to terms
with more imbalanced distributions across categories. This paper shows that
supervised term weighting should not just assign large weights to imbalanced
terms, but should also control the trade-off between over-weighting and
under-weighting. Over-weighting, a new concept proposed in this paper, is
caused by the improper handling of singular terms and too large ratios between
term weights. To prevent over-weighting, we present three regularization
techniques: add-one smoothing, sublinear scaling and bias term. Add-one
smoothing is used to handle singular terms. Sublinear scaling and bias term
shrink the ratios between term weights. However, if sublinear functions scale
down term weights too much, or the bias term is too large, under-weighting
would occur and harm the performance. It is therefore critical to balance
between over-weighting and under-weighting. Inspired by this insight, we also
propose a new supervised term weighting scheme, regularized entropy (re). Our
re employs entropy to measure term distribution, and introduces the bias term
to control over-weighting and under-weighting. Empirical evaluations on topical
and sentiment classification datasets indicate that sublinear scaling and bias
term greatly influence the performance of supervised term weighting, and our re
enjoys the best results in comparison with existing schemes.
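A regularized-entropy style weight with the three safeguards above might look like the following sketch; the exact functional form in the paper may differ, and the `bias` default is an assumption.

```python
# Supervised term weighting: add-one smoothing, entropy-based imbalance, bias term,
# and sublinear tf scaling at the document level.
import numpy as np

def regularized_entropy_weights(term_class_counts, bias=1.0):
    """term_class_counts: (V, C) matrix of term occurrences per category."""
    counts = term_class_counts + 1.0                     # add-one smoothing for singular terms
    p = counts / counts.sum(axis=1, keepdims=True)       # per-term class distribution
    entropy = -(p * np.log(p)).sum(axis=1)
    max_entropy = np.log(counts.shape[1])
    # More imbalanced terms (lower entropy) get larger weights; the bias term bounds
    # the ratio between the largest and smallest weights to avoid over-weighting.
    return bias + (max_entropy - entropy)

def weight_document(tf_vector, term_weights):
    return np.log1p(tf_vector) * term_weights            # sublinear tf scaling
```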
|
1604.04026
|
Nguyen Duy Khuong
|
Duy Khuong Nguyen, Tu Bao Ho
|
Fast Parallel Randomized Algorithm for Nonnegative Matrix Factorization
with KL Divergence for Large Sparse Datasets
| null | null | null | null |
math.OC cs.LG cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nonnegative Matrix Factorization (NMF) with Kullback-Leibler Divergence
(NMF-KL) is one of the most significant NMF problems and equivalent to
Probabilistic Latent Semantic Indexing (PLSI), which has been successfully
applied in many applications. For sparse count data, a Poisson distribution and
KL divergence provide sparse models and sparse representation, which describe
the random variation better than a normal distribution and Frobenius norm.
Specifically, sparse models provide a more concise understanding of the appearance
of attributes over latent components, while sparse representation provides
concise interpretability of the contribution of latent components over
instances. However, minimizing NMF with KL divergence is much more difficult
than minimizing NMF with Frobenius norm; and sparse models, sparse
representation and fast algorithms for large sparse datasets are still
challenges for NMF with KL divergence. In this paper, we propose a fast
parallel randomized coordinate descent algorithm with fast convergence for
large sparse datasets to achieve sparse models and sparse representation. In
our experiments, the proposed algorithm outperforms the algorithms from
previous studies on this problem.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2016 03:40:35 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Nguyen",
"Duy Khuong",
""
],
[
"Ho",
"Tu Bao",
""
]
] |
TITLE: Fast Parallel Randomized Algorithm for Nonnegative Matrix Factorization
with KL Divergence for Large Sparse Datasets
ABSTRACT: Nonnegative Matrix Factorization (NMF) with Kullback-Leibler Divergence
(NMF-KL) is one of the most significant NMF problems and equivalent to
Probabilistic Latent Semantic Indexing (PLSI), which has been successfully
applied in many applications. For sparse count data, a Poisson distribution and
KL divergence provide sparse models and sparse representation, which describe
the random variation better than a normal distribution and Frobenius norm.
Specifically, sparse models provide a more concise understanding of the appearance
of attributes over latent components, while sparse representation provides
concise interpretability of the contribution of latent components over
instances. However, minimizing NMF with KL divergence is much more difficult
than minimizing NMF with Frobenius norm; and sparse models, sparse
representation and fast algorithms for large sparse datasets are still
challenges for NMF with KL divergence. In this paper, we propose a fast
parallel randomized coordinate descent algorithm with fast convergence for
large sparse datasets to achieve sparse models and sparse representation. In
our experiments, the proposed algorithm outperforms the algorithms from
previous studies on this problem.
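For reference, the objective being accelerated can be written with the classical multiplicative-update baseline for KL-NMF shown below; this is not the paper's parallel randomized coordinate descent algorithm, just the standard formulation it speeds up.

```python
# Lee-Seung multiplicative updates for NMF under the KL divergence (dense sketch).
import numpy as np

def kl_nmf(V, rank, n_iters=200, eps=1e-10):
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```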
|
1604.04048
|
Wenqing Chu
|
Wenqing Chu and Deng Cai
|
Deep Feature Based Contextual Model for Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection is one of the most active areas in computer vision, which
has made significant improvement in recent years. Current state-of-the-art
object detection methods mostly adhere to the framework of regions with
convolutional neural network (R-CNN) and only use local appearance features
inside object bounding boxes. Since these approaches ignore the contextual
information around the object proposals, the outcome of these detectors may
generate a semantically incoherent interpretation of the input image. In this
paper, we propose an ensemble object detection system which incorporates the
local appearance, the contextual information in terms of relationships among
objects and the global scene based contextual feature generated by a
convolutional neural network. The system is formulated as a fully connected
conditional random field (CRF) defined on object proposals and the contextual
constraints among object proposals are modeled as edges naturally. Furthermore,
a fast mean field approximation method is utilized to perform inference in this CRF
model efficiently. The experimental results demonstrate that our approach
achieves a higher mean average precision (mAP) on PASCAL VOC 2007 datasets
compared to the baseline algorithm Faster R-CNN.
|
[
{
"version": "v1",
"created": "Thu, 14 Apr 2016 07:01:23 GMT"
}
] | 2016-04-15T00:00:00 |
[
[
"Chu",
"Wenqing",
""
],
[
"Cai",
"Deng",
""
]
] |
TITLE: Deep Feature Based Contextual Model for Object Detection
ABSTRACT: Object detection is one of the most active areas in computer vision, which
has made significant improvement in recent years. Current state-of-the-art
object detection methods mostly adhere to the framework of regions with
convolutional neural network (R-CNN) and only use local appearance features
inside object bounding boxes. Since these approaches ignore the contextual
information around the object proposals, the outcome of these detectors may
generate a semantically incoherent interpretation of the input image. In this
paper, we propose an ensemble object detection system which incorporates the
local appearance, the contextual information in terms of relationships among
objects and the global scene based contextual feature generated by a
convolutional neural network. The system is formulated as a fully connected
conditional random field (CRF) defined on object proposals and the contextual
constraints among object proposals are modeled as edges naturally. Furthermore,
a fast mean field approximation method is utilized to perform inference in this CRF
model efficiently. The experimental results demonstrate that our approach
achieves a higher mean average precision (mAP) on PASCAL VOC 2007 datasets
compared to the baseline algorithm Faster R-CNN.
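Mean-field inference over proposal label distributions can be sketched as follows; the generic co-occurrence compatibility used here is an assumption, whereas the paper's pairwise and scene-context potentials are richer.

```python
# Mean-field updates for a fully connected CRF over object proposals.
import numpy as np

def mean_field(unary, cooccurrence, n_iters=10):
    """unary: (N, C) detector scores per proposal; cooccurrence: (C, C) pairwise
    compatibility between labels of different proposals."""
    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    Q = softmax(unary)
    for _ in range(n_iters):
        # Message from all other proposals: expected pairwise compatibility.
        msg = (Q.sum(axis=0, keepdims=True) - Q) @ cooccurrence
        Q = softmax(unary + msg)
    return Q                                  # refined label marginals per proposal
```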
|
1510.01257
|
Yongxi Lu
|
Yongxi Lu and Tara Javidi
|
Efficient Object Detection for High Resolution Images
| null | null |
10.1109/ALLERTON.2015.7447130
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient generation of high-quality object proposals is an essential step in
state-of-the-art object detection systems based on deep convolutional neural
networks (DCNN) features. Current object proposal algorithms are
computationally inefficient in processing high resolution images containing
small objects, which makes them the bottleneck in object detection systems. In
this paper we present effective methods to detect objects for high resolution
images. We combine two complementary strategies. The first approach is to
predict bounding boxes based on adjacent visual features. The second approach
uses high level image features to guide a two-step search process that
adaptively focuses on regions that are likely to contain small objects. We
extract features required for the two strategies by utilizing a pre-trained
DCNN model known as AlexNet. We demonstrate the effectiveness of our algorithm
by showing its performance on a high-resolution image subset of the SUN 2012
object detection dataset.
|
[
{
"version": "v1",
"created": "Mon, 5 Oct 2015 17:48:02 GMT"
}
] | 2016-04-14T00:00:00 |
[
[
"Lu",
"Yongxi",
""
],
[
"Javidi",
"Tara",
""
]
] |
TITLE: Efficient Object Detection for High Resolution Images
ABSTRACT: Efficient generation of high-quality object proposals is an essential step in
state-of-the-art object detection systems based on deep convolutional neural
networks (DCNN) features. Current object proposal algorithms are
computationally inefficient in processing high resolution images containing
small objects, which makes them the bottleneck in object detection systems. In
this paper we present effective methods to detect objects for high resolution
images. We combine two complementary strategies. The first approach is to
predict bounding boxes based on adjacent visual features. The second approach
uses high level image features to guide a two-step search process that
adaptively focuses on regions that are likely to contain small objects. We
extract features required for the two strategies by utilizing a pre-trained
DCNN model known as AlexNet. We demonstrate the effectiveness of our algorithm
by showing its performance on a high-resolution image subset of the SUN 2012
object detection dataset.
|
1511.03776
|
Chen Sun
|
Chen Sun and Manohar Paluri and Ronan Collobert and Ram Nevatia and
Lubomir Bourdev
|
ProNet: Learning to Propose Object-specific Boxes for Cascaded Neural
Networks
|
CVPR 2016 (fixed reference issue)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims to classify and locate objects accurately and efficiently,
without using bounding box annotations. It is challenging as objects in the
wild could appear at arbitrary locations and in different scales. In this
paper, we propose a novel classification architecture ProNet based on
convolutional neural networks. It uses computationally efficient neural
networks to propose image regions that are likely to contain objects, and
applies more powerful but slower networks on the proposed regions. The basic
building block is a multi-scale fully-convolutional network which assigns
object confidence scores to boxes at different locations and scales. We show
that such networks can be trained effectively using image-level annotations,
and can be connected into cascades or trees for efficient object
classification. ProNet outperforms previous state-of-the-art significantly on
PASCAL VOC 2012 and MS COCO datasets for object classification and point-based
localization.
|
[
{
"version": "v1",
"created": "Thu, 12 Nov 2015 05:06:16 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Apr 2016 04:42:22 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Apr 2016 02:56:43 GMT"
}
] | 2016-04-14T00:00:00 |
[
[
"Sun",
"Chen",
""
],
[
"Paluri",
"Manohar",
""
],
[
"Collobert",
"Ronan",
""
],
[
"Nevatia",
"Ram",
""
],
[
"Bourdev",
"Lubomir",
""
]
] |
TITLE: ProNet: Learning to Propose Object-specific Boxes for Cascaded Neural
Networks
ABSTRACT: This paper aims to classify and locate objects accurately and efficiently,
without using bounding box annotations. It is challenging as objects in the
wild could appear at arbitrary locations and in different scales. In this
paper, we propose a novel classification architecture ProNet based on
convolutional neural networks. It uses computationally efficient neural
networks to propose image regions that are likely to contain objects, and
applies more powerful but slower networks on the proposed regions. The basic
building block is a multi-scale fully-convolutional network which assigns
object confidence scores to boxes at different locations and scales. We show
that such networks can be trained effectively using image-level annotations,
and can be connected into cascades or trees for efficient object
classification. ProNet outperforms previous state-of-the-art significantly on
PASCAL VOC 2012 and MS COCO datasets for object classification and point-based
localization.
|
1512.00486
|
Maksim Lapin
|
Maksim Lapin, Matthias Hein, Bernt Schiele
|
Loss Functions for Top-k Error: Analysis and Insights
|
In Computer Vision and Pattern Recognition (CVPR), 2016
| null | null | null |
stat.ML cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to push the performance on realistic computer vision tasks, the
number of classes in modern benchmark datasets has significantly increased in
recent years. This increase in the number of classes comes along with increased
ambiguity between the class labels, raising the question of whether top-1 error is the
right performance measure. In this paper, we provide an extensive comparison
and evaluation of established multiclass methods comparing their top-k
performance both from a practical as well as from a theoretical perspective.
Moreover, we introduce novel top-k loss functions as modifications of the
softmax and the multiclass SVM losses and provide efficient optimization
schemes for them. In the experiments, we compare on various datasets all of the
proposed and established methods for top-k error optimization. An interesting
insight of this paper is that the softmax loss yields competitive top-k
performance for all k simultaneously. For a specific top-k error, our new top-k
losses lead typically to further improvements while being faster to train than
the softmax.
|
[
{
"version": "v1",
"created": "Tue, 1 Dec 2015 21:22:35 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Apr 2016 15:12:01 GMT"
}
] | 2016-04-14T00:00:00 |
[
[
"Lapin",
"Maksim",
""
],
[
"Hein",
"Matthias",
""
],
[
"Schiele",
"Bernt",
""
]
] |
TITLE: Loss Functions for Top-k Error: Analysis and Insights
ABSTRACT: In order to push the performance on realistic computer vision tasks, the
number of classes in modern benchmark datasets has significantly increased in
recent years. This increase in the number of classes comes along with increased
ambiguity between the class labels, raising the question of whether top-1 error is the
right performance measure. In this paper, we provide an extensive comparison
and evaluation of established multiclass methods comparing their top-k
performance both from a practical as well as from a theoretical perspective.
Moreover, we introduce novel top-k loss functions as modifications of the
softmax and the multiclass SVM losses and provide efficient optimization
schemes for them. In the experiments, we compare on various datasets all of the
proposed and established methods for top-k error optimization. An interesting
insight of this paper is that the softmax loss yields competitive top-k
performance for all k simultaneously. For a specific top-k error, our new top-k
losses lead typically to further improvements while being faster to train than
the softmax.
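For reference, the top-k error measure discussed above, together with a simplified top-k hinge-style loss, can be sketched as follows. The paper's top-k losses are more refined (e.g. averaging over the k largest competitors), so this is an illustration of the idea rather than the authors' formulations.

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    """Fraction of samples whose true label is not among the k highest scores."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

def top_k_hinge_loss(scores, labels, k=1, margin=1.0):
    """A simple top-k multiclass hinge: penalize a sample only if its true
    class does not beat the k-th largest competing score by the margin.
    With k=1 this recovers the usual Crammer-Singer multiclass hinge."""
    n = scores.shape[0]
    true = scores[np.arange(n), labels]
    comp = scores.copy()
    comp[np.arange(n), labels] = -np.inf      # competitors only
    kth = np.sort(comp, axis=1)[:, -k]        # k-th largest competitor
    return np.maximum(0.0, margin + kth - true).mean()
```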
|
1604.00999
|
Michael Firman
|
Michael Firman
|
RGBD Datasets: Past, Present and Future
|
8 pages excluding references (CVPR style)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since the launch of the Microsoft Kinect, scores of RGBD datasets have been
released. These have propelled advances in areas from reconstruction to gesture
recognition. In this paper we explore the field, reviewing datasets across
eight categories: semantics, object pose estimation, camera tracking, scene
reconstruction, object tracking, human actions, faces and identification. By
extracting relevant information in each category we help researchers to find
appropriate data for their needs, and we consider which datasets have succeeded
in driving computer vision forward and why.
Finally, we examine the future of RGBD datasets. We identify key areas which
are currently underexplored, and suggest that future directions may include
synthetic data and dense reconstructions of static and dynamic scenes.
|
[
{
"version": "v1",
"created": "Mon, 4 Apr 2016 19:35:56 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Apr 2016 09:19:44 GMT"
}
] | 2016-04-14T00:00:00 |
[
[
"Firman",
"Michael",
""
]
] |
TITLE: RGBD Datasets: Past, Present and Future
ABSTRACT: Since the launch of the Microsoft Kinect, scores of RGBD datasets have been
released. These have propelled advances in areas from reconstruction to gesture
recognition. In this paper we explore the field, reviewing datasets across
eight categories: semantics, object pose estimation, camera tracking, scene
reconstruction, object tracking, human actions, faces and identification. By
extracting relevant information in each category we help researchers to find
appropriate data for their needs, and we consider which datasets have succeeded
in driving computer vision forward and why.
Finally, we examine the future of RGBD datasets. We identify key areas which
are currently underexplored, and suggest that future directions may include
synthetic data and dense reconstructions of static and dynamic scenes.
|
1604.03627
|
Norah Abokhodair
|
Norah Abokhodair, Daisy Yoo, David W. McDonald
|
Dissecting a Social Botnet: Growth, Content and Influence in Twitter
|
13 pages, 4 figures, Presented at the ACM conference on
Computer-Supported Cooperative Work and Social Computing (CSCW 2016)
| null |
10.1145/2675133.2675208
| null |
cs.CY cs.CL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social botnets have become an important phenomenon on social media. There are
many ways in which social bots can disrupt or influence online discourse, such
as spam hashtags, scam Twitter users, and astroturfing. In this paper we
considered one specific social botnet on Twitter to understand how it grows
over time, how the content of its tweets differs from that of regular
users in the same dataset, and lastly, how the social botnet may have
influenced the relevant discussions. Our analysis is based on a qualitative
coding for approximately 3000 tweets in Arabic and English from the Syrian
social bot that was active for 35 weeks on Twitter before it was shut down. We
find that the growth, behavior and content of this particular botnet did not
specifically align with common conceptions of botnets. Further, we identify
interesting aspects of the botnet that distinguish it from regular users.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2016 01:00:24 GMT"
}
] | 2016-04-14T00:00:00 |
[
[
"Abokhodair",
"Norah",
""
],
[
"Yoo",
"Daisy",
""
],
[
"McDonald",
"David W.",
""
]
] |
TITLE: Dissecting a Social Botnet: Growth, Content and Influence in Twitter
ABSTRACT: Social botnets have become an important phenomenon on social media. There are
many ways in which social bots can disrupt or influence online discourse, such
as spam hashtags, scam Twitter users, and astroturfing. In this paper we
considered one specific social botnet on Twitter to understand how it grows
over time, how the content of its tweets differs from that of regular
users in the same dataset, and lastly, how the social botnet may have
influenced the relevant discussions. Our analysis is based on a qualitative
coding for approximately 3000 tweets in Arabic and English from the Syrian
social bot that was active for 35 weeks on Twitter before it was shut down. We
find that the growth, behavior and content of this particular botnet did not
specifically align with common conceptions of botnets. Further, we identify
interesting aspects of the botnet that distinguish it from regular users.
|
1604.03647
|
Yihong Yuan
|
Yihong Yuan
|
Modeling Inter-Country Connection from Geotagged News Reports: A
Time-Series Analysis
| null | null | null | null |
stat.AP cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of theories and techniques for big data analytics offers
tremendous flexibility for investigating large-scale events and patterns that
emerge over space and time. In this research, we utilize a unique open-access
dataset "The Global Data on Events, Location and Tone" (GDELT) to model the
image of China in mass media, specifically, how China has related to the rest
of the world and how this connection has evolved over time based on an
autoregressive integrated moving average (ARIMA) model. The results of this
research contribute from both methodological and empirical perspectives: We
examined the effectiveness of time series models in predicting trends in
long-term mass media data. In addition, we identified various types of
connection strength patterns between China and its top 15 related countries.
This study generates valuable input to interpret China's diplomatic and
regional relations based on mass media data, as well as providing
methodological references for investigating international relations in other
countries and regions in the big data era.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2016 03:53:53 GMT"
}
] | 2016-04-14T00:00:00 |
[
[
"Yuan",
"Yihong",
""
]
] |
TITLE: Modeling Inter-Country Connection from Geotagged News Reports: A
Time-Series Analysis
ABSTRACT: The development of theories and techniques for big data analytics offers
tremendous flexibility for investigating large-scale events and patterns that
emerge over space and time. In this research, we utilize a unique open-access
dataset "The Global Data on Events, Location and Tone" (GDELT) to model the
image of China in mass media, specifically, how China has related to the rest
of the world and how this connection has evolved over time based on an
autoregressive integrated moving average (ARIMA) model. The results of this
research contribute from both methodological and empirical perspectives: We
examined the effectiveness of time series models in predicting trends in
long-term mass media data. In addition, we identified various types of
connection strength patterns between China and its top 15 related countries.
This study generates valuable input to interpret China's diplomatic and
regional relations based on mass media data, as well as providing
methodological references for investigating international relations in other
countries and regions in the big data era.
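A minimal sketch of the modeling step described above, fitting an ARIMA model to a connection-strength time series with statsmodels, is given below. The file name, column names, sampling frequency, and model order (1, 1, 1) are illustrative assumptions, not the study's actual data or specification.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical input: a weekly China<->country connection-strength series
# aggregated from GDELT records (file and column names are made up).
df = pd.read_csv("gdelt_connection_strength.csv", parse_dates=["week"])
series = df.set_index("week")["connection_strength"].asfreq("W")

# Fit a simple ARIMA(p, d, q); the order here is illustrative only.
fit = ARIMA(series, order=(1, 1, 1)).fit()
print(fit.summary())

# Forecast the next 8 weeks of connection strength.
print(fit.forecast(steps=8))
```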
|
1604.03734
|
Michael Tanner
|
Michael Tanner and Pedro Pinies and Lina Maria Paz and Paul Newman
|
DENSER Cities: A System for Dense Efficient Reconstructions of Cities
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is about the efficient generation of dense, colored models of
city-scale environments from range data and in particular, stereo cameras.
Better maps make for better understanding; better understanding leads to better
robots, but this comes at a cost. The computational and memory requirements of
large dense models can be prohibitive. We provide the theory and the system
needed to create city-scale dense reconstructions. To do so, we apply a
regularizer over a compressed 3D data structure while dealing with the complex
boundary conditions this induces during the data-fusion stage. We show that
only with these considerations can we swiftly create neat, large, "well
behaved" reconstructions. We evaluate our system using the KITTI dataset and
provide statistics for the metric errors in all surfaces created compared to
those measured with 3D laser. Our regularizer reduces the median error by 40%
in 3.4 km of dense reconstructions with a median accuracy of 6 cm. For
subjective analysis, we provide a qualitative review of 6.1 km of our dense
reconstructions in an attached video. These are the largest dense
reconstructions from a single passive camera we are aware of in the literature.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2016 12:37:59 GMT"
}
] | 2016-04-14T00:00:00 |
[
[
"Tanner",
"Michael",
""
],
[
"Pinies",
"Pedro",
""
],
[
"Paz",
"Lina Maria",
""
],
[
"Newman",
"Paul",
""
]
] |
TITLE: DENSER Cities: A System for Dense Efficient Reconstructions of Cities
ABSTRACT: This paper is about the efficient generation of dense, colored models of
city-scale environments from range data and in particular, stereo cameras.
Better maps make for better understanding; better understanding leads to better
robots, but this comes at a cost. The computational and memory requirements of
large dense models can be prohibitive. We provide the theory and the system
needed to create city-scale dense reconstructions. To do so, we apply a
regularizer over a compressed 3D data structure while dealing with the complex
boundary conditions this induces during the data-fusion stage. We show that
only with these considerations can we swiftly create neat, large, "well
behaved" reconstructions. We evaluate our system using the KITTI dataset and
provide statistics for the metric errors in all surfaces created compared to
those measured with 3D laser. Our regularizer reduces the median error by 40%
in 3.4 km of dense reconstructions with a median accuracy of 6 cm. For
subjective analysis, we provide a qualitative review of 6.1 km of our dense
reconstructions in an attached video. These are the largest dense
reconstructions from a single passive camera we are aware of in the literature.
|
1604.03880
|
Hao Jiang
|
Hao Jiang and Kristen Grauman
|
Detangling People: Individuating Multiple Close People and Their Body
Parts via Region Assembly
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today's person detection methods work best when people are in common upright
poses and appear reasonably well spaced out in the image. However, in many real
images, that's not what people do. People often appear quite close to each
other, e.g., with limbs linked or heads touching, and their poses are often not
pedestrian-like. We propose an approach to detangle people in multi-person
images. We formulate the task as a region assembly problem. Starting from a
large set of overlapping regions from body part semantic segmentation and
generic object proposals, our optimization approach reassembles those pieces
together into multiple person instances. It enforces that the composed body
part regions of each person instance obey constraints on relative sizes, mutual
spatial relationships, foreground coverage, and exclusive label assignments
when overlapping. Since optimal region assembly is a challenging combinatorial
problem, we present a Lagrangian relaxation method to accelerate the lower
bound estimation, thereby enabling a fast branch and bound solution for the
global optimum. As output, our method produces a pixel-level map indicating
both 1) the body part labels (arm, leg, torso, and head), and 2) which parts
belong to which individual person. Our results on three challenging datasets
show our method is robust to clutter, occlusion, and complex poses. It
outperforms a variety of competing methods, including existing detector CRF
methods and region CNN approaches. In addition, we demonstrate its impact on a
proxemics recognition task, which demands a precise representation of "whose
body part is where" in crowded images.
|
[
{
"version": "v1",
"created": "Wed, 13 Apr 2016 17:35:05 GMT"
}
] | 2016-04-14T00:00:00 |
[
[
"Jiang",
"Hao",
""
],
[
"Grauman",
"Kristen",
""
]
] |
TITLE: Detangling People: Individuating Multiple Close People and Their Body
Parts via Region Assembly
ABSTRACT: Today's person detection methods work best when people are in common upright
poses and appear reasonably well spaced out in the image. However, in many real
images, that's not what people do. People often appear quite close to each
other, e.g., with limbs linked or heads touching, and their poses are often not
pedestrian-like. We propose an approach to detangle people in multi-person
images. We formulate the task as a region assembly problem. Starting from a
large set of overlapping regions from body part semantic segmentation and
generic object proposals, our optimization approach reassembles those pieces
together into multiple person instances. It enforces that the composed body
part regions of each person instance obey constraints on relative sizes, mutual
spatial relationships, foreground coverage, and exclusive label assignments
when overlapping. Since optimal region assembly is a challenging combinatorial
problem, we present a Lagrangian relaxation method to accelerate the lower
bound estimation, thereby enabling a fast branch and bound solution for the
global optimum. As output, our method produces a pixel-level map indicating
both 1) the body part labels (arm, leg, torso, and head), and 2) which parts
belong to which individual person. Our results on three challenging datasets
show our method is robust to clutter, occlusion, and complex poses. It
outperforms a variety of competing methods, including existing detector CRF
methods and region CNN approaches. In addition, we demonstrate its impact on a
proxemics recognition task, which demands a precise representation of "whose
body part is where" in crowded images.
|
1507.02558
|
Ilaria Gori
|
Ilaria Gori, J. K. Aggarwal, Larry Matthies, Michael S. Ryoo
|
Multi-Type Activity Recognition in Robot-Centric Scenarios
| null |
IEEE Robotics and Automation Letters (RA-L), 1(1):593-600, 2016
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Activity recognition is very useful in scenarios where robots interact with,
monitor or assist humans. In the past years many types of activities -- single
actions, two-person interactions or ego-centric activities, to name a few --
have been analyzed. Whereas traditional methods treat such types of activities
separately, an autonomous robot should be able to detect and recognize multiple
types of activities to effectively fulfill its tasks. We propose a method that
is intrinsically able to detect and recognize activities of different types
that happen in sequence or concurrently. We present a new unified descriptor,
called Relation History Image (RHI), which can be extracted from all the
activity types we are interested in. We then formulate an optimization
procedure to detect and recognize activities of different types. We apply our
approach to a new dataset recorded from a robot-centric perspective and
systematically evaluate its quality compared to multiple baselines. Finally, we
show the efficacy of the RHI descriptor on publicly available datasets
performing extensive comparisons.
|
[
{
"version": "v1",
"created": "Thu, 9 Jul 2015 15:33:40 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2016 01:33:06 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Gori",
"Ilaria",
""
],
[
"Aggarwal",
"J. K.",
""
],
[
"Matthies",
"Larry",
""
],
[
"Ryoo",
"Michael S.",
""
]
] |
TITLE: Multi-Type Activity Recognition in Robot-Centric Scenarios
ABSTRACT: Activity recognition is very useful in scenarios where robots interact with,
monitor or assist humans. In the past years many types of activities -- single
actions, two-person interactions or ego-centric activities, to name a few --
have been analyzed. Whereas traditional methods treat such types of activities
separately, an autonomous robot should be able to detect and recognize multiple
types of activities to effectively fulfill its tasks. We propose a method that
is intrinsically able to detect and recognize activities of different types
that happen in sequence or concurrently. We present a new unified descriptor,
called Relation History Image (RHI), which can be extracted from all the
activity types we are interested in. We then formulate an optimization
procedure to detect and recognize activities of different types. We apply our
approach to a new dataset recorded from a robot-centric perspective and
systematically evaluate its quality compared to multiple baselines. Finally, we
show the efficacy of the RHI descriptor on publicly available datasets
performing extensive comparisons.
|
1509.09132
|
Adam Hackett
|
A. Hackett, D. Cellai, S. Gómez, A. Arenas, and J. P. Gleeson
|
Bond percolation on multiplex networks
|
8 pages, 4 figures
|
Phys. Rev. X 6, 021002 (2016)
|
10.1103/PhysRevX.6.021002
| null |
physics.soc-ph cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an analytical approach for bond percolation on multiplex networks
and use it to determine the expected size of the giant connected component and
the value of the critical bond occupation probability in these networks. We
advocate the relevance of these tools to the modeling of multilayer robustness
and contribute to the debate on whether any benefit is to be yielded from
studying a full multiplex structure as opposed to its monoplex projection,
especially in the seemingly irrelevant case of a bond occupation probability
that does not depend on the layer. Although we find that in many cases the
predictions of our theory for multiplex networks coincide with previously
derived results for monoplex networks, we also uncover the remarkable result
that for a certain class of multiplex networks, well described by our theory,
new critical phenomena occur as multiple percolation phase transitions are
present. We provide an instance of this phenomenon in a multiplex network
constructed from London rail and European air transportation datasets.
|
[
{
"version": "v1",
"created": "Wed, 30 Sep 2015 11:41:27 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jan 2016 12:48:06 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Apr 2016 16:12:33 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Hackett",
"A.",
""
],
[
"Cellai",
"D.",
""
],
[
"Gómez",
"S.",
""
],
[
"Arenas",
"A.",
""
],
[
"Gleeson",
"J. P.",
""
]
] |
TITLE: Bond percolation on multiplex networks
ABSTRACT: We present an analytical approach for bond percolation on multiplex networks
and use it to determine the expected size of the giant connected component and
the value of the critical bond occupation probability in these networks. We
advocate the relevance of these tools to the modeling of multilayer robustness
and contribute to the debate on whether any benefit is to be yielded from
studying a full multiplex structure as opposed to its monoplex projection,
especially in the seemingly irrelevant case of a bond occupation probability
that does not depend on the layer. Although we find that in many cases the
predictions of our theory for multiplex networks coincide with previously
derived results for monoplex networks, we also uncover the remarkable result
that for a certain class of multiplex networks, well described by our theory,
new critical phenomena occur as multiple percolation phase transitions are
present. We provide an instance of this phenomenon in a multiplex network
constructed from London rail and European air transportation datasets.
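The paper above treats bond percolation on multiplex networks analytically; a simple Monte Carlo counterpart on a two-layer multiplex (layers sharing one node set) can be sketched with networkx as below. The layer sizes, densities, and trial counts are arbitrary stand-ins, not the London rail / European air data.

```python
import random
import networkx as nx

def giant_component_fraction(layers, p, seed=None):
    """Bond percolation on a multiplex: keep each edge of each layer with
    probability p, merge the surviving edges over the shared node set, and
    return the relative size of the largest connected component."""
    rng = random.Random(seed)
    nodes = set().union(*(g.nodes for g in layers))
    merged = nx.Graph()
    merged.add_nodes_from(nodes)
    for g in layers:
        for u, v in g.edges:
            if rng.random() < p:        # bond is occupied
                merged.add_edge(u, v)
    largest = max(nx.connected_components(merged), key=len)
    return len(largest) / len(nodes)

# Two illustrative layers on the same 1000 nodes, e.g. stand-ins for a rail
# layer and an air layer; average a few trials per occupation probability p.
n = 1000
layer_a = nx.erdos_renyi_graph(n, 3.0 / n, seed=1)
layer_b = nx.erdos_renyi_graph(n, 2.0 / n, seed=2)
for p in (0.2, 0.4, 0.6, 0.8):
    est = sum(giant_component_fraction([layer_a, layer_b], p, seed=t)
              for t in range(5)) / 5
    print(f"p={p:.1f}  giant component fraction ~ {est:.3f}")
```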
|
1511.03240
|
Andreas Geiger
|
Jun Xie and Martin Kiefel and Ming-Ting Sun and Andreas Geiger
|
Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer
|
10 pages in Conference on Computer Vision and Pattern Recognition
(CVPR), 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semantic annotations are vital for training models for object recognition,
semantic segmentation or scene understanding. Unfortunately, pixelwise
annotation of images at very large scale is labor-intensive and only little
labeled data is available, particularly at instance level and for street
scenes. In this paper, we propose to tackle this problem by lifting the
semantic instance labeling task from 2D into 3D. Given reconstructions from
stereo or laser data, we annotate static 3D scene elements with rough bounding
primitives and develop a model which transfers this information into the image
domain. We leverage our method to obtain 2D labels for a novel suburban video
dataset which we have collected, resulting in 400k semantic and instance image
annotations. A comparison of our method to state-of-the-art label transfer
baselines reveals that 3D information enables more efficient annotation while
at the same time resulting in improved accuracy and time-coherent labels.
|
[
{
"version": "v1",
"created": "Tue, 10 Nov 2015 19:56:01 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2016 07:08:11 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Xie",
"Jun",
""
],
[
"Kiefel",
"Martin",
""
],
[
"Sun",
"Ming-Ting",
""
],
[
"Geiger",
"Andreas",
""
]
] |
TITLE: Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer
ABSTRACT: Semantic annotations are vital for training models for object recognition,
semantic segmentation or scene understanding. Unfortunately, pixelwise
annotation of images at very large scale is labor-intensive and only little
labeled data is available, particularly at instance level and for street
scenes. In this paper, we propose to tackle this problem by lifting the
semantic instance labeling task from 2D into 3D. Given reconstructions from
stereo or laser data, we annotate static 3D scene elements with rough bounding
primitives and develop a model which transfers this information into the image
domain. We leverage our method to obtain 2D labels for a novel suburban video
dataset which we have collected, resulting in 400k semantic and instance image
annotations. A comparison of our method to state-of-the-art label transfer
baselines reveals that 3D information enables more efficient annotation while
at the same time resulting in improved accuracy and time-coherent labels.
|
1511.05197
|
Tsung-Yu Lin
|
Tsung-Yu Lin and Subhransu Maji
|
Visualizing and Understanding Deep Texture Representations
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A number of recent approaches have used deep convolutional neural networks
(CNNs) to build texture representations. Nevertheless, it is still unclear how
these models represent texture and invariances to categorical variations. This
work conducts a systematic evaluation of recent CNN-based texture descriptors
for recognition and attempts to understand the nature of invariances captured
by these representations. First we show that the recently proposed bilinear CNN
model [25] is an excellent general-purpose texture descriptor and compares
favorably to other CNN-based descriptors on various texture and scene
recognition benchmarks. The model is translationally invariant and obtains
better accuracy on the ImageNet dataset without requiring spatial jittering of
data compared to corresponding models trained with spatial jittering. Based on
recent work [13, 28] we propose a technique to visualize pre-images, providing
a means for understanding categorical properties that are captured by these
representations. Finally, we show preliminary results on how a unified
parametric model of texture analysis and synthesis can be used for
attribute-based image manipulation, e.g. to make an image more swirly,
honeycombed, or knitted. The source code and additional visualizations are
available at http://vis-www.cs.umass.edu/texture
|
[
{
"version": "v1",
"created": "Mon, 16 Nov 2015 22:01:16 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2016 16:37:46 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Lin",
"Tsung-Yu",
""
],
[
"Maji",
"Subhransu",
""
]
] |
TITLE: Visualizing and Understanding Deep Texture Representations
ABSTRACT: A number of recent approaches have used deep convolutional neural networks
(CNNs) to build texture representations. Nevertheless, it is still unclear how
these models represent texture and invariances to categorical variations. This
work conducts a systematic evaluation of recent CNN-based texture descriptors
for recognition and attempts to understand the nature of invariances captured
by these representations. First we show that the recently proposed bilinear CNN
model [25] is an excellent general-purpose texture descriptor and compares
favorably to other CNN-based descriptors on various texture and scene
recognition benchmarks. The model is translationally invariant and obtains
better accuracy on the ImageNet dataset without requiring spatial jittering of
data compared to corresponding models trained with spatial jittering. Based on
recent work [13, 28] we propose a technique to visualize pre-images, providing
a means for understanding categorical properties that are captured by these
representations. Finally, we show preliminary results on how a unified
parametric model of texture analysis and synthesis can be used for
attribute-based image manipulation, e.g. to make an image more swirly,
honeycombed, or knitted. The source code and additional visualizations are
available at http://vis-www.cs.umass.edu/texture
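For context, the bilinear (B-CNN) texture descriptor discussed above is, at its core, an orderless pooling of outer products of CNN feature maps followed by signed square-root and L2 normalization. A minimal NumPy sketch of that pooling step, assuming the feature maps have already been extracted, is shown below; it illustrates the descriptor construction rather than reproducing the authors' code.

```python
import numpy as np

def bilinear_pool(feat_a, feat_b=None, eps=1e-12):
    """Orderless bilinear pooling of convolutional feature maps.

    feat_a, feat_b: (H, W, C) feature maps from one or two CNN streams
    (feat_b defaults to feat_a, the symmetric case used for textures).
    Returns a signed-sqrt, L2-normalized vector of length C*C.
    """
    if feat_b is None:
        feat_b = feat_a
    a = feat_a.reshape(-1, feat_a.shape[-1])      # (H*W, C)
    b = feat_b.reshape(-1, feat_b.shape[-1])
    desc = (a.T @ b) / a.shape[0]                 # average of outer products
    desc = desc.ravel()
    desc = np.sign(desc) * np.sqrt(np.abs(desc))  # signed square root
    return desc / (np.linalg.norm(desc) + eps)    # L2 normalization
```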
|
1511.06062
|
Yang Gao
|
Yang Gao, Oscar Beijbom, Ning Zhang, Trevor Darrell
|
Compact Bilinear Pooling
|
Camera ready version for CVPR
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bilinear models have been shown to achieve impressive performance on a wide
range of visual tasks, such as semantic segmentation, fine grained recognition
and face recognition. However, bilinear features are high dimensional,
typically on the order of hundreds of thousands to a few million, which makes
them impractical for subsequent analysis. We propose two compact bilinear
representations with the same discriminative power as the full bilinear
representation but with only a few thousand dimensions. Our compact
representations allow back-propagation of classification errors enabling an
end-to-end optimization of the visual recognition system. The compact bilinear
representations are derived through a novel kernelized analysis of bilinear
pooling, which provides insights into the discriminative power of bilinear
pooling, and a platform for further research in compact pooling methods.
Experiments illustrate the utility of the proposed representations for
image classification and few-shot learning across several datasets.
|
[
{
"version": "v1",
"created": "Thu, 19 Nov 2015 05:34:35 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2016 01:59:15 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Gao",
"Yang",
""
],
[
"Beijbom",
"Oscar",
""
],
[
"Zhang",
"Ning",
""
],
[
"Darrell",
"Trevor",
""
]
] |
TITLE: Compact Bilinear Pooling
ABSTRACT: Bilinear models have been shown to achieve impressive performance on a wide
range of visual tasks, such as semantic segmentation, fine grained recognition
and face recognition. However, bilinear features are high dimensional,
typically on the order of hundreds of thousands to a few million, which makes
them impractical for subsequent analysis. We propose two compact bilinear
representations with the same discriminative power as the full bilinear
representation but with only a few thousand dimensions. Our compact
representations allow back-propagation of classification errors enabling an
end-to-end optimization of the visual recognition system. The compact bilinear
representations are derived through a novel kernelized analysis of bilinear
pooling, which provides insights into the discriminative power of bilinear
pooling, and a platform for further research in compact pooling methods.
Experiments illustrate the utility of the proposed representations for
image classification and few-shot learning across several datasets.
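One standard construction for compact bilinear features in this line of work is the Tensor Sketch: count-sketch the input twice and multiply in the Fourier domain, which approximates a count sketch of the outer product. The sketch below is an independent illustration of that idea, not the authors' implementation; in practice the hash and sign parameters would be drawn once and shared across all inputs.

```python
import numpy as np

def make_sketch_params(c, d, seed):
    """Random hash buckets and signs for a count sketch of a length-c input.
    In practice these are drawn once and reused for every input vector."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, d, size=c), rng.choice([-1.0, 1.0], size=c)

def count_sketch(x, h, s, d):
    """Project a length-c vector into d dimensions via count sketch."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)  # scatter-add signed entries into hash buckets
    return y

def compact_bilinear(x, d=4096, seed=0):
    """Tensor Sketch approximation of the flattened outer product x x^T.

    The circular convolution of two independent count sketches of x equals,
    in expectation, a count sketch of the outer product; it is computed here
    via the FFT.
    """
    c = x.shape[0]
    h1, s1 = make_sketch_params(c, d, seed)
    h2, s2 = make_sketch_params(c, d, seed + 1)
    f1 = np.fft.rfft(count_sketch(x, h1, s1, d))
    f2 = np.fft.rfft(count_sketch(x, h2, s2, d))
    return np.fft.irfft(f1 * f2, n=d)
```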
|
1512.06974
|
Ishan Misra
|
Ishan Misra and C. Lawrence Zitnick and Margaret Mitchell and Ross
Girshick
|
Seeing through the Human Reporting Bias: Visual Classifiers from Noisy
Human-Centric Labels
|
To appear in CVPR 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When human annotators are given a choice about what to label in an image,
they apply their own subjective judgments on what to ignore and what to
mention. We refer to these noisy "human-centric" annotations as exhibiting
human reporting bias. Examples of such annotations include image tags and
keywords found on photo sharing sites, or in datasets containing image
captions. In this paper, we use these noisy annotations for learning visually
correct image classifiers. Such annotations do not use consistent vocabulary,
and miss a significant amount of the information present in an image; however,
we demonstrate that the noise in these annotations exhibits structure and can
be modeled. We propose an algorithm to decouple the human reporting bias from
the correct visually grounded labels. Our results are highly interpretable for
reporting "what's in the image" versus "what's worth saying." We demonstrate
the algorithm's efficacy along a variety of metrics and datasets, including MS
COCO and Yahoo Flickr 100M. We show significant improvements over traditional
algorithms for both image classification and image captioning, doubling the
performance of existing methods in some cases.
|
[
{
"version": "v1",
"created": "Tue, 22 Dec 2015 07:28:06 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2016 19:58:29 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Misra",
"Ishan",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Mitchell",
"Margaret",
""
],
[
"Girshick",
"Ross",
""
]
] |
TITLE: Seeing through the Human Reporting Bias: Visual Classifiers from Noisy
Human-Centric Labels
ABSTRACT: When human annotators are given a choice about what to label in an image,
they apply their own subjective judgments on what to ignore and what to
mention. We refer to these noisy "human-centric" annotations as exhibiting
human reporting bias. Examples of such annotations include image tags and
keywords found on photo sharing sites, or in datasets containing image
captions. In this paper, we use these noisy annotations for learning visually
correct image classifiers. Such annotations do not use consistent vocabulary,
and miss a significant amount of the information present in an image; however,
we demonstrate that the noise in these annotations exhibits structure and can
be modeled. We propose an algorithm to decouple the human reporting bias from
the correct visually grounded labels. Our results are highly interpretable for
reporting "what's in the image" versus "what's worth saying." We demonstrate
the algorithm's efficacy along a variety of metrics and datasets, including MS
COCO and Yahoo Flickr 100M. We show significant improvements over traditional
algorithms for both image classification and image captioning, doubling the
performance of existing methods in some cases.
|
1602.00134
|
Shih-En Wei
|
Shih-En Wei, Varun Ramakrishna, Takeo Kanade, Yaser Sheikh
|
Convolutional Pose Machines
|
camera ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pose Machines provide a sequential prediction framework for learning rich
implicit spatial models. In this work we show a systematic design for how
convolutional networks can be incorporated into the pose machine framework for
learning image features and image-dependent spatial models for the task of pose
estimation. The contribution of this paper is to implicitly model long-range
dependencies between variables in structured prediction tasks such as
articulated pose estimation. We achieve this by designing a sequential
architecture composed of convolutional networks that directly operate on belief
maps from previous stages, producing increasingly refined estimates for part
locations, without the need for explicit graphical model-style inference. Our
approach addresses the characteristic difficulty of vanishing gradients during
training by providing a natural learning objective function that enforces
intermediate supervision, thereby replenishing back-propagated gradients and
conditioning the learning procedure. We demonstrate state-of-the-art
performance and outperform competing methods on standard benchmarks including
the MPII, LSP, and FLIC datasets.
|
[
{
"version": "v1",
"created": "Sat, 30 Jan 2016 16:15:28 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Feb 2016 04:58:41 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Mar 2016 10:22:17 GMT"
},
{
"version": "v4",
"created": "Tue, 12 Apr 2016 03:31:53 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Wei",
"Shih-En",
""
],
[
"Ramakrishna",
"Varun",
""
],
[
"Kanade",
"Takeo",
""
],
[
"Sheikh",
"Yaser",
""
]
] |
TITLE: Convolutional Pose Machines
ABSTRACT: Pose Machines provide a sequential prediction framework for learning rich
implicit spatial models. In this work we show a systematic design for how
convolutional networks can be incorporated into the pose machine framework for
learning image features and image-dependent spatial models for the task of pose
estimation. The contribution of this paper is to implicitly model long-range
dependencies between variables in structured prediction tasks such as
articulated pose estimation. We achieve this by designing a sequential
architecture composed of convolutional networks that directly operate on belief
maps from previous stages, producing increasingly refined estimates for part
locations, without the need for explicit graphical model-style inference. Our
approach addresses the characteristic difficulty of vanishing gradients during
training by providing a natural learning objective function that enforces
intermediate supervision, thereby replenishing back-propagated gradients and
conditioning the learning procedure. We demonstrate state-of-the-art
performance and outperform competing methods on standard benchmarks including
the MPII, LSP, and FLIC datasets.
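The intermediate supervision described above amounts to attaching a loss to the belief maps of every stage rather than only the last one. A minimal sketch of such a per-stage squared-error objective is given below, assuming the ground-truth maps are Gaussian heatmaps as is common; it is a simplification, not the authors' training code.

```python
import numpy as np

def intermediate_supervision_loss(stage_belief_maps, target_maps):
    """Sum of per-stage squared errors between predicted and ideal belief maps.

    stage_belief_maps: list of (P, H, W) arrays, one per stage (P parts).
    target_maps:       (P, H, W) array of ground-truth heatmaps.
    Supervising every stage keeps gradients strong in the early stages,
    which is the role intermediate supervision plays in this architecture.
    """
    return sum(float(np.mean((b - target_maps) ** 2))
               for b in stage_belief_maps)
```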
|
1602.08977
|
Cristóbal Mackenzie
|
Cristóbal Mackenzie, Karim Pichara, Pavlos Protopapas
|
Clustering Based Feature Learning on Variable Stars
| null |
ApJ 820 (2016) 138
|
10.3847/0004-637X/820/2/138
| null |
astro-ph.SR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The success of automatic classification of variable stars strongly depends on
the lightcurve representation. Usually, lightcurves are represented as a vector
of many statistical descriptors designed by astronomers called features. These
descriptors commonly demand significant computational power to calculate,
require substantial research effort to develop and do not guarantee good
performance on the final classification task. Today, lightcurve representation
is not entirely automatic; algorithms that extract lightcurve features are
designed by humans and must be manually tuned up for every survey. The vast
amounts of data that will be generated in future surveys like LSST mean
astronomers must develop analysis pipelines that are both scalable and
automated. Recently, substantial efforts have been made in the machine learning
community to develop methods that prescind from expert-designed and manually
tuned features for features that are automatically learned from data. In this
work we present what is, to our knowledge, the first unsupervised feature
learning algorithm designed for variable stars. Our method first extracts a
large number of lightcurve subsequences from a given set of photometric data,
which are then clustered to find common local patterns in the time series.
Representatives of these patterns, called exemplars, are then used to transform
lightcurves of a labeled set into a new representation that can then be used to
train an automatic classifier. The proposed algorithm learns the features from
both labeled and unlabeled lightcurves, overcoming the bias generated when the
learning process is done only with labeled data. We test our method on MACHO
and OGLE datasets; the results show that the classification performance we
achieve is as good and in some cases better than the performance achieved using
traditional features, while the computational cost is significantly lower.
|
[
{
"version": "v1",
"created": "Mon, 29 Feb 2016 14:26:17 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Mackenzie",
"Cristóbal",
""
],
[
"Pichara",
"Karim",
""
],
[
"Protopapas",
"Pavlos",
""
]
] |
TITLE: Clustering Based Feature Learning on Variable Stars
ABSTRACT: The success of automatic classification of variable stars strongly depends on
the lightcurve representation. Usually, lightcurves are represented as a vector
of many statistical descriptors designed by astronomers called features. These
descriptors commonly demand significant computational power to calculate,
require substantial research effort to develop and do not guarantee good
performance on the final classification task. Today, lightcurve representation
is not entirely automatic; algorithms that extract lightcurve features are
designed by humans and must be manually tuned up for every survey. The vast
amounts of data that will be generated in future surveys like LSST mean
astronomers must develop analysis pipelines that are both scalable and
automated. Recently, substantial efforts have been made in the machine learning
community to develop methods that prescind from expert-designed and manually
tuned features for features that are automatically learned from data. In this
work we present what is, to our knowledge, the first unsupervised feature
learning algorithm designed for variable stars. Our method first extracts a
large number of lightcurve subsequences from a given set of photometric data,
which are then clustered to find common local patterns in the time series.
Representatives of these patterns, called exemplars, are then used to transform
lightcurves of a labeled set into a new representation that can then be used to
train an automatic classifier. The proposed algorithm learns the features from
both labeled and unlabeled lightcurves, overcoming the bias generated when the
learning process is done only with labeled data. We test our method on MACHO
and OGLE datasets; the results show that the classification performance we
achieve is as good and in some cases better than the performance achieved using
traditional features, while the computational cost is significantly lower.
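A generic bag-of-exemplars pipeline in the spirit of the abstract above, extracting normalized light-curve subsequences, clustering them, and encoding each light curve against the cluster centers, might look like the sketch below using scikit-learn. The window length, step, cluster count, and histogram encoding are illustrative choices and differ from the paper's actual clustering and encoding.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_subsequences(lightcurves, window=32, step=8):
    """Slide a fixed-length window over each (time-sorted) magnitude series
    and z-normalize each subsequence."""
    subs = []
    for mags in lightcurves:
        for start in range(0, len(mags) - window + 1, step):
            w = np.asarray(mags[start:start + window], dtype=float)
            subs.append((w - w.mean()) / (w.std() + 1e-8))
    return np.vstack(subs)

def learn_exemplars(lightcurves, n_exemplars=50, window=32, step=8, seed=0):
    """Cluster subsequences from (possibly unlabeled) light curves; the
    cluster centers act as exemplars of common local patterns."""
    subs = extract_subsequences(lightcurves, window, step)
    return KMeans(n_clusters=n_exemplars, n_init=10,
                  random_state=seed).fit(subs)

def encode(lightcurve, km, window=32, step=8):
    """Represent one light curve as a histogram of nearest exemplars,
    which can then feed any off-the-shelf classifier."""
    subs = extract_subsequences([lightcurve], window, step)
    hist = np.bincount(km.predict(subs), minlength=km.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)
```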
|
1604.02748
|
Yuncheng Li
|
Yuncheng Li, Yale Song, Liangliang Cao, Joel Tetreault, Larry
Goldberg, Alejandro Jaimes, Jiebo Luo
|
TGIF: A New Dataset and Benchmark on Animated GIF Description
|
CVPR 2016 Camera Ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the recent popularity of animated GIFs on social media, there is a need
for ways to index them with rich metadata. To advance research on animated GIF
understanding, we collected a new dataset, Tumblr GIF (TGIF), with 100K
animated GIFs from Tumblr and 120K natural language descriptions obtained via
crowdsourcing. The motivation for this work is to develop a testbed for image
sequence description systems, where the task is to generate natural language
descriptions for animated GIFs or video clips. To ensure a high quality
dataset, we developed a series of novel quality controls to validate free-form
text input from crowdworkers. We show that there is unambiguous association
between visual content and natural language descriptions in our dataset, making
it an ideal benchmark for the visual content captioning task. We perform
extensive statistical analyses to compare our dataset to existing image and
video description datasets. Next, we provide baseline results on the animated
GIF description task, using three representative techniques: nearest neighbor,
statistical machine translation, and recurrent neural networks. Finally, we
show that models fine-tuned from our animated GIF description dataset can be
helpful for automatic movie description.
|
[
{
"version": "v1",
"created": "Sun, 10 Apr 2016 22:15:14 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Apr 2016 01:47:19 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Li",
"Yuncheng",
""
],
[
"Song",
"Yale",
""
],
[
"Cao",
"Liangliang",
""
],
[
"Tetreault",
"Joel",
""
],
[
"Goldberg",
"Larry",
""
],
[
"Jaimes",
"Alejandro",
""
],
[
"Luo",
"Jiebo",
""
]
] |
TITLE: TGIF: A New Dataset and Benchmark on Animated GIF Description
ABSTRACT: With the recent popularity of animated GIFs on social media, there is a need
for ways to index them with rich metadata. To advance research on animated GIF
understanding, we collected a new dataset, Tumblr GIF (TGIF), with 100K
animated GIFs from Tumblr and 120K natural language descriptions obtained via
crowdsourcing. The motivation for this work is to develop a testbed for image
sequence description systems, where the task is to generate natural language
descriptions for animated GIFs or video clips. To ensure a high quality
dataset, we developed a series of novel quality controls to validate free-form
text input from crowdworkers. We show that there is unambiguous association
between visual content and natural language descriptions in our dataset, making
it an ideal benchmark for the visual content captioning task. We perform
extensive statistical analyses to compare our dataset to existing image and
video description datasets. Next, we provide baseline results on the animated
GIF description task, using three representative techniques: nearest neighbor,
statistical machine translation, and recurrent neural networks. Finally, we
show that models fine-tuned from our animated GIF description dataset can be
helpful for automatic movie description.
|
1604.03227
|
Jason Kuen
|
Jason Kuen, Zhenhua Wang, Gang Wang
|
Recurrent Attentional Networks for Saliency Detection
|
CVPR 2016
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional-deconvolution networks can be adopted to perform end-to-end
saliency detection. However, they do not work well with objects of multiple scales.
To overcome such a limitation, in this work, we propose a recurrent attentional
convolutional-deconvolution network (RACDNN). Using spatial transformer and
recurrent network units, RACDNN is able to iteratively attend to selected image
sub-regions to perform saliency refinement progressively. Besides tackling the
scale problem, RACDNN can also learn context-aware features from past
iterations to enhance saliency refinement in future iterations. Experiments on
several challenging saliency detection datasets validate the effectiveness of
RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection
methods.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2016 03:03:04 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Kuen",
"Jason",
""
],
[
"Wang",
"Zhenhua",
""
],
[
"Wang",
"Gang",
""
]
] |
TITLE: Recurrent Attentional Networks for Saliency Detection
ABSTRACT: Convolutional-deconvolution networks can be adopted to perform end-to-end
saliency detection. However, they do not work well with objects of multiple scales.
To overcome such a limitation, in this work, we propose a recurrent attentional
convolutional-deconvolution network (RACDNN). Using spatial transformer and
recurrent network units, RACDNN is able to iteratively attend to selected image
sub-regions to perform saliency refinement progressively. Besides tackling the
scale problem, RACDNN can also learn context-aware features from past
iterations to enhance saliency refinement in future iterations. Experiments on
several challenging saliency detection datasets validate the effectiveness of
RACDNN, and show that RACDNN outperforms state-of-the-art saliency detection
methods.
|
1604.03247
|
Dinesh Govindaraj
|
Dinesh Govindaraj
|
Thesis: Multiple Kernel Learning for Object Categorization
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object Categorization is a challenging problem, especially when the images
have cluttered backgrounds, occlusions or different lighting conditions. In the
past, many descriptors have been proposed which aid object categorization even
in such adverse conditions. Each descriptor has its own merits and de-merits.
Some descriptors are invariant to transformations while the others are more
discriminative. Past research has shown that employing multiple descriptors
rather than any single descriptor leads to better recognition. The problem of
learning the optimal combination of the available descriptors for a particular
classification task is studied. Multiple Kernel Learning (MKL) framework has
been developed for learning an optimal combination of descriptors for object
categorization. Existing MKL formulations often employ block l-1 norm
regularization which is equivalent to selecting a single kernel from a library
of kernels. Since essentially a single descriptor is selected, the existing
formulations may be sub-optimal for object categorization. An MKL formulation
based on block l-infinity norm regularization has been developed, which chooses
an optimal combination of kernels as opposed to selecting a single kernel. A
Composite Multiple Kernel Learning (CKL) formulation based on mixed l-infinity
and l-1 norm regularization has been developed. These formulations result in
Second Order Cone Programs (SOCPs). Other efficient alternative algorithms for
these formulations have been implemented. Empirical results on benchmark
datasets show significant improvement using these new MKL formulations.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2016 04:56:24 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Govindaraj",
"Dinesh",
""
]
] |
TITLE: Thesis: Multiple Kernel Learning for Object Categorization
ABSTRACT: Object Categorization is a challenging problem, especially when the images
have cluttered backgrounds, occlusions or different lighting conditions. In the
past, many descriptors have been proposed which aid object categorization even
in such adverse conditions. Each descriptor has its own merits and de-merits.
Some descriptors are invariant to transformations while the others are more
discriminative. Past research has shown that employing multiple descriptors
rather than any single descriptor leads to better recognition. The problem of
learning the optimal combination of the available descriptors for a particular
classification task is studied. Multiple Kernel Learning (MKL) framework has
been developed for learning an optimal combination of descriptors for object
categorization. Existing MKL formulations often employ block l-1 norm
regularization which is equivalent to selecting a single kernel from a library
of kernels. Since essentially a single descriptor is selected, the existing
formulations may be sub-optimal for object categorization. An MKL formulation
based on block l-infinity norm regularization has been developed, which chooses
an optimal combination of kernels as opposed to selecting a single kernel. A
Composite Multiple Kernel Learning (CKL) formulation based on mixed l-infinity
and l-1 norm regularization has been developed. These formulations result in
Second Order Cone Programs (SOCPs). Other efficient alternative algorithms for
these formulations have been implemented. Empirical results on benchmark
datasets show significant improvement using these new MKL formulations.
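As a small illustration of the multiple-kernel setting above, the sketch below evaluates a convex combination of precomputed base kernels and feeds it to an SVM. Learning the combination weights under an l-1 or l-infinity block norm is exactly the optimization the thesis develops and is not reproduced here, so the weights are simply supplied by the caller; all data and kernel choices are toy assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def combined_kernel(X, Z, weights):
    """Convex combination of base kernels evaluated between X and Z.
    In MKL the weights would be learned; here they are given."""
    kernels = [
        rbf_kernel(X, Z, gamma=0.5),
        rbf_kernel(X, Z, gamma=0.05),
        polynomial_kernel(X, Z, degree=2),
    ]
    return sum(w * K for w, K in zip(weights, kernels))

# Toy usage with made-up data and uniform weights.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 8)), rng.integers(0, 2, 100)
X_test = rng.normal(size=(20, 8))
w = np.ones(3) / 3
clf = SVC(kernel="precomputed").fit(combined_kernel(X_train, X_train, w), y_train)
pred = clf.predict(combined_kernel(X_test, X_train, w))
```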
|
1604.03373
|
Jiaqian Yu
|
Jiaqian Yu (CVC, GALEN), Matthew Blaschko
|
A Convex Surrogate Operator for General Non-Modular Loss Functions
|
in The 19th International Conference on Artificial Intelligence and
Statistics, May 2016, Cadiz, Spain
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Empirical risk minimization frequently employs convex surrogates to
underlying discrete loss functions in order to achieve computational
tractability during optimization. However, classical convex surrogates can only
tightly bound modular loss functions, sub-modular functions or supermodular
functions separately while maintaining polynomial time computation. In this
work, a novel generic convex surrogate for general non-modular loss functions
is introduced, which provides for the first time a tractable solution for loss
functions that are neither super-modular nor submodular. This convex surrogate
is based on a submodular-supermodular decomposition for which the existence and
uniqueness is proven in this paper. It takes the sum of two convex surrogates
that separately bound the supermodular component and the submodular component
using slack-rescaling and the Lovász hinge, respectively. It is further
proven that this surrogate is convex, piecewise linear, an extension of the
loss function, and that its subgradient can be computed in polynomial time.
Empirical results are reported on a non-submodular loss based on the
Sørensen-Dice difference function, and a real-world face track dataset
with tens of thousands of frames, demonstrating the improved performance,
efficiency, and scalability of the novel convex surrogate.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2016 12:31:59 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Yu",
"Jiaqian",
"",
"CVC, GALEN"
],
[
"Blaschko",
"Matthew",
""
]
] |
TITLE: A Convex Surrogate Operator for General Non-Modular Loss Functions
ABSTRACT: Empirical risk minimization frequently employs convex surrogates to
underlying discrete loss functions in order to achieve computational
tractability during optimization. However, classical convex surrogates can only
tightly bound modular loss functions, submodular functions, or supermodular
functions separately while maintaining polynomial-time computation. In this
work, a novel generic convex surrogate for general non-modular loss functions
is introduced, which provides for the first time a tractable solution for loss
functions that are neither supermodular nor submodular. This convex surrogate
is based on a submodular-supermodular decomposition for which the existence and
uniqueness is proven in this paper. It takes the sum of two convex surrogates
that separately bound the supermodular component and the submodular component
using slack-rescaling and the Lov{\'a}sz hinge, respectively. It is further
proven that this surrogate is convex, piecewise linear, and an extension of the
loss function, and admits polynomial-time subgradient computation.
Empirical results are reported on a non-submodular loss based on the
S{{\o}}rensen-Dice difference function, and a real-world face track dataset
with tens of thousands of frames, demonstrating the improved performance,
efficiency, and scalability of the novel convex surrogate.
|
1604.03427
|
Danica Greetham
|
Nathaniel Charlton, Colin Singleton, Danica Vukadinovi\'c Greetham
|
In the mood: the dynamics of collective sentiments on Twitter
| null | null | null | null |
cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the relationship between the sentiment levels of Twitter users and
the evolving network structure that the users created by @-mentioning each
other. We use a large dataset of tweets to which we apply three sentiment
scoring algorithms, including the open source SentiStrength program.
Specifically we make three contributions. Firstly we find that people who have
potentially the largest communication reach (according to a dynamic centrality
measure) use sentiment differently from the average user: for example, they use
positive sentiment more often and negative sentiment less often. Secondly we
find that when we follow structurally stable Twitter communities over a period
of months, their sentiment levels are also stable, and sudden changes in
community sentiment from one day to the next can in most cases be traced to
external events affecting the community. Thirdly, based on our findings, we
create and calibrate a simple agent-based model that is capable of reproducing
measures of emotive response comparable to those obtained from our empirical
dataset.
|
[
{
"version": "v1",
"created": "Mon, 11 Apr 2016 16:24:22 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Charlton",
"Nathaniel",
""
],
[
"Singleton",
"Colin",
""
],
[
"Greetham",
"Danica Vukadinović",
""
]
] |
TITLE: In the mood: the dynamics of collective sentiments on Twitter
ABSTRACT: We study the relationship between the sentiment levels of Twitter users and
the evolving network structure that the users created by @-mentioning each
other. We use a large dataset of tweets to which we apply three sentiment
scoring algorithms, including the open source SentiStrength program.
Specifically we make three contributions. Firstly we find that people who have
potentially the largest communication reach (according to a dynamic centrality
measure) use sentiment differently from the average user: for example, they use
positive sentiment more often and negative sentiment less often. Secondly we
find that when we follow structurally stable Twitter communities over a period
of months, their sentiment levels are also stable, and sudden changes in
community sentiment from one day to the next can in most cases be traced to
external events affecting the community. Thirdly, based on our findings, we
create and calibrate a simple agent-based model that is capable of reproducing
measures of emotive response comparable to those obtained from our empirical
dataset.
|
1604.03443
|
Li Jinxing
|
Jinxing Li, David Zhang, Yongcheng Li, and Jian Wu
|
Multi-modal Fusion for Diabetes Mellitus and Impaired Glucose Regulation
Detection
|
9 pages, 8 figures, 30 conference
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective and accurate diagnosis of Diabetes Mellitus (DM), as well as its
early stage Impaired Glucose Regulation (IGR), has attracted much attention
recently. Traditional Chinese Medicine (TCM) [3], [5], among others, has shown
that tongue, face and sublingual diagnosis, as a noninvasive method, is a
reasonable way to detect disease. However, most previous works only focus on a single
modality (tongue, face or sublingual) for diagnosis, although different
modalities may provide complementary information for the diagnosis of DM and
IGR. In this paper, we propose a novel multi-modal classification method to
discriminate between DM (or IGR) and healthy controls. Specifically, the tongue,
facial and sublingual images are first collected by using a non-invasive
capture device. The color, texture and geometry features of these three types
of images are then extracted, respectively. Finally, our multi-modal similar
and specific learning (MMSSL) approach is proposed to combine the tongue, facial
and sublingual features, which not only exploits the correlation among them but
also extracts their individual components. Experimental results on a
dataset consisting of 192 Healthy, 198 DM and 114 IGR samples (all samples were
obtained from Guangdong Provincial Hospital of Traditional Chinese Medicine)
substantiate the effectiveness and superiority of our proposed method for the
diagnosis of DM and IGR, compared to the case of using a single modality.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2016 15:31:52 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Li",
"Jinxing",
""
],
[
"Zhang",
"David",
""
],
[
"Li",
"Yongcheng",
""
],
[
"Wu",
"Jian",
""
]
] |
TITLE: Multi-modal Fusion for Diabetes Mellitus and Impaired Glucose Regulation
Detection
ABSTRACT: Effective and accurate diagnosis of Diabetes Mellitus (DM), as well as its
early stage Impaired Glucose Regulation (IGR), has attracted much attention
recently. Traditional Chinese Medicine (TCM) [3], [5], among others, has shown
that tongue, face and sublingual diagnosis, as a noninvasive method, is a
reasonable way to detect disease. However, most previous works only focus on a single
modality (tongue, face or sublingual) for diagnosis, although different
modalities may provide complementary information for the diagnosis of DM and
IGR. In this paper, we propose a novel multi-modal classification method to
discriminate between DM (or IGR) and healthy controls. Specifically, the tongue,
facial and sublingual images are first collected by using a non-invasive
capture device. The color, texture and geometry features of these three types
of images are then extracted, respectively. Finally, our multi-modal similar
and specific learning (MMSSL) approach is proposed to combine the tongue, facial
and sublingual features, which not only exploits the correlation among them but
also extracts their individual components. Experimental results on a
dataset consisting of 192 Healthy, 198 DM and 114 IGR samples (all samples were
obtained from Guangdong Provincial Hospital of Traditional Chinese Medicine)
substantiate the effectiveness and superiority of our proposed method for the
diagnosis of DM and IGR, compared to the case of using a single modality.
|
1604.03518
|
Hyungtae Lee
|
Hyungtae Lee, Heesung Kwon, Ryan M. Robinson, and William D. Nothwang
|
DTM: Deformable Template Matching
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel template matching algorithm that can incorporate the concept of
deformable parts is presented in this paper. Unlike the deformable part model
(DPM) employed in object recognition, the proposed template-matching approach
called Deformable Template Matching (DTM) does not require a training step.
Instead, deformation is achieved by a set of predefined basic rules (e.g. the
left sub-patch cannot pass across the right patch). Experimental evaluation of
this new method using the PASCAL VOC 07 dataset demonstrated substantial
performance improvement over conventional template matching algorithms.
Additionally, to confirm the applicability of DTM, the concept is applied to
the generation of a rotation-invariant SIFT descriptor. Experimental evaluation
employing deformable matching of SIFT features shows an increased number of
matching features compared to conventional SIFT matching.
|
[
{
"version": "v1",
"created": "Tue, 12 Apr 2016 18:44:25 GMT"
}
] | 2016-04-13T00:00:00 |
[
[
"Lee",
"Hyungtae",
""
],
[
"Kwon",
"Heesung",
""
],
[
"Robinson",
"Ryan M.",
""
],
[
"Nothwang",
"William D.",
""
]
] |
TITLE: DTM: Deformable Template Matching
ABSTRACT: A novel template matching algorithm that can incorporate the concept of
deformable parts is presented in this paper. Unlike the deformable part model
(DPM) employed in object recognition, the proposed template-matching approach
called Deformable Template Matching (DTM) does not require a training step.
Instead, deformation is achieved by a set of predefined basic rules (e.g. the
left sub-patch cannot pass across the right patch). Experimental evaluation of
this new method using the PASCAL VOC 07 dataset demonstrated substantial
performance improvement over conventional template matching algorithms.
Additionally, to confirm the applicability of DTM, the concept is applied to
the generation of a rotation-invariant SIFT descriptor. Experimental evaluation
employing deformable matching of SIFT features shows an increased number of
matching features compared to conventional SIFT matching.
|