id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1603.09188 | Spandana Gella | Spandana Gella, Mirella Lapata, Frank Keller | Unsupervised Visual Sense Disambiguation for Verbs using Multimodal
Embeddings | 11 pages, NAACL-HLT 2016 | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new task, visual sense disambiguation for verbs: given an
image and a verb, assign the correct sense of the verb, i.e., the one that
describes the action depicted in the image. Just as textual word sense
disambiguation is useful for a wide range of NLP tasks, visual sense
disambiguation can be useful for multimodal tasks such as image retrieval,
image description, and text illustration. We introduce VerSe, a new dataset
that augments existing multimodal datasets (COCO and TUHOI) with sense labels.
We propose an unsupervised algorithm based on Lesk which performs visual sense
disambiguation using textual, visual, or multimodal embeddings. We find that
textual embeddings perform well when gold-standard textual annotations (object
labels and image descriptions) are available, while multimodal embeddings
perform well on unannotated images. We also verify our findings by using the
textual and multimodal embeddings as features in a supervised setting and
analyse the performance of the visual sense disambiguation task. VerSe is made
publicly available and can be downloaded at:
https://github.com/spandanagella/verse.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 13:43:38 GMT"
}
] | 2016-03-31T00:00:00 | [
[
"Gella",
"Spandana",
""
],
[
"Lapata",
"Mirella",
""
],
[
"Keller",
"Frank",
""
]
] | TITLE: Unsupervised Visual Sense Disambiguation for Verbs using Multimodal
Embeddings
ABSTRACT: We introduce a new task, visual sense disambiguation for verbs: given an
image and a verb, assign the correct sense of the verb, i.e., the one that
describes the action depicted in the image. Just as textual word sense
disambiguation is useful for a wide range of NLP tasks, visual sense
disambiguation can be useful for multimodal tasks such as image retrieval,
image description, and text illustration. We introduce VerSe, a new dataset
that augments existing multimodal datasets (COCO and TUHOI) with sense labels.
We propose an unsupervised algorithm based on Lesk which performs visual sense
disambiguation using textual, visual, or multimodal embeddings. We find that
textual embeddings perform well when gold-standard textual annotations (object
labels and image descriptions) are available, while multimodal embeddings
perform well on unannotated images. We also verify our findings by using the
textual and multimodal embeddings as features in a supervised setting and
analyse the performance of the visual sense disambiguation task. VerSe is made
publicly available and can be downloaded at:
https://github.com/spandanagella/verse.
| new_dataset | 0.960547 |
1409.4327 | Dinesh Jayaraman | Dinesh Jayaraman and Kristen Grauman | Zero Shot Recognition with Unreliable Attributes | NIPS 2014 | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In principle, zero-shot learning makes it possible to train a recognition
model simply by specifying the category's attributes. For example, with
classifiers for generic attributes like \emph{striped} and \emph{four-legged},
one can construct a classifier for the zebra category by enumerating which
properties it possesses---even without providing zebra training images. In
practice, however, the standard zero-shot paradigm suffers because attribute
predictions in novel images are hard to get right. We propose a novel random
forest approach to train zero-shot models that explicitly accounts for the
unreliability of attribute predictions. By leveraging statistics about each
attribute's error tendencies, our method obtains more robust discriminative
models for the unseen classes. We further devise extensions to handle the
few-shot scenario and unreliable attribute descriptions. On three datasets, we
demonstrate the benefit for visual category learning with zero or few training
examples, a critical domain for rare categories or categories defined on the
fly.
| [
{
"version": "v1",
"created": "Mon, 15 Sep 2014 16:56:07 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2016 19:33:17 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Jayaraman",
"Dinesh",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Zero Shot Recognition with Unreliable Attributes
ABSTRACT: In principle, zero-shot learning makes it possible to train a recognition
model simply by specifying the category's attributes. For example, with
classifiers for generic attributes like \emph{striped} and \emph{four-legged},
one can construct a classifier for the zebra category by enumerating which
properties it possesses---even without providing zebra training images. In
practice, however, the standard zero-shot paradigm suffers because attribute
predictions in novel images are hard to get right. We propose a novel random
forest approach to train zero-shot models that explicitly accounts for the
unreliability of attribute predictions. By leveraging statistics about each
attribute's error tendencies, our method obtains more robust discriminative
models for the unseen classes. We further devise extensions to handle the
few-shot scenario and unreliable attribute descriptions. On three datasets, we
demonstrate the benefit for visual category learning with zero or few training
examples, a critical domain for rare categories or categories defined on the
fly.
| no_new_dataset | 0.947332 |
1505.02206 | Dinesh Jayaraman | Dinesh Jayaraman and Kristen Grauman | Learning image representations tied to ego-motion | Supplementary material appended at end. In ICCV 2015 | null | null | null | cs.CV cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how images of objects and scenes behave in response to specific
ego-motions is a crucial aspect of proper visual development, yet existing
visual learning methods are conspicuously disconnected from the physical source
of their images. We propose to exploit proprioceptive motor signals to provide
unsupervised regularization in convolutional neural networks to learn visual
representations from egocentric video. Specifically, we enforce that our
learned features exhibit equivariance i.e. they respond predictably to
transformations associated with distinct ego-motions. With three datasets, we
show that our unsupervised feature learning approach significantly outperforms
previous approaches on visual recognition and next-best-view prediction tasks.
In the most challenging test, we show that features learned from video captured
on an autonomous driving platform improve large-scale scene recognition in
static images from a disjoint domain.
| [
{
"version": "v1",
"created": "Fri, 8 May 2015 23:15:00 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2016 19:30:18 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Jayaraman",
"Dinesh",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Learning image representations tied to ego-motion
ABSTRACT: Understanding how images of objects and scenes behave in response to specific
ego-motions is a crucial aspect of proper visual development, yet existing
visual learning methods are conspicuously disconnected from the physical source
of their images. We propose to exploit proprioceptive motor signals to provide
unsupervised regularization in convolutional neural networks to learn visual
representations from egocentric video. Specifically, we enforce that our
learned features exhibit equivariance i.e. they respond predictably to
transformations associated with distinct ego-motions. With three datasets, we
show that our unsupervised feature learning approach significantly outperforms
previous approaches on visual recognition and next-best-view prediction tasks.
In the most challenging test, we show that features learned from video captured
on an autonomous driving platform improve large-scale scene recognition in
static images from a disjoint domain.
| no_new_dataset | 0.946399 |
1511.06881 | Fangting Xia | Fangting Xia, Peng Wang, Liang-Chieh Chen, Alan L. Yuille | Zoom Better to See Clearer: Human and Object Parsing with Hierarchical
Auto-Zoom Net | A shortened version has been submitted to ECCV 2016 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parsing articulated objects, e.g. humans and animals, into semantic parts
(e.g. body, head and arms, etc.) from natural images is a challenging and
fundamental problem for computer vision. A big difficulty is the large
variability of scale and location for objects and their corresponding parts.
Even limited mistakes in estimating scale and location will degrade the parsing
output and cause errors in boundary details. To tackle these difficulties, we
propose a "Hierarchical Auto-Zoom Net" (HAZN) for object part parsing which
adapts to the local scales of objects and parts. HAZN is a sequence of two
"Auto-Zoom Net" (AZNs), each employing fully convolutional networks that
perform two tasks: (1) predict the locations and scales of object instances
(the first AZN) or their parts (the second AZN); (2) estimate the part scores
for predicted object instance or part regions. Our model can adaptively "zoom"
(resize) predicted image regions into their proper scales to refine the
parsing.
We conduct extensive experiments over the PASCAL part datasets on humans,
horses, and cows. For humans, our approach significantly outperforms the
state-of-the-arts by 5% mIOU and is especially better at segmenting small
instances and small parts. We obtain similar improvements for parsing cows and
horses over alternative methods. In summary, our strategy of first zooming into
objects and then zooming into parts is very effective. It also enables us to
process different regions of the image at different scales adaptively so that,
for example, we do not need to waste computational resources scaling the entire
image.
| [
{
"version": "v1",
"created": "Sat, 21 Nov 2015 13:32:26 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Nov 2015 00:39:14 GMT"
},
{
"version": "v3",
"created": "Mon, 30 Nov 2015 02:32:33 GMT"
},
{
"version": "v4",
"created": "Thu, 7 Jan 2016 23:48:34 GMT"
},
{
"version": "v5",
"created": "Mon, 28 Mar 2016 21:53:31 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Xia",
"Fangting",
""
],
[
"Wang",
"Peng",
""
],
[
"Chen",
"Liang-Chieh",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: Zoom Better to See Clearer: Human and Object Parsing with Hierarchical
Auto-Zoom Net
ABSTRACT: Parsing articulated objects, e.g. humans and animals, into semantic parts
(e.g. body, head and arms, etc.) from natural images is a challenging and
fundamental problem for computer vision. A big difficulty is the large
variability of scale and location for objects and their corresponding parts.
Even limited mistakes in estimating scale and location will degrade the parsing
output and cause errors in boundary details. To tackle these difficulties, we
propose a "Hierarchical Auto-Zoom Net" (HAZN) for object part parsing which
adapts to the local scales of objects and parts. HAZN is a sequence of two
"Auto-Zoom Net" (AZNs), each employing fully convolutional networks that
perform two tasks: (1) predict the locations and scales of object instances
(the first AZN) or their parts (the second AZN); (2) estimate the part scores
for predicted object instance or part regions. Our model can adaptively "zoom"
(resize) predicted image regions into their proper scales to refine the
parsing.
We conduct extensive experiments over the PASCAL part datasets on humans,
horses, and cows. For humans, our approach significantly outperforms the
state-of-the-arts by 5% mIOU and is especially better at segmenting small
instances and small parts. We obtain similar improvements for parsing cows and
horses over alternative methods. In summary, our strategy of first zooming into
objects and then zooming into parts is very effective. It also enables us to
process different regions of the image at different scales adaptively so that,
for example, we do not need to waste computational resources scaling the entire
image.
| no_new_dataset | 0.949716 |
1511.08418 | Maria Oliver | Maria Oliver, Gloria Haro, Mariella Dimiccoli, Baptiste Mazin and
Coloma Ballester | A Computational Model for Amodal Completion | null | null | 10.1007/s10851-016-0652-x | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a computational model to recover the most likely
interpretation of the 3D scene structure from a planar image, where some
objects may occlude others. The estimated scene interpretation is obtained by
integrating some global and local cues and provides both the complete
disoccluded objects that form the scene and their ordering according to depth.
Our method first computes several distal scenes which are compatible with the
proximal planar image. To compute these different hypothesized scenes, we
propose a perceptually inspired object disocclusion method, which works by
minimizing the Euler's elastica as well as by incorporating the relatability of
partially occluded contours and the convexity of the disoccluded objects. Then,
to estimate the preferred scene we rely on a Bayesian model and define
probabilities taking into account the global complexity of the objects in the
hypothesized scenes as well as the effort of bringing these objects in their
relative position in the planar image, which is also measured by an Euler's
elastica-based quantity. The model is illustrated with numerical experiments
on both synthetic and real images, showing the ability of our model to
reconstruct the occluded objects and the preferred perceptual order among them.
We also present results on images of the Berkeley dataset with provided
figure-ground ground-truth labeling.
| [
{
"version": "v1",
"created": "Thu, 26 Nov 2015 15:25:46 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2016 14:49:41 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Oliver",
"Maria",
""
],
[
"Haro",
"Gloria",
""
],
[
"Dimiccoli",
"Mariella",
""
],
[
"Mazin",
"Baptiste",
""
],
[
"Ballester",
"Coloma",
""
]
] | TITLE: A Computational Model for Amodal Completion
ABSTRACT: This paper presents a computational model to recover the most likely
interpretation of the 3D scene structure from a planar image, where some
objects may occlude others. The estimated scene interpretation is obtained by
integrating some global and local cues and provides both the complete
disoccluded objects that form the scene and their ordering according to depth.
Our method first computes several distal scenes which are compatible with the
proximal planar image. To compute these different hypothesized scenes, we
propose a perceptually inspired object disocclusion method, which works by
minimizing the Euler's elastica as well as by incorporating the relatability of
partially occluded contours and the convexity of the disoccluded objects. Then,
to estimate the preferred scene we rely on a Bayesian model and define
probabilities taking into account the global complexity of the objects in the
hypothesized scenes as well as the effort of bringing these objects in their
relative position in the planar image, which is also measured by an Euler's
elastica-based quantity. The model is illustrated with numerical experiments
on both synthetic and real images, showing the ability of our model to
reconstruct the occluded objects and the preferred perceptual order among them.
We also present results on images of the Berkeley dataset with provided
figure-ground ground-truth labeling.
| no_new_dataset | 0.949482 |
1512.06395 | Jaroslaw Szlichta | Mehdi Kargar, Lukasz Golab, Jaroslaw Szlichta | Effective Keyword Search in Graphs | 7 pages, 9 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a node-labeled graph, keyword search finds subtrees of the graph whose
nodes contain all of the query keywords. This provides a way to query graph
databases that neither requires mastery of a query language such as SPARQL, nor
a deep knowledge of the database schema. Previous work ranks answer trees using
combinations of structural and content-based metrics, such as path lengths
between keywords or relevance of the labels in the answer tree to the query
keywords. We propose two new ways to rank keyword search results over graphs.
The first takes node importance into account while the second is a bi-objective
optimization of edge weights and node importance. Since both of these problems
are NP-hard, we propose greedy algorithms to solve them, and experimentally
verify their effectiveness and efficiency on a real dataset.
| [
{
"version": "v1",
"created": "Sun, 20 Dec 2015 16:20:17 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jan 2016 16:59:38 GMT"
},
{
"version": "v3",
"created": "Wed, 3 Feb 2016 19:14:03 GMT"
},
{
"version": "v4",
"created": "Wed, 23 Mar 2016 19:54:12 GMT"
},
{
"version": "v5",
"created": "Tue, 29 Mar 2016 15:43:11 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Kargar",
"Mehdi",
""
],
[
"Golab",
"Lukasz",
""
],
[
"Szlichta",
"Jaroslaw",
""
]
] | TITLE: Effective Keyword Search in Graphs
ABSTRACT: In a node-labeled graph, keyword search finds subtrees of the graph whose
nodes contain all of the query keywords. This provides a way to query graph
databases that neither requires mastery of a query language such as SPARQL, nor
a deep knowledge of the database schema. Previous work ranks answer trees using
combinations of structural and content-based metrics, such as path lengths
between keywords or relevance of the labels in the answer tree to the query
keywords. We propose two new ways to rank keyword search results over graphs.
The first takes node importance into account while the second is a bi-objective
optimization of edge weights and node importance. Since both of these problems
are NP-hard, we propose greedy algorithms to solve them, and experimentally
verify their effectiveness and efficiency on a real dataset.
| no_new_dataset | 0.951233 |
1602.05388 | Muhammad Imran | Muhammad Imran, Prasenjit Mitra, Jaideep Srivastava | Cross-Language Domain Adaptation for Classifying Crisis-Related Short
Messages | ISCRAM 2016, 10 pages, 4 tables | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rapid crisis response requires real-time analysis of messages. After a
disaster happens, volunteers attempt to classify tweets to determine needs,
e.g., supplies, infrastructure damage, etc. Given labeled data, supervised
machine learning can help classify these messages. Scarcity of labeled data
causes poor performance in machine training. Can we reuse old tweets to train
classifiers? How can we choose labeled tweets for training? Specifically, we
study the usefulness of labeled data of past events. Do labeled tweets in
a different language help? We observe the performance of our classifiers trained
using different combinations of training sets obtained from past disasters. We
perform extensive experimentation on real crisis datasets and show that the
past labels are useful when both source and target events are of the same type
(e.g. both earthquakes). For similar languages (e.g., Italian and Spanish),
cross-language domain adaptation was useful; however, for different
languages (e.g., Italian and English), the performance decreased.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 12:29:56 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2016 07:18:43 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Imran",
"Muhammad",
""
],
[
"Mitra",
"Prasenjit",
""
],
[
"Srivastava",
"Jaideep",
""
]
] | TITLE: Cross-Language Domain Adaptation for Classifying Crisis-Related Short
Messages
ABSTRACT: Rapid crisis response requires real-time analysis of messages. After a
disaster happens, volunteers attempt to classify tweets to determine needs,
e.g., supplies, infrastructure damage, etc. Given labeled data, supervised
machine learning can help classify these messages. Scarcity of labeled data
causes poor performance in machine training. Can we reuse old tweets to train
classifiers? How can we choose labeled tweets for training? Specifically, we
study the usefulness of labeled data of past events. Do labeled tweets in
a different language help? We observe the performance of our classifiers trained
using different combinations of training sets obtained from past disasters. We
perform extensive experimentation on real crisis datasets and show that the
past labels are useful when both source and target events are of the same type
(e.g. both earthquakes). For similar languages (e.g., Italian and Spanish),
cross-language domain adaptation was useful; however, for different
languages (e.g., Italian and English), the performance decreased.
| no_new_dataset | 0.949949 |
1603.01774 | Behnam Ghavimi | Behnam Ghavimi (1,2), Philipp Mayr (1), Sahar Vahdati (2) and
Christoph Lange (2,3) ((1) GESIS Leibniz Institute for the Social Sciences,
(2) University of Bonn, (3) Fraunhofer IAIS) | Identifying and Improving Dataset References in Social Sciences Full
Texts | null | null | null | null | cs.DL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientific full text papers are usually stored in separate places from their
underlying research datasets. Authors typically make references to datasets by
mentioning them for example by using their titles and the year of publication.
However, in most cases explicit links that would provide readers with direct
access to referenced datasets are missing. Manually detecting references to
datasets in papers is time consuming and requires an expert in the domain of
the paper. In order to make explicit all links to datasets in papers that have
been published already, we suggest and evaluate a semi-automatic approach for
finding references to datasets in social sciences papers. Our approach does not
need a corpus of papers (no cold start problem) and it performs well on a small
test corpus (gold standard). Our approach achieved an F-measure of 0.84 for
identifying references in full texts and an F-measure of 0.83 for finding
correct matches of detected references in the da|ra dataset registry.
| [
{
"version": "v1",
"created": "Sun, 6 Mar 2016 01:09:08 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Mar 2016 12:36:27 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Ghavimi",
"Behnam",
""
],
[
"Mayr",
"Philipp",
""
],
[
"Vahdati",
"Sahar",
""
],
[
"Lange",
"Christoph",
""
]
] | TITLE: Identifying and Improving Dataset References in Social Sciences Full
Texts
ABSTRACT: Scientific full text papers are usually stored in separate places from their
underlying research datasets. Authors typically make references to datasets by
mentioning them for example by using their titles and the year of publication.
However, in most cases explicit links that would provide readers with direct
access to referenced datasets are missing. Manually detecting references to
datasets in papers is time consuming and requires an expert in the domain of
the paper. In order to make explicit all links to datasets in papers that have
been published already, we suggest and evaluate a semi-automatic approach for
finding references to datasets in social sciences papers. Our approach does not
need a corpus of papers (no cold start problem) and it performs well on a small
test corpus (gold standard). Our approach achieved an F-measure of 0.84 for
identifying references in full texts and an F-measure of 0.83 for finding
correct matches of detected references in the da|ra dataset registry.
| no_new_dataset | 0.943504 |
1603.08701 | Enrico Santus | Enrico Santus, Tin-Shing Chiu, Qin Lu, Alessandro Lenci, Chu-Ren Huang | What a Nerd! Beating Students and Vector Cosine in the ESL and TOEFL
Datasets | in LREC 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we claim that Vector Cosine, which is generally considered one
of the most efficient unsupervised measures for identifying word similarity in
Vector Space Models, can be outperformed by a completely unsupervised measure
that evaluates the extent of the intersection among the most associated
contexts of two target words, weighting such intersection according to the rank
of the shared contexts in the dependency ranked lists. This claim comes from
the hypothesis that similar words do not simply occur in similar contexts, but
they share a larger portion of their most relevant contexts compared to other
related words. To prove it, we describe and evaluate APSyn, a variant of
Average Precision that, independently of the adopted parameters, outperforms
the Vector Cosine and the co-occurrence on the ESL and TOEFL test sets. In the
best setting, APSyn reaches 0.73 accuracy on the ESL dataset and 0.70 accuracy
in the TOEFL dataset, beating therefore the non-English US college applicants
(whose average, as reported in the literature, is 64.50%) and several
state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 10:00:27 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Santus",
"Enrico",
""
],
[
"Chiu",
"Tin-Shing",
""
],
[
"Lu",
"Qin",
""
],
[
"Lenci",
"Alessandro",
""
],
[
"Huang",
"Chu-Ren",
""
]
] | TITLE: What a Nerd! Beating Students and Vector Cosine in the ESL and TOEFL
Datasets
ABSTRACT: In this paper, we claim that Vector Cosine, which is generally considered one
of the most efficient unsupervised measures for identifying word similarity in
Vector Space Models, can be outperformed by a completely unsupervised measure
that evaluates the extent of the intersection among the most associated
contexts of two target words, weighting such intersection according to the rank
of the shared contexts in the dependency ranked lists. This claim comes from
the hypothesis that similar words do not simply occur in similar contexts, but
they share a larger portion of their most relevant contexts compared to other
related words. To prove it, we describe and evaluate APSyn, a variant of
Average Precision that, independently of the adopted parameters, outperforms
the Vector Cosine and the co-occurrence on the ESL and TOEFL test sets. In the
best setting, APSyn reaches 0.73 accuracy on the ESL dataset and 0.70 accuracy
in the TOEFL dataset, beating therefore the non-English US college applicants
(whose average, as reported in the literature, is 64.50%) and several
state-of-the-art approaches.
| no_new_dataset | 0.944791 |
1603.08702 | Enrico Santus | Enrico Santus, Alessandro Lenci, Tin-Shing Chiu, Qin Lu, Chu-Ren Huang | Nine Features in a Random Forest to Learn Taxonomical Semantic Relations | in LREC 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ROOT9 is a supervised system for the classification of hypernyms, co-hyponyms
and random words that is derived from the already introduced ROOT13 (Santus et
al., 2016). It relies on a Random Forest algorithm and nine unsupervised
corpus-based features. We evaluate it with a 10-fold cross validation on 9,600
pairs, equally distributed among the three classes and involving several
Parts-Of-Speech (i.e. adjectives, nouns and verbs). When all the classes are
present, ROOT9 achieves an F1 score of 90.7%, against a baseline of 57.2%
(vector cosine). When the classification is binary, ROOT9 achieves the
following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%,
hypernyms-random 91.8% vs. 64.1% and co-hyponyms-random 97.8% vs. 79.4%. In
order to compare the performance with the state-of-the-art, we have also
evaluated ROOT9 in subsets of the Weeds et al. (2014) datasets, proving that it
is in fact competitive. Finally, we investigated whether the system learns the
semantic relation or it simply learns the prototypical hypernyms, as claimed by
Levy et al. (2015). The second possibility seems to be the most likely, even
though ROOT9 can be trained on negative examples (i.e., switched hypernyms) to
drastically reduce this bias.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 10:00:40 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Santus",
"Enrico",
""
],
[
"Lenci",
"Alessandro",
""
],
[
"Chiu",
"Tin-Shing",
""
],
[
"Lu",
"Qin",
""
],
[
"Huang",
"Chu-Ren",
""
]
] | TITLE: Nine Features in a Random Forest to Learn Taxonomical Semantic Relations
ABSTRACT: ROOT9 is a supervised system for the classification of hypernyms, co-hyponyms
and random words that is derived from the already introduced ROOT13 (Santus et
al., 2016). It relies on a Random Forest algorithm and nine unsupervised
corpus-based features. We evaluate it with a 10-fold cross validation on 9,600
pairs, equally distributed among the three classes and involving several
Parts-Of-Speech (i.e. adjectives, nouns and verbs). When all the classes are
present, ROOT9 achieves an F1 score of 90.7%, against a baseline of 57.2%
(vector cosine). When the classification is binary, ROOT9 achieves the
following results against the baseline: hypernyms-co-hyponyms 95.7% vs. 69.8%,
hypernyms-random 91.8% vs. 64.1% and co-hyponyms-random 97.8% vs. 79.4%. In
order to compare the performance with the state-of-the-art, we have also
evaluated ROOT9 in subsets of the Weeds et al. (2014) datasets, proving that it
is in fact competitive. Finally, we investigated whether the system learns the
semantic relation or it simply learns the prototypical hypernyms, as claimed by
Levy et al. (2015). The second possibility seems to be the most likely, even
though ROOT9 can be trained on negative examples (i.e., switched hypernyms) to
drastically reduce this bias.
| no_new_dataset | 0.95469 |
1603.08767 | Daniel Pop | Daniel Pop | Machine Learning and Cloud Computing: Survey of Distributed and SaaS
Solutions | This manuscript was originally published as IEAT Technical Report at
https://www.ieat.ro/technical-reports in 2012 | null | null | IEAT-TR-2012-1 | cs.DC cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Applying popular machine learning algorithms to large amounts of data raised
new challenges for ML practitioners. Traditional ML libraries do not
support processing of huge datasets well, so new approaches were needed.
Parallelization using modern parallel computing frameworks, such as MapReduce,
CUDA, or Dryad gained in popularity and acceptance, resulting in new ML
libraries developed on top of these frameworks. We will briefly introduce the
most prominent industrial and academic outcomes, such as Apache Mahout,
GraphLab or Jubatus.
We will investigate how the cloud computing paradigm has impacted the field of ML.
The first direction is popular statistics tools and libraries (the R system, Python)
deployed in the cloud. A second line of products is augmenting existing tools
with plugins that allow users to create a Hadoop cluster in the cloud and run
jobs on it. Next on the list are libraries of distributed implementations for
ML algorithms, and on-premise deployments of complex systems for data analytics
and data mining. The last approach on the radar of this survey is ML as
Software-as-a-Service, with several BigData start-ups (and large companies as well)
already opening their solutions to the market.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 13:29:35 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Pop",
"Daniel",
""
]
] | TITLE: Machine Learning and Cloud Computing: Survey of Distributed and SaaS
Solutions
ABSTRACT: Applying popular machine learning algorithms to large amounts of data raised
new challenges for ML practitioners. Traditional ML libraries do not
support processing of huge datasets well, so new approaches were needed.
Parallelization using modern parallel computing frameworks, such as MapReduce,
CUDA, or Dryad gained in popularity and acceptance, resulting in new ML
libraries developed on top of these frameworks. We will briefly introduce the
most prominent industrial and academic outcomes, such as Apache Mahout,
GraphLab or Jubatus.
We will investigate how the cloud computing paradigm has impacted the field of ML.
The first direction is popular statistics tools and libraries (the R system, Python)
deployed in the cloud. A second line of products is augmenting existing tools
with plugins that allow users to create a Hadoop cluster in the cloud and run
jobs on it. Next on the list are libraries of distributed implementations for
ML algorithms, and on-premise deployments of complex systems for data analytics
and data mining. The last approach on the radar of this survey is ML as
Software-as-a-Service, with several BigData start-ups (and large companies as well)
already opening their solutions to the market.
| no_new_dataset | 0.941223 |
1603.08869 | Tiancheng Zhao | Tiancheng Zhao, Mohammad Gowayyed | Algorithms for Batch Hierarchical Reinforcement Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Hierarchical Reinforcement Learning (HRL) exploits temporal abstraction to
solve large Markov Decision Processes (MDP) and provide transferable subtask
policies. In this paper, we introduce an off-policy HRL algorithm: Hierarchical
Q-value Iteration (HQI). We show that it is possible to effectively learn
recursive optimal policies for any valid hierarchical decomposition of the
original MDP, given a fixed dataset collected from a flat stochastic behavioral
policy. We first formally prove the convergence of the algorithm for tabular
MDP. Then our experiments on the Taxi domain show that HQI converges faster
than a flat Q-value Iteration and enjoys easy state abstraction. Also, we
demonstrate that our algorithm is able to learn optimal policies for different
hierarchical structures from the same fixed dataset, which enables model
comparison without recollecting data.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 18:17:17 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Zhao",
"Tiancheng",
""
],
[
"Gowayyed",
"Mohammad",
""
]
] | TITLE: Algorithms for Batch Hierarchical Reinforcement Learning
ABSTRACT: Hierarchical Reinforcement Learning (HRL) exploits temporal abstraction to
solve large Markov Decision Processes (MDP) and provide transferable subtask
policies. In this paper, we introduce an off-policy HRL algorithm: Hierarchical
Q-value Iteration (HQI). We show that it is possible to effectively learn
recursive optimal policies for any valid hierarchical decomposition of the
original MDP, given a fixed dataset collected from a flat stochastic behavioral
policy. We first formally prove the convergence of the algorithm for tabular
MDP. Then our experiments on the Taxi domain show that HQI converges faster
than a flat Q-value Iteration and enjoys easy state abstraction. Also, we
demonstrate that our algorithm is able to learn optimal policies for different
hierarchical structures from the same fixed dataset, which enables model
comparison without recollecting data.
| no_new_dataset | 0.942876 |
1603.08884 | Adam Trischler | Adam Trischler and Zheng Ye and Xingdi Yuan and Jing He and Phillip
Bachman and Kaheer Suleman | A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data | 9 pages, submitted to ACL | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding unstructured text is a major goal within natural language
processing. Comprehension tests pose questions based on short text passages to
evaluate such understanding. In this work, we investigate machine comprehension
on the challenging {\it MCTest} benchmark. Partly because of its limited size,
prior work on {\it MCTest} has focused mainly on engineering better features.
We tackle the dataset with a neural approach, harnessing simple neural networks
arranged in a parallel hierarchy. The parallel hierarchy enables our model to
compare the passage, question, and answer from a variety of trainable
perspectives, as opposed to using a manually designed, rigid feature set.
Perspectives range from the word level to sentence fragments to sequences of
sentences; the networks operate only on word-embedding representations of text.
When trained with a methodology designed to help cope with limited training
data, our Parallel-Hierarchical model sets a new state of the art for {\it
MCTest}, outperforming previous feature-engineered approaches slightly and
previous neural approaches by a significant margin (over 15\% absolute).
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 18:52:46 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Trischler",
"Adam",
""
],
[
"Ye",
"Zheng",
""
],
[
"Yuan",
"Xingdi",
""
],
[
"He",
"Jing",
""
],
[
"Bachman",
"Phillip",
""
],
[
"Suleman",
"Kaheer",
""
]
] | TITLE: A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
ABSTRACT: Understanding unstructured text is a major goal within natural language
processing. Comprehension tests pose questions based on short text passages to
evaluate such understanding. In this work, we investigate machine comprehension
on the challenging {\it MCTest} benchmark. Partly because of its limited size,
prior work on {\it MCTest} has focused mainly on engineering better features.
We tackle the dataset with a neural approach, harnessing simple neural networks
arranged in a parallel hierarchy. The parallel hierarchy enables our model to
compare the passage, question, and answer from a variety of trainable
perspectives, as opposed to using a manually designed, rigid feature set.
Perspectives range from the word level to sentence fragments to sequences of
sentences; the networks operate only on word-embedding representations of text.
When trained with a methodology designed to help cope with limited training
data, our Parallel-Hierarchical model sets a new state of the art for {\it
MCTest}, outperforming previous feature-engineered approaches slightly and
previous neural approaches by a significant margin (over 15\% absolute).
| no_new_dataset | 0.947721 |
1603.08907 | Punarjay Chakravarty | Punarjay Chakravarty and Tinne Tuytelaars | Cross-modal Supervision for Learning Active Speaker Detection in Video | 16 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we show how to use audio to supervise the learning of active
speaker detection in video. Voice Activity Detection (VAD) guides the learning
of the vision-based classifier in a weakly supervised manner. The classifier
uses spatio-temporal features to encode upper body motion - facial expressions
and gesticulations associated with speaking. We further improve a generic model
for active speaker detection by learning person specific models. Finally, we
demonstrate the online adaptation of generic models learnt on one dataset, to
previously unseen people in a new dataset, again using audio (VAD) for weak
supervision. The use of temporal continuity overcomes the lack of clean
training data. We are the first to present an active speaker detection system
that learns on one audio-visual dataset and automatically adapts to speakers in
a new dataset. This work can be seen as an example of how the availability of
multi-modal data allows us to learn a model without the need for supervision,
by transferring knowledge from one modality to another.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 19:47:46 GMT"
}
] | 2016-03-30T00:00:00 | [
[
"Chakravarty",
"Punarjay",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] | TITLE: Cross-modal Supervision for Learning Active Speaker Detection in Video
ABSTRACT: In this paper, we show how to use audio to supervise the learning of active
speaker detection in video. Voice Activity Detection (VAD) guides the learning
of the vision-based classifier in a weakly supervised manner. The classifier
uses spatio-temporal features to encode upper body motion - facial expressions
and gesticulations associated with speaking. We further improve a generic model
for active speaker detection by learning person specific models. Finally, we
demonstrate the online adaptation of generic models learnt on one dataset, to
previously unseen people in a new dataset, again using audio (VAD) for weak
supervision. The use of temporal continuity overcomes the lack of clean
training data. We are the first to present an active speaker detection system
that learns on one audio-visual dataset and automatically adapts to speakers in
a new dataset. This work can be seen as an example of how the availability of
multi-modal data allows us to learn a model without the need for supervision,
by transferring knowledge from one modality to another.
| no_new_dataset | 0.946051 |
1404.4078 | Feng Lin | Xia Li, Feng Lin, Robert C. Qiu | Modeling Massive Amount of Experimental Data with Large Random Matrices
in a Real-Time UWB-MIMO System | 4 pages, 11 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to study data modeling for massive datasets. Large
random matrices are used to model the massive amount of data collected from our
experimental testbed. This testbed was developed for a real-time
ultra-wideband, multiple input multiple output (UWB-MIMO) system. Empirical
spectral density is the relevant information we seek. After we treat this
UWB-MIMO system as a black box, we aim to model the output of the black box as
a large statistical system, whose outputs can be described by (large) random
matrices. This model is extremely general, allowing for the study of non-linear
and non-Gaussian phenomena. The good agreement between the theoretical
predictions and the empirical findings validates the correctness of our
suggested data model.
| [
{
"version": "v1",
"created": "Tue, 15 Apr 2014 20:57:17 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Mar 2016 20:08:13 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Li",
"Xia",
""
],
[
"Lin",
"Feng",
""
],
[
"Qiu",
"Robert C.",
""
]
] | TITLE: Modeling Massive Amount of Experimental Data with Large Random Matrices
in a Real-Time UWB-MIMO System
ABSTRACT: The aim of this paper is to study data modeling for massive datasets. Large
random matrices are used to model the massive amount of data collected from our
experimental testbed. This testbed was developed for a real-time
ultra-wideband, multiple input multiple output (UWB-MIMO) system. Empirical
spectral density is the relevant information we seek. After we treat this
UWB-MIMO system as a black box, we aim to model the output of the black box as
a large statistical system, whose outputs can be described by (large) random
matrices. This model is extremely general, allowing for the study of non-linear
and non-Gaussian phenomena. The good agreement between the theoretical
predictions and the empirical findings validates the correctness of our
suggested data model.
| no_new_dataset | 0.946597 |
1511.04108 | Ming Tan | Ming Tan, Cicero dos Santos, Bing Xiang, Bowen Zhou | LSTM-based Deep Learning Models for Non-factoid Answer Selection | added new experiments on TREC-QA | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we apply a general deep learning (DL) framework for the answer
selection task, which does not depend on manually defined features or
linguistic tools. The basic framework is to build the embeddings of questions
and answers based on bidirectional long short-term memory (biLSTM) models, and
measure their closeness by cosine similarity. We further extend this basic
model in two directions. One direction is to define a more composite
representation for questions and answers by combining convolutional neural
network with the basic framework. The other direction is to utilize a simple
but efficient attention mechanism in order to generate the answer
representation according to the question context. Several variations of models
are provided. The models are examined on two datasets, including TREC-QA and
InsuranceQA. Experimental results demonstrate that the proposed models
substantially outperform several strong baselines.
| [
{
"version": "v1",
"created": "Thu, 12 Nov 2015 22:01:54 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Nov 2015 15:00:46 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Jan 2016 17:56:29 GMT"
},
{
"version": "v4",
"created": "Mon, 28 Mar 2016 04:12:45 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Tan",
"Ming",
""
],
[
"Santos",
"Cicero dos",
""
],
[
"Xiang",
"Bing",
""
],
[
"Zhou",
"Bowen",
""
]
] | TITLE: LSTM-based Deep Learning Models for Non-factoid Answer Selection
ABSTRACT: In this paper, we apply a general deep learning (DL) framework for the answer
selection task, which does not depend on manually defined features or
linguistic tools. The basic framework is to build the embeddings of questions
and answers based on bidirectional long short-term memory (biLSTM) models, and
measure their closeness by cosine similarity. We further extend this basic
model in two directions. One direction is to define a more composite
representation for questions and answers by combining convolutional neural
network with the basic framework. The other direction is to utilize a simple
but efficient attention mechanism in order to generate the answer
representation according to the question context. Several variations of models
are provided. The models are examined on two datasets, including TREC-QA and
InsuranceQA. Experimental results demonstrate that the proposed models
substantially outperform several strong baselines.
| no_new_dataset | 0.947817 |
1601.00072 | Mishal Almazrooie Mr | Mishal Almazrooie, Mogana Vadiveloo, and Rosni Abdullah | GPU-Based Fuzzy C-Means Clustering Algorithm for Image Segmentation | null | null | null | null | cs.DC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a fast and practical GPU-based implementation of Fuzzy
C-Means(FCM) clustering algorithm for image segmentation is proposed. First, an
extensive analysis is conducted to study the dependency among the image pixels
in the algorithm for parallelization. The proposed GPU-based FCM has been
tested on a simulated digital brain dataset to segment white matter (WM), gray
matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The execution
time of the sequential FCM is 519 seconds for an image dataset with a size of
1 MB, while the proposed GPU-based FCM requires only 2.33 seconds for an image
dataset of similar size. An estimated 245-fold speedup is measured for
the data size of 40 KB on a CUDA device that has 448 processors.
| [
{
"version": "v1",
"created": "Fri, 1 Jan 2016 11:18:31 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jan 2016 02:27:45 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Mar 2016 09:47:29 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Almazrooie",
"Mishal",
""
],
[
"Vadiveloo",
"Mogana",
""
],
[
"Abdullah",
"Rosni",
""
]
] | TITLE: GPU-Based Fuzzy C-Means Clustering Algorithm for Image Segmentation
ABSTRACT: In this paper, a fast and practical GPU-based implementation of Fuzzy
C-Means(FCM) clustering algorithm for image segmentation is proposed. First, an
extensive analysis is conducted to study the dependency among the image pixels
in the algorithm for parallelization. The proposed GPU-based FCM has been
tested on a simulated digital brain dataset to segment white matter (WM), gray
matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The execution
time of the sequential FCM is 519 seconds for an image dataset with a size of
1 MB, while the proposed GPU-based FCM requires only 2.33 seconds for an image
dataset of similar size. An estimated 245-fold speedup is measured for
the data size of 40 KB on a CUDA device that has 448 processors.
| no_new_dataset | 0.953188 |
1601.05270 | Sidra Faisal | Sidra Faisal, Kemele M. Endris, Saeedeh Shekarpour, S\"oren Auer | Co-evolution of RDF Datasets | 18 pages, 4 figures, Accepted in ICWE, 2016 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linking Data initiatives have fostered the publication of large number of RDF
datasets in the Linked Open Data (LOD) cloud, as well as the development of
query processing infrastructures to access these data in a federated fashion.
However, different experimental studies have shown that availability of LOD
datasets cannot be always ensured, being RDF data replication required for
envisioning reliable federated query frameworks. Albeit enhancing data
availability, RDF data replication requires synchronization and conflict
resolution when replicas and source datasets are allowed to change data over
time, i.e., co-evolution management needs to be provided to ensure consistency.
In this paper, we tackle the problem of RDF data co-evolution and devise an
approach for conflict resolution during co-evolution of RDF datasets. Our
proposed approach is property-oriented and allows for exploiting semantics
about RDF properties during co-evolution management. The quality of our
approach is empirically evaluated in different scenarios on the DBpedia-live
dataset. Experimental results suggest that the proposed techniques have a
positive impact on the quality of data in source datasets and replicas.
| [
{
"version": "v1",
"created": "Wed, 20 Jan 2016 13:46:24 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2016 18:21:40 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Faisal",
"Sidra",
""
],
[
"Endris",
"Kemele M.",
""
],
[
"Shekarpour",
"Saeedeh",
""
],
[
"Auer",
"Sören",
""
]
] | TITLE: Co-evolution of RDF Datasets
ABSTRACT: Linking Data initiatives have fostered the publication of large number of RDF
datasets in the Linked Open Data (LOD) cloud, as well as the development of
query processing infrastructures to access these data in a federated fashion.
However, different experimental studies have shown that availability of LOD
datasets cannot be always ensured, being RDF data replication required for
envisioning reliable federated query frameworks. Albeit enhancing data
availability, RDF data replication requires synchronization and conflict
resolution when replicas and source datasets are allowed to change data over
time, i.e., co-evolution management needs to be provided to ensure consistency.
In this paper, we tackle the problem of RDF data co-evolution and devise an
approach for conflict resolution during co-evolution of RDF datasets. Our
proposed approach is property-oriented and allows for exploiting semantics
about RDF properties during co-evolution management. The quality of our
approach is empirically evaluated in different scenarios on the DBpedia-live
dataset. Experimental results suggest that the proposed techniques have a
positive impact on the quality of data in source datasets and replicas.
| no_new_dataset | 0.94428 |
1603.08028 | Daniel Cullina | Daniel Cullina, Kushagra Singhal, Negar Kiyavash, Prateek Mittal | On the Simultaneous Preservation of Privacy and Community Structure in
Anonymized Networks | 10 pages | null | null | null | cs.LG cs.CR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of performing community detection on a network, while
maintaining privacy, assuming that the adversary has access to an auxiliary
correlated network. We ask the question "Does there exist a regime where the
network cannot be deanonymized perfectly, yet the community structure could be
learned?" To answer this question, we derive information theoretic converses
for the perfect deanonymization problem using the Stochastic Block Model and
edge sub-sampling. We also provide an almost tight achievability result for
perfect deanonymization.
We also evaluate the performance of a percolation-based deanonymization
algorithm on Stochastic Block Model data-sets that satisfy the conditions of
our converse. Although our converse applies to exact deanonymization, the
algorithm fails drastically when the conditions of the converse are met.
Additionally, we study the effect of edge sub-sampling on the community
structure of a real world dataset. Results show that the dataset falls under
the purview of the idea of this paper. These results suggest that it may be
possible to prove stronger partial deanonymizability converses, which would
enable better privacy guarantees.
| [
{
"version": "v1",
"created": "Fri, 25 Mar 2016 20:45:32 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Cullina",
"Daniel",
""
],
[
"Singhal",
"Kushagra",
""
],
[
"Kiyavash",
"Negar",
""
],
[
"Mittal",
"Prateek",
""
]
] | TITLE: On the Simultaneous Preservation of Privacy and Community Structure in
Anonymized Networks
ABSTRACT: We consider the problem of performing community detection on a network, while
maintaining privacy, assuming that the adversary has access to an auxiliary
correlated network. We ask the question "Does there exist a regime where the
network cannot be deanonymized perfectly, yet the community structure could be
learned?" To answer this question, we derive information theoretic converses
for the perfect deanonymization problem using the Stochastic Block Model and
edge sub-sampling. We also provide an almost tight achievability result for
perfect deanonymization.
We also evaluate the performance of a percolation-based deanonymization
algorithm on Stochastic Block Model data-sets that satisfy the conditions of
our converse. Although our converse applies to exact deanonymization, the
algorithm fails drastically when the conditions of the converse are met.
Additionally, we study the effect of edge sub-sampling on the community
structure of a real world dataset. Results show that the dataset falls under
the purview of the idea of this paper. There results suggest that it may be
possible to prove stronger partial deanonymizability converses, which would
enable better privacy guarantees.
| no_new_dataset | 0.940134 |
1603.08067 | Bo Li | Bo Li and Tianfu Wu and Caiming Xiong and Song-Chun Zhu | Recognizing Car Fluents from Video | Accepted by CVPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Physical fluents, a term originally used by Newton [40], refers to
time-varying object states in dynamic scenes. In this paper, we are interested
in inferring the fluents of vehicles from video. For example, a door (hood,
trunk) is open or closed through various actions, light is blinking to turn.
Recognizing these fluents has broad applications, yet have received scant
attention in the computer vision literature. Car fluent recognition entails a
unified framework for car detection, car part localization and part status
recognition, which is made difficult by large structural and appearance
variations, low resolutions and occlusions. This paper learns a
spatial-temporal And-Or hierarchical model to represent car fluents. The
learning of this model is formulated under the latent structural SVM framework.
Since there is no publicly available related dataset, we collect and annotate a car
fluent dataset consisting of car videos with diverse fluents. In experiments,
the proposed method outperforms several highly related baseline methods in
terms of car fluent recognition and car part localization.
| [
{
"version": "v1",
"created": "Sat, 26 Mar 2016 03:45:00 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Li",
"Bo",
""
],
[
"Wu",
"Tianfu",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Zhu",
"Song-Chun",
""
]
] | TITLE: Recognizing Car Fluents from Video
ABSTRACT: Physical fluents, a term originally used by Newton [40], refers to
time-varying object states in dynamic scenes. In this paper, we are interested
in inferring the fluents of vehicles from video. For example, a door (hood,
trunk) is open or closed through various actions, light is blinking to turn.
Recognizing these fluents has broad applications, yet have received scant
attention in the computer vision literature. Car fluent recognition entails a
unified framework for car detection, car part localization and part status
recognition, which is made difficult by large structural and appearance
variations, low resolutions and occlusions. This paper learns a
spatial-temporal And-Or hierarchical model to represent car fluents. The
learning of this model is formulated under the latent structural SVM framework.
Since there is no publicly available related dataset, we collect and annotate a car
fluent dataset consisting of car videos with diverse fluents. In experiments,
the proposed method outperforms several highly related baseline methods in
terms of car fluent recognition and car part localization.
| new_dataset | 0.960287 |
1603.08092 | Jianyu Tang | Jianyu Tang, Hanzi Wang and Yan Yan | Learning Hough Regression Models via Bridge Partial Least Squares for
Object Detection | null | Neurocomputing, 2015,152(3):236-249 | 10.1016/j.neucom.2014.10.071 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Popular Hough Transform-based object detection approaches usually construct
an appearance codebook by clustering local image features. However, how to
choose appropriate values for the parameters used in the clustering step
remains an open problem. Moreover, some popular histogram features extracted
from overlapping image blocks may cause a high degree of redundancy and
multicollinearity. In this paper, we propose a novel Hough Transform-based
object detection approach. First, to address the above issues, we exploit a
Bridge Partial Least Squares (BPLS) technique to establish context-encoded
Hough Regression Models (HRMs), which are linear regression models that cast
probabilistic Hough votes to predict object locations. BPLS is an efficient
variant of Partial Least Squares (PLS). PLS-based regression techniques
(including BPLS) can reduce the redundancy and eliminate the multicollinearity
of a feature set. And the appropriate value of the only parameter used in PLS
(i.e., the number of latent components) can be determined by using a
cross-validation procedure. Second, to efficiently handle object scale changes,
we propose a novel multi-scale voting scheme. In this scheme, multiple Hough
images corresponding to multiple object scales can be obtained simultaneously.
Third, an object in a test image may correspond to multiple true and false
positive hypotheses at different scales. Based on the proposed multi-scale
voting scheme, a principled strategy is proposed to fuse hypotheses to reduce
false positives by evaluating normalized pointwise mutual information between
hypotheses. In the experiments, we also compare the proposed HRM approach with
its several variants to evaluate the influences of its components on its
performance. Experimental results show that the proposed HRM approach has
achieved desirable performance on popular benchmark datasets.
| [
{
"version": "v1",
"created": "Sat, 26 Mar 2016 09:33:30 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Tang",
"Jianyu",
""
],
[
"Wang",
"Hanzi",
""
],
[
"Yan",
"Yan",
""
]
] | TITLE: Learning Hough Regression Models via Bridge Partial Least Squares for
Object Detection
ABSTRACT: Popular Hough Transform-based object detection approaches usually construct
an appearance codebook by clustering local image features. However, how to
choose appropriate values for the parameters used in the clustering step
remains an open problem. Moreover, some popular histogram features extracted
from overlapping image blocks may cause a high degree of redundancy and
multicollinearity. In this paper, we propose a novel Hough Transform-based
object detection approach. First, to address the above issues, we exploit a
Bridge Partial Least Squares (BPLS) technique to establish context-encoded
Hough Regression Models (HRMs), which are linear regression models that cast
probabilistic Hough votes to predict object locations. BPLS is an efficient
variant of Partial Least Squares (PLS). PLS-based regression techniques
(including BPLS) can reduce the redundancy and eliminate the multicollinearity
of a feature set. And the appropriate value of the only parameter used in PLS
(i.e., the number of latent components) can be determined by using a
cross-validation procedure. Second, to efficiently handle object scale changes,
we propose a novel multi-scale voting scheme. In this scheme, multiple Hough
images corresponding to multiple object scales can be obtained simultaneously.
Third, an object in a test image may correspond to multiple true and false
positive hypotheses at different scales. Based on the proposed multi-scale
voting scheme, a principled strategy is proposed to fuse hypotheses to reduce
false positives by evaluating normalized pointwise mutual information between
hypotheses. In the experiments, we also compare the proposed HRM approach with
its several variants to evaluate the influences of its components on its
performance. Experimental results show that the proposed HRM approach has
achieved desirable performance on popular benchmark datasets.
| no_new_dataset | 0.94801 |
1603.08105 | Ayush Mittal | Ayush Mittal, Anant Raj, Vinay P. Namboodiri and Tinne Tuytelaars | Unsupervised Domain Adaptation in the Wild: Dealing with Asymmetric
Label Sets | supplementary material:
http://home.iitk.ac.in/~ayushmi/supplementary-material-unsupervised.pdf | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of domain adaptation is to adapt models learned on a source domain
to a particular target domain. Most methods for unsupervised domain adaptation
proposed in the literature to date assume that the set of classes present in
the target domain is identical to the set of classes present in the source
domain. This is a restrictive assumption that limits the practical
applicability of unsupervised domain adaptation techniques in real world
settings ("in the wild"). Therefore, we relax this constraint and propose a
technique that allows the set of target classes to be a subset of the source
classes. This way, large publicly available annotated datasets with a wide
variety of classes can be used as source, even if the actual set of classes in
target can be more limited and, maybe most importantly, unknown beforehand.
To this end, we propose an algorithm that orders a set of source subspaces
that are relevant to the target classification problem. Our method then chooses
a restricted set from this ordered set of source subspaces. As an extension,
even starting from multiple source datasets with varied sets of categories,
this method automatically selects an appropriate subset of source categories
relevant to a target dataset. Empirical analysis on a number of source and
target domain datasets shows that restricting the source subspace to only a
subset of categories does indeed substantially improve the eventual target
classification accuracy over the baseline that considers all source classes.
| [
{
"version": "v1",
"created": "Sat, 26 Mar 2016 13:22:55 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Mittal",
"Ayush",
""
],
[
"Raj",
"Anant",
""
],
[
"Namboodiri",
"Vinay P.",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] | TITLE: Unsupervised Domain Adaptation in the Wild: Dealing with Asymmetric
Label Sets
ABSTRACT: The goal of domain adaptation is to adapt models learned on a source domain
to a particular target domain. Most methods for unsupervised domain adaptation
proposed in the literature to date assume that the set of classes present in
the target domain is identical to the set of classes present in the source
domain. This is a restrictive assumption that limits the practical
applicability of unsupervised domain adaptation techniques in real world
settings ("in the wild"). Therefore, we relax this constraint and propose a
technique that allows the set of target classes to be a subset of the source
classes. This way, large publicly available annotated datasets with a wide
variety of classes can be used as source, even if the actual set of classes in
target can be more limited and, maybe most importantly, unknown beforehand.
To this end, we propose an algorithm that orders a set of source subspaces
that are relevant to the target classification problem. Our method then chooses
a restricted set from this ordered set of source subspaces. As an extension,
even starting from multiple source datasets with varied sets of categories,
this method automatically selects an appropriate subset of source categories
relevant to a target dataset. Empirical analysis on a number of source and
target domain datasets shows that restricting the source subspace to only a
subset of categories does indeed substantially improve the eventual target
classification accuracy over the baseline that considers all source classes.
| no_new_dataset | 0.946448 |
1603.08124 | Wenbin Li | Wenbin Li, Darren Cosker | Video Interpolation using Optical Flow and Laplacian Smoothness | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Non-rigid video interpolation is a common computer vision task. In this paper
we present an optical flow approach which adopts a Laplacian Cotangent Mesh
constraint to enhance the local smoothness. Similar to Li et al., our approach
fits a mesh to the image with a resolution of up to one vertex per pixel and
uses angle constraints to ensure sensible local deformations between image
pairs. The Laplacian Mesh constraints are expressed wholly inside the optical
flow optimization, and can be applied in a straightforward manner to a wide
range of image tracking and registration problems. We evaluate our approach by
testing on several benchmark datasets, including the Middlebury and Garg et al.
datasets. In addition, we show application of our method for constructing 3D
Morphable Facial Models from dynamic 3D data.
| [
{
"version": "v1",
"created": "Sat, 26 Mar 2016 17:13:25 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Li",
"Wenbin",
""
],
[
"Cosker",
"Darren",
""
]
] | TITLE: Video Interpolation using Optical Flow and Laplacian Smoothness
ABSTRACT: Non-rigid video interpolation is a common computer vision task. In this paper
we present an optical flow approach which adopts a Laplacian Cotangent Mesh
constraint to enhance the local smoothness. Similar to Li et al., our approach
fits a mesh to the image with a resolution of up to one vertex per pixel and
uses angle constraints to ensure sensible local deformations between image
pairs. The Laplacian Mesh constraints are expressed wholly inside the optical
flow optimization, and can be applied in a straightforward manner to a wide
range of image tracking and registration problems. We evaluate our approach by
testing on several benchmark datasets, including the Middlebury and Garg et al.
datasets. In addition, we show application of our method for constructing 3D
Morphable Facial Models from dynamic 3D data.
| no_new_dataset | 0.957636 |
1603.08212 | Ethan Fetaya | Ita Lifshitz, Ethan Fetaya and Shimon Ullman | Human Pose Estimation using Deep Consensus Voting | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the problem of human pose estimation from a single
still image. We propose a novel approach where each location in the image votes
for the position of each keypoint using a convolutional neural net. The voting
scheme allows us to utilize information from the whole image, rather than rely
on a sparse set of keypoint locations. Using dense, multi-target votes not
only produces good keypoint predictions, but also enables us to compute
image-dependent joint keypoint probabilities by looking at consensus voting.
This differs from most previous methods where joint probabilities are learned
from relative keypoint locations and are independent of the image. We finally
combine the keypoint votes and joint probabilities in order to identify the
optimal pose configuration. We show our competitive performance on the MPII
Human Pose and Leeds Sports Pose datasets.
| [
{
"version": "v1",
"created": "Sun, 27 Mar 2016 12:45:33 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Lifshitz",
"Ita",
""
],
[
"Fetaya",
"Ethan",
""
],
[
"Ullman",
"Shimon",
""
]
] | TITLE: Human Pose Estimation using Deep Consensus Voting
ABSTRACT: In this paper we consider the problem of human pose estimation from a single
still image. We propose a novel approach where each location in the image votes
for the position of each keypoint using a convolutional neural net. The voting
scheme allows us to utilize information from the whole image, rather than rely
on a sparse set of keypoint locations. Using dense, multi-target votes not
only produces good keypoint predictions, but also enables us to compute
image-dependent joint keypoint probabilities by looking at consensus voting.
This differs from most previous methods where joint probabilities are learned
from relative keypoint locations and are independent of the image. We finally
combine the keypoint votes and joint probabilities in order to identify the
optimal pose configuration. We show our competitive performance on the MPII
Human Pose and Leeds Sports Pose datasets.
| no_new_dataset | 0.951278 |
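The dense voting scheme in the preceding abstract (every image location casts a vote for the position of each keypoint, and consensus over the accumulated votes yields the estimates) can be illustrated with a short sketch. The offset and confidence arrays below stand in for the convolutional network's outputs, and all names are illustrative assumptions rather than the authors' implementation; only the vote-accumulation step is shown.

```python
import numpy as np

def accumulate_votes(offsets, weights):
    """Accumulate dense keypoint votes into one heatmap per keypoint.

    offsets : (h, w, K, 2) predicted (dy, dx) from each location to each of K keypoints
    weights : (h, w, K) confidence of each vote
    """
    h, w, K, _ = offsets.shape
    heatmaps = np.zeros((K, h, w))
    for y in range(h):
        for x in range(w):
            for k in range(K):
                ty = int(round(y + offsets[y, x, k, 0]))
                tx = int(round(x + offsets[y, x, k, 1]))
                if 0 <= ty < h and 0 <= tx < w:
                    heatmaps[k, ty, tx] += weights[y, x, k]
    return heatmaps  # the argmax of each map is a candidate keypoint location
```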
1603.08252 | Ajay Saini | Ajay Saini, Natasha Markuzon | Predictive Modeling of Opinion and Connectivity Dynamics in Social
Networks | 19 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years saw an increased interest in modeling and understanding the
mechanisms of opinion and innovation spread through human networks. Using
analysis of real-world social data, researchers are able to gain a better
understanding of the dynamics of social networks and subsequently model the
changes in such networks over time. We developed a social network model that
both utilizes an agent-based approach with a dynamic update of opinions and
connections between agents and reflects opinion propagation and structural
changes over time as observed in real-world data. We validate the model using
data from the Social Evolution dataset of the MIT Human Dynamics Lab describing
changes in friendships and health self-perception in a targeted student
population over a nine-month period. We demonstrate the effectiveness of the
approach by predicting changes in both opinion spread and connectivity of the
network. We also use the model to evaluate how the network parameters, such as
the level of `openness' and willingness to incorporate opinions of neighboring
agents, affect the outcome. The model not only provides insight into the
dynamics of ever changing social networks, but also presents a tool with which
one can investigate opinion propagation strategies for networks of various
structures and opinion distributions.
| [
{
"version": "v1",
"created": "Sun, 27 Mar 2016 19:53:21 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Saini",
"Ajay",
""
],
[
"Markuzon",
"Natasha",
""
]
] | TITLE: Predictive Modeling of Opinion and Connectivity Dynamics in Social
Networks
ABSTRACT: Recent years saw an increased interest in modeling and understanding the
mechanisms of opinion and innovation spread through human networks. Using
analysis of real-world social data, researchers are able to gain a better
understanding of the dynamics of social networks and subsequently model the
changes in such networks over time. We developed a social network model that
both utilizes an agent-based approach with a dynamic update of opinions and
connections between agents and reflects opinion propagation and structural
changes over time as observed in real-world data. We validate the model using
data from the Social Evolution dataset of the MIT Human Dynamics Lab describing
changes in friendships and health self-perception in a targeted student
population over a nine-month period. We demonstrate the effectiveness of the
approach by predicting changes in both opinion spread and connectivity of the
network. We also use the model to evaluate how the network parameters, such as
the level of `openness' and willingness to incorporate opinions of neighboring
agents, affect the outcome. The model not only provides insight into the
dynamics of ever changing social networks, but also presents a tool with which
one can investigate opinion propagation strategies for networks of various
structures and opinion distributions.
| no_new_dataset | 0.95096 |
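As a rough illustration of the agent-based dynamics described in the preceding abstract, the sketch below moves each agent's opinion toward the mean of its neighbours and occasionally rewires an edge toward a like-minded agent. The `openness` and `rewire_prob` parameters and the specific update rules are assumptions made for the example, not the authors' calibrated model.

```python
import numpy as np

def step(opinions, adj, openness=0.3, rewire_prob=0.05, rng=None):
    """One synchronous update of opinions (values in [0, 1]) and connections.

    opinions : (n,) array; adj : (n, n) symmetric 0/1 adjacency matrix.
    """
    rng = rng or np.random.default_rng()
    n = len(opinions)
    deg = adj.sum(axis=1).clip(min=1)
    # Each agent moves toward the mean opinion of its neighbours.
    new_opinions = (1 - openness) * opinions + openness * (adj @ opinions) / deg
    new_adj = adj.copy()
    for i in range(n):
        if rng.random() < rewire_prob and new_adj[i].sum() > 0:
            old = rng.choice(np.flatnonzero(new_adj[i]))
            # Reconnect to the non-neighbour whose opinion is closest to agent i's.
            order = np.argsort(np.abs(new_opinions - new_opinions[i]))
            new = next((j for j in order if j != i and new_adj[i, j] == 0), None)
            if new is not None:
                new_adj[i, old] = new_adj[old, i] = 0
                new_adj[i, new] = new_adj[new, i] = 1
    return new_opinions, new_adj
```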
1603.08321 | Linlin Chao | Linlin Chao, Jianhua Tao, Minghao Yang, Ya Li and Zhengqi Wen | Audio Visual Emotion Recognition with Temporal Alignment and Perception
Attention | null | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on two key problems for audio-visual emotion recognition
in the video. One is the audio and visual streams temporal alignment for
feature level fusion. The other one is locating and re-weighting the perception
attentions in the whole audio-visual stream for better recognition. The Long
Short Term Memory Recurrent Neural Network (LSTM-RNN) is employed as the main
classification architecture. Firstly, soft attention mechanism aligns the audio
and visual streams. Secondly, seven emotion embedding vectors, which are
corresponding to each classification emotion type, are added to locate the
perception attentions. The locating and re-weighting process is also based on
the soft attention mechanism. The experiment results on EmotiW2015 dataset and
the qualitative analysis show the efficiency of the proposed two techniques.
| [
{
"version": "v1",
"created": "Mon, 28 Mar 2016 06:06:10 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Chao",
"Linlin",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Yang",
"Minghao",
""
],
[
"Li",
"Ya",
""
],
[
"Wen",
"Zhengqi",
""
]
] | TITLE: Audio Visual Emotion Recognition with Temporal Alignment and Perception
Attention
ABSTRACT: This paper focuses on two key problems for audio-visual emotion recognition
in the video. One is the audio and visual streams temporal alignment for
feature level fusion. The other one is locating and re-weighting the perception
attentions in the whole audio-visual stream for better recognition. The Long
Short Term Memory Recurrent Neural Network (LSTM-RNN) is employed as the main
classification architecture. Firstly, a soft attention mechanism aligns the
audio and visual streams. Secondly, seven emotion embedding vectors, one
corresponding to each emotion class, are added to locate the
perception attentions. The locating and re-weighting process is also based on
the soft attention mechanism. The experiment results on EmotiW2015 dataset and
the qualitative analysis show the efficiency of the proposed two techniques.
| no_new_dataset | 0.952574 |
1603.08486 | Hoo Chang Shin | Hoo-Chang Shin, Kirk Roberts, Le Lu, Dina Demner-Fushman, Jianhua Yao,
Ronald M Summers | Learning to Read Chest X-Rays: Recurrent Neural Cascade Model for
Automated Image Annotation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent advances in automatically describing image contents, their
applications have been mostly limited to image caption datasets containing
natural images (e.g., Flickr 30k, MSCOCO). In this paper, we present a deep
learning model to efficiently detect a disease from an image and annotate its
contexts (e.g., location, severity and the affected organs). We employ a
publicly available radiology dataset of chest x-rays and their reports, and use
its image annotations to mine disease names to train convolutional neural
networks (CNNs). In doing so, we adopt various regularization techniques to
circumvent the large normal-vs-diseased cases bias. Recurrent neural networks
(RNNs) are then trained to describe the contexts of a detected disease, based
on the deep CNN features. Moreover, we introduce a novel approach to use the
weights of the already trained pair of CNN/RNN on the domain-specific
image/text dataset, to infer the joint image/text contexts for composite image
labeling. Significantly improved image annotation results are demonstrated
using the recurrent neural cascade model by taking the joint image/text
contexts into account.
| [
{
"version": "v1",
"created": "Mon, 28 Mar 2016 19:02:07 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Shin",
"Hoo-Chang",
""
],
[
"Roberts",
"Kirk",
""
],
[
"Lu",
"Le",
""
],
[
"Demner-Fushman",
"Dina",
""
],
[
"Yao",
"Jianhua",
""
],
[
"Summers",
"Ronald M",
""
]
] | TITLE: Learning to Read Chest X-Rays: Recurrent Neural Cascade Model for
Automated Image Annotation
ABSTRACT: Despite the recent advances in automatically describing image contents, their
applications have been mostly limited to image caption datasets containing
natural images (e.g., Flickr 30k, MSCOCO). In this paper, we present a deep
learning model to efficiently detect a disease from an image and annotate its
contexts (e.g., location, severity and the affected organs). We employ a
publicly available radiology dataset of chest x-rays and their reports, and use
its image annotations to mine disease names to train convolutional neural
networks (CNNs). In doing so, we adopt various regularization techniques to
circumvent the large normal-vs-diseased cases bias. Recurrent neural networks
(RNNs) are then trained to describe the contexts of a detected disease, based
on the deep CNN features. Moreover, we introduce a novel approach to use the
weights of the already trained pair of CNN/RNN on the domain-specific
image/text dataset, to infer the joint image/text contexts for composite image
labeling. Significantly improved image annotation results are demonstrated
using the recurrent neural cascade model by taking the joint image/text
contexts into account.
| no_new_dataset | 0.945147 |
1603.08507 | Lisa Anne Hendricks | Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue,
Bernt Schiele, Trevor Darrell | Generating Visual Explanations | null | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clearly explaining a rationale for a classification decision to an end-user
can be as important as the decision itself. Existing approaches for deep visual
recognition are generally opaque and do not output any justification text;
contemporary vision-language models can describe image content but fail to take
into account class-discriminative image aspects which justify visual
predictions. We propose a new model that focuses on the discriminating
properties of the visible object, jointly predicts a class label, and explains
why the predicted label is appropriate for the image. We propose a novel loss
function based on sampling and reinforcement learning that learns to generate
sentences that realize a global sentence property, such as class specificity.
Our results on a fine-grained bird species classification dataset show that our
model is able to generate explanations which are not only consistent with an
image but also more discriminative than descriptions produced by existing
captioning methods.
| [
{
"version": "v1",
"created": "Mon, 28 Mar 2016 19:54:12 GMT"
}
] | 2016-03-29T00:00:00 | [
[
"Hendricks",
"Lisa Anne",
""
],
[
"Akata",
"Zeynep",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Donahue",
"Jeff",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Generating Visual Explanations
ABSTRACT: Clearly explaining a rationale for a classification decision to an end-user
can be as important as the decision itself. Existing approaches for deep visual
recognition are generally opaque and do not output any justification text;
contemporary vision-language models can describe image content but fail to take
into account class-discriminative image aspects which justify visual
predictions. We propose a new model that focuses on the discriminating
properties of the visible object, jointly predicts a class label, and explains
why the predicted label is appropriate for the image. We propose a novel loss
function based on sampling and reinforcement learning that learns to generate
sentences that realize a global sentence property, such as class specificity.
Our results on a fine-grained bird species classification dataset show that our
model is able to generate explanations which are not only consistent with an
image but also more discriminative than descriptions produced by existing
captioning methods.
| no_new_dataset | 0.951414 |
1505.01197 | Georgia Gkioxari | Georgia Gkioxari, Ross Girshick, Jitendra Malik | Contextual Action Recognition with R*CNN | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are multiple cues in an image which reveal what action a person is
performing. For example, a jogger has a pose that is characteristic for
jogging, but the scene (e.g. road, trail) and the presence of other joggers can
be an additional source of information. In this work, we exploit the simple
observation that actions are accompanied by contextual cues to build a strong
action recognition system. We adapt RCNN to use more than one region for
classification while still maintaining the ability to localize the action. We
call our system R*CNN. The action-specific models and the feature maps are
trained jointly, allowing for action specific representations to emerge. R*CNN
achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other
approaches in the field by a significant margin. Last, we show that R*CNN is
not limited to action recognition. In particular, R*CNN can also be used to
tackle fine-grained tasks such as attribute classification. We validate this
claim by reporting state-of-the-art performance on the Berkeley Attributes of
People dataset.
| [
{
"version": "v1",
"created": "Tue, 5 May 2015 21:56:10 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Sep 2015 20:29:26 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Mar 2016 01:06:01 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Gkioxari",
"Georgia",
""
],
[
"Girshick",
"Ross",
""
],
[
"Malik",
"Jitendra",
""
]
] | TITLE: Contextual Action Recognition with R*CNN
ABSTRACT: There are multiple cues in an image which reveal what action a person is
performing. For example, a jogger has a pose that is characteristic for
jogging, but the scene (e.g. road, trail) and the presence of other joggers can
be an additional source of information. In this work, we exploit the simple
observation that actions are accompanied by contextual cues to build a strong
action recognition system. We adapt RCNN to use more than one region for
classification while still maintaining the ability to localize the action. We
call our system R*CNN. The action-specific models and the feature maps are
trained jointly, allowing for action specific representations to emerge. R*CNN
achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other
approaches in the field by a significant margin. Last, we show that R*CNN is
not limited to action recognition. In particular, R*CNN can also be used to
tackle fine-grained tasks such as attribute classification. We validate this
claim by reporting state-of-the-art performance on the Berkeley Attributes of
People dataset.
| no_new_dataset | 0.945349 |
1602.00417 | Jumabek Alikhanov | Jumabek Alikhanov, Myeong Hyeon Ga, Seunghyun Ko and Geun-Sik Jo | Transfer Learning Based on AdaBoost for Feature Selection from Multiple
ConvNet Layer Features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Networks (ConvNets) are powerful models that learn hierarchies
of visual features, which could also be used to obtain image representations
for transfer learning. The basic pipeline for transfer learning is to first
train a ConvNet on a large dataset (source task) and then use feed-forward
units activation of the trained ConvNet as image representation for smaller
datasets (target task). Our key contribution is to demonstrate superior
performance of multiple ConvNet layer features over single ConvNet layer
features. Combining multiple ConvNet layer features will result in a more complex
feature space with some features being repetitive. This requires some form of
feature selection. We use AdaBoost with single stumps to implicitly select only
distinct features that are useful towards classification from concatenated
ConvNet features. Experimental results show that using multiple ConvNet layer
activation features instead of single ConvNet layer features consistently will
produce superior performance. Improvements becomes significant as we increase
the distance between source task and the target task.
| [
{
"version": "v1",
"created": "Mon, 1 Feb 2016 08:02:06 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Mar 2016 12:03:49 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Alikhanov",
"Jumabek",
""
],
[
"Ga",
"Myeong Hyeon",
""
],
[
"Ko",
"Seunghyun",
""
],
[
"Jo",
"Geun-Sik",
""
]
] | TITLE: Transfer Learning Based on AdaBoost for Feature Selection from Multiple
ConvNet Layer Features
ABSTRACT: Convolutional Networks (ConvNets) are powerful models that learn hierarchies
of visual features, which could also be used to obtain image representations
for transfer learning. The basic pipeline for transfer learning is to first
train a ConvNet on a large dataset (source task) and then use feed-forward
units activation of the trained ConvNet as image representation for smaller
datasets (target task). Our key contribution is to demonstrate superior
performance of multiple ConvNet layer features over single ConvNet layer
features. Combining multiple ConvNet layer features will result in a more complex
feature space with some features being repetitive. This requires some form of
feature selection. We use AdaBoost with single stumps to implicitly select only
distinct features that are useful towards classification from concatenated
ConvNet features. Experimental results show that using multiple ConvNet layer
activation features instead of single ConvNet layer features will consistently
produce superior performance. Improvements become significant as we increase
the distance between source task and the target task.
| no_new_dataset | 0.944177 |
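A minimal sketch of the implicit feature-selection step the preceding abstract describes: features from several ConvNet layers are concatenated, an AdaBoost ensemble of single decision stumps is fitted on them, and only the features that at least one stump splits on are kept. The placeholder feature matrices and the function name are assumptions made for illustration, not the paper's pipeline; note that older scikit-learn releases call the `estimator` argument `base_estimator`.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def select_distinct_features(layer_feats, labels, n_stumps=200):
    """layer_feats: list of (n_samples, d_i) activation matrices from different layers."""
    X = np.hstack(layer_feats)  # concatenated multi-layer ConvNet features
    booster = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),  # single stumps
        n_estimators=n_stumps,
    )
    booster.fit(X, labels)
    # Features with non-zero importance were chosen by at least one stump.
    selected = np.flatnonzero(booster.feature_importances_ > 0)
    return X[:, selected], selected
```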
1602.03534 | Ozan Sener | Ozan Sener, Hyun Oh Song, Ashutosh Saxena, Silvio Savarese | Unsupervised Transductive Domain Adaptation | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supervised learning with large scale labeled datasets and deep layered models
has made a paradigm shift in diverse areas in learning and recognition.
However, this approach still suffers from generalization issues in the presence
of a domain shift between the training and the test data distribution. In this
regard, unsupervised domain adaptation algorithms have been proposed to
directly address the domain shift problem. In this paper, we approach the
problem from a transductive perspective. We incorporate the domain shift and
the transductive target inference into our framework by jointly solving for an
asymmetric similarity metric and the optimal transductive target label
assignment. We also show that our model can easily be extended for deep feature
learning in order to learn features which are discriminative in the target
domain. Our experiments show that the proposed method significantly outperforms
state-of-the-art algorithms in both object recognition and digit classification
experiments by a large margin.
| [
{
"version": "v1",
"created": "Wed, 10 Feb 2016 21:07:23 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Feb 2016 22:37:36 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Mar 2016 16:47:54 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Sener",
"Ozan",
""
],
[
"Song",
"Hyun Oh",
""
],
[
"Saxena",
"Ashutosh",
""
],
[
"Savarese",
"Silvio",
""
]
] | TITLE: Unsupervised Transductive Domain Adaptation
ABSTRACT: Supervised learning with large scale labeled datasets and deep layered models
has made a paradigm shift in diverse areas in learning and recognition.
However, this approach still suffers from generalization issues in the presence
of a domain shift between the training and the test data distribution. In this
regard, unsupervised domain adaptation algorithms have been proposed to
directly address the domain shift problem. In this paper, we approach the
problem from a transductive perspective. We incorporate the domain shift and
the transductive target inference into our framework by jointly solving for an
asymmetric similarity metric and the optimal transductive target label
assignment. We also show that our model can easily be extended for deep feature
learning in order to learn features which are discriminative in the target
domain. Our experiments show that the proposed method significantly outperforms
state-of-the-art algorithms in both object recognition and digit classification
experiments by a large margin.
| no_new_dataset | 0.944434 |
1603.07745 | Ganesh Sundaramoorthi | Ganesh Sundaramoorthi, Naeemullah Khan, Byung-Woo Hong | Coarse-to-Fine Segmentation With Shape-Tailored Scale Spaces | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We formulate a general energy and method for segmentation that is designed to
have preference for segmenting the coarse structure over the fine structure of
the data, without smoothing across boundaries of regions. The energy is
formulated by considering data terms at a continuum of scales from the scale
space computed from the Heat Equation within regions, and integrating these
terms over all time. We show that the energy may be approximately optimized
without solving for the entire scale space, but rather solving time-independent
linear equations at the native scale of the image, making the method
computationally feasible. We provide a multi-region scheme, and apply our
method to motion segmentation. Experiments on a benchmark dataset show that
our method is less sensitive to clutter or other undesirable fine-scale
structure, and leads to better performance in motion segmentation.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 20:39:24 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Sundaramoorthi",
"Ganesh",
""
],
[
"Khan",
"Naeemullah",
""
],
[
"Hong",
"Byung-Woo",
""
]
] | TITLE: Coarse-to-Fine Segmentation With Shape-Tailored Scale Spaces
ABSTRACT: We formulate a general energy and method for segmentation that is designed to
have preference for segmenting the coarse structure over the fine structure of
the data, without smoothing across boundaries of regions. The energy is
formulated by considering data terms at a continuum of scales from the scale
space computed from the Heat Equation within regions, and integrating these
terms over all time. We show that the energy may be approximately optimized
without solving for the entire scale space, but rather solving time-independent
linear equations at the native scale of the image, making the method
computationally feasible. We provide a multi-region scheme, and apply our
method to motion segmentation. Experiments on a benchmark dataset show that
our method is less sensitive to clutter or other undesirable fine-scale
structure, and leads to better performance in motion segmentation.
| no_new_dataset | 0.952838 |
1603.07772 | Wentao Zhu | Wentao Zhu, Cuiling Lan, Junliang Xing, Wenjun Zeng, Yanghao Li, Li
Shen, Xiaohui Xie | Co-occurrence Feature Learning for Skeleton based Action Recognition
using Regularized Deep LSTM Networks | AAAI 2016 conference | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Skeleton based action recognition distinguishes human actions using the
trajectories of skeleton joints, which provide a very good representation for
describing actions. Considering that recurrent neural networks (RNNs) with Long
Short-Term Memory (LSTM) can learn feature representations and model long-term
temporal dependencies automatically, we propose an end-to-end fully connected
deep LSTM network for skeleton based action recognition. Inspired by the
observation that the co-occurrences of the joints intrinsically characterize
human actions, we take the skeleton as the input at each time slot and
introduce a novel regularization scheme to learn the co-occurrence features of
skeleton joints. To train the deep LSTM network effectively, we propose a new
dropout algorithm which simultaneously operates on the gates, cells, and output
responses of the LSTM neurons. Experimental results on three human action
recognition datasets consistently demonstrate the effectiveness of the proposed
model.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 22:43:55 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Zhu",
"Wentao",
""
],
[
"Lan",
"Cuiling",
""
],
[
"Xing",
"Junliang",
""
],
[
"Zeng",
"Wenjun",
""
],
[
"Li",
"Yanghao",
""
],
[
"Shen",
"Li",
""
],
[
"Xie",
"Xiaohui",
""
]
] | TITLE: Co-occurrence Feature Learning for Skeleton based Action Recognition
using Regularized Deep LSTM Networks
ABSTRACT: Skeleton based action recognition distinguishes human actions using the
trajectories of skeleton joints, which provide a very good representation for
describing actions. Considering that recurrent neural networks (RNNs) with Long
Short-Term Memory (LSTM) can learn feature representations and model long-term
temporal dependencies automatically, we propose an end-to-end fully connected
deep LSTM network for skeleton based action recognition. Inspired by the
observation that the co-occurrences of the joints intrinsically characterize
human actions, we take the skeleton as the input at each time slot and
introduce a novel regularization scheme to learn the co-occurrence features of
skeleton joints. To train the deep LSTM network effectively, we propose a new
dropout algorithm which simultaneously operates on the gates, cells, and output
responses of the LSTM neurons. Experimental results on three human action
recognition datasets consistently demonstrate the effectiveness of the proposed
model.
| no_new_dataset | 0.946794 |
1603.07846 | Wei Wang | Wei Wang, Gang Chen, Haibo Chen, Tien Tuan Anh Dinh, Jinyang Gao, Beng
Chin Ooi, Kian-Lee Tan and Sheng Wang | Deep Learning At Scale and At Ease | submitted to TOMM (under review) | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, deep learning techniques have enjoyed success in various multimedia
applications, such as image classification and multi-modal data analysis. Large
deep learning models are developed for learning rich representations of complex
data. There are two challenges to overcome before deep learning can be widely
adopted in multimedia and other applications. One is usability, namely the
implementation of different models and training algorithms must be done by
non-experts without much effort especially when the model is large and complex.
The other is scalability, that is the deep learning system must be able to
provision the huge amount of computing resources required for training large models
with massive datasets. To address these two challenges, in this paper, we
design a distributed deep learning platform called SINGA which has an intuitive
programming model based on the common layer abstraction of deep learning
models. Good scalability is achieved through flexible distributed training
architecture and specific optimization techniques. SINGA runs on GPUs as well
as on CPUs, and we show that it outperforms many other state-of-the-art deep
learning systems. Our experience with developing and training deep learning
models for real-life multimedia applications in SINGA shows that the platform
is both usable and scalable.
| [
{
"version": "v1",
"created": "Fri, 25 Mar 2016 08:46:02 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Wang",
"Wei",
""
],
[
"Chen",
"Gang",
""
],
[
"Chen",
"Haibo",
""
],
[
"Dinh",
"Tien Tuan Anh",
""
],
[
"Gao",
"Jinyang",
""
],
[
"Ooi",
"Beng Chin",
""
],
[
"Tan",
"Kian-Lee",
""
],
[
"Wang",
"Sheng",
""
]
] | TITLE: Deep Learning At Scale and At Ease
ABSTRACT: Recently, deep learning techniques have enjoyed success in various multimedia
applications, such as image classification and multi-modal data analysis. Large
deep learning models are developed for learning rich representations of complex
data. There are two challenges to overcome before deep learning can be widely
adopted in multimedia and other applications. One is usability, namely the
implementation of different models and training algorithms must be done by
non-experts without much effort especially when the model is large and complex.
The other is scalability, that is the deep learning system must be able to
provision the huge amount of computing resources required for training large models
with massive datasets. To address these two challenges, in this paper, we
design a distributed deep learning platform called SINGA which has an intuitive
programming model based on the common layer abstraction of deep learning
models. Good scalability is achieved through flexible distributed training
architecture and specific optimization techniques. SINGA runs on GPUs as well
as on CPUs, and we show that it outperforms many other state-of-the-art deep
learning systems. Our experience with developing and training deep learning
models for real-life multimedia applications in SINGA shows that the platform
is both usable and scalable.
| no_new_dataset | 0.940353 |
1603.07849 | Eric Makita | Eric Makita, Artem Lenskiy | A multinomial probabilistic model for movie genre predictions | 5 pages, 4 figures, 8th International Conference on Machine Learning
and Computing, Hong Kong | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a movie genre-prediction based on multinomial probability
model. To the best of our knowledge, this problem has not been addressed yet in
the field of recommender system. The prediction of a movie genre has many
practical applications including complementing the items categories given by
experts and providing a surprise effect in the recommendations given to a user.
We employ a multinomial event model to estimate the likelihood of a movie given a
genre and the Bayes rule to evaluate the posterior probability of a genre given
a movie. Experiments with the MovieLens dataset validate our approach. We
achieved a 70% prediction rate using only 15% of the whole set for training.
| [
{
"version": "v1",
"created": "Fri, 25 Mar 2016 08:49:39 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Makita",
"Eric",
""
],
[
"Lenskiy",
"Artem",
""
]
] | TITLE: A multinomial probabilistic model for movie genre predictions
ABSTRACT: This paper proposes a movie genre-prediction based on multinomial probability
model. To the best of our knowledge, this problem has not been addressed yet in
the field of recommender system. The prediction of a movie genre has many
practical applications including complementing the items categories given by
experts and providing a surprise effect in the recommendations given to a user.
We employ a multinomial event model to estimate the likelihood of a movie given a
genre and the Bayes rule to evaluate the posterior probability of a genre given
a movie. Experiments with the MovieLens dataset validate our approach. We
achieved a 70% prediction rate using only 15% of the whole set for training.
| no_new_dataset | 0.95222 |
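The prediction rule sketched in the preceding abstract (a multinomial event model for P(movie | genre) combined with Bayes' rule to obtain P(genre | movie)) can be written in a few lines. Treating per-movie tag or keyword counts as the multinomial events is an assumption made only for this example; the paper's exact event definition may differ.

```python
import numpy as np

def fit_genre_model(counts_per_genre, genre_priors, alpha=1.0):
    """counts_per_genre: (n_genres, n_events) aggregated event counts per genre."""
    smoothed = counts_per_genre + alpha  # Laplace smoothing
    log_lik = np.log(smoothed / smoothed.sum(axis=1, keepdims=True))
    return log_lik, np.log(np.asarray(genre_priors))

def predict_genre(movie_counts, log_lik, log_prior):
    """movie_counts: (n_events,) event counts observed for one movie."""
    # log P(genre | movie) = log P(genre) + sum_e n_e * log P(e | genre) + const
    log_post = log_prior + log_lik @ movie_counts
    return int(np.argmax(log_post))
```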
1603.07879 | Raja Kishor D Mr. | D. Raja Kishor, N. B. Venkateswarlu | Hybridization of Expectation-Maximization and K-Means Algorithms for
Better Clustering Performance | 17 pages, 18 figures | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | The present work proposes hybridization of Expectation-Maximization (EM) and
K-Means techniques as an attempt to speed-up the clustering process. Though
both K-Means and EM techniques look into different areas, K-means can be viewed
as an approximate way to obtain maximum likelihood estimates for the means.
Along with the proposed algorithm for hybridization, the present work also
experiments with the Standard EM algorithm. Six different datasets are used for
the experiments of which three are synthetic datasets. Clustering fitness and
Sum of Squared Errors (SSE) are computed for measuring the clustering
performance. In all the experiments it is observed that the proposed algorithm
for hybridization of EM and K-Means techniques is consistently taking less
execution time with acceptable Clustering Fitness value and less SSE than the
standard EM algorithm. It is also observed that the proposed algorithm is
producing better clustering results than the Cluster package of Purdue
University.
| [
{
"version": "v1",
"created": "Fri, 25 Mar 2016 11:09:22 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Kishor",
"D. Raja",
""
],
[
"Venkateswarlu",
"N. B.",
""
]
] | TITLE: Hybridization of Expectation-Maximization and K-Means Algorithms for
Better Clustering Performance
ABSTRACT: The present work proposes hybridization of Expectation-Maximization (EM) and
K-Means techniques as an attempt to speed-up the clustering process. Though
both K-Means and EM techniques look into different areas, K-means can be viewed
as an approximate way to obtain maximum likelihood estimates for the means.
Along with the proposed algorithm for hybridization, the present work also
experiments with the Standard EM algorithm. Six different datasets are used for
the experiments of which three are synthetic datasets. Clustering fitness and
Sum of Squared Errors (SSE) are computed for measuring the clustering
performance. In all the experiments it is observed that the proposed algorithm
for hybridization of EM and K-Means techniques is consistently taking less
execution time with acceptable Clustering Fitness value and less SSE than the
standard EM algorithm. It is also observed that the proposed algorithm is
producing better clustering results than the Cluster package of Purdue
University.
| no_new_dataset | 0.951504 |
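One common way to combine the two techniques, in the spirit of the preceding abstract, is to let a short K-Means run provide the initial means that EM then refines into a full Gaussian mixture. This is only a sketch of that general idea using scikit-learn; the paper's specific interleaving of the two algorithms may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def kmeans_seeded_em(X, n_clusters, kmeans_iters=5):
    # A few cheap K-Means iterations give approximate cluster centers...
    km = KMeans(n_clusters=n_clusters, max_iter=kmeans_iters, n_init=1).fit(X)
    # ...which seed the means of the mixture that EM then refines.
    gmm = GaussianMixture(n_components=n_clusters, means_init=km.cluster_centers_)
    gmm.fit(X)  # EM updates means, covariances, and mixing weights
    return gmm.predict(X), gmm

# Example usage on synthetic data with three well-separated clusters:
# X = np.vstack([np.random.randn(100, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
# labels, model = kmeans_seeded_em(X, 3)
```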
1603.07886 | Shanlin Zhong | Peijie Yin, Hong Qiao, Wei Wu, Lu Qi, YinLin Li, Shanlin Zhong, Bo
Zhang | A Novel Biologically Mechanism-Based Visual Cognition Model--Automatic
Extraction of Semantics, Formation of Integrated Concepts and Re-selection
Features for Ambiguity | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integration between biology and information science benefits both fields.
Many related models have been proposed, such as computational visual cognition
models, computational motor control models, integrations of both and so on. In
general, the robustness and precision of recognition are among the key problems
for object recognition models.
In this paper, inspired by features of human recognition process and their
biological mechanisms, a new integrated and dynamic framework is proposed to
mimic the semantic extraction, concept formation and feature re-selection in
human visual processing. The main contributions of the proposed model are as
follows:
(1) Semantic feature extraction: Local semantic features are learnt from
episodic features that are extracted from raw images through a deep neural
network;
(2) Integrated concept formation: Concepts are formed with local semantic
information and structural information learnt through network.
(3) Feature re-selection: When ambiguity is detected during recognition
process, distinctive features according to the difference between ambiguous
candidates are re-selected for recognition.
Experimental results on hand-written digit and facial shape datasets show
that, compared with other methods, the new proposed model exhibits higher
robustness and precision for visual recognition, especially when input samples
are semantically ambiguous. Meanwhile, the introduced biological
mechanisms further strengthen the interaction between neuroscience and
information science.
| [
{
"version": "v1",
"created": "Fri, 25 Mar 2016 11:47:16 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Yin",
"Peijie",
""
],
[
"Qiao",
"Hong",
""
],
[
"Wu",
"Wei",
""
],
[
"Qi",
"Lu",
""
],
[
"Li",
"YinLin",
""
],
[
"Zhong",
"Shanlin",
""
],
[
"Zhang",
"Bo",
""
]
] | TITLE: A Novel Biologically Mechanism-Based Visual Cognition Model--Automatic
Extraction of Semantics, Formation of Integrated Concepts and Re-selection
Features for Ambiguity
ABSTRACT: Integration between biology and information science benefits both fields.
Many related models have been proposed, such as computational visual cognition
models, computational motor control models, integrations of both and so on. In
general, the robustness and precision of recognition are among the key problems
for object recognition models.
In this paper, inspired by features of human recognition process and their
biological mechanisms, a new integrated and dynamic framework is proposed to
mimic the semantic extraction, concept formation and feature re-selection in
human visual processing. The main contributions of the proposed model are as
follows:
(1) Semantic feature extraction: Local semantic features are learnt from
episodic features that are extracted from raw images through a deep neural
network;
(2) Integrated concept formation: Concepts are formed with local semantic
information and structural information learnt through network.
(3) Feature re-selection: When ambiguity is detected during recognition
process, distinctive features according to the difference between ambiguous
candidates are re-selected for recognition.
Experimental results on hand-written digit and facial shape datasets show
that, compared with other methods, the new proposed model exhibits higher
robustness and precision for visual recognition, especially when input samples
are semantically ambiguous. Meanwhile, the introduced biological
mechanisms further strengthen the interaction between neuroscience and
information science.
| no_new_dataset | 0.951369 |
1603.07980 | Joseph Dulny III | Joseph Dulny III and Michael Kim | Developing Quantum Annealer Driven Data Discovery | null | null | null | null | quant-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning applications are limited by computational power. In this
paper, we gain novel insights into the application of quantum annealing (QA) to
machine learning (ML) through experiments in natural language processing (NLP),
seizure prediction, and linear separability testing. These experiments are
performed on QA simulators and early-stage commercial QA hardware and compared
to an unprecedented number of traditional ML techniques. We extend QBoost, an
early implementation of a binary classifier that utilizes a quantum annealer,
via resampling and ensembling of predicted probabilities to produce a more
robust class estimator. To determine the strengths and weaknesses of this
approach, resampled QBoost (RQBoost) is tested across several datasets and
compared to QBoost and traditional ML. We show and explain how QBoost in
combination with a commercial QA device is unable to perfectly separate binary
class data which is linearly separable via logistic regression with shrinkage.
We further explore the performance of RQBoost in the space of NLP and seizure
prediction and find QA-enabled ML using QBoost and RQBoost is outperformed by
traditional techniques. Additionally, we provide a detailed discussion of
algorithmic constraints and trade-offs imposed by the use of this QA hardware.
Through these experiments, we provide unique insights into the state of quantum
ML via boosting and the use of quantum annealing hardware that are valuable to
institutions interested in applying QA to problems in ML and beyond.
| [
{
"version": "v1",
"created": "Fri, 25 Mar 2016 18:36:33 GMT"
}
] | 2016-03-28T00:00:00 | [
[
"Dulny",
"Joseph",
"III"
],
[
"Kim",
"Michael",
""
]
] | TITLE: Developing Quantum Annealer Driven Data Discovery
ABSTRACT: Machine learning applications are limited by computational power. In this
paper, we gain novel insights into the application of quantum annealing (QA) to
machine learning (ML) through experiments in natural language processing (NLP),
seizure prediction, and linear separability testing. These experiments are
performed on QA simulators and early-stage commercial QA hardware and compared
to an unprecedented number of traditional ML techniques. We extend QBoost, an
early implementation of a binary classifier that utilizes a quantum annealer,
via resampling and ensembling of predicted probabilities to produce a more
robust class estimator. To determine the strengths and weaknesses of this
approach, resampled QBoost (RQBoost) is tested across several datasets and
compared to QBoost and traditional ML. We show and explain how QBoost in
combination with a commercial QA device is unable to perfectly separate binary
class data which is linearly separable via logistic regression with shrinkage.
We further explore the performance of RQBoost in the space of NLP and seizure
prediction and find QA-enabled ML using QBoost and RQBoost is outperformed by
traditional techniques. Additionally, we provide a detailed discussion of
algorithmic constraints and trade-offs imposed by the use of this QA hardware.
Through these experiments, we provide unique insights into the state of quantum
ML via boosting and the use of quantum annealing hardware that are valuable to
institutions interested in applying QA to problems in ML and beyond.
| no_new_dataset | 0.942876 |
1603.07342 | Arkadiusz Hypki Dr | Arkadiusz Hypki | BEANS - a software package for distributed Big Data analysis | 14 pages, 6 figures, submitted to MNRAS, comments are welcome | null | null | null | astro-ph.IM cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | BEANS software is a web based, easy to install and maintain, new tool to
store and analyse data in a distributed way for a massive amount of data. It
provides a clear interface for querying, filtering, aggregating, and plotting
data from an arbitrary number of datasets. Its main purpose is to simplify the
process of storing, examining and finding new relations in the so-called Big
Data.
Creation of BEANS software is an answer to the growing needs of the
astronomical community to have a versatile tool to store, analyse and compare
the complex astrophysical numerical simulations with observations (e.g.
simulations of the Galaxy or star clusters with the Gaia archive). However,
this software was built in a general form and it is ready to use in any other
research field or open source software.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 20:14:34 GMT"
}
] | 2016-03-25T00:00:00 | [
[
"Hypki",
"Arkadiusz",
""
]
] | TITLE: BEANS - a software package for distributed Big Data analysis
ABSTRACT: BEANS software is a web based, easy to install and maintain, new tool to
store and analyse data in a distributed way for a massive amount of data. It
provides a clear interface for querying, filtering, aggregating, and plotting
data from an arbitrary number of datasets. Its main purpose is to simplify the
process of storing, examining and finding new relations in the so-called Big
Data.
Creation of BEANS software is an answer to the growing needs of the
astronomical community to have a versatile tool to store, analyse and compare
the complex astrophysical numerical simulations with observations (e.g.
simulations of the Galaxy or star clusters with the Gaia archive). However,
this software was built in a general form and it is ready to use in any other
research field or open source software.
| no_new_dataset | 0.93852 |
1603.07376 | Paolo Cintia | Paolo Cintia, Mirco Nanni | An effective Time-Aware Map Matching process for low sampling GPS data | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of the proliferation of Geo-Spatial Data, induced by the diffusion
of GPS devices, the map matching problem still represents an important and
valuable challenge. The process of associating a segment of the underlying road
network to a GPS point gives us the chance to enrich raw data with the semantic
layer provided by the roadmap, with all contextual information associated to
it, e.g. the presence of speed limits, attraction points, changes in elevation,
etc. Most state-of-the-art solutions for this classical problem simply look for the
shortest or fastest path connecting any pair of consecutive points in a trip.
While in some contexts that is reasonable, in this work we argue that the
shortest/fastest path assumption can be in general erroneous. Indeed, we show
that such approaches can yield travel times that are significantly incoherent
with the real ones, and propose a Time-Aware Map matching process that tries to
improve the state of the art by also taking this temporal aspect into account.
Our algorithm proves to be very efficient, effective on low-sampling data, and
able to outperform existing solutions, as shown by experiments on large datasets of
real GPS trajectories. Moreover, our algorithm is parameter-free and does not
depend on specific characteristics of the GPS localization error and of the
road network (e.g. density of roads, road network topology, etc.).
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 21:48:38 GMT"
}
] | 2016-03-25T00:00:00 | [
[
"Cintia",
"Paolo",
""
],
[
"Nanni",
"Mirco",
""
]
] | TITLE: An effective Time-Aware Map Matching process for low sampling GPS data
ABSTRACT: In the era of the proliferation of Geo-Spatial Data, induced by the diffusion
of GPS devices, the map matching problem still represents an important and
valuable challenge. The process of associating a segment of the underlying road
network to a GPS point gives us the chance to enrich raw data with the semantic
layer provided by the roadmap, with all contextual information associated to
it, e.g. the presence of speed limits, attraction points, changes in elevation,
etc. Most state-of-the-art solutions for this classical problem simply look for the
shortest or fastest path connecting any pair of consecutive points in a trip.
While in some contexts that is reasonable, in this work we argue that the
shortest/fastest path assumption can be in general erroneous. Indeed, we show
that such approaches can yield travel times that are significantly incoherent
with the real ones, and propose a Time-Aware Map matching process that tries to
improve the state of the art by also taking this temporal aspect into account. Our
algorithm proves to be very efficient, effective on low-sampling data, and to
outperform existing solutions, as proved by experiments on large datasets of
real GPS trajectories. Moreover, our algorithm is parameter-free and does not
depend on specific characteristics of the GPS localization error and of the
road network (e.g. density of roads, road network topology, etc.).
| no_new_dataset | 0.947721 |
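The record above argues that assuming the shortest/fastest path between consecutive GPS fixes can yield travel times incoherent with the observed timestamps, and instead scores candidate paths by temporal coherence. The sketch below is only a minimal illustration of such a time-aware criterion, not the authors' algorithm; the road graph and its `length`/`speed` edge attributes, as well as the selection rule, are assumptions made for the example.

```python
import networkx as nx

def expected_travel_time(graph, path):
    """Free-flow traversal time of a path, from assumed edge attributes (metres, m/s)."""
    return sum(graph[u][v]["length"] / graph[u][v]["speed"] for u, v in zip(path, path[1:]))

def time_aware_match(graph, src, dst, observed_dt, k=5):
    """Among the k shortest candidate paths between two matched nodes, pick the one
    whose expected travel time best agrees with the observed gap between GPS fixes."""
    candidates = []
    for i, path in enumerate(nx.shortest_simple_paths(graph, src, dst, weight="length")):
        if i >= k:
            break
        candidates.append(path)
    return min(candidates, key=lambda p: abs(expected_travel_time(graph, p) - observed_dt))
```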
1603.07396 | Aniruddha Kembhavi | Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh
Hajishirzi, Ali Farhadi | A Diagram Is Worth A Dozen Images | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diagrams are common tools for representing complex concepts, relationships
and events, often when it would be difficult to portray the same information
with natural images. Understanding natural images has been extensively studied
in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation and reasoning,
the challenging task of identifying the structure of a diagram and the
semantics of its constituents and their relationships. We introduce Diagram
Parse Graphs (DPG) as our representation to model the structure of diagrams. We
define syntactic parsing of diagrams as learning to infer DPGs for diagrams and
study semantic interpretation and reasoning of diagrams in the context of
diagram question answering. We devise an LSTM-based method for syntactic
parsing of diagrams and introduce a DPG-based attention model for diagram
question answering. We compile a new dataset of diagrams with exhaustive
annotations of constituents and relationships for over 5,000 diagrams and
15,000 questions and answers. Our results show the significance of our models
for syntactic parsing and question answering in diagrams using DPGs.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 00:02:58 GMT"
}
] | 2016-03-25T00:00:00 | [
[
"Kembhavi",
"Aniruddha",
""
],
[
"Salvato",
"Mike",
""
],
[
"Kolve",
"Eric",
""
],
[
"Seo",
"Minjoon",
""
],
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Farhadi",
"Ali",
""
]
] | TITLE: A Diagram Is Worth A Dozen Images
ABSTRACT: Diagrams are common tools for representing complex concepts, relationships
and events, often when it would be difficult to portray the same information
with natural images. Understanding natural images has been extensively studied
in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation and reasoning,
the challenging task of identifying the structure of a diagram and the
semantics of its constituents and their relationships. We introduce Diagram
Parse Graphs (DPG) as our representation to model the structure of diagrams. We
define syntactic parsing of diagrams as learning to infer DPGs for diagrams and
study semantic interpretation and reasoning of diagrams in the context of
diagram question answering. We devise an LSTM-based method for syntactic
parsing of diagrams and introduce a DPG-based attention model for diagram
question answering. We compile a new dataset of diagrams with exhaustive
annotations of constituents and relationships for over 5,000 diagrams and
15,000 questions and answers. Our results show the significance of our models
for syntactic parsing and question answering in diagrams using DPGs.
| new_dataset | 0.958731 |
1603.07433 | Shouhuai Xu | Zhenxin Zhan and Maochao Xu and Shouhuai Xu | Characterizing Honeypot-Captured Cyber Attacks: Statistical Framework
and Case Study | null | IEEE Transactions on Information Forensics & Security (IEEE TIFS),
8(11): 1775-1789, (2013) | null | null | cs.CR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rigorously characterizing the statistical properties of cyber attacks is an
important problem. In this paper, we propose the {\em first} statistical
framework for rigorously analyzing honeypot-captured cyber attack data. The
framework is built on the novel concept of {\em stochastic cyber attack
process}, a new kind of mathematical object for describing cyber attacks. To
demonstrate use of the framework, we apply it to analyze a low-interaction
honeypot dataset, while noting that the framework can be equally applied to
analyze high-interaction honeypot data that contains richer information about
the attacks. The case study finds, for the first time, that Long-Range
Dependence (LRD) is exhibited by honeypot-captured cyber attacks. The case
study confirms that by exploiting the statistical properties (LRD in this
case), it is feasible to predict cyber attacks (at least in terms of attack
rate) with good accuracy. This kind of prediction capability would provide
sufficient early-warning time for defenders to adjust their defense
configurations or resource allocations. The idea of "gray-box" (rather than
"black-box") prediction is central to the utility of the statistical framework,
and represents a significant step towards ultimately understanding (the degree
of) the {\em predictability} of cyber attacks.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 04:27:09 GMT"
}
] | 2016-03-25T00:00:00 | [
[
"Zhan",
"Zhenxin",
""
],
[
"Xu",
"Maochao",
""
],
[
"Xu",
"Shouhuai",
""
]
] | TITLE: Characterizing Honeypot-Captured Cyber Attacks: Statistical Framework
and Case Study
ABSTRACT: Rigorously characterizing the statistical properties of cyber attacks is an
important problem. In this paper, we propose the {\em first} statistical
framework for rigorously analyzing honeypot-captured cyber attack data. The
framework is built on the novel concept of {\em stochastic cyber attack
process}, a new kind of mathematical object for describing cyber attacks. To
demonstrate use of the framework, we apply it to analyze a low-interaction
honeypot dataset, while noting that the framework can be equally applied to
analyze high-interaction honeypot data that contains richer information about
the attacks. The case study finds, for the first time, that Long-Range
Dependence (LRD) is exhibited by honeypot-captured cyber attacks. The case
study confirms that by exploiting the statistical properties (LRD in this
case), it is feasible to predict cyber attacks (at least in terms of attack
rate) with good accuracy. This kind of prediction capability would provide
sufficient early-warning time for defenders to adjust their defense
configurations or resource allocations. The idea of "gray-box" (rather than
"black-box") prediction is central to the utility of the statistical framework,
and represents a significant step towards ultimately understanding (the degree
of) the {\em predictability} of cyber attacks.
| no_new_dataset | 0.949856 |
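The record above reports that honeypot-captured attack rates exhibit Long-Range Dependence (LRD) and exploits it for "gray-box" prediction. As a generic, hedged illustration of how LRD is typically diagnosed (not the paper's statistical framework), the sketch below estimates the Hurst exponent of an attack-rate series with the aggregated-variance method; a value clearly above 0.5 suggests LRD. The example series is a synthetic placeholder.

```python
import numpy as np

def hurst_aggregated_variance(series, block_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent H of a stationary series.
    For an LRD process, Var(block means) ~ m^(2H - 2), so the slope of the
    log-log fit of variance vs. block size m gives H = 1 + slope / 2."""
    x = np.asarray(series, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_var, 1)[0]
    return 1.0 + slope / 2.0

# Example: hourly attack counts (synthetic, memoryless placeholder data, so H ~ 0.5).
rates = np.random.poisson(lam=20, size=4096)
print(hurst_aggregated_variance(rates))
```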
1603.07463 | Olivier Delestre | Morgan Abily (I-CiTy), Olivier Delestre (JAD), Laura Amoss\'e,
Nathalie Bertrand (IRSN), Christian Laguerre (MAPMO), Claire-Marie Duluc
(IRSN), Philippe Gourbesville (I-CiTy) | Use of 3D classified topographic data with FullSWOF for high resolution
simulation of a river flood event over a dense urban area | 3rd IAHR Europe Congress, 14-16 April 2014, Porto, Portugal, Apr
2014, Porto, Portugal. 2016 | null | null | null | math.NA cs.CE math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High resolution (infra-metric) topographic data, including photogrammetric-born
3D classified data, are becoming commonly available at a large range of spatial
extents, such as the municipality or industrial site scale. This category of
dataset is promising for high resolution (HR) Digital Surface Model (DSM)
generation, allowing inclusion of fine above-ground structures which might
influence overland flow hydrodynamics in urban environments. Nonetheless, several
categories of technical and numerical challenges arise from this type of data
use with standard 2D Shallow Water Equations (SWE) based numerical codes.
FullSWOF (Full Shallow Water equations for Overland Flow) is a code based on 2D
SWE under conservative form. This code relies on a well-balanced finite volume
method over a regular grid using numerical method based on hydrostatic
reconstruction scheme. When compared to existing industrial codes used for
urban flooding simulations, numerical approach implemented in FullSWOF allows
to handle properly flow regime changes, preservation of water depth positivity
at wet/dry cells transitions and steady state preservation. FullSWOF has
already been tested on analytical solution library (SWASHES) and has been used
to simulate runoff and dam-breaks. FullSWOF's above-mentioned properties are of
great interest for urban overland flow. The objectives of this study are (i) to
assess the feasibility and added value of using HR 3D classified topographic
data to model river overland flow and (ii) to take advantage of FullSWOF code
properties for overland flow simulation in urban environments.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 07:59:15 GMT"
}
] | 2016-03-25T00:00:00 | [
[
"Abily",
"Morgan",
"",
"I-CiTy"
],
[
"Delestre",
"Olivier",
"",
"JAD"
],
[
"Amossé",
"Laura",
"",
"IRSN"
],
[
"Bertrand",
"Nathalie",
"",
"IRSN"
],
[
"Laguerre",
"Christian",
"",
"MAPMO"
],
[
"Duluc",
"Claire-Marie",
"",
"IRSN"
],
[
"Gourbesville",
"Philippe",
"",
"I-CiTy"
]
] | TITLE: Use of 3D classified topographic data with FullSWOF for high resolution
simulation of a river flood event over a dense urban area
ABSTRACT: High resolution (infra-metric) topographic data, including photogrammetric-born
3D classified data, are becoming commonly available at a large range of spatial
extents, such as the municipality or industrial site scale. This category of
dataset is promising for high resolution (HR) Digital Surface Model (DSM)
generation, allowing inclusion of fine above-ground structures which might
influence overland flow hydrodynamics in urban environments. Nonetheless, several
categories of technical and numerical challenges arise from this type of data
use with standard 2D Shallow Water Equations (SWE) based numerical codes.
FullSWOF (Full Shallow Water equations for Overland Flow) is a code based on 2D
SWE under conservative form. This code relies on a well-balanced finite volume
method over a regular grid using numerical method based on hydrostatic
reconstruction scheme. When compared to existing industrial codes used for
urban flooding simulations, numerical approach implemented in FullSWOF allows
to handle properly flow regime changes, preservation of water depth positivity
at wet/dry cells transitions and steady state preservation. FullSWOF has
already been tested on analytical solution library (SWASHES) and has been used
to simulate runoff and dam-breaks. FullSWOF's above-mentioned properties are of
great interest for urban overland flow. The objectives of this study are (i) to
assess the feasibility and added value of using HR 3D classified topographic
data to model river overland flow and (ii) to take advantage of FullSWOF code
properties for overland flow simulation in urban environments.
| no_new_dataset | 0.949012 |
1603.07466 | Marlon Dumas | Diego Calvanese, Marlon Dumas, \"Ulari Laurson, Fabrizio M. Maggi,
Marco Montali, Irene Teinemaa | Semantics and Analysis of DMN Decision Tables | Submitted to the International Conference on Business Process
Management (BPM 2016) | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Decision Model and Notation (DMN) is a standard notation to capture
decision logic in business applications in general and business processes in
particular. A central construct in DMN is that of a decision table. The
increasing use of DMN decision tables to capture critical business knowledge
raises the need to support analysis tasks on these tables such as correctness
and completeness checking. This paper provides a formal semantics for DMN
tables, a formal definition of key analysis tasks and scalable algorithms to
tackle two such tasks, i.e., detection of overlapping rules and of missing
rules. The algorithms are based on a geometric interpretation of decision
tables that can be used to support other analysis tasks by tapping into
geometric algorithms. The algorithms have been implemented in an open-source
DMN editor and tested on large decision tables derived from a credit lending
dataset.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 08:22:36 GMT"
}
] | 2016-03-25T00:00:00 | [
[
"Calvanese",
"Diego",
""
],
[
"Dumas",
"Marlon",
""
],
[
"Laurson",
"Ülari",
""
],
[
"Maggi",
"Fabrizio M.",
""
],
[
"Montali",
"Marco",
""
],
[
"Teinemaa",
"Irene",
""
]
] | TITLE: Semantics and Analysis of DMN Decision Tables
ABSTRACT: The Decision Model and Notation (DMN) is a standard notation to capture
decision logic in business applications in general and business processes in
particular. A central construct in DMN is that of a decision table. The
increasing use of DMN decision tables to capture critical business knowledge
raises the need to support analysis tasks on these tables such as correctness
and completeness checking. This paper provides a formal semantics for DMN
tables, a formal definition of key analysis tasks and scalable algorithms to
tackle two such tasks, i.e., detection of overlapping rules and of missing
rules. The algorithms are based on a geometric interpretation of decision
tables that can be used to support other analysis tasks by tapping into
geometric algorithms. The algorithms have been implemented in an open-source
DMN editor and tested on large decision tables derived from a credit lending
dataset.
| no_new_dataset | 0.944587 |
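The record above detects overlapping rules in DMN decision tables through a geometric interpretation in which each rule covers a hyperrectangle of the input space. A minimal sketch of that interval-overlap test is given below; the rule encoding (one numeric interval per input column) and the toy table are assumptions for illustration, not the paper's algorithm.

```python
from itertools import combinations

# A rule is a list of (low, high) intervals, one per input column of the table.
# Two rules overlap iff their intervals intersect on every column, i.e. the
# corresponding hyperrectangles have a non-empty intersection.

def intervals_overlap(a, b):
    lo_a, hi_a = a
    lo_b, hi_b = b
    return max(lo_a, lo_b) < min(hi_a, hi_b)

def find_overlapping_rules(rules):
    """Return index pairs of rules whose hyperrectangles intersect."""
    return [(i, j) for (i, r1), (j, r2) in combinations(enumerate(rules), 2)
            if all(intervals_overlap(c1, c2) for c1, c2 in zip(r1, r2))]

# Toy credit-lending style table: columns are (age, income).
rules = [
    [(18, 30), (0, 50_000)],
    [(25, 65), (20_000, 80_000)],    # overlaps rule 0 on both columns
    [(30, 65), (80_000, 200_000)],
]
print(find_overlapping_rules(rules))  # [(0, 1)]
```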
1603.07475 | Youngjin Yoon | Youngjin Yoon, Gyeongmin Choe, Namil Kim, Joon-Young Lee, In So Kweon | Fine-scale Surface Normal Estimation using a Single NIR Image | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present surface normal estimation using a single near infrared (NIR)
image. We are focusing on fine-scale surface geometry captured with an
uncalibrated light source. To tackle this ill-posed problem, we adopt a
generative adversarial network which is effective in recovering a sharp output,
which is also essential for fine-scale surface normal estimation. We
incorporate angular error and integrability constraint into the objective
function of the network to make estimated normals physically meaningful. We
train and validate our network on a recent NIR dataset, and also evaluate the
generality of our trained model by using new external datasets which are
captured with a different camera in a different environment.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 08:43:14 GMT"
}
] | 2016-03-25T00:00:00 | [
[
"Yoon",
"Youngjin",
""
],
[
"Choe",
"Gyeongmin",
""
],
[
"Kim",
"Namil",
""
],
[
"Lee",
"Joon-Young",
""
],
[
"Kweon",
"In So",
""
]
] | TITLE: Fine-scale Surface Normal Estimation using a Single NIR Image
ABSTRACT: We present surface normal estimation using a single near infrared (NIR)
image. We are focusing on fine-scale surface geometry captured with an
uncalibrated light source. To tackle this ill-posed problem, we adopt a
generative adversarial network which is effective in recovering a sharp output,
which is also essential for fine-scale surface normal estimation. We
incorporate angular error and integrability constraint into the objective
function of the network to make estimated normals physically meaningful. We
train and validate our network on a recent NIR dataset, and also evaluate the
generality of our trained model by using new external datasets which are
captured with a different camera in a different environment.
| no_new_dataset | 0.878991 |
1603.07646 | Saurabh Kataria | Saurabh Kataria | Recursive Neural Language Architecture for Tag Prediction | null | null | null | null | cs.IR cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of learning distributed representations for tags from
their associated content for the task of tag recommendation. Considering
tagging information is usually very sparse, effective learning from content and
tag association is very crucial and challenging task. Recently, various neural
representation learning models such as WSABIE and its variants show promising
performance, mainly due to compact feature representations learned in a
semantic space. However, their capacity is limited by a linear compositional
approach for representing tags as sum of equal parts and hurt their
performance. In this work, we propose a neural feedback relevance model for
learning tag representations with weighted feature representations. Our
experiments on two widely used datasets show significant improvement for
quality of recommendations over various baselines.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 16:39:37 GMT"
}
] | 2016-03-25T00:00:00 | [
[
"Kataria",
"Saurabh",
""
]
] | TITLE: Recursive Neural Language Architecture for Tag Prediction
ABSTRACT: We consider the problem of learning distributed representations for tags from
their associated content for the task of tag recommendation. Considering
tagging information is usually very sparse, effective learning from content and
tag association is very crucial and challenging task. Recently, various neural
representation learning models such as WSABIE and its variants show promising
performance, mainly due to compact feature representations learned in a
semantic space. However, their capacity is limited by a linear compositional
approach for representing tags as sum of equal parts and hurt their
performance. In this work, we propose a neural feedback relevance model for
learning tag representations with weighted feature representations. Our
experiments on two widely used datasets show significant improvement for
quality of recommendations over various baselines.
| no_new_dataset | 0.944382 |
1412.0722 | Stuart Hamilton | Stuart Hamilton and Daniel Casey | Creation of a high spatiotemporal resolution global database of
continuous mangrove forest cover for the 21st Century (CGMFC-21) | 31 pages, 3 tables, 1 Figure | null | 10.1111/geb.12449 | null | physics.geo-ph physics.ao-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The goal of this research is to provide high resolution local, regional,
national and global estimates of annual mangrove forest area from 2000 through
to 2012. To achieve this we synthesize the Global Forest Change database, the
Terrestrial Ecosystems of the World database, and the Mangrove Forests of the
World database to extract mangrove forest cover at high spatial and temporal
resolutions. We then use the new database to monitor mangrove cover at the
global, national and protected area scales. Countries showing relatively high
amounts of mangrove loss include Myanmar, Malaysia, Cambodia, Indonesia, and
Guatemala. Indonesia remains by far the largest mangrove-holding nation,
containing between 26 percent and 29 percent of the global mangrove inventory
with a deforestation rate of between 0.26 percent and 0.66 percent annually.
Global mangrove deforestation continues but at a much reduced rate of between
0.16 percent and 0.39 percent annually. Southeast Asia is a region of concern
with mangrove deforestation rates between 3.58 percent and 8.08 percent during
the analysis period, this in a region containing half of the entire global
mangrove forest inventory. The global mangrove deforestation pattern from 2000
to 2012 is one of decreasing rates of deforestation, with many nations
essentially stable, with the exception of the largest mangrove-holding region
of Southeast Asia. We provide a standardized global spatial dataset that
monitors mangrove deforestation globally at high spatiotemporal resolutions,
covering 99 percent of all mangrove forests. These data can be used to drive
the mangrove research agenda particularly as it pertains to improved monitoring
of mangrove carbon stocks and the establishment of baseline local mangrove
forest inventories required for payment for ecosystem service initiatives.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2014 22:58:11 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Sep 2015 20:01:53 GMT"
},
{
"version": "v3",
"created": "Wed, 6 Jan 2016 19:39:55 GMT"
}
] | 2016-03-24T00:00:00 | [
[
"Hamilton",
"Stuart",
""
],
[
"Casey",
"Daniel",
""
]
] | TITLE: Creation of a high spatiotemporal resolution global database of
continuous mangrove forest cover for the 21st Century (CGMFC-21)
ABSTRACT: The goal of this research is to provide high resolution local, regional,
national and global estimates of annual mangrove forest area from 2000 through
to 2012. To achieve this we synthesize the Global Forest Change database, the
Terrestrial Ecosystems of the World database, and the Mangrove Forests of the
World database to extract mangrove forest cover at high spatial and temporal
resolutions. We then use the new database to monitor mangrove cover at the
global, national and protected area scales. Countries showing relatively high
amounts of mangrove loss include Myanmar, Malaysia, Cambodia, Indonesia, and
Guatemala. Indonesia remains by far the largest mangrove-holding nation,
containing between 26 percent and 29 percent of the global mangrove inventory
with a deforestation rate of between 0.26 percent and 0.66 percent annually.
Global mangrove deforestation continues but at a much reduced rate of between
0.16 percent and 0.39 percent annually. Southeast Asia is a region of concern
with mangrove deforestation rates between 3.58 percent and 8.08 percent during
the analysis period, this in a region containing half of the entire global
mangrove forest inventory. The global mangrove deforestation pattern from 2000
to 2012 is one of decreasing rates of deforestation, with many nations
essentially stable, with the exception of the largest mangrove-holding region
of Southeast Asia. We provide a standardized global spatial dataset that
monitors mangrove deforestation globally at high spatiotemporal resolutions,
covering 99 percent of all mangrove forests. These data can be used to drive
the mangrove research agenda particularly as it pertains to improved monitoring
of mangrove carbon stocks and the establishment of baseline local mangrove
forest inventories required for payment for ecosystem service initiatives.
| no_new_dataset | 0.766818 |
1601.03128 | Anand Mishra Mr. | Anand Mishra and Karteek Alahari and C. V. Jawahar | Enhancing Energy Minimization Framework for Scene Text Recognition with
Top-Down Cues | null | null | 10.1016/j.cviu.2016.01.002 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing scene text is a challenging problem, even more so than the
recognition of scanned documents. This problem has gained significant attention
from the computer vision community in recent years, and several methods based
on energy minimization frameworks and deep learning approaches have been
proposed. In this work, we focus on the energy minimization framework and
propose a model that exploits both bottom-up and top-down cues for recognizing
cropped words extracted from street images. The bottom-up cues are derived from
individual character detections from an image. We build a conditional random
field model on these detections to jointly model the strength of the detections
and the interactions between them. These interactions are top-down cues
obtained from a lexicon-based prior, i.e., language statistics. The optimal
word represented by the text image is obtained by minimizing the energy
function corresponding to the random field model. We evaluate our proposed
algorithm extensively on a number of cropped scene text benchmark datasets,
namely Street View Text, ICDAR 2003, 2011 and 2013 datasets, and IIIT 5K-word,
and show better performance than comparable methods. We perform a rigorous
analysis of all the steps in our approach and analyze the results. We also show
that state-of-the-art convolutional neural network features can be integrated
in our framework to further improve the recognition performance.
| [
{
"version": "v1",
"created": "Wed, 13 Jan 2016 04:47:28 GMT"
}
] | 2016-03-24T00:00:00 | [
[
"Mishra",
"Anand",
""
],
[
"Alahari",
"Karteek",
""
],
[
"Jawahar",
"C. V.",
""
]
] | TITLE: Enhancing Energy Minimization Framework for Scene Text Recognition with
Top-Down Cues
ABSTRACT: Recognizing scene text is a challenging problem, even more so than the
recognition of scanned documents. This problem has gained significant attention
from the computer vision community in recent years, and several methods based
on energy minimization frameworks and deep learning approaches have been
proposed. In this work, we focus on the energy minimization framework and
propose a model that exploits both bottom-up and top-down cues for recognizing
cropped words extracted from street images. The bottom-up cues are derived from
individual character detections from an image. We build a conditional random
field model on these detections to jointly model the strength of the detections
and the interactions between them. These interactions are top-down cues
obtained from a lexicon-based prior, i.e., language statistics. The optimal
word represented by the text image is obtained by minimizing the energy
function corresponding to the random field model. We evaluate our proposed
algorithm extensively on a number of cropped scene text benchmark datasets,
namely Street View Text, ICDAR 2003, 2011 and 2013 datasets, and IIIT 5K-word,
and show better performance than comparable methods. We perform a rigorous
analysis of all the steps in our approach and analyze the results. We also show
that state-of-the-art convolutional neural network features can be integrated
in our framework to further improve the recognition performance.
| no_new_dataset | 0.950824 |
1603.07022 | Alberto Pretto | Marco Imperoli and Alberto Pretto | Active Detection and Localization of Textureless Objects in Cluttered
Environments | null | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces an active object detection and localization framework
that combines a robust untextured object detection and 3D pose estimation
algorithm with a novel next-best-view selection strategy. We address the
detection and localization problems by proposing an edge-based registration
algorithm that refines the object position by minimizing a cost directly
extracted from a 3D image tensor that encodes the minimum distance to an edge
point in a joint direction/location space. We face the next-best-view problem
by exploiting a sequential decision process that, for each step, selects the
next camera position which maximizes the mutual information between the state
and the next observations. We solve the intrinsic intractability of this
solution by generating observations that represent scene realizations, i.e.
combination samples of object hypotheses provided by the object detector, while
modeling the state by means of a set of constantly resampled particles.
Experiments performed on different real world, challenging datasets confirm the
effectiveness of the proposed methods.
| [
{
"version": "v1",
"created": "Tue, 22 Mar 2016 22:55:03 GMT"
}
] | 2016-03-24T00:00:00 | [
[
"Imperoli",
"Marco",
""
],
[
"Pretto",
"Alberto",
""
]
] | TITLE: Active Detection and Localization of Textureless Objects in Cluttered
Environments
ABSTRACT: This paper introduces an active object detection and localization framework
that combines a robust untextured object detection and 3D pose estimation
algorithm with a novel next-best-view selection strategy. We address the
detection and localization problems by proposing an edge-based registration
algorithm that refines the object position by minimizing a cost directly
extracted from a 3D image tensor that encodes the minimum distance to an edge
point in a joint direction/location space. We face the next-best-view problem
by exploiting a sequential decision process that, for each step, selects the
next camera position which maximizes the mutual information between the state
and the next observations. We solve the intrinsic intractability of this
solution by generating observations that represent scene realizations, i.e.
combination samples of object hypotheses provided by the object detector, while
modeling the state by means of a set of constantly resampled particles.
Experiments performed on different real world, challenging datasets confirm the
effectiveness of the proposed methods.
| no_new_dataset | 0.950041 |
1603.07063 | Xiaodan Liang | Xiaodan Liang and Xiaohui Shen and Jiashi Feng and Liang Lin and
Shuicheng Yan | Semantic Object Parsing with Graph LSTM | 18 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By taking the semantic object parsing task as an exemplar application
scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network,
which is the generalization of LSTM from sequential data or multi-dimensional
data to general graph-structured data. Particularly, instead of evenly and
fixedly dividing an image to pixels or patches in existing multi-dimensional
LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each
arbitrary-shaped superpixel as a semantically consistent node, and adaptively
construct an undirected graph for each image, where the spatial relations of
the superpixels are naturally used as edges. Constructed on such an adaptive
graph topology, the Graph LSTM is more naturally aligned with the visual
patterns in the image (e.g., object boundaries or appearance similarities) and
provides a more economical information propagation route. Furthermore, for each
optimization step over Graph LSTM, we propose to use a confidence-driven scheme
to update the hidden and memory states of nodes progressively till all nodes
are updated. In addition, for each node, the forget gates are adaptively
learned to capture different degrees of semantic correlation with neighboring
nodes. Comprehensive evaluations on four diverse semantic object parsing
datasets well demonstrate the significant superiority of our Graph LSTM over
other state-of-the-art solutions.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 03:31:02 GMT"
}
] | 2016-03-24T00:00:00 | [
[
"Liang",
"Xiaodan",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Lin",
"Liang",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Semantic Object Parsing with Graph LSTM
ABSTRACT: By taking the semantic object parsing task as an exemplar application
scenario, we propose the Graph Long Short-Term Memory (Graph LSTM) network,
which is the generalization of LSTM from sequential data or multi-dimensional
data to general graph-structured data. Particularly, instead of evenly and
fixedly dividing an image to pixels or patches in existing multi-dimensional
LSTM structures (e.g., Row, Grid and Diagonal LSTMs), we take each
arbitrary-shaped superpixel as a semantically consistent node, and adaptively
construct an undirected graph for each image, where the spatial relations of
the superpixels are naturally used as edges. Constructed on such an adaptive
graph topology, the Graph LSTM is more naturally aligned with the visual
patterns in the image (e.g., object boundaries or appearance similarities) and
provides a more economical information propagation route. Furthermore, for each
optimization step over Graph LSTM, we propose to use a confidence-driven scheme
to update the hidden and memory states of nodes progressively till all nodes
are updated. In addition, for each node, the forget gates are adaptively
learned to capture different degrees of semantic correlation with neighboring
nodes. Comprehensive evaluations on four diverse semantic object parsing
datasets well demonstrate the significant superiority of our Graph LSTM over
other state-of-the-art solutions.
| no_new_dataset | 0.951729 |
1603.07064 | Saman Sarraf | Saman Sarraf, Mehdi Ostadhashem | Big Data Spark Solution for Functional Magnetic Resonance Imaging | 4 pages, IEEE EMBS 2016 ORLANDO | null | null | null | cs.DC cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Big Data applications have rapidly expanded into different
industries. Healthcare is also one of the industries willing to use big data
platforms, so some big data analytics tools have been adopted in this field to
some extent. Medical imaging, which is a pillar of diagnostic healthcare, deals
with a high volume of data collection and processing. A huge amount of 3D
and 4D images are acquired in different forms and resolutions using a variety
of medical imaging modalities. Preprocessing and analyzing imaging data is
currently a long, costly and time-consuming process. However, not many big
data platforms have been provided or redesigned for medical imaging purposes
because of some restrictions such as data format. In this paper, we designed,
developed and successfully tested a new pipeline for medical imaging data
(especially functional magnetic resonance imaging - fMRI) using Big Data Spark
/ PySpark platform on a single node which allows us to read and load imaging
data, convert them to Resilient Distributed Datasets in order to manipulate them
and perform in-memory data processing in parallel, and convert the final results to
imaging format while the pipeline provides an option to store the results in
other formats such as data frame. Using this new solution and pipeline, we
repeated our previous works in which we extracted brain networks from fMRI data
using template matching and sum of squared differences (SSD) method. The final
results revealed our Spark (PySpark) based solution improved the performance
(in terms of processing time) around 4 times on a single node compared to the
previous work developed in Python.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 03:42:44 GMT"
}
] | 2016-03-24T00:00:00 | [
[
"Sarraf",
"Saman",
""
],
[
"Ostadhashem",
"Mehdi",
""
]
] | TITLE: Big Data Spark Solution for Functional Magnetic Resonance Imaging
ABSTRACT: Recently, Big Data applications have rapidly expanded into different
industries. Healthcare is also one of the industries willing to use big data
platforms, so some big data analytics tools have been adopted in this field to
some extent. Medical imaging, which is a pillar of diagnostic healthcare, deals
with a high volume of data collection and processing. A huge amount of 3D
and 4D images are acquired in different forms and resolutions using a variety
of medical imaging modalities. Preprocessing and analyzing imaging data is
currently a long, costly and time-consuming process. However, not many big
data platforms have been provided or redesigned for medical imaging purposes
because of some restrictions such as data format. In this paper, we designed,
developed and successfully tested a new pipeline for medical imaging data
(especially functional magnetic resonance imaging - fMRI) using Big Data Spark
/ PySpark platform on a single node which allows us to read and load imaging
data, convert them to Resilient Distributed Datasets in order to manipulate them
and perform in-memory data processing in parallel, and convert the final results to
imaging format while the pipeline provides an option to store the results in
other formats such as data frame. Using this new solution and pipeline, we
repeated our previous works in which we extracted brain networks from fMRI data
using template matching and sum of squared differences (SSD) method. The final
results revealed our Spark (PySpark) based solution improved the performance
(in terms of processing time) around 4 times on a single node compared to the
previous work developed in Python.
| no_new_dataset | 0.942082 |
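The record above extracts brain networks from fMRI volumes by template matching with a sum of squared differences (SSD) score, run in parallel over Spark RDDs. The fragment below is a minimal PySpark-flavoured sketch of that idea under assumed data shapes (a list of equally-sized 3D NumPy volumes and one template); it is not the authors' pipeline.

```python
import numpy as np
from pyspark.sql import SparkSession

def ssd(volume, template):
    """Sum of squared differences between a volume and a template of the same shape."""
    return float(np.sum((volume - template) ** 2))

spark = SparkSession.builder.appName("fmri-ssd-sketch").getOrCreate()
sc = spark.sparkContext

# Assumed inputs: preprocessed 3D volumes and one network template (placeholders here).
volumes = [np.random.rand(8, 8, 8) for _ in range(100)]
template = np.random.rand(8, 8, 8)

# Distribute the volumes as an RDD and score each one against the template in parallel.
scores = sc.parallelize(volumes).map(lambda v: ssd(v, template)).collect()
best = int(np.argmin(scores))  # index of the volume that best matches the template
print(best, scores[best])
```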
1603.07094 | Kai Morino | Motohide Higaki, Kai Morino, Hiroshi Murata, Ryo Asaoka, and Kenji
Yamanishi | Predicting Glaucoma Visual Field Loss by Hierarchically Aggregating
Clustering-based Predictors | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study addresses the issue of predicting the glaucomatous visual field
loss from patient disease datasets. Our goal is to accurately predict the
progress of the disease in individual patients. As very few measurements are
available for each patient, it is difficult to produce good predictors for
individuals. A recently proposed clustering-based method enhances the power of
prediction using patient data with similar spatiotemporal patterns. Each
patient is categorized into a cluster of patients, and a predictive model is
constructed using all of the data in the class. Predictions are highly
dependent on the quality of clustering, but it is difficult to identify the
best clustering method. Thus, we propose a method for aggregating cluster-based
predictors to obtain better prediction accuracy than from a single
cluster-based prediction. Further, the method shows very high performance by
hierarchically aggregating experts generated from several cluster-based
methods. We use real datasets to demonstrate that our method performs
significantly better than conventional clustering-based and patient-wise
regression methods, because the hierarchical aggregating strategy has a
mechanism whereby good predictors in a small community can thrive.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 09:06:19 GMT"
}
] | 2016-03-24T00:00:00 | [
[
"Higaki",
"Motohide",
""
],
[
"Morino",
"Kai",
""
],
[
"Murata",
"Hiroshi",
""
],
[
"Asaoka",
"Ryo",
""
],
[
"Yamanishi",
"Kenji",
""
]
] | TITLE: Predicting Glaucoma Visual Field Loss by Hierarchically Aggregating
Clustering-based Predictors
ABSTRACT: This study addresses the issue of predicting the glaucomatous visual field
loss from patient disease datasets. Our goal is to accurately predict the
progress of the disease in individual patients. As very few measurements are
available for each patient, it is difficult to produce good predictors for
individuals. A recently proposed clustering-based method enhances the power of
prediction using patient data with similar spatiotemporal patterns. Each
patient is categorized into a cluster of patients, and a predictive model is
constructed using all of the data in the class. Predictions are highly
dependent on the quality of clustering, but it is difficult to identify the
best clustering method. Thus, we propose a method for aggregating cluster-based
predictors to obtain better prediction accuracy than from a single
cluster-based prediction. Further, the method shows very high performance by
hierarchically aggregating experts generated from several cluster-based
methods. We use real datasets to demonstrate that our method performs
significantly better than conventional clustering-based and patient-wise
regression methods, because the hierarchical aggregating strategy has a
mechanism whereby good predictors in a small community can thrive.
| no_new_dataset | 0.94801 |
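The record above combines predictors built from several different clusterings so that good predictors in a small community can dominate the final estimate. Below is a minimal sketch of one simple aggregation rule (inverse validation-error weighting of experts); the expert interface and the weighting scheme are assumptions for illustration, not the paper's hierarchical method.

```python
import numpy as np

def aggregate_experts(experts, X_val, y_val, X_test, eps=1e-8):
    """Weighted average of cluster-based experts, each weighted by its
    inverse mean-squared error on a validation split."""
    errors = np.array([np.mean((e.predict(X_val) - y_val) ** 2) for e in experts])
    weights = 1.0 / (errors + eps)
    weights /= weights.sum()
    predictions = np.stack([e.predict(X_test) for e in experts])  # (n_experts, n_test)
    return weights @ predictions
```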
1603.07141 | Francesc Moreno-Noguer | Arnau Ramisa, Fei Yan, Francesc Moreno-Noguer and Krystian Mikolajczyk | BreakingNews: Article Annotation by Image and Text Processing | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building upon recent Deep Neural Network architectures, current approaches
lying in the intersection of computer vision and natural language processing
have achieved unprecedented breakthroughs in tasks like automatic captioning or
image retrieval. Most of these learning methods, though, rely on large training
sets of images associated with human annotations that specifically describe the
visual content. In this paper we propose to go a step further and explore the
more complex cases where textual descriptions are loosely related to the
images. We focus on the particular domain of News articles in which the textual
content often expresses connotative and ambiguous relations that are only
suggested but not directly inferred from images. We introduce new deep learning
methods that address source detection, popularity prediction, article
illustration and geolocation of articles. An adaptive CNN architecture is
proposed, that shares most of the structure for all the tasks, and is suitable
for multitask and transfer learning. Deep Canonical Correlation Analysis is
deployed for article illustration, and a new loss function based on Great
Circle Distance is proposed for geolocation. Furthermore, we present
BreakingNews, a novel dataset with approximately 100K news articles including
images, text and captions, and enriched with heterogeneous meta-data (such as
GPS coordinates and popularity metrics). We show this dataset to be appropriate
to explore all aforementioned problems, for which we provide a baseline
performance using various Deep Learning architectures, and different
representations of the textual and visual features. We report very promising
results and bring to light several limitations of current state-of-the-art in
this kind of domain, which we hope will help spur progress in the field.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 11:30:24 GMT"
}
] | 2016-03-24T00:00:00 | [
[
"Ramisa",
"Arnau",
""
],
[
"Yan",
"Fei",
""
],
[
"Moreno-Noguer",
"Francesc",
""
],
[
"Mikolajczyk",
"Krystian",
""
]
] | TITLE: BreakingNews: Article Annotation by Image and Text Processing
ABSTRACT: Building upon recent Deep Neural Network architectures, current approaches
lying in the intersection of computer vision and natural language processing
have achieved unprecedented breakthroughs in tasks like automatic captioning or
image retrieval. Most of these learning methods, though, rely on large training
sets of images associated with human annotations that specifically describe the
visual content. In this paper we propose to go a step further and explore the
more complex cases where textual descriptions are loosely related to the
images. We focus on the particular domain of News articles in which the textual
content often expresses connotative and ambiguous relations that are only
suggested but not directly inferred from images. We introduce new deep learning
methods that address source detection, popularity prediction, article
illustration and geolocation of articles. An adaptive CNN architecture is
proposed, that shares most of the structure for all the tasks, and is suitable
for multitask and transfer learning. Deep Canonical Correlation Analysis is
deployed for article illustration, and a new loss function based on Great
Circle Distance is proposed for geolocation. Furthermore, we present
BreakingNews, a novel dataset with approximately 100K news articles including
images, text and captions, and enriched with heterogeneous meta-data (such as
GPS coordinates and popularity metrics). We show this dataset to be appropriate
to explore all aforementioned problems, for which we provide a baseline
performance using various Deep Learning architectures, and different
representations of the textual and visual features. We report very promising
results and bring to light several limitations of current state-of-the-art in
this kind of domain, which we hope will help spur progress in the field.
| new_dataset | 0.965086 |
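The record above proposes a geolocation loss based on Great Circle Distance. A hedged sketch of that distance (the haversine formula, in kilometres) is given below; the exact loss used in the paper may differ, so treat this only as the underlying geometry.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_distance(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance between two (lat, lon) points given in degrees."""
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# Example: distance between London and Paris (roughly 344 km).
print(great_circle_distance(51.5074, -0.1278, 48.8566, 2.3522))
```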
1603.07173 | Veronica Morfi | Veronica Morfi, Dan Stowell | Deductive Refinement of Species Labelling in Weakly Labelled Birdsong
Recordings | 11 pages, 1 figure | null | null | null | cs.SD | http://creativecommons.org/licenses/by/4.0/ | Many approaches have been used in bird species classification from their
sound in order to provide labels for the whole of a recording. However, a more
precise classification of each bird vocalization would be of great importance
to the use and management of sound archives and bird monitoring. In this work,
we introduce a technique that, using a two-step process, can first automatically
detect all bird vocalizations and then, with the use of 'weakly' labelled
recordings, classify them. Evaluations of our proposed method show that it
achieves a correct classification of 61% when used in a synthetic dataset, and
up to 89% when the synthetic dataset only consists of vocalizations larger than
1000 pixels.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 13:21:12 GMT"
}
] | 2016-03-24T00:00:00 | [
[
"Morfi",
"Veronica",
""
],
[
"Stowell",
"Dan",
""
]
] | TITLE: Deductive Refinement of Species Labelling in Weakly Labelled Birdsong
Recordings
ABSTRACT: Many approaches have been used in bird species classification from their
sound in order to provide labels for the whole of a recording. However, a more
precise classification of each bird vocalization would be of great importance
to the use and management of sound archives and bird monitoring. In this work,
we introduce a technique that, using a two-step process, can first automatically
detect all bird vocalizations and then, with the use of 'weakly' labelled
recordings, classify them. Evaluations of our proposed method show that it
achieves a correct classification of 61% when used in a synthetic dataset, and
up to 89% when the synthetic dataset only consists of vocalizations larger than
1000 pixels.
| no_new_dataset | 0.943556 |
1603.07234 | Rahaf Aljundi | Rahaf Aljundi and Tinne Tuytelaars | Lightweight Unsupervised Domain Adaptation by Convolutional Filter
Reconstruction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end learning methods have achieved impressive results in many areas of
computer vision. At the same time, these methods still suffer from a
degradation in performance when testing on new datasets that stem from a
different distribution. This is known as the domain shift effect. Recently
proposed adaptation methods focus on retraining the network parameters.
However, this requires access to all (labeled) source data, a large amount of
(unlabeled) target data, and plenty of computational resources. In this work,
we propose a lightweight alternative, that allows adapting to the target domain
based on a limited number of target samples in a matter of minutes rather than
hours, days or even weeks. To this end, we first analyze the output of each
convolutional layer from a domain adaptation perspective. Surprisingly, we find
that already at the very first layer, domain shift effects pop up. We then
propose a new domain adaptation method, where first layer convolutional filters
that are badly affected by the domain shift are reconstructed based on less
affected ones. This improves the performance of the deep network on various
benchmark datasets.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 15:28:29 GMT"
}
] | 2016-03-24T00:00:00 | [
[
"Aljundi",
"Rahaf",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] | TITLE: Lightweight Unsupervised Domain Adaptation by Convolutional Filter
Reconstruction
ABSTRACT: End-to-end learning methods have achieved impressive results in many areas of
computer vision. At the same time, these methods still suffer from a
degradation in performance when testing on new datasets that stem from a
different distribution. This is known as the domain shift effect. Recently
proposed adaptation methods focus on retraining the network parameters.
However, this requires access to all (labeled) source data, a large amount of
(unlabeled) target data, and plenty of computational resources. In this work,
we propose a lightweight alternative, that allows adapting to the target domain
based on a limited number of target samples in a matter of minutes rather than
hours, days or even weeks. To this end, we first analyze the output of each
convolutional layer from a domain adaptation perspective. Surprisingly, we find
that already at the very first layer, domain shift effects pop up. We then
propose a new domain adaptation method, where first layer convolutional filters
that are badly affected by the domain shift are reconstructed based on less
affected ones. This improves the performance of the deep network on various
benchmark datasets.
| no_new_dataset | 0.949012 |
1507.02264 | Scott Dawson | Scott T. M. Dawson, Maziar S. Hemati, Matthew O. Williams, Clarence W.
Rowley | Characterizing and correcting for the effect of sensor noise in the
dynamic mode decomposition | null | null | 10.1007/s00348-016-2127-7 | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic mode decomposition (DMD) provides a practical means of extracting
insightful dynamical information from fluids datasets. Like any data processing
technique, DMD's usefulness is limited by its ability to extract real and
accurate dynamical features from noise-corrupted data. Here we show
analytically that DMD is biased to sensor noise, and quantify how this bias
depends on the size and noise level of the data. We present three modifications
to DMD that can be used to remove this bias: (i) a direct correction of the
identified bias using known noise properties, (ii) combining the results of
performing DMD forwards and backwards in time, and (iii) a total
least-squares-inspired algorithm. We discuss the relative merits of each
algorithm, and demonstrate the performance of these modifications on a range of
synthetic, numerical, and experimental datasets. We further compare our
modified DMD algorithms with other variants proposed in recent literature.
| [
{
"version": "v1",
"created": "Wed, 8 Jul 2015 19:24:10 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Aug 2015 17:19:58 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Jan 2016 00:39:42 GMT"
}
] | 2016-03-23T00:00:00 | [
[
"Dawson",
"Scott T. M.",
""
],
[
"Hemati",
"Maziar S.",
""
],
[
"Williams",
"Matthew O.",
""
],
[
"Rowley",
"Clarence W.",
""
]
] | TITLE: Characterizing and correcting for the effect of sensor noise in the
dynamic mode decomposition
ABSTRACT: Dynamic mode decomposition (DMD) provides a practical means of extracting
insightful dynamical information from fluids datasets. Like any data processing
technique, DMD's usefulness is limited by its ability to extract real and
accurate dynamical features from noise-corrupted data. Here we show
analytically that DMD is biased to sensor noise, and quantify how this bias
depends on the size and noise level of the data. We present three modifications
to DMD that can be used to remove this bias: (i) a direct correction of the
identified bias using known noise properties, (ii) combining the results of
performing DMD forwards and backwards in time, and (iii) a total
least-squares-inspired algorithm. We discuss the relative merits of each
algorithm, and demonstrate the performance of these modifications on a range of
synthetic, numerical, and experimental datasets. We further compare our
modified DMD algorithms with other variants proposed in recent literature.
| no_new_dataset | 0.946646 |
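The record above removes sensor-noise bias in DMD partly by combining fits computed forwards and backwards in time. The sketch below illustrates that forward-backward idea for the linear operator estimated from snapshot matrices; it follows the commonly cited forward-backward formula and is a simplified sketch (no rank truncation or mode extraction), not the full algorithm in the paper.

```python
import numpy as np
from numpy.linalg import pinv, inv
from scipy.linalg import sqrtm

def forward_backward_operator(X, Y):
    """Forward-backward estimate of the propagator A in Y ~= A X.
    A_f is fitted forwards in time and A_b backwards; combining them
    reduces the bias that sensor noise induces in a plain forward fit."""
    A_f = Y @ pinv(X)            # forward:  x_{k+1} ~= A_f x_k
    A_b = X @ pinv(Y)            # backward: x_k ~= A_b x_{k+1}
    return np.real(sqrtm(A_f @ inv(A_b)))

# Toy example: snapshots of a known 2x2 linear system corrupted by sensor noise.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, -0.2], [0.1, 0.95]])
states = [np.array([1.0, 0.0])]
for _ in range(30):
    states.append(A_true @ states[-1])
data = np.array(states).T + 0.01 * rng.standard_normal((2, 31))
X, Y = data[:, :-1], data[:, 1:]
print(forward_backward_operator(X, Y))  # should sit close to A_true
```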
1512.01881 | Min Sun | Cheng-Sheng Chan, Shou-Zhong Chen, Pei-Xuan Xie, Chiung-Chih Chang,
Min Sun | Recognition from Hand Cameras | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the study of a wrist-mounted camera system (referred to as
HandCam) for recognizing activities of hands. HandCam has two unique properties
as compared to egocentric systems (referred to as HeadCam): (1) it avoids the
need to detect hands; (2) it more consistently observes the activities of
hands. By taking advantage of these properties, we propose a
deep-learning-based method to recognize hand states (free v.s. active hands,
hand gestures, object categories), and discover object categories. Moreover, we
propose a novel two-streams deep network to further take advantage of both
HandCam and HeadCam. We have collected a new synchronized HandCam and HeadCam
dataset with 20 videos captured in three scenes for hand states recognition.
Experiments show that our HandCam system consistently outperforms a
deep-learning-based HeadCam method (with estimated manipulation regions) and a
dense-trajectory-based HeadCam method in all tasks. We also show that HandCam
videos captured by different users can be easily aligned to improve free v.s.
active recognition accuracy (3.3% improvement) in across-scenes use case.
Moreover, we observe that finetuning Convolutional Neural Network consistently
improves accuracy. Finally, our novel two-streams deep network combining
HandCam and HeadCam features achieves the best performance in four out of five
tasks. With more data, we believe a joint HandCam and HeadCam system can
robustly log hand states in daily life.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 02:06:29 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Dec 2015 16:04:40 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Mar 2016 09:12:04 GMT"
}
] | 2016-03-23T00:00:00 | [
[
"Chan",
"Cheng-Sheng",
""
],
[
"Chen",
"Shou-Zhong",
""
],
[
"Xie",
"Pei-Xuan",
""
],
[
"Chang",
"Chiung-Chih",
""
],
[
"Sun",
"Min",
""
]
] | TITLE: Recognition from Hand Cameras
ABSTRACT: We revisit the study of a wrist-mounted camera system (referred to as
HandCam) for recognizing activities of hands. HandCam has two unique properties
as compared to egocentric systems (referred to as HeadCam): (1) it avoids the
need to detect hands; (2) it more consistently observes the activities of
hands. By taking advantage of these properties, we propose a
deep-learning-based method to recognize hand states (free v.s. active hands,
hand gestures, object categories), and discover object categories. Moreover, we
propose a novel two-streams deep network to further take advantage of both
HandCam and HeadCam. We have collected a new synchronized HandCam and HeadCam
dataset with 20 videos captured in three scenes for hand states recognition.
Experiments show that our HandCam system consistently outperforms a
deep-learning-based HeadCam method (with estimated manipulation regions) and a
dense-trajectory-based HeadCam method in all tasks. We also show that HandCam
videos captured by different users can be easily aligned to improve free v.s.
active recognition accuracy (3.3% improvement) in across-scenes use case.
Moreover, we observe that finetuning Convolutional Neural Network consistently
improves accuracy. Finally, our novel two-streams deep network combining
HandCam and HeadCam features achieves the best performance in four out of five
tasks. With more data, we believe a joint HandCam and HeadCam system can
robustly log hand states in daily life.
| new_dataset | 0.956917 |
1512.06110 | Manaal Faruqui | Manaal Faruqui and Yulia Tsvetkov and Graham Neubig and Chris Dyer | Morphological Inflection Generation Using Character Sequence to Sequence
Learning | Proceedings of NAACL 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Morphological inflection generation is the task of generating the inflected
form of a given lemma corresponding to a particular linguistic transformation.
We model the problem of inflection generation as a character sequence to
sequence learning problem and present a variant of the neural encoder-decoder
model for solving it. Our model is language independent and can be trained in
both supervised and semi-supervised settings. We evaluate our system on seven
datasets of morphologically rich languages and achieve either better or
comparable results to existing state-of-the-art models of inflection
generation.
| [
{
"version": "v1",
"created": "Fri, 18 Dec 2015 20:48:26 GMT"
},
{
"version": "v2",
"created": "Thu, 31 Dec 2015 17:23:32 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Mar 2016 01:02:01 GMT"
}
] | 2016-03-23T00:00:00 | [
[
"Faruqui",
"Manaal",
""
],
[
"Tsvetkov",
"Yulia",
""
],
[
"Neubig",
"Graham",
""
],
[
"Dyer",
"Chris",
""
]
] | TITLE: Morphological Inflection Generation Using Character Sequence to Sequence
Learning
ABSTRACT: Morphological inflection generation is the task of generating the inflected
form of a given lemma corresponding to a particular linguistic transformation.
We model the problem of inflection generation as a character sequence to
sequence learning problem and present a variant of the neural encoder-decoder
model for solving it. Our model is language independent and can be trained in
both supervised and semi-supervised settings. We evaluate our system on seven
datasets of morphologically rich languages and achieve either better or
comparable results to existing state-of-the-art models of inflection
generation.
| no_new_dataset | 0.954942 |
1601.01651 | Antonino Miceli | Daikang Yan, Thomas Cecil, Lisa Gades, Chris Jacobsen, Timothy Madden,
Antonino Miceli | Processing of X-ray Microcalorimeter Data with Pulse Shape Variation
using Principal Component Analysis | Accepted for publication in J. Low Temperature Physics, Low
Temperature Detectors 16 (LTD-16) conference | null | 10.1007/s10909-016-1480-5 | null | physics.ins-det astro-ph.IM physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method using principal component analysis (PCA) to process x-ray
pulses with severe shape variation where traditional optimal filter methods
fail. We demonstrate that PCA is able to noise-filter and extract energy
information from x-ray pulses despite their different shapes. We apply this
method to a dataset from an x-ray thermal kinetic inductance detector which has
severe pulse shape variation arising from position-dependent absorption.
| [
{
"version": "v1",
"created": "Thu, 7 Jan 2016 20:00:01 GMT"
}
] | 2016-03-23T00:00:00 | [
[
"Yan",
"Daikang",
""
],
[
"Cecil",
"Thomas",
""
],
[
"Gades",
"Lisa",
""
],
[
"Jacobsen",
"Chris",
""
],
[
"Madden",
"Timothy",
""
],
[
"Miceli",
"Antonino",
""
]
] | TITLE: Processing of X-ray Microcalorimeter Data with Pulse Shape Variation
using Principal Component Analysis
ABSTRACT: We present a method using principal component analysis (PCA) to process x-ray
pulses with severe shape variation where traditional optimal filter methods
fail. We demonstrate that PCA is able to noise-filter and extract energy
information from x-ray pulses despite their different shapes. We apply this
method to a dataset from an x-ray thermal kinetic inductance detector which has
severe pulse shape variation arising from position-dependent absorption.
| no_new_dataset | 0.954478 |
1603.06169 | Alexander G\'omez Villa | Alexander Gomez, Augusto Salazar and Francisco Vargas | Towards Automatic Wild Animal Monitoring: Identification of Animal
Species in Camera-trap Images using Very Deep Convolutional Neural Networks | Submitted to ECCV16 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-intrusive monitoring of animals in the wild is possible using a camera
trapping framework, which uses cameras triggered by sensors to take a burst of
images of animals in their habitat. However, the camera trapping framework produces
a high volume of data (on the order of thousands or millions of images), which
must be analyzed by a human expert. In this work, a method for animal species
identification in the wild using very deep convolutional neural networks is
presented. Multiple versions of the Snapshot Serengeti dataset were used in
order to probe the ability of the method to cope with different challenges that
camera-trap images demand. The method reached 88.9% of accuracy in Top-1 and
98.1% in Top-5 in the evaluation set using a residual network topology. Also,
the results show that the proposed method outperforms previous approximations
and proves that recognition in camera-trap images can be automated.
| [
{
"version": "v1",
"created": "Sun, 20 Mar 2016 00:47:46 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Mar 2016 00:53:37 GMT"
}
] | 2016-03-23T00:00:00 | [
[
"Gomez",
"Alexander",
""
],
[
"Salazar",
"Augusto",
""
],
[
"Vargas",
"Francisco",
""
]
] | TITLE: Towards Automatic Wild Animal Monitoring: Identification of Animal
Species in Camera-trap Images using Very Deep Convolutional Neural Networks
ABSTRACT: Non-intrusive monitoring of animals in the wild is possible using a camera
trapping framework, which uses cameras triggered by sensors to take a burst of
images of animals in their habitat. However, the camera trapping framework produces
a high volume of data (on the order of thousands or millions of images), which
must be analyzed by a human expert. In this work, a method for animal species
identification in the wild using very deep convolutional neural networks is
presented. Multiple versions of the Snapshot Serengeti dataset were used in
order to probe the ability of the method to cope with different challenges that
camera-trap images demand. The method reached 88.9% of accuracy in Top-1 and
98.1% in Top-5 in the evaluation set using a residual network topology. Also,
the results show that the proposed method outperforms previous approximations
and proves that recognition in camera-trap images can be automated.
| no_new_dataset | 0.937726 |
1603.06655 | Zhen Dong | Zhen Dong, Su Jia, Chi Zhang, Mingtao Pei | Input Aggregated Network for Face Video Representation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, deep neural networks have shown promising performance in face image
recognition. The inputs of most networks are face images, and there is hardly
any work reported in the literature on networks with face videos as input. To
sufficiently discover the useful information contained in face videos, we
present a novel network architecture called input aggregated network which is
able to learn fixed-length representations for variable-length face videos. To
accomplish this goal, an aggregation unit is designed to model a face video
with various frames as a point on a Riemannian manifold, and the mapping unit
aims at mapping the point into high-dimensional space where face videos
belonging to the same subject are close-by and others are distant. These two
units together with the frame representation unit build an end-to-end learning
system which can learn representations of face videos for the specific tasks.
Experiments on two public face video datasets demonstrate the effectiveness of
the proposed network.
| [
{
"version": "v1",
"created": "Tue, 22 Mar 2016 01:27:50 GMT"
}
] | 2016-03-23T00:00:00 | [
[
"Dong",
"Zhen",
""
],
[
"Jia",
"Su",
""
],
[
"Zhang",
"Chi",
""
],
[
"Pei",
"Mingtao",
""
]
] | TITLE: Input Aggregated Network for Face Video Representation
ABSTRACT: Recently, deep neural networks have shown promising performance in face image
recognition. The inputs of most networks are face images, and there is hardly
any work reported in the literature on networks with face videos as input. To
sufficiently discover the useful information contained in face videos, we
present a novel network architecture called input aggregated network which is
able to learn fixed-length representations for variable-length face videos. To
accomplish this goal, an aggregation unit is designed to model a face video
with various frames as a point on a Riemannian manifold, and the mapping unit
aims at mapping the point into high-dimensional space where face videos
belonging to the same subject are close-by and others are distant. These two
units together with the frame representation unit build an end-to-end learning
system which can learn representations of face videos for the specific tasks.
Experiments on two public face video datasets demonstrate the effectiveness of
the proposed network.
| no_new_dataset | 0.949576 |
1603.06759 | Yanwei Pang | Yanwei Pang, Manli Sun, Xiaoheng Jiang, Xuelong Li | Convolution in Convolution for Network in Network | A method of Convolutional Neural Networks | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network in Network (NiN) is an effective instance and an important extension
of Convolutional Neural Network (CNN) consisting of alternating convolutional
layers and pooling layers. Instead of using a linear filter for convolution,
NiN utilizes shallow MultiLayer Perceptron (MLP), a nonlinear function, to
replace the linear filter. Because of the powerfulness of MLP and $ 1\times 1 $
convolutions in spatial domain, NiN has stronger ability of feature
representation and hence results in better recognition rate. However, MLP
itself consists of fully connected layers which give rise to a large number of
parameters. In this paper, we propose to replace dense shallow MLP with sparse
shallow MLP. One or more layers of the sparse shallow MLP are sparsely connected
in the channel dimension or channel-spatial domain. The proposed method is
implemented by applying unshared convolution across the channel dimension and
applying shared convolution across the spatial dimension in some computational
layers. The proposed method is called CiC. Experimental results on the CIFAR10
dataset, augmented CIFAR10 dataset, and CIFAR100 dataset demonstrate the
effectiveness of the proposed CiC method.
| [
{
"version": "v1",
"created": "Tue, 22 Mar 2016 12:33:11 GMT"
}
] | 2016-03-23T00:00:00 | [
[
"Pang",
"Yanwei",
""
],
[
"Sun",
"Manli",
""
],
[
"Jiang",
"Xiaoheng",
""
],
[
"Li",
"Xuelong",
""
]
] | TITLE: Convolution in Convolution for Network in Network
ABSTRACT: Network in Network (NiN) is an effective instance and an important extension
of Convolutional Neural Network (CNN) consisting of alternating convolutional
layers and pooling layers. Instead of using a linear filter for convolution,
NiN utilizes shallow MultiLayer Perceptron (MLP), a nonlinear function, to
replace the linear filter. Because of the powerfulness of MLP and $ 1\times 1 $
convolutions in spatial domain, NiN has stronger ability of feature
representation and hence results in better recognition rate. However, MLP
itself consists of fully connected layers which give rise to a large number of
parameters. In this paper, we propose to replace dense shallow MLP with sparse
shallow MLP. One or more layers of the sparse shallow MLP are sparsely connected
in the channel dimension or channel-spatial domain. The proposed method is
implemented by applying unshared convolution across the channel dimension and
applying shared convolution across the spatial dimension in some computational
layers. The proposed method is called CiC. Experimental results on the CIFAR10
dataset, augmented CIFAR10 dataset, and CIFAR100 dataset demonstrate the
effectiveness of the proposed CiC method.
| no_new_dataset | 0.952353 |
1603.06829 | Otkrist Gupta | Otkrist Gupta, Dan Raviv and Ramesh Raskar | Multi-velocity neural networks for gesture recognition in videos | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new action recognition deep neural network which adaptively
learns the best action velocities in addition to the classification. While deep
neural networks have reached maturity for image understanding tasks, we are
still exploring network topologies and features to handle the richer
environment of video clips. Here, we tackle the problem of multiple velocities
in action recognition, and provide state-of-the-art results for gesture
recognition, on known and newly collected datasets. We further provide the
training steps for our semi-supervised network, suited to learn from huge
unlabeled datasets with only a fraction of labeled examples.
| [
{
"version": "v1",
"created": "Tue, 22 Mar 2016 15:26:26 GMT"
}
] | 2016-03-23T00:00:00 | [
[
"Gupta",
"Otkrist",
""
],
[
"Raviv",
"Dan",
""
],
[
"Raskar",
"Ramesh",
""
]
] | TITLE: Multi-velocity neural networks for gesture recognition in videos
ABSTRACT: We present a new action recognition deep neural network which adaptively
learns the best action velocities in addition to the classification. While deep
neural networks have reached maturity for image understanding tasks, we are
still exploring network topologies and features to handle the richer
environment of video clips. Here, we tackle the problem of multiple velocities
in action recognition, and provide state-of-the-art results for gesture
recognition, on known and newly collected datasets. We further provide the
training steps for our semi-supervised network, suited to learn from huge
unlabeled datasets with only a fraction of labeled examples.
| new_dataset | 0.943867 |
1603.06861 | Anastasios Kyrillidis | Vatsal Shah, Megasthenis Asteris, Anastasios Kyrillidis, Sujay
Sanghavi | Trading-off variance and complexity in stochastic gradient descent | 14 pages, 13 figures, first edition on 9th of October 2015 | null | null | null | stat.ML cs.IT cs.LG math.IT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic gradient descent is the method of choice for large-scale machine
learning problems, by virtue of its light complexity per iteration. However, it
lags behind its non-stochastic counterparts with respect to the convergence
rate, due to high variance introduced by the stochastic updates. The popular
Stochastic Variance-Reduced Gradient (SVRG) method mitigates this shortcoming,
introducing a new update rule which requires infrequent passes over the entire
input dataset to compute the full-gradient.
In this work, we propose CheapSVRG, a stochastic variance-reduction
optimization scheme. Our algorithm is similar to SVRG but instead of the full
gradient, it uses a surrogate which can be efficiently computed on a small
subset of the input data. It achieves a linear convergence rate ---up to some
error level, depending on the nature of the optimization problem---and features
a trade-off between the computational complexity and the convergence rate.
Empirical evaluation shows that CheapSVRG performs at least competitively
compared to the state of the art.
| [
{
"version": "v1",
"created": "Tue, 22 Mar 2016 16:34:26 GMT"
}
] | 2016-03-23T00:00:00 | [
[
"Shah",
"Vatsal",
""
],
[
"Asteris",
"Megasthenis",
""
],
[
"Kyrillidis",
"Anastasios",
""
],
[
"Sanghavi",
"Sujay",
""
]
] | TITLE: Trading-off variance and complexity in stochastic gradient descent
ABSTRACT: Stochastic gradient descent is the method of choice for large-scale machine
learning problems, by virtue of its light complexity per iteration. However, it
lags behind its non-stochastic counterparts with respect to the convergence
rate, due to high variance introduced by the stochastic updates. The popular
Stochastic Variance-Reduced Gradient (SVRG) method mitigates this shortcoming,
introducing a new update rule which requires infrequent passes over the entire
input dataset to compute the full-gradient.
In this work, we propose CheapSVRG, a stochastic variance-reduction
optimization scheme. Our algorithm is similar to SVRG but instead of the full
gradient, it uses a surrogate which can be efficiently computed on a small
subset of the input data. It achieves a linear convergence rate ---up to some
error level, depending on the nature of the optimization problem---and features
a trade-off between the computational complexity and the convergence rate.
Empirical evaluation shows that CheapSVRG performs at least competitively
compared to the state of the art.
| no_new_dataset | 0.942612 |
1402.5874 | Mohammad Ghasemi Hamed | Mohammad Ghasemi Hamed, Mathieu Serrurier, Nicolas Durand | Predictive Interval Models for Non-parametric Regression | This paper has been withdrawn by the authors due to multiple errors
in the formulations and equations | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Having a regression model, we are interested in finding two-sided intervals
that are guaranteed to contain at least a desired proportion of the conditional
distribution of the response variable given a specific combination of
predictors. We name such intervals predictive intervals. This work presents a
new method to find two-sided predictive intervals for non-parametric least
squares regression without the homoscedasticity assumption. Our predictive
intervals are built by using tolerance intervals on prediction errors in the
query point's neighborhood. We proposed a predictive interval model test and we
also used it as a constraint in our hyper-parameter tuning algorithm. This
gives an algorithm that finds the smallest reliable predictive intervals for a
given dataset. We also introduce a measure for comparing different interval
prediction methods yielding intervals having different size and coverage. These
experiments show that our methods are more reliable, effective and precise than
other interval prediction methods.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2014 16:16:17 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Mar 2016 10:56:40 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Hamed",
"Mohammad Ghasemi",
""
],
[
"Serrurier",
"Mathieu",
""
],
[
"Durand",
"Nicolas",
""
]
] | TITLE: Predictive Interval Models for Non-parametric Regression
ABSTRACT: Having a regression model, we are interested in finding two-sided intervals
that are guaranteed to contain at least a desired proportion of the conditional
distribution of the response variable given a specific combination of
predictors. We name such intervals predictive intervals. This work presents a
new method to find two-sided predictive intervals for non-parametric least
squares regression without the homoscedasticity assumption. Our predictive
intervals are built by using tolerance intervals on prediction errors in the
query point's neighborhood. We propose a predictive interval model test and we
also use it as a constraint in our hyper-parameter tuning algorithm. This
gives an algorithm that finds the smallest reliable predictive intervals for a
given dataset. We also introduce a measure for comparing different interval
prediction methods yielding intervals having different size and coverage. These
experiments show that our methods are more reliable, effective and precise than
other interval prediction methods.
| no_new_dataset | 0.946051 |
1503.00164 | Yuan Yao | Braxton Osting and Jiechao Xiong and Qianqian Xu and Yuan Yao | Analysis of Crowdsourced Sampling Strategies for HodgeRank with Sparse
Random Graphs | null | Applied and Computational Harmonic Analysis, 2016 | 10.1016/j.acha.2016.03.007 | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowdsourcing platforms are now extensively used for conducting subjective
pairwise comparison studies. In this setting, a pairwise comparison dataset is
typically gathered via random sampling, either \emph{with} or \emph{without}
replacement. In this paper, we use tools from random graph theory to analyze
these two random sampling methods for the HodgeRank estimator. Using the
Fiedler value of the graph as a measurement for estimator stability
(informativeness), we provide a new estimate of the Fiedler value for these two
random graph models. In the asymptotic limit as the number of vertices tends to
infinity, we prove the validity of the estimate. Based on our findings, for a
small number of items to be compared, we recommend a two-stage sampling
strategy where a greedy sampling method is used initially and random sampling
\emph{without} replacement is used in the second stage. When a large number of
items is to be compared, we recommend random sampling with replacement as this
is computationally inexpensive and trivially parallelizable. Experiments on
synthetic and real-world datasets support our analysis.
| [
{
"version": "v1",
"created": "Sat, 28 Feb 2015 18:32:45 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Mar 2016 11:47:10 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Osting",
"Braxton",
""
],
[
"Xiong",
"Jiechao",
""
],
[
"Xu",
"Qianqian",
""
],
[
"Yao",
"Yuan",
""
]
] | TITLE: Analysis of Crowdsourced Sampling Strategies for HodgeRank with Sparse
Random Graphs
ABSTRACT: Crowdsourcing platforms are now extensively used for conducting subjective
pairwise comparison studies. In this setting, a pairwise comparison dataset is
typically gathered via random sampling, either \emph{with} or \emph{without}
replacement. In this paper, we use tools from random graph theory to analyze
these two random sampling methods for the HodgeRank estimator. Using the
Fiedler value of the graph as a measurement for estimator stability
(informativeness), we provide a new estimate of the Fiedler value for these two
random graph models. In the asymptotic limit as the number of vertices tends to
infinity, we prove the validity of the estimate. Based on our findings, for a
small number of items to be compared, we recommend a two-stage sampling
strategy where a greedy sampling method is used initially and random sampling
\emph{without} replacement is used in the second stage. When a large number of
items is to be compared, we recommend random sampling with replacement as this
is computationally inexpensive and trivially parallelizable. Experiments on
synthetic and real-world datasets support our analysis.
| no_new_dataset | 0.954816 |
1505.05914 | Huijuan Xu | Huijuan Xu, Subhashini Venugopalan, Vasili Ramanishka, Marcus
Rohrbach, Kate Saenko | A Multi-scale Multiple Instance Video Description Network | ICCV15 workshop on Closing the Loop Between Vision and Language | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating natural language descriptions for in-the-wild videos is a
challenging task. Most state-of-the-art methods for solving this problem borrow
existing deep convolutional neural network (CNN) architectures (AlexNet,
GoogLeNet) to extract a visual representation of the input video. However,
these deep CNN architectures are designed for single-label centered-positioned
object classification. While they generate strong semantic features, they have
no inherent structure allowing them to detect multiple objects of different
sizes and locations in the frame. Our paper tries to solve this problem by
integrating the base CNN into several fully convolutional neural networks
(FCNs) to form a multi-scale network that handles multiple receptive field
sizes in the original image. FCNs, previously applied to image segmentation,
can generate class heat-maps efficiently compared to sliding window mechanisms,
and can easily handle multiple scales. To further handle the ambiguity over
multiple objects and locations, we incorporate the Multiple Instance Learning
mechanism (MIL) to consider objects in different positions and at different
scales simultaneously. We integrate our multi-scale multi-instance architecture
with a sequence-to-sequence recurrent neural network to generate sentence
descriptions based on the visual representation. Ours is the first end-to-end
trainable architecture that is capable of multi-scale region processing.
Evaluation on a Youtube video dataset shows the advantage of our approach
compared to the original single-scale whole frame CNN model. Our flexible and
efficient architecture can potentially be extended to support other video
processing tasks.
| [
{
"version": "v1",
"created": "Thu, 21 May 2015 21:47:08 GMT"
},
{
"version": "v2",
"created": "Mon, 25 May 2015 16:28:56 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Mar 2016 02:27:58 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Xu",
"Huijuan",
""
],
[
"Venugopalan",
"Subhashini",
""
],
[
"Ramanishka",
"Vasili",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: A Multi-scale Multiple Instance Video Description Network
ABSTRACT: Generating natural language descriptions for in-the-wild videos is a
challenging task. Most state-of-the-art methods for solving this problem borrow
existing deep convolutional neural network (CNN) architectures (AlexNet,
GoogLeNet) to extract a visual representation of the input video. However,
these deep CNN architectures are designed for single-label centered-positioned
object classification. While they generate strong semantic features, they have
no inherent structure allowing them to detect multiple objects of different
sizes and locations in the frame. Our paper tries to solve this problem by
integrating the base CNN into several fully convolutional neural networks
(FCNs) to form a multi-scale network that handles multiple receptive field
sizes in the original image. FCNs, previously applied to image segmentation,
can generate class heat-maps efficiently compared to sliding window mechanisms,
and can easily handle multiple scales. To further handle the ambiguity over
multiple objects and locations, we incorporate the Multiple Instance Learning
mechanism (MIL) to consider objects in different positions and at different
scales simultaneously. We integrate our multi-scale multi-instance architecture
with a sequence-to-sequence recurrent neural network to generate sentence
descriptions based on the visual representation. Ours is the first end-to-end
trainable architecture that is capable of multi-scale region processing.
Evaluation on a Youtube video dataset shows the advantage of our approach
compared to the original single-scale whole frame CNN model. Our flexible and
efficient architecture can potentially be extended to support other video
processing tasks.
| no_new_dataset | 0.94868 |
1511.05234 | Huijuan Xu | Huijuan Xu and Kate Saenko | Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for
Visual Question Answering | include test-standard result on VQA full release (V1.0) dataset | null | null | null | cs.CV cs.AI cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3].
| [
{
"version": "v1",
"created": "Tue, 17 Nov 2015 01:00:04 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Mar 2016 03:06:58 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Xu",
"Huijuan",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for
Visual Question Answering
ABSTRACT: We address the problem of Visual Question Answering (VQA), which requires
joint image and language understanding to answer a question about a given
photograph. Recent approaches have applied deep image captioning methods based
on convolutional-recurrent networks to this problem, but have failed to model
spatial inference. To remedy this, we propose a model we call the Spatial
Memory Network and apply it to the VQA task. Memory networks are recurrent
neural networks with an explicit attention mechanism that selects certain parts
of the information stored in memory. Our Spatial Memory Network stores neuron
activations from different spatial regions of the image in its memory, and uses
the question to choose relevant regions for computing the answer, a process of
which constitutes a single "hop" in the network. We propose a novel spatial
attention architecture that aligns words with image patches in the first hop,
and obtain improved results by adding a second attention hop which considers
the whole question to choose visual evidence based on the results of the first
hop. To better understand the inference process learned by the network, we
design synthetic questions that specifically require spatial inference and
visualize the attention weights. We evaluate our model on two published visual
question answering datasets, DAQUAR [1] and VQA [2], and obtain improved
results compared to a strong deep baseline model (iBOWIMG) which concatenates
image and question features to predict the answer [3].
| no_new_dataset | 0.952926 |
1603.06060 | Abhijit Guha Roy | Abhijit Guha Roy and Debdoot Sheet | DASA: Domain Adaptation in Stacked Autoencoders using Systematic Dropout | Accepted at Asian Conference on Pattern Recognition 2015 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain adaptation deals with adapting behaviour of machine learning based
systems trained using samples in source domain to their deployment in target
domain where the statistics of samples in both domains are dissimilar. The task
of directly training or adapting a learner in the target domain is challenged
by lack of abundant labeled samples. In this paper we propose a technique for
domain adaptation in stacked autoencoder (SAE) based deep neural networks (DNN)
performed in two stages: (i) unsupervised weight adaptation using systematic
dropouts in mini-batch training, (ii) supervised fine-tuning with limited
number of labeled samples in target domain. We experimentally evaluate
performance in the problem of retinal vessel segmentation where the SAE-DNN is
trained using a large number of labeled samples in the source domain (DRIVE
dataset) and adapted using a smaller number of labeled samples in the target domain
(STARE dataset). The performance of SAE-DNN measured using $logloss$ in source
domain is $0.19$, without and with adaptation are $0.40$ and $0.18$, and $0.39$
when trained exclusively with limited samples in target domain. The area under
ROC curve is observed respectively as $0.90$, $0.86$, $0.92$ and $0.87$. The
high efficiency of vessel segmentation with DASA strongly substantiates our
claim.
| [
{
"version": "v1",
"created": "Sat, 19 Mar 2016 07:27:56 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Roy",
"Abhijit Guha",
""
],
[
"Sheet",
"Debdoot",
""
]
] | TITLE: DASA: Domain Adaptation in Stacked Autoencoders using Systematic Dropout
ABSTRACT: Domain adaptation deals with adapting behaviour of machine learning based
systems trained using samples in source domain to their deployment in target
domain where the statistics of samples in both domains are dissimilar. The task
of directly training or adapting a learner in the target domain is challenged
by lack of abundant labeled samples. In this paper we propose a technique for
domain adaptation in stacked autoencoder (SAE) based deep neural networks (DNN)
performed in two stages: (i) unsupervised weight adaptation using systematic
dropouts in mini-batch training, (ii) supervised fine-tuning with limited
number of labeled samples in target domain. We experimentally evaluate
performance in the problem of retinal vessel segmentation where the SAE-DNN is
trained using a large number of labeled samples in the source domain (DRIVE
dataset) and adapted using a smaller number of labeled samples in the target domain
(STARE dataset). The performance of SAE-DNN measured using $logloss$ in source
domain is $0.19$, without and with adaptation are $0.40$ and $0.18$, and $0.39$
when trained exclusively with limited samples in target domain. The area under
ROC curve is observed respectively as $0.90$, $0.86$, $0.92$ and $0.87$. The
high efficiency of vessel segmentation with DASA strongly substantiates our
claim.
| no_new_dataset | 0.948155 |
1603.06129 | Rishabh Singh | Sahil Bhatia and Rishabh Singh | Automated Correction for Syntax Errors in Programming Assignments using
Recurrent Neural Networks | null | null | null | null | cs.PL cs.AI cs.LG cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for automatically generating repair feedback for syntax
errors for introductory programming problems. Syntax errors constitute one of
the largest classes of errors (34%) in our dataset of student submissions
obtained from a MOOC course on edX. The previous techniques for generating
automated feedback on programming assignments have focused on functional
correctness and style considerations of student programs. These techniques
analyze the program AST of the program and then perform some dynamic and
symbolic analyses to compute repair feedback. Unfortunately, it is not possible
to generate ASTs for student programs with syntax errors and therefore the
previous feedback techniques are not applicable in repairing syntax errors.
We present a technique for providing feedback on syntax errors that uses
Recurrent neural networks (RNNs) to model syntactically valid token sequences.
Our approach is inspired from the recent work on learning language models from
Big Code (large code corpus). For a given programming assignment, we first
learn an RNN to model all valid token sequences using the set of syntactically
correct student submissions. Then, for a student submission with syntax errors,
we query the learnt RNN model with the prefix token sequence to predict token
sequences that can fix the error by either replacing or inserting the predicted
token sequence at the error location. We evaluate our technique on over 14,000
student submissions with syntax errors. Our technique can completely repair
31.69% (4501/14203) of submissions with syntax errors and in addition partially
correct 6.39% (908/14203) of the submissions.
| [
{
"version": "v1",
"created": "Sat, 19 Mar 2016 18:43:28 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Bhatia",
"Sahil",
""
],
[
"Singh",
"Rishabh",
""
]
] | TITLE: Automated Correction for Syntax Errors in Programming Assignments using
Recurrent Neural Networks
ABSTRACT: We present a method for automatically generating repair feedback for syntax
errors for introductory programming problems. Syntax errors constitute one of
the largest classes of errors (34%) in our dataset of student submissions
obtained from a MOOC course on edX. The previous techniques for generating
automated feedback on programming assignments have focused on functional
correctness and style considerations of student programs. These techniques
analyze the program AST of the program and then perform some dynamic and
symbolic analyses to compute repair feedback. Unfortunately, it is not possible
to generate ASTs for student programs with syntax errors and therefore the
previous feedback techniques are not applicable in repairing syntax errors.
We present a technique for providing feedback on syntax errors that uses
Recurrent neural networks (RNNs) to model syntactically valid token sequences.
Our approach is inspired from the recent work on learning language models from
Big Code (large code corpus). For a given programming assignment, we first
learn an RNN to model all valid token sequences using the set of syntactically
correct student submissions. Then, for a student submission with syntax errors,
we query the learnt RNN model with the prefix token sequence to predict token
sequences that can fix the error by either replacing or inserting the predicted
token sequence at the error location. We evaluate our technique on over 14,000
student submissions with syntax errors. Our technique can completely repair
31.69% (4501/14203) of submissions with syntax errors and in addition partially
correct 6.39% (908/14203) of the submissions.
| no_new_dataset | 0.762601 |
1603.06180 | Ronghang Hu | Ronghang Hu, Marcus Rohrbach, Trevor Darrell | Segmentation from Natural Language Expressions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we approach the novel problem of segmenting an image based on a
natural language expression. This is different from traditional semantic
segmentation over a predefined set of semantic classes, as e.g., the phrase
"two men sitting on the right bench" requires segmenting only the two people on
the right bench and no one standing or sitting on another bench. Previous
approaches suitable for this task were limited to a fixed set of categories
and/or rectangular regions. To produce pixelwise segmentation for the language
expression, we propose an end-to-end trainable recurrent and convolutional
network model that jointly learns to process visual and linguistic information.
In our model, a recurrent LSTM network is used to encode the referential
expression into a vector representation, and a fully convolutional network is
used to extract a spatial feature map from the image and output a spatial
response map for the target object. We demonstrate on a benchmark dataset that
our model can produce quality segmentation output from the natural language
expression, and outperforms baseline methods by a large margin.
| [
{
"version": "v1",
"created": "Sun, 20 Mar 2016 04:10:53 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Hu",
"Ronghang",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Segmentation from Natural Language Expressions
ABSTRACT: In this paper we approach the novel problem of segmenting an image based on a
natural language expression. This is different from traditional semantic
segmentation over a predefined set of semantic classes, as e.g., the phrase
"two men sitting on the right bench" requires segmenting only the two people on
the right bench and no one standing or sitting on another bench. Previous
approaches suitable for this task were limited to a fixed set of categories
and/or rectangular regions. To produce pixelwise segmentation for the language
expression, we propose an end-to-end trainable recurrent and convolutional
network model that jointly learns to process visual and linguistic information.
In our model, a recurrent LSTM network is used to encode the referential
expression into a vector representation, and a fully convolutional network is
used to extract a spatial feature map from the image and output a spatial
response map for the target object. We demonstrate on a benchmark dataset that
our model can produce quality segmentation output from the natural language
expression, and outperforms baseline methods by a large margin.
| no_new_dataset | 0.947866 |
1603.06289 | Muhammad Ikram | Muhammad Ikram, Hassan Jameel Asghar, Mohamed Ali Kaafar, Balachander
Krishnamurthy, Anirban Mahanti | Towards Seamless Tracking-Free Web: Improved Detection of Trackers via
One-class Learning | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous tools have been developed to aggressively block the execution of
popular JavaScript programs (JS) in Web browsers. Such blocking also affects
functionality of webpages and impairs user experience. As a consequence, many
privacy preserving tools (PP-Tools) that have been developed to limit online
tracking, often executed via JS, may suffer from poor performance and limited
uptake. A mechanism that can isolate JS necessary for proper functioning of the
website from tracking JS would thus be useful. Through the use of a manually
labelled dataset composed of 2,612 JS, we show how current PP-Tools are
ineffective in finding the right balance between blocking tracking JS and
allowing functional JS. To the best of our knowledge, this is the first study
to assess the performance of current web PP-Tools.
To improve this balance, we examine the two classes of JS and hypothesize
that tracking JS share structural similarities that can be used to
differentiate them from functional JS. The rationale of our approach is that
web developers often borrow and customize existing pieces of code in order to
embed tracking (resp. functional) JS into their webpages. We then propose
one-class machine learning classifiers using syntactic and semantic features
extracted from JS. When trained only on samples of tracking JS, our classifiers
achieve an accuracy of 99%, whereas the best of the PP-Tools achieved an accuracy
of 78%.
We further test our classifiers and several popular PP-Tools on a corpus of
4K websites with 135K JS. The output of our best classifier on this data is
between 20% and 64% different from the PP-Tools. We manually analyse a sample of
the JS for which our classifier is in disagreement with all other PP-Tools, and
show that our approach is not only able to enhance user web experience by
correctly classifying more functional JS, but also discovers previously unknown
tracking services.
| [
{
"version": "v1",
"created": "Sun, 20 Mar 2016 23:33:55 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Ikram",
"Muhammad",
""
],
[
"Asghar",
"Hassan Jameel",
""
],
[
"Kaafar",
"Mohamed Ali",
""
],
[
"Krishnamurthy",
"Balachander",
""
],
[
"Mahanti",
"Anirban",
""
]
] | TITLE: Towards Seamless Tracking-Free Web: Improved Detection of Trackers via
One-class Learning
ABSTRACT: Numerous tools have been developed to aggressively block the execution of
popular JavaScript programs (JS) in Web browsers. Such blocking also affects
functionality of webpages and impairs user experience. As a consequence, many
privacy preserving tools (PP-Tools) that have been developed to limit online
tracking, often executed via JS, may suffer from poor performance and limited
uptake. A mechanism that can isolate JS necessary for proper functioning of the
website from tracking JS would thus be useful. Through the use of a manually
labelled dataset composed of 2,612 JS, we show how current PP-Tools are
ineffective in finding the right balance between blocking tracking JS and
allowing functional JS. To the best of our knowledge, this is the first study
to assess the performance of current web PP-Tools.
To improve this balance, we examine the two classes of JS and hypothesize
that tracking JS share structural similarities that can be used to
differentiate them from functional JS. The rationale of our approach is that
web developers often borrow and customize existing pieces of code in order to
embed tracking (resp. functional) JS into their webpages. We then propose
one-class machine learning classifiers using syntactic and semantic features
extracted from JS. When trained only on samples of tracking JS, our classifiers
achieve an accuracy of 99%, whereas the best of the PP-Tools achieved an accuracy
of 78%.
We further test our classifiers and several popular PP-Tools on a corpus of
4K websites with 135K JS. The output of our best classifier on this data is
between 20% and 64% different from the PP-Tools. We manually analyse a sample of
the JS for which our classifier is in disagreement with all other PP-Tools, and
show that our approach is not only able to enhance user web experience by
correctly classifying more functional JS, but also discovers previously unknown
tracking services.
| no_new_dataset | 0.929824 |
1603.06371 | Floriana Gargiulo | Floriana Gargiulo, Auguste Caen, Renaud Lambiotte and Timoteo Carletti | The classical origin of modern mathematics | null | null | null | null | math.HO cs.CY physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to study the historical evolution of mathematical
thinking and its spatial spreading. To do so, we have collected and integrated
data from different online academic datasets. In its final stage, the database
includes a large number (N~200K) of advisor-student relationships, with
affiliations and keywords on their research topic, over several centuries, from
the 14th century until today. We focus on two different topics, the evolving
importance of countries and of the research disciplines over time. Moreover we
study the database at three levels, its global statistics, the mesoscale
networks connecting countries and disciplines, and the genealogical level.
| [
{
"version": "v1",
"created": "Mon, 21 Mar 2016 09:53:49 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Gargiulo",
"Floriana",
""
],
[
"Caen",
"Auguste",
""
],
[
"Lambiotte",
"Renaud",
""
],
[
"Carletti",
"Timoteo",
""
]
] | TITLE: The classical origin of modern mathematics
ABSTRACT: The aim of this paper is to study the historical evolution of mathematical
thinking and its spatial spreading. To do so, we have collected and integrated
data from different online academic datasets. In its final stage, the database
includes a large number (N~200K) of advisor-student relationships, with
affiliations and keywords on their research topic, over several centuries, from
the 14th century until today. We focus on two different topics, the evolving
importance of countries and of the research disciplines over time. Moreover we
study the database at three levels, its global statistics, the mesoscale
networks connecting countries and disciplines, and the genealogical level.
| no_new_dataset | 0.93511 |
1603.06398 | Liqian Ma | Liqian Ma, Jue Wang, Eli Shechtman, Kalyan Sunkavalli, Shimin Hu | Appearance Harmonization for Single Image Shadow Removal | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Shadows often create unwanted artifacts in photographs, and removing them can
be very challenging. Previous shadow removal methods often produce de-shadowed
regions that are visually inconsistent with the rest of the image. In this work
we propose a fully automatic shadow region harmonization approach that improves
the appearance compatibility of the de-shadowed region as typically produced by
previous methods. It is based on a shadow-guided patch-based image synthesis
approach that reconstructs the shadow region using patches sampled from
non-shadowed regions. The result is then refined based on the reconstruction
confidence to handle unique image patterns. Many shadow removal results and
comparisons are show the effectiveness of our improvement. Quantitative
evaluation on a benchmark dataset suggests that our automatic shadow
harmonization approach effectively improves upon the state-of-the-art.
| [
{
"version": "v1",
"created": "Mon, 21 Mar 2016 12:01:36 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Ma",
"Liqian",
""
],
[
"Wang",
"Jue",
""
],
[
"Shechtman",
"Eli",
""
],
[
"Sunkavalli",
"Kalyan",
""
],
[
"Hu",
"Shimin",
""
]
] | TITLE: Appearance Harmonization for Single Image Shadow Removal
ABSTRACT: Shadows often create unwanted artifacts in photographs, and removing them can
be very challenging. Previous shadow removal methods often produce de-shadowed
regions that are visually inconsistent with the rest of the image. In this work
we propose a fully automatic shadow region harmonization approach that improves
the appearance compatibility of the de-shadowed region as typically produced by
previous methods. It is based on a shadow-guided patch-based image synthesis
approach that reconstructs the shadow region using patches sampled from
non-shadowed regions. The result is then refined based on the reconstruction
confidence to handle unique image patterns. Many shadow removal results and
comparisons show the effectiveness of our improvement. Quantitative
evaluation on a benchmark dataset suggests that our automatic shadow
harmonization approach effectively improves upon the state-of-the-art.
| no_new_dataset | 0.955527 |
1603.06531 | Otkrist Gupta | Otkrist Gupta, Dan Raviv, Ramesh Raskar | Deep video gesture recognition using illumination invariants | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present architectures based on deep neural nets for gesture
recognition in videos, which are invariant to local scaling. We amalgamate
autoencoder and predictor architectures using an adaptive weighting scheme
coping with a reduced size labeled dataset, while enriching our models from
enormous unlabeled sets. We further improve robustness to lighting conditions
by introducing a new adaptive filter based on temporal local scale
normalization. We provide superior results over known methods, including recently
reported approaches based on neural nets.
| [
{
"version": "v1",
"created": "Mon, 21 Mar 2016 18:33:29 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Gupta",
"Otkrist",
""
],
[
"Raviv",
"Dan",
""
],
[
"Raskar",
"Ramesh",
""
]
] | TITLE: Deep video gesture recognition using illumination invariants
ABSTRACT: In this paper we present architectures based on deep neural nets for gesture
recognition in videos, which are invariant to local scaling. We amalgamate
autoencoder and predictor architectures using an adaptive weighting scheme
coping with a reduced size labeled dataset, while enriching our models from
enormous unlabeled sets. We further improve robustness to lighting conditions
by introducing a new adaptive filter based on temporal local scale
normalization. We provide superior results over known methods, including recently
reported approaches based on neural nets.
| no_new_dataset | 0.944995 |
1603.06541 | Ping Li | Ping Li | A Comparison Study of Nonlinear Kernels | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we compare 5 different nonlinear kernels: min-max, RBF, fRBF
(folded RBF), acos, and acos-$\chi^2$, on a wide range of publicly available
datasets. The proposed fRBF kernel performs very similarly to the RBF kernel.
Both RBF and fRBF kernels require an important tuning parameter ($\gamma$).
Interestingly, for a significant portion of the datasets, the min-max kernel
outperforms the best-tuned RBF/fRBF kernels. The acos kernel and acos-$\chi^2$
kernel also perform well in general and in some datasets achieve the best
accuracies.
One crucial issue with the use of nonlinear kernels is the excessive
computational and memory cost. These days, one increasingly popular strategy is
to linearize the kernels through various randomization algorithms. In our
study, the randomization method for the min-max kernel demonstrates excellent
performance compared to the randomization methods for other types of nonlinear
kernels, measured in terms of the number of nonzero terms in the transformed
dataset.
Our study provides evidence for supporting the use of the min-max kernel and
the corresponding randomized linearization method (i.e., the so-called "0-bit
CWS"). Furthermore, the results motivate at least two directions for future
research: (i) To develop new (and linearizable) nonlinear kernels for better
accuracies; and (ii) To develop better linearization algorithms for improving
the current linearization methods for the RBF kernel, the acos kernel, and the
acos-$\chi^2$ kernel. One attempt is to combine the min-max kernel with the
acos kernel or the acos-$\chi^2$ kernel. The advantages of these two new and
tuning-free nonlinear kernels are demonstrated via our extensive experiments.
| [
{
"version": "v1",
"created": "Mon, 21 Mar 2016 19:11:50 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Li",
"Ping",
""
]
] | TITLE: A Comparison Study of Nonlinear Kernels
ABSTRACT: In this paper, we compare 5 different nonlinear kernels: min-max, RBF, fRBF
(folded RBF), acos, and acos-$\chi^2$, on a wide range of publicly available
datasets. The proposed fRBF kernel performs very similarly to the RBF kernel.
Both RBF and fRBF kernels require an important tuning parameter ($\gamma$).
Interestingly, for a significant portion of the datasets, the min-max kernel
outperforms the best-tuned RBF/fRBF kernels. The acos kernel and acos-$\chi^2$
kernel also perform well in general and in some datasets achieve the best
accuracies.
One crucial issue with the use of nonlinear kernels is the excessive
computational and memory cost. These days, one increasingly popular strategy is
to linearize the kernels through various randomization algorithms. In our
study, the randomization method for the min-max kernel demonstrates excellent
performance compared to the randomization methods for other types of nonlinear
kernels, measured in terms of the number of nonzero terms in the transformed
dataset.
Our study provides evidence for supporting the use of the min-max kernel and
the corresponding randomized linearization method (i.e., the so-called "0-bit
CWS"). Furthermore, the results motivate at least two directions for future
research: (i) To develop new (and linearizable) nonlinear kernels for better
accuracies; and (ii) To develop better linearization algorithms for improving
the current linearization methods for the RBF kernel, the acos kernel, and the
acos-$\chi^2$ kernel. One attempt is to combine the min-max kernel with the
acos kernel or the acos-$\chi^2$ kernel. The advantages of these two new and
tuning-free nonlinear kernels are demonstrated via our extensive experiments.
| no_new_dataset | 0.950319 |
1603.06554 | Mohamed Amer | Timothy J. Shields, Mohamed R. Amer, Max Ehrlich, Amir Tamrakar | Action-Affect Classification and Morphing using Multi-Task
Representation Learning | null | null | null | null | cs.CV cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Most recent work focused on affect from facial expressions, and not as much
on body. This work focuses on body affect analysis. Affect does not occur in
isolation. Humans usually couple affect with an action in natural interactions;
for example, a person could be talking and smiling. Recognizing body affect in
sequences requires efficient algorithms to capture both the micro movements
that differentiate between happy and sad and the macro variations between
different actions. We depart from traditional approaches for time-series data
analytics by proposing a multi-task learning model that learns a shared
representation that is well-suited for action-affect classification as well as
generation. For this paper we choose Conditional Restricted Boltzmann Machines
to be our building block. We propose a new model that enhances the CRBM model
with a factored multi-task component to become Multi-Task Conditional
Restricted Boltzmann Machines (MTCRBMs). We evaluate our approach on two
publicly available datasets, the Body Affect dataset and the Tower Game
dataset, and show superior classification performance improvement over the
state-of-the-art, as well as the generative abilities of our model.
| [
{
"version": "v1",
"created": "Mon, 21 Mar 2016 19:38:07 GMT"
}
] | 2016-03-22T00:00:00 | [
[
"Shields",
"Timothy J.",
""
],
[
"Amer",
"Mohamed R.",
""
],
[
"Ehrlich",
"Max",
""
],
[
"Tamrakar",
"Amir",
""
]
] | TITLE: Action-Affect Classification and Morphing using Multi-Task
Representation Learning
ABSTRACT: Most recent work focused on affect from facial expressions, and not as much
on body. This work focuses on body affect analysis. Affect does not occur in
isolation. Humans usually couple affect with an action in natural interactions;
for example, a person could be talking and smiling. Recognizing body affect in
sequences requires efficient algorithms to capture both the micro movements
that differentiate between happy and sad and the macro variations between
different actions. We depart from traditional approaches for time-series data
analytics by proposing a multi-task learning model that learns a shared
representation that is well-suited for action-affect classification as well as
generation. For this paper we choose Conditional Restricted Boltzmann Machines
to be our building block. We propose a new model that enhances the CRBM model
with a factored multi-task component to become Multi-Task Conditional
Restricted Boltzmann Machines (MTCRBMs). We evaluate our approach on two
publicly available datasets, the Body Affect dataset and the Tower Game
dataset, and show superior classification performance improvement over the
state-of-the-art, as well as the generative abilities of our model.
| no_new_dataset | 0.944074 |
1508.03865 | Yannick Meier | Yannick Meier, Jie Xu, Onur Atan and Mihaela van der Schaar | Predicting Grades | 15 pages, 15 figures | IEEE Transactions on Signal Processing, vol. 64, no. 4, pp.
959-972, Feb.15, 2016 | 10.1109/TSP.2015.2496278 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To increase efficacy in traditional classroom courses as well as in Massive
Open Online Courses (MOOCs), automated systems supporting the instructor are
needed. One important problem is to automatically detect students that are
going to do poorly in a course early enough to be able to take remedial
actions. Existing grade prediction systems focus on maximizing the accuracy of
the prediction while overlooking the importance of issuing timely and
personalized predictions. This paper proposes an algorithm that predicts the
final grade of each student in a class. It issues a prediction for each student
individually, when the expected accuracy of the prediction is sufficient. The
algorithm learns online what is the optimal prediction and time to issue a
prediction based on past history of students' performance in a course. We
derive a confidence estimate for the prediction accuracy and demonstrate the
performance of our algorithm on a dataset obtained based on the performance of
approximately 700 UCLA undergraduate students who have taken an introductory
digital signal processing course over the past 7 years. We demonstrate that for 85% of
the students we can predict with 76% accuracy whether they are going to do well or
poorly in the class after the 4th course week. Using data obtained from a pilot
course, our methodology suggests that it is effective to perform early in-class
assessments such as quizzes, which result in timely performance prediction for
each student, thereby enabling timely interventions by the instructor (at the
student or class level) when necessary.
| [
{
"version": "v1",
"created": "Sun, 16 Aug 2015 20:53:09 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2016 15:52:33 GMT"
}
] | 2016-03-21T00:00:00 | [
[
"Meier",
"Yannick",
""
],
[
"Xu",
"Jie",
""
],
[
"Atan",
"Onur",
""
],
[
"van der Schaar",
"Mihaela",
""
]
] | TITLE: Predicting Grades
ABSTRACT: To increase efficacy in traditional classroom courses as well as in Massive
Open Online Courses (MOOCs), automated systems supporting the instructor are
needed. One important problem is to automatically detect students that are
going to do poorly in a course early enough to be able to take remedial
actions. Existing grade prediction systems focus on maximizing the accuracy of
the prediction while overlooking the importance of issuing timely and
personalized predictions. This paper proposes an algorithm that predicts the
final grade of each student in a class. It issues a prediction for each student
individually, when the expected accuracy of the prediction is sufficient. The
algorithm learns online the optimal prediction and the time to issue it, based
on the past history of students' performance in a course. We
derive a confidence estimate for the prediction accuracy and demonstrate the
performance of our algorithm on a dataset obtained based on the performance of
approximately 700 UCLA undergraduate students who have taken an introductory
digital signal processing course over the past 7 years. We demonstrate that for 85% of
the students we can predict with 76% accuracy whether they are going to do well or
poorly in the class after the 4th course week. Using data obtained from a pilot
course, our methodology suggests that it is effective to perform early in-class
assessments such as quizzes, which result in timely performance prediction for
each student, thereby enabling timely interventions by the instructor (at the
student or class level) when necessary.
| no_new_dataset | 0.941277 |
1601.06062 | Matthias Hoffmann | Matthias Hoffmann, Christopher Kowalewski, Andreas Maier, Klaus
Kurzidim, Norbert Strobel, Joachim Hornegger | 3-D/2-D Registration of Cardiac Structures by 3-D Contrast Agent
Distribution Estimation | null | null | 10.1155/2016/7690391 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For augmented fluoroscopy during cardiac catheter ablation procedures, a
preoperatively acquired 3-D model of the left atrium of the patient can be
registered to X-ray images. To this end, the 3-D model is matched with the
contrast-agent-based appearance of the left atrium. Commonly, only small amounts of
contrast agent (CA) are used to locate the left atrium. This is why we focus on
robust registration methods that also work when the structure of interest is only
partially contrasted. In particular, we propose two similarity measures for
CA-based registration: The first similarity measure, explicit apparent edges,
focuses on edges of the patient anatomy made visible by contrast agent and can
be computed quickly on the GPU. The second novel similarity measure computes a
contrast agent distribution estimate (CADE) inside the 3-D model and rates its
consistency with the CA seen in biplane fluoroscopic images. As the CADE
computation involves a reconstruction of CA in 3-D using the CA within the
fluoroscopic images, it is slower. Using a combination of both methods, our
evaluation on 11 well-contrasted clinical datasets yielded an error of
7.9+/-6.3 mm over all frames. For 10 datasets with little CA, we obtained an
error of 8.8+/-6.7 mm. Our new methods outperform a registration based on the
projected shadow significantly (p<0.05).
| [
{
"version": "v1",
"created": "Fri, 22 Jan 2016 16:23:25 GMT"
}
] | 2016-03-21T00:00:00 | [
[
"Hoffmann",
"Matthias",
""
],
[
"Kowalewski",
"Christopher",
""
],
[
"Maier",
"Andreas",
""
],
[
"Kurzidim",
"Klaus",
""
],
[
"Strobel",
"Norbert",
""
],
[
"Hornegger",
"Joachim",
""
]
] | TITLE: 3-D/2-D Registration of Cardiac Structures by 3-D Contrast Agent
Distribution Estimation
ABSTRACT: For augmented fluoroscopy during cardiac catheter ablation procedures, a
preoperatively acquired 3-D model of the left atrium of the patient can be
registered to X-ray images. To this end, the 3-D model is matched with the
contrast-agent-based appearance of the left atrium. Commonly, only small amounts of
contrast agent (CA) are used to locate the left atrium. This is why we focus on
robust registration methods that also work when the structure of interest is only
partially contrasted. In particular, we propose two similarity measures for
CA-based registration: The first similarity measure, explicit apparent edges,
focuses on edges of the patient anatomy made visible by contrast agent and can
be computed quickly on the GPU. The second novel similarity measure computes a
contrast agent distribution estimate (CADE) inside the 3-D model and rates its
consistency with the CA seen in biplane fluoroscopic images. As the CADE
computation involves a reconstruction of CA in 3-D using the CA within the
fluoroscopic images, it is slower. Using a combination of both methods, our
evaluation on 11 well-contrasted clinical datasets yielded an error of
7.9+/-6.3 mm over all frames. For 10 datasets with little CA, we obtained an
error of 8.8+/-6.7 mm. Our new methods outperform a registration based on the
projected shadow significantly (p<0.05).
| no_new_dataset | 0.956431 |
1603.05772 | Matthias Nie{\ss}ner | Julien Valentin and Angela Dai and Matthias Nie{\ss}ner and Pushmeet
Kohli and Philip Torr and Shahram Izadi and Cem Keskin | Learning to Navigate the Energy Landscape | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel and efficient architecture for addressing
computer vision problems that use `Analysis by Synthesis'. Analysis by
synthesis involves the minimization of the reconstruction error which is
typically a non-convex function of the latent target variables.
State-of-the-art methods adopt a hybrid scheme where discriminatively trained
predictors like Random Forests or Convolutional Neural Networks are used to
initialize local search algorithms. While these methods have been shown to
produce promising results, they often get stuck in local optima. Our method
goes beyond the conventional hybrid architecture by not only proposing multiple
accurate initial solutions but by also defining a navigational structure over
the solution space that can be used for extremely efficient gradient-free local
search. We demonstrate the efficacy of our approach on the challenging problem
of RGB Camera Relocalization. To make the RGB camera relocalization problem
particularly challenging, we introduce a new dataset of 3D environments which
are significantly larger than those found in other publicly-available datasets.
Our experiments reveal that the proposed method is able to achieve
state-of-the-art camera relocalization results. We also demonstrate the
generalizability of our approach on Hand Pose Estimation and Image Retrieval
tasks.
| [
{
"version": "v1",
"created": "Fri, 18 Mar 2016 05:45:39 GMT"
}
] | 2016-03-21T00:00:00 | [
[
"Valentin",
"Julien",
""
],
[
"Dai",
"Angela",
""
],
[
"Nießner",
"Matthias",
""
],
[
"Kohli",
"Pushmeet",
""
],
[
"Torr",
"Philip",
""
],
[
"Izadi",
"Shahram",
""
],
[
"Keskin",
"Cem",
""
]
] | TITLE: Learning to Navigate the Energy Landscape
ABSTRACT: In this paper, we present a novel and efficient architecture for addressing
computer vision problems that use `Analysis by Synthesis'. Analysis by
synthesis involves the minimization of the reconstruction error which is
typically a non-convex function of the latent target variables.
State-of-the-art methods adopt a hybrid scheme where discriminatively trained
predictors like Random Forests or Convolutional Neural Networks are used to
initialize local search algorithms. While these methods have been shown to
produce promising results, they often get stuck in local optima. Our method
goes beyond the conventional hybrid architecture by not only proposing multiple
accurate initial solutions but by also defining a navigational structure over
the solution space that can be used for extremely efficient gradient-free local
search. We demonstrate the efficacy of our approach on the challenging problem
of RGB Camera Relocalization. To make the RGB camera relocalization problem
particularly challenging, we introduce a new dataset of 3D environments which
are significantly larger than those found in other publicly-available datasets.
Our experiments reveal that the proposed method is able to achieve
state-of-the-art camera relocalization results. We also demonstrate the
generalizability of our approach on Hand Pose Estimation and Image Retrieval
tasks.
| new_dataset | 0.955817 |
1603.05782 | Xiangyu Wang | Xiangyu Wang and Alex Yong-Sang Chia | Unsupervised Cross-Media Hashing with Structure Preservation | null | null | null | null | cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have seen the exponential growth of heterogeneous multimedia
data. The need for effective and accurate data retrieval from heterogeneous
data sources has attracted much research interest in cross-media retrieval.
Here, given a query of any media type, cross-media retrieval seeks to find
relevant results of different media types from heterogeneous data sources. To
facilitate large-scale cross-media retrieval, we propose a novel unsupervised
cross-media hashing method. Our method incorporates local affinity and distance
repulsion constraints into a matrix factorization framework. Correspondingly,
the proposed method learns hash functions that generate unified hash codes
from different media types, while ensuring the intrinsic geometric structure of the
data distribution is preserved. These hash codes enable the similarity between
data of different media types to be evaluated directly. Experimental results on
two large-scale multimedia datasets demonstrate the effectiveness of the
proposed method, where we outperform the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 18 Mar 2016 07:10:35 GMT"
}
] | 2016-03-21T00:00:00 | [
[
"Wang",
"Xiangyu",
""
],
[
"Chia",
"Alex Yong-Sang",
""
]
] | TITLE: Unsupervised Cross-Media Hashing with Structure Preservation
ABSTRACT: Recent years have seen the exponential growth of heterogeneous multimedia
data. The need for effective and accurate data retrieval from heterogeneous
data sources has attracted much research interest in cross-media retrieval.
Here, given a query of any media type, cross-media retrieval seeks to find
relevant results of different media types from heterogeneous data sources. To
facilitate large-scale cross-media retrieval, we propose a novel unsupervised
cross-media hashing method. Our method incorporates local affinity and distance
repulsion constraints into a matrix factorization framework. Correspondingly,
the proposed method learns hash functions that generate unified hash codes
from different media types, while ensuring the intrinsic geometric structure of the
data distribution is preserved. These hash codes enable the similarity between
data of different media types to be evaluated directly. Experimental results on
two large-scale multimedia datasets demonstrate the effectiveness of the
proposed method, where we outperform the state-of-the-art methods.
| no_new_dataset | 0.946498 |
1603.05824 | Lars Hertel | Lars Hertel, Huy Phan, Alfred Mertins | Comparing Time and Frequency Domain for Audio Event Recognition Using
Deep Learning | 5 pages, accepted version for publication in Proceedings of the IEEE
International Joint Conference on Neural Networks (IJCNN), July 2016,
Vancouver, Canada | null | null | null | cs.NE cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing acoustic events is an intricate problem for a machine and an
emerging field of research. Deep neural networks achieve convincing results and
are currently the state-of-the-art approach for many tasks. One advantage is
their implicit feature learning, as opposed to explicit feature extraction of
the input signal. In this work, we analyzed whether more discriminative
features can be learned from either the time-domain or the frequency-domain
representation of the audio signal. For this purpose, we trained multiple deep
networks with different architectures on the Freiburg-106 and ESC-10 datasets.
Our results show that feature learning from the frequency domain is superior to
the time domain. Moreover, additionally using convolution and pooling layers
to explore local structures of the audio signal significantly improves the
recognition performance and achieves state-of-the-art results.
| [
{
"version": "v1",
"created": "Fri, 18 Mar 2016 10:38:23 GMT"
}
] | 2016-03-21T00:00:00 | [
[
"Hertel",
"Lars",
""
],
[
"Phan",
"Huy",
""
],
[
"Mertins",
"Alfred",
""
]
] | TITLE: Comparing Time and Frequency Domain for Audio Event Recognition Using
Deep Learning
ABSTRACT: Recognizing acoustic events is an intricate problem for a machine and an
emerging field of research. Deep neural networks achieve convincing results and
are currently the state-of-the-art approach for many tasks. One advantage is
their implicit feature learning, as opposed to explicit feature extraction of
the input signal. In this work, we analyzed whether more discriminative
features can be learned from either the time-domain or the frequency-domain
representation of the audio signal. For this purpose, we trained multiple deep
networks with different architectures on the Freiburg-106 and ESC-10 datasets.
Our results show that feature learning from the frequency domain is superior to
the time domain. Moreover, additionally using convolution and pooling layers
to explore local structures of the audio signal significantly improves the
recognition performance and achieves state-of-the-art results.
| no_new_dataset | 0.950595 |
1603.05850 | Joey Tianyi Zhou Dr | Joey Tianyi Zhou, Ivor W. Tsang, Shen-Shyang Ho and Klaus-Robert
Muller | N-ary Error Correcting Coding Scheme | Under submission to IEEE Transaction on Information Theory | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The coding matrix design plays a fundamental role in the prediction
performance of the error correcting output codes (ECOC)-based multi-class task.
In many-class classification problems, e.g., fine-grained categorization, it
is difficult to distinguish subtle between-class differences under existing
coding schemes due to the limited choice of coding values. In this paper, we
investigate whether one can relax existing binary and ternary code design to
$N$-ary code design to achieve better classification performance. In
particular, we present a novel $N$-ary coding scheme that decomposes the
original multi-class problem into simpler multi-class subproblems, which is
similar to applying a divide-and-conquer method. The two main advantages of
such a coding scheme are as follows: (i) the ability to construct more
discriminative codes and (ii) the flexibility for the user to select the best
$N$ for ECOC-based classification. We show empirically that the optimal $N$
(based on classification performance) lies in $[3, 10]$ with some trade-off in
computational cost. Moreover, we provide theoretical insights on the dependency
of the generalization error bound of an $N$-ary ECOC on the average base
classifier generalization error and the minimum distance between any two codes
constructed. Extensive experimental results on benchmark multi-class datasets
show that the proposed coding scheme achieves superior prediction performance
over the state-of-the-art coding methods.
| [
{
"version": "v1",
"created": "Fri, 18 Mar 2016 11:51:09 GMT"
}
] | 2016-03-21T00:00:00 | [
[
"Zhou",
"Joey Tianyi",
""
],
[
"Tsang",
"Ivor W.",
""
],
[
"Ho",
"Shen-Shyang",
""
],
[
"Muller",
"Klaus-Robert",
""
]
] | TITLE: N-ary Error Correcting Coding Scheme
ABSTRACT: The coding matrix design plays a fundamental role in the prediction
performance of the error correcting output codes (ECOC)-based multi-class task.
In many-class classification problems, e.g., fine-grained categorization, it
is difficult to distinguish subtle between-class differences under existing
coding schemes due to the limited choice of coding values. In this paper, we
investigate whether one can relax existing binary and ternary code design to
$N$-ary code design to achieve better classification performance. In
particular, we present a novel $N$-ary coding scheme that decomposes the
original multi-class problem into simpler multi-class subproblems, which is
similar to applying a divide-and-conquer method. The two main advantages of
such a coding scheme are as follows: (i) the ability to construct more
discriminative codes and (ii) the flexibility for the user to select the best
$N$ for ECOC-based classification. We show empirically that the optimal $N$
(based on classification performance) lies in $[3, 10]$ with some trade-off in
computational cost. Moreover, we provide theoretical insights on the dependency
of the generalization error bound of an $N$-ary ECOC on the average base
classifier generalization error and the minimum distance between any two codes
constructed. Extensive experimental results on benchmark multi-class datasets
show that the proposed coding scheme achieves superior prediction performance
over the state-of-the-art coding methods.
| no_new_dataset | 0.939913 |
1507.04717 | Alessandro Rudi | Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco | Less is More: Nystr\"om Computational Regularization | updated version of NIPS 2015 (oral) | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study Nystr\"om type subsampling approaches to large scale kernel methods,
and prove learning bounds in the statistical learning setting, where random
sampling and high probability estimates are considered. In particular, we prove
that these approaches can achieve optimal learning bounds, provided the
subsampling level is suitably chosen. These results suggest a simple
incremental variant of Nystr\"om Kernel Regularized Least Squares, where the
subsampling level implements a form of computational regularization, in the
sense that it controls at the same time regularization and computations.
Extensive experimental analysis shows that the considered approach achieves
state-of-the-art performance on large-scale benchmark datasets.
| [
{
"version": "v1",
"created": "Thu, 16 Jul 2015 19:26:27 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jul 2015 15:37:29 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Oct 2015 21:34:59 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Nov 2015 15:16:59 GMT"
},
{
"version": "v5",
"created": "Mon, 7 Mar 2016 17:34:28 GMT"
},
{
"version": "v6",
"created": "Thu, 17 Mar 2016 16:27:36 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Rudi",
"Alessandro",
""
],
[
"Camoriano",
"Raffaello",
""
],
[
"Rosasco",
"Lorenzo",
""
]
] | TITLE: Less is More: Nystr\"om Computational Regularization
ABSTRACT: We study Nystr\"om type subsampling approaches to large scale kernel methods,
and prove learning bounds in the statistical learning setting, where random
sampling and high probability estimates are considered. In particular, we prove
that these approaches can achieve optimal learning bounds, provided the
subsampling level is suitably chosen. These results suggest a simple
incremental variant of Nystr\"om Kernel Regularized Least Squares, where the
subsampling level implements a form of computational regularization, in the
sense that it controls at the same time regularization and computations.
Extensive experimental analysis shows that the considered approach achieves
state-of-the-art performance on large-scale benchmark datasets.
| no_new_dataset | 0.951006 |
1508.06073 | Hilde Kuehne | Hilde Kuehne and Juergen Gall and Thomas Serre | Cooking in the kitchen: Recognizing and Segmenting Human Activities in
Videos | 15 pages, 12 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As research on action recognition matures, the focus is shifting away from
categorizing basic task-oriented actions using hand-segmented video datasets to
understanding complex goal-oriented daily human activities in real-world
settings. Temporally structured models would seem an obvious choice to tackle this set of
problems, but so far, cases where these models have outperformed simpler
unstructured bag-of-word types of models are scarce. With the increasing
availability of large human activity datasets, combined with the development of
novel feature coding techniques that yield more compact representations, it is
time to revisit structured generative approaches.
Here, we describe an end-to-end generative approach from the encoding of
features to the structural modeling of complex human activities by applying
Fisher vectors and temporal models for the analysis of video sequences.
We systematically evaluate the proposed approach on several available
datasets (ADL, MPIICooking, and Breakfast datasets) using a variety of
performance metrics. Through extensive system evaluations, we demonstrate that
combining compact video representations based on Fisher Vectors with HMM-based
modeling yields very significant gains in accuracy and that, when properly trained
with sufficient training samples, structured temporal models outperform
unstructured bag-of-word types of models by a large margin on the tested
performance metric.
| [
{
"version": "v1",
"created": "Tue, 25 Aug 2015 08:59:46 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2016 10:04:21 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Kuehne",
"Hilde",
""
],
[
"Gall",
"Juergen",
""
],
[
"Serre",
"Thomas",
""
]
] | TITLE: Cooking in the kitchen: Recognizing and Segmenting Human Activities in
Videos
ABSTRACT: As research on action recognition matures, the focus is shifting away from
categorizing basic task-oriented actions using hand-segmented video datasets to
understanding complex goal-oriented daily human activities in real-world
settings. Temporally structured models would seem an obvious choice to tackle this set of
problems, but so far, cases where these models have outperformed simpler
unstructured bag-of-word types of models are scarce. With the increasing
availability of large human activity datasets, combined with the development of
novel feature coding techniques that yield more compact representations, it is
time to revisit structured generative approaches.
Here, we describe an end-to-end generative approach from the encoding of
features to the structural modeling of complex human activities by applying
Fisher vectors and temporal models for the analysis of video sequences.
We systematically evaluate the proposed approach on several available
datasets (ADL, MPIICooking, and Breakfast datasets) using a variety of
performance metrics. Through extensive system evaluations, we demonstrate that
combining compact video representations based on Fisher Vectors with HMM-based
modeling yields very significant gains in accuracy and that, when properly trained
with sufficient training samples, structured temporal models outperform
unstructured bag-of-word types of models by a large margin on the tested
performance metric.
| no_new_dataset | 0.941439 |
1509.01947 | Hilde Kuehne | Hilde Kuehne and Juergen Gall and Thomas Serre | An end-to-end generative framework for video segmentation and
recognition | Proc. of IEEE Winter Conference on Applications of Computer Vision
(WACV), 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe an end-to-end generative approach for the segmentation and
recognition of human activities. In this approach, a visual representation
based on reduced Fisher Vectors is combined with a structured temporal model
for recognition. We show that the statistical properties of Fisher Vectors make
them an especially suitable front-end for generative models such as Gaussian
mixtures. The system is evaluated for both the recognition of complex
activities as well as their parsing into action units. Using a variety of video
datasets ranging from human cooking activities to animal behaviors, our
experiments demonstrate that the resulting architecture outperforms
state-of-the-art approaches for larger datasets, i.e., when a sufficient amount of
data is available for training structured generative models.
| [
{
"version": "v1",
"created": "Mon, 7 Sep 2015 08:35:48 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2016 09:43:10 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Kuehne",
"Hilde",
""
],
[
"Gall",
"Juergen",
""
],
[
"Serre",
"Thomas",
""
]
] | TITLE: An end-to-end generative framework for video segmentation and
recognition
ABSTRACT: We describe an end-to-end generative approach for the segmentation and
recognition of human activities. In this approach, a visual representation
based on reduced Fisher Vectors is combined with a structured temporal model
for recognition. We show that the statistical properties of Fisher Vectors make
them an especially suitable front-end for generative models such as Gaussian
mixtures. The system is evaluated for both the recognition of complex
activities as well as their parsing into action units. Using a variety of video
datasets ranging from human cooking activities to animal behaviors, our
experiments demonstrate that the resulting architecture outperforms
state-of-the-art approaches for larger datasets, i.e., when a sufficient amount of
data is available for training structured generative models.
| no_new_dataset | 0.952175 |
1511.02917 | Vignesh Ramanathan | Vignesh Ramanathan and Jonathan Huang and Sami Abu-El-Haija and
Alexander Gorban and Kevin Murphy and Li Fei-Fei | Detecting events and key actors in multi-person videos | Accepted for publication in CVPR'16 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-person event recognition is a challenging task, often with many people
active in the scene but only a small subset contributing to an actual event. In
this paper, we propose a model which learns to detect events in such videos
while automatically "attending" to the people responsible for the event. Our
model does not use explicit annotations regarding who or where those people are
during training and testing. In particular, we track people in videos and use a
recurrent neural network (RNN) to represent the track features. We learn
time-varying attention weights to combine these features at each time-instant.
The attended features are then processed using another RNN for event
detection/classification. Since most video datasets with multiple people are
restricted to a small number of videos, we also collected a new basketball
dataset comprising 257 basketball games with 14K event annotations
corresponding to 11 event classes. Our model outperforms state-of-the-art
methods for both event classification and detection on this new dataset.
Additionally, we show that the attention mechanism is able to consistently
localize the relevant players.
| [
{
"version": "v1",
"created": "Mon, 9 Nov 2015 22:30:19 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2016 00:02:03 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Ramanathan",
"Vignesh",
""
],
[
"Huang",
"Jonathan",
""
],
[
"Abu-El-Haija",
"Sami",
""
],
[
"Gorban",
"Alexander",
""
],
[
"Murphy",
"Kevin",
""
],
[
"Fei-Fei",
"Li",
""
]
] | TITLE: Detecting events and key actors in multi-person videos
ABSTRACT: Multi-person event recognition is a challenging task, often with many people
active in the scene but only a small subset contributing to an actual event. In
this paper, we propose a model which learns to detect events in such videos
while automatically "attending" to the people responsible for the event. Our
model does not use explicit annotations regarding who or where those people are
during training and testing. In particular, we track people in videos and use a
recurrent neural network (RNN) to represent the track features. We learn
time-varying attention weights to combine these features at each time-instant.
The attended features are then processed using another RNN for event
detection/classification. Since most video datasets with multiple people are
restricted to a small number of videos, we also collected a new basketball
dataset comprising 257 basketball games with 14K event annotations
corresponding to 11 event classes. Our model outperforms state-of-the-art
methods for both event classification and detection on this new dataset.
Additionally, we show that the attention mechanism is able to consistently
localize the relevant players.
| new_dataset | 0.963746 |
1602.02830 | Itay Hubara | Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv and
Yoshua Bengio | Binarized Neural Networks: Training Deep Neural Networks with Weights
and Activations Constrained to +1 or -1 | 11 pages and 3 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a method to train Binarized Neural Networks (BNNs) - neural
networks with binary weights and activations at run-time. At training-time the
binary weights and activations are used for computing the parameter gradients.
During the forward pass, BNNs drastically reduce memory size and accesses, and
replace most arithmetic operations with bit-wise operations, which is expected
to substantially improve power-efficiency. To validate the effectiveness of
BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On
both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10
and SVHN datasets. Last but not least, we wrote a binary matrix multiplication
GPU kernel with which it is possible to run our MNIST BNN 7 times faster than
with an unoptimized GPU kernel, without suffering any loss in classification
accuracy. The code for training and running our BNNs is available on-line.
| [
{
"version": "v1",
"created": "Tue, 9 Feb 2016 01:01:59 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Feb 2016 21:26:53 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Mar 2016 14:54:25 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Courbariaux",
"Matthieu",
""
],
[
"Hubara",
"Itay",
""
],
[
"Soudry",
"Daniel",
""
],
[
"El-Yaniv",
"Ran",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Binarized Neural Networks: Training Deep Neural Networks with Weights
and Activations Constrained to +1 or -1
ABSTRACT: We introduce a method to train Binarized Neural Networks (BNNs) - neural
networks with binary weights and activations at run-time. At training-time the
binary weights and activations are used for computing the parameter gradients.
During the forward pass, BNNs drastically reduce memory size and accesses, and
replace most arithmetic operations with bit-wise operations, which is expected
to substantially improve power-efficiency. To validate the effectiveness of
BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On
both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10
and SVHN datasets. Last but not least, we wrote a binary matrix multiplication
GPU kernel with which it is possible to run our MNIST BNN 7 times faster than
with an unoptimized GPU kernel, without suffering any loss in classification
accuracy. The code for training and running our BNNs is available on-line.
| no_new_dataset | 0.945045 |
1603.05422 | Panagiotis Bouros | Panagiotis Bouros, Nikos Mamoulis, Shen Ge and Manolis Terrovitis | Set Containment Join Revisited | To appear at the Knowledge and Information Systems journal (KAIS) | null | 10.1007/s10115-015-0895-7 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given two collections of set objects $R$ and $S$, the $R \bowtie_{\subseteq}
S$ set containment join returns all object pairs $(r, s) \in R \times S$ such
that $r \subseteq s$. Besides being a basic operator in all modern data
management systems with a wide range of applications, the join can be used to
evaluate complex SQL queries based on relational division and as a module of
data mining algorithms. The state-of-the-art algorithm for set containment
joins (PRETTI) builds an inverted index on the right-hand collection $S$ and a
prefix tree on the left-hand collection $R$ that groups set objects with common
prefixes and thus, avoids redundant processing. In this paper, we present a
framework which improves PRETTI in two directions. First, we limit the prefix
tree construction by proposing an adaptive methodology based on a cost model;
this way, we can greatly reduce the space and time cost of the join. Second, we
partition the objects of each collection based on their first contained item,
assuming that the set objects are internally sorted. We show that we can
process the partitions and evaluate the join while building the prefix tree and
the inverted index progressively. This allows us to significantly reduce not
only the join cost, but also the maximum memory requirements during the join.
An experimental evaluation using both real and synthetic datasets shows that
our framework outperforms PRETTI by a wide margin.
| [
{
"version": "v1",
"created": "Thu, 17 Mar 2016 10:47:48 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Bouros",
"Panagiotis",
""
],
[
"Mamoulis",
"Nikos",
""
],
[
"Ge",
"Shen",
""
],
[
"Terrovitis",
"Manolis",
""
]
] | TITLE: Set Containment Join Revisited
ABSTRACT: Given two collections of set objects $R$ and $S$, the $R \bowtie_{\subseteq}
S$ set containment join returns all object pairs $(r, s) \in R \times S$ such
that $r \subseteq s$. Besides being a basic operator in all modern data
management systems with a wide range of applications, the join can be used to
evaluate complex SQL queries based on relational division and as a module of
data mining algorithms. The state-of-the-art algorithm for set containment
joins (PRETTI) builds an inverted index on the right-hand collection $S$ and a
prefix tree on the left-hand collection $R$ that groups set objects with common
prefixes and thus, avoids redundant processing. In this paper, we present a
framework which improves PRETTI in two directions. First, we limit the prefix
tree construction by proposing an adaptive methodology based on a cost model;
this way, we can greatly reduce the space and time cost of the join. Second, we
partition the objects of each collection based on their first contained item,
assuming that the set objects are internally sorted. We show that we can
process the partitions and evaluate the join while building the prefix tree and
the inverted index progressively. This allows us to significantly reduce not
only the join cost, but also the maximum memory requirements during the join.
An experimental evaluation using both real and synthetic datasets shows that
our framework outperforms PRETTI by a wide margin.
| no_new_dataset | 0.943556 |
1603.05435 | Rajeev Rajan | Rajeev Rajan and Hema A. Murthy | Modified Group Delay Based MultiPitch Estimation in Co-Channel Speech | null | null | null | null | cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phase processing has been replaced by group delay processing for the
extraction of source and system parameters from speech. Group delay functions
are ill-behaved when the transfer function has zeros that are close to unit
circle in the z-domain. The modified group delay function addresses this
problem and has been successfully used for formant and monopitch estimation. In
this paper, modified group delay functions are used for multipitch estimation
in concurrent speech. The power spectrum of the speech is first flattened in
order to annihilate the system characteristics, while retaining the source
characteristics. Group delay analysis on this flattened spectrum picks the
predominant pitch in the first pass and a comb filter is used to filter out the
estimated pitch along with its harmonics. The residual spectrum is again
analyzed for the next candidate pitch estimate in the second pass. The final
pitch trajectories of the constituent speech utterances are formed using pitch
grouping and post processing techniques. The performance of the proposed
algorithm was evaluated on standard datasets using two metrics: pitch accuracy
and standard deviation of fine pitch error. Our results show that the proposed
algorithm is a promising pitch detection method in a multipitch environment for
real speech recordings.
| [
{
"version": "v1",
"created": "Thu, 17 Mar 2016 11:35:09 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Rajan",
"Rajeev",
""
],
[
"Murthy",
"Hema A.",
""
]
] | TITLE: Modified Group Delay Based MultiPitch Estimation in Co-Channel Speech
ABSTRACT: Phase processing has been replaced by group delay processing for the
extraction of source and system parameters from speech. Group delay functions
are ill-behaved when the transfer function has zeros that are close to unit
circle in the z-domain. The modified group delay function addresses this
problem and has been successfully used for formant and monopitch estimation. In
this paper, modified group delay functions are used for multipitch estimation
in concurrent speech. The power spectrum of the speech is first flattened in
order to annihilate the system characteristics, while retaining the source
characteristics. Group delay analysis on this flattened spectrum picks the
predominant pitch in the first pass and a comb filter is used to filter out the
estimated pitch along with its harmonics. The residual spectrum is again
analyzed for the next candidate pitch estimate in the second pass. The final
pitch trajectories of the constituent speech utterances are formed using pitch
grouping and post processing techniques. The performance of the proposed
algorithm was evaluated on standard datasets using two metrics: pitch accuracy
and standard deviation of fine pitch error. Our results show that the proposed
algorithm is a promising pitch detection method in a multipitch environment for
real speech recordings.
| no_new_dataset | 0.95222 |
1603.05462 | Aanjhan Ranganathan | Aanjhan Ranganathan, Hildur \'Olafsd\'ottir, Srdjan Capkun | SPREE: Spoofing Resistant GPS Receiver | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Global Positioning System (GPS) is used ubiquitously in a wide variety of
applications ranging from navigation and tracking to modern smart grids and
communication networks. However, it has been demonstrated that modern GPS
receivers are vulnerable to signal spoofing attacks. For example, today it is
possible to change the course of a ship or force a drone to land in a hostile
area by simply spoofing GPS signals. Several countermeasures have been proposed
in the past to detect GPS spoofing attacks. These countermeasures offer
protection only against naive attackers. They are incapable of detecting strong
attackers such as those capable of seamlessly taking over a GPS receiver, which
is currently receiving legitimate satellite signals, and spoofing them to an
arbitrary location. Also, there is no hardware platform that can be used to
compare and evaluate the effectiveness of existing countermeasures in
real-world scenarios. In this work, we present SPREE, which is, to the best of
our knowledge, the first GPS receiver capable of detecting all spoofing attacks
described in literature. Our novel spoofing detection technique called
auxiliary peak tracking enables detection of even a strong attacker capable of
executing the seamless takeover attack. We implement and evaluate our receiver
against three different sets of GPS signal traces and show that SPREE
constrains even a strong attacker (capable of seamless takeover attack) from
spoofing the receiver to a location not more than 1 km away from its true
location. This is a significant improvement over modern GPS receivers that can
be spoofed to any arbitrary location. Finally, we release our implementation
and datasets to the community for further research and development.
| [
{
"version": "v1",
"created": "Thu, 17 Mar 2016 13:00:41 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Ranganathan",
"Aanjhan",
""
],
[
"Ólafsdóttir",
"Hildur",
""
],
[
"Capkun",
"Srdjan",
""
]
] | TITLE: SPREE: Spoofing Resistant GPS Receiver
ABSTRACT: Global Positioning System (GPS) is used ubiquitously in a wide variety of
applications ranging from navigation and tracking to modern smart grids and
communication networks. However, it has been demonstrated that modern GPS
receivers are vulnerable to signal spoofing attacks. For example, today it is
possible to change the course of a ship or force a drone to land in a hostile
area by simply spoofing GPS signals. Several countermeasures have been proposed
in the past to detect GPS spoofing attacks. These countermeasures offer
protection only against naive attackers. They are incapable of detecting strong
attackers such as those capable of seamlessly taking over a GPS receiver, which
is currently receiving legitimate satellite signals, and spoofing them to an
arbitrary location. Also, there is no hardware platform that can be used to
compare and evaluate the effectiveness of existing countermeasures in
real-world scenarios. In this work, we present SPREE, which is, to the best of
our knowledge, the first GPS receiver capable of detecting all spoofing attacks
described in literature. Our novel spoofing detection technique called
auxiliary peak tracking enables detection of even a strong attacker capable of
executing the seamless takeover attack. We implement and evaluate our receiver
against three different sets of GPS signal traces and show that SPREE
constrains even a strong attacker (capable of seamless takeover attack) from
spoofing the receiver to a location not more than 1 km away from its true
location. This is a significant improvement over modern GPS receivers that can
be spoofed to any arbitrary location. Finally, we release our implementation
and datasets to the community for further research and development.
| no_new_dataset | 0.921428 |
1603.05583 | L\'aszl\'o Gyarmati | Laszlo Gyarmati, Mohamed Hefeeda | Analyzing In-Game Movements of Soccer Players at Scale | MIT Sloan Sports Analytics Conference 2016 | null | null | null | cs.OH stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is challenging to get access to datasets related to the physical
performance of soccer players. The teams consider such information highly
confidential, especially if it covers in-game performance. Hence, most of the
analysis and evaluation of the players' performance do not contain much
information on the physical aspect of the game, creating a blindspot in
performance analysis. We propose a novel method to solve this issue by deriving
movement characteristics of soccer players. We use event-based datasets from
data provider companies covering 50+ soccer leagues allowing us to analyze the
movement profiles of potentially tens of thousands of players without any major
investment. Our methodology does not require an expensive, dedicated player
tracking system deployed in the stadium. We also compute the similarity of the
players based on their movement characteristics and as such identify potential
candidates who may be able to replace a given player. Finally, we quantify the
uniqueness and consistency of players in terms of their in-game movements. Our
study is the first of its kind that focuses on the movements of soccer players
at scale, while it derives novel, actionable insights for the soccer industry
from event-based datasets.
| [
{
"version": "v1",
"created": "Fri, 11 Mar 2016 23:54:55 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Gyarmati",
"Laszlo",
""
],
[
"Hefeeda",
"Mohamed",
""
]
] | TITLE: Analyzing In-Game Movements of Soccer Players at Scale
ABSTRACT: It is challenging to get access to datasets related to the physical
performance of soccer players. The teams consider such information highly
confidential, especially if it covers in-game performance. Hence, most of the
analysis and evaluation of the players' performance do not contain much
information on the physical aspect of the game, creating a blindspot in
performance analysis. We propose a novel method to solve this issue by deriving
movement characteristics of soccer players. We use event-based datasets from
data provider companies covering 50+ soccer leagues allowing us to analyze the
movement profiles of potentially tens of thousands of players without any major
investment. Our methodology does not require an expensive, dedicated player
tracking system deployed in the stadium. We also compute the similarity of the
players based on their movement characteristics and as such identify potential
candidates who may be able to replace a given player. Finally, we quantify the
uniqueness and consistency of players in terms of their in-game movements. Our
study is the first of its kind that focuses on the movements of soccer players
at scale, while it derives novel, actionable insights for the soccer industry
from event-based datasets.
| no_new_dataset | 0.942507 |
1603.05600 | Roozbeh Mottaghi | Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, Ali Farhadi | "What happens if..." Learning to Predict the Effect of Forces in Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What happens if one pushes a cup sitting on a table toward the edge of the
table? How about pushing a desk against a wall? In this paper, we study the
problem of understanding the movements of objects as a result of applying
external forces to them. For a given force vector applied to a specific
location in an image, our goal is to predict long-term sequential movements
caused by that force. Doing so entails reasoning about scene geometry, objects,
their attributes, and the physical rules that govern the movements of objects.
We design a deep neural network model that learns long-term sequential
dependencies of object movements while taking into account the geometry and
appearance of the scene by combining Convolutional and Recurrent Neural
Networks. Training our model requires a large-scale dataset of object movements
caused by external forces. To build a dataset of forces in scenes, we
reconstructed all images in SUN RGB-D dataset in a physics simulator to
estimate the physical movements of objects caused by external forces applied to
them. Our Forces in Scenes (ForScene) dataset contains 10,335 images in which a
variety of external forces are applied to different types of objects resulting
in more than 65,000 object movements represented in 3D. Our experimental
evaluations show that the challenging task of predicting long-term movements of
objects as their reaction to external forces is possible from a single image.
| [
{
"version": "v1",
"created": "Thu, 17 Mar 2016 18:12:33 GMT"
}
] | 2016-03-18T00:00:00 | [
[
"Mottaghi",
"Roozbeh",
""
],
[
"Rastegari",
"Mohammad",
""
],
[
"Gupta",
"Abhinav",
""
],
[
"Farhadi",
"Ali",
""
]
] | TITLE: "What happens if..." Learning to Predict the Effect of Forces in Images
ABSTRACT: What happens if one pushes a cup sitting on a table toward the edge of the
table? How about pushing a desk against a wall? In this paper, we study the
problem of understanding the movements of objects as a result of applying
external forces to them. For a given force vector applied to a specific
location in an image, our goal is to predict long-term sequential movements
caused by that force. Doing so entails reasoning about scene geometry, objects,
their attributes, and the physical rules that govern the movements of objects.
We design a deep neural network model that learns long-term sequential
dependencies of object movements while taking into account the geometry and
appearance of the scene by combining Convolutional and Recurrent Neural
Networks. Training our model requires a large-scale dataset of object movements
caused by external forces. To build a dataset of forces in scenes, we
reconstructed all images in SUN RGB-D dataset in a physics simulator to
estimate the physical movements of objects caused by external forces applied to
them. Our Forces in Scenes (ForScene) dataset contains 10,335 images in which a
variety of external forces are applied to different types of objects resulting
in more than 65,000 object movements represented in 3D. Our experimental
evaluations show that the challenging task of predicting long-term movements of
objects as their reaction to external forces is possible from a single image.
| new_dataset | 0.964254 |
1603.04871 | Zhicheng Yan | Zhicheng Yan, Hao Zhang, Yangqing Jia, Thomas Breuel, Yizhou Yu | Combining the Best of Convolutional Layers and Recurrent Layers: A
Hybrid Network for Semantic Segmentation | 14 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art results of semantic segmentation are established by Fully
Convolutional neural Networks (FCNs). FCNs rely on cascaded convolutional and
pooling layers to gradually enlarge the receptive fields of neurons, resulting
in an indirect way of modeling the distant contextual dependence. In this work,
we advocate the use of spatially recurrent layers (i.e. ReNet layers) which
directly capture global contexts and lead to improved feature representations.
We demonstrate the effectiveness of ReNet layers by building a Naive deep ReNet
(N-ReNet), which achieves competitive performance on Stanford Background
dataset. Furthermore, we integrate ReNet layers with FCNs, and develop a novel
Hybrid deep ReNet (H-ReNet). It enjoys a few remarkable properties, including
full-image receptive fields, end-to-end training, and efficient network
execution. On the PASCAL VOC 2012 benchmark, the H-ReNet improves the results
of state-of-the-art approaches Piecewise, CRFasRNN and DeepParsing by 3.6%,
2.3% and 0.2%, respectively, and achieves the highest IoUs for 13 out of the 20
object classes.
| [
{
"version": "v1",
"created": "Tue, 15 Mar 2016 20:10:48 GMT"
}
] | 2016-03-17T00:00:00 | [
[
"Yan",
"Zhicheng",
""
],
[
"Zhang",
"Hao",
""
],
[
"Jia",
"Yangqing",
""
],
[
"Breuel",
"Thomas",
""
],
[
"Yu",
"Yizhou",
""
]
] | TITLE: Combining the Best of Convolutional Layers and Recurrent Layers: A
Hybrid Network for Semantic Segmentation
ABSTRACT: State-of-the-art results of semantic segmentation are established by Fully
Convolutional neural Networks (FCNs). FCNs rely on cascaded convolutional and
pooling layers to gradually enlarge the receptive fields of neurons, resulting
in an indirect way of modeling the distant contextual dependence. In this work,
we advocate the use of spatially recurrent layers (i.e. ReNet layers) which
directly capture global contexts and lead to improved feature representations.
We demonstrate the effectiveness of ReNet layers by building a Naive deep ReNet
(N-ReNet), which achieves competitive performance on Stanford Background
dataset. Furthermore, we integrate ReNet layers with FCNs, and develop a novel
Hybrid deep ReNet (H-ReNet). It enjoys a few remarkable properties, including
full-image receptive fields, end-to-end training, and efficient network
execution. On the PASCAL VOC 2012 benchmark, the H-ReNet improves the results
of state-of-the-art approaches Piecewise, CRFasRNN and DeepParsing by 3.6%,
2.3% and 0.2%, respectively, and achieves the highest IoUs for 13 out of the 20
object classes.
| no_new_dataset | 0.952131 |
1603.04918 | Shahzad Bhatti | Shahzad Bhatti, Carolyn Beck, Angelia Nedic | Data Clustering and Graph Partitioning via Simulated Mixing | 28 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spectral clustering approaches have led to well-accepted algorithms for
finding accurate clusters in a given dataset. However, their application to
large-scale datasets has been hindered by computational complexity of
eigenvalue decompositions. Several algorithms have been proposed in the recent
past to accelerate spectral clustering; however, they compromise on the accuracy
of the spectral clustering to achieve faster speed. In this paper, we propose a
novel spectral clustering algorithm based on a mixing process on a graph.
Unlike the existing spectral clustering algorithms, our algorithm does not
require computing eigenvectors. Specifically, it finds the equivalent of a
linear combination of eigenvectors of the normalized similarity matrix weighted
with corresponding eigenvalues. This linear combination is then used to
partition the dataset into meaningful clusters. Simulations on real datasets
show that partitioning datasets based on such linear combinations of
eigenvectors achieves better accuracy than standard spectral clustering methods
as the number of clusters increases. Our algorithm can easily be implemented in
a distributed setting.
| [
{
"version": "v1",
"created": "Tue, 15 Mar 2016 23:06:19 GMT"
}
] | 2016-03-17T00:00:00 | [
[
"Bhatti",
"Shahzad",
""
],
[
"Beck",
"Carolyn",
""
],
[
"Nedic",
"Angelia",
""
]
] | TITLE: Data Clustering and Graph Partitioning via Simulated Mixing
ABSTRACT: Spectral clustering approaches have led to well-accepted algorithms for
finding accurate clusters in a given dataset. However, their application to
large-scale datasets has been hindered by computational complexity of
eigenvalue decompositions. Several algorithms have been proposed in the recent
past to accelerate spectral clustering; however, they compromise on the accuracy
of the spectral clustering to achieve faster speed. In this paper, we propose a
novel spectral clustering algorithm based on a mixing process on a graph.
Unlike the existing spectral clustering algorithms, our algorithm does not
require computing eigenvectors. Specifically, it finds the equivalent of a
linear combination of eigenvectors of the normalized similarity matrix weighted
with corresponding eigenvalues. This linear combination is then used to
partition the dataset into meaningful clusters. Simulations on real datasets
show that partitioning datasets based on such linear combinations of
eigenvectors achieves better accuracy than standard spectral clustering methods
as the number of clusters increases. Our algorithm can easily be implemented in
a distributed setting.
| no_new_dataset | 0.950549 |
1603.05015 | Ravi Garg | Ravi Garg, Anders Eriksson and Ian Reid | Non-linear Dimensionality Regularizer for Solving Inverse Problems | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consider an ill-posed inverse problem of estimating causal factors from
observations, one of which is known to lie near some (unknown)
low-dimensional, non-linear manifold expressed by a predefined Mercer-kernel.
Solving this problem requires simultaneous estimation of these factors and
learning the low-dimensional representation for them. In this work, we
introduce a novel non-linear dimensionality regularization technique for
solving such problems without pre-training. We re-formulate Kernel-PCA as an
energy minimization problem in which low dimensionality constraints are
introduced as regularization terms in the energy. To the best of our knowledge,
ours is the first attempt to create a dimensionality regularizer in the KPCA
framework. Our approach relies on robustly penalizing the rank of the recovered
factors directly in the implicit feature space to create their
low-dimensional approximations in closed form. Our approach performs robust
KPCA in the presence of missing data and noise. We demonstrate state-of-the-art
results on predicting missing entries in the standard oil flow dataset.
Additionally, we evaluate our method on the challenging problem of Non-Rigid
Structure from Motion, and our approach delivers promising results on the CMU mocap
dataset despite the presence of significant occlusions and noise.
| [
{
"version": "v1",
"created": "Wed, 16 Mar 2016 10:04:38 GMT"
}
] | 2016-03-17T00:00:00 | [
[
"Garg",
"Ravi",
""
],
[
"Eriksson",
"Anders",
""
],
[
"Reid",
"Ian",
""
]
] | TITLE: Non-linear Dimensionality Regularizer for Solving Inverse Problems
ABSTRACT: Consider an ill-posed inverse problem of estimating causal factors from
observations, one of which is known to lie near some (unknown)
low-dimensional, non-linear manifold expressed by a predefined Mercer kernel.
Solving this problem requires simultaneous estimation of these factors and
learning the low-dimensional representation for them. In this work, we
introduce a novel non-linear dimensionality regularization technique for
solving such problems without pre-training. We re-formulate Kernel-PCA as an
energy minimization problem in which low dimensionality constraints are
introduced as regularization terms in the energy. To the best of our knowledge,
ours is the first attempt to create a dimensionality regularizer in the KPCA
framework. Our approach relies on robustly penalizing the rank of the recovered
factors directly in the implicit feature space to create their
low-dimensional approximations in closed form. Our approach performs robust
KPCA in the presence of missing data and noise. We demonstrate state-of-the-art
results on predicting missing entries in the standard oil flow dataset.
Additionally, we evaluate our method on the challenging problem of Non-Rigid
Structure from Motion, and our approach delivers promising results on the CMU mocap
dataset despite the presence of significant occlusions and noise.
| no_new_dataset | 0.944536 |
1603.05152 | Kleanthis Malialis | Kleanthis Malialis and Jun Wang and Gary Brooks and George Frangou | Feature Selection as a Multiagent Coordination Problem | AAMAS-16 Workshop on Adaptive and Learning Agents (ALA-16) | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Datasets with hundreds to tens of thousands of features are the new norm. Feature
selection constitutes a central problem in machine learning, where the aim is
to derive a representative set of features from which to construct a
classification (or prediction) model for a specific task. Our experimental
study involves microarray gene expression datasets; these are high-dimensional
and noisy datasets that contain genetic data typically used for distinguishing
between benign and malignant tissues or classifying different types of cancer.
In this paper, we formulate feature selection as a multiagent coordination
problem and propose a novel feature selection method using multiagent
reinforcement learning. The central idea of the proposed approach is to
"assign" a reinforcement learning agent to each feature where each agent learns
to control a single feature; we refer to this approach as MARL. Applying this
to microarray datasets creates an enormous multiagent coordination problem
between thousands of learning agents. To address the scalability challenge we
apply a form of reward shaping called CLEAN rewards. We compare in total nine
feature selection methods, including state-of-the-art methods, and show that
the proposed method using CLEAN rewards can significantly scale up, thus
outperforming the rest of learning-based methods. We further show that a hybrid
variant of MARL achieves the best overall performance.
| [
{
"version": "v1",
"created": "Wed, 16 Mar 2016 15:49:37 GMT"
}
] | 2016-03-17T00:00:00 | [
[
"Malialis",
"Kleanthis",
""
],
[
"Wang",
"Jun",
""
],
[
"Brooks",
"Gary",
""
],
[
"Frangou",
"George",
""
]
] | TITLE: Feature Selection as a Multiagent Coordination Problem
ABSTRACT: Datasets with hundreds to tens of thousands of features are the new norm. Feature
selection constitutes a central problem in machine learning, where the aim is
to derive a representative set of features from which to construct a
classification (or prediction) model for a specific task. Our experimental
study involves microarray gene expression datasets; these are high-dimensional
and noisy datasets that contain genetic data typically used for distinguishing
between benign and malignant tissues or classifying different types of cancer.
In this paper, we formulate feature selection as a multiagent coordination
problem and propose a novel feature selection method using multiagent
reinforcement learning. The central idea of the proposed approach is to
"assign" a reinforcement learning agent to each feature where each agent learns
to control a single feature; we refer to this approach as MARL. Applying this
to microarray datasets creates an enormous multiagent coordination problem
between thousands of learning agents. To address the scalability challenge we
apply a form of reward shaping called CLEAN rewards. We compare in total nine
feature selection methods, including state-of-the-art methods, and show that
the proposed method using CLEAN rewards can significantly scale up, thus
outperforming the rest of learning-based methods. We further show that a hybrid
variant of MARL achieves the best overall performance.
| no_new_dataset | 0.94625 |
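The record above describes assigning one reinforcement learning agent to each feature, with all agents coordinated through a shared reward. The sketch below is a toy illustration of that coordination structure, not the paper's method: each feature gets a two-action (include/exclude) epsilon-greedy agent, and every agent is updated with the same global reward, taken here to be the cross-validated accuracy of the selected subset. CLEAN reward shaping and the hybrid MARL variant are omitted, and the classifier, hyperparameters, and function names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def marl_style_feature_selection(X, y, n_rounds=100, eps=0.1, seed=0):
    """Toy sketch: one include/exclude agent per feature, all agents updated
    with a shared global reward (CV accuracy of the selected feature subset)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    q = np.zeros((n_features, 2))        # estimated value of each action per agent
    counts = np.ones((n_features, 2))    # action counts (start at 1 to avoid /0)

    best_mask, best_reward = None, -np.inf
    for _ in range(n_rounds):
        # Every agent picks its own action epsilon-greedily.
        greedy = q.argmax(axis=1)
        explore = rng.random(n_features) < eps
        actions = np.where(explore, rng.integers(0, 2, n_features), greedy)
        mask = actions.astype(bool)
        if not mask.any():
            continue
        reward = cross_val_score(LogisticRegression(max_iter=1000),
                                 X[:, mask], y, cv=3).mean()
        # The same global reward drives every agent's incremental-mean update.
        idx = np.arange(n_features)
        q[idx, actions] += (reward - q[idx, actions]) / counts[idx, actions]
        counts[idx, actions] += 1
        if reward > best_reward:
            best_reward, best_mask = reward, mask
    return best_mask, best_reward
```

With thousands of features, as in microarray data, this naive shared-reward scheme suffers from exactly the credit-assignment problem the abstract alludes to, which is the motivation for reward shaping such as CLEAN rewards.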
1603.05191 | Martin Tak\'a\v{c} | Chenxin Ma and Martin Tak\'a\v{c} | Distributed Inexact Damped Newton Method: Data Partitioning and
Load-Balancing | null | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study an inexact damped Newton method implemented in a
distributed environment. We start with the original DiSCO algorithm
[Communication-Efficient Distributed Optimization of Self-Concordant Empirical
Loss, Yuchen Zhang and Lin Xiao, 2015]. We show that this algorithm may
not scale well and propose algorithmic modifications which lead to less
communication, better load-balancing, and more efficient computation. We
perform numerical experiments with a regularized empirical loss minimization
instance described by a 273GB dataset.
| [
{
"version": "v1",
"created": "Wed, 16 Mar 2016 17:50:33 GMT"
}
] | 2016-03-17T00:00:00 | [
[
"Ma",
"Chenxin",
""
],
[
"Takáč",
"Martin",
""
]
] | TITLE: Distributed Inexact Damped Newton Method: Data Partitioning and
Load-Balancing
ABSTRACT: In this paper we study an inexact damped Newton method implemented in a
distributed environment. We start with the original DiSCO algorithm
[Communication-Efficient Distributed Optimization of Self-Concordant Empirical
Loss, Yuchen Zhang and Lin Xiao, 2015]. We show that this algorithm may
not scale well and propose algorithmic modifications which lead to less
communication, better load-balancing, and more efficient computation. We
perform numerical experiments with a regularized empirical loss minimization
instance described by a 273GB dataset.
| no_new_dataset | 0.555808 |
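The abstract above concerns an inexact damped Newton method in the spirit of DiSCO for regularized empirical loss minimization. As a hedged single-machine illustration of the underlying step, not the distributed algorithm or its data-partitioning and load-balancing scheme, the sketch below performs one damped Newton step for L2-regularized logistic regression, solving the Newton system only approximately with a few conjugate-gradient iterations; all names, defaults, and tolerances are assumptions.

```python
import numpy as np

def logistic_grad_hessvec(w, X, y, lam):
    """Gradient and Hessian-vector product of the L2-regularized logistic
    loss; labels y are assumed to take values in {-1, +1}."""
    n = len(y)
    sigma = 1.0 / (1.0 + np.exp(y * (X @ w)))      # = 1 - P(correct label)
    grad = -(X.T @ (y * sigma)) / n + lam * w
    d = sigma * (1.0 - sigma)                      # per-example curvature
    hessvec = lambda v: (X.T @ (d * (X @ v))) / n + lam * v
    return grad, hessvec

def inexact_damped_newton_step(w, X, y, lam, cg_iters=20, cg_tol=1e-6):
    """One inexact damped Newton step: approximately solve H v = g with
    conjugate gradients (the Hessian is accessed only through mat-vec
    products), then damp the update by 1 / (1 + delta), where
    delta = sqrt(v^T H v) approximates the Newton decrement."""
    g, hessvec = logistic_grad_hessvec(w, X, y, lam)

    v = np.zeros_like(w)
    r = g.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(cg_iters):
        Hp = hessvec(p)
        alpha = rs / (p @ Hp)
        v += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < cg_tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new

    delta = np.sqrt(max(v @ hessvec(v), 0.0))      # approximate Newton decrement
    return w - v / (1.0 + delta)
```

In a distributed variant, each conjugate-gradient iteration requires a round of communication to aggregate Hessian-vector products computed on local data partitions, which is why the number of CG iterations and the balance of the partitions dominate the overall cost.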
1511.04960 | Mohammad Najafi | Mohammad Najafi, Sarah Taghavi Namin, Mathieu Salzmann, Lars Petersson | Sample and Filter: Nonparametric Scene Parsing via Efficient Filtering | Please refer to the CVPR-2016 version of this manuscript | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene parsing has attracted a lot of attention in computer vision. While
parametric models have proven effective for this task, they cannot easily
incorporate new training data. By contrast, nonparametric approaches, which
bypass any learning phase and directly transfer the labels from the training
data to the query images, can readily exploit new labeled samples as they
become available. Unfortunately, because of the computational cost of their
label transfer procedures, state-of-the-art nonparametric methods typically
filter out most training images to only keep a few relevant ones to label the
query. As such, these methods throw away many images that still contain
valuable information and generally obtain an unbalanced set of labeled samples.
In this paper, we introduce a nonparametric approach to scene parsing that
follows a sample-and-filter strategy. More specifically, we propose to sample
labeled superpixels according to an image similarity score, which allows us to
obtain a balanced set of samples. We then formulate label transfer as an
efficient filtering procedure, which lets us exploit more labeled samples than
existing techniques. Our experiments evidence the benefits of our approach over
state-of-the-art nonparametric methods on two benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 16 Nov 2015 14:07:47 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2016 01:29:03 GMT"
}
] | 2016-03-16T00:00:00 | [
[
"Najafi",
"Mohammad",
""
],
[
"Namin",
"Sarah Taghavi",
""
],
[
"Salzmann",
"Mathieu",
""
],
[
"Petersson",
"Lars",
""
]
] | TITLE: Sample and Filter: Nonparametric Scene Parsing via Efficient Filtering
ABSTRACT: Scene parsing has attracted a lot of attention in computer vision. While
parametric models have proven effective for this task, they cannot easily
incorporate new training data. By contrast, nonparametric approaches, which
bypass any learning phase and directly transfer the labels from the training
data to the query images, can readily exploit new labeled samples as they
become available. Unfortunately, because of the computational cost of their
label transfer procedures, state-of-the-art nonparametric methods typically
filter out most training images to only keep a few relevant ones to label the
query. As such, these methods throw away many images that still contain
valuable information and generally obtain an unbalanced set of labeled samples.
In this paper, we introduce a nonparametric approach to scene parsing that
follows a sample-and-filter strategy. More specifically, we propose to sample
labeled superpixels according to an image similarity score, which allows us to
obtain a balanced set of samples. We then formulate label transfer as an
efficient filtering procedure, which lets us exploit more labeled samples than
existing techniques. Our experiments evidence the benefits of our approach over
state-of-the-art nonparametric methods on two benchmark datasets.
| no_new_dataset | 0.948298 |
1601.04155 | Zhangyang Wang | Zhangyang Wang, Shiyu Chang, Florin Dolcos, Diane Beck, Ding Liu, and
Thomas S. Huang | Brain-Inspired Deep Networks for Image Aesthetics Assessment | null | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image aesthetics assessment has been challenging due to its subjective
nature. Inspired by scientific advances in human visual perception and
neuroaesthetics, we design Brain-Inspired Deep Networks (BDN) for this task.
BDN first learns attributes through the parallel supervised pathways, on a
variety of selected feature dimensions. A high-level synthesis network is
trained to associate and transform those attributes into the overall aesthetics
rating. We then extend BDN to predicting the distribution of human ratings,
since aesthetics ratings are often subjective. Another highlight is our
first-of-its-kind study of label-preserving transformations in the context of
aesthetics assessment, which leads to an effective data augmentation approach.
Experimental results on the AVA dataset show that our biologically inspired and
task-specific BDN model gains significant performance improvement compared
to other state-of-the-art models with the same or higher parameter capacity.
| [
{
"version": "v1",
"created": "Sat, 16 Jan 2016 10:59:40 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Mar 2016 03:46:27 GMT"
}
] | 2016-03-16T00:00:00 | [
[
"Wang",
"Zhangyang",
""
],
[
"Chang",
"Shiyu",
""
],
[
"Dolcos",
"Florin",
""
],
[
"Beck",
"Diane",
""
],
[
"Liu",
"Ding",
""
],
[
"Huang",
"Thomas S.",
""
]
] | TITLE: Brain-Inspired Deep Networks for Image Aesthetics Assessment
ABSTRACT: Image aesthetics assessment has been challenging due to its subjective
nature. Inspired by scientific advances in human visual perception and
neuroaesthetics, we design Brain-Inspired Deep Networks (BDN) for this task.
BDN first learns attributes through the parallel supervised pathways, on a
variety of selected feature dimensions. A high-level synthesis network is
trained to associate and transform those attributes into the overall aesthetics
rating. We then extend BDN to predicting the distribution of human ratings,
since aesthetics ratings are often subjective. Another highlight is our
first-of-its-kind study of label-preserving transformations in the context of
aesthetics assessment, which leads to an effective data augmentation approach.
Experimental results on the AVA dataset show that our biologically inspired and
task-specific BDN model gains significant performance improvement compared
to other state-of-the-art models with the same or higher parameter capacity.
| no_new_dataset | 0.947962 |