id (stringlengths 9–16) | submitter (stringlengths 3–64, ⌀) | authors (stringlengths 5–6.63k) | title (stringlengths 7–245) | comments (stringlengths 1–482, ⌀) | journal-ref (stringlengths 4–382, ⌀) | doi (stringlengths 9–151, ⌀) | report-no (stringclasses, 984 values) | categories (stringlengths 5–108) | license (stringclasses, 9 values) | abstract (stringlengths 83–3.41k) | versions (listlengths 1–20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequencelengths 1–427) | prompt (stringlengths 166–3.49k) | label (stringclasses, 2 values) | prob (float64, 0.5–0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1601.03896 | Frank Keller | Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut
Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, Barbara Plank | Automatic Description Generation from Images: A Survey of Models,
Datasets, and Evaluation Measures | Journal of Artificial Intelligence Research 55, 409-442, 2016 | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic description generation from natural images is a challenging problem
that has recently received a large amount of interest from the computer vision
and natural language processing communities. In this survey, we classify the
existing approaches based on how they conceptualize this problem, viz., models
that cast description as either generation problem or as a retrieval problem
over a visual or multimodal representational space. We provide a detailed
review of existing models, highlighting their advantages and disadvantages.
Moreover, we give an overview of the benchmark image datasets and the
evaluation measures that have been developed to assess the quality of
machine-generated image descriptions. Finally we extrapolate future directions
in the area of automatic image description generation.
| [
{
"version": "v1",
"created": "Fri, 15 Jan 2016 12:50:32 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Apr 2017 09:47:20 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Bernardi",
"Raffaella",
""
],
[
"Cakici",
"Ruket",
""
],
[
"Elliott",
"Desmond",
""
],
[
"Erdem",
"Aykut",
""
],
[
"Erdem",
"Erkut",
""
],
[
"Ikizler-Cinbis",
"Nazli",
""
],
[
"Keller",
"Frank",
""
],
[
"Muscat",
"Adrian",
""
],
[
"Plank",
"Barbara",
""
]
] | TITLE: Automatic Description Generation from Images: A Survey of Models,
Datasets, and Evaluation Measures
ABSTRACT: Automatic description generation from natural images is a challenging problem
that has recently received a large amount of interest from the computer vision
and natural language processing communities. In this survey, we classify the
existing approaches based on how they conceptualize this problem, viz., models
that cast description as either generation problem or as a retrieval problem
over a visual or multimodal representational space. We provide a detailed
review of existing models, highlighting their advantages and disadvantages.
Moreover, we give an overview of the benchmark image datasets and the
evaluation measures that have been developed to assess the quality of
machine-generated image descriptions. Finally we extrapolate future directions
in the area of automatic image description generation.
| no_new_dataset | 0.954942 |
1603.02056 | Wenqiang Liu | Wenqiang Liu, Jun Liu, Jian Zhang, Haimeng Duan, Bifan Wei | TruthDiscover: Resolving Object Conflicts on Massive Linked Data | This paper had been accepted by Proceedings of the 26th International
Conference on World Wide Web Companion. International World Wide Web
Conferences Steering Committee, 2017, WWW2017 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considerable effort has been made to increase the scale of Linked Data.
However, because of the openness of the Semantic Web and the ease of extracting
Linked Data from semi-structured sources (e.g., Wikipedia) and unstructured
sources, many Linked Data sources often provide conflicting objects for a
certain predicate of a real-world entity. Existing methods cannot be trivially
extended to resolve conflicts in Linked Data because Linked Data has a
scale-free property. In this demonstration, we present a novel system called
TruthDiscover, to identify the truth in Linked Data with a scale-free property.
First, TruthDiscover leverages the topological properties of the Source Belief
Graph to estimate the priori beliefs of sources, which are utilized to smooth
the trustworthiness of sources. Second, the Hidden Markov Random Field is
utilized to model interdependencies among objects for estimating the trust
values of objects accurately. TruthDiscover can visualize the process of
resolving conflicts in Linked Data. Experiments results on four datasets show
that TruthDiscover exhibits satisfactory accuracy when confronted with data
having a scale-free property.
| [
{
"version": "v1",
"created": "Mon, 7 Mar 2016 13:34:36 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2017 22:46:20 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Liu",
"Wenqiang",
""
],
[
"Liu",
"Jun",
""
],
[
"Zhang",
"Jian",
""
],
[
"Duan",
"Haimeng",
""
],
[
"Wei",
"Bifan",
""
]
] | TITLE: TruthDiscover: Resolving Object Conflicts on Massive Linked Data
ABSTRACT: Considerable effort has been made to increase the scale of Linked Data.
However, because of the openness of the Semantic Web and the ease of extracting
Linked Data from semi-structured sources (e.g., Wikipedia) and unstructured
sources, many Linked Data sources often provide conflicting objects for a
certain predicate of a real-world entity. Existing methods cannot be trivially
extended to resolve conflicts in Linked Data because Linked Data has a
scale-free property. In this demonstration, we present a novel system called
TruthDiscover, to identify the truth in Linked Data with a scale-free property.
First, TruthDiscover leverages the topological properties of the Source Belief
Graph to estimate the priori beliefs of sources, which are utilized to smooth
the trustworthiness of sources. Second, the Hidden Markov Random Field is
utilized to model interdependencies among objects for estimating the trust
values of objects accurately. TruthDiscover can visualize the process of
resolving conflicts in Linked Data. Experiments results on four datasets show
that TruthDiscover exhibits satisfactory accuracy when confronted with data
having a scale-free property.
| no_new_dataset | 0.951051 |
1604.04562 | Tsung-Hsien Wen | Tsung-Hsien Wen, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M.
Rojas-Barahona, Pei-Hao Su, Stefan Ultes, Steve Young | A Network-based End-to-End Trainable Task-oriented Dialogue System | published at EACL 2017 | null | null | null | cs.CL cs.AI cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Teaching machines to accomplish tasks by conversing naturally with humans is
challenging. Currently, developing task-oriented dialogue systems requires
creating multiple components and typically this involves either a large amount
of handcrafting, or acquiring costly labelled datasets to solve a statistical
learning problem for each component. In this work we introduce a neural
network-based text-in, text-out end-to-end trainable goal-oriented dialogue
system along with a new way of collecting dialogue data based on a novel
pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue
systems easily and without making too many assumptions about the task at hand.
The results show that the model can converse with human subjects naturally
whilst helping them to accomplish tasks in a restaurant search domain.
| [
{
"version": "v1",
"created": "Fri, 15 Apr 2016 16:40:49 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2016 14:03:58 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Apr 2017 10:55:12 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Wen",
"Tsung-Hsien",
""
],
[
"Vandyke",
"David",
""
],
[
"Mrksic",
"Nikola",
""
],
[
"Gasic",
"Milica",
""
],
[
"Rojas-Barahona",
"Lina M.",
""
],
[
"Su",
"Pei-Hao",
""
],
[
"Ultes",
"Stefan",
""
],
[
"Young",
"Steve",
""
]
] | TITLE: A Network-based End-to-End Trainable Task-oriented Dialogue System
ABSTRACT: Teaching machines to accomplish tasks by conversing naturally with humans is
challenging. Currently, developing task-oriented dialogue systems requires
creating multiple components and typically this involves either a large amount
of handcrafting, or acquiring costly labelled datasets to solve a statistical
learning problem for each component. In this work we introduce a neural
network-based text-in, text-out end-to-end trainable goal-oriented dialogue
system along with a new way of collecting dialogue data based on a novel
pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue
systems easily and without making too many assumptions about the task at hand.
The results show that the model can converse with human subjects naturally
whilst helping them to accomplish tasks in a restaurant search domain.
| no_new_dataset | 0.954009 |
1604.08407 | Wenqiang Liu | Wenqiang Liu, Jun Liu, Haimeng Duan, Xie He, Bifan Wei | Exploiting Source-Object Network to Resolve Object Conflicts in Linked
Data | This paper had been accepted by ESWC2017 Research Tracks | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Considerable effort has been made to increase the scale of Linked Data.
However, an inevitable problem when dealing with data integration from multiple
sources is that multiple different sources often provide conflicting objects
for a certain predicate of the same real-world entity, so-called object
conflicts problem. Currently, the object conflicts problem has not received
sufficient attention in the Linked Data community. In this paper, we first
formalize the object conflicts resolution problem as computing the joint
distribution of variables on a heterogeneous information network called the
Source-Object Network, which successfully captures the all correlations from
objects and Linked Data sources. Then, we introduce a novel approach based on
network effects called ObResolution(Object Resolution), to identify a true
object from multiple conflicting objects. ObResolution adopts a pairwise Markov
Random Field (pMRF) to model all evidences under a unified framework. Extensive
experimental results on six real-world datasets show that our method exhibits
higher accuracy than existing approaches and it is robust and consistent in
various domains. \keywords{Linked Data, Object Conflicts, Linked Data Quality,
Truth Discovery
| [
{
"version": "v1",
"created": "Thu, 28 Apr 2016 13:22:54 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2017 21:32:48 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Apr 2017 22:38:10 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Liu",
"Wenqiang",
""
],
[
"Liu",
"Jun",
""
],
[
"Duan",
"Haimeng",
""
],
[
"He",
"Xie",
""
],
[
"Wei",
"Bifan",
""
]
] | TITLE: Exploiting Source-Object Network to Resolve Object Conflicts in Linked
Data
ABSTRACT: Considerable effort has been made to increase the scale of Linked Data.
However, an inevitable problem when dealing with data integration from multiple
sources is that multiple different sources often provide conflicting objects
for a certain predicate of the same real-world entity, so-called object
conflicts problem. Currently, the object conflicts problem has not received
sufficient attention in the Linked Data community. In this paper, we first
formalize the object conflicts resolution problem as computing the joint
distribution of variables on a heterogeneous information network called the
Source-Object Network, which successfully captures the all correlations from
objects and Linked Data sources. Then, we introduce a novel approach based on
network effects called ObResolution(Object Resolution), to identify a true
object from multiple conflicting objects. ObResolution adopts a pairwise Markov
Random Field (pMRF) to model all evidences under a unified framework. Extensive
experimental results on six real-world datasets show that our method exhibits
higher accuracy than existing approaches and it is robust and consistent in
various domains. \keywords{Linked Data, Object Conflicts, Linked Data Quality,
Truth Discovery
| no_new_dataset | 0.951639 |
1606.01549 | Bhuwan Dhingra | Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W. Cohen, Ruslan
Salakhutdinov | Gated-Attention Readers for Text Comprehension | Accepted at ACL 2017 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we study the problem of answering cloze-style questions over
documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop
architecture with a novel attention mechanism, which is based on multiplicative
interactions between the query embedding and the intermediate states of a
recurrent neural network document reader. This enables the reader to build
query-specific representations of tokens in the document for accurate answer
selection. The GA Reader obtains state-of-the-art results on three benchmarks
for this task--the CNN \& Daily Mail news stories and the Who Did What dataset.
The effectiveness of multiplicative interaction is demonstrated by an ablation
study, and by comparing to alternative compositional operators for implementing
the gated-attention. The code is available at
https://github.com/bdhingra/ga-reader.
| [
{
"version": "v1",
"created": "Sun, 5 Jun 2016 19:30:39 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2016 19:27:42 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Apr 2017 18:50:05 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Dhingra",
"Bhuwan",
""
],
[
"Liu",
"Hanxiao",
""
],
[
"Yang",
"Zhilin",
""
],
[
"Cohen",
"William W.",
""
],
[
"Salakhutdinov",
"Ruslan",
""
]
] | TITLE: Gated-Attention Readers for Text Comprehension
ABSTRACT: In this paper we study the problem of answering cloze-style questions over
documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop
architecture with a novel attention mechanism, which is based on multiplicative
interactions between the query embedding and the intermediate states of a
recurrent neural network document reader. This enables the reader to build
query-specific representations of tokens in the document for accurate answer
selection. The GA Reader obtains state-of-the-art results on three benchmarks
for this task--the CNN \& Daily Mail news stories and the Who Did What dataset.
The effectiveness of multiplicative interaction is demonstrated by an ablation
study, and by comparing to alternative compositional operators for implementing
the gated-attention. The code is available at
https://github.com/bdhingra/ga-reader.
| no_new_dataset | 0.949295 |
1607.07249 | Joern Hees | J\"orn Hees, Rouven Bauer, Joachim Folz, Damian Borth and Andreas
Dengel | An Evolutionary Algorithm to Learn SPARQL Queries for
Source-Target-Pairs: Finding Patterns for Human Associations in DBpedia | 15 pages, 2 figures, as of 2016-09-13
6a19d5d7020770dc0711081ce2c1e52f71bf4b86 | null | 10.1007/978-3-319-49004-5_22 | null | cs.AI cs.DB cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient usage of the knowledge provided by the Linked Data community is
often hindered by the need for domain experts to formulate the right SPARQL
queries to answer questions. For new questions they have to decide which
datasets are suitable and in which terminology and modelling style to phrase
the SPARQL query.
In this work we present an evolutionary algorithm to help with this
challenging task. Given a training list of source-target node-pair examples our
algorithm can learn patterns (SPARQL queries) from a SPARQL endpoint. The
learned patterns can be visualised to form the basis for further investigation,
or they can be used to predict target nodes for new source nodes.
Amongst others, we apply our algorithm to a dataset of several hundred human
associations (such as "circle - square") to find patterns for them in DBpedia.
We show the scalability of the algorithm by running it against a SPARQL
endpoint loaded with > 7.9 billion triples. Further, we use the resulting
SPARQL queries to mimic human associations with a Mean Average Precision (MAP)
of 39.9 % and a Recall@10 of 63.9 %.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 12:47:38 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Jul 2016 12:13:14 GMT"
},
{
"version": "v3",
"created": "Tue, 13 Sep 2016 10:27:06 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Hees",
"Jörn",
""
],
[
"Bauer",
"Rouven",
""
],
[
"Folz",
"Joachim",
""
],
[
"Borth",
"Damian",
""
],
[
"Dengel",
"Andreas",
""
]
] | TITLE: An Evolutionary Algorithm to Learn SPARQL Queries for
Source-Target-Pairs: Finding Patterns for Human Associations in DBpedia
ABSTRACT: Efficient usage of the knowledge provided by the Linked Data community is
often hindered by the need for domain experts to formulate the right SPARQL
queries to answer questions. For new questions they have to decide which
datasets are suitable and in which terminology and modelling style to phrase
the SPARQL query.
In this work we present an evolutionary algorithm to help with this
challenging task. Given a training list of source-target node-pair examples our
algorithm can learn patterns (SPARQL queries) from a SPARQL endpoint. The
learned patterns can be visualised to form the basis for further investigation,
or they can be used to predict target nodes for new source nodes.
Amongst others, we apply our algorithm to a dataset of several hundred human
associations (such as "circle - square") to find patterns for them in DBpedia.
We show the scalability of the algorithm by running it against a SPARQL
endpoint loaded with > 7.9 billion triples. Further, we use the resulting
SPARQL queries to mimic human associations with a Mean Average Precision (MAP)
of 39.9 % and a Recall@10 of 63.9 %.
| no_new_dataset | 0.769254 |
1609.02200 | Jason Rolfe | Jason Tyler Rolfe | Discrete Variational Autoencoders | Published as a conference paper at ICLR 2017 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2016 21:41:32 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Apr 2017 01:23:06 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Rolfe",
"Jason Tyler",
""
]
] | TITLE: Discrete Variational Autoencoders
ABSTRACT: Probabilistic models with discrete latent variables naturally capture
datasets composed of discrete classes. However, they are difficult to train
efficiently, since backpropagation through discrete variables is generally not
possible. We present a novel method to train a class of probabilistic models
with discrete latent variables using the variational autoencoder framework,
including backpropagation through the discrete latent variables. The associated
class of probabilistic models comprises an undirected discrete component and a
directed hierarchical continuous component. The discrete component captures the
distribution over the disconnected smooth manifolds induced by the continuous
component. As a result, this class of models efficiently learns both the class
of objects in an image, and their specific realization in pixels, from
unsupervised data, and outperforms state-of-the-art methods on the
permutation-invariant MNIST, Omniglot, and Caltech-101 Silhouettes datasets.
| no_new_dataset | 0.94868 |
1611.00020 | Chen Liang | Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao | Neural Symbolic Machines: Learning Semantic Parsers on Freebase with
Weak Supervision | ACL 2017 camera ready version | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Harnessing the statistical power of neural networks to perform language
understanding and symbolic reasoning is difficult, when it requires executing
efficient discrete operations against a large knowledge-base. In this work, we
introduce a Neural Symbolic Machine, which contains (a) a neural "programmer",
i.e., a sequence-to-sequence model that maps language utterances to programs
and utilizes a key-variable memory to handle compositionality (b) a symbolic
"computer", i.e., a Lisp interpreter that performs program execution, and helps
find good programs by pruning the search space. We apply REINFORCE to directly
optimize the task reward of this structured prediction problem. To train with
weak supervision and improve the stability of REINFORCE, we augment it with an
iterative maximum-likelihood training process. NSM outperforms the
state-of-the-art on the WebQuestionsSP dataset when trained from
question-answer pairs only, without requiring any feature engineering or
domain-specific knowledge.
| [
{
"version": "v1",
"created": "Mon, 31 Oct 2016 20:07:23 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Nov 2016 05:25:19 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Nov 2016 16:24:24 GMT"
},
{
"version": "v4",
"created": "Sun, 23 Apr 2017 07:16:13 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Liang",
"Chen",
""
],
[
"Berant",
"Jonathan",
""
],
[
"Le",
"Quoc",
""
],
[
"Forbus",
"Kenneth D.",
""
],
[
"Lao",
"Ni",
""
]
] | TITLE: Neural Symbolic Machines: Learning Semantic Parsers on Freebase with
Weak Supervision
ABSTRACT: Harnessing the statistical power of neural networks to perform language
understanding and symbolic reasoning is difficult, when it requires executing
efficient discrete operations against a large knowledge-base. In this work, we
introduce a Neural Symbolic Machine, which contains (a) a neural "programmer",
i.e., a sequence-to-sequence model that maps language utterances to programs
and utilizes a key-variable memory to handle compositionality (b) a symbolic
"computer", i.e., a Lisp interpreter that performs program execution, and helps
find good programs by pruning the search space. We apply REINFORCE to directly
optimize the task reward of this structured prediction problem. To train with
weak supervision and improve the stability of REINFORCE, we augment it with an
iterative maximum-likelihood training process. NSM outperforms the
state-of-the-art on the WebQuestionsSP dataset when trained from
question-answer pairs only, without requiring any feature engineering or
domain-specific knowledge.
| no_new_dataset | 0.945197 |
1611.02261 | Rasool Fakoor | Rasool Fakoor, Abdel-rahman Mohamed, Margaret Mitchell, Sing Bing
Kang, Pushmeet Kohli | Memory-augmented Attention Modelling for Videos | Revised version, minor changes, add the link for the source codes | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method to improve video description generation by modeling
higher-order interactions between video frames and described concepts. By
storing past visual attention in the video associated to previously generated
words, the system is able to decide what to look at and describe in light of
what it has already looked at and described. This enables not only more
effective local attention, but tractable consideration of the video sequence
while generating each word. Evaluation on the challenging and popular MSVD and
Charades datasets demonstrates that the proposed architecture outperforms
previous video description approaches without requiring external temporal video
features.
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2016 20:50:08 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Nov 2016 22:39:13 GMT"
},
{
"version": "v3",
"created": "Mon, 13 Feb 2017 02:22:51 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Apr 2017 07:26:01 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Fakoor",
"Rasool",
""
],
[
"Mohamed",
"Abdel-rahman",
""
],
[
"Mitchell",
"Margaret",
""
],
[
"Kang",
"Sing Bing",
""
],
[
"Kohli",
"Pushmeet",
""
]
] | TITLE: Memory-augmented Attention Modelling for Videos
ABSTRACT: We present a method to improve video description generation by modeling
higher-order interactions between video frames and described concepts. By
storing past visual attention in the video associated to previously generated
words, the system is able to decide what to look at and describe in light of
what it has already looked at and described. This enables not only more
effective local attention, but tractable consideration of the video sequence
while generating each word. Evaluation on the challenging and popular MSVD and
Charades datasets demonstrates that the proposed architecture outperforms
previous video description approaches without requiring external temporal video
features.
| no_new_dataset | 0.947721 |
1612.01428 | Seyed Mohammad Taheri | Seyed Mohammad Taheri, Hamidreza Mahyar, Mohammad Firouzi, Elahe
Ghalebi K., Radu Grosu, Ali Movaghar | Extracting Implicit Social Relation for Social Recommendation Techniques
in User Rating Prediction | null | null | 10.1145/3041021.3051153 | null | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommendation plays an increasingly important role in our daily lives.
Recommender systems automatically suggest items to users that might be
interesting for them. Recent studies illustrate that incorporating social trust
in Matrix Factorization methods demonstrably improves accuracy of rating
prediction. Such approaches mainly use the trust scores explicitly expressed by
users. However, it is often challenging to have users provide explicit trust
scores of each other. There exist quite a few works, which propose Trust
Metrics to compute and predict trust scores between users based on their
interactions. In this paper, first we present how social relation can be
extracted from users' ratings to items by describing Hellinger distance between
users in recommender systems. Then, we propose to incorporate the predicted
trust scores into social matrix factorization models. By analyzing social
relation extraction from three well-known real-world datasets, which both:
trust and recommendation data available, we conclude that using the implicit
social relation in social recommendation techniques has almost the same
performance compared to the actual trust scores explicitly expressed by users.
Hence, we build our method, called Hell-TrustSVD, on top of the
state-of-the-art social recommendation technique to incorporate both the
extracted implicit social relations and ratings given by users on the
prediction of items for an active user. To the best of our knowledge, this is
the first work to extend TrustSVD with extracted social trust information. The
experimental results support the idea of employing implicit trust into matrix
factorization whenever explicit trust is not available, can perform much better
than the state-of-the-art approaches in user rating prediction.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 16:47:02 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2017 15:23:15 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Taheri",
"Seyed Mohammad",
""
],
[
"Mahyar",
"Hamidreza",
""
],
[
"Firouzi",
"Mohammad",
""
],
[
"K.",
"Elahe Ghalebi",
""
],
[
"Grosu",
"Radu",
""
],
[
"Movaghar",
"Ali",
""
]
] | TITLE: Extracting Implicit Social Relation for Social Recommendation Techniques
in User Rating Prediction
ABSTRACT: Recommendation plays an increasingly important role in our daily lives.
Recommender systems automatically suggest items to users that might be
interesting for them. Recent studies illustrate that incorporating social trust
in Matrix Factorization methods demonstrably improves accuracy of rating
prediction. Such approaches mainly use the trust scores explicitly expressed by
users. However, it is often challenging to have users provide explicit trust
scores of each other. There exist quite a few works, which propose Trust
Metrics to compute and predict trust scores between users based on their
interactions. In this paper, first we present how social relation can be
extracted from users' ratings to items by describing Hellinger distance between
users in recommender systems. Then, we propose to incorporate the predicted
trust scores into social matrix factorization models. By analyzing social
relation extraction from three well-known real-world datasets, which both:
trust and recommendation data available, we conclude that using the implicit
social relation in social recommendation techniques has almost the same
performance compared to the actual trust scores explicitly expressed by users.
Hence, we build our method, called Hell-TrustSVD, on top of the
state-of-the-art social recommendation technique to incorporate both the
extracted implicit social relations and ratings given by users on the
prediction of items for an active user. To the best of our knowledge, this is
the first work to extend TrustSVD with extracted social trust information. The
experimental results support the idea of employing implicit trust into matrix
factorization whenever explicit trust is not available, can perform much better
than the state-of-the-art approaches in user rating prediction.
| no_new_dataset | 0.945096 |
1701.08207 | Luis Fern\'andez-Menchero | K. Wang, L. Fern\'andez-Menchero, O. Zatsarinny, and K. Bartschat | Benchmark calculations for electron-impact excitation of Mg$^{4+}$ | 10 pages, 7 figures, 1 table. Online material | Phys. Rev. A 95, 042709 (2017) | 10.1103/PhysRevA.95.042709 | null | physics.atom-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are major discrepancies between recent B-spline R-matrix (BSR) and
Dirac Atomic R-matrix Code (DARC) calculations regarding electron-impact
excitation rates for transitions in Mg$^{4+}$, with claims that the DARC
calculations are much more accurate. To identify possible reasons for these
discrepancies and to estimate the accuracy of the various results, we carried
out independent BSR calculations with the same 86 target states as in the
previous calculations, but with a different and more accurate representation of
the target structure. We find close agreement with the previous BSR results for
the majority of transitions, thereby confirming their accuracy. At the same
time the differences with the DARC results are much more pronounced. The
discrepancies in the final results for the collision strengths are mainly due
to differences in the structure description, specifically the inclusion of
correlation effects, and due to the likely occurrence of pseudoresonances. To
further check the convergence of the predicted collision rates, we carried out
even more extensive calculations involving 316 states of Mg$^{4+}$. Extending
the close-coupling expansion results in major corrections for transitions
involving the higher-lying states and allows us to assess the likely
uncertainties in the existing datasets.
| [
{
"version": "v1",
"created": "Fri, 27 Jan 2017 22:07:39 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2017 14:57:54 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Wang",
"K.",
""
],
[
"Fernández-Menchero",
"L.",
""
],
[
"Zatsarinny",
"O.",
""
],
[
"Bartschat",
"K.",
""
]
] | TITLE: Benchmark calculations for electron-impact excitation of Mg$^{4+}$
ABSTRACT: There are major discrepancies between recent B-spline R-matrix (BSR) and
Dirac Atomic R-matrix Code (DARC) calculations regarding electron-impact
excitation rates for transitions in Mg$^{4+}$, with claims that the DARC
calculations are much more accurate. To identify possible reasons for these
discrepancies and to estimate the accuracy of the various results, we carried
out independent BSR calculations with the same 86 target states as in the
previous calculations, but with a different and more accurate representation of
the target structure. We find close agreement with the previous BSR results for
the majority of transitions, thereby confirming their accuracy. At the same
time the differences with the DARC results are much more pronounced. The
discrepancies in the final results for the collision strengths are mainly due
to differences in the structure description, specifically the inclusion of
correlation effects, and due to the likely occurrence of pseudoresonances. To
further check the convergence of the predicted collision rates, we carried out
even more extensive calculations involving 316 states of Mg$^{4+}$. Extending
the close-coupling expansion results in major corrections for transitions
involving the higher-lying states and allows us to assess the likely
uncertainties in the existing datasets.
| no_new_dataset | 0.950915 |
1702.03274 | Jason Williams | Jason D. Williams, Kavosh Asadi, Geoffrey Zweig | Hybrid Code Networks: practical and efficient end-to-end dialog control
with supervised and reinforcement learning | Accepted as a long paper for the 55th Annual Meeting of the
Association for Computational Linguistics (ACL 2017) | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | End-to-end learning of recurrent neural networks (RNNs) is an attractive
solution for dialog systems; however, current techniques are data-intensive and
require thousands of dialogs to learn simple behaviors. We introduce Hybrid
Code Networks (HCNs), which combine an RNN with domain-specific knowledge
encoded as software and system action templates. Compared to existing
end-to-end approaches, HCNs considerably reduce the amount of training data
required, while retaining the key benefit of inferring a latent representation
of dialog state. In addition, HCNs can be optimized with supervised learning,
reinforcement learning, or a mixture of both. HCNs attain state-of-the-art
performance on the bAbI dialog dataset, and outperform two commercially
deployed customer-facing dialog systems.
| [
{
"version": "v1",
"created": "Fri, 10 Feb 2017 18:24:13 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Apr 2017 14:39:27 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Williams",
"Jason D.",
""
],
[
"Asadi",
"Kavosh",
""
],
[
"Zweig",
"Geoffrey",
""
]
] | TITLE: Hybrid Code Networks: practical and efficient end-to-end dialog control
with supervised and reinforcement learning
ABSTRACT: End-to-end learning of recurrent neural networks (RNNs) is an attractive
solution for dialog systems; however, current techniques are data-intensive and
require thousands of dialogs to learn simple behaviors. We introduce Hybrid
Code Networks (HCNs), which combine an RNN with domain-specific knowledge
encoded as software and system action templates. Compared to existing
end-to-end approaches, HCNs considerably reduce the amount of training data
required, while retaining the key benefit of inferring a latent representation
of dialog state. In addition, HCNs can be optimized with supervised learning,
reinforcement learning, or a mixture of both. HCNs attain state-of-the-art
performance on the bAbI dialog dataset, and outperform two commercially
deployed customer-facing dialog systems.
| no_new_dataset | 0.950134 |
1704.06752 | Siyuan Qiao | Siyuan Qiao, Wei Shen, Weichao Qiu, Chenxi Liu, Alan Yuille | ScaleNet: Guiding Object Proposal Generation in Supermarkets and Beyond | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by product detection in supermarkets, this paper studies the
problem of object proposal generation in supermarket images and other natural
images. We argue that estimation of object scales in images is helpful for
generating object proposals, especially for supermarket images where object
scales are usually within a small range. Therefore, we propose to estimate
object scales of images before generating object proposals. The proposed method
for predicting object scales is called ScaleNet. To validate the effectiveness
of ScaleNet, we build three supermarket datasets, two of which are real-world
datasets used for testing and the other one is a synthetic dataset used for
training. In short, we extend the previous state-of-the-art object proposal
methods by adding a scale prediction phase. The resulted method outperforms the
previous state-of-the-art on the supermarket datasets by a large margin. We
also show that the approach works for object proposal on other natural images
and it outperforms the previous state-of-the-art object proposal methods on the
MS COCO dataset. The supermarket datasets, the virtual supermarkets, and the
tools for creating more synthetic datasets will be made public.
| [
{
"version": "v1",
"created": "Sat, 22 Apr 2017 06:05:31 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Qiao",
"Siyuan",
""
],
[
"Shen",
"Wei",
""
],
[
"Qiu",
"Weichao",
""
],
[
"Liu",
"Chenxi",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: ScaleNet: Guiding Object Proposal Generation in Supermarkets and Beyond
ABSTRACT: Motivated by product detection in supermarkets, this paper studies the
problem of object proposal generation in supermarket images and other natural
images. We argue that estimation of object scales in images is helpful for
generating object proposals, especially for supermarket images where object
scales are usually within a small range. Therefore, we propose to estimate
object scales of images before generating object proposals. The proposed method
for predicting object scales is called ScaleNet. To validate the effectiveness
of ScaleNet, we build three supermarket datasets, two of which are real-world
datasets used for testing and the other one is a synthetic dataset used for
training. In short, we extend the previous state-of-the-art object proposal
methods by adding a scale prediction phase. The resulted method outperforms the
previous state-of-the-art on the supermarket datasets by a large margin. We
also show that the approach works for object proposal on other natural images
and it outperforms the previous state-of-the-art object proposal methods on the
MS COCO dataset. The supermarket datasets, the virtual supermarkets, and the
tools for creating more synthetic datasets will be made public.
| new_dataset | 0.965218 |
1704.06779 | Nafise Sadat Moosavi | Nafise Sadat Moosavi and Michael Strube | Lexical Features in Coreference Resolution: To be Used With Caution | 6 pages, ACL 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lexical features are a major source of information in state-of-the-art
coreference resolvers. Lexical features implicitly model some of the linguistic
phenomena at a fine granularity level. They are especially useful for
representing the context of mentions. In this paper we investigate a drawback
of using many lexical features in state-of-the-art coreference resolvers. We
show that if coreference resolvers mainly rely on lexical features, they can
hardly generalize to unseen domains. Furthermore, we show that the current
coreference resolution evaluation is clearly flawed by only evaluating on a
specific split of a specific dataset in which there is a notable overlap
between the training, development and test sets.
| [
{
"version": "v1",
"created": "Sat, 22 Apr 2017 09:59:42 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Moosavi",
"Nafise Sadat",
""
],
[
"Strube",
"Michael",
""
]
] | TITLE: Lexical Features in Coreference Resolution: To be Used With Caution
ABSTRACT: Lexical features are a major source of information in state-of-the-art
coreference resolvers. Lexical features implicitly model some of the linguistic
phenomena at a fine granularity level. They are especially useful for
representing the context of mentions. In this paper we investigate a drawback
of using many lexical features in state-of-the-art coreference resolvers. We
show that if coreference resolvers mainly rely on lexical features, they can
hardly generalize to unseen domains. Furthermore, we show that the current
coreference resolution evaluation is clearly flawed by only evaluating on a
specific split of a specific dataset in which there is a notable overlap
between the training, development and test sets.
| no_new_dataset | 0.945147 |
1704.06803 | Federico Monti | Federico Monti, Michael M. Bronstein, Xavier Bresson | Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks | null | null | null | null | cs.LG cs.IR cs.NA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matrix completion models are among the most common formulations of
recommender systems. Recent works have showed a boost of performance of these
techniques when introducing the pairwise relationships between users/items in
the form of graphs, and imposing smoothness priors on these graphs. However,
such techniques do not fully exploit the local stationarity structures of
user/item graphs, and the number of parameters to learn is linear w.r.t. the
number of users and items. We propose a novel approach to overcome these
limitations by using geometric deep learning on graphs. Our matrix completion
architecture combines graph convolutional neural networks and recurrent neural
networks to learn meaningful statistical graph-structured patterns and the
non-linear diffusion process that generates the known ratings. This neural
network system requires a constant number of parameters independent of the
matrix size. We apply our method on both synthetic and real datasets, showing
that it outperforms state-of-the-art techniques.
| [
{
"version": "v1",
"created": "Sat, 22 Apr 2017 14:02:01 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Monti",
"Federico",
""
],
[
"Bronstein",
"Michael M.",
""
],
[
"Bresson",
"Xavier",
""
]
] | TITLE: Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks
ABSTRACT: Matrix completion models are among the most common formulations of
recommender systems. Recent works have showed a boost of performance of these
techniques when introducing the pairwise relationships between users/items in
the form of graphs, and imposing smoothness priors on these graphs. However,
such techniques do not fully exploit the local stationarity structures of
user/item graphs, and the number of parameters to learn is linear w.r.t. the
number of users and items. We propose a novel approach to overcome these
limitations by using geometric deep learning on graphs. Our matrix completion
architecture combines graph convolutional neural networks and recurrent neural
networks to learn meaningful statistical graph-structured patterns and the
non-linear diffusion process that generates the known ratings. This neural
network system requires a constant number of parameters independent of the
matrix size. We apply our method on both synthetic and real datasets, showing
that it outperforms state-of-the-art techniques.
| no_new_dataset | 0.941277 |
1704.06836 | Lotem Peled | Lotem Peled and Roi Reichart | Sarcasm SIGN: Interpreting Sarcasm with Sentiment Based Monolingual
Machine Translation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sarcasm is a form of speech in which speakers say the opposite of what they
truly mean in order to convey a strong sentiment. In other words, "Sarcasm is
the giant chasm between what I say, and the person who doesn't get it.". In
this paper we present the novel task of sarcasm interpretation, defined as the
generation of a non-sarcastic utterance conveying the same message as the
original sarcastic one. We introduce a novel dataset of 3000 sarcastic tweets,
each interpreted by five human judges. Addressing the task as monolingual
machine translation (MT), we experiment with MT algorithms and evaluation
measures. We then present SIGN: an MT based sarcasm interpretation algorithm
that targets sentiment words, a defining element of textual sarcasm. We show
that while the scores of n-gram based automatic measures are similar for all
interpretation models, SIGN's interpretations are scored higher by humans for
adequacy and sentiment polarity. We conclude with a discussion on future
research directions for our new task.
| [
{
"version": "v1",
"created": "Sat, 22 Apr 2017 18:59:25 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Peled",
"Lotem",
""
],
[
"Reichart",
"Roi",
""
]
] | TITLE: Sarcasm SIGN: Interpreting Sarcasm with Sentiment Based Monolingual
Machine Translation
ABSTRACT: Sarcasm is a form of speech in which speakers say the opposite of what they
truly mean in order to convey a strong sentiment. In other words, "Sarcasm is
the giant chasm between what I say, and the person who doesn't get it.". In
this paper we present the novel task of sarcasm interpretation, defined as the
generation of a non-sarcastic utterance conveying the same message as the
original sarcastic one. We introduce a novel dataset of 3000 sarcastic tweets,
each interpreted by five human judges. Addressing the task as monolingual
machine translation (MT), we experiment with MT algorithms and evaluation
measures. We then present SIGN: an MT based sarcasm interpretation algorithm
that targets sentiment words, a defining element of textual sarcasm. We show
that while the scores of n-gram based automatic measures are similar for all
interpretation models, SIGN's interpretations are scored higher by humans for
adequacy and sentiment polarity. We conclude with a discussion on future
research directions for our new task.
| new_dataset | 0.958148 |
1704.06841 | Toyotaro Suzumura Prof | Mark Hughes, Irene Li, Spyros Kotoulas, Toyotaro Suzumura | Medical Text Classification using Convolutional Neural Networks | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present an approach to automatically classify clinical text at a sentence
level. We are using deep convolutional neural networks to represent complex
features. We train the network on a dataset providing a broad categorization of
health information. Through a detailed evaluation, we demonstrate that our
method outperforms several approaches widely used in natural language
processing tasks by about 15%.
| [
{
"version": "v1",
"created": "Sat, 22 Apr 2017 19:39:32 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Hughes",
"Mark",
""
],
[
"Li",
"Irene",
""
],
[
"Kotoulas",
"Spyros",
""
],
[
"Suzumura",
"Toyotaro",
""
]
] | TITLE: Medical Text Classification using Convolutional Neural Networks
ABSTRACT: We present an approach to automatically classify clinical text at a sentence
level. We are using deep convolutional neural networks to represent complex
features. We train the network on a dataset providing a broad categorization of
health information. Through a detailed evaluation, we demonstrate that our
method outperforms several approaches widely used in natural language
processing tasks by about 15%.
| no_new_dataset | 0.948965 |
1704.06843 | Cenek Albl | Cenek Albl, Zuzana Kukelova, Andrew Fitzgibbon, Jan Heller, Matej Smid
and Tomas Pajdla | On the Two-View Geometry of Unsynchronized Cameras | 12 pages, 9 figures, Computer Vision and Pattern Recognition (CVPR)
2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present new methods for simultaneously estimating camera geometry and time
shift from video sequences from multiple unsynchronized cameras. Algorithms for
simultaneous computation of a fundamental matrix or a homography with unknown
time shift between images are developed. Our methods use minimal correspondence
sets (eight for fundamental matrix and four and a half for homography) and
therefore are suitable for robust estimation using RANSAC. Furthermore, we
present an iterative algorithm that extends the applicability on sequences
which are significantly unsynchronized, finding the correct time shift up to
several seconds. We evaluated the methods on synthetic and wide range of real
world datasets and the results show a broad applicability to the problem of
camera synchronization.
| [
{
"version": "v1",
"created": "Sat, 22 Apr 2017 19:45:46 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Albl",
"Cenek",
""
],
[
"Kukelova",
"Zuzana",
""
],
[
"Fitzgibbon",
"Andrew",
""
],
[
"Heller",
"Jan",
""
],
[
"Smid",
"Matej",
""
],
[
"Pajdla",
"Tomas",
""
]
] | TITLE: On the Two-View Geometry of Unsynchronized Cameras
ABSTRACT: We present new methods for simultaneously estimating camera geometry and time
shift from video sequences from multiple unsynchronized cameras. Algorithms for
simultaneous computation of a fundamental matrix or a homography with unknown
time shift between images are developed. Our methods use minimal correspondence
sets (eight for fundamental matrix and four and a half for homography) and
therefore are suitable for robust estimation using RANSAC. Furthermore, we
present an iterative algorithm that extends the applicability on sequences
which are significantly unsynchronized, finding the correct time shift up to
several seconds. We evaluated the methods on synthetic and wide range of real
world datasets and the results show a broad applicability to the problem of
camera synchronization.
| no_new_dataset | 0.950411 |
1704.06857 | Alberto Garcia-Garcia | Alberto Garcia-Garcia, Sergio Orts-Escolano, Sergiu Oprea, Victor
Villena-Martinez, Jose Garcia-Rodriguez | A Review on Deep Learning Techniques Applied to Semantic Segmentation | Submitted to TPAMI on Apr. 22, 2017 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image semantic segmentation is more and more being of interest for computer
vision and machine learning researchers. Many applications on the rise need
accurate and efficient segmentation mechanisms: autonomous driving, indoor
navigation, and even virtual or augmented reality systems to name a few. This
demand coincides with the rise of deep learning approaches in almost every
field or application target related to computer vision, including semantic
segmentation or scene understanding. This paper provides a review on deep
learning methods for semantic segmentation applied to various application
areas. Firstly, we describe the terminology of this field as well as mandatory
background concepts. Next, the main datasets and challenges are exposed to help
researchers decide which are the ones that best suit their needs and their
targets. Then, existing methods are reviewed, highlighting their contributions
and their significance in the field. Finally, quantitative results are given
for the described methods and the datasets in which they were evaluated,
following up with a discussion of the results. At last, we point out a set of
promising future works and draw our own conclusions about the state of the art
of semantic segmentation using deep learning techniques.
| [
{
"version": "v1",
"created": "Sat, 22 Apr 2017 23:37:43 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Garcia-Garcia",
"Alberto",
""
],
[
"Orts-Escolano",
"Sergio",
""
],
[
"Oprea",
"Sergiu",
""
],
[
"Villena-Martinez",
"Victor",
""
],
[
"Garcia-Rodriguez",
"Jose",
""
]
] | TITLE: A Review on Deep Learning Techniques Applied to Semantic Segmentation
ABSTRACT: Image semantic segmentation is more and more being of interest for computer
vision and machine learning researchers. Many applications on the rise need
accurate and efficient segmentation mechanisms: autonomous driving, indoor
navigation, and even virtual or augmented reality systems to name a few. This
demand coincides with the rise of deep learning approaches in almost every
field or application target related to computer vision, including semantic
segmentation or scene understanding. This paper provides a review on deep
learning methods for semantic segmentation applied to various application
areas. Firstly, we describe the terminology of this field as well as mandatory
background concepts. Next, the main datasets and challenges are exposed to help
researchers decide which are the ones that best suit their needs and their
targets. Then, existing methods are reviewed, highlighting their contributions
and their significance in the field. Finally, quantitative results are given
for the described methods and the datasets in which they were evaluated,
following up with a discussion of the results. At last, we point out a set of
promising future works and draw our own conclusions about the state of the art
of semantic segmentation using deep learning techniques.
| no_new_dataset | 0.949902 |
1704.06869 | Vlad Niculae | Vlad Niculae, Joonsuk Park, Claire Cardie | Argument Mining with Structured SVMs and RNNs | Accepted for publication at ACL 2017. 11 pages, 5 figures. Code at
https://github.com/vene/marseille and data at http://joonsuk.org/ | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel factor graph model for argument mining, designed for
settings in which the argumentative relations in a document do not necessarily
form a tree structure. (This is the case in over 20% of the web comments
dataset we release.) Our model jointly learns elementary unit type
classification and argumentative relation prediction. Moreover, our model
supports SVM and RNN parametrizations, can enforce structure constraints (e.g.,
transitivity), and can express dependencies between adjacent relations and
propositions. Our approaches outperform unstructured baselines in both web
comments and argumentative essay datasets.
| [
{
"version": "v1",
"created": "Sun, 23 Apr 2017 01:14:55 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Niculae",
"Vlad",
""
],
[
"Park",
"Joonsuk",
""
],
[
"Cardie",
"Claire",
""
]
] | TITLE: Argument Mining with Structured SVMs and RNNs
ABSTRACT: We propose a novel factor graph model for argument mining, designed for
settings in which the argumentative relations in a document do not necessarily
form a tree structure. (This is the case in over 20% of the web comments
dataset we release.) Our model jointly learns elementary unit type
classification and argumentative relation prediction. Moreover, our model
supports SVM and RNN parametrizations, can enforce structure constraints (e.g.,
transitivity), and can express dependencies between adjacent relations and
propositions. Our approaches outperform unstructured baselines in both web
comments and argumentative essay datasets.
| new_dataset | 0.930962 |
1704.06880 | Avishek Ghosh | Avishek Ghosh, Sayak Ray Chowdhury, Aditya Gopalan | Misspecified Linear Bandits | Thirty-First AAAI Conference on Artificial Intelligence, 2017 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of online learning in misspecified linear stochastic
multi-armed bandit problems. Regret guarantees for state-of-the-art linear
bandit algorithms such as Optimism in the Face of Uncertainty Linear bandit
(OFUL) hold under the assumption that the arms expected rewards are perfectly
linear in their features. It is, however, of interest to investigate the impact
of potential misspecification in linear bandit models, where the expected
rewards are perturbed away from the linear subspace determined by the arms
features. Although OFUL has recently been shown to be robust to relatively
small deviations from linearity, we show that any linear bandit algorithm that
enjoys optimal regret performance in the perfectly linear setting (e.g., OFUL)
must suffer linear regret under a sparse additive perturbation of the linear
model. In an attempt to overcome this negative result, we define a natural
class of bandit models characterized by a non-sparse deviation from linearity.
We argue that the OFUL algorithm can fail to achieve sublinear regret even
under models that have non-sparse deviation.We finally develop a novel bandit
algorithm, comprising a hypothesis test for linearity followed by a decision to
use either the OFUL or Upper Confidence Bound (UCB) algorithm. For perfectly
linear bandit models, the algorithm provably exhibits OFULs favorable regret
performance, while for misspecified models satisfying the non-sparse deviation
property, the algorithm avoids the linear regret phenomenon and falls back on
UCBs sublinear regret scaling. Numerical experiments on synthetic data, and on
recommendation data from the public Yahoo! Learning to Rank Challenge dataset,
empirically support our findings.
| [
{
"version": "v1",
"created": "Sun, 23 Apr 2017 04:37:57 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Ghosh",
"Avishek",
""
],
[
"Chowdhury",
"Sayak Ray",
""
],
[
"Gopalan",
"Aditya",
""
]
] | TITLE: Misspecified Linear Bandits
ABSTRACT: We consider the problem of online learning in misspecified linear stochastic
multi-armed bandit problems. Regret guarantees for state-of-the-art linear
bandit algorithms such as Optimism in the Face of Uncertainty Linear bandit
(OFUL) hold under the assumption that the arms' expected rewards are perfectly
linear in their features. It is, however, of interest to investigate the impact
of potential misspecification in linear bandit models, where the expected
rewards are perturbed away from the linear subspace determined by the arms'
features. Although OFUL has recently been shown to be robust to relatively
small deviations from linearity, we show that any linear bandit algorithm that
enjoys optimal regret performance in the perfectly linear setting (e.g., OFUL)
must suffer linear regret under a sparse additive perturbation of the linear
model. In an attempt to overcome this negative result, we define a natural
class of bandit models characterized by a non-sparse deviation from linearity.
We argue that the OFUL algorithm can fail to achieve sublinear regret even
under models that have non-sparse deviation. We finally develop a novel bandit
algorithm, comprising a hypothesis test for linearity followed by a decision to
use either the OFUL or Upper Confidence Bound (UCB) algorithm. For perfectly
linear bandit models, the algorithm provably exhibits OFUL's favorable regret
performance, while for misspecified models satisfying the non-sparse deviation
property, the algorithm avoids the linear regret phenomenon and falls back on
UCB's sublinear regret scaling. Numerical experiments on synthetic data, and on
recommendation data from the public Yahoo! Learning to Rank Challenge dataset,
empirically support our findings.
| no_new_dataset | 0.941601 |
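The record above contrasts OFUL-style linear bandits with a fallback to the plain UCB rule. As a minimal illustrative sketch of that fallback (not the paper's hypothesis-test algorithm), the following implements classical UCB1 on a finite-armed problem; the Bernoulli arm means and the exploration constant are assumptions made for the example.

```python
import numpy as np

def ucb1(pull, n_arms, horizon, rng):
    """Classical UCB1: play each arm once, then pick the arm whose empirical
    mean plus confidence bonus is largest."""
    counts = np.zeros(n_arms)   # number of plays per arm
    sums = np.zeros(n_arms)     # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:                          # initialisation round
            arm = t - 1
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(sums / counts + bonus))
        r = pull(arm, rng)
        counts[arm] += 1
        sums[arm] += r
    return counts, sums

# Toy environment: Bernoulli arms with hypothetical means.
true_means = [0.30, 0.50, 0.65]
pull = lambda a, rng: float(rng.random() < true_means[a])
counts, sums = ucb1(pull, n_arms=3, horizon=2000, rng=np.random.default_rng(0))
print("plays per arm:", counts.astype(int), "-> best arm identified:", int(np.argmax(counts)))
```

Note that UCB1 makes no linearity assumption about the rewards, which is the sense in which the abstract describes the combined algorithm as falling back on UCB's sublinear regret scaling when the linear model is misspecified.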
1704.06904 | Fei Wang | Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang
Zhang, Xiaogang Wang, Xiaoou Tang | Residual Attention Network for Image Classification | accepted to CVPR2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose "Residual Attention Network", a convolutional neural
network using attention mechanism which can incorporate with state-of-art feed
forward network architecture in an end-to-end training fashion. Our Residual
Attention Network is built by stacking Attention Modules which generate
attention-aware features. The attention-aware features from different modules
change adaptively as layers going deeper. Inside each Attention Module,
bottom-up top-down feedforward structure is used to unfold the feedforward and
feedback attention process into a single feedforward process. Importantly, we
propose attention residual learning to train very deep Residual Attention
Networks which can be easily scaled up to hundreds of layers. Extensive
analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the
effectiveness of every module mentioned above. Our Residual Attention Network
achieves state-of-the-art object recognition performance on three benchmark
datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and
ImageNet (4.8% single model and single crop, top-5 error). Note that, our
method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69%
forward FLOPs compared to ResNet-200. The experiment also demonstrates that
our network is robust against noisy labels.
| [
{
"version": "v1",
"created": "Sun, 23 Apr 2017 10:03:49 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Wang",
"Fei",
""
],
[
"Jiang",
"Mengqing",
""
],
[
"Qian",
"Chen",
""
],
[
"Yang",
"Shuo",
""
],
[
"Li",
"Cheng",
""
],
[
"Zhang",
"Honggang",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Residual Attention Network for Image Classification
ABSTRACT: In this work, we propose "Residual Attention Network", a convolutional neural
network using attention mechanism which can incorporate with state-of-art feed
forward network architecture in an end-to-end training fashion. Our Residual
Attention Network is built by stacking Attention Modules which generate
attention-aware features. The attention-aware features from different modules
change adaptively as layers going deeper. Inside each Attention Module,
bottom-up top-down feedforward structure is used to unfold the feedforward and
feedback attention process into a single feedforward process. Importantly, we
propose attention residual learning to train very deep Residual Attention
Networks which can be easily scaled up to hundreds of layers. Extensive
analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the
effectiveness of every module mentioned above. Our Residual Attention Network
achieves state-of-the-art object recognition performance on three benchmark
datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and
ImageNet (4.8% single model and single crop, top-5 error). Note that, our
method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69%
forward FLOPs compared to ResNet-200. The experiment also demonstrates that
our network is robust against noisy labels.
| no_new_dataset | 0.949669 |
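To make the attention-residual formulation in the record above concrete, namely output = (1 + M(x)) * T(x) with M a soft mask from a bottom-up/top-down branch, here is a single-module sketch in PyTorch. The layer sizes, the two-convolution trunk, and the collapsed mask branch are assumptions for illustration, not the paper's full stacked architecture.

```python
import torch
import torch.nn as nn

class AttentionModuleSketch(nn.Module):
    """One attention module: trunk features T(x) modulated by a soft mask M(x),
    combined with attention residual learning: out = (1 + M(x)) * T(x)."""
    def __init__(self, channels):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Bottom-up / top-down mask branch, collapsed here to a tiny conv stack.
        self.mask = nn.Sequential(
            nn.MaxPool2d(2),                                   # bottom-up
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),  # top-down
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),                                      # soft mask in (0, 1)
        )

    def forward(self, x):
        t = self.trunk(x)
        m = self.mask(x)
        return (1 + m) * t   # attention residual learning

x = torch.randn(2, 16, 32, 32)
print(AttentionModuleSketch(16)(x).shape)  # torch.Size([2, 16, 32, 32])
```

The (1 + m) factor is what lets very deep stacks of such modules train: when the mask is uninformative the module degrades gracefully toward the trunk features instead of zeroing them out.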
1704.06972 | Yufei Wang | Yufei Wang, Zhe Lin, Xiaohui Shen, Scott Cohen, Garrison W. Cottrell | Skeleton Key: Image Captioning by Skeleton-Attribute Decomposition | Accepted by CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been a lot of interest in automatically generating
descriptions for an image. Most existing language-model based approaches for
this task learn to generate an image description word by word in its original
word order. However, for humans, it is more natural to locate the objects and
their relationships first, and then elaborate on each object, describing
notable attributes. We present a coarse-to-fine method that decomposes the
original image description into a skeleton sentence and its attributes, and
generates the skeleton sentence and attribute phrases separately. By this
decomposition, our method can generate more accurate and novel descriptions
than the previous state-of-the-art. Experimental results on the MS-COCO and a
larger scale Stock3M datasets show that our algorithm yields consistent
improvements across different evaluation metrics, especially on the SPICE
metric, which has much higher correlation with human ratings than the
conventional metrics. Furthermore, our algorithm can generate descriptions with
varied length, benefiting from the separate control of the skeleton and
attributes. This enables image description generation that better accommodates
user preferences.
| [
{
"version": "v1",
"created": "Sun, 23 Apr 2017 20:17:12 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Wang",
"Yufei",
""
],
[
"Lin",
"Zhe",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Cohen",
"Scott",
""
],
[
"Cottrell",
"Garrison W.",
""
]
] | TITLE: Skeleton Key: Image Captioning by Skeleton-Attribute Decomposition
ABSTRACT: Recently, there has been a lot of interest in automatically generating
descriptions for an image. Most existing language-model based approaches for
this task learn to generate an image description word by word in its original
word order. However, for humans, it is more natural to locate the objects and
their relationships first, and then elaborate on each object, describing
notable attributes. We present a coarse-to-fine method that decomposes the
original image description into a skeleton sentence and its attributes, and
generates the skeleton sentence and attribute phrases separately. By this
decomposition, our method can generate more accurate and novel descriptions
than the previous state-of-the-art. Experimental results on the MS-COCO and a
larger scale Stock3M datasets show that our algorithm yields consistent
improvements across different evaluation metrics, especially on the SPICE
metric, which has much higher correlation with human ratings than the
conventional metrics. Furthermore, our algorithm can generate descriptions with
varied length, benefiting from the separate control of the skeleton and
attributes. This enables image description generation that better accommodates
user preferences.
| no_new_dataset | 0.948251 |
1704.07047 | Deng Cai | Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, Feiyue Huang | Fast and Accurate Neural Word Segmentation for Chinese | To appear in ACL2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural models with minimal feature engineering have achieved competitive
performance against traditional methods for the task of Chinese word
segmentation. However, both training and working procedures of the current
neural models are computationally inefficient. This paper presents a greedy
neural word segmenter with balanced word and character embedding inputs to
alleviate the existing drawbacks. Our segmenter is truly end-to-end, capable of
performing segmentation much faster and even more accurately than
state-of-the-art neural models on Chinese benchmark datasets.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2017 05:50:29 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Cai",
"Deng",
""
],
[
"Zhao",
"Hai",
""
],
[
"Zhang",
"Zhisong",
""
],
[
"Xin",
"Yuan",
""
],
[
"Wu",
"Yongjian",
""
],
[
"Huang",
"Feiyue",
""
]
] | TITLE: Fast and Accurate Neural Word Segmentation for Chinese
ABSTRACT: Neural models with minimal feature engineering have achieved competitive
performance against traditional methods for the task of Chinese word
segmentation. However, both training and working procedures of the current
neural models are computationally inefficient. This paper presents a greedy
neural word segmenter with balanced word and character embedding inputs to
alleviate the existing drawbacks. Our segmenter is truly end-to-end, capable of
performing segmentation much faster and even more accurately than
state-of-the-art neural models on Chinese benchmark datasets.
| no_new_dataset | 0.949435 |
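The segmenter described above decodes greedily from left to right. The sketch below shows the same greedy decoding loop, but with a toy dictionary-based scorer standing in for the paper's neural word/character scoring model; the lexicon and its scores are invented for the example.

```python
def greedy_segment(sentence, score, max_word_len=4):
    """Greedy left-to-right segmentation: at each position, commit to the
    candidate word with the highest score, then continue after it."""
    out, i = [], 0
    while i < len(sentence):
        candidates = [sentence[i:i + k]
                      for k in range(1, max_word_len + 1) if i + k <= len(sentence)]
        best = max(candidates, key=score)
        out.append(best)
        i += len(best)
    return out

# Toy scorer standing in for the paper's balanced word/character embedding model.
lexicon = {"我": 1.0, "喜欢": 2.5, "喜": 0.8, "欢": 0.5,
           "自然": 2.5, "语言": 2.5, "处理": 2.5, "自然语言": 3.5}
score = lambda w: lexicon.get(w, 0.1) + 0.01 * len(w)
print(greedy_segment("我喜欢自然语言处理", score))  # ['我', '喜欢', '自然语言', '处理']
```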
1704.07129 | Spandana Gella | Spandana Gella, Frank Keller | An Analysis of Action Recognition Datasets for Language and Vision Tasks | To appear in Proceedings of ACL 2017, 8 pages | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large amount of recent research has focused on tasks that combine language
and vision, resulting in a proliferation of datasets and methods. One such task
is action recognition, whose applications include image annotation, scene
understanding and image retrieval. In this survey, we categorize the existing
approaches based on how they conceptualize this problem and provide a
detailed review of existing datasets, highlighting their diversity as well as
advantages and disadvantages. We focus on recently developed datasets which
link visual information with linguistic resources and provide a fine-grained
syntactic and semantic analysis of actions in images.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2017 10:38:23 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Gella",
"Spandana",
""
],
[
"Keller",
"Frank",
""
]
] | TITLE: An Analysis of Action Recognition Datasets for Language and Vision Tasks
ABSTRACT: A large amount of recent research has focused on tasks that combine language
and vision, resulting in a proliferation of datasets and methods. One such task
is action recognition, whose applications include image annotation, scene
understanding and image retrieval. In this survey, we categorize the existing
approaches based on how they conceptualize this problem and provide a
detailed review of existing datasets, highlighting their diversity as well as
advantages and disadvantages. We focus on recently developed datasets which
link visual information with linguistic resources and provide a fine-grained
syntactic and semantic analysis of actions in images.
| no_new_dataset | 0.944944 |
1704.07130 | He He | He He and Anusha Balakrishnan and Mihail Eric and Percy Liang | Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge
Graph Embeddings | ACL 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a symmetric collaborative dialogue setting in which two agents, each
with private knowledge, must strategically communicate to achieve a common
goal. The open-ended dialogue state in this setting poses new challenges for
existing dialogue systems. We collected a dataset of 11K human-human dialogues,
which exhibits interesting lexical, semantic, and strategic elements. To model
both structured knowledge and unstructured language, we propose a neural model
with dynamic knowledge graph embeddings that evolve as the dialogue progresses.
Automatic and human evaluations show that our model is both more effective at
achieving the goal and more human-like than baseline neural and rule-based
models.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2017 10:38:24 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"He",
"He",
""
],
[
"Balakrishnan",
"Anusha",
""
],
[
"Eric",
"Mihail",
""
],
[
"Liang",
"Percy",
""
]
] | TITLE: Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge
Graph Embeddings
ABSTRACT: We study a symmetric collaborative dialogue setting in which two agents, each
with private knowledge, must strategically communicate to achieve a common
goal. The open-ended dialogue state in this setting poses new challenges for
existing dialogue systems. We collected a dataset of 11K human-human dialogues,
which exhibits interesting lexical, semantic, and strategic elements. To model
both structured knowledge and unstructured language, we propose a neural model
with dynamic knowledge graph embeddings that evolve as the dialogue progresses.
Automatic and human evaluations show that our model is both more effective at
achieving the goal and more human-like than baseline neural and rule-based
models.
| new_dataset | 0.957278 |
1704.07156 | Marek Rei | Marek Rei | Semi-supervised Multitask Learning for Sequence Labeling | ACL 2017 | null | null | null | cs.CL cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a sequence labeling framework with a secondary training objective,
learning to predict surrounding words for every word in the dataset. This
language modeling objective incentivises the system to learn general-purpose
patterns of semantic and syntactic composition, which are also useful for
improving accuracy on different sequence labeling tasks. The architecture was
evaluated on a range of datasets, covering the tasks of error detection in
learner texts, named entity recognition, chunking and POS-tagging. The novel
language modeling objective provided consistent performance improvements on
every benchmark, without requiring any additional annotated or unannotated
data.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2017 11:47:06 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Rei",
"Marek",
""
]
] | TITLE: Semi-supervised Multitask Learning for Sequence Labeling
ABSTRACT: We propose a sequence labeling framework with a secondary training objective,
learning to predict surrounding words for every word in the dataset. This
language modeling objective incentivises the system to learn general-purpose
patterns of semantic and syntactic composition, which are also useful for
improving accuracy on different sequence labeling tasks. The architecture was
evaluated on a range of datasets, covering the tasks of error detection in
learner texts, named entity recognition, chunking and POS-tagging. The novel
language modeling objective provided consistent performance improvements on
every benchmark, without requiring any additional annotated or unannotated
data.
| no_new_dataset | 0.948106 |
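A rough sketch of the secondary objective described above: a bidirectional tagger whose forward states also predict the next word and whose backward states predict the previous word, with the language-modeling terms added to the tagging loss. The small LSTM, the vocabulary size, and the weighting factor gamma are assumptions for the example, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaggerWithLMObjective(nn.Module):
    def __init__(self, vocab, n_tags, emb=64, hidden=64, gamma=0.1):
        super().__init__()
        self.gamma = gamma
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden, n_tags)
        self.fwd_lm = nn.Linear(hidden, vocab)   # forward state predicts next word
        self.bwd_lm = nn.Linear(hidden, vocab)   # backward state predicts previous word

    def forward(self, words, tags):
        h, _ = self.lstm(self.embed(words))                     # (B, T, 2H)
        fwd, bwd = h.chunk(2, dim=-1)                           # split directions
        tag_loss = F.cross_entropy(self.tag_head(h).transpose(1, 2), tags)
        # Secondary objective: predict the surrounding words.
        lm_fwd = F.cross_entropy(self.fwd_lm(fwd[:, :-1]).transpose(1, 2), words[:, 1:])
        lm_bwd = F.cross_entropy(self.bwd_lm(bwd[:, 1:]).transpose(1, 2), words[:, :-1])
        return tag_loss + self.gamma * (lm_fwd + lm_bwd)

model = TaggerWithLMObjective(vocab=1000, n_tags=5)
words = torch.randint(0, 1000, (8, 20))
tags = torch.randint(0, 5, (8, 20))
loss = model(words, tags)
loss.backward()
print(float(loss))
```

The key point mirrored from the abstract is that the extra objective needs no additional annotated or unannotated data: the targets are simply the neighboring words of the training sentences themselves.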
1704.07163 | Chang-Ryeol Lee | Chang-Ryeol Lee and Kuk-Jin Yoon | Monocular Visual Odometry with a Rolling Shutter Camera | 14 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rolling Shutter (RS) cameras have become popularized because of low-cost
imaging capability. However, the RS cameras suffer from undesirable artifacts
when the camera or the subject is moving, or illumination condition changes.
For that reason, Monocular Visual Odometry (MVO) with RS cameras produces
inaccurate ego-motion estimates. Previous works solve this RS distortion
problem with motion prediction from images and/or inertial sensors. However,
the MVO still has trouble in handling the RS distortion when the camera motion
changes abruptly (e.g. vibration of mobile cameras causes extremely fast motion
instantaneously). To address the problem, we propose the novel MVO algorithm in
consideration of the geometric characteristics of RS cameras. The key idea of
the proposed algorithm is the new RS essential matrix which incorporates the
instantaneous angular and linear velocities at each frame. Our algorithm
produces accurate and robust ego-motion estimates in an online manner, and is
applicable to various mobile applications with RS cameras. The superiority of
the proposed algorithm is validated through quantitative and qualitative
comparison on both synthetic and real dataset.
| [
{
"version": "v1",
"created": "Mon, 24 Apr 2017 12:02:53 GMT"
}
] | 2017-04-25T00:00:00 | [
[
"Lee",
"Chang-Ryeol",
""
],
[
"Yoon",
"Kuk-Jin",
""
]
] | TITLE: Monocular Visual Odometry with a Rolling Shutter Camera
ABSTRACT: Rolling Shutter (RS) cameras have become popularized because of low-cost
imaging capability. However, the RS cameras suffer from undesirable artifacts
when the camera or the subject is moving, or illumination condition changes.
For that reason, Monocular Visual Odometry (MVO) with RS cameras produces
inaccurate ego-motion estimates. Previous works solve this RS distortion
problem with motion prediction from images and/or inertial sensors. However,
the MVO still has trouble in handling the RS distortion when the camera motion
changes abruptly (e.g. vibration of mobile cameras causes extremely fast motion
instantaneously). To address the problem, we propose the novel MVO algorithm in
consideration of the geometric characteristics of RS cameras. The key idea of
the proposed algorithm is the new RS essential matrix which incorporates the
instantaneous angular and linear velocities at each frame. Our algorithm
produces accurate and robust ego-motion estimates in an online manner, and is
applicable to various mobile applications with RS cameras. The superiority of
the proposed algorithm is validated through quantitative and qualitative
comparison on both synthetic and real dataset.
| no_new_dataset | 0.950915 |
1504.03440 | Georgios Kellaris | Georgios Kellaris, Stavros Papadopoulos, and Dimitris Papadias | Engineering Methods for Differentially Private Histograms: Efficiency
Beyond Utility | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Publishing histograms with $\epsilon$-differential privacy has been studied
extensively in the literature. Existing schemes aim at maximizing the utility
of the published data, while previous experimental evaluations analyze the
privacy/utility trade-off. In this paper we provide the first experimental
evaluation of differentially private methods that goes beyond utility,
emphasizing also on another important aspect, namely efficiency. Towards this
end, we first observe that all existing schemes are comprised of a small set of
common blocks. We then optimize and choose the best implementation for each
block, determine the combinations of blocks that capture the entire literature,
and propose novel block combinations. We qualitatively assess the quality of
the schemes based on the skyline of efficiency and utility, i.e., based on
whether a method is dominated on both aspects or not. Using exhaustive
experiments on four real datasets with different characteristics, we conclude
that there are always trade-offs in terms of utility and efficiency. We
demonstrate that the schemes derived from our novel block combinations provide
the best trade-offs for time critical applications. Our work can serve as a
guide to help practitioners engineer a differentially private histogram scheme
depending on their application requirements.
| [
{
"version": "v1",
"created": "Tue, 14 Apr 2015 07:29:25 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2017 19:27:53 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Kellaris",
"Georgios",
""
],
[
"Papadopoulos",
"Stavros",
""
],
[
"Papadias",
"Dimitris",
""
]
] | TITLE: Engineering Methods for Differentially Private Histograms: Efficiency
Beyond Utility
ABSTRACT: Publishing histograms with $\epsilon$-differential privacy has been studied
extensively in the literature. Existing schemes aim at maximizing the utility
of the published data, while previous experimental evaluations analyze the
privacy/utility trade-off. In this paper we provide the first experimental
evaluation of differentially private methods that goes beyond utility,
emphasizing also on another important aspect, namely efficiency. Towards this
end, we first observe that all existing schemes are comprised of a small set of
common blocks. We then optimize and choose the best implementation for each
block, determine the combinations of blocks that capture the entire literature,
and propose novel block combinations. We qualitatively assess the quality of
the schemes based on the skyline of efficiency and utility, i.e., based on
whether a method is dominated on both aspects or not. Using exhaustive
experiments on four real datasets with different characteristics, we conclude
that there are always trade-offs in terms of utility and efficiency. We
demonstrate that the schemes derived from our novel block combinations provide
the best trade-offs for time critical applications. Our work can serve as a
guide to help practitioners engineer a differentially private histogram scheme
depending on their application requirements.
| no_new_dataset | 0.945147 |
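As a minimal illustration of the common building block behind the histogram schemes evaluated above, perturbing bin counts so the release satisfies epsilon-differential privacy, the following applies the Laplace mechanism to a toy dataset. The bin count, the epsilon value, and the synthetic data are assumptions for the example; the surveyed methods compose further blocks (partitioning, consistency post-processing, and so on) on top of this.

```python
import numpy as np

def dp_histogram(values, bins, epsilon, rng=None):
    """Release a histogram under epsilon-differential privacy.
    Adding or removing one record changes one count by 1, so the L1
    sensitivity is 1 and Laplace noise with scale 1/epsilon suffices."""
    rng = rng or np.random.default_rng(0)
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    return np.clip(noisy, 0, None), edges   # optional post-processing: clip negatives

rng = np.random.default_rng(1)
data = rng.normal(50, 15, size=10_000)
noisy_counts, edges = dp_histogram(data, bins=20, epsilon=0.5)
print(np.round(noisy_counts[:5], 1))
```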
1602.02285 | Uri Shaham | Uri Shaham, Xiuyuan Cheng, Omer Dror, Ariel Jaffe, Boaz Nadler, Joseph
Chang, Yuval Kluger | A Deep Learning Approach to Unsupervised Ensemble Learning | null | null | null | PMLR 48:30-39 | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show how deep learning methods can be applied in the context of
crowdsourcing and unsupervised ensemble learning. First, we prove that the
popular model of Dawid and Skene, which assumes that all classifiers are
conditionally independent, is {\em equivalent} to a Restricted Boltzmann
Machine (RBM) with a single hidden node. Hence, under this model, the posterior
probabilities of the true labels can be instead estimated via a trained RBM.
Next, to address the more general case, where classifiers may strongly violate
the conditional independence assumption, we propose to apply RBM-based Deep
Neural Net (DNN). Experimental results on various simulated and real-world
datasets demonstrate that our proposed DNN approach outperforms other
state-of-the-art methods, in particular when the data violates the conditional
independence assumption.
| [
{
"version": "v1",
"created": "Sat, 6 Feb 2016 17:56:59 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Shaham",
"Uri",
""
],
[
"Cheng",
"Xiuyuan",
""
],
[
"Dror",
"Omer",
""
],
[
"Jaffe",
"Ariel",
""
],
[
"Nadler",
"Boaz",
""
],
[
"Chang",
"Joseph",
""
],
[
"Kluger",
"Yuval",
""
]
] | TITLE: A Deep Learning Approach to Unsupervised Ensemble Learning
ABSTRACT: We show how deep learning methods can be applied in the context of
crowdsourcing and unsupervised ensemble learning. First, we prove that the
popular model of Dawid and Skene, which assumes that all classifiers are
conditionally independent, is {\em equivalent} to a Restricted Boltzmann
Machine (RBM) with a single hidden node. Hence, under this model, the posterior
probabilities of the true labels can be instead estimated via a trained RBM.
Next, to address the more general case, where classifiers may strongly violate
the conditional independence assumption, we propose to apply RBM-based Deep
Neural Net (DNN). Experimental results on various simulated and real-world
datasets demonstrate that our proposed DNN approach outperforms other
state-of-the-art methods, in particular when the data violates the conditional
independence assumption.
| no_new_dataset | 0.947721 |
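To make the Dawid-Skene model referenced above concrete, here is a small EM sketch for binary labels under the conditional-independence assumption, the setting the paper shows is equivalent to an RBM with a single hidden node. The simulated annotator accuracies are assumptions for the demo; this is not the paper's RBM or DNN approach.

```python
import numpy as np

def dawid_skene_binary(votes, n_iter=50):
    """votes: (n_items, n_annotators) matrix of 0/1 labels.
    Returns posterior P(true label = 1) per item under conditional independence."""
    n, m = votes.shape
    post = votes.mean(axis=1)                                    # init from vote proportions
    for _ in range(n_iter):
        # M-step: per-annotator accuracy for each true class (Laplace-smoothed).
        acc1 = (post @ votes + 1) / (post.sum() + 2)             # P(vote=1 | y=1)
        acc0 = ((1 - post) @ votes + 1) / ((1 - post).sum() + 2)  # P(vote=1 | y=0)
        prior = post.mean()
        # E-step: recompute posteriors over the hidden true label.
        log1 = np.log(prior) + votes @ np.log(acc1) + (1 - votes) @ np.log(1 - acc1)
        log0 = np.log(1 - prior) + votes @ np.log(acc0) + (1 - votes) @ np.log(1 - acc0)
        post = 1.0 / (1.0 + np.exp(log0 - log1))
    return post

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=500)
accs = np.array([0.9, 0.8, 0.6, 0.55])       # hypothetical annotator accuracies
votes = np.where(rng.random((500, 4)) < accs, truth[:, None], 1 - truth[:, None])
post = dawid_skene_binary(votes)
print("agreement with truth:", ((post > 0.5) == truth).mean())
```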
1603.07188 | Pavel Tokmakov | Pavel Tokmakov, Karteek Alahari, Cordelia Schmid | Weakly-Supervised Semantic Segmentation using Motion Cues | Extended version of our ECCV 2016 paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fully convolutional neural networks (FCNNs) trained on a large number of
images with strong pixel-level annotations have become the new state of the art
for the semantic segmentation task. While there have been recent attempts to
learn FCNNs from image-level weak annotations, they need additional
constraints, such as the size of an object, to obtain reasonable performance.
To address this issue, we present motion-CNN (M-CNN), a novel FCNN framework
which incorporates motion cues and is learned from video-level weak
annotations. Our learning scheme to train the network uses motion segments as
soft constraints, thereby handling noisy motion information. When trained on
weakly-annotated videos, our method outperforms the state-of-the-art EM-Adapt
approach on the PASCAL VOC 2012 image segmentation benchmark. We also
demonstrate that the performance of M-CNN learned with 150 weak video
annotations is on par with state-of-the-art weakly-supervised methods trained
with thousands of images. Finally, M-CNN substantially outperforms recent
approaches in a related task of video co-localization on the YouTube-Objects
dataset.
| [
{
"version": "v1",
"created": "Wed, 23 Mar 2016 14:01:03 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2016 12:21:37 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Apr 2017 08:16:06 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Tokmakov",
"Pavel",
""
],
[
"Alahari",
"Karteek",
""
],
[
"Schmid",
"Cordelia",
""
]
] | TITLE: Weakly-Supervised Semantic Segmentation using Motion Cues
ABSTRACT: Fully convolutional neural networks (FCNNs) trained on a large number of
images with strong pixel-level annotations have become the new state of the art
for the semantic segmentation task. While there have been recent attempts to
learn FCNNs from image-level weak annotations, they need additional
constraints, such as the size of an object, to obtain reasonable performance.
To address this issue, we present motion-CNN (M-CNN), a novel FCNN framework
which incorporates motion cues and is learned from video-level weak
annotations. Our learning scheme to train the network uses motion segments as
soft constraints, thereby handling noisy motion information. When trained on
weakly-annotated videos, our method outperforms the state-of-the-art EM-Adapt
approach on the PASCAL VOC 2012 image segmentation benchmark. We also
demonstrate that the performance of M-CNN learned with 150 weak video
annotations is on par with state-of-the-art weakly-supervised methods trained
with thousands of images. Finally, M-CNN substantially outperforms recent
approaches in a related task of video co-localization on the YouTube-Objects
dataset.
| no_new_dataset | 0.947381 |
1606.03777 | Nikola Mrk\v{s}i\'c | Nikola Mrk\v{s}i\'c and Diarmuid \'O S\'eaghdha and Tsung-Hsien Wen
and Blaise Thomson and Steve Young | Neural Belief Tracker: Data-Driven Dialogue State Tracking | Accepted as a long paper for the 55th Annual Meeting of the
Association for Computational Linguistics (ACL 2017) | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the core components of modern spoken dialogue systems is the belief
tracker, which estimates the user's goal at every step of the dialogue.
However, most current approaches have difficulty scaling to larger, more
complex dialogue domains. This is due to their dependency on either: a) Spoken
Language Understanding models that require large amounts of annotated training
data; or b) hand-crafted lexicons for capturing some of the linguistic
variation in users' language. We propose a novel Neural Belief Tracking (NBT)
framework which overcomes these problems by building on recent advances in
representation learning. NBT models reason over pre-trained word vectors,
learning to compose them into distributed representations of user utterances
and dialogue context. Our evaluation on two datasets shows that this approach
surpasses past limitations, matching the performance of state-of-the-art models
which rely on hand-crafted semantic lexicons and outperforming them when such
lexicons are not provided.
| [
{
"version": "v1",
"created": "Sun, 12 Jun 2016 22:59:14 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2017 15:15:03 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Mrkšić",
"Nikola",
""
],
[
"Séaghdha",
"Diarmuid Ó",
""
],
[
"Wen",
"Tsung-Hsien",
""
],
[
"Thomson",
"Blaise",
""
],
[
"Young",
"Steve",
""
]
] | TITLE: Neural Belief Tracker: Data-Driven Dialogue State Tracking
ABSTRACT: One of the core components of modern spoken dialogue systems is the belief
tracker, which estimates the user's goal at every step of the dialogue.
However, most current approaches have difficulty scaling to larger, more
complex dialogue domains. This is due to their dependency on either: a) Spoken
Language Understanding models that require large amounts of annotated training
data; or b) hand-crafted lexicons for capturing some of the linguistic
variation in users' language. We propose a novel Neural Belief Tracking (NBT)
framework which overcomes these problems by building on recent advances in
representation learning. NBT models reason over pre-trained word vectors,
learning to compose them into distributed representations of user utterances
and dialogue context. Our evaluation on two datasets shows that this approach
surpasses past limitations, matching the performance of state-of-the-art models
which rely on hand-crafted semantic lexicons and outperforming them when such
lexicons are not provided.
| no_new_dataset | 0.944228 |
1703.08338 | Michael Wray | Michael Wray, Davide Moltisanti, Walterio Mayol-Cuevas and Dima Damen | Improving Classification by Improving Labelling: Introducing
Probabilistic Multi-Label Object Interaction Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work deviates from easy-to-define class boundaries for object
interactions. For the task of object interaction recognition, often captured
using an egocentric view, we show that semantic ambiguities in verbs and
recognising sub-interactions along with concurrent interactions result in
legitimate class overlaps (Figure 1). We thus aim to model the mapping between
observations and interaction classes, as well as class overlaps, towards a
probabilistic multi-label classifier that emulates human annotators. Given a
video segment containing an object interaction, we model the probability for a
verb, out of a list of possible verbs, to be used to annotate that interaction.
The proba- bility is learnt from crowdsourced annotations, and is tested on two
public datasets, comprising 1405 video sequences for which we provide
annotations on 90 verbs. We outper- form conventional single-label
classification by 11% and 6% on the two datasets respectively, and show that
learning from annotation probabilities outperforms majority voting and enables
discovery of co-occurring labels.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 10:11:03 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Apr 2017 16:29:22 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Wray",
"Michael",
""
],
[
"Moltisanti",
"Davide",
""
],
[
"Mayol-Cuevas",
"Walterio",
""
],
[
"Damen",
"Dima",
""
]
] | TITLE: Improving Classification by Improving Labelling: Introducing
Probabilistic Multi-Label Object Interaction Recognition
ABSTRACT: This work deviates from easy-to-define class boundaries for object
interactions. For the task of object interaction recognition, often captured
using an egocentric view, we show that semantic ambiguities in verbs and
recognising sub-interactions along with concurrent interactions result in
legitimate class overlaps (Figure 1). We thus aim to model the mapping between
observations and interaction classes, as well as class overlaps, towards a
probabilistic multi-label classifier that emulates human annotators. Given a
video segment containing an object interaction, we model the probability for a
verb, out of a list of possible verbs, to be used to annotate that interaction.
The proba- bility is learnt from crowdsourced annotations, and is tested on two
public datasets, comprising 1405 video sequences for which we provide
annotations on 90 verbs. We outper- form conventional single-label
classification by 11% and 6% on the two datasets respectively, and show that
learning from annotation probabilities outperforms majority voting and enables
discovery of co-occurring labels.
| no_new_dataset | 0.949342 |
1704.06360 | Jason Fries | Jason Fries, Sen Wu, Alex Ratner, Christopher R\'e | SwellShark: A Generative Model for Biomedical Named Entity Recognition
without Labeled Data | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present SwellShark, a framework for building biomedical named entity
recognition (NER) systems quickly and without hand-labeled data. Our approach
views biomedical resources like lexicons as function primitives for
autogenerating weak supervision. We then use a generative model to unify and
denoise this supervision and construct large-scale, probabilistically labeled
datasets for training high-accuracy NER taggers. In three biomedical NER tasks,
SwellShark achieves competitive scores with state-of-the-art supervised
benchmarks using no hand-labeled training data. In a drug name extraction task
using patient medical records, one domain expert using SwellShark achieved
within 5.1% of a crowdsourced annotation approach -- which originally utilized
20 teams over the course of several weeks -- in 24 hours.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 23:02:14 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Fries",
"Jason",
""
],
[
"Wu",
"Sen",
""
],
[
"Ratner",
"Alex",
""
],
[
"Ré",
"Christopher",
""
]
] | TITLE: SwellShark: A Generative Model for Biomedical Named Entity Recognition
without Labeled Data
ABSTRACT: We present SwellShark, a framework for building biomedical named entity
recognition (NER) systems quickly and without hand-labeled data. Our approach
views biomedical resources like lexicons as function primitives for
autogenerating weak supervision. We then use a generative model to unify and
denoise this supervision and construct large-scale, probabilistically labeled
datasets for training high-accuracy NER taggers. In three biomedical NER tasks,
SwellShark achieves competitive scores with state-of-the-art supervised
benchmarks using no hand-labeled training data. In a drug name extraction task
using patient medical records, one domain expert using SwellShark achieved
within 5.1% of a crowdsourced annotation approach -- which originally utilized
20 teams over the course of several weeks -- in 24 hours.
| no_new_dataset | 0.946498 |
1704.06363 | Arthur Szlam | Sam Gross and Marc'Aurelio Ranzato and Arthur Szlam | Hard Mixtures of Experts for Large Scale Weakly Supervised Vision | Appearing in CVPR 2017 | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training convolutional networks (CNN's) that fit on a single GPU with
minibatch stochastic gradient descent has become effective in practice.
However, there is still no effective method for training large CNN's that do
not fit in the memory of a few GPU cards, or for parallelizing CNN training. In
this work we show that a simple hard mixture of experts model can be
efficiently trained to good effect on large scale hashtag (multilabel)
prediction tasks. Mixture of experts models are not new (Jacobs et. al. 1991,
Collobert et. al. 2003), but in the past, researchers have had to devise
sophisticated methods to deal with data fragmentation. We show empirically that
modern weakly supervised data sets are large enough to support naive
partitioning schemes where each data point is assigned to a single expert.
Because the experts are independent, training them in parallel is easy, and
evaluation is cheap for the size of the model. Furthermore, we show that we can
use a single decoding layer for all the experts, allowing a unified feature
embedding space. We demonstrate that it is feasible (and in fact relatively
painless) to train far larger models than could be practically trained with
standard CNN architectures, and that the extra capacity can be well used on
current datasets.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 23:45:27 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Gross",
"Sam",
""
],
[
"Ranzato",
"Marc'Aurelio",
""
],
[
"Szlam",
"Arthur",
""
]
] | TITLE: Hard Mixtures of Experts for Large Scale Weakly Supervised Vision
ABSTRACT: Training convolutional networks (CNN's) that fit on a single GPU with
minibatch stochastic gradient descent has become effective in practice.
However, there is still no effective method for training large CNN's that do
not fit in the memory of a few GPU cards, or for parallelizing CNN training. In
this work we show that a simple hard mixture of experts model can be
efficiently trained to good effect on large scale hashtag (multilabel)
prediction tasks. Mixture of experts models are not new (Jacobs et. al. 1991,
Collobert et. al. 2003), but in the past, researchers have had to devise
sophisticated methods to deal with data fragmentation. We show empirically that
modern weakly supervised data sets are large enough to support naive
partitioning schemes where each data point is assigned to a single expert.
Because the experts are independent, training them in parallel is easy, and
evaluation is cheap for the size of the model. Furthermore, we show that we can
use a single decoding layer for all the experts, allowing a unified feature
embedding space. We demonstrate that it is feasible (and in fact relatively
painless) to train far larger models than could be practically trained with
standard CNN architectures, and that the extra capacity can be well used on
current datasets.
| no_new_dataset | 0.945951 |
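A small sketch of the hard-mixture idea in the record above: partition the training data with a hard unsupervised assignment, train one independent expert per partition, and route each test point to the expert that owns its cluster. The k-means gating, logistic-regression experts, and synthetic binary task are assumptions for the example; the paper operates on CNN features and hashtag labels at a much larger scale.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20))
w = rng.normal(size=20)
y = (X @ w + 0.5 * rng.normal(size=3000) > 0).astype(int)    # hypothetical labels
X_tr, y_tr, X_te, y_te = X[:2500], y[:2500], X[2500:], y[2500:]

# Hard gating: each point is assigned to exactly one expert's partition.
gate = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_tr)
experts = {k: LogisticRegression(max_iter=1000).fit(X_tr[gate.labels_ == k],
                                                    y_tr[gate.labels_ == k])
           for k in range(4)}

# Inference: route each test point to the expert owning its cluster.
assign = gate.predict(X_te)
pred = np.array([experts[k].predict(x[None, :])[0] for k, x in zip(assign, X_te)])
print("accuracy:", (pred == y_te).mean())
```

Because the experts share no parameters, the per-cluster fits above could be dispatched to separate workers, which is the easy parallelism the abstract points to.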
1704.06370 | Md Zahangir Alom | Md Zahangir Alom and Tarek M. Taha | Robust Multi-view Pedestrian Tracking Using Neural Networks | 8 pages, 3 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present a real-time robust multi-view pedestrian detection
and tracking system for video surveillance using neural networks which can be
used in dynamic environments. The proposed system consists of two phases:
multi-view pedestrian detection and tracking. First, pedestrian detection
utilizes background subtraction to segment the foreground blob. An adaptive
background subtraction method, in which each pixel of the input image is
modeled as a mixture of Gaussians and updated with an on-line approximation, is
applied to extract the foreground region. The Gaussian distributions are then
evaluated to determine which are most likely to result from a background
process. This method produces a steady, real-time tracker in outdoor
environments that consistently deals with changes in lighting conditions and
long-term scene change. Second, tracking is performed in two phases: pedestrian
classification and tracking of the individual subject. A sliding window is
applied to the foreground binary image to select an input window, which is used
for selecting the input image patches from the actual input frame. A neural
network is used for classification with PHOG features. Finally, a Kalman
filter is applied to calculate the subsequent step for tracking that aims at
finding the exact position of pedestrians in an input image. The experimental
result shows that the proposed approach yields promising performance on
multi-view pedestrian detection and tracking on different benchmark datasets.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 00:12:23 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Alom",
"Md Zahangir",
""
],
[
"Taha",
"Tarek M.",
""
]
] | TITLE: Robust Multi-view Pedestrian Tracking Using Neural Networks
ABSTRACT: In this paper, we present a real-time robust multi-view pedestrian detection
and tracking system for video surveillance using neural networks which can be
used in dynamic environments. The proposed system consists of two phases:
multi-view pedestrian detection and tracking. First, pedestrian detection
utilizes background subtraction to segment the foreground blob. An adaptive
background subtraction method, in which each pixel of the input image is
modeled as a mixture of Gaussians and updated with an on-line approximation, is
applied to extract the foreground region. The Gaussian distributions are then
evaluated to determine which are most likely to result from a background
process. This method produces a steady, real-time tracker in outdoor
environments that consistently deals with changes in lighting conditions and
long-term scene change. Second, tracking is performed in two phases: pedestrian
classification and tracking of the individual subject. A sliding window is
applied to the foreground binary image to select an input window, which is used
for selecting the input image patches from the actual input frame. A neural
network is used for classification with PHOG features. Finally, a Kalman
filter is applied to calculate the subsequent step for tracking that aims at
finding the exact position of pedestrians in an input image. The experimental
result shows that the proposed approach yields promising performance on
multi-view pedestrian detection and tracking on different benchmark datasets.
| no_new_dataset | 0.948202 |
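To illustrate the last stage of the pipeline above, a Kalman filter that predicts a tracked pedestrian's next position from noisy detections, here is a minimal constant-velocity filter in NumPy. The motion model, the noise covariances, and the simulated detections are assumptions for the example; in the paper the detections come from Gaussian-mixture background subtraction and a PHOG-based neural classifier.

```python
import numpy as np

# State [x, y, vx, vy]; we observe noisy positions [x, y] from the detector.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)      # process noise (assumed)
R = 4.0 * np.eye(2)       # measurement noise (assumed)

x = np.zeros(4)           # initial state
P = 100.0 * np.eye(4)     # initial uncertainty

def kalman_step(x, P, z):
    # Predict the next state with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the detection z = [x_obs, y_obs].
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
true_pos = np.cumsum(np.tile([2.0, 1.0], (30, 1)), axis=0)   # pedestrian walking diagonally
for z in true_pos + rng.normal(0, 2.0, true_pos.shape):      # noisy detections
    x, P = kalman_step(x, P, z)
print("estimated position:", np.round(x[:2], 1), "estimated velocity:", np.round(x[2:], 2))
```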
1704.06382 | Holger Roth | Holger R. Roth, Hirohisa Oda, Yuichiro Hayashi, Masahiro Oda, Natsuki
Shimizu, Michitaka Fujiwara, Kazunari Misawa, Kensaku Mori | Hierarchical 3D fully convolutional networks for multi-organ
segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in 3D fully convolutional networks (FCN) have made it
feasible to produce dense voxel-wise predictions of full volumetric images. In
this work, we show that a multi-class 3D FCN trained on manually labeled CT
scans of seven abdominal structures (artery, vein, liver, spleen, stomach,
gallbladder, and pancreas) can achieve competitive segmentation results, while
avoiding the need for handcrafting features or training organ-specific models.
To this end, we propose a two-stage, coarse-to-fine approach that trains an FCN
model to roughly delineate the organs of interest in the first stage (seeing
$\sim$40% of the voxels within a simple, automatically generated binary mask of
the patient's body). We then use these predictions of the first-stage FCN to
define a candidate region that will be used to train a second FCN. This step
reduces the number of voxels the FCN has to classify to $\sim$10% while
maintaining a high recall of $>$99%. This second-stage FCN can now focus on
more detailed segmentation of the organs. We respectively utilize training and
validation sets consisting of 281 and 50 clinical CT images. Our hierarchical
approach provides an improved Dice score of 7.5 percentage points per organ on
average in our validation set. We furthermore test our models on a completely
unseen data collection acquired at a different hospital that includes 150 CT
scans with three anatomical labels (liver, spleen, and pancreas). In such
challenging organs as the pancreas, our hierarchical approach improves the mean
Dice score from 68.5 to 82.2%, achieving the highest reported average score on
this dataset.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 03:05:15 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Roth",
"Holger R.",
""
],
[
"Oda",
"Hirohisa",
""
],
[
"Hayashi",
"Yuichiro",
""
],
[
"Oda",
"Masahiro",
""
],
[
"Shimizu",
"Natsuki",
""
],
[
"Fujiwara",
"Michitaka",
""
],
[
"Misawa",
"Kazunari",
""
],
[
"Mori",
"Kensaku",
""
]
] | TITLE: Hierarchical 3D fully convolutional networks for multi-organ
segmentation
ABSTRACT: Recent advances in 3D fully convolutional networks (FCN) have made it
feasible to produce dense voxel-wise predictions of full volumetric images. In
this work, we show that a multi-class 3D FCN trained on manually labeled CT
scans of seven abdominal structures (artery, vein, liver, spleen, stomach,
gallbladder, and pancreas) can achieve competitive segmentation results, while
avoiding the need for handcrafting features or training organ-specific models.
To this end, we propose a two-stage, coarse-to-fine approach that trains an FCN
model to roughly delineate the organs of interest in the first stage (seeing
$\sim$40% of the voxels within a simple, automatically generated binary mask of
the patient's body). We then use these predictions of the first-stage FCN to
define a candidate region that will be used to train a second FCN. This step
reduces the number of voxels the FCN has to classify to $\sim$10% while
maintaining a high recall of $>$99%. This second-stage FCN can now focus on
more detailed segmentation of the organs. We respectively utilize training and
validation sets consisting of 281 and 50 clinical CT images. Our hierarchical
approach provides an improved Dice score of 7.5 percentage points per organ on
average in our validation set. We furthermore test our models on a completely
unseen data collection acquired at a different hospital that includes 150 CT
scans with three anatomical labels (liver, spleen, and pancreas). In such
challenging organs as the pancreas, our hierarchical approach improves the mean
Dice score from 68.5 to 82.2%, achieving the highest reported average score on
this dataset.
| no_new_dataset | 0.94079 |
1704.06392 | Mohamed Elawady | Mohamed Elawady, Olivier Alata, Christophe Ducottet, Cecile Barat,
Philippe Colantoni | Multiple Reflection Symmetry Detection via Linear-Directional Kernel
Density Estimation | Submitted to CAIP 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetry is an important composition feature, identified by investigating
similar sides within an image plane, and it plays a crucial role in recognizing
both man-made and natural objects. Recent symmetry detection approaches used a
smoothing kernel over different voting maps in the polar coordinate system to
detect symmetry peaks, which splits the regions of symmetry axis candidates in
an inefficient way. We propose a reliable voting representation based on
weighted linear-directional kernel density estimation to detect multiple
symmetries over challenging real-world and synthetic images. Experimental
evaluation on two public datasets demonstrates the superior performance of the
proposed algorithm in detecting global symmetry axes with respect to the major
image shapes.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 04:15:15 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Elawady",
"Mohamed",
""
],
[
"Alata",
"Olivier",
""
],
[
"Ducottet",
"Christophe",
""
],
[
"Barat",
"Cecile",
""
],
[
"Colantoni",
"Philippe",
""
]
] | TITLE: Multiple Reflection Symmetry Detection via Linear-Directional Kernel
Density Estimation
ABSTRACT: Symmetry is an important composition feature, identified by investigating
similar sides within an image plane, and it plays a crucial role in recognizing
both man-made and natural objects. Recent symmetry detection approaches used a
smoothing kernel over different voting maps in the polar coordinate system to
detect symmetry peaks, which splits the regions of symmetry axis candidates in
an inefficient way. We propose a reliable voting representation based on
weighted linear-directional kernel density estimation to detect multiple
symmetries over challenging real-world and synthetic images. Experimental
evaluation on two public datasets demonstrates the superior performance of the
proposed algorithm in detecting global symmetry axes with respect to the major
image shapes.
| no_new_dataset | 0.953708 |
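A rough sketch of the voting representation described above: candidate axes vote in a polar (angle, displacement) space, and a weighted linear-directional kernel, a von Mises-style kernel on the periodic angle paired with a Gaussian kernel on the displacement, turns the votes into a smooth density whose peaks give the symmetry axes. The kernel bandwidths and the synthetic votes are assumptions for the example.

```python
import numpy as np

def linear_directional_kde(votes, weights, thetas, rhos, kappa=20.0, h=5.0):
    """votes: (N, 2) array of (theta, rho) axis candidates; weights: (N,).
    Evaluates a weighted KDE on a (theta, rho) grid using a von Mises kernel
    for the periodic angle and a Gaussian kernel for the displacement."""
    t, r = np.meshgrid(thetas, rhos, indexing="ij")
    density = np.zeros_like(t)
    for (theta_i, rho_i), w in zip(votes, weights):
        ang = np.exp(kappa * np.cos(t - theta_i))          # directional kernel (unnormalised)
        lin = np.exp(-0.5 * ((r - rho_i) / h) ** 2)        # linear kernel
        density += w * ang * lin
    return density / (weights.sum() + 1e-12)

rng = np.random.default_rng(0)
# Hypothetical votes clustered around two symmetry axes.
votes = np.concatenate([rng.normal([0.3, 40.0], [0.05, 2.0], (50, 2)),
                        rng.normal([1.8, 90.0], [0.05, 2.0], (30, 2))])
weights = rng.random(len(votes))
thetas = np.linspace(0, np.pi, 180)
rhos = np.linspace(0, 120, 121)
d = linear_directional_kde(votes, weights, thetas, rhos)
i, j = np.unravel_index(np.argmax(d), d.shape)
print("strongest axis: theta=%.2f rad, rho=%.1f px" % (thetas[i], rhos[j]))
```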
1704.06456 | Qianru Sun | Qianru Sun, Bernt Schiele and Mario Fritz | A Domain Based Approach to Social Relation Recognition | To appear in CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Social relations are the foundation of human daily life. Developing
techniques to analyze such relations from visual data bears great potential to
build machines that better understand us and are capable of interacting with us
at a social level. Previous investigations have remained partial due to the
overwhelming diversity and complexity of the topic and consequently have only
focused on a handful of social relations. In this paper, we argue that the
domain-based theory from social psychology is a great starting point to
systematically approach this problem. The theory provides coverage of all
aspects of social relations and equally is concrete and predictive about the
visual attributes and behaviors defining the relations included in each domain.
We provide the first dataset built on this holistic conceptualization of social
life that is composed of a hierarchical label space of social domains and
social relations. We also contribute the first models to recognize such domains
and relations and find superior performance for attribute based features.
Beyond the encouraging performance of the attribute based approach, we also
find interpretable features that are in accordance with the predictions from
social psychology literature. Beyond our findings, we believe that our
contributions more tightly interleave visual recognition and social psychology
theory that has the potential to complement the theoretical work in the area
with empirical and data-driven models of social life.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 09:27:32 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Sun",
"Qianru",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Fritz",
"Mario",
""
]
] | TITLE: A Domain Based Approach to Social Relation Recognition
ABSTRACT: Social relations are the foundation of human daily life. Developing
techniques to analyze such relations from visual data bears great potential to
build machines that better understand us and are capable of interacting with us
at a social level. Previous investigations have remained partial due to the
overwhelming diversity and complexity of the topic and consequently have only
focused on a handful of social relations. In this paper, we argue that the
domain-based theory from social psychology is a great starting point to
systematically approach this problem. The theory provides coverage of all
aspects of social relations and equally is concrete and predictive about the
visual attributes and behaviors defining the relations included in each domain.
We provide the first dataset built on this holistic conceptualization of social
life that is composed of a hierarchical label space of social domains and
social relations. We also contribute the first models to recognize such domains
and relations and find superior performance for attribute based features.
Beyond the encouraging performance of the attribute based approach, we also
find interpretable features that are in accordance with the predictions from
social psychology literature. Beyond our findings, we believe that our
contributions more tightly interleave visual recognition and social psychology
theory that has the potential to complement the theoretical work in the area
with empirical and data-driven models of social life.
| new_dataset | 0.958187 |
1704.06569 | Caiyun Huang | Caiyun Huang, Peng Zhang, Junpeng Liu, Yong Sun, Xueqiang Zou | SFCSD: A Self-Feedback Correction System for DNS Based on Active and
Passive Measurement | 7 pages, 3 figures, 7 tables, submitted to GlobeCOM 2017 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Domain Name System (DNS), one of the most important pieces of
infrastructure in the Internet, is vulnerable to attacks because its designers
did not take security issues into consideration at the beginning. The defects
of DNS may prevent users from accessing websites and, what is worse, may cause
them huge economic losses.
In order to correct wrong DNS resource records, we propose a
Self-Feedback Correction System for DNS (SFCSD), which can find and track a
large number of common websites' domain name and IP address correct
correspondences to provide users with a real-time auto-updated correct (IP,
Domain) binary tuple list. By matching specific strings with SSL, DNS and HTTP
traffic passively, filtering with the CDN CNAME and non-homepage URL feature
strings, verifying with webpage fingerprint algorithm, SFCSD obtains a large
number of highly possibly correct IP addresses to make an active manual
correction in the end. Its self-feedback mechanism can expand search range and
improve performance.
Experiments show that, SFCSD can achieve 94.3% precision and 93.07% recall
rate with the optimal threshold selection in the test dataset. It has 8Gbps
processing speed stand-alone to find almost 1000 possibly correct (IP, Domain)
per day for the each specific string and to correct almost 200.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 14:41:10 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Huang",
"Caiyun",
""
],
[
"Zhang",
"Peng",
""
],
[
"Liu",
"Junpeng",
""
],
[
"Sun",
"Yong",
""
],
[
"Zou",
"Xueqiang",
""
]
] | TITLE: SFCSD: A Self-Feedback Correction System for DNS Based on Active and
Passive Measurement
ABSTRACT: The Domain Name System (DNS), one of the most important pieces of
infrastructure in the Internet, is vulnerable to attacks because its designers
did not take security issues into consideration at the beginning. The defects
of DNS may prevent users from accessing websites and, what is worse, may cause
them huge economic losses.
In order to correct wrong DNS resource records, we propose a
Self-Feedback Correction System for DNS (SFCSD), which can find and track a
large number of common websites' domain name and IP address correct
correspondences to provide users with a real-time auto-updated correct (IP,
Domain) binary tuple list. By matching specific strings with SSL, DNS and HTTP
traffic passively, filtering with the CDN CNAME and non-homepage URL feature
strings, verifying with webpage fingerprint algorithm, SFCSD obtains a large
number of highly possibly correct IP addresses to make an active manual
correction in the end. Its self-feedback mechanism can expand search range and
improve performance.
Experiments show that SFCSD can achieve 94.3% precision and a 93.07% recall
rate with the optimal threshold selection on the test dataset. It has a
stand-alone processing speed of 8Gbps, finding almost 1000 possibly correct
(IP, Domain) pairs per day for each specific string and correcting almost 200.
| no_new_dataset | 0.940735 |
1704.06591 | Ahmet Iscen | Ahmet Iscen, Giorgos Tolias, Yannis Avrithis, Teddy Furon, Ondrej Chum | Panorama to panorama matching for location recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Location recognition is commonly treated as visual instance retrieval on
"street view" imagery. The dataset items and queries are panoramic views, i.e.
groups of images taken at a single location. This work introduces a novel
panorama-to-panorama matching process, either by aggregating features of
individual images in a group or by explicitly constructing a larger panorama.
In either case, multiple views are used as queries. We reach near perfect
location recognition on a standard benchmark with only four query views.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 15:23:29 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Iscen",
"Ahmet",
""
],
[
"Tolias",
"Giorgos",
""
],
[
"Avrithis",
"Yannis",
""
],
[
"Furon",
"Teddy",
""
],
[
"Chum",
"Ondrej",
""
]
] | TITLE: Panorama to panorama matching for location recognition
ABSTRACT: Location recognition is commonly treated as visual instance retrieval on
"street view" imagery. The dataset items and queries are panoramic views, i.e.
groups of images taken at a single location. This work introduces a novel
panorama-to-panorama matching process, either by aggregating features of
individual images in a group or by explicitly constructing a larger panorama.
In either case, multiple views are used as queries. We reach near perfect
location recognition on a standard benchmark with only four query views.
| no_new_dataset | 0.94743 |
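A minimal sketch of one of the two strategies in the record above: aggregate the descriptors of the images in a panoramic group into a single vector, then match panorama to panorama by similarity of the aggregated vectors. The random descriptors and the simple sum-and-renormalise aggregation are assumptions for the example.

```python
import numpy as np

def l2n(v):
    """L2-normalise along the last axis."""
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)

def aggregate_panorama(image_descriptors):
    """Sum the L2-normalised per-image descriptors of a location's view group
    and renormalise, producing one vector per panorama."""
    return l2n(l2n(np.asarray(image_descriptors)).sum(axis=0))

rng = np.random.default_rng(0)
base = rng.normal(size=128)                                                 # "scene" appearance
db_pano    = aggregate_panorama(base + 0.3 * rng.normal(size=(12, 128)))   # full database panorama
query_pano = aggregate_panorama(base + 0.3 * rng.normal(size=(4, 128)))    # only four query views
other_pano = aggregate_panorama(rng.normal(size=(12, 128)))                # unrelated location
print("same place similarity:", float(query_pano @ db_pano))
print("different place similarity:", float(query_pano @ other_pano))
```

Since the aggregated vectors are unit-normalised, the dot product is a cosine similarity, so location recognition reduces to a nearest-neighbour search over the aggregated panorama vectors.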
1704.06610 | Jose Oramas | Jose Oramas and Luc De Raedt and Tinne Tuytelaars | Context-based Object Viewpoint Estimation: A 2D Relational Approach | Computer Vision and Image Understanding (CVIU) | null | 10.1016/j.cviu.2017.04.005 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of object viewpoint estimation has been a challenge since the early
days of computer vision. To estimate the viewpoint (or pose) of an object,
people have mostly looked at object intrinsic features, such as shape or
appearance. Surprisingly, informative features provided by other, extrinsic
elements in the scene, have so far mostly been ignored. At the same time,
contextual cues have been proven to be of great benefit for related tasks such
as object detection or action recognition. In this paper, we explore how
information from other objects in the scene can be exploited for viewpoint
estimation. In particular, we look at object configurations by following a
relational neighbor-based approach for reasoning about object relations. We
show that, starting from noisy object detections and viewpoint estimates,
exploiting the estimated viewpoint and location of other objects in the scene
can lead to improved object viewpoint predictions. Experiments on the KITTI
dataset demonstrate that object configurations can indeed be used as a
complementary cue to appearance-based viewpoint estimation. Our analysis
reveals that the proposed context-based method can improve object viewpoint
estimation by reducing specific types of viewpoint estimation errors commonly
made by methods that only consider local information. Moreover, considering
contextual information produces superior performance in scenes where a high
number of object instances occur. Finally, our results suggest that following
a cautious relational neighbor formulation brings improvements over its
aggressive counterpart for the task of object viewpoint estimation.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 15:55:54 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Oramas",
"Jose",
""
],
[
"De Raedt",
"Luc",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] | TITLE: Context-based Object Viewpoint Estimation: A 2D Relational Approach
ABSTRACT: The task of object viewpoint estimation has been a challenge since the early
days of computer vision. To estimate the viewpoint (or pose) of an object,
people have mostly looked at object intrinsic features, such as shape or
appearance. Surprisingly, informative features provided by other, extrinsic
elements in the scene, have so far mostly been ignored. At the same time,
contextual cues have been proven to be of great benefit for related tasks such
as object detection or action recognition. In this paper, we explore how
information from other objects in the scene can be exploited for viewpoint
estimation. In particular, we look at object configurations by following a
relational neighbor-based approach for reasoning about object relations. We
show that, starting from noisy object detections and viewpoint estimates,
exploiting the estimated viewpoint and location of other objects in the scene
can lead to improved object viewpoint predictions. Experiments on the KITTI
dataset demonstrate that object configurations can indeed be used as a
complementary cue to appearance-based viewpoint estimation. Our analysis
reveals that the proposed context-based method can improve object viewpoint
estimation by reducing specific types of viewpoint estimation errors commonly
made by methods that only consider local information. Moreover, considering
contextual information produces superior performance in scenes where a high
number of object instances occur. Finally, our results suggest that following
a cautious relational neighbor formulation brings improvements over its
aggressive counterpart for the task of object viewpoint estimation.
| no_new_dataset | 0.948106 |
1704.06619 | Arman Cohan | Arman Cohan and Nazli Goharian | Scientific Article Summarization Using Citation-Context and Article's
Discourse Structure | EMNLP 2015 | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a summarization approach for scientific articles which takes
advantage of citation-context and the document discourse model. While citations
have been previously used in generating scientific summaries, they lack the
related context from the referenced article and therefore do not accurately
reflect the article's content. Our method overcomes the problem of
inconsistency between the citation summary and the article's content by
providing context for each citation. We also leverage the inherent scientific
article's discourse for producing better summaries. We show that our proposed
method effectively improves over existing summarization approaches (greater
than 30% improvement over the best performing baseline) in terms of
\textsc{Rouge} scores on the TAC2014 scientific summarization dataset. While the
dataset we use for evaluation is in the biomedical domain, most of our
approaches are general and therefore adaptable to other domains.
| [
{
"version": "v1",
"created": "Fri, 21 Apr 2017 16:17:58 GMT"
}
] | 2017-04-24T00:00:00 | [
[
"Cohan",
"Arman",
""
],
[
"Goharian",
"Nazli",
""
]
] | TITLE: Scientific Article Summarization Using Citation-Context and Article's
Discourse Structure
ABSTRACT: We propose a summarization approach for scientific articles which takes
advantage of citation-context and the document discourse model. While citations
have been previously used in generating scientific summaries, they lack the
related context from the referenced article and therefore do not accurately
reflect the article's content. Our method overcomes the problem of
inconsistency between the citation summary and the article's content by
providing context for each citation. We also leverage the inherent scientific
article's discourse for producing better summaries. We show that our proposed
method effectively improves over existing summarization approaches (greater
than 30% improvement over the best performing baseline) in terms of
\textsc{Rouge} scores on the TAC2014 scientific summarization dataset. While the
dataset we use for evaluation is in the biomedical domain, most of our
approaches are general and therefore adaptable to other domains.
| no_new_dataset | 0.949763 |
1512.03155 | Erkan Bostanci | Erkan Bostanci | Enhanced image feature coverage: Key-point selection using genetic
algorithms | 14 pages, journal | null | 10.1080/13682199.2016.1254939 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coverage of image features play an important role in many vision algorithms
since their distribution affect the estimated homography. This paper presents a
Genetic Algorithm (GA) in order to select the optimal set of features yielding
maximum coverage of the image which is measured by a robust method based on
spatial statistics. It is shown with statistical tests on two datasets that the
metric yields better coverage and this is also confirmed by an accuracy test on
the computed homography for the original set and the newly selected set of
features. Results have demonstrated that the new set has similar performance in
terms of the accuracy of the computed homography with the original one with an
extra benefit of using fewer features, ultimately reducing the time
required for descriptor calculation and matching.
| [
{
"version": "v1",
"created": "Thu, 10 Dec 2015 06:51:28 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Bostanci",
"Erkan",
""
]
] | TITLE: Enhanced image feature coverage: Key-point selection using genetic
algorithms
ABSTRACT: Coverage of image features plays an important role in many vision algorithms
since their distribution affects the estimated homography. This paper presents a
Genetic Algorithm (GA) in order to select the optimal set of features yielding
maximum coverage of the image which is measured by a robust method based on
spatial statistics. It is shown with statistical tests on two datasets that the
metric yields better coverage and this is also confirmed by an accuracy test on
the computed homography for the original set and the newly selected set of
features. Results have demonstrated that the new set has similar performance in
terms of the accuracy of the computed homography with the original one with an
extra benefit of using fewer features, ultimately reducing the time
required for descriptor calculation and matching.
| no_new_dataset | 0.953837 |
1610.07448 | Simone Scardapane | Simone Scardapane and Paolo Di Lorenzo | A Framework for Parallel and Distributed Training of Neural Networks | Published on Neural Networks (Elsevier), in press | null | 10.1016/j.neunet.2017.04.004 | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to develop a general framework for training neural
networks (NNs) in a distributed environment, where training data is partitioned
over a set of agents that communicate with each other through a sparse,
possibly time-varying, connectivity pattern. In such a distributed scenario, the
training problem can be formulated as the (regularized) optimization of a
non-convex social cost function, given by the sum of local (non-convex) costs,
where each agent contributes with a single error term defined with respect to
its local dataset. To devise a flexible and efficient solution, we customize a
recently proposed framework for non-convex optimization over networks, which
hinges on a (primal) convexification-decomposition technique to handle
non-convexity, and a dynamic consensus procedure to diffuse information among
the agents. Several typical choices for the training criterion (e.g., squared
loss, cross entropy, etc.) and regularization (e.g., $\ell_2$ norm, sparsity
inducing penalties, etc.) are included in the framework and explored along the
paper. Convergence to a stationary solution of the social non-convex problem is
guaranteed under mild assumptions. Additionally, we show a principled way
allowing each agent to exploit a possible multi-core architecture (e.g., a
local cloud) in order to parallelize its local optimization step, resulting in
strategies that are both distributed (across the agents) and parallel (inside
each agent) in nature. A comprehensive set of experimental results validate the
proposed approach.
| [
{
"version": "v1",
"created": "Mon, 24 Oct 2016 14:58:56 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jan 2017 11:00:58 GMT"
},
{
"version": "v3",
"created": "Thu, 20 Apr 2017 08:55:19 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Scardapane",
"Simone",
""
],
[
"Di Lorenzo",
"Paolo",
""
]
] | TITLE: A Framework for Parallel and Distributed Training of Neural Networks
ABSTRACT: The aim of this paper is to develop a general framework for training neural
networks (NNs) in a distributed environment, where training data is partitioned
over a set of agents that communicate with each other through a sparse,
possibly time-varying, connectivity pattern. In such a distributed scenario, the
training problem can be formulated as the (regularized) optimization of a
non-convex social cost function, given by the sum of local (non-convex) costs,
where each agent contributes with a single error term defined with respect to
its local dataset. To devise a flexible and efficient solution, we customize a
recently proposed framework for non-convex optimization over networks, which
hinges on a (primal) convexification-decomposition technique to handle
non-convexity, and a dynamic consensus procedure to diffuse information among
the agents. Several typical choices for the training criterion (e.g., squared
loss, cross entropy, etc.) and regularization (e.g., $\ell_2$ norm, sparsity
inducing penalties, etc.) are included in the framework and explored along the
paper. Convergence to a stationary solution of the social non-convex problem is
guaranteed under mild assumptions. Additionally, we show a principled way
allowing each agent to exploit a possible multi-core architecture (e.g., a
local cloud) in order to parallelize its local optimization step, resulting in
strategies that are both distributed (across the agents) and parallel (inside
each agent) in nature. A comprehensive set of experimental results validate the
proposed approach.
| no_new_dataset | 0.943556 |
1612.03925 | Jose Dolz | J. Dolz, C. Desrosiers, I. Ben Ayed | 3D fully convolutional networks for subcortical segmentation in MRI: A
large-scale study | Accepted in the special issue of Neuroimage: "Brain Segmentation and
Parcellation" | null | 10.1016/j.neuroimage.2017.04.039 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study investigates a 3D and fully convolutional neural network (CNN) for
subcortical brain structure segmentation in MRI. 3D CNN architectures have been
generally avoided due to their computational and memory requirements during
inference. We address the problem via small kernels, allowing deeper
architectures. We further model both local and global context by embedding
intermediate-layer outputs in the final prediction, which encourages
consistency between features extracted at different scales and embeds
fine-grained information directly in the segmentation process. Our model is
efficiently trained end-to-end on a graphics processing unit (GPU), in a single
stage, exploiting the dense inference capabilities of fully CNNs.
We performed comprehensive experiments over two publicly available datasets.
First, we demonstrate a state-of-the-art performance on the ISBR dataset. Then,
we report a {\em large-scale} multi-site evaluation over 1112 unregistered
subject datasets acquired from 17 different sites (ABIDE dataset), with ages
ranging from 7 to 64 years, showing that our method is robust to various
acquisition protocols, demographics and clinical factors. Our method yielded
segmentations that are highly consistent with a standard atlas-based approach,
while running in a fraction of the time needed by atlas-based methods and
avoiding registration/normalization steps. This makes it convenient for massive
multi-site neuroanatomical imaging studies. To the best of our knowledge, our
work is the first to study subcortical structure segmentation on such
large-scale and heterogeneous data.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 21:09:06 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2017 02:03:35 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Dolz",
"J.",
""
],
[
"Desrosiers",
"C.",
""
],
[
"Ayed",
"I. Ben",
""
]
] | TITLE: 3D fully convolutional networks for subcortical segmentation in MRI: A
large-scale study
ABSTRACT: This study investigates a 3D and fully convolutional neural network (CNN) for
subcortical brain structure segmentation in MRI. 3D CNN architectures have been
generally avoided due to their computational and memory requirements during
inference. We address the problem via small kernels, allowing deeper
architectures. We further model both local and global context by embedding
intermediate-layer outputs in the final prediction, which encourages
consistency between features extracted at different scales and embeds
fine-grained information directly in the segmentation process. Our model is
efficiently trained end-to-end on a graphics processing unit (GPU), in a single
stage, exploiting the dense inference capabilities of fully CNNs.
We performed comprehensive experiments over two publicly available datasets.
First, we demonstrate a state-of-the-art performance on the ISBR dataset. Then,
we report a {\em large-scale} multi-site evaluation over 1112 unregistered
subject datasets acquired from 17 different sites (ABIDE dataset), with ages
ranging from 7 to 64 years, showing that our method is robust to various
acquisition protocols, demographics and clinical factors. Our method yielded
segmentations that are highly consistent with a standard atlas-based approach,
while running in a fraction of the time needed by atlas-based methods and
avoiding registration/normalization steps. This makes it convenient for massive
multi-site neuroanatomical imaging studies. To the best of our knowledge, our
work is the first to study subcortical structure segmentation on such
large-scale and heterogeneous data.
| no_new_dataset | 0.950227 |
1701.08251 | Nasrin Mostafazadeh | Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley,
Jianfeng Gao, Georgios P. Spithourakis, Lucy Vanderwende | Image-Grounded Conversations: Multimodal Context for Natural Question
and Response Generation | null | null | null | null | cs.CL cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The popularity of image sharing on social media and the engagement it creates
between users reflects the important role that visual context plays in everyday
conversations. We present a novel task, Image-Grounded Conversations (IGC), in
which natural-sounding conversations are generated about a shared image. To
benchmark progress, we introduce a new multiple-reference dataset of
crowd-sourced, event-centric conversations on images. IGC falls on the
continuum between chit-chat and goal-directed conversation models, where visual
grounding constrains the topic of conversation to event-driven utterances.
Experiments with models trained on social media data show that the combination
of visual and textual context enhances the quality of generated conversational
turns. In human evaluation, the gap between human performance and that of both
neural and retrieval architectures suggests that multi-modal IGC presents an
interesting challenge for dialogue research.
| [
{
"version": "v1",
"created": "Sat, 28 Jan 2017 05:06:11 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2017 00:36:35 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Mostafazadeh",
"Nasrin",
""
],
[
"Brockett",
"Chris",
""
],
[
"Dolan",
"Bill",
""
],
[
"Galley",
"Michel",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Spithourakis",
"Georgios P.",
""
],
[
"Vanderwende",
"Lucy",
""
]
] | TITLE: Image-Grounded Conversations: Multimodal Context for Natural Question
and Response Generation
ABSTRACT: The popularity of image sharing on social media and the engagement it creates
between users reflects the important role that visual context plays in everyday
conversations. We present a novel task, Image-Grounded Conversations (IGC), in
which natural-sounding conversations are generated about a shared image. To
benchmark progress, we introduce a new multiple-reference dataset of
crowd-sourced, event-centric conversations on images. IGC falls on the
continuum between chit-chat and goal-directed conversation models, where visual
grounding constrains the topic of conversation to event-driven utterances.
Experiments with models trained on social media data show that the combination
of visual and textual context enhances the quality of generated conversational
turns. In human evaluation, the gap between human performance and that of both
neural and retrieval architectures suggests that multi-modal IGC presents an
interesting challenge for dialogue research.
| new_dataset | 0.959573 |
1704.02259 | Mireya Paredes Ms. | Mireya Paredes, Graham Riley and Mikel Lujan | Vectorization of Hybrid Breadth First Search on the Intel Xeon Phi | 9 pages | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | The Breadth-First Search (BFS) algorithm is an important building block for
graph analysis of large datasets. The BFS parallelisation has been shown to be
challenging because of its inherent characteristics, including irregular memory
access patterns, data dependencies and workload imbalance, which limit its
scalability. We investigate the optimisation and vectorisation of the hybrid
BFS (a combination of top-down and bottom-up approaches for BFS) on the Xeon
Phi, which has advanced vector processing capabilities. The results show that
our new implementation improves by 33\%, for a one million vertices graph,
compared to the state-of-the-art.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 15:12:05 GMT"
},
{
"version": "v2",
"created": "Thu, 20 Apr 2017 03:59:33 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Paredes",
"Mireya",
""
],
[
"Riley",
"Graham",
""
],
[
"Lujan",
"Mikel",
""
]
] | TITLE: Vectorization of Hybrid Breadth First Search on the Intel Xeon Phi
ABSTRACT: The Breadth-First Search (BFS) algorithm is an important building block for
graph analysis of large datasets. The BFS parallelisation has been shown to be
challenging because of its inherent characteristics, including irregular memory
access patterns, data dependencies and workload imbalance, which limit its
scalability. We investigate the optimisation and vectorisation of the hybrid
BFS (a combination of top-down and bottom-up approaches for BFS) on the Xeon
Phi, which has advanced vector processing capabilities. The results show that
our new implementation improves by 33\%, for a one million vertices graph,
compared to the state-of-the-art.
| no_new_dataset | 0.948442 |
1704.05420 | Cem Subakan | Y. Cem Subakan, Paris Smaragdis | Diagonal RNNs in Symbolic Music Modeling | Submitted to Waspaa 2017 | null | null | null | cs.NE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new Recurrent Neural Network (RNN) architecture.
The novelty is simple: We use diagonal recurrent matrices instead of full. This
results in better test likelihood and faster convergence compared to regular
full RNNs in most of our experiments. We show the benefits of using diagonal
recurrent matrices with popularly used LSTM and GRU architectures as well as
with the vanilla RNN architecture, on four standard symbolic music datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 16:47:38 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2017 23:36:18 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Subakan",
"Y. Cem",
""
],
[
"Smaragdis",
"Paris",
""
]
] | TITLE: Diagonal RNNs in Symbolic Music Modeling
ABSTRACT: In this paper, we propose a new Recurrent Neural Network (RNN) architecture.
The novelty is simple: We use diagonal recurrent matrices instead of full. This
results in better test likelihood and faster convergence compared to regular
full RNNs in most of our experiments. We show the benefits of using diagonal
recurrent matrices with popularly used LSTM and GRU architectures as well as
with the vanilla RNN architecture, on four standard symbolic music datasets.
| no_new_dataset | 0.952926 |
1704.05860 | Junye Wang | Mihal Miu, Xiaokun Zhang, M. Ali Akber Dewan, Junye Wang | Aggregation and visualization of spatial data with application to
classification of land use and land cover | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aggregation and visualization of geographical data are an important part of
environmental data mining, environmental modelling, and agricultural
management. However, it is difficult to aggregate geospatial data in
various formats, such as maps, census and survey data. This paper presents a
framework named PlaniSphere, which can aggregate various geospatial
datasets and synthesize raw data. We developed an algorithm in PlaniSphere to
aggregate remote sensing images with census data for classification and
visualization of land use and land cover (LULC). The results show that the
framework is able to classify geospatial data sets of LULC from multiple
formats. National census data sets can be used for calibration of remote
sensing LULC classifications. This provides a new approach for the
classification of remote sensing data. This approach proposed in this paper
should be useful for LULC classification in environmental spatial analysis.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 18:01:29 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Miu",
"Mihal",
""
],
[
"Zhang",
"Xiaokun",
""
],
[
"Dewan",
"M. Ali Akber",
""
],
[
"Wang",
"Junye",
""
]
] | TITLE: Aggregation and visualization of spatial data with application to
classification of land use and land cover
ABSTRACT: Aggregation and visualization of geographical data are an important part of
environmental data mining, environmental modelling, and agricultural
management. However, it is difficult to aggregate geospatial data of the
various formats, such as maps, census and survey data. This paper presents a
framework named PlaniSphere, which can aggregate the various geospatial
datasets, and synthesizes raw data. We developed an algorithm in PlaniSphere to
aggregate remote sensing images with census data for classification and
visualization of land use and land cover (LULC). The results show that the
framework is able to classify geospatial data sets of LULC from multiple
formats. National census data sets can be used for calibration of remote
sensing LULC classifications. This provides a new approach for the
classification of remote sensing data. This approach proposed in this paper
should be useful for LULC classification in environmental spatial analysis.
| no_new_dataset | 0.949763 |
1704.05921 | Dat Tran | Dat Tran and Christof Teuscher | Memcapacitive Devices in Logic and Crossbar Applications | null | null | null | null | cs.ET | http://creativecommons.org/publicdomain/zero/1.0/ | Over the last decade, memristive devices have been widely adopted in
computing for various conventional and unconventional applications. While the
integration density, memory property, and nonlinear characteristics have many
benefits, reducing the energy consumption is limited by the resistive nature of
the devices. Memcapacitors would address that limitation while still having all
the benefits of memristors. Recent work has shown that with adjusted parameters
during the fabrication process, a metal-oxide device can indeed exhibit a
memcapacitive behavior. We introduce novel memcapacitive logic gates and
memcapacitive crossbar classifiers as a proof of concept that such applications
can outperform memristor-based architectures. The results illustrate that,
compared to memristive logic gates, our memcapacitive gates consume about 7x
less power. The memcapacitive crossbar classifier achieves similar
classification performance but reduces the power consumption by a factor of
about 1,500x for the MNIST dataset and a factor of about 1,000x for the
CIFAR-10 dataset compared to a memristive crossbar. Our simulation results
demonstrate that memcapacitive devices have great potential for both Boolean
logic and analog low-power applications.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 20:13:41 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Tran",
"Dat",
""
],
[
"Teuscher",
"Christof",
""
]
] | TITLE: Memcapacitive Devices in Logic and Crossbar Applications
ABSTRACT: Over the last decade, memristive devices have been widely adopted in
computing for various conventional and unconventional applications. While the
integration density, memory property, and nonlinear characteristics have many
benefits, reducing the energy consumption is limited by the resistive nature of
the devices. Memcapacitors would address that limitation while still having all
the benefits of memristors. Recent work has shown that with adjusted parameters
during the fabrication process, a metal-oxide device can indeed exhibit a
memcapacitive behavior. We introduce novel memcapacitive logic gates and
memcapacitive crossbar classifiers as a proof of concept that such applications
can outperform memristor-based architectures. The results illustrate that,
compared to memristive logic gates, our memcapacitive gates consume about 7x
less power. The memcapacitive crossbar classifier achieves similar
classification performance but reduces the power consumption by a factor of
about 1,500x for the MNIST dataset and a factor of about 1,000x for the
CIFAR-10 dataset compared to a memristive crossbar. Our simulation results
demonstrate that memcapacitive devices have great potential for both Boolean
logic and analog low-power applications.
| no_new_dataset | 0.948775 |
1704.05939 | Karel Lenc | Vassileios Balntas and Karel Lenc and Andrea Vedaldi and Krystian
Mikolajczyk | HPatches: A benchmark and evaluation of handcrafted and learned local
descriptors | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel benchmark for evaluating local image
descriptors. We demonstrate that the existing datasets and evaluation protocols
do not specify unambiguously all aspects of evaluation, leading to ambiguities
and inconsistencies in results reported in the literature. Furthermore, these
datasets are nearly saturated due to the recent improvements in local
descriptors obtained by learning them from large annotated datasets. Therefore,
we introduce a new large dataset suitable for training and testing modern
descriptors, together with strictly defined evaluation protocols in several
tasks such as matching, retrieval and classification. This allows for more
realistic, and thus more reliable comparisons in different application
scenarios. We evaluate the performance of several state-of-the-art descriptors
and analyse their properties. We show that a simple normalisation of
traditional hand-crafted descriptors can boost their performance to the level
of deep learning based descriptors within a realistic benchmark evaluation.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 21:37:03 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Balntas",
"Vassileios",
""
],
[
"Lenc",
"Karel",
""
],
[
"Vedaldi",
"Andrea",
""
],
[
"Mikolajczyk",
"Krystian",
""
]
] | TITLE: HPatches: A benchmark and evaluation of handcrafted and learned local
descriptors
ABSTRACT: In this paper, we propose a novel benchmark for evaluating local image
descriptors. We demonstrate that the existing datasets and evaluation protocols
do not specify unambiguously all aspects of evaluation, leading to ambiguities
and inconsistencies in results reported in the literature. Furthermore, these
datasets are nearly saturated due to the recent improvements in local
descriptors obtained by learning them from large annotated datasets. Therefore,
we introduce a new large dataset suitable for training and testing modern
descriptors, together with strictly defined evaluation protocols in several
tasks such as matching, retrieval and classification. This allows for more
realistic, and thus more reliable comparisons in different application
scenarios. We evaluate the performance of several state-of-the-art descriptors
and analyse their properties. We show that a simple normalisation of
traditional hand-crafted descriptors can boost their performance to the level
of deep learning based descriptors within a realistic benchmark evaluation.
| new_dataset | 0.966789 |
1704.05963 | Daniel R. Jiang | Daniel R. Jiang, Lina Al-Kanj, Warren B. Powell | Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds | 33 pages, 6 figures | null | null | null | math.OC cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monte Carlo Tree Search (MCTS), most famously used in game-play artificial
intelligence (e.g., the game of Go), is a well-known strategy for constructing
approximate solutions to sequential decision problems. Its primary innovation
is the use of a heuristic, known as a default policy, to obtain Monte Carlo
estimates of downstream values for states in a decision tree. This information
is used to iteratively expand the tree towards regions of states and actions
that an optimal policy might visit. However, to guarantee convergence to the
optimal action, MCTS requires the entire tree to be expanded asymptotically. In
this paper, we propose a new technique called Primal-Dual MCTS that utilizes
sampled information relaxation upper bounds on potential actions, creating the
possibility of "ignoring" parts of the tree that stem from highly suboptimal
choices. This allows us to prove that despite converging to a partial decision
tree in the limit, the recommended action from Primal-Dual MCTS is optimal. The
new approach shows significant promise when used to optimize the behavior of a
single driver navigating a graph while operating on a ride-sharing platform.
Numerical experiments on a real dataset of 7,000 trips in New Jersey suggest
that Primal-Dual MCTS improves upon standard MCTS by producing deeper decision
trees and exhibits a reduced sensitivity to the size of the action space.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 00:16:01 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Jiang",
"Daniel R.",
""
],
[
"Al-Kanj",
"Lina",
""
],
[
"Powell",
"Warren B.",
""
]
] | TITLE: Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds
ABSTRACT: Monte Carlo Tree Search (MCTS), most famously used in game-play artificial
intelligence (e.g., the game of Go), is a well-known strategy for constructing
approximate solutions to sequential decision problems. Its primary innovation
is the use of a heuristic, known as a default policy, to obtain Monte Carlo
estimates of downstream values for states in a decision tree. This information
is used to iteratively expand the tree towards regions of states and actions
that an optimal policy might visit. However, to guarantee convergence to the
optimal action, MCTS requires the entire tree to be expanded asymptotically. In
this paper, we propose a new technique called Primal-Dual MCTS that utilizes
sampled information relaxation upper bounds on potential actions, creating the
possibility of "ignoring" parts of the tree that stem from highly suboptimal
choices. This allows us to prove that despite converging to a partial decision
tree in the limit, the recommended action from Primal-Dual MCTS is optimal. The
new approach shows significant promise when used to optimize the behavior of a
single driver navigating a graph while operating on a ride-sharing platform.
Numerical experiments on a real dataset of 7,000 trips in New Jersey suggest
that Primal-Dual MCTS improves upon standard MCTS by producing deeper decision
trees and exhibits a reduced sensitivity to the size of the action space.
| no_new_dataset | 0.946646 |
1704.05972 | Leon Derczynski | Leon Derczynski and Kalina Bontcheva and Maria Liakata and Rob Procter
and Geraldine Wong Sak Hoi and Arkaitz Zubiaga | SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support
for rumours | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Media is full of false claims. Even Oxford Dictionaries named "post-truth" as
the word of 2016. This makes it more important than ever to build systems that
can identify the veracity of a story, and the kind of discourse there is around
it. RumourEval is a SemEval shared task that aims to identify and handle
rumours and reactions to them, in text. We present an annotation scheme, a
large dataset covering multiple topics - each having their own families of
claims and replies - and use these to pose two concrete challenges as well as
the results achieved by participants on these challenges.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 01:21:20 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Derczynski",
"Leon",
""
],
[
"Bontcheva",
"Kalina",
""
],
[
"Liakata",
"Maria",
""
],
[
"Procter",
"Rob",
""
],
[
"Hoi",
"Geraldine Wong Sak",
""
],
[
"Zubiaga",
"Arkaitz",
""
]
] | TITLE: SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support
for rumours
ABSTRACT: Media is full of false claims. Even Oxford Dictionaries named "post-truth" as
the word of 2016. This makes it more important than ever to build systems that
can identify the veracity of a story, and the kind of discourse there is around
it. RumourEval is a SemEval shared task that aims to identify and handle
rumours and reactions to them, in text. We present an annotation scheme, a
large dataset covering multiple topics - each having their own families of
claims and replies - and use these to pose two concrete challenges as well as
the results achieved by participants on these challenges.
| new_dataset | 0.95096 |
1704.05973 | Lin Wu | Tong Chen, Lin Wu, Xue Li, Jun Zhang, Hongzhi Yin, Yang Wang | Call Attention to Rumors: Deep Attention Based Recurrent Neural Networks
for Early Rumor Detection | 9 pages | null | null | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proliferation of social media in communication and information
dissemination has made it an ideal platform for spreading rumors. Automatically
debunking rumors at their stage of diffusion is known as \textit{early rumor
detection}, which refers to dealing with sequential posts regarding disputed
factual claims with certain variations and highly textual duplication over
time. Thus, identifying trending rumors demands an efficient yet flexible model
that is able to capture long-range dependencies among postings and produce
distinct representations for the accurate early detection. However, it is a
challenging task to apply conventional classification algorithms to rumor
detection at an early stage since they rely on hand-crafted features, which require
intensive manual efforts in the case of a large amount of posts. This paper
presents a deep attention model on the basis of recurrent neural networks (RNN)
to learn \textit{selectively} temporal hidden representations of sequential
posts for identifying rumors. The proposed model delves soft-attention into the
recurrence to simultaneously pool out distinct features with particular focus
and produce hidden representations that capture contextual variations of
relevant posts over time. Extensive experiments on real datasets collected from
social media websites demonstrate that (1) the deep attention based RNN model
outperforms state-of-the-arts that rely on hand-crafted features; (2) the
introduction of soft attention mechanism can effectively distill relevant parts
to rumors from original posts in advance; (3) the proposed method detects
rumors more quickly and accurately than competitors.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 01:22:57 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Chen",
"Tong",
""
],
[
"Wu",
"Lin",
""
],
[
"Li",
"Xue",
""
],
[
"Zhang",
"Jun",
""
],
[
"Yin",
"Hongzhi",
""
],
[
"Wang",
"Yang",
""
]
] | TITLE: Call Attention to Rumors: Deep Attention Based Recurrent Neural Networks
for Early Rumor Detection
ABSTRACT: The proliferation of social media in communication and information
dissemination has made it an ideal platform for spreading rumors. Automatically
debunking rumors at their stage of diffusion is known as \textit{early rumor
detection}, which refers to dealing with sequential posts regarding disputed
factual claims with certain variations and highly textual duplication over
time. Thus, identifying trending rumors demands an efficient yet flexible model
that is able to capture long-range dependencies among postings and produce
distinct representations for the accurate early detection. However, it is a
challenging task to apply conventional classification algorithms to rumor
detection at an early stage since they rely on hand-crafted features, which require
intensive manual efforts in the case of a large amount of posts. This paper
presents a deep attention model on the basis of recurrent neural networks (RNN)
to learn \textit{selectively} temporal hidden representations of sequential
posts for identifying rumors. The proposed model delves soft-attention into the
recurrence to simultaneously pool out distinct features with particular focus
and produce hidden representations that capture contextual variations of
relevant posts over time. Extensive experiments on real datasets collected from
social media websites demonstrate that (1) the deep attention based RNN model
outperforms state-of-the-arts that rely on hand-crafted features; (2) the
introduction of soft attention mechanism can effectively distill relevant parts
to rumors from original posts in advance; (3) the proposed method detects
rumors more quickly and accurately than competitors.
| no_new_dataset | 0.948822 |
1704.05982 | Tao Wu | Tao Wu and David Gleich | Retrospective Higher-Order Markov Processes for User Trails | null | null | null | null | cs.SI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Users form information trails as they browse the web, checkin with a
geolocation, rate items, or consume media. A common problem is to predict what
a user might do next for the purposes of guidance, recommendation, or
prefetching. First-order and higher-order Markov chains have been widely used
methods to study such sequences of data. First-order Markov chains are easy to
estimate, but lack accuracy when history matters. Higher-order Markov chains,
in contrast, have too many parameters and suffer from overfitting the training
data. Fitting these parameters with regularization and smoothing only offers
mild improvements. In this paper we propose the retrospective higher-order
Markov process (RHOMP) as a low-parameter model for such sequences. This model
is a special case of a higher-order Markov chain where the transitions depend
retrospectively on a single history state instead of an arbitrary combination
of history states. There are two immediate computational advantages: the number
of parameters is linear in the order of the Markov chain and the model can be
fit to large state spaces. Furthermore, by providing a specific structure to
the higher-order chain, RHOMPs improve the model accuracy by efficiently
utilizing history states without risks of overfitting the data. We demonstrate
how to estimate a RHOMP from data and we demonstrate the effectiveness of our
method on various real application datasets spanning geolocation data, review
sequences, and business locations. The RHOMP model uniformly outperforms
higher-order Markov chains, Kneser-Ney regularization, and tensor
factorizations in terms of prediction accuracy.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 02:14:17 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Wu",
"Tao",
""
],
[
"Gleich",
"David",
""
]
] | TITLE: Retrospective Higher-Order Markov Processes for User Trails
ABSTRACT: Users form information trails as they browse the web, checkin with a
geolocation, rate items, or consume media. A common problem is to predict what
a user might do next for the purposes of guidance, recommendation, or
prefetching. First-order and higher-order Markov chains have been widely used
methods to study such sequences of data. First-order Markov chains are easy to
estimate, but lack accuracy when history matters. Higher-order Markov chains,
in contrast, have too many parameters and suffer from overfitting the training
data. Fitting these parameters with regularization and smoothing only offers
mild improvements. In this paper we propose the retrospective higher-order
Markov process (RHOMP) as a low-parameter model for such sequences. This model
is a special case of a higher-order Markov chain where the transitions depend
retrospectively on a single history state instead of an arbitrary combination
of history states. There are two immediate computational advantages: the number
of parameters is linear in the order of the Markov chain and the model can be
fit to large state spaces. Furthermore, by providing a specific structure to
the higher-order chain, RHOMPs improve the model accuracy by efficiently
utilizing history states without risks of overfitting the data. We demonstrate
how to estimate a RHOMP from data and we demonstrate the effectiveness of our
method on various real application datasets spanning geolocation data, review
sequences, and business locations. The RHOMP model uniformly outperforms
higher-order Markov chains, Kneser-Ney regularization, and tensor
factorizations in terms of prediction accuracy.
| no_new_dataset | 0.951953 |
1704.06033 | Hongyoon Choi Dr | Hongyoon Choi, Kyong Hwan Jin | Predicting Cognitive Decline with Deep Learning of Brain Metabolism and
Amyloid Imaging | 24 pages | null | null | null | cs.CV cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For effective treatment of Alzheimer disease (AD), it is important to
identify subjects who are most likely to exhibit rapid cognitive decline.
Herein, we developed a novel framework based on a deep convolutional neural
network which can predict future cognitive decline in mild cognitive impairment
(MCI) patients using fluorodeoxyglucose and florbetapir positron emission
tomography (PET). The architecture of the network only relies on baseline PET
studies of AD and normal subjects as the training dataset. Feature extraction
and complicated image preprocessing including nonlinear warping are unnecessary
for our approach. Accuracy of prediction (84.2%) for conversion to AD in MCI
patients outperformed conventional feature-based quantification approaches. ROC
analyses revealed that performance of CNN-based approach was significantly
higher than that of the conventional quantification methods (p < 0.05). Output
scores of the network were strongly correlated with the longitudinal change in
cognitive measurements. These results show the feasibility of deep learning as
a tool for predicting disease outcome using brain images.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 07:33:18 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Choi",
"Hongyoon",
""
],
[
"Jin",
"Kyong Hwan",
""
]
] | TITLE: Predicting Cognitive Decline with Deep Learning of Brain Metabolism and
Amyloid Imaging
ABSTRACT: For effective treatment of Alzheimer disease (AD), it is important to
identify subjects who are most likely to exhibit rapid cognitive decline.
Herein, we developed a novel framework based on a deep convolutional neural
network which can predict future cognitive decline in mild cognitive impairment
(MCI) patients using fluorodeoxyglucose and florbetapir positron emission
tomography (PET). The architecture of the network only relies on baseline PET
studies of AD and normal subjects as the training dataset. Feature extraction
and complicated image preprocessing including nonlinear warping are unnecessary
for our approach. Accuracy of prediction (84.2%) for conversion to AD in MCI
patients outperformed conventional feature-based quantification approaches. ROC
analyses revealed that performance of CNN-based approach was significantly
higher than that of the conventional quantification methods (p < 0.05). Output
scores of the network were strongly correlated with the longitudinal change in
cognitive measurements. These results show the feasibility of deep learning as
a tool for predicting disease outcome using brain images.
| no_new_dataset | 0.945399 |
1704.06062 | Yehezkel Resheff | Yehezkel S. Resheff, Amit Mandelbaum, Daphna Weinshall | Every Untrue Label is Untrue in its Own Way: Controlling Error Type with
the Log Bilinear Loss | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has become the method of choice in many application domains of
machine learning in recent years, especially for multi-class classification
tasks. The most common loss function used in this context is the cross-entropy
loss, which reduces to the log loss in the typical case when there is a single
correct response label. While this loss is insensitive to the identity of the
assigned class in the case of misclassification, in practice it is often the
case that some errors may be more detrimental than others. Here we present the
bilinear-loss (and related log-bilinear-loss) which differentially penalizes
the different wrong assignments of the model. We thoroughly test this method
using standard models and benchmark image datasets. As one application, we show
the ability of this method to better contain error within the correct
super-class, in the hierarchically labeled CIFAR100 dataset, without affecting
the overall performance of the classifier.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 09:29:09 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Resheff",
"Yehezkel S.",
""
],
[
"Mandelbaum",
"Amit",
""
],
[
"Weinshall",
"Daphna",
""
]
] | TITLE: Every Untrue Label is Untrue in its Own Way: Controlling Error Type with
the Log Bilinear Loss
ABSTRACT: Deep learning has become the method of choice in many application domains of
machine learning in recent years, especially for multi-class classification
tasks. The most common loss function used in this context is the cross-entropy
loss, which reduces to the log loss in the typical case when there is a single
correct response label. While this loss is insensitive to the identity of the
assigned class in the case of misclassification, in practice it is often the
case that some errors may be more detrimental than others. Here we present the
bilinear-loss (and related log-bilinear-loss) which differentially penalizes
the different wrong assignments of the model. We thoroughly test this method
using standard models and benchmark image datasets. As one application, we show
the ability of this method to better contain error within the correct
super-class, in the hierarchically labeled CIFAR100 dataset, without affecting
the overall performance of the classifier.
| no_new_dataset | 0.945349 |
1704.06125 | Mathieu Cliche | Mathieu Cliche | BB_twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and
LSTMs | Published in Proceedings of SemEval-2017, 8 pages | null | null | null | cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we describe our attempt at producing a state-of-the-art Twitter
sentiment classifier using Convolutional Neural Networks (CNNs) and Long Short
Term Memory (LSTMs) networks. Our system leverages a large amount of unlabeled
data to pre-train word embeddings. We then use a subset of the unlabeled data
to fine tune the embeddings using distant supervision. The final CNNs and LSTMs
are trained on the SemEval-2017 Twitter dataset where the embeddings are
fine-tuned again. To boost performance we ensemble several CNNs and LSTMs
together.
Our approach achieved first rank on all of the five English subtasks amongst 40
teams.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 13:10:25 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Cliche",
"Mathieu",
""
]
] | TITLE: BB_twtr at SemEval-2017 Task 4: Twitter Sentiment Analysis with CNNs and
LSTMs
ABSTRACT: In this paper we describe our attempt at producing a state-of-the-art Twitter
sentiment classifier using Convolutional Neural Networks (CNNs) and Long Short
Term Memory (LSTMs) networks. Our system leverages a large amount of unlabeled
data to pre-train word embeddings. We then use a subset of the unlabeled data
to fine tune the embeddings using distant supervision. The final CNNs and LSTMs
are trained on the SemEval-2017 Twitter dataset where the embeddings are
fine-tuned again. To boost performance we ensemble several CNNs and LSTMs
together.
Our approach achieved first rank on all of the five English subtasks amongst 40
teams.
| no_new_dataset | 0.953708 |
1704.06254 | Shubham Tulsiani | Shubham Tulsiani, Tinghui Zhou, Alexei A. Efros, Jitendra Malik | Multi-view Supervision for Single-view Reconstruction via Differentiable
Ray Consistency | To appear at CVPR 2017. Project webpage :
https://shubhtuls.github.io/drc/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the notion of consistency between a 3D shape and a 2D observation
and propose a differentiable formulation which allows computing gradients of
the 3D shape given an observation from an arbitrary view. We do so by
reformulating view consistency using a differentiable ray consistency (DRC)
term. We show that this formulation can be incorporated in a learning framework
to leverage different types of multi-view observations e.g. foreground masks,
depth, color images, semantics etc. as supervision for learning single-view 3D
prediction. We present empirical analysis of our technique in a controlled
setting. We also show that this approach allows us to improve over existing
techniques for single-view reconstruction of objects from the PASCAL VOC
dataset.
| [
{
"version": "v1",
"created": "Thu, 20 Apr 2017 17:56:53 GMT"
}
] | 2017-04-21T00:00:00 | [
[
"Tulsiani",
"Shubham",
""
],
[
"Zhou",
"Tinghui",
""
],
[
"Efros",
"Alexei A.",
""
],
[
"Malik",
"Jitendra",
""
]
] | TITLE: Multi-view Supervision for Single-view Reconstruction via Differentiable
Ray Consistency
ABSTRACT: We study the notion of consistency between a 3D shape and a 2D observation
and propose a differentiable formulation which allows computing gradients of
the 3D shape given an observation from an arbitrary view. We do so by
reformulating view consistency using a differentiable ray consistency (DRC)
term. We show that this formulation can be incorporated in a learning framework
to leverage different types of multi-view observations e.g. foreground masks,
depth, color images, semantics etc. as supervision for learning single-view 3D
prediction. We present empirical analysis of our technique in a controlled
setting. We also show that this approach allows us to improve over existing
techniques for single-view reconstruction of objects from the PASCAL VOC
dataset.
| no_new_dataset | 0.949529 |
1504.05773 | Kitty Meeks | Jessica Enright and Kitty Meeks | Deleting edges to restrict the size of an epidemic | Author final version of article to appear in Algorithmica (funding
details updated from previous version) | null | null | null | cs.DS math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by applications in network epidemiology, we consider the problem of
determining whether it is possible to delete at most $k$ edges from a given
input graph (of small treewidth) so that the resulting graph avoids a set
$\mathcal{F}$ of forbidden subgraphs; of particular interest is the problem of
determining whether it is possible to delete at most $k$ edges so that the
resulting graph has no connected component of more than $h$ vertices, as this
bounds the worst-case size of an epidemic. While even this special case of the
problem is NP-complete in general (even when $h=3$), we provide evidence that
many of the real-world networks of interest are likely to have small treewidth,
and we describe an algorithm which solves the general problem in time
\genruntime ~on an input graph having $n$ vertices and whose treewidth is
bounded by a fixed constant $w$, if each of the subgraphs we wish to avoid has
at most $r$ vertices. For the special case in which we wish only to ensure that
no component has more than $h$ vertices, we improve on this to give an
algorithm running in time $O((wh)^{2w}n)$, which we have implemented and tested
on real datasets based on cattle movements.
| [
{
"version": "v1",
"created": "Wed, 22 Apr 2015 12:58:59 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jul 2015 11:26:21 GMT"
},
{
"version": "v3",
"created": "Tue, 10 May 2016 16:17:49 GMT"
},
{
"version": "v4",
"created": "Wed, 12 Apr 2017 12:45:24 GMT"
},
{
"version": "v5",
"created": "Wed, 19 Apr 2017 09:30:38 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Enright",
"Jessica",
""
],
[
"Meeks",
"Kitty",
""
]
] | TITLE: Deleting edges to restrict the size of an epidemic
ABSTRACT: Motivated by applications in network epidemiology, we consider the problem of
determining whether it is possible to delete at most $k$ edges from a given
input graph (of small treewidth) so that the resulting graph avoids a set
$\mathcal{F}$ of forbidden subgraphs; of particular interest is the problem of
determining whether it is possible to delete at most $k$ edges so that the
resulting graph has no connected component of more than $h$ vertices, as this
bounds the worst-case size of an epidemic. While even this special case of the
problem is NP-complete in general (even when $h=3$), we provide evidence that
many of the real-world networks of interest are likely to have small treewidth,
and we describe an algorithm which solves the general problem in time
\genruntime ~on an input graph having $n$ vertices and whose treewidth is
bounded by a fixed constant $w$, if each of the subgraphs we wish to avoid has
at most $r$ vertices. For the special case in which we wish only to ensure that
no component has more than $h$ vertices, we improve on this to give an
algorithm running in time $O((wh)^{2w}n)$, which we have implemented and tested
on real datasets based on cattle movements.
| no_new_dataset | 0.946448 |
1608.07433 | Hossein Ziaei Nafchi | Hossein Ziaei Nafchi, Atena Shahkolaei, Rachid Hedjam, Mohamed Cheriet | Mean Deviation Similarity Index: Efficient and Reliable Full-Reference
Image Quality Evaluator | 11 pages, 8 figures, 6 tables | null | 10.1109/ACCESS.2016.2604042 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Applications of perceptual image quality assessment (IQA) in image and video
processing, such as image acquisition, image compression, image restoration and
multimedia communication, have led to the development of many IQA metrics. In
this paper, a reliable full-reference IQA model is proposed that utilizes
gradient similarity (GS), chromaticity similarity (CS), and deviation pooling
(DP). By considering the shortcomings of the commonly used GS to model human
visual system (HVS), a new GS is proposed through a fusion technique that is
more likely to follow HVS. We propose an efficient and effective formulation to
calculate the joint similarity map of two chromatic channels for the purpose of
measuring color changes. In comparison with a commonly used formulation in the
literature, the proposed CS map is shown to be more efficient and provide
comparable or better quality predictions. Motivated by a recent work that
utilizes the standard deviation pooling, a general formulation of the DP is
presented in this paper and used to compute a final score from the proposed GS
and CS maps. This proposed formulation of DP benefits from the Minkowski
pooling and a proposed power pooling as well. The experimental results on six
datasets of natural images, a synthetic dataset, and a digitally retouched
dataset show that the proposed index provides comparable or better quality
predictions than the most recent and competing state-of-the-art IQA metrics in
the literature, is reliable, and has low complexity. The MATLAB source code
of the proposed metric is available at
https://www.mathworks.com/matlabcentral/fileexchange/59809.
| [
{
"version": "v1",
"created": "Fri, 26 Aug 2016 12:16:09 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Aug 2016 12:10:59 GMT"
},
{
"version": "v3",
"created": "Fri, 16 Sep 2016 08:17:07 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Apr 2017 05:41:11 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Nafchi",
"Hossein Ziaei",
""
],
[
"Shahkolaei",
"Atena",
""
],
[
"Hedjam",
"Rachid",
""
],
[
"Cheriet",
"Mohamed",
""
]
] | TITLE: Mean Deviation Similarity Index: Efficient and Reliable Full-Reference
Image Quality Evaluator
ABSTRACT: Applications of perceptual image quality assessment (IQA) in image and video
processing, such as image acquisition, image compression, image restoration and
multimedia communication, have led to the development of many IQA metrics. In
this paper, a reliable full-reference IQA model is proposed that utilizes
gradient similarity (GS), chromaticity similarity (CS), and deviation pooling
(DP). By considering the shortcomings of the commonly used GS to model human
visual system (HVS), a new GS is proposed through a fusion technique that is
more likely to follow HVS. We propose an efficient and effective formulation to
calculate the joint similarity map of two chromatic channels for the purpose of
measuring color changes. In comparison with a commonly used formulation in the
literature, the proposed CS map is shown to be more efficient and to provide
comparable or better quality predictions. Motivated by a recent work that
utilizes the standard deviation pooling, a general formulation of the DP is
presented in this paper and used to compute a final score from the proposed GS
and CS maps. This proposed formulation of DP benefits from the Minkowski
pooling and a proposed power pooling as well. The experimental results on six
datasets of natural images, a synthetic dataset, and a digitally retouched
dataset show that the proposed index provides comparable or better quality
predictions than the most recent and competing state-of-the-art IQA metrics in
the literature, is reliable, and has low complexity. The MATLAB source code
of the proposed metric is available at
https://www.mathworks.com/matlabcentral/fileexchange/59809.
| no_new_dataset | 0.948442 |
1609.08913 | George Monta\~nez | George D. Montanez | The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm | null | null | null | null | stat.ML cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Casting machine learning as a type of search, we demonstrate that the
proportion of problems that are favorable for a fixed algorithm is strictly
bounded, such that no single algorithm can perform well over a large fraction
of them. Our results explain why we must either continue to develop new
learning methods year after year or move towards highly parameterized models
that are both flexible and sensitive to their hyperparameters. We further give
an upper bound on the expected performance for a search algorithm as a function
of the mutual information between the target and the information resource
(e.g., training dataset), proving the importance of certain types of dependence
for machine learning. Lastly, we show that the expected per-query probability
of success for an algorithm is mathematically equivalent to a single-query
probability of success under a distribution (called a search strategy), and
prove that the proportion of favorable strategies is also strictly bounded.
Thus, whether one holds fixed the search algorithm and considers all possible
problems or one fixes the search problem and looks at all possible search
strategies, favorable matches are exceedingly rare. The forte (strength) of any
algorithm is quantifiably restricted.
| [
{
"version": "v1",
"created": "Wed, 28 Sep 2016 13:52:17 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2017 14:21:53 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Montanez",
"George D.",
""
]
] | TITLE: The Famine of Forte: Few Search Problems Greatly Favor Your Algorithm
ABSTRACT: Casting machine learning as a type of search, we demonstrate that the
proportion of problems that are favorable for a fixed algorithm is strictly
bounded, such that no single algorithm can perform well over a large fraction
of them. Our results explain why we must either continue to develop new
learning methods year after year or move towards highly parameterized models
that are both flexible and sensitive to their hyperparameters. We further give
an upper bound on the expected performance for a search algorithm as a function
of the mutual information between the target and the information resource
(e.g., training dataset), proving the importance of certain types of dependence
for machine learning. Lastly, we show that the expected per-query probability
of success for an algorithm is mathematically equivalent to a single-query
probability of success under a distribution (called a search strategy), and
prove that the proportion of favorable strategies is also strictly bounded.
Thus, whether one holds fixed the search algorithm and considers all possible
problems or one fixes the search problem and looks at all possible search
strategies, favorable matches are exceedingly rare. The forte (strength) of any
algorithm is quantifiably restricted.
| no_new_dataset | 0.946794 |
1703.04454 | Chao Zhang | Chao Zhang, Sergi Pujades, Michael Black, and Gerard Pons-Moll | Detailed, accurate, human shape estimation from clothed 3D scan
sequences | CVPR 2017, camera ready | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of estimating human pose and body shape from 3D scans
over time. Reliable estimation of 3D body shape is necessary for many
applications including virtual try-on, health monitoring, and avatar creation
for virtual reality. Scanning bodies in minimal clothing, however, presents a
practical barrier to these applications. We address this problem by estimating
body shape under clothing from a sequence of 3D scans. Previous methods that
have exploited body models produce smooth shapes lacking personalized details.
We contribute a new approach to recover a personalized shape of the person. The
estimated shape deviates from a parametric model to fit the 3D scans. We
demonstrate the method using high quality 4D data as well as sequences of
visual hulls extracted from multi-view images. We also make available BUFF, a
new 4D dataset that enables quantitative evaluation
(http://buff.is.tue.mpg.de). Our method outperforms the state of the art in
both pose estimation and shape estimation, qualitatively and quantitatively.
| [
{
"version": "v1",
"created": "Mon, 13 Mar 2017 15:41:36 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Apr 2017 12:26:27 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Zhang",
"Chao",
""
],
[
"Pujades",
"Sergi",
""
],
[
"Black",
"Michael",
""
],
[
"Pons-Moll",
"Gerard",
""
]
] | TITLE: Detailed, accurate, human shape estimation from clothed 3D scan
sequences
ABSTRACT: We address the problem of estimating human pose and body shape from 3D scans
over time. Reliable estimation of 3D body shape is necessary for many
applications including virtual try-on, health monitoring, and avatar creation
for virtual reality. Scanning bodies in minimal clothing, however, presents a
practical barrier to these applications. We address this problem by estimating
body shape under clothing from a sequence of 3D scans. Previous methods that
have exploited body models produce smooth shapes lacking personalized details.
We contribute a new approach to recover a personalized shape of the person. The
estimated shape deviates from a parametric model to fit the 3D scans. We
demonstrate the method using high quality 4D data as well as sequences of
visual hulls extracted from multi-view images. We also make available BUFF, a
new 4D dataset that enables quantitative evaluation
(http://buff.is.tue.mpg.de). Our method outperforms the state of the art in
both pose estimation and shape estimation, qualitatively and quantitatively.
| new_dataset | 0.958304 |
1704.05548 | Lluis Castrejon | Lluis Castrejon, Kaustav Kundu, Raquel Urtasun, Sanja Fidler | Annotating Object Instances with a Polygon-RNN | null | CVPR 2017 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an approach for semi-automatic annotation of object instances.
While most current methods treat object segmentation as a pixel-labeling
problem, we here cast it as a polygon prediction task, mimicking how most
current datasets have been annotated. In particular, our approach takes as
input an image crop and sequentially produces vertices of the polygon outlining
the object. This allows a human annotator to intervene at any time and correct
a vertex if needed, producing as accurate a segmentation as desired by the
annotator. We show that our approach speeds up the annotation process by a
factor of 4.7 across all classes in Cityscapes, while achieving 78.4% agreement
in IoU with original ground-truth, matching the typical agreement between human
annotators. For cars, our speed-up factor is 7.3 for an agreement of 82.2%. We
further show generalization capabilities of our approach to unseen datasets.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 22:17:28 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Castrejon",
"Lluis",
""
],
[
"Kundu",
"Kaustav",
""
],
[
"Urtasun",
"Raquel",
""
],
[
"Fidler",
"Sanja",
""
]
] | TITLE: Annotating Object Instances with a Polygon-RNN
ABSTRACT: We propose an approach for semi-automatic annotation of object instances.
While most current methods treat object segmentation as a pixel-labeling
problem, we here cast it as a polygon prediction task, mimicking how most
current datasets have been annotated. In particular, our approach takes as
input an image crop and sequentially produces vertices of the polygon outlining
the object. This allows a human annotator to intervene at any time and correct
a vertex if needed, producing as accurate a segmentation as desired by the
annotator. We show that our approach speeds up the annotation process by a
factor of 4.7 across all classes in Cityscapes, while achieving 78.4% agreement
in IoU with original ground-truth, matching the typical agreement between human
annotators. For cars, our speed-up factor is 7.3 for an agreement of 82.2%. We
further show generalization capabilities of our approach to unseen datasets.
| no_new_dataset | 0.953319 |
1704.05566 | Jeremy Morton | Jeremy Morton and Mykel J. Kochenderfer | Simultaneous Policy Learning and Latent State Inference for Imitating
Driver Behavior | 7 pages, 6 figures, 2 tables | null | null | null | cs.LG cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose a method for learning driver models that account for
variables that cannot be observed directly. When trained on a synthetic
dataset, our models are able to learn encodings for vehicle trajectories that
distinguish between four distinct classes of driver behavior. Such encodings
are learned without any knowledge of the number of driver classes or any
objective that directly requires the models to learn encodings for each class.
We show that driving policies trained with knowledge of latent variables are
more effective than baseline methods at imitating the driver behavior that they
are trained to replicate. Furthermore, we demonstrate that the actions chosen
by our policy are heavily influenced by the latent variable settings that are
provided to them.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 00:23:59 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Morton",
"Jeremy",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] | TITLE: Simultaneous Policy Learning and Latent State Inference for Imitating
Driver Behavior
ABSTRACT: In this work, we propose a method for learning driver models that account for
variables that cannot be observed directly. When trained on a synthetic
dataset, our models are able to learn encodings for vehicle trajectories that
distinguish between four distinct classes of driver behavior. Such encodings
are learned without any knowledge of the number of driver classes or any
objective that directly requires the models to learn encodings for each class.
We show that driving policies trained with knowledge of latent variables are
more effective than baseline methods at imitating the driver behavior that they
are trained to replicate. Furthermore, we demonstrate that the actions chosen
by our policy are heavily influenced by the latent variable settings that are
provided to them.
| no_new_dataset | 0.944893 |
1704.05617 | Chun-Nan Hsu | Sanjeev Shenoy, Tsung-Ting Kuo, Rodney Gabriel, Julian McAuley and
Chun-Nan Hsu | Deduplication in a massive clinical note dataset | Extended from the Master project report of Sanjeev Shenoy, Department
of Computer Science and Engineering, University of California, San Diego.
June 2016 | null | null | null | cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Duplication, whether exact or partial, is a common issue in many datasets. In
clinical notes data, duplication (and near duplication) can arise for many
reasons, such as the pervasive use of templates, copy-pasting, or notes being
generated by automated procedures. A key challenge in removing such near
duplicates is the size of such datasets; our own dataset consists of more than
10 million notes. To detect and correct such duplicates requires algorithms
that both accurate and highly scalable. We describe a solution based on
Minhashing with Locality Sensitive Hashing. In this paper, we present the
theory behind this method and present a database-inspired approach to make the
method scalable. We also present a clustering technique using disjoint sets to
produce dense clusters, which speeds up our algorithm.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 05:33:21 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Shenoy",
"Sanjeev",
""
],
[
"Kuo",
"Tsung-Ting",
""
],
[
"Gabriel",
"Rodney",
""
],
[
"McAuley",
"Julian",
""
],
[
"Hsu",
"Chun-Nan",
""
]
] | TITLE: Deduplication in a massive clinical note dataset
ABSTRACT: Duplication, whether exact or partial, is a common issue in many datasets. In
clinical notes data, duplication (and near duplication) can arise for many
reasons, such as the pervasive use of templates, copy-pasting, or notes being
generated by automated procedures. A key challenge in removing such near
duplicates is the size of such datasets; our own dataset consists of more than
10 million notes. To detect and correct such duplicates requires algorithms
that are both accurate and highly scalable. We describe a solution based on
Minhashing with Locality Sensitive Hashing. In this paper, we present the
theory behind this method and present a database-inspired approach to make the
method scalable. We also present a clustering technique using disjoint sets to
produce dense clusters, which speeds up our algorithm.
| no_new_dataset | 0.721792 |
1704.05643 | Bo Li | Bo Li, Huahui Chen, Yucheng Chen, Yuchao Dai, Mingyi He | Skeleton Boxes: Solving skeleton based action detection with a single
deep convolutional neural network | 4 pages,3 figures, icmew 2017 | icmew 2017 | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Action recognition from well-segmented 3D skeleton video has been intensively
studied. However, due to the difficulty in representing the 3D skeleton video
and the lack of training data, action detection from streaming 3D skeleton
video still lags far behind its recognition counterpart and image based object
detection. In this paper, we propose a novel approach for this problem, which
leverages both effective skeleton video encoding and deep regression based
object detection from images. Our framework consists of two parts:
skeleton-based video image mapping, which encodes a skeleton video to a color
image in a temporal preserving way, and an end-to-end trainable fast skeleton
action detector (Skeleton Boxes) based on image detection. Experimental results
on the latest and largest PKU-MMD benchmark dataset demonstrate that our method
outperforms the state-of-the-art methods by a large margin. We believe our
idea would inspire and benefit future research in this important area.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 08:16:13 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Li",
"Bo",
""
],
[
"Chen",
"Huahui",
""
],
[
"Chen",
"Yucheng",
""
],
[
"Dai",
"Yuchao",
""
],
[
"He",
"Mingyi",
""
]
] | TITLE: Skeleton Boxes: Solving skeleton based action detection with a single
deep convolutional neural network
ABSTRACT: Action recognition from well-segmented 3D skeleton video has been intensively
studied. However, due to the difficulty in representing the 3D skeleton video
and the lack of training data, action detection from streaming 3D skeleton
video still lags far behind its recognition counterpart and image based object
detection. In this paper, we propose a novel approach for this problem, which
leverages both effective skeleton video encoding and deep regression based
object detection from images. Our framework consists of two parts:
skeleton-based video image mapping, which encodes a skeleton video to a color
image in a temporal preserving way, and an end-to-end trainable fast skeleton
action detector (Skeleton Boxes) based on image detection. Experimental results
on the latest and largest PKU-MMD benchmark dataset demonstrate that our method
outperforms the state-of-the-art methods by a large margin. We believe our
idea would inspire and benefit future research in this important area.
| no_new_dataset | 0.949763 |
1704.05646 | Lech Szymanski | Lech Szymanski, Brendan McCane, Wei Gao, Zhi-Hua Zhou | Effects of the optimisation of the margin distribution on generalisation
in deep architectures | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite being so vital to the success of Support Vector Machines, the principle
of separating margin maximisation is not used in deep learning. We show that
minimisation of margin variance and not maximisation of the margin is more
suitable for improving generalisation in deep architectures. We propose the
Halfway loss function that minimises the Normalised Margin Variance (NMV) at
the output of a deep learning models and evaluate its performance against the
Softmax Cross-Entropy loss on the MNIST, smallNORB and CIFAR-10 datasets.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 08:31:20 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Szymanski",
"Lech",
""
],
[
"McCane",
"Brendan",
""
],
[
"Gao",
"Wei",
""
],
[
"Zhou",
"Zhi-Hua",
""
]
] | TITLE: Effects of the optimisation of the margin distribution on generalisation
in deep architectures
ABSTRACT: Despite being so vital to the success of Support Vector Machines, the principle
of separating margin maximisation is not used in deep learning. We show that
minimisation of margin variance and not maximisation of the margin is more
suitable for improving generalisation in deep architectures. We propose the
Halfway loss function that minimises the Normalised Margin Variance (NMV) at
the output of a deep learning model and evaluate its performance against the
Softmax Cross-Entropy loss on the MNIST, smallNORB and CIFAR-10 datasets.
| no_new_dataset | 0.946892 |
1704.05665 | Qingcai Chen | Xin Liu, Qingcai Chen, Xiangping Wu, Yan Liu, Yang Liu | CNN based music emotion classification | 7 pages, 4 figures | null | null | null | cs.MM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Music emotion recognition (MER) is usually regarded as a multi-label tagging
task, and each segment of music can inspire specific emotion tags. Most
researchers extract acoustic features from music and explore the relations
between these features and their corresponding emotion tags. Considering the
inconsistency of emotions inspired by the same music segment for human beings,
seeking for the key acoustic features that really affect on emotions is really
a challenging task. In this paper, we propose a novel MER method by using deep
convolutional neural network (CNN) on the music spectrograms that contains both
the original time and frequency domain information. By the proposed method, no
additional effort on extracting specific features required, which is left to
the training procedure of the CNN model. Experiments are conducted on the
standard CAL500 and CAL500exp dataset. Results show that, for both datasets,
the proposed method outperforms state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 09:28:39 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Liu",
"Xin",
""
],
[
"Chen",
"Qingcai",
""
],
[
"Wu",
"Xiangping",
""
],
[
"Liu",
"Yan",
""
],
[
"Liu",
"Yang",
""
]
] | TITLE: CNN based music emotion classification
ABSTRACT: Music emotion recognition (MER) is usually regarded as a multi-label tagging
task, and each segment of music can inspire specific emotion tags. Most
researchers extract acoustic features from music and explore the relations
between these features and their corresponding emotion tags. Considering the
inconsistency of emotions inspired by the same music segment for human beings,
seeking the key acoustic features that really affect emotions is
a challenging task. In this paper, we propose a novel MER method that uses a deep
convolutional neural network (CNN) on music spectrograms, which contain both
the original time and frequency domain information. With the proposed method, no
additional effort on extracting specific features is required; this is left to
the training procedure of the CNN model. Experiments are conducted on the
standard CAL500 and CAL500exp dataset. Results show that, for both datasets,
the proposed method outperforms state-of-the-art methods.
| no_new_dataset | 0.950088 |
1704.05674 | Emanuela Haller | Emanuela Haller and Marius Leordeanu | Unsupervised object segmentation in video by efficient selection of
highly probable positive features | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address an essential problem in computer vision, that of unsupervised
object segmentation in video, where a main object of interest in a video
sequence should be automatically separated from its background. An efficient
solution to this task would enable large-scale video interpretation at a high
semantic level in the absence of the costly manually labeled ground truth. We
propose an efficient unsupervised method for generating foreground object
soft-segmentation masks based on automatic selection and learning from highly
probable positive features. We show that such features can be selected
efficiently by taking into consideration the spatio-temporal, appearance and
motion consistency of the object during the whole observed sequence. We also
emphasize the role of the contrasting properties between the foreground object
and its background. Our model is created in two stages: we start from pixel
level analysis, on top of which we add a regression model trained on a
descriptor that considers information over groups of pixels and is both
discriminative and invariant to many changes that the object undergoes
throughout the video. We also present theoretical properties of our
unsupervised learning method, which under some mild constraints is guaranteed to
learn a correct discriminative classifier even in the unsupervised case. Our
method achieves competitive and even state-of-the-art results on the
challenging Youtube-Objects and SegTrack datasets, while being at least one
order of magnitude faster than the competition. We believe that the competitive
performance of our method in practice, along with its theoretical properties,
constitute an important step towards solving unsupervised discovery in video.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 10:00:46 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Haller",
"Emanuela",
""
],
[
"Leordeanu",
"Marius",
""
]
] | TITLE: Unsupervised object segmentation in video by efficient selection of
highly probable positive features
ABSTRACT: We address an essential problem in computer vision, that of unsupervised
object segmentation in video, where a main object of interest in a video
sequence should be automatically separated from its background. An efficient
solution to this task would enable large-scale video interpretation at a high
semantic level in the absence of the costly manually labeled ground truth. We
propose an efficient unsupervised method for generating foreground object
soft-segmentation masks based on automatic selection and learning from highly
probable positive features. We show that such features can be selected
efficiently by taking into consideration the spatio-temporal, appearance and
motion consistency of the object during the whole observed sequence. We also
emphasize the role of the contrasting properties between the foreground object
and its background. Our model is created in two stages: we start from pixel
level analysis, on top of which we add a regression model trained on a
descriptor that considers information over groups of pixels and is both
discriminative and invariant to many changes that the object undergoes
throughout the video. We also present theoretical properties of our
unsupervised learning method, which under some mild constraints is guaranteed to
learn a correct discriminative classifier even in the unsupervised case. Our
method achieves competitive and even state-of-the-art results on the
challenging Youtube-Objects and SegTrack datasets, while being at least one
order of magnitude faster than the competition. We believe that the competitive
performance of our method in practice, along with its theoretical properties,
constitute an important step towards solving unsupervised discovery in video.
| no_new_dataset | 0.948106 |
1704.05708 | Usman Mahmood Khan Usman Mahmood Khan | U. M. Khan, Z. Kabir, S. A. Hassan, S. H. Ahmed | A Deep Learning Framework using Passive WiFi Sensing for Respiration
Monitoring | 7 pages, 11 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an end-to-end deep learning framework using passive WiFi
sensing to classify and estimate human respiration activity. A passive radar
test-bed is used with two channels where the first channel provides the
reference WiFi signal, whereas the other channel provides a surveillance signal
that contains reflections from the human target. Adaptive filtering is
performed to make the surveillance signal source-data invariant by eliminating
the echoes of the direct transmitted signal. We propose a novel convolutional
neural network to classify the complex time series data and determine if it
corresponds to a breathing activity, followed by a random forest estimator to
determine breathing rate. We collect an extensive dataset to train the learning
models and develop reference benchmarks for the future studies in the field.
Based on the results, we conclude that deep learning techniques coupled with
passive radars offer great potential for end-to-end human activity recognition.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 12:35:17 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Khan",
"U. M.",
""
],
[
"Kabir",
"Z.",
""
],
[
"Hassan",
"S. A.",
""
],
[
"Ahmed",
"S. H.",
""
]
] | TITLE: A Deep Learning Framework using Passive WiFi Sensing for Respiration
Monitoring
ABSTRACT: This paper presents an end-to-end deep learning framework using passive WiFi
sensing to classify and estimate human respiration activity. A passive radar
test-bed is used with two channels where the first channel provides the
reference WiFi signal, whereas the other channel provides a surveillance signal
that contains reflections from the human target. Adaptive filtering is
performed to make the surveillance signal source-data invariant by eliminating
the echoes of the direct transmitted signal. We propose a novel convolutional
neural network to classify the complex time series data and determine if it
corresponds to a breathing activity, followed by a random forest estimator to
determine breathing rate. We collect an extensive dataset to train the learning
models and develop reference benchmarks for the future studies in the field.
Based on the results, we conclude that deep learning techniques coupled with
passive radars offer great potential for end-to-end human activity recognition.
| no_new_dataset | 0.950641 |
1704.05742 | Pengfei Liu | Pengfei Liu and Xipeng Qiu and Xuanjing Huang | Adversarial Multi-task Learning for Text Classification | Accepted by ACL2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural network models have shown their promising opportunities for multi-task
learning, which focuses on learning the shared layers to extract the common and
task-invariant features. However, in most existing approaches, the extracted
shared features are prone to be contaminated by task-specific features or the
noise brought by other tasks. In this paper, we propose an adversarial
multi-task learning framework, preventing the shared and private latent
feature spaces from interfering with each other. We conduct extensive
experiments on 16 different text classification tasks, which demonstrates the
benefits of our approach. Besides, we show that the shared knowledge learned by
our proposed model can be regarded as off-the-shelf knowledge and easily
transferred to new tasks. The datasets of all 16 tasks are publicly available
at \url{http://nlp.fudan.edu.cn/data/}
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 14:17:25 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Liu",
"Pengfei",
""
],
[
"Qiu",
"Xipeng",
""
],
[
"Huang",
"Xuanjing",
""
]
] | TITLE: Adversarial Multi-task Learning for Text Classification
ABSTRACT: Neural network models have shown their promising opportunities for multi-task
learning, which focuses on learning the shared layers to extract the common and
task-invariant features. However, in most existing approaches, the extracted
shared features are prone to be contaminated by task-specific features or the
noise brought by other tasks. In this paper, we propose an adversarial
multi-task learning framework, preventing the shared and private latent
feature spaces from interfering with each other. We conduct extensive
experiments on 16 different text classification tasks, which demonstrates the
benefits of our approach. Besides, we show that the shared knowledge learned by
our proposed model can be regarded as off-the-shelf knowledge and easily
transferred to new tasks. The datasets of all 16 tasks are publicly available
at \url{http://nlp.fudan.edu.cn/data/}
| no_new_dataset | 0.945551 |
1704.05754 | Federico Magliani | Federico Magliani, Navid Mahmoudian Bidgoli, Andrea Prati | A location-aware embedding technique for accurate landmark recognition | 6 pages, 5 figures, ICDSC 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current state of the research in landmark recognition highlights the good
accuracy which can be achieved by embedding techniques, such as Fisher vector
and VLAD. All these techniques do not exploit spatial information, i.e.
consider all the features and the corresponding descriptors without embedding
their location in the image. This paper presents a new variant of the
well-known VLAD (Vector of Locally Aggregated Descriptors) embedding technique
which accounts, to a certain degree, for the location of features. The driving
motivation comes from the observation that, usually, the most interesting part
of an image (e.g., the landmark to be recognized) is almost at the center of
the image, while the features at the borders are irrelevant features which do
not depend on the landmark. The proposed variant, called locVLAD (location-aware
VLAD), computes the mean of the two global descriptors: the VLAD executed on
the entire original image, and the one computed on a cropped image which
removes a certain percentage of the image borders. This simple variant shows an
accuracy greater than the existing state-of-the-art approach. Experiments are
conducted on two public datasets (ZuBuD and Holidays) which are used both for
training and testing. Moreover, a more balanced version of ZuBuD is proposed.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 14:45:23 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Magliani",
"Federico",
""
],
[
"Bidgoli",
"Navid Mahmoudian",
""
],
[
"Prati",
"Andrea",
""
]
] | TITLE: A location-aware embedding technique for accurate landmark recognition
ABSTRACT: The current state of the research in landmark recognition highlights the good
accuracy which can be achieved by embedding techniques, such as Fisher vector
and VLAD. All these techniques do not exploit spatial information, i.e.
consider all the features and the corresponding descriptors without embedding
their location in the image. This paper presents a new variant of the
well-known VLAD (Vector of Locally Aggregated Descriptors) embedding technique
which accounts, to a certain degree, for the location of features. The driving
motivation comes from the observation that, usually, the most interesting part
of an image (e.g., the landmark to be recognized) is almost at the center of
the image, while the features at the borders are irrelevant features which do
not depend on the landmark. The proposed variant, called locVLAD (location-aware
VLAD), computes the mean of the two global descriptors: the VLAD executed on
the entire original image, and the one computed on a cropped image which
removes a certain percentage of the image borders. This simple variant shows an
accuracy greater than the existing state-of-the-art approach. Experiments are
conducted on two public datasets (ZuBuD and Holidays) which are used both for
training and testing. Moreover, a more balanced version of ZuBuD is proposed.
| no_new_dataset | 0.949012 |
1704.05776 | Jimmy Ren | Jimmy Ren, Xiaohao Chen, Jianbo Liu, Wenxiu Sun, Jiahao Pang, Qiong
Yan, Yu-Wing Tai, Li Xu | Accurate Single Stage Detector Using Recurrent Rolling Convolution | CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the recent successful methods in accurate object detection and
localization used some variants of R-CNN style two stage Convolutional Neural
Networks (CNN) where plausible regions were proposed in the first stage then
followed by a second stage for decision refinement. Despite the simplicity of
training and the efficiency in deployment, the single stage detection methods
have not been as competitive when evaluated in benchmarks that consider mAP for high
IoU thresholds. In this paper, we proposed a novel single stage end-to-end
trainable object detection network to overcome this limitation. We achieved
this by introducing Recurrent Rolling Convolution (RRC) architecture over
multi-scale feature maps to construct object classifiers and bounding box
regressors which are "deep in context". We evaluated our method in the
challenging KITTI dataset which measures methods under IoU threshold of 0.7. We
showed that with RRC, a single reduced VGG-16 based model already significantly
outperformed all the previously published results. At the time this paper was
written, our models ranked first in KITTI car detection (the hard level),
first in cyclist detection, and second in pedestrian detection. These
results were not reached by the previous single stage methods. The code is
publicly available.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 15:31:01 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Ren",
"Jimmy",
""
],
[
"Chen",
"Xiaohao",
""
],
[
"Liu",
"Jianbo",
""
],
[
"Sun",
"Wenxiu",
""
],
[
"Pang",
"Jiahao",
""
],
[
"Yan",
"Qiong",
""
],
[
"Tai",
"Yu-Wing",
""
],
[
"Xu",
"Li",
""
]
] | TITLE: Accurate Single Stage Detector Using Recurrent Rolling Convolution
ABSTRACT: Most of the recent successful methods in accurate object detection and
localization used some variants of R-CNN style two stage Convolutional Neural
Networks (CNN) where plausible regions were proposed in the first stage then
followed by a second stage for decision refinement. Despite the simplicity of
training and the efficiency in deployment, the single stage detection methods
have not been as competitive when evaluated in benchmarks that consider mAP for high
IoU thresholds. In this paper, we proposed a novel single stage end-to-end
trainable object detection network to overcome this limitation. We achieved
this by introducing Recurrent Rolling Convolution (RRC) architecture over
multi-scale feature maps to construct object classifiers and bounding box
regressors which are "deep in context". We evaluated our method in the
challenging KITTI dataset which measures methods under IoU threshold of 0.7. We
showed that with RRC, a single reduced VGG-16 based model already significantly
outperformed all the previously published results. At the time this paper was
written, our models ranked first in KITTI car detection (the hard level),
first in cyclist detection, and second in pedestrian detection. These
results were not reached by the previous single stage methods. The code is
publicly available.
| no_new_dataset | 0.949716 |
1704.05815 | Thomas Louail | Thomas Louail and Marc Barthelemy | Headphones on the wire | 10 pages, 4 figures + SI (13 pages and 13 Supplementary figures) | null | null | null | physics.soc-ph cs.CY cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze a dataset providing the complete information on the effective
plays of thousands of music listeners during several months. Our analysis
confirms a number of properties previously highlighted by research based on
interviews and questionnaires, but also uncover new statistical patterns, both
at the individual and collective levels. In particular, we show that
individuals follow common listening rhythms characterized by the same
fluctuations, alternating heavy and light listening periods, and can be
classified in four groups of similar sizes according to their temporal habits
--- 'early birds', 'working hours listeners', 'evening listeners' and 'night
owls'. We provide a detailed radioscopy of the listeners' interplay between
repeated listening and discovery of new content. We show that different genres
encourage different listening habits, from Classical or Jazz music with a more
balanced listening among different songs, to Hip Hop and Dance with a more
heterogeneous distribution of plays. Finally, we provide measures of how
distant people are from each other in terms of common songs. In particular, we
show that the number of songs $S$ a DJ should play to a random audience of size
$N$ such that everyone hears at least one song he/she currently listens to, is
of the form $S\sim N^\alpha$ where the exponent depends on the music genre and
is in the range $[0.5,0.8]$. More generally, our results show that the recent
access to virtually infinite catalogs of songs does not promote exploration for
novelty, but that most users favor repetition of the same songs.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 16:51:32 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Louail",
"Thomas",
""
],
[
"Barthelemy",
"Marc",
""
]
] | TITLE: Headphones on the wire
ABSTRACT: We analyze a dataset providing the complete information on the effective
plays of thousands of music listeners during several months. Our analysis
confirms a number of properties previously highlighted by research based on
interviews and questionnaires, but also uncover new statistical patterns, both
at the individual and collective levels. In particular, we show that
individuals follow common listening rhythms characterized by the same
fluctuations, alternating heavy and light listening periods, and can be
classified in four groups of similar sizes according to their temporal habits
--- 'early birds', 'working hours listeners', 'evening listeners' and 'night
owls'. We provide a detailed radioscopy of the listeners' interplay between
repeated listening and discovery of new content. We show that different genres
encourage different listening habits, from Classical or Jazz music with a more
balanced listening among different songs, to Hip Hop and Dance with a more
heterogeneous distribution of plays. Finally, we provide measures of how
distant people are from each other in terms of common songs. In particular, we
show that the number of songs $S$ a DJ should play to a random audience of size
$N$ such that everyone hears at least one song he/she currently listens to, is
of the form $S\sim N^\alpha$ where the exponent depends on the music genre and
is in the range $[0.5,0.8]$. More generally, our results show that the recent
access to virtually infinite catalogs of songs does not promote exploration for
novelty, but that most users favor repetition of the same songs.
| no_new_dataset | 0.918334 |
1704.05817 | Wenbin Li | Wenbin Li, Da Chen, Zhihan Lv, Yan Yan, Darren Cosker | Learn to Model Motion from Blurry Footages | Preprint of our paper accepted by Pattern Recognition | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | It is difficult to recover the motion field from a real-world footage given a
mixture of camera shake and other photometric effects. In this paper we propose
a hybrid framework by interleaving a Convolutional Neural Network (CNN) and a
traditional optical flow energy. We first conduct a CNN architecture using a
novel learnable directional filtering layer. Such layer encodes the angle and
distance similarity matrix between blur and camera motion, which is able to
enhance the blur features of the camera-shake footages. The proposed CNNs are
then integrated into an iterative optical flow framework, which enables
modelling and solving both the blind deconvolution and the
optical flow estimation problems simultaneously. Our framework is trained
end-to-end on a synthetic dataset and yields competitive precision and
performance against the state-of-the-art approaches.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2017 16:54:54 GMT"
}
] | 2017-04-20T00:00:00 | [
[
"Li",
"Wenbin",
""
],
[
"Chen",
"Da",
""
],
[
"Lv",
"Zhihan",
""
],
[
"Yan",
"Yan",
""
],
[
"Cosker",
"Darren",
""
]
] | TITLE: Learn to Model Motion from Blurry Footages
ABSTRACT: It is difficult to recover the motion field from a real-world footage given a
mixture of camera shake and other photometric effects. In this paper we propose
a hybrid framework by interleaving a Convolutional Neural Network (CNN) and a
traditional optical flow energy. We first conduct a CNN architecture using a
novel learnable directional filtering layer. Such layer encodes the angle and
distance similarity matrix between blur and camera motion, which is able to
enhance the blur features of the camera-shake footages. The proposed CNNs are
then integrated into an iterative optical flow framework, which enables
modelling and solving both the blind deconvolution and the
optical flow estimation problems simultaneously. Our framework is trained
end-to-end on a synthetic dataset and yields competitive precision and
performance against the state-of-the-art approaches.
| no_new_dataset | 0.948106 |
1612.09542 | Licheng Yu | Licheng Yu, Hao Tan, Mohit Bansal, Tamara L. Berg | A Joint Speaker-Listener-Reinforcer Model for Referring Expressions | Some typo fixed; comprehension results on refcocog updated; more
human evaluation results added | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Referring expressions are natural language constructions used to identify
particular objects within a scene. In this paper, we propose a unified
framework for the tasks of referring expression comprehension and generation.
Our model is composed of three modules: speaker, listener, and reinforcer. The
speaker generates referring expressions, the listener comprehends referring
expressions, and the reinforcer introduces a reward function to guide sampling
of more discriminative expressions. The listener-speaker modules are trained
jointly in an end-to-end learning framework, allowing the modules to be aware
of one another during learning while also benefiting from the discriminative
reinforcer's feedback. We demonstrate that this unified framework and training
achieves state-of-the-art results for both comprehension and generation on
three referring expression datasets. Project and demo page:
https://vision.cs.unc.edu/refer
| [
{
"version": "v1",
"created": "Fri, 30 Dec 2016 17:39:19 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2017 20:13:49 GMT"
}
] | 2017-04-19T00:00:00 | [
[
"Yu",
"Licheng",
""
],
[
"Tan",
"Hao",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Berg",
"Tamara L.",
""
]
] | TITLE: A Joint Speaker-Listener-Reinforcer Model for Referring Expressions
ABSTRACT: Referring expressions are natural language constructions used to identify
particular objects within a scene. In this paper, we propose a unified
framework for the tasks of referring expression comprehension and generation.
Our model is composed of three modules: speaker, listener, and reinforcer. The
speaker generates referring expressions, the listener comprehends referring
expressions, and the reinforcer introduces a reward function to guide sampling
of more discriminative expressions. The listener-speaker modules are trained
jointly in an end-to-end learning framework, allowing the modules to be aware
of one another during learning while also benefiting from the discriminative
reinforcer's feedback. We demonstrate that this unified framework and training
achieves state-of-the-art results for both comprehension and generation on
three referring expression datasets. Project and demo page:
https://vision.cs.unc.edu/refer
| no_new_dataset | 0.951639 |
1702.01072 | Giovanni Bussi | Vojt\v{e}ch Ml\'ynsk\'y and Giovanni Bussi | Understanding In-line Probing Experiments by Modeling Cleavage of
Non-reactive RNA Nucleotides | null | RNA 2017, 23, 712-720 | 10.1261/rna.060442.116 | null | q-bio.BM physics.bio-ph physics.chem-ph physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ribonucleic acid (RNA) is involved in many regulatory and catalytic processes
in the cell. The function of any RNA molecule is intimately related with its
structure. In-line probing experiments provide valuable structural datasets for
a variety of RNAs and are used to characterize conformational changes in
riboswitches. However, the structural determinants that lead to differential
reactivities in unpaired nucleotides have not been investigated yet. In this
work we used a combination of theoretical approaches, i.e., classical molecular
dynamics simulations, multiscale quantum mechanical/molecular mechanical
calculations, and enhanced sampling techniques in order to compute and
interpret the differential reactivity of individual residues in several RNA
motifs including members of the most important GNRA and UNCG tetraloop
families. Simulations on the multi ns timescale are required to converge the
related free-energy landscapes. The results for uGAAAg and cUUCGg tetraloops
and double helices are compared with available data from in-line probing
experiments and show that the introduced technique is able to distinguish
between nucleotides of the uGAAAg tetraloop based on their structural
predispositions towards phosphodiester backbone cleavage. For the cUUCGg
tetraloop, more advanced ab initio calculations would be required. This study
is the first attempt to computationally classify chemical probing experiments
and paves the way for an identification of tertiary structures based on the
measured reactivity of non-reactive nucleotides.
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2017 16:42:09 GMT"
}
] | 2017-04-19T00:00:00 | [
[
"Mlýnský",
"Vojtěch",
""
],
[
"Bussi",
"Giovanni",
""
]
] | TITLE: Understanding In-line Probing Experiments by Modeling Cleavage of
Non-reactive RNA Nucleotides
ABSTRACT: Ribonucleic acid (RNA) is involved in many regulatory and catalytic processes
in the cell. The function of any RNA molecule is intimately related with its
structure. In-line probing experiments provide valuable structural datasets for
a variety of RNAs and are used to characterize conformational changes in
riboswitches. However, the structural determinants that lead to differential
reactivities in unpaired nucleotides have not been investigated yet. In this
work we used a combination of theoretical approaches, i.e., classical molecular
dynamics simulations, multiscale quantum mechanical/molecular mechanical
calculations, and enhanced sampling techniques in order to compute and
interpret the differential reactivity of individual residues in several RNA
motifs including members of the most important GNRA and UNCG tetraloop
families. Simulations on the multi ns timescale are required to converge the
related free-energy landscapes. The results for uGAAAg and cUUCGg tetraloops
and double helices are compared with available data from in-line probing
experiments and show that the introduced technique is able to distinguish
between nucleotides of the uGAAAg tetraloop based on their structural
predispositions towards phosphodiester backbone cleavage. For the cUUCGg
tetraloop, more advanced ab initio calculations would be required. This study
is the first attempt to computationally classify chemical probing experiments
and paves the way for an identification of tertiary structures based on the
measured reactivity of non-reactive nucleotides.
| no_new_dataset | 0.944893 |
1704.01792 | Qingyu Zhou | Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, Ming Zhou | Neural Question Generation from Text: A Preliminary Study | Submitted to EMNLP 2017 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic question generation aims to generate questions from a text passage
where the generated questions can be answered by certain sub-spans of the given
passage. Traditional methods mainly use rigid heuristic rules to transform a
sentence into related questions. In this work, we propose to apply the neural
encoder-decoder model to generate meaningful and diverse questions from natural
language sentences. The encoder reads the input text and the answer position,
to produce an answer-aware input representation, which is fed to the decoder to
generate an answer focused question. We conduct a preliminary study on neural
question generation from text with the SQuAD dataset, and the experiment
results show that our method can produce fluent and diverse questions.
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2017 11:44:07 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Apr 2017 03:27:15 GMT"
},
{
"version": "v3",
"created": "Tue, 18 Apr 2017 07:54:52 GMT"
}
] | 2017-04-19T00:00:00 | [
[
"Zhou",
"Qingyu",
""
],
[
"Yang",
"Nan",
""
],
[
"Wei",
"Furu",
""
],
[
"Tan",
"Chuanqi",
""
],
[
"Bao",
"Hangbo",
""
],
[
"Zhou",
"Ming",
""
]
] | TITLE: Neural Question Generation from Text: A Preliminary Study
ABSTRACT: Automatic question generation aims to generate questions from a text passage
where the generated questions can be answered by certain sub-spans of the given
passage. Traditional methods mainly use rigid heuristic rules to transform a
sentence into related questions. In this work, we propose to apply the neural
encoder-decoder model to generate meaningful and diverse questions from natural
language sentences. The encoder reads the input text and the answer position,
to produce an answer-aware input representation, which is fed to the decoder to
generate an answer focused question. We conduct a preliminary study on neural
question generation from text with the SQuAD dataset, and the experiment
results show that our method can produce fluent and diverse questions.
| no_new_dataset | 0.947527 |
1704.05165 | Brent Griffin | Brent A. Griffin and Jason J. Corso | Video Object Segmentation using Supervoxel-Based Gerrymandering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pixels operate locally. Superpixels have some potential to collect
information across many pixels; supervoxels have more potential by implicitly
operating across time. In this paper, we explore this well-established notion,
thoroughly analyzing how supervoxels can be used in place of and in conjunction
with other means of aggregating information across space-time. Focusing on the
problem of strictly unsupervised video object segmentation, we devise a method
called supervoxel gerrymandering that links masks of foregroundness and
backgroundness via local and non-local consensus measures. We pose and answer a
series of critical questions about the ability of supervoxels to adequately
sway local voting; the questions regard type and scale of supervoxels as well
as local versus non-local consensus, and the questions are posed in a general
way so as to impact the broader knowledge of the use of supervoxels in video
understanding. We work with the DAVIS dataset and find that our analysis yields
an unsupervised method that outperforms all other known unsupervised methods
and even many supervised ones.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 01:11:35 GMT"
}
] | 2017-04-19T00:00:00 | [
[
"Griffin",
"Brent A.",
""
],
[
"Corso",
"Jason J.",
""
]
] | TITLE: Video Object Segmentation using Supervoxel-Based Gerrymandering
ABSTRACT: Pixels operate locally. Superpixels have some potential to collect
information across many pixels; supervoxels have more potential by implicitly
operating across time. In this paper, we explore this well-established notion,
thoroughly analyzing how supervoxels can be used in place of and in conjunction
with other means of aggregating information across space-time. Focusing on the
problem of strictly unsupervised video object segmentation, we devise a method
called supervoxel gerrymandering that links masks of foregroundness and
backgroundness via local and non-local consensus measures. We pose and answer a
series of critical questions about the ability of supervoxels to adequately
sway local voting; the questions regard type and scale of supervoxels as well
as local versus non-local consensus, and the questions are posed in a general
way so as to impact the broader knowledge of the use of supervoxels in video
understanding. We work with the DAVIS dataset and find that our analysis yields
an unsupervised method that outperforms all other known unsupervised methods
and even many supervised ones.
| no_new_dataset | 0.947527 |
1704.05215 | Ashwin Mathur Mr. | Ashwin Mathur, Fei Han, and Hao Zhang | Multisensory Omni-directional Long-term Place Recognition: Benchmark
Dataset and Analysis | 15 pages | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing a previously visited place, also known as place recognition (or
loop closure detection), is the key towards fully autonomous mobile robots and
self-driving vehicle navigation. Augmented with various Simultaneous
Localization and Mapping techniques (SLAM), loop closure detection allows for
incremental pose correction and can bolster efficient and accurate map
creation. However, repeated and similar scenes (perceptual aliasing) and long
term appearance changes (e.g. weather variations) are major challenges for
current place recognition algorithms. We introduce a new dataset Multisensory
Omnidirectional Long-term Place recognition (MOLP) comprising omnidirectional
intensity and disparity images. This dataset presents many of the challenges
faced by outdoor mobile robots and current place recognition algorithms. Using
MOLP dataset, we formulate the place recognition problem as a regularized
sparse convex optimization problem. We conclude that information extracted from
the intensity image is superior to the disparity image in isolating discriminative
features for successful long term place recognition. Furthermore, when these
discriminative features are extracted from an omnidirectional vision sensor, a
robust bidirectional loop closure detection approach is established, allowing
mobile robots to close the loop, regardless of the difference in the direction
when revisiting a place.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 06:36:48 GMT"
}
] | 2017-04-19T00:00:00 | [
[
"Mathur",
"Ashwin",
""
],
[
"Han",
"Fei",
""
],
[
"Zhang",
"Hao",
""
]
] | TITLE: Multisensory Omni-directional Long-term Place Recognition: Benchmark
Dataset and Analysis
ABSTRACT: Recognizing a previously visited place, also known as place recognition (or
loop closure detection) is the key towards fully autonomous mobile robots and
self-driving vehicle navigation. Augmented with various Simultaneous
Localization and Mapping techniques (SLAM), loop closure detection allows for
incremental pose correction and can bolster efficient and accurate map
creation. However, repeated and similar scenes (perceptual aliasing) and long
term appearance changes (e.g. weather variations) are major challenges for
current place recognition algorithms. We introduce a new dataset, Multisensory
Omnidirectional Long-term Place recognition (MOLP), comprising omnidirectional
intensity and disparity images. This dataset presents many of the challenges
faced by outdoor mobile robots and current place recognition algorithms. Using
MOLP dataset, we formulate the place recognition problem as a regularized
sparse convex optimization problem. We conclude that information extracted from
the intensity image is superior to the disparity image in isolating discriminative
features for successful long term place recognition. Furthermore, when these
discriminative features are extracted from an omnidirectional vision sensor, a
robust bidirectional loop closure detection approach is established, allowing
mobile robots to close the loop, regardless of the difference in the direction
when revisiting a place.
| new_dataset | 0.961425 |
1704.05358 | Jiaqi Mu | Jiaqi Mu, Suma Bhat, Pramod Viswanath | Representing Sentences as Low-Rank Subspaces | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentences are important semantic units of natural language. A generic,
distributional representation of sentences that can capture the latent
semantics is beneficial to multiple downstream applications. We observe a
simple geometry of sentences -- the word representations of a given sentence
(on average 10.23 words in all SemEval datasets with a standard deviation 4.84)
roughly lie in a low-rank subspace (roughly, rank 4). Motivated by this
observation, we represent a sentence by the low-rank subspace spanned by its
word vectors. Such an unsupervised representation is empirically validated via
semantic textual similarity tasks on 19 different datasets, where it
outperforms the sophisticated neural network models, including skip-thought
vectors, by 15% on average.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 14:30:32 GMT"
}
] | 2017-04-19T00:00:00 | [
[
"Mu",
"Jiaqi",
""
],
[
"Bhat",
"Suma",
""
],
[
"Viswanath",
"Pramod",
""
]
] | TITLE: Representing Sentences as Low-Rank Subspaces
ABSTRACT: Sentences are important semantic units of natural language. A generic,
distributional representation of sentences that can capture the latent
semantics is beneficial to multiple downstream applications. We observe a
simple geometry of sentences -- the word representations of a given sentence
(on average 10.23 words in all SemEval datasets with a standard deviation 4.84)
roughly lie in a low-rank subspace (roughly, rank 4). Motivated by this
observation, we represent a sentence by the low-rank subspace spanned by its
word vectors. Such an unsupervised representation is empirically validated via
semantic textual similarity tasks on 19 different datasets, where it
outperforms the sophisticated neural network models, including skip-thought
vectors, by 15% on average.
| no_new_dataset | 0.945399 |
1704.05368 | Jan Egger | Jan Egger, Dieter Schmalstieg, Xiaojun Chen, Wolfram G. Zoller,
Alexander Hann | Interactive Outlining of Pancreatic Cancer Liver Metastases in
Ultrasound Images | 15 pages, 16 figures, 2 tables, 58 references | Sci. Rep. 7, 892, 2017 | 10.1038/s41598-017-00940-z | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ultrasound (US) is the most commonly used liver imaging modality worldwide.
Due to its low cost, it is increasingly used in the follow-up of cancer
patients with metastases localized in the liver. In this contribution, we
present the results of an interactive segmentation approach for liver
metastases in US acquisitions. A (semi-) automatic segmentation is still very
challenging because of the low image quality and the low contrast between the
metastasis and the surrounding liver tissue. Thus, the state of the art in
clinical practice is still manual measurement and outlining of the metastases
in the US images. We tackle the problem by providing an interactive
segmentation approach providing real-time feedback of the segmentation results.
The approach has been evaluated with typical US acquisitions from the clinical
routine, and the datasets consisted of pancreatic cancer metastases. Even for
difficult cases, satisfying segmentation results could be achieved because of
the interactive real-time behavior of the approach. In total, 40 clinical
images have been evaluated with our method by comparing the results against
manual ground truth segmentations. This evaluation yielded an average Dice
Score of 85% and an average Hausdorff Distance of 13 pixels.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 14:45:20 GMT"
}
] | 2017-04-19T00:00:00 | [
[
"Egger",
"Jan",
""
],
[
"Schmalstieg",
"Dieter",
""
],
[
"Chen",
"Xiaojun",
""
],
[
"Zoller",
"Wolfram G.",
""
],
[
"Hann",
"Alexander",
""
]
] | TITLE: Interactive Outlining of Pancreatic Cancer Liver Metastases in
Ultrasound Images
ABSTRACT: Ultrasound (US) is the most commonly used liver imaging modality worldwide.
Due to its low cost, it is increasingly used in the follow-up of cancer
patients with metastases localized in the liver. In this contribution, we
present the results of an interactive segmentation approach for liver
metastases in US acquisitions. A (semi-) automatic segmentation is still very
challenging because of the low image quality and the low contrast between the
metastasis and the surrounding liver tissue. Thus, the state of the art in
clinical practice is still manual measurement and outlining of the metastases
in the US images. We tackle the problem by providing an interactive
segmentation approach providing real-time feedback of the segmentation results.
The approach has been evaluated with typical US acquisitions from the clinical
routine, and the datasets consisted of pancreatic cancer metastases. Even for
difficult cases, satisfying segmentation results could be achieved because of
the interactive real-time behavior of the approach. In total, 40 clinical
images have been evaluated with our method by comparing the results against
manual ground truth segmentations. This evaluation yielded an average Dice
Score of 85% and an average Hausdorff Distance of 13 pixels.
| no_new_dataset | 0.946151 |
1704.05393 | Michela Fazzolari | Michela Fazzolari, Marinella Petrocchi, Alessandro Tommasi, Cesare
Zavattari | Mining Worse and Better Opinions. Unsupervised and Agnostic Aggregation
of Online Reviews | null | null | null | null | cs.SI cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel approach for aggregating online reviews,
according to the opinions they express. Our methodology is unsupervised - due
to the fact that it does not rely on pre-labeled reviews - and it is agnostic -
since it does not make any assumption about the domain or the language of the
review content. We measure the adherence of a review content to the domain
terminology extracted from a review set. First, we demonstrate the
informativeness of the adherence metric with respect to the score associated
with a review. Then, we exploit the metric values to group reviews, according
to the opinions they express. Our experimental campaign has been carried out on
two large datasets collected from Booking and Amazon, respectively.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 15:20:25 GMT"
}
] | 2017-04-19T00:00:00 | [
[
"Fazzolari",
"Michela",
""
],
[
"Petrocchi",
"Marinella",
""
],
[
"Tommasi",
"Alessandro",
""
],
[
"Zavattari",
"Cesare",
""
]
] | TITLE: Mining Worse and Better Opinions. Unsupervised and Agnostic Aggregation
of Online Reviews
ABSTRACT: In this paper, we propose a novel approach for aggregating online reviews,
according to the opinions they express. Our methodology is unsupervised - due
to the fact that it does not rely on pre-labeled reviews - and it is agnostic -
since it does not make any assumption about the domain or the language of the
review content. We measure the adherence of a review content to the domain
terminology extracted from a review set. First, we demonstrate the
informativeness of the adherence metric with respect to the score associated
with a review. Then, we exploit the metric values to group reviews, according
to the opinions they express. Our experimental campaign has been carried out on
two large datasets collected from Booking and Amazon, respectively.
| no_new_dataset | 0.949201 |
1704.05409 | Giorgio Roffo | Giorgio Roffo and Simone Melzi | Ranking to Learn: Feature Ranking and Selection via Eigenvector
Centrality | Preprint version - Lecture Notes in Computer Science - Springer 2017 | New Frontiers in Mining Complex Patterns, Fifth International
workshop, nfMCP2016. Lecture Notes in Computer Science - Springer | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In an era where accumulating data is easy and storing it inexpensive, feature
selection plays a central role in helping to reduce the high-dimensionality of
huge amounts of otherwise meaningless data. In this paper, we propose a
graph-based method for feature selection that ranks features by identifying the
most important ones within an arbitrary set of cues. Mapping the problem onto an
affinity graph, where features are the nodes, the solution is given by assessing
the importance of nodes through some indicators of centrality, in particular,
the Eigen-vector Centrality (EC). The gist of EC is to estimate the importance
of a feature as a function of the importance of its neighbors. Ranking central
nodes individuates candidate features, which turn out to be effective from a
classification point of view, as proved by a thorough experimental section.
Our approach has been tested on 7 diverse datasets from recent literature
(e.g., biological data and object recognition, among others), and compared
against filter, embedded and wrappers methods. The results are remarkable in
terms of accuracy, stability and low execution time.
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2017 16:21:05 GMT"
}
] | 2017-04-19T00:00:00 | [
[
"Roffo",
"Giorgio",
""
],
[
"Melzi",
"Simone",
""
]
] | TITLE: Ranking to Learn: Feature Ranking and Selection via Eigenvector
Centrality
ABSTRACT: In an era where accumulating data is easy and storing it inexpensive, feature
selection plays a central role in helping to reduce the high-dimensionality of
huge amounts of otherwise meaningless data. In this paper, we propose a
graph-based method for feature selection that ranks features by identifying the
most important ones within an arbitrary set of cues. Mapping the problem onto an
affinity graph, where features are the nodes, the solution is given by assessing
the importance of nodes through some indicators of centrality, in particular,
the Eigen-vector Centrality (EC). The gist of EC is to estimate the importance
of a feature as a function of the importance of its neighbors. Ranking central
nodes individuates candidate features, which turn out to be effective from a
classification point of view, as proved by a thorough experimental section.
Our approach has been tested on 7 diverse datasets from recent literature
(e.g., biological data and object recognition, among others), and compared
against filter, embedded, and wrapper methods. The results are remarkable in
terms of accuracy, stability and low execution time.
| no_new_dataset | 0.945349 |
1608.03016 | Yuncheng Li | Yuncheng Li, LiangLiang Cao, Jiang Zhu, Jiebo Luo | Mining Fashion Outfit Composition Using An End-to-End Deep Learning
Approach on Set Data | IEEE TMM | null | 10.1109/TMM.2017.2690144 | null | cs.MM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composing fashion outfits involves deep understanding of fashion standards
while incorporating creativity for choosing multiple fashion items (e.g.,
Jewelry, Bag, Pants, Dress). In fashion websites, popular or high-quality
fashion outfits are usually designed by fashion experts and followed by large
audiences. In this paper, we propose a machine learning system to compose
fashion outfits automatically. The core of the proposed automatic composition
system is to score fashion outfit candidates based on the appearances and
meta-data. We propose to leverage outfit popularity on fashion oriented
websites to supervise the scoring component. The scoring component is a
multi-modal multi-instance deep learning system that evaluates instance
aesthetics and set compatibility simultaneously. In order to train and evaluate
the proposed composition system, we have collected a large scale fashion outfit
dataset with 195K outfits and 368K fashion items from Polyvore. Although the
fashion outfit scoring and composition is rather challenging, we have achieved
an AUC of 85% for the scoring component, and an accuracy of 77% for a
constrained composition task.
| [
{
"version": "v1",
"created": "Wed, 10 Aug 2016 01:11:32 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Apr 2017 05:26:23 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Li",
"Yuncheng",
""
],
[
"Cao",
"LiangLiang",
""
],
[
"Zhu",
"Jiang",
""
],
[
"Luo",
"Jiebo",
""
]
] | TITLE: Mining Fashion Outfit Composition Using An End-to-End Deep Learning
Approach on Set Data
ABSTRACT: Composing fashion outfits involves deep understanding of fashion standards
while incorporating creativity for choosing multiple fashion items (e.g.,
Jewelry, Bag, Pants, Dress). In fashion websites, popular or high-quality
fashion outfits are usually designed by fashion experts and followed by large
audiences. In this paper, we propose a machine learning system to compose
fashion outfits automatically. The core of the proposed automatic composition
system is to score fashion outfit candidates based on the appearances and
meta-data. We propose to leverage outfit popularity on fashion oriented
websites to supervise the scoring component. The scoring component is a
multi-modal multi-instance deep learning system that evaluates instance
aesthetics and set compatibility simultaneously. In order to train and evaluate
the proposed composition system, we have collected a large scale fashion outfit
dataset with 195K outfits and 368K fashion items from Polyvore. Although the
fashion outfit scoring and composition is rather challenging, we have achieved
an AUC of 85% for the scoring component, and an accuracy of 77% for a
constrained composition task.
| new_dataset | 0.854095 |
1612.03216 | Peter Potash | Peter Potash, Alexey Romanov, Anna Rumshisky | #HashtagWars: Learning a Sense of Humor | 10 Pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present a new dataset for computational humor, specifically
comparative humor ranking, which attempts to eschew the ubiquitous binary
approach to humor detection. The dataset consists of tweets that are humorous
responses to a given hashtag. We describe the motivation for this new dataset,
as well as the collection process, which includes a description of our
semi-automated system for data collection. We also present initial experiments
for this dataset using both unsupervised and supervised approaches. Our best
supervised system achieved 63.7% accuracy, suggesting that this task is much
more difficult than comparable humor detection tasks. Initial experiments
indicate that a character-level model is more suitable for this task than a
token-level model, likely due to a large amount of puns that can be captured by
a character-level model.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2016 23:28:16 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Apr 2017 18:41:44 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Potash",
"Peter",
""
],
[
"Romanov",
"Alexey",
""
],
[
"Rumshisky",
"Anna",
""
]
] | TITLE: #HashtagWars: Learning a Sense of Humor
ABSTRACT: In this work, we present a new dataset for computational humor, specifically
comparative humor ranking, which attempts to eschew the ubiquitous binary
approach to humor detection. The dataset consists of tweets that are humorous
responses to a given hashtag. We describe the motivation for this new dataset,
as well as the collection process, which includes a description of our
semi-automated system for data collection. We also present initial experiments
for this dataset using both unsupervised and supervised approaches. Our best
supervised system achieved 63.7% accuracy, suggesting that this task is much
more difficult than comparable humor detection tasks. Initial experiments
indicate that a character-level model is more suitable for this task than a
token-level model, likely due to a large amount of puns that can be captured by
a character-level model.
| new_dataset | 0.956431 |
1612.04402 | Peiyun Hu | Peiyun Hu, Deva Ramanan | Finding Tiny Faces | CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Though tremendous strides have been made in object recognition, one of the
remaining open challenges is detecting small objects. We explore three aspects
of the problem in the context of finding small faces: the role of scale
invariance, image resolution, and contextual reasoning. While most recognition
approaches aim to be scale-invariant, the cues for recognizing a 3px tall face
are fundamentally different than those for recognizing a 300px tall face. We
take a different approach and train separate detectors for different scales. To
maintain efficiency, detectors are trained in a multi-task fashion: they make
use of features extracted from multiple layers of a single (deep) feature
hierarchy. While training detectors for large objects is straightforward, the
crucial challenge remains training detectors for small objects. We show that
context is crucial, and define templates that make use of massively-large
receptive fields (where 99% of the template extends beyond the object of
interest). Finally, we explore the role of scale in pre-trained deep networks,
providing ways to extrapolate networks tuned for limited scales to rather
extreme ranges. We demonstrate state-of-the-art results on
massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when
compared to prior art on WIDER FACE, our results reduce error by a factor of 2
(our models produce an AP of 82% while prior art ranges from 29-64%).
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 21:28:02 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Apr 2017 06:18:08 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Hu",
"Peiyun",
""
],
[
"Ramanan",
"Deva",
""
]
] | TITLE: Finding Tiny Faces
ABSTRACT: Though tremendous strides have been made in object recognition, one of the
remaining open challenges is detecting small objects. We explore three aspects
of the problem in the context of finding small faces: the role of scale
invariance, image resolution, and contextual reasoning. While most recognition
approaches aim to be scale-invariant, the cues for recognizing a 3px tall face
are fundamentally different than those for recognizing a 300px tall face. We
take a different approach and train separate detectors for different scales. To
maintain efficiency, detectors are trained in a multi-task fashion: they make
use of features extracted from multiple layers of a single (deep) feature
hierarchy. While training detectors for large objects is straightforward, the
crucial challenge remains training detectors for small objects. We show that
context is crucial, and define templates that make use of massively-large
receptive fields (where 99% of the template extends beyond the object of
interest). Finally, we explore the role of scale in pre-trained deep networks,
providing ways to extrapolate networks tuned for limited scales to rather
extreme ranges. We demonstrate state-of-the-art results on
massively-benchmarked face datasets (FDDB and WIDER FACE). In particular, when
compared to prior art on WIDER FACE, our results reduce error by a factor of 2
(our models produce an AP of 82% while prior art ranges from 29-64%).
| no_new_dataset | 0.948298 |
1701.01779 | George Papandreou | George Papandreou, Tyler Zhu, Nori Kanazawa, Alexander Toshev,
Jonathan Tompson, Chris Bregler, Kevin Murphy | Towards Accurate Multi-person Pose Estimation in the Wild | Paper describing an improved version of the G-RMI entry to the 2016
COCO keypoints challenge (http://image-net.org/challenges/ilsvrc+coco2016).
Camera ready version to appear in the Proceedings of CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method for multi-person detection and 2-D pose estimation that
achieves state-of-the-art results on the challenging COCO keypoints task. It is a
simple, yet powerful, top-down approach consisting of two stages.
In the first stage, we predict the location and scale of boxes which are
likely to contain people; for this we use the Faster RCNN detector. In the
second stage, we estimate the keypoints of the person potentially contained in
each proposed bounding box. For each keypoint type we predict dense heatmaps
and offsets using a fully convolutional ResNet. To combine these outputs we
introduce a novel aggregation procedure to obtain highly localized keypoint
predictions. We also use a novel form of keypoint-based Non-Maximum-Suppression
(NMS), instead of the cruder box-level NMS, and a novel form of keypoint-based
confidence score estimation, instead of box-level scoring.
Trained on COCO data alone, our final system achieves average precision of
0.649 on the COCO test-dev set and 0.643 on the test-standard set, outperforming
the winner of the 2016 COCO keypoints challenge and other recent state-of-the-art methods.
Further, by using additional in-house labeled data we obtain an even higher
average precision of 0.685 on the test-dev set and 0.673 on the test-standard
set, more than 5% absolute improvement compared to the previous best performing
method on the same dataset.
| [
{
"version": "v1",
"created": "Fri, 6 Jan 2017 23:56:02 GMT"
},
{
"version": "v2",
"created": "Fri, 14 Apr 2017 18:30:58 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Papandreou",
"George",
""
],
[
"Zhu",
"Tyler",
""
],
[
"Kanazawa",
"Nori",
""
],
[
"Toshev",
"Alexander",
""
],
[
"Tompson",
"Jonathan",
""
],
[
"Bregler",
"Chris",
""
],
[
"Murphy",
"Kevin",
""
]
] | TITLE: Towards Accurate Multi-person Pose Estimation in the Wild
ABSTRACT: We propose a method for multi-person detection and 2-D pose estimation that
achieves state-of-the-art results on the challenging COCO keypoints task. It is a
simple, yet powerful, top-down approach consisting of two stages.
In the first stage, we predict the location and scale of boxes which are
likely to contain people; for this we use the Faster RCNN detector. In the
second stage, we estimate the keypoints of the person potentially contained in
each proposed bounding box. For each keypoint type we predict dense heatmaps
and offsets using a fully convolutional ResNet. To combine these outputs we
introduce a novel aggregation procedure to obtain highly localized keypoint
predictions. We also use a novel form of keypoint-based Non-Maximum-Suppression
(NMS), instead of the cruder box-level NMS, and a novel form of keypoint-based
confidence score estimation, instead of box-level scoring.
Trained on COCO data alone, our final system achieves average precision of
0.649 on the COCO test-dev set and 0.643 on the test-standard set, outperforming
the winner of the 2016 COCO keypoints challenge and other recent state-of-the-art methods.
Further, by using additional in-house labeled data we obtain an even higher
average precision of 0.685 on the test-dev set and 0.673 on the test-standard
set, more than 5% absolute improvement compared to the previous best performing
method on the same dataset.
| no_new_dataset | 0.94699 |
1704.03944 | Yuting Zhang | Yuting Zhang, Luyao Yuan, Yijie Guo, Zhiyuan He, I-An Huang, Honglak
Lee | Discriminative Bimodal Networks for Visual Localization and Detection
with Natural Language Queries | IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
2017 | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Associating image regions with text queries has been recently explored as a
new way to bridge visual and linguistic representations. A few pioneering
approaches have been proposed based on recurrent neural language models trained
generatively (e.g., generating captions), but achieving somewhat limited
localization accuracy. To better address natural-language-based visual entity
localization, we propose a discriminative approach. We formulate a
discriminative bimodal neural network (DBNet), which can be trained by a
classifier with extensive use of negative samples. Our training objective
encourages better localization on single images, incorporates text phrases in a
broad range, and properly pairs image regions with text phrases into positive
and negative examples. Experiments on the Visual Genome dataset demonstrate the
proposed DBNet significantly outperforms previous state-of-the-art methods both
for localization on single images and for detection on multiple images. We
also establish an evaluation protocol for natural-language visual detection.
| [
{
"version": "v1",
"created": "Wed, 12 Apr 2017 22:09:36 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Apr 2017 07:22:14 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Zhang",
"Yuting",
""
],
[
"Yuan",
"Luyao",
""
],
[
"Guo",
"Yijie",
""
],
[
"He",
"Zhiyuan",
""
],
[
"Huang",
"I-An",
""
],
[
"Lee",
"Honglak",
""
]
] | TITLE: Discriminative Bimodal Networks for Visual Localization and Detection
with Natural Language Queries
ABSTRACT: Associating image regions with text queries has been recently explored as a
new way to bridge visual and linguistic representations. A few pioneering
approaches have been proposed based on recurrent neural language models trained
generatively (e.g., generating captions), but achieving somewhat limited
localization accuracy. To better address natural-language-based visual entity
localization, we propose a discriminative approach. We formulate a
discriminative bimodal neural network (DBNet), which can be trained by a
classifier with extensive use of negative samples. Our training objective
encourages better localization on single images, incorporates text phrases in a
broad range, and properly pairs image regions with text phrases into positive
and negative examples. Experiments on the Visual Genome dataset demonstrate the
proposed DBNet significantly outperforms previous state-of-the-art methods both
for localization on single images and for detection on multiple images. We we
also establish an evaluation protocol for natural-language visual detection.
| no_new_dataset | 0.946448 |
1704.04516 | Tae Soo Kim | Tae Soo Kim, Austin Reiter | Interpretable 3D Human Action Analysis with Temporal Convolutional
Networks | 8 pages, 5 figures, BNMW CVPR 2017 Submission | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discriminative power of modern deep learning models for 3D human action
recognition is growing ever more potent. In conjunction with the recent
resurgence of 3D human action representation with 3D skeletons, the quality and
the pace of recent progress have been significant. However, the inner workings
of state-of-the-art learning based methods in 3D human action recognition still
remain mostly black-box. In this work, we propose to use a new class of models
known as Temporal Convolutional Neural Networks (TCN) for 3D human action
recognition. Compared to popular LSTM-based Recurrent Neural Network models,
given interpretable input such as 3D skeletons, TCN provides us a way to
explicitly learn readily interpretable spatio-temporal representations for 3D
human action recognition. We provide our strategy in re-designing the TCN with
interpretability in mind and how such characteristics of the model are leveraged
to construct a powerful 3D activity recognition method. Through this work, we
wish to take a step towards a spatio-temporal model that is easier to
understand, explain and interpret. The resulting model, Res-TCN, achieves
state-of-the-art results on the largest 3D human action recognition dataset,
NTU-RGBD.
| [
{
"version": "v1",
"created": "Fri, 14 Apr 2017 19:00:36 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Kim",
"Tae Soo",
""
],
[
"Reiter",
"Austin",
""
]
] | TITLE: Interpretable 3D Human Action Analysis with Temporal Convolutional
Networks
ABSTRACT: The discriminative power of modern deep learning models for 3D human action
recognition is growing ever more potent. In conjunction with the recent
resurgence of 3D human action representation with 3D skeletons, the quality and
the pace of recent progress have been significant. However, the inner workings
of state-of-the-art learning based methods in 3D human action recognition still
remain mostly black-box. In this work, we propose to use a new class of models
known as Temporal Convolutional Neural Networks (TCN) for 3D human action
recognition. Compared to popular LSTM-based Recurrent Neural Network models,
given interpretable input such as 3D skeletons, TCN provides us a way to
explicitly learn readily interpretable spatio-temporal representations for 3D
human action recognition. We provide our strategy in re-designing the TCN with
interpretability in mind and how such characteristics of the model are leveraged
to construct a powerful 3D activity recognition method. Through this work, we
wish to take a step towards a spatio-temporal model that is easier to
understand, explain and interpret. The resulting model, Res-TCN, achieves
state-of-the-art results on the largest 3D human action recognition dataset,
NTU-RGBD.
| no_new_dataset | 0.939081 |
1704.04723 | Jalal Mahmud | Jalal Mahmud, Geli Fei, Anbang Xu, Aditya Pal, Michelle Zhou | Computational Models for Attitude and Actions Prediction | This is an extended version of a previously published IUI 2016 paper
from same authors. http://dl.acm.org/citation.cfm?id=2856800 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present computational models to predict Twitter users'
attitude towards a specific brand through their personal and social
characteristics. We also predict their likelihood to take different actions
based on their attitudes. In order to operationalize our research on users'
attitude and actions, we collected ground-truth data through surveys of Twitter
users. We have conducted experiments using two real world datasets to validate
the effectiveness of our attitude and action prediction framework. Finally, we
show how our models can be integrated with a visual analytics system for
customer intervention.
| [
{
"version": "v1",
"created": "Sun, 16 Apr 2017 05:03:22 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Mahmud",
"Jalal",
""
],
[
"Fei",
"Geli",
""
],
[
"Xu",
"Anbang",
""
],
[
"Pal",
"Aditya",
""
],
[
"Zhou",
"Michelle",
""
]
] | TITLE: Computational Models for Attitude and Actions Prediction
ABSTRACT: In this paper, we present computational models to predict Twitter users'
attitude towards a specific brand through their personal and social
characteristics. We also predict their likelihood to take different actions
based on their attitudes. In order to operationalize our research on users'
attitude and actions, we collected ground-truth data through surveys of Twitter
users. We have conducted experiments using two real world datasets to validate
the effectiveness of our attitude and action prediction framework. Finally, we
show how our models can be integrated with a visual analytics system for
customer intervention.
| no_new_dataset | 0.943815 |
1704.04799 | Alexander Jung | Saeed Basirian and Alexander Jung | Random Walk Sampling for Big Data over Networks | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been shown recently that graph signals with small total variation can
be accurately recovered from only a few samples if the sampling set satisfies a
certain condition, referred to as the network nullspace property. Based on this
recovery condition, we propose a sampling strategy for smooth graph signals
based on random walks. Numerical experiments demonstrate the effectiveness of
this approach for graph signals obtained from a synthetic random graph model as
well as a real-world dataset.
| [
{
"version": "v1",
"created": "Sun, 16 Apr 2017 17:43:38 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Basirian",
"Saeed",
""
],
[
"Jung",
"Alexander",
""
]
] | TITLE: Random Walk Sampling for Big Data over Networks
ABSTRACT: It has been shown recently that graph signals with small total variation can
be accurately recovered from only a few samples if the sampling set satisfies a
certain condition, referred to as the network nullspace property. Based on this
recovery condition, we propose a sampling strategy for smooth graph signals
based on random walks. Numerical experiments demonstrate the effectiveness of
this approach for graph signals obtained from a synthetic random graph model as
well as a real-world dataset.
| no_new_dataset | 0.954095 |
1704.04865 | Felix Juefei-Xu | Felix Juefei-Xu, Vishnu Naresh Boddeti, Marios Savvides | Gang of GANs: Generative Adversarial Networks with Maximum Margin
Ranking | 16 pages. 11 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional generative adversarial networks (GANs) and many of their variants
are trained by minimizing the KL or JS-divergence loss that measures how close
the generated data distribution is to the true data distribution. A recent
advance called the WGAN based on Wasserstein distance can improve on the KL and
JS-divergence based GANs, and alleviate the gradient vanishing, instability,
and mode collapse issues that are common in the GAN training. In this work, we
aim at improving on the WGAN by first generalizing its discriminator loss to a
margin-based one, which leads to a better discriminator, and in turn a better
generator, and then carrying out a progressive training paradigm involving
multiple GANs to contribute to the maximum margin ranking loss so that the GAN
at later stages will improve upon early stages. We call this method Gang of
GANs (GoGAN). We have shown theoretically that the proposed GoGAN can reduce
the gap between the true data distribution and the generated data distribution
by at least half in an optimally trained WGAN. We have also proposed a new way
of measuring GAN quality which is based on image completion tasks. We have
evaluated our method on four visual datasets: CelebA, LSUN Bedroom, CIFAR-10,
and 50K-SSFF, and have seen both visual and quantitative improvement over
baseline WGAN.
| [
{
"version": "v1",
"created": "Mon, 17 Apr 2017 04:42:56 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Juefei-Xu",
"Felix",
""
],
[
"Boddeti",
"Vishnu Naresh",
""
],
[
"Savvides",
"Marios",
""
]
] | TITLE: Gang of GANs: Generative Adversarial Networks with Maximum Margin
Ranking
ABSTRACT: Traditional generative adversarial networks (GANs) and many of their variants
are trained by minimizing the KL or JS-divergence loss that measures how close
the generated data distribution is to the true data distribution. A recent
advance called the WGAN based on Wasserstein distance can improve on the KL and
JS-divergence based GANs, and alleviate the gradient vanishing, instability,
and mode collapse issues that are common in the GAN training. In this work, we
aim at improving on the WGAN by first generalizing its discriminator loss to a
margin-based one, which leads to a better discriminator, and in turn a better
generator, and then carrying out a progressive training paradigm involving
multiple GANs to contribute to the maximum margin ranking loss so that the GAN
at later stages will improve upon early stages. We call this method Gang of
GANs (GoGAN). We have shown theoretically that the proposed GoGAN can reduce
the gap between the true data distribution and the generated data distribution
by at least half in an optimally trained WGAN. We have also proposed a new way
of measuring GAN quality which is based on image completion tasks. We have
evaluated our method on four visual datasets: CelebA, LSUN Bedroom, CIFAR-10,
and 50K-SSFF, and have seen both visual and quantitative improvement over
baseline WGAN.
| no_new_dataset | 0.948585 |
1704.04962 | Thomas Brouwer | Thomas Brouwer, Pietro Li\'o | Bayesian Hybrid Matrix Factorisation for Data Integration | Proceedings of the 20th International Conference on Artificial
Intelligence and Statistics (AISTATS 2017) | PMLR 54:557-566, 2017 | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel Bayesian hybrid matrix factorisation model (HMF) for
data integration, based on combining multiple matrix factorisation methods,
that can be used for in- and out-of-matrix prediction of missing values. The
model is very general and can be used to integrate many datasets across
different entity types, including repeated experiments, similarity matrices,
and very sparse datasets. We apply our method on two biological applications,
and extensively compare it to state-of-the-art machine learning and matrix
factorisation models. For in-matrix predictions on drug sensitivity datasets we
obtain consistently better performances than existing methods. This is
especially the case when we increase the sparsity of the datasets. Furthermore,
we perform out-of-matrix predictions on methylation and gene expression
datasets, and obtain the best results on two of the three datasets, especially
when the predictivity of datasets is high.
| [
{
"version": "v1",
"created": "Mon, 17 Apr 2017 13:39:29 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Brouwer",
"Thomas",
""
],
[
"Lió",
"Pietro",
""
]
] | TITLE: Bayesian Hybrid Matrix Factorisation for Data Integration
ABSTRACT: We introduce a novel Bayesian hybrid matrix factorisation model (HMF) for
data integration, based on combining multiple matrix factorisation methods,
that can be used for in- and out-of-matrix prediction of missing values. The
model is very general and can be used to integrate many datasets across
different entity types, including repeated experiments, similarity matrices,
and very sparse datasets. We apply our method on two biological applications,
and extensively compare it to state-of-the-art machine learning and matrix
factorisation models. For in-matrix predictions on drug sensitivity datasets we
obtain consistently better performances than existing methods. This is
especially the case when we increase the sparsity of the datasets. Furthermore,
we perform out-of-matrix predictions on methylation and gene expression
datasets, and obtain the best results on two of the three datasets, especially
when the predictivity of datasets is high.
| no_new_dataset | 0.950041 |
1704.05017 | Mathieu Galtier | Mathieu Galtier and Camille Marini | Morpheo: Traceable Machine Learning on Hidden data | whitepaper, 9 pages, 6 figures | null | null | null | cs.AI cs.CR cs.DC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Morpheo is a transparent and secure machine learning platform collecting and
analysing large datasets. It aims at building state-of-the-art prediction
models in various fields where data are sensitive. Indeed, it offers strong
privacy of data and algorithm, by preventing anyone from reading the data, apart
from the owner and the chosen algorithms. Computations in Morpheo are
orchestrated by a blockchain infrastructure, thus offering total traceability
of operations. Morpheo aims at building an attractive economic ecosystem around
data prediction by channelling crypto-money from prediction requests to useful
data and algorithms providers. Morpheo is designed to handle multiple data
sources in a transfer learning approach in order to mutualize knowledge
acquired from large datasets for applications with smaller but similar
datasets.
| [
{
"version": "v1",
"created": "Mon, 17 Apr 2017 16:24:29 GMT"
}
] | 2017-04-18T00:00:00 | [
[
"Galtier",
"Mathieu",
""
],
[
"Marini",
"Camille",
""
]
] | TITLE: Morpheo: Traceable Machine Learning on Hidden data
ABSTRACT: Morpheo is a transparent and secure machine learning platform collecting and
analysing large datasets. It aims at building state-of-the-art prediction
models in various fields where data are sensitive. Indeed, it offers strong
privacy of data and algorithm, by preventing anyone from reading the data, apart
from the owner and the chosen algorithms. Computations in Morpheo are
orchestrated by a blockchain infrastructure, thus offering total traceability
of operations. Morpheo aims at building an attractive economic ecosystem around
data prediction by channelling crypto-money from prediction requests to useful
data and algorithms providers. Morpheo is designed to handle multiple data
sources in a transfer learning approach in order to mutualize knowledge
acquired from large datasets for applications with smaller but similar
datasets.
| no_new_dataset | 0.945298 |
1112.6209 | Quoc Le | Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai
Chen, Greg S. Corrado, Jeff Dean, Andrew Y. Ng | Building high-level features using large scale unsupervised learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of building high-level, class-specific feature
detectors from only unlabeled data. For example, is it possible to learn a face
detector using only unlabeled images? To answer this, we train a 9-layered
locally connected sparse autoencoder with pooling and local contrast
normalization on a large dataset of images (the model has 1 billion
connections, the dataset has 10 million 200x200 pixel images downloaded from
the Internet). We train this network using model parallelism and asynchronous
SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to
what appears to be a widely-held intuition, our experimental results reveal
that it is possible to train a face detector without having to label images as
containing a face or not. Control experiments show that this feature detector
is robust not only to translation but also to scaling and out-of-plane
rotation. We also find that the same network is sensitive to other high-level
concepts such as cat faces and human bodies. Starting with these learned
features, we trained our network to obtain 15.8% accuracy in recognizing 20,000
object categories from ImageNet, a leap of 70% relative improvement over the
previous state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 29 Dec 2011 00:26:54 GMT"
},
{
"version": "v2",
"created": "Tue, 22 May 2012 08:12:49 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Jun 2012 05:12:56 GMT"
},
{
"version": "v4",
"created": "Wed, 11 Jul 2012 04:40:33 GMT"
},
{
"version": "v5",
"created": "Thu, 12 Jul 2012 04:32:50 GMT"
}
] | 2017-04-17T00:00:00 | [
[
"Le",
"Quoc V.",
""
],
[
"Ranzato",
"Marc'Aurelio",
""
],
[
"Monga",
"Rajat",
""
],
[
"Devin",
"Matthieu",
""
],
[
"Chen",
"Kai",
""
],
[
"Corrado",
"Greg S.",
""
],
[
"Dean",
"Jeff",
""
],
[
"Ng",
"Andrew Y.",
""
]
] | TITLE: Building high-level features using large scale unsupervised learning
ABSTRACT: We consider the problem of building high-level, class-specific feature
detectors from only unlabeled data. For example, is it possible to learn a face
detector using only unlabeled images? To answer this, we train a 9-layered
locally connected sparse autoencoder with pooling and local contrast
normalization on a large dataset of images (the model has 1 billion
connections, the dataset has 10 million 200x200 pixel images downloaded from
the Internet). We train this network using model parallelism and asynchronous
SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to
what appears to be a widely-held intuition, our experimental results reveal
that it is possible to train a face detector without having to label images as
containing a face or not. Control experiments show that this feature detector
is robust not only to translation but also to scaling and out-of-plane
rotation. We also find that the same network is sensitive to other high-level
concepts such as cat faces and human bodies. Starting with these learned
features, we trained our network to obtain 15.8% accuracy in recognizing 20,000
object categories from ImageNet, a leap of 70% relative improvement over the
previous state-of-the-art.
| no_new_dataset | 0.939192 |
1703.05593 | Ignacio Rocco | Ignacio Rocco, Relja Arandjelovi\'c, Josef Sivic | Convolutional neural network architecture for geometric matching | In 2017 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR 2017) | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of determining correspondences between two images in
agreement with a geometric model such as an affine or thin-plate spline
transformation, and estimating its parameters. The contributions of this work
are three-fold. First, we propose a convolutional neural network architecture
for geometric matching. The architecture is based on three main components that
mimic the standard steps of feature extraction, matching and simultaneous
inlier detection and model parameter estimation, while being trainable
end-to-end. Second, we demonstrate that the network parameters can be trained
from synthetically generated imagery without the need for manual annotation and
that our matching layer significantly increases generalization capabilities to
never seen before images. Finally, we show that the same model can perform both
instance-level and category-level matching giving state-of-the-art results on
the challenging Proposal Flow dataset.
| [
{
"version": "v1",
"created": "Thu, 16 Mar 2017 13:03:54 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2017 22:32:43 GMT"
}
] | 2017-04-17T00:00:00 | [
[
"Rocco",
"Ignacio",
""
],
[
"Arandjelović",
"Relja",
""
],
[
"Sivic",
"Josef",
""
]
] | TITLE: Convolutional neural network architecture for geometric matching
ABSTRACT: We address the problem of determining correspondences between two images in
agreement with a geometric model such as an affine or thin-plate spline
transformation, and estimating its parameters. The contributions of this work
are three-fold. First, we propose a convolutional neural network architecture
for geometric matching. The architecture is based on three main components that
mimic the standard steps of feature extraction, matching and simultaneous
inlier detection and model parameter estimation, while being trainable
end-to-end. Second, we demonstrate that the network parameters can be trained
from synthetically generated imagery without the need for manual annotation and
that our matching layer significantly increases generalization capabilities to
never seen before images. Finally, we show that the same model can perform both
instance-level and category-level matching giving state-of-the-art results on
the challenging Proposal Flow dataset.
| no_new_dataset | 0.952574 |
1704.00057 | Layla El Asri | Layla El Asri and Hannes Schulz and Shikhar Sharma and Jeremie Zumer
and Justin Harris and Emery Fine and Rahul Mehrotra and Kaheer Suleman | Frames: A Corpus for Adding Memory to Goal-Oriented Dialogue Systems | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the Frames dataset (Frames is available at
http://datasets.maluuba.com/Frames), a corpus of 1369 human-human dialogues
with an average of 15 turns per dialogue. We developed this dataset to study
the role of memory in goal-oriented dialogue systems. Based on Frames, we
introduce a task called frame tracking, which extends state tracking to a
setting where several states are tracked simultaneously. We propose a baseline
model for this task. We show that Frames can also be used to study memory in
dialogue management and information presentation through natural language
generation.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 21:03:58 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2017 18:22:49 GMT"
}
] | 2017-04-17T00:00:00 | [
[
"Asri",
"Layla El",
""
],
[
"Schulz",
"Hannes",
""
],
[
"Sharma",
"Shikhar",
""
],
[
"Zumer",
"Jeremie",
""
],
[
"Harris",
"Justin",
""
],
[
"Fine",
"Emery",
""
],
[
"Mehrotra",
"Rahul",
""
],
[
"Suleman",
"Kaheer",
""
]
] | TITLE: Frames: A Corpus for Adding Memory to Goal-Oriented Dialogue Systems
ABSTRACT: This paper presents the Frames dataset (Frames is available at
http://datasets.maluuba.com/Frames), a corpus of 1369 human-human dialogues
with an average of 15 turns per dialogue. We developed this dataset to study
the role of memory in goal-oriented dialogue systems. Based on Frames, we
introduce a task called frame tracking, which extends state tracking to a
setting where several states are tracked simultaneously. We propose a baseline
model for this task. We show that Frames can also be used to study memory in
dialogue management and information presentation through natural language
generation.
| new_dataset | 0.958538 |