id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (list) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1605.06047
|
Gerasimos Spanakis
|
Gerasimos Spanakis, Gerhard Weiss
|
AMSOM: Adaptive Moving Self-organizing Map for Clustering and
Visualization
|
ICAART 2016 accepted full paper
| null |
10.5220/0005704801290140
| null |
cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-Organizing Map (SOM) is a neural network model which is used to obtain a
topology-preserving mapping from the (usually high dimensional) input/feature
space to an output/map space of fewer dimensions (usually two or three in order
to facilitate visualization). Neurons in the output space are connected with
each other but this structure remains fixed throughout training and learning is
achieved through the updating of neuron reference vectors in feature space.
Although growing variants of SOM overcome the fixed-structure limitation, they
increase computational cost and do not allow the removal of a neuron after its
introduction. In this paper, a variant of SOM is proposed
called AMSOM (Adaptive Moving Self-Organizing Map) that on the one hand creates
a more flexible structure where neuron positions are dynamically altered during
training and on the other hand tackles the drawback of having a predefined grid
by allowing neuron addition and/or removal during training. Experiments using
multiple literature datasets show that the proposed method improves training
performance of SOM, leads to a better visualization of the input dataset and
provides a framework for determining the optimal number and structure of
neurons.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 16:41:00 GMT"
}
] | 2016-05-20T00:00:00 |
[
[
"Spanakis",
"Gerasimos",
""
],
[
"Weiss",
"Gerhard",
""
]
] |
TITLE: AMSOM: Adaptive Moving Self-organizing Map for Clustering and
Visualization
ABSTRACT: Self-Organizing Map (SOM) is a neural network model which is used to obtain a
topology-preserving mapping from the (usually high dimensional) input/feature
space to an output/map space of fewer dimensions (usually two or three in order
to facilitate visualization). Neurons in the output space are connected with
each other but this structure remains fixed throughout training and learning is
achieved through the updating of neuron reference vectors in feature space.
Although growing variants of SOM overcome the fixed-structure limitation, they
increase computational cost and do not allow the removal of a neuron after its
introduction. In this paper, a variant of SOM is proposed
called AMSOM (Adaptive Moving Self-Organizing Map) that on the one hand creates
a more flexible structure where neuron positions are dynamically altered during
training and on the other hand tackles the drawback of having a predefined grid
by allowing neuron addition and/or removal during training. Experiments using
multiple literature datasets show that the proposed method improves training
performance of SOM, leads to a better visualization of the input dataset and
provides a framework for determining the optimal number and structure of
neurons.
|
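A minimal sketch of the standard SOM training step that the AMSOM abstract builds on, in NumPy. This is the classic fixed-grid update, not the paper's AMSOM variant; the Gaussian neighborhood and array shapes are illustrative assumptions.

```python
import numpy as np

def som_train_step(weights, grid, x, lr=0.1, sigma=1.0):
    """One classic SOM update: pull every neuron's reference vector toward
    sample x, weighted by grid distance to the best-matching unit (BMU).

    weights: (n_neurons, n_features) reference vectors in feature space
    grid:    (n_neurons, 2) fixed neuron positions in the output space
    """
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # winner neuron
    grid_d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)      # squared grid distance
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))              # Gaussian neighborhood
    weights += lr * h[:, None] * (x - weights)             # reference-vector update
    return weights
```

AMSOM's contribution is precisely what this sketch keeps fixed: it also moves the neuron positions (`grid`) during training and adds or removes neurons.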
1605.06052
|
Jason Grant
|
Jason Grant and Patrick Flynn
|
Hierarchical Clustering in Face Similarity Score Space
|
5 pages, 3 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Similarity scores in face recognition represent the proximity between pairs
of images as computed by a matching algorithm. Given a large set of images and
the proximities between all pairs, a similarity score space is defined. Cluster
analysis was applied to the similarity score space to develop various
taxonomies. Given the number of subjects in the dataset, we used hierarchical
methods to aggregate images of the same subject. We also explored the hierarchy
above and below the subject level, including clusters that reflect gender and
ethnicity. Evidence supports the existence of clustering by race, gender,
subject, and illumination condition.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 17:08:16 GMT"
}
] | 2016-05-20T00:00:00 |
[
[
"Grant",
"Jason",
""
],
[
"Flynn",
"Patrick",
""
]
] |
TITLE: Hierarchical Clustering in Face Similarity Score Space
ABSTRACT: Similarity scores in face recognition represent the proximity between pairs
of images as computed by a matching algorithm. Given a large set of images and
the proximities between all pairs, a similarity score space is defined. Cluster
analysis was applied to the similarity score space to develop various
taxonomies. Given the number of subjects in the dataset, we used hierarchical
methods to aggregate images of the same subject. We also explored the hierarchy
above and below the subject level, including clusters that reflect gender and
ethnicity. Evidence supports the existence of clustering by race, gender,
subject, and illumination condition.
|
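A small sketch of the pipeline this abstract describes, assuming SciPy: convert a symmetric similarity matrix to dissimilarities, run agglomerative (hierarchical) clustering, and cut the tree at a chosen number of clusters. The max-minus-similarity conversion and average linkage are assumptions, not details from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_score_space(S, n_clusters):
    """Hierarchical clustering of items given a similarity matrix S
    (higher = more similar), e.g. face-matcher scores between images."""
    D = S.max() - S                    # convert similarity to dissimilarity
    np.fill_diagonal(D, 0.0)           # a valid distance matrix needs a zero diagonal
    Z = linkage(squareform(D), method="average")         # agglomerative tree
    return fcluster(Z, t=n_clusters, criterion="maxclust")  # flat cluster labels
```

Cutting the tree at different heights exposes the hierarchy the paper explores: coarse cuts can reflect gender or ethnicity, finer cuts isolate individual subjects.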
1605.06083
|
Emiel van Miltenburg
|
Emiel van Miltenburg
|
Stereotyping and Bias in the Flickr30K Dataset
|
In: Proceedings of the Workshop on Multimodal Corpora (MMC-2016),
pages 1-4. Editors: Jens Edlund, Dirk Heylen and Patrizia Paggio
| null | null | null |
cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An untested assumption behind the crowdsourced descriptions of the images in
the Flickr30K dataset (Young et al., 2014) is that they "focus only on the
information that can be obtained from the image alone" (Hodosh et al., 2013, p.
859). This paper presents some evidence against this assumption, and provides a
list of biases and unwarranted inferences that can be found in the Flickr30K
dataset. Finally, it considers methods to find examples of these, and discusses
how we should deal with stereotype-driven descriptions in future applications.
|
[
{
"version": "v1",
"created": "Thu, 19 May 2016 19:17:23 GMT"
}
] | 2016-05-20T00:00:00 |
[
[
"van Miltenburg",
"Emiel",
""
]
] |
TITLE: Stereotyping and Bias in the Flickr30K Dataset
ABSTRACT: An untested assumption behind the crowdsourced descriptions of the images in
the Flickr30K dataset (Young et al., 2014) is that they "focus only on the
information that can be obtained from the image alone" (Hodosh et al., 2013, p.
859). This paper presents some evidence against this assumption, and provides a
list of biases and unwarranted inferences that can be found in the Flickr30K
dataset. Finally, it considers methods to find examples of these, and discusses
how we should deal with stereotype-driven descriptions in future applications.
|
1605.00287
|
Xiang Xiang
|
Minh Dao, Xiang Xiang, Bulent Ayhan, Chiman Kwan, Trac D. Tran
|
Detecting Burnscar from Hyperspectral Imagery via Sparse Representation
with Low-Rank Interference
|
It is not a publishable version at this point as there is no IP
coverage at the moment
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a burnscar detection model for hyperspectral
imaging (HSI) data. The proposed model contains two processing steps: the first
separates and then suppresses the cloud information present in the dataset
using an RPCA algorithm, and the second detects the burnscar area in the
low-rank component output by the first step. Experiments are conducted on the
public MODIS dataset available at the official NASA website.
|
[
{
"version": "v1",
"created": "Sun, 1 May 2016 18:18:45 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2016 23:25:22 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Dao",
"Minh",
""
],
[
"Xiang",
"Xiang",
""
],
[
"Ayhan",
"Bulent",
""
],
[
"Kwan",
"Chiman",
""
],
[
"Tran",
"Trac D.",
""
]
] |
TITLE: Detecting Burnscar from Hyperspectral Imagery via Sparse Representation
with Low-Rank Interference
ABSTRACT: In this paper, we propose a burnscar detection model for hyperspectral
imaging (HSI) data. The proposed model contains two processing steps: the first
separates and then suppresses the cloud information present in the dataset
using an RPCA algorithm, and the second detects the burnscar area in the
low-rank component output by the first step. Experiments are conducted on the
public MODIS dataset available at the official NASA website.
|
1605.04652
|
Anand Padmanabha Iyer
|
Anand Padmanabha Iyer, Ion Stoica, Mosharaf Chowdhury, Li Erran Li
|
Fast and Accurate Performance Analysis of LTE Radio Access Networks
| null | null | null | null |
cs.DC cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An increasing amount of analytics is performed on data that is procured in a
real-time fashion to make real-time decisions. Such tasks range from simple
reporting on streams to sophisticated model building. However, the practicality
of such analyses is impeded in several domains because they face a fundamental
trade-off between data collection latency and analysis accuracy.
In this paper, we study this trade-off in the context of a specific domain,
Cellular Radio Access Networks (RAN). Our choice of this domain is influenced
by its commonalities with several other domains that produce real-time data,
our access to a large live dataset, and its real-time nature and
dimensionality, which make it a natural fit for a popular analysis technique,
machine learning (ML). We find that the latency-accuracy trade-off can be
resolved using two broad, general techniques: intelligent data grouping and
task formulations that leverage domain characteristics. Based on this, we
present CellScope, a system that addresses this challenge by applying a
domain-specific formulation and application of Multi-task Learning (MTL) to RAN
performance analysis. It achieves this goal using three techniques: feature
engineering to transform raw data into effective features, a PCA-inspired
similarity metric to group data from geographically nearby base stations
sharing performance commonalities, and a hybrid online-offline model for
efficient model updates. Our evaluation of CellScope shows that its accuracy
improvements over direct application of ML range from 2.5x to 4.4x while
reducing the model update overhead by up to 4.8x. We have also used CellScope
to analyze a live LTE network consisting of over 2 million subscribers for a
period of over 10 months, where it uncovered several problems and insights,
some of them previously unknown.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2016 05:31:01 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2016 20:00:59 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Iyer",
"Anand Padmanabha",
""
],
[
"Stoica",
"Ion",
""
],
[
"Chowdhury",
"Mosharaf",
""
],
[
"Li",
"Li Erran",
""
]
] |
TITLE: Fast and Accurate Performance Analysis of LTE Radio Access Networks
ABSTRACT: An increasing amount of analytics is performed on data that is procured in a
real-time fashion to make real-time decisions. Such tasks range from simple
reporting on streams to sophisticated model building. However, the practicality
of such analyses is impeded in several domains because they face a fundamental
trade-off between data collection latency and analysis accuracy.
In this paper, we study this trade-off in the context of a specific domain,
Cellular Radio Access Networks (RAN). Our choice of this domain is influenced
by its commonalities with several other domains that produce real-time data,
our access to a large live dataset, and its real-time nature and
dimensionality, which make it a natural fit for a popular analysis technique,
machine learning (ML). We find that the latency-accuracy trade-off can be
resolved using two broad, general techniques: intelligent data grouping and
task formulations that leverage domain characteristics. Based on this, we
present CellScope, a system that addresses this challenge by applying a
domain-specific formulation and application of Multi-task Learning (MTL) to RAN
performance analysis. It achieves this goal using three techniques: feature
engineering to transform raw data into effective features, a PCA-inspired
similarity metric to group data from geographically nearby base stations
sharing performance commonalities, and a hybrid online-offline model for
efficient model updates. Our evaluation of CellScope shows that its accuracy
improvements over direct application of ML range from 2.5x to 4.4x while
reducing the model update overhead by up to 4.8x. We have also used CellScope
to analyze a live LTE network consisting of over 2 million subscribers for a
period of over 10 months, where it uncovered several problems and insights,
some of them previously unknown.
|
1605.05362
|
Nabiha Asghar
|
Nabiha Asghar
|
Yelp Dataset Challenge: Review Rating Prediction
| null | null | null | null |
cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Review websites, such as TripAdvisor and Yelp, allow users to post online
reviews for various businesses, products and services, and have been recently
shown to have a significant influence on consumer shopping behaviour. An online
review typically consists of free-form text and a star rating out of 5. The
problem of predicting a user's star rating for a product, given the user's text
review for that product, is called Review Rating Prediction and has lately
become a popular, albeit hard, problem in machine learning. In this paper, we
treat Review Rating Prediction as a multi-class classification problem, and
build sixteen different prediction models by combining four feature extraction
methods, (i) unigrams, (ii) bigrams, (iii) trigrams and (iv) Latent Semantic
Indexing, with four machine learning algorithms, (i) logistic regression, (ii)
Naive Bayes classification, (iii) perceptrons, and (iv) linear Support Vector
Classification. We analyse the performance of each of these sixteen models to
come up with the best model for predicting the ratings from reviews. We use the
dataset provided by Yelp for training and testing the models.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2016 20:52:33 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Asghar",
"Nabiha",
""
]
] |
TITLE: Yelp Dataset Challenge: Review Rating Prediction
ABSTRACT: Review websites, such as TripAdvisor and Yelp, allow users to post online
reviews for various businesses, products and services, and have been recently
shown to have a significant influence on consumer shopping behaviour. An online
review typically consists of free-form text and a star rating out of 5. The
problem of predicting a user's star rating for a product, given the user's text
review for that product, is called Review Rating Prediction and has lately
become a popular, albeit hard, problem in machine learning. In this paper, we
treat Review Rating Prediction as a multi-class classification problem, and
build sixteen different prediction models by combining four feature extraction
methods, (i) unigrams, (ii) bigrams, (iii) trigrams and (iv) Latent Semantic
Indexing, with four machine learning algorithms, (i) logistic regression, (ii)
Naive Bayes classification, (iii) perceptrons, and (iv) linear Support Vector
Classification. We analyse the performance of each of these sixteen models to
come up with the best model for predicting the ratings from reviews. We use the
dataset provided by Yelp for training and testing the models.
|
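One of the sixteen feature/classifier combinations the abstract enumerates, sketched with scikit-learn: unigram features with logistic regression. The TF-IDF weighting and hyperparameters are assumptions, not details from the paper.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Review Rating Prediction as 5-way classification: review text in, star rating out.
model = Pipeline([
    ("unigrams", TfidfVectorizer(ngram_range=(1, 1))),  # (1, 2)/(1, 3) for bi-/trigrams
    ("clf", LogisticRegression(max_iter=1000)),         # handles multi-class natively
])

# reviews: list[str]; stars: list[int] in 1..5
# model.fit(reviews, stars)
# model.predict(["Great food, terrible service."])
```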
1605.05395
|
Scott Reed
|
Scott Reed, Zeynep Akata, Bernt Schiele, Honglak Lee
|
Learning Deep Representations of Fine-grained Visual Descriptions
|
CVPR 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art methods for zero-shot visual recognition formulate learning
as a joint embedding problem of images and side information. In these
formulations the current best complement to visual features are attributes:
manually encoded vectors describing shared characteristics among categories.
Despite good performance, attributes have limitations: (1) finer-grained
recognition requires commensurately more attributes, and (2) attributes do not
provide a natural language interface. We propose to overcome these limitations
by training neural language models from scratch; i.e. without pre-training and
only consuming words and characters. Our proposed models train end-to-end to
align with the fine-grained and category-specific content of images. Natural
language provides a flexible and compact way of encoding only the salient
visual aspects for distinguishing categories. By training on raw text, our
model can do inference on raw text as well, providing humans a familiar mode
both for annotation and retrieval. Our model achieves strong performance on
zero-shot text-based image retrieval and significantly outperforms the
attribute-based state-of-the-art for zero-shot classification on the Caltech
UCSD Birds 200-2011 dataset.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2016 23:08:46 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Reed",
"Scott",
""
],
[
"Akata",
"Zeynep",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Lee",
"Honglak",
""
]
] |
TITLE: Learning Deep Representations of Fine-grained Visual Descriptions
ABSTRACT: State-of-the-art methods for zero-shot visual recognition formulate learning
as a joint embedding problem of images and side information. In these
formulations the current best complement to visual features are attributes:
manually encoded vectors describing shared characteristics among categories.
Despite good performance, attributes have limitations: (1) finer-grained
recognition requires commensurately more attributes, and (2) attributes do not
provide a natural language interface. We propose to overcome these limitations
by training neural language models from scratch; i.e. without pre-training and
only consuming words and characters. Our proposed models train end-to-end to
align with the fine-grained and category-specific content of images. Natural
language provides a flexible and compact way of encoding only the salient
visual aspects for distinguishing categories. By training on raw text, our
model can do inference on raw text as well, providing humans a familiar mode
both for annotation and retrieval. Our model achieves strong performance on
zero-shot text-based image retrieval and significantly outperforms the
attribute-based state-of-the-art for zero-shot classification on the Caltech
UCSD Birds 200-2011 dataset.
|
1605.05401
|
Yu Wang
|
Yu Wang, Yang Feng, Yuncheng Li, Xiyang Zhang, Richard Niemi, Jiebo
Luo
|
Pricing the Woman Card: Gender Politics between Hillary Clinton and
Donald Trump
|
4 pages, 6 figures, 7 tables, under review
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a data-driven method to measure the impact of the
'woman card' exchange between Hillary Clinton and Donald Trump. Building from a
unique dataset of the two candidates' Twitter followers, we first examine the
transition dynamics of the two candidates' Twitter followers one week before
the exchange and one week after. Then we train a convolutional neural network
to classify the gender of the followers and unfollowers, and study how women in
particular are reacting to the 'woman card' exchange. Our study suggests that
the 'woman card' comment has made women more likely to follow Hillary Clinton,
less likely to unfollow her and that it has apparently not affected the gender
composition of Trump followers.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2016 00:00:44 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Wang",
"Yu",
""
],
[
"Feng",
"Yang",
""
],
[
"Li",
"Yuncheng",
""
],
[
"Zhang",
"Xiyang",
""
],
[
"Niemi",
"Richard",
""
],
[
"Luo",
"Jiebo",
""
]
] |
TITLE: Pricing the Woman Card: Gender Politics between Hillary Clinton and
Donald Trump
ABSTRACT: In this paper, we propose a data-driven method to measure the impact of the
'woman card' exchange between Hillary Clinton and Donald Trump. Building from a
unique dataset of the two candidates' Twitter followers, we first examine the
transition dynamics of the two candidates' Twitter followers one week before
the exchange and one week after. Then we train a convolutional neural network
to classify the gender of the followers and unfollowers, and study how women in
particular are reacting to the 'woman card' exchange. Our study suggests that
the 'woman card' comment has made women more likely to follow Hillary Clinton,
less likely to unfollow her and that it has apparently not affected the gender
composition of Trump followers.
|
1605.05416
|
Ryan Lowe T.
|
Teng Long, Ryan Lowe, Jackie Chi Kit Cheung, Doina Precup
|
Leveraging Lexical Resources for Learning Entity Embeddings in
Multi-Relational Data
|
6 pages. Accepted to ACL 2016 (short paper)
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work in learning vector-space embeddings for multi-relational data has
focused on combining relational information derived from knowledge bases with
distributional information derived from large text corpora. We propose a simple
approach that leverages the descriptions of entities or phrases available in
lexical resources, in conjunction with distributional semantics, in order to
derive a better initialization for training relational models. Applying this
initialization to the TransE model results in significant new state-of-the-art
performances on the WordNet dataset, decreasing the mean rank from the previous
best of 212 to 51. It also results in faster convergence of the entity
representations. We find that there is a trade-off between improving the mean
rank and the hits@10 with this approach. This illustrates that much remains to
be understood regarding performance improvements in relational models.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2016 01:45:32 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Long",
"Teng",
""
],
[
"Lowe",
"Ryan",
""
],
[
"Cheung",
"Jackie Chi Kit",
""
],
[
"Precup",
"Doina",
""
]
] |
TITLE: Leveraging Lexical Resources for Learning Entity Embeddings in
Multi-Relational Data
ABSTRACT: Recent work in learning vector-space embeddings for multi-relational data has
focused on combining relational information derived from knowledge bases with
distributional information derived from large text corpora. We propose a simple
approach that leverages the descriptions of entities or phrases available in
lexical resources, in conjunction with distributional semantics, in order to
derive a better initialization for training relational models. Applying this
initialization to the TransE model results in significant new state-of-the-art
performances on the WordNet dataset, decreasing the mean rank from the previous
best of 212 to 51. It also results in faster convergence of the entity
representations. We find that there is a trade-off between improving the mean
rank and the hits@10 with this approach. This illustrates that much remains to
be understood regarding performance improvements in relational models.
|
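A sketch of the TransE scoring rule this abstract refers to, plus the paper's initialization idea in its simplest plausible form; averaging the description's word vectors is an assumption for illustration.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE models a triple (head, relation, tail) as h + r being close
    to t; a smaller ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t, ord=norm)

def init_entity_from_description(desc_word_vecs):
    """Initialize an entity embedding from its lexical-resource description
    instead of randomly (here: mean of the description's word vectors)."""
    return np.asarray(desc_word_vecs).mean(axis=0)
```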
1605.05436
|
Michael Goodrich
|
Michael T. Goodrich, Ahmed Eldawy
|
Parallel Algorithms for Summing Floating-Point Numbers
|
Conference version appears in SPAA 2016
| null | null | null |
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of exactly summing n floating-point numbers is a fundamental
problem that has many applications in large-scale simulations and computational
geometry. Unfortunately, due to the round-off error in standard floating-point
operations, this problem becomes very challenging. Moreover, all existing
solutions rely on sequential algorithms which cannot scale to the huge datasets
that need to be processed.
In this paper, we provide several efficient parallel algorithms for summing n
floating point numbers, so as to produce a faithfully rounded floating-point
representation of the sum. We present algorithms in PRAM, external-memory, and
MapReduce models, and we also provide an experimental analysis of our MapReduce
algorithms, due to their simplicity and practical efficiency.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2016 04:20:41 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Goodrich",
"Michael T.",
""
],
[
"Eldawy",
"Ahmed",
""
]
] |
TITLE: Parallel Algorithms for Summing Floating-Point Numbers
ABSTRACT: The problem of exactly summing n floating-point numbers is a fundamental
problem that has many applications in large-scale simulations and computational
geometry. Unfortunately, due to the round-off error in standard floating-point
operations, this problem becomes very challenging. Moreover, all existing
solutions rely on sequential algorithms which cannot scale to the huge datasets
that need to be processed.
In this paper, we provide several efficient parallel algorithms for summing n
floating point numbers, so as to produce a faithfully rounded floating-point
representation of the sum. We present algorithms in PRAM, external-memory, and
MapReduce models, and we also provide an experimental analysis of our MapReduce
algorithms, due to their simplicity and practical efficiency.
|
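The paper targets parallel, faithfully rounded sums; as background, here is the classic sequential compensated (Kahan) summation that motivates the problem. This is a sketch of the standard building block, not the paper's parallel algorithm.

```python
def kahan_sum(values):
    """Compensated summation: tracks low-order bits that plain
    floating-point addition would otherwise round away."""
    total = 0.0
    c = 0.0                      # running compensation
    for v in values:
        y = v - c                # subtract previously lost low-order bits
        t = total + y
        c = (t - total) - y      # rounding error of this addition
        total = t
    return total

# For comparison, Python's math.fsum returns a faithfully rounded result:
#   import math
#   sum([1e16, 1.0, -1e16])        -> 0.0 (naive sum loses the 1.0)
#   math.fsum([1e16, 1.0, -1e16])  -> 1.0
```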
1605.05462
|
Marius Leordeanu
|
Alina Marcu and Marius Leordeanu
|
Dual Local-Global Contextual Pathways for Recognition in Aerial Imagery
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual context is important in object recognition, and it is still an open
problem in computer vision. With the advent of deep convolutional neural
networks (CNN), using contextual information with such systems has started to
receive attention in the literature. At the same time, aerial imagery is
gaining momentum. While advances in deep learning have brought good progress in
aerial image analysis, the problem still poses many challenges. Aerial images
are often taken under poor lighting conditions and contain low-resolution
objects, often occluded by trees or taller buildings. In this domain, in
particular, visual context could be of great help, but there are still very few
papers that consider context in aerial image understanding. Here we introduce
context as a complementary way of recognizing objects. We propose a dual-stream
deep neural network model that processes information along two independent
pathways, one for local and another for global visual reasoning. The two are
later combined in the final layers of processing. Our model learns to combine
local object appearance as well as information from the larger scene at the
same time and in a complementary way, such that together they form a powerful
classifier. We test our dual-stream network on the task of segmentation of
buildings and roads in aerial images and obtain state-of-the-art results on the
Massachusetts Buildings Dataset. We also introduce two new datasets, for
buildings and road segmentation, respectively, and study the relative
importance of local appearance vs. the larger scene, as well as their
performance in combination. While our local-global model could also be useful
in general recognition tasks, we clearly demonstrate the effectiveness of
visual context in conjunction with deep nets for aerial image understanding.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2016 07:37:22 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Marcu",
"Alina",
""
],
[
"Leordeanu",
"Marius",
""
]
] |
TITLE: Dual Local-Global Contextual Pathways for Recognition in Aerial Imagery
ABSTRACT: Visual context is important in object recognition, and it is still an open
problem in computer vision. With the advent of deep convolutional neural
networks (CNN), using contextual information with such systems has started to
receive attention in the literature. At the same time, aerial imagery is
gaining momentum. While advances in deep learning have brought good progress in
aerial image analysis, the problem still poses many challenges. Aerial images
are often taken under poor lighting conditions and contain low-resolution
objects, often occluded by trees or taller buildings. In this domain, in
particular, visual context could be of great help, but there are still very few
papers that consider context in aerial image understanding. Here we introduce
context as a complementary way of recognizing objects. We propose a dual-stream
deep neural network model that processes information along two independent
pathways, one for local and another for global visual reasoning. The two are
later combined in the final layers of processing. Our model learns to combine
local object appearance as well as information from the larger scene at the
same time and in a complementary way, such that together they form a powerful
classifier. We test our dual-stream network on the task of segmentation of
buildings and roads in aerial images and obtain state-of-the-art results on the
Massachusetts Buildings Dataset. We also introduce two new datasets, for
buildings and road segmentation, respectively, and study the relative
importance of local appearance vs. the larger scene, as well as their
performance in combination. While our local-global model could also be useful
in general recognition tasks, we clearly demonstrate the effectiveness of
visual context in conjunction with deep nets for aerial image understanding.
|
1605.05466
|
Jeremiah Deng
|
Xianbin Gu, Jeremiah D. Deng, Martin K. Purvis
|
Image segmentation with superpixel-based covariance descriptors in
low-rank representation
|
7 pages, 2 figures, 1 table
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the problem of image segmentation using superpixels.
We propose two approaches to enhance the discriminative ability of the
superpixel's covariance descriptors. In the first one, we employ the
Log-Euclidean distance as the metric on the covariance manifolds, and then use
the RBF kernel to measure the similarities between covariance descriptors. The
second method focuses on extracting the subspace structure of the set of
covariance descriptors by extending a low-rank representation algorithm onto
the covariance manifolds. Experiments are carried out on the Berkeley
Segmentation Dataset; compared with state-of-the-art segmentation algorithms,
both methods are competitive.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2016 07:44:38 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Gu",
"Xianbin",
""
],
[
"Deng",
"Jeremiah D.",
""
],
[
"Purvis",
"Martin K.",
""
]
] |
TITLE: Image segmentation with superpixel-based covariance descriptors in
low-rank representation
ABSTRACT: This paper investigates the problem of image segmentation using superpixels.
We propose two approaches to enhance the discriminative ability of the
superpixel's covariance descriptors. In the first one, we employ the
Log-Euclidean distance as the metric on the covariance manifolds, and then use
the RBF kernel to measure the similarities between covariance descriptors. The
second method focuses on extracting the subspace structure of the set of
covariance descriptors by extending a low-rank representation algorithm onto
the covariance manifolds. Experiments are carried out on the Berkeley
Segmentation Dataset; compared with state-of-the-art segmentation algorithms,
both methods are competitive.
|
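The first approach in the abstract, sketched with SciPy: the Log-Euclidean distance between two covariance descriptors (symmetric positive-definite matrices) fed into an RBF kernel. The kernel width `gamma` is an assumed free parameter.

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_rbf(C1, C2, gamma=1.0):
    """Similarity of two SPD covariance descriptors: Frobenius distance
    between matrix logarithms, passed through an RBF kernel.
    C1, C2 must be symmetric positive definite (logm is then real)."""
    d = np.linalg.norm(logm(C1) - logm(C2), ord="fro")  # Log-Euclidean distance
    return np.exp(-gamma * d ** 2)                      # kernel similarity
```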
1605.05538
|
Alexander Kolesnikov
|
Alexander Kolesnikov and Christoph H. Lampert
|
Improving Weakly-Supervised Object Localization By Micro-Annotation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weakly-supervised object localization methods tend to fail for object classes
that consistently co-occur with the same background elements, e.g. trains on
tracks. We propose a method to overcome these failures by adding a very small
amount of model-specific additional annotation. The main idea is to cluster a
deep network's mid-level representations and assign object or distractor labels
to each cluster. Experiments show substantially improved localization results
on the challenging ILSVRC2014 dataset for bounding box detection and the PASCAL
VOC2012 dataset for semantic segmentation.
|
[
{
"version": "v1",
"created": "Wed, 18 May 2016 12:06:35 GMT"
}
] | 2016-05-19T00:00:00 |
[
[
"Kolesnikov",
"Alexander",
""
],
[
"Lampert",
"Christoph H.",
""
]
] |
TITLE: Improving Weakly-Supervised Object Localization By Micro-Annotation
ABSTRACT: Weakly-supervised object localization methods tend to fail for object classes
that consistently co-occur with the same background elements, e.g. trains on
tracks. We propose a method to overcome these failures by adding a very small
amount of model-specific additional annotation. The main idea is to cluster a
deep network's mid-level representations and assign object or distractor labels
to each cluster. Experiments show substantially improved localization results
on the challenging ILSVRC2014 dataset for bounding box detection and the PASCAL
VOC2012 dataset for semantic segmentation.
|
1603.06127
|
Petr Baudiš
|
Petr Baudiš, Jan Pichl, Tomáš Vyskočil, Jan Šedivý
|
Sentence Pair Scoring: Towards Unified Framework for Text Comprehension
|
submitted as paper to CoNLL 2016
| null | null | null |
cs.CL cs.AI cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
We review the task of Sentence Pair Scoring, popular in the literature in
various forms - viewed as Answer Sentence Selection, Semantic Text Scoring,
Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a
component of Memory Networks.
We argue that all such tasks are similar from the model perspective and
propose new baselines by comparing the performance of common IR metrics and
popular convolutional, recurrent and attention-based neural models across many
Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating
randomized models, propose a statistically grounded methodology, and attempt to
improve comparisons by releasing new datasets that are much harder than some of
the currently used well explored benchmarks. We introduce a unified open source
software framework with easily pluggable models and tasks, which enables us to
experiment with multi-task reusability of a trained sentence model. We set a
new state-of-the-art performance on the Ubuntu Dialogue dataset.
|
[
{
"version": "v1",
"created": "Sat, 19 Mar 2016 18:35:26 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Apr 2016 03:10:26 GMT"
},
{
"version": "v3",
"created": "Fri, 6 May 2016 22:17:36 GMT"
},
{
"version": "v4",
"created": "Tue, 17 May 2016 14:08:38 GMT"
}
] | 2016-05-18T00:00:00 |
[
[
"Baudiš",
"Petr",
""
],
[
"Pichl",
"Jan",
""
],
[
"Vyskočil",
"Tomáš",
""
],
[
"Šedivý",
"Jan",
""
]
] |
TITLE: Sentence Pair Scoring: Towards Unified Framework for Text Comprehension
ABSTRACT: We review the task of Sentence Pair Scoring, popular in the literature in
various forms - viewed as Answer Sentence Selection, Semantic Text Scoring,
Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a
component of Memory Networks.
We argue that all such tasks are similar from the model perspective and
propose new baselines by comparing the performance of common IR metrics and
popular convolutional, recurrent and attention-based neural models across many
Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating
randomized models, propose a statistically grounded methodology, and attempt to
improve comparisons by releasing new datasets that are much harder than some of
the currently used well explored benchmarks. We introduce a unified open source
software framework with easily pluggable models and tasks, which enables us to
experiment with multi-task reusability of a trained sentence model. We set a
new state-of-the-art performance on the Ubuntu Dialogue dataset.
|
1605.01775
|
Andras Rozsa
|
Andras Rozsa, Ethan M. Rudd, and Terrance E. Boult
|
Adversarial Diversity and Hard Positive Generation
|
Accepted to CVPR 2016 DeepVision Workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art deep neural networks suffer from a fundamental problem -
they misclassify adversarial examples formed by applying small perturbations to
inputs. In this paper, we present a new psychometric perceptual adversarial
similarity score (PASS) measure for quantifying adversarial images, introduce
the notion of hard positive generation, and use a diverse set of adversarial
perturbations - not just the closest ones - for data augmentation. We introduce
a novel hot/cold approach for adversarial example generation, which provides
multiple possible adversarial perturbations for every single image. The
perturbations generated by our novel approach often correspond to semantically
meaningful image structures, and allow greater flexibility to scale
perturbation-amplitudes, which yields an increased diversity of adversarial
images. We present adversarial images on several network topologies and
datasets, including LeNet on the MNIST dataset, and GoogLeNet and ResidualNet
on the ImageNet dataset. Finally, we demonstrate on LeNet and GoogLeNet that
fine-tuning with a diverse set of hard positives improves the robustness of
these networks compared to training with prior methods of generating
adversarial images.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2016 22:09:35 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2016 02:46:39 GMT"
}
] | 2016-05-18T00:00:00 |
[
[
"Rozsa",
"Andras",
""
],
[
"Rudd",
"Ethan M.",
""
],
[
"Boult",
"Terrance E.",
""
]
] |
TITLE: Adversarial Diversity and Hard Positive Generation
ABSTRACT: State-of-the-art deep neural networks suffer from a fundamental problem -
they misclassify adversarial examples formed by applying small perturbations to
inputs. In this paper, we present a new psychometric perceptual adversarial
similarity score (PASS) measure for quantifying adversarial images, introduce
the notion of hard positive generation, and use a diverse set of adversarial
perturbations - not just the closest ones - for data augmentation. We introduce
a novel hot/cold approach for adversarial example generation, which provides
multiple possible adversarial perturbations for every single image. The
perturbations generated by our novel approach often correspond to semantically
meaningful image structures, and allow greater flexibility to scale
perturbation-amplitudes, which yields an increased diversity of adversarial
images. We present adversarial images on several network topologies and
datasets, including LeNet on the MNIST dataset, and GoogLeNet and ResidualNet
on the ImageNet dataset. Finally, we demonstrate on LeNet and GoogLeNet that
fine-tuning with a diverse set of hard positives improves the robustness of
these networks compared to training with prior methods of generating
adversarial images.
|
1605.04932
|
Frosti Palsson
|
Magnus O. Ulfarsson, Frosti Palsson, Jakob Sigurdsson and Johannes R.
Sveinsson
|
Classification of Big Data with Application to Imaging Genetics
| null | null | null | null |
physics.data-an cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Big data applications, such as medical imaging and genetics, typically
generate datasets that consist of few observations n on many more variables p,
a scenario that we denote as p>>n. Traditional data processing methods are
often insufficient for extracting information out of big data. This calls for
the development of new algorithms that can deal with the size, complexity, and
the special structure of such datasets. In this paper, we consider the problem
of classifying p>>n data and propose a classification method based on linear
discriminant analysis (LDA). Traditional LDA depends on the covariance estimate
of the data, but when p>>n the sample covariance estimate is singular. The
proposed method estimates the covariance by using a sparse version of noisy
principal component analysis (nPCA). The use of sparsity in this setting aims
at automatically selecting variables that are relevant for classification. In
experiments, the new method is compared to state-of-the-art methods for big
data problems using both simulated datasets and imaging genetics datasets.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2016 20:16:29 GMT"
}
] | 2016-05-18T00:00:00 |
[
[
"Ulfarsson",
"Magnus O.",
""
],
[
"Palsson",
"Frosti",
""
],
[
"Sigurdsson",
"Jakob",
""
],
[
"Sveinsson",
"Johannes R.",
""
]
] |
TITLE: Classification of Big Data with Application to Imaging Genetics
ABSTRACT: Big data applications, such as medical imaging and genetics, typically
generate datasets that consist of few observations n on many more variables p,
a scenario that we denote as p>>n. Traditional data processing methods are
often insufficient for extracting information out of big data. This calls for
the development of new algorithms that can deal with the size, complexity, and
the special structure of such datasets. In this paper, we consider the problem
of classifying p>>n data and propose a classification method based on linear
discriminant analysis (LDA). Traditional LDA depends on the covariance estimate
of the data, but when p>>n the sample covariance estimate is singular. The
proposed method estimates the covariance by using a sparse version of noisy
principal component analysis (nPCA). The use of sparsity in this setting aims
at automatically selecting variables that are relevant for classification. In
experiments, the new method is compared to state-of-the-art methods for big
data problems using both simulated datasets and imaging genetics datasets.
|
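The abstract's starting point is that plain LDA breaks when p>>n because the sample covariance is singular. A standard off-the-shelf remedy, shown here as a sketch with scikit-learn's shrinkage estimator, is to regularize the covariance; note the paper's own estimator is a sparse noisy PCA (nPCA), which this does not implement.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# With p >> n the sample covariance cannot be inverted, so plain LDA fails.
# Ledoit-Wolf shrinkage ('auto') regularizes the covariance estimate instead.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")

# X: (n, p) data matrix with p >> n; y: class labels
# lda.fit(X, y)
# lda.predict(X_new)
```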
1605.04934
|
Fei Han
|
Fei Han, Christopher Reardon, Lynne E. Parker, Hao Zhang
|
Self-Reflective Risk-Aware Artificial Cognitive Modeling for Robot
Response to Human Behaviors
|
40 pages
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order for cooperative robots ("co-robots") to respond to human behaviors
accurately and efficiently in human-robot collaboration, interpretation of
human actions, awareness of new situations, and appropriate decision making are
all crucial abilities for co-robots. For this purpose, the human behaviors
should be interpreted by co-robots in the same manner as human peers. To
address this issue, a novel interpretability indicator is introduced so that
robot actions are appropriate to the current human behaviors. In addition, the
complete consideration of all potential situations of a robot's environment is
nearly impossible in real-world applications, making it difficult for the
co-robot to act appropriately and safely in new scenarios. This is true even
when the pretrained model is highly accurate in a known situation. For
effective and safe teaming with humans, we introduce a new generalizability
indicator that allows a co-robot to self-reflect and reason about when an
observation falls outside the co-robot's learned model. Based on topic modeling
and two novel indicators, we propose a new Self-reflective Risk-aware
Artificial Cognitive (SRAC) model. The co-robots are able to consider action
risks and identify new situations so that better decisions can be made.
Experiments both using real-world datasets and on physical robots suggest that
our SRAC model significantly outperforms the traditional methodology and
enables better decision making in response to human activities.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2016 20:22:30 GMT"
}
] | 2016-05-18T00:00:00 |
[
[
"Han",
"Fei",
""
],
[
"Reardon",
"Christopher",
""
],
[
"Parker",
"Lynne E.",
""
],
[
"Zhang",
"Hao",
""
]
] |
TITLE: Self-Reflective Risk-Aware Artificial Cognitive Modeling for Robot
Response to Human Behaviors
ABSTRACT: In order for cooperative robots ("co-robots") to respond to human behaviors
accurately and efficiently in human-robot collaboration, interpretation of
human actions, awareness of new situations, and appropriate decision making are
all crucial abilities for co-robots. For this purpose, the human behaviors
should be interpreted by co-robots in the same manner as human peers. To
address this issue, a novel interpretability indicator is introduced so that
robot actions are appropriate to the current human behaviors. In addition, the
complete consideration of all potential situations of a robot's environment is
nearly impossible in real-world applications, making it difficult for the
co-robot to act appropriately and safely in new scenarios. This is true even
when the pretrained model is highly accurate in a known situation. For
effective and safe teaming with humans, we introduce a new generalizability
indicator that allows a co-robot to self-reflect and reason about when an
observation falls outside the co-robot's learned model. Based on topic modeling
and two novel indicators, we propose a new Self-reflective Risk-aware
Artificial Cognitive (SRAC) model. The co-robots are able to consider action
risks and identify new situations so that better decisions can be made.
Experiments both using real-world datasets and on physical robots suggest that
our SRAC model significantly outperforms the traditional methodology and
enables better decision making in response to human activities.
|
1605.04986
|
Dennis Wei
|
Dennis Wei
|
A Constant-Factor Bi-Criteria Approximation Guarantee for $k$-means++
|
17 pages, 1 figure
| null | null | null |
cs.LG cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the $k$-means++ algorithm for clustering as well as the
class of $D^\ell$ sampling algorithms to which $k$-means++ belongs. It is shown
that for any constant factor $\beta > 1$, selecting $\beta k$ cluster centers
by $D^\ell$ sampling yields a constant-factor approximation to the optimal
clustering with $k$ centers, in expectation and without conditions on the
dataset. This result extends the previously known $O(\log k)$ guarantee for the
case $\beta = 1$ to the constant-factor bi-criteria regime. It also improves
upon an existing constant-factor bi-criteria result that holds only with
constant probability.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2016 23:41:55 GMT"
}
] | 2016-05-18T00:00:00 |
[
[
"Wei",
"Dennis",
""
]
] |
TITLE: A Constant-Factor Bi-Criteria Approximation Guarantee for $k$-means++
ABSTRACT: This paper studies the $k$-means++ algorithm for clustering as well as the
class of $D^\ell$ sampling algorithms to which $k$-means++ belongs. It is shown
that for any constant factor $\beta > 1$, selecting $\beta k$ cluster centers
by $D^\ell$ sampling yields a constant-factor approximation to the optimal
clustering with $k$ centers, in expectation and without conditions on the
dataset. This result extends the previously known $O(\log k)$ guarantee for the
case $\beta = 1$ to the constant-factor bi-criteria regime. It also improves
upon an existing constant-factor bi-criteria result that holds only with
constant probability.
|
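A sketch of the oversampled D^2 seeding this result covers (the ell = 2 case of D^ell sampling), assuming NumPy; drawing beta * k centers instead of k is exactly the bi-criteria relaxation.

```python
import numpy as np

def d2_sample_centers(X, k, beta=2.0, seed=0):
    """D^2 (k-means++) seeding with oversampling: each new center is drawn
    with probability proportional to the squared distance from a point to
    its nearest already-chosen center; beta * k centers are selected."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]            # first center uniformly at random
    for _ in range(int(beta * k) - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)
```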
1605.04996
|
Zizhao Zhang
|
Zizhao Zhang, Fuyong Xing, Xiaoshuang Shi, Lin Yang
|
SemiContour: A Semi-supervised Learning Approach for Contour Detection
|
Accepted by Computer Vision and Pattern Recognition (CVPR) 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised contour detection methods usually require many labeled training
images to obtain satisfactory performance. However, a large set of annotated
data might be unavailable or extremely labor intensive. In this paper, we
investigate the usage of semi-supervised learning (SSL) to obtain competitive
detection accuracy with very limited training data (three labeled images).
Specifically, we propose a semi-supervised structured ensemble learning
approach for contour detection built on structured random forests (SRF). To
allow SRF to be applicable to unlabeled data, we present an effective sparse
representation approach to capture inherent structure in image patches by
finding a compact and discriminative low-dimensional subspace representation in
an unsupervised manner, enabling the incorporation of abundant unlabeled
patches with their estimated structured labels to help SRF perform better node
splitting. We re-examine the role of sparsity and propose a novel and fast
sparse coding algorithm to boost the overall learning efficiency. To the best
of our knowledge, this is the first attempt to apply SSL for contour detection.
Extensive experiments on the BSDS500 segmentation dataset and the NYU Depth
dataset demonstrate the superiority of the proposed method.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2016 01:33:20 GMT"
}
] | 2016-05-18T00:00:00 |
[
[
"Zhang",
"Zizhao",
""
],
[
"Xing",
"Fuyong",
""
],
[
"Shi",
"Xiaoshuang",
""
],
[
"Yang",
"Lin",
""
]
] |
TITLE: SemiContour: A Semi-supervised Learning Approach for Contour Detection
ABSTRACT: Supervised contour detection methods usually require many labeled training
images to obtain satisfactory performance. However, a large set of annotated
data might be unavailable or extremely labor intensive. In this paper, we
investigate the usage of semi-supervised learning (SSL) to obtain competitive
detection accuracy with very limited training data (three labeled images).
Specifically, we propose a semi-supervised structured ensemble learning
approach for contour detection built on structured random forests (SRF). To
allow SRF to be applicable to unlabeled data, we present an effective sparse
representation approach to capture inherent structure in image patches by
finding a compact and discriminative low-dimensional subspace representation in
an unsupervised manner, enabling the incorporation of abundant unlabeled
patches with their estimated structured labels to help SRF perform better node
splitting. We re-examine the role of sparsity and propose a novel and fast
sparse coding algorithm to boost the overall learning efficiency. To the best
of our knowledge, this is the first attempt to apply SSL for contour detection.
Extensive experiments on the BSDS500 segmentation dataset and the NYU Depth
dataset demonstrate the superiority of the proposed method.
|
1605.05054
|
Minseok Park
|
Minseok Park, Hanxiang Li, Junmo Kim
|
HARRISON: A Benchmark on HAshtag Recommendation for Real-world Images in
Social Networks
| null | null | null | null |
cs.CV cs.IR cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simple, short, and compact hashtags cover a wide range of information on
social networks. Although many works in the field of natural language
processing (NLP) have demonstrated the importance of hashtag recommendation,
hashtag recommendation for images has barely been studied. In this paper, we
introduce the HARRISON dataset, a benchmark on hashtag recommendation for real
world images in social networks. The HARRISON dataset is a realistic dataset,
composed of 57,383 photos from Instagram and an average of 4.5 associated
hashtags for each photo. To evaluate our dataset, we design a baseline
framework consisting of a visual feature extractor based on a convolutional
neural network (CNN) and a multi-label classifier based on a neural network.
Based on this framework, two single-feature models, an object-based and a
scene-based model, and an integrated model of the two are evaluated on the
HARRISON dataset. Our dataset shows that the hashtag recommendation task
requires a wide and contextual
understanding of the situation conveyed in the image. As far as we know, this
work is the first vision-only attempt at hashtag recommendation for real world
images in social networks. We expect this benchmark to accelerate the
advancement of hashtag recommendation.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2016 08:21:07 GMT"
}
] | 2016-05-18T00:00:00 |
[
[
"Park",
"Minseok",
""
],
[
"Li",
"Hanxiang",
""
],
[
"Kim",
"Junmo",
""
]
] |
TITLE: HARRISON: A Benchmark on HAshtag Recommendation for Real-world Images in
Social Networks
ABSTRACT: Simple, short, and compact hashtags cover a wide range of information on
social networks. Although many works in the field of natural language
processing (NLP) have demonstrated the importance of hashtag recommendation,
hashtag recommendation for images has barely been studied. In this paper, we
introduce the HARRISON dataset, a benchmark on hashtag recommendation for real
world images in social networks. The HARRISON dataset is a realistic dataset,
composed of 57,383 photos from Instagram and an average of 4.5 associated
hashtags for each photo. To evaluate our dataset, we design a baseline
framework consisting of a visual feature extractor based on a convolutional
neural network (CNN) and a multi-label classifier based on a neural network.
Based on this framework, two single-feature models, an object-based and a
scene-based model, and an integrated model of the two are evaluated on the
HARRISON dataset. Our dataset shows that the hashtag recommendation task
requires a wide and contextual
understanding of the situation conveyed in the image. As far as we know, this
work is the first vision-only attempt at hashtag recommendation for real world
images in social networks. We expect this benchmark to accelerate the
advancement of hashtag recommendation.
|
1605.05212
|
Youngjune Gwon
|
Youngjune Gwon and William Campbell and Kevin Brady and Douglas Sturim
and Miriam Cha and H.T. Kung
|
Multimodal Sparse Coding for Event Detection
|
Multimodal Machine Learning Workshop at NIPS 2015
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised feature learning methods have proven effective for
classification tasks based on a single modality. We present multimodal sparse
coding for learning feature representations shared across multiple modalities.
The shared representations are applied to multimedia event detection (MED) and
evaluated in comparison to unimodal counterparts, as well as other feature
learning methods such as GMM supervectors and sparse RBM. We report the
cross-validated classification accuracy and mean average precision of the MED
system trained on features learned from our unimodal and multimodal settings
for a subset of the TRECVID MED 2014 dataset.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2016 15:37:19 GMT"
}
] | 2016-05-18T00:00:00 |
[
[
"Gwon",
"Youngjune",
""
],
[
"Campbell",
"William",
""
],
[
"Brady",
"Kevin",
""
],
[
"Sturim",
"Douglas",
""
],
[
"Cha",
"Miriam",
""
],
[
"Kung",
"H. T.",
""
]
] |
TITLE: Multimodal Sparse Coding for Event Detection
ABSTRACT: Unsupervised feature learning methods have proven effective for
classification tasks based on a single modality. We present multimodal sparse
coding for learning feature representations shared across multiple modalities.
The shared representations are applied to multimedia event detection (MED) and
evaluated in comparison to unimodal counterparts, as well as other feature
learning methods such as GMM supervectors and sparse RBM. We report the
cross-validated classification accuracy and mean average precision of the MED
system trained on features learned from our unimodal and multimodal settings
for a subset of the TRECVID MED 2014 dataset.
|
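One plausible reading of the setup, sketched with scikit-learn: learn a single dictionary over concatenated per-modality features so the sparse codes are shared across modalities. The concatenation, atom count, and solver are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def multimodal_sparse_codes(audio_feats, video_feats, n_atoms=64):
    """Shared sparse codes for two modalities via one joint dictionary.

    audio_feats: (n_samples, d_audio); video_feats: (n_samples, d_video)
    """
    joint = np.hstack([audio_feats, video_feats])    # joint feature space
    dl = DictionaryLearning(n_components=n_atoms,
                            transform_algorithm="lasso_lars")
    return dl.fit_transform(joint)                   # (n_samples, n_atoms) codes
```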
1605.05239
|
Benjamin Migliori
|
Benjamin Migliori, Riley Zeller-Townson, Daniel Grady, Daniel Gebhardt
|
Biologically Inspired Radio Signal Feature Extraction with Sparse
Denoising Autoencoders
| null | null | null | null |
stat.ML cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic modulation classification (AMC) is an important task for modern
communication systems; however, it is a challenging problem when signal
features and precise models for generating each modulation may be unknown. We
present a new biologically-inspired AMC method without the need for models or
manually specified features, thus removing the requirement for expert prior
knowledge. We accomplish this task using regularized stacked sparse denoising
autoencoders (SSDAs). Our method selects efficient classification features
directly from raw in-phase/quadrature (I/Q) radio signals in an unsupervised
manner. These features are then used to construct higher-complexity abstract
features which can be used for automatic modulation classification. We
demonstrate this process using a dataset generated with a software defined
radio, consisting of random input bits encoded in 100-sample segments of
various common digital radio modulations. Our results show correct
classification rates of > 99% at 7.5 dB signal-to-noise ratio (SNR) and > 92%
at 0 dB SNR in a 6-way classification test. Our experiments demonstrate a
dramatically new and broadly applicable mechanism for performing AMC and
related tasks without the need for expert-defined or modulation-specific signal
information.
|
[
{
"version": "v1",
"created": "Tue, 17 May 2016 17:03:02 GMT"
}
] | 2016-05-18T00:00:00 |
[
[
"Migliori",
"Benjamin",
""
],
[
"Zeller-Townson",
"Riley",
""
],
[
"Grady",
"Daniel",
""
],
[
"Gebhardt",
"Daniel",
""
]
] |
TITLE: Biologically Inspired Radio Signal Feature Extraction with Sparse
Denoising Autoencoders
ABSTRACT: Automatic modulation classification (AMC) is an important task for modern
communication systems; however, it is a challenging problem when signal
features and precise models for generating each modulation may be unknown. We
present a new biologically-inspired AMC method without the need for models or
manually specified features --- thus removing the requirement for expert prior
knowledge. We accomplish this task using regularized stacked sparse denoising
autoencoders (SSDAs). Our method selects efficient classification features
directly from raw in-phase/quadrature (I/Q) radio signals in an unsupervised
manner. These features are then used to construct higher-complexity abstract
features which can be used for automatic modulation classification. We
demonstrate this process using a dataset generated with a software defined
radio, consisting of random input bits encoded in 100-sample segments of
various common digital radio modulations. Our results show correct
classification rates of > 99% at 7.5 dB signal-to-noise ratio (SNR) and > 92%
at 0 dB SNR in a 6-way classification test. Our experiments demonstrate a
dramatically new and broadly applicable mechanism for performing AMC and
related tasks without the need for expert-defined or modulation-specific signal
information.
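A minimal sketch, assuming PyTorch, of one sparse denoising autoencoder layer of the kind stacked into an SSDA: corrupt the input, reconstruct the clean signal, and add an L1 penalty on hidden activations. Dimensions, noise level, and penalty weight are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_in, d_hidden = 200, 64            # e.g. 100 I/Q sample pairs flattened to 200 reals
enc = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Sigmoid())
dec = nn.Linear(d_hidden, d_in)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

x = torch.randn(512, d_in)          # synthetic stand-in for raw I/Q segments
for step in range(200):
    noisy = x + 0.1 * torch.randn_like(x)                      # denoising corruption
    h = enc(noisy)
    loss = ((dec(h) - x) ** 2).mean() + 1e-3 * h.abs().mean()  # MSE + L1 sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```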
|
1504.08027
|
Prashanti Manda
|
Prashanti Manda, Fiona McCarthy, Bindu Nanduri, Hui Wang, Susan M.
Bridges
|
Information-theoretic Interestingness Measures for Cross-Ontology Data
Mining
| null | null | null | null |
cs.AI cs.CE q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Community annotation of biological entities with concepts from multiple
bio-ontologies has created large and growing repositories of ontology-based
annotation data with embedded implicit relationships among orthogonal
ontologies. Development of efficient data mining methods and metrics to mine
and assess the quality of the mined relationships has not kept pace with the
growth of annotation data. In this study, we present a data mining method that
uses ontology-guided generalization to discover relationships across ontologies
along with a new interestingness metric based on information theory. We apply
our data mining algorithm and interestingness measures to datasets from the
Gene Expression Database at the Mouse Genome Informatics as a preliminary proof
of concept to mine relationships between developmental stages in the mouse
anatomy ontology and Gene Ontology concepts (biological process, molecular
function and cellular component). In addition, we present a comparison of our
interestingness metric to four existing metrics. Ontology-based annotation
datasets provide a valuable resource for discovery of relationships across
ontologies. The use of efficient data mining methods and appropriate
interestingness metrics enables the identification of high quality
relationships.
|
[
{
"version": "v1",
"created": "Wed, 29 Apr 2015 21:15:46 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2016 08:58:17 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Manda",
"Prashanti",
""
],
[
"McCarthy",
"Fiona",
""
],
[
"Nanduri",
"Bindu",
""
],
[
"Wang",
"Hui",
""
],
[
"Bridges",
"Susan M.",
""
]
] |
TITLE: Information-theoretic Interestingness Measures for Cross-Ontology Data
Mining
ABSTRACT: Community annotation of biological entities with concepts from multiple
bio-ontologies has created large and growing repositories of ontology-based
annotation data with embedded implicit relationships among orthogonal
ontologies. Development of efficient data mining methods and metrics to mine
and assess the quality of the mined relationships has not kept pace with the
growth of annotation data. In this study, we present a data mining method that
uses ontology-guided generalization to discover relationships across ontologies
along with a new interestingness metric based on information theory. We apply
our data mining algorithm and interestingness measures to datasets from the
Gene Expression Database at the Mouse Genome Informatics as a preliminary proof
of concept to mine relationships between developmental stages in the mouse
anatomy ontology and Gene Ontology concepts (biological process, molecular
function and cellular component). In addition, we present a comparison of our
interestingness metric to four existing metrics. Ontology-based annotation
datasets provide a valuable resource for discovery of relationships across
ontologies. The use of efficient data mining methods and appropriate
interestingness metrics enables the identification of high quality
relationships.
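The paper's metric is not reproduced here; as one plausible information-theoretic interestingness score over co-annotation counts, a normalized pointwise mutual information sketch on toy (developmental stage, GO term) pairs:

```python
import math
from collections import Counter

# Toy (stage, GO-term) co-annotations; real data would come from MGI/GXD.
pairs = [("TS12", "GO:0001"), ("TS12", "GO:0001"), ("TS12", "GO:0002"),
         ("TS13", "GO:0002"), ("TS13", "GO:0002"), ("TS13", "GO:0001")]

n = len(pairs)
c_xy = Counter(pairs)
c_x = Counter(x for x, _ in pairs)
c_y = Counter(y for _, y in pairs)

def npmi(x, y):
    """Normalized PMI in [-1, 1]: 0 = independent, 1 = perfect co-occurrence."""
    pxy = c_xy[(x, y)] / n
    pmi = math.log(pxy / ((c_x[x] / n) * (c_y[y] / n)))
    return pmi / -math.log(pxy)

print(round(npmi("TS12", "GO:0001"), 3))
```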
|
1512.01752
|
Sujith Ravi
|
Sujith Ravi, Qiming Diao
|
Large Scale Distributed Semi-Supervised Learning Using Streaming
Approximation
|
10 pages
|
Proceedings of the 19th International Conference on Artificial
Intelligence and Statistics (AISTATS), JMLR: W&CP volume 51, pp. 519-528,
2016
| null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional graph-based semi-supervised learning (SSL) approaches, even
though widely applied, are not suited for massive data and large label
scenarios since they scale linearly with the number of edges $|E|$ and distinct
labels $m$. To deal with the large label size problem, recent works propose
sketch-based methods to approximate the distribution on labels per node thereby
achieving a space reduction from $O(m)$ to $O(\log m)$, under certain
conditions. In this paper, we present a novel streaming graph-based SSL
approximation that captures the sparsity of the label distribution and ensures
the algorithm propagates labels accurately, and further reduces the space
complexity per node to $O(1)$. We also provide a distributed version of the
algorithm that scales well to large data sizes. Experiments on real-world
datasets demonstrate that the new method achieves better performance than
existing state-of-the-art algorithms with significant reduction in memory
footprint. We also study different graph construction mechanisms for natural
language applications and propose a robust graph augmentation strategy trained
using state-of-the-art unsupervised deep learning architectures that yields
further significant quality gains.
|
[
{
"version": "v1",
"created": "Sun, 6 Dec 2015 06:58:57 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2016 19:40:37 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Ravi",
"Sujith",
""
],
[
"Diao",
"Qiming",
""
]
] |
TITLE: Large Scale Distributed Semi-Supervised Learning Using Streaming
Approximation
ABSTRACT: Traditional graph-based semi-supervised learning (SSL) approaches, even
though widely applied, are not suited for massive data and large label
scenarios since they scale linearly with the number of edges $|E|$ and distinct
labels $m$. To deal with the large label size problem, recent works propose
sketch-based methods to approximate the distribution on labels per node thereby
achieving a space reduction from $O(m)$ to $O(\log m)$, under certain
conditions. In this paper, we present a novel streaming graph-based SSL
approximation that captures the sparsity of the label distribution and ensures
the algorithm propagates labels accurately, and further reduces the space
complexity per node to $O(1)$. We also provide a distributed version of the
algorithm that scales well to large data sizes. Experiments on real-world
datasets demonstrate that the new method achieves better performance than
existing state-of-the-art algorithms with significant reduction in memory
footprint. We also study different graph construction mechanisms for natural
language applications and propose a robust graph augmentation strategy trained
using state-of-the-art unsupervised deep learning architectures that yields
further significant quality gains.
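A toy sketch of the sparsity idea behind such streaming approximations: synchronous label propagation that truncates each node's label distribution to its top-K entries after every sweep. The graph, seeds, and update rule are simplified placeholders, not the paper's algorithm.

```python
from collections import defaultdict

edges = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 0.5)], 2: [(1, 0.5)]}  # toy graph
seeds = {0: {"sports": 1.0}, 2: {"politics": 1.0}}
K = 2                                    # per-node memory budget

labels = {u: dict(seeds.get(u, {})) for u in edges}
for _ in range(10):                      # propagation sweeps
    new = {}
    for u, nbrs in edges.items():
        acc = defaultdict(float)
        for lab, w in seeds.get(u, {}).items():
            acc[lab] += w                # seed labels stay anchored
        for v, w in nbrs:
            for lab, s in labels[v].items():
                acc[lab] += w * s
        top = sorted(acc.items(), key=lambda kv: -kv[1])[:K]   # sparsify to top-K
        z = sum(s for _, s in top) or 1.0
        new[u] = {lab: s / z for lab, s in top}
    labels = new
print(labels[1])                         # the unlabeled node ends up with a mix
```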
|
1601.00306
|
Yasin Yilmaz
|
Yasin Yilmaz, Alfred Hero
|
Multimodal Event Detection in Twitter Hashtag Networks
| null | null | null | null |
stat.AP cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event detection in a multimodal Twitter dataset is considered. We treat the
hashtags in the dataset as instances with two modes: text and geolocation
features. The text feature consists of a bag-of-words representation. The
geolocation feature consists of geotags (i.e., geographical coordinates) of the
tweets. Fusing the multimodal data we aim to detect, in terms of topic and
geolocation, the interesting events and the associated hashtags. To this end, a
generative latent variable model is assumed, and a generalized
expectation-maximization (EM) algorithm is derived to learn the model
parameters. The proposed method is computationally efficient, and lends itself
to big datasets. Experimental results on a Twitter dataset from August 2014
show the efficacy of the proposed method.
|
[
{
"version": "v1",
"created": "Sun, 3 Jan 2016 15:48:36 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2016 01:59:30 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Yilmaz",
"Yasin",
""
],
[
"Hero",
"Alfred",
""
]
] |
TITLE: Multimodal Event Detection in Twitter Hashtag Networks
ABSTRACT: Event detection in a multimodal Twitter dataset is considered. We treat the
hashtags in the dataset as instances with two modes: text and geolocation
features. The text feature consists of a bag-of-words representation. The
geolocation feature consists of geotags (i.e., geographical coordinates) of the
tweets. Fusing the multimodal data we aim to detect, in terms of topic and
geolocation, the interesting events and the associated hashtags. To this end, a
generative latent variable model is assumed, and a generalized
expectation-maximization (EM) algorithm is derived to learn the model
parameters. The proposed method is computationally efficient, and lends itself
to big datasets. Experimental results on a Twitter dataset from August 2014
show the efficacy of the proposed method.
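A toy EM sketch in the spirit of the model: each instance carries a 2-D geotag (one Gaussian per latent event) and a bag of words (one multinomial per event), and the E-step fuses both modalities in the responsibilities. All data and simplifications (spherical Gaussians, dropped constants) are stand-ins, not the paper's exact generative model.

```python
import numpy as np

rng = np.random.default_rng(1)
K, V = 2, 5
geo = np.vstack([rng.normal([0, 0], 0.3, (30, 2)),
                 rng.normal([5, 5], 0.3, (30, 2))])
words = np.vstack([rng.multinomial(20, [.6, .2, .1, .05, .05], size=30),
                   rng.multinomial(20, [.05, .05, .1, .2, .6], size=30)])

pi = np.full(K, 1.0 / K)
mu = rng.normal(size=(K, 2))
var = np.ones(K)
theta = rng.dirichlet(np.ones(V), K)

for _ in range(50):
    # E-step: unnormalized log-responsibilities fusing geo and text (constants dropped)
    log_r = (np.log(pi)
             - ((geo[:, None, :] - mu) ** 2).sum(-1) / (2 * var) - np.log(var)
             + words @ np.log(theta).T)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: mixing weights, geo Gaussians, word multinomials
    nk = r.sum(0)
    pi = nk / nk.sum()
    mu = (r.T @ geo) / nk[:, None]
    var = np.array([(r[:, k] * ((geo - mu[k]) ** 2).sum(1)).sum() / (2 * nk[k])
                    for k in range(K)])
    theta = (r.T @ words) + 1e-9
    theta /= theta.sum(axis=1, keepdims=True)

print(np.round(mu, 2))   # should recover the two event centers near (0,0) and (5,5)
```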
|
1603.09631
|
Miroslav Vodol\'an
|
Miroslav Vodol\'an, Filip Jur\v{c}\'i\v{c}ek
|
Data Collection for Interactive Learning through the Dialog
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a dataset collected from natural dialogs which makes it
possible to test the ability of dialog systems to learn new facts from user
utterances throughout the dialog. This interactive learning will help with one
of the most prevalent problems of open-domain dialog systems, which is the
sparsity of facts a dialog system can reason about. The proposed dataset,
consisting of 1900 collected dialogs, allows simulating the interactive
acquisition of denotations and question explanations from users, which can be
used for interactive learning.
|
[
{
"version": "v1",
"created": "Thu, 31 Mar 2016 15:13:51 GMT"
},
{
"version": "v2",
"created": "Sun, 15 May 2016 13:03:26 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Vodolán",
"Miroslav",
""
],
[
"Jurčíček",
"Filip",
""
]
] |
TITLE: Data Collection for Interactive Learning through the Dialog
ABSTRACT: This paper presents a dataset collected from natural dialogs which
makes it possible to test the ability of dialog systems to learn new facts from
user utterances throughout the dialog. This interactive learning will help with
one of the most prevalent problems of open-domain dialog systems, which is the
sparsity of facts a dialog system can reason about. The proposed dataset,
consisting of 1900 collected dialogs, allows simulating the interactive
acquisition of denotations and question explanations from users, which can be
used for interactive learning.
|
1604.06242
|
Nomi Vinokurov
|
Nomi Vinokurov and Daphna Weinshall
|
Novelty Detection in MultiClass Scenarios with Incomplete Set of Class
Labels
|
10 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of novelty detection in multiclass scenarios where
some class labels are missing from the training set. Our method is based on the
initial assignment of confidence values, which measure the affinity between a
new test point and each known class. We first compare the values of the two top
elements in this vector of confidence values. At the heart of our method lies
the training of an ensemble of classifiers, each trained to discriminate known
from novel classes based on some partition of the training data into
presumed-known and presumed-novel classes. Our final novelty score is derived
from the output of this ensemble of classifiers. We evaluated our method on two
datasets of images containing a relatively large number of classes - the
Caltech-256 and Cifar-100 datasets. We compared our method to alternative
methods representing commonly used approaches: the one-class SVM,
novelty based on k-NN, novelty based on maximal confidence, and the recent
KNFST method. The results show a very clear and marked advantage for our method
over all alternative methods, in an experimental setup where class labels are
missing during training.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2016 10:18:26 GMT"
},
{
"version": "v2",
"created": "Sun, 15 May 2016 16:44:15 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Vinokurov",
"Nomi",
""
],
[
"Weinshall",
"Daphna",
""
]
] |
TITLE: Novelty Detection in MultiClass Scenarios with Incomplete Set of Class
Labels
ABSTRACT: We address the problem of novelty detection in multiclass scenarios where
some class labels are missing from the training set. Our method is based on the
initial assignment of confidence values, which measure the affinity between a
new test point and each known class. We first compare the values of the two top
elements in this vector of confidence values. At the heart of our method lies
the training of an ensemble of classifiers, each trained to discriminate known
from novel classes based on some partition of the training data into
presumed-known and presumed-novel classes. Our final novelty score is derived
from the output of this ensemble of classifiers. We evaluated our method on two
datasets of images containing a relatively large number of classes - the
Caltech-256 and Cifar-100 datasets. We compared our method to alternative
methods representing commonly used approaches: the one-class SVM,
novelty based on k-NN, novelty based on maximal confidence, and the recent
KNFST method. The results show a very clear and marked advantage for our method
over all alternative methods, in an experimental setup where class labels are
missing during training.
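A sketch of the initial confidence-assignment step only (the ensemble built on top is omitted), assuming an RBF affinity to per-class centroids as the confidence measure; the gap between the two largest affinities then serves as a raw novelty cue that collapses toward zero for points far from every known class.

```python
import numpy as np
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=0)
known = y < 3                                    # pretend class 3 is never seen
centroids = np.array([X[known][y[known] == c].mean(0) for c in range(3)])

d2 = ((X[:, None, :] - centroids) ** 2).sum(-1)  # squared distance to each centroid
conf = np.exp(-d2 / 2.0)                         # confidence per known class
top2 = np.sort(conf, axis=1)[:, -2:]
margin = top2[:, 1] - top2[:, 0]                 # small margin -> candidate novelty

print("mean margin, known:", round(float(margin[known].mean()), 4),
      "novel:", round(float(margin[~known].mean()), 4))
```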
|
1605.04263
|
Guohui Xiao
|
Dag Hovland and Davide Lanti and Martin Rezk and Guohui Xiao
|
OBDA Constraints for Effective Query Answering (Extended Version)
| null | null | null | null |
cs.DB cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In Ontology Based Data Access (OBDA) users pose SPARQL queries over an
ontology that lies on top of relational datasources. These queries are
translated on-the-fly into SQL queries by OBDA systems. Standard SPARQL-to-SQL
translation techniques in OBDA often produce SQL queries containing redundant
joins and unions, even after a number of semantic and structural optimizations.
These redundancies are detrimental to the performance of query answering,
especially in complex industrial OBDA scenarios with large enterprise
databases. To address this issue, we introduce two novel notions of OBDA
constraints and show how to exploit them for efficient query answering. We
conduct an extensive set of experiments on large datasets using real world data
and queries, showing that these techniques strongly improve the performance of
query answering up to orders of magnitude.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2016 17:29:28 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2016 09:21:26 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Hovland",
"Dag",
""
],
[
"Lanti",
"Davide",
""
],
[
"Rezk",
"Martin",
""
],
[
"Xiao",
"Guohui",
""
]
] |
TITLE: OBDA Constraints for Effective Query Answering (Extended Version)
ABSTRACT: In Ontology Based Data Access (OBDA) users pose SPARQL queries over an
ontology that lies on top of relational datasources. These queries are
translated on-the-fly into SQL queries by OBDA systems. Standard SPARQL-to-SQL
translation techniques in OBDA often produce SQL queries containing redundant
joins and unions, even after a number of semantic and structural optimizations.
These redundancies are detrimental to the performance of query answering,
especially in complex industrial OBDA scenarios with large enterprise
databases. To address this issue, we introduce two novel notions of OBDA
constraints and show how to exploit them for efficient query answering. We
conduct an extensive set of experiments on large datasets using real world data
and queries, showing that these techniques strongly improve the performance of
query answering up to orders of magnitude.
|
1605.04465
|
Avradeep Bhowmik
|
Avradeep Bhowmik, Joydeep Ghosh
|
Monotone Retargeting for Unsupervised Rank Aggregation with Object
Features
|
15 pages, 2 figures, 1 table
| null | null | null |
stat.ML cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning the true ordering between objects by aggregating a set of expert
opinion rank order lists is an important and ubiquitous problem in many
applications ranging from social choice theory to natural language processing
and search aggregation. We study the problem of unsupervised rank aggregation
where no ground truth ordering information is available, neither about the true
preference ordering between any set of objects nor about the quality of
individual rank lists. Aggregating the often inconsistent and poor quality rank
lists in such an unsupervised manner is a highly challenging problem, and
standard consensus-based methods are often ill-defined, and difficult to solve.
In this manuscript we propose a novel framework to bypass these issues by using
object attributes to augment the standard rank aggregation framework. We design
algorithms that learn joint models on both rank lists and object features to
obtain an aggregated rank ordering that is more accurate and robust, and also
helps weed out rank lists of dubious validity. We validate our techniques on
synthetic datasets where our algorithm is able to estimate the true rank
ordering even when the rank lists are corrupted. Experiments on three real
datasets, MQ2007, MQ2008 and OHSUMED, show that using object features can
result in significant improvement in performance over existing rank aggregation
methods that do not use object information. Furthermore, when at least some of
the rank lists are of high quality, our methods are able to effectively exploit
their high expertise to output an aggregated rank ordering of great accuracy.
|
[
{
"version": "v1",
"created": "Sat, 14 May 2016 20:35:20 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Bhowmik",
"Avradeep",
""
],
[
"Ghosh",
"Joydeep",
""
]
] |
TITLE: Monotone Retargeting for Unsupervised Rank Aggregation with Object
Features
ABSTRACT: Learning the true ordering between objects by aggregating a set of expert
opinion rank order lists is an important and ubiquitous problem in many
applications ranging from social choice theory to natural language processing
and search aggregation. We study the problem of unsupervised rank aggregation
where no ground truth ordering information is available, neither about the true
preference ordering between any set of objects nor about the quality of
individual rank lists. Aggregating the often inconsistent and poor quality rank
lists in such an unsupervised manner is a highly challenging problem, and
standard consensus-based methods are often ill-defined, and difficult to solve.
In this manuscript we propose a novel framework to bypass these issues by using
object attributes to augment the standard rank aggregation framework. We design
algorithms that learn joint models on both rank lists and object features to
obtain an aggregated rank ordering that is more accurate and robust, and also
helps weed out rank lists of dubious validity. We validate our techniques on
synthetic datasets where our algorithm is able to estimate the true rank
ordering even when the rank lists are corrupted. Experiments on three real
datasets, MQ2007, MQ2008 and OHSUMED, show that using object features can
result in significant improvement in performance over existing rank aggregation
methods that do not use object information. Furthermore, when at least some of
the rank lists are of high quality, our methods are able to effectively exploit
their high expertise to output an aggregated rank ordering of great accuracy.
|
1605.04533
|
Andreea Ioana Sburlea
|
Andreea Ioana Sburlea, Luis Montesano, Javier Minguez
|
Advantages of EEG phase patterns for the detection of gait intention in
healthy and stroke subjects
|
18 pages, 5 figures
| null | null | null |
cs.HC q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One use of EEG-based brain-computer interfaces (BCIs) in rehabilitation is
the detection of movement intention. In this paper we investigate for the first
time the instantaneous phase of movement related cortical potential (MRCP) and
its application to the detection of gait intention. We demonstrate the utility
of MRCP phase in two independent datasets, in which 10 healthy subjects and 9
chronic stroke patients executed a self-initiated gait task in three sessions.
Phase features were compared to more conventional amplitude and power features.
The neurophysiology analysis showed that phase features have higher
signal-to-noise ratio than the other features. Also, BCI detectors of gait
intention based on phase, amplitude, and their combination were evaluated under
three conditions: session specific calibration, intersession transfer, and
intersubject transfer. Results show that the phase based detector is the most
accurate for session specific calibration (movement intention was correctly
detected in 66.5% of trials in healthy subjects, and in 63.3% in stroke
patients). However, in intersession and intersubject transfer, the detector
that combines amplitude and phase features is the most accurate one and the
only that retains its accuracy (62.5% in healthy subjects and 59% in stroke
patients) w.r.t. session specific calibration. Thus, MRCP phase features
improve the detection of gait intention and could be used in practice to remove
time-consuming BCI recalibration.
|
[
{
"version": "v1",
"created": "Sun, 15 May 2016 12:21:33 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Sburlea",
"Andreea Ioana",
""
],
[
"Montesano",
"Luis",
""
],
[
"Minguez",
"Javier",
""
]
] |
TITLE: Advantages of EEG phase patterns for the detection of gait intention in
healthy and stroke subjects
ABSTRACT: One use of EEG-based brain-computer interfaces (BCIs) in rehabilitation is
the detection of movement intention. In this paper we investigate for the first
time the instantaneous phase of movement related cortical potential (MRCP) and
its application to the detection of gait intention. We demonstrate the utility
of MRCP phase in two independent datasets, in which 10 healthy subjects and 9
chronic stroke patients executed a self-initiated gait task in three sessions.
Phase features were compared to more conventional amplitude and power features.
The neurophysiology analysis showed that phase features have higher
signal-to-noise ratio than the other features. Also, BCI detectors of gait
intention based on phase, amplitude, and their combination were evaluated under
three conditions: session specific calibration, intersession transfer, and
intersubject transfer. Results show that the phase based detector is the most
accurate for session specific calibration (movement intention was correctly
detected in 66.5% of trials in healthy subjects, and in 63.3% in stroke
patients). However, in intersession and intersubject transfer, the detector
that combines amplitude and phase features is the most accurate one and the
only that retains its accuracy (62.5% in healthy subjects and 59% in stroke
patients) w.r.t. session specific calibration. Thus, MRCP phase features
improve the detection of gait intention and could be used in practice to remove
time-consuming BCI recalibration.
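For reference, instantaneous phase can be read off the analytic signal via the Hilbert transform; a minimal sketch on a synthetic slow wave (the paper's preprocessing pipeline, frequency band, and sampling rate are not reproduced here):

```python
import numpy as np
from scipy.signal import hilbert

fs = 256.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
slow = np.sin(2 * np.pi * 1.0 * t)           # stand-in for a band-passed MRCP trace
phase = np.angle(hilbert(slow))              # instantaneous phase in [-pi, pi]
print(phase[:3].round(2))
```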
|
1605.04644
|
Xingyan Bin
|
Xingyan Bin, Ying Zhao and Bilong Shen
|
Abnormal Subspace Sparse PCA for Anomaly Detection and Interpretation
|
ODDx3, ACM SIGKDD 2015 Workshop
| null | null | null |
cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The main shortcoming of principal component analysis (PCA) based anomaly
detection models is their lack of interpretability. In this paper, our goal is
to propose an interpretable PCA-based model for anomaly detection and
interpretation. The proposed ASPCA model constructs principal components with
sparse and orthogonal loading vectors to represent the abnormal subspace, and
uses them to interpret detected anomalies. Our experiments on a synthetic
dataset and two real world datasets showed that the proposed ASPCA models
achieved detection accuracies comparable to the PCA model, and can provide
interpretations for individual anomalies.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2016 03:55:31 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Bin",
"Xingyan",
""
],
[
"Zhao",
"Ying",
""
],
[
"Shen",
"Bilong",
""
]
] |
TITLE: Abnormal Subspace Sparse PCA for Anomaly Detection and Interpretation
ABSTRACT: The main shortcoming of principal component analysis (PCA) based
anomaly detection models is their lack of interpretability. In this paper, our
goal is to propose an interpretable PCA-based model for anomaly detection and
interpretation. The proposed ASPCA model constructs principal components with
sparse and orthogonal loading vectors to represent the abnormal subspace, and
uses them to interpret detected anomalies. Our experiments on a synthetic
dataset and two real world datasets showed that the proposed ASPCA models
achieved detection accuracies comparable to the PCA model, and can provide
interpretations for individual anomalies.
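A rough sketch of the idea under simplifying assumptions: approximate the abnormal subspace by the residual left after projecting onto the leading principal components, score records by residual energy, and describe the abnormal directions with sparse loadings. Data, thresholds, and the exact sparse/orthogonal formulation are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 10))                      # basis of the "normal" subspace
X = rng.normal(size=(500, 3)) @ B + 0.1 * rng.normal(size=(500, 10))
X[495:, 7] += 6                                   # anomalies leave the subspace via feature 7

pca = PCA(n_components=3).fit(X)
residual = X - pca.inverse_transform(pca.transform(X))   # abnormal-subspace component

score = (residual ** 2).sum(1)                    # anomaly score per record
print("flagged rows:", sorted(np.argsort(score)[-5:].tolist()))

spca = SparsePCA(n_components=1, alpha=1.0, random_state=0).fit(residual)
print("implicated features:", np.nonzero(spca.components_[0])[0])  # sparse loadings
```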
|
1605.04672
|
Pushpendre Rastogi
|
Pushpendre Rastogi, Benjamin Van Durme
|
A Critical Examination of RESCAL for Completion of Knowledge Bases with
Transitive Relations
|
Four and a half page
| null | null | null |
stat.ML cs.AI cs.DB cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Link prediction in large knowledge graphs has received a lot of attention
recently because of its importance for inferring missing relations and for
completing and improving noisily extracted knowledge graphs. Over the years a
number of machine learning researchers have presented various models for
predicting the presence of missing relations in a knowledge base. Although all
the previous methods are presented with empirical results that show high
performance on select datasets, there is almost no previous work on
understanding the connection between properties of a knowledge base and the
performance of a model. In this paper we analyze the RESCAL method and prove
that it cannot encode asymmetric transitive relations in knowledge bases.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2016 07:43:28 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Rastogi",
"Pushpendre",
""
],
[
"Van Durme",
"Benjamin",
""
]
] |
TITLE: A Critical Examination of RESCAL for Completion of Knowledge Bases with
Transitive Relations
ABSTRACT: Link prediction in large knowledge graphs has received a lot of attention
recently because of its importance for inferring missing relations and for
completing and improving noisily extracted knowledge graphs. Over the years a
number of machine learning researchers have presented various models for
predicting the presence of missing relations in a knowledge base. Although all
the previous methods are presented with empirical results that show high
performance on select datasets, there is almost no previous work on
understanding the connection between properties of a knowledge base and the
performance of a model. In this paper we analyze the RESCAL method and prove
that it cannot encode asymmetric transitive relations in knowledge bases.
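For reference, RESCAL's bilinear scoring function, which the analysis concerns: each entity e gets an embedding a_e, each relation r a matrix R_r, and score(s, r, o) = a_s^T R_r a_o. The snippet just evaluates this model form on random parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, d = 5, 4
A = rng.normal(size=(n_entities, d))      # entity embeddings
R = rng.normal(size=(2, d, d))            # one matrix per relation

def score(s, r, o):
    """RESCAL score for the triple (subject, relation, object)."""
    return A[s] @ R[r] @ A[o]

# Asymmetric in general, since R[r] need not be symmetric:
print(round(score(0, 1, 2), 3), round(score(2, 1, 0), 3))
```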
|
1605.04850
|
Michael Gygli
|
Michael Gygli and Yale Song and Liangliang Cao
|
Video2GIF: Automatic Generation of Animated GIFs from Video
|
Accepted to CVPR 2016
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the novel problem of automatically generating animated GIFs from
video. GIFs are short looping videos with no sound, and a perfect combination
of image and video that really captures our attention. GIFs tell a story,
express emotion, turn events into humorous moments, and are the new wave of
photojournalism. We pose the question: Can we automate the entirely manual and
elaborate process of GIF creation by leveraging the plethora of user generated
GIF content? We propose a Robust Deep RankNet that, given a video, generates a
ranked list of its segments according to their suitability as GIF. We train our
model to learn what visual content is often selected for GIFs by using over
100K user generated GIFs and their corresponding video sources. We effectively
deal with the noisy web data by proposing a novel adaptive Huber loss in the
ranking formulation. We show that our approach is robust to outliers and picks
up several patterns that are frequently present in popular animated GIFs. On
our new large-scale benchmark dataset, we show the advantage of our approach
over several state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 16 May 2016 17:44:31 GMT"
}
] | 2016-05-17T00:00:00 |
[
[
"Gygli",
"Michael",
""
],
[
"Song",
"Yale",
""
],
[
"Cao",
"Liangliang",
""
]
] |
TITLE: Video2GIF: Automatic Generation of Animated GIFs from Video
ABSTRACT: We introduce the novel problem of automatically generating animated GIFs from
video. GIFs are short looping videos with no sound, and a perfect combination
of image and video that really captures our attention. GIFs tell a story,
express emotion, turn events into humorous moments, and are the new wave of
photojournalism. We pose the question: Can we automate the entirely manual and
elaborate process of GIF creation by leveraging the plethora of user generated
GIF content? We propose a Robust Deep RankNet that, given a video, generates a
ranked list of its segments according to their suitability as GIF. We train our
model to learn what visual content is often selected for GIFs by using over
100K user generated GIFs and their corresponding video sources. We effectively
deal with the noisy web data by proposing a novel adaptive Huber loss in the
ranking formulation. We show that our approach is robust to outliers and picks
up several patterns that are frequently present in popular animated GIFs. On
our new large-scale benchmark dataset, we show the advantage of our approach
over several state-of-the-art methods.
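A sketch of a Huber-style pairwise ranking loss of the kind described: a positive (GIF-worthy) segment should outscore a negative by a margin, with a linear tail beyond delta so that noisy web pairs do not dominate the gradient. The paper's adaptive behavior is simplified here to a fixed delta.

```python
import torch

def huber_rank_loss(s_pos, s_neg, margin=1.0, delta=1.5):
    u = torch.clamp(margin - (s_pos - s_neg), min=0.0)   # hinge violation
    quad = 0.5 * u ** 2                                  # quadratic near zero
    lin = delta * u - 0.5 * delta ** 2                   # linear tail for outliers
    return torch.where(u <= delta, quad, lin).mean()

s_pos = torch.tensor([2.0, 0.2, -1.0])   # scores of positive segments
s_neg = torch.tensor([0.5, 0.4, 1.5])    # scores of negatives from the same video
print(float(huber_rank_loss(s_pos, s_neg)))
```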
|
1401.6169
|
Hossein Soleimani
|
Hossein Soleimani, David J. Miller
|
Parsimonious Topic Models with Salient Word Discovery
| null |
IEEE Transaction on Knowledge and Data Engineering, 27 (2015)
824-837
|
10.1109/TKDE.2014.2345378
| null |
cs.LG cs.CL cs.IR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a parsimonious topic model for text corpora. In related models
such as Latent Dirichlet Allocation (LDA), all words are modeled
topic-specifically, even though many words occur with similar frequencies
across different topics. Our modeling determines salient words for each topic,
which have topic-specific probabilities, with the rest explained by a universal
shared model. Further, in LDA all topics are in principle present in every
document. By contrast, our model gives a sparse topic representation, determining
the (small) subset of relevant topics for each document. We derive a Bayesian
Information Criterion (BIC), balancing model complexity and goodness of fit.
Here, interestingly, we identify an effective sample size and corresponding
penalty specific to each parameter type in our model. We minimize BIC to
jointly determine our entire model -- the topic-specific words,
document-specific topics, all model parameter values, {\it and} the total
number of topics -- in a wholly unsupervised fashion. Results on three text
corpora and an image dataset show that our model achieves higher test set
likelihood and better agreement with ground-truth class labels, compared to LDA
and to a model designed to incorporate sparsity.
|
[
{
"version": "v1",
"created": "Wed, 22 Jan 2014 21:47:48 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Sep 2014 20:24:41 GMT"
}
] | 2016-05-16T00:00:00 |
[
[
"Soleimani",
"Hossein",
""
],
[
"Miller",
"David J.",
""
]
] |
TITLE: Parsimonious Topic Models with Salient Word Discovery
ABSTRACT: We propose a parsimonious topic model for text corpora. In related models
such as Latent Dirichlet Allocation (LDA), all words are modeled
topic-specifically, even though many words occur with similar frequencies
across different topics. Our modeling determines salient words for each topic,
which have topic-specific probabilities, with the rest explained by a universal
shared model. Further, in LDA all topics are in principle present in every
document. By contrast our model gives sparse topic representation, determining
the (small) subset of relevant topics for each document. We derive a Bayesian
Information Criterion (BIC), balancing model complexity and goodness of fit.
Here, interestingly, we identify an effective sample size and corresponding
penalty specific to each parameter type in our model. We minimize BIC to
jointly determine our entire model -- the topic-specific words,
document-specific topics, all model parameter values, {\it and} the total
number of topics -- in a wholly unsupervised fashion. Results on three text
corpora and an image dataset show that our model achieves higher test set
likelihood and better agreement with ground-truth class labels, compared to LDA
and to a model designed to incorporate sparsity.
|
1605.02772
|
Sofia Kleisarchaki
|
Sofia Kleisarchaki, Sihem Amer-Yahia, Ahlame Douzal-Chouakria,
Vassilis Christophides
|
Querying Temporal Drifts at Multiple Granularities (Technical Report)
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There exists a large body of work on online drift detection with the goal of
dynamically finding and maintaining changes in data streams. In this paper, we
adopt a query-based approach to drift detection. Our approach relies on {\em a
drift index}, a structure that captures drift at different time granularities
and enables flexible {\em drift queries}. We formalize different drift queries
that represent real-world scenarios and develop query evaluation algorithms
that use different materializations of the drift index as well as strategies
for online index maintenance. We describe a thorough study of the performance
of our algorithms on real-world and synthetic datasets with varying change
rates.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2016 20:38:52 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2016 08:22:47 GMT"
}
] | 2016-05-16T00:00:00 |
[
[
"Kleisarchaki",
"Sofia",
""
],
[
"Amer-Yahia",
"Sihem",
""
],
[
"Douzal-Chouakria",
"Ahlame",
""
],
[
"Christophides",
"Vassilis",
""
]
] |
TITLE: Querying Temporal Drifts at Multiple Granularities (Technical Report)
ABSTRACT: There exists a large body of work on online drift detection with the goal of
dynamically finding and maintaining changes in data streams. In this paper, we
adopt a query-based approach to drift detection. Our approach relies on {\em a
drift index}, a structure that captures drift at different time granularities
and enables flexible {\em drift queries}. We formalize different drift queries
that represent real-world scenarios and develop query evaluation algorithms
that use different materializations of the drift index as well as strategies
for online index maintenance. We describe a thorough study of the performance
of our algorithms on real-world and synthetic datasets with varying change
rates.
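A toy sketch of drift measured at several time granularities, the intuition the drift index builds on; the window sizes and the distance between adjacent windows are placeholders, and the paper's index structure and query semantics are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])  # one change

index = {}                                     # granularity -> per-window drift signal
for w in (25, 50, 100):                        # multiple time granularities
    windows = stream[: len(stream) // w * w].reshape(-1, w)
    drift = np.abs(np.diff(windows.mean(1)))   # distance between adjacent windows
    index[w] = drift
    print(w, "max drift after window", int(drift.argmax()))
```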
|
1605.02971
|
J\"orn-Henrik Jacobsen
|
J\"orn-Henrik Jacobsen, Jan van Gemert, Zhongyu Lou, Arnold W. M.
Smeulders
|
Structured Receptive Fields in CNNs
|
Reason for update: i) Fix Reference for "Deep roto-translation
scattering for object classification" by Oyallon and Mallat. ii) Fixed two
minor typos. iii) Removed implicit assumption in equation (4) where scale is
represented with diffusion time and adapted to rest of paper where scale is
represented with standard deviation, to avoid possible confusion
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning powerful feature representations with CNNs is hard when training
data are limited. Pre-training is one way to overcome this, but it requires
large datasets sufficiently similar to the target domain. Another option is to
design priors into the model, which can range from tuned hyperparameters to
fully engineered representations like Scattering Networks. We combine these
ideas into structured receptive field networks, a model which has a fixed
filter basis and yet retains the flexibility of CNNs. This flexibility is
achieved by expressing receptive fields in CNNs as a weighted sum over a fixed
basis which is similar in spirit to Scattering Networks. The key difference is
that we learn arbitrary effective filter sets from the basis rather than
modeling the filters. This approach explicitly connects classical multiscale
image analysis with general CNNs. With structured receptive field networks, we
improve considerably over unstructured CNNs for small and medium dataset
scenarios as well as over Scattering for large datasets. We validate our
findings on ILSVRC2012, Cifar-10, Cifar-100 and MNIST. As a realistic small
dataset example, we show state-of-the-art classification results on popular 3D
MRI brain-disease datasets where pre-training is difficult due to a lack of
large public datasets in a similar domain.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2016 12:18:03 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2016 11:56:08 GMT"
}
] | 2016-05-16T00:00:00 |
[
[
"Jacobsen",
"Jörn-Henrik",
""
],
[
"van Gemert",
"Jan",
""
],
[
"Lou",
"Zhongyu",
""
],
[
"Smeulders",
"Arnold W. M.",
""
]
] |
TITLE: Structured Receptive Fields in CNNs
ABSTRACT: Learning powerful feature representations with CNNs is hard when training
data are limited. Pre-training is one way to overcome this, but it requires
large datasets sufficiently similar to the target domain. Another option is to
design priors into the model, which can range from tuned hyperparameters to
fully engineered representations like Scattering Networks. We combine these
ideas into structured receptive field networks, a model which has a fixed
filter basis and yet retains the flexibility of CNNs. This flexibility is
achieved by expressing receptive fields in CNNs as a weighted sum over a fixed
basis which is similar in spirit to Scattering Networks. The key difference is
that we learn arbitrary effective filter sets from the basis rather than
modeling the filters. This approach explicitly connects classical multiscale
image analysis with general CNNs. With structured receptive field networks, we
improve considerably over unstructured CNNs for small and medium dataset
scenarios as well as over Scattering for large datasets. We validate our
findings on ILSVRC2012, Cifar-10, Cifar-100 and MNIST. As a realistic small
dataset example, we show state-of-the-art classification results on popular 3D
MRI brain-disease datasets where pre-training is difficult due to a lack of
large public datasets in a similar domain.
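A minimal sketch of the basis idea: filters are weighted sums over a fixed basis of Gaussian-derivative impulse responses, so only the mixing weights would be learned. The basis order, filter size, and sigma are illustrative choices, not the paper's configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_basis(size=7, sigma=1.5, max_order=2):
    """Impulse responses of 2-D Gaussian derivatives up to a total order."""
    impulse = np.zeros((size, size))
    impulse[size // 2, size // 2] = 1.0
    basis = [gaussian_filter(impulse, sigma, order=(oy, ox))
             for ox in range(max_order + 1)
             for oy in range(max_order + 1 - ox)]
    return np.stack(basis)                    # (n_basis, size, size)

B = gaussian_basis()                          # fixed, never trained
alpha = np.random.default_rng(0).normal(size=B.shape[0])   # learned mixing weights
effective_filter = np.tensordot(alpha, B, axes=1)          # what gets convolved
print(B.shape, effective_filter.shape)        # (6, 7, 7) (7, 7)
```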
|
1605.03284
|
Yuezhang Li
|
Tian Tian and Yuezhang Li
|
Machine Comprehension Based on Learning to Rank
|
9 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Machine comprehension plays an essential role in NLP and has been widely
explored with datasets like MCTest. However, this dataset is too simple and too
small for learning true reasoning abilities. \cite{hermann2015teaching}
therefore release a large-scale news article dataset and propose a deep LSTM
reader system for machine comprehension. However, the training process is
expensive. We therefore try a feature-engineered approach with semantics on the
new dataset to see how traditional machine learning techniques and semantics can
help with machine comprehension. Meanwhile, our proposed L2R reader system
achieves good performance with efficiency and less training data.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2016 05:05:05 GMT"
},
{
"version": "v2",
"created": "Fri, 13 May 2016 01:06:09 GMT"
}
] | 2016-05-16T00:00:00 |
[
[
"Tian",
"Tian",
""
],
[
"Li",
"Yuezhang",
""
]
] |
TITLE: Machine Comprehension Based on Learning to Rank
ABSTRACT: Machine comprehension plays an essential role in NLP and has been widely
explored with datasets like MCTest. However, this dataset is too simple and too
small for learning true reasoning abilities. \cite{hermann2015teaching}
therefore release a large-scale news article dataset and propose a deep LSTM
reader system for machine comprehension. However, the training process is
expensive. We therefore try a feature-engineered approach with semantics on the
new dataset to see how traditional machine learning techniques and semantics can
help with machine comprehension. Meanwhile, our proposed L2R reader system
achieves good performance with efficiency and less training data.
|
1605.04002
|
Paul Tupper
|
Paul Tupper and Bobak Shahriari
|
Which Learning Algorithms Can Generalize Identity-Based Rules to Novel
Inputs?
|
6 pages, accepted abstract at COGSCI 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel framework for the analysis of learning algorithms that
allows us to say when such algorithms can and cannot generalize certain
patterns from training data to test data. In particular we focus on situations
where the rule that must be learned concerns two components of a stimulus being
identical. We call such a basis for discrimination an identity-based rule.
Identity-based rules have proven to be difficult or impossible for certain
types of learning algorithms to acquire from limited datasets. This is in
contrast to human behaviour on similar tasks. Here we provide a framework for
rigorously establishing which learning algorithms will fail at generalizing
identity-based rules to novel stimuli. We use this framework to show that such
algorithms are unable to generalize identity-based rules to novel inputs unless
trained on virtually all possible inputs. We demonstrate these results
computationally with a multilayer feedforward neural network.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2016 22:42:48 GMT"
}
] | 2016-05-16T00:00:00 |
[
[
"Tupper",
"Paul",
""
],
[
"Shahriari",
"Bobak",
""
]
] |
TITLE: Which Learning Algorithms Can Generalize Identity-Based Rules to Novel
Inputs?
ABSTRACT: We propose a novel framework for the analysis of learning algorithms that
allows us to say when such algorithms can and cannot generalize certain
patterns from training data to test data. In particular we focus on situations
where the rule that must be learned concerns two components of a stimulus being
identical. We call such a basis for discrimination an identity-based rule.
Identity-based rules have proven to be difficult or impossible for certain
types of learning algorithms to acquire from limited datasets. This is in
contrast to human behaviour on similar tasks. Here we provide a framework for
rigorously establishing which learning algorithms will fail at generalizing
identity-based rules to novel stimuli. We use this framework to show that such
algorithms are unable to generalize identity-based rules to novel inputs unless
trained on virtually all possible inputs. We demonstrate these results
computationally with a multilayer feedforward neural network.
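A tiny, hypothetical demonstration in the spirit of the argument: a feedforward network is trained on an identity rule over one-hot symbol pairs with one symbol withheld. Because the held-out symbol's input units never receive a training signal, generalization of the rule to it is not expected.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

n_sym = 5
def encode(a, b):
    v = np.zeros(2 * n_sym)
    v[a], v[n_sym + b] = 1.0, 1.0            # one-hot pair (a, b)
    return v

train = [(a, b) for a in range(n_sym - 1) for b in range(n_sym - 1)]  # symbol 4 withheld
X = np.array([encode(a, b) for a, b in train])
y = np.array([int(a == b) for a, b in train])          # identity-based rule
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=5000, random_state=0).fit(X, y)

held_out = [(4, 4), (4, 1), (2, 4)]                    # true labels: 1, 0, 0
print(net.predict(np.array([encode(a, b) for a, b in held_out])))
```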
|
1605.04034
|
Joey Tianyi Zhou Dr
|
Joey Tianyi Zhou, Xinxing Xu, Sinno Jialin Pan, Ivor W. Tsang, Zheng
Qin and Rick Siow Mong Goh
|
Transfer Hashing with Privileged Information
|
Accepted by IJCAI-2016
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing learning to hash methods assume that there are sufficient data,
either labeled or unlabeled, on the domain of interest (i.e., the target
domain) for training. However, this assumption cannot be satisfied in some
real-world applications. To address this data sparsity issue in hashing,
inspired by transfer learning, we propose a new framework named Transfer
Hashing with Privileged Information (THPI). Specifically, we extend the
standard learning to hash method, Iterative Quantization (ITQ), in a transfer
learning manner, namely ITQ+. In ITQ+, a new slack function is learned from
auxiliary data to approximate the quantization error in ITQ. We develop an
alternating optimization approach to solve the resultant optimization problem
for ITQ+. We further extend ITQ+ to LapITQ+ by utilizing the geometry structure
among the auxiliary data for learning more precise binary codes in the target
domain. Extensive experiments on several benchmark datasets verify the
effectiveness of our proposed approaches through comparisons with several
state-of-the-art baselines.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2016 02:49:43 GMT"
}
] | 2016-05-16T00:00:00 |
[
[
"Zhou",
"Joey Tianyi",
""
],
[
"Xu",
"Xinxing",
""
],
[
"Pan",
"Sinno Jialin",
""
],
[
"Tsang",
"Ivor W.",
""
],
[
"Qin",
"Zheng",
""
],
[
"Goh",
"Rick Siow Mong",
""
]
] |
TITLE: Transfer Hashing with Privileged Information
ABSTRACT: Most existing learning to hash methods assume that there are sufficient data,
either labeled or unlabeled, on the domain of interest (i.e., the target
domain) for training. However, this assumption cannot be satisfied in some
real-world applications. To address this data sparsity issue in hashing,
inspired by transfer learning, we propose a new framework named Transfer
Hashing with Privileged Information (THPI). Specifically, we extend the
standard learning to hash method, Iterative Quantization (ITQ), in a transfer
learning manner, namely ITQ+. In ITQ+, a new slack function is learned from
auxiliary data to approximate the quantization error in ITQ. We develop an
alternating optimization approach to solve the resultant optimization problem
for ITQ+. We further extend ITQ+ to LapITQ+ by utilizing the geometry structure
among the auxiliary data for learning more precise binary codes in the target
domain. Extensive experiments on several benchmark datasets verify the
effectiveness of our proposed approaches through comparisons with several
state-of-the-art baselines.
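For context, a sketch of plain ITQ, the base method being extended: PCA-project the data, then alternate between binarizing and solving an orthogonal Procrustes problem to rotate the projection, minimizing the quantization error ||B - VR||_F. The ITQ+ slack function learned from auxiliary data is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
X -= X.mean(0)

bits = 8
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V = X @ Vt[:bits].T                          # PCA projection to `bits` dimensions

R = np.linalg.qr(rng.normal(size=(bits, bits)))[0]      # random orthogonal init
for _ in range(50):
    Bc = np.sign(V @ R)                      # fix R, update binary codes
    P, _, Qt = np.linalg.svd(V.T @ Bc)       # fix codes, Procrustes rotation
    R = P @ Qt
codes = (V @ R > 0).astype(np.uint8)
print(codes.shape, round(float(np.linalg.norm(np.sign(V @ R) - V @ R)), 2))
```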
|
1605.04068
|
Falong Shen
|
Falong Shen and Gang Zeng
|
Fast Semantic Image Segmentation with High Order Context and Guided
Filtering
|
14 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes a fast and accurate semantic image segmentation approach
that encodes not only the discriminative features from deep neural networks,
but also the high-order context compatibility among adjacent objects as well as
low level image features. We formulate the underlying problem as the
conditional random field that embeds local feature extraction, clique potential
construction, and guided filtering within the same framework, and provide an
efficient coarse-to-fine solver. At the coarse level, we combine local feature
representation and context interaction using a deep convolutional network, and
directly learn the interaction from high order cliques with a message passing
routine, avoiding time-consuming explicit graph inference for joint probability
distribution. At the fine level, we introduce a guided filtering interpretation
for the mean field algorithm, and achieve accurate object boundaries 100+ times
faster than classic learning methods. The two parts are connected and jointly
trained in an end-to-end fashion. Experimental results on Pascal VOC 2012
dataset have shown that the proposed algorithm outperforms the
state-of-the-art, and that it achieves the rank 1 performance at the time of
submission, both of which prove the effectiveness of this unified framework for
semantic image segmentation.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2016 07:21:37 GMT"
}
] | 2016-05-16T00:00:00 |
[
[
"Shen",
"Falong",
""
],
[
"Zeng",
"Gang",
""
]
] |
TITLE: Fast Semantic Image Segmentation with High Order Context and Guided
Filtering
ABSTRACT: This paper describes a fast and accurate semantic image segmentation approach
that encodes not only the discriminative features from deep neural networks,
but also the high-order context compatibility among adjacent objects as well as
low level image features. We formulate the underlying problem as the
conditional random field that embeds local feature extraction, clique potential
construction, and guided filtering within the same framework, and provide an
efficient coarse-to-fine solver. At the coarse level, we combine local feature
representation and context interaction using a deep convolutional network, and
directly learn the interaction from high order cliques with a message passing
routine, avoiding time-consuming explicit graph inference for joint probability
distribution. At the fine level, we introduce a guided filtering interpretation
for the mean field algorithm, and achieve accurate object boundaries 100+ times
faster than classic learning methods. The two parts are connected and jointly
trained in an end-to-end fashion. Experimental results on Pascal VOC 2012
dataset have shown that the proposed algorithm outperforms the
state-of-the-art, and that it achieves the rank 1 performance at the time of
submission, both of which prove the effectiveness of this unified framework for
semantic image segmentation.
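For reference, a minimal gray-scale guided filter (in the style of He et al.) of the kind the interpretation builds on: per-window linear coefficients a and b are estimated and then averaged. Window radius and epsilon are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=4, eps=1e-3):
    """Smooth p while respecting edges of the guidance image I."""
    mean = lambda x: uniform_filter(x, 2 * r + 1)
    mI, mp = mean(I), mean(p)
    cov_Ip = mean(I * p) - mI * mp
    var_I = mean(I * I) - mI * mI
    a = cov_Ip / (var_I + eps)               # local linear model p ~ a*I + b
    b = mp - a * mI
    return mean(a) * I + mean(b)

rng = np.random.default_rng(0)
I = rng.random((64, 64))                     # guidance image
p = (I > 0.5).astype(float)                  # coarse label map to refine
print(guided_filter(I, p).shape)
```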
|
1605.04192
|
Symeon Chouvardas
|
Symeon Chouvardas, Mohammed Amin Abdullah, Lucas Claude, Moez Draief
|
Robust On-line Matrix Completion on Graphs
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study online robust matrix completion on graphs. At each iteration a
vector with some entries missing is revealed and our goal is to reconstruct it
by identifying the underlying low-dimensional subspace from which the vectors
are drawn. We assume there is an underlying graph structure to the data, that
is, the components of each vector correspond to nodes of a certain (known)
graph, and their values are related accordingly. We give algorithms that
exploit the graph to reconstruct the incomplete data, even in the presence of
outlier noise. The theoretical properties of the algorithms are studied and
numerical experiments using both synthetic and real world datasets verify the
improved performance of the proposed technique compared to other state of the
art algorithms.
|
[
{
"version": "v1",
"created": "Fri, 13 May 2016 14:29:08 GMT"
}
] | 2016-05-16T00:00:00 |
[
[
"Chouvardas",
"Symeon",
""
],
[
"Abdullah",
"Mohammed Amin",
""
],
[
"Claude",
"Lucas",
""
],
[
"Draief",
"Moez",
""
]
] |
TITLE: Robust On-line Matrix Completion on Graphs
ABSTRACT: We study online robust matrix completion on graphs. At each iteration a
vector with some entries missing is revealed and our goal is to reconstruct it
by identifying the underlying low-dimensional subspace from which the vectors
are drawn. We assume there is an underlying graph structure to the data, that
is, the components of each vector correspond to nodes of a certain (known)
graph, and their values are related accordingly. We give algorithms that
exploit the graph to reconstruct the incomplete data, even in the presence of
outlier noise. The theoretical properties of the algorithms are studied and
numerical experiments using both synthetic and real world datasets verify the
improved performance of the proposed technique compared to other state of the
art algorithms.
|
1504.01891
|
Marios Meimaris
|
Marios Meimaris, George Papastefanatos, Stratis Viglas, Yannis
Stavrakas, Christos Pateritsas and Ioannis Anagnostopoulos
|
A Query Language for Multi-version Data Web Archives
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Data Web refers to the vast and rapidly increasing quantity of
scientific, corporate, government and crowd-sourced data published in the form
of Linked Open Data, which encourages the uniform representation of
heterogeneous data items on the web and the creation of links between them. The
growing availability of open linked datasets has brought forth significant new
challenges regarding their proper preservation and the management of evolving
information within them. In this paper, we focus on the evolution and
preservation challenges related to publishing and preserving evolving linked
data across time. We discuss the main problems regarding their proper modelling
and querying and provide a conceptual model and a query language for modelling
and retrieving evolving data along with changes affecting them. We present in
detail the syntax of the query language and demonstrate its functionality over
a real-world use case of an evolving linked dataset from the biological domain.
|
[
{
"version": "v1",
"created": "Wed, 8 Apr 2015 09:53:52 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Sep 2015 14:38:17 GMT"
},
{
"version": "v3",
"created": "Thu, 12 May 2016 16:00:10 GMT"
}
] | 2016-05-13T00:00:00 |
[
[
"Meimaris",
"Marios",
""
],
[
"Papastefanatos",
"George",
""
],
[
"Viglas",
"Stratis",
""
],
[
"Stavrakas",
"Yannis",
""
],
[
"Pateritsas",
"Christos",
""
],
[
"Anagnostopoulos",
"Ioannis",
""
]
] |
TITLE: A Query Language for Multi-version Data Web Archives
ABSTRACT: The Data Web refers to the vast and rapidly increasing quantity of
scientific, corporate, government and crowd-sourced data published in the form
of Linked Open Data, which encourages the uniform representation of
heterogeneous data items on the web and the creation of links between them. The
growing availability of open linked datasets has brought forth significant new
challenges regarding their proper preservation and the management of evolving
information within them. In this paper, we focus on the evolution and
preservation challenges related to publishing and preserving evolving linked
data across time. We discuss the main problems regarding their proper modelling
and querying and provide a conceptual model and a query language for modelling
and retrieving evolving data along with changes affecting them. We present in
detail the syntax of the query language and demonstrate its functionality over
a real-world use case of an evolving linked dataset from the biological domain.
|
1602.06541
|
Martin Thoma
|
Martin Thoma
|
A Survey of Semantic Segmentation
|
Fixed typo in accuracy metrics formula; added value range of accuracy
metrics; consistent naming of variables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This survey gives an overview of different techniques used for pixel-level
semantic segmentation. Metrics and datasets for the evaluation of segmentation
algorithms and traditional approaches for segmentation such as unsupervised
methods, Decision Forests and SVMs are described and pointers to the relevant
papers are given. Recently published approaches with convolutional neural
networks are mentioned and typical problematic situations for segmentation
algorithms are examined. A taxonomy of segmentation algorithms is given.
|
[
{
"version": "v1",
"created": "Sun, 21 Feb 2016 15:28:04 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2016 21:57:48 GMT"
}
] | 2016-05-13T00:00:00 |
[
[
"Thoma",
"Martin",
""
]
] |
TITLE: A Survey of Semantic Segmentation
ABSTRACT: This survey gives an overview of different techniques used for pixel-level
semantic segmentation. Metrics and datasets for the evaluation of segmentation
algorithms and traditional approaches for segmentation such as unsupervised
methods, Decision Forests and SVMs are described and pointers to the relevant
papers are given. Recently published approaches with convolutional neural
networks are mentioned and typical problematic situations for segmentation
algorithms are examined. A taxonomy of segmentation algorithms is given.
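As one example of the evaluation metrics such surveys cover, mean intersection-over-union computed from a pixel-level confusion matrix (toy arrays stand in for a prediction and its ground truth):

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (gt.ravel(), pred.ravel()), 1)   # confusion matrix
    inter = np.diag(cm)
    union = cm.sum(0) + cm.sum(1) - inter
    return float(np.mean(inter / np.maximum(union, 1)))

gt = np.array([[0, 0, 1], [1, 2, 2]])
pred = np.array([[0, 1, 1], [1, 2, 0]])
print(round(mean_iou(pred, gt, 3), 3))             # 0.5 on this toy example
```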
|
1605.01539
|
Szymon Grabowski
|
Szymon Grabowski, Marcin Raniszewski
|
Rank and select: Another lesson learned
|
Compared to v1: slightly optimized rank implementations
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rank and select queries on bitmaps are essential building blocks of many
compressed data structures, including text indexes, membership and range
supporting spatial data structures, compressed graphs, and more. Theoretically
considered as early as the 1980s, these primitives have also been a subject of vivid
research concerning their practical incarnations in the last decade. We present
a few novel rank/select variants, focusing mostly on speed, obtaining
competitive space-time results in the compressed setting. Our findings can be
summarized as follows: $(i)$ no single rank/select solution works best on any
kind of data (ours are optimized for concatenated bit arrays obtained from
wavelet trees for real text datasets), $(ii)$ it pays to efficiently handle
blocks consisting of all 0 or all 1 bits, $(iii)$ compressed select does not
have to be significantly slower than compressed rank at a comparable memory
use.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2016 09:39:59 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2016 16:01:30 GMT"
}
] | 2016-05-13T00:00:00 |
[
[
"Grabowski",
"Szymon",
""
],
[
"Raniszewski",
"Marcin",
""
]
] |
TITLE: Rank and select: Another lesson learned
ABSTRACT: Rank and select queries on bitmaps are essential building bricks of many
compressed data structures, including text indexes, membership and range
supporting spatial data structures, compressed graphs, and more. Theoretically
considered as early as the 1980s, these primitives have also been a subject of vivid
research concerning their practical incarnations in the last decade. We present
a few novel rank/select variants, focusing mostly on speed, obtaining
competitive space-time results in the compressed setting. Our findings can be
summarized as follows: $(i)$ no single rank/select solution works best on any
kind of data (ours are optimized for concatenated bit arrays obtained from
wavelet trees for real text datasets), $(ii)$ it pays to efficiently handle
blocks consisting of all 0 or all 1 bits, $(iii)$ compressed select does not
have to be significantly slower than compressed rank at a comparable memory
use.
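For readers new to these primitives, the following is a minimal, deliberately unoptimized Python sketch of rank and select over a bitmap stored as 64-bit words (LSB-first), using one precomputed cumulative popcount per word; the paper's variants add compression and cache-conscious layouts on top of this basic scheme:

    import bisect

    def build_rank_index(words):
        # cum[q] = number of 1 bits in words[0..q-1]
        cum, total = [], 0
        for w in words:
            cum.append(total)
            total += bin(w).count("1")
        return cum

    def rank1(words, cum, i):
        # number of 1 bits in positions [0, i); assumes i < 64 * len(words)
        q, r = divmod(i, 64)
        return cum[q] + bin(words[q] & ((1 << r) - 1)).count("1")

    def select1(words, cum, k):
        # position of the (k+1)-th 1 bit; assumes k < total number of 1 bits
        q = bisect.bisect_right(cum, k) - 1
        r, w, pos = k - cum[q], words[q], 0
        while True:
            if w & 1:
                if r == 0:
                    return 64 * q + pos
                r -= 1
            w >>= 1
            pos += 1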
|
1605.03688
|
Minghuang Ma
|
Minghuang Ma, Haoqi Fan, Kris M. Kitani
|
Going Deeper into First-Person Activity Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We bring together ideas from recent work on feature design for egocentric
action recognition under one framework by exploring the use of deep
convolutional neural networks (CNN). Recent work has shown that features such
as hand appearance, object attributes, local hand motion and camera ego-motion
are important for characterizing first-person actions. To integrate these ideas
under one framework, we propose a twin stream network architecture, where one
stream analyzes appearance information and the other stream analyzes motion
information. Our appearance stream encodes prior knowledge of the egocentric
paradigm by explicitly training the network to segment hands and localize
objects. By visualizing certain neuron activation of our network, we show that
our proposed architecture naturally learns features that capture object
attributes and hand-object configurations. Our extensive experiments on
benchmark egocentric action datasets show that our deep architecture enables
recognition rates that significantly outperform state-of-the-art techniques --
an average $6.6\%$ increase in accuracy over all datasets. Furthermore, by
learning to recognize objects, actions and activities jointly, the performance
of individual recognition tasks also increases by $30\%$ (actions) and $14\%$
(objects). We also include the results of an extensive ablative analysis to
highlight the importance of network design decisions.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2016 05:59:50 GMT"
}
] | 2016-05-13T00:00:00 |
[
[
"Ma",
"Minghuang",
""
],
[
"Fan",
"Haoqi",
""
],
[
"Kitani",
"Kris M.",
""
]
] |
TITLE: Going Deeper into First-Person Activity Recognition
ABSTRACT: We bring together ideas from recent work on feature design for egocentric
action recognition under one framework by exploring the use of deep
convolutional neural networks (CNN). Recent work has shown that features such
as hand appearance, object attributes, local hand motion and camera ego-motion
are important for characterizing first-person actions. To integrate these ideas
under one framework, we propose a twin stream network architecture, where one
stream analyzes appearance information and the other stream analyzes motion
information. Our appearance stream encodes prior knowledge of the egocentric
paradigm by explicitly training the network to segment hands and localize
objects. By visualizing certain neuron activation of our network, we show that
our proposed architecture naturally learns features that capture object
attributes and hand-object configurations. Our extensive experiments on
benchmark egocentric action datasets show that our deep architecture enables
recognition rates that significantly outperform state-of-the-art techniques --
an average $6.6\%$ increase in accuracy over all datasets. Furthermore, by
learning to recognize objects, actions and activities jointly, the performance
of individual recognition tasks also increases by $30\%$ (actions) and $14\%$
(objects). We also include the results of an extensive ablative analysis to
highlight the importance of network design decisions.
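A minimal PyTorch sketch of the twin-stream fusion idea follows. The layer sizes and the ten-channel stacked-optical-flow input are illustrative assumptions, and the paper's auxiliary training of the appearance stream for hand segmentation and object localization is omitted:

    import torch
    import torch.nn as nn

    class TwinStream(nn.Module):
        def __init__(self, num_actions):
            super().__init__()
            self.appearance = nn.Sequential(   # operates on RGB frames
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.motion = nn.Sequential(       # operates on stacked optical flow
                nn.Conv2d(10, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.classifier = nn.Linear(64 + 64, num_actions)

        def forward(self, rgb, flow):
            # late fusion by concatenating the two stream embeddings
            fused = torch.cat([self.appearance(rgb), self.motion(flow)], dim=1)
            return self.classifier(fused)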
|
1605.03705
|
Marcus Rohrbach
|
Anna Rohrbach, Atousa Torabi, Marcus Rohrbach, Niket Tandon,
Christopher Pal, Hugo Larochelle, Aaron Courville, Bernt Schiele
|
Movie Description
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Audio Description (AD) provides linguistic descriptions of movies and allows
visually impaired people to follow a movie along with their peers. Such
descriptions are by design mainly visual and thus naturally form an interesting
data source for computer vision and computational linguistics. In this work we
propose a novel dataset which contains transcribed ADs, which are temporally
aligned to full-length movies. In addition, we also collected and aligned movie
scripts used in prior work and compare the two sources of descriptions. In
total the Large Scale Movie Description Challenge (LSMDC) contains a parallel
corpus of 118,114 sentences and video clips from 202 movies. First we
characterize the dataset by benchmarking different approaches for generating
video descriptions. Comparing ADs to scripts, we find that ADs are indeed more
visual and describe precisely what is shown rather than what should happen
according to the scripts created prior to movie production. Furthermore, we
present and compare the results of several teams who participated in a
challenge organized in the context of the workshop "Describing and
Understanding Video & The Large Scale Movie Description Challenge (LSMDC)", at
ICCV 2015.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2016 07:34:08 GMT"
}
] | 2016-05-13T00:00:00 |
[
[
"Rohrbach",
"Anna",
""
],
[
"Torabi",
"Atousa",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Tandon",
"Niket",
""
],
[
"Pal",
"Christopher",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Courville",
"Aaron",
""
],
[
"Schiele",
"Bernt",
""
]
] |
TITLE: Movie Description
ABSTRACT: Audio Description (AD) provides linguistic descriptions of movies and allows
visually impaired people to follow a movie along with their peers. Such
descriptions are by design mainly visual and thus naturally form an interesting
data source for computer vision and computational linguistics. In this work we
propose a novel dataset which contains transcribed ADs, which are temporally
aligned to full-length movies. In addition, we also collected and aligned movie
scripts used in prior work and compare the two sources of descriptions. In
total the Large Scale Movie Description Challenge (LSMDC) contains a parallel
corpus of 118,114 sentences and video clips from 202 movies. First we
characterize the dataset by benchmarking different approaches for generating
video descriptions. Comparing ADs to scripts, we find that ADs are indeed more
visual and describe precisely what is shown rather than what should happen
according to the scripts created prior to movie production. Furthermore, we
present and compare the results of several teams who participated in a
challenge organized in the context of the workshop "Describing and
Understanding Video & The Large Scale Movie Description Challenge (LSMDC)", at
ICCV 2015.
|
1605.03746
|
Stefano Rosa
|
Giorgio Toscana, Stefano Rosa
|
Fast Graph-Based Object Segmentation for RGB-D Images
| null | null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object segmentation is an important capability for robotic systems, in
particular for grasping. We present a graph-based approach for the
segmentation of simple objects from RGB-D images. We are interested in
segmenting objects with a large variety in appearance, from lack of texture to
strong textures, for the task of robotic grasping. The algorithm does not rely
on image features or machine learning. We propose a modified Canny edge
detector for extracting robust edges by using depth information and two simple
cost functions for combining color and depth cues. The cost functions are used
to build an undirected graph, which is partitioned using the concept of
internal and external differences between graph regions. The partitioning is
fast, with O(N log N) complexity. We also discuss ways to deal with missing depth
information. We test the approach on different publicly available RGB-D object
datasets, such as the Rutgers APC RGB-D dataset and the RGB-D Object Dataset,
and compare the results with other existing methods.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2016 10:29:14 GMT"
}
] | 2016-05-13T00:00:00 |
[
[
"Toscana",
"Giorgio",
""
],
[
"Rosa",
"Stefano",
""
]
] |
TITLE: Fast Graph-Based Object Segmentation for RGB-D Images
ABSTRACT: Object segmentation is an important capability for robotic systems, in
particular for grasping. We present a graph-based approach for the
segmentation of simple objects from RGB-D images. We are interested in
segmenting objects with a large variety in appearance, from lack of texture to
strong textures, for the task of robotic grasping. The algorithm does not rely
on image features or machine learning. We propose a modified Canny edge
detector for extracting robust edges by using depth information and two simple
cost functions for combining color and depth cues. The cost functions are used
to build an undirected graph, which is partitioned using the concept of
internal and external differences between graph regions. The partitioning is
fast, with O(N log N) complexity. We also discuss ways to deal with missing depth
information. We test the approach on different publicly available RGB-D object
datasets, such as the Rutgers APC RGB-D dataset and the RGB-D Object Dataset,
and compare the results with other existing methods.
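The internal/external-difference partitioning mentioned above follows the classic Felzenszwalb-Huttenlocher scheme; a compact union-find sketch is shown below. Edge weights would come from the paper's combined color and depth cost functions, which are not reproduced here, and k is the usual scale parameter:

    def segment(num_nodes, edges, k):
        # edges: iterable of (weight, u, v); smaller weight = more similar
        parent = list(range(num_nodes))
        size = [1] * num_nodes
        internal = [0.0] * num_nodes       # max edge weight inside a component

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv and w <= min(internal[ru] + k / size[ru],
                                     internal[rv] + k / size[rv]):
                # merge only if the connecting edge is no heavier than the
                # internal difference of either component plus k/|C|
                parent[rv] = ru
                size[ru] += size[rv]
                internal[ru] = w           # edges arrive in increasing order
        return [find(i) for i in range(num_nodes)]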
|
1605.03848
|
Gilles Louppe
|
Antonio Sutera, Gilles Louppe, V\^an Anh Huynh-Thu, Louis Wehenkel,
Pierre Geurts
|
Context-dependent feature analysis with random forests
|
Accepted for presentation at UAI 2016
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many cases, feature selection is more complicated than identifying a
single subset of input variables that would together explain the output. There
may be interactions that depend on contextual information, i.e., variables that
turn out to be relevant only in some specific circumstances. In this setting, the
contribution of this paper is to extend the random forest variable importances
framework in order (i) to identify variables whose relevance is
context-dependent and (ii) to characterize as precisely as possible the effect
of contextual information on these variables. The usage and the relevance of
our framework for highlighting context-dependent variables is illustrated on
both artificial and real datasets.
|
[
{
"version": "v1",
"created": "Thu, 12 May 2016 14:59:42 GMT"
}
] | 2016-05-13T00:00:00 |
[
[
"Sutera",
"Antonio",
""
],
[
"Louppe",
"Gilles",
""
],
[
"Huynh-Thu",
"Vân Anh",
""
],
[
"Wehenkel",
"Louis",
""
],
[
"Geurts",
"Pierre",
""
]
] |
TITLE: Context-dependent feature analysis with random forests
ABSTRACT: In many cases, feature selection is more complicated than identifying a
single subset of input variables that would together explain the output. There
may be interactions that depend on contextual information, i.e., variables that
turn out to be relevant only in some specific circumstances. In this setting, the
contribution of this paper is to extend the random forest variable importances
framework in order (i) to identify variables whose relevance is
context-dependent and (ii) to characterize as precisely as possible the effect
of contextual information on these variables. The usage and the relevance of
our framework for highlighting context-dependent variables is illustrated on
both artificial and real datasets.
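The intuition can be illustrated with scikit-learn: a variable that looks only moderately important globally can dominate once importances are recomputed within a context. The toy example below is mine and is not the paper's formal importance decomposition:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 2000
    context = rng.integers(0, 2, n)          # contextual variable
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    # the output depends on x1 in one context and on x2 in the other
    y = np.where(context == 1, x1 > 0, x2 > 0).astype(int)
    X = np.column_stack([x1, x2, context])

    rf_all = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    mask = context == 1
    rf_ctx = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[mask], y[mask])
    print("global importances:    ", rf_all.feature_importances_)
    print("importances (ctx == 1):", rf_ctx.feature_importances_)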
|
1511.06449
|
Eunbyung Park
|
Eunbyung Park, Alexander C. Berg
|
Learning to decompose for object detection and instance segmentation
|
ICLR 2016 Workshop
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although deep convolutional neural networks (CNNs) have achieved remarkable
results on object detection and segmentation, pre- and post-processing steps,
such as region proposals and non-maximum suppression (NMS), have been required.
These steps result in high computational complexity and sensitivity to
hyperparameters, e.g. thresholds for NMS. In this work, we propose a novel
end-to-end trainable deep neural network architecture, which consists of
convolutional and recurrent layers, that generates the correct number of object
instances and their bounding boxes (or segmentation masks) given an image,
using only a single network evaluation without any pre- or post-processing
steps. We have tested on detecting digits in multi-digit images synthesized
using MNIST, automatically segmenting digits in these images, and detecting
cars in the KITTI benchmark dataset. The proposed approach outperforms a strong
CNN baseline on the synthesized digits datasets and shows promising results on
KITTI car detection.
|
[
{
"version": "v1",
"created": "Thu, 19 Nov 2015 23:30:06 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Nov 2015 06:07:28 GMT"
},
{
"version": "v3",
"created": "Wed, 11 May 2016 02:55:29 GMT"
}
] | 2016-05-12T00:00:00 |
[
[
"Park",
"Eunbyung",
""
],
[
"Berg",
"Alexander C.",
""
]
] |
TITLE: Learning to decompose for object detection and instance segmentation
ABSTRACT: Although deep convolutional neural networks (CNNs) have achieved remarkable
results on object detection and segmentation, pre- and post-processing steps,
such as region proposals and non-maximum suppression (NMS), have been required.
These steps result in high computational complexity and sensitivity to
hyperparameters, e.g. thresholds for NMS. In this work, we propose a novel
end-to-end trainable deep neural network architecture, which consists of
convolutional and recurrent layers, that generates the correct number of object
instances and their bounding boxes (or segmentation masks) given an image,
using only a single network evaluation without any pre- or post-processing
steps. We have tested on detecting digits in multi-digit images synthesized
using MNIST, automatically segmenting digits in these images, and detecting
cars in the KITTI benchmark dataset. The proposed approach outperforms a strong
CNN baseline on the synthesized digits datasets and shows promising results on
KITTI car detection.
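A toy PyTorch sketch of the convolutional-plus-recurrent idea is given below: an encoder summarizes the image and an LSTM emits one candidate instance per step together with a stop score. The dimensions, box parameterization and stop mechanism are my assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class DecomposeNet(nn.Module):
        def __init__(self, max_steps=8):
            super().__init__()
            self.max_steps = max_steps
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())     # -> (B, 32)
            self.rnn = nn.LSTMCell(32, 64)
            self.head = nn.Linear(64, 5)   # (x, y, w, h, stop score) per step

        def forward(self, img):
            feat = self.encoder(img)
            h = feat.new_zeros(feat.size(0), 64)
            c = feat.new_zeros(feat.size(0), 64)
            outputs = []
            for _ in range(self.max_steps):     # one object instance per step
                h, c = self.rnn(feat, (h, c))
                outputs.append(self.head(h))
            return torch.stack(outputs, dim=1)  # (B, max_steps, 5)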
|
1602.08114
|
Shohreh Shaghaghian Ms
|
Shohreh Shaghaghian, Mark Coates
|
Bayesian Inference of Diffusion Networks with Unknown Infection Times
| null | null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The analysis of diffusion processes in real-world propagation scenarios often
involves estimating variables that are not directly observed. These hidden
variables include parental relationships, the strengths of connections between
nodes, and the times at which infections occur. In this paper, we propose
a framework in which all three sets of parameters are assumed to be hidden and
we develop a Bayesian approach to infer them. After justifying the model
assumptions, we evaluate the performance efficiency of our proposed approach
through numerical simulations on synthetic datasets and real-world diffusion
processes.
|
[
{
"version": "v1",
"created": "Thu, 25 Feb 2016 21:12:03 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Mar 2016 03:10:47 GMT"
},
{
"version": "v3",
"created": "Wed, 11 May 2016 18:58:09 GMT"
}
] | 2016-05-12T00:00:00 |
[
[
"Shaghaghian",
"Shohreh",
""
],
[
"Coates",
"Mark",
""
]
] |
TITLE: Bayesian Inference of Diffusion Networks with Unknown Infection Times
ABSTRACT: The analysis of diffusion processes in real-world propagation scenarios often
involves estimating variables that are not directly observed. These hidden
variables include parental relationships, the strengths of connections between
nodes, and the times at which infections occur. In this paper, we propose
a framework in which all three sets of parameters are assumed to be hidden and
we develop a Bayesian approach to infer them. After justifying the model
assumptions, we evaluate the performance efficiency of our proposed approach
through numerical simulations on synthetic datasets and real-world diffusion
processes.
|
1603.06995
|
Zhicheng Cui
|
Zhicheng Cui and Wenlin Chen and Yixin Chen
|
Multi-Scale Convolutional Neural Networks for Time Series Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Time series classification (TSC), the problem of predicting class labels of
time series, has been around for decades within the community of data mining
and machine learning, and has found many important applications such as
biomedical engineering and clinical prediction. However, it still remains
challenging and falls short in classification accuracy and efficiency.
Traditional approaches typically involve extracting discriminative features
from the original time series using dynamic time warping (DTW) or shapelet
transformation, based on which an off-the-shelf classifier can be applied.
These methods are ad hoc and separate the feature extraction part from the
classification part, which limits their accuracy. Moreover, most existing
methods fail to take into
account the fact that time series often have features at different time scales.
To address these problems, we propose a novel end-to-end neural network model,
Multi-Scale Convolutional Neural Networks (MCNN), which incorporates feature
extraction and classification in a single framework. Leveraging a novel
multi-branch layer and learnable convolutional layers, MCNN automatically
extracts features at different scales and frequencies, leading to superior
feature representation. MCNN is also computationally efficient, as it naturally
leverages GPU computing. We conduct comprehensive empirical evaluation with
various existing methods on a large number of benchmark datasets, and show that
MCNN advances the state-of-the-art by achieving superior accuracy performance
than other leading methods.
|
[
{
"version": "v1",
"created": "Tue, 22 Mar 2016 21:37:33 GMT"
},
{
"version": "v2",
"created": "Fri, 1 Apr 2016 04:51:24 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Apr 2016 18:58:29 GMT"
},
{
"version": "v4",
"created": "Wed, 11 May 2016 04:48:21 GMT"
}
] | 2016-05-12T00:00:00 |
[
[
"Cui",
"Zhicheng",
""
],
[
"Chen",
"Wenlin",
""
],
[
"Chen",
"Yixin",
""
]
] |
TITLE: Multi-Scale Convolutional Neural Networks for Time Series Classification
ABSTRACT: Time series classification (TSC), the problem of predicting class labels of
time series, has been around for decades within the community of data mining
and machine learning, and found many important applications such as biomedical
engineering and clinical prediction. However, it still remains challenging and
falls short of classification accuracy and efficiency. Traditional approaches
typically involve extracting discriminative features from the original time
series using dynamic time warping (DTW) or shapelet transformation, based on
which an off-the-shelf classifier can be applied. These methods are ad-hoc and
separate the feature extraction part with the classification part, which limits
their accuracy performance. Plus, most existing methods fail to take into
account the fact that time series often have features at different time scales.
To address these problems, we propose a novel end-to-end neural network model,
Multi-Scale Convolutional Neural Networks (MCNN), which incorporates feature
extraction and classification in a single framework. Leveraging a novel
multi-branch layer and learnable convolutional layers, MCNN automatically
extracts features at different scales and frequencies, leading to superior
feature representation. MCNN is also computationally efficient, as it naturally
leverages GPU computing. We conduct comprehensive empirical evaluation with
various existing methods on a large number of benchmark datasets, and show that
MCNN advances the state-of-the-art by achieving superior accuracy performance
than other leading methods.
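The multi-branch input stage can be pictured in a few lines of numpy: the original series is complemented by smoothed (multi-frequency) and down-sampled (multi-scale) copies, each feeding its own convolutional branch. Window sizes and strides below are illustrative, not the paper's settings:

    import numpy as np

    def moving_average(x, w):
        # low-pass smoothing branch input
        return np.convolve(x, np.ones(w) / w, mode="valid")

    def downsample(x, k):
        # multi-scale branch input: keep every k-th sample
        return x[::k]

    def multi_branch_inputs(x):
        return {
            "identity": x,
            "smooth_3": moving_average(x, 3),
            "smooth_5": moving_average(x, 5),
            "down_2":   downsample(x, 2),
            "down_4":   downsample(x, 4),
        }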
|
1605.02951
|
Eszter Bok\'anyi
|
Eszter Bok\'anyi, D\'aniel Kondor, L\'aszl\'o Dobos, Tam\'as
Seb\H{o}k, J\'ozsef St\'eger, Istv\'an Csabai, G\'abor Vattay
|
Race, Religion and the City: Twitter Word Frequency Patterns Reveal
Dominant Demographic Dimensions in the United States
| null | null |
10.1057/palcomms.2016.10.
| null |
physics.soc-ph cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, numerous approaches have emerged in the social sciences to exploit
the opportunities made possible by the vast amounts of data generated by online
social networks (OSNs). Having access to information about users on such a
scale opens up a range of possibilities, all without the limitations associated
with often slow and expensive paper-based polls. A question that remains to be
satisfactorily addressed, however, is how demography is represented in OSN
content. Here, we study language use in the US using a corpus of text compiled
from over half a billion geo-tagged messages from the online microblogging
platform Twitter. Our intention is to reveal the most important spatial
patterns in language use in an unsupervised manner and relate them to
demographics. Our approach is based on Latent Semantic Analysis (LSA) augmented
with the Robust Principal Component Analysis (RPCA) methodology. We find
spatially correlated patterns that can be interpreted based on the words
associated with them. The main language features can be related to slang use,
urbanization, travel, religion and ethnicity, the patterns of which are shown
to correlate plausibly with traditional census data. Our findings thus validate
the concept of demography being represented in OSN language use and show that
the traits observed are inherently present in the word frequencies without any
previous assumptions about the dataset. Thus, they could form the basis of
further research focusing on the evaluation of demographic data estimation from
other big data sources, or on the dynamical processes that result in the
patterns found here.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2016 11:38:43 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2016 07:58:12 GMT"
}
] | 2016-05-12T00:00:00 |
[
[
"Bokányi",
"Eszter",
""
],
[
"Kondor",
"Dániel",
""
],
[
"Dobos",
"László",
""
],
[
"Sebők",
"Tamás",
""
],
[
"Stéger",
"József",
""
],
[
"Csabai",
"István",
""
],
[
"Vattay",
"Gábor",
""
]
] |
TITLE: Race, Religion and the City: Twitter Word Frequency Patterns Reveal
Dominant Demographic Dimensions in the United States
ABSTRACT: Recently, numerous approaches have emerged in the social sciences to exploit
the opportunities made possible by the vast amounts of data generated by online
social networks (OSNs). Having access to information about users on such a
scale opens up a range of possibilities, all without the limitations associated
with often slow and expensive paper-based polls. A question that remains to be
satisfactorily addressed, however, is how demography is represented in OSN
content. Here, we study language use in the US using a corpus of text compiled
from over half a billion geo-tagged messages from the online microblogging
platform Twitter. Our intention is to reveal the most important spatial
patterns in language use in an unsupervised manner and relate them to
demographics. Our approach is based on Latent Semantic Analysis (LSA) augmented
with the Robust Principal Component Analysis (RPCA) methodology. We find
spatially correlated patterns that can be interpreted based on the words
associated with them. The main language features can be related to slang use,
urbanization, travel, religion and ethnicity, the patterns of which are shown
to correlate plausibly with traditional census data. Our findings thus validate
the concept of demography being represented in OSN language use and show that
the traits observed are inherently present in the word frequencies without any
previous assumptions about the dataset. Thus, they could form the basis of
further research focusing on the evaluation of demographic data estimation from
other big data sources, or on the dynamical processes that result in the
patterns found here.
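The LSA step has a standard scikit-learn incarnation, sketched below on a toy corpus. The robust-PCA augmentation the authors use to separate sparse outliers from the low-rank structure has no stock scikit-learn implementation and is omitted here:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    docs = ["brunch downtown tonight", "church service sunday",
            "traffic on the highway", "sunday brunch with friends"]
    X = TfidfVectorizer().fit_transform(docs)       # document-term matrix
    lsa = TruncatedSVD(n_components=2, random_state=0).fit(X)
    print(lsa.components_)    # word loadings per latent dimension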
|
1605.03222
|
Alvaro Soto
|
Anali Alfaro, Domingo Mery, Alvaro Soto
|
Action Recognition in Video Using Sparse Coding and Relative Features
|
Accepted to CVPR 2016
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work presents an approach to category-based action recognition in video
using sparse coding techniques. The proposed approach includes two main
contributions: i) A new method to handle intra-class variations by decomposing
each video into a reduced set of representative atomic action acts or
key-sequences, and ii) A new video descriptor, ITRA: Inter-Temporal Relational
Act Descriptor, that exploits the power of comparative reasoning to capture
relative similarity relations among key-sequences. In terms of the method to
obtain key-sequences, we introduce a loss function that, for each video, leads
to the identification of a sparse set of representative key-frames capturing
both relevant particularities arising in the input video and relevant
generalities arising in the complete class collection. In terms of the method
to obtain the ITRA descriptor, we introduce a novel scheme to quantify relative
intra and inter-class similarities among local temporal patterns arising in the
videos. The resulting ITRA descriptor demonstrates to be highly effective to
discriminate among action categories. As a result, the proposed approach
reaches remarkable action recognition performance on several popular benchmark
datasets, outperforming alternative state-of-the-art techniques by a large
margin.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2016 21:52:25 GMT"
}
] | 2016-05-12T00:00:00 |
[
[
"Alfaro",
"Anali",
""
],
[
"Mery",
"Domingo",
""
],
[
"Soto",
"Alvaro",
""
]
] |
TITLE: Action Recognition in Video Using Sparse Coding and Relative Features
ABSTRACT: This work presents an approach to category-based action recognition in video
using sparse coding techniques. The proposed approach includes two main
contributions: i) A new method to handle intra-class variations by decomposing
each video into a reduced set of representative atomic action acts or
key-sequences, and ii) A new video descriptor, ITRA: Inter-Temporal Relational
Act Descriptor, that exploits the power of comparative reasoning to capture
relative similarity relations among key-sequences. In terms of the method to
obtain key-sequences, we introduce a loss function that, for each video, leads
to the identification of a sparse set of representative key-frames capturing
both relevant particularities arising in the input video and relevant
generalities arising in the complete class collection. In terms of the method
to obtain the ITRA descriptor, we introduce a novel scheme to quantify relative
intra and inter-class similarities among local temporal patterns arising in the
videos. The resulting ITRA descriptor demonstrates to be highly effective to
discriminate among action categories. As a result, the proposed approach
reaches remarkable action recognition performance on several popular benchmark
datasets, outperforming alternative state-of-the-art techniques by a large
margin.
|
1605.03328
|
Magnus Andersson
|
Alvaro Rodriguez, Hanqing Zhang, Krister Wiklund, Tomas Brodin,
Jonatan Klaminder, Patrik Andersson, Magnus Andersson
|
A robust particle detection algorithm based on symmetry
|
Manuscript including supplementary materials
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Particle tracking is common in many biophysical, ecological, and
micro-fluidic applications. Reliable tracking information is heavily dependent
on the system under study and on algorithms that correctly determine particle
positions between images. However, in a real environmental context with the
presence of noise, including particulate or dissolved matter in water, and low
and fluctuating light conditions, many algorithms fail to obtain reliable
information. We propose a new algorithm, the Circular Symmetry algorithm
(C-Sym), for detecting the position of a circular particle with high accuracy
and precision in noisy conditions. The algorithm takes advantage of the spatial
symmetry of the particle allowing for subpixel accuracy. We compare the
proposed algorithm with four different methods using both synthetic and
experimental datasets. The results show that C-Sym is the most accurate and
precise algorithm when tracking micro-particles in all tested conditions and it
has the potential for use in applications including tracking biota in their
environment.
|
[
{
"version": "v1",
"created": "Wed, 11 May 2016 08:38:32 GMT"
}
] | 2016-05-12T00:00:00 |
[
[
"Rodriguez",
"Alvaro",
""
],
[
"Zhang",
"Hanqing",
""
],
[
"Wiklund",
"Krister",
""
],
[
"Brodin",
"Tomas",
""
],
[
"Klaminder",
"Jonatan",
""
],
[
"Andersson",
"Patrik",
""
],
[
"Andersson",
"Magnus",
""
]
] |
TITLE: A robust particle detection algorithm based on symmetry
ABSTRACT: Particle tracking is common in many biophysical, ecological, and
micro-fluidic applications. Reliable tracking information is heavily dependent
on the system under study and on algorithms that correctly determine particle
positions between images. However, in a real environmental context with the
presence of noise, including particulate or dissolved matter in water, and low
and fluctuating light conditions, many algorithms fail to obtain reliable
information. We propose a new algorithm, the Circular Symmetry algorithm
(C-Sym), for detecting the position of a circular particle with high accuracy
and precision in noisy conditions. The algorithm takes advantage of the spatial
symmetry of the particle allowing for subpixel accuracy. We compare the
proposed algorithm with four different methods using both synthetic and
experimental datasets. The results show that C-Sym is the most accurate and
precise algorithm when tracking micro-particles in all tested conditions and it
has the potential for use in applications including tracking biota in their
environment.
|
1506.02640
|
Joseph Redmon
|
Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi
|
You Only Look Once: Unified, Real-Time Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present YOLO, a new approach to object detection. Prior work on object
detection repurposes classifiers to perform detection. Instead, we frame object
detection as a regression problem to spatially separated bounding boxes and
associated class probabilities. A single neural network predicts bounding boxes
and class probabilities directly from full images in one evaluation. Since the
whole detection pipeline is a single network, it can be optimized end-to-end
directly on detection performance.
Our unified architecture is extremely fast. Our base YOLO model processes
images in real-time at 45 frames per second. A smaller version of the network,
Fast YOLO, processes an astounding 155 frames per second while still achieving
double the mAP of other real-time detectors. Compared to state-of-the-art
detection systems, YOLO makes more localization errors but is far less likely
to predict false detections where nothing exists. Finally, YOLO learns very
general representations of objects. It outperforms all other detection methods,
including DPM and R-CNN, by a wide margin when generalizing from natural images
to artwork on both the Picasso Dataset and the People-Art Dataset.
|
[
{
"version": "v1",
"created": "Mon, 8 Jun 2015 19:52:52 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Jun 2015 07:51:14 GMT"
},
{
"version": "v3",
"created": "Thu, 11 Jun 2015 19:21:47 GMT"
},
{
"version": "v4",
"created": "Thu, 12 Nov 2015 22:53:44 GMT"
},
{
"version": "v5",
"created": "Mon, 9 May 2016 22:22:11 GMT"
}
] | 2016-05-11T00:00:00 |
[
[
"Redmon",
"Joseph",
""
],
[
"Divvala",
"Santosh",
""
],
[
"Girshick",
"Ross",
""
],
[
"Farhadi",
"Ali",
""
]
] |
TITLE: You Only Look Once: Unified, Real-Time Object Detection
ABSTRACT: We present YOLO, a new approach to object detection. Prior work on object
detection repurposes classifiers to perform detection. Instead, we frame object
detection as a regression problem to spatially separated bounding boxes and
associated class probabilities. A single neural network predicts bounding boxes
and class probabilities directly from full images in one evaluation. Since the
whole detection pipeline is a single network, it can be optimized end-to-end
directly on detection performance.
Our unified architecture is extremely fast. Our base YOLO model processes
images in real-time at 45 frames per second. A smaller version of the network,
Fast YOLO, processes an astounding 155 frames per second while still achieving
double the mAP of other real-time detectors. Compared to state-of-the-art
detection systems, YOLO makes more localization errors but is far less likely
to predict false detections where nothing exists. Finally, YOLO learns very
general representations of objects. It outperforms all other detection methods,
including DPM and R-CNN, by a wide margin when generalizing from natural images
to artwork on both the Picasso Dataset and the People-Art Dataset.
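The regression formulation can be made concrete by decoding the output grid: with S x S cells, B boxes per cell and C classes, each cell predicts B*5 + C numbers. The sketch below uses YOLO's published grid layout but random values in place of a trained network, and the confidence threshold is arbitrary:

    import numpy as np

    S, B, C = 7, 2, 20
    pred = np.random.rand(S, S, B * 5 + C)     # stand-in for network output

    boxes = []
    for i in range(S):
        for j in range(S):
            cell = pred[i, j]
            class_probs = cell[B * 5:]
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                # x, y are offsets within cell (i, j); w, h are image-relative
                cx, cy = (j + x) / S, (i + y) / S
                score = conf * class_probs.max()
                if score > 0.2:                # arbitrary threshold
                    boxes.append((cx, cy, w, h, score, class_probs.argmax()))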
|
1602.03926
|
Igor Barahona Dr
|
Igor Barahona, Judith Cavazos, and Jian-Bo Yang
|
Modelling the level of adoption of analytical tools; An implementation
of multi-criteria evidential reasoning
|
Keywords: MCDA methods; evidential reasoning; analytical tools;
multiple source data
|
International Journal of Supply and Operations Management. (2014)
Vol.1, Issue 2, pp 129-151
| null | null |
stat.AP cs.CY stat.OT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In the future, competitive advantages will be given to organisations that can
extract valuable information from massive data and make better decisions. In
most cases, this data comes from multiple sources. Therefore, the challenge is
to aggregate them into a common framework in order to make them meaningful and
useful. This paper will first review the most important multi-criteria decision
analysis (MCDA) methods in the current literature. We will offer a novel,
practical and consistent methodology, based on a type of MCDA, to aggregate data
from two different sources into a common framework. Two datasets that are
different in nature but related to the same topic are aggregated to a common
scale by implementing a set of transformation rules. This allows us to generate
appropriate evidence for assessing and finally prioritising the level of
adoption of analytical tools in four types of companies. A numerical example is
provided to clarify how to implement this methodology. A six-step
process is offered as a guideline to assist engineers, researchers or
practitioners interested in replicating this methodology in any situation where
there is a need to aggregate and transform multiple source data.
|
[
{
"version": "v1",
"created": "Thu, 11 Feb 2016 23:02:10 GMT"
}
] | 2016-05-11T00:00:00 |
[
[
"Barahona",
"Igor",
""
],
[
"Cavazos",
"Judith",
""
],
[
"Yang",
"Jian-Bo",
""
]
] |
TITLE: Modelling the level of adoption of analytical tools; An implementation
of multi-criteria evidential reasoning
ABSTRACT: In the future, competitive advantages will be given to organisations that can
extract valuable information from massive data and make better decisions. In
most cases, this data comes from multiple sources. Therefore, the challenge is
to aggregate them into a common framework in order to make them meaningful and
useful. This paper will first review the most important multi-criteria decision
analysis (MCDA) methods in the current literature. We will offer a novel,
practical and consistent methodology, based on a type of MCDA, to aggregate data
from two different sources into a common framework. Two datasets that are
different in nature but related to the same topic are aggregated to a common
scale by implementing a set of transformation rules. This allows us to generate
appropriate evidence for assessing and finally prioritising the level of
adoption of analytical tools in four types of companies. A numerical example is
provided to clarify how to implement this methodology. A six-step
process is offered as a guideline to assist engineers, researchers or
practitioners interested in replicating this methodology in any situation where
there is a need to aggregate and transform multiple source data.
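One plausible flavor of such a transformation rule is rescaling measurements from heterogeneous scales onto a common [0, 1] scale before aggregation; the snippet below illustrates the idea only and is not the paper's actual rule set:

    def to_common_scale(values, lo, hi):
        # map values measured on [lo, hi] onto the common [0, 1] scale
        return [(v - lo) / (hi - lo) for v in values]

    survey_1_to_5   = to_common_scale([1, 3, 5, 4], lo=1, hi=5)
    survey_0_to_100 = to_common_scale([20, 75, 90], lo=0, hi=100)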
|
1605.02892
|
Marco Bertini
|
Simone Ercoli, Marco Bertini and Alberto Del Bimbo
|
Compact Hash Codes for Efficient Visual Descriptors Retrieval in Large
Scale Databases
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present an efficient method for visual descriptors retrieval
based on compact hash codes computed using a multiple k-means assignment. The
method has been applied to the problem of approximate nearest neighbor (ANN)
search of local and global visual content descriptors, and it has been tested
on different datasets: three large scale public datasets of up to one billion
descriptors (BIGANN) and, supported by recent progress in convolutional neural
networks (CNNs), also on the CIFAR-10 and MNIST datasets. Experimental results
show that, despite its simplicity, the proposed method obtains a very high
performance that makes it superior to more complex state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2016 08:53:04 GMT"
}
] | 2016-05-11T00:00:00 |
[
[
"Ercoli",
"Simone",
""
],
[
"Bertini",
"Marco",
""
],
[
"Del Bimbo",
"Alberto",
""
]
] |
TITLE: Compact Hash Codes for Efficient Visual Descriptors Retrieval in Large
Scale Databases
ABSTRACT: In this paper we present an efficient method for visual descriptors retrieval
based on compact hash codes computed using a multiple k-means assignment. The
method has been applied to the problem of approximate nearest neighbor (ANN)
search of local and global visual content descriptors, and it has been tested
on different datasets: three large scale public datasets of up to one billion
descriptors (BIGANN) and, supported by recent progress in convolutional neural
networks (CNNs), also on the CIFAR-10 and MNIST datasets. Experimental results
show that, despite its simplicity, the proposed method obtains a very high
performance that makes it superior to more complex state-of-the-art methods.
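A sketch of the multiple k-means assignment idea with scikit-learn and numpy follows: each descriptor sets the bits of its m nearest centroids in a k-bit code, so similar descriptors share many bits. The parameter values and the use of the indexed data itself for centroid training are assumptions:

    import numpy as np
    from sklearn.cluster import KMeans

    def hash_codes(X, k=32, m=3):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        d = km.transform(X)                  # distances to the k centroids
        codes = np.zeros((len(X), k), dtype=np.uint8)
        nearest = np.argsort(d, axis=1)[:, :m]
        np.put_along_axis(codes, nearest, 1, axis=1)
        return codes                         # one k-bit code per descriptor

Candidates at query time can then be shortlisted by Hamming distance between codes before any exact distance computation.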
|
1605.02917
|
Mohammad Ali Zare Chahooki
|
Seyed Hamid Reza Mohammadi, Mohammad Ali Zare Chahooki
|
Web Spam Detection Using Multiple Kernels in Twin Support Vector Machine
| null | null | null | null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Search engines are the most important tools for web data acquisition. Web
pages are crawled and indexed by search engines, and users typically locate
useful web pages by querying a search engine. One of the challenges in search
engine administration is spam pages, which waste search engine resources. By
deceiving search engine ranking algorithms, these pages try to appear on the
first page of results. There are many approaches to web spam page detection,
such as measuring HTML code style similarity, analyzing linguistic patterns of
pages, and applying machine learning algorithms to page content features. One
well-known algorithm used in the machine learning approach is the Support
Vector Machine (SVM) classifier. Recently, the basic structure of SVM has been
extended to increase robustness and classification accuracy. In this paper we
improve the accuracy of web spam detection by using two nonlinear kernels in
the Twin SVM (TSVM), an improved extension of SVM. The classifier's ability to
separate the data is increased by using a separate kernel for each class of
data. The effectiveness of the proposed method is evaluated on two widely used
public spam datasets, UK-2007 and UK-2006. The results show the effectiveness
of the proposed kernelized version of TSVM for web spam page detection.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2016 10:05:40 GMT"
}
] | 2016-05-11T00:00:00 |
[
[
"Mohammadi",
"Seyed Hamid Reza",
""
],
[
"Chahooki",
"Mohammad Ali Zare",
""
]
] |
TITLE: Web Spam Detection Using Multiple Kernels in Twin Support Vector Machine
ABSTRACT: Search engines are the most important tools for web data acquisition. Web
pages are crawled and indexed by search engines, and users typically locate
useful web pages by querying a search engine. One of the challenges in search
engine administration is spam pages, which waste search engine resources. By
deceiving search engine ranking algorithms, these pages try to appear on the
first page of results. There are many approaches to web spam page detection,
such as measuring HTML code style similarity, analyzing linguistic patterns of
pages, and applying machine learning algorithms to page content features. One
well-known algorithm used in the machine learning approach is the Support
Vector Machine (SVM) classifier. Recently, the basic structure of SVM has been
extended to increase robustness and classification accuracy. In this paper we
improve the accuracy of web spam detection by using two nonlinear kernels in
the Twin SVM (TSVM), an improved extension of SVM. The classifier's ability to
separate the data is increased by using a separate kernel for each class of
data. The effectiveness of the proposed method is evaluated on two widely used
public spam datasets, UK-2007 and UK-2006. The results show the effectiveness
of the proposed kernelized version of TSVM for web spam page detection.
|
1605.02960
|
Konrad Hinsen
|
Konrad Hinsen
|
Scientific notations for the digital era
| null | null | null | null |
physics.soc-ph cs.OH physics.comp-ph physics.hist-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computers have profoundly changed the way scientific research is done.
Whereas the importance of computers as research tools is evident to everyone,
the impact of the digital revolution on the representation of scientific
knowledge is not yet widely recognized. An ever increasing part of today's
scientific knowledge is expressed, published, and archived exclusively in the
form of software and electronic datasets. In this essay, I compare these
digital scientific notations to the traditional scientific notations that
have been used for centuries, showing how the digital notations optimized for
computerized processing are often an obstacle to scientific communication and
to creative work by human scientists. I analyze the causes and propose
guidelines for the design of more human-friendly digital scientific notations.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2016 11:56:49 GMT"
}
] | 2016-05-11T00:00:00 |
[
[
"Hinsen",
"Konrad",
""
]
] |
TITLE: Scientific notations for the digital era
ABSTRACT: Computers have profoundly changed the way scientific research is done.
Whereas the importance of computers as research tools is evident to everyone,
the impact of the digital revolution on the representation of scientific
knowledge is not yet widely recognized. An ever increasing part of today's
scientific knowledge is expressed, published, and archived exclusively in the
form of software and electronic datasets. In this essay, I compare these
digital scientific notations to the traditional scientific notations that
have been used for centuries, showing how the digital notations optimized for
computerized processing are often an obstacle to scientific communication and
to creative work by human scientists. I analyze the causes and propose
guidelines for the design of more human-friendly digital scientific notations.
|
1605.02989
|
Marco Capo MSc
|
Marco Cap\'o, Aritz P\'erez, Jos\'e Antonio Lozano
|
An efficient K-means algorithm for Massive Data
|
38 pages, 10 figures
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the progressive growth of the amount of data available in a wide
variety of scientific fields, it has become more difficult to manipulate and
analyze such information. Even though datasets have grown in size, the K-means
algorithm remains as one of the most popular clustering methods, in spite of
its dependency on the initial settings and high computational cost, especially
in terms of distance computations. In this work, we propose an efficient
approximation to the K-means problem intended for massive data. Our approach
recursively partitions the entire dataset into a small number of subsets, each
of which is characterized by its representative (center of mass) and weight
(cardinality); afterwards, a weighted version of the K-means algorithm is
applied over this local representation, which can drastically reduce the number
of distances computed. In addition to some theoretical properties, experimental
results indicate that our method outperforms well-known approaches, such as the
K-means++ and the minibatch K-means, in terms of the relation between number of
distance computations and the quality of the approximation.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2016 13:01:37 GMT"
}
] | 2016-05-11T00:00:00 |
[
[
"Capó",
"Marco",
""
],
[
"Pérez",
"Aritz",
""
],
[
"Lozano",
"José Antonio",
""
]
] |
TITLE: An efficient K-means algorithm for Massive Data
ABSTRACT: Due to the progressive growth of the amount of data available in a wide
variety of scientific fields, it has become more difficult to manipulate and
analyze such information. Even though datasets have grown in size, the K-means
algorithm remains as one of the most popular clustering methods, in spite of
its dependency on the initial settings and high computational cost, especially
in terms of distance computations. In this work, we propose an efficient
approximation to the K-means problem intended for massive data. Our approach
recursively partitions the entire dataset into a small number of subsets, each
of which is characterized by its representative (center of mass) and weight
(cardinality); afterwards, a weighted version of the K-means algorithm is
applied over this local representation, which can drastically reduce the number
of distances computed. In addition to some theoretical properties, experimental
results indicate that our method outperforms well-known approaches, such as the
K-means++ and the minibatch K-means, in terms of the relation between number of
distance computations and the quality of the approximation.
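The final step described above maps directly onto scikit-learn's sample-weight support. The sketch below assumes the representatives and their cardinalities have already been produced by the recursive partitioning, which is not reproduced here:

    import numpy as np
    from sklearn.cluster import KMeans

    reps = np.random.rand(500, 16)             # centers of mass of the subsets
    weights = np.random.randint(1, 100, 500)   # cardinality of each subset

    km = KMeans(n_clusters=10, random_state=0)
    km.fit(reps, sample_weight=weights)        # weighted Lloyd iterations
    centers = km.cluster_centers_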
|
1605.03004
|
Yanjun Qi Dr.
|
Zeming Lin, Jack Lanchantin, Yanjun Qi
|
MUST-CNN: A Multilayer Shift-and-Stitch Deep Convolutional Architecture
for Sequence-based Protein Structure Prediction
|
8 pages; 3 figures; deep learning based sequence-to-sequence
  prediction. In AAAI 2016
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting protein properties such as solvent accessibility and secondary
structure from its primary amino acid sequence is an important task in
bioinformatics. Recently, a few deep learning models have surpassed the
traditional window based multilayer perceptron. Taking inspiration from the
image classification domain we propose a deep convolutional neural network
architecture, MUST-CNN, to predict protein properties. This architecture uses a
novel multilayer shift-and-stitch (MUST) technique to generate fully dense
per-position predictions on protein sequences. Our model is significantly
simpler than the state-of-the-art, yet achieves better results. By combining
MUST and the efficient convolution operation, we can consider far more
parameters while retaining very fast prediction speeds. We beat the
state-of-the-art performance on two large protein property prediction datasets.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2016 13:31:52 GMT"
}
] | 2016-05-11T00:00:00 |
[
[
"Lin",
"Zeming",
""
],
[
"Lanchantin",
"Jack",
""
],
[
"Qi",
"Yanjun",
""
]
] |
TITLE: MUST-CNN: A Multilayer Shift-and-Stitch Deep Convolutional Architecture
for Sequence-based Protein Structure Prediction
ABSTRACT: Predicting protein properties such as solvent accessibility and secondary
structure from its primary amino acid sequence is an important task in
bioinformatics. Recently, a few deep learning models have surpassed the
traditional window based multilayer perceptron. Taking inspiration from the
image classification domain we propose a deep convolutional neural network
architecture, MUST-CNN, to predict protein properties. This architecture uses a
novel multilayer shift-and-stitch (MUST) technique to generate fully dense
per-position predictions on protein sequences. Our model is significantly
simpler than the state-of-the-art, yet achieves better results. By combining
MUST and the efficient convolution operation, we can consider far more
parameters while retaining very fast prediction speeds. We beat the
state-of-the-art performance on two large protein property prediction datasets.
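The shift-and-stitch trick is easiest to see in one dimension: a network whose total stride is s produces one output per s input positions, so running it on s shifted copies of the input and interleaving the results recovers a dense per-position prediction. The stand-in "network" below exists only to illustrate the indexing:

    import numpy as np

    def strided_net(x, s=2):
        # stand-in for a net with total stride s: one output per s inputs
        return x[::s] * 2.0

    def shift_and_stitch(x, s=2):
        outs = [strided_net(x[i:], s) for i in range(s)]
        dense = np.empty(len(x))
        for i in range(s):                 # interleave the shifted outputs
            dense[i::s] = outs[i]
        return dense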
|
1605.03150
|
Yasamin Alkhorshid
|
Yasamin Alkhorshid, Kamelia Aryafar, Sven Bauer, and Gerd Wanielik
|
Road Detection through Supervised Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous driving is a rapidly evolving technology. Autonomous vehicles are
capable of sensing their environment and navigating without human input through
sensory information such as radar, lidar, GNSS, vehicle odometry, and computer
vision. This sensory input provides a rich dataset that can be used in
combination with machine learning models to tackle multiple problems in
supervised settings. In this paper we focus on road detection through
gray-scale images as the sole sensory input. Our contributions are twofold:
first, we introduce an annotated dataset of urban roads for machine learning
tasks; second, we introduce a road detection framework on this dataset through
supervised classification and hand-crafted feature vectors.
|
[
{
"version": "v1",
"created": "Tue, 10 May 2016 18:53:09 GMT"
}
] | 2016-05-11T00:00:00 |
[
[
"Alkhorshid",
"Yasamin",
""
],
[
"Aryafar",
"Kamelia",
""
],
[
"Bauer",
"Sven",
""
],
[
"Wanielik",
"Gerd",
""
]
] |
TITLE: Road Detection through Supervised Classification
ABSTRACT: Autonomous driving is a rapidly evolving technology. Autonomous vehicles are
capable of sensing their environment and navigating without human input through
sensory information such as radar, lidar, GNSS, vehicle odometry, and computer
vision. This sensory input provides a rich dataset that can be used in
combination with machine learning models to tackle multiple problems in
supervised settings. In this paper we focus on road detection through
gray-scale images as the sole sensory input. Our contributions are twofold:
first, we introduce an annotated dataset of urban roads for machine learning
tasks; second, we introduce a road detection framework on this dataset through
supervised classification and hand-crafted feature vectors.
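A minimal version of this recipe with scikit-image and scikit-learn might look as follows. HOG features and a linear SVM are stand-ins for the paper's hand-crafted feature vectors and classifier, and the random arrays are placeholders for patches taken from the annotated dataset:

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def patch_features(patches):             # patches: (n, 32, 32) gray-scale
        return np.array([hog(p, pixels_per_cell=(8, 8)) for p in patches])

    X = patch_features(np.random.rand(20, 32, 32))   # placeholder patches
    y = np.array([1] * 10 + [0] * 10)                # 1 = road, 0 = not road
    clf = LinearSVC().fit(X, y)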
|
1310.1177
|
Weixiang Shao
|
Weixiang Shao (1), Xiaoxiao Shi (1) and Philip S. Yu (1) ((1)
University of Illinois at Chicago)
|
Clustering on Multiple Incomplete Datasets via Collective Kernel
Learning
|
ICDM 2013, Code available at
https://github.com/software-shao/Collective-Kernel-Learning
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple datasets containing different types of features may be available for
a given task. For instance, users' profiles can be used to group users for
recommendation systems. In addition, a model can also use users' historical
behaviors and credit history to group users. Each dataset contains different
information and suffices for learning. A number of clustering algorithms on
multiple datasets were proposed during the past few years. These algorithms
assume that at least one dataset is complete. So far as we know, all the
previous methods will not be applicable if there is no complete dataset
available. However, in reality, there are many situations where no dataset is
complete. As in building a recommendation system, some new users may not have a
profile or historical behaviors, while some may not have a credit history.
Hence, no available dataset is complete. In order to solve this problem, we
propose an approach called Collective Kernel Learning to infer hidden sample
similarity from multiple incomplete datasets. The idea is to collectively
complete the kernel matrices of incomplete datasets by optimizing the
alignment of the shared instances of the datasets. Furthermore, a clustering
algorithm is proposed based on the kernel matrix. The experiments on both
synthetic and real datasets demonstrate the effectiveness of the proposed
approach. The proposed clustering algorithm outperforms the comparison
algorithms by as much as two times in normalized mutual information.
|
[
{
"version": "v1",
"created": "Fri, 4 Oct 2013 06:18:59 GMT"
},
{
"version": "v2",
"created": "Fri, 6 May 2016 23:35:13 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Shao",
"Weixiang",
""
],
[
"Shi",
"Xiaoxiao",
""
],
[
"Yu",
"Philip S.",
""
]
] |
TITLE: Clustering on Multiple Incomplete Datasets via Collective Kernel
Learning
ABSTRACT: Multiple datasets containing different types of features may be available for
a given task. For instance, users' profiles can be used to group users for
recommendation systems. In addition, a model can also use users' historical
behaviors and credit history to group users. Each dataset contains different
information and suffices for learning. A number of clustering algorithms on
multiple datasets were proposed during the past few years. These algorithms
assume that at least one dataset is complete. So far as we know, all the
previous methods will not be applicable if there is no complete dataset
available. However, in reality, there are many situations where no dataset is
complete. As in building a recommendation system, some new users may not have a
profile or historical behaviors, while some may not have a credit history.
Hence, no available dataset is complete. In order to solve this problem, we
propose an approach called Collective Kernel Learning to infer hidden sample
similarity from multiple incomplete datasets. The idea is to collectively
complete the kernel matrices of incomplete datasets by optimizing the
alignment of the shared instances of the datasets. Furthermore, a clustering
algorithm is proposed based on the kernel matrix. The experiments on both
synthetic and real datasets demonstrate the effectiveness of the proposed
approach. The proposed clustering algorithm outperforms the comparison
algorithms by as much as two times in normalized mutual information.
|
1412.6574
|
Ali Sharif Razavian
|
Ali Sharif Razavian, Josephine Sullivan, Stefan Carlsson, Atsuto Maki
|
Visual Instance Retrieval with Deep Convolutional Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper provides an extensive study on the availability of image
representations based on convolutional networks (ConvNets) for the task of
visual instance retrieval. Besides the choice of convolutional layers, we
present an efficient pipeline exploiting multi-scale schemes to extract local
features, in particular, by taking geometric invariance into explicit account,
i.e. positions, scales and spatial consistency. In our experiments using five
standard image retrieval datasets, we demonstrate that generic ConvNet image
representations can outperform other state-of-the-art methods if they are
extracted appropriately.
|
[
{
"version": "v1",
"created": "Sat, 20 Dec 2014 01:32:43 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Jan 2015 19:09:15 GMT"
},
{
"version": "v3",
"created": "Fri, 10 Apr 2015 18:20:51 GMT"
},
{
"version": "v4",
"created": "Mon, 9 May 2016 08:54:31 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Razavian",
"Ali Sharif",
""
],
[
"Sullivan",
"Josephine",
""
],
[
"Carlsson",
"Stefan",
""
],
[
"Maki",
"Atsuto",
""
]
] |
TITLE: Visual Instance Retrieval with Deep Convolutional Networks
ABSTRACT: This paper provides an extensive study on the availability of image
representations based on convolutional networks (ConvNets) for the task of
visual instance retrieval. Besides the choice of convolutional layers, we
present an efficient pipeline exploiting multi-scale schemes to extract local
features, in particular, by taking geometric invariance into explicit account,
i.e. positions, scales and spatial consistency. In our experiments using five
standard image retrieval datasets, we demonstrate that generic ConvNet image
representations can outperform other state-of-the-art methods if they are
extracted appropriately.
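
Illustrative sketch of the multi-scale flavour of such a pipeline, with a generic torchvision backbone standing in for the networks studied in the paper; the model choice, scales, and pooling below are assumptions.

import torch
from torchvision.models import resnet18
import torchvision.transforms.functional as TF

def multiscale_descriptor(img, model, scales=(1.0, 0.75, 0.5)):
    # pool an L2-normalised global feature at several image scales and
    # average them into a single retrieval descriptor
    feats = []
    for s in scales:
        h, w = int(img.shape[-2] * s), int(img.shape[-1] * s)
        x = TF.resize(img, [h, w]).unsqueeze(0)
        with torch.no_grad():
            f = model(x).squeeze(0)
        feats.append(f / f.norm())
    f = torch.stack(feats).mean(0)
    return f / f.norm()

model = resnet18(weights="IMAGENET1K_V1")
model.fc = torch.nn.Identity()   # drop the classifier, keep the pooled feature
model.eval()
query = torch.rand(3, 224, 224)  # stand-in image tensor
print(multiscale_descriptor(query, model).shape)

Retrieval then reduces to nearest-neighbour search over these descriptors, e.g. by cosine similarity.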
|
1604.04558
|
Jyothi Korra
|
Jinju Joby and Jyothi Korra
|
Accessing accurate documents by mining auxiliary document information
| null | null | null | null |
cs.IR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Earlier techniques of text mining included algorithms such as k-means, Naive
Bayes, and SVM, which classify and cluster text documents to mine relevant
information about them. The need to improve these mining techniques has us
searching for methods that build on the available algorithms. This paper
proposes one such technique, which uses the auxiliary information present
inside the text documents to improve the mining. This auxiliary information can
be a description of the content, and it can be either useful or completely
useless for mining. The user should assess the worth of the auxiliary
information before considering this technique for text mining. In this paper, a
combination of classical clustering algorithms is used to mine the datasets.
The algorithm runs in two stages which carry out mining at different levels of
abstraction. The clustered documents are then classified into the required
groups. The proposed technique is aimed at improving the results of document
clustering.
|
[
{
"version": "v1",
"created": "Fri, 15 Apr 2016 16:27:38 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Joby",
"Jinju",
""
],
[
"Korra",
"Jyothi",
""
]
] |
TITLE: Accessing accurate documents by mining auxiliary document information
ABSTRACT: Earlier techniques of text mining included algorithms such as k-means, Naive
Bayes, and SVM, which classify and cluster text documents to mine relevant
information about them. The need to improve these mining techniques has us
searching for methods that build on the available algorithms. This paper
proposes one such technique, which uses the auxiliary information present
inside the text documents to improve the mining. This auxiliary information can
be a description of the content, and it can be either useful or completely
useless for mining. The user should assess the worth of the auxiliary
information before considering this technique for text mining. In this paper, a
combination of classical clustering algorithms is used to mine the datasets.
The algorithm runs in two stages which carry out mining at different levels of
abstraction. The clustered documents are then classified into the required
groups. The proposed technique is aimed at improving the results of document
clustering.
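
A toy scikit-learn sketch of the two-stage idea (coarse clusters from the auxiliary text, finer clusters within each from the document bodies); the stage split, data, and parameters are illustrative assumptions, not the paper's exact algorithm.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def two_stage_cluster(docs, aux, k1=2, k2=1):
    # stage 1: cluster on the auxiliary descriptions (coarse abstraction)
    coarse = KMeans(n_clusters=k1, n_init=10).fit_predict(
        TfidfVectorizer().fit_transform(aux))
    labels = np.zeros(len(docs), dtype=int)
    # stage 2: cluster the document bodies inside each coarse cluster
    for c in range(k1):
        idx = np.nonzero(coarse == c)[0]
        if len(idx) < k2:
            labels[idx] = c * k2
            continue
        fine = KMeans(n_clusters=k2, n_init=10).fit_predict(
            TfidfVectorizer().fit_transform([docs[i] for i in idx]))
        labels[idx] = c * k2 + fine
    return labels

docs = ["stocks rallied as markets rose", "the striker scored two goals",
        "bond yields fell sharply", "the keeper saved a penalty"]
aux = ["finance news", "sports report", "finance news", "sports report"]
print(two_stage_cluster(docs, aux))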
|
1604.06246
|
Mike Thelwall
|
Mike Thelwall
|
Are there too many uncited articles? Zero inflated variants of the
discretised lognormal and hooked power law distributions
|
Thelwall, M. (in press) Journal of Informetrics. Software and data
available here: https://dx.doi.org/10.6084/m9.figshare.3186997.v1
|
Journal of Informetrics 10 (2016), pp. 622-633
|
10.1016/j.joi.2016.04.014
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although statistical models fit many citation data sets reasonably well with
the best fitting models being the hooked power law and discretised lognormal
distribution, the fits are rarely close. One possible reason is that there
might be more uncited articles than would be predicted by any model if some
articles are inherently uncitable. Using data from 23 different Scopus
categories, this article tests the assumption that removing a proportion of
uncited articles from a citation dataset allows statistical distributions to
have much closer fits. It also introduces two new models, the zero inflated
discretised lognormal distribution and the zero inflated hooked power law
distribution, together with algorithms to fit them. In all 23 cases, the zero inflated
version of the discretised lognormal distribution was an improvement on the
standard version and in 15 out of 23 cases the zero inflated version of the
hooked power law was an improvement on the standard version. Without zero
inflation the discretised lognormal models fit the data better than the hooked
power law distribution 6 out of 23 times and with it, the discretised lognormal
models fit the data better than the hooked power law distribution 9 out of 23
times. Apparently uncitable articles seem to occur due to the presence of
academic-related magazines in Scopus categories. In conclusion, future citation
analysis and research indicators should take into account uncitable articles,
and the best fitting distribution for sets of citation counts from a single
subject and year is either the zero inflated discretised lognormal or zero
inflated hooked power law.
|
[
{
"version": "v1",
"created": "Thu, 21 Apr 2016 10:29:28 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Thelwall",
"Mike",
""
]
] |
TITLE: Are there too many uncited articles? Zero inflated variants of the
discretised lognormal and hooked power law distributions
ABSTRACT: Although statistical models fit many citation data sets reasonably well with
the best fitting models being the hooked power law and discretised lognormal
distribution, the fits are rarely close. One possible reason is that there
might be more uncited articles than would be predicted by any model if some
articles are inherently uncitable. Using data from 23 different Scopus
categories, this article tests the assumption that removing a proportion of
uncited articles from a citation dataset allows statistical distributions to
have much closer fits. It also introduces two new models, the zero inflated
discretised lognormal distribution and the zero inflated hooked power law
distribution, together with algorithms to fit them. In all 23 cases, the zero inflated
version of the discretised lognormal distribution was an improvement on the
standard version and in 15 out of 23 cases the zero inflated version of the
hooked power law was an improvement on the standard version. Without zero
inflation the discretised lognormal models fit the data better than the hooked
power law distribution 6 out of 23 times and with it, the discretised lognormal
models fit the data better than the hooked power law distribution 9 out of 23
times. Apparently uncitable articles seem to occur due to the presence of
academic-related magazines in Scopus categories. In conclusion, future citation
analysis and research indicators should take into account uncitable articles,
and the best fitting distribution for sets of citation counts from a single
subject and year is either the zero inflated discretised lognormal or zero
inflated hooked power law.
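
For concreteness, a zero inflated discretised lognormal can be fitted by maximum likelihood in a few lines of SciPy. The discretisation convention below (a lognormal evaluated at k+1) is one of several in use and is an assumption, as are the optimiser settings and the stand-in data.

import numpy as np
from scipy.optimize import minimize

def zi_dln_nll(params, counts, max_k=100000):
    # with probability p an article is uncitable (count 0); otherwise
    # counts follow a discretised lognormal
    logit_p, mu, log_sigma = params
    p = 1.0 / (1.0 + np.exp(-logit_p))
    sigma = np.exp(log_sigma)
    k = np.arange(max_k + 1)
    log_unnorm = -((np.log(k + 1.0) - mu) ** 2) / (2.0 * sigma ** 2)
    pmf = np.exp(log_unnorm - log_unnorm.max())
    pmf /= pmf.sum()
    probs = (1.0 - p) * pmf
    probs[0] += p          # the zero inflation adds extra mass at zero
    return -np.log(probs[counts]).sum()

counts = np.random.default_rng(0).poisson(3, size=2000)  # stand-in counts
fit = minimize(zi_dln_nll, x0=[0.0, 1.0, 0.0], args=(counts,),
               method="Nelder-Mead")
logit_p, mu, log_sigma = fit.x
print("p =", 1 / (1 + np.exp(-logit_p)), "mu =", mu, "sigma =", np.exp(log_sigma))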
|
1605.02150
|
Elaheh ShafieiBavani
|
Elaheh ShafieiBavani, Mohammad Ebrahimi, Raymond Wong, Fang Chen
|
On Improving Informativity and Grammaticality for Multi-Sentence
Compression
|
19 pages
| null | null |
UNSW-CSE-TR-201517
|
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi Sentence Compression (MSC) is of great value to many real-world
applications, such as guided microblog summarization, opinion summarization and
newswire summarization. Recently, word graph-based approaches have been
proposed and become popular in MSC. Their key assumption is that redundancy
among a set of related sentences provides a reliable way to generate
informative and grammatical sentences. In this paper, we propose an effective
approach to enhance the word graph-based MSC and tackle the issue that most of
the state-of-the-art MSC approaches are confronted with: i.e., improving both
informativity and grammaticality at the same time. Our approach consists of
three main components: (1) a merging method based on Multiword Expressions
(MWE); (2) a mapping strategy based on synonymy between words; (3) a re-ranking
step to identify the best compression candidates generated using a POS-based
language model (POS-LM). We demonstrate the effectiveness of this novel
approach using a dataset made of clusters of English newswire sentences. The
observed improvements on informativity and grammaticality of the generated
compressions show that our approach is superior to state-of-the-art MSC
methods.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2016 06:39:57 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"ShafieiBavani",
"Elaheh",
""
],
[
"Ebrahimi",
"Mohammad",
""
],
[
"Wong",
"Raymond",
""
],
[
"Chen",
"Fang",
""
]
] |
TITLE: On Improving Informativity and Grammaticality for Multi-Sentence
Compression
ABSTRACT: Multi Sentence Compression (MSC) is of great value to many real-world
applications, such as guided microblog summarization, opinion summarization and
newswire summarization. Recently, word graph-based approaches have been
proposed and become popular in MSC. Their key assumption is that redundancy
among a set of related sentences provides a reliable way to generate
informative and grammatical sentences. In this paper, we propose an effective
approach to enhance the word graph-based MSC and tackle the issue that most of
the state-of-the-art MSC approaches are confronted with: i.e., improving both
informativity and grammaticality at the same time. Our approach consists of
three main components: (1) a merging method based on Multiword Expressions
(MWE); (2) a mapping strategy based on synonymy between words; (3) a re-ranking
step to identify the best compression candidates generated using a POS-based
language model (POS-LM). We demonstrate the effectiveness of this novel
approach using a dataset made of clusters of English newswire sentences. The
observed improvements on informativity and grammaticality of the generated
compressions show that our approach is superior to state-of-the-art MSC
methods.
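
The word-graph baseline that such systems build on can be sketched with networkx; the version below merges identical surface words, weights edges by inverse transition frequency, and takes a shortest start-to-end path. It omits the paper's MWE merging, synonymy mapping, and POS-LM re-ranking, and the tokenisation is deliberately naive.

import networkx as nx

def word_graph_compression(sentences):
    # build a word graph over the related sentences and return the
    # cheapest <s> ... </s> path as the compression
    G = nx.DiGraph()
    for sent in sentences:
        tokens = ["<s>"] + sent.lower().split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            if G.has_edge(a, b):
                G[a][b]["count"] += 1
            else:
                G.add_edge(a, b, count=1)
    for a, b, d in G.edges(data=True):
        d["w"] = 1.0 / d["count"]  # frequent transitions become cheap
    path = nx.shortest_path(G, "<s>", "</s>", weight="w")
    return " ".join(path[1:-1])

sents = ["the cat sat on the mat",
         "a cat sat on a mat",
         "the cat was on the mat"]
print(word_graph_compression(sents))  # "the cat sat on the mat"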
|
1605.02216
|
Sixin Zhang Sixin Zhang
|
Sixin Zhang
|
Distributed stochastic optimization for deep learning (thesis)
|
This is the author's thesis, written under the supervision of Yann LeCun.
Part of the results are based on the paper arXiv:1412.6651 in collaboration
with Anna Choromanska and Yann LeCun
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of how to distribute the training of large-scale deep
learning models in the parallel computing environment. We propose a new
distributed stochastic optimization method called Elastic Averaging SGD
(EASGD). We analyze the convergence rate of the EASGD method in the synchronous
scenario and compare its stability condition with the existing ADMM method in
the round-robin scheme. An asynchronous and momentum variant of the EASGD
method is applied to train deep convolutional neural networks for image
classification on the CIFAR and ImageNet datasets. Our approach accelerates the
training and furthermore achieves better test accuracy. It also requires a much
smaller amount of communication than other common baseline approaches such as
the DOWNPOUR method.
We then investigate the limit in speedup of the initial and the asymptotic
phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find
that the spread of the input data distribution has a big impact on their
initial convergence rate and stability region. We also find a surprising
connection between the momentum SGD and the EASGD method with a negative moving
average rate. A non-convex case is also studied to understand when EASGD can
get trapped by a saddle point.
Finally, we scale up the EASGD method by using a tree structured network
topology. We show empirically its advantage and challenge. We also establish a
connection between the EASGD and the DOWNPOUR method with the classical Jacobi
and the Gauss-Seidel method, thus unifying a class of distributed stochastic
optimization methods.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2016 16:55:22 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Zhang",
"Sixin",
""
]
] |
TITLE: Distributed stochastic optimization for deep learning (thesis)
ABSTRACT: We study the problem of how to distribute the training of large-scale deep
learning models in the parallel computing environment. We propose a new
distributed stochastic optimization method called Elastic Averaging SGD
(EASGD). We analyze the convergence rate of the EASGD method in the synchronous
scenario and compare its stability condition with the existing ADMM method in
the round-robin scheme. An asynchronous and momentum variant of the EASGD
method is applied to train deep convolutional neural networks for image
classification on the CIFAR and ImageNet datasets. Our approach accelerates the
training and furthermore achieves better test accuracy. It also requires a much
smaller amount of communication than other common baseline approaches such as
the DOWNPOUR method.
We then investigate the limit in speedup of the initial and the asymptotic
phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find
that the spread of the input data distribution has a big impact on their
initial convergence rate and stability region. We also find a surprising
connection between the momentum SGD and the EASGD method with a negative moving
average rate. A non-convex case is also studied to understand when EASGD can
get trapped by a saddle point.
Finally, we scale up the EASGD method by using a tree structured network
topology. We show empirically its advantage and challenge. We also establish a
connection between the EASGD and the DOWNPOUR method with the classical Jacobi
and the Gauss-Seidel method, thus unifying a class of distributed stochastic
optimization methods.
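
The synchronous elastic update itself is compact; below is a NumPy toy of the rule from the companion paper (each worker takes an SGD step plus an elastic pull toward a center variable, which in turn moves toward the workers), with arbitrary step sizes and a toy objective.

import numpy as np

def easgd_step(workers, center, grads, eta, rho):
    # alpha = eta * rho is the elastic coupling strength
    alpha = eta * rho
    diffs = [w - center for w in workers]
    workers = [w - eta * g - alpha * d
               for w, g, d in zip(workers, grads, diffs)]
    center = center + alpha * sum(diffs)
    return workers, center

rng = np.random.default_rng(0)
target = np.ones(5)
workers = [rng.normal(size=5) for _ in range(4)]
center = np.zeros(5)
for _ in range(500):
    # noisy gradients of the toy objective ||w - target||^2 per worker
    grads = [2 * (w - target) + 0.1 * rng.normal(size=5) for w in workers]
    workers, center = easgd_step(workers, center, grads, eta=0.05, rho=0.5)
print(np.round(center, 2))  # approaches the target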
|
1605.02240
|
Anish Acharya
|
Anish Acharya, Uddipan Mukherjee, Charless Fowlkes
|
On Image segmentation using Fractional Gradients-Learning Model
Parameters using Approximate Marginal Inference
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimates of image gradients play a ubiquitous role in image segmentation and
classification problems since gradients directly relate to the boundaries or
the edges of a scene. This paper proposes a unified approach to gradient
estimation based on fractional calculus that is computationally cheap and
readily applicable to any existing algorithm that relies on image gradients. We
show experiments on edge detection and image segmentation on the Stanford
Backgrounds Dataset where these improved local gradients outperform the state of
the art, achieving a performance of 79.2% average accuracy.
|
[
{
"version": "v1",
"created": "Sat, 7 May 2016 20:12:12 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Acharya",
"Anish",
""
],
[
"Mukherjee",
"Uddipan",
""
],
[
"Fowlkes",
"Charless",
""
]
] |
TITLE: On Image segmentation using Fractional Gradients-Learning Model
Parameters using Approximate Marginal Inference
ABSTRACT: Estimates of image gradients play a ubiquitous role in image segmentation and
classification problems since gradients directly relate to the boundaries or
the edges of a scene. This paper proposes a unified approach to gradient
estimation based on fractional calculus that is computationally cheap and
readily applicable to any existing algorithm that relies on image gradients. We
show experiments on edge detection and image segmentation on the Stanford
Backgrounds Dataset where these improved local gradients outperform the state of
the art, achieving a performance of 79.2% average accuracy.
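
For intuition, one common discretisation of a fractional-order derivative is the Grünwald-Letnikov series; the sketch below applies it along one image axis. The truncation length and this particular discretisation are assumptions, since the abstract does not pin them down.

import numpy as np
from scipy.special import binom

def gl_fractional_gradient(img, alpha=0.5, n_terms=15):
    # Gruenwald-Letnikov: D^a f(x) ~ sum_k (-1)^k C(a, k) f(x - k);
    # alpha = 1 recovers the usual backward finite difference
    coeffs = np.array([(-1.0) ** k * binom(alpha, k) for k in range(n_terms)])
    out = np.zeros_like(img, dtype=float)
    for k, c in enumerate(coeffs):
        out += c * np.roll(img, k, axis=-1)  # wraps at the border; real code would pad
    return out

img = np.zeros((8, 8)); img[:, 4:] = 1.0  # a vertical step edge
print(np.round(gl_fractional_gradient(img, alpha=0.5)[0], 3))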
|
1605.02260
|
Saihui Hou
|
Saihui Hou, Zilei Wang, Feng Wu
|
Deeply Exploit Depth Information for Object Detection
|
9 pages, 3 figures, and 4 tables. Accepted by CVPR2016 Workshops
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the issue of how to more effectively coordinate depth
with RGB, aiming at boosting the performance of RGB-D object detection.
Particularly, we investigate two primary ideas under the CNN model: property
derivation and property fusion. Firstly, we propose that the depth can be
utilized not only as a type of extra information besides RGB but also to derive
more visual properties for comprehensively describing the objects of interest.
So a two-stage learning framework consisting of property derivation and fusion
is constructed. Here the properties can be derived either from the provided
color/depth or their pairs (e.g. the geometry contour adopted in this paper).
Secondly, we explore the fusion method of different properties in feature
learning, which, under the CNN model, boils down to the question of from which
layer the properties should be fused together. The analysis shows that different
semantic properties should be learned separately and combined before passing
into the final classifier. Such a detection scheme is in accordance with the
mechanism of the primary visual cortex (V1) in the brain. We experimentally
evaluate the proposed method on the challenging dataset, and have achieved
state-of-the-art performance.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2016 01:56:50 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Hou",
"Saihui",
""
],
[
"Wang",
"Zilei",
""
],
[
"Wu",
"Feng",
""
]
] |
TITLE: Deeply Exploit Depth Information for Object Detection
ABSTRACT: This paper addresses the issue of how to more effectively coordinate
depth with RGB, aiming at boosting the performance of RGB-D object detection.
Particularly, we investigate two primary ideas under the CNN model: property
derivation and property fusion. Firstly, we propose that the depth can be
utilized not only as a type of extra information besides RGB but also to derive
more visual properties for comprehensively describing the objects of interest.
So a two-stage learning framework consisting of property derivation and fusion
is constructed. Here the properties can be derived either from the provided
color/depth or their pairs (e.g. the geometry contour adopted in this paper).
Secondly, we explore the fusion method of different properties in feature
learning, which, under the CNN model, boils down to the question of from which
layer the properties should be fused together. The analysis shows that different
semantic properties should be learned separately and combined before passing
into the final classifier. Such a detection scheme is in accordance with the
mechanism of the primary visual cortex (V1) in the brain. We experimentally
evaluate the proposed method on the challenging dataset, and have achieved
state-of-the-art performance.
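
The "learn each property separately, fuse before the final classifier" pattern can be sketched in PyTorch as parallel branches whose pooled features are concatenated; branch depth, channel counts, and the number of properties below are arbitrary stand-ins.

import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    # one small conv branch per property map (e.g. RGB, depth, contour),
    # fused only at the classifier
    def __init__(self, n_branches=3, n_classes=10):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(n_branches)])
        self.classifier = nn.Linear(8 * n_branches, n_classes)

    def forward(self, xs):  # xs: list of (B, 1, H, W) property maps
        feats = [b(x) for b, x in zip(self.branches, xs)]
        return self.classifier(torch.cat(feats, dim=1))

net = LateFusionNet()
xs = [torch.rand(2, 1, 32, 32) for _ in range(3)]
print(net(xs).shape)  # torch.Size([2, 10])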
|
1605.02289
|
Zhun Zhong
|
Zhun Zhong, Songzhi Su, Donglin Cao, Shaozi Li
|
Detecting Ground Control Points via Convolutional Neural Network for
Stereo Matching
|
9 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we present a novel approach to detect ground control points
(GCPs) for the stereo matching problem. First of all, we train a convolutional
neural network (CNN) on a large stereo set, and compute the matching confidence
of each pixel by using the trained CNN model. Secondly, we present a ground
control points selection scheme according to the maximum matching confidence of
each pixel. Finally, the selected GCPs are used to refine the matching costs,
and we apply the new matching costs to perform optimization with semi-global
matching algorithm for improving the final disparity maps. We evaluate our
approach on the KITTI 2012 stereo benchmark dataset. Our experiments show that
the proposed approach significantly improves the accuracy of disparity maps.
|
[
{
"version": "v1",
"created": "Sun, 8 May 2016 07:38:40 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Zhong",
"Zhun",
""
],
[
"Su",
"Songzhi",
""
],
[
"Cao",
"Donglin",
""
],
[
"Li",
"Shaozi",
""
]
] |
TITLE: Detecting Ground Control Points via Convolutional Neural Network for
Stereo Matching
ABSTRACT: In this paper, we present a novel approach to detect ground control points
(GCPs) for the stereo matching problem. First of all, we train a convolutional
neural network (CNN) on a large stereo set, and compute the matching confidence
of each pixel by using the trained CNN model. Secondly, we present a ground
control points selection scheme according to the maximum matching confidence of
each pixel. Finally, the selected GCPs are used to refine the matching costs,
and we apply the new matching costs to perform optimization with semi-global
matching algorithm for improving the final disparity maps. We evaluate our
approach on the KITTI 2012 stereo benchmark dataset. Our experiments show that
the proposed approach significantly improves the accuracy of disparity maps.
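
A NumPy sketch of the selection-and-refinement step (the threshold and cost adjustment are illustrative, and a random volume stands in for the trained CNN's confidences):

import numpy as np

def select_gcps(confidence, tau=0.98):
    # a pixel becomes a ground control point when its best matching
    # confidence across disparities is high enough
    best = confidence.max(axis=2)
    disp = confidence.argmax(axis=2)
    return best > tau, disp

def refine_costs(cost, gcp, disp, penalty=10.0):
    # pin each GCP's disparity by making all other disparities costly
    cost = cost.copy()
    ys, xs = np.nonzero(gcp)
    cost[ys, xs, :] += penalty
    cost[ys, xs, disp[ys, xs]] -= 2 * penalty
    return cost

rng = np.random.default_rng(0)
confidence = rng.random((48, 64, 32))  # H x W x disparity hypotheses
cost = 1.0 - confidence                # stand-in matching costs
gcp, disp = select_gcps(confidence)
cost = refine_costs(cost, gcp, disp)
print(int(gcp.sum()), "ground control points selected")

The refined cost volume would then be passed to a semi-global matching optimizer.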
|
1605.02464
|
Liqian Ma
|
Liqian Ma, Hong Liu, Liang Hu, Can Wang, Qianru Sun
|
Orientation Driven Bag of Appearances for Person Re-identification
|
13 pages, 15 figures, 3 tables, submitted to IEEE Transactions on
Circuits and Systems for Video Technology
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Person re-identification (re-id) consists of associating individuals across a
camera network, which is valuable for intelligent video surveillance and has
drawn wide attention. Although person re-identification research is making
progress, it still faces some challenges such as varying poses, illumination
and viewpoints. For feature representation in re-identification, existing works
usually use low-level descriptors which do not take full advantage of body
structure information, resulting in low representation ability.
To solve this problem, this paper proposes the mid-level
body-structure based feature representation (BSFR) which introduces body
structure pyramid for codebook learning and feature pooling in the vertical
direction of human body. Besides, varying viewpoints in the horizontal
direction of human body usually causes the data missing problem, $i.e.$, the
appearances obtained in different orientations of the identical person could
vary significantly. To address this problem, the orientation driven bag of
appearances (ODBoA) is proposed to utilize person orientation information
extracted by an orientation estimation technique. To properly evaluate the proposed
approach, we introduce a new re-identification dataset (Market-1203) based on
the Market-1501 dataset and propose a new re-identification dataset (PKU-Reid).
Both datasets contain multiple images captured in different body orientations
for each person. Experimental results on three public datasets and two proposed
datasets demonstrate the superiority of the proposed approach, indicating the
effectiveness of body structure and orientation information for improving
re-identification performance.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2016 08:25:33 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Ma",
"Liqian",
""
],
[
"Liu",
"Hong",
""
],
[
"Hu",
"Liang",
""
],
[
"Wang",
"Can",
""
],
[
"Sun",
"Qianru",
""
]
] |
TITLE: Orientation Driven Bag of Appearances for Person Re-identification
ABSTRACT: Person re-identification (re-id) consists of associating individuals across
a camera network, which is valuable for intelligent video surveillance and has
drawn wide attention. Although person re-identification research is making
progress, it still faces some challenges such as varying poses, illumination
and viewpoints. For feature representation in re-identification, existing works
usually use low-level descriptors which do not take full advantage of body
structure information, resulting in low representation ability.
To solve this problem, this paper proposes the mid-level
body-structure based feature representation (BSFR) which introduces body
structure pyramid for codebook learning and feature pooling in the vertical
direction of human body. Besides, varying viewpoints in the horizontal
direction of human body usually causes the data missing problem, $i.e.$, the
appearances obtained in different orientations of the identical person could
vary significantly. To address this problem, the orientation driven bag of
appearances (ODBoA) is proposed to utilize person orientation information
extracted by an orientation estimation technique. To properly evaluate the proposed
approach, we introduce a new re-identification dataset (Market-1203) based on
the Market-1501 dataset and propose a new re-identification dataset (PKU-Reid).
Both datasets contain multiple images captured in different body orientations
for each person. Experimental results on three public datasets and two proposed
datasets demonstrate the superiority of the proposed approach, indicating the
effectiveness of body structure and orientation information for improving
re-identification performance.
|
1605.02559
|
Olivier Colliot
|
Linda Marrakchi-Kacem (ARAMIS), Alexandre Vignaud (NEUROSPIN), Julien
Sein (CRMBM), Johanne Germain (ARAMIS), Thomas R Henry (CMRR), Cyril Poupon
(NEUROSPIN), Lucie Hertz-Pannier, St\'ephane Leh\'ericy (CENIR, ICM), Olivier
Colliot (ARAMIS, ICM), Pierre-Fran\c{c}ois Van de Moortele (CMRR), Marie
Chupin (ARAMIS, ICM)
|
Robust imaging of hippocampal inner structure at 7T: in vivo acquisition
protocol and methodological choices
| null |
Magnetic Resonance Materials in Physics, Biology and Medicine,
Springer Verlag, 2016
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
OBJECTIVE: Motion-robust multi-slab imaging of hippocampal inner structure in
vivo at 7T. MATERIALS AND METHODS: Motion is a crucial issue for ultra-high
resolution imaging, such as can be achieved with 7T MRI. An acquisition
protocol was designed for imaging hippocampal inner structure at 7T. It relies
on a compromise between anatomical detail visibility and robustness to motion.
In order to reduce acquisition time and motion artifacts, the full slab
covering the hippocampus was split into separate slabs with lower acquisition
time. A robust registration approach was implemented to combine the acquired
slabs within a final 3D-consistent high-resolution slab covering the whole
hippocampus. Evaluation was performed on 50 subjects overall, made of three
groups of subjects acquired using three acquisition settings; it focused on
three issues: visibility of hippocampal inner structure, robustness to motion
artifacts and registration procedure performance. RESULTS: Overall, T2-weighted
acquisitions with interleaved slabs proved robust. Multi-slab registration
yielded high quality datasets in 96% of the subjects, thus compatible with
further analyses of hippocampal inner structure. CONCLUSION: Multi-slab
acquisition and registration setting is efficient for reducing acquisition time
and consequently motion artifacts for ultra-high resolution imaging of the
inner structure of the hippocampus.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2016 12:38:44 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Marrakchi-Kacem",
"Linda",
"",
"ARAMIS"
],
[
"Vignaud",
"Alexandre",
"",
"NEUROSPIN"
],
[
"Sein",
"Julien",
"",
"CRMBM"
],
[
"Germain",
"Johanne",
"",
"ARAMIS"
],
[
"Henry",
"Thomas R",
"",
"CMRR"
],
[
"Poupon",
"Cyril",
"",
"NEUROSPIN"
],
[
"Hertz-Pannier",
"Lucie",
"",
"CENIR, ICM"
],
[
"Lehéricy",
"Stéphane",
"",
"CENIR, ICM"
],
[
"Colliot",
"Olivier",
"",
"ARAMIS, ICM"
],
[
"Van de Moortele",
"Pierre-François",
"",
"CMRR"
],
[
"Chupin",
"Marie",
"",
"ARAMIS, ICM"
]
] |
TITLE: Robust imaging of hippocampal inner structure at 7T: in vivo acquisition
protocol and methodological choices
ABSTRACT: OBJECTIVE: Motion-robust multi-slab imaging of hippocampal inner structure in
vivo at 7T. MATERIALS AND METHODS: Motion is a crucial issue for ultra-high
resolution imaging, such as can be achieved with 7T MRI. An acquisition
protocol was designed for imaging hippocampal inner structure at 7T. It relies
on a compromise between anatomical detail visibility and robustness to motion.
In order to reduce acquisition time and motion artifacts, the full slab
covering the hippocampus was split into separate slabs with lower acquisition
time. A robust registration approach was implemented to combine the acquired
slabs within a final 3D-consistent high-resolution slab covering the whole
hippocampus. Evaluation was performed on 50 subjects overall, made of three
groups of subjects acquired using three acquisition settings; it focused on
three issues: visibility of hippocampal inner structure, robustness to motion
artifacts and registration procedure performance. RESULTS: Overall, T2-weighted
acquisitions with interleaved slabs proved robust. Multi-slab registration
yielded high quality datasets in 96% of the subjects, thus compatible with
further analyses of hippocampal inner structure. CONCLUSION: Multi-slab
acquisition and registration setting is efficient for reducing acquisition time
and consequently motion artifacts for ultra-high resolution imaging of the
inner structure of the hippocampus.
|
1605.02560
|
Zi Wang
|
Zi Wang, Vyacheslav Karolis, Chiara Nosarti, Giovanni Montana
|
Studying the brain from adolescence to adulthood through sparse
multi-view matrix factorisations
|
Submitted to the 6th International Workshop on Pattern Recognition in
Neuroimaging (PRNI)
| null | null | null |
stat.AP cs.CV q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Men and women differ in specific cognitive abilities and in the expression of
several neuropsychiatric conditions. Such findings could be attributed to sex
hormones, brain differences, as well as a number of environmental variables.
Existing research on identifying sex-related differences in brain structure
has predominantly used cross-sectional studies to investigate, for instance,
differences in average gray matter volumes (GMVs). In this article we explore
the potential of a recently proposed multi-view matrix factorisation (MVMF)
methodology to study structural brain changes in men and women that occur from
adolescence to adulthood. MVMF is a multivariate variance decomposition
technique that extends principal component analysis to "multi-view" datasets,
i.e. where multiple and related groups of observations are available. In this
application, each view represents a different age group. MVMF identifies latent
factors explaining shared and age-specific contributions to the observed
overall variability in GMVs over time. These latent factors can be used to
produce low-dimensional visualisations of the data that emphasise age-specific
effects once the shared effects have been accounted for. The analysis of two
datasets consisting of individuals born prematurely as well as healthy controls
provides evidence to suggest that the separation between males and females
becomes increasingly larger as the brain transitions from adolescence to
adulthood. We report on specific brain regions associated with these variance
effects.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2016 12:40:22 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"Wang",
"Zi",
""
],
[
"Karolis",
"Vyacheslav",
""
],
[
"Nosarti",
"Chiara",
""
],
[
"Montana",
"Giovanni",
""
]
] |
TITLE: Studying the brain from adolescence to adulthood through sparse
multi-view matrix factorisations
ABSTRACT: Men and women differ in specific cognitive abilities and in the expression of
several neuropsychiatric conditions. Such findings could be attributed to sex
hormones, brain differences, as well as a number of environmental variables.
Existing research on identifying sex-related differences in brain structure
has predominantly used cross-sectional studies to investigate, for instance,
differences in average gray matter volumes (GMVs). In this article we explore
the potential of a recently proposed multi-view matrix factorisation (MVMF)
methodology to study structural brain changes in men and women that occur from
adolescence to adulthood. MVMF is a multivariate variance decomposition
technique that extends principal component analysis to "multi-view" datasets,
i.e. where multiple and related groups of observations are available. In this
application, each view represents a different age group. MVMF identifies latent
factors explaining shared and age-specific contributions to the observed
overall variability in GMVs over time. These latent factors can be used to
produce low-dimensional visualisations of the data that emphasise age-specific
effects once the shared effects have been accounted for. The analysis of two
datasets consisting of individuals born prematurely as well as healthy controls
provides evidence to suggest that the separation between males and females
becomes increasingly larger as the brain transitions from adolescence to
adulthood. We report on specific brain regions associated with these variance
effects.
|
1605.02633
|
Chong You
|
Chong You, Chun-Guang Li, Daniel P. Robinson, Rene Vidal
|
Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace
Clustering
|
15 pages, 6 figures, accepted to CVPR 2016 for oral presentation
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art subspace clustering methods are based on expressing each
data point as a linear combination of other data points while regularizing the
matrix of coefficients with $\ell_1$, $\ell_2$ or nuclear norms. $\ell_1$
regularization is guaranteed to give a subspace-preserving affinity (i.e.,
there are no connections between points from different subspaces) under broad
theoretical conditions, but the clusters may not be connected. $\ell_2$ and
nuclear norm regularization often improve connectivity, but give a
subspace-preserving affinity only for independent subspaces. Mixed $\ell_1$,
$\ell_2$ and nuclear norm regularizations offer a balance between the
subspace-preserving and connectedness properties, but this comes at the cost of
increased computational complexity. This paper studies the geometry of the
elastic net regularizer (a mixture of the $\ell_1$ and $\ell_2$ norms) and uses
it to derive a provably correct and scalable active set method for finding the
optimal coefficients. Our geometric analysis also provides a theoretical
justification and a geometric interpretation for the balance between the
connectedness (due to $\ell_2$ regularization) and subspace-preserving (due to
$\ell_1$ regularization) properties for elastic net subspace clustering. Our
experiments show that the proposed active set method not only achieves
state-of-the-art clustering performance, but also efficiently handles
large-scale datasets.
|
[
{
"version": "v1",
"created": "Mon, 9 May 2016 15:49:36 GMT"
}
] | 2016-05-10T00:00:00 |
[
[
"You",
"Chong",
""
],
[
"Li",
"Chun-Guang",
""
],
[
"Robinson",
"Daniel P.",
""
],
[
"Vidal",
"Rene",
""
]
] |
TITLE: Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace
Clustering
ABSTRACT: State-of-the-art subspace clustering methods are based on expressing each
data point as a linear combination of other data points while regularizing the
matrix of coefficients with $\ell_1$, $\ell_2$ or nuclear norms. $\ell_1$
regularization is guaranteed to give a subspace-preserving affinity (i.e.,
there are no connections between points from different subspaces) under broad
theoretical conditions, but the clusters may not be connected. $\ell_2$ and
nuclear norm regularization often improve connectivity, but give a
subspace-preserving affinity only for independent subspaces. Mixed $\ell_1$,
$\ell_2$ and nuclear norm regularizations offer a balance between the
subspace-preserving and connectedness properties, but this comes at the cost of
increased computational complexity. This paper studies the geometry of the
elastic net regularizer (a mixture of the $\ell_1$ and $\ell_2$ norms) and uses
it to derive a provably correct and scalable active set method for finding the
optimal coefficients. Our geometric analysis also provides a theoretical
justification and a geometric interpretation for the balance between the
connectedness (due to $\ell_2$ regularization) and subspace-preserving (due to
$\ell_1$ regularization) properties for elastic net subspace clustering. Our
experiments show that the proposed active set method not only achieves
state-of-the-art clustering performance, but also efficiently handles
large-scale datasets.
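
A small scikit-learn sketch of elastic net subspace clustering, with a generic coordinate-descent solver standing in for the paper's oracle-based active-set method (so it will not scale the same way; parameter names follow sklearn, not the paper):

import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.cluster import SpectralClustering

def elastic_net_affinity(X, l1_ratio=0.9, alpha=0.05):
    # express every point as an elastic net combination of the others and
    # symmetrise the coefficient magnitudes into an affinity matrix
    n = X.shape[0]
    C = np.zeros((n, n))
    for j in range(n):
        idx = np.arange(n) != j
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                           fit_intercept=False, max_iter=5000)
        model.fit(X[idx].T, X[j])
        C[j, idx] = model.coef_
    return np.abs(C) + np.abs(C).T

rng = np.random.default_rng(0)
# two 2-dimensional subspaces of R^10, thirty points each
bases = [np.linalg.qr(rng.normal(size=(10, 2)))[0] for _ in range(2)]
X = np.vstack([(B @ rng.normal(size=(2, 30))).T for B in bases])
labels = SpectralClustering(n_clusters=2, affinity="precomputed") \
    .fit_predict(elastic_net_affinity(X))
print(labels)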
|
1506.03101
|
Bo Dai
|
Bo Dai, Niao He, Hanjun Dai, Le Song
|
Provable Bayesian Inference via Particle Mirror Descent
|
38 pages, 26 figures
| null | null | null |
cs.LG stat.CO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bayesian methods are appealing in their flexibility in modeling complex data
and ability in capturing uncertainty in parameters. However, when Bayes' rule
does not result in a tractable closed form, most approximate inference algorithms
lack either scalability or rigorous guarantees. To tackle this challenge, we
propose a simple yet provable algorithm, \emph{Particle Mirror Descent} (PMD),
to iteratively approximate the posterior density. PMD is inspired by stochastic
functional mirror descent where one descends in the density space using a small
batch of data points at each iteration, and by particle filtering where one
uses samples to approximate a function. We prove a result of the first kind:
with $m$ particles, PMD provides a posterior density estimator that converges
in terms of $KL$-divergence to the true posterior at rate $O(1/\sqrt{m})$. We
demonstrate competitive empirical performances of PMD compared to several
approximate inference algorithms in mixture models, logistic regression, sparse
Gaussian processes and latent Dirichlet allocation on large scale datasets.
|
[
{
"version": "v1",
"created": "Tue, 9 Jun 2015 20:57:37 GMT"
},
{
"version": "v2",
"created": "Tue, 3 May 2016 19:06:18 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2016 22:56:13 GMT"
}
] | 2016-05-09T00:00:00 |
[
[
"Dai",
"Bo",
""
],
[
"He",
"Niao",
""
],
[
"Dai",
"Hanjun",
""
],
[
"Song",
"Le",
""
]
] |
TITLE: Provable Bayesian Inference via Particle Mirror Descent
ABSTRACT: Bayesian methods are appealing in their flexibility in modeling complex data
and ability in capturing uncertainty in parameters. However, when Bayes' rule
does not result in a tractable closed form, most approximate inference algorithms
lack either scalability or rigorous guarantees. To tackle this challenge, we
propose a simple yet provable algorithm, \emph{Particle Mirror Descent} (PMD),
to iteratively approximate the posterior density. PMD is inspired by stochastic
functional mirror descent where one descends in the density space using a small
batch of data points at each iteration, and by particle filtering where one
uses samples to approximate a function. We prove a result of the first kind:
with $m$ particles, PMD provides a posterior density estimator that converges
in terms of $KL$-divergence to the true posterior at rate $O(1/\sqrt{m})$. We
demonstrate competitive empirical performances of PMD compared to several
approximate inference algorithms in mixture models, logistic regression, sparse
Gaussian processes and latent Dirichlet allocation on large scale datasets.
|
1512.04407
|
Arjun Chandrasekaran
|
Arjun Chandrasekaran, Ashwin K. Vijayakumar, Stanislaw Antol, Mohit
Bansal, Dhruv Batra, C. Lawrence Zitnick and Devi Parikh
|
We Are Humor Beings: Understanding and Predicting Visual Humor
|
17 pages, 16 figures, 3 tables
| null | null | null |
cs.CV cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humor is an integral part of human lives. Despite being tremendously
impactful, it is perhaps surprising that we do not have a detailed
understanding of humor yet. As interactions between humans and AI systems
increase, it is imperative that these systems are taught to understand
subtleties of human expressions such as humor. In this work, we are interested
in the question - what content in a scene causes it to be funny? As a first
step towards understanding visual humor, we analyze the humor manifested in
abstract scenes and design computational models for them. We collect two
datasets of abstract scenes that facilitate the study of humor at both the
scene-level and the object-level. We analyze the funny scenes and explore the
different types of humor depicted in them via human studies. We model two tasks
that we believe demonstrate an understanding of some aspects of visual humor.
The tasks involve predicting the funniness of a scene and altering the
funniness of a scene. We show that our models perform well quantitatively, and
qualitatively through human studies. Our datasets are publicly available.
|
[
{
"version": "v1",
"created": "Mon, 14 Dec 2015 16:59:35 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Dec 2015 02:12:49 GMT"
},
{
"version": "v3",
"created": "Sun, 10 Apr 2016 22:15:43 GMT"
},
{
"version": "v4",
"created": "Thu, 5 May 2016 21:36:13 GMT"
}
] | 2016-05-09T00:00:00 |
[
[
"Chandrasekaran",
"Arjun",
""
],
[
"Vijayakumar",
"Ashwin K.",
""
],
[
"Antol",
"Stanislaw",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Parikh",
"Devi",
""
]
] |
TITLE: We Are Humor Beings: Understanding and Predicting Visual Humor
ABSTRACT: Humor is an integral part of human lives. Despite being tremendously
impactful, it is perhaps surprising that we do not have a detailed
understanding of humor yet. As interactions between humans and AI systems
increase, it is imperative that these systems are taught to understand
subtleties of human expressions such as humor. In this work, we are interested
in the question - what content in a scene causes it to be funny? As a first
step towards understanding visual humor, we analyze the humor manifested in
abstract scenes and design computational models for them. We collect two
datasets of abstract scenes that facilitate the study of humor at both the
scene-level and the object-level. We analyze the funny scenes and explore the
different types of humor depicted in them via human studies. We model two tasks
that we believe demonstrate an understanding of some aspects of visual humor.
The tasks involve predicting the funniness of a scene and altering the
funniness of a scene. We show that our models perform well quantitatively, and
qualitatively through human studies. Our datasets are publicly available.
|
1601.01396
|
arXiv Admin
|
George Tsatsanifos
|
On the Computation of the Optimal Connecting Points in Road Networks
|
This submission has been withdrawn by arXiv administrators due to
disputed authorship
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we consider a set of travelers, starting from possibly different
locations towards a common destination within a road network, and propose
solutions to find the optimal connecting points for them. A connecting point is
a vertex of the network where a subset of the travelers meet and continue
traveling together towards the next connecting point or the destination. The
notion of optimality is with regard to a given aggregated travel cost, e.g.,
travel distance or shared fuel cost. This problem by itself is new and we make
it even more interesting (and complex) by considering affinity factors among
the users, i.e., how much a user likes to travel together with another one.
This plays a fundamental role in determining where the connecting points are
and how subsets of travelers are formed. We propose three methods for
addressing this problem, one that relies on a fast and greedy approach that
finds a sub-optimal solution, and two others that yield globally optimal
solutions. We evaluate all proposed approaches through experiments, where
collections of real datasets are used to assess the trade-offs, behavior and
characteristics of each method.
|
[
{
"version": "v1",
"created": "Thu, 7 Jan 2016 04:25:27 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Apr 2016 12:15:03 GMT"
},
{
"version": "v3",
"created": "Fri, 6 May 2016 17:30:25 GMT"
}
] | 2016-05-09T00:00:00 |
[
[
"Tsatsanifos",
"George",
""
]
] |
TITLE: On the Computation of the Optimal Connecting Points in Road Networks
ABSTRACT: In this paper we consider a set of travelers, starting from possibly different
locations towards a common destination within a road network, and propose
solutions to find the optimal connecting points for them. A connecting point is
a vertex of the network where a subset of the travelers meet and continue
traveling together towards the next connecting point or the destination. The
notion of optimality is with regard to a given aggregated travel cost, e.g.,
travel distance or shared fuel cost. This problem by itself is new and we make
it even more interesting (and complex) by considering affinity factors among
the users, i.e., how much a user likes to travel together with another one.
This plays a fundamental role in determining where the connecting points are
and how subsets of travelers are formed. We propose three methods for
addressing this problem, one that relies on a fast and greedy approach that
finds a sub-optimal solution, and two others that yield globally optimal
solutions. We evaluate all proposed approaches through experiments, where
collections of real datasets are used to assess the trade-offs, behavior and
characteristics of each method.
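
For intuition, the single connecting point case reduces to minimising summed shortest path costs over vertices; below is a networkx sketch that ignores the affinity factors and multi-level merging which make the paper's problem hard.

import networkx as nx

def best_connecting_point(G, starts, dest, weight="w"):
    # total cost if everyone meets at v and continues together: each
    # traveler's path to v plus one shared path from v to the destination
    dist = {s: nx.single_source_dijkstra_path_length(G, s, weight=weight)
            for s in starts}
    to_dest = nx.single_source_dijkstra_path_length(G, dest, weight=weight)
    def cost(v):
        return (sum(d.get(v, float("inf")) for d in dist.values())
                + to_dest.get(v, float("inf")))
    return min(G.nodes, key=cost)

G = nx.grid_2d_graph(5, 5)           # toy road network
nx.set_edge_attributes(G, 1.0, "w")  # unit travel cost per edge
print(best_connecting_point(G, starts=[(0, 0), (4, 0)], dest=(4, 4)))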
|
1605.01744
|
David Cinciruk
|
Mengke Hu, David Cinciruk, and John MacLaren Walsh
|
Improving Automated Patent Claim Parsing: Dataset, System, and
Experiments
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Off-the-shelf natural language processing software performs poorly when
parsing patent claims owing to their use of irregular language relative to the
corpora built from news articles and the web typically utilized to train this
software. Stopping short of the extensive and expensive process of accumulating
a large enough dataset to completely retrain parsers for patent claims, a
method of adapting existing natural language processing software towards patent
claims via forced part of speech tag correction is proposed. An Amazon
Mechanical Turk collection campaign organized to generate a public corpus to
train such an improved claim parsing system is discussed, identifying lessons
learned during the campaign that can be of use in future NLP dataset collection
campaigns with AMT. Experiments utilizing this corpus and other patent claim
sets measure the parsing performance improvement garnered via the claim parsing
system. Finally, the utility of the improved claim parsing system within other
patent processing applications is demonstrated via experiments showing improved
automated patent subject classification when the new claim parsing system is
utilized to generate the features.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2016 20:11:57 GMT"
}
] | 2016-05-09T00:00:00 |
[
[
"Hu",
"Mengke",
""
],
[
"Cinciruk",
"David",
""
],
[
"Walsh",
"John MacLaren",
""
]
] |
TITLE: Improving Automated Patent Claim Parsing: Dataset, System, and
Experiments
ABSTRACT: Off-the-shelf natural language processing software performs poorly when
parsing patent claims owing to their use of irregular language relative to the
corpora built from news articles and the web typically utilized to train this
software. Stopping short of the extensive and expensive process of accumulating
a large enough dataset to completely retrain parsers for patent claims, a
method of adapting existing natural language processing software towards patent
claims via forced part of speech tag correction is proposed. An Amazon
Mechanical Turk collection campaign organized to generate a public corpus to
train such an improved claim parsing system is discussed, identifying lessons
learned during the campaign that can be of use in future NLP dataset collection
campaigns with AMT. Experiments utilizing this corpus and other patent claim
sets measure the parsing performance improvement garnered via the claim parsing
system. Finally, the utility of the improved claim parsing system within other
patent processing applications is demonstrated via experiments showing improved
automated patent subject classification when the new claim parsing system is
utilized to generate the features.
|
1605.01749
|
Paul Bertens
|
Paul Bertens
|
Rank Ordered Autoencoders
|
Personal project, 14 pages, 9 figures. For source code see:
https://github.com/paulbertens
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new method for the unsupervised learning of sparse representations using
autoencoders is proposed and implemented by ordering the output of the hidden
units by their activation value and progressively reconstructing the input in
this order. This can be done efficiently in parallel with the use of cumulative
sums and sorting, only slightly increasing the computational cost. Minimizing
the difference of this progressive reconstruction with respect to the input can
be seen as minimizing the number of active output units required for the
reconstruction of the input. The model thus learns to reconstruct optimally
using the least number of active output units. This leads to high sparsity
without the need for extra hyperparameters; the amount of sparsity is instead
implicitly learned by minimizing this progressive reconstruction error. Results
of the trained model are given for patches of the CIFAR10 dataset, showing
rapid convergence of features and extremely sparse output activations while
maintaining a minimal reconstruction error and showing extreme robustness to
overfitting. Additionally, the reconstruction as a function of the number of
active units is presented, which shows the autoencoder learns a rank order code over
the input where the highest ranked units correspond to the highest decrease in
reconstruction error.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2016 20:33:38 GMT"
}
] | 2016-05-09T00:00:00 |
[
[
"Bertens",
"Paul",
""
]
] |
TITLE: Rank Ordered Autoencoders
ABSTRACT: A new method for the unsupervised learning of sparse representations using
autoencoders is proposed and implemented by ordering the output of the hidden
units by their activation value and progressively reconstructing the input in
this order. This can be done efficiently in parallel with the use of cumulative
sums and sorting, only slightly increasing the computational cost. Minimizing
the difference of this progressive reconstruction with respect to the input can
be seen as minimizing the number of active output units required for the
reconstruction of the input. The model thus learns to reconstruct optimally
using the least number of active output units. This leads to high sparsity
without the need for extra hyperparameters; the amount of sparsity is instead
implicitly learned by minimizing this progressive reconstruction error. Results
of the trained model are given for patches of the CIFAR10 dataset, showing
rapid convergence of features and extremely sparse output activations while
maintaining a minimal reconstruction error and showing extreme robustness to
overfitting. Additionally, the reconstruction as a function of the number of
active units is presented, which shows the autoencoder learns a rank order code over
the input where the highest ranked units correspond to the highest decrease in
reconstruction error.
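
The progressive reconstruction objective is easy to state on its own; here is a NumPy sketch for a single input (the encoder, training loop, and tie-breaking are omitted, and all shapes are illustrative).

import numpy as np

def rank_ordered_loss(x, a, W):
    # sort hidden units by activation, reconstruct with growing prefixes
    # of the ranking (a cumulative sum), and penalise every prefix's error
    order = np.argsort(-a)
    contrib = a[order, None] * W[order]       # per-unit contributions (k, d)
    progressive = np.cumsum(contrib, axis=0)  # prefix reconstructions
    errors = ((progressive - x) ** 2).sum(axis=1)
    return errors.mean()

rng = np.random.default_rng(0)
x = rng.normal(size=16)                  # an input patch
W = 0.1 * rng.normal(size=(8, 16))       # decoder weights for 8 hidden units
a = np.maximum(rng.normal(size=8), 0.0)  # nonnegative activations
print(rank_ordered_loss(x, a, W))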
|
1605.01790
|
Kristjan Greenewald
|
Kristjan Greenewald, Edmund Zelnio, Alfred Hero
|
Robust SAR STAP via Kronecker Decomposition
|
to appear at IEEE AES. arXiv admin note: text overlap with
arXiv:1604.03622, arXiv:1501.07481
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a spatio-temporal decomposition for the detection of
moving targets in multiantenna SAR. As a high resolution radar imaging
modality, SAR detects and localizes non-moving targets accurately, giving it an
advantage over lower resolution GMTI radars. Moving target detection is more
challenging due to target smearing and masking by clutter. Space-time adaptive
processing (STAP) is often used to remove the stationary clutter and enhance
the moving targets. In this work, it is shown that the performance of STAP can
be improved by modeling the clutter covariance as a space vs. time Kronecker
product with low rank factors. Based on this model, a low-rank Kronecker
product covariance estimation algorithm is proposed, and a novel separable
clutter cancelation filter based on the Kronecker covariance estimate is
introduced. The proposed method provides orders of magnitude reduction in the
required number of training samples, as well as improved robustness to
corruption of the training data. Simulation results and experiments using the
Gotcha SAR GMTI challenge dataset are presented that confirm the advantages of
our approach relative to existing techniques.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2016 23:53:32 GMT"
}
] | 2016-05-09T00:00:00 |
[
[
"Greenewald",
"Kristjan",
""
],
[
"Zelnio",
"Edmund",
""
],
[
"Hero",
"Alfred",
""
]
] |
TITLE: Robust SAR STAP via Kronecker Decomposition
ABSTRACT: This paper proposes a spatio-temporal decomposition for the detection of
moving targets in multiantenna SAR. As a high resolution radar imaging
modality, SAR detects and localizes non-moving targets accurately, giving it an
advantage over lower resolution GMTI radars. Moving target detection is more
challenging due to target smearing and masking by clutter. Space-time adaptive
processing (STAP) is often used to remove the stationary clutter and enhance
the moving targets. In this work, it is shown that the performance of STAP can
be improved by modeling the clutter covariance as a space vs. time Kronecker
product with low rank factors. Based on this model, a low-rank Kronecker
product covariance estimation algorithm is proposed, and a novel separable
clutter cancelation filter based on the Kronecker covariance estimate is
introduced. The proposed method provides orders of magnitude reduction in the
required number of training samples, as well as improved robustness to
corruption of the training data. Simulation results and experiments using the
Gotcha SAR GMTI challenge dataset are presented that confirm the advantages of
our approach relative to existing techniques.
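
The space vs. time Kronecker structure can be estimated with the classical flip-flop iteration; the sketch below fits full-rank factors, whereas the paper's estimator additionally imposes low rank on them (omitted here), and the toy data is an assumption.

import numpy as np

def flip_flop_kronecker(samples, p, q, n_iters=10):
    # fit Sigma ~ A kron B from samples reshaped to p x q (space x time)
    # matrices, alternately solving for one factor given the other
    Xs = samples.reshape(-1, p, q)
    n = len(Xs)
    A, B = np.eye(p), np.eye(q)
    for _ in range(n_iters):
        Bi = np.linalg.inv(B)
        A = sum(M @ Bi @ M.T for M in Xs) / (n * q)
        Ai = np.linalg.inv(A)
        B = sum(M.T @ Ai @ M for M in Xs) / (n * p)
    return A, B

rng = np.random.default_rng(0)
p, q = 4, 6
A_true, B_true = np.diag([1.0, 2.0, 3.0, 4.0]), np.eye(q)
L = np.kron(np.linalg.cholesky(A_true), np.linalg.cholesky(B_true))
samples = rng.normal(size=(500, p * q)) @ L.T
A_hat, _ = flip_flop_kronecker(samples, p, q)
print(np.round(np.diag(A_hat) / A_hat[0, 0], 2))  # ratios ~ 1, 2, 3, 4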
|
1605.01832
|
Hanxiao Liu
|
Hanxiao Liu, Yiming Yang
|
Cross-Graph Learning of Multi-Relational Associations
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cross-graph Relational Learning (CGRL) refers to the problem of predicting
the strengths or labels of multi-relational tuples of heterogeneous object
types, through the joint inference over multiple graphs which specify the
internal connections among each type of objects. CGRL is an open challenge in
machine learning due to the daunting number of all possible tuples to deal with
when the numbers of nodes in multiple graphs are large, and because the labeled
training instances are extremely sparse, as is typical. Existing methods such as
tensor factorization or tensor-kernel machines do not work well because of the
lack of convex formulation for the optimization of CGRL models, the poor
scalability of the algorithms in handling combinatorial numbers of tuples,
and/or the non-transductive nature of the learning methods which limits their
ability to leverage unlabeled data in training. This paper proposes a novel
framework which formulates CGRL as a convex optimization problem, enables
transductive learning using both labeled and unlabeled tuples, and offers a
scalable algorithm that guarantees the optimal solution and enjoys a linear
time complexity with respect to the sizes of input graphs. In our experiments
with a subset of DBLP publication records and an Enzyme multi-source dataset,
the proposed method successfully scaled to the large cross-graph inference
problem, and outperformed other representative approaches significantly.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2016 06:15:20 GMT"
}
] | 2016-05-09T00:00:00 |
[
[
"Liu",
"Hanxiao",
""
],
[
"Yang",
"Yiming",
""
]
] |
TITLE: Cross-Graph Learning of Multi-Relational Associations
ABSTRACT: Cross-graph Relational Learning (CGRL) refers to the problem of predicting
the strengths or labels of multi-relational tuples of heterogeneous object
types, through the joint inference over multiple graphs which specify the
internal connections among each type of objects. CGRL is an open challenge in
machine learning due to the daunting number of all possible tuples to deal with
when the numbers of nodes in multiple graphs are large, and because the labeled
training instances are extremely sparse, as is typical. Existing methods such as
tensor factorization or tensor-kernel machines do not work well because of the
lack of convex formulation for the optimization of CGRL models, the poor
scalability of the algorithms in handling combinatorial numbers of tuples,
and/or the non-transductive nature of the learning methods which limits their
ability to leverage unlabeled data in training. This paper proposes a novel
framework which formulates CGRL as a convex optimization problem, enables
transductive learning using both labeled and unlabeled tuples, and offers a
scalable algorithm that guarantees the optimal solution and enjoys a linear
time complexity with respect to the sizes of input graphs. In our experiments
with a subset of DBLP publication records and an Enzyme multi-source dataset,
the proposed method successfully scaled to the large cross-graph inference
problem, and outperformed other representative approaches significantly.
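
The scalability claim above, joint inference without materializing the cross-product of graphs, can be made concrete with a simple label-propagation variant (not the paper's convex formulation; the normalized affinity matrices S1, S2 and the damping factor alpha below are assumptions):

```python
import numpy as np

def cross_graph_propagate(S1, S2, Y, alpha=0.8, iters=50):
    """Propagate sparse pairwise labels Y[i, j] (object i of graph 1 with
    object j of graph 2) over both graphs jointly. The Kronecker identity
    (S2 kron S1) vec(F) = vec(S1 @ F @ S2.T) keeps each iteration linear
    in the graph sizes rather than quadratic in the number of tuples."""
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S1 @ F @ S2.T) + (1.0 - alpha) * Y
    return F
```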
|
1605.01843
|
Shaodi You
|
Shaodi You, Nick Barnes and Janine Walker
|
Perceptually Consistent Color-to-Gray Image Conversion
|
18 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a color to grayscale image conversion algorithm
(C2G) that aims to preserve the perceptual properties of the color image as
much as possible. To this end, we propose measures for two perceptual
properties based on contemporary research in vision science: brightness and
multi-scale contrast. The brightness measurement is based on the idea that the
brightness of a grayscale image will affect the perception of the probability
of color information. The color contrast measurement is based on the idea that
the contrast of a given pixel to its surroundings can be measured as a linear
combination of color contrast at different scales. Based on these measures we
propose a graph based optimization framework to balance the brightness and
contrast measurements. To solve the optimization, an $\ell_1$-norm based method
is provided which converts color discontinuities to brightness discontinuities.
To validate our methods, we evaluate against the existing Čadík and Color250
datasets, and against NeoColor, a new dataset that improves over existing C2G
datasets. NeoColor contains around 300 images from typical C2G scenarios,
including: commercial photography, printing, books, magazines, masterpiece
artworks and computer-designed graphics. We show improvements in metrics of
performance, and further through a user study, we validate the performance of
both the algorithm and the metric.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2016 07:13:48 GMT"
}
] | 2016-05-09T00:00:00 |
[
[
"You",
"Shaodi",
""
],
[
"Barnes",
"Nick",
""
],
[
"Walker",
"Janine",
""
]
] |
TITLE: Perceptually Consistent Color-to-Gray Image Conversion
ABSTRACT: In this paper, we propose a color to grayscale image conversion algorithm
(C2G) that aims to preserve the perceptual properties of the color image as
much as possible. To this end, we propose measures for two perceptual
properties based on contemporary research in vision science: brightness and
multi-scale contrast. The brightness measurement is based on the idea that the
brightness of a grayscale image will affect the perception of the probability
of color information. The color contrast measurement is based on the idea that
the contrast of a given pixel to its surroundings can be measured as a linear
combination of color contrast at different scales. Based on these measures we
propose a graph based optimization framework to balance the brightness and
contrast measurements. To solve the optimization, an $\ell_1$-norm based method
is provided which converts color discontinuities to brightness discontinuities.
To validate our methods, we evaluate against the existing Čadík and Color250
datasets, and against NeoColor, a new dataset that improves over existing C2G
datasets. NeoColor contains around 300 images from typical C2G scenarios,
including: commercial photography, printing, books, magazines, masterpiece
artworks and computer-designed graphics. We show improvements in metrics of
performance, and further through a user study, we validate the performance of
both the algorithm and the metric.
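
The multi-scale contrast measure is only characterized above as a linear combination of contrasts across scales; a hedged stand-in (the scales, weights, and single-channel simplification are all assumptions, not the paper's exact measure) could look like:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_contrast(channel, sigmas=(1, 2, 4, 8), weights=None):
    """Contrast of each pixel against its surround, as a weighted sum of
    center-surround differences at several Gaussian scales."""
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    return sum(w * np.abs(channel - gaussian_filter(channel, sigma=s))
               for w, s in zip(weights, sigmas))
```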
|
1605.02026
|
Thomas Goldstein
|
Gavin Taylor, Ryan Burmeister, Zheng Xu, Bharat Singh, Ankit Patel,
Tom Goldstein
|
Training Neural Networks Without Gradients: A Scalable ADMM Approach
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the growing importance of large network models and enormous training
datasets, GPUs have become increasingly necessary to train neural networks.
This is largely because conventional optimization algorithms rely on stochastic
gradient methods that don't scale well to large numbers of cores in a cluster
setting. Furthermore, the convergence of all gradient methods, including batch
methods, suffers from common problems like saturation effects, poor
conditioning, and saddle points. This paper explores an unconventional training
method that uses alternating direction methods and Bregman iteration to train
networks without gradient descent steps. The proposed method reduces the
network training problem to a sequence of minimization sub-steps that can each
be solved globally in closed form. The proposed method is advantageous because
it avoids many of the caveats that make gradient methods slow on highly
non-convex problems. The method exhibits strong scaling in the distributed
setting, yielding linear speedups even when split over thousands of cores.
|
[
{
"version": "v1",
"created": "Fri, 6 May 2016 18:38:45 GMT"
}
] | 2016-05-09T00:00:00 |
[
[
"Taylor",
"Gavin",
""
],
[
"Burmeister",
"Ryan",
""
],
[
"Xu",
"Zheng",
""
],
[
"Singh",
"Bharat",
""
],
[
"Patel",
"Ankit",
""
],
[
"Goldstein",
"Tom",
""
]
] |
TITLE: Training Neural Networks Without Gradients: A Scalable ADMM Approach
ABSTRACT: With the growing importance of large network models and enormous training
datasets, GPUs have become increasingly necessary to train neural networks.
This is largely because conventional optimization algorithms rely on stochastic
gradient methods that don't scale well to large numbers of cores in a cluster
setting. Furthermore, the convergence of all gradient methods, including batch
methods, suffers from common problems like saturation effects, poor
conditioning, and saddle points. This paper explores an unconventional training
method that uses alternating direction methods and Bregman iteration to train
networks without gradient descent steps. The proposed method reduces the
network training problem to a sequence of minimization sub-steps that can each
be solved globally in closed form. The proposed method is advantageous because
it avoids many of the caveats that make gradient methods slow on highly
non-convex problems. The method exhibits strong scaling in the distributed
setting, yielding linear speedups even when split over thousands of cores.
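
To make "closed-form sub-steps" concrete, here is a hedged numpy sketch of two of them for a ReLU layer: the least-squares weight update and the entrywise pre-activation update. The penalty weights beta and gamma are assumed hyperparameters, and the full method (Lagrange multipliers, Bregman iteration, distributed splitting) is richer than this.

```python
import numpy as np

def weight_update(z, a_prev, eps=1e-6):
    """Closed-form least-squares fit of z ~= W @ a_prev
    (the ridge term eps is only for numerical stability)."""
    G = a_prev @ a_prev.T + eps * np.eye(a_prev.shape[0])
    return z @ a_prev.T @ np.linalg.inv(G)

def relu_z_update(a, z_lin, beta, gamma):
    """Entrywise minimizer of gamma*(a - relu(z))**2 + beta*(z - z_lin)**2.
    With relu the problem splits into two quadratic cases, each closed-form;
    keep whichever attains the lower objective at every entry."""
    z_pos = np.maximum((gamma * a + beta * z_lin) / (gamma + beta), 0.0)
    f_pos = gamma * (a - z_pos) ** 2 + beta * (z_pos - z_lin) ** 2
    z_neg = np.minimum(z_lin, 0.0)  # on this branch relu(z) = 0
    f_neg = gamma * a ** 2 + beta * (z_neg - z_lin) ** 2
    return np.where(f_pos <= f_neg, z_pos, z_neg)
```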
|
1412.4564
|
Karel Lenc
|
Andrea Vedaldi, Karel Lenc
|
MatConvNet - Convolutional Neural Networks for MATLAB
|
Updated for release v1.0-beta20
| null | null | null |
cs.CV cs.LG cs.MS cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MatConvNet is an implementation of Convolutional Neural Networks (CNNs) for
MATLAB. The toolbox is designed with an emphasis on simplicity and flexibility.
It exposes the building blocks of CNNs as easy-to-use MATLAB functions,
providing routines for computing linear convolutions with filter banks, feature
pooling, and many more. In this manner, MatConvNet allows fast prototyping of
new CNN architectures; at the same time, it supports efficient computation on
CPU and GPU, allowing complex models to be trained on large datasets such as ImageNet
ILSVRC. This document provides an overview of CNNs and how they are implemented
in MatConvNet and gives the technical details of each computational block in
the toolbox.
|
[
{
"version": "v1",
"created": "Mon, 15 Dec 2014 12:23:35 GMT"
},
{
"version": "v2",
"created": "Sun, 21 Jun 2015 15:35:25 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2016 14:31:06 GMT"
}
] | 2016-05-06T00:00:00 |
[
[
"Vedaldi",
"Andrea",
""
],
[
"Lenc",
"Karel",
""
]
] |
TITLE: MatConvNet - Convolutional Neural Networks for MATLAB
ABSTRACT: MatConvNet is an implementation of Convolutional Neural Networks (CNNs) for
MATLAB. The toolbox is designed with an emphasis on simplicity and flexibility.
It exposes the building blocks of CNNs as easy-to-use MATLAB functions,
providing routines for computing linear convolutions with filter banks, feature
pooling, and many more. In this manner, MatConvNet allows fast prototyping of
new CNN architectures; at the same time, it supports efficient computation on
CPU and GPU, allowing complex models to be trained on large datasets such as ImageNet
ILSVRC. This document provides an overview of CNNs and how they are implemented
in MatConvNet and gives the technical details of each computational block in
the toolbox.
|
1509.00239
|
Jeremiah Blocki
|
Jeremiah Blocki and Anupam Datta
|
CASH: A Cost Asymmetric Secure Hash Algorithm for Optimal Password
Protection
|
29th IEEE Computer Security Foundations Symposium (Full Version)
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An adversary who has obtained the cryptographic hash of a user's password can
mount an offline attack to crack the password by comparing this hash value with
the cryptographic hashes of likely password guesses. This offline attacker is
limited only by the resources he is willing to invest to crack the password.
Key-stretching tools can help mitigate the threat of offline attacks by making
each password guess more expensive for the adversary to verify. However,
key-stretching increases authentication costs for a legitimate authentication
server. We introduce a novel Stackelberg game model which captures the
essential elements of this interaction between a defender and an offline
attacker. We then introduce Cost Asymmetric Secure Hash (CASH), a randomized
key-stretching mechanism that minimizes the fraction of passwords that would be
cracked by a rational offline attacker without increasing amortized
authentication costs for the legitimate authentication server. CASH is
motivated by the observation that the legitimate authentication server will
typically run the authentication procedure to verify a correct password, while
an offline adversary will typically use incorrect password guesses. By using
randomization we can ensure that the amortized cost of running CASH to verify a
correct password guess is significantly smaller than the cost of rejecting an
incorrect password. Using our Stackelberg game framework we can quantify the
quality of the underlying CASH running time distribution in terms of the
fraction of passwords that a rational offline adversary would crack. We provide
an efficient algorithm to compute high quality CASH distributions for the
defender. Finally, we analyze CASH using empirical data from two large scale
password frequency datasets. Our analysis shows that CASH can significantly
reduce (up to $50\%$) the fraction of passwords cracked by a rational offline
adversary.
|
[
{
"version": "v1",
"created": "Tue, 1 Sep 2015 11:45:56 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2016 22:05:14 GMT"
}
] | 2016-05-06T00:00:00 |
[
[
"Blocki",
"Jeremiah",
""
],
[
"Datta",
"Anupam",
""
]
] |
TITLE: CASH: A Cost Asymmetric Secure Hash Algorithm for Optimal Password
Protection
ABSTRACT: An adversary who has obtained the cryptographic hash of a user's password can
mount an offline attack to crack the password by comparing this hash value with
the cryptographic hashes of likely password guesses. This offline attacker is
limited only by the resources he is willing to invest to crack the password.
Key-stretching tools can help mitigate the threat of offline attacks by making
each password guess more expensive for the adversary to verify. However,
key-stretching increases authentication costs for a legitimate authentication
server. We introduce a novel Stackelberg game model which captures the
essential elements of this interaction between a defender and an offline
attacker. We then introduce Cost Asymmetric Secure Hash (CASH), a randomized
key-stretching mechanism that minimizes the fraction of passwords that would be
cracked by a rational offline attacker without increasing amortized
authentication costs for the legitimate authentication server. CASH is
motivated by the observation that the legitimate authentication server will
typically run the authentication procedure to verify a correct password, while
an offline adversary will typically use incorrect password guesses. By using
randomization we can ensure that the amortized cost of running CASH to verify a
correct password guess is significantly smaller than the cost of rejecting an
incorrect password. Using our Stackelberg game framework we can quantify the
quality of the underlying CASH running time distribution in terms of the
fraction of passwords that a rational offline adversary would crack. We provide
an efficient algorithm to compute high quality CASH distributions for the
defender. Finally, we analyze CASH using empirical data from two large scale
password frequency datasets. Our analysis shows that CASH can significantly
reduce (up to $50\%$) the fraction of passwords cracked by a rational offline
adversary.
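
A toy sketch of the cost asymmetry (not the paper's optimized construction): the round count t is drawn from a distribution and never stored, so verifying a correct password terminates at the hidden t on average, while rejecting an incorrect guess always pays the full budget. The example distribution and the SHA-256 chain are assumptions for illustration.

```python
import hashlib
import os
import random

def sha_chain_step(h):
    return hashlib.sha256(h).digest()

def store(pw, dist):
    """Hash pw with a secret, randomly drawn round count t; t is NOT stored.
    `dist` is a list of (rounds, probability) pairs standing in for the
    game-theoretically optimized CASH distribution."""
    salt = os.urandom(16)
    t = random.choices([r for r, _ in dist], weights=[q for _, q in dist])[0]
    h = pw.encode() + salt
    for _ in range(t):
        h = sha_chain_step(h)
    return salt, h

def verify(guess, salt, digest, t_max):
    """Correct guesses match at the hidden t; wrong ones run to t_max."""
    h = guess.encode() + salt
    for _ in range(t_max):
        h = sha_chain_step(h)
        if h == digest:
            return True
    return False

salt, digest = store("hunter2", dist=[(1000, 0.5), (2000, 0.3), (4000, 0.2)])
assert verify("hunter2", salt, digest, t_max=4000)
```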
|
1602.02481
|
Sungjoon Choi
|
Sungjoon Choi, Qian-Yi Zhou, Stephen Miller, and Vladlen Koltun
|
A Large Dataset of Object Scans
|
Technical report
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have created a dataset of more than ten thousand 3D scans of real objects.
To create the dataset, we recruited 70 operators, equipped them with
consumer-grade mobile 3D scanning setups, and paid them to scan objects in
their environments. The operators scanned objects of their choosing, outside
the laboratory and without direct supervision by computer vision professionals.
The result is a large and diverse collection of object scans: from shoes, mugs,
and toys to grand pianos, construction vehicles, and large outdoor sculptures.
We worked with an attorney to ensure that data acquisition did not violate
privacy constraints. The acquired data was irrevocably placed in the public
domain and is available freely at http://redwood-data.org/3dscan .
|
[
{
"version": "v1",
"created": "Mon, 8 Feb 2016 07:20:52 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Feb 2016 17:21:24 GMT"
},
{
"version": "v3",
"created": "Thu, 5 May 2016 05:35:48 GMT"
}
] | 2016-05-06T00:00:00 |
[
[
"Choi",
"Sungjoon",
""
],
[
"Zhou",
"Qian-Yi",
""
],
[
"Miller",
"Stephen",
""
],
[
"Koltun",
"Vladlen",
""
]
] |
TITLE: A Large Dataset of Object Scans
ABSTRACT: We have created a dataset of more than ten thousand 3D scans of real objects.
To create the dataset, we recruited 70 operators, equipped them with
consumer-grade mobile 3D scanning setups, and paid them to scan objects in
their environments. The operators scanned objects of their choosing, outside
the laboratory and without direct supervision by computer vision professionals.
The result is a large and diverse collection of object scans: from shoes, mugs,
and toys to grand pianos, construction vehicles, and large outdoor sculptures.
We worked with an attorney to ensure that data acquisition did not violate
privacy constraints. The acquired data was irrevocably placed in the public
domain and is available freely at http://redwood-data.org/3dscan .
|
1605.00971
|
Peter Dugan Dr
|
Peter J. Dugan, Christopher W. Clark, Yann Andr\'e LeCun, Sofie M. Van
Parijs
|
Phase 1: DCL System Research Using Advanced Approaches for Land-based or
Ship-based Real-Time Recognition and Localization of Marine Mammals - HPC
System Implementation
|
Year 1 National Oceanic Partnership Program Report, sponsored ONR,
NFWF. N000141210585
| null | null |
N000141210585
|
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We aim to investigate advancing the state of the art of detection,
classification and localization (DCL) in the field of bioacoustics. The two
primary goals are to develop transferable technologies for detection and
classification in: (1) the area of advanced algorithms, such as deep learning
and other methods; and (2) advanced systems, capable of real-time and archival
processing. This project will focus on long-term, continuous datasets to
provide automatic recognition, minimizing human time to annotate the signals.
Effort will begin by focusing on several years of multi-channel acoustic data
collected in the Stellwagen Bank National Marine Sanctuary (SBNMS) between 2006
and 2010. Our efforts will incorporate existing technologies in the
bioacoustics signal processing community, advanced high performance computing
(HPC) systems, and new approaches aimed at automatically detecting-classifying
and measuring features for species-specific marine mammal sounds within passive
acoustic data.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2016 16:35:35 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2016 18:27:35 GMT"
}
] | 2016-05-06T00:00:00 |
[
[
"Dugan",
"Peter J.",
""
],
[
"Clark",
"Christopher W.",
""
],
[
"LeCun",
"Yann André",
""
],
[
"Van Parijs",
"Sofie M.",
""
]
] |
TITLE: Phase 1: DCL System Research Using Advanced Approaches for Land-based or
Ship-based Real-Time Recognition and Localization of Marine Mammals - HPC
System Implementation
ABSTRACT: We aim to investigate advancing the state of the art of detection,
classification and localization (DCL) in the field of bioacoustics. The two
primary goals are to develop transferable technologies for detection and
classification in: (1) the area of advanced algorithms, such as deep learning
and other methods; and (2) advanced systems, capable of real-time and archival
processing. This project will focus on long-term, continuous datasets to
provide automatic recognition, minimizing human time to annotate the signals.
Effort will begin by focusing on several years of multi-channel acoustic data
collected in the Stellwagen Bank National Marine Sanctuary (SBNMS) between 2006
and 2010. Our efforts will incorporate existing technologies in the
bioacoustics signal processing community, advanced high performance computing
(HPC) systems, and new approaches aimed at automatically detecting-classifying
and measuring features for species-specific marine mammal sounds within passive
acoustic data.
|
1605.00972
|
Peter Dugan Dr
|
Peter J. Dugan, Christopher W. Clark, Yann Andr\'e LeCun, Sofie M. Van
Parijs
|
Phase 2: DCL System Using Deep Learning Approaches for Land-based or
Ship-based Real-Time Recognition and Localization of Marine Mammals - Machine
Learning Detection Algorithms
|
National Oceanic Partnership Program (NOPP) sponsored by ONR and
NFWF: N000141210585
| null | null |
N000141210585
|
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Overarching goals for this work aim to advance the state of the art for
detection, classification and localization (DCL) in the field of bioacoustics.
This goal is primarily achieved by building a generic framework for
detection-classification (DC) using a fast, efficient and scalable
architecture, and demonstrating the capabilities of this system on a variety
of low-frequency and mid-frequency cetacean sounds. Two primary goals are to
develop transferable technologies for detection and classification in (1) the
area of advanced algorithms, such as deep learning and other methods; and (2)
advanced systems, capable of real-time and archival processing. For each key
area, we will focus on producing publications from this work and providing
tools and software to the community where/when possible. Currently massive
amounts of acoustic data are being collected by various institutions,
corporations and national defense agencies. The long-term goal is to provide
technical capability to analyze the data using automatic algorithms for (DC)
based on machine intelligence. The goal of the automation is to provide
effective and efficient mechanisms by which to process large acoustic datasets
for understanding the bioacoustic behaviors of marine mammals. This capability
will provide insights into the potential ecological impacts and influences of
anthropogenic ocean sounds. This work focuses on building technologies using a
maturity model based on DARPA 6.1 and 6.2 processes, for basic and applied
research, respectively.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2016 16:36:30 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2016 18:28:21 GMT"
}
] | 2016-05-06T00:00:00 |
[
[
"Dugan",
"Peter J.",
""
],
[
"Clark",
"Christopher W.",
""
],
[
"LeCun",
"Yann André",
""
],
[
"Van Parijs",
"Sofie M.",
""
]
] |
TITLE: Phase 2: DCL System Using Deep Learning Approaches for Land-based or
Ship-based Real-Time Recognition and Localization of Marine Mammals - Machine
Learning Detection Algorithms
ABSTRACT: Overarching goals for this work aim to advance the state of the art for
detection, classification and localization (DCL) in the field of bioacoustics.
This goal is primarily achieved by building a generic framework for
detection-classification (DC) using a fast, efficient and scalable
architecture, and demonstrating the capabilities of this system on a variety
of low-frequency and mid-frequency cetacean sounds. Two primary goals are to
develop transferable technologies for detection and classification in (1) the
area of advanced algorithms, such as deep learning and other methods; and (2)
advanced systems, capable of real-time and archival processing. For each key
area, we will focus on producing publications from this work and providing
tools and software to the community where/when possible. Currently massive
amounts of acoustic data are being collected by various institutions,
corporations and national defense agencies. The long-term goal is to provide
technical capability to analyze the data using automatic algorithms for (DC)
based on machine intelligence. The goal of the automation is to provide
effective and efficient mechanisms by which to process large acoustic datasets
for understanding the bioacoustic behaviors of marine mammals. This capability
will provide insights into the potential ecological impacts and influences of
anthropogenic ocean sounds. This work focuses on building technologies using a
maturity model based on DARPA 6.1 and 6.2 processes, for basic and applied
research, respectively.
|
1605.00982
|
Peter Dugan Dr
|
Peter J. Dugan, Christopher W. Clark, Yann Andr\'e LeCun, Sofie M. Van
Parijs
|
Phase 4: DCL System Using Deep Learning Approaches for Land-Based or
Ship-Based Real-Time Recognition and Localization of Marine Mammals -
Distributed Processing and Big Data Applications
|
National Oceanic Partnership Program (NOPP) sponsored by ONR and NFWF
| null | null |
N000141210585
|
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While the animal bioacoustics community at large is collecting huge amounts
of acoustic data at an unprecedented pace, processing these data is
problematic. Currently in bioacoustics, there is no effective way to achieve
high performance computing using commercial off-the-shelf (COTS) or government
off-the-shelf (GOTS) tools. Although several advances have been made in the
open source and commercial software community, these offerings either support
specific applications that do not integrate well with data formats in
bioacoustics or they are too general. Furthermore, complex algorithms that use
deep learning strategies require special considerations, such as very large
libraries of exemplars (whale sounds) readily available for algorithm training
and testing. Detection-classification for passive acoustics is a data-mining
strategy and our goals are aligned with best practices that appeal to the
general data mining and machine learning communities where the problem of
processing large data is common. Therefore, the objective of this work is to
advance the state of the art for data-mining large passive acoustic datasets as
they pertain to bioacoustics. With this basic deficiency recognized at the
forefront, portions of the grant were dedicated to fostering deep-learning by
way of international competitions (kaggle.com) meant to attract deep-learning
solutions. The focus of this early work was targeted to make significant
progress in addressing big data systems and advanced algorithms over the
duration of the grant from 2012 to 2015. This early work provided simultaneous
advances in systems-algorithms research while supporting various collaborations
and projects.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2016 16:54:07 GMT"
},
{
"version": "v2",
"created": "Thu, 5 May 2016 18:35:16 GMT"
}
] | 2016-05-06T00:00:00 |
[
[
"Dugan",
"Peter J.",
""
],
[
"Clark",
"Christopher W.",
""
],
[
"LeCun",
"Yann André",
""
],
[
"Van Parijs",
"Sofie M.",
""
]
] |
TITLE: Phase 4: DCL System Using Deep Learning Approaches for Land-Based or
Ship-Based Real-Time Recognition and Localization of Marine Mammals -
Distributed Processing and Big Data Applications
ABSTRACT: While the animal bioacoustics community at large is collecting huge amounts
of acoustic data at an unprecedented pace, processing these data is
problematic. Currently in bioacoustics, there is no effective way to achieve
high performance computing using commercial off-the-shelf (COTS) or government
off-the-shelf (GOTS) tools. Although several advances have been made in the
open source and commercial software community, these offerings either support
specific applications that do not integrate well with data formats in
bioacoustics or they are too general. Furthermore, complex algorithms that use
deep learning strategies require special considerations, such as very large
libraries of exemplars (whale sounds) readily available for algorithm training
and testing. Detection-classification for passive acoustics is a data-mining
strategy and our goals are aligned with best practices that appeal to the
general data mining and machine learning communities where the problem of
processing large data is common. Therefore, the objective of this work is to
advance the state of the art for data-mining large passive acoustic datasets as
they pertain to bioacoustics. With this basic deficiency recognized at the
forefront, portions of the grant were dedicated to fostering deep-learning by
way of international competitions (kaggle.com) meant to attract deep-learning
solutions. The focus of this early work was targeted to make significant
progress in addressing big data systems and advanced algorithms over the
duration of the grant from 2012 to 2015. This early work provided simultaneous
advances in systems-algorithms research while supporting various collaborations
and projects.
|
1605.01534
|
Mohit Yadav
|
Mohit Yadav, Pankaj Malhotra, Lovekesh Vig, K Sriram, and Gautam
Shroff
|
ODE - Augmented Training Improves Anomaly Detection in Sensor Data from
Machines
|
Published at NIPS Time-series Workshop - 2015
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machines of all kinds from vehicles to industrial equipment are increasingly
instrumented with hundreds of sensors. Using such data to detect anomalous
behaviour is critical for safety and efficient maintenance. However, anomalies
occur rarely and with great variety in such systems, so there is often
insufficient anomalous data to build reliable detectors. A standard approach to
mitigate this problem is to use one-class methods relying only on data from
normal behaviour. Unfortunately, even these approaches are more likely to fail
in the scenario of a dynamical system with manual control input(s). Normal
behaviour in response to novel control input(s) might look very different to
the learned detector and may be incorrectly flagged as anomalous. In this
paper, we address this issue by modelling time-series via Ordinary Differential
Equations (ODE) and utilising such an ODE model to simulate the behaviour of
dynamical systems under varying control inputs. The available data is then
augmented with data generated from the ODE, and the anomaly detector is
retrained on this augmented dataset. Experiments demonstrate that ODE-augmented
training data allows better coverage of possible control input(s) and results
in learning more accurate distinctions between normal and anomalous behaviour
in time-series.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2016 09:15:55 GMT"
}
] | 2016-05-06T00:00:00 |
[
[
"Yadav",
"Mohit",
""
],
[
"Malhotra",
"Pankaj",
""
],
[
"Vig",
"Lovekesh",
""
],
[
"Sriram",
"K",
""
],
[
"Shroff",
"Gautam",
""
]
] |
TITLE: ODE - Augmented Training Improves Anomaly Detection in Sensor Data from
Machines
ABSTRACT: Machines of all kinds from vehicles to industrial equipment are increasingly
instrumented with hundreds of sensors. Using such data to detect anomalous
behaviour is critical for safety and efficient maintenance. However, anomalies
occur rarely and with great variety in such systems, so there is often
insufficient anomalous data to build reliable detectors. A standard approach to
mitigate this problem is to use one-class methods relying only on data from
normal behaviour. Unfortunately, even these approaches are more likely to fail
in the scenario of a dynamical system with manual control input(s). Normal
behaviour in response to novel control input(s) might look very different to
the learned detector and may be incorrectly flagged as anomalous. In this
paper, we address this issue by modelling time-series via Ordinary Differential
Equations (ODE) and utilising such an ODE model to simulate the behaviour of
dynamical systems under varying control inputs. The available data is then
augmented with data generated from the ODE, and the anomaly detector is
retrained on this augmented dataset. Experiments demonstrate that ODE-augmented
training data allows better coverage of possible control input(s) and results
in learning more accurate distinctions between normal and anomalous behaviour
in time-series.
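
A minimal sketch of the augmentation loop, assuming a hypothetical first-order machine model dx/dt = (k*u(t) - x)/tau (the paper fits its ODE to data) and a family of sinusoidal control inputs not present in the real recordings:

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_response(u, x0, t_span, k=1.0, tau=2.0, n_points=200):
    """Simulate the assumed ODE dx/dt = (k*u(t) - x)/tau under control u."""
    sol = solve_ivp(lambda t, x: (k * u(t) - x) / tau, t_span, [x0],
                    dense_output=True)
    t = np.linspace(t_span[0], t_span[1], n_points)
    return t, sol.sol(t)[0]

# Generate 'normal' behaviour under unseen control inputs, then retrain the
# one-class anomaly detector on the real plus simulated time-series.
augmented = [simulate_response(lambda t, a=a: a * np.sin(0.5 * t), 0.0, (0.0, 60.0))
             for a in np.linspace(0.5, 2.0, 10)]
```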
|
1605.01623
|
Bo Han
|
Bo Han and Ivor W. Tsang and Ling Chen
|
On the Convergence of A Family of Robust Losses for Stochastic Gradient
Descent
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The convergence of Stochastic Gradient Descent (SGD) using convex loss
functions has been widely studied. However, vanilla SGD methods using convex
losses cannot perform well with noisy labels, which adversely affect the update
of the primal variable in SGD methods. Unfortunately, noisy labels are
ubiquitous in real world applications such as crowdsourcing. To handle noisy
labels, in this paper, we present a family of robust losses for SGD methods. By
employing our robust losses, SGD methods successfully reduce negative effects
caused by noisy labels on each update of the primal variable. We not only
reveal that the convergence rate is O(1/T) for SGD methods using robust losses,
but also provide a robustness analysis of two representative robust losses.
Comprehensive experimental results on six real-world datasets show that SGD
methods using robust losses are obviously more robust than other baseline
methods in most situations with fast convergence.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2016 15:22:46 GMT"
}
] | 2016-05-06T00:00:00 |
[
[
"Han",
"Bo",
""
],
[
"Tsang",
"Ivor W.",
""
],
[
"Chen",
"Ling",
""
]
] |
TITLE: On the Convergence of A Family of Robust Losses for Stochastic Gradient
Descent
ABSTRACT: The convergence of Stochastic Gradient Descent (SGD) using convex loss
functions has been widely studied. However, vanilla SGD methods using convex
losses cannot perform well with noisy labels, which adversely affect the update
of the primal variable in SGD methods. Unfortunately, noisy labels are
ubiquitous in real world applications such as crowdsourcing. To handle noisy
labels, in this paper, we present a family of robust losses for SGD methods. By
employing our robust losses, SGD methods successfully reduce negative effects
caused by noisy labels on each update of the primal variable. We not only
reveal that the convergence rate is O(1/T) for SGD methods using robust losses,
but also provide a robustness analysis of two representative robust losses.
Comprehensive experimental results on six real-world datasets show that SGD
methods using robust losses are obviously more robust than other baseline
methods in most situations with fast convergence.
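
One representative member of such a family is the bounded ramp loss; the sketch below (the linear model, step size, and clipping point s are assumptions) shows how boundedness caps the influence any single noisy label can exert on an update:

```python
import numpy as np

def ramp_loss_grad(w, x, y, s=-1.0):
    """(Sub)gradient in w of the ramp loss min(1 - s, max(0, 1 - m)) with
    margin m = y * w @ x: it matches the hinge on s < m < 1 but goes flat
    (zero gradient) once m <= s, so a badly mislabeled point stops pulling."""
    m = y * (w @ x)
    return -y * x if s < m < 1.0 else np.zeros_like(w)

def robust_sgd(X, y, lr=0.1, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w -= lr * ramp_loss_grad(w, X[i], y[i])
    return w
```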
|
1605.01655
|
Saif Mohammad Dr.
|
Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko
|
Stance and Sentiment in Tweets
|
22 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We can often detect from a person's utterances whether he/she is in favor of
or against a given target entity -- their stance towards the target. However, a
person may express the same stance towards a target by using negative or
positive language. Here for the first time we present a dataset of
tweet--target pairs annotated for both stance and sentiment. The targets may or
may not be referred to in the tweets, and they may or may not be the target of
opinion in the tweets. Partitions of this dataset were used as training and
test sets in a SemEval-2016 shared task competition. We propose a simple stance
detection system that outperforms submissions from all 19 teams that
participated in the shared task. Additionally, access to both stance and
sentiment annotations allows us to explore several research questions. We show
that while knowing the sentiment expressed by a tweet is beneficial for stance
classification, it alone is not sufficient. Finally, we use additional
unlabeled data through distant supervision techniques and word embeddings to
further improve stance classification.
|
[
{
"version": "v1",
"created": "Thu, 5 May 2016 17:07:54 GMT"
}
] | 2016-05-06T00:00:00 |
[
[
"Mohammad",
"Saif M.",
""
],
[
"Sobhani",
"Parinaz",
""
],
[
"Kiritchenko",
"Svetlana",
""
]
] |
TITLE: Stance and Sentiment in Tweets
ABSTRACT: We can often detect from a person's utterances whether he/she is in favor of
or against a given target entity -- their stance towards the target. However, a
person may express the same stance towards a target by using negative or
positive language. Here for the first time we present a dataset of
tweet--target pairs annotated for both stance and sentiment. The targets may or
may not be referred to in the tweets, and they may or may not be the target of
opinion in the tweets. Partitions of this dataset were used as training and
test sets in a SemEval-2016 shared task competition. We propose a simple stance
detection system that outperforms submissions from all 19 teams that
participated in the shared task. Additionally, access to both stance and
sentiment annotations allows us to explore several research questions. We show
that while knowing the sentiment expressed by a tweet is beneficial for stance
classification, it alone is not sufficient. Finally, we use additional
unlabeled data through distant supervision techniques and word embeddings to
further improve stance classification.
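
The "simple stance detection system" is described only at a high level here; as a hedged stand-in of the same general shape (the actual system also folds in word-embedding and sentiment features), a linear classifier over word n-grams can be set up as:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy tweet--target pairs for one target, labeled with the shared task's
# FAVOR / AGAINST / NONE scheme (the example tweets are invented).
tweets = ["we must act on climate now",
          "the climate scare is overblown",
          "watching the game tonight"]
stances = ["FAVOR", "AGAINST", "NONE"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), sublinear_tf=True),
    LinearSVC(C=1.0),
)
clf.fit(tweets, stances)
print(clf.predict(["climate action cannot wait"]))
```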
|
1403.5006
|
Ning Yan
|
Ning Yan, Sona Hasani, Abolfazl Asudeh, Chengkai Li
|
Generating Preview Tables for Entity Graphs
|
This is the camera-ready version of a SIGMOD16 paper. There might be
tiny differences in layout, spacing and linebreaking, compared with the
version in the SIGMOD16 proceedings, since we must submit TeX files and use
arXiv to compile the files
| null |
10.1145/2882903.2915221
| null |
cs.DB cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Users are tapping into massive, heterogeneous entity graphs for many
applications. It is challenging to select entity graphs for a particular need,
given abundant datasets from many sources and the oftentimes scarce information
for them. We propose methods to produce preview tables for compact presentation
of important entity types and relationships in entity graphs. The preview
tables assist users in attaining a quick and rough preview of the data. They
can be shown in a limited display space for a user to browse and explore,
before she decides to spend time and resources to fetch and investigate the
complete dataset. We formulate several optimization problems that look for
previews with the highest scores according to intuitive goodness measures,
under various constraints on preview size and distance between preview tables.
The optimization problem under distance constraint is NP-hard. We design a
dynamic-programming algorithm and an Apriori-style algorithm for finding
optimal previews. Results from experiments, comparison with related work and
user studies demonstrated the scoring measures' accuracy and the discovery
algorithms' efficiency.
|
[
{
"version": "v1",
"created": "Thu, 20 Mar 2014 00:21:37 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2016 04:40:31 GMT"
}
] | 2016-05-05T00:00:00 |
[
[
"Yan",
"Ning",
""
],
[
"Hasani",
"Sona",
""
],
[
"Asudeh",
"Abolfazl",
""
],
[
"Li",
"Chengkai",
""
]
] |
TITLE: Generating Preview Tables for Entity Graphs
ABSTRACT: Users are tapping into massive, heterogeneous entity graphs for many
applications. It is challenging to select entity graphs for a particular need,
given abundant datasets from many sources and the oftentimes scarce information
for them. We propose methods to produce preview tables for compact presentation
of important entity types and relationships in entity graphs. The preview
tables assist users in attaining a quick and rough preview of the data. They
can be shown in a limited display space for a user to browse and explore,
before she decides to spend time and resources to fetch and investigate the
complete dataset. We formulate several optimization problems that look for
previews with the highest scores according to intuitive goodness measures,
under various constraints on preview size and distance between preview tables.
The optimization problem under distance constraint is NP-hard. We design a
dynamic-programming algorithm and an Apriori-style algorithm for finding
optimal previews. Results from experiments, comparison with related work and
user studies demonstrated the scoring measures' accuracy and the discovery
algorithms' efficiency.
|
1605.01101
|
Sourya Roy
|
Avisek Lahiri, Sourya Roy, Anirban Santara, Pabitra Mitra, Prabir
Kumar Biswas
|
WEPSAM: Weakly Pre-Learnt Saliency Model
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual saliency detection tries to mimic human vision psychology which
concentrates on sparse, important areas in natural images. Saliency prediction
research has traditionally been based on low-level features such as contrast,
edge, etc. A recent thrust in saliency prediction research is to learn high-level
semantics using ground truth eye fixation datasets. In this paper we present
WEPSAM: Weakly Pre-Learnt Saliency Model, a pioneering effort at using
domain-specific pre-learning on ImageNet for saliency prediction with a
lightweight CNN architecture. The paper proposes a two-step hierarchical learning,
in which the first step is to develop a framework for weakly pre-training on a
large-scale dataset such as ImageNet, which is devoid of human eye fixation maps.
The second step refines the pre-trained model on a limited set of ground truth
fixations. Analysis of loss on the iSUN and SALICON datasets reveals that the
pre-trained network converges much faster compared to randomly initialized
network. WEPSAM also outperforms some recent state-of-the-art saliency
prediction models on the challenging MIT300 dataset.
|
[
{
"version": "v1",
"created": "Tue, 3 May 2016 21:47:33 GMT"
}
] | 2016-05-05T00:00:00 |
[
[
"Lahiri",
"Avisek",
""
],
[
"Roy",
"Sourya",
""
],
[
"Santara",
"Anirban",
""
],
[
"Mitra",
"Pabitra",
""
],
[
"Biswas",
"Prabir Kumar",
""
]
] |
TITLE: WEPSAM: Weakly Pre-Learnt Saliency Model
ABSTRACT: Visual saliency detection tries to mimic human vision psychology which
concentrates on sparse, important areas in natural images. Saliency prediction
research has traditionally been based on low-level features such as contrast,
edge, etc. A recent thrust in saliency prediction research is to learn high-level
semantics using ground truth eye fixation datasets. In this paper we present
WEPSAM: Weakly Pre-Learnt Saliency Model, a pioneering effort at using
domain-specific pre-learning on ImageNet for saliency prediction with a
lightweight CNN architecture. The paper proposes a two-step hierarchical learning,
in which the first step is to develop a framework for weakly pre-training on a
large-scale dataset such as ImageNet, which is devoid of human eye fixation maps.
The second step refines the pre-trained model on a limited set of ground truth
fixations. Analysis of loss on the iSUN and SALICON datasets reveals that the
pre-trained network converges much faster compared to randomly initialized
network. WEPSAM also outperforms some recent state-of-the-art saliency
prediction models on the challenging MIT300 dataset.
|
1605.01130
|
Yaming Wang
|
Yaming Wang, Jonghyun Choi, Vlad I. Morariu, Larry S. Davis
|
Mining Discriminative Triplets of Patches for Fine-Grained
Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fine-grained classification involves distinguishing between similar
sub-categories based on subtle differences in highly localized regions;
therefore, accurate localization of discriminative regions remains a major
challenge. We describe a patch-based framework to address this problem. We
introduce triplets of patches with geometric constraints to improve the
accuracy of patch localization, and automatically mine discriminative
geometrically-constrained triplets for classification. The resulting approach
only requires object bounding boxes. Its effectiveness is demonstrated using
four publicly available fine-grained datasets, on which it outperforms or
achieves comparable performance to the state-of-the-art in classification.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2016 02:34:18 GMT"
}
] | 2016-05-05T00:00:00 |
[
[
"Wang",
"Yaming",
""
],
[
"Choi",
"Jonghyun",
""
],
[
"Morariu",
"Vlad I.",
""
],
[
"Davis",
"Larry S.",
""
]
] |
TITLE: Mining Discriminative Triplets of Patches for Fine-Grained
Classification
ABSTRACT: Fine-grained classification involves distinguishing between similar
sub-categories based on subtle differences in highly localized regions;
therefore, accurate localization of discriminative regions remains a major
challenge. We describe a patch-based framework to address this problem. We
introduce triplets of patches with geometric constraints to improve the
accuracy of patch localization, and automatically mine discriminative
geometrically-constrained triplets for classification. The resulting approach
only requires object bounding boxes. Its effectiveness is demonstrated using
four publicly available fine-grained datasets, on which it outperforms or
achieves comparable performance to the state-of-the-art in classification.
|
1605.01156
|
Yunjie Liu
|
Yunjie Liu, Evan Racah, Prabhat, Joaquin Correa, Amir Khosrowshahi,
David Lavers, Kenneth Kunkel, Michael Wehner, William Collins
|
Application of Deep Convolutional Neural Networks for Detecting Extreme
Weather in Climate Datasets
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting extreme events in large datasets is a major challenge in climate
science research. Current algorithms for extreme event detection are built upon
human expertise in defining events based on subjective thresholds of relevant
physical variables. Often, multiple competing methods produce vastly different
results on the same dataset. Accurate characterization of extreme events in
climate simulations and observational data archives is critical for
understanding the trends and potential impacts of such events in a climate
change context. This study presents the first application of Deep Learning
techniques as an alternative methodology for climate extreme event detection.
Deep neural networks are able to learn high-level representations of a broad
class of patterns from labeled data. In this work, we developed a deep
Convolutional Neural Network (CNN) classification system and demonstrated the
usefulness of Deep Learning techniques for tackling climate pattern detection
problems. Coupled with a Bayesian hyper-parameter optimization scheme, our
deep CNN system achieves 89\%-99\% accuracy in detecting extreme events
(Tropical Cyclones, Atmospheric Rivers and Weather Fronts).
|
[
{
"version": "v1",
"created": "Wed, 4 May 2016 06:38:19 GMT"
}
] | 2016-05-05T00:00:00 |
[
[
"Liu",
"Yunjie",
""
],
[
"Racah",
"Evan",
""
],
[
"Prabhat",
"",
""
],
[
"Correa",
"Joaquin",
""
],
[
"Khosrowshahi",
"Amir",
""
],
[
"Lavers",
"David",
""
],
[
"Kunkel",
"Kenneth",
""
],
[
"Wehner",
"Michael",
""
],
[
"Collins",
"William",
""
]
] |
TITLE: Application of Deep Convolutional Neural Networks for Detecting Extreme
Weather in Climate Datasets
ABSTRACT: Detecting extreme events in large datasets is a major challenge in climate
science research. Current algorithms for extreme event detection are built upon
human expertise in defining events based on subjective thresholds of relevant
physical variables. Often, multiple competing methods produce vastly different
results on the same dataset. Accurate characterization of extreme events in
climate simulations and observational data archives is critical for
understanding the trends and potential impacts of such events in a climate
change context. This study presents the first application of Deep Learning
techniques as an alternative methodology for climate extreme event detection.
Deep neural networks are able to learn high-level representations of a broad
class of patterns from labeled data. In this work, we developed a deep
Convolutional Neural Network (CNN) classification system and demonstrated the
usefulness of Deep Learning techniques for tackling climate pattern detection
problems. Coupled with a Bayesian hyper-parameter optimization scheme, our
deep CNN system achieves 89\%-99\% accuracy in detecting extreme events
(Tropical Cyclones, Atmospheric Rivers and Weather Fronts).
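
A small Keras sketch of the kind of patch classifier described; the input size, the eight physical-variable channels, and the layer widths are illustrative assumptions rather than the paper's configuration:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 8)),        # patch of 8 climate variables
    tf.keras.layers.Conv2D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # event vs. non-event
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```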
|
1605.01189
|
Sheraz Ahmed Dr.
|
Sheraz Ahmed, Muhammad Imran Malik, Muhammad Zeshan Afzal, Koichi
Kise, Masakazu Iwamura, Andreas Dengel, Marcus Liwicki
|
A Generic Method for Automatic Ground Truth Generation of
Camera-captured Documents
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The contribution of this paper is fourfold. The first contribution is a
novel, generic method for automatic ground truth generation of camera-captured
document images (books, magazines, articles, invoices, etc.). It enables us to
build large-scale (i.e., millions of images) labeled camera-captured/scanned
documents datasets, without any human intervention. The method is generic,
language independent and can be used for generation of labeled documents
datasets (both scanned and camera-captured) in any cursive and non-cursive
language, e.g., English, Russian, Arabic, Urdu, etc. To assess the
effectiveness of the presented method, two different datasets in English and
Russian are generated using the presented method. Evaluation of samples from
the two datasets shows that 99.98% of the images were correctly labeled. The
second contribution is a large dataset (called C3Wi) of camera-captured
character and word images, comprising 1 million word images (10 million
character images), captured in a real camera-based acquisition. This dataset
can be used for training as well as testing of character recognition systems on
camera-captured documents. The third contribution is a novel method for the
recognition of camera-captured document images. The proposed method is based on
Long Short-Term Memory and outperforms the state-of-the-art methods for
camera-based OCRs. As a fourth contribution, various benchmark tests are performed to
uncover the behavior of commercial (ABBYY), open source (Tesseract), and the
presented camera-based OCR using the presented C3Wi dataset. Evaluation results
reveal that the existing OCRs, which already get very high accuracies on
scanned documents, have limited performance on camera-captured document images;
where ABBYY has an accuracy of 75%, Tesseract an accuracy of 50.22%, while the
presented character recognition system has an accuracy of 95.10%.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2016 09:25:04 GMT"
}
] | 2016-05-05T00:00:00 |
[
[
"Ahmed",
"Sheraz",
""
],
[
"Malik",
"Muhammad Imran",
""
],
[
"Afzal",
"Muhammad Zeshan",
""
],
[
"Kise",
"Koichi",
""
],
[
"Iwamura",
"Masakazu",
""
],
[
"Dengel",
"Andreas",
""
],
[
"Liwicki",
"Marcus",
""
]
] |
TITLE: A Generic Method for Automatic Ground Truth Generation of
Camera-captured Documents
ABSTRACT: The contribution of this paper is fourfold. The first contribution is a
novel, generic method for automatic ground truth generation of camera-captured
document images (books, magazines, articles, invoices, etc.). It enables us to
build large-scale (i.e., millions of images) labeled camera-captured/scanned
documents datasets, without any human intervention. The method is generic,
language independent and can be used for generation of labeled documents
datasets (both scanned and camera-captured) in any cursive and non-cursive
language, e.g., English, Russian, Arabic, Urdu, etc. To assess the
effectiveness of the presented method, two different datasets in English and
Russian are generated using the presented method. Evaluation of samples from
the two datasets shows that 99.98% of the images were correctly labeled. The
second contribution is a large dataset (called C3Wi) of camera-captured
character and word images, comprising 1 million word images (10 million
character images), captured in a real camera-based acquisition. This dataset
can be used for training as well as testing of character recognition systems on
camera-captured documents. The third contribution is a novel method for the
recognition of camera-captured document images. The proposed method is based on
Long Short-Term Memory and outperforms the state-of-the-art methods for
camera-based OCRs. As a fourth contribution, various benchmark tests are performed to
uncover the behavior of commercial (ABBYY), open source (Tesseract), and the
presented camera-based OCR using the presented C3Wi dataset. Evaluation results
reveal that the existing OCRs, which already get very high accuracies on
scanned documents, have limited performance on camera-captured document images;
where ABBYY has an accuracy of 75%, Tesseract an accuracy of 50.22%, while the
presented character recognition system has an accuracy of 95.10%.
|
1605.01194
|
Sharmistha Jat
|
Lavanya Sita Tekumalla and Sharmistha
|
IISCNLP at SemEval-2016 Task 2: Interpretable STS with ILP based
Multiple Chunk Aligner
|
SEMEVAL Workshop @ NAACL 2016
| null | null | null |
cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interpretable semantic textual similarity (iSTS) task adds a crucial
explanatory layer to pairwise sentence similarity. We address various
components of this task: chunk level semantic alignment along with assignment
of similarity type and score for aligned chunks with a novel system presented
in this paper. We propose an algorithm, iMATCH, for the alignment of multiple
non-contiguous chunks based on Integer Linear Programming (ILP). Similarity
type and score assignment for pairs of chunks is done using a supervised
multiclass classification technique based on a Random Forest classifier. Results
show that our algorithm iMATCH has low execution time and outperforms most
other participating systems in terms of alignment score. Of the three datasets,
we are top ranked for the answer-students dataset in terms of overall score and
have top alignment score for headlines dataset in the gold chunks track.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2016 09:36:49 GMT"
}
] | 2016-05-05T00:00:00 |
[
[
"Tekumalla",
"Lavanya Sita",
""
],
[
"Sharmistha",
"",
""
]
] |
TITLE: IISCNLP at SemEval-2016 Task 2: Interpretable STS with ILP based
Multiple Chunk Aligner
ABSTRACT: Interpretable semantic textual similarity (iSTS) task adds a crucial
explanatory layer to pairwise sentence similarity. We address various
components of this task: chunk level semantic alignment along with assignment
of similarity type and score for aligned chunks with a novel system presented
in this paper. We propose an algorithm, iMATCH, for the alignment of multiple
non-contiguous chunks based on Integer Linear Programming (ILP). Similarity
type and score assignment for pairs of chunks is done using a supervised
multiclass classification technique based on a Random Forest classifier. Results
show that our algorithm iMATCH has low execution time and outperforms most
other participating systems in terms of alignment score. Of the three datasets,
we are top ranked for the answer-students dataset in terms of overall score and
have top alignment score for headlines dataset in the gold chunks track.
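
A reduced sketch of the ILP at the heart of such an aligner, using the PuLP modeling library: binary x[i][j] marks chunk i of one sentence as aligned to chunk j of the other, maximizing total similarity under a per-chunk cap. iMATCH itself additionally handles groupings of multiple non-contiguous chunks, which this sketch omits.

```python
import pulp

def align_chunks(sim, max_per_chunk=1):
    """sim[i][j]: similarity of chunk i (sentence 1) to chunk j (sentence 2).
    Returns the similarity-maximizing set of aligned index pairs."""
    n, m = len(sim), len(sim[0])
    prob = pulp.LpProblem("chunk_alignment", pulp.LpMaximize)
    x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(m)]
         for i in range(n)]
    prob += pulp.lpSum(sim[i][j] * x[i][j] for i in range(n) for j in range(m))
    for i in range(n):
        prob += pulp.lpSum(x[i]) <= max_per_chunk
    for j in range(m):
        prob += pulp.lpSum(x[i][j] for i in range(n)) <= max_per_chunk
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [(i, j) for i in range(n) for j in range(m)
            if x[i][j].value() == 1]
```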
|
1605.01397
|
Noel Codella
|
David Gutman, Noel C. F. Codella, Emre Celebi, Brian Helba, Michael
Marchetti, Nabin Mishra, Allan Halpern
|
Skin Lesion Analysis toward Melanoma Detection: A Challenge at the
International Symposium on Biomedical Imaging (ISBI) 2016, hosted by the
International Skin Imaging Collaboration (ISIC)
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we describe the design and implementation of a publicly
accessible dermatology image analysis benchmark challenge. The goal of the
challenge is to support research and development of algorithms for automated
diagnosis of melanoma, a lethal form of skin cancer, from dermoscopic images.
The challenge was divided into sub-challenges for each task involved in image
analysis, including lesion segmentation, dermoscopic feature detection within a
lesion, and classification of melanoma. Training data included 900 images. A
separate test dataset of 379 images was provided to measure resultant
performance of systems developed with the training data. Ground truth for both
training and test sets was generated by a panel of dermoscopic experts. In
total, there were 79 submissions from a group of 38 participants, making this
the largest standardized and comparative study for melanoma diagnosis in
dermoscopic images to date. While the official challenge duration and ranking
of participants has concluded, the datasets remain available for further
research and development.
|
[
{
"version": "v1",
"created": "Wed, 4 May 2016 19:49:17 GMT"
}
] | 2016-05-05T00:00:00 |
[
[
"Gutman",
"David",
""
],
[
"Codella",
"Noel C. F.",
""
],
[
"Celebi",
"Emre",
""
],
[
"Helba",
"Brian",
""
],
[
"Marchetti",
"Michael",
""
],
[
"Mishra",
"Nabin",
""
],
[
"Halpern",
"Allan",
""
]
] |
TITLE: Skin Lesion Analysis toward Melanoma Detection: A Challenge at the
International Symposium on Biomedical Imaging (ISBI) 2016, hosted by the
International Skin Imaging Collaboration (ISIC)
ABSTRACT: In this article, we describe the design and implementation of a publicly
accessible dermatology image analysis benchmark challenge. The goal of the
challenge is to support research and development of algorithms for automated
diagnosis of melanoma, a lethal form of skin cancer, from dermoscopic images.
The challenge was divided into sub-challenges for each task involved in image
analysis, including lesion segmentation, dermoscopic feature detection within a
lesion, and classification of melanoma. Training data included 900 images. A
separate test dataset of 379 images was provided to measure resultant
performance of systems developed with the training data. Ground truth for both
training and test sets was generated by a panel of dermoscopic experts. In
total, there were 79 submissions from a group of 38 participants, making this
the largest standardized and comparative study for melanoma diagnosis in
dermoscopic images to date. While the official challenge duration and ranking
of participants has concluded, the datasets remain available for further
research and development.
|