id (string, 9–16 chars) | submitter (string, 3–64 chars, nullable) | authors (string, 5–6.63k chars) | title (string, 7–245 chars) | comments (string, 1–482 chars, nullable) | journal-ref (string, 4–382 chars, nullable) | doi (string, 9–151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5–108 chars) | license (string, 9 classes) | abstract (string, 83–3.41k chars) | versions (list, 1–20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1–427 items) | prompt (string, 166–3.49k chars) | label (string, 2 classes) | prob (float64, 0.5–0.98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1502.04149 | Po-Sen Huang | Po-Sen Huang, Minje Kim, Mark Hasegawa-Johnson, Paris Smaragdis | Joint Optimization of Masks and Deep Recurrent Neural Networks for
Monaural Source Separation | null | IEEE/ACM Transactions on Audio, Speech, and Language Processing,
vol.23, no.12, pp.2136-2147, Dec. 2015 | 10.1109/TASLP.2015.2468583 | null | cs.SD cs.AI cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monaural source separation is important for many real world applications. It
is challenging because, with only a single channel of information available
and without any constraints, an infinite number of solutions are possible. In this
paper, we explore joint optimization of masking functions and deep recurrent
neural networks for monaural source separation tasks, including monaural speech
separation, monaural singing voice separation, and speech denoising. The joint
optimization of the deep recurrent neural networks with an extra masking layer
enforces a reconstruction constraint. Moreover, we explore a discriminative
criterion for training neural networks to further enhance the separation
performance. We evaluate the proposed system on the TSP, MIR-1K, and TIMIT
datasets for speech separation, singing voice separation, and speech denoising
tasks, respectively. Our approaches achieve 2.30--4.98 dB SDR gain compared to
NMF models in the speech separation task, 2.30--2.48 dB GNSDR gain and
4.32--5.42 dB GSIR gain compared to existing models in the singing voice
separation task, and outperform NMF and DNN baselines in the speech denoising
task.
| [
{
"version": "v1",
"created": "Fri, 13 Feb 2015 23:22:16 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Jun 2015 04:22:20 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Aug 2015 04:20:33 GMT"
},
{
"version": "v4",
"created": "Thu, 1 Oct 2015 02:58:01 GMT"
}
] | 2015-10-02T00:00:00 | [
[
"Huang",
"Po-Sen",
""
],
[
"Kim",
"Minje",
""
],
[
"Hasegawa-Johnson",
"Mark",
""
],
[
"Smaragdis",
"Paris",
""
]
] | TITLE: Joint Optimization of Masks and Deep Recurrent Neural Networks for
Monaural Source Separation
ABSTRACT: Monaural source separation is important for many real world applications. It
is challenging because, with only a single channel of information available
and without any constraints, an infinite number of solutions are possible. In this
paper, we explore joint optimization of masking functions and deep recurrent
neural networks for monaural source separation tasks, including monaural speech
separation, monaural singing voice separation, and speech denoising. The joint
optimization of the deep recurrent neural networks with an extra masking layer
enforces a reconstruction constraint. Moreover, we explore a discriminative
criterion for training neural networks to further enhance the separation
performance. We evaluate the proposed system on the TSP, MIR-1K, and TIMIT
datasets for speech separation, singing voice separation, and speech denoising
tasks, respectively. Our approaches achieve 2.30--4.98 dB SDR gain compared to
NMF models in the speech separation task, 2.30--2.48 dB GNSDR gain and
4.32--5.42 dB GSIR gain compared to existing models in the singing voice
separation task, and outperform NMF and DNN baselines in the speech denoising
task.
| no_new_dataset | 0.950088 |
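The masking layer described in the abstract above is simple enough to sketch. The following is a minimal illustration, not the authors' implementation: it assumes two magnitude estimates (`y1_hat`, `y2_hat`) coming out of a recurrent network, and shows how a soft time-frequency mask ties them to the mixture so that the two outputs always sum back to the mixture magnitude — the reconstruction constraint the extra layer enforces. All names are hypothetical.

```python
import numpy as np

def soft_mask_separation(y1_hat, y2_hat, mixture_mag, eps=1e-8):
    """Turn two network outputs into a soft time-frequency mask and apply it
    to the mixture magnitude, so the two estimates sum back to the mixture
    (the reconstruction constraint the extra masking layer enforces)."""
    m1 = np.abs(y1_hat) / (np.abs(y1_hat) + np.abs(y2_hat) + eps)
    s1 = m1 * mixture_mag            # estimated source 1
    s2 = (1.0 - m1) * mixture_mag    # estimated source 2
    return s1, s2

# Synthetic check: m1 and (1 - m1) sum to one per bin, so reconstruction holds.
rng = np.random.default_rng(0)
y1, y2, mix = rng.random((3, 513, 100))   # fake magnitude spectrograms
s1, s2 = soft_mask_separation(y1, y2, mix)
assert np.allclose(s1 + s2, mix)
```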
1502.08029 | Li Yao | Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal,
Hugo Larochelle, Aaron Courville | Describing Videos by Exploiting Temporal Structure | Accepted to ICCV15. This version comes with code release and
supplementary material | null | null | null | stat.ML cs.AI cs.CL cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent progress in using recurrent neural networks (RNNs) for image
description has motivated the exploration of their application for video
description. However, while images are static, working with videos requires
modeling their dynamic temporal structure and then properly integrating that
information into a natural language description. In this context, we propose an
approach that successfully takes into account both the local and global
temporal structure of videos to produce descriptions. First, our approach
incorporates a spatial temporal 3-D convolutional neural network (3-D CNN)
representation of the short temporal dynamics. The 3-D CNN representation is
trained on video action recognition tasks, so as to produce a representation
that is tuned to human motion and behavior. Second, we propose a temporal
attention mechanism that allows the model to go beyond local temporal modeling
and learns to automatically select the most relevant temporal segments given
the text-generating RNN. Our approach exceeds the current state-of-the-art for both
BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on
a new, larger and more challenging dataset of paired video and natural language
descriptions.
| [
{
"version": "v1",
"created": "Fri, 27 Feb 2015 19:30:40 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Mar 2015 17:24:47 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Mar 2015 15:27:08 GMT"
},
{
"version": "v4",
"created": "Sat, 25 Apr 2015 20:32:27 GMT"
},
{
"version": "v5",
"created": "Thu, 1 Oct 2015 00:12:46 GMT"
}
] | 2015-10-02T00:00:00 | [
[
"Yao",
"Li",
""
],
[
"Torabi",
"Atousa",
""
],
[
"Cho",
"Kyunghyun",
""
],
[
"Ballas",
"Nicolas",
""
],
[
"Pal",
"Christopher",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"Courville",
"Aaron",
""
]
] | TITLE: Describing Videos by Exploiting Temporal Structure
ABSTRACT: Recent progress in using recurrent neural networks (RNNs) for image
description has motivated the exploration of their application for video
description. However, while images are static, working with videos requires
modeling their dynamic temporal structure and then properly integrating that
information into a natural language description. In this context, we propose an
approach that successfully takes into account both the local and global
temporal structure of videos to produce descriptions. First, our approach
incorporates a spatial temporal 3-D convolutional neural network (3-D CNN)
representation of the short temporal dynamics. The 3-D CNN representation is
trained on video action recognition tasks, so as to produce a representation
that is tuned to human motion and behavior. Second, we propose a temporal
attention mechanism that allows the model to go beyond local temporal modeling
and learns to automatically select the most relevant temporal segments given
the text-generating RNN. Our approach exceeds the current state-of-the-art for both
BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on
a new, larger and more challenging dataset of paired video and natural language
descriptions.
| new_dataset | 0.96856 |
1504.04211 | Zied Ben Bouallegue | Zied Ben Bouallegue, Pierre Pinson, Petra Friederichs | Quantile forecast discrimination ability and value | null | null | 10.1002/qj.2624 | null | physics.ao-ph physics.data-an stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While probabilistic forecast verification for categorical forecasts is well
established, some of the existing concepts and methods have not found their
equivalent for the case of continuous variables. New tools dedicated to the
assessment of forecast discrimination ability and forecast value are introduced
here, taking quantile forecasts as the base product for the continuous
case (hence in a nonparametric framework). The relative user characteristic
(RUC) curve and the quantile value plot allow analysing the performance of a
forecast for a specific user in a decision-making framework. The RUC curve is
designed as a user-based discrimination tool and the quantile value plot
translates forecast discrimination ability in terms of economic value. The
relationship between the overall value of a quantile forecast and the
respective quantile skill score is also discussed. The application of these new
verification approaches and tools is illustrated based on synthetic datasets,
as well as for the case of global radiation forecasts from the high resolution
ensemble COSMO-DE-EPS of the German Weather Service.
| [
{
"version": "v1",
"created": "Thu, 16 Apr 2015 12:45:35 GMT"
}
] | 2015-10-02T00:00:00 | [
[
"Bouallegue",
"Zied Ben",
""
],
[
"Pinson",
"Pierre",
""
],
[
"Friederichs",
"Petra",
""
]
] | TITLE: Quantile forecast discrimination ability and value
ABSTRACT: While probabilistic forecast verification for categorical forecasts is well
established, some of the existing concepts and methods have not found their
equivalent for the case of continuous variables. New tools dedicated to the
assessment of forecast discrimination ability and forecast value are introduced
here, taking quantile forecasts as the base product for the continuous
case (hence in a nonparametric framework). The relative user characteristic
(RUC) curve and the quantile value plot allow analysing the performance of a
forecast for a specific user in a decision-making framework. The RUC curve is
designed as a user-based discrimination tool and the quantile value plot
translates forecast discrimination ability in terms of economic value. The
relationship between the overall value of a quantile forecast and the
respective quantile skill score is also discussed. The application of these new
verification approaches and tools is illustrated based on synthetic datasets,
as well as for the case of global radiation forecasts from the high resolution
ensemble COSMO-DE-EPS of the German Weather Service.
| no_new_dataset | 0.947332 |
1505.01121 | Mateusz Malinowski | Mateusz Malinowski and Marcus Rohrbach and Mario Fritz | Ask Your Neurons: A Neural-based Approach to Answering Questions about
Images | ICCV'15 (Oral) | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address a question answering task on real-world images that is set up as a
Visual Turing Test. By combining the latest advances in image representation and
natural language processing, we propose Neural-Image-QA, an end-to-end
formulation to this problem for which all parts are trained jointly. In
contrast to previous efforts, we are facing a multi-modal problem where the
language output (answer) is conditioned on visual and natural language input
(image and question). Our approach Neural-Image-QA doubles the performance of
the previous best approach on this problem. We provide additional insights into
the problem by analyzing how much information is contained only in the language
part for which we provide a new human baseline. To study human consensus, which
is related to the ambiguities inherent in this challenging task, we propose two
novel metrics and collect additional answers, which extend the original DAQUAR
dataset to DAQUAR-Consensus.
| [
{
"version": "v1",
"created": "Tue, 5 May 2015 18:39:29 GMT"
},
{
"version": "v2",
"created": "Wed, 6 May 2015 08:10:01 GMT"
},
{
"version": "v3",
"created": "Thu, 1 Oct 2015 12:13:20 GMT"
}
] | 2015-10-02T00:00:00 | [
[
"Malinowski",
"Mateusz",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Fritz",
"Mario",
""
]
] | TITLE: Ask Your Neurons: A Neural-based Approach to Answering Questions about
Images
ABSTRACT: We address a question answering task on real-world images that is set up as a
Visual Turing Test. By combining the latest advances in image representation and
natural language processing, we propose Neural-Image-QA, an end-to-end
formulation to this problem for which all parts are trained jointly. In
contrast to previous efforts, we are facing a multi-modal problem where the
language output (answer) is conditioned on visual and natural language input
(image and question). Our approach Neural-Image-QA doubles the performance of
the previous best approach on this problem. We provide additional insights into
the problem by analyzing how much information is contained only in the language
part for which we provide a new human baseline. To study human consensus, which
is related to the ambiguities inherent in this challenging task, we propose two
novel metrics and collect additional answers, which extend the original DAQUAR
dataset to DAQUAR-Consensus.
| no_new_dataset | 0.948251 |
1509.08147 | Abhishek Kar | Abhishek Kar, Shubham Tulsiani, Jo\~ao Carreira, Jitendra Malik | Amodal Completion and Size Constancy in Natural Scenes | Accepted to ICCV 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of enriching current object detection systems with
veridical object sizes and relative depth estimates from a single image. There
are several technical challenges to this, such as occlusions, lack of
calibration data and the scale ambiguity between object size and distance.
These have not been addressed in full generality in previous work. Here we
propose to tackle these issues by building upon advances in object recognition
and using recently created large-scale datasets. We first introduce the task of
amodal bounding box completion, which aims to infer the full extent of the
object instances in the image. We then propose a probabilistic framework for
learning category-specific object size distributions from available annotations
and leverage these in conjunction with amodal completion to infer veridical
sizes in novel images. Finally, we introduce a focal length prediction approach
that exploits scene recognition to overcome inherent scaling ambiguities and we
demonstrate qualitative results on challenging real-world scenes.
| [
{
"version": "v1",
"created": "Sun, 27 Sep 2015 21:39:42 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Oct 2015 04:34:06 GMT"
}
] | 2015-10-02T00:00:00 | [
[
"Kar",
"Abhishek",
""
],
[
"Tulsiani",
"Shubham",
""
],
[
"Carreira",
"João",
""
],
[
"Malik",
"Jitendra",
""
]
] | TITLE: Amodal Completion and Size Constancy in Natural Scenes
ABSTRACT: We consider the problem of enriching current object detection systems with
veridical object sizes and relative depth estimates from a single image. There
are several technical challenges to this, such as occlusions, lack of
calibration data and the scale ambiguity between object size and distance.
These have not been addressed in full generality in previous work. Here we
propose to tackle these issues by building upon advances in object recognition
and using recently created large-scale datasets. We first introduce the task of
amodal bounding box completion, which aims to infer the full extent of the
object instances in the image. We then propose a probabilistic framework for
learning category-specific object size distributions from available annotations
and leverage these in conjunction with amodal completion to infer veridical
sizes in novel images. Finally, we introduce a focal length prediction approach
that exploits scene recognition to overcome inherent scaling ambiguities and we
demonstrate qualitative results on challenging real-world scenes.
| new_dataset | 0.964254 |
1411.6387 | Chunhua Shen | Fayao Liu, Chunhua Shen, Guosheng Lin | Deep Convolutional Neural Fields for Depth Estimation from a Single
Image | fixed some typos. in CVPR15 proceedings | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of depth estimation from a single monocular image in
this work. It is a challenging task as no reliable depth cues are available,
e.g., stereo correspondences, motions, etc. Previous efforts have focused
on exploiting geometric priors or additional sources of information, with all
using hand-crafted features. Recently, there is mounting evidence that features
from deep convolutional neural networks (CNN) are setting new records for
various vision applications. On the other hand, considering the continuous
characteristic of the depth values, depth estimations can be naturally
formulated as a continuous conditional random field (CRF) learning problem.
Therefore, in this paper we present a deep convolutional neural field model for
estimating depths from a single image, aiming to jointly explore the capacity
of deep CNN and continuous CRF. Specifically, we propose a deep structured
learning scheme which learns the unary and pairwise potentials of continuous
CRF in a unified deep CNN framework.
The proposed method can be used for depth estimations of general scenes with
no geometric priors nor any extra information injected. In our case, the
integral of the partition function can be analytically calculated, thus we can
exactly solve the log-likelihood optimization. Moreover, solving the MAP
problem for predicting depths of a new image is highly efficient as closed-form
solutions exist. We experimentally demonstrate that the proposed method
outperforms state-of-the-art depth estimation methods on both indoor and
outdoor scene datasets.
| [
{
"version": "v1",
"created": "Mon, 24 Nov 2014 09:13:00 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Dec 2014 04:11:14 GMT"
}
] | 2015-10-01T00:00:00 | [
[
"Liu",
"Fayao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Lin",
"Guosheng",
""
]
] | TITLE: Deep Convolutional Neural Fields for Depth Estimation from a Single
Image
ABSTRACT: We consider the problem of depth estimation from a single monocular image in
this work. It is a challenging task as no reliable depth cues are available,
e.g., stereo correspondences, motions, etc. Previous efforts have focused
on exploiting geometric priors or additional sources of information, with all
using hand-crafted features. Recently, there is mounting evidence that features
from deep convolutional neural networks (CNN) are setting new records for
various vision applications. On the other hand, considering the continuous
characteristic of the depth values, depth estimations can be naturally
formulated as a continuous conditional random field (CRF) learning problem.
Therefore, in this paper we present a deep convolutional neural field model for
estimating depths from a single image, aiming to jointly explore the capacity
of deep CNN and continuous CRF. Specifically, we propose a deep structured
learning scheme which learns the unary and pairwise potentials of continuous
CRF in a unified deep CNN framework.
The proposed method can be used for depth estimations of general scenes with
no geometric priors nor any extra information injected. In our case, the
integral of the partition function can be analytically calculated, thus we can
exactly solve the log-likelihood optimization. Moreover, solving the MAP
problem for predicting depths of a new image is highly efficient as closed-form
solutions exist. We experimentally demonstrate that the proposed method
outperforms state-of-the-art depth estimation methods on both indoor and
outdoor scene datasets.
| no_new_dataset | 0.945751 |
1507.08799 | Christian Mandery | Christian Mandery and J\'ulia Borr\`as and Mirjam J\"ochner and Tamim
Asfour | Analyzing Whole-Body Pose Transitions in Multi-Contact Motions | 8 pages, IEEE-RAS International Conference on Humanoid Robots
(Humanoids) 2015 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When executing whole-body motions, humans are able to use a large variety of
support poses which utilize not only the feet, but also the hands, knees and elbows
to enhance stability. While there are many works analyzing the transitions
involved in walking, very few works analyze human motion where more complex
supports occur.
In this work, we analyze complex support pose transitions in human motion
involving locomotion and manipulation tasks (loco-manipulation). We have
applied a method for the detection of human support contacts from motion
capture data to a large-scale dataset of loco-manipulation motions involving
multi-contact supports, providing a semantic representation of them. Our
results provide a statistical analysis of the used support poses, their
transitions and the time spent in each of them. In addition, our data partially
validates our taxonomy of whole-body support poses presented in our previous
work.
We believe that this work extends our understanding of human motion for
humanoids, with a long-term objective of developing methods for autonomous
multi-contact motion planning.
| [
{
"version": "v1",
"created": "Fri, 31 Jul 2015 08:51:51 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Sep 2015 12:04:07 GMT"
}
] | 2015-10-01T00:00:00 | [
[
"Mandery",
"Christian",
""
],
[
"Borràs",
"Júlia",
""
],
[
"Jöchner",
"Mirjam",
""
],
[
"Asfour",
"Tamim",
""
]
] | TITLE: Analyzing Whole-Body Pose Transitions in Multi-Contact Motions
ABSTRACT: When executing whole-body motions, humans are able to use a large variety of
support poses which utilize not only the feet, but also the hands, knees and elbows
to enhance stability. While there are many works analyzing the transitions
involved in walking, very few works analyze human motion where more complex
supports occur.
In this work, we analyze complex support pose transitions in human motion
involving locomotion and manipulation tasks (loco-manipulation). We have
applied a method for the detection of human support contacts from motion
capture data to a large-scale dataset of loco-manipulation motions involving
multi-contact supports, providing a semantic representation of them. Our
results provide a statistical analysis of the used support poses, their
transitions and the time spent in each of them. In addition, our data partially
validates our taxonomy of whole-body support poses presented in our previous
work.
We believe that this work extends our understanding of human motion for
humanoids, with a long-term objective of developing methods for autonomous
multi-contact motion planning.
| no_new_dataset | 0.503082 |
1509.09030 | Sourangshu Bhattacharya | Ayan Das and Sourangshu Bhattacharya | Distributed Weighted Parameter Averaging for SVM Training on Big Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two popular approaches for distributed training of SVMs on big data are
parameter averaging and ADMM. Parameter averaging is efficient but suffers from
loss of accuracy with an increase in the number of partitions, while ADMM in the
feature space is accurate but suffers from slow convergence. In this paper, we
report a hybrid approach called weighted parameter averaging (WPA), which
optimizes the regularized hinge loss with respect to weights on parameters. The
problem is shown to be the same as solving an SVM in a projected space. We also
demonstrate an $O(\frac{1}{N})$ stability bound on the final hypothesis given by
WPA, using novel proof techniques. Experimental results on a variety of toy and
real-world datasets show that our approach is significantly more accurate than
parameter averaging for a high number of partitions. The proposed method is
also seen to enjoy much faster convergence than ADMM in feature space.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2015 06:59:31 GMT"
}
] | 2015-10-01T00:00:00 | [
[
"Das",
"Ayan",
""
],
[
"Bhattacharya",
"Sourangshu",
""
]
] | TITLE: Distributed Weighted Parameter Averaging for SVM Training on Big Data
ABSTRACT: Two popular approaches for distributed training of SVMs on big data are
parameter averaging and ADMM. Parameter averaging is efficient but suffers from
loss of accuracy with an increase in the number of partitions, while ADMM in the
feature space is accurate but suffers from slow convergence. In this paper, we
report a hybrid approach called weighted parameter averaging (WPA), which
optimizes the regularized hinge loss with respect to weights on parameters. The
problem is shown to be the same as solving an SVM in a projected space. We also
demonstrate an $O(\frac{1}{N})$ stability bound on the final hypothesis given by
WPA, using novel proof techniques. Experimental results on a variety of toy and
real-world datasets show that our approach is significantly more accurate than
parameter averaging for a high number of partitions. The proposed method is
also seen to enjoy much faster convergence than ADMM in feature space.
| no_new_dataset | 0.95096 |
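As a rough, hedged sketch of the WPA idea summarized above (not the paper's exact formulation): train one linear SVM per data partition, then learn a weight per partition by subgradient descent on a regularized hinge loss over a held-out set. Simplifying assumptions: binary labels in {-1, +1}, intercepts dropped, and the regularizer placed directly on the weight vector `beta`; `partitions` is a hypothetical list of `(X, y)` pairs.

```python
import numpy as np
from sklearn.svm import LinearSVC

def weighted_parameter_averaging(partitions, X_val, y_val,
                                 lr=0.1, epochs=200, lam=1e-3):
    # Step 1 (cheap, embarrassingly parallel): one linear SVM per partition.
    W = np.stack([LinearSVC(fit_intercept=False).fit(Xp, yp).coef_.ravel()
                  for Xp, yp in partitions])      # shape (k, d)
    k = W.shape[0]
    beta = np.full(k, 1.0 / k)                    # plain averaging as the start
    # Step 2: subgradient descent on a regularized hinge loss over beta.
    for _ in range(epochs):
        w = W.T @ beta                            # combined parameter vector
        margins = y_val * (X_val @ w)             # labels assumed in {-1, +1}
        active = margins < 1.0                    # margin violators drive the hinge
        grad = lam * beta
        if active.any():
            grad -= (y_val[active, None] * (X_val[active] @ W.T)).sum(0) / len(y_val)
        beta -= lr * grad
    return W.T @ beta                             # final hypothesis
```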
1509.09055 | Marc Barthelemy | Julien Perret, Maurizio Gribaudi, Marc Barthelemy | Roads and cities of $18^{th}$ century France | 12 pages, 4 figures | Scientific Data 2, Article number: 150048 (2015) | null | null | physics.soc-ph cond-mat.dis-nn cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of infrastructure networks such as roads and streets is of
utmost importance for understanding the evolution of urban systems. However,
datasets describing these spatial objects are rare and sparse. The database
presented here represents the road network at the French national level
described in the historical map of Cassini in the $18^{th}$ century. The
digitization of this historical map is based on a collaborative methodology
that we describe in detail. This dataset can be used for a variety of
interdisciplinary studies, covering multiple spatial resolutions and ranging
from history, geography, urban economics to network science.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2015 08:07:36 GMT"
}
] | 2015-10-01T00:00:00 | [
[
"Perret",
"Julien",
""
],
[
"Gribaudi",
"Maurizio",
""
],
[
"Barthelemy",
"Marc",
""
]
] | TITLE: Roads and cities of $18^{th}$ century France
ABSTRACT: The evolution of infrastructure networks such as roads and streets is of
utmost importance for understanding the evolution of urban systems. However,
datasets describing these spatial objects are rare and sparse. The database
presented here represents the road network at the French national level
described in the historical map of Cassini in the $18^{th}$ century. The
digitization of this historical map is based on a collaborative methodology
that we describe in detail. This dataset can be used for a variety of
interdisciplinary studies, covering multiple spatial resolutions and ranging
from history, geography, urban economics to network science.
| new_dataset | 0.917377 |
1509.09114 | Karteek Alahari | Yang Hua, Karteek Alahari, Cordelia Schmid | Online Object Tracking with Proposal Selection | ICCV 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tracking-by-detection approaches are some of the most successful object
trackers in recent years. Their success is largely determined by the detector
model they learn initially and then update over time. However, under
challenging conditions where an object can undergo transformations, e.g.,
severe rotation, these methods are found to be lacking. In this paper, we
address this problem by formulating it as a proposal selection task and making
two contributions. The first one is introducing novel proposals estimated from
the geometric transformations undergone by the object, and building a rich
candidate set for predicting the object location. The second one is devising a
novel selection strategy using multiple cues, i.e., detection score and
edgeness score computed from state-of-the-art object edges and motion
boundaries. We extensively evaluate our approach on the visual object tracking
2014 challenge and online tracking benchmark datasets, and show the best
performance.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2015 10:38:27 GMT"
}
] | 2015-10-01T00:00:00 | [
[
"Hua",
"Yang",
""
],
[
"Alahari",
"Karteek",
""
],
[
"Schmid",
"Cordelia",
""
]
] | TITLE: Online Object Tracking with Proposal Selection
ABSTRACT: Tracking-by-detection approaches are some of the most successful object
trackers in recent years. Their success is largely determined by the detector
model they learn initially and then update over time. However, under
challenging conditions where an object can undergo transformations, e.g.,
severe rotation, these methods are found to be lacking. In this paper, we
address this problem by formulating it as a proposal selection task and making
two contributions. The first one is introducing novel proposals estimated from
the geometric transformations undergone by the object, and building a rich
candidate set for predicting the object location. The second one is devising a
novel selection strategy using multiple cues, i.e., detection score and
edgeness score computed from state-of-the-art object edges and motion
boundaries. We extensively evaluate our approach on the visual object tracking
2014 challenge and online tracking benchmark datasets, and show the best
performance.
| no_new_dataset | 0.949201 |
1509.09130 | Claire Vernade | Claire Vernade (LTCI), Olivier Capp\'e (LTCI) | Learning From Missing Data Using Selection Bias in Movie Recommendation | null | null | null | null | stat.ML cs.IR cs.LG cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommending items to users is a challenging task due to the large amount of
missing information. In many cases, the data solely consist of ratings or tags
voluntarily contributed by each user on a very limited subset of the available
items, so that most of the data of potential interest is actually missing.
Current approaches to recommendation usually assume that the unobserved data is
missing at random. In this contribution, we provide statistical evidence that
existing movie recommendation datasets reveal a significant positive
association between the rating of items and the propensity to select these
items. We propose a computationally efficient variational approach that makes
it possible to exploit this selection bias so as to improve the estimation of
ratings from small populations of users. Results obtained with this approach
applied to neighborhood-based collaborative filtering illustrate its potential
for improving the reliability of the recommendation.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2015 11:40:21 GMT"
}
] | 2015-10-01T00:00:00 | [
[
"Vernade",
"Claire",
"",
"LTCI"
],
[
"Cappé",
"Olivier",
"",
"LTCI"
]
] | TITLE: Learning From Missing Data Using Selection Bias in Movie Recommendation
ABSTRACT: Recommending items to users is a challenging task due to the large amount of
missing information. In many cases, the data solely consist of ratings or tags
voluntarily contributed by each user on a very limited subset of the available
items, so that most of the data of potential interest is actually missing.
Current approaches to recommendation usually assume that the unobserved data is
missing at random. In this contribution, we provide statistical evidence that
existing movie recommendation datasets reveal a significant positive
association between the rating of items and the propensity to select these
items. We propose a computationally efficient variational approach that makes
it possible to exploit this selection bias so as to improve the estimation of
ratings from small populations of users. Results obtained with this approach
applied to neighborhood-based collaborative filtering illustrate its potential
for improving the reliability of the recommendation.
| no_new_dataset | 0.951369 |
1509.09254 | Harry Crane | Harry Crane and Walter Dempsey | Community detection for interaction networks | 29 pages, 3 figures | null | null | null | cs.SI physics.soc-ph stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many applications, it is common practice to obtain a network from
interaction counts by thresholding each pairwise count at a prescribed value.
Our analysis calls attention to the dependence of certain methods, notably
Newman--Girvan modularity, on the choice of threshold. Essentially, the
threshold either separates the network into clusters automatically, making the
algorithm's job trivial, or erases all structure in the data, rendering
clustering impossible. By fitting the original interaction counts as given, we
show that minor modifications to classical statistical methods outperform the
prevailing approaches for community detection from interaction datasets. We
also introduce a new hidden Markov model for inferring community structures
that vary over time. We demonstrate each of these features on three real
datasets: the karate club dataset, voting data from the U.S.\ Senate
(2001--2003), and temporal voting data for the U.S. Supreme Court (1990--2004).
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2015 16:56:05 GMT"
}
] | 2015-10-01T00:00:00 | [
[
"Crane",
"Harry",
""
],
[
"Dempsey",
"Walter",
""
]
] | TITLE: Community detection for interaction networks
ABSTRACT: In many applications, it is common practice to obtain a network from
interaction counts by thresholding each pairwise count at a prescribed value.
Our analysis calls attention to the dependence of certain methods, notably
Newman--Girvan modularity, on the choice of threshold. Essentially, the
threshold either separates the network into clusters automatically, making the
algorithm's job trivial, or erases all structure in the data, rendering
clustering impossible. By fitting the original interaction counts as given, we
show that minor modifications to classical statistical methods outperform the
prevailing approaches for community detection from interaction datasets. We
also introduce a new hidden Markov model for inferring community structures
that vary over time. We demonstrate each of these features on three real
datasets: the karate club dataset, voting data from the U.S.\ Senate
(2001--2003), and temporal voting data for the U.S. Supreme Court (1990--2004).
| no_new_dataset | 0.949342 |
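The thresholding step this abstract critiques is easy to make concrete. The toy sketch below (hypothetical names, synthetic Poisson counts) binarizes a pairwise interaction-count matrix at a few thresholds; sweeping the threshold takes the network from nearly complete to nearly empty, which is exactly the sensitivity the authors call attention to.

```python
import numpy as np

def threshold_network(counts, t):
    # Binarize a symmetric pairwise interaction-count matrix at threshold t.
    A = (counts >= t).astype(int)
    np.fill_diagonal(A, 0)
    return A

rng = np.random.default_rng(0)
upper = np.triu(rng.poisson(5, size=(30, 30)), 1)
counts = upper + upper.T                 # synthetic symmetric interaction counts
for t in (1, 5, 12):                     # sweep the threshold
    print(t, threshold_network(counts, t).sum() // 2, "edges")
```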
1509.09313 | Grey Ballard | Ramakrishnan Kannan and Grey Ballard and Haesun Park | A High-Performance Parallel Algorithm for Nonnegative Matrix
Factorization | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-negative matrix factorization (NMF) is the problem of determining two
non-negative low rank factors $W$ and $H$, for the given input matrix $A$, such
that $A \approx W H$. NMF is a useful tool for many applications in different
domains such as topic modeling in text mining, background separation in video
analysis, and community detection in social networks. Despite its popularity in
the data mining community, there is a lack of efficient parallel software to
solve the problem for big datasets. Existing distributed-memory algorithms are
limited in terms of performance and applicability, as they are implemented
using Hadoop and are designed only for sparse matrices.
We propose a distributed-memory parallel algorithm that computes the
factorization by iteratively solving alternating non-negative least squares
(NLS) subproblems for $W$ and $H$. To our knowledge, our algorithm is the first
high-performance parallel algorithm for NMF. It maintains the data and factor
matrices in memory (distributed across processors), uses MPI for interprocessor
communication, and, in the dense case, provably minimizes communication costs
(under mild assumptions). As opposed to previous implementations, our algorithm
is also flexible: (1) it performs well for dense and sparse matrices, and (2)
it allows the user to choose from among multiple algorithms for solving local
NLS subproblems within the alternating iterations. We demonstrate the
scalability of our algorithm and compare it with baseline implementations,
showing significant performance improvements.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2015 19:47:39 GMT"
}
] | 2015-10-01T00:00:00 | [
[
"Kannan",
"Ramakrishnan",
""
],
[
"Ballard",
"Grey",
""
],
[
"Park",
"Haesun",
""
]
] | TITLE: A High-Performance Parallel Algorithm for Nonnegative Matrix
Factorization
ABSTRACT: Non-negative matrix factorization (NMF) is the problem of determining two
non-negative low rank factors $W$ and $H$, for the given input matrix $A$, such
that $A \approx W H$. NMF is a useful tool for many applications in different
domains such as topic modeling in text mining, background separation in video
analysis, and community detection in social networks. Despite its popularity in
the data mining community, there is a lack of efficient parallel software to
solve the problem for big datasets. Existing distributed-memory algorithms are
limited in terms of performance and applicability, as they are implemented
using Hadoop and are designed only for sparse matrices.
We propose a distributed-memory parallel algorithm that computes the
factorization by iteratively solving alternating non-negative least squares
(NLS) subproblems for $W$ and $H$. To our knowledge, our algorithm is the first
high-performance parallel algorithm for NMF. It maintains the data and factor
matrices in memory (distributed across processors), uses MPI for interprocessor
communication, and, in the dense case, provably minimizes communication costs
(under mild assumptions). As opposed to previous implementations, our algorithm
is also flexible: (1) it performs well for dense and sparse matrices, and (2)
it allows the user to choose from among multiple algorithms for solving local
NLS subproblems within the alternating iterations. We demonstrate the
scalability of our algorithm and compare it with baseline implementations,
showing significant performance improvements.
| no_new_dataset | 0.939913 |
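For reference, the alternating non-negative least squares loop that the paper distributes with MPI looks roughly like the sequential sketch below. It is a simplification: each NLS subproblem is handled by an unconstrained least-squares solve projected onto the nonnegative orthant — one of several admissible local solvers — and all distribution and communication logic is omitted.

```python
import numpy as np

def nmf_anls(A, rank, iters=100, eps=1e-9):
    # Alternate between the two NLS subproblems; each is approximated by a
    # least-squares solve projected onto the nonnegative orthant.
    rng = np.random.default_rng(0)
    W = rng.random((A.shape[0], rank))
    H = rng.random((rank, A.shape[1]))
    for _ in range(iters):
        H = np.maximum(np.linalg.lstsq(W, A, rcond=None)[0], eps)
        W = np.maximum(np.linalg.lstsq(H.T, A.T, rcond=None)[0].T, eps)
    return W, H

# Quick check on random data: relative reconstruction error should be well below 1.
A = np.random.default_rng(1).random((50, 40))
W, H = nmf_anls(A, rank=5)
print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```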
1412.1470 | Mostafa Haghir Chehreghani | Mostafa Haghir Chehreghani and Maurice Bruynooghe | Mining Rooted Ordered Trees under Subtree Homeomorphism | This paper is accepted in the Data Mining and Knowledge Discovery
journal
(http://www.springer.com/computer/database+management+%26+information+retrieval/journal/10618) | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mining frequent tree patterns has many applications in different areas such
as XML data, bioinformatics, and the World Wide Web. The crucial step in frequent
pattern mining is frequency counting, which involves a matching operator to
find occurrences (instances) of a tree pattern in a given collection of trees.
A widely used matching operator for tree-structured data is subtree
homeomorphism, where an edge in the tree pattern is mapped onto an
ancestor-descendant relationship in the given tree. Tree patterns that are
frequent under subtree homeomorphism are usually called embedded patterns. In
this paper, we present an efficient algorithm for subtree homeomorphism with
application to frequent pattern mining. We propose a compact data-structure,
called occ, which stores only information about the rightmost paths of
occurrences and hence can encode and represent several occurrences of a tree
pattern. We then define efficient join operations on the occ data-structure,
which help us count occurrences of tree patterns according to occurrences of
their proper subtrees. Based on the proposed subtree homeomorphism method, we
develop an effective pattern mining algorithm, called TPMiner. We evaluate the
efficiency of TPMiner on several real-world and synthetic datasets. Our
extensive experiments confirm that TPMiner always outperforms well-known
existing algorithms, and in several cases the improvement with respect to
existing algorithms is significant.
| [
{
"version": "v1",
"created": "Wed, 3 Dec 2014 15:57:00 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Apr 2015 17:43:23 GMT"
},
{
"version": "v3",
"created": "Mon, 28 Sep 2015 20:36:21 GMT"
}
] | 2015-09-30T00:00:00 | [
[
"Chehreghani",
"Mostafa Haghir",
""
],
[
"Bruynooghe",
"Maurice",
""
]
] | TITLE: Mining Rooted Ordered Trees under Subtree Homeomorphism
ABSTRACT: Mining frequent tree patterns has many applications in different areas such
as XML data, bioinformatics, and the World Wide Web. The crucial step in frequent
pattern mining is frequency counting, which involves a matching operator to
find occurrences (instances) of a tree pattern in a given collection of trees.
A widely used matching operator for tree-structured data is subtree
homeomorphism, where an edge in the tree pattern is mapped onto an
ancestor-descendant relationship in the given tree. Tree patterns that are
frequent under subtree homeomorphism are usually called embedded patterns. In
this paper, we present an efficient algorithm for subtree homeomorphism with
application to frequent pattern mining. We propose a compact data-structure,
called occ, which stores only information about the rightmost paths of
occurrences and hence can encode and represent several occurrences of a tree
pattern. We then define efficient join operations on the occ data-structure,
which help us count occurrences of tree patterns according to occurrences of
their proper subtrees. Based on the proposed subtree homeomorphism method, we
develop an effective pattern mining algorithm, called TPMiner. We evaluate the
efficiency of TPMiner on several real-world and synthetic datasets. Our
extensive experiments confirm that TPMiner always outperforms well-known
existing algorithms, and in several cases the improvement with respect to
existing algorithms is significant.
| no_new_dataset | 0.951863 |
1504.05351 | Anirban Sen | Siddharth Bora, Harvineet Singh, Anirban Sen, Amitabha Bagchi, Parag
Singla | On the Role of Conductance, Geography and Topology in Predicting Hashtag
Virality | null | Soc. Netw. Anal. Min. 5(1):57, December 2015 | 10.1007/s13278-015-0300-2 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus on three aspects of the early spread of a hashtag in order to
predict whether it will go viral: the network properties of the subset of users
tweeting the hashtag, its geographical properties, and, most importantly, its
conductance-related properties. One of our significant contributions is to
discover the critical role played by the conductance-based features for the
successful prediction of virality. More specifically, we show that the first
derivative of the conductance gives an early indication of whether the hashtag
is going to go viral or not. We present a detailed experimental evaluation of
the effect of our various categories of features on the virality prediction
task. When compared to the baselines and the state-of-the-art techniques
proposed in the literature, our feature set achieves significantly
better accuracy on a large dataset of 7.7 million users and all their tweets
over a period of one month, as well as on existing datasets.
| [
{
"version": "v1",
"created": "Tue, 21 Apr 2015 09:30:34 GMT"
}
] | 2015-09-30T00:00:00 | [
[
"Bora",
"Siddharth",
""
],
[
"Singh",
"Harvineet",
""
],
[
"Sen",
"Anirban",
""
],
[
"Bagchi",
"Amitabha",
""
],
[
"Singla",
"Parag",
""
]
] | TITLE: On the Role of Conductance, Geography and Topology in Predicting Hashtag
Virality
ABSTRACT: We focus on three aspects of the early spread of a hashtag in order to
predict whether it will go viral: the network properties of the subset of users
tweeting the hashtag, its geographical properties, and, most importantly, its
conductance-related properties. One of our significant contributions is to
discover the critical role played by the conductance-based features for the
successful prediction of virality. More specifically, we show that the first
derivative of the conductance gives an early indication of whether the hashtag
is going to go viral or not. We present a detailed experimental evaluation of
the effect of our various categories of features on the virality prediction
task. When compared to the baselines and the state-of-the-art techniques
proposed in the literature, our feature set achieves significantly
better accuracy on a large dataset of 7.7 million users and all their tweets
over a period of one month, as well as on existing datasets.
| no_new_dataset | 0.939637 |
1509.05257 | Ernesto Diaz-Aviles | Hoang Thanh Lam and Ernesto Diaz-Aviles and Alessandra Pascale and
Yiannis Gkoufas and Bei Chen | (Blue) Taxi Destination and Trip Time Prediction from Partial
Trajectories | ECML/PKDD Discovery Challenge 2015 | null | null | null | stat.ML cs.AI cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time estimation of destination and travel time for taxis is of great
importance for existing electronic dispatch systems. We present an approach
based on trip matching and ensemble learning, in which we leverage the patterns
observed in a dataset of roughly 1.7 million taxi journeys to predict the
corresponding final destination and travel time for ongoing taxi trips, as a
solution for the ECML/PKDD Discovery Challenge 2015 competition. The results of
our empirical evaluation show that our approach is effective and very robust,
which led our team -- BlueTaxi -- to the 3rd and 7th position of the final
rankings for the trip time and destination prediction tasks, respectively.
Given the fact that the final rankings were computed using a very small test
set (with only 320 trips), we believe that our approach is one of the most
robust solutions for the challenge based on the consistency of our good results
across the test sets.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2015 13:51:55 GMT"
}
] | 2015-09-30T00:00:00 | [
[
"Lam",
"Hoang Thanh",
""
],
[
"Diaz-Aviles",
"Ernesto",
""
],
[
"Pascale",
"Alessandra",
""
],
[
"Gkoufas",
"Yiannis",
""
],
[
"Chen",
"Bei",
""
]
] | TITLE: (Blue) Taxi Destination and Trip Time Prediction from Partial
Trajectories
ABSTRACT: Real-time estimation of destination and travel time for taxis is of great
importance for existing electronic dispatch systems. We present an approach
based on trip matching and ensemble learning, in which we leverage the patterns
observed in a dataset of roughly 1.7 million taxi journeys to predict the
corresponding final destination and travel time for ongoing taxi trips, as a
solution for the ECML/PKDD Discovery Challenge 2015 competition. The results of
our empirical evaluation show that our approach is effective and very robust,
which led our team -- BlueTaxi -- to the 3rd and 7th position of the final
rankings for the trip time and destination prediction tasks, respectively.
Given the fact that the final rankings were computed using a very small test
set (with only 320 trips), we believe that our approach is one of the most
robust solutions for the challenge based on the consistency of our good results
across the test sets.
| no_new_dataset | 0.934215 |
1509.08863 | Nicolas Tremblay | Nicolas Tremblay, Gilles Puy, Pierre Borgnat, Remi Gribonval, Pierre
Vandergheynst | Accelerated Spectral Clustering Using Graph Filtering Of Random Signals | null | null | null | null | cs.SI cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We build upon recent advances in graph signal processing to propose a faster
spectral clustering algorithm. Indeed, classical spectral clustering is based
on the computation of the first k eigenvectors of the similarity matrix's
Laplacian, whose computation cost, even for sparse matrices, becomes
prohibitive for large datasets. We show that we can estimate the spectral
clustering distance matrix without computing these eigenvectors: by graph
filtering random signals. Also, we take advantage of the stochasticity of these
random vectors to estimate the number of clusters k. We compare our method to
classical spectral clustering on synthetic data, and show that it reaches equal
performance while being faster by a factor at least two for large datasets.
| [
{
"version": "v1",
"created": "Tue, 29 Sep 2015 17:32:48 GMT"
}
] | 2015-09-30T00:00:00 | [
[
"Tremblay",
"Nicolas",
""
],
[
"Puy",
"Gilles",
""
],
[
"Borgnat",
"Pierre",
""
],
[
"Gribonval",
"Remi",
""
],
[
"Vandergheynst",
"Pierre",
""
]
] | TITLE: Accelerated Spectral Clustering Using Graph Filtering Of Random Signals
ABSTRACT: We build upon recent advances in graph signal processing to propose a faster
spectral clustering algorithm. Indeed, classical spectral clustering is based
on the computation of the first k eigenvectors of the similarity matrix's
Laplacian, whose computation cost, even for sparse matrices, becomes
prohibitive for large datasets. We show that we can estimate the spectral
clustering distance matrix without computing these eigenvectors: by graph
filtering random signals. Also, we take advantage of the stochasticity of these
random vectors to estimate the number of clusters k. We compare our method to
classical spectral clustering on synthetic data, and show that it reaches equal
performance while being faster by a factor at least two for large datasets.
| no_new_dataset | 0.95452 |
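The eigenvector-free trick summarized above can be sketched as follows: low-pass filter a few random signals with a Chebyshev polynomial of the Laplacian (matrix-vector products only) and read spectral-clustering distances off the filtered signals. This is a hedged sketch, not the authors' code: the cutoff `lam_cut` stands in for the k-th eigenvalue (which the method estimates rather than fixes), and the cluster-count estimation step is omitted.

```python
import numpy as np

def cheb_coeffs(f, order, lam_max, n_pts=200):
    # Chebyshev coefficients of f on [0, lam_max] via Gauss-Chebyshev quadrature.
    theta = (np.arange(n_pts) + 0.5) * np.pi / n_pts
    y = f(lam_max * (np.cos(theta) + 1) / 2)       # f sampled at mapped nodes
    return np.array([(2.0 / n_pts) * (y * np.cos(k * theta)).sum()
                     for k in range(order + 1)])

def cheb_filter(L, R, c, lam_max):
    # Evaluate (c0/2) R + sum_k c_k T_k(M) R with M = (2/lam_max) L - I,
    # using the three-term recurrence; only mat-vecs with L are needed.
    a = lam_max / 2.0
    t_prev, t_cur = R, (L @ R) / a - R
    out = 0.5 * c[0] * t_prev + c[1] * t_cur
    for ck in c[2:]:
        t_prev, t_cur = t_cur, 2.0 * ((L @ t_cur) / a - t_cur) - t_prev
        out += ck * t_cur
    return out

def spectral_distances(L, n_signals=20, order=50, lam_cut=0.5, lam_max=2.0):
    # Filtered random signals F ~ U_k U_k^T R; pairwise row distances of F
    # estimate the spectral-clustering distances without any eigenvectors.
    n = L.shape[0]
    R = np.random.default_rng(0).normal(size=(n, n_signals)) / np.sqrt(n_signals)
    c = cheb_coeffs(lambda x: (x <= lam_cut).astype(float), order, lam_max)
    F = cheb_filter(L, R, c, lam_max)
    sq = (F * F).sum(axis=1)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * F @ F.T, 0.0))

# Tiny demo: two 5-node cliques; within-cluster distances come out near zero.
B = np.kron(np.eye(2), np.ones((5, 5))) - np.eye(10)
Lap = np.diag(B.sum(1)) - B                    # combinatorial Laplacian
D = spectral_distances(Lap, lam_max=10.0)      # spectrum of Lap lies in [0, 10]
print(D[0, 1], D[0, 5])                        # small vs. large
```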
1509.08888 | Hamid Reza Hassanzadeh | Hamid Reza Hassanzadeh and John H. Phan and May D. Wang | A Semi-Supervised Method for Predicting Cancer Survival Using Incomplete
Clinical Data | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prediction of survival for cancer patients is an open area of research.
However, many of these studies focus on datasets with a large number of
patients. We present a novel method that is specifically designed to address
the challenge of data scarcity, which is often the case for cancer datasets.
Our method is able to use unlabeled data to improve classification by adopting
a semi-supervised training approach to learn an ensemble classifier. The
results of applying our method to three cancer datasets show the promise of
semi-supervised learning for prediction of cancer survival.
| [
{
"version": "v1",
"created": "Tue, 29 Sep 2015 18:53:04 GMT"
}
] | 2015-09-30T00:00:00 | [
[
"Hassanzadeh",
"Hamid Reza",
""
],
[
"Phan",
"John H.",
""
],
[
"Wang",
"May D.",
""
]
] | TITLE: A Semi-Supervised Method for Predicting Cancer Survival Using Incomplete
Clinical Data
ABSTRACT: Prediction of survival for cancer patients is an open area of research.
However, many of these studies focus on datasets with a large number of
patients. We present a novel method that is specifically designed to address
the challenge of data scarcity, which is often the case for cancer datasets.
Our method is able to use unlabeled data to improve classification by adopting
a semi-supervised training approach to learn an ensemble classifier. The
results of applying our method to three cancer datasets show the promise of
semi-supervised learning for prediction of cancer survival.
| no_new_dataset | 0.953794 |
1509.08902 | Gaurav Sharma | Gaurav Sharma and Bernt Schiele | Scalable Nonlinear Embeddings for Semantic Category-based Image
Retrieval | ICCV 2015 preprint | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel algorithm for the task of supervised discriminative
distance learning by nonlinearly embedding vectors into a low dimensional
Euclidean space. We work in the challenging setting where supervision is given
as constraints on similar and dissimilar pairs during training. The proposed method
is derived by an approximate kernelization of a linear Mahalanobis-like
distance metric learning algorithm and can also be seen as a kernel neural
network. The number of model parameters and test time evaluation complexity of
the proposed method are O(dD) where D is the dimensionality of the input
features and d is the dimension of the projection space - this is in contrast
to the usual kernelization methods as, unlike them, the complexity does not
scale linearly with the number of training examples. We propose a stochastic
gradient based learning algorithm which makes the method scalable (w.r.t. the
number of training examples), while being nonlinear. We train the method with
up to half a million training pairs of 4096 dimensional CNN features. We give
empirical comparisons with relevant baselines on seven challenging datasets for
the task of low dimensional semantic category based image retrieval.
| [
{
"version": "v1",
"created": "Tue, 29 Sep 2015 19:41:33 GMT"
}
] | 2015-09-30T00:00:00 | [
[
"Sharma",
"Gaurav",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Scalable Nonlinear Embeddings for Semantic Category-based Image
Retrieval
ABSTRACT: We propose a novel algorithm for the task of supervised discriminative
distance learning by nonlinearly embedding vectors into a low dimensional
Euclidean space. We work in the challenging setting where supervision is given
as constraints on similar and dissimilar pairs during training. The proposed method
is derived by an approximate kernelization of a linear Mahalanobis-like
distance metric learning algorithm and can also be seen as a kernel neural
network. The number of model parameters and test time evaluation complexity of
the proposed method are O(dD) where D is the dimensionality of the input
features and d is the dimension of the projection space - this is in contrast
to the usual kernelization methods as, unlike them, the complexity does not
scale linearly with the number of training examples. We propose a stochastic
gradient based learning algorithm which makes the method scalable (w.r.t. the
number of training examples), while being nonlinear. We train the method with
up to half a million training pairs of 4096 dimensional CNN features. We give
empirical comparisons with relevant baselines on seven challenging datasets for
the task of low dimensional semantic category based image retrieval.
| no_new_dataset | 0.943815 |
1310.6767 | Yogesh Girdhar Yogesh Girdhar | Yogesh Girdhar, David Whitney, and Gregory Dudek | Curiosity Based Exploration for Learning Terrain Models | 7 pages, 5 figures, submitted to ICRA 2014 | null | 10.1109/ICRA.2014.6906913 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a robotic exploration technique in which the goal is to learn a
visual model and be able to distinguish between different terrains and other
visual components in an unknown environment. We use ROST, a realtime online
spatiotemporal topic modeling framework to model these terrains using the
observations made by the robot, and then use an information theoretic path
planning technique to define the exploration path. We conduct experiments with
aerial view and underwater datasets with millions of observations and varying
path lengths, and find that paths that are biased towards locations with high
topic perplexity produce better terrain models with high discriminative power,
especially with paths of length close to the diameter of the world.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2013 20:31:49 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Girdhar",
"Yogesh",
""
],
[
"Whitney",
"David",
""
],
[
"Dudek",
"Gregory",
""
]
] | TITLE: Curiosity Based Exploration for Learning Terrain Models
ABSTRACT: We present a robotic exploration technique in which the goal is to learn a
visual model and be able to distinguish between different terrains and other
visual components in an unknown environment. We use ROST, a realtime online
spatiotemporal topic modeling framework to model these terrains using the
observations made by the robot, and then use an information theoretic path
planning technique to define the exploration path. We conduct experiments with
aerial view and underwater datasets with millions of observations and varying
path lengths, and find that paths that are biased towards locations with high
topic perplexity produce better terrain models with high discriminative power,
especially with paths of length close to the diameter of the world.
| no_new_dataset | 0.951549 |
1401.0702 | Massimo Cafaro | Massimo Cafaro, Marco Pulimeno and Piergiulio Tempesta | A Parallel Space Saving Algorithm For Frequent Items and the Hurwitz
zeta distribution | Accepted for publication. To appear in Information Sciences,
Elsevier. http://www.sciencedirect.com/science/article/pii/S002002551500657X | Information Sciences, Elsevier, Volume 329, 2016, pp. 1 - 19,
ISSN: 0020-0255 | 10.1016/j.ins.2015.09.003 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a message-passing based parallel version of the Space Saving
algorithm designed to solve the $k$--majority problem. The algorithm determines
in parallel frequent items, i.e., those whose frequency is greater than a given
threshold, and is therefore useful for iceberg queries and many other different
contexts. We apply our algorithm to the detection of frequent items in both
real and synthetic datasets whose probability distribution functions are a
Hurwitz and a Zipf distribution respectively. Also, we compare its parallel
performances and accuracy against a parallel algorithm recently proposed for
merging summaries derived by the Space Saving or Frequent algorithms.
| [
{
"version": "v1",
"created": "Fri, 3 Jan 2014 19:34:14 GMT"
},
{
"version": "v10",
"created": "Thu, 13 Aug 2015 07:59:35 GMT"
},
{
"version": "v11",
"created": "Thu, 3 Sep 2015 08:16:34 GMT"
},
{
"version": "v12",
"created": "Sat, 19 Sep 2015 13:34:20 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Jan 2014 09:31:45 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Jan 2014 17:03:52 GMT"
},
{
"version": "v4",
"created": "Mon, 19 May 2014 15:01:57 GMT"
},
{
"version": "v5",
"created": "Tue, 7 Oct 2014 21:17:35 GMT"
},
{
"version": "v6",
"created": "Wed, 3 Dec 2014 16:21:00 GMT"
},
{
"version": "v7",
"created": "Mon, 15 Jun 2015 13:21:55 GMT"
},
{
"version": "v8",
"created": "Tue, 16 Jun 2015 10:24:26 GMT"
},
{
"version": "v9",
"created": "Sun, 2 Aug 2015 09:18:00 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Cafaro",
"Massimo",
""
],
[
"Pulimeno",
"Marco",
""
],
[
"Tempesta",
"Piergiulio",
""
]
] | TITLE: A Parallel Space Saving Algorithm For Frequent Items and the Hurwitz
zeta distribution
ABSTRACT: We present a message-passing based parallel version of the Space Saving
algorithm designed to solve the $k$--majority problem. The algorithm determines,
in parallel, the frequent items, i.e., those whose frequency is greater than a
given threshold, and is therefore useful for iceberg queries and many other
contexts. We apply our algorithm to the detection of frequent items in both
real and synthetic datasets whose probability distribution functions are a
Hurwitz and a Zipf distribution, respectively. Also, we compare its parallel
performances and accuracy against a parallel algorithm recently proposed for
merging summaries derived by the Space Saving or Frequent algorithms.
| no_new_dataset | 0.953057 |
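The sequential Space Saving update rule that the record above parallelizes is
compact enough to sketch; the Python fragment below is an illustrative
re-implementation with hypothetical names, not the authors' message-passing
code (their contribution, the parallel merging of per-partition summaries, is
not reproduced here).

    def space_saving(stream, k):
        # Maintain at most k counters; every item whose true frequency
        # exceeds n/k is guaranteed to be reported (counts may be
        # overestimated by at most the evicted minimum).
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k:
                counters[item] = 1
            else:
                # Evict a minimum-count item; the newcomer inherits min+1.
                victim = min(counters, key=counters.get)
                counters[item] = counters.pop(victim) + 1
        return counters

A parallel version runs this over stream partitions and then merges the
resulting summaries, which is precisely the step the paper evaluates.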
1501.07359 | Tianfu Wu | Tianfu Wu and Bo Li and Song-Chun Zhu | Learning And-Or Models to Represent Context and Occlusion for Car
Detection and Viewpoint Estimation | 14 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a method for learning And-Or models to represent context
and occlusion for car detection and viewpoint estimation. The learned And-Or
model represents car-to-car context and occlusion configurations at three
levels: (i) spatially-aligned cars, (ii) a single car under different occlusion
configurations, and (iii) a small number of parts. The And-Or model embeds a
grammar for representing large structural and appearance variations in a
reconfigurable hierarchy. The learning process consists of two stages in a
weakly supervised way (i.e., only bounding boxes of single cars are annotated).
Firstly, the structure of the And-Or model is learned with three components:
(a) mining multi-car contextual patterns based on layouts of annotated single
car bounding boxes, (b) mining occlusion configurations between single cars,
and (c) learning different combinations of part visibility based on car 3D CAD
simulation. The And-Or model is organized in a directed acyclic graph which
can be inferred by Dynamic Programming. Secondly, the model parameters (for
appearance, deformation and bias) are jointly trained using Weak-Label
Structural SVM. In experiments, we test our model on four car detection
datasets --- the KITTI dataset \cite{Geiger12}, the PASCAL VOC2007 car
dataset~\cite{pascal}, and two self-collected car datasets, namely the
Street-Parking car dataset and the Parking-Lot car dataset, and three datasets
for car viewpoint estimation --- the PASCAL VOC2006 car dataset~\cite{pascal},
the 3D car dataset~\cite{savarese}, and the PASCAL3D+ car
dataset~\cite{xiang_wacv14}. Compared with state-of-the-art variants of
deformable part-based models and other methods, our model achieves significant
improvement consistently on the four detection datasets, and comparable
performance on car viewpoint estimation.
| [
{
"version": "v1",
"created": "Thu, 29 Jan 2015 07:30:13 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Sep 2015 08:25:35 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Wu",
"Tianfu",
""
],
[
"Li",
"Bo",
""
],
[
"Zhu",
"Song-Chun",
""
]
] | TITLE: Learning And-Or Models to Represent Context and Occlusion for Car
Detection and Viewpoint Estimation
ABSTRACT: This paper presents a method for learning And-Or models to represent context
and occlusion for car detection and viewpoint estimation. The learned And-Or
model represents car-to-car context and occlusion configurations at three
levels: (i) spatially-aligned cars, (ii) a single car under different occlusion
configurations, and (iii) a small number of parts. The And-Or model embeds a
grammar for representing large structural and appearance variations in a
reconfigurable hierarchy. The learning process consists of two stages in a
weakly supervised way (i.e., only bounding boxes of single cars are annotated).
Firstly, the structure of the And-Or model is learned with three components:
(a) mining multi-car contextual patterns based on layouts of annotated single
car bounding boxes, (b) mining occlusion configurations between single cars,
and (c) learning different combinations of part visibility based on car 3D CAD
simulation. The And-Or model is organized in a directed acyclic graph which
can be inferred by Dynamic Programming. Secondly, the model parameters (for
appearance, deformation and bias) are jointly trained using Weak-Label
Structural SVM. In experiments, we test our model on four car detection
datasets --- the KITTI dataset \cite{Geiger12}, the PASCAL VOC2007 car
dataset~\cite{pascal}, and two self-collected car datasets, namely the
Street-Parking car dataset and the Parking-Lot car dataset, and three datasets
for car viewpoint estimation --- the PASCAL VOC2006 car dataset~\cite{pascal},
the 3D car dataset~\cite{savarese}, and the PASCAL3D+ car
dataset~\cite{xiang_wacv14}. Compared with state-of-the-art variants of
deformable part-based models and other methods, our model achieves significant
improvement consistently on the four detection datasets, and comparable
performance on car viewpoint estimation.
| no_new_dataset | 0.917783 |
1502.07428 | Elad Liebman | Elad Liebman, Benny Chor and Peter Stone | Representative Selection in Non Metric Datasets | null | null | 10.1080/08839514.2015.1071092 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of representative selection: choosing a
subset of data points from a dataset that best represents its overall set of
elements. This subset needs to inherently reflect the type of information
contained in the entire set, while minimizing redundancy. For such purposes,
clustering may seem like a natural approach. However, existing clustering
methods are not ideally suited for representative selection, especially when
dealing with non-metric data, where only a pairwise similarity measure exists.
In this paper we propose $\delta$-medoids, a novel approach that can be viewed
as an extension to the $k$-medoids algorithm and is specifically suited for
sample representative selection from non-metric data. We empirically validate
$\delta$-medoids in two domains, namely music analysis and motion analysis. We
also show some theoretical bounds on the performance of $\delta$-medoids and
the hardness of representative selection in general.
| [
{
"version": "v1",
"created": "Thu, 26 Feb 2015 04:16:31 GMT"
},
{
"version": "v2",
"created": "Fri, 19 Jun 2015 22:44:29 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Liebman",
"Elad",
""
],
[
"Chor",
"Benny",
""
],
[
"Stone",
"Peter",
""
]
] | TITLE: Representative Selection in Non Metric Datasets
ABSTRACT: This paper considers the problem of representative selection: choosing a
subset of data points from a dataset that best represents its overall set of
elements. This subset needs to inherently reflect the type of information
contained in the entire set, while minimizing redundancy. For such purposes,
clustering may seem like a natural approach. However, existing clustering
methods are not ideally suited for representative selection, especially when
dealing with non-metric data, where only a pairwise similarity measure exists.
In this paper we propose $\delta$-medoids, a novel approach that can be viewed
as an extension to the $k$-medoids algorithm and is specifically suited for
sample representative selection from non-metric data. We empirically validate
$\delta$-medoids in two domains, namely music analysis and motion analysis. We
also show some theoretical bounds on the performance of $\delta$-medoids and
the hardness of representative selection in general.
| no_new_dataset | 0.947186 |
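The representative-selection task in the record above assumes only a pairwise
(possibly non-metric) dissimilarity. As a deliberately simplified stand-in for
the authors' $\delta$-medoids procedure, a one-pass greedy cover under an
assumed dissimilarity function and threshold delta looks like this:

    def greedy_delta_cover(items, dissim, delta):
        # Promote an item to representative whenever no existing
        # representative is within dissimilarity `delta` of it, so every
        # item ends up covered by some representative.
        reps = []
        for x in items:
            if all(dissim(x, r) > delta for r in reps):
                reps.append(x)
        return reps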
1505.00256 | Chenyi Chen | Chenyi Chen, Ari Seff, Alain Kornhauser, Jianxiong Xiao | DeepDriving: Learning Affordance for Direct Perception in Autonomous
Driving | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, there are two major paradigms for vision-based autonomous driving
systems: mediated perception approaches that parse an entire scene to make a
driving decision, and behavior reflex approaches that directly map an input
image to a driving action by a regressor. In this paper, we propose a third
paradigm: a direct perception approach to estimate the affordance for driving.
We propose to map an input image to a small number of key perception indicators
that directly relate to the affordance of a road/traffic state for driving. Our
representation provides a set of compact yet complete descriptions of the scene
to enable a simple controller to drive autonomously. Falling in between the two
extremes of mediated perception and behavior reflex, we argue that our direct
perception representation provides the right level of abstraction. To
demonstrate this, we train a deep Convolutional Neural Network using recordings
from 12 hours of human driving in a video game and show that our model can work
well to drive a car in a very diverse set of virtual environments. We also
train a model for car distance estimation on the KITTI dataset. Results show
that our direct perception approach can generalize well to real driving images.
Source code and data are available on our project website.
| [
{
"version": "v1",
"created": "Fri, 1 May 2015 19:31:13 GMT"
},
{
"version": "v2",
"created": "Mon, 4 May 2015 16:25:38 GMT"
},
{
"version": "v3",
"created": "Sat, 26 Sep 2015 05:17:59 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Chen",
"Chenyi",
""
],
[
"Seff",
"Ari",
""
],
[
"Kornhauser",
"Alain",
""
],
[
"Xiao",
"Jianxiong",
""
]
] | TITLE: DeepDriving: Learning Affordance for Direct Perception in Autonomous
Driving
ABSTRACT: Today, there are two major paradigms for vision-based autonomous driving
systems: mediated perception approaches that parse an entire scene to make a
driving decision, and behavior reflex approaches that directly map an input
image to a driving action by a regressor. In this paper, we propose a third
paradigm: a direct perception approach to estimate the affordance for driving.
We propose to map an input image to a small number of key perception indicators
that directly relate to the affordance of a road/traffic state for driving. Our
representation provides a set of compact yet complete descriptions of the scene
to enable a simple controller to drive autonomously. Falling in between the two
extremes of mediated perception and behavior reflex, we argue that our direct
perception representation provides the right level of abstraction. To
demonstrate this, we train a deep Convolutional Neural Network using recordings
from 12 hours of human driving in a video game and show that our model can work
well to drive a car in a very diverse set of virtual environments. We also
train a model for car distance estimation on the KITTI dataset. Results show
that our direct perception approach can generalize well to real driving images.
Source code and data are available on our project website.
| no_new_dataset | 0.944995 |
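The "simple controller" mentioned in the record above can be illustrated by a
proportional law that maps two predicted affordance indicators to a steering
command; the indicator names and the gain below are assumptions for
illustration, not the paper's exact controller.

    def steering_command(angle_to_road, dist_to_center, lane_width, gain=0.5):
        # Steer to align with the road tangent while correcting the
        # lateral offset, normalized by the lane width.
        return gain * (angle_to_road - dist_to_center / lane_width)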
1506.01760 | Yuchi Ma | Yuchi Ma and Ning Yang and Chuan Li and Lei Zhang and Philip S. Yu | Predicting Neighbor Distribution in Heterogeneous Information Networks | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, considerable attention has been devoted to the prediction problems
arising from heterogeneous information networks. In this paper, we present a
new prediction task, Neighbor Distribution Prediction (NDP), which aims at
predicting the distribution of the labels on neighbors of a given node and is
valuable for many different applications in heterogeneous information networks.
The challenges of NDP mainly come from three aspects: the infinity of the state
space of a neighbor distribution, the sparsity of available data, and how to
fairly evaluate the predictions. To address these challenges, we first propose
an Evolution Factor Model (EFM) for NDP, which utilizes two new structures
proposed in this paper, i.e. Neighbor Distribution Vector (NDV) to represent
the state of a given node's neighbors, and Neighbor Label Evolution Matrix
(NLEM) to capture the dynamics of a neighbor distribution, respectively. We
further propose a learning algorithm for Evolution Factor Model. To overcome
the problem of data sparsity, the learning algorithm first clusters all the
nodes and learns an NLEM for each cluster instead of for each node. For fairly
evaluating the predicting results, we propose a new metric: Virtual Accuracy
(VA), which takes into consideration both the absolute accuracy and the
predictability of a node. Extensive experiments conducted on three real
datasets from different domains validate the effectiveness of our proposed
model EFM and metric VA.
| [
{
"version": "v1",
"created": "Fri, 5 Jun 2015 01:13:54 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Ma",
"Yuchi",
""
],
[
"Yang",
"Ning",
""
],
[
"Li",
"Chuan",
""
],
[
"Zhang",
"Lei",
""
],
[
"Yu",
"Philip S.",
""
]
] | TITLE: Predicting Neighbor Distribution in Heterogeneous Information Networks
ABSTRACT: Recently, considerable attention has been devoted to the prediction problems
arising from heterogeneous information networks. In this paper, we present a
new prediction task, Neighbor Distribution Prediction (NDP), which aims at
predicting the distribution of the labels on neighbors of a given node and is
valuable for many different applications in heterogeneous information networks.
The challenges of NDP mainly come from three aspects: the infinity of the state
space of a neighbor distribution, the sparsity of available data, and how to
fairly evaluate the predictions. To address these challenges, we first propose
an Evolution Factor Model (EFM) for NDP, which utilizes two new structures
proposed in this paper, i.e. Neighbor Distribution Vector (NDV) to represent
the state of a given node's neighbors, and Neighbor Label Evolution Matrix
(NLEM) to capture the dynamics of a neighbor distribution, respectively. We
further propose a learning algorithm for Evolution Factor Model. To overcome
the problem of data sparsity, the learning algorithm first clusters all the
nodes and learns an NLEM for each cluster instead of for each node. For fairly
evaluating the prediction results, we propose a new metric: Virtual Accuracy
(VA), which takes into consideration both the absolute accuracy and the
predictability of a node. Extensive experiments conducted on three real
datasets from different domains validate the effectiveness of our proposed
model EFM and metric VA.
| no_new_dataset | 0.949201 |
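As a toy illustration of the structures named in the record above, a Neighbor
Distribution Vector (NDV) can be treated as a histogram over neighbor labels
and a Neighbor Label Evolution Matrix (NLEM) as a learned linear operator
acting on it; the exact update used in the paper may differ from this sketch.

    import numpy as np

    def predict_ndv(ndv, nlem):
        # One linear evolution step followed by renormalization, so the
        # prediction remains a valid distribution over neighbor labels
        # (assumes the evolved vector has nonzero mass).
        nxt = np.asarray(ndv, dtype=float) @ np.asarray(nlem, dtype=float)
        return nxt / nxt.sum()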
1506.01929 | Philippe Weinzaepfel | Philippe Weinzaepfel, Zaid Harchaoui, Cordelia Schmid | Learning to track for spatio-temporal action localization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an effective approach for spatio-temporal action localization in
realistic videos. The approach first detects proposals at the frame-level and
scores them with a combination of static and motion CNN features. It then
tracks high-scoring proposals throughout the video using a
tracking-by-detection approach. Our tracker relies simultaneously on
instance-level and class-level detectors. The tracks are scored using a
spatio-temporal motion histogram, a descriptor at the track level, in
combination with the CNN features. Finally, we perform temporal localization of
the action using a sliding-window approach at the track level. We present
experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB
and UCF-101 action localization datasets, where our approach outperforms the
state of the art by margins of 15%, 7%, and 12% in mAP, respectively.
| [
{
"version": "v1",
"created": "Fri, 5 Jun 2015 14:48:46 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Sep 2015 11:21:16 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Weinzaepfel",
"Philippe",
""
],
[
"Harchaoui",
"Zaid",
""
],
[
"Schmid",
"Cordelia",
""
]
] | TITLE: Learning to track for spatio-temporal action localization
ABSTRACT: We propose an effective approach for spatio-temporal action localization in
realistic videos. The approach first detects proposals at the frame-level and
scores them with a combination of static and motion CNN features. It then
tracks high-scoring proposals throughout the video using a
tracking-by-detection approach. Our tracker relies simultaneously on
instance-level and class-level detectors. The tracks are scored using a
spatio-temporal motion histogram, a descriptor at the track level, in
combination with the CNN features. Finally, we perform temporal localization of
the action using a sliding-window approach at the track level. We present
experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB
and UCF-101 action localization datasets, where our approach outperforms the
state of the art by margins of 15%, 7%, and 12% in mAP, respectively.
| no_new_dataset | 0.951142 |
1509.04767 | Ziming Zhang | Ziming Zhang and Venkatesh Saligrama | Zero-Shot Learning via Semantic Similarity Embedding | accepted for ICCV 2015 | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider a version of the zero-shot learning problem where
seen class source and target domain data are provided. The goal during
test-time is to accurately predict the class label of an unseen target domain
instance based on revealed source domain side information (\eg attributes) for
unseen classes. Our method is based on viewing each source or target data as a
mixture of seen class proportions and we postulate that the mixture patterns
have to be similar if the two instances belong to the same unseen class. This
perspective leads us to learning source/target embedding functions that map
arbitrary source/target domain data into the same semantic space where similarity
can be readily measured. We develop a max-margin framework to learn these
similarity functions and jointly optimize parameters by means of cross
validation. Our test results are compelling, leading to significant improvement
in terms of accuracy on most benchmark datasets for zero-shot recognition.
| [
{
"version": "v1",
"created": "Tue, 15 Sep 2015 23:18:52 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Sep 2015 20:26:08 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Zhang",
"Ziming",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: Zero-Shot Learning via Semantic Similarity Embedding
ABSTRACT: In this paper we consider a version of the zero-shot learning problem where
seen class source and target domain data are provided. The goal during
test-time is to accurately predict the class label of an unseen target domain
instance based on revealed source domain side information (\eg attributes) for
unseen classes. Our method is based on viewing each source or target data as a
mixture of seen class proportions and we postulate that the mixture patterns
have to be similar if the two instances belong to the same unseen class. This
perspective leads us to learning source/target embedding functions that map
arbitrary source/target domain data into the same semantic space where similarity
can be readily measured. We develop a max-margin framework to learn these
similarity functions and jointly optimize parameters by means of cross
validation. Our test results are compelling, leading to significant improvement
in terms of accuracy on most benchmark datasets for zero-shot recognition.
| no_new_dataset | 0.952618 |
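The central premise of the record above is that two instances of the same
unseen class should exhibit similar mixture patterns over seen classes. A
minimal realization of that comparison, with the inner product chosen here
purely as an assumed illustration, is:

    import numpy as np

    def mixture_similarity(p, q):
        # p and q are nonnegative seen-class proportion vectors, each
        # summing to one; same-class instances should score high.
        return float(np.dot(p, q))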
1509.05490 | Han Xiao Bookman | Han Xiao, Minlie Huang, Yu Hao, Xiaoyan Zhu | TransA: An Adaptive Approach for Knowledge Graph Embedding | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge representation is a major topic in AI, and many studies attempt to
represent entities and relations of knowledge bases in a continuous vector
space. Among these attempts, translation-based methods build entity and
relation vectors by minimizing the translation loss from a head entity to a
tail one. In spite of their success, translation-based methods suffer from an
oversimplified loss metric and are not competitive enough to model the diverse
and complex entities/relations in knowledge bases. To address
this issue, we propose \textbf{TransA}, an adaptive metric approach for
embedding, utilizing the metric learning ideas to provide a more flexible
embedding method. Experiments are conducted on the benchmark datasets and our
proposed method makes significant and consistent improvements over the
state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2015 02:40:07 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Sep 2015 02:21:20 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Xiao",
"Han",
""
],
[
"Huang",
"Minlie",
""
],
[
"Hao",
"Yu",
""
],
[
"Zhu",
"Xiaoyan",
""
]
] | TITLE: TransA: An Adaptive Approach for Knowledge Graph Embedding
ABSTRACT: Knowledge representation is a major topic in AI, and many studies attempt to
represent entities and relations of knowledge bases in a continuous vector
space. Among these attempts, translation-based methods build entity and
relation vectors by minimizing the translation loss from a head entity to a
tail one. In spite of their success, translation-based methods suffer from an
oversimplified loss metric and are not competitive enough to model the diverse
and complex entities/relations in knowledge bases. To address
this issue, we propose \textbf{TransA}, an adaptive metric approach for
embedding, utilizing the metric learning ideas to provide a more flexible
embedding method. Experiments are conducted on the benchmark datasets and our
proposed method makes significant and consistent improvements over the
state-of-the-art baselines.
| no_new_dataset | 0.94474 |
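To make the loss-metric discussion in the record above concrete: a plain
translation-based score uses the norm of h + r - t, whereas an adaptive
approach weights that residual with a learned, relation-specific matrix. The
Mahalanobis-style form below is an assumed illustration, not necessarily the
paper's exact parameterization.

    import numpy as np

    def translation_score(h, r, t):
        # Plain translation loss; lower means the triple is more plausible.
        return float(np.linalg.norm(h + r - t, ord=1))

    def adaptive_score(h, r, t, W_r):
        # Weight the residual with a relation-specific matrix W_r
        # (assumed symmetric and nonnegative here).
        d = np.abs(h + r - t)
        return float(d @ W_r @ d)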
1509.06114 | Inwook Shim | Inwook Shim, Seunghak Shin, Yunsu Bok, Kyungdon Joo, Dong-Geol Choi,
Joon-Young Lee, Jaesik Park, Jun-Ho Oh, In So Kweon | Vision System and Depth Processing for DRC-HUBO+ | submitted in ICRA 2016 | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper presents a vision system and a depth processing algorithm for
DRC-HUBO+, the winner of the DRC finals 2015. Our system is designed to
reliably capture 3D information of a scene and objects while remaining robust
to challenging environmental conditions. We also propose a depth-map upsampling
method that produces an outlier-free depth map by explicitly handling depth
outliers. Our system is suitable for an interactive robot operating in the real
world, which requires accurate object detection and pose estimation. We
evaluate our depth processing algorithm against state-of-the-art algorithms on
several synthetic and real-world
datasets.
| [
{
"version": "v1",
"created": "Mon, 21 Sep 2015 06:17:21 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Sep 2015 15:05:37 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Shim",
"Inwook",
""
],
[
"Shin",
"Seunghak",
""
],
[
"Bok",
"Yunsu",
""
],
[
"Joo",
"Kyungdon",
""
],
[
"Choi",
"Dong-Geol",
""
],
[
"Lee",
"Joon-Young",
""
],
[
"Park",
"Jaesik",
""
],
[
"Oh",
"Jun-Ho",
""
],
[
"Kweon",
"In So",
""
]
] | TITLE: Vision System and Depth Processing for DRC-HUBO+
ABSTRACT: This paper presents a vision system and a depth processing algorithm for
DRC-HUBO+, the winner of the DRC finals 2015. Our system is designed to
reliably capture 3D information of a scene and objects while remaining robust
to challenging environmental conditions. We also propose a depth-map upsampling
method that produces an outlier-free depth map by explicitly handling depth
outliers. Our system is suitable for an interactive robot operating in the real
world, which requires accurate object detection and pose estimation. We
evaluate our depth processing algorithm against state-of-the-art algorithms on
several synthetic and real-world
datasets.
| no_new_dataset | 0.951639 |
1509.07266 | Smita Roy | Smita Roy, Samrat Mondal and Asif Ekbal | CRDT: Correlation Ratio Based Decision Tree Model for Healthcare Data
Mining | null | null | null | null | cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The phenomenal growth in healthcare data has inspired us to investigate
robust and scalable models for data mining. For classification problems, the
Information Gain (IG) based Decision Tree is one of the popular choices.
However, depending upon the nature of the dataset, an IG based Decision Tree
may not always perform well, as it prefers the attribute with a larger number
of distinct values as the splitting attribute. Healthcare datasets generally
have many attributes, and each attribute generally has many distinct values. In
this paper, we focus on these characteristics of the datasets while analysing
the performance of our proposed approach, which is a variant of the Decision
Tree model and uses the concept of the Correlation Ratio (CR). Unlike the IG
based approach, this CR based approach has no bias towards attributes with a
larger number of distinct values. We have applied our model on some
benchmark healthcare datasets to show the effectiveness of the proposed
technique.
| [
{
"version": "v1",
"created": "Thu, 24 Sep 2015 07:57:27 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Roy",
"Smita",
""
],
[
"Mondal",
"Samrat",
""
],
[
"Ekbal",
"Asif",
""
]
] | TITLE: CRDT: Correlation Ratio Based Decision Tree Model for Healthcare Data
Mining
ABSTRACT: The phenomenal growth in healthcare data has inspired us to investigate
robust and scalable models for data mining. For classification problems, the
Information Gain (IG) based Decision Tree is one of the popular choices.
However, depending upon the nature of the dataset, an IG based Decision Tree
may not always perform well, as it prefers the attribute with a larger number
of distinct values as the splitting attribute. Healthcare datasets generally
have many attributes, and each attribute generally has many distinct values. In
this paper, we focus on these characteristics of the datasets while analysing
the performance of our proposed approach, which is a variant of the Decision
Tree model and uses the concept of the Correlation Ratio (CR). Unlike the IG
based approach, this CR based approach has no bias towards attributes with a
larger number of distinct values. We have applied our model on some
benchmark healthcare datasets to show the effectiveness of the proposed
technique.
| no_new_dataset | 0.956227 |
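For reference, the correlation ratio between a categorical attribute and a
numeric target is the between-group share of total variance; a minimal
implementation is below. How CRDT turns this statistic into a full splitting
rule is the paper's contribution and is not reproduced here.

    import numpy as np

    def correlation_ratio(categories, values):
        # eta^2 = SS_between / SS_total for a categorical attribute
        # against a numeric target; 0 means no association, 1 means the
        # attribute fully determines the target.
        cats = np.asarray(categories)
        vals = np.asarray(values, dtype=float)
        grand_mean = vals.mean()
        ss_total = ((vals - grand_mean) ** 2).sum()
        ss_between = sum(
            vals[cats == c].size * (vals[cats == c].mean() - grand_mean) ** 2
            for c in np.unique(cats)
        )
        return ss_between / ss_total if ss_total > 0 else 0.0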
1509.07479 | Michael Wilber | Michael J. Wilber, Iljung S. Kwak, David Kriegman, Serge Belongie | Learning Concept Embeddings with Combined Human-Machine Expertise | To appear at ICCV 2015. (This version has updated author affiliations
and updated footnotes.) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents our work on "SNaCK," a low-dimensional concept embedding
algorithm that combines human expertise with automatic machine similarity
kernels. Both parts are complementary: human insight can capture relationships
that are not apparent from the object's visual similarity and the machine can
help relieve the human from having to exhaustively specify many constraints. We
show that our SNaCK embeddings are useful in several tasks: distinguishing
prime and nonprime numbers on MNIST, discovering labeling mistakes in the
Caltech UCSD Birds (CUB) dataset with the help of deep-learned features,
creating training datasets for bird classifiers, capturing subjective human
taste on a new dataset of 10,000 foods, and qualitatively exploring an
unstructured set of pictographic characters. Comparisons with the
state-of-the-art in these tasks show that SNaCK produces better concept
embeddings that require less human supervision than the leading methods.
| [
{
"version": "v1",
"created": "Thu, 24 Sep 2015 19:05:09 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Sep 2015 17:19:05 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Wilber",
"Michael J.",
""
],
[
"Kwak",
"Iljung S.",
""
],
[
"Kriegman",
"David",
""
],
[
"Belongie",
"Serge",
""
]
] | TITLE: Learning Concept Embeddings with Combined Human-Machine Expertise
ABSTRACT: This paper presents our work on "SNaCK," a low-dimensional concept embedding
algorithm that combines human expertise with automatic machine similarity
kernels. Both parts are complementary: human insight can capture relationships
that are not apparent from the object's visual similarity and the machine can
help relieve the human from having to exhaustively specify many constraints. We
show that our SNaCK embeddings are useful in several tasks: distinguishing
prime and nonprime numbers on MNIST, discovering labeling mistakes in the
Caltech UCSD Birds (CUB) dataset with the help of deep-learned features,
creating training datasets for bird classifiers, capturing subjective human
taste on a new dataset of 10,000 foods, and qualitatively exploring an
unstructured set of pictographic characters. Comparisons with the
state-of-the-art in these tasks show that SNaCK produces better concept
embeddings that require less human supervision than the leading methods.
| new_dataset | 0.958615 |
1509.07961 | Vyacheslav Olshevsky | Vyacheslav Olshevsky, Andrey Divin, Elin Eriksson, Stefano Markidis,
Giovanni Lapenta | Energy dissipation in magnetic null points at kinetic scales | null | The Astrophysical Journal, Volume 807, Issue 2, article id. 155,
11 pp. (2015) | 10.1088/0004-637X/807/2/155 | null | astro-ph.EP astro-ph.SR physics.plasm-ph physics.space-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use kinetic particle-in-cell and magnetohydrodynamic simulations supported
by an observational dataset to investigate magnetic reconnection in clusters of
null points in space plasma. The magnetic configuration under investigation is
driven by fast adiabatic flux rope compression that dissipates almost half of
the initial magnetic field energy. In this phase powerful currents are excited
producing secondary instabilities, and the system is brought into a state of
`intermittent turbulence' within a few ion gyro-periods. Reconnection events
are distributed all over the simulation domain and energy dissipation is rather
volume-filling. Numerous spiral null points interconnected via their spines
form null lines embedded into magnetic flux ropes; null point pairs demonstrate
the signatures of torsional spine reconnection. However, energy dissipation
mainly happens in the shear layers formed by adjacent flux ropes with
oppositely directed currents. In these regions radial null pairs are
spontaneously emerging and vanishing, associated with electron streams and
small-scale current sheets. The number of spiral nulls in the simulation
outweighs the number of radial nulls by a factor of 5--10, in accordance with
Cluster observations in the Earth's magnetosheath. Twisted magnetic fields with
embedded spiral null points might indicate the regions of major energy
dissipation for future space missions such as Magnetospheric Multiscale Mission
(MMS).
| [
{
"version": "v1",
"created": "Sat, 26 Sep 2015 11:20:19 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Olshevsky",
"Vyacheslav",
""
],
[
"Divin",
"Andrey",
""
],
[
"Eriksson",
"Elin",
""
],
[
"Markidis",
"Stefano",
""
],
[
"Lapenta",
"Giovanni",
""
]
] | TITLE: Energy dissipation in magnetic null points at kinetic scales
ABSTRACT: We use kinetic particle-in-cell and magnetohydrodynamic simulations supported
by an observational dataset to investigate magnetic reconnection in clusters of
null points in space plasma. The magnetic configuration under investigation is
driven by fast adiabatic flux rope compression that dissipates almost half of
the initial magnetic field energy. In this phase powerful currents are excited
producing secondary instabilities, and the system is brought into a state of
`intermittent turbulence' within a few ion gyro-periods. Reconnection events
are distributed all over the simulation domain and energy dissipation is rather
volume-filling. Numerous spiral null points interconnected via their spines
form null lines embedded into magnetic flux ropes; null point pairs demonstrate
the signatures of torsional spine reconnection. However, energy dissipation
mainly happens in the shear layers formed by adjacent flux ropes with
oppositely directed currents. In these regions radial null pairs are
spontaneously emerging and vanishing, associated with electron streams and
small-scale current sheets. The number of spiral nulls in the simulation
outweighs the number of radial nulls by a factor of 5--10, in accordance with
Cluster observations in the Earth's magnetosheath. Twisted magnetic fields with
embedded spiral null points might indicate the regions of major energy
dissipation for future space missions such as Magnetospheric Multiscale Mission
(MMS).
| no_new_dataset | 0.952706 |
1509.07996 | Yixuan Li | Yixuan Li, Kun He, David Bindel and John Hopcroft | Overlapping Community Detection via Local Spectral Clustering | Extended version to the conference proceeding in WWW'15 | null | null | null | cs.SI cs.DS physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large graphs arise in a number of contexts and understanding their structure
and extracting information from them is an important research area. Early
algorithms on mining communities have focused on the global structure, and
often run in time that scales with the size of the entire graph. Nowadays, as we
often explore networks with billions of vertices and find communities of size
hundreds, it is crucial to shift our attention from macroscopic structure to
microscopic structure in large networks. A growing body of work has been
adopting local expansion methods in order to identify the community members
from a few exemplary seed members.
In this paper, we propose LEMON (Local Expansion via Minimum One Norm), a
novel approach for finding overlapping communities. The algorithm
finds the community by seeking a sparse vector in the span of the local spectra
such that the seeds are in its support. We show that LEMON can achieve the
highest detection accuracy among state-of-the-art proposals. The running time
depends on the size of the community rather than that of the entire graph. The
algorithm is easy to implement, and is highly parallelizable. We further
provide theoretical analysis on the local spectral properties, bounding the
measure of tightness of extracted community in terms of the eigenvalues of
graph Laplacian.
Moreover, given that networks are not all similar in nature, a comprehensive
analysis on how the local expansion approach is suited for uncovering
communities in different networks is still lacking. We thoroughly evaluate our
approach using both synthetic and real-world datasets across different domains,
and analyze the empirical variations when applying our method to inherently
different networks in practice. In addition, the heuristics on how the seed set
quality and quantity would affect the performance are provided.
| [
{
"version": "v1",
"created": "Sat, 26 Sep 2015 15:27:38 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Li",
"Yixuan",
""
],
[
"He",
"Kun",
""
],
[
"Bindel",
"David",
""
],
[
"Hopcroft",
"John",
""
]
] | TITLE: Overlapping Community Detection via Local Spectral Clustering
ABSTRACT: Large graphs arise in a number of contexts and understanding their structure
and extracting information from them is an important research area. Early
algorithms on mining communities have focused on the global structure, and
often run in time that scales with the size of the entire graph. Nowadays, as we
often explore networks with billions of vertices and find communities of size
hundreds, it is crucial to shift our attention from macroscopic structure to
microscopic structure in large networks. A growing body of work has been
adopting local expansion methods in order to identify the community members
from a few exemplary seed members.
In this paper, we propose LEMON (Local Expansion via Minimum One Norm), a
novel approach for finding overlapping communities. The algorithm
finds the community by seeking a sparse vector in the span of the local spectra
such that the seeds are in its support. We show that LEMON can achieve the
highest detection accuracy among state-of-the-art proposals. The running time
depends on the size of the community rather than that of the entire graph. The
algorithm is easy to implement, and is highly parallelizable. We further
provide theoretical analysis on the local spectral properties, bounding the
measure of tightness of extracted community in terms of the eigenvalues of
graph Laplacian.
Moreover, given that networks are not all similar in nature, a comprehensive
analysis on how the local expansion approach is suited for uncovering
communities in different networks is still lacking. We thoroughly evaluate our
approach using both synthetic and real-world datasets across different domains,
and analyze the empirical variations when applying our method to inherently
different networks in practice. In addition, the heuristics on how the seed set
quality and quantity would affect the performance are provided.
| no_new_dataset | 0.94625 |
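LEMON itself seeks a sparse vector in a local spectral subspace with the seeds
in its support; as a deliberately simpler stand-in that conveys only the
seed-expansion idea, the sketch below diffuses random-walk mass from the seed
set and keeps the highest-scoring nodes (assuming a dense adjacency matrix
with no isolated nodes).

    import numpy as np

    def seeded_expansion(A, seeds, steps=3, size=50):
        # Row-normalize the adjacency matrix into a transition matrix.
        P = A / A.sum(axis=1, keepdims=True)
        p = np.zeros(A.shape[0])
        p[list(seeds)] = 1.0 / len(seeds)
        for _ in range(steps):
            p = p @ P  # one step of a random walk from the seed set
        return np.argsort(-p)[:size]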
1509.08038 | Wentao Zhu | Wentao Zhu, Jun Miao, Laiyun Qing, Xilin Chen | Deep Trans-layer Unsupervised Networks for Representation Learning | 21 pages, 3 figures | null | null | null | cs.NE cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning features from massive unlabelled data is a prevalent topic for
high-level tasks in many machine learning applications. The recent great
improvements on benchmark data sets achieved by increasingly complex
unsupervised learning methods and deep learning models with lots of parameters
usually require many tedious tricks and much expertise to tune. However,
filters learned by these complex architectures are quite similar to standard
hand-crafted features visually. In this paper, unsupervised learning methods,
such as PCA or auto-encoder, are employed as the building block to learn filter
banks at each layer. The lower layer responses are transferred to the last
layer (trans-layer) to form a more complete representation retaining more
information. In addition, some beneficial methods such as local contrast
normalization and whitening are added to the proposed deep trans-layer networks
to further boost performance. The trans-layer representations are followed by
block histograms with binary encoder schema to learn translation and rotation
invariant representations, which are utilized to do high-level tasks such as
recognition and classification. Compared to traditional deep learning methods,
the implemented feature learning method has far fewer parameters and is
validated in several typical experiments, such as digit recognition on MNIST
and MNIST variations, object recognition on the Caltech 101 dataset, and face
verification on the LFW dataset. The deep trans-layer unsupervised learning
achieves 99.45% accuracy on the MNIST dataset, 67.11% accuracy with 15 samples
per class and 75.98% accuracy with 30 samples per class on the Caltech 101
dataset, and 87.10% on the LFW dataset.
| [
{
"version": "v1",
"created": "Sun, 27 Sep 2015 00:46:08 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Zhu",
"Wentao",
""
],
[
"Miao",
"Jun",
""
],
[
"Qing",
"Laiyun",
""
],
[
"Chen",
"Xilin",
""
]
] | TITLE: Deep Trans-layer Unsupervised Networks for Representation Learning
ABSTRACT: Learning features from massive unlabelled data is a prevalent topic for
high-level tasks in many machine learning applications. The recent great
improvements on benchmark data sets achieved by increasingly complex
unsupervised learning methods and deep learning models with lots of parameters
usually require many tedious tricks and much expertise to tune. However,
filters learned by these complex architectures are quite similar to standard
hand-crafted features visually. In this paper, unsupervised learning methods,
such as PCA or auto-encoder, are employed as the building block to learn filter
banks at each layer. The lower layer responses are transferred to the last
layer (trans-layer) to form a more complete representation retaining more
information. In addition, some beneficial methods such as local contrast
normalization and whitening are added to the proposed deep trans-layer networks
to further boost performance. The trans-layer representations are followed by
block histograms with binary encoder schema to learn translation and rotation
invariant representations, which are utilized to do high-level tasks such as
recognition and classification. Compared to traditional deep learning methods,
the implemented feature learning method has far fewer parameters and is
validated in several typical experiments, such as digit recognition on MNIST
and MNIST variations, object recognition on the Caltech 101 dataset, and face
verification on the LFW dataset. The deep trans-layer unsupervised learning
achieves 99.45% accuracy on the MNIST dataset, 67.11% accuracy with 15 samples
per class and 75.98% accuracy with 30 samples per class on the Caltech 101
dataset, and 87.10% on the LFW dataset.
| no_new_dataset | 0.950273 |
1509.08075 | Hamid Izadinia | Hamid Izadinia, Fereshteh Sadeghi, Santosh Kumar Divvala, Yejin Choi,
Ali Farhadi | Segment-Phrase Table for Semantic Segmentation, Visual Entailment and
Paraphrasing | 9 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Segment-Phrase Table (SPT), a large collection of bijective
associations between textual phrases and their corresponding segmentations.
Leveraging recent progress in object recognition and natural language
semantics, we show how we can successfully build a high-quality segment-phrase
table using minimal human supervision. More importantly, we demonstrate the
unique value unleashed by this rich bimodal resource, for both vision and
natural language understanding. First, we show that fine-grained textual
labels facilitate contextual reasoning that helps in satisfying semantic
constraints across image segments. This feature enables us to achieve
state-of-the-art segmentation results on benchmark datasets. Next, we show that
the association of high-quality segmentations to textual phrases aids in richer
semantic understanding and reasoning of these textual phrases. Leveraging this
feature, we motivate the problem of visual entailment and visual paraphrasing,
and demonstrate its utility on a large dataset.
| [
{
"version": "v1",
"created": "Sun, 27 Sep 2015 10:01:42 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Izadinia",
"Hamid",
""
],
[
"Sadeghi",
"Fereshteh",
""
],
[
"Divvala",
"Santosh Kumar",
""
],
[
"Choi",
"Yejin",
""
],
[
"Farhadi",
"Ali",
""
]
] | TITLE: Segment-Phrase Table for Semantic Segmentation, Visual Entailment and
Paraphrasing
ABSTRACT: We introduce Segment-Phrase Table (SPT), a large collection of bijective
associations between textual phrases and their corresponding segmentations.
Leveraging recent progress in object recognition and natural language
semantics, we show how we can successfully build a high-quality segment-phrase
table using minimal human supervision. More importantly, we demonstrate the
unique value unleashed by this rich bimodal resource, for both vision and
natural language understanding. First, we show that fine-grained textual
labels facilitate contextual reasoning that helps in satisfying semantic
constraints across image segments. This feature enables us to achieve
state-of-the-art segmentation results on benchmark datasets. Next, we show that
the association of high-quality segmentations to textual phrases aids in richer
semantic understanding and reasoning of these textual phrases. Leveraging this
feature, we motivate the problem of visual entailment and visual paraphrasing,
and demonstrate its utility on a large dataset.
| no_new_dataset | 0.938124 |
1509.08095 | M\'arton Karsai | Laura Alessandretti, M\'arton Karsai, Laetitia Gauvin | User-based representation of time-resolved multimodal public
transportation networks | 24 pages, 8 figures | null | null | null | physics.soc-ph cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal transportation systems can be represented as time-resolved
multilayer networks where different transportation modes connecting the same
set of nodes are associated to distinct network layers. Their quantitative
description became possible recently due to openly accessible datasets
describing the geolocalised transportation dynamics of large urban areas.
Advancements call for novel analytics that combine earlier established
methods and exploit the inherent complexity of the data. Here, our aim is to
provide a novel user-based methodological framework to represent public
transportation systems, considering the total travel time, its variability
across the schedule, and the number of transfers necessary.
Using this framework we analyse public transportation systems in several French
municipal areas. We incorporate travel routes and times over multiple
transportation modes to identify efficient transportation connections and
non-trivial connectivity patterns. The proposed method enables us to quantify
the network's overall efficiency as compared to the specific demand and to the
car alternative.
| [
{
"version": "v1",
"created": "Sun, 27 Sep 2015 14:03:09 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Alessandretti",
"Laura",
""
],
[
"Karsai",
"Márton",
""
],
[
"Gauvin",
"Laetitia",
""
]
] | TITLE: User-based representation of time-resolved multimodal public
transportation networks
ABSTRACT: Multimodal transportation systems can be represented as time-resolved
multilayer networks where different transportation modes connecting the same
set of nodes are associated to distinct network layers. Their quantitative
description became possible recently due to openly accessible datasets
describing the geolocalised transportation dynamics of large urban areas.
Advancements call for novel analytics that combine earlier established
methods and exploit the inherent complexity of the data. Here, our aim is to
provide a novel user-based methodological framework to represent public
transportation systems, considering the total travel time, its variability
across the schedule, and the number of transfers necessary.
Using this framework we analyse public transportation systems in several French
municipal areas. We incorporate travel routes and times over multiple
transportation modes to identify efficient transportation connections and
non-trivial connectivity patterns. The proposed method enables us to quantify
the network's overall efficiency as compared to the specific demand and to the
car alternative.
| no_new_dataset | 0.9455 |
1509.08197 | Xuan Luo | Xuan Luo, Xuejiao Bai, Shuo Li, Hongtao Lu, Sei-ichiro Kamata | Fast Non-local Stereo Matching based on Hierarchical Disparity
Prediction | 9 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stereo matching is the key step in estimating depth from two or more images.
Recently, some tree-based non-local stereo matching methods have been proposed,
which achieved state-of-the-art performance. The algorithms employed some tree
structures to aggregate cost and thus improved the performance and reduced the
computation load of the stereo matching. However, the computational complexity
of these tree-based algorithms is still high because they search over the
entire disparity range. In addition, the extreme greediness of the minimum
spanning tree (MST) causes poor performance in large areas with similar
colors but varying disparities. In this paper, we propose an efficient stereo
matching method using a hierarchical disparity prediction (HDP) framework to
dramatically reduce the disparity search range so as to speed up the tree-based
non-local stereo methods. Our disparity prediction scheme works on a graph
pyramid derived from the image whose disparity is to be estimated. We utilize
the disparity of an upper graph to predict a small disparity range for the
lower
graph. Some independent disparity trees (DT) are generated to form a disparity
prediction forest (HDPF) over which the cost aggregation is made. When combined
with the state-of-the-art tree-based methods, our scheme not only dramatically
speeds up the original methods but also improves their performance by
alleviating the second drawback of the tree-based methods. This is partially
because our DTs overcome the extreme greediness of the MST. Extensive
experimental results on some benchmark datasets demonstrate the effectiveness
and efficiency of our framework. For example, the segment-tree based stereo
matching becomes about 25.57 times faster and 2.2% more accurate over the
Middlebury 2006 full-size dataset.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2015 05:00:01 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Luo",
"Xuan",
""
],
[
"Bai",
"Xuejiao",
""
],
[
"Li",
"Shuo",
""
],
[
"Lu",
"Hongtao",
""
],
[
"Kamata",
"Sei-ichiro",
""
]
] | TITLE: Fast Non-local Stereo Matching based on Hierarchical Disparity
Prediction
ABSTRACT: Stereo matching is the key step in estimating depth from two or more images.
Recently, some tree-based non-local stereo matching methods have been proposed,
which achieved state-of-the-art performance. The algorithms employed some tree
structures to aggregate cost and thus improved the performance and reduced the
computation load of the stereo matching. However, the computational complexity
of these tree-based algorithms is still high because they search over the
entire disparity range. In addition, the extreme greediness of the minimum
spanning tree (MST) causes poor performance in large areas with similar
colors but varying disparities. In this paper, we propose an efficient stereo
matching method using a hierarchical disparity prediction (HDP) framework to
dramatically reduce the disparity search range so as to speed up the tree-based
non-local stereo methods. Our disparity prediction scheme works on a graph
pyramid derived from the image whose disparity is to be estimated. We utilize
the disparity of an upper graph to predict a small disparity range for the
lower
graph. Some independent disparity trees (DT) are generated to form a disparity
prediction forest (HDPF) over which the cost aggregation is made. When combined
with the state-of-the-art tree-based methods, our scheme not only dramatically
speeds up the original methods but also improves their performance by
alleviating the second drawback of the tree-based methods. This is partially
because our DTs overcome the extreme greediness of the MST. Extensive
experimental results on some benchmark datasets demonstrate the effectiveness
and efficiency of our framework. For example, the segment-tree based stereo
matching becomes about 25.57 times faster and 2.2% more accurate over the
Middlebury 2006 full-size dataset.
| no_new_dataset | 0.95222 |
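The coarse-to-fine step in the record above, predicting a narrow disparity
search range for a finer pyramid level from the level above it, can be
sketched as follows; the upsampling scheme and the margin are illustrative
assumptions rather than the paper's exact scheme.

    import numpy as np

    def predict_disparity_range(coarse_disp, scale=2, margin=2):
        # Upsample the coarse disparity map, rescale its values to the
        # finer resolution, and allow +/- margin around it as the
        # per-pixel search range at the finer level.
        fine = np.kron(coarse_disp, np.ones((scale, scale))) * scale
        return fine - margin, fine + margin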
1509.08239 | Biju Issac | Mohanad Albayati and Biju Issac | Analysis of Intelligent Classifiers and Enhancing the Detection Accuracy
for Intrusion Detection System | null | International Journal of Computational Intelligence Systems, 8:5,
841-853 (2015) | 10.1080/18756891.2015.1084705 | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we discuss and analyze some of the intelligent classifiers
which allow for automatic detection and classification of network attacks for
any intrusion detection system. We will proceed initially with their analysis
using the WEKA software to work with the classifiers on a well-known IDS
(Intrusion Detection System) dataset like the NSL-KDD dataset. The NSL-KDD dataset
of network attacks was created in a military network by MIT Lincoln Labs. Then
we will discuss and experiment some of the hybrid AI (Artificial Intelligence)
classifiers that can be used for IDS, and finally we developed Java software
with the three most efficient classifiers and compared it with other options.
The outputs show the detection accuracy and efficiency of the single and
combined classifiers used.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2015 09:01:30 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Albayati",
"Mohanad",
""
],
[
"Issac",
"Biju",
""
]
] | TITLE: Analysis of Intelligent Classifiers and Enhancing the Detection Accuracy
for Intrusion Detection System
ABSTRACT: In this paper we discuss and analyze some of the intelligent classifiers
which allow for automatic detection and classification of network attacks for
any intrusion detection system. We will proceed initially with their analysis
using the WEKA software to work with the classifiers on a well-known IDS
(Intrusion Detection System) dataset like the NSL-KDD dataset. The NSL-KDD dataset
of network attacks was created in a military network by MIT Lincoln Labs. Then
we will discuss and experiment some of the hybrid AI (Artificial Intelligence)
classifiers that can be used for IDS, and finally we developed Java software
with the three most efficient classifiers and compared it with other options.
The outputs show the detection accuracy and efficiency of the single and
combined classifiers used.
| new_dataset | 0.940953 |
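The record above combines WEKA classifiers inside a Java tool; a rough Python
analogue of such a combined classifier, with placeholder members rather than
the three the authors actually selected, could be assembled as follows.

    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    def build_ids_ensemble():
        # Majority vote over three heterogeneous base classifiers; train
        # with .fit(X, y) on NSL-KDD-style feature vectors and labels.
        return VotingClassifier(
            estimators=[
                ("tree", DecisionTreeClassifier()),
                ("forest", RandomForestClassifier(n_estimators=100)),
                ("nb", GaussianNB()),
            ],
            voting="hard",
        )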
1509.08360 | Dinesh Ramasamy | Dinesh Ramasamy and Upamanyu Madhow | Compressive spectral embedding: sidestepping the SVD | NIPS 2015 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spectral embedding based on the Singular Value Decomposition (SVD) is a
widely used "preprocessing" step in many learning tasks, typically leading to
dimensionality reduction by projecting onto a number of dominant singular
vectors and rescaling the coordinate axes (by a predefined function of the
singular value). However, the number of such vectors required to capture
problem structure grows with problem size, and even partial SVD computation
becomes a bottleneck. In this paper, we propose a low-complexity compressive
spectral embedding algorithm, which employs random projections and finite order
polynomial expansions to compute approximations to SVD-based embedding. For an
$m \times n$ matrix with $T$ non-zeros, its time complexity is
$O((T+m+n)\log(m+n))$, and the embedding dimension is $O(\log(m+n))$, both of
which are independent of
the number of singular vectors whose effect we wish to capture. To the best of
our knowledge, this is the first work to circumvent this dependence on the
number of singular vectors for general SVD-based embeddings. The key to
sidestepping the SVD is the observation that, for downstream inference tasks
such as clustering and classification, we are only interested in using the
resulting embedding to evaluate pairwise similarity metrics derived from the
euclidean norm, rather than capturing the effect of the underlying matrix on
arbitrary vectors as a partial SVD tries to do. Our numerical results on
network datasets demonstrate the efficacy of the proposed method, and motivate
further exploration of its application to large-scale inference tasks.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2015 15:32:20 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Ramasamy",
"Dinesh",
""
],
[
"Madhow",
"Upamanyu",
""
]
] | TITLE: Compressive spectral embedding: sidestepping the SVD
ABSTRACT: Spectral embedding based on the Singular Value Decomposition (SVD) is a
widely used "preprocessing" step in many learning tasks, typically leading to
dimensionality reduction by projecting onto a number of dominant singular
vectors and rescaling the coordinate axes (by a predefined function of the
singular value). However, the number of such vectors required to capture
problem structure grows with problem size, and even partial SVD computation
becomes a bottleneck. In this paper, we propose a low-complexity compressive
spectral embedding algorithm, which employs random projections and finite order
polynomial expansions to compute approximations to SVD-based embedding. For an
$m \times n$ matrix with $T$ non-zeros, its time complexity is
$O((T+m+n)\log(m+n))$, and the embedding dimension is $O(\log(m+n))$, both of
which are independent of
the number of singular vectors whose effect we wish to capture. To the best of
our knowledge, this is the first work to circumvent this dependence on the
number of singular vectors for general SVD-based embeddings. The key to
sidestepping the SVD is the observation that, for downstream inference tasks
such as clustering and classification, we are only interested in using the
resulting embedding to evaluate pairwise similarity metrics derived from the
Euclidean norm, rather than capturing the effect of the underlying matrix on
arbitrary vectors as a partial SVD tries to do. Our numerical results on
network datasets demonstrate the efficacy of the proposed method, and motivate
further exploration of its application to large-scale inference tasks.
| no_new_dataset | 0.946597 |
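The encoding this record describes (random projections plus a fixed-order polynomial of the data matrix, instead of a partial SVD) is concrete enough to sketch. Below is a minimal, hedged NumPy illustration: it applies a degree-`deg` polynomial of A A^T to `d` random sign vectors, so the cost is a handful of matrix products rather than an SVD. The uniform polynomial coefficients and the per-step normalization are illustrative choices, not the paper's calibrated expansion, and `deg`/`d` are toy values.

```python
import numpy as np

def compressive_embedding(A, deg=8, d=32, seed=0):
    """Hedged sketch of an SVD-free spectral embedding.

    Accumulates (A A^T)^k R for k = 1..deg, where R holds d random sign
    vectors. Rows aligned with the dominant singular subspace end up close
    in Euclidean distance, which is all downstream similarity queries need.
    Coefficients and normalization here are illustrative, not the paper's.
    """
    rng = np.random.default_rng(seed)
    m, _ = A.shape
    R = rng.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(d)  # random projection
    E = np.zeros((m, d))
    V = R
    for _ in range(deg):
        V = A @ (A.T @ V)          # one multiply by A A^T; O(T*d) if A is sparse
        V = V / np.linalg.norm(V)  # keep the iterates numerically bounded
        E += V                     # accumulate the polynomial-like expansion
    return E

# Toy usage: compare pairwise distances in E instead of in a truncated SVD.
A = np.random.default_rng(1).random((200, 150))
print(compressive_embedding(A).shape)  # (200, 32)
```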
1509.08418 | Zheng Li | Zheng Li and Liam O'Brien and Ye Yang | The more Product Complexity, the more Actual Effort? An Empirical
Investigation into Software Developments | null | ASWEC 2014 | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | [Background:] Software effort prediction methods and models typically assume
positive correlation between software product complexity and development
effort. However, conflicting observations, i.e. negative correlation between
product complexity and actual effort, have been witnessed from our experience
with the COCOMO81 dataset. [Aim:] Given our doubt about whether the observed
phenomenon is a coincidence, this study tries to investigate if an increase in
product complexity can result in the abovementioned counter-intuitive trend in
software development projects. [Method:] A modified association rule mining
approach is applied to the transformed COCOMO81 dataset. To reduce noise of
analysis, this approach uses a constant antecedent (Complexity increases while
Effort decreases) to mine potential consequents with pruning. [Results:] The
experiment has respectively mined four, five, and seven association rules from
the general, embedded, and organic projects data. The consequents of the mined
rules suggested two main aspects, namely human capability and product scale, to
be of particular concern in this study. [Conclusions:] The negative
correlation between complexity and effort is not a coincidence under particular
conditions. In a software project, interactions between product complexity and
other factors, such as Programmer Capability and Analyst Capability, can
inevitably play a "friction" role in weakening the practical influences of
product complexity on actual development effort.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2015 18:11:16 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Li",
"Zheng",
""
],
[
"O'Brien",
"Liam",
""
],
[
"Yang",
"Ye",
""
]
] | TITLE: The more Product Complexity, the more Actual Effort? An Empirical
Investigation into Software Developments
ABSTRACT: [Background:] Software effort prediction methods and models typically assume
positive correlation between software product complexity and development
effort. However, conflicting observations, i.e. negative correlation between
product complexity and actual effort, have been witnessed from our experience
with the COCOMO81 dataset. [Aim:] Given our doubt about whether the observed
phenomenon is a coincidence, this study tries to investigate if an increase in
product complexity can result in the abovementioned counter-intuitive trend in
software development projects. [Method:] A modified association rule mining
approach is applied to the transformed COCOMO81 dataset. To reduce noise of
analysis, this approach uses a constant antecedent (Complexity increases while
Effort decreases) to mine potential consequents with pruning. [Results:] The
experiment has respectively mined four, five, and seven association rules from
the general, embedded, and organic projects data. The consequents of the mined
rules suggested two main aspects, namely human capability and product scale, to
be of particular concern in this study. [Conclusions:] The negative
correlation between complexity and effort is not a coincidence under particular
conditions. In a software project, interactions between product complexity and
other factors, such as Programmer Capability and Analyst Capability, can
inevitably play a "friction" role in weakening the practical influences of
product complexity on actual development effort.
| no_new_dataset | 0.942823 |
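The [Method] step in this record — association rule mining with a constant antecedent and pruned consequents — can be sketched in a few lines of plain Python. The item encodings below (e.g. "Complexity+" for an increase, "Effort-" for a decrease) and the thresholds are hypothetical; the paper works on its own transformed COCOMO81 attribute coding.

```python
import itertools

def mine_consequents(transactions, antecedent, min_support=0.1,
                     min_conf=0.7, max_len=2):
    """Mine rules of the form antecedent -> consequent, antecedent fixed.

    `transactions` is a list of item sets; item names are hypothetical
    encodings of attribute changes per project.
    """
    base = [t for t in transactions if antecedent <= t]  # rows matching antecedent
    if not base:
        return []
    items = sorted(set().union(*base) - antecedent)
    rules = []
    for k in range(1, max_len + 1):
        for cons in itertools.combinations(items, k):
            cset = set(cons)
            hits = sum(1 for t in base if cset <= t)
            support = hits / len(transactions)
            confidence = hits / len(base)
            if support >= min_support and confidence >= min_conf:  # pruning
                rules.append((cset, support, confidence))
    return sorted(rules, key=lambda r: -r[2])

# Toy usage with three fictional project transactions.
projects = [{"Complexity+", "Effort-", "PCAP+"},
            {"Complexity+", "Effort-", "PCAP+", "Scale-"},
            {"Complexity+", "Effort+", "PCAP-"}]
print(mine_consequents(projects, {"Complexity+", "Effort-"}))
```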
1509.08439 | Sanath Narayan | Sanath Narayan, Kalpathi R. Ramakrishnan | Hyper-Fisher Vectors for Action Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel encoding scheme combining Fisher vector and
bag-of-words encodings has been proposed for recognizing actions in videos. The
proposed Hyper-Fisher vector encoding is the sum of local Fisher vectors, which
are computed based on the traditional Bag-of-Words (BoW) encoding. Thus, the
proposed encoding is simple and yet an effective representation compared to the
traditional Fisher Vector encoding. By extensive evaluation on challenging
action recognition datasets, viz., YouTube, Olympic Sports, UCF50 and HMDB51,
we show that the proposed Hyper-Fisher Vector encoding improves the recognition
performance by around 2-3% compared to the improved Fisher Vector encoding. We
also perform experiments to show that the performance of the Hyper-Fisher
Vector is robust to the dictionary size of the BoW encoding.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2015 19:25:34 GMT"
}
] | 2015-09-29T00:00:00 | [
[
"Narayan",
"Sanath",
""
],
[
"Ramakrishnan",
"Kalpathi R.",
""
]
] | TITLE: Hyper-Fisher Vectors for Action Recognition
ABSTRACT: In this paper, a novel encoding scheme combining Fisher vector and
bag-of-words encodings has been proposed for recognizing actions in videos. The
proposed Hyper-Fisher vector encoding is the sum of local Fisher vectors, which
are computed based on the traditional Bag-of-Words (BoW) encoding. Thus, the
proposed encoding is simple and yet an effective representation compared to the
traditional Fisher Vector encoding. By extensive evaluation on challenging
action recognition datasets, viz., YouTube, Olympic Sports, UCF50 and HMDB51,
we show that the proposed Hyper-Fisher Vector encoding improves the recognition
performance by around 2-3% compared to the improved Fisher Vector encoding. We
also perform experiments to show that the performance of the Hyper-Fisher
Vector is robust to the dictionary size of the BoW encoding.
| no_new_dataset | 0.95388 |
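Because the Hyper-Fisher vector is defined constructively (partition local descriptors by their BoW cell, Fisher-encode each cell, sum), it lends itself to a short sketch. The version below uses scikit-learn's KMeans as a stand-in BoW vocabulary and a diagonal GMM for the Fisher encoding; all sizes are toy values, and the power/L2 normalizations of the improved Fisher vector used in practice are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """First- and second-order Fisher vector terms for a diagonal GMM."""
    N = len(X)
    g = gmm.predict_proba(X)                 # (N, K) soft assignments
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    d = (X[:, None, :] - mu) / np.sqrt(var)  # (N, K, D) whitened residuals
    fv_mu = (g[..., None] * d).sum(0) / (N * np.sqrt(w)[:, None])
    fv_sig = (g[..., None] * (d ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
    return np.hstack([fv_mu.ravel(), fv_sig.ravel()])

def hyper_fisher_vector(X, bow, gmm):
    """Hyper-Fisher sketch: sum of Fisher vectors computed per BoW cell."""
    cells = bow.predict(X)
    return np.sum([fisher_vector(X[cells == c], gmm)
                   for c in np.unique(cells)], axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))               # toy local descriptors
bow = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(X)
print(hyper_fisher_vector(X, bow, gmm).shape)  # (2 * 4 * 16,) = (128,)
```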
1503.03429 | Dat Tien Ngo | Dat Tien Ngo, Sanghuyk Park, Anne Jorstad, Alberto Crivellaro, Chang
Yoo, Pascal Fua | Dense image registration and deformable surface reconstruction in
presence of occlusions and minimal texture | In Proceedings of International Conference on Computer Vision, 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deformable surface tracking from monocular images is well-known to be
under-constrained. Occlusions often make the task even more challenging, and
can result in failure if the surface is not sufficiently textured. In this
work, we explicitly address the problem of 3D reconstruction of poorly
textured, occluded surfaces, proposing a framework based on a template-matching
approach that scales dense robust features by a relevancy score. Our approach
is extensively compared to current methods employing both local feature
matching and dense template alignment. We test on standard datasets as well as
on a new dataset (that will be made publicly available) of a sparsely textured,
occluded surface. Our framework achieves state-of-the-art results for both
well- and poorly-textured, occluded surfaces.
| [
{
"version": "v1",
"created": "Wed, 11 Mar 2015 17:37:22 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Sep 2015 10:30:02 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Sep 2015 09:07:09 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Ngo",
"Dat Tien",
""
],
[
"Park",
"Sanghuyk",
""
],
[
"Jorstad",
"Anne",
""
],
[
"Crivellaro",
"Alberto",
""
],
[
"Yoo",
"Chang",
""
],
[
"Fua",
"Pascal",
""
]
] | TITLE: Dense image registration and deformable surface reconstruction in
presence of occlusions and minimal texture
ABSTRACT: Deformable surface tracking from monocular images is well-known to be
under-constrained. Occlusions often make the task even more challenging, and
can result in failure if the surface is not sufficiently textured. In this
work, we explicitly address the problem of 3D reconstruction of poorly
textured, occluded surfaces, proposing a framework based on a template-matching
approach that scales dense robust features by a relevancy score. Our approach
is extensively compared to current methods employing both local feature
matching and dense template alignment. We test on standard datasets as well as
on a new dataset (that will be made publicly available) of a sparsely textured,
occluded surface. Our framework achieves state-of-the-art results for both
well- and poorly-textured, occluded surfaces.
| new_dataset | 0.957833 |
1506.00511 | Jimmy Ba | Jimmy Ba, Kevin Swersky, Sanja Fidler and Ruslan Salakhutdinov | Predicting Deep Zero-Shot Convolutional Neural Networks using Textual
Descriptions | Correct the typos in table 1 regarding [5]. To appear in ICCV 2015 | null | null | null | cs.LG cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the main challenges in Zero-Shot Learning of visual categories is
gathering semantic attributes to accompany images. Recent work has shown that
learning from textual descriptions, such as Wikipedia articles, avoids the
problem of having to explicitly define these attributes. We present a new model
that can classify unseen categories from their textual description.
Specifically, we use text features to predict the output weights of both the
convolutional and the fully connected layers in a deep convolutional neural
network (CNN). We take advantage of the architecture of CNNs and learn features
at different layers, rather than just learning an embedding space for both
modalities, as is common with existing approaches. The proposed model also
allows us to automatically generate a list of pseudo-attributes for each
visual category consisting of words from Wikipedia articles. We train our
models end-to-end using the Caltech-UCSD bird and flower datasets and
evaluate both ROC and Precision-Recall curves. Our empirical results show that
the proposed model significantly outperforms previous methods.
| [
{
"version": "v1",
"created": "Mon, 1 Jun 2015 14:37:06 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Sep 2015 16:20:44 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Ba",
"Jimmy",
""
],
[
"Swersky",
"Kevin",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Salakhutdinov",
"Ruslan",
""
]
] | TITLE: Predicting Deep Zero-Shot Convolutional Neural Networks using Textual
Descriptions
ABSTRACT: One of the main challenges in Zero-Shot Learning of visual categories is
gathering semantic attributes to accompany images. Recent work has shown that
learning from textual descriptions, such as Wikipedia articles, avoids the
problem of having to explicitly define these attributes. We present a new model
that can classify unseen categories from their textual description.
Specifically, we use text features to predict the output weights of both the
convolutional and the fully connected layers in a deep convolutional neural
network (CNN). We take advantage of the architecture of CNNs and learn features
at different layers, rather than just learning an embedding space for both
modalities, as is common with existing approaches. The proposed model also
allows us to automatically generate a list of pseudo-attributes for each
visual category consisting of words from Wikipedia articles. We train our
models end-to-end using the Caltech-UCSD bird and flower datasets and
evaluate both ROC and Precision-Recall curves. Our empirical results show that
the proposed model significantly outperforms previous methods.
| no_new_dataset | 0.949482 |
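The core mechanism in this abstract — text features predicting the classifier weights for unseen categories — reduces, in its simplest form, to a learned map from text space to weight space. The sketch below uses a single bilinear map as a stand-in; the paper actually predicts convolutional and fully connected layer weights with a deeper network, so `M` and all shapes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
img_dim, txt_dim, n_unseen, n_imgs = 512, 300, 10, 4

M = rng.normal(size=(img_dim, txt_dim)) * 0.01      # stand-in for learned predictor
text_feats = rng.normal(size=(n_unseen, txt_dim))   # e.g. Wikipedia-article features
img_feats = rng.normal(size=(n_imgs, img_dim))      # e.g. CNN image features

W = text_feats @ M.T          # (n_unseen, img_dim): predicted classifier weights
scores = img_feats @ W.T      # (n_imgs, n_unseen): zero-shot class scores
print(scores.argmax(axis=1))  # predicted unseen class per image
```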
1506.02629 | Vitaly Feldman | Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer
Reingold, Aaron Roth | Generalization in Adaptive Data Analysis and Holdout Reuse | null | null | null | null | cs.LG cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Overfitting is the bane of data analysts, even when data are plentiful.
Formal approaches to understanding this problem focus on statistical inference
and generalization of individual analysis procedures. Yet the practice of data
analysis is an inherently interactive and adaptive process: new analyses and
hypotheses are proposed after seeing the results of previous ones, parameters
are tuned on the basis of obtained results, and datasets are shared and reused.
An investigation of this gap has recently been initiated by the authors in
(Dwork et al., 2014), where we focused on the problem of estimating
expectations of adaptively chosen functions.
In this paper, we give a simple and practical method for reusing a holdout
(or testing) set to validate the accuracy of hypotheses produced by a learning
algorithm operating on a training set. Reusing a holdout set adaptively
multiple times can easily lead to overfitting to the holdout set itself. We
give an algorithm that enables the validation of a large number of adaptively
chosen hypotheses, while provably avoiding overfitting. We illustrate the
advantages of our algorithm over the standard use of the holdout set via a
simple synthetic experiment.
We also formalize and address the general problem of data reuse in adaptive
data analysis. We show how the differential-privacy based approach given in
(Dwork et al., 2014) is applicable much more broadly to adaptive data analysis.
We then show that a simple approach based on description length can also be
used to give guarantees of statistical validity in adaptive settings. Finally,
we demonstrate that these incomparable approaches can be unified via the notion
of approximate max-information that we introduce.
| [
{
"version": "v1",
"created": "Mon, 8 Jun 2015 19:34:29 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Sep 2015 19:04:32 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Dwork",
"Cynthia",
""
],
[
"Feldman",
"Vitaly",
""
],
[
"Hardt",
"Moritz",
""
],
[
"Pitassi",
"Toniann",
""
],
[
"Reingold",
"Omer",
""
],
[
"Roth",
"Aaron",
""
]
] | TITLE: Generalization in Adaptive Data Analysis and Holdout Reuse
ABSTRACT: Overfitting is the bane of data analysts, even when data are plentiful.
Formal approaches to understanding this problem focus on statistical inference
and generalization of individual analysis procedures. Yet the practice of data
analysis is an inherently interactive and adaptive process: new analyses and
hypotheses are proposed after seeing the results of previous ones, parameters
are tuned on the basis of obtained results, and datasets are shared and reused.
An investigation of this gap has recently been initiated by the authors in
(Dwork et al., 2014), where we focused on the problem of estimating
expectations of adaptively chosen functions.
In this paper, we give a simple and practical method for reusing a holdout
(or testing) set to validate the accuracy of hypotheses produced by a learning
algorithm operating on a training set. Reusing a holdout set adaptively
multiple times can easily lead to overfitting to the holdout set itself. We
give an algorithm that enables the validation of a large number of adaptively
chosen hypotheses, while provably avoiding overfitting. We illustrate the
advantages of our algorithm over the standard use of the holdout set via a
simple synthetic experiment.
We also formalize and address the general problem of data reuse in adaptive
data analysis. We show how the differential-privacy based approach given in
(Dwork et al., 2014) is applicable much more broadly to adaptive data analysis.
We then show that a simple approach based on description length can also be
used to give guarantees of statistical validity in adaptive settings. Finally,
we demonstrate that these incomparable approaches can be unified via the notion
of approximate max-information that we introduce.
| no_new_dataset | 0.949856 |
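The reusable-holdout mechanism this abstract refers to can be sketched per query: compare the training and holdout estimates of the queried function, and only reveal (noised) holdout information when they disagree by more than a noisy threshold. This is a simplified Thresholdout-style sketch with illustrative parameters; the published algorithm also tracks an over-budget counter and refreshes the threshold noise, which is omitted here.

```python
import numpy as np

def thresholdout(train_est, holdout_est, threshold=0.04, sigma=0.01, rng=None):
    """Answer one adaptive query with a simplified Thresholdout-style rule.

    train_est / holdout_est are the empirical means of the queried function
    on the training and holdout sets. If they agree up to a noisy threshold,
    the training value is returned and the holdout leaks nothing; otherwise
    a Laplace-noised holdout value is revealed.
    """
    rng = rng or np.random.default_rng(0)
    gap = abs(train_est - holdout_est)
    if gap > threshold + rng.laplace(0.0, 2.0 * sigma):
        return holdout_est + rng.laplace(0.0, sigma)  # reveal noisy holdout value
    return train_est                                   # training value is trusted

# Toy usage: an overfit query (large gap) triggers the noisy holdout answer.
print(thresholdout(0.90, 0.70))   # ~0.70 plus small Laplace noise
print(thresholdout(0.80, 0.79))   # 0.80, holdout untouched
```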
1509.03502 | Seong Joon Oh | Seong Joon Oh, Rodrigo Benenson, Mario Fritz, Bernt Schiele | Person Recognition in Personal Photo Collections | Accepted to ICCV 2015, revised | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognising persons in everyday photos presents major challenges (occluded
faces, different clothing, locations, etc.) for machine vision. We propose a
convnet based person recognition system on which we provide an in-depth
analysis of informativeness of different body cues, impact of training data,
and the common failure modes of the system. In addition, we discuss the
limitations of existing benchmarks and propose more challenging ones. Our
method is simple and is built on open source and open data, yet it improves the
state of the art results on a large dataset of social media photos (PIPA).
| [
{
"version": "v1",
"created": "Fri, 11 Sep 2015 13:34:45 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Sep 2015 19:58:34 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Oh",
"Seong Joon",
""
],
[
"Benenson",
"Rodrigo",
""
],
[
"Fritz",
"Mario",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Person Recognition in Personal Photo Collections
ABSTRACT: Recognising persons in everyday photos presents major challenges (occluded
faces, different clothing, locations, etc.) for machine vision. We propose a
convnet based person recognition system on which we provide an in-depth
analysis of informativeness of different body cues, impact of training data,
and the common failure modes of the system. In addition, we discuss the
limitations of existing benchmarks and propose more challenging ones. Our
method is simple and is built on open source and open data, yet it improves the
state of the art results on a large dataset of social media photos (PIPA).
| no_new_dataset | 0.950273 |
1509.07612 | Nils Haldenwang | Nils Haldenwang and Oliver Vornberger | Sentiment Uncertainty and Spam in Twitter Streams and Its Implications
for General Purpose Realtime Sentiment Analysis | 3 pages, 1 figure, accepted at GSCL '15 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State of the art benchmarks for Twitter Sentiment Analysis do not consider
the fact that for more than half of the tweets from the public stream a
distinct sentiment cannot be chosen. This paper provides a new perspective on
Twitter Sentiment Analysis by highlighting the necessity of explicitly
incorporating uncertainty. Moreover, a dataset of high quality to evaluate
solutions for this new problem is introduced and made publicly available.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2015 07:55:26 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Haldenwang",
"Nils",
""
],
[
"Vornberger",
"Oliver",
""
]
] | TITLE: Sentiment Uncertainty and Spam in Twitter Streams and Its Implications
for General Purpose Realtime Sentiment Analysis
ABSTRACT: State of the art benchmarks for Twitter Sentiment Analysis do not consider
the fact that for more than half of the tweets from the public stream a
distinct sentiment cannot be chosen. This paper provides a new perspective on
Twitter Sentiment Analysis by highlighting the necessity of explicitly
incorporating uncertainty. Moreover, a dataset of high quality to evaluate
solutions for this new problem is introduced and made publicly available.
| new_dataset | 0.955152 |
1509.07615 | Kanji Tanaka | Enfu Liu, Kanji Tanaka | Discriminative Map Retrieval Using View-Dependent Map Descriptor | Technical Report, 8 pages, 9 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Map retrieval, the problem of similarity search over a large collection of 2D
pointset maps previously built by mobile robots, is crucial for autonomous
navigation in indoor and outdoor environments. Bag-of-words (BoW) methods
constitute a popular approach to map retrieval; however, these methods have
extremely limited descriptive ability because they ignore the spatial layout
information of the local features. The main contribution of this paper is an
extension of the bag-of-words map retrieval method to enable the use of spatial
information from local features. Our strategy is to explicitly model a unique
viewpoint of an input local map; the pose of the local feature is defined with
respect to this unique viewpoint, and can be viewed as an additional invariant
feature for discriminative map retrieval. Specifically, we wish to determine a
unique viewpoint that is invariant to moving objects, clutter, occlusions, and
actual viewpoints. Hence, we perform scene parsing to analyze the scene
structure, and consider the "center" of the scene structure to be the unique
viewpoint. Our scene parsing is based on a Manhattan world grammar that imposes
a quasi-Manhattan world constraint to enable the robust detection of a scene
structure that is invariant to clutter and moving objects. Experimental results
using the publicly available radish dataset validate the efficacy of the
proposed approach.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2015 08:02:19 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Liu",
"Enfu",
""
],
[
"Tanaka",
"Kanji",
""
]
] | TITLE: Discriminative Map Retrieval Using View-Dependent Map Descriptor
ABSTRACT: Map retrieval, the problem of similarity search over a large collection of 2D
pointset maps previously built by mobile robots, is crucial for autonomous
navigation in indoor and outdoor environments. Bag-of-words (BoW) methods
constitute a popular approach to map retrieval; however, these methods have
extremely limited descriptive ability because they ignore the spatial layout
information of the local features. The main contribution of this paper is an
extension of the bag-of-words map retrieval method to enable the use of spatial
information from local features. Our strategy is to explicitly model a unique
viewpoint of an input local map; the pose of the local feature is defined with
respect to this unique viewpoint, and can be viewed as an additional invariant
feature for discriminative map retrieval. Specifically, we wish to determine a
unique viewpoint that is invariant to moving objects, clutter, occlusions, and
actual viewpoints. Hence, we perform scene parsing to analyze the scene
structure, and consider the "center" of the scene structure to be the unique
viewpoint. Our scene parsing is based on a Manhattan world grammar that imposes
a quasi-Manhattan world constraint to enable the robust detection of a scene
structure that is invariant to clutter and moving objects. Experimental results
using the publicly available radish dataset validate the efficacy of the
proposed approach.
| no_new_dataset | 0.949809 |
1509.07618 | Kanji Tanaka | Taisho Tsukamoto, Kanji Tanaka | Self-localization Using Visual Experience Across Domains | Technical Report, 8 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we aim to solve the single-view robot self-localization
problem by using visual experience across domains. Although the bag-of-words
method constitutes a popular approach to single-view localization, it fails
badly when its visual vocabulary is learned and tested in different domains.
Further, we are interested in using a cross-domain setting, in which the visual
vocabulary is learned in different seasons and routes from the input
query/database scenes. Our strategy is to mine a cross-domain visual
experience, a library of raw visual images collected in different domains, to
discover the relevant visual patterns that effectively explain the input scene,
and use them for scene retrieval. In particular, we show that the appearance
and the pose of the mined visual patterns of a query scene can be efficiently
and discriminatively matched against those of the database scenes by employing
image-to-class distance and spatial pyramid matching. Experimental results
obtained using a novel cross-domain dataset show that our system achieves
promising results despite our visual vocabulary being learned and tested in
different domains.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2015 08:07:10 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Tsukamoto",
"Taisho",
""
],
[
"Tanaka",
"Kanji",
""
]
] | TITLE: Self-localization Using Visual Experience Across Domains
ABSTRACT: In this study, we aim to solve the single-view robot self-localization
problem by using visual experience across domains. Although the bag-of-words
method constitutes a popular approach to single-view localization, it fails
badly when its visual vocabulary is learned and tested in different domains.
Further, we are interested in using a cross-domain setting, in which the visual
vocabulary is learned in different seasons and routes from the input
query/database scenes. Our strategy is to mine a cross-domain visual
experience, a library of raw visual images collected in different domains, to
discover the relevant visual patterns that effectively explain the input scene,
and use them for scene retrieval. In particular, we show that the appearance
and the pose of the mined visual patterns of a query scene can be efficiently
and discriminatively matched against those of the database scenes by employing
image-to-class distance and spatial pyramid matching. Experimental results
obtained using a novel cross-domain dataset show that our system achieves
promising results despite our visual vocabulary being learned and tested in
different domains.
| new_dataset | 0.956594 |
1509.07627 | Hirokatsu Kataoka | Hirokatsu Kataoka, Kenji Iwata, Yutaka Satoh | Feature Evaluation of Deep Convolutional Neural Networks for Object
Recognition and Detection | 5 pages, 3 figures | null | null | null | cs.CV cs.AI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we evaluate convolutional neural network (CNN) features using
the AlexNet architecture and very deep convolutional network (VGGNet)
architecture. To date, most CNN researchers have employed the last layers
before output, which were extracted from the fully connected feature layers.
However, since it is unlikely that feature representation effectiveness is
dependent on the problem, this study evaluates additional convolutional layers
that are adjacent to fully connected layers, in addition to executing simple
tuning for feature concatenation (e.g., layer 3 + layer 5 + layer 7) and
transformation, using tools such as principal component analysis. In our
experiments, we carried out detection and classification tasks using the
Caltech 101 and Daimler Pedestrian Benchmark Datasets.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2015 08:26:53 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Kataoka",
"Hirokatsu",
""
],
[
"Iwata",
"Kenji",
""
],
[
"Satoh",
"Yutaka",
""
]
] | TITLE: Feature Evaluation of Deep Convolutional Neural Networks for Object
Recognition and Detection
ABSTRACT: In this paper, we evaluate convolutional neural network (CNN) features using
the AlexNet architecture and very deep convolutional network (VGGNet)
architecture. To date, most CNN researchers have employed the last layers
before output, which were extracted from the fully connected feature layers.
However, since it is unlikely that feature representation effectiveness is
dependent on the problem, this study evaluates additional convolutional layers
that are adjacent to fully connected layers, in addition to executing simple
tuning for feature concatenation (e.g., layer 3 + layer 5 + layer 7) and
transformation, using tools such as principal component analysis. In our
experiments, we carried out detection and classification tasks using the
Caltech 101 and Daimler Pedestrian Benchmark Datasets.
| no_new_dataset | 0.951097 |
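The "simple tuning for feature concatenation and transformation" this abstract describes is straightforward to sketch once per-layer activations have been extracted: L2-normalize each layer's features, concatenate them (e.g. layer 3 + layer 5 + layer 7), and compress with PCA before a linear classifier. Extracting the activations from AlexNet/VGGNet is outside this sketch, so the arrays below are random stand-ins with assumed dimensions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize

def concat_and_reduce(layer_feats, n_components=128):
    """layer_feats: list of (n_images, dim_l) activation matrices."""
    stacked = np.hstack([normalize(f) for f in layer_feats])  # per-layer L2 norm
    return PCA(n_components=n_components).fit_transform(stacked)

rng = np.random.default_rng(0)
feats = [rng.normal(size=(300, d)) for d in (384, 256, 4096)]  # "layers 3/5/7"
print(concat_and_reduce(feats).shape)  # (300, 128)
```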
1509.07715 | Yixuan Li | Yixuan Li, Kun He, David Bindel and John Hopcroft | Uncovering the Small Community Structure in Large Networks: A Local
Spectral Approach | 10pages, published in WWW2015 proceedings | null | null | null | cs.SI cs.DS physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large graphs arise in a number of contexts and understanding their structure
and extracting information from them is an important research area. Early
algorithms on mining communities have focused on the global structure, and
often run in time functional to the size of the entire graph. Nowadays, as we
often explore networks with billions of vertices and find communities of size
hundreds, it is crucial to shift our attention from macroscopic structure to
microscopic structure when dealing with large networks. A growing body of work
has been adopting local expansion methods in order to identify the community
from a few exemplary seed members.
In this paper, we propose a novel approach for finding overlapping
communities called LEMON (Local Expansion via Minimum One Norm). Different from
PageRank-like diffusion methods, LEMON finds the community by seeking a sparse
vector in the span of the local spectra such that the seeds are in its support.
We show that LEMON can achieve the highest detection accuracy among
state-of-the-art proposals. The running time depends on the size of the
community rather than that of the entire graph. The algorithm is easy to
implement, and is highly parallelizable.
Moreover, given that networks are not all similar in nature, a comprehensive
analysis on how the local expansion approach is suited for uncovering
communities in different networks is still lacking. We thoroughly evaluate our
approach using both synthetic and real-world datasets across different domains,
and analyze the empirical variations when applying our method to inherently
different networks in practice. In addition, the heuristics on how the quality
and quantity of the seed set would affect the performance are provided.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2015 13:50:34 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Li",
"Yixuan",
""
],
[
"He",
"Kun",
""
],
[
"Bindel",
"David",
""
],
[
"Hopcroft",
"John",
""
]
] | TITLE: Uncovering the Small Community Structure in Large Networks: A Local
Spectral Approach
ABSTRACT: Large graphs arise in a number of contexts and understanding their structure
and extracting information from them is an important research area. Early
algorithms on mining communities have focused on the global structure, and
often run in time that grows with the size of the entire graph. Nowadays, as we
often explore networks with billions of vertices and find communities of size
hundreds, it is crucial to shift our attention from macroscopic structure to
microscopic structure when dealing with large networks. A growing body of work
has been adopting local expansion methods in order to identify the community
from a few exemplary seed members.
In this paper, we propose a novel approach for finding overlapping
communities called LEMON (Local Expansion via Minimum One Norm). Different from
PageRank-like diffusion methods, LEMON finds the community by seeking a sparse
vector in the span of the local spectra such that the seeds are in its support.
We show that LEMON can achieve the highest detection accuracy among
state-of-the-art proposals. The running time depends on the size of the
community rather than that of the entire graph. The algorithm is easy to
implement, and is highly parallelizable.
Moreover, given that networks are not all similar in nature, a comprehensive
analysis on how the local expansion approach is suited for uncovering
communities in different networks is still lacking. We thoroughly evaluate our
approach using both synthetic and real-world datasets across different domains,
and analyze the empirical variations when applying our method to inherently
different networks in practice. In addition, the heuristics on how the quality
and quantity of the seed set would affect the performance are provided.
| no_new_dataset | 0.945551 |
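LEMON's central step — find a sparse nonnegative vector in the span of local spectra whose support contains the seeds, via a minimum one-norm program — admits a compact sketch. Below, the local subspace comes from short random walks started at the seeds (a Krylov-like stand-in for the paper's local spectra), and the one-norm objective becomes a small linear program. All parameter values are illustrative, and the sketch assumes an adjacency matrix with no zero-degree rows.

```python
import numpy as np
from scipy.optimize import linprog

def lemon_sketch(A, seeds, walk_steps=3, subspace_dim=4, size=10):
    """Minimal min-one-norm community sketch over a local spectral subspace."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)        # row-stochastic walk matrix
    p = np.zeros(n)
    p[seeds] = 1.0 / len(seeds)
    for _ in range(walk_steps):
        p = p @ P                               # diffuse seed mass locally
    basis = [p]
    for _ in range(subspace_dim - 1):
        basis.append(basis[-1] @ P)             # Krylov-like local spectra
    V = np.linalg.qr(np.array(basis).T)[0]      # (n, subspace_dim) orthonormal
    # minimize 1^T (V a)  subject to  V a >= 0  and  (V a)_seed >= 1
    res = linprog(V.sum(axis=0),
                  A_ub=np.vstack([-V, -V[seeds]]),
                  b_ub=np.concatenate([np.zeros(n), -np.ones(len(seeds))]),
                  bounds=[(None, None)] * subspace_dim)
    x = V @ res.x
    return np.argsort(-x)[:size]                # candidate community members

# Toy graph: two 10-node cliques joined by one edge; seed in the first clique.
A = np.zeros((20, 20))
A[:10, :10] = 1
A[10:, 10:] = 1
A[9, 10] = A[10, 9] = 1
np.fill_diagonal(A, 0)
print(lemon_sketch(A, seeds=[0]))               # mostly nodes 0..9
```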
1509.07823 | Pablo Huijse Ph.D | Pablo Huijse and Pablo A. Estevez and Pavlos Protopapas and Jose C.
Principe and Pablo Zegers | Computational Intelligence Challenges and Applications on Large-Scale
Astronomical Time Series Databases | null | IEEE Computational Intelligence Magazine, vol. 9, n. 3, pp. 27-39,
2014 | 10.1109/MCI.2014.2326100 | null | astro-ph.IM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time-domain astronomy (TDA) is facing a paradigm shift caused by the
exponential growth of the sample size, data complexity and data generation
rates of new astronomical sky surveys. For example, the Large Synoptic Survey
Telescope (LSST), which will begin operations in northern Chile in 2022, will
generate a nearly 150 Petabyte imaging dataset of the southern hemisphere sky.
The LSST will stream data at rates of 2 Terabytes per hour, effectively
capturing an unprecedented movie of the sky. The LSST is expected not only to
improve our understanding of time-varying astrophysical objects, but also to
reveal a plethora of yet unknown faint and fast-varying phenomena. To cope with
a change of paradigm to data-driven astronomy, the fields of astroinformatics
and astrostatistics have been created recently. The new data-oriented paradigms
for astronomy combine statistics, data mining, knowledge discovery, machine
learning and computational intelligence, in order to provide the automated and
robust methods needed for the rapid detection and classification of known
astrophysical objects as well as the unsupervised characterization of novel
phenomena. In this article we present an overview of machine learning and
computational intelligence applications to TDA. Future big data challenges and
new lines of research in TDA, focusing on the LSST, are identified and
discussed from the viewpoint of computational intelligence/machine learning.
Interdisciplinary collaboration will be required to cope with the challenges
posed by the deluge of astronomical data coming from the LSST.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2015 18:24:48 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Huijse",
"Pablo",
""
],
[
"Estevez",
"Pablo A.",
""
],
[
"Protopapas",
"Pavlos",
""
],
[
"Principe",
"Jose C.",
""
],
[
"Zegers",
"Pablo",
""
]
] | TITLE: Computational Intelligence Challenges and Applications on Large-Scale
Astronomical Time Series Databases
ABSTRACT: Time-domain astronomy (TDA) is facing a paradigm shift caused by the
exponential growth of the sample size, data complexity and data generation
rates of new astronomical sky surveys. For example, the Large Synoptic Survey
Telescope (LSST), which will begin operations in northern Chile in 2022, will
generate a nearly 150 Petabyte imaging dataset of the southern hemisphere sky.
The LSST will stream data at rates of 2 Terabytes per hour, effectively
capturing an unprecedented movie of the sky. The LSST is expected not only to
improve our understanding of time-varying astrophysical objects, but also to
reveal a plethora of yet unknown faint and fast-varying phenomena. To cope with
a change of paradigm to data-driven astronomy, the fields of astroinformatics
and astrostatistics have been created recently. The new data-oriented paradigms
for astronomy combine statistics, data mining, knowledge discovery, machine
learning and computational intelligence, in order to provide the automated and
robust methods needed for the rapid detection and classification of known
astrophysical objects as well as the unsupervised characterization of novel
phenomena. In this article we present an overview of machine learning and
computational intelligence applications to TDA. Future big data challenges and
new lines of research in TDA, focusing on the LSST, are identified and
discussed from the viewpoint of computational intelligence/machine learning.
Interdisciplinary collaboration will be required to cope with the challenges
posed by the deluge of astronomical data coming from the LSST.
| no_new_dataset | 0.922062 |
1509.07845 | Xintong Han | Bharat Singh, Xintong Han, Zhe Wu, Vlad I. Morariu and Larry S. Davis | Selecting Relevant Web Trained Concepts for Automated Event Retrieval | null | null | null | null | cs.CV cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complex event retrieval is a challenging research problem, especially when no
training videos are available. An alternative to collecting training videos is
to train a large semantic concept bank a priori. Given a text description of an
event, event retrieval is performed by selecting concepts linguistically
related to the event description and fusing the concept responses on unseen
videos. However, defining an exhaustive concept lexicon and pre-training it
requires vast computational resources. Therefore, recent approaches automate
concept discovery and training by leveraging large amounts of weakly annotated
web data. Compact visually salient concepts are automatically obtained by the
use of concept pairs or, more generally, n-grams. However, not all visually
salient n-grams are necessarily useful for an event query--some combinations of
concepts may be visually compact but irrelevant--and this drastically affects
performance. We propose an event retrieval algorithm that constructs pairs of
automatically discovered concepts and then prunes those concepts that are
unlikely to be helpful for retrieval. Pruning depends both on the query and on
the specific video instance being evaluated. Our approach also addresses
calibration and domain adaptation issues that arise when applying concept
detectors to unseen videos. We demonstrate large improvements over other vision
based systems on the TRECVID MED 13 dataset.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2015 19:27:54 GMT"
}
] | 2015-09-28T00:00:00 | [
[
"Singh",
"Bharat",
""
],
[
"Han",
"Xintong",
""
],
[
"Wu",
"Zhe",
""
],
[
"Morariu",
"Vlad I.",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: Selecting Relevant Web Trained Concepts for Automated Event Retrieval
ABSTRACT: Complex event retrieval is a challenging research problem, especially when no
training videos are available. An alternative to collecting training videos is
to train a large semantic concept bank a priori. Given a text description of an
event, event retrieval is performed by selecting concepts linguistically
related to the event description and fusing the concept responses on unseen
videos. However, defining an exhaustive concept lexicon and pre-training it
requires vast computational resources. Therefore, recent approaches automate
concept discovery and training by leveraging large amounts of weakly annotated
web data. Compact visually salient concepts are automatically obtained by the
use of concept pairs or, more generally, n-grams. However, not all visually
salient n-grams are necessarily useful for an event query--some combinations of
concepts may be visually compact but irrelevant--and this drastically affects
performance. We propose an event retrieval algorithm that constructs pairs of
automatically discovered concepts and then prunes those concepts that are
unlikely to be helpful for retrieval. Pruning depends both on the query and on
the specific video instance being evaluated. Our approach also addresses
calibration and domain adaptation issues that arise when applying concept
detectors to unseen videos. We demonstrate large improvements over other vision
based systems on the TRECVID MED 13 dataset.
| no_new_dataset | 0.949576 |
1411.4423 | Sotirios Chatzis | Sotirios P. Chatzis | A Nonparametric Bayesian Approach Toward Stacked Convolutional
Independent Component Analysis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised feature learning algorithms based on convolutional formulations
of independent components analysis (ICA) have been demonstrated to yield
state-of-the-art results in several action recognition benchmarks. However,
existing approaches do not allow for the number of latent components (features)
to be automatically inferred from the data in an unsupervised manner. This is a
significant disadvantage of the state-of-the-art, as it results in considerable
burden imposed on researchers and practitioners, who must resort to tedious
cross-validation procedures to obtain the optimal number of latent features. To
resolve these issues, in this paper we introduce a convolutional nonparametric
Bayesian sparse ICA architecture for overcomplete feature learning from
high-dimensional data. Our method utilizes an Indian buffet process prior to
facilitate inference of the appropriate number of latent features under a
hybrid variational inference algorithm, scalable to massive datasets. As we
show, our model can be naturally used to obtain deep unsupervised hierarchical
feature extractors, by greedily stacking successive model layers, similar to
existing approaches. In addition, inference for this model is completely
heuristics-free; thus, it obviates the need of tedious parameter tuning, which
is a major challenge most deep learning approaches are faced with. We evaluate
our method on several action recognition benchmarks, and exhibit its advantages
over the state-of-the-art.
| [
{
"version": "v1",
"created": "Mon, 17 Nov 2014 10:35:09 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Aug 2015 07:40:21 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Sep 2015 12:03:38 GMT"
},
{
"version": "v4",
"created": "Thu, 3 Sep 2015 10:32:05 GMT"
},
{
"version": "v5",
"created": "Wed, 23 Sep 2015 20:22:53 GMT"
}
] | 2015-09-25T00:00:00 | [
[
"Chatzis",
"Sotirios P.",
""
]
] | TITLE: A Nonparametric Bayesian Approach Toward Stacked Convolutional
Independent Component Analysis
ABSTRACT: Unsupervised feature learning algorithms based on convolutional formulations
of independent components analysis (ICA) have been demonstrated to yield
state-of-the-art results in several action recognition benchmarks. However,
existing approaches do not allow for the number of latent components (features)
to be automatically inferred from the data in an unsupervised manner. This is a
significant disadvantage of the state-of-the-art, as it results in considerable
burden imposed on researchers and practitioners, who must resort to tedious
cross-validation procedures to obtain the optimal number of latent features. To
resolve these issues, in this paper we introduce a convolutional nonparametric
Bayesian sparse ICA architecture for overcomplete feature learning from
high-dimensional data. Our method utilizes an Indian buffet process prior to
facilitate inference of the appropriate number of latent features under a
hybrid variational inference algorithm, scalable to massive datasets. As we
show, our model can be naturally used to obtain deep unsupervised hierarchical
feature extractors, by greedily stacking successive model layers, similar to
existing approaches. In addition, inference for this model is completely
heuristics-free; thus, it obviates the need of tedious parameter tuning, which
is a major challenge most deep learning approaches are faced with. We evaluate
our method on several action recognition benchmarks, and exhibit its advantages
over the state-of-the-art.
| no_new_dataset | 0.945801 |
1506.08959 | Linjie Yang | Linjie Yang, Ping Luo, Chen Change Loy, Xiaoou Tang | A Large-Scale Car Dataset for Fine-Grained Categorization and
Verification | An extension to our conference paper in CVPR 2015 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Updated on 24/09/2015: This update provides preliminary experiment results
for fine-grained classification on the surveillance data of CompCars. The
train/test splits are provided in the updated dataset. See details in Section
6.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 06:47:50 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Sep 2015 09:04:24 GMT"
}
] | 2015-09-25T00:00:00 | [
[
"Yang",
"Linjie",
""
],
[
"Luo",
"Ping",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: A Large-Scale Car Dataset for Fine-Grained Categorization and
Verification
ABSTRACT: Updated on 24/09/2015: This update provides preliminary experiment results
for fine-grained classification on the surveillance data of CompCars. The
train/test splits are provided in the updated dataset. See details in Section
6.
| no_new_dataset | 0.71794 |
1508.02884 | Ernesto Diaz-Aviles | Ernesto Diaz-Aviles (1), Fabio Pinelli (1), Karol Lynch (1), Zubair
Nabi (1), Yiannis Gkoufas (1), Eric Bouillet (1), Francesco Calabrese (1),
Eoin Coughlan (2), Peter Holland (2), Jason Salzwedel (2) ((1) IBM Research
-- Ireland, (2) IBM Now Factory -- Ireland, (3) Vodacom -- South Africa) | Towards Real-time Customer Experience Prediction for Telecommunication
Operators | IEEE 2015 BigData Conference (to appear). Keywords: Telecom
operators; Customer Care; Big Data; Predictive Analytics | null | null | null | cs.CY cs.IR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Telecommunications operators' (telcos) traditional sources of income, voice
and SMS, are shrinking due to customers using over-the-top (OTT) applications
such as WhatsApp or Viber. In this challenging environment it is critical for
telcos to maintain or grow their market share, by providing users with as good
an experience as possible on their network.
But the task of extracting customer insights from the vast amounts of data
collected by telcos is growing in complexity and scale every day. How can we
measure and predict the quality of a user's experience on a telco network in
real-time? That is the problem that we address in this paper.
We present an approach to capture, in (near) real-time, the mobile customer
experience in order to assess which conditions lead the user to place a call to
a telco's customer care center. To this end, we follow a supervised learning
approach for prediction and train our 'Restricted Random Forest' model using,
as a proxy for bad experience, the observed customer transactions in the telco
data feed before the user places a call to a customer care center.
We evaluate our approach using a rich dataset provided by a major African
telecommunication's company and a novel big data architecture for both the
training and scoring of predictive models. Our empirical study shows our
solution to be effective at predicting user experience by inferring if a
customer will place a call based on his current context.
These promising results open new possibilities for improved customer service,
which will help telcos to reduce churn rates and improve customer experience,
both factors that directly impact their revenue growth.
| [
{
"version": "v1",
"created": "Wed, 12 Aug 2015 11:43:11 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Sep 2015 15:26:48 GMT"
}
] | 2015-09-25T00:00:00 | [
[
"Diaz-Aviles",
"Ernesto",
""
],
[
"Pinelli",
"Fabio",
""
],
[
"Lynch",
"Karol",
""
],
[
"Nabi",
"Zubair",
""
],
[
"Gkoufas",
"Yiannis",
""
],
[
"Bouillet",
"Eric",
""
],
[
"Calabrese",
"Francesco",
""
],
[
"Coughlan",
"Eoin",
""
],
[
"Holland",
"Peter",
""
],
[
"Salzwedel",
"Jason",
""
]
] | TITLE: Towards Real-time Customer Experience Prediction for Telecommunication
Operators
ABSTRACT: Telecommunications operators' (telcos) traditional sources of income, voice
and SMS, are shrinking due to customers using over-the-top (OTT) applications
such as WhatsApp or Viber. In this challenging environment it is critical for
telcos to maintain or grow their market share, by providing users with as good
an experience as possible on their network.
But the task of extracting customer insights from the vast amounts of data
collected by telcos is growing in complexity and scale every day. How can we
measure and predict the quality of a user's experience on a telco network in
real-time? That is the problem that we address in this paper.
We present an approach to capture, in (near) real-time, the mobile customer
experience in order to assess which conditions lead the user to place a call to
a telco's customer care center. To this end, we follow a supervised learning
approach for prediction and train our 'Restricted Random Forest' model using,
as a proxy for bad experience, the observed customer transactions in the telco
data feed before the user places a call to a customer care center.
We evaluate our approach using a rich dataset provided by a major African
telecommunication's company and a novel big data architecture for both the
training and scoring of predictive models. Our empirical study shows our
solution to be effective at predicting user experience by inferring if a
customer will place a call based on his current context.
These promising results open new possibilities for improved customer service,
which will help telcos to reduce churn rates and improve customer experience,
both factors that directly impact their revenue growth.
| no_new_dataset | 0.941708 |
1509.02634 | Ziwei Liu | Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang | Semantic Image Segmentation via Deep Parsing Network | To appear in International Conference on Computer Vision (ICCV) 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses semantic image segmentation by incorporating rich
information into Markov Random Field (MRF), including high-order relations and
mixture of label contexts. Unlike previous works that optimized MRFs using
iterative algorithms, we solve MRF by proposing a Convolutional Neural Network
(CNN), namely Deep Parsing Network (DPN), which enables deterministic
end-to-end computation in a single forward pass. Specifically, DPN extends a
contemporary CNN architecture to model unary terms and additional layers are
carefully devised to approximate the mean field algorithm (MF) for pairwise
terms. It has several appealing properties. First, different from the recent
works that combined CNN and MRF, where many iterations of MF were required for
each training image during back-propagation, DPN is able to achieve high
performance by approximating one iteration of MF. Second, DPN represents
various types of pairwise terms, making many existing works its special cases.
Third, DPN makes MF easier to parallelize and speed up on a Graphics Processing
Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC
2012 dataset, where a single DPN model yields a new state-of-the-art
segmentation accuracy.
| [
{
"version": "v1",
"created": "Wed, 9 Sep 2015 04:39:34 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Sep 2015 14:15:17 GMT"
}
] | 2015-09-25T00:00:00 | [
[
"Liu",
"Ziwei",
""
],
[
"Li",
"Xiaoxiao",
""
],
[
"Luo",
"Ping",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Semantic Image Segmentation via Deep Parsing Network
ABSTRACT: This paper addresses semantic image segmentation by incorporating rich
information into Markov Random Field (MRF), including high-order relations and
mixture of label contexts. Unlike previous works that optimized MRFs using
iterative algorithms, we solve MRF by proposing a Convolutional Neural Network
(CNN), namely Deep Parsing Network (DPN), which enables deterministic
end-to-end computation in a single forward pass. Specifically, DPN extends a
contemporary CNN architecture to model unary terms and additional layers are
carefully devised to approximate the mean field algorithm (MF) for pairwise
terms. It has several appealing properties. First, different from the recent
works that combined CNN and MRF, where many iterations of MF were required for
each training image during back-propagation, DPN is able to achieve high
performance by approximating one iteration of MF. Second, DPN represents
various types of pairwise terms, making many existing works its special cases.
Third, DPN makes MF easier to parallelize and speed up on a Graphics Processing
Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC
2012 dataset, where a single DPN model yields a new state-of-the-art
segmentation accuracy.
| no_new_dataset | 0.949763 |
1509.07211 | Fengyun Zhu | Zaihu Pang, Fengyun Zhu | Noise-Robust ASR for the third 'CHiME' Challenge Exploiting
Time-Frequency Masking based Multi-Channel Speech Enhancement and Recurrent
Neural Network | The 3rd 'CHiME' Speech Separation and Recognition Challenge, 5 pages,
1 figure | null | null | null | cs.SD cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, the Lingban entry to the third 'CHiME' speech separation and
recognition challenge is presented. A time-frequency masking based speech
enhancement front-end is proposed to suppress the environmental noise utilizing
multi-channel coherence and spatial cues. The state-of-the-art speech
recognition techniques, namely recurrent neural network based acoustic and
language modeling, state space minimum Bayes risk based discriminative acoustic
modeling, and i-vector based acoustic condition modeling, are carefully
integrated into the speech recognition back-end. To further improve the system
performance by fully exploiting the advantages of different technologies, the
final recognition results are obtained by lattice combination and rescoring.
Evaluations carried out on the official dataset prove the effectiveness of the
proposed systems. Comparing with the best baseline result, the proposed system
obtains consistent improvements with over 57% relative word error rate
reduction on the real-data test set.
| [
{
"version": "v1",
"created": "Thu, 24 Sep 2015 02:16:11 GMT"
}
] | 2015-09-25T00:00:00 | [
[
"Pang",
"Zaihu",
""
],
[
"Zhu",
"Fengyun",
""
]
] | TITLE: Noise-Robust ASR for the third 'CHiME' Challenge Exploiting
Time-Frequency Masking based Multi-Channel Speech Enhancement and Recurrent
Neural Network
ABSTRACT: In this paper, the Lingban entry to the third 'CHiME' speech separation and
recognition challenge is presented. A time-frequency masking based speech
enhancement front-end is proposed to suppress the environmental noise utilizing
multi-channel coherence and spatial cues. The state-of-the-art speech
recognition techniques, namely recurrent neural network based acoustic and
language modeling, state space minimum Bayes risk based discriminative acoustic
modeling, and i-vector based acoustic condition modeling, are carefully
integrated into the speech recognition back-end. To further improve the system
performance by fully exploiting the advantages of different technologies, the
final recognition results are obtained by lattice combination and rescoring.
Evaluations carried out on the official dataset prove the effectiveness of the
proposed systems. Comparing with the best baseline result, the proposed system
obtains consistent improvements with over 57% relative word error rate
reduction on the real-data test set.
| no_new_dataset | 0.94625 |
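The time-frequency masking front-end this abstract describes can be illustrated with a single-channel stand-in: estimate speech power by spectral subtraction, form a Wiener-like ratio mask, floor it to limit distortion, and apply it to the noisy STFT. The paper's actual enhancement additionally exploits multi-channel coherence and spatial cues, which this sketch omits, and all values below (FFT size, noise level, mask floor) are assumptions.

```python
import numpy as np

def wiener_like_mask(noisy_stft, noise_psd, floor=0.1):
    """Single-channel sketch of a time-frequency masking front-end."""
    speech_psd = np.maximum(np.abs(noisy_stft) ** 2 - noise_psd, 0.0)
    mask = speech_psd / (speech_psd + noise_psd + 1e-12)  # Wiener-like gain
    return np.maximum(mask, floor) * noisy_stft           # floored masking

# Toy usage on a random complex spectrogram (freq bins x frames).
rng = np.random.default_rng(0)
S = rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100))
N = np.full((257, 1), 0.5)          # assumed stationary noise PSD per bin
print(wiener_like_mask(S, N).shape)
```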
1509.07454 | Sanjay Krishnan | Sanjay Krishnan, Jiannan Wang, Michael J. Franklin, Ken Goldberg, Tim
Kraska | Stale View Cleaning: Getting Fresh Answers from Stale Materialized Views | null | Proceedings of the VLDB Endowment - Proceedings of the 41st
International Conference on Very Large Data Bases, Kohala Coast, Hawaii
Volume 8 Issue 12, August 2015 Pages 1370-1381 | 10.14778/2824032.2824037 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Materialized views (MVs), stored pre-computed results, are widely used to
facilitate fast queries on large datasets. When new records arrive at a high
rate, it is infeasible to continuously update (maintain) MVs and a common
solution is to defer maintenance by batching updates together. Between batches
the MVs become increasingly stale with incorrect, missing, and superfluous rows
leading to increasingly inaccurate query results. We propose Stale View
Cleaning (SVC) which addresses this problem from a data cleaning perspective.
In SVC, we efficiently clean a sample of rows from a stale MV, and use the
clean sample to estimate aggregate query results. While approximate, the
estimated query results reflect the most recent data. As sampling can be
sensitive to long-tailed distributions, we further explore an outlier indexing
technique to give increased accuracy when the data distributions are skewed.
SVC complements existing deferred maintenance approaches by giving accurate and
bounded query answers between maintenance. We evaluate our method on a
generated dataset from the TPC-D benchmark and a real video distribution
application. Experiments confirm our theoretical results: (1) cleaning an MV
sample is more efficient than full view maintenance, (2) the estimated results
are more accurate than using the stale MV, and (3) SVC is applicable for a wide
variety of MVs.
| [
{
"version": "v1",
"created": "Thu, 24 Sep 2015 18:01:33 GMT"
}
] | 2015-09-25T00:00:00 | [
[
"Krishnan",
"Sanjay",
""
],
[
"Wang",
"Jiannan",
""
],
[
"Franklin",
"Michael J.",
""
],
[
"Goldberg",
"Ken",
""
],
[
"Kraska",
"Tim",
""
]
] | TITLE: Stale View Cleaning: Getting Fresh Answers from Stale Materialized Views
ABSTRACT: Materialized views (MVs), stored pre-computed results, are widely used to
facilitate fast queries on large datasets. When new records arrive at a high
rate, it is infeasible to continuously update (maintain) MVs and a common
solution is to defer maintenance by batching updates together. Between batches
the MVs become increasingly stale with incorrect, missing, and superfluous rows
leading to increasingly inaccurate query results. We propose Stale View
Cleaning (SVC) which addresses this problem from a data cleaning perspective.
In SVC, we efficiently clean a sample of rows from a stale MV, and use the
clean sample to estimate aggregate query results. While approximate, the
estimated query results reflect the most recent data. As sampling can be
sensitive to long-tailed distributions, we further explore an outlier indexing
technique to give increased accuracy when the data distributions are skewed.
SVC complements existing deferred maintenance approaches by giving accurate and
bounded query answers between maintenance cycles. We evaluate our method on a
generated dataset from the TPC-D benchmark and a real video distribution
application. Experiments confirm our theoretical results: (1) cleaning an MV
sample is more efficient than full view maintenance, (2) the estimated results
are more accurate than using the stale MV, and (3) SVC is applicable for a wide
variety of MVs.
| no_new_dataset | 0.950411 |
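The SVC record above corrects a stale aggregate from a cleaned sample. A hedged Python illustration of that core idea for a SUM query (function names, the index-based `clean_fn`, and the sample size are our assumptions; the paper's estimators also handle missing and superfluous rows and provide confidence bounds):

```python
import random

def svc_sum_estimate(stale_values, clean_fn, k=100, seed=0):
    """Clean only a uniform sample of rows from a stale materialized view
    and correct the stale SUM by the scaled mean discrepancy observed on
    that sample. clean_fn(i) recomputes row i's up-to-date value from
    base data (assumed cheap for a small sample)."""
    random.seed(seed)
    n = len(stale_values)
    idx = random.sample(range(n), min(k, n))
    mean_diff = sum(clean_fn(i) - stale_values[i] for i in idx) / len(idx)
    # Scaling by n makes the correction an unbiased estimate of the
    # total drift between the stale view and the fresh data.
    return sum(stale_values) + n * mean_diff
```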
1509.07481 | Zhiguang Wang | Zhiguang Wang and Tim Oates | Spatially Encoding Temporal Correlations to Classify Temporal Data Using
Convolutional Neural Networks | Submitted to JCSS. Preliminary versions appeared at an AAAI 2015
workshop and IJCAI 2016 [arXiv:1506.00327] | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an off-line approach to explicitly encode temporal patterns
spatially as different types of images, namely, Gramian Angular Fields and
Markov Transition Fields. This enables the use of techniques from computer
vision for feature learning and classification. We used Tiled Convolutional
Neural Networks to learn high-level features from individual GAF, MTF, and
GAF-MTF images on 12 benchmark time series datasets and two real
spatial-temporal trajectory datasets. The classification results of our
approach are competitive with state-of-the-art approaches on both types of
data. An analysis of the features and weights learned by the CNNs explains why
the approach works.
| [
{
"version": "v1",
"created": "Thu, 24 Sep 2015 19:14:20 GMT"
}
] | 2015-09-25T00:00:00 | [
[
"Wang",
"Zhiguang",
""
],
[
"Oates",
"Tim",
""
]
] | TITLE: Spatially Encoding Temporal Correlations to Classify Temporal Data Using
Convolutional Neural Networks
ABSTRACT: We propose an off-line approach to explicitly encode temporal patterns
spatially as different types of images, namely, Gramian Angular Fields and
Markov Transition Fields. This enables the use of techniques from computer
vision for feature learning and classification. We used Tiled Convolutional
Neural Networks to learn high-level features from individual GAF, MTF, and
GAF-MTF images on 12 benchmark time series datasets and two real
spatial-temporal trajectory datasets. The classification results of our
approach are competitive with state-of-the-art approaches on both types of
data. An analysis of the features and weights learned by the CNNs explains why
the approach works.
| no_new_dataset | 0.951142 |
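The record above encodes time series as Gramian Angular Fields; the GASF/GADF construction is simple enough to sketch. A hedged NumPy version (names are ours; the paper pairs these images with Markov Transition Fields and a tiled CNN):

```python
import numpy as np

def gramian_angular_field(x, kind="summation"):
    """Rescale a 1-D series to [-1, 1], map each value to an angle via
    arccos, and form the pairwise trigonometric sum (GASF) or
    difference (GADF) matrix, yielding an image a CNN can consume."""
    x = np.asarray(x, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0   # rescale
    phi = np.arccos(np.clip(x, -1.0, 1.0))                # polar encoding
    if kind == "summation":
        return np.cos(phi[:, None] + phi[None, :])        # GASF
    return np.sin(phi[:, None] - phi[None, :])            # GADF

# A 64-point sine wave becomes a 64x64 image.
image = gramian_angular_field(np.sin(np.linspace(0, 4 * np.pi, 64)))
```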
1412.1840 | Pablo Huijse | Pavlos Protopapas and Pablo Huijse and Pablo A. Estevez and Pablo
Zegers and Jose C. Principe | A Novel, Fully Automated Pipeline for Period Estimation in the EROS 2
Data Set | null | The Astrophysical Journal Supplement Series, Volume 216, Number 2,
2015 | 10.1088/0067-0049/216/2/25 | null | astro-ph.IM astro-ph.SR cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new method to discriminate periodic from non-periodic
irregularly sampled lightcurves. We introduce a periodic kernel and maximize a
similarity measure derived from information theory to estimate the periods and
a discriminator factor. We tested the method on a dataset containing 100,000
synthetic periodic and non-periodic lightcurves with various periods,
amplitudes and shapes generated using a multivariate generative model. We
correctly identified periodic and non-periodic lightcurves with a completeness
of 90% and a precision of 95%, for lightcurves with a signal-to-noise ratio
(SNR) larger than 0.5. We characterize the efficiency and reliability of the
model using these synthetic lightcurves and applied the method on the EROS-2
dataset. A crucial consideration is the speed at which the method can be
executed. Using hierarchical search and some simplification on the parameter
search we were able to analyze 32.8 million lightcurves in 18 hours on a
cluster of GPGPUs. Using the sensitivity analysis on the synthetic dataset, we
infer that 0.42% of the sources in the LMC and 0.61% in the SMC show periodic
behavior. The training set, the catalogs and source code are all available at
http://timemachine.iic.harvard.edu.
| [
{
"version": "v1",
"created": "Thu, 4 Dec 2014 21:08:55 GMT"
}
] | 2015-09-24T00:00:00 | [
[
"Protopapas",
"Pavlos",
""
],
[
"Huijse",
"Pablo",
""
],
[
"Estevez",
"Pablo A.",
""
],
[
"Zegers",
"Pablo",
""
],
[
"Principe",
"Jose C.",
""
]
] | TITLE: A Novel, Fully Automated Pipeline for Period Estimation in the EROS 2
Data Set
ABSTRACT: We present a new method to discriminate periodic from non-periodic
irregularly sampled lightcurves. We introduce a periodic kernel and maximize a
similarity measure derived from information theory to estimate the periods and
a discriminator factor. We tested the method on a dataset containing 100,000
synthetic periodic and non-periodic lightcurves with various periods,
amplitudes and shapes generated using a multivariate generative model. We
correctly identified periodic and non-periodic lightcurves with a completeness
of 90% and a precision of 95%, for lightcurves with a signal-to-noise ratio
(SNR) larger than 0.5. We characterize the efficiency and reliability of the
model using these synthetic lightcurves and applied the method on the EROS-2
dataset. A crucial consideration is the speed at which the method can be
executed. Using hierarchical search and some simplification on the parameter
search we were able to analyze 32.8 million lightcurves in 18 hours on a
cluster of GPGPUs. Using the sensitivity analysis on the synthetic dataset, we
infer that 0.42% of the sources in the LMC and 0.61% in the SMC show periodic
behavior. The training set, the catalogs and source code are all available at
http://timemachine.iic.harvard.edu.
| no_new_dataset | 0.944331 |
1412.7024 | Matthieu Courbariaux | Matthieu Courbariaux, Yoshua Bengio and Jean-Pierre David | Training deep neural networks with low precision multiplications | 10 pages, 5 figures, Accepted as a workshop contribution at ICLR 2015 | null | null | null | cs.LG cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multipliers are the most space and power-hungry arithmetic operators of the
digital implementation of deep neural networks. We train a set of
state-of-the-art neural networks (Maxout networks) on three benchmark datasets:
MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats:
floating point, fixed point and dynamic fixed point. For each of those datasets
and for each of those formats, we assess the impact of the precision of the
multiplications on the final error after training. We find that very low
precision is sufficient not just for running trained networks but also for
training them. For example, it is possible to train Maxout networks with
10-bit multiplications.
| [
{
"version": "v1",
"created": "Mon, 22 Dec 2014 15:22:45 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Dec 2014 18:05:12 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Feb 2015 00:26:12 GMT"
},
{
"version": "v4",
"created": "Fri, 3 Apr 2015 22:52:43 GMT"
},
{
"version": "v5",
"created": "Wed, 23 Sep 2015 01:00:44 GMT"
}
] | 2015-09-24T00:00:00 | [
[
"Courbariaux",
"Matthieu",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"David",
"Jean-Pierre",
""
]
] | TITLE: Training deep neural networks with low precision multiplications
ABSTRACT: Multipliers are the most space and power-hungry arithmetic operators of the
digital implementation of deep neural networks. We train a set of
state-of-the-art neural networks (Maxout networks) on three benchmark datasets:
MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats:
floating point, fixed point and dynamic fixed point. For each of those datasets
and for each of those formats, we assess the impact of the precision of the
multiplications on the final error after training. We find that very low
precision is sufficient not just for running trained networks but also for
training them. For example, it is possible to train Maxout networks with
10-bit multiplications.
| no_new_dataset | 0.950503 |
1506.03607 | Guilhem Ch\'eron | Guilhem Ch\'eron, Ivan Laptev, Cordelia Schmid | P-CNN: Pose-based CNN Features for Action Recognition | ICCV, December 2015, Santiago, Chile | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work targets human action recognition in video. While recent methods
typically represent actions by statistics of local video features, here we
argue for the importance of a representation derived from human pose. To this
end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN)
for action recognition. The descriptor aggregates motion and appearance
information along tracks of human body parts. We investigate different schemes
of temporal aggregation and experiment with P-CNN features obtained both for
automatically estimated and manually annotated human poses. We evaluate our
method on the recent and challenging JHMDB and MPII Cooking datasets. For both
datasets our method shows consistent improvement over the state of the art.
| [
{
"version": "v1",
"created": "Thu, 11 Jun 2015 10:02:03 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Sep 2015 10:48:29 GMT"
}
] | 2015-09-24T00:00:00 | [
[
"Chéron",
"Guilhem",
""
],
[
"Laptev",
"Ivan",
""
],
[
"Schmid",
"Cordelia",
""
]
] | TITLE: P-CNN: Pose-based CNN Features for Action Recognition
ABSTRACT: This work targets human action recognition in video. While recent methods
typically represent actions by statistics of local video features, here we
argue for the importance of a representation derived from human pose. To this
end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN)
for action recognition. The descriptor aggregates motion and appearance
information along tracks of human body parts. We investigate different schemes
of temporal aggregation and experiment with P-CNN features obtained both for
automatically estimated and manually annotated human poses. We evaluate our
method on the recent and challenging JHMDB and MPII Cooking datasets. For both
datasets our method shows consistent improvement over the state of the art.
| no_new_dataset | 0.954137 |
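The P-CNN record above aggregates per-frame CNN features along body-part tracks. A hedged sketch of one plausible aggregation scheme (min/max pooling of static and frame-difference features; the helper name and exact scheme are our simplification of the several schemes the paper studies):

```python
import numpy as np

def pcnn_aggregate(track_feats):
    """Pool per-frame descriptors of one body-part track over time:
    max/min of the static features plus max/min of their temporal
    differences, concatenated into a fixed-length video descriptor.
    Assumes at least two frames."""
    F = np.asarray(track_feats, dtype=float)   # shape (T, D)
    dyn = F[1:] - F[:-1]                       # "dynamic" features
    return np.concatenate([F.max(0), F.min(0), dyn.max(0), dyn.min(0)])
```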
1509.05982 | Dan Stowell | Dan Stowell and Richard E. Turner | Denoising without access to clean data using a partitioned autoencoder | null | null | null | null | cs.NE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Training a denoising autoencoder neural network requires access to truly
clean data, a requirement which is often impractical. To remedy this, we
introduce a method to train an autoencoder using only noisy data, given
examples with and without the signal class of interest. The autoencoder learns
a partitioned representation of signal and noise, learning to reconstruct each
separately. We illustrate the method by denoising birdsong audio (available
abundantly in uncontrolled noisy datasets) using a convolutional autoencoder.
| [
{
"version": "v1",
"created": "Sun, 20 Sep 2015 09:03:48 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Sep 2015 20:51:05 GMT"
}
] | 2015-09-24T00:00:00 | [
[
"Stowell",
"Dan",
""
],
[
"Turner",
"Richard E.",
""
]
] | TITLE: Denoising without access to clean data using a partitioned autoencoder
ABSTRACT: Training a denoising autoencoder neural network requires access to truly
clean data, a requirement which is often impractical. To remedy this, we
introduce a method to train an autoencoder using only noisy data, having
examples with and without the signal class of interest. The autoencoder learns
a partitioned representation of signal and noise, learning to reconstruct each
separately. We illustrate the method by denoising birdsong audio (available
abundantly in uncontrolled noisy datasets) using a convolutional autoencoder.
| no_new_dataset | 0.949623 |
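The record above trains a denoiser without clean data by partitioning the latent code. A minimal hedged PyTorch sketch of that idea (layer sizes and names are our assumptions, not the paper's architecture, which is convolutional):

```python
import torch
import torch.nn as nn

class PartitionedAE(nn.Module):
    """Latent code is split into a 'signal' half and a 'noise' half; for
    examples known to contain only noise, the signal half is masked to
    zero before decoding, so signal units cannot learn to model noise."""
    def __init__(self, d_in=256, d_lat=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_lat), nn.ReLU())
        self.dec = nn.Linear(d_lat, d_in)

    def forward(self, x, noise_only):
        z = self.enc(x)
        if noise_only:                      # zero the signal partition
            half = z.shape[1] // 2
            z = torch.cat([torch.zeros_like(z[:, :half]), z[:, half:]], 1)
        return self.dec(z)

# Both kinds of minibatches minimize reconstruction error, e.g.
# nn.MSELoss()(model(x, noise_only=True), x) on noise-only frames.
```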
1509.06825 | Lerrel Pinto Mr | Lerrel Pinto and Abhinav Gupta | Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700
Robot Hours | null | null | null | null | cs.LG cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current learning-based robot grasping approaches exploit human-labeled
datasets for training the models. However, there are two problems with such a
methodology: (a) since each object can be grasped in multiple ways, manually
labeling grasp locations is not a trivial task; (b) human labeling is biased by
semantics. While there have been attempts to train robots using trial-and-error
experiments, the amount of data used in such experiments remains substantially
low and hence makes the learner prone to over-fitting. In this paper, we take
the leap of increasing the available training data to 40 times more than prior
work, leading to a dataset size of 50K data points collected over 700 hours of
robot grasping attempts. This allows us to train a Convolutional Neural Network
(CNN) for the task of predicting grasp locations without severe overfitting. In
our formulation, we recast the regression problem to an 18-way binary
classification over image patches. We also present a multi-stage learning
approach where a CNN trained in one stage is used to collect hard negatives in
subsequent stages. Our experiments clearly show the benefit of using
large-scale datasets (and multi-stage training) for the task of grasping. We
also compare to several baselines and show state-of-the-art performance on
generalization to unseen objects for grasping.
| [
{
"version": "v1",
"created": "Wed, 23 Sep 2015 02:08:02 GMT"
}
] | 2015-09-24T00:00:00 | [
[
"Pinto",
"Lerrel",
""
],
[
"Gupta",
"Abhinav",
""
]
] | TITLE: Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700
Robot Hours
ABSTRACT: Current learning-based robot grasping approaches exploit human-labeled
datasets for training the models. However, there are two problems with such a
methodology: (a) since each object can be grasped in multiple ways, manually
labeling grasp locations is not a trivial task; (b) human labeling is biased by
semantics. While there have been attempts to train robots using trial-and-error
experiments, the amount of data used in such experiments remains substantially
low and hence makes the learner prone to over-fitting. In this paper, we take
the leap of increasing the available training data to 40 times more than prior
work, leading to a dataset size of 50K data points collected over 700 hours of
robot grasping attempts. This allows us to train a Convolutional Neural Network
(CNN) for the task of predicting grasp locations without severe overfitting. In
our formulation, we recast the regression problem to an 18-way binary
classification over image patches. We also present a multi-stage learning
approach where a CNN trained in one stage is used to collect hard negatives in
subsequent stages. Our experiments clearly show the benefit of using
large-scale datasets (and multi-stage training) for the task of grasping. We
also compare to several baselines and show state-of-the-art performance on
generalization to unseen objects for grasping.
| no_new_dataset | 0.839273 |
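The record above recasts grasp-angle regression as 18-way binary classification over image patches. A hedged helper showing the discretization we infer from that description (the function name is ours):

```python
import numpy as np

def angle_to_bin(theta, n_bins=18):
    """Fold a planar grasp angle into [0, pi) -- parallel-jaw grasps are
    symmetric modulo pi -- and discretize into 10-degree bins. Each
    sampled patch then carries an 18-way success/failure target, with
    the executed angle's bin labeled by the trial outcome."""
    theta = theta % np.pi
    return int(theta // (np.pi / n_bins)) % n_bins

# e.g. angle_to_bin(0.1) == 0, and angles just under pi fall in bin 17.
```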
1509.07093 | Pablo Estevez Prof. | David Nova and Pablo A. Estevez | A review of learning vector quantization classifiers | 14 pages | Neural Computing & Applications, vol. 25, pp. 511-524, 2014 | 10.1007/s00521-013-1535-3 | null | cs.LG astro-ph.IM cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we present a review of the state of the art of Learning Vector
Quantization (LVQ) classifiers. A taxonomy is proposed which integrates the
most relevant LVQ approaches to date. The main concepts associated with modern
LVQ approaches are defined. A comparison is made among eleven LVQ classifiers
using one real-world and two artificial datasets.
| [
{
"version": "v1",
"created": "Wed, 23 Sep 2015 18:46:31 GMT"
}
] | 2015-09-24T00:00:00 | [
[
"Nova",
"David",
""
],
[
"Estevez",
"Pablo A.",
""
]
] | TITLE: A review of learning vector quantization classifiers
ABSTRACT: In this work we present a review of the state of the art of Learning Vector
Quantization (LVQ) classifiers. A taxonomy is proposed which integrates the
most relevant LVQ approaches to date. The main concepts associated with modern
LVQ approaches are defined. A comparison is made among eleven LVQ classifiers
using one real-world and two artificial datasets.
| no_new_dataset | 0.949248 |
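The record above surveys LVQ classifiers; the original LVQ1 update that the whole family builds on is compact enough to sketch. A hedged toy version (not any of the refined variants the review taxonomizes):

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """LVQ1: for each sample, find the nearest prototype; move it toward
    the sample when the labels match, away when they differ."""
    P = np.array(prototypes, dtype=float)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(((P - xi) ** 2).sum(axis=1))  # nearest prototype
            step = lr * (xi - P[j])
            P[j] += step if proto_labels[j] == yi else -step
    return P
```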
1401.3632 | Shaan Qamar | Shaan Qamar, Rajarshi Guhaniyogi, David B. Dunson | Bayesian Conditional Density Filtering | 41 pages, 7 figures, 12 tables | null | null | null | stat.ML cs.LG stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a Conditional Density Filtering (C-DF) algorithm for efficient
online Bayesian inference. C-DF adapts MCMC sampling to the online setting,
sampling from approximations to conditional posterior distributions obtained by
propagating surrogate conditional sufficient statistics (a function of data and
parameter estimates) as new data arrive. These quantities eliminate the need to
store or process the entire dataset simultaneously and offer a number of
desirable features. Often, these include a reduction in memory requirements and
runtime and improved mixing, along with state-of-the-art parameter inference
and prediction. These improvements are demonstrated through several
illustrative examples including an application to high dimensional compressed
regression. Finally, we show that C-DF samples converge to the target posterior
distribution asymptotically as sampling proceeds and more data arrives.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 15:40:40 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Oct 2014 21:47:00 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Sep 2015 07:41:00 GMT"
}
] | 2015-09-23T00:00:00 | [
[
"Qamar",
"Shaan",
""
],
[
"Guhaniyogi",
"Rajarshi",
""
],
[
"Dunson",
"David B.",
""
]
] | TITLE: Bayesian Conditional Density Filtering
ABSTRACT: We propose a Conditional Density Filtering (C-DF) algorithm for efficient
online Bayesian inference. C-DF adapts MCMC sampling to the online setting,
sampling from approximations to conditional posterior distributions obtained by
propagating surrogate conditional sufficient statistics (a function of data and
parameter estimates) as new data arrive. These quantities eliminate the need to
store or process the entire dataset simultaneously and offer a number of
desirable features. Often, these include a reduction in memory requirements and
runtime and improved mixing, along with state-of-the-art parameter inference
and prediction. These improvements are demonstrated through several
illustrative examples including an application to high dimensional compressed
regression. Finally, we show that C-DF samples converge to the target posterior
distribution asymptotically as sampling proceeds and more data arrives.
| no_new_dataset | 0.950365 |
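The C-DF record above propagates surrogate sufficient statistics instead of storing data. A hedged toy illustration with a conjugate Gaussian-mean model (our own minimal example, far simpler than the paper's Gibbs-style algorithm):

```python
import numpy as np

class OnlineGaussianMean:
    """With a N(mu0, tau2) prior on the mean and known noise variance
    sigma2, the posterior depends on data only through (sum, count) --
    batches can be absorbed into these statistics and then discarded."""
    def __init__(self, mu0=0.0, tau2=1.0, sigma2=1.0):
        self.mu0, self.tau2, self.sigma2 = mu0, tau2, sigma2
        self.sum_x, self.n = 0.0, 0

    def update(self, batch):
        batch = np.asarray(batch, dtype=float)
        self.sum_x += batch.sum()   # surrogate statistics grow;
        self.n += batch.size        # the raw batch can be dropped

    def posterior(self):
        prec = 1.0 / self.tau2 + self.n / self.sigma2
        mean = (self.mu0 / self.tau2 + self.sum_x / self.sigma2) / prec
        return mean, 1.0 / prec     # posterior mean and variance
```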
1403.5946 | Jack Kelly | Jack Kelly and William Knottenbelt | Metadata for Energy Disaggregation | To appear in The 2nd IEEE International Workshop on Consumer Devices
and Systems (CDS 2014) in V\"aster{\aa}s, Sweden | null | 10.1109/COMPSACW.2014.97 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Energy disaggregation is the process of estimating the energy consumed by
individual electrical appliances given only a time series of the whole-home
power demand. Energy disaggregation researchers require datasets of the power
demand from individual appliances and the whole-home power demand. Multiple
such datasets have been released over the last few years but provide metadata
in a disparate array of formats including CSV files and plain-text README
files. At best, the lack of a standard metadata schema makes it unnecessarily
time-consuming to write software to process multiple datasets and, at worst,
the lack of a standard means that crucial information is simply absent from
some datasets. We propose a metadata schema for representing appliances,
meters, buildings, datasets, prior knowledge about appliances and appliance
models. The schema is relational and provides a simple but powerful inheritance
mechanism.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2014 13:29:04 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2014 14:50:39 GMT"
},
{
"version": "v3",
"created": "Mon, 19 May 2014 22:15:00 GMT"
}
] | 2015-09-23T00:00:00 | [
[
"Kelly",
"Jack",
""
],
[
"Knottenbelt",
"William",
""
]
] | TITLE: Metadata for Energy Disaggregation
ABSTRACT: Energy disaggregation is the process of estimating the energy consumed by
individual electrical appliances given only a time series of the whole-home
power demand. Energy disaggregation researchers require datasets of the power
demand from individual appliances and the whole-home power demand. Multiple
such datasets have been released over the last few years but provide metadata
in a disparate array of formats including CSV files and plain-text README
files. At best, the lack of a standard metadata schema makes it unnecessarily
time-consuming to write software to process multiple datasets and, at worst,
the lack of a standard means that crucial information is simply absent from
some datasets. We propose a metadata schema for representing appliances,
meters, buildings, datasets, prior knowledge about appliances and appliance
models. The schema is relational and provides a simple but powerful inheritance
mechanism.
| no_new_dataset | 0.949435 |
1505.06606 | Vasileios Belagiannis | Vasileios Belagiannis, Christian Rupprecht, Gustavo Carneiro, Nassir
Navab | Robust Optimization for Deep Regression | Accepted for publication at the International Conference on Computer
Vision (ICCV) 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional Neural Networks (ConvNets) have successfully contributed to
improving the accuracy of regression-based methods for computer vision tasks such
as human pose estimation, landmark localization, and object detection. The
network optimization has usually been performed with L2 loss and without
considering the impact of outliers on the training process, where an outlier in
this context is defined by a sample estimation that lies at an abnormal
distance from the other training sample estimations in the objective space. In
this work, we propose a regression model with ConvNets that achieves robustness
to such outliers by minimizing Tukey's biweight function, an M-estimator robust
to outliers, as the loss function for the ConvNet. In addition to the robust
loss, we introduce a coarse-to-fine model, which processes input images of
progressively higher resolutions for improving the accuracy of the regressed
values. In our experiments, we demonstrate faster convergence and better
generalization of our robust loss function for the tasks of human pose
estimation and age estimation from face images. We also show that the
combination of the robust loss function with the coarse-to-fine model produces
comparable or better results than current state-of-the-art approaches in four
publicly available human pose estimation datasets.
| [
{
"version": "v1",
"created": "Mon, 25 May 2015 12:25:19 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Sep 2015 15:24:58 GMT"
}
] | 2015-09-23T00:00:00 | [
[
"Belagiannis",
"Vasileios",
""
],
[
"Rupprecht",
"Christian",
""
],
[
"Carneiro",
"Gustavo",
""
],
[
"Navab",
"Nassir",
""
]
] | TITLE: Robust Optimization for Deep Regression
ABSTRACT: Convolutional Neural Networks (ConvNets) have successfully contributed to
improving the accuracy of regression-based methods for computer vision tasks such
as human pose estimation, landmark localization, and object detection. The
network optimization has usually been performed with L2 loss and without
considering the impact of outliers on the training process, where an outlier in
this context is defined by a sample estimation that lies at an abnormal
distance from the other training sample estimations in the objective space. In
this work, we propose a regression model with ConvNets that achieves robustness
to such outliers by minimizing Tukey's biweight function, an M-estimator robust
to outliers, as the loss function for the ConvNet. In addition to the robust
loss, we introduce a coarse-to-fine model, which processes input images of
progressively higher resolutions for improving the accuracy of the regressed
values. In our experiments, we demonstrate faster convergence and better
generalization of our robust loss function for the tasks of human pose
estimation and age estimation from face images. We also show that the
combination of the robust loss function with the coarse-to-fine model produces
comparable or better results than current state-of-the-art approaches in four
publicly available human pose estimation datasets.
| no_new_dataset | 0.947672 |
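The record above replaces the L2 loss with Tukey's biweight; the loss itself is standard and easy to state. A hedged NumPy sketch (c = 4.685 is the conventional tuning constant for normally distributed residuals, and the paper rescales residuals, e.g. by MAD, before applying it):

```python
import numpy as np

def tukey_biweight(r, c=4.685):
    """Approximately quadratic near zero but constant for |r| > c, so
    outlying residuals contribute zero gradient during training."""
    r = np.asarray(r, dtype=float)
    rho = np.full_like(r, c * c / 6.0)          # flat region: outliers
    inside = np.abs(r) <= c
    rho[inside] = (c * c / 6.0) * (1.0 - (1.0 - (r[inside] / c) ** 2) ** 3)
    return rho
```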
1506.08259 | Afshin Rahimi | Afshin Rahimi, Trevor Cohn, and Timothy Baldwin | Twitter User Geolocation Using a Unified Text and Network Prediction
Model | To appear in ACL 2015, Proceedings of the 53rd Annual Meeting of the
Association for Computational Linguistics (ACL 2015) | null | null | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a label propagation approach to geolocation prediction based on
Modified Adsorption, with two enhancements: (1) the removal of "celebrity" nodes
to increase location homophily and boost tractability, and (2) the incorporation
of text-based geolocation priors for test users. Experiments over three Twitter
benchmark datasets achieve state-of-the-art results, and demonstrate the
effectiveness of the enhancements.
| [
{
"version": "v1",
"created": "Sat, 27 Jun 2015 04:51:18 GMT"
},
{
"version": "v2",
"created": "Tue, 30 Jun 2015 00:43:39 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Sep 2015 01:14:20 GMT"
}
] | 2015-09-23T00:00:00 | [
[
"Rahimi",
"Afshin",
""
],
[
"Cohn",
"Trevor",
""
],
[
"Baldwin",
"Timothy",
""
]
] | TITLE: Twitter User Geolocation Using a Unified Text and Network Prediction
Model
ABSTRACT: We propose a label propagation approach to geolocation prediction based on
Modified Adsorption, with two enhancements: (1) the removal of "celebrity" nodes
to increase location homophily and boost tractability, and (2) the incorporation
of text-based geolocation priors for test users. Experiments over three Twitter
benchmark datasets achieve state-of-the-art results, and demonstrate the
effectiveness of the enhancements.
| no_new_dataset | 0.957198 |
1508.07647 | Lamberto Ballan | Justin Johnson and Lamberto Ballan and Fei-Fei Li | Love Thy Neighbors: Image Annotation by Exploiting Image Metadata | Accepted to ICCV 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Some images that are difficult to recognize on their own may become more
clear in the context of a neighborhood of related images with similar
social-network metadata. We build on this intuition to improve multilabel image
annotation. Our model uses image metadata nonparametrically to generate
neighborhoods of related images using Jaccard similarities, then uses a deep
neural network to blend visual information from the image and its neighbors.
Prior work typically models image metadata parametrically; in contrast, our
nonparametric treatment allows our model to perform well even when the
vocabulary of metadata changes between training and testing. We perform
comprehensive experiments on the NUS-WIDE dataset, where we show that our model
outperforms state-of-the-art methods for multilabel image annotation even when
our model is forced to generalize to new types of metadata.
| [
{
"version": "v1",
"created": "Sun, 30 Aug 2015 23:34:13 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Sep 2015 00:12:06 GMT"
}
] | 2015-09-23T00:00:00 | [
[
"Johnson",
"Justin",
""
],
[
"Ballan",
"Lamberto",
""
],
[
"Li",
"Fei-Fei",
""
]
] | TITLE: Love Thy Neighbors: Image Annotation by Exploiting Image Metadata
ABSTRACT: Some images that are difficult to recognize on their own may become more
clear in the context of a neighborhood of related images with similar
social-network metadata. We build on this intuition to improve multilabel image
annotation. Our model uses image metadata nonparametrically to generate
neighborhoods of related images using Jaccard similarities, then uses a deep
neural network to blend visual information from the image and its neighbors.
Prior work typically models image metadata parametrically; in contrast, our
nonparametric treatment allows our model to perform well even when the
vocabulary of metadata changes between training and testing. We perform
comprehensive experiments on the NUS-WIDE dataset, where we show that our model
outperforms state-of-the-art methods for multilabel image annotation even when
our model is forced to generalize to new types of metadata.
| no_new_dataset | 0.950869 |
1509.02412 | Mortaza Doulaty | Mortaza Doulaty, Oscar Saz, Thomas Hain | Unsupervised Domain Discovery using Latent Dirichlet Allocation for
Acoustic Modelling in Speech Recognition | null | 16th Interspeech.Proc. (2015) 3640-3644, Dresden, Germany | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Speech recognition systems are often highly domain dependent, a fact widely
reported in the literature. However the concept of domain is complex and not
bound to clear criteria. Hence it is often not evident if data should be
considered to be out-of-domain. While both acoustic and language models can be
domain specific, work in this paper concentrates on acoustic modelling. We
present a novel method to perform unsupervised discovery of domains using
Latent Dirichlet Allocation (LDA) modelling. Here a set of hidden domains is
assumed to exist in the data, whereby each audio segment can be considered to
be a weighted mixture of domain properties. The classification of audio
segments into domains allows the creation of domain specific acoustic models
for automatic speech recognition. Experiments are conducted on a dataset of
diverse speech data covering speech from radio and TV broadcasts, telephone
conversations, meetings, lectures and read speech, with a joint training set of
60 hours and a test set of 6 hours. Maximum A Posteriori (MAP) adaptation to
LDA-based domains was shown to yield relative Word Error Rate (WER)
improvements of up to 16%, compared to pooled training, and up to 10%,
compared with models adapted with human-labelled prior domain knowledge.
| [
{
"version": "v1",
"created": "Tue, 8 Sep 2015 15:29:23 GMT"
}
] | 2015-09-23T00:00:00 | [
[
"Doulaty",
"Mortaza",
""
],
[
"Saz",
"Oscar",
""
],
[
"Hain",
"Thomas",
""
]
] | TITLE: Unsupervised Domain Discovery using Latent Dirichlet Allocation for
Acoustic Modelling in Speech Recognition
ABSTRACT: Speech recognition systems are often highly domain dependent, a fact widely
reported in the literature. However the concept of domain is complex and not
bound to clear criteria. Hence it is often not evident if data should be
considered to be out-of-domain. While both acoustic and language models can be
domain specific, work in this paper concentrates on acoustic modelling. We
present a novel method to perform unsupervised discovery of domains using
Latent Dirichlet Allocation (LDA) modelling. Here a set of hidden domains is
assumed to exist in the data, whereby each audio segment can be considered to
be a weighted mixture of domain properties. The classification of audio
segments into domains allows the creation of domain specific acoustic models
for automatic speech recognition. Experiments are conducted on a dataset of
diverse speech data covering speech from radio and TV broadcasts, telephone
conversations, meetings, lectures and read speech, with a joint training set of
60 hours and a test set of 6 hours. Maximum A Posteriori (MAP) adaptation to
LDA-based domains was shown to yield relative Word Error Rate (WER)
improvements of up to 16%, compared to pooled training, and up to 10%,
compared with models adapted with human-labelled prior domain knowledge.
| no_new_dataset | 0.9463 |
1509.06458 | Jian Sun | Zuoqiang Shi and Jian Sun and Minghao Tian | Harmonic Extension | 10 pages, 2 figures | null | null | null | cs.LG math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the harmonic extension problem, which is widely
used in many applications of machine learning. We find that the transitional
method of graph Laplacian fails to produce a good approximation of the
classical harmonic function. To tackle this problem, we propose a new method
called the point integral method (PIM). We consider the harmonic extension
problem from the point of view of solving PDEs on manifolds. The basic idea of
the PIM method is to approximate the harmonicity using an integral equation,
which is easy to be discretized from points. Based on the integral equation, we
explain the reason why the transitional graph Laplacian may fail to approximate
the harmonicity in the classical sense and propose a different approach which
we call the volume constraint method (VCM). Theoretically, both the PIM and the
VCM compute a harmonic function with convergence guarantees, and practically,
they are both simple, each amounting to solving a linear system. One important
application of the harmonic extension in machine learning is semi-supervised
learning. We run a popular semi-supervised learning algorithm by Zhu et al.
over a couple of well-known datasets and compare the performance of the
aforementioned approaches. Our experiments show the PIM performs the best.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2015 04:13:38 GMT"
}
] | 2015-09-23T00:00:00 | [
[
"Shi",
"Zuoqiang",
""
],
[
"Sun",
"Jian",
""
],
[
"Tian",
"Minghao",
""
]
] | TITLE: Harmonic Extension
ABSTRACT: In this paper, we consider the harmonic extension problem, which is widely
used in many applications of machine learning. We find that the transitional
method of graph Laplacian fails to produce a good approximation of the
classical harmonic function. To tackle this problem, we propose a new method
called the point integral method (PIM). We consider the harmonic extension
problem from the point of view of solving PDEs on manifolds. The basic idea of
the PIM method is to approximate the harmonicity using an integral equation,
which is easy to be discretized from points. Based on the integral equation, we
explain the reason why the transitional graph Laplacian may fail to approximate
the harmonicity in the classical sense and propose a different approach which
we call the volume constraint method (VCM). Theoretically, both the PIM and the
VCM compute a harmonic function with convergence guarantees, and practically,
they are both simple, each amounting to solving a linear system. One important
application of the harmonic extension in machine learning is semi-supervised
learning. We run a popular semi-supervised learning algorithm by Zhu et al.
over a couple of well-known datasets and compare the performance of the
aforementioned approaches. Our experiments show the PIM performs the best.
| no_new_dataset | 0.948632 |
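The record above critiques the standard graph-Laplacian ("transitional") harmonic extension; that baseline is nonetheless the clearest reference point, and it reduces to one linear solve. A hedged NumPy sketch (names ours):

```python
import numpy as np

def laplacian_harmonic_extension(W, labeled_idx, f_labeled):
    """Solve L_uu f_u = -L_ul f_l for the unlabeled values, where L is
    the combinatorial graph Laplacian of the affinity matrix W. This is
    the baseline the paper argues can fail to approximate classical
    harmonicity, not the proposed PIM/VCM."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
    f = np.zeros(n)
    f[labeled_idx] = f_labeled
    L_uu = L[np.ix_(unlabeled, unlabeled)]
    L_ul = L[np.ix_(unlabeled, labeled_idx)]
    f[unlabeled] = np.linalg.solve(L_uu, -L_ul @ np.asarray(f_labeled, float))
    return f
```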
1509.06470 | Yiyi Liao | Yiyi Liao, Sarath Kodagoda, Yue Wang, Lei Shi, Yong Liu | Understand Scene Categories by Objects: A Semantic Regularized Scene
Classifier Using Convolutional Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene classification is a fundamental perception task for environmental
understanding in today's robotics. In this paper, we have attempted to exploit
the popular machine learning technique of deep learning to enhance scene
understanding, particularly in robotics applications. As scene images have
larger diversity than iconic object images, it is more challenging for deep
learning methods to automatically learn features from scene images with fewer
samples. Inspired by human scene understanding based on object knowledge, we
address the problem of scene classification by encouraging deep neural networks
to incorporate object-level information. This is implemented with a
regularization of semantic segmentation. With only 5 thousand training images,
as opposed to 2.5 million images, we show the proposed deep architecture
achieves superior scene classification results to the state-of-the-art on a
publicly available SUN RGB-D dataset. In addition, performance of semantic
segmentation, the regularizer, also reaches a new record with refinement
derived from predicted scene labels. Finally, we apply our SUN RGB-D-trained
model to images captured by a mobile robot to classify scenes in our
university, demonstrating the generalization ability of the proposed algorithm.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2015 05:43:27 GMT"
}
] | 2015-09-23T00:00:00 | [
[
"Liao",
"Yiyi",
""
],
[
"Kodagoda",
"Sarath",
""
],
[
"Wang",
"Yue",
""
],
[
"Shi",
"Lei",
""
],
[
"Liu",
"Yong",
""
]
] | TITLE: Understand Scene Categories by Objects: A Semantic Regularized Scene
Classifier Using Convolutional Neural Networks
ABSTRACT: Scene classification is a fundamental perception task for environmental
understanding in today's robotics. In this paper, we have attempted to exploit
the popular machine learning technique of deep learning to enhance scene
understanding, particularly in robotics applications. As scene images have
larger diversity than iconic object images, it is more challenging for deep
learning methods to automatically learn features from scene images with fewer
samples. Inspired by human scene understanding based on object knowledge, we
address the problem of scene classification by encouraging deep neural networks
to incorporate object-level information. This is implemented with a
regularization of semantic segmentation. With only 5 thousand training images,
as opposed to 2.5 million images, we show the proposed deep architecture
achieves superior scene classification results to the state-of-the-art on a
publicly available SUN RGB-D dataset. In addition, performance of semantic
segmentation, the regularizer, also reaches a new record with refinement
derived from predicted scene labels. Finally, we apply our SUN RGB-D-trained
model to images captured by a mobile robot to classify scenes in our
university, demonstrating the generalization ability of the proposed algorithm.
| no_new_dataset | 0.946001 |
1509.06589 | Nicol\`o Navarin | Giovanni Da San Martino, Nicol\`o Navarin, and Alessandro Sperduti | Graph Kernels exploiting Weisfeiler-Lehman Graph Isomorphism Test
Extensions | null | Neural Information Processing, Volume 8835 of the series Lecture
Notes in Computer Science pp 93-100, 2014 Springer International Publishing | 10.1007/978-3-319-12640-1_12 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a novel graph kernel framework inspired by the
Weisfeiler-Lehman (WL) isomorphism tests. Any WL test comprises a relabelling
phase of the nodes based on test-specific information extracted from the graph,
for example the set of neighbours of a node. We defined a novel relabelling and
derived two kernels of the framework from it. The novel kernels are very fast
to compute and achieve state-of-the-art results on five real-world datasets.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2015 13:21:08 GMT"
}
] | 2015-09-23T00:00:00 | [
[
"Martino",
"Giovanni Da San",
""
],
[
"Navarin",
"Nicolò",
""
],
[
"Sperduti",
"Alessandro",
""
]
] | TITLE: Graph Kernels exploiting Weisfeiler-Lehman Graph Isomorphism Test
Extensions
ABSTRACT: In this paper we present a novel graph kernel framework inspired by the
Weisfeiler-Lehman (WL) isomorphism tests. Any WL test comprises a relabelling
phase of the nodes based on test-specific information extracted from the graph,
for example the set of neighbours of a node. We defined a novel relabelling and
derived two kernels of the framework from it. The novel kernels are very fast
to compute and achieve state-of-the-art results on five real-world datasets.
| no_new_dataset | 0.94545 |
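The record above builds kernels from WL relabellings. A hedged sketch of the standard WL subtree relabelling and the histogram-dot-product kernel it induces (the paper's novel relabelling differs; here `adjacency` is assumed to be a list of neighbour lists):

```python
from collections import Counter

def wl_features(adjacency, labels, iterations=3):
    """Each iteration replaces a node's label with a hash of (own label,
    sorted multiset of neighbour labels); the histogram of all labels
    seen across iterations is the feature map."""
    feats = Counter(labels)
    for _ in range(iterations):
        labels = [
            hash((labels[v], tuple(sorted(labels[u] for u in adjacency[v]))))
            for v in range(len(labels))
        ]
        feats.update(labels)
    return feats

def wl_kernel(graph_a, graph_b):
    # Kernel value = dot product of the two label histograms.
    fa, fb = wl_features(*graph_a), wl_features(*graph_b)
    return sum(fa[k] * fb[k] for k in fa.keys() & fb.keys())
```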
1311.4764 | Dan Stowell | Dan Stowell and Mark D. Plumbley | Large-scale analysis of frequency modulation in birdsong databases | null | Methods in Ecology and Evolution, Volume 5, Issue 9, pages
901-912, September 2014 | 10.1111/2041-210X.12223 | null | cs.SD | http://creativecommons.org/licenses/by/3.0/ | Birdsong often contains large amounts of rapid frequency modulation (FM). It
is believed that the use or otherwise of FM is adaptive to the acoustic
environment, and also that there are specific social uses of FM such as trills
in aggressive territorial encounters. Yet temporal fine detail of FM is often
absent or obscured in standard audio signal analysis methods such as Fourier
analysis or linear prediction. Hence it is important to consider high
resolution signal processing techniques for analysis of FM in bird
vocalisations. If such methods can be applied at big data scales, this offers a
further advantage as large datasets become available.
We introduce methods from the signal processing literature which can go
beyond spectrogram representations to analyse the fine modulations present in a
signal at very short timescales. Focusing primarily on the genus Phylloscopus,
we investigate which of a set of four analysis methods most strongly captures
the species signal encoded in birdsong. In order to find tools useful in
practical analysis of large databases, we also study the computational time
taken by the methods, and their robustness to additive noise and MP3
compression.
We find three methods which can robustly represent species-correlated FM
attributes, and that the simplest method tested also appears to perform the
best. We find that features representing the extremes of FM encode species
identity supplementary to that captured in frequency features, whereas
bandwidth features do not encode additional information.
Large-scale FM analysis can efficiently extract information useful for
bioacoustic studies, in addition to measures more commonly used to characterise
vocalisations.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2013 15:02:55 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Stowell",
"Dan",
""
],
[
"Plumbley",
"Mark D.",
""
]
] | TITLE: Large-scale analysis of frequency modulation in birdsong databases
ABSTRACT: Birdsong often contains large amounts of rapid frequency modulation (FM). It
is believed that the use or otherwise of FM is adaptive to the acoustic
environment, and also that there are specific social uses of FM such as trills
in aggressive territorial encounters. Yet temporal fine detail of FM is often
absent or obscured in standard audio signal analysis methods such as Fourier
analysis or linear prediction. Hence it is important to consider high
resolution signal processing techniques for analysis of FM in bird
vocalisations. If such methods can be applied at big data scales, this offers a
further advantage as large datasets become available.
We introduce methods from the signal processing literature which can go
beyond spectrogram representations to analyse the fine modulations present in a
signal at very short timescales. Focusing primarily on the genus Phylloscopus,
we investigate which of a set of four analysis methods most strongly captures
the species signal encoded in birdsong. In order to find tools useful in
practical analysis of large databases, we also study the computational time
taken by the methods, and their robustness to additive noise and MP3
compression.
We find three methods which can robustly represent species-correlated FM
attributes, and that the simplest method tested also appears to perform the
best. We find that features representing the extremes of FM encode species
identity supplementary to that captured in frequency features, whereas
bandwidth features do not encode additional information.
Large-scale FM analysis can efficiently extract information useful for
bioacoustic studies, in addition to measures more commonly used to characterise
vocalisations.
| no_new_dataset | 0.933734 |
1503.05638 | Y. William Yu | Y. William Yu, Noah M. Daniels, David Christian Danko, Bonnie Berger | Entropy-scaling search of massive biological data | Including supplement: 41 pages, 6 figures, 4 tables, 1 box | Cell Systems, Volume 1, Issue 2, 130-140, 2015 | 10.1016/j.cels.2015.08.004 | null | cs.DS q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many datasets exhibit a well-defined structure that can be exploited to
design faster search tools, but it is not always clear when such acceleration
is possible. Here, we introduce a framework for similarity search based on
characterizing a dataset's entropy and fractal dimension. We prove that
searching scales in time with metric entropy (number of covering hyperspheres),
if the fractal dimension of the dataset is low, and scales in space with the
sum of metric entropy and information-theoretic entropy (randomness of the
data). Using these ideas, we present accelerated versions of standard tools,
with no loss in specificity and little loss in sensitivity, for use in three
domains---high-throughput drug screening (Ammolite, 150x speedup), metagenomics
(MICA, 3.5x speedup of DIAMOND [3,700x BLASTX]), and protein structure search
(esFragBag, 10x speedup of FragBag). Our framework can be used to achieve
"compressive omics," and the general theory can be readily applied to data
science problems outside of biology.
| [
{
"version": "v1",
"created": "Thu, 19 Mar 2015 02:54:21 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Sep 2015 04:31:50 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Yu",
"Y. William",
""
],
[
"Daniels",
"Noah M.",
""
],
[
"Danko",
"David Christian",
""
],
[
"Berger",
"Bonnie",
""
]
] | TITLE: Entropy-scaling search of massive biological data
ABSTRACT: Many datasets exhibit a well-defined structure that can be exploited to
design faster search tools, but it is not always clear when such acceleration
is possible. Here, we introduce a framework for similarity search based on
characterizing a dataset's entropy and fractal dimension. We prove that
searching scales in time with metric entropy (number of covering hyperspheres),
if the fractal dimension of the dataset is low, and scales in space with the
sum of metric entropy and information-theoretic entropy (randomness of the
data). Using these ideas, we present accelerated versions of standard tools,
with no loss in specificity and little loss in sensitivity, for use in three
domains---high-throughput drug screening (Ammolite, 150x speedup), metagenomics
(MICA, 3.5x speedup of DIAMOND [3,700x BLASTX]), and protein structure search
(esFragBag, 10x speedup of FragBag). Our framework can be used to achieve
"compressive omics," and the general theory can be readily applied to data
science problems outside of biology.
| no_new_dataset | 0.949248 |
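The record above exploits low metric entropy for fast similarity search. A hedged sketch of the coarse-to-fine idea with a greedy cover (our simplification; the paper's tools add compression and domain-specific distances):

```python
import numpy as np

def build_cover(X, radius):
    """Greedily pick centers so every point lies within `radius` of one;
    the number of centers roughly reflects the metric entropy at that scale."""
    centers, members = [], []
    for i, x in enumerate(X):
        for c, idxs in zip(centers, members):
            if np.linalg.norm(X[c] - x) <= radius:
                idxs.append(i)
                break
        else:
            centers.append(i)
            members.append([i])
    return centers, members

def coarse_fine_search(X, centers, members, q, r, radius):
    """Scan centers first; by the triangle inequality only clusters with
    dist(center, q) <= r + radius can contain points within r of q."""
    hits = []
    for c, idxs in zip(centers, members):
        if np.linalg.norm(X[c] - q) <= r + radius:
            hits.extend(i for i in idxs if np.linalg.norm(X[i] - q) <= r)
    return hits
```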
1504.03071 | Jaeyong Sung | Jaeyong Sung, Seok Hyun Jin, Ashutosh Saxena | Robobarista: Object Part based Transfer of Manipulation Trajectories
from Crowd-sourcing in 3D Pointclouds | In International Symposium on Robotics Research (ISRR) 2015 | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a large variety of objects and appliances in human environments,
such as stoves, coffee dispensers, juice extractors, and so on. It is
challenging for a roboticist to program a robot for each of these object types
and for each of their instantiations. In this work, we present a novel approach
to manipulation planning based on the idea that many household objects share
similarly-operated object parts. We formulate manipulation planning as a
structured prediction problem and design a deep learning model that can handle
large noise in the manipulation demonstrations and learns features from three
different modalities: point-clouds, language and trajectory. In order to
collect a large number of manipulation demonstrations for different objects, we
developed a new crowd-sourcing platform called Robobarista. We test our model
on our dataset consisting of 116 objects with 249 parts along with 250 language
instructions, for which there are 1225 crowd-sourced manipulation
demonstrations. We further show that our robot can even manipulate objects it
has never seen before.
| [
{
"version": "v1",
"created": "Mon, 13 Apr 2015 06:25:42 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Sep 2015 23:43:19 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Sung",
"Jaeyong",
""
],
[
"Jin",
"Seok Hyun",
""
],
[
"Saxena",
"Ashutosh",
""
]
] | TITLE: Robobarista: Object Part based Transfer of Manipulation Trajectories
from Crowd-sourcing in 3D Pointclouds
ABSTRACT: There is a large variety of objects and appliances in human environments,
such as stoves, coffee dispensers, juice extractors, and so on. It is
challenging for a roboticist to program a robot for each of these object types
and for each of their instantiations. In this work, we present a novel approach
to manipulation planning based on the idea that many household objects share
similarly-operated object parts. We formulate manipulation planning as a
structured prediction problem and design a deep learning model that can handle
large noise in the manipulation demonstrations and learns features from three
different modalities: point-clouds, language and trajectory. In order to
collect a large number of manipulation demonstrations for different objects, we
developed a new crowd-sourcing platform called Robobarista. We test our model
on our dataset consisting of 116 objects with 249 parts along with 250 language
instructions, for which there are 1225 crowd-sourced manipulation
demonstrations. We further show that our robot can even manipulate objects it
has never seen before.
| new_dataset | 0.960025 |
1504.06201 | Gedas Bertasius | Gedas Bertasius, Jianbo Shi and Lorenzo Torresani | High-for-Low and Low-for-High: Efficient Boundary Detection from Deep
Object Features and its Applications to High-Level Vision | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the current boundary detection systems rely exclusively on low-level
features, such as color and texture. However, perception studies suggest that
humans employ object-level reasoning when judging if a particular pixel is a
boundary. Inspired by this observation, in this work we show how to predict
boundaries by exploiting object-level features from a pretrained
object-classification network. Our method can be viewed as a "High-for-Low"
approach where high-level object features inform the low-level boundary
detection process. Our model achieves state-of-the-art performance on an
established boundary detection benchmark and it is efficient to run.
Additionally, we show that due to the semantic nature of our boundaries we
can use them to aid a number of high-level vision tasks. We demonstrate that
using our boundaries we improve the performance of state-of-the-art methods on
the problems of semantic boundary labeling, semantic segmentation and object
proposal generation. We can view this process as a "Low-for-High" scheme, where
low-level boundaries aid high-level vision tasks.
Thus, our contributions include a boundary detection system that is accurate
and efficient, generalizes well to multiple datasets, and is shown to improve
existing state-of-the-art high-level vision methods on three distinct tasks.
| [
{
"version": "v1",
"created": "Thu, 23 Apr 2015 14:35:12 GMT"
},
{
"version": "v2",
"created": "Fri, 1 May 2015 13:46:18 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Sep 2015 17:48:23 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Bertasius",
"Gedas",
""
],
[
"Shi",
"Jianbo",
""
],
[
"Torresani",
"Lorenzo",
""
]
] | TITLE: High-for-Low and Low-for-High: Efficient Boundary Detection from Deep
Object Features and its Applications to High-Level Vision
ABSTRACT: Most of the current boundary detection systems rely exclusively on low-level
features, such as color and texture. However, perception studies suggest that
humans employ object-level reasoning when judging if a particular pixel is a
boundary. Inspired by this observation, in this work we show how to predict
boundaries by exploiting object-level features from a pretrained
object-classification network. Our method can be viewed as a "High-for-Low"
approach where high-level object features inform the low-level boundary
detection process. Our model achieves state-of-the-art performance on an
established boundary detection benchmark and it is efficient to run.
Additionally, we show that due to the semantic nature of our boundaries we
can use them to aid a number of high-level vision tasks. We demonstrate that
using our boundaries we improve the performance of state-of-the-art methods on
the problems of semantic boundary labeling, semantic segmentation and object
proposal generation. We can view this process as a "Low-for-High" scheme, where
low-level boundaries aid high-level vision tasks.
Thus, our contributions include a boundary detection system that is accurate
and efficient, generalizes well to multiple datasets, and is shown to improve
existing state-of-the-art high-level vision methods on three distinct tasks.
| no_new_dataset | 0.947235 |
1507.05739 | Vein Kong | Emrah Budur, Seungmin Lee, Vein S. Kong | Structural Analysis of Criminal Network and Predicting Hidden Links
using Machine Learning | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of criminal networks is inherently difficult because of the nature
of the topic. Criminal networks are covert and most of the information is not
publicly available. This leads to small datasets available for analysis. The
available criminal network datasets consist of entities, i.e. individuals or
organizations, which are linked to each other. The links between entities
indicate that there is a connection between these entities, such as involvement
in the same criminal event, having commercial ties, and/or memberships in the
same criminal organization. Because of incognito criminal activities, there
could be many hidden links from entities to entities, which makes the publicly
available criminal networks incomplete. Revealing hidden links introduces new
information, e.g. affiliation of a suspected individual with a criminal
organization, which may not be known with public information. What will we be
able to find if we can run analysis on a larger dataset and use link prediction
to reveal the implicit connections? We plan to answer this question by using a
dataset that is an order of magnitude larger than what is used in most criminal
network analyses. By using machine learning techniques, we will convert a
link prediction problem to a binary classification problem. We plan to reveal
hidden links and potentially hidden key attributes of the criminal network.
With a more complete picture of the network, we can potentially use this data
to thwart criminal organizations and/or take a Pareto approach in targeting key
nodes. We conclude our analysis with an effective destruction strategy to
weaken criminal networks and prove the effectiveness of revealing hidden links
when attacking criminal networks.
| [
{
"version": "v1",
"created": "Tue, 21 Jul 2015 08:10:12 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jul 2015 07:03:01 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Sep 2015 15:52:08 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Budur",
"Emrah",
""
],
[
"Lee",
"Seungmin",
""
],
[
"Kong",
"Vein S.",
""
]
] | TITLE: Structural Analysis of Criminal Network and Predicting Hidden Links
using Machine Learning
ABSTRACT: Analysis of criminal networks is inherently difficult because of the nature
of the topic. Criminal networks are covert and most of the information is not
publicly available. This leads to small datasets available for analysis. The
available criminal network datasets consist of entities, i.e. individuals or
organizations, which are linked to each other. The links between entities
indicate that there is a connection between these entities, such as involvement
in the same criminal event, having commercial ties, and/or memberships in the
same criminal organization. Because of incognito criminal activities, there
could be many hidden links from entities to entities, which makes the publicly
available criminal networks incomplete. Revealing hidden links introduces new
information, e.g. affiliation of a suspected individual with a criminal
organization, which may not be known with public information. What will we be
able to find if we can run analysis on a larger dataset and use link prediction
to reveal the implicit connections? We plan to answer this question by using a
dataset that is an order of magnitude larger than those used in most criminal
network analyses. By using machine learning techniques, we will convert the
link prediction problem into a binary classification problem. We plan to reveal
hidden links and potentially hidden key attributes of the criminal network.
With a more complete picture of the network, we can potentially use this data
to thwart criminal organizations and/or take a Pareto approach in targeting key
nodes. We conclude our analysis with an effective destruction strategy to
weaken criminal networks and prove the effectiveness of revealing hidden links
when attacking criminal networks.
| no_new_dataset | 0.91804 |
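A minimal sketch of the link-prediction-as-binary-classification scheme described in the abstract above. The feature set (common neighbours, Jaccard, Adamic-Adar), the logistic-regression classifier, and the stand-in graph are illustrative assumptions, not the authors' pipeline:

```python
# Sketch: cast link prediction as binary classification over node pairs.
# Assumed ingredients (not from the paper): common-neighbour / Jaccard /
# Adamic-Adar features and a logistic-regression classifier.
import itertools
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(G, u, v):
    """Classic topological features for a candidate edge (u, v)."""
    cn = len(list(nx.common_neighbors(G, u, v)))
    jac = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
    aa = next(nx.adamic_adar_index(G, [(u, v)]))[2]
    return [cn, jac, aa]

G = nx.karate_club_graph()  # stand-in for a real covert-network dataset

# Positives: observed links. Negatives: sampled non-edges. A careful
# pipeline would hold out edges before computing features to avoid leakage.
pos = list(G.edges())
non_edges = [p for p in itertools.combinations(G.nodes(), 2) if not G.has_edge(*p)]
rng = np.random.default_rng(0)
neg = [non_edges[i] for i in rng.choice(len(non_edges), len(pos), replace=False)]

X = np.array([pair_features(G, u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = LogisticRegression().fit(X, y)

# Rank remaining non-edges by predicted probability: candidate hidden links.
scores = clf.predict_proba([pair_features(G, u, v) for u, v in non_edges])[:, 1]
print(sorted(zip(scores, non_edges), reverse=True)[:5])
```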
1509.04874 | Lichao Huang | Lichao Huang and Yi Yang and Yafeng Deng and Yinan Yu | DenseBox: Unifying Landmark Localization with End to End Object
Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can a single fully convolutional neural network (FCN) perform on object
detection? We introduce DenseBox, a unified end-to-end FCN framework that
directly predicts bounding boxes and object class confidences through all
locations and scales of an image. Our contribution is two-fold. First, we show
that a single FCN, if designed and optimized carefully, can detect multiple
different objects extremely accurately and efficiently. Second, we show that
when incorporating with landmark localization during multi-task learning,
DenseBox further improves object detection accuracy. We present experimental
results on public benchmark datasets, including MALF face detection and KITTI
car detection, which indicate that DenseBox is the state-of-the-art system for
detecting challenging objects such as faces and cars.
| [
{
"version": "v1",
"created": "Wed, 16 Sep 2015 10:30:37 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Sep 2015 00:20:08 GMT"
},
{
"version": "v3",
"created": "Sat, 19 Sep 2015 02:36:04 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Huang",
"Lichao",
""
],
[
"Yang",
"Yi",
""
],
[
"Deng",
"Yafeng",
""
],
[
"Yu",
"Yinan",
""
]
] | TITLE: DenseBox: Unifying Landmark Localization with End to End Object
Detection
ABSTRACT: How can a single fully convolutional neural network (FCN) perform on object
detection? We introduce DenseBox, a unified end-to-end FCN framework that
directly predicts bounding boxes and object class confidences through all
locations and scales of an image. Our contribution is two-fold. First, we show
that a single FCN, if designed and optimized carefully, can detect multiple
different objects extremely accurately and efficiently. Second, we show that
when incorporating with landmark localization during multi-task learning,
DenseBox further improves object detection accuracy. We present experimental
results on public benchmark datasets, including MALF face detection and KITTI
car detection, which indicate that DenseBox is the state-of-the-art system for
detecting challenging objects such as faces and cars.
| no_new_dataset | 0.947527 |
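For context, a minimal sketch of how a dense per-location box parameterisation of this kind can be decoded and de-duplicated. The output shapes, the score threshold, and the greedy NMS are generic assumptions, not DenseBox's exact post-processing:

```python
# Sketch: every output location predicts a confidence plus distances to the
# four box edges; decode confident locations into boxes, then suppress overlaps.
import numpy as np

def decode_dense_boxes(scores, ltrb, stride=4, thresh=0.7):
    """scores: (H, W) confidences; ltrb: (H, W, 4) distances to l/t/r/b edges."""
    ys, xs = np.where(scores > thresh)
    cx, cy = xs * stride, ys * stride              # map back to image coords
    l, t, r, b = ltrb[ys, xs].T
    boxes = np.stack([cx - l, cy - t, cx + r, cy + b], axis=1)
    return boxes, scores[ys, xs]

def nms(boxes, confs, iou_thresh=0.5):
    """Greedy non-maximum suppression over decoded boxes."""
    order = np.argsort(-confs)
    keep = []
    while order.size:
        i, order = order[0], order[1:]
        keep.append(i)
        if not order.size:
            break
        xx1 = np.maximum(boxes[i, 0], boxes[order, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order, 2] - boxes[order, 0]) * (boxes[order, 3] - boxes[order, 1])
        iou = inter / (area_i + area_o - inter)
        order = order[iou < iou_thresh]
    return keep

rng = np.random.default_rng(0)
boxes, confs = decode_dense_boxes(rng.random((40, 40)), rng.random((40, 40, 4)) * 20)
print(len(nms(boxes, confs)), "boxes kept from", len(boxes))
```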
1509.05935 | Shang-Tse Chen | Paras Jain, Shang-Tse Chen, Mozhgan Azimpourkivi, Duen Horng Chau,
Bogdan Carbunar | Spotting Suspicious Reviews via (Quasi-)clique Extraction | Appeared in IEEE Symposium on Security and Privacy 2015 | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to tell if a review is real or fake? What does the underworld of
fraudulent reviewing look like? Detecting suspicious reviews has become a major
issue for many online services. We propose the use of a clique-finding approach
to discover well-organized suspicious reviewers. From a Yelp dataset with over
one million reviews, we construct multiple Reviewer Similarity graphs to link
users that have unusually similar behavior: two reviewers are connected in the
graph if they have reviewed the same set of venues within a few days. From
these graphs, our algorithms extracted many large cliques and quasi-cliques,
the largest one containing a striking 11 users who coordinated their review
activities in identical ways. Among the detected cliques, a large portion
contain Yelp Scouts who are paid by Yelp to review venues in new areas. Our
work sheds light on their little-known operation.
| [
{
"version": "v1",
"created": "Sat, 19 Sep 2015 21:01:38 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Jain",
"Paras",
""
],
[
"Chen",
"Shang-Tse",
""
],
[
"Azimpourkivi",
"Mozhgan",
""
],
[
"Chau",
"Duen Horng",
""
],
[
"Carbunar",
"Bogdan",
""
]
] | TITLE: Spotting Suspicious Reviews via (Quasi-)clique Extraction
ABSTRACT: How to tell if a review is real or fake? What does the underworld of
fraudulent reviewing look like? Detecting suspicious reviews has become a major
issue for many online services. We propose the use of a clique-finding approach
to discover well-organized suspicious reviewers. From a Yelp dataset with over
one million reviews, we construct multiple Reviewer Similarity graphs to link
users that have unusually similar behavior: two reviewers are connected in the
graph if they have reviewed the same set of venues within a few days. From
these graphs, our algorithms extracted many large cliques and quasi-cliques,
the largest one containing a striking 11 users who coordinated their review
activities in identical ways. Among the detected cliques, a large portion
contains Yelp Scouts who are paid by Yelp to review venues in new areas. Our
work sheds light on their little-known operation.
| no_new_dataset | 0.939471 |
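A minimal sketch of the Reviewer Similarity construction described above: connect two reviewers who reviewed the same venues within a short time window, then extract cliques. The toy records, the 3-day window, and the 2-venue threshold are assumptions:

```python
# Sketch: build a reviewer-similarity graph from (reviewer, venue, date)
# records and extract maximal cliques of coordinated reviewers.
import itertools
from datetime import date
import networkx as nx

reviews = [
    ("u1", "cafe", date(2015, 1, 1)), ("u2", "cafe", date(2015, 1, 2)),
    ("u3", "cafe", date(2015, 1, 2)), ("u1", "bar", date(2015, 2, 1)),
    ("u2", "bar", date(2015, 2, 2)), ("u3", "bar", date(2015, 2, 3)),
]

WINDOW, MIN_SHARED = 3, 2  # days apart; venues a pair must share
pair_hits = {}
for (u, vu, du), (w, vw, dw) in itertools.combinations(reviews, 2):
    if u != w and vu == vw and abs((du - dw).days) <= WINDOW:
        key = tuple(sorted((u, w)))
        pair_hits.setdefault(key, set()).add(vu)

G = nx.Graph()
G.add_edges_from(pair for pair, venues in pair_hits.items()
                 if len(venues) >= MIN_SHARED)
cliques = sorted(nx.find_cliques(G), key=len, reverse=True)
print(cliques)  # [['u1', 'u2', 'u3']] for the toy data
```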
1509.06033 | Arsalan Mousavian | Arsalan Mousavian, Jana Kosecka | Deep Convolutional Features for Image Based Retrieval and Scene
Categorization | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several recent approaches showed how the representations learned by
Convolutional Neural Networks can be repurposed for novel tasks. Most commonly
it has been shown that the activation features of the last fully connected
layers (fc7 or fc6) of the network, followed by a linear classifier, outperform
the state-of-the-art on several recognition challenge datasets. Instead of
recognition, this paper focuses on the image retrieval problem and proposes and
examines alternative pooling strategies derived for CNN features. The presented
scheme uses the feature maps from an earlier layer 5 of the CNN architecture,
which has been shown to preserve coarse spatial information and is semantically
meaningful. We examine several pooling strategies and demonstrate superior
performance on the image retrieval task (INRIA Holidays) at a fraction of the
computational cost, while using relatively small memory requirements. In
addition to retrieval, we see similar efficiency gains on the SUN397 scene
categorization dataset, demonstrating wide applicability of this simple
strategy. We also introduce and evaluate a novel GeoPlaces5K dataset from
different geographical locations in the world for image retrieval that stresses
more dramatic changes in appearance and viewpoint.
| [
{
"version": "v1",
"created": "Sun, 20 Sep 2015 17:56:57 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Mousavian",
"Arsalan",
""
],
[
"Kosecka",
"Jana",
""
]
] | TITLE: Deep Convolutional Features for Image Based Retrieval and Scene
Categorization
ABSTRACT: Several recent approaches showed how the representations learned by
Convolutional Neural Networks can be repurposed for novel tasks. Most commonly
it has been shown that the activation features of the last fully connected
layers (fc7 or fc6) of the network, followed by a linear classifier, outperform
the state-of-the-art on several recognition challenge datasets. Instead of
recognition, this paper focuses on the image retrieval problem and proposes and
examines alternative pooling strategies derived for CNN features. The presented
scheme uses the feature maps from an earlier layer 5 of the CNN architecture,
which has been shown to preserve coarse spatial information and is semantically
meaningful. We examine several pooling strategies and demonstrate superior
performance on the image retrieval task (INRIA Holidays) at a fraction of the
computational cost, while using relatively small memory requirements. In
addition to retrieval, we see similar efficiency gains on the SUN397 scene
categorization dataset, demonstrating wide applicability of this simple
strategy. We also introduce and evaluate a novel GeoPlaces5K dataset from
different geographical locations in the world for image retrieval that stresses
more dramatic changes in appearance and viewpoint.
| new_dataset | 0.877214 |
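A minimal sketch of pooling a convolutional feature map into a compact retrieval descriptor, in the spirit of the strategies examined above. The map shape and the max/average choices are illustrative; real conv5 activations would come from a pretrained CNN:

```python
# Sketch: pool a (C, H, W) activation map into an L2-normalised C-dim
# descriptor, then rank an index by cosine similarity to a query.
import numpy as np

def pooled_descriptor(fmap, mode="max"):
    """fmap: (C, H, W) activations -> L2-normalised C-dim descriptor."""
    if mode == "max":        # strongest response per channel
        d = fmap.max(axis=(1, 2))
    else:                    # average pooling over all locations
        d = fmap.mean(axis=(1, 2))
    return d / (np.linalg.norm(d) + 1e-12)

rng = np.random.default_rng(0)
query = pooled_descriptor(rng.random((256, 13, 13)))
index = np.stack([pooled_descriptor(rng.random((256, 13, 13))) for _ in range(100)])
ranking = np.argsort(-index @ query)   # cosine similarity = dot of unit vectors
print(ranking[:5])
```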
1509.06066 | Mahyar Najibi | Mahyar Najibi, Mohammad Rastegari, Larry S. Davis | On Large-Scale Retrieval: Binary or n-ary Coding? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growing amount of data available in modern-day datasets creates the need to
efficiently search and retrieve information. To make large-scale search
feasible, Distance Estimation and Subset Indexing are the main approaches.
Although binary coding has been popular for implementing both techniques, n-ary
coding (known as Product Quantization) is also very effective for Distance
Estimation. However, their relative performance has not been studied for Subset
Indexing. We investigate whether binary or n-ary coding works better under
different retrieval strategies. This leads to the design of a new n-ary coding
method, "Linear Subspace Quantization (LSQ)" which, unlike other n-ary
encoders, can be used as a similarity-preserving embedding. Experiments on
image retrieval show that when Distance Estimation is used, n-ary LSQ
outperforms other methods. However, when Subset Indexing is applied,
interestingly, binary codings are more effective and binary LSQ achieves the
best accuracy.
| [
{
"version": "v1",
"created": "Sun, 20 Sep 2015 22:32:23 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Najibi",
"Mahyar",
""
],
[
"Rastegari",
"Mohammad",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: On Large-Scale Retrieval: Binary or n-ary Coding?
ABSTRACT: The growing amount of data available in modern-day datasets creates the need to
efficiently search and retrieve information. To make large-scale search
feasible, Distance Estimation and Subset Indexing are the main approaches.
Although binary coding has been popular for implementing both techniques, n-ary
coding (known as Product Quantization) is also very effective for Distance
Estimation. However, their relative performance has not been studied for Subset
Indexing. We investigate whether binary or n-ary coding works better under
different retrieval strategies. This leads to the design of a new n-ary coding
method, "Linear Subspace Quantization (LSQ)" which, unlike other n-ary
encoders, can be used as a similarity-preserving embedding. Experiments on
image retrieval show that when Distance Estimation is used, n-ary LSQ
outperforms other methods. However, when Subset Indexing is applied,
interestingly, binary codings are more effective and binary LSQ achieves the
best accuracy.
| no_new_dataset | 0.949435 |
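A minimal sketch of n-ary-code Distance Estimation as discussed above: per-subspace codebooks plus a per-query lookup table give approximate distances without decompressing the dataset. This illustrates generic product quantization, not LSQ itself; the sizes and random codebooks are arbitrary stand-ins:

```python
# Sketch: asymmetric distance computation (ADC) with n-ary codes.
import numpy as np

rng = np.random.default_rng(0)
M, K, d = 4, 256, 32                 # subspaces, centroids each, sub-dim
codebooks = rng.random((M, K, d))    # would normally be learned by k-means
data = rng.random((1000, M * d))

# Encode: nearest centroid id per subspace -> M small integers per vector.
codes = np.stack([
    np.argmin(((data[:, m*d:(m+1)*d][:, None, :] - codebooks[m]) ** 2).sum(-1), axis=1)
    for m in range(M)], axis=1)

def adc(query):
    """Approximate query-to-dataset distances via M table lookups per vector."""
    tables = np.stack([((query[m*d:(m+1)*d] - codebooks[m]) ** 2).sum(-1)
                       for m in range(M)])         # (M, K)
    return tables[np.arange(M), codes].sum(axis=1)  # (N,)

print(np.argsort(adc(rng.random(M * d)))[:5])       # approximate nearest ids
```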
1509.06109 | Dustin Freeman | Dustin Freeman, Ricardo Jota, Daniel Vogel, Daniel Wigdor, Ravin
Balakrishnan | A Dataset of Naturally Occurring, Whole-Body Background Activity to
Reduce Gesture Conflicts | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real settings, natural body movements can be erroneously recognized by
whole-body input systems as explicit input actions. We call body activity not
intended as input actions "background activity." We argue that understanding
background activity is crucial to the success of always-available whole-body
input in the real world. To operationalize this argument, we contribute a
reusable study methodology and software tools to generate standardized
background activity datasets composed of data from multiple Kinect cameras, a
Vicon tracker, and two high-definition video cameras. Using our methodology, we
create an example background activity dataset for a television-oriented living
room setting. We use this dataset to demonstrate how it can be used to redesign
a gestural interaction vocabulary to minimize conflicts with the real world.
The software tools and initial living room dataset are publicly available
(http://www.dgp.toronto.edu/~dustin/backgroundactivity/).
| [
{
"version": "v1",
"created": "Mon, 21 Sep 2015 04:40:31 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Freeman",
"Dustin",
""
],
[
"Jota",
"Ricardo",
""
],
[
"Vogel",
"Daniel",
""
],
[
"Wigdor",
"Daniel",
""
],
[
"Balakrishnan",
"Ravin",
""
]
] | TITLE: A Dataset of Naturally Occurring, Whole-Body Background Activity to
Reduce Gesture Conflicts
ABSTRACT: In real settings, natural body movements can be erroneously recognized by
whole-body input systems as explicit input actions. We call body activity not
intended as input actions "background activity." We argue that understanding
background activity is crucial to the success of always-available whole-body
input in the real world. To operationalize this argument, we contribute a
reusable study methodology and software tools to generate standardized
background activity datasets composed of data from multiple Kinect cameras, a
Vicon tracker, and two high-definition video cameras. Using our methodology, we
create an example background activity dataset for a television-oriented living
room setting. We use this dataset to demonstrate how it can be used to redesign
a gestural interaction vocabulary to minimize conflicts with the real world.
The software tools and initial living room dataset are publicly available
(http://www.dgp.toronto.edu/~dustin/backgroundactivity/).
| new_dataset | 0.956553 |
1509.06163 | Shubhendu Trivedi | Shubhendu Trivedi, Zachary A. Pardos, Neil T. Heffernan | The Utility of Clustering in Prediction Tasks | An experimental research report, dated 11 September 2011 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the utility of clustering in reducing error in various prediction
tasks. Previous work has hinted at the improvement in prediction accuracy
attributed to clustering algorithms if used to pre-process the data. In this
work we more deeply investigate the direct utility of using clustering to
improve prediction accuracy and provide explanations for why this may be so. We
look at a number of datasets, run k-means at different scales and for each
scale we train predictors. This produces k sets of predictions. These
predictions are then combined by a na\"ive ensemble. We observed that this use
of a predictor in conjunction with clustering improved the prediction accuracy
in most datasets. We believe this indicates the predictive utility of
exploiting structure in the data and the data compression handed over by
clustering. We also found that using this method improves upon the prediction
of even a Random Forests predictor, which suggests this method provides a
novel and useful source of variance in the prediction process.
| [
{
"version": "v1",
"created": "Mon, 21 Sep 2015 09:42:50 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Trivedi",
"Shubhendu",
""
],
[
"Pardos",
"Zachary A.",
""
],
[
"Heffernan",
"Neil T.",
""
]
] | TITLE: The Utility of Clustering in Prediction Tasks
ABSTRACT: We explore the utility of clustering in reducing error in various prediction
tasks. Previous work has hinted at the improvement in prediction accuracy
attributed to clustering algorithms if used to pre-process the data. In this
work we more deeply investigate the direct utility of using clustering to
improve prediction accuracy and provide explanations for why this may be so. We
look at a number of datasets, run k-means at different scales and for each
scale we train predictors. This produces k sets of predictions. These
predictions are then combined by a na\"ive ensemble. We observed that this use
of a predictor in conjunction with clustering improved the prediction accuracy
in most datasets. We believe this indicates the predictive utility of
exploiting structure in the data and the data compression handed over by
clustering. We also found that using this method improves upon the prediction
of even a Random Forests predictor, which suggests this method provides a
novel and useful source of variance in the prediction process.
| no_new_dataset | 0.951142 |
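A minimal sketch of the procedure described above: run k-means at several scales, train one predictor per cluster at each scale, and combine the resulting prediction sets with a naive ensemble. The ridge-regression base learner and the synthetic data are assumptions:

```python
# Sketch: multi-scale clustering + per-cluster predictors + naive ensemble.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=400, n_features=10, noise=5.0, random_state=0)
X_tr, y_tr, X_te = X[:300], y[:300], X[300:]

preds = []
for k in (2, 4, 8):                         # different clustering scales
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_tr)
    models = {c: Ridge().fit(X_tr[km.labels_ == c], y_tr[km.labels_ == c])
              for c in range(k)}
    te_labels = km.predict(X_te)            # route test points to clusters
    preds.append(np.array([models[c].predict(x[None])[0]
                           for c, x in zip(te_labels, X_te)]))

ensemble = np.mean(preds, axis=0)           # naive ensemble across scales
print(ensemble[:5])
```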
1509.06243 | Albert Gordo | Albert Gordo and Jon Almazan and Naila Murray and Florent Perronnin | LEWIS: Latent Embeddings for Word Images and their Semantics | Accepted for publication at the International Conference on Computer
Vision (ICCV) 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this work is to bring semantics into the tasks of text
recognition and retrieval in natural images. Although text recognition and
retrieval have received a lot of attention in recent years, previous works have
focused on recognizing or retrieving exactly the same word used as a query,
without taking the semantics into consideration.
In this paper, we ask the following question: \emph{can we predict semantic
concepts directly from a word image, without explicitly trying to transcribe
the word image or its characters at any point?} For this goal we propose a
convolutional neural network (CNN) with a weighted ranking loss objective that
ensures that the concepts relevant to the query image are ranked ahead of those
that are not relevant. This can also be interpreted as learning a Euclidean
space where word images and concepts are jointly embedded. This model is
learned in an end-to-end manner, from image pixels to semantic concepts, using
a dataset of synthetically generated word images and concepts mined from a
lexical database (WordNet). Our results show that, despite the complexity of
the task, word images and concepts can indeed be associated with a high degree
of accuracy.
| [
{
"version": "v1",
"created": "Mon, 21 Sep 2015 14:32:43 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Gordo",
"Albert",
""
],
[
"Almazan",
"Jon",
""
],
[
"Murray",
"Naila",
""
],
[
"Perronnin",
"Florent",
""
]
] | TITLE: LEWIS: Latent Embeddings for Word Images and their Semantics
ABSTRACT: The goal of this work is to bring semantics into the tasks of text
recognition and retrieval in natural images. Although text recognition and
retrieval have received a lot of attention in recent years, previous works have
focused on recognizing or retrieving exactly the same word used as a query,
without taking the semantics into consideration.
In this paper, we ask the following question: \emph{can we predict semantic
concepts directly from a word image, without explicitly trying to transcribe
the word image or its characters at any point?} For this goal we propose a
convolutional neural network (CNN) with a weighted ranking loss objective that
ensures that the concepts relevant to the query image are ranked ahead of those
that are not relevant. This can also be interpreted as learning a Euclidean
space where word images and concepts are jointly embedded. This model is
learned in an end-to-end manner, from image pixels to semantic concepts, using
a dataset of synthetically generated word images and concepts mined from a
lexical database (WordNet). Our results show that, despite the complexity of
the task, word images and concepts can indeed be associated with a high degree
of accuracy.
| new_dataset | 0.962778 |
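A minimal sketch of a weighted ranking loss of the kind described above, where concepts relevant to a word image must score above irrelevant ones. The rank-based weighting and the margin are illustrative assumptions, not the paper's exact objective:

```python
# Sketch: hinge-style ranking loss, weighted more heavily when a relevant
# concept is pushed far down the ranking.
import numpy as np

def weighted_ranking_loss(scores, relevant, margin=1.0):
    """scores: (C,) concept scores for one image; relevant: index array."""
    rel_mask = np.zeros_like(scores, dtype=bool)
    rel_mask[relevant] = True
    loss = 0.0
    for r in np.where(rel_mask)[0]:
        violations = margin - scores[r] + scores[~rel_mask]
        n_viol = (violations > 0).sum()          # how many irrelevant outrank r
        weight = np.log1p(n_viol)                # heavier penalty, lower rank
        loss += weight * np.clip(violations, 0, None).mean()
    return loss

scores = np.array([2.0, 0.5, 1.8, -0.3])         # CNN outputs for 4 concepts
print(weighted_ranking_loss(scores, relevant=np.array([0, 2])))
```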
1509.06254 | Pablo Rodriguez-Mier | Pablo Rodriguez-Mier, Manuel Mucientes, Manuel Lama | Hybrid Optimization Algorithm for Large-Scale QoS-Aware Service
Composition | Preprint accepted to appear in IEEE Transactions on Services
Computing 2015 | null | 10.1109/TSC.2015.2480396 | null | cs.AI cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a hybrid approach for automatic composition of Web
services that generates semantic input-output based compositions with optimal
end-to-end QoS, minimizing the number of services of the resulting composition.
The proposed approach has four main steps: 1) generation of the composition
graph for a request; 2) computation of the optimal composition that minimizes a
single objective QoS function; 3) multi-step optimizations to reduce the search
space by identifying equivalent and dominated services; and 4) hybrid
local-global search to extract the optimal QoS with the minimum number of
services. An extensive validation with the datasets of the Web Service
Challenge 2009-2010 and randomly generated datasets shows that: 1) the
combination of local and global optimization is a general and powerful
technique to extract optimal compositions in diverse scenarios; and 2) the
hybrid strategy performs better than the state-of-the-art, obtaining solutions
with less services and optimal QoS.
| [
{
"version": "v1",
"created": "Mon, 21 Sep 2015 14:56:28 GMT"
}
] | 2015-09-22T00:00:00 | [
[
"Rodriguez-Mier",
"Pablo",
""
],
[
"Mucientes",
"Manuel",
""
],
[
"Lama",
"Manuel",
""
]
] | TITLE: Hybrid Optimization Algorithm for Large-Scale QoS-Aware Service
Composition
ABSTRACT: In this paper we present a hybrid approach for automatic composition of Web
services that generates semantic input-output based compositions with optimal
end-to-end QoS, minimizing the number of services of the resulting composition.
The proposed approach has four main steps: 1) generation of the composition
graph for a request; 2) computation of the optimal composition that minimizes a
single objective QoS function; 3) multi-step optimizations to reduce the search
space by identifying equivalent and dominated services; and 4) hybrid
local-global search to extract the optimal QoS with the minimum number of
services. An extensive validation with the datasets of the Web Service
Challenge 2009-2010 and randomly generated datasets shows that: 1) the
combination of local and global optimization is a general and powerful
technique to extract optimal compositions in diverse scenarios; and 2) the
hybrid strategy performs better than the state-of-the-art, obtaining solutions
with less services and optimal QoS.
| no_new_dataset | 0.945096 |
1502.00956 | Raul Mur-Artal | Raul Mur-Artal, J. M. M. Montiel and Juan D. Tardos | ORB-SLAM: a Versatile and Accurate Monocular SLAM System | 17 pages. 13 figures. IEEE Transactions on Robotics, 2015. Project
webpage (videos, code): http://webdiis.unizar.es/~raulmur/orbslam/ | null | 10.1109/TRO.2015.2463671 | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents ORB-SLAM, a feature-based monocular SLAM system that
operates in real time, in small and large, indoor and outdoor environments. The
system is robust to severe motion clutter, allows wide baseline loop closing
and relocalization, and includes full automatic initialization. Building on
excellent algorithms of recent years, we designed from scratch a novel system
that uses the same features for all SLAM tasks: tracking, mapping,
relocalization, and loop closing. A survival of the fittest strategy that
selects the points and keyframes of the reconstruction leads to excellent
robustness and generates a compact and trackable map that only grows if the
scene content changes, allowing lifelong operation. We present an exhaustive
evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves
unprecedented performance with respect to other state-of-the-art monocular SLAM
approaches. For the benefit of the community, we make the source code public.
| [
{
"version": "v1",
"created": "Tue, 3 Feb 2015 18:52:23 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Sep 2015 09:50:11 GMT"
}
] | 2015-09-21T00:00:00 | [
[
"Mur-Artal",
"Raul",
""
],
[
"Montiel",
"J. M. M.",
""
],
[
"Tardos",
"Juan D.",
""
]
] | TITLE: ORB-SLAM: a Versatile and Accurate Monocular SLAM System
ABSTRACT: This paper presents ORB-SLAM, a feature-based monocular SLAM system that
operates in real time, in small and large, indoor and outdoor environments. The
system is robust to severe motion clutter, allows wide baseline loop closing
and relocalization, and includes full automatic initialization. Building on
excellent algorithms of recent years, we designed from scratch a novel system
that uses the same features for all SLAM tasks: tracking, mapping,
relocalization, and loop closing. A survival of the fittest strategy that
selects the points and keyframes of the reconstruction leads to excellent
robustness and generates a compact and trackable map that only grows if the
scene content changes, allowing lifelong operation. We present an exhaustive
evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves
unprecedented performance with respect to other state-of-the-art monocular SLAM
approaches. For the benefit of the community, we make the source code public.
| no_new_dataset | 0.947088 |
1506.03478 | Lucas Theis | Lucas Theis and Matthias Bethge | Generative Image Modeling Using Spatial LSTMs | null | null | null | null | stat.ML cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling the distribution of natural images is challenging, partly because of
strong statistical dependencies which can extend over hundreds of pixels.
Recurrent neural networks have been successful in capturing long-range
dependencies in a number of problems but only recently have found their way
into generative image models. We here introduce a recurrent image model based
on multi-dimensional long short-term memory units which are particularly suited
for image modeling due to their spatial structure. Our model scales to images
of arbitrary size and its likelihood is computationally tractable. We find that
it outperforms the state of the art in quantitative comparisons on several
image datasets and produces promising results when used for texture synthesis
and inpainting.
| [
{
"version": "v1",
"created": "Wed, 10 Jun 2015 20:56:14 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Sep 2015 08:06:06 GMT"
}
] | 2015-09-21T00:00:00 | [
[
"Theis",
"Lucas",
""
],
[
"Bethge",
"Matthias",
""
]
] | TITLE: Generative Image Modeling Using Spatial LSTMs
ABSTRACT: Modeling the distribution of natural images is challenging, partly because of
strong statistical dependencies which can extend over hundreds of pixels.
Recurrent neural networks have been successful in capturing long-range
dependencies in a number of problems but only recently have found their way
into generative image models. We here introduce a recurrent image model based
on multi-dimensional long short-term memory units which are particularly suited
for image modeling due to their spatial structure. Our model scales to images
of arbitrary size and its likelihood is computationally tractable. We find that
it outperforms the state of the art in quantitative comparisons on several
image datasets and produces promising results when used for texture synthesis
and inpainting.
| no_new_dataset | 0.950824 |
1509.05153 | Wei Pan | Wei Pan, Ye Yuan, Lennart Ljung, Jorge Goncalves and Guy-Bart Stan | Identifying Biochemical Reaction Networks From Heterogeneous Datasets | null | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new method to identify biochemical reaction
networks (i.e. both reactions and kinetic parameters) from heterogeneous
datasets. Such datasets can contain (a) data from several replicates of an
experiment performed on a biological system; (b) data measured from a
biochemical network subjected to different experimental conditions, for
example, changes/perturbations in biological inductions, temperature, gene
knock-out, gene over-expression, etc. Simultaneous integration of various
datasets to perform system identification has the potential to avoid
non-identifiability issues typically arising when only single datasets are
used.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2015 07:51:58 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Sep 2015 00:22:18 GMT"
}
] | 2015-09-21T00:00:00 | [
[
"Pan",
"Wei",
""
],
[
"Yuan",
"Ye",
""
],
[
"Ljung",
"Lennart",
""
],
[
"Goncalves",
"Jorge",
""
],
[
"Stan",
"Guy-Bart",
""
]
] | TITLE: Identifying Biochemical Reaction Networks From Heterogeneous Datasets
ABSTRACT: In this paper, we propose a new method to identify biochemical reaction
networks (i.e. both reactions and kinetic parameters) from heterogeneous
datasets. Such datasets can contain (a) data from several replicates of an
experiment performed on a biological system; (b) data measured from a
biochemical network subjected to different experimental conditions, for
example, changes/perturbations in biological inductions, temperature, gene
knock-out, gene over-expression, etc. Simultaneous integration of various
datasets to perform system identification has the potential to avoid
non-identifiability issues typically arising when only single datasets are
used.
| no_new_dataset | 0.957517 |
1509.05186 | Shicong Liu | Shicong Liu, Junru Shao, Hongtao Lu | Accelerated Distance Computation with Encoding Tree for High Dimensional
Data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel method to calculate the distance between high-dimensional
vector pairs, utilizing encodings generated by vector quantization. Vector
quantization based methods are successful in handling large scale high
dimensional data. These methods compress vectors into short encodings, and
allow efficient distance computation between an uncompressed vector and
compressed dataset without decompressing explicitly. However for large
datasets, these distance computing methods perform excessive computations. We
avoid excessive computations by storing the encodings in an Encoding
Tree (E-Tree); interestingly, the memory consumption is also lowered. We also
propose the Encoding Forest (E-Forest) to further lower the computation cost. E-Tree
and E-Forest are compatible with various existing quantization-based methods. We
show by experiments that our methods drastically speed up distance computation
for high-dimensional data, and various existing algorithms can benefit from
our methods.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2015 09:54:33 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Sep 2015 06:40:22 GMT"
}
] | 2015-09-21T00:00:00 | [
[
"Liu",
"Shicong",
""
],
[
"Shao",
"Junru",
""
],
[
"Lu",
"Hongtao",
""
]
] | TITLE: Accelerated Distance Computation with Encoding Tree for High Dimensional
Data
ABSTRACT: We propose a novel method to calculate the distance between high-dimensional
vector pairs, utilizing encodings generated by vector quantization. Vector
quantization based methods are successful in handling large scale high
dimensional data. These methods compress vectors into short encodings, and
allow efficient distance computation between an uncompressed vector and
compressed dataset without decompressing explicitly. However for large
datasets, these distance computing methods perform excessive computations. We
avoid excessive computations by storing the encodings in an Encoding
Tree (E-Tree); interestingly, the memory consumption is also lowered. We also
propose the Encoding Forest (E-Forest) to further lower the computation cost. E-Tree
and E-Forest are compatible with various existing quantization-based methods. We
show by experiments that our methods drastically speed up distance computation
for high-dimensional data, and various existing algorithms can benefit from
our methods.
| no_new_dataset | 0.947332 |
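A minimal sketch of the prefix-sharing idea behind an encoding tree: store multi-stage codes in a trie so each per-stage distance term is computed once per node and reused by every vector below it. The product-quantization-style codes and the trie layout are assumptions, not the paper's exact E-Tree:

```python
# Sketch: share per-stage distance terms across codes with a common prefix.
# Assumes PQ-style codes, i.e. the query splits into M subvectors.
import numpy as np

rng = np.random.default_rng(0)
M, K, d, N = 3, 8, 4, 200
codebooks = rng.random((M, K, d))                # stand-ins for learned ones
codes = rng.integers(0, K, size=(N, M))          # stage-wise encodings

# Nested-dict trie: path (c1, ..., cM) -> list of vector ids at the leaf.
trie = {}
for i, code in enumerate(codes):
    node = trie
    for c in code[:-1]:
        node = node.setdefault(int(c), {})
    node.setdefault(int(code[-1]), []).append(i)

def tree_distances(query):
    """Walk the trie, accumulating each stage's term once per node."""
    tables = np.stack([((query[m*d:(m+1)*d] - codebooks[m]) ** 2).sum(-1)
                       for m in range(M)])       # (M, K) lookup tables
    out = np.empty(N)
    def walk(node, stage, acc):
        for c, child in node.items():
            partial = acc + tables[stage, c]
            if stage == M - 1:
                out[child] = partial             # child is a leaf id list
            else:
                walk(child, stage + 1, partial)
    walk(trie, 0, 0.0)
    return out

print(np.argsort(tree_distances(rng.random(M * d)))[:5])
```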
1509.05567 | Dipasree Pal | Dipasree Pal, Mandar Mitra and Samar Bhattacharya | Exploring Query Categorisation for Query Expansion: A Study | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The vocabulary mismatch problem is one of the important challenges facing
traditional keyword-based Information Retrieval Systems. The aim of query
expansion (QE) is to reduce this query-document mismatch by adding related or
synonymous words or phrases to the query.
Several existing query expansion algorithms have proved their merit, but they
are not uniformly beneficial for all kinds of queries. Our long-term goal is to
formulate methods for applying QE techniques tailored to individual queries,
rather than applying the same general QE method to all queries. As an initial
step, we have proposed a taxonomy of query classes (from a QE perspective) in
this report. We have discussed the properties of each query class with
examples. We have also discussed some QE strategies that might be effective for
each query category.
In future work, we intend to test the proposed techniques using standard
datasets, and to explore automatic query categorisation methods.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2015 10:04:09 GMT"
}
] | 2015-09-21T00:00:00 | [
[
"Pal",
"Dipasree",
""
],
[
"Mitra",
"Mandar",
""
],
[
"Bhattacharya",
"Samar",
""
]
] | TITLE: Exploring Query Categorisation for Query Expansion: A Study
ABSTRACT: The vocabulary mismatch problem is one of the important challenges facing
traditional keyword-based Information Retrieval Systems. The aim of query
expansion (QE) is to reduce this query-document mismatch by adding related or
synonymous words or phrases to the query.
Several existing query expansion algorithms have proved their merit, but they
are not uniformly beneficial for all kinds of queries. Our long-term goal is to
formulate methods for applying QE techniques tailored to individual queries,
rather than applying the same general QE method to all queries. As an initial
step, we have proposed a taxonomy of query classes (from a QE perspective) in
this report. We have discussed the properties of each query class with
examples. We have also discussed some QE strategies that might be effective for
each query category.
In future work, we intend to test the proposed techniques using standard
datasets, and to explore automatic query categorisation methods.
| no_new_dataset | 0.946051 |
1509.05736 | Issa Atoum | Issa Atoum, Chih How Bong, Narayanan Kulathuramaiyer | Building a Pilot Software Quality-in-Use Benchmark Dataset | 6 pages,3 figures, conference Proceedings of 9th International
Conference on IT in Asia CITA (2015) | null | null | null | cs.SE cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prepared domain-specific datasets play an important role in supervised
learning approaches. In this article, a new sentence dataset for software
quality-in-use is proposed. Three experts were chosen to annotate the data
using a proposed annotation scheme. Then the data were reconciled in a (no
match eliminate) process to reduce bias. The kappa (k) statistic revealed an
acceptable level of agreement: moderate to substantial agreement between the
experts. The built data can be used to evaluate software quality-in-use models
based on sentiment analysis. Moreover, the annotation scheme can be used to
extend the current dataset.
| [
{
"version": "v1",
"created": "Fri, 18 Sep 2015 18:19:48 GMT"
}
] | 2015-09-21T00:00:00 | [
[
"Atoum",
"Issa",
""
],
[
"Bong",
"Chih How",
""
],
[
"Kulathuramaiyer",
"Narayanan",
""
]
] | TITLE: Building a Pilot Software Quality-in-Use Benchmark Dataset
ABSTRACT: Prepared domain-specific datasets play an important role in supervised
learning approaches. In this article, a new sentence dataset for software
quality-in-use is proposed. Three experts were chosen to annotate the data
using a proposed annotation scheme. Then the data were reconciled in a (no
match eliminate) process to reduce bias. The kappa (k) statistic revealed an
acceptable level of agreement: moderate to substantial agreement between the
experts. The built data can be used to evaluate software quality-in-use models
based on sentiment analysis. Moreover, the annotation scheme can be used to
extend the current dataset.
| new_dataset | 0.957794 |
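For reference, a minimal sketch of the kappa agreement statistic mentioned above, here as pairwise Cohen's kappa between two annotators; the paper used three experts, and the toy labels below are made up:

```python
# Sketch: Cohen's kappa = (observed - chance agreement) / (1 - chance).
from collections import Counter

def cohen_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

expert1 = ["pos", "neg", "pos", "neu", "pos", "neg"]
expert2 = ["pos", "neg", "neu", "neu", "pos", "pos"]
print(round(cohen_kappa(expert1, expert2), 3))   # ~0.48: moderate agreement
```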
1509.05194 | Shicong Liu | Shicong Liu, Junru Shao, Hongtao Lu | HCLAE: High Capacity Locally Aggregating Encodings for Approximate
Nearest Neighbor Search | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vector quantization-based approaches are successful at solving Approximate
Nearest Neighbor (ANN) problems, which are critical to many applications. The
idea is to generate effective encodings to allow fast distance approximation.
We propose that quantization-based methods should partition the data space finely
and exhibit locality of the dataset to allow efficient non-exhaustive search.
In this paper, we introduce the concept of High Capacity Locality Aggregating
Encodings (HCLAE) to this end, and propose Dictionary Annealing (DA) to learn
HCLAE by a simulated annealing procedure. The quantization error is lower than
that of other state-of-the-art methods. The algorithms of DA can be easily extended to an
online learning scheme, allowing effective handling of large-scale data. Further,
we propose the Aggregating-Tree (A-Tree), a non-exhaustive search method using
HCLAE to perform efficient ANN-Search. A-Tree achieves orders of magnitude of speed-up
on ANN-Search tasks, compared to the state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2015 10:18:05 GMT"
}
] | 2015-09-18T00:00:00 | [
[
"Liu",
"Shicong",
""
],
[
"Shao",
"Junru",
""
],
[
"Lu",
"Hongtao",
""
]
] | TITLE: HCLAE: High Capacity Locally Aggregating Encodings for Approximate
Nearest Neighbor Search
ABSTRACT: Vector quantization-based approaches are successful at solving Approximate
Nearest Neighbor (ANN) problems, which are critical to many applications. The
idea is to generate effective encodings to allow fast distance approximation.
We propose that quantization-based methods should partition the data space finely
and exhibit locality of the dataset to allow efficient non-exhaustive search.
In this paper, we introduce the concept of High Capacity Locality Aggregating
Encodings (HCLAE) to this end, and propose Dictionary Annealing (DA) to learn
HCLAE by a simulated annealing procedure. The quantization error is lower than
that of other state-of-the-art methods. The algorithms of DA can be easily extended to an
online learning scheme, allowing effective handling of large-scale data. Further,
we propose the Aggregating-Tree (A-Tree), a non-exhaustive search method using
HCLAE to perform efficient ANN-Search. A-Tree achieves orders of magnitude of speed-up
on ANN-Search tasks, compared to the state-of-the-art.
| no_new_dataset | 0.949949 |
1509.05195 | Shicong Liu | Shicong Liu, Hongtao Lu, Junru Shao | Improved Residual Vector Quantization for High-dimensional Approximate
Nearest Neighbor Search | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantization methods have been introduced to perform large-scale approximate
nearest neighbor search tasks. Residual Vector Quantization (RVQ) is one of the
effective quantization methods. RVQ uses a multi-stage codebook learning scheme
to lower the quantization error stage by stage. However, there are two major
limitations for RVQ when applied to high-dimensional approximate nearest
neighbor search: 1. The performance gain diminishes quickly with added stages.
2. Encoding a vector with RVQ is actually NP-hard. In this paper, we propose an
improved residual vector quantization (IRVQ) method; our IRVQ learns a codebook
with a hybrid method of subspace clustering and warm-started k-means on each
stage to prevent performance gain from dropping, and uses a multi-path encoding
scheme to encode a vector with lower distortion. Experimental results on the
benchmark datasets show that our method substantially improves RVQ and
delivers better performance compared to the state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2015 10:19:37 GMT"
}
] | 2015-09-18T00:00:00 | [
[
"Liu",
"Shicong",
""
],
[
"Lu",
"Hongtao",
""
],
[
"Shao",
"Junru",
""
]
] | TITLE: Improved Residual Vector Quantization for High-dimensional Approximate
Nearest Neighbor Search
ABSTRACT: Quantization methods have been introduced to perform large-scale approximate
nearest neighbor search tasks. Residual Vector Quantization (RVQ) is one of the
effective quantization methods. RVQ uses a multi-stage codebook learning scheme
to lower the quantization error stage by stage. However, there are two major
limitations for RVQ when applied to high-dimensional approximate nearest
neighbor search: 1. The performance gain diminishes quickly with added stages.
2. Encoding a vector with RVQ is actually NP-hard. In this paper, we propose an
improved residual vector quantization (IRVQ) method; our IRVQ learns a codebook
with a hybrid method of subspace clustering and warm-started k-means on each
stage to prevent performance gain from dropping, and uses a multi-path encoding
scheme to encode a vector with lower distortion. Experimental results on the
benchmark datasets show that our method substantially improves RVQ and
delivers better performance compared to the state-of-the-art.
| no_new_dataset | 0.950319 |
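A minimal sketch of plain residual vector quantization, the baseline IRVQ improves on: each stage greedily quantizes the residual left by the previous stages. The greediness is exactly why optimal encoding is hard, and why a multi-path (beam-style) encoder helps; the random codebooks below stand in for learned k-means centroids:

```python
# Sketch: greedy stage-by-stage RVQ encoding and its reconstruction.
import numpy as np

rng = np.random.default_rng(0)
stages, K, dim = 4, 16, 8
codebooks = rng.random((stages, K, dim)) - 0.5   # would be k-means centroids

def rvq_encode(x):
    residual, code = x.copy(), []
    for C in codebooks:                          # greedy stage-by-stage
        j = int(np.argmin(((residual - C) ** 2).sum(-1)))
        code.append(j)
        residual -= C[j]                         # pass residual to next stage
    return code

def rvq_decode(code):
    return sum(codebooks[s][j] for s, j in enumerate(code))

x = rng.random(dim)
code = rvq_encode(x)
print(code, np.linalg.norm(x - rvq_decode(code)))  # quantization error
```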
1509.05366 | Nazli Ikizler-Cinbis | Gokhan Tanisik, Cemil Zalluhoglu, Nazli Ikizler-Cinbis | Facial Descriptors for Human Interaction Recognition In Still Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel approach in a rarely studied area of computer
vision: Human interaction recognition in still images. We explore whether the
facial regions and their spatial configurations contribute to the recognition
of interactions. In this respect, our method involves extraction of several
visual features from the facial regions, as well as incorporation of scene
characteristics and deep features to the recognition. Extracted multiple
features are utilized within a discriminative learning framework for
recognizing interactions between people. Our designed facial descriptors are
based on the observation that relative positions, size and locations of the
faces are likely to be important for characterizing human interactions. Since
there is no available dataset in this relatively new domain, a comprehensive
new dataset which includes several images of human interactions is collected.
Our experimental results show that faces and scene characteristics contain
important information to recognize interactions between people.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2015 18:40:15 GMT"
}
] | 2015-09-18T00:00:00 | [
[
"Tanisik",
"Gokhan",
""
],
[
"Zalluhoglu",
"Cemil",
""
],
[
"Ikizler-Cinbis",
"Nazli",
""
]
] | TITLE: Facial Descriptors for Human Interaction Recognition In Still Images
ABSTRACT: This paper presents a novel approach in a rarely studied area of computer
vision: Human interaction recognition in still images. We explore whether the
facial regions and their spatial configurations contribute to the recognition
of interactions. In this respect, our method involves extraction of several
visual features from the facial regions, as well as incorporation of scene
characteristics and deep features to the recognition. Extracted multiple
features are utilized within a discriminative learning framework for
recognizing interactions between people. Our designed facial descriptors are
based on the observation that relative positions, size and locations of the
faces are likely to be important for characterizing human interactions. Since
there is no available dataset in this relatively new domain, a comprehensive
new dataset which includes several images of human interactions is collected.
Our experimental results show that faces and scene characteristics contain
important information to recognize interactions between people.
| new_dataset | 0.958265 |
1406.2639 | Holger Roth | Holger R. Roth and Le Lu and Ari Seff and Kevin M. Cherry and Joanne
Hoffman and Shijun Wang and Jiamin Liu and Evrim Turkbey and Ronald M.
Summers | A New 2.5D Representation for Lymph Node Detection using Random Sets of
Deep Convolutional Neural Network Observations | This article will be presented at MICCAI (Medical Image Computing and
Computer-Assisted Interventions) 2014 | Medical Image Computing and Computer-Assisted Intervention -
MICCAI 2014 Volume 8673 of the series Lecture Notes in Computer Science pp
520-527 | 10.1007/978-3-319-10404-1_65 | null | cs.CV cs.LG cs.NE | http://creativecommons.org/licenses/publicdomain/ | Automated Lymph Node (LN) detection is an important clinical diagnostic task
but very challenging due to the low contrast of surrounding structures in
Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely
distributed locations. State-of-the-art studies show the performance range of
52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1
FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this
paper, we first operate a preliminary candidate generation stage, towards 100%
sensitivity at the cost of high FP levels (40 per patient), to harvest volumes
of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by
resampling 2D reformatted orthogonal views N times, via scale, random
translations, and rotations with respect to the VOI centroid coordinates. These
random views are then used to train a deep Convolutional Neural Network (CNN)
classifier. In testing, the CNN is employed to assign LN probabilities for all
N random views that can be simply averaged (as a set) to compute the final
classification probability per VOI. We validate the approach on two datasets:
90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs.
We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in
mediastinum and abdomen respectively, which drastically improves over the
previous state-of-the-art work.
| [
{
"version": "v1",
"created": "Fri, 6 Jun 2014 22:43:42 GMT"
}
] | 2015-09-17T00:00:00 | [
[
"Roth",
"Holger R.",
""
],
[
"Lu",
"Le",
""
],
[
"Seff",
"Ari",
""
],
[
"Cherry",
"Kevin M.",
""
],
[
"Hoffman",
"Joanne",
""
],
[
"Wang",
"Shijun",
""
],
[
"Liu",
"Jiamin",
""
],
[
"Turkbey",
"Evrim",
""
],
[
"Summers",
"Ronald M.",
""
]
] | TITLE: A New 2.5D Representation for Lymph Node Detection using Random Sets of
Deep Convolutional Neural Network Observations
ABSTRACT: Automated Lymph Node (LN) detection is an important clinical diagnostic task
but very challenging due to the low contrast of surrounding structures in
Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely
distributed locations. State-of-the-art studies show the performance range of
52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1
FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this
paper, we first operate a preliminary candidate generation stage, towards 100%
sensitivity at the cost of high FP levels (40 per patient), to harvest volumes
of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by
resampling 2D reformatted orthogonal views N times, via scale, random
translations, and rotations with respect to the VOI centroid coordinates. These
random views are then used to train a deep Convolutional Neural Network (CNN)
classifier. In testing, the CNN is employed to assign LN probabilities for all
N random views that can be simply averaged (as a set) to compute the final
classification probability per VOI. We validate the approach on two datasets:
90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs.
We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in
mediastinum and abdomen respectively, which drastically improves over the
previous state-of-the-art work.
| no_new_dataset | 0.950641 |
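A minimal sketch of the 2.5D sampling described above: draw N randomly scaled, translated, and rotated 2D views of a 3D VOI and average a classifier's probabilities over the set. The transform ranges and the stub classifier are assumptions:

```python
# Sketch: random 2D reformatted views of a 3D volume, probabilities averaged.
import numpy as np
from scipy.ndimage import rotate, shift, zoom

def random_views(voi, n_views=10, seed=0):
    rng = np.random.default_rng(seed)
    cz = voi.shape[0] // 2
    views = []
    for _ in range(n_views):
        v = zoom(voi, rng.uniform(0.9, 1.1))                 # random scale
        v = shift(v, rng.uniform(-2, 2, size=3))             # random shift
        v = rotate(v, rng.uniform(0, 360), axes=(1, 2),      # random rotation
                   reshape=False)
        views.append(v[min(cz, v.shape[0] - 1)])             # axial 2D slice
    return views

def cnn_probability(view):
    """Stub standing in for the trained CNN's lymph-node probability."""
    return float(view.mean() > 0.5)

voi = np.random.default_rng(1).random((32, 32, 32))
p = np.mean([cnn_probability(v) for v in random_views(voi)])
print("averaged LN probability:", p)
```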
1505.03101 | Albert Mero\~no-Pe\~nuela | Albert Mero\~no-Pe\~nuela, Christophe Gu\'eret and Stefan Schlobach | Release Early, Release Often: Predicting Change in Versioned Knowledge
Organization Systems on the Web | 16 pages, 6 figures, ISWC 2015 conference pre-print The paper has
been withdrawn due to significant overlap with a subsequent paper submitted
to a conference for review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Semantic Web is built on top of Knowledge Organization Systems (KOS)
(vocabularies, ontologies, concept schemes) that provide a structured,
interoperable and distributed access to Linked Data on the Web. The maintenance
of these KOS over time has produced a number of KOS version chains: subsequent
unique version identifiers to unique states of a KOS. However, the release of
new KOS versions poses challenges to both KOS publishers and users. For
publishers, updating a KOS is a knowledge intensive task that requires a lot of
manual effort, often implying deep deliberation on the set of changes to
introduce. For users that link their datasets to these KOS, a new version
compromises the validity of their links, often creating ramifications. In this
paper we describe a method to automatically detect which parts of a Web KOS are
likely to change in a next version, using supervised learning on past versions
in the KOS version chain. We use a set of ontology change features to model and
predict change in arbitrary Web KOS. We apply our method on 139 varied datasets
systematically retrieved from the Semantic Web, obtaining robust results at
correctly predicting change. To illustrate the accuracy, genericity and domain
independence of the method, we study the relationship between its effectiveness
and several characterizations of the evaluated datasets, finding that
predictors like the number of versions in a chain and their release frequency
have a fundamental impact in predictability of change in Web KOS. Consequently,
we argue for adopting a release early, release often philosophy in Web KOS
development cycles.
| [
{
"version": "v1",
"created": "Tue, 12 May 2015 18:03:21 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Sep 2015 20:11:34 GMT"
}
] | 2015-09-17T00:00:00 | [
[
"Meroño-Peñuela",
"Albert",
""
],
[
"Guéret",
"Christophe",
""
],
[
"Schlobach",
"Stefan",
""
]
] | TITLE: Release Early, Release Often: Predicting Change in Versioned Knowledge
Organization Systems on the Web
ABSTRACT: The Semantic Web is built on top of Knowledge Organization Systems (KOS)
(vocabularies, ontologies, concept schemes) that provide a structured,
interoperable and distributed access to Linked Data on the Web. The maintenance
of these KOS over time has produced a number of KOS version chains: subsequent
unique version identifiers to unique states of a KOS. However, the release of
new KOS versions poses challenges to both KOS publishers and users. For
publishers, updating a KOS is a knowledge intensive task that requires a lot of
manual effort, often implying deep deliberation on the set of changes to
introduce. For users that link their datasets to these KOS, a new version
compromises the validity of their links, often creating ramifications. In this
paper we describe a method to automatically detect which parts of a Web KOS are
likely to change in a next version, using supervised learning on past versions
in the KOS version chain. We use a set of ontology change features to model and
predict change in arbitrary Web KOS. We apply our method on 139 varied datasets
systematically retrieved from the Semantic Web, obtaining robust results at
correctly predicting change. To illustrate the accuracy, genericity and domain
independence of the method, we study the relationship between its effectiveness
and several characterizations of the evaluated datasets, finding that
predictors like the number of versions in a chain and their release frequency
have a fundamental impact in predictability of change in Web KOS. Consequently,
we argue for adopting a release early, release often philosophy in Web KOS
development cycles.
| no_new_dataset | 0.955858 |
1509.03755 | Gabriel Prat | Gabriel Prat Masramon and Llu\'is A. Belanche Mu\~noz | Toward better feature weighting algorithms: a focus on Relief | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Feature weighting algorithms try to solve a problem of great importance
nowadays in machine learning: the search for a relevance measure for the
features of a given domain. This relevance is primarily used for feature
selection as feature weighting can be seen as a generalization of it, but it is
also useful to better understand a problem's domain or to guide an inductor in
its learning process. The Relief family of algorithms is proven to be very
effective in this task. Some other feature weighting methods are reviewed in
order to give some context and then the different existing extensions to the
original algorithm are explained.
One of Relief's known issues is the performance degradation of its estimates
when redundant features are present. A novel theoretical definition of
redundancy level is given in order to guide the work towards an extension of
the algorithm that is more robust against redundancy. A new extension is
presented that aims at improving the algorithm's performance. Some experiments
were conducted to test this new extension against the existing ones on a set of
artificial and real datasets, and showed that in certain cases it improves the
weights' estimation accuracy.
| [
{
"version": "v1",
"created": "Sat, 12 Sep 2015 15:10:15 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Sep 2015 11:58:32 GMT"
}
] | 2015-09-17T00:00:00 | [
[
"Masramon",
"Gabriel Prat",
""
],
[
"Muñoz",
"Lluís A. Belanche",
""
]
] | TITLE: Toward better feature weighting algorithms: a focus on Relief
ABSTRACT: Feature weighting algorithms try to solve a problem of great importance
nowadays in machine learning: the search for a relevance measure for the
features of a given domain. This relevance is primarily used for feature
selection as feature weighting can be seen as a generalization of it, but it is
also useful to better understand a problem's domain or to guide an inductor in
its learning process. The Relief family of algorithms is proven to be very
effective in this task. Some other feature weighting methods are reviewed in
order to give some context and then the different existing extensions to the
original algorithm are explained.
One of Relief's known issues is the performance degradation of its estimates
when redundant features are present. A novel theoretical definition of
redundancy level is given in order to guide the work towards an extension of
the algorithm that is more robust against redundancy. A new extension is
presented that aims at improving the algorithm's performance. Some experiments
were conducted to test this new extension against the existing ones on a set of
artificial and real datasets, and showed that in certain cases it improves the
weights' estimation accuracy.
| no_new_dataset | 0.941061 |
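For context, a minimal sketch of the original two-class Relief update that this line of work extends: a feature's weight grows with its separation from the nearest miss and shrinks with its distance to the nearest hit. The synthetic data is illustrative:

```python
# Sketch: Relief weight estimation via nearest-hit / nearest-miss updates.
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf
        same, diff = y == y[i], y != y[i]
        hit = np.where(same)[0][np.argmin(dists[same])]    # nearest hit
        miss = np.where(diff)[0][np.argmin(dists[diff])]   # nearest miss
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n_iter
    return w

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = np.c_[y + 0.1 * rng.standard_normal(200),   # relevant feature
          rng.standard_normal(200)]             # irrelevant feature
print(relief(X, y))  # the first weight should dominate
```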
1509.04612 | Alan Mosca | Alan Mosca and George D. Magoulas | Adapting Resilient Propagation for Deep Learning | Published in the proceedings of the UK workshop on Computational
Intelligence 2015 (UKCI) | null | null | null | cs.NE cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Resilient Propagation (Rprop) algorithm has been very popular for
backpropagation training of multilayer feed-forward neural networks in various
applications. The standard Rprop however encounters difficulties in the context
of deep neural networks as typically happens with gradient-based learning
algorithms. In this paper, we propose a modification of the Rprop that combines
standard Rprop steps with a special dropout technique. We apply the method for
training Deep Neural Networks as standalone components and in ensemble
formulations. Results on the MNIST dataset show that the proposed modification
alleviates standard Rprop's problems, demonstrating improved learning speed and
accuracy.
| [
{
"version": "v1",
"created": "Tue, 15 Sep 2015 15:55:29 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Sep 2015 11:45:48 GMT"
}
] | 2015-09-17T00:00:00 | [
[
"Mosca",
"Alan",
""
],
[
"Magoulas",
"George D.",
""
]
] | TITLE: Adapting Resilient Propagation for Deep Learning
ABSTRACT: The Resilient Propagation (Rprop) algorithm has been very popular for
backpropagation training of multilayer feed-forward neural networks in various
applications. The standard Rprop however encounters difficulties in the context
of deep neural networks as typically happens with gradient-based learning
algorithms. In this paper, we propose a modification of the Rprop that combines
standard Rprop steps with a special dropout technique. We apply the method for
training Deep Neural Networks as standalone components and in ensemble
formulations. Results on the MNIST dataset show that the proposed modification
alleviates standard Rprop's problems, demonstrating improved learning speed and
accuracy.
| no_new_dataset | 0.945801 |
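A minimal sketch of the standard Rprop update that the paper modifies (here the iRprop- variant): per-weight step sizes grow while the gradient sign is stable, shrink when it flips, and only the gradient's sign is used. The hyperparameters are the usual textbook defaults:

```python
# Sketch: one iRprop- step; the sign of grad * prev_grad drives step sizes.
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # iRprop-: skip flipped dims
    w = w - np.sign(grad) * step
    return w, grad, step

# Toy quadratic objective f(w) = ||w||^2 / 2, so grad = w.
w, prev_grad, step = np.array([3.0, -2.0]), np.zeros(2), np.full(2, 0.1)
for _ in range(50):
    grad = w
    w, prev_grad, step = rprop_step(w, grad, prev_grad, step)
print(w)  # near the optimum at the origin
```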