id (stringlengths 9–16) | submitter (stringlengths 3–64 ⌀) | authors (stringlengths 5–6.63k) | title (stringlengths 7–245) | comments (stringlengths 1–482 ⌀) | journal-ref (stringlengths 4–382 ⌀) | doi (stringlengths 9–151 ⌀) | report-no (stringclasses, 984 values) | categories (stringlengths 5–108) | license (stringclasses, 9 values) | abstract (stringlengths 83–3.41k) | versions (listlengths 1–20) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequencelengths 1–427) | prompt (stringlengths 166–3.49k) | label (stringclasses, 2 values) | prob (float64, 0.5–0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
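Each row below pairs an arXiv record with a `prompt` built from its title and abstract, a two-class `label` (`new_dataset` or `no_new_dataset`), and a `prob` confidence score. The following is a minimal sketch of how a dataset with this schema could be loaded and filtered using the Hugging Face `datasets` library; the repository id in the example is a placeholder, not the actual dataset identifier.

```python
# A minimal sketch, assuming this table is published as a Hugging Face dataset.
# The repository id "user/arxiv-new-dataset-labels" is a placeholder, not the
# real dataset name.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

# The features mirror the header above: id, submitter, authors, title, comments,
# journal-ref, doi, report-no, categories, license, abstract, versions,
# update_date, authors_parsed, prompt, label, prob.
print(ds.features)

# Keep only rows labeled as introducing a new dataset with high confidence.
high_conf_new = ds.filter(
    lambda row: row["label"] == "new_dataset" and row["prob"] >= 0.9
)
print(f"{len(high_conf_new)} high-confidence new-dataset papers")
```
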
1612.01175 | Benjamin Eysenbach | Benjamin Eysenbach, Carl Vondrick, Antonio Torralba | Who is Mistaken? | See project website at: http://people.csail.mit.edu/bce/mistaken/ .
(Edit: fixed typos and references) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing when people have false beliefs is crucial for understanding their
actions. We introduce the novel problem of identifying when people in abstract
scenes have incorrect beliefs. We present a dataset of scenes, each visually
depicting an 8-frame story in which a character has a mistaken belief. We then
create a representation of characters' beliefs for two tasks in human action
understanding: predicting who is mistaken, and when they are mistaken.
Experiments suggest that our method for identifying mistaken characters
performs better on these tasks than simple baselines. Diagnostics on our model
suggest it learns important cues for recognizing mistaken beliefs, such as
gaze. We believe models of people's beliefs will have many applications in
action understanding, robotics, and healthcare.
| [
{
"version": "v1",
"created": "Sun, 4 Dec 2016 20:45:42 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2017 16:36:53 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Eysenbach",
"Benjamin",
""
],
[
"Vondrick",
"Carl",
""
],
[
"Torralba",
"Antonio",
""
]
] | TITLE: Who is Mistaken?
ABSTRACT: Recognizing when people have false beliefs is crucial for understanding their
actions. We introduce the novel problem of identifying when people in abstract
scenes have incorrect beliefs. We present a dataset of scenes, each visually
depicting an 8-frame story in which a character has a mistaken belief. We then
create a representation of characters' beliefs for two tasks in human action
understanding: predicting who is mistaken, and when they are mistaken.
Experiments suggest that our method for identifying mistaken characters
performs better on these tasks than simple baselines. Diagnostics on our model
suggest it learns important cues for recognizing mistaken beliefs, such as
gaze. We believe models of people's beliefs will have many applications in
action understanding, robotics, and healthcare.
| new_dataset | 0.959383 |
1702.05891 | Feng Zhu | Feng Zhu, Hongsheng Li, Wanli Ouyang, Nenghai Yu, and Xiaogang Wang | Learning Spatial Regularization with Image-level Supervisions for
Multi-label Image Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-label image classification is a fundamental but challenging task in
computer vision. Great progress has been achieved by exploiting semantic
relations between labels in recent years. However, conventional approaches are
unable to model the underlying spatial relations between labels in multi-label
images, because spatial annotations of the labels are generally not provided.
In this paper, we propose a unified deep neural network that exploits both
semantic and spatial relations between labels with only image-level
supervisions. Given a multi-label image, our proposed Spatial Regularization
Network (SRN) generates attention maps for all labels and captures the
underlying relations between them via learnable convolutions. By aggregating
the regularized classification results with original results by a ResNet-101
network, the classification performance can be consistently improved. The whole
deep neural network is trained end-to-end with only image-level annotations,
thus requires no additional efforts on image annotations. Extensive evaluations
on 3 public datasets with different types of labels show that our approach
significantly outperforms state-of-the-arts and has strong generalization
capability. Analysis of the learned SRN model demonstrates that it can
effectively capture both semantic and spatial relations of labels for improving
classification performance.
| [
{
"version": "v1",
"created": "Mon, 20 Feb 2017 08:21:58 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2017 08:49:43 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Zhu",
"Feng",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Yu",
"Nenghai",
""
],
[
"Wang",
"Xiaogang",
""
]
] | TITLE: Learning Spatial Regularization with Image-level Supervisions for
Multi-label Image Classification
ABSTRACT: Multi-label image classification is a fundamental but challenging task in
computer vision. Great progress has been achieved by exploiting semantic
relations between labels in recent years. However, conventional approaches are
unable to model the underlying spatial relations between labels in multi-label
images, because spatial annotations of the labels are generally not provided.
In this paper, we propose a unified deep neural network that exploits both
semantic and spatial relations between labels with only image-level
supervisions. Given a multi-label image, our proposed Spatial Regularization
Network (SRN) generates attention maps for all labels and captures the
underlying relations between them via learnable convolutions. By aggregating
the regularized classification results with original results by a ResNet-101
network, the classification performance can be consistently improved. The whole
deep neural network is trained end-to-end with only image-level annotations,
thus requires no additional efforts on image annotations. Extensive evaluations
on 3 public datasets with different types of labels show that our approach
significantly outperforms state-of-the-arts and has strong generalization
capability. Analysis of the learned SRN model demonstrates that it can
effectively capture both semantic and spatial relations of labels for improving
classification performance.
| no_new_dataset | 0.950273 |
1703.10631 | Jinkyu Kim | Jinkyu Kim and John Canny | Interpretable Learning for Self-Driving Cars by Visualizing Causal
Attention | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural perception and control networks are likely to be a key component
of self-driving vehicles. These models need to be explainable - they should
provide easy-to-interpret rationales for their behavior - so that passengers,
insurance companies, law enforcement, developers etc., can understand what
triggered a particular behavior. Here we explore the use of visual
explanations. These explanations take the form of real-time highlighted regions
of an image that causally influence the network's output (steering control).
Our approach is two-stage. In the first stage, we use a visual attention model
to train a convolution network end-to-end from images to steering angle. The
attention model highlights image regions that potentially influence the
network's output. Some of these are true influences, but some are spurious. We
then apply a causal filtering step to determine which input regions actually
influence the output. This produces more succinct visual explanations and more
accurately exposes the network's behavior. We demonstrate the effectiveness of
our model on three datasets totaling 16 hours of driving. We first show that
training with attention does not degrade the performance of the end-to-end
network. Then we show that the network causally cues on a variety of features
that are used by humans while driving.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2017 18:37:49 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Kim",
"Jinkyu",
""
],
[
"Canny",
"John",
""
]
] | TITLE: Interpretable Learning for Self-Driving Cars by Visualizing Causal
Attention
ABSTRACT: Deep neural perception and control networks are likely to be a key component
of self-driving vehicles. These models need to be explainable - they should
provide easy-to-interpret rationales for their behavior - so that passengers,
insurance companies, law enforcement, developers etc., can understand what
triggered a particular behavior. Here we explore the use of visual
explanations. These explanations take the form of real-time highlighted regions
of an image that causally influence the network's output (steering control).
Our approach is two-stage. In the first stage, we use a visual attention model
to train a convolution network end-to-end from images to steering angle. The
attention model highlights image regions that potentially influence the
network's output. Some of these are true influences, but some are spurious. We
then apply a causal filtering step to determine which input regions actually
influence the output. This produces more succinct visual explanations and more
accurately exposes the network's behavior. We demonstrate the effectiveness of
our model on three datasets totaling 16 hours of driving. We first show that
training with attention does not degrade the performance of the end-to-end
network. Then we show that the network causally cues on a variety of features
that are used by humans while driving.
| no_new_dataset | 0.943034 |
1703.10642 | Hyungjun Kim | Hyungjun Kim, Taesu Kim, Jinseok Kim and Jae-Joon Kim | Deep Neural Network Optimized to Resistive Memory with Nonlinear
Current-Voltage Characteristics | 14 pages | null | null | null | cs.ET cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Neural Network computation relies on intensive vector-matrix
multiplications. Recently, the emerging nonvolatile memory (NVM) crossbar array
showed a feasibility of implementing such operations with high energy
efficiency, thus there are many works on efficiently utilizing emerging NVM
crossbar array as analog vector-matrix multiplier. However, its nonlinear I-V
characteristics restrain critical design parameters, such as the read voltage
and weight range, resulting in substantial accuracy loss. In this paper,
instead of optimizing hardware parameters to a given neural network, we propose
a methodology of reconstructing a neural network itself optimized to resistive
memory crossbar arrays. To verify the validity of the proposed method, we
simulated various neural network with MNIST and CIFAR-10 dataset using two
different specific Resistive Random Access Memory (RRAM) model. Simulation
results show that our proposed neural network produces significantly higher
inference accuracies than conventional neural network when the synapse devices
have nonlinear I-V characteristics.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2017 19:04:55 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Kim",
"Hyungjun",
""
],
[
"Kim",
"Taesu",
""
],
[
"Kim",
"Jinseok",
""
],
[
"Kim",
"Jae-Joon",
""
]
] | TITLE: Deep Neural Network Optimized to Resistive Memory with Nonlinear
Current-Voltage Characteristics
ABSTRACT: Artificial Neural Network computation relies on intensive vector-matrix
multiplications. Recently, the emerging nonvolatile memory (NVM) crossbar array
showed a feasibility of implementing such operations with high energy
efficiency, thus there are many works on efficiently utilizing emerging NVM
crossbar array as analog vector-matrix multiplier. However, its nonlinear I-V
characteristics restrain critical design parameters, such as the read voltage
and weight range, resulting in substantial accuracy loss. In this paper,
instead of optimizing hardware parameters to a given neural network, we propose
a methodology of reconstructing a neural network itself optimized to resistive
memory crossbar arrays. To verify the validity of the proposed method, we
simulated various neural network with MNIST and CIFAR-10 dataset using two
different specific Resistive Random Access Memory (RRAM) model. Simulation
results show that our proposed neural network produces significantly higher
inference accuracies than conventional neural network when the synapse devices
have nonlinear I-V characteristics.
| no_new_dataset | 0.949995 |
1703.10645 | Igor Fedorov | Igor Fedorov, Ritwik Giri, Bhaskar D. Rao, Truong Q. Nguyen | Relevance Subject Machine: A Novel Person Re-identification Framework | Submitted to IEEE Transactions on Pattern Analysis and Machine
Intelligence | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel method called the Relevance Subject Machine (RSM) to solve
the person re-identification (re-id) problem. RSM falls under the category of
Bayesian sparse recovery algorithms and uses the sparse representation of the
input video under a pre-defined dictionary to identify the subject in the
video. Our approach focuses on the multi-shot re-id problem, which is the
prevalent problem in many video analytics applications. RSM captures the
essence of the multi-shot re-id problem by constraining the support of the
sparse codes for each input video frame to be the same. Our proposed approach
is also robust enough to deal with time varying outliers and occlusions by
introducing a sparse, non-stationary noise term in the model error. We provide
a novel Variational Bayesian based inference procedure along with an intuitive
interpretation of the proposed update rules. We evaluate our approach over
several commonly used re-id datasets and show superior performance over current
state-of-the-art algorithms. Specifically, for ILIDS-VID, a recent large scale
re-id dataset, RSM shows significant improvement over all published approaches,
achieving an 11.5% (absolute) improvement in rank 1 accuracy over the closest
competing algorithm considered.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2017 19:21:55 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Fedorov",
"Igor",
""
],
[
"Giri",
"Ritwik",
""
],
[
"Rao",
"Bhaskar D.",
""
],
[
"Nguyen",
"Truong Q.",
""
]
] | TITLE: Relevance Subject Machine: A Novel Person Re-identification Framework
ABSTRACT: We propose a novel method called the Relevance Subject Machine (RSM) to solve
the person re-identification (re-id) problem. RSM falls under the category of
Bayesian sparse recovery algorithms and uses the sparse representation of the
input video under a pre-defined dictionary to identify the subject in the
video. Our approach focuses on the multi-shot re-id problem, which is the
prevalent problem in many video analytics applications. RSM captures the
essence of the multi-shot re-id problem by constraining the support of the
sparse codes for each input video frame to be the same. Our proposed approach
is also robust enough to deal with time varying outliers and occlusions by
introducing a sparse, non-stationary noise term in the model error. We provide
a novel Variational Bayesian based inference procedure along with an intuitive
interpretation of the proposed update rules. We evaluate our approach over
several commonly used re-id datasets and show superior performance over current
state-of-the-art algorithms. Specifically, for ILIDS-VID, a recent large scale
re-id dataset, RSM shows significant improvement over all published approaches,
achieving an 11.5% (absolute) improvement in rank 1 accuracy over the closest
competing algorithm considered.
| no_new_dataset | 0.945349 |
1703.10661 | Md Shopon | Mithun Biswas, Rafiqul Islam, Gautam Kumar Shom, Md Shopon, Nabeel
Mohammed, Sifat Momen, Md Anowarul Abedin | BanglaLekha-Isolated: A Comprehensive Bangla Handwritten Character
Dataset | Bangla Handwriting Dataset, OCR | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bangla handwriting recognition is becoming a very important issue nowadays.
It is potentially a very important task specially for Bangla speaking
population of Bangladesh and West Bengal. By keeping that in our mind we are
introducing a comprehensive Bangla handwritten character dataset named
BanglaLekha-Isolated. This dataset contains Bangla handwritten numerals, basic
characters and compound characters. This dataset was collected from multiple
geographical location within Bangladesh and includes sample collected from a
variety of aged groups. This dataset can also be used for other classification
problems i.e: gender, age, district. This is the largest dataset on Bangla
handwritten characters yet.
| [
{
"version": "v1",
"created": "Wed, 22 Feb 2017 07:57:14 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Biswas",
"Mithun",
""
],
[
"Islam",
"Rafiqul",
""
],
[
"Shom",
"Gautam Kumar",
""
],
[
"Shopon",
"Md",
""
],
[
"Mohammed",
"Nabeel",
""
],
[
"Momen",
"Sifat",
""
],
[
"Abedin",
"Md Anowarul",
""
]
] | TITLE: BanglaLekha-Isolated: A Comprehensive Bangla Handwritten Character
Dataset
ABSTRACT: Bangla handwriting recognition is becoming a very important issue nowadays.
It is potentially a very important task specially for Bangla speaking
population of Bangladesh and West Bengal. By keeping that in our mind we are
introducing a comprehensive Bangla handwritten character dataset named
BanglaLekha-Isolated. This dataset contains Bangla handwritten numerals, basic
characters and compound characters. This dataset was collected from multiple
geographical location within Bangladesh and includes sample collected from a
variety of aged groups. This dataset can also be used for other classification
problems i.e: gender, age, district. This is the largest dataset on Bangla
handwritten characters yet.
| new_dataset | 0.965381 |
1703.10667 | Chih-Yao Ma | Chih-Yao Ma, Min-Hung Chen, Zsolt Kira, Ghassan AlRegib | TS-LSTM and Temporal-Inception: Exploiting Spatiotemporal Dynamics for
Activity Recognition | 16 pages, 11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent two-stream deep Convolutional Neural Networks (ConvNets) have made
significant progress in recognizing human actions in videos. Despite their
success, methods extending the basic two-stream ConvNet have not systematically
explored possible network architectures to further exploit spatiotemporal
dynamics within video sequences. Further, such networks often use different
baseline two-stream networks. Therefore, the differences and the distinguishing
factors between various methods using Recurrent Neural Networks (RNN) or
convolutional networks on temporally-constructed feature vectors
(Temporal-ConvNet) are unclear. In this work, we first demonstrate a strong
baseline two-stream ConvNet using ResNet-101. We use this baseline to
thoroughly examine the use of both RNNs and Temporal-ConvNets for extracting
spatiotemporal information. Building upon our experimental results, we then
propose and investigate two different networks to further integrate
spatiotemporal information: 1) temporal segment RNN and 2) Inception-style
Temporal-ConvNet. We demonstrate that using both RNNs (using LSTMs) and
Temporal-ConvNets on spatiotemporal feature matrices are able to exploit
spatiotemporal dynamics to improve the overall performance. However, each of
these methods require proper care to achieve state-of-the-art performance; for
example, LSTMs require pre-segmented data or else they cannot fully exploit
temporal information. Our analysis identifies specific limitations for each
method that could form the basis of future work. Our experimental results on
UCF101 and HMDB51 datasets achieve state-of-the-art performances, 94.1% and
69.0%, respectively, without requiring extensive temporal augmentation.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2017 20:45:00 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Ma",
"Chih-Yao",
""
],
[
"Chen",
"Min-Hung",
""
],
[
"Kira",
"Zsolt",
""
],
[
"AlRegib",
"Ghassan",
""
]
] | TITLE: TS-LSTM and Temporal-Inception: Exploiting Spatiotemporal Dynamics for
Activity Recognition
ABSTRACT: Recent two-stream deep Convolutional Neural Networks (ConvNets) have made
significant progress in recognizing human actions in videos. Despite their
success, methods extending the basic two-stream ConvNet have not systematically
explored possible network architectures to further exploit spatiotemporal
dynamics within video sequences. Further, such networks often use different
baseline two-stream networks. Therefore, the differences and the distinguishing
factors between various methods using Recurrent Neural Networks (RNN) or
convolutional networks on temporally-constructed feature vectors
(Temporal-ConvNet) are unclear. In this work, we first demonstrate a strong
baseline two-stream ConvNet using ResNet-101. We use this baseline to
thoroughly examine the use of both RNNs and Temporal-ConvNets for extracting
spatiotemporal information. Building upon our experimental results, we then
propose and investigate two different networks to further integrate
spatiotemporal information: 1) temporal segment RNN and 2) Inception-style
Temporal-ConvNet. We demonstrate that using both RNNs (using LSTMs) and
Temporal-ConvNets on spatiotemporal feature matrices are able to exploit
spatiotemporal dynamics to improve the overall performance. However, each of
these methods require proper care to achieve state-of-the-art performance; for
example, LSTMs require pre-segmented data or else they cannot fully exploit
temporal information. Our analysis identifies specific limitations for each
method that could form the basis of future work. Our experimental results on
UCF101 and HMDB51 datasets achieve state-of-the-art performances, 94.1% and
69.0%, respectively, without requiring extensive temporal augmentation.
| no_new_dataset | 0.94625 |
1703.10714 | Donghyun Kim | Donghyun Kim, Matthias Hernandez, Jongmoo Choi, Gerard Medioni | Deep 3D Face Identification | 9 pages, 5 figures, 2 tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel 3D face recognition algorithm using a deep convolutional
neural network (DCNN) and a 3D augmentation technique. The performance of 2D
face recognition algorithms has significantly increased by leveraging the
representational power of deep neural networks and the use of large-scale
labeled training data. As opposed to 2D face recognition, training
discriminative deep features for 3D face recognition is very difficult due to
the lack of large-scale 3D face datasets. In this paper, we show that transfer
learning from a CNN trained on 2D face images can effectively work for 3D face
recognition by fine-tuning the CNN with a relatively small number of 3D facial
scans. We also propose a 3D face augmentation technique which synthesizes a
number of different facial expressions from a single 3D face scan. Our proposed
method shows excellent recognition results on Bosphorus, BU-3DFE, and 3D-TEC
datasets, without using hand-crafted features. The 3D identification using our
deep features also scales well for large databases.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2017 23:49:23 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Kim",
"Donghyun",
""
],
[
"Hernandez",
"Matthias",
""
],
[
"Choi",
"Jongmoo",
""
],
[
"Medioni",
"Gerard",
""
]
] | TITLE: Deep 3D Face Identification
ABSTRACT: We propose a novel 3D face recognition algorithm using a deep convolutional
neural network (DCNN) and a 3D augmentation technique. The performance of 2D
face recognition algorithms has significantly increased by leveraging the
representational power of deep neural networks and the use of large-scale
labeled training data. As opposed to 2D face recognition, training
discriminative deep features for 3D face recognition is very difficult due to
the lack of large-scale 3D face datasets. In this paper, we show that transfer
learning from a CNN trained on 2D face images can effectively work for 3D face
recognition by fine-tuning the CNN with a relatively small number of 3D facial
scans. We also propose a 3D face augmentation technique which synthesizes a
number of different facial expressions from a single 3D face scan. Our proposed
method shows excellent recognition results on Bosphorus, BU-3DFE, and 3D-TEC
datasets, without using hand-crafted features. The 3D identification using our
deep features also scales well for large databases.
| no_new_dataset | 0.944995 |
1703.10818 | Liying Chi | Liying Chi, Hongxin Zhang and Mingxiu Chen | End-To-End Face Detection and Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Plenty of face detection and recognition methods have been proposed and got
delightful results in decades. Common face recognition pipeline consists of: 1)
face detection, 2) face alignment, 3) feature extraction, 4) similarity
calculation, which are separated and independent from each other. The separated
face analyzing stages lead the model redundant calculation and are hard for
end-to-end training. In this paper, we proposed a novel end-to-end trainable
convolutional network framework for face detection and recognition, in which a
geometric transformation matrix was directly learned to align the faces,
instead of predicting the facial landmarks. In training stage, our single CNN
model is supervised only by face bounding boxes and personal identities, which
are publicly available from WIDER FACE \cite{Yang2016} dataset and
CASIA-WebFace \cite{Yi2014} dataset. Tested on Face Detection Dataset and
Benchmark (FDDB) \cite{Jain2010} dataset and Labeled Face in the Wild (LFW)
\cite{Huang2007} dataset, we have achieved 89.24\% recall for face detection
task and 98.63\% verification accuracy for face recognition task
simultaneously, which are comparable to state-of-the-art results.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 09:48:32 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Chi",
"Liying",
""
],
[
"Zhang",
"Hongxin",
""
],
[
"Chen",
"Mingxiu",
""
]
] | TITLE: End-To-End Face Detection and Recognition
ABSTRACT: Plenty of face detection and recognition methods have been proposed and got
delightful results in decades. Common face recognition pipeline consists of: 1)
face detection, 2) face alignment, 3) feature extraction, 4) similarity
calculation, which are separated and independent from each other. The separated
face analyzing stages lead the model redundant calculation and are hard for
end-to-end training. In this paper, we proposed a novel end-to-end trainable
convolutional network framework for face detection and recognition, in which a
geometric transformation matrix was directly learned to align the faces,
instead of predicting the facial landmarks. In training stage, our single CNN
model is supervised only by face bounding boxes and personal identities, which
are publicly available from WIDER FACE \cite{Yang2016} dataset and
CASIA-WebFace \cite{Yi2014} dataset. Tested on Face Detection Dataset and
Benchmark (FDDB) \cite{Jain2010} dataset and Labeled Face in the Wild (LFW)
\cite{Huang2007} dataset, we have achieved 89.24\% recall for face detection
task and 98.63\% verification accuracy for face recognition task
simultaneously, which are comparable to state-of-the-art results.
| no_new_dataset | 0.947624 |
1703.10889 | Yudong Liang | Yudong Liang, Radu Timofte, Jinjun Wang, Yihong Gong and Nanning Zheng | Single Image Super Resolution - When Model Adaptation Matters | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the recent years impressive advances were made for single image
super-resolution. Deep learning is behind a big part of this success. Deep(er)
architecture design and external priors modeling are the key ingredients. The
internal contents of the low resolution input image is neglected with deep
modeling despite the earlier works showing the power of using such internal
priors. In this paper we propose a novel deep convolutional neural network
carefully designed for robustness and efficiency at both learning and testing.
Moreover, we propose a couple of model adaptation strategies to the internal
contents of the low resolution input image and analyze their strong points and
weaknesses. By trading runtime and using internal priors we achieve 0.1 up to
0.3dB PSNR improvements over best reported results on standard datasets. Our
adaptation especially favors images with repetitive structures or under large
resolutions. Moreover, it can be combined with other simple techniques, such as
back-projection or enhanced prediction, for further improvements.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 13:20:19 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Liang",
"Yudong",
""
],
[
"Timofte",
"Radu",
""
],
[
"Wang",
"Jinjun",
""
],
[
"Gong",
"Yihong",
""
],
[
"Zheng",
"Nanning",
""
]
] | TITLE: Single Image Super Resolution - When Model Adaptation Matters
ABSTRACT: In the recent years impressive advances were made for single image
super-resolution. Deep learning is behind a big part of this success. Deep(er)
architecture design and external priors modeling are the key ingredients. The
internal contents of the low resolution input image is neglected with deep
modeling despite the earlier works showing the power of using such internal
priors. In this paper we propose a novel deep convolutional neural network
carefully designed for robustness and efficiency at both learning and testing.
Moreover, we propose a couple of model adaptation strategies to the internal
contents of the low resolution input image and analyze their strong points and
weaknesses. By trading runtime and using internal priors we achieve 0.1 up to
0.3dB PSNR improvements over best reported results on standard datasets. Our
adaptation especially favors images with repetitive structures or under large
resolutions. Moreover, it can be combined with other simple techniques, such as
back-projection or enhanced prediction, for further improvements.
| no_new_dataset | 0.947186 |
1703.10898 | Jie Song | Jie Song, Limin Wang, Luc Van Gool, Otmar Hilliges | Thin-Slicing Network: A Deep Structured Model for Pose Estimation in
Videos | Preliminary version to appear in CVPR2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep ConvNets have been shown to be effective for the task of human pose
estimation from single images. However, several challenging issues arise in the
video-based case such as self-occlusion, motion blur, and uncommon poses with
few or no examples in training data sets. Temporal information can provide
additional cues about the location of body joints and help to alleviate these
issues. In this paper, we propose a deep structured model to estimate a
sequence of human poses in unconstrained videos. This model can be efficiently
trained in an end-to-end manner and is capable of representing appearance of
body joints and their spatio-temporal relationships simultaneously. Domain
knowledge about the human body is explicitly incorporated into the network
providing effective priors to regularize the skeletal structure and to enforce
temporal consistency. The proposed end-to-end architecture is evaluated on two
widely used benchmarks (Penn Action dataset and JHMDB dataset) for video-based
pose estimation. Our approach significantly outperforms the existing
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 13:59:31 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Song",
"Jie",
""
],
[
"Wang",
"Limin",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Hilliges",
"Otmar",
""
]
] | TITLE: Thin-Slicing Network: A Deep Structured Model for Pose Estimation in
Videos
ABSTRACT: Deep ConvNets have been shown to be effective for the task of human pose
estimation from single images. However, several challenging issues arise in the
video-based case such as self-occlusion, motion blur, and uncommon poses with
few or no examples in training data sets. Temporal information can provide
additional cues about the location of body joints and help to alleviate these
issues. In this paper, we propose a deep structured model to estimate a
sequence of human poses in unconstrained videos. This model can be efficiently
trained in an end-to-end manner and is capable of representing appearance of
body joints and their spatio-temporal relationships simultaneously. Domain
knowledge about the human body is explicitly incorporated into the network
providing effective priors to regularize the skeletal structure and to enforce
temporal consistency. The proposed end-to-end architecture is evaluated on two
widely used benchmarks (Penn Action dataset and JHMDB dataset) for video-based
pose estimation. Our approach significantly outperforms the existing
state-of-the-art methods.
| no_new_dataset | 0.949669 |
1703.10901 | Simion-Vlad Bogolin | Ioana Croitoru (1), Simion-Vlad Bogolin (1), Marius Leordeanu (1 and
2) ((1) Institute of Mathematics of the Romanian Academy, (2) University
"Politehnica" of Bucharest) | Unsupervised learning from video to detect foreground objects in single
images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised learning from visual data is one of the most difficult
challenges in computer vision, being a fundamental task for understanding how
visual recognition works. From a practical point of view, learning from
unsupervised visual input has an immense practical value, as very large
quantities of unlabeled videos can be collected at low cost. In this paper, we
address the task of unsupervised learning to detect and segment foreground
objects in single images. We achieve our goal by training a student pathway,
consisting of a deep neural network. It learns to predict from a single input
image (a video frame) the output for that particular frame, of a teacher
pathway that performs unsupervised object discovery in video. Our approach is
different from the published literature that performs unsupervised discovery in
videos or in collections of images at test time. We move the unsupervised
discovery phase during the training stage, while at test time we apply the
standard feed-forward processing along the student pathway. This has a dual
benefit: firstly, it allows in principle unlimited possibilities of learning
and generalization during training, while remaining very fast at testing.
Secondly, the student not only becomes able to detect in single images
significantly better than its unsupervised video discovery teacher, but it also
achieves state of the art results on two important current benchmarks, YouTube
Objects and Object Discovery datasets. Moreover, at test time, our system is at
least two orders of magnitude faster than other previous methods.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 14:05:13 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Croitoru",
"Ioana",
"",
"1 and\n 2"
],
[
"Bogolin",
"Simion-Vlad",
"",
"1 and\n 2"
],
[
"Leordeanu",
"Marius",
"",
"1 and\n 2"
]
] | TITLE: Unsupervised learning from video to detect foreground objects in single
images
ABSTRACT: Unsupervised learning from visual data is one of the most difficult
challenges in computer vision, being a fundamental task for understanding how
visual recognition works. From a practical point of view, learning from
unsupervised visual input has an immense practical value, as very large
quantities of unlabeled videos can be collected at low cost. In this paper, we
address the task of unsupervised learning to detect and segment foreground
objects in single images. We achieve our goal by training a student pathway,
consisting of a deep neural network. It learns to predict from a single input
image (a video frame) the output for that particular frame, of a teacher
pathway that performs unsupervised object discovery in video. Our approach is
different from the published literature that performs unsupervised discovery in
videos or in collections of images at test time. We move the unsupervised
discovery phase during the training stage, while at test time we apply the
standard feed-forward processing along the student pathway. This has a dual
benefit: firstly, it allows in principle unlimited possibilities of learning
and generalization during training, while remaining very fast at testing.
Secondly, the student not only becomes able to detect in single images
significantly better than its unsupervised video discovery teacher, but it also
achieves state of the art results on two important current benchmarks, YouTube
Objects and Object Discovery datasets. Moreover, at test time, our system is at
least two orders of magnitude faster than other previous methods.
| no_new_dataset | 0.947039 |
1703.10902 | Xiao Yang | Xiao Yang, Roland Kwitt, Martin Styner, Marc Niethammer | Fast Predictive Multimodal Image Registration | Accepted as a conference paper for ISBI 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a deep encoder-decoder architecture for image deformation
prediction from multimodal images. Specifically, we design an image-patch-based
deep network that jointly (i) learns an image similarity measure and (ii) the
relationship between image patches and deformation parameters. While our method
can be applied to general image registration formulations, we focus on the
Large Deformation Diffeomorphic Metric Mapping (LDDMM) registration model. By
predicting the initial momentum of the shooting formulation of LDDMM, we
preserve its mathematical properties and drastically reduce the computation
time, compared to optimization-based approaches. Furthermore, we create a
Bayesian probabilistic version of the network that allows evaluation of
registration uncertainty via sampling of the network at test time. We evaluate
our method on a 3D brain MRI dataset using both T1- and T2-weighted images. Our
experiments show that our method generates accurate predictions and that
learning the similarity measure leads to more consistent registrations than
relying on generic multimodal image similarity measures, such as mutual
information. Our approach is an order of magnitude faster than
optimization-based LDDMM.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 14:05:57 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Yang",
"Xiao",
""
],
[
"Kwitt",
"Roland",
""
],
[
"Styner",
"Martin",
""
],
[
"Niethammer",
"Marc",
""
]
] | TITLE: Fast Predictive Multimodal Image Registration
ABSTRACT: We introduce a deep encoder-decoder architecture for image deformation
prediction from multimodal images. Specifically, we design an image-patch-based
deep network that jointly (i) learns an image similarity measure and (ii) the
relationship between image patches and deformation parameters. While our method
can be applied to general image registration formulations, we focus on the
Large Deformation Diffeomorphic Metric Mapping (LDDMM) registration model. By
predicting the initial momentum of the shooting formulation of LDDMM, we
preserve its mathematical properties and drastically reduce the computation
time, compared to optimization-based approaches. Furthermore, we create a
Bayesian probabilistic version of the network that allows evaluation of
registration uncertainty via sampling of the network at test time. We evaluate
our method on a 3D brain MRI dataset using both T1- and T2-weighted images. Our
experiments show that our method generates accurate predictions and that
learning the similarity measure leads to more consistent registrations than
relying on generic multimodal image similarity measures, such as mutual
information. Our approach is an order of magnitude faster than
optimization-based LDDMM.
| no_new_dataset | 0.950134 |
1703.11004 | Daiki Matsumoto | Daiki Matsumoto, Thomas Indinger | On-the-fly algorithm for Dynamic Mode Decomposition using Incremental
Singular Value Decomposition and Total Least Squares | null | null | null | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic Mode Decomposition (DMD) is a useful tool to effectively extract the
dominant dynamic flow structure from a unsteady flow field. However, DMD
requires massive computational resources with respect to memory consumption and
the usage of storage. In this paper, an alternative incremental algorithm of
Total DMD (Incremental TDMD) is proposed which is based on Incremental Singular
Value Decomposition (SVD). The advantage of Incremental TDMD compared to the
existing on-the-fly algorithms of DMD is that Sparsity-Promoting DMD (SPDMD)
can be performed after the incremental process without saving huge datasets on
the disk space. SPDMD combined with Incremental TDMD enable the effective
identification of dominant modes which are relevant to the results from
conventional TDMD combined with SPDMD.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 17:47:11 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Matsumoto",
"Daiki",
""
],
[
"Indinger",
"Thomas",
""
]
] | TITLE: On-the-fly algorithm for Dynamic Mode Decomposition using Incremental
Singular Value Decomposition and Total Least Squares
ABSTRACT: Dynamic Mode Decomposition (DMD) is a useful tool to effectively extract the
dominant dynamic flow structure from a unsteady flow field. However, DMD
requires massive computational resources with respect to memory consumption and
the usage of storage. In this paper, an alternative incremental algorithm of
Total DMD (Incremental TDMD) is proposed which is based on Incremental Singular
Value Decomposition (SVD). The advantage of Incremental TDMD compared to the
existing on-the-fly algorithms of DMD is that Sparsity-Promoting DMD (SPDMD)
can be performed after the incremental process without saving huge datasets on
the disk space. SPDMD combined with Incremental TDMD enable the effective
identification of dominant modes which are relevant to the results from
conventional TDMD combined with SPDMD.
| no_new_dataset | 0.948442 |
1609.00988 | Nhien-An Le-Khac | Nhien-An Le-Khac, Martin Bue, Michael Whelan, Tahar Kechadi | A clustering-based data reduction for very large spatio-temporal
datasets | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, huge amounts of data are being collected with spatial and temporal
components from sources such as meteorological, satellite imagery etc.
Efficient visualisation as well as discovery of useful knowledge from these
datasets is therefore very challenging and becoming a massive economic need.
Data Mining has emerged as the technology to discover hidden knowledge in very
large amounts of data. Furthermore, data mining techniques could be applied to
decrease the large size of raw data by retrieving its useful knowledge as
representatives. As a consequence, instead of dealing with a large size of raw
data, we can use these representatives to visualise or to analyse without
losing important information. This paper presents a new approach based on
different clustering techniques for data reduction to help analyse very large
spatio-temporal data. We also present and discuss preliminary results of this
approach.
| [
{
"version": "v1",
"created": "Sun, 4 Sep 2016 20:35:18 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Mar 2017 18:55:18 GMT"
}
] | 2017-03-31T00:00:00 | [
[
"Le-Khac",
"Nhien-An",
""
],
[
"Bue",
"Martin",
""
],
[
"Whelan",
"Michael",
""
],
[
"Kechadi",
"Tahar",
""
]
] | TITLE: A clustering-based data reduction for very large spatio-temporal
datasets
ABSTRACT: Today, huge amounts of data are being collected with spatial and temporal
components from sources such as meteorological, satellite imagery etc.
Efficient visualisation as well as discovery of useful knowledge from these
datasets is therefore very challenging and becoming a massive economic need.
Data Mining has emerged as the technology to discover hidden knowledge in very
large amounts of data. Furthermore, data mining techniques could be applied to
decrease the large size of raw data by retrieving its useful knowledge as
representatives. As a consequence, instead of dealing with a large size of raw
data, we can use these representatives to visualise or to analyse without
losing important information. This paper presents a new approach based on
different clustering techniques for data reduction to help analyse very large
spatio-temporal data. We also present and discuss preliminary results of this
approach.
| no_new_dataset | 0.951684 |
1702.05729 | Shuang Li | Shuang Li, Tong Xiao, Hongsheng Li, Bolei Zhou, Dayu Yue, Xiaogang
Wang | Person Search with Natural Language Description | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Searching persons in large-scale image databases with the query of natural
language description has important applications in video surveillance. Existing
methods mainly focused on searching persons with image-based or attribute-based
queries, which have major limitations for a practical usage. In this paper, we
study the problem of person search with natural language description. Given the
textual description of a person, the algorithm of the person search is required
to rank all the samples in the person database then retrieve the most relevant
sample corresponding to the queried description. Since there is no person
dataset or benchmark with textual description available, we collect a
large-scale person description dataset with detailed natural language
annotations and person samples from various sources, termed as CUHK Person
Description Dataset (CUHK-PEDES). A wide range of possible models and baselines
have been evaluated and compared on the person search benchmark. An Recurrent
Neural Network with Gated Neural Attention mechanism (GNA-RNN) is proposed to
establish the state-of-the art performance on person search.
| [
{
"version": "v1",
"created": "Sun, 19 Feb 2017 10:01:33 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2017 07:51:10 GMT"
}
] | 2017-03-31T00:00:00 | [
[
"Li",
"Shuang",
""
],
[
"Xiao",
"Tong",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Zhou",
"Bolei",
""
],
[
"Yue",
"Dayu",
""
],
[
"Wang",
"Xiaogang",
""
]
] | TITLE: Person Search with Natural Language Description
ABSTRACT: Searching persons in large-scale image databases with the query of natural
language description has important applications in video surveillance. Existing
methods mainly focused on searching persons with image-based or attribute-based
queries, which have major limitations for a practical usage. In this paper, we
study the problem of person search with natural language description. Given the
textual description of a person, the algorithm of the person search is required
to rank all the samples in the person database then retrieve the most relevant
sample corresponding to the queried description. Since there is no person
dataset or benchmark with textual description available, we collect a
large-scale person description dataset with detailed natural language
annotations and person samples from various sources, termed as CUHK Person
Description Dataset (CUHK-PEDES). A wide range of possible models and baselines
have been evaluated and compared on the person search benchmark. An Recurrent
Neural Network with Gated Neural Attention mechanism (GNA-RNN) is proposed to
establish the state-of-the art performance on person search.
| new_dataset | 0.955026 |
1703.08544 | Joshua Michalenko | Joshua J. Michalenko, Andrew S. Lan, Richard G. Baraniuk | Data-Mining Textual Responses to Uncover Misconception Patterns | 7 Pages, Submitted to EDM 2017, Workshop version accepted to L@S
2017. Article title and acronym changed to more clearly indicate the
scientific goal of the paper of improving the quality of educational
instruction | null | null | null | stat.ML cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important, yet largely unstudied, problem in student data analysis is to
detect misconceptions from students' responses to open-response questions.
Misconception detection enables instructors to deliver more targeted feedback
on the misconceptions exhibited by many students in their class, thus improving
the quality of instruction. In this paper, we propose a new natural language
processing-based framework to detect the common misconceptions among students'
textual responses to short-answer questions. We propose a probabilistic model
for students' textual responses involving misconceptions and experimentally
validate it on a real-world student-response dataset. Experimental results show
that our proposed framework excels at classifying whether a response exhibits
one or more misconceptions. More importantly, it can also automatically detect
the common misconceptions exhibited across responses from multiple students to
multiple questions; this property is especially important at large scale, since
instructors will no longer need to manually specify all possible misconceptions
that students might exhibit.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 14:49:58 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Mar 2017 02:50:33 GMT"
}
] | 2017-03-31T00:00:00 | [
[
"Michalenko",
"Joshua J.",
""
],
[
"Lan",
"Andrew S.",
""
],
[
"Baraniuk",
"Richard G.",
""
]
] | TITLE: Data-Mining Textual Responses to Uncover Misconception Patterns
ABSTRACT: An important, yet largely unstudied, problem in student data analysis is to
detect misconceptions from students' responses to open-response questions.
Misconception detection enables instructors to deliver more targeted feedback
on the misconceptions exhibited by many students in their class, thus improving
the quality of instruction. In this paper, we propose a new natural language
processing-based framework to detect the common misconceptions among students'
textual responses to short-answer questions. We propose a probabilistic model
for students' textual responses involving misconceptions and experimentally
validate it on a real-world student-response dataset. Experimental results show
that our proposed framework excels at classifying whether a response exhibits
one or more misconceptions. More importantly, it can also automatically detect
the common misconceptions exhibited across responses from multiple students to
multiple questions; this property is especially important at large scale, since
instructors will no longer need to manually specify all possible misconceptions
that students might exhibit.
| no_new_dataset | 0.954137 |
1703.10196 | Edward Boyda | Edward Boyda, Colin McCormick, and Dan Hammer | Detecting Human Interventions on the Landscape: KAZE Features, Poisson
Point Processes, and a Construction Dataset | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an algorithm capable of identifying a wide variety of
human-induced change on the surface of the planet by analyzing matches between
local features in time-sequenced remote sensing imagery. We evaluate feature
sets, match protocols, and the statistical modeling of feature matches. With
application of KAZE features, k-nearest-neighbor descriptor matching, and
geometric proximity and bi-directional match consistency checks, average match
rates increase more than two-fold over the previous standard. In testing our
platform, we developed a small, labeled benchmark dataset expressing
large-scale residential, industrial, and civic construction, along with null
instances, in California between the years 2010 and 2012. On the benchmark set,
our algorithm makes precise, accurate change proposals on two-thirds of scenes.
Further, the detection threshold can be tuned so that all or almost all
proposed detections are true positives.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2017 18:56:32 GMT"
}
] | 2017-03-31T00:00:00 | [
[
"Boyda",
"Edward",
""
],
[
"McCormick",
"Colin",
""
],
[
"Hammer",
"Dan",
""
]
] | TITLE: Detecting Human Interventions on the Landscape: KAZE Features, Poisson
Point Processes, and a Construction Dataset
ABSTRACT: We present an algorithm capable of identifying a wide variety of
human-induced change on the surface of the planet by analyzing matches between
local features in time-sequenced remote sensing imagery. We evaluate feature
sets, match protocols, and the statistical modeling of feature matches. With
application of KAZE features, k-nearest-neighbor descriptor matching, and
geometric proximity and bi-directional match consistency checks, average match
rates increase more than two-fold over the previous standard. In testing our
platform, we developed a small, labeled benchmark dataset expressing
large-scale residential, industrial, and civic construction, along with null
instances, in California between the years 2010 and 2012. On the benchmark set,
our algorithm makes precise, accurate change proposals on two-thirds of scenes.
Further, the detection threshold can be tuned so that all or almost all
proposed detections are true positives.
| new_dataset | 0.951953 |
1703.10304 | Lei Fan | Lei Fan, Ziyu Pan, Long Chen and Kai Huang | Planecell: Representing the 3D Space with Planes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstruction based on the stereo camera has received considerable attention
recently, but two particular challenges still remain. The first concerns the
need to aggregate similar pixels in an effective approach, and the second is to
maintain as much of the available information as possible while ensuring
sufficient accuracy. To overcome these issues, we propose a new 3D
representation method, namely, planecell, that extracts planarity from the
depth-assisted image segmentation and then projects these depth planes into the
3D world. An energy function formulated from Conditional Random Field that
generalizes the planar relationships is maximized to merge coplanar segments.
We evaluate our method with a variety of reconstruction baselines on both KITTI
and Middlebury datasets, and the results indicate the superiorities compared to
other 3D space representation methods in accuracy, memory requirements and
further applications.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2017 03:58:05 GMT"
}
] | 2017-03-31T00:00:00 | [
[
"Fan",
"Lei",
""
],
[
"Pan",
"Ziyu",
""
],
[
"Chen",
"Long",
""
],
[
"Huang",
"Kai",
""
]
] | TITLE: Planecell: Representing the 3D Space with Planes
ABSTRACT: Reconstruction based on the stereo camera has received considerable attention
recently, but two particular challenges still remain. The first concerns the
need to aggregate similar pixels in an effective approach, and the second is to
maintain as much of the available information as possible while ensuring
sufficient accuracy. To overcome these issues, we propose a new 3D
representation method, namely, planecell, that extracts planarity from the
depth-assisted image segmentation and then projects these depth planes into the
3D world. An energy function formulated from Conditional Random Field that
generalizes the planar relationships is maximized to merge coplanar segments.
We evaluate our method with a variety of reconstruction baselines on both KITTI
and Middlebury datasets, and the results indicate the superiorities compared to
other 3D space representation methods in accuracy, memory requirements and
further applications.
| no_new_dataset | 0.955026 |
1703.10345 | Besnik Fetahu | Besnik Fetahu and Abhijit Anand and Avishek Anand | How much is Wikipedia Lagging Behind News? | null | null | 10.1145/2786451.2786460 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wikipedia, rich in entities and events, is an invaluable resource for various
knowledge harvesting, extraction and mining tasks. Numerous resources like
DBpedia, YAGO and other knowledge bases are based on extracting entity and
event based knowledge from it. Online news, on the other hand, is an
authoritative and rich source for emerging entities, events and facts relating
to existing entities. In this work, we study the creation of entities in
Wikipedia with respect to news by studying how entity and event based
information flows from news to Wikipedia.
We analyze the lag of Wikipedia (based on the revision history of the English
Wikipedia) with 20 years of \emph{The New York Times} dataset (NYT). We model
and analyze the lag of entities and events, namely their first appearance in
Wikipedia and in NYT, respectively. In our extensive experimental analysis, we
find that almost 20\% of the external references in entity pages are news
articles encoding the importance of news to Wikipedia. Second, we observe that
the entity-based lag follows a normal distribution with a high standard
deviation, whereas the lag for news-based events is typically very low.
Finally, we find that events are responsible for the creation of emergent
entities, with as many as 12\% of the entities mentioned in the event page being
created after the creation of the event page.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2017 08:05:17 GMT"
}
] | 2017-03-31T00:00:00 | [
[
"Fetahu",
"Besnik",
""
],
[
"Anand",
"Abhijit",
""
],
[
"Anand",
"Avishek",
""
]
] | TITLE: How much is Wikipedia Lagging Behind News?
ABSTRACT: Wikipedia, rich in entities and events, is an invaluable resource for various
knowledge harvesting, extraction and mining tasks. Numerous resources like
DBpedia, YAGO and other knowledge bases are based on extracting entity and
event based knowledge from it. Online news, on the other hand, is an
authoritative and rich source for emerging entities, events and facts relating
to existing entities. In this work, we study the creation of entities in
Wikipedia with respect to news by studying how entity and event based
information flows from news to Wikipedia.
We analyze the lag of Wikipedia (based on the revision history of the English
Wikipedia) with 20 years of \emph{The New York Times} dataset (NYT). We model
and analyze the lag of entities and events, namely their first appearance in
Wikipedia and in NYT, respectively. In our extensive experimental analysis, we
find that almost 20\% of the external references in entity pages are news
articles encoding the importance of news to Wikipedia. Second, we observe that
the entity-based lag follows a normal distribution with a high standard
deviation, whereas the lag for news-based events is typically very low.
Finally, we find that events are responsible for the creation of emergent
entities, with as many as 12\% of the entities mentioned in the event page being
created after the creation of the event page.
| no_new_dataset | 0.941654 |
1703.10349 | Besnik Fetahu | Besnik Fetahu and Ujwal Gadiraju and Stefan Dietze | Improving Entity Retrieval on Structured Data | null | null | 10.1007/978-3-319-25007-6_28 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing amount of data on the Web, in particular of Linked Data, has
led to a diverse landscape of datasets, which make entity retrieval a
challenging task. Explicit cross-dataset links, for instance to indicate
co-references or related entities, can significantly improve entity retrieval.
However, only a small fraction of entities are interlinked through explicit
statements. In this paper, we propose a two-fold entity retrieval approach. In
a first, offline preprocessing step, we cluster entities based on the
\emph{x--means} and \emph{spectral} clustering algorithms. In the second step,
we propose an optimized retrieval model which takes advantage of our
precomputed clusters. For a given set of entities retrieved by the BM25F
retrieval approach and a given user query, we further expand the result set
with relevant entities by considering features of the queries, entities and the
precomputed clusters. Finally, we re-rank the expanded result set with respect
to the relevance to the query. We perform a thorough experimental evaluation on
the Billions Triple Challenge (BTC12) dataset. The proposed approach shows
significant improvements compared to the baseline and state of the art
approaches.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2017 08:25:35 GMT"
}
] | 2017-03-31T00:00:00 | [
[
"Fetahu",
"Besnik",
""
],
[
"Gadiraju",
"Ujwal",
""
],
[
"Dietze",
"Stefan",
""
]
] | TITLE: Improving Entity Retrieval on Structured Data
ABSTRACT: The increasing amount of data on the Web, in particular of Linked Data, has
led to a diverse landscape of datasets, which make entity retrieval a
challenging task. Explicit cross-dataset links, for instance to indicate
co-references or related entities, can significantly improve entity retrieval.
However, only a small fraction of entities are interlinked through explicit
statements. In this paper, we propose a two-fold entity retrieval approach. In
a first, offline preprocessing step, we cluster entities based on the
\emph{x--means} and \emph{spectral} clustering algorithms. In the second step,
we propose an optimized retrieval model which takes advantage of our
precomputed clusters. For a given set of entities retrieved by the BM25F
retrieval approach and a given user query, we further expand the result set
with relevant entities by considering features of the queries, entities and the
precomputed clusters. Finally, we re-rank the expanded result set with respect
to the relevance to the query. We perform a thorough experimental evaluation on
the Billions Triple Challenge (BTC12) dataset. The proposed approach shows
significant improvements compared to the baseline and state of the art
approaches.
| no_new_dataset | 0.945601 |
1703.10603 | Joseph Gomes | Joseph Gomes, Bharath Ramsundar, Evan N. Feinberg, Vijay S. Pande | Atomic Convolutional Networks for Predicting Protein-Ligand Binding
Affinity | null | null | null | null | cs.LG physics.chem-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Empirical scoring functions based on either molecular force fields or
cheminformatics descriptors are widely used, in conjunction with molecular
docking, during the early stages of drug discovery to predict potency and
binding affinity of a drug-like molecule to a given target. These models
require expert-level knowledge of physical chemistry and biology to be encoded
as hand-tuned parameters or features rather than allowing the underlying model
to select features in a data-driven procedure. Here, we develop a general
3-dimensional spatial convolution operation for learning atomic-level chemical
interactions directly from atomic coordinates and demonstrate its application
to structure-based bioactivity prediction. The atomic convolutional neural
network is trained to predict the experimentally determined binding affinity of
a protein-ligand complex by direct calculation of the energy associated with
the complex, protein, and ligand given the crystal structure of the binding
pose. Non-covalent interactions present in the complex that are absent in the
protein-ligand sub-structures are identified and the model learns the
interaction strength associated with these features. We test our model by
predicting the binding free energy of a subset of protein-ligand complexes
found in the PDBBind dataset and compare with state-of-the-art cheminformatics
and machine learning-based approaches. We find that all methods achieve
experimental accuracy and that atomic convolutional networks either outperform
or perform competitively with the cheminformatics based methods. Unlike all
previous protein-ligand prediction systems, atomic convolutional networks are
end-to-end and fully-differentiable. They represent a new data-driven,
physics-based deep learning model paradigm that offers a strong foundation for
future improvements in structure-based bioactivity prediction.
| [
{
"version": "v1",
"created": "Thu, 30 Mar 2017 17:58:31 GMT"
}
] | 2017-03-31T00:00:00 | [
[
"Gomes",
"Joseph",
""
],
[
"Ramsundar",
"Bharath",
""
],
[
"Feinberg",
"Evan N.",
""
],
[
"Pande",
"Vijay S.",
""
]
] | TITLE: Atomic Convolutional Networks for Predicting Protein-Ligand Binding
Affinity
ABSTRACT: Empirical scoring functions based on either molecular force fields or
cheminformatics descriptors are widely used, in conjunction with molecular
docking, during the early stages of drug discovery to predict potency and
binding affinity of a drug-like molecule to a given target. These models
require expert-level knowledge of physical chemistry and biology to be encoded
as hand-tuned parameters or features rather than allowing the underlying model
to select features in a data-driven procedure. Here, we develop a general
3-dimensional spatial convolution operation for learning atomic-level chemical
interactions directly from atomic coordinates and demonstrate its application
to structure-based bioactivity prediction. The atomic convolutional neural
network is trained to predict the experimentally determined binding affinity of
a protein-ligand complex by direct calculation of the energy associated with
the complex, protein, and ligand given the crystal structure of the binding
pose. Non-covalent interactions present in the complex that are absent in the
protein-ligand sub-structures are identified and the model learns the
interaction strength associated with these features. We test our model by
predicting the binding free energy of a subset of protein-ligand complexes
found in the PDBBind dataset and compare with state-of-the-art cheminformatics
and machine learning-based approaches. We find that all methods achieve
experimental accuracy and that atomic convolutional networks either outperform
or perform competitively with the cheminformatics based methods. Unlike all
previous protein-ligand prediction systems, atomic convolutional networks are
end-to-end and fully-differentiable. They represent a new data-driven,
physics-based deep learning model paradigm that offers a strong foundation for
future improvements in structure-based bioactivity prediction.
| no_new_dataset | 0.953794 |
1607.07129 | Yuan Gao | Yuan Gao and Alan L. Yuille | Exploiting Symmetry and/or Manhattan Properties for 3D Object Structure
Estimation from Single and Multiple Images | Accepted to CVPR 2017 | null | null | null | cs.CV cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many man-made objects have intrinsic symmetries and Manhattan structure. By
assuming an orthographic projection model, this paper addresses the estimation
of 3D structures and camera projection using symmetry and/or Manhattan
structure cues, which occur when the input is a single image or multiple images from
the same category, e.g., multiple different cars. Specifically, analysis on the
single image case implies that Manhattan alone is sufficient to recover the
camera projection, and then the 3D structure can be reconstructed uniquely
exploiting symmetry. However, Manhattan structure can be difficult to observe
from a single image due to occlusion. To this end, we extend to the
multiple-image case which can also exploit symmetry but does not require
Manhattan axes. We propose a novel rigid structure from motion method,
exploiting symmetry and using multiple images from the same category as input.
Experimental results on the Pascal3D+ dataset show that our method
significantly outperforms baseline methods.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 02:36:51 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Nov 2016 19:28:56 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Mar 2017 08:15:16 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Gao",
"Yuan",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: Exploiting Symmetry and/or Manhattan Properties for 3D Object Structure
Estimation from Single and Multiple Images
ABSTRACT: Many man-made objects have intrinsic symmetries and Manhattan structure. By
assuming an orthographic projection model, this paper addresses the estimation
of 3D structures and camera projection using symmetry and/or Manhattan
structure cues, which occur when the input is a single image or multiple images from
the same category, e.g., multiple different cars. Specifically, analysis on the
single image case implies that Manhattan alone is sufficient to recover the
camera projection, and then the 3D structure can be reconstructed uniquely
exploiting symmetry. However, Manhattan structure can be difficult to observe
from a single image due to occlusion. To this end, we extend to the
multiple-image case which can also exploit symmetry but does not require
Manhattan axes. We propose a novel rigid structure from motion method,
exploiting symmetry and using multiple images from the same category as input.
Experimental results on the Pascal3D+ dataset show that our method
significantly outperforms baseline methods.
| no_new_dataset | 0.955486 |
1609.02031 | Nhien-An Le-Khac | Nhien-An Le-Khac, Sammer Markos, Michael O'Neill, Anthony Brabazon and
  Tahar Kechadi | An efficient Search Tool for an Anti-Money Laundering Application of a
Multi-national Bank's Dataset | null | null | null | null | cs.DB cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, money laundering (ML) poses a serious threat not only to financial
institutions but also to the nations. This criminal activity is becoming more
and more sophisticated and seems to have moved from the cliché of drug
trafficking to financing terrorism and surely not forgetting personal gain.
Most of the financial institutions internationally have been implementing
anti-money laundering solutions (AML) to fight investment fraud activities. In
AML, customer identification is an important task which helps AML experts to
monitor customer habits, such as customer domicile, the transactions they are
involved in, etc. However, simple query tools provided by current DBMS
as well as naive approaches in customer searching may produce incorrect and
ambiguous results and their processing time is also very high due to the
complexity of the database system architecture. In this paper, we present a new
approach for identifying customers registered in an investment bank. This
approach is developed as a tool that allows AML experts to quickly identify
customers who are managed independently across separate databases. It is tested
on real-world, large financial datasets. Some
preliminary experimental results show that this new approach is efficient and
effective.
| [
{
"version": "v1",
"created": "Sun, 4 Sep 2016 20:17:45 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2017 21:30:08 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Le-Khac",
"Nhien-An",
""
],
[
"Markos",
"Sammer",
""
],
[
"O'Neill",
"Michael",
""
],
[
"Brabazon",
"Anthony",
""
],
[
"Kechadi",
"Tahar",
""
]
] | TITLE: An efficient Search Tool for an Anti-Money Laundering Application of a
Multi-national Bank's Dataset
ABSTRACT: Today, money laundering (ML) poses a serious threat not only to financial
institutions but also to the nations. This criminal activity is becoming more
and more sophisticated and seems to have moved from the cliché of drug
trafficking to financing terrorism and surely not forgetting personal gain.
Most of the financial institutions internationally have been implementing
anti-money laundering solutions (AML) to fight investment fraud activities. In
AML, customer identification is an important task which helps AML experts to
monitor customer habits, such as customer domicile, the transactions they are
involved in, etc. However, simple query tools provided by current DBMS
as well as naive approaches in customer searching may produce incorrect and
ambiguous results and their processing time is also very high due to the
complexity of the database system architecture. In this paper, we present a new
approach for identifying customers registered in an investment bank. This
approach is developed as a tool that allows AML experts to quickly identify
customers who are managed independently across separate databases. It is tested
on real-world, large financial datasets. Some
preliminary experimental results show that this new approach is efficient and
effective.
| no_new_dataset | 0.947962 |
1611.08002 | Zhe Gan | Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng
Gao, Lawrence Carin, Li Deng | Semantic Compositional Networks for Visual Captioning | Accepted in CVPR 2017 | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Semantic Compositional Network (SCN) is developed for image captioning, in
which semantic concepts (i.e., tags) are detected from the image, and the
probability of each tag is used to compose the parameters in a long short-term
memory (LSTM) network. The SCN extends each weight matrix of the LSTM to an
ensemble of tag-dependent weight matrices. The degree to which each member of
the ensemble is used to generate an image caption is tied to the
image-dependent probability of the corresponding tag. In addition to captioning
images, we also extend the SCN to generate captions for video clips. We
qualitatively analyze semantic composition in SCNs, and quantitatively evaluate
the algorithm on three benchmark datasets: COCO, Flickr30k, and Youtube2Text.
Experimental results show that the proposed method significantly outperforms
prior state-of-the-art approaches, across multiple evaluation metrics.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2016 21:22:22 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2017 18:33:51 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Gan",
"Zhe",
""
],
[
"Gan",
"Chuang",
""
],
[
"He",
"Xiaodong",
""
],
[
"Pu",
"Yunchen",
""
],
[
"Tran",
"Kenneth",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Carin",
"Lawrence",
""
],
[
"Deng",
"Li",
""
]
] | TITLE: Semantic Compositional Networks for Visual Captioning
ABSTRACT: A Semantic Compositional Network (SCN) is developed for image captioning, in
which semantic concepts (i.e., tags) are detected from the image, and the
probability of each tag is used to compose the parameters in a long short-term
memory (LSTM) network. The SCN extends each weight matrix of the LSTM to an
ensemble of tag-dependent weight matrices. The degree to which each member of
the ensemble is used to generate an image caption is tied to the
image-dependent probability of the corresponding tag. In addition to captioning
images, we also extend the SCN to generate captions for video clips. We
qualitatively analyze semantic composition in SCNs, and quantitatively evaluate
the algorithm on three benchmark datasets: COCO, Flickr30k, and Youtube2Text.
Experimental results show that the proposed method significantly outperforms
prior state-of-the-art approaches, across multiple evaluation metrics.
| no_new_dataset | 0.951729 |
1612.08712 | Aaditya Prakash | Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo and
James Storer | Semantic Perceptual Image Compression using Deep Convolution Networks | Accepted to Data Compression Conference, 11 pages, 5 figures | null | null | null | cs.MM cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has long been considered a significant problem to improve the visual
quality of lossy image and video compression. Recent advances in computing
power together with the availability of large training data sets have increased
interest in the application of deep learning cnns to address image recognition
and image processing tasks. Here, we present a powerful cnn tailored to the
specific task of semantic image understanding to achieve higher visual quality
in lossy compression. A modest increase in complexity is incorporated into the
encoder which allows a standard, off-the-shelf jpeg decoder to be used. While
jpeg encoding may be optimized for generic images, the process is ultimately
unaware of the specific content of the image to be compressed. Our technique
makes jpeg content-aware by designing and training a model to identify multiple
semantic regions in a given image. Unlike object detection techniques, our
model does not require labeling of object positions and is able to identify
objects in a single pass. We present a new cnn architecture directed
specifically at image compression: by adding a complete set of features for
every class, and then taking a threshold over the sum of all feature
activations, we generate a map that highlights semantically-salient regions so
that they can be encoded at higher quality than background regions. Experiments are
presented on the Kodak PhotoCD dataset and the MIT Saliency Benchmark dataset,
in which our algorithm achieves higher visual quality for the same compressed
size.
| [
{
"version": "v1",
"created": "Tue, 27 Dec 2016 19:21:18 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Mar 2017 16:29:54 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Prakash",
"Aaditya",
""
],
[
"Moran",
"Nick",
""
],
[
"Garber",
"Solomon",
""
],
[
"DiLillo",
"Antonella",
""
],
[
"Storer",
"James",
""
]
] | TITLE: Semantic Perceptual Image Compression using Deep Convolution Networks
ABSTRACT: It has long been considered a significant problem to improve the visual
quality of lossy image and video compression. Recent advances in computing
power together with the availability of large training data sets have increased
interest in the application of deep learning cnns to address image recognition
and image processing tasks. Here, we present a powerful cnn tailored to the
specific task of semantic image understanding to achieve higher visual quality
in lossy compression. A modest increase in complexity is incorporated into the
encoder which allows a standard, off-the-shelf jpeg decoder to be used. While
jpeg encoding may be optimized for generic images, the process is ultimately
unaware of the specific content of the image to be compressed. Our technique
makes jpeg content-aware by designing and training a model to identify multiple
semantic regions in a given image. Unlike object detection techniques, our
model does not require labeling of object positions and is able to identify
objects in a single pass. We present a new cnn architecture directed
specifically at image compression: by adding a complete set of features for
every class, and then taking a threshold over the sum of all feature
activations, we generate a map that highlights semantically-salient regions so
that they can be encoded at higher quality than background regions. Experiments are
presented on the Kodak PhotoCD dataset and the MIT Saliency Benchmark dataset,
in which our algorithm achieves higher visual quality for the same compressed
size.
| no_new_dataset | 0.948489 |
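A minimal sketch of the map-generation step described in the abstract above (summing per-class feature activations and thresholding the result); the array shapes, the threshold value, and the use of NumPy are illustrative assumptions rather than details taken from the record:

```python
import numpy as np

def saliency_map(activations, threshold=0.5):
    """Sum per-class feature activations and threshold the result.

    activations: assumed shape (num_classes, height, width).
    Returns a binary height x width map; 1 marks a semantically salient
    region to be encoded at higher quality than the background.
    """
    summed = activations.sum(axis=0)             # aggregate over all classes
    summed = summed / (summed.max() + 1e-8)      # normalize to [0, 1]
    return (summed > threshold).astype(np.uint8)

# Toy usage with random activations (purely illustrative).
rng = np.random.default_rng(0)
acts = rng.random((20, 64, 64))
mask = saliency_map(acts)
print(mask.shape, float(mask.mean()))
```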
1703.05468 | Yongjoo Park | Yongjoo Park, Ahmad Shahab Tajik, Michael Cafarella, Barzan Mozafari | Database Learning: Toward a Database that Becomes Smarter Every Time | This manuscript is an extended report of the work published in ACM
SIGMOD conference 2017 | null | 10.1145/3035918.3064013 | null | cs.DB cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In today's databases, previous query answers rarely benefit answering future
queries. For the first time, to the best of our knowledge, we change this
paradigm in an approximate query processing (AQP) context. We make the
following observation: the answer to each query reveals some degree of
knowledge about the answer to another query because their answers stem from the
same underlying distribution that has produced the entire dataset. Exploiting
and refining this knowledge should allow us to answer queries more
analytically, rather than by reading enormous amounts of raw data. Also,
processing more queries should continuously enhance our knowledge of the
underlying distribution, and hence lead to increasingly faster response times
for future queries.
We call this novel idea---learning from past query answers---Database
Learning. We exploit the principle of maximum entropy to produce answers, which
are in expectation guaranteed to be more accurate than existing sample-based
approximations. Empowered by this idea, we build a query engine on top of Spark
SQL, called Verdict. We conduct extensive experiments on real-world query
traces from a large customer of a major database vendor. Our results
demonstrate that Verdict supports 73.7% of these queries, speeding them up by
up to 23.0x for the same accuracy level compared to existing AQP systems.
| [
{
"version": "v1",
"created": "Thu, 16 Mar 2017 03:36:28 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2017 21:47:25 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Park",
"Yongjoo",
""
],
[
"Tajik",
"Ahmad Shahab",
""
],
[
"Cafarella",
"Michael",
""
],
[
"Mozafari",
"Barzan",
""
]
] | TITLE: Database Learning: Toward a Database that Becomes Smarter Every Time
ABSTRACT: In today's databases, previous query answers rarely benefit answering future
queries. For the first time, to the best of our knowledge, we change this
paradigm in an approximate query processing (AQP) context. We make the
following observation: the answer to each query reveals some degree of
knowledge about the answer to another query because their answers stem from the
same underlying distribution that has produced the entire dataset. Exploiting
and refining this knowledge should allow us to answer queries more
analytically, rather than by reading enormous amounts of raw data. Also,
processing more queries should continuously enhance our knowledge of the
underlying distribution, and hence lead to increasingly faster response times
for future queries.
We call this novel idea---learning from past query answers---Database
Learning. We exploit the principle of maximum entropy to produce answers, which
are in expectation guaranteed to be more accurate than existing sample-based
approximations. Empowered by this idea, we build a query engine on top of Spark
SQL, called Verdict. We conduct extensive experiments on real-world query
traces from a large customer of a major database vendor. Our results
demonstrate that Verdict supports 73.7% of these queries, speeding them up by
up to 23.0x for the same accuracy level compared to existing AQP systems.
| no_new_dataset | 0.945851 |
1703.09752 | Nhien-An Le-Khac | Loic Bontemps, Van Loi Cao, James McDermott, Nhien-An Le-Khac | Collective Anomaly Detection based on Long Short Term Memory Recurrent
Neural Network | null | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intrusion detection for computer network systems becomes one of the most
critical tasks for network administrators today. It plays an important role for
organizations, governments and our society due to the valuable resources hosted
on computer networks. Traditional misuse detection strategies are unable to
detect new and unknown intrusions. Anomaly detection in network security, by
contrast, aims to distinguish between illegal or malicious events and the normal
behavior of network systems. Anomaly detection can be considered a
classification problem that builds models of normal network behavior, which it
uses to detect new patterns that significantly deviate from the model. Most of
the current research on anomaly detection is based on learning normal and
anomalous behaviors; it does not take into account the previous, recent events
when detecting a new incoming one. In this paper, we propose a real-time
collective anomaly detection model based on neural network learning and feature
operating. Normally a Long Short Term Memory Recurrent Neural Network (LSTM
RNN) is trained only on normal data and is capable of predicting several time
steps ahead of an input. In our approach, an LSTM RNN is trained with normal
time series data before performing a live prediction for each time step. Instead
of considering each time step separately, the observation of prediction errors
from a certain number of time steps is proposed as a new idea for detecting
collective anomalies: prediction errors from a number of the latest time steps
above a threshold indicate a collective anomaly. The model is built on a time
series version of the KDD 1999 dataset. The experiments demonstrate that it is
possible to offer reliable and efficient collective anomaly detection.
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 19:04:11 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Bontemps",
"Loic",
""
],
[
"Cao",
"Van Loi",
""
],
[
"McDermott",
"James",
""
],
[
"Le-Khac",
"Nhien-An",
""
]
] | TITLE: Collective Anomaly Detection based on Long Short Term Memory Recurrent
Neural Network
ABSTRACT: Intrusion detection for computer network systems becomes one of the most
critical tasks for network administrators today. It plays an important role for
organizations, governments and our society due to the valuable resources hosted
on computer networks. Traditional misuse detection strategies are unable to
detect new and unknown intrusions. Anomaly detection in network security, by
contrast, aims to distinguish between illegal or malicious events and the normal
behavior of network systems. Anomaly detection can be considered a
classification problem that builds models of normal network behavior, which it
uses to detect new patterns that significantly deviate from the model. Most of
the current research on anomaly detection is based on learning normal and
anomalous behaviors; it does not take into account the previous, recent events
when detecting a new incoming one. In this paper, we propose a real-time
collective anomaly detection model based on neural network learning and feature
operating. Normally a Long Short Term Memory Recurrent Neural Network (LSTM
RNN) is trained only on normal data and is capable of predicting several time
steps ahead of an input. In our approach, an LSTM RNN is trained with normal
time series data before performing a live prediction for each time step. Instead
of considering each time step separately, the observation of prediction errors
from a certain number of time steps is proposed as a new idea for detecting
collective anomalies: prediction errors from a number of the latest time steps
above a threshold indicate a collective anomaly. The model is built on a time
series version of the KDD 1999 dataset. The experiments demonstrate that it is
possible to offer reliable and efficient collective anomaly detection.
| no_new_dataset | 0.953319 |
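A minimal sketch of the prediction-error thresholding idea described in the abstract above; the naive last-value predictor stands in for the LSTM RNN, and the window size and threshold are illustrative assumptions:

```python
import numpy as np

def naive_predict(history):
    """Stand-in one-step predictor: repeat the last observed value.
    (The abstract uses an LSTM RNN trained on normal traffic instead.)"""
    return float(history[-1])

def collective_anomalies(series, window=5, threshold=0.3):
    """Flag time steps where the last `window` prediction errors all exceed `threshold`."""
    errors, flagged = [], []
    for t in range(1, len(series)):
        prediction = naive_predict(series[:t])
        errors.append(abs(float(series[t]) - prediction))
        if len(errors) >= window and all(e > threshold for e in errors[-window:]):
            flagged.append(t)
    return flagged

# Toy usage: a flat signal with a sustained deviation in the middle.
signal = np.concatenate([np.zeros(20), np.linspace(0.0, 5.0, 10), np.zeros(20)])
print(collective_anomalies(signal))
```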
1703.09756 | Nhien-An Le-Khac | Nhien-An Le-Khac, M-Tahar Kechadi, Joe Carthy | Admire framework: Distributed data mining on data grid platforms | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present the ADMIRE architecture; a new framework for
developing novel and innovative data mining techniques to deal with very large
and distributed heterogeneous datasets in both commercial and academic
applications. The main ADMIRE components are detailed as well as its interfaces
allowing the user to efficiently develop and implement their data mining
application techniques on a Grid platform such as Globus ToolKit, DGET, etc.
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 19:22:42 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Le-Khac",
"Nhien-An",
""
],
[
"Kechadi",
"M-Tahar",
""
],
[
"Carthy",
"Joe",
""
]
] | TITLE: Admire framework: Distributed data mining on data grid platforms
ABSTRACT: In this paper, we present the ADMIRE architecture; a new framework for
developing novel and innovative data mining techniques to deal with very large
and distributed heterogeneous datasets in both commercial and academic
applications. The main ADMIRE components are detailed as well as its interfaces
allowing the user to efficiently develop and implement their data mining
application techniques on a Grid platform such as Globus ToolKit, DGET, etc.
| no_new_dataset | 0.946498 |
1703.09775 | Dorian Cazau | D. Cazau, G. Revillon, O. Adam | Deep scattering transform applied to note onset detection and instrument
recognition | null | null | null | null | stat.ML cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic Music Transcription (AMT) is one of the oldest and most
well-studied problems in the field of music information retrieval. Within this
challenging research field, onset detection and instrument recognition take
important places in transcription systems, as they respectively help to
determine exact onset times of notes and to recognize the corresponding
instrument sources. The aim of this study is to explore the usefulness of
multiscale scattering operators for these two tasks on plucked string
instrument and piano music. After reviewing the theoretical background and
illustrating the key features of this sound representation method, we evaluate
its performance in comparison to other classical sound representations. Using
both MIDI-driven datasets with real instrument samples and real musical pieces,
scattering is shown to outperform other sound representations for these AMT
subtasks, putting forward its richer sound representation and invariance
properties.
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 19:57:30 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Cazau",
"D.",
""
],
[
"Revillon",
"G.",
""
],
[
"Adam",
"O.",
""
]
] | TITLE: Deep scattering transform applied to note onset detection and instrument
recognition
ABSTRACT: Automatic Music Transcription (AMT) is one of the oldest and most
well-studied problems in the field of music information retrieval. Within this
challenging research field, onset detection and instrument recognition take
important places in transcription systems, as they respectively help to
determine exact onset times of notes and to recognize the corresponding
instrument sources. The aim of this study is to explore the usefulness of
multiscale scattering operators for these two tasks on plucked string
instrument and piano music. After reviewing the theoretical background and
illustrating the key features of this sound representation method, we evaluate
its performance in comparison to other classical sound representations. Using
both MIDI-driven datasets with real instrument samples and real musical pieces,
scattering is shown to outperform other sound representations for these AMT
subtasks, putting forward its richer sound representation and invariance
properties.
| no_new_dataset | 0.936865 |
1703.09807 | Nhien-An Le-Khac | Lamine M. Aouad, Nhien-An Le-Khac, Tahar Kechadi | Grid-based Approaches for Distributed Data Mining Applications | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The data mining field is an important source of large-scale applications and
datasets which are getting more and more common. In this paper, we present
grid-based approaches for two basic data mining applications, and a performance
evaluation on an experimental grid environment that provides interesting
monitoring capabilities and configuration tools. We propose a new distributed
clustering approach and a distributed frequent itemset generation method
well-adapted to grid environments. Performance evaluation is done using the Condor system
and its workflow manager DAGMan. We also compare this performance analysis to a
simple analytical model to evaluate the overheads related to the workflow
engine and the underlying grid system. This will specifically show that
realistic performance expectations are currently difficult to achieve on the
grid.
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 21:19:24 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Aouad",
"Lamine M.",
""
],
[
"Le-Khac",
"Nhien-An",
""
],
[
"Kechadi",
"Tahar",
""
]
] | TITLE: Grid-based Approaches for Distributed Data Mining Applications
ABSTRACT: The data mining field is an important source of large-scale applications and
datasets which are getting more and more common. In this paper, we present
grid-based approaches for two basic data mining applications, and a performance
evaluation on an experimental grid environment that provides interesting
monitoring capabilities and configuration tools. We propose a new distributed
clustering approach and a distributed frequent itemset generation method
well-adapted to grid environments. Performance evaluation is done using the Condor system
and its workflow manager DAGMan. We also compare this performance analysis to a
simple analytical model to evaluate the overheads related to the workflow
engine and the underlying grid system. This will specifically show that
realistic performance expectations are currently difficult to achieve on the
grid.
| no_new_dataset | 0.94428 |
1703.09856 | Joseph Antony A | Joseph Antony, Kevin McGuinness, Kieran Moran and Noel E O'Connor | Automatic Detection of Knee Joints and Quantification of Knee
Osteoarthritis Severity using Convolutional Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new approach to automatically quantify the severity
of knee OA using X-ray images. Automatically quantifying knee OA severity
involves two steps: first, automatically localizing the knee joints; next,
classifying the localized knee joint images. We introduce a new approach to
automatically detect the knee joints using a fully convolutional neural network
(FCN). We train convolutional neural networks (CNN) from scratch to
automatically quantify the knee OA severity optimizing a weighted ratio of two
loss functions: categorical cross-entropy and mean-squared loss. This joint
training further improves the overall quantification of knee OA severity, with
the added benefit of naturally producing simultaneous multi-class
classification and regression outputs. Two public datasets are used to evaluate
our approach, the Osteoarthritis Initiative (OAI) and the Multicenter
Osteoarthritis Study (MOST), with extremely promising results that outperform
existing approaches.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2017 01:29:32 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Antony",
"Joseph",
""
],
[
"McGuinness",
"Kevin",
""
],
[
"Moran",
"Kieran",
""
],
[
"O'Connor",
"Noel E",
""
]
] | TITLE: Automatic Detection of Knee Joints and Quantification of Knee
Osteoarthritis Severity using Convolutional Neural Networks
ABSTRACT: This paper introduces a new approach to automatically quantify the severity
of knee OA using X-ray images. Automatically quantifying knee OA severity
involves two steps: first, automatically localizing the knee joints; next,
classifying the localized knee joint images. We introduce a new approach to
automatically detect the knee joints using a fully convolutional neural network
(FCN). We train convolutional neural networks (CNN) from scratch to
automatically quantify the knee OA severity optimizing a weighted ratio of two
loss functions: categorical cross-entropy and mean-squared loss. This joint
training further improves the overall quantification of knee OA severity, with
the added benefit of naturally producing simultaneous multi-class
classification and regression outputs. Two public datasets are used to evaluate
our approach, the Osteoarthritis Initiative (OAI) and the Multicenter
Osteoarthritis Study (MOST), with extremely promising results that outperform
existing approaches.
| no_new_dataset | 0.950641 |
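A minimal sketch of the joint objective described in the abstract above (a weighted combination of categorical cross-entropy and mean-squared loss); the weighting factor, the number of severity grades, and the use of plain NumPy are illustrative assumptions:

```python
import numpy as np

def cross_entropy(probs, label):
    """Categorical cross-entropy for one sample; probs is a distribution over grades."""
    return -float(np.log(probs[label] + 1e-12))

def mean_squared(probs, label):
    """Squared error between the expected grade under probs and the true grade."""
    grades = np.arange(len(probs))
    return float((np.dot(grades, probs) - label) ** 2)

def joint_loss(probs, label, alpha=0.5):
    """Weighted combination of the classification and regression terms (alpha assumed)."""
    return alpha * cross_entropy(probs, label) + (1.0 - alpha) * mean_squared(probs, label)

# Toy usage: 5 severity grades, prediction leaning toward grade 2, true grade 3.
p = np.array([0.05, 0.10, 0.50, 0.25, 0.10])
print(round(joint_loss(p, label=3), 4))
```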
1703.09891 | Hexiang Hu | Hexiang Hu, Zhiwei Deng, Guang-Tong Zhou, Fei Sha, Greg Mori | LabelBank: Revisiting Global Perspectives for Semantic Segmentation | Pre-prints | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation requires a detailed labeling of image pixels by object
category. Information derived from local image patches is necessary to describe
the detailed shape of individual objects. However, this information is
ambiguous and can result in noisy labels. Global inference of image content can
instead capture the general semantic concepts present. We advocate that
holistic inference of image concepts provides valuable information for detailed
pixel labeling. We propose a generic framework to leverage holistic information
in the form of a LabelBank for pixel-level segmentation.
We show the ability of our framework to improve semantic segmentation
performance in a variety of settings. We learn models for extracting a holistic
LabelBank from visual cues, attributes, and/or textual descriptions. We
demonstrate improvements in semantic segmentation accuracy on standard datasets
across a range of state-of-the-art segmentation architectures and holistic
inference approaches.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2017 05:58:21 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Hu",
"Hexiang",
""
],
[
"Deng",
"Zhiwei",
""
],
[
"Zhou",
"Guang-Tong",
""
],
[
"Sha",
"Fei",
""
],
[
"Mori",
"Greg",
""
]
] | TITLE: LabelBank: Revisiting Global Perspectives for Semantic Segmentation
ABSTRACT: Semantic segmentation requires a detailed labeling of image pixels by object
category. Information derived from local image patches is necessary to describe
the detailed shape of individual objects. However, this information is
ambiguous and can result in noisy labels. Global inference of image content can
instead capture the general semantic concepts present. We advocate that
holistic inference of image concepts provides valuable information for detailed
pixel labeling. We propose a generic framework to leverage holistic information
in the form of a LabelBank for pixel-level segmentation.
We show the ability of our framework to improve semantic segmentation
performance in a variety of settings. We learn models for extracting a holistic
LabelBank from visual cues, attributes, and/or textual descriptions. We
demonstrate improvements in semantic segmentation accuracy on standard datasets
across a range of state-of-the-art segmentation architectures and holistic
inference approaches.
| no_new_dataset | 0.948155 |
1703.09933 | Estefania Talavera | Estefania Talavera, Nicola Strisciuglio, Nicolai Petkov, Petia Radeva | Sentiment Recognition in Egocentric Photostreams | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lifelogging is a process of collecting a rich source of information about the daily
life of people. In this paper, we introduce the problem of sentiment analysis
in egocentric events, focusing on the moments composed of images that recall
positive, neutral or negative feelings to the observer. We propose a method for
the classification of the sentiments in egocentric pictures based on global and
semantic image features extracted by Convolutional Neural Networks. We carried
out experiments on an egocentric dataset, which we organized in 3 classes on
the basis of the sentiment that is recalled to the user (positive, negative or
neutral).
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2017 08:38:32 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Talavera",
"Estefania",
""
],
[
"Strisciuglio",
"Nicola",
""
],
[
"Petkov",
"Nicolai",
""
],
[
"Radeva",
"Petia",
""
]
] | TITLE: Sentiment Recognition in Egocentric Photostreams
ABSTRACT: Lifelogging is a process of collecting a rich source of information about the daily
life of people. In this paper, we introduce the problem of sentiment analysis
in egocentric events, focusing on the moments composed of images that recall
positive, neutral or negative feelings to the observer. We propose a method for
the classification of the sentiments in egocentric pictures based on global and
semantic image features extracted by Convolutional Neural Networks. We carried
out experiments on an egocentric dataset, which we organized in 3 classes on
the basis of the sentiment that is recalled to the user (positive, negative or
neutral).
| new_dataset | 0.896976 |
1703.09962 | Mitra Baratchi Mitra Baratchi | Mitra Baratchi, Geert Heijenk, Maarten van Steen | Spaceprint: a Mobility-based Fingerprinting Scheme for Public Spaces | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of how automated situation-awareness
can be achieved by learning real-world situations from ubiquitously generated
mobility data. Without semantic input about the time and space where situations
take place, this turns out to be a fundamentally challenging problem.
Uncertainties also introduce technical challenges when data is generated in
irregular time intervals, mixed with noise and errors. Purely relying on
temporal patterns observable in mobility data, in this paper, we propose
Spaceprint, a fully automated algorithm for finding the repetitive pattern of
similar situations in spaces. We evaluate this technique by showing how the
latent variables describing the category, and the actual identity of a space
can be discovered from the extracted situation patterns. Doing so, we use
different real-world mobility datasets with data about the presence of mobile
entities in a variety of spaces. We also evaluate the performance of this
technique by showing its robustness against uncertainties.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2017 10:31:04 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Baratchi",
"Mitra",
""
],
[
"Heijenk",
"Geert",
""
],
[
"van Steen",
"Maarten",
""
]
] | TITLE: Spaceprint: a Mobility-based Fingerprinting Scheme for Public Spaces
ABSTRACT: In this paper, we address the problem of how automated situation-awareness
can be achieved by learning real-world situations from ubiquitously generated
mobility data. Without semantic input about the time and space where situations
take place, this turns out to be a fundamentally challenging problem.
Uncertainties also introduce technical challenges when data is generated in
irregular time intervals, mixed with noise and errors. Purely relying on
temporal patterns observable in mobility data, in this paper, we propose
Spaceprint, a fully automated algorithm for finding the repetitive pattern of
similar situations in spaces. We evaluate this technique by showing how the
latent variables describing the category, and the actual identity of a space
can be discovered from the extracted situation patterns. Doing so, we use
different real-world mobility datasets with data about the presence of mobile
entities in a variety of spaces. We also evaluate the performance of this
technique by showing its robustness against uncertainties.
| no_new_dataset | 0.949106 |
1703.09983 | Zhiqiang Shen | Zhiqiang Shen and Yu-Gang Jiang and Dequan Wang and Xiangyang Xue | Iterative Object and Part Transfer for Fine-Grained Recognition | To appear in ICME 2017 as an oral paper | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of fine-grained recognition is to identify sub-ordinate categories in
images like different species of birds. Existing works have confirmed that, in
order to capture the subtle differences across the categories, automatic
localization of objects and parts is critical. Most approaches for object and
part localization relied on the bottom-up pipeline, where thousands of region
proposals are generated and then filtered by pre-trained object/part models.
This is computationally expensive and not scalable once the number of
objects/parts becomes large. In this paper, we propose a nonparametric
data-driven method for object and part localization. Given an unlabeled test
image, our approach transfers annotations from a few similar images retrieved
in the training set. In particular, we propose an iterative transfer strategy
that gradually refines the predicted bounding boxes. Based on the located
objects and parts, deep convolutional features are extracted for recognition.
We evaluate our approach on the widely-used CUB200-2011 dataset and a new and
large dataset called Birdsnap. On both datasets, we achieve better results than
many state-of-the-art approaches, including a few using oracle (manually
annotated) bounding boxes in the test images.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2017 11:50:34 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Shen",
"Zhiqiang",
""
],
[
"Jiang",
"Yu-Gang",
""
],
[
"Wang",
"Dequan",
""
],
[
"Xue",
"Xiangyang",
""
]
] | TITLE: Iterative Object and Part Transfer for Fine-Grained Recognition
ABSTRACT: The aim of fine-grained recognition is to identify sub-ordinate categories in
images like different species of birds. Existing works have confirmed that, in
order to capture the subtle differences across the categories, automatic
localization of objects and parts is critical. Most approaches for object and
part localization relied on the bottom-up pipeline, where thousands of region
proposals are generated and then filtered by pre-trained object/part models.
This is computationally expensive and not scalable once the number of
objects/parts becomes large. In this paper, we propose a nonparametric
data-driven method for object and part localization. Given an unlabeled test
image, our approach transfers annotations from a few similar images retrieved
in the training set. In particular, we propose an iterative transfer strategy
that gradually refines the predicted bounding boxes. Based on the located
objects and parts, deep convolutional features are extracted for recognition.
We evaluate our approach on the widely-used CUB200-2011 dataset and a new and
large dataset called Birdsnap. On both datasets, we achieve better results than
many state-of-the-art approaches, including a few using oracle (manually
annotated) bounding boxes in the test images.
| new_dataset | 0.96862 |
1703.10062 | Sofia Ira Ktena | Sofia Ira Ktena, Salim Arslan, Sarah Parisot, Daniel Rueckert | Exploring Heritability of Functional Brain Networks with Inexact Graph
Matching | accepted at ISBI 2017: International Symposium on Biomedical Imaging,
Apr 2017, Melbourne, Australia | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data-driven brain parcellations aim to provide a more accurate representation
of an individual's functional connectivity, since they are able to capture
individual variability that arises due to development or disease. This renders
comparisons between the emerging brain connectivity networks more challenging,
since correspondences between their elements are not preserved. Unveiling these
correspondences is of major importance to keep track of local functional
connectivity changes. We propose a novel method based on graph edit distance
for the comparison of brain graphs directly in their domain, that can
accurately reflect similarities between individual networks while providing the
network element correspondences. This method is validated on a dataset of 116
twin subjects provided by the Human Connectome Project.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2017 14:24:52 GMT"
}
] | 2017-03-30T00:00:00 | [
[
"Ktena",
"Sofia Ira",
""
],
[
"Arslan",
"Salim",
""
],
[
"Parisot",
"Sarah",
""
],
[
"Rueckert",
"Daniel",
""
]
] | TITLE: Exploring Heritability of Functional Brain Networks with Inexact Graph
Matching
ABSTRACT: Data-driven brain parcellations aim to provide a more accurate representation
of an individual's functional connectivity, since they are able to capture
individual variability that arises due to development or disease. This renders
comparisons between the emerging brain connectivity networks more challenging,
since correspondences between their elements are not preserved. Unveiling these
correspondences is of major importance to keep track of local functional
connectivity changes. We propose a novel method based on graph edit distance
for the comparison of brain graphs directly in their domain, that can
accurately reflect similarities between individual networks while providing the
network element correspondences. This method is validated on a dataset of 116
twin subjects provided by the Human Connectome Project.
| no_new_dataset | 0.932269 |
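A minimal sketch of graph edit distance between two small connectivity graphs, using NetworkX's generic exact-GED routine; this call is an illustrative assumption and is not the inexact-matching formulation described in the abstract above:

```python
import networkx as nx

# Two toy connectivity graphs over the same four nodes (regions).
g1 = nx.Graph([(0, 1), (1, 2), (2, 3)])
g2 = nx.Graph([(0, 1), (1, 2), (1, 3)])

# Exact graph edit distance: the number of unit-cost node/edge edits that
# turn g1 into g2. Feasible here only because the graphs are tiny; larger
# brain graphs require approximate (inexact) matching.
distance = nx.graph_edit_distance(g1, g2)
print(distance)  # 2.0: delete edge (2, 3), insert edge (1, 3)
```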
1509.00117 | Alexander Wong | Devinder Kumar, Mohammad Javad Shafiee, Audrey G. Chung, Farzad
Khalvati, Masoom A. Haider, Alexander Wong | Discovery Radiomics for Pathologically-Proven Computed Tomography Lung
Cancer Prediction | 8 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lung cancer is the leading cause for cancer related deaths. As such, there is
an urgent need for a streamlined process that can allow radiologists to provide
diagnosis with greater efficiency and accuracy. A powerful tool to do this is
radiomics: a high-dimension imaging feature set. In this study, we take the
idea of radiomics one step further by introducing the concept of discovery
radiomics for lung cancer prediction using CT imaging data. In this study, we
realize these custom radiomic sequencers as deep convolutional sequencers using
a deep convolutional neural network learning architecture. To illustrate the
prognostic power and effectiveness of the radiomic sequences produced by the
discovered sequencer, we perform cancer prediction between malignant and benign
lesions from 97 patients using the pathologically-proven diagnostic data from
the LIDC-IDRI dataset. Using the clinically provided pathologically-proven data
as ground truth, the proposed framework provided an average accuracy of 77.52%
via 10-fold cross-validation with a sensitivity of 79.06% and specificity of
76.11%, surpassing the state-of-the art method.
| [
{
"version": "v1",
"created": "Tue, 1 Sep 2015 02:00:56 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Oct 2015 19:10:18 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2017 02:01:31 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Kumar",
"Devinder",
""
],
[
"Shafiee",
"Mohammad Javad",
""
],
[
"Chung",
"Audrey G.",
""
],
[
"Khalvati",
"Farzad",
""
],
[
"Haider",
"Masoom A.",
""
],
[
"Wong",
"Alexander",
""
]
] | TITLE: Discovery Radiomics for Pathologically-Proven Computed Tomography Lung
Cancer Prediction
ABSTRACT: Lung cancer is the leading cause for cancer related deaths. As such, there is
an urgent need for a streamlined process that can allow radiologists to provide
diagnosis with greater efficiency and accuracy. A powerful tool to do this is
radiomics: a high-dimension imaging feature set. In this study, we take the
idea of radiomics one step further by introducing the concept of discovery
radiomics for lung cancer prediction using CT imaging data. In this study, we
realize these custom radiomic sequencers as deep convolutional sequencers using
a deep convolutional neural network learning architecture. To illustrate the
prognostic power and effectiveness of the radiomic sequences produced by the
discovered sequencer, we perform cancer prediction between malignant and benign
lesions from 97 patients using the pathologically-proven diagnostic data from
the LIDC-IDRI dataset. Using the clinically provided pathologically-proven data
as ground truth, the proposed framework provided an average accuracy of 77.52%
via 10-fold cross-validation with a sensitivity of 79.06% and specificity of
76.11%, surpassing the state-of-the art method.
| no_new_dataset | 0.954265 |
1602.05335 | Jing Yang Koh | Jing Yang Koh, Ido Nevat, Derek Leong, and Wai-Choong Wong | Geo-spatial Location Spoofing Detection for Internet of Things | A shorten version of this work has been accepted to the IEEE IoT
Journal (IoT-J) on 08-Feb-2016 | IEEE Internet of Things Journal, vol. 3, no. 6, pp. 971-978, Dec.
2016 | 10.1109/JIOT.2016.2535165 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a new location spoofing detection algorithm for geo-spatial
tagging and location-based services in the Internet of Things (IoT), called
Enhanced Location Spoofing Detection using Audibility (ELSA) which can be
implemented at the backend server without modifying existing legacy IoT
systems. ELSA is based on a statistical decision theory framework and uses
two-way time-of-arrival (TW-TOA) information between the user's device and the
anchors. In addition to the TW-TOA information, ELSA exploits the implicit
available audibility information to improve detection rates of location
spoofing attacks. Given TW-TOA and audibility information, we derive the
decision rule for the verification of the device's location, based on the
generalized likelihood ratio test. We develop a practical threat model for
delay measurements spoofing scenarios, and investigate in detail the
performance of ELSA in terms of detection and false alarm rates. Our extensive
simulation results on both synthetic and real-world datasets demonstrate the
superior performance of ELSA compared to conventional non-audibility-aware
approaches.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 09:03:06 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2017 08:54:43 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Koh",
"Jing Yang",
""
],
[
"Nevat",
"Ido",
""
],
[
"Leong",
"Derek",
""
],
[
"Wong",
"Wai-Choong",
""
]
] | TITLE: Geo-spatial Location Spoofing Detection for Internet of Things
ABSTRACT: We develop a new location spoofing detection algorithm for geo-spatial
tagging and location-based services in the Internet of Things (IoT), called
Enhanced Location Spoofing Detection using Audibility (ELSA) which can be
implemented at the backend server without modifying existing legacy IoT
systems. ELSA is based on a statistical decision theory framework and uses
two-way time-of-arrival (TW-TOA) information between the user's device and the
anchors. In addition to the TW-TOA information, ELSA exploits the implicit
available audibility information to improve detection rates of location
spoofing attacks. Given TW-TOA and audibility information, we derive the
decision rule for the verification of the device's location, based on the
generalized likelihood ratio test. We develop a practical threat model for
delay measurements spoofing scenarios, and investigate in detail the
performance of ELSA in terms of detection and false alarm rates. Our extensive
simulation results on both synthetic and real-world datasets demonstrate the
superior performance of ELSA compared to conventional non-audibility-aware
approaches.
| no_new_dataset | 0.948917 |
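Illustrative sketch for the record above: the core of a generalized likelihood ratio test for location verification from two-way time-of-arrival ranges can be written in a few lines. This is a simplified toy version under a Gaussian range-noise assumption; it omits the audibility information that ELSA additionally exploits, and all function and parameter names are my own assumptions, not the paper's.

```python
import numpy as np

def glrt_location_check(anchors, ranges, claimed, sigma, threshold, grid):
    """Toy GLRT for location verification from TW-TOA-derived ranges.

    anchors : (N, 2) anchor positions
    ranges  : (N,) measured ranges to the device
    claimed : (2,) location reported by the device
    grid    : (M, 2) candidate positions searched under the spoofing hypothesis
    Returns True when the claimed location is accepted.
    """
    def neg_log_lik(pos):
        pred = np.linalg.norm(anchors - pos, axis=1)
        return np.sum((ranges - pred) ** 2) / (2.0 * sigma ** 2)

    nll_claimed = neg_log_lik(np.asarray(claimed))
    nll_best = min(neg_log_lik(p) for p in grid)  # ML fit under "spoofed"
    # Accept when the claimed position explains the ranges nearly as well as
    # the best-fitting position; otherwise flag a possible spoofing attack.
    return (nll_claimed - nll_best) <= threshold
```

In practice the decision threshold would be chosen from the desired false-alarm rate, which is exactly the kind of trade-off the record above evaluates.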
1603.05544 | Linnan Wang | Linnan Wang, Yi Yang, Martin Renqiang Min, Srimat Chakradhar | Accelerating Deep Neural Network Training with Inconsistent Stochastic
Gradient Descent | The patent of ISGD belongs to NEC Labs | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | SGD is the widely adopted method to train CNNs. Conceptually it approximates
the population with a randomly sampled batch; then it evenly trains batches by
conducting a gradient update on every batch in an epoch. In this paper, we
demonstrate Sampling Bias, Intrinsic Image Difference and Fixed Cycle Pseudo
Random Sampling differentiate batches in training, which then affect learning
speeds on them. Because of this, the unbiased treatment of batches involved in
SGD creates improper load balancing. To address this issue, we present
Inconsistent Stochastic Gradient Descent (ISGD) to dynamically vary training
effort according to learning statuses on batches. Specifically ISGD leverages
techniques in Statistical Process Control to identify an undertrained batch.
Once a batch is undertrained, ISGD solves a new subproblem, a chasing logic
plus a conservative constraint, to accelerate the training on the batch while
avoiding drastic parameter changes. Extensive experiments on a variety of datasets
demonstrate ISGD converges faster than SGD. In training AlexNet, ISGD is
21.05\% faster than SGD to reach 56\% top1 accuracy under exactly the same
experimental setup. We also extend ISGD to work on multiGPU or heterogeneous
distributed system based on data parallelism, enabling the batch size to be the
key to scalability. Then we present a study of the ISGD batch size in relation to the
learning rate, parallelism, synchronization cost, system saturation and
scalability. We conclude the optimal ISGD batch size is machine dependent.
Various experiments on a multiGPU system validate our claim. In particular,
ISGD trains AlexNet to 56.3% top1 and 80.1% top5 accuracy in 11.5 hours with 4
NVIDIA TITAN X at the batch size of 1536.
| [
{
"version": "v1",
"created": "Thu, 17 Mar 2016 15:49:48 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2016 05:35:22 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2017 13:56:03 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Wang",
"Linnan",
""
],
[
"Yang",
"Yi",
""
],
[
"Min",
"Martin Renqiang",
""
],
[
"Chakradhar",
"Srimat",
""
]
] | TITLE: Accelerating Deep Neural Network Training with Inconsistent Stochastic
Gradient Descent
ABSTRACT: SGD is the widely adopted method to train CNNs. Conceptually it approximates
the population with a randomly sampled batch; then it evenly trains batches by
conducting a gradient update on every batch in an epoch. In this paper, we
demonstrate Sampling Bias, Intrinsic Image Difference and Fixed Cycle Pseudo
Random Sampling differentiate batches in training, which then affect learning
speeds on them. Because of this, the unbiased treatment of batches involved in
SGD creates improper load balancing. To address this issue, we present
Inconsistent Stochastic Gradient Descent (ISGD) to dynamically vary training
effort according to learning statuses on batches. Specifically ISGD leverages
techniques in Statistical Process Control to identify an undertrained batch.
Once a batch is undertrained, ISGD solves a new subproblem, a chasing logic
plus a conservative constraint, to accelerate the training on the batch while
avoiding drastic parameter changes. Extensive experiments on a variety of datasets
demonstrate ISGD converges faster than SGD. In training AlexNet, ISGD is
21.05\% faster than SGD to reach 56\% top1 accuracy under exactly the same
experimental setup. We also extend ISGD to work on multiGPU or heterogeneous
distributed system based on data parallelism, enabling the batch size to be the
key to scalability. Then we present a study of the ISGD batch size in relation to the
learning rate, parallelism, synchronization cost, system saturation and
scalability. We conclude the optimal ISGD batch size is machine dependent.
Various experiments on a multiGPU system validate our claim. In particular,
ISGD trains AlexNet to 56.3% top1 and 80.1% top5 accuracy in 11.5 hours with 4
NVIDIA TITAN X at the batch size of 1536.
| no_new_dataset | 0.945045 |
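Illustrative sketch for the record above: one simple way to flag an "undertrained" batch with a Statistical Process Control style rule is to keep a sliding window of recent batch losses and raise a flag when a batch's loss exceeds an upper control limit. The window size, warm-up length and multiplier below are illustrative assumptions, not the paper's settings.

```python
from collections import deque
import statistics

class BatchLossMonitor:
    """Flag batches whose loss exceeds mean + k * std of recent batch losses,
    an SPC-style upper control limit (values are illustrative defaults)."""

    def __init__(self, window=100, k=3.0, warmup=10):
        self.losses = deque(maxlen=window)
        self.k = k
        self.warmup = warmup

    def is_undertrained(self, batch_loss):
        flagged = False
        if len(self.losses) >= self.warmup:
            mean = statistics.fmean(self.losses)
            std = statistics.pstdev(self.losses)
            flagged = batch_loss > mean + self.k * std
        self.losses.append(batch_loss)
        return flagged
```

A training loop would spend extra iterations on a flagged batch (subject to a conservative constraint on how far the parameters may move) before advancing to the next one.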
1604.01431 | Wei-Chiu Ma | Wei-Chiu Ma, De-An Huang, Namhoon Lee, Kris M. Kitani | Forecasting Interactive Dynamics of Pedestrians with Fictitious Play | Accepted to CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop predictive models of pedestrian dynamics by encoding the coupled
nature of multi-pedestrian interaction using game theory, and deep
learning-based visual analysis to estimate person-specific behavior parameters.
Building predictive models for multi-pedestrian interactions, however, is very
challenging due to two reasons: (1) the dynamics of interaction are complex
interdependent processes, where the predicted behavior of one pedestrian can
affect the actions taken by others and (2) dynamics are variable depending on
an individual's physical characteristics (e.g., an older person may walk slowly
while the younger person may walk faster). To address these challenges, we (1)
utilize concepts from game theory to model the interdependent decision making
process of multiple pedestrians and (2) use visual classifiers to learn a
mapping from pedestrian appearance to behavior parameters. We evaluate our
proposed model on several public multiple pedestrian interaction video
datasets. Results show that our strategic planning model explains human
interactions 25% better when compared to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 21:13:32 GMT"
},
{
"version": "v2",
"created": "Mon, 9 May 2016 18:07:23 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2017 16:31:01 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Ma",
"Wei-Chiu",
""
],
[
"Huang",
"De-An",
""
],
[
"Lee",
"Namhoon",
""
],
[
"Kitani",
"Kris M.",
""
]
] | TITLE: Forecasting Interactive Dynamics of Pedestrians with Fictitious Play
ABSTRACT: We develop predictive models of pedestrian dynamics by encoding the coupled
nature of multi-pedestrian interaction using game theory, and deep
learning-based visual analysis to estimate person-specific behavior parameters.
Building predictive models for multi-pedestrian interactions, however, is very
challenging due to two reasons: (1) the dynamics of interaction are complex
interdependent processes, where the predicted behavior of one pedestrian can
affect the actions taken by others and (2) dynamics are variable depending on
an individual's physical characteristics (e.g., an older person may walk slowly
while the younger person may walk faster). To address these challenges, we (1)
utilize concepts from game theory to model the interdependent decision making
process of multiple pedestrians and (2) use visual classifiers to learn a
mapping from pedestrian appearance to behavior parameters. We evaluate our
proposed model on several public multiple pedestrian interaction video
datasets. Results show that our strategic planning model explains human
interactions 25% better when compared to state-of-the-art methods.
| no_new_dataset | 0.947769 |
1607.06179 | Ninh Pham | Ninh Pham | Hybrid LSH: Faster Near Neighbors Reporting in High-dimensional Space | Accepted as a short paper in EDBT 2017 | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the $r$-near neighbors reporting problem ($r$-NN), i.e., reporting
\emph{all} points in a high-dimensional point set $S$ that lie within a radius
$r$ of a given query point $q$. Our approach builds upon on the
locality-sensitive hashing (LSH) framework due to its appealing asymptotic
sublinear query time for near neighbor search problems in high-dimensional
space. A bottleneck of the traditional LSH scheme for solving $r$-NN is that
its performance is sensitive to data and query-dependent parameters. On
datasets whose data distributions have diverse local density patterns, LSH with
inappropriate tuning parameters can sometimes be outperformed by a simple
linear search.
In this paper, we introduce a hybrid search strategy between LSH-based search
and linear search for $r$-NN in high-dimensional space. By integrating an
auxiliary data structure into LSH hash tables, we can efficiently estimate the
computational cost of LSH-based search for a given query regardless of the data
distribution. This means that we are able to choose the appropriate search
strategy between LSH-based search and linear search to achieve better
performance. Moreover, the integrated data structure is time efficient and fits
well with many recent state-of-the-art LSH-based approaches. Our experiments on
real-world datasets show that the hybrid search approach outperforms (or is
comparable to) both LSH-based search and linear search for a wide range of
search radii and data distributions in high-dimensional space.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2016 03:24:06 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Jan 2017 02:28:08 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2017 12:30:35 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Pham",
"Ninh",
""
]
] | TITLE: Hybrid LSH: Faster Near Neighbors Reporting in High-dimensional Space
ABSTRACT: We study the $r$-near neighbors reporting problem ($r$-NN), i.e., reporting
\emph{all} points in a high-dimensional point set $S$ that lie within a radius
$r$ of a given query point $q$. Our approach builds upon the
locality-sensitive hashing (LSH) framework due to its appealing asymptotic
sublinear query time for near neighbor search problems in high-dimensional
space. A bottleneck of the traditional LSH scheme for solving $r$-NN is that
its performance is sensitive to data and query-dependent parameters. On
datasets whose data distributions have diverse local density patterns, LSH with
inappropriate tuning parameters can sometimes be outperformed by a simple
linear search.
In this paper, we introduce a hybrid search strategy between LSH-based search
and linear search for $r$-NN in high-dimensional space. By integrating an
auxiliary data structure into LSH hash tables, we can efficiently estimate the
computational cost of LSH-based search for a given query regardless of the data
distribution. This means that we are able to choose the appropriate search
strategy between LSH-based search and linear search to achieve better
performance. Moreover, the integrated data structure is time efficient and fits
well with many recent state-of-the-art LSH-based approaches. Our experiments on
real-world datasets show that the hybrid search approach outperforms (or is
comparable to) both LSH-based search and linear search for a wide range of
search radii and data distributions in high-dimensional space.
| no_new_dataset | 0.950686 |
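Illustrative sketch for the record above: the per-query choice between an LSH lookup and a linear scan can be driven by a cheap estimate of how many candidates sit in the query's buckets. The sign-random-projection hashing and the cost threshold below are my own illustrative assumptions; the paper's estimator and its integration with the hash tables are more refined.

```python
import numpy as np
from collections import defaultdict

class HybridLSH:
    """Minimal LSH index with a per-query hybrid rule: if the estimated number
    of bucket candidates approaches the dataset size, fall back to linear scan."""

    def __init__(self, data, n_tables=8, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.data = np.asarray(data)
        self.planes = rng.standard_normal((n_tables, n_bits, self.data.shape[1]))
        self.tables = []
        for t in range(n_tables):
            table = defaultdict(list)
            bits = self.data @ self.planes[t].T > 0  # (n, n_bits) sign bits
            for i, key in enumerate(map(tuple, bits)):
                table[key].append(i)
            self.tables.append(table)

    def query(self, q, r, cost_ratio=0.5):
        keys = [tuple(self.planes[t] @ q > 0) for t in range(len(self.tables))]
        est_cost = sum(len(self.tables[t].get(k, ())) for t, k in enumerate(keys))
        if est_cost >= cost_ratio * len(self.data):  # LSH unlikely to help here
            candidates = range(len(self.data))       # plain linear scan instead
        else:
            candidates = {i for t, k in enumerate(keys)
                          for i in self.tables[t].get(k, ())}
        return [i for i in candidates if np.linalg.norm(self.data[i] - q) <= r]
```

The point of the hybrid rule is that the estimate costs only a few hash lookups, so on queries falling in dense regions the index degrades gracefully to a linear scan rather than verifying nearly every point twice.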
1611.07156 | Yazhou Yao | Yazhou Yao, Jian Zhang, Fumin Shen, Xiansheng Hua, Jingsong Xu and
Zhenmin Tang | Exploiting Web Images for Dataset Construction: A Domain Robust Approach | Journal | null | 10.1109/TMM.2017.2684626 | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Labelled image datasets have played a critical role in high-level image
understanding. However, the process of manual labelling is both time-consuming
and labor intensive. To reduce the cost of manual labelling, there has been
increased research interest in automatically constructing image datasets by
exploiting web images. Datasets constructed by existing methods tend to have a
weak domain adaptation ability, which is known as the "dataset bias problem".
To address this issue, we present a novel image dataset construction framework
that can be generalized well to unseen target domains. Specifically, the given
queries are first expanded by searching the Google Books Ngrams Corpus to
obtain a rich semantic description, from which the visually non-salient and
less relevant expansions are filtered out. By treating each selected expansion
as a "bag" and the retrieved images as "instances", image selection can be
formulated as a multi-instance learning problem with constrained positive bags.
We propose to solve the employed problems by the cutting-plane and
concave-convex procedure (CCCP) algorithm. By using this approach, images from
different distributions can be kept while noisy images are filtered out. To
verify the effectiveness of our proposed approach, we build an image dataset
with 20 categories. Extensive experiments on image classification,
cross-dataset generalization, diversity comparison and object detection
demonstrate the domain robustness of our dataset.
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2016 06:22:19 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Feb 2017 23:53:20 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Mar 2017 22:54:15 GMT"
},
{
"version": "v4",
"created": "Tue, 28 Mar 2017 06:30:41 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Yao",
"Yazhou",
""
],
[
"Zhang",
"Jian",
""
],
[
"Shen",
"Fumin",
""
],
[
"Hua",
"Xiansheng",
""
],
[
"Xu",
"Jingsong",
""
],
[
"Tang",
"Zhenmin",
""
]
] | TITLE: Exploiting Web Images for Dataset Construction: A Domain Robust Approach
ABSTRACT: Labelled image datasets have played a critical role in high-level image
understanding. However, the process of manual labelling is both time-consuming
and labor intensive. To reduce the cost of manual labelling, there has been
increased research interest in automatically constructing image datasets by
exploiting web images. Datasets constructed by existing methods tend to have a
weak domain adaptation ability, which is known as the "dataset bias problem".
To address this issue, we present a novel image dataset construction framework
that can be generalized well to unseen target domains. Specifically, the given
queries are first expanded by searching the Google Books Ngrams Corpus to
obtain a rich semantic description, from which the visually non-salient and
less relevant expansions are filtered out. By treating each selected expansion
as a "bag" and the retrieved images as "instances", image selection can be
formulated as a multi-instance learning problem with constrained positive bags.
We propose to solve the employed problems by the cutting-plane and
concave-convex procedure (CCCP) algorithm. By using this approach, images from
different distributions can be kept while noisy images are filtered out. To
verify the effectiveness of our proposed approach, we build an image dataset
with 20 categories. Extensive experiments on image classification,
cross-dataset generalization, diversity comparison and object detection
demonstrate the domain robustness of our dataset.
| no_new_dataset | 0.904693 |
1611.07485 | Qiangui Huang | Qiangui Huang, Weiyue Wang, Kevin Zhou, Suya You, Ulrich Neumann | Scene Labeling using Gated Recurrent Units with Explicit Long Range
Conditioning | updated version 2 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural network (RNN), as a powerful contextual dependency modeling
framework, has been widely applied to scene labeling problems. However, this
work shows that directly applying traditional RNN architectures, which unfolds
a 2D lattice grid into a sequence, is not sufficient to model structure
dependencies in images due to the "impact vanishing" problem. First, we give an
empirical analysis about the "impact vanishing" problem. Then, a new RNN unit
named Recurrent Neural Network with explicit long range conditioning (RNN-ELC)
is designed to alleviate this problem. A novel neural network architecture is
built for scene labeling tasks where one of the variants of the new RNN unit,
Gated Recurrent Unit with Explicit Long-range Conditioning (GRU-ELC), is used
to model multi scale contextual dependencies in images. We validate the use of
GRU-ELC units with state-of-the-art performance on three standard scene
labeling datasets. Comprehensive experiments demonstrate that the new GRU-ELC
unit benefits scene labeling problem a lot as it can encode longer contextual
dependencies in images more effectively than traditional RNN units.
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2016 19:43:24 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2017 05:12:44 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Huang",
"Qiangui",
""
],
[
"Wang",
"Weiyue",
""
],
[
"Zhou",
"Kevin",
""
],
[
"You",
"Suya",
""
],
[
"Neumann",
"Ulrich",
""
]
] | TITLE: Scene Labeling using Gated Recurrent Units with Explicit Long Range
Conditioning
ABSTRACT: Recurrent neural network (RNN), as a powerful contextual dependency modeling
framework, has been widely applied to scene labeling problems. However, this
work shows that directly applying traditional RNN architectures, which unfolds
a 2D lattice grid into a sequence, is not sufficient to model structure
dependencies in images due to the "impact vanishing" problem. First, we give an
empirical analysis about the "impact vanishing" problem. Then, a new RNN unit
named Recurrent Neural Network with explicit long range conditioning (RNN-ELC)
is designed to alleviate this problem. A novel neural network architecture is
built for scene labeling tasks where one of the variants of the new RNN unit,
Gated Recurrent Unit with Explicit Long-range Conditioning (GRU-ELC), is used
to model multi scale contextual dependencies in images. We validate the use of
GRU-ELC units with state-of-the-art performance on three standard scene
labeling datasets. Comprehensive experiments demonstrate that the new GRU-ELC
unit benefits the scene labeling problem considerably, as it can encode longer contextual
dependencies in images more effectively than traditional RNN units.
| no_new_dataset | 0.950411 |
1702.00648 | Niannan Xue | Niannan Xue, Yannis Panagakis, Stefanos Zafeiriou | Side Information in Robust Principal Component Analysis: Algorithms and
Applications | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robust Principal Component Analysis (RPCA) aims at recovering a low-rank
subspace from grossly corrupted high-dimensional (often visual) data and is a
cornerstone in many machine learning and computer vision applications. Even
though RPCA has been shown to be very successful in solving many rank
minimisation problems, there are still cases where degenerate or suboptimal
solutions are obtained. This is likely to be remedied by taking into account
domain-dependent prior knowledge. In this paper, we propose two models for the
RPCA problem with the aid of side information on the low-rank structure of the
data. The versatility of the proposed methods is demonstrated by applying them
to four applications, namely background subtraction, facial image denoising,
face and facial expression recognition. Experimental results on synthetic and
five real world datasets indicate the robustness and effectiveness of the
proposed methods on these application domains, largely outperforming six
previous approaches.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2017 12:42:50 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2017 16:23:44 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Xue",
"Niannan",
""
],
[
"Panagakis",
"Yannis",
""
],
[
"Zafeiriou",
"Stefanos",
""
]
] | TITLE: Side Information in Robust Principal Component Analysis: Algorithms and
Applications
ABSTRACT: Robust Principal Component Analysis (RPCA) aims at recovering a low-rank
subspace from grossly corrupted high-dimensional (often visual) data and is a
cornerstone in many machine learning and computer vision applications. Even
though RPCA has been shown to be very successful in solving many rank
minimisation problems, there are still cases where degenerate or suboptimal
solutions are obtained. This is likely to be remedied by taking into account
domain-dependent prior knowledge. In this paper, we propose two models for the
RPCA problem with the aid of side information on the low-rank structure of the
data. The versatility of the proposed methods is demonstrated by applying them
to four applications, namely background subtraction, facial image denoising,
face and facial expression recognition. Experimental results on synthetic and
five real world datasets indicate the robustness and effectiveness of the
proposed methods on these application domains, largely outperforming six
previous approaches.
| no_new_dataset | 0.945248 |
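Illustrative sketch for the record above: plain RPCA (without side information) is often approximated by alternating singular-value thresholding for the low-rank part with elementwise soft-thresholding for the sparse part. This crude alternation is only a sketch of the decomposition the paper builds on; the thresholds and lambda below are common defaults, not the authors' choices, and the side-information models themselves are not reproduced.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Elementwise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, tau=None, n_iter=100):
    """Crude alternating-thresholding sketch: M ~ L (low rank) + S (sparse)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam      # common default
    tau = 0.01 * np.linalg.norm(M, 2) if tau is None else tau   # illustrative
    L = np.zeros_like(M, dtype=float)
    S = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        L = svt(M - S, tau)
        S = soft(M - L, tau * lam)
    return L, S
```

For background subtraction, for instance, each video frame becomes one column of M, the low-rank part L captures the static background and the sparse part S the moving foreground.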
1703.07475 | Chunhui Liu | Chunhui Liu, and Yueyu Hu, and Yanghao Li, and Sijie Song, and Jiaying
Liu | PKU-MMD: A Large Scale Benchmark for Continuous Multi-Modal Human Action
Understanding | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the fact that many 3D human activity benchmarks have been proposed, most
existing action datasets focus on the action recognition tasks for the
segmented videos. There is a lack of standard large-scale benchmarks,
especially for current popular data-hungry deep learning based methods. In this
paper, we introduce a new large scale benchmark (PKU-MMD) for continuous
multi-modality 3D human action understanding and cover a wide range of complex
human activities with well annotated information. PKU-MMD contains 1076 long
video sequences in 51 action categories, performed by 66 subjects in three
camera views. It contains almost 20,000 action instances and 5.4 million frames
in total. Our dataset also provides multi-modality data sources, including RGB,
depth, Infrared Radiation and Skeleton. With different modalities, we conduct
extensive experiments on our dataset in terms of two scenarios and evaluate
different methods by various metrics, including a new proposed evaluation
protocol 2D-AP. We believe this large-scale dataset will benefit future
research on action detection for the community.
| [
{
"version": "v1",
"created": "Wed, 22 Mar 2017 00:22:49 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2017 01:01:29 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Liu",
"Chunhui",
""
],
[
"Hu",
"Yueyu",
""
],
[
"Li",
"Yanghao",
""
],
[
"Song",
"Sijie",
""
],
[
"Liu",
"Jiaying",
""
]
] | TITLE: PKU-MMD: A Large Scale Benchmark for Continuous Multi-Modal Human Action
Understanding
ABSTRACT: Despite the fact that many 3D human activity benchmarks have been proposed, most
existing action datasets focus on the action recognition tasks for the
segmented videos. There is a lack of standard large-scale benchmarks,
especially for current popular data-hungry deep learning based methods. In this
paper, we introduce a new large scale benchmark (PKU-MMD) for continuous
multi-modality 3D human action understanding and cover a wide range of complex
human activities with well annotated information. PKU-MMD contains 1076 long
video sequences in 51 action categories, performed by 66 subjects in three
camera views. It contains almost 20,000 action instances and 5.4 million frames
in total. Our dataset also provides multi-modality data sources, including RGB,
depth, Infrared Radiation and Skeleton. With different modalities, we conduct
extensive experiments on our dataset in terms of two scenarios and evaluate
different methods by various metrics, including a new proposed evaluation
protocol 2D-AP. We believe this large-scale dataset will benefit future
research on action detection for the community.
| new_dataset | 0.969699 |
1703.07957 | Yohann Salaun | Yohann Salaun, Renaud Marlet, and Pascal Monasse | Robust SfM with Little Image Overlap | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Usual Structure-from-Motion (SfM) techniques require at least trifocal
overlaps to calibrate cameras and reconstruct a scene. We consider here
scenarios of reduced image sets with little overlap, possibly as low as two
images at most seeing the same part of the scene. We propose a new method,
based on line coplanarity hypotheses, for estimating the relative scale of two
independent bifocal calibrations sharing a camera, without the need of any
trifocal information or Manhattan-world assumption. We use it to compute SfM in
a chain of up-to-scale relative motions. For accuracy, we however also make use
of trifocal information for line and/or point features, when present, relaxing
usual trifocal constraints. For robustness to wrong assumptions and mismatches,
we embed all constraints in a parameterless RANSAC-like approach. Experiments
show that we can calibrate datasets that previously could not, and that this
wider applicability does not come at the cost of inaccuracy.
| [
{
"version": "v1",
"created": "Thu, 23 Mar 2017 07:52:31 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Mar 2017 09:57:56 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Salaun",
"Yohann",
""
],
[
"Marlet",
"Renaud",
""
],
[
"Monasse",
"Pascal",
""
]
] | TITLE: Robust SfM with Little Image Overlap
ABSTRACT: Usual Structure-from-Motion (SfM) techniques require at least trifocal
overlaps to calibrate cameras and reconstruct a scene. We consider here
scenarios of reduced image sets with little overlap, possibly as low as two
images at most seeing the same part of the scene. We propose a new method,
based on line coplanarity hypotheses, for estimating the relative scale of two
independent bifocal calibrations sharing a camera, without the need of any
trifocal information or Manhattan-world assumption. We use it to compute SfM in
a chain of up-to-scale relative motions. For accuracy, we however also make use
of trifocal information for line and/or point features, when present, relaxing
usual trifocal constraints. For robustness to wrong assumptions and mismatches,
we embed all constraints in a parameterless RANSAC-like approach. Experiments
show that we can calibrate datasets that previously could not, and that this
wider applicability does not come at the cost of inaccuracy.
| no_new_dataset | 0.94699 |
1703.09474 | Ancong Wu | Ancong Wu, Wei-Shi Zheng, Jianhuang Lai | Robust Depth-based Person Re-identification | IEEE Transactions on Image Processing Early Access | null | 10.1109/TIP.2017.2675201 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification (re-id) aims to match people across non-overlapping
camera views. So far the RGB-based appearance is widely used in most existing
works. However, when people appear in extreme illumination or change
clothes, the RGB appearance-based re-id methods tend to fail. To overcome
this problem, we propose to exploit depth information to provide more invariant
body shape and skeleton information regardless of illumination and color
change. More specifically, we exploit depth voxel covariance descriptor and
further propose a locally rotation invariant depth shape descriptor called
Eigen-depth feature to describe pedestrian body shape. We prove that the
distance between any two covariance matrices on the Riemannian manifold is
equivalent to the Euclidean distance between the corresponding Eigen-depth
features. Furthermore, we propose a kernelized implicit feature transfer scheme
to estimate Eigen-depth feature implicitly from RGB image when depth
information is not available. We find that combining the estimated depth
features with RGB-based appearance features can sometimes help to better reduce
visual ambiguities of appearance features caused by illumination and similar
clothes. The effectiveness of our models was validated on publicly available
depth pedestrian datasets as compared to related methods for person
re-identification.
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 09:26:54 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Wu",
"Ancong",
""
],
[
"Zheng",
"Wei-Shi",
""
],
[
"Lai",
"Jianhuang",
""
]
] | TITLE: Robust Depth-based Person Re-identification
ABSTRACT: Person re-identification (re-id) aims to match people across non-overlapping
camera views. So far the RGB-based appearance is widely used in most existing
works. However, when people appear in extreme illumination or change
clothes, the RGB appearance-based re-id methods tend to fail. To overcome
this problem, we propose to exploit depth information to provide more invariant
body shape and skeleton information regardless of illumination and color
change. More specifically, we exploit depth voxel covariance descriptor and
further propose a locally rotation invariant depth shape descriptor called
Eigen-depth feature to describe pedestrian body shape. We prove that the
distance between any two covariance matrices on the Riemannian manifold is
equivalent to the Euclidean distance between the corresponding Eigen-depth
features. Furthermore, we propose a kernelized implicit feature transfer scheme
to estimate Eigen-depth feature implicitly from RGB image when depth
information is not available. We find that combining the estimated depth
features with RGB-based appearance features can sometimes help to better reduce
visual ambiguities of appearance features caused by illumination and similar
clothes. The effectiveness of our models was validated on publicly available
depth pedestrian datasets as compared to related methods for person
re-identification.
| no_new_dataset | 0.953319 |
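Illustrative sketch for the record above: the key property the paper relies on, that a Riemannian distance between covariance descriptors can be computed as a Euclidean distance between derived features, holds in particular for the log-Euclidean metric shown below. The exact Eigen-depth construction is not reproduced; this is only the generic covariance-descriptor machinery, with my own function names.

```python
import numpy as np

def log_map(C, eps=1e-6):
    """Matrix logarithm of a symmetric positive-definite covariance matrix."""
    w, V = np.linalg.eigh(C + eps * np.eye(C.shape[0]))
    return (V * np.log(w)) @ V.T  # V diag(log w) V^T

def covariance_descriptor(F):
    """Covariance descriptor of per-point depth features F, shape (n_points, d)."""
    return np.cov(F, rowvar=False)

def log_euclidean_distance(C1, C2):
    """Log-Euclidean distance between two covariance descriptors: the
    Frobenius (Euclidean) distance between their matrix logarithms."""
    return np.linalg.norm(log_map(C1) - log_map(C2), ord='fro')
```

Because the matrix logarithm flattens the manifold, the vectorized log of each covariance can be fed directly to Euclidean classifiers or metric learners, which is the practical payoff of such an equivalence.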
1703.09513 | Aleksey Buzmakov | Aleksey Buzmakov and Sergei O. Kuznetsov and Amedeo Napoli | Mining Best Closed Itemsets for Projection-antimonotonic Constraints in
Polynomial Time | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The exponential explosion of the set of patterns is one of the main
challenges in pattern mining. This challenge is approached by introducing a
constraint for pattern selection. One of the first constraints proposed in
pattern mining is support (frequency) of a pattern in a dataset. Frequency is
an anti-monotonic function, i.e., given an infrequent pattern, all its
superpatterns are not frequent. However, many other constraints for pattern
selection are neither monotonic nor anti-monotonic, which makes it difficult to
generate patterns satisfying these constraints.
In order to deal with nonmonotonic constraints we introduce the notion of
"projection antimonotonicity" and SOFIA algorithm that allow generating best
patterns for a class of nonmonotonic constraints. Cosine interest, robustness,
stability of closed itemsets, and the associated delta-measure are among these
constraints. SOFIA starts from light descriptions of transactions in a dataset (a
small set of items in the case of itemset description) and then iteratively
adds more information to these descriptions (more items with indication of
tidsets they describe).
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 11:40:44 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Buzmakov",
"Aleksey",
""
],
[
"Kuznetsov",
"Sergei O.",
""
],
[
"Napoli",
"Amedeo",
""
]
] | TITLE: Mining Best Closed Itemsets for Projection-antimonotonic Constraints in
Polynomial Time
ABSTRACT: The exponential explosion of the set of patterns is one of the main
challenges in pattern mining. This challenge is approached by introducing a
constraint for pattern selection. One of the first constraints proposed in
pattern mining is support (frequency) of a pattern in a dataset. Frequency is
an anti-monotonic function, i.e., given an infrequent pattern, all its
superpatterns are not frequent. However, many other constraints for pattern
selection are neither monotonic nor anti-monotonic, which makes it difficult to
generate patterns satisfying these constraints.
In order to deal with nonmonotonic constraints we introduce the notion of
"projection antimonotonicity" and SOFIA algorithm that allow generating best
patterns for a class of nonmonotonic constraints. Cosine interest, robustness,
stability of closed itemsets, and the associated delta-measure are among these
constraints. SOFIA starts from light descriptions of transactions in a dataset (a
small set of items in the case of itemset description) and then iteratively
adds more information to these descriptions (more items with indication of
tidsets they describe).
| no_new_dataset | 0.947137 |
1703.09695 | Nasim Souly | Nasim Souly, Concetto Spampinato and Mubarak Shah | Semi and Weakly Supervised Semantic Segmentation Using Generative
Adversarial Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic segmentation has been a long standing challenging task in computer
vision. It aims at assigning a label to each image pixel and needs a significant
amount of pixel-level annotated data, which is often unavailable. To address
this lack, in this paper, we leverage, on one hand, massive amount of available
unlabeled or weakly labeled data, and on the other hand, non-real images
created through Generative Adversarial Networks. In particular, we propose a
semi-supervised framework, based on Generative Adversarial Networks (GANs),
which consists of a generator network to provide extra training examples to a
multi-class classifier, acting as discriminator in the GAN framework, that
assigns a sample a label y from the K possible classes or marks it as a fake
sample (extra class). The underlying idea is that adding large fake visual data
forces real samples to be close in the feature space, enabling a bottom-up
clustering process, which, in turn, improves multiclass pixel classification.
To ensure higher quality of generated images for GANs with consequent improved
pixel classification, we extend the above framework by adding weakly annotated
data, i.e., we provide class level information to the generator. We tested our
approaches on several challenging benchmarking visual datasets, i.e. PASCAL,
SiftFlow, Stanford and CamVid, achieving competitive performance also compared
to state-of-the-art semantic segmentation methods.
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 17:57:21 GMT"
}
] | 2017-03-29T00:00:00 | [
[
"Souly",
"Nasim",
""
],
[
"Spampinato",
"Concetto",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Semi and Weakly Supervised Semantic Segmentation Using Generative
Adversarial Network
ABSTRACT: Semantic segmentation has been a long standing challenging task in computer
vision. It aims at assigning a label to each image pixel and needs a significant
amount of pixel-level annotated data, which is often unavailable. To address
this lack, in this paper, we leverage, on one hand, massive amount of available
unlabeled or weakly labeled data, and on the other hand, non-real images
created through Generative Adversarial Networks. In particular, we propose a
semi-supervised framework, based on Generative Adversarial Networks (GANs),
which consists of a generator network to provide extra training examples to a
multi-class classifier, acting as discriminator in the GAN framework, that
assigns a sample a label y from the K possible classes or marks it as a fake
sample (extra class). The underlying idea is that adding large fake visual data
forces real samples to be close in the feature space, enabling a bottom-up
clustering process, which, in turn, improves multiclass pixel classification.
To ensure higher quality of generated images for GANs with consequent improved
pixel classification, we extend the above framework by adding weakly annotated
data, i.e., we provide class level information to the generator. We tested our
approaches on several challenging benchmarking visual datasets, i.e. PASCAL,
SiftFlow, Stanford and CamVid, achieving competitive performance also compared
to state-of-the-art semantic segmentation methods.
| no_new_dataset | 0.956186 |
1607.06025 | Janez Starc | Janez Starc and Dunja Mladeni\'c | Constructing a Natural Language Inference Dataset using Generative
Neural Networks | null | null | null | null | cs.AI cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural Language Inference is an important task for Natural Language
Understanding. It is concerned with classifying the logical relation between
two sentences. In this paper, we propose several text generative neural
networks for generating text hypotheses, which allows construction of new
Natural Language Inference datasets. To evaluate the models, we propose a new
metric -- the accuracy of the classifier trained on the generated dataset. The
accuracy obtained by our best generative model is only 2.7% lower than the
accuracy of the classifier trained on the original, human crafted dataset.
Furthermore, the best generated dataset combined with the original dataset
achieves the highest accuracy. The best model learns a mapping embedding for
each training example. By comparing various metrics we show that datasets that
obtain higher ROUGE or METEOR scores do not necessarily yield higher
classification accuracies. We also provide analysis of what are the
characteristics of a good dataset including the distinguishability of the
generated datasets from the original one.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2016 16:59:21 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2017 08:33:27 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Starc",
"Janez",
""
],
[
"Mladenić",
"Dunja",
""
]
] | TITLE: Constructing a Natural Language Inference Dataset using Generative
Neural Networks
ABSTRACT: Natural Language Inference is an important task for Natural Language
Understanding. It is concerned with classifying the logical relation between
two sentences. In this paper, we propose several text generative neural
networks for generating text hypotheses, which allows construction of new
Natural Language Inference datasets. To evaluate the models, we propose a new
metric -- the accuracy of the classifier trained on the generated dataset. The
accuracy obtained by our best generative model is only 2.7% lower than the
accuracy of the classifier trained on the original, human crafted dataset.
Furthermore, the best generated dataset combined with the original dataset
achieves the highest accuracy. The best model learns a mapping embedding for
each training example. By comparing various metrics we show that datasets that
obtain higher ROUGE or METEOR scores do not necessarily yield higher
classification accuracies. We also provide analysis of what are the
characteristics of a good dataset including the distinguishability of the
generated datasets from the original one.
| no_new_dataset | 0.701279 |
1608.05339 | Wei-Tse Sun | Wei-Tse Sun, Ting-Hsuan Chao, Yin-Hsi Kuo, Winston H. Hsu | Photo Filter Recommendation by Category-Aware Aesthetic Learning | 11 pages, 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, social media has become a popular platform for the public to share
photos. To make photos more visually appealing, users usually apply filters on
their photos without domain knowledge. However, due to the growing number of
filter types, it becomes a major issue for users to choose the best filter
type. For this purpose, filter recommendation for photo aesthetics plays an
important role in image quality ranking problems. In recent years, several works
have declared that Convolutional Neural Networks (CNNs) outperform traditional
methods in image aesthetic categorization, which classifies images into high or
low quality. Most of them do not consider the effect on filtered images; hence,
we propose a novel image aesthetic learning for filter recommendation. Instead
of binarizing image quality, we adjust the state-of-the-art CNN architectures
and design a pairwise loss function to learn the embedded aesthetic responses
in hidden layers for filtered images. Based on our pilot study, we observe
image categories (e.g., portrait, landscape, food) will affect user preference
on filter selection. We further integrate category classification into our
proposed aesthetic-oriented models. To the best of our knowledge, there is no
public dataset for aesthetic judgment with filtered images. We create a new
dataset called Filter Aesthetic Comparison Dataset (FACD). It contains 28,160
filtered images based on the AVA dataset and 42,240 reliable image pairs with
aesthetic annotations using Amazon Mechanical Turk. It is the first dataset
containing filtered images and user preference labels. We conduct experiments
on the collected FACD for filter recommendation, and the results show that our
proposed category-aware aesthetic learning outperforms aesthetic classification
methods (e.g., 12% relative improvement).
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 17:22:54 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2017 05:07:06 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Sun",
"Wei-Tse",
""
],
[
"Chao",
"Ting-Hsuan",
""
],
[
"Kuo",
"Yin-Hsi",
""
],
[
"Hsu",
"Winston H.",
""
]
] | TITLE: Photo Filter Recommendation by Category-Aware Aesthetic Learning
ABSTRACT: Nowadays, social media has become a popular platform for the public to share
photos. To make photos more visually appealing, users usually apply filters on
their photos without domain knowledge. However, due to the growing number of
filter types, it becomes a major issue for users to choose the best filter
type. For this purpose, filter recommendation for photo aesthetics plays an
important role in image quality ranking problems. In recent years, several works
have declared that Convolutional Neural Networks (CNNs) outperform traditional
methods in image aesthetic categorization, which classifies images into high or
low quality. Most of them do not consider the effect on filtered images; hence,
we propose a novel image aesthetic learning for filter recommendation. Instead
of binarizing image quality, we adjust the state-of-the-art CNN architectures
and design a pairwise loss function to learn the embedded aesthetic responses
in hidden layers for filtered images. Based on our pilot study, we observe
image categories (e.g., portrait, landscape, food) will affect user preference
on filter selection. We further integrate category classification into our
proposed aesthetic-oriented models. To the best of our knowledge, there is no
public dataset for aesthetic judgment with filtered images. We create a new
dataset called Filter Aesthetic Comparison Dataset (FACD). It contains 28,160
filtered images based on the AVA dataset and 42,240 reliable image pairs with
aesthetic annotations using Amazon Mechanical Turk. It is the first dataset
containing filtered images and user preference labels. We conduct experiments
on the collected FACD for filter recommendation, and the results show that our
proposed category-aware aesthetic learning outperforms aesthetic classification
methods (e.g., 12% relative improvement).
| new_dataset | 0.965381 |
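Illustrative sketch for the record above: a pairwise loss over aesthetic scores for two filtered versions of the same photo can be realized as a margin ranking loss on the outputs of a shared CNN. This is only a generic sketch of the pairwise idea; the paper's exact loss, margin and category-aware branch are not reproduced, and the names below are assumptions.

```python
import torch
import torch.nn as nn

class PairwiseAestheticLoss(nn.Module):
    """Margin ranking loss on CNN aesthetic scores for a pair of filtered
    versions of the same photo: the preferred image should score higher by
    at least `margin` (illustrative value)."""

    def __init__(self, margin=0.5):
        super().__init__()
        self.rank = nn.MarginRankingLoss(margin=margin)

    def forward(self, score_preferred, score_other):
        target = torch.ones_like(score_preferred)  # preferred must rank higher
        return self.rank(score_preferred, score_other, target)

# usage sketch: both images go through the same CNN backbone
# loss = PairwiseAestheticLoss()(cnn(img_a), cnn(img_b))  # img_a is preferred
```

Training on pairs rather than binary high/low labels is what lets the network order filtered variants of one photo, which is exactly the recommendation setting described above.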
1610.04325 | Jin-Hwa Kim | Jin-Hwa Kim, Kyoung-Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha,
Byoung-Tak Zhang | Hadamard Product for Low-rank Bilinear Pooling | 13 pages, 1 figure, & appendix. ICLR 2017 accepted | null | null | null | cs.CV cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bilinear models provide rich representations compared with linear models.
They have been applied in various visual tasks, such as object recognition,
segmentation, and visual question-answering, to get state-of-the-art
performances taking advantage of the expanded representations. However,
bilinear representations tend to be high-dimensional, limiting the
applicability to computationally complex tasks. We propose low-rank bilinear
pooling using Hadamard product for an efficient attention mechanism of
multimodal learning. We show that our model outperforms compact bilinear
pooling in visual question-answering tasks with the state-of-the-art results on
the VQA dataset, having a better parsimonious property.
| [
{
"version": "v1",
"created": "Fri, 14 Oct 2016 04:29:52 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Nov 2016 05:31:27 GMT"
},
{
"version": "v3",
"created": "Tue, 14 Feb 2017 05:22:01 GMT"
},
{
"version": "v4",
"created": "Sun, 26 Mar 2017 16:22:47 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Kim",
"Jin-Hwa",
""
],
[
"On",
"Kyoung-Woon",
""
],
[
"Lim",
"Woosang",
""
],
[
"Kim",
"Jeonghee",
""
],
[
"Ha",
"Jung-Woo",
""
],
[
"Zhang",
"Byoung-Tak",
""
]
] | TITLE: Hadamard Product for Low-rank Bilinear Pooling
ABSTRACT: Bilinear models provide rich representations compared with linear models.
They have been applied in various visual tasks, such as object recognition,
segmentation, and visual question-answering, to get state-of-the-art
performances taking advantage of the expanded representations. However,
bilinear representations tend to be high-dimensional, limiting the
applicability to computationally complex tasks. We propose low-rank bilinear
pooling using Hadamard product for an efficient attention mechanism of
multimodal learning. We show that our model outperforms compact bilinear
pooling in visual question-answering tasks with the state-of-the-art results on
the VQA dataset, having a better parsimonious property.
| no_new_dataset | 0.948822 |
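Illustrative sketch for the record above: low-rank bilinear pooling combines the two modality vectors through an elementwise (Hadamard) product of their low-rank projections, roughly z = P^T (tanh(Uq) * tanh(Vv)). Dimensions and the tanh nonlinearity are illustrative choices; the full model adds the attention mechanism and answer classifier on top.

```python
import torch
import torch.nn as nn

class LowRankBilinearPooling(nn.Module):
    """Low-rank bilinear pooling of a question vector q and a visual vector v
    via the Hadamard product of their low-rank projections."""

    def __init__(self, q_dim, v_dim, rank_dim, out_dim):
        super().__init__()
        self.U = nn.Linear(q_dim, rank_dim)   # projects q into the joint rank space
        self.V = nn.Linear(v_dim, rank_dim)   # projects v into the joint rank space
        self.P = nn.Linear(rank_dim, out_dim)

    def forward(self, q, v):
        joint = torch.tanh(self.U(q)) * torch.tanh(self.V(v))  # Hadamard product
        return self.P(joint)
```

The design choice worth noting is that the Hadamard product replaces an explicit outer product, so the joint representation stays at the rank dimension instead of q_dim * v_dim, which is what keeps the pooling compact.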
1610.07718 | Jiecao Chen | Jiecao Chen and Qin Zhang | Bias-Aware Sketches | 16 pages | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linear sketching algorithms have been widely used for processing large-scale
distributed and streaming datasets. Their popularity is largely due to the fact
that linear sketches can be naturally composed in the distributed model and be
efficiently updated in the streaming model. The errors of linear sketches are
typically expressed in terms of the sum of coordinates of the input vector
excluding those largest ones, or, the mass on the tail of the vector. Thus, the
precondition for these algorithms to perform well is that the mass on the tail
is small, which is, however, not always the case -- in many real-world datasets
the coordinates of the input vector have a {\em bias}, which will generate a
large mass on the tail.
In this paper we propose linear sketches that are {\em bias-aware}. We
rigorously prove that they achieve strictly better error guarantees than the
corresponding existing sketches, and demonstrate their practicality and
superiority via an extensive experimental evaluation on both real and synthetic
datasets.
| [
{
"version": "v1",
"created": "Tue, 25 Oct 2016 03:51:39 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2017 19:17:54 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Chen",
"Jiecao",
""
],
[
"Zhang",
"Qin",
""
]
] | TITLE: Bias-Aware Sketches
ABSTRACT: Linear sketching algorithms have been widely used for processing large-scale
distributed and streaming datasets. Their popularity is largely due to the fact
that linear sketches can be naturally composed in the distributed model and be
efficiently updated in the streaming model. The errors of linear sketches are
typically expressed in terms of the sum of coordinates of the input vector
excluding those largest ones, or, the mass on the tail of the vector. Thus, the
precondition for these algorithms to perform well is that the mass on the tail
is small, which is, however, not always the case -- in many real-world datasets
the coordinates of the input vector have a {\em bias}, which will generate a
large mass on the tail.
In this paper we propose linear sketches that are {\em bias-aware}. We
rigorously prove that they achieve strictly better error guarantees than the
corresponding existing sketches, and demonstrate their practicality and
superiority via an extensive experimental evaluation on both real and synthetic
datasets.
| no_new_dataset | 0.949248 |
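Illustrative sketch for the record above: one simple way to make a linear sketch "bias-aware" is to sketch residuals after removing a global bias estimate and add the bias back at query time, which shrinks the tail mass when coordinates share a common offset. This is only an assumed illustration of the idea using a plain Count-Sketch; the paper's constructions and error guarantees are more refined, and the class below assumes each coordinate's value is inserted exactly once.

```python
import numpy as np

class BiasAwareCountSketch:
    """Count-Sketch over de-biased values: sketch x_i - bias, report estimate
    + bias. With a shared bias removed, the residual tail is smaller, so the
    usual Count-Sketch error bound tightens (illustrative construction)."""

    def __init__(self, n_rows=5, n_cols=512, bias=0.0, seed=0):
        rng = np.random.default_rng(seed)
        self.tables = np.zeros((n_rows, n_cols))
        self.bias = bias
        self.n_cols = n_cols
        # Hash/sign functions realized as fixed random seeds per row;
        # per-process Python hashing of integer tuples is enough for a sketch.
        self.bucket_seeds = rng.integers(1, 2**31 - 1, size=n_rows)
        self.sign_seeds = rng.integers(1, 2**31 - 1, size=n_rows)

    def _bucket(self, row, key):
        return hash((int(self.bucket_seeds[row]), key)) % self.n_cols

    def _sign(self, row, key):
        return 1 if hash((int(self.sign_seeds[row]), key)) % 2 else -1

    def update(self, key, value):
        residual = value - self.bias  # sketch only the de-biased part
        for r in range(len(self.tables)):
            self.tables[r, self._bucket(r, key)] += self._sign(r, key) * residual

    def estimate(self, key):
        ests = [self._sign(r, key) * self.tables[r, self._bucket(r, key)]
                for r in range(len(self.tables))]
        return float(np.median(ests)) + self.bias
```

The bias itself can be estimated cheaply (for instance from a small sample of coordinates) and is remembered as a single scalar, so the sketch remains linear and mergeable.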
1611.02364 | Guillaume-Alexandre Bilodeau | Yuebin Yang, Guillaume-Alexandre Bilodeau | Multiple Object Tracking with Kernelized Correlation Filters in Urban
Mixed Traffic | Accepted for CRV 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, the Kernelized Correlation Filters tracker (KCF) achieved
competitive performance and robustness in visual object tracking. On the other
hand, visual trackers are not typically used in multiple object tracking. In
this paper, we investigate how a robust visual tracker like KCF can improve
multiple object tracking. Since KCF is a fast tracker, many can be used in
parallel and still result in fast tracking. We build a multiple object tracking
system based on KCF and background subtraction. Background subtraction is
applied to extract moving objects and get their scale and size in combination
with KCF outputs, while KCF is used for data association and to handle
fragmentation and occlusion problems. As a result, KCF and background
subtraction help each other to take tracking decision at every frame. Sometimes
KCF outputs are the most trustworthy (e.g. during occlusion), while in some
other case, it is the background subtraction outputs. To validate the
effectiveness of our system, the algorithm is demonstrated on four urban video
recordings from a standard dataset. Results show that our method is competitive
with state-of-the-art trackers even if we use a much simpler data association
step.
| [
{
"version": "v1",
"created": "Tue, 8 Nov 2016 02:20:09 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2017 14:36:59 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Yang",
"Yuebin",
""
],
[
"Bilodeau",
"Guillaume-Alexandre",
""
]
] | TITLE: Multiple Object Tracking with Kernelized Correlation Filters in Urban
Mixed Traffic
ABSTRACT: Recently, the Kernelized Correlation Filters tracker (KCF) achieved
competitive performance and robustness in visual object tracking. On the other
hand, visual trackers are not typically used in multiple object tracking. In
this paper, we investigate how a robust visual tracker like KCF can improve
multiple object tracking. Since KCF is a fast tracker, many can be used in
parallel and still result in fast tracking. We build a multiple object tracking
system based on KCF and background subtraction. Background subtraction is
applied to extract moving objects and get their scale and size in combination
with KCF outputs, while KCF is used for data association and to handle
fragmentation and occlusion problems. As a result, KCF and background
subtraction help each other to make tracking decisions at every frame. Sometimes
KCF outputs are the most trustworthy (e.g. during occlusion), while in some
other cases, it is the background subtraction outputs. To validate the
effectiveness of our system, the algorithm is demonstrated on four urban video
recordings from a standard dataset. Results show that our method is competitive
with state-of-the-art trackers even if we use a much simpler data association
step.
| no_new_dataset | 0.946745 |
1612.00089 | Luka \v{C}ehovin | Luka \v{C}ehovin Zajc, Alan Luke\v{z}i\v{c}, Ale\v{s} Leonardis, Matej
Kristan | Beyond standard benchmarks: Parameterizing performance evaluation in
visual object tracking | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object-to-camera motion produces a variety of apparent motion patterns that
significantly affect performance of short-term visual trackers. Despite being
crucial for designing robust trackers, their influence is poorly explored in
standard benchmarks due to weakly defined, biased and overlapping attribute
annotations. In this paper we propose to go beyond pre-recorded benchmarks with
post-hoc annotations by presenting an approach that utilizes omnidirectional
videos to generate realistic, consistently annotated, short-term tracking
scenarios with exactly parameterized motion patterns. We have created an
evaluation system, constructed a fully annotated dataset of omnidirectional
videos and the generators for typical motion patterns. We provide an in-depth
analysis of major tracking paradigms which is complementary to the standard
benchmarks and confirms the expressiveness of our evaluation approach.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 00:26:03 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2017 18:13:40 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Zajc",
"Luka Čehovin",
""
],
[
"Lukežič",
"Alan",
""
],
[
"Leonardis",
"Aleš",
""
],
[
"Kristan",
"Matej",
""
]
] | TITLE: Beyond standard benchmarks: Parameterizing performance evaluation in
visual object tracking
ABSTRACT: Object-to-camera motion produces a variety of apparent motion patterns that
significantly affect performance of short-term visual trackers. Despite being
crucial for designing robust trackers, their influence is poorly explored in
standard benchmarks due to weakly defined, biased and overlapping attribute
annotations. In this paper we propose to go beyond pre-recorded benchmarks with
post-hoc annotations by presenting an approach that utilizes omnidirectional
videos to generate realistic, consistently annotated, short-term tracking
scenarios with exactly parameterized motion patterns. We have created an
evaluation system, constructed a fully annotated dataset of omnidirectional
videos and the generators for typical motion patterns. We provide an in-depth
analysis of major tracking paradigms which is complementary to the standard
benchmarks and confirms the expressiveness of our evaluation approach.
| new_dataset | 0.949763 |
1702.08234 | Hamza Harkous | Hamza Harkous and Karl Aberer | "If You Can't Beat them, Join them": A Usability Approach to
Interdependent Privacy in Cloud Apps | Authors' extended version of the paper published at CODASPY 2017 | null | 10.1145/3029806.3029837 | null | cs.CR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud storage services, like Dropbox and Google Drive, have growing
ecosystems of 3rd party apps that are designed to work with users' cloud files.
Such apps often request full access to users' files, including files shared
with collaborators. Hence, whenever a user grants access to a new vendor, she
is inflicting a privacy loss on herself and on her collaborators too. Based on
analyzing a real dataset of 183 Google Drive users and 131 third party apps, we
discover that collaborators inflict a privacy loss which is at least 39% higher
than what users themselves cause. We take a step toward minimizing this loss by
introducing the concept of History-based decisions. Simply put, users are
informed at decision time about the vendors which have been previously granted
access to their data. Thus, they can reduce their privacy loss by not
installing apps from new vendors whenever possible. Next, we realize this
concept by introducing a new privacy indicator, which can be integrated within
the cloud apps' authorization interface. Via a web experiment with 141
participants recruited from CrowdFlower, we show that our privacy indicator can
significantly increase the user's likelihood of choosing the app that minimizes
her privacy loss. Finally, we explore the network effect of History-based
decisions via a simulation on top of large collaboration networks. We
demonstrate that adopting such a decision-making process is capable of reducing
the growth of users' privacy loss by 70% in a Google Drive-based network and by
40% in an author collaboration network. This is despite the fact that we
neither assume that users cooperate nor that they exhibit altruistic behavior.
To our knowledge, our work is the first to provide quantifiable evidence of the
privacy risk that collaborators pose in cloud apps. We are also the first to
mitigate this problem via a usable privacy approach.
| [
{
"version": "v1",
"created": "Mon, 27 Feb 2017 11:15:21 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2017 19:29:03 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Harkous",
"Hamza",
""
],
[
"Aberer",
"Karl",
""
]
] | TITLE: "If You Can't Beat them, Join them": A Usability Approach to
Interdependent Privacy in Cloud Apps
ABSTRACT: Cloud storage services, like Dropbox and Google Drive, have growing
ecosystems of 3rd party apps that are designed to work with users' cloud files.
Such apps often request full access to users' files, including files shared
with collaborators. Hence, whenever a user grants access to a new vendor, she
is inflicting a privacy loss on herself and on her collaborators too. Based on
analyzing a real dataset of 183 Google Drive users and 131 third party apps, we
discover that collaborators inflict a privacy loss which is at least 39% higher
than what users themselves cause. We take a step toward minimizing this loss by
introducing the concept of History-based decisions. Simply put, users are
informed at decision time about the vendors which have been previously granted
access to their data. Thus, they can reduce their privacy loss by not
installing apps from new vendors whenever possible. Next, we realize this
concept by introducing a new privacy indicator, which can be integrated within
the cloud apps' authorization interface. Via a web experiment with 141
participants recruited from CrowdFlower, we show that our privacy indicator can
significantly increase the user's likelihood of choosing the app that minimizes
her privacy loss. Finally, we explore the network effect of History-based
decisions via a simulation on top of large collaboration networks. We
demonstrate that adopting such a decision-making process is capable of reducing
the growth of users' privacy loss by 70% in a Google Drive-based network and by
40% in an author collaboration network. This is despite the fact that we
neither assume that users cooperate nor that they exhibit altruistic behavior.
To our knowledge, our work is the first to provide quantifiable evidence of the
privacy risk that collaborators pose in cloud apps. We are also the first to
mitigate this problem via a usable privacy approach.
| no_new_dataset | 0.939025 |
1702.08652 | Pichao Wang | Pichao Wang and Wanqing Li and Zhimin Gao and Yuyao Zhang and Chang
Tang and Philip Ogunbona | Scene Flow to Action Map: A New Representation for RGB-D based Action
Recognition with Convolutional Neural Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene flow describes the motion of 3D objects in real world and potentially
could be the basis of a good feature for 3D action recognition. However, its
use for action recognition, especially in the context of convolutional neural
networks (ConvNets), has not been previously studied. In this paper, we propose
the extraction and use of scene flow for action recognition from RGB-D data.
Previous works have considered the depth and RGB modalities as separate
channels and extract features for later fusion. We take a different approach
and consider the modalities as one entity, thus allowing feature extraction for
action recognition at the beginning. Two key questions about the use of scene
flow for action recognition are addressed: how to organize the scene flow
vectors and how to represent the long term dynamics of videos based on scene
flow. In order to calculate the scene flow correctly on the available datasets,
we propose an effective self-calibration method to align the RGB and depth data
spatially without knowledge of the camera parameters. Based on the scene flow
vectors, we propose a new representation, namely, Scene Flow to Action Map
(SFAM), that describes several long term spatio-temporal dynamics for action
recognition. We adopt a channel transform kernel to transform the scene flow
vectors to an optimal color space analogous to RGB. This transformation takes
better advantage of the trained ConvNets models over ImageNet. Experimental
results indicate that this new representation can surpass the performance of
state-of-the-art methods on two large public datasets.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2017 05:39:25 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Mar 2017 07:33:51 GMT"
},
{
"version": "v3",
"created": "Mon, 27 Mar 2017 00:52:21 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Wang",
"Pichao",
""
],
[
"Li",
"Wanqing",
""
],
[
"Gao",
"Zhimin",
""
],
[
"Zhang",
"Yuyao",
""
],
[
"Tang",
"Chang",
""
],
[
"Ogunbona",
"Philip",
""
]
] | TITLE: Scene Flow to Action Map: A New Representation for RGB-D based Action
Recognition with Convolutional Neural Networks
ABSTRACT: Scene flow describes the motion of 3D objects in real world and potentially
could be the basis of a good feature for 3D action recognition. However, its
use for action recognition, especially in the context of convolutional neural
networks (ConvNets), has not been previously studied. In this paper, we propose
the extraction and use of scene flow for action recognition from RGB-D data.
Previous works have considered the depth and RGB modalities as separate
channels and extract features for later fusion. We take a different approach
and consider the modalities as one entity, thus allowing feature extraction for
action recognition at the beginning. Two key questions about the use of scene
flow for action recognition are addressed: how to organize the scene flow
vectors and how to represent the long term dynamics of videos based on scene
flow. In order to calculate the scene flow correctly on the available datasets,
we propose an effective self-calibration method to align the RGB and depth data
spatially without knowledge of the camera parameters. Based on the scene flow
vectors, we propose a new representation, namely, Scene Flow to Action Map
(SFAM), that describes several long term spatio-temporal dynamics for action
recognition. We adopt a channel transform kernel to transform the scene flow
vectors to an optimal color space analogous to RGB. This transformation takes
better advantage of the trained ConvNets models over ImageNet. Experimental
results indicate that this new representation can surpass the performance of
state-of-the-art methods on two large public datasets.
| no_new_dataset | 0.949153 |
1703.03107 | Onur Varol | Onur Varol, Emilio Ferrara, Clayton A. Davis, Filippo Menczer,
Alessandro Flammini | Online Human-Bot Interactions: Detection, Estimation, and
Characterization | Accepted paper for ICWSM'17, 10 pages, 8 figures, 1 table | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Increasing evidence suggests that a growing amount of social media content is
generated by autonomous entities known as social bots. In this work we present
a framework to detect such entities on Twitter. We leverage more than a
thousand features extracted from public data and meta-data about users:
friends, tweet content and sentiment, network patterns, and activity time
series. We benchmark the classification framework by using a publicly available
dataset of Twitter bots. This training data is enriched by a manually annotated
collection of active Twitter users that include both humans and bots of varying
sophistication. Our models yield high accuracy and agreement with each other
and can detect bots of different nature. Our estimates suggest that between 9%
and 15% of active Twitter accounts are bots. Characterizing ties among
accounts, we observe that simple bots tend to interact with bots that exhibit
more human-like behaviors. Analysis of content flows reveals retweet and
mention strategies adopted by bots to interact with different target groups.
Using clustering analysis, we characterize several subclasses of accounts,
including spammers, self promoters, and accounts that post content from
connected applications.
| [
{
"version": "v1",
"created": "Thu, 9 Mar 2017 02:27:47 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Mar 2017 17:56:11 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Varol",
"Onur",
""
],
[
"Ferrara",
"Emilio",
""
],
[
"Davis",
"Clayton A.",
""
],
[
"Menczer",
"Filippo",
""
],
[
"Flammini",
"Alessandro",
""
]
] | TITLE: Online Human-Bot Interactions: Detection, Estimation, and
Characterization
ABSTRACT: Increasing evidence suggests that a growing amount of social media content is
generated by autonomous entities known as social bots. In this work we present
a framework to detect such entities on Twitter. We leverage more than a
thousand features extracted from public data and meta-data about users:
friends, tweet content and sentiment, network patterns, and activity time
series. We benchmark the classification framework by using a publicly available
dataset of Twitter bots. This training data is enriched by a manually annotated
collection of active Twitter users that include both humans and bots of varying
sophistication. Our models yield high accuracy and agreement with each other
and can detect bots of different nature. Our estimates suggest that between 9%
and 15% of active Twitter accounts are bots. Characterizing ties among
accounts, we observe that simple bots tend to interact with bots that exhibit
more human-like behaviors. Analysis of content flows reveals retweet and
mention strategies adopted by bots to interact with different target groups.
Using clustering analysis, we characterize several subclasses of accounts,
including spammers, self promoters, and accounts that post content from
connected applications.
| no_new_dataset | 0.946646 |
1703.04617 | Junbei Zhang | Junbei Zhang, Xiaodan Zhu, Qian Chen, Lirong Dai, Si Wei, and Hui
Jiang | Exploring Question Understanding and Adaptation in Neural-Network-Based
Question Answering | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The last several years have seen intensive interest in exploring
neural-network-based models for machine comprehension (MC) and question
answering (QA). In this paper, we approach the problems by closely modelling
questions in a neural network framework. We first introduce syntactic
information to help encode questions. We then view and model different types of
questions and the information shared among them as an adaptation task and
propose adaptation models for them. On the Stanford Question Answering Dataset
(SQuAD), we show that these approaches can help attain better results over a
competitive baseline.
| [
{
"version": "v1",
"created": "Tue, 14 Mar 2017 17:43:25 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2017 16:17:03 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Zhang",
"Junbei",
""
],
[
"Zhu",
"Xiaodan",
""
],
[
"Chen",
"Qian",
""
],
[
"Dai",
"Lirong",
""
],
[
"Wei",
"Si",
""
],
[
"Jiang",
"Hui",
""
]
] | TITLE: Exploring Question Understanding and Adaptation in Neural-Network-Based
Question Answering
ABSTRACT: The last several years have seen intensive interest in exploring
neural-network-based models for machine comprehension (MC) and question
answering (QA). In this paper, we approach the problems by closely modelling
questions in a neural network framework. We first introduce syntactic
information to help encode questions. We then view and model different types of
questions and the information shared among them as an adaptation task and
propose adaptation models for them. On the Stanford Question Answering Dataset
(SQuAD), we show that these approaches can help attain better results over a
competitive baseline.
| no_new_dataset | 0.935405 |
1703.06412 | Ayushman Dash | Ayushman Dash, John Cristian Borges Gamboa, Sheraz Ahmed, Marcus
Liwicki, Muhammad Zeshan Afzal | TAC-GAN - Text Conditioned Auxiliary Classifier Generative Adversarial
Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we present the Text Conditioned Auxiliary Classifier Generative
Adversarial Network (TAC-GAN), a text-to-image Generative Adversarial Network
(GAN) for synthesizing images from their text descriptions. Former approaches
have tried to condition the generative process on the textual data; but allying
it to the usage of class information, known to diversify the generated samples
and improve their structural coherence, has not been explored. We trained the
presented TAC-GAN model on the Oxford-102 dataset of flowers, and evaluated the
discriminability of the generated images with Inception-Score, as well as their
diversity using the Multi-Scale Structural Similarity Index (MS-SSIM). Our
approach outperforms the state-of-the-art models, i.e., its inception score is
3.45, corresponding to a relative increase of 7.8% compared to the recently
introduced StackGan. A comparison of the mean MS-SSIM scores of the training
and generated samples per class shows that our approach is able to generate
highly diverse images with an average MS-SSIM of 0.14 over all generated
classes.
| [
{
"version": "v1",
"created": "Sun, 19 Mar 2017 10:07:58 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Mar 2017 11:29:21 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Dash",
"Ayushman",
""
],
[
"Gamboa",
"John Cristian Borges",
""
],
[
"Ahmed",
"Sheraz",
""
],
[
"Liwicki",
"Marcus",
""
],
[
"Afzal",
"Muhammad Zeshan",
""
]
] | TITLE: TAC-GAN - Text Conditioned Auxiliary Classifier Generative Adversarial
Network
ABSTRACT: In this work, we present the Text Conditioned Auxiliary Classifier Generative
Adversarial Network (TAC-GAN), a text-to-image Generative Adversarial Network
(GAN) for synthesizing images from their text descriptions. Former approaches
have tried to condition the generative process on the textual data; but allying
it to the usage of class information, known to diversify the generated samples
and improve their structural coherence, has not been explored. We trained the
presented TAC-GAN model on the Oxford-102 dataset of flowers, and evaluated the
discriminability of the generated images with Inception-Score, as well as their
diversity using the Multi-Scale Structural Similarity Index (MS-SSIM). Our
approach outperforms the state-of-the-art models, i.e., its inception score is
3.45, corresponding to a relative increase of 7.8% compared to the recently
introduced StackGan. A comparison of the mean MS-SSIM scores of the training
and generated samples per class shows that our approach is able to generate
highly diverse images with an average MS-SSIM of 0.14 over all generated
classes.
| no_new_dataset | 0.948965 |
1703.08580 | Vittal Premachandran | Daniil Pakhomov and Vittal Premachandran and Max Allan and Mahdi
Azizian and Nassir Navab | Deep Residual Learning for Instrument Segmentation in Robotic Surgery | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detection, tracking, and pose estimation of surgical instruments are crucial
tasks for computer assistance during minimally invasive robotic surgery. In the
majority of cases, the first step is the automatic segmentation of surgical
tools. Prior work has focused on binary segmentation, where the objective is to
label every pixel in an image as tool or background. We improve upon previous
work in two major ways. First, we leverage recent techniques such as deep
residual learning and dilated convolutions to advance binary-segmentation
performance. Second, we extend the approach to multi-class segmentation, which
lets us segment different parts of the tool, in addition to background. We
demonstrate the performance of this method on the MICCAI Endoscopic Vision
Challenge Robotic Instruments dataset.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 19:43:20 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Pakhomov",
"Daniil",
""
],
[
"Premachandran",
"Vittal",
""
],
[
"Allan",
"Max",
""
],
[
"Azizian",
"Mahdi",
""
],
[
"Navab",
"Nassir",
""
]
] | TITLE: Deep Residual Learning for Instrument Segmentation in Robotic Surgery
ABSTRACT: Detection, tracking, and pose estimation of surgical instruments are crucial
tasks for computer assistance during minimally invasive robotic surgery. In the
majority of cases, the first step is the automatic segmentation of surgical
tools. Prior work has focused on binary segmentation, where the objective is to
label every pixel in an image as tool or background. We improve upon previous
work in two major ways. First, we leverage recent techniques such as deep
residual learning and dilated convolutions to advance binary-segmentation
performance. Second, we extend the approach to multi-class segmentation, which
lets us segment different parts of the tool, in addition to background. We
demonstrate the performance of this method on the MICCAI Endoscopic Vision
Challenge Robotic Instruments dataset.
| no_new_dataset | 0.948917 |
1703.08617 | Chi Nhan Duong | Chi Nhan Duong, Kha Gia Quach, Khoa Luu, T. Hoang Ngan le, Marios
Savvides | Temporal Non-Volume Preserving Approach to Facial Age-Progression and
Age-Invariant Face Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling the long-term facial aging process is extremely challenging due to
the presence of large and non-linear variations during the face development
stages. In order to efficiently address the problem, this work first decomposes
the aging process into multiple short-term stages. Then, a novel generative
probabilistic model, named Temporal Non-Volume Preserving (TNVP)
transformation, is presented to model the facial aging process at each stage.
Unlike Generative Adversarial Networks (GANs), which require an empirical
balance threshold, and Restricted Boltzmann Machines (RBM), an intractable
model, our proposed TNVP approach guarantees a tractable density function,
exact inference and evaluation for embedding the feature transformations
between faces in consecutive stages. Our model shows its advantages not only in
capturing the non-linear age related variance in each stage but also producing
a smooth synthesis in age progression across faces. Our approach can model any
face in the wild provided with only four basic landmark points. Moreover, the
structure can be transformed into a deep convolutional network while keeping
the advantages of probabilistic models with tractable log-likelihood density
estimation. Our method is evaluated in both terms of synthesizing
age-progressed faces and cross-age face verification and consistently shows the
state-of-the-art results in various face aging databases, i.e. FG-NET, MORPH,
AginG Faces in the Wild (AGFW), and Cross-Age Celebrity Dataset (CACD). A
large-scale face verification on Megaface challenge 1 is also performed to
further show the advantages of our proposed approach.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 22:43:05 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Duong",
"Chi Nhan",
""
],
[
"Quach",
"Kha Gia",
""
],
[
"Luu",
"Khoa",
""
],
[
"le",
"T. Hoang Ngan",
""
],
[
"Savvides",
"Marios",
""
]
] | TITLE: Temporal Non-Volume Preserving Approach to Facial Age-Progression and
Age-Invariant Face Recognition
ABSTRACT: Modeling the long-term facial aging process is extremely challenging due to
the presence of large and non-linear variations during the face development
stages. In order to efficiently address the problem, this work first decomposes
the aging process into multiple short-term stages. Then, a novel generative
probabilistic model, named Temporal Non-Volume Preserving (TNVP)
transformation, is presented to model the facial aging process at each stage.
Unlike Generative Adversarial Networks (GANs), which require an empirical
balance threshold, and Restricted Boltzmann Machines (RBM), an intractable
model, our proposed TNVP approach guarantees a tractable density function,
exact inference and evaluation for embedding the feature transformations
between faces in consecutive stages. Our model shows its advantages not only in
capturing the non-linear age related variance in each stage but also producing
a smooth synthesis in age progression across faces. Our approach can model any
face in the wild provided with only four basic landmark points. Moreover, the
structure can be transformed into a deep convolutional network while keeping
the advantages of probabilistic models with tractable log-likelihood density
estimation. Our method is evaluated in both terms of synthesizing
age-progressed faces and cross-age face verification and consistently shows the
state-of-the-art results in various face aging databases, i.e. FG-NET, MORPH,
AginG Faces in the Wild (AGFW), and Cross-Age Celebrity Dataset (CACD). A
large-scale face verification on Megaface challenge 1 is also performed to
further show the advantages of our proposed approach.
| no_new_dataset | 0.947575 |
1703.08668 | Dong Wen | Dong Wen, Lu Qin, Xuemin Lin, Ying Zhang, Lijun Chang | Enumerating k-Vertex Connected Components in Large Graphs | 16 pages | null | null | null | cs.DB cs.SI | http://creativecommons.org/licenses/by/4.0/ | Cohesive subgraph detection is an important graph problem that is widely
applied in many application domains, such as social community detection,
network visualization, and network topology analysis. Most of existing cohesive
subgraph metrics can guarantee good structural properties but may cause the
free-rider effect. Here, by free-rider effect, we mean that some irrelevant
subgraphs are combined as one subgraph if they only share a small number of
vertices and edges. In this paper, we study k-vertex connected component
(k-VCC) which can effectively eliminate the free-rider effect but less studied
in the literature. A k-VCC is a connected subgraph in which the removal of any
k-1 vertices will not disconnect the subgraph. In addition to eliminating the
free-rider effect, k-VCC also has other advantages such as bounded diameter,
high cohesiveness, bounded graph overlapping, and bounded subgraph number. We
propose a polynomial time algorithm to enumerate all k-VCCs of a graph by
recursively partitioning the graph into overlapped subgraphs. We find that the
key to improving the algorithm is reducing the number of local connectivity
testings. Therefore, we propose two effective optimization strategies, namely
neighbor sweep and group sweep, to largely reduce the number of local
connectivity testings. We conduct extensive performance studies using seven
large real datasets to demonstrate the effectiveness of this model as well as
the efficiency of our proposed algorithms.
| [
{
"version": "v1",
"created": "Sat, 25 Mar 2017 09:36:47 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Wen",
"Dong",
""
],
[
"Qin",
"Lu",
""
],
[
"Lin",
"Xuemin",
""
],
[
"Zhang",
"Ying",
""
],
[
"Chang",
"Lijun",
""
]
] | TITLE: Enumerating k-Vertex Connected Components in Large Graphs
ABSTRACT: Cohesive subgraph detection is an important graph problem that is widely
applied in many application domains, such as social community detection,
network visualization, and network topology analysis. Most of existing cohesive
subgraph metrics can guarantee good structural properties but may cause the
free-rider effect. Here, by free-rider effect, we mean that some irrelevant
subgraphs are combined as one subgraph if they only share a small number of
vertices and edges. In this paper, we study k-vertex connected component
(k-VCC) which can effectively eliminate the free-rider effect but less studied
in the literature. A k-VCC is a connected subgraph in which the removal of any
k-1 vertices will not disconnect the subgraph. In addition to eliminating the
free-rider effect, k-VCC also has other advantages such as bounded diameter,
high cohesiveness, bounded graph overlapping, and bounded subgraph number. We
propose a polynomial time algorithm to enumerate all k-VCCs of a graph by
recursively partitioning the graph into overlapped subgraphs. We find that the
key to improving the algorithm is reducing the number of local connectivity
testings. Therefore, we propose two effective optimization strategies, namely
neighbor sweep and group sweep, to largely reduce the number of local
connectivity testings. We conduct extensive performance studies using seven
large real datasets to demonstrate the effectiveness of this model as well as
the efficiency of our proposed algorithms.
| no_new_dataset | 0.943191 |
1703.08701 | Albert Gatt | Claudia Borg and Albert Gatt | Morphological Analysis for the Maltese Language: The Challenges of a
Hybrid System | 11 pages, Proceedings of the 3rd Arabic Natural Language Processing
Workshop (WANLP'17) | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maltese is a morphologically rich language with a hybrid morphological system
which features both concatenative and non-concatenative processes. This paper
analyses the impact of this hybridity on the performance of machine learning
techniques for morphological labelling and clustering. In particular, we
analyse a dataset of morphologically related word clusters to evaluate the
difference in results for concatenative and non-concatenative clusters. We also
describe research carried out in morphological labelling, with a particular
focus on the verb category. Two evaluations were carried out, one using an
unseen dataset, and another one using a gold standard dataset which was
manually labelled. The gold standard dataset was split into concatenative and
non-concatenative to analyse the difference in results between the two
morphological systems.
| [
{
"version": "v1",
"created": "Sat, 25 Mar 2017 14:56:27 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Borg",
"Claudia",
""
],
[
"Gatt",
"Albert",
""
]
] | TITLE: Morphological Analysis for the Maltese Language: The Challenges of a
Hybrid System
ABSTRACT: Maltese is a morphologically rich language with a hybrid morphological system
which features both concatenative and non-concatenative processes. This paper
analyses the impact of this hybridity on the performance of machine learning
techniques for morphological labelling and clustering. In particular, we
analyse a dataset of morphologically related word clusters to evaluate the
difference in results for concatenative and non-concatenative clusters. We also
describe research carried out in morphological labelling, with a particular
focus on the verb category. Two evaluations were carried out, one using an
unseen dataset, and another one using a gold standard dataset which was
manually labelled. The gold standard dataset was split into concatenative and
non-concatenative to analyse the difference in results between the two
morphological systems.
| no_new_dataset | 0.82226 |
1703.08762 | Sanaz Bahargam Sanaz Bahargam | Sanaz Bahargam, D\'ora Erdos, Azer Bestavros, Evimaria Terzi | Team Formation for Scheduling Educational Material in Massive Online
Classes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Whether teaching in a classroom or a Massive Online Open Course it is crucial
to present the material in a way that benefits the audience as a whole. We
identify two important tasks to solve towards this objective: (1) group students
so that they can maximally benefit from peer interaction, and (2) find an optimal
schedule of the educational material for each group. Thus, in this paper, we
solve the problem of team formation and content scheduling for education. Given
a time frame d, a set of students S with their required need to learn different
activities T and given k as the number of desired groups, we study the problem
of finding k groups of students. The goal is to teach students within time frame
d such that their potential for learning is maximized and find the best
schedule for each group. We show this problem to be NP-hard and develop a
polynomial algorithm for it. We show our algorithm to be effective both on
synthetic as well as a real data set. For our experiments, we use real data on
students' grades in a Computer Science department. As part of our contribution,
we release a semi-synthetic dataset that mimics the properties of the real
data.
| [
{
"version": "v1",
"created": "Sun, 26 Mar 2017 03:47:54 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Bahargam",
"Sanaz",
""
],
[
"Erdos",
"Dóra",
""
],
[
"Bestavros",
"Azer",
""
],
[
"Terzi",
"Evimaria",
""
]
] | TITLE: Team Formation for Scheduling Educational Material in Massive Online
Classes
ABSTRACT: Whether teaching in a classroom or a Massive Online Open Course it is crucial
to present the material in a way that benefits the audience as a whole. We
identify two important tasks to solve towards this objective: (1) group students
so that they can maximally benefit from peer interaction, and (2) find an optimal
schedule of the educational material for each group. Thus, in this paper, we
solve the problem of team formation and content scheduling for education. Given
a time frame d, a set of students S with their required need to learn different
activities T and given k as the number of desired groups, we study the problem
of finding k groups of students. The goal is to teach students within time frame
d such that their potential for learning is maximized and find the best
schedule for each group. We show this problem to be NP-hard and develop a
polynomial algorithm for it. We show our algorithm to be effective both on
synthetic as well as a real data set. For our experiments, we use real data on
students' grades in a Computer Science department. As part of our contribution,
we release a semi-synthetic dataset that mimics the properties of the real
data.
| new_dataset | 0.960731 |
1703.08764 | Chunhua Shen | Fayao Liu, Guosheng Lin, Ruizhi Qiao, Chunhua Shen | Structured Learning of Tree Potentials in CRF for Image Segmentation | 10 pages. Appearing in IEEE Transactions on Neural Networks and
Learning Systems | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new approach to image segmentation, which exploits the
advantages of both conditional random fields (CRFs) and decision trees. In the
literature, the potential functions of CRFs are mostly defined as a linear
combination of some pre-defined parametric models, and then methods like
structured support vector machines (SSVMs) are applied to learn those linear
coefficients. We instead formulate the unary and pairwise potentials as
nonparametric forests---ensembles of decision trees, and learn the ensemble
parameters and the trees in a unified optimization problem within the
large-margin framework. In this fashion, we easily achieve nonlinear learning
of potential functions on both unary and pairwise terms in CRFs. Moreover, we
learn class-wise decision trees for each object that appears in the image. Due
to the rich structure and flexibility of decision trees, our approach is
powerful in modelling complex data likelihoods and label relationships. The
resulting optimization problem is very challenging because it can have
exponentially many variables and constraints. We show that this challenging
optimization can be efficiently solved by combining a modified column
generation and cutting-planes techniques. Experimental results on both binary
(Graz-02, Weizmann horse, Oxford flower) and multi-class (MSRC-21, PASCAL VOC
2012) segmentation datasets demonstrate the power of the learned nonlinear
nonparametric potentials.
| [
{
"version": "v1",
"created": "Sun, 26 Mar 2017 04:15:10 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Liu",
"Fayao",
""
],
[
"Lin",
"Guosheng",
""
],
[
"Qiao",
"Ruizhi",
""
],
[
"Shen",
"Chunhua",
""
]
] | TITLE: Structured Learning of Tree Potentials in CRF for Image Segmentation
ABSTRACT: We propose a new approach to image segmentation, which exploits the
advantages of both conditional random fields (CRFs) and decision trees. In the
literature, the potential functions of CRFs are mostly defined as a linear
combination of some pre-defined parametric models, and then methods like
structured support vector machines (SSVMs) are applied to learn those linear
coefficients. We instead formulate the unary and pairwise potentials as
nonparametric forests---ensembles of decision trees, and learn the ensemble
parameters and the trees in a unified optimization problem within the
large-margin framework. In this fashion, we easily achieve nonlinear learning
of potential functions on both unary and pairwise terms in CRFs. Moreover, we
learn class-wise decision trees for each object that appears in the image. Due
to the rich structure and flexibility of decision trees, our approach is
powerful in modelling complex data likelihoods and label relationships. The
resulting optimization problem is very challenging because it can have
exponentially many variables and constraints. We show that this challenging
optimization can be efficiently solved by combining a modified column
generation and cutting-planes techniques. Experimental results on both binary
(Graz-02, Weizmann horse, Oxford flower) and multi-class (MSRC-21, PASCAL VOC
2012) segmentation datasets demonstrate the power of the learned nonlinear
nonparametric potentials.
| no_new_dataset | 0.950411 |
1703.08885 | Yusuke Watanabe Dr. | Yusuke Watanabe, Bhuwan Dhingra, Ruslan Salakhutdinov | Question Answering from Unstructured Text by Retrieval and Comprehension | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open domain Question Answering (QA) systems must interact with external
knowledge sources, such as web pages, to find relevant information. Information
sources like Wikipedia, however, are not well structured and difficult to
utilize in comparison with Knowledge Bases (KBs). In this work we present a
two-step approach to question answering from unstructured text, consisting of a
retrieval step and a comprehension step. For comprehension, we present an RNN
based attention model with a novel mixture mechanism for selecting answers from
either retrieved articles or a fixed vocabulary. For retrieval we introduce a
hand-crafted model and a neural model for ranking relevant articles. We achieve
state-of-the-art performance on the WikiMovies dataset, reducing the error by
40%. Our experimental results further demonstrate the importance of each of the
introduced components.
| [
{
"version": "v1",
"created": "Sun, 26 Mar 2017 23:48:06 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Watanabe",
"Yusuke",
""
],
[
"Dhingra",
"Bhuwan",
""
],
[
"Salakhutdinov",
"Ruslan",
""
]
] | TITLE: Question Answering from Unstructured Text by Retrieval and Comprehension
ABSTRACT: Open domain Question Answering (QA) systems must interact with external
knowledge sources, such as web pages, to find relevant information. Information
sources like Wikipedia, however, are not well structured and difficult to
utilize in comparison with Knowledge Bases (KBs). In this work we present a
two-step approach to question answering from unstructured text, consisting of a
retrieval step and a comprehension step. For comprehension, we present an RNN
based attention model with a novel mixture mechanism for selecting answers from
either retrieved articles or a fixed vocabulary. For retrieval we introduce a
hand-crafted model and a neural model for ranking relevant articles. We achieve
state-of-the-art performance on the WikiMovies dataset, reducing the error by
40%. Our experimental results further demonstrate the importance of each of the
introduced components.
| no_new_dataset | 0.947527 |
1703.08893 | Yunlong Yu | Yunlong Yu, Zhong Ji, Xi Li, Jichang Guo, Zhongfei Zhang, Haibin Ling,
Fei Wu | Transductive Zero-Shot Learning with a Self-training dictionary approach | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an important and challenging problem in computer vision, zero-shot
learning (ZSL) aims at automatically recognizing the instances from unseen
object classes without training data. To address this problem, ZSL is usually
carried out in the following two aspects: 1) capturing the domain distribution
connections between seen classes data and unseen classes data; and 2) modeling
the semantic interactions between the image feature space and the label
embedding space. Motivated by these observations, we propose a bidirectional
mapping based semantic relationship modeling scheme that seeks for crossmodal
knowledge transfer by simultaneously projecting the image features and label
embeddings into a common latent space. Namely, we have a bidirectional
connection relationship that takes place from the image feature space to the
latent space as well as from the label embedding space to the latent space. To
deal with the domain shift problem, we further present a transductive learning
approach that formulates the class prediction problem in an iterative refining
process, where the object classification capacity is progressively reinforced
through bootstrapping-based model updating over highly reliable instances.
Experimental results on three benchmark datasets (AwA, CUB and SUN) demonstrate
the effectiveness of the proposed approach against the state-of-the-art
approaches.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2017 01:36:38 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Yu",
"Yunlong",
""
],
[
"Ji",
"Zhong",
""
],
[
"Li",
"Xi",
""
],
[
"Guo",
"Jichang",
""
],
[
"Zhang",
"Zhongfei",
""
],
[
"Ling",
"Haibin",
""
],
[
"Wu",
"Fei",
""
]
] | TITLE: Transductive Zero-Shot Learning with a Self-training dictionary approach
ABSTRACT: As an important and challenging problem in computer vision, zero-shot
learning (ZSL) aims at automatically recognizing the instances from unseen
object classes without training data. To address this problem, ZSL is usually
carried out in the following two aspects: 1) capturing the domain distribution
connections between seen classes data and unseen classes data; and 2) modeling
the semantic interactions between the image feature space and the label
embedding space. Motivated by these observations, we propose a bidirectional
mapping based semantic relationship modeling scheme that seeks for crossmodal
knowledge transfer by simultaneously projecting the image features and label
embeddings into a common latent space. Namely, we have a bidirectional
connection relationship that takes place from the image feature space to the
latent space as well as from the label embedding space to the latent space. To
deal with the domain shift problem, we further present a transductive learning
approach that formulates the class prediction problem in an iterative refining
process, where the object classification capacity is progressively reinforced
through bootstrapping-based model updating over highly reliable instances.
Experimental results on three benchmark datasets (AwA, CUB and SUN) demonstrate
the effectiveness of the proposed approach against the state-of-the-art
approaches.
| no_new_dataset | 0.946597 |
1703.08897 | Yunlong Yu | Yunlong Yu, Zhong Ji, Jichang Guo, and Yanwei Pang | Transductive Zero-Shot Learning with Adaptive Structural Embedding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot learning (ZSL) endows the computer vision system with the
inferential capability to recognize instances of a new category that has never
seen before. Two fundamental challenges in it are visual-semantic embedding and
domain adaptation in cross-modality learning and unseen class prediction steps,
respectively. To address both challenges, this paper presents two corresponding
methods named Adaptive STructural Embedding (ASTE) and Self-PAsed Selective
Strategy (SPASS), respectively. Specifically, ASTE formulates the
visualsemantic interactions in a latent structural SVM framework to adaptively
adjust the slack variables to embody the different reliableness among training
instances. In this way, the reliable instances are imposed with small
punishments, whereas the less reliable instances are imposed with more severe
punishments. Thus, it ensures a more discriminative embedding. On the other
hand, SPASS offers a framework to alleviate the domain shift problem in ZSL,
which exploits the unseen data in an easy to hard fashion. Particularly, SPASS
borrows the idea from self-paced learning by iteratively selecting the unseen
instances from reliable to less reliable to gradually adapt the knowledge from
the seen domain to the unseen domain. Subsequently, by combining SPASS and
ASTE, we present a self-paced Transductive ASTE (TASTE) method to progressively
reinforce the classification capacity. Extensive experiments on three benchmark
datasets (i.e., AwA, CUB, and aPY) demonstrate the superiorities of ASTE and
TASTE. Furthermore, we also propose a fast training (FT) strategy to improve
the efficiency of most of existing ZSL methods. The FT strategy is surprisingly
simple and general enough, which can speed up the training time of most
existing methods by 4~300 times while holding the previous performance.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2017 01:44:41 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Yu",
"Yunlong",
""
],
[
"Ji",
"Zhong",
""
],
[
"Guo",
"Jichang",
""
],
[
"Pang",
"Yanwei",
""
]
] | TITLE: Transductive Zero-Shot Learning with Adaptive Structural Embedding
ABSTRACT: Zero-shot learning (ZSL) endows the computer vision system with the
inferential capability to recognize instances of a new category that has never
seen before. Two fundamental challenges in it are visual-semantic embedding and
domain adaptation in cross-modality learning and unseen class prediction steps,
respectively. To address both challenges, this paper presents two corresponding
methods named Adaptive STructural Embedding (ASTE) and Self-PAsed Selective
Strategy (SPASS), respectively. Specifically, ASTE formulates the
visualsemantic interactions in a latent structural SVM framework to adaptively
adjust the slack variables to embody the different reliableness among training
instances. In this way, the reliable instances are imposed with small
punishments, whereas the less reliable instances are imposed with more severe
punishments. Thus, it ensures a more discriminative embedding. On the other
hand, SPASS offers a framework to alleviate the domain shift problem in ZSL,
which exploits the unseen data in an easy to hard fashion. Particularly, SPASS
borrows the idea from self-paced learning by iteratively selecting the unseen
instances from reliable to less reliable to gradually adapt the knowledge from
the seen domain to the unseen domain. Subsequently, by combining SPASS and
ASTE, we present a self-paced Transductive ASTE (TASTE) method to progressively
reinforce the classification capacity. Extensive experiments on three benchmark
datasets (i.e., AwA, CUB, and aPY) demonstrate the superiorities of ASTE and
TASTE. Furthermore, we also propose a fast training (FT) strategy to improve
the efficiency of most of existing ZSL methods. The FT strategy is surprisingly
simple and general enough, which can speed up the training time of most
existing methods by 4~300 times while holding the previous performance.
| no_new_dataset | 0.946794 |
1703.09076 | Yunho Jeon | Yunho Jeon, Junmo Kim | Active Convolution: Learning the Shape of Convolution for Image
Classification | Accepted to appear in CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, deep learning has achieved great success in many computer
vision applications. Convolutional neural networks (CNNs) have lately emerged
as a major approach to image classification. Most research on CNNs thus far has
focused on developing architectures such as the Inception and residual
networks. The convolution layer is the core of the CNN, but few studies have
addressed the convolution unit itself. In this paper, we introduce a
convolution unit called the active convolution unit (ACU). A new convolution
has no fixed shape, because of which we can define any form of convolution. Its
shape can be learned through backpropagation during training. Our proposed unit
has a few advantages. First, the ACU is a generalization of convolution; it can
define not only all conventional convolutions, but also convolutions with
fractional pixel coordinates. We can freely change the shape of the
convolution, which provides greater freedom to form CNN structures. Second, the
shape of the convolution is learned while training and there is no need to tune
it by hand. Third, the ACU can learn better than a conventional unit, where we
obtained the improvement simply by changing the conventional convolution to an
ACU. We tested our proposed method on plain and residual networks, and the
results showed significant improvement using our method on various datasets and
architectures in comparison with the baseline.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2017 13:44:26 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Jeon",
"Yunho",
""
],
[
"Kim",
"Junmo",
""
]
] | TITLE: Active Convolution: Learning the Shape of Convolution for Image
Classification
ABSTRACT: In recent years, deep learning has achieved great success in many computer
vision applications. Convolutional neural networks (CNNs) have lately emerged
as a major approach to image classification. Most research on CNNs thus far has
focused on developing architectures such as the Inception and residual
networks. The convolution layer is the core of the CNN, but few studies have
addressed the convolution unit itself. In this paper, we introduce a
convolution unit called the active convolution unit (ACU). A new convolution
has no fixed shape, because of which we can define any form of convolution. Its
shape can be learned through backpropagation during training. Our proposed unit
has a few advantages. First, the ACU is a generalization of convolution; it can
define not only all conventional convolutions, but also convolutions with
fractional pixel coordinates. We can freely change the shape of the
convolution, which provides greater freedom to form CNN structures. Second, the
shape of the convolution is learned while training and there is no need to tune
it by hand. Third, the ACU can learn better than a conventional unit, where we
obtained the improvement simply by changing the conventional convolution to an
ACU. We tested our proposed method on plain and residual networks, and the
results showed significant improvement using our method on various datasets and
architectures in comparison with the baseline.
| no_new_dataset | 0.951142 |
1703.09145 | Yuguang Liu | Yuguang Liu, Martin D. Levine | Multi-Path Region-Based Convolutional Neural Network for Accurate
Detection of Unconstrained "Hard Faces" | 11 pages, 7 figures, to be presented at CRV 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale variations still pose a challenge in unconstrained face
detection. To the best of our knowledge, no current face detection algorithm
can detect a face as large as 800 x 800 pixels while simultaneously detecting
another one as small as 8 x 8 pixels within a single image with equally high
accuracy. We propose a two-stage cascaded face detection framework, Multi-Path
Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a
deep neural network with a classic learning strategy, to tackle this challenge.
The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes
faces at three different scales. It simultaneously utilizes three parallel
outputs of the convolutional feature maps to predict multi-scale candidate face
regions. The "atrous" convolution trick (convolution with up-sampled filters)
and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN
to further boost its performance. The second stage is a Boosted Forests
classifier, which utilizes deep facial features pooled from inside the
candidate face regions as well as deep contextual features pooled from a larger
region surrounding the candidate face regions. This step is included to further
remove hard negative samples. Experiments show that this approach achieves
state-of-the-art face detection performance on the WIDER FACE dataset "hard"
partition, outperforming the former best result by 9.6% for the Average
Precision.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2017 15:31:00 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Liu",
"Yuguang",
""
],
[
"Levine",
"Martin D.",
""
]
] | TITLE: Multi-Path Region-Based Convolutional Neural Network for Accurate
Detection of Unconstrained "Hard Faces"
ABSTRACT: Large-scale variations still pose a challenge in unconstrained face
detection. To the best of our knowledge, no current face detection algorithm
can detect a face as large as 800 x 800 pixels while simultaneously detecting
another one as small as 8 x 8 pixels within a single image with equally high
accuracy. We propose a two-stage cascaded face detection framework, Multi-Path
Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a
deep neural network with a classic learning strategy, to tackle this challenge.
The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes
faces at three different scales. It simultaneously utilizes three parallel
outputs of the convolutional feature maps to predict multi-scale candidate face
regions. The "atrous" convolution trick (convolution with up-sampled filters)
and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN
to further boost its performance. The second stage is a Boosted Forests
classifier, which utilizes deep facial features pooled from inside the
candidate face regions as well as deep contextual features pooled from a larger
region surrounding the candidate face regions. This step is included to further
remove hard negative samples. Experiments show that this approach achieves
state-of-the-art face detection performance on the WIDER FACE dataset "hard"
partition, outperforming the former best result by 9.6% for the Average
Precision.
| no_new_dataset | 0.948155 |
1703.09193 | Saravanan Thirumuruganathan | Zoi Kaoudi, Jorge-Arnulfo Quian\'e-Ruiz, Saravanan Thirumuruganathan,
Sanjay Chawla, Divy Agrawal | A Cost-based Optimizer for Gradient Descent Optimization | Accepted at SIGMOD 2017 | null | 10.1145/3035918.3064042 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the use of machine learning (ML) permeates into diverse application
domains, there is an urgent need to support a declarative framework for ML.
Ideally, a user will specify an ML task in a high-level and easy-to-use
language and the framework will invoke the appropriate algorithms and system
configurations to execute it. An important observation towards designing such a
framework is that many ML tasks can be expressed as mathematical optimization
problems, which take a specific form. Furthermore, these optimization problems
can be efficiently solved using variations of the gradient descent (GD)
algorithm. Thus, to decouple a user specification of an ML task from its
execution, a key component is a GD optimizer. We propose a cost-based GD
optimizer that selects the best GD plan for a given ML task. To build our
optimizer, we introduce a set of abstract operators for expressing GD
algorithms and propose a novel approach to estimate the number of iterations a
GD algorithm requires to converge. Extensive experiments on real and synthetic
datasets show that our optimizer not only chooses the best GD plan but also
allows for optimizations that achieve orders of magnitude performance speed-up.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2017 17:24:54 GMT"
}
] | 2017-03-28T00:00:00 | [
[
"Kaoudi",
"Zoi",
""
],
[
"Quiané-Ruiz",
"Jorge-Arnulfo",
""
],
[
"Thirumuruganathan",
"Saravanan",
""
],
[
"Chawla",
"Sanjay",
""
],
[
"Agrawal",
"Divy",
""
]
] | TITLE: A Cost-based Optimizer for Gradient Descent Optimization
ABSTRACT: As the use of machine learning (ML) permeates into diverse application
domains, there is an urgent need to support a declarative framework for ML.
Ideally, a user will specify an ML task in a high-level and easy-to-use
language and the framework will invoke the appropriate algorithms and system
configurations to execute it. An important observation towards designing such a
framework is that many ML tasks can be expressed as mathematical optimization
problems, which take a specific form. Furthermore, these optimization problems
can be efficiently solved using variations of the gradient descent (GD)
algorithm. Thus, to decouple a user specification of an ML task from its
execution, a key component is a GD optimizer. We propose a cost-based GD
optimizer that selects the best GD plan for a given ML task. To build our
optimizer, we introduce a set of abstract operators for expressing GD
algorithms and propose a novel approach to estimate the number of iterations a
GD algorithm requires to converge. Extensive experiments on real and synthetic
datasets show that our optimizer not only chooses the best GD plan but also
allows for optimizations that achieve orders of magnitude performance speed-up.
| no_new_dataset | 0.938463 |
1612.07056 | Panayiotis Varotsos | Panayiotis A. Varotsos, Nicholas V. Sarlis, Efthimios S. Skordas and
Mary S. Lazaridou | Seismic Electric Signals: A physical interconnection with seismicity | 18 pages, 8 figures, 1 table | null | null | null | physics.geo-ph cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By applying natural time analysis to the time series of earthquakes, we find
that the order parameter of seismicity exhibits a unique change approximately
at the date(s) at which Seismic Electric Signals (SES) activities have been
reported to initiate. In particular, we show that the fluctuations of the order
parameter of seismicity in Japan exhibit a clearly detectable minimum
approximately at the time of the initiation of the SES activity observed almost
two months before the onset of the Volcanic-seismic swarm activity in 2000 in
the Izu Island region, Japan. To the best of our knowledge, this is the first
time that, well before the occurrence of major earthquakes, anomalous changes
are found to appear almost simultaneously in two independent datasets of
different geophysical observables (geoelectrical measurements, seismicity). In
addition, we show that these two phenomena are also linked closely in space.
| [
{
"version": "v1",
"created": "Wed, 21 Dec 2016 11:15:40 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2017 14:57:07 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"Varotsos",
"Panayiotis A.",
""
],
[
"Sarlis",
"Nicholas V.",
""
],
[
"Skordas",
"Efthimios S.",
""
],
[
"Lazaridou",
"Mary S.",
""
]
] | TITLE: Seismic Electric Signals: A physical interconnection with seismicity
ABSTRACT: By applying natural time analysis to the time series of earthquakes, we find
that the order parameter of seismicity exhibits a unique change approximately
at the date(s) at which Seismic Electric Signals (SES) activities have been
reported to initiate. In particular, we show that the fluctuations of the order
parameter of seismicity in Japan exhibit a clearly detectable minimum
approximately at the time of the initiation of the SES activity observed almost
two months before the onset of the Volcanic-seismic swarm activity in 2000 in
the Izu Island region, Japan. To the best of our knowledge, this is the first
time that, well before the occurrence of major earthquakes, anomalous changes
are found to appear almost simultaneously in two independent datasets of
different geophysical observables (geoelectrical measurements, seismicity). In
addition, we show that these two phenomena are also linked closely in space.
| no_new_dataset | 0.952309 |
1703.07022 | Xiaodan Liang | Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, Eric P. Xing | Recurrent Topic-Transition GAN for Visual Paragraph Generation | 10 pages, 6 figures | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A natural image usually conveys rich semantic content and can be viewed from
different angles. Existing image description methods are largely restricted by
small sets of biased visual paragraph annotations, and fail to cover rich
underlying semantics. In this paper, we investigate a semi-supervised paragraph
generative framework that is able to synthesize diverse and semantically
coherent paragraph descriptions by reasoning over local semantic regions and
exploiting linguistic knowledge. The proposed Recurrent Topic-Transition
Generative Adversarial Network (RTT-GAN) builds an adversarial framework
between a structured paragraph generator and multi-level paragraph
discriminators. The paragraph generator generates sentences recurrently by
incorporating region-based visual and language attention mechanisms at each
step. The quality of generated paragraph sentences is assessed by multi-level
adversarial discriminators from two aspects, namely, plausibility at sentence
level and topic-transition coherence at paragraph level. The joint adversarial
training of RTT-GAN drives the model to generate realistic paragraphs with
smooth logical transition between sentence topics. Extensive quantitative
experiments on image and video paragraph datasets demonstrate the effectiveness
of our RTT-GAN in both supervised and semi-supervised settings. Qualitative
results on telling diverse stories for an image also verify the
interpretability of RTT-GAN.
| [
{
"version": "v1",
"created": "Tue, 21 Mar 2017 01:43:12 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2017 20:06:15 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"Liang",
"Xiaodan",
""
],
[
"Hu",
"Zhiting",
""
],
[
"Zhang",
"Hao",
""
],
[
"Gan",
"Chuang",
""
],
[
"Xing",
"Eric P.",
""
]
] | TITLE: Recurrent Topic-Transition GAN for Visual Paragraph Generation
ABSTRACT: A natural image usually conveys rich semantic content and can be viewed from
different angles. Existing image description methods are largely restricted by
small sets of biased visual paragraph annotations, and fail to cover rich
underlying semantics. In this paper, we investigate a semi-supervised paragraph
generative framework that is able to synthesize diverse and semantically
coherent paragraph descriptions by reasoning over local semantic regions and
exploiting linguistic knowledge. The proposed Recurrent Topic-Transition
Generative Adversarial Network (RTT-GAN) builds an adversarial framework
between a structured paragraph generator and multi-level paragraph
discriminators. The paragraph generator generates sentences recurrently by
incorporating region-based visual and language attention mechanisms at each
step. The quality of generated paragraph sentences is assessed by multi-level
adversarial discriminators from two aspects, namely, plausibility at sentence
level and topic-transition coherence at paragraph level. The joint adversarial
training of RTT-GAN drives the model to generate realistic paragraphs with
smooth logical transition between sentence topics. Extensive quantitative
experiments on image and video paragraph datasets demonstrate the effectiveness
of our RTT-GAN in both supervised and semi-supervised settings. Qualitative
results on telling diverse stories for an image also verify the
interpretability of RTT-GAN.
| no_new_dataset | 0.946448 |
1703.08014 | Timo von Marcard | Timo von Marcard, Bodo Rosenhahn, Michael J. Black, Gerard Pons-Moll | Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse
IMUs | 12 pages, Accepted at Eurographics 2017 | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of making human motion capture in the wild more
practical by using a small set of inertial sensors attached to the body. Since
the problem is heavily under-constrained, previous methods either use a large
number of sensors, which is intrusive, or they require additional video input.
We take a different approach and constrain the problem by: (i) making use of a
realistic statistical body model that includes anthropometric constraints and
(ii) using a joint optimization framework to fit the model to orientation and
acceleration measurements over multiple frames. The resulting tracker Sparse
Inertial Poser (SIP) enables 3D human pose estimation using only 6 sensors
(attached to the wrists, lower legs, back and head) and works for arbitrary
human motions. Experiments on the recently released TNT15 dataset show that,
using the same number of sensors, SIP achieves higher accuracy than the dataset
baseline without using any video data. We further demonstrate the effectiveness
of SIP on newly recorded challenging motions in outdoor scenarios such as
climbing or jumping over a wall.
| [
{
"version": "v1",
"created": "Thu, 23 Mar 2017 11:35:41 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Mar 2017 08:24:07 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"von Marcard",
"Timo",
""
],
[
"Rosenhahn",
"Bodo",
""
],
[
"Black",
"Michael J.",
""
],
[
"Pons-Moll",
"Gerard",
""
]
] | TITLE: Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse
IMUs
ABSTRACT: We address the problem of making human motion capture in the wild more
practical by using a small set of inertial sensors attached to the body. Since
the problem is heavily under-constrained, previous methods either use a large
number of sensors, which is intrusive, or they require additional video input.
We take a different approach and constrain the problem by: (i) making use of a
realistic statistical body model that includes anthropometric constraints and
(ii) using a joint optimization framework to fit the model to orientation and
acceleration measurements over multiple frames. The resulting tracker Sparse
Inertial Poser (SIP) enables 3D human pose estimation using only 6 sensors
(attached to the wrists, lower legs, back and head) and works for arbitrary
human motions. Experiments on the recently released TNT15 dataset show that,
using the same number of sensors, SIP achieves higher accuracy than the dataset
baseline without using any video data. We further demonstrate the effectiveness
of SIP on newly recorded challenging motions in outdoor scenarios such as
climbing or jumping over a wall.
| no_new_dataset | 0.719433 |
1703.08244 | Maribel Acosta | Fabian Fl\"ock, Kenan Erdogan, Maribel Acosta | TokTrack: A Complete Token Provenance and Change Tracking Dataset for
the English Wikipedia | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a dataset that contains every instance of all tokens (~ words)
ever written in undeleted, non-redirect English Wikipedia articles until
October 2016, in total 13,545,349,787 instances. Each token is annotated with
(i) the article revision it was originally created in, and (ii) lists with all
the revisions in which the token was ever deleted and (potentially) re-added
and re-deleted from its article, enabling a complete and straightforward
tracking of its history. This data would be exceedingly hard for an average
potential user to create, as (i) it is very expensive to compute and (ii)
accurately tracking the history of each token in revisioned documents is a
non-trivial task. Adapting a state-of-the-art algorithm, we have produced a
dataset that allows for a range of analyses and metrics, already popular in
research and going beyond, to be generated on complete-Wikipedia scale;
ensuring quality and allowing researchers to forego expensive text-comparison
computation, which so far has hindered scalable usage. We show how this data
enables, on token-level, computation of provenance, measuring survival of
content over time, very detailed conflict metrics, and fine-grained
interactions of editors like partial reverts, re-additions and other metrics,
in the process gaining several novel insights.
| [
{
"version": "v1",
"created": "Thu, 23 Mar 2017 22:20:45 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"Flöck",
"Fabian",
""
],
[
"Erdogan",
"Kenan",
""
],
[
"Acosta",
"Maribel",
""
]
] | TITLE: TokTrack: A Complete Token Provenance and Change Tracking Dataset for
the English Wikipedia
ABSTRACT: We present a dataset that contains every instance of all tokens (~ words)
ever written in undeleted, non-redirect English Wikipedia articles until
October 2016, in total 13,545,349,787 instances. Each token is annotated with
(i) the article revision it was originally created in, and (ii) lists with all
the revisions in which the token was ever deleted and (potentially) re-added
and re-deleted from its article, enabling a complete and straightforward
tracking of its history. This data would be exceedingly hard for an average
potential user to create, as (i) it is very expensive to compute and (ii)
accurately tracking the history of each token in revisioned documents is a
non-trivial task. Adapting a state-of-the-art algorithm, we have produced a
dataset that allows for a range of analyses and metrics, already popular in
research and going beyond, to be generated on complete-Wikipedia scale;
ensuring quality and allowing researchers to forego expensive text-comparison
computation, which so far has hindered scalable usage. We show how this data
enables, on token-level, computation of provenance, measuring survival of
content over time, very detailed conflict metrics, and fine-grained
interactions of editors like partial reverts, re-additions and other metrics,
in the process gaining several novel insights.
| new_dataset | 0.964355 |
1703.08289 | Wenhao He | Wenhao He, Xu-Yao Zhang, Fei Yin, Cheng-Lin Liu | Deep Direct Regression for Multi-Oriented Scene Text Detection | 9 pages, 9 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we first provide a new perspective to divide existing high
performance object detection methods into direct and indirect regressions.
Direct regression performs boundary regression by predicting the offsets from a
given point, while indirect regression predicts the offsets from some bounding
box proposals. Then we analyze the drawbacks of the indirect regression, which
the recent state-of-the-art detection structures like Faster-RCNN and SSD
follow, for multi-oriented scene text detection, and point out the potential
superiority of direct regression. To verify this point of view, we propose a
deep direct regression based method for multi-oriented scene text detection.
Our detection framework is simple and effective with a fully convolutional
network and one-step post processing. The fully convolutional network is
optimized in an end-to-end way and has bi-task outputs where one is pixel-wise
classification between text and non-text, and the other is direct regression to
determine the vertex coordinates of quadrilateral text boundaries. The proposed
method is particularly beneficial for localizing incidental scene texts. On the
ICDAR2015 Incidental Scene Text benchmark, our method achieves the F1-measure
of 81%, which is a new state-of-the-art and significantly outperforms previous
approaches. On other standard datasets with focused scene texts, our method
also reaches the state-of-the-art performance.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 05:54:11 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"He",
"Wenhao",
""
],
[
"Zhang",
"Xu-Yao",
""
],
[
"Yin",
"Fei",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] | TITLE: Deep Direct Regression for Multi-Oriented Scene Text Detection
ABSTRACT: In this paper, we first provide a new perspective to divide existing high
performance object detection methods into direct and indirect regressions.
Direct regression performs boundary regression by predicting the offsets from a
given point, while indirect regression predicts the offsets from some bounding
box proposals. Then we analyze the drawbacks of the indirect regression, which
the recent state-of-the-art detection structures like Faster-RCNN and SSD
follows, for multi-oriented scene text detection, and point out the potential
superiority of direct regression. To verify this point of view, we propose a
deep direct regression based method for multi-oriented scene text detection.
Our detection framework is simple and effective with a fully convolutional
network and one-step post processing. The fully convolutional network is
optimized in an end-to-end way and has bi-task outputs where one is pixel-wise
classification between text and non-text, and the other is direct regression to
determine the vertex coordinates of quadrilateral text boundaries. The proposed
method is particularly beneficial for localizing incidental scene texts. On the
ICDAR2015 Incidental Scene Text benchmark, our method achieves the F1-measure
of 81%, which is a new state-of-the-art and significantly outperforms previous
approaches. On other standard datasets with focused scene texts, our method
also reaches the state-of-the-art performance.
| no_new_dataset | 0.947624 |
1703.08366 | Mohamed Moustafa | Hussein Adly and Mohamed Moustafa | A Hybrid Deep Learning Approach for Texture Analysis | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Texture classification is a problem that has various applications such as
remote sensing and forest species recognition. Solutions tend to be custom fit
to the dataset used but fail to generalize. The Convolutional Neural Network
(CNN) in combination with a Support Vector Machine (SVM) forms a robust pairing
of a powerful invariant feature extractor and an accurate classifier. The
fusion of experts provides stability in classification rates among different
datasets.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 11:39:26 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"Adly",
"Hussein",
""
],
[
"Moustafa",
"Mohamed",
""
]
] | TITLE: A Hybrid Deep Learning Approach for Texture Analysis
ABSTRACT: Texture classification is a problem that has various applications such as
remote sensing and forest species recognition. Solutions tend to be custom fit
to the dataset used but fail to generalize. The Convolutional Neural Network
(CNN) in combination with a Support Vector Machine (SVM) forms a robust pairing
of a powerful invariant feature extractor and an accurate classifier. The
fusion of experts provides stability in classification rates among different
datasets.
| no_new_dataset | 0.954137 |
1703.08378 | Shenglan Liu | Shenglan Liu, Muxin Sun, Wei Wang, Feilong Wang | Feature Fusion using Extended Jaccard Graph and Stochastic Gradient
Descent for Robot | Assembly Automation | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robot vision is a fundamental device for human-robot interaction and robot
complex tasks. In this paper, we use Kinect and propose a feature graph fusion
(FGF) for robot recognition. Our feature fusion utilizes RGB and depth
information to construct fused feature from Kinect. FGF involves multi-Jaccard
similarity to compute a robust graph and utilize word embedding method to
enhance the recognition results. We also collect DUT RGB-D face dataset and a
benchmark datset to evaluate the effectiveness and efficiency of our method.
The experimental results illustrate FGF is robust and effective to face and
object datasets in robot applications.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 11:58:14 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"Liu",
"Shenglan",
""
],
[
"Sun",
"Muxin",
""
],
[
"Wang",
"Wei",
""
],
[
"Wang",
"Feilong",
""
]
] | TITLE: Feature Fusion using Extended Jaccard Graph and Stochastic Gradient
Descent for Robot
ABSTRACT: Robot vision is a fundamental device for human-robot interaction and robot
complex tasks. In this paper, we use Kinect and propose a feature graph fusion
(FGF) for robot recognition. Our feature fusion utilizes RGB and depth
information to construct fused feature from Kinect. FGF involves multi-Jaccard
similarity to compute a robust graph and utilizes a word embedding method to
enhance the recognition results. We also collect the DUT RGB-D face dataset and a
benchmark dataset to evaluate the effectiveness and efficiency of our method.
The experimental results illustrate FGF is robust and effective to face and
object datasets in robot applications.
| no_new_dataset | 0.724139 |
1703.08434 | Kojo Sarfo Gyamfi | Kojo Sarfo Gyamfi, James Brusey, Andrew Hunt and Elena Gaura | Linear classifier design under heteroscedasticity in Linear Discriminant
Analysis | null | null | 10.1016/j.eswa.2017.02.039 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Under normality and homoscedasticity assumptions, Linear Discriminant
Analysis (LDA) is known to be optimal in terms of minimising the Bayes error
for binary classification. In the heteroscedastic case, LDA is not guaranteed
to minimise this error. Assuming heteroscedasticity, we derive a linear
classifier, the Gaussian Linear Discriminant (GLD), that directly minimises the
Bayes error for binary classification. In addition, we also propose a local
neighbourhood search (LNS) algorithm to obtain a more robust classifier if the
data is known to have a non-normal distribution. We evaluate the proposed
classifiers on two artificial and ten real-world datasets that cut across a
wide range of application areas including handwriting recognition, medical
diagnosis and remote sensing, and then compare our algorithm against existing
LDA approaches and other linear classifiers. The GLD is shown to outperform the
original LDA procedure in terms of the classification accuracy under
heteroscedasticity. While it compares favourably with other existing
heteroscedastic LDA approaches, the GLD requires as much as 60 times lower
training time on some datasets. Our comparison with the support vector machine
(SVM) also shows that, the GLD, together with the LNS, requires as much as 150
times lower training time to achieve an equivalent classification accuracy on
some of the datasets. Thus, our algorithms can provide a cheap and reliable
option for classification in a lot of expert systems.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 14:45:12 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"Gyamfi",
"Kojo Sarfo",
""
],
[
"Brusey",
"James",
""
],
[
"Hunt",
"Andrew",
""
],
[
"Gaura",
"Elena",
""
]
] | TITLE: Linear classifier design under heteroscedasticity in Linear Discriminant
Analysis
ABSTRACT: Under normality and homoscedasticity assumptions, Linear Discriminant
Analysis (LDA) is known to be optimal in terms of minimising the Bayes error
for binary classification. In the heteroscedastic case, LDA is not guaranteed
to minimise this error. Assuming heteroscedasticity, we derive a linear
classifier, the Gaussian Linear Discriminant (GLD), that directly minimises the
Bayes error for binary classification. In addition, we also propose a local
neighbourhood search (LNS) algorithm to obtain a more robust classifier if the
data is known to have a non-normal distribution. We evaluate the proposed
classifiers on two artificial and ten real-world datasets that cut across a
wide range of application areas including handwriting recognition, medical
diagnosis and remote sensing, and then compare our algorithm against existing
LDA approaches and other linear classifiers. The GLD is shown to outperform the
original LDA procedure in terms of the classification accuracy under
heteroscedasticity. While it compares favourably with other existing
heteroscedastic LDA approaches, the GLD requires as much as 60 times lower
training time on some datasets. Our comparison with the support vector machine
(SVM) also shows that, the GLD, together with the LNS, requires as much as 150
times lower training time to achieve an equivalent classification accuracy on
some of the datasets. Thus, our algorithms can provide a cheap and reliable
option for classification in a lot of expert systems.
| no_new_dataset | 0.946101 |
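A minimal illustrative sketch of the homoscedastic baseline that the GLD abstract above starts from: classical two-class LDA with a pooled covariance, written in NumPy. The toy data, class means, and equal-prior threshold are assumptions for illustration only; the paper's GLD and LNS procedures are not reproduced here.

```python
import numpy as np

def fit_lda(X0, X1):
    """Classical two-class LDA under a pooled-covariance (homoscedastic) assumption.

    Returns (w, b) so that sign(w @ x + b) assigns x to class 1 when positive.
    The GLD described above drops the equal-covariance assumption; this is only
    the textbook baseline it is compared against.
    """
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S0, S1 = np.cov(X0, rowvar=False), np.cov(X1, rowvar=False)
    n0, n1 = len(X0), len(X1)
    Sw = ((n0 - 1) * S0 + (n1 - 1) * S1) / (n0 + n1 - 2)  # pooled covariance
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)  # midpoint threshold, equal priors assumed
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X0 = rng.normal([0.0, 0.0], 1.0, size=(200, 2))
    X1 = rng.normal([2.0, 1.0], 1.5, size=(200, 2))  # unequal spread: heteroscedastic toy data
    w, b = fit_lda(X0, X1)
    acc = np.mean(np.r_[X0 @ w + b < 0, X1 @ w + b > 0])
    print("LDA weights:", np.round(w, 3), "toy accuracy:", acc)
```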
1703.08440 | Kojo Sarfo Gyamfi | Kojo Sarfo Gyamfi, James Brusey and Andrew Hunt | K-Means Clustering using Tabu Search with Quantized Means | World Conference on Engineering and Computer Science | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Tabu Search (TS) metaheuristic has been proposed for K-Means clustering
as an alternative to Lloyd's algorithm, which for all its ease of
implementation and fast runtime, has the major drawback of being trapped at
local optima. While the TS approach can yield superior performance, it involves
a high computational complexity. Moreover, the difficulty in parameter
selection in the existing TS approach does not make it any more attractive.
This paper presents an alternative, low-complexity formulation of the TS
optimization procedure for K-Means clustering. This approach does not require
many parameter settings. We initially constrain the centers to points in the
dataset. We then aim at evolving these centers using a unique neighborhood
structure that makes use of gradient information of the objective function.
This results in an efficient exploration of the search space, after which the
means are refined. The proposed scheme is implemented in MATLAB and tested on
four real-world datasets, and it achieves a significant improvement over the
existing TS approach in terms of the intra cluster sum of squares and
computational time.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 14:59:06 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"Gyamfi",
"Kojo Sarfo",
""
],
[
"Brusey",
"James",
""
],
[
"Hunt",
"Andrew",
""
]
] | TITLE: K-Means Clustering using Tabu Search with Quantized Means
ABSTRACT: The Tabu Search (TS) metaheuristic has been proposed for K-Means clustering
as an alternative to Lloyd's algorithm, which for all its ease of
implementation and fast runtime, has the major drawback of being trapped at
local optima. While the TS approach can yield superior performance, it involves
a high computational complexity. Moreover, the difficulty in parameter
selection in the existing TS approach does not make it any more attractive.
This paper presents an alternative, low-complexity formulation of the TS
optimization procedure for K-Means clustering. This approach does not require
many parameter settings. We initially constrain the centers to points in the
dataset. We then aim at evolving these centers using a unique neighborhood
structure that makes use of gradient information of the objective function.
This results in an efficient exploration of the search space, after which the
means are refined. The proposed scheme is implemented in MATLAB and tested on
four real-world datasets, and it achieves a significant improvement over the
existing TS approach in terms of the intra cluster sum of squares and
computational time.
| no_new_dataset | 0.952309 |
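A minimal sketch of the Lloyd's K-Means baseline referenced in the abstract above, with centers initialised to points drawn from the dataset itself, echoing the constraint the abstract mentions. The toy data and parameters are illustrative assumptions; the tabu-search neighborhood and gradient-guided refinement are not reproduced.

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's K-Means with centers initialised to points from the dataset.

    Returns (centers, labels, sse), where sse is the intra-cluster sum of squares,
    the quantity the tabu-search variant in the abstract above aims to reduce.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)                      # assignment step
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):              # converged
            break
        centers = new_centers                              # update step
    labels = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1).argmin(axis=1)
    sse = ((X - centers[labels]) ** 2).sum()
    return centers, labels, sse

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in ([0, 0], [3, 3], [0, 4])])
    _, _, sse = lloyd_kmeans(X, k=3)
    print("intra-cluster sum of squares:", round(float(sse), 3))
```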
1703.08471 | Mirco Ravanelli | Mirco Ravanelli, Philemon Brakel, Maurizio Omologo, Yoshua Bengio | Batch-normalized joint training for DNN-based distant speech recognition | arXiv admin note: text overlap with arXiv:1703.08002 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Improving distant speech recognition is a crucial step towards flexible
human-machine interfaces. Current technology, however, still exhibits a lack of
robustness, especially when adverse acoustic conditions are met. Despite the
significant progress made in the last years on both speech enhancement and
speech recognition, one potential limitation of state-of-the-art technology
lies in composing modules that are not well matched because they are not
trained jointly. To address this concern, a promising approach consists in
concatenating a speech enhancement and a speech recognition deep neural network
and to jointly update their parameters as if they were within a single bigger
network. Unfortunately, joint training can be difficult because the output
distribution of the speech enhancement system may change substantially during
the optimization procedure. The speech recognition module would have to deal
with an input distribution that is non-stationary and unnormalized. To mitigate
this issue, we propose a joint training approach based on a fully
batch-normalized architecture. Experiments, conducted using different datasets,
tasks and acoustic conditions, revealed that the proposed framework
significantly overtakes other competitive solutions, especially in challenging
environments.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 15:40:19 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"Ravanelli",
"Mirco",
""
],
[
"Brakel",
"Philemon",
""
],
[
"Omologo",
"Maurizio",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: Batch-normalized joint training for DNN-based distant speech recognition
ABSTRACT: Improving distant speech recognition is a crucial step towards flexible
human-machine interfaces. Current technology, however, still exhibits a lack of
robustness, especially when adverse acoustic conditions are met. Despite the
significant progress made in the last years on both speech enhancement and
speech recognition, one potential limitation of state-of-the-art technology
lies in composing modules that are not well matched because they are not
trained jointly. To address this concern, a promising approach consists in
concatenating a speech enhancement and a speech recognition deep neural network
and to jointly update their parameters as if they were within a single bigger
network. Unfortunately, joint training can be difficult because the output
distribution of the speech enhancement system may change substantially during
the optimization procedure. The speech recognition module would have to deal
with an input distribution that is non-stationary and unnormalized. To mitigate
this issue, we propose a joint training approach based on a fully
batch-normalized architecture. Experiments, conducted using different datasets,
tasks and acoustic conditions, revealed that the proposed framework
significantly overtakes other competitive solutions, especially in challenging
environments.
| no_new_dataset | 0.932699 |
1703.08524 | Shuai Xiao | Shuai Xiao, Junchi Yan, Mehrdad Farajtabar, Le Song, Xiaokang Yang,
Hongyuan Zha | Joint Modeling of Event Sequence and Time Series with Attentional Twin
Recurrent Neural Networks | 14 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A variety of real-world processes (over networks) produce sequences of data
whose complex temporal dynamics need to be studied. More specifically, the event
timestamps can carry important information about the underlying network
dynamics, which otherwise are not available from the time-series evenly sampled
from continuous signals. Moreover, in most complex processes, event sequences
and evenly-sampled times series data can interact with each other, which
renders joint modeling of those two sources of data necessary. To tackle the
above problems, in this paper, we utilize the rich framework of (temporal)
point processes to model event data and timely update its intensity function by
the synergic twin Recurrent Neural Networks (RNNs). In the proposed
architecture, the intensity function is synergistically modulated by one RNN
with asynchronous events as input and another RNN with time series as input.
Furthermore, to enhance the interpretability of the model, the attention
mechanism for the neural point process is introduced. The whole model with
event type and timestamp prediction output layers can be trained end-to-end and
allows a black-box treatment for modeling the intensity. We substantiate the
superiority of our model in synthetic data and three real-world benchmark
datasets.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 17:29:14 GMT"
}
] | 2017-03-27T00:00:00 | [
[
"Xiao",
"Shuai",
""
],
[
"Yan",
"Junchi",
""
],
[
"Farajtabar",
"Mehrdad",
""
],
[
"Song",
"Le",
""
],
[
"Yang",
"Xiaokang",
""
],
[
"Zha",
"Hongyuan",
""
]
] | TITLE: Joint Modeling of Event Sequence and Time Series with Attentional Twin
Recurrent Neural Networks
ABSTRACT: A variety of real-world processes (over networks) produce sequences of data
whose complex temporal dynamics need to be studied. More specifically, the event
timestamps can carry important information about the underlying network
dynamics, which otherwise are not available from the time-series evenly sampled
from continuous signals. Moreover, in most complex processes, event sequences
and evenly-sampled time series data can interact with each other, which
renders joint modeling of those two sources of data necessary. To tackle the
above problems, in this paper, we utilize the rich framework of (temporal)
point processes to model event data and timely update its intensity function by
the synergic twin Recurrent Neural Networks (RNNs). In the proposed
architecture, the intensity function is synergistically modulated by one RNN
with asynchronous events as input and another RNN with time series as input.
Furthermore, to enhance the interpretability of the model, the attention
mechanism for the neural point process is introduced. The whole model with
event type and timestamp prediction output layers can be trained end-to-end and
allows a black-box treatment for modeling the intensity. We substantiate the
superiority of our model in synthetic data and three real-world benchmark
datasets.
| no_new_dataset | 0.953579 |
1508.03422 | Salman Khan Mr. | Salman H. Khan, Munawar Hayat, Mohammed Bennamoun, Ferdous Sohel,
Roberto Togneri | Cost Sensitive Learning of Deep Feature Representations from Imbalanced
Data | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Class imbalance is a common problem in the case of real-world object
detection and classification tasks. Data of some classes is abundant, making
them an over-represented majority, and data of other classes is scarce, making
them an under-represented minority. This imbalance makes it challenging for a
classifier to appropriately learn the discriminating boundaries of the majority
and minority classes. In this work, we propose a cost sensitive deep neural
network which can automatically learn robust feature representations for both
the majority and minority classes. During training, our learning procedure
jointly optimizes the class dependent costs and the neural network parameters.
The proposed approach is applicable to both binary and multi-class problems
without any modification. Moreover, as opposed to data level approaches, we do
not alter the original data distribution which results in a lower computational
cost during the training process. We report the results of our experiments on
six major image classification datasets and show that the proposed approach
significantly outperforms the baseline algorithms. Comparisons with popular
data sampling techniques and cost sensitive classifiers demonstrate the
superior performance of our proposed method.
| [
{
"version": "v1",
"created": "Fri, 14 Aug 2015 05:23:30 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Dec 2015 08:37:37 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2017 10:57:10 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Khan",
"Salman H.",
""
],
[
"Hayat",
"Munawar",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Sohel",
"Ferdous",
""
],
[
"Togneri",
"Roberto",
""
]
] | TITLE: Cost Sensitive Learning of Deep Feature Representations from Imbalanced
Data
ABSTRACT: Class imbalance is a common problem in the case of real-world object
detection and classification tasks. Data of some classes is abundant, making
them an over-represented majority, and data of other classes is scarce, making
them an under-represented minority. This imbalance makes it challenging for a
classifier to appropriately learn the discriminating boundaries of the majority
and minority classes. In this work, we propose a cost sensitive deep neural
network which can automatically learn robust feature representations for both
the majority and minority classes. During training, our learning procedure
jointly optimizes the class dependent costs and the neural network parameters.
The proposed approach is applicable to both binary and multi-class problems
without any modification. Moreover, as opposed to data level approaches, we do
not alter the original data distribution which results in a lower computational
cost during the training process. We report the results of our experiments on
six major image classification datasets and show that the proposed approach
significantly outperforms the baseline algorithms. Comparisons with popular
data sampling techniques and cost sensitive classifiers demonstrate the
superior performance of our proposed method.
| no_new_dataset | 0.951684 |
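A minimal sketch of the cost-sensitive idea in the abstract above: a class-weighted cross-entropy in which errors on the under-represented class are penalised more heavily. The fixed cost vector and toy probabilities are illustrative assumptions; in the paper the class-dependent costs are learned jointly with the network parameters.

```python
import numpy as np

def cost_weighted_cross_entropy(probs, labels, class_costs):
    """Cross-entropy in which each example is weighted by the cost of its true class.

    probs: (n, c) predicted class probabilities; labels: (n,) integer class ids;
    class_costs: (c,) misclassification costs supplied by the caller.
    """
    picked = probs[np.arange(len(labels)), labels]   # probability of the true class
    weights = class_costs[labels]
    return float(-(weights * np.log(picked + 1e-12)).mean())

if __name__ == "__main__":
    # Imbalanced toy batch: class 1 is the under-represented minority.
    probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4], [0.3, 0.7]])
    labels = np.array([0, 0, 0, 1])
    print("uniform costs:   ", cost_weighted_cross_entropy(probs, labels, np.array([1.0, 1.0])))
    print("minority x3 cost:", cost_weighted_cross_entropy(probs, labels, np.array([1.0, 3.0])))
```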
1606.03956 | Eric Tramel | Eric W. Tramel and Andre Manoel and Francesco Caltagirone and Marylou
Gabri\'e and Florent Krzakala | Inferring Sparsity: Compressed Sensing using Generalized Restricted
Boltzmann Machines | IEEE Information Theory Workshop, 2016 | 2016 IEEE Information Theory Workshop (ITW), Pages: 265 - 269 | 10.1109/ITW.2016.7606837 | null | cs.IT cond-mat.dis-nn cs.LG math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we consider compressed sensing reconstruction from $M$
measurements of $K$-sparse structured signals which do not possess a writable
correlation model. Assuming that a generative statistical model, such as a
Boltzmann machine, can be trained in an unsupervised manner on example signals,
we demonstrate how this signal model can be used within a Bayesian framework of
signal reconstruction. By deriving a message-passing inference for general
distribution restricted Boltzmann machines, we are able to integrate these
inferred signal models into approximate message passing for compressed sensing
reconstruction. Finally, we show for the MNIST dataset that this approach can
be very effective, even for $M < K$.
| [
{
"version": "v1",
"created": "Mon, 13 Jun 2016 14:03:50 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Tramel",
"Eric W.",
""
],
[
"Manoel",
"Andre",
""
],
[
"Caltagirone",
"Francesco",
""
],
[
"Gabrié",
"Marylou",
""
],
[
"Krzakala",
"Florent",
""
]
] | TITLE: Inferring Sparsity: Compressed Sensing using Generalized Restricted
Boltzmann Machines
ABSTRACT: In this work, we consider compressed sensing reconstruction from $M$
measurements of $K$-sparse structured signals which do not possess a writable
correlation model. Assuming that a generative statistical model, such as a
Boltzmann machine, can be trained in an unsupervised manner on example signals,
we demonstrate how this signal model can be used within a Bayesian framework of
signal reconstruction. By deriving a message-passing inference for general
distribution restricted Boltzmann machines, we are able to integrate these
inferred signal models into approximate message passing for compressed sensing
reconstruction. Finally, we show for the MNIST dataset that this approach can
be very effective, even for $M < K$.
| no_new_dataset | 0.949669 |
1606.04722 | Xi Wu | Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, Jeffrey
F. Naughton | Bolt-on Differential Privacy for Scalable Stochastic Gradient
Descent-based Analytics | null | null | null | null | cs.LG cs.CR cs.DB stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While significant progress has been made separately on analytics systems for
scalable stochastic gradient descent (SGD) and private SGD, none of the major
scalable analytics frameworks have incorporated differentially private SGD.
There are two inter-related issues for this disconnect between research and
practice: (1) low model accuracy due to added noise to guarantee privacy, and
(2) high development and runtime overhead of the private algorithms. This paper
takes a first step to remedy this disconnect and proposes a private SGD
algorithm to address \emph{both} issues in an integrated manner. In contrast to
the white-box approach adopted by previous work, we revisit and use the
classical technique of {\em output perturbation} to devise a novel "bolt-on"
approach to private SGD. While our approach trivially addresses (2), it makes
(1) even more challenging. We address this challenge by providing a novel
analysis of the $L_2$-sensitivity of SGD, which allows, under the same privacy
guarantees, better convergence of SGD when only a constant number of passes can
be made over the data. We integrate our algorithm, as well as other
state-of-the-art differentially private SGD, into Bismarck, a popular scalable
SGD-based analytics system on top of an RDBMS. Extensive experiments show that
our algorithm can be easily integrated, incurs virtually no overhead, scales
well, and most importantly, yields substantially better (up to 4X) test
accuracy than the state-of-the-art algorithms on many real datasets.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 11:14:29 GMT"
},
{
"version": "v2",
"created": "Sun, 26 Feb 2017 16:26:59 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Mar 2017 17:35:09 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Wu",
"Xi",
""
],
[
"Li",
"Fengan",
""
],
[
"Kumar",
"Arun",
""
],
[
"Chaudhuri",
"Kamalika",
""
],
[
"Jha",
"Somesh",
""
],
[
"Naughton",
"Jeffrey F.",
""
]
] | TITLE: Bolt-on Differential Privacy for Scalable Stochastic Gradient
Descent-based Analytics
ABSTRACT: While significant progress has been made separately on analytics systems for
scalable stochastic gradient descent (SGD) and private SGD, none of the major
scalable analytics frameworks have incorporated differentially private SGD.
There are two inter-related issues for this disconnect between research and
practice: (1) low model accuracy due to added noise to guarantee privacy, and
(2) high development and runtime overhead of the private algorithms. This paper
takes a first step to remedy this disconnect and proposes a private SGD
algorithm to address \emph{both} issues in an integrated manner. In contrast to
the white-box approach adopted by previous work, we revisit and use the
classical technique of {\em output perturbation} to devise a novel "bolt-on"
approach to private SGD. While our approach trivially addresses (2), it makes
(1) even more challenging. We address this challenge by providing a novel
analysis of the $L_2$-sensitivity of SGD, which allows, under the same privacy
guarantees, better convergence of SGD when only a constant number of passes can
be made over the data. We integrate our algorithm, as well as other
state-of-the-art differentially private SGD, into Bismarck, a popular scalable
SGD-based analytics system on top of an RDBMS. Extensive experiments show that
our algorithm can be easily integrated, incurs virtually no overhead, scales
well, and most importantly, yields substantially better (up to 4X) test
accuracy than the state-of-the-art algorithms on many real datasets.
| no_new_dataset | 0.943295 |
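A minimal sketch of the classical output-perturbation approach revisited in the abstract above: train a simple logistic-regression model with plain SGD, then add noise to the weights before release. The per-coordinate Laplace noise and the placeholder sensitivity value are illustrative assumptions only, not a calibrated privacy mechanism; the paper's L2-sensitivity analysis of SGD is not reproduced.

```python
import numpy as np

def sgd_logistic_regression(X, y, lr=0.1, epochs=5, reg=0.01, seed=0):
    """Plain SGD for L2-regularised logistic regression (the non-private model)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w -= lr * ((p - y[i]) * X[i] + reg * w)   # per-example gradient step
    return w

def perturb_output(w, epsilon, sensitivity, seed=1):
    """Add noise to the trained weights before release (output perturbation).

    `sensitivity` is a placeholder bound chosen by the caller; obtaining a tight
    L2-sensitivity bound for SGD is the analysis contributed by the paper above,
    and the per-coordinate Laplace noise here is only a schematic stand-in.
    """
    rng = np.random.default_rng(seed)
    return w + rng.laplace(scale=sensitivity / epsilon, size=w.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 3))
    y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)
    w = sgd_logistic_regression(X, y)
    print("trained weights: ", np.round(w, 3))
    print("released weights:", np.round(perturb_output(w, epsilon=1.0, sensitivity=0.5), 3))
```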
1608.00161 | Nam Vo | Nam Vo and James Hays | Localizing and Orienting Street Views Using Overhead Imagery | ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we aim to determine the location and orientation of a
ground-level query image by matching to a reference database of overhead (e.g.
satellite) images. For this task we collect a new dataset with one million
pairs of street view and overhead images sampled from eleven U.S. cities. We
explore several deep CNN architectures for cross-domain matching --
Classification, Hybrid, Siamese, and Triplet networks. Classification and
Hybrid architectures are accurate but slow since they allow only partial
feature precomputation. We propose a new loss function which significantly
improves the accuracy of Siamese and Triplet embedding networks while
maintaining their applicability to large-scale retrieval tasks like image
geolocalization. This image matching task is challenging not just because of
the dramatic viewpoint difference between ground-level and overhead imagery but
because the orientation (i.e. azimuth) of the street views is unknown making
correspondence even more difficult. We examine several mechanisms to match in
spite of this -- training for rotation invariance, sampling possible rotations
at query time, and explicitly predicting relative rotation of ground and
overhead images with our deep networks. It turns out that explicit orientation
supervision also improves location prediction accuracy. Our best performing
architectures are roughly 2.5 times as accurate as the commonly used Siamese
network baseline.
| [
{
"version": "v1",
"created": "Sat, 30 Jul 2016 20:48:14 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2017 23:49:57 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Vo",
"Nam",
""
],
[
"Hays",
"James",
""
]
] | TITLE: Localizing and Orienting Street Views Using Overhead Imagery
ABSTRACT: In this paper we aim to determine the location and orientation of a
ground-level query image by matching to a reference database of overhead (e.g.
satellite) images. For this task we collect a new dataset with one million
pairs of street view and overhead images sampled from eleven U.S. cities. We
explore several deep CNN architectures for cross-domain matching --
Classification, Hybrid, Siamese, and Triplet networks. Classification and
Hybrid architectures are accurate but slow since they allow only partial
feature precomputation. We propose a new loss function which significantly
improves the accuracy of Siamese and Triplet embedding networks while
maintaining their applicability to large-scale retrieval tasks like image
geolocalization. This image matching task is challenging not just because of
the dramatic viewpoint difference between ground-level and overhead imagery but
because the orientation (i.e. azimuth) of the street views is unknown making
correspondence even more difficult. We examine several mechanisms to match in
spite of this -- training for rotation invariance, sampling possible rotations
at query time, and explicitly predicting relative rotation of ground and
overhead images with our deep networks. It turns out that explicit orientation
supervision also improves location prediction accuracy. Our best performing
architectures are roughly 2.5 times as accurate as the commonly used Siamese
network baseline.
| new_dataset | 0.956145 |
1611.06492 | Abhinav Agarwalla | Arnav Kumar Jain, Abhinav Agarwalla, Kumar Krishna Agrawal, Pabitra
Mitra | Recurrent Memory Addressing for describing videos | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce Key-Value Memory Networks to a multimodal setting
and a novel key-addressing mechanism to deal with sequence-to-sequence models.
The proposed model naturally decomposes the problem of video captioning into
vision and language segments, dealing with them as key-value pairs. More
specifically, we learn a semantic embedding (v) corresponding to each frame (k)
in the video, thereby creating (k, v) memory slots. We propose to find the next
step attention weights conditioned on the previous attention distributions for
the key-value memory slots in the memory addressing schema. Exploiting this
flexibility of the framework, we additionally capture spatial dependencies
while mapping from the visual to semantic embedding. Experiments done on the
Youtube2Text dataset demonstrate the usefulness of recurrent key-addressing, while
achieving competitive scores on BLEU@4, METEOR metrics against state-of-the-art
models.
| [
{
"version": "v1",
"created": "Sun, 20 Nov 2016 10:07:54 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Mar 2017 14:01:20 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Jain",
"Arnav Kumar",
""
],
[
"Agarwalla",
"Abhinav",
""
],
[
"Agrawal",
"Kumar Krishna",
""
],
[
"Mitra",
"Pabitra",
""
]
] | TITLE: Recurrent Memory Addressing for describing videos
ABSTRACT: In this paper, we introduce Key-Value Memory Networks to a multimodal setting
and a novel key-addressing mechanism to deal with sequence-to-sequence models.
The proposed model naturally decomposes the problem of video captioning into
vision and language segments, dealing with them as key-value pairs. More
specifically, we learn a semantic embedding (v) corresponding to each frame (k)
in the video, thereby creating (k, v) memory slots. We propose to find the next
step attention weights conditioned on the previous attention distributions for
the key-value memory slots in the memory addressing schema. Exploiting this
flexibility of the framework, we additionally capture spatial dependencies
while mapping from the visual to semantic embedding. Experiments done on the
Youtube2Text dataset demonstrate the usefulness of recurrent key-addressing, while
achieving competitive scores on BLEU@4, METEOR metrics against state-of-the-art
models.
| no_new_dataset | 0.948728 |
1703.07807 | Theja Tulabandhula | Arun Rajkumar and Koyel Mukherjee and Theja Tulabandhula | Learning to Partition using Score Based Compatibilities | Appears in the Proceedings of the 16th International Conference on
Autonomous Agents and Multiagent Systems (AAMAS 2017) | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of learning to partition users into groups, where one
must learn the compatibilities between the users to achieve optimal groupings.
We define four natural objectives that optimize for average and worst case
compatibilities and propose new algorithms for adaptively learning optimal
groupings. When we do not impose any structure on the compatibilities, we show
that the group formation objectives considered are $NP$ hard to solve and we
either give approximation guarantees or prove inapproximability results. We
then introduce an elegant structure, namely that of \textit{intrinsic scores},
that makes many of these problems polynomial time solvable. We explicitly
characterize the optimal groupings under this structure and show that the
optimal solutions are related to \emph{homophilous} and \emph{heterophilous}
partitions, well-studied in the psychology literature. For one of the four
objectives, we show $NP$ hardness under the score structure and give a
$\frac{1}{2}$ approximation algorithm for which no constant approximation was
known thus far. Finally, under the score structure, we propose an online low
sample complexity PAC algorithm for learning the optimal partition. We
demonstrate the efficacy of the proposed algorithm on synthetic and real world
datasets.
| [
{
"version": "v1",
"created": "Wed, 22 Mar 2017 18:30:10 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Rajkumar",
"Arun",
""
],
[
"Mukherjee",
"Koyel",
""
],
[
"Tulabandhula",
"Theja",
""
]
] | TITLE: Learning to Partition using Score Based Compatibilities
ABSTRACT: We study the problem of learning to partition users into groups, where one
must learn the compatibilities between the users to achieve optimal groupings.
We define four natural objectives that optimize for average and worst case
compatibilities and propose new algorithms for adaptively learning optimal
groupings. When we do not impose any structure on the compatibilities, we show
that the group formation objectives considered are $NP$ hard to solve and we
either give approximation guarantees or prove inapproximability results. We
then introduce an elegant structure, namely that of \textit{intrinsic scores},
that makes many of these problems polynomial time solvable. We explicitly
characterize the optimal groupings under this structure and show that the
optimal solutions are related to \emph{homophilous} and \emph{heterophilous}
partitions, well-studied in the psychology literature. For one of the four
objectives, we show $NP$ hardness under the score structure and give a
$\frac{1}{2}$ approximation algorithm for which no constant approximation was
known thus far. Finally, under the score structure, we propose an online low
sample complexity PAC algorithm for learning the optimal partition. We
demonstrate the efficacy of the proposed algorithm on synthetic and real world
datasets.
| no_new_dataset | 0.94428 |
1703.07815 | Chen Chen | Yicong Tian and Chen Chen and Mubarak Shah | Cross-View Image Matching for Geo-localization in Urban Environments | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of cross-view image geo-localization.
Specifically, we aim to estimate the GPS location of a query street view image
by finding the matching images in a reference database of geo-tagged bird's eye
view images, or vice versa. To this end, we present a new framework for
cross-view image geo-localization by taking advantage of the tremendous success
of deep convolutional neural networks (CNNs) in image classification and object
detection. First, we employ the Faster R-CNN to detect buildings in the query
and reference images. Next, for each building in the query image, we retrieve
the $k$ nearest neighbors from the reference buildings using a Siamese network
trained on both positive matching image pairs and negative pairs. To find the
correct NN for each query building, we develop an efficient multiple nearest
neighbors matching method based on dominant sets. We evaluate the proposed
framework on a new dataset that consists of pairs of street view and bird's eye
view images. Experimental results show that the proposed method achieves better
geo-localization accuracy than other approaches and is able to generalize to
images at unseen locations.
| [
{
"version": "v1",
"created": "Wed, 22 Mar 2017 18:51:51 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Tian",
"Yicong",
""
],
[
"Chen",
"Chen",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Cross-View Image Matching for Geo-localization in Urban Environments
ABSTRACT: In this paper, we address the problem of cross-view image geo-localization.
Specifically, we aim to estimate the GPS location of a query street view image
by finding the matching images in a reference database of geo-tagged bird's eye
view images, or vice versa. To this end, we present a new framework for
cross-view image geo-localization by taking advantage of the tremendous success
of deep convolutional neural networks (CNNs) in image classification and object
detection. First, we employ the Faster R-CNN to detect buildings in the query
and reference images. Next, for each building in the query image, we retrieve
the $k$ nearest neighbors from the reference buildings using a Siamese network
trained on both positive matching image pairs and negative pairs. To find the
correct NN for each query building, we develop an efficient multiple nearest
neighbors matching method based on dominant sets. We evaluate the proposed
framework on a new dataset that consists of pairs of street view and bird's eye
view images. Experimental results show that the proposed method achieves better
geo-localization accuracy than other approaches and is able to generalize to
images at unseen locations.
| new_dataset | 0.961893 |
1703.07980 | Fengfu Li | Fengfu Li, Hong Qiao, Bo Zhang, Xuanyang Xi | Discriminatively Boosted Image Clustering with Fully Convolutional
Auto-Encoders | 27 pages | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional image clustering methods take a two-step approach, feature
learning and clustering, sequentially. However, recent research results
demonstrated that combining the separated phases in a unified framework and
training them jointly can achieve a better performance. In this paper, we first
introduce fully convolutional auto-encoders for image feature learning and then
propose a unified clustering framework to learn image representations and
cluster centers jointly based on a fully convolutional auto-encoder and soft
$k$-means scores. At initial stages of the learning procedure, the
representations extracted from the auto-encoder may not be very discriminative
for latter clustering. We address this issue by adopting a boosted
discriminative distribution, where high score assignments are highlighted and
low score ones are de-emphasized. With the gradually boosted discrimination,
clustering assignment scores are discriminated and cluster purities are
enlarged. Experiments on several vision benchmark datasets show that our
methods can achieve a state-of-the-art performance.
| [
{
"version": "v1",
"created": "Thu, 23 Mar 2017 09:49:37 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Li",
"Fengfu",
""
],
[
"Qiao",
"Hong",
""
],
[
"Zhang",
"Bo",
""
],
[
"Xi",
"Xuanyang",
""
]
] | TITLE: Discriminatively Boosted Image Clustering with Fully Convolutional
Auto-Encoders
ABSTRACT: Traditional image clustering methods take a two-step approach, feature
learning and clustering, sequentially. However, recent research results
demonstrated that combining the separated phases in a unified framework and
training them jointly can achieve a better performance. In this paper, we first
introduce fully convolutional auto-encoders for image feature learning and then
propose a unified clustering framework to learn image representations and
cluster centers jointly based on a fully convolutional auto-encoder and soft
$k$-means scores. At initial stages of the learning procedure, the
representations extracted from the auto-encoder may not be very discriminative
for latter clustering. We address this issue by adopting a boosted
discriminative distribution, where high score assignments are highlighted and
low score ones are de-emphasized. With the gradually boosted discrimination,
clustering assignment scores are discriminated and cluster purities are
enlarged. Experiments on several vision benchmark datasets show that our
methods can achieve a state-of-the-art performance.
| no_new_dataset | 0.947039 |
1703.08002 | Mirco Ravanelli | Mirco Ravanelli, Philemon Brakel, Maurizio Omologo, Yoshua Bengio | A network of deep neural networks for distant speech recognition | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the remarkable progress recently made in distant speech recognition,
state-of-the-art technology still suffers from a lack of robustness, especially
when adverse acoustic conditions characterized by non-stationary noises and
reverberation are met. A prominent limitation of current systems lies in the
lack of matching and communication between the various technologies involved in
the distant speech recognition process. The speech enhancement and speech
recognition modules are, for instance, often trained independently. Moreover,
the speech enhancement normally helps the speech recognizer, but the output of
the latter is not commonly used, in turn, to improve the speech enhancement. To
address both concerns, we propose a novel architecture based on a network of
deep neural networks, where all the components are jointly trained and better
cooperate with each other thanks to a full communication scheme between them.
Experiments, conducted using different datasets, tasks and acoustic conditions,
revealed that the proposed framework can overtake other competitive solutions,
including recent joint training approaches.
| [
{
"version": "v1",
"created": "Thu, 23 Mar 2017 11:02:47 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Ravanelli",
"Mirco",
""
],
[
"Brakel",
"Philemon",
""
],
[
"Omologo",
"Maurizio",
""
],
[
"Bengio",
"Yoshua",
""
]
] | TITLE: A network of deep neural networks for distant speech recognition
ABSTRACT: Despite the remarkable progress recently made in distant speech recognition,
state-of-the-art technology still suffers from a lack of robustness, especially
when adverse acoustic conditions characterized by non-stationary noises and
reverberation are met. A prominent limitation of current systems lies in the
lack of matching and communication between the various technologies involved in
the distant speech recognition process. The speech enhancement and speech
recognition modules are, for instance, often trained independently. Moreover,
the speech enhancement normally helps the speech recognizer, but the output of
the latter is not commonly used, in turn, to improve the speech enhancement. To
address both concerns, we propose a novel architecture based on a network of
deep neural networks, where all the components are jointly trained and better
cooperate with each other thanks to a full communication scheme between them.
Experiments, conducted using different datasets, tasks and acoustic conditions,
revealed that the proposed framework can overtake other competitive solutions,
including recent joint training approaches.
| no_new_dataset | 0.942718 |
1703.08033 | Akshay Mehotra | Akshay Mehrotra, Ambedkar Dukkipati | Generative Adversarial Residual Pairwise Networks for One Shot Learning | null | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks achieve unprecedented performance levels over many tasks
and scale well with large quantities of data, but performance in the low-data
regime and tasks like one shot learning still lags behind. While recent work
suggests many hypotheses from better optimization to more complicated network
structures, in this work we hypothesize that having a learnable and more
expressive similarity objective is an essential missing component. Towards
overcoming that, we propose a network design inspired by deep residual networks
that allows the efficient computation of this more expressive pairwise
similarity objective. Further, we argue that regularization is key in learning
with small amounts of data, and propose an additional generator network based
on Generative Adversarial Networks, where the discriminator is our residual
pairwise network. This provides a strong regularizer by leveraging the
generated data samples. The proposed model can generate plausible variations of
exemplars over unseen classes and outperforms strong discriminative baselines
for few shot classification tasks. Notably, our residual pairwise network
design outperforms previous state-of-the-art on the challenging mini-Imagenet
dataset for one shot learning by getting over 55% accuracy for the 5-way
classification task over unseen classes.
| [
{
"version": "v1",
"created": "Thu, 23 Mar 2017 12:19:09 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Mehrotra",
"Akshay",
""
],
[
"Dukkipati",
"Ambedkar",
""
]
] | TITLE: Generative Adversarial Residual Pairwise Networks for One Shot Learning
ABSTRACT: Deep neural networks achieve unprecedented performance levels over many tasks
and scale well with large quantities of data, but performance in the low-data
regime and tasks like one shot learning still lags behind. While recent work
suggests many hypotheses from better optimization to more complicated network
structures, in this work we hypothesize that having a learnable and more
expressive similarity objective is an essential missing component. Towards
overcoming that, we propose a network design inspired by deep residual networks
that allows the efficient computation of this more expressive pairwise
similarity objective. Further, we argue that regularization is key in learning
with small amounts of data, and propose an additional generator network based
on Generative Adversarial Networks, where the discriminator is our residual
pairwise network. This provides a strong regularizer by leveraging the
generated data samples. The proposed model can generate plausible variations of
exemplars over unseen classes and outperforms strong discriminative baselines
for few shot classification tasks. Notably, our residual pairwise network
design outperforms the previous state-of-the-art on the challenging
mini-Imagenet dataset for one shot learning, achieving over 55% accuracy on the
5-way classification task over unseen classes.
| no_new_dataset | 0.949902 |
1703.08089 | Alexander Richard | Alexander Richard (1), Juergen Gall (1) ((1) University of Bonn) | A Bag-of-Words Equivalent Recurrent Neural Network for Action
Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The traditional bag-of-words approach has found a wide range of applications
in computer vision. The standard pipeline consists of a generation of a visual
vocabulary, a quantization of the features into histograms of visual words, and
a classification step for which usually a support vector machine in combination
with a non-linear kernel is used. Given large amounts of data, however, the
model suffers from a lack of discriminative power. This applies particularly
for action recognition, where the vast amount of video features needs to be
subsampled for unsupervised visual vocabulary generation. Moreover, the kernel
computation can be very expensive on large datasets. In this work, we propose a
recurrent neural network that is equivalent to the traditional bag-of-words
approach but enables the application of discriminative training. The model
further allows the kernel computation to be incorporated into the neural network
directly, solving the complexity issue and allowing the complete classification
system to be represented within a single network. We evaluate our method on four
recent action recognition benchmarks and show that the conventional model as
well as sparse coding methods are outperformed.
| [
{
"version": "v1",
"created": "Thu, 23 Mar 2017 14:46:46 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Richard",
"Alexander",
"",
"University of Bonn"
],
[
"Gall",
"Juergen",
"",
"University of Bonn"
]
] | TITLE: A Bag-of-Words Equivalent Recurrent Neural Network for Action
Recognition
ABSTRACT: The traditional bag-of-words approach has found a wide range of applications
in computer vision. The standard pipeline consists of a generation of a visual
vocabulary, a quantization of the features into histograms of visual words, and
a classification step for which usually a support vector machine in combination
with a non-linear kernel is used. Given large amounts of data, however, the
model suffers from a lack of discriminative power. This applies particularly
for action recognition, where the vast amount of video features needs to be
subsampled for unsupervised visual vocabulary generation. Moreover, the kernel
computation can be very expensive on large datasets. In this work, we propose a
recurrent neural network that is equivalent to the traditional bag-of-words
approach but enables the application of discriminative training. The model
further allows the kernel computation to be incorporated into the neural network
directly, solving the complexity issue and allowing the complete classification
system to be represented within a single network. We evaluate our method on four
recent action recognition benchmarks and show that the conventional model as
well as sparse coding methods are outperformed.
| no_new_dataset | 0.94545 |
1703.08120 | Abhijit Sharang | Abhijit Sharang, Eric Lau | Recurrent and Contextual Models for Visual Question Answering | null | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a series of recurrent and contextual neural network models for
multiple choice visual question answering on the Visual7W dataset. Motivated by
divergent trends in model complexities in the literature, we explore the
balance between model expressiveness and simplicity by studying incrementally
more complex architectures. We start with LSTM-encoding of input questions and
answers; build on this with context generation by LSTM-encodings of neural
image and question representations and attention over images; and evaluate the
diversity and predictive power of our models and the ensemble thereof. All
models are evaluated against a simple baseline inspired by the current
state-of-the-art, consisting of a simple concatenation of bag-of-words
and CNN representations for the text and images, respectively. Generally, we
observe marked variation in image-reasoning performance between our models not
obvious from their overall performance, as well as evidence of dataset bias.
Our standalone models achieve accuracies up to $64.6\%$, while the ensemble of
all models achieves the best accuracy of $66.67\%$, within $0.5\%$ of the
current state-of-the-art for Visual7W.
| [
{
"version": "v1",
"created": "Thu, 23 Mar 2017 15:57:23 GMT"
}
] | 2017-03-24T00:00:00 | [
[
"Sharang",
"Abhijit",
""
],
[
"Lau",
"Eric",
""
]
] | TITLE: Recurrent and Contextual Models for Visual Question Answering
ABSTRACT: We propose a series of recurrent and contextual neural network models for
multiple choice visual question answering on the Visual7W dataset. Motivated by
divergent trends in model complexities in the literature, we explore the
balance between model expressiveness and simplicity by studying incrementally
more complex architectures. We start with LSTM-encoding of input questions and
answers; build on this with context generation by LSTM-encodings of neural
image and question representations and attention over images; and evaluate the
diversity and predictive power of our models and the ensemble thereof. All
models are evaluated against a simple baseline inspired by the current
state-of-the-art, consisting of a simple concatenation of bag-of-words
and CNN representations for the text and images, respectively. Generally, we
observe marked variation in image-reasoning performance between our models not
obvious from their overall performance, as well as evidence of dataset bias.
Our standalone models achieve accuracies up to $64.6\%$, while the ensemble of
all models achieves the best accuracy of $66.67\%$, within $0.5\%$ of the
current state-of-the-art for Visual7W.
| no_new_dataset | 0.943243 |
1511.08327 | Nathalie Villa-Vialaneix | Robin Genuer (ISPED, SISTM), Jean-Michel Poggi (UPD5, LM-Orsay),
Christine Tuleau-Malot (JAD), Nathalie Villa-Vialaneix (MIAT INRA) | Random Forests for Big Data | null | null | null | null | stat.ML cs.LG math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Big Data is one of the major challenges of statistical science and has
numerous consequences from algorithmic and theoretical viewpoints. Big Data
always involve massive data but they also often include online data and data
heterogeneity. Recently some statistical methods have been adapted to process
Big Data, like linear regression models, clustering methods and bootstrapping
schemes. Based on decision trees combined with aggregation and bootstrap ideas,
random forests were introduced by Breiman in 2001. They are a powerful
nonparametric statistical method allowing to consider in a single and versatile
framework regression problems, as well as two-class and multi-class
classification problems. Focusing on classification problems, this paper
proposes a selective review of available proposals that deal with scaling
random forests to Big Data problems. These proposals rely on parallel
environments or on online adaptations of random forests. We also describe how
related quantities -- such as out-of-bag error and variable importance -- are
addressed in these methods. Then, we formulate various remarks for random
forests in the Big Data context. Finally, we experiment with five variants on
two massive datasets (15 and 120 million observations), a simulated one as well
as real-world data. One variant relies on subsampling, while three others are
related to parallel implementations of random forests and involve either
various adaptations of bootstrap to Big Data or to "divide-and-conquer"
approaches. The fifth variant relies on online learning of random forests.
These numerical experiments highlight the relative performance of the
different variants, as well as some of their limitations.
| [
{
"version": "v1",
"created": "Thu, 26 Nov 2015 09:04:47 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2017 14:51:57 GMT"
}
] | 2017-03-23T00:00:00 | [
[
"Genuer",
"Robin",
"",
"ISPED, SISTM"
],
[
"Poggi",
"Jean-Michel",
"",
"UPD5, LM-Orsay"
],
[
"Tuleau-Malot",
"Christine",
"",
"JAD"
],
[
"Villa-Vialaneix",
"Nathalie",
"",
"MIAT INRA"
]
] | TITLE: Random Forests for Big Data
ABSTRACT: Big Data is one of the major challenges of statistical science and has
numerous consequences from algorithmic and theoretical viewpoints. Big Data
always involve massive data but they also often include online data and data
heterogeneity. Recently some statistical methods have been adapted to process
Big Data, like linear regression models, clustering methods and bootstrapping
schemes. Based on decision trees combined with aggregation and bootstrap ideas,
random forests were introduced by Breiman in 2001. They are a powerful
nonparametric statistical method allowing to consider in a single and versatile
framework regression problems, as well as two-class and multi-class
classification problems. Focusing on classification problems, this paper
proposes a selective review of available proposals that deal with scaling
random forests to Big Data problems. These proposals rely on parallel
environments or on online adaptations of random forests. We also describe how
related quantities -- such as out-of-bag error and variable importance -- are
addressed in these methods. Then, we formulate various remarks for random
forests in the Big Data context. Finally, we experiment with five variants on
two massive datasets (15 and 120 million observations), a simulated one as well
as real-world data. One variant relies on subsampling, while three others are
related to parallel implementations of random forests and involve either
various adaptations of bootstrap to Big Data or to "divide-and-conquer"
approaches. The fifth variant relies on online learning of random forests.
These numerical experiments highlight the relative performance of the
different variants, as well as some of their limitations.
| no_new_dataset | 0.94699 |
1606.07006 | Xiao Yang | Xiao Yang, Craig Macdonald, Iadh Ounis | Using Word Embeddings in Twitter Election Classification | NeuIR Workshop 2016 | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Word embeddings and convolutional neural networks (CNN) have attracted
extensive attention in various classification tasks for Twitter, e.g. sentiment
classification. However, the effect of the configuration used to train and
generate the word embeddings on the classification performance has not been
studied in the existing literature. In this paper, using a Twitter election
classification task that aims to detect election-related tweets, we investigate
the impact of the background dataset used to train the embedding models, the
context window size and the dimensionality of word embeddings on the
classification performance. By comparing the classification results of two word
embedding models, which are trained using different background corpora (e.g.
Wikipedia articles and Twitter microposts), we show that the background data
type should align with the Twitter classification dataset to achieve better
performance. Moreover, by evaluating the results of word embedding models
trained using various context window sizes and dimensionalities, we found that
large context window and dimension sizes are preferable for improving
performance. Our experimental results also show that using word embeddings and
CNN leads to statistically significant improvements over various baselines such
as random, SVM with TF-IDF and SVM with word embeddings.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2016 16:37:55 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Jul 2016 10:22:17 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Mar 2017 18:29:49 GMT"
}
] | 2017-03-23T00:00:00 | [
[
"Yang",
"Xiao",
""
],
[
"Macdonald",
"Craig",
""
],
[
"Ounis",
"Iadh",
""
]
] | TITLE: Using Word Embeddings in Twitter Election Classification
ABSTRACT: Word embeddings and convolutional neural networks (CNN) have attracted
extensive attention in various classification tasks for Twitter, e.g. sentiment
classification. However, the effect of the configuration used to train and
generate the word embeddings on the classification performance has not been
studied in the existing literature. In this paper, using a Twitter election
classification task that aims to detect election-related tweets, we investigate
the impact of the background dataset used to train the embedding models, the
context window size and the dimensionality of word embeddings on the
classification performance. By comparing the classification results of two word
embedding models, which are trained using different background corpora (e.g.
Wikipedia articles and Twitter microposts), we show that the background data
type should align with the Twitter classification dataset to achieve better
performance. Moreover, by evaluating the results of word embedding models
trained using various context window sizes and dimensionalities, we found that
large context window and dimension sizes are preferable for improving
performance. Our experimental results also show that using word embeddings and
CNN leads to statistically significant improvements over various baselines such
as random, SVM with TF-IDF and SVM with word embeddings.
| no_new_dataset | 0.954605 |
1609.03683 | Giorgio Patrini | Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, Lizhen
Qu | Making Deep Neural Networks Robust to Label Noise: a Loss Correction
Approach | Oral paper at CVPR 2017 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a theoretically grounded approach to train deep neural networks,
including recurrent networks, subject to class-dependent label noise. We
propose two procedures for loss correction that are agnostic to both
application domain and network architecture. They simply amount to at most a
matrix inversion and multiplication, provided that we know the probability of
each class being corrupted into another. We further show how one can estimate
these probabilities, adapting a recent technique for noise estimation to the
multi-class setting, and thus providing an end-to-end framework. Extensive
experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of
clothing images employing a diversity of architectures --- stacking dense,
convolutional, pooling, dropout, batch normalization, word embedding, LSTM and
residual layers --- demonstrate the noise robustness of our proposals.
Incidentally, we also prove that, when ReLU is the only non-linearity, the loss
curvature is immune to class-dependent label noise.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2016 05:23:29 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2017 08:48:02 GMT"
}
] | 2017-03-23T00:00:00 | [
[
"Patrini",
"Giorgio",
""
],
[
"Rozza",
"Alessandro",
""
],
[
"Menon",
"Aditya",
""
],
[
"Nock",
"Richard",
""
],
[
"Qu",
"Lizhen",
""
]
] | TITLE: Making Deep Neural Networks Robust to Label Noise: a Loss Correction
Approach
ABSTRACT: We present a theoretically grounded approach to train deep neural networks,
including recurrent networks, subject to class-dependent label noise. We
propose two procedures for loss correction that are agnostic to both
application domain and network architecture. They simply amount to at most a
matrix inversion and multiplication, provided that we know the probability of
each class being corrupted into another. We further show how one can estimate
these probabilities, adapting a recent technique for noise estimation to the
multi-class setting, and thus providing an end-to-end framework. Extensive
experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of
clothing images employing a diversity of architectures --- stacking dense,
convolutional, pooling, dropout, batch normalization, word embedding, LSTM and
residual layers --- demonstrate the noise robustness of our proposals.
Incidentally, we also prove that, when ReLU is the only non-linearity, the loss
curvature is immune to class-dependent label noise.
| no_new_dataset | 0.942082 |
1611.02941 | Jun Sun | Jun Sun, Jérôme Kunegis, Steffen Staab | Predicting User Roles in Social Networks using Transfer Learning with
Feature Transformation | 8 pages, 5 figures, IEEE ICDMW 2016 | null | 10.1109/ICDMW.2016.0026 | null | cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we recognise social roles of people, given a completely unlabelled
social network? We present a transfer learning approach to network role
classification based on feature transformations from each network's local
feature distribution to a global feature space. Experiments are carried out on
real-world datasets. (See manuscript for the full abstract.)
| [
{
"version": "v1",
"created": "Wed, 9 Nov 2016 14:15:14 GMT"
}
] | 2017-03-23T00:00:00 | [
[
"Sun",
"Jun",
""
],
[
"Kunegis",
"Jérôme",
""
],
[
"Staab",
"Steffen",
""
]
] | TITLE: Predicting User Roles in Social Networks using Transfer Learning with
Feature Transformation
ABSTRACT: How can we recognise social roles of people, given a completely unlabelled
social network? We present a transfer learning approach to network role
classification based on feature transformations from each network's local
feature distribution to a global feature space. Experiments are carried out on
real-world datasets. (See manuscript for the full abstract.)
| no_new_dataset | 0.94625 |