| Column | Type | Details |
|---|---|---|
| id | string | lengths 9–16 |
| submitter | string | lengths 3–64, nullable (⌀) |
| authors | string | lengths 5–6.63k |
| title | string | lengths 7–245 |
| comments | string | lengths 1–482, nullable (⌀) |
| journal-ref | string | lengths 4–382, nullable (⌀) |
| doi | string | lengths 9–151, nullable (⌀) |
| report-no | string | 984 classes |
| categories | string | lengths 5–108 |
| license | string | 9 classes |
| abstract | string | lengths 83–3.41k |
| versions | list | lengths 1–20 |
| update_date | timestamp[s] | 2007-05-23 to 2025-04-11 |
| authors_parsed | sequence | lengths 1–427 |
| prompt | string | lengths 166–3.49k |
| label | string | 2 classes |
| prob | float64 | 0.5–0.98 |
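The rows that follow are sample records in this schema, one arXiv paper per row; each carries a TITLE/ABSTRACT `prompt`, a `label` of either `no_new_dataset` or `new_dataset`, and a score `prob` (presumably the label confidence). As a minimal sketch of how a split with this schema could be loaded and inspected using the Hugging Face `datasets` library — the repository name `user/arxiv-new-dataset-detection` below is a placeholder, not the actual dataset ID:

```python
# Hedged sketch: load and inspect a split with the schema above.
# The dataset identifier is a placeholder, not the real repository name.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-detection", split="train")

# Peek at one record's title, label, and probability.
example = ds[0]
print(example["title"])
print(example["label"], example["prob"])

# Keep only the rows flagged as introducing a new dataset.
new_dataset_rows = ds.filter(lambda row: row["label"] == "new_dataset")
print(len(new_dataset_rows), "records labeled new_dataset")
```

Since the `prompt` column already concatenates the title and abstract (the `TITLE:` / `ABSTRACT:` text visible in the rows below), it can be fed directly to a text classifier without further preprocessing.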
1704.02966 | Samuel Rota Bul\`o | Samuel Rota Bul\`o, Gerhard Neuhold, Peter Kontschieder | Loss Max-Pooling for Semantic Image Segmentation | accepted at CVPR 2017 | null | null | null | cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel loss max-pooling concept for handling imbalanced
training data distributions, applicable as alternative loss layer in the
context of deep neural networks for semantic image segmentation. Most
real-world semantic segmentation datasets exhibit long tail distributions with
few object categories comprising the majority of data and consequently biasing
the classifiers towards them. Our method adaptively re-weights the
contributions of each pixel based on their observed losses, targeting
under-performing classification results as often encountered for
under-represented object classes. Our approach goes beyond conventional
cost-sensitive learning attempts through adaptive considerations that allow us
to indirectly address both, inter- and intra-class imbalances. We provide a
theoretical justification of our approach, complementary to experimental
analyses on benchmark datasets. In our experiments on the Cityscapes and Pascal
VOC 2012 segmentation datasets we find consistently improved results,
demonstrating the efficacy of our approach.
| [
{
"version": "v1",
"created": "Mon, 10 Apr 2017 17:44:33 GMT"
}
] | 2017-04-11T00:00:00 | [
[
"Bulò",
"Samuel Rota",
""
],
[
"Neuhold",
"Gerhard",
""
],
[
"Kontschieder",
"Peter",
""
]
] | TITLE: Loss Max-Pooling for Semantic Image Segmentation
ABSTRACT: We introduce a novel loss max-pooling concept for handling imbalanced
training data distributions, applicable as alternative loss layer in the
context of deep neural networks for semantic image segmentation. Most
real-world semantic segmentation datasets exhibit long tail distributions with
few object categories comprising the majority of data and consequently biasing
the classifiers towards them. Our method adaptively re-weights the
contributions of each pixel based on their observed losses, targeting
under-performing classification results as often encountered for
under-represented object classes. Our approach goes beyond conventional
cost-sensitive learning attempts through adaptive considerations that allow us
to indirectly address both, inter- and intra-class imbalances. We provide a
theoretical justification of our approach, complementary to experimental
analyses on benchmark datasets. In our experiments on the Cityscapes and Pascal
VOC 2012 segmentation datasets we find consistently improved results,
demonstrating the efficacy of our approach.
| no_new_dataset | 0.949248 |
1512.01708 | Soham De | Soham De, Gavin Taylor, Tom Goldstein | Variance Reduction for Distributed Stochastic Gradient Descent | null | null | null | null | cs.LG cs.DC math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Variance reduction (VR) methods boost the performance of stochastic gradient
descent (SGD) by enabling the use of larger, constant stepsizes and preserving
linear convergence rates. However, current variance reduced SGD methods require
either high memory usage or an exact gradient computation (using the entire
dataset) at the end of each epoch. This limits the use of VR methods in
practical distributed settings. In this paper, we propose a variance reduction
method, called VR-lite, that does not require full gradient computations or
extra storage. We explore distributed synchronous and asynchronous variants
that are scalable and remain stable with low communication frequency. We
empirically compare both the sequential and distributed algorithms to
state-of-the-art stochastic optimization methods, and find that our proposed
algorithms perform favorably to other stochastic methods.
| [
{
"version": "v1",
"created": "Sat, 5 Dec 2015 22:48:40 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2017 04:07:29 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"De",
"Soham",
""
],
[
"Taylor",
"Gavin",
""
],
[
"Goldstein",
"Tom",
""
]
] | TITLE: Variance Reduction for Distributed Stochastic Gradient Descent
ABSTRACT: Variance reduction (VR) methods boost the performance of stochastic gradient
descent (SGD) by enabling the use of larger, constant stepsizes and preserving
linear convergence rates. However, current variance reduced SGD methods require
either high memory usage or an exact gradient computation (using the entire
dataset) at the end of each epoch. This limits the use of VR methods in
practical distributed settings. In this paper, we propose a variance reduction
method, called VR-lite, that does not require full gradient computations or
extra storage. We explore distributed synchronous and asynchronous variants
that are scalable and remain stable with low communication frequency. We
empirically compare both the sequential and distributed algorithms to
state-of-the-art stochastic optimization methods, and find that our proposed
algorithms perform favorably to other stochastic methods.
| no_new_dataset | 0.947235 |
1512.02970 | Soham De | Soham De and Tom Goldstein | Efficient Distributed SGD with Variance Reduction | In Proceedings of 2016 IEEE International Conference on Data Mining
(ICDM) | null | null | null | cs.LG cs.DC math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic Gradient Descent (SGD) has become one of the most popular
optimization methods for training machine learning models on massive datasets.
However, SGD suffers from two main drawbacks: (i) The noisy gradient updates
have high variance, which slows down convergence as the iterates approach the
optimum, and (ii) SGD scales poorly in distributed settings, typically
experiencing rapidly decreasing marginal benefits as the number of workers
increases. In this paper, we propose a highly parallel method, CentralVR, that
uses error corrections to reduce the variance of SGD gradient updates, and
scales linearly with the number of worker nodes. CentralVR enjoys low iteration
complexity, provably linear convergence rates, and exhibits linear performance
gains up to hundreds of cores for massive datasets. We compare CentralVR to
state-of-the-art parallel stochastic optimization methods on a variety of
models and datasets, and find that our proposed methods exhibit stronger
scaling than other SGD variants.
| [
{
"version": "v1",
"created": "Wed, 9 Dec 2015 17:57:31 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2016 16:03:51 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Apr 2017 02:54:14 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"De",
"Soham",
""
],
[
"Goldstein",
"Tom",
""
]
] | TITLE: Efficient Distributed SGD with Variance Reduction
ABSTRACT: Stochastic Gradient Descent (SGD) has become one of the most popular
optimization methods for training machine learning models on massive datasets.
However, SGD suffers from two main drawbacks: (i) The noisy gradient updates
have high variance, which slows down convergence as the iterates approach the
optimum, and (ii) SGD scales poorly in distributed settings, typically
experiencing rapidly decreasing marginal benefits as the number of workers
increases. In this paper, we propose a highly parallel method, CentralVR, that
uses error corrections to reduce the variance of SGD gradient updates, and
scales linearly with the number of worker nodes. CentralVR enjoys low iteration
complexity, provably linear convergence rates, and exhibits linear performance
gains up to hundreds of cores for massive datasets. We compare CentralVR to
state-of-the-art parallel stochastic optimization methods on a variety of
models and datasets, and find that our proposed methods exhibit stronger
scaling than other SGD variants.
| no_new_dataset | 0.947186 |
1603.09364 | Upal Mahbub | Upal Mahbub, Vishal M. Patel, Deepak Chandra, Brandon Barbello, Rama
Chellappa | Partial Face Detection for Continuous Authentication | null | 2016 IEEE International Conference on Image Processing (ICIP),
Phoenix, AZ, USA, 2016, pp. 2991-2995 | 10.1109/ICIP.2016.7532908 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a part-based technique for real time detection of users' faces
on mobile devices is proposed. This method is specifically designed for
detecting partially cropped and occluded faces captured using a smartphone's
front-facing camera for continuous authentication. The key idea is to detect
facial segments in the frame and cluster the results to obtain the region which
is most likely to contain a face. Extensive experimentation on a mobile dataset
of 50 users shows that our method performs better than many state-of-the-art
face detection methods in terms of accuracy and processing speed.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 20:15:08 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Mahbub",
"Upal",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Chandra",
"Deepak",
""
],
[
"Barbello",
"Brandon",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: Partial Face Detection for Continuous Authentication
ABSTRACT: In this paper, a part-based technique for real time detection of users' faces
on mobile devices is proposed. This method is specifically designed for
detecting partially cropped and occluded faces captured using a smartphone's
front-facing camera for continuous authentication. The key idea is to detect
facial segments in the frame and cluster the results to obtain the region which
is most likely to contain a face. Extensive experimentation on a mobile dataset
of 50 users shows that our method performs better than many state-of-the-art
face detection methods in terms of accuracy and processing speed.
| no_new_dataset | 0.927692 |
1610.07930 | Upal Mahbub | Upal Mahbub, Sayantan Sarkar, Vishal M. Patel, Rama Chellappa | Active User Authentication for Smartphones: A Challenge Data Set and
Benchmark Results | 8 pages, 12 figures, 6 tables. Best poster award at BTAS 2016 | null | 10.1109/BTAS.2016.7791155 | null | cs.CV cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, automated user verification techniques for smartphones are
investigated. A unique non-commercial dataset, the University of Maryland
Active Authentication Dataset 02 (UMDAA-02) for multi-modal user authentication
research is introduced. This paper focuses on three sensors - front camera,
touch sensor and location service while providing a general description for
other modalities. Benchmark results for face detection, face verification,
touch-based user identification and location-based next-place prediction are
presented, which indicate that more robust methods fine-tuned to the mobile
platform are needed to achieve satisfactory verification accuracy. The dataset
will be made available to the research community for promoting additional
research.
| [
{
"version": "v1",
"created": "Tue, 25 Oct 2016 15:56:07 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Mahbub",
"Upal",
""
],
[
"Sarkar",
"Sayantan",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: Active User Authentication for Smartphones: A Challenge Data Set and
Benchmark Results
ABSTRACT: In this paper, automated user verification techniques for smartphones are
investigated. A unique non-commercial dataset, the University of Maryland
Active Authentication Dataset 02 (UMDAA-02) for multi-modal user authentication
research is introduced. This paper focuses on three sensors - front camera,
touch sensor and location service while providing a general description for
other modalities. Benchmark results for face detection, face verification,
touch-based user identification and location-based next-place prediction are
presented, which indicate that more robust methods fine-tuned to the mobile
platform are needed to achieve satisfactory verification accuracy. The dataset
will be made available to the research community for promoting additional
research.
| new_dataset | 0.963541 |
1610.07935 | Upal Mahbub | Upal Mahbub and Rama Chellappa | PATH: Person Authentication using Trace Histories | 8 pages, 9 figures. Best Paper award at IEEE UEMCON 2016 | null | 10.1109/UEMCON.2016.7777911 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a solution to the problem of Active Authentication using trace
histories is addressed. Specifically, the task is to perform user verification
on mobile devices using historical location traces of the user as a function of
time. Considering the movement of a human as a Markovian motion, a modified
Hidden Markov Model (HMM)-based solution is proposed. The proposed method,
namely the Marginally Smoothed HMM (MSHMM), utilizes the marginal probabilities
of location and timing information of the observations to smooth-out the
emission probabilities while training. Hence, it can efficiently handle
unforeseen observations during the test phase. The verification performance of
this method is compared to a sequence matching (SM) method, a Markov
Chain-based method (MC) and an HMM with basic Laplace Smoothing (HMM-lap).
Experimental results using the location information of the UMD Active
Authentication Dataset-02 (UMDAA02) and the GeoLife dataset are presented. The
proposed MSHMM method outperforms the compared methods in terms of equal error
rate (EER). Additionally, the effects of different parameters on the proposed
method are discussed.
| [
{
"version": "v1",
"created": "Tue, 25 Oct 2016 15:57:41 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Mahbub",
"Upal",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: PATH: Person Authentication using Trace Histories
ABSTRACT: In this paper, a solution to the problem of Active Authentication using trace
histories is addressed. Specifically, the task is to perform user verification
on mobile devices using historical location traces of the user as a function of
time. Considering the movement of a human as a Markovian motion, a modified
Hidden Markov Model (HMM)-based solution is proposed. The proposed method,
namely the Marginally Smoothed HMM (MSHMM), utilizes the marginal probabilities
of location and timing information of the observations to smooth-out the
emission probabilities while training. Hence, it can efficiently handle
unforeseen observations during the test phase. The verification performance of
this method is compared to a sequence matching (SM) method, a Markov
Chain-based method (MC) and an HMM with basic Laplace Smoothing (HMM-lap).
Experimental results using the location information of the UMD Active
Authentication Dataset-02 (UMDAA02) and the GeoLife dataset are presented. The
proposed MSHMM method outperforms the compared methods in terms of equal error
rate (EER). Additionally, the effects of different parameters on the proposed
method are discussed.
| no_new_dataset | 0.946151 |
1611.07727 | Umar Iqbal | Umar Iqbal, Anton Milan, Juergen Gall | PoseTrack: Joint Multi-Person Pose Estimation and Tracking | Accepted to CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we introduce the challenging problem of joint multi-person pose
estimation and tracking of an unknown number of persons in unconstrained
videos. Existing methods for multi-person pose estimation in images cannot be
applied directly to this problem, since it also requires to solve the problem
of person association over time in addition to the pose estimation for each
person. We therefore propose a novel method that jointly models multi-person
pose estimation and tracking in a single formulation. To this end, we represent
body joint detections in a video by a spatio-temporal graph and solve an
integer linear program to partition the graph into sub-graphs that correspond
to plausible body pose trajectories for each person. The proposed approach
implicitly handles occlusion and truncation of persons. Since the problem has
not been addressed quantitatively in the literature, we introduce a challenging
"Multi-Person PoseTrack" dataset, and also propose a completely unconstrained
evaluation protocol that does not make any assumptions about the scale, size,
location or the number of persons. Finally, we evaluate the proposed approach
and several baseline methods on our new dataset.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2016 10:30:06 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2016 12:56:22 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Apr 2017 14:16:38 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Iqbal",
"Umar",
""
],
[
"Milan",
"Anton",
""
],
[
"Gall",
"Juergen",
""
]
] | TITLE: PoseTrack: Joint Multi-Person Pose Estimation and Tracking
ABSTRACT: In this work, we introduce the challenging problem of joint multi-person pose
estimation and tracking of an unknown number of persons in unconstrained
videos. Existing methods for multi-person pose estimation in images cannot be
applied directly to this problem, since it also requires to solve the problem
of person association over time in addition to the pose estimation for each
person. We therefore propose a novel method that jointly models multi-person
pose estimation and tracking in a single formulation. To this end, we represent
body joint detections in a video by a spatio-temporal graph and solve an
integer linear program to partition the graph into sub-graphs that correspond
to plausible body pose trajectories for each person. The proposed approach
implicitly handles occlusion and truncation of persons. Since the problem has
not been addressed quantitatively in the literature, we introduce a challenging
"Multi-Person PoseTrack" dataset, and also propose a completely unconstrained
evaluation protocol that does not make any assumptions about the scale, size,
location or the number of persons. Finally, we evaluate the proposed approach
and several baseline methods on our new dataset.
| new_dataset | 0.963643 |
1612.03129 | Zeeshan Hayder | Zeeshan Hayder, Xuming He and Mathieu Salzmann | Boundary-aware Instance Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of instance-level semantic segmentation, which aims at
jointly detecting, segmenting and classifying every individual object in an
image. In this context, existing methods typically propose candidate objects,
usually as bounding boxes, and directly predict a binary mask within each such
proposal. As a consequence, they cannot recover from errors in the object
candidate generation process, such as too small or shifted boxes.
In this paper, we introduce a novel object segment representation based on
the distance transform of the object masks. We then design an object mask
network (OMN) with a new residual-deconvolution architecture that infers such a
representation and decodes it into the final binary object mask. This allows us
to predict masks that go beyond the scope of the bounding boxes and are thus
robust to inaccurate object candidates. We integrate our OMN into a Multitask
Network Cascade framework, and learn the resulting boundary-aware instance
segmentation (BAIS) network in an end-to-end manner. Our experiments on the
PASCAL VOC 2012 and the Cityscapes datasets demonstrate the benefits of our
approach, which outperforms the state-of-the-art in both object proposal
generation and instance segmentation.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2016 18:57:33 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2017 01:43:57 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Hayder",
"Zeeshan",
""
],
[
"He",
"Xuming",
""
],
[
"Salzmann",
"Mathieu",
""
]
] | TITLE: Boundary-aware Instance Segmentation
ABSTRACT: We address the problem of instance-level semantic segmentation, which aims at
jointly detecting, segmenting and classifying every individual object in an
image. In this context, existing methods typically propose candidate objects,
usually as bounding boxes, and directly predict a binary mask within each such
proposal. As a consequence, they cannot recover from errors in the object
candidate generation process, such as too small or shifted boxes.
In this paper, we introduce a novel object segment representation based on
the distance transform of the object masks. We then design an object mask
network (OMN) with a new residual-deconvolution architecture that infers such a
representation and decodes it into the final binary object mask. This allows us
to predict masks that go beyond the scope of the bounding boxes and are thus
robust to inaccurate object candidates. We integrate our OMN into a Multitask
Network Cascade framework, and learn the resulting boundary-aware instance
segmentation (BAIS) network in an end-to-end manner. Our experiments on the
PASCAL VOC 2012 and the Cityscapes datasets demonstrate the benefits of our
approach, which outperforms the state-of-the-art in both object proposal
generation and instance segmentation.
| no_new_dataset | 0.950503 |
1612.03777 | Nima Sedaghat Alvar | Nima Sedaghat, Mohammadreza Zolfaghari, Thomas Brox | Hybrid Learning of Optical Flow and Next Frame Prediction to Boost
Optical Flow in the Wild | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | CNN-based optical flow estimation has attracted attention recently, mainly
due to its impressively high frame rates. These networks perform well on
synthetic datasets, but they are still far behind the classical methods in
real-world videos. This is because there is no ground truth optical flow for
training these networks on real data. In this paper, we boost CNN-based optical
flow estimation in real scenes with the help of the freely available
self-supervised task of next-frame prediction. To this end, we train the
network in a hybrid way, providing it with a mixture of synthetic and real
videos. With the help of a sample-variant multi-tasking architecture, the
network is trained on different tasks depending on the availability of
ground-truth. We also experiment with the prediction of "next-flow" instead of
estimation of the current flow, which is intuitively closer to the task of
next-frame prediction and yields favorable results. We demonstrate the
improvement in optical flow estimation on the real-world KITTI benchmark.
Additionally, we test the optical flow indirectly in an action classification
scenario. As a side product of this work, we report significant improvements
over state-of-the-art in the task of next-frame prediction.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 16:45:08 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2017 13:01:12 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Sedaghat",
"Nima",
""
],
[
"Zolfaghari",
"Mohammadreza",
""
],
[
"Brox",
"Thomas",
""
]
] | TITLE: Hybrid Learning of Optical Flow and Next Frame Prediction to Boost
Optical Flow in the Wild
ABSTRACT: CNN-based optical flow estimation has attracted attention recently, mainly
due to its impressively high frame rates. These networks perform well on
synthetic datasets, but they are still far behind the classical methods in
real-world videos. This is because there is no ground truth optical flow for
training these networks on real data. In this paper, we boost CNN-based optical
flow estimation in real scenes with the help of the freely available
self-supervised task of next-frame prediction. To this end, we train the
network in a hybrid way, providing it with a mixture of synthetic and real
videos. With the help of a sample-variant multi-tasking architecture, the
network is trained on different tasks depending on the availability of
ground-truth. We also experiment with the prediction of "next-flow" instead of
estimation of the current flow, which is intuitively closer to the task of
next-frame prediction and yields favorable results. We demonstrate the
improvement in optical flow estimation on the real-world KITTI benchmark.
Additionally, we test the optical flow indirectly in an action classification
scenario. As a side product of this work, we report significant improvements
over state-of-the-art in the task of next-frame prediction.
| no_new_dataset | 0.952574 |
1701.00669 | Matthias Vestner | Matthias Vestner, Roee Litman, Emanuele Rodol\`a, Alex Bronstein,
Daniel Cremers | Product Manifold Filter: Non-Rigid Shape Correspondence via Kernel
Density Estimation in the Product Space | To appear at CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many algorithms for the computation of correspondences between deformable
shapes rely on some variant of nearest neighbor matching in a descriptor space.
Such are, for example, various point-wise correspondence recovery algorithms
used as a post-processing stage in the functional correspondence framework.
Such frequently used techniques implicitly make restrictive assumptions (e.g.,
near-isometry) on the considered shapes and in practice suffer from lack of
accuracy and result in poor surjectivity. We propose an alternative recovery
technique capable of guaranteeing a bijective correspondence and producing
significantly higher accuracy and smoothness. Unlike other methods our approach
does not depend on the assumption that the analyzed shapes are isometric. We
derive the proposed method from the statistical framework of kernel density
estimation and demonstrate its performance on several challenging deformable 3D
shape matching datasets.
| [
{
"version": "v1",
"created": "Tue, 3 Jan 2017 11:43:44 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2017 11:40:41 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Vestner",
"Matthias",
""
],
[
"Litman",
"Roee",
""
],
[
"Rodolà",
"Emanuele",
""
],
[
"Bronstein",
"Alex",
""
],
[
"Cremers",
"Daniel",
""
]
] | TITLE: Product Manifold Filter: Non-Rigid Shape Correspondence via Kernel
Density Estimation in the Product Space
ABSTRACT: Many algorithms for the computation of correspondences between deformable
shapes rely on some variant of nearest neighbor matching in a descriptor space.
Such are, for example, various point-wise correspondence recovery algorithms
used as a post-processing stage in the functional correspondence framework.
Such frequently used techniques implicitly make restrictive assumptions (e.g.,
near-isometry) on the considered shapes and in practice suffer from lack of
accuracy and result in poor surjectivity. We propose an alternative recovery
technique capable of guaranteeing a bijective correspondence and producing
significantly higher accuracy and smoothness. Unlike other methods our approach
does not depend on the assumption that the analyzed shapes are isometric. We
derive the proposed method from the statistical framework of kernel density
estimation and demonstrate its performance on several challenging deformable 3D
shape matching datasets.
| no_new_dataset | 0.950457 |
1703.08388 | Md Abul Hasnat | Abul Hasnat, Julien Bohn\'e, Jonathan Milgram, St\'ephane Gentric, and
Liming Chen | DeepVisage: Making face recognition simple yet with powerful
generalization skills | Second version (12 pages), under review | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face recognition (FR) methods report significant performance by adopting the
convolutional neural network (CNN) based learning methods. Although CNNs are
mostly trained by optimizing the softmax loss, the recent trend shows an
improvement of accuracy with different strategies, such as task-specific CNN
learning with different loss functions, fine-tuning on target dataset, metric
learning and concatenating features from multiple CNNs. Incorporating these
tasks obviously requires additional efforts. Moreover, it demotivates the
discovery of efficient CNN models for FR which are trained only with identity
labels. We focus on this fact and propose an easily trainable and single CNN
based FR method. Our CNN model exploits the residual learning framework.
Additionally, it uses normalized features to compute the loss. Our extensive
experiments show excellent generalization on different datasets. We obtain very
competitive and state-of-the-art results on the LFW, IJB-A, YouTube faces and
CACD datasets.
| [
{
"version": "v1",
"created": "Fri, 24 Mar 2017 12:41:38 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2017 11:37:21 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Hasnat",
"Abul",
""
],
[
"Bohné",
"Julien",
""
],
[
"Milgram",
"Jonathan",
""
],
[
"Gentric",
"Stéphane",
""
],
[
"Chen",
"Liming",
""
]
] | TITLE: DeepVisage: Making face recognition simple yet with powerful
generalization skills
ABSTRACT: Face recognition (FR) methods report significant performance by adopting the
convolutional neural network (CNN) based learning methods. Although CNNs are
mostly trained by optimizing the softmax loss, the recent trend shows an
improvement of accuracy with different strategies, such as task-specific CNN
learning with different loss functions, fine-tuning on target dataset, metric
learning and concatenating features from multiple CNNs. Incorporating these
tasks obviously requires additional efforts. Moreover, it demotivates the
discovery of efficient CNN models for FR which are trained only with identity
labels. We focus on this fact and propose an easily trainable and single CNN
based FR method. Our CNN model exploits the residual learning framework.
Additionally, it uses normalized features to compute the loss. Our extensive
experiments show excellent generalization on different datasets. We obtain very
competitive and state-of-the-art results on the LFW, IJB-A, YouTube faces and
CACD datasets.
| no_new_dataset | 0.946695 |
1704.01474 | Kai Chen | Kai Chen and Mathias Seuret | Convolutional Neural Networks for Page Segmentation of Historical
Document Images | null | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a Convolutional Neural Network (CNN) based page
segmentation method for handwritten historical document images. We consider
page segmentation as a pixel labeling problem, i.e., each pixel is classified
as one of the predefined classes. Traditional methods in this area rely on
carefully hand-crafted features or large amounts of prior knowledge. In
contrast, we propose to learn features from raw image pixels using a CNN. While
many researchers focus on developing deep CNN architectures to solve different
problems, we train a simple CNN with only one convolution layer. We show that
the simple architecture achieves competitive results against other deep
architectures on different public datasets. Experiments also demonstrate the
effectiveness and superiority of the proposed method compared to previous
methods.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 15:12:25 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2017 10:16:49 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Chen",
"Kai",
""
],
[
"Seuret",
"Mathias",
""
]
] | TITLE: Convolutional Neural Networks for Page Segmentation of Historical
Document Images
ABSTRACT: This paper presents a Convolutional Neural Network (CNN) based page
segmentation method for handwritten historical document images. We consider
page segmentation as a pixel labeling problem, i.e., each pixel is classified
as one of the predefined classes. Traditional methods in this area rely on
carefully hand-crafted features or large amounts of prior knowledge. In
contrast, we propose to learn features from raw image pixels using a CNN. While
many researchers focus on developing deep CNN architectures to solve different
problems, we train a simple CNN with only one convolution layer. We show that
the simple architecture achieves competitive results against other deep
architectures on different public datasets. Experiments also demonstrate the
effectiveness and superiority of the proposed method compared to previous
methods.
| no_new_dataset | 0.949248 |
1704.01897 | Wei-Shi Zheng | Long-Kai Huang, Qiang Yang, Wei-Shi Zheng | Online Hashing | To appear in IEEE Transactions on Neural Networks and Learning
Systems (DOI: 10.1109/TNNLS.2017.2689242) | null | 10.1109/TNNLS.2017.2689242 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although hash function learning algorithms have achieved great success in
recent years, most existing hash models are off-line, which are not suitable
for processing sequential or online data. To address this problem, this work
proposes an online hash model to accommodate data coming in stream for online
learning. Specifically, a new loss function is proposed to measure the
similarity loss between a pair of data samples in hamming space. Then, a
structured hash model is derived and optimized in a passive-aggressive way.
Theoretical analysis on the upper bound of the cumulative loss for the proposed
online hash model is provided. Furthermore, we extend our online hashing from a
single-model to a multi-model online hashing that trains multiple models so as
to retain diverse online hashing models in order to avoid biased update. The
competitive efficiency and effectiveness of the proposed online hash models are
verified through extensive experiments on several large-scale datasets as
compared to related hashing methods.
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2017 15:44:29 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Huang",
"Long-Kai",
""
],
[
"Yang",
"Qiang",
""
],
[
"Zheng",
"Wei-Shi",
""
]
] | TITLE: Online Hashing
ABSTRACT: Although hash function learning algorithms have achieved great success in
recent years, most existing hash models are off-line, which are not suitable
for processing sequential or online data. To address this problem, this work
proposes an online hash model to accommodate data coming in stream for online
learning. Specifically, a new loss function is proposed to measure the
similarity loss between a pair of data samples in hamming space. Then, a
structured hash model is derived and optimized in a passive-aggressive way.
Theoretical analysis on the upper bound of the cumulative loss for the proposed
online hash model is provided. Furthermore, we extend our online hashing from a
single-model to a multi-model online hashing that trains multiple models so as
to retain diverse online hashing models in order to avoid biased update. The
competitive efficiency and effectiveness of the proposed online hash models are
verified through extensive experiments on several large-scale datasets as
compared to related hashing methods.
| no_new_dataset | 0.947478 |
1704.02083 | Li Sulimowicz Mrs. | Li Sulimowicz, Ishfaq Ahmad | "RAPID" Regions-of-Interest Detection In Big Histopathological Images | 6 pages, 5 figures, ICME conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sheer volume and size of histopathological images (e.g.,10^6 MPixel)
underscores the need for faster and more accurate Regions-of-interest (ROI)
detection algorithms. In this paper, we propose such an algorithm, which has
four main components that help achieve greater accuracy and faster speed:
First, while using coarse-to-fine topology preserving segmentation as the
baseline, the proposed algorithm uses a superpixel regularity optimization
scheme for avoiding irregular and extremely small superpixels. Second, the
proposed technique employs a prediction strategy to focus only on important
superpixels at finer image levels. Third, the algorithm reuses the information
gained from the coarsest image level at other finer image levels. Both the
second and the third components drastically lower the complexity. Fourth, the
algorithm employs a highly effective parallelization scheme using adaptive
data partitioning, which gains high speedup. Experimental results, conducted on
the BSD500 [1] and 500 whole-slide histological images from the National Lung
Screening Trial (NLST) dataset, confirm that the proposed algorithm gained 13
times speedup compared with the baseline, and around 160 times compared with
SLIC [11], without losing accuracy.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 03:34:40 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Sulimowicz",
"Li",
""
],
[
"Ahmad",
"Ishfaq",
""
]
] | TITLE: "RAPID" Regions-of-Interest Detection In Big Histopathological Images
ABSTRACT: The sheer volume and size of histopathological images (e.g.,10^6 MPixel)
underscores the need for faster and more accurate Regions-of-interest (ROI)
detection algorithms. In this paper, we propose such an algorithm, which has
four main components that help achieve greater accuracy and faster speed:
First, while using coarse-to-fine topology preserving segmentation as the
baseline, the proposed algorithm uses a superpixel regularity optimization
scheme for avoiding irregular and extremely small superpixels. Second, the
proposed technique employs a prediction strategy to focus only on important
superpixels at finer image levels. Third, the algorithm reuses the information
gained from the coarsest image level at other finer image levels. Both the
second and the third components drastically lower the complexity. Fourth, the
algorithm employs a highly effective parallelization scheme using adaptive
data partitioning, which gains high speedup. Experimental results, conducted on
the BSD500 [1] and 500 whole-slide histological images from the National Lung
Screening Trial (NLST) dataset, confirm that the proposed algorithm gained 13
times speedup compared with the baseline, and around 160 times compared with
SLIC [11], without losing accuracy.
| no_new_dataset | 0.951953 |
1704.02095 | Alon Sela | Alon Sela, Orit Milo-Cohen, Irad Ben-Gal, Eugene Kagan | Increasing the Flow of Rumors in Social Networks by Spreading Groups | null | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper addresses a method for spreading messages in social networks
through an initial acceleration by Spreading Groups. These groups start the
spread which eventually reaches a larger portion of the network. The use of
spreading groups creates a final flow which resembles the spread through the
nodes with the highest level of influence (opinion leaders). While harnessing
opinion leaders to spread messages is generally costly, the formation of
spreading groups is merely a technical issue, and can be done by computerized
bots. The paper presents an information flow model and inspects the model
through a dataset of Nasdaq-related tweets.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 05:40:46 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Sela",
"Alon",
""
],
[
"Milo-Cohen",
"Orit",
""
],
[
"Ben-Gal",
"Irad",
""
],
[
"Kagan",
"Eugene",
""
]
] | TITLE: Increasing the Flow of Rumors in Social Networks by Spreading Groups
ABSTRACT: The paper addresses a method for spreading messages in social networks
through an initial acceleration by Spreading Groups. These groups start the
spread which eventually reaches a larger portion of the network. The use of
spreading groups creates a final flow which resembles the spread through the
nodes with the highest level of influence (opinion leaders). While harnessing
opinion leaders to spread messages is generally costly, the formation of
spreading groups is merely a technical issue, and can be done by computerized
bots. The paper presents an information flow model and inspects the model
through a dataset of Nasdaq-related tweets.
| new_dataset | 0.926037 |
1704.02117 | Upal Mahbub | Upal Mahbub, Sayantan Sarkar, and Rama Chellappa | Partial Face Detection in the Mobile Domain | 18 pages, 22 figures, 3 tables, submitted to IEEE Transactions on
Image Processing | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generic face detection algorithms do not perform well in the mobile domain
due to significant presence of occluded and partially visible faces. One
promising technique to handle the challenge of partial faces is to design face
detectors based on facial segments. In this paper two different approaches of
facial segment-based face detection are discussed, namely, proposal-based
detection and detection by end-to-end regression. Methods that follow the first
approach rely on generating face proposals that contain facial segment
information. The three detectors following this approach, namely Facial
Segment-based Face Detector (FSFD), SegFace and DeepSegFace, discussed in this
paper, perform binary classification on each proposal based on features learned
from facial segments. The process of proposal generation, however, needs to be
handled separately, which can be very time consuming, and is not truly
necessary given the nature of the active authentication problem. Hence a novel
algorithm, Deep Regression-based User Image Detector (DRUID) is proposed, which
shifts from the classification to the regression paradigm, thus obviating the
need for proposal generation. DRUID has a unique network architecture with
customized loss functions, is trained using a relatively small amount of data
by utilizing a novel data augmentation scheme and is fast since it outputs the
bounding boxes of a face and its segments in a single pass. Being robust to
occlusion by design, the facial segment-based face detection methods,
especially DRUID show superior performance over other state-of-the-art face
detectors in terms of precision-recall and ROC curve on two mobile face
datasets.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 07:43:11 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Mahbub",
"Upal",
""
],
[
"Sarkar",
"Sayantan",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: Partial Face Detection in the Mobile Domain
ABSTRACT: Generic face detection algorithms do not perform well in the mobile domain
due to significant presence of occluded and partially visible faces. One
promising technique to handle the challenge of partial faces is to design face
detectors based on facial segments. In this paper two different approaches of
facial segment-based face detection are discussed, namely, proposal-based
detection and detection by end-to-end regression. Methods that follow the first
approach rely on generating face proposals that contain facial segment
information. The three detectors following this approach, namely Facial
Segment-based Face Detector (FSFD), SegFace and DeepSegFace, discussed in this
paper, perform binary classification on each proposal based on features learned
from facial segments. The process of proposal generation, however, needs to be
handled separately, which can be very time consuming, and is not truly
necessary given the nature of the active authentication problem. Hence a novel
algorithm, Deep Regression-based User Image Detector (DRUID) is proposed, which
shifts from the classification to the regression paradigm, thus obviating the
need for proposal generation. DRUID has a unique network architecture with
customized loss functions, is trained using a relatively small amount of data
by utilizing a novel data augmentation scheme and is fast since it outputs the
bounding boxes of a face and its segments in a single pass. Being robust to
occlusion by design, the facial segment-based face detection methods,
especially DRUID show superior performance over other state-of-the-art face
detectors in terms of precision-recall and ROC curve on two mobile face
datasets.
| no_new_dataset | 0.951639 |
1704.02147 | Vincent Cohen-Addad | Vincent Cohen-Addad and Varun Kanade and Frederik Mallmann-Trenn and
Claire Mathieu | Hierarchical Clustering: Objective Functions and Algorithms | null | null | null | null | cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical clustering is a recursive partitioning of a dataset into
clusters at an increasingly finer granularity. Motivated by the fact that most
work on hierarchical clustering was based on providing algorithms, rather than
optimizing a specific objective, Dasgupta framed similarity-based hierarchical
clustering as a combinatorial optimization problem, where a `good' hierarchical
clustering is one that minimizes some cost function. He showed that this cost
function has certain desirable properties.
We take an axiomatic approach to defining `good' objective functions for both
similarity and dissimilarity-based hierarchical clustering. We characterize a
set of "admissible" objective functions (that includes Dasgupta's one) that
have the property that when the input admits a `natural' hierarchical
clustering, it has an optimal value.
Equipped with a suitable objective function, we analyze the performance of
practical algorithms, as well as develop better algorithms. For
similarity-based hierarchical clustering, Dasgupta showed that the divisive
sparsest-cut approach achieves an $O(\log^{3/2} n)$-approximation. We give a
refined analysis of the algorithm and show that it in fact achieves an
$O(\sqrt{\log n})$-approx. (Charikar and Chatziafratis independently proved
that it is a $O(\sqrt{\log n})$-approx.). This improves upon the LP-based
$O(\log n)$-approx. of Roy and Pokutta. For dissimilarity-based hierarchical
clustering, we show that the classic average-linkage algorithm gives a factor 2
approx., and provide a simple and better algorithm that gives a factor 3/2
approx..
Finally, we consider `beyond-worst-case' scenario through a generalisation of
the stochastic block model for hierarchical clustering. We show that Dasgupta's
cost function has desirable properties for these inputs and we provide a simple
1 + o(1)-approximation in this setting.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 09:14:28 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Cohen-Addad",
"Vincent",
""
],
[
"Kanade",
"Varun",
""
],
[
"Mallmann-Trenn",
"Frederik",
""
],
[
"Mathieu",
"Claire",
""
]
] | TITLE: Hierarchical Clustering: Objective Functions and Algorithms
ABSTRACT: Hierarchical clustering is a recursive partitioning of a dataset into
clusters at an increasingly finer granularity. Motivated by the fact that most
work on hierarchical clustering was based on providing algorithms, rather than
optimizing a specific objective, Dasgupta framed similarity-based hierarchical
clustering as a combinatorial optimization problem, where a `good' hierarchical
clustering is one that minimizes some cost function. He showed that this cost
function has certain desirable properties.
We take an axiomatic approach to defining `good' objective functions for both
similarity and dissimilarity-based hierarchical clustering. We characterize a
set of "admissible" objective functions (that includes Dasgupta's one) that
have the property that when the input admits a `natural' hierarchical
clustering, it has an optimal value.
Equipped with a suitable objective function, we analyze the performance of
practical algorithms, as well as develop better algorithms. For
similarity-based hierarchical clustering, Dasgupta showed that the divisive
sparsest-cut approach achieves an $O(\log^{3/2} n)$-approximation. We give a
refined analysis of the algorithm and show that it in fact achieves an
$O(\sqrt{\log n})$-approx. (Charikar and Chatziafratis independently proved
that it is a $O(\sqrt{\log n})$-approx.). This improves upon the LP-based
$O(\log n)$-approx. of Roy and Pokutta. For dissimilarity-based hierarchical
clustering, we show that the classic average-linkage algorithm gives a factor 2
approx., and provide a simple and better algorithm that gives a factor 3/2
approx..
Finally, we consider `beyond-worst-case' scenario through a generalisation of
the stochastic block model for hierarchical clustering. We show that Dasgupta's
cost function has desirable properties for these inputs and we provide a simple
1 + o(1)-approximation in this setting.
| no_new_dataset | 0.950778 |
1704.02157 | Dan Xu | Dan Xu, Elisa Ricci, Wanli Ouyang, Xiaogang Wang, Nicu Sebe | Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular
Depth Estimation | Accepted as a spotlight paper at CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of depth estimation from a single still
image. Inspired by recent works on multi-scale convolutional neural networks
(CNN), we propose a deep model which fuses complementary information derived
from multiple CNN side outputs. Different from previous methods, the
integration is obtained by means of continuous Conditional Random Fields
(CRFs). In particular, we propose two different variations, one based on a
cascade of multiple CRFs, the other on a unified graphical model. By designing
a novel CNN implementation of mean-field updates for continuous CRFs, we show
that both proposed models can be regarded as sequential deep networks and that
training can be performed end-to-end. Through extensive experimental evaluation
we demonstrate the effectiveness of the proposed approach and establish new
state of the art results on publicly available datasets.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 09:39:01 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Xu",
"Dan",
""
],
[
"Ricci",
"Elisa",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Wang",
"Xiaogang",
""
],
[
"Sebe",
"Nicu",
""
]
] | TITLE: Multi-Scale Continuous CRFs as Sequential Deep Networks for Monocular
Depth Estimation
ABSTRACT: This paper addresses the problem of depth estimation from a single still
image. Inspired by recent works on multi-scale convolutional neural networks
(CNN), we propose a deep model which fuses complementary information derived
from multiple CNN side outputs. Different from previous methods, the
integration is obtained by means of continuous Conditional Random Fields
(CRFs). In particular, we propose two different variations, one based on a
cascade of multiple CRFs, the other on a unified graphical model. By designing
a novel CNN implementation of mean-field updates for continuous CRFs, we show
that both proposed models can be regarded as sequential deep networks and that
training can be performed end-to-end. Through extensive experimental evaluation
we demonstrate the effectiveness of the proposed approach and establish new
state of the art results on publicly available datasets.
| no_new_dataset | 0.950915 |
1704.02166 | Yanwei Fu | Weidong Yin, Yanwei Fu, Leonid Sigal and Xiangyang Xue | Semi-Latent GAN: Learning to generate and modify facial images from
attributes | 10 pages, submitted to ICCV 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating and manipulating human facial images using high-level attributal
controls are important and interesting problems. The models proposed in
previous work can solve one of these two problems (generation or manipulation),
but not both coherently. This paper proposes a novel model that learns how to
both generate and modify the facial image from high-level semantic attributes.
Our key idea is to formulate a Semi-Latent Facial Attribute Space (SL-FAS) to
systematically learn relationship between user-defined and latent attributes,
as well as between those attributes and RGB imagery. As part of this newly
formulated space, we propose a new model --- SL-GAN which is a specific form of
Generative Adversarial Network. Finally, we present an iterative training
algorithm for SL-GAN. The experiments on recent CelebA and CASIA-WebFace
datasets validate the effectiveness of our proposed framework. We will also
make data, pre-trained models and code available.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 10:04:06 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Yin",
"Weidong",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Sigal",
"Leonid",
""
],
[
"Xue",
"Xiangyang",
""
]
] | TITLE: Semi-Latent GAN: Learning to generate and modify facial images from
attributes
ABSTRACT: Generating and manipulating human facial images using high-level attributal
controls are important and interesting problems. The models proposed in
previous work can solve one of these two problems (generation or manipulation),
but not both coherently. This paper proposes a novel model that learns how to
both generate and modify the facial image from high-level semantic attributes.
Our key idea is to formulate a Semi-Latent Facial Attribute Space (SL-FAS) to
systematically learn relationship between user-defined and latent attributes,
as well as between those attributes and RGB imagery. As part of this newly
formulated space, we propose a new model --- SL-GAN which is a specific form of
Generative Adversarial Network. Finally, we present an iterative training
algorithm for SL-GAN. The experiments on recent CelebA and CASIA-WebFace
datasets validate the effectiveness of our proposed framework. We will also
make data, pre-trained models and code available.
| no_new_dataset | 0.947866 |
1704.02218 | Hamed R. Tavakoli | Hamed R. Tavakoli, Jorma Laaksonen, and Esa Rahtu | Investigating Natural Image Pleasantness Recognition using Deep Features
and Eye Tracking for Loosely Controlled Human-computer Interaction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper revisits recognition of natural image pleasantness by employing
deep convolutional neural networks and affordable eye trackers. There exist
several approaches to recognize image pleasantness: (1) computer vision, and
(2) psychophysical signals. For natural images, computer vision approaches have
not been as successful as for abstract paintings and is lagging behind the
psychophysical signals like eye movements. Despite better results, the
scalability of eye movements is adversely affected by the sensor cost. While
the introduction of affordable sensors have helped the scalability issue by
making the sensors more accessible, the application of such sensors in a
loosely controlled human-computer interaction setup is not yet studied for
affective image tagging. On the other hand, deep convolutional neural networks
have boosted the performance of vision-based techniques significantly in recent
years. To investigate the current status in regard to affective image tagging,
we (1) introduce a new eye movement dataset using an affordable eye tracker,
(2) study the use of deep neural networks for pleasantness recognition, (3)
investigate the gap between deep features and eye movements. To meet these
ends, we record eye movements in a less controlled setup, akin to daily
human-computer interaction. We assess features from eye movements, visual
features, and their combination. Our results show that (1) recognizing natural
image pleasantness from eye movement under less restricted setup is difficult
and previously used techniques are prone to fail, and (2) visual class
categories are strong cues for predicting pleasantness, due to their
correlation with emotions, necessitating careful study of this phenomenon. This
latter finding is alerting as some deep learning approaches may fit to the
class category bias.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 13:16:17 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Tavakoli",
"Hamed R.",
""
],
[
"Laaksonen",
"Jorma",
""
],
[
"Rahtu",
"Esa",
""
]
] | TITLE: Investigating Natural Image Pleasantness Recognition using Deep Features
and Eye Tracking for Loosely Controlled Human-computer Interaction
ABSTRACT: This paper revisits recognition of natural image pleasantness by employing
deep convolutional neural networks and affordable eye trackers. There exist
several approaches to recognize image pleasantness: (1) computer vision, and
(2) psychophysical signals. For natural images, computer vision approaches have
not been as successful as for abstract paintings and is lagging behind the
psychophysical signals like eye movements. Despite better results, the
scalability of eye movements is adversely affected by the sensor cost. While
the introduction of affordable sensors have helped the scalability issue by
making the sensors more accessible, the application of such sensors in a
loosely controlled human-computer interaction setup is not yet studied for
affective image tagging. On the other hand, deep convolutional neural networks
have boosted the performance of vision-based techniques significantly in recent
years. To investigate the current status in regard to affective image tagging,
we (1) introduce a new eye movement dataset using an affordable eye tracker,
(2) study the use of deep neural networks for pleasantness recognition, (3)
investigate the gap between deep features and eye movements. To meet these
ends, we record eye movements in a less controlled setup, akin to daily
human-computer interaction. We assess features from eye movements, visual
features, and their combination. Our results show that (1) recognizing natural
image pleasantness from eye movements under a less restricted setup is difficult
and previously used techniques are prone to fail, and (2) visual class
categories are strong cues for predicting pleasantness, due to their
correlation with emotions, necessitating careful study of this phenomenon. This
latter finding is alarming, as some deep learning approaches may fit to the
class category bias.
| new_dataset | 0.963848 |
1704.02224 | Xiaoming Deng | Xiaoming Deng, Shuo Yang, Yinda Zhang, Ping Tan, Liang Chang, Hongan
Wang | Hand3D: Hand Pose Estimation using 3D Neural Network | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel 3D neural network architecture for 3D hand pose estimation
from a single depth image. Different from previous works that mostly run on 2D
depth image domain and require an intermediate or post-processing step to bring in the
supervision from 3D space, we convert the depth map to a 3D volumetric
representation, and feed it into a 3D convolutional neural network (CNN) to
directly produce the pose in 3D, requiring no further processing. Our system does
not require the ground truth reference point for initialization, and our
network architecture naturally integrates both local feature and global context
in 3D space. To increase the coverage of the hand pose space of the training
data, we render synthetic depth image by transferring hand pose from existing
real image datasets. We evaluate our algorithm on two public benchmarks and
achieve state-of-the-art performance. The synthetic hand pose dataset will
be available.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 13:27:48 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Deng",
"Xiaoming",
""
],
[
"Yang",
"Shuo",
""
],
[
"Zhang",
"Yinda",
""
],
[
"Tan",
"Ping",
""
],
[
"Chang",
"Liang",
""
],
[
"Wang",
"Hongan",
""
]
] | TITLE: Hand3D: Hand Pose Estimation using 3D Neural Network
ABSTRACT: We propose a novel 3D neural network architecture for 3D hand pose estimation
from a single depth image. Different from previous works that mostly run on 2D
depth image domain and require an intermediate or post-processing step to bring in the
supervision from 3D space, we convert the depth map to a 3D volumetric
representation, and feed it into a 3D convolutional neural network (CNN) to
directly produce the pose in 3D, requiring no further processing. Our system does
not require the ground truth reference point for initialization, and our
network architecture naturally integrates both local feature and global context
in 3D space. To increase the coverage of the hand pose space of the training
data, we render synthetic depth image by transferring hand pose from existing
real image datasets. We evaluate our algorithm on two public benchmarks and
achieve state-of-the-art performance. The synthetic hand pose dataset will
be available.
| no_new_dataset | 0.944382 |
1704.02227 | Maciej Zieba | Maciej Zieba, Lei Wang | Training Triplet Networks with GAN | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Triplet networks are widely used models that are characterized by good
performance in classification and retrieval tasks. In this work we propose to
train a triplet network by using it as the discriminator in Generative
Adversarial Nets (GANs). We make use of the good capability of representation
learning of the discriminator to increase the predictive quality of the model.
We evaluated our approach on Cifar10 and MNIST datasets and observed
significant improvement on the classification performance using the simple k-nn
method.
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2017 17:09:20 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Zieba",
"Maciej",
""
],
[
"Wang",
"Lei",
""
]
] | TITLE: Training Triplet Networks with GAN
ABSTRACT: Triplet networks are widely used models that are characterized by good
performance in classification and retrieval tasks. In this work we propose to
train a triplet network by using it as the discriminator in Generative
Adversarial Nets (GANs). We make use of the good capability of representation
learning of the discriminator to increase the predictive quality of the model.
We evaluated our approach on Cifar10 and MNIST datasets and observed
significant improvement on the classification performance using the simple k-nn
method.
| no_new_dataset | 0.953837 |
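The triplet objective underlying the network in the preceding record penalizes an anchor embedding that ends up closer to a negative than to its positive by less than a margin. A minimal sketch, assuming a PyTorch-style tensor API with random embeddings standing in for discriminator features; the margin and batch size are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Squared Euclidean distances between embedding pairs.
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    # Hinge on the margin: anchors should sit closer to positives than to negatives.
    return F.relu(d_pos - d_neg + margin).mean()

# Random embeddings standing in for features produced by the discriminator.
a, p, n = (torch.randn(8, 128) for _ in range(3))
print(triplet_loss(a, p, n).item())
```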
1704.02312 | Yaoyuan Zhang | Yaoyuan Zhang, Zhenxu Ye, Yansong Feng, Dongyan Zhao, Rui Yan | A Constrained Sequence-to-Sequence Neural Model for Sentence
Simplification | null | null | null | null | cs.CL cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sentence simplification reduces semantic complexity to benefit people with
language impairments. Previous simplification studies on the sentence level and
word level have achieved promising results but also meet great challenges. For
sentence-level studies, sentences after simplification are fluent but sometimes
are not really simplified. For word-level studies, words are simplified but
also have potential grammar errors due to different usages of words before and
after simplification. In this paper, we propose a two-step simplification
framework by combining both the word-level and the sentence-level
simplifications, making use of their corresponding advantages. Based on the
two-step framework, we implement a novel constrained neural generation model to
simplify sentences given simplified words. The final results on Wikipedia and
Simple Wikipedia aligned datasets indicate that our method yields better
performance than various baselines.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2017 17:53:24 GMT"
}
] | 2017-04-10T00:00:00 | [
[
"Zhang",
"Yaoyuan",
""
],
[
"Ye",
"Zhenxu",
""
],
[
"Feng",
"Yansong",
""
],
[
"Zhao",
"Dongyan",
""
],
[
"Yan",
"Rui",
""
]
] | TITLE: A Constrained Sequence-to-Sequence Neural Model for Sentence
Simplification
ABSTRACT: Sentence simplification reduces semantic complexity to benefit people with
language impairments. Previous simplification studies on the sentence level and
word level have achieved promising results but also meet great challenges. For
sentence-level studies, sentences after simplification are fluent but sometimes
are not really simplified. For word-level studies, words are simplified but
also have potential grammar errors due to different usages of words before and
after simplification. In this paper, we propose a two-step simplification
framework by combining both the word-level and the sentence-level
simplifications, making use of their corresponding advantages. Based on the
two-step framework, we implement a novel constrained neural generation model to
simplify sentences given simplified words. The final results on Wikipedia and
Simple Wikipedia aligned datasets indicate that our method yields better
performance than various baselines.
| no_new_dataset | 0.948155 |
1512.00442 | Ke Li | Ke Li, Jitendra Malik | Fast k-Nearest Neighbour Search via Dynamic Continuous Indexing | 13 pages, 6 figures; International Conference on Machine Learning
(ICML), 2016. This version corrects a typo in the pseudocode | null | null | null | cs.DS cs.AI cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing methods for retrieving k-nearest neighbours suffer from the curse of
dimensionality. We argue this is caused in part by inherent deficiencies of
space partitioning, which is the underlying strategy used by most existing
methods. We devise a new strategy that avoids partitioning the vector space and
present a novel randomized algorithm that runs in time linear in dimensionality
of the space and sub-linear in the intrinsic dimensionality and the size of the
dataset and takes space constant in dimensionality of the space and linear in
the size of the dataset. The proposed algorithm allows fine-grained control
over accuracy and speed on a per-query basis, automatically adapts to
variations in data density, supports dynamic updates to the dataset and is
easy-to-implement. We show appealing theoretical properties and demonstrate
empirically that the proposed algorithm outperforms locality-sensitivity
hashing (LSH) in terms of approximation quality, speed and space efficiency.
| [
{
"version": "v1",
"created": "Tue, 1 Dec 2015 20:53:16 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2016 18:47:10 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2017 06:51:49 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Li",
"Ke",
""
],
[
"Malik",
"Jitendra",
""
]
] | TITLE: Fast k-Nearest Neighbour Search via Dynamic Continuous Indexing
ABSTRACT: Existing methods for retrieving k-nearest neighbours suffer from the curse of
dimensionality. We argue this is caused in part by inherent deficiencies of
space partitioning, which is the underlying strategy used by most existing
methods. We devise a new strategy that avoids partitioning the vector space and
present a novel randomized algorithm that runs in time linear in dimensionality
of the space and sub-linear in the intrinsic dimensionality and the size of the
dataset and takes space constant in dimensionality of the space and linear in
the size of the dataset. The proposed algorithm allows fine-grained control
over accuracy and speed on a per-query basis, automatically adapts to
variations in data density, supports dynamic updates to the dataset and is
easy-to-implement. We show appealing theoretical properties and demonstrate
empirically that the proposed algorithm outperforms locality-sensitivity
hashing (LSH) in terms of approximation quality, speed and space efficiency.
| no_new_dataset | 0.947527 |
1604.01850 | Tong Xiao | Tong Xiao, Shuang Li, Bochao Wang, Liang Lin, and Xiaogang Wang | Joint Detection and Identification Feature Learning for Person Search | CVPR 2017 camera-ready | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Existing person re-identification benchmarks and methods mainly focus on
matching cropped pedestrian images between queries and candidates. However, it
is different from real-world scenarios where the annotations of pedestrian
bounding boxes are unavailable and the target person needs to be searched from
a gallery of whole scene images. To close the gap, we propose a new deep
learning framework for person search. Instead of breaking it down into two
separate tasks---pedestrian detection and person re-identification, we jointly
handle both aspects in a single convolutional neural network. An Online
Instance Matching (OIM) loss function is proposed to train the network
effectively, which is scalable to datasets with numerous identities. To
validate our approach, we collect and annotate a large-scale benchmark dataset
for person search. It contains 18,184 images, 8,432 identities, and 96,143
pedestrian bounding boxes. Experiments show that our framework outperforms
other separate approaches, and the proposed OIM loss function converges much
faster and better than the conventional Softmax loss.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 02:16:26 GMT"
},
{
"version": "v2",
"created": "Thu, 23 Feb 2017 09:48:19 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2017 01:31:08 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Xiao",
"Tong",
""
],
[
"Li",
"Shuang",
""
],
[
"Wang",
"Bochao",
""
],
[
"Lin",
"Liang",
""
],
[
"Wang",
"Xiaogang",
""
]
] | TITLE: Joint Detection and Identification Feature Learning for Person Search
ABSTRACT: Existing person re-identification benchmarks and methods mainly focus on
matching cropped pedestrian images between queries and candidates. However, it
is different from real-world scenarios where the annotations of pedestrian
bounding boxes are unavailable and the target person needs to be searched from
a gallery of whole scene images. To close the gap, we propose a new deep
learning framework for person search. Instead of breaking it down into two
separate tasks---pedestrian detection and person re-identification, we jointly
handle both aspects in a single convolutional neural network. An Online
Instance Matching (OIM) loss function is proposed to train the network
effectively, which is scalable to datasets with numerous identities. To
validate our approach, we collect and annotate a large-scale benchmark dataset
for person search. It contains 18,184 images, 8,432 identities, and 96,143
pedestrian bounding boxes. Experiments show that our framework outperforms
other separate approaches, and the proposed OIM loss function converges much
faster and better than the conventional Softmax loss.
| new_dataset | 0.956877 |
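The Online Instance Matching idea described in the preceding abstract scores each embedding against a lookup table of identity features and updates that table with momentum. A simplified, hedged sketch of such an objective in PyTorch follows; the table size, temperature, and momentum are illustrative assumptions, and the full OIM loss in the paper additionally keeps a circular queue for unlabelled identities:

```python
import torch
import torch.nn.functional as F

def oim_style_loss(features, targets, lut, temperature=0.1, momentum=0.5):
    # features: (B, D) batch embeddings; lut: (num_ids, D) running identity features.
    features = F.normalize(features, dim=1)
    logits = features @ lut.t() / temperature          # cosine similarities as logits
    loss = F.cross_entropy(logits, targets)
    # Momentum update of the lookup table for the identities seen in this batch.
    with torch.no_grad():
        for f, t in zip(features, targets):
            lut[t] = F.normalize(momentum * lut[t] + (1.0 - momentum) * f, dim=0)
    return loss

lut = F.normalize(torch.randn(100, 256), dim=1)        # 100 labelled identities
feats = torch.randn(16, 256)
ids = torch.randint(0, 100, (16,))
print(oim_style_loss(feats, ids, lut).item())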
1604.02531 | Liang Zheng | Liang Zheng, Hengheng Zhang, Shaoyan Sun, Manmohan Chandraker, Yi
Yang, Qi Tian | Person Re-identification in the Wild | accepted as spotlight to CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel large-scale dataset and comprehensive baselines for
end-to-end pedestrian detection and person recognition in raw video frames. Our
baselines address three issues: the performance of various combinations of
detectors and recognizers, mechanisms for pedestrian detection to help improve
overall re-identification accuracy and assessing the effectiveness of different
detectors for re-identification. We make three distinct contributions. First, a
new dataset, PRW, is introduced to evaluate Person Re-identification in the
Wild, using videos acquired through six synchronized cameras. It contains 932
identities and 11,816 frames in which pedestrians are annotated with their
bounding box positions and identities. Extensive benchmarking results are
presented on this dataset. Second, we show that pedestrian detection aids
re-identification through two simple yet effective improvements: a
discriminatively trained ID-discriminative Embedding (IDE) in the person
subspace using convolutional neural network (CNN) features and a Confidence
Weighted Similarity (CWS) metric that incorporates detection scores into
similarity measurement. Third, we derive insights in evaluating detector
performance for the particular scenario of accurate person re-identification.
| [
{
"version": "v1",
"created": "Sat, 9 Apr 2016 06:57:28 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2017 15:02:40 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Zheng",
"Liang",
""
],
[
"Zhang",
"Hengheng",
""
],
[
"Sun",
"Shaoyan",
""
],
[
"Chandraker",
"Manmohan",
""
],
[
"Yang",
"Yi",
""
],
[
"Tian",
"Qi",
""
]
] | TITLE: Person Re-identification in the Wild
ABSTRACT: We present a novel large-scale dataset and comprehensive baselines for
end-to-end pedestrian detection and person recognition in raw video frames. Our
baselines address three issues: the performance of various combinations of
detectors and recognizers, mechanisms for pedestrian detection to help improve
overall re-identification accuracy and assessing the effectiveness of different
detectors for re-identification. We make three distinct contributions. First, a
new dataset, PRW, is introduced to evaluate Person Re-identification in the
Wild, using videos acquired through six synchronized cameras. It contains 932
identities and 11,816 frames in which pedestrians are annotated with their
bounding box positions and identities. Extensive benchmarking results are
presented on this dataset. Second, we show that pedestrian detection aids
re-identification through two simple yet effective improvements: a
discriminatively trained ID-discriminative Embedding (IDE) in the person
subspace using convolutional neural network (CNN) features and a Confidence
Weighted Similarity (CWS) metric that incorporates detection scores into
similarity measurement. Third, we derive insights in evaluating detector
performance for the particular scenario of accurate person re-identification.
| new_dataset | 0.958226 |
1606.06793 | Vu Nguyen | Trung Le, Khanh Nguyen, Van Nguyen, Vu Nguyen, Dinh Phung | Scalable Semi-supervised Learning with Graph-based Kernel Machine | 21 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Acquiring labels is often costly, whereas unlabeled data are usually easy to
obtain in modern machine learning applications. Semi-supervised learning
provides a principled machine learning framework to address such situations,
and has been applied successfully in many real-world applications and
industries. Nonetheless, most existing semi-supervised learning methods
encounter two serious limitations when applied to modern and large-scale
datasets: computational burden and memory usage demand. To this end, we present
in this paper the Graph-based semi-supervised Kernel Machine (GKM), a method
that leverages the generalization ability of kernel-based methods with the
geometrical and distributive information formulated through a spectral graph
induced from data for semi-supervised learning purpose. Our proposed GKM can be
solved directly in the primal form using the Stochastic Gradient Descent method
with the ideal convergence rate $O(\frac{1}{T})$. Besides, our formulation is
suitable for a wide spectrum of important loss functions in the literature of
machine learning (e.g., Hinge, smooth Hinge, Logistic, L1, and
{\epsilon}-insensitive) and smoothness functions (i.e., $l_p(t) = |t|^p$ with
$p\ge1$). We further show that the well-known Laplacian Support Vector Machine
is a special case of our formulation. We validate our proposed method on
several benchmark datasets to demonstrate that GKM is appropriate for the
large-scale datasets since it is optimal in memory usage and yields superior
classification accuracy whilst simultaneously achieving a significant
computation speed-up in comparison with the state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2016 00:26:59 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Sep 2016 02:09:35 GMT"
},
{
"version": "v3",
"created": "Thu, 6 Apr 2017 02:40:23 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Le",
"Trung",
""
],
[
"Nguyen",
"Khanh",
""
],
[
"Nguyen",
"Van",
""
],
[
"Nguyen",
"Vu",
""
],
[
"Phung",
"Dinh",
""
]
] | TITLE: Scalable Semi-supervised Learning with Graph-based Kernel Machine
ABSTRACT: Acquiring labels is often costly, whereas unlabeled data are usually easy to
obtain in modern machine learning applications. Semi-supervised learning
provides a principled machine learning framework to address such situations,
and has been applied successfully in many real-world applications and
industries. Nonetheless, most existing semi-supervised learning methods
encounter two serious limitations when applied to modern and large-scale
datasets: computational burden and memory usage demand. To this end, we present
in this paper the Graph-based semi-supervised Kernel Machine (GKM), a method
that leverages the generalization ability of kernel-based methods with the
geometrical and distributive information formulated through a spectral graph
induced from data for semi-supervised learning purpose. Our proposed GKM can be
solved directly in the primal form using the Stochastic Gradient Descent method
with the ideal convergence rate $O(\frac{1}{T})$. Besides, our formulation is
suitable for a wide spectrum of important loss functions in the literature of
machine learning (e.g., Hinge, smooth Hinge, Logistic, L1, and
{\epsilon}-insensitive) and smoothness functions (i.e., $l_p(t) = |t|^p$ with
$p\ge1$). We further show that the well-known Laplacian Support Vector Machine
is a special case of our formulation. We validate our proposed method on
several benchmark datasets to demonstrate that GKM is appropriate for the
large-scale datasets since it is optimal in memory usage and yields superior
classification accuracy whilst simultaneously achieving a significant
computation speed-up in comparison with the state-of-the-art baselines.
| no_new_dataset | 0.948106 |
1607.02436 | Rocco Tripodi | Rocco Tripodi and Marcello Pelillo | Document Clustering Games in Static and Dynamic Scenarios | This paper will be published in the series Lecture Notes in Computer
Science (LNCS) published by Springer, containing the ICPRAM 2016 best papers | null | 10.1007/978-3-319-53375-9_2 | null | cs.AI cs.CL cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we propose a game theoretic model for document clustering. Each
document to be clustered is represented as a player and each cluster as a
strategy. The players receive a reward interacting with other players that they
try to maximize choosing their best strategies. The geometry of the data is
modeled with a weighted graph that encodes the pairwise similarity among
documents, so that similar players are constrained to choose similar
strategies, updating their strategy preferences at each iteration of the games.
We used different approaches to find the prototypical elements of the clusters
and with this information we divided the players into two disjoint sets, one
collecting players with a definite strategy and the other one collecting
players that try to learn from others the correct strategy to play. The latter
set of players can be considered as new data points that have to be clustered
according to previous information. This representation is useful in scenarios
in which the data are streamed continuously. The evaluation of the system was
conducted on 13 document datasets using different settings. It shows that the
proposed method performs well compared to different document clustering
algorithms.
| [
{
"version": "v1",
"created": "Fri, 8 Jul 2016 16:17:12 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Tripodi",
"Rocco",
""
],
[
"Pelillo",
"Marcello",
""
]
] | TITLE: Document Clustering Games in Static and Dynamic Scenarios
ABSTRACT: In this work we propose a game theoretic model for document clustering. Each
document to be clustered is represented as a player and each cluster as a
strategy. The players receive a reward for interacting with other players, which they
try to maximize by choosing their best strategies. The geometry of the data is
modeled with a weighted graph that encodes the pairwise similarity among
documents, so that similar players are constrained to choose similar
strategies, updating their strategy preferences at each iteration of the games.
We used different approaches to find the prototypical elements of the clusters
and with this information we divided the players into two disjoint sets, one
collecting players with a definite strategy and the other one collecting
players that try to learn from others the correct strategy to play. The latter
set of players can be considered as new data points that have to be clustered
according to previous information. This representation is useful in scenarios
in which the data are streamed continuously. The evaluation of the system was
conducted on 13 document datasets using different settings. It shows that the
proposed method performs well compared to different document clustering
algorithms.
| no_new_dataset | 0.948058 |
1611.05053 | Elad Richardson | Elad Richardson, Matan Sela, Roy Or-El, Ron Kimmel | Learning Detailed Face Reconstruction from a Single Image | 15 pages, supplementary material included | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstructing the detailed geometric structure of a face from a given image
is a key to many computer vision and graphics applications, such as motion
capture and reenactment. The reconstruction task is challenging as human faces
vary extensively when considering expressions, poses, textures, and intrinsic
geometries. While many approaches tackle this complexity by using additional
data to reconstruct the face of a single subject, extracting facial surface
from a single image remains a difficult problem. As a result, single-image
based methods can usually provide only a rough estimate of the facial geometry.
In contrast, we propose to leverage the power of convolutional neural networks
to produce a highly detailed face reconstruction from a single image. For this
purpose, we introduce an end-to-end CNN framework which derives the shape in a
coarse-to-fine fashion. The proposed architecture is composed of two main
blocks, a network that recovers the coarse facial geometry (CoarseNet),
followed by a CNN that refines the facial features of that geometry (FineNet).
The proposed networks are connected by a novel layer which renders a depth
image given a mesh in 3D. Unlike object recognition and detection problems,
there are no suitable datasets for training CNNs to perform face geometry
reconstruction. Therefore, our training regime begins with a supervised phase,
based on synthetic images, followed by an unsupervised phase that uses only
unconstrained facial images. The accuracy and robustness of the proposed model
is demonstrated by both qualitative and quantitative evaluation tests.
| [
{
"version": "v1",
"created": "Tue, 15 Nov 2016 21:08:15 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2017 15:05:16 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Richardson",
"Elad",
""
],
[
"Sela",
"Matan",
""
],
[
"Or-El",
"Roy",
""
],
[
"Kimmel",
"Ron",
""
]
] | TITLE: Learning Detailed Face Reconstruction from a Single Image
ABSTRACT: Reconstructing the detailed geometric structure of a face from a given image
is a key to many computer vision and graphics applications, such as motion
capture and reenactment. The reconstruction task is challenging as human faces
vary extensively when considering expressions, poses, textures, and intrinsic
geometries. While many approaches tackle this complexity by using additional
data to reconstruct the face of a single subject, extracting facial surface
from a single image remains a difficult problem. As a result, single-image
based methods can usually provide only a rough estimate of the facial geometry.
In contrast, we propose to leverage the power of convolutional neural networks
to produce a highly detailed face reconstruction from a single image. For this
purpose, we introduce an end-to-end CNN framework which derives the shape in a
coarse-to-fine fashion. The proposed architecture is composed of two main
blocks, a network that recovers the coarse facial geometry (CoarseNet),
followed by a CNN that refines the facial features of that geometry (FineNet).
The proposed networks are connected by a novel layer which renders a depth
image given a mesh in 3D. Unlike object recognition and detection problems,
there are no suitable datasets for training CNNs to perform face geometry
reconstruction. Therefore, our training regime begins with a supervised phase,
based on synthetic images, followed by an unsupervised phase that uses only
unconstrained facial images. The accuracy and robustness of the proposed model
is demonstrated by both qualitative and quantitative evaluation tests.
| no_new_dataset | 0.947186 |
1611.09827 | John Thickstun | John Thickstun, Zaid Harchaoui, Sham Kakade | Learning Features of Music from Scratch | 14 pages; camera-ready version; updated experiments and related
works; additional MIR metrics (Appendix C) | null | null | null | stat.ML cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new large-scale music dataset, MusicNet, to serve as
a source of supervision and evaluation of machine learning methods for music
research. MusicNet consists of hundreds of freely-licensed classical music
recordings by 10 composers, written for 11 instruments, together with
instrument/note annotations resulting in over 1 million temporal labels on 34
hours of chamber music performances under various studio and microphone
conditions.
The paper defines a multi-label classification task to predict notes in
musical recordings, along with an evaluation protocol, and benchmarks several
machine learning architectures for this task: i) learning from spectrogram
features; ii) end-to-end learning with a neural net; iii) end-to-end learning
with a convolutional neural net. These experiments show that end-to-end models
trained for note prediction learn frequency selective filters as a low-level
representation of audio.
| [
{
"version": "v1",
"created": "Tue, 29 Nov 2016 20:26:00 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2017 01:13:41 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Thickstun",
"John",
""
],
[
"Harchaoui",
"Zaid",
""
],
[
"Kakade",
"Sham",
""
]
] | TITLE: Learning Features of Music from Scratch
ABSTRACT: This paper introduces a new large-scale music dataset, MusicNet, to serve as
a source of supervision and evaluation of machine learning methods for music
research. MusicNet consists of hundreds of freely-licensed classical music
recordings by 10 composers, written for 11 instruments, together with
instrument/note annotations resulting in over 1 million temporal labels on 34
hours of chamber music performances under various studio and microphone
conditions.
The paper defines a multi-label classification task to predict notes in
musical recordings, along with an evaluation protocol, and benchmarks several
machine learning architectures for this task: i) learning from spectrogram
features; ii) end-to-end learning with a neural net; iii) end-to-end learning
with a convolutional neural net. These experiments show that end-to-end models
trained for note prediction learn frequency selective filters as a low-level
representation of audio.
| new_dataset | 0.951774 |
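The note-prediction task defined above reduces, at the frame level, to multi-label classification of spectrogram frames. A minimal sketch with scikit-learn on synthetic stand-in data; the feature and label dimensions here are illustrative assumptions, while MusicNet itself supplies the real recordings and note annotations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Stand-in data: 2000 spectrogram frames (512 bins) with 128 binary note labels each.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(2000, 512)))
Y = (rng.random((2000, 128)) < 0.05).astype(int)

# One binary classifier per note: frame-level multi-label note prediction.
clf = OneVsRestClassifier(LogisticRegression(max_iter=200))
clf.fit(X[:1500], Y[:1500])
print(clf.predict(X[1500:]).shape)   # (500, 128) per-frame note predictions
```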
1701.03246 | Jue Wang | Jue Wang, Anoop Cherian, Fatih Porikli | Ordered Pooling of Optical Flow Sequences for Action Recognition | Accepted in WACV 2017 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Training of Convolutional Neural Networks (CNNs) on long video sequences is
computationally expensive due to the substantial memory requirements and the
massive number of parameters that deep architectures demand. Early fusion of
video frames is thus a standard technique, in which several consecutive frames
are first agglomerated into a compact representation, and then fed into the CNN
as an input sample. For this purpose, a summarization approach that represents
a set of consecutive RGB frames by a single dynamic image to capture pixel
dynamics is proposed recently. In this paper, we introduce a novel ordered
representation of consecutive optical flow frames as an alternative and argue
that this representation captures the action dynamics more effectively than RGB
frames. We provide intuitions on why such a representation is better for action
recognition. We validate our claims on standard benchmark datasets and
demonstrate that using summaries of flow images leads to significant
improvements over RGB frames while achieving accuracy comparable to the
state-of-the-art on UCF101 and HMDB datasets.
| [
{
"version": "v1",
"created": "Thu, 12 Jan 2017 06:08:18 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2017 05:27:03 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Wang",
"Jue",
""
],
[
"Cherian",
"Anoop",
""
],
[
"Porikli",
"Fatih",
""
]
] | TITLE: Ordered Pooling of Optical Flow Sequences for Action Recognition
ABSTRACT: Training of Convolutional Neural Networks (CNNs) on long video sequences is
computationally expensive due to the substantial memory requirements and the
massive number of parameters that deep architectures demand. Early fusion of
video frames is thus a standard technique, in which several consecutive frames
are first agglomerated into a compact representation, and then fed into the CNN
as an input sample. For this purpose, a summarization approach that represents
a set of consecutive RGB frames by a single dynamic image to capture pixel
dynamics is proposed recently. In this paper, we introduce a novel ordered
representation of consecutive optical flow frames as an alternative and argue
that this representation captures the action dynamics more effectively than RGB
frames. We provide intuitions on why such a representation is better for action
recognition. We validate our claims on standard benchmark datasets and
demonstrate that using summaries of flow images leads to significant
improvements over RGB frames while achieving accuracy comparable to the
state-of-the-art on UCF101 and HMDB datasets.
| no_new_dataset | 0.952794 |
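Ordered summaries of the kind mentioned above are commonly computed by rank pooling: fit a linear ranker that orders frames by time and keep its weight vector as the sequence descriptor. A rough sketch under that assumption, using scikit-learn with random frame features standing in for real optical-flow descriptors:

```python
import numpy as np
from sklearn.svm import LinearSVR

def rank_pool(frame_features):
    # Fit a linear ranker that orders frames by time; its weight vector then
    # summarises how the features evolve and serves as the clip descriptor.
    T = len(frame_features)
    smoothed = np.cumsum(frame_features, axis=0) / np.arange(1, T + 1)[:, None]
    times = np.arange(1, T + 1, dtype=float)
    return LinearSVR(C=1.0, max_iter=10000).fit(smoothed, times).coef_

frames = np.random.default_rng(0).normal(size=(30, 64))  # stand-in per-frame flow descriptors
print(rank_pool(frames).shape)                            # (64,) descriptor for the whole clip
```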
1701.08991 | Aleksei Tiulpin | Aleksei Tiulpin, J\'er\^ome Thevenot, Esa Rahtu, Simo Saarakkala | A novel method for automatic localization of joint area on knee plain
radiographs | Accepted to Scandinavian Conference on Image Analysis (SCIA) 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Osteoarthritis (OA) is a common musculoskeletal condition typically diagnosed
from radiographic assessment after clinical examination. However, a visual
evaluation made by a practitioner suffers from subjectivity and is highly
dependent on the experience. Computer-aided diagnostics (CAD) could improve the
objectivity of knee radiographic examination. The first essential step of knee
OA CAD is to automatically localize the joint area. However, according to the
literature this task itself remains challenging. The aim of this study was to
develop a novel and computationally efficient method to tackle the issue. Here,
three different datasets of knee radiographs were used (n = 473/93/77) to
validate the overall performance of the method. Our pipeline consists of two
parts: anatomically-based joint area proposal and their evaluation using
Histogram of Oriented Gradients and the pre-trained Support Vector Machine
classifier scores. The obtained results for the used datasets show the mean
intersection over the union equal to: 0.84, 0.79 and 0.78. Using a high-end
computer, the method can automatically annotate conventional knee
radiographs within 14-16ms and high resolution ones within 170ms. Our results
demonstrate that the developed method is suitable for large-scale analyses.
| [
{
"version": "v1",
"created": "Tue, 31 Jan 2017 11:06:12 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Feb 2017 09:23:16 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Apr 2017 20:05:02 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Tiulpin",
"Aleksei",
""
],
[
"Thevenot",
"Jérôme",
""
],
[
"Rahtu",
"Esa",
""
],
[
"Saarakkala",
"Simo",
""
]
] | TITLE: A novel method for automatic localization of joint area on knee plain
radiographs
ABSTRACT: Osteoarthritis (OA) is a common musculoskeletal condition typically diagnosed
from radiographic assessment after clinical examination. However, a visual
evaluation made by a practitioner suffers from subjectivity and is highly
dependent on the experience. Computer-aided diagnostics (CAD) could improve the
objectivity of knee radiographic examination. The first essential step of knee
OA CAD is to automatically localize the joint area. However, according to the
literature this task itself remains challenging. The aim of this study was to
develop a novel and computationally efficient method to tackle the issue. Here,
three different datasets of knee radiographs were used (n = 473/93/77) to
validate the overall performance of the method. Our pipeline consists of two
parts: anatomically-based joint area proposal and their evaluation using
Histogram of Oriented Gradients and the pre-trained Support Vector Machine
classifier scores. The obtained results for the used datasets show the mean
intersection over the union equal to: 0.84, 0.79 and 0.78. Using a high-end
computer, the method can automatically annotate conventional knee
radiographs within 14-16ms and high resolution ones within 170ms. Our results
demonstrate that the developed method is suitable for large-scale analyses.
| no_new_dataset | 0.945399 |
1702.01105 | Iro Armeni | Iro Armeni, Sasha Sax, Amir R. Zamir and Silvio Savarese | Joint 2D-3D-Semantic Data for Indoor Scene Understanding | The dataset is available http://3Dsemantics.stanford.edu/ | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a dataset of large-scale indoor spaces that provides a variety of
mutually registered modalities from 2D, 2.5D and 3D domains, with
instance-level semantic and geometric annotations. The dataset covers over
6,000m2 and contains over 70,000 RGB images, along with the corresponding
depths, surface normals, semantic annotations, global XYZ images (all in forms
of both regular and 360{\deg} equirectangular images) as well as camera
information. It also includes registered raw and semantically annotated 3D
meshes and point clouds. The dataset enables development of joint and
cross-modal learning models and potentially unsupervised approaches utilizing
the regularities present in large-scale indoor spaces. The dataset is available
here: http://3Dsemantics.stanford.edu/
| [
{
"version": "v1",
"created": "Fri, 3 Feb 2017 18:28:33 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2017 01:46:13 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Armeni",
"Iro",
""
],
[
"Sax",
"Sasha",
""
],
[
"Zamir",
"Amir R.",
""
],
[
"Savarese",
"Silvio",
""
]
] | TITLE: Joint 2D-3D-Semantic Data for Indoor Scene Understanding
ABSTRACT: We present a dataset of large-scale indoor spaces that provides a variety of
mutually registered modalities from 2D, 2.5D and 3D domains, with
instance-level semantic and geometric annotations. The dataset covers over
6,000m2 and contains over 70,000 RGB images, along with the corresponding
depths, surface normals, semantic annotations, global XYZ images (all in forms
of both regular and 360{\deg} equirectangular images) as well as camera
information. It also includes registered raw and semantically annotated 3D
meshes and point clouds. The dataset enables development of joint and
cross-modal learning models and potentially unsupervised approaches utilizing
the regularities present in large-scale indoor spaces. The dataset is available
here: http://3Dsemantics.stanford.edu/
| new_dataset | 0.958654 |
1703.09772 | Dorian Cazau | D. Cazau, G. Revillon, W. Yuancheng, O. Adam | Particle Filtering for PLCA model with Application to Music
Transcription | null | null | null | null | stat.ML cs.LG cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic Music Transcription (AMT) consists in automatically estimating the
notes in an audio recording, through three attributes: onset time, duration and
pitch. Probabilistic Latent Component Analysis (PLCA) has become very popular
for this task. PLCA is a spectrogram factorization method, able to model a
magnitude spectrogram as a linear combination of spectral vectors from a
dictionary. Such methods use the Expectation-Maximization (EM) algorithm to
estimate the parameters of the acoustic model. This algorithm presents
well-known inherent drawbacks (local convergence, initialization dependency),
making EM-based systems limited in their applications to AMT, particularly in
regards to the mathematical form and number of priors. To overcome such limits,
we propose in this paper to employ a different estimation framework based on
Particle Filtering (PF), which consists in sampling the posterior distribution
over larger parameter ranges. This framework proves to be more robust in
parameter estimation, more flexible and unifying in the integration of prior
knowledge in the system. Note-level transcription accuracies of 61.8 $\%$ and
59.5 $\%$ were achieved on evaluation sound datasets of two different
instrument repertoires, including the classical piano (from MAPS dataset) and
the marovany zither, and direct comparisons to previous PLCA-based approaches
are provided. Steps for further development are also outlined.
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 19:56:47 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Cazau",
"D.",
""
],
[
"Revillon",
"G.",
""
],
[
"Yuancheng",
"W.",
""
],
[
"Adam",
"O.",
""
]
] | TITLE: Particle Filtering for PLCA model with Application to Music
Transcription
ABSTRACT: Automatic Music Transcription (AMT) consists in automatically estimating the
notes in an audio recording, through three attributes: onset time, duration and
pitch. Probabilistic Latent Component Analysis (PLCA) has become very popular
for this task. PLCA is a spectrogram factorization method, able to model a
magnitude spectrogram as a linear combination of spectral vectors from a
dictionary. Such methods use the Expectation-Maximization (EM) algorithm to
estimate the parameters of the acoustic model. This algorithm presents
well-known inherent drawbacks (local convergence, initialization dependency),
making EM-based systems limited in their applications to AMT, particularly in
regards to the mathematical form and number of priors. To overcome such limits,
we propose in this paper to employ a different estimation framework based on
Particle Filtering (PF), which consists in sampling the posterior distribution
over larger parameter ranges. This framework proves to be more robust in
parameter estimation, more flexible and unifying in the integration of prior
knowledge in the system. Note-level transcription accuracies of 61.8 $\%$ and
59.5 $\%$ were achieved on evaluation sound datasets of two different
instrument repertoires, including the classical piano (from MAPS dataset) and
the marovany zither, and direct comparisons to previous PLCA-based approaches
are provided. Steps for further development are also outlined.
| no_new_dataset | 0.9434 |
1703.09851 | Mohamed Abuella | Mohamed Abuella and Badrul Chowdhury | Solar Power Forecasting Using Support Vector Regression | This works has been presented in the American Society for Engineering
Management, International Annual Conference, 2016 | null | null | null | cs.LG cs.CE stat.AP | http://creativecommons.org/publicdomain/zero/1.0/ | Generation and load balance is required in the economic scheduling of
generating units in the smart grid. Variable energy generations, particularly
from wind and solar energy resources, are witnessing a rapid boost, and it is
anticipated that with a certain level of their penetration, they can become
noteworthy sources of uncertainty. As in the case of load demand, energy
forecasting can also be used to mitigate some of the challenges that arise from
the uncertainty in the resource. While wind energy forecasting research is
considered mature, solar energy forecasting is witnessing a steadily growing
attention from the research community. This paper presents a support vector
regression model to produce solar power forecasts on a rolling basis for 24
hours ahead over an entire year, to mimic the practical business of energy
forecasting. Twelve weather variables are considered from a high-quality
benchmark dataset and new variables are extracted. The added value of the heat
index and wind speed as additional variables to the model is studied across
different seasons. The support vector regression model performance is compared
with artificial neural networks and multiple linear regression models for
energy forecasting.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2017 00:58:01 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Abuella",
"Mohamed",
""
],
[
"Chowdhury",
"Badrul",
""
]
] | TITLE: Solar Power Forecasting Using Support Vector Regression
ABSTRACT: Generation and load balance is required in the economic scheduling of
generating units in the smart grid. Variable energy generations, particularly
from wind and solar energy resources, are witnessing a rapid boost, and it is
anticipated that with a certain level of their penetration, they can become
noteworthy sources of uncertainty. As in the case of load demand, energy
forecasting can also be used to mitigate some of the challenges that arise from
the uncertainty in the resource. While wind energy forecasting research is
considered mature, solar energy forecasting is witnessing a steadily growing
attention from the research community. This paper presents a support vector
regression model to produce solar power forecasts on a rolling basis for 24
hours ahead over an entire year, to mimic the practical business of energy
forecasting. Twelve weather variables are considered from a high-quality
benchmark dataset and new variables are extracted. The added value of the heat
index and wind speed as additional variables to the model is studied across
different seasons. The support vector regression model performance is compared
with artificial neural networks and multiple linear regression models for
energy forecasting.
| no_new_dataset | 0.949389 |
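Support vector regression of the kind described above maps weather variables to power output through a kernel machine. A hedged sketch with scikit-learn on synthetic stand-in data; the twelve input variables, kernel, and hyperparameters are assumptions for illustration, not the paper's configuration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in: 12 weather variables per hour -> solar power output.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = 0.8 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=1000)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:900], y[:900])                 # fit on earlier hours
print(model.predict(X[900:])[:5])           # forecasts for the held-out hours
```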
1704.01444 | Alec Radford | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | Learning to Generate Reviews and Discovering Sentiment | null | null | null | null | cs.LG cs.CL cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 14:20:28 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Apr 2017 09:48:20 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Radford",
"Alec",
""
],
[
"Jozefowicz",
"Rafal",
""
],
[
"Sutskever",
"Ilya",
""
]
] | TITLE: Learning to Generate Reviews and Discovering Sentiment
ABSTRACT: We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment.
| no_new_dataset | 0.951414 |
1704.01603 | Christina Lioma Assoc. Prof | Christina Lioma and Birger Larsen and Peter Ingwersen | Preliminary Experiments using Subjective Logic for the
Polyrepresentation of Information Needs | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | According to the principle of polyrepresentation, retrieval accuracy may
improve through the combination of multiple and diverse information object
representations about e.g. the context of the user, the information sought, or
the retrieval system. Recently, the principle of polyrepresentation was
mathematically expressed using subjective logic, where the potential
suitability of each representation for improving retrieval performance was
formalised through degrees of belief and uncertainty. No experimental evidence
or practical application has so far validated this model. We extend the work of
Lioma et al. (2010), by providing a practical application and analysis of the
model. We show how to map the abstract notions of belief and uncertainty to
real-life evidence drawn from a retrieval dataset. We also show how to estimate
two different types of polyrepresentation assuming either (a) independence or
(b) dependence between the information objects that are combined. We focus on
the polyrepresentation of different types of context relating to user
information needs (i.e. work task, user background knowledge, ideal answer) and
show that the subjective logic model can predict their optimal combination
prior and independently to the retrieval process.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 18:45:31 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Lioma",
"Christina",
""
],
[
"Larsen",
"Birger",
""
],
[
"Ingwersen",
"Peter",
""
]
] | TITLE: Preliminary Experiments using Subjective Logic for the
Polyrepresentation of Information Needs
ABSTRACT: According to the principle of polyrepresentation, retrieval accuracy may
improve through the combination of multiple and diverse information object
representations about e.g. the context of the user, the information sought, or
the retrieval system. Recently, the principle of polyrepresentation was
mathematically expressed using subjective logic, where the potential
suitability of each representation for improving retrieval performance was
formalised through degrees of belief and uncertainty. No experimental evidence
or practical application has so far validated this model. We extend the work of
Lioma et al. (2010), by providing a practical application and analysis of the
model. We show how to map the abstract notions of belief and uncertainty to
real-life evidence drawn from a retrieval dataset. We also show how to estimate
two different types of polyrepresentation assuming either (a) independence or
(b) dependence between the information objects that are combined. We focus on
the polyrepresentation of different types of context relating to user
information needs (i.e. work task, user background knowledge, ideal answer) and
show that the subjective logic model can predict their optimal combination
prior and independently to the retrieval process.
| no_new_dataset | 0.946051 |
1704.01716 | Jue Wang | Jue Wang, Anoop Cherian, Fatih Porikli, Stephen Gould | Action Representation Using Classifier Decision Boundaries | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Most popular deep learning based models for action recognition are designed
to generate separate predictions within their short temporal windows, which are
often aggregated by heuristic means to assign an action label to the full video
segment. Given that not all frames from a video characterize the underlying
action, pooling schemes that impose equal importance to all frames might be
unfavorable. In an attempt towards tackling this challenge, we propose a novel
pooling scheme, dubbed SVM pooling, based on the notion that among the bag of
features generated by a CNN on all temporal windows, there is at least one
feature that characterizes the action. To this end, we learn a decision
hyperplane that separates this unknown yet useful feature from the rest.
Applying multiple instance learning in an SVM setup, we use the parameters of
this separating hyperplane as a descriptor for the video. Since these
parameters are directly related to the support vectors in a max-margin
framework, they serve as robust representations for pooling of the CNN
features. We devise a joint optimization objective and an efficient solver that
learns these hyperplanes per video and the corresponding action classifiers
over the hyperplanes. Showcased experiments on the standard HMDB and UCF101
datasets demonstrate state-of-the-art performance.
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2017 06:00:14 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Wang",
"Jue",
""
],
[
"Cherian",
"Anoop",
""
],
[
"Porikli",
"Fatih",
""
],
[
"Gould",
"Stephen",
""
]
] | TITLE: Action Representation Using Classifier Decision Boundaries
ABSTRACT: Most popular deep learning based models for action recognition are designed
to generate separate predictions within their short temporal windows, which are
often aggregated by heuristic means to assign an action label to the full video
segment. Given that not all frames from a video characterize the underlying
action, pooling schemes that impose equal importance to all frames might be
unfavorable. In an attempt towards tackling this challenge, we propose a novel
pooling scheme, dubbed SVM pooling, based on the notion that among the bag of
features generated by a CNN on all temporal windows, there is at least one
feature that characterizes the action. To this end, we learn a decision
hyperplane that separates this unknown yet useful feature from the rest.
Applying multiple instance learning in an SVM setup, we use the parameters of
this separating hyperplane as a descriptor for the video. Since these
parameters are directly related to the support vectors in a max-margin
framework, they serve as robust representations for pooling of the CNN
features. We devise a joint optimization objective and an efficient solver that
learns these hyperplanes per video and the corresponding action classifiers
over the hyperplanes. Showcased experiments on the standard HMDB and UCF101
datasets demonstrate state-of-the-art performance.
| no_new_dataset | 0.948106 |
1704.01719 | Weihua Chen | Weihua Chen, Xiaotang Chen, Jianguo Zhang, Kaiqi Huang | Beyond triplet loss: a deep quadruplet network for person
re-identification | accepted to CVPR2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Person re-identification (ReID) is an important task in wide area video
surveillance which focuses on identifying people across different cameras.
Recently, deep learning networks with a triplet loss have become a common framework
for person ReID. However, the triplet loss pays main attention to obtaining
correct orders on the training set. It still suffers from a weaker
generalization capability from the training set to the testing set, thus
resulting in inferior performance. In this paper, we design a quadruplet loss,
which can lead to the model output with a larger inter-class variation and a
smaller intra-class variation compared to the triplet loss. As a result, our
model has a better generalization ability and can achieve a higher performance
on the testing set. In particular, a quadruplet deep network using a
margin-based online hard negative mining is proposed based on the quadruplet
loss for the person ReID. In extensive experiments, the proposed network
outperforms most of the state-of-the-art algorithms on representative datasets
which clearly demonstrates the effectiveness of our proposed method.
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2017 06:09:55 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Chen",
"Weihua",
""
],
[
"Chen",
"Xiaotang",
""
],
[
"Zhang",
"Jianguo",
""
],
[
"Huang",
"Kaiqi",
""
]
] | TITLE: Beyond triplet loss: a deep quadruplet network for person
re-identification
ABSTRACT: Person re-identification (ReID) is an important task in wide area video
surveillance which focuses on identifying people across different cameras.
Recently, deep learning networks with a triplet loss become a common framework
for person ReID. However, the triplet loss pays main attentions on obtaining
correct orders on the training set. It still suffers from a weaker
generalization capability from the training set to the testing set, thus
resulting in inferior performance. In this paper, we design a quadruplet loss,
which can lead to the model output with a larger inter-class variation and a
smaller intra-class variation compared to the triplet loss. As a result, our
model has a better generalization ability and can achieve a higher performance
on the testing set. In particular, a quadruplet deep network using a
margin-based online hard negative mining is proposed based on the quadruplet
loss for the person ReID. In extensive experiments, the proposed network
outperforms most of the state-of-the-art algorithms on representative datasets
which clearly demonstrates the effectiveness of our proposed method.
| no_new_dataset | 0.94801 |
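One common formulation of a quadruplet loss adds, on top of the triplet term, a second margin term computed between two negatives from different identities, which bounds intra-class distances relative to arbitrary inter-class distances. A minimal PyTorch sketch under that assumption; the margins and dimensions are illustrative:

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(a, p, n1, n2, margin1=1.0, margin2=0.5):
    d = lambda x, y: (x - y).pow(2).sum(dim=1)   # squared Euclidean distance
    # Triplet term plus a second term between two unrelated negatives, which
    # additionally constrains intra-class distances against any inter-class distance.
    term1 = F.relu(d(a, p) - d(a, n1) + margin1)
    term2 = F.relu(d(a, p) - d(n1, n2) + margin2)
    return (term1 + term2).mean()

a, p, n1, n2 = (torch.randn(8, 128) for _ in range(4))
print(quadruplet_loss(a, p, n1, n2).item())
```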
1704.01788 | Christos Kalyvas | Christos Kalyvas, Theodoros Tzouramanis | A Survey of Skyline Query Processing | 127 pages, 91 figures, 38 tables, 208 references, extended Survey | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living in the Information Age allows almost everyone to have access to a large
amount of information and options to choose from in order to fulfill their
needs. In many cases, the amount of information available and the rate of
change may hide the optimal and truly desired solution. This reveals the need
for a mechanism that will highlight the best options to choose among every
possible scenario. Based on this, the skyline query was proposed, which is a
decision support mechanism that retrieves the value-for-money options of a
dataset by identifying the objects that present the optimal combination of the
characteristics of the dataset. This paper surveys the state-of-the-art
techniques for skyline query processing, the numerous variations of the initial
algorithm that were proposed to solve similar problems and the
application-specific approaches that were developed to provide a solution
efficiently in each case. Additionally, in each section a taxonomy is outlined
along with the key aspects of each algorithm and its relation to previous
studies.
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2017 11:34:20 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Kalyvas",
"Christos",
""
],
[
"Tzouramanis",
"Theodoros",
""
]
] | TITLE: A Survey of Skyline Query Processing
ABSTRACT: Living in the Information Age allows almost everyone to have access to a large
amount of information and options to choose from in order to fulfill their
needs. In many cases, the amount of information available and the rate of
change may hide the optimal and truly desired solution. This reveals the need
of a mechanism that will highlight the best options to choose among every
possible scenario. Based on this, the skyline query was proposed: a decision
support mechanism that retrieves the value-for-money options of a
dataset by identifying the objects that present the optimal combination of the
characteristics of the dataset. This paper surveys the state-of-the-art
techniques for skyline query processing, the numerous variations of the initial
algorithm that were proposed to solve similar problems and the
application-specific approaches that were developed to provide a solution
efficiently in each case. Additionally, in each section a taxonomy is outlined
along with the key aspects of each algorithm and its relation to previous
studies.
| no_new_dataset | 0.954647 |
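To make the skyline notion in this record concrete, below is a minimal block-nested-loop style sketch of skyline computation, assuming that lower values are better in every dimension; the data and the specific algorithm choice are illustrative and not taken from the survey.

```python
# Minimal skyline computation sketch (block-nested-loop style), assuming that
# lower values are better in every dimension.
def dominates(p, q):
    """p dominates q if p is no worse in all dimensions and strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    result = []
    for p in points:
        if any(dominates(s, p) for s in result):
            continue                                        # p is dominated, discard
        result = [s for s in result if not dominates(p, s)] # drop points p dominates
        result.append(p)
    return result

# Example: (price, distance) pairs for hotels -- cheaper and closer is better.
hotels = [(50, 8), (60, 3), (45, 9), (80, 1), (65, 5)]
print(skyline(hotels))   # (65, 5) is dominated by (60, 3) and drops out
```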
1704.01880 | Amit Kumar | Amit Kumar, Rama Chellappa | A Convolution Tree with Deconvolution Branches: Exploiting Geometric
Relationships for Single Shot Keypoint Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Deep Convolution Networks (DCNNs) have been applied to the task of
face alignment and have shown potential for learning improved feature
representations. Although deeper layers can capture abstract concepts like
pose, it is difficult to capture the geometric relationships among the
keypoints in DCNNs. In this paper, we propose a novel convolution-deconvolution
network for facial keypoint detection. Our model predicts the 2D locations of
the keypoints and their individual visibility along with 3D head pose, while
exploiting the spatial relationships among different keypoints. Different from
existing approaches of modeling these relationships, we propose learnable
transform functions that capture the relationships between keypoints at
feature level. However, due to extensive variations in pose, not all of these
relationships act at once, and hence we propose a pose-based routing function
which implicitly models the active relationships. Both transform functions and
the routing function are implemented through convolutions in a multi-task
framework. Our approach presents a single-shot keypoint detection method,
making it different from many existing cascade regression-based methods. We
also show that learning these relationships significantly improves the accuracy
of keypoint detections for in-the-wild face images from challenging datasets
such as AFW and AFLW.
| [
{
"version": "v1",
"created": "Thu, 6 Apr 2017 15:08:59 GMT"
}
] | 2017-04-07T00:00:00 | [
[
"Kumar",
"Amit",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: A Convolution Tree with Deconvolution Branches: Exploiting Geometric
Relationships for Single Shot Keypoint Detection
ABSTRACT: Recently, Deep Convolution Networks (DCNNs) have been applied to the task of
face alignment and have shown potential for learning improved feature
representations. Although deeper layers can capture abstract concepts like
pose, it is difficult to capture the geometric relationships among the
keypoints in DCNNs. In this paper, we propose a novel convolution-deconvolution
network for facial keypoint detection. Our model predicts the 2D locations of
the keypoints and their individual visibility along with 3D head pose, while
exploiting the spatial relationships among different keypoints. Different from
existing approaches of modeling these relationships, we propose learnable
transform functions that capture the relationships between keypoints at
feature level. However, due to extensive variations in pose, not all of these
relationships act at once, and hence we propose a pose-based routing function
which implicitly models the active relationships. Both transform functions and
the routing function are implemented through convolutions in a multi-task
framework. Our approach presents a single-shot keypoint detection method,
making it different from many existing cascade regression-based methods. We
also show that learning these relationships significantly improves the accuracy
of keypoint detections for in-the-wild face images from challenging datasets
such as AFW and AFLW.
| no_new_dataset | 0.948251 |
1604.03001 | Genqiang Wu | Genqiang Wu and Yeping He and Jingzheng Wu and Xianyao Xia | Inherit Differential Privacy in Distributed Setting: Multiparty
Randomized Function Computation | null | null | null | null | cs.CR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to achieve differential privacy in the distributed setting, where the
dataset is distributed among the distrustful parties, is an important problem.
We consider under what conditions a protocol can inherit the differential privacy
property of a function it computes. The heart of the problem is the secure
multiparty computation of randomized function. A notion \emph{obliviousness} is
introduced, which captures the key security problems when computing a
randomized function from a deterministic one in the distributed setting. By
this observation, a sufficient and necessary condition for computing a
randomized function from a deterministic one is given. The above result can be
used not only to determine whether a protocol computing a differentially
private function is secure, but also to construct a secure one. Then we prove
that the differential privacy property of a function can be inherited by the
protocol computing it if the protocol privately computes it. A composition
theorem of differentially private protocols is also presented. We also
construct some protocols to generate random variates in the distributed setting,
such as the uniform random variates and the inversion method. By using these
fundamental protocols, we construct protocols of the Gaussian mechanism, the
Laplace mechanism and the Exponential mechanism. Importantly, all these
protocols satisfy obliviousness and so can be proved to be secure in a
simulation based manner. We also provide a complexity bound of computing
randomized functions in the distributed setting. Finally, to show that our
results are fundamental and powerful to multiparty differential privacy, we
construct a differentially private empirical risk minimization protocol.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 15:38:42 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Wu",
"Genqiang",
""
],
[
"He",
"Yeping",
""
],
[
"Wu",
"Jingzheng",
""
],
[
"Xia",
"Xianyao",
""
]
] | TITLE: Inherit Differential Privacy in Distributed Setting: Multiparty
Randomized Function Computation
ABSTRACT: How to achieve differential privacy in the distributed setting, where the
dataset is distributed among the distrustful parties, is an important problem.
We consider under what conditions a protocol can inherit the differential privacy
property of a function it computes. The heart of the problem is the secure
multiparty computation of randomized function. A notion \emph{obliviousness} is
introduced, which captures the key security problems when computing a
randomized function from a deterministic one in the distributed setting. By
this observation, a sufficient and necessary condition for computing a
randomized function from a deterministic one is given. The above result can be
used not only to determine whether a protocol computing a differentially
private function is secure, but also to construct a secure one. Then we prove
that the differential privacy property of a function can be inherited by the
protocol computing it if the protocol privately computes it. A composition
theorem of differentially private protocols is also presented. We also
construct some protocols to generate random variates in the distributed setting,
such as the uniform random variates and the inversion method. By using these
fundamental protocols, we construct protocols of the Gaussian mechanism, the
Laplace mechanism and the Exponential mechanism. Importantly, all these
protocols satisfy obliviousness and so can be proved to be secure in a
simulation based manner. We also provide a complexity bound of computing
randomized functions in the distributed setting. Finally, to show that our
results are fundamental and powerful to multiparty differential privacy, we
construct a differentially private empirical risk minimization protocol.
| no_new_dataset | 0.945399 |
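For background on the mechanisms this record mentions, here is the standard centralized Laplace mechanism for epsilon-differential privacy in a few lines. This is the ordinary single-party mechanism, not the multiparty protocol the paper constructs; the sensitivity and epsilon values are illustrative assumptions.

```python
# Centralized Laplace mechanism sketch for epsilon-differential privacy.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace noise with scale sensitivity/epsilon."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query, whose sensitivity is 1.
dataset = [1, 0, 1, 1, 0, 1]            # toy binary attribute
true_count = sum(dataset)
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```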
1610.02357 | Francois Chollet | Fran\c{c}ois Chollet | Xception: Deep Learning with Depthwise Separable Convolutions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters.
| [
{
"version": "v1",
"created": "Fri, 7 Oct 2016 17:51:51 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Oct 2016 17:37:25 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Apr 2017 18:40:27 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Chollet",
"François",
""
]
] | TITLE: Xception: Deep Learning with Depthwise Separable Convolutions
ABSTRACT: We present an interpretation of Inception modules in convolutional neural
networks as being an intermediate step in-between regular convolution and the
depthwise separable convolution operation (a depthwise convolution followed by
a pointwise convolution). In this light, a depthwise separable convolution can
be understood as an Inception module with a maximally large number of towers.
This observation leads us to propose a novel deep convolutional neural network
architecture inspired by Inception, where Inception modules have been replaced
with depthwise separable convolutions. We show that this architecture, dubbed
Xception, slightly outperforms Inception V3 on the ImageNet dataset (which
Inception V3 was designed for), and significantly outperforms Inception V3 on a
larger image classification dataset comprising 350 million images and 17,000
classes. Since the Xception architecture has the same number of parameters as
Inception V3, the performance gains are not due to increased capacity but
rather to a more efficient use of model parameters.
| no_new_dataset | 0.954052 |
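The depthwise separable convolution at the core of this record (a depthwise convolution followed by a pointwise 1x1 convolution) can be sketched as below. The paper itself works in Keras/TensorFlow; this PyTorch re-expression and its channel counts are illustrative assumptions.

```python
# Depthwise separable convolution sketch: per-channel spatial filtering
# followed by a 1x1 convolution that mixes channels.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Depthwise: one spatial filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 56, 56)
print(DepthwiseSeparableConv(32, 64)(x).shape)   # torch.Size([1, 64, 56, 56])
```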
1611.01547 | Philip Blair | Philip Blair, Yuval Merhav, and Joel Barry | Automated Generation of Multilingual Clusters for the Evaluation of
Distributed Representations | Published as a workshop paper at ICLR 2017 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a language-agnostic way of automatically generating sets of
semantically similar clusters of entities along with sets of "outlier"
elements, which may then be used to perform an intrinsic evaluation of word
embeddings in the outlier detection task. We used our methodology to create a
gold-standard dataset, which we call WikiSem500, and evaluated multiple
state-of-the-art embeddings. The results show a correlation between performance
on this dataset and performance on sentiment analysis.
| [
{
"version": "v1",
"created": "Fri, 4 Nov 2016 21:35:07 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Nov 2016 13:21:17 GMT"
},
{
"version": "v3",
"created": "Fri, 9 Dec 2016 15:58:37 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Dec 2016 17:51:57 GMT"
},
{
"version": "v5",
"created": "Wed, 5 Apr 2017 15:26:51 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Blair",
"Philip",
""
],
[
"Merhav",
"Yuval",
""
],
[
"Barry",
"Joel",
""
]
] | TITLE: Automated Generation of Multilingual Clusters for the Evaluation of
Distributed Representations
ABSTRACT: We propose a language-agnostic way of automatically generating sets of
semantically similar clusters of entities along with sets of "outlier"
elements, which may then be used to perform an intrinsic evaluation of word
embeddings in the outlier detection task. We used our methodology to create a
gold-standard dataset, which we call WikiSem500, and evaluated multiple
state-of-the-art embeddings. The results show a correlation between performance
on this dataset and performance on sentiment analysis.
| new_dataset | 0.957118 |
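A common way to run the outlier detection task this record describes is to flag the element with the lowest average cosine similarity to the rest of the cluster; the sketch below assumes that simple criterion and uses random vectors as stand-ins for real word embeddings, so it is not the exact WikiSem500 evaluation script.

```python
# Sketch of one outlier-detection step on word embeddings.
import numpy as np

def predict_outlier(vectors):
    """vectors: (n, d) array holding a semantic cluster plus one injected outlier."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = v @ v.T                        # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)
    mean_sim = sims.sum(axis=1) / (len(v) - 1)
    return int(np.argmin(mean_sim))       # index of the least-similar element

cluster = np.random.randn(8, 50)          # 7 in-cluster words + 1 outlier (toy data)
print(predict_outlier(cluster))
```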
1611.04076 | Xudong Mao | Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang and
Stephen Paul Smolley | Least Squares Generative Adversarial Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised learning with generative adversarial networks (GANs) has proven
hugely successful. Regular GANs hypothesize the discriminator as a classifier
with the sigmoid cross entropy loss function. However, we found that this loss
function may lead to the vanishing gradients problem during the learning
process. To overcome such a problem, we propose in this paper the Least Squares
Generative Adversarial Networks (LSGANs) which adopt the least squares loss
function for the discriminator. We show that minimizing the objective function
of LSGAN yields minimizing the Pearson $\chi^2$ divergence. There are two
benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher
quality images than regular GANs. Second, LSGANs perform more stably during the
learning process. We evaluate LSGANs on five scene datasets and the
experimental results show that the images generated by LSGANs are of better
quality than the ones generated by regular GANs. We also conduct two comparison
experiments between LSGANs and regular GANs to illustrate the stability of
LSGANs.
| [
{
"version": "v1",
"created": "Sun, 13 Nov 2016 03:38:28 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2017 07:50:53 GMT"
},
{
"version": "v3",
"created": "Wed, 5 Apr 2017 05:44:47 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Mao",
"Xudong",
""
],
[
"Li",
"Qing",
""
],
[
"Xie",
"Haoran",
""
],
[
"Lau",
"Raymond Y. K.",
""
],
[
"Wang",
"Zhen",
""
],
[
"Smolley",
"Stephen Paul",
""
]
] | TITLE: Least Squares Generative Adversarial Networks
ABSTRACT: Unsupervised learning with generative adversarial networks (GANs) has proven
hugely successful. Regular GANs hypothesize the discriminator as a classifier
with the sigmoid cross entropy loss function. However, we found that this loss
function may lead to the vanishing gradients problem during the learning
process. To overcome such a problem, we propose in this paper the Least Squares
Generative Adversarial Networks (LSGANs) which adopt the least squares loss
function for the discriminator. We show that minimizing the objective function
of LSGAN yields minimizing the Pearson $\chi^2$ divergence. There are two
benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher
quality images than regular GANs. Second, LSGANs perform more stably during the
learning process. We evaluate LSGANs on five scene datasets and the
experimental results show that the images generated by LSGANs are of better
quality than the ones generated by regular GANs. We also conduct two comparison
experiments between LSGANs and regular GANs to illustrate the stability of
LSGANs.
| no_new_dataset | 0.952353 |
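The least squares losses this record refers to can be written in a few lines; the sketch below assumes the common a=0, b=1, c=1 coding, which is one of the parameterizations discussed for LSGANs, and uses random tensors in place of real discriminator outputs.

```python
# Least-squares GAN loss sketch with the a=0, b=1, c=1 coding scheme.
import torch

def d_loss_lsgan(d_real, d_fake, a=0.0, b=1.0):
    # Discriminator pushes real scores toward b and fake scores toward a.
    return 0.5 * ((d_real - b) ** 2).mean() + 0.5 * ((d_fake - a) ** 2).mean()

def g_loss_lsgan(d_fake, c=1.0):
    # Generator pushes the discriminator's score on fakes toward c.
    return 0.5 * ((d_fake - c) ** 2).mean()

d_real = torch.randn(16, 1)              # stand-in discriminator outputs on real data
d_fake = torch.randn(16, 1)              # stand-in discriminator outputs on fakes
print(d_loss_lsgan(d_real, d_fake), g_loss_lsgan(d_fake))
```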
1611.06646 | Basura Fernando | Basura Fernando, Hakan Bilen, Efstratios Gavves, Stephen Gould | Self-Supervised Video Representation Learning With Odd-One-Out Networks | Accepted in In IEEE International Conference on Computer Vision and
Pattern Recognition CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new self-supervised CNN pre-training technique based on a novel
auxiliary task called "odd-one-out learning". In this task, the machine is
asked to identify the unrelated or odd element from a set of otherwise related
elements. We apply this technique to self-supervised video representation
learning where we sample subsequences from videos and ask the network to learn
to predict the odd video subsequence. The odd video subsequence is sampled such
that it has wrong temporal order of frames while the even ones have the correct
temporal order. Therefore, to generate an odd-one-out question, no manual
annotation is required. Our learning machine is implemented as multi-stream
convolutional neural network, which is learned end-to-end. Using odd-one-out
networks, we learn temporal representations for videos that generalize to
other related tasks such as action recognition.
On action classification, our method obtains 60.3\% on the UCF101 dataset
using only UCF101 data for training which is approximately 10% better than
current state-of-the-art self-supervised learning methods. Similarly, on HMDB51
dataset, we outperform self-supervised state-of-the-art methods by 12.7% on
action classification task.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2016 04:35:45 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2017 00:05:09 GMT"
},
{
"version": "v3",
"created": "Mon, 3 Apr 2017 03:51:05 GMT"
},
{
"version": "v4",
"created": "Wed, 5 Apr 2017 05:52:00 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Fernando",
"Basura",
""
],
[
"Bilen",
"Hakan",
""
],
[
"Gavves",
"Efstratios",
""
],
[
"Gould",
"Stephen",
""
]
] | TITLE: Self-Supervised Video Representation Learning With Odd-One-Out Networks
ABSTRACT: We propose a new self-supervised CNN pre-training technique based on a novel
auxiliary task called "odd-one-out learning". In this task, the machine is
asked to identify the unrelated or odd element from a set of otherwise related
elements. We apply this technique to self-supervised video representation
learning where we sample subsequences from videos and ask the network to learn
to predict the odd video subsequence. The odd video subsequence is sampled such
that it has wrong temporal order of frames while the even ones have the correct
temporal order. Therefore, to generate an odd-one-out question, no manual
annotation is required. Our learning machine is implemented as multi-stream
convolutional neural network, which is learned end-to-end. Using odd-one-out
networks, we learn temporal representations for videos that generalize to
other related tasks such as action recognition.
On action classification, our method obtains 60.3\% on the UCF101 dataset
using only UCF101 data for training which is approximately 10% better than
current state-of-the-art self-supervised learning methods. Similarly, on HMDB51
dataset, we outperform self-supervised state-of-the-art methods by 12.7% on
action classification task.
| no_new_dataset | 0.949576 |
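The self-supervised question construction this record describes can be sketched on frame indices alone: several sampled subsequences keep their temporal order and one is shuffled, and the network must classify which option is the odd one. The number of options, the subsequence length, and the sampling scheme below are assumptions, not the paper's exact settings.

```python
# Sketch of building one odd-one-out question from a video's frame indices.
import random

def make_odd_one_out(num_frames, n_options=4, sub_len=6, rng=random):
    options = []
    for _ in range(n_options):
        start = rng.randrange(0, num_frames - sub_len)
        options.append(list(range(start, start + sub_len)))  # correct order
    odd_idx = rng.randrange(n_options)
    rng.shuffle(options[odd_idx])        # break temporal order for the odd one
    return options, odd_idx              # frame-index lists + classification target

options, target = make_odd_one_out(num_frames=120)
print(target, options[target])
```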
1703.08769 | Hang Zhao | Hang Zhao, Xavier Puig, Bolei Zhou, Sanja Fidler, Antonio Torralba | Open Vocabulary Scene Parsing | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing arbitrary objects in the wild has been a challenging problem due
to the limitations of existing classification models and datasets. In this
paper, we propose a new task that aims at parsing scenes with a large and open
vocabulary, and several evaluation metrics are explored for this problem. Our
proposed approach to this problem is a joint image pixel and word concept
embeddings framework, where word concepts are connected by semantic relations.
We validate the open vocabulary prediction ability of our framework on ADE20K
dataset which covers a wide variety of scenes and objects. We further explore
the trained joint embedding space to show its interpretability.
| [
{
"version": "v1",
"created": "Sun, 26 Mar 2017 05:44:56 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2017 18:28:20 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Zhao",
"Hang",
""
],
[
"Puig",
"Xavier",
""
],
[
"Zhou",
"Bolei",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Torralba",
"Antonio",
""
]
] | TITLE: Open Vocabulary Scene Parsing
ABSTRACT: Recognizing arbitrary objects in the wild has been a challenging problem due
to the limitations of existing classification models and datasets. In this
paper, we propose a new task that aims at parsing scenes with a large and open
vocabulary, and several evaluation metrics are explored for this problem. Our
proposed approach to this problem is a joint image pixel and word concept
embeddings framework, where word concepts are connected by semantic relations.
We validate the open vocabulary prediction ability of our framework on ADE20K
dataset which covers a wide variety of scenes and objects. We further explore
the trained joint embedding space to show its interpretability.
| no_new_dataset | 0.945349 |
1704.01178 | Nguyet Minh Mach | Andreas Hauptmann, Ville Kolehmainen, Nguyet Minh Mach, Tuomo
Savolainen, Aku Sepp\"anen, Samuli Siltanen | Open 2D Electrical Impedance Tomography data archive | 15 pages, 16 figures, open dataset | null | null | null | physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This document reports an Open 2D Electrical Impedance Tomography (EIT) data
set. The EIT measurements were collected from a circular body (a flat tank
filled with saline) with various choices of conductive and resistive
inclusions. Data are available at http://fips.fi/EIT_dataset.php and can be
freely used for scientific purposes with appropriate references to them, and to
this document at https://arxiv.org. The data set consists of (1) current
patterns and voltage measurements of a circular tank containing different
targets, (2) photos of the tank and targets and (3) a MATLAB-code for reading
the data. A video report of the data collection session is available at
https://www.youtube.com/watch?v=65Zca_qd1Y8.
| [
{
"version": "v1",
"created": "Tue, 4 Apr 2017 20:57:39 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Hauptmann",
"Andreas",
""
],
[
"Kolehmainen",
"Ville",
""
],
[
"Mach",
"Nguyet Minh",
""
],
[
"Savolainen",
"Tuomo",
""
],
[
"Seppänen",
"Aku",
""
],
[
"Siltanen",
"Samuli",
""
]
] | TITLE: Open 2D Electrical Impedance Tomography data archive
ABSTRACT: This document reports an Open 2D Electrical Impedance Tomography (EIT) data
set. The EIT measurements were collected from a circular body (a flat tank
filled with saline) with various choices of conductive and resistive
inclusions. Data are available at http://fips.fi/EIT_dataset.php and can be
freely used for scientific purposes with appropriate references to them, and to
this document at https://arxiv.org. The data set consists of (1) current
patterns and voltage measurements of a circular tank containing different
targets, (2) photos of the tank and targets and (3) a MATLAB-code for reading
the data. A video report of the data collection session is available at
https://www.youtube.com/watch?v=65Zca_qd1Y8.
| no_new_dataset | 0.837155 |
1704.01220 | Parvez Ahammad | Qingzhu Gao, Prasenjit Dey, and Parvez Ahammad | Perceived Performance of Webpages In the Wild: Insights from Large-scale
Crowdsourcing of Above-the-Fold QoE | 6 pages, 5 figures, submitted to ACM SIGCOMM 2nd Workshop on
QoE-based Analysis and Management of Data Communication Networks
(Internet-QoE 2017) | null | null | null | cs.NI cs.HC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clearly, no one likes webpages with poor quality of experience (QoE). Being
perceived as slow or fast is a key element in the overall perceived QoE of web
applications. While extensive effort has been put into optimizing web
applications (both in industry and academia), not a lot of work exists in
characterizing what aspects of webpage loading process truly influence human
end-user's perception of the "Speed" of a page. In this paper we present
"SpeedPerception", a large-scale web performance crowdsourcing framework
focused on understanding the perceived loading performance of above-the-fold
(ATF) webpage content. Our end goal is to create free open-source benchmarking
datasets to advance the systematic analysis of how humans perceive webpage
loading process. In Phase-1 of our "SpeedPerception" study using Internet
Retailer Top 500 (IR 500) websites
(https://github.com/pahammad/speedperception), we found that commonly used
navigation metrics such as "onLoad" and "Time To First Byte (TTFB)" fail (less
than 60% match) to represent majority human perception when comparing the speed
of two webpages. We present a simple 3-variable-based machine learning model
that explains the majority end-user choices better (with $87 \pm 2\%$
accuracy). In addition, our results suggest that the time needed by end-users
to evaluate the relative perceived speed of a webpage is far less than the time of
its "visualComplete" event.
| [
{
"version": "v1",
"created": "Tue, 4 Apr 2017 23:47:41 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Gao",
"Qingzhu",
""
],
[
"Dey",
"Prasenjit",
""
],
[
"Ahammad",
"Parvez",
""
]
] | TITLE: Perceived Performance of Webpages In the Wild: Insights from Large-scale
Crowdsourcing of Above-the-Fold QoE
ABSTRACT: Clearly, no one likes webpages with poor quality of experience (QoE). Being
perceived as slow or fast is a key element in the overall perceived QoE of web
applications. While extensive effort has been put into optimizing web
applications (both in industry and academia), not a lot of work exists in
characterizing what aspects of webpage loading process truly influence human
end-user's perception of the "Speed" of a page. In this paper we present
"SpeedPerception", a large-scale web performance crowdsourcing framework
focused on understanding the perceived loading performance of above-the-fold
(ATF) webpage content. Our end goal is to create free open-source benchmarking
datasets to advance the systematic analysis of how humans perceive the webpage
loading process. In Phase-1 of our "SpeedPerception" study using Internet
Retailer Top 500 (IR 500) websites
(https://github.com/pahammad/speedperception), we found that commonly used
navigation metrics such as "onLoad" and "Time To First Byte (TTFB)" fail (less
than 60% match) to represent majority human perception when comparing the speed
of two webpages. We present a simple 3-variable-based machine learning model
that explains the majority end-user choices better (with $87 \pm 2\%$
accuracy). In addition, our results suggest that the time needed by end-users
to evaluate the relative perceived speed of a webpage is far less than the time of
its "visualComplete" event.
| no_new_dataset | 0.508216 |
1704.01235 | Parag Chandakkar | Parag S. Chandakkar and Baoxin Li | Joint Regression and Ranking for Image Enhancement | WACV 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research on automated image enhancement has gained momentum in recent years,
partially due to the need for easy-to-use tools for enhancing pictures captured
by ubiquitous cameras on mobile devices. Many of the existing leading methods
employ machine-learning-based techniques, by which some enhancement parameters
for a given image are found by relating the image to the training images with
known enhancement parameters. While knowing the structure of the parameter
space can facilitate search for the optimal solution, none of the existing
methods has explicitly modeled and learned that structure. This paper presents
an end-to-end, novel joint regression and ranking approach to model the
interaction between desired enhancement parameters and images to be processed,
employing a Gaussian process (GP). GP allows searching for ideal parameters
using only the image features. The model naturally leads to a ranking technique
for comparing images in the induced feature space. Comparative evaluation using
the ground-truth based on the MIT-Adobe FiveK dataset plus subjective tests on
an additional data-set were used to demonstrate the effectiveness of the
proposed approach.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 01:28:04 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Chandakkar",
"Parag S.",
""
],
[
"Li",
"Baoxin",
""
]
] | TITLE: Joint Regression and Ranking for Image Enhancement
ABSTRACT: Research on automated image enhancement has gained momentum in recent years,
partially due to the need for easy-to-use tools for enhancing pictures captured
by ubiquitous cameras on mobile devices. Many of the existing leading methods
employ machine-learning-based techniques, by which some enhancement parameters
for a given image are found by relating the image to the training images with
known enhancement parameters. While knowing the structure of the parameter
space can facilitate search for the optimal solution, none of the existing
methods has explicitly modeled and learned that structure. This paper presents
an end-to-end, novel joint regression and ranking approach to model the
interaction between desired enhancement parameters and images to be processed,
employing a Gaussian process (GP). GP allows searching for ideal parameters
using only the image features. The model naturally leads to a ranking technique
for comparing images in the induced feature space. Comparative evaluation using
the ground-truth based on the MIT-Adobe FiveK dataset plus subjective tests on
an additional data-set were used to demonstrate the effectiveness of the
proposed approach.
| no_new_dataset | 0.947186 |
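The regression half of the idea in this record, a Gaussian process mapping image features to enhancement parameters, can be sketched with scikit-learn as below. The joint ranking component and real feature extraction are omitted, the kernel and data are synthetic placeholders, and nothing here reproduces the authors' implementation.

```python
# Sketch of GP regression from image features to enhancement parameters.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 16))        # stand-in image feature vectors
y_train = rng.normal(size=(100, 3))         # stand-in enhancement parameters

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X_train, y_train)

X_new = rng.normal(size=(1, 16))
params, std = gp.predict(X_new, return_std=True)
print(params, std)                          # predicted parameters + uncertainty
```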
1704.01248 | Parag Chandakkar | Parag S. Chandakkar, Vijetha Gattupalli and Baoxin Li | A Computational Approach to Relative Aesthetics | ICPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational visual aesthetics has recently become an active research area.
Existing state-of-art methods formulate this as a binary classification task
where a given image is predicted to be beautiful or not. In many applications
such as image retrieval and enhancement, it is more important to rank images
based on their aesthetic quality instead of binary-categorizing them.
Furthermore, in such applications, it may be possible that all images belong to
the same category. Hence determining the aesthetic ranking of the images is
more appropriate. To this end, we formulate a novel problem of ranking images
with respect to their aesthetic quality. We construct a new dataset of image
pairs with relative labels by carefully selecting images from the popular AVA
dataset. Unlike in aesthetics classification, there is no single threshold
which would determine the ranking order of the images across our entire
dataset. We propose a deep neural network based approach that is trained on
image pairs by incorporating principles from relative learning. Results show
that such relative training procedure allows our network to rank the images
with a higher accuracy than a state-of-art network trained on the same set of
images using binary labels.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 02:49:30 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Chandakkar",
"Parag S.",
""
],
[
"Gattupalli",
"Vijetha",
""
],
[
"Li",
"Baoxin",
""
]
] | TITLE: A Computational Approach to Relative Aesthetics
ABSTRACT: Computational visual aesthetics has recently become an active research area.
Existing state-of-art methods formulate this as a binary classification task
where a given image is predicted to be beautiful or not. In many applications
such as image retrieval and enhancement, it is more important to rank images
based on their aesthetic quality instead of binary-categorizing them.
Furthermore, in such applications, it may be possible that all images belong to
the same category. Hence determining the aesthetic ranking of the images is
more appropriate. To this end, we formulate a novel problem of ranking images
with respect to their aesthetic quality. We construct a new dataset of image
pairs with relative labels by carefully selecting images from the popular AVA
dataset. Unlike in aesthetics classification, there is no single threshold
which would determine the ranking order of the images across our entire
dataset. We propose a deep neural network based approach that is trained on
image pairs by incorporating principles from relative learning. Results show
that such relative training procedure allows our network to rank the images
with a higher accuracy than a state-of-art network trained on the same set of
images using binary labels.
| new_dataset | 0.964422 |
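A standard way to train on the relative pairs this record describes is a margin ranking loss over per-image aesthetic scores; the sketch below assumes that formulation with random tensors standing in for network outputs, and the margin value is an assumption rather than the paper's setting.

```python
# Pairwise margin-ranking sketch for relative aesthetics: the score of the
# preferred image in each pair should exceed the other by a margin.
import torch
import torch.nn as nn

ranking_loss = nn.MarginRankingLoss(margin=0.2)   # margin value is an assumption

score_a = torch.randn(8, requires_grad=True)      # scores for image A in each pair
score_b = torch.randn(8, requires_grad=True)      # scores for image B in each pair
target = torch.ones(8)                            # +1 means A should rank above B

loss = ranking_loss(score_a, score_b, target)
loss.backward()
print(loss.item())
```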
1704.01279 | Jesse Engel | Jesse Engel, Cinjon Resnick, Adam Roberts, Sander Dieleman, Douglas
Eck, Karen Simonyan, Mohammad Norouzi | Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders | null | null | null | null | cs.LG cs.AI cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative models in vision have seen rapid progress due to algorithmic
improvements and the availability of high-quality image datasets. In this
paper, we offer contributions in both these areas to enable similar progress in
audio modeling. First, we detail a powerful new WaveNet-style autoencoder model
that conditions an autoregressive decoder on temporal codes learned from the
raw audio waveform. Second, we introduce NSynth, a large-scale and high-quality
dataset of musical notes that is an order of magnitude larger than comparable
public datasets. Using NSynth, we demonstrate improved qualitative and
quantitative performance of the WaveNet autoencoder over a well-tuned spectral
autoencoder baseline. Finally, we show that the model learns a manifold of
embeddings that allows for morphing between instruments, meaningfully
interpolating in timbre to create new types of sounds that are realistic and
expressive.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 06:34:22 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Engel",
"Jesse",
""
],
[
"Resnick",
"Cinjon",
""
],
[
"Roberts",
"Adam",
""
],
[
"Dieleman",
"Sander",
""
],
[
"Eck",
"Douglas",
""
],
[
"Simonyan",
"Karen",
""
],
[
"Norouzi",
"Mohammad",
""
]
] | TITLE: Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders
ABSTRACT: Generative models in vision have seen rapid progress due to algorithmic
improvements and the availability of high-quality image datasets. In this
paper, we offer contributions in both these areas to enable similar progress in
audio modeling. First, we detail a powerful new WaveNet-style autoencoder model
that conditions an autoregressive decoder on temporal codes learned from the
raw audio waveform. Second, we introduce NSynth, a large-scale and high-quality
dataset of musical notes that is an order of magnitude larger than comparable
public datasets. Using NSynth, we demonstrate improved qualitative and
quantitative performance of the WaveNet autoencoder over a well-tuned spectral
autoencoder baseline. Finally, we show that the model learns a manifold of
embeddings that allows for morphing between instruments, meaningfully
interpolating in timbre to create new types of sounds that are realistic and
expressive.
| new_dataset | 0.956063 |
1704.01280 | Li-Chia Yang | Li-Chia Yang, Szu-Yu Chou, Jen-Yu Liu, Yi-Hsuan Yang, Yi-An Chen | Revisiting the problem of audio-based hit song prediction using
convolutional neural networks | To appear in the proceedings of 2017 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP) | null | null | null | cs.SD cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Being able to predict whether a song can be a hit has important
applications in the music industry. Although it is true that the popularity of
a song can be greatly affected by external factors such as social and
commercial influences, to which degree audio features computed from musical
signals (which we regard as internal factors) can predict song popularity is an
interesting research question on its own. Motivated by the recent success of
deep learning techniques, we attempt to extend previous work on hit song
prediction by jointly learning the audio features and prediction models using
deep learning. Specifically, we experiment with a convolutional neural network
model that takes the primitive mel-spectrogram as the input for feature
learning, a more advanced JYnet model that uses an external song dataset for
supervised pre-training and auto-tagging, and the combination of these two
models. We also consider the inception model to characterize audio information
at different scales. Our experiments suggest that deep structures are
indeed more accurate than shallow structures in predicting the popularity of
either Chinese or Western Pop songs in Taiwan. We also use the tags predicted
by JYnet to gain insights into the result of different models.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 06:39:51 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Yang",
"Li-Chia",
""
],
[
"Chou",
"Szu-Yu",
""
],
[
"Liu",
"Jen-Yu",
""
],
[
"Yang",
"Yi-Hsuan",
""
],
[
"Chen",
"Yi-An",
""
]
] | TITLE: Revisiting the problem of audio-based hit song prediction using
convolutional neural networks
ABSTRACT: Being able to predict whether a song can be a hit has important
applications in the music industry. Although it is true that the popularity of
a song can be greatly affected by external factors such as social and
commercial influences, to which degree audio features computed from musical
signals (which we regard as internal factors) can predict song popularity is an
interesting research question on its own. Motivated by the recent success of
deep learning techniques, we attempt to extend previous work on hit song
prediction by jointly learning the audio features and prediction models using
deep learning. Specifically, we experiment with a convolutional neural network
model that takes the primitive mel-spectrogram as the input for feature
learning, a more advanced JYnet model that uses an external song dataset for
supervised pre-training and auto-tagging, and the combination of these two
models. We also consider the inception model to characterize audio information
at different scales. Our experiments suggest that deep structures are
indeed more accurate than shallow structures in predicting the popularity of
either Chinese or Western Pop songs in Taiwan. We also use the tags predicted
by JYnet to gain insights into the result of different models.
| no_new_dataset | 0.93852 |
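The mel-spectrogram front end that feeds the CNN in this record can be computed with librosa as sketched below; the FFT, hop, and mel-band parameters are illustrative assumptions, and the CNN and JYnet models themselves are not reproduced.

```python
# Mel-spectrogram front end sketch with librosa (parameter choices are assumptions).
import numpy as np
import librosa

sr = 22050
y = np.sin(2 * np.pi * 440 * np.arange(sr * 3) / sr)   # 3 s synthetic tone as stand-in audio
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048,
                                     hop_length=512, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)          # log-compressed input for a CNN
print(log_mel.shape)                                     # (n_mels, n_frames)
```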
1704.01344 | Ziwei Liu | Xiaoxiao Li, Ziwei Liu, Ping Luo, Chen Change Loy, Xiaoou Tang | Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via
Deep Layer Cascade | To appear in CVPR 2017 as a spotlight paper | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel deep layer cascade (LC) method to improve the accuracy and
speed of semantic segmentation. Unlike the conventional model cascade (MC) that
is composed of multiple independent models, LC treats a single deep model as a
cascade of several sub-models. Earlier sub-models are trained to handle easy
and confident regions, and they progressively feed-forward harder regions to
the next sub-model for processing. Convolutions are only calculated on these
regions to reduce computations. The proposed method possesses several
advantages. First, LC classifies most of the easy regions in the shallow stage
and makes the deeper stages focus on a few hard regions. Such an adaptive and
'difficulty-aware' learning improves segmentation performance. Second, LC
accelerates both training and testing of the deep network thanks to early decisions
in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable
framework, allowing joint learning of all sub-models. We evaluate our method on
PASCAL VOC and Cityscapes datasets, achieving state-of-the-art performance and
fast speed.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 09:58:51 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Li",
"Xiaoxiao",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Luo",
"Ping",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] | TITLE: Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via
Deep Layer Cascade
ABSTRACT: We propose a novel deep layer cascade (LC) method to improve the accuracy and
speed of semantic segmentation. Unlike the conventional model cascade (MC) that
is composed of multiple independent models, LC treats a single deep model as a
cascade of several sub-models. Earlier sub-models are trained to handle easy
and confident regions, and they progressively feed-forward harder regions to
the next sub-model for processing. Convolutions are only calculated on these
regions to reduce computations. The proposed method possesses several
advantages. First, LC classifies most of the easy regions in the shallow stage
and makes the deeper stages focus on a few hard regions. Such an adaptive and
'difficulty-aware' learning improves segmentation performance. Second, LC
accelerates both training and testing of the deep network thanks to early decisions
in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable
framework, allowing joint learning of all sub-models. We evaluate our method on
PASCAL VOC and Cityscapes datasets, achieving state-of-the-art performance and
fast speed.
| no_new_dataset | 0.950273 |
1704.01372 | Jiqing Wu | Jiqing Wu, Radu Timofte, Zhiwu Huang, Luc Van Gool | On the Relation between Color Image Denoising and Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large amount of the image denoising literature focuses on single channel images
and often experimentally validates the proposed methods on tens of images at
most. In this paper, we investigate the interaction between denoising and
classification on a large-scale dataset. Inspired by classification models, we
propose a novel deep learning architecture for color (multichannel) image
denoising and report on thousands of images from ImageNet dataset as well as
commonly used imagery. We study the importance of (sufficient) training data,
how semantic class information can be traded for improved denoising results. As
a result, our method greatly improves PSNR performance by 0.34 - 0.51 dB on
average over state-of-the-art methods on the large-scale dataset. We conclude that
it is beneficial to incorporate in classification models. On the other hand, we
also study how noise affects classification performance. In the end, we come to
a number of interesting conclusions, some being counter-intuitive.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 11:28:25 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Wu",
"Jiqing",
""
],
[
"Timofte",
"Radu",
""
],
[
"Huang",
"Zhiwu",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: On the Relation between Color Image Denoising and Classification
ABSTRACT: A large amount of the image denoising literature focuses on single channel images
and often experimentally validates the proposed methods on tens of images at
most. In this paper, we investigate the interaction between denoising and
classification on a large-scale dataset. Inspired by classification models, we
propose a novel deep learning architecture for color (multichannel) image
denoising and report on thousands of images from ImageNet dataset as well as
commonly used imagery. We study the importance of (sufficient) training data,
how semantic class information can be traded for improved denoising results. As
a result, our method greatly improves PSNR performance by 0.34 - 0.51 dB on
average over state-of-the-art methods on the large-scale dataset. We conclude that
it is beneficial to incorporate in classification models. On the other hand, we
also study how noise affects classification performance. In the end, we come to
a number of interesting conclusions, some being counter-intuitive.
| no_new_dataset | 0.947088 |
1704.01510 | Martin Weigert | Martin Weigert, Loic Royer, Florian Jug, Gene Myers | Isotropic reconstruction of 3D fluorescence microscopy images using
convolutional neural networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fluorescence microscopy images usually show severe anisotropy in axial versus
lateral resolution. This hampers downstream processing, i.e. the automatic
extraction of quantitative biological data. While deconvolution methods and
other techniques to address this problem exist, they are either time consuming
to apply or limited in their ability to remove anisotropy. We propose a method
to recover isotropic resolution from readily acquired anisotropic data. We
achieve this using a convolutional neural network that is trained end-to-end
from the same anisotropic body of data we later apply the network to. The
network effectively learns to restore the full isotropic resolution by
restoring the image under a trained, sample specific image prior. We apply our
method to $3$ synthetic and $3$ real datasets and show that our results improve
on results from deconvolution and state-of-the-art super-resolution techniques.
Finally, we demonstrate that a standard 3D segmentation pipeline performs on
the output of our network with comparable accuracy as on the full isotropic
data.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 16:20:36 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Weigert",
"Martin",
""
],
[
"Royer",
"Loic",
""
],
[
"Jug",
"Florian",
""
],
[
"Myers",
"Gene",
""
]
] | TITLE: Isotropic reconstruction of 3D fluorescence microscopy images using
convolutional neural networks
ABSTRACT: Fluorescence microscopy images usually show severe anisotropy in axial versus
lateral resolution. This hampers downstream processing, i.e. the automatic
extraction of quantitative biological data. While deconvolution methods and
other techniques to address this problem exist, they are either time consuming
to apply or limited in their ability to remove anisotropy. We propose a method
to recover isotropic resolution from readily acquired anisotropic data. We
achieve this using a convolutional neural network that is trained end-to-end
from the same anisotropic body of data we later apply the network to. The
network effectively learns to restore the full isotropic resolution by
restoring the image under a trained, sample specific image prior. We apply our
method to $3$ synthetic and $3$ real datasets and show that our results improve
on results from deconvolution and state-of-the-art super-resolution techniques.
Finally, we demonstrate that a standard 3D segmentation pipeline performs on
the output of our network with comparable accuracy as on the full isotropic
data.
| no_new_dataset | 0.949295 |
1704.01518 | Anna Rohrbach | Anna Rohrbach, Marcus Rohrbach, Siyu Tang, Seong Joon Oh, Bernt
Schiele | Generating Descriptions with Grounded and Co-Referenced People | Accepted to CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning how to generate descriptions of images or videos received major
interest both in the Computer Vision and Natural Language Processing
communities. While a few works have proposed to learn a grounding during the
generation process in an unsupervised way (via an attention mechanism), it
remains unclear how good the quality of the grounding is and whether it
benefits the description quality. In this work we propose a movie description
model which learns to generate description and jointly ground (localize) the
mentioned characters as well as do visual co-reference resolution between pairs
of consecutive sentences/clips. We also propose to use weak localization
supervision through character mentions provided in movie descriptions to learn
the character grounding. At training time, we first learn how to localize
characters by relating their visual appearance to mentions in the descriptions
via a semi-supervised approach. We then provide this (noisy) supervision into
our description model which greatly improves its performance. Our proposed
description model improves over prior work w.r.t. generated description quality
and additionally provides grounding and local co-reference resolution. We
evaluate it on the MPII Movie Description dataset using automatic and human
evaluation measures and using our newly collected grounding and co-reference
data for characters.
| [
{
"version": "v1",
"created": "Wed, 5 Apr 2017 16:36:13 GMT"
}
] | 2017-04-06T00:00:00 | [
[
"Rohrbach",
"Anna",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Tang",
"Siyu",
""
],
[
"Oh",
"Seong Joon",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Generating Descriptions with Grounded and Co-Referenced People
ABSTRACT: Learning how to generate descriptions of images or videos received major
interest both in the Computer Vision and Natural Language Processing
communities. While a few works have proposed to learn a grounding during the
generation process in an unsupervised way (via an attention mechanism), it
remains unclear how good the quality of the grounding is and whether it
benefits the description quality. In this work we propose a movie description
model which learns to generate description and jointly ground (localize) the
mentioned characters as well as do visual co-reference resolution between pairs
of consecutive sentences/clips. We also propose to use weak localization
supervision through character mentions provided in movie descriptions to learn
the character grounding. At training time, we first learn how to localize
characters by relating their visual appearance to mentions in the descriptions
via a semi-supervised approach. We then provide this (noisy) supervision into
our description model which greatly improves its performance. Our proposed
description model improves over prior work w.r.t. generated description quality
and additionally provides grounding and local co-reference resolution. We
evaluate it on the MPII Movie Description dataset using automatic and human
evaluation measures and using our newly collected grounding and co-reference
data for characters.
| no_new_dataset | 0.943348 |
1506.01186 | Leslie Smith | Leslie N. Smith | Cyclical Learning Rates for Training Neural Networks | Presented at WACV 2017; see https://github.com/bckenstler/CLR for
instructions to implement CLR in Keras | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks.
| [
{
"version": "v1",
"created": "Wed, 3 Jun 2015 09:54:31 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Jun 2015 20:40:18 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Oct 2016 19:07:58 GMT"
},
{
"version": "v4",
"created": "Thu, 29 Dec 2016 15:20:01 GMT"
},
{
"version": "v5",
"created": "Thu, 23 Mar 2017 11:38:19 GMT"
},
{
"version": "v6",
"created": "Tue, 4 Apr 2017 11:34:46 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Smith",
"Leslie N.",
""
]
] | TITLE: Cyclical Learning Rates for Training Neural Networks
ABSTRACT: It is known that the learning rate is the most important hyper-parameter to
tune for training deep neural networks. This paper describes a new method for
setting the learning rate, named cyclical learning rates, which practically
eliminates the need to experimentally find the best values and schedule for the
global learning rates. Instead of monotonically decreasing the learning rate,
this method lets the learning rate cyclically vary between reasonable boundary
values. Training with cyclical learning rates instead of fixed values achieves
improved classification accuracy without a need to tune and often in fewer
iterations. This paper also describes a simple way to estimate "reasonable
bounds" -- linearly increasing the learning rate of the network for a few
epochs. In addition, cyclical learning rates are demonstrated on the CIFAR-10
and CIFAR-100 datasets with ResNets, Stochastic Depth networks, and DenseNets,
and the ImageNet dataset with the AlexNet and GoogLeNet architectures. These
are practical tools for everyone who trains neural networks.
| no_new_dataset | 0.954942 |
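The triangular cyclical learning rate policy summarized in this record can be written in a few lines: the rate rises linearly from a lower to an upper bound over a half-cycle of iterations and then falls back. The boundary values below are placeholders; in practice they would be chosen with the linearly-increasing "range test" the abstract mentions.

```python
# Triangular cyclical learning rate sketch: the rate oscillates linearly
# between base_lr and max_lr with a half-cycle of `step_size` iterations.
import math

def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# The rate rises for step_size iterations, falls back to base_lr, and repeats.
for it in (0, 1000, 2000, 3000, 4000):
    print(it, round(triangular_clr(it), 5))
```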
1601.07576 | Weilin Huang | Sheng Guo, Weilin Huang, Limin Wang, Yu Qiao | Locally-Supervised Deep Hybrid Model for Scene Recognition | To appear in IEEE Trans. on Image Processing, 2017 | null | 10.1109/TIP.2016.2629443 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks (CNN) have recently achieved remarkable
successes in various image classification and understanding tasks. The deep
features obtained at the top fully-connected layer of the CNN (FC-features)
exhibit rich global semantic information and are extremely effective in image
classification. On the other hand, the convolutional features in the middle
layers of the CNN also contain meaningful local information, but are not fully
explored for image representation. In this paper, we propose a novel
Locally-Supervised Deep Hybrid Model (LS-DHM) that effectively enhances and
explores the convolutional features for scene recognition. Firstly, we notice
that the convolutional features capture local objects and fine structures of
scene images, which yield important cues for discriminating ambiguous scenes,
whereas these features are significantly eliminated in the highly-compressed FC
representation. Secondly, we propose a new Local Convolutional Supervision
(LCS) layer to enhance the local structure of the image by directly propagating
the label information to the convolutional layers. Thirdly, we propose an
efficient Fisher Convolutional Vector (FCV) that successfully rescues the
orderless mid-level semantic information (e.g. objects and textures) of scene
image. The FCV encodes the large-sized convolutional maps into a fixed-length
mid-level representation, and is demonstrated to be strongly complementary to
the high-level FC-features. Finally, both the FCV and FC-features are
collaboratively employed in the LSDHM representation, which achieves
outstanding performance in our experiments. It obtains 83.75% and 67.56%
accuracies respectively on the heavily benchmarked MIT Indoor67 and SUN397
datasets, advancing the state-of-the-art substantially.
| [
{
"version": "v1",
"created": "Wed, 27 Jan 2016 21:32:15 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Dec 2016 21:30:09 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Guo",
"Sheng",
""
],
[
"Huang",
"Weilin",
""
],
[
"Wang",
"Limin",
""
],
[
"Qiao",
"Yu",
""
]
] | TITLE: Locally-Supervised Deep Hybrid Model for Scene Recognition
ABSTRACT: Convolutional neural networks (CNN) have recently achieved remarkable
successes in various image classification and understanding tasks. The deep
features obtained at the top fully-connected layer of the CNN (FC-features)
exhibit rich global semantic information and are extremely effective in image
classification. On the other hand, the convolutional features in the middle
layers of the CNN also contain meaningful local information, but are not fully
explored for image representation. In this paper, we propose a novel
Locally-Supervised Deep Hybrid Model (LS-DHM) that effectively enhances and
explores the convolutional features for scene recognition. Firstly, we notice
that the convolutional features capture local objects and fine structures of
scene images, which yield important cues for discriminating ambiguous scenes,
whereas these features are significantly eliminated in the highly-compressed FC
representation. Secondly, we propose a new Local Convolutional Supervision
(LCS) layer to enhance the local structure of the image by directly propagating
the label information to the convolutional layers. Thirdly, we propose an
efficient Fisher Convolutional Vector (FCV) that successfully rescues the
orderless mid-level semantic information (e.g. objects and textures) of scene
image. The FCV encodes the large-sized convolutional maps into a fixed-length
mid-level representation, and is demonstrated to be strongly complementary to
the high-level FC-features. Finally, both the FCV and FC-features are
collaboratively employed in the LS-DHM representation, which achieves
outstanding performance in our experiments. It obtains 83.75% and 67.56%
accuracies respectively on the heavily benchmarked MIT Indoor67 and SUN397
datasets, advancing the state-of-the-art substantially.
| no_new_dataset | 0.952175 |
1602.00216 | Jean Golay | Jean Golay, Michael Leuenberger, Mikhail Kanevski | Feature Selection for Regression Problems Based on the Morisita
Estimator of Intrinsic Dimension | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data acquisition, storage and management have been improved, while the key
factors of many phenomena are not well known. Consequently, irrelevant and
redundant features artificially increase the size of datasets, which
complicates learning tasks, such as regression. To address this problem,
feature selection methods have been proposed. This paper introduces a new
supervised filter based on the Morisita estimator of intrinsic dimension. It
can identify relevant features and distinguish between redundant and irrelevant
information. Besides, it offers a clear graphical representation of the
results, and it can be easily implemented in different programming languages.
Comprehensive numerical experiments are conducted using simulated datasets
characterized by different levels of complexity, sample size and noise. The
suggested algorithm is also successfully tested on a selection of real world
applications and compared with RReliefF using extreme learning machine. In
addition, a new measure of feature relevance is presented and discussed.
| [
{
"version": "v1",
"created": "Sun, 31 Jan 2016 09:59:27 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Feb 2016 17:03:26 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Mar 2016 20:40:06 GMT"
},
{
"version": "v4",
"created": "Fri, 11 Mar 2016 14:39:24 GMT"
},
{
"version": "v5",
"created": "Fri, 8 Apr 2016 18:37:17 GMT"
},
{
"version": "v6",
"created": "Tue, 4 Apr 2017 13:28:48 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Golay",
"Jean",
""
],
[
"Leuenberger",
"Michael",
""
],
[
"Kanevski",
"Mikhail",
""
]
] | TITLE: Feature Selection for Regression Problems Based on the Morisita
Estimator of Intrinsic Dimension
ABSTRACT: Data acquisition, storage and management have been improved, while the key
factors of many phenomena are not well known. Consequently, irrelevant and
redundant features artificially increase the size of datasets, which
complicates learning tasks, such as regression. To address this problem,
feature selection methods have been proposed. This paper introduces a new
supervised filter based on the Morisita estimator of intrinsic dimension. It
can identify relevant features and distinguish between redundant and irrelevant
information. Besides, it offers a clear graphical representation of the
results, and it can be easily implemented in different programming languages.
Comprehensive numerical experiments are conducted using simulated datasets
characterized by different levels of complexity, sample size and noise. The
suggested algorithm is also successfully tested on a selection of real world
applications and compared with RReliefF using extreme learning machine. In
addition, a new measure of feature relevance is presented and discussed.
| no_new_dataset | 0.944074 |
1604.04970 | Yueying Kao | Yueying Kao, Ran He, Kaiqi Huang | Deep Aesthetic Quality Assessment with Semantic Information | 13 pages, 10 figures | null | 10.1109/TIP.2017.2651399 | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human beings often assess the aesthetic quality of an image coupled with the
identification of the image's semantic content. This paper addresses the
correlation issue between automatic aesthetic quality assessment and semantic
recognition. We cast the assessment problem as the main task among a multi-task
deep model, and argue that the semantic recognition task offers the key to address
this problem. Based on convolutional neural networks, we employ a single and
simple multi-task framework to efficiently utilize the supervision of aesthetic
and semantic labels. A correlation item between these two tasks is further
introduced to the framework by incorporating the inter-task relationship
learning. This item not only provides some useful insight about the correlation
but also improves assessment accuracy of the aesthetic task. Particularly, an
effective strategy is developed to keep a balance between the two tasks, which
facilitates optimization of the parameters of the framework. Extensive experiments
on the challenging AVA dataset and Photo.net dataset validate the importance of
semantic recognition in aesthetic quality assessment, and demonstrate that
multi-task deep models can discover an effective aesthetic representation to
achieve state-of-the-art results.
| [
{
"version": "v1",
"created": "Mon, 18 Apr 2016 03:16:56 GMT"
},
{
"version": "v2",
"created": "Sat, 20 Aug 2016 14:09:48 GMT"
},
{
"version": "v3",
"created": "Fri, 21 Oct 2016 07:46:54 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Kao",
"Yueying",
""
],
[
"He",
"Ran",
""
],
[
"Huang",
"Kaiqi",
""
]
] | TITLE: Deep Aesthetic Quality Assessment with Semantic Information
ABSTRACT: Human beings often assess the aesthetic quality of an image coupled with the
identification of the image's semantic content. This paper addresses the
correlation issue between automatic aesthetic quality assessment and semantic
recognition. We cast the assessment problem as the main task among a multi-task
deep model, and argue that the semantic recognition task offers the key to address
this problem. Based on convolutional neural networks, we employ a single and
simple multi-task framework to efficiently utilize the supervision of aesthetic
and semantic labels. A correlation item between these two tasks is further
introduced to the framework by incorporating the inter-task relationship
learning. This item not only provides some useful insight about the correlation
but also improves assessment accuracy of the aesthetic task. Particularly, an
effective strategy is developed to keep a balance between the two tasks, which
facilitates optimization of the parameters of the framework. Extensive experiments
on the challenging AVA dataset and Photo.net dataset validate the importance of
semantic recognition in aesthetic quality assessment, and demonstrate that
multi-task deep models can discover an effective aesthetic representation to
achieve state-of-the-art results.
| no_new_dataset | 0.942348 |
1605.01436 | Behtash Babadi | Abbas Kazemipour, Sina Miran, Piya Pal, Behtash Babadi, and Min Wu | Sampling Requirements for Stable Autoregressive Estimation | null | null | 10.1109/TSP.2017.2656848 | null | cs.IT cs.DM math.IT math.OC stat.ME stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of estimating the parameters of a linear univariate
autoregressive model with sub-Gaussian innovations from a limited sequence of
consecutive observations. Assuming that the parameters are compressible, we
analyze the performance of the $\ell_1$-regularized least squares as well as a
greedy estimator of the parameters and characterize the sampling trade-offs
required for stable recovery in the non-asymptotic regime. In particular, we
show that for a fixed sparsity level, stable recovery of AR parameters is
possible when the number of samples scales sub-linearly with the AR order. Our
results improve over existing sampling complexity requirements in AR estimation
using the LASSO, when the sparsity level scales faster than the square root of
the model order. We further derive sufficient conditions on the sparsity level
that guarantee the minimax optimality of the $\ell_1$-regularized least squares
estimate. Applying these techniques to simulated data as well as real-world
datasets from crude oil prices and traffic speed data confirm our predicted
theoretical performance gains in terms of estimation accuracy and model
selection.
| [
{
"version": "v1",
"created": "Wed, 4 May 2016 21:07:04 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Jan 2017 19:22:02 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Kazemipour",
"Abbas",
""
],
[
"Miran",
"Sina",
""
],
[
"Pal",
"Piya",
""
],
[
"Babadi",
"Behtash",
""
],
[
"Wu",
"Min",
""
]
] | TITLE: Sampling Requirements for Stable Autoregressive Estimation
ABSTRACT: We consider the problem of estimating the parameters of a linear univariate
autoregressive model with sub-Gaussian innovations from a limited sequence of
consecutive observations. Assuming that the parameters are compressible, we
analyze the performance of the $\ell_1$-regularized least squares as well as a
greedy estimator of the parameters and characterize the sampling trade-offs
required for stable recovery in the non-asymptotic regime. In particular, we
show that for a fixed sparsity level, stable recovery of AR parameters is
possible when the number of samples scales sub-linearly with the AR order. Our
results improve over existing sampling complexity requirements in AR estimation
using the LASSO, when the sparsity level scales faster than the square root of
the model order. We further derive sufficient conditions on the sparsity level
that guarantee the minimax optimality of the $\ell_1$-regularized least squares
estimate. Applying these techniques to simulated data as well as real-world
datasets from crude oil prices and traffic speed data confirm our predicted
theoretical performance gains in terms of estimation accuracy and model
selection.
| no_new_dataset | 0.942507 |
1611.05396 | Zhenhua Feng | Zhen-Hua Feng, Josef Kittler, William Christmas, Patrik Huber and
Xiao-Jun Wu | Dynamic Attention-controlled Cascaded Shape Regression Exploiting
Training Data Augmentation and Fuzzy-set Sample Weighting | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new Cascaded Shape Regression (CSR) architecture, namely Dynamic
Attention-Controlled CSR (DAC-CSR), for robust facial landmark detection on
unconstrained faces. Our DAC-CSR divides facial landmark detection into three
cascaded sub-tasks: face bounding box refinement, general CSR and
attention-controlled CSR. The first two stages refine initial face bounding
boxes and output intermediate facial landmarks. Then, an online dynamic model
selection method is used to choose appropriate domain-specific CSRs for further
landmark refinement. The key innovation of our DAC-CSR is the fault-tolerant
mechanism, using fuzzy set sample weighting for attention-controlled
domain-specific model training. Moreover, we advocate data augmentation with a
simple but effective 2D profile face generator, and context-aware feature
extraction for better facial feature representation. Experimental results
obtained on challenging datasets demonstrate the merits of our DAC-CSR over the
state-of-the-art.
| [
{
"version": "v1",
"created": "Wed, 16 Nov 2016 18:18:07 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2017 17:45:43 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Feng",
"Zhen-Hua",
""
],
[
"Kittler",
"Josef",
""
],
[
"Christmas",
"William",
""
],
[
"Huber",
"Patrik",
""
],
[
"Wu",
"Xiao-Jun",
""
]
] | TITLE: Dynamic Attention-controlled Cascaded Shape Regression Exploiting
Training Data Augmentation and Fuzzy-set Sample Weighting
ABSTRACT: We present a new Cascaded Shape Regression (CSR) architecture, namely Dynamic
Attention-Controlled CSR (DAC-CSR), for robust facial landmark detection on
unconstrained faces. Our DAC-CSR divides facial landmark detection into three
cascaded sub-tasks: face bounding box refinement, general CSR and
attention-controlled CSR. The first two stages refine initial face bounding
boxes and output intermediate facial landmarks. Then, an online dynamic model
selection method is used to choose appropriate domain-specific CSRs for further
landmark refinement. The key innovation of our DAC-CSR is the fault-tolerant
mechanism, using fuzzy set sample weighting for attention-controlled
domain-specific model training. Moreover, we advocate data augmentation with a
simple but effective 2D profile face generator, and context-aware feature
extraction for better facial feature representation. Experimental results
obtained on challenging datasets demonstrate the merits of our DAC-CSR over the
state-of-the-art.
| no_new_dataset | 0.947817 |
1611.06759 | J\'er\^ome Tubiana | J\'er\^ome Tubiana (LPTENS), R\'emi Monasson (LPTENS) | Emergence of Compositional Representations in Restricted Boltzmann
Machines | Supplementary material available at the authors' webpage | Phys. Rev. Lett. 118, 138301 (2017) | 10.1103/PhysRevLett.118.138301 | null | physics.data-an cond-mat.dis-nn cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting automatically the complex set of features composing real
high-dimensional data is crucial for achieving high performance in
machine-learning tasks. Restricted Boltzmann Machines (RBM) are empirically
known to be efficient for this purpose, and to be able to generate distributed
and graded representations of the data. We characterize the structural
conditions (sparsity of the weights, low effective temperature, nonlinearities
in the activation functions of hidden units, and adaptation of fields
maintaining the activity in the visible layer) allowing RBM to operate in such
a compositional phase. Evidence is provided by the replica analysis of an
adequate statistical ensemble of random RBMs and by RBM trained on the
handwritten digits dataset MNIST.
| [
{
"version": "v1",
"created": "Mon, 21 Nov 2016 12:46:25 GMT"
},
{
"version": "v2",
"created": "Thu, 2 Mar 2017 21:50:02 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Tubiana",
"Jérôme",
"",
"LPTENS"
],
[
"Monasson",
"Rémi",
"",
"LPTENS"
]
] | TITLE: Emergence of Compositional Representations in Restricted Boltzmann
Machines
ABSTRACT: Extracting automatically the complex set of features composing real
high-dimensional data is crucial for achieving high performance in
machine-learning tasks. Restricted Boltzmann Machines (RBM) are empirically
known to be efficient for this purpose, and to be able to generate distributed
and graded representations of the data. We characterize the structural
conditions (sparsity of the weights, low effective temperature, nonlinearities
in the activation functions of hidden units, and adaptation of fields
maintaining the activity in the visible layer) allowing RBM to operate in such
a compositional phase. Evidence is provided by the replica analysis of an
adequate statistical ensemble of random RBMs and by RBM trained on the
handwritten digits dataset MNIST.
| no_new_dataset | 0.946547 |
1612.02761 | Yuelong Li | Yuelong Li, Chul Lee and Vishal Monga | A Maximum A Posteriori Estimation Framework for Robust High Dynamic
Range Video Synthesis | null | null | 10.1109/TIP.2016.2642790 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High dynamic range (HDR) image synthesis from multiple low dynamic range
(LDR) exposures continues to be actively researched. The extension to HDR video
synthesis is a topic of significant current interest due to potential cost
benefits. For HDR video, a stiff practical challenge presents itself in the
form of accurate correspondence estimation of objects between video frames. In
particular, loss of data resulting from poor exposures and varying intensity
make conventional optical flow methods highly inaccurate. We avoid exact
correspondence estimation by proposing a statistical approach via maximum a
posteriori (MAP) estimation, and under appropriate statistical assumptions and
choice of priors and models, we reduce it to an optimization problem of solving
for the foreground and background of the target frame. We obtain the background
through rank minimization and estimate the foreground via a novel multiscale
adaptive kernel regression technique, which implicitly captures local structure
and temporal motion by solving an unconstrained optimization problem. Extensive
experimental results on both real and synthetic datasets demonstrate that our
algorithm is more capable of delivering high-quality HDR videos than current
state-of-the-art methods, under both subjective and objective assessments.
Furthermore, a thorough complexity analysis reveals that our algorithm achieves
better complexity-performance trade-off than conventional methods.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2016 18:33:08 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Li",
"Yuelong",
""
],
[
"Lee",
"Chul",
""
],
[
"Monga",
"Vishal",
""
]
] | TITLE: A Maximum A Posteriori Estimation Framework for Robust High Dynamic
Range Video Synthesis
ABSTRACT: High dynamic range (HDR) image synthesis from multiple low dynamic range
(LDR) exposures continues to be actively researched. The extension to HDR video
synthesis is a topic of significant current interest due to potential cost
benefits. For HDR video, a stiff practical challenge presents itself in the
form of accurate correspondence estimation of objects between video frames. In
particular, loss of data resulting from poor exposures and varying intensity
make conventional optical flow methods highly inaccurate. We avoid exact
correspondence estimation by proposing a statistical approach via maximum a
posteriori (MAP) estimation, and under appropriate statistical assumptions and
choice of priors and models, we reduce it to an optimization problem of solving
for the foreground and background of the target frame. We obtain the background
through rank minimization and estimate the foreground via a novel multiscale
adaptive kernel regression technique, which implicitly captures local structure
and temporal motion by solving an unconstrained optimization problem. Extensive
experimental results on both real and synthetic datasets demonstrate that our
algorithm is more capable of delivering high-quality HDR videos than current
state-of-the-art methods, under both subjective and objective assessments.
Furthermore, a thorough complexity analysis reveals that our algorithm achieves
better complexity-performance trade-off than conventional methods.
| no_new_dataset | 0.948489 |
1701.01909 | Amir Sadeghian | Amir Sadeghian, Alexandre Alahi, and Silvio Savarese | Tracking The Untrackable: Learning To Track Multiple Cues with Long-Term
Dependencies | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The majority of existing solutions to the Multi-Target Tracking (MTT) problem
do not combine cues in a coherent end-to-end fashion over a long period of
time. However, we present an online method that encodes long-term temporal
dependencies across multiple cues. One key challenge of tracking methods is to
accurately track occluded targets or those which share similar appearance
properties with surrounding objects. To address this challenge, we present a
structure of Recurrent Neural Networks (RNN) that jointly reasons on multiple
cues over a temporal window. We are able to correct many data association
errors and recover observations from an occluded state. We demonstrate the
robustness of our data-driven approach by tracking multiple targets using their
appearance, motion, and even interactions. Our method outperforms previous
works on multiple publicly available datasets including the challenging MOT
benchmark.
| [
{
"version": "v1",
"created": "Sun, 8 Jan 2017 03:29:26 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2017 21:42:58 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Sadeghian",
"Amir",
""
],
[
"Alahi",
"Alexandre",
""
],
[
"Savarese",
"Silvio",
""
]
] | TITLE: Tracking The Untrackable: Learning To Track Multiple Cues with Long-Term
Dependencies
ABSTRACT: The majority of existing solutions to the Multi-Target Tracking (MTT) problem
do not combine cues in a coherent end-to-end fashion over a long period of
time. However, we present an online method that encodes long-term temporal
dependencies across multiple cues. One key challenge of tracking methods is to
accurately track occluded targets or those which share similar appearance
properties with surrounding objects. To address this challenge, we present a
structure of Recurrent Neural Networks (RNN) that jointly reasons on multiple
cues over a temporal window. We are able to correct many data association
errors and recover observations from an occluded state. We demonstrate the
robustness of our data-driven approach by tracking multiple targets using their
appearance, motion, and even interactions. Our method outperforms previous
works on multiple publicly available datasets including the challenging MOT
benchmark.
| no_new_dataset | 0.946498 |
1702.02744 | Jubin Johnson | Jubin Johnson, Hisham Cholakkal and Deepu Rajan | L1-regularized Reconstruction Error as Alpha Matte | 5 pages, 5 figure, Accepted in IEEE Signal Processing Letters | null | 10.1109/LSP.2017.2666180 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sampling-based alpha matting methods have traditionally followed the
compositing equation to estimate the alpha value at a pixel from a pair of
foreground (F) and background (B) samples. The (F,B) pair that produces the
least reconstruction error is selected, followed by alpha estimation. The
significance of that residual error has been left unexamined. In this letter,
we propose a video matting algorithm that uses L1-regularized reconstruction
error of F and B samples as a measure of the alpha matte. A multi-frame
non-local means framework using coherency sensitive hashing is utilized to
ensure temporal coherency in the video mattes. Qualitative and quantitative
evaluations on a dataset exclusively for video matting demonstrate the
effectiveness of the proposed matting algorithm.
| [
{
"version": "v1",
"created": "Thu, 9 Feb 2017 08:29:58 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Johnson",
"Jubin",
""
],
[
"Cholakkal",
"Hisham",
""
],
[
"Rajan",
"Deepu",
""
]
] | TITLE: L1-regularized Reconstruction Error as Alpha Matte
ABSTRACT: Sampling-based alpha matting methods have traditionally followed the
compositing equation to estimate the alpha value at a pixel from a pair of
foreground (F) and background (B) samples. The (F,B) pair that produces the
least reconstruction error is selected, followed by alpha estimation. The
significance of that residual error has been left unexamined. In this letter,
we propose a video matting algorithm that uses L1-regularized reconstruction
error of F and B samples as a measure of the alpha matte. A multi-frame
non-local means framework using coherency sensitive hashing is utilized to
ensure temporal coherency in the video mattes. Qualitative and quantitative
evaluations on a dataset exclusively for video matting demonstrate the
effectiveness of the proposed matting algorithm.
| no_new_dataset | 0.946151 |
1702.07735 | Mitch Rees-Jones | Mitch Rees-Jones, Matthew Martin, Tim Menzies | Better Predictors for Issue Lifetime | 9 pages, 3 figures, 5 tables | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting issue lifetime can help software developers, managers, and
stakeholders effectively prioritize work, allocate development resources, and
better understand project timelines. Progress had been made on this prediction
problem, but prior work has reported low precision and high false alarms. The
latest results also use complex models such as random forests that detract from
their readability.
We solve both issues by using small, readable decision trees (under 20 lines
long) and correlation feature selection to predict issue lifetime, achieving
high precision and low false alarms (medians of 71% and 13% respectively). We
also address the problem of high class imbalance within issue datasets - when
local data fails to train a good model, we show that cross-project data can be
used in place of the local data. In fact, cross-project data works so well that
we argue it should be the default approach for learning predictors for issue
lifetime.
| [
{
"version": "v1",
"created": "Fri, 24 Feb 2017 19:15:31 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2017 13:37:03 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Rees-Jones",
"Mitch",
""
],
[
"Martin",
"Matthew",
""
],
[
"Menzies",
"Tim",
""
]
] | TITLE: Better Predictors for Issue Lifetime
ABSTRACT: Predicting issue lifetime can help software developers, managers, and
stakeholders effectively prioritize work, allocate development resources, and
better understand project timelines. Progress had been made on this prediction
problem, but prior work has reported low precision and high false alarms. The
latest results also use complex models such as random forests that detract from
their readability.
We solve both issues by using small, readable decision trees (under 20 lines
long) and correlation feature selection to predict issue lifetime, achieving
high precision and low false alarms (medians of 71% and 13% respectively). We
also address the problem of high class imbalance within issue datasets - when
local data fails to train a good model, we show that cross-project data can be
used in place of the local data. In fact, cross-project data works so well that
we argue it should be the default approach for learning predictors for issue
lifetime.
| no_new_dataset | 0.948298 |
1703.01698 | Abhineet Singh | Mennatullah Siam, Abhineet Singh, Camilo Perez and Martin Jagersand | 4-DoF Tracking for Robot Fine Manipulation Tasks | accepted in CRV 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents two visual trackers from the different paradigms of
learning and registration based tracking and evaluates their application in
image based visual servoing. They can track object motion with four degrees of
freedom (DoF) which, as we will show here, is sufficient for many fine
manipulation tasks. One of these trackers is a newly developed learning based
tracker that relies on learning discriminative correlation filters while the
other is a refinement of a recent 8 DoF RANSAC based tracker adapted with a new
appearance model for tracking 4 DoF motion.
Both trackers are shown to provide superior performance to several state of
the art trackers on an existing dataset for manipulation tasks. Further, a new
dataset with challenging sequences for fine manipulation tasks captured from
robot mounted eye-in-hand (EIH) cameras is also presented. These sequences have
a variety of challenges encountered during real tasks including jittery camera
movement, motion blur, drastic scale changes and partial occlusions.
Quantitative and qualitative results on these sequences are used to show that
these two trackers are robust to failures while providing high precision that
makes them suitable for such fine manipulation tasks.
| [
{
"version": "v1",
"created": "Mon, 6 Mar 2017 00:59:46 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2017 01:33:14 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Siam",
"Mennatullah",
""
],
[
"Singh",
"Abhineet",
""
],
[
"Perez",
"Camilo",
""
],
[
"Jagersand",
"Martin",
""
]
] | TITLE: 4-DoF Tracking for Robot Fine Manipulation Tasks
ABSTRACT: This paper presents two visual trackers from the different paradigms of
learning and registration based tracking and evaluates their application in
image based visual servoing. They can track object motion with four degrees of
freedom (DoF) which, as we will show here, is sufficient for many fine
manipulation tasks. One of these trackers is a newly developed learning based
tracker that relies on learning discriminative correlation filters while the
other is a refinement of a recent 8 DoF RANSAC based tracker adapted with a new
appearance model for tracking 4 DoF motion.
Both trackers are shown to provide superior performance to several state of
the art trackers on an existing dataset for manipulation tasks. Further, a new
dataset with challenging sequences for fine manipulation tasks captured from
robot mounted eye-in-hand (EIH) cameras is also presented. These sequences have
a variety of challenges encountered during real tasks including jittery camera
movement, motion blur, drastic scale changes and partial occlusions.
Quantitative and qualitative results on these sequences are used to show that
these two trackers are robust to failures while providing high precision that
makes them suitable for such fine manipulation tasks.
| new_dataset | 0.954435 |
1703.07949 | Joshua Joy | Joshua Joy, Mario Gerla | Anonymized Local Privacy | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce the family of Anonymized Local Privacy
mechanisms. These mechanisms have an output space of three values "Yes", "No",
or "$\perp$" (not participating) and leverage the law of large numbers to
generate linear noise in the number of data owners to protect privacy both
before and after aggregation yet preserve accuracy.
We describe the suitability in a distributed on-demand network and evaluate
over a real dataset as we scale the population.
| [
{
"version": "v1",
"created": "Thu, 23 Mar 2017 07:15:55 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Mar 2017 08:02:36 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Apr 2017 01:40:22 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Joy",
"Joshua",
""
],
[
"Gerla",
"Mario",
""
]
] | TITLE: Anonymized Local Privacy
ABSTRACT: In this paper, we introduce the family of Anonymized Local Privacy
mechanisms. These mechanisms have an output space of three values "Yes", "No",
or "$\perp$" (not participating) and leverage the law of large numbers to
generate linear noise in the number of data owners to protect privacy both
before and after aggregation yet preserve accuracy.
We describe the suitability in a distributed on-demand network and evaluate
over a real dataset as we scale the population.
| no_new_dataset | 0.951414 |
1703.08961 | Eugene Belilovsky | Edouard Oyallon (DI-ENS), Eugene Belilovsky (CVN, GALEN), Sergey
Zagoruyko (ENPC) | Scaling the Scattering Transform: Deep Hybrid Networks | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use the scattering network as a generic and fixed initialization of the
first layers of a supervised hybrid deep network. We show that early layers do
not necessarily need to be learned, providing the best results to-date with
pre-defined representations while being competitive with Deep CNNs. Using a
shallow cascade of 1 x 1 convolutions, which encodes scattering coefficients
that correspond to spatial windows of very small sizes, permits to obtain
AlexNet accuracy on the imagenet ILSVRC2012. We demonstrate that this local
encoding explicitly learns invariance w.r.t. rotations. Combining scattering
networks with a modern ResNet, we achieve a single-crop top 5 error of 11.4% on
imagenet ILSVRC2012, comparable to the Resnet-18 architecture, while utilizing
only 10 layers. We also find that hybrid architectures can yield excellent
performance in the small sample regime, exceeding their end-to-end
counterparts, through their ability to incorporate geometrical priors. We
demonstrate this on subsets of the CIFAR-10 dataset and on the STL-10 dataset.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2017 07:49:43 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2017 06:13:22 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Oyallon",
"Edouard",
"",
"DI-ENS"
],
[
"Belilovsky",
"Eugene",
"",
"CVN, GALEN"
],
[
"Zagoruyko",
"Sergey",
"",
"ENPC"
]
] | TITLE: Scaling the Scattering Transform: Deep Hybrid Networks
ABSTRACT: We use the scattering network as a generic and fixed initialization of the
first layers of a supervised hybrid deep network. We show that early layers do
not necessarily need to be learned, providing the best results to-date with
pre-defined representations while being competitive with Deep CNNs. Using a
shallow cascade of 1 x 1 convolutions, which encodes scattering coefficients
that correspond to spatial windows of very small sizes, permits to obtain
AlexNet accuracy on the imagenet ILSVRC2012. We demonstrate that this local
encoding explicitly learns invariance w.r.t. rotations. Combining scattering
networks with a modern ResNet, we achieve a single-crop top 5 error of 11.4% on
imagenet ILSVRC2012, comparable to the Resnet-18 architecture, while utilizing
only 10 layers. We also find that hybrid architectures can yield excellent
performance in the small sample regime, exceeding their end-to-end
counterparts, through their ability to incorporate geometrical priors. We
demonstrate this on subsets of the CIFAR-10 dataset and on the STL-10 dataset.
| no_new_dataset | 0.948537 |
1704.00758 | Waqas Sultani | Waqas Sultani, Dong Zhang and Mubarak Shah | Unsupervised Action Proposal Ranking through Proposal Recombination | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, action proposal methods have played an important role in action
recognition tasks, as they reduce the search space dramatically. Most
unsupervised action proposal methods tend to generate hundreds of action
proposals which include many noisy, inconsistent, and unranked action
proposals, while supervised action proposal methods take advantage of
predefined object detectors (e.g., human detector) to refine and score the
action proposals, but they require thousands of manual annotations to train.
Given the action proposals in a video, the goal of the proposed work is to
generate a few better action proposals that are ranked properly. In our
approach, we first divide action proposal into sub-proposal and then use
Dynamic Programming based graph optimization scheme to select the optimal
combinations of sub-proposals from different proposals and assign each new
proposal a score. We propose a new unsupervised image-based actioness detector
that leverages web images and employs it as one of the node scores in our graph
formulation. Moreover, we capture motion information by estimating the number
of motion contours within each action proposal patch. The proposed method is an
unsupervised method that neither needs bounding box annotations nor video level
labels, which is desirable with the current explosion of large-scale action
datasets. Our approach is generic and does not depend on a specific action
proposal method. We evaluate our approach on several publicly available trimmed
and un-trimmed datasets and obtain better performance compared to several
proposal ranking methods. In addition, we demonstrate that properly ranked
proposals produce significantly better action detection as compared to
state-of-the-art proposal based methods.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2017 18:43:20 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Sultani",
"Waqas",
""
],
[
"Zhang",
"Dong",
""
],
[
"Shah",
"Mubarak",
""
]
] | TITLE: Unsupervised Action Proposal Ranking through Proposal Recombination
ABSTRACT: Recently, action proposal methods have played an important role in action
recognition tasks, as they reduce the search space dramatically. Most
unsupervised action proposal methods tend to generate hundreds of action
proposals which include many noisy, inconsistent, and unranked action
proposals, while supervised action proposal methods take advantage of
predefined object detectors (e.g., human detector) to refine and score the
action proposals, but they require thousands of manual annotations to train.
Given the action proposals in a video, the goal of the proposed work is to
generate a few better action proposals that are ranked properly. In our
approach, we first divide action proposal into sub-proposal and then use
Dynamic Programming based graph optimization scheme to select the optimal
combinations of sub-proposals from different proposals and assign each new
proposal a score. We propose a new unsupervised image-based actioness detector
that leverages web images and employs it as one of the node scores in our graph
formulation. Moreover, we capture motion information by estimating the number
of motion contours within each action proposal patch. The proposed method is an
unsupervised method that neither needs bounding box annotations nor video level
labels, which is desirable with the current explosion of large-scale action
datasets. Our approach is generic and does not depend on a specific action
proposal method. We evaluate our approach on several publicly available trimmed
and un-trimmed datasets and obtain better performance compared to several
proposal ranking methods. In addition, we demonstrate that properly ranked
proposals produce significantly better action detection as compared to
state-of-the-art proposal based methods.
| no_new_dataset | 0.949201 |
1704.00763 | Kan Chen | Kan Chen, Trung Bui, Fang Chen, Zhaowen Wang, Ram Nevatia | AMC: Attention guided Multi-modal Correlation Learning for Image Search | CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a user's query, traditional image search systems rank images according
to their relevance to a single modality (e.g., image content or surrounding
text). Nowadays, an increasing number of images on the Internet are available
with associated meta data in rich modalities (e.g., titles, keywords, tags,
etc.), which can be exploited for better similarity measure with queries. In
this paper, we leverage visual and textual modalities for image search by
learning their correlation with input query. According to the intent of query,
attention mechanism can be introduced to adaptively balance the importance of
different modalities. We propose a novel Attention guided Multi-modal
Correlation (AMC) learning method which consists of a jointly learned hierarchy
of intra and inter-attention networks. Conditioned on query's intent,
intra-attention networks (i.e., visual intra-attention network and language
intra-attention network) attend on informative parts within each modality; a
multi-modal inter-attention network promotes the importance of the most
query-relevant modalities. In experiments, we evaluate AMC models on the search
logs from two real world image search engines and show a significant boost on
the ranking of user-clicked images in search results. Additionally, we extend
AMC models to caption ranking task on COCO dataset and achieve competitive
results compared with recent state-of-the-arts.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2017 18:57:42 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Chen",
"Kan",
""
],
[
"Bui",
"Trung",
""
],
[
"Chen",
"Fang",
""
],
[
"Wang",
"Zhaowen",
""
],
[
"Nevatia",
"Ram",
""
]
] | TITLE: AMC: Attention guided Multi-modal Correlation Learning for Image Search
ABSTRACT: Given a user's query, traditional image search systems rank images according
to their relevance to a single modality (e.g., image content or surrounding
text). Nowadays, an increasing number of images on the Internet are available
with associated meta data in rich modalities (e.g., titles, keywords, tags,
etc.), which can be exploited for better similarity measure with queries. In
this paper, we leverage visual and textual modalities for image search by
learning their correlation with input query. According to the intent of query,
attention mechanism can be introduced to adaptively balance the importance of
different modalities. We propose a novel Attention guided Multi-modal
Correlation (AMC) learning method which consists of a jointly learned hierarchy
of intra and inter-attention networks. Conditioned on query's intent,
intra-attention networks (i.e., visual intra-attention network and language
intra-attention network) attend on informative parts within each modality; a
multi-modal inter-attention network promotes the importance of the most
query-relevant modalities. In experiments, we evaluate AMC models on the search
logs from two real world image search engines and show a significant boost on
the ranking of user-clicked images in search results. Additionally, we extend
AMC models to caption ranking task on COCO dataset and achieve competitive
results compared with recent state-of-the-arts.
| no_new_dataset | 0.944177 |
1704.00829 | Emiliano Diaz | Emiliano Diaz | Online deforestation detection | null | null | null | null | stat.AP cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deforestation detection using satellite images can make an important
contribution to forest management. Current approaches can be broadly divided
into those that compare two images taken at similar periods of the year and
those that monitor changes by using multiple images taken during the growing
season. The CMFDA algorithm described in Zhu et al. (2012) is an algorithm that
builds on the latter category by implementing a year-long, continuous,
time-series based approach to monitoring images. This algorithm was developed
for 30m resolution, 16-day frequency reflectance data from the Landsat
satellite. In this work we adapt the algorithm to 1km, 16-day frequency
reflectance data from the modis sensor aboard the Terra satellite. The CMFDA
algorithm is composed of two submodels which are fitted on a pixel-by-pixel
basis. The first estimates the amount of surface reflectance as a function of
the day of the year. The second estimates the occurrence of a deforestation
event by comparing the last few predicted and real reflectance values. For this
comparison, the reflectance observations for six different bands are first
combined into a forest index. Real and predicted values of the forest index are
then compared and high absolute differences for consecutive observation dates
are flagged as deforestation events. Our adapted algorithm also uses the two
model framework. However, since the modis 13A2 dataset used includes
reflectance data for different spectral bands than those included in the
Landsat dataset, we cannot construct the forest index. Instead we propose two
contrasting approaches: a multivariate and an index approach similar to that of
CMFDA.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2017 22:40:48 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Diaz",
"Emiliano",
""
]
] | TITLE: Online deforestation detection
ABSTRACT: Deforestation detection using satellite images can make an important
contribution to forest management. Current approaches can be broadly divided
into those that compare two images taken at similar periods of the year and
those that monitor changes by using multiple images taken during the growing
season. The CMFDA algorithm described in Zhu et al. (2012) is an algorithm that
builds on the latter category by implementing a year-long, continuous,
time-series based approach to monitoring images. This algorithm was developed
for 30m resolution, 16-day frequency reflectance data from the Landsat
satellite. In this work we adapt the algorithm to 1km, 16-day frequency
reflectance data from the modis sensor aboard the Terra satellite. The CMFDA
algorithm is composed of two submodels which are fitted on a pixel-by-pixel
basis. The first estimates the amount of surface reflectance as a function of
the day of the year. The second estimates the occurrence of a deforestation
event by comparing the last few predicted and real reflectance values. For this
comparison, the reflectance observations for six different bands are first
combined into a forest index. Real and predicted values of the forest index are
then compared and high absolute differences for consecutive observation dates
are flagged as deforestation events. Our adapted algorithm also uses the two
model framework. However, since the modis 13A2 dataset used includes
reflectance data for different spectral bands than those included in the
Landsat dataset, we cannot construct the forest index. Instead we propose two
contrasting approaches: a multivariate and an index approach similar to that of
CMFDA.
| no_new_dataset | 0.9434 |
1704.00834 | Siyang Qin | Siyang Qin and Roberto Manduchi | Cascaded Segmentation-Detection Networks for Word-Level Text Spotting | 7 pages, 8 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an algorithm for word-level text spotting that is able to
accurately and reliably determine the bounding regions of individual words of
text "in the wild". Our system is formed by the cascade of two convolutional
neural networks. The first network is fully convolutional and is in charge of
detecting areas containing text. This results in a very reliable but possibly
inaccurate segmentation of the input image. The second network (inspired by the
popular YOLO architecture) analyzes each segment produced in the first stage,
and predicts oriented rectangular regions containing individual words. No
post-processing (e.g. text line grouping) is necessary. With execution time of
450 ms for a 1000-by-560 image on a Titan X GPU, our system achieves the
highest score to date among published algorithms on the ICDAR 2015 Incidental
Scene Text dataset benchmark.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2017 23:55:13 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Qin",
"Siyang",
""
],
[
"Manduchi",
"Roberto",
""
]
] | TITLE: Cascaded Segmentation-Detection Networks for Word-Level Text Spotting
ABSTRACT: We introduce an algorithm for word-level text spotting that is able to
accurately and reliably determine the bounding regions of individual words of
text "in the wild". Our system is formed by the cascade of two convolutional
neural networks. The first network is fully convolutional and is in charge of
detecting areas containing text. This results in a very reliable but possibly
inaccurate segmentation of the input image. The second network (inspired by the
popular YOLO architecture) analyzes each segment produced in the first stage,
and predicts oriented rectangular regions containing individual words. No
post-processing (e.g. text line grouping) is necessary. With execution time of
450 ms for a 1000-by-560 image on a Titan X GPU, our system achieves the
highest score to date among published algorithms on the ICDAR 2015 Incidental
Scene Text dataset benchmark.
| no_new_dataset | 0.949153 |
1704.00848 | Daniel Haehn | Daniel Haehn, Verena Kaynig, James Tompkin, Jeff W. Lichtman,
Hanspeter Pfister | Guided Proofreading of Automatic Segmentations for Connectomics | Supplemental material available at
http://rhoana.org/guidedproofreading/supplemental.pdf | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic cell image segmentation methods in connectomics produce merge and
split errors, which require correction through proofreading. Previous research
has identified the visual search for these errors as the bottleneck in
interactive proofreading. To aid error correction, we develop two classifiers
that automatically recommend candidate merges and splits to the user. These
classifiers use a convolutional neural network (CNN) that has been trained with
errors in automatic segmentations against expert-labeled ground truth. Our
classifiers detect potentially-erroneous regions by considering a large context
region around a segmentation boundary. Corrections can then be performed by a
user with yes/no decisions, which reduces variation of information 7.5x faster
than previous proofreading methods. We also present a fully-automatic mode that
uses a probability threshold to make merge/split decisions. Extensive
experiments using the automatic approach and comparing performance of novice
and expert users demonstrate that our method performs favorably against
state-of-the-art proofreading methods on different connectomics datasets.
| [
{
"version": "v1",
"created": "Tue, 4 Apr 2017 01:46:46 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Haehn",
"Daniel",
""
],
[
"Kaynig",
"Verena",
""
],
[
"Tompkin",
"James",
""
],
[
"Lichtman",
"Jeff W.",
""
],
[
"Pfister",
"Hanspeter",
""
]
] | TITLE: Guided Proofreading of Automatic Segmentations for Connectomics
ABSTRACT: Automatic cell image segmentation methods in connectomics produce merge and
split errors, which require correction through proofreading. Previous research
has identified the visual search for these errors as the bottleneck in
interactive proofreading. To aid error correction, we develop two classifiers
that automatically recommend candidate merges and splits to the user. These
classifiers use a convolutional neural network (CNN) that has been trained with
errors in automatic segmentations against expert-labeled ground truth. Our
classifiers detect potentially-erroneous regions by considering a large context
region around a segmentation boundary. Corrections can then be performed by a
user with yes/no decisions, which reduces variation of information 7.5x faster
than previous proofreading methods. We also present a fully-automatic mode that
uses a probability threshold to make merge/split decisions. Extensive
experiments using the automatic approach and comparing performance of novice
and expert users demonstrate that our method performs favorably against
state-of-the-art proofreading methods on different connectomics datasets.
| no_new_dataset | 0.947478 |
1704.00860 | Thanh-Toan Do | Thanh-Toan Do and Dang-Khoa Le Tan and Trung T. Pham and Ngai-Man
Cheung | Simultaneous Feature Aggregating and Hashing for Large-scale Image
Search | Accepted to CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In most state-of-the-art hashing-based visual search systems, local image
descriptors of an image are first aggregated as a single feature vector. This
feature vector is then subjected to a hashing function that produces a binary
hash code. In previous work, the aggregating and the hashing processes are
designed independently. In this paper, we propose a novel framework where
feature aggregating and hashing are designed simultaneously and optimized
jointly. Specifically, our joint optimization produces aggregated
representations that can be better reconstructed by some binary codes. This
leads to more discriminative binary hash codes and improved retrieval accuracy.
In addition, we also propose a fast version of the recently-proposed Binary
Autoencoder to be used in our proposed framework. We perform extensive
retrieval experiments on several benchmark datasets with both SIFT and
convolutional features. Our results suggest that the proposed framework
achieves significant improvements over the state of the art.
| [
{
"version": "v1",
"created": "Tue, 4 Apr 2017 03:04:30 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Do",
"Thanh-Toan",
""
],
[
"Tan",
"Dang-Khoa Le",
""
],
[
"Pham",
"Trung T.",
""
],
[
"Cheung",
"Ngai-Man",
""
]
] | TITLE: Simultaneous Feature Aggregating and Hashing for Large-scale Image
Search
ABSTRACT: In most state-of-the-art hashing-based visual search systems, local image
descriptors of an image are first aggregated as a single feature vector. This
feature vector is then subjected to a hashing function that produces a binary
hash code. In previous work, the aggregating and the hashing processes are
designed independently. In this paper, we propose a novel framework where
feature aggregating and hashing are designed simultaneously and optimized
jointly. Specifically, our joint optimization produces aggregated
representations that can be better reconstructed by some binary codes. This
leads to more discriminative binary hash codes and improved retrieval accuracy.
In addition, we also propose a fast version of the recently-proposed Binary
Autoencoder to be used in our proposed framework. We perform extensive
retrieval experiments on several benchmark datasets with both SIFT and
convolutional features. Our results suggest that the proposed framework
achieves significant improvements over the state of the art.
| no_new_dataset | 0.950365 |
1704.00878 | Shixiang Wan | Shixiang Wan, Quan Zou | HAlign-II: efficient ultra-large multiple sequence alignment and
phylogenetic tree reconstruction with distributed and parallel computing | null | null | null | null | cs.DC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple sequence alignment (MSA) plays a key role in biological sequence
analyses, especially in phylogenetic tree construction. The extreme increase in
next-generation sequencing data results in a shortage of efficient ultra-large
biological sequence alignment approaches for coping with different sequence
types. Distributed and parallel computing represents a crucial technique for
accelerating ultra-large sequence analyses. Based on HAlign and Spark
distributed computing system, we implement a highly cost-efficient and
time-efficient HAlign-II tool to address ultra-large multiple biological
sequence alignment and phylogenetic tree construction. After comparing with
most available state-of-the-art methods, our experimental results indicate the
following: 1) HAlign-II can efficiently carry out MSA and construct
phylogenetic trees with ultra-large biological sequences; 2) HAlign-II shows
extremely high memory efficiency and scales well with increases in computing
resource; 3) HAlign-II provides a user-friendly web server based on our
distributed computing infrastructure. HAlign-II with open-source codes and
datasets was established at http://lab.malab.cn/soft/halign.
| [
{
"version": "v1",
"created": "Tue, 4 Apr 2017 05:49:04 GMT"
}
] | 2017-04-05T00:00:00 | [
[
"Wan",
"Shixiang",
""
],
[
"Zou",
"Quan",
""
]
] | TITLE: HAlign-II: efficient ultra-large multiple sequence alignment and
phylogenetic tree reconstruction with distributed and parallel computing
ABSTRACT: Multiple sequence alignment (MSA) plays a key role in biological sequence
analyses, especially in phylogenetic tree construction. The extreme increase in
next-generation sequencing data results in a shortage of efficient ultra-large
biological sequence alignment approaches for coping with different sequence
types. Distributed and parallel computing represents a crucial technique for
accelerating ultra-large sequence analyses. Based on HAlign and Spark
distributed computing system, we implement a highly cost-efficient and
time-efficient HAlign-II tool to address ultra-large multiple biological
sequence alignment and phylogenetic tree construction. After comparing with
most available state-of-the-art methods, our experimental results indicate the
following: 1) HAlign-II can efficiently carry out MSA and construct
phylogenetic trees with ultra-large biological sequences; 2) HAlign-II shows
extremely high memory efficiency and scales well with increases in computing
resource; 3) HAlign-II provides a user-friendly web server based on our
distributed computing infrastructure. HAlign-II with open-source codes and
datasets was established at http://lab.malab.cn/soft/halign.
| no_new_dataset | 0.947817 |
1604.01474 | Changsheng Li | Changsheng Li, Junchi Yan, Fan Wei, Weishan Dong, Qingshan Liu,
Hongyuan Zha | Self-Paced Multi-Task Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel multi-task learning (MTL) framework, called
Self-Paced Multi-Task Learning (SPMTL). Different from previous works treating
all tasks and instances equally when training, SPMTL attempts to jointly learn
the tasks by taking into consideration the complexities of both tasks and
instances. This is inspired by the cognitive process of the human brain that often
learns from the easy to the hard. We construct a compact SPMTL formulation by
proposing a new task-oriented regularizer that can jointly prioritize the tasks
and the instances. Thus it can be interpreted as a self-paced learner for MTL.
A simple yet effective algorithm is designed for optimizing the proposed
objective function. An error bound for a simplified formulation is also
analyzed theoretically. Experimental results on toy and real-world datasets
demonstrate the effectiveness of the proposed approach, compared to the
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Wed, 6 Apr 2016 03:44:03 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2017 02:28:32 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Li",
"Changsheng",
""
],
[
"Yan",
"Junchi",
""
],
[
"Wei",
"Fan",
""
],
[
"Dong",
"Weishan",
""
],
[
"Liu",
"Qingshan",
""
],
[
"Zha",
"Hongyuan",
""
]
] | TITLE: Self-Paced Multi-Task Learning
ABSTRACT: In this paper, we propose a novel multi-task learning (MTL) framework, called
Self-Paced Multi-Task Learning (SPMTL). Different from previous works treating
all tasks and instances equally when training, SPMTL attempts to jointly learn
the tasks by taking into consideration the complexities of both tasks and
instances. This is inspired by the cognitive process of the human brain, which often
learns from the easy to the hard. We construct a compact SPMTL formulation by
proposing a new task-oriented regularizer that can jointly prioritize the tasks
and the instances. Thus it can be interpreted as a self-paced learner for MTL.
A simple yet effective algorithm is designed for optimizing the proposed
objective function. An error bound for a simplified formulation is also
analyzed theoretically. Experimental results on toy and real-world datasets
demonstrate the effectiveness of the proposed approach, compared to the
state-of-the-art methods.
| no_new_dataset | 0.940188 |
1605.05106 | Kaelon Lloyd | Kaelon Lloyd, David Marshall, Simon C. Moore, Paul L. Rosin | Detecting Violent and Abnormal Crowd activity using Temporal Analysis of
Grey Level Co-occurrence Matrix (GLCM) Based Texture Measures | Published under open access, 9 pages, 12 Figures | Machine Vision and Applications (2017) | 10.1007/s00138-017-0830-x | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The severity of sustained injury resulting from assault-related violence can
be minimised by reducing detection time. However, it has been shown that human
operators perform poorly at detecting events found in video footage when
presented with simultaneous feeds. We utilise computer vision techniques to
develop an automated method of abnormal crowd detection that can aid a human
operator in the detection of violent behaviour. We observed that behaviour in
city centre environments often occurs in crowded areas, resulting in individual
actions being occluded by other crowd members. We propose a real-time
descriptor that models crowd dynamics by encoding changes in crowd texture
using temporal summaries of Grey Level Co-Occurrence Matrix (GLCM) features. We
introduce a measure of inter-frame uniformity (IFU) and demonstrate that the
appearance of violent behaviour changes in a less uniform manner when compared
to other types of crowd behaviour. Our proposed method is computationally cheap
and offers real-time description. Evaluating our method using a privately held
CCTV dataset and the publicly available Violent Flows, UCF Web Abnormality, and
UMN Abnormal Crowd datasets, we report a receiver operating characteristic
score of 0.9782, 0.9403, 0.8218 and 0.9956 respectively.
| [
{
"version": "v1",
"created": "Tue, 17 May 2016 10:53:07 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2017 10:39:02 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Lloyd",
"Kaelon",
""
],
[
"Marshall",
"David",
""
],
[
"Moore",
"Simon C.",
""
],
[
"Rosin",
"Paul L.",
""
]
] | TITLE: Detecting Violent and Abnormal Crowd activity using Temporal Analysis of
Grey Level Co-occurrence Matrix (GLCM) Based Texture Measures
ABSTRACT: The severity of sustained injury resulting from assault-related violence can
be minimised by reducing detection time. However, it has been shown that human
operators perform poorly at detecting events found in video footage when
presented with simultaneous feeds. We utilise computer vision techniques to
develop an automated method of abnormal crowd detection that can aid a human
operator in the detection of violent behaviour. We observed that behaviour in
city centre environments often occurs in crowded areas, resulting in individual
actions being occluded by other crowd members. We propose a real-time
descriptor that models crowd dynamics by encoding changes in crowd texture
using temporal summaries of Grey Level Co-Occurrence Matrix (GLCM) features. We
introduce a measure of inter-frame uniformity (IFU) and demonstrate that the
appearance of violent behaviour changes in a less uniform manner when compared
to other types of crowd behaviour. Our proposed method is computationally cheap
and offers real-time description. Evaluating our method using a privately held
CCTV dataset and the publicly available Violent Flows, UCF Web Abnormality, and
UMN Abnormal Crowd datasets, we report a receiver operating characteristic
score of 0.9782, 0.9403, 0.8218 and 0.9956 respectively.
| no_new_dataset | 0.941708 |
1609.05283 | Maneesh Agrawala | Jonathan Harper and Maneesh Agrawala | Converting Basic D3 Charts into Reusable Style Templates | 11 pages | null | null | null | cs.HC cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a technique for converting a basic D3 chart into a reusable style
template. Then, given a new data source we can apply the style template to
generate a chart that depicts the new data, but in the style of the template.
To construct the style template we first deconstruct the input D3 chart to
recover its underlying structure: the data, the marks and the mappings that
describe how the marks encode the data. We then rank the perceptual
effectiveness of the deconstructed mappings. To apply the resulting style
template to a new data source we first obtain importance ranks for each new
data field. We then adjust the template mappings to depict the source data by
matching the most important data fields to the most perceptually effective
mappings. We show how the style templates can be applied to source data in the
form of either a data table or another D3 chart. While our implementation
focuses on generating templates for basic chart types (e.g. variants of bar
charts, line charts, dot plots, scatterplots, etc.), these are the most
commonly used chart types today. Users can easily find such basic D3 charts on
the Web, turn them into templates, and immediately see how their own data would
look in the visual style (e.g. colors, shapes, fonts, etc.) of the templates.
We demonstrate the effectiveness of our approach by applying a diverse set of
style templates to a variety of source datasets.
| [
{
"version": "v1",
"created": "Sat, 17 Sep 2016 05:08:24 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2016 04:04:10 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Apr 2017 16:42:22 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Harper",
"Jonathan",
""
],
[
"Agrawala",
"Maneesh",
""
]
] | TITLE: Converting Basic D3 Charts into Reusable Style Templates
ABSTRACT: We present a technique for converting a basic D3 chart into a reusable style
template. Then, given a new data source we can apply the style template to
generate a chart that depicts the new data, but in the style of the template.
To construct the style template we first deconstruct the input D3 chart to
recover its underlying structure: the data, the marks and the mappings that
describe how the marks encode the data. We then rank the perceptual
effectiveness of the deconstructed mappings. To apply the resulting style
template to a new data source we first obtain importance ranks for each new
data field. We then adjust the template mappings to depict the source data by
matching the most important data fields to the most perceptually effective
mappings. We show how the style templates can be applied to source data in the
form of either a data table or another D3 chart. While our implementation
focuses on generating templates for basic chart types (e.g. variants of bar
charts, line charts, dot plots, scatterplots, etc.), these are the most
commonly used chart types today. Users can easily find such basic D3 charts on
the Web, turn them into templates, and immediately see how their own data would
look in the visual style (e.g. colors, shapes, fonts, etc.) of the templates.
We demonstrate the effectiveness of our approach by applying a diverse set of
style templates to a variety of source datasets.
| no_new_dataset | 0.948251 |
1610.05820 | Reza Shokri | Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov | Membership Inference Attacks against Machine Learning Models | In the proceedings of the IEEE Symposium on Security and Privacy,
2017 | null | null | null | cs.CR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We quantitatively investigate how machine learning models leak information
about the individual data records on which they were trained. We focus on the
basic membership inference attack: given a data record and black-box access to
a model, determine if the record was in the model's training dataset. To
perform membership inference against a target model, we make adversarial use of
machine learning and train our own inference model to recognize differences in
the target model's predictions on the inputs that it trained on versus the
inputs that it did not train on.
We empirically evaluate our inference techniques on classification models
trained by commercial "machine learning as a service" providers such as Google
and Amazon. Using realistic datasets and classification tasks, including a
hospital discharge dataset whose membership is sensitive from the privacy
perspective, we show that these models can be vulnerable to membership
inference attacks. We then investigate the factors that influence this leakage
and evaluate mitigation strategies.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2016 22:38:33 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2017 22:17:07 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Shokri",
"Reza",
""
],
[
"Stronati",
"Marco",
""
],
[
"Song",
"Congzheng",
""
],
[
"Shmatikov",
"Vitaly",
""
]
] | TITLE: Membership Inference Attacks against Machine Learning Models
ABSTRACT: We quantitatively investigate how machine learning models leak information
about the individual data records on which they were trained. We focus on the
basic membership inference attack: given a data record and black-box access to
a model, determine if the record was in the model's training dataset. To
perform membership inference against a target model, we make adversarial use of
machine learning and train our own inference model to recognize differences in
the target model's predictions on the inputs that it trained on versus the
inputs that it did not train on.
We empirically evaluate our inference techniques on classification models
trained by commercial "machine learning as a service" providers such as Google
and Amazon. Using realistic datasets and classification tasks, including a
hospital discharge dataset whose membership is sensitive from the privacy
perspective, we show that these models can be vulnerable to membership
inference attacks. We then investigate the factors that influence this leakage
and evaluate mitigation strategies.
| no_new_dataset | 0.944485 |
1611.05916 | Le Hou | Le Hou, Chen-Ping Yu, Dimitris Samaras | Squared Earth Mover's Distance-based Loss for Training Deep Neural
Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of single-label classification, despite the huge success of
deep learning, the commonly used cross-entropy loss function ignores the
intricate inter-class relationships that often exist in real-life tasks such as
age classification. In this work, we propose to leverage these relationships
between classes by training deep nets with the exact squared Earth Mover's
Distance (also known as Wasserstein distance) for single-label classification.
The squared EMD loss uses the predicted probabilities of all classes and
penalizes the mispredictions according to a ground distance matrix that
quantifies the dissimilarities between classes. We demonstrate that on datasets
with strong inter-class relationships such as an ordering between classes, our
exact squared EMD losses yield new state-of-the-art results. Furthermore, we
propose a method to automatically learn this matrix using the CNN's own
features during training. We show that our method can learn a ground distance
matrix efficiently with no inter-class relationship priors and yield the same
performance gain. Finally, we show that our method can be generalized to
applications that lack strong inter-class relationships and still maintain
state-of-the-art performance. Therefore, with limited computational overhead,
one can always deploy the proposed loss function on any dataset over the
conventional cross-entropy.
| [
{
"version": "v1",
"created": "Thu, 17 Nov 2016 22:03:35 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Nov 2016 16:30:12 GMT"
},
{
"version": "v3",
"created": "Wed, 30 Nov 2016 20:12:23 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Apr 2017 02:30:57 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Hou",
"Le",
""
],
[
"Yu",
"Chen-Ping",
""
],
[
"Samaras",
"Dimitris",
""
]
] | TITLE: Squared Earth Mover's Distance-based Loss for Training Deep Neural
Networks
ABSTRACT: In the context of single-label classification, despite the huge success of
deep learning, the commonly used cross-entropy loss function ignores the
intricate inter-class relationships that often exist in real-life tasks such as
age classification. In this work, we propose to leverage these relationships
between classes by training deep nets with the exact squared Earth Mover's
Distance (also known as Wasserstein distance) for single-label classification.
The squared EMD loss uses the predicted probabilities of all classes and
penalizes the mispredictions according to a ground distance matrix that
quantifies the dissimilarities between classes. We demonstrate that on datasets
with strong inter-class relationships such as an ordering between classes, our
exact squared EMD losses yield new state-of-the-art results. Furthermore, we
propose a method to automatically learn this matrix using the CNN's own
features during training. We show that our method can learn a ground distance
matrix efficiently with no inter-class relationship priors and yield the same
performance gain. Finally, we show that our method can be generalized to
applications that lack strong inter-class relationships and still maintain
state-of-the-art performance. Therefore, with limited computational overhead,
one can always deploy the proposed loss function on any dataset over the
conventional cross-entropy.
| no_new_dataset | 0.945901 |
1703.02243 | Wei Ke | Wei Ke, Jie Chen, Jianbin Jiao, Guoying Zhao, Qixiang Ye | SRN: Side-output Residual Network for Object Symmetry Detection in the
Wild | Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we establish a baseline for object symmetry detection in
complex backgrounds by presenting a new benchmark and an end-to-end deep
learning approach, opening up a promising direction for symmetry detection in
the wild. The new benchmark, named Sym-PASCAL, spans challenges including
object diversity, multi-objects, part-invisibility, and various complex
backgrounds that are far beyond those in existing datasets. The proposed
symmetry detection approach, named Side-output Residual Network (SRN),
leverages output Residual Units (RUs) to fit the errors between the object
symmetry groundtruth and the outputs of RUs. By stacking RUs in a
deep-to-shallow manner, SRN exploits the 'flow' of errors among multiple scales
to ease the problems of fitting complex outputs with limited layers,
suppressing the complex backgrounds, and effectively matching object symmetry
of different scales. Experimental results validate both the benchmark and its
challenging aspects related to real-world images, and the state-of-the-art
performance of our symmetry detection approach. The benchmark and the code for
SRN are publicly available at https://github.com/KevinKecc/SRN.
| [
{
"version": "v1",
"created": "Tue, 7 Mar 2017 07:09:40 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Apr 2017 01:58:50 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Ke",
"Wei",
""
],
[
"Chen",
"Jie",
""
],
[
"Jiao",
"Jianbin",
""
],
[
"Zhao",
"Guoying",
""
],
[
"Ye",
"Qixiang",
""
]
] | TITLE: SRN: Side-output Residual Network for Object Symmetry Detection in the
Wild
ABSTRACT: In this paper, we establish a baseline for object symmetry detection in
complex backgrounds by presenting a new benchmark and an end-to-end deep
learning approach, opening up a promising direction for symmetry detection in
the wild. The new benchmark, named Sym-PASCAL, spans challenges including
object diversity, multi-objects, part-invisibility, and various complex
backgrounds that are far beyond those in existing datasets. The proposed
symmetry detection approach, named Side-output Residual Network (SRN),
leverages output Residual Units (RUs) to fit the errors between the object
symmetry groundtruth and the outputs of RUs. By stacking RUs in a
deep-to-shallow manner, SRN exploits the 'flow' of errors among multiple scales
to ease the problems of fitting complex outputs with limited layers,
suppressing the complex backgrounds, and effectively matching object symmetry
of different scales. Experimental results validate both the benchmark and its
challenging aspects related to real-world images, and the state-of-the-art
performance of our symmetry detection approach. The benchmark and the code for
SRN are publicly available at https://github.com/KevinKecc/SRN.
| no_new_dataset | 0.949716 |
1703.04071 | Chunpeng Wu | Chunpeng Wu, Wei Wen, Tariq Afzal, Yongmei Zhang, Yiran Chen, and Hai
Li | A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification
and Domain Adaptation | 2017 IEEE Conference on Computer Vision and Pattern Recognition
(CVPR'17) | null | null | null | cs.CV cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, DNN model compression based on network architecture design, e.g.,
SqueezeNet, attracted a lot of attention. No accuracy drop on image classification
is observed on these extremely compact networks, compared to well-known models.
An emerging question, however, is whether these model compression techniques
hurt DNN's learning ability other than classifying images on a single dataset.
Our preliminary experiment shows that these compression methods could degrade
domain adaptation (DA) ability, though the classification performance is
preserved. Therefore, we propose a new compact network architecture and
unsupervised DA method in this paper. The DNN is built on a new basic module
Conv-M which provides more diverse feature extractors without significantly
increasing parameters. The unified framework of our DA method will
simultaneously learn invariance across domains, reduce divergence of feature
representations, and adapt label prediction. Our DNN has 4.1M parameters, which
is only 6.7% of AlexNet or 59% of GoogLeNet. Experiments show that our DNN
obtains GoogLeNet-level accuracy both on classification and DA, and our DA
method slightly outperforms previous competitive ones. Putting it all together, our DA
strategy based on our DNN achieves state-of-the-art results on sixteen of the
eighteen DA tasks on the popular Office-31 and Office-Caltech datasets.
| [
{
"version": "v1",
"created": "Sun, 12 Mar 2017 05:07:00 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2017 03:21:57 GMT"
},
{
"version": "v3",
"created": "Wed, 29 Mar 2017 05:40:52 GMT"
},
{
"version": "v4",
"created": "Mon, 3 Apr 2017 05:17:42 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Wu",
"Chunpeng",
""
],
[
"Wen",
"Wei",
""
],
[
"Afzal",
"Tariq",
""
],
[
"Zhang",
"Yongmei",
""
],
[
"Chen",
"Yiran",
""
],
[
"Li",
"Hai",
""
]
] | TITLE: A Compact DNN: Approaching GoogLeNet-Level Accuracy of Classification
and Domain Adaptation
ABSTRACT: Recently, DNN model compression based on network architecture design, e.g.,
SqueezeNet, attracted a lot of attention. No accuracy drop on image classification
is observed on these extremely compact networks, compared to well-known models.
An emerging question, however, is whether these model compression techniques
hurt DNN's learning ability other than classifying images on a single dataset.
Our preliminary experiment shows that these compression methods could degrade
domain adaptation (DA) ability, though the classification performance is
preserved. Therefore, we propose a new compact network architecture and
unsupervised DA method in this paper. The DNN is built on a new basic module
Conv-M which provides more diverse feature extractors without significantly
increasing parameters. The unified framework of our DA method will
simultaneously learn invariance across domains, reduce divergence of feature
representations, and adapt label prediction. Our DNN has 4.1M parameters, which
is only 6.7% of AlexNet or 59% of GoogLeNet. Experiments show that our DNN
obtains GoogLeNet-level accuracy both on classification and DA, and our DA
method slightly outperforms previous competitive ones. Putting it all together, our DA
strategy based on our DNN achieves state-of-the-art results on sixteen of the
eighteen DA tasks on the popular Office-31 and Office-Caltech datasets.
| no_new_dataset | 0.946498 |
1703.09745 | Samuel Marchal | Radek Tomsu, Samuel Marchal, N. Asokan | Profiling Users by Modeling Web Transactions | Extended technical report of an IEEE ICDCS 2017 publication | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Users of electronic devices, e.g., laptop, smartphone, etc. have
characteristic behaviors while surfing the Web. Profiling this behavior can
help identify the person using a given device. In this paper, we introduce a
technique to profile users based on their web transactions. We compute several
features extracted from a sequence of web transactions and use them with
one-class classification techniques to profile a user. We assess the efficacy
and speed of our method at differentiating 25 users on a dataset representing 6
months of web traffic monitoring from a small company network.
| [
{
"version": "v1",
"created": "Tue, 28 Mar 2017 18:54:15 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2017 10:56:49 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Tomsu",
"Radek",
""
],
[
"Marchal",
"Samuel",
""
],
[
"Asokan",
"N.",
""
]
] | TITLE: Profiling Users by Modeling Web Transactions
ABSTRACT: Users of electronic devices, e.g., laptop, smartphone, etc. have
characteristic behaviors while surfing the Web. Profiling this behavior can
help identify the person using a given device. In this paper, we introduce a
technique to profile users based on their web transactions. We compute several
features extracted from a sequence of web transactions and use them with
one-class classification techniques to profile a user. We assess the efficacy
and speed of our method at differentiating 25 users on a dataset representing 6
months of web traffic monitoring from a small company network.
| no_new_dataset | 0.915507 |
1703.10730 | Donghoon Lee | Donghoon Lee, Sangdoo Yun, Sungjoon Choi, Hwiyeon Yoo, Ming-Hsuan
Yang, and Songhwai Oh | Unsupervised Holistic Image Generation from Key Local Patches | 16 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new problem of generating an image based on a small number of
key local patches without any geometric prior. In this work, key local patches
are defined as informative regions of the target object or scene. This is a
challenging problem since it requires generating realistic images and
predicting locations of parts at the same time. We construct adversarial
networks to tackle this problem. A generator network generates a fake image as
well as a mask based on the encoder-decoder framework. On the other hand, a
discriminator network aims to detect fake images. The network is trained with
three losses to consider spatial, appearance, and adversarial information. The
spatial loss determines whether the locations of predicted parts are correct.
Input patches are restored in the output image without much modification due to
the appearance loss. The adversarial loss ensures output images are realistic.
The proposed network is trained without supervisory signals since no labels of
key parts are required. Experimental results on six datasets demonstrate that
the proposed algorithm performs favorably on challenging objects and scenes.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 01:43:06 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2017 00:38:12 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Lee",
"Donghoon",
""
],
[
"Yun",
"Sangdoo",
""
],
[
"Choi",
"Sungjoon",
""
],
[
"Yoo",
"Hwiyeon",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Oh",
"Songhwai",
""
]
] | TITLE: Unsupervised Holistic Image Generation from Key Local Patches
ABSTRACT: We introduce a new problem of generating an image based on a small number of
key local patches without any geometric prior. In this work, key local patches
are defined as informative regions of the target object or scene. This is a
challenging problem since it requires generating realistic images and
predicting locations of parts at the same time. We construct adversarial
networks to tackle this problem. A generator network generates a fake image as
well as a mask based on the encoder-decoder framework. On the other hand, a
discriminator network aims to detect fake images. The network is trained with
three losses to consider spatial, appearance, and adversarial information. The
spatial loss determines whether the locations of predicted parts are correct.
Input patches are restored in the output image without much modification due to
the appearance loss. The adversarial loss ensures output images are realistic.
The proposed network is trained without supervisory signals since no labels of
key parts are required. Experimental results on six datasets demonstrate that
the proposed algorithm performs favorably on challenging objects and scenes.
| no_new_dataset | 0.948489 |
1704.00003 | Hsiao-Yu Tung | Hsiao-Yu Fish Tung and Chao-Yuan Wu and Manzil Zaheer and Alexander J.
Smola | Spectral Methods for Nonparametric Models | Keywords: Spectral Methods, Indian Buffet Process, Hierarchical
Dirichlet Process | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nonparametric models are versatile, albeit computationally expensive, tools
for modeling mixture models. In this paper, we introduce spectral methods for
the two most popular nonparametric models: the Indian Buffet Process (IBP) and
the Hierarchical Dirichlet Process (HDP). We show that using spectral methods
for the inference of nonparametric models is computationally and statistically
efficient. In particular, we derive the lower-order moments of the IBP and the
HDP, propose spectral algorithms for both models, and provide reconstruction
guarantees for the algorithms. For the HDP, we further show that applying
hierarchical models on datasets with hierarchical structure, which can be solved
with the generalized spectral HDP, produces better solutions than those of flat
models in terms of likelihood performance.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 03:50:03 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Tung",
"Hsiao-Yu Fish",
""
],
[
"Wu",
"Chao-Yuan",
""
],
[
"Zaheer",
"Manzil",
""
],
[
"Smola",
"Alexander J.",
""
]
] | TITLE: Spectral Methods for Nonparametric Models
ABSTRACT: Nonparametric models are versatile, albeit computationally expensive, tools
for modeling mixture models. In this paper, we introduce spectral methods for
the two most popular nonparametric models: the Indian Buffet Process (IBP) and
the Hierarchical Dirichlet Process (HDP). We show that using spectral methods
for the inference of nonparametric models is computationally and statistically
efficient. In particular, we derive the lower-order moments of the IBP and the
HDP, propose spectral algorithms for both models, and provide reconstruction
guarantees for the algorithms. For the HDP, we further show that applying
hierarchical models on datasets with hierarchical structure, which can be solved
with the generalized spectral HDP, produces better solutions than those of flat
models in terms of likelihood performance.
| no_new_dataset | 0.951908 |
1704.00023 | Tegjyot Singh Sethi | Tegjyot Singh Sethi, Mehmed Kantardzic | On the Reliable Detection of Concept Drift from Streaming Unlabeled Data | null | null | null | null | stat.ML cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classifiers deployed in the real world operate in a dynamic environment,
where the data distribution can change over time. These changes, referred to as
concept drift, can cause the predictive performance of the classifier to drop
over time, thereby making it obsolete. To be of any real use, these classifiers
need to detect drifts and be able to adapt to them, over time. Detecting drifts
has traditionally been approached as a supervised task, with labeled data
constantly being used for validating the learned model. Although effective in
detecting drifts, these techniques are impractical, as labeling is a difficult,
costly and time consuming activity. On the other hand, unsupervised change
detection techniques are unreliable, as they produce a large number of false
alarms. The inefficacy of the unsupervised techniques stems from the exclusion
of the characteristics of the learned classifier, from the detection process.
In this paper, we propose the Margin Density Drift Detection (MD3) algorithm,
which tracks the number of samples in the uncertainty region of a classifier,
as a metric to detect drift. The MD3 algorithm is a distribution independent,
application independent, model independent, unsupervised and incremental
algorithm for reliably detecting drifts from data streams. Experimental
evaluation on 6 drift-induced datasets and 4 additional datasets from the
cybersecurity domain demonstrates that the MD3 approach can reliably detect
drifts, with significantly fewer false alarms compared to unsupervised feature
based drift detectors. The reduced false alarms enables the signaling of drifts
only when they are most likely to affect classification performance. As such,
the MD3 approach leads to a detection scheme which is credible, label efficient
and general in its applicability.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 18:55:48 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Sethi",
"Tegjyot Singh",
""
],
[
"Kantardzic",
"Mehmed",
""
]
] | TITLE: On the Reliable Detection of Concept Drift from Streaming Unlabeled Data
ABSTRACT: Classifiers deployed in the real world operate in a dynamic environment,
where the data distribution can change over time. These changes, referred to as
concept drift, can cause the predictive performance of the classifier to drop
over time, thereby making it obsolete. To be of any real use, these classifiers
need to detect drifts and be able to adapt to them, over time. Detecting drifts
has traditionally been approached as a supervised task, with labeled data
constantly being used for validating the learned model. Although effective in
detecting drifts, these techniques are impractical, as labeling is a difficult,
costly and time consuming activity. On the other hand, unsupervised change
detection techniques are unreliable, as they produce a large number of false
alarms. The inefficacy of the unsupervised techniques stems from the exclusion
of the characteristics of the learned classifier, from the detection process.
In this paper, we propose the Margin Density Drift Detection (MD3) algorithm,
which tracks the number of samples in the uncertainty region of a classifier,
as a metric to detect drift. The MD3 algorithm is a distribution independent,
application independent, model independent, unsupervised and incremental
algorithm for reliably detecting drifts from data streams. Experimental
evaluation on 6 drift-induced datasets and 4 additional datasets from the
cybersecurity domain demonstrates that the MD3 approach can reliably detect
drifts, with significantly fewer false alarms compared to unsupervised
feature-based drift detectors. The reduction in false alarms enables the signaling of drifts
only when they are most likely to affect classification performance. As such,
the MD3 approach leads to a detection scheme which is credible, label efficient
and general in its applicability.
| no_new_dataset | 0.949949 |
1704.00077 | Hieu Le | Hieu Le, Vu Nguyen, Chen-Ping Yu, Dimitris Samaras | Geodesic Distance Histogram Feature for Video Segmentation | null | null | 10.1007/978-3-319-54181-5_18 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a geodesic-distance-based feature that encodes global
information for improved video segmentation algorithms. The feature is a joint
histogram of intensity and geodesic distances, where the geodesic distances are
computed as the shortest paths between superpixels via their boundaries. We
also incorporate adaptive voting weights and spatial pyramid configurations to
include spatial information into the geodesic histogram feature and show that
this further improves results. The feature is generic and can be used as part
of various algorithms. In experiments, we test the geodesic histogram feature
by incorporating it into two existing video segmentation frameworks. This leads
to significantly better performance in 3D video segmentation benchmarks on two
datasets.
| [
{
"version": "v1",
"created": "Fri, 31 Mar 2017 22:39:32 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Le",
"Hieu",
""
],
[
"Nguyen",
"Vu",
""
],
[
"Yu",
"Chen-Ping",
""
],
[
"Samaras",
"Dimitris",
""
]
] | TITLE: Geodesic Distance Histogram Feature for Video Segmentation
ABSTRACT: This paper proposes a geodesic-distance-based feature that encodes global
information for improved video segmentation algorithms. The feature is a joint
histogram of intensity and geodesic distances, where the geodesic distances are
computed as the shortest paths between superpixels via their boundaries. We
also incorporate adaptive voting weights and spatial pyramid configurations to
include spatial information into the geodesic histogram feature and show that
this further improves results. The feature is generic and can be used as part
of various algorithms. In experiments, we test the geodesic histogram feature
by incorporating it into two existing video segmentation frameworks. This leads
to significantly better performance in 3D video segmentation benchmarks on two
datasets.
| no_new_dataset | 0.954009 |
1704.00156 | Joeran Beel | Joeran Beel, Siddharth Dinesh | Real-World Recommender Systems for Academia: The Pain and Gain in
Building, Operating, and Researching them [Long Version] | This article is a long version of the article published in the
Proceedings of the 5th International Workshop on Bibliometric-enhanced
Information Retrieval (BIR) | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research on recommender systems is a challenging task, as is building and
operating such systems. Major challenges include non-reproducible research
results, dealing with noisy data, and answering many questions such as how many
recommendations to display, how often, and, of course, how to generate
recommendations most effectively. In the past six years, we built three
research-article recommender systems for digital libraries and reference
managers, and conducted research on these systems. In this paper, we share some
experiences we gained during that time. Among other things, we discuss the required
skills to build recommender systems, and why the literature provides little
help in identifying promising recommendation approaches. We explain the
challenge in creating a randomization engine to run A/B tests, and how low data
quality impacts the calculation of bibliometrics. We further discuss why
several of our experiments delivered disappointing results, and provide
statistics on how many researchers showed interest in our recommendation
dataset.
| [
{
"version": "v1",
"created": "Sat, 1 Apr 2017 11:36:26 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Beel",
"Joeran",
""
],
[
"Dinesh",
"Siddharth",
""
]
] | TITLE: Real-World Recommender Systems for Academia: The Pain and Gain in
Building, Operating, and Researching them [Long Version]
ABSTRACT: Research on recommender systems is a challenging task, as is building and
operating such systems. Major challenges include non-reproducible research
results, dealing with noisy data, and answering many questions such as how many
recommendations to display, how often, and, of course, how to generate
recommendations most effectively. In the past six years, we built three
research-article recommender systems for digital libraries and reference
managers, and conducted research on these systems. In this paper, we share some
experiences we gained during that time. Among other things, we discuss the required
skills to build recommender systems, and why the literature provides little
help in identifying promising recommendation approaches. We explain the
challenge in creating a randomization engine to run A/B tests, and how low data
quality impacts the calculation of bibliometrics. We further discuss why
several of our experiments delivered disappointing results, and provide
statistics on how many researchers showed interest in our recommendation
dataset.
| no_new_dataset | 0.942188 |
1704.00158 | Ozsel Kilinc | Ozsel Kilinc, Ismail Uysal | Clustering-based Source-aware Assessment of True Robustness for Learning
Models | Submitted to UAI 2017 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel validation framework to measure the true robustness of
learning models for real-world applications by creating source-inclusive and
source-exclusive partitions in a dataset via clustering. We develop a
robustness metric derived from source-aware lower and upper bounds of model
accuracy even when data source labels are not readily available. We clearly
demonstrate that even on a well-explored dataset like MNIST, challenging
training scenarios can be constructed under the proposed assessment framework
for two separate yet equally important applications: i) more rigorous learning
model comparison and ii) dataset adequacy evaluation. In addition, our findings
not only promise a more complete identification of trade-offs between model
complexity, accuracy and robustness but can also help researchers optimize
their efforts in data collection by identifying the less robust and more
challenging class labels.
| [
{
"version": "v1",
"created": "Sat, 1 Apr 2017 11:58:24 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Kilinc",
"Ozsel",
""
],
[
"Uysal",
"Ismail",
""
]
] | TITLE: Clustering-based Source-aware Assessment of True Robustness for Learning
Models
ABSTRACT: We introduce a novel validation framework to measure the true robustness of
learning models for real-world applications by creating source-inclusive and
source-exclusive partitions in a dataset via clustering. We develop a
robustness metric derived from source-aware lower and upper bounds of model
accuracy even when data source labels are not readily available. We clearly
demonstrate that even on a well-explored dataset like MNIST, challenging
training scenarios can be constructed under the proposed assessment framework
for two separate yet equally important applications: i) more rigorous learning
model comparison and ii) dataset adequacy evaluation. In addition, our findings
not only promise a more complete identification of trade-offs between model
complexity, accuracy and robustness but can also help researchers optimize
their efforts in data collection by identifying the less robust and more
challenging class labels.
| no_new_dataset | 0.950778 |
1704.00180 | Manoel Horta Ribeiro | Manoel Horta Ribeiro, Bruno Teixeira, Ant\^onio Ot\'avio Fernandes,
Wagner Meira Jr., Erickson R. Nascimento | Complexity-Aware Assignment of Latent Values in Discriminative Models
for Accurate Gesture Recognition | Conference paper published at 2016 29th SIBGRAPI, Conference on
Graphics, Patterns and Images (SIBGRAPI). 8 pages, 7 figures | null | 10.1109/SIBGRAPI.2016.059 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many of the state-of-the-art algorithms for gesture recognition are based on
Conditional Random Fields (CRFs). Successful approaches, such as the
Latent-Dynamic CRFs, extend the CRF by incorporating latent variables, whose
values are mapped to the values of the labels. In this paper we propose a novel
methodology to set the latent values according to the gesture complexity. We
use a heuristic that iterates through the samples associated with each label
value, estimating their complexity. We then use it to assign the latent values
to the label values. We evaluate our method on the task of recognizing human
gestures from video streams. The experiments were performed on binary datasets,
generated by grouping different labels. Our results demonstrate that our
approach outperforms the arbitrary one in many cases, increasing the accuracy
by up to 10%.
| [
{
"version": "v1",
"created": "Sat, 1 Apr 2017 15:15:38 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Ribeiro",
"Manoel Horta",
""
],
[
"Teixeira",
"Bruno",
""
],
[
"Fernandes",
"Antônio Otávio",
""
],
[
"Meira",
"Wagner",
"Jr."
],
[
"Nascimento",
"Erickson R.",
""
]
] | TITLE: Complexity-Aware Assignment of Latent Values in Discriminative Models
for Accurate Gesture Recognition
ABSTRACT: Many of the state-of-the-art algorithms for gesture recognition are based on
Conditional Random Fields (CRFs). Successful approaches, such as the
Latent-Dynamic CRFs, extend the CRF by incorporating latent variables, whose
values are mapped to the values of the labels. In this paper we propose a novel
methodology to set the latent values according to the gesture complexity. We
use a heuristic that iterates through the samples associated with each label
value, estimating their complexity. We then use it to assign the latent values
to the label values. We evaluate our method on the task of recognizing human
gestures from video streams. The experiments were performed on binary datasets,
generated by grouping different labels. Our results demonstrate that our
approach outperforms the arbitrary one in many cases, increasing the accuracy
by up to 10%.
| no_new_dataset | 0.950411 |
1704.00380 | Mamoru Komachi | Junki Matsuo, Mamoru Komachi and Katsuhito Sudoh | Word-Alignment-Based Segment-Level Machine Translation Evaluation using
Word Embeddings | 5 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | One of the most important problems in machine translation (MT) evaluation is
to evaluate the similarity between translation hypotheses with different
surface forms from the reference, especially at the segment level. We propose
to use word embeddings to perform word alignment for segment-level MT
evaluation. We performed experiments with three types of alignment methods
using word embeddings. We evaluated our proposed methods with various
translation datasets. Experimental results show that our proposed methods
outperform previous word embeddings-based methods.
| [
{
"version": "v1",
"created": "Sun, 2 Apr 2017 22:36:56 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Matsuo",
"Junki",
""
],
[
"Komachi",
"Mamoru",
""
],
[
"Sudoh",
"Katsuhito",
""
]
] | TITLE: Word-Alignment-Based Segment-Level Machine Translation Evaluation using
Word Embeddings
ABSTRACT: One of the most important problems in machine translation (MT) evaluation is
to evaluate the similarity between translation hypotheses with different
surface forms from the reference, especially at the segment level. We propose
to use word embeddings to perform word alignment for segment-level MT
evaluation. We performed experiments with three types of alignment methods
using word embeddings. We evaluated our proposed methods with various
translation datasets. Experimental results show that our proposed methods
outperform previous word embeddings-based methods.
| no_new_dataset | 0.947137 |
1704.00492 | Dimitrios Tzionas | Dimitrios Tzionas and Juergen Gall | A Comparison of Directional Distances for Hand Pose Estimation | German Conference on Pattern Recognition (GCPR) 2013,
http://files.is.tue.mpg.de/dtzionas/GCPR_2013.html | null | 10.1007/978-3-642-40602-7_14 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Benchmarking methods for 3d hand tracking is still an open problem due to the
difficulty of acquiring ground truth data. We introduce a new dataset and
benchmarking protocol that is insensitive to the accumulative error of other
protocols. To this end, we create testing frame pairs of increasing difficulty
and measure the pose estimation error separately for each of them. This
approach gives new insights and allows to accurately study the performance of
each feature or method without employing a full tracking pipeline. Following
this protocol, we evaluate various directional distances in the context of
silhouette-based 3d hand tracking, expressed as special cases of a generalized
Chamfer distance form. An appropriate parameter setup is proposed for each of
them, and a comparative study reveals the best performing method in this
context.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2017 09:31:01 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Tzionas",
"Dimitrios",
""
],
[
"Gall",
"Juergen",
""
]
] | TITLE: A Comparison of Directional Distances for Hand Pose Estimation
ABSTRACT: Benchmarking methods for 3d hand tracking is still an open problem due to the
difficulty of acquiring ground truth data. We introduce a new dataset and
benchmarking protocol that is insensitive to the accumulative error of other
protocols. To this end, we create testing frame pairs of increasing difficulty
and measure the pose estimation error separately for each of them. This
approach gives new insights and allows to accurately study the performance of
each feature or method without employing a full tracking pipeline. Following
this protocol, we evaluate various directional distances in the context of
silhouette-based 3d hand tracking, expressed as special cases of a generalized
Chamfer distance form. An appropriate parameter setup is proposed for each of
them, and a comparative study reveals the best performing method in this
context.
| new_dataset | 0.959154 |
1704.00498 | Malte Nissen | Malte St{\ae}r Nissen, Oswin Krause, Kristian Almstrup, S{\o}ren
Kj{\ae}rulff, Torben Trindk{\ae}r Nielsen, Mads Nielsen | Convolutional neural networks for segmentation and object detection of
human semen | Submitted for Scandinavian Conference on Image Analysis 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We compare a set of convolutional neural network (CNN) architectures for the
task of segmenting and detecting human sperm cells in an image taken from a
semen sample. In contrast to previous work, samples are not stained or washed
to allow for full sperm quality analysis, making analysis harder due to
clutter. Our results indicate that training on full images is superior to
training on patches when class-skew is properly handled. Full image training
including up-sampling during training proves to be beneficial in deep CNNs for
pixel wise accuracy and detection performance. Predicted sperm cells are found
by using connected components on the CNN predictions. We investigate
optimization of a threshold parameter on the size of detected components. Our
best network achieves 93.87% precision and 91.89% recall on our test dataset
after thresholding, outperforming a classical image analysis approach.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2017 09:40:56 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Nissen",
"Malte Stær",
""
],
[
"Krause",
"Oswin",
""
],
[
"Almstrup",
"Kristian",
""
],
[
"Kjærulff",
"Søren",
""
],
[
"Nielsen",
"Torben Trindkær",
""
],
[
"Nielsen",
"Mads",
""
]
] | TITLE: Convolutional neural networks for segmentation and object detection of
human semen
ABSTRACT: We compare a set of convolutional neural network (CNN) architectures for the
task of segmenting and detecting human sperm cells in an image taken from a
semen sample. In contrast to previous work, samples are not stained or washed
to allow for full sperm quality analysis, making analysis harder due to
clutter. Our results indicate that training on full images is superior to
training on patches when class-skew is properly handled. Full image training
including up-sampling during training proves to be beneficial in deep CNNs for
pixel wise accuracy and detection performance. Predicted sperm cells are found
by using connected components on the CNN predictions. We investigate
optimization of a threshold parameter on the size of detected components. Our
best network achieves 93.87% precision and 91.89% recall on our test dataset
after thresholding, outperforming a classical image analysis approach.
| no_new_dataset | 0.926037 |
1704.00509 | Yan Zhang | Yan Zhang and Mete Ozay and Shuohao Li and Takayuki Okatani | Truncating Wide Networks using Binary Tree Architectures | 10 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent study shows that a wide deep network can obtain accuracy comparable to
a deeper but narrower network. Compared to narrower and deeper networks, wide
networks employ relatively fewer layers and have various important benefits,
such as shorter running time on parallel computing devices and less
susceptibility to gradient vanishing problems. However, the parameter size of a
wide network can be very large due to the large width of each layer in the
network. In order to keep the benefits of wide networks while improving their
parameter size and accuracy trade-off, we propose a binary tree architecture
that truncates the architecture of wide networks by reducing their width. More
precisely, in the proposed
architecture, the width is continuously reduced from lower layers to higher
layers in order to increase the expressive capacity of the network with a
smaller increase in parameter size. Also, to ease the gradient vanishing problem,
features obtained at different layers are concatenated to form the output of
our architecture. By employing the proposed architecture on a baseline wide
network, we can construct and train a new network with the same depth but
considerably fewer parameters. In our experimental analyses, we
observe that the proposed architecture enables us to obtain better parameter
size and accuracy trade-off compared to baseline networks using various
benchmark image classification datasets. The results show that our model can
decrease the classification error of baseline from 20.43% to 19.22% on
Cifar-100 using only 28% of parameters that baseline has. Code is available at
https://github.com/ZhangVision/bitnet.
| [
{
"version": "v1",
"created": "Mon, 3 Apr 2017 10:11:10 GMT"
}
] | 2017-04-04T00:00:00 | [
[
"Zhang",
"Yan",
""
],
[
"Ozay",
"Mete",
""
],
[
"Li",
"Shuohao",
""
],
[
"Okatani",
"Takayuki",
""
]
] | TITLE: Truncating Wide Networks using Binary Tree Architectures
ABSTRACT: Recent study shows that a wide deep network can obtain accuracy comparable to
a deeper but narrower network. Compared to narrower and deeper networks, wide
networks employ relatively fewer layers and have various important benefits,
such as shorter running time on parallel computing devices and less
susceptibility to gradient vanishing problems. However, the parameter size of a
wide network can be very large due to the large width of each layer in the
network. In order to keep the benefits of wide networks while improving their
parameter size and accuracy trade-off, we propose a binary tree architecture
that truncates the architecture of wide networks by reducing their width. More
precisely, in the proposed
architecture, the width is continuously reduced from lower layers to higher
layers in order to increase the expressive capacity of the network with a
smaller increase in parameter size. Also, to ease the gradient vanishing problem,
features obtained at different layers are concatenated to form the output of
our architecture. By employing the proposed architecture on a baseline wide
network, we can construct and train a new network with the same depth but
considerably fewer parameters. In our experimental analyses, we
observe that the proposed architecture enables us to obtain better parameter
size and accuracy trade-off compared to baseline networks using various
benchmark image classification datasets. The results show that our model can
decrease the classification error of baseline from 20.43% to 19.22% on
Cifar-100 using only 28% of parameters that baseline has. Code is available at
https://github.com/ZhangVision/bitnet.
| no_new_dataset | 0.952086 |
1607.02737 | Guillermo Garcia-Hernando | Guillermo Garcia-Hernando and Tae-Kyun Kim | Transition Forests: Learning Discriminative Temporal Transitions for
Action Recognition and Detection | to appear in CVPR 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A human action can be seen as transitions between one's body poses over time,
where the transition depicts a temporal relation between two poses. Recognizing
actions thus involves learning a classifier sensitive to these pose transitions
as well as to static poses. In this paper, we introduce a novel method called
transitions forests, an ensemble of decision trees that both learn to
discriminate static poses and transitions between pairs of two independent
frames. During training, node splitting is driven by alternating two criteria:
the standard classification objective that maximizes the discrimination power
in individual frames, and the proposed one in pairwise frame transitions.
Growing the trees tends to group frames that have similar associated
transitions and share the same action label, incorporating temporal information that
was not available otherwise. Unlike conventional decision trees where the best
split in a node is determined independently of other nodes, the transition
forests try to find the best split of nodes jointly (within a layer) for
incorporating distant node transitions. When inferring the class label of a new
frame, it is passed down the trees and the prediction is made based on previous
frame predictions and the current one in an efficient and online manner. We
apply our method on varied skeleton action recognition and online detection
datasets showing its suitability over several baselines and state-of-the-art
approaches.
| [
{
"version": "v1",
"created": "Sun, 10 Jul 2016 12:05:41 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2016 17:21:46 GMT"
},
{
"version": "v3",
"created": "Fri, 31 Mar 2017 15:39:45 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Garcia-Hernando",
"Guillermo",
""
],
[
"Kim",
"Tae-Kyun",
""
]
] | TITLE: Transition Forests: Learning Discriminative Temporal Transitions for
Action Recognition and Detection
ABSTRACT: A human action can be seen as transitions between one's body poses over time,
where the transition depicts a temporal relation between two poses. Recognizing
actions thus involves learning a classifier sensitive to these pose transitions
as well as to static poses. In this paper, we introduce a novel method called
transitions forests, an ensemble of decision trees that both learn to
discriminate static poses and transitions between pairs of two independent
frames. During training, node splitting is driven by alternating two criteria:
the standard classification objective that maximizes the discrimination power
in individual frames, and the proposed one in pairwise frame transitions.
Growing the trees tends to group frames that have similar associated
transitions and share the same action label, incorporating temporal information that
was not available otherwise. Unlike conventional decision trees where the best
split in a node is determined independently of other nodes, the transition
forests try to find the best split of nodes jointly (within a layer) for
incorporating distant node transitions. When inferring the class label of a new
frame, it is passed down the trees and the prediction is made based on previous
frame predictions and the current one in an efficient and online manner. We
apply our method on varied skeleton action recognition and online detection
datasets showing its suitability over several baselines and state-of-the-art
approaches.
| no_new_dataset | 0.941815 |
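The transition-forest record above describes node splitting that alternates between a static-pose objective and a pairwise-transition objective. The following is a minimal, hypothetical Python sketch of that alternation, not the authors' implementation: the axis-aligned thresholding, the entropy-based gain, and the depth-parity rule for switching criteria are illustrative assumptions.

```python
# Hypothetical sketch of an alternating split criterion: even-depth nodes score
# purity of per-frame action labels, odd-depth nodes score purity of transition
# labels built from pairs of frames.
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array; 0.0 for an empty split."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def split_gain(values, labels, thresh):
    """Information gain of thresholding `values` with respect to `labels`."""
    mask = values <= thresh
    n = len(labels)
    return entropy(labels) - (mask.sum() / n) * entropy(labels[mask]) \
                           - ((~mask).sum() / n) * entropy(labels[~mask])

def best_split(X, frame_labels, pairs, depth):
    """Pick (feature, threshold), alternating static and transition criteria.

    X            : (n_frames, n_features) pose descriptors
    frame_labels : (n_frames,) action label per frame
    pairs        : list of (i, j) frame indices forming transitions
    """
    if depth % 2 == 1:  # transition criterion
        idx = np.array([i for i, _ in pairs])
        labels = np.array([f"{frame_labels[i]}->{frame_labels[j]}" for i, j in pairs])
        values_all = X[idx]
    else:               # static-pose criterion
        labels = np.asarray(frame_labels)
        values_all = X
    best = (None, None, -np.inf)
    for feat in range(X.shape[1]):
        for thresh in np.unique(values_all[:, feat]):
            g = split_gain(values_all[:, feat], labels, thresh)
            if g > best[2]:
                best = (feat, thresh, g)
    return best

# Toy usage: 6 frames of 3-D pose features, 2 actions, transitions between
# consecutive frames of the same sequence.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
frame_labels = np.array([0, 0, 0, 1, 1, 1])
pairs = [(0, 1), (1, 2), (3, 4), (4, 5)]
print(best_split(X, frame_labels, pairs, depth=0))   # static criterion
print(best_split(X, frame_labels, pairs, depth=1))   # transition criterion
```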
1611.01427 | Arash Ardakani | Arash Ardakani, Carlo Condo and Warren J. Gross | Sparsely-Connected Neural Networks: Towards Efficient VLSI
Implementation of Deep Neural Networks | Published as a conference paper at ICLR 2017 | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently deep neural networks have received considerable attention due to
their ability to extract and represent high-level abstractions in data sets.
Deep neural networks such as fully-connected and convolutional neural networks
have shown excellent performance on a wide range of recognition and
classification tasks. However, their hardware implementations currently suffer
from large silicon area and high power consumption due to their high degree
of complexity. The power/energy consumption of neural networks is dominated by
memory accesses, the majority of which occur in fully-connected networks. In
fact, they contain most of the deep neural network parameters. In this paper,
we propose sparsely-connected networks, by showing that the number of
connections in fully-connected networks can be reduced by up to 90% while
improving the accuracy performance on three popular datasets (MNIST, CIFAR10
and SVHN). We then propose an efficient hardware architecture based on
linear-feedback shift registers to reduce the memory requirements of the
proposed sparsely-connected networks. The proposed architecture can save up to
90% of memory compared to the conventional implementations of fully-connected
neural networks. Moreover, implementation results show up to 84% reduction in
the energy consumption of a single neuron of the proposed sparsely-connected
networks compared to a single neuron of fully-connected neural networks.
| [
{
"version": "v1",
"created": "Fri, 4 Nov 2016 15:47:32 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2017 15:52:44 GMT"
},
{
"version": "v3",
"created": "Thu, 30 Mar 2017 19:51:47 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Ardakani",
"Arash",
""
],
[
"Condo",
"Carlo",
""
],
[
"Gross",
"Warren J.",
""
]
] | TITLE: Sparsely-Connected Neural Networks: Towards Efficient VLSI
Implementation of Deep Neural Networks
ABSTRACT: Recently deep neural networks have received considerable attention due to
their ability to extract and represent high-level abstractions in data sets.
Deep neural networks such as fully-connected and convolutional neural networks
have shown excellent performance on a wide range of recognition and
classification tasks. However, their hardware implementations currently suffer
from large silicon area and high power consumption due to their high degree
of complexity. The power/energy consumption of neural networks is dominated by
memory accesses, the majority of which occur in fully-connected networks. In
fact, they contain most of the deep neural network parameters. In this paper,
we propose sparsely-connected networks, by showing that the number of
connections in fully-connected networks can be reduced by up to 90% while
improving the accuracy performance on three popular datasets (MNIST, CIFAR10
and SVHN). We then propose an efficient hardware architecture based on
linear-feedback shift registers to reduce the memory requirements of the
proposed sparsely-connected networks. The proposed architecture can save up to
90% of memory compared to the conventional implementations of fully-connected
neural networks. Moreover, implementation results show up to 84% reduction in
the energy consumption of a single neuron of the proposed sparsely-connected
networks compared to a single neuron of fully-connected neural networks.
| no_new_dataset | 0.952486 |
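The sparsely-connected-network record above hinges on pruning up to 90% of fully-connected weights and regenerating the connectivity pattern from a linear-feedback shift register so the mask never has to be stored. Below is a hedged NumPy sketch of that idea, not the paper's hardware architecture: the 16-bit Fibonacci LFSR taps, the 10% keep rate, and the layer sizes are assumptions made for illustration.

```python
# Sketch: an LFSR-seeded sparsity mask applied to a fully-connected layer, so
# only the seed (not the mask) would need to be kept in memory.
import numpy as np

def lfsr16(seed, n):
    """Yield n pseudo-random 16-bit values from a Fibonacci LFSR (taps 16,14,13,11)."""
    state = seed & 0xFFFF
    out = np.empty(n, dtype=np.uint16)
    for k in range(n):
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        out[k] = state
    return out

def sparse_mask(shape, seed, keep=0.10):
    """Boolean mask keeping roughly 10% of connections, reproducible from the seed alone."""
    stream = lfsr16(seed, int(np.prod(shape)))
    return (stream / 65535.0 < keep).reshape(shape)

def sparsely_connected_forward(x, W, b, mask):
    """Forward pass of a sparsely-connected layer: pruned weights contribute nothing."""
    return x @ (W * mask) + b

# Example: a 784 -> 256 layer with ~90% of its connections removed.
rng = np.random.default_rng(0)
W = rng.standard_normal((784, 256)).astype(np.float32)
b = np.zeros(256, dtype=np.float32)
mask = sparse_mask(W.shape, seed=0xACE1)
y = sparsely_connected_forward(rng.standard_normal((1, 784)).astype(np.float32), W, b, mask)
print(y.shape, mask.mean())
```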
1611.05971 | Marcelo Cicconet | Marcelo Cicconet, David G. C. Hildebrand, and Hunter Elliott | Finding Mirror Symmetry via Registration | Submitted to ICCV 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetry is prevalent in nature and a common theme in man-made designs. Both
the human visual system and computer vision algorithms can use symmetry to
facilitate object recognition and other tasks. Detecting mirror symmetry in
images and data is, therefore, useful for a number of applications. Here, we
demonstrate that the problem of fitting a plane of mirror symmetry to data in
any Euclidean space can be reduced to the problem of registering two datasets.
The exactness of the resulting solution depends entirely on the registration
accuracy. This new Mirror Symmetry via Registration (MSR) framework involves
(1) data reflection with respect to an arbitrary plane, (2) registration of
original and reflected datasets, and (3) calculation of the eigenvector of
eigenvalue -1 for the transformation matrix representing the reflection and
registration mappings. To support MSR, we also introduce a novel 2D
registration method based on random sample consensus of an ensemble of
normalized cross-correlation matches. With this as its registration back-end,
MSR achieves state-of-the-art performance for symmetry line detection in two
independent 2D testing databases. We further demonstrate the generality of MSR
by testing it on a database of 3D shapes with an iterative closest point
registration back-end. Finally, we explore its applicability to examining
symmetry in natural systems by assessing the degree of symmetry present in
myelinated axon reconstructions from a larval zebrafish.
| [
{
"version": "v1",
"created": "Fri, 18 Nov 2016 04:37:26 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2017 01:41:41 GMT"
}
] | 2017-04-03T00:00:00 | [
[
"Cicconet",
"Marcelo",
""
],
[
"Hildebrand",
"David G. C.",
""
],
[
"Elliott",
"Hunter",
""
]
] | TITLE: Finding Mirror Symmetry via Registration
ABSTRACT: Symmetry is prevalent in nature and a common theme in man-made designs. Both
the human visual system and computer vision algorithms can use symmetry to
facilitate object recognition and other tasks. Detecting mirror symmetry in
images and data is, therefore, useful for a number of applications. Here, we
demonstrate that the problem of fitting a plane of mirror symmetry to data in
any Euclidean space can be reduced to the problem of registering two datasets.
The exactness of the resulting solution depends entirely on the registration
accuracy. This new Mirror Symmetry via Registration (MSR) framework involves
(1) data reflection with respect to an arbitrary plane, (2) registration of
original and reflected datasets, and (3) calculation of the eigenvector of
eigenvalue -1 for the transformation matrix representing the reflection and
registration mappings. To support MSR, we also introduce a novel 2D
registration method based on random sample consensus of an ensemble of
normalized cross-correlation matches. With this as its registration back-end,
MSR achieves state-of-the-art performance for symmetry line detection in two
independent 2D testing databases. We further demonstrate the generality of MSR
by testing it on a database of 3D shapes with an iterative closest point
registration back-end. Finally, we explore its applicability to examining
symmetry in natural systems by assessing the degree of symmetry present in
myelinated axon reconstructions from a larval zebrafish.
| no_new_dataset | 0.945248 |
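The MSR record above reduces mirror-symmetry detection to (1) reflecting the data about an arbitrary plane, (2) registering the reflected copy onto the original, and (3) reading the mirror normal off the eigenvector with eigenvalue -1 of the composed transform. The 2-D sketch below follows those three steps under simplifying assumptions: the symmetry plane passes through the origin and the registration is a closed-form Kabsch fit with correspondences known by construction, whereas the paper uses RANSAC over normalized cross-correlation matches (2-D) or ICP (3-D).

```python
# Minimal 2-D Mirror Symmetry via Registration (MSR) sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(1)

# Build a mirror-symmetric point set: random points plus their reflections
# about the line through the origin with unit normal n_true.
n_true = np.array([np.cos(0.3), np.sin(0.3)])
S_true = np.eye(2) - 2.0 * np.outer(n_true, n_true)          # true mirror map
A = rng.standard_normal((50, 2))
P = np.vstack([A, A @ S_true.T])                              # symmetric set
partner = np.concatenate([np.arange(50, 100), np.arange(0, 50)])

# Step 1: reflect about an arbitrary plane (here, the x-axis).
F = np.diag([1.0, -1.0])
Q = P @ F.T

# Step 2: register Q onto P (Kabsch; correspondences assumed known here).
X, Y = Q - Q.mean(0), P[partner] - P[partner].mean(0)
U, _, Vt = np.linalg.svd(X.T @ Y)
d = np.sign(np.linalg.det(Vt.T @ U.T))
R = Vt.T @ np.diag([1.0, d]) @ U.T                            # best rotation Q -> P

# Step 3: the composed map approximates the true mirror; its eigenvector with
# eigenvalue -1 is the normal of the symmetry plane (up to sign).
M = R @ F
w, V = np.linalg.eig(M)
normal = np.real(V[:, np.argmin(np.abs(w + 1.0))])
print("recovered normal:", normal, " true normal:", n_true)
```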