id (string, 9-16 chars) | submitter (string, 3-64 chars, nullable) | authors (string, 5-6.63k chars) | title (string, 7-245 chars) | comments (string, 1-482 chars, nullable) | journal-ref (string, 4-382 chars, nullable) | doi (string, 9-151 chars, nullable) | report-no (string, 984 classes) | categories (string, 5-108 chars) | license (string, 9 classes) | abstract (string, 83-3.41k chars) | versions (list, 1-20 items) | update_date (timestamp[s], 2007-05-23 to 2025-04-11) | authors_parsed (sequence, 1-427 items) | prompt (string, 166-3.49k chars) | label (string, 2 classes) | prob (float64, 0.5-0.98) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
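The records below follow the schema above. As a usage sketch only (this preview appears to come from a Hugging Face-style dataset viewer; the repository id and split name in the snippet are placeholders, not the actual names), the label and prob columns can be used to filter records after loading:

```python
# Usage sketch only: "username/arxiv-new-dataset-classification" is a placeholder
# repository id, not the real dataset name; the split name is assumed to be "train".
from datasets import load_dataset

# Load a split whose rows follow the schema shown above.
ds = load_dataset("username/arxiv-new-dataset-classification", split="train")

# Keep papers predicted to introduce a new dataset with high confidence.
new_dataset_papers = ds.filter(lambda row: row["label"] == "new_dataset" and row["prob"] >= 0.9)

# Inspect a few records: arXiv id, title, categories, and the classifier's probability.
for row in new_dataset_papers.select(range(min(3, len(new_dataset_papers)))):
    print(row["id"], "|", row["title"])
    print("  categories:", row["categories"], "| prob:", row["prob"])
```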
1511.06739 | Varun Jampani | Raghudeep Gadde and Varun Jampani and Martin Kiefel and Daniel Kappler
and Peter V. Gehler | Superpixel Convolutional Networks using Bilateral Inceptions | European Conference on Computer Vision (ECCV), 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a CNN architecture for semantic image segmentation.
We introduce a new 'bilateral inception' module that can be inserted in
existing CNN architectures and performs bilateral filtering, at multiple
feature-scales, between superpixels in an image. The feature spaces for
bilateral filtering and other parameters of the module are learned end-to-end
using standard backpropagation techniques. The bilateral inception module
addresses two issues that arise with general CNN segmentation architectures.
First, this module propagates information between (super) pixels while
respecting image edges, thus using the structured information of the problem
for improved results. Second, the layer recovers a full resolution segmentation
result from the lower resolution solution of a CNN. In the experiments, we
modify several existing CNN architectures by inserting our inception module
between the last CNN (1x1 convolution) layers. Empirical results on three
different datasets show reliable improvements not only in comparison to the
baseline networks, but also in comparison to several dense-pixel prediction
techniques such as CRFs, while being competitive in time.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 19:58:38 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Dec 2015 10:43:52 GMT"
},
{
"version": "v3",
"created": "Fri, 8 Jan 2016 09:10:31 GMT"
},
{
"version": "v4",
"created": "Fri, 5 Aug 2016 09:14:18 GMT"
},
{
"version": "v5",
"created": "Mon, 8 Aug 2016 15:31:14 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Gadde",
"Raghudeep",
""
],
[
"Jampani",
"Varun",
""
],
[
"Kiefel",
"Martin",
""
],
[
"Kappler",
"Daniel",
""
],
[
"Gehler",
"Peter V.",
""
]
] | TITLE: Superpixel Convolutional Networks using Bilateral Inceptions
ABSTRACT: In this paper we propose a CNN architecture for semantic image segmentation.
We introduce a new 'bilateral inception' module that can be inserted in
existing CNN architectures and performs bilateral filtering, at multiple
feature-scales, between superpixels in an image. The feature spaces for
bilateral filtering and other parameters of the module are learned end-to-end
using standard backpropagation techniques. The bilateral inception module
addresses two issues that arise with general CNN segmentation architectures.
First, this module propagates information between (super) pixels while
respecting image edges, thus using the structured information of the problem
for improved results. Second, the layer recovers a full resolution segmentation
result from the lower resolution solution of a CNN. In the experiments, we
modify several existing CNN architectures by inserting our inception module
between the last CNN (1x1 convolution) layers. Empirical results on three
different datasets show reliable improvements not only in comparison to the
baseline networks, but also in comparison to several dense-pixel prediction
techniques such as CRFs, while being competitive in time.
| no_new_dataset | 0.951684 |
1511.07710 | Varun Nagaraja | Varun K. Nagaraja, Vlad I. Morariu, Larry S. Davis | Searching for Objects using Structure in Indoor Scenes | Appeared in British Machine Vision Conference (BMVC) 2015 | null | 10.5244/C.29.53 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To identify the location of objects of a particular class, a passive computer
vision system generally processes all the regions in an image to finally output
a few regions. However, we can use structure in the scene to search for objects
without processing the entire image. We propose a search technique that
sequentially processes image regions such that the regions that are more likely
to correspond to the query class object are explored earlier. We frame the
problem as a Markov decision process and use an imitation learning algorithm to
learn a search strategy. Since structure in the scene is essential for search,
we work with indoor scene images as they contain both unary scene context
information and object-object context in the scene. We perform experiments on
the NYU-depth v2 dataset and show that the unary scene context features alone
can achieve a significantly high average precision while processing only
20-25\% of the regions for classes like bed and sofa. By considering
object-object context along with the scene context features, the performance is
further improved for classes like counter, lamp, pillow and sofa.
| [
{
"version": "v1",
"created": "Tue, 24 Nov 2015 14:05:28 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Nagaraja",
"Varun K.",
""
],
[
"Morariu",
"Vlad I.",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: Searching for Objects using Structure in Indoor Scenes
ABSTRACT: To identify the location of objects of a particular class, a passive computer
vision system generally processes all the regions in an image to finally output
a few regions. However, we can use structure in the scene to search for objects
without processing the entire image. We propose a search technique that
sequentially processes image regions such that the regions that are more likely
to correspond to the query class object are explored earlier. We frame the
problem as a Markov decision process and use an imitation learning algorithm to
learn a search strategy. Since structure in the scene is essential for search,
we work with indoor scene images as they contain both unary scene context
information and object-object context in the scene. We perform experiments on
the NYU-depth v2 dataset and show that the unary scene context features alone
can achieve a significantly high average precision while processing only
20-25\% of the regions for classes like bed and sofa. By considering
object-object context along with the scene context features, the performance is
further improved for classes like counter, lamp, pillow and sofa.
| no_new_dataset | 0.957833 |
1512.06285 | Min Xian | Min Xian, Yingtao Zhang, H. D. Cheng, Fei Xu, Jianrui Ding | Neutro-Connectedness Cut | 15 pages, 14 figures, 4 tables, journal | null | 10.1109/TIP.2016.2594485 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interactive image segmentation is a challenging task and has received increasing
attention recently; however, two major drawbacks exist in interactive
segmentation approaches. First, the segmentation performance of ROI-based
methods is sensitive to the initial ROI: different ROIs may produce results
with great difference. Second, most seed-based methods need intense
interactions, and are not applicable in many cases. In this work, we generalize
the Neutro-Connectedness (NC) to be independent of top-down priors of objects
and to model image topology with indeterminacy measurement on image regions,
propose a novel method for determining object and background regions, which is
applied to exclude isolated background regions and enforce label consistency,
and put forward a hybrid interactive segmentation method, Neutro-Connectedness
Cut (NC-Cut), which can overcome the above two problems by utilizing both
pixel-wise appearance information and region-based NC properties. We evaluate
the proposed NC-Cut by employing two image datasets (265 images), and
demonstrate that the proposed approach outperforms state-of-the-art interactive
image segmentation methods (Grabcut, MILCut, One-Cut, MGC_max^sum and pPBC).
| [
{
"version": "v1",
"created": "Sat, 19 Dec 2015 20:59:09 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Aug 2016 04:51:06 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Xian",
"Min",
""
],
[
"Zhang",
"Yingtao",
""
],
[
"Cheng",
"H. D.",
""
],
[
"Xu",
"Fei",
""
],
[
"Ding",
"Jianrui",
""
]
] | TITLE: Neutro-Connectedness Cut
ABSTRACT: Interactive image segmentation is a challenging task and has received increasing
attention recently; however, two major drawbacks exist in interactive
segmentation approaches. First, the segmentation performance of ROI-based
methods is sensitive to the initial ROI: different ROIs may produce results
with great difference. Second, most seed-based methods need intense
interactions, and are not applicable in many cases. In this work, we generalize
the Neutro-Connectedness (NC) to be independent of top-down priors of objects
and to model image topology with indeterminacy measurement on image regions,
propose a novel method for determining object and background regions, which is
applied to exclude isolated background regions and enforce label consistency,
and put forward a hybrid interactive segmentation method, Neutro-Connectedness
Cut (NC-Cut), which can overcome the above two problems by utilizing both
pixel-wise appearance information and region-based NC properties. We evaluate
the proposed NC-Cut by employing two image datasets (265 images), and
demonstrate that the proposed approach outperforms state-of-the-art interactive
image segmentation methods (Grabcut, MILCut, One-Cut, MGC_max^sum and pPBC).
| no_new_dataset | 0.947962 |
1601.01145 | Yiren Zhou | Yiren Zhou, Hossein Nejati, Thanh-Toan Do, Ngai-Man Cheung, Lynette
Cheah | Image-based Vehicle Analysis using Deep Neural Network: A Systematic
Study | 5 pages, 6 figures, conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the vehicle detection and classification problems using Deep
Neural Networks (DNNs) approaches. Here we answer questions that are
specific to our application, including how to utilize DNNs for vehicle detection,
what features are useful for vehicle classification, and how to extend a model
trained on a limited-size dataset to cases of extreme lighting conditions.
Answering these questions, we propose an approach that outperforms
state-of-the-art methods and achieves promising results on images with extreme
lighting conditions.
| [
{
"version": "v1",
"created": "Wed, 6 Jan 2016 11:25:36 GMT"
},
{
"version": "v2",
"created": "Sun, 7 Aug 2016 09:21:08 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Zhou",
"Yiren",
""
],
[
"Nejati",
"Hossein",
""
],
[
"Do",
"Thanh-Toan",
""
],
[
"Cheung",
"Ngai-Man",
""
],
[
"Cheah",
"Lynette",
""
]
] | TITLE: Image-based Vehicle Analysis using Deep Neural Network: A Systematic
Study
ABSTRACT: We address the vehicle detection and classification problems using Deep
Neural Networks (DNNs) approaches. Here we answer questions that are
specific to our application, including how to utilize DNNs for vehicle detection,
what features are useful for vehicle classification, and how to extend a model
trained on a limited-size dataset to cases of extreme lighting conditions.
Answering these questions, we propose an approach that outperforms
state-of-the-art methods and achieves promising results on images with extreme
lighting conditions.
| no_new_dataset | 0.950273 |
1603.06098 | Alexander Kolesnikov | Alexander Kolesnikov and Christoph H. Lampert | Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image
Segmentation | ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new loss function for the weakly-supervised training of
semantic image segmentation models based on three guiding principles: to seed
with weak localization cues, to expand objects based on the information about
which classes can occur in an image, and to constrain the segmentations to
coincide with object boundaries. We show experimentally that training a deep
convolutional neural network using the proposed loss function leads to
substantially better segmentations than previous state-of-the-art methods on
the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the
working mechanism of our method by a detailed experimental study that
illustrates how the segmentation quality is affected by each term of the
proposed loss function as well as their combinations.
| [
{
"version": "v1",
"created": "Sat, 19 Mar 2016 14:13:42 GMT"
},
{
"version": "v2",
"created": "Mon, 25 Jul 2016 17:36:50 GMT"
},
{
"version": "v3",
"created": "Sat, 6 Aug 2016 18:49:45 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Kolesnikov",
"Alexander",
""
],
[
"Lampert",
"Christoph H.",
""
]
] | TITLE: Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image
Segmentation
ABSTRACT: We introduce a new loss function for the weakly-supervised training of
semantic image segmentation models based on three guiding principles: to seed
with weak localization cues, to expand objects based on the information about
which classes can occur in an image, and to constrain the segmentations to
coincide with object boundaries. We show experimentally that training a deep
convolutional neural network using the proposed loss function leads to
substantially better segmentations than previous state-of-the-art methods on
the challenging PASCAL VOC 2012 dataset. We furthermore give insight into the
working mechanism of our method by a detailed experimental study that
illustrates how the segmentation quality is affected by each term of the
proposed loss function as well as their combinations.
| no_new_dataset | 0.95018 |
1604.02135 | Sergey Zagoruyko | Sergey Zagoruyko, Adam Lerer, Tsung-Yi Lin, Pedro O. Pinheiro, Sam
Gross, Soumith Chintala, Piotr Doll\'ar | A MultiPath Network for Object Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent COCO object detection dataset presents several new challenges for
object detection. In particular, it contains objects at a broad range of
scales, less prototypical images, and requires more precise localization. To
address these challenges, we test three modifications to the standard Fast
R-CNN object detector: (1) skip connections that give the detector access to
features at multiple network layers, (2) a foveal structure to exploit object
context at multiple object resolutions, and (3) an integral loss function and
corresponding network adjustment that improve localization. The result of these
modifications is that information can flow along multiple paths in our network,
including through features from multiple network layers and from multiple
object views. We refer to our modified classifier as a "MultiPath" network. We
couple our MultiPath network with DeepMask object proposals, which are well
suited for localization and small objects, and adapt our pipeline to predict
segmentation masks in addition to bounding boxes. The combined system improves
results over the baseline Fast R-CNN detector with Selective Search by 66%
overall and by 4x on small objects. It placed second in both the COCO 2015
detection and segmentation challenges.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 19:43:47 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2016 13:29:02 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Zagoruyko",
"Sergey",
""
],
[
"Lerer",
"Adam",
""
],
[
"Lin",
"Tsung-Yi",
""
],
[
"Pinheiro",
"Pedro O.",
""
],
[
"Gross",
"Sam",
""
],
[
"Chintala",
"Soumith",
""
],
[
"Dollár",
"Piotr",
""
]
] | TITLE: A MultiPath Network for Object Detection
ABSTRACT: The recent COCO object detection dataset presents several new challenges for
object detection. In particular, it contains objects at a broad range of
scales, less prototypical images, and requires more precise localization. To
address these challenges, we test three modifications to the standard Fast
R-CNN object detector: (1) skip connections that give the detector access to
features at multiple network layers, (2) a foveal structure to exploit object
context at multiple object resolutions, and (3) an integral loss function and
corresponding network adjustment that improve localization. The result of these
modifications is that information can flow along multiple paths in our network,
including through features from multiple network layers and from multiple
object views. We refer to our modified classifier as a "MultiPath" network. We
couple our MultiPath network with DeepMask object proposals, which are well
suited for localization and small objects, and adapt our pipeline to predict
segmentation masks in addition to bounding boxes. The combined system improves
results over the baseline Fast R-CNN detector with Selective Search by 66%
overall and by 4x on small objects. It placed second in both the COCO 2015
detection and segmentation challenges.
| no_new_dataset | 0.955817 |
1605.00164 | Dinesh Jayaraman | Dinesh Jayaraman and Kristen Grauman | Look-ahead before you leap: end-to-end active recognition by forecasting
the effect of motion | A preliminary version of the material in this document was filed as
University of Texas technical report no. UT AI15-06, December, 2015, at:
http://apps.cs.utexas.edu/tech_reports/reports/ai/AI-2214.pdf, ECCV 2016 | null | null | University of Texas Technical Report UT AI 15-06 (December 2015) | cs.CV cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual recognition systems mounted on autonomous moving agents face the
challenge of unconstrained data, but simultaneously have the opportunity to
improve their performance by moving to acquire new views of test data. In this
work, we first show how a recurrent neural network-based system may be trained
to perform end-to-end learning of motion policies suited for this "active
recognition" setting. Further, we hypothesize that active vision requires an
agent to have the capacity to reason about the effects of its motions on its
view of the world. To verify this hypothesis, we attempt to induce this
capacity in our active recognition pipeline, by simultaneously learning to
forecast the effects of the agent's motions on its internal representation of
the environment conditional on all past views. Results across two challenging
datasets confirm both that our end-to-end system successfully learns meaningful
policies for active category recognition, and that "learning to look ahead"
further boosts recognition performance.
| [
{
"version": "v1",
"created": "Sat, 30 Apr 2016 20:39:16 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Aug 2016 22:15:48 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Jayaraman",
"Dinesh",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Look-ahead before you leap: end-to-end active recognition by forecasting
the effect of motion
ABSTRACT: Visual recognition systems mounted on autonomous moving agents face the
challenge of unconstrained data, but simultaneously have the opportunity to
improve their performance by moving to acquire new views of test data. In this
work, we first show how a recurrent neural network-based system may be trained
to perform end-to-end learning of motion policies suited for this "active
recognition" setting. Further, we hypothesize that active vision requires an
agent to have the capacity to reason about the effects of its motions on its
view of the world. To verify this hypothesis, we attempt to induce this
capacity in our active recognition pipeline, by simultaneously learning to
forecast the effects of the agent's motions on its internal representation of
the environment conditional on all past views. Results across two challenging
datasets confirm both that our end-to-end system successfully learns meaningful
policies for active category recognition, and that "learning to look ahead"
further boosts recognition performance.
| no_new_dataset | 0.950088 |
1608.02010 | Si Si | Cho-Jui Hsieh and Si Si and Inderjit S. Dhillon | Communication-Efficient Parallel Block Minimization for Kernel Machines | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Kernel machines often yield superior predictive performance on various tasks;
however, they suffer from severe computational challenges. In this paper, we
show how to overcome the important challenge of speeding up kernel machines. In
particular, we develop a parallel block minimization framework for solving
kernel machines, including kernel SVM and kernel logistic regression. Our
framework proceeds by dividing the problem into smaller subproblems by forming
a block-diagonal approximation of the Hessian matrix. The subproblems are then
solved approximately in parallel. After that, a communication efficient line
search procedure is developed to ensure sufficient reduction of the objective
function value at each iteration. We prove global linear convergence rate of
the proposed method with a wide class of subproblem solvers, and our analysis
covers strongly convex and some non-strongly convex functions. We apply our
algorithm to solve large-scale kernel SVM problems on distributed systems, and
show a significant improvement over existing parallel solvers. As an example,
on the covtype dataset with half-a-million samples, our algorithm can obtain an
approximate solution with 96% accuracy in 20 seconds using 32 machines, while
all the other parallel kernel SVM solvers require more than 2000 seconds to
achieve a solution with 95% accuracy. Moreover, our algorithm can scale to very
large data sets, such as the kdd algebra dataset with 8 million samples and 20
million features.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 20:15:51 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Hsieh",
"Cho-Jui",
""
],
[
"Si",
"Si",
""
],
[
"Dhillon",
"Inderjit S.",
""
]
] | TITLE: Communication-Efficient Parallel Block Minimization for Kernel Machines
ABSTRACT: Kernel machines often yield superior predictive performance on various tasks;
however, they suffer from severe computational challenges. In this paper, we
show how to overcome the important challenge of speeding up kernel machines. In
particular, we develop a parallel block minimization framework for solving
kernel machines, including kernel SVM and kernel logistic regression. Our
framework proceeds by dividing the problem into smaller subproblems by forming
a block-diagonal approximation of the Hessian matrix. The subproblems are then
solved approximately in parallel. After that, a communication efficient line
search procedure is developed to ensure sufficient reduction of the objective
function value at each iteration. We prove global linear convergence rate of
the proposed method with a wide class of subproblem solvers, and our analysis
covers strongly convex and some non-strongly convex functions. We apply our
algorithm to solve large-scale kernel SVM problems on distributed systems, and
show a significant improvement over existing parallel solvers. As an example,
on the covtype dataset with half-a-million samples, our algorithm can obtain an
approximate solution with 96% accuracy in 20 seconds using 32 machines, while
all the other parallel kernel SVM solvers require more than 2000 seconds to
achieve a solution with 95% accuracy. Moreover, our algorithm can scale to very
large data sets, such as the kdd algebra dataset with 8 million samples and 20
million features.
| no_new_dataset | 0.946051 |
1608.02026 | Hatem Alismail | Hatem Alismail and Brett Browning and Simon Lucey | Photometric Bundle Adjustment for Vision-Based SLAM | Under review | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel algorithm for the joint refinement of structure and motion
parameters from image data directly without relying on fixed and known
correspondences. In contrast to traditional bundle adjustment (BA) where the
optimal parameters are determined by minimizing the reprojection error using
tracked features, the proposed algorithm relies on maximizing the photometric
consistency and estimates the correspondences implicitly. Since the proposed
algorithm does not require correspondences, its application is not limited to
corner-like structure; any pixel with nonvanishing gradient could be used in
the estimation process. Furthermore, we demonstrate the feasibility of refining
the motion and structure parameters simultaneously using photometric consistency
in unconstrained scenes and without requiring restrictive assumptions such as
planarity. The proposed algorithm is evaluated on a range of challenging outdoor
datasets, and it is shown to improve upon the accuracy of the state-of-the-art
VSLAM methods obtained using the minimization of the reprojection error using
traditional BA as well as loop closure.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 21:27:11 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Alismail",
"Hatem",
""
],
[
"Browning",
"Brett",
""
],
[
"Lucey",
"Simon",
""
]
] | TITLE: Photometric Bundle Adjustment for Vision-Based SLAM
ABSTRACT: We propose a novel algorithm for the joint refinement of structure and motion
parameters from image data directly without relying on fixed and known
correspondences. In contrast to traditional bundle adjustment (BA) where the
optimal parameters are determined by minimizing the reprojection error using
tracked features, the proposed algorithm relies on maximizing the photometric
consistency and estimates the correspondences implicitly. Since the proposed
algorithm does not require correspondences, its application is not limited to
corner-like structure; any pixel with nonvanishing gradient could be used in
the estimation process. Furthermore, we demonstrate the feasibility of refining
the motion and structure parameters simultaneously using photometric consistency
in unconstrained scenes and without requiring restrictive assumptions such as
planarity. The proposed algorithm is evaluated on a range of challenging outdoor
datasets, and it is shown to improve upon the accuracy of the state-of-the-art
VSLAM methods obtained using the minimization of the reprojection error using
traditional BA as well as loop closure.
| no_new_dataset | 0.949435 |
1608.02051 | Kanji Tanaka | Tomoya Murase and Kanji Tanaka | Compressive Change Retrieval for Moving Object Detection | 6 pages, 6 figures, Draft of a paper submitted to an International
Conference | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Change detection, or anomaly detection, from street-view images acquired by
an autonomous robot at multiple different times, is a major problem in robotic
mapping and autonomous driving. Formulation as an image comparison task, which
operates on a given pair of query and reference images is common to many
existing approaches to this problem. Unfortunately, providing relevant
reference images is not straightforward. In this paper, we propose a novel
formulation for change detection, termed compressive change retrieval, which
can operate on a query image and similar reference images retrieved from the
web. Compared to previous formulations, there are two sources of difficulty.
First, the retrieved reference images may frequently contain non-relevant
reference images, because even state-of-the-art place-recognition techniques
suffer from retrieval noise. Second, image comparison needs to be conducted in
a compressed domain to minimize the storage cost of large collections of
street-view images. To address the above issues, we also present a practical
change detection algorithm that uses compressed bag-of-words (BoW) image
representation as a scalable solution. The results of experiments conducted on
a practical change detection task, "moving object detection (MOD)," using the
publicly available Malaga dataset validate the effectiveness of the proposed
approach.
| [
{
"version": "v1",
"created": "Sat, 6 Aug 2016 02:04:25 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Murase",
"Tomoya",
""
],
[
"Tanaka",
"Kanji",
""
]
] | TITLE: Compressive Change Retrieval for Moving Object Detection
ABSTRACT: Change detection, or anomaly detection, from street-view images acquired by
an autonomous robot at multiple different times, is a major problem in robotic
mapping and autonomous driving. Formulation as an image comparison task, which
operates on a given pair of query and reference images is common to many
existing approaches to this problem. Unfortunately, providing relevant
reference images is not straightforward. In this paper, we propose a novel
formulation for change detection, termed compressive change retrieval, which
can operate on a query image and similar reference images retrieved from the
web. Compared to previous formulations, there are two sources of difficulty.
First, the retrieved reference images may frequently contain non-relevant
reference images, because even state-of-the-art place-recognition techniques
suffer from retrieval noise. Second, image comparison needs to be conducted in
a compressed domain to minimize the storage cost of large collections of
street-view images. To address the above issues, we also present a practical
change detection algorithm that uses compressed bag-of-words (BoW) image
representation as a scalable solution. The results of experiments conducted on
a practical change detection task, "moving object detection (MOD)," using the
publicly available Malaga dataset validate the effectiveness of the proposed
approach.
| no_new_dataset | 0.944536 |
1608.02192 | Stephan R Richter | Stephan R. Richter, Vibhav Vineet, Stefan Roth, Vladlen Koltun | Playing for Data: Ground Truth from Computer Games | Accepted to the 14th European Conference on Computer Vision (ECCV
2016) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent progress in computer vision has been driven by high-capacity models
trained on large datasets. Unfortunately, creating large datasets with
pixel-level labels has been extremely costly due to the amount of human effort
required. In this paper, we present an approach to rapidly creating
pixel-accurate semantic label maps for images extracted from modern computer
games. Although the source code and the internal operation of commercial games
are inaccessible, we show that associations between image patches can be
reconstructed from the communication between the game and the graphics
hardware. This enables rapid propagation of semantic labels within and across
images synthesized by the game, with no access to the source code or the
content. We validate the presented approach by producing dense pixel-level
semantic annotations for 25 thousand images synthesized by a photorealistic
open-world computer game. Experiments on semantic segmentation datasets show
that using the acquired data to supplement real-world images significantly
increases accuracy and that the acquired data enables reducing the amount of
hand-labeled real-world data: models trained with game data and just 1/3 of the
CamVid training set outperform models trained on the complete CamVid training
set.
| [
{
"version": "v1",
"created": "Sun, 7 Aug 2016 08:20:14 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Richter",
"Stephan R.",
""
],
[
"Vineet",
"Vibhav",
""
],
[
"Roth",
"Stefan",
""
],
[
"Koltun",
"Vladlen",
""
]
] | TITLE: Playing for Data: Ground Truth from Computer Games
ABSTRACT: Recent progress in computer vision has been driven by high-capacity models
trained on large datasets. Unfortunately, creating large datasets with
pixel-level labels has been extremely costly due to the amount of human effort
required. In this paper, we present an approach to rapidly creating
pixel-accurate semantic label maps for images extracted from modern computer
games. Although the source code and the internal operation of commercial games
are inaccessible, we show that associations between image patches can be
reconstructed from the communication between the game and the graphics
hardware. This enables rapid propagation of semantic labels within and across
images synthesized by the game, with no access to the source code or the
content. We validate the presented approach by producing dense pixel-level
semantic annotations for 25 thousand images synthesized by a photorealistic
open-world computer game. Experiments on semantic segmentation datasets show
that using the acquired data to supplement real-world images significantly
increases accuracy and that the acquired data enables reducing the amount of
hand-labeled real-world data: models trained with game data and just 1/3 of the
CamVid training set outperform models trained on the complete CamVid training
set.
| no_new_dataset | 0.950503 |
1608.02201 | Hussein Al-Barazanchi | Hussein A. Al-Barazanchi, Hussam Qassim, Abhishek Verma | Residual CNDS | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Convolutional Neural networks nowadays are of tremendous importance for any
image classification system. One of the most investigated methods to increase
the accuracy of a CNN is to increase its depth. Increasing the depth by
stacking more layers also increases the difficulty of training, besides making
it computationally expensive. Some research found that adding auxiliary forks
after intermediate layers increases the accuracy. Specifying which intermediate
layer should have the fork was addressed only recently, where a simple rule was
used to detect the position of the intermediate layers that need the auxiliary
supervision fork. This technique is known as convolutional neural networks with
deep supervision (CNDS). This technique enhanced the classification accuracy
over the straightforward CNN on the MIT Places dataset and ImageNet. On the
other hand, Residual Learning is another technique that emerged recently to
ease the training of very deep CNNs. The Residual Learning framework changed
the learning of layers from unreferenced functions to learning residual
functions with regard to the layer's input. Residual Learning achieved
state-of-the-art results in the ImageNet 2015 and COCO competitions. In this
paper, we study the effect of adding residual connections to the CNDS network.
Our experimental results show increased accuracy over using CNDS only.
| [
{
"version": "v1",
"created": "Sun, 7 Aug 2016 10:34:02 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Al-Barazanchi",
"Hussein A.",
""
],
[
"Qassim",
"Hussam",
""
],
[
"Verma",
"Abhishek",
""
]
] | TITLE: Residual CNDS
ABSTRACT: Convolutional Neural networks nowadays are of tremendous importance for any
image classification system. One of the most investigated methods to increase
the accuracy of a CNN is to increase its depth. Increasing the depth by
stacking more layers also increases the difficulty of training, besides making
it computationally expensive. Some research found that adding auxiliary forks
after intermediate layers increases the accuracy. Specifying which intermediate
layer should have the fork was addressed only recently, where a simple rule was
used to detect the position of the intermediate layers that need the auxiliary
supervision fork. This technique is known as convolutional neural networks with
deep supervision (CNDS). This technique enhanced the classification accuracy
over the straightforward CNN on the MIT Places dataset and ImageNet. On the
other hand, Residual Learning is another technique that emerged recently to
ease the training of very deep CNNs. The Residual Learning framework changed
the learning of layers from unreferenced functions to learning residual
functions with regard to the layer's input. Residual Learning achieved
state-of-the-art results in the ImageNet 2015 and COCO competitions. In this
paper, we study the effect of adding residual connections to the CNDS network.
Our experimental results show increased accuracy over using CNDS only.
| no_new_dataset | 0.949763 |
1608.02236 | Shaohua Wan | Shaohua Wan, Zhijun Chen, Tao Zhang, Bo Zhang, Kong-kat Wong | Bootstrapping Face Detection with Hard Negative Examples | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently significant performance improvement in face detection was made
possible by deeply trained convolutional networks. In this report, a novel
approach for training a state-of-the-art face detector is described. The key is
to exploit the idea of hard negative mining and iteratively update the Faster
R-CNN based face detector with the hard negatives harvested from a large set of
background examples. We demonstrate that our face detector outperforms
state-of-the-art detectors on the FDDB dataset, which is the de facto standard
for evaluating face detection algorithms.
| [
{
"version": "v1",
"created": "Sun, 7 Aug 2016 16:10:50 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Wan",
"Shaohua",
""
],
[
"Chen",
"Zhijun",
""
],
[
"Zhang",
"Tao",
""
],
[
"Zhang",
"Bo",
""
],
[
"Wong",
"Kong-kat",
""
]
] | TITLE: Bootstrapping Face Detection with Hard Negative Examples
ABSTRACT: Recently significant performance improvement in face detection was made
possible by deeply trained convolutional networks. In this report, a novel
approach for training a state-of-the-art face detector is described. The key is
to exploit the idea of hard negative mining and iteratively update the Faster
R-CNN based face detector with the hard negatives harvested from a large set of
background examples. We demonstrate that our face detector outperforms
state-of-the-art detectors on the FDDB dataset, which is the de facto standard
for evaluating face detection algorithms.
| no_new_dataset | 0.955026 |
1608.02289 | Rossano Schifanella | Rossano Schifanella, Paloma de Juan, Joel Tetreault, Liangliang Cao | Detecting Sarcasm in Multimodal Social Platforms | 10 pages, 3 figures, final version published in the Proceedings of
ACM Multimedia 2016 | null | 10.1145/2964284.2964321 | null | cs.CV cs.CL cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sarcasm is a peculiar form of sentiment expression, where the surface
sentiment differs from the implied sentiment. The detection of sarcasm in
social media platforms has been applied in the past mainly to textual
utterances where lexical indicators (such as interjections and intensifiers),
linguistic markers, and contextual information (such as user profiles, or past
conversations) were used to detect the sarcastic tone. However, modern social
media platforms allow users to create multimodal messages where audiovisual content
is integrated with the text, making the analysis of a mode in isolation
partial. In our work, we first study the relationship between the textual and
visual aspects in multimodal posts from three major social media platforms,
i.e., Instagram, Tumblr and Twitter, and we run a crowdsourcing task to
quantify the extent to which images are perceived as necessary by human
annotators. Moreover, we propose two different computational frameworks to
detect sarcasm that integrate the textual and visual modalities. The first
approach exploits visual semantics trained on an external dataset, and
concatenates the semantics features with state-of-the-art textual features. The
second method adapts a visual neural network initialized with parameters
trained on ImageNet to multimodal sarcastic posts. Results show the positive
effect of combining modalities for the detection of sarcasm across platforms
and methods.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 00:59:03 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Schifanella",
"Rossano",
""
],
[
"de Juan",
"Paloma",
""
],
[
"Tetreault",
"Joel",
""
],
[
"Cao",
"Liangliang",
""
]
] | TITLE: Detecting Sarcasm in Multimodal Social Platforms
ABSTRACT: Sarcasm is a peculiar form of sentiment expression, where the surface
sentiment differs from the implied sentiment. The detection of sarcasm in
social media platforms has been applied in the past mainly to textual
utterances where lexical indicators (such as interjections and intensifiers),
linguistic markers, and contextual information (such as user profiles, or past
conversations) were used to detect the sarcastic tone. However, modern social
media platforms allow users to create multimodal messages where audiovisual content
is integrated with the text, making the analysis of a mode in isolation
partial. In our work, we first study the relationship between the textual and
visual aspects in multimodal posts from three major social media platforms,
i.e., Instagram, Tumblr and Twitter, and we run a crowdsourcing task to
quantify the extent to which images are perceived as necessary by human
annotators. Moreover, we propose two different computational frameworks to
detect sarcasm that integrate the textual and visual modalities. The first
approach exploits visual semantics trained on an external dataset, and
concatenates the semantics features with state-of-the-art textual features. The
second method adapts a visual neural network initialized with parameters
trained on ImageNet to multimodal sarcastic posts. Results show the positive
effect of combining modalities for the detection of sarcasm across platforms
and methods.
| no_new_dataset | 0.94474 |
1608.02307 | William Gray Roncal | William Gray Roncal, Colin Lea, Akira Baruah, Gregory D. Hager | SANTIAGO: Spine Association for Neuron Topology Improvement and Graph
Optimization | 13 pp | null | null | null | cs.CV q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing automated and semi-automated solutions for reconstructing wiring
diagrams of the brain from electron micrographs is important for advancing the
field of connectomics. While the ultimate goal is to generate a graph of neuron
connectivity, most prior automated methods have focused on volume segmentation
rather than explicit graph estimation. In these approaches, one of the key,
commonly occurring error modes is dendritic shaft-spine fragmentation.
We posit that directly addressing this problem of connection identification
may provide critical insight into estimating more accurate brain graphs. To
this end, we develop a network-centric approach motivated by biological priors
image grammars. We build a computer vision pipeline to reconnect fragmented
spines to their parent dendrites using both fully-automated and semi-automated
approaches. Our experiments show we can learn valid connections despite
uncertain segmentation paths. We curate the first known reference dataset for
analyzing the performance of various spine-shaft algorithms and demonstrate
promising results that recover many previously lost connections. Our automated
approach improves the local subgraph score by more than four times and the full
graph score by 60 percent. These data, results, and evaluation tools are all
available to the broader scientific community. This reframing of the
connectomics problem illustrates a semantic, biologically inspired solution to
remedy a major problem with neuron tracking.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 03:37:29 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Roncal",
"William Gray",
""
],
[
"Lea",
"Colin",
""
],
[
"Baruah",
"Akira",
""
],
[
"Hager",
"Gregory D.",
""
]
] | TITLE: SANTIAGO: Spine Association for Neuron Topology Improvement and Graph
Optimization
ABSTRACT: Developing automated and semi-automated solutions for reconstructing wiring
diagrams of the brain from electron micrographs is important for advancing the
field of connectomics. While the ultimate goal is to generate a graph of neuron
connectivity, most prior automated methods have focused on volume segmentation
rather than explicit graph estimation. In these approaches, one of the key,
commonly occurring error modes is dendritic shaft-spine fragmentation.
We posit that directly addressing this problem of connection identification
may provide critical insight into estimating more accurate brain graphs. To
this end, we develop a network-centric approach motivated by biological priors
and image grammars. We build a computer vision pipeline to reconnect fragmented
spines to their parent dendrites using both fully-automated and semi-automated
approaches. Our experiments show we can learn valid connections despite
uncertain segmentation paths. We curate the first known reference dataset for
analyzing the performance of various spine-shaft algorithms and demonstrate
promising results that recover many previously lost connections. Our automated
approach improves the local subgraph score by more than four times and the full
graph score by 60 percent. These data, results, and evaluation tools are all
available to the broader scientific community. This reframing of the
connectomics problem illustrates a semantic, biologically inspired solution to
remedy a major problem with neuron tracking.
| new_dataset | 0.957078 |
1608.02388 | Mohamed Ali Mahjoub | Ibtissem Hadj Ali, Mohammed Ali Mahjoub | Database of handwritten Arabic mathematical formulas images | CGIV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although publicly available, ground-truthed databases have proven useful for
training, evaluating, and comparing recognition systems in many domains, the
availability of such databases for handwritten Arabic mathematical formula
recognition in particular is currently quite poor. In this paper, we present a
new public database that contains mathematical expressions available in their
off-line handwritten form. Here, we describe the different steps that allowed
us to acquire this database, from the creation of the mathematical expression
corpora to the transcription of the collected data. Currently, the dataset
contains 4 238 off-line handwritten mathematical expressions written by 66
writers and 20 300 handwritten isolated symbol images. The ground truth is also
provided for the handwritten expressions as XML files with the number of
symbols, and the MATHML structure.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 11:30:35 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Ali",
"Ibtissem Hadj",
""
],
[
"Mahjoub",
"Mohammed Ali",
""
]
] | TITLE: Database of handwritten Arabic mathematical formulas images
ABSTRACT: Although publicly available, ground-truthed databases have proven useful for
training, evaluating, and comparing recognition systems in many domains, the
availability of such databases for handwritten Arabic mathematical formula
recognition in particular is currently quite poor. In this paper, we present a
new public database that contains mathematical expressions available in their
off-line handwritten form. Here, we describe the different steps that allowed
us to acquire this database, from the creation of the mathematical expression
corpora to the transcription of the collected data. Currently, the dataset
contains 4 238 off-line handwritten mathematical expressions written by 66
writers and 20 300 handwritten isolated symbol images. The ground truth is also
provided for the handwritten expressions as XML files with the number of
symbols, and the MATHML structure.
| new_dataset | 0.956756 |
1608.02519 | Marina Sokolova | Marina Sokolova, Kanyi Huang, Stan Matwin, Joshua Ramisch, Vera
Sazonova, Renee Black, Chris Orwa, Sidney Ochieng, Nanjira Sambuli | Topic Modelling and Event Identification from Twitter Textual Data | 17 pages, 2 figures, 5 tables | null | null | null | cs.SI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The tremendous growth of social media content on the Internet has inspired
the development of text analytics to understand and solve real-life
problems. Leveraging statistical topic modelling helps researchers and
practitioners in better comprehension of textual content as well as provides
useful information for further analysis. Statistical topic modelling becomes
especially important when we work with large volumes of dynamic text, e.g.,
Facebook or Twitter datasets. In this study, we summarize the message content
of four data sets of Twitter messages relating to challenging social events in
Kenya. We use Latent Dirichlet Allocation (LDA) topic modelling to analyze the
content. Our study uses two evaluation measures, Normalized Mutual Information
(NMI) and topic coherence analysis, to select the best LDA models. The obtained
LDA results show that the tool can be effectively used to extract discussion
topics and summarize them for further manual analysis.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 17:03:03 GMT"
}
] | 2016-08-09T00:00:00 | [
[
"Sokolova",
"Marina",
""
],
[
"Huang",
"Kanyi",
""
],
[
"Matwin",
"Stan",
""
],
[
"Ramisch",
"Joshua",
""
],
[
"Sazonova",
"Vera",
""
],
[
"Black",
"Renee",
""
],
[
"Orwa",
"Chris",
""
],
[
"Ochieng",
"Sidney",
""
],
[
"Sambuli",
"Nanjira",
""
]
] | TITLE: Topic Modelling and Event Identification from Twitter Textual Data
ABSTRACT: The tremendous growth of social media content on the Internet has inspired
the development of text analytics to understand and solve real-life
problems. Leveraging statistical topic modelling helps researchers and
practitioners in better comprehension of textual content as well as provides
useful information for further analysis. Statistical topic modelling becomes
especially important when we work with large volumes of dynamic text, e.g.,
Facebook or Twitter datasets. In this study, we summarize the message content
of four data sets of Twitter messages relating to challenging social events in
Kenya. We use Latent Dirichlet Allocation (LDA) topic modelling to analyze the
content. Our study uses two evaluation measures, Normalized Mutual Information
(NMI) and topic coherence analysis, to select the best LDA models. The obtained
LDA results show that the tool can be effectively used to extract discussion
topics and summarize them for further manual analysis.
| no_new_dataset | 0.944382 |
1405.1837 | Emanuel Lacic | Emanuel Lacic, Dominik Kowald, Lukas Eberhard, Christoph Trattner,
Denis Parra, Leandro Marinho | Utilizing Online Social Network and Location-Based Data to Recommend
Products and Categories in Online Marketplaces | 20 pages book chapter | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research has unveiled the importance of online social networks for
improving the quality of recommender systems and encouraged the research
community to investigate better ways of exploiting the social information for
recommendations. To contribute to this sparse field of research, in this paper
we exploit users' interactions along three data sources (marketplace, social
network and location-based) to assess their performance in a barely studied
domain: recommending products and domains of interests (i.e., product
categories) to people in an online marketplace environment. To that end we
defined sets of content- and network-based user similarity features for each
data source and studied them in isolation using a user-based Collaborative
Filtering (CF) approach and in combination via a hybrid recommender algorithm,
to assess which one provides the best recommendation performance.
Interestingly, in our experiments conducted on a rich dataset collected from
SecondLife, a popular online virtual world, we found that recommenders relying
on user similarity features obtained from the social network data clearly
yielded the best results in terms of accuracy in case of predicting products,
whereas the features obtained from the marketplace and location-based data
sources also obtained very good results in case of predicting categories. This
finding indicates that all three types of data sources are important and should
be taken into account depending on the level of specialization of the
recommendation task.
| [
{
"version": "v1",
"created": "Thu, 8 May 2014 08:43:55 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Sep 2014 07:48:08 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Lacic",
"Emanuel",
""
],
[
"Kowald",
"Dominik",
""
],
[
"Eberhard",
"Lukas",
""
],
[
"Trattner",
"Christoph",
""
],
[
"Parra",
"Denis",
""
],
[
"Marinho",
"Leandro",
""
]
] | TITLE: Utilizing Online Social Network and Location-Based Data to Recommend
Products and Categories in Online Marketplaces
ABSTRACT: Recent research has unveiled the importance of online social networks for
improving the quality of recommender systems and encouraged the research
community to investigate better ways of exploiting the social information for
recommendations. To contribute to this sparse field of research, in this paper
we exploit users' interactions along three data sources (marketplace, social
network and location-based) to assess their performance in a barely studied
domain: recommending products and domains of interests (i.e., product
categories) to people in an online marketplace environment. To that end we
defined sets of content- and network-based user similarity features for each
data source and studied them in isolation using a user-based Collaborative
Filtering (CF) approach and in combination via a hybrid recommender algorithm,
to assess which one provides the best recommendation performance.
Interestingly, in our experiments conducted on a rich dataset collected from
SecondLife, a popular online virtual world, we found that recommenders relying
on user similarity features obtained from the social network data clearly
yielded the best results in terms of accuracy in case of predicting products,
whereas the features obtained from the marketplace and location-based data
sources also obtained very good results in case of predicting categories. This
finding indicates that all three types of data sources are important and should
be taken into account depending on the level of specialization of the
recommendation task.
| no_new_dataset | 0.952309 |
1507.04155 | Cem Orhan | Cem Orhan and \"Oznur Ta\c{s}tan | ALEVS: Active Learning by Statistical Leverage Sampling | 4 pages, presented as contributed talk in ICML 2015 Active Learning
Workshop | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Active learning aims to obtain a classifier of high accuracy by using fewer
label requests in comparison to passive learning by selecting effective
queries. Many active learning methods have been developed in the past two
decades, which sample queries based on informativeness or representativeness of
unlabeled data points. In this work, we explore a novel querying criterion
based on statistical leverage scores. The statistical leverage scores of a row
in a matrix are the squared row-norms of the matrix containing its (top) left
singular vectors, and are a measure of the influence of the row on the matrix.
Leverage scores have been used for detecting highly influential points in
regression diagnostics and have been recently shown to be useful for data
analysis and randomized low-rank matrix approximation algorithms. We explore
how sampling data instances with high statistical leverage scores perform in
active learning. Our empirical comparison on several binary classification
datasets indicate that querying high leverage points is an effective strategy.
| [
{
"version": "v1",
"created": "Wed, 15 Jul 2015 10:31:00 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Orhan",
"Cem",
""
],
[
"Taştan",
"Öznur",
""
]
] | TITLE: ALEVS: Active Learning by Statistical Leverage Sampling
ABSTRACT: Active learning aims to obtain a classifier of high accuracy by using fewer
label requests in comparison to passive learning by selecting effective
queries. Many active learning methods have been developed in the past two
decades, which sample queries based on informativeness or representativeness of
unlabeled data points. In this work, we explore a novel querying criterion
based on statistical leverage scores. The statistical leverage scores of a row
in a matrix are the squared row-norms of the matrix containing its (top) left
singular vectors, and are a measure of the influence of the row on the matrix.
Leverage scores have been used for detecting highly influential points in
regression diagnostics and have been recently shown to be useful for data
analysis and randomized low-rank matrix approximation algorithms. We explore
how sampling data instances with high statistical leverage scores perform in
active learning. Our empirical comparison on several binary classification
datasets indicates that querying high-leverage points is an effective strategy.
| no_new_dataset | 0.952175 |
1601.03892 | Massimo Cafaro | Massimo Cafaro, Marco Pulimeno, Italo Epicoco and Giovanni Aloisio | Mining frequent items in the time fading model | To appear in Information Sciences, Elsevier | Information Sciences, Elsevier, 2016, Volume 370-371, pp.221-238 | 10.1016/j.ins.2016.07.077 | null | cs.DS cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present FDCMSS, a new sketch-based algorithm for mining frequent items in
data streams. The algorithm cleverly combines key ideas borrowed from forward
decay, the Count-Min and the Space Saving algorithms. It works in the time
fading model, mining data streams according to the cash register model. We
formally prove its correctness and show, through extensive experimental
results, that our algorithm outperforms $\lambda$-HCount, a recently developed
algorithm, with regard to speed, space used, precision attained and error
committed on both synthetic and real datasets.
| [
{
"version": "v1",
"created": "Fri, 15 Jan 2016 12:21:47 GMT"
},
{
"version": "v2",
"created": "Wed, 29 Jun 2016 10:00:22 GMT"
},
{
"version": "v3",
"created": "Tue, 2 Aug 2016 08:24:12 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Cafaro",
"Massimo",
""
],
[
"Pulimeno",
"Marco",
""
],
[
"Epicoco",
"Italo",
""
],
[
"Aloisio",
"Giovanni",
""
]
] | TITLE: Mining frequent items in the time fading model
ABSTRACT: We present FDCMSS, a new sketch-based algorithm for mining frequent items in
data streams. The algorithm cleverly combines key ideas borrowed from forward
decay, the Count-Min and the Space Saving algorithms. It works in the time
fading model, mining data streams according to the cash register model. We
formally prove its correctness and show, through extensive experimental
results, that our algorithm outperforms $\lambda$-HCount, a recently developed
algorithm, with regard to speed, space used, precision attained and error
committed on both synthetic and real datasets.
| no_new_dataset | 0.952131 |
1601.06068 | Shashi Narayan | Shashi Narayan, Siva Reddy and Shay B. Cohen | Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing | 10 pages, INLG 2016 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the limitations of semantic parsing approaches to open-domain question
answering is the lexicosyntactic gap between natural language questions and
knowledge base entries -- there are many ways to ask a question, all with the
same answer. In this paper we propose to bridge this gap by generating
paraphrases of the input question with the goal that at least one of them will
be correctly mapped to a knowledge-base query. We introduce a novel grammar
model for paraphrase generation that does not require any sentence-aligned
paraphrase corpus. Our key idea is to leverage the flexibility and scalability
of latent-variable probabilistic context-free grammars to sample paraphrases.
We do an extrinsic evaluation of our paraphrases by plugging them into a
semantic parser for Freebase. Our evaluation experiments on the WebQuestions
benchmark dataset show that the performance of the semantic parser
significantly improves over strong baselines.
| [
{
"version": "v1",
"created": "Fri, 22 Jan 2016 16:50:22 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Aug 2016 12:20:52 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Narayan",
"Shashi",
""
],
[
"Reddy",
"Siva",
""
],
[
"Cohen",
"Shay B.",
""
]
] | TITLE: Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing
ABSTRACT: One of the limitations of semantic parsing approaches to open-domain question
answering is the lexicosyntactic gap between natural language questions and
knowledge base entries -- there are many ways to ask a question, all with the
same answer. In this paper we propose to bridge this gap by generating
paraphrases of the input question with the goal that at least one of them will
be correctly mapped to a knowledge-base query. We introduce a novel grammar
model for paraphrase generation that does not require any sentence-aligned
paraphrase corpus. Our key idea is to leverage the flexibility and scalability
of latent-variable probabilistic context-free grammars to sample paraphrases.
We do an extrinsic evaluation of our paraphrases by plugging them into a
semantic parser for Freebase. Our evaluation experiments on the WebQuestions
benchmark dataset show that the performance of the semantic parser
significantly improves over strong baselines.
| no_new_dataset | 0.933552 |
1608.01709 | Sofiane Abbar | Sofiane Abbar and Tahar Zanouda and Javier Borge-Holthoefer | Robustness and Resilience of cities around the world | 8 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The concept of city or urban resilience has emerged as one of the key
challenges for the next decades. As a consequence, institutions like the United
Nations or Rockefeller Foundation have embraced initiatives that increase or
improve it. These efforts translate into funded programs both for action on the
ground and to develop quantification of resilience, under the form of an index.
Ironically, on the academic side there is no clear consensus regarding how
resilience should be quantified, or what it exactly refers to in the urban
context. Here we attempt to link both extremes providing an example of how to
exploit large, publicly available, worldwide urban datasets, to produce
objective insight into one of the possible dimensions of urban resilience. We
do so via well-established methods in complexity science, such as percolation
theory --which has a long tradition of providing valuable information on the
vulnerability in complex systems. Our findings uncover large differences among
studied cities, both regarding their infrastructural fragility and the
imbalances in the distribution of critical services.
| [
{
"version": "v1",
"created": "Thu, 4 Aug 2016 21:58:21 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Abbar",
"Sofiane",
""
],
[
"Zanouda",
"Tahar",
""
],
[
"Borge-Holthoefer",
"Javier",
""
]
] | TITLE: Robustness and Resilience of cities around the world
ABSTRACT: The concept of city or urban resilience has emerged as one of the key
challenges for the next decades. As a consequence, institutions like the United
Nations or Rockefeller Foundation have embraced initiatives that increase or
improve it. These efforts translate into funded programs both for action on the
ground and to develop quantification of resilience, under the form of an index.
Ironically, on the academic side there is no clear consensus regarding how
resilience should be quantified, or what it exactly refers to in the urban
context. Here we attempt to link both extremes providing an example of how to
exploit large, publicly available, worldwide urban datasets, to produce
objective insight into one of the possible dimensions of urban resilience. We
do so via well-established methods in complexity science, such as percolation
theory --which has a long tradition of providing valuable information on the
vulnerability in complex systems. Our findings uncover large differences among
studied cities, both regarding their infrastructural fragility and the
imbalances in the distribution of critical services.
| no_new_dataset | 0.946101 |
1608.01760 | Benjamin Hung Benjamin Hung | Benjamin W.K. Hung and Anura P. Jayasumana | Investigative Simulation: Towards Utilizing Graph Pattern Matching for
Investigative Search | 8 pages, 6 figures. Paper to appear in the Fosint-SI 2016 conference
proceedings in conjunction with the 2016 IEEE/ACM International Conference on
Advances in Social Networks Analysis and Mining ASONAM 2016 | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes the use of graph pattern matching for investigative graph
search, which is the process of searching for and prioritizing persons of
interest who may exhibit part or all of a pattern of suspicious behaviors or
connections. While there are a variety of applications, our principal
motivation is to aid law enforcement in the detection of homegrown violent
extremists. We introduce investigative simulation, which consists of several
necessary extensions to the existing dual simulation graph pattern matching
scheme in order to make it appropriate for intelligence analysts and law
enforcement officials. Specifically, we impose a categorical label structure on
nodes consistent with the nature of indicators in investigations, as well as
prune or complete search results to ensure sensibility and usefulness of
partial matches to analysts. Lastly, we introduce a natural top-k ranking
scheme that can help analysts prioritize investigative efforts. We demonstrate
the performance of investigative simulation on a large real-world dataset.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 04:51:41 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Hung",
"Benjamin W. K.",
""
],
[
"Jayasumana",
"Anura P.",
""
]
] | TITLE: Investigative Simulation: Towards Utilizing Graph Pattern Matching for
Investigative Search
ABSTRACT: This paper proposes the use of graph pattern matching for investigative graph
search, which is the process of searching for and prioritizing persons of
interest who may exhibit part or all of a pattern of suspicious behaviors or
connections. While there are a variety of applications, our principal
motivation is to aid law enforcement in the detection of homegrown violent
extremists. We introduce investigative simulation, which consists of several
necessary extensions to the existing dual simulation graph pattern matching
scheme in order to make it appropriate for intelligence analysts and law
enforcement officials. Specifically, we impose a categorical label structure on
nodes consistent with the nature of indicators in investigations, as well as
prune or complete search results to ensure sensibility and usefulness of
partial matches to analysts. Lastly, we introduce a natural top-k ranking
scheme that can help analysts prioritize investigative efforts. We demonstrate
the performance of investigative simulation on a large real-world dataset.
| no_new_dataset | 0.949435 |
1608.01866 | Mustafa Sert | Hilal Ergun and Mustafa Sert | Fusing Deep Convolutional Networks for Large Scale Visual Concept
Classification | To appear in The Second IEEE International Conference on Multimedia
Big Data (IEEE BigMM 2016) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning architectures are showing great promise in various computer
vision domains including image classification, object detection, event
detection and action recognition. In this study, we investigate various aspects
of convolutional neural networks (CNNs) from the big data perspective. We
analyze recent studies and different network architectures both in terms of
running time and accuracy. We present extensive empirical information along
with best practices for big data practitioners. Using these best practices we
propose efficient fusion mechanisms both for single and multiple network
models. We present state-of-the-art results on benchmark datasets while keeping
computational costs at a lower level. Another contribution of our paper is that
these state-of-the-art results can be reached without using extensive data
augmentation techniques.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 12:50:28 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Ergun",
"Hilal",
""
],
[
"Sert",
"Mustafa",
""
]
] | TITLE: Fusing Deep Convolutional Networks for Large Scale Visual Concept
Classification
ABSTRACT: Deep learning architectures are showing great promise in various computer
vision domains including image classification, object detection, event
detection and action recognition. In this study, we investigate various aspects
of convolutional neural networks (CNNs) from the big data perspective. We
analyze recent studies and different network architectures both in terms of
running time and accuracy. We present extensive empirical information along
with best practices for big data practitioners. Using these best practices we
propose efficient fusion mechanisms both for single and multiple network
models. We present state-of-the-art results on benchmark datasets while keeping
computational costs at a lower level. Another contribution of our paper is that
these state-of-the-art results can be reached without using extensive data
augmentation techniques.
| no_new_dataset | 0.952264 |
1608.01939 | Andrea Cuttone | Andrea Cuttone, Sune Lehmann, Marta C. Gonz\'alez | Understanding Predictability and Exploration in Human Mobility | null | null | null | null | cs.CY physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predictive models for human mobility have important applications in many
fields such as traffic control, ubiquitous computing and contextual
advertisement. The predictive performance of models in the literature varies quite
broadly, from as high as 93% to as low as under 40%. In this work we
investigate which factors influence the accuracy of next-place prediction,
using a high-precision location dataset of more than 400 users for periods
between 3 months and one year. We show that it is easier to achieve high
accuracy when predicting the time-bin location than when predicting the next
place. Moreover we demonstrate how the temporal and spatial resolution of the
data can have a strong influence on the accuracy of prediction. Finally we
uncover that the exploration of new locations is an important factor in human
mobility, and we measure that on average 20-25% of transitions are to new
places, and approx. 70% of locations are visited only once. We discuss how
these mechanisms are important factors limiting our ability to predict human
mobility.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 17:06:50 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Cuttone",
"Andrea",
""
],
[
"Lehmann",
"Sune",
""
],
[
"González",
"Marta C.",
""
]
] | TITLE: Understanding Predictability and Exploration in Human Mobility
ABSTRACT: Predictive models for human mobility have important applications in many
fields such as traffic control, ubiquitous computing and contextual
advertisement. The predictive performance of models in the literature varies quite
broadly, from as high as 93% to as low as under 40%. In this work we
investigate which factors influence the accuracy of next-place prediction,
using a high-precision location dataset of more than 400 users for periods
between 3 months and one year. We show that it is easier to achieve high
accuracy when predicting the time-bin location than when predicting the next
place. Moreover we demonstrate how the temporal and spatial resolution of the
data can have a strong influence on the accuracy of prediction. Finally we
uncover that the exploration of new locations is an important factor in human
mobility, and we measure that on average 20-25% of transitions are to new
places, and approx. 70% of locations are visited only once. We discuss how
these mechanisms are important factors limiting our ability to predict human
mobility.
| new_dataset | 0.939748 |
1608.01961 | Mohammad Taher Pilehvar | Mohammad Taher Pilehvar and Nigel Collier | De-Conflated Semantic Representations | EMNLP 2016 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One major deficiency of most semantic representation techniques is that they
usually model a word type as a single point in the semantic space, hence
conflating all the meanings that the word can have. Addressing this issue by
learning distinct representations for individual meanings of words has been the
subject of several research studies in the past few years. However, the
generated sense representations are either not linked to any sense inventory or
are unreliable for infrequent word senses. We propose a technique that tackles
these problems by de-conflating the representations of words based on the deep
knowledge it derives from a semantic network. Our approach provides multiple
advantages in comparison to the past work, including its high coverage and the
ability to generate accurate representations even for infrequent word senses.
We carry out evaluations on six datasets across two semantic similarity tasks
and report state-of-the-art results on most of them.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 18:14:19 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Pilehvar",
"Mohammad Taher",
""
],
[
"Collier",
"Nigel",
""
]
] | TITLE: De-Conflated Semantic Representations
ABSTRACT: One major deficiency of most semantic representation techniques is that they
usually model a word type as a single point in the semantic space, hence
conflating all the meanings that the word can have. Addressing this issue by
learning distinct representations for individual meanings of words has been the
subject of several research studies in the past few years. However, the
generated sense representations are either not linked to any sense inventory or
are unreliable for infrequent word senses. We propose a technique that tackles
these problems by de-conflating the representations of words based on the deep
knowledge it derives from a semantic network. Our approach provides multiple
advantages in comparison to the past work, including its high coverage and the
ability to generate accurate representations even for infrequent word senses.
We carry out evaluations on six datasets across two semantic similarity tasks
and report state-of-the-art results on most of them.
| no_new_dataset | 0.950227 |
1608.01987 | Peter Krafft | Peter M. Krafft, Julia Zheng, Wei Pan, Nicol\'as Della Penna, Yaniv
Altshuler, Erez Shmueli, Joshua B. Tenenbaum, Alex Pentland | Human collective intelligence as distributed Bayesian inference | null | null | null | null | cs.CY cs.AI cs.GT cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collective intelligence is believed to underlie the remarkable success of
human society. The formation of accurate shared beliefs is one of the key
components of human collective intelligence. How are accurate shared beliefs
formed in groups of fallible individuals? Answering this question requires a
multiscale analysis. We must understand both the individual decision mechanisms
people use, and the properties and dynamics of those mechanisms in the
aggregate. As of yet, mathematical tools for such an approach have been
lacking. To address this gap, we introduce a new analytical framework: We
propose that groups arrive at accurate shared beliefs via distributed Bayesian
inference. Distributed inference occurs through information processing at the
individual level, and yields rational belief formation at the group level. We
instantiate this framework in a new model of human social decision-making,
which we validate using a dataset we collected of over 50,000 users of an
online social trading platform where investors mimic each other's trades using
real money in foreign exchange and other asset markets. We find that in this
setting people use a decision mechanism in which popularity is treated as a
prior distribution for which decisions are best to make. This mechanism is
boundedly rational at the individual level, but we prove that in the aggregate
it implements a type of approximate "Thompson sampling"---a well-known and highly
effective single-agent Bayesian machine learning algorithm for sequential
decision-making. The perspective of distributed Bayesian inference therefore
reveals how collective rationality emerges from the boundedly rational decision
mechanisms people use.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 19:55:57 GMT"
}
] | 2016-08-08T00:00:00 | [
[
"Krafft",
"Peter M.",
""
],
[
"Zheng",
"Julia",
""
],
[
"Pan",
"Wei",
""
],
[
"Della Penna",
"Nicolás",
""
],
[
"Altshuler",
"Yaniv",
""
],
[
"Shmueli",
"Erez",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Pentland",
"Alex",
""
]
] | TITLE: Human collective intelligence as distributed Bayesian inference
ABSTRACT: Collective intelligence is believed to underlie the remarkable success of
human society. The formation of accurate shared beliefs is one of the key
components of human collective intelligence. How are accurate shared beliefs
formed in groups of fallible individuals? Answering this question requires a
multiscale analysis. We must understand both the individual decision mechanisms
people use, and the properties and dynamics of those mechanisms in the
aggregate. As of yet, mathematical tools for such an approach have been
lacking. To address this gap, we introduce a new analytical framework: We
propose that groups arrive at accurate shared beliefs via distributed Bayesian
inference. Distributed inference occurs through information processing at the
individual level, and yields rational belief formation at the group level. We
instantiate this framework in a new model of human social decision-making,
which we validate using a dataset we collected of over 50,000 users of an
online social trading platform where investors mimic each other's trades using
real money in foreign exchange and other asset markets. We find that in this
setting people use a decision mechanism in which popularity is treated as a
prior distribution for which decisions are best to make. This mechanism is
boundedly rational at the individual level, but we prove that in the aggregate
it implements a type of approximate "Thompson sampling"---a well-known and highly
effective single-agent Bayesian machine learning algorithm for sequential
decision-making. The perspective of distributed Bayesian inference therefore
reveals how collective rationality emerges from the boundedly rational decision
mechanisms people use.
| new_dataset | 0.904059 |
1501.05194 | Guillaume Marrelec | Guillaume Marrelec, Arnaud Mess\'e, Pierre Bellec | A Bayesian alternative to mutual information for the hierarchical
clustering of dependent random variables | null | G. Marrelec, A. Messe, P. Bellec (2015) A Bayesian alternative to
mutual information for the hierarchical clustering of dependent random
variables. PLoS ONE 10(9): e0137278 | 10.1371/journal.pone.0137278 | null | stat.ML cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of mutual information as a similarity measure in agglomerative
hierarchical clustering (AHC) raises an important issue: some correction needs
to be applied for the dimensionality of variables. In this work, we formulate
the decision of merging dependent multivariate normal variables in an AHC
procedure as a Bayesian model comparison. We found that the Bayesian
formulation naturally shrinks the empirical covariance matrix towards a matrix
set a priori (e.g., the identity), provides an automated stopping rule, and
corrects for dimensionality using a term that scales up the measure as a
function of the dimensionality of the variables. Also, the resulting log Bayes
factor is asymptotically proportional to the plug-in estimate of mutual
information, with an additive correction for dimensionality in agreement with
the Bayesian information criterion. We investigated the behavior of these
Bayesian alternatives (in exact and asymptotic forms) to mutual information on
simulated and real data. An encouraging result was first derived on
simulations: the hierarchical clustering based on the log Bayes factor
outperformed off-the-shelf clustering techniques as well as raw and normalized
mutual information in terms of classification accuracy. On a toy example, we
found that the Bayesian approaches led to results that were similar to those of
mutual information clustering techniques, with the advantage of an automated
thresholding. On real functional magnetic resonance imaging (fMRI) datasets
measuring brain activity, it identified clusters consistent with the
established outcome of standard procedures. On this application, normalized
mutual information had a highly atypical behavior, in the sense that it
systematically favored very large clusters. These initial experiments suggest
that the proposed Bayesian alternatives to mutual information are a useful new
tool for hierarchical clustering.
| [
{
"version": "v1",
"created": "Wed, 21 Jan 2015 15:22:13 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Oct 2015 11:31:38 GMT"
}
] | 2016-08-07T00:00:00 | [
[
"Marrelec",
"Guillaume",
""
],
[
"Messé",
"Arnaud",
""
],
[
"Bellec",
"Pierre",
""
]
] | TITLE: A Bayesian alternative to mutual information for the hierarchical
clustering of dependent random variables
ABSTRACT: The use of mutual information as a similarity measure in agglomerative
hierarchical clustering (AHC) raises an important issue: some correction needs
to be applied for the dimensionality of variables. In this work, we formulate
the decision of merging dependent multivariate normal variables in an AHC
procedure as a Bayesian model comparison. We found that the Bayesian
formulation naturally shrinks the empirical covariance matrix towards a matrix
set a priori (e.g., the identity), provides an automated stopping rule, and
corrects for dimensionality using a term that scales up the measure as a
function of the dimensionality of the variables. Also, the resulting log Bayes
factor is asymptotically proportional to the plug-in estimate of mutual
information, with an additive correction for dimensionality in agreement with
the Bayesian information criterion. We investigated the behavior of these
Bayesian alternatives (in exact and asymptotic forms) to mutual information on
simulated and real data. An encouraging result was first derived on
simulations: the hierarchical clustering based on the log Bayes factor
outperformed off-the-shelf clustering techniques as well as raw and normalized
mutual information in terms of classification accuracy. On a toy example, we
found that the Bayesian approaches led to results that were similar to those of
mutual information clustering techniques, with the advantage of an automated
thresholding. On real functional magnetic resonance imaging (fMRI) datasets
measuring brain activity, it identified clusters consistent with the
established outcome of standard procedures. On this application, normalized
mutual information had a highly atypical behavior, in the sense that it
systematically favored very large clusters. These initial experiments suggest
that the proposed Bayesian alternatives to mutual information are a useful new
tool for hierarchical clustering.
| no_new_dataset | 0.958187 |
1509.04513 | Vinh Nguyen | Vinh Nguyen, Olivier Bodenreider, Krishnaprasad Thirunarayan, Gang Fu,
Evan Bolton, N\'uria Queralt Rosinach, Laura I. Furlong, Michel Dumontier,
Amit Sheth | On Reasoning with RDF Statements about Statements using Singleton
Property Triples | null | null | null | null | cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Singleton Property (SP) approach has been proposed for representing and
querying metadata about RDF triples such as provenance, time, location, and
evidence. In this approach, one singleton property is created to uniquely
represent a relationship in a particular context, and in general, generates a
large property hierarchy in the schema. It has become the subject of important
questions from Semantic Web practitioners. Can an existing reasoner recognize
the singleton property triples? And how? If the singleton property triples
describe a data triple, then how can a reasoner infer this data triple from the
singleton property triples? Or would the large property hierarchy affect the
reasoners in some way? We address these questions in this paper and present our
study about the reasoning aspects of the singleton properties. We propose a
simple mechanism to enable existing reasoners to recognize the singleton
property triples, as well as to infer the data triples described by the
singleton property triples. We evaluate the effect of the singleton property
triples in the reasoning processes by comparing the performance on RDF datasets
with and without singleton properties. Our evaluation uses as benchmark the
LUBM datasets and the LUBM-SP datasets derived from LUBM with temporal
information added through singleton properties.
| [
{
"version": "v1",
"created": "Tue, 15 Sep 2015 12:10:37 GMT"
}
] | 2016-08-07T00:00:00 | [
[
"Nguyen",
"Vinh",
""
],
[
"Bodenreider",
"Olivier",
""
],
[
"Thirunarayan",
"Krishnaprasad",
""
],
[
"Fu",
"Gang",
""
],
[
"Bolton",
"Evan",
""
],
[
"Rosinach",
"Núria Queralt",
""
],
[
"Furlong",
"Laura I.",
""
],
[
"Dumontier",
"Michel",
""
],
[
"Sheth",
"Amit",
""
]
] | TITLE: On Reasoning with RDF Statements about Statements using Singleton
Property Triples
ABSTRACT: The Singleton Property (SP) approach has been proposed for representing and
querying metadata about RDF triples such as provenance, time, location, and
evidence. In this approach, one singleton property is created to uniquely
represent a relationship in a particular context, and in general, generates a
large property hierarchy in the schema. It has become the subject of important
questions from Semantic Web practitioners. Can an existing reasoner recognize
the singleton property triples? And how? If the singleton property triples
describe a data triple, then how can a reasoner infer this data triple from the
singleton property triples? Or would the large property hierarchy affect the
reasoners in some way? We address these questions in this paper and present our
study about the reasoning aspects of the singleton properties. We propose a
simple mechanism to enable existing reasoners to recognize the singleton
property triples, as well as to infer the data triples described by the
singleton property triples. We evaluate the effect of the singleton property
triples in the reasoning processes by comparing the performance on RDF datasets
with and without singleton properties. Our evaluation uses as benchmark the
LUBM datasets and the LUBM-SP datasets derived from LUBM with temporal
information added through singleton properties.
| no_new_dataset | 0.9463 |
1511.00915 | Jan Wielemaker | Jan Wielemaker and Torbj\"orn Lager and Fabrizio Riguzzi | SWISH: SWI-Prolog for Sharing | International Workshop on User-Oriented Logic Programming (IULP
2015), co-located with the 31st International Conference on Logic Programming
(ICLP 2015), Proceedings of the International Workshop on User-Oriented Logic
Programming (IULP 2015), Editors: Stefan Ellmauthaler and Claudia Schulz,
pages 99-113, August 2015 | null | null | null | cs.PL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, we see a new type of interfaces for programmers based on web
technology. For example, JSFiddle, IPython Notebook and R-studio. Web
technology enables cloud-based solutions, embedding in tutorial web pages,
attractive rendering of results, web-scale cooperative development, etc. This
article describes SWISH, a web front-end for Prolog. A public website exposes
SWI-Prolog using SWISH, which is used to run small Prolog programs for
demonstration, experimentation and education. We connected SWISH to the
ClioPatria semantic web toolkit, where it allows for collaborative development
of programs and queries related to a dataset as well as performing maintenance
tasks on the running server and we embedded SWISH in the Learn Prolog Now!
online Prolog book.
| [
{
"version": "v1",
"created": "Tue, 3 Nov 2015 14:16:31 GMT"
}
] | 2016-08-06T00:00:00 | [
[
"Wielemaker",
"Jan",
""
],
[
"Lager",
"Torbjörn",
""
],
[
"Riguzzi",
"Fabrizio",
""
]
] | TITLE: SWISH: SWI-Prolog for Sharing
ABSTRACT: Recently, we see a new type of interfaces for programmers based on web
technology. For example, JSFiddle, IPython Notebook and R-studio. Web
technology enables cloud-based solutions, embedding in tutorial web pages,
attractive rendering of results, web-scale cooperative development, etc. This
article describes SWISH, a web front-end for Prolog. A public website exposes
SWI-Prolog using SWISH, which is used to run small Prolog programs for
demonstration, experimentation and education. We connected SWISH to the
ClioPatria semantic web toolkit, where it allows for collaborative development
of programs and queries related to a dataset as well as performing maintenance
tasks on the running server and we embedded SWISH in the Learn Prolog Now!
online Prolog book.
| no_new_dataset | 0.9462 |
1511.04834 | Arvind Neelakantan | Arvind Neelakantan, Quoc V. Le, Ilya Sutskever | Neural Programmer: Inducing Latent Programs with Gradient Descent | Accepted as a conference paper at ICLR 2015 | null | null | null | cs.LG cs.CL stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks have achieved impressive supervised classification
performance in many tasks including image recognition, speech recognition, and
sequence to sequence learning. However, this success has not been translated to
applications like question answering that may involve complex arithmetic and
logic reasoning. A major limitation of these models is in their inability to
learn even simple arithmetic and logic operations. For example, it has been
shown that neural networks fail to learn to add two binary numbers reliably. In
this work, we propose Neural Programmer, an end-to-end differentiable neural
network augmented with a small set of basic arithmetic and logic operations.
Neural Programmer can call these augmented operations over several steps,
thereby inducing compositional programs that are more complex than the built-in
operations. The model learns from a weak supervision signal which is the result
of execution of the correct program, hence it does not require expensive
annotation of the correct program itself. The decisions of what operations to
call, and what data segments to apply to are inferred by Neural Programmer.
Such decisions, during training, are done in a differentiable fashion so that
the entire network can be trained jointly by gradient descent. We find that
training the model is difficult, but it can be greatly improved by adding
random noise to the gradient. On a fairly complex synthetic table-comprehension
dataset, traditional recurrent networks and attentional models perform poorly
while Neural Programmer typically obtains nearly perfect accuracy.
| [
{
"version": "v1",
"created": "Mon, 16 Nov 2015 06:03:58 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Mar 2016 07:00:28 GMT"
},
{
"version": "v3",
"created": "Thu, 4 Aug 2016 18:23:03 GMT"
}
] | 2016-08-05T00:00:00 | [
[
"Neelakantan",
"Arvind",
""
],
[
"Le",
"Quoc V.",
""
],
[
"Sutskever",
"Ilya",
""
]
] | TITLE: Neural Programmer: Inducing Latent Programs with Gradient Descent
ABSTRACT: Deep neural networks have achieved impressive supervised classification
performance in many tasks including image recognition, speech recognition, and
sequence to sequence learning. However, this success has not been translated to
applications like question answering that may involve complex arithmetic and
logic reasoning. A major limitation of these models is in their inability to
learn even simple arithmetic and logic operations. For example, it has been
shown that neural networks fail to learn to add two binary numbers reliably. In
this work, we propose Neural Programmer, an end-to-end differentiable neural
network augmented with a small set of basic arithmetic and logic operations.
Neural Programmer can call these augmented operations over several steps,
thereby inducing compositional programs that are more complex than the built-in
operations. The model learns from a weak supervision signal which is the result
of execution of the correct program, hence it does not require expensive
annotation of the correct program itself. The decisions of what operations to
call, and what data segments to apply to are inferred by Neural Programmer.
Such decisions, during training, are done in a differentiable fashion so that
the entire network can be trained jointly by gradient descent. We find that
training the model is difficult, but it can be greatly improved by adding
random noise to the gradient. On a fairly complex synthetic table-comprehension
dataset, traditional recurrent networks and attentional models perform poorly
while Neural Programmer typically obtains nearly perfect accuracy.
| no_new_dataset | 0.941601 |
1601.05335 | Kong Hyeok | Hyeok Kong, Cholyong Jong, Unhyok Ryang | Implementation of Association Rule Mining for Network Intrusion
Detection | I have something wrong in submitting the paper | null | null | null | cs.CR cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many modern intrusion detection systems are based on data mining and
database-centric architecture, where a number of data mining techniques have
been found. Among the most popular techniques, association rule mining is one
of the important topics in data mining research. This approach determines
interesting relationships between large sets of data items. This technique was
initially applied to the so-called market basket analysis, which aims at
finding regularities in shopping behaviour of customers of supermarkets. In
contrast to datasets for market basket analysis, which usually have hundreds of
attributes, network audit databases have only tens of attributes. So the typical
Apriori algorithm of association rule mining, which needs so many database
scans, can be improved, dealing with such characteristics of transaction
database. In this paper we propose an improved Apriori algorithm, very useful in
practice, which scans the network audit database only once by transaction cutting
and hashing.
| [
{
"version": "v1",
"created": "Wed, 20 Jan 2016 17:15:44 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2016 01:03:15 GMT"
}
] | 2016-08-05T00:00:00 | [
[
"Kong",
"Hyeok",
""
],
[
"Jong",
"Cholyong",
""
],
[
"Ryang",
"Unhyok",
""
]
] | TITLE: Implementation of Association Rule Mining for Network Intrusion
Detection
ABSTRACT: Many modern intrusion detection systems are based on data mining and
database-centric architecture, where a number of data mining techniques have
been found. Among the most popular techniques, association rule mining is one
of the important topics in data mining research. This approach determines
interesting relationships between large sets of data items. This technique was
initially applied to the so-called market basket analysis, which aims at
finding regularities in shopping behaviour of customers of supermarkets. In
contrast to datasets for market basket analysis, which usually have hundreds of
attributes, network audit databases have only tens of attributes. So the typical
Apriori algorithm of association rule mining, which needs so many database
scans, can be improved, dealing with such characteristics of transaction
database. In this paper we propose an improved Apriori algorithm, very useful in
practice, which scans the network audit database only once by transaction cutting
and hashing.
| no_new_dataset | 0.946941 |
1604.02400 | P\'adraig Mac Carron | P\'adraig MacCarron, Kimmo Kaski and Robin Dunbar | Calling Dunbar's Numbers | 7 pages, 6 figures | Social Networks 47 (2016): 151-155 | 10.1016/j.socnet.2016.06.003 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The social brain hypothesis predicts that humans have an average of about 150
relationships at any given time. Within this 150, there are layers of friends
of an ego, where the number of friends in a layer increases as the emotional
closeness decreases. Here we analyse a mobile phone dataset, firstly, to
ascertain whether layers of friends can be identified based on call frequency.
We then apply different clustering algorithms to break the call frequency of
egos into clusters and compare the number of alters in each cluster with the
layer size predicted by the social brain hypothesis. In this dataset we find
strong evidence for the existence of a layered structure. The clustering yields
results that match well with previous studies for the innermost and outermost
layers, but for layers in between we observe large variability.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 16:55:43 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2016 15:01:25 GMT"
}
] | 2016-08-05T00:00:00 | [
[
"MacCarron",
"Pádraig",
""
],
[
"Kaski",
"Kimmo",
""
],
[
"Dunbar",
"Robin",
""
]
] | TITLE: Calling Dunbar's Numbers
ABSTRACT: The social brain hypothesis predicts that humans have an average of about 150
relationships at any given time. Within this 150, there are layers of friends
of an ego, where the number of friends in a layer increases as the emotional
closeness decreases. Here we analyse a mobile phone dataset, firstly, to
ascertain whether layers of friends can be identified based on call frequency.
We then apply different clustering algorithms to break the call frequency of
egos into clusters and compare the number of alters in each cluster with the
layer size predicted by the social brain hypothesis. In this dataset we find
strong evidence for the existence of a layered structure. The clustering yields
results that match well with previous studies for the innermost and outermost
layers, but for layers in between we observe large variability.
| no_new_dataset | 0.627209 |
1604.04038 | Kuan-Ting Yu | Kuan-Ting Yu, Maria Bauza, Nima Fazeli, Alberto Rodriguez | More than a Million Ways to Be Pushed: A High-Fidelity Experimental
Dataset of Planar Pushing | 8 pages, 10 figures | IROS 2016 | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Pushing is a motion primitive useful to handle objects that are too large,
too heavy, or too cluttered to be grasped. It is at the core of much of robotic
manipulation, in particular when physical interaction is involved. It seems
reasonable then to wish for robots to understand how pushed objects move.
In reality, however, robots often rely on approximations which yield models
that are computable, but also restricted and inaccurate. Just how close are
those models? How reasonable are the assumptions they are based on? To help
answer these questions, and to get a better experimental understanding of
pushing, we present a comprehensive and high-fidelity dataset of planar pushing
experiments. The dataset contains timestamped poses of a circular pusher and a
pushed object, as well as forces at the interaction.We vary the push
interaction in 6 dimensions: surface material, shape of the pushed object,
contact position, pushing direction, pushing speed, and pushing acceleration.
An industrial robot automates the data capturing along precisely controlled
position-velocity-acceleration trajectories of the pusher, which give dense
samples of positions and forces of uniform quality.
We finish the paper by characterizing the variability of friction, and
evaluating the most common assumptions and simplifications made by models of
frictional pushing in robotics.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2016 06:08:11 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2016 02:38:33 GMT"
}
] | 2016-08-05T00:00:00 | [
[
"Yu",
"Kuan-Ting",
""
],
[
"Bauza",
"Maria",
""
],
[
"Fazeli",
"Nima",
""
],
[
"Rodriguez",
"Alberto",
""
]
] | TITLE: More than a Million Ways to Be Pushed: A High-Fidelity Experimental
Dataset of Planar Pushing
ABSTRACT: Pushing is a motion primitive useful to handle objects that are too large,
too heavy, or too cluttered to be grasped. It is at the core of much of robotic
manipulation, in particular when physical interaction is involved. It seems
reasonable then to wish for robots to understand how pushed objects move.
In reality, however, robots often rely on approximations which yield models
that are computable, but also restricted and inaccurate. Just how close are
those models? How reasonable are the assumptions they are based on? To help
answer these questions, and to get a better experimental understanding of
pushing, we present a comprehensive and high-fidelity dataset of planar pushing
experiments. The dataset contains timestamped poses of a circular pusher and a
pushed object, as well as forces at the interaction. We vary the push
interaction in 6 dimensions: surface material, shape of the pushed object,
contact position, pushing direction, pushing speed, and pushing acceleration.
An industrial robot automates the data capturing along precisely controlled
position-velocity-acceleration trajectories of the pusher, which give dense
samples of positions and forces of uniform quality.
We finish the paper by characterizing the variability of friction, and
evaluating the most common assumptions and simplifications made by models of
frictional pushing in robotics.
| new_dataset | 0.956472 |
1608.01441 | Hao Yang Dr | Hao Yang, Joey Tianyi Zhou and Jianfei Cai | Improving Multi-label Learning with Missing Labels by Structured
Semantic Correlations | Accepted in ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-label learning has attracted significant interest in computer vision
recently, finding applications in many vision tasks such as multiple object
recognition and automatic image annotation. Associating multiple labels to a
complex image is very difficult, not only due to the intricacy of describing
the image, but also because of the incompleteness nature of the observed
labels. Existing works on the problem either ignore the label-label and
instance-instance correlations or just assume these correlations are linear and
unstructured. Considering that semantic correlations between images are
actually structured, in this paper we propose to incorporate structured
semantic correlations to solve the missing label problem of multi-label
learning. Specifically, we project images to the semantic space with an
effective semantic descriptor. A semantic graph is then constructed on these
images to capture the structured correlations between them. We utilize the
semantic graph Laplacian as a smooth term in the multi-label learning
formulation to incorporate the structured semantic correlations. Experimental
results demonstrate the effectiveness of the proposed semantic descriptor and
the usefulness of incorporating the structured semantic correlations. We
achieve better results than state-of-the-art multi-label learning methods on
four benchmark datasets.
| [
{
"version": "v1",
"created": "Thu, 4 Aug 2016 06:58:32 GMT"
}
] | 2016-08-05T00:00:00 | [
[
"Yang",
"Hao",
""
],
[
"Zhou",
"Joey Tianyi",
""
],
[
"Cai",
"Jianfei",
""
]
] | TITLE: Improving Multi-label Learning with Missing Labels by Structured
Semantic Correlations
ABSTRACT: Multi-label learning has attracted significant interest in computer vision
recently, finding applications in many vision tasks such as multiple object
recognition and automatic image annotation. Associating multiple labels to a
complex image is very difficult, not only due to the intricacy of describing
the image, but also because of the incomplete nature of the observed
labels. Existing works on the problem either ignore the label-label and
instance-instance correlations or just assume these correlations are linear and
unstructured. Considering that semantic correlations between images are
actually structured, in this paper we propose to incorporate structured
semantic correlations to solve the missing label problem of multi-label
learning. Specifically, we project images to the semantic space with an
effective semantic descriptor. A semantic graph is then constructed on these
images to capture the structured correlations between them. We utilize the
semantic graph Laplacian as a smooth term in the multi-label learning
formulation to incorporate the structured semantic correlations. Experimental
results demonstrate the effectiveness of the proposed semantic descriptor and
the usefulness of incorporating the structured semantic correlations. We
achieve better results than state-of-the-art multi-label learning methods on
four benchmark datasets.
| no_new_dataset | 0.947817 |
1608.01529 | Suman Saha | Suman Saha, Gurkirt Singh, Michael Sapienza, Philip H. S. Torr, Fabio
Cuzzolin | Deep Learning for Detecting Multiple Space-Time Action Tubes in Videos | Accepted by British Machine Vision Conference 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose an approach to the spatiotemporal localisation
(detection) and classification of multiple concurrent actions within temporally
untrimmed videos. Our framework is composed of three stages. In stage 1,
appearance and motion detection networks are employed to localise and score
actions from colour images and optical flow. In stage 2, the appearance network
detections are boosted by combining them with the motion detection scores, in
proportion to their respective spatial overlap. In stage 3, sequences of
detection boxes most likely to be associated with a single action instance,
called action tubes, are constructed by solving two energy maximisation
problems via dynamic programming. While in the first pass, action paths
spanning the whole video are built by linking detection boxes over time using
their class-specific scores and their spatial overlap, in the second pass,
temporal trimming is performed by ensuring label consistency for all
constituting detection boxes. We demonstrate the performance of our algorithm
on the challenging UCF101, J-HMDB-21 and LIRIS-HARL datasets, achieving new
state-of-the-art results across the board and significantly increasing
detection speed at test time. We achieve a huge leap forward in action
detection performance and report a 20% and 11% gain in mAP (mean average
precision) on UCF-101 and J-HMDB-21 datasets respectively when compared to the
state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 4 Aug 2016 13:38:38 GMT"
}
] | 2016-08-05T00:00:00 | [
[
"Saha",
"Suman",
""
],
[
"Singh",
"Gurkirt",
""
],
[
"Sapienza",
"Michael",
""
],
[
"Torr",
"Philip H. S.",
""
],
[
"Cuzzolin",
"Fabio",
""
]
] | TITLE: Deep Learning for Detecting Multiple Space-Time Action Tubes in Videos
ABSTRACT: In this work, we propose an approach to the spatiotemporal localisation
(detection) and classification of multiple concurrent actions within temporally
untrimmed videos. Our framework is composed of three stages. In stage 1,
appearance and motion detection networks are employed to localise and score
actions from colour images and optical flow. In stage 2, the appearance network
detections are boosted by combining them with the motion detection scores, in
proportion to their respective spatial overlap. In stage 3, sequences of
detection boxes most likely to be associated with a single action instance,
called action tubes, are constructed by solving two energy maximisation
problems via dynamic programming. While in the first pass, action paths
spanning the whole video are built by linking detection boxes over time using
their class-specific scores and their spatial overlap, in the second pass,
temporal trimming is performed by ensuring label consistency for all
constituting detection boxes. We demonstrate the performance of our algorithm
on the challenging UCF101, J-HMDB-21 and LIRIS-HARL datasets, achieving new
state-of-the-art results across the board and significantly increasing
detection speed at test time. We achieve a huge leap forward in action
detection performance and report a 20% and 11% gain in mAP (mean average
precision) on UCF-101 and J-HMDB-21 datasets respectively when compared to the
state-of-the-art.
| no_new_dataset | 0.950549 |
1608.01561 | Paheli Bhattacharya | Paheli Bhattacharya, Pawan Goyal and Sudeshna Sarkar | Using Word Embeddings for Query Translation for Hindi to English Cross
Language Information Retrieval | 17th International Conference on Intelligent Text Processing and
Computational Linguistics | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross-Language Information Retrieval (CLIR) has become an important problem
to solve in recent years due to the growth of content in multiple languages
in the Web. One of the standard methods is to use query translation from source
to target language. In this paper, we propose an approach based on word
embeddings, a method that captures contextual clues for a particular word in
the source language and gives those words as translations that occur in a
similar context in the target language. Once we obtain the word embeddings of
the source and target language pairs, we learn a projection from source to
target word embeddings, making use of a dictionary with word translation
pairs. We then propose various methods of query translation and aggregation. The
advantage of this approach is that it does not require the corpora to be
aligned (which is difficult to obtain for resource-scarce languages), a
dictionary with word translation pairs is enough to train the word vectors for
translation. We experiment with Forum for Information Retrieval and Evaluation
(FIRE) 2008 and 2012 datasets for Hindi to English CLIR. The proposed word
embedding based approach outperforms the basic dictionary based approach by 70%
and when the word embeddings are combined with the dictionary, the hybrid
approach beats the baseline dictionary based method by 77%. It outperforms the
English monolingual baseline by 15%, when combined with the translations
obtained from Google Translate and Dictionary.
| [
{
"version": "v1",
"created": "Thu, 4 Aug 2016 14:44:52 GMT"
}
] | 2016-08-05T00:00:00 | [
[
"Bhattacharya",
"Paheli",
""
],
[
"Goyal",
"Pawan",
""
],
[
"Sarkar",
"Sudeshna",
""
]
] | TITLE: Using Word Embeddings for Query Translation for Hindi to English Cross
Language Information Retrieval
ABSTRACT: Cross-Language Information Retrieval (CLIR) has become an important problem
to solve in recent years due to the growth of content in multiple languages
in the Web. One of the standard methods is to use query translation from source
to target language. In this paper, we propose an approach based on word
embeddings, a method that captures contextual clues for a particular word in
the source language and gives those words as translations that occur in a
similar context in the target language. Once we obtain the word embeddings of
the source and target language pairs, we learn a projection from source to
target word embeddings, making use of a dictionary with word translation
pairs. We then propose various methods of query translation and aggregation. The
advantage of this approach is that it does not require the corpora to be
aligned (which is difficult to obtain for resource-scarce languages), a
dictionary with word translation pairs is enough to train the word vectors for
translation. We experiment with Forum for Information Retrieval and Evaluation
(FIRE) 2008 and 2012 datasets for Hindi to English CLIR. The proposed word
embedding based approach outperforms the basic dictionary based approach by 70%
and when the word embeddings are combined with the dictionary, the hybrid
approach beats the baseline dictionary based method by 77%. It outperforms the
English monolingual baseline by 15%, when combined with the translations
obtained from Google Translate and Dictionary.
| no_new_dataset | 0.949012 |
1608.01647 | Wei Li | Wei Li and Christina Tsangouri and Farnaz Abtahi and Zhigang Zhu | A Recursive Framework for Expression Recognition: From Web Images to
Deep Models to Game Dataset | Submitted to Machine Vision Application Journal. arXiv admin note: text
overlap with arXiv:1607.02678 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a recursive framework to recognize facial
expressions from images in real scenes. Unlike traditional approaches that
typically focus on developing and refining algorithms for improving recognition
performance on an existing dataset, we integrate three important components in
a recursive manner: facial dataset generation, facial expression recognition
model building, and interactive interfaces for testing and new data collection.
To start with, we first create a candid-images-for-facial-expression (CIFE)
dataset. We then apply a convolutional neural network (CNN) to CIFE and build a
CNN model for web image expression classification. In order to increase the
expression recognition accuracy, we also fine-tune the CNN model and thus
obtain a better CNN facial expression recognition model. Based on the
fine-tuned CNN model, we design a facial expression game engine and collect a
new and more balanced dataset, GaMo. The images of this dataset are collected
from the different expressions our game users make when playing the game.
Finally, we evaluate the GaMo and CIFE datasets and show that our recursive
framework can help build a better facial expression model for dealing with real
scene facial expression tasks.
| [
{
"version": "v1",
"created": "Thu, 4 Aug 2016 19:07:08 GMT"
}
] | 2016-08-05T00:00:00 | [
[
"Li",
"Wei",
""
],
[
"Tsangouri",
"Christina",
""
],
[
"Abtahi",
"Farnaz",
""
],
[
"Zhu",
"Zhigang",
""
]
] | TITLE: A Recursive Framework for Expression Recognition: From Web Images to
Deep Models to Game Dataset
ABSTRACT: In this paper, we propose a recursive framework to recognize facial
expressions from images in real scenes. Unlike traditional approaches that
typically focus on developing and refining algorithms for improving recognition
performance on an existing dataset, we integrate three important components in
a recursive manner: facial dataset generation, facial expression recognition
model building, and interactive interfaces for testing and new data collection.
To start with, we first create a candid-images-for-facial-expression (CIFE)
dataset. We then apply a convolutional neural network (CNN) to CIFE and build a
CNN model for web image expression classification. In order to increase the
expression recognition accuracy, we also fine-tune the CNN model and thus
obtain a better CNN facial expression recognition model. Based on the
fine-tuned CNN model, we design a facial expression game engine and collect a
new and more balanced dataset, GaMo. The images of this dataset are collected
from the different expressions our game users make when playing the game.
Finally, we evaluate the GaMo and CIFE datasets and show that our recursive
framework can help build a better facial expression model for dealing with real
scene facial expression tasks.
| new_dataset | 0.965996 |
1506.02554 | Christina Heinze | Christina Heinze, Brian McWilliams, Nicolai Meinshausen | DUAL-LOCO: Distributing Statistical Estimation Using Random Projections | 13 pages | Proceedings of the 19th International Conference on Artificial
Intelligence and Statistics, 51, 2016, 12 pages | null | null | stat.ML cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present DUAL-LOCO, a communication-efficient algorithm for distributed
statistical estimation. DUAL-LOCO assumes that the data is distributed
according to the features rather than the samples. It requires only a single
round of communication where low-dimensional random projections are used to
approximate the dependences between features available to different workers. We
show that DUAL-LOCO has bounded approximation error which only depends weakly
on the number of workers. We compare DUAL-LOCO against a state-of-the-art
distributed optimization method on a variety of real world datasets and show
that it obtains better speedups while retaining good accuracy.
| [
{
"version": "v1",
"created": "Mon, 8 Jun 2015 15:35:24 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jan 2016 16:44:27 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Heinze",
"Christina",
""
],
[
"McWilliams",
"Brian",
""
],
[
"Meinshausen",
"Nicolai",
""
]
] | TITLE: DUAL-LOCO: Distributing Statistical Estimation Using Random Projections
ABSTRACT: We present DUAL-LOCO, a communication-efficient algorithm for distributed
statistical estimation. DUAL-LOCO assumes that the data is distributed
according to the features rather than the samples. It requires only a single
round of communication where low-dimensional random projections are used to
approximate the dependences between features available to different workers. We
show that DUAL-LOCO has bounded approximation error which only depends weakly
on the number of workers. We compare DUAL-LOCO against a state-of-the-art
distributed optimization method on a variety of real world datasets and show
that it obtains better speedups while retaining good accuracy.
| no_new_dataset | 0.951051 |
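A rough, single-machine sketch in the spirit of the feature-distributed scheme summarised in the record above: each simulated worker owns a block of columns, shares only a low-dimensional random projection of its block, and fits a local ridge regression on its raw block plus the other workers' projections. The block sizes, projection dimension, and ridge penalty are arbitrary illustrative choices, not the paper's settings or its exact estimator.

```python
# Illustrative feature-distributed estimation with random projections (assumptions noted above).
import numpy as np

def loco_style_fit(X_blocks, y, proj_dim=20, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Each block communicates only an n x proj_dim random projection of its columns.
    projections = []
    for Xk in X_blocks:
        Rk = rng.normal(size=(Xk.shape[1], proj_dim)) / np.sqrt(proj_dim)
        projections.append(Xk @ Rk)
    coefs = []
    for k, Xk in enumerate(X_blocks):
        others = np.hstack([P for j, P in enumerate(projections) if j != k])
        Zk = np.hstack([Xk, others])                      # raw block + compressed rest
        w = np.linalg.solve(Zk.T @ Zk + lam * np.eye(Zk.shape[1]), Zk.T @ y)
        coefs.append(w[: Xk.shape[1]])                    # keep raw-block coefficients
    return np.concatenate(coefs)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 60))
y = X @ rng.normal(size=60) + 0.1 * rng.normal(size=200)
blocks = np.split(X, 3, axis=1)                           # three feature-wise "workers"
print(loco_style_fit(blocks, y).shape)                    # (60,)
```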
1603.06568 | Haimin Zhang | Haimin Zhang and Min Xu | Modelling Temporal Information Using Discrete Fourier Transform for
Recognizing Emotions in User-generated Videos | 5 pages. arXiv admin note: substantial text overlap with
arXiv:1603.06182 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the widespread use of user-generated Internet videos, emotion recognition in
those videos attracts increasing research efforts. However, most existing works
are based on frame-level visual features and/or audio features, which might fail
to model the temporal information, e.g. characteristics accumulated along time.
In order to capture video temporal information, in this paper, we propose to
analyse features in frequency domain transformed by discrete Fourier transform
(DFT features). Frame-level features are first extracted by a pre-trained deep
convolutional neural network (CNN). Then, time-domain features are transformed
and interpolated into DFT features. CNN and DFT features are further encoded
and fused for emotion classification. In this way, static image features
extracted from a pre-trained deep CNN and temporal information represented by
DFT features are jointly considered for video emotion recognition. Experimental
results demonstrate that combining DFT features can effectively capture
temporal information and therefore improve emotion recognition performance. Our
approach has achieved a state-of-the-art performance on the largest video
emotion dataset (VideoEmotion-8 dataset), improving accuracy from 51.1% to
62.6%.
| [
{
"version": "v1",
"created": "Sun, 20 Mar 2016 04:46:00 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2016 00:53:23 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Zhang",
"Haimin",
""
],
[
"Xu",
"Min",
""
]
] | TITLE: Modelling Temporal Information Using Discrete Fourier Transform for
Recognizing Emotions in User-generated Videos
ABSTRACT: With the widespread use of user-generated Internet videos, emotion recognition in
those videos attracts increasing research efforts. However, most existing works
are based on frame-level visual features and/or audio features, which might fail
to model the temporal information, e.g. characteristics accumulated along time.
In order to capture video temporal information, in this paper, we propose to
analyse features in frequency domain transformed by discrete Fourier transform
(DFT features). Frame-level features are first extracted by a pre-trained deep
convolutional neural network (CNN). Then, time-domain features are transformed
and interpolated into DFT features. CNN and DFT features are further encoded
and fused for emotion classification. In this way, static image features
extracted from a pre-trained deep CNN and temporal information represented by
DFT features are jointly considered for video emotion recognition. Experimental
results demonstrate that combining DFT features can effectively capture
temporal information and therefore improve emotion recognition performance. Our
approach has achieved a state-of-the-art performance on the largest video
emotion dataset (VideoEmotion-8 dataset), improving accuracy from 51.1% to
62.6%.
| no_new_dataset | 0.948251 |
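The temporal-modelling idea summarised in the record above — resampling per-frame CNN features to a fixed length and describing them by the magnitude of a discrete Fourier transform along time — can be sketched as follows. The feature dimensionality, target length, and the omission of the encoding/fusion stage are simplifying assumptions, not the authors' exact pipeline.

```python
# Illustrative DFT temporal descriptor over per-frame CNN features.
import numpy as np

def dft_temporal_features(frame_features, target_len=64):
    """frame_features: (num_frames, feat_dim) array of per-frame CNN features."""
    num_frames, feat_dim = frame_features.shape
    # Interpolate each feature dimension along time to a common length.
    src_t = np.linspace(0.0, 1.0, num_frames)
    dst_t = np.linspace(0.0, 1.0, target_len)
    resampled = np.stack(
        [np.interp(dst_t, src_t, frame_features[:, d]) for d in range(feat_dim)], axis=1
    )  # (target_len, feat_dim)
    # DFT along the time axis; keep magnitudes of the non-redundant half.
    spectrum = np.abs(np.fft.rfft(resampled, axis=0))  # (target_len//2 + 1, feat_dim)
    return spectrum.flatten()

video = np.random.rand(37, 128)            # e.g. 37 frames of 128-d CNN features
print(dft_temporal_features(video).shape)  # (33 * 128,) = (4224,)
```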
1603.07704 | Quan Liu | Quan Liu, Hui Jiang, Andrew Evdokimov, Zhen-Hua Ling, Xiaodan Zhu, Si
Wei, Yu Hu | Probabilistic Reasoning via Deep Learning: Neural Association Models | Probabilistic reasoning, Winograd Schema Challenge, Deep learning,
Neural Networks, Distributed Representation | null | null | null | cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new deep learning approach, called neural
association model (NAM), for probabilistic reasoning in artificial
intelligence. We propose to use neural networks to model association between
any two events in a domain. Neural networks take one event as input and compute
a conditional probability of the other event to model how likely these two
events are to be associated. The actual meaning of the conditional
probabilities varies between applications and depends on how the models are
trained. In this work, as two case studies, we have investigated two NAM
structures, namely deep neural networks (DNN) and relation-modulated neural
nets (RMNN), on several probabilistic reasoning tasks in AI, including
recognizing textual entailment, triple classification in multi-relational
knowledge bases and commonsense reasoning. Experimental results on several
popular datasets derived from WordNet, FreeBase and ConceptNet have all
demonstrated that both DNNs and RMNNs perform equally well and they can
significantly outperform the conventional methods available for these reasoning
tasks. Moreover, compared with DNNs, RMNNs are superior in knowledge transfer,
where a pre-trained model can be quickly extended to an unseen relation after
observing only a few training samples. To further prove the effectiveness of
the proposed models, in this work, we have applied NAMs to solving challenging
Winograd Schema (WS) problems. Experiments conducted on a set of WS problems
prove that the proposed models have the potential for commonsense reasoning.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 18:54:18 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2016 14:31:17 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Liu",
"Quan",
""
],
[
"Jiang",
"Hui",
""
],
[
"Evdokimov",
"Andrew",
""
],
[
"Ling",
"Zhen-Hua",
""
],
[
"Zhu",
"Xiaodan",
""
],
[
"Wei",
"Si",
""
],
[
"Hu",
"Yu",
""
]
] | TITLE: Probabilistic Reasoning via Deep Learning: Neural Association Models
ABSTRACT: In this paper, we propose a new deep learning approach, called neural
association model (NAM), for probabilistic reasoning in artificial
intelligence. We propose to use neural networks to model association between
any two events in a domain. Neural networks take one event as input and compute
a conditional probability of the other event to model how likely these two
events are to be associated. The actual meaning of the conditional
probabilities varies between applications and depends on how the models are
trained. In this work, as two case studies, we have investigated two NAM
structures, namely deep neural networks (DNN) and relation-modulated neural
nets (RMNN), on several probabilistic reasoning tasks in AI, including
recognizing textual entailment, triple classification in multi-relational
knowledge bases and commonsense reasoning. Experimental results on several
popular datasets derived from WordNet, FreeBase and ConceptNet have all
demonstrated that both DNNs and RMNNs perform equally well and they can
significantly outperform the conventional methods available for these reasoning
tasks. Moreover, compared with DNNs, RMNNs are superior in knowledge transfer,
where a pre-trained model can be quickly extended to an unseen relation after
observing only a few training samples. To further prove the effectiveness of
the proposed models, in this work, we have applied NAMs to solving challenging
Winograd Schema (WS) problems. Experiments conducted on a set of WS problems
prove that the proposed models have the potential for commonsense reasoning.
| no_new_dataset | 0.949106 |
1606.06472 | Linjie Xing | Linjie Xing, Yu Qiao | DeepWriter: A Multi-Stream Deep CNN for Text-independent Writer
Identification | This article will be presented at ICFHR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-independent writer identification is challenging due to the huge
variation of written contents and the ambiguous written styles of different
writers. This paper proposes DeepWriter, a deep multi-stream CNN to learn deep
powerful representation for recognizing writers. DeepWriter takes local
handwritten patches as input and is trained with softmax classification loss.
The main contributions are: 1) we design and optimize a multi-stream structure
for the writer identification task; 2) we introduce data augmentation learning to
enhance the performance of DeepWriter; 3) we introduce a patch scanning
strategy to handle text images of different lengths. In addition, we find that
different languages such as English and Chinese may share common features for
writer identification, and joint training can yield better performance.
Experimental results on IAM and HWDB datasets show that our models achieve high
identification accuracy: 99.01% on 301 writers and 97.03% on 657 writers with
one English sentence input, 93.85% on 300 writers with one Chinese character
input, which outperform previous methods by a large margin. Moreover, our
models obtain an accuracy of 98.01% on 301 writers with only 4 English letters
as input.
| [
{
"version": "v1",
"created": "Tue, 21 Jun 2016 08:25:25 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2016 03:26:58 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Xing",
"Linjie",
""
],
[
"Qiao",
"Yu",
""
]
] | TITLE: DeepWriter: A Multi-Stream Deep CNN for Text-independent Writer
Identification
ABSTRACT: Text-independent writer identification is challenging due to the huge
variation of written contents and the ambiguous written styles of different
writers. This paper proposes DeepWriter, a deep multi-stream CNN to learn deep
powerful representation for recognizing writers. DeepWriter takes local
handwritten patches as input and is trained with softmax classification loss.
The main contributions are: 1) we design and optimize a multi-stream structure
for the writer identification task; 2) we introduce data augmentation learning to
enhance the performance of DeepWriter; 3) we introduce a patch scanning
strategy to handle text images of different lengths. In addition, we find that
different languages such as English and Chinese may share common features for
writer identification, and joint training can yield better performance.
Experimental results on IAM and HWDB datasets show that our models achieve high
identification accuracy: 99.01% on 301 writers and 97.03% on 657 writers with
one English sentence input, 93.85% on 300 writers with one Chinese character
input, which outperform previous methods by a large margin. Moreover, our
models obtain an accuracy of 98.01% on 301 writers with only 4 English letters
as input.
| no_new_dataset | 0.950319 |
1608.00641 | Eyasu Mequanint Zemene | Eyasu Zemene, Marcello Pelillo | Interactive Image Segmentation Using Constrained Dominant Sets | Accepted at ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new approach to interactive image segmentation based on some
properties of a family of quadratic optimization problems related to dominant
sets, a well-known graph-theoretic notion of a cluster which generalizes the
concept of a maximal clique to edge-weighted graphs. In particular, we show
that by properly controlling a regularization parameter which determines the
structure and the scale of the underlying problem, we are in a position to
extract groups of dominant-set clusters which are constrained to contain
user-selected elements. The resulting algorithm can deal naturally with any
type of input modality, including scribbles, sloppy contours, and bounding
boxes, and is able to robustly handle noisy annotations on the part of the
user. Experiments on standard benchmark datasets show the effectiveness of our
approach as compared to state-of-the-art algorithms on a variety of natural
images under several input conditions.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 2016 23:37:41 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Aug 2016 17:32:04 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Zemene",
"Eyasu",
""
],
[
"Pelillo",
"Marcello",
""
]
] | TITLE: Interactive Image Segmentation Using Constrained Dominant Sets
ABSTRACT: We propose a new approach to interactive image segmentation based on some
properties of a family of quadratic optimization problems related to dominant
sets, a well-known graph-theoretic notion of a cluster which generalizes the
concept of a maximal clique to edge-weighted graphs. In particular, we show
that by properly controlling a regularization parameter which determines the
structure and the scale of the underlying problem, we are in a position to
extract groups of dominant-set clusters which are constrained to contain
user-selected elements. The resulting algorithm can deal naturally with any
type of input modality, including scribbles, sloppy contours, and bounding
boxes, and is able to robustly handle noisy annotations on the part of the
user. Experiments on standard benchmark datasets show the effectiveness of our
approach as compared to state-of-the-art algorithms on a variety of natural
images under several input conditions.
| no_new_dataset | 0.947039 |
1608.01024 | Iman Abbasnejad | N Dinesh Reddy, Iman Abbasnejad, Sheetal Reddy, Amit Kumar Mondal,
Vindhya Devalla | Incremental Real-Time Multibody VSLAM with Trajectory Optimization Using
Stereo Camera | Available on IROS | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time outdoor navigation in highly dynamic environments is a crucial
problem. The recent literature on real-time static SLAM does not scale up to
dynamic outdoor environments. Most of these methods assume moving objects as
outliers or discard the information provided by them. We propose an algorithm
to jointly infer the camera trajectory and the moving object trajectory
simultaneously. In this paper, we perform a sparse scene flow based motion
segmentation using a stereo camera. The segmented objects motion models are
used for accurate localization of the camera trajectory as well as the moving
objects. We exploit the relationship between moving objects for improving the
accuracy of the poses. We formulate the poses as a factor graph incorporating
all the constraints. We achieve exact incremental solution by solving a full
nonlinear optimization problem in real time. The evaluation is performed on the
challenging KITTI dataset with multiple moving cars. Our method outperforms the
previous baselines in outdoor navigation.
| [
{
"version": "v1",
"created": "Tue, 2 Aug 2016 23:03:19 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Reddy",
"N Dinesh",
""
],
[
"Abbasnejad",
"Iman",
""
],
[
"Reddy",
"Sheetal",
""
],
[
"Mondal",
"Amit Kumar",
""
],
[
"Devalla",
"Vindhya",
""
]
] | TITLE: Incremental Real-Time Multibody VSLAM with Trajectory Optimization Using
Stereo Camera
ABSTRACT: Real-time outdoor navigation in highly dynamic environments is a crucial
problem. The recent literature on real-time static SLAM does not scale up to
dynamic outdoor environments. Most of these methods assume moving objects as
outliers or discard the information provided by them. We propose an algorithm
to jointly infer the camera trajectory and the moving object trajectory
simultaneously. In this paper, we perform a sparse scene flow based motion
segmentation using a stereo camera. The segmented objects motion models are
used for accurate localization of the camera trajectory as well as the moving
objects. We exploit the relationship between moving objects for improving the
accuracy of the poses. We formulate the poses as a factor graph incorporating
all the constraints. We achieve exact incremental solution by solving a full
nonlinear optimization problem in real time. The evaluation is performed on the
challenging KITTI dataset with multiple moving cars. Our method outperforms the
previous baselines in outdoor navigation.
| no_new_dataset | 0.945197 |
1608.01026 | Victor Fragoso | Victor Fragoso, Walter Scheirer, Joao Hespanha, Matthew Turk | One-Class Slab Support Vector Machine | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work introduces the one-class slab SVM (OCSSVM), a one-class classifier
that aims at improving the performance of the one-class SVM. The proposed
strategy reduces the false positive rate and increases the accuracy of
detecting instances from novel classes. To this end, it uses two parallel
hyperplanes to learn the normal region of the decision scores of the target
class. OCSSVM extends one-class SVM since it can scale and learn non-linear
decision functions via kernel methods. The experiments on two publicly
available datasets show that OCSSVM can consistently outperform the one-class
SVM and perform comparable to or better than other state-of-the-art one-class
classifiers.
| [
{
"version": "v1",
"created": "Tue, 2 Aug 2016 23:06:35 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Fragoso",
"Victor",
""
],
[
"Scheirer",
"Walter",
""
],
[
"Hespanha",
"Joao",
""
],
[
"Turk",
"Matthew",
""
]
] | TITLE: One-Class Slab Support Vector Machine
ABSTRACT: This work introduces the one-class slab SVM (OCSSVM), a one-class classifier
that aims at improving the performance of the one-class SVM. The proposed
strategy reduces the false positive rate and increases the accuracy of
detecting instances from novel classes. To this end, it uses two parallel
hyperplanes to learn the normal region of the decision scores of the target
class. OCSSVM extends one-class SVM since it can scale and learn non-linear
decision functions via kernel methods. The experiments on two publicly
available datasets show that OCSSVM can consistently outperform the one-class
SVM and perform comparable to or better than other state-of-the-art one-class
classifiers.
| no_new_dataset | 0.948442 |
1608.01082 | Jinghua Wang | Jinghua Wang, Zhenhua Wang, Dacheng Tao, Simon See, Gang Wang | Learning Common and Specific Features for RGB-D Semantic Segmentation
with Deconvolutional Networks | ECCV 2016, 16 pages, 3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we tackle the problem of RGB-D semantic segmentation of indoor
images. We take advantage of deconvolutional networks which can predict
pixel-wise class labels, and develop a new structure for deconvolution of
multiple modalities. We propose a novel feature transformation network to
bridge the convolutional networks and deconvolutional networks. In the feature
transformation network, we correlate the two modalities by discovering common
features between them, as well as characterize each modality by discovering
modality specific features. With the common features, we not only closely
correlate the two modalities, but also allow them to borrow features from each
other to enhance the representation of shared information. With specific
features, we capture the visual patterns that are only visible in one modality.
The proposed network achieves competitive segmentation accuracy on NYU depth
dataset V1 and V2.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2016 06:05:16 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Wang",
"Jinghua",
""
],
[
"Wang",
"Zhenhua",
""
],
[
"Tao",
"Dacheng",
""
],
[
"See",
"Simon",
""
],
[
"Wang",
"Gang",
""
]
] | TITLE: Learning Common and Specific Features for RGB-D Semantic Segmentation
with Deconvolutional Networks
ABSTRACT: In this paper, we tackle the problem of RGB-D semantic segmentation of indoor
images. We take advantage of deconvolutional networks which can predict
pixel-wise class labels, and develop a new structure for deconvolution of
multiple modalities. We propose a novel feature transformation network to
bridge the convolutional networks and deconvolutional networks. In the feature
transformation network, we correlate the two modalities by discovering common
features between them, as well as characterize each modality by discovering
modality specific features. With the common features, we not only closely
correlate the two modalities, but also allow them to borrow features from each
other to enhance the representation of shared information. With specific
features, we capture the visual patterns that are only visible in one modality.
The proposed network achieves competitive segmentation accuracy on NYU depth
dataset V1 and V2.
| no_new_dataset | 0.952706 |
1608.01264 | Niao He | Niao He, Zaid Harchaoui, Yichen Wang, Le Song | Fast and Simple Optimization for Poisson Likelihood Models | null | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Poisson likelihood models have been prevalently used in imaging, social
networks, and time series analysis. We propose fast, simple,
theoretically-grounded, and versatile optimization algorithms for Poisson
likelihood modeling. The Poisson log-likelihood is concave but not
Lipschitz-continuous. Since almost all gradient-based optimization algorithms
rely on Lipschitz-continuity, optimizing Poisson likelihood models with a
guarantee of convergence can be challenging, especially for large-scale
problems.
We present a new perspective allowing to efficiently optimize a wide range of
penalized Poisson likelihood objectives. We show that an appropriate saddle
point reformulation enjoys a favorable geometry and a smooth structure.
Therefore, we can design a new gradient-based optimization algorithm with
$O(1/t)$ convergence rate, in contrast to the usual $O(1/\sqrt{t})$ rate of
non-smooth minimization alternatives. Furthermore, in order to tackle problems
with large samples, we also develop a randomized block-decomposition variant
that enjoys the same convergence rate yet more efficient iteration cost.
Experimental results on several point process applications including social
network estimation and temporal recommendation show that the proposed algorithm
and its randomized block variant outperform existing methods both on synthetic
and real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2016 17:33:16 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"He",
"Niao",
""
],
[
"Harchaoui",
"Zaid",
""
],
[
"Wang",
"Yichen",
""
],
[
"Song",
"Le",
""
]
] | TITLE: Fast and Simple Optimization for Poisson Likelihood Models
ABSTRACT: Poisson likelihood models have been prevalently used in imaging, social
networks, and time series analysis. We propose fast, simple,
theoretically-grounded, and versatile optimization algorithms for Poisson
likelihood modeling. The Poisson log-likelihood is concave but not
Lipschitz-continuous. Since almost all gradient-based optimization algorithms
rely on Lipschitz-continuity, optimizing Poisson likelihood models with a
guarantee of convergence can be challenging, especially for large-scale
problems.
We present a new perspective allowing to efficiently optimize a wide range of
penalized Poisson likelihood objectives. We show that an appropriate saddle
point reformulation enjoys a favorable geometry and a smooth structure.
Therefore, we can design a new gradient-based optimization algorithm with
$O(1/t)$ convergence rate, in contrast to the usual $O(1/\sqrt{t})$ rate of
non-smooth minimization alternatives. Furthermore, in order to tackle problems
with large samples, we also develop a randomized block-decomposition variant
that enjoys the same convergence rate yet more efficient iteration cost.
Experimental results on several point process applications including social
network estimation and temporal recommendation show that the proposed algorithm
and its randomized block variant outperform existing methods both on synthetic
and real-world datasets.
| no_new_dataset | 0.947137 |
1608.01281 | Navdeep Jaitly | Yuping Luo, Chung-Cheng Chiu, Navdeep Jaitly, Ilya Sutskever | Learning Online Alignments with Continuous Rewards Policy Gradient | null | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequence-to-sequence models with soft attention had significant success in
machine translation, speech recognition, and question answering. Though capable
and easy to use, they require that the entirety of the input sequence is
available at the beginning of inference, an assumption that is not valid for
instantaneous translation and speech recognition. To address this problem, we
present a new method for solving sequence-to-sequence problems using hard
online alignments instead of soft offline alignments. The online alignments
model is able to start producing outputs without the need to first process the
entire input sequence. A highly accurate online sequence-to-sequence model is
useful because it can be used to build an accurate voice-based instantaneous
translator. Our model uses hard binary stochastic decisions to select the
timesteps at which outputs will be produced. The model is trained to produce
these stochastic decisions using a standard policy gradient method. In our
experiments, we show that this model achieves encouraging performance on TIMIT
and Wall Street Journal (WSJ) speech recognition datasets.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2016 18:35:12 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Luo",
"Yuping",
""
],
[
"Chiu",
"Chung-Cheng",
""
],
[
"Jaitly",
"Navdeep",
""
],
[
"Sutskever",
"Ilya",
""
]
] | TITLE: Learning Online Alignments with Continuous Rewards Policy Gradient
ABSTRACT: Sequence-to-sequence models with soft attention had significant success in
machine translation, speech recognition, and question answering. Though capable
and easy to use, they require that the entirety of the input sequence is
available at the beginning of inference, an assumption that is not valid for
instantaneous translation and speech recognition. To address this problem, we
present a new method for solving sequence-to-sequence problems using hard
online alignments instead of soft offline alignments. The online alignments
model is able to start producing outputs without the need to first process the
entire input sequence. A highly accurate online sequence-to-sequence model is
useful because it can be used to build an accurate voice-based instantaneous
translator. Our model uses hard binary stochastic decisions to select the
timesteps at which outputs will be produced. The model is trained to produce
these stochastic decisions using a standard policy gradient method. In our
experiments, we show that this model achieves encouraging performance on TIMIT
and Wall Street Journal (WSJ) speech recognition datasets.
| no_new_dataset | 0.953665 |
1608.01298 | Peter Wittek | S\'andor Dar\'anyi, Peter Wittek, Konstantinos Konstantinidis, Symeon
Papadopoulos, Efstratios Kontopoulos | A Physical Metaphor to Study Semantic Drift | 8 pages, 4 figures, to appear in Proceedings of SuCCESS-16, 1st
International Workshop on Semantic Change & Evolving Semantics | null | null | null | cs.CL cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In accessibility tests for digital preservation, over time we experience
drifts of localized and labelled content in statistical models of evolving
semantics represented as a vector field. This articulates the need to detect,
measure, interpret and model outcomes of knowledge dynamics. To this end we
employ a high-performance machine learning algorithm for the training of
extremely large emergent self-organizing maps for exploratory data analysis.
The working hypothesis we present here is that the dynamics of semantic drifts
can be modeled on a relaxed version of Newtonian mechanics called social
mechanics. By using term distances as a measure of semantic relatedness vs.
their PageRank values indicating social importance and applied as variable
`term mass', gravitation as a metaphor to express changes in the semantic
content of a vector field lends a new perspective for experimentation. From
`term gravitation' over time, one can compute its generating potential whose
fluctuations manifest modifications in pairwise term similarity vs. social
importance, thereby updating Osgood's semantic differential. The dataset
examined is the public catalog metadata of Tate Galleries, London.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2016 19:34:13 GMT"
}
] | 2016-08-04T00:00:00 | [
[
"Darányi",
"Sándor",
""
],
[
"Wittek",
"Peter",
""
],
[
"Konstantinidis",
"Konstantinos",
""
],
[
"Papadopoulos",
"Symeon",
""
],
[
"Kontopoulos",
"Efstratios",
""
]
] | TITLE: A Physical Metaphor to Study Semantic Drift
ABSTRACT: In accessibility tests for digital preservation, over time we experience
drifts of localized and labelled content in statistical models of evolving
semantics represented as a vector field. This articulates the need to detect,
measure, interpret and model outcomes of knowledge dynamics. To this end we
employ a high-performance machine learning algorithm for the training of
extremely large emergent self-organizing maps for exploratory data analysis.
The working hypothesis we present here is that the dynamics of semantic drifts
can be modeled on a relaxed version of Newtonian mechanics called social
mechanics. By using term distances as a measure of semantic relatedness vs.
their PageRank values indicating social importance and applied as variable
`term mass', gravitation as a metaphor to express changes in the semantic
content of a vector field lends a new perspective for experimentation. From
`term gravitation' over time, one can compute its generating potential whose
fluctuations manifest modifications in pairwise term similarity vs. social
importance, thereby updating Osgood's semantic differential. The dataset
examined is the public catalog metadata of Tate Galleries, London.
| no_new_dataset | 0.952309 |
1510.02781 | Thierry Moreira | Thierry Pinheiro Moreira, Mauricio Lisboa Perez, Rafael de Oliveira
Werneck, Eduardo Valle | Where Is My Puppy? Retrieving Lost Dogs by Facial Features | 17 pages, 8 figures, 1 table, Multimedia Tools and Applications | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A pet that goes missing is among many people's worst fears: a moment of
distraction is enough for a dog or a cat wandering off from home. Some measures
help matching lost animals to their owners; but automated visual recognition is
one that - although convenient, highly available, and low-cost - is
surprisingly overlooked. In this paper, we inaugurate that promising avenue by
pursuing face recognition for dogs. We contrast four ready-to-use human facial
recognizers (EigenFaces, FisherFaces, LBPH, and a Sparse method) to two
original solutions based upon convolutional neural networks: BARK (inspired in
architecture-optimized networks employed for human facial recognition) and WOOF
(based upon off-the-shelf OverFeat features). Human facial recognizers perform
poorly for dogs (up to 60.5% accuracy), showing that dog facial recognition is
not a trivial extension of human facial recognition. The convolutional network
solutions work much better, with BARK attaining up to 81.1% accuracy, and WOOF,
89.4%. The tests were conducted in two datasets: Flickr-dog, with 42 dogs of
two breeds (pugs and huskies); and Snoopybook, with 18 mongrel dogs.
| [
{
"version": "v1",
"created": "Fri, 9 Oct 2015 19:39:15 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2016 20:02:15 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Moreira",
"Thierry Pinheiro",
""
],
[
"Perez",
"Mauricio Lisboa",
""
],
[
"Werneck",
"Rafael de Oliveira",
""
],
[
"Valle",
"Eduardo",
""
]
] | TITLE: Where Is My Puppy? Retrieving Lost Dogs by Facial Features
ABSTRACT: A pet that goes missing is among many people's worst fears: a moment of
distraction is enough for a dog or a cat wandering off from home. Some measures
help matching lost animals to their owners; but automated visual recognition is
one that - although convenient, highly available, and low-cost - is
surprisingly overlooked. In this paper, we inaugurate that promising avenue by
pursuing face recognition for dogs. We contrast four ready-to-use human facial
recognizers (EigenFaces, FisherFaces, LBPH, and a Sparse method) to two
original solutions based upon convolutional neural networks: BARK (inspired by
architecture-optimized networks employed for human facial recognition) and WOOF
(based upon off-the-shelf OverFeat features). Human facial recognizers perform
poorly for dogs (up to 60.5% accuracy), showing that dog facial recognition is
not a trivial extension of human facial recognition. The convolutional network
solutions work much better, with BARK attaining up to 81.1% accuracy, and WOOF,
89.4%. The tests were conducted in two datasets: Flickr-dog, with 42 dogs of
two breeds (pugs and huskies); and Snoopybook, with 18 mongrel dogs.
| no_new_dataset | 0.940408 |
1602.05498 | Hynek Lavicka | Ji\v{r}\'i Krac\'ik, Hynek Lavi\v{c}ka | Fluctuation analysis of high frequency electric power load in the Czech
Republic | sent to Physica A for consideration | null | 10.1016/j.physa.2016.06.073 | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the electric power load in the Czech Republic (CR) which exhibits
a seasonality as well as other oscillations typical for European countries.
Moreover, we detect a 1/f noise property of the electrical power load with
additional peaks, which allows us to separate it into a deterministic and a
stochastic part. We then focus on the analysis of the stochastic part, using the
improved Multi-fractal Detrended Fluctuation Analysis (MFDFA) method to investigate
power load datasets with a minute resolution. Extracting the noise part of the
signal by using Fourier transform allows us to apply this method to obtain the
fluctuation function and to estimate the generalized Hurst exponent together
with the correlated Hurst exponent, its improvement for the non-Gaussian
datasets. The results exhibit a strong presence of persistent behaviour and the
dataset is characterized by a non-Gaussian skewed distribution. There are also
indications of a probability distribution with a heavier tail than the Gaussian
distribution.
| [
{
"version": "v1",
"created": "Tue, 16 Feb 2016 15:59:37 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Kracík",
"Jiří",
""
],
[
"Lavička",
"Hynek",
""
]
] | TITLE: Fluctuation analysis of high frequency electric power load in the Czech
Republic
ABSTRACT: We analyze the electric power load in the Czech Republic (CR) which exhibits
a seasonality as well as other oscillations typical for European countries.
Moreover, we detect a 1/f noise property of the electrical power load with
additional peaks, which allows us to separate it into a deterministic and a
stochastic part. We then focus on the analysis of the stochastic part, using the
improved Multi-fractal Detrended Fluctuation Analysis (MFDFA) method to investigate
power load datasets with a minute resolution. Extracting the noise part of the
signal by using Fourier transform allows us to apply this method to obtain the
fluctuation function and to estimate the generalized Hurst exponent together
with the correlated Hurst exponent, its improvement for the non-Gaussian
datasets. The results exhibit a strong presence of persistent behaviour and the
dataset is characterized by a non-Gaussian skewed distribution. There are also
indications of a probability distribution with a heavier tail than the Gaussian
distribution.
| no_new_dataset | 0.94699 |
1604.05837 | Shyeh Tjing Loi | Shyeh Tjing Loi, Tara Murphy, Iver H. Cairns, Cathryn M. Trott,
Natasha Hurley-Walker, Lu Feng, Paul J. Hancock, David L. Kaplan | A new angle for probing field-aligned irregularities with the Murchison
Widefield Array | 23 pages, 14 figures, accepted for publication in Radio Science | null | 10.1002/2015RS005878 | null | physics.space-ph astro-ph.IM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electron density irregularities in the ionosphere are known to be
magnetically anisotropic, preferentially elongated along the lines of force.
While many studies of their morphology have been undertaken by topside sounding
and whistler measurements, it is only recently that detailed regional-scale
reconstructions have become possible, enabled by the advent of widefield radio
telescopes. Here we present a new approach for visualising and studying
field-aligned irregularities (FAIs), which involves transforming
interferometric measurements of TEC gradients onto a magnetic shell tangent
plane. This removes the perspective distortion associated with the oblique
viewing angle of the irregularities from the ground, facilitating the
decomposition of dynamics along and across magnetic field lines. We apply this
transformation to the dataset of Loi et al. [2015a], obtained on 15 October
2013 by the Murchison Widefield Array (MWA) radio telescope and displaying
prominent FAIs. We study these FAIs in the new reference frame, quantifying
field-aligned and field-transverse behaviour, examining time and altitude
dependencies, and extending the analysis to FAIs on sub-array scales. We show
that the inclination of the plane can be derived solely from the data, and
verify that the best-fit value is consistent with the known magnetic
inclination. The ability of the model to concentrate the fluctuations along a
single spatial direction may find practical application to future calibration
strategies for widefield interferometry, by providing a compact representation
of FAI-induced distortions.
| [
{
"version": "v1",
"created": "Wed, 20 Apr 2016 06:26:44 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Loi",
"Shyeh Tjing",
""
],
[
"Murphy",
"Tara",
""
],
[
"Cairns",
"Iver H.",
""
],
[
"Trott",
"Cathryn M.",
""
],
[
"Hurley-Walker",
"Natasha",
""
],
[
"Feng",
"Lu",
""
],
[
"Hancock",
"Paul J.",
""
],
[
"Kaplan",
"David L.",
""
]
] | TITLE: A new angle for probing field-aligned irregularities with the Murchison
Widefield Array
ABSTRACT: Electron density irregularities in the ionosphere are known to be
magnetically anisotropic, preferentially elongated along the lines of force.
While many studies of their morphology have been undertaken by topside sounding
and whistler measurements, it is only recently that detailed regional-scale
reconstructions have become possible, enabled by the advent of widefield radio
telescopes. Here we present a new approach for visualising and studying
field-aligned irregularities (FAIs), which involves transforming
interferometric measurements of TEC gradients onto a magnetic shell tangent
plane. This removes the perspective distortion associated with the oblique
viewing angle of the irregularities from the ground, facilitating the
decomposition of dynamics along and across magnetic field lines. We apply this
transformation to the dataset of Loi et al. [2015a], obtained on 15 October
2013 by the Murchison Widefield Array (MWA) radio telescope and displaying
prominent FAIs. We study these FAIs in the new reference frame, quantifying
field-aligned and field-transverse behaviour, examining time and altitude
dependencies, and extending the analysis to FAIs on sub-array scales. We show
that the inclination of the plane can be derived solely from the data, and
verify that the best-fit value is consistent with the known magnetic
inclination. The ability of the model to concentrate the fluctuations along a
single spatial direction may find practical application to future calibration
strategies for widefield interferometry, by providing a compact representation
of FAI-induced distortions.
| no_new_dataset | 0.947381 |
1607.03516 | Muhammad Ghifary | Muhammad Ghifary and W. Bastiaan Kleijn and Mengjie Zhang and David
Balduzzi and Wen Li | Deep Reconstruction-Classification Networks for Unsupervised Domain
Adaptation | to appear in European Conference on Computer Vision (ECCV) 2016 | null | null | null | cs.CV cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel unsupervised domain adaptation algorithm
based on deep learning for visual object recognition. Specifically, we design a
new model called Deep Reconstruction-Classification Network (DRCN), which
jointly learns a shared encoding representation for two tasks: i) supervised
classification of labeled source data, and ii) unsupervised reconstruction of
unlabeled target data. In this way, the learnt representation not only preserves
discriminability, but also encodes useful information from the target domain.
Our new DRCN model can be optimized by using backpropagation similarly as the
standard neural networks.
We evaluate the performance of DRCN on a series of cross-domain object
recognition tasks, where DRCN provides a considerable improvement (up to ~8% in
accuracy) over the prior state-of-the-art algorithms. Interestingly, we also
observe that the reconstruction pipeline of DRCN transforms images from the
source domain into images whose appearance resembles the target dataset. This
suggests that DRCN's performance is due to constructing a single composite
representation that encodes information about both the structure of target
images and the classification of source images. Finally, we provide a formal
analysis to justify the algorithm's objective in domain adaptation context.
| [
{
"version": "v1",
"created": "Tue, 12 Jul 2016 20:48:58 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2016 09:58:13 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Ghifary",
"Muhammad",
""
],
[
"Kleijn",
"W. Bastiaan",
""
],
[
"Zhang",
"Mengjie",
""
],
[
"Balduzzi",
"David",
""
],
[
"Li",
"Wen",
""
]
] | TITLE: Deep Reconstruction-Classification Networks for Unsupervised Domain
Adaptation
ABSTRACT: In this paper, we propose a novel unsupervised domain adaptation algorithm
based on deep learning for visual object recognition. Specifically, we design a
new model called Deep Reconstruction-Classification Network (DRCN), which
jointly learns a shared encoding representation for two tasks: i) supervised
classification of labeled source data, and ii) unsupervised reconstruction of
unlabeled target data. In this way, the learnt representation not only preserves
discriminability, but also encodes useful information from the target domain.
Our new DRCN model can be optimized by using backpropagation similarly as the
standard neural networks.
We evaluate the performance of DRCN on a series of cross-domain object
recognition tasks, where DRCN provides a considerable improvement (up to ~8% in
accuracy) over the prior state-of-the-art algorithms. Interestingly, we also
observe that the reconstruction pipeline of DRCN transforms images from the
source domain into images whose appearance resembles the target dataset. This
suggests that DRCN's performance is due to constructing a single composite
representation that encodes information about both the structure of target
images and the classification of source images. Finally, we provide a formal
analysis to justify the algorithm's objective in domain adaptation context.
| no_new_dataset | 0.948632 |
1608.00611 | Priyadarshini Panda | Priyadarshini Panda, and Kaushik Roy | Attention Tree: Learning Hierarchies of Visual Features for Large-Scale
Image Recognition | 11 pages, 8 figures, Under review in IEEE Transactions on Neural
Networks and Learning systems | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the key challenges in machine learning is to design a computationally
efficient multi-class classifier while maintaining the output accuracy and
performance. In this paper, we present a tree-based classifier: Attention Tree
(ATree) for large-scale image classification that uses recursive Adaboost
training to construct a visual attention hierarchy. The proposed attention
model is inspired from the biological 'selective tuning mechanism for cortical
visual processing'. We exploit the inherent feature similarity across images in
datasets to identify the input variability and use a recursive optimization
procedure to determine data partitioning at each node, thereby learning the
attention hierarchy. A set of binary classifiers is organized on top of the
learnt hierarchy to minimize the overall test-time complexity. The attention
model maximizes the margins for the binary classifiers for optimal decision
boundary modelling, leading to better performance at minimal complexity. The
proposed framework has been evaluated on both Caltech-256 and SUN datasets and
achieves accuracy improvement over state-of-the-art tree-based methods at
significantly lower computational cost.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 2016 20:51:29 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Panda",
"Priyadarshini",
""
],
[
"Roy",
"Kaushik",
""
]
] | TITLE: Attention Tree: Learning Hierarchies of Visual Features for Large-Scale
Image Recognition
ABSTRACT: One of the key challenges in machine learning is to design a computationally
efficient multi-class classifier while maintaining the output accuracy and
performance. In this paper, we present a tree-based classifier: Attention Tree
(ATree) for large-scale image classification that uses recursive Adaboost
training to construct a visual attention hierarchy. The proposed attention
model is inspired from the biological 'selective tuning mechanism for cortical
visual processing'. We exploit the inherent feature similarity across images in
datasets to identify the input variability and use a recursive optimization
procedure to determine data partitioning at each node, thereby learning the
attention hierarchy. A set of binary classifiers is organized on top of the
learnt hierarchy to minimize the overall test-time complexity. The attention
model maximizes the margins for the binary classifiers for optimal decision
boundary modelling, leading to better performance at minimal complexity. The
proposed framework has been evaluated on both Caltech-256 and SUN datasets and
achieves accuracy improvement over state-of-the-art tree-based methods at
significantly lower computational cost.
| no_new_dataset | 0.952175 |
1608.00667 | Hong-Min Chu | Hong-Min Chu, Hsuan-Tien Lin | Can Active Learning Experience Be Transferred? | 10 pages, 8 figs, 4 tables, conference | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Active learning is an important machine learning problem in reducing the
human labeling effort. Current active learning strategies are designed from
human knowledge, and are applied on each dataset in an immutable manner. In
other words, experience about the usefulness of strategies cannot be updated
and transferred to improve active learning on other datasets. This paper
initiates a pioneering study on whether active learning experience can be
transferred. We first propose a novel active learning model that linearly
aggregates existing strategies. The linear weights can then be used to
represent the active learning experience. We equip the model with the popular
linear upper-confidence-bound (LinUCB) algorithm for contextual bandits to
update the weights. Finally, we extend our model to transfer the experience
across datasets with the technique of biased regularization. Empirical studies
demonstrate that the learned experience not only is competitive with existing
strategies on most single datasets, but also can be transferred across datasets
to improve the performance on future learning tasks.
| [
{
"version": "v1",
"created": "Tue, 2 Aug 2016 01:30:25 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Chu",
"Hong-Min",
""
],
[
"Lin",
"Hsuan-Tien",
""
]
] | TITLE: Can Active Learning Experience Be Transferred?
ABSTRACT: Active learning is an important machine learning problem in reducing the
human labeling effort. Current active learning strategies are designed from
human knowledge, and are applied on each dataset in an immutable manner. In
other words, experience about the usefulness of strategies cannot be updated
and transferred to improve active learning on other datasets. This paper
initiates a pioneering study on whether active learning experience can be
transferred. We first propose a novel active learning model that linearly
aggregates existing strategies. The linear weights can then be used to
represent the active learning experience. We equip the model with the popular
linear upper-confidence-bound (LinUCB) algorithm for contextual bandits to
update the weights. Finally, we extend our model to transfer the experience
across datasets with the technique of biased regularization. Empirical studies
demonstrate that the learned experience not only is competitive with existing
strategies on most single datasets, but also can be transferred across datasets
to improve the performance on future learning tasks.
| no_new_dataset | 0.945751 |
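A compact sketch of the LinUCB-style update mentioned in the record above, where each arm stands for a candidate query strategy and the context summarises the current labelled pool. The context construction and reward signal below are placeholder assumptions for illustration, not the paper's aggregation model or its biased-regularization transfer step.

```python
# Illustrative per-arm LinUCB for choosing among strategies.
import numpy as np

class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward statistics

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy loop: 3 candidate strategies, 5-d contexts, rewards favouring strategy 1.
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=3, dim=5)
for _ in range(100):
    x = rng.normal(size=5)
    arm = bandit.select(x)
    reward = 1.0 if arm == 1 else 0.1 * rng.random()
    bandit.update(arm, x, reward)
print(bandit.select(rng.normal(size=5)))
```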
1608.00753 | Nick Schneider | Nick Schneider, Lukas Schneider, Peter Pinggera, Uwe Franke, Marc
Pollefeys, Christoph Stiller | Semantically Guided Depth Upsampling | German Conference on Pattern Recognition 2016 (Oral) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel method for accurate and efficient upsampling of sparse
depth data, guided by high-resolution imagery. Our approach goes beyond the use
of intensity cues only and additionally exploits object boundary cues through
structured edge detection and semantic scene labeling for guidance. Both cues
are combined within a geodesic distance measure that allows for
boundary-preserving depth interpolation while utilizing local context. We
model the observed scene structure by locally planar elements and formulate the
upsampling task as a global energy minimization problem. Our method determines
globally consistent solutions and preserves fine details and sharp depth
boundaries. In our experiments on several public datasets at different levels
of application, we demonstrate superior performance of our approach over the
state-of-the-art, even for very sparse measurements.
| [
{
"version": "v1",
"created": "Tue, 2 Aug 2016 09:44:53 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Schneider",
"Nick",
""
],
[
"Schneider",
"Lukas",
""
],
[
"Pinggera",
"Peter",
""
],
[
"Franke",
"Uwe",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Stiller",
"Christoph",
""
]
] | TITLE: Semantically Guided Depth Upsampling
ABSTRACT: We present a novel method for accurate and efficient upsampling of sparse
depth data, guided by high-resolution imagery. Our approach goes beyond the use
of intensity cues only and additionally exploits object boundary cues through
structured edge detection and semantic scene labeling for guidance. Both cues
are combined within a geodesic distance measure that allows for
boundary-preserving depth interpolation while utilizing local context. We
model the observed scene structure by locally planar elements and formulate the
upsampling task as a global energy minimization problem. Our method determines
globally consistent solutions and preserves fine details and sharp depth
boundaries. In our experiments on several public datasets at different levels
of application, we demonstrate superior performance of our approach over the
state-of-the-art, even for very sparse measurements.
| no_new_dataset | 0.954984 |
1608.00859 | Limin Wang | Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang,
and Luc Van Gool | Temporal Segment Networks: Towards Good Practices for Deep Action
Recognition | Accepted by ECCV 2016. Based on this method, we won the ActivityNet
challenge 2016 in untrimmed video classification | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional networks have achieved great success for visual
recognition in still images. However, for action recognition in videos, the
advantage over traditional methods is not so evident. This paper aims to
discover the principles to design effective ConvNet architectures for action
recognition in videos and learn these models given limited training samples.
Our first contribution is temporal segment network (TSN), a novel framework for
video-based action recognition, which is based on the idea of long-range
temporal structure modeling. It combines a sparse temporal sampling strategy
and video-level supervision to enable efficient and effective learning using
the whole action video. The other contribution is our study on a series of good
practices in learning ConvNets on video data with the help of temporal segment
network. Our approach obtains state-of-the-art performance on the datasets
of HMDB51 ( $ 69.4\% $) and UCF101 ($ 94.2\% $). We also visualize the learned
ConvNet models, which qualitatively demonstrates the effectiveness of temporal
segment network and the proposed good practices.
| [
{
"version": "v1",
"created": "Tue, 2 Aug 2016 15:06:50 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Wang",
"Limin",
""
],
[
"Xiong",
"Yuanjun",
""
],
[
"Wang",
"Zhe",
""
],
[
"Qiao",
"Yu",
""
],
[
"Lin",
"Dahua",
""
],
[
"Tang",
"Xiaoou",
""
],
[
"Van Gool",
"Luc",
""
]
] | TITLE: Temporal Segment Networks: Towards Good Practices for Deep Action
Recognition
ABSTRACT: Deep convolutional networks have achieved great success for visual
recognition in still images. However, for action recognition in videos, the
advantage over traditional methods is not so evident. This paper aims to
discover the principles to design effective ConvNet architectures for action
recognition in videos and learn these models given limited training samples.
Our first contribution is temporal segment network (TSN), a novel framework for
video-based action recognition, which is based on the idea of long-range
temporal structure modeling. It combines a sparse temporal sampling strategy
and video-level supervision to enable efficient and effective learning using
the whole action video. The other contribution is our study on a series of good
practices in learning ConvNets on video data with the help of temporal segment
network. Our approach obtains state-of-the-art performance on the datasets
of HMDB51 ( $ 69.4\% $) and UCF101 ($ 94.2\% $). We also visualize the learned
ConvNet models, which qualitatively demonstrates the effectiveness of temporal
segment network and the proposed good practices.
| no_new_dataset | 0.949012 |
1608.00911 | Wen-Sheng Chu | Wen-Sheng Chu, Fernando De la Torre, Jeffrey F. Cohn | Modeling Spatial and Temporal Cues for Multi-label Facial Action Unit
Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial action units (AUs) are essential to decode human facial expressions.
Researchers have focused on training AU detectors with a variety of features
and classifiers. However, several issues remain. These are spatial
representation, temporal modeling, and AU correlation. Unlike most studies that
tackle these issues separately, we propose a hybrid network architecture to
jointly address them. Specifically, spatial representations are extracted by a
Convolutional Neural Network (CNN), which, as analyzed in this paper, is able
to reduce person-specific biases caused by hand-crafted features (e.g., SIFT and
Gabor). To model temporal dependencies, Long Short-Term Memory (LSTMs) are
stacked on top of these representations, regardless of the lengths of input
videos. The outputs of CNNs and LSTMs are further aggregated into a fusion
network to produce per-frame predictions of 12 AUs. Our network naturally
addresses the three issues, and leads to superior performance compared to
existing methods that consider these issues independently. Extensive
experiments were conducted on two large spontaneous datasets, GFT and BP4D,
containing more than 400,000 frames coded with 12 AUs. On both datasets, we
report significant improvement over a standard multi-label CNN and
feature-based state-of-the-art. Finally, we provide visualization of the
learned AU models, which, to our best knowledge, reveal how machines see facial
AUs for the first time.
| [
{
"version": "v1",
"created": "Tue, 2 Aug 2016 17:37:38 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Chu",
"Wen-Sheng",
""
],
[
"De la Torre",
"Fernando",
""
],
[
"Cohn",
"Jeffrey F.",
""
]
] | TITLE: Modeling Spatial and Temporal Cues for Multi-label Facial Action Unit
Detection
ABSTRACT: Facial action units (AUs) are essential to decode human facial expressions.
Researchers have focused on training AU detectors with a variety of features
and classifiers. However, several issues remain. These are spatial
representation, temporal modeling, and AU correlation. Unlike most studies that
tackle these issues separately, we propose a hybrid network architecture to
jointly address them. Specifically, spatial representations are extracted by a
Convolutional Neural Network (CNN), which, as analyzed in this paper, is able
to reduce person-specific biases caused by hand-crafted features (e.g., SIFT and
Gabor). To model temporal dependencies, Long Short-Term Memory (LSTMs) are
stacked on top of these representations, regardless of the lengths of input
videos. The outputs of CNNs and LSTMs are further aggregated into a fusion
network to produce per-frame predictions of 12 AUs. Our network naturally
addresses the three issues, and leads to superior performance compared to
existing methods that consider these issues independently. Extensive
experiments were conducted on two large spontaneous datasets, GFT and BP4D,
containing more than 400,000 frames coded with 12 AUs. On both datasets, we
report significant improvement over a standard multi-label CNN and
feature-based state-of-the-art. Finally, we provide visualization of the
learned AU models, which, to our best knowledge, reveal how machines see facial
AUs for the first time.
| no_new_dataset | 0.944791 |
1608.00921 | Saad Nadeem | Saad Nadeem, Rui Shi, Joseph Marino, Wei Zeng, Xianfeng Gu, and Arie
Kaufman | Registration of Volumetric Prostate Scans using Curvature Flow | Technical Report. Manuscript prepared: July 2014 (Keywords: Shape
registration, geometry-based techniques, medical visualization, mathematical
foundations for visualization) | null | null | null | cs.GR math.DG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Radiological imaging of the prostate is becoming more popular among
researchers and clinicians in searching for diseases, primarily cancer. Scans
might be acquired with different equipment or at different times for prognosis
monitoring, with patient movement between scans, resulting in multiple datasets
that need to be registered. For these cases, we introduce a method for
volumetric registration using curvature flow. Multiple prostate datasets are
mapped to canonical solid spheres, which are in turn aligned and registered
through the use of identified landmarks on or within the gland. Theoretical
proof and experimental results show that our method produces homeomorphisms
with feature constraints. We provide thorough validation of our method by
registering prostate scans of the same patient in different orientations, from
different days and using different modes of MRI. Our method also provides the
foundation for a general group-wise registration using a standard reference,
defined on the complex plane, for any input. In the present context, this can
be used for registering as many scans as needed for a single patient or
different patients on the basis of age, weight or even malignant and
non-malignant attributes to study the differences in general population. Though
we present this technique with a specific application to the prostate, it is
generally applicable for volumetric registration problems.
| [
{
"version": "v1",
"created": "Tue, 2 Aug 2016 18:15:34 GMT"
}
] | 2016-08-03T00:00:00 | [
[
"Nadeem",
"Saad",
""
],
[
"Shi",
"Rui",
""
],
[
"Marino",
"Joseph",
""
],
[
"Zeng",
"Wei",
""
],
[
"Gu",
"Xianfeng",
""
],
[
"Kaufman",
"Arie",
""
]
] | TITLE: Registration of Volumetric Prostate Scans using Curvature Flow
ABSTRACT: Radiological imaging of the prostate is becoming more popular among
researchers and clinicians in searching for diseases, primarily cancer. Scans
might be acquired with different equipment or at different times for prognosis
monitoring, with patient movement between scans, resulting in multiple datasets
that need to be registered. For these cases, we introduce a method for
volumetric registration using curvature flow. Multiple prostate datasets are
mapped to canonical solid spheres, which are in turn aligned and registered
through the use of identified landmarks on or within the gland. Theoretical
proof and experimental results show that our method produces homeomorphisms
with feature constraints. We provide thorough validation of our method by
registering prostate scans of the same patient in different orientations, from
different days and using different modes of MRI. Our method also provides the
foundation for a general group-wise registration using a standard reference,
defined on the complex plane, for any input. In the present context, this can
be used for registering as many scans as needed for a single patient or
different patients on the basis of age, weight or even malignant and
non-malignant attributes to study the differences in general population. Though
we present this technique with a specific application to the prostate, it is
generally applicable for volumetric registration problems.
| no_new_dataset | 0.951278 |
1503.06465 | Joao Carreira | Joao Carreira, Sara Vicente, Lourdes Agapito and Jorge Batista | Lifting Object Detection Datasets into 3D | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While data has certainly taken the center stage in computer vision in recent
years, it can still be difficult to obtain in certain scenarios. In particular,
acquiring ground truth 3D shapes of objects pictured in 2D images remains a
challenging feat and this has hampered progress in recognition-based object
reconstruction from a single image. Here we propose to bypass previous
solutions such as 3D scanning or manual design, that scale poorly, and instead
populate object category detection datasets semi-automatically with dense,
per-object 3D reconstructions, bootstrapped from:(i) class labels, (ii) ground
truth figure-ground segmentations and (iii) a small set of keypoint
annotations. Our proposed algorithm first estimates camera viewpoint using
rigid structure-from-motion and then reconstructs object shapes by optimizing
over visual hull proposals guided by loose within-class shape similarity
assumptions. The visual hull sampling process attempts to intersect an object's
projection cone with the cones of minimal subsets of other similar objects
among those pictured from certain vantage points. We show that our method is
able to produce convincing per-object 3D reconstructions and to accurately
estimate cameras viewpoints on one of the most challenging existing
object-category detection datasets, PASCAL VOC. We hope that our results will
re-stimulate interest on joint object recognition and 3D reconstruction from a
single image.
| [
{
"version": "v1",
"created": "Sun, 22 Mar 2015 19:26:57 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Jul 2016 09:49:19 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Carreira",
"Joao",
""
],
[
"Vicente",
"Sara",
""
],
[
"Agapito",
"Lourdes",
""
],
[
"Batista",
"Jorge",
""
]
] | TITLE: Lifting Object Detection Datasets into 3D
ABSTRACT: While data has certainly taken the center stage in computer vision in recent
years, it can still be difficult to obtain in certain scenarios. In particular,
acquiring ground truth 3D shapes of objects pictured in 2D images remains a
challenging feat and this has hampered progress in recognition-based object
reconstruction from a single image. Here we propose to bypass previous
solutions such as 3D scanning or manual design, that scale poorly, and instead
populate object category detection datasets semi-automatically with dense,
per-object 3D reconstructions, bootstrapped from:(i) class labels, (ii) ground
truth figure-ground segmentations and (iii) a small set of keypoint
annotations. Our proposed algorithm first estimates camera viewpoint using
rigid structure-from-motion and then reconstructs object shapes by optimizing
over visual hull proposals guided by loose within-class shape similarity
assumptions. The visual hull sampling process attempts to intersect an object's
projection cone with the cones of minimal subsets of other similar objects
among those pictured from certain vantage points. We show that our method is
able to produce convincing per-object 3D reconstructions and to accurately
estimate cameras viewpoints on one of the most challenging existing
object-category detection datasets, PASCAL VOC. We hope that our results will
re-stimulate interest on joint object recognition and 3D reconstruction from a
single image.
| no_new_dataset | 0.944689 |
1512.04065 | Yannis Kalantidis | Yannis Kalantidis, Clayton Mellina, Simon Osindero | Cross-dimensional Weighting for Aggregated Deep Convolutional Features | Accepted for publication at the 4th Workshop on Web-scale Vision and
Social Media (VSM), ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a simple and straightforward way of creating powerful image
representations via cross-dimensional weighting and aggregation of deep
convolutional neural network layer outputs. We first present a generalized
framework that encompasses a broad family of approaches and includes
cross-dimensional pooling and weighting steps. We then propose specific
non-parametric schemes for both spatial- and channel-wise weighting that boost
the effect of highly active spatial responses and at the same time regulate
burstiness effects. We experiment on different public datasets for image search
and show that our approach outperforms the current state-of-the-art for
approaches based on pre-trained networks. We also provide an easy-to-use, open
source implementation that reproduces our results.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2015 15:16:02 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jul 2016 02:14:18 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Kalantidis",
"Yannis",
""
],
[
"Mellina",
"Clayton",
""
],
[
"Osindero",
"Simon",
""
]
] | TITLE: Cross-dimensional Weighting for Aggregated Deep Convolutional Features
ABSTRACT: We propose a simple and straightforward way of creating powerful image
representations via cross-dimensional weighting and aggregation of deep
convolutional neural network layer outputs. We first present a generalized
framework that encompasses a broad family of approaches and includes
cross-dimensional pooling and weighting steps. We then propose specific
non-parametric schemes for both spatial- and channel-wise weighting that boost
the effect of highly active spatial responses and at the same time regulate
burstiness effects. We experiment on different public datasets for image search
and show that our approach outperforms the current state-of-the-art for
approaches based on pre-trained networks. We also provide an easy-to-use, open
source implementation that reproduces our results.
| no_new_dataset | 0.950273 |
1512.09272 | Vijay Kumar B G Dr | Vijay Kumar B G, Gustavo Carneiro, Ian Reid | Learning Local Image Descriptors with Deep Siamese and Triplet
Convolutional Networks by Minimising Global Loss Functions | IEEE Conference on Computer Vision and Pattern Recognition 2016 (CVPR
2016) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent innovations in training deep convolutional neural network (ConvNet)
models have motivated the design of new methods to automatically learn local
image descriptors. The latest deep ConvNets proposed for this task consist of a
siamese network that is trained by penalising misclassification of pairs of
local image patches. Current results from machine learning show that replacing
this siamese by a triplet network can improve the classification accuracy in
several problems, but this has yet to be demonstrated for local image
descriptor learning. Moreover, current siamese and triplet networks have been
trained with stochastic gradient descent that computes the gradient from
individual pairs or triplets of local image patches, which can make them prone
to overfitting. In this paper, we first propose the use of triplet networks for
the problem of local image descriptor learning. Furthermore, we also propose
the use of a global loss that minimises the overall classification error in the
training set, which can improve the generalisation capability of the model.
Using the UBC benchmark dataset for comparing local image descriptors, we show
that the triplet network produces a more accurate embedding than the siamese
network in terms of the UBC dataset errors. Moreover, we also demonstrate that
a combination of the triplet and global losses produces the best embedding in
the field, using this triplet network. Finally, we also show that the use of
the central-surround siamese network trained with the global loss produces the
best result of the field on the UBC dataset. Pre-trained models are available
online at https://github.com/vijaykbg/deep-patchmatch
| [
{
"version": "v1",
"created": "Thu, 31 Dec 2015 12:36:28 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2016 06:47:57 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"G",
"Vijay Kumar B",
""
],
[
"Carneiro",
"Gustavo",
""
],
[
"Reid",
"Ian",
""
]
] | TITLE: Learning Local Image Descriptors with Deep Siamese and Triplet
Convolutional Networks by Minimising Global Loss Functions
ABSTRACT: Recent innovations in training deep convolutional neural network (ConvNet)
models have motivated the design of new methods to automatically learn local
image descriptors. The latest deep ConvNets proposed for this task consist of a
siamese network that is trained by penalising misclassification of pairs of
local image patches. Current results from machine learning show that replacing
this siamese by a triplet network can improve the classification accuracy in
several problems, but this has yet to be demonstrated for local image
descriptor learning. Moreover, current siamese and triplet networks have been
trained with stochastic gradient descent that computes the gradient from
individual pairs or triplets of local image patches, which can make them prone
to overfitting. In this paper, we first propose the use of triplet networks for
the problem of local image descriptor learning. Furthermore, we also propose
the use of a global loss that minimises the overall classification error in the
training set, which can improve the generalisation capability of the model.
Using the UBC benchmark dataset for comparing local image descriptors, we show
that the triplet network produces a more accurate embedding than the siamese
network in terms of the UBC dataset errors. Moreover, we also demonstrate that
a combination of the triplet and global losses produces the best embedding in
the field, using this triplet network. Finally, we also show that the use of
the central-surround siamese network trained with the global loss produces the
best result of the field on the UBC dataset. Pre-trained models are available
online at https://github.com/vijaykbg/deep-patchmatch
| no_new_dataset | 0.948822 |
1603.02844 | Chunhua Shen | Bohan Zhuang, Guosheng Lin, Chunhua Shen, Ian Reid | Fast Training of Triplet-based Deep Binary Embedding Networks | Appearing in Proc. IEEE Conf. Computer Vision and Pattern Recognition
2016. Code is at
https://bitbucket.org/jingruixiaozhuang/fast-training-of-triplet-based-deep-binary-embedding-networks | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we aim to learn a mapping (or embedding) from images to a
compact binary space in which Hamming distances correspond to a ranking measure
for the image retrieval task.
We make use of a triplet loss because this has been shown to be most
effective for ranking problems.
However, training in previous works can be prohibitively expensive due to the
fact that optimization is directly performed on the triplet space, where the
number of possible triplets for training is cubic in the number of training
examples.
To address this issue, we propose to formulate high-order binary codes
learning as a multi-label classification problem by explicitly separating
learning into two interleaved stages.
To solve the first stage, we design a large-scale high-order binary codes
inference algorithm to reduce the high-order objective to a standard binary
quadratic problem such that graph cuts can be used to efficiently infer the
binary code which serve as the label of each training datum.
In the second stage we propose to map the original image to compact binary
codes via carefully designed deep convolutional neural networks (CNNs) and the
hashing function fitting can be solved by training binary CNN classifiers.
An incremental/interleaved optimization strategy is proffered to ensure that
these two steps are interactive with each other during training for better
accuracy.
We conduct experiments on several benchmark datasets, which demonstrate both
improved training time (by as much as two orders of magnitude) as well as
producing state-of-the-art hashing for various retrieval tasks.
| [
{
"version": "v1",
"created": "Wed, 9 Mar 2016 11:10:12 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2016 01:52:57 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Zhuang",
"Bohan",
""
],
[
"Lin",
"Guosheng",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Reid",
"Ian",
""
]
] | TITLE: Fast Training of Triplet-based Deep Binary Embedding Networks
ABSTRACT: In this paper, we aim to learn a mapping (or embedding) from images to a
compact binary space in which Hamming distances correspond to a ranking measure
for the image retrieval task.
We make use of a triplet loss because this has been shown to be most
effective for ranking problems.
However, training in previous works can be prohibitively expensive due to the
fact that optimization is directly performed on the triplet space, where the
number of possible triplets for training is cubic in the number of training
examples.
To address this issue, we propose to formulate high-order binary codes
learning as a multi-label classification problem by explicitly separating
learning into two interleaved stages.
To solve the first stage, we design a large-scale high-order binary codes
inference algorithm to reduce the high-order objective to a standard binary
quadratic problem such that graph cuts can be used to efficiently infer the
binary code which serve as the label of each training datum.
In the second stage we propose to map the original image to compact binary
codes via carefully designed deep convolutional neural networks (CNNs) and the
hashing function fitting can be solved by training binary CNN classifiers.
An incremental/interleaved optimization strategy is proffered to ensure that
these two steps are interactive with each other during training for better
accuracy.
We conduct experiments on several benchmark datasets, which demonstrate both
improved training time (by as much as two orders of magnitude) as well as
producing state-of-the-art hashing for various retrieval tasks.
| no_new_dataset | 0.947381 |
1604.06480 | Yannis Kalantidis | Yannis Kalantidis, Lyndon Kennedy, Huy Nguyen, Clayton Mellina, David
A. Shamma | LOH and behold: Web-scale visual search, recommendation and clustering
using Locally Optimized Hashing | Accepted for publication at the 4th Workshop on Web-scale Vision and
Social Media (VSM), ECCV 2016 | null | null | null | cs.CV cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel hashing-based matching scheme, called Locally Optimized
Hashing (LOH), based on a state-of-the-art quantization algorithm that can be
used for efficient, large-scale search, recommendation, clustering, and
deduplication. We show that matching with LOH only requires set intersections
and summations to compute and so is easily implemented in generic distributed
computing systems. We further show application of LOH to: a) large-scale search
tasks where performance is on par with other state-of-the-art hashing
approaches; b) large-scale recommendation where queries consisting of thousands
of images can be used to generate accurate recommendations from collections of
hundreds of millions of images; and c) efficient clustering with a graph-based
algorithm that can be scaled to massive collections in a distributed
environment or can be used for deduplication for small collections, like search
results, performing better than traditional hashing approaches while only
requiring a few milliseconds to run. In this paper we experiment on datasets of
up to 100 million images, but in practice our system can scale to larger
collections and can be used for other types of data that have a vector
representation in a Euclidean space.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2016 20:23:55 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jul 2016 02:34:52 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Kalantidis",
"Yannis",
""
],
[
"Kennedy",
"Lyndon",
""
],
[
"Nguyen",
"Huy",
""
],
[
"Mellina",
"Clayton",
""
],
[
"Shamma",
"David A.",
""
]
] | TITLE: LOH and behold: Web-scale visual search, recommendation and clustering
using Locally Optimized Hashing
ABSTRACT: We propose a novel hashing-based matching scheme, called Locally Optimized
Hashing (LOH), based on a state-of-the-art quantization algorithm that can be
used for efficient, large-scale search, recommendation, clustering, and
deduplication. We show that matching with LOH only requires set intersections
and summations to compute and so is easily implemented in generic distributed
computing systems. We further show application of LOH to: a) large-scale search
tasks where performance is on par with other state-of-the-art hashing
approaches; b) large-scale recommendation where queries consisting of thousands
of images can be used to generate accurate recommendations from collections of
hundreds of millions of images; and c) efficient clustering with a graph-based
algorithm that can be scaled to massive collections in a distributed
environment or can be used for deduplication for small collections, like search
results, performing better than traditional hashing approaches while only
requiring a few milliseconds to run. In this paper we experiment on datasets of
up to 100 million images, but in practice our system can scale to larger
collections and can be used for other types of data that have a vector
representation in a Euclidean space.
| no_new_dataset | 0.950641 |
1607.04564 | Yi Zhou | Yi Zhou, Li Liu, Ling Shao and Matt Mellor | DAVE: A Unified Framework for Fast Vehicle Detection and Annotation | This paper has been accepted by ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vehicle detection and annotation for streaming video data with complex scenes
is an interesting but challenging task for urban traffic surveillance. In this
paper, we present a fast framework of Detection and Annotation for Vehicles
(DAVE), which effectively combines vehicle detection and attributes annotation.
DAVE consists of two convolutional neural networks (CNNs): a fast vehicle
proposal network (FVPN) for vehicle-like objects extraction and an attributes
learning network (ALN) aiming to verify each proposal and infer each vehicle's
pose, color and type simultaneously. These two nets are jointly optimized so
that abundant latent knowledge learned from the ALN can be exploited to guide
FVPN training. Once the system is trained, it can achieve efficient vehicle
detection and annotation for real-world traffic surveillance data. We evaluate
DAVE on a new self-collected UTS dataset and the public PASCAL VOC2007 car and
LISA 2010 datasets, with consistent improvements over existing algorithms.
| [
{
"version": "v1",
"created": "Fri, 15 Jul 2016 15:58:16 GMT"
},
{
"version": "v2",
"created": "Mon, 18 Jul 2016 10:55:12 GMT"
},
{
"version": "v3",
"created": "Mon, 1 Aug 2016 08:52:55 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Zhou",
"Yi",
""
],
[
"Liu",
"Li",
""
],
[
"Shao",
"Ling",
""
],
[
"Mellor",
"Matt",
""
]
] | TITLE: DAVE: A Unified Framework for Fast Vehicle Detection and Annotation
ABSTRACT: Vehicle detection and annotation for streaming video data with complex scenes
is an interesting but challenging task for urban traffic surveillance. In this
paper, we present a fast framework of Detection and Annotation for Vehicles
(DAVE), which effectively combines vehicle detection and attributes annotation.
DAVE consists of two convolutional neural networks (CNNs): a fast vehicle
proposal network (FVPN) for vehicle-like objects extraction and an attributes
learning network (ALN) aiming to verify each proposal and infer each vehicle's
pose, color and type simultaneously. These two nets are jointly optimized so
that abundant latent knowledge learned from the ALN can be exploited to guide
FVPN training. Once the system is trained, it can achieve efficient vehicle
detection and annotation for real-world traffic surveillance data. We evaluate
DAVE on a new self-collected UTS dataset and the public PASCAL VOC2007 car and
LISA 2010 datasets, with consistent improvements over existing algorithms.
| new_dataset | 0.962036 |
1608.00027 | Rhiannon Rose | Rhiannon V. Rose, Daniel J. Lizotte | gLOP: the global and Local Penalty for Capturing Predictive
Heterogeneity | Presented at 2016 Machine Learning and Healthcare Conference (MLHC
2016), Los Angeles, CA | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When faced with a supervised learning problem, we hope to have rich enough
data to build a model that predicts future instances well. However, in
practice, problems can exhibit predictive heterogeneity: most instances might
be relatively easy to predict, while others might be predictive outliers for
which a model trained on the entire dataset does not perform well. Identifying
these can help focus future data collection. We present gLOP, the global and
Local Penalty, a framework for capturing predictive heterogeneity and
identifying predictive outliers. gLOP is based on penalized regression for
multitask learning, which improves learning by leveraging training signal
information from related tasks. We give two optimization algorithms for gLOP,
one space-efficient, and another giving the full regularization path. We also
characterize uniqueness in terms of the data and tuning parameters, and present
empirical results on synthetic data and on two health research problems.
| [
{
"version": "v1",
"created": "Fri, 29 Jul 2016 20:57:06 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Rose",
"Rhiannon V.",
""
],
[
"Lizotte",
"Daniel J.",
""
]
] | TITLE: gLOP: the global and Local Penalty for Capturing Predictive
Heterogeneity
ABSTRACT: When faced with a supervised learning problem, we hope to have rich enough
data to build a model that predicts future instances well. However, in
practice, problems can exhibit predictive heterogeneity: most instances might
be relatively easy to predict, while others might be predictive outliers for
which a model trained on the entire dataset does not perform well. Identifying
these can help focus future data collection. We present gLOP, the global and
Local Penalty, a framework for capturing predictive heterogeneity and
identifying predictive outliers. gLOP is based on penalized regression for
multitask learning, which improves learning by leveraging training signal
information from related tasks. We give two optimization algorithms for gLOP,
one space-efficient, and another giving the full regularization path. We also
characterize uniqueness in terms of the data and tuning parameters, and present
empirical results on synthetic data and on two health research problems.
| no_new_dataset | 0.949623 |
1608.00104 | Chenguang Wang | Chenguang Wang, Yangqiu Song, Dan Roth, Ming Zhang, Jiawei Han | World Knowledge as Indirect Supervision for Document Clustering | 33 pages, 53 figures, ACM TKDD 2016 | null | null | null | cs.LG cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the key obstacles in making learning protocols realistic in
applications is the need to supervise them, a costly process that often
requires hiring domain experts. We consider the framework to use the world
knowledge as indirect supervision. World knowledge is general-purpose
knowledge, which is not designed for any specific domain. Then the key
challenges are how to adapt the world knowledge to domains and how to represent
it for learning. In this paper, we provide an example of using world knowledge
for domain dependent document clustering. We provide three ways to specify the
world knowledge to domains by resolving the ambiguity of the entities and their
types, and represent the data with world knowledge as a heterogeneous
information network. Then we propose a clustering algorithm that can cluster
multiple types and incorporate the sub-type information as constraints. In the
experiments, we use two existing knowledge bases as our sources of world
knowledge. One is Freebase, which is collaboratively collected knowledge about
entities and their organizations. The other is YAGO2, a knowledge base
automatically extracted from Wikipedia and maps knowledge to the linguistic
knowledge base, WordNet. Experimental results on two text benchmark datasets
(20newsgroups and RCV1) show that incorporating world knowledge as indirect
supervision can significantly outperform the state-of-the-art clustering
algorithms as well as clustering algorithms enhanced with world knowledge
features.
| [
{
"version": "v1",
"created": "Sat, 30 Jul 2016 11:53:04 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Wang",
"Chenguang",
""
],
[
"Song",
"Yangqiu",
""
],
[
"Roth",
"Dan",
""
],
[
"Zhang",
"Ming",
""
],
[
"Han",
"Jiawei",
""
]
] | TITLE: World Knowledge as Indirect Supervision for Document Clustering
ABSTRACT: One of the key obstacles in making learning protocols realistic in
applications is the need to supervise them, a costly process that often
requires hiring domain experts. We consider the framework to use the world
knowledge as indirect supervision. World knowledge is general-purpose
knowledge, which is not designed for any specific domain. Then the key
challenges are how to adapt the world knowledge to domains and how to represent
it for learning. In this paper, we provide an example of using world knowledge
for domain dependent document clustering. We provide three ways to specify the
world knowledge to domains by resolving the ambiguity of the entities and their
types, and represent the data with world knowledge as a heterogeneous
information network. Then we propose a clustering algorithm that can cluster
multiple types and incorporate the sub-type information as constraints. In the
experiments, we use two existing knowledge bases as our sources of world
knowledge. One is Freebase, which is collaboratively collected knowledge about
entities and their organizations. The other is YAGO2, a knowledge base
automatically extracted from Wikipedia and maps knowledge to the linguistic
knowledge base, WordNet. Experimental results on two text benchmark datasets
(20newsgroups and RCV1) show that incorporating world knowledge as indirect
supervision can significantly outperform the state-of-the-art clustering
algorithms as well as clustering algorithms enhanced with world knowledge
features.
| no_new_dataset | 0.948251 |
1608.00148 | Bilal Ahmed | Bilal Ahmed and Thomas Thesen and Karen E. Blackmon and Ruben
Kuzniecky and Orrin Devinsky and Jennifer G. Dy and Carla E. Brodley | Multi-task Learning with Weak Class Labels: Leveraging iEEG to Detect
Cortical Lesions in Cryptogenic Epilepsy | Presented at 2016 Machine Learning and Healthcare Conference (MLHC
2016), Los Angeles, CA | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-task learning (MTL) is useful for domains in which data originates from
multiple sources that are individually under-sampled. MTL methods are able to
learn classification models that have higher performance as compared to
learning a single model by aggregating all the data together or learning a
separate model for each data source. The performance of these methods relies on
label accuracy. We address the problem of simultaneously learning multiple
classifiers in the MTL framework when the training data has imprecise labels.
We assume that there is an additional source of information that provides a
score for each instance which reflects the certainty about its label. Modeling
this score as being generated by an underlying ranking function, we augment the
MTL framework with an added layer of supervision. This results in new MTL
methods that are able to learn accurate classifiers while preserving the domain
structure provided through the rank information. We apply these methods to the
task of detecting abnormal cortical regions in the MRIs of patients suffering
from focal epilepsy whose MRIs were read as normal by expert neuroradiologists.
In addition to the noisy labels provided by the results of surgical resection,
we employ the results of an invasive intracranial-EEG exam as an additional
source of label information. Our proposed methods are able to successfully
detect abnormal regions for all patients in our dataset and achieve a higher
performance as compared to baseline methods.
| [
{
"version": "v1",
"created": "Sat, 30 Jul 2016 17:04:47 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Ahmed",
"Bilal",
""
],
[
"Thesen",
"Thomas",
""
],
[
"Blackmon",
"Karen E.",
""
],
[
"Kuzniecky",
"Ruben",
""
],
[
"Devinsky",
"Orrin",
""
],
[
"Dy",
"Jennifer G.",
""
],
[
"Brodley",
"Carla E.",
""
]
] | TITLE: Multi-task Learning with Weak Class Labels: Leveraging iEEG to Detect
Cortical Lesions in Cryptogenic Epilepsy
ABSTRACT: Multi-task learning (MTL) is useful for domains in which data originates from
multiple sources that are individually under-sampled. MTL methods are able to
learn classification models that have higher performance as compared to
learning a single model by aggregating all the data together or learning a
separate model for each data source. The performance of these methods relies on
label accuracy. We address the problem of simultaneously learning multiple
classifiers in the MTL framework when the training data has imprecise labels.
We assume that there is an additional source of information that provides a
score for each instance which reflects the certainty about its label. Modeling
this score as being generated by an underlying ranking function, we augment the
MTL framework with an added layer of supervision. This results in new MTL
methods that are able to learn accurate classifiers while preserving the domain
structure provided through the rank information. We apply these methods to the
task of detecting abnormal cortical regions in the MRIs of patients suffering
from focal epilepsy whose MRIs were read as normal by expert neuroradiologists.
In addition to the noisy labels provided by the results of surgical resection,
we employ the results of an invasive intracranial-EEG exam as an additional
source of label information. Our proposed methods are able to successfully
detect abnormal regions for all patients in our dataset and achieve a higher
performance as compared to baseline methods.
| no_new_dataset | 0.940024 |
1608.00182 | Peng Tang | Peng Tang, Xinggang Wang, Baoguang Shi, Xiang Bai, Wenyu Liu, Zhuowen
Tu | Deep FisherNet for Object Classification | submitted to NIPS 2016 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the great success of convolutional neural networks (CNN) for the
image classification task on datasets like Cifar and ImageNet, CNN's
representation power is still somewhat limited in dealing with object images
that have large variation in size and clutter, where Fisher Vector (FV) has
shown to be an effective encoding strategy. FV encodes an image by aggregating
local descriptors with a universal generative Gaussian Mixture Model (GMM). FV
however has limited learning capability and its parameters are mostly fixed
after constructing the codebook. To combine together the best of the two
worlds, we propose in this paper a neural network structure with FV layer being
part of an end-to-end trainable system that is differentiable; we name our
network FisherNet that is learnable using backpropagation. Our proposed
FisherNet combines convolutional neural network training and Fisher Vector
encoding in a single end-to-end structure. We observe a clear advantage of
FisherNet over plain CNN and standard FV in terms of both classification
accuracy and computational efficiency on the challenging PASCAL VOC object
classification task.
| [
{
"version": "v1",
"created": "Sun, 31 Jul 2016 03:56:30 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Tang",
"Peng",
""
],
[
"Wang",
"Xinggang",
""
],
[
"Shi",
"Baoguang",
""
],
[
"Bai",
"Xiang",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Tu",
"Zhuowen",
""
]
] | TITLE: Deep FisherNet for Object Classification
ABSTRACT: Despite the great success of convolutional neural networks (CNN) for the
image classification task on datasets like Cifar and ImageNet, CNN's
representation power is still somewhat limited in dealing with object images
that have large variation in size and clutter, where Fisher Vector (FV) has
shown to be an effective encoding strategy. FV encodes an image by aggregating
local descriptors with a universal generative Gaussian Mixture Model (GMM). FV
however has limited learning capability and its parameters are mostly fixed
after constructing the codebook. To combine together the best of the two
worlds, we propose in this paper a neural network structure with FV layer being
part of an end-to-end trainable system that is differentiable; we name our
network FisherNet that is learnable using backpropagation. Our proposed
FisherNet combines convolutional neural network training and Fisher Vector
encoding in a single end-to-end structure. We observe a clear advantage of
FisherNet over plain CNN and standard FV in terms of both classification
accuracy and computational efficiency on the challenging PASCAL VOC object
classification task.
| no_new_dataset | 0.948822 |
1608.00199 | Soumitra Samanta | Soumitra Samanta and Bhabatosh Chanda | A Data-driven Approach for Human Pose Tracking Based on Spatio-temporal
Pictorial Structure | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a data-driven approach for human pose tracking in
video data. We formulate the human pose tracking problem as a discrete
optimization problem based on spatio-temporal pictorial structure model and
solve this problem in a greedy framework very efficiently. We propose the model
to track the human pose by combining the human pose estimation from single
image and traditional object tracking in a video. Our pose tracking objective
function consists of the following terms: likeliness of appearance of a part
within a frame, temporal displacement of the part from previous frame to the
current frame, and the spatial dependency of a part with its parent in the
graph structure. Experimental evaluation on benchmark datasets (VideoPose2,
Poses in the Wild and Outdoor Pose) as well as on our newly built ICDPose
dataset shows the usefulness of our proposed method.
| [
{
"version": "v1",
"created": "Sun, 31 Jul 2016 08:50:47 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Samanta",
"Soumitra",
""
],
[
"Chanda",
"Bhabatosh",
""
]
] | TITLE: A Data-driven Approach for Human Pose Tracking Based on Spatio-temporal
Pictorial Structure
ABSTRACT: In this paper, we present a data-driven approach for human pose tracking in
video data. We formulate the human pose tracking problem as a discrete
optimization problem based on spatio-temporal pictorial structure model and
solve this problem in a greedy framework very efficiently. We propose the model
to track the human pose by combining the human pose estimation from single
image and traditional object tracking in a video. Our pose tracking objective
function consists of the following terms: likeliness of appearance of a part
within a frame, temporal displacement of the part from previous frame to the
current frame, and the spatial dependency of a part with its parent in the
graph structure. Experimental evaluation on benchmark datasets (VideoPose2,
Poses in the Wild and Outdoor Pose) as well as on our newly built ICDPose
dataset shows the usefulness of our proposed method.
| new_dataset | 0.954984 |
1608.00203 | Balint Antal | Balint Antal | Automatic 3D Point Set Reconstruction from Stereo Laparoscopic Images
using Deep Neural Networks | In Proceedings of the 6th International Joint Conference on Pervasive
and Embedded Computing and Communication Systems (PECCS 2016), pages 116-121
ISBN: 978-989-758-195-3 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, an automatic approach to predict 3D coordinates from stereo
laparoscopic images is presented. The approach maps a vector of pixel
intensities to 3D coordinates through training a six layer deep neural network.
The architectural aspects of the approach are presented in detail and the
method is evaluated on a publicly available dataset with promising results.
| [
{
"version": "v1",
"created": "Sun, 31 Jul 2016 09:28:28 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Antal",
"Balint",
""
]
] | TITLE: Automatic 3D Point Set Reconstruction from Stereo Laparoscopic Images
using Deep Neural Networks
ABSTRACT: In this paper, an automatic approach to predict 3D coordinates from stereo
laparoscopic images is presented. The approach maps a vector of pixel
intensities to 3D coordinates through training a six layer deep neural network.
The architectural aspects of the approach are presented in detail and the
method is evaluated on a publicly available dataset with promising results.
| no_new_dataset | 0.947527 |
1608.00207 | Zhiwen Shao | Zhiwen Shao, Shouhong Ding, Yiru Zhao, Qinchuan Zhang, Lizhuang Ma | Learning deep representation from coarse to fine for face alignment | This paper is accepted by 2016 IEEE International Conference on
Multimedia and Expo (ICME) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel face alignment method that trains deep
convolutional network from coarse to fine. It divides given landmarks into
principal subset and elaborate subset. We firstly keep a large weight for
principal subset to make our network primarily predict their locations while
slightly take elaborate subset into account. Next the weight of principal
subset is gradually decreased until two subsets have equivalent weights. This
process contributes to learn a good initial model and search the optimal model
smoothly to avoid missing fairly good intermediate models in subsequent
procedures. On the challenging COFW dataset [1], our method achieves 6.33% mean
error with a reduction of 21.37% compared with the best previous result [2].
| [
{
"version": "v1",
"created": "Sun, 31 Jul 2016 11:02:40 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Shao",
"Zhiwen",
""
],
[
"Ding",
"Shouhong",
""
],
[
"Zhao",
"Yiru",
""
],
[
"Zhang",
"Qinchuan",
""
],
[
"Ma",
"Lizhuang",
""
]
] | TITLE: Learning deep representation from coarse to fine for face alignment
ABSTRACT: In this paper, we propose a novel face alignment method that trains deep
convolutional network from coarse to fine. It divides given landmarks into
principal subset and elaborate subset. We firstly keep a large weight for
principal subset to make our network primarily predict their locations while
slightly taking the elaborate subset into account. Next the weight of the principal
subset is gradually decreased until two subsets have equivalent weights. This
process contributes to learn a good initial model and search the optimal model
smoothly to avoid missing fairly good intermediate models in subsequent
procedures. On the challenging COFW dataset [1], our method achieves 6.33% mean
error with a reduction of 21.37% compared with the best previous result [2].
| no_new_dataset | 0.953232 |
1608.00218 | Ilija Ilievski | Ilija Ilievski and Jiashi Feng | Hyperparameter Transfer Learning through Surrogate Alignment for
Efficient Deep Neural Network Training | null | null | null | null | cs.LG cs.CV cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, several optimization methods have been successfully applied to the
hyperparameter optimization of deep neural networks (DNNs). The methods work by
modeling the joint distribution of hyperparameter values and corresponding
error. Those methods become less practical when applied to modern DNNs whose
training may take a few days and thus one cannot collect sufficient
observations to accurately model the distribution. To address this challenging
issue, we propose a method that learns to transfer optimal hyperparameter
values for a small source dataset to hyperparameter values with comparable
performance on a dataset of interest. As opposed to existing transfer learning
methods, our proposed method does not use hand-designed features. Instead, it
uses surrogates to model the hyperparameter-error distributions of the two
datasets and trains a neural network to learn the transfer function. Extensive
experiments on three CV benchmark datasets clearly demonstrate the efficiency
of our method.
| [
{
"version": "v1",
"created": "Sun, 31 Jul 2016 14:09:17 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Ilievski",
"Ilija",
""
],
[
"Feng",
"Jiashi",
""
]
] | TITLE: Hyperparameter Transfer Learning through Surrogate Alignment for
Efficient Deep Neural Network Training
ABSTRACT: Recently, several optimization methods have been successfully applied to the
hyperparameter optimization of deep neural networks (DNNs). The methods work by
modeling the joint distribution of hyperparameter values and corresponding
error. Those methods become less practical when applied to modern DNNs whose
training may take a few days and thus one cannot collect sufficient
observations to accurately model the distribution. To address this challenging
issue, we propose a method that learns to transfer optimal hyperparameter
values for a small source dataset to hyperparameter values with comparable
performance on a dataset of interest. As opposed to existing transfer learning
methods, our proposed method does not use hand-designed features. Instead, it
uses surrogates to model the hyperparameter-error distributions of the two
datasets and trains a neural network to learn the transfer function. Extensive
experiments on three CV benchmark datasets clearly demonstrate the efficiency
of our method.
| no_new_dataset | 0.947914 |
1608.00310 | Abir Das | Rameswar Panda, Abir Das, Amit K. Roy-Chowdhury | Video Summarization in a Multi-View Camera Network | Accepted in ICPR 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While most existing video summarization approaches aim to extract an
informative summary of a single video, we propose a novel framework for
summarizing multi-view videos by exploiting both intra- and inter-view content
correlations in a joint embedding space. We learn the embedding by minimizing
an objective function that has two terms: one due to intra-view correlations
and another due to inter-view correlations across the multiple views. The
solution can be obtained directly by solving one Eigen-value problem that is
linear in the number of multi-view videos. We then employ a sparse
representative selection approach over the learned embedding space to summarize
the multi-view videos. Experimental results on several benchmark datasets
demonstrate that our proposed approach clearly outperforms the
state-of-the-art.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 2016 03:42:07 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Panda",
"Rameswar",
""
],
[
"Das",
"Abir",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
]
] | TITLE: Video Summarization in a Multi-View Camera Network
ABSTRACT: While most existing video summarization approaches aim to extract an
informative summary of a single video, we propose a novel framework for
summarizing multi-view videos by exploiting both intra- and inter-view content
correlations in a joint embedding space. We learn the embedding by minimizing
an objective function that has two terms: one due to intra-view correlations
and another due to inter-view correlations across the multiple views. The
solution can be obtained directly by solving one Eigen-value problem that is
linear in the number of multi-view videos. We then employ a sparse
representative selection approach over the learned embedding space to summarize
the multi-view videos. Experimental results on several benchmark datasets
demonstrate that our proposed approach clearly outperforms the
state-of-the-art.
| no_new_dataset | 0.944382 |
1608.00462 | Marco De Nadai | Marco De Nadai, Radu L. Vieriu, Gloria Zen, Stefan Dragicevic, Nikhil
Naik, Michele Caraviello, Cesar A. Hidalgo, Nicu Sebe, Bruno Lepri | Are Safer Looking Neighborhoods More Lively? A Multimodal Investigation
into Urban Life | To appear in the Proceedings of ACM Multimedia Conference (MM), 2016.
October 15 - 19, 2016, Amsterdam, Netherlands | null | null | null | cs.CY cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Policy makers, urban planners, architects, sociologists, and economists are
interested in creating urban areas that are both lively and safe. But are the
safety and liveliness of neighborhoods independent characteristics? Or are they
just two sides of the same coin? In a world where people avoid unsafe looking
places, neighborhoods that look unsafe will be less lively, and will fail to
harness the natural surveillance of human activity. But in a world where the
preference for safe looking neighborhoods is small, the connection between the
perception of safety and liveliness will be either weak or nonexistent. In this
paper we explore the connection between the levels of activity and the
perception of safety of neighborhoods in two major Italian cities by combining
mobile phone data (as a proxy for activity or liveliness) with scores of
perceived safety estimated using a Convolutional Neural Network trained on a
dataset of Google Street View images scored using a crowdsourced visual
perception survey. We find that: (i) safer looking neighborhoods are more
active than what is expected from their population density, employee density,
and distance to the city centre; and (ii) that the correlation between
appearance of safety and activity is positive, strong, and significant, for
females and people over 50, but negative for people under 30, suggesting that
the behavioral impact of perception depends on the demographic of the
population. Finally, we use occlusion techniques to identify the urban features
that contribute to the appearance of safety, finding that greenery and street
facing windows contribute to a positive appearance of safety (in agreement with
Oscar Newman's defensible space theory). These results suggest that urban
appearance modulates levels of human activity and, consequently, a
neighborhood's rate of natural surveillance.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 2016 15:06:40 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"De Nadai",
"Marco",
""
],
[
"Vieriu",
"Radu L.",
""
],
[
"Zen",
"Gloria",
""
],
[
"Dragicevic",
"Stefan",
""
],
[
"Naik",
"Nikhil",
""
],
[
"Caraviello",
"Michele",
""
],
[
"Hidalgo",
"Cesar A.",
""
],
[
"Sebe",
"Nicu",
""
],
[
"Lepri",
"Bruno",
""
]
] | TITLE: Are Safer Looking Neighborhoods More Lively? A Multimodal Investigation
into Urban Life
ABSTRACT: Policy makers, urban planners, architects, sociologists, and economists are
interested in creating urban areas that are both lively and safe. But are the
safety and liveliness of neighborhoods independent characteristics? Or are they
just two sides of the same coin? In a world where people avoid unsafe looking
places, neighborhoods that look unsafe will be less lively, and will fail to
harness the natural surveillance of human activity. But in a world where the
preference for safe looking neighborhoods is small, the connection between the
perception of safety and liveliness will be either weak or nonexistent. In this
paper we explore the connection between the levels of activity and the
perception of safety of neighborhoods in two major Italian cities by combining
mobile phone data (as a proxy for activity or liveliness) with scores of
perceived safety estimated using a Convolutional Neural Network trained on a
dataset of Google Street View images scored using a crowdsourced visual
perception survey. We find that: (i) safer looking neighborhoods are more
active than what is expected from their population density, employee density,
and distance to the city centre; and (ii) that the correlation between
appearance of safety and activity is positive, strong, and significant, for
females and people over 50, but negative for people under 30, suggesting that
the behavioral impact of perception depends on the demographic of the
population. Finally, we use occlusion techniques to identify the urban features
that contribute to the appearance of safety, finding that greenery and street
facing windows contribute to a positive appearance of safety (in agreement with
Oscar Newman's defensible space theory). These results suggest that urban
appearance modulates levels of human activity and, consequently, a
neighborhood's rate of natural surveillance.
| no_new_dataset | 0.943556 |
1608.00507 | Jianming Zhang | Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff | Top-down Neural Attention by Excitation Backprop | A shorter version of this paper is accepted at ECCV, 2016 (oral) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We aim to model the top-down attention of a Convolutional Neural Network
(CNN) classifier for generating task-specific attention maps. Inspired by a
top-down human visual attention model, we propose a new backpropagation scheme,
called Excitation Backprop, to pass along top-down signals downwards in the
network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we
introduce the concept of contrastive attention to make the top-down attention
maps more discriminative. In experiments, we demonstrate the accuracy and
generalizability of our method in weakly supervised localization tasks on the
MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is
further validated in the text-to-region association task. On the Flickr30k
Entities dataset, we achieve promising performance in phrase localization by
leveraging the top-down attention of a CNN model that has been trained on
weakly labeled web images.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 2016 17:49:57 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Zhang",
"Jianming",
""
],
[
"Lin",
"Zhe",
""
],
[
"Brandt",
"Jonathan",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Sclaroff",
"Stan",
""
]
] | TITLE: Top-down Neural Attention by Excitation Backprop
ABSTRACT: We aim to model the top-down attention of a Convolutional Neural Network
(CNN) classifier for generating task-specific attention maps. Inspired by a
top-down human visual attention model, we propose a new backpropagation scheme,
called Excitation Backprop, to pass along top-down signals downwards in the
network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we
introduce the concept of contrastive attention to make the top-down attention
maps more discriminative. In experiments, we demonstrate the accuracy and
generalizability of our method in weakly supervised localization tasks on the
MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is
further validated in the text-to-region association task. On the Flickr30k
Entities dataset, we achieve promising performance in phrase localization by
leveraging the top-down attention of a CNN model that has been trained on
weakly labeled web images.
| no_new_dataset | 0.951278 |
1608.00525 | Varun Nagaraja | Varun K. Nagaraja, Vlad I. Morariu, Larry S. Davis | Modeling Context Between Objects for Referring Expression Understanding | To appear at ECCV 16 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Referring expressions usually describe an object using properties of the
object and relationships of the object with other objects. We propose a
technique that integrates context between objects to understand referring
expressions. Our approach uses an LSTM to learn the probability of a referring
expression, with input features from a region and a context region. The context
regions are discovered using multiple-instance learning (MIL) since annotations
for context objects are generally not available for training. We utilize
max-margin based MIL objective functions for training the LSTM. Experiments on
the Google RefExp and UNC RefExp datasets show that modeling context between
objects provides better performance than modeling only object properties. We
also qualitatively show that our technique can ground a referring expression to
its referred region along with the supporting context region.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 2016 19:03:27 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Nagaraja",
"Varun K.",
""
],
[
"Morariu",
"Vlad I.",
""
],
[
"Davis",
"Larry S.",
""
]
] | TITLE: Modeling Context Between Objects for Referring Expression Understanding
ABSTRACT: Referring expressions usually describe an object using properties of the
object and relationships of the object with other objects. We propose a
technique that integrates context between objects to understand referring
expressions. Our approach uses an LSTM to learn the probability of a referring
expression, with input features from a region and a context region. The context
regions are discovered using multiple-instance learning (MIL) since annotations
for context objects are generally not available for training. We utilize
max-margin based MIL objective functions for training the LSTM. Experiments on
the Google RefExp and UNC RefExp datasets show that modeling context between
objects provides better performance than modeling only object properties. We
also qualitatively show that our technique can ground a referring expression to
its referred region along with the supporting context region.
| no_new_dataset | 0.950503 |
1608.00550 | Ping Li | Ping Li and Cun-Hui Zhang | Theory of the GMM Kernel | null | null | null | null | stat.ME cs.DS cs.IT cs.LG math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop some theoretical results for a robust similarity measure named
"generalized min-max" (GMM). This similarity has direct applications in machine
learning as a positive definite kernel and can be efficiently computed via
probabilistic hashing. Owing to the discrete nature, the hashed values can also
be used for efficient near neighbor search. We prove the theoretical limit of
GMM and the consistency result, assuming that the data follow an elliptical
distribution, which is a very general family of distributions and includes the
multivariate $t$-distribution as a special case. The consistency result holds
as long as the data have bounded first moment (an assumption which essentially
holds for datasets commonly encountered in practice). Furthermore, we establish
the asymptotic normality of GMM. Compared to the "cosine" similarity which is
routinely adopted in current practice in statistics and machine learning, the
consistency of GMM requires much weaker conditions. Interestingly, when the
data follow the $t$-distribution with $\nu$ degrees of freedom, GMM typically
provides a better measure of similarity than "cosine" roughly when $\nu<8$
(which is already very close to normal). These theoretical results will help
explain the recent success of GMM in learning tasks.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 2016 19:45:57 GMT"
}
] | 2016-08-02T00:00:00 | [
[
"Li",
"Ping",
""
],
[
"Zhang",
"Cun-Hui",
""
]
] | TITLE: Theory of the GMM Kernel
ABSTRACT: We develop some theoretical results for a robust similarity measure named
"generalized min-max" (GMM). This similarity has direct applications in machine
learning as a positive definite kernel and can be efficiently computed via
probabilistic hashing. Owing to the discrete nature, the hashed values can also
be used for efficient near neighbor search. We prove the theoretical limit of
GMM and the consistency result, assuming that the data follow an elliptical
distribution, which is a very general family of distributions and includes the
multivariate $t$-distribution as a special case. The consistency result holds
as long as the data have bounded first moment (an assumption which essentially
holds for datasets commonly encountered in practice). Furthermore, we establish
the asymptotic normality of GMM. Compared to the "cosine" similarity which is
routinely adopted in current practice in statistics and machine learning, the
consistency of GMM requires much weaker conditions. Interestingly, when the
data follow the $t$-distribution with $\nu$ degrees of freedom, GMM typically
provides a better measure of similarity than "cosine" roughly when $\nu<8$
(which is already very close to normal). These theoretical results will help
explain the recent success of GMM in learning tasks.
| no_new_dataset | 0.948822 |
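To make the similarity in the record above concrete, here is a small NumPy sketch of the generalized min-max (GMM) similarity: each coordinate is split into its positive and negative parts, and GMM is the ratio of summed element-wise minima to summed element-wise maxima of the expanded vectors. The function name and the heavy-tailed toy vectors are assumptions for the example; the probabilistic hashing used for large-scale computation in the paper is omitted.

import numpy as np

def gmm_similarity(x, y):
    # Expand each coordinate x_i into the non-negative pair (max(x_i, 0), max(-x_i, 0)),
    # then take sum(min) / sum(max) over the expanded vectors.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    x_exp = np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)])
    y_exp = np.concatenate([np.maximum(y, 0.0), np.maximum(-y, 0.0)])
    denom = np.maximum(x_exp, y_exp).sum()
    return np.minimum(x_exp, y_exp).sum() / denom if denom > 0.0 else 0.0

# Toy example with heavy-tailed (t-distributed) data, the setting studied in the paper.
rng = np.random.default_rng(0)
a = rng.standard_t(df=3, size=100)
b = a + 0.1 * rng.standard_t(df=3, size=100)
print(gmm_similarity(a, b))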
1408.0517 | Vijay Gadepally | Vijay Gadepally and Jeremy Kepner | Big Data Dimensional Analysis | From IEEE HPEC 2014 | null | 10.1109/HPEC.2014.7040944 | null | cs.DB cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to collect and analyze large amounts of data is a growing problem
within the scientific community. The growing gap between data and users calls
for innovative tools that address the challenges faced by big data volume,
velocity and variety. One of the main challenges associated with big data
variety is automatically understanding the underlying structures and patterns
of the data. Such an understanding is required as a pre-requisite to the
application of advanced analytics to the data. Further, big data sets often
contain anomalies and errors that are difficult to know a priori. Current
approaches to understanding data structure are drawn from the traditional
database ontology design. These approaches are effective, but often require too
much human involvement to be effective for the volume, velocity and variety of
data encountered by big data systems. Dimensional Data Analysis (DDA) is a
proposed technique that allows big data analysts to quickly understand the
overall structure of a big dataset and determine anomalies. DDA exploits
structures that exist in a wide class of data to quickly determine the nature
of the data and its statistical anomalies. DDA leverages existing schemas that are
employed in big data databases today. This paper presents DDA, applies it to a
number of data sets, and measures its performance. The overhead of DDA is low
and can be applied to existing big data systems without greatly impacting their
computing requirements.
| [
{
"version": "v1",
"created": "Sun, 3 Aug 2014 17:22:01 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Gadepally",
"Vijay",
""
],
[
"Kepner",
"Jeremy",
""
]
] | TITLE: Big Data Dimensional Analysis
ABSTRACT: The ability to collect and analyze large amounts of data is a growing problem
within the scientific community. The growing gap between data and users calls
for innovative tools that address the challenges faced by big data volume,
velocity and variety. One of the main challenges associated with big data
variety is automatically understanding the underlying structures and patterns
of the data. Such an understanding is required as a pre-requisite to the
application of advanced analytics to the data. Further, big data sets often
contain anomalies and errors that are difficult to know a priori. Current
approaches to understanding data structure are drawn from the traditional
database ontology design. These approaches are effective, but often require too
much human involvement to be effective for the volume, velocity and variety of
data encountered by big data systems. Dimensional Data Analysis (DDA) is a
proposed technique that allows big data analysts to quickly understand the
overall structure of a big dataset and determine anomalies. DDA exploits
structures that exist in a wide class of data to quickly determine the nature
of the data and its statistical anomalies. DDA leverages existing schemas that are
employed in big data databases today. This paper presents DDA, applies it to a
number of data sets, and measures its performance. The overhead of DDA is low
and can be applied to existing big data systems without greatly impacting their
computing requirements.
| no_new_dataset | 0.949435 |
1509.06585 | Vincent Labatut | Jean-Val\`ere Cossu (LIA), Vincent Labatut (LIA), Nicolas Dugu\'e (UO) | A Review of Features for the Discrimination of Twitter Users:
Application to the Prediction of Offline Influence | null | Social Network Analysis and Mining, Springer, 2016, 6 (1), pp.25 | 10.1007/s13278-016-0329-x | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many works related to Twitter aim at characterizing its users in some way:
role on the service (spammers, bots, organizations, etc.), nature of the user
(socio-professional category, age, etc.), topics of interest, and others.
However, for a given user classification problem, it is very difficult to
select a set of appropriate features, because the many features described in
the literature are very heterogeneous, with name overlaps and collisions, and
numerous very close variants. In this article, we review a wide range of such
features. In order to present a clear state-of-the-art description, we unify
their names, definitions and relationships, and we propose a new, neutral,
typology. We then illustrate the interest of our review by applying a selection
of these features to the offline influence detection problem. This task
consists in identifying users which are influential in real-life, based on
their Twitter account and related data. We show that most features deemed
efficient to predict online influence, such as the numbers of retweets and
followers, are not relevant to this problem. However, we propose several
content-based approaches to label Twitter users as Influencers or not. We also
rank them according to a predicted influence level. Our proposals are evaluated
over the CLEF RepLab 2014 dataset, and outmatch state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2015 13:12:34 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2016 13:19:33 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Jul 2016 08:02:34 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Cossu",
"Jean-Valère",
"",
"LIA"
],
[
"Labatut",
"Vincent",
"",
"LIA"
],
[
"Dugué",
"Nicolas",
"",
"UO"
]
] | TITLE: A Review of Features for the Discrimination of Twitter Users:
Application to the Prediction of Offline Influence
ABSTRACT: Many works related to Twitter aim at characterizing its users in some way:
role on the service (spammers, bots, organizations, etc.), nature of the user
(socio-professional category, age, etc.), topics of interest, and others.
However, for a given user classification problem, it is very difficult to
select a set of appropriate features, because the many features described in
the literature are very heterogeneous, with name overlaps and collisions, and
numerous very close variants. In this article, we review a wide range of such
features. In order to present a clear state-of-the-art description, we unify
their names, definitions and relationships, and we propose a new, neutral,
typology. We then illustrate the interest of our review by applying a selection
of these features to the offline influence detection problem. This task
consists in identifying users which are influential in real-life, based on
their Twitter account and related data. We show that most features deemed
efficient to predict online influence, such as the numbers of retweets and
followers, are not relevant to this problem. However, we propose several
content-based approaches to label Twitter users as Influencers or not. We also
rank them according to a predicted influence level. Our proposals are evaluated
over the CLEF RepLab 2014 dataset, and outmatch state-of-the-art methods.
| no_new_dataset | 0.941385 |
1601.05347 | M. Saquib Sarfraz | M. Saquib Sarfraz and Rainer Stiefelhagen | Deep Perceptual Mapping for Cross-Modal Face Recognition | This is the extended version (invited IJCV submission) with new
results of our previous submission (arXiv:1507.02879) | null | 10.1007/s11263-016-0933-2 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cross modal face matching between the thermal and visible spectrum is a much
desired capability for night-time surveillance and security applications. Due
to a very large modality gap, thermal-to-visible face recognition is one of the
most challenging face matching problems. In this paper, we present an approach
to bridge this modality gap by a significant margin. Our approach captures the
highly non-linear relationship between the two modalities by using a deep
neural network. Our model attempts to learn a non-linear mapping from visible
to thermal spectrum while preserving the identity information. We show
substantive performance improvement on three difficult thermal-visible face
datasets. The presented approach improves the state-of-the-art by more than
10\% on UND-X1 dataset and by more than 15-30\% on NVESD dataset in terms of
Rank-1 identification. Our method bridges the drop in performance due to the
modality gap by more than 40\%.
| [
{
"version": "v1",
"created": "Wed, 20 Jan 2016 17:49:11 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jul 2016 07:30:51 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Sarfraz",
"M. Saquib",
""
],
[
"Stiefelhagen",
"Rainer",
""
]
] | TITLE: Deep Perceptual Mapping for Cross-Modal Face Recognition
ABSTRACT: Cross modal face matching between the thermal and visible spectrum is a much
desired capability for night-time surveillance and security applications. Due
to a very large modality gap, thermal-to-visible face recognition is one of the
most challenging face matching problems. In this paper, we present an approach
to bridge this modality gap by a significant margin. Our approach captures the
highly non-linear relationship between the two modalities by using a deep
neural network. Our model attempts to learn a non-linear mapping from visible
to thermal spectrum while preserving the identity information. We show
substantive performance improvement on three difficult thermal-visible face
datasets. The presented approach improves the state-of-the-art by more than
10\% on UND-X1 dataset and by more than 15-30\% on NVESD dataset in terms of
Rank-1 identification. Our method bridges the drop in performance due to the
modality gap by more than 40\%.
| no_new_dataset | 0.955361 |
1601.07213 | Alexander Ororbia II | Alexander G. Ororbia II, C. Lee Giles, and Daniel Kifer | Unifying Adversarial Training Algorithms with Flexible Deep Data
Gradient Regularization | null | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many previous proposals for adversarial training of deep neural nets have
included directly modifying the gradient, training on a mix of original and
adversarial examples, using contractive penalties, and approximately optimizing
constrained adversarial objective functions. In this paper, we show these
proposals are actually all instances of optimizing a general, regularized
objective we call DataGrad. Our proposed DataGrad framework, which can be
viewed as a deep extension of the layerwise contractive autoencoder penalty,
cleanly simplifies prior work and easily allows extensions such as adversarial
training with multi-task cues. In our experiments, we find that the deep
gradient regularization of DataGrad (which also has L1 and L2 flavors of
regularization) outperforms alternative forms of regularization, including
classical L1, L2, and multi-task, both on the original dataset as well as on
adversarial sets. Furthermore, we find that combining multi-task optimization
with DataGrad adversarial training results in the most robust performance.
| [
{
"version": "v1",
"created": "Tue, 26 Jan 2016 22:41:13 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Feb 2016 20:40:13 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Jul 2016 15:36:19 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Ororbia",
"Alexander G.",
"II"
],
[
"Giles",
"C. Lee",
""
],
[
"Kifer",
"Daniel",
""
]
] | TITLE: Unifying Adversarial Training Algorithms with Flexible Deep Data
Gradient Regularization
ABSTRACT: Many previous proposals for adversarial training of deep neural nets have
included directly modifying the gradient, training on a mix of original and
adversarial examples, using contractive penalties, and approximately optimizing
constrained adversarial objective functions. In this paper, we show these
proposals are actually all instances of optimizing a general, regularized
objective we call DataGrad. Our proposed DataGrad framework, which can be
viewed as a deep extension of the layerwise contractive autoencoder penalty,
cleanly simplifies prior work and easily allows extensions such as adversarial
training with multi-task cues. In our experiments, we find that the deep
gradient regularization of DataGrad (which also has L1 and L2 flavors of
regularization) outperforms alternative forms of regularization, including
classical L1, L2, and multi-task, both on the original dataset as well as on
adversarial sets. Furthermore, we find that combining multi-task optimization
with DataGrad adversarial training results in the most robust performance.
| no_new_dataset | 0.948442 |
1603.04992 | Ravi Garg | Ravi Garg, Vijay Kumar BG, Gustavo Carneiro, Ian Reid | Unsupervised CNN for Single View Depth Estimation: Geometry to the
Rescue | Accepted for publication at ECCV 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A significant weakness of most current deep Convolutional Neural Networks is
the need to train them using vast amounts of manually labelled data. In this
work we propose an unsupervised framework to learn a deep convolutional neural
network for single view depth prediction, without requiring a pre-training
stage or annotated ground truth depths. We achieve this by training the network
in a manner analogous to an autoencoder. At training time we consider a pair of
images, source and target, with small, known camera motion between the two such
as a stereo pair. We train the convolutional encoder for the task of predicting
the depth map for the source image. To do so, we explicitly generate an inverse
warp of the target image using the predicted depth and known inter-view
displacement, to reconstruct the source image; the photometric error in the
reconstruction is the reconstruction loss for the encoder. The acquisition of
this training data is considerably simpler than for equivalent systems,
requiring no manual annotation, nor calibration of depth sensor to camera. We
show that our network trained on less than half of the KITTI dataset (without
any further augmentation) gives com- parable performance to that of the state
of art supervised methods for single view depth estimation.
| [
{
"version": "v1",
"created": "Wed, 16 Mar 2016 08:57:15 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2016 03:20:46 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Garg",
"Ravi",
""
],
[
"BG",
"Vijay Kumar",
""
],
[
"Carneiro",
"Gustavo",
""
],
[
"Reid",
"Ian",
""
]
] | TITLE: Unsupervised CNN for Single View Depth Estimation: Geometry to the
Rescue
ABSTRACT: A significant weakness of most current deep Convolutional Neural Networks is
the need to train them using vast amounts of manually labelled data. In this
work we propose an unsupervised framework to learn a deep convolutional neural
network for single view depth prediction, without requiring a pre-training
stage or annotated ground truth depths. We achieve this by training the network
in a manner analogous to an autoencoder. At training time we consider a pair of
images, source and target, with small, known camera motion between the two such
as a stereo pair. We train the convolutional encoder for the task of predicting
the depth map for the source image. To do so, we explicitly generate an inverse
warp of the target image using the predicted depth and known inter-view
displacement, to reconstruct the source image; the photometric error in the
reconstruction is the reconstruction loss for the encoder. The acquisition of
this training data is considerably simpler than for equivalent systems,
requiring no manual annotation, nor calibration of depth sensor to camera. We
show that our network trained on less than half of the KITTI dataset (without
any further augmentation) gives comparable performance to that of the state
of art supervised methods for single view depth estimation.
| no_new_dataset | 0.947332 |
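A rough sketch of the photometric reconstruction loss described in the record above: the source view is reconstructed by warping the other view of a rectified stereo pair with the predicted per-pixel disparity, and the reconstruction error is the training signal for the depth network. The nearest-neighbour sampling, the sign convention and the function name are assumptions made for brevity; the paper uses a differentiable warp so the loss can be backpropagated through the encoder.

import numpy as np

def photometric_loss(source, target, disparity):
    # source, target: (H, W) grayscale images of a rectified stereo pair
    # disparity:      (H, W) predicted horizontal shift in pixels
    h, w = source.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    sample_x = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    reconstruction = target[ys, sample_x]  # warp the target back onto the source
    return np.mean((source - reconstruction) ** 2)

# Toy check: a one-pixel shift is explained by a constant disparity of 1
# (a small residual error remains at the image border).
rng = np.random.default_rng(0)
left = rng.random((4, 6))
right = np.roll(left, -1, axis=1)
print(photometric_loss(left, right, disparity=np.ones((4, 6))))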
1603.09114 | Eduard Trulls | Kwang Moo Yi and Eduard Trulls and Vincent Lepetit and Pascal Fua | LIFT: Learned Invariant Feature Transform | Accepted to ECCV 2016 (spotlight) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel Deep Network architecture that implements the full
feature point handling pipeline, that is, detection, orientation estimation,
and feature description. While previous works have successfully tackled each
one of these problems individually, we show how to learn to do all three in a
unified manner while preserving end-to-end differentiability. We then
demonstrate that our Deep pipeline outperforms state-of-the-art methods on a
number of benchmark datasets, without the need of retraining.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 10:33:18 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2016 15:29:39 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Yi",
"Kwang Moo",
""
],
[
"Trulls",
"Eduard",
""
],
[
"Lepetit",
"Vincent",
""
],
[
"Fua",
"Pascal",
""
]
] | TITLE: LIFT: Learned Invariant Feature Transform
ABSTRACT: We introduce a novel Deep Network architecture that implements the full
feature point handling pipeline, that is, detection, orientation estimation,
and feature description. While previous works have successfully tackled each
one of these problems individually, we show how to learn to do all three in a
unified manner while preserving end-to-end differentiability. We then
demonstrate that our Deep pipeline outperforms state-of-the-art methods on a
number of benchmark datasets, without the need of retraining.
| no_new_dataset | 0.945147 |
1605.02964 | Abhilash Srikantha | Abhilash Srikantha, Juergen Gall | Weakly Supervised Learning of Affordances | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Localizing functional regions of objects or affordances is an important
aspect of scene understanding. In this work, we cast the problem of affordance
segmentation as that of semantic image segmentation. In order to explore
various levels of supervision, we introduce a pixel-annotated affordance
dataset of 3090 images containing 9916 object instances with rich contextual
information in terms of human-object interactions. We use a deep convolutional
neural network within an expectation maximization framework to take advantage
of weakly labeled data like image level annotations or keypoint annotations. We
show that a further reduction in supervision is possible with a minimal loss in
performance when human pose is used as context.
| [
{
"version": "v1",
"created": "Tue, 10 May 2016 12:04:07 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2016 13:46:59 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Srikantha",
"Abhilash",
""
],
[
"Gall",
"Juergen",
""
]
] | TITLE: Weakly Supervised Learning of Affordances
ABSTRACT: Localizing functional regions of objects or affordances is an important
aspect of scene understanding. In this work, we cast the problem of affordance
segmentation as that of semantic image segmentation. In order to explore
various levels of supervision, we introduce a pixel-annotated affordance
dataset of 3090 images containing 9916 object instances with rich contextual
information in terms of human-object interactions. We use a deep convolutional
neural network within an expectation maximization framework to take advantage
of weakly labeled data like image level annotations or keypoint annotations. We
show that a further reduction in supervision is possible with a minimal loss in
performance when human pose is used as context.
| new_dataset | 0.953275 |
1605.05081 | Fabrizio Cei | The MEG Collaboration | Search for the Lepton Flavour Violating Decay $\mu^{+} \to e^+ \gamma$
with the Full Dataset of the MEG Experiment | 30 pages, 31 figures | null | null | null | hep-ex physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The final results of the search for the lepton flavour violating decay
$\mu^{+} \rightarrow {\rm e^{+}} \gamma$ based on the full dataset collected by
the MEG experiment at the Paul Scherrer Institut in the period 2009--2013 and
totalling $7.5\times 10^{14}$ stopped muons on target are presented. No
significant excess of events is observed in the dataset with respect to the
expected background and a new upper limit on the branching ratio of this decay
of $BR( \mu^{+} \rightarrow {\rm e^{+}} \gamma ) < 4.2 \times 10^{-13}$ (90\%\
confidence level) is established, which represents the most stringent limit on
the existence of this decay to date.
| [
{
"version": "v1",
"created": "Tue, 17 May 2016 09:52:20 GMT"
},
{
"version": "v2",
"created": "Wed, 18 May 2016 07:08:36 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Jul 2016 09:39:29 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"The MEG Collaboration",
"",
""
]
] | TITLE: Search for the Lepton Flavour Violating Decay $\mu^{+} \to e^+ \gamma$
with the Full Dataset of the MEG Experiment
ABSTRACT: The final results of the search for the lepton flavour violating decay
$\mu^{+} \rightarrow {\rm e^{+}} \gamma$ based on the full dataset collected by
the MEG experiment at the Paul Scherrer Institut in the period 2009--2013 and
totalling $7.5\times 10^{14}$ stopped muons on target are presented. No
significant excess of events is observed in the dataset with respect to the
expected background and a new upper limit on the branching ratio of this decay
of $BR( \mu^{+} \rightarrow {\rm e^{+}} \gamma ) < 4.2 \times 10^{-13}$ (90\%\
confidence level) is established, which represents the most stringent limit on
the existence of this decay to date.
| no_new_dataset | 0.941493 |
1605.08110 | Ke Zhang | Ke Zhang, Wei-Lun Chao, Fei Sha, Kristen Grauman | Video Summarization with Long Short-term Memory | To appear in ECCV 2016 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel supervised learning technique for summarizing videos by
automatically selecting keyframes or key subshots. Casting the problem as a
structured prediction problem on sequential data, our main idea is to use Long
Short-Term Memory (LSTM), a special type of recurrent neural networks to model
the variable-range dependencies entailed in the task of video summarization.
Our learning models attain the state-of-the-art results on two benchmark video
datasets. Detailed analysis justifies the design of the models. In particular,
we show that it is crucial to take into consideration the sequential structures
in videos and model them. Besides advances in modeling techniques, we introduce
techniques to address the need of a large number of annotated data for training
complex learning models. There, our main idea is to exploit the existence of
auxiliary annotated video datasets, albeit heterogeneous in visual styles and
contents. Specifically, we show domain adaptation techniques can improve
summarization by reducing the discrepancies in statistical properties across
those datasets.
| [
{
"version": "v1",
"created": "Thu, 26 May 2016 00:46:35 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2016 07:05:34 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Zhang",
"Ke",
""
],
[
"Chao",
"Wei-Lun",
""
],
[
"Sha",
"Fei",
""
],
[
"Grauman",
"Kristen",
""
]
] | TITLE: Video Summarization with Long Short-term Memory
ABSTRACT: We propose a novel supervised learning technique for summarizing videos by
automatically selecting keyframes or key subshots. Casting the problem as a
structured prediction problem on sequential data, our main idea is to use Long
Short-Term Memory (LSTM), a special type of recurrent neural networks to model
the variable-range dependencies entailed in the task of video summarization.
Our learning models attain the state-of-the-art results on two benchmark video
datasets. Detailed analysis justifies the design of the models. In particular,
we show that it is crucial to take into consideration the sequential structures
in videos and model them. Besides advances in modeling techniques, we introduce
techniques to address the need of a large number of annotated data for training
complex learning models. There, our main idea is to exploit the existence of
auxiliary annotated video datasets, albeit heterogeneous in visual styles and
contents. Specifically, we show domain adaptation techniques can improve
summarization by reducing the discrepancies in statistical properties across
those datasets.
| no_new_dataset | 0.94256 |
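A tiny sketch of the selection step implied by the record above: once a sequence model such as the paper's LSTM has produced an importance score per subshot, a summary is assembled under a length budget. The greedy value-density heuristic below is a simplification of the knapsack-style selection commonly used to turn scores into keyshot summaries; the function name and the toy numbers are assumptions.

def select_keyshots(scores, shot_lengths, budget):
    # scores:       per-shot importance predicted by the sequence model
    # shot_lengths: length of each shot in frames
    # budget:       maximum number of frames allowed in the summary
    order = sorted(range(len(scores)),
                   key=lambda i: scores[i] / max(shot_lengths[i], 1),
                   reverse=True)
    chosen, used = [], 0
    for i in order:
        if used + shot_lengths[i] <= budget:
            chosen.append(i)
            used += shot_lengths[i]
    return sorted(chosen)

print(select_keyshots([0.9, 0.2, 0.7, 0.4], [30, 60, 45, 30], budget=80))  # -> [0, 2]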
1607.01650 | Wenkun Zhang | Lei Li, Ailong Cai, Linyuan Wang, Bin Yan, Hanming Zhang, Zhizhong
Zheng, Wenkun Zhang, Wanli Lu, Guoen Hu | Efficient Image Reconstruction and Practical Decomposition for
Dual-energy Computed Tomography | 19 pages, 5 figures, 2 tables | null | null | null | physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dual-energy computed tomography (DECT) has shown great potential and
promising applications in advanced imaging fields for its capabilities of
material decomposition. However, image reconstruction and decomposition from
sparse-view datasets suffer severely from multiple factors, such as
insufficiencies of data, appearances of noise, and inconsistencies of
observations. Under sparse views, conventional filtered back-projection type
reconstruction methods fail to provide CT images with satisfying quality.
Moreover, direct image decomposition is unstable and meets with noise boost even
with full-view datasets. This paper proposes an iterative image reconstruction
algorithm and a practical image domain decomposition method for DECT. On one
hand, the reconstruction algorithm is formulated as an optimization problem
containing a total variation regularization term and a data fidelity term.
The alternating direction method is utilized to design the corresponding
algorithm which shows faster convergence speed compared with the existing ones.
On the other hand, the image domain decomposition applies the penalized least
square (PLS) estimation on decomposing the material mappings. The PLS includes
a linear combination term and a regularization term which enforces
smoothness on the estimated images. The authors implement and evaluate the
proposed joint method on real DECT projections and compare the method with
typical and state-of-the-art reconstruction and decomposition methods. The
experiments on a dataset of an anthropomorphic head phantom show that our methods
have advantages in noise suppression and edge preservation, without blurring the
fine structures in the sinus area in the phantom. Compared to the existing
approaches, our method achieves a superior performance on DECT imaging with
respect to reconstruction accuracy and decomposition quality.
| [
{
"version": "v1",
"created": "Wed, 6 Jul 2016 14:54:17 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2016 00:54:08 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Li",
"Lei",
""
],
[
"Cai",
"Ailong",
""
],
[
"Wang",
"Linyuan",
""
],
[
"Yan",
"Bin",
""
],
[
"Zhang",
"Hanming",
""
],
[
"Zheng",
"Zhizhong",
""
],
[
"Zhang",
"Wenkun",
""
],
[
"Lu",
"Wanli",
""
],
[
"Hu",
"Guoen",
""
]
] | TITLE: Efficient Image Reconstruction and Practical Decomposition for
Dual-energy Computed Tomography
ABSTRACT: Dual-energy computed tomography (DECT) has shown great potential and
promising applications in advanced imaging fields for its capabilities of
material decomposition. However, image reconstruction and decomposition from
sparse-view datasets suffer severely from multiple factors, such as
insufficiencies of data, appearances of noise, and inconsistencies of
observations. Under sparse views, conventional filtered back-projection type
reconstruction methods fail to provide CT images with satisfying quality.
Moreover, direct image decomposition is unstable and meets with noise boost even
with full-view datasets. This paper proposes an iterative image reconstruction
algorithm and a practical image domain decomposition method for DECT. On one
hand, the reconstruction algorithm is formulated as an optimization problem
containing a total variation regularization term and a data fidelity term.
The alternating direction method is utilized to design the corresponding
algorithm which shows faster convergence speed compared with the existing ones.
On the other hand, the image domain decomposition applies the penalized least
square (PLS) estimation on decomposing the material mappings. The PLS includes
a linear combination term and a regularization term which enforces
smoothness on the estimated images. The authors implement and evaluate the
proposed joint method on real DECT projections and compare the method with
typical and state-of-the-art reconstruction and decomposition methods. The
experiments on a dataset of an anthropomorphic head phantom show that our methods
have advantages in noise suppression and edge preservation, without blurring the
fine structures in the sinus area in the phantom. Compared to the existing
approaches, our method achieves a superior performance on DECT imaging with
respect to reconstruction accuracy and decomposition quality.
| no_new_dataset | 0.944228 |
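For intuition about the optimization problem named in the record above (a data fidelity term plus a total variation regularization term), here is a minimal 1-D NumPy sketch that minimises ||Ax - b||^2 + lam * TV(x) by subgradient descent. The paper works on 2-D CT images and solves the problem with an alternating direction method; the operator A, the step size, the iteration count and all names here are illustrative assumptions.

import numpy as np

def tv_reconstruction(A, b, lam=0.1, step=1e-3, iters=500):
    # Minimise ||A x - b||^2 + lam * sum_i |x_{i+1} - x_i| for a 1-D signal x.
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        grad_fid = 2.0 * A.T @ (A @ x - b)   # data fidelity (sub)gradient
        diff = np.diff(x)
        grad_tv = np.zeros(n)                # total variation (sub)gradient
        grad_tv[:-1] -= np.sign(diff)
        grad_tv[1:] += np.sign(diff)
        x -= step * (grad_fid + lam * grad_tv)
    return x

# Toy example: roughly recovers a piecewise-constant signal from noisy projections.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
x_true = np.concatenate([np.zeros(10), np.ones(10)])
b = A @ x_true + 0.01 * rng.normal(size=40)
print(np.round(tv_reconstruction(A, b, lam=0.5), 2))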
1607.07403 | Sebastian Schelter | Sebastian Schelter, J\'er\^ome Kunegis | On the Ubiquity of Web Tracking: Insights from a Billion-Page Web Crawl | null | null | null | null | cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We perform a large-scale analysis of third-party trackers on the World Wide
Web from more than 3.5 billion web pages of the CommonCrawl 2012 corpus. We
extract a dataset containing more than 140 million third-party embeddings in
over 41 million domains. To the best of our knowledge, this constitutes the
largest web tracking dataset collected so far, and exceeds related studies by
more than an order of magnitude in the number of domains and web pages
analyzed. We perform a large-scale study of online tracking, on three levels:
(1) On a global level, we give a precise figure for the extent of tracking,
give insights into the structure of the `online tracking sphere' and analyse
which trackers are used by how many websites. (2) On a country-specific level,
we analyse which trackers are used by websites in different countries, and
identify the countries in which websites choose significantly different
trackers than in the rest of the world. (3) We answer the question whether the
content of websites influences the choice of trackers they use, leveraging more
than 90 thousand categorized domains. In particular, we analyse whether highly
privacy-critical websites make different choices of trackers than other
websites. Based on the performed analyses, we confirm that trackers are
widespread (as expected), and that a small number of trackers dominates the web
(Google, Facebook and Twitter). In particular, the three tracking domains with
the highest PageRank are all owned by Google. The only exceptions to this
pattern are a few countries such as China and Russia. Our results suggest that
this dominance is strongly associated with country-specific political factors
such as freedom of the press. We also confirm that websites with highly
privacy-critical content are less likely to contain trackers (60% vs 90% for
other websites), even though the majority of them still do contain trackers.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2016 18:49:20 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2016 06:29:26 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Schelter",
"Sebastian",
""
],
[
"Kunegis",
"Jérôme",
""
]
] | TITLE: On the Ubiquity of Web Tracking: Insights from a Billion-Page Web Crawl
ABSTRACT: We perform a large-scale analysis of third-party trackers on the World Wide
Web from more than 3.5 billion web pages of the CommonCrawl 2012 corpus. We
extract a dataset containing more than 140 million third-party embeddings in
over 41 million domains. To the best of our knowledge, this constitutes the
largest web tracking dataset collected so far, and exceeds related studies by
more than an order of magnitude in the number of domains and web pages
analyzed. We perform a large-scale study of online tracking, on three levels:
(1) On a global level, we give a precise figure for the extent of tracking,
give insights into the structure of the `online tracking sphere' and analyse
which trackers are used by how many websites. (2) On a country-specific level,
we analyse which trackers are used by websites in different countries, and
identify the countries in which websites choose significantly different
trackers than in the rest of the world. (3) We answer the question whether the
content of websites influences the choice of trackers they use, leveraging more
than 90 thousand categorized domains. In particular, we analyse whether highly
privacy-critical websites make different choices of trackers than other
websites. Based on the performed analyses, we confirm that trackers are
widespread (as expected), and that a small number of trackers dominates the web
(Google, Facebook and Twitter). In particular, the three tracking domains with
the highest PageRank are all owned by Google. The only exceptions to this
pattern are a few countries such as China and Russia. Our results suggest that
this dominance is strongly associated with country-specific political factors
such as freedom of the press. We also confirm that websites with highly
privacy-critical content are less likely to contain trackers (60% vs 90% for
other websites), even though the majority of them still do contain trackers.
| no_new_dataset | 0.894052 |
1607.08414 | Michael Wray | Michael Wray, Davide Moltisanti, Walterio Mayol-Cuevas and Dima Damen | SEMBED: Semantic Embedding of Egocentric Action Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present SEMBED, an approach for embedding an egocentric object interaction
video in a semantic-visual graph to estimate the probability distribution over
its potential semantic labels. When object interactions are annotated using
unbounded choice of verbs, we embrace the wealth and ambiguity of these labels
by capturing the semantic relationships as well as the visual similarities over
motion and appearance features. We show how SEMBED can interpret a challenging
dataset of 1225 freely annotated egocentric videos, outperforming SVM
classification by more than 5%.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2016 11:55:38 GMT"
},
{
"version": "v2",
"created": "Fri, 29 Jul 2016 09:40:37 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Wray",
"Michael",
""
],
[
"Moltisanti",
"Davide",
""
],
[
"Mayol-Cuevas",
"Walterio",
""
],
[
"Damen",
"Dima",
""
]
] | TITLE: SEMBED: Semantic Embedding of Egocentric Action Videos
ABSTRACT: We present SEMBED, an approach for embedding an egocentric object interaction
video in a semantic-visual graph to estimate the probability distribution over
its potential semantic labels. When object interactions are annotated using
unbounded choice of verbs, we embrace the wealth and ambiguity of these labels
by capturing the semantic relationships as well as the visual similarities over
motion and appearance features. We show how SEMBED can interpret a challenging
dataset of 1225 freely annotated egocentric videos, outperforming SVM
classification by more than 5%.
| no_new_dataset | 0.937555 |
1607.08764 | Ravi Kiran Sarvadevabhatla | Ravi Kiran Sarvadevabhatla, Shiv Surya, Srinivas S S Kruthiventi,
Venkatesh Babu R | SwiDeN : Convolutional Neural Networks For Depiction Invariant Object
Recognition | Accepted at ACMMM 2016. The first two authors contributed equally.
Code and models at https://github.com/val-iisc/swiden | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current state of the art object recognition architectures achieve impressive
performance but are typically specialized for a single depictive style (e.g.
photos only, sketches only). In this paper, we present SwiDeN : our
Convolutional Neural Network (CNN) architecture which recognizes objects
regardless of how they are visually depicted (line drawing, realistic shaded
drawing, photograph etc.). In SwiDeN, we utilize a novel `deep' depictive
style-based switching mechanism which appropriately addresses the
depiction-specific and depiction-invariant aspects of the problem. We compare
SwiDeN with alternative architectures and prior work on a 50-category Photo-Art
dataset containing objects depicted in multiple styles. Experimental results
show that SwiDeN outperforms other approaches for the depiction-invariant
object recognition problem.
| [
{
"version": "v1",
"created": "Fri, 29 Jul 2016 11:00:08 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Sarvadevabhatla",
"Ravi Kiran",
""
],
[
"Surya",
"Shiv",
""
],
[
"Kruthiventi",
"Srinivas S S",
""
],
[
"R",
"Venkatesh Babu",
""
]
] | TITLE: SwiDeN : Convolutional Neural Networks For Depiction Invariant Object
Recognition
ABSTRACT: Current state of the art object recognition architectures achieve impressive
performance but are typically specialized for a single depictive style (e.g.
photos only, sketches only). In this paper, we present SwiDeN : our
Convolutional Neural Network (CNN) architecture which recognizes objects
regardless of how they are visually depicted (line drawing, realistic shaded
drawing, photograph etc.). In SwiDeN, we utilize a novel `deep' depictive
style-based switching mechanism which appropriately addresses the
depiction-specific and depiction-invariant aspects of the problem. We compare
SwiDeN with alternative architectures and prior work on a 50-category Photo-Art
dataset containing objects depicted in multiple styles. Experimental results
show that SwiDeN outperforms other approaches for the depiction-invariant
object recognition problem.
| no_new_dataset | 0.939913 |
1607.08807 | Ingmar Weber | Palakorn Achananuparp and Ingmar Weber | Extracting Food Substitutes From Food Diary via Distributional
Similarity | To appear at HealthRecSys'16 | null | null | null | cs.CY cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore the problem of identifying substitute relationship
between food pairs from real-world food consumption data as the first step
towards the healthier food recommendation. Our method is inspired by the
distributional hypothesis in linguistics. Specifically, we assume that foods
that are consumed in similar contexts are more likely to be similar dietarily.
For example, a turkey sandwich can be considered a suitable substitute for a
chicken sandwich if both tend to be consumed with french fries and salad. To
evaluate our method, we constructed a real-world food consumption dataset from
MyFitnessPal's public food diary entries and obtained ground-truth human
judgements of food substitutes from a crowdsourcing service. The experiment
results suggest the effectiveness of the method in identifying suitable
substitutes.
| [
{
"version": "v1",
"created": "Fri, 29 Jul 2016 13:46:17 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Achananuparp",
"Palakorn",
""
],
[
"Weber",
"Ingmar",
""
]
] | TITLE: Extracting Food Substitutes From Food Diary via Distributional
Similarity
ABSTRACT: In this paper, we explore the problem of identifying substitute relationship
between food pairs from real-world food consumption data as the first step
towards the healthier food recommendation. Our method is inspired by the
distributional hypothesis in linguistics. Specifically, we assume that foods
that are consumed in similar contexts are more likely to be similar dietarily.
For example, a turkey sandwich can be considered a suitable substitute for a
chicken sandwich if both tend to be consumed with french fries and salad. To
evaluate our method, we constructed a real-world food consumption dataset from
MyFitnessPal's public food diary entries and obtained ground-truth human
judgements of food substitutes from a crowdsourcing service. The experiment
results suggest the effectiveness of the method in identifying suitable
substitutes.
| new_dataset | 0.647074 |
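A toy sketch of the distributional-similarity idea in the record above: each food is represented by a co-occurrence vector built from the meals it appears in, and candidate substitutes are ranked by the cosine similarity of those vectors. The meal list, the raw-count weighting and the function names are assumptions made for the example; the paper's pipeline is built on MyFitnessPal diary entries and crowdsourced ground truth rather than this toy data.

from collections import Counter, defaultdict
from math import sqrt

def context_vectors(meals):
    # Foods consumed in similar contexts (meals) end up with similar vectors.
    vectors = defaultdict(Counter)
    for meal in meals:
        for food in meal:
            for other in meal:
                if other != food:
                    vectors[food][other] += 1
    return vectors

def cosine(u, v):
    num = sum(u[k] * v[k] for k in set(u) & set(v))
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

meals = [
    ["chicken sandwich", "fries", "salad"],
    ["turkey sandwich", "fries", "salad"],
    ["turkey sandwich", "fries"],
    ["oatmeal", "banana"],
]
vecs = context_vectors(meals)
print(cosine(vecs["chicken sandwich"], vecs["turkey sandwich"]))  # high: shared contexts
print(cosine(vecs["chicken sandwich"], vecs["oatmeal"]))          # zero: no shared context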
1607.08822 | Peter Anderson | Peter Anderson, Basura Fernando, Mark Johnson, Stephen Gould | SPICE: Semantic Propositional Image Caption Evaluation | 14 pages plus references, accepted to ECCV 2016 | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is considerable interest in the task of automatically generating image
captions. However, evaluation is challenging. Existing automatic evaluation
metrics are primarily sensitive to n-gram overlap, which is neither necessary
nor sufficient for the task of simulating human judgment. We hypothesize that
semantic propositional content is an important component of human caption
evaluation, and propose a new automated caption evaluation metric defined over
scene graphs coined SPICE. Extensive evaluations across a range of models and
datasets indicate that SPICE captures human judgments over model-generated
captions better than other automatic metrics (e.g., system-level correlation of
0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and
0.53 for METEOR). Furthermore, SPICE can answer questions such as `which
caption-generator best understands colors?' and `can caption-generators count?'
| [
{
"version": "v1",
"created": "Fri, 29 Jul 2016 14:26:27 GMT"
}
] | 2016-08-01T00:00:00 | [
[
"Anderson",
"Peter",
""
],
[
"Fernando",
"Basura",
""
],
[
"Johnson",
"Mark",
""
],
[
"Gould",
"Stephen",
""
]
] | TITLE: SPICE: Semantic Propositional Image Caption Evaluation
ABSTRACT: There is considerable interest in the task of automatically generating image
captions. However, evaluation is challenging. Existing automatic evaluation
metrics are primarily sensitive to n-gram overlap, which is neither necessary
nor sufficient for the task of simulating human judgment. We hypothesize that
semantic propositional content is an important component of human caption
evaluation, and propose a new automated caption evaluation metric defined over
scene graphs coined SPICE. Extensive evaluations across a range of models and
datasets indicate that SPICE captures human judgments over model-generated
captions better than other automatic metrics (e.g., system-level correlation of
0.88 with human judgments on the MS COCO dataset, versus 0.43 for CIDEr and
0.53 for METEOR). Furthermore, SPICE can answer questions such as `which
caption-generator best understands colors?' and `can caption-generators count?'
| no_new_dataset | 0.923764 |
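To illustrate the metric in the record above: SPICE scores a candidate caption by an F-measure over semantic proposition tuples (objects, attributes, relations) extracted from scene-graph parses of the candidate and the references. The sketch below shows only that tuple-level F1; the scene-graph parsing and the WordNet synonym matching used by the real metric are omitted, and the example tuples are made up.

def tuple_fscore(candidate_tuples, reference_tuples):
    # Tuples such as ("girl",), ("girl", "young"), ("girl", "ride", "horse").
    cand, ref = set(candidate_tuples), set(reference_tuples)
    matched = len(cand & ref)
    precision = matched / len(cand) if cand else 0.0
    recall = matched / len(ref) if ref else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

cand = {("girl",), ("horse",), ("girl", "ride", "horse")}
ref = {("girl",), ("girl", "young"), ("horse",), ("girl", "ride", "horse")}
print(tuple_fscore(cand, ref))  # ~0.857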
1506.08415 | Andrea Burattin | Andrea Burattin | PLG2: Multiperspective Processes Randomization and Simulation for Online
and Offline Settings | 36 pages, minor updates | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Process mining represents an important field in BPM and data mining research.
Recently, it has gained importance also for practitioners: more and more
companies are creating business process intelligence solutions. The evaluation
of process mining algorithms requires, as any other data mining task, the
availability of large amount of real-world data. Despite the increasing
availability of such datasets, they are affected by many limitations, in primis
the absence of a "gold standard" (i.e., the reference model).
This paper extends an approach, already available in the literature, for the
generation of random processes. Novelties have been introduced throughout the
work and, in particular, they involve the complete support for multiperspective
models and logs (i.e., the control-flow perspective is enriched with time and
data information) and for online settings (i.e., generation of multiperspective
event streams and concept drifts). The proposed new framework is able to almost
entirely cover the spectrum of possible scenarios that can be observed in the
real world. The proposed approach is implemented as a publicly available Java
application, with a set of APIs for the programmatic execution of experiments.
| [
{
"version": "v1",
"created": "Sun, 28 Jun 2015 15:28:24 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2016 09:15:43 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Jul 2016 05:51:01 GMT"
}
] | 2016-07-29T00:00:00 | [
[
"Burattin",
"Andrea",
""
]
] | TITLE: PLG2: Multiperspective Processes Randomization and Simulation for Online
and Offline Settings
ABSTRACT: Process mining represents an important field in BPM and data mining research.
Recently, it has gained importance also for practitioners: more and more
companies are creating business process intelligence solutions. The evaluation
of process mining algorithms requires, as any other data mining task, the
availability of large amounts of real-world data. Despite the increasing
availability of such datasets, they are affected by many limitations, in primis
the absence of a "gold standard" (i.e., the reference model).
This paper extends an approach, already available in the literature, for the
generation of random processes. Novelties have been introduced throughout the
work and, in particular, they involve the complete support for multiperspective
models and logs (i.e., the control-flow perspective is enriched with time and
data information) and for online settings (i.e., generation of multiperspective
event streams and concept drifts). The proposed new framework is able to almost
entirely cover the spectrum of possible scenarios that can be observed in the
real world. The proposed approach is implemented as a publicly available Java
application, with a set of APIs for the programmatic execution of experiments.
| no_new_dataset | 0.939803 |
1604.00239 | Piotr Koniusz | Piotr Koniusz and Anoop Cherian and Fatih Porikli | Tensor Representations via Kernel Linearization for Action Recognition
from 3D Skeletons (Extended Version) | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore tensor representations that can compactly capture
higher-order relationships between skeleton joints for 3D action recognition.
We first define RBF kernels on 3D joint sequences, which are then linearized to
form kernel descriptors. The higher-order outer-products of these kernel
descriptors form our tensor representations. We present two different kernels
for action recognition, namely (i) a sequence compatibility kernel that
captures the spatio-temporal compatibility of joints in one sequence against
those in the other, and (ii) a dynamics compatibility kernel that explicitly
models the action dynamics of a sequence. Tensors formed from these kernels are
then used to train an SVM. We present experiments on several benchmark datasets
and demonstrate state of the art results, substantiating the effectiveness of
our representations.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 13:41:49 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2016 08:35:38 GMT"
}
] | 2016-07-29T00:00:00 | [
[
"Koniusz",
"Piotr",
""
],
[
"Cherian",
"Anoop",
""
],
[
"Porikli",
"Fatih",
""
]
] | TITLE: Tensor Representations via Kernel Linearization for Action Recognition
from 3D Skeletons (Extended Version)
ABSTRACT: In this paper, we explore tensor representations that can compactly capture
higher-order relationships between skeleton joints for 3D action recognition.
We first define RBF kernels on 3D joint sequences, which are then linearized to
form kernel descriptors. The higher-order outer-products of these kernel
descriptors form our tensor representations. We present two different kernels
for action recognition, namely (i) a sequence compatibility kernel that
captures the spatio-temporal compatibility of joints in one sequence against
those in the other, and (ii) a dynamics compatibility kernel that explicitly
models the action dynamics of a sequence. Tensors formed from these kernels are
then used to train an SVM. We present experiments on several benchmark datasets
and demonstrate state of the art results, substantiating the effectiveness of
our representations.
| no_new_dataset | 0.95222 |
1604.01325 | Albert Gordo | Albert Gordo, Jon Almazan, Jerome Revaud, Diane Larlus | Deep Image Retrieval: Learning global representations for image search | ECCV 2016 version + additional results | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel approach for instance-level image retrieval. It produces a
global and compact fixed-length representation for each image by aggregating
many region-wise descriptors. In contrast to previous works employing
pre-trained deep networks as a black box to produce features, our method
leverages a deep architecture trained for the specific task of image retrieval.
Our contribution is twofold: (i) we leverage a ranking framework to learn
convolution and projection weights that are used to build the region features;
and (ii) we employ a region proposal network to learn which regions should be
pooled to form the final global descriptor. We show that using clean training
data is key to the success of our approach. To that aim, we use a large scale
but noisy landmark dataset and develop an automatic cleaning approach. The
proposed architecture produces a global image representation in a single
forward pass. Our approach significantly outperforms previous approaches based
on global descriptors on standard datasets. It even surpasses most prior works
based on costly local descriptor indexing and spatial verification. Additional
material is available at www.xrce.xerox.com/Deep-Image-Retrieval.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 16:48:17 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2016 10:44:17 GMT"
}
] | 2016-07-29T00:00:00 | [
[
"Gordo",
"Albert",
""
],
[
"Almazan",
"Jon",
""
],
[
"Revaud",
"Jerome",
""
],
[
"Larlus",
"Diane",
""
]
] | TITLE: Deep Image Retrieval: Learning global representations for image search
ABSTRACT: We propose a novel approach for instance-level image retrieval. It produces a
global and compact fixed-length representation for each image by aggregating
many region-wise descriptors. In contrast to previous works employing
pre-trained deep networks as a black box to produce features, our method
leverages a deep architecture trained for the specific task of image retrieval.
Our contribution is twofold: (i) we leverage a ranking framework to learn
convolution and projection weights that are used to build the region features;
and (ii) we employ a region proposal network to learn which regions should be
pooled to form the final global descriptor. We show that using clean training
data is key to the success of our approach. To that aim, we use a large scale
but noisy landmark dataset and develop an automatic cleaning approach. The
proposed architecture produces a global image representation in a single
forward pass. Our approach significantly outperforms previous approaches based
on global descriptors on standard datasets. It even surpasses most prior works
based on costly local descriptor indexing and spatial verification. Additional
material is available at www.xrce.xerox.com/Deep-Image-Retrieval.
| no_new_dataset | 0.948442 |
1604.04808 | Arun Mallya | Arun Mallya and Svetlana Lazebnik | Learning Models for Actions and Person-Object Interactions with Transfer
to Question Answering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes deep convolutional network models that utilize local and
global context to make human activity label predictions in still images,
achieving state-of-the-art performance on two recent datasets with hundreds of
labels each. We use multiple instance learning to handle the lack of
supervision on the level of individual person instances, and weighted loss to
handle unbalanced training data. Further, we show how specialized features
trained on these datasets can be used to improve accuracy on the Visual
Question Answering (VQA) task, in the form of multiple choice fill-in-the-blank
questions (Visual Madlibs). Specifically, we tackle two types of questions on
person activity and person-object relationship and show improvements over
generic features trained on the ImageNet classification task.
| [
{
"version": "v1",
"created": "Sat, 16 Apr 2016 22:54:05 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2016 04:44:36 GMT"
}
] | 2016-07-29T00:00:00 | [
[
"Mallya",
"Arun",
""
],
[
"Lazebnik",
"Svetlana",
""
]
] | TITLE: Learning Models for Actions and Person-Object Interactions with Transfer
to Question Answering
ABSTRACT: This paper proposes deep convolutional network models that utilize local and
global context to make human activity label predictions in still images,
achieving state-of-the-art performance on two recent datasets with hundreds of
labels each. We use multiple instance learning to handle the lack of
supervision on the level of individual person instances, and weighted loss to
handle unbalanced training data. Further, we show how specialized features
trained on these datasets can be used to improve accuracy on the Visual
Question Answering (VQA) task, in the form of multiple choice fill-in-the-blank
questions (Visual Madlibs). Specifically, we tackle two types of questions on
person activity and person-object relationship and show improvements over
generic features trained on the ImageNet classification task.
| no_new_dataset | 0.951142 |
1605.06155 | Cheng Zhang | Cheng Zhang and Hedvig Kjellstrom and Carl Henrik Ek | Inter-Battery Topic Representation Learning | ECCV 2016 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present the Inter-Battery Topic Model (IBTM). Our approach
extends traditional topic models by learning a factorized latent variable
representation. The structured representation leads to a model that marries
benefits traditionally associated with a discriminative approach, such as
feature selection, with those of a generative model, such as principled
regularization and ability to handle missing data. The factorization is
provided by representing data in terms of aligned pairs of observations as
different views. This provides means for selecting a representation that
separately models topics that exist in both views from the topics that are
unique to a single view. This structured consolidation allows for efficient and
robust inference and provides a compact and efficient representation. Learning
is performed in a Bayesian fashion by maximizing a rigorous bound on the
log-likelihood. Firstly, we illustrate the benefits of the model on a synthetic
dataset. The model is then evaluated in both uni- and multi-modality settings
on two different classification tasks with off-the-shelf convolutional neural
network (CNN) features which generate state-of-the-art results with extremely
compact representations.
| [
{
"version": "v1",
"created": "Thu, 19 May 2016 21:44:12 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jul 2016 10:08:40 GMT"
}
] | 2016-07-29T00:00:00 | [
[
"Zhang",
"Cheng",
""
],
[
"Kjellstrom",
"Hedvig",
""
],
[
"Ek",
"Carl Henrik",
""
]
] | TITLE: Inter-Battery Topic Representation Learning
ABSTRACT: In this paper, we present the Inter-Battery Topic Model (IBTM). Our approach
extends traditional topic models by learning a factorized latent variable
representation. The structured representation leads to a model that marries
benefits traditionally associated with a discriminative approach, such as
feature selection, with those of a generative model, such as principled
regularization and ability to handle missing data. The factorization is
provided by representing data in terms of aligned pairs of observations as
different views. This provides means for selecting a representation that
separately models topics that exist in both views from the topics that are
unique to a single view. This structured consolidation allows for efficient and
robust inference and provides a compact and efficient representation. Learning
is performed in a Bayesian fashion by maximizing a rigorous bound on the
log-likelihood. Firstly, we illustrate the benefits of the model on a synthetic
dataset. The model is then evaluated in both uni- and multi-modality settings
on two different classification tasks with off-the-shelf convolutional neural
network (CNN) features which generate state-of-the-art results with extremely
compact representations.
| no_new_dataset | 0.9455 |
1606.05002 | Tanmay Gupta | Tanmay Gupta, Daeyun Shin, Naren Sivagnanadasan, Derek Hoiem | 3DFS: Deformable Dense Depth Fusion and Segmentation for Object
Reconstruction from a Handheld Camera | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an approach for 3D reconstruction and segmentation of a single
object placed on a flat surface from an input video. Our approach is to perform
dense depth map estimation for multiple views using a proposed objective
function that preserves detail. The resulting depth maps are then fused using a
proposed implicit surface function that is robust to estimation error,
producing a smooth surface reconstruction of the entire scene. Finally, the
object is segmented from the remaining scene using a proposed 2D-3D
segmentation that incorporates image and depth cues with priors and
regularization over the 3D volume and 2D segmentations. We evaluate 3D
reconstructions qualitatively on our Object-Videos dataset, comparing to
fusion, multiview stereo, and segmentation baselines. We also quantitatively
evaluate the dense depth estimation using the RGBD Scenes V2 dataset [Henry et
al. 2013] and the segmentation using keyframe annotations of the Object-Videos
dataset.
| [
{
"version": "v1",
"created": "Wed, 15 Jun 2016 23:23:08 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2016 20:38:19 GMT"
}
] | 2016-07-29T00:00:00 | [
[
"Gupta",
"Tanmay",
""
],
[
"Shin",
"Daeyun",
""
],
[
"Sivagnanadasan",
"Naren",
""
],
[
"Hoiem",
"Derek",
""
]
] | TITLE: 3DFS: Deformable Dense Depth Fusion and Segmentation for Object
Reconstruction from a Handheld Camera
ABSTRACT: We propose an approach for 3D reconstruction and segmentation of a single
object placed on a flat surface from an input video. Our approach is to perform
dense depth map estimation for multiple views using a proposed objective
function that preserves detail. The resulting depth maps are then fused using a
proposed implicit surface function that is robust to estimation error,
producing a smooth surface reconstruction of the entire scene. Finally, the
object is segmented from the remaining scene using a proposed 2D-3D
segmentation that incorporates image and depth cues with priors and
regularization over the 3D volume and 2D segmentations. We evaluate 3D
reconstructions qualitatively on our Object-Videos dataset, comparing to
fusion, multiview stereo, and segmentation baselines. We also quantitatively
evaluate the dense depth estimation using the RGBD Scenes V2 dataset [Henry et
al. 2013] and the segmentation using keyframe annotations of the Object-Videos
dataset.
| no_new_dataset | 0.918772 |
1607.08381 | Rahul Rama Varior Mr. | Rahul Rama Varior, Bing Shuai, Jiwen Lu, Dong Xu, and Gang Wang | A Siamese Long Short-Term Memory Architecture for Human
Re-Identification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matching pedestrians across multiple camera views, known as human
re-identification, is a challenging problem in visual
surveillance. In the existing works concentrating on feature extraction,
representations are formed locally and independent of other regions. We present
a novel siamese Long Short-Term Memory (LSTM) architecture that can process
image regions sequentially and enhance the discriminative capability of local
feature representation by leveraging contextual information. The feedback
connections and internal gating mechanism of the LSTM cells enable our model to
memorize the spatial dependencies and selectively propagate relevant contextual
information through the network. We demonstrate improved performance compared
to the baseline algorithm with no LSTM units and promising results compared to
state-of-the-art methods on Market-1501, CUHK03 and VIPeR datasets.
Visualization of the internal mechanism of LSTM cells shows meaningful patterns
can be learned by our method.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2016 09:43:52 GMT"
}
] | 2016-07-29T00:00:00 | [
[
"Varior",
"Rahul Rama",
""
],
[
"Shuai",
"Bing",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Xu",
"Dong",
""
],
[
"Wang",
"Gang",
""
]
] | TITLE: A Siamese Long Short-Term Memory Architecture for Human
Re-Identification
ABSTRACT: Matching pedestrians across multiple camera views, known as human
re-identification, is a challenging problem in visual
surveillance. In the existing works concentrating on feature extraction,
representations are formed locally and independent of other regions. We present
a novel siamese Long Short-Term Memory (LSTM) architecture that can process
image regions sequentially and enhance the discriminative capability of local
feature representation by leveraging contextual information. The feedback
connections and internal gating mechanism of the LSTM cells enable our model to
memorize the spatial dependencies and selectively propagate relevant contextual
information through the network. We demonstrate improved performance compared
to the baseline algorithm with no LSTM units and promising results compared to
state-of-the-art methods on Market-1501, CUHK03 and VIPeR datasets.
Visualization of the internal mechanism of LSTM cells shows meaningful patterns
can be learned by our method.
| no_new_dataset | 0.9455 |