Dataset schema (column, type, value range):

| Column | Type | Range / Values |
| --- | --- | --- |
| id | string | 9–16 chars |
| submitter | string | 3–64 chars |
| authors | string | 5–6.63k chars |
| title | string | 7–245 chars |
| comments | string | 1–482 chars |
| journal-ref | string | 4–382 chars |
| doi | string | 9–151 chars |
| report-no | string (classes) | 984 distinct values |
| categories | string | 5–108 chars |
| license | string (classes) | 9 distinct values |
| abstract | string | 83–3.41k chars |
| versions | list | 1–20 items |
| update_date | timestamp[s] | 2007-05-23 to 2025-04-11 |
| authors_parsed | sequence | 1–427 items |
| prompt | string | 166–3.49k chars |
| label | string (classes) | 2 distinct values |
| prob | float64 | 0.5–0.98 |
1612.01689
Raúl Díaz
Raúl Díaz, Charless C. Fowlkes
Cluster-Wise Ratio Tests for Fast Camera Localization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature point matching for camera localization suffers from scalability problems. Even when feature descriptors associated with 3D scene points are locally unique, as coverage grows, similar or repeated features become increasingly common. As a result, the standard distance ratio-test used to identify reliable image feature points is overly restrictive and rejects many good candidate matches. We propose a simple coarse-to-fine strategy that uses conservative approximations to robust local ratio-tests that can be computed efficiently using global approximate k-nearest neighbor search. We treat these forward matches as votes in camera pose space and use them to prioritize back-matching within candidate camera pose clusters, exploiting feature co-visibility captured by clustering the 3D model camera pose graph. This approach achieves state-of-the-art camera localization results on a variety of popular benchmarks, outperforming several methods that use more complicated data structures and that make more restrictive assumptions on camera pose. We also carry out diagnostic analyses on a difficult test dataset containing globally repetitive structure that suggest our approach successfully adapts to the challenges of large-scale image localization.
[ { "version": "v1", "created": "Tue, 6 Dec 2016 07:35:24 GMT" }, { "version": "v2", "created": "Sat, 20 May 2017 18:02:46 GMT" } ]
2017-05-23T00:00:00
[ [ "Díaz", "Raúl", "" ], [ "Fowlkes", "Charless C.", "" ] ]
TITLE: Cluster-Wise Ratio Tests for Fast Camera Localization ABSTRACT: Feature point matching for camera localization suffers from scalability problems. Even when feature descriptors associated with 3D scene points are locally unique, as coverage grows, similar or repeated features become increasingly common. As a result, the standard distance ratio-test used to identify reliable image feature points is overly restrictive and rejects many good candidate matches. We propose a simple coarse-to-fine strategy that uses conservative approximations to robust local ratio-tests that can be computed efficiently using global approximate k-nearest neighbor search. We treat these forward matches as votes in camera pose space and use them to prioritize back-matching within candidate camera pose clusters, exploiting feature co-visibility captured by clustering the 3D model camera pose graph. This approach achieves state-of-the-art camera localization results on a variety of popular benchmarks, outperforming several methods that use more complicated data structures and that make more restrictive assumptions on camera pose. We also carry out diagnostic analyses on a difficult test dataset containing globally repetitive structure that suggest our approach successfully adapts to the challenges of large-scale image localization.
no_new_dataset
0.954052
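For context on the test this abstract builds on: the standard distance ratio test accepts a feature match only when the nearest model descriptor is clearly closer than the second nearest. Below is a minimal numpy sketch of that baseline; the descriptor shapes and the 0.8 threshold are illustrative assumptions, and the paper's cluster-wise back-matching is not shown.

```python
import numpy as np

def ratio_test_matches(query_desc, model_desc, ratio=0.8):
    """Accept a match only when the nearest model descriptor is clearly
    closer than the second nearest (the global test the paper relaxes)."""
    matches = []
    for i, q in enumerate(query_desc):
        d = np.linalg.norm(model_desc - q, axis=1)  # distances to all model descriptors
        j1, j2 = np.argsort(d)[:2]                  # indices of the two nearest neighbors
        if d[j1] < ratio * d[j2]:                   # the distance ratio test
            matches.append((i, int(j1)))
    return matches

rng = np.random.default_rng(0)
print(ratio_test_matches(rng.normal(size=(5, 128)),     # 5 query descriptors
                         rng.normal(size=(100, 128))))  # 100 model descriptors
```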
1612.02590
Zike Yan
Zike Yan, Xuezhi Xiang
Scene Flow Estimation: A Survey
51 pages, 12 figures, 10 tables, 108 references
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is the first to review the field of scene flow estimation, analyzing and comparing its methods, technical challenges, evaluation methodologies, and performance. Existing algorithms are categorized in terms of scene representation, data source, and calculation scheme, and the pros and cons of each category are compared briefly. The datasets and evaluation protocols are enumerated, and the performance of the most representative methods is presented. A vision for future work is outlined, with several open questions raised for discussion. This survey thus presents a general introduction to and analysis of scene flow estimation.
[ { "version": "v1", "created": "Thu, 8 Dec 2016 10:44:03 GMT" }, { "version": "v2", "created": "Mon, 12 Dec 2016 03:25:34 GMT" }, { "version": "v3", "created": "Sun, 21 May 2017 09:52:02 GMT" } ]
2017-05-23T00:00:00
[ [ "Yan", "Zike", "" ], [ "Xiang", "Xuezhi", "" ] ]
TITLE: Scene Flow Estimation: A Survey ABSTRACT: This paper is the first to review the field of scene flow estimation, analyzing and comparing its methods, technical challenges, evaluation methodologies, and performance. Existing algorithms are categorized in terms of scene representation, data source, and calculation scheme, and the pros and cons of each category are compared briefly. The datasets and evaluation protocols are enumerated, and the performance of the most representative methods is presented. A vision for future work is outlined, with several open questions raised for discussion. This survey thus presents a general introduction to and analysis of scene flow estimation.
no_new_dataset
0.949342
1701.09135
Samarth Manoj Brahmbhatt
Samarth Brahmbhatt and James Hays
DeepNav: Learning to Navigate Large Cities
CVPR 2017 camera ready version
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present DeepNav, a Convolutional Neural Network (CNN) based algorithm for navigating large cities using locally visible street-view images. The DeepNav agent learns to reach its destination quickly by making the correct navigation decisions at intersections. We collect a large-scale dataset of street-view images organized in a graph where nodes are connected by roads. This dataset contains 10 city graphs and more than 1 million street-view images. We propose 3 supervised learning approaches for the navigation task and show how A* search in the city graph can be used to generate supervision for the learning. Our annotation process is fully automated using publicly available mapping services and requires no human input. We evaluate the proposed DeepNav models on 4 held-out cities for navigating to 5 different types of destinations. Our algorithms outperform previous work that uses hand-crafted features and Support Vector Regression (SVR)[19].
[ { "version": "v1", "created": "Tue, 31 Jan 2017 17:14:24 GMT" }, { "version": "v2", "created": "Sat, 20 May 2017 22:40:26 GMT" } ]
2017-05-23T00:00:00
[ [ "Brahmbhatt", "Samarth", "" ], [ "Hays", "James", "" ] ]
TITLE: DeepNav: Learning to Navigate Large Cities ABSTRACT: We present DeepNav, a Convolutional Neural Network (CNN) based algorithm for navigating large cities using locally visible street-view images. The DeepNav agent learns to reach its destination quickly by making the correct navigation decisions at intersections. We collect a large-scale dataset of street-view images organized in a graph where nodes are connected by roads. This dataset contains 10 city graphs and more than 1 million street-view images. We propose 3 supervised learning approaches for the navigation task and show how A* search in the city graph can be used to generate supervision for the learning. Our annotation process is fully automated using publicly available mapping services and requires no human input. We evaluate the proposed DeepNav models on 4 held-out cities for navigating to 5 different types of destinations. Our algorithms outperform previous work that uses hand-crafted features and Support Vector Regression (SVR)[19].
new_dataset
0.95561
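The abstract's automated supervision can be pictured with a toy sketch: run a shortest-path search on the city graph and label every node with its optimal next-hop neighbor. The graph, node names, and the reduction of A* to plain shortest paths (a zero heuristic) are our assumptions for illustration, not DeepNav's actual pipeline.

```python
import networkx as nx

def action_labels(G, destination):
    """Label every node with the optimal next hop toward the destination;
    this per-intersection 'best action' is the kind of automatically
    generated supervision the abstract describes."""
    labels = {}
    for node in G.nodes:
        if node != destination:
            path = nx.shortest_path(G, node, destination)  # A* with a zero heuristic
            labels[node] = path[1]                          # first step of an optimal route
    return labels

# toy street graph: a 4-cycle with a chord
G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")])
print(action_labels(G, destination="d"))
```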
1702.08653
Asli Celikyilmaz
Asli Celikyilmaz and Li Deng and Lihong Li and Chong Wang
Scaffolding Networks: Incremental Learning and Teaching Through Questioning
11 pages + Abstract + 3 figures
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a new paradigm of learning for reasoning, understanding, and prediction, as well as the scaffolding network to implement this paradigm. The scaffolding network embodies an incremental learning approach that is formulated as a teacher-student network architecture to teach machines how to understand text and do reasoning. The key to our computational scaffolding approach is the interactions between the teacher and the student through sequential questioning. The student observes each sentence in the text incrementally, and it uses an attention-based neural net to discover and register the key information in relation to its current memory. Meanwhile, the teacher asks questions about the observed text, and the student network gets rewarded by correctly answering these questions. The entire network is updated continually using reinforcement learning. Our experimental results on synthetic and real datasets show that the scaffolding network not only outperforms state-of-the-art methods but also learns to do reasoning in a scalable way even with little human generated input.
[ { "version": "v1", "created": "Tue, 28 Feb 2017 05:43:10 GMT" }, { "version": "v2", "created": "Fri, 19 May 2017 19:45:43 GMT" } ]
2017-05-23T00:00:00
[ [ "Celikyilmaz", "Asli", "" ], [ "Deng", "Li", "" ], [ "Li", "Lihong", "" ], [ "Wang", "Chong", "" ] ]
TITLE: Scaffolding Networks: Incremental Learning and Teaching Through Questioning ABSTRACT: We introduce a new paradigm of learning for reasoning, understanding, and prediction, as well as the scaffolding network to implement this paradigm. The scaffolding network embodies an incremental learning approach that is formulated as a teacher-student network architecture to teach machines how to understand text and do reasoning. The key to our computational scaffolding approach is the interactions between the teacher and the student through sequential questioning. The student observes each sentence in the text incrementally, and it uses an attention-based neural net to discover and register the key information in relation to its current memory. Meanwhile, the teacher asks questions about the observed text, and the student network gets rewarded by correctly answering these questions. The entire network is updated continually using reinforcement learning. Our experimental results on synthetic and real datasets show that the scaffolding network not only outperforms state-of-the-art methods but also learns to do reasoning in a scalable way even with little human generated input.
no_new_dataset
0.949389
1703.01789
Jongpil Lee
Jongpil Lee, Jiyoung Park, Keunhyoung Luke Kim, Juhan Nam
Sample-level Deep Convolutional Neural Networks for Music Auto-tagging Using Raw Waveforms
7 pages, Sound and Music Computing Conference (SMC), 2017
null
null
null
cs.SD cs.LG cs.MM cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, the end-to-end approach that learns hierarchical representations from raw data using deep convolutional neural networks has been successfully explored in the image, text and speech domains. This approach has been applied to musical signals as well, but has not yet been fully explored. To this end, we propose sample-level deep convolutional neural networks which learn representations from very small grains of waveforms (e.g. 2 or 3 samples) rather than the typical frame-level input representations. Our experiments show how deep architectures with sample-level filters improve accuracy in music auto-tagging and provide results comparable to previous state-of-the-art performance on the MagnaTagATune dataset and the Million Song Dataset. In addition, we visualize the filters learned in each layer of a sample-level DCNN to identify hierarchically learned features, and show that they are sensitive to log-scaled frequency along the layer hierarchy, much like the mel-frequency spectrogram widely used in music classification systems.
[ { "version": "v1", "created": "Mon, 6 Mar 2017 09:49:48 GMT" }, { "version": "v2", "created": "Mon, 22 May 2017 04:46:36 GMT" } ]
2017-05-23T00:00:00
[ [ "Lee", "Jongpil", "" ], [ "Park", "Jiyoung", "" ], [ "Kim", "Keunhyoung Luke", "" ], [ "Nam", "Juhan", "" ] ]
TITLE: Sample-level Deep Convolutional Neural Networks for Music Auto-tagging Using Raw Waveforms ABSTRACT: Recently, the end-to-end approach that learns hierarchical representations from raw data using deep convolutional neural networks has been successfully explored in the image, text and speech domains. This approach has been applied to musical signals as well, but has not yet been fully explored. To this end, we propose sample-level deep convolutional neural networks which learn representations from very small grains of waveforms (e.g. 2 or 3 samples) rather than the typical frame-level input representations. Our experiments show how deep architectures with sample-level filters improve accuracy in music auto-tagging and provide results comparable to previous state-of-the-art performance on the MagnaTagATune dataset and the Million Song Dataset. In addition, we visualize the filters learned in each layer of a sample-level DCNN to identify hierarchically learned features, and show that they are sensitive to log-scaled frequency along the layer hierarchy, much like the mel-frequency spectrogram widely used in music classification systems.
no_new_dataset
0.949669
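A minimal PyTorch sketch of the sample-level idea: stacks of tiny 1-D filters (kernel 3, stride 3) applied directly to the raw waveform, so the receptive field grows geometrically with depth. Channel counts, depth, and the pooling head are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SampleLevelCNN(nn.Module):
    """Sketch: tiny strided 1-D convolutions on raw audio samples."""
    def __init__(self, n_tags=50):
        super().__init__()
        layers, ch = [], 1
        for out_ch in (64, 64, 128, 128, 256):   # illustrative widths
            layers += [nn.Conv1d(ch, out_ch, kernel_size=3, stride=3),
                       nn.BatchNorm1d(out_ch), nn.ReLU()]
            ch = out_ch
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(ch, n_tags)

    def forward(self, wav):                   # wav: (batch, 1, samples)
        h = self.features(wav).mean(dim=2)    # global average over time
        return torch.sigmoid(self.head(h))    # multi-label tag probabilities

print(SampleLevelCNN()(torch.randn(2, 1, 3**7)).shape)  # torch.Size([2, 50])
```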
1703.03329
Limin Wang
Limin Wang, Yuanjun Xiong, Dahua Lin, Luc Van Gool
UntrimmedNets for Weakly Supervised Action Recognition and Detection
camera-ready version to appear in CVPR2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of those strongly supervised approaches on these two datasets.
[ { "version": "v1", "created": "Thu, 9 Mar 2017 16:29:39 GMT" }, { "version": "v2", "created": "Mon, 22 May 2017 12:38:02 GMT" } ]
2017-05-23T00:00:00
[ [ "Wang", "Limin", "" ], [ "Xiong", "Yuanjun", "" ], [ "Lin", "Dahua", "" ], [ "Van Gool", "Luc", "" ] ]
TITLE: UntrimmedNets for Weakly Supervised Action Recognition and Detection ABSTRACT: Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of those strongly supervised approaches on these two datasets.
no_new_dataset
0.94699
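One loose way to picture the coupling of the classification and selection modules: the selection module softmax-weights the clips of an untrimmed video, and the video-level class scores are the attention-weighted sum of per-clip classification scores. Shapes and the plain-softmax choice below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def untrimmed_video_score(clip_class_scores, clip_selection_scores):
    """Selection module softmax-weights clips; the video-level class
    score is the weighted sum of per-clip classification scores."""
    w = np.exp(clip_selection_scores - clip_selection_scores.max())
    w /= w.sum()                   # softmax attention over clips
    return w @ clip_class_scores   # (n_classes,) video-level scores

rng = np.random.default_rng(0)
print(untrimmed_video_score(rng.normal(size=(8, 20)),  # 8 clips, 20 classes
                            rng.normal(size=8)))       # 8 selection scores
```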
1704.00135
Vadim Markovtsev
Vadim Markovtsev and Eiso Kant
Topic modeling of public repositories at scale using names in source code
11 pages
null
null
null
cs.PL cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Programming languages themselves have a limited number of reserved keywords and character-based tokens that define the language specification. However, programmers make rich use of natural language within their code through comments, text literals, and the naming of entities. The programmer-defined names found in source code are a rich source of information for building a high-level understanding of a project. The goal of this paper is to apply topic modeling to the names used in over 13.6 million repositories and to interpret the inferred topics. One of the problems in such a study is the occurrence of duplicate repositories not officially marked as forks (obscure forks). We show how to address it using the same identifiers that are extracted for topic modeling. We open with a discussion of naming in source code; we then elaborate on our approach to removing exact and fuzzy duplicate repositories using Locality Sensitive Hashing on the bag-of-words model; we then discuss our work on topic modeling; and finally we present the results of our data analysis, together with open access to the source code, tools, and datasets.
[ { "version": "v1", "created": "Sat, 1 Apr 2017 08:16:20 GMT" }, { "version": "v2", "created": "Sat, 20 May 2017 08:29:00 GMT" } ]
2017-05-23T00:00:00
[ [ "Markovtsev", "Vadim", "" ], [ "Kant", "Eiso", "" ] ]
TITLE: Topic modeling of public repositories at scale using names in source code ABSTRACT: Programming languages themselves have a limited number of reserved keywords and character-based tokens that define the language specification. However, programmers make rich use of natural language within their code through comments, text literals, and the naming of entities. The programmer-defined names found in source code are a rich source of information for building a high-level understanding of a project. The goal of this paper is to apply topic modeling to the names used in over 13.6 million repositories and to interpret the inferred topics. One of the problems in such a study is the occurrence of duplicate repositories not officially marked as forks (obscure forks). We show how to address it using the same identifiers that are extracted for topic modeling. We open with a discussion of naming in source code; we then elaborate on our approach to removing exact and fuzzy duplicate repositories using Locality Sensitive Hashing on the bag-of-words model; we then discuss our work on topic modeling; and finally we present the results of our data analysis, together with open access to the source code, tools, and datasets.
no_new_dataset
0.948728
1704.01700
Anirban Roychowdhury
Anirban Roychowdhury
Accelerated Stochastic Quasi-Newton Optimization on Riemann Manifolds
null
null
null
null
math.OC cs.LG math.DG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose an L-BFGS optimization algorithm on Riemannian manifolds using minibatched stochastic variance reduction techniques for fast convergence with constant step sizes, without resorting to linesearch methods designed to satisfy Wolfe conditions. We provide a new convergence proof for strongly convex functions without using curvature conditions on the manifold, as well as a convergence discussion for nonconvex functions. We discuss a couple of ways to obtain the correction pairs used to calculate the product of the gradient with the inverse Hessian, and empirically demonstrate their use in synthetic experiments on computation of Karcher means for symmetric positive definite matrices and leading eigenvalues of large scale data matrices. We compare our method to VR-PCA for the latter experiment, along with Riemannian SVRG for both cases, and show strong convergence results for a range of datasets.
[ { "version": "v1", "created": "Thu, 6 Apr 2017 03:34:29 GMT" }, { "version": "v2", "created": "Wed, 12 Apr 2017 22:02:30 GMT" }, { "version": "v3", "created": "Mon, 22 May 2017 15:02:02 GMT" } ]
2017-05-23T00:00:00
[ [ "Roychowdhury", "Anirban", "" ] ]
TITLE: Accelerated Stochastic Quasi-Newton Optimization on Riemann Manifolds ABSTRACT: We propose an L-BFGS optimization algorithm on Riemannian manifolds using minibatched stochastic variance reduction techniques for fast convergence with constant step sizes, without resorting to linesearch methods designed to satisfy Wolfe conditions. We provide a new convergence proof for strongly convex functions without using curvature conditions on the manifold, as well as a convergence discussion for nonconvex functions. We discuss a couple of ways to obtain the correction pairs used to calculate the product of the gradient with the inverse Hessian, and empirically demonstrate their use in synthetic experiments on computation of Karcher means for symmetric positive definite matrices and leading eigenvalues of large scale data matrices. We compare our method to VR-PCA for the latter experiment, along with Riemannian SVRG for both cases, and show strong convergence results for a range of datasets.
no_new_dataset
0.945951
1704.02703
Lei Bi
Lei Bi, Jinman Kim, Ashnil Kumar, Dagan Feng
Automatic Liver Lesion Detection using Cascaded Deep Residual Networks
Submission for 2017 ISBI LiTS Challenge
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic segmentation of liver lesions is a fundamental requirement for the creation of computer-aided diagnosis (CAD) and decision support (CDS) systems. Traditional segmentation approaches depend heavily upon hand-crafted features and the a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, deep learning methods based on fully convolutional networks (FCNs) have been successful in many segmentation problems, primarily because they leverage a large labelled dataset to hierarchically learn the features that best correspond to the shallow visual appearance as well as the deep semantics of the areas to be segmented. However, FCNs based on a 16-layer VGGNet architecture have limited capacity for additional layers, so it is challenging for them to learn more discriminative features among different classes. In this study, we overcome these limitations using deep residual networks (ResNets) to segment liver lesions. ResNets contain skip connections between convolutional layers, which solve the problem of degrading training accuracy in very deep networks and thereby enable the use of additional layers for learning more discriminative features. In addition, we achieve more precise boundary definitions through a novel cascaded ResNet architecture with multi-scale fusion that gradually learns and infers the boundaries of both the liver and the liver lesions. Our proposed method achieved 4th place in the ISBI 2017 Liver Tumor Segmentation Challenge by the submission deadline.
[ { "version": "v1", "created": "Mon, 10 Apr 2017 04:05:50 GMT" }, { "version": "v2", "created": "Sun, 21 May 2017 02:58:40 GMT" } ]
2017-05-23T00:00:00
[ [ "Bi", "Lei", "" ], [ "Kim", "Jinman", "" ], [ "Kumar", "Ashnil", "" ], [ "Feng", "Dagan", "" ] ]
TITLE: Automatic Liver Lesion Detection using Cascaded Deep Residual Networks ABSTRACT: Automatic segmentation of liver lesions is a fundamental requirement for the creation of computer-aided diagnosis (CAD) and decision support (CDS) systems. Traditional segmentation approaches depend heavily upon hand-crafted features and the a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, deep learning methods based on fully convolutional networks (FCNs) have been successful in many segmentation problems, primarily because they leverage a large labelled dataset to hierarchically learn the features that best correspond to the shallow visual appearance as well as the deep semantics of the areas to be segmented. However, FCNs based on a 16-layer VGGNet architecture have limited capacity for additional layers, so it is challenging for them to learn more discriminative features among different classes. In this study, we overcome these limitations using deep residual networks (ResNets) to segment liver lesions. ResNets contain skip connections between convolutional layers, which solve the problem of degrading training accuracy in very deep networks and thereby enable the use of additional layers for learning more discriminative features. In addition, we achieve more precise boundary definitions through a novel cascaded ResNet architecture with multi-scale fusion that gradually learns and infers the boundaries of both the liver and the liver lesions. Our proposed method achieved 4th place in the ISBI 2017 Liver Tumor Segmentation Challenge by the submission deadline.
no_new_dataset
0.946101
1704.04599
Arkan Al-Hamodi
Arkan A. G. Al-Hamodi, Songfeng Lu
A novel approach for fast mining frequent itemsets use N-list structure based on MapReduce
11 pages, 10 figures
null
null
null
cs.DC cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Frequent pattern mining is one of the most significant topics in data mining. In recent years, many algorithms have been proposed for mining frequent itemsets. One such algorithm, PrePost, mines frequent itemsets based on the N-list data structure. PrePost is enhanced by implementing a compact PPC-tree together with the general tree, and it can find frequent itemsets using only the required pre-order and post-order for each node. In this chapter, we improve the PrePost algorithm on the Hadoop platform (HPrepost), using the MapReduce programming model. The main goals of the proposed method are to mine frequent itemsets efficiently, with less running time and memory usage. We have conducted experiments comparing the proposed scheme with other algorithms. On dense datasets, which have a large average transaction length, HPrepost is more effective than existing frequent itemset algorithms in terms of execution time and memory usage for all minimum-support thresholds. In general, our algorithm outperforms others in terms of runtime and memory usage with small thresholds and large datasets.
[ { "version": "v1", "created": "Sat, 15 Apr 2017 07:23:40 GMT" } ]
2017-05-23T00:00:00
[ [ "Al-Hamodi", "Arkan A. G.", "" ], [ "Lu", "Songfeng", "" ] ]
TITLE: A novel approach for fast mining frequent itemsets use N-list structure based on MapReduce ABSTRACT: Frequent pattern mining is one of the most significant topics in data mining. In recent years, many algorithms have been proposed for mining frequent itemsets. One such algorithm, PrePost, mines frequent itemsets based on the N-list data structure. PrePost is enhanced by implementing a compact PPC-tree together with the general tree, and it can find frequent itemsets using only the required pre-order and post-order for each node. In this chapter, we improve the PrePost algorithm on the Hadoop platform (HPrepost), using the MapReduce programming model. The main goals of the proposed method are to mine frequent itemsets efficiently, with less running time and memory usage. We have conducted experiments comparing the proposed scheme with other algorithms. On dense datasets, which have a large average transaction length, HPrepost is more effective than existing frequent itemset algorithms in terms of execution time and memory usage for all minimum-support thresholds. In general, our algorithm outperforms others in terms of runtime and memory usage with small thresholds and large datasets.
no_new_dataset
0.948058
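To give the flavor of the MapReduce formulation (without the N-list/PPC-tree machinery), here is a word-count-style map/reduce sketch that counts single items and keeps those meeting minimum support; the transaction database and threshold are toy values of our choosing.

```python
from collections import Counter
from itertools import chain

def map_phase(transactions):
    """Map: emit (item, 1) for every item occurrence, word-count style."""
    return chain.from_iterable(((item, 1) for item in t) for t in transactions)

def reduce_phase(pairs, min_sup):
    """Reduce: sum counts per item and keep items meeting minimum support."""
    counts = Counter()
    for item, c in pairs:
        counts[item] += c
    return {i: n for i, n in counts.items() if n >= min_sup}

db = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["a", "c", "d"]]
print(reduce_phase(map_phase(db), min_sup=3))  # {'a': 3, 'c': 4}
```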
1705.05183
Sahil Manchanda
Sahil Manchanda and Ashish Anand
Representation learning of drug and disease terms for drug repositioning
Accepted to appear in 3rd IEEE International Conference on Cybernetics (Spl Session: Deep Learning for Prediction and Estimation)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drug repositioning (DR) refers to the identification of novel indications for approved drugs. The huge investment of time and money required, along with the risk of failure in clinical trials, has led to a surge of interest in drug repositioning. DR exploits two major aspects of drugs and diseases: the existence of similarity among drugs and among diseases due to their shared genes, pathways, or common biological effects. Existing methods of identifying drug-disease associations rely mainly on the information available in structured databases. On the other hand, the abundant information available in the form of free text in biomedical research articles is not being fully exploited. Word embedding, i.e., obtaining vector representations of words from a large corpus of free text using neural network methods, has been shown to give significant performance on several natural language processing tasks. In this work we propose a novel representation learning approach that obtains features of drugs and diseases by combining complementary information available in unstructured texts and structured datasets. We then use a matrix completion approach on these feature vectors to learn a projection matrix between the drug and disease vector spaces. The proposed method shows competitive performance with state-of-the-art methods. Further, case studies on Alzheimer's disease and hypertension show that the predicted associations match existing knowledge.
[ { "version": "v1", "created": "Mon, 15 May 2017 12:29:52 GMT" }, { "version": "v2", "created": "Sat, 20 May 2017 12:29:56 GMT" } ]
2017-05-23T00:00:00
[ [ "Manchanda", "Sahil", "" ], [ "Anand", "Ashish", "" ] ]
TITLE: Representation learning of drug and disease terms for drug repositioning ABSTRACT: Drug repositioning (DR) refers to the identification of novel indications for approved drugs. The huge investment of time and money required, along with the risk of failure in clinical trials, has led to a surge of interest in drug repositioning. DR exploits two major aspects of drugs and diseases: the existence of similarity among drugs and among diseases due to their shared genes, pathways, or common biological effects. Existing methods of identifying drug-disease associations rely mainly on the information available in structured databases. On the other hand, the abundant information available in the form of free text in biomedical research articles is not being fully exploited. Word embedding, i.e., obtaining vector representations of words from a large corpus of free text using neural network methods, has been shown to give significant performance on several natural language processing tasks. In this work we propose a novel representation learning approach that obtains features of drugs and diseases by combining complementary information available in unstructured texts and structured datasets. We then use a matrix completion approach on these feature vectors to learn a projection matrix between the drug and disease vector spaces. The proposed method shows competitive performance with state-of-the-art methods. Further, case studies on Alzheimer's disease and hypertension show that the predicted associations match existing knowledge.
no_new_dataset
0.9462
1705.07202
Lei Cai
Lei Cai and Hongyang Gao and Shuiwang Ji
Multi-Stage Variational Auto-Encoders for Coarse-to-Fine Image Generation
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The variational auto-encoder (VAE) is a powerful unsupervised learning framework for image generation. One drawback of the VAE is that it generates blurry images due to its Gaussianity assumption and the resulting L2 loss. To allow the generation of high-quality images by the VAE, we increase the capacity of the decoder network by employing residual blocks and skip connections, which also enable efficient optimization. To overcome the limitation of the L2 loss, we propose to generate images in a multi-stage manner, from coarse to fine. In the simplest case, the proposed multi-stage VAE divides the decoder into two components, in which the second component generates refined images based on the coarse images generated by the first component. Since the second component is independent of the VAE model, it can employ loss functions beyond the L2 loss and different model architectures. The proposed framework can easily be generalized to contain more than two components. Experimental results on the MNIST and CelebA datasets demonstrate that the proposed multi-stage VAE can generate sharper images than the original VAE.
[ { "version": "v1", "created": "Fri, 19 May 2017 21:51:30 GMT" } ]
2017-05-23T00:00:00
[ [ "Cai", "Lei", "" ], [ "Gao", "Hongyang", "" ], [ "Ji", "Shuiwang", "" ] ]
TITLE: Multi-Stage Variational Auto-Encoders for Coarse-to-Fine Image Generation ABSTRACT: The variational auto-encoder (VAE) is a powerful unsupervised learning framework for image generation. One drawback of the VAE is that it generates blurry images due to its Gaussianity assumption and the resulting L2 loss. To allow the generation of high-quality images by the VAE, we increase the capacity of the decoder network by employing residual blocks and skip connections, which also enable efficient optimization. To overcome the limitation of the L2 loss, we propose to generate images in a multi-stage manner, from coarse to fine. In the simplest case, the proposed multi-stage VAE divides the decoder into two components, in which the second component generates refined images based on the coarse images generated by the first component. Since the second component is independent of the VAE model, it can employ loss functions beyond the L2 loss and different model architectures. The proposed framework can easily be generalized to contain more than two components. Experimental results on the MNIST and CelebA datasets demonstrate that the proposed multi-stage VAE can generate sharper images than the original VAE.
no_new_dataset
0.94868
1705.07256
Samet Oymak
Samet Oymak, Mehrdad Mahdavi, Jiasi Chen
Learning Feature Nonlinearities with Non-Convex Regularized Binned Regression
22 pages, 7 figures
null
null
null
cs.LG cs.IT math.IT math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In various applications, the relations between the dependent and independent variables are highly nonlinear. Consequently, for large-scale complex problems, neural networks and regression trees are commonly preferred over linear models such as Lasso. This work proposes learning the feature nonlinearities by binning feature values and finding the best fit in each quantile using non-convex regularized linear regression. The algorithm first captures the dependence between neighboring quantiles by enforcing smoothness via a piecewise-constant/linear approximation and then selects a sparse subset of good features. We prove that the proposed algorithm is statistically and computationally efficient. In particular, it achieves a linear rate of convergence while requiring a near-minimal number of samples. Evaluations on synthetic and real datasets demonstrate that the algorithm is competitive with the current state of the art and accurately learns feature nonlinearities. Finally, we explore an interesting connection between the binning stage of our algorithm and sparse Johnson-Lindenstrauss matrices.
[ { "version": "v1", "created": "Sat, 20 May 2017 03:46:32 GMT" } ]
2017-05-23T00:00:00
[ [ "Oymak", "Samet", "" ], [ "Mahdavi", "Mehrdad", "" ], [ "Chen", "Jiasi", "" ] ]
TITLE: Learning Feature Nonlinearities with Non-Convex Regularized Binned Regression ABSTRACT: In various applications, the relations between the dependent and independent variables are highly nonlinear. Consequently, for large-scale complex problems, neural networks and regression trees are commonly preferred over linear models such as Lasso. This work proposes learning the feature nonlinearities by binning feature values and finding the best fit in each quantile using non-convex regularized linear regression. The algorithm first captures the dependence between neighboring quantiles by enforcing smoothness via a piecewise-constant/linear approximation and then selects a sparse subset of good features. We prove that the proposed algorithm is statistically and computationally efficient. In particular, it achieves a linear rate of convergence while requiring a near-minimal number of samples. Evaluations on synthetic and real datasets demonstrate that the algorithm is competitive with the current state of the art and accurately learns feature nonlinearities. Finally, we explore an interesting connection between the binning stage of our algorithm and sparse Johnson-Lindenstrauss matrices.
no_new_dataset
0.947527
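A rough numpy/scikit-learn sketch of the binning stage: expand each feature into quantile-bin indicators so that a sparse linear fit yields a piecewise-constant nonlinearity per feature. Plain Lasso stands in for the paper's non-convex regularizer and smoothness coupling, so this illustrates the representation, not the actual algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

def quantile_bin_expand(X, n_bins=8):
    """Expand each feature into one-hot quantile-bin indicators so a
    linear model can fit a piecewise-constant nonlinearity per feature."""
    cols = []
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        idx = np.digitize(X[:, j], edges)   # bin index per sample
        cols.append(np.eye(n_bins)[idx])    # one-hot bin indicators
    return np.hstack(cols)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)   # nonlinear in feature 0 only
model = Lasso(alpha=0.01).fit(quantile_bin_expand(X), y)
print(model.coef_.round(2))   # coefficient mass concentrates on feature 0's bins
```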
1705.07258
Ziqi Yan
Ziqi Yan, Jiqiang Liu, Gang Li, Zhen Han, Shuo Qiu
PrivMin: Differentially Private MinHash for Jaccard Similarity Computation
27 pages, 6 figures, 4 tables
null
null
null
cs.DS cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many industrial applications of big data, the Jaccard similarity computation has been widely used to measure the distance between two profiles or sets respectively owned by two users. Yet, a semi-honest user with unpredictable knowledge may also deduce private or sensitive information (e.g., the existence of a single element in the original sets) about the other user via the shared similarity. In this paper, we aim at solving the privacy issues in Jaccard similarity computation with strict differential privacy guarantees. To achieve this, we first define Conditional $\epsilon$-DPSO, a relaxed differential privacy definition regarding set operations, and prove that the MinHash-based Jaccard Similarity Computation (MH-JSC) satisfies this definition. Then, to achieve strict differential privacy in MH-JSC, we propose the PrivMin algorithm, which consists of two private operations: 1) Private MinHash Value Generation, which works by introducing Exponential noise into the generation of the MinHash signature, and 2) Randomized MinHashing Steps Selection, which works by adopting the Randomized Response technique to privately select several steps within the MinHashing phase to be deployed with the Exponential mechanism. Experiments on real datasets demonstrate that the proposed PrivMin algorithm can successfully retain the utility of the computed similarity while preserving privacy.
[ { "version": "v1", "created": "Sat, 20 May 2017 04:09:12 GMT" } ]
2017-05-23T00:00:00
[ [ "Yan", "Ziqi", "" ], [ "Liu", "Jiqiang", "" ], [ "Li", "Gang", "" ], [ "Han", "Zhen", "" ], [ "Qiu", "Shuo", "" ] ]
TITLE: PrivMin: Differentially Private MinHash for Jaccard Similarity Computation ABSTRACT: In many industrial applications of big data, the Jaccard similarity computation has been widely used to measure the distance between two profiles or sets respectively owned by two users. Yet, a semi-honest user with unpredictable knowledge may also deduce private or sensitive information (e.g., the existence of a single element in the original sets) about the other user via the shared similarity. In this paper, we aim at solving the privacy issues in Jaccard similarity computation with strict differential privacy guarantees. To achieve this, we first define Conditional $\epsilon$-DPSO, a relaxed differential privacy definition regarding set operations, and prove that the MinHash-based Jaccard Similarity Computation (MH-JSC) satisfies this definition. Then, to achieve strict differential privacy in MH-JSC, we propose the PrivMin algorithm, which consists of two private operations: 1) Private MinHash Value Generation, which works by introducing Exponential noise into the generation of the MinHash signature, and 2) Randomized MinHashing Steps Selection, which works by adopting the Randomized Response technique to privately select several steps within the MinHashing phase to be deployed with the Exponential mechanism. Experiments on real datasets demonstrate that the proposed PrivMin algorithm can successfully retain the utility of the computed similarity while preserving privacy.
no_new_dataset
0.945851
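For reference, the (non-private) MinHash estimator that MH-JSC builds on: the fraction of matching signature slots estimates Jaccard similarity. The affine hash parameterization below is an implementation convenience, and the paper's Exponential noise and Randomized Response steps are deliberately not implemented here.

```python
import random

def minhash_signature(items, n_perm=128, seed=0):
    """One MinHash slot per random hash function: keep the minimum hash
    over the set. Seeded affine hashes stand in for random permutations
    (an implementation convenience, not the paper's construction)."""
    rng = random.Random(seed)
    P = (1 << 61) - 1   # Mersenne prime modulus
    params = [(rng.randrange(1, P), rng.randrange(P)) for _ in range(n_perm)]
    return [min((a * hash(x) + b) % P for x in items) for a, b in params]

def jaccard_estimate(sig1, sig2):
    """Fraction of agreeing slots estimates the Jaccard similarity."""
    return sum(s == t for s, t in zip(sig1, sig2)) / len(sig1)

A, B = set(range(0, 100)), set(range(30, 130))
print(jaccard_estimate(minhash_signature(A), minhash_signature(B)))  # true J = 70/130 ~ 0.538
```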
1705.07290
Chandra Sekhar Seelamantula
Debabrata Mahapatra, Subhadip Mukherjee, and Chandra Sekhar Seelamantula
Deep Sparse Coding Using Optimized Linear Expansion of Thresholds
Submission date: November 11, 2016. 19 pages; 9 figures
null
null
IEEE Transactions on Pattern Analysis and Machine Intelligence Manuscript ID: TPAMI-2016-11-0861;
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of reconstructing sparse signals from noisy and compressive measurements using a feed-forward deep neural network (DNN) with an architecture motivated by the iterative shrinkage-thresholding algorithm (ISTA). We maintain the weights and biases of the network links as prescribed by ISTA and model the nonlinear activation function using a linear expansion of thresholds (LET), which has been very successful in image denoising and deconvolution. The optimal set of coefficients of the parametrized activation is learned over a training dataset containing measurement-sparse signal pairs, corresponding to a fixed sensing matrix. For training, we develop an efficient second-order algorithm, which requires only matrix-vector product computations in every training epoch (Hessian-free optimization) and offers superior convergence performance than gradient-descent optimization. Subsequently, we derive an improved network architecture inspired by FISTA, a faster version of ISTA, to achieve similar signal estimation performance with about 50% of the number of layers. The resulting architecture turns out to be a deep residual network, which has recently been shown to exhibit superior performance in several visual recognition tasks. Numerical experiments demonstrate that the proposed DNN architectures lead to 3 to 4 dB improvement in the reconstruction signal-to-noise ratio (SNR), compared with the state-of-the-art sparse coding algorithms.
[ { "version": "v1", "created": "Sat, 20 May 2017 11:14:39 GMT" } ]
2017-05-23T00:00:00
[ [ "Mahapatra", "Debabrata", "" ], [ "Mukherjee", "Subhadip", "" ], [ "Seelamantula", "Chandra Sekhar", "" ] ]
TITLE: Deep Sparse Coding Using Optimized Linear Expansion of Thresholds ABSTRACT: We address the problem of reconstructing sparse signals from noisy and compressive measurements using a feed-forward deep neural network (DNN) with an architecture motivated by the iterative shrinkage-thresholding algorithm (ISTA). We maintain the weights and biases of the network links as prescribed by ISTA and model the nonlinear activation function using a linear expansion of thresholds (LET), which has been very successful in image denoising and deconvolution. The optimal set of coefficients of the parametrized activation is learned over a training dataset containing measurement-sparse signal pairs, corresponding to a fixed sensing matrix. For training, we develop an efficient second-order algorithm, which requires only matrix-vector product computations in every training epoch (Hessian-free optimization) and offers superior convergence performance than gradient-descent optimization. Subsequently, we derive an improved network architecture inspired by FISTA, a faster version of ISTA, to achieve similar signal estimation performance with about 50% of the number of layers. The resulting architecture turns out to be a deep residual network, which has recently been shown to exhibit superior performance in several visual recognition tasks. Numerical experiments demonstrate that the proposed DNN architectures lead to 3 to 4 dB improvement in the reconstruction signal-to-noise ratio (SNR), compared with the state-of-the-art sparse coding algorithms.
no_new_dataset
0.945096
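As background, a sketch of the ISTA iteration that the network architecture unrolls, with the fixed soft-threshold nonlinearity that the paper replaces by a learned linear expansion of thresholds (LET). The problem sizes and the regularization weight are toy values.

```python
import numpy as np

def soft_threshold(x, t):
    """The fixed shrinkage nonlinearity of ISTA; the paper learns a
    richer parametric nonlinearity (LET) in its place."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100)) / np.sqrt(50)   # compressive sensing matrix
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]         # 3-sparse ground truth
print(np.round(ista(A, A @ x_true), 1)[[3, 40, 77]])
```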
1705.07311
Mohammad Aliannejadi
Mohammad Aliannejadi, Ida Mele, and Fabio Crestani
Personalized Ranking for Context-Aware Venue Suggestion
The 32nd ACM SIGAPP Symposium On Applied Computing (SAC), Marrakech, Morocco, April 4-6, 2017
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Making personalized and context-aware suggestions of venues to users is crucial in venue recommendation. These suggestions are often based on matching the venues' features with the users' preferences, which can be collected from previously visited locations. In this paper we present a novel user-modeling approach that relies on a set of scoring functions for making personalized suggestions of venues based on venue content and reviews as well as user context. Our experiments, conducted on the dataset of the TREC Contextual Suggestion Track, show that our methodology outperforms state-of-the-art approaches by a significant margin.
[ { "version": "v1", "created": "Sat, 20 May 2017 14:21:02 GMT" } ]
2017-05-23T00:00:00
[ [ "Aliannejadi", "Mohammad", "" ], [ "Mele", "Ida", "" ], [ "Crestani", "Fabio", "" ] ]
TITLE: Personalized Ranking for Context-Aware Venue Suggestion ABSTRACT: Making personalized and context-aware suggestions of venues to users is crucial in venue recommendation. These suggestions are often based on matching the venues' features with the users' preferences, which can be collected from previously visited locations. In this paper we present a novel user-modeling approach that relies on a set of scoring functions for making personalized suggestions of venues based on venue content and reviews as well as user context. Our experiments, conducted on the dataset of the TREC Contextual Suggestion Track, show that our methodology outperforms state-of-the-art approaches by a significant margin.
no_new_dataset
0.951639
1705.07366
Jeffrey Humpherys
Kevin Miller, Chris Hettinger, Jeffrey Humpherys, Tyler Jarvis, and David Kartchner
Forward Thinking: Building Deep Random Forests
null
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The success of deep neural networks has inspired many to wonder whether other learners could benefit from deep, layered architectures. We present a general framework called forward thinking for deep learning that generalizes the architectural flexibility and sophistication of deep neural networks while also allowing for (i) different types of learning functions in the network, other than neurons, and (ii) the ability to adaptively deepen the network as needed to improve results. This is done by training one layer at a time, and once a layer is trained, the input data are mapped forward through the layer to create a new learning problem. The process is then repeated, transforming the data through multiple layers, one at a time, rendering a new dataset, which is expected to be better behaved, and on which a final output layer can achieve good performance. In the case where the neurons of deep neural nets are replaced with decision trees, we call the result a Forward Thinking Deep Random Forest (FTDRF). We demonstrate a proof of concept by applying FTDRF on the MNIST dataset. We also provide a general mathematical formulation that allows for other types of deep learning problems to be considered.
[ { "version": "v1", "created": "Sat, 20 May 2017 22:39:51 GMT" } ]
2017-05-23T00:00:00
[ [ "Miller", "Kevin", "" ], [ "Hettinger", "Chris", "" ], [ "Humpherys", "Jeffrey", "" ], [ "Jarvis", "Tyler", "" ], [ "Kartchner", "David", "" ] ]
TITLE: Forward Thinking: Building Deep Random Forests ABSTRACT: The success of deep neural networks has inspired many to wonder whether other learners could benefit from deep, layered architectures. We present a general framework called forward thinking for deep learning that generalizes the architectural flexibility and sophistication of deep neural networks while also allowing for (i) different types of learning functions in the network, other than neurons, and (ii) the ability to adaptively deepen the network as needed to improve results. This is done by training one layer at a time, and once a layer is trained, the input data are mapped forward through the layer to create a new learning problem. The process is then repeated, transforming the data through multiple layers, one at a time, rendering a new dataset, which is expected to be better behaved, and on which a final output layer can achieve good performance. In the case where the neurons of deep neural nets are replaced with decision trees, we call the result a Forward Thinking Deep Random Forest (FTDRF). We demonstrate a proof of concept by applying FTDRF on the MNIST dataset. We also provide a general mathematical formulation that allows for other types of deep learning problems to be considered.
new_dataset
0.738009
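A small scikit-learn sketch of the layer-wise recipe: train a forest, map the data forward by appending its class-probability outputs to the input, and repeat before fitting a final output layer. Depth, width, and the concatenation choice are our assumptions, not the paper's exact FTDRF configuration.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Train one layer at a time; each layer's class probabilities become
# extra features for the next layer's (transformed) learning problem.
for layer in range(2):
    rf = RandomForestClassifier(n_estimators=50, random_state=layer).fit(Xtr, ytr)
    Xtr = np.hstack([Xtr, rf.predict_proba(Xtr)])
    Xte = np.hstack([Xte, rf.predict_proba(Xte)])

final = RandomForestClassifier(n_estimators=100, random_state=9).fit(Xtr, ytr)
print(f"test accuracy: {final.score(Xte, yte):.3f}")
```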
1705.07522
Hamid Tizhoosh
Morteza Babaie, Shivam Kalra, Aditya Sriram, Christopher Mitcheltree, Shujin Zhu, Amin Khatami, Shahryar Rahnamayan, H.R. Tizhoosh
Classification and Retrieval of Digital Pathology Scans: A New Dataset
Accepted for presentation at Workshop for Computer Vision for Microscopy Image Analysis (CVMI 2017) @ CVPR 2017, Honolulu, Hawaii
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce a new dataset, \textbf{Kimia Path24}, for image classification and retrieval in digital pathology. We use the whole scan images of 24 different tissue textures to generate 1,325 test patches of size 1000$\times$1000 (0.5mm$\times$0.5mm). Training data can be generated according to preferences of algorithm designer and can range from approximately 27,000 to over 50,000 patches if the preset parameters are adopted. We propose a compound patch-and-scan accuracy measurement that makes achieving high accuracies quite challenging. In addition, we set the benchmarking line by applying LBP, dictionary approach and convolutional neural nets (CNNs) and report their results. The highest accuracy was 41.80\% for CNN.
[ { "version": "v1", "created": "Mon, 22 May 2017 00:00:18 GMT" } ]
2017-05-23T00:00:00
[ [ "Babaie", "Morteza", "" ], [ "Kalra", "Shivam", "" ], [ "Sriram", "Aditya", "" ], [ "Mitcheltree", "Christopher", "" ], [ "Zhu", "Shujin", "" ], [ "Khatami", "Amin", "" ], [ "Rahnamayan", "Shahryar", "" ], [ "Tizhoosh", "H. R.", "" ] ]
TITLE: Classification and Retrieval of Digital Pathology Scans: A New Dataset ABSTRACT: In this paper, we introduce a new dataset, \textbf{Kimia Path24}, for image classification and retrieval in digital pathology. We use the whole scan images of 24 different tissue textures to generate 1,325 test patches of size 1000$\times$1000 (0.5mm$\times$0.5mm). Training data can be generated according to preferences of algorithm designer and can range from approximately 27,000 to over 50,000 patches if the preset parameters are adopted. We propose a compound patch-and-scan accuracy measurement that makes achieving high accuracies quite challenging. In addition, we set the benchmarking line by applying LBP, dictionary approach and convolutional neural nets (CNNs) and report their results. The highest accuracy was 41.80\% for CNN.
new_dataset
0.957675
1705.07563
Yuxin Su
Yuxin Su, Irwin King, Michael Lyu
Learning to Rank Using Localized Geometric Mean Metrics
To appear in SIGIR'17
null
10.1145/3077136.3080828
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many learning-to-rank (LtR) algorithms focus on the query-independent model, in which the query and the document do not lie in the same feature space and the rankers rely on an ensemble of features of the query-document pair rather than on the similarity between the query instance and the documents. However, existing algorithms do not consider local structures in the query-document feature space and are fragile to irrelevant noise features. In this paper, we propose a novel Riemannian metric learning algorithm to capture the local structures and develop a robust LtR algorithm. First, we design a concept called the \textit{ideal candidate document} to bring metric learning to the query-independent model. Previous metric learning algorithms aiming to find an optimal metric space are only suitable for the query-dependent model, in which the query instance and documents belong to the same feature space and the similarity is computed directly from the metric space. Then we extend the new and extremely fast global Geometric Mean Metric Learning (GMML) algorithm to develop a localized GMML, namely L-GMML. Based on the combination of locally learned metrics, we employ the popular Normalized Discounted Cumulative Gain (NDCG) scorer and the Weighted Approximate Rank Pairwise (WARP) loss to optimize the \textit{ideal candidate document} for each query candidate set. Finally, we can quickly evaluate all candidates via their similarity to the \textit{ideal candidate document}. By leveraging the ability of metric learning algorithms to describe complex structural information, our approach gives us a principled and efficient way to perform LtR tasks. Experiments on real-world datasets demonstrate that our proposed L-GMML algorithm outperforms state-of-the-art metric learning-to-rank methods and established query-independent LtR algorithms in terms of accuracy and computational efficiency.
[ { "version": "v1", "created": "Mon, 22 May 2017 05:46:44 GMT" } ]
2017-05-23T00:00:00
[ [ "Su", "Yuxin", "" ], [ "King", "Irwin", "" ], [ "Lyu", "Michael", "" ] ]
TITLE: Learning to Rank Using Localized Geometric Mean Metrics ABSTRACT: Many learning-to-rank (LtR) algorithms focus on the query-independent model, in which the query and the document do not lie in the same feature space and the rankers rely on an ensemble of features of the query-document pair rather than on the similarity between the query instance and the documents. However, existing algorithms do not consider local structures in the query-document feature space and are fragile to irrelevant noise features. In this paper, we propose a novel Riemannian metric learning algorithm to capture the local structures and develop a robust LtR algorithm. First, we design a concept called the \textit{ideal candidate document} to bring metric learning to the query-independent model. Previous metric learning algorithms aiming to find an optimal metric space are only suitable for the query-dependent model, in which the query instance and documents belong to the same feature space and the similarity is computed directly from the metric space. Then we extend the new and extremely fast global Geometric Mean Metric Learning (GMML) algorithm to develop a localized GMML, namely L-GMML. Based on the combination of locally learned metrics, we employ the popular Normalized Discounted Cumulative Gain (NDCG) scorer and the Weighted Approximate Rank Pairwise (WARP) loss to optimize the \textit{ideal candidate document} for each query candidate set. Finally, we can quickly evaluate all candidates via their similarity to the \textit{ideal candidate document}. By leveraging the ability of metric learning algorithms to describe complex structural information, our approach gives us a principled and efficient way to perform LtR tasks. Experiments on real-world datasets demonstrate that our proposed L-GMML algorithm outperforms state-of-the-art metric learning-to-rank methods and established query-independent LtR algorithms in terms of accuracy and computational efficiency.
no_new_dataset
0.949949
1705.07609
Hassan Foroosh
Yuping Shen and Hassan Foroosh
View-Invariant Recognition of Action Style Self-Dissimilarity
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Self-similarity was recently introduced as a measure of inter-class congruence for classification of actions. Herein, we investigate the dual problem of intra-class dissimilarity for classification of action styles. We introduce self-dissimilarity matrices that discriminate between same actions performed by different subjects regardless of viewing direction and camera parameters. We investigate two frameworks using these invariant style dissimilarity measures based on Principal Component Analysis (PCA) and Fisher Discriminant Analysis (FDA). Extensive experiments performed on IXMAS dataset indicate remarkably good discriminant characteristics for the proposed invariant measures for gender recognition from video data.
[ { "version": "v1", "created": "Mon, 22 May 2017 08:38:19 GMT" } ]
2017-05-23T00:00:00
[ [ "Shen", "Yuping", "" ], [ "Foroosh", "Hassan", "" ] ]
TITLE: View-Invariant Recognition of Action Style Self-Dissimilarity ABSTRACT: Self-similarity was recently introduced as a measure of inter-class congruence for classification of actions. Herein, we investigate the dual problem of intra-class dissimilarity for classification of action styles. We introduce self-dissimilarity matrices that discriminate between same actions performed by different subjects regardless of viewing direction and camera parameters. We investigate two frameworks using these invariant style dissimilarity measures based on Principal Component Analysis (PCA) and Fisher Discriminant Analysis (FDA). Extensive experiments performed on IXMAS dataset indicate remarkably good discriminant characteristics for the proposed invariant measures for gender recognition from video data.
no_new_dataset
0.942981
1705.07692
Yunlong Yu
Zhong Ji, Yunxin Sun, Yulong Yu, Jichang Guo, and Yanwei Pang
Semantic Softmax Loss for Zero-Shot Learning
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A typical pipeline for Zero-Shot Learning (ZSL) is to integrate the visual features and the class semantic descriptors into a multimodal framework with a linear or bilinear model. However, the visual features and the class semantic descriptors lie in different structural spaces, so a linear or bilinear model cannot capture the semantic interactions between the different modalities well. In this letter, we propose a nonlinear approach that casts ZSL as a multi-class classification problem via a Semantic Softmax Loss, embedding the class semantic descriptors into the softmax layer of a multi-class classification network. To narrow the structural differences between the visual features and the semantic descriptors, we further apply an L2 normalization constraint to the differences between the visual features and the visual prototypes reconstructed from the semantic descriptors. Results on three benchmark datasets, i.e., AwA, CUB and SUN, demonstrate that the proposed approach boosts performance steadily and achieves state-of-the-art performance for both zero-shot classification and zero-shot retrieval.
[ { "version": "v1", "created": "Mon, 22 May 2017 12:26:04 GMT" } ]
2017-05-23T00:00:00
[ [ "Ji", "Zhong", "" ], [ "Sun", "Yunxin", "" ], [ "Yu", "Yulong", "" ], [ "Guo", "Jichang", "" ], [ "Pang", "Yanwei", "" ] ]
TITLE: Semantic Softmax Loss for Zero-Shot Learning ABSTRACT: A typical pipeline for Zero-Shot Learning (ZSL) is to integrate the visual features and the class semantic descriptors into a multimodal framework with a linear or bilinear model. However, the visual features and the class semantic descriptors lie in different structural spaces, so a linear or bilinear model cannot capture the semantic interactions between the different modalities well. In this letter, we propose a nonlinear approach that casts ZSL as a multi-class classification problem via a Semantic Softmax Loss, embedding the class semantic descriptors into the softmax layer of a multi-class classification network. To narrow the structural differences between the visual features and the semantic descriptors, we further apply an L2 normalization constraint to the differences between the visual features and the visual prototypes reconstructed from the semantic descriptors. Results on three benchmark datasets, i.e., AwA, CUB and SUN, demonstrate that the proposed approach boosts performance steadily and achieves state-of-the-art performance for both zero-shot classification and zero-shot retrieval.
no_new_dataset
0.945801
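A speculative PyTorch sketch of the core mechanism follows: the class semantic descriptors serve as the softmax layer's weights, with an L2 term tying projected visual features to the class prototypes. The projection size, the normalization, and the 0.1 loss weight are assumptions for illustration, not the paper's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticSoftmax(nn.Module):
    """Class scores are compatibilities between projected visual features
    and fixed class semantic descriptors (one row per class)."""
    def __init__(self, d_visual, semantic_descriptors):
        super().__init__()
        self.register_buffer("S", semantic_descriptors)         # (C, d_sem)
        self.proj = nn.Linear(d_visual, semantic_descriptors.shape[1])

    def forward(self, visual_feats, labels=None):
        z = self.proj(visual_feats)                             # (B, d_sem)
        logits = z @ self.S.t()                                 # (B, C)
        if labels is None:
            return logits   # at test time, unseen-class descriptors can be swapped in
        ce = F.cross_entropy(logits, labels)
        # L2 constraint between projected features and their class prototypes
        l2 = F.mse_loss(F.normalize(z, dim=1),
                        F.normalize(self.S[labels], dim=1))
        return ce + 0.1 * l2   # assumed weighting, for illustration only
```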
1705.07818
Chenliang Xu
Li Ding and Chenliang Xu
TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Action segmentation, a milestone towards building automatic systems that understand untrimmed videos, has received considerable attention in recent years. It is typically modeled as a sequence labeling problem, but it differs substantially from text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet), which has an encoder-decoder architecture: the encoder consists of a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that learn and memorize long-term action dependencies after the encoding stage. Our model is simple but extremely effective for video sequence labeling. Experimental results on three public action segmentation datasets show that the proposed model achieves superior performance over the state of the art.
[ { "version": "v1", "created": "Mon, 22 May 2017 15:55:08 GMT" } ]
2017-05-23T00:00:00
[ [ "Ding", "Li", "" ], [ "Xu", "Chenliang", "" ] ]
TITLE: TricorNet: A Hybrid Temporal Convolutional and Recurrent Network for Video Action Segmentation ABSTRACT: Action segmentation, a milestone towards building automatic systems that understand untrimmed videos, has received considerable attention in recent years. It is typically modeled as a sequence labeling problem, but it differs substantially from text parsing or speech processing. In this paper, we introduce a novel hybrid temporal convolutional and recurrent network (TricorNet), which has an encoder-decoder architecture: the encoder consists of a hierarchy of temporal convolutional kernels that capture the local motion changes of different actions; the decoder is a hierarchy of recurrent neural networks that learn and memorize long-term action dependencies after the encoding stage. Our model is simple but extremely effective for video sequence labeling. Experimental results on three public action segmentation datasets show that the proposed model achieves superior performance over the state of the art.
no_new_dataset
0.946597
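A toy PyTorch rendering of the encoder-decoder shape described above is given below; the real TricorNet uses hierarchies of temporal convolutions and recurrent layers, so the depths, kernel sizes, and hidden widths here are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class TemporalConvRNN(nn.Module):
    """Hybrid sketch: temporal-conv encoder + bidirectional LSTM decoder
    producing one action label per frame."""
    def __init__(self, d_in, n_classes, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(d_in, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.LSTM(hidden, hidden, num_layers=2,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                     # x: (batch, time, d_in) frame features
        h = self.encoder(x.transpose(1, 2))   # convolve over the time axis
        out, _ = self.decoder(h.transpose(1, 2))
        return self.classifier(out)           # (batch, time, n_classes)
```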
1604.06414
Da Zheng
Da Zheng, Disa Mhembere, Joshua T. Vogelstein, Carey E. Priebe, Randal Burns
FlashR: R-Programmed Parallel and Scalable Machine Learning using SSDs
null
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
R is one of the most popular programming languages for statistics and machine learning, but the R framework is relatively slow and unable to scale to large datasets. The general approach for speeding up an implementation in R is to implement the algorithms in C or FORTRAN and provide an R wrapper. FlashR takes a different approach: it executes R code in parallel and scales the code beyond memory capacity by utilizing solid-state drives (SSDs) automatically. It provides a small number of generalized operations (GenOps) upon which we reimplement a large number of matrix functions in the R base package. As such, FlashR parallelizes and scales existing R code with little or no modification. To reduce data movement between CPU and SSDs, FlashR evaluates matrix operations lazily, fuses operations at runtime, and uses cache-aware, two-level matrix partitioning. We evaluate FlashR on a variety of machine learning and statistics algorithms on inputs of up to four billion data points. FlashR out-of-core closely tracks the performance of FlashR in-memory. The R code for machine learning algorithms executed in FlashR outperforms the in-memory execution of H2O and Spark MLlib by a factor of 2-10 and outperforms Revolution R Open by more than an order of magnitude.
[ { "version": "v1", "created": "Thu, 21 Apr 2016 18:43:38 GMT" }, { "version": "v2", "created": "Sat, 30 Apr 2016 00:43:50 GMT" }, { "version": "v3", "created": "Wed, 18 May 2016 13:42:30 GMT" }, { "version": "v4", "created": "Thu, 18 May 2017 23:28:01 GMT" } ]
2017-05-22T00:00:00
[ [ "Zheng", "Da", "" ], [ "Mhembere", "Disa", "" ], [ "Vogelstein", "Joshua T.", "" ], [ "Priebe", "Carey E.", "" ], [ "Burns", "Randal", "" ] ]
TITLE: FlashR: R-Programmed Parallel and Scalable Machine Learning using SSDs ABSTRACT: R is one of the most popular programming languages for statistics and machine learning, but the R framework is relatively slow and unable to scale to large datasets. The general approach for speeding up an implementation in R is to implement the algorithms in C or FORTRAN and provide an R wrapper. FlashR takes a different approach: it executes R code in parallel and scales the code beyond memory capacity by utilizing solid-state drives (SSDs) automatically. It provides a small number of generalized operations (GenOps) upon which we reimplement a large number of matrix functions in the R base package. As such, FlashR parallelizes and scales existing R code with little or no modification. To reduce data movement between CPU and SSDs, FlashR evaluates matrix operations lazily, fuses operations at runtime, and uses cache-aware, two-level matrix partitioning. We evaluate FlashR on a variety of machine learning and statistics algorithms on inputs of up to four billion data points. FlashR out-of-core closely tracks the performance of FlashR in-memory. The R code for machine learning algorithms executed in FlashR outperforms the in-memory execution of H2O and Spark MLlib by a factor of 2-10 and outperforms Revolution R Open by more than an order of magnitude.
no_new_dataset
0.932515
1610.03023
Weixun Zhou
Weixun Zhou, Shawn Newsam, Congmin Li, Zhenfeng Shao
Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval
null
Remote Sens., 9(5), 489 (2017)
10.3390/rs9050489
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features, which is not only time-consuming but also tends to achieve unsatisfactory performance due to the content complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNN) for high-resolution remote sensing image retrieval (HRRSIR). To this end, two effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, the deep features are extracted from the fully-connected and convolutional layers of pre-trained CNN models, respectively; in the second scheme, we propose a novel CNN architecture based on conventional convolution layers and a three-layer perceptron. The novel CNN model is then trained on a large remote sensing dataset to learn low-dimensional features. The two schemes are evaluated on several public and challenging datasets, and the results indicate that the proposed schemes, and in particular the novel CNN, are able to achieve state-of-the-art performance.
[ { "version": "v1", "created": "Mon, 10 Oct 2016 18:45:30 GMT" }, { "version": "v2", "created": "Fri, 30 Dec 2016 19:04:58 GMT" } ]
2017-05-22T00:00:00
[ [ "Zhou", "Weixun", "" ], [ "Newsam", "Shawn", "" ], [ "Li", "Congmin", "" ], [ "Shao", "Zhenfeng", "" ] ]
TITLE: Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval ABSTRACT: Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features, which is not only time-consuming but also tends to achieve unsatisfactory performance due to the content complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNN) for high-resolution remote sensing image retrieval (HRRSIR). To this end, two effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, the deep features are extracted from the fully-connected and convolutional layers of pre-trained CNN models, respectively; in the second scheme, we propose a novel CNN architecture based on conventional convolution layers and a three-layer perceptron. The novel CNN model is then trained on a large remote sensing dataset to learn low-dimensional features. The two schemes are evaluated on several public and challenging datasets, and the results indicate that the proposed schemes, and in particular the novel CNN, are able to achieve state-of-the-art performance.
no_new_dataset
0.955858
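A minimal sketch of the first scheme follows, assuming a modern torchvision backbone as a stand-in for the pre-trained models actually evaluated in the paper: extract pooled convolutional features per image and rank database images by cosine similarity to the query.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Stand-in backbone (the paper predates ResNet-based torchvision weights).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # keep the 512-d pooled convolutional features
backbone.eval()

@torch.no_grad()
def describe(images):
    """images: (N, 3, 224, 224), ImageNet-normalized; returns unit-norm descriptors."""
    return F.normalize(backbone(images), dim=1)

def retrieve(query, database_feats, k=10):
    """query: a single (1, 3, 224, 224) image; database_feats: (N, 512) descriptors."""
    sims = database_feats @ describe(query).squeeze(0)   # cosine similarities
    return torch.topk(sims, k).indices
```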
1704.04133
Devinder Kumar
Devinder Kumar, Alexander Wong, Graham W. Taylor
Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks
Accepted at the Computer Vision and Pattern Recognition Workshop (CVPR-W) on Explainable Computer Vision, 2017
null
null
null
cs.CV cs.AI cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we propose CLass-Enhanced Attentive Response (CLEAR): an approach to visualize and understand the decisions made by deep neural networks (DNNs) given a specific input. CLEAR facilitates the visualization of attentive regions and levels of interest of DNNs during the decision-making process. It also enables the visualization of the most dominant classes associated with these attentive regions of interest. As such, CLEAR can mitigate some of the shortcomings of heatmap-based methods associated with decision ambiguity, and allows for better insights into the decision-making process of DNNs. Quantitative and qualitative experiments across three different datasets demonstrate the efficacy of CLEAR for gaining a better understanding of the inner workings of DNNs during the decision-making process.
[ { "version": "v1", "created": "Thu, 13 Apr 2017 13:44:33 GMT" }, { "version": "v2", "created": "Thu, 18 May 2017 18:38:06 GMT" } ]
2017-05-22T00:00:00
[ [ "Kumar", "Devinder", "" ], [ "Wong", "Alexander", "" ], [ "Taylor", "Graham W.", "" ] ]
TITLE: Explaining the Unexplained: A CLass-Enhanced Attentive Response (CLEAR) Approach to Understanding Deep Neural Networks ABSTRACT: In this work, we propose CLass-Enhanced Attentive Response (CLEAR): an approach to visualize and understand the decisions made by deep neural networks (DNNs) given a specific input. CLEAR facilitates the visualization of attentive regions and levels of interest of DNNs during the decision-making process. It also enables the visualization of the most dominant classes associated with these attentive regions of interest. As such, CLEAR can mitigate some of the shortcomings of heatmap-based methods associated with decision ambiguity, and allows for better insights into the decision-making process of DNNs. Quantitative and qualitative experiments across three different datasets demonstrate the efficacy of CLEAR for gaining a better understanding of the inner workings of DNNs during the decision-making process.
no_new_dataset
0.951594
1705.01567
Manuel G\"unther
Manuel G\"unther, Steve Cruz, Ethan M. Rudd, Terrance E. Boult
Toward Open-Set Face Recognition
Accepted for Publication in CVPR 2017 Biometrics Workshop
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Much research has been conducted on both face identification and face verification, with greater focus on the latter. Research on face identification has mostly focused on using closed-set protocols, which assume that all probe images used in evaluation contain identities of subjects that are enrolled in the gallery. Real systems, however, where only a fraction of probe sample identities are enrolled in the gallery, cannot make this closed-set assumption. Instead, they must assume an open set of probe samples and be able to reject/ignore those that correspond to unknown identities. In this paper, we address the widespread misconception that thresholding verification-like scores is a good way to solve the open-set face identification problem, by formulating an open-set face identification protocol and evaluating different strategies for assessing similarity. Our open-set identification protocol is based on the canonical Labeled Faces in the Wild (LFW) dataset. In addition to the known identities, we introduce the concepts of known unknowns (known, but uninteresting persons) and unknown unknowns (people never seen before) to the biometric community. We compare three algorithms for assessing similarity in a deep feature space under an open-set protocol: thresholded verification-like scores, linear discriminant analysis (LDA) scores, and extreme value machine (EVM) probabilities. Our findings suggest that thresholding EVM probabilities, which are open-set by design, outperforms thresholding verification-like scores.
[ { "version": "v1", "created": "Wed, 3 May 2017 18:10:09 GMT" }, { "version": "v2", "created": "Fri, 19 May 2017 00:24:43 GMT" } ]
2017-05-22T00:00:00
[ [ "Günther", "Manuel", "" ], [ "Cruz", "Steve", "" ], [ "Rudd", "Ethan M.", "" ], [ "Boult", "Terrance E.", "" ] ]
TITLE: Toward Open-Set Face Recognition ABSTRACT: Much research has been conducted on both face identification and face verification, with greater focus on the latter. Research on face identification has mostly focused on using closed-set protocols, which assume that all probe images used in evaluation contain identities of subjects that are enrolled in the gallery. Real systems, however, where only a fraction of probe sample identities are enrolled in the gallery, cannot make this closed-set assumption. Instead, they must assume an open set of probe samples and be able to reject/ignore those that correspond to unknown identities. In this paper, we address the widespread misconception that thresholding verification-like scores is a good way to solve the open-set face identification problem, by formulating an open-set face identification protocol and evaluating different strategies for assessing similarity. Our open-set identification protocol is based on the canonical Labeled Faces in the Wild (LFW) dataset. In addition to the known identities, we introduce the concepts of known unknowns (known, but uninteresting persons) and unknown unknowns (people never seen before) to the biometric community. We compare three algorithms for assessing similarity in a deep feature space under an open-set protocol: thresholded verification-like scores, linear discriminant analysis (LDA) scores, and extreme value machine (EVM) probabilities. Our findings suggest that thresholding EVM probabilities, which are open-set by design, outperforms thresholding verification-like scores.
no_new_dataset
0.955569
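The verification-like baseline the paper argues against can be stated in a few lines; the sketch below is illustrative, with the cosine score and gallery layout assumed rather than taken from the paper.

```python
import numpy as np

def open_set_identify(probe, gallery_feats, gallery_ids, threshold):
    """Thresholded verification-like identification: return the best-matching
    gallery identity if its cosine similarity clears the threshold,
    otherwise reject the probe as unknown."""
    sims = gallery_feats @ probe / (
        np.linalg.norm(gallery_feats, axis=1) * np.linalg.norm(probe))
    best = int(np.argmax(sims))
    return gallery_ids[best] if sims[best] >= threshold else "unknown"
```

The paper's point is that replacing this raw score with an EVM probability, which models the boundary to unknown identities explicitly, handles the rejection decision better.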
1705.03146
Shiyang Yan
Shiyang Yan, Jeremy S. Smith, Wenjin Lu and Bailing Zhang
CHAM: action recognition using convolutional hierarchical attention model
accepted by ICIP2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, the soft attention mechanism, which was originally proposed in language processing, has been applied in computer vision tasks like image captioning. This paper presents improvements to the soft attention model by combining a convolutional LSTM with a hierarchical system architecture to recognize action categories in videos. We call this model the Convolutional Hierarchical Attention Model (CHAM). The model applies a convolutional operation inside the LSTM cell and an attention map generation process to recognize actions. The hierarchical architecture of this model is able to explicitly reason on multi-granularities of action categories. The proposed architecture achieved improved results on three publicly available datasets: the UCF sports dataset, the Olympic sports dataset and the HMDB51 dataset.
[ { "version": "v1", "created": "Tue, 9 May 2017 02:27:37 GMT" }, { "version": "v2", "created": "Fri, 19 May 2017 06:11:26 GMT" } ]
2017-05-22T00:00:00
[ [ "Yan", "Shiyang", "" ], [ "Smith", "Jeremy S.", "" ], [ "Lu", "Wenjin", "" ], [ "Zhang", "Bailing", "" ] ]
TITLE: CHAM: action recognition using convolutional hierarchical attention model ABSTRACT: Recently, the soft attention mechanism, which was originally proposed in language processing, has been applied in computer vision tasks like image captioning. This paper presents improvements to the soft attention model by combining a convolutional LSTM with a hierarchical system architecture to recognize action categories in videos. We call this model the Convolutional Hierarchical Attention Model (CHAM). The model applies a convolutional operation inside the LSTM cell and an attention map generation process to recognize actions. The hierarchical architecture of this model is able to explicitly reason on multi-granularities of action categories. The proposed architecture achieved improved results on three publicly available datasets: the UCF sports dataset, the Olympic sports dataset and the HMDB51 dataset.
no_new_dataset
0.953966
1705.06753
Konstantin Bauman
Evgeny Bauman, Konstantin Bauman
Discovering the Graph Structure in the Clustering Results
null
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a standard cluster analysis, such as k-means, in addition to the cluster locations and the distances between them, it is important to know whether the clusters are connected or well separated from each other. The main focus of this paper is discovering the relations between the resulting clusters. We propose a new method, based on pairwise overlapping k-means clustering, that in addition to the means of the clusters provides the graph structure of their relations. The proposed method has a set of parameters that can be tuned in order to control the sensitivity of the model and the desired relative size of the pairwise overlapping interval between the means of two adjacent clusters, i.e., the level of overlapping. We present the exact formula for calculating that parameter. The empirical study presented in the paper demonstrates that our approach works well not only on toy data but also complements standard clustering results with a reasonable graph structure on real datasets, such as financial indices and restaurants.
[ { "version": "v1", "created": "Thu, 18 May 2017 18:01:50 GMT" } ]
2017-05-22T00:00:00
[ [ "Bauman", "Evgeny", "" ], [ "Bauman", "Konstantin", "" ] ]
TITLE: Discovering the Graph Structure in the Clustering Results ABSTRACT: In a standard cluster analysis, such as k-means, in addition to the cluster locations and the distances between them, it is important to know whether the clusters are connected or well separated from each other. The main focus of this paper is discovering the relations between the resulting clusters. We propose a new method, based on pairwise overlapping k-means clustering, that in addition to the means of the clusters provides the graph structure of their relations. The proposed method has a set of parameters that can be tuned in order to control the sensitivity of the model and the desired relative size of the pairwise overlapping interval between the means of two adjacent clusters, i.e., the level of overlapping. We present the exact formula for calculating that parameter. The empirical study presented in the paper demonstrates that our approach works well not only on toy data but also complements standard clustering results with a reasonable graph structure on real datasets, such as financial indices and restaurants.
no_new_dataset
0.949342
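A speculative reconstruction of the idea in Python: after k-means, link two clusters when enough points fall in an overlap band where those clusters' centroids are the two nearest and nearly equidistant. The band and min_points parameters are assumed knobs for illustration, not the paper's exact parameterization or its closed-form parameter choice.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def cluster_graph(X, k, band=0.1, min_points=5):
    """Return the fitted k-means model and a set of cluster-pair edges,
    where an edge means the two clusters appear connected by overlap points."""
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    # Distances from every point to every centroid: (n_points, k)
    d = np.linalg.norm(X[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    first, second = order[:, 0], order[:, 1]
    rows = np.arange(len(X))
    # A point is "in the overlap band" if its two nearest centroids are
    # almost equidistant (relative gap below `band`).
    in_band = d[rows, second] - d[rows, first] < band * d[rows, first]
    counts = Counter(tuple(sorted((int(i), int(j))))
                     for i, j, ok in zip(first, second, in_band) if ok)
    edges = {pair for pair, c in counts.items() if c >= min_points}
    return km, edges
```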
1705.06849
Lianwen Jin
Songxuan Lai, Lianwen Jin, Weixin Yang
Online Signature Verification using Recurrent Neural Network and Length-normalized Path Signature
6 pages, 5 figures, 5 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by the great success of recurrent neural networks (RNNs) in sequential modeling, we introduce a novel RNN system to improve the performance of online signature verification. The training objective is to directly minimize intra-class variations and to push the distances between skilled forgeries and genuine samples above a given threshold. By back-propagating the training signals, our RNN network produced discriminative features with desired metrics. Additionally, we propose a novel descriptor, called the length-normalized path signature (LNPS), and apply it to online signature verification. LNPS has interesting properties, such as scale invariance and rotation invariance after linear combination, and shows promising results in online signature verification. Experiments on the publicly available SVC-2004 dataset yielded state-of-the-art performance of 2.37% equal error rate (EER).
[ { "version": "v1", "created": "Fri, 19 May 2017 02:27:58 GMT" } ]
2017-05-22T00:00:00
[ [ "Lai", "Songxuan", "" ], [ "Jin", "Lianwen", "" ], [ "Yang", "Weixin", "" ] ]
TITLE: Online Signature Verification using Recurrent Neural Network and Length-normalized Path Signature ABSTRACT: Inspired by the great success of recurrent neural networks (RNNs) in sequential modeling, we introduce a novel RNN system to improve the performance of online signature verification. The training objective is to directly minimize intra-class variations and to push the distances between skilled forgeries and genuine samples above a given threshold. By back-propagating the training signals, our RNN network produced discriminative features with desired metrics. Additionally, we propose a novel descriptor, called the length-normalized path signature (LNPS), and apply it to online signature verification. LNPS has interesting properties, such as scale invariance and rotation invariance after linear combination, and shows promising results in online signature verification. Experiments on the publicly available SVC-2004 dataset yielded state-of-the-art performance of 2.37% equal error rate (EER).
no_new_dataset
0.94699
1705.06871
Shirui Li
You Hao, Shirui Li, Hanlin Mo, and Hua Li
Affine-Gradient Based Local Binary Pattern Descriptor for Texture Classification
11 pages, 4 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel Affine-Gradient based Local Binary Pattern (AGLBP) descriptor for texture classification. It is very hard to describe complicated textures using a single type of information, such as the Local Binary Pattern (LBP), which utilizes only the sign information of the differences between a pixel and its local neighbors. Our descriptor has three characteristics: 1) In order to make full use of the information contained in the texture, the Affine-Gradient, which differs from the Euclidean-Gradient and is invariant to affine transformations, is incorporated into AGLBP. 2) An improved method is proposed for rotation invariance, which depends on a reference direction computed with respect to the local neighbors. 3) A feature selection method, considering both the statistical frequency and the intraclass variance on the training dataset, is also applied to reduce the dimensionality of the descriptors. Experiments on three standard texture datasets, Outex12, Outex10 and KTH-TIPS2, are conducted to evaluate the performance of AGLBP. The results show that our proposed descriptor achieves better performance than several state-of-the-art rotation-invariant texture descriptors in texture classification.
[ { "version": "v1", "created": "Fri, 19 May 2017 06:41:31 GMT" } ]
2017-05-22T00:00:00
[ [ "Hao", "You", "" ], [ "Li", "Shirui", "" ], [ "Mo", "Hanlin", "" ], [ "Li", "Hua", "" ] ]
TITLE: Affine-Gradient Based Local Binary Pattern Descriptor for Texture Classification ABSTRACT: We present a novel Affine-Gradient based Local Binary Pattern (AGLBP) descriptor for texture classification. It is very hard to describe complicated textures using a single type of information, such as the Local Binary Pattern (LBP), which utilizes only the sign information of the differences between a pixel and its local neighbors. Our descriptor has three characteristics: 1) In order to make full use of the information contained in the texture, the Affine-Gradient, which differs from the Euclidean-Gradient and is invariant to affine transformations, is incorporated into AGLBP. 2) An improved method is proposed for rotation invariance, which depends on a reference direction computed with respect to the local neighbors. 3) A feature selection method, considering both the statistical frequency and the intraclass variance on the training dataset, is also applied to reduce the dimensionality of the descriptors. Experiments on three standard texture datasets, Outex12, Outex10 and KTH-TIPS2, are conducted to evaluate the performance of AGLBP. The results show that our proposed descriptor achieves better performance than several state-of-the-art rotation-invariant texture descriptors in texture classification.
no_new_dataset
0.949763
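For reference, the sign-only LBP baseline that the abstract builds on can be sketched as follows; AGLBP additionally incorporates affine-gradient information, rotation-invariant encoding, and feature selection, all of which this sketch omits.

```python
import numpy as np

def lbp_histogram(img, bins=256):
    """Plain 8-neighbour LBP: each interior pixel gets an 8-bit code recording
    which neighbours are >= the centre; the normalized code histogram is the
    texture descriptor."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= ((n >= c).astype(np.uint8) << bit)   # set bit if neighbour >= centre
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()
```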
1705.06950
Joao Carreira
Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman and Andrew Zisserman
The Kinetics Human Action Video Dataset
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.
[ { "version": "v1", "created": "Fri, 19 May 2017 12:07:01 GMT" } ]
2017-05-22T00:00:00
[ [ "Kay", "Will", "" ], [ "Carreira", "Joao", "" ], [ "Simonyan", "Karen", "" ], [ "Zhang", "Brian", "" ], [ "Hillier", "Chloe", "" ], [ "Vijayanarasimhan", "Sudheendra", "" ], [ "Viola", "Fabio", "" ], [ "Green", "Tim", "" ], [ "Back", "Trevor", "" ], [ "Natsev", "Paul", "" ], [ "Suleyman", "Mustafa", "" ], [ "Zisserman", "Andrew", "" ] ]
TITLE: The Kinetics Human Action Video Dataset ABSTRACT: We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.
new_dataset
0.933975
1705.07008
Leandro dos Santos
Leandro B. dos Santos, Magali S. Duran, Nathan S. Hartmann, Arnaldo Candido Jr., Gustavo H. Paetzold, Sandra M. Aluisio
A Lightweight Regression Method to Infer Psycholinguistic Properties for Brazilian Portuguese
Paper accepted for TSD2017
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Psycholinguistic properties of words have been used in various approaches to Natural Language Processing tasks, such as text simplification and readability assessment. Most of these properties are subjective, and gathering them requires costly and time-consuming surveys. Recent approaches use the limited datasets of psycholinguistic properties to extend them automatically to large lexicons. However, some of the resources used by such approaches are not available for most languages. This study presents a method to infer psycholinguistic properties for Brazilian Portuguese (BP) using regressors built with a light set of features usually available for less-resourced languages: word length, frequency lists, lexical databases composed of school dictionaries, and word embedding models. The correlations between the properties inferred are close to those obtained by related works. The resulting resource contains 26,874 words in BP annotated with concreteness, age of acquisition, imageability and subjective frequency.
[ { "version": "v1", "created": "Fri, 19 May 2017 14:17:31 GMT" } ]
2017-05-22T00:00:00
[ [ "Santos", "Leandro B. dos", "" ], [ "Duran", "Magali S.", "" ], [ "Hartmann", "Nathan S.", "" ], [ "Candido", "Arnaldo", "Jr." ], [ "Paetzold", "Gustavo H.", "" ], [ "Aluisio", "Sandra M.", "" ] ]
TITLE: A Lightweight Regression Method to Infer Psycholinguistic Properties for Brazilian Portuguese ABSTRACT: Psycholinguistic properties of words have been used in various approaches to Natural Language Processing tasks, such as text simplification and readability assessment. Most of these properties are subjective, and gathering them requires costly and time-consuming surveys. Recent approaches use the limited datasets of psycholinguistic properties to extend them automatically to large lexicons. However, some of the resources used by such approaches are not available for most languages. This study presents a method to infer psycholinguistic properties for Brazilian Portuguese (BP) using regressors built with a light set of features usually available for less-resourced languages: word length, frequency lists, lexical databases composed of school dictionaries, and word embedding models. The correlations between the properties inferred are close to those obtained by related works. The resulting resource contains 26,874 words in BP annotated with concreteness, age of acquisition, imageability and subjective frequency.
no_new_dataset
0.944434
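A minimal sketch of the approach, assuming the frequency list and word embeddings are given as plain lookup dicts: build the light feature vector per word, fit one regressor per property on the small annotated seed lexicon, then extend the property to the full vocabulary. The Ridge regressor and the 50-dimensional embeddings are assumptions, not the paper's choices.

```python
import numpy as np
from sklearn.linear_model import Ridge

def build_features(words, freq, embeddings, emb_dim=50):
    """Light features: word length, log frequency, and an embedding vector."""
    rows = [np.concatenate(([len(w), np.log1p(freq.get(w, 0))],
                            embeddings.get(w, np.zeros(emb_dim))))
            for w in words]
    return np.vstack(rows)

def extend_property(seed_words, seed_scores, all_words, freq, emb):
    """Fit on the annotated seed lexicon; predict the property (e.g.
    concreteness) for every word in the full vocabulary."""
    model = Ridge(alpha=1.0).fit(build_features(seed_words, freq, emb), seed_scores)
    preds = model.predict(build_features(all_words, freq, emb))
    return dict(zip(all_words, preds))
```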
1705.07015
Jen-Wei Kuo
Jen-wei Kuo, Jonathan Mamou, Yao Wang, Emi Saegusa-Beecroft, Junji Machi, and Ernest J. Feleppa
Segmentation of 3D High-frequency Ultrasound Images of Human Lymph Nodes Using Graph Cut with Energy Functional Adapted to Local Intensity Distribution
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous studies by our group have shown that three-dimensional high-frequency quantitative ultrasound methods have the potential to differentiate metastatic lymph nodes from cancer-free lymph nodes dissected from human cancer patients. To successfully perform these methods inside the lymph node parenchyma, an automatic segmentation method is highly desired to exclude the surrounding thin layer of fat from quantitative ultrasound processing and accurately correct for ultrasound attenuation. In high-frequency ultrasound images of lymph nodes, the intensity distribution of lymph node parenchyma and fat varies spatially because of acoustic attenuation and focusing effects. Thus, the intensity contrast between two object regions (e.g., lymph node parenchyma and fat) is also spatially varying. In our previous work, nested graph cut demonstrated its ability to simultaneously segment lymph node parenchyma, fat, and the outer phosphate-buffered saline bath even when some boundaries are lost because of acoustic attenuation and focusing effects. This paper describes a novel approach called graph cut with locally adaptive energy to further deal with spatially varying distributions of lymph node parenchyma and fat caused by inhomogeneous acoustic attenuation. The proposed method achieved Dice similarity coefficients of 0.937+-0.035 when compared to expert manual segmentation on a representative dataset consisting of 115 three-dimensional lymph node images obtained from colorectal cancer patients.
[ { "version": "v1", "created": "Fri, 19 May 2017 14:25:20 GMT" } ]
2017-05-22T00:00:00
[ [ "Kuo", "Jen-wei", "" ], [ "Mamou", "Jonathan", "" ], [ "Wang", "Yao", "" ], [ "Saegusa-Beecroft", "Emi", "" ], [ "Machi", "Junji", "" ], [ "Feleppa", "Ernest J.", "" ] ]
TITLE: Segmentation of 3D High-frequency Ultrasound Images of Human Lymph Nodes Using Graph Cut with Energy Functional Adapted to Local Intensity Distribution ABSTRACT: Previous studies by our group have shown that three-dimensional high-frequency quantitative ultrasound methods have the potential to differentiate metastatic lymph nodes from cancer-free lymph nodes dissected from human cancer patients. To successfully perform these methods inside the lymph node parenchyma, an automatic segmentation method is highly desired to exclude the surrounding thin layer of fat from quantitative ultrasound processing and accurately correct for ultrasound attenuation. In high-frequency ultrasound images of lymph nodes, the intensity distribution of lymph node parenchyma and fat varies spatially because of acoustic attenuation and focusing effects. Thus, the intensity contrast between two object regions (e.g., lymph node parenchyma and fat) is also spatially varying. In our previous work, nested graph cut demonstrated its ability to simultaneously segment lymph node parenchyma, fat, and the outer phosphate-buffered saline bath even when some boundaries are lost because of acoustic attenuation and focusing effects. This paper describes a novel approach called graph cut with locally adaptive energy to further deal with spatially varying distributions of lymph node parenchyma and fat caused by inhomogeneous acoustic attenuation. The proposed method achieved Dice similarity coefficients of 0.937+-0.035 when compared to expert manual segmentation on a representative dataset consisting of 115 three-dimensional lymph node images obtained from colorectal cancer patients.
new_dataset
0.966156
1611.05125
Paritosh Parmar
Paritosh Parmar and Brendan Tran Morris
Learning To Score Olympic Events
CVPR 2017 - CVSports Workshop
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Estimating action quality, the process of assigning a "score" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small -- typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR, ii) LSTM, and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of Olympic events {diving, vault, figure skating}. While the SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.
[ { "version": "v1", "created": "Wed, 16 Nov 2016 02:56:24 GMT" }, { "version": "v2", "created": "Wed, 11 Jan 2017 00:47:29 GMT" }, { "version": "v3", "created": "Thu, 18 May 2017 05:55:24 GMT" } ]
2017-05-19T00:00:00
[ [ "Parmar", "Paritosh", "" ], [ "Morris", "Brendan Tran", "" ] ]
TITLE: Learning To Score Olympic Events ABSTRACT: Estimating action quality, the process of assigning a "score" to the execution of an action, is crucial in areas such as sports and health care. Unlike action recognition, which has millions of examples to learn from, the action quality datasets that are currently available are small -- typically comprised of only a few hundred samples. This work presents three frameworks for evaluating Olympic sports which utilize spatiotemporal features learned using 3D convolutional neural networks (C3D) and perform score regression with i) SVR, ii) LSTM, and iii) LSTM followed by SVR. An efficient training mechanism for the limited data scenarios is presented for clip-based training with LSTM. The proposed systems show significant improvement over existing quality assessment approaches on the task of predicting scores of Olympic events {diving, vault, figure skating}. While the SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.
no_new_dataset
0.947527
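Framework (i) reduces to feature pooling plus support vector regression; a hedged sketch, assuming C3D clip features have already been extracted, might look like this (the mean pooling and SVR hyperparameters are placeholders, not the paper's settings):

```python
import numpy as np
from sklearn.svm import SVR

def fit_score_regressor(clip_feats_per_video, scores):
    """clip_feats_per_video: list of (n_clips_i, d) arrays of C3D features,
    one array per performance. Mean-pool the clips into a single descriptor
    per performance, then regress the judged score."""
    X = np.vstack([f.mean(axis=0) for f in clip_feats_per_video])
    return SVR(kernel="rbf", C=10.0).fit(X, scores)

# predicted = fit_score_regressor(train_feats, train_scores).predict(X_test)
```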
1704.02788
Chuanqi Tan
Chuanqi Tan, Furu Wei, Pengjie Ren, Weifeng Lv, Ming Zhou
Entity Linking for Queries by Searching Wikipedia Sentences
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a simple yet effective approach for linking entities in queries. The key idea is to search sentences similar to a query from Wikipedia articles and directly use the human-annotated entities in the similar sentences as candidate entities for the query. Then, we employ a rich set of features, such as link-probability, context-matching, word embeddings, and relatedness among candidate entities as well as their related entities, to rank the candidates under a regression based framework. The advantages of our approach lie in two aspects, which contribute to the ranking process and final linking result. First, it can greatly reduce the number of candidate entities by filtering out irrelevant entities with the words in the query. Second, we can obtain the query sensitive prior probability in addition to the static link-probability derived from all Wikipedia articles. We conduct experiments on two benchmark datasets on entity linking for queries, namely the ERD14 dataset and the GERDAQ dataset. Experimental results show that our method outperforms state-of-the-art systems and yields 75.0% in F1 on the ERD14 dataset and 56.9% on the GERDAQ dataset.
[ { "version": "v1", "created": "Mon, 10 Apr 2017 10:19:53 GMT" }, { "version": "v2", "created": "Tue, 18 Apr 2017 06:59:56 GMT" }, { "version": "v3", "created": "Thu, 18 May 2017 08:03:49 GMT" } ]
2017-05-19T00:00:00
[ [ "Tan", "Chuanqi", "" ], [ "Wei", "Furu", "" ], [ "Ren", "Pengjie", "" ], [ "Lv", "Weifeng", "" ], [ "Zhou", "Ming", "" ] ]
TITLE: Entity Linking for Queries by Searching Wikipedia Sentences ABSTRACT: We present a simple yet effective approach for linking entities in queries. The key idea is to search sentences similar to a query from Wikipedia articles and directly use the human-annotated entities in the similar sentences as candidate entities for the query. Then, we employ a rich set of features, such as link-probability, context-matching, word embeddings, and relatedness among candidate entities as well as their related entities, to rank the candidates under a regression based framework. The advantages of our approach lie in two aspects, which contribute to the ranking process and final linking result. First, it can greatly reduce the number of candidate entities by filtering out irrelevant entities with the words in the query. Second, we can obtain the query sensitive prior probability in addition to the static link-probability derived from all Wikipedia articles. We conduct experiments on two benchmark datasets on entity linking for queries, namely the ERD14 dataset and the GERDAQ dataset. Experimental results show that our method outperforms state-of-the-art systems and yields 75.0% in F1 on the ERD14 dataset and 56.9% on the GERDAQ dataset.
no_new_dataset
0.951369
1705.03261
Zibo Yi
Zibo Yi, Shasha Li, Jie Yu, Qingbo Wu
Drug-drug Interaction Extraction via Recurrent Neural Network with Multiple Attention Layers
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Drug-drug interaction (DDI) is vital information when physicians and pharmacists intend to co-administer two or more drugs. Thus, several DDI databases have been constructed to avoid mistaken combined use. In recent years, automatically extracting DDIs from biomedical text has drawn researchers' attention. However, existing work utilizes either complex feature engineering or NLP tools, both of which are insufficient for sentence comprehension. Inspired by deep learning approaches in natural language processing, we propose a recurrent neural network model with multiple attention layers for DDI classification. We evaluate our model on the 2013 SemEval DDIExtraction dataset. The experiments show that our model classifies most of the drug pairs into the correct DDI categories, outperforming existing NLP or deep learning methods.
[ { "version": "v1", "created": "Tue, 9 May 2017 10:22:48 GMT" }, { "version": "v2", "created": "Thu, 18 May 2017 15:54:36 GMT" } ]
2017-05-19T00:00:00
[ [ "Yi", "Zibo", "" ], [ "Li", "Shasha", "" ], [ "Yu", "Jie", "" ], [ "Wu", "Qingbo", "" ] ]
TITLE: Drug-drug Interaction Extraction via Recurrent Neural Network with Multiple Attention Layers ABSTRACT: Drug-drug interaction (DDI) is vital information when physicians and pharmacists intend to co-administer two or more drugs. Thus, several DDI databases have been constructed to avoid mistaken combined use. In recent years, automatically extracting DDIs from biomedical text has drawn researchers' attention. However, existing work utilizes either complex feature engineering or NLP tools, both of which are insufficient for sentence comprehension. Inspired by deep learning approaches in natural language processing, we propose a recurrent neural network model with multiple attention layers for DDI classification. We evaluate our model on the 2013 SemEval DDIExtraction dataset. The experiments show that our model classifies most of the drug pairs into the correct DDI categories, outperforming existing NLP or deep learning methods.
no_new_dataset
0.945197
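A schematic PyTorch version with a single soft-attention layer (the paper stacks multiple) is sketched below; the vocabulary size, dimensions, and the five-way output are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveBiLSTM(nn.Module):
    """Bidirectional LSTM over the tokenized sentence, one attention layer
    pooling the hidden states into a sentence vector, then a DDI-type softmax."""
    def __init__(self, vocab_size, emb=100, hidden=128, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, tokens):                         # tokens: (batch, seq) int ids
        h, _ = self.lstm(self.embed(tokens))           # (batch, seq, 2*hidden)
        w = F.softmax(self.att(h).squeeze(-1), dim=1)  # attention over tokens
        sentence = (w.unsqueeze(-1) * h).sum(dim=1)    # weighted sum of states
        return self.out(sentence)                      # DDI-category logits
```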
1705.06362
Darvin Yi
Darvin Yi, Rebecca Lynn Sawyer, David Cohn III, Jared Dunnmon, Carson Lam, Xuerong Xiao, and Daniel Rubin
Optimizing and Visualizing Deep Learning for Benign/Malignant Classification in Breast Tumors
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Breast cancer has the highest incidence and second highest mortality rate for women in the US. Our study aims to utilize deep learning for benign/malignant classification of mammogram tumors using a subset of cases from the Digital Database for Screening Mammography (DDSM). Though this is a small dataset by deep learning standards (about 1000 patients), we show that current state-of-the-art deep learning architectures can find a robust signal, even when trained from scratch. Using convolutional neural networks (CNNs), we are able to achieve an accuracy of 85% and an ROC AUC of 0.91, while leading hand-crafted feature based methods are only able to achieve an accuracy of 71%. We investigate an amalgamation of architectures to show that our best result is reached with an ensemble of lightweight GoogLeNets tasked with interpreting both the craniocaudal view and the mediolateral oblique view, simply averaging the probability scores of both views to make the final prediction. In addition, we have created a novel method to visualize what features the neural network detects for the benign/malignant classification, and have correlated those features with well-known radiological features, such as spiculation. Our algorithm significantly improves existing classification methods for mammography lesions and identifies features that correlate with established clinical markers.
[ { "version": "v1", "created": "Wed, 17 May 2017 22:35:28 GMT" } ]
2017-05-19T00:00:00
[ [ "Yi", "Darvin", "" ], [ "Sawyer", "Rebecca Lynn", "" ], [ "Cohn", "David", "III" ], [ "Dunnmon", "Jared", "" ], [ "Lam", "Carson", "" ], [ "Xiao", "Xuerong", "" ], [ "Rubin", "Daniel", "" ] ]
TITLE: Optimizing and Visualizing Deep Learning for Benign/Malignant Classification in Breast Tumors ABSTRACT: Breast cancer has the highest incidence and second highest mortality rate for women in the US. Our study aims to utilize deep learning for benign/malignant classification of mammogram tumors using a subset of cases from the Digital Database for Screening Mammography (DDSM). Though this is a small dataset by deep learning standards (about 1000 patients), we show that current state-of-the-art deep learning architectures can find a robust signal, even when trained from scratch. Using convolutional neural networks (CNNs), we are able to achieve an accuracy of 85% and an ROC AUC of 0.91, while leading hand-crafted feature based methods are only able to achieve an accuracy of 71%. We investigate an amalgamation of architectures to show that our best result is reached with an ensemble of lightweight GoogLeNets tasked with interpreting both the craniocaudal view and the mediolateral oblique view, simply averaging the probability scores of both views to make the final prediction. In addition, we have created a novel method to visualize what features the neural network detects for the benign/malignant classification, and have correlated those features with well-known radiological features, such as spiculation. Our algorithm significantly improves existing classification methods for mammography lesions and identifies features that correlate with established clinical markers.
no_new_dataset
0.945045
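The two-view ensemble amounts to averaging per-view class probabilities; a minimal sketch, assuming one trained classifier per mammographic view:

```python
import torch

@torch.no_grad()
def two_view_predict(model_cc, model_mlo, img_cc, img_mlo):
    """Average the benign/malignant probabilities from the craniocaudal (CC)
    and mediolateral oblique (MLO) views to form the final prediction."""
    p_cc = torch.softmax(model_cc(img_cc), dim=1)
    p_mlo = torch.softmax(model_mlo(img_mlo), dim=1)
    return 0.5 * (p_cc + p_mlo)   # final two-class probabilities
```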
1705.06371
Robert Durrant
Xianghui Luo and Robert J. Durrant
Maximum Margin Principal Components
null
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Principal Component Analysis (PCA) is a very successful dimensionality reduction technique, widely used in predictive modeling. A key factor in its widespread use in this domain is the fact that the projection of a dataset onto its first $K$ principal components minimizes the sum of squared errors between the original data and the projected data over all possible rank $K$ projections. Thus, PCA provides optimal low-rank representations of data for least-squares linear regression under standard modeling assumptions. On the other hand, when the loss function for a prediction problem is not the least-squares error, PCA is typically a heuristic choice of dimensionality reduction -- in particular for classification problems under the zero-one loss. In this paper we target classification problems by proposing a straightforward alternative to PCA that aims to minimize the difference in margin distribution between the original and the projected data. Extensive experiments show that our simple approach typically outperforms PCA on any particular dataset, in terms of classification error, though this difference is not always statistically significant, and despite being a filter method is frequently competitive with Partial Least Squares (PLS) and Lasso on a wide range of datasets.
[ { "version": "v1", "created": "Wed, 17 May 2017 23:45:11 GMT" } ]
2017-05-19T00:00:00
[ [ "Luo", "Xianghui", "" ], [ "Durrant", "Robert J.", "" ] ]
TITLE: Maximum Margin Principal Components ABSTRACT: Principal Component Analysis (PCA) is a very successful dimensionality reduction technique, widely used in predictive modeling. A key factor in its widespread use in this domain is the fact that the projection of a dataset onto its first $K$ principal components minimizes the sum of squared errors between the original data and the projected data over all possible rank $K$ projections. Thus, PCA provides optimal low-rank representations of data for least-squares linear regression under standard modeling assumptions. On the other hand, when the loss function for a prediction problem is not the least-squares error, PCA is typically a heuristic choice of dimensionality reduction -- in particular for classification problems under the zero-one loss. In this paper we target classification problems by proposing a straightforward alternative to PCA that aims to minimize the difference in margin distribution between the original and the projected data. Extensive experiments show that our simple approach typically outperforms PCA on any particular dataset, in terms of classification error, though this difference is not always statistically significant, and despite being a filter method is frequently competitive with Partial Least Squares (PLS) and Lasso on a wide range of datasets.
no_new_dataset
0.942876
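To make the objective concrete, here is a small numpy sketch of the quantity at stake: the change in functional margins under a rank-K reconstruction, alongside the PCA baseline it is compared against. The margin_gap form is an illustrative reading of "difference in margin distribution", not the paper's exact criterion.

```python
import numpy as np

def pca_components(X, K):
    """PCA baseline: the top-K right singular vectors of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:K].T                                   # (d, K) projection basis

def margin_gap(X, y, w, P):
    """Mean absolute change in functional margins y_i * <w, x_i> (y in {-1,+1})
    when the data is replaced by its rank-K reconstruction X P P^T -- the
    quantity a margin-preserving projection would keep small, which PCA
    does not target."""
    margins = y * (X @ w)
    margins_proj = y * ((X @ P @ P.T) @ w)
    return float(np.abs(margins - margins_proj).mean())
```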
1705.06516
Pedro F. Proen\c{c}a
Pedro F. Proen\c{c}a and Yang Gao
Probabilistic Combination of Noisy Points and Planes for RGB-D Odometry
Accepted to TAROS 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes a visual odometry method that combines point and plane primitives extracted from a noisy depth camera. Depth measurement uncertainty is modelled and propagated through the extraction of geometric primitives to the frame-to-frame motion estimation, where the pose is optimized by weighting the residuals of 3D point and plane matches according to their uncertainties. Results on an RGB-D dataset show that the combination of points and planes, through the proposed method, is able to perform well in poorly textured environments, where point-based odometry is bound to fail.
[ { "version": "v1", "created": "Thu, 18 May 2017 10:53:51 GMT" } ]
2017-05-19T00:00:00
[ [ "Proença", "Pedro F.", "" ], [ "Gao", "Yang", "" ] ]
TITLE: Probabilistic Combination of Noisy Points and Planes for RGB-D Odometry ABSTRACT: This work proposes a visual odometry method that combines point and plane primitives extracted from a noisy depth camera. Depth measurement uncertainty is modelled and propagated through the extraction of geometric primitives to the frame-to-frame motion estimation, where the pose is optimized by weighting the residuals of 3D point and plane matches according to their uncertainties. Results on an RGB-D dataset show that the combination of points and planes, through the proposed method, is able to perform well in poorly textured environments, where point-based odometry is bound to fail.
no_new_dataset
0.944842
1705.06560
Kuo-Hao Zeng
Kuo-Hao Zeng, Shih-Han Chou, Fu-Hsiang Chan, Juan Carlos Niebles, Min Sun
Agent-Centric Risk Assessment: Accident Anticipation and Risky Region Localization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For survival, a living agent must have the ability to assess risk (1) by temporally anticipating accidents before they occur, and (2) by spatially localizing risky regions in the environment to move away from threats. In this paper, we take an agent-centric approach to study the accident anticipation and risky region localization tasks. We propose a novel soft-attention Recurrent Neural Network (RNN) which explicitly models both spatial and appearance-wise non-linear interaction between the agent triggering the event and another agent or static-region involved. In order to test our proposed method, we introduce the Epic Fail (EF) dataset consisting of 3000 viral videos capturing various accidents. In the experiments, we evaluate the risk assessment accuracy both in the temporal domain (accident anticipation) and spatial domain (risky region localization) on our EF dataset and the Street Accident (SA) dataset. Our method consistently outperforms other baselines on both datasets.
[ { "version": "v1", "created": "Thu, 18 May 2017 12:56:20 GMT" } ]
2017-05-19T00:00:00
[ [ "Zeng", "Kuo-Hao", "" ], [ "Chou", "Shih-Han", "" ], [ "Chan", "Fu-Hsiang", "" ], [ "Niebles", "Juan Carlos", "" ], [ "Sun", "Min", "" ] ]
TITLE: Agent-Centric Risk Assessment: Accident Anticipation and Risky Region Localization ABSTRACT: For survival, a living agent must have the ability to assess risk (1) by temporally anticipating accidents before they occur, and (2) by spatially localizing risky regions in the environment to move away from threats. In this paper, we take an agent-centric approach to study the accident anticipation and risky region localization tasks. We propose a novel soft-attention Recurrent Neural Network (RNN) which explicitly models both spatial and appearance-wise non-linear interaction between the agent triggering the event and another agent or static-region involved. In order to test our proposed method, we introduce the Epic Fail (EF) dataset consisting of 3000 viral videos capturing various accidents. In the experiments, we evaluate the risk assessment accuracy both in the temporal domain (accident anticipation) and spatial domain (risky region localization) on our EF dataset and the Street Accident (SA) dataset. Our method consistently outperforms other baselines on both datasets.
new_dataset
0.958226
1705.06599
Boyue Wang
Boyue Wang, Yongli Hu, Junbin Gao, Yanfeng Sun and Baocai Yin
Localized LRR on Grassmann Manifolds: An Extrinsic View
IEEE Transactions on Circuits and Systems for Video Technology with Minor Revisions. arXiv admin note: text overlap with arXiv:1504.01807
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Subspace data representation has recently become a common practice in many computer vision tasks. It demands generalizing classical machine learning algorithms for subspace data. Low-Rank Representation (LRR) is one of the most successful models for clustering vectorial data according to their subspace structures. This paper explores the possibility of extending LRR to subspace data on Grassmann manifolds. Rather than directly embedding the Grassmann manifolds into the symmetric matrix space, an extrinsic view is taken to build the LRR self-representation in the local area of the tangent space at each Grassmannian point, resulting in a localized LRR method on Grassmann manifolds. A novel algorithm for solving the proposed model is investigated and implemented. The performance of the new clustering algorithm is assessed through experiments on several real-world datasets, including MNIST handwritten digits, ballet video clips, SKIG action clips, the DynTex++ dataset and highway traffic video clips. The experimental results show that the new method outperforms a number of state-of-the-art clustering methods.
[ { "version": "v1", "created": "Wed, 17 May 2017 03:04:43 GMT" } ]
2017-05-19T00:00:00
[ [ "Wang", "Boyue", "" ], [ "Hu", "Yongli", "" ], [ "Gao", "Junbin", "" ], [ "Sun", "Yanfeng", "" ], [ "Yin", "Baocai", "" ] ]
TITLE: Localized LRR on Grassmann Manifolds: An Extrinsic View ABSTRACT: Subspace data representation has recently become a common practice in many computer vision tasks. It demands generalizing classical machine learning algorithms for subspace data. Low-Rank Representation (LRR) is one of the most successful models for clustering vectorial data according to their subspace structures. This paper explores the possibility of extending LRR for subspace data on Grassmann manifolds. Rather than directly embedding the Grassmann manifolds into the symmetric matrix space, an extrinsic view is taken to build the LRR self-representation in the local area of the tangent space at each Grassmannian point, resulting in a localized LRR method on Grassmann manifolds. A novel algorithm for solving the proposed model is investigated and implemented. The performance of the new clustering algorithm is assessed through experiments on several real-world datasets including MNIST handwritten digits, ballet video clips, SKIG action clips, the DynTex++ dataset and highway traffic video clips. The experimental results show the new method outperforms a number of state-of-the-art clustering methods.
no_new_dataset
0.950457
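For context on the record above, the classical vectorial LRR model that the abstract extends is conventionally written as the following optimization (this is the standard objective from the LRR literature, not the paper's Grassmannian formulation):

```latex
\min_{Z,\,E}\; \|Z\|_{*} + \lambda \|E\|_{2,1}
\quad \text{s.t.} \quad X = XZ + E
```

Here \(\|Z\|_{*}\) is the nuclear norm encouraging a low-rank self-representation of the data matrix \(X\), and \(E\) absorbs noise; per the abstract, the paper builds the analogous self-representation in the tangent spaces at each Grassmannian point rather than in Euclidean space.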
1705.06687
Nick Johnston
Michele Covell, Nick Johnston, David Minnen, Sung Jin Hwang, Joel Shor, Saurabh Singh, Damien Vincent, George Toderici
Target-Quality Image Compression with Recurrent, Convolutional Neural Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a stop-code tolerant (SCT) approach to training recurrent convolutional neural networks for lossy image compression. Our approach introduces a multi-pass training method that combines the training goals of high-quality reconstruction in areas around stop-code masking as well as in highly detailed areas. This leads to lower true bitrates for a given recursion count, both pre- and post-entropy coding, even using unstructured LZ77 code compression. The pre-LZ77 gains are achieved by trimming stop codes. The post-LZ77 gains are due to the highly unequal distributions of 0/1 codes from the SCT architectures. With these code compressions, the SCT architecture maintains or exceeds the image quality at all compression rates compared to JPEG and to RNN auto-encoders across the Kodak dataset. In addition, the SCT coding results in lower variance in image quality across the extent of the image, a characteristic that has been shown to be important in human ratings of image quality.
[ { "version": "v1", "created": "Thu, 18 May 2017 16:44:31 GMT" } ]
2017-05-19T00:00:00
[ [ "Covell", "Michele", "" ], [ "Johnston", "Nick", "" ], [ "Minnen", "David", "" ], [ "Hwang", "Sung Jin", "" ], [ "Shor", "Joel", "" ], [ "Singh", "Saurabh", "" ], [ "Vincent", "Damien", "" ], [ "Toderici", "George", "" ] ]
TITLE: Target-Quality Image Compression with Recurrent, Convolutional Neural Networks ABSTRACT: We introduce a stop-code tolerant (SCT) approach to training recurrent convolutional neural networks for lossy image compression. Our approach introduces a multi-pass training method that combines the training goals of high-quality reconstruction in areas around stop-code masking as well as in highly detailed areas. This leads to lower true bitrates for a given recursion count, both pre- and post-entropy coding, even using unstructured LZ77 code compression. The pre-LZ77 gains are achieved by trimming stop codes. The post-LZ77 gains are due to the highly unequal distributions of 0/1 codes from the SCT architectures. With these code compressions, the SCT architecture maintains or exceeds the image quality at all compression rates compared to JPEG and to RNN auto-encoders across the Kodak dataset. In addition, the SCT coding results in lower variance in image quality across the extent of the image, a characteristic that has been shown to be important in human ratings of image quality.
no_new_dataset
0.942823
1705.06709
Zhuolin Jiang
Zhuolin Jiang, Viktor Rozgic, Sancar Adali
Learning Spatiotemporal Features for Infrared Action Recognition with 3D Convolutional Neural Networks
null
null
null
null
cs.CV cs.AI cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Infrared (IR) imaging has the potential to enable more robust action recognition systems compared to visible spectrum cameras due to lower sensitivity to lighting conditions and appearance variability. While the action recognition task on videos collected from visible spectrum imaging has received much attention, action recognition in IR videos is significantly less explored. Our objective is to exploit imaging data in this modality for the action recognition task. In this work, we propose a novel two-stream 3D convolutional neural network (CNN) architecture by introducing the discriminative code layer and the corresponding discriminative code loss function. The proposed network processes IR image and the IR-based optical flow field sequences. We pretrain the 3D CNN model on the visible spectrum Sports-1M action dataset and finetune it on the Infrared Action Recognition (InfAR) dataset. To our best knowledge, this is the first application of the 3D CNN to action recognition in the IR domain. We conduct an elaborate analysis of different fusion schemes (weighted average, single and double-layer neural nets) applied to different 3D CNN outputs. Experimental results demonstrate that our approach can achieve state-of-the-art average precision (AP) performances on the InfAR dataset: (1) the proposed two-stream 3D CNN achieves the best reported 77.5% AP, and (2) our 3D CNN model applied to the optical flow fields achieves the best reported single stream 75.42% AP.
[ { "version": "v1", "created": "Thu, 18 May 2017 17:26:34 GMT" } ]
2017-05-19T00:00:00
[ [ "Jiang", "Zhuolin", "" ], [ "Rozgic", "Viktor", "" ], [ "Adali", "Sancar", "" ] ]
TITLE: Learning Spatiotemporal Features for Infrared Action Recognition with 3D Convolutional Neural Networks ABSTRACT: Infrared (IR) imaging has the potential to enable more robust action recognition systems compared to visible spectrum cameras due to lower sensitivity to lighting conditions and appearance variability. While the action recognition task on videos collected from visible spectrum imaging has received much attention, action recognition in IR videos is significantly less explored. Our objective is to exploit imaging data in this modality for the action recognition task. In this work, we propose a novel two-stream 3D convolutional neural network (CNN) architecture by introducing the discriminative code layer and the corresponding discriminative code loss function. The proposed network processes IR image and the IR-based optical flow field sequences. We pretrain the 3D CNN model on the visible spectrum Sports-1M action dataset and finetune it on the Infrared Action Recognition (InfAR) dataset. To our best knowledge, this is the first application of the 3D CNN to action recognition in the IR domain. We conduct an elaborate analysis of different fusion schemes (weighted average, single and double-layer neural nets) applied to different 3D CNN outputs. Experimental results demonstrate that our approach can achieve state-of-the-art average precision (AP) performances on the InfAR dataset: (1) the proposed two-stream 3D CNN achieves the best reported 77.5% AP, and (2) our 3D CNN model applied to the optical flow fields achieves the best reported single stream 75.42% AP.
no_new_dataset
0.952175
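One of the fusion schemes compared in the record above is a plain weighted average of the two streams' outputs. A trivial sketch (the weight and the toy score values are assumptions):

```python
import numpy as np

def weighted_average_fusion(ir_scores, flow_scores, w=0.5):
    """Weighted-average late fusion of per-class scores from the
    IR-image stream and the optical-flow stream (the simplest of the
    schemes the abstract compares)."""
    return w * ir_scores + (1 - w) * flow_scores

ir = np.array([0.1, 0.7, 0.2])     # toy per-class probabilities
flow = np.array([0.2, 0.5, 0.3])
print(weighted_average_fusion(ir, flow, w=0.6))
```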
1509.07831
Jaeyong Sung
Jaeyong Sung, Ian Lenz, Ashutosh Saxena
Deep Multimodal Embedding: Manipulating Novel Objects with Point-clouds, Language and Trajectories
IEEE International Conference on Robotics and Automation (ICRA), 2017
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A robot operating in a real-world environment needs to perform reasoning over a variety of sensor modalities such as vision, language and motion trajectories. However, it is extremely challenging to manually design features relating such disparate modalities. In this work, we introduce an algorithm that learns to embed point-cloud, natural language, and manipulation trajectory data into a shared embedding space with a deep neural network. To learn semantically meaningful spaces throughout our network, we use a loss-based margin to bring embeddings of relevant pairs closer together while driving less-relevant cases from different modalities further apart. We use this both to pre-train its lower layers and fine-tune our final embedding space, leading to a more robust representation. We test our algorithm on the task of manipulating novel objects and appliances based on prior experience with other objects. On a large dataset, we achieve significant improvements in both accuracy and inference time over the previous state of the art. We also perform end-to-end experiments on a PR2 robot utilizing our learned embedding space.
[ { "version": "v1", "created": "Fri, 25 Sep 2015 18:55:45 GMT" }, { "version": "v2", "created": "Wed, 17 May 2017 15:12:33 GMT" } ]
2017-05-18T00:00:00
[ [ "Sung", "Jaeyong", "" ], [ "Lenz", "Ian", "" ], [ "Saxena", "Ashutosh", "" ] ]
TITLE: Deep Multimodal Embedding: Manipulating Novel Objects with Point-clouds, Language and Trajectories ABSTRACT: A robot operating in a real-world environment needs to perform reasoning over a variety of sensor modalities such as vision, language and motion trajectories. However, it is extremely challenging to manually design features relating such disparate modalities. In this work, we introduce an algorithm that learns to embed point-cloud, natural language, and manipulation trajectory data into a shared embedding space with a deep neural network. To learn semantically meaningful spaces throughout our network, we use a loss-based margin to bring embeddings of relevant pairs closer together while driving less-relevant cases from different modalities further apart. We use this both to pre-train its lower layers and fine-tune our final embedding space, leading to a more robust representation. We test our algorithm on the task of manipulating novel objects and appliances based on prior experience with other objects. On a large dataset, we achieve significant improvements in both accuracy and inference time over the previous state of the art. We also perform end-to-end experiments on a PR2 robot utilizing our learned embedding space.
no_new_dataset
0.944382
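The record above trains a shared embedding with a loss-based margin that pulls relevant cross-modal pairs together and pushes less-relevant ones apart. A minimal sketch of such a margin loss in PyTorch (the hinge form and margin value are assumptions, not the paper's exact loss):

```python
import torch
import torch.nn.functional as F

def margin_embedding_loss(anchor, relevant, irrelevant, margin=1.0):
    """Hinge-style margin loss: a relevant cross-modal pair (e.g. a
    point-cloud/language embedding and its matching trajectory
    embedding) must be at least `margin` closer than an irrelevant one."""
    d_pos = F.pairwise_distance(anchor, relevant)
    d_neg = F.pairwise_distance(anchor, irrelevant)
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64)
print(margin_embedding_loss(a, p, n))
```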
1603.08308
Minsu Park
Minsu Park, Mor Naaman, Jonah Berger
A Data-driven Study of View Duration on YouTube
4 pages, 2 tables, Accepted to the 10th International AAAI Conference on Web and Social Media, ICWSM'16
10th International AAAI Conference on Web and Social Media (ICWSM 2016)
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video watching has emerged as one of the most frequent media activities on the Internet. Yet, little is known about how users watch online video. Using two distinct YouTube datasets, a set of random YouTube videos crawled from the Web and a set of videos watched by participants tracked by a Chrome extension, we examine whether and how indicators of collective preferences and reactions are associated with view duration of videos. We show that video view duration is positively associated with the video's view count, the number of likes per view, and the negative sentiment in the comments. These metrics and reactions have a significant predictive power over the duration the video is watched by individuals. Our findings provide a more precise understanding of user engagement with video content in social media beyond view count.
[ { "version": "v1", "created": "Mon, 28 Mar 2016 04:55:21 GMT" }, { "version": "v2", "created": "Tue, 29 Mar 2016 15:46:07 GMT" }, { "version": "v3", "created": "Wed, 17 May 2017 04:27:20 GMT" } ]
2017-05-18T00:00:00
[ [ "Park", "Minsu", "" ], [ "Naaman", "Mor", "" ], [ "Berger", "Jonah", "" ] ]
TITLE: A Data-driven Study of View Duration on YouTube ABSTRACT: Video watching has emerged as one of the most frequent media activities on the Internet. Yet, little is known about how users watch online video. Using two distinct YouTube datasets, a set of random YouTube videos crawled from the Web and a set of videos watched by participants tracked by a Chrome extension, we examine whether and how indicators of collective preferences and reactions are associated with view duration of videos. We show that video view duration is positively associated with the video's view count, the number of likes per view, and the negative sentiment in the comments. These metrics and reactions have a significant predictive power over the duration the video is watched by individuals. Our findings provide a more precise understanding of user engagement with video content in social media beyond view count.
no_new_dataset
0.912903
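The associations reported in the record above (view duration vs. view count, likes per view, and comment sentiment) are the kind of relationship a simple regression over per-video features can probe. A sketch with hypothetical column names (the paper's exact features and data are not reproduced here):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical file and column names for a per-video feature table.
df = pd.read_csv("youtube_videos.csv")
X = df[["log_view_count", "likes_per_view", "comment_neg_sentiment"]]
y = df["view_duration_fraction"]          # fraction of the video watched

model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_)))  # sign/size of each association
print("R^2:", model.score(X, y))
```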
1604.02071
Reinhard Heckel
Reinhard Heckel, Michail Vlachos, Thomas Parnell, and Celestine D\"unner
Scalable and interpretable product recommendations via overlapping co-clustering
In IEEE International Conference on Data Engineering (ICDE) 2017
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of generating interpretable recommendations by identifying overlapping co-clusters of clients and products, based only on positive or implicit feedback. Our approach is applicable on very large datasets because it exhibits almost linear complexity in the input examples and the number of co-clusters. We show, both on real industrial data and on publicly available datasets, that the recommendation accuracy of our algorithm is competitive to that of state-of-the-art matrix factorization techniques. In addition, our technique has the advantage of offering recommendations that are textually and visually interpretable. Finally, we examine how to implement our technique efficiently on Graphical Processing Units (GPUs).
[ { "version": "v1", "created": "Thu, 7 Apr 2016 16:40:53 GMT" }, { "version": "v2", "created": "Wed, 17 May 2017 17:58:51 GMT" } ]
2017-05-18T00:00:00
[ [ "Heckel", "Reinhard", "" ], [ "Vlachos", "Michail", "" ], [ "Parnell", "Thomas", "" ], [ "Dünner", "Celestine", "" ] ]
TITLE: Scalable and interpretable product recommendations via overlapping co-clustering ABSTRACT: We consider the problem of generating interpretable recommendations by identifying overlapping co-clusters of clients and products, based only on positive or implicit feedback. Our approach is applicable on very large datasets because it exhibits almost linear complexity in the input examples and the number of co-clusters. We show, both on real industrial data and on publicly available datasets, that the recommendation accuracy of our algorithm is competitive to that of state-of-the-art matrix factorization techniques. In addition, our technique has the advantage of offering recommendations that are textually and visually interpretable. Finally, we examine how to implement our technique efficiently on Graphical Processing Units (GPUs).
no_new_dataset
0.945147
1611.09957
Ehsan Amid
Ehsan Amid, Nikos Vlassis, Manfred K. Warmuth
Low-dimensional Data Embedding via Robust Ranking
null
null
null
null
cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a new method called t-ETE for finding a low-dimensional embedding of a set of objects in Euclidean space. We formulate the embedding problem as a joint ranking problem over a set of triplets, where each triplet captures the relative similarities between three objects in the set. By exploiting recent advances in robust ranking, t-ETE produces high-quality embeddings even in the presence of a significant amount of noise and better preserves local scale than known methods, such as t-STE and t-SNE. In particular, our method produces significantly better results than t-SNE on signature datasets while also being faster to compute.
[ { "version": "v1", "created": "Wed, 30 Nov 2016 01:03:11 GMT" }, { "version": "v2", "created": "Tue, 16 May 2017 21:21:03 GMT" } ]
2017-05-18T00:00:00
[ [ "Amid", "Ehsan", "" ], [ "Vlassis", "Nikos", "" ], [ "Warmuth", "Manfred K.", "" ] ]
TITLE: Low-dimensional Data Embedding via Robust Ranking ABSTRACT: We describe a new method called t-ETE for finding a low-dimensional embedding of a set of objects in Euclidean space. We formulate the embedding problem as a joint ranking problem over a set of triplets, where each triplet captures the relative similarities between three objects in the set. By exploiting recent advances in robust ranking, t-ETE produces high-quality embeddings even in the presence of a significant amount of noise and better preserves local scale than known methods, such as t-STE and t-SNE. In particular, our method produces significantly better results than t-SNE on signature datasets while also being faster to compute.
no_new_dataset
0.952794
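For intuition on the record above: t-ETE belongs to the family of triplet-based embedding methods. The sketch below implements the classical t-STE objective (the baseline the abstract compares against) by gradient descent; t-ETE itself differs by using a robust ranking loss, which is not reproduced here:

```python
import torch

def triplet_embed(triplets, n, dim=2, alpha=1.0, steps=500, lr=0.1):
    """Minimal t-STE-style embedding from triplets (i, j, k) meaning
    'object i is closer to j than to k', optimized with autograd."""
    X = torch.randn(n, dim, requires_grad=True)
    opt = torch.optim.Adam([X], lr=lr)
    t = torch.as_tensor(triplets)
    for _ in range(steps):
        dij = ((X[t[:, 0]] - X[t[:, 1]]) ** 2).sum(1)
        dik = ((X[t[:, 0]] - X[t[:, 2]]) ** 2).sum(1)
        # Student-t kernel probability that each triplet is satisfied
        kij = (1 + dij / alpha) ** (-(alpha + 1) / 2)
        kik = (1 + dik / alpha) ** (-(alpha + 1) / 2)
        loss = -torch.log(kij / (kij + kik) + 1e-12).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return X.detach()

emb = triplet_embed([(0, 1, 2), (1, 0, 3), (2, 3, 0)], n=4)
```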
1703.07076
Esben Jannik Bjerrum
Esben Jannik Bjerrum
SMILES Enumeration as Data Augmentation for Neural Network Modeling of Molecules
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simplified Molecular Input Line Entry System (SMILES) is a single line text representation of a unique molecule. One molecule can, however, have multiple SMILES strings, which is why canonical SMILES were defined to ensure a one-to-one correspondence between SMILES string and molecule. Here, the fact that multiple SMILES represent the same molecule is explored as a technique for data augmentation of a molecular QSAR dataset modeled by a long short-term memory (LSTM) cell based neural network. The augmented dataset was 130 times bigger than the original. The network trained with the augmented dataset shows better performance on a test set when compared to a model built with only one canonical SMILES string per molecule. The correlation coefficient R2 on the test set was improved from 0.56 to 0.66 when using SMILES enumeration, and the root mean square error (RMS) likewise fell from 0.62 to 0.55. The technique also works in the prediction phase: by taking the average per molecule of the predictions for the enumerated SMILES, a further improvement to a correlation coefficient of 0.68 and an RMS of 0.52 was found.
[ { "version": "v1", "created": "Tue, 21 Mar 2017 07:13:13 GMT" }, { "version": "v2", "created": "Wed, 17 May 2017 11:24:43 GMT" } ]
2017-05-18T00:00:00
[ [ "Bjerrum", "Esben Jannik", "" ] ]
TITLE: SMILES Enumeration as Data Augmentation for Neural Network Modeling of Molecules ABSTRACT: Simplified Molecular Input Line Entry System (SMILES) is a single line text representation of a unique molecule. One molecule can, however, have multiple SMILES strings, which is why canonical SMILES were defined to ensure a one-to-one correspondence between SMILES string and molecule. Here, the fact that multiple SMILES represent the same molecule is explored as a technique for data augmentation of a molecular QSAR dataset modeled by a long short-term memory (LSTM) cell based neural network. The augmented dataset was 130 times bigger than the original. The network trained with the augmented dataset shows better performance on a test set when compared to a model built with only one canonical SMILES string per molecule. The correlation coefficient R2 on the test set was improved from 0.56 to 0.66 when using SMILES enumeration, and the root mean square error (RMS) likewise fell from 0.62 to 0.55. The technique also works in the prediction phase: by taking the average per molecule of the predictions for the enumerated SMILES, a further improvement to a correlation coefficient of 0.68 and an RMS of 0.52 was found.
no_new_dataset
0.952706
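The augmentation described in the record above can be reproduced in a few lines with RDKit: randomly renumbering a molecule's atoms and writing non-canonical SMILES yields many distinct strings for the same molecule. A small sketch (the helper name and the enumeration count are arbitrary choices):

```python
import numpy as np
from rdkit import Chem

def enumerate_smiles(smiles, n=10, seed=0):
    """Return alternative (non-canonical) SMILES for one molecule by
    randomly renumbering its atoms -- the augmentation idea above."""
    rng = np.random.RandomState(seed)
    mol = Chem.MolFromSmiles(smiles)
    variants = set()
    for _ in range(n):
        order = [int(i) for i in rng.permutation(mol.GetNumAtoms())]
        variants.add(Chem.MolToSmiles(Chem.RenumberAtoms(mol, order),
                                      canonical=False))
    return sorted(variants)

print(enumerate_smiles("c1ccccc1O"))  # several strings, all phenol
```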
1704.02003
Samuel Pollard
Samuel Pollard and Boyana Norris
A Comparison of Parallel Graph Processing Implementations
10 pages, 10 figures, Submitted to EuroPar 2017 and rejected. Revised and submitted to IEEE Cluster 2017
null
null
null
cs.PF cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rapidly growing number of large network analysis problems has led to the emergence of many parallel and distributed graph processing systems---one survey in 2014 identified over 80. Since then, the landscape has evolved; some packages have become inactive while more are being developed. Determining the best approach for a given problem is infeasible for most developers. To enable easy, rigorous, and repeatable comparison of the capabilities of such systems, we present an approach and associated software for analyzing the performance and scalability of parallel, open-source graph libraries. We demonstrate our approach on five graph processing packages: GraphMat, the Graph500, the Graph Algorithm Platform Benchmark Suite, GraphBIG, and PowerGraph using synthetic and real-world datasets. We examine previously overlooked aspects of parallel graph processing performance such as phases of execution and energy usage for three algorithms: breadth first search, single source shortest paths, and PageRank and compare our results to Graphalytics.
[ { "version": "v1", "created": "Thu, 6 Apr 2017 19:48:37 GMT" }, { "version": "v2", "created": "Wed, 17 May 2017 01:44:23 GMT" } ]
2017-05-18T00:00:00
[ [ "Pollard", "Samuel", "" ], [ "Norris", "Boyana", "" ] ]
TITLE: A Comparison of Parallel Graph Processing Implementations ABSTRACT: The rapidly growing number of large network analysis problems has led to the emergence of many parallel and distributed graph processing systems---one survey in 2014 identified over 80. Since then, the landscape has evolved; some packages have become inactive while more are being developed. Determining the best approach for a given problem is infeasible for most developers. To enable easy, rigorous, and repeatable comparison of the capabilities of such systems, we present an approach and associated software for analyzing the performance and scalability of parallel, open-source graph libraries. We demonstrate our approach on five graph processing packages: GraphMat, the Graph500, the Graph Algorithm Platform Benchmark Suite, GraphBIG, and PowerGraph using synthetic and real-world datasets. We examine previously overlooked aspects of parallel graph processing performance such as phases of execution and energy usage for three algorithms: breadth first search, single source shortest paths, and PageRank and compare our results to Graphalytics.
no_new_dataset
0.941385
1705.04612
Esben Jannik Bjerrum
Esben Jannik Bjerrum, Richard Threlfall
Molecular Generation with Recurrent Neural Networks (RNNs)
null
null
null
null
cs.LG q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The potential number of drug-like small molecules is estimated to be between 10^23 and 10^60, while current databases of known compounds are orders of magnitude smaller, with approximately 10^8 compounds. This discrepancy has led to an interest in generating virtual libraries using hand-crafted chemical rules and fragment-based methods to cover a larger area of chemical space and generate chemical libraries for use in in silico drug discovery endeavors. Here it is explored to what extent a recurrent neural network with long short-term memory cells can figure out sensible chemical rules and generate synthesizable molecules by being trained on existing compounds encoded as SMILES. The networks can, to a high extent, generate novel but chemically sensible molecules. The properties of the molecules are tuned by training on two different datasets consisting of fragment-like molecules and drug-like molecules. The produced molecules have distributions of molar weight, predicted logP, number of hydrogen bond acceptors and donors, number of rotatable bonds and topological polar surface area very similar to those of their respective training sets. The compounds are in most cases synthesizable, as assessed with the SA score and Wiley ChemPlanner.
[ { "version": "v1", "created": "Fri, 12 May 2017 14:56:09 GMT" }, { "version": "v2", "created": "Wed, 17 May 2017 10:55:22 GMT" } ]
2017-05-18T00:00:00
[ [ "Bjerrum", "Esben Jannik", "" ], [ "Threlfall", "Richard", "" ] ]
TITLE: Molecular Generation with Recurrent Neural Networks (RNNs) ABSTRACT: The potential number of drug-like small molecules is estimated to be between 10^23 and 10^60, while current databases of known compounds are orders of magnitude smaller, with approximately 10^8 compounds. This discrepancy has led to an interest in generating virtual libraries using hand-crafted chemical rules and fragment-based methods to cover a larger area of chemical space and generate chemical libraries for use in in silico drug discovery endeavors. Here it is explored to what extent a recurrent neural network with long short-term memory cells can figure out sensible chemical rules and generate synthesizable molecules by being trained on existing compounds encoded as SMILES. The networks can, to a high extent, generate novel but chemically sensible molecules. The properties of the molecules are tuned by training on two different datasets consisting of fragment-like molecules and drug-like molecules. The produced molecules have distributions of molar weight, predicted logP, number of hydrogen bond acceptors and donors, number of rotatable bonds and topological polar surface area very similar to those of their respective training sets. The compounds are in most cases synthesizable, as assessed with the SA score and Wiley ChemPlanner.
no_new_dataset
0.952264
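A minimal sketch of the kind of character-level SMILES generator the record above describes, in PyTorch (layer sizes, tokenization, and the sampling loop are assumptions; the paper's exact setup may differ):

```python
import torch
import torch.nn as nn

class SmilesLSTM(nn.Module):
    """Character-level generative model: predict the next SMILES token
    given the prefix; sampling from it yields candidate molecules."""
    def __init__(self, vocab_size, emb=64, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.emb(x), state)
        return self.out(h), state

def sample(model, start_idx, end_idx, max_len=100):
    # autoregressively sample token indices until the end token appears
    x = torch.tensor([[start_idx]])
    state, tokens = None, []
    for _ in range(max_len):
        logits, state = model(x, state)
        x = torch.multinomial(torch.softmax(logits[0, -1], dim=0), 1).view(1, 1)
        if x.item() == end_idx:
            break
        tokens.append(x.item())
    return tokens  # map indices back to characters, then validate with RDKit
```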
1705.05986
Srinivasan Parthasarathy
Yanjie Fu, Charu Aggarwal, Srinivasan Parthasarathy, Deepak S. Turaga, Hui Xiong
REMIX: Automated Exploration for Interactive Outlier Detection
To appear in KDD 2017
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Outlier detection is the identification of points in a dataset that do not conform to the norm. Outlier detection is highly sensitive to the choice of the detection algorithm and the feature subspace used by the algorithm. Extracting domain-relevant insights from outliers needs systematic exploration of these choices since diverse outlier sets could lead to complementary insights. This challenge is especially acute in an interactive setting, where the choices must be explored in a time-constrained manner. In this work, we present REMIX, the first system to address the problem of outlier detection in an interactive setting. REMIX uses a novel mixed integer programming (MIP) formulation for automatically selecting and executing a diverse set of outlier detectors within a time limit. This formulation incorporates multiple aspects such as (i) an upper limit on the total execution time of detectors, (ii) diversity in the space of algorithms and features, and (iii) meta-learning for evaluating the cost and utility of detectors. REMIX provides two distinct ways for the analyst to consume its results: (i) a partitioning of the detectors explored by REMIX into perspectives through low-rank non-negative matrix factorization; each perspective can be easily visualized as an intuitive heatmap of experiments versus outliers, and (ii) an ensembled set of outliers which combines outlier scores from all detectors. We demonstrate the benefits of REMIX through extensive empirical validation on real-world data.
[ { "version": "v1", "created": "Wed, 17 May 2017 02:17:48 GMT" } ]
2017-05-18T00:00:00
[ [ "Fu", "Yanjie", "" ], [ "Aggarwal", "Charu", "" ], [ "Parthasarathy", "Srinivasan", "" ], [ "Turaga", "Deepak S.", "" ], [ "Xiong", "Hui", "" ] ]
TITLE: REMIX: Automated Exploration for Interactive Outlier Detection ABSTRACT: Outlier detection is the identification of points in a dataset that do not conform to the norm. Outlier detection is highly sensitive to the choice of the detection algorithm and the feature subspace used by the algorithm. Extracting domain-relevant insights from outliers needs systematic exploration of these choices since diverse outlier sets could lead to complementary insights. This challenge is especially acute in an interactive setting, where the choices must be explored in a time-constrained manner. In this work, we present REMIX, the first system to address the problem of outlier detection in an interactive setting. REMIX uses a novel mixed integer programming (MIP) formulation for automatically selecting and executing a diverse set of outlier detectors within a time limit. This formulation incorporates multiple aspects such as (i) an upper limit on the total execution time of detectors, (ii) diversity in the space of algorithms and features, and (iii) meta-learning for evaluating the cost and utility of detectors. REMIX provides two distinct ways for the analyst to consume its results: (i) a partitioning of the detectors explored by REMIX into perspectives through low-rank non-negative matrix factorization; each perspective can be easily visualized as an intuitive heatmap of experiments versus outliers, and (ii) an ensembled set of outliers which combines outlier scores from all detectors. We demonstrate the benefits of REMIX through extensive empirical validation on real-world data.
no_new_dataset
0.945651
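The record above selects and runs a diverse, time-budgeted set of outlier detectors and ensembles their scores. The sketch below is a greedy stand-in for that idea using scikit-learn detectors (REMIX's actual MIP-based selection and meta-learned cost model are not reproduced):

```python
import time
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

def budgeted_outlier_ensemble(X, budget_s=5.0):
    """Run diverse detectors until the time budget is spent, then
    average rank-normalized outlier scores into an ensemble."""
    detectors = [
        ("iforest", IsolationForest(random_state=0)),
        ("lof", LocalOutlierFactor()),
        ("ocsvm", OneClassSVM(gamma="scale")),
    ]
    scores, start = [], time.time()
    for name, det in detectors:
        if time.time() - start > budget_s:
            break
        if name == "lof":
            det.fit(X)
            s = -det.negative_outlier_factor_   # higher = more outlying
        else:
            s = -det.fit(X).score_samples(X)    # flip: higher = outlying
        scores.append(np.argsort(np.argsort(s)) / len(s))  # rank-normalize
    return np.mean(scores, axis=0)

X = np.random.randn(200, 5)
print(budgeted_outlier_ensemble(X)[:5])
```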
1705.05992
Jun Zhang
Xu Tian, Jun Zhang, Zejun Ma, Yi He, Juan Wei
Frame Stacking and Retaining for Recurrent Neural Network Acoustic Model
5 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Frame stacking is broadly applied in end-to-end neural network training, such as with connectionist temporal classification (CTC), and it leads to more accurate models and faster decoding. However, it is not well suited to conventional neural networks based on context-dependent state acoustic models if the decoder is unchanged. In this paper, we propose a novel frame retaining method which is applied in decoding. A system that combines frame retaining with frame stacking reduces the time consumption of both training and decoding. Long short-term memory (LSTM) recurrent neural networks (RNNs) using it achieve an almost linear training speedup and reduce the real time factor (RTF) by a relative 41\%. At the same time, recognition performance shows no degradation, or even improves slightly, on the Shenma voice search dataset in Mandarin.
[ { "version": "v1", "created": "Wed, 17 May 2017 02:34:27 GMT" } ]
2017-05-18T00:00:00
[ [ "Tian", "Xu", "" ], [ "Zhang", "Jun", "" ], [ "Ma", "Zejun", "" ], [ "He", "Yi", "" ], [ "Wei", "Juan", "" ] ]
TITLE: Frame Stacking and Retaining for Recurrent Neural Network Acoustic Model ABSTRACT: Frame stacking is broadly applied in end-to-end neural network training, such as with connectionist temporal classification (CTC), and it leads to more accurate models and faster decoding. However, it is not well suited to conventional neural networks based on context-dependent state acoustic models if the decoder is unchanged. In this paper, we propose a novel frame retaining method which is applied in decoding. A system that combines frame retaining with frame stacking reduces the time consumption of both training and decoding. Long short-term memory (LSTM) recurrent neural networks (RNNs) using it achieve an almost linear training speedup and reduce the real time factor (RTF) by a relative 41\%. At the same time, recognition performance shows no degradation, or even improves slightly, on the Shenma voice search dataset in Mandarin.
no_new_dataset
0.951997
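For the record above, frame stacking itself is easy to picture in code: consecutive acoustic frames are concatenated into super-frames, shortening the sequence. One plausible reading of frame retaining is then to repeat each network output so a conventional frame-synchronous decoder still sees the original frame rate. A numpy sketch under those assumptions (not the paper's exact scheme):

```python
import numpy as np

def stack_frames(feats, stack=3):
    """Concatenate every `stack` consecutive acoustic frames into one
    super-frame, shortening the sequence by a factor of `stack`."""
    T, D = feats.shape
    T = (T // stack) * stack          # drop the ragged tail
    return feats[:T].reshape(T // stack, stack * D)

def retain_frames(posteriors, stack=3):
    # repeat each output to restore the original frame rate for decoding
    return np.repeat(posteriors, stack, axis=0)

x = np.random.randn(100, 40)          # 100 frames of 40-dim features
y = stack_frames(x)                   # -> shape (33, 120)
z = retain_frames(np.random.rand(33, 500))  # -> shape (99, 500)
```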
1705.05998
Tao Xiong
Dong Yang, Tao Xiong, Daguang Xu, Qiangui Huang, David Liu, S.Kevin Zhou, Zhoubing Xu, JinHyeong Park, Mingqing Chen, Trac D. Tran, Sang Peter Chin, Dimitris Metaxas, Dorin Comaniciu
Automatic Vertebra Labeling in Large-Scale 3D CT using Deep Image-to-Image Network with Message Passing and Sparsity Regularization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic localization and labeling of vertebrae in 3D medical images plays an important role in many clinical tasks, including pathological diagnosis, surgical planning and postoperative assessment. However, the unusual conditions of pathological cases, such as abnormal spine curvature, bright visual imaging artifacts caused by metal implants, and a limited field of view, increase the difficulty of accurate localization. In this paper, we propose an automatic and fast algorithm to localize and label the vertebra centroids in 3D CT volumes. First, we deploy a deep image-to-image network (DI2IN) to initialize vertebra locations, employing the convolutional encoder-decoder architecture together with multi-level feature concatenation and deep supervision. Next, the centroid probability maps from DI2IN are iteratively evolved with message passing schemes based on the mutual relation of vertebra centroids. Finally, the localization results are refined with sparsity regularization. The proposed method is evaluated on a public dataset of 302 spine CT volumes with various pathologies. Our method outperforms other state-of-the-art methods in terms of localization accuracy. The run time is around 3 seconds on average per case. To further boost the performance, we retrain the DI2IN on an additional 1000+ 3D CT volumes from different patients. To the best of our knowledge, this is the first time that more than 1000 3D CT volumes with expert annotation have been used in experiments for anatomic landmark detection tasks. Our experimental results show that training with such a large dataset significantly improves the performance, and the overall identification rate reaches 90%, for the first time to our knowledge.
[ { "version": "v1", "created": "Wed, 17 May 2017 03:56:14 GMT" } ]
2017-05-18T00:00:00
[ [ "Yang", "Dong", "" ], [ "Xiong", "Tao", "" ], [ "Xu", "Daguang", "" ], [ "Huang", "Qiangui", "" ], [ "Liu", "David", "" ], [ "Zhou", "S. Kevin", "" ], [ "Xu", "Zhoubing", "" ], [ "Park", "JinHyeong", "" ], [ "Chen", "Mingqing", "" ], [ "Tran", "Trac D.", "" ], [ "Chin", "Sang Peter", "" ], [ "Metaxas", "Dimitris", "" ], [ "Comaniciu", "Dorin", "" ] ]
TITLE: Automatic Vertebra Labeling in Large-Scale 3D CT using Deep Image-to-Image Network with Message Passing and Sparsity Regularization ABSTRACT: Automatic localization and labeling of vertebrae in 3D medical images plays an important role in many clinical tasks, including pathological diagnosis, surgical planning and postoperative assessment. However, the unusual conditions of pathological cases, such as abnormal spine curvature, bright visual imaging artifacts caused by metal implants, and a limited field of view, increase the difficulty of accurate localization. In this paper, we propose an automatic and fast algorithm to localize and label the vertebra centroids in 3D CT volumes. First, we deploy a deep image-to-image network (DI2IN) to initialize vertebra locations, employing the convolutional encoder-decoder architecture together with multi-level feature concatenation and deep supervision. Next, the centroid probability maps from DI2IN are iteratively evolved with message passing schemes based on the mutual relation of vertebra centroids. Finally, the localization results are refined with sparsity regularization. The proposed method is evaluated on a public dataset of 302 spine CT volumes with various pathologies. Our method outperforms other state-of-the-art methods in terms of localization accuracy. The run time is around 3 seconds on average per case. To further boost the performance, we retrain the DI2IN on an additional 1000+ 3D CT volumes from different patients. To the best of our knowledge, this is the first time that more than 1000 3D CT volumes with expert annotation have been used in experiments for anatomic landmark detection tasks. Our experimental results show that training with such a large dataset significantly improves the performance, and the overall identification rate reaches 90%, for the first time to our knowledge.
no_new_dataset
0.950824
1705.06000
Abhishek Sharma
Abhishek Sharma
One Shot Joint Colocalization and Cosegmentation
8 pages, Under Review
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel framework in which image cosegmentation and colocalization are cast into a single optimization problem that integrates information from low-level appearance cues with that of high-level localization cues in a very weakly supervised manner. In contrast to the multi-task learning paradigm, which learns similar tasks using a shared representation, the proposed framework leverages two representations at different levels and simultaneously discriminates between foreground and background at the bounding box and superpixel level using discriminative clustering. We show empirically that constraining the two problems at different scales enables the transfer of semantic localization cues to improve cosegmentation output, whereas local appearance-based segmentation cues help colocalization. The unified framework outperforms strong baselines that learn the two problems separately, by a large margin, on four benchmark datasets. Furthermore, it obtains competitive results compared to the state of the art for cosegmentation on two benchmark datasets and the second-best result for colocalization on Pascal VOC 2007.
[ { "version": "v1", "created": "Wed, 17 May 2017 04:18:19 GMT" } ]
2017-05-18T00:00:00
[ [ "Sharma", "Abhishek", "" ] ]
TITLE: One Shot Joint Colocalization and Cosegmentation ABSTRACT: This paper presents a novel framework in which image cosegmentation and colocalization are cast into a single optimization problem that integrates information from low-level appearance cues with that of high-level localization cues in a very weakly supervised manner. In contrast to the multi-task learning paradigm, which learns similar tasks using a shared representation, the proposed framework leverages two representations at different levels and simultaneously discriminates between foreground and background at the bounding box and superpixel level using discriminative clustering. We show empirically that constraining the two problems at different scales enables the transfer of semantic localization cues to improve cosegmentation output, whereas local appearance-based segmentation cues help colocalization. The unified framework outperforms strong baselines that learn the two problems separately, by a large margin, on four benchmark datasets. Furthermore, it obtains competitive results compared to the state of the art for cosegmentation on two benchmark datasets and the second-best result for colocalization on Pascal VOC 2007.
no_new_dataset
0.950641
1705.06057
Nicolas Audebert
Nicolas Audebert (Palaiseau, OBELIX), Bertrand Le Saux (Palaiseau), S\'ebastien Lef\`evre (OBELIX)
Joint Learning from Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps
null
EARTHVISION 2017 IEEE/ISPRS CVPR Workshop. Large Scale Computer Vision for Remote Sensing Imagery, Jul 2017, Honolulu, United States. 2017
null
null
cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we investigate the use of OpenStreetMap data for semantic labeling of Earth Observation images. Deep neural networks have been used in the past for remote sensing data classification from various sensors, including multispectral, hyperspectral, SAR and LiDAR data. While OpenStreetMap has already been used as ground truth data for training such networks, this abundant data source remains rarely exploited as an input information layer. In this paper, we study different use cases and deep network architectures to leverage OpenStreetMap data for semantic labeling of aerial and satellite images. In particular, we look into fusion-based architectures and coarse-to-fine segmentation to include the OpenStreetMap layer into multispectral-based deep fully convolutional networks. We illustrate how these methods can be successfully used on two public datasets: ISPRS Potsdam and DFC2017. We show that OpenStreetMap data can efficiently be integrated into the vision-based deep learning models and that it significantly improves both the accuracy performance and the convergence speed of the networks.
[ { "version": "v1", "created": "Wed, 17 May 2017 09:07:08 GMT" } ]
2017-05-18T00:00:00
[ [ "Audebert", "Nicolas", "", "Palaiseau, OBELIX" ], [ "Saux", "Bertrand Le", "", "Palaiseau" ], [ "Lefèvre", "Sébastien", "", "OBELIX" ] ]
TITLE: Joint Learning from Earth Observation and OpenStreetMap Data to Get Faster Better Semantic Maps ABSTRACT: In this work, we investigate the use of OpenStreetMap data for semantic labeling of Earth Observation images. Deep neural networks have been used in the past for remote sensing data classification from various sensors, including multispectral, hyperspectral, SAR and LiDAR data. While OpenStreetMap has already been used as ground truth data for training such networks, this abundant data source remains rarely exploited as an input information layer. In this paper, we study different use cases and deep network architectures to leverage OpenStreetMap data for semantic labeling of aerial and satellite images. In particular, we look into fusion-based architectures and coarse-to-fine segmentation to include the OpenStreetMap layer into multispectral-based deep fully convolutional networks. We illustrate how these methods can be successfully used on two public datasets: ISPRS Potsdam and DFC2017. We show that OpenStreetMap data can efficiently be integrated into the vision-based deep learning models and that it significantly improves both the accuracy performance and the convergence speed of the networks.
no_new_dataset
0.952794
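The simplest way to use OpenStreetMap as an input layer, as studied in the record above, is early fusion: rasterized OSM layers are concatenated with the spectral bands as extra input channels. An illustrative toy network in PyTorch (not the paper's architecture; channel counts are assumptions):

```python
import torch
import torch.nn as nn

class EarlyFusionFCN(nn.Module):
    """Toy fully convolutional network that fuses OSM rasters with
    spectral bands by channel concatenation at the input."""
    def __init__(self, spectral_bands=4, osm_channels=2, classes=6):
        super().__init__()
        c = spectral_bands + osm_channels
        self.net = nn.Sequential(
            nn.Conv2d(c, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, classes, 1),       # per-pixel class scores
        )

    def forward(self, image, osm_raster):
        return self.net(torch.cat([image, osm_raster], dim=1))

model = EarlyFusionFCN()
out = model(torch.randn(1, 4, 128, 128), torch.randn(1, 2, 128, 128))
print(out.shape)  # (1, 6, 128, 128)
```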
1705.06201
Anirudh Vemula
Anirudh Vemula, Katharina Muelling and Jean Oh
Modeling Cooperative Navigation in Dense Human Crowds
Accepted at ICRA 2017
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For robots to be a part of our daily life, they need to be able to navigate among crowds not only safely but also in a socially compliant fashion. This is a challenging problem because humans tend to navigate by implicitly cooperating with one another to avoid collisions, while heading toward their respective destinations. Previous approaches have used hand-crafted functions based on proximity to model human-human and human-robot interactions. However, these approaches can only model simple interactions and fail to generalize for complex crowded settings. In this paper, we develop an approach that models the joint distribution over future trajectories of all interacting agents in the crowd, through a local interaction model that we train using real human trajectory data. The interaction model infers the velocity of each agent based on the spatial orientation of other agents in its vicinity. During prediction, our approach infers the goal of the agent from its past trajectory and uses the learned model to predict its future trajectory. We demonstrate the performance of our method against a state-of-the-art approach on a public dataset and show that our model outperforms it when predicting future trajectories over longer horizons.
[ { "version": "v1", "created": "Wed, 17 May 2017 15:12:46 GMT" } ]
2017-05-18T00:00:00
[ [ "Vemula", "Anirudh", "" ], [ "Muelling", "Katharina", "" ], [ "Oh", "Jean", "" ] ]
TITLE: Modeling Cooperative Navigation in Dense Human Crowds ABSTRACT: For robots to be a part of our daily life, they need to be able to navigate among crowds not only safely but also in a socially compliant fashion. This is a challenging problem because humans tend to navigate by implicitly cooperating with one another to avoid collisions, while heading toward their respective destinations. Previous approaches have used hand-crafted functions based on proximity to model human-human and human-robot interactions. However, these approaches can only model simple interactions and fail to generalize for complex crowded settings. In this paper, we develop an approach that models the joint distribution over future trajectories of all interacting agents in the crowd, through a local interaction model that we train using real human trajectory data. The interaction model infers the velocity of each agent based on the spatial orientation of other agents in its vicinity. During prediction, our approach infers the goal of the agent from its past trajectory and uses the learned model to predict its future trajectory. We demonstrate the performance of our method against a state-of-the-art approach on a public dataset and show that our model outperforms it when predicting future trajectories over longer horizons.
no_new_dataset
0.94428
1705.06273
Franck Dernoncourt
Ji Young Lee, Franck Dernoncourt, Peter Szolovits
Transfer Learning for Named-Entity Recognition with Neural Networks
The first two authors contributed equally to this work
null
null
null
cs.CL cs.AI cs.NE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performance, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.
[ { "version": "v1", "created": "Wed, 17 May 2017 17:45:15 GMT" } ]
2017-05-18T00:00:00
[ [ "Lee", "Ji Young", "" ], [ "Dernoncourt", "Franck", "" ], [ "Szolovits", "Peter", "" ] ]
TITLE: Transfer Learning for Named-Entity Recognition with Neural Networks ABSTRACT: Recent approaches based on artificial neural networks (ANNs) have shown promising results for named-entity recognition (NER). In order to achieve high performance, ANNs need to be trained on a large labeled dataset. However, labels might be difficult to obtain for the dataset on which the user wants to perform NER: label scarcity is particularly pronounced for patient note de-identification, which is an instance of NER. In this work, we analyze to what extent transfer learning may address this issue. In particular, we demonstrate that transferring an ANN model trained on a large labeled dataset to another dataset with a limited number of labels improves upon the state-of-the-art results on two different datasets for patient note de-identification.
no_new_dataset
0.958304
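The transfer recipe in the record above amounts to reusing the lower layers of a source-trained tagger on a target task with few labels. A hedged PyTorch sketch (the BiLSTM architecture and all sizes are assumptions; the paper's model may differ):

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Simple per-token tagger: embeddings -> BiLSTM -> label scores."""
    def __init__(self, vocab, tags, emb=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, tags)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.out(h)

# Transfer sketch: train on the large source corpus, then reuse the
# lower layers for the small target corpus with a different label set.
source = BiLSTMTagger(vocab=30000, tags=9)
# ... train `source` on the large labeled dataset ...
target = BiLSTMTagger(vocab=30000, tags=5)       # new output layer
target.emb.load_state_dict(source.emb.state_dict())
target.lstm.load_state_dict(source.lstm.state_dict())
# fine-tune `target` on the small dataset (optionally freezing `emb` first)
```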
1607.04573
Luiz Gustavo Hafemann
Luiz G. Hafemann, Robert Sabourin, Luiz S. Oliveira
Analyzing features learned for Offline Signature Verification using Deep CNNs
Accepted as a conference paper to ICPR 2016
null
10.1109/ICPR.2016.7900092
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Research on Offline Handwritten Signature Verification has explored a large variety of handcrafted feature extractors, ranging from graphology and texture descriptors to interest points. In spite of advancements in the last decades, the performance of such systems is still far from optimal when we test them against skilled forgeries - signature forgeries that target a particular individual. In previous research, we proposed a formulation of the problem to learn features from data (signature images) in a Writer-Independent format, using Deep Convolutional Neural Networks (CNNs), seeking to improve performance on the task. In this research, we push the performance of this method further by exploring a range of architectures, obtaining a large improvement in state-of-the-art performance on the GPDS dataset, the largest publicly available dataset for the task. On the GPDS-160 dataset, we obtained an Equal Error Rate of 2.74%, compared to 6.97% for the best result published in the literature (which used a combination of multiple classifiers). We also present a visual analysis of the feature space learned by the model, and an analysis of the errors made by the classifier. Our analysis shows that the model is very effective in separating signatures that have a different global appearance, while being particularly vulnerable to forgeries that very closely resemble genuine signatures, even if their line quality is bad, which is the case for slowly-traced forgeries.
[ { "version": "v1", "created": "Fri, 15 Jul 2016 16:35:20 GMT" }, { "version": "v2", "created": "Fri, 26 Aug 2016 14:52:55 GMT" } ]
2017-05-17T00:00:00
[ [ "Hafemann", "Luiz G.", "" ], [ "Sabourin", "Robert", "" ], [ "Oliveira", "Luiz S.", "" ] ]
TITLE: Analyzing features learned for Offline Signature Verification using Deep CNNs ABSTRACT: Research on Offline Handwritten Signature Verification has explored a large variety of handcrafted feature extractors, ranging from graphology and texture descriptors to interest points. In spite of advancements in the last decades, the performance of such systems is still far from optimal when we test them against skilled forgeries - signature forgeries that target a particular individual. In previous research, we proposed a formulation of the problem to learn features from data (signature images) in a Writer-Independent format, using Deep Convolutional Neural Networks (CNNs), seeking to improve performance on the task. In this research, we push the performance of this method further by exploring a range of architectures, obtaining a large improvement in state-of-the-art performance on the GPDS dataset, the largest publicly available dataset for the task. On the GPDS-160 dataset, we obtained an Equal Error Rate of 2.74%, compared to 6.97% for the best result published in the literature (which used a combination of multiple classifiers). We also present a visual analysis of the feature space learned by the model, and an analysis of the errors made by the classifier. Our analysis shows that the model is very effective in separating signatures that have a different global appearance, while being particularly vulnerable to forgeries that very closely resemble genuine signatures, even if their line quality is bad, which is the case for slowly-traced forgeries.
no_new_dataset
0.94474
1612.00558
Basura Fernando
Basura Fernando, Sareh Shirazi and Stephen Gould
Unsupervised Human Action Detection by Action Matching
IEEE International Conference on Computer Vision and Pattern Recognition CVPR 2017 Workshops
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a new task of unsupervised action detection by action matching. Given two long videos, the objective is to temporally detect all pairs of matching video segments. A pair of video segments are matched if they share the same human action. The task is category independent---it does not matter what action is being performed---and no supervision is used to discover such video segments. Unsupervised action detection by action matching allows us to align videos in a meaningful manner. As such, it can be used to discover new action categories or as an action proposal technique within, say, an action detection pipeline. Moreover, it is a useful pre-processing step for generating video highlights, e.g., from sports videos. We present an effective and efficient method for unsupervised action detection. We use an unsupervised temporal encoding method and exploit the temporal consistency in human actions to obtain candidate action segments. We evaluate our method on this challenging task using three activity recognition benchmarks, namely, the MPII Cooking activities dataset, the THUMOS15 action detection benchmark and a new dataset called the IKEA dataset. On the MPII Cooking dataset we detect action segments with a precision of 21.6% and recall of 11.7% over 946 long video pairs and over 5000 ground truth action segments. Similarly, on THUMOS dataset we obtain 18.4% precision and 25.1% recall over 5094 ground truth action segment pairs.
[ { "version": "v1", "created": "Fri, 2 Dec 2016 03:39:38 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2017 03:36:17 GMT" }, { "version": "v3", "created": "Wed, 5 Apr 2017 06:18:22 GMT" }, { "version": "v4", "created": "Tue, 16 May 2017 00:56:24 GMT" } ]
2017-05-17T00:00:00
[ [ "Fernando", "Basura", "" ], [ "Shirazi", "Sareh", "" ], [ "Gould", "Stephen", "" ] ]
TITLE: Unsupervised Human Action Detection by Action Matching ABSTRACT: We propose a new task of unsupervised action detection by action matching. Given two long videos, the objective is to temporally detect all pairs of matching video segments. A pair of video segments are matched if they share the same human action. The task is category independent---it does not matter what action is being performed---and no supervision is used to discover such video segments. Unsupervised action detection by action matching allows us to align videos in a meaningful manner. As such, it can be used to discover new action categories or as an action proposal technique within, say, an action detection pipeline. Moreover, it is a useful pre-processing step for generating video highlights, e.g., from sports videos. We present an effective and efficient method for unsupervised action detection. We use an unsupervised temporal encoding method and exploit the temporal consistency in human actions to obtain candidate action segments. We evaluate our method on this challenging task using three activity recognition benchmarks, namely, the MPII Cooking activities dataset, the THUMOS15 action detection benchmark and a new dataset called the IKEA dataset. On the MPII Cooking dataset we detect action segments with a precision of 21.6% and recall of 11.7% over 946 long video pairs and over 5000 ground truth action segments. Similarly, on THUMOS dataset we obtain 18.4% precision and 25.1% recall over 5094 ground truth action segment pairs.
new_dataset
0.96225
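A toy version of the matching step described in the record above: slide fixed windows over two videos' per-frame encodings, pool each window, and report pairs whose cosine similarity clears a threshold (window size, stride, and threshold are arbitrary; the paper's temporal encoding is not reproduced here):

```python
import numpy as np

def match_segments(enc_a, enc_b, win=30, step=10, thresh=0.8):
    """Return (start_a, start_b, similarity) for window pairs whose
    mean-pooled encodings are cosine-similar above `thresh`."""
    def windows(enc):
        return [(s, enc[s:s + win].mean(0))
                for s in range(0, len(enc) - win + 1, step)]
    matches = []
    for sa, va in windows(enc_a):
        for sb, vb in windows(enc_b):
            cos = va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9)
            if cos > thresh:
                matches.append((sa, sb, float(cos)))
    return matches

a = np.random.randn(300, 128)   # per-frame encodings of video A
b = np.random.randn(250, 128)   # per-frame encodings of video B
print(len(match_segments(a, b)))
```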
1704.01508
Zafar Gilani
Zafar Gilani, Reza Farahbakhsh, Gareth Tyson, Liang Wang, Jon Crowcroft
An in-depth characterisation of Bots and Humans on Twitter
This is a technical report of 18 pages including references
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research has shown a substantial active presence of bots in online social networks (OSNs). In this paper we utilise our past work on studying bots (Stweeler) to comparatively analyse the usage and impact of bots and humans on Twitter, one of the largest OSNs in the world. We collect a large-scale Twitter dataset and define various metrics based on tweet metadata. We divide and filter the dataset into four popularity groups in terms of number of followers. Using a human annotation task we assign 'bot' and 'human' ground-truth labels to the dataset, and compare the annotations against an online bot detection tool for evaluation. We then ask a series of questions to discern important behavioural bot and human characteristics using metrics within and among the four popularity groups. From the comparative analysis we draw important differences as well as surprising similarities between the two entities, thus paving the way for reliable classification of automated political infiltration, advertisement campaigns, and general bot detection.
[ { "version": "v1", "created": "Wed, 5 Apr 2017 16:17:41 GMT" } ]
2017-05-17T00:00:00
[ [ "Gilani", "Zafar", "" ], [ "Farahbakhsh", "Reza", "" ], [ "Tyson", "Gareth", "" ], [ "Wang", "Liang", "" ], [ "Crowcroft", "Jon", "" ] ]
TITLE: An in-depth characterisation of Bots and Humans on Twitter ABSTRACT: Recent research has shown a substantial active presence of bots in online social networks (OSNs). In this paper we utilise our past work on studying bots (Stweeler) to comparatively analyse the usage and impact of bots and humans on Twitter, one of the largest OSNs in the world. We collect a large-scale Twitter dataset and define various metrics based on tweet metadata. We divide and filter the dataset into four popularity groups in terms of number of followers. Using a human annotation task we assign 'bot' and 'human' ground-truth labels to the dataset, and compare the annotations against an online bot detection tool for evaluation. We then ask a series of questions to discern important behavioural bot and human characteristics using metrics within and among the four popularity groups. From the comparative analysis we draw important differences as well as surprising similarities between the two entities, thus paving the way for reliable classification of automated political infiltration, advertisement campaigns, and general bot detection.
no_new_dataset
0.946547
1705.04353
Daniel Larremore
Andrew Berdahl, Uttam Bhat, Vanessa Ferdinand, Joshua Garland, Keyan Ghazi-Zahedi, Justin Grana, Joshua A. Grochow, Elizabeth Hobson, Yoav Kallus, Christopher P. Kempes, Artemy Kolchinsky, Daniel B. Larremore, Eric Libby, Eleanor A. Power, and Brendan D. Tracey (Santa Fe Institute Postdocs)
On the records
This paper was produced, from conception of idea, to execution, to writing, by a team in just 72 hours (see Appendix)
null
null
null
physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
World record setting has long attracted public interest and scientific investigation. Extremal records summarize the limits of the space explored by a process, and the historical progression of a record sheds light on the underlying dynamics of the process. Existing analyses of prediction, statistical properties, and ultimate limits of record progressions have focused on particular domains. However, a broad perspective on how record progressions vary across different spheres of activity needs further development. Here we employ cross-cutting metrics to compare records across a variety of domains, including sports, games, biological evolution, and technological development. We find that these domains exhibit characteristic statistical signatures in terms of rates of improvement, "burstiness" of record-breaking time series, and the acceleration of the record breaking process. Specifically, sports and games exhibit the slowest rate of improvement and a wide range of rates of "burstiness." Technology improves at a much faster rate and, unlike other domains, tends to show acceleration in records. Many biological and technological processes are characterized by constant rates of improvement, showing less burstiness than sports and games. It is important to understand how these statistical properties of record progression emerge from the underlying dynamics. Towards this end, we conduct a detailed analysis of a particular record-setting event: elite marathon running. In this domain, we find that studying record-setting data alone can obscure many of the structural properties of the underlying process. The marathon study also illustrates how some of the standard statistical assumptions underlying record progression models may be inappropriate or commonly violated in real-world datasets.
[ { "version": "v1", "created": "Thu, 11 May 2017 18:59:43 GMT" }, { "version": "v2", "created": "Mon, 15 May 2017 18:00:16 GMT" } ]
2017-05-17T00:00:00
[ [ "Berdahl", "Andrew", "", "Santa Fe Institute Postdocs" ], [ "Bhat", "Uttam", "", "Santa Fe Institute Postdocs" ], [ "Ferdinand", "Vanessa", "", "Santa Fe Institute Postdocs" ], [ "Garland", "Joshua", "", "Santa Fe Institute Postdocs" ], [ "Ghazi-Zahedi", "Keyan", "", "Santa Fe Institute Postdocs" ], [ "Grana", "Justin", "", "Santa Fe Institute Postdocs" ], [ "Grochow", "Joshua A.", "", "Santa Fe Institute Postdocs" ], [ "Hobson", "Elizabeth", "", "Santa Fe Institute Postdocs" ], [ "Kallus", "Yoav", "", "Santa Fe Institute Postdocs" ], [ "Kempes", "Christopher P.", "", "Santa Fe Institute Postdocs" ], [ "Kolchinsky", "Artemy", "", "Santa Fe Institute Postdocs" ], [ "Larremore", "Daniel B.", "", "Santa Fe Institute Postdocs" ], [ "Libby", "Eric", "", "Santa Fe Institute Postdocs" ], [ "Power", "Eleanor A.", "", "Santa Fe Institute Postdocs" ], [ "Tracey", "Brendan D.", "", "Santa Fe Institute Postdocs" ] ]
TITLE: On the records ABSTRACT: World record setting has long attracted public interest and scientific investigation. Extremal records summarize the limits of the space explored by a process, and the historical progression of a record sheds light on the underlying dynamics of the process. Existing analyses of prediction, statistical properties, and ultimate limits of record progressions have focused on particular domains. However, a broad perspective on how record progressions vary across different spheres of activity needs further development. Here we employ cross-cutting metrics to compare records across a variety of domains, including sports, games, biological evolution, and technological development. We find that these domains exhibit characteristic statistical signatures in terms of rates of improvement, "burstiness" of record-breaking time series, and the acceleration of the record breaking process. Specifically, sports and games exhibit the slowest rate of improvement and a wide range of rates of "burstiness." Technology improves at a much faster rate and, unlike other domains, tends to show acceleration in records. Many biological and technological processes are characterized by constant rates of improvement, showing less burstiness than sports and games. It is important to understand how these statistical properties of record progression emerge from the underlying dynamics. Towards this end, we conduct a detailed analysis of a particular record-setting event: elite marathon running. In this domain, we find that studying record-setting data alone can obscure many of the structural properties of the underlying process. The marathon study also illustrates how some of the standard statistical assumptions underlying record progression models may be inappropriate or commonly violated in real-world datasets.
no_new_dataset
0.928862
1705.05219
Sobhan Moosavi
Sobhan Moosavi, Behrooz Omidvar-Tehrani, R. Bruce Craig, Rajiv Ramnath
Annotation of Car Trajectories based on Driving Patterns
A 10-page technical report describing the process of preparing a ground-truth dataset
null
null
null
cs.OH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, the ubiquity of various sensors enables the collection of voluminous datasets of car trajectories. Such datasets enable analysts to make sense of driving patterns and behaviors: in order to understand the behavior of drivers, one approach is to break a trajectory into its underlying patterns and then analyze that trajectory in terms of derived patterns. The process of trajectory segmentation is a function of various resources including a set of ground truth trajectories with their driving patterns. To the best of our knowledge, no such ground-truth dataset exists in the literature. In this paper, we describe a trajectory annotation framework and report our results to annotate a dataset of personal car trajectories. Our annotation methodology consists of a crowd-sourcing task followed by a precise process of aggregation. Our annotation process consists of two granularity levels, one to specify the annotation (segment border) and the other one to describe the type of the segment (e.g. speed-up, turn, merge, etc.). The output of our project, Dataset of Annotated Car Trajectories (DACT), is available online at https://figshare.com/articles/dact_dataset_of_annotated_car_trajectories/5005289 .
[ { "version": "v1", "created": "Mon, 15 May 2017 13:30:36 GMT" }, { "version": "v2", "created": "Tue, 16 May 2017 14:48:34 GMT" } ]
2017-05-17T00:00:00
[ [ "Moosavi", "Sobhan", "" ], [ "Omidvar-Tehrani", "Behrooz", "" ], [ "Craig", "R. Bruce", "" ], [ "Ramnath", "Rajiv", "" ] ]
TITLE: Annotation of Car Trajectories based on Driving Patterns ABSTRACT: Nowadays, the ubiquity of various sensors enables the collection of voluminous datasets of car trajectories. Such datasets enable analysts to make sense of driving patterns and behaviors: in order to understand the behavior of drivers, one approach is to break a trajectory into its underlying patterns and then analyze that trajectory in terms of derived patterns. The process of trajectory segmentation is a function of various resources including a set of ground truth trajectories with their driving patterns. To the best of our knowledge, no such ground-truth dataset exists in the literature. In this paper, we describe a trajectory annotation framework and report our results to annotate a dataset of personal car trajectories. Our annotation methodology consists of a crowd-sourcing task followed by a precise process of aggregation. Our annotation process consists of two granularity levels, one to specify the annotation (segment border) and the other one to describe the type of the segment (e.g. speed-up, turn, merge, etc.). The output of our project, Dataset of Annotated Car Trajectories (DACT), is available online at https://figshare.com/articles/dact_dataset_of_annotated_car_trajectories/5005289 .
new_dataset
0.968709
1705.05435
Mehmet Turan
Mehmet Turan, Yasin Almalioglu, Ender Konukoglu, Metin Sitti
A Deep Learning Based 6 Degree-of-Freedom Localization Method for Endoscopic Capsule Robots
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a robust deep learning based 6 degrees-of-freedom (DoF) localization system for endoscopic capsule robots. Our system mainly focuses on localization of endoscopic capsule robots inside the GI tract using only visual information captured by a mono camera integrated into the robot. The proposed system is a 23-layer deep convolutional neural network (CNN) that is capable of estimating the pose of the robot in real time using a standard CPU. The dataset for the evaluation of the system was recorded inside a surgical human stomach model with realistic surface texture, softness, and surface liquid properties so that the pre-trained CNN architecture can be transferred confidently into a real endoscopic scenario. Average errors of 7.1% and 3.4% were obtained for translation and rotation, respectively. The experimental results demonstrate that a CNN pre-trained with raw 2D endoscopic images performs accurately inside the GI tract and is robust to various challenges posed by reflection distortions, lens imperfections, vignetting, noise, motion blur, low resolution, and the lack of unique landmarks to track.
[ { "version": "v1", "created": "Mon, 15 May 2017 20:33:37 GMT" } ]
2017-05-17T00:00:00
[ [ "Turan", "Mehmet", "" ], [ "Almalioglu", "Yasin", "" ], [ "Konukoglu", "Ender", "" ], [ "Sitti", "Metin", "" ] ]
TITLE: A Deep Learning Based 6 Degree-of-Freedom Localization Method for Endoscopic Capsule Robots ABSTRACT: We present a robust deep learning based 6 degrees-of-freedom (DoF) localization system for endoscopic capsule robots. Our system mainly focuses on localization of endoscopic capsule robots inside the GI tract using only visual information captured by a mono camera integrated into the robot. The proposed system is a 23-layer deep convolutional neural network (CNN) that is capable of estimating the pose of the robot in real time using a standard CPU. The dataset for the evaluation of the system was recorded inside a surgical human stomach model with realistic surface texture, softness, and surface liquid properties so that the pre-trained CNN architecture can be transferred confidently into a real endoscopic scenario. Average errors of 7.1% and 3.4% were obtained for translation and rotation, respectively. The experimental results demonstrate that a CNN pre-trained with raw 2D endoscopic images performs accurately inside the GI tract and is robust to various challenges posed by reflection distortions, lens imperfections, vignetting, noise, motion blur, low resolution, and the lack of unique landmarks to track.
new_dataset
0.87584
1705.05455
Saad Bin Ahmed
Saad Bin Ahmed, Saeeda Naz, Salahuddin Swati, Muhammad Imran Razzak
Handwritten Urdu Character Recognition using 1-Dimensional BLSTM Classifier
10 pages, Accepted in NCA for publication
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recognition of cursive script is regarded as a subtle task in optical character recognition due to its varied representation. Every cursive script has a different nature and associated challenges. As Urdu is a cursive language derived from Arabic script, it shares nearly the same challenges and difficulties, often in harder form. Urdu and Arabic can be categorized on the basis of the script they use: Urdu is mostly written in Nastaliq style, whereas Arabic follows the Naskh style of writing. This paper presents a new and comprehensive offline Urdu handwriting database named the Urdu-Nastaliq Handwritten Dataset (UNHD). Currently, no standard and comprehensive Urdu handwritten dataset is publicly available to researchers. The acquired dataset covers commonly used ligatures written by 500 writers in their natural handwriting on A4-size paper. We performed experiments using recurrent neural networks and report significant accuracy for handwritten Urdu character recognition.
[ { "version": "v1", "created": "Mon, 15 May 2017 21:13:08 GMT" } ]
2017-05-17T00:00:00
[ [ "Ahmed", "Saad Bin", "" ], [ "Naz", "Saeeda", "" ], [ "Swati", "Salahuddin", "" ], [ "Razzak", "Muhammad Imran", "" ] ]
TITLE: Handwritten Urdu Character Recognition using 1-Dimensional BLSTM Classifier ABSTRACT: The recognition of cursive script is regarded as a subtle task in optical character recognition due to its varied representation. Every cursive script has a different nature and associated challenges. As Urdu is a cursive language derived from Arabic script, it shares nearly the same challenges and difficulties, often in harder form. Urdu and Arabic can be categorized on the basis of the script they use: Urdu is mostly written in Nastaliq style, whereas Arabic follows the Naskh style of writing. This paper presents a new and comprehensive offline Urdu handwriting database named the Urdu-Nastaliq Handwritten Dataset (UNHD). Currently, no standard and comprehensive Urdu handwritten dataset is publicly available to researchers. The acquired dataset covers commonly used ligatures written by 500 writers in their natural handwriting on A4-size paper. We performed experiments using recurrent neural networks and report significant accuracy for handwritten Urdu character recognition.
new_dataset
0.957636
1705.05483
Andrei Polzounov
Andrei Polzounov, Artsiom Ablavatski, Sergio Escalera, Shijian Lu, Jianfei Cai
WordFence: Text Detection in Natural Images with Border Awareness
5 pages, 2 figures, ICIP 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, text recognition has achieved remarkable success in recognizing scanned document text. However, word recognition in natural images is still an open problem, which generally requires time-consuming post-processing steps. We present a novel architecture for individual word detection in scene images based on semantic segmentation. Our contributions are twofold: the concept of WordFence, which detects border areas surrounding each individual word, and a novel pixelwise weighted softmax loss function which penalizes background and emphasizes small text regions. WordFence ensures that each word is detected individually, and the new loss function provides a strong training signal to both text and word border localization. The proposed technique avoids intensive post-processing, producing an end-to-end word detection system. We achieve superior localization recall on common benchmark datasets - 92% recall on ICDAR11 and ICDAR13 and 63% recall on SVT. Furthermore, our end-to-end word recognition system achieves state-of-the-art 86% F-Score on ICDAR13.
[ { "version": "v1", "created": "Mon, 15 May 2017 23:42:59 GMT" } ]
2017-05-17T00:00:00
[ [ "Polzounov", "Andrei", "" ], [ "Ablavatski", "Artsiom", "" ], [ "Escalera", "Sergio", "" ], [ "Lu", "Shijian", "" ], [ "Cai", "Jianfei", "" ] ]
TITLE: WordFence: Text Detection in Natural Images with Border Awareness ABSTRACT: In recent years, text recognition has achieved remarkable success in recognizing scanned document text. However, word recognition in natural images is still an open problem, which generally requires time-consuming post-processing steps. We present a novel architecture for individual word detection in scene images based on semantic segmentation. Our contributions are twofold: the concept of WordFence, which detects border areas surrounding each individual word, and a novel pixelwise weighted softmax loss function which penalizes background and emphasizes small text regions. WordFence ensures that each word is detected individually, and the new loss function provides a strong training signal to both text and word border localization. The proposed technique avoids intensive post-processing, producing an end-to-end word detection system. We achieve superior localization recall on common benchmark datasets - 92% recall on ICDAR11 and ICDAR13 and 63% recall on SVT. Furthermore, our end-to-end word recognition system achieves state-of-the-art 86% F-Score on ICDAR13.
no_new_dataset
0.953966
1705.05494
Paulo Roberto Urio
Paulo Roberto Urio, Zhao Liang
Data clustering with edge domination in complex networks
13 pages, 6 figures
null
null
null
cs.SI cs.LG physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a model for a dynamical system where particles dominate edges in a complex network. The proposed dynamical system is then extended to address the problems of community detection and data clustering. For the data clustering problem, six different techniques were simulated on ten different datasets for comparison with the proposed technique. The results show that the proposed algorithm performs well when the number of clusters is known to the algorithm in advance.
[ { "version": "v1", "created": "Tue, 16 May 2017 00:44:31 GMT" } ]
2017-05-17T00:00:00
[ [ "Urio", "Paulo Roberto", "" ], [ "Liang", "Zhao", "" ] ]
TITLE: Data clustering with edge domination in complex networks ABSTRACT: This paper presents a model for a dynamical system where particles dominate edges in a complex network. The proposed dynamical system is then extended to address the problems of community detection and data clustering. For the data clustering problem, six different techniques were simulated on ten different datasets for comparison with the proposed technique. The results show that the proposed algorithm performs well when the number of clusters is known to the algorithm in advance.
no_new_dataset
0.949342
1705.05498
Jing Zhang
Jing Zhang and Wanqing Li and Philip Ogunbona
Joint Geometrical and Statistical Alignment for Visual Domain Adaptation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition. We propose a unified framework that reduces the shift between domains both statistically and geometrically, referred to as Joint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two coupled projections that project the source domain and target domain data into low dimensional subspaces where the geometrical shift and distribution shift are reduced simultaneously. The objective function can be solved efficiently in a closed form. Extensive experiments have verified that the proposed method significantly outperforms several state-of-the-art domain adaptation methods on a synthetic dataset and three different real world cross-domain visual recognition tasks.
[ { "version": "v1", "created": "Tue, 16 May 2017 01:35:58 GMT" } ]
2017-05-17T00:00:00
[ [ "Zhang", "Jing", "" ], [ "Li", "Wanqing", "" ], [ "Ogunbona", "Philip", "" ] ]
TITLE: Joint Geometrical and Statistical Alignment for Visual Domain Adaptation ABSTRACT: This paper presents a novel unsupervised domain adaptation method for cross-domain visual recognition. We propose a unified framework that reduces the shift between domains both statistically and geometrically, referred to as Joint Geometrical and Statistical Alignment (JGSA). Specifically, we learn two coupled projections that project the source domain and target domain data into low dimensional subspaces where the geometrical shift and distribution shift are reduced simultaneously. The objective function can be solved efficiently in a closed form. Extensive experiments have verified that the proposed method significantly outperforms several state-of-the-art domain adaptation methods on a synthetic dataset and three different real world cross-domain visual recognition tasks.
no_new_dataset
0.951594
1705.05508
Yong Khoo
Yong Khoo, Sang Chung
Automated Body Structure Extraction from Arbitrary 3D Mesh
null
Imaging and Graphics, 2017
null
null
cs.GR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an automated method for 3D character skeleton extraction that can be applied to generic 3D shapes. Our work is motivated by prior skeleton-based work on automatic rigging focused on skeleton extraction, and our method automatically aligns the extracted structure to fit the 3D shape of the given 3D mesh. The body mesh can be subsequently skinned based on the extracted skeleton, which enables the rigging process. In the experiments, we apply a public dataset to derive the estimated skeleton from different body shapes, as well as real data obtained from 3D scanning systems. Satisfactory results are obtained compared to existing approaches.
[ { "version": "v1", "created": "Tue, 16 May 2017 02:58:44 GMT" } ]
2017-05-17T00:00:00
[ [ "Khoo", "Yong", "" ], [ "Chung", "Sang", "" ] ]
TITLE: Automated Body Structure Extraction from Arbitrary 3D Mesh ABSTRACT: This paper presents an automated method for 3D character skeleton extraction that can be applied to generic 3D shapes. Our work is motivated by prior skeleton-based work on automatic rigging focused on skeleton extraction, and our method automatically aligns the extracted structure to fit the 3D shape of the given 3D mesh. The body mesh can be subsequently skinned based on the extracted skeleton, which enables the rigging process. In the experiments, we apply a public dataset to derive the estimated skeleton from different body shapes, as well as real data obtained from 3D scanning systems. Satisfactory results are obtained compared to existing approaches.
no_new_dataset
0.954308
1705.05592
Varun Ojha
Varun Kumar Ojha, Ajith Abraham, V\'aclav Sn\'a\v{s}el
Ensemble of heterogeneous flexible neural trees using multiobjective genetic programming
null
Applied Soft Computing, 2017, Volume 52 Pages 909 to 924
10.1016/j.asoc.2016.09.035
null
cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning algorithms are inherently multiobjective in nature, where approximation error minimization and simplification of model complexity are two conflicting objectives. We proposed a multiobjective genetic programming (MOGP) approach for creating a heterogeneous flexible neural tree (HFNT), a tree-like flexible feedforward neural network model. Functional heterogeneity in neural tree nodes was introduced to capture better insight into the data during learning, because each input in a dataset possesses different features. MOGP guided an initial HFNT population towards Pareto-optimal solutions, where the final population was used for making an ensemble system. A diversity index measure, along with approximation error and complexity, was introduced to maintain diversity among the candidates in the population. Hence, the ensemble was created by using accurate, structurally simple, and diverse candidates from the final MOGP population. A differential evolution algorithm was applied to fine-tune the underlying parameters of the selected candidates. A comprehensive test over classification, regression, and time-series datasets proved the efficiency of the proposed algorithm over other available prediction methods. Moreover, the heterogeneous creation of HFNTs proved to be efficient for building an ensemble system from the final population.
[ { "version": "v1", "created": "Tue, 16 May 2017 08:40:42 GMT" } ]
2017-05-17T00:00:00
[ [ "Ojha", "Varun Kumar", "" ], [ "Abraham", "Ajith", "" ], [ "Snášel", "Václav", "" ] ]
TITLE: Ensemble of heterogeneous flexible neural trees using multiobjective genetic programming ABSTRACT: Machine learning algorithms are inherently multiobjective in nature, where approximation error minimization and simplification of model complexity are two conflicting objectives. We proposed a multiobjective genetic programming (MOGP) approach for creating a heterogeneous flexible neural tree (HFNT), a tree-like flexible feedforward neural network model. Functional heterogeneity in neural tree nodes was introduced to capture better insight into the data during learning, because each input in a dataset possesses different features. MOGP guided an initial HFNT population towards Pareto-optimal solutions, where the final population was used for making an ensemble system. A diversity index measure, along with approximation error and complexity, was introduced to maintain diversity among the candidates in the population. Hence, the ensemble was created by using accurate, structurally simple, and diverse candidates from the final MOGP population. A differential evolution algorithm was applied to fine-tune the underlying parameters of the selected candidates. A comprehensive test over classification, regression, and time-series datasets proved the efficiency of the proposed algorithm over other available prediction methods. Moreover, the heterogeneous creation of HFNTs proved to be efficient for building an ensemble system from the final population.
no_new_dataset
0.953319
1705.05640
Limin Wang
Wen Li, Limin Wang, Wei Li, Eirikur Agustsson, Jesse Berent, Abhinav Gupta, Rahul Sukthankar, Luc Van Gool
WebVision Challenge: Visual Learning and Understanding With Web Data
project page: http://www.vision.ee.ethz.ch/webvision/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the 2017 WebVision Challenge, a public image recognition challenge designed for deep learning based on web images without instance-level human annotation. Following the spirit of previous vision challenges, such as ILSVRC, Places2 and PASCAL VOC, which have played critical roles in the development of computer vision by contributing large-scale annotated data to the community for model design and standardized benchmarking, we contribute with this challenge a large-scale web image dataset, and a public competition with a workshop co-located with CVPR 2017. The WebVision dataset contains more than $2.4$ million web images crawled from the Internet by using queries generated from the $1,000$ semantic concepts of the benchmark ILSVRC 2012 dataset. Meta information is also included. A validation set and a test set containing human annotated images are also provided to facilitate algorithmic development. The 2017 WebVision challenge consists of two tracks, the image classification task on the WebVision test set, and the transfer learning task on the PASCAL VOC 2012 dataset. In this paper, we describe the details of data collection and annotation, highlight the characteristics of the dataset, and introduce the evaluation metrics.
[ { "version": "v1", "created": "Tue, 16 May 2017 10:59:23 GMT" } ]
2017-05-17T00:00:00
[ [ "Li", "Wen", "" ], [ "Wang", "Limin", "" ], [ "Li", "Wei", "" ], [ "Agustsson", "Eirikur", "" ], [ "Berent", "Jesse", "" ], [ "Gupta", "Abhinav", "" ], [ "Sukthankar", "Rahul", "" ], [ "Van Gool", "Luc", "" ] ]
TITLE: WebVision Challenge: Visual Learning and Understanding With Web Data ABSTRACT: We present the 2017 WebVision Challenge, a public image recognition challenge designed for deep learning based on web images without instance-level human annotation. Following the spirit of previous vision challenges, such as ILSVRC, Places2 and PASCAL VOC, which have played critical roles in the development of computer vision by contributing large-scale annotated data to the community for model design and standardized benchmarking, we contribute with this challenge a large-scale web image dataset, and a public competition with a workshop co-located with CVPR 2017. The WebVision dataset contains more than $2.4$ million web images crawled from the Internet by using queries generated from the $1,000$ semantic concepts of the benchmark ILSVRC 2012 dataset. Meta information is also included. A validation set and a test set containing human annotated images are also provided to facilitate algorithmic development. The 2017 WebVision challenge consists of two tracks, the image classification task on the WebVision test set, and the transfer learning task on the PASCAL VOC 2012 dataset. In this paper, we describe the details of data collection and annotation, highlight the characteristics of the dataset, and introduce the evaluation metrics.
new_dataset
0.938181
1705.05756
Witold Rudnicki
Krzysztof Mnich and Witold R. Rudnicki
All-relevant feature selection using multidimensional filters with exhaustive search
27 pages, 11 figures, 3 tables
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes a method for identification of the informative variables in information systems with discrete decision variables. It is targeted specifically towards discovery of the variables that are non-informative when considered alone, but are informative when the synergistic interactions between multiple variables are considered. To this end, the mutual entropy of all possible k-tuples of variables with the decision variable is computed. Then, for each variable the maximal information gain due to interactions with other variables is obtained. For non-informative variables this quantity conforms to well-known statistical distributions. This allows for discerning truly informative variables from non-informative ones. For demonstration of the approach, the method is applied to several synthetic datasets that involve complex multidimensional interactions between variables. It is capable of identifying the most important informative variables, even in the case when the dimensionality of the analysis is smaller than the true dimensionality of the problem. What is more, the high sensitivity of the algorithm allows for detection of the influence of nuisance variables on the response variable.
[ { "version": "v1", "created": "Tue, 16 May 2017 15:11:10 GMT" } ]
2017-05-17T00:00:00
[ [ "Mnich", "Krzysztof", "" ], [ "Rudnicki", "Witold R.", "" ] ]
TITLE: All-relevant feature selection using multidimensional filters with exhaustive search ABSTRACT: This paper describes a method for identification of the informative variables in information systems with discrete decision variables. It is targeted specifically towards discovery of the variables that are non-informative when considered alone, but are informative when the synergistic interactions between multiple variables are considered. To this end, the mutual entropy of all possible k-tuples of variables with the decision variable is computed. Then, for each variable the maximal information gain due to interactions with other variables is obtained. For non-informative variables this quantity conforms to well-known statistical distributions. This allows for discerning truly informative variables from non-informative ones. For demonstration of the approach, the method is applied to several synthetic datasets that involve complex multidimensional interactions between variables. It is capable of identifying the most important informative variables, even in the case when the dimensionality of the analysis is smaller than the true dimensionality of the problem. What is more, the high sensitivity of the algorithm allows for detection of the influence of nuisance variables on the response variable.
no_new_dataset
0.9463
1705.05787
Luiz Gustavo Hafemann
Luiz G. Hafemann, Robert Sabourin, Luiz S. Oliveira
Learning Features for Offline Handwritten Signature Verification using Deep Convolutional Neural Networks
null
null
10.1016/j.patcog.2017.05.012
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Verifying the identity of a person using handwritten signatures is challenging in the presence of skilled forgeries, where a forger has access to a person's signature and deliberately attempts to imitate it. In offline (static) signature verification, the dynamic information of the signature writing process is lost, and it is difficult to design good feature extractors that can distinguish genuine signatures and skilled forgeries. This is reflected in relatively poor performance, with verification errors around 7% in the best systems in the literature. To address the difficulty of obtaining good features, as well as to improve system performance, we propose learning the representations from signature images, in a Writer-Independent format, using Convolutional Neural Networks. In particular, we propose a novel formulation of the problem that includes knowledge of skilled forgeries from a subset of users in the feature learning process, aiming to capture visual cues that distinguish genuine signatures and forgeries regardless of the user. Extensive experiments were conducted on four datasets: the GPDS, MCYT, CEDAR, and Brazilian PUC-PR datasets. On GPDS-160, we obtained a large improvement in state-of-the-art performance, achieving a 1.72% Equal Error Rate, compared to 6.97% in the literature. We also verified that the features generalize beyond the GPDS dataset, surpassing the state-of-the-art performance on the other datasets, without requiring the representation to be fine-tuned to each particular dataset.
[ { "version": "v1", "created": "Tue, 16 May 2017 16:08:09 GMT" } ]
2017-05-17T00:00:00
[ [ "Hafemann", "Luiz G.", "" ], [ "Sabourin", "Robert", "" ], [ "Oliveira", "Luiz S.", "" ] ]
TITLE: Learning Features for Offline Handwritten Signature Verification using Deep Convolutional Neural Networks ABSTRACT: Verifying the identity of a person using handwritten signatures is challenging in the presence of skilled forgeries, where a forger has access to a person's signature and deliberately attempts to imitate it. In offline (static) signature verification, the dynamic information of the signature writing process is lost, and it is difficult to design good feature extractors that can distinguish genuine signatures and skilled forgeries. This is reflected in relatively poor performance, with verification errors around 7% in the best systems in the literature. To address the difficulty of obtaining good features, as well as to improve system performance, we propose learning the representations from signature images, in a Writer-Independent format, using Convolutional Neural Networks. In particular, we propose a novel formulation of the problem that includes knowledge of skilled forgeries from a subset of users in the feature learning process, aiming to capture visual cues that distinguish genuine signatures and forgeries regardless of the user. Extensive experiments were conducted on four datasets: the GPDS, MCYT, CEDAR, and Brazilian PUC-PR datasets. On GPDS-160, we obtained a large improvement in state-of-the-art performance, achieving a 1.72% Equal Error Rate, compared to 6.97% in the literature. We also verified that the features generalize beyond the GPDS dataset, surpassing the state-of-the-art performance on the other datasets, without requiring the representation to be fine-tuned to each particular dataset.
no_new_dataset
0.946892
1705.05823
Oren Rippel
Oren Rippel, Lubomir Bourdev
Real-Time Adaptive Image Compression
Published at ICML 2017
null
null
null
stat.ML cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a machine learning-based approach to lossy image compression which outperforms all existing codecs, while running in real-time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of generic images across all quality levels. At the same time, our codec is designed to be lightweight and deployable: for example, it can encode or decode the Kodak dataset in around 10ms per image on GPU. Our architecture is an autoencoder featuring pyramidal analysis, an adaptive coding module, and regularization of the expected codelength. We also supplement our approach with adversarial training specialized towards use in a compression setting: this enables us to produce visually pleasing reconstructions for very low bitrates.
[ { "version": "v1", "created": "Tue, 16 May 2017 17:51:07 GMT" } ]
2017-05-17T00:00:00
[ [ "Rippel", "Oren", "" ], [ "Bourdev", "Lubomir", "" ] ]
TITLE: Real-Time Adaptive Image Compression ABSTRACT: We present a machine learning-based approach to lossy image compression which outperforms all existing codecs, while running in real-time. Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG 2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of generic images across all quality levels. At the same time, our codec is designed to be lightweight and deployable: for example, it can encode or decode the Kodak dataset in around 10ms per image on GPU. Our architecture is an autoencoder featuring pyramidal analysis, an adaptive coding module, and regularization of the expected codelength. We also supplement our approach with adversarial training specialized towards use in a compression setting: this enables us to produce visually pleasing reconstructions for very low bitrates.
no_new_dataset
0.940408
1512.07797
Jiaqian Yu
Jiaqian Yu (CVC, GALEN), Matthew Blaschko
The Lov\'asz Hinge: A Novel Convex Surrogate for Submodular Losses
null
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We propose instead a novel surrogate loss function for submodular losses, the Lov\'asz hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting-plane. We prove that the Lov\'asz hinge is convex and yields an extension. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through several set prediction tasks, including on the PASCAL VOC and Microsoft COCO datasets.
[ { "version": "v1", "created": "Thu, 24 Dec 2015 11:49:47 GMT" }, { "version": "v2", "created": "Mon, 15 May 2017 11:25:31 GMT" } ]
2017-05-16T00:00:00
[ [ "Yu", "Jiaqian", "", "CVC, GALEN" ], [ "Blaschko", "Matthew", "" ] ]
TITLE: The Lov\'asz Hinge: A Novel Convex Surrogate for Submodular Losses ABSTRACT: Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We propose instead a novel surrogate loss function for submodular losses, the Lov\'asz hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting-plane. We prove that the Lov\'asz hinge is convex and yields an extension. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through several set prediction tasks, including on the PASCAL VOC and Microsoft COCO datasets.
no_new_dataset
0.951997
1608.04267
Zihan Zhou
Zihan Zhou, Farshid Farhat, James Z. Wang
Detecting Dominant Vanishing Points in Natural Scenes with Application to Composition-Sensitive Image Retrieval
15 pages, 18 figures, to appear in IEEE Transactions on Multimedia
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear perspective is widely used in landscape photography to create the impression of depth on a 2D photo. Automated understanding of linear perspective in landscape photography has several real-world applications, including aesthetics assessment, image retrieval, and on-site feedback for photo composition, yet adequate automated understanding has been elusive. We address this problem by detecting the dominant vanishing point and the associated line structures in a photo. However, natural landscape scenes pose great technical challenges because the number of strong edges converging to the dominant vanishing point is often inadequate. To overcome this difficulty, we propose a novel vanishing point detection method that exploits global structures in the scene via contour detection. We show that our method significantly outperforms state-of-the-art methods on a public ground truth landscape image dataset that we have created. Based on the detection results, we further demonstrate how our approach to linear perspective understanding provides on-site guidance to amateur photographers on their work through a novel viewpoint-specific image retrieval system.
[ { "version": "v1", "created": "Mon, 15 Aug 2016 13:48:22 GMT" }, { "version": "v2", "created": "Sat, 13 May 2017 14:58:05 GMT" } ]
2017-05-16T00:00:00
[ [ "Zhou", "Zihan", "" ], [ "Farhat", "Farshid", "" ], [ "Wang", "James Z.", "" ] ]
TITLE: Detecting Dominant Vanishing Points in Natural Scenes with Application to Composition-Sensitive Image Retrieval ABSTRACT: Linear perspective is widely used in landscape photography to create the impression of depth on a 2D photo. Automated understanding of linear perspective in landscape photography has several real-world applications, including aesthetics assessment, image retrieval, and on-site feedback for photo composition, yet adequate automated understanding has been elusive. We address this problem by detecting the dominant vanishing point and the associated line structures in a photo. However, natural landscape scenes pose great technical challenges because the number of strong edges converging to the dominant vanishing point is often inadequate. To overcome this difficulty, we propose a novel vanishing point detection method that exploits global structures in the scene via contour detection. We show that our method significantly outperforms state-of-the-art methods on a public ground truth landscape image dataset that we have created. Based on the detection results, we further demonstrate how our approach to linear perspective understanding provides on-site guidance to amateur photographers on their work through a novel viewpoint-specific image retrieval system.
new_dataset
0.959762
1612.00837
Yash Goyal
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, Devi Parikh
Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
null
null
null
null
cs.CV cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at www.visualqa.org as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-the-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but that it believes has a different answer to the same question. This can help in building trust for machines among their users.
[ { "version": "v1", "created": "Fri, 2 Dec 2016 20:57:07 GMT" }, { "version": "v2", "created": "Fri, 14 Apr 2017 18:20:13 GMT" }, { "version": "v3", "created": "Mon, 15 May 2017 17:58:49 GMT" } ]
2017-05-16T00:00:00
[ [ "Goyal", "Yash", "" ], [ "Khot", "Tejas", "" ], [ "Summers-Stay", "Douglas", "" ], [ "Batra", "Dhruv", "" ], [ "Parikh", "Devi", "" ] ]
TITLE: Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering ABSTRACT: Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at www.visualqa.org as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-the-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but that it believes has a different answer to the same question. This can help in building trust for machines among their users.
new_dataset
0.921781
1612.01414
Alexander Jung
Alexander Jung, Alfred O. Hero III, Alexandru Mara, and Saeed Jahromi
Semi-Supervised Learning via Sparse Label Propagation
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work proposes a novel method for semi-supervised learning from partially labeled massive network-structured datasets, i.e., big data over networks. We model the underlying hypothesis, which relates data points to labels, as a graph signal, defined over some graph (network) structure intrinsic to the dataset. Following the key principle of supervised learning, i.e., similar inputs yield similar outputs, we require the graph signals induced by labels to have small total variation. Accordingly, we formulate the problem of learning the labels of data points as a non-smooth convex optimization problem which amounts to balancing the empirical loss, i.e., the discrepancy with some partially available label information, against the smoothness quantified by the total variation of the learned graph signal. We solve this optimization problem by appealing to a recently proposed preconditioned variant of the popular primal-dual method by Pock and Chambolle, which results in a sparse label propagation algorithm. This learning algorithm allows for a highly scalable implementation as message passing over the underlying data graph. By applying concepts of compressed sensing to the learning problem, we are also able to provide a transparent sufficient condition on the underlying network structure such that accurate learning of the labels is possible. We also present an implementation of the message passing formulation that allows for highly scalable deployment in big data frameworks.
[ { "version": "v1", "created": "Mon, 5 Dec 2016 16:04:38 GMT" }, { "version": "v2", "created": "Thu, 22 Dec 2016 15:41:31 GMT" }, { "version": "v3", "created": "Wed, 10 May 2017 16:57:05 GMT" }, { "version": "v4", "created": "Mon, 15 May 2017 07:53:13 GMT" } ]
2017-05-16T00:00:00
[ [ "Jung", "Alexander", "" ], [ "Hero", "Alfred O.", "III" ], [ "Mara", "Alexandru", "" ], [ "Jahromi", "Saeed", "" ] ]
TITLE: Semi-Supervised Learning via Sparse Label Propagation ABSTRACT: This work proposes a novel method for semi-supervised learning from partially labeled massive network-structured datasets, i.e., big data over networks. We model the underlying hypothesis, which relates data points to labels, as a graph signal, defined over some graph (network) structure intrinsic to the dataset. Following the key principle of supervised learning, i.e., similar inputs yield similar outputs, we require the graph signals induced by labels to have small total variation. Accordingly, we formulate the problem of learning the labels of data points as a non-smooth convex optimization problem which amounts to balancing the empirical loss, i.e., the discrepancy with some partially available label information, against the smoothness quantified by the total variation of the learned graph signal. We solve this optimization problem by appealing to a recently proposed preconditioned variant of the popular primal-dual method by Pock and Chambolle, which results in a sparse label propagation algorithm. This learning algorithm allows for a highly scalable implementation as message passing over the underlying data graph. By applying concepts of compressed sensing to the learning problem, we are also able to provide a transparent sufficient condition on the underlying network structure such that accurate learning of the labels is possible. We also present an implementation of the message passing formulation that allows for highly scalable deployment in big data frameworks.
no_new_dataset
0.948298
1702.08400
Kuniaki Saito Saito Kuniaki
Kuniaki Saito, Yoshitaka Ushiku and Tatsuya Harada
Asymmetric Tri-training for Unsupervised Domain Adaptation
TBA on ICML2017
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep-layered models trained on a large number of labeled samples boost the accuracy of many tasks. It is important to apply such models to different domains because collecting many labeled samples in various domains is expensive. In unsupervised domain adaptation, one needs to train a classifier that works well on a target domain when provided with labeled source samples and unlabeled target samples. Although many methods aim to match the distributions of source and target samples, simply matching the distribution cannot ensure accuracy on the target domain. To learn discriminative representations for the target domain, we assume that artificially labeling target samples can result in a good representation. Tri-training leverages three classifiers equally to give pseudo-labels to unlabeled samples, but the method does not assume labeling samples generated from a different domain. In this paper, we propose an asymmetric tri-training method for unsupervised domain adaptation, where we assign pseudo-labels to unlabeled samples and train neural networks as if they were true labels. In our work, we use three networks asymmetrically. By asymmetric, we mean that two networks are used to label unlabeled target samples and one network is trained by the samples to obtain target-discriminative representations. We evaluate our method on digit recognition and sentiment analysis datasets. Our proposed method achieves state-of-the-art performance on the benchmark digit recognition datasets of domain adaptation.
[ { "version": "v1", "created": "Mon, 27 Feb 2017 17:48:17 GMT" }, { "version": "v2", "created": "Thu, 16 Mar 2017 15:11:14 GMT" }, { "version": "v3", "created": "Sat, 13 May 2017 05:44:03 GMT" } ]
2017-05-16T00:00:00
[ [ "Saito", "Kuniaki", "" ], [ "Ushiku", "Yoshitaka", "" ], [ "Harada", "Tatsuya", "" ] ]
TITLE: Asymmetric Tri-training for Unsupervised Domain Adaptation ABSTRACT: Deep-layered models trained on a large number of labeled samples boost the accuracy of many tasks. It is important to apply such models to different domains because collecting many labeled samples in various domains is expensive. In unsupervised domain adaptation, one needs to train a classifier that works well on a target domain when provided with labeled source samples and unlabeled target samples. Although many methods aim to match the distributions of source and target samples, simply matching the distribution cannot ensure accuracy on the target domain. To learn discriminative representations for the target domain, we assume that artificially labeling target samples can result in a good representation. Tri-training leverages three classifiers equally to give pseudo-labels to unlabeled samples, but the method does not assume labeling samples generated from a different domain. In this paper, we propose an asymmetric tri-training method for unsupervised domain adaptation, where we assign pseudo-labels to unlabeled samples and train neural networks as if they were true labels. In our work, we use three networks asymmetrically. By asymmetric, we mean that two networks are used to label unlabeled target samples and one network is trained by the samples to obtain target-discriminative representations. We evaluate our method on digit recognition and sentiment analysis datasets. Our proposed method achieves state-of-the-art performance on the benchmark digit recognition datasets of domain adaptation.
no_new_dataset
0.950411
1703.01289
Sebastian Bullinger
Sebastian Bullinger, Christoph Bodensteiner and Michael Arens
Instance Flow Based Online Multiple Object Tracking
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a method to perform online Multiple Object Tracking (MOT) of known object categories in monocular video data. Current Tracking-by-Detection MOT approaches build on top of 2D bounding box detections. In contrast, we exploit state-of-the-art instance aware semantic segmentation techniques to compute 2D shape representations of target objects in each frame. We predict position and shape of segmented instances in subsequent frames by exploiting optical flow cues. We define an affinity matrix between instances of subsequent frames which reflects locality and visual similarity. The instance association is solved by applying the Hungarian method. We evaluate different configurations of our algorithm using the MOT 2D 2015 train dataset. The evaluation shows that our tracking approach is able to track objects with high relative motions. In addition, we provide results of our approach on the MOT 2D 2015 test set for comparison with previous works. We achieve a MOTA score of 32.1.
[ { "version": "v1", "created": "Fri, 3 Mar 2017 18:54:55 GMT" }, { "version": "v2", "created": "Mon, 15 May 2017 14:14:30 GMT" } ]
2017-05-16T00:00:00
[ [ "Bullinger", "Sebastian", "" ], [ "Bodensteiner", "Christoph", "" ], [ "Arens", "Michael", "" ] ]
TITLE: Instance Flow Based Online Multiple Object Tracking ABSTRACT: We present a method to perform online Multiple Object Tracking (MOT) of known object categories in monocular video data. Current Tracking-by-Detection MOT approaches build on top of 2D bounding box detections. In contrast, we exploit state-of-the-art instance aware semantic segmentation techniques to compute 2D shape representations of target objects in each frame. We predict position and shape of segmented instances in subsequent frames by exploiting optical flow cues. We define an affinity matrix between instances of subsequent frames which reflects locality and visual similarity. The instance association is solved by applying the Hungarian method. We evaluate different configurations of our algorithm using the MOT 2D 2015 train dataset. The evaluation shows that our tracking approach is able to track objects with high relative motions. In addition, we provide results of our approach on the MOT 2D 2015 test set for comparison with previous works. We achieve a MOTA score of 32.1.
no_new_dataset
0.948058
1705.00274
Sibel Tari
Asli Genctav, Yusuf Sahillioglu, and Sibel Tari
Topologically Robust 3D Shape Matching via Gradual Deflation and Inflation
Section 2 replaced
null
null
null
cs.GR cs.CG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite being largely ignored in the literature, coping with topological noise is an issue of increasing importance, especially as a consequence of the increasing number and diversity of 3D polygonal models that are captured by devices of different qualities or synthesized by algorithms of different stabilities. One approach for matching 3D shapes under topological noise is to replace the topology-sensitive geodesic distance with distances that are less sensitive to topological changes. We propose an alternative approach utilising gradual deflation (or inflation) of the shape volume, the purpose of which is to bring the pair of shapes to be matched to a \emph{comparable} topology before the search for correspondences. Illustrative experiments using different datasets demonstrate that as the level of topological noise increases, our approach outperforms the other methods in the literature.
[ { "version": "v1", "created": "Sun, 30 Apr 2017 06:40:18 GMT" }, { "version": "v2", "created": "Fri, 12 May 2017 21:48:29 GMT" } ]
2017-05-16T00:00:00
[ [ "Genctav", "Asli", "" ], [ "Sahillioglu", "Yusuf", "" ], [ "Tari", "Sibel", "" ] ]
TITLE: Topologically Robust 3D Shape Matching via Gradual Deflation and Inflation ABSTRACT: Despite being largely ignored in the literature, coping with topological noise is an issue of increasing importance, especially as a consequence of the increasing number and diversity of 3D polygonal models that are captured by devices of different qualities or synthesized by algorithms of different stabilities. One approach for matching 3D shapes under topological noise is to replace the topology-sensitive geodesic distance with distances that are less sensitive to topological changes. We propose an alternative approach utilising gradual deflation (or inflation) of the shape volume, the purpose of which is to bring the pair of shapes to be matched to a \emph{comparable} topology before the search for correspondences. Illustrative experiments using different datasets demonstrate that as the level of topological noise increases, our approach outperforms the other methods in the literature.
no_new_dataset
0.952882
1705.02012
Tong Wang
Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng Zhang, Adam Trischler
Machine Comprehension by Text-to-Text Neural Question Generation
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model using a combination of supervised and reinforcement learning. After teacher forcing for standard maximum likelihood training, we fine-tune the model using policy gradient techniques to maximize several rewards that measure question quality. Most notably, one of these rewards is the performance of a question-answering system. We motivate question generation as a means to improve the performance of question answering systems. Our model is trained and evaluated on the recent question-answering dataset SQuAD.
[ { "version": "v1", "created": "Thu, 4 May 2017 20:58:06 GMT" }, { "version": "v2", "created": "Mon, 15 May 2017 14:47:05 GMT" } ]
2017-05-16T00:00:00
[ [ "Yuan", "Xingdi", "" ], [ "Wang", "Tong", "" ], [ "Gulcehre", "Caglar", "" ], [ "Sordoni", "Alessandro", "" ], [ "Bachman", "Philip", "" ], [ "Subramanian", "Sandeep", "" ], [ "Zhang", "Saizheng", "" ], [ "Trischler", "Adam", "" ] ]
TITLE: Machine Comprehension by Text-to-Text Neural Question Generation ABSTRACT: We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model using a combination of supervised and reinforcement learning. After teacher forcing for standard maximum likelihood training, we fine-tune the model using policy gradient techniques to maximize several rewards that measure question quality. Most notably, one of these rewards is the performance of a question-answering system. We motivate question generation as a means to improve the performance of question answering systems. Our model is trained and evaluated on the recent question-answering dataset SQuAD.
no_new_dataset
0.927166
1705.02519
Subhabrata Mukherjee
Subhabrata Mukherjee, Hemank Lamba, Gerhard Weikum
Item Recommendation with Evolving User Preferences and Experience
null
null
10.1109/ICDM.2015.111
null
cs.AI cs.CL cs.IR cs.SI stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current recommender systems exploit user and item similarities by collaborative filtering. Some advanced methods also consider the temporal evolution of item ratings as a global background process. However, all prior methods disregard the individual evolution of a user's experience level and how this is expressed in the user's writing in a review community. In this paper, we model the joint evolution of user experience, interest in specific item facets, writing style, and rating behavior. This way we can generate individual recommendations that take into account the user's maturity level (e.g., recommending art movies rather than blockbusters for a cinematography expert). As only item ratings and review texts are observables, we capture the user's experience and interests in a latent model learned from her reviews, vocabulary and writing style. We develop a generative HMM-LDA model to trace user evolution, where the Hidden Markov Model (HMM) traces her latent experience progressing over time -- with solely user reviews and ratings as observables over time. The facets of a user's interest are drawn from a Latent Dirichlet Allocation (LDA) model derived from her reviews, as a function of her (again latent) experience level. In experiments with five real-world datasets, we show that our model improves the rating prediction over state-of-the-art baselines, by a substantial margin. We also show, in a use-case study, that our model performs well in the assessment of user experience levels.
[ { "version": "v1", "created": "Sat, 6 May 2017 19:22:41 GMT" } ]
2017-05-16T00:00:00
[ [ "Mukherjee", "Subhabrata", "" ], [ "Lamba", "Hemank", "" ], [ "Weikum", "Gerhard", "" ] ]
TITLE: Item Recommendation with Evolving User Preferences and Experience ABSTRACT: Current recommender systems exploit user and item similarities by collaborative filtering. Some advanced methods also consider the temporal evolution of item ratings as a global background process. However, all prior methods disregard the individual evolution of a user's experience level and how this is expressed in the user's writing in a review community. In this paper, we model the joint evolution of user experience, interest in specific item facets, writing style, and rating behavior. This way we can generate individual recommendations that take into account the user's maturity level (e.g., recommending art movies rather than blockbusters for a cinematography expert). As only item ratings and review texts are observables, we capture the user's experience and interests in a latent model learned from her reviews, vocabulary and writing style. We develop a generative HMM-LDA model to trace user evolution, where the Hidden Markov Model (HMM) traces her latent experience progressing over time -- with solely user reviews and ratings as observables over time. The facets of a user's interest are drawn from a Latent Dirichlet Allocation (LDA) model derived from her reviews, as a function of her (again latent) experience level. In experiments with five real-world datasets, we show that our model improves the rating prediction over state-of-the-art baselines, by a substantial margin. We also show, in a use-case study, that our model performs well in the assessment of user experience levels.
no_new_dataset
0.958847
1705.03551
Mandar Joshi
Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
Added references, fixed typos, minor baseline update
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross-sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
[ { "version": "v1", "created": "Tue, 9 May 2017 21:35:07 GMT" }, { "version": "v2", "created": "Sat, 13 May 2017 21:12:37 GMT" } ]
2017-05-16T00:00:00
[ [ "Joshi", "Mandar", "" ], [ "Choi", "Eunsol", "" ], [ "Weld", "Daniel S.", "" ], [ "Zettlemoyer", "Luke", "" ] ]
TITLE: TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension ABSTRACT: We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions. We show that, in comparison to other recently introduced large-scale datasets, TriviaQA (1) has relatively complex, compositional questions, (2) has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) requires more cross-sentence reasoning to find answers. We also present two baseline algorithms: a feature-based classifier and a state-of-the-art neural network that performs well on SQuAD reading comprehension. Neither approach comes close to human performance (23% and 40% vs. 80%), suggesting that TriviaQA is a challenging testbed that is worth significant future study. Data and code available at -- http://nlp.cs.washington.edu/triviaqa/
new_dataset
0.956309
1705.04803
Bhaskar Mitra
Federico Nanni, Bhaskar Mitra, Matt Magnusson and Laura Dietz
Benchmark for Complex Answer Retrieval
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Retrieving paragraphs to populate a Wikipedia article is a challenging task. The new TREC Complex Answer Retrieval (TREC CAR) track introduces a comprehensive dataset that targets this retrieval scenario. We present early results from a variety of approaches -- from standard information retrieval methods (e.g., tf-idf) to complex systems that use query expansion based on knowledge bases and deep neural networks. The goal is to offer future participants of this track an overview of some promising approaches to tackle this problem.
[ { "version": "v1", "created": "Sat, 13 May 2017 09:06:52 GMT" } ]
2017-05-16T00:00:00
[ [ "Nanni", "Federico", "" ], [ "Mitra", "Bhaskar", "" ], [ "Magnusson", "Matt", "" ], [ "Dietz", "Laura", "" ] ]
TITLE: Benchmark for Complex Answer Retrieval ABSTRACT: Retrieving paragraphs to populate a Wikipedia article is a challenging task. The new TREC Complex Answer Retrieval (TREC CAR) track introduces a comprehensive dataset that targets this retrieval scenario. We present early results from a variety of approaches -- from standard information retrieval methods (e.g., tf-idf) to complex systems that use query expansion based on knowledge bases and deep neural networks. The goal is to offer future participants of this track an overview of some promising approaches to tackle this problem.
new_dataset
0.957833
1705.04828
Yiluan Guo
Yiluan Guo, Hossein Nejati, Ngai-Man Cheung
Deep neural networks on graph signals for brain imaging analysis
Accepted by ICIP 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Brain imaging data such as EEG or MEG are high-dimensional spatiotemporal data often degraded by complex, non-Gaussian noise. For reliable analysis of brain imaging data, it is important to extract discriminative, low-dimensional intrinsic representations of the recorded data. This work proposes a new method to learn the low-dimensional representations from the noise-degraded measurements. In particular, our work proposes a new deep neural network design that integrates graph information such as brain connectivity with fully-connected layers. Our work leverages efficient graph filter design using Chebyshev polynomials and recent work on convolutional nets on graph-structured data. Our approach exploits graph structure as the prior side information, localized graph filters for feature extraction, and neural networks for high-capacity learning. Experiments on real MEG datasets show that our approach can extract more discriminative representations, leading to improved accuracy in a supervised classification task.
[ { "version": "v1", "created": "Sat, 13 May 2017 13:50:47 GMT" } ]
2017-05-16T00:00:00
[ [ "Guo", "Yiluan", "" ], [ "Nejati", "Hossein", "" ], [ "Cheung", "Ngai-Man", "" ] ]
TITLE: Deep neural networks on graph signals for brain imaging analysis ABSTRACT: Brain imaging data such as EEG or MEG are high-dimensional spatiotemporal data often degraded by complex, non-Gaussian noise. For reliable analysis of brain imaging data, it is important to extract discriminative, low-dimensional intrinsic representations of the recorded data. This work proposes a new method to learn the low-dimensional representations from the noise-degraded measurements. In particular, our work proposes a new deep neural network design that integrates graph information such as brain connectivity with fully-connected layers. Our work leverages efficient graph filter design using Chebyshev polynomials and recent work on convolutional nets on graph-structured data. Our approach exploits graph structure as the prior side information, localized graph filters for feature extraction, and neural networks for high-capacity learning. Experiments on real MEG datasets show that our approach can extract more discriminative representations, leading to improved accuracy in a supervised classification task.
no_new_dataset
0.954732
1705.04892
Jimmy Lin
Jinfeng Rao, Ferhan Ture, Hua He, Oliver Jojic, and Jimmy Lin
Talking to Your TV: Context-Aware Voice Search with Hierarchical Recurrent Neural Networks
null
null
null
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We tackle the novel problem of navigational voice queries posed against an entertainment system, where viewers interact with a voice-enabled remote controller to specify the program to watch. This is a difficult problem for several reasons: such queries are short, even shorter than comparable voice queries in other domains, which offers fewer opportunities for deciphering user intent. Furthermore, ambiguity is exacerbated by underlying speech recognition errors. We address these challenges by integrating word- and character-level representations of the queries and by modeling voice search sessions to capture the contextual dependencies in query sequences. Both are accomplished with a probabilistic framework in which recurrent and feedforward neural network modules are organized in a hierarchical manner. From a raw dataset of 32M voice queries from 2.5M viewers on the Comcast Xfinity X1 entertainment system, we extracted data to train and test our models. We demonstrate the benefits of our hybrid representation and context-aware model, which significantly outperforms models without context as well as the current deployed product.
[ { "version": "v1", "created": "Sat, 13 May 2017 22:24:26 GMT" } ]
2017-05-16T00:00:00
[ [ "Rao", "Jinfeng", "" ], [ "Ture", "Ferhan", "" ], [ "He", "Hua", "" ], [ "Jojic", "Oliver", "" ], [ "Lin", "Jimmy", "" ] ]
TITLE: Talking to Your TV: Context-Aware Voice Search with Hierarchical Recurrent Neural Networks ABSTRACT: We tackle the novel problem of navigational voice queries posed against an entertainment system, where viewers interact with a voice-enabled remote controller to specify the program to watch. This is a difficult problem for several reasons: such queries are short, even shorter than comparable voice queries in other domains, which offers fewer opportunities for deciphering user intent. Furthermore, ambiguity is exacerbated by underlying speech recognition errors. We address these challenges by integrating word- and character-level representations of the queries and by modeling voice search sessions to capture the contextual dependencies in query sequences. Both are accomplished with a probabilistic framework in which recurrent and feedforward neural network modules are organized in a hierarchical manner. From a raw dataset of 32M voice queries from 2.5M viewers on the Comcast Xfinity X1 entertainment system, we extracted data to train and test our models. We demonstrate the benefits of our hybrid representation and context-aware model, which significantly outperforms models without context as well as the current deployed product.
no_new_dataset
0.810779
1705.04916
Suryansh Kumar
Suryansh Kumar, Yuchao Dai, Hongdong Li
Spatial-Temporal Union of Subspaces for Multi-body Non-rigid Structure-from-Motion
Author version of this paper has been accepted by Pattern Recognition Journal in the special issue on Articulated Motion and Deformable Objects. This work was originally submitted to ACCV 16 conference on 27th May 2016 for review
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-rigid structure-from-motion (NRSfM) has so far been mostly studied for recovering the 3D structure of a single non-rigid/deforming object. To handle challenging real-world scenarios with multiple deforming objects, existing methods either pre-segment different objects in the scene or treat multiple non-rigid objects as a whole to obtain the 3D non-rigid reconstruction. However, these methods fail to exploit the inherent structure in the problem, as the solution of segmentation and the solution of reconstruction cannot benefit each other. In this paper, we propose a unified framework to jointly segment and reconstruct multiple non-rigid objects. To compactly represent complex multi-body non-rigid scenes, we propose to exploit the structure of the scenes along both the temporal and spatial directions, thus achieving a spatio-temporal representation. Specifically, we represent the 3D non-rigid deformations as lying in a union of subspaces along the temporal direction and represent the 3D trajectories as lying in the union of subspaces along the spatial direction. This spatio-temporal representation not only provides competitive 3D reconstruction but also outputs robust segmentation of multiple non-rigid objects. The resultant optimization problem is solved efficiently using the Alternating Direction Method of Multipliers (ADMM). Extensive experimental results on both synthetic and real multi-body NRSfM datasets demonstrate the superior performance of our proposed framework compared with the state-of-the-art methods.
[ { "version": "v1", "created": "Sun, 14 May 2017 05:59:51 GMT" } ]
2017-05-16T00:00:00
[ [ "Kumar", "Suryansh", "" ], [ "Dai", "Yuchao", "" ], [ "Li", "Hongdong", "" ] ]
TITLE: Spatial-Temporal Union of Subspaces for Multi-body Non-rigid Structure-from-Motion ABSTRACT: Non-rigid structure-from-motion (NRSfM) has so far been mostly studied for recovering the 3D structure of a single non-rigid/deforming object. To handle challenging real-world scenarios with multiple deforming objects, existing methods either pre-segment different objects in the scene or treat multiple non-rigid objects as a whole to obtain the 3D non-rigid reconstruction. However, these methods fail to exploit the inherent structure in the problem, as the solution of segmentation and the solution of reconstruction cannot benefit each other. In this paper, we propose a unified framework to jointly segment and reconstruct multiple non-rigid objects. To compactly represent complex multi-body non-rigid scenes, we propose to exploit the structure of the scenes along both the temporal and spatial directions, thus achieving a spatio-temporal representation. Specifically, we represent the 3D non-rigid deformations as lying in a union of subspaces along the temporal direction and represent the 3D trajectories as lying in the union of subspaces along the spatial direction. This spatio-temporal representation not only provides competitive 3D reconstruction but also outputs robust segmentation of multiple non-rigid objects. The resultant optimization problem is solved efficiently using the Alternating Direction Method of Multipliers (ADMM). Extensive experimental results on both synthetic and real multi-body NRSfM datasets demonstrate the superior performance of our proposed framework compared with the state-of-the-art methods.
no_new_dataset
0.94699
1705.04932
Shuchang Zhou
Shuchang Zhou, Taihong Xiao, Yi Yang, Dieqiao Feng, Qinyao He, Weiran He
GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data
Github: https://github.com/Prinsphield/GeneGAN
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object Transfiguration replaces an object in an image with another object from a second image. For example, it can perform tasks like "putting exactly those eyeglasses from image A on the nose of the person in image B". Usage of exemplar images allows more precise specification of desired modifications and improves the diversity of conditional image generation. However, previous methods that rely on feature space operations require paired data and/or appearance models for training or disentangling objects from background. In this work, we propose a model that can learn object transfiguration from two unpaired sets of images: one set containing images that "have" that kind of object, and the other set being the opposite, with the mild constraint that the objects be located approximately at the same place. For example, the training data can be one set of reference face images that have eyeglasses, and another set of images that do not, both of which are spatially aligned by face landmarks. Despite the weak 0/1 labels, our model can learn an "eyeglasses" subspace that contains multiple representatives of different types of glasses. Consequently, we can perform fine-grained control of generated images, like swapping the glasses in two images by swapping the projected components in the "eyeglasses" subspace, to create novel images of people wearing eyeglasses. Overall, our deterministic generative model learns disentangled attribute subspaces from weakly labeled data by adversarial training. Experiments on CelebA and Multi-PIE datasets validate the effectiveness of the proposed model on real-world data, in generating images with specified eyeglasses, smiling, hair styles, and lighting conditions, etc. The code is available online.
[ { "version": "v1", "created": "Sun, 14 May 2017 08:59:36 GMT" } ]
2017-05-16T00:00:00
[ [ "Zhou", "Shuchang", "" ], [ "Xiao", "Taihong", "" ], [ "Yang", "Yi", "" ], [ "Feng", "Dieqiao", "" ], [ "He", "Qinyao", "" ], [ "He", "Weiran", "" ] ]
TITLE: GeneGAN: Learning Object Transfiguration and Attribute Subspace from Unpaired Data ABSTRACT: Object Transfiguration replaces an object in an image with another object from a second image. For example, it can perform tasks like "putting exactly those eyeglasses from image A on the nose of the person in image B". Usage of exemplar images allows more precise specification of desired modifications and improves the diversity of conditional image generation. However, previous methods that rely on feature space operations require paired data and/or appearance models for training or disentangling objects from background. In this work, we propose a model that can learn object transfiguration from two unpaired sets of images: one set containing images that "have" that kind of object, and the other set being the opposite, with the mild constraint that the objects be located approximately at the same place. For example, the training data can be one set of reference face images that have eyeglasses, and another set of images that do not, both of which are spatially aligned by face landmarks. Despite the weak 0/1 labels, our model can learn an "eyeglasses" subspace that contains multiple representatives of different types of glasses. Consequently, we can perform fine-grained control of generated images, like swapping the glasses in two images by swapping the projected components in the "eyeglasses" subspace, to create novel images of people wearing eyeglasses. Overall, our deterministic generative model learns disentangled attribute subspaces from weakly labeled data by adversarial training. Experiments on CelebA and Multi-PIE datasets validate the effectiveness of the proposed model on real-world data, in generating images with specified eyeglasses, smiling, hair styles, and lighting conditions, etc. The code is available online.
no_new_dataset
0.940844
1705.04964
Balint Daroczy
B\'alint Zolt\'an Dar\'oczy
Machine learning methods for multimedia information retrieval
doctoral thesis, 2016
null
10.15476/ELTE.2016.086
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this thesis we examined several multimodal feature extraction and learning methods for retrieval and classification purposes. We briefly revisited some theoretical results of learning in Section 2 and reviewed several generative and discriminative models in Section 3, while we described the similarity kernel in Section 4. We examined different aspects of multimodal image retrieval and classification in Section 5 and suggested methods for identifying quality assessments of Web documents in Section 6. In our last problem we proposed a similarity kernel for time-series based classification. The experiments were carried out on publicly available datasets, and the source codes for the most essential parts are either open source or released. Since the used similarity graphs (Section 4.2) are greatly constrained for computational purposes, we would like to continue work with more complex, evolving and capable graphs and apply them to different problems such as capturing rapid changes in the distribution (e.g. session-based recommendation) or complex graphs from the literature. The similarity kernel with the proper metrics reaches, and in many cases improves over, the state-of-the-art. Hence we may conclude that generative models based on instance similarities with multiple modes are generally applicable for classification and regression tasks ranging over various domains, including but not limited to the ones presented in this thesis. More generally, the Fisher kernel is not only unique in many ways but also one of the most powerful kernel functions. Therefore we may exploit the Fisher kernel in the future over widely used generative models, such as Boltzmann Machines [Hinton et al., 1984] and a particular subset, the Restricted Boltzmann Machines and Deep Belief Networks [Hinton et al., 2006], Latent Dirichlet Allocation [Blei et al., 2003] or Hidden Markov Models [Baum and Petrie, 1966], to name a few.
[ { "version": "v1", "created": "Sun, 14 May 2017 14:10:22 GMT" } ]
2017-05-16T00:00:00
[ [ "Daróczy", "Bálint Zoltán", "" ] ]
TITLE: Machine learning methods for multimedia information retrieval ABSTRACT: In this thesis we examined several multimodal feature extraction and learning methods for retrieval and classification purposes. We briefly revisited some theoretical results of learning in Section 2 and reviewed several generative and discriminative models in Section 3, while we described the similarity kernel in Section 4. We examined different aspects of multimodal image retrieval and classification in Section 5 and suggested methods for identifying quality assessments of Web documents in Section 6. In our last problem we proposed a similarity kernel for time-series based classification. The experiments were carried out on publicly available datasets, and the source codes for the most essential parts are either open source or released. Since the used similarity graphs (Section 4.2) are greatly constrained for computational purposes, we would like to continue work with more complex, evolving and capable graphs and apply them to different problems such as capturing rapid changes in the distribution (e.g. session-based recommendation) or complex graphs from the literature. The similarity kernel with the proper metrics reaches, and in many cases improves over, the state-of-the-art. Hence we may conclude that generative models based on instance similarities with multiple modes are generally applicable for classification and regression tasks ranging over various domains, including but not limited to the ones presented in this thesis. More generally, the Fisher kernel is not only unique in many ways but also one of the most powerful kernel functions. Therefore we may exploit the Fisher kernel in the future over widely used generative models, such as Boltzmann Machines [Hinton et al., 1984] and a particular subset, the Restricted Boltzmann Machines and Deep Belief Networks [Hinton et al., 2006], Latent Dirichlet Allocation [Blei et al., 2003] or Hidden Markov Models [Baum and Petrie, 1966], to name a few.
no_new_dataset
0.948537
1705.05040
Kechen Qin
Lu Wang, Nick Beauchamp, Sarah Shugars, and Kechen Qin
Winning on the Merits: The Joint Effects of Content and Style on Debate Outcomes
Accepted by TACL, 14 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Debate and deliberation play essential roles in politics and government, but most models presume that debates are won mainly via superior style or agenda control. Ideally, however, debates would be won on the merits, as a function of which side has the stronger arguments. We propose a predictive model of debate that estimates the effects of linguistic features and the latent persuasive strengths of different topics, as well as the interactions between the two. Using a dataset of 118 Oxford-style debates, our model's combination of content (as latent topics) and style (as linguistic features) allows us to predict audience-adjudicated winners with 74% accuracy, significantly outperforming linguistic features alone (66%). Our model finds that winning sides employ stronger arguments, and allows us to identify the linguistic features associated with strong or weak arguments.
[ { "version": "v1", "created": "Mon, 15 May 2017 00:21:03 GMT" } ]
2017-05-16T00:00:00
[ [ "Wang", "Lu", "" ], [ "Beauchamp", "Nick", "" ], [ "Shugars", "Sarah", "" ], [ "Qin", "Kechen", "" ] ]
TITLE: Winning on the Merits: The Joint Effects of Content and Style on Debate Outcomes ABSTRACT: Debate and deliberation play essential roles in politics and government, but most models presume that debates are won mainly via superior style or agenda control. Ideally, however, debates would be won on the merits, as a function of which side has the stronger arguments. We propose a predictive model of debate that estimates the effects of linguistic features and the latent persuasive strengths of different topics, as well as the interactions between the two. Using a dataset of 118 Oxford-style debates, our model's combination of content (as latent topics) and style (as linguistic features) allows us to predict audience-adjudicated winners with 74% accuracy, significantly outperforming linguistic features alone (66%). Our model finds that winning sides employ stronger arguments, and allows us to identify the linguistic features associated with strong or weak arguments.
no_new_dataset
0.942771
1705.05084
Bolun Cai
Xiaoyi Jia, Xiangmin Xu, Bolun Cai, Kailing Guo
Single Image Super-Resolution Using Multi-Scale Convolutional Neural Network
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Methods based on convolutional neural networks (CNNs) have demonstrated tremendous improvements on single image super-resolution. However, previous methods mainly restore images from a single area in the low resolution (LR) input, which limits the flexibility of models to infer various scales of details for the high resolution (HR) output. Moreover, most of them train a specific model for each up-scale factor. In this paper, we propose a multi-scale super resolution (MSSR) network. Our network consists of multi-scale paths to make the HR inference, which can learn to synthesize features from different scales. This property helps reconstruct various kinds of regions in HR images. In addition, only a single model is needed for multiple up-scale factors, which is more efficient without loss of restoration quality. Experiments on four public datasets demonstrate that the proposed method achieves state-of-the-art performance at fast speed.
[ { "version": "v1", "created": "Mon, 15 May 2017 06:38:04 GMT" } ]
2017-05-16T00:00:00
[ [ "Jia", "Xiaoyi", "" ], [ "Xu", "Xiangmin", "" ], [ "Cai", "Bolun", "" ], [ "Guo", "Kailing", "" ] ]
TITLE: Single Image Super-Resolution Using Multi-Scale Convolutional Neural Network ABSTRACT: Methods based on convolutional neural networks (CNNs) have demonstrated tremendous improvements on single image super-resolution. However, previous methods mainly restore images from a single area in the low resolution (LR) input, which limits the flexibility of models to infer various scales of details for the high resolution (HR) output. Moreover, most of them train a specific model for each up-scale factor. In this paper, we propose a multi-scale super resolution (MSSR) network. Our network consists of multi-scale paths to make the HR inference, which can learn to synthesize features from different scales. This property helps reconstruct various kinds of regions in HR images. In addition, only a single model is needed for multiple up-scale factors, which is more efficient without loss of restoration quality. Experiments on four public datasets demonstrate that the proposed method achieves state-of-the-art performance at fast speed.
no_new_dataset
0.950869
1705.05207
Lianwen Jin
Xuefeng Xiao, Yafeng Yang, Tasweer Ahmad, Lianwen Jin and Tianhai Chang
Design of a Very Compact CNN Classifier for Online Handwritten Chinese Character Recognition Using DropWeight and Global Pooling
5 pages, 2 figures, 2 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Currently, owing to the ubiquity of mobile devices, online handwritten Chinese character recognition (HCCR) has become one of the most suitable choices for providing input to cell phones and tablet devices. Over the past few years, larger and deeper convolutional neural networks (CNNs) have been extensively employed for improving character recognition performance. However, their substantial storage requirement is a significant obstacle to deploying such networks into portable electronic devices. To circumvent this problem, we propose a novel technique called DropWeight for pruning redundant connections in the CNN architecture. It is revealed that the proposed method not only handles streamlined architectures such as AlexNet and VGGNet well but also exhibits remarkable performance for deep residual and inception networks. We also demonstrate that global pooling is a better choice for building very compact online HCCR systems. Experiments were performed on the ICDAR-2013 online HCCR competition dataset using our proposed network, and it is found that the proposed approach requires only 0.57 MB for storage, whereas state-of-the-art CNN-based methods require up to 135 MB; meanwhile, the performance decreases by only 0.91%.
[ { "version": "v1", "created": "Mon, 15 May 2017 13:18:38 GMT" } ]
2017-05-16T00:00:00
[ [ "Xiao", "Xuefeng", "" ], [ "Yang", "Yafeng", "" ], [ "Ahmad", "Tasweer", "" ], [ "Jin", "Lianwen", "" ], [ "Chang", "Tianhai", "" ] ]
TITLE: Design of a Very Compact CNN Classifier for Online Handwritten Chinese Character Recognition Using DropWeight and Global Pooling ABSTRACT: Currently, owing to the ubiquity of mobile devices, online handwritten Chinese character recognition (HCCR) has become one of the most suitable choices for providing input to cell phones and tablet devices. Over the past few years, larger and deeper convolutional neural networks (CNNs) have been extensively employed for improving character recognition performance. However, their substantial storage requirement is a significant obstacle to deploying such networks into portable electronic devices. To circumvent this problem, we propose a novel technique called DropWeight for pruning redundant connections in the CNN architecture. It is revealed that the proposed method not only handles streamlined architectures such as AlexNet and VGGNet well but also exhibits remarkable performance for deep residual and inception networks. We also demonstrate that global pooling is a better choice for building very compact online HCCR systems. Experiments were performed on the ICDAR-2013 online HCCR competition dataset using our proposed network, and it is found that the proposed approach requires only 0.57 MB for storage, whereas state-of-the-art CNN-based methods require up to 135 MB; meanwhile, the performance decreases by only 0.91%.
no_new_dataset
0.943556
1705.05301
Paschalis Panteleris
Paschalis Panteleris (1) and Antonis Argyros (1 and 2) ((1) Institute of Computer Science, FORTH, (2) Computer Science Department, University of Crete)
Back to RGB: 3D tracking of hands and hand-object interactions based on short-baseline stereo
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel solution to the problem of 3D tracking of the articulated motion of human hand(s), possibly in interaction with other objects. The vast majority of contemporary relevant work capitalizes on depth information provided by RGBD cameras. In this work, we show that accurate and efficient 3D hand tracking is possible, even for the case of RGB stereo. A straightforward approach for solving the problem based on such input would be to first recover depth and then apply a state-of-the-art depth-based 3D hand tracking method. Unfortunately, this does not work well in practice because the stereo-based, dense 3D reconstruction of hands is far less accurate than the one obtained by RGBD cameras. Our approach bypasses 3D reconstruction and follows a completely different route: 3D hand tracking is formulated as an optimization problem whose solution is the hand configuration that maximizes the color consistency between the two views of the hand. We demonstrate the applicability of our method for real-time tracking of a single hand, of a hand manipulating an object and of two interacting hands. The method has been evaluated quantitatively on standard datasets and in comparison to relevant, state-of-the-art RGBD-based approaches. The obtained results demonstrate that the proposed stereo-based method performs as well as its RGBD-based competitors, and in some cases, it even outperforms them.
[ { "version": "v1", "created": "Mon, 15 May 2017 15:38:56 GMT" } ]
2017-05-16T00:00:00
[ [ "Panteleris", "Paschalis", "", "1 and 2" ], [ "Argyros", "Antonis", "", "1 and 2" ] ]
TITLE: Back to RGB: 3D tracking of hands and hand-object interactions based on short-baseline stereo ABSTRACT: We present a novel solution to the problem of 3D tracking of the articulated motion of human hand(s), possibly in interaction with other objects. The vast majority of contemporary relevant work capitalizes on depth information provided by RGBD cameras. In this work, we show that accurate and efficient 3D hand tracking is possible, even for the case of RGB stereo. A straightforward approach for solving the problem based on such input would be to first recover depth and then apply a state-of-the-art depth-based 3D hand tracking method. Unfortunately, this does not work well in practice because the stereo-based, dense 3D reconstruction of hands is far less accurate than the one obtained by RGBD cameras. Our approach bypasses 3D reconstruction and follows a completely different route: 3D hand tracking is formulated as an optimization problem whose solution is the hand configuration that maximizes the color consistency between the two views of the hand. We demonstrate the applicability of our method for real-time tracking of a single hand, of a hand manipulating an object and of two interacting hands. The method has been evaluated quantitatively on standard datasets and in comparison to relevant, state-of-the-art RGBD-based approaches. The obtained results demonstrate that the proposed stereo-based method performs as well as its RGBD-based competitors, and in some cases, it even outperforms them.
no_new_dataset
0.941547
1705.05347
Rafael Uetz
Luis Alberto Benthin Sanguino, Rafael Uetz
Software Vulnerability Analysis Using CPE and CVE
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we analyze the Common Platform Enumeration (CPE) dictionary and the Common Vulnerabilities and Exposures (CVE) feeds. These repositories are widely used in Vulnerability Management Systems (VMSs) to check for known vulnerabilities in software products. The analysis shows, among other issues, a lack of synchronization between both datasets that can lead to incorrect results output by VMSs relying on those datasets. To deal with these problems, we developed a method that recommends to a user a prioritized list of CPE identifiers for a given software product. The user can then assign (and, if necessary, adapt) the most suitable CPE identifier to the software so that regular (e.g., daily) checks can find known vulnerabilities for this software in the CVE feeds. Our evaluation of this method shows that this interaction is indeed necessary because a fully automated CPE assignment is prone to errors due to the CPE and CVE shortcomings. We implemented an open-source VMS that employs the proposed method and published it on GitHub.
[ { "version": "v1", "created": "Mon, 15 May 2017 17:33:47 GMT" } ]
2017-05-16T00:00:00
[ [ "Sanguino", "Luis Alberto Benthin", "" ], [ "Uetz", "Rafael", "" ] ]
TITLE: Software Vulnerability Analysis Using CPE and CVE ABSTRACT: In this paper, we analyze the Common Platform Enumeration (CPE) dictionary and the Common Vulnerabilities and Exposures (CVE) feeds. These repositories are widely used in Vulnerability Management Systems (VMSs) to check for known vulnerabilities in software products. The analysis shows, among other issues, a lack of synchronization between both datasets that can lead to incorrect results output by VMSs relying on those datasets. To deal with these problems, we developed a method that recommends to a user a prioritized list of CPE identifiers for a given software product. The user can then assign (and, if necessary, adapt) the most suitable CPE identifier to the software so that regular (e.g., daily) checks can find known vulnerabilities for this software in the CVE feeds. Our evaluation of this method shows that this interaction is indeed necessary because a fully automated CPE assignment is prone to errors due to the CPE and CVE shortcomings. We implemented an open-source VMS that employs the proposed method and published it on GitHub.
no_new_dataset
0.943608
1510.05198
Jiwei Li
Jiwei Li, Alan Ritter and Dan Jurafsky
Learning multi-faceted representations of individuals from heterogeneous evidence using neural networks
null
null
null
null
cs.SI cs.CL
http://creativecommons.org/licenses/by/4.0/
Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets, and the learned representations can be used as general features in, and have the potential to benefit, a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks.
[ { "version": "v1", "created": "Sun, 18 Oct 2015 04:26:08 GMT" }, { "version": "v2", "created": "Fri, 30 Oct 2015 22:45:41 GMT" }, { "version": "v3", "created": "Wed, 22 Feb 2017 06:20:53 GMT" }, { "version": "v4", "created": "Thu, 11 May 2017 20:47:13 GMT" } ]
2017-05-15T00:00:00
[ [ "Li", "Jiwei", "" ], [ "Ritter", "Alan", "" ], [ "Jurafsky", "Dan", "" ] ]
TITLE: Learning multi-faceted representations of individuals from heterogeneous evidence using neural networks ABSTRACT: Inferring latent attributes of people online is an important social computing task, but requires integrating the many heterogeneous sources of information available on the web. We propose learning individual representations of people using neural nets to integrate rich linguistic and network evidence gathered from social media. The algorithm is able to combine diverse cues, such as the text a person writes, their attributes (e.g. gender, employer, education, location) and social relations to other people. We show that by integrating both textual and network evidence, these representations offer improved performance at four important tasks in social media inference on Twitter: predicting (1) gender, (2) occupation, (3) location, and (4) friendships for users. Our approach scales to large datasets, and the learned representations can be used as general features in, and have the potential to benefit, a large number of downstream tasks including link prediction, community detection, or probabilistic reasoning over social networks.
no_new_dataset
0.947866
1511.04646
Yikang Shen
Yikang Shen, Wenge Rong, Nan Jiang, Baolin Peng, Jie Tang, Zhang Xiong
Word Embedding based Correlation Model for Question/Answer Matching
8 pages, 2 figures
AAAI (2017) 3511--3517
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of community-based question answering (Q&A) services, large-scale Q&A archives have been accumulated and are an important information and knowledge resource on the web. Question and answer matching has attracted much attention for its ability to reuse knowledge stored in these systems: it can be useful in enhancing user experience with recurrent questions. In this paper, we try to improve the matching accuracy by overcoming the lexical gap between question and answer pairs. A Word Embedding based Correlation (WEC) model is proposed by integrating the advantages of both the translation model and word embedding: given a random pair of words, WEC can score their co-occurrence probability in Q&A pairs, and it can also leverage the continuity and smoothness of continuous-space word representation to deal with new pairs of words that are rare in the training parallel text. An experimental study on the Yahoo! Answers dataset and the Baidu Zhidao dataset shows this new method's promising potential.
[ { "version": "v1", "created": "Sun, 15 Nov 2015 02:59:22 GMT" }, { "version": "v2", "created": "Sat, 26 Nov 2016 02:40:12 GMT" } ]
2017-05-15T00:00:00
[ [ "Shen", "Yikang", "" ], [ "Rong", "Wenge", "" ], [ "Jiang", "Nan", "" ], [ "Peng", "Baolin", "" ], [ "Tang", "Jie", "" ], [ "Xiong", "Zhang", "" ] ]
TITLE: Word Embedding based Correlation Model for Question/Answer Matching ABSTRACT: With the development of community-based question answering (Q&A) services, large-scale Q&A archives have been accumulated and are an important information and knowledge resource on the web. Question and answer matching has attracted much attention for its ability to reuse knowledge stored in these systems: it can be useful in enhancing user experience with recurrent questions. In this paper, we try to improve the matching accuracy by overcoming the lexical gap between question and answer pairs. A Word Embedding based Correlation (WEC) model is proposed by integrating the advantages of both the translation model and word embedding: given a random pair of words, WEC can score their co-occurrence probability in Q&A pairs, and it can also leverage the continuity and smoothness of continuous-space word representation to deal with new pairs of words that are rare in the training parallel text. An experimental study on the Yahoo! Answers dataset and the Baidu Zhidao dataset shows this new method's promising potential.
no_new_dataset
0.949201
1605.04129
Maedeh Aghaei
Maedeh Aghaei, Mariella Dimiccoli, Petia Radeva
With Whom Do I Interact? Detecting Social Interactions in Egocentric Photo-streams
6 pages, 9 figures, accepted and presented in International Conference on Pattern Recognition (ICPR 2016)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a user wearing a low-frame-rate wearable camera during a day, this work aims to automatically detect the moments when the user engages in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the appearing individuals -with respect to the user- in the scene from a bird-view perspective. As a result, the interaction pattern over the sequence can be understood as a two-dimensional time series that corresponds to the temporal evolution of the distance and orientation features over time. A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results for the proposed method for social interaction detection in egocentric photo-streams.
[ { "version": "v1", "created": "Fri, 13 May 2016 11:04:28 GMT" }, { "version": "v2", "created": "Fri, 12 May 2017 11:27:50 GMT" } ]
2017-05-15T00:00:00
[ [ "Aghaei", "Maedeh", "" ], [ "Dimiccoli", "Mariella", "" ], [ "Radeva", "Petia", "" ] ]
TITLE: With Whom Do I Interact? Detecting Social Interactions in Egocentric Photo-streams ABSTRACT: Given a user wearing a low-frame-rate wearable camera during a day, this work aims to automatically detect the moments when the user engages in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the appearing individuals -with respect to the user- in the scene from a bird-view perspective. As a result, the interaction pattern over the sequence can be understood as a two-dimensional time series that corresponds to the temporal evolution of the distance and orientation features over time. A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results for the proposed method for social interaction detection in egocentric photo-streams.
no_new_dataset
0.707809
1606.00915
Liang-Chieh Chen
Liang-Chieh Chen and George Papandreou and Iasonas Kokkinos and Kevin Murphy and Alan L. Yuille
DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs
Accepted by TPAMI
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
[ { "version": "v1", "created": "Thu, 2 Jun 2016 21:52:21 GMT" }, { "version": "v2", "created": "Fri, 12 May 2017 03:25:47 GMT" } ]
2017-05-15T00:00:00
[ [ "Chen", "Liang-Chieh", "" ], [ "Papandreou", "George", "" ], [ "Kokkinos", "Iasonas", "" ], [ "Murphy", "Kevin", "" ], [ "Yuille", "Alan L.", "" ] ]
TITLE: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs ABSTRACT: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-the-art on the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
no_new_dataset
0.954052
1611.07661
Michael Maire
Tsung-Wei Ke, Michael Maire, Stella X. Yu
Multigrid Neural Architectures
updated with ImageNet results; to appear at CVPR 2017
null
null
null
cs.CV cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs; convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices; we show synergy in combination with residual connections. Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid.
[ { "version": "v1", "created": "Wed, 23 Nov 2016 06:55:53 GMT" }, { "version": "v2", "created": "Thu, 11 May 2017 19:24:33 GMT" } ]
2017-05-15T00:00:00
[ [ "Ke", "Tsung-Wei", "" ], [ "Maire", "Michael", "" ], [ "Yu", "Stella X.", "" ] ]
TITLE: Multigrid Neural Architectures ABSTRACT: We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs; convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices; we show synergy in combination with residual connections. Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid.
no_new_dataset
0.950365
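To make the multigrid idea concrete, the following is a hedged PyTorch sketch of a single multigrid convolutional layer: it consumes a pyramid of grids and lets each output scale see its own grid plus resized copies of its pyramid neighbors, giving the filters the within-scale and cross-scale extent the abstract describes. The channel counts, average pooling, and bilinear resampling are illustrative assumptions rather than the paper's exact recipe.

```python
# Hedged sketch of one multigrid convolutional layer: each scale's output is
# computed from the concatenation of its own grid, a downsampled copy of the
# finer neighbor, and an upsampled copy of the coarser neighbor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultigridConv(nn.Module):
    def __init__(self, in_ch, out_ch, scales=3):
        super().__init__()
        self.convs = nn.ModuleList()
        for s in range(scales):
            # Edge scales have one pyramid neighbor, interior scales have two.
            n_grids = 1 + (s > 0) + (s < scales - 1)
            self.convs.append(nn.Conv2d(in_ch * n_grids, out_ch, 3, padding=1))

    def forward(self, pyramid):
        # pyramid: list of tensors, finest grid first, each half the
        # resolution of the previous one.
        outs = []
        for s, x in enumerate(pyramid):
            parts = [x]
            if s > 0:                      # downsample the finer neighbor
                parts.append(F.avg_pool2d(pyramid[s - 1], 2))
            if s < len(pyramid) - 1:       # upsample the coarser neighbor
                parts.append(F.interpolate(pyramid[s + 1], scale_factor=2,
                                           mode='bilinear',
                                           align_corners=False))
            outs.append(self.convs[s](torch.cat(parts, dim=1)))
        return outs

pyr = [torch.randn(1, 16, 32, 32), torch.randn(1, 16, 16, 16),
       torch.randn(1, 16, 8, 8)]
out = MultigridConv(16, 32)(pyr)   # a new pyramid: 32x32, 16x16, 8x8 grids
```

Stacking such layers is what makes the receptive field grow exponentially with depth: information on the coarsest grid reaches every finer grid within a few layers.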
1703.01790
Maedeh Aghaei
Maedeh Aghaei, Mariella Dimiccoli, Petia Radeva
All the people around me: face discovery in egocentric photo-streams
5 pages, 3 figures, accepted in IEEE International Conference on Image Processing (ICIP 2017)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given an unconstrained stream of images captured by a wearable photo-camera (2fpm), we propose an unsupervised bottom-up approach for automatically clustering the appearing faces into the individual identities present in these data. The problem is challenging since the images are acquired under real-world conditions; hence, the visible appearance of the people in the images undergoes substantial variations. Our proposed pipeline consists of first arranging the photo-stream into events, then localizing the appearances of multiple people within them, and finally grouping the various appearances of the same person across different events. Experimental results on a dataset acquired by wearing a photo-camera for one month demonstrate the effectiveness of the proposed approach for the considered purpose.
[ { "version": "v1", "created": "Mon, 6 Mar 2017 09:50:39 GMT" }, { "version": "v2", "created": "Fri, 12 May 2017 11:29:38 GMT" } ]
2017-05-15T00:00:00
[ [ "Aghaei", "Maedeh", "" ], [ "Dimiccoli", "Mariella", "" ], [ "Radeva", "Petia", "" ] ]
TITLE: All the people around me: face discovery in egocentric photo-streams ABSTRACT: Given an unconstrained stream of images captured by a wearable photo-camera (2fpm), we propose an unsupervised bottom-up approach for automatically clustering the appearing faces into the individual identities present in these data. The problem is challenging since the images are acquired under real-world conditions; hence, the visible appearance of the people in the images undergoes substantial variations. Our proposed pipeline consists of first arranging the photo-stream into events, then localizing the appearances of multiple people within them, and finally grouping the various appearances of the same person across different events. Experimental results on a dataset acquired by wearing a photo-camera for one month demonstrate the effectiveness of the proposed approach for the considered purpose.
no_new_dataset
0.917967
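The abstract above describes a three-stage pipeline: event segmentation, per-event face localization, and cross-event identity grouping. A rough skeleton of such a pipeline is sketched below using off-the-shelf stand-ins; the `face_recognition` and scikit-learn calls are substitutions for the paper's unsupervised bottom-up components, chosen only for illustration, and the event boundaries are assumed to be computed by an upstream step not shown here.

```python
# Rough skeleton of the three-stage pipeline from the abstract, built from
# off-the-shelf stand-ins (NOT the authors' method): event boundaries are
# assumed given, dlib-based detection/embedding replaces the paper's face
# localization, and DBSCAN replaces its bottom-up identity grouping.
import numpy as np
import face_recognition                    # dlib-based detector + embedder
from sklearn.cluster import DBSCAN

def discover_identities(image_paths, event_boundaries):
    """Cluster face appearances from a photo-stream into identities."""
    encodings = []
    # Stage 1 (assumed done upstream): event_boundaries splits the stream
    # into temporally coherent events, e.g. [(0, 40), (40, 95), ...].
    for start, end in event_boundaries:
        for path in image_paths[start:end]:
            img = face_recognition.load_image_file(path)
            # Stage 2: localize every visible face in the event's frames.
            boxes = face_recognition.face_locations(img)
            encodings.extend(face_recognition.face_encodings(img, boxes))
    if not encodings:
        return np.array([])
    # Stage 3: group appearances of the same person across events. A
    # density-based method avoids fixing the number of identities in
    # advance, which is unknown in an unconstrained photo-stream.
    labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(np.vstack(encodings))
    return labels  # -1 marks face appearances left unassigned (noise)
```

The low capture rate (2fpm) means consecutive appearances of a person can differ sharply in pose and lighting, which is why grouping is done over embedding space rather than by frame-to-frame tracking.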