Dataset columns (name, type, observed range of values):
  id              stringlengths     9 to 16
  submitter       stringlengths     3 to 64
  authors         stringlengths     5 to 6.63k
  title           stringlengths     7 to 245
  comments        stringlengths     1 to 482
  journal-ref     stringlengths     4 to 382
  doi             stringlengths     9 to 151
  report-no       stringclasses     984 values
  categories      stringlengths     5 to 108
  license         stringclasses     9 values
  abstract        stringlengths     83 to 3.41k
  versions        listlengths       1 to 20
  update_date     timestamp[s]date  2007-05-23 00:00:00 to 2025-04-11 00:00:00
  authors_parsed  sequencelengths   1 to 427
  prompt          stringlengths     166 to 3.49k
  label           stringclasses     2 values
  prob            float64           0.5 to 0.98
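The records below follow this schema: arXiv metadata fields, a prompt that concatenates title and abstract, a two-class label (new_dataset / no_new_dataset), and a probability score. As a minimal sketch only (the table's actual name and hosting are not stated here, so the identifier below is a placeholder), such a split could be loaded and filtered with the Hugging Face datasets library:

```python
from datasets import load_dataset

# Placeholder identifier -- the dataset's real location is not given above.
ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

print(ds.column_names)                                # id, submitter, ..., prompt, label, prob
print(ds[0]["title"], ds[0]["label"], ds[0]["prob"])  # inspect one record

# Rows labeled as introducing a new dataset, with a reasonably high score.
new_ds = ds.filter(lambda r: r["label"] == "new_dataset" and r["prob"] >= 0.9)
print(len(new_ds))
```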
1702.03684
Sebastian Bodenstedt
Sebastian Bodenstedt (1), Martin Wagner (2), Darko Kati\'c (1), Patrick Mietkowski (2), Benjamin Mayer (2), Hannes Kenngott (2), Beat M\"uller-Stich (2), R\"udiger Dillmann (1), Stefanie Speidel (1) ((1) Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, (2) Department of General, Visceral and Transplant Surgery, University of Heidelberg, Heidelberg)
Unsupervised temporal context learning using convolutional neural networks for laparoscopic workflow analysis
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer-assisted surgery (CAS) aims to provide the surgeon with the right type of assistance at the right moment. Such assistance systems are especially relevant in laparoscopic surgery, where CAS can alleviate some of the drawbacks that surgeons incur. For many assistance functions, e.g. displaying the location of a tumor at the appropriate time or suggesting what instruments to prepare next, analyzing the surgical workflow is a prerequisite. Since laparoscopic interventions are performed via endoscope, the video signal is an obvious sensor modality to rely on for workflow analysis. Image-based workflow analysis tasks in laparoscopy, such as phase recognition, skill assessment, video indexing or automatic annotation, require a temporal distinction between video frames. Generally computer vision based methods that generalize from previously seen data are used. For training such methods, large amounts of annotated data are necessary. Annotating surgical data requires expert knowledge, therefore collecting a sufficient amount of data is difficult, time-consuming and not always feasible. In this paper, we address this problem by presenting an unsupervised method for training a convolutional neural network (CNN) to differentiate between laparoscopic video frames on a temporal basis. We extract video frames at regular intervals from 324 unlabeled laparoscopic interventions, resulting in a dataset of approximately 2.2 million images. From this dataset, we extract image pairs from the same video and train a CNN to determine their temporal order. To solve this problem, the CNN has to extract features that are relevant for comprehending laparoscopic workflow. Furthermore, we demonstrate that such a CNN can be adapted for surgical workflow segmentation. We performed image-based workflow segmentation on a publicly available dataset of 7 cholecystectomies and 9 colorectal interventions.
[ { "version": "v1", "created": "Mon, 13 Feb 2017 09:29:50 GMT" } ]
2017-02-14T00:00:00
[ [ "Bodenstedt", "Sebastian", "" ], [ "Wagner", "Martin", "" ], [ "Katić", "Darko", "" ], [ "Mietkowski", "Patrick", "" ], [ "Mayer", "Benjamin", "" ], [ "Kenngott", "Hannes", "" ], [ "Müller-Stich", "Beat", "" ], [ "Dillmann", "Rüdiger", "" ], [ "Speidel", "Stefanie", "" ] ]
TITLE: Unsupervised temporal context learning using convolutional neural networks for laparoscopic workflow analysis ABSTRACT: Computer-assisted surgery (CAS) aims to provide the surgeon with the right type of assistance at the right moment. Such assistance systems are especially relevant in laparoscopic surgery, where CAS can alleviate some of the drawbacks that surgeons incur. For many assistance functions, e.g. displaying the location of a tumor at the appropriate time or suggesting what instruments to prepare next, analyzing the surgical workflow is a prerequisite. Since laparoscopic interventions are performed via endoscope, the video signal is an obvious sensor modality to rely on for workflow analysis. Image-based workflow analysis tasks in laparoscopy, such as phase recognition, skill assessment, video indexing or automatic annotation, require a temporal distinction between video frames. Generally computer vision based methods that generalize from previously seen data are used. For training such methods, large amounts of annotated data are necessary. Annotating surgical data requires expert knowledge, therefore collecting a sufficient amount of data is difficult, time-consuming and not always feasible. In this paper, we address this problem by presenting an unsupervised method for training a convolutional neural network (CNN) to differentiate between laparoscopic video frames on a temporal basis. We extract video frames at regular intervals from 324 unlabeled laparoscopic interventions, resulting in a dataset of approximately 2.2 million images. From this dataset, we extract image pairs from the same video and train a CNN to determine their temporal order. To solve this problem, the CNN has to extract features that are relevant for comprehending laparoscopic workflow. Furthermore, we demonstrate that such a CNN can be adapted for surgical workflow segmentation. We performed image-based workflow segmentation on a publicly available dataset of 7 cholecystectomies and 9 colorectal interventions.
label: no_new_dataset
prob: 0.615926
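The record above (1702.03684) trains a CNN on an unsupervised signal: the temporal order of two frames taken from the same video. A rough sketch of that pair construction, with frame extraction, gap size, and class balancing left as assumptions, could look like this:

```python
# Illustrative only: build (frame_a, frame_b, label) training pairs from one
# unlabeled video, using temporal order as a free binary label.
import random

def make_order_pairs(frames, n_pairs=1000, min_gap=25):
    """frames: list of frame arrays from a single video, in temporal order."""
    pairs = []
    for _ in range(n_pairs):
        i, j = sorted(random.sample(range(len(frames)), 2))
        if j - i < min_gap:          # skip near-duplicate frames
            continue
        if random.random() < 0.5:    # present pairs in both orders
            pairs.append((frames[i], frames[j], 1))   # 1 = correct temporal order
        else:
            pairs.append((frames[j], frames[i], 0))   # 0 = reversed order
    return pairs
```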
1702.03825
Yang Zhang
Yang Zhang, Yusu Wang, Srinivasan Parthasarathy
Analyzing and Visualizing Scalar Fields on Graphs
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The value proposition of a dataset often resides in the implicit interconnections or explicit relationships (patterns) among individual entities, and is often modeled as a graph. Effective visualization of such graphs can lead to key insights uncovering such value. In this article we propose a visualization method to explore graphs with numerical attributes associated with nodes (or edges) -- referred to as scalar graphs. Such numerical attributes can represent raw content information, similarities, or derived information reflecting important network measures such as triangle density and centrality. The proposed visualization strategy seeks to simultaneously uncover the relationship between attribute values and graph topology, and relies on transforming the network to generate a terrain map. A key objective here is to ensure that the terrain map reveals the overall distribution of components-of-interest (e.g. dense subgraphs, k-cores) and the relationships among them while being sensitive to the attribute values over the graph. We also design extensions that can capture the relationship across multiple numerical attributes (scalars). We demonstrate the efficacy of our method on several real-world data science tasks while scaling to large graphs with millions of nodes.
[ { "version": "v1", "created": "Fri, 10 Feb 2017 07:47:48 GMT" } ]
2017-02-14T00:00:00
[ [ "Zhang", "Yang", "" ], [ "Wang", "Yusu", "" ], [ "Parthasarathy", "Srinivasan", "" ] ]
TITLE: Analyzing and Visualizing Scalar Fields on Graphs ABSTRACT: The value proposition of a dataset often resides in the implicit interconnections or explicit relationships (patterns) among individual entities, and is often modeled as a graph. Effective visualization of such graphs can lead to key insights uncovering such value. In this article we propose a visualization method to explore graphs with numerical attributes associated with nodes (or edges) -- referred to as scalar graphs. Such numerical attributes can represent raw content information, similarities, or derived information reflecting important network measures such as triangle density and centrality. The proposed visualization strategy seeks to simultaneously uncover the relationship between attribute values and graph topology, and relies on transforming the network to generate a terrain map. A key objective here is to ensure that the terrain map reveals the overall distribution of components-of-interest (e.g. dense subgraphs, k-cores) and the relationships among them while being sensitive to the attribute values over the graph. We also design extensions that can capture the relationship across multiple numerical attributes (scalars). We demonstrate the efficacy of our method on several real-world data science tasks while scaling to large graphs with millions of nodes.
label: no_new_dataset
prob: 0.949623
1702.03856
Sameer Bansal
Sameer Bansal, Herman Kamper, Adam Lopez and Sharon Goldwater
Towards speech-to-text translation without speech recognition
comments: To appear in EACL 2017 (short papers)
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We explore the problem of translating speech to text in low-resource scenarios where neither automatic speech recognition (ASR) nor machine translation (MT) are available, but we have training data in the form of audio paired with text translations. We present the first system for this problem applied to a realistic multi-speaker dataset, the CALLHOME Spanish-English speech translation corpus. Our approach uses unsupervised term discovery (UTD) to cluster repeated patterns in the audio, creating a pseudotext, which we pair with translations to create a parallel text and train a simple bag-of-words MT model. We identify the challenges faced by the system, finding that the difficulty of cross-speaker UTD results in low recall, but that our system is still able to correctly translate some content words in test data.
[ { "version": "v1", "created": "Mon, 13 Feb 2017 16:30:23 GMT" } ]
2017-02-14T00:00:00
[ [ "Bansal", "Sameer", "" ], [ "Kamper", "Herman", "" ], [ "Lopez", "Adam", "" ], [ "Goldwater", "Sharon", "" ] ]
TITLE: Towards speech-to-text translation without speech recognition ABSTRACT: We explore the problem of translating speech to text in low-resource scenarios where neither automatic speech recognition (ASR) nor machine translation (MT) are available, but we have training data in the form of audio paired with text translations. We present the first system for this problem applied to a realistic multi-speaker dataset, the CALLHOME Spanish-English speech translation corpus. Our approach uses unsupervised term discovery (UTD) to cluster repeated patterns in the audio, creating a pseudotext, which we pair with translations to create a parallel text and train a simple bag-of-words MT model. We identify the challenges faced by the system, finding that the difficulty of cross-speaker UTD results in low recall, but that our system is still able to correctly translate some content words in test data.
label: no_new_dataset
prob: 0.848972
1603.04037
Umar Iqbal
Umar Iqbal, Martin Garbade, Juergen Gall
Pose for Action - Action for Pose
comments: Accepted to FG-2017
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we propose to utilize information about human actions to improve pose estimation in monocular videos. To this end, we present a pictorial structure model that exploits high-level information about activities to incorporate higher-order part dependencies by modeling action specific appearance models and pose priors. However, instead of using an additional expensive action recognition framework, the action priors are efficiently estimated by our pose estimation framework. This is achieved by starting with a uniform action prior and updating the action prior during pose estimation. We also show that learning the right amount of appearance sharing among action classes improves the pose estimation. We demonstrate the effectiveness of the proposed method on two challenging datasets for pose estimation and action recognition with over 80,000 test images.
[ { "version": "v1", "created": "Sun, 13 Mar 2016 15:09:35 GMT" }, { "version": "v2", "created": "Fri, 10 Feb 2017 14:01:09 GMT" } ]
2017-02-13T00:00:00
[ [ "Iqbal", "Umar", "" ], [ "Garbade", "Martin", "" ], [ "Gall", "Juergen", "" ] ]
TITLE: Pose for Action - Action for Pose ABSTRACT: In this work we propose to utilize information about human actions to improve pose estimation in monocular videos. To this end, we present a pictorial structure model that exploits high-level information about activities to incorporate higher-order part dependencies by modeling action specific appearance models and pose priors. However, instead of using an additional expensive action recognition framework, the action priors are efficiently estimated by our pose estimation framework. This is achieved by starting with a uniform action prior and updating the action prior during pose estimation. We also show that learning the right amount of appearance sharing among action classes improves the pose estimation. We demonstrate the effectiveness of the proposed method on two challenging datasets for pose estimation and action recognition with over 80,000 test images.
label: no_new_dataset
prob: 0.947381
1609.03056
Yemin Shi Shi
Yemin Shi, Yonghong Tian, Yaowei Wang, Tiejun Huang
Sequential Deep Trajectory Descriptor for Action Recognition with Three-stream CNN
comments: 10 pages, 29 figures, T-MM
journal-ref: null
doi: 10.1109/TMM.2017.2666540
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning the spatial-temporal representation of motion information is crucial to human action recognition. Nevertheless, most of the existing features or descriptors cannot capture motion information effectively, especially for long-term motion. To address this problem, this paper proposes a long-term motion descriptor called sequential Deep Trajectory Descriptor (sDTD). Specifically, we project dense trajectories into two-dimensional planes, and subsequently a CNN-RNN network is employed to learn an effective representation for long-term motion. Unlike the popular two-stream ConvNets, the sDTD stream is introduced into a three-stream framework so as to identify actions from a video sequence. Consequently, this three-stream framework can simultaneously capture static spatial features, short-term motion and long-term motion in the video. Extensive experiments were conducted on three challenging datasets: KTH, HMDB51 and UCF101. Experimental results show that our method achieves state-of-the-art performance on the KTH and UCF101 datasets, and is comparable to the state-of-the-art methods on the HMDB51 dataset.
[ { "version": "v1", "created": "Sat, 10 Sep 2016 14:24:38 GMT" }, { "version": "v2", "created": "Fri, 10 Feb 2017 02:49:10 GMT" } ]
2017-02-13T00:00:00
[ [ "Shi", "Yemin", "" ], [ "Tian", "Yonghong", "" ], [ "Wang", "Yaowei", "" ], [ "Huang", "Tiejun", "" ] ]
TITLE: Sequential Deep Trajectory Descriptor for Action Recognition with Three-stream CNN ABSTRACT: Learning the spatial-temporal representation of motion information is crucial to human action recognition. Nevertheless, most of the existing features or descriptors cannot capture motion information effectively, especially for long-term motion. To address this problem, this paper proposes a long-term motion descriptor called sequential Deep Trajectory Descriptor (sDTD). Specifically, we project dense trajectories into two-dimensional planes, and subsequently a CNN-RNN network is employed to learn an effective representation for long-term motion. Unlike the popular two-stream ConvNets, the sDTD stream is introduced into a three-stream framework so as to identify actions from a video sequence. Consequently, this three-stream framework can simultaneously capture static spatial features, short-term motion and long-term motion in the video. Extensive experiments were conducted on three challenging datasets: KTH, HMDB51 and UCF101. Experimental results show that our method achieves state-of-the-art performance on the KTH and UCF101 datasets, and is comparable to the state-of-the-art methods on the HMDB51 dataset.
label: no_new_dataset
prob: 0.952486
1610.08738
Nicolas Keriven
Nicolas Keriven (PANAMA), Nicolas Tremblay (GIPSA-CICS), Yann Traonmilin (PANAMA), R\'emi Gribonval (PANAMA)
Compressive K-means
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Lloyd-Max algorithm is a classical approach to perform K-means clustering. Unfortunately, its cost becomes prohibitive as the training dataset grows large. We propose a compressive version of K-means (CKM), that estimates cluster centers from a sketch, i.e. from a drastically compressed representation of the training dataset. We demonstrate empirically that CKM performs similarly to Lloyd-Max, for a sketch size proportional to the number of centroids times the ambient dimension, and independent of the size of the original dataset. Given the sketch, the computational complexity of CKM is also independent of the size of the dataset. Unlike Lloyd-Max which requires several replicates, we further demonstrate that CKM is almost insensitive to initialization. For a large dataset of 10^7 data points, we show that CKM can run two orders of magnitude faster than five replicates of Lloyd-Max, with similar clustering performance on artificial data. Finally, CKM achieves lower classification errors on handwritten digits classification.
[ { "version": "v1", "created": "Thu, 27 Oct 2016 12:13:05 GMT" }, { "version": "v2", "created": "Wed, 30 Nov 2016 07:58:05 GMT" }, { "version": "v3", "created": "Mon, 9 Jan 2017 10:40:53 GMT" }, { "version": "v4", "created": "Fri, 10 Feb 2017 15:22:24 GMT" } ]
2017-02-13T00:00:00
[ [ "Keriven", "Nicolas", "", "PANAMA" ], [ "Tremblay", "Nicolas", "", "GIPSA-CICS" ], [ "Traonmilin", "Yann", "", "PANAMA" ], [ "Gribonval", "Rémi", "", "PANAMA" ] ]
TITLE: Compressive K-means ABSTRACT: The Lloyd-Max algorithm is a classical approach to perform K-means clustering. Unfortunately, its cost becomes prohibitive as the training dataset grows large. We propose a compressive version of K-means (CKM), that estimates cluster centers from a sketch, i.e. from a drastically compressed representation of the training dataset. We demonstrate empirically that CKM performs similarly to Lloyd-Max, for a sketch size proportional to the number of centroids times the ambient dimension, and independent of the size of the original dataset. Given the sketch, the computational complexity of CKM is also independent of the size of the dataset. Unlike Lloyd-Max which requires several replicates, we further demonstrate that CKM is almost insensitive to initialization. For a large dataset of 10^7 data points, we show that CKM can run two orders of magnitude faster than five replicates of Lloyd-Max, with similar clustering performance on artificial data. Finally, CKM achieves lower classification errors on handwritten digits classification.
label: no_new_dataset
prob: 0.941868
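The compressive K-means record above (1610.08738) estimates centroids from a fixed-size sketch of the data. As an illustration only, one common sketch in this line of work averages random Fourier features of all points; the frequency distribution and sketch length below are assumptions, not the paper's tuned choices:

```python
# Illustrative sketch operator: compress the whole dataset into a single vector
# of averaged random Fourier features, whose length depends on K*d but not on n.
import numpy as np

def dataset_sketch(X, m, scale=1.0, seed=0):
    """X: (n, d) data matrix. Returns (sketch vector of length m, frequencies Omega)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Omega = rng.normal(scale=scale, size=(d, m))  # random frequencies (assumed Gaussian)
    Z = np.exp(1j * X @ Omega)                    # random Fourier features, shape (n, m)
    return Z.mean(axis=0), Omega                  # averaging removes the dependence on n

# Example: K=10 centroids in d=20 dimensions -> sketch length on the order of K*d.
X = np.random.randn(100_000, 20)
sketch, Omega = dataset_sketch(X, m=10 * 20)
print(sketch.shape)   # (200,)
```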
1611.00128
David Rosen
David M. Rosen, Luca Carlone, Afonso S. Bandeira, and John J. Leonard
A Certifiably Correct Algorithm for Synchronization over the Special Euclidean Group
comments: 16 pages, 8 figures, to appear in the International Workshop on the Algorithmic Foundations of Robotics (WAFR), Dec 2016
journal-ref: null
doi: null
report-no: null
categories: cs.RO math.OC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many geometric estimation problems take the form of synchronization over the special Euclidean group: estimate the values of a set of poses given noisy measurements of a subset of their pairwise relative transforms. This problem is typically formulated as a maximum-likelihood estimation that requires solving a nonconvex nonlinear program, which is computationally intractable in general. Nevertheless, in this paper we present an algorithm that is able to efficiently recover certifiably globally optimal solutions of this estimation problem in a non-adversarial noise regime. The crux of our approach is the development of a semidefinite relaxation of the maximum-likelihood estimation whose minimizer provides the exact MLE so long as the magnitude of the noise corrupting the available measurements falls below a certain critical threshold; furthermore, whenever exactness obtains, it is possible to verify this fact a posteriori, thereby certifying the optimality of the recovered estimate. We develop a specialized optimization scheme for solving large-scale instances of this semidefinite relaxation by exploiting its low-rank, geometric, and graph-theoretic structure to reduce it to an equivalent optimization problem on a low-dimensional Riemannian manifold, and then design a Riemannian truncated-Newton trust-region method to solve this reduction efficiently. We combine this fast optimization approach with a simple rounding procedure to produce our algorithm, SE-Sync. Experimental evaluation on a variety of simulated and real-world pose-graph SLAM datasets shows that SE-Sync is capable of recovering globally optimal solutions when the available measurements are corrupted by noise up to an order of magnitude greater than that typically encountered in robotics applications, and does so at a computational cost that scales comparably with that of direct Newton-type local search techniques.
[ { "version": "v1", "created": "Tue, 1 Nov 2016 04:54:35 GMT" }, { "version": "v2", "created": "Tue, 22 Nov 2016 22:37:03 GMT" }, { "version": "v3", "created": "Fri, 10 Feb 2017 02:04:32 GMT" } ]
2017-02-13T00:00:00
[ [ "Rosen", "David M.", "" ], [ "Carlone", "Luca", "" ], [ "Bandeira", "Afonso S.", "" ], [ "Leonard", "John J.", "" ] ]
TITLE: A Certifiably Correct Algorithm for Synchronization over the Special Euclidean Group ABSTRACT: Many geometric estimation problems take the form of synchronization over the special Euclidean group: estimate the values of a set of poses given noisy measurements of a subset of their pairwise relative transforms. This problem is typically formulated as a maximum-likelihood estimation that requires solving a nonconvex nonlinear program, which is computationally intractable in general. Nevertheless, in this paper we present an algorithm that is able to efficiently recover certifiably globally optimal solutions of this estimation problem in a non-adversarial noise regime. The crux of our approach is the development of a semidefinite relaxation of the maximum-likelihood estimation whose minimizer provides the exact MLE so long as the magnitude of the noise corrupting the available measurements falls below a certain critical threshold; furthermore, whenever exactness obtains, it is possible to verify this fact a posteriori, thereby certifying the optimality of the recovered estimate. We develop a specialized optimization scheme for solving large-scale instances of this semidefinite relaxation by exploiting its low-rank, geometric, and graph-theoretic structure to reduce it to an equivalent optimization problem on a low-dimensional Riemannian manifold, and then design a Riemannian truncated-Newton trust-region method to solve this reduction efficiently. We combine this fast optimization approach with a simple rounding procedure to produce our algorithm, SE-Sync. Experimental evaluation on a variety of simulated and real-world pose-graph SLAM datasets shows that SE-Sync is capable of recovering globally optimal solutions when the available measurements are corrupted by noise up to an order of magnitude greater than that typically encountered in robotics applications, and does so at a computational cost that scales comparably with that of direct Newton-type local search techniques.
label: no_new_dataset
prob: 0.941868
1702.02817
Immanuel Bayer
Immanuel Bayer, Uwe Nagel, Steffen Rendle
Graph Based Relational Features for Collective Classification
comments: Pacific-Asia Conference on Knowledge Discovery and Data Mining
journal-ref: null
doi: 10.1007/978-3-319-18032-8_35
report-no: null
categories: cs.IR cs.AI cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Statistical Relational Learning (SRL) methods have shown that classification accuracy can be improved by integrating relations between samples. Techniques such as iterative classification or relaxation labeling achieve this by propagating information between related samples during the inference process. When only a few samples are labeled and connections between samples are sparse, collective inference methods have shown large improvements over standard feature-based ML methods. However, in contrast to feature based ML, collective inference methods require complex inference procedures and often depend on the strong assumption of label consistency among related samples. In this paper, we introduce new relational features for standard ML methods by extracting information from direct and indirect relations. We show empirically on three standard benchmark datasets that our relational features yield results comparable to collective inference methods. Finally we show that our proposal outperforms these methods when additional information is available.
[ { "version": "v1", "created": "Thu, 9 Feb 2017 12:58:23 GMT" } ]
2017-02-13T00:00:00
[ [ "Bayer", "Immanuel", "" ], [ "Nagel", "Uwe", "" ], [ "Rendle", "Steffen", "" ] ]
TITLE: Graph Based Relational Features for Collective Classification ABSTRACT: Statistical Relational Learning (SRL) methods have shown that classification accuracy can be improved by integrating relations between samples. Techniques such as iterative classification or relaxation labeling achieve this by propagating information between related samples during the inference process. When only a few samples are labeled and connections between samples are sparse, collective inference methods have shown large improvements over standard feature-based ML methods. However, in contrast to feature based ML, collective inference methods require complex inference procedures and often depend on the strong assumption of label consistency among related samples. In this paper, we introduce new relational features for standard ML methods by extracting information from direct and indirect relations. We show empirically on three standard benchmark datasets that our relational features yield results comparable to collective inference methods. Finally we show that our proposal outperforms these methods when additional information is available.
label: no_new_dataset
prob: 0.945651
1702.02970
Jonathan Ullman
Mitali Bafna and Jonathan Ullman
The Price of Selection in Differential Privacy
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DS cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the differentially private top-$k$ selection problem, we are given a dataset $X \in \{\pm 1\}^{n \times d}$, in which each row belongs to an individual and each column corresponds to some binary attribute, and our goal is to find a set of $k \ll d$ columns whose means are approximately as large as possible. Differential privacy requires that our choice of these $k$ columns does not depend too much on any one individual's dataset. This problem can be solved using the well-known exponential mechanism and composition properties of differential privacy. In the high-accuracy regime, where we require the error of the selection procedure to be smaller than the so-called sampling error $\alpha \approx \sqrt{\ln(d)/n}$, this procedure succeeds given a dataset of size $n \gtrsim k \ln(d)$. We prove a matching lower bound, showing that a dataset of size $n \gtrsim k \ln(d)$ is necessary for private top-$k$ selection in this high-accuracy regime. Our lower bound is the first to show that selecting the $k$ largest columns requires more data than simply estimating the value of those $k$ columns, which can be done using a dataset of size just $n \gtrsim k$.
[ { "version": "v1", "created": "Thu, 9 Feb 2017 20:11:49 GMT" } ]
2017-02-13T00:00:00
[ [ "Bafna", "Mitali", "" ], [ "Ullman", "Jonathan", "" ] ]
TITLE: The Price of Selection in Differential Privacy ABSTRACT: In the differentially private top-$k$ selection problem, we are given a dataset $X \in \{\pm 1\}^{n \times d}$, in which each row belongs to an individual and each column corresponds to some binary attribute, and our goal is to find a set of $k \ll d$ columns whose means are approximately as large as possible. Differential privacy requires that our choice of these $k$ columns does not depend too much on any one individual's dataset. This problem can be solved using the well-known exponential mechanism and composition properties of differential privacy. In the high-accuracy regime, where we require the error of the selection procedure to be smaller than the so-called sampling error $\alpha \approx \sqrt{\ln(d)/n}$, this procedure succeeds given a dataset of size $n \gtrsim k \ln(d)$. We prove a matching lower bound, showing that a dataset of size $n \gtrsim k \ln(d)$ is necessary for private top-$k$ selection in this high-accuracy regime. Our lower bound is the first to show that selecting the $k$ largest columns requires more data than simply estimating the value of those $k$ columns, which can be done using a dataset of size just $n \gtrsim k$.
label: no_new_dataset
prob: 0.912553
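The abstract above (1702.02970) notes that private top-k selection can be solved with the well-known exponential mechanism plus composition. A minimal sketch of that baseline, using the Gumbel-noise form of the exponential mechanism and naive budget splitting (the parameter choices are illustrative, not the paper's analysis):

```python
# Select k columns of X in {-1,+1}^{n x d} with approximately largest means,
# via k rounds of the exponential mechanism (Gumbel-max form), eps/k per round.
import numpy as np

def dp_top_k(X, k, eps, seed=0):
    """Returns k column indices chosen under (eps)-differential privacy (basic composition)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X.mean(axis=0)
    sensitivity = 2.0 / n                 # changing one row moves a column mean by at most 2/n
    chosen, remaining = [], list(range(d))
    for _ in range(k):
        scale = 2.0 * sensitivity / (eps / k)
        noisy = means[remaining] + rng.gumbel(scale=scale, size=len(remaining))
        pick = remaining[int(np.argmax(noisy))]   # argmax of Gumbel-noised utilities
        chosen.append(pick)
        remaining.remove(pick)
    return chosen
```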
1702.03267
Amarjot Singh
Amarjot Singh and Nick Kingsbury
Dual-Tree Wavelet Scattering Network with Parametric Log Transformation for Object Classification
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a ScatterNet that uses a parametric log transformation with Dual-Tree complex wavelets to extract translation invariant representations from a multi-resolution image. The parametric transformation aids the OLS pruning algorithm by converting the skewed distributions into relatively mean-symmetric distributions while the Dual-Tree wavelets improve the computational efficiency of the network. The proposed network is shown to outperform Mallat's ScatterNet on two image datasets, both for classification accuracy and computational efficiency. The advantages of the proposed network over other supervised and some unsupervised methods are also presented using experiments performed on different training dataset sizes.
[ { "version": "v1", "created": "Fri, 10 Feb 2017 18:02:05 GMT" } ]
2017-02-13T00:00:00
[ [ "Singh", "Amarjot", "" ], [ "Kingsbury", "Nick", "" ] ]
TITLE: Dual-Tree Wavelet Scattering Network with Parametric Log Transformation for Object Classification ABSTRACT: We introduce a ScatterNet that uses a parametric log transformation with Dual-Tree complex wavelets to extract translation invariant representations from a multi-resolution image. The parametric transformation aids the OLS pruning algorithm by converting the skewed distributions into relatively mean-symmetric distributions while the Dual-Tree wavelets improve the computational efficiency of the network. The proposed network is shown to outperform Mallat's ScatterNet on two image datasets, both for classification accuracy and computational efficiency. The advantages of the proposed network over other supervised and some unsupervised methods are also presented using experiments performed on different training dataset sizes.
label: no_new_dataset
prob: 0.951594
1701.03918
Yongqing Wang
Yongqing Wang, Shenghua Liu, Huawei Shen, Xueqi Cheng
Marked Temporal Dynamics Modeling based on Recurrent Neural Network
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We are now witnessing the increasing availability of event stream data, i.e., a sequence of events with each event typically being denoted by the time it occurs and its mark information (e.g., event type). A fundamental problem is to model and predict such kind of marked temporal dynamics, i.e., when the next event will take place and what its mark will be. Existing methods either predict only the mark or the time of the next event, or predict both of them, yet separately. Indeed, in marked temporal dynamics, the time and the mark of the next event are highly dependent on each other, requiring a method that could simultaneously predict both of them. To tackle this problem, in this paper, we propose to model marked temporal dynamics by using a mark-specific intensity function to explicitly capture the dependency between the mark and the time of the next event. Extensive experiments on two datasets demonstrate that the proposed method outperforms state-of-the-art methods at predicting marked temporal dynamics.
[ { "version": "v1", "created": "Sat, 14 Jan 2017 13:26:39 GMT" } ]
2017-02-12T00:00:00
[ [ "Wang", "Yongqing", "" ], [ "Liu", "Shenghua", "" ], [ "Shen", "Huawei", "" ], [ "Cheng", "Xueqi", "" ] ]
TITLE: Marked Temporal Dynamics Modeling based on Recurrent Neural Network ABSTRACT: We are now witnessing the increasing availability of event stream data, i.e., a sequence of events with each event typically being denoted by the time it occurs and its mark information (e.g., event type). A fundamental problem is to model and predict such kind of marked temporal dynamics, i.e., when the next event will take place and what its mark will be. Existing methods either predict only the mark or the time of the next event, or predict both of them, yet separately. Indeed, in marked temporal dynamics, the time and the mark of the next event are highly dependent on each other, requiring a method that could simultaneously predict both of them. To tackle this problem, in this paper, we propose to model marked temporal dynamics by using a mark-specific intensity function to explicitly capture the dependency between the mark and the time of the next event. Extensive experiments on two datasets demonstrate that the proposed method outperforms state-of-the-art methods at predicting marked temporal dynamics.
label: no_new_dataset
prob: 0.950549
1701.03947
Tuan Tran
Tuan Tran, Claudia Nieder\'ee, Nattiya Kanhabua, Ujwal Gadiraju, Avishek Anand
Balancing Novelty and Salience: Adaptive Learning to Rank Entities for Timeline Summarization of High-impact Events
comments: Published via ACM to CIKM 2015
journal-ref: null
doi: 10.1145/2806416.2806486
report-no: null
categories: cs.IR cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-running, high-impact events such as the Boston Marathon bombing often develop through many stages and involve a large number of entities in their unfolding. Timeline summarization of an event by key sentences eases story digestion, but does not distinguish between what a user remembers and what she might want to re-check. In this work, we present a novel approach for timeline summarization of high-impact events, which uses entities instead of sentences for summarizing the event at each individual point in time. Such entity summaries can serve as both (1) important memory cues in a retrospective event consideration and (2) pointers for personalized event exploration. In order to automatically create such summaries, it is crucial to identify the "right" entities for inclusion. We propose to learn a ranking function for entities, with a dynamically adapted trade-off between the in-document salience of entities and the informativeness of entities across documents, i.e., the level of new information associated with an entity for a time point under consideration. Furthermore, for capturing collective attention for an entity we use an innovative soft labeling approach based on Wikipedia. Our experiments on a real large news datasets confirm the effectiveness of the proposed methods.
[ { "version": "v1", "created": "Sat, 14 Jan 2017 16:47:51 GMT" } ]
2017-02-12T00:00:00
[ [ "Tran", "Tuan", "" ], [ "Niederée", "Claudia", "" ], [ "Kanhabua", "Nattiya", "" ], [ "Gadiraju", "Ujwal", "" ], [ "Anand", "Avishek", "" ] ]
TITLE: Balancing Novelty and Salience: Adaptive Learning to Rank Entities for Timeline Summarization of High-impact Events ABSTRACT: Long-running, high-impact events such as the Boston Marathon bombing often develop through many stages and involve a large number of entities in their unfolding. Timeline summarization of an event by key sentences eases story digestion, but does not distinguish between what a user remembers and what she might want to re-check. In this work, we present a novel approach for timeline summarization of high-impact events, which uses entities instead of sentences for summarizing the event at each individual point in time. Such entity summaries can serve as both (1) important memory cues in a retrospective event consideration and (2) pointers for personalized event exploration. In order to automatically create such summaries, it is crucial to identify the "right" entities for inclusion. We propose to learn a ranking function for entities, with a dynamically adapted trade-off between the in-document salience of entities and the informativeness of entities across documents, i.e., the level of new information associated with an entity for a time point under consideration. Furthermore, for capturing collective attention for an entity we use an innovative soft labeling approach based on Wikipedia. Our experiments on a real large news datasets confirm the effectiveness of the proposed methods.
label: no_new_dataset
prob: 0.954858
1607.03392
Christian Dansereau
Christian Dansereau, Yassine Benhajali, Celine Risterucci, Emilio Merlo Pich, Pierre Orban, Douglas Arnold, Pierre Bellec
Statistical power and prediction accuracy in multisite resting-state fMRI connectivity
comments: null
journal-ref: NeuroImage.Vol 149, p. 220-232 (2017)
doi: 10.1016/j.neuroimage.2017.01.072
report-no: null
categories: q-bio.QM cs.CE stat.ML
license: http://creativecommons.org/licenses/by/4.0/
Connectivity studies using resting-state functional magnetic resonance imaging are increasingly pooling data acquired at multiple sites. While this may allow investigators to speed up recruitment or increase sample size, multisite studies also potentially introduce systematic biases in connectivity measures across sites. In this work, we measure the inter-site effect in connectivity and its impact on our ability to detect individual and group differences. Our study was based on real, as opposed to simulated, multisite fMRI datasets collected in N=345 young, healthy subjects across 8 scanning sites with 3T scanners and heterogeneous scanning protocols, drawn from the 1000 functional connectome project. We first empirically show that typical functional networks were reliably found at the group level in all sites, and that the amplitude of the inter-site effects was small to moderate, with a Cohen's effect size below 0.5 on average across brain connections. We then implemented a series of Monte-Carlo simulations, based on real data, to evaluate the impact of the multisite effects on detection power in statistical tests comparing two groups (with and without the effect) using a general linear model, as well as on the prediction of group labels with a support-vector machine. As a reference, we also implemented the same simulations with fMRI data collected at a single site using an identical sample size. Simulations revealed that using data from heterogeneous sites only slightly decreased our ability to detect changes compared to a monosite study with the GLM, and had a greater impact on prediction accuracy. Taken together, our results support the feasibility of multisite studies in rs-fMRI provided the sample size is large enough.
[ { "version": "v1", "created": "Tue, 12 Jul 2016 15:22:52 GMT" }, { "version": "v2", "created": "Thu, 1 Dec 2016 19:47:13 GMT" }, { "version": "v3", "created": "Fri, 27 Jan 2017 17:57:06 GMT" } ]
2017-02-10T00:00:00
[ [ "Dansereau", "Christian", "" ], [ "Benhajali", "Yassine", "" ], [ "Risterucci", "Celine", "" ], [ "Pich", "Emilio Merlo", "" ], [ "Orban", "Pierre", "" ], [ "Arnold", "Douglas", "" ], [ "Bellec", "Pierre", "" ] ]
TITLE: Statistical power and prediction accuracy in multisite resting-state fMRI connectivity ABSTRACT: Connectivity studies using resting-state functional magnetic resonance imaging are increasingly pooling data acquired at multiple sites. While this may allow investigators to speed up recruitment or increase sample size, multisite studies also potentially introduce systematic biases in connectivity measures across sites. In this work, we measure the inter-site effect in connectivity and its impact on our ability to detect individual and group differences. Our study was based on real, as opposed to simulated, multisite fMRI datasets collected in N=345 young, healthy subjects across 8 scanning sites with 3T scanners and heterogeneous scanning protocols, drawn from the 1000 functional connectome project. We first empirically show that typical functional networks were reliably found at the group level in all sites, and that the amplitude of the inter-site effects was small to moderate, with a Cohen's effect size below 0.5 on average across brain connections. We then implemented a series of Monte-Carlo simulations, based on real data, to evaluate the impact of the multisite effects on detection power in statistical tests comparing two groups (with and without the effect) using a general linear model, as well as on the prediction of group labels with a support-vector machine. As a reference, we also implemented the same simulations with fMRI data collected at a single site using an identical sample size. Simulations revealed that using data from heterogeneous sites only slightly decreased our ability to detect changes compared to a monosite study with the GLM, and had a greater impact on prediction accuracy. Taken together, our results support the feasibility of multisite studies in rs-fMRI provided the sample size is large enough.
label: no_new_dataset
prob: 0.945197
1702.01446
Nirman Kumar
Pankaj K. Agarwal and Nirman Kumar and Stavros Sintos and Subhash Suri
Efficient Algorithms for k-Regret Minimizing Sets
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DS cs.CG cs.DB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A regret minimizing set Q is a small size representation of a much larger database P so that user queries executed on Q return answers whose scores are not much worse than those on the full dataset. In particular, a k-regret minimizing set has the property that the regret ratio between the score of the top-1 item in Q and the score of the top-k item in P is minimized, where the score of an item is the inner product of the item's attributes with a user's weight (preference) vector. The problem is challenging because we want to find a single representative set Q whose regret ratio is small with respect to all possible user weight vectors. We show that k-regret minimization is NP-Complete for all dimensions d >= 3. This settles an open problem from Chester et al. [VLDB 2014], and resolves the complexity status of the problem for all d: the problem is known to have polynomial-time solution for d <= 2. In addition, we propose two new approximation schemes for regret minimization, both with provable guarantees, one based on coresets and another based on hitting sets. We also carry out extensive experimental evaluation, and show that our schemes compute regret-minimizing sets comparable in size to the greedy algorithm proposed in [VLDB 14] but our schemes are significantly faster and scalable to large data sets.
[ { "version": "v1", "created": "Sun, 5 Feb 2017 19:30:44 GMT" }, { "version": "v2", "created": "Thu, 9 Feb 2017 01:46:20 GMT" } ]
2017-02-10T00:00:00
[ [ "Agarwal", "Pankaj K.", "" ], [ "Kumar", "Nirman", "" ], [ "Sintos", "Stavros", "" ], [ "Suri", "Subhash", "" ] ]
TITLE: Efficient Algorithms for k-Regret Minimizing Sets ABSTRACT: A regret minimizing set Q is a small size representation of a much larger database P so that user queries executed on Q return answers whose scores are not much worse than those on the full dataset. In particular, a k-regret minimizing set has the property that the regret ratio between the score of the top-1 item in Q and the score of the top-k item in P is minimized, where the score of an item is the inner product of the item's attributes with a user's weight (preference) vector. The problem is challenging because we want to find a single representative set Q whose regret ratio is small with respect to all possible user weight vectors. We show that k-regret minimization is NP-Complete for all dimensions d >= 3. This settles an open problem from Chester et al. [VLDB 2014], and resolves the complexity status of the problem for all d: the problem is known to have polynomial-time solution for d <= 2. In addition, we propose two new approximation schemes for regret minimization, both with provable guarantees, one based on coresets and another based on hitting sets. We also carry out extensive experimental evaluation, and show that our schemes compute regret-minimizing sets comparable in size to the greedy algorithm proposed in [VLDB 14] but our schemes are significantly faster and scalable to large data sets.
label: no_new_dataset
prob: 0.941385
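The record above (1702.01446) minimizes the k-regret ratio of a representative subset Q of a database P, where an item's score is the inner product of its attributes with a user weight vector. A small sketch of how that quantity could be estimated over sampled nonnegative weight vectors (the sampling scheme and normalization follow common usage and are assumptions, not the paper's evaluation protocol):

```python
# Estimate the k-regret ratio of subset Q against full dataset P over random
# preference vectors. Assumes nonnegative item attributes so scores stay positive.
import numpy as np

def k_regret_ratio(P, Q, k, n_weights=10_000, seed=0):
    """P: (N, d) items, Q: (m, d) subset, k: rank in P to compare against."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_weights, P.shape[1]))       # nonnegative preference vectors
    scores_P = P @ W.T                            # (N, n_weights)
    scores_Q = Q @ W.T                            # (m, n_weights)
    topk_P = np.sort(scores_P, axis=0)[-k, :]     # k-th largest score in P per weight vector
    top1_Q = scores_Q.max(axis=0)                 # best score achievable within Q
    ratios = np.maximum(0.0, (topk_P - top1_Q) / topk_P)
    return ratios.max()                           # worst case over the sampled weights
```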
1702.02363
Bahadir Sahin
H. Bahadir Sahin, Caglar Tirkaz, Eray Yildiz, Mustafa Tolga Eren, Ozan Sonmez
Automatically Annotated Turkish Corpus for Named Entity Recognition and Text Categorization using Large-Scale Gazetteers
comments: 10 pages, 1 figure, white paper, update: added correct download link for dataset
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types: person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).
[ { "version": "v1", "created": "Wed, 8 Feb 2017 10:45:23 GMT" }, { "version": "v2", "created": "Thu, 9 Feb 2017 08:35:12 GMT" } ]
2017-02-10T00:00:00
[ [ "Sahin", "H. Bahadir", "" ], [ "Tirkaz", "Caglar", "" ], [ "Yildiz", "Eray", "" ], [ "Eren", "Mustafa Tolga", "" ], [ "Sonmez", "Ozan", "" ] ]
TITLE: Automatically Annotated Turkish Corpus for Named Entity Recognition and Text Categorization using Large-Scale Gazetteers ABSTRACT: The Turkish Wikipedia Named-Entity Recognition and Text Categorization (TWNERTC) dataset is a collection of automatically categorized and annotated sentences obtained from Wikipedia. We constructed large-scale gazetteers by using a graph crawler algorithm to extract relevant entity and domain information from a semantic knowledge base, Freebase. The constructed gazetteers contain approximately 300K entities with thousands of fine-grained entity types under 77 different domains. Since automated processes are prone to ambiguity, we also introduce two new content specific noise reduction methodologies. Moreover, we map fine-grained entity types to the equivalent four coarse-grained types: person, loc, org, misc. Eventually, we construct six different dataset versions and evaluate the quality of annotations by comparing ground truths from human annotators. We make these datasets publicly available to support studies on Turkish named-entity recognition (NER) and text categorization (TC).
label: new_dataset
prob: 0.946892
1702.02640
Zhe Gan
Zhe Gan, P. D. Singh, Ameet Joshi, Xiaodong He, Jianshu Chen, Jianfeng Gao, Li Deng
Character-level Deep Conflation for Business Data Analytics
comments: Accepted for publication, at ICASSP 2017
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Connecting different text attributes associated with the same entity (conflation) is important in business data analytics since it could help merge two different tables in a database to provide a more comprehensive profile of an entity. However, the conflation task is challenging because two text strings that describe the same entity could be quite different from each other for reasons such as misspelling. It is therefore critical to develop a conflation model that is able to truly understand the semantic meaning of the strings and match them at the semantic level. To this end, we develop a character-level deep conflation model that encodes the input text strings from character level into finite dimension feature vectors, which are then used to compute the cosine similarity between the text strings. The model is trained in an end-to-end manner using back propagation and stochastic gradient descent to maximize the likelihood of the correct association. Specifically, we propose two variants of the deep conflation model, based on long-short-term memory (LSTM) recurrent neural network (RNN) and convolutional neural network (CNN), respectively. Both models perform well on a real-world business analytics dataset and significantly outperform the baseline bag-of-character (BoC) model.
[ { "version": "v1", "created": "Wed, 8 Feb 2017 22:24:14 GMT" } ]
2017-02-10T00:00:00
[ [ "Gan", "Zhe", "" ], [ "Singh", "P. D.", "" ], [ "Joshi", "Ameet", "" ], [ "He", "Xiaodong", "" ], [ "Chen", "Jianshu", "" ], [ "Gao", "Jianfeng", "" ], [ "Deng", "Li", "" ] ]
TITLE: Character-level Deep Conflation for Business Data Analytics ABSTRACT: Connecting different text attributes associated with the same entity (conflation) is important in business data analytics since it could help merge two different tables in a database to provide a more comprehensive profile of an entity. However, the conflation task is challenging because two text strings that describe the same entity could be quite different from each other for reasons such as misspelling. It is therefore critical to develop a conflation model that is able to truly understand the semantic meaning of the strings and match them at the semantic level. To this end, we develop a character-level deep conflation model that encodes the input text strings from character level into finite dimension feature vectors, which are then used to compute the cosine similarity between the text strings. The model is trained in an end-to-end manner using back propagation and stochastic gradient descent to maximize the likelihood of the correct association. Specifically, we propose two variants of the deep conflation model, based on long-short-term memory (LSTM) recurrent neural network (RNN) and convolutional neural network (CNN), respectively. Both models perform well on a real-world business analytics dataset and significantly outperform the baseline bag-of-character (BoC) model.
label: no_new_dataset
prob: 0.952309
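The conflation record above (1702.02640) compares learned character-level encodings of two strings by cosine similarity and reports a bag-of-character (BoC) baseline. A tiny sketch of that baseline only (the learned LSTM/CNN encoders themselves are not reproduced here):

```python
# Bag-of-character baseline: represent each string by its character counts and
# score a candidate match by cosine similarity between the count vectors.
from collections import Counter
import math

def boc_vector(s):
    return Counter(s.lower())

def cosine(a, b):
    dot = sum(a[ch] * b[ch] for ch in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

print(cosine(boc_vector("Acme Corp."), boc_vector("ACME Corporation")))
```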
1702.02661
U. N. Niranjan
U.N. Niranjan, Arun Rajkumar
Inductive Pairwise Ranking: Going Beyond the n log(n) Barrier
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.IT math.IT stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of ranking a set of items from nonactively chosen pairwise preferences where each item has feature information with it. We propose and characterize a very broad class of preference matrices giving rise to the Feature Low Rank (FLR) model, which subsumes several models ranging from the classic Bradley-Terry-Luce (BTL) (Bradley and Terry 1952) and Thurstone (Thurstone 1927) models to the recently proposed blade-chest (Chen and Joachims 2016) and generic low-rank preference (Rajkumar and Agarwal 2016) models. We use the technique of matrix completion in the presence of side information to develop the Inductive Pairwise Ranking (IPR) algorithm that provably learns a good ranking under the FLR model, in a sample-efficient manner. In practice, through systematic synthetic simulations, we confirm our theoretical findings regarding improvements in the sample complexity due to the use of feature information. Moreover, on popular real-world preference learning datasets, with as little as 10% sampling of the pairwise comparisons, our method recovers a good ranking.
[ { "version": "v1", "created": "Thu, 9 Feb 2017 00:17:39 GMT" } ]
2017-02-10T00:00:00
[ [ "Niranjan", "U. N.", "" ], [ "Rajkumar", "Arun", "" ] ]
TITLE: Inductive Pairwise Ranking: Going Beyond the n log(n) Barrier ABSTRACT: We study the problem of ranking a set of items from nonactively chosen pairwise preferences where each item has feature information with it. We propose and characterize a very broad class of preference matrices giving rise to the Feature Low Rank (FLR) model, which subsumes several models ranging from the classic Bradley-Terry-Luce (BTL) (Bradley and Terry 1952) and Thurstone (Thurstone 1927) models to the recently proposed blade-chest (Chen and Joachims 2016) and generic low-rank preference (Rajkumar and Agarwal 2016) models. We use the technique of matrix completion in the presence of side information to develop the Inductive Pairwise Ranking (IPR) algorithm that provably learns a good ranking under the FLR model, in a sample-efficient manner. In practice, through systematic synthetic simulations, we confirm our theoretical findings regarding improvements in the sample complexity due to the use of feature information. Moreover, on popular real-world preference learning datasets, with as little as 10% sampling of the pairwise comparisons, our method recovers a good ranking.
label: no_new_dataset
prob: 0.94699
1702.02676
Arman Afrasiyabi
Arman Afrasiyabi, Ozan Yildiz, Baris Nasir, Fatos T. Yarman Vural and A. Enis Cetin
Energy Saving Additive Neural Network
comments: 8 pages (double column), 2 figures, 1 table
journal-ref: null
doi: null
report-no: null
categories: cs.NE cs.AI cs.LG
license: http://creativecommons.org/licenses/by/4.0/
In recent years, machine learning techniques based on neural networks for mobile computing become increasingly popular. Classical multi-layer neural networks require matrix multiplications at each stage. Multiplication operation is not an energy efficient operation and consequently it drains the battery of the mobile device. In this paper, we propose a new energy efficient neural network with the universal approximation property over space of Lebesgue integrable functions. This network, called, additive neural network, is very suitable for mobile computing. The neural structure is based on a novel vector product definition, called ef-operator, that permits a multiplier-free implementation. In ef-operation, the "product" of two real numbers is defined as the sum of their absolute values, with the sign determined by the sign of the product of the numbers. This "product" is used to construct a vector product in $R^N$. The vector product induces the $l_1$ norm. The proposed additive neural network successfully solves the XOR problem. The experiments on MNIST dataset show that the classification performances of the proposed additive neural networks are very similar to the corresponding multi-layer perceptron and convolutional neural networks (LeNet).
[ { "version": "v1", "created": "Thu, 9 Feb 2017 02:02:27 GMT" } ]
2017-02-10T00:00:00
[ [ "Afrasiyabi", "Arman", "" ], [ "Yildiz", "Ozan", "" ], [ "Nasir", "Baris", "" ], [ "Vural", "Fatos T. Yarman", "" ], [ "Cetin", "A. Enis", "" ] ]
TITLE: Energy Saving Additive Neural Network ABSTRACT: In recent years, machine learning techniques based on neural networks for mobile computing become increasingly popular. Classical multi-layer neural networks require matrix multiplications at each stage. Multiplication operation is not an energy efficient operation and consequently it drains the battery of the mobile device. In this paper, we propose a new energy efficient neural network with the universal approximation property over space of Lebesgue integrable functions. This network, called, additive neural network, is very suitable for mobile computing. The neural structure is based on a novel vector product definition, called ef-operator, that permits a multiplier-free implementation. In ef-operation, the "product" of two real numbers is defined as the sum of their absolute values, with the sign determined by the sign of the product of the numbers. This "product" is used to construct a vector product in $R^N$. The vector product induces the $l_1$ norm. The proposed additive neural network successfully solves the XOR problem. The experiments on MNIST dataset show that the classification performances of the proposed additive neural networks are very similar to the corresponding multi-layer perceptron and convolutional neural networks (LeNet).
label: no_new_dataset
prob: 0.949576
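The additive-network record above (1702.02676) defines the ef-operator exactly: the "product" of two reals is the sum of their absolute values, signed by the sign of their ordinary product, and summing it over coordinates gives a multiplier-free vector operation that induces the l1 norm. A direct transcription of that definition:

```python
# ef-operator as stated in the abstract; note sign(a*b) could equally be computed
# as sign(a)*sign(b), so no true multiplication of magnitudes is needed.
import numpy as np

def ef_scalar(a, b):
    return np.sign(a * b) * (np.abs(a) + np.abs(b))

def ef_dot(x, w):
    """Multiplier-free analogue of the inner product <x, w>."""
    return np.sum(np.sign(x * w) * (np.abs(x) + np.abs(w)))

x = np.array([1.0, -2.0, 0.5])
print(ef_dot(x, x), 2 * np.abs(x).sum())   # ef(x, x) equals 2 * ||x||_1, here 7.0
```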
1702.02743
Juan Felipe Perez-Juste Abascal
Juan F P J Abascal (CREATIS), Manuel Desco, Juan Parra-Robles
Incorporation of prior knowledge of the signal behavior into the reconstruction to accelerate the acquisition of MR diffusion data
comments: null
journal-ref: null
doi: null
report-no: null
categories: physics.med-ph cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diffusion MRI measurements using hyperpolarized gases are generally acquired during patient breath hold, which yields a compromise between achievable image resolution, lung coverage and number of b-values. In this work, we propose a novel method that accelerates the acquisition of MR diffusion data by undersampling in both spatial and b-value dimensions, thanks to incorporating knowledge about the signal decay into the reconstruction (SIDER). SIDER is compared to total variation (TV) reconstruction by assessing their effect on both the recovery of ventilation images and estimated mean alveolar dimensions (MAD). Both methods are assessed by retrospectively undersampling diffusion datasets of normal volunteers and COPD patients (n=8) for acceleration factors between x2 and x10. TV led to large errors and artefacts for acceleration factors equal or larger than x5. SIDER improved TV, presenting lower errors and histograms of MAD closer to those obtained from fully sampled data for accelerations factors up to x10. SIDER preserved image quality at all acceleration factors but images were slightly smoothed and some details were lost at x10. In conclusion, we have developed and validated a novel compressed sensing method for lung MRI imaging and achieved high acceleration factors, which can be used to increase the amount of data acquired during a breath-hold. This methodology is expected to improve the accuracy of estimated lung microstructure dimensions and widen the possibilities of studying lung diseases with MRI.
[ { "version": "v1", "created": "Thu, 9 Feb 2017 08:26:53 GMT" } ]
2017-02-10T00:00:00
[ [ "Abascal", "Juan F P J", "", "CREATIS" ], [ "Desco", "Manuel", "" ], [ "Parra-Robles", "Juan", "" ] ]
TITLE: Incorporation of prior knowledge of the signal behavior into the reconstruction to accelerate the acquisition of MR diffusion data ABSTRACT: Diffusion MRI measurements using hyperpolarized gases are generally acquired during patient breath hold, which yields a compromise between achievable image resolution, lung coverage and number of b-values. In this work, we propose a novel method that accelerates the acquisition of MR diffusion data by undersampling in both spatial and b-value dimensions and incorporating knowledge about the signal decay into the reconstruction (SIDER). SIDER is compared to total variation (TV) reconstruction by assessing their effect on both the recovery of ventilation images and estimated mean alveolar dimensions (MAD). Both methods are assessed by retrospectively undersampling diffusion datasets of normal volunteers and COPD patients (n=8) for acceleration factors between x2 and x10. TV led to large errors and artefacts for acceleration factors equal to or larger than x5. SIDER improved on TV, presenting lower errors and histograms of MAD closer to those obtained from fully sampled data for acceleration factors up to x10. SIDER preserved image quality at all acceleration factors but images were slightly smoothed and some details were lost at x10. In conclusion, we have developed and validated a novel compressed sensing method for lung MR imaging and achieved high acceleration factors, which can be used to increase the amount of data acquired during a breath-hold. This methodology is expected to improve the accuracy of estimated lung microstructure dimensions and widen the possibilities of studying lung diseases with MRI.
no_new_dataset
0.95452
1702.02805
Qi Guo
Qi Guo, Ce Zhu, Zhiqiang Xia, Zhengtao Wang, Yipeng Liu
Attribute-controlled face photo synthesis from simple line drawing
5 pages, 5 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face photo synthesis from simple line drawing is a one-to-many task, as a simple line drawing merely contains the contour of a human face. Previous exemplar-based methods are over-dependent on the datasets and struggle to generalize to complicated natural scenes. Recently, several works have utilized deep neural networks to increase generalization, but they still offer limited controllability to the users. In this paper, we propose a deep generative model to synthesize a face photo from a simple line drawing controlled by face attributes such as hair color and complexion. In order to maximize the controllability of face attributes, an attribute-disentangled variational auto-encoder (AD-VAE) is first introduced to learn latent representations disentangled with respect to specified attributes. Then we conduct photo synthesis from simple line drawings based on the AD-VAE. Experiments show that our model can effectively disentangle attribute variations from other variations of face photos and synthesize detailed photorealistic face images with desired attributes. Regarding background and illumination as the style and the human face as the content, we can also synthesize face photos with the target style of a style photo.
[ { "version": "v1", "created": "Thu, 9 Feb 2017 12:21:36 GMT" } ]
2017-02-10T00:00:00
[ [ "Guo", "Qi", "" ], [ "Zhu", "Ce", "" ], [ "Xia", "Zhiqiang", "" ], [ "Wang", "Zhengtao", "" ], [ "Liu", "Yipeng", "" ] ]
TITLE: Attribute-controlled face photo synthesis from simple line drawing ABSTRACT: Face photo synthesis from simple line drawing is a one-to-many task, as a simple line drawing merely contains the contour of a human face. Previous exemplar-based methods are over-dependent on the datasets and struggle to generalize to complicated natural scenes. Recently, several works have utilized deep neural networks to increase generalization, but they still offer limited controllability to the users. In this paper, we propose a deep generative model to synthesize a face photo from a simple line drawing controlled by face attributes such as hair color and complexion. In order to maximize the controllability of face attributes, an attribute-disentangled variational auto-encoder (AD-VAE) is first introduced to learn latent representations disentangled with respect to specified attributes. Then we conduct photo synthesis from simple line drawings based on the AD-VAE. Experiments show that our model can effectively disentangle attribute variations from other variations of face photos and synthesize detailed photorealistic face images with desired attributes. Regarding background and illumination as the style and the human face as the content, we can also synthesize face photos with the target style of a style photo.
no_new_dataset
0.946151
1702.02925
Wei Li
Wei Li, Farnaz Abtahi, Zhigang Zhu, Lijun Yin
EAC-Net: A Region-based Deep Enhancing and Cropping Approach for Facial Action Unit Detection
The paper is accepted by FG 2017
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
In this paper, we propose a deep learning based approach for facial action unit detection by enhancing and cropping the regions of interest. The approach is implemented by adding two novel nets (layers), the enhancing layers and the cropping layers, to a pretrained CNN model. For the enhancing layers, we design an attention map based on facial landmark features and apply it to a pretrained neural network to conduct enhanced learning (the E-Net). For the cropping layers, we crop facial regions around the detected landmarks and design convolutional layers to learn deeper features for each facial region (the C-Net). We then fuse the E-Net and the C-Net to obtain our Enhancing and Cropping (EAC) Net, which can learn both feature enhancing and region cropping functions. Our approach shows significant improvement in performance compared to state-of-the-art methods applied to the BP4D and DISFA AU datasets.
[ { "version": "v1", "created": "Thu, 9 Feb 2017 18:16:44 GMT" } ]
2017-02-10T00:00:00
[ [ "Li", "Wei", "" ], [ "Abtahi", "Farnaz", "" ], [ "Zhu", "Zhigang", "" ], [ "Yin", "Lijun", "" ] ]
TITLE: EAC-Net: A Region-based Deep Enhancing and Cropping Approach for Facial Action Unit Detection ABSTRACT: In this paper, we propose a deep learning based approach for facial action unit detection by enhancing and cropping the regions of interest. The approach is implemented by adding two novel nets (layers), the enhancing layers and the cropping layers, to a pretrained CNN model. For the enhancing layers, we design an attention map based on facial landmark features and apply it to a pretrained neural network to conduct enhanced learning (the E-Net). For the cropping layers, we crop facial regions around the detected landmarks and design convolutional layers to learn deeper features for each facial region (the C-Net). We then fuse the E-Net and the C-Net to obtain our Enhancing and Cropping (EAC) Net, which can learn both feature enhancing and region cropping functions. Our approach shows significant improvement in performance compared to state-of-the-art methods applied to the BP4D and DISFA AU datasets.
no_new_dataset
0.951142
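As a rough illustration of the enhancing idea in the record above, the sketch below builds a landmark-centred attention map and multiplies it into a CNN feature map. The Gaussian form of the map, its width, and the array shapes are assumptions for illustration; the paper's actual attention design may differ.

```python
import numpy as np

def landmark_attention_map(landmarks, height, width, sigma=8.0):
    """Attention map with a Gaussian bump around each facial landmark.
    landmarks: iterable of (row, col) coordinates in feature-map space."""
    ys, xs = np.mgrid[0:height, 0:width]
    attn = np.zeros((height, width), dtype=np.float32)
    for (r, c) in landmarks:
        bump = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
        attn = np.maximum(attn, bump)
    return attn

def enhance_features(feature_map, landmarks):
    """Scale each channel of a (C, H, W) feature map by the attention map,
    mimicking the idea of emphasising AU-relevant regions."""
    c, h, w = feature_map.shape
    attn = landmark_attention_map(landmarks, h, w)
    return feature_map * (1.0 + attn)   # keep the original signal, boost attended regions

# Example: a random 64-channel feature map and three hypothetical landmark positions.
features = np.random.rand(64, 56, 56).astype(np.float32)
enhanced = enhance_features(features, [(20, 18), (20, 38), (40, 28)])
```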
1503.00173
Jonathan Mei
Jonathan Mei and Jos\'e M. F. Moura
Signal Processing on Graphs: Causal Modeling of Unstructured Data
null
IEEE Transactions on Signal Processing, vol. 65, no. 8, pp. 2077-2092, April 15, 2017
10.1109/TSP.2016.2634543
null
cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many applications collect a large number of time series, for example, the financial data of companies quoted in a stock exchange, the health care data of all patients that visit the emergency room of a hospital, or the temperature sequences continuously measured by weather stations across the US. These data are often referred to as unstructured. A first task in analyzing these data is to derive a low-dimensional representation, a graph or discrete manifold, that describes well the interrelations among the time series and their intrarelations across time. This paper presents a computationally tractable algorithm for estimating this graph that structures the data. The resulting graph is directed and weighted, possibly capturing causal relations, not just reciprocal correlations as in many existing approaches in the literature. A convergence analysis is carried out. The algorithm is demonstrated on random graph datasets and real network time series datasets, and its performance is compared to that of related methods. The adjacency matrices estimated with the new method are close to the true graph in the simulated data and consistent with prior physical knowledge in the real dataset tested.
[ { "version": "v1", "created": "Sat, 28 Feb 2015 20:28:05 GMT" }, { "version": "v2", "created": "Thu, 14 Apr 2016 20:58:45 GMT" }, { "version": "v3", "created": "Tue, 13 Sep 2016 13:19:02 GMT" }, { "version": "v4", "created": "Mon, 31 Oct 2016 22:05:33 GMT" }, { "version": "v5", "created": "Wed, 30 Nov 2016 19:12:41 GMT" }, { "version": "v6", "created": "Wed, 8 Feb 2017 15:49:58 GMT" } ]
2017-02-09T00:00:00
[ [ "Mei", "Jonathan", "" ], [ "Moura", "José M. F.", "" ] ]
TITLE: Signal Processing on Graphs: Causal Modeling of Unstructured Data ABSTRACT: Many applications collect a large number of time series, for example, the financial data of companies quoted in a stock exchange, the health care data of all patients that visit the emergency room of a hospital, or the temperature sequences continuously measured by weather stations across the US. These data are often referred to as unstructured. A first task in analyzing these data is to derive a low-dimensional representation, a graph or discrete manifold, that describes well the interrelations among the time series and their intrarelations across time. This paper presents a computationally tractable algorithm for estimating this graph that structures the data. The resulting graph is directed and weighted, possibly capturing causal relations, not just reciprocal correlations as in many existing approaches in the literature. A convergence analysis is carried out. The algorithm is demonstrated on random graph datasets and real network time series datasets, and its performance is compared to that of related methods. The adjacency matrices estimated with the new method are close to the true graph in the simulated data and consistent with prior physical knowledge in the real dataset tested.
no_new_dataset
0.946843
1611.01839
Eunsol Choi
Eunsol Choi, Daniel Hewlett, Alexandre Lacoste, Illia Polosukhin, Jakob Uszkoreit, Jonathan Berant
Hierarchical Question Answering for Long Documents
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models. While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. Inspired by how people first skim the document, identify relevant parts, and carefully read these parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences and a more expensive RNN for producing the answer from those sentences. We treat sentence selection as a latent variable trained jointly from the answer only using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WikiReading dataset and on a new dataset, while speeding up the model by 3.5x-6.7x.
[ { "version": "v1", "created": "Sun, 6 Nov 2016 20:24:40 GMT" }, { "version": "v2", "created": "Wed, 8 Feb 2017 07:42:34 GMT" } ]
2017-02-09T00:00:00
[ [ "Choi", "Eunsol", "" ], [ "Hewlett", "Daniel", "" ], [ "Lacoste", "Alexandre", "" ], [ "Polosukhin", "Illia", "" ], [ "Uszkoreit", "Jakob", "" ], [ "Berant", "Jonathan", "" ] ]
TITLE: Hierarchical Question Answering for Long Documents ABSTRACT: We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models. While most successful approaches for reading comprehension rely on recurrent neural networks (RNNs), running them over long documents is prohibitively slow because it is difficult to parallelize over sequences. Inspired by how people first skim the document, identify relevant parts, and carefully read these parts to produce an answer, we combine a coarse, fast model for selecting relevant sentences and a more expensive RNN for producing the answer from those sentences. We treat sentence selection as a latent variable trained jointly from the answer only using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WikiReading dataset and on a new dataset, while speeding up the model by 3.5x-6.7x.
new_dataset
0.961714
1612.00729
Sowmya Vajjala
Sowmya Vajjala
Automated assessment of non-native learner essays: Investigating the role of linguistic features
Article accepted for publication at: International Journal of Artificial Intelligence in Education (IJAIED). To appear in early 2017 (journal url: http://www.springer.com/computer/ai/journal/40593)
null
10.1007/s40593-017-0142-3
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic essay scoring (AES) refers to the process of scoring free text responses to given prompts, considering human grader scores as the gold standard. Writing such essays is an essential component of many language and aptitude exams. Hence, AES became an active and established area of research, and there are many proprietary systems used in real life applications today. However, not much is known about which specific linguistic features are useful for prediction and how much of this is consistent across datasets. This article addresses that by exploring the role of various linguistic features in automatic essay scoring using two publicly available datasets of non-native English essays written in test taking scenarios. The linguistic properties are modeled by encoding lexical, syntactic, discourse and error types of learner language in the feature set. Predictive models are then developed using these features on both datasets and the most predictive features are compared. While the results show that the feature set used results in good predictive models with both datasets, the question "what are the most predictive features?" has a different answer for each dataset.
[ { "version": "v1", "created": "Fri, 2 Dec 2016 16:22:49 GMT" } ]
2017-02-09T00:00:00
[ [ "Vajjala", "Sowmya", "" ] ]
TITLE: Automated assessment of non-native learner essays: Investigating the role of linguistic features ABSTRACT: Automatic essay scoring (AES) refers to the process of scoring free text responses to given prompts, considering human grader scores as the gold standard. Writing such essays is an essential component of many language and aptitude exams. Hence, AES became an active and established area of research, and there are many proprietary systems used in real life applications today. However, not much is known about which specific linguistic features are useful for prediction and how much of this is consistent across datasets. This article addresses that by exploring the role of various linguistic features in automatic essay scoring using two publicly available datasets of non-native English essays written in test taking scenarios. The linguistic properties are modeled by encoding lexical, syntactic, discourse and error types of learner language in the feature set. Predictive models are then developed using these features on both datasets and the most predictive features are compared. While the results show that the feature set used results in good predictive models with both datasets, the question "what are the most predictive features?" has a different answer for each dataset.
no_new_dataset
0.939248
1702.02367
Claudio Greco
Claudio Greco, Alessandro Suglia, Pierpaolo Basile, Gaetano Rossiello, Giovanni Semeraro
Iterative Multi-document Neural Attention for Multiple Answer Prediction
Paper accepted and presented at the Deep Understanding and Reasoning: A challenge for Next-generation Intelligent Agents (URANIA) workshop, held in the context of the AI*IA 2016 conference
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
People have information needs of varying complexity, which can be met by an intelligent agent able to answer questions formulated in a proper way, possibly taking user context and preferences into account. In a scenario in which the user profile can be considered as a question, intelligent agents able to answer questions can be used to find the most relevant answers for a given user. In this work we propose a novel model based on Artificial Neural Networks to answer questions with multiple answers by exploiting multiple facts retrieved from a knowledge base. The model is evaluated on the factoid Question Answering and top-n recommendation tasks of the bAbI Movie Dialog dataset. After assessing the performance of the model on both tasks, we try to define the long-term goal of a conversational recommender system able to interact using natural language and to support users in their information-seeking processes in a personalized way.
[ { "version": "v1", "created": "Wed, 8 Feb 2017 10:58:02 GMT" } ]
2017-02-09T00:00:00
[ [ "Greco", "Claudio", "" ], [ "Suglia", "Alessandro", "" ], [ "Basile", "Pierpaolo", "" ], [ "Rossiello", "Gaetano", "" ], [ "Semeraro", "Giovanni", "" ] ]
TITLE: Iterative Multi-document Neural Attention for Multiple Answer Prediction ABSTRACT: People have information needs of varying complexity, which can be met by an intelligent agent able to answer questions formulated in a proper way, possibly taking user context and preferences into account. In a scenario in which the user profile can be considered as a question, intelligent agents able to answer questions can be used to find the most relevant answers for a given user. In this work we propose a novel model based on Artificial Neural Networks to answer questions with multiple answers by exploiting multiple facts retrieved from a knowledge base. The model is evaluated on the factoid Question Answering and top-n recommendation tasks of the bAbI Movie Dialog dataset. After assessing the performance of the model on both tasks, we try to define the long-term goal of a conversational recommender system able to interact using natural language and to support users in their information seeking processes in a personalized way.
no_new_dataset
0.946597
1702.02508
Corneliu Arsene Dr
Corneliu Arsene, Peter Pormann, William Sellers, Siam Bhayro
Computational Techniques in Multispectral Image Processing: Application to the Syriac Galen Palimpsest
29 February - 2 March 2016, Second International Conference on Natural Sciences and Technology in Manuscript Analysis, Centre for the study of Manuscript Cultures, Hamburg, Germany
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multispectral and hyperspectral image analysis has experienced much development in the last decade. The application of these methods to palimpsests has produced significant results, enabling researchers to recover texts that would be otherwise lost under the visible overtext, by improving the contrast between the undertext and the overtext. In this paper we explore an extended number of multispectral and hyperspectral image analysis methods, consisting of supervised and unsupervised dimensionality reduction techniques, on a part of the Syriac Galen Palimpsest dataset (www.digitalgalen.net). Of this extended set of methods, eight gave good results. Three were supervised methods: Generalized Discriminant Analysis (GDA), Linear Discriminant Analysis (LDA), and Neighborhood Component Analysis (NCA); the other five were unsupervised methods (but still used in a supervised way): Gaussian Process Latent Variable Model (GPLVM), Isomap, Landmark Isomap, Principal Component Analysis (PCA), and Probabilistic Principal Component Analysis (PPCA). The relative success of these methods was determined visually, using color pictures, on the basis of whether the undertext was distinguishable from the overtext, resulting in the following ranking of the methods: LDA, NCA, GDA, Isomap, Landmark Isomap, PPCA, PCA, and GPLVM. These results were compared with those obtained using the Canonical Variates Analysis (CVA) method on the same dataset, which showed remarkable accuracy (LDA is a particular case of CVA where the objects are classified into two classes).
[ { "version": "v1", "created": "Tue, 31 Jan 2017 13:03:20 GMT" } ]
2017-02-09T00:00:00
[ [ "Arsene", "Corneliu", "" ], [ "Pormann", "Peter", "" ], [ "Sellers", "William", "" ], [ "Bhayro", "Siam", "" ] ]
TITLE: Computational Techniques in Multispectral Image Processing: Application to the Syriac Galen Palimpsest ABSTRACT: Multispectral and hyperspectral image analysis has experienced much development in the last decade. The application of these methods to palimpsests has produced significant results, enabling researchers to recover texts that would be otherwise lost under the visible overtext, by improving the contrast between the undertext and the overtext. In this paper we explore an extended number of multispectral and hyperspectral image analysis methods, consisting of supervised and unsupervised dimensionality reduction techniques, on a part of the Syriac Galen Palimpsest dataset (www.digitalgalen.net). Of this extended set of methods, eight gave good results. Three were supervised methods: Generalized Discriminant Analysis (GDA), Linear Discriminant Analysis (LDA), and Neighborhood Component Analysis (NCA); the other five were unsupervised methods (but still used in a supervised way): Gaussian Process Latent Variable Model (GPLVM), Isomap, Landmark Isomap, Principal Component Analysis (PCA), and Probabilistic Principal Component Analysis (PPCA). The relative success of these methods was determined visually, using color pictures, on the basis of whether the undertext was distinguishable from the overtext, resulting in the following ranking of the methods: LDA, NCA, GDA, Isomap, Landmark Isomap, PPCA, PCA, and GPLVM. These results were compared with those obtained using the Canonical Variates Analysis (CVA) method on the same dataset, which showed remarkable accuracy (LDA is a particular case of CVA where the objects are classified into two classes).
no_new_dataset
0.9462
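For readers who want to try the simpler of the techniques listed in the record above, the snippet below applies PCA and LDA to per-pixel multispectral vectors with scikit-learn. The band count, label source, and reshaping are assumptions for illustration; they do not reproduce the palimpsest pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical multispectral cube: H x W pixels, B spectral bands.
H, W, B = 120, 160, 23
cube = np.random.rand(H, W, B)
pixels = cube.reshape(-1, B)                 # one spectral vector per pixel

# Unsupervised: project each pixel onto the top principal components.
pca_scores = PCA(n_components=3).fit_transform(pixels)
pca_image = pca_scores.reshape(H, W, 3)      # view components as a false-colour image

# Supervised: LDA needs class labels, e.g. overtext / undertext / parchment
# marked on a few training pixels (random labels stand in for them here).
labels = np.random.randint(0, 3, size=pixels.shape[0])
lda_scores = LinearDiscriminantAnalysis(n_components=2).fit_transform(pixels, labels)
lda_image = lda_scores.reshape(H, W, 2)
```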
1702.02512
Yi Zhou
Yi Zhou, Laurent Kneip and Hongdong Li
Semi-Dense Visual Odometry for RGB-D Cameras Using Approximate Nearest Neighbour Fields
ICRA 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a robust and efficient semi-dense visual odometry solution for RGB-D cameras. The core of our method is a 2D-3D ICP pipeline which estimates the pose of the sensor by registering the projection of a 3D semi-dense map of the reference frame with the 2D semi-dense region extracted in the current frame. The processing is sped up by efficiently implemented approximate nearest neighbour fields under the Euclidean distance criterion, which permit the use of compact Gauss-Newton updates in the optimization. The registration is formulated as a maximum a posteriori problem to deal with outliers and sensor noise, and consequently the equivalent weighted least squares problem is solved by the iteratively reweighted least squares method. A variety of robust weight functions are tested and the optimum is determined based on the characteristics of the sensor model. Extensive evaluation on publicly available RGB-D datasets shows that the proposed method predominantly outperforms existing state-of-the-art methods.
[ { "version": "v1", "created": "Mon, 6 Feb 2017 00:12:37 GMT" } ]
2017-02-09T00:00:00
[ [ "Zhou", "Yi", "" ], [ "Kneip", "Laurent", "" ], [ "Li", "Hongdong", "" ] ]
TITLE: Semi-Dense Visual Odometry for RGB-D Cameras Using Approximate Nearest Neighbour Fields ABSTRACT: This paper presents a robust and efficient semi-dense visual odometry solution for RGB-D cameras. The core of our method is a 2D-3D ICP pipeline which estimates the pose of the sensor by registering the projection of a 3D semi-dense map of the reference frame with the 2D semi-dense region extracted in the current frame. The processing is sped up by efficiently implemented approximate nearest neighbour fields under the Euclidean distance criterion, which permit the use of compact Gauss-Newton updates in the optimization. The registration is formulated as a maximum a posteriori problem to deal with outliers and sensor noise, and consequently the equivalent weighted least squares problem is solved by the iteratively reweighted least squares method. A variety of robust weight functions are tested and the optimum is determined based on the characteristics of the sensor model. Extensive evaluation on publicly available RGB-D datasets shows that the proposed method predominantly outperforms existing state-of-the-art methods.
no_new_dataset
0.944228
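The iteratively reweighted least squares step mentioned in the record above can be sketched generically. The snippet below solves a robust linearised least squares problem with Huber weights; the residual model, Jacobian, and threshold are assumptions for illustration and stand in for the actual 2D-3D ICP formulation.

```python
import numpy as np

def huber_weights(residuals, delta=1.0):
    """Huber influence weights: 1 inside the threshold, delta/|r| outside."""
    r = np.abs(residuals)
    return np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))

def irls(J, r0, x0, n_iters=10, delta=1.0):
    """Iteratively reweighted least squares for the linearised problem
    r(x) ~ r0 + J @ (x - x0); returns the robustified estimate of x."""
    x = x0.copy()
    for _ in range(n_iters):
        r = r0 + J @ (x - x0)
        w = huber_weights(r, delta)
        # Weighted normal equations: (J^T W J) dx = -J^T W r
        JtW = J.T * w
        dx = np.linalg.solve(JtW @ J + 1e-9 * np.eye(J.shape[1]), -JtW @ r)
        x = x + dx
    return x

# Toy example: a 6-parameter vector fitted to noisy residuals with gross outliers.
rng = np.random.default_rng(0)
J = rng.normal(size=(200, 6))
x_true = rng.normal(size=6)
r0 = -J @ x_true + 0.01 * rng.normal(size=200)
r0[:10] += 5.0                      # outliers that plain least squares would chase
x_est = irls(J, r0, x0=np.zeros(6))
```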
1702.02537
Olasimbo Ayodeji Arigbabu
Olasimbo Ayodeji Arigbabu, Sharifah Mumtazah Syed Ahmad, Wan Azizun Wan Adnan, Salman Yussof, Saif Mahmood
Soft Biometrics: Gender Recognition from Unconstrained Face Images using Local Feature Descriptor
null
Journal of Information and Communication Technology (JICT), 2015
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gender recognition from unconstrained face images is a challenging task due to the high degree of misalignment, pose, expression, and illumination variation. In previous works, the recognition of gender from unconstrained face images is approached by utilizing image alignment, exploiting multiple samples per individual to improve the learning ability of the classifier, or learning gender based on prior knowledge about pose and demographic distributions of the dataset. However, image alignment increases the complexity and time of computation, while the use of multiple samples or having prior knowledge about the data distribution is unrealistic in practical applications. This paper presents an approach for gender recognition from unconstrained face images. Our technique exploits the robustness of a local feature descriptor to photometric variations to extract a shape description of the 2D face image using a single sample image per individual. The results obtained from experiments on the Labeled Faces in the Wild (LFW) dataset demonstrate the effectiveness of the proposed method. The essence of this study is to investigate the most suitable functions and parameter settings for recognizing gender from unconstrained face images.
[ { "version": "v1", "created": "Wed, 8 Feb 2017 17:34:53 GMT" } ]
2017-02-09T00:00:00
[ [ "Arigbabu", "Olasimbo Ayodeji", "" ], [ "Ahmad", "Sharifah Mumtazah Syed", "" ], [ "Adnan", "Wan Azizun Wan", "" ], [ "Yussof", "Salman", "" ], [ "Mahmood", "Saif", "" ] ]
TITLE: Soft Biometrics: Gender Recognition from Unconstrained Face Images using Local Feature Descriptor ABSTRACT: Gender recognition from unconstrained face images is a challenging task due to the high degree of misalignment, pose, expression, and illumination variation. In previous works, the recognition of gender from unconstrained face images is approached by utilizing image alignment, exploiting multiple samples per individual to improve the learning ability of the classifier, or learning gender based on prior knowledge about pose and demographic distributions of the dataset. However, image alignment increases the complexity and time of computation, while the use of multiple samples or having prior knowledge about the data distribution is unrealistic in practical applications. This paper presents an approach for gender recognition from unconstrained face images. Our technique exploits the robustness of a local feature descriptor to photometric variations to extract a shape description of the 2D face image using a single sample image per individual. The results obtained from experiments on the Labeled Faces in the Wild (LFW) dataset demonstrate the effectiveness of the proposed method. The essence of this study is to investigate the most suitable functions and parameter settings for recognizing gender from unconstrained face images.
no_new_dataset
0.95469
1405.6623
Michael May
Andrew F. Magee and Michael R. May and Brian R. Moore
The Dawn of Open Access to Phylogenetic Data
null
null
10.1371/journal.pone.0110268
null
q-bio.PE cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scientific enterprise depends critically on the preservation of and open access to published data. This basic tenet applies acutely to phylogenies (estimates of evolutionary relationships among species). Increasingly, phylogenies are estimated from increasingly large, genome-scale datasets using increasingly complex statistical methods that require increasing levels of expertise and computational investment. Moreover, the resulting phylogenetic data provide an explicit historical perspective that critically informs research in a vast and growing number of scientific disciplines. One such use is the study of changes in rates of lineage diversification (speciation - extinction) through time. As part of a meta-analysis in this area, we sought to collect phylogenetic data (comprising nucleotide sequence alignment and tree files) from 217 studies published in 46 journals over a 13-year period. We document our attempts to procure those data (from online archives and by direct request to corresponding authors), and report results of analyses (using Bayesian logistic regression) to assess the impact of various factors on the success of our efforts. Overall, complete phylogenetic data for ~60% of these studies are effectively lost to science. Our study indicates that phylogenetic data are more likely to be deposited in online archives and/or shared upon request when: (1) the publishing journal has a strong data-sharing policy; (2) the publishing journal has a higher impact factor, and; (3) the data are requested from faculty rather than students. Although the situation appears dire, our analyses suggest that it is far from hopeless: recent initiatives by the scientific community -- including policy changes by journals and funding agencies -- are improving the state of affairs.
[ { "version": "v1", "created": "Fri, 23 May 2014 00:20:42 GMT" } ]
2017-02-08T00:00:00
[ [ "Magee", "Andrew F.", "" ], [ "May", "Michael R.", "" ], [ "Moore", "Brian R.", "" ] ]
TITLE: The Dawn of Open Access to Phylogenetic Data ABSTRACT: The scientific enterprise depends critically on the preservation of and open access to published data. This basic tenet applies acutely to phylogenies (estimates of evolutionary relationships among species). Increasingly, phylogenies are estimated from increasingly large, genome-scale datasets using increasingly complex statistical methods that require increasing levels of expertise and computational investment. Moreover, the resulting phylogenetic data provide an explicit historical perspective that critically informs research in a vast and growing number of scientific disciplines. One such use is the study of changes in rates of lineage diversification (speciation - extinction) through time. As part of a meta-analysis in this area, we sought to collect phylogenetic data (comprising nucleotide sequence alignment and tree files) from 217 studies published in 46 journals over a 13-year period. We document our attempts to procure those data (from online archives and by direct request to corresponding authors), and report results of analyses (using Bayesian logistic regression) to assess the impact of various factors on the success of our efforts. Overall, complete phylogenetic data for ~60% of these studies are effectively lost to science. Our study indicates that phylogenetic data are more likely to be deposited in online archives and/or shared upon request when: (1) the publishing journal has a strong data-sharing policy; (2) the publishing journal has a higher impact factor, and; (3) the data are requested from faculty rather than students. Although the situation appears dire, our analyses suggest that it is far from hopeless: recent initiatives by the scientific community -- including policy changes by journals and funding agencies -- are improving the state of affairs.
no_new_dataset
0.947088
1506.04422
Rafael Pinto
Rafael Pinto and Paulo Engel
A Fast Incremental Gaussian Mixture Model
10 pages, no figures, draft submission to Plos One
null
10.1371/journal.pone.0139931
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work builds upon previous efforts in online incremental learning, namely the Incremental Gaussian Mixture Network (IGMN). The IGMN is capable of learning from data streams in a single pass by improving its model after analyzing each data point and discarding it thereafter. Nevertheless, it suffers from scalability issues, due to its asymptotic time complexity of $\operatorname{O}\bigl(NKD^3\bigr)$ for $N$ data points, $K$ Gaussian components and $D$ dimensions, rendering it inadequate for high-dimensional data. In this paper, we manage to reduce this complexity to $\operatorname{O}\bigl(NKD^2\bigr)$ by deriving formulas for working directly with precision matrices instead of covariance matrices. The final result is a much faster and more scalable algorithm which can be applied to high-dimensional tasks. This is confirmed by applying the modified algorithm to high-dimensional classification datasets.
[ { "version": "v1", "created": "Sun, 14 Jun 2015 17:02:49 GMT" }, { "version": "v2", "created": "Thu, 18 Jun 2015 17:04:01 GMT" } ]
2017-02-08T00:00:00
[ [ "Pinto", "Rafael", "" ], [ "Engel", "Paulo", "" ] ]
TITLE: A Fast Incremental Gaussian Mixture Model ABSTRACT: This work builds upon previous efforts in online incremental learning, namely the Incremental Gaussian Mixture Network (IGMN). The IGMN is capable of learning from data streams in a single pass by improving its model after analyzing each data point and discarding it thereafter. Nevertheless, it suffers from scalability issues, due to its asymptotic time complexity of $\operatorname{O}\bigl(NKD^3\bigr)$ for $N$ data points, $K$ Gaussian components and $D$ dimensions, rendering it inadequate for high-dimensional data. In this paper, we manage to reduce this complexity to $\operatorname{O}\bigl(NKD^2\bigr)$ by deriving formulas for working directly with precision matrices instead of covariance matrices. The final result is a much faster and more scalable algorithm which can be applied to high-dimensional tasks. This is confirmed by applying the modified algorithm to high-dimensional classification datasets.
no_new_dataset
0.94366
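The key trick in the record above, maintaining precision matrices instead of covariances, boils down to rank-one updates. The sketch below shows one such update via the Sherman-Morrison identity, with the IGMN-specific learning-rate bookkeeping omitted; the variable names and the exact weighting are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

def rank_one_precision_update(P, mu, x, w):
    """Update the precision matrix P of a Gaussian component after absorbing
    point x with responsibility-derived weight w (0 < w < 1), without ever
    forming or inverting the covariance matrix (O(D^2) per component).

    Uses Sherman-Morrison: if C_new = (1 - w) * C + w * d d^T with d = x - mu,
    then P_new = inv(C_new) can be written in terms of P = inv(C) directly.
    """
    d = (x - mu).reshape(-1, 1)
    P_scaled = P / (1.0 - w)                    # inverse of (1 - w) * C
    Pd = P_scaled @ d
    denom = 1.0 + w * (d.T @ Pd).item()
    return P_scaled - (w / denom) * (Pd @ Pd.T)

# Sanity check against the naive covariance-space update.
rng = np.random.default_rng(1)
D, w = 4, 0.1
A = rng.normal(size=(D, D))
C = A @ A.T + np.eye(D)                         # SPD covariance
mu, x = rng.normal(size=D), rng.normal(size=D)
C_new = (1 - w) * C + w * np.outer(x - mu, x - mu)
P_new_direct = np.linalg.inv(C_new)
P_new_fast = rank_one_precision_update(np.linalg.inv(C), mu, x, w)
assert np.allclose(P_new_direct, P_new_fast)
```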
1604.01277
Ramon Fraga Pereira
Ramon Fraga Pereira and Felipe Meneguzzi
Landmark-Based Plan Recognition
Accepted as short paper in the 22nd European Conference on Artificial Intelligence, ECAI 2016
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognition of goals and plans using incomplete evidence from action execution can be done efficiently by using planning techniques. In many applications it is important to recognize goals and plans not only accurately, but also quickly. In this paper, we develop a heuristic approach for recognizing plans based on planning techniques that rely on ordering constraints to filter candidate goals from observations. These ordering constraints are called landmarks in the planning literature, which are facts or actions that cannot be avoided to achieve a goal. We show the applicability of planning landmarks in two settings: first, we use it directly to develop a heuristic-based plan recognition approach; second, we refine an existing planning-based plan recognition approach by pre-filtering its candidate goals. Our empirical evaluation shows that our approach is not only substantially more accurate than the state-of-the-art in all available datasets, it is also an order of magnitude faster.
[ { "version": "v1", "created": "Tue, 5 Apr 2016 14:44:03 GMT" }, { "version": "v2", "created": "Tue, 28 Jun 2016 17:56:47 GMT" }, { "version": "v3", "created": "Tue, 7 Feb 2017 01:15:59 GMT" } ]
2017-02-08T00:00:00
[ [ "Pereira", "Ramon Fraga", "" ], [ "Meneguzzi", "Felipe", "" ] ]
TITLE: Landmark-Based Plan Recognition ABSTRACT: Recognition of goals and plans using incomplete evidence from action execution can be done efficiently by using planning techniques. In many applications it is important to recognize goals and plans not only accurately, but also quickly. In this paper, we develop a heuristic approach for recognizing plans based on planning techniques that rely on ordering constraints to filter candidate goals from observations. These ordering constraints are called landmarks in the planning literature, which are facts or actions that cannot be avoided to achieve a goal. We show the applicability of planning landmarks in two settings: first, we use it directly to develop a heuristic-based plan recognition approach; second, we refine an existing planning-based plan recognition approach by pre-filtering its candidate goals. Our empirical evaluation shows that our approach is not only substantially more accurate than the state-of-the-art in all available datasets, it is also an order of magnitude faster.
no_new_dataset
0.954858
1608.06108
Andrea Cuttone
Andrea Cuttone, Per B{\ae}kgaard, Vedran Sekara, H{\aa}kan Jonsson, Jakob Eg Larsen, Sune Lehmann
SensibleSleep: A Bayesian Model for Learning Sleep Patterns from Smartphone Events
null
null
10.1371/journal.pone.0169901
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a Bayesian model for extracting sleep patterns from smartphone events. Our method is able to identify individuals' daily sleep periods and their evolution over time, and provides an estimation of the probability of sleep and wake transitions. The model is fitted to more than 400 participants from two different datasets, and we verify the results against ground truth from dedicated armband sleep trackers. We show that the model is able to produce reliable sleep estimates with an accuracy of 0.89, both at the individual and at the collective level. Moreover the Bayesian model is able to quantify uncertainty and encode prior knowledge about sleep patterns. Compared with existing smartphone-based systems, our method requires only screen on/off events, and is therefore much less intrusive in terms of privacy and more battery-efficient.
[ { "version": "v1", "created": "Mon, 22 Aug 2016 10:18:56 GMT" } ]
2017-02-08T00:00:00
[ [ "Cuttone", "Andrea", "" ], [ "Bækgaard", "Per", "" ], [ "Sekara", "Vedran", "" ], [ "Jonsson", "Håkan", "" ], [ "Larsen", "Jakob Eg", "" ], [ "Lehmann", "Sune", "" ] ]
TITLE: SensibleSleep: A Bayesian Model for Learning Sleep Patterns from Smartphone Events ABSTRACT: We propose a Bayesian model for extracting sleep patterns from smartphone events. Our method is able to identify individuals' daily sleep periods and their evolution over time, and provides an estimation of the probability of sleep and wake transitions. The model is fitted to more than 400 participants from two different datasets, and we verify the results against ground truth from dedicated armband sleep trackers. We show that the model is able to produce reliable sleep estimates with an accuracy of 0.89, both at the individual and at the collective level. Moreover the Bayesian model is able to quantify uncertainty and encode prior knowledge about sleep patterns. Compared with existing smartphone-based systems, our method requires only screen on/off events, and is therefore much less intrusive in terms of privacy and more battery-efficient.
no_new_dataset
0.946597
1611.02770
Xinyun Chen
Yanpei Liu, Xinyun Chen, Chang Liu, Dawn Song
Delving into Transferable Adversarial Examples and Black-box Attacks
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small-scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large-scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understand transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system.
[ { "version": "v1", "created": "Tue, 8 Nov 2016 23:25:00 GMT" }, { "version": "v2", "created": "Mon, 21 Nov 2016 22:28:51 GMT" }, { "version": "v3", "created": "Tue, 7 Feb 2017 14:24:44 GMT" } ]
2017-02-08T00:00:00
[ [ "Liu", "Yanpei", "" ], [ "Chen", "Xinyun", "" ], [ "Liu", "Chang", "" ], [ "Song", "Dawn", "" ] ]
TITLE: Delving into Transferable Adversarial Examples and Black-box Attacks ABSTRACT: An intriguing property of deep neural networks is the existence of adversarial examples, which can transfer among different architectures. These transferable adversarial examples may severely hinder deep neural network-based applications. Previous works mostly study the transferability using small-scale datasets. In this work, we are the first to conduct an extensive study of the transferability over large models and a large-scale dataset, and we are also the first to study the transferability of targeted adversarial examples with their target labels. We study both non-targeted and targeted adversarial examples, and show that while transferable non-targeted adversarial examples are easy to find, targeted adversarial examples generated using existing approaches almost never transfer with their target labels. Therefore, we propose novel ensemble-based approaches to generating transferable adversarial examples. Using such approaches, we observe a large proportion of targeted adversarial examples that are able to transfer with their target labels for the first time. We also present some geometric studies to help understand transferable adversarial examples. Finally, we show that the adversarial examples generated using ensemble-based approaches can successfully attack Clarifai.com, which is a black-box image classification system.
no_new_dataset
0.940298
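A generic, framework-agnostic sketch of the ensemble idea in the record above is given below: the attack descends a weighted sum of the models' target-class losses with respect to the input. The gradient callables, step size, and iteration count are assumptions for illustration rather than the paper's exact optimization.

```python
import numpy as np

def ensemble_targeted_attack(x, target, grad_fns, weights, eps=8 / 255,
                             step=1 / 255, n_iters=20):
    """Iterative targeted attack against an ensemble of white-box models.

    grad_fns: list of callables g(x, target) returning the gradient of the
              cross-entropy loss w.r.t. the input x for one model.
    weights:  per-model ensemble weights (summing to 1).
    The perturbation is kept inside an L-infinity ball of radius eps.
    """
    x_adv = x.copy()
    for _ in range(n_iters):
        grad = sum(w * g(x_adv, target) for w, g in zip(weights, grad_fns))
        x_adv = x_adv - step * np.sign(grad)        # descend the target-class loss
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)            # keep a valid image
    return x_adv
```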
1611.08481
Harm de Vries
Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville
GuessWhat?! Visual object discovery through multi-modal dialogue
23 pages; CVPR 2017 submission; see https://guesswhat.ai
null
null
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines of the introduced tasks.
[ { "version": "v1", "created": "Wed, 23 Nov 2016 20:56:13 GMT" }, { "version": "v2", "created": "Mon, 6 Feb 2017 12:52:53 GMT" } ]
2017-02-08T00:00:00
[ [ "de Vries", "Harm", "" ], [ "Strub", "Florian", "" ], [ "Chandar", "Sarath", "" ], [ "Pietquin", "Olivier", "" ], [ "Larochelle", "Hugo", "" ], [ "Courville", "Aaron", "" ] ]
TITLE: GuessWhat?! Visual object discovery through multi-modal dialogue ABSTRACT: We introduce GuessWhat?!, a two-player guessing game as a testbed for research on the interplay of computer vision and dialogue systems. The goal of the game is to locate an unknown object in a rich image scene by asking a sequence of questions. Higher-level image understanding, like spatial reasoning and language grounding, is required to solve the proposed task. Our key contribution is the collection of a large-scale dataset consisting of 150K human-played games with a total of 800K visual question-answer pairs on 66K images. We explain our design decisions in collecting the dataset and introduce the oracle and questioner tasks that are associated with the two players of the game. We prototyped deep learning models to establish initial baselines of the introduced tasks.
new_dataset
0.956634
1611.09830
Tong Wang
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman
NewsQA: A Machine Comprehension Dataset
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
[ { "version": "v1", "created": "Tue, 29 Nov 2016 20:38:07 GMT" }, { "version": "v2", "created": "Thu, 22 Dec 2016 18:12:57 GMT" }, { "version": "v3", "created": "Tue, 7 Feb 2017 16:27:59 GMT" } ]
2017-02-08T00:00:00
[ [ "Trischler", "Adam", "" ], [ "Wang", "Tong", "" ], [ "Yuan", "Xingdi", "" ], [ "Harris", "Justin", "" ], [ "Sordoni", "Alessandro", "" ], [ "Bachman", "Philip", "" ], [ "Suleman", "Kaheer", "" ] ]
TITLE: NewsQA: A Machine Comprehension Dataset ABSTRACT: We present NewsQA, a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. We collect this dataset through a four-stage process designed to solicit exploratory questions that require reasoning. A thorough analysis confirms that NewsQA demands abilities beyond simple word matching and recognizing textual entailment. We measure human performance on the dataset and compare it to several strong neural models. The performance gap between humans and machines (0.198 in F1) indicates that significant progress can be made on NewsQA through future research. The dataset is freely available at https://datasets.maluuba.com/NewsQA.
new_dataset
0.962321
1702.01992
John Arevalo
John Arevalo, Thamar Solorio, Manuel Montes-y-G\'omez, Fabio A. Gonz\'alez
Gated Multimodal Units for Information Fusion
null
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a novel model for multimodal learning based on gated neural networks. The Gated Multimodal Unit (GMU) model is intended to be used as an internal unit in a neural network architecture whose purpose is to find an intermediate representation based on a combination of data from different modalities. The GMU learns to decide how modalities influence the activation of the unit using multiplicative gates. It was evaluated on a multilabel scenario for genre classification of movies using the plot and the poster. The GMU improved the macro f-score performance of single-modality approaches and outperformed other fusion strategies, including mixture of experts models. Along with this work, the MM-IMDb dataset is released which, to the best of our knowledge, is the largest publicly available multimodal dataset for genre prediction on movies.
[ { "version": "v1", "created": "Tue, 7 Feb 2017 13:05:19 GMT" } ]
2017-02-08T00:00:00
[ [ "Arevalo", "John", "" ], [ "Solorio", "Thamar", "" ], [ "Montes-y-Gómez", "Manuel", "" ], [ "González", "Fabio A.", "" ] ]
TITLE: Gated Multimodal Units for Information Fusion ABSTRACT: This paper presents a novel model for multimodal learning based on gated neural networks. The Gated Multimodal Unit (GMU) model is intended to be used as an internal unit in a neural network architecture whose purpose is to find an intermediate representation based on a combination of data from different modalities. The GMU learns to decide how modalities influence the activation of the unit using multiplicative gates. It was evaluated on a multilabel scenario for genre classification of movies using the plot and the poster. The GMU improved the macro f-score performance of single-modality approaches and outperformed other fusion strategies, including mixture of experts models. Along with this work, the MM-IMDb dataset is released which, to the best of our knowledge, is the largest publicly available multimodal dataset for genre prediction on movies.
new_dataset
0.95594
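The gating mechanism in the record above lends itself to a compact sketch. Below is a minimal NumPy version of a two-modality gated unit in the spirit of the GMU; the dimensions, initialisation, and exact gate parameterisation are assumptions for illustration, not the released model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedMultimodalUnit:
    """Two-modality gated unit: h = z * h_v + (1 - z) * h_t, where the gate z
    is computed from the concatenated inputs and decides, per dimension,
    how much each modality contributes to the fused representation."""

    def __init__(self, dim_visual, dim_text, dim_out, seed=0):
        rng = np.random.default_rng(seed)
        self.Wv = rng.normal(scale=0.1, size=(dim_out, dim_visual))
        self.Wt = rng.normal(scale=0.1, size=(dim_out, dim_text))
        self.Wz = rng.normal(scale=0.1, size=(dim_out, dim_visual + dim_text))

    def forward(self, xv, xt):
        hv = np.tanh(self.Wv @ xv)                          # visual projection
        ht = np.tanh(self.Wt @ xt)                          # textual projection
        z = sigmoid(self.Wz @ np.concatenate([xv, xt]))     # multiplicative gate
        return z * hv + (1.0 - z) * ht

# Example: fuse a 4096-d poster feature with a 300-d plot embedding.
gmu = GatedMultimodalUnit(dim_visual=4096, dim_text=300, dim_out=512)
fused = gmu.forward(np.random.rand(4096), np.random.rand(300))
```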
1702.02125
Eug\'enio Rodrigues
Eug\'enio Rodrigues and Lu\'isa Dias Pereira and Ad\'elio Rodrigues Gaspar and \'Alvaro Gomes and Manuel Carlos Gameiro da Silva
Estimation of classrooms occupancy using a multi-layer perceptron
6 pages, 2 figures, conference article
null
null
null
cs.NE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a multi-layer perceptron model for estimating the number of occupants in classrooms from sensed indoor environmental data: relative humidity, air temperature, and carbon dioxide concentration. The modelling datasets were collected from two classrooms in the Secondary School of Pombal, Portugal. The number of occupants and occupation periods were obtained from class attendance reports. However, post-class occupancy was unknown, and the developed model is used to reconstruct the classrooms' occupancy by filling in the unreported periods. Different combinations of model structures and environmental variables were tested. The most accurate model had an input vector of 10 variables, consisting of five averaged time intervals of relative humidity and carbon dioxide concentration. The model presented a mean square error of 1.99, a coefficient of determination of 0.96 with a significance of p-value < 0.001, and a mean absolute error of 1 occupant. These results show promising estimation capabilities in uncertain indoor environment conditions.
[ { "version": "v1", "created": "Tue, 7 Feb 2017 18:17:25 GMT" } ]
2017-02-08T00:00:00
[ [ "Rodrigues", "Eugénio", "" ], [ "Pereira", "Luísa Dias", "" ], [ "Gaspar", "Adélio Rodrigues", "" ], [ "Gomes", "Álvaro", "" ], [ "da Silva", "Manuel Carlos Gameiro", "" ] ]
TITLE: Estimation of classrooms occupancy using a multi-layer perceptron ABSTRACT: This paper presents a multi-layer perceptron model for estimating the number of occupants in classrooms from sensed indoor environmental data: relative humidity, air temperature, and carbon dioxide concentration. The modelling datasets were collected from two classrooms in the Secondary School of Pombal, Portugal. The number of occupants and occupation periods were obtained from class attendance reports. However, post-class occupancy was unknown, and the developed model is used to reconstruct the classrooms' occupancy by filling in the unreported periods. Different combinations of model structures and environmental variables were tested. The most accurate model had an input vector of 10 variables, consisting of five averaged time intervals of relative humidity and carbon dioxide concentration. The model presented a mean square error of 1.99, a coefficient of determination of 0.96 with a significance of p-value < 0.001, and a mean absolute error of 1 occupant. These results show promising estimation capabilities in uncertain indoor environment conditions.
no_new_dataset
0.951729
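A minimal scikit-learn sketch of this kind of occupancy estimator is shown below. The feature construction (five lagged averages of two signals, giving a 10-variable input) follows the description in the record above, but the synthetic data, window lengths, and network size are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def lagged_averages(signal, n_windows=5, window=12):
    """Stack n_windows trailing-window means of a 1-D signal (one row per time step)."""
    kernel = np.ones(window) / window
    avg = np.convolve(signal, kernel, mode="same")
    return np.column_stack([np.roll(avg, k * window) for k in range(1, n_windows + 1)])

# Synthetic stand-ins for the sensed classroom signals and occupant counts.
rng = np.random.default_rng(0)
n = 2000
occupants = rng.integers(0, 30, size=n).astype(float)
co2 = 400 + 15 * occupants + rng.normal(0, 20, size=n)   # ppm, loosely tied to occupancy
rh = 40 + 0.3 * occupants + rng.normal(0, 2, size=n)     # percent relative humidity

X = np.hstack([lagged_averages(rh), lagged_averages(co2)])   # 10 input variables
y = occupants
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("MAE:", np.mean(np.abs(np.round(model.predict(X_te)) - y_te)))
```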
1310.2880
Adrian Barbu
Adrian Barbu, Yiyuan She, Liangjing Ding, Gary Gramajo
Feature Selection with Annealing for Computer Vision and Big Data Learning
18 pages, 9 figures
IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no 2, pp 272 - 286, 2017
10.1109/TPAMI.2016.2544315
null
stat.ML cs.CV cs.LG math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many computer vision and medical imaging problems are faced with learning from large-scale datasets, with millions of observations and features. In this paper we propose a novel efficient learning scheme that tightens a sparsity constraint by gradually removing variables based on a criterion and a schedule. The attractive fact that the problem size keeps dropping throughout the iterations makes it particularly suitable for big data learning. Our approach applies generically to the optimization of any differentiable loss function, and finds applications in regression, classification and ranking. The resultant algorithms build variable screening into estimation and are extremely simple to implement. We provide theoretical guarantees of convergence and selection consistency. In addition, one-dimensional piecewise linear response functions are used to account for nonlinearity and a second-order prior is imposed on these functions to avoid overfitting. Experiments on real and synthetic data show that the proposed method compares very well with other state-of-the-art methods in regression, classification and ranking while being computationally very efficient and scalable.
[ { "version": "v1", "created": "Thu, 10 Oct 2013 16:47:22 GMT" }, { "version": "v2", "created": "Wed, 4 Jun 2014 22:42:51 GMT" }, { "version": "v3", "created": "Tue, 30 Sep 2014 00:33:36 GMT" }, { "version": "v4", "created": "Wed, 1 Oct 2014 20:03:42 GMT" }, { "version": "v5", "created": "Thu, 3 Sep 2015 13:20:26 GMT" }, { "version": "v6", "created": "Wed, 24 Feb 2016 02:02:20 GMT" }, { "version": "v7", "created": "Thu, 17 Mar 2016 14:55:09 GMT" } ]
2017-02-07T00:00:00
[ [ "Barbu", "Adrian", "" ], [ "She", "Yiyuan", "" ], [ "Ding", "Liangjing", "" ], [ "Gramajo", "Gary", "" ] ]
TITLE: Feature Selection with Annealing for Computer Vision and Big Data Learning ABSTRACT: Many computer vision and medical imaging problems are faced with learning from large-scale datasets, with millions of observations and features. In this paper we propose a novel efficient learning scheme that tightens a sparsity constraint by gradually removing variables based on a criterion and a schedule. The attractive fact that the problem size keeps dropping throughout the iterations makes it particularly suitable for big data learning. Our approach applies generically to the optimization of any differentiable loss function, and finds applications in regression, classification and ranking. The resultant algorithms build variable screening into estimation and are extremely simple to implement. We provide theoretical guarantees of convergence and selection consistency. In addition, one-dimensional piecewise linear response functions are used to account for nonlinearity and a second-order prior is imposed on these functions to avoid overfitting. Experiments on real and synthetic data show that the proposed method compares very well with other state-of-the-art methods in regression, classification and ranking while being computationally very efficient and scalable.
no_new_dataset
0.94801
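The annealing idea in the record above is easy to illustrate on a differentiable loss. The sketch below alternates gradient steps on a squared loss with removal of the smallest-magnitude coefficients according to a schedule; the schedule shape, step size, and loss are assumptions for illustration, not the paper's tuned settings.

```python
import numpy as np

def fsa_regression(X, y, k_keep, n_iters=200, lr=0.2, mu=10.0):
    """Feature selection by annealing (sketch) for the squared loss.
    The active feature set keeps shrinking according to a schedule while
    gradient steps are taken, so later iterations work on a smaller problem."""
    n, p = X.shape
    active = np.arange(p)
    w = np.zeros(p)
    for t in range(n_iters):
        Xa = X[:, active]
        grad = Xa.T @ (Xa @ w[active] - y) / n           # gradient of the squared loss
        w[active] = w[active] - lr * grad
        # Assumed annealing curve: number of surviving features decays from p to k_keep.
        m_t = k_keep + int((p - k_keep) * max(0.0, (n_iters - 2.0 * t) / (2.0 * t * mu + n_iters)))
        if len(active) > m_t:
            keep = np.argsort(-np.abs(w[active]))[:m_t]  # drop smallest coefficients
            removed = np.setdiff1d(active, active[keep])
            w[removed] = 0.0
            active = np.sort(active[keep])
    return w, active

# Toy example: 1000 samples, 500 features, only 10 truly informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 500))
w_true = np.zeros(500)
w_true[:10] = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=1000)
w_hat, selected = fsa_regression(X, y, k_keep=10)
```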
1502.05137
Andreas Bulling
Hosnieh Sattar, Sabine M\"uller, Mario Fritz, Andreas Bulling
Prediction of Search Targets From Fixations in Open-World Settings
null
null
10.1109/CVPR.2015.7298700
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous work on predicting the target of visual search from human fixations only considered closed-world settings in which training labels are available and predictions are performed for a known set of potential targets. In this work we go beyond the state of the art by studying search target prediction in an open-world setting in which we no longer assume that we have fixation data to train for the search targets. We present a dataset containing fixation data of 18 users searching for natural images from three image categories within synthesised image collages of about 80 images. In a closed-world baseline experiment we show that we can predict the correct target image out of a candidate set of five images. We then present a new problem formulation for search target prediction in the open-world setting that is based on learning compatibilities between fixations and potential targets.
[ { "version": "v1", "created": "Wed, 18 Feb 2015 07:04:04 GMT" }, { "version": "v2", "created": "Wed, 4 Mar 2015 09:26:03 GMT" }, { "version": "v3", "created": "Sat, 11 Apr 2015 14:56:51 GMT" } ]
2017-02-07T00:00:00
[ [ "Sattar", "Hosnieh", "" ], [ "Müller", "Sabine", "" ], [ "Fritz", "Mario", "" ], [ "Bulling", "Andreas", "" ] ]
TITLE: Prediction of Search Targets From Fixations in Open-World Settings ABSTRACT: Previous work on predicting the target of visual search from human fixations only considered closed-world settings in which training labels are available and predictions are performed for a known set of potential targets. In this work we go beyond the state of the art by studying search target prediction in an open-world setting in which we no longer assume that we have fixation data to train for the search targets. We present a dataset containing fixation data of 18 users searching for natural images from three image categories within synthesised image collages of about 80 images. In a closed-world baseline experiment we show that we can predict the correct target image out of a candidate set of five images. We then present a new problem formulation for search target prediction in the open-world setting that is based on learning compatibilities between fixations and potential targets.
new_dataset
0.961534
1504.02863
Andreas Bulling
Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling
Appearance-Based Gaze Estimation in the Wild
null
null
10.1109/CVPR.2015.7299081
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions and methods have not been evaluated across multiple datasets. In this work we study appearance-based gaze estimation in the wild. We present the MPIIGaze dataset that contains 213,659 images we collected from 15 participants during natural everyday laptop use over more than three months. Our dataset is significantly more variable than existing ones with respect to appearance and illumination. We also present a method for in-the-wild appearance-based gaze estimation using multimodal convolutional neural networks that significantly outperforms state-of-the-art methods in the most challenging cross-dataset evaluation. We present an extensive evaluation of several state-of-the-art image-based gaze estimation algorithms on three current datasets, including our own. This evaluation provides clear insights and allows us to identify key research challenges of gaze estimation in the wild.
[ { "version": "v1", "created": "Sat, 11 Apr 2015 11:52:33 GMT" } ]
2017-02-07T00:00:00
[ [ "Zhang", "Xucong", "" ], [ "Sugano", "Yusuke", "" ], [ "Fritz", "Mario", "" ], [ "Bulling", "Andreas", "" ] ]
TITLE: Appearance-Based Gaze Estimation in the Wild ABSTRACT: Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions and methods have not been evaluated across multiple datasets. In this work we study appearance-based gaze estimation in the wild. We present the MPIIGaze dataset that contains 213,659 images we collected from 15 participants during natural everyday laptop use over more than three months. Our dataset is significantly more variable than existing ones with respect to appearance and illumination. We also present a method for in-the-wild appearance-based gaze estimation using multimodal convolutional neural networks that significantly outperforms state-of-the-art methods in the most challenging cross-dataset evaluation. We present an extensive evaluation of several state-of-the-art image-based gaze estimation algorithms on three current datasets, including our own. This evaluation provides clear insights and allows us to identify key research challenges of gaze estimation in the wild.
new_dataset
0.962532
1505.05916
Erroll Wood
Erroll Wood, Tadas Baltrusaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, and Andreas Bulling
Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation. Recent large-scale supervised methods for these problems require time-consuming data collection and manual annotation, which can be unreliable. We propose synthesizing perfectly labelled photo-realistic training data in a fraction of the time. We used computer graphics techniques to build a collection of dynamic eye-region models from head scan geometry. These were randomly posed to synthesize close-up eye images for a wide range of head poses, gaze directions, and illumination conditions. We used our model's controllability to verify the importance of realistic illumination and shape variations in eye-region training data. Finally, we demonstrate the benefits of our synthesized training data (SynthesEyes) by out-performing state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.
[ { "version": "v1", "created": "Thu, 21 May 2015 22:12:31 GMT" } ]
2017-02-07T00:00:00
[ [ "Wood", "Erroll", "" ], [ "Baltrusaitis", "Tadas", "" ], [ "Zhang", "Xucong", "" ], [ "Sugano", "Yusuke", "" ], [ "Robinson", "Peter", "" ], [ "Bulling", "Andreas", "" ] ]
TITLE: Rendering of Eyes for Eye-Shape Registration and Gaze Estimation ABSTRACT: Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation. Recent large-scale supervised methods for these problems require time-consuming data collection and manual annotation, which can be unreliable. We propose synthesizing perfectly labelled photo-realistic training data in a fraction of the time. We used computer graphics techniques to build a collection of dynamic eye-region models from head scan geometry. These were randomly posed to synthesize close-up eye images for a wide range of head poses, gaze directions, and illumination conditions. We used our model's controllability to verify the importance of realistic illumination and shape variations in eye-region training data. Finally, we demonstrate the benefits of our synthesized training data (SynthesEyes) by out-performing state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.
no_new_dataset
0.948537
1511.05768
Andreas Bulling
Marc Tonsen, Xucong Zhang, Yusuke Sugano, Andreas Bulling
Labeled pupils in the wild: A dataset for studying pupil detection in unconstrained environments
null
null
10.1145/2857491.2857520
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present labelled pupils in the wild (LPW), a novel dataset of 66 high-quality, high-speed eye region videos for the development and evaluation of pupil detection algorithms. The videos in our dataset were recorded from 22 participants in everyday locations at about 95 FPS using a state-of-the-art dark-pupil head-mounted eye tracker. They cover people with different ethnicities, a diverse set of everyday indoor and outdoor illumination environments, as well as natural gaze direction distributions. The dataset also includes participants wearing glasses, contact lenses, as well as make-up. We benchmark five state-of-the-art pupil detection algorithms on our dataset with respect to robustness and accuracy. We further study the influence of image resolution, vision aids, as well as recording location (indoor, outdoor) on pupil detection performance. Our evaluations provide valuable insights into the general pupil detection problem and allow us to identify key challenges for robust pupil detection on head-mounted eye trackers.
[ { "version": "v1", "created": "Wed, 18 Nov 2015 13:30:55 GMT" } ]
2017-02-07T00:00:00
[ [ "Tonsen", "Marc", "" ], [ "Zhang", "Xucong", "" ], [ "Sugano", "Yusuke", "" ], [ "Bulling", "Andreas", "" ] ]
TITLE: Labeled pupils in the wild: A dataset for studying pupil detection in unconstrained environments ABSTRACT: We present labelled pupils in the wild (LPW), a novel dataset of 66 high-quality, high-speed eye region videos for the development and evaluation of pupil detection algorithms. The videos in our dataset were recorded from 22 participants in everyday locations at about 95 FPS using a state-of-the-art dark-pupil head-mounted eye tracker. They cover people with different ethnicities, a diverse set of everyday indoor and outdoor illumination environments, as well as natural gaze direction distributions. The dataset also includes participants wearing glasses, contact lenses, as well as make-up. We benchmark five state-of-the-art pupil detection algorithms on our dataset with respect to robustness and accuracy. We further study the influence of image resolution, vision aids, as well as recording location (indoor, outdoor) on pupil detection performance. Our evaluations provide valuable insights into the general pupil detection problem and allow us to identify key challenges for robust pupil detection on head-mounted eye trackers.
new_dataset
0.964288
1512.07158
Baichuan Zhang
Baichuan Zhang, Noman Mohammed, Vachik Dave, Mohammad Al Hasan
Feature Selection for Classification under Anonymity Constraint
Transactions on Data Privacy 2017
null
null
null
cs.LG cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the last decade, the proliferation of various online platforms and their increasing adoption by billions of users have heightened the privacy risk of a user enormously. In fact, security researchers have shown that sparse microdata containing information about the online activities of a user, although anonymous, can still be used to disclose the identity of the user by cross-referencing the data with other data sources. To preserve the privacy of a user, existing works propose several methods (k-anonymity, l-diversity, differential privacy) that ensure a dataset which is meant to be shared or published bears a small identity disclosure risk. However, the majority of these methods modify the data in isolation, without considering their utility in subsequent knowledge discovery tasks, which makes these datasets less informative. In this work, we consider labeled data that are generally used for classification, and propose two methods for feature selection considering two goals: first, on the reduced feature set the data has a small disclosure risk, and second, the utility of the data is preserved for performing a classification task. Experimental results on various real-world datasets show that the method is effective and useful in practice.
[ { "version": "v1", "created": "Tue, 22 Dec 2015 17:06:01 GMT" }, { "version": "v2", "created": "Sat, 13 Feb 2016 03:05:36 GMT" }, { "version": "v3", "created": "Fri, 19 Feb 2016 02:01:57 GMT" }, { "version": "v4", "created": "Thu, 17 Mar 2016 02:30:33 GMT" }, { "version": "v5", "created": "Thu, 1 Dec 2016 01:05:59 GMT" }, { "version": "v6", "created": "Tue, 31 Jan 2017 15:47:47 GMT" }, { "version": "v7", "created": "Mon, 6 Feb 2017 01:14:37 GMT" } ]
2017-02-07T00:00:00
[ [ "Zhang", "Baichuan", "" ], [ "Mohammed", "Noman", "" ], [ "Dave", "Vachik", "" ], [ "Hasan", "Mohammad Al", "" ] ]
TITLE: Feature Selection for Classification under Anonymity Constraint ABSTRACT: Over the last decade, the proliferation of various online platforms and their increasing adoption by billions of users have heightened the privacy risk of a user enormously. In fact, security researchers have shown that sparse microdata containing information about the online activities of a user, although anonymous, can still be used to disclose the identity of the user by cross-referencing the data with other data sources. To preserve the privacy of a user, existing works propose several methods (k-anonymity, l-diversity, differential privacy) that ensure a dataset which is meant to be shared or published bears a small identity disclosure risk. However, the majority of these methods modify the data in isolation, without considering their utility in subsequent knowledge discovery tasks, which makes these datasets less informative. In this work, we consider labeled data that are generally used for classification, and propose two methods for feature selection considering two goals: first, on the reduced feature set the data has a small disclosure risk, and second, the utility of the data is preserved for performing a classification task. Experimental results on various real-world datasets show that the method is effective and useful in practice.
no_new_dataset
0.952175
1601.01006
Fei Han
Fei Han, Brian Reily, William Hoff, Hao Zhang
Space-Time Representation of People Based on 3D Skeletal Data: A Review
Our paper has been accepted by the journal Computer Vision and Image Understanding, see http://www.sciencedirect.com/science/article/pii/S1077314217300279, Computer Vision and Image Understanding, 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spatiotemporal human representation based on 3D visual perception data is a rapidly growing research area. Depending on the information sources, these representations can be broadly categorized into two groups: those based on RGB-D information and those based on 3D skeleton data. Recently, skeleton-based human representations have been intensively studied and keep attracting increasing attention, due to their robustness to variations of viewpoint, human body scale and motion speed, as well as their real-time, online performance. This paper presents a comprehensive survey of existing space-time representations of people based on 3D skeletal data, and provides an informative categorization and analysis of these methods from several perspectives, including information modality, representation encoding, structure and transition, and feature engineering. We also provide a brief overview of skeleton acquisition devices and construction methods, list a number of public benchmark datasets with skeleton data, and discuss potential future research directions.
[ { "version": "v1", "created": "Tue, 5 Jan 2016 22:38:36 GMT" }, { "version": "v2", "created": "Thu, 21 Jan 2016 06:00:39 GMT" }, { "version": "v3", "created": "Sat, 4 Feb 2017 01:08:55 GMT" } ]
2017-02-07T00:00:00
[ [ "Han", "Fei", "" ], [ "Reily", "Brian", "" ], [ "Hoff", "William", "" ], [ "Zhang", "Hao", "" ] ]
TITLE: Space-Time Representation of People Based on 3D Skeletal Data: A Review ABSTRACT: Spatiotemporal human representation based on 3D visual perception data is a rapidly growing research area. Depending on the information sources, these representations can be broadly categorized into two groups: those based on RGB-D information and those based on 3D skeleton data. Recently, skeleton-based human representations have been intensively studied and keep attracting increasing attention, due to their robustness to variations of viewpoint, human body scale and motion speed, as well as their real-time, online performance. This paper presents a comprehensive survey of existing space-time representations of people based on 3D skeletal data, and provides an informative categorization and analysis of these methods from several perspectives, including information modality, representation encoding, structure and transition, and feature engineering. We also provide a brief overview of skeleton acquisition devices and construction methods, list a number of public benchmark datasets with skeleton data, and discuss potential future research directions.
no_new_dataset
0.947624
1605.05110
Zhen Xu
Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, Xiaolong Wang
Incorporating Loose-Structured Knowledge into Conversation Modeling via Recall-Gate LSTM
under review of IJCNN 2017; 10 pages, 5 figures
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling human conversations is essential for building satisfying chat-bots with multi-turn dialog ability. Conversation modeling will notably benefit from domain knowledge since the relationships between sentences can be clarified due to semantic hints introduced by knowledge. In this paper, a deep neural network is proposed to incorporate background knowledge for conversation modeling. Through a specially designed Recall gate, domain knowledge can be transformed into the extra global memory of Long Short-Term Memory (LSTM), so as to enhance LSTM by cooperating with its local memory to capture the implicit semantic relevance between sentences within conversations. In addition, this paper introduces the loose-structured domain knowledge base, which can be built with a slight amount of manual work and easily adopted by the Recall gate. Our model is evaluated on the context-oriented response selection task, and experimental results on both datasets have shown that our approach is promising for modeling human conversations and building key components of automatic chatting systems.
[ { "version": "v1", "created": "Tue, 17 May 2016 11:03:25 GMT" }, { "version": "v2", "created": "Mon, 6 Feb 2017 03:43:17 GMT" } ]
2017-02-07T00:00:00
[ [ "Xu", "Zhen", "" ], [ "Liu", "Bingquan", "" ], [ "Wang", "Baoxun", "" ], [ "Sun", "Chengjie", "" ], [ "Wang", "Xiaolong", "" ] ]
TITLE: Incorporating Loose-Structured Knowledge into Conversation Modeling via Recall-Gate LSTM ABSTRACT: Modeling human conversations is essential for building satisfying chat-bots with multi-turn dialog ability. Conversation modeling will notably benefit from domain knowledge since the relationships between sentences can be clarified due to semantic hints introduced by knowledge. In this paper, a deep neural network is proposed to incorporate background knowledge for conversation modeling. Through a specially designed Recall gate, domain knowledge can be transformed into the extra global memory of Long Short-Term Memory (LSTM), so as to enhance LSTM by cooperating with its local memory to capture the implicit semantic relevance between sentences within conversations. In addition, this paper introduces the loose-structured domain knowledge base, which can be built with a slight amount of manual work and easily adopted by the Recall gate. Our model is evaluated on the context-oriented response selection task, and experimental results on both datasets have shown that our approach is promising for modeling human conversations and building key components of automatic chatting systems.
no_new_dataset
0.942135
1605.06423
Jonathan Huggins
Jonathan H. Huggins, Trevor Campbell, Tamara Broderick
Coresets for Scalable Bayesian Logistic Regression
In Proceedings of Advances in Neural Information Processing Systems (NIPS 2016)
null
null
null
stat.CO cs.DS stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of Bayesian methods in large-scale data settings is attractive because of the rich hierarchical models, uncertainty quantification, and prior specification they provide. Standard Bayesian inference algorithms are computationally expensive, however, making their direct application to large datasets difficult or infeasible. Recent work on scaling Bayesian inference has focused on modifying the underlying algorithms to, for example, use only a random data subsample at each iteration. We leverage the insight that data is often redundant to instead obtain a weighted subset of the data (called a coreset) that is much smaller than the original dataset. We can then use this small coreset in any number of existing posterior inference algorithms without modification. In this paper, we develop an efficient coreset construction algorithm for Bayesian logistic regression models. We provide theoretical guarantees on the size and approximation quality of the coreset -- both for fixed, known datasets, and in expectation for a wide class of data generative models. Crucially, the proposed approach also permits efficient construction of the coreset in both streaming and parallel settings, with minimal additional effort. We demonstrate the efficacy of our approach on a number of synthetic and real-world datasets, and find that, in practice, the size of the coreset is independent of the original dataset size. Furthermore, constructing the coreset takes a negligible amount of time compared to that required to run MCMC on it.
[ { "version": "v1", "created": "Fri, 20 May 2016 16:26:45 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2016 14:12:19 GMT" }, { "version": "v3", "created": "Mon, 6 Feb 2017 15:11:30 GMT" } ]
2017-02-07T00:00:00
[ [ "Huggins", "Jonathan H.", "" ], [ "Campbell", "Trevor", "" ], [ "Broderick", "Tamara", "" ] ]
TITLE: Coresets for Scalable Bayesian Logistic Regression ABSTRACT: The use of Bayesian methods in large-scale data settings is attractive because of the rich hierarchical models, uncertainty quantification, and prior specification they provide. Standard Bayesian inference algorithms are computationally expensive, however, making their direct application to large datasets difficult or infeasible. Recent work on scaling Bayesian inference has focused on modifying the underlying algorithms to, for example, use only a random data subsample at each iteration. We leverage the insight that data is often redundant to instead obtain a weighted subset of the data (called a coreset) that is much smaller than the original dataset. We can then use this small coreset in any number of existing posterior inference algorithms without modification. In this paper, we develop an efficient coreset construction algorithm for Bayesian logistic regression models. We provide theoretical guarantees on the size and approximation quality of the coreset -- both for fixed, known datasets, and in expectation for a wide class of data generative models. Crucially, the proposed approach also permits efficient construction of the coreset in both streaming and parallel settings, with minimal additional effort. We demonstrate the efficacy of our approach on a number of synthetic and real-world datasets, and find that, in practice, the size of the coreset is independent of the original dataset size. Furthermore, constructing the coreset takes a negligible amount of time compared to that required to run MCMC on it.
no_new_dataset
0.949809
1610.01101
Damek Davis
Aleksandr Aravkin and Damek Davis
A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning
33 pages, 5 figures
null
null
null
stat.ML cs.LG math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show how to transform any optimization problem that arises from fitting a machine learning model into one that (1) detects and removes contaminated data from the training set while (2) simultaneously fitting the trimmed model on the uncontaminated data that remains. To solve the resulting nonconvex optimization problem, we introduce a fast stochastic proximal-gradient algorithm that incorporates prior knowledge through nonsmooth regularization. For datasets of size $n$, our approach requires $O(n^{2/3}/\varepsilon)$ gradient evaluations to reach $\varepsilon$-accuracy and, when a certain error bound holds, the complexity improves to $O(\kappa n^{2/3}\log(1/\varepsilon))$. These rates are $n^{1/3}$ times better than those achieved by typical, full gradient methods.
[ { "version": "v1", "created": "Tue, 4 Oct 2016 17:24:43 GMT" }, { "version": "v2", "created": "Sun, 5 Feb 2017 15:24:39 GMT" } ]
2017-02-07T00:00:00
[ [ "Aravkin", "Aleksandr", "" ], [ "Davis", "Damek", "" ] ]
TITLE: A SMART Stochastic Algorithm for Nonconvex Optimization with Applications to Robust Machine Learning ABSTRACT: In this paper, we show how to transform any optimization problem that arises from fitting a machine learning model into one that (1) detects and removes contaminated data from the training set while (2) simultaneously fitting the trimmed model on the uncontaminated data that remains. To solve the resulting nonconvex optimization problem, we introduce a fast stochastic proximal-gradient algorithm that incorporates prior knowledge through nonsmooth regularization. For datasets of size $n$, our approach requires $O(n^{2/3}/\varepsilon)$ gradient evaluations to reach $\varepsilon$-accuracy and, when a certain error bound holds, the complexity improves to $O(\kappa n^{2/3}\log(1/\varepsilon))$. These rates are $n^{1/3}$ times better than those achieved by typical, full gradient methods.
no_new_dataset
0.946597
1610.06227
Mohammad Sadegh Rasooli
Mohammad Sadegh Rasooli, Michael Collins
Cross-Lingual Syntactic Transfer with Limited Resources
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. The method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.
[ { "version": "v1", "created": "Wed, 19 Oct 2016 21:25:39 GMT" }, { "version": "v2", "created": "Sat, 4 Feb 2017 04:05:00 GMT" } ]
2017-02-07T00:00:00
[ [ "Rasooli", "Mohammad Sadegh", "" ], [ "Collins", "Michael", "" ] ]
TITLE: Cross-Lingual Syntactic Transfer with Limited Resources ABSTRACT: We describe a simple but effective method for cross-lingual syntactic transfer of dependency parsers, in the scenario where a large amount of translation data is not available. The method makes use of three steps: 1) a method for deriving cross-lingual word clusters, which can then be used in a multilingual parser; 2) a method for transferring lexical information from a target language to source language treebanks; 3) a method for integrating these steps with the density-driven annotation projection method of Rasooli and Collins (2015). Experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the Bible, a considerably smaller corpus than the Europarl corpus used in previous work. Results using the Europarl corpus as a source of translation data show additional improvements over the results of Rasooli and Collins (2015). We conclude with results on 38 datasets from the Universal Dependencies corpora.
no_new_dataset
0.9462
1611.07810
Tegan Maharaj
Tegan Maharaj and Nicolas Ballas and Anna Rohrbach and Aaron Courville and Christopher Pal
A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While deep convolutional neural networks frequently approach or exceed human-level performance at benchmark tasks involving static images, extending this success to moving images is not straightforward. Having models which can learn to understand video is of interest for many applications, including content recommendation, prediction, summarization, event/object detection and understanding human visual perception, but many domains lack sufficient data to explore and perfect video models. In order to address the need for a simple, quantitative benchmark for developing and understanding video, we present MovieFIB, a fill-in-the-blank question-answering dataset with over 300,000 examples, based on descriptive video annotations for the visually impaired. In addition to presenting statistics and a description of the dataset, we perform a detailed analysis of 5 different models' predictions, and compare these with human performance. We investigate the relative importance of language, static (2D) visual features, and moving (3D) visual features; the effects of increasing dataset size, the number of frames sampled; and of vocabulary size. We illustrate that: this task is not solvable by a language model alone; our model combining 2D and 3D visual information indeed provides the best result; all models perform significantly worse than human-level. We provide human evaluations for responses given by different models and find that accuracy on the MovieFIB evaluation corresponds well with human judgement. We suggest avenues for improving video models, and hope that the proposed dataset can be useful for measuring and encouraging progress in this very interesting field.
[ { "version": "v1", "created": "Wed, 23 Nov 2016 14:22:51 GMT" }, { "version": "v2", "created": "Sun, 5 Feb 2017 17:51:19 GMT" } ]
2017-02-07T00:00:00
[ [ "Maharaj", "Tegan", "" ], [ "Ballas", "Nicolas", "" ], [ "Rohrbach", "Anna", "" ], [ "Courville", "Aaron", "" ], [ "Pal", "Christopher", "" ] ]
TITLE: A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering ABSTRACT: While deep convolutional neural networks frequently approach or exceed human-level performance at benchmark tasks involving static images, extending this success to moving images is not straightforward. Having models which can learn to understand video is of interest for many applications, including content recommendation, prediction, summarization, event/object detection and understanding human visual perception, but many domains lack sufficient data to explore and perfect video models. In order to address the need for a simple, quantitative benchmark for developing and understanding video, we present MovieFIB, a fill-in-the-blank question-answering dataset with over 300,000 examples, based on descriptive video annotations for the visually impaired. In addition to presenting statistics and a description of the dataset, we perform a detailed analysis of 5 different models' predictions, and compare these with human performance. We investigate the relative importance of language, static (2D) visual features, and moving (3D) visual features; the effects of increasing dataset size, the number of frames sampled; and of vocabulary size. We illustrate that: this task is not solvable by a language model alone; our model combining 2D and 3D visual information indeed provides the best result; all models perform significantly worse than human-level. We provide human evaluations for responses given by different models and find that accuracy on the MovieFIB evaluation corresponds well with human judgement. We suggest avenues for improving video models, and hope that the proposed dataset can be useful for measuring and encouraging progress in this very interesting field.
no_new_dataset
0.912124
1612.00157
Yi\u{g}it Baran Can
Yi\u{g}it Baran Can, Efe Il{\i}cak, Tolga \c{C}ukur
Fast 3D Variable-FOV Reconstruction for Parallel Imaging with Localized Sensitivities
Accepted, to be presented at ISMRM 25th Annual Meeting 2017
null
null
null
physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several successful iterative approaches have recently been proposed for parallel-imaging reconstructions of variable-density (VD) acquisitions, but they often induce a substantial computational burden for non-Cartesian data. Here we propose a generalized variable-FOV PILS reconstruction for 3D VD Cartesian and non-Cartesian data. The proposed method separates k-space into non-intersecting annuli based on sampling density, and sets the 3D reconstruction FOV for each annulus based on the respective sampling density. The variable-FOV method is compared against conventional gridding, PILS, and ESPIRiT reconstructions. Results indicate that the proposed method yields better artifact suppression compared to gridding and PILS, and improves noise conditioning relative to ESPIRiT, enabling fast and high-quality reconstructions of 3D datasets.
[ { "version": "v1", "created": "Thu, 1 Dec 2016 06:14:34 GMT" }, { "version": "v2", "created": "Mon, 6 Feb 2017 13:06:28 GMT" } ]
2017-02-07T00:00:00
[ [ "Can", "Yiğit Baran", "" ], [ "Ilıcak", "Efe", "" ], [ "Çukur", "Tolga", "" ] ]
TITLE: Fast 3D Variable-FOV Reconstruction for Parallel Imaging with Localized Sensitivities ABSTRACT: Several successful iterative approaches have recently been proposed for parallel-imaging reconstructions of variable-density (VD) acquisitions, but they often induce a substantial computational burden for non-Cartesian data. Here we propose a generalized variable-FOV PILS reconstruction for 3D VD Cartesian and non-Cartesian data. The proposed method separates k-space into non-intersecting annuli based on sampling density, and sets the 3D reconstruction FOV for each annulus based on the respective sampling density. The variable-FOV method is compared against conventional gridding, PILS, and ESPIRiT reconstructions. Results indicate that the proposed method yields better artifact suppression compared to gridding and PILS, and improves noise conditioning relative to ESPIRiT, enabling fast and high-quality reconstructions of 3D datasets.
no_new_dataset
0.951142
1612.07386
David Rosen
David M. Rosen, Luca Carlone, Afonso S. Bandeira, and John J. Leonard
SE-Sync: A Certifiably Correct Algorithm for Synchronization over the Special Euclidean Group
49 Pages, 20 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many important geometric estimation problems take the form of synchronization over the special Euclidean group: estimate the values of a set of poses given a set of relative measurements between them. This problem is typically formulated as a nonconvex maximum-likelihood estimation that is computationally hard to solve in general. Nevertheless, in this paper we present an algorithm that is able to efficiently recover certifiably globally optimal solutions of the special Euclidean synchronization problem in a non-adversarial noise regime. The crux of our approach is the development of a semidefinite relaxation of the maximum-likelihood estimation whose minimizer provides an exact MLE so long as the magnitude of the noise corrupting the available measurements falls below a certain critical threshold; furthermore, whenever exactness obtains, it is possible to verify this fact a posteriori, thereby certifying the optimality of the recovered estimate. We develop a specialized optimization scheme for solving large-scale instances of this relaxation by exploiting its low-rank, geometric, and graph-theoretic structure to reduce it to an equivalent optimization problem on a low-dimensional Riemannian manifold, and design a truncated-Newton trust-region method to solve this reduction efficiently. Finally, we combine this fast optimization approach with a simple rounding procedure to produce our algorithm, SE-Sync. Experimental evaluation on a variety of simulated and real-world pose-graph SLAM datasets shows that SE-Sync is able to recover certifiably globally optimal solutions when the available measurements are corrupted by noise up to an order of magnitude greater than that typically encountered in robotics and computer vision applications, and does so more than an order of magnitude faster than the Gauss-Newton-based approach that forms the basis of current state-of-the-art techniques.
[ { "version": "v1", "created": "Wed, 21 Dec 2016 23:21:29 GMT" }, { "version": "v2", "created": "Sun, 5 Feb 2017 03:49:42 GMT" } ]
2017-02-07T00:00:00
[ [ "Rosen", "David M.", "" ], [ "Carlone", "Luca", "" ], [ "Bandeira", "Afonso S.", "" ], [ "Leonard", "John J.", "" ] ]
TITLE: SE-Sync: A Certifiably Correct Algorithm for Synchronization over the Special Euclidean Group ABSTRACT: Many important geometric estimation problems take the form of synchronization over the special Euclidean group: estimate the values of a set of poses given a set of relative measurements between them. This problem is typically formulated as a nonconvex maximum-likelihood estimation that is computationally hard to solve in general. Nevertheless, in this paper we present an algorithm that is able to efficiently recover certifiably globally optimal solutions of the special Euclidean synchronization problem in a non-adversarial noise regime. The crux of our approach is the development of a semidefinite relaxation of the maximum-likelihood estimation whose minimizer provides an exact MLE so long as the magnitude of the noise corrupting the available measurements falls below a certain critical threshold; furthermore, whenever exactness obtains, it is possible to verify this fact a posteriori, thereby certifying the optimality of the recovered estimate. We develop a specialized optimization scheme for solving large-scale instances of this relaxation by exploiting its low-rank, geometric, and graph-theoretic structure to reduce it to an equivalent optimization problem on a low-dimensional Riemannian manifold, and design a truncated-Newton trust-region method to solve this reduction efficiently. Finally, we combine this fast optimization approach with a simple rounding procedure to produce our algorithm, SE-Sync. Experimental evaluation on a variety of simulated and real-world pose-graph SLAM datasets shows that SE-Sync is able to recover certifiably globally optimal solutions when the available measurements are corrupted by noise up to an order of magnitude greater than that typically encountered in robotics and computer vision applications, and does so more than an order of magnitude faster than the Gauss-Newton-based approach that forms the basis of current state-of-the-art techniques.
no_new_dataset
0.944893
1702.00956
Suwon Shon
Suwon Shon, Hanseok Ko
KU-ISPL Speaker Recognition Systems under Language mismatch condition for NIST 2016 Speaker Recognition Evaluation
SRE16, NIST SRE 2016 system description
null
null
null
cs.SD cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Korea University Intelligent Signal Processing Lab. (KU-ISPL) developed a speaker recognition system for the SRE16 fixed training condition. Data for the evaluation trials were collected outside North America and are spoken in Tagalog and Cantonese, while the training data are spoken only in English. Thus, the main issue for SRE16 is compensating for the discrepancy between different languages. Using the development dataset, which is spoken in Cebuano and Mandarin, we could prepare for the evaluation trials through preliminary experiments that compensate for the language-mismatched condition. Our team developed 4 different approaches to extract i-vectors and applied state-of-the-art techniques as the backend. To compensate for language mismatch, we investigated and pursued unique methods such as unsupervised language clustering, inter-language variability compensation and gender/language-dependent score normalization.
[ { "version": "v1", "created": "Fri, 3 Feb 2017 10:15:29 GMT" }, { "version": "v2", "created": "Mon, 6 Feb 2017 03:37:28 GMT" } ]
2017-02-07T00:00:00
[ [ "Shon", "Suwon", "" ], [ "Ko", "Hanseok", "" ] ]
TITLE: KU-ISPL Speaker Recognition Systems under Language mismatch condition for NIST 2016 Speaker Recognition Evaluation ABSTRACT: Korea University Intelligent Signal Processing Lab. (KU-ISPL) developed a speaker recognition system for the SRE16 fixed training condition. Data for the evaluation trials were collected outside North America and are spoken in Tagalog and Cantonese, while the training data are spoken only in English. Thus, the main issue for SRE16 is compensating for the discrepancy between different languages. Using the development dataset, which is spoken in Cebuano and Mandarin, we could prepare for the evaluation trials through preliminary experiments that compensate for the language-mismatched condition. Our team developed 4 different approaches to extract i-vectors and applied state-of-the-art techniques as the backend. To compensate for language mismatch, we investigated and pursued unique methods such as unsupervised language clustering, inter-language variability compensation and gender/language-dependent score normalization.
no_new_dataset
0.934813
1702.01151
Helge Holzmann
Helge Holzmann, Wolfgang Nejdl, Avishek Anand
The Dawn of Today's Popular Domains: A Study of the Archived German Web over 18 Years
JCDL 2016, Newark, NJ, USA
null
10.1145/2910896.2910901
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Web has been around and maturing for 25 years. The popular websites of today have undergone vast changes during this period, with a few being there almost since the beginning and many new ones becoming popular over the years. This makes it worthwhile to take a look at how these sites have evolved and what they might tell us about the future of the Web. We therefore embarked on a longitudinal study spanning almost the whole period of the Web, based on data collected by the Internet Archive starting in 1996, to retrospectively analyze how the popular Web as of now has evolved over the past 18 years. For our study we focused on the German Web, specifically on the top 100 most popular websites in 17 categories. This paper presents a selection of the most interesting findings in terms of volume, size as well as age of the Web. While related work in the field of Web Dynamics has mainly focused on change rates and analyzed datasets spanning less than a year, we looked at the evolution of websites over 18 years. We found that around 70% of the pages we investigated are younger than a year, with an observed exponential growth in age as well as in size up to now. If this growth rate continues, the number of pages from the popular domains will almost double in the next two years. In addition, we give insights into our data set, provided by the Internet Archive, which hosts the largest and most complete Web archive as of today.
[ { "version": "v1", "created": "Fri, 3 Feb 2017 20:45:56 GMT" } ]
2017-02-07T00:00:00
[ [ "Holzmann", "Helge", "" ], [ "Nejdl", "Wolfgang", "" ], [ "Anand", "Avishek", "" ] ]
TITLE: The Dawn of Today's Popular Domains: A Study of the Archived German Web over 18 Years ABSTRACT: The Web has been around and maturing for 25 years. The popular websites of today have undergone vast changes during this period, with a few being there almost since the beginning and many new ones becoming popular over the years. This makes it worthwhile to take a look at how these sites have evolved and what they might tell us about the future of the Web. We therefore embarked on a longitudinal study spanning almost the whole period of the Web, based on data collected by the Internet Archive starting in 1996, to retrospectively analyze how the popular Web as of now has evolved over the past 18 years. For our study we focused on the German Web, specifically on the top 100 most popular websites in 17 categories. This paper presents a selection of the most interesting findings in terms of volume, size as well as age of the Web. While related work in the field of Web Dynamics has mainly focused on change rates and analyzed datasets spanning less than a year, we looked at the evolution of websites over 18 years. We found that around 70% of the pages we investigated are younger than a year, with an observed exponential growth in age as well as in size up to now. If this growth rate continues, the number of pages from the popular domains will almost double in the next two years. In addition, we give insights into our data set, provided by the Internet Archive, which hosts the largest and most complete Web archive as of today.
no_new_dataset
0.886174
1702.01159
Helge Holzmann
Helge Holzmann, Wolfgang Nejdl, Avishek Anand
On the Applicability of Delicious for Temporal Search on Web Archives
SIGIR 2016, Pisa, Italy
null
10.1145/2911451.2914724
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Web archives are large longitudinal collections that store webpages from the past, which might be missing on the current live Web. Consequently, temporal search over such collections is essential for finding prominent missing webpages and tasks like historical analysis. However, this has been challenging due to the lack of popularity information and proper ground truth to evaluate temporal retrieval models. In this paper we investigate the applicability of external longitudinal resources to identify important and popular websites in the past and analyze the social bookmarking service Delicious for this purpose. The timestamped bookmarks on Delicious provide explicit cues about popular time periods in the past along with relevant descriptors. These are valuable to identify important documents in the past for a given temporal query. Focusing purely on recall, we analyzed more than 12,000 queries and find that using Delicious yields average recall values from 46% up to 100%, when limiting ourselves to the best represented queries in the considered dataset. This constitutes an attractive and low-overhead approach for quick access into Web archives by not dealing with the actual contents.
[ { "version": "v1", "created": "Fri, 3 Feb 2017 21:06:47 GMT" } ]
2017-02-07T00:00:00
[ [ "Holzmann", "Helge", "" ], [ "Nejdl", "Wolfgang", "" ], [ "Anand", "Avishek", "" ] ]
TITLE: On the Applicability of Delicious for Temporal Search on Web Archives ABSTRACT: Web archives are large longitudinal collections that store webpages from the past, which might be missing on the current live Web. Consequently, temporal search over such collections is essential for finding prominent missing webpages and tasks like historical analysis. However, this has been challenging due to the lack of popularity information and proper ground truth to evaluate temporal retrieval models. In this paper we investigate the applicability of external longitudinal resources to identify important and popular websites in the past and analyze the social bookmarking service Delicious for this purpose. The timestamped bookmarks on Delicious provide explicit cues about popular time periods in the past along with relevant descriptors. These are valuable to identify important documents in the past for a given temporal query. Focusing purely on recall, we analyzed more than 12,000 queries and find that using Delicious yields average recall values from 46% up to 100%, when limiting ourselves to the best represented queries in the considered dataset. This constitutes an attractive and low-overhead approach for quick access into Web archives by not dealing with the actual contents.
no_new_dataset
0.947769
1702.01167
Andrey Kuehlkamp
Andrey Kuehlkamp and Kevin W. Bowyer
An Analysis of 1-to-First Matching in Iris Recognition
2016 IEEE Winter Conference on Applications of Computer Vision (WACV)
null
10.1109/WACV.2016.7477687
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Iris recognition systems are a mature technology that is widely used throughout the world. In identification (as opposed to verification) mode, an iris to be recognized is typically matched against all N enrolled irises. This is the classic "1-to-N search". In order to improve the speed of large-scale identification, a modified "1-to-First" search has been used in some operational systems. A 1-to-First search terminates with the first below-threshold match that is found, whereas a 1-to-N search always finds the best match across all enrollments. We know of no previous studies that evaluate how the accuracy of 1-to-First search differs from that of 1-to-N search. Using a dataset of over 50,000 iris images from 2,800 different irises, we perform experiments to evaluate the relative accuracy of 1-to-First and 1-to-N search. We evaluate how the accuracy difference changes with larger numbers of enrolled irises, and with larger ranges of rotational difference allowed between iris images. We find that the False Match error rate for 1-to-First is higher than for 1-to-N, and the difference grows with a larger number of enrolled irises and with a larger range of rotation.
[ { "version": "v1", "created": "Fri, 3 Feb 2017 21:24:10 GMT" } ]
2017-02-07T00:00:00
[ [ "Kuehlkamp", "Andrey", "" ], [ "Bowyer", "Kevin W.", "" ] ]
TITLE: An Analysis of 1-to-First Matching in Iris Recognition ABSTRACT: Iris recognition systems are a mature technology that is widely used throughout the world. In identification (as opposed to verification) mode, an iris to be recognized is typically matched against all N enrolled irises. This is the classic "1-to-N search". In order to improve the speed of large-scale identification, a modified "1-to-First" search has been used in some operational systems. A 1-to-First search terminates with the first below-threshold match that is found, whereas a 1-to-N search always finds the best match across all enrollments. We know of no previous studies that evaluate how the accuracy of 1-to-First search differs from that of 1-to-N search. Using a dataset of over 50,000 iris images from 2,800 different irises, we perform experiments to evaluate the relative accuracy of 1-to-First and 1-to-N search. We evaluate how the accuracy difference changes with larger numbers of enrolled irises, and with larger ranges of rotational difference allowed between iris images. We find that the False Match error rate for 1-to-First is higher than for 1-to-N, and the difference grows with a larger number of enrolled irises and with a larger range of rotation.
no_new_dataset
0.884489
1702.01184
Benjamin Mako Hill
Benjamin Mako Hill, Andr\'es Monroy-Hern\'andez
A longitudinal dataset of five years of public activity in the Scratch online community
null
Scientific Data 4, Article number: 170002, 2017
10.1038/sdata.2017.2
null
cs.CY cs.HC cs.SI
http://creativecommons.org/licenses/by/4.0/
Scratch is a programming environment and an online community where young people can create, share, learn, and communicate. In collaboration with the Scratch Team at MIT, we created a longitudinal dataset of public activity in the Scratch online community during its first five years (2007-2012). The dataset comprises 32 tables with information on more than 1 million Scratch users, nearly 2 million Scratch projects, more than 10 million comments, more than 30 million visits to Scratch projects, and more. To help researchers understand this dataset, and to establish the validity of the data, we also include the source code of every version of the software that operated the website, as well as the software used to generate this dataset. We believe this is the largest and most comprehensive downloadable dataset of youth programming artifacts and communication.
[ { "version": "v1", "created": "Fri, 3 Feb 2017 22:02:24 GMT" } ]
2017-02-07T00:00:00
[ [ "Hill", "Benjamin Mako", "" ], [ "Monroy-Hernández", "Andrés", "" ] ]
TITLE: A longitudinal dataset of five years of public activity in the Scratch online community ABSTRACT: Scratch is a programming environment and an online community where young people can create, share, learn, and communicate. In collaboration with the Scratch Team at MIT, we created a longitudinal dataset of public activity in the Scratch online community during its first five years (2007-2012). The dataset comprises 32 tables with information on more than 1 million Scratch users, nearly 2 million Scratch projects, more than 10 million comments, more than 30 million visits to Scratch projects, and more. To help researchers understand this dataset, and to establish the validity of the data, we also include the source code of every version of the software that operated the website, as well as the software used to generate this dataset. We believe this is the largest and most comprehensive downloadable dataset of youth programming artifacts and communication.
new_dataset
0.958654
1702.01268
Giorgio Valentini
Jessica Gliozzo
Network-based methods for outcome prediction in the "sample space"
MSc Thesis, Advisor: G. Valentini, Co-Advisors: A. Paccanaro and M. Re, 92 pages, 36 figures, 10 tables
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this thesis we present the novel semi-supervised network-based algorithm P-Net, which is able to rank and classify patients with respect to a specific phenotype or clinical outcome under study. The distinctive and innovative characteristic of this method is that it builds a network of samples/patients, where the nodes represent the samples and the edges are functional or genetic relationships between individuals (e.g. similarity of expression profiles), to predict the phenotype under study. In other words, it constructs the network in the "sample space" and not in the "biomarker space" (where nodes represent biomolecules (e.g. genes, proteins) and edges represent functional or genetic relationships between nodes), as is usual in state-of-the-art methods. To assess the performance of P-Net, we apply it to three different publicly available datasets from patients afflicted with a specific type of tumor: a pancreatic cancer, a melanoma and an ovarian cancer dataset, using the data and following the experimental set-up proposed in two recently published papers [Barter et al., 2014, Winter et al., 2012]. We show that network-based methods in the "sample space" can achieve results competitive with classical supervised inductive systems. Moreover, the graph representation of the samples can be easily visualized through networks and can be used to gain visual clues about the relationships between samples, taking into account the phenotype associated with or predicted for each sample. To our knowledge, this is one of the first works that proposes graph-based algorithms working in the "sample space" of the biomolecular profiles of the patients to predict their phenotype or outcome, thus contributing to a novel research line in the framework of Network Medicine.
[ { "version": "v1", "created": "Sat, 4 Feb 2017 11:18:53 GMT" } ]
2017-02-07T00:00:00
[ [ "Gliozzo", "Jessica", "" ] ]
TITLE: Network-based methods for outcome prediction in the "sample space" ABSTRACT: In this thesis we present the novel semi-supervised network-based algorithm P-Net, which is able to rank and classify patients with respect to a specific phenotype or clinical outcome under study. The distinctive and innovative characteristic of this method is that it builds a network of samples/patients, where the nodes represent the samples and the edges are functional or genetic relationships between individuals (e.g. similarity of expression profiles), to predict the phenotype under study. In other words, it constructs the network in the "sample space" and not in the "biomarker space" (where nodes represent biomolecules (e.g. genes, proteins) and edges represent functional or genetic relationships between nodes), as is usual in state-of-the-art methods. To assess the performance of P-Net, we apply it to three different publicly available datasets from patients afflicted with a specific type of tumor: a pancreatic cancer, a melanoma and an ovarian cancer dataset, using the data and following the experimental set-up proposed in two recently published papers [Barter et al., 2014, Winter et al., 2012]. We show that network-based methods in the "sample space" can achieve results competitive with classical supervised inductive systems. Moreover, the graph representation of the samples can be easily visualized through networks and can be used to gain visual clues about the relationships between samples, taking into account the phenotype associated with or predicted for each sample. To our knowledge, this is one of the first works that proposes graph-based algorithms working in the "sample space" of the biomolecular profiles of the patients to predict their phenotype or outcome, thus contributing to a novel research line in the framework of Network Medicine.
no_new_dataset
0.942612
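The record above describes a graph-based method that works in the "sample space" rather than the "biomarker space". As a rough illustration of that idea (not the published P-Net algorithm), the sketch below builds a k-nearest-neighbour graph over patients from their expression profiles and ranks unlabelled patients by propagating scores from patients known to carry the phenotype; the kNN construction, the random-walk-with-restart update and all parameters are assumptions made for this example.

```python
# Minimal "sample space" sketch: rank patients by propagating outcome scores
# over a patient-similarity graph. This is NOT the published P-Net algorithm,
# only an illustration of the general idea described in the record above.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def rank_patients(X, y_seed, k=10, alpha=0.85, n_iter=50):
    """X: (n_patients, n_genes) expression matrix.
    y_seed: 1 for patients known to have the phenotype, 0 otherwise."""
    # Nodes are patients, edges connect the k most similar expression profiles.
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
    A = A.toarray()
    A = np.maximum(A, A.T)                                     # symmetrize
    W = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)    # row-normalize
    p = y_seed / max(y_seed.sum(), 1)                          # restart distribution
    s = p.copy()
    for _ in range(n_iter):                                    # random walk with restart
        s = alpha * W.T @ s + (1 - alpha) * p
    return np.argsort(-s), s                                   # ranking and scores

# Toy usage with hypothetical data.
X = np.random.rand(100, 500)
y_seed = np.zeros(100)
y_seed[:10] = 1
ranking, scores = rank_patients(X, y_seed)
```

In practice, the edge definition (e.g. correlation of expression profiles) and the score-propagation rule are where methods of this family differ most.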
1702.01434
Muhammad Qasim Pasta
Muhammad Qasim Pasta, Faraz Zaidi, C\'eline Rozenblat
Generating online social networks based on socio-demographic attributes
arXiv admin note: substantial text overlap with arXiv:1311.3508
J Complex Netw 2014, 2 (4): 475-494
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have seen tremendous growth of many online social networks such as Facebook, LinkedIn and MySpace. People connect to each other through these networks forming large social communities providing researchers rich datasets to understand, model and predict social interactions and behaviors. New contacts in these networks can be formed due to an individual's demographic attributes such as age group, gender, geographic location, or due to a network's structural dynamics such as triadic closure and preferential attachment, or a combination of both demographic and structural characteristics. A number of network generation models have been proposed in the last decade to explain the structure, evolution and processes taking place in different types of networks, and notably social networks. Network generation models studied in the literature primarily consider structural properties, and in some cases an individual's demographic profile in the formation of new social contacts. These models do not present a mechanism to combine both structural and demographic characteristics for the formation of new links. In this paper, we propose a new network generation algorithm which incorporates both these characteristics to model network formation. We use different publicly available Facebook datasets as benchmarks to demonstrate the correctness of the proposed network generation model. The proposed model is flexible and thus can generate networks with varying demographic and structural properties.
[ { "version": "v1", "created": "Sun, 5 Feb 2017 18:04:29 GMT" } ]
2017-02-07T00:00:00
[ [ "Pasta", "Muhammad Qasim", "" ], [ "Zaidi", "Faraz", "" ], [ "Rozenblat", "Céline", "" ] ]
TITLE: Generating online social networks based on socio-demographic attributes ABSTRACT: Recent years have seen tremendous growth of many online social networks such as Facebook, LinkedIn and MySpace. People connect to each other through these networks forming large social communities providing researchers rich datasets to understand, model and predict social interactions and behaviors. New contacts in these networks can be formed due to an individual's demographic attributes such as age group, gender, geographic location, or due to a network's structural dynamics such as triadic closure and preferential attachment, or a combination of both demographic and structural characteristics. A number of network generation models have been proposed in the last decade to explain the structure, evolution and processes taking place in different types of networks, and notably social networks. Network generation models studied in the literature primarily consider structural properties, and in some cases an individual's demographic profile in the formation of new social contacts. These models do not present a mechanism to combine both structural and demographic characteristics for the formation of new links. In this paper, we propose a new network generation algorithm which incorporates both these characteristics to model network formation. We use different publicly available Facebook datasets as benchmarks to demonstrate the correctness of the proposed network generation model. The proposed model is flexible and thus can generate networks with varying demographic and structural properties.
no_new_dataset
0.954308
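The abstract above combines demographic attributes with structural mechanisms (triadic closure, preferential attachment) in a single generator. The toy sketch below mixes the two ingredients in the simplest possible way and is only an illustration of the general recipe, not the model proposed in the paper; the attribute, the mixing probability and the triadic-closure probability are arbitrary choices.

```python
# Toy generator mixing demographic homophily with structural attachment
# (preferential attachment and triadic closure). Illustrative only.
import random
import networkx as nx

def generate(n=300, m=3, p_homophily=0.5, p_triad=0.1):
    ages = [random.choice(["young", "adult", "senior"]) for _ in range(n)]
    G = nx.complete_graph(m)                       # small seed clique
    for v in range(m, n):
        G.add_node(v)
        others = [u for u in G.nodes if u != v]
        targets = set()
        while len(targets) < m:
            if random.random() < p_homophily:
                # demographic attachment: prefer nodes sharing the attribute
                same = [u for u in others if ages[u] == ages[v]]
                cand = random.choice(same or others)
            else:
                # structural attachment: degree-biased (preferential) choice
                weights = [G.degree(u) + 1 for u in others]
                cand = random.choices(others, weights=weights, k=1)[0]
            targets.add(cand)
        G.add_edges_from((v, u) for u in targets)
        # triadic closure: occasionally link v to friends of its new friends
        for u in list(targets):
            for w in list(G.neighbors(u)):
                if w != v and not G.has_edge(v, w) and random.random() < p_triad:
                    G.add_edge(v, w)
    return G, ages

G, ages = generate()
print(G.number_of_nodes(), G.number_of_edges())
```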
1702.01466
Hongyu Gong
Hongyu Gong, Jiaqi Mu, Suma Bhat, Pramod Viswanath
Prepositions in Context
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prepositions are highly polysemous, and their variegated senses encode significant semantic information. In this paper we match each preposition's complement and attachment and their interplay crucially to the geometry of the word vectors to the left and right of the preposition. Extracting such features from the vast number of instances of each preposition and clustering them makes for an efficient preposition sense disambiguation (PSD) algorithm, which is comparable to, and better than, the state of the art on two benchmark datasets. Our reliance on no external linguistic resource allows us to scale the PSD algorithm to a large WikiCorpus and learn sense-specific preposition representations -- which we show to encode semantic relations and paraphrasing of verb-particle compounds, via simple vector operations.
[ { "version": "v1", "created": "Sun, 5 Feb 2017 23:16:01 GMT" } ]
2017-02-07T00:00:00
[ [ "Gong", "Hongyu", "" ], [ "Mu", "Jiaqi", "" ], [ "Bhat", "Suma", "" ], [ "Viswanath", "Pramod", "" ] ]
TITLE: Prepositions in Context ABSTRACT: Prepositions are highly polysemous, and their variegated senses encode significant semantic information. In this paper we match each preposition's complement and attachment and their interplay crucially to the geometry of the word vectors to the left and right of the preposition. Extracting such features from the vast number of instances of each preposition and clustering them makes for an efficient preposition sense disambiguation (PSD) algorithm, which is comparable to, and better than, the state of the art on two benchmark datasets. Our reliance on no external linguistic resource allows us to scale the PSD algorithm to a large WikiCorpus and learn sense-specific preposition representations -- which we show to encode semantic relations and paraphrasing of verb-particle compounds, via simple vector operations.
no_new_dataset
0.952618
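The preposition-sense method above represents each occurrence of a preposition by the word vectors to its left and right and clusters those representations. A minimal sketch of that representation-and-clustering step is given below; the random stand-in embeddings, the tiny vocabulary and the context window are placeholders for the pretrained vectors and features used in the paper.

```python
# Sketch of the clustering step described above: each occurrence of a
# preposition is represented by the word vectors to its left and right, and
# occurrences are clustered into proxy "senses". Embeddings are random
# stand-ins for pretrained vectors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = {"she": 0, "sat": 1, "on": 2, "the": 3, "chair": 4, "report": 5, "focused": 6}
E = rng.normal(size=(len(vocab), 50))          # stand-in word embeddings

def occurrence_vector(tokens, idx, window=2):
    """Concatenate the mean left-context and mean right-context vectors."""
    left = [E[vocab[t]] for t in tokens[max(0, idx - window):idx] if t in vocab]
    right = [E[vocab[t]] for t in tokens[idx + 1:idx + 1 + window] if t in vocab]
    mean = lambda vs: np.mean(vs, axis=0) if vs else np.zeros(E.shape[1])
    return np.concatenate([mean(left), mean(right)])

# Each item is (tokenized sentence, index of the preposition "on").
occurrences = [(["she", "sat", "on", "the", "chair"], 2),
               (["the", "report", "focused", "on", "the", "chair"], 3),
               (["she", "focused", "on", "the", "report"], 2)]
X = np.stack([occurrence_vector(toks, i) for toks, i in occurrences])
senses = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(senses)   # cluster id (proxy sense) per occurrence of "on"
```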
1702.01638
Xinyu Li
Xinyu Li, Yanyi Zhang, Jianyu Zhang, Shuhong Chen, Ivan Marsic, Richard A. Farneth, Randall S. Burd
Concurrent Activity Recognition with Multimodal CNN-LSTM Structure
14 pages, 12 figures, under review
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a system that recognizes concurrent activities from real-world data captured by multiple sensors of different types. The recognition is achieved in two steps. First, we extract spatial and temporal features from the multimodal data. We feed each datatype into a convolutional neural network that extracts spatial features, followed by a long short-term memory network that extracts temporal information from the sensory data. The extracted features are then fused for decision making in the second step. Second, we achieve concurrent activity recognition with a single classifier that encodes a binary output vector in which elements indicate whether the corresponding activity types are currently in progress. We tested our system with three datasets from different domains recorded using different sensors and achieved performance comparable to existing systems designed specifically for those domains. Our system is the first to address concurrent activity recognition with multisensory data using a single model, which is scalable, simple to train and easy to deploy.
[ { "version": "v1", "created": "Mon, 6 Feb 2017 15:01:45 GMT" } ]
2017-02-07T00:00:00
[ [ "Li", "Xinyu", "" ], [ "Zhang", "Yanyi", "" ], [ "Zhang", "Jianyu", "" ], [ "Chen", "Shuhong", "" ], [ "Marsic", "Ivan", "" ], [ "Farneth", "Richard A.", "" ], [ "Burd", "Randall S.", "" ] ]
TITLE: Concurrent Activity Recognition with Multimodal CNN-LSTM Structure ABSTRACT: We introduce a system that recognizes concurrent activities from real-world data captured by multiple sensors of different types. The recognition is achieved in two steps. First, we extract spatial and temporal features from the multimodal data. We feed each datatype into a convolutional neural network that extracts spatial features, followed by a long short-term memory network that extracts temporal information from the sensory data. The extracted features are then fused for decision making in the second step. Second, we achieve concurrent activity recognition with a single classifier that encodes a binary output vector in which elements indicate whether the corresponding activity types are currently in progress. We tested our system with three datasets from different domains recorded using different sensors and achieved performance comparable to existing systems designed specifically for those domains. Our system is the first to address concurrent activity recognition with multisensory data using a single model, which is scalable, simple to train and easy to deploy.
no_new_dataset
0.948537
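The CNN-LSTM record above is concrete enough to sketch the architecture family it describes: one CNN-plus-LSTM branch per modality, feature fusion, and a single multi-label (sigmoid) head whose outputs indicate which activities are in progress. The PyTorch sketch below follows that outline; the layer sizes, the two example modalities and the number of activities are assumptions, not the paper's configuration.

```python
# Minimal PyTorch sketch of the architecture family described above:
# per-modality CNN -> LSTM extractors whose outputs are fused and passed to a
# single multi-label (sigmoid) classifier.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    def __init__(self, in_channels, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)

    def forward(self, x):            # x: (batch, channels, time)
        h = self.conv(x)             # spatial features per time step
        h = h.transpose(1, 2)        # -> (batch, time, 32) for the LSTM
        _, (h_n, _) = self.lstm(h)   # temporal summary
        return h_n[-1]               # (batch, hidden)

class ConcurrentActivityNet(nn.Module):
    def __init__(self, modality_channels=(3, 6), n_activities=10):
        super().__init__()
        self.branches = nn.ModuleList(ModalityBranch(c) for c in modality_channels)
        self.head = nn.Linear(64 * len(modality_channels), n_activities)

    def forward(self, inputs):       # inputs: list of per-modality tensors
        fused = torch.cat([b(x) for b, x in zip(self.branches, inputs)], dim=1)
        return self.head(fused)      # logits; one sigmoid output per activity

model = ConcurrentActivityNet()
video = torch.randn(4, 3, 100)       # e.g. video-derived features
imu = torch.randn(4, 6, 100)         # e.g. accelerometer + gyroscope
logits = model([video, imu])
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4, 10)).float())
```

The binary cross-entropy over independent sigmoid outputs is what lets several activities be flagged as "in progress" at the same time, which is the defining property of the concurrent setting.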
1702.01711
Rodrigo Agerri
I\~naki San Vicente, Rodrigo Agerri, German Rigau
Q-WordNet PPV: Simple, Robust and (almost) Unsupervised Generation of Polarity Lexicons for Multiple Languages
8 pages plus 2 pages of references
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014), pages 88-97, Gothenburg, Sweden, April 26-30 2014
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents a simple, robust and (almost) unsupervised dictionary-based method, qwn-ppv (Q-WordNet as Personalized PageRanking Vector), to automatically generate polarity lexicons. We show that qwn-ppv outperforms other automatically generated lexicons for the four extrinsic evaluations presented here. It also shows very competitive and robust results with respect to manually annotated ones. Results suggest that no single lexicon is best for every task and dataset and that the intrinsic evaluation of polarity lexicons is not a good performance indicator on a Sentiment Analysis task. The qwn-ppv method makes it easy to create quality polarity lexicons whenever no domain-based annotated corpora are available for a given language.
[ { "version": "v1", "created": "Mon, 6 Feb 2017 17:14:29 GMT" } ]
2017-02-07T00:00:00
[ [ "Vicente", "Iñaki San", "" ], [ "Agerri", "Rodrigo", "" ], [ "Rigau", "German", "" ] ]
TITLE: Q-WordNet PPV: Simple, Robust and (almost) Unsupervised Generation of Polarity Lexicons for Multiple Languages ABSTRACT: This paper presents a simple, robust and (almost) unsupervised dictionary-based method, qwn-ppv (Q-WordNet as Personalized PageRanking Vector), to automatically generate polarity lexicons. We show that qwn-ppv outperforms other automatically generated lexicons for the four extrinsic evaluations presented here. It also shows very competitive and robust results with respect to manually annotated ones. Results suggest that no single lexicon is best for every task and dataset and that the intrinsic evaluation of polarity lexicons is not a good performance indicator on a Sentiment Analysis task. The qwn-ppv method makes it easy to create quality polarity lexicons whenever no domain-based annotated corpora are available for a given language.
no_new_dataset
0.948917
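qwn-ppv is built on Personalized PageRank over a WordNet-derived graph. The sketch below shows the same propagation on a tiny, hypothetical word graph using networkx: one personalized run seeded with positive words, one with negative words, and the difference of the two score vectors as the polarity. The seed words and edges are invented for illustration; the paper's graph, seeds and evaluation are far richer.

```python
# Illustration of the Personalized PageRank idea behind qwn-ppv, on a toy
# word graph instead of the real WordNet-based graph used in the paper.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("good", "great"), ("great", "excellent"), ("good", "fine"),
                  ("bad", "awful"), ("awful", "terrible"), ("bad", "poor"),
                  ("fine", "ok"), ("ok", "poor")])

def ppv_scores(graph, seeds, alpha=0.85):
    # Personalization vector concentrated on the seed words.
    personalization = {n: (1.0 if n in seeds else 0.0) for n in graph.nodes}
    return nx.pagerank(graph, alpha=alpha, personalization=personalization)

pos = ppv_scores(G, {"good"})
neg = ppv_scores(G, {"bad"})
# Polarity = difference of the two propagated score vectors.
lexicon = {w: pos[w] - neg[w] for w in G.nodes}
for w, s in sorted(lexicon.items(), key=lambda kv: -kv[1]):
    print(f"{w:10s} {s:+.4f}")
```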
1702.01713
Nikolaos Polatidis Mr
Nikolaos Polatidis, Christos K. Georgiadis
A dynamic multi-level collaborative filtering method for improved recommendations
null
Computer Standards & Interfaces, 51, 14-21 (2017)
10.1016/j.csi.2016.10.014
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most widely used approaches for providing recommendations in various online environments such as e-commerce is collaborative filtering. Although this is a simple method for recommending items or services, accuracy and quality problems still exist. Thus, we propose a dynamic multi-level collaborative filtering method that improves the quality of the recommendations. The proposed method is based on positive and negative adjustments and can be used in different domains that utilize collaborative filtering to increase the quality of the user experience. Furthermore, the effectiveness of the proposed method is shown by providing an extensive experimental evaluation based on three real datasets and by comparisons to alternative methods.
[ { "version": "v1", "created": "Mon, 6 Feb 2017 17:19:07 GMT" } ]
2017-02-07T00:00:00
[ [ "Polatidis", "Nikolaos", "" ], [ "Georgiadis", "Christos K.", "" ] ]
TITLE: A dynamic multi-level collaborative filtering method for improved recommendations ABSTRACT: One of the most widely used approaches for providing recommendations in various online environments such as e-commerce is collaborative filtering. Although this is a simple method for recommending items or services, accuracy and quality problems still exist. Thus, we propose a dynamic multi-level collaborative filtering method that improves the quality of the recommendations. The proposed method is based on positive and negative adjustments and can be used in different domains that utilize collaborative filtering to increase the quality of the user experience. Furthermore, the effectiveness of the proposed method is shown by providing an extensive experimental evaluation based on three real datasets and by comparisons to alternative methods.
no_new_dataset
0.952131
1702.01721
Afshin Dehghan
Afshin Dehghan, Syed Zain Masood, Guang Shu, Enrique. G. Ortiz
View Independent Vehicle Make, Model and Color Recognition Using Convolutional Neural Network
7 Pages
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the details of Sighthound's fully automated vehicle make, model and color recognition system. The backbone of our system is a deep convolutional neural network that is not only computationally inexpensive, but also provides state-of-the-art results on several competitive benchmarks. Additionally, our deep network is trained on a large dataset of several million images which are labeled through a semi-automated process. Finally we test our system on several public datasets as well as our own internal test dataset. Our results show that we outperform other methods on all benchmarks by significant margins. Our model is available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud
[ { "version": "v1", "created": "Mon, 6 Feb 2017 17:47:08 GMT" } ]
2017-02-07T00:00:00
[ [ "Dehghan", "Afshin", "" ], [ "Masood", "Syed Zain", "" ], [ "Shu", "Guang", "" ], [ "Ortiz", "Enrique. G.", "" ] ]
TITLE: View Independent Vehicle Make, Model and Color Recognition Using Convolutional Neural Network ABSTRACT: This paper describes the details of Sighthound's fully automated vehicle make, model and color recognition system. The backbone of our system is a deep convolutional neural network that is not only computationally inexpensive, but also provides state-of-the-art results on several competitive benchmarks. Additionally, our deep network is trained on a large dataset of several million images which are labeled through a semi-automated process. Finally we test our system on several public datasets as well as our own internal test dataset. Our results show that we outperform other methods on all benchmarks by significant margins. Our model is available to developers through the Sighthound Cloud API at https://www.sighthound.com/products/cloud
new_dataset
0.955068
1604.01170
Antonia Godoy-Lorite
Antonia Godoy-Lorite, Roger Guimera, Cristopher Moore, Marta Sales-Pardo
Accurate and scalable social recommendation using mixed-membership stochastic block models
9 pages, 4 figures
Proc. Natl. Acad. Sci. USA 113 (50) , 14207 -14212 (2016)
10.1073/pnas.1606316113
null
cs.SI cs.IR cs.LG physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With ever-increasing amounts of online information available, modeling and predicting individual preferences (for books or articles, for example) is becoming more and more important. Good predictions enable us to improve advice to users, and obtain a better understanding of the socio-psychological processes that determine those preferences. We have developed a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of individuals' preferences. Our approach is based on the explicit assumption that there are groups of individuals and of items, and that the preferences of an individual for an item are determined only by their group memberships. Importantly, we allow each individual and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches, such as matrix factorization, we do not assume implicitly or explicitly that individuals in each group prefer items in a single group of items. The resulting overlapping groups and the predicted preferences can be inferred with an expectation-maximization algorithm whose running time scales linearly (per iteration). Our approach enables us to predict individual preferences in large datasets, and is considerably more accurate than the current algorithms for such large datasets.
[ { "version": "v1", "created": "Tue, 5 Apr 2016 08:28:08 GMT" }, { "version": "v2", "created": "Wed, 6 Apr 2016 07:55:35 GMT" } ]
2017-02-06T00:00:00
[ [ "Godoy-Lorite", "Antonia", "" ], [ "Guimera", "Roger", "" ], [ "Moore", "Cristopher", "" ], [ "Sales-Pardo", "Marta", "" ] ]
TITLE: Accurate and scalable social recommendation using mixed-membership stochastic block models ABSTRACT: With ever-increasing amounts of online information available, modeling and predicting individual preferences (for books or articles, for example) is becoming more and more important. Good predictions enable us to improve advice to users, and obtain a better understanding of the socio-psychological processes that determine those preferences. We have developed a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of individuals' preferences. Our approach is based on the explicit assumption that there are groups of individuals and of items, and that the preferences of an individual for an item are determined only by their group memberships. Importantly, we allow each individual and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches, such as matrix factorization, we do not assume implicitly or explicitly that individuals in each group prefer items in a single group of items. The resulting overlapping groups and the predicted preferences can be inferred with an expectation-maximization algorithm whose running time scales linearly (per iteration). Our approach enables us to predict individual preferences in large datasets, and is considerably more accurate than the current algorithms for such large datasets.
no_new_dataset
0.944893
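The mixed-membership model above states its prediction rule explicitly: a user's and an item's group memberships, together with per-group-pair rating probabilities, determine the rating distribution. The snippet below implements only that prediction step with randomly initialized parameters; in the paper the parameters are fitted with the scalable expectation-maximization algorithm, which is not reproduced here.

```python
# Prediction step of a mixed-membership model of the kind described above.
# Parameters are random placeholders; the paper fits them with EM.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, K, L, R = 50, 40, 4, 3, 5        # R rating levels (1..R)

theta = rng.dirichlet(np.ones(K), size=n_users)    # user memberships (rows sum to 1)
eta = rng.dirichlet(np.ones(L), size=n_items)      # item memberships
p = rng.dirichlet(np.ones(R), size=(K, L))         # p[k, l, r] = P(rating r+1 | k, l)

def rating_distribution(u, i):
    """P(rating | user u, item i) = sum_{k,l} theta_uk * eta_il * p_kl(rating)."""
    return np.einsum("k,l,klr->r", theta[u], eta[i], p)

def expected_rating(u, i):
    dist = rating_distribution(u, i)
    return float(np.dot(dist, np.arange(1, R + 1)))

print(rating_distribution(0, 0), expected_rating(0, 0))
```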
1702.00820
Theodoros Rekatsinas
Theodoros Rekatsinas, Xu Chu, Ihab F. Ilyas, Christopher R\'e
HoloClean: Holistic Data Repairs with Probabilistic Inference
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce HoloClean, a framework for holistic data repairing driven by probabilistic inference. HoloClean unifies existing qualitative data repairing approaches, which rely on integrity constraints or external data sources, with quantitative data repairing methods, which leverage statistical properties of the input data. Given an inconsistent dataset as input, HoloClean automatically generates a probabilistic program that performs data repairing. Inspired by recent theoretical advances in probabilistic inference, we introduce a series of optimizations which ensure that inference over HoloClean's probabilistic model scales to instances with millions of tuples. We show that HoloClean scales to instances with millions of tuples and find data repairs with an average precision of ~90% and an average recall of above ~76% across a diverse array of datasets exhibiting different types of errors. This yields an average F1 improvement of more than 2x against state-of-the-art methods.
[ { "version": "v1", "created": "Thu, 2 Feb 2017 20:25:41 GMT" } ]
2017-02-06T00:00:00
[ [ "Rekatsinas", "Theodoros", "" ], [ "Chu", "Xu", "" ], [ "Ilyas", "Ihab F.", "" ], [ "Ré", "Christopher", "" ] ]
TITLE: HoloClean: Holistic Data Repairs with Probabilistic Inference ABSTRACT: We introduce HoloClean, a framework for holistic data repairing driven by probabilistic inference. HoloClean unifies existing qualitative data repairing approaches, which rely on integrity constraints or external data sources, with quantitative data repairing methods, which leverage statistical properties of the input data. Given an inconsistent dataset as input, HoloClean automatically generates a probabilistic program that performs data repairing. Inspired by recent theoretical advances in probabilistic inference, we introduce a series of optimizations which ensure that inference over HoloClean's probabilistic model scales to instances with millions of tuples. We show that HoloClean scales to instances with millions of tuples and find data repairs with an average precision of ~90% and an average recall of above ~76% across a diverse array of datasets exhibiting different types of errors. This yields an average F1 improvement of more than 2x against state-of-the-art methods.
no_new_dataset
0.94699
1702.00833
Maciej Wielgosz
Maciej Wielgosz and Andrzej Skocze\'n and Matej Mertik
Recurrent Neural Networks for anomaly detection in the Post-Mortem time series of LHC superconducting magnets
Related to arxiv:1611.06241
null
null
null
physics.ins-det cs.LG physics.acc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a model based on the Deep Learning algorithms LSTM and GRU for facilitating anomaly detection in Large Hadron Collider superconducting magnets. We used high-resolution data available in the Post Mortem database to train a set of models and chose the best possible set of their hyper-parameters. Using a Deep Learning approach allowed us to examine a vast body of data and extract the fragments which require further expert examination and are regarded as anomalies. The presented method does not require tedious manual threshold setting and operator attention at the stage of the system setup. Instead, an automatic approach is proposed which, according to our experiments, achieves an accuracy of 99%. This is reached for the largest dataset of 302 MB and the following network architecture: single-layer LSTM, 128 cells, 20 epochs of training, look_back=16, look_ahead=128, grid=100 and the Adam optimizer. All the experiments were run on an Nvidia Tesla K80 GPU.
[ { "version": "v1", "created": "Thu, 2 Feb 2017 21:32:32 GMT" } ]
2017-02-06T00:00:00
[ [ "Wielgosz", "Maciej", "" ], [ "Skoczeń", "Andrzej", "" ], [ "Mertik", "Matej", "" ] ]
TITLE: Recurrent Neural Networks for anomaly detection in the Post-Mortem time series of LHC superconducting magnets ABSTRACT: This paper presents a model based on the Deep Learning algorithms LSTM and GRU for facilitating anomaly detection in Large Hadron Collider superconducting magnets. We used high-resolution data available in the Post Mortem database to train a set of models and chose the best possible set of their hyper-parameters. Using a Deep Learning approach allowed us to examine a vast body of data and extract the fragments which require further expert examination and are regarded as anomalies. The presented method does not require tedious manual threshold setting and operator attention at the stage of the system setup. Instead, an automatic approach is proposed which, according to our experiments, achieves an accuracy of 99%. This is reached for the largest dataset of 302 MB and the following network architecture: single-layer LSTM, 128 cells, 20 epochs of training, look_back=16, look_ahead=128, grid=100 and the Adam optimizer. All the experiments were run on an Nvidia Tesla K80 GPU.
no_new_dataset
0.953275
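The LHC record above gives concrete hyper-parameters (single-layer LSTM, 128 cells, look_back=16, look_ahead=128, Adam). The sketch below shows how such a windowed forecaster can be set up and how prediction error can be turned into an anomaly flag; it uses PyTorch and a synthetic signal, and the thresholding rule is an assumption rather than the paper's procedure.

```python
# Sketch of the look_back / look_ahead setup described above: a single-layer
# LSTM reads a window of past samples and predicts the next block; large
# prediction error is then treated as an anomaly.
import numpy as np
import torch
import torch.nn as nn

look_back, look_ahead = 16, 128

def make_windows(series, look_back, look_ahead):
    X, Y = [], []
    for t in range(len(series) - look_back - look_ahead):
        X.append(series[t:t + look_back])
        Y.append(series[t + look_back:t + look_back + look_ahead])
    return (torch.tensor(np.array(X), dtype=torch.float32).unsqueeze(-1),
            torch.tensor(np.array(Y), dtype=torch.float32))

class Forecaster(nn.Module):
    def __init__(self, hidden=128, look_ahead=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, look_ahead)

    def forward(self, x):                 # x: (batch, look_back, 1)
        _, (h_n, _) = self.lstm(x)
        return self.out(h_n[-1])          # predict the next look_ahead samples

series = np.sin(np.linspace(0, 100, 5000)) + 0.01 * np.random.randn(5000)
X, Y = make_windows(series, look_back, look_ahead)
model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                        # a few full-batch steps, for illustration
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y)
    loss.backward()
    opt.step()
with torch.no_grad():
    errors = (model(X) - Y).pow(2).mean(dim=1)            # per-window error
anomalies = errors > errors.mean() + 3 * errors.std()     # simple flagging rule
```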
1702.00926
Seungryong Kim
Seungryong Kim, Dongbo Min, Bumsub Ham, Sangryul Jeon, Stephen Lin, Kwanghoon Sohn
FCSS: Fully Convolutional Self-Similarity for Dense Semantic Correspondence
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. To robustly match points among different instances within the same object class, we formulate FCSS using local self-similarity (LSS) within a fully convolutional network. In contrast to existing CNN-based descriptors, FCSS is inherently insensitive to intra-class appearance variations because of its LSS-based structure, while maintaining the precise localization ability of deep neural networks. The sampling patterns of local structure and the self-similarity measure are jointly learned within the proposed network in an end-to-end and multi-scale manner. As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in existing image datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks.
[ { "version": "v1", "created": "Fri, 3 Feb 2017 07:36:44 GMT" } ]
2017-02-06T00:00:00
[ [ "Kim", "Seungryong", "" ], [ "Min", "Dongbo", "" ], [ "Ham", "Bumsub", "" ], [ "Jeon", "Sangryul", "" ], [ "Lin", "Stephen", "" ], [ "Sohn", "Kwanghoon", "" ] ]
TITLE: FCSS: Fully Convolutional Self-Similarity for Dense Semantic Correspondence ABSTRACT: We present a descriptor, called fully convolutional self-similarity (FCSS), for dense semantic correspondence. To robustly match points among different instances within the same object class, we formulate FCSS using local self-similarity (LSS) within a fully convolutional network. In contrast to existing CNN-based descriptors, FCSS is inherently insensitive to intra-class appearance variations because of its LSS-based structure, while maintaining the precise localization ability of deep neural networks. The sampling patterns of local structure and the self-similarity measure are jointly learned within the proposed network in an end-to-end and multi-scale manner. As training data for semantic correspondence is rather limited, we propose to leverage object candidate priors provided in existing image datasets and also correspondence consistency between object pairs to enable weakly-supervised learning. Experiments demonstrate that FCSS outperforms conventional handcrafted descriptors and CNN-based descriptors on various benchmarks.
no_new_dataset
0.949623
1702.01015
Helge Holzmann
Helge Holzmann, Vinay Goel, Avishek Anand
ArchiveSpark: Efficient Web Archive Access, Extraction and Derivation
JCDL 2016, Newark, NJ, USA
null
10.1145/2910896.2910902
null
cs.DL cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Web archives are a valuable resource for researchers of various disciplines. However, to use them as a scholarly source, researchers require a tool that provides efficient access to Web archive data for extraction and derivation of smaller datasets. Besides efficient access we identify five other objectives based on practical researcher needs such as ease of use, extensibility and reusability. Towards these objectives we propose ArchiveSpark, a framework for efficient, distributed Web archive processing that builds a research corpus by working on existing and standardized data formats commonly held by Web archiving institutions. Performance optimizations in ArchiveSpark, facilitated by the use of a widely available metadata index, result in significant speed-ups of data processing. Our benchmarks show that ArchiveSpark is faster than alternative approaches without depending on any additional data stores while improving usability by seamlessly integrating queries and derivations with external tools.
[ { "version": "v1", "created": "Fri, 3 Feb 2017 14:17:02 GMT" } ]
2017-02-06T00:00:00
[ [ "Holzmann", "Helge", "" ], [ "Goel", "Vinay", "" ], [ "Anand", "Avishek", "" ] ]
TITLE: ArchiveSpark: Efficient Web Archive Access, Extraction and Derivation ABSTRACT: Web archives are a valuable resource for researchers of various disciplines. However, to use them as a scholarly source, researchers require a tool that provides efficient access to Web archive data for extraction and derivation of smaller datasets. Besides efficient access we identify five other objectives based on practical researcher needs such as ease of use, extensibility and reusability. Towards these objectives we propose ArchiveSpark, a framework for efficient, distributed Web archive processing that builds a research corpus by working on existing and standardized data formats commonly held by Web archiving institutions. Performance optimizations in ArchiveSpark, facilitated by the use of a widely available metadata index, result in significant speed-ups of data processing. Our benchmarks show that ArchiveSpark is faster than alternative approaches without depending on any additional data stores while improving usability by seamlessly integrating queries and derivations with external tools.
no_new_dataset
0.945147
1609.07042
Xiang Xiang
Xiang Xiang and Trac D. Tran
Pose-Selective Max Pooling for Measuring Similarity
The tutorial and program associated with this paper are available at https://github.com/eglxiang/ytf yet for non-commercial use
null
null
null
cs.CV cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we deal with two challenges for measuring the similarity of the subject identities in practical video-based face recognition - the variation of the head pose in uncontrolled environments and the computational expense of processing videos. Since the frame-wise feature mean is unable to characterize the pose diversity among frames, we define and preserve the overall pose diversity and closeness in a video. Then, identity will be the only source of variation across videos since the pose varies even within a single video. Instead of simply using all the frames, we select those faces whose pose point is closest to the centroid of the K-means cluster containing that pose point. Then, we represent a video as a bag of frame-wise deep face features while the number of features has been reduced from hundreds to K. Since the video representation can well represent the identity, now we measure the subject similarity between two videos as the max correlation among all possible pairs in the two bags of features. On the official 5,000 video-pairs of the YouTube Face dataset for face verification, our algorithm achieves a comparable performance with VGG-face that averages over deep features of all frames. Other vision tasks can also benefit from the generic idea of employing geometric cues to improve the descriptiveness of deep features.
[ { "version": "v1", "created": "Thu, 22 Sep 2016 15:59:38 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2016 18:21:05 GMT" }, { "version": "v3", "created": "Thu, 10 Nov 2016 04:05:29 GMT" }, { "version": "v4", "created": "Mon, 14 Nov 2016 04:10:09 GMT" } ]
2017-02-03T00:00:00
[ [ "Xiang", "Xiang", "" ], [ "Tran", "Trac D.", "" ] ]
TITLE: Pose-Selective Max Pooling for Measuring Similarity ABSTRACT: In this paper, we deal with two challenges for measuring the similarity of the subject identities in practical video-based face recognition - the variation of the head pose in uncontrolled environments and the computational expense of processing videos. Since the frame-wise feature mean is unable to characterize the pose diversity among frames, we define and preserve the overall pose diversity and closeness in a video. Then, identity will be the only source of variation across videos since the pose varies even within a single video. Instead of simply using all the frames, we select those faces whose pose point is closest to the centroid of the K-means cluster containing that pose point. Then, we represent a video as a bag of frame-wise deep face features while the number of features has been reduced from hundreds to K. Since the video representation can well represent the identity, now we measure the subject similarity between two videos as the max correlation among all possible pairs in the two bags of features. On the official 5,000 video-pairs of the YouTube Face dataset for face verification, our algorithm achieves a comparable performance with VGG-face that averages over deep features of all frames. Other vision tasks can also benefit from the generic idea of employing geometric cues to improve the descriptiveness of deep features.
no_new_dataset
0.947962
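The pose-selective max pooling abstract describes two implementable steps: select the frames whose pose is closest to each K-means centroid, and measure video similarity as the maximum correlation over all pairs of the selected frames' deep features. The sketch below follows those two steps, with random placeholders standing in for real pose estimates and CNN features.

```python
# Sketch of the frame selection and similarity measure described above:
# K-means on per-frame pose descriptors, keep the frame closest to each
# centroid, then score two videos by the maximum correlation over all pairs
# of the selected frames' deep features.
import numpy as np
from sklearn.cluster import KMeans

def select_frames(pose, K=5):
    """pose: (n_frames, d_pose). Return indices of K pose-diverse frames."""
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(pose)
    idx = []
    for k in range(K):
        members = np.where(km.labels_ == k)[0]
        d = np.linalg.norm(pose[members] - km.cluster_centers_[k], axis=1)
        idx.append(members[np.argmin(d)])      # frame closest to the centroid
    return np.array(idx)

def video_similarity(feat_a, feat_b):
    """Max correlation over all pairs of frame-level feature vectors."""
    best = -1.0
    for fa in feat_a:
        for fb in feat_b:
            best = max(best, float(np.corrcoef(fa, fb)[0, 1]))
    return best

rng = np.random.default_rng(0)
pose_a, pose_b = rng.normal(size=(200, 3)), rng.normal(size=(150, 3))      # stand-in pose angles
feat_a, feat_b = rng.normal(size=(200, 512)), rng.normal(size=(150, 512))  # stand-in CNN features
sel_a, sel_b = select_frames(pose_a), select_frames(pose_b)
print(video_similarity(feat_a[sel_a], feat_b[sel_b]))
```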
1701.09123
Rodrigo Agerri
Rodrigo Agerri and German Rigau
Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features
26 pages, 19 tables (submitted for publication on September 2015), Artificial Intelligence (2016)
Artificial Intelligence, 238, 63-82 (2016)
10.1016/j.artint.2016.05.003
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 16:36:06 GMT" } ]
2017-02-03T00:00:00
[ [ "Agerri", "Rodrigo", "" ], [ "Rigau", "German", "" ] ]
TITLE: Robust Multilingual Named Entity Recognition with Shallow Semi-Supervised Features ABSTRACT: We present a multilingual Named Entity Recognition approach based on a robust and general set of features across languages and datasets. Our system combines shallow local information with clustering semi-supervised features induced on large amounts of unlabeled text. Understanding via empirical experimentation how to effectively combine various types of clustering features allows us to seamlessly export our system to other datasets and languages. The result is a simple but highly competitive system which obtains state of the art results across five languages and twelve datasets. The results are reported on standard shared task evaluation data such as CoNLL for English, Spanish and Dutch. Furthermore, and despite the lack of linguistically motivated features, we also report best results for languages such as Basque and German. In addition, we demonstrate that our method also obtains very competitive results even when the amount of supervised data is cut by half, alleviating the dependency on manually annotated data. Finally, the results show that our emphasis on clustering features is crucial to develop robust out-of-domain models. The system and models are freely available to facilitate its use and guarantee the reproducibility of results.
no_new_dataset
0.946151
1702.00552
Aziz Mohaisen
Omar Al-Ibrahim and Aziz Mohaisen and Charles Kamhoua and Kevin Kwiat and Laurent Njilla
Beyond Free Riding: Quality of Indicators for Assessing Participation in Information Sharing for Threat Intelligence
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Threat intelligence sharing has become a growing concept, whereby entities can exchange patterns of threats with each other, in the form of indicators, within a community of trust for threat analysis and incident response. However, sharing threat-related information poses various risks to an organization pertaining to its security, privacy, and competitiveness. Given the coinciding benefits and risks of threat information sharing, some entities have adopted an elusive behavior of "free-riding" so that they can acquire the benefits of sharing without contributing much to the community. So far, the effectiveness of sharing has been viewed from the perspective of the amount of information exchanged as opposed to its quality. In this paper, we introduce the notion of quality of indicators (\qoi) for assessing the level of contribution by participants in information sharing for threat intelligence. We exemplify this notion through various metrics, including correctness, relevance, utility, and uniqueness of indicators. In order to realize the notion of \qoi, we conducted an empirical study and took a benchmark approach to define quality metrics; we then obtained a reference dataset and utilized tools from the machine learning literature for quality assessment. We compared these results against a model that only considers the volume of information as a metric for contribution, and unveiled various interesting observations, including the ability to spot low-quality contributions that are synonymous with free riding in threat information sharing.
[ { "version": "v1", "created": "Thu, 2 Feb 2017 06:35:55 GMT" } ]
2017-02-03T00:00:00
[ [ "Al-Ibrahim", "Omar", "" ], [ "Mohaisen", "Aziz", "" ], [ "Kamhoua", "Charles", "" ], [ "Kwiat", "Kevin", "" ], [ "Njilla", "Laurent", "" ] ]
TITLE: Beyond Free Riding: Quality of Indicators for Assessing Participation in Information Sharing for Threat Intelligence ABSTRACT: Threat intelligence sharing has become a growing concept, whereby entities can exchange patterns of threats with each other, in the form of indicators, within a community of trust for threat analysis and incident response. However, sharing threat-related information poses various risks to an organization pertaining to its security, privacy, and competitiveness. Given the coinciding benefits and risks of threat information sharing, some entities have adopted an elusive behavior of "free-riding" so that they can acquire the benefits of sharing without contributing much to the community. So far, the effectiveness of sharing has been viewed from the perspective of the amount of information exchanged as opposed to its quality. In this paper, we introduce the notion of quality of indicators (\qoi) for assessing the level of contribution by participants in information sharing for threat intelligence. We exemplify this notion through various metrics, including correctness, relevance, utility, and uniqueness of indicators. In order to realize the notion of \qoi, we conducted an empirical study and took a benchmark approach to define quality metrics; we then obtained a reference dataset and utilized tools from the machine learning literature for quality assessment. We compared these results against a model that only considers the volume of information as a metric for contribution, and unveiled various interesting observations, including the ability to spot low-quality contributions that are synonymous with free riding in threat information sharing.
no_new_dataset
0.950732
1702.00583
Mikhail Breslav
Mikhail Breslav, Tyson L. Hedrick, Stan Sclaroff, Margrit Betke
Automating Image Analysis by Annotating Landmarks with Deep Neural Networks
30 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image and video analysis is often a crucial step in the study of animal behavior and kinematics. Often these analyses require that the position of one or more animal landmarks are annotated (marked) in numerous images. The process of annotating landmarks can require a significant amount of time and tedious labor, which motivates the need for algorithms that can automatically annotate landmarks. In the community of scientists that use image and video analysis to study the 3D flight of animals, there has been a trend of developing more automated approaches for annotating landmarks, yet they fall short of being generally applicable. Inspired by the success of Deep Neural Networks (DNNs) on many problems in the field of computer vision, we investigate how suitable DNNs are for accurate and automatic annotation of landmarks in video datasets representative of those collected by scientists studying animals. Our work shows, through extensive experimentation on videos of hawkmoths, that DNNs are suitable for automatic and accurate landmark localization. In particular, we show that one of our proposed DNNs is more accurate than the current best algorithm for automatic localization of landmarks on hawkmoth videos. Moreover, we demonstrate how these annotations can be used to quantitatively analyze the 3D flight of a hawkmoth. To facilitate the use of DNNs by scientists from many different fields, we provide a self contained explanation of what DNNs are, how they work, and how to apply them to other datasets using the freely available library Caffe and supplemental code that we provide.
[ { "version": "v1", "created": "Thu, 2 Feb 2017 08:53:10 GMT" } ]
2017-02-03T00:00:00
[ [ "Breslav", "Mikhail", "" ], [ "Hedrick", "Tyson L.", "" ], [ "Sclaroff", "Stan", "" ], [ "Betke", "Margrit", "" ] ]
TITLE: Automating Image Analysis by Annotating Landmarks with Deep Neural Networks ABSTRACT: Image and video analysis is often a crucial step in the study of animal behavior and kinematics. Often these analyses require that the position of one or more animal landmarks are annotated (marked) in numerous images. The process of annotating landmarks can require a significant amount of time and tedious labor, which motivates the need for algorithms that can automatically annotate landmarks. In the community of scientists that use image and video analysis to study the 3D flight of animals, there has been a trend of developing more automated approaches for annotating landmarks, yet they fall short of being generally applicable. Inspired by the success of Deep Neural Networks (DNNs) on many problems in the field of computer vision, we investigate how suitable DNNs are for accurate and automatic annotation of landmarks in video datasets representative of those collected by scientists studying animals. Our work shows, through extensive experimentation on videos of hawkmoths, that DNNs are suitable for automatic and accurate landmark localization. In particular, we show that one of our proposed DNNs is more accurate than the current best algorithm for automatic localization of landmarks on hawkmoth videos. Moreover, we demonstrate how these annotations can be used to quantitatively analyze the 3D flight of a hawkmoth. To facilitate the use of DNNs by scientists from many different fields, we provide a self contained explanation of what DNNs are, how they work, and how to apply them to other datasets using the freely available library Caffe and supplemental code that we provide.
no_new_dataset
0.94256
1702.00585
Massimo Franceschet
Massimo Franceschet and Enrico Bozzo
The temporalized Massey's method
arXiv admin note: text overlap with arXiv:1701.03363
null
null
null
cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose and thoroughly investigate a temporalized version of the popular Massey technique for rating actors in sport competitions. The method can be described as a dynamic temporal process in which team ratings are updated at every match according to their performance during the match and the strength of the opponent team. Using the Italian soccer dataset, we empirically show that the method has good foresight prediction accuracy.
[ { "version": "v1", "created": "Thu, 2 Feb 2017 08:54:32 GMT" } ]
2017-02-03T00:00:00
[ [ "Franceschet", "Massimo", "" ], [ "Bozzo", "Enrico", "" ] ]
TITLE: The temporalized Massey's method ABSTRACT: We propose and thoroughly investigate a temporalized version of the popular Massey technique for rating actors in sport competitions. The method can be described as a dynamic temporal process in which team ratings are updated at every match according to their performance during the match and the strength of the opponent team. Using the Italian soccer dataset, we empirically show that the method has good foresight prediction accuracy.
no_new_dataset
0.952353
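Classic Massey ratings come from a least-squares fit over all point differentials; the temporalized variant above updates ratings match by match. The snippet below is an illustrative online update of that flavour (nudge both teams' ratings toward the observed margin); the exact update rule and learning rate of the paper are not reproduced, so treat this purely as a sketch of the idea.

```python
# Illustrative match-by-match rating update in the spirit of a temporalized
# Massey method. The update rule and learning rate are assumptions made for
# this example, not the paper's exact formulation.
from collections import defaultdict

def temporal_ratings(matches, lr=0.1):
    """matches: iterable of (home, away, home_goals, away_goals) in time order."""
    rating = defaultdict(float)
    for home, away, gh, ga in matches:
        margin = gh - ga
        predicted = rating[home] - rating[away]
        error = margin - predicted           # how wrong the current ratings were
        rating[home] += lr * error           # better performer moves up
        rating[away] -= lr * error           # opponent moves down by the same amount
    return dict(rating)

season = [("Juventus", "Roma", 2, 0), ("Milan", "Juventus", 1, 1),
          ("Roma", "Milan", 3, 1), ("Juventus", "Milan", 2, 1)]
for team, r in sorted(temporal_ratings(season).items(), key=lambda kv: -kv[1]):
    print(f"{team:10s} {r:+.3f}")
```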
1702.00619
Tarcisio Souza Costa
Tarcisio Souza and Elena Demidova and Thomas Risse and Helge Holzmann and Gerhard Gossen and Julian Szymanski
Semantic URL Analytics to Support Efficient Annotation of Large Scale Web Archives
null
null
10.1007/978-3-319-27932-9_14
null
cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Long-term Web archives comprise Web documents gathered over longer time periods and can easily reach hundreds of terabytes in size. Semantic annotations such as named entities can facilitate intelligent access to the Web archive data. However, the annotation of the entire archive content on this scale is often infeasible. The most efficient way to access the documents within Web archives is provided through their URLs, which are typically stored in dedicated index files. The URLs of the archived Web documents can contain semantic information and can offer an efficient way to obtain initial semantic annotations for the archived documents. In this paper, we analyse the applicability of semantic analysis techniques such as named entity extraction to the URLs in a Web archive. We evaluate the precision of the named entity extraction from the URLs in the Popular German Web dataset and analyse the proportion of the archived URLs from 1,444 popular domains in the time interval from 2000 to 2012 to which these techniques are applicable. Our results demonstrate that named entity recognition can be successfully applied to a large number of URLs in our Web archive and provide a good starting point to efficiently annotate large scale collections of Web documents.
[ { "version": "v1", "created": "Thu, 2 Feb 2017 11:09:53 GMT" } ]
2017-02-03T00:00:00
[ [ "Souza", "Tarcisio", "" ], [ "Demidova", "Elena", "" ], [ "Risse", "Thomas", "" ], [ "Holzmann", "Helge", "" ], [ "Gossen", "Gerhard", "" ], [ "Szymanski", "Julian", "" ] ]
TITLE: Semantic URL Analytics to Support Efficient Annotation of Large Scale Web Archives ABSTRACT: Long-term Web archives comprise Web documents gathered over longer time periods and can easily reach hundreds of terabytes in size. Semantic annotations such as named entities can facilitate intelligent access to the Web archive data. However, the annotation of the entire archive content on this scale is often infeasible. The most efficient way to access the documents within Web archives is provided through their URLs, which are typically stored in dedicated index files. The URLs of the archived Web documents can contain semantic information and can offer an efficient way to obtain initial semantic annotations for the archived documents. In this paper, we analyse the applicability of semantic analysis techniques such as named entity extraction to the URLs in a Web archive. We evaluate the precision of the named entity extraction from the URLs in the Popular German Web dataset and analyse the proportion of the archived URLs from 1,444 popular domains in the time interval from 2000 to 2012 to which these techniques are applicable. Our results demonstrate that named entity recognition can be successfully applied to a large number of URLs in our Web archive and provide a good starting point to efficiently annotate large scale collections of Web documents.
no_new_dataset
0.95418
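The URL-analytics record above amounts to a small pipeline: turn an archived URL into word-like tokens, then run named entity extraction on the result. The sketch below uses urllib and spaCy's small English model purely as an example (the model must be downloaded separately); the URLs are invented, and the paper itself works on a German Web archive read from index files.

```python
# Sketch of extracting candidate named entities directly from archived URLs:
# split the path into word-like tokens and run an off-the-shelf NER model on
# the result. spaCy's English model is used here only as an example.
import re
from urllib.parse import urlparse
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes: python -m spacy download en_core_web_sm

def url_to_text(url):
    path = urlparse(url).path
    tokens = re.split(r"[/\-_.+]+", path)
    return " ".join(t for t in tokens if t and t.lower() not in {"html", "htm"})

urls = ["http://example.com/news/2008/angela-merkel-visits-paris.html",
        "http://example.com/sport/fc-bayern-muenchen/match-report-2010.html"]
for url in urls:
    doc = nlp(url_to_text(url))
    print(url, "->", [(ent.text, ent.label_) for ent in doc.ents])
```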
1602.07480
Lluis Gomez
Lluis Gomez, Anguelos Nicolaou, Dimosthenis Karatzas
Improving patch-based scene text script identification with ensembles of conjoined networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper focuses on the problem of script identification in scene text images. Facing this problem with state of the art CNN classifiers is not straightforward, as they fail to address a key characteristic of scene text instances: their extremely variable aspect ratio. Instead of resizing input images to a fixed aspect ratio as in the typical use of holistic CNN classifiers, we propose here a patch-based classification framework in order to preserve discriminative parts of the image that are characteristic of its class. We describe a novel method based on the use of ensembles of conjoined networks to jointly learn discriminative stroke-parts representations and their relative importance in a patch-based classification scheme. Our experiments with this learning procedure demonstrate state-of-the-art results in two public script identification datasets. In addition, we propose a new public benchmark dataset for the evaluation of multi-lingual scene text end-to-end reading systems. Experiments done in this dataset demonstrate the key role of script identification in a complete end-to-end system that combines our script identification method with a previously published text detector and an off-the-shelf OCR engine.
[ { "version": "v1", "created": "Wed, 24 Feb 2016 12:33:25 GMT" }, { "version": "v2", "created": "Wed, 1 Feb 2017 13:17:57 GMT" } ]
2017-02-02T00:00:00
[ [ "Gomez", "Lluis", "" ], [ "Nicolaou", "Anguelos", "" ], [ "Karatzas", "Dimosthenis", "" ] ]
TITLE: Improving patch-based scene text script identification with ensembles of conjoined networks ABSTRACT: This paper focuses on the problem of script identification in scene text images. Facing this problem with state of the art CNN classifiers is not straightforward, as they fail to address a key characteristic of scene text instances: their extremely variable aspect ratio. Instead of resizing input images to a fixed aspect ratio as in the typical use of holistic CNN classifiers, we propose here a patch-based classification framework in order to preserve discriminative parts of the image that are characteristic of its class. We describe a novel method based on the use of ensembles of conjoined networks to jointly learn discriminative stroke-parts representations and their relative importance in a patch-based classification scheme. Our experiments with this learning procedure demonstrate state-of-the-art results in two public script identification datasets. In addition, we propose a new public benchmark dataset for the evaluation of multi-lingual scene text end-to-end reading systems. Experiments done in this dataset demonstrate the key role of script identification in a complete end-to-end system that combines our script identification method with a previously published text detector and an off-the-shelf OCR engine.
new_dataset
0.969469
1604.02619
Lluis Gomez
Lluis Gomez-Bigorda and Dimosthenis Karatzas
TextProposals: a Text-specific Selective Search Algorithm for Word Spotting in the Wild
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivated by the success of powerful yet expensive techniques to recognize words in a holistic way, object proposal techniques emerge as an alternative to traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity-based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way. Our experiments demonstrate that the presented method is superior in its ability to produce good quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals on different standard benchmarks, including focused or incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers shows competitive performance in end-to-end word spotting, and, in some benchmarks, outperforms previously published results. Concretely, on the challenging ICDAR2015 Incidental Text dataset, we surpass the best-performing method from the last ICDAR Robust Reading Competition by more than 10 percent in f-score. Source code of the complete end-to-end system is available at https://github.com/lluisgomez/TextProposals
[ { "version": "v1", "created": "Sun, 10 Apr 2016 00:03:16 GMT" }, { "version": "v2", "created": "Tue, 24 May 2016 17:03:13 GMT" }, { "version": "v3", "created": "Wed, 1 Feb 2017 15:35:28 GMT" } ]
2017-02-02T00:00:00
[ [ "Gomez-Bigorda", "Lluis", "" ], [ "Karatzas", "Dimosthenis", "" ] ]
TITLE: TextProposals: a Text-specific Selective Search Algorithm for Word Spotting in the Wild ABSTRACT: Motivated by the success of powerful yet expensive techniques to recognize words in a holistic way, object proposal techniques emerge as an alternative to traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity-based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way. Our experiments demonstrate that the presented method is superior in its ability to produce good quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals on different standard benchmarks, including focused or incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers shows competitive performance in end-to-end word spotting, and, in some benchmarks, outperforms previously published results. Concretely, on the challenging ICDAR2015 Incidental Text dataset, we surpass the best-performing method from the last ICDAR Robust Reading Competition by more than 10 percent in f-score. Source code of the complete end-to-end system is available at https://github.com/lluisgomez/TextProposals
new_dataset
0.877161
1701.09049
Amit Awekar
Panthadeep Bhattacharjee and Amit Awekar
Batch Incremental Shared Nearest Neighbor Density Based Clustering Algorithm for Dynamic Datasets
6 pages, Accepted at ECIR 2017
null
null
null
cs.DB cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Incremental data mining algorithms process frequent updates to dynamic datasets efficiently by avoiding redundant computation. The existing incremental extension to the shared nearest neighbor density-based clustering (SNND) algorithm cannot handle deletions to the dataset and handles insertions only one point at a time. We present an incremental algorithm that overcomes both these bottlenecks by efficiently identifying the affected parts of clusters while processing updates to the dataset in batch mode. We show the effectiveness of our algorithm by performing experiments on large synthetic as well as real-world datasets. Our algorithm is up to four orders of magnitude faster than SNND and requires up to 60% extra memory compared to SNND while providing output identical to SNND.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 14:19:18 GMT" } ]
2017-02-02T00:00:00
[ [ "Bhattacharjee", "Panthadeep", "" ], [ "Awekar", "Amit", "" ] ]
TITLE: Batch Incremental Shared Nearest Neighbor Density Based Clustering Algorithm for Dynamic Datasets ABSTRACT: Incremental data mining algorithms process frequent updates to dynamic datasets efficiently by avoiding redundant computation. The existing incremental extension to the shared nearest neighbor density-based clustering (SNND) algorithm cannot handle deletions to the dataset and handles insertions only one point at a time. We present an incremental algorithm that overcomes both these bottlenecks by efficiently identifying the affected parts of clusters while processing updates to the dataset in batch mode. We show the effectiveness of our algorithm by performing experiments on large synthetic as well as real-world datasets. Our algorithm is up to four orders of magnitude faster than SNND and requires up to 60% extra memory compared to SNND while providing output identical to SNND.
no_new_dataset
0.949949
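The SNND record above builds on shared-nearest-neighbor similarity: two points are similar in proportion to the overlap of their k-nearest-neighbor lists. The sketch below computes only that SNN similarity graph with scikit-learn; the batch-incremental bookkeeping that is the paper's actual contribution is not shown.

```python
# The shared nearest neighbor (SNN) similarity that SNND-style clustering is
# built on: two points are similar when their k-nearest-neighbor lists overlap.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def snn_graph(X, k=10):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)        # +1: a point is its own neighbor
    _, idx = nn.kneighbors(X)
    neigh = [set(row[1:]) for row in idx]                  # drop self
    n = len(X)
    S = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in neigh[i]:
            if i in neigh[j]:                              # keep only mutual neighbors
                S[i, j] = S[j, i] = len(neigh[i] & neigh[j])
    return S

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])   # two toy clusters
S = snn_graph(X)
print("non-zero SNN links:", int((S > 0).sum() // 2))
```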
1702.00025
Rainer Kelz
Rainer Kelz and Gerhard Widmer
An Experimental Analysis of the Entanglement Problem in Neural-Network-based Music Transcription Systems
Submitted to AES Conference on Semantic Audio, Erlangen, Germany, 2017 June 22, 24
null
null
null
cs.SD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several recent polyphonic music transcription systems have utilized deep neural networks to achieve state of the art results on various benchmark datasets, pushing the envelope on framewise and note-level performance measures. Unfortunately we can observe a sort of glass ceiling effect. To investigate this effect, we provide a detailed analysis of the particular kinds of errors that state of the art deep neural transcription systems make, when trained and tested on a piano transcription task. We are ultimately forced to draw a rather disheartening conclusion: the networks seem to learn combinations of notes, and have a hard time generalizing to unseen combinations of notes. Furthermore, we speculate on various means to alleviate this situation.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 19:21:41 GMT" } ]
2017-02-02T00:00:00
[ [ "Kelz", "Rainer", "" ], [ "Widmer", "Gerhard", "" ] ]
TITLE: An Experimental Analysis of the Entanglement Problem in Neural-Network-based Music Transcription Systems ABSTRACT: Several recent polyphonic music transcription systems have utilized deep neural networks to achieve state of the art results on various benchmark datasets, pushing the envelope on framewise and note-level performance measures. Unfortunately we can observe a sort of glass ceiling effect. To investigate this effect, we provide a detailed analysis of the particular kinds of errors that state of the art deep neural transcription systems make, when trained and tested on a piano transcription task. We are ultimately forced to draw a rather disheartening conclusion: the networks seem to learn combinations of notes, and have a hard time generalizing to unseen combinations of notes. Furthermore, we speculate on various means to alleviate this situation.
no_new_dataset
0.949482
1702.00045
Le Lu
Holger R. Roth, Le Lu, Nathan Lay, Adam P. Harrison, Amal Farag, Andrew Sohn, Ronald M. Summers
Spatial Aggregation of Holistically-Nested Convolutional Neural Networks for Automated Pancreas Localization and Segmentation
This version was submitted to IEEE Trans. on Medical Imaging on Dec. 18th, 2016. The content of this article is covered by US Patent Applications of 62/345,606# and 62/450,681#
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. In this paper, we present an automated system using 3D computed tomography (CT) volumes via a two-stage cascaded approach: pancreas localization and segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a Dice similarity coefficient (DSC) of 81.27+/-6.27% in validation, which significantly outperforms previous state-of-the art methods that report DSCs of 71.80+/-10.70% and 78.01+/-8.20%, respectively, using the same dataset.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 20:22:15 GMT" } ]
2017-02-02T00:00:00
[ [ "Roth", "Holger R.", "" ], [ "Lu", "Le", "" ], [ "Lay", "Nathan", "" ], [ "Harrison", "Adam P.", "" ], [ "Farag", "Amal", "" ], [ "Sohn", "Andrew", "" ], [ "Summers", "Ronald M.", "" ] ]
TITLE: Spatial Aggregation of Holistically-Nested Convolutional Neural Networks for Automated Pancreas Localization and Segmentation ABSTRACT: Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. In this paper, we present an automated system using 3D computed tomography (CT) volumes via a two-stage cascaded approach: pancreas localization and segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a Dice similarity coefficient (DSC) of 81.27+/-6.27% in validation, which significantly outperforms previous state-of-the art methods that report DSCs of 71.80+/-10.70% and 78.01+/-8.20%, respectively, using the same dataset.
no_new_dataset
0.95096
1702.00158
Xiaqing Pan
Xiaqing Pan, Yueru Chen, C.-C. Jay Kuo
Design, Analysis and Application of A Volumetric Convolutional Neural Network
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The design, analysis and application of a volumetric convolutional neural network (VCNN) are studied in this work. Although many CNNs have been proposed in the literature, their design is empirical. In the design of the VCNN, we propose a feed-forward K-means clustering algorithm to determine the filter number and size at each convolutional layer systematically. For the analysis of the VCNN, the cause of confusing classes in the output of the VCNN is explained by analyzing the relationship between the filter weights (also known as anchor vectors) from the last fully-connected layer to the output. Furthermore, a hierarchical clustering method followed by a random forest classification method is proposed to boost the classification performance among confusing classes. For the application of the VCNN, we examine the 3D shape classification problem and conduct experiments on a popular ModelNet40 dataset. The proposed VCNN offers the state-of-the-art performance among all volume-based CNN methods.
[ { "version": "v1", "created": "Wed, 1 Feb 2017 08:32:11 GMT" } ]
2017-02-02T00:00:00
[ [ "Pan", "Xiaqing", "" ], [ "Chen", "Yueru", "" ], [ "Kuo", "C. -C. Jay", "" ] ]
TITLE: Design, Analysis and Application of A Volumetric Convolutional Neural Network ABSTRACT: The design, analysis and application of a volumetric convolutional neural network (VCNN) are studied in this work. Although many CNNs have been proposed in the literature, their design is empirical. In the design of the VCNN, we propose a feed-forward K-means clustering algorithm to determine the filter number and size at each convolutional layer systematically. For the analysis of the VCNN, the cause of confusing classes in the output of the VCNN is explained by analyzing the relationship between the filter weights (also known as anchor vectors) from the last fully-connected layer to the output. Furthermore, a hierarchical clustering method followed by a random forest classification method is proposed to boost the classification performance among confusing classes. For the application of the VCNN, we examine the 3D shape classification problem and conduct experiments on a popular ModelNet40 dataset. The proposed VCNN offers the state-of-the-art performance among all volume-based CNN methods.
no_new_dataset
0.949763
1702.00196
He Sun
Jiecao Chen and He Sun and David P. Woodruff and Qin Zhang
Communication-Optimal Distributed Clustering
A preliminary version of this paper appeared at the 30th Annual Conference on Neural Information Processing Systems (NIPS), 2016
null
null
null
cs.DS cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clustering large datasets is a fundamental problem with a number of applications in machine learning. Data is often collected on different sites and clustering needs to be performed in a distributed manner with low communication. We would like the quality of the clustering in the distributed setting to match that in the centralized setting for which all the data resides on a single site. In this work, we study both graph and geometric clustering problems in two distributed models: (1) a point-to-point model, and (2) a model with a broadcast channel. We give protocols in both models which we show are nearly optimal by proving almost matching communication lower bounds. Our work highlights the surprising power of a broadcast channel for clustering problems; roughly speaking, to spectrally cluster $n$ points or $n$ vertices in a graph distributed across $s$ servers, for a worst-case partitioning the communication complexity in a point-to-point model is $n \cdot s$, while in the broadcast model it is $n + s$. A similar phenomenon holds for the geometric setting as well. We implement our algorithms and demonstrate this phenomenon on real life datasets, showing that our algorithms are also very efficient in practice.
[ { "version": "v1", "created": "Wed, 1 Feb 2017 10:30:32 GMT" } ]
2017-02-02T00:00:00
[ [ "Chen", "Jiecao", "" ], [ "Sun", "He", "" ], [ "Woodruff", "David P.", "" ], [ "Zhang", "Qin", "" ] ]
TITLE: Communication-Optimal Distributed Clustering ABSTRACT: Clustering large datasets is a fundamental problem with a number of applications in machine learning. Data is often collected on different sites and clustering needs to be performed in a distributed manner with low communication. We would like the quality of the clustering in the distributed setting to match that in the centralized setting for which all the data resides on a single site. In this work, we study both graph and geometric clustering problems in two distributed models: (1) a point-to-point model, and (2) a model with a broadcast channel. We give protocols in both models which we show are nearly optimal by proving almost matching communication lower bounds. Our work highlights the surprising power of a broadcast channel for clustering problems; roughly speaking, to spectrally cluster $n$ points or $n$ vertices in a graph distributed across $s$ servers, for a worst-case partitioning the communication complexity in a point-to-point model is $n \cdot s$, while in the broadcast model it is $n + s$. A similar phenomenon holds for the geometric setting as well. We implement our algorithms and demonstrate this phenomenon on real life datasets, showing that our algorithms are also very efficient in practice.
no_new_dataset
0.955152
1702.00338
Eng-Jon Ong
Eng-Jon Ong and Sameed Husain and Miroslaw Bober
Siamese Network of Deep Fisher-Vector Descriptors for Image Retrieval
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper addresses the problem of large scale image retrieval, with the aim of accurately ranking the similarity of a large number of images to a given query image. To achieve this, we propose a novel Siamese network. This network consists of two computational strands, each comprising a CNN component followed by a Fisher vector component. The CNN component produces dense, deep convolutional descriptors that are then aggregated by the Fisher Vector method. Crucially, we propose to simultaneously learn both the CNN filter weights and Fisher Vector model parameters. This allows us to account for the evolving distribution of deep descriptors over the course of the learning process. We show that the proposed approach gives significant improvements over the state-of-the-art methods on the Oxford and Paris image retrieval datasets. Additionally, we provide a baseline performance measure for both these datasets with the inclusion of 1 million distractors.
[ { "version": "v1", "created": "Wed, 1 Feb 2017 16:20:00 GMT" } ]
2017-02-02T00:00:00
[ [ "Ong", "Eng-Jon", "" ], [ "Husain", "Sameed", "" ], [ "Bober", "Miroslaw", "" ] ]
TITLE: Siamese Network of Deep Fisher-Vector Descriptors for Image Retrieval ABSTRACT: This paper addresses the problem of large scale image retrieval, with the aim of accurately ranking the similarity of a large number of images to a given query image. To achieve this, we propose a novel Siamese network. This network consists of two computational strands, each comprising a CNN component followed by a Fisher vector component. The CNN component produces dense, deep convolutional descriptors that are then aggregated by the Fisher Vector method. Crucially, we propose to simultaneously learn both the CNN filter weights and Fisher Vector model parameters. This allows us to account for the evolving distribution of deep descriptors over the course of the learning process. We show that the proposed approach gives significant improvements over the state-of-the-art methods on the Oxford and Paris image retrieval datasets. Additionally, we provide a baseline performance measure for both these datasets with the inclusion of 1 million distractors.
no_new_dataset
0.948489
1702.00358
Florin Rusu
Yu Cheng, Weijie Zhao, Florin Rusu
OLA-RAW: Scalable Exploration over Raw Data
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In-situ processing has been proposed as a novel data exploration solution in many domains generating massive amounts of raw data, e.g., astronomy, since it provides immediate SQL querying over raw files. The performance of in-situ processing across a query workload is, however, limited by the speed of full scan, tokenizing, and parsing of the entire data. Online aggregation (OLA) has been introduced as an efficient method for data exploration that identifies uninteresting patterns faster by continuously estimating the result of a computation during the actual processing---the computation can be stopped as early as the estimate is accurate enough to be deemed uninteresting. However, existing OLA solutions have a high upfront cost of randomly shuffling and/or sampling the data. In this paper, we present OLA-RAW, a bi-level sampling scheme for parallel online aggregation over raw data. Sampling in OLA-RAW is query-driven and performed exclusively in-situ during the runtime query execution, without data reorganization. This is realized by a novel resource-aware bi-level sampling algorithm that processes data in random chunks concurrently and determines adaptively the number of sampled tuples inside a chunk. In order to avoid the cost of repetitive conversion from raw data, OLA-RAW builds and maintains a memory-resident bi-level sample synopsis incrementally. We implement OLA-RAW inside a modern in-situ data processing system and evaluate its performance across several real and synthetic datasets and file formats. Our results show that OLA-RAW chooses the sampling plan that minimizes the execution time and guarantees the required accuracy for each query in a given workload. The end result is a focused data exploration process that avoids unnecessary work and discards uninteresting data.
[ { "version": "v1", "created": "Wed, 1 Feb 2017 17:07:56 GMT" } ]
2017-02-02T00:00:00
[ [ "Cheng", "Yu", "" ], [ "Zhao", "Weijie", "" ], [ "Rusu", "Florin", "" ] ]
TITLE: OLA-RAW: Scalable Exploration over Raw Data ABSTRACT: In-situ processing has been proposed as a novel data exploration solution in many domains generating massive amounts of raw data, e.g., astronomy, since it provides immediate SQL querying over raw files. The performance of in-situ processing across a query workload is, however, limited by the speed of full scan, tokenizing, and parsing of the entire data. Online aggregation (OLA) has been introduced as an efficient method for data exploration that identifies uninteresting patterns faster by continuously estimating the result of a computation during the actual processing---the computation can be stopped as early as the estimate is accurate enough to be deemed uninteresting. However, existing OLA solutions have a high upfront cost of randomly shuffling and/or sampling the data. In this paper, we present OLA-RAW, a bi-level sampling scheme for parallel online aggregation over raw data. Sampling in OLA-RAW is query-driven and performed exclusively in-situ during the runtime query execution, without data reorganization. This is realized by a novel resource-aware bi-level sampling algorithm that processes data in random chunks concurrently and determines adaptively the number of sampled tuples inside a chunk. In order to avoid the cost of repetitive conversion from raw data, OLA-RAW builds and maintains a memory-resident bi-level sample synopsis incrementally. We implement OLA-RAW inside a modern in-situ data processing system and evaluate its performance across several real and synthetic datasets and file formats. Our results show that OLA-RAW chooses the sampling plan that minimizes the execution time and guarantees the required accuracy for each query in a given workload. The end result is a focused data exploration process that avoids unnecessary work and discards uninteresting data.
no_new_dataset
0.951369
1608.07327
Bo Qu
Bo Qu and Huijuan Wang
SIS Epidemic Spreading with Correlated Heterogeneous Infection Rates
null
null
10.1016/j.physa.2016.12.077
null
physics.soc-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epidemic spreading has been widely studied when each node may get infected by an infected neighbor with the same rate. However, the infection rate between a pair of nodes is usually heterogeneous and even correlated with their nodal degrees in the contact network. We aim to understand how such correlated heterogeneous infection rates influence the spreading on different network topologies. Motivated by real-world datasets, we propose a correlated heterogeneous Susceptible-Infected-Susceptible (CSIS) model which assumes that the infection rate $\beta_{ij}(=\beta_{ji})$ between nodes $i$ and $j$ is correlated with the degree of the two end nodes: $\beta_{ij}=c(d_id_j)^\alpha$, where $\alpha$ indicates the strength of the correlation and $c$ is selected so that the average infection rate is $1$. In order to understand the effect of such correlation on epidemic spreading, we consider as well the corresponding uncorrelated but still heterogeneous infection rate scenario as a reference, where the original correlated infection rates in our CSIS model are shuffled and reallocated to the links of the same network topology. We compare these two scenarios in the average fraction of infected nodes in the metastable state on Erd{\"o}s-R{\'e}nyi (ER) and scale-free (SF) networks with a similar average degree. Through continuous-time simulations, we find that, when the recovery rate is small, the negative correlation is more likely to help the epidemic spread and the positive correlation to prohibit the spreading; as the recovery rate increases beyond a critical value, the positive but not the negative correlation tends to help the spreading. Our findings are further analytically proved in a wheel network (one central node connects with each of the nodes in a ring) and validated on real-world networks with correlated heterogeneous interaction frequencies.
[ { "version": "v1", "created": "Thu, 25 Aug 2016 22:33:50 GMT" } ]
2017-02-01T00:00:00
[ [ "Qu", "Bo", "" ], [ "Wang", "Huijuan", "" ] ]
TITLE: SIS Epidemic Spreading with Correlated Heterogeneous Infection Rates ABSTRACT: Epidemic spreading has been widely studied when each node may get infected by an infected neighbor with the same rate. However, the infection rate between a pair of nodes is usually heterogeneous and even correlated with their nodal degrees in the contact network. We aim to understand how such correlated heterogeneous infection rates influence the spreading on different network topologies. Motivated by real-world datasets, we propose a correlated heterogeneous Susceptible-Infected-Susceptible (CSIS) model which assumes that the infection rate $\beta_{ij}(=\beta_{ji})$ between nodes $i$ and $j$ is correlated with the degree of the two end nodes: $\beta_{ij}=c(d_id_j)^\alpha$, where $\alpha$ indicates the strength of the correlation and $c$ is selected so that the average infection rate is $1$. In order to understand the effect of such correlation on epidemic spreading, we consider as well the corresponding uncorrelated but still heterogeneous infection rate scenario as a reference, where the original correlated infection rates in our CSIS model are shuffled and reallocated to the links of the same network topology. We compare these two scenarios in the average fraction of infected nodes in the metastable state on Erd{\"o}s-R{\'e}nyi (ER) and scale-free (SF) networks with a similar average degree. Through continuous-time simulations, we find that, when the recovery rate is small, the negative correlation is more likely to help the epidemic spread and the positive correlation to prohibit the spreading; as the recovery rate increases beyond a critical value, the positive but not the negative correlation tends to help the spreading. Our findings are further analytically proved in a wheel network (one central node connects with each of the nodes in a ring) and validated on real-world networks with correlated heterogeneous interaction frequencies.
no_new_dataset
0.954308
1609.09405
Siva Reddy
Yonatan Bisk, Siva Reddy, John Blitzer, Julia Hockenmaier, Mark Steedman
Evaluating Induced CCG Parsers on Grounded Semantic Parsing
EMNLP 2016, Table 2 erratum, Code and Freebase Semantic Parsing data URL
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We compare the effectiveness of four different syntactic CCG parsers for a semantic slot-filling task to explore how much syntactic supervision is required for downstream semantic analysis. This extrinsic, task-based evaluation provides a unique window to explore the strengths and weaknesses of semantics captured by unsupervised grammar induction systems. We release a new Freebase semantic parsing dataset called SPADES (Semantic PArsing of DEclarative Sentences) containing 93K cloze-style questions paired with answers. We evaluate all our models on this dataset. Our code and data are available at https://github.com/sivareddyg/graph-parser.
[ { "version": "v1", "created": "Thu, 29 Sep 2016 16:09:29 GMT" }, { "version": "v2", "created": "Tue, 31 Jan 2017 16:25:39 GMT" } ]
2017-02-01T00:00:00
[ [ "Bisk", "Yonatan", "" ], [ "Reddy", "Siva", "" ], [ "Blitzer", "John", "" ], [ "Hockenmaier", "Julia", "" ], [ "Steedman", "Mark", "" ] ]
TITLE: Evaluating Induced CCG Parsers on Grounded Semantic Parsing ABSTRACT: We compare the effectiveness of four different syntactic CCG parsers for a semantic slot-filling task to explore how much syntactic supervision is required for downstream semantic analysis. This extrinsic, task-based evaluation provides a unique window to explore the strengths and weaknesses of semantics captured by unsupervised grammar induction systems. We release a new Freebase semantic parsing dataset called SPADES (Semantic PArsing of DEclarative Sentences) containing 93K cloze-style questions paired with answers. We evaluate all our models on this dataset. Our code and data are available at https://github.com/sivareddyg/graph-parser.
new_dataset
0.954052
1701.07847
Dmitry Petrov
Dmitry Petrov, Boris Gutman, Alexander Ivanov, Joshua Faskowitz, Neda Jahanshad, Mikhail Belyaev, Paul Thompson
Structural Connectome Validation Using Pairwise Classification
Accepted for IEEE International Symposium on Biomedical Imaging 2017
null
null
null
q-bio.NC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we study the extent to which structural connectomes and topological derivative measures are unique to individual changes within human brains. To do so, we classify structural connectome pairs from two large longitudinal datasets as either belonging to the same individual or not. Our data is comprised of 227 individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and 226 from the Parkinson's Progression Markers Initiative (PPMI). We achieve 0.99 area under the ROC curve score for features which represent either weights or network structure of the connectomes (node degrees, PageRank and local efficiency). Our approach may be useful for eliminating noisy features as a preprocessing step in brain aging studies and early diagnosis classification problems.
[ { "version": "v1", "created": "Thu, 26 Jan 2017 19:13:36 GMT" }, { "version": "v2", "created": "Mon, 30 Jan 2017 19:55:15 GMT" } ]
2017-02-01T00:00:00
[ [ "Petrov", "Dmitry", "" ], [ "Gutman", "Boris", "" ], [ "Ivanov", "Alexander", "" ], [ "Faskowitz", "Joshua", "" ], [ "Jahanshad", "Neda", "" ], [ "Belyaev", "Mikhail", "" ], [ "Thompson", "Paul", "" ] ]
TITLE: Structural Connectome Validation Using Pairwise Classification ABSTRACT: In this work, we study the extent to which structural connectomes and topological derivative measures are unique to individual changes within human brains. To do so, we classify structural connectome pairs from two large longitudinal datasets as either belonging to the same individual or not. Our data is comprised of 227 individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and 226 from the Parkinson's Progression Markers Initiative (PPMI). We achieve 0.99 area under the ROC curve score for features which represent either weights or network structure of the connectomes (node degrees, PageRank and local efficiency). Our approach may be useful for eliminating noisy features as a preprocessing step in brain aging studies and early diagnosis classification problems.
no_new_dataset
0.953579
1701.08302
A Mani
Mani A and Rebeka Mukherjee
A Study of FOSS'2013 Survey Data Using Clustering Techniques
IEEE Women in Engineering Conference Paper: WIECON-ECE'2016 (Scheduled to appear in IEEE Xplore)
null
null
null
cs.AI cs.CY cs.SE stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
FOSS is an acronym for Free and Open Source Software. The FOSS 2013 survey primarily targets FOSS contributors, and the relevant anonymized dataset is publicly available under a CC BY-SA license. In this study, the dataset is analyzed from a critical perspective using statistical and clustering techniques (especially multiple correspondence analysis), with a strong focus on women contributors, towards discovering hidden trends and facts. Important inferences are drawn about development practices and other facets of the free software and OSS worlds.
[ { "version": "v1", "created": "Sat, 28 Jan 2017 16:52:13 GMT" }, { "version": "v2", "created": "Tue, 31 Jan 2017 17:18:01 GMT" } ]
2017-02-01T00:00:00
[ [ "A", "Mani", "" ], [ "Mukherjee", "Rebeka", "" ] ]
TITLE: A Study of FOSS'2013 Survey Data Using Clustering Techniques ABSTRACT: FOSS is an acronym for Free and Open Source Software. The FOSS 2013 survey primarily targets FOSS contributors, and the relevant anonymized dataset is publicly available under a CC BY-SA license. In this study, the dataset is analyzed from a critical perspective using statistical and clustering techniques (especially multiple correspondence analysis), with a strong focus on women contributors, towards discovering hidden trends and facts. Important inferences are drawn about development practices and other facets of the free software and OSS worlds.
no_new_dataset
0.950732
1701.08757
Dmitry Ignatov
Mikhail V. Goubko and Sergey O. Kuznetsov and Alexey A. Neznanov and Dmitry I. Ignatov
Bayesian Learning of Consumer Preferences for Residential Demand Response
null
IFAC-PapersOnLine, 49(32), 2016, p. 24-29, ISSN 2405-8963
10.1016/j.ifacol.2016.12.184
null
cs.LG cs.SY stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In coming years, residential consumers will face real-time electricity tariffs with energy prices varying day to day, and effective energy saving will require automation - a recommender system, which learns the consumer's preferences from her actions. A consumer chooses a scenario of home appliance use to balance her comfort level and the energy bill. We propose a Bayesian learning algorithm to estimate the comfort level function from the history of appliance use. In numeric experiments with datasets generated from a simulation model of a consumer interacting with small home appliances, the algorithm outperforms popular regression analysis tools. Our approach can be extended to control an air heating and conditioning system, which is responsible for up to half of a household's energy bill.
[ { "version": "v1", "created": "Fri, 27 Jan 2017 20:45:31 GMT" } ]
2017-02-01T00:00:00
[ [ "Goubko", "Mikhail V.", "" ], [ "Kuznetsov", "Sergey O.", "" ], [ "Neznanov", "Alexey A.", "" ], [ "Ignatov", "Dmitry I.", "" ] ]
TITLE: Bayesian Learning of Consumer Preferences for Residential Demand Response ABSTRACT: In coming years, residential consumers will face real-time electricity tariffs with energy prices varying day to day, and effective energy saving will require automation - a recommender system, which learns the consumer's preferences from her actions. A consumer chooses a scenario of home appliance use to balance her comfort level and the energy bill. We propose a Bayesian learning algorithm to estimate the comfort level function from the history of appliance use. In numeric experiments with datasets generated from a simulation model of a consumer interacting with small home appliances, the algorithm outperforms popular regression analysis tools. Our approach can be extended to control an air heating and conditioning system, which is responsible for up to half of a household's energy bill.
no_new_dataset
0.942507
1701.08799
Alan Kuhnle
Alan Kuhnle, Tianyi Pan, Md Abdul Alim, My T. Thai
Scalable Bicriteria Algorithms for the Threshold Activation Problem in Online Social Networks
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the Threshold Activation Problem (TAP): given social network $G$ and positive threshold $T$, find a minimum-size seed set $A$ that can trigger expected activation of at least $T$. We introduce the first scalable, parallelizable algorithm with performance guarantee for TAP suitable for datasets with millions of nodes and edges; we exploit the bicriteria nature of solutions to TAP to allow the user to control the running time versus accuracy of our algorithm through a parameter $\alpha \in (0,1)$: given $\eta > 0$, with probability $1 - \eta$ our algorithm returns a solution $A$ with expected activation greater than $T - 2 \alpha T$, and the size of the solution $A$ is within factor $1 + 4 \alpha T + \log ( T )$ of the optimal size. The algorithm runs in time $O \left( \alpha^{-2}\log \left( n / \eta \right) (n + m) |A| \right)$, where $n$, $m$, refer to the number of nodes, edges in the network. The performance guarantee holds for the general triggering model of internal influence and also incorporates external influence, provided a certain condition is met on the cost-effectivity of seed selection.
[ { "version": "v1", "created": "Mon, 30 Jan 2017 19:52:25 GMT" } ]
2017-02-01T00:00:00
[ [ "Kuhnle", "Alan", "" ], [ "Pan", "Tianyi", "" ], [ "Alim", "Md Abdul", "" ], [ "Thai", "My T.", "" ] ]
TITLE: Scalable Bicriteria Algorithms for the Threshold Activation Problem in Online Social Networks ABSTRACT: We consider the Threshold Activation Problem (TAP): given social network $G$ and positive threshold $T$, find a minimum-size seed set $A$ that can trigger expected activation of at least $T$. We introduce the first scalable, parallelizable algorithm with performance guarantee for TAP suitable for datasets with millions of nodes and edges; we exploit the bicriteria nature of solutions to TAP to allow the user to control the running time versus accuracy of our algorithm through a parameter $\alpha \in (0,1)$: given $\eta > 0$, with probability $1 - \eta$ our algorithm returns a solution $A$ with expected activation greater than $T - 2 \alpha T$, and the size of the solution $A$ is within factor $1 + 4 \alpha T + \log ( T )$ of the optimal size. The algorithm runs in time $O \left( \alpha^{-2}\log \left( n / \eta \right) (n + m) |A| \right)$, where $n$, $m$, refer to the number of nodes, edges in the network. The performance guarantee holds for the general triggering model of internal influence and also incorporates external influence, provided a certain condition is met on the cost-effectivity of seed selection.
no_new_dataset
0.938632
1701.08869
Xiaqing Pan
Xiaqing Pan, Yueru Chen, C.-C. Jay Kuo
3D Shape Retrieval via Irrelevance Filtering and Similarity Ranking (IF/SR)
arXiv admin note: text overlap with arXiv:1603.01942
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel solution for the content-based 3D shape retrieval problem using an unsupervised clustering approach, which does not need any label information of 3D shapes, is presented in this work. The proposed shape retrieval system consists of two modules in cascade: the irrelevance filtering (IF) module and the similarity ranking (SR) module. The IF module attempts to cluster gallery shapes that are similar to each other by examining global and local features simultaneously. However, shapes that are close in the local feature space can be distant in the global feature space, and vice versa. To resolve this issue, we propose a joint cost function that strikes a balance between two distances. Irrelevant samples that are close in the local feature space but distant in the global feature space can be removed in this stage. The remaining gallery samples are ranked in the SR module using the local feature. The superior performance of the proposed IF/SR method is demonstrated by extensive experiments conducted on the popular SHREC12 dataset.
[ { "version": "v1", "created": "Mon, 30 Jan 2017 23:04:57 GMT" } ]
2017-02-01T00:00:00
[ [ "Pan", "Xiaqing", "" ], [ "Chen", "Yueru", "" ], [ "Kuo", "C. -C. Jay", "" ] ]
TITLE: 3D Shape Retrieval via Irrelevance Filtering and Similarity Ranking (IF/SR) ABSTRACT: A novel solution for the content-based 3D shape retrieval problem using an unsupervised clustering approach, which does not need any label information of 3D shapes, is presented in this work. The proposed shape retrieval system consists of two modules in cascade: the irrelevance filtering (IF) module and the similarity ranking (SR) module. The IF module attempts to cluster gallery shapes that are similar to each other by examining global and local features simultaneously. However, shapes that are close in the local feature space can be distant in the global feature space, and vice versa. To resolve this issue, we propose a joint cost function that strikes a balance between two distances. Irrelevant samples that are close in the local feature space but distant in the global feature space can be removed in this stage. The remaining gallery samples are ranked in the SR module using the local feature. The superior performance of the proposed IF/SR method is demonstrated by extensive experiments conducted on the popular SHREC12 dataset.
no_new_dataset
0.950732
1701.08886
Moustafa Alzantot
Moustafa Alzantot, Supriyo Chakraborty, Mani B. Srivastava
SenseGen: A Deep Learning Architecture for Synthetic Sensor Data Generation
null
null
null
null
cs.LG cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our ability to synthesize sensory data that preserves specific statistical properties of the real data has had tremendous implications on data privacy and big data analytics. The synthetic data can be used as a substitute for selective real data segments that are sensitive to the user, thus protecting privacy and resulting in improved analytics. However, increasingly adversarial roles taken by data recipients such as mobile apps, or other cloud-based analytics services, mandate that the synthetic data, in addition to preserving statistical properties, should also be difficult to distinguish from the real data. Typically, visual inspection has been used as a test to distinguish between datasets. But more recently, sophisticated classifier models (discriminators), corresponding to a set of events, have also been employed to distinguish between synthesized and real data. The model operates on both datasets and the respective event outputs are compared for consistency. In this paper, we take a step towards generating sensory data that can pass a deep learning based discriminator model test, and make two specific contributions: first, we present a deep learning based architecture for synthesizing sensory data. This architecture comprises a generator model, which is a stack of multiple Long-Short-Term-Memory (LSTM) networks and a Mixture Density Network. Second, we use another LSTM network based discriminator model for distinguishing between the true and the synthesized data. Using a dataset of accelerometer traces, collected using smartphones of users doing their daily activities, we show that the deep learning based discriminator model can only distinguish between the real and synthesized traces with an accuracy in the neighborhood of 50%.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 01:59:58 GMT" } ]
2017-02-01T00:00:00
[ [ "Alzantot", "Moustafa", "" ], [ "Chakraborty", "Supriyo", "" ], [ "Srivastava", "Mani B.", "" ] ]
TITLE: SenseGen: A Deep Learning Architecture for Synthetic Sensor Data Generation ABSTRACT: Our ability to synthesize sensory data that preserves specific statistical properties of the real data has had tremendous implications on data privacy and big data analytics. The synthetic data can be used as a substitute for selective real data segments that are sensitive to the user, thus protecting privacy and resulting in improved analytics. However, increasingly adversarial roles taken by data recipients such as mobile apps, or other cloud-based analytics services, mandate that the synthetic data, in addition to preserving statistical properties, should also be difficult to distinguish from the real data. Typically, visual inspection has been used as a test to distinguish between datasets. But more recently, sophisticated classifier models (discriminators), corresponding to a set of events, have also been employed to distinguish between synthesized and real data. The model operates on both datasets and the respective event outputs are compared for consistency. In this paper, we take a step towards generating sensory data that can pass a deep learning based discriminator model test, and make two specific contributions: first, we present a deep learning based architecture for synthesizing sensory data. This architecture comprises a generator model, which is a stack of multiple Long-Short-Term-Memory (LSTM) networks and a Mixture Density Network. Second, we use another LSTM network based discriminator model for distinguishing between the true and the synthesized data. Using a dataset of accelerometer traces, collected using smartphones of users doing their daily activities, we show that the deep learning based discriminator model can only distinguish between the real and synthesized traces with an accuracy in the neighborhood of 50%.
no_new_dataset
0.839603
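A minimal sketch of the discriminator side described in the record above: an LSTM binary classifier that tries to tell real accelerometer windows from synthesized ones. Window length, layer sizes, optimizer settings and the training routine are illustrative assumptions rather than the paper's configuration, and the generator (LSTM stack plus mixture density network) is omitted.

```python
import numpy as np
import tensorflow as tf

WINDOW, CHANNELS = 128, 3   # assumed tri-axial accelerometer window shape

# Sketch of an LSTM-based real-vs-synthetic discriminator (not the paper's exact model).
discriminator = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def train_discriminator(real, synthetic, epochs=5):
    """real, synthetic: arrays of shape (n_windows, WINDOW, CHANNELS) built elsewhere."""
    X = np.concatenate([real, synthetic])
    y = np.concatenate([np.ones(len(real)), np.zeros(len(synthetic))])
    discriminator.fit(X, y, epochs=epochs, batch_size=64, validation_split=0.2)
    # Validation accuracy near 50% suggests the synthetic windows are hard to tell apart.
```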
1701.08921
Yasir Latif
Yasir Latif, Guoquan Huang, John Leonard, Jose Neira
Sparse Optimization for Robust and Efficient Loop Closing
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is essential for a robot to be able to detect revisits or loop closures for long-term visual navigation. A key insight explored in this work is that the loop-closing event inherently occurs sparsely, that is, the image currently being taken matches with only a small subset (if any) of previous images. Based on this observation, we formulate the problem of loop-closure detection as a sparse, convex $\ell_1$-minimization problem. By leveraging fast convex optimization techniques, we are able to efficiently find loop closures, thus enabling real-time robot navigation. This novel formulation requires no offline dictionary learning, as required by most existing approaches, and thus allows online incremental operation. Our approach ensures a unique hypothesis by choosing only a single globally optimal match when making a loop-closure decision. Furthermore, the proposed formulation enjoys a flexible representation with no restriction imposed on how images should be represented, while requiring only that the representations are "close" to each other when the corresponding images are visually similar. The proposed algorithm is validated extensively using real-world datasets.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 05:32:09 GMT" } ]
2017-02-01T00:00:00
[ [ "Latif", "Yasir", "" ], [ "Huang", "Guoquan", "" ], [ "Leonard", "John", "" ], [ "Neira", "Jose", "" ] ]
TITLE: Sparse Optimization for Robust and Efficient Loop Closing ABSTRACT: It is essential for a robot to be able to detect revisits or loop closures for long-term visual navigation. A key insight explored in this work is that the loop-closing event inherently occurs sparsely, that is, the image currently being taken matches with only a small subset (if any) of previous images. Based on this observation, we formulate the problem of loop-closure detection as a sparse, convex $\ell_1$-minimization problem. By leveraging fast convex optimization techniques, we are able to efficiently find loop closures, thus enabling real-time robot navigation. This novel formulation requires no offline dictionary learning, as required by most existing approaches, and thus allows online incremental operation. Our approach ensures a unique hypothesis by choosing only a single globally optimal match when making a loop-closure decision. Furthermore, the proposed formulation enjoys a flexible representation with no restriction imposed on how images should be represented, while requiring only that the representations are "close" to each other when the corresponding images are visually similar. The proposed algorithm is validated extensively using real-world datasets.
no_new_dataset
0.946794
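The sparse-coding view of loop closure summarized above can be prototyped with an off-the-shelf l1 solver. The sketch below is an assumption-laden stand-in, not the authors' formulation or solver: it uses scikit-learn's Lasso penalty in place of the constrained l1 program to find a sparse combination of previous image descriptors that explains the current one. The `alpha` and `min_weight` parameters and the descriptor layout are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def detect_loop_closure(prev_desc, current_desc, alpha=0.05, min_weight=0.3):
    """Sketch: sparse-code the current descriptor over all previous descriptors.

    prev_desc: (d, n) matrix whose columns are past image descriptors.
    current_desc: (d,) descriptor of the current image.
    Returns the index of a loop-closure candidate, or None.
    """
    if prev_desc.shape[1] == 0:
        return None
    lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
    lasso.fit(prev_desc, current_desc)   # current ~ prev_desc @ sparse coefficients
    x = lasso.coef_
    best = int(np.argmax(x))
    # A single dominant coefficient plays the role of the unique, globally optimal match.
    return best if x[best] > min_weight else None
```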
1701.08931
Hadar Averbuch-Elor
Hadar Averbuch-Elor, Johannes Kopf, Tamir Hazan and Daniel Cohen-Or
Co-segmentation for Space-Time Co-located Collections
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a co-segmentation technique for space-time co-located image collections. These prevalent collections capture various dynamic events, usually by multiple photographers, and may contain multiple co-occurring objects which are not necessarily part of the intended foreground object, resulting in ambiguities for traditional co-segmentation techniques. Thus, to disambiguate what the common foreground object is, we introduce a weakly-supervised technique, where we assume only a small seed, given in the form of a single segmented image. We take a distributed approach, where local belief models are propagated and reinforced with similar images. Our technique progressively expands the foreground and background belief models across the entire collection. The technique exploits the power of the entire set of images without building a global model, and thus successfully overcomes large variability in appearance of the common foreground object. We demonstrate that our method outperforms previous co-segmentation techniques on challenging space-time co-located collections, including dense benchmark datasets which were adapted for our novel problem setting.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 07:05:58 GMT" } ]
2017-02-01T00:00:00
[ [ "Averbuch-Elor", "Hadar", "" ], [ "Kopf", "Johannes", "" ], [ "Hazan", "Tamir", "" ], [ "Cohen-Or", "Daniel", "" ] ]
TITLE: Co-segmentation for Space-Time Co-located Collections ABSTRACT: We present a co-segmentation technique for space-time co-located image collections. These prevalent collections capture various dynamic events, usually by multiple photographers, and may contain multiple co-occurring objects which are not necessarily part of the intended foreground object, resulting in ambiguities for traditional co-segmentation techniques. Thus, to disambiguate what the common foreground object is, we introduce a weakly-supervised technique, where we assume only a small seed, given in the form of a single segmented image. We take a distributed approach, where local belief models are propagated and reinforced with similar images. Our technique progressively expands the foreground and background belief models across the entire collection. The technique exploits the power of the entire set of images without building a global model, and thus successfully overcomes large variability in appearance of the common foreground object. We demonstrate that our method outperforms previous co-segmentation techniques on challenging space-time co-located collections, including dense benchmark datasets which were adapted for our novel problem setting.
no_new_dataset
0.951006
1701.08968
Nhan Truong
Nhan Truong, Levin Kuhlmann, Mohammad Reza Bonyadi, Jiawei Yang, Andrew Faulks, Omid Kavehei
Supervised Learning in Automatic Channel Selection for Epileptic Seizure Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting seizures using brain neuroactivations recorded by intracranial electroencephalogram (iEEG) has been widely used for monitoring, diagnosing, and closed-loop therapy of epileptic patients; however, computational efficiency gains are needed if state-of-the-art methods are to be implemented in implanted devices. We present a novel method for automatic seizure detection based on iEEG data that outperforms current state-of-the-art seizure detection methods in terms of computational efficiency while maintaining the accuracy. The proposed algorithm incorporates an automatic channel selection (ACS) engine as a pre-processing stage to the seizure detection procedure. The ACS engine consists of supervised classifiers which aim to find the iEEG channels which contribute the most to a seizure. The seizure detection stage involves feature extraction and classification. Feature extraction is performed in both frequency and time domains, where spectral power and correlation between channel pairs are calculated. Random Forest is used for classification of interictal, ictal and early ictal periods of iEEG signals. Seizure detection in this paper is retrospective and patient-specific. iEEG data is accessed via Kaggle, provided by the International Epilepsy Electro-physiology Portal. The dataset includes a training set of 6.5 hours of interictal data and 41 min of ictal data and a test set of 9.14 hours. Compared to the state-of-the-art on the same dataset, we achieve a 49.4% increase in computational efficiency and 400 mins better on average for detection delay. The proposed model is able to detect a seizure onset at 91.95% sensitivity and 94.05% specificity with a mean detection delay of 2.77 s. The area under the curve (AUC) is 96.44%, which is comparable to the current state-of-the-art with an AUC of 96.29%.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 10:01:45 GMT" } ]
2017-02-01T00:00:00
[ [ "Truong", "Nhan", "" ], [ "Kuhlmann", "Levin", "" ], [ "Bonyadi", "Mohammad Reza", "" ], [ "Yang", "Jiawei", "" ], [ "Faulks", "Andrew", "" ], [ "Kavehei", "Omid", "" ] ]
TITLE: Supervised Learning in Automatic Channel Selection for Epileptic Seizure Detection ABSTRACT: Detecting seizures using brain neuroactivations recorded by intracranial electroencephalogram (iEEG) has been widely used for monitoring, diagnosing, and closed-loop therapy of epileptic patients; however, computational efficiency gains are needed if state-of-the-art methods are to be implemented in implanted devices. We present a novel method for automatic seizure detection based on iEEG data that outperforms current state-of-the-art seizure detection methods in terms of computational efficiency while maintaining the accuracy. The proposed algorithm incorporates an automatic channel selection (ACS) engine as a pre-processing stage to the seizure detection procedure. The ACS engine consists of supervised classifiers which aim to find the iEEG channels which contribute the most to a seizure. The seizure detection stage involves feature extraction and classification. Feature extraction is performed in both frequency and time domains, where spectral power and correlation between channel pairs are calculated. Random Forest is used for classification of interictal, ictal and early ictal periods of iEEG signals. Seizure detection in this paper is retrospective and patient-specific. iEEG data is accessed via Kaggle, provided by the International Epilepsy Electro-physiology Portal. The dataset includes a training set of 6.5 hours of interictal data and 41 min of ictal data and a test set of 9.14 hours. Compared to the state-of-the-art on the same dataset, we achieve a 49.4% increase in computational efficiency and 400 mins better on average for detection delay. The proposed model is able to detect a seizure onset at 91.95% sensitivity and 94.05% specificity with a mean detection delay of 2.77 s. The area under the curve (AUC) is 96.44%, which is comparable to the current state-of-the-art with an AUC of 96.29%.
no_new_dataset
0.947284
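A hedged sketch of the feature/classifier stage summarized above: band powers and channel-pair correlations feeding a Random Forest over interictal/ictal/early-ictal labels. The frequency bands, window handling, sampling rate and hyper-parameters are assumptions, not the paper's settings, and the channel-selection (ACS) stage is omitted.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

# Assumed frequency bands in Hz; the paper's exact band edges are not reproduced here.
BANDS = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 70)]

def clip_features(clip, fs):
    """clip: (channels, samples) iEEG window -> band powers + channel correlations."""
    freqs, psd = welch(clip, fs=fs, axis=-1, nperseg=min(clip.shape[-1], int(2 * fs)))
    powers = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1) for lo, hi in BANDS]
    corr = np.corrcoef(clip)
    upper = corr[np.triu_indices_from(corr, k=1)]   # unique channel-pair correlations
    return np.concatenate([np.concatenate(powers), upper])

def train_detector(clips, labels, fs=400):
    """labels: 0 = interictal, 1 = ictal, 2 = early ictal (per the description above)."""
    X = np.vstack([clip_features(c, fs) for c in clips])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    return clf.fit(X, labels)
```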
1701.08985
Alin Popa
Alin-Ionut Popa, Mihai Zanfir, Cristian Sminchisescu
Deep Multitask Architecture for Integrated 2D and 3D Human Sensing
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a deep multitask architecture for \emph{fully automatic 2d and 3d human sensing} (DMHS), including \emph{recognition and reconstruction}, in \emph{monocular images}. The system computes the figure-ground segmentation, semantically identifies the human body parts at pixel level, and estimates the 2d and 3d pose of the person. The model supports the joint training of all components by means of multi-task losses where early processing stages recursively feed into advanced ones for increasingly complex calculations, accuracy and robustness. The design allows us to tie a complete training protocol, by taking advantage of multiple datasets that would otherwise restrictively cover only some of the model components: complex 2d image data with no body part labeling and without associated 3d ground truth, or complex 3d data with limited 2d background variability. In detailed experiments based on several challenging 2d and 3d datasets (LSP, HumanEva, Human3.6M), we evaluate the sub-structures of the model, the effect of various types of training data in the multitask loss, and demonstrate that state-of-the-art results can be achieved at all processing levels. We also show that in the wild our monocular RGB architecture is perceptually competitive to a state-of-the art (commercial) Kinect system based on RGB-D data.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 10:52:48 GMT" } ]
2017-02-01T00:00:00
[ [ "Popa", "Alin-Ionut", "" ], [ "Zanfir", "Mihai", "" ], [ "Sminchisescu", "Cristian", "" ] ]
TITLE: Deep Multitask Architecture for Integrated 2D and 3D Human Sensing ABSTRACT: We propose a deep multitask architecture for \emph{fully automatic 2d and 3d human sensing} (DMHS), including \emph{recognition and reconstruction}, in \emph{monocular images}. The system computes the figure-ground segmentation, semantically identifies the human body parts at pixel level, and estimates the 2d and 3d pose of the person. The model supports the joint training of all components by means of multi-task losses where early processing stages recursively feed into advanced ones for increasingly complex calculations, accuracy and robustness. The design allows us to tie a complete training protocol, by taking advantage of multiple datasets that would otherwise restrictively cover only some of the model components: complex 2d image data with no body part labeling and without associated 3d ground truth, or complex 3d data with limited 2d background variability. In detailed experiments based on several challenging 2d and 3d datasets (LSP, HumanEva, Human3.6M), we evaluate the sub-structures of the model, the effect of various types of training data in the multitask loss, and demonstrate that state-of-the-art results can be achieved at all processing levels. We also show that in the wild our monocular RGB architecture is perceptually competitive to a state-of-the art (commercial) Kinect system based on RGB-D data.
no_new_dataset
0.950869
1701.09039
Bryan Perozzi
Aria Rezaei, Bryan Perozzi, Leman Akoglu
Ties That Bind - Characterizing Classes by Attributes and Social Ties
WWW'17 Web Science, 9 pages
null
null
null
cs.SI cs.IR physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a set of attributed subgraphs known to be from different classes, how can we discover their differences? There are many cases where collections of subgraphs may be contrasted against each other. For example, they may be assigned ground truth labels (spam/not-spam), or it may be desired to directly compare the biological networks of different species or compound networks of different chemicals. In this work we introduce the problem of characterizing the differences between attributed subgraphs that belong to different classes. We define this characterization problem as one of partitioning the attributes into as many groups as the number of classes, while maximizing the total attributed quality score of all the given subgraphs. We show that our attribute-to-class assignment problem is NP-hard and that an optimal $(1 - 1/e)$-approximation algorithm exists. We also propose two different faster heuristics that are linear-time in the number of attributes and subgraphs. Unlike previous work where only attributes were taken into account for characterization, here we exploit both attributes and social ties (i.e. graph structure). Through extensive experiments, we compare our proposed algorithms and show findings that agree with human intuition on datasets from Amazon co-purchases, Congressional bill sponsorships, and DBLP co-authorships. We also show that our approach of characterizing subgraphs is better suited for sense-making than discriminating classification approaches.
[ { "version": "v1", "created": "Tue, 31 Jan 2017 14:01:04 GMT" } ]
2017-02-01T00:00:00
[ [ "Rezaei", "Aria", "" ], [ "Perozzi", "Bryan", "" ], [ "Akoglu", "Leman", "" ] ]
TITLE: Ties That Bind - Characterizing Classes by Attributes and Social Ties ABSTRACT: Given a set of attributed subgraphs known to be from different classes, how can we discover their differences? There are many cases where collections of subgraphs may be contrasted against each other. For example, they may be assigned ground truth labels (spam/not-spam), or it may be desired to directly compare the biological networks of different species or compound networks of different chemicals. In this work we introduce the problem of characterizing the differences between attributed subgraphs that belong to different classes. We define this characterization problem as one of partitioning the attributes into as many groups as the number of classes, while maximizing the total attributed quality score of all the given subgraphs. We show that our attribute-to-class assignment problem is NP-hard and that an optimal $(1 - 1/e)$-approximation algorithm exists. We also propose two different faster heuristics that are linear-time in the number of attributes and subgraphs. Unlike previous work where only attributes were taken into account for characterization, here we exploit both attributes and social ties (i.e. graph structure). Through extensive experiments, we compare our proposed algorithms and show findings that agree with human intuition on datasets from Amazon co-purchases, Congressional bill sponsorships, and DBLP co-authorships. We also show that our approach of characterizing subgraphs is better suited for sense-making than discriminating classification approaches.
no_new_dataset
0.943348
1701.09042
Jeff Heaton
Jeff Heaton
Comparing Dataset Characteristics that Favor the Apriori, Eclat or FP-Growth Frequent Itemset Mining Algorithms
null
null
null
null
cs.DB cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Frequent itemset mining is a popular data mining technique. Apriori, Eclat, and FP-Growth are among the most common algorithms for frequent itemset mining. Considerable research has been performed to compare the relative performance between these three algorithms, by evaluating the scalability of each algorithm as the dataset size increases. While scalability as data size increases is important, previous papers have not examined the performance impact of similarly sized datasets that contain different itemset characteristics. This paper explores the effects that two dataset characteristics can have on the performance of these three frequent itemset algorithms. To perform this empirical analysis, a dataset generator is created to measure the effects of frequent item density and the maximum transaction size on performance. The generated datasets contain the same number of rows. This provides some insight into dataset characteristics that are conducive to each algorithm. The results of this paper's research demonstrate Eclat and FP-Growth both handle increases in maximum transaction size and frequent itemset density considerably better than the Apriori algorithm.
[ { "version": "v1", "created": "Mon, 30 Jan 2017 12:34:02 GMT" } ]
2017-02-01T00:00:00
[ [ "Heaton", "Jeff", "" ] ]
TITLE: Comparing Dataset Characteristics that Favor the Apriori, Eclat or FP-Growth Frequent Itemset Mining Algorithms ABSTRACT: Frequent itemset mining is a popular data mining technique. Apriori, Eclat, and FP-Growth are among the most common algorithms for frequent itemset mining. Considerable research has been performed to compare the relative performance between these three algorithms, by evaluating the scalability of each algorithm as the dataset size increases. While scalability as data size increases is important, previous papers have not examined the performance impact of similarly sized datasets that contain different itemset characteristics. This paper explores the effects that two dataset characteristics can have on the performance of these three frequent itemset algorithms. To perform this empirical analysis, a dataset generator is created to measure the effects of frequent item density and the maximum transaction size on performance. The generated datasets contain the same number of rows. This provides some insight into dataset characteristics that are conducive to each algorithm. The results of this paper's research demonstrate Eclat and FP-Growth both handle increases in maximum transaction size and frequent itemset density considerably better than the Apriori algorithm.
new_dataset
0.973569
1111.4171
Arian Ojeda Gonz\'alez
G. A. Ojeda, O. Mendes, M. A. Calzadilla, M. O. Domingues
The Entropy Index (EI): an Auxiliary Tool to Identify the Occurrence of Interplanetary Magnetic Clouds
This paper has been withdrawn by the author due to great amount of modifications done, and low quality of figures
null
null
null
physics.space-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Through the study of the dynamical processes related to entropy, this work aims to create a mathematical tool to identify magnetic clouds (MCs) in interplanetary space using only interplanetary magnetic field (IMF) data. Used as the basis for an analysis methodology, the spatio-temporal entropy (STE) measures the image (recurrence plot) "structuredness" in both the space and time domains. Initially we worked with the dataset of Huttunen et al. (2005) and studied the 41 MCs presenting a shock wave identified before the cloud. The STE values for each Bx, By, Bz IMF time series, with dimension and time delay equal to one, were calculated. We found higher STE values in the sheaths and zero STE values in some of the three components in most of the MCs (30 of the 41 events). In a physically consistent manner, data windows of 2500 magnetic records were selected as the calculation interval for the time series. As not all MCs have zero STE simultaneously, we created a standardization index (an entropy index, called the EI) to allow joining the results of the three components. With the use of the EI, three previously unknown MCs were identified, and the MVA method then allowed their boundaries to be calculated. Thus the EI is proposed as an auxiliary tool to identify MC candidates based only on IMF analysis. As implemented, this promising methodology provides the basis for an automatic MC identification procedure and should be useful for space weather purposes.
[ { "version": "v1", "created": "Thu, 17 Nov 2011 18:14:10 GMT" }, { "version": "v2", "created": "Mon, 30 Jan 2017 12:49:41 GMT" } ]
2017-01-31T00:00:00
[ [ "Ojeda", "G. A.", "" ], [ "Mendes", "O.", "" ], [ "Calzadilla", "M. A.", "" ], [ "Domingues", "M. O.", "" ] ]
TITLE: The Entropy Index (EI): an Auxiliary Tool to Identify the Occurrence of Interplanetary Magnetic Clouds ABSTRACT: Through the study of the dynamical processes related to entropy, this work aims to create a mathematical tool to identify magnetic clouds (MCs) in interplanetary space using only interplanetary magnetic field (IMF) data. Used as the basis for an analysis methodology, the spatio-temporal entropy (STE) measures the image (recurrence plot) "structuredness" in both the space and time domains. Initially we worked with the dataset of Huttunen et al. (2005) and studied the 41 MCs presenting a shock wave identified before the cloud. The STE values for each Bx, By, Bz IMF time series, with dimension and time delay equal to one, were calculated. We found higher STE values in the sheaths and zero STE values in some of the three components in most of the MCs (30 of the 41 events). In a physically consistent manner, data windows of 2500 magnetic records were selected as the calculation interval for the time series. As not all MCs have zero STE simultaneously, we created a standardization index (an entropy index, called the EI) to allow joining the results of the three components. With the use of the EI, three previously unknown MCs were identified, and the MVA method then allowed their boundaries to be calculated. Thus the EI is proposed as an auxiliary tool to identify MC candidates based only on IMF analysis. As implemented, this promising methodology provides the basis for an automatic MC identification procedure and should be useful for space weather purposes.
no_new_dataset
0.947575
1409.0272
Andre Goncalves
Andre R. Goncalves, Puja Das, Soumyadeep Chatterjee, Vidyashankar Sivakumar, Fernando J. Von Zuben, Arindam Banerjee
Multi-task Sparse Structure Learning
23rd ACM International Conference on Information and Knowledge Management - CIKM 2014
null
10.1145/2661829.2662091
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. While sometimes the underlying task relationship structure is known, often the structure needs to be estimated from data at hand. In this paper, we present a novel family of models for MTL, applicable to regression and classification problems, capable of learning the structure of task relationships. In particular, we consider a joint estimation problem of the task relationship structure and the individual task parameters, which is solved using alternating minimization. The task relationship structure learning component builds on recent advances in structure learning of Gaussian graphical models based on sparse estimators of the precision (inverse covariance) matrix. We illustrate the effectiveness of the proposed model on a variety of synthetic and benchmark datasets for regression and classification. We also consider the problem of combining climate model outputs for better projections of future climate, with focus on temperature in South America, and show that the proposed model outperforms several existing methods for the problem.
[ { "version": "v1", "created": "Mon, 1 Sep 2014 00:33:38 GMT" }, { "version": "v2", "created": "Tue, 2 Sep 2014 00:33:35 GMT" } ]
2017-01-31T00:00:00
[ [ "Goncalves", "Andre R.", "" ], [ "Das", "Puja", "" ], [ "Chatterjee", "Soumyadeep", "" ], [ "Sivakumar", "Vidyashankar", "" ], [ "Von Zuben", "Fernando J.", "" ], [ "Banerjee", "Arindam", "" ] ]
TITLE: Multi-task Sparse Structure Learning ABSTRACT: Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously. While sometimes the underlying task relationship structure is known, often the structure needs to be estimated from data at hand. In this paper, we present a novel family of models for MTL, applicable to regression and classification problems, capable of learning the structure of task relationships. In particular, we consider a joint estimation problem of the task relationship structure and the individual task parameters, which is solved using alternating minimization. The task relationship structure learning component builds on recent advances in structure learning of Gaussian graphical models based on sparse estimators of the precision (inverse covariance) matrix. We illustrate the effectiveness of the proposed model on a variety of synthetic and benchmark datasets for regression and classification. We also consider the problem of combining climate model outputs for better projections of future climate, with focus on temperature in South America, and show that the proposed model outperforms several existing methods for the problem.
no_new_dataset
0.938688
1511.07118
Dong-Hyun Lee
Dong-Hyun Lee
Cascading Denoising Auto-Encoder as a Deep Directed Generative Model
not completed
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work (Bengio et al., 2013) has shown how Denoising Auto-Encoders (DAE) become generative models as density estimators. However, in practice, the framework suffers from a mixing problem in the MCMC sampling process and has no direct method to estimate the test log-likelihood. We consider a directed model with a stochastic identity mapping (a simple corruption process) as an inference model and a DAE as a generative model. By cascading these models, we propose Cascading Denoising Auto-Encoders (CDAE), which can generate samples of the data distribution from a tractable prior distribution under the assumption that the probabilistic distribution of corrupted data approaches the tractable prior distribution as the level of corruption increases. This work tries to answer two questions. On the one hand, can deep directed models be successfully trained without intractable posterior inference and without the difficult optimization of very deep neural networks in the inference and generative models? These are unavoidable when a recent successful directed model like the VAE (Kingma & Welling, 2014) is trained on a complex dataset such as real images. On the other hand, can DAEs obtain clean samples of the data distribution from heavily corrupted samples, which can be considered as coming from a tractable prior distribution far from the data manifold (the so-called global denoising scheme)? Our results give positive answers to these questions, and this work provides a fairly simple framework for generative models of very complex datasets.
[ { "version": "v1", "created": "Mon, 23 Nov 2015 06:32:57 GMT" }, { "version": "v2", "created": "Fri, 27 Jan 2017 19:09:52 GMT" } ]
2017-01-31T00:00:00
[ [ "Lee", "Dong-Hyun", "" ] ]
TITLE: Cascading Denoising Auto-Encoder as a Deep Directed Generative Model ABSTRACT: Recent work (Bengio et al., 2013) has shown how Denoising Auto-Encoders (DAE) become generative models as density estimators. However, in practice, the framework suffers from a mixing problem in the MCMC sampling process and has no direct method to estimate the test log-likelihood. We consider a directed model with a stochastic identity mapping (a simple corruption process) as an inference model and a DAE as a generative model. By cascading these models, we propose Cascading Denoising Auto-Encoders (CDAE), which can generate samples of the data distribution from a tractable prior distribution under the assumption that the probabilistic distribution of corrupted data approaches the tractable prior distribution as the level of corruption increases. This work tries to answer two questions. On the one hand, can deep directed models be successfully trained without intractable posterior inference and without the difficult optimization of very deep neural networks in the inference and generative models? These are unavoidable when a recent successful directed model like the VAE (Kingma & Welling, 2014) is trained on a complex dataset such as real images. On the other hand, can DAEs obtain clean samples of the data distribution from heavily corrupted samples, which can be considered as coming from a tractable prior distribution far from the data manifold (the so-called global denoising scheme)? Our results give positive answers to these questions, and this work provides a fairly simple framework for generative models of very complex datasets.
no_new_dataset
0.949763