Dataset schema (column statistics from the viewer): id (string, 9–16 chars); submitter (string, 3–64 chars, nullable); authors (string, 5–6.63k chars); title (string, 7–245 chars); comments (string, 1–482 chars, nullable); journal-ref (string, 4–382 chars, nullable); doi (string, 9–151 chars, nullable); report-no (string, 984 classes); categories (string, 5–108 chars); license (string, 9 classes); abstract (string, 83–3.41k chars); versions (list, 1–20 entries); update_date (timestamp[s], 2007-05-23 to 2025-04-11); authors_parsed (sequence, 1–427 entries); prompt (string, 166–3.49k chars); label (string, 2 classes); prob (float64, 0.5–0.98).
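Before the rows, a minimal sketch of loading and filtering a dataset with this schema via the Hugging Face `datasets` library; the repository id used below is a hypothetical placeholder, not this dataset's actual path.

```python
# Minimal loading sketch; the dataset id below is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-labels", split="train")

row = ds[0]
print(row["id"], "->", row["label"], row["prob"])  # e.g. 1612.08378 -> no_new_dataset 0.9463

# Keep only rows where the labeler is reasonably confident.
confident = ds.filter(lambda r: r["prob"] >= 0.9)
print(f"{len(confident)} of {len(ds)} rows with prob >= 0.9")
```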
id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1612.08378 | Haoxiang Xia | Ling Zhang, Shuangling Luo, Haoxiang Xia | An Investigation of Intra-Urban Mobility Pattern of Taxi Passengers | 16 pages, 9 figures, 7 tables | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of human mobility patterns is of both theoretical and practical
value in many respects. For long-distance travel, a few research endeavors
have shown that the displacements of human travels follow the power-law
distribution. However, controversies remain in the issue of the scaling law of
human mobility in intra-urban areas. In this work we focus on the mobility
pattern of taxi passengers by examining five datasets of the three
metropolises of New York, Dalian, and Nanjing. Through statistical analysis, we
find that the lognormal distribution with a power-law tail can best approximate
both the displacement and the duration time of taxi trips, as well as the
vacant time of taxicabs, in all the examined cities. The universality of
the scaling law of human mobility is subsequently discussed in light of these
analyses.
| [
{
"version": "v1",
"created": "Mon, 26 Dec 2016 13:35:17 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Zhang",
"Ling",
""
],
[
"Luo",
"Shuangling",
""
],
[
"Xia",
"Haoxiang",
""
]
] | TITLE: An Investigation of Intra-Urban Mobility Pattern of Taxi Passengers
ABSTRACT: The study of human mobility patterns is of both theoretical and practical
value in many respects. For long-distance travel, a few research endeavors
have shown that the displacements of human travels follow the power-law
distribution. However, controversies remain in the issue of the scaling law of
human mobility in intra-urban areas. In this work we focus on the mobility
pattern of taxi passengers by examining five datasets of the three
metropolises of New York, Dalian, and Nanjing. Through statistical analysis, we
find that the lognormal distribution with a power-law tail can best approximate
both the displacement and the duration time of taxi trips, as well as the
vacant time of taxicabs, in all the examined cities. The universality of
the scaling law of human mobility is subsequently discussed in light of these
analyses.
| no_new_dataset | 0.9463 |
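The distribution family named in the entry above (1612.08378), a lognormal body with a power-law tail, is easy to simulate and to contrast against a pure lognormal fit. The sketch below runs on synthetic draws of my own construction, not the paper's taxi data.

```python
import numpy as np
from scipy import stats

# Synthetic displacements: lognormal body plus a Pareto (power-law) tail.
rng = np.random.default_rng(0)
body = rng.lognormal(mean=1.0, sigma=0.6, size=9000)
tail = (rng.pareto(a=2.5, size=1000) + 1.0) * np.quantile(body, 0.99)
displacements = np.concatenate([body, tail])

# A pure lognormal fit misses the heavy tail; the KS statistic quantifies it.
shape, loc, scale = stats.lognorm.fit(displacements, floc=0)
ks = stats.kstest(displacements, "lognorm", args=(shape, loc, scale))
print("KS statistic against a pure lognormal fit:", round(ks.statistic, 3))
```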
1612.08388 | Cesar Comin PhD | Mayra Z. Rodriguez, Cesar H. Comin, Dalcimar Casanova, Odemir M.
Bruno, Diego R. Amancio, Francisco A. Rodrigues, Luciano da F. Costa | Clustering Algorithms: A Comparative Approach | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Many real-world systems can be studied in terms of pattern recognition tasks,
so that proper use (and understanding) of machine learning methods in practical
applications becomes essential. While a myriad of classification methods have
been proposed, there is no consensus on which methods are more suitable for a
given dataset. As a consequence, it is important to comprehensively compare
methods in many possible scenarios. In this context, we performed a systematic
comparison of 7 well-known clustering methods available in the R language. In
order to account for the many possible variations of data, we considered
artificial datasets with several tunable properties (number of classes,
separation between classes, etc). In addition, we also evaluated the
sensitivity of the clustering methods with regard to their parameter
configuration. The results revealed that, when considering the default
configurations of the adopted methods, the spectral approach usually
outperformed the other clustering algorithms. We also found that the default
configuration of the adopted implementations was not accurate. In these cases,
a simple approach based on random selection of parameter values proved to be a
good alternative to improve the performance. All in all, the reported approach
provides guidance for the choice of clustering algorithms.
| [
{
"version": "v1",
"created": "Mon, 26 Dec 2016 14:25:32 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Rodriguez",
"Mayra Z.",
""
],
[
"Comin",
"Cesar H.",
""
],
[
"Casanova",
"Dalcimar",
""
],
[
"Bruno",
"Odemir M.",
""
],
[
"Amancio",
"Diego R.",
""
],
[
"Rodrigues",
"Francisco A.",
""
],
[
"Costa",
"Luciano da F.",
""
]
] | TITLE: Clustering Algorithms: A Comparative Approach
ABSTRACT: Many real-world systems can be studied in terms of pattern recognition tasks,
so that proper use (and understanding) of machine learning methods in practical
applications becomes essential. While a myriad of classification methods have
been proposed, there is no consensus on which methods are more suitable for a
given dataset. As a consequence, it is important to comprehensively compare
methods in many possible scenarios. In this context, we performed a systematic
comparison of 7 well-known clustering methods available in the R language. In
order to account for the many possible variations of data, we considered
artificial datasets with several tunable properties (number of classes,
separation between classes, etc). In addition, we also evaluated the
sensitivity of the clustering methods with regard to their parameter
configuration. The results revealed that, when considering the default
configurations of the adopted methods, the spectral approach usually
outperformed the other clustering algorithms. We also found that the default
configuration of the adopted implementations was not accurate. In these cases,
a simple approach based on random selection of parameter values proved to be a
good alternative to improve the performance. All in all, the reported approach
provides guidance for the choice of clustering algorithms.
| no_new_dataset | 0.944689 |
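The two comparisons described in the entry above (1612.08388), default configurations versus randomly sampled parameter values, can be reproduced in miniature. The paper works in R; the sketch below is a Python analogue on a toy blob dataset, so the numbers and parameter ranges are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Artificial data with tunable separation, in the spirit of the paper's setup.
X, y = make_blobs(n_samples=300, centers=4, cluster_std=2.0, random_state=0)

# Default configuration of the spectral approach.
default = SpectralClustering(n_clusters=4, random_state=0).fit_predict(X)
print("default ARI:", round(adjusted_rand_score(y, default), 3))

# Random selection of parameter values as a cheap alternative to tuning.
rng = np.random.default_rng(0)
best = -1.0
for _ in range(10):
    gamma = 10.0 ** rng.uniform(-3, 1)   # random RBF affinity width
    labels = SpectralClustering(n_clusters=4, gamma=gamma,
                                random_state=0).fit_predict(X)
    best = max(best, adjusted_rand_score(y, labels))
print("best random-search ARI:", round(best, 3))
```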
1612.08499 | Lilei Zheng | Lilei Zheng, Ying Zhang, Stefan Duffner, Khalid Idrissi, Christophe
Garcia, Atilla Baskurt | End-to-End Data Visualization by Metric Learning and Coordinate
Transformation | 17 pages, 9 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a deep nonlinear metric learning framework for data
visualization on an image dataset. We propose the Triangular Similarity and
prove its equivalence to the Cosine Similarity in measuring a data pair. Based
on this novel similarity, a geometrically motivated loss function - the
triangular loss - is then developed for optimizing a metric learning system
comprising two identical CNNs. It is shown that this deep nonlinear system can
be efficiently trained by a hybrid algorithm based on the conventional
backpropagation algorithm. More interestingly, benefiting from classical
manifold learning theories, the proposed system offers two different views to
visualize the outputs, the second of which provides better classification
results than the state-of-the-art methods in the visualizable spaces.
| [
{
"version": "v1",
"created": "Tue, 27 Dec 2016 05:03:09 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Zheng",
"Lilei",
""
],
[
"Zhang",
"Ying",
""
],
[
"Duffner",
"Stefan",
""
],
[
"Idrissi",
"Khalid",
""
],
[
"Garcia",
"Christophe",
""
],
[
"Baskurt",
"Atilla",
""
]
] | TITLE: End-to-End Data Visualization by Metric Learning and Coordinate
Transformation
ABSTRACT: This paper presents a deep nonlinear metric learning framework for data
visualization on an image dataset. We propose the Triangular Similarity and
prove its equivalence to the Cosine Similarity in measuring a data pair. Based
on this novel similarity, a geometrically motivated loss function - the
triangular loss - is then developed for optimizing a metric learning system
comprising two identical CNNs. It is shown that this deep nonlinear system can
be efficiently trained by a hybrid algorithm based on the conventional
backpropagation algorithm. More interestingly, benefiting from classical
manifold learning theories, the proposed system offers two different views to
visualize the outputs, the second of which provides better classification
results than the state-of-the-art methods in the visualizable spaces.
| no_new_dataset | 0.951006 |
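On the equivalence the entry above (1612.08499) states between the Triangular Similarity and the Cosine Similarity: for unit vectors, the norm of the sum is a monotone function of the cosine, which is the kind of identity that makes the two interchangeable as similarity measures. The numerical check below is my own construction, not the paper's exact definition.

```python
import numpy as np

# For unit vectors a_hat, b_hat:  ||a_hat + b_hat||^2 = 2 + 2 * cos(a, b),
# so the norm of the sum is monotone in the cosine similarity.
rng = np.random.default_rng(0)
a, b = rng.standard_normal(8), rng.standard_normal(8)
a_hat, b_hat = a / np.linalg.norm(a), b / np.linalg.norm(b)

cos = float(a_hat @ b_hat)
print(np.isclose(np.linalg.norm(a_hat + b_hat) ** 2, 2 + 2 * cos))  # True
```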
1612.08510 | Jian Shi | Jian Shi, Yue Dong, Hao Su, Stella X. Yu | Learning Non-Lambertian Object Intrinsics across ShapeNet Categories | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the non-Lambertian object intrinsic problem of recovering diffuse
albedo, shading, and specular highlights from a single image of an object.
We build a large-scale object intrinsics database based on existing 3D models
in the ShapeNet database. Rendered with realistic environment maps, millions of
synthetic images of objects and their corresponding albedo, shading, and
specular ground-truth images are used to train an encoder-decoder CNN. Once
trained, the network can decompose an image into the product of albedo and
shading components, along with an additive specular component.
Our CNN delivers accurate and sharp results in this classical inverse problem
of computer vision, with sharp details attributable to skip-layer connections at
corresponding resolutions from the encoder to the decoder. Benchmarked on our
ShapeNet and MIT intrinsics datasets, our model consistently outperforms the
state-of-the-art by a large margin.
We train and test our CNN on different object categories. Perhaps surprisingly,
especially from the CNN classification perspective, our intrinsics CNN
generalizes very well across categories. Our analysis shows that feature
learning at the encoder stage is more crucial for developing a universal
representation across categories.
We apply our synthetic data trained model to images and videos downloaded
from the internet, and observe robust and realistic intrinsics results. Quality
non-Lambertian intrinsics could open up many interesting applications such as
image-based albedo and specular editing.
| [
{
"version": "v1",
"created": "Tue, 27 Dec 2016 06:38:43 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Shi",
"Jian",
""
],
[
"Dong",
"Yue",
""
],
[
"Su",
"Hao",
""
],
[
"Yu",
"Stella X.",
""
]
] | TITLE: Learning Non-Lambertian Object Intrinsics across ShapeNet Categories
ABSTRACT: We consider the non-Lambertian object intrinsic problem of recovering diffuse
albedo, shading, and specular highlights from a single image of an object.
We build a large-scale object intrinsics database based on existing 3D models
in the ShapeNet database. Rendered with realistic environment maps, millions of
synthetic images of objects and their corresponding albedo, shading, and
specular ground-truth images are used to train an encoder-decoder CNN. Once
trained, the network can decompose an image into the product of albedo and
shading components, along with an additive specular component.
Our CNN delivers accurate and sharp results in this classical inverse problem
of computer vision, with sharp details attributable to skip-layer connections at
corresponding resolutions from the encoder to the decoder. Benchmarked on our
ShapeNet and MIT intrinsics datasets, our model consistently outperforms the
state-of-the-art by a large margin.
We train and test our CNN on different object categories. Perhaps surprisingly,
especially from the CNN classification perspective, our intrinsics CNN
generalizes very well across categories. Our analysis shows that feature
learning at the encoder stage is more crucial for developing a universal
representation across categories.
We apply our synthetic data trained model to images and videos downloaded
from the internet, and observe robust and realistic intrinsics results. Quality
non-Lambertian intrinsics could open up many interesting applications such as
image-based albedo and specular editing.
| no_new_dataset | 0.946349 |
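The decomposition model in the entry above (1612.08510) treats an image as the product of albedo and shading plus an additive specular term. The sketch below writes that model out on random maps; the shapes and values are illustrative assumptions.

```python
import numpy as np

# Non-Lambertian intrinsic model: I = A * S + R (albedo, shading, specular).
rng = np.random.default_rng(0)
H, W = 4, 4
albedo = rng.random((H, W, 3))
shading = 0.1 + 0.9 * rng.random((H, W, 1))   # grayscale shading map
specular = 0.1 * rng.random((H, W, 1))        # additive highlight component

image = albedo * shading + specular
# Recovering albedo is trivial here only because S and R are known exactly;
# the paper's CNN has to infer all three components from the image alone.
print(np.allclose((image - specular) / shading, albedo))  # True
```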
1612.08534 | Fang Zhao | Fang Zhao, Jiashi Feng, Jian Zhao, Wenhan Yang, Shuicheng Yan | Robust LSTM-Autoencoders for Face De-Occlusion in the Wild | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face recognition techniques have been developed significantly in recent
years. However, recognizing faces with partial occlusion is still challenging
for existing face recognizers, a capability heavily desired in real-world
applications concerning surveillance and security. Although much research
effort has been devoted to developing face de-occlusion methods, most of them
can only work well under constrained conditions, such as all the faces are from
a pre-defined closed set. In this paper, we propose a robust LSTM-Autoencoders
(RLA) model to effectively restore partially occluded faces even in the wild.
The RLA model consists of two LSTM components, which aim at occlusion-robust
face encoding and recurrent occlusion removal respectively. The first one,
named multi-scale spatial LSTM encoder, reads facial patches of various scales
sequentially to output a latent representation, and occlusion-robustness is
achieved owing to the fact that the influence of occlusion is only upon some of
the patches. Receiving the representation learned by the encoder, the LSTM
decoder with a dual channel architecture reconstructs the overall face and
detects occlusion simultaneously, and by virtue of the LSTM, the decoder breaks down
the task of face de-occlusion into restoring the occluded part step by step.
Moreover, to minimize identity information loss and guarantee face recognition
accuracy over recovered faces, we introduce an identity-preserving adversarial
training scheme to further improve RLA. Extensive experiments on both synthetic
and real datasets of faces with occlusion clearly demonstrate the effectiveness
of our proposed RLA in removing different types of facial occlusion at various
locations. The proposed method also provides significantly larger performance
gain than other de-occlusion methods in promoting recognition performance over
partially-occluded faces.
| [
{
"version": "v1",
"created": "Tue, 27 Dec 2016 08:36:48 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Zhao",
"Fang",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Zhao",
"Jian",
""
],
[
"Yang",
"Wenhan",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Robust LSTM-Autoencoders for Face De-Occlusion in the Wild
ABSTRACT: Face recognition techniques have been developed significantly in recent
years. However, recognizing faces with partial occlusion is still challenging
for existing face recognizers, a capability heavily desired in real-world
applications concerning surveillance and security. Although much research
effort has been devoted to developing face de-occlusion methods, most of them
can only work well under constrained conditions, such as all the faces are from
a pre-defined closed set. In this paper, we propose a robust LSTM-Autoencoders
(RLA) model to effectively restore partially occluded faces even in the wild.
The RLA model consists of two LSTM components, which aim at occlusion-robust
face encoding and recurrent occlusion removal respectively. The first one,
named multi-scale spatial LSTM encoder, reads facial patches of various scales
sequentially to output a latent representation, and occlusion-robustness is
achieved owing to the fact that the influence of occlusion is only upon some of
the patches. Receiving the representation learned by the encoder, the LSTM
decoder with a dual channel architecture reconstructs the overall face and
detects occlusion simultaneously, and by virtue of the LSTM, the decoder breaks down
the task of face de-occlusion into restoring the occluded part step by step.
Moreover, to minimize identity information loss and guarantee face recognition
accuracy over recovered faces, we introduce an identity-preserving adversarial
training scheme to further improve RLA. Extensive experiments on both synthetic
and real datasets of faces with occlusion clearly demonstrate the effectiveness
of our proposed RLA in removing different types of facial occlusion at various
locations. The proposed method also provides significantly larger performance
gain than other de-occlusion methods in promoting recognition performance over
partially-occluded faces.
| no_new_dataset | 0.948106 |
1612.08633 | Vishal Kakkar | Vishal Kakkar, Shirish K. Shevade, S Sundararajan, Dinesh Garg | A Sparse Nonlinear Classifier Design Using AUC Optimization | null | null | null | null | cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AUC (Area under the ROC curve) is an important performance measure for
applications where the data is highly imbalanced. Learning to maximize AUC
performance is thus an important research problem. Using a max-margin based
surrogate loss function, the AUC optimization problem can be approximated as a
pairwise rankSVM learning problem. Batch learning methods for solving the
kernelized version of this problem suffer from poor scalability and may not result
in sparse classifiers. Recent years have witnessed an increased interest in the
development of online or single-pass online learning algorithms that design a
classifier by maximizing the AUC performance. The AUC performance of nonlinear
classifiers, designed using online methods, is not comparable with that of
nonlinear classifiers designed using batch learning algorithms on many
real-world datasets. Motivated by these observations, we design a scalable
algorithm for maximizing AUC performance by greedily adding the required number
of basis functions into the classifier model. The resulting sparse classifiers
perform faster inference. Our experimental results show that the level of
sparsity achievable can be an order of magnitude smaller than the Kernel RankSVM
model without affecting the AUC performance much.
| [
{
"version": "v1",
"created": "Tue, 27 Dec 2016 13:52:56 GMT"
}
] | 2016-12-28T00:00:00 | [
[
"Kakkar",
"Vishal",
""
],
[
"Shevade",
"Shirish K.",
""
],
[
"Sundararajan",
"S",
""
],
[
"Garg",
"Dinesh",
""
]
] | TITLE: A Sparse Nonlinear Classifier Design Using AUC Optimization
ABSTRACT: AUC (Area under the ROC curve) is an important performance measure for
applications where the data is highly imbalanced. Learning to maximize AUC
performance is thus an important research problem. Using a max-margin based
surrogate loss function, the AUC optimization problem can be approximated as a
pairwise rankSVM learning problem. Batch learning methods for solving the
kernelized version of this problem suffer from poor scalability and may not result
in sparse classifiers. Recent years have witnessed an increased interest in the
development of online or single-pass online learning algorithms that design a
classifier by maximizing the AUC performance. The AUC performance of nonlinear
classifiers, designed using online methods, is not comparable with that of
nonlinear classifiers designed using batch learning algorithms on many
real-world datasets. Motivated by these observations, we design a scalable
algorithm for maximizing AUC performance by greedily adding the required number
of basis functions into the classifier model. The resulting sparse classifiers
perform faster inference. Our experimental results show that the level of
sparsity achievable can be an order of magnitude smaller than the Kernel RankSVM
model without affecting the AUC performance much.
| no_new_dataset | 0.945298 |
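The max-margin pairwise surrogate mentioned in the entry above (1612.08633) has a compact form: penalize every (positive, negative) pair whose scores are not separated by a margin. The sketch below shows that surrogate with a linear scorer; it illustrates the rankSVM-style objective, not the paper's greedy basis-selection algorithm.

```python
import numpy as np

# Pairwise hinge surrogate for AUC: for each (positive, negative) pair,
# loss = max(0, 1 - (score_pos - score_neg)).
def pairwise_auc_hinge(w, X_pos, X_neg):
    margins = (X_pos @ w)[:, None] - (X_neg @ w)[None, :]
    return np.maximum(0.0, 1.0 - margins).mean()

rng = np.random.default_rng(1)
X_pos = rng.standard_normal((40, 5)) + 0.5    # minority class
X_neg = rng.standard_normal((400, 5))         # 10x more negatives
w = rng.standard_normal(5)

print("surrogate loss:", round(pairwise_auc_hinge(w, X_pos, X_neg), 3))
# Empirical AUC is the fraction of correctly ordered (pos, neg) pairs.
auc = ((X_pos @ w)[:, None] > (X_neg @ w)[None, :]).mean()
print("empirical AUC:", round(float(auc), 3))
```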
1306.1066 | Benjamin Rubinstein | Christos Dimitrakakis and Blaine Nelson and Zuhe Zhang and
Aikaterini Mitrokotsa and Benjamin Rubinstein | Bayesian Differential Privacy through Posterior Sampling | 38 pages; An earlier version of this article was published in ALT
2014. This version has corrections and additional results | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy formalises privacy-preserving mechanisms that provide
access to a database. We pose the question of whether Bayesian inference itself
can be used directly to provide private access to data, with no modification.
The answer is affirmative: under certain conditions on the prior, sampling from
the posterior distribution can be used to achieve a desired level of privacy
and utility. To do so, we generalise differential privacy to arbitrary dataset
metrics, outcome spaces and distribution families. This allows us to also deal
with non-i.i.d. or non-tabular datasets. We prove bounds on the sensitivity of
the posterior to the data, which gives a measure of robustness. We also show
how to use posterior sampling to provide differentially private responses to
queries, within a decision-theoretic framework. Finally, we provide bounds on
the utility and on the distinguishability of datasets. The latter are
complemented by a novel use of Le Cam's method to obtain lower bounds. All our
general results hold for arbitrary database metrics, including those for the
common definition of differential privacy. For specific choices of the metric,
we give a number of examples satisfying our assumptions.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2013 11:38:46 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Feb 2014 13:40:36 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Mar 2014 15:31:32 GMT"
},
{
"version": "v4",
"created": "Sun, 12 Jul 2015 03:44:30 GMT"
},
{
"version": "v5",
"created": "Fri, 23 Dec 2016 12:28:36 GMT"
}
] | 2016-12-26T00:00:00 | [
[
"Dimitrakakis",
"Christos",
""
],
[
"Nelson",
"Blaine",
""
],
[
"Zhang",
"and Zuhe",
""
],
[
"Mitrokotsa",
"Aikaterini",
""
],
[
"Rubinstein",
"Benjamin",
""
]
] | TITLE: Bayesian Differential Privacy through Posterior Sampling
ABSTRACT: Differential privacy formalises privacy-preserving mechanisms that provide
access to a database. We pose the question of whether Bayesian inference itself
can be used directly to provide private access to data, with no modification.
The answer is affirmative: under certain conditions on the prior, sampling from
the posterior distribution can be used to achieve a desired level of privacy
and utility. To do so, we generalise differential privacy to arbitrary dataset
metrics, outcome spaces and distribution families. This allows us to also deal
with non-i.i.d. or non-tabular datasets. We prove bounds on the sensitivity of
the posterior to the data, which gives a measure of robustness. We also show
how to use posterior sampling to provide differentially private responses to
queries, within a decision-theoretic framework. Finally, we provide bounds on
the utility and on the distinguishability of datasets. The latter are
complemented by a novel use of Le Cam's method to obtain lower bounds. All our
general results hold for arbitrary database metrics, including those for the
common definition of differential privacy. For specific choices of the metric,
we give a number of examples satisfying our assumptions.
| no_new_dataset | 0.945801 |
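The central idea in the entry above (1306.1066), that a single posterior sample can serve as a privacy-preserving response, is easy to picture on a Beta-Bernoulli toy model. The sketch below only illustrates the mechanism's shape; the paper's actual guarantees depend on conditions on the prior that this toy does not verify.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_sample_release(data, a=2.0, b=2.0):
    # Release one draw from the Beta posterior over the Bernoulli parameter,
    # instead of the exact empirical mean. The prior (a, b) tempers how much
    # the response reveals about the data.
    return rng.beta(a + data.sum(), b + len(data) - data.sum())

data = rng.integers(0, 2, size=100)
neighbor = data.copy()
neighbor[0] ^= 1   # neighboring dataset: a single record changed

print("release on data:    ", round(posterior_sample_release(data), 3))
print("release on neighbor:", round(posterior_sample_release(neighbor), 3))
```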
1607.08329 | Jiongqian Liang | Jiongqian Liang and Srinivasan Parthasarathy | Robust Contextual Outlier Detection: Where Context Meets Sparsity | 11 pages. Extended version of CIKM'16 paper | null | null | null | cs.DB cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Outlier detection is a fundamental data science task with applications
ranging from data cleaning to network security. Given the fundamental nature of
the task, this has been the subject of much research. Recently, a new class of
outlier detection algorithms has emerged, called {\it contextual outlier
detection}, and has shown improved performance when studying anomalous behavior
in a specific context. However, as we point out in this article, such
approaches have limited applicability in situations where the context is sparse
(i.e. lacking a suitable frame of reference). Moreover, approaches developed to
date do not scale to large datasets. To address these problems, here we propose
a novel and robust alternative to the state-of-the-art, called RObust
Contextual Outlier Detection (ROCOD). We utilize a local and global behavioral
model based on the relevant contexts, which is then integrated in a natural and
robust fashion. We also present several optimizations to improve the
scalability of the approach. We run ROCOD on both synthetic and real-world
datasets and demonstrate that it outperforms other competitive baselines on the
axes of efficacy and efficiency (40X speedup compared to modern contextual
outlier detection methods). We also drill down and perform a fine-grained
analysis to shed light on the rationale for the performance gains of ROCOD and
reveal its effectiveness when handling objects with sparse contexts.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2016 06:40:30 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2016 03:47:51 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Dec 2016 21:52:12 GMT"
}
] | 2016-12-26T00:00:00 | [
[
"Liang",
"Jiongqian",
""
],
[
"Parthasarathy",
"Srinivasan",
""
]
] | TITLE: Robust Contextual Outlier Detection: Where Context Meets Sparsity
ABSTRACT: Outlier detection is a fundamental data science task with applications
ranging from data cleaning to network security. Given the fundamental nature of
the task, this has been the subject of much research. Recently, a new class of
outlier detection algorithms has emerged, called {\it contextual outlier
detection}, and has shown improved performance when studying anomalous behavior
in a specific context. However, as we point out in this article, such
approaches have limited applicability in situations where the context is sparse
(i.e. lacking a suitable frame of reference). Moreover, approaches developed to
date do not scale to large datasets. To address these problems, here we propose
a novel and robust alternative to the state-of-the-art, called RObust
Contextual Outlier Detection (ROCOD). We utilize a local and global behavioral
model based on the relevant contexts, which is then integrated in a natural and
robust fashion. We also present several optimizations to improve the
scalability of the approach. We run ROCOD on both synthetic and real-world
datasets and demonstrate that it outperforms other competitive baselines on the
axes of efficacy and efficiency (40X speedup compared to modern contextual
outlier detection methods). We also drill down and perform a fine-grained
analysis to shed light on the rationale for the performance gains of ROCOD and
reveal its effectiveness when handling objects with sparse contexts.
| no_new_dataset | 0.947039 |
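A toy version of the local-plus-global idea in the entry above (1607.08329): estimate an object's expected behavior once from its context neighbors and once from a global model on the contextual attributes, blend the two, and score the deviation. This is my simplification for illustration, not ROCOD itself; in particular the blending weight is a fixed assumption here.

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((200, 3))                     # contextual attributes
y = C @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(200)
y[7] += 5.0                                           # plant one outlier

w, *_ = np.linalg.lstsq(C, y, rcond=None)             # global behavioral model

def outlier_score(i, k=10, alpha=0.5):
    d = np.linalg.norm(C - C[i], axis=1)
    local = y[np.argsort(d)[1:k + 1]].mean()          # context-neighbor mean
    expected = alpha * local + (1 - alpha) * (C[i] @ w)
    return abs(y[i] - expected)

print(max(range(len(y)), key=outlier_score))          # prints 7
```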
1612.01251 | Pedro Tabacof | Ramon Oliveira, Pedro Tabacof, Eduardo Valle | Known Unknowns: Uncertainty Quality in Bayesian Neural Networks | Workshop on Bayesian Deep Learning, NIPS 2016, Barcelona, Spain;
EDIT: Changed analysis from Logit-AUC space to AUC (with changes to Figs. 2
and 3) | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We evaluate the uncertainty quality in neural networks using anomaly
detection. We extract uncertainty measures (e.g. entropy) from the predictions
of candidate models, use those measures as features for an anomaly detector,
and gauge how well the detector differentiates known from unknown classes. We
assign higher uncertainty quality to candidate models that lead to better
detectors. We also propose a novel method for sampling a variational
approximation of a Bayesian neural network, called One-Sample Bayesian
Approximation (OSBA). We experiment on two datasets, MNIST and CIFAR10. We
compare the following candidate neural network models: Maximum Likelihood,
Bayesian Dropout, OSBA, and --- for MNIST --- the standard variational
approximation. We show that Bayesian Dropout and OSBA provide better
uncertainty information than Maximum Likelihood, and are essentially equivalent
to the standard variational approximation, but much faster.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2016 05:21:42 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Dec 2016 00:24:27 GMT"
}
] | 2016-12-26T00:00:00 | [
[
"Oliveira",
"Ramon",
""
],
[
"Tabacof",
"Pedro",
""
],
[
"Valle",
"Eduardo",
""
]
] | TITLE: Known Unknowns: Uncertainty Quality in Bayesian Neural Networks
ABSTRACT: We evaluate the uncertainty quality in neural networks using anomaly
detection. We extract uncertainty measures (e.g. entropy) from the predictions
of candidate models, use those measures as features for an anomaly detector,
and gauge how well the detector differentiates known from unknown classes. We
assign higher uncertainty quality to candidate models that lead to better
detectors. We also propose a novel method for sampling a variational
approximation of a Bayesian neural network, called One-Sample Bayesian
Approximation (OSBA). We experiment on two datasets, MNIST and CIFAR10. We
compare the following candidate neural network models: Maximum Likelihood,
Bayesian Dropout, OSBA, and --- for MNIST --- the standard variational
approximation. We show that Bayesian Dropout and OSBA provide better
uncertainty information than Maximum Likelihood, and are essentially equivalent
to the standard variational approximation, but much faster.
| no_new_dataset | 0.955277 |
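The uncertainty measure the entry above (1612.01251) extracts from predictions, e.g. entropy, is a one-liner; the sketch below shows the predictive entropy that could feed such an anomaly detector. The example probabilities are made up.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    # Entropy of a softmax prediction; higher means more uncertain.
    probs = np.clip(probs, eps, 1.0)
    return -(probs * np.log(probs)).sum(axis=-1)

known_like = np.array([0.97, 0.01, 0.02])     # confident: low entropy
unknown_like = np.array([0.34, 0.33, 0.33])   # diffuse: high entropy
print(predictive_entropy(known_like), predictive_entropy(unknown_like))
```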
1612.07833 | Radu Soricut | Nan Ding and Sebastian Goodman and Fei Sha and Radu Soricut | Understanding Image and Text Simultaneously: a Dual Vision-Language
Machine Comprehension Task | 11 pages | null | null | null | cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new multi-modal task for computer systems, posed as a combined
vision-language comprehension challenge: identifying the most suitable text
describing a scene, given several similar options. Accomplishing the task
entails demonstrating comprehension beyond just recognizing "keywords" (or
key-phrases) and their corresponding visual concepts. Instead, it requires an
alignment between the representations of the two modalities that achieves a
visually-grounded "understanding" of various linguistic elements and their
dependencies. This new task also admits an easy-to-compute and well-studied
metric: the accuracy in detecting the true target among the decoys.
The paper makes several contributions: an effective and extensible mechanism
for generating decoys from (human-created) image captions; an instance of
applying this mechanism, yielding a large-scale machine comprehension dataset
(based on the COCO images and captions) that we make publicly available; human
evaluation results on this dataset, informing a performance upper-bound; and
several baseline and competitive learning approaches that illustrate the
utility of the proposed task and dataset in advancing both image and language
comprehension. We also show that, in a multi-task learning setting, the
performance on the proposed task is positively correlated with the end-to-end
task of image captioning.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2016 22:44:17 GMT"
}
] | 2016-12-26T00:00:00 | [
[
"Ding",
"Nan",
""
],
[
"Goodman",
"Sebastian",
""
],
[
"Sha",
"Fei",
""
],
[
"Soricut",
"Radu",
""
]
] | TITLE: Understanding Image and Text Simultaneously: a Dual Vision-Language
Machine Comprehension Task
ABSTRACT: We introduce a new multi-modal task for computer systems, posed as a combined
vision-language comprehension challenge: identifying the most suitable text
describing a scene, given several similar options. Accomplishing the task
entails demonstrating comprehension beyond just recognizing "keywords" (or
key-phrases) and their corresponding visual concepts. Instead, it requires an
alignment between the representations of the two modalities that achieves a
visually-grounded "understanding" of various linguistic elements and their
dependencies. This new task also admits an easy-to-compute and well-studied
metric: the accuracy in detecting the true target among the decoys.
The paper makes several contributions: an effective and extensible mechanism
for generating decoys from (human-created) image captions; an instance of
applying this mechanism, yielding a large-scale machine comprehension dataset
(based on the COCO images and captions) that we make publicly available; human
evaluation results on this dataset, informing a performance upper-bound; and
several baseline and competitive learning approaches that illustrate the
utility of the proposed task and dataset in advancing both image and language
comprehension. We also show that, in a multi-task learning setting, the
performance on the proposed task is positively correlated with the end-to-end
task of image captioning.
| new_dataset | 0.960398 |
1612.07896 | Christopher Burges | C.J.C. Burges, T. Hart, Z. Yang, S. Cucerzan, R.W. White, A.
Pastusiak, J. Lewis | A Base Camp for Scaling AI | null | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern statistical machine learning (SML) methods share a major limitation
with the early approaches to AI: there is no scalable way to adapt them to new
domains. Human learning solves this in part by leveraging a rich, shared,
updateable world model. Such scalability requires modularity: updating part of
the world model should not impact unrelated parts. We have argued that such
modularity will require both "correctability" (so that errors can be corrected
without introducing new errors) and "interpretability" (so that we can
understand what components need correcting).
To achieve this, one could attempt to adapt state of the art SML systems to
be interpretable and correctable; or one could see how far the simplest
possible interpretable, correctable learning methods can take us, and try to
control the limitations of SML methods by applying them only where needed. Here
we focus on the latter approach and we investigate two main ideas: "Teacher
Assisted Learning", which leverages crowd sourcing to learn language; and
"Factored Dialog Learning", which factors the process of application
development into roles where the language competencies needed are isolated,
enabling non-experts to quickly create new applications.
We test these ideas in an "Automated Personal Assistant" (APA) setting, with
two scenarios: that of detecting user intent from a user-APA dialog; and that
of creating a class of event reminder applications, where a non-expert
"teacher" can then create specific apps. For the intent detection task, we use
a dataset of a thousand labeled utterances from user dialogs with Cortana, and
we show that our approach matches state of the art SML methods, but in addition
provides full transparency: the whole (editable) model can be summarized on one
human-readable page. For the reminder app task, we ran small user studies to
verify the efficacy of the approach.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2016 08:03:20 GMT"
}
] | 2016-12-26T00:00:00 | [
[
"Burges",
"C. J. C.",
""
],
[
"Hart",
"T.",
""
],
[
"Yang",
"Z.",
""
],
[
"Cucerzan",
"S.",
""
],
[
"White",
"R. W.",
""
],
[
"Pastusiak",
"A.",
""
],
[
"Lewis",
"J.",
""
]
] | TITLE: A Base Camp for Scaling AI
ABSTRACT: Modern statistical machine learning (SML) methods share a major limitation
with the early approaches to AI: there is no scalable way to adapt them to new
domains. Human learning solves this in part by leveraging a rich, shared,
updateable world model. Such scalability requires modularity: updating part of
the world model should not impact unrelated parts. We have argued that such
modularity will require both "correctability" (so that errors can be corrected
without introducing new errors) and "interpretability" (so that we can
understand what components need correcting).
To achieve this, one could attempt to adapt state of the art SML systems to
be interpretable and correctable; or one could see how far the simplest
possible interpretable, correctable learning methods can take us, and try to
control the limitations of SML methods by applying them only where needed. Here
we focus on the latter approach and we investigate two main ideas: "Teacher
Assisted Learning", which leverages crowd sourcing to learn language; and
"Factored Dialog Learning", which factors the process of application
development into roles where the language competencies needed are isolated,
enabling non-experts to quickly create new applications.
We test these ideas in an "Automated Personal Assistant" (APA) setting, with
two scenarios: that of detecting user intent from a user-APA dialog; and that
of creating a class of event reminder applications, where a non-expert
"teacher" can then create specific apps. For the intent detection task, we use
a dataset of a thousand labeled utterances from user dialogs with Cortana, and
we show that our approach matches state of the art SML methods, but in addition
provides full transparency: the whole (editable) model can be summarized on one
human-readable page. For the reminder app task, we ran small user studies to
verify the efficacy of the approach.
| no_new_dataset | 0.887156 |
1612.07978 | Hengkai Guo | Hengkai Guo, Guijin Wang, Xinghao Chen | Two-stream convolutional neural network for accurate RGB-D fingertip
detection using depth and edge information | Accepted by ICIP 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate detection of fingertips in depth images is critical for
human-computer interaction. In this paper, we present a novel two-stream
convolutional neural network (CNN) for RGB-D fingertip detection. First, an edge
image is extracted from the raw depth image using a random forest. Then the edge
information is combined with depth information in our CNN structure. We study
several fusion approaches and suggest a slow fusion strategy as a promising way
of fingertip detection. As shown in our experiments, our real-time algorithm
outperforms state-of-the-art fingertip detection methods on the public dataset
HandNet with an average 3D error of 9.9mm, and shows comparable accuracy of
fingertip estimation on NYU hand dataset.
| [
{
"version": "v1",
"created": "Fri, 23 Dec 2016 14:17:31 GMT"
}
] | 2016-12-26T00:00:00 | [
[
"Guo",
"Hengkai",
""
],
[
"Wang",
"Guijin",
""
],
[
"Chen",
"Xinghao",
""
]
] | TITLE: Two-stream convolutional neural network for accurate RGB-D fingertip
detection using depth and edge information
ABSTRACT: Accurate detection of fingertips in depth images is critical for
human-computer interaction. In this paper, we present a novel two-stream
convolutional neural network (CNN) for RGB-D fingertip detection. First, an edge
image is extracted from the raw depth image using a random forest. Then the edge
information is combined with depth information in our CNN structure. We study
several fusion approaches and suggest a slow fusion strategy as a promising way
of fingertip detection. As shown in our experiments, our real-time algorithm
outperforms state-of-the-art fingertip detection methods on the public dataset
HandNet with an average 3D error of 9.9mm, and shows comparable accuracy of
fingertip estimation on NYU hand dataset.
| no_new_dataset | 0.953449 |
1510.00921 | Chunhua Shen | Lingqiao Liu, Chunhua Shen, Anton van den Hengel | Cross-convolutional-layer Pooling for Image Recognition | Fixed typos. Journal extension of arXiv:1411.7466. Accepted to IEEE
Transactions on Pattern Analysis and Machine Intelligence | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies have shown that a Deep Convolutional Neural Network (DCNN)
pretrained on a large image dataset can be used as a universal image
descriptor, and that doing so leads to impressive performance for a variety of
image classification tasks. Most of these studies adopt activations from a
single DCNN layer, usually the fully-connected layer, as the image
representation. In this paper, we propose a novel way to extract image
representations from two consecutive convolutional layers: one layer is
utilized for local feature extraction and the other serves as guidance to pool
the extracted features. By taking different viewpoints of convolutional layers,
we further develop two schemes to realize this idea. The first one directly
uses convolutional layers from a DCNN. The second one applies the pretrained
CNN on densely sampled image regions and treats the fully-connected activations
of each image region as convolutional feature activations. We then train
another convolutional layer on top of that as the pooling-guidance
convolutional layer. By applying our method to three popular visual
classification tasks, we find our first scheme tends to perform better on the
applications which need strong discrimination on subtle object patterns within
small regions while the latter excels in the cases that require discrimination
on category-level patterns. Overall, the proposed method achieves superior
performance over existing ways of extracting image representations from a DCNN.
| [
{
"version": "v1",
"created": "Sun, 4 Oct 2015 10:27:36 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2016 07:37:09 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Oct 2016 05:48:16 GMT"
},
{
"version": "v4",
"created": "Wed, 7 Dec 2016 00:00:42 GMT"
},
{
"version": "v5",
"created": "Thu, 8 Dec 2016 01:31:05 GMT"
},
{
"version": "v6",
"created": "Thu, 22 Dec 2016 04:43:19 GMT"
}
] | 2016-12-23T00:00:00 | [
[
"Liu",
"Lingqiao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: Cross-convolutional-layer Pooling for Image Recognition
ABSTRACT: Recent studies have shown that a Deep Convolutional Neural Network (DCNN)
pretrained on a large image dataset can be used as a universal image
descriptor, and that doing so leads to impressive performance for a variety of
image classification tasks. Most of these studies adopt activations from a
single DCNN layer, usually the fully-connected layer, as the image
representation. In this paper, we propose a novel way to extract image
representations from two consecutive convolutional layers: one layer is
utilized for local feature extraction and the other serves as guidance to pool
the extracted features. By taking different viewpoints of convolutional layers,
we further develop two schemes to realize this idea. The first one directly
uses convolutional layers from a DCNN. The second one applies the pretrained
CNN on densely sampled image regions and treats the fully-connected activations
of each image region as convolutional feature activations. We then train
another convolutional layer on top of that as the pooling-guidance
convolutional layer. By applying our method to three popular visual
classification tasks, we find our first scheme tends to perform better on the
applications which need strong discrimination on subtle object patterns within
small regions while the latter excels in the cases that require discrimination
on category-level patterns. Overall, the proposed method achieves superior
performance over existing ways of extracting image representations from a DCNN.
| no_new_dataset | 0.951097 |
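The pooling scheme in the entry above (1510.00921), one layer's activations pooled under the guidance of another layer's channels, reduces to a single tensor contraction. The sketch below is my reading of that operation on random activations, not the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.random((64, 14, 14))   # local features from one conv layer: C1 x H x W
G = rng.random((32, 14, 14))   # guidance activations from the next: C2 x H x W

# Each guidance channel k acts as a spatial weighting map over the local
# features: pooled[k] = sum_{h,w} G[k, h, w] * F[:, h, w].
pooled = np.einsum("khw,chw->kc", G, F)

descriptor = pooled.reshape(-1)   # concatenated image representation
print(descriptor.shape)           # (32 * 64,) = (2048,)
```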
1606.07372 | Noah Apthorpe | Noah J. Apthorpe, Alexander J. Riordan, Rob E. Aguilar, Jan Homann, Yi
Gu, David W. Tank, H. Sebastian Seung | Automatic Neuron Detection in Calcium Imaging Data Using Convolutional
Networks | 9 pages, 5 figures, 2 ancillary files; minor changes for camera-ready
version. appears in Advances in Neural Information Processing Systems 29
(NIPS 2016) | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Calcium imaging is an important technique for monitoring the activity of
thousands of neurons simultaneously. As calcium imaging datasets grow in size,
automated detection of individual neurons is becoming important. Here we apply
a supervised learning approach to this problem and show that convolutional
networks can achieve near-human accuracy and superhuman speed. Accuracy is
superior to the popular PCA/ICA method based on precision and recall relative
to ground truth annotation by a human expert. These results suggest that
convolutional networks are an efficient and flexible tool for the analysis of
large-scale calcium imaging data.
| [
{
"version": "v1",
"created": "Thu, 23 Jun 2016 16:49:40 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2016 23:40:08 GMT"
}
] | 2016-12-23T00:00:00 | [
[
"Apthorpe",
"Noah J.",
""
],
[
"Riordan",
"Alexander J.",
""
],
[
"Aguilar",
"Rob E.",
""
],
[
"Homann",
"Jan",
""
],
[
"Gu",
"Yi",
""
],
[
"Tank",
"David W.",
""
],
[
"Seung",
"H. Sebastian",
""
]
] | TITLE: Automatic Neuron Detection in Calcium Imaging Data Using Convolutional
Networks
ABSTRACT: Calcium imaging is an important technique for monitoring the activity of
thousands of neurons simultaneously. As calcium imaging datasets grow in size,
automated detection of individual neurons is becoming important. Here we apply
a supervised learning approach to this problem and show that convolutional
networks can achieve near-human accuracy and superhuman speed. Accuracy is
superior to the popular PCA/ICA method based on precision and recall relative
to ground truth annotation by a human expert. These results suggest that
convolutional networks are an efficient and flexible tool for the analysis of
large-scale calcium imaging data.
| no_new_dataset | 0.94887 |
1608.03714 | Haiping Huang | Haiping Huang and Taro Toyoizumi | Unsupervised feature learning from finite data by message passing:
discontinuous versus continuous phase transition | 8 pages, 7 figures (5 pages, 4 figures in the main text and 3 pages
of appendix) | Phys. Rev. E 94, 062310 (2016) | 10.1103/PhysRevE.94.062310 | null | cond-mat.dis-nn cond-mat.stat-mech cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised neural network learning extracts hidden features from unlabeled
training data. This is used as a pretraining step for further supervised
learning in deep networks. Hence, understanding unsupervised learning is of
fundamental importance. Here, we study the unsupervised learning from a finite
number of data, based on the restricted Boltzmann machine learning. Our study
inspires an efficient message passing algorithm to infer the hidden feature,
and estimate the entropy of candidate features consistent with the data. Our
analysis reveals that the learning requires only a few data if the feature is
salient and extensively many if the feature is weak. Moreover, the entropy of
candidate features monotonically decreases with data size and becomes negative
(i.e., entropy crisis) before the message passing becomes unstable, suggesting
a discontinuous phase transition. In terms of convergence time of the message
passing algorithm, the unsupervised learning exhibits an easy-hard-easy
phenomenon as the training data size increases. All these properties are
reproduced in an approximate Hopfield model, with an exception that the entropy
crisis is absent, and only continuous phase transition is observed. This key
difference is also confirmed in a handwritten digits dataset. This study
deepens our understanding of unsupervised learning from a finite number of
data, and may provide insights into its role in training deep networks.
| [
{
"version": "v1",
"created": "Fri, 12 Aug 2016 08:35:22 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2016 01:49:13 GMT"
}
] | 2016-12-23T00:00:00 | [
[
"Huang",
"Haiping",
""
],
[
"Toyoizumi",
"Taro",
""
]
] | TITLE: Unsupervised feature learning from finite data by message passing:
discontinuous versus continuous phase transition
ABSTRACT: Unsupervised neural network learning extracts hidden features from unlabeled
training data. This is used as a pretraining step for further supervised
learning in deep networks. Hence, understanding unsupervised learning is of
fundamental importance. Here, we study the unsupervised learning from a finite
number of data, based on the restricted Boltzmann machine learning. Our study
inspires an efficient message passing algorithm to infer the hidden feature,
and estimate the entropy of candidate features consistent with the data. Our
analysis reveals that the learning requires only a few data if the feature is
salient and extensively many if the feature is weak. Moreover, the entropy of
candidate features monotonically decreases with data size and becomes negative
(i.e., entropy crisis) before the message passing becomes unstable, suggesting
a discontinuous phase transition. In terms of convergence time of the message
passing algorithm, the unsupervised learning exhibits an easy-hard-easy
phenomenon as the training data size increases. All these properties are
reproduced in an approximate Hopfield model, with an exception that the entropy
crisis is absent, and only continuous phase transition is observed. This key
difference is also confirmed in a handwritten digits dataset. This study
deepens our understanding of unsupervised learning from a finite number of
data, and may provide insights into its role in training deep networks.
| no_new_dataset | 0.950869 |
1609.00565 | Lingxun Meng | Lingxun Meng, Yan Li, Mengyi Liu and Peng Shu | Skipping Word: A Character-Sequential Representation based Framework for
Question Answering | to be accepted as CIKM2016 short paper | null | 10.1145/2983323.2983861 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent works using artificial neural networks based on word distributed
representation greatly boost the performance of various natural language
learning tasks, especially question answering. However, they also carry
some attendant problems, such as corpus selection for embedding learning,
dictionary transformation for different learning tasks, etc. In this paper, we
propose to straightforwardly model sentences by means of character sequences,
and then utilize convolutional neural networks to integrate character embedding
learning together with point-wise answer selection training. Compared with deep
models pre-trained on word embedding (WE) strategy, our character-sequential
representation (CSR) based method shows a much simpler procedure and more
stable performance across different benchmarks. Extensive experiments on two
benchmark answer selection datasets exhibit the competitive performance
compared with the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 2 Sep 2016 11:57:46 GMT"
}
] | 2016-12-23T00:00:00 | [
[
"Meng",
"Lingxun",
""
],
[
"Li",
"Yan",
""
],
[
"Liu",
"Mengyi",
""
],
[
"Shu",
"Peng",
""
]
] | TITLE: Skipping Word: A Character-Sequential Representation based Framework for
Question Answering
ABSTRACT: Recent works using artificial neural networks based on word distributed
representation greatly boost the performance of various natural language
learning tasks, especially question answering. However, they also carry
some attendant problems, such as corpus selection for embedding learning,
dictionary transformation for different learning tasks, etc. In this paper, we
propose to straightforwardly model sentences by means of character sequences,
and then utilize convolutional neural networks to integrate character embedding
learning together with point-wise answer selection training. Compared with deep
models pre-trained on word embedding (WE) strategy, our character-sequential
representation (CSR) based method shows a much simpler procedure and more
stable performance across different benchmarks. Extensive experiments on two
benchmark answer selection datasets exhibit the competitive performance
compared with the state-of-the-art methods.
| no_new_dataset | 0.945045 |
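The character-sequential input the entry above (1609.00565) argues for can be shown in a few lines: sentences become fixed-length sequences of character ids over a small alphabet, with no word dictionary to maintain. The alphabet and padding convention below are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

alphabet = "abcdefghijklmnopqrstuvwxyz0123456789 .,?!'"
char_to_id = {c: i + 1 for i, c in enumerate(alphabet)}  # 0 = pad / unknown

def encode(sentence, max_len=64):
    # Map each character to an id, truncate, and pad to a fixed length.
    ids = [char_to_id.get(c, 0) for c in sentence.lower()[:max_len]]
    return np.array(ids + [0] * (max_len - len(ids)))

question = encode("what is the capital of france?")
answer = encode("paris is the capital of france.")
print(question.shape, question[:10])  # fixed-length input for a char-level CNN
```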
1612.07405 | Ioannis Psarros | Georgia Avarikioti, Ioannis Z. Emiris, Ioannis Psarros, and Georgios
Samaras | Practical linear-space Approximate Near Neighbors in high dimension | 15 pages | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The $c$-approximate Near Neighbor problem in high dimensional spaces has been
mainly addressed by Locality Sensitive Hashing (LSH), which offers polynomial
dependence on the dimension, query time sublinear in the size of the dataset,
and subquadratic space requirement. For practical applications, linear space is
typically imperative. Most previous work in the linear space regime focuses on
the case that $c$ exceeds $1$ by a constant term. In a recently accepted paper,
optimal bounds have been achieved for any $c>1$ \cite{ALRW17}.
Towards practicality, we present a new and simple data structure using linear
space and sublinear query time for any $c>1$ including $c\to 1^+$. Given an LSH
family of functions for some metric space, we randomly project points to the
Hamming cube of dimension $\log n$, where $n$ is the number of input points.
The projected space contains strings which serve as keys for buckets containing
the input points. The query algorithm simply projects the query point, then
examines points which are assigned to the same or nearby vertices on the
Hamming cube. We analyze in detail the query time for some standard LSH
families.
To illustrate our claim of practicality, we offer an open-source
implementation in {\tt C++}, and report on several experiments in dimension up
to 1000 and $n$ up to $10^6$. Our algorithm is one to two orders of magnitude
faster than brute force search. Experiments confirm the sublinear dependence on
$n$ and the linear dependence on the dimension. We have compared against
state-of-the-art LSH-based library {\tt FALCONN}: our search is somewhat
slower, but memory usage and preprocessing time are significantly smaller.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2016 00:55:29 GMT"
}
] | 2016-12-23T00:00:00 | [
[
"Avarikioti",
"Georgia",
""
],
[
"Emiris",
"Ioannis Z.",
""
],
[
"Psarros",
"Ioannis",
""
],
[
"Samaras",
"Georgios",
""
]
] | TITLE: Practical linear-space Approximate Near Neighbors in high dimension
ABSTRACT: The $c$-approximate Near Neighbor problem in high dimensional spaces has been
mainly addressed by Locality Sensitive Hashing (LSH), which offers polynomial
dependence on the dimension, query time sublinear in the size of the dataset,
and subquadratic space requirement. For practical applications, linear space is
typically imperative. Most previous work in the linear space regime focuses on
the case that $c$ exceeds $1$ by a constant term. In a recently accepted paper,
optimal bounds have been achieved for any $c>1$ \cite{ALRW17}.
Towards practicality, we present a new and simple data structure using linear
space and sublinear query time for any $c>1$ including $c\to 1^+$. Given an LSH
family of functions for some metric space, we randomly project points to the
Hamming cube of dimension $\log n$, where $n$ is the number of input points.
The projected space contains strings which serve as keys for buckets containing
the input points. The query algorithm simply projects the query point, then
examines points which are assigned to the same or nearby vertices on the
Hamming cube. We analyze in detail the query time for some standard LSH
families.
To illustrate our claim of practicality, we offer an open-source
implementation in {\tt C++}, and report on several experiments in dimension up
to 1000 and $n$ up to $10^6$. Our algorithm is one to two orders of magnitude
faster than brute force search. Experiments confirm the sublinear dependence on
$n$ and the linear dependence on the dimension. We have compared against
state-of-the-art LSH-based library {\tt FALCONN}: our search is somewhat
slower, but memory usage and preprocessing time are significantly smaller.
| no_new_dataset | 0.952353 |
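The data structure in the entry above (1612.07405) admits a very small demonstration: hash points to a roughly log2(n)-bit Hamming code via random hyperplanes (one standard LSH family for angular distance), bucket them by code, and probe the query's bucket plus its Hamming-distance-1 neighbors. The sketch below is a toy rendition under those assumptions, not the paper's C++ implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_index(points):
    n, d = points.shape
    k = max(1, int(np.ceil(np.log2(n))))     # Hamming-cube dimension ~ log n
    planes = rng.standard_normal((k, d))     # one random hyperplane per bit
    codes = (points @ planes.T > 0).astype(np.uint8)
    buckets = {}
    for i, code in enumerate(codes):
        buckets.setdefault(bytes(code), []).append(i)
    return planes, buckets

def query(q, planes, buckets, points):
    code = (q @ planes.T > 0).astype(np.uint8)
    candidates = list(buckets.get(bytes(code), []))
    for bit in range(len(code)):             # probe Hamming-distance-1 codes
        flipped = code.copy()
        flipped[bit] ^= 1
        candidates += buckets.get(bytes(flipped), [])
    if not candidates:
        return None
    dists = np.linalg.norm(points[candidates] - q, axis=1)
    return candidates[int(np.argmin(dists))]

points = rng.standard_normal((1000, 64))
planes, buckets = build_index(points)
noisy = points[42] + 0.01 * rng.standard_normal(64)
print(query(noisy, planes, buckets, points))  # 42, with high probability
```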
1612.07659 | Youngjoo Seo | Youngjoo Seo, Michaël Defferrard, Pierre Vandergheynst, Xavier
Bresson | Structured Sequence Modeling with Graph Convolutional Recurrent Networks | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep
learning model able to predict structured sequences of data. Precisely, GCRN is
a generalization of classical recurrent neural networks (RNN) to data
structured by an arbitrary graph. Such structured sequences can represent
series of frames in videos, spatio-temporal measurements on a network of
sensors, or random walks on a vocabulary graph for natural language modeling.
The proposed model combines convolutional neural networks (CNN) on graphs to
identify spatial structures and RNN to find dynamic patterns. We study two
possible architectures of GCRN, and apply the models to two practical problems:
predicting moving MNIST data, and modeling natural language with the Penn
Treebank dataset. Experiments show that exploiting simultaneously graph spatial
and dynamic information about data can improve both precision and learning
speed.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2016 15:53:57 GMT"
}
] | 2016-12-23T00:00:00 | [
[
"Seo",
"Youngjoo",
""
],
[
"Defferrard",
"Michaël",
""
],
[
"Vandergheynst",
"Pierre",
""
],
[
"Bresson",
"Xavier",
""
]
] | TITLE: Structured Sequence Modeling with Graph Convolutional Recurrent Networks
ABSTRACT: This paper introduces Graph Convolutional Recurrent Network (GCRN), a deep
learning model able to predict structured sequences of data. Precisely, GCRN is
a generalization of classical recurrent neural networks (RNN) to data
structured by an arbitrary graph. Such structured sequences can represent
series of frames in videos, spatio-temporal measurements on a network of
sensors, or random walks on a vocabulary graph for natural language modeling.
The proposed model combines convolutional neural networks (CNN) on graphs to
identify spatial structures and RNN to find dynamic patterns. We study two
possible architectures of GCRN, and apply the models to two practical problems:
predicting moving MNIST data, and modeling natural language with the Penn
Treebank dataset. Experiments show that simultaneously exploiting graph spatial
and dynamic information about data can improve both precision and learning
speed.
| no_new_dataset | 0.951684 |
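As a rough illustration of the GCRN idea in the record above, the sketch below replaces the dense matrix products of a vanilla RNN cell with one-hop graph convolutions over a fixed normalized adjacency. The paper's actual models use Chebyshev graph filters and LSTM-style gating, so this is only a caricature under those simplifying assumptions.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops, as in standard
    graph convolutions: D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(d))
    return Dinv @ A @ Dinv

def gcrn_step(A_hat, X_t, H_prev, Wx, Wh, b):
    """One recurrent step: graph-convolve the input and the previous hidden
    state over the graph, then apply a tanh nonlinearity."""
    return np.tanh(A_hat @ X_t @ Wx + A_hat @ H_prev @ Wh + b)

rng = np.random.default_rng(0)
n_nodes, f_in, f_hid, T = 5, 3, 8, 4
A = (rng.random((n_nodes, n_nodes)) > 0.6).astype(float)
A = np.maximum(A, A.T)                       # undirected graph
A_hat = normalized_adjacency(A)

Wx = rng.standard_normal((f_in, f_hid)) * 0.1
Wh = rng.standard_normal((f_hid, f_hid)) * 0.1
b = np.zeros(f_hid)

H = np.zeros((n_nodes, f_hid))
for t in range(T):                           # roll the recurrence over a sequence
    X_t = rng.standard_normal((n_nodes, f_in))
    H = gcrn_step(A_hat, X_t, H, Wx, Wh, b)
print(H.shape)                               # (5, 8)
```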
1602.02845 | Carlos Riquelme Ruiz | Carlos Riquelme, Ramesh Johari, Baosen Zhang | Online Active Linear Regression via Thresholding | Published in AAAI 2017 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of online active learning to collect data for
regression modeling. Specifically, we consider a decision maker with a limited
experimentation budget who must efficiently learn an underlying linear
population model. Our main contribution is a novel threshold-based algorithm
for the selection of the most informative observations; we characterize its performance
and fundamental lower bounds. We extend the algorithm and its guarantees to
sparse linear regression in high-dimensional settings. Simulations suggest the
algorithm is remarkably robust: it provides significant benefits over passive
random sampling in real-world datasets that exhibit high nonlinearity and high
dimensionality --- significantly reducing both the mean and variance of the
squared error.
| [
{
"version": "v1",
"created": "Tue, 9 Feb 2016 02:51:12 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Feb 2016 17:53:33 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Jun 2016 18:36:58 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Dec 2016 13:36:50 GMT"
}
] | 2016-12-22T00:00:00 | [
[
"Riquelme",
"Carlos",
""
],
[
"Johari",
"Ramesh",
""
],
[
"Zhang",
"Baosen",
""
]
] | TITLE: Online Active Linear Regression via Thresholding
ABSTRACT: We consider the problem of online active learning to collect data for
regression modeling. Specifically, we consider a decision maker with a limited
experimentation budget who must efficiently learn an underlying linear
population model. Our main contribution is a novel threshold-based algorithm
for the selection of the most informative observations; we characterize its performance
and fundamental lower bounds. We extend the algorithm and its guarantees to
sparse linear regression in high-dimensional settings. Simulations suggest the
algorithm is remarkably robust: it provides significant benefits over passive
random sampling in real-world datasets that exhibit high nonlinearity and high
dimensionality --- significantly reducing both the mean and variance of the
squared error.
| no_new_dataset | 0.948298 |
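The thresholding idea above admits a very small sketch: stream over unlabeled covariates, pay for a label only when the observation looks informative, then fit ordinary least squares on the selected subset. The norm test and the threshold value here are illustrative stand-ins for the rule the paper actually characterizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def active_ols(stream_x, stream_y, budget, threshold):
    """Pass over a stream, querying the label only for observations whose
    norm exceeds the threshold, until the labeling budget is spent."""
    X_sel, y_sel = [], []
    for x, y in zip(stream_x, stream_y):
        if len(X_sel) >= budget:
            break
        if np.linalg.norm(x) >= threshold:   # deemed informative by magnitude
            X_sel.append(x)
            y_sel.append(y)                  # "query" the label, spending budget
    X_sel, y_sel = np.array(X_sel), np.array(y_sel)
    beta, *_ = np.linalg.lstsq(X_sel, y_sel, rcond=None)
    return beta

d, n = 5, 2000
beta_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = active_ols(X, y, budget=100, threshold=np.sqrt(d) * 1.2)
print(np.linalg.norm(beta_hat - beta_true))
```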
1608.02658 | Abbas Shojaee | Abbas Shojaee, Isuru Ranasinghe, Alireza Ani | Revisiting Causality Inference in Memory-less Transition Networks | This edition is improved with further details in the discussion
section and Figure 1. Other authors will be added in the final revision; for
feedback, opinions, or questions please contact: [email protected] OR
[email protected] | null | null | null | stat.ML cs.AI nlin.CD physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several methods exist to infer causal networks from massive volumes of
observational data. However, almost all existing methods require a considerable
length of time series data to capture cause and effect relationships. In
contrast, memory-less transition networks or Markov Chain data, which refers to
one-step transitions to and from an event, have not been explored for causality
inference even though such data is widely available. We find that causal
network can be inferred from characteristics of four unique distribution zones
around each event. We call this Composition of Transitions and show that cause,
effect, and random events exhibit different behavior in their compositions. We
applied machine learning models to learn these different behaviors and to infer
causality. We name this new method Causality Inference using Composition of
Transitions (CICT). To evaluate CICT, we used an administrative inpatient
healthcare dataset to set up a network of patient transitions between
different diagnoses. We show that CICT is highly accurate in inferring whether
the transition between a pair of events is causal or random and performs well
in identifying the direction of causality in a bi-directional association.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 23:46:59 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Aug 2016 21:38:17 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Dec 2016 16:33:44 GMT"
}
] | 2016-12-22T00:00:00 | [
[
"Shojaee",
"Abbas",
""
],
[
"Ranasinghe",
"Isuru",
""
],
[
"Ani",
"Alireza",
""
]
] | TITLE: Revisiting Causality Inference in Memory-less Transition Networks
ABSTRACT: Several methods exist to infer causal networks from massive volumes of
observational data. However, almost all existing methods require a considerable
length of time series data to capture cause and effect relationships. In
contrast, memory-less transition networks or Markov Chain data, which refers to
one-step transitions to and from an event, have not been explored for causality
inference even though such data is widely available. We find that a causal
network can be inferred from characteristics of four unique distribution zones
around each event. We call this Composition of Transitions and show that cause,
effect, and random events exhibit different behavior in their compositions. We
applied machine learning models to learn these different behaviors and to infer
causality. We name this new method Causality Inference using Composition of
Transitions (CICT). To evaluate CICT, we used an administrative inpatient
healthcare dataset to set up a network of patient transitions between
different diagnoses. We show that CICT is highly accurate in inferring whether
the transition between a pair of events is causal or random and performs well
in identifying the direction of causality in a bi-directional association.
| no_new_dataset | 0.950503 |
1611.07909 | Shervin Minaee | Shervin Minaee, Yao Wang | Image Segmentation Using Overlapping Group Sparsity | arXiv admin note: substantial text overlap with arXiv:1602.02434.
appears in IEEE Signal Processing in Medicine and Biology Symposium, 2016 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sparse decomposition has been widely used for different applications, such as
source separation, image classification and image denoising. This paper
presents a new algorithm for segmentation of an image into background and
foreground text and graphics using sparse decomposition. First, the background
is represented using a suitable smooth model, which is a linear combination of
a few smoothly varying basis functions, and the foreground text and graphics
are modeled as a sparse component overlaid on the smooth background. Then the
background and foreground are separated using a sparse decomposition framework
and imposing some prior information, which promotes the smoothness of the
background, and the sparsity and connectivity of foreground pixels. This
algorithm has been tested on a dataset of images extracted from HEVC standard
test sequences for screen content coding, and is shown to outperform prior
methods, including least absolute deviation fitting, k-means clustering based
segmentation in DjVu, and shape primitive extraction and coding algorithm.
| [
{
"version": "v1",
"created": "Wed, 23 Nov 2016 18:08:33 GMT"
},
{
"version": "v2",
"created": "Sat, 26 Nov 2016 03:42:36 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Dec 2016 15:38:42 GMT"
},
{
"version": "v4",
"created": "Wed, 21 Dec 2016 15:36:41 GMT"
}
] | 2016-12-22T00:00:00 | [
[
"Minaee",
"Shervin",
""
],
[
"Wang",
"Yao",
""
]
] | TITLE: Image Segmentation Using Overlapping Group Sparsity
ABSTRACT: Sparse decomposition has been widely used for different applications, such as
source separation, image classification and image denoising. This paper
presents a new algorithm for segmentation of an image into background and
foreground text and graphics using sparse decomposition. First, the background
is represented using a suitable smooth model, which is a linear combination of
a few smoothly varying basis functions, and the foreground text and graphics
are modeled as a sparse component overlaid on the smooth background. Then the
background and foreground are separated using a sparse decomposition framework
and imposing some prior information, which promotes the smoothness of the
background, and the sparsity and connectivity of foreground pixels. This
algorithm has been tested on a dataset of images extracted from HEVC standard
test sequences for screen content coding, and is shown to outperform prior
methods, including least absolute deviation fitting, k-means clustering based
segmentation in DjVu, and shape primitive extraction and coding algorithm.
| no_new_dataset | 0.947817 |
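The smooth-plus-sparse decomposition in the record above can be caricatured with a polynomial background model: fit a few smoothly varying basis functions by least squares and mark large residuals as foreground. The paper's actual formulation adds sparsity and connectivity priors; this sketch keeps only the core idea, with an illustrative residual threshold.

```python
import numpy as np

def segment(image, degree=2, tau=0.15):
    """Fit a smooth polynomial background to the image by least squares and
    mark pixels with a large residual as foreground text/graphics."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xx = xx.ravel() / w
    yy = yy.ravel() / h
    # Basis of smoothly varying functions: all monomials x^i * y^j, i+j <= degree.
    basis = np.stack([xx**i * yy**j
                      for i in range(degree + 1)
                      for j in range(degree + 1 - i)], axis=1)
    coef, *_ = np.linalg.lstsq(basis, image.ravel(), rcond=None)
    background = (basis @ coef).reshape(h, w)
    foreground = np.abs(image - background) > tau   # sparse residual component
    return background, foreground

rng = np.random.default_rng(0)
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
img = 0.5 + 0.3 * (xx / w)                     # smooth gradient background
img[20:24, 10:50] = 0.0                        # dark "text" stroke overlaid on it
bg, fg = segment(img)
print(fg.sum(), "foreground pixels")
```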
1612.05730 | Snehasis Banerjee | Snehasis Banerjee, Tanushyam Chattopadhyay, Swagata Biswas, Rohan
Banerjee, Anirban Dutta Choudhury, Arpan Pal and Utpal Garain | Towards Wide Learning: Experiments in Healthcare | 4 pages, Machine Learning for Health Workshop, NIPS 2016 | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a Wide Learning architecture is proposed that attempts to
automate the feature engineering portion of the machine learning (ML) pipeline.
Feature engineering is widely considered the most time-consuming and
expert-knowledge-demanding portion of any ML task. The proposed feature recommendation
approach is tested on 3 healthcare datasets: a) PhysioNet Challenge 2016
dataset of phonocardiogram (PCG) signals, b) MIMIC II blood pressure
classification dataset of photoplethysmogram (PPG) signals and c) an emotion
classification dataset of PPG signals. While the proposed method beats the
state-of-the-art techniques on the 2nd and 3rd datasets, it reaches 94.38% of
the accuracy level of the winner of the PhysioNet Challenge 2016. In all cases,
the effort to reach satisfactory performance was drastically lower (a few days)
than with manual feature engineering.
| [
{
"version": "v1",
"created": "Sat, 17 Dec 2016 11:00:49 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Dec 2016 13:53:15 GMT"
}
] | 2016-12-22T00:00:00 | [
[
"Banerjee",
"Snehasis",
""
],
[
"Chattopadhyay",
"Tanushyam",
""
],
[
"Biswas",
"Swagata",
""
],
[
"Banerjee",
"Rohan",
""
],
[
"Choudhury",
"Anirban Dutta",
""
],
[
"Pal",
"Arpan",
""
],
[
"Garain",
"Utpal",
""
]
] | TITLE: Towards Wide Learning: Experiments in Healthcare
ABSTRACT: In this paper, a Wide Learning architecture is proposed that attempts to
automate the feature engineering portion of the machine learning (ML) pipeline.
Feature engineering is widely considered the most time-consuming and
expert-knowledge-demanding portion of any ML task. The proposed feature recommendation
approach is tested on 3 healthcare datasets: a) PhysioNet Challenge 2016
dataset of phonocardiogram (PCG) signals, b) MIMIC II blood pressure
classification dataset of photoplethysmogram (PPG) signals and c) an emotion
classification dataset of PPG signals. While the proposed method beats the
state-of-the-art techniques on the 2nd and 3rd datasets, it reaches 94.38% of
the accuracy level of the winner of the PhysioNet Challenge 2016. In all cases,
the effort to reach satisfactory performance was drastically lower (a few days)
than with manual feature engineering.
| new_dataset | 0.890342 |
1612.06890 | Justin Johnson | Justin Johnson and Bharath Hariharan and Laurens van der Maaten and Li
Fei-Fei and C. Lawrence Zitnick and Ross Girshick | CLEVR: A Diagnostic Dataset for Compositional Language and Elementary
Visual Reasoning | null | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When building artificial intelligence systems that can reason and answer
questions about visual data, we need diagnostic tests to analyze our progress
and discover shortcomings. Existing benchmarks for visual question answering
can help, but have strong biases that models can exploit to correctly answer
questions without reasoning. They also conflate multiple sources of error,
making it hard to pinpoint model weaknesses. We present a diagnostic dataset
that tests a range of visual reasoning abilities. It contains minimal biases
and has detailed annotations describing the kind of reasoning each question
requires. We use this dataset to analyze a variety of modern visual reasoning
systems, providing novel insights into their abilities and limitations.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 21:40:40 GMT"
}
] | 2016-12-22T00:00:00 | [
[
"Johnson",
"Justin",
""
],
[
"Hariharan",
"Bharath",
""
],
[
"van der Maaten",
"Laurens",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Zitnick",
"C. Lawrence",
""
],
[
"Girshick",
"Ross",
""
]
] | TITLE: CLEVR: A Diagnostic Dataset for Compositional Language and Elementary
Visual Reasoning
ABSTRACT: When building artificial intelligence systems that can reason and answer
questions about visual data, we need diagnostic tests to analyze our progress
and discover shortcomings. Existing benchmarks for visual question answering
can help, but have strong biases that models can exploit to correctly answer
questions without reasoning. They also conflate multiple sources of error,
making it hard to pinpoint model weaknesses. We present a diagnostic dataset
that tests a range of visual reasoning abilities. It contains minimal biases
and has detailed annotations describing the kind of reasoning each question
requires. We use this dataset to analyze a variety of modern visual reasoning
systems, providing novel insights into their abilities and limitations.
| new_dataset | 0.951818 |
1612.06933 | Kanji Tanaka | Fei Xiaoxiao, Tanaka Kanji, Inamoto Kouya | Unsupervised Place Discovery for Visual Place Classification | Technical Report, 5 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we explore the use of deep convolutional neural networks
(DCNNs) in visual place classification for robotic mapping and localization. An
open question is how to partition the robot's workspace into places to maximize
the performance (e.g., accuracy, precision, recall) of potential DCNN
classifiers. This is a chicken and egg problem: If we had a well-trained DCNN
classifier, it is rather easy to partition the robot's workspace into places,
but the training of a DCNN classifier requires a set of pre-defined place
classes. In this study, we address this problem and present several strategies
for unsupervised discovery of place classes ("time cue," "location cue,"
"time-appearance cue," and "location-appearance cue"). We also evaluate the
efficacy of the proposed methods using the publicly available University of
Michigan North Campus Long-Term (NCLT) Dataset.
| [
{
"version": "v1",
"created": "Wed, 21 Dec 2016 00:53:18 GMT"
}
] | 2016-12-22T00:00:00 | [
[
"Xiaoxiao",
"Fei",
""
],
[
"Kanji",
"Tanaka",
""
],
[
"Kouya",
"Inamoto",
""
]
] | TITLE: Unsupervised Place Discovery for Visual Place Classification
ABSTRACT: In this study, we explore the use of deep convolutional neural networks
(DCNNs) in visual place classification for robotic mapping and localization. An
open question is how to partition the robot's workspace into places to maximize
the performance (e.g., accuracy, precision, recall) of potential DCNN
classifiers. This is a chicken and egg problem: If we had a well-trained DCNN
classifier, it is rather easy to partition the robot's workspace into places,
but the training of a DCNN classifier requires a set of pre-defined place
classes. In this study, we address this problem and present several strategies
for unsupervised discovery of place classes ("time cue," "location cue,"
"time-appearance cue," and "location-appearance cue"). We also evaluate the
efficacy of the proposed methods using the publicly available University of
Michigan North Campus Long-Term (NCLT) Dataset.
| no_new_dataset | 0.95511 |
1612.07089 | Sandeep Kumar | Ketan Rajawat and Sandeep Kumar | Stochastic Multidimensional Scaling | null | null | null | null | math.OC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multidimensional scaling (MDS) is a popular dimensionality reduction
technique that has been widely used for network visualization and cooperative
localization. However, the traditional stress minimization formulation of MDS
necessitates the use of batch optimization algorithms that are not scalable to
large-sized problems. This paper considers an alternative stochastic stress
minimization framework that is amenable to incremental and distributed
solutions. A novel linear-complexity stochastic optimization algorithm is
proposed that is provably convergent and simple to implement. The applicability
of the proposed algorithm to localization and visualization tasks is also
expounded. Extensive tests on synthetic and real datasets demonstrate the
efficacy of the proposed algorithm.
| [
{
"version": "v1",
"created": "Wed, 21 Dec 2016 13:08:35 GMT"
}
] | 2016-12-22T00:00:00 | [
[
"Rajawat",
"Ketan",
""
],
[
"Kumar",
"Sandeep",
""
]
] | TITLE: Stochastic Multidimensional Scaling
ABSTRACT: Multidimensional scaling (MDS) is a popular dimensionality reduction
technique that has been widely used for network visualization and cooperative
localization. However, the traditional stress minimization formulation of MDS
necessitates the use of batch optimization algorithms that are not scalable to
large-sized problems. This paper considers an alternative stochastic stress
minimization framework that is amenable to incremental and distributed
solutions. A novel linear-complexity stochastic optimization algorithm is
proposed that is provably convergent and simple to implement. The applicability
of the proposed algorithm to localization and visualization tasks is also
expounded. Extensive tests on synthetic and real datasets demonstrate the
efficacy of the proposed algorithm.
| no_new_dataset | 0.948298 |
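The stochastic stress minimization described above is easy to sketch: sample one pair of points per iteration and take a gradient step on that pair's contribution to the MDS stress. Step size and iteration count below are illustrative, not the paper's provably convergent schedule.

```python
import numpy as np

def stochastic_mds(D, dim=2, n_iters=20000, lr=0.05, seed=0):
    """Minimize the MDS stress sum_{i<j} (||x_i - x_j|| - D_ij)^2 by sampling
    one pair per iteration and updating only that pair of embeddings."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = rng.standard_normal((n, dim))
    for _ in range(n_iters):
        i, j = rng.integers(n), rng.integers(n)
        if i == j:
            continue
        diff = X[i] - X[j]
        dist = np.linalg.norm(diff) + 1e-12
        grad = 2.0 * (dist - D[i, j]) * diff / dist
        X[i] -= lr * grad                    # O(dim) work per iteration
        X[j] += lr * grad
    return X

rng = np.random.default_rng(1)
P = rng.standard_normal((30, 2))             # ground-truth 2-D layout
D = np.linalg.norm(P[:, None] - P[None, :], axis=2)
X = stochastic_mds(D)
print(X.shape)
```

The per-iteration cost is independent of n, which is what makes the scheme amenable to incremental and distributed use.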
1612.07119 | Yaman Umuroglu | Yaman Umuroglu, Nicholas J. Fraser, Giulio Gambardella, Michaela
Blott, Philip Leong, Magnus Jahre, Kees Vissers | FINN: A Framework for Fast, Scalable Binarized Neural Network Inference | To appear in the 25th International Symposium on Field-Programmable
Gate Arrays, February 2017 | null | 10.1145/3020078.3021744 | null | cs.CV cs.AR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research has shown that convolutional neural networks contain significant
redundancy, and high classification accuracy can be obtained even when weights
and activations are reduced from floating point to binary values. In this
paper, we present FINN, a framework for building fast and flexible FPGA
accelerators using a flexible heterogeneous streaming architecture. By
utilizing a novel set of optimizations that enable efficient mapping of
binarized neural networks to hardware, we implement fully connected,
convolutional and pooling layers, with per-layer compute resources being
tailored to user-provided throughput requirements. On a ZC706 embedded FPGA
platform drawing less than 25 W total system power, we demonstrate up to 12.3
million image classifications per second with 0.31 {\mu}s latency on the MNIST
dataset with 95.8% accuracy, and 21906 image classifications per second with
283 {\mu}s latency on the CIFAR-10 and SVHN datasets with respectively 80.1%
and 94.9% accuracy. To the best of our knowledge, ours are the fastest
classification rates reported to date on these benchmarks.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 22:19:47 GMT"
}
] | 2016-12-22T00:00:00 | [
[
"Umuroglu",
"Yaman",
""
],
[
"Fraser",
"Nicholas J.",
""
],
[
"Gambardella",
"Giulio",
""
],
[
"Blott",
"Michaela",
""
],
[
"Leong",
"Philip",
""
],
[
"Jahre",
"Magnus",
""
],
[
"Vissers",
"Kees",
""
]
] | TITLE: FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
ABSTRACT: Research has shown that convolutional neural networks contain significant
redundancy, and high classification accuracy can be obtained even when weights
and activations are reduced from floating point to binary values. In this
paper, we present FINN, a framework for building fast and flexible FPGA
accelerators using a flexible heterogeneous streaming architecture. By
utilizing a novel set of optimizations that enable efficient mapping of
binarized neural networks to hardware, we implement fully connected,
convolutional and pooling layers, with per-layer compute resources being
tailored to user-provided throughput requirements. On a ZC706 embedded FPGA
platform drawing less than 25 W total system power, we demonstrate up to 12.3
million image classifications per second with 0.31 {\mu}s latency on the MNIST
dataset with 95.8% accuracy, and 21906 image classifications per second with
283 {\mu}s latency on the CIFAR-10 and SVHN datasets with respectively 80.1%
and 94.9% accuracy. To the best of our knowledge, ours are the fastest
classification rates reported to date on these benchmarks.
| no_new_dataset | 0.949106 |
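FINN itself is an FPGA framework, but the arithmetic trick binarized accelerators build on is compact enough to verify in software: with weights and activations in {-1, +1}, a dot product equals n - 2 * popcount(XOR). The numpy sketch below checks that identity; it is not the FINN code.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1}, the only values BNN weights/activations take."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_dense(a_bits, w_bits):
    """Binarized dense layer. With +/-1 values encoded as bits (1 -> 1, -1 -> 0),
    the dot product reduces to 2 * popcount(XNOR) - n, which hardware
    implementations exploit; here we just verify the identity with integers."""
    n = a_bits.shape[0]
    a01 = (a_bits > 0).astype(np.uint8)
    w01 = (w_bits > 0).astype(np.uint8)
    xnor_pop = (a01[None, :] == w01).sum(axis=1)     # popcount of XNOR per neuron
    return 2 * xnor_pop - n                          # equals a_bits @ w_bits.T

rng = np.random.default_rng(0)
a = binarize(rng.standard_normal(64))
W = binarize(rng.standard_normal((16, 64)))
out = bnn_dense(a, W)
assert np.array_equal(out, W @ a.astype(np.int32))   # the identity holds
print(out[:4])
```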
1612.07310 | Cewu Lu | Cewu Lu, Hao Su, Yongyi Lu, Li Yi, Chikeung Tang, Leonidas Guibas | Beyond Holistic Object Recognition: Enriching Image Understanding with
Part States | 9 pages | null | null | 23452523 | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Important high-level vision tasks such as human-object interaction, image
captioning and robotic manipulation require rich semantic descriptions of
objects at part level. Based upon previous work on part localization, in this
paper, we address the problem of inferring rich semantics imparted by an object
part in still images. We propose to tokenize the semantic space as a discrete
set of part states. Our modeling of part state is spatially localized,
therefore, we formulate the part state inference problem as a pixel-wise
annotation problem. An iterative part-state inference neural network is
specifically designed for this task, which is efficient in time and accurate in
performance. Extensive experiments demonstrate that the proposed method can
effectively predict the semantic states of parts and simultaneously correct
localization errors, thus benefiting a few visual understanding applications.
The other contribution of this paper is our part state dataset which contains
rich part-level semantic annotations.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 13:46:58 GMT"
}
] | 2016-12-22T00:00:00 | [
[
"Lu",
"Cewu",
""
],
[
"Su",
"Hao",
""
],
[
"Lu",
"Yongyi",
""
],
[
"Yi",
"Li",
""
],
[
"Tang",
"Chikeung",
""
],
[
"Guibas",
"Leonidas",
""
]
] | TITLE: Beyond Holistic Object Recognition: Enriching Image Understanding with
Part States
ABSTRACT: Important high-level vision tasks such as human-object interaction, image
captioning and robotic manipulation require rich semantic descriptions of
objects at part level. Based upon previous work on part localization, in this
paper, we address the problem of inferring rich semantics imparted by an object
part in still images. We propose to tokenize the semantic space as a discrete
set of part states. Our modeling of part state is spatially localized,
therefore, we formulate the part state inference problem as a pixel-wise
annotation problem. An iterative part-state inference neural network is
specifically designed for this task, which is efficient in time and accurate in
performance. Extensive experiments demonstrate that the proposed method can
effectively predict the semantic states of parts and simultaneously correct
localization errors, thus benefiting a few visual understanding applications.
The other contribution of this paper is our part state dataset which contains
rich part-level semantic annotations.
| new_dataset | 0.957358 |
1602.07226 | Valentin Kuznetsov | Valentin Kuznetsov, Ting Li, Luca Giommi, Daniele Bonacorsi, Tony
Wildish | Predicting dataset popularity for the CMS experiment | Submitted to proceedings of 17th International workshop on Advanced
Computing and Analysis Techniques in physics research (ACAT) | null | 10.1088/1742-6596/762/1/012048 | null | physics.data-an hep-ex | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The CMS experiment at the LHC accelerator at CERN relies on its computing
infrastructure to stay at the frontier of High Energy Physics, searching for
new phenomena and making discoveries. Even though computing plays a significant
role in physics analysis, we rarely use its data to predict the behavior of the
system itself. Basic information about computing resources, user activities and
site utilization can be very useful for improving the throughput of the system
and its management. In this paper, we discuss a first CMS analysis of dataset
popularity based on CMS meta-data, which can be used as a model for dynamic data
placement and provide the foundation of a data-driven approach for the CMS
computing infrastructure.
| [
{
"version": "v1",
"created": "Tue, 23 Feb 2016 16:39:37 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Kuznetsov",
"Valentin",
""
],
[
"Li",
"Ting",
""
],
[
"Giommi",
"Luca",
""
],
[
"Bonacorsi",
"Daniele",
""
],
[
"Wildish",
"Tony",
""
]
] | TITLE: Predicting dataset popularity for the CMS experiment
ABSTRACT: The CMS experiment at the LHC accelerator at CERN relies on its computing
infrastructure to stay at the frontier of High Energy Physics, searching for
new phenomena and making discoveries. Even though computing plays a significant
role in physics analysis, we rarely use its data to predict the behavior of the
system itself. Basic information about computing resources, user activities and
site utilization can be very useful for improving the throughput of the system
and its management. In this paper, we discuss a first CMS analysis of dataset
popularity based on CMS meta-data, which can be used as a model for dynamic data
placement and provide the foundation of a data-driven approach for the CMS
computing infrastructure.
| no_new_dataset | 0.950732 |
1604.02634 | Renbo Zhao | Renbo Zhao and Vincent Y. F. Tan | Online Nonnegative Matrix Factorization with Outliers | null | null | 10.1109/TSP.2016.2620967 | null | stat.ML cs.IT cs.LG math.IT math.OC stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a unified and systematic framework for performing online
nonnegative matrix factorization in the presence of outliers. Our framework is
particularly suited to large-scale data. We propose two solvers based on
projected gradient descent and the alternating direction method of multipliers.
We prove that the sequence of objective values converges almost surely by
appealing to the quasi-martingale convergence theorem. We also show the
sequence of learned dictionaries converges to the set of stationary points of
the expected loss function almost surely. In addition, we extend our basic
problem formulation to various settings with different constraints and
regularizers. We also adapt the solvers and analyses to each setting. We
perform extensive experiments on both synthetic and real datasets. These
experiments demonstrate the computational efficiency and efficacy of our
algorithms on tasks such as (parts-based) basis learning, image denoising,
shadow removal and foreground-background separation.
| [
{
"version": "v1",
"created": "Sun, 10 Apr 2016 04:02:57 GMT"
},
{
"version": "v2",
"created": "Sat, 15 Oct 2016 12:01:30 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Zhao",
"Renbo",
""
],
[
"Tan",
"Vincent Y. F.",
""
]
] | TITLE: Online Nonnegative Matrix Factorization with Outliers
ABSTRACT: We propose a unified and systematic framework for performing online
nonnegative matrix factorization in the presence of outliers. Our framework is
particularly suited to large-scale data. We propose two solvers based on
projected gradient descent and the alternating direction method of multipliers.
We prove that the sequence of objective values converges almost surely by
appealing to the quasi-martingale convergence theorem. We also show the
sequence of learned dictionaries converges to the set of stationary points of
the expected loss function almost surely. In addition, we extend our basic
problem formulation to various settings with different constraints and
regularizers. We also adapt the solvers and analyses to each setting. We
perform extensive experiments on both synthetic and real datasets. These
experiments demonstrate the computational efficiency and efficacy of our
algorithms on tasks such as (parts-based) basis learning, image denoising,
shadow removal and foreground-background separation.
| no_new_dataset | 0.944485 |
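A heavily simplified sketch of the online NMF-with-outliers setting above: for each arriving sample, alternate a projected-gradient step on the nonnegative code with a soft-thresholding step on a sparse outlier term, then nudge the dictionary. The paper's PGD and ADMM solvers and their convergence analysis are not reproduced here; lam, lr and the loop counts are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm, used to model sparse outliers."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def online_nmf(stream, k, lam=0.5, lr=0.05, inner=30, seed=0):
    """Process samples one at a time: fit a nonnegative code h and a sparse
    outlier e for the current sample, then take one online dictionary step."""
    rng = np.random.default_rng(seed)
    d = stream[0].shape[0]
    W = np.abs(rng.standard_normal((d, k)))
    for v in stream:
        h = np.abs(rng.standard_normal(k)) * 0.1
        e = np.zeros(d)
        for _ in range(inner):
            r = W @ h + e - v
            h = np.maximum(h - lr * (W.T @ r), 0.0)   # projected gradient, h >= 0
            e = soft_threshold(v - W @ h, lam)        # prox step for the outlier
        r = W @ h + e - v
        W = np.maximum(W - lr * np.outer(r, h), 0.0)  # online dictionary update
    return W

rng = np.random.default_rng(1)
W_true = np.abs(rng.standard_normal((20, 3)))
samples = []
for _ in range(200):
    v = W_true @ np.abs(rng.standard_normal(3))
    if rng.random() < 0.1:
        v[rng.integers(20)] += 5.0                    # occasional gross outlier
    samples.append(v)
print(online_nmf(samples, k=3).shape)
```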
1604.08001 | Amin Zheng | Amin Zheng, Gene Cheung and Dinei Florencio | Context Tree based Image Contour Coding using A Geometric Prior | null | null | 10.1109/TIP.2016.2627813 | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | If object contours in images are coded efficiently as side information, then
they can facilitate advanced image / video coding techniques, such as graph
Fourier transform coding or motion prediction of arbitrarily shaped pixel
blocks. In this paper, we study the problem of lossless and lossy compression
of detected contours in images. Specifically, we first convert a detected
object contour composed of contiguous between-pixel edges to a sequence of
directional symbols drawn from a small alphabet. To encode the symbol sequence
using arithmetic coding, we compute an optimal variable-length context tree
(VCT) $\mathcal{T}$ via a maximum a posteriori (MAP) formulation to estimate
symbols' conditional probabilities. MAP prevents us from overfitting given a
small training set $\mathcal{X}$ of past symbol sequences by identifying a VCT
$\mathcal{T}$ that achieves a high likelihood $P(\mathcal{X}|\mathcal{T})$ of
observing $\mathcal{X}$ given $\mathcal{T}$, and a large geometric prior
$P(\mathcal{T})$ stating that image contours are more often straight than
curvy. For the lossy case, we design efficient dynamic programming (DP)
algorithms that optimally trade off coding rate of an approximate contour
$\hat{\mathbf{x}}$ given a VCT $\mathcal{T}$ with two notions of distortion of
$\hat{\mathbf{x}}$ with respect to the original contour $\mathbf{x}$. To reduce
the size of the DP tables, a total suffix tree is derived from a given VCT
$\mathcal{T}$ for compact table entry indexing, reducing complexity.
Experimental results show that for lossless contour coding, our proposed
algorithm outperforms state-of-the-art context-based schemes consistently for
both small and large training datasets. For lossy contour coding, our
algorithms outperform comparable schemes in the literature in rate-distortion
performance.
| [
{
"version": "v1",
"created": "Wed, 27 Apr 2016 10:00:41 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Zheng",
"Amin",
""
],
[
"Cheung",
"Gene",
""
],
[
"Florencio",
"Dinei",
""
]
] | TITLE: Context Tree based Image Contour Coding using A Geometric Prior
ABSTRACT: If object contours in images are coded efficiently as side information, then
they can facilitate advanced image / video coding techniques, such as graph
Fourier transform coding or motion prediction of arbitrarily shaped pixel
blocks. In this paper, we study the problem of lossless and lossy compression
of detected contours in images. Specifically, we first convert a detected
object contour composed of contiguous between-pixel edges to a sequence of
directional symbols drawn from a small alphabet. To encode the symbol sequence
using arithmetic coding, we compute an optimal variable-length context tree
(VCT) $\mathcal{T}$ via a maximum a posteriori (MAP) formulation to estimate
symbols' conditional probabilities. MAP prevents us from overfitting given a
small training set $\mathcal{X}$ of past symbol sequences by identifying a VCT
$\mathcal{T}$ that achieves a high likelihood $P(\mathcal{X}|\mathcal{T})$ of
observing $\mathcal{X}$ given $\mathcal{T}$, and a large geometric prior
$P(\mathcal{T})$ stating that image contours are more often straight than
curvy. For the lossy case, we design efficient dynamic programming (DP)
algorithms that optimally trade off coding rate of an approximate contour
$\hat{\mathbf{x}}$ given a VCT $\mathcal{T}$ with two notions of distortion of
$\hat{\mathbf{x}}$ with respect to the original contour $\mathbf{x}$. To reduce
the size of the DP tables, a total suffix tree is derived from a given VCT
$\mathcal{T}$ for compact table entry indexing, reducing complexity.
Experimental results show that for lossless contour coding, our proposed
algorithm outperforms state-of-the-art context-based schemes consistently for
both small and large training datasets. For lossy contour coding, our
algorithms outperform comparable schemes in the literature in rate-distortion
performance.
| no_new_dataset | 0.945197 |
1605.06711 | Bo Yang | Bo Yang, Xiao Fu and Nicholas D. Sidiropoulos | Learning From Hidden Traits: Joint Factor Analysis and Latent Clustering | null | null | 10.1109/TSP.2016.2614491 | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dimensionality reduction techniques play an essential role in data analytics,
signal processing and machine learning. Dimensionality reduction is usually
performed in a preprocessing stage that is separate from subsequent data
analysis, such as clustering or classification. Finding reduced-dimension
representations that are well-suited for the intended task is more appealing.
This paper proposes a joint factor analysis and latent clustering framework,
which aims at learning cluster-aware low-dimensional representations of matrix
and tensor data. The proposed approach leverages matrix and tensor
factorization models that produce essentially unique latent representations of
the data to unravel latent cluster structure -- which is otherwise obscured
because of the freedom to apply an oblique transformation in latent space. At
the same time, latent cluster structure is used as prior information to enhance
the performance of factorization. Specific contributions include several
custom-built problem formulations, corresponding algorithms, and discussion of
associated convergence properties. Besides extensive simulations, real-world
datasets such as Reuters document data and MNIST image data are also employed
to showcase the effectiveness of the proposed approaches.
| [
{
"version": "v1",
"created": "Sat, 21 May 2016 23:51:02 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Yang",
"Bo",
""
],
[
"Fu",
"Xiao",
""
],
[
"Sidiropoulos",
"Nicholas D.",
""
]
] | TITLE: Learning From Hidden Traits: Joint Factor Analysis and Latent Clustering
ABSTRACT: Dimensionality reduction techniques play an essential role in data analytics,
signal processing and machine learning. Dimensionality reduction is usually
performed in a preprocessing stage that is separate from subsequent data
analysis, such as clustering or classification. Finding reduced-dimension
representations that are well-suited for the intended task is more appealing.
This paper proposes a joint factor analysis and latent clustering framework,
which aims at learning cluster-aware low-dimensional representations of matrix
and tensor data. The proposed approach leverages matrix and tensor
factorization models that produce essentially unique latent representations of
the data to unravel latent cluster structure -- which is otherwise obscured
because of the freedom to apply an oblique transformation in latent space. At
the same time, latent cluster structure is used as prior information to enhance
the performance of factorization. Specific contributions include several
custom-built problem formulations, corresponding algorithms, and discussion of
associated convergence properties. Besides extensive simulations, real-world
datasets such as Reuters document data and MNIST image data are also employed
to showcase the effectiveness of the proposed approaches.
| no_new_dataset | 0.947235 |
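The joint objective above can be caricatured as regularized alternating least squares: the code update is pulled toward its cluster centroid, and the centroids are re-estimated from the codes, so factorization and clustering inform each other. The paper's concrete formulations differ; this is only a sketch of the coupling, with mu and the iteration count as illustrative choices.

```python
import numpy as np

def assign(H, C):
    """Nearest-centroid assignment for the latent codes (columns of H)."""
    d2 = ((H.T[:, None, :] - C.T[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def joint_factor_cluster(X, k, n_clusters, mu=1.0, n_iters=30, seed=0):
    """Alternating least squares on ||X - W H||^2 + mu ||H - C z||^2: the
    mu-term pulls each latent code toward its cluster centroid."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W = rng.standard_normal((d, k))
    H = rng.standard_normal((k, n))
    C = rng.standard_normal((k, n_clusters))
    eye = np.eye(k)
    for _ in range(n_iters):
        z = assign(H, C)
        # Cluster-regularized least-squares update of the codes.
        H = np.linalg.solve(W.T @ W + mu * eye, W.T @ X + mu * C[:, z])
        # Ridge-stabilized least-squares update of the loadings.
        W = X @ H.T @ np.linalg.inv(H @ H.T + 1e-6 * eye)
        for c in range(n_clusters):          # centroid update on the codes
            if (z == c).any():
                C[:, c] = H[:, z == c].mean(axis=1)
    return W, H, assign(H, C)

rng = np.random.default_rng(1)
X = np.hstack([rng.standard_normal((10, 40)) + 1.5,
               rng.standard_normal((10, 40)) - 1.5])
W, H, z = joint_factor_cluster(X, k=2, n_clusters=2)
print(np.bincount(z, minlength=2))
```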
1608.05513 | Sagar Gandhi | Shraddha Deshmukh, Sagar Gandhi, Pratap Sanap and Vivek Kulkarni | Data Centroid Based Multi-Level Fuzzy Min-Max Neural Network | This paper has been withdrawn by the author due to crucial evidence
that similar work has already been published | null | null | null | cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, a multi-level fuzzy min max neural network (MLF) was proposed,
which improves the classification accuracy by handling an overlapped region
(area of confusion) with the help of a tree structure. In this brief, an
extension of MLF is proposed which defines a new boundary region, where the
previously proposed methods mark decisions with less confidence and hence
misclassification is more frequent. A methodology to classify patterns more
accurately is presented. Our work enhances the testing procedure by means of
data centroids. We exhibit an illustrative example, clearly highlighting the
advantage of our approach. Results on standard datasets are also presented to
evidentially prove a consistent improvement in the classification rate.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2016 07:05:33 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Dec 2016 08:09:40 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Deshmukh",
"Shraddha",
""
],
[
"Gandhi",
"Sagar",
""
],
[
"Sanap",
"Pratap",
""
],
[
"Kulkarni",
"Vivek",
""
]
] | TITLE: Data Centroid Based Multi-Level Fuzzy Min-Max Neural Network
ABSTRACT: Recently, a multi-level fuzzy min max neural network (MLF) was proposed,
which improves the classification accuracy by handling an overlapped region
(area of confusion) with the help of a tree structure. In this brief, an
extension of MLF is proposed which defines a new boundary region, where the
previously proposed methods mark decisions with less confidence and hence
misclassification is more frequent. A methodology to classify patterns more
accurately is presented. Our work enhances the testing procedure by means of
data centroids. We exhibit an illustrative example, clearly highlighting the
advantage of our approach. Results on standard datasets are also presented to
demonstrate a consistent improvement in the classification rate.
| no_new_dataset | 0.95297 |
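The centroid idea in the (withdrawn) record above is simple to sketch outside the tree-structured MLF: build one min-max hyperbox and one data centroid per class, and fall back on the nearest centroid whenever a test point lands in the overlap of several boxes, i.e. the boundary region where plain min-max networks decide with low confidence. This is an illustration of the tie-breaking idea only, not the full network.

```python
import numpy as np

def fit_boxes(X, y):
    """One min-max hyperbox and one data centroid per class."""
    boxes, centroids = {}, {}
    for c in np.unique(y):
        Xc = X[y == c]
        boxes[c] = (Xc.min(axis=0), Xc.max(axis=0))
        centroids[c] = Xc.mean(axis=0)
    return boxes, centroids

def predict(x, boxes, centroids):
    """Classify by box membership; if the point falls in the overlap of
    several boxes (the boundary region), use the nearest data centroid."""
    hits = [c for c, (lo, hi) in boxes.items()
            if np.all(x >= lo) and np.all(x <= hi)]
    if len(hits) == 1:
        return hits[0]
    cands = hits if hits else list(centroids)   # overlap, or outside all boxes
    return min(cands, key=lambda c: np.linalg.norm(x - centroids[c]))

rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(0.0, 0.6, (50, 2)), rng.uniform(0.4, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
boxes, cents = fit_boxes(X, y)
print(predict(np.array([0.5, 0.5]), boxes, cents))   # lies in the overlapped region
```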
1612.06443 | Odemir Bruno PhD | Mariane Barros Neiva, Antoine Manzanera, Odemir Martinez Bruno | Binary Distance Transform to Improve Feature Extraction | 9 pages, 4 figures, WVC 2016 proceedings | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To recognize textures, many methods have been developed over the years.
However, texture datasets may be hard to classify due to artefacts such as
variations in scale, illumination and noise. This paper proposes applying the
binary distance transform to the original dataset to add information to the
texture representation and consequently improve recognition. Texture images,
usually in grayscale, undergo binarization prior to the distance transform, and
one of the resulting images is combined with the original texture to increase
the amount of information. Four datasets are used to evaluate our approach. For
the Outex dataset, for instance, the proposal outperforms the traditional
approach, in which descriptors are applied to the original dataset, on all
rates, with improvements of up to 10\%, showing the importance of this approach.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2016 22:19:19 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Neiva",
"Mariane Barros",
""
],
[
"Manzanera",
"Antoine",
""
],
[
"Bruno",
"Odemir Martinez",
""
]
] | TITLE: Binary Distance Transform to Improve Feature Extraction
ABSTRACT: To recognize textures, many methods have been developed over the years.
However, texture datasets may be hard to classify due to artefacts such as
variations in scale, illumination and noise. This paper proposes applying the
binary distance transform to the original dataset to add information to the
texture representation and consequently improve recognition. Texture images,
usually in grayscale, undergo binarization prior to the distance transform, and
one of the resulting images is combined with the original texture to increase
the amount of information. Four datasets are used to evaluate our approach. For
the Outex dataset, for instance, the proposal outperforms the traditional
approach, in which descriptors are applied to the original dataset, on all
rates, with improvements of up to 10\%, showing the importance of this approach.
| no_new_dataset | 0.952882 |
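The pipeline above reduces to three steps that fit in a few lines, assuming SciPy is available: binarize the grayscale texture, take the Euclidean distance transform of the binary image, and stack the result with the original as an extra channel for whatever descriptor follows. The mean-based threshold is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def augment_with_distance(gray, threshold=None):
    """Binarize a grayscale texture, take the Euclidean distance transform of
    the binary image, and stack it with the original as an extra channel."""
    if threshold is None:
        threshold = gray.mean()                       # simple global binarization
    binary = gray > threshold
    dist = distance_transform_edt(binary)             # distance to nearest 0 pixel
    dist = dist / (dist.max() + 1e-12)                # normalize to [0, 1]
    return np.stack([gray, dist], axis=0)             # 2 x H x W representation

rng = np.random.default_rng(0)
texture = rng.random((32, 32))
channels = augment_with_distance(texture)
print(channels.shape)                                 # (2, 32, 32)
```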
1612.06454 | Henrique Morimitsu | Henrique Morimitsu, Isabelle Bloch and Roberto M. Cesar-Jr | Exploring Structure for Long-Term Tracking of Multiple Objects in Sports
Videos | This version corresponds to the preprint of the paper accepted for
CVIU | null | 10.1016/j.cviu.2016.12.003 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel approach for exploiting structural
relations to track multiple objects that may undergo long-term occlusion and
abrupt motion. We use a model-free approach that relies only on annotations
given in the first frame of the video to track all the objects online, i.e.
without knowledge from future frames. We initialize a probabilistic Attributed
Relational Graph (ARG) from the first frame, which is incrementally updated
along the video. Instead of using the structural information only to evaluate
the scene, the proposed approach considers it to generate new tracking
hypotheses. In this way, our method is capable of generating relevant object
candidates that are used to improve or recover the track of lost objects. The
proposed method is evaluated on several videos of table tennis and volleyball,
and on the ACASVA dataset. The results show that our approach is very robust,
flexible and able to outperform other state-of-the-art methods in sports videos
that present structural patterns.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2016 23:14:26 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Morimitsu",
"Henrique",
""
],
[
"Bloch",
"Isabelle",
""
],
[
"Cesar-Jr",
"Roberto M.",
""
]
] | TITLE: Exploring Structure for Long-Term Tracking of Multiple Objects in Sports
Videos
ABSTRACT: In this paper, we propose a novel approach for exploiting structural
relations to track multiple objects that may undergo long-term occlusion and
abrupt motion. We use a model-free approach that relies only on annotations
given in the first frame of the video to track all the objects online, i.e.
without knowledge from future frames. We initialize a probabilistic Attributed
Relational Graph (ARG) from the first frame, which is incrementally updated
along the video. Instead of using the structural information only to evaluate
the scene, the proposed approach considers it to generate new tracking
hypotheses. In this way, our method is capable of generating relevant object
candidates that are used to improve or recover the track of lost objects. The
proposed method is evaluated on several videos of table tennis and volleyball,
and on the ACASVA dataset. The results show that our approach is very robust,
flexible and able to outperform other state-of-the-art methods in sports videos
that present structural patterns.
| no_new_dataset | 0.951142 |
1612.06508 | Youngjung Kim | Youngjung Kim, Hyungjoo Jung, Dongbo Min, Kwanghoon Sohn | Deeply Aggregated Alternating Minimization for Image Restoration | 9 PAGES | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Regularization-based image restoration has remained an active research topic
in computer vision and image processing. It often leverages a guidance signal
captured in different fields as an additional cue. In this work, we present a
general framework for image restoration, called deeply aggregated alternating
minimization (DeepAM). We propose to train deep neural network to advance two
of the steps in the conventional AM algorithm: proximal mapping and ?-
continuation. Both steps are learned from a large dataset in an end-to-end
manner. The proposed framework enables the convolutional neural networks (CNNs)
to operate as a prior or regularizer in the AM algorithm. We show that our
learned regularizer via deep aggregation outperforms the recent data-driven
approaches as well as the nonlocalbased methods. The flexibility and
effectiveness of our framework are demonstrated in several image restoration
tasks, including single image denoising, RGB-NIR restoration, and depth
super-resolution.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 04:56:56 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Kim",
"Youngjung",
""
],
[
"Jung",
"Hyungjoo",
""
],
[
"Min",
"Dongbo",
""
],
[
"Sohn",
"Kwanghoon",
""
]
] | TITLE: Deeply Aggregated Alternating Minimization for Image Restoration
ABSTRACT: Regularization-based image restoration has remained an active research topic
in computer vision and image processing. It often leverages a guidance signal
captured in different fields as an additional cue. In this work, we present a
general framework for image restoration, called deeply aggregated alternating
minimization (DeepAM). We propose to train deep neural network to advance two
of the steps in the conventional AM algorithm: proximal mapping and ?-
continuation. Both steps are learned from a large dataset in an end-to-end
manner. The proposed framework enables the convolutional neural networks (CNNs)
to operate as a prior or regularizer in the AM algorithm. We show that our
learned regularizer via deep aggregation outperforms the recent data-driven
approaches as well as the nonlocalbased methods. The flexibility and
effectiveness of our framework are demonstrated in several image restoration
tasks, including single image denoising, RGB-NIR restoration, and depth
super-resolution.
| no_new_dataset | 0.948775 |
1612.06543 | Niki Martinel | Niki Martinel, Gian Luca Foresti and Christian Micheloni | Wide-Slice Residual Networks for Food Recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Food diary applications represent a tantalizing market. Such applications,
based on image food recognition, have opened new challenges for computer vision
and pattern recognition algorithms. Recent works in the field are focusing
either on hand-crafted representations or on learning these by exploiting deep
neural networks. Despite the success of the latter family of works, these
generally exploit off-the-shelf deep architectures to classify food dishes.
Thus, the architectures are not tailored to the specific problem. We believe
that better results can be obtained if the deep architecture is defined with
respect to an analysis of the food composition. Following this intuition, this
work introduces a new deep scheme that is designed to handle the food
structure. Specifically, inspired by the recent success of deep residual
networks, we exploit such a learning scheme and introduce a slice convolution
block to capture the vertical food layers. Outputs of the deep residual blocks
are combined with the sliced convolution to produce the classification score
for specific food categories. To evaluate our proposed architecture we have
conducted experiments on three benchmark datasets. Results demonstrate that our
solution shows better performance with respect to existing approaches (e.g., a
top-1 accuracy of 90.27% on the challenging Food-101 dataset).
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 08:19:52 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Martinel",
"Niki",
""
],
[
"Foresti",
"Gian Luca",
""
],
[
"Micheloni",
"Christian",
""
]
] | TITLE: Wide-Slice Residual Networks for Food Recognition
ABSTRACT: Food diary applications represent a tantalizing market. Such applications,
based on image food recognition, have opened new challenges for computer vision
and pattern recognition algorithms. Recent works in the field are focusing
either on hand-crafted representations or on learning these by exploiting deep
neural networks. Despite the success of the latter family of works, these
generally exploit off-the-shelf deep architectures to classify food dishes.
Thus, the architectures are not tailored to the specific problem. We believe
that better results can be obtained if the deep architecture is defined with
respect to an analysis of the food composition. Following this intuition, this
work introduces a new deep scheme that is designed to handle the food
structure. Specifically, inspired by the recent success of deep residual
networks, we exploit such a learning scheme and introduce a slice convolution
block to capture the vertical food layers. Outputs of the deep residual blocks
are combined with the sliced convolution to produce the classification score
for specific food categories. To evaluate our proposed architecture we have
conducted experiments on three benchmark datasets. Results demonstrate that our
solution shows better performance with respect to existing approaches (e.g., a
top-1 accuracy of 90.27% on the challenging Food-101 dataset).
| no_new_dataset | 0.946745 |
1612.06573 | Sebastian Ramos | Sebastian Ramos, Stefan Gehrig, Peter Pinggera, Uwe Franke, Carsten
Rother | Detecting Unexpected Obstacles for Self-Driving Cars: Fusing Deep
Learning and Geometric Modeling | Submitted to the IEEE International Conference on Robotics and
Automation (ICRA) 2017 | null | null | null | cs.CV cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The detection of small road hazards, such as lost cargo, is a vital
capability for self-driving cars. We tackle this challenging and rarely
addressed problem with a vision system that leverages appearance, contextual as
well as geometric cues. To utilize the appearance and contextual cues, we
propose a new deep learning-based obstacle detection framework. Here a variant
of a fully convolutional network is used to predict a pixel-wise semantic
labeling of (i) free-space, (ii) on-road unexpected obstacles, and (iii)
background. The geometric cues are exploited using a state-of-the-art detection
approach that predicts obstacles from stereo input images via model-based
statistical hypothesis tests. We present a principled Bayesian framework to
fuse the semantic and stereo-based detection results. The mid-level Stixel
representation is used to describe obstacles in a flexible, compact and robust
manner. We evaluate our new obstacle detection system on the Lost and Found
dataset, which includes very challenging scenes with obstacles of only 5 cm
height. Overall, we report a major improvement over the state-of-the-art, with
relative performance gains of up to 50%. In particular, we achieve a detection
rate of over 90% for distances of up to 50 m. Our system operates at 22 Hz on
our self-driving platform.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 09:55:00 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Ramos",
"Sebastian",
""
],
[
"Gehrig",
"Stefan",
""
],
[
"Pinggera",
"Peter",
""
],
[
"Franke",
"Uwe",
""
],
[
"Rother",
"Carsten",
""
]
] | TITLE: Detecting Unexpected Obstacles for Self-Driving Cars: Fusing Deep
Learning and Geometric Modeling
ABSTRACT: The detection of small road hazards, such as lost cargo, is a vital
capability for self-driving cars. We tackle this challenging and rarely
addressed problem with a vision system that leverages appearance, contextual as
well as geometric cues. To utilize the appearance and contextual cues, we
propose a new deep learning-based obstacle detection framework. Here a variant
of a fully convolutional network is used to predict a pixel-wise semantic
labeling of (i) free-space, (ii) on-road unexpected obstacles, and (iii)
background. The geometric cues are exploited using a state-of-the-art detection
approach that predicts obstacles from stereo input images via model-based
statistical hypothesis tests. We present a principled Bayesian framework to
fuse the semantic and stereo-based detection results. The mid-level Stixel
representation is used to describe obstacles in a flexible, compact and robust
manner. We evaluate our new obstacle detection system on the Lost and Found
dataset, which includes very challenging scenes with obstacles of only 5 cm
height. Overall, we report a major improvement over the state-of-the-art, with
relative performance gains of up to 50%. In particular, we achieve a detection
rate of over 90% for distances of up to 50 m. Our system operates at 22 Hz on
our self-driving platform.
| no_new_dataset | 0.942718 |
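The per-cell fusion step described above can be sketched as naive-Bayes odds multiplication, treating each detector's score as an independent cue for obstacle versus free space; the actual Stixel-level model is richer, so consider this only an illustration of the principle, with the prior value as an assumption.

```python
import numpy as np

def fuse(p_semantic, p_stereo, prior=0.05):
    """Per-cell fusion of two obstacle detectors under a conditional-independence
    assumption: posterior odds = prior odds times the odds contributed by each cue."""
    eps = 1e-6
    odds = prior / (1 - prior)
    odds = odds * (p_semantic + eps) / (1 - p_semantic + eps)
    odds = odds * (p_stereo + eps) / (1 - p_stereo + eps)
    return odds / (1 + odds)

rng = np.random.default_rng(0)
p_sem = rng.random((4, 6))       # e.g. FCN score for "unexpected obstacle"
p_geo = rng.random((4, 6))       # e.g. stereo hypothesis-test confidence
print(np.round(fuse(p_sem, p_geo), 2))
```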
1612.06685 | Konstantinos Pappas | Konstantinos Pappas, Steven Wilson, and Rada Mihalcea | Stateology: State-Level Interactive Charting of Language, Feelings, and
Values | 5 pages, 5 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | People's personality and motivations are manifest in their everyday language
usage. With the emergence of social media, ample examples of such usage are
procurable. In this paper, we aim to analyze the vocabulary used by close to
200,000 Blogger users in the U.S. with the purpose of geographically portraying
various demographic, linguistic, and psychological dimensions at the state
level. We give a description of a web-based tool for viewing maps that depict
various characteristics of the social media users as derived from this large
blog dataset of over two billion words.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 14:44:19 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Pappas",
"Konstantinos",
""
],
[
"Wilson",
"Steven",
""
],
[
"Mihalcea",
"Rada",
""
]
] | TITLE: Stateology: State-Level Interactive Charting of Language, Feelings, and
Values
ABSTRACT: People's personality and motivations are manifest in their everyday language
usage. With the emergence of social media, ample examples of such usage are
procurable. In this paper, we aim to analyze the vocabulary used by close to
200,000 Blogger users in the U.S. with the purpose of geographically portraying
various demographic, linguistic, and psychological dimensions at the state
level. We give a description of a web-based tool for viewing maps that depict
various characteristics of the social media users as derived from this large
blog dataset of over two billion words.
| no_new_dataset | 0.742982 |
1612.06703 | Harish Karunakaran | Adhavan Jayabalan, Harish Karunakaran, Shravan Murlidharan, Tesia
Shizume | Dynamic Action Recognition: A convolutional neural network model for
temporally organized joint location data | 11 pages | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Recognizing human actions in a video is a challenging task which
has applications in various fields. Previous works in this area have used
images from either a 2D or a 3D camera. A few have used the idea that human
actions can be easily identified by the movement of the joints in 3D space, and
instead used a Recurrent Neural Network (RNN) for modeling. Convolutional
neural networks (CNN) have the ability to recognise even complex patterns in
data, which makes them suitable for detecting human actions. Thus, we modeled a
CNN which can predict the human activity using the joint data. Furthermore,
using the joint data representation has the benefit of lower dimensionality
than image or video representations. This makes our model simpler and faster
than the RNN models. In this study, we have developed a six-layer convolutional
network, which reduces each input feature vector of the form 15x1961x4 to a
one-dimensional binary vector that gives us the predicted activity. Results:
Our model is able to recognise an activity correctly with up to 87% accuracy.
Joint data is taken from the Cornell Activity Datasets, which contain
day-to-day activities like talking, relaxing, eating, cooking, etc.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 15:20:28 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Jayabalan",
"Adhavan",
""
],
[
"Karunakaran",
"Harish",
""
],
[
"Murlidharan",
"Shravan",
""
],
[
"Shizume",
"Tesia",
""
]
] | TITLE: Dynamic Action Recognition: A convolutional neural network model for
temporally organized joint location data
ABSTRACT: Motivation: Recognizing human actions in a video is a challenging task which
has applications in various fields. Previous works in this area have either
used images from a 2D or 3D camera. Few have used the idea that human actions
can be easily identified by the movement of the joints in the 3D space and
instead used a Recurrent Neural Network (RNN) for modeling. Convolutional
neural networks (CNN) have the ability to recognise even complex patterns
in data, which makes them suitable for detecting human actions. Thus, we modeled a
CNN which can predict the human activity using the joint data. Furthermore,
using the joint data representation has the benefit of lower dimensionality
than image or video representations. This makes our model simpler and faster
than the RNN models. In this study, we have developed a six-layer convolutional
network, which reduces each input feature vector of the form 15x1961x4 to a
one-dimensional binary vector that gives us the predicted activity. Results:
Our model is able to recognise an activity correctly with up to 87% accuracy. Joint
data is taken from the Cornell Activity Datasets, which contain day-to-day
activities such as talking, relaxing, eating, and cooking.
| no_new_dataset | 0.951818 |
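The abstract above describes feeding a 15x1961x4 joint-location tensor to a CNN whose output indicates the predicted activity. A minimal PyTorch sketch of that idea follows; the layer widths, the pooling sizes, and the `n_activities` count are illustrative assumptions, not the paper's exact six-layer architecture.

```python
import torch
import torch.nn as nn

class JointCNN(nn.Module):
    """Sketch: treat the 15 x 1961 joint/time grid as an image whose 4
    channels hold the joint coordinates (an assumed layout)."""
    def __init__(self, n_activities=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 4)),                 # pool only along time
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # avoids shape arithmetic
        )
        self.classifier = nn.Linear(32, n_activities)

    def forward(self, x):                         # x: (batch, 4, 15, 1961)
        return self.classifier(self.features(x).flatten(1))

logits = JointCNN()(torch.randn(2, 4, 15, 1961))
print(logits.shape)                               # torch.Size([2, 12])
```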
1612.06704 | Donggeun Yoo | Donggeun Yoo, Sunggyun Park, Kyunghyun Paeng, Joon-Young Lee, In So
Kweon | Action-Driven Object Detection with Top-Down Visual Attentions | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A dominant paradigm for deep learning based object detection relies on a
"bottom-up" approach using "passive" scoring of class agnostic proposals. These
approaches are efficient but lack a holistic analysis of scene-level context.
In this paper, we present an "action-driven" detection mechanism using our
"top-down" visual attention model. We localize an object by taking sequential
actions that the attention model provides. The attention model conditioned with
an image region provides required actions to get closer toward a target object.
An action at each time step is weak by itself, but an ensemble of the sequential
actions makes a bounding-box accurately converge to a target object boundary.
This attention model we call AttentionNet is composed of a convolutional neural
network. During our whole detection procedure, we only utilize the actions from
a single AttentionNet without any modules for object proposals or post
bounding-box regression. We evaluate our top-down detection mechanism over the
PASCAL VOC series and ILSVRC CLS-LOC dataset, and achieve state-of-the-art
performances compared to the major bottom-up detection methods. In particular,
our detection mechanism shows a strong advantage in elaborate localization by
outperforming Faster R-CNN with a margin of +7.1% over PASCAL VOC 2007 when we
increase the IoU threshold for positive detection to 0.7.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 15:24:46 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Yoo",
"Donggeun",
""
],
[
"Park",
"Sunggyun",
""
],
[
"Paeng",
"Kyunghyun",
""
],
[
"Lee",
"Joon-Young",
""
],
[
"Kweon",
"In So",
""
]
] | TITLE: Action-Driven Object Detection with Top-Down Visual Attentions
ABSTRACT: A dominant paradigm for deep learning based object detection relies on a
"bottom-up" approach using "passive" scoring of class agnostic proposals. These
approaches are efficient but lack a holistic analysis of scene-level context.
In this paper, we present an "action-driven" detection mechanism using our
"top-down" visual attention model. We localize an object by taking sequential
actions that the attention model provides. The attention model conditioned with
an image region provides required actions to get closer toward a target object.
An action at each time step is weak by itself, but an ensemble of the sequential
actions makes a bounding-box accurately converge to a target object boundary.
This attention model we call AttentionNet is composed of a convolutional neural
network. During our whole detection procedure, we only utilize the actions from
a single AttentionNet without any modules for object proposals or post
bounding-box regression. We evaluate our top-down detection mechanism over the
PASCAL VOC series and ILSVRC CLS-LOC dataset, and achieve state-of-the-art
performances compared to the major bottom-up detection methods. In particular,
our detection mechanism shows a strong advantage in elaborate localization by
outperforming Faster R-CNN with a margin of +7.1% over PASCAL VOC 2007 when we
increase the IoU threshold for positive detection to 0.7.
| no_new_dataset | 0.945751 |
1612.06753 | Spencer Cappallo | Spencer Cappallo, Thomas Mensink, Cees G. M. Snoek | Video Stream Retrieval of Unseen Queries using Semantic Memory | Presented at BMVC 2016, British Machine Vision Conference, 2016 | null | null | null | cs.IR cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval of live, user-broadcast video streams is an under-addressed and
increasingly relevant challenge. The on-line nature of the problem requires
temporal evaluation and the unforeseeable scope of potential queries motivates
an approach which can accommodate arbitrary search queries. To account for the
breadth of possible queries, we adopt a no-example approach to query retrieval,
which uses a query's semantic relatedness to pre-trained concept classifiers.
To adapt to shifting video content, we propose memory pooling and memory
welling methods that favor recent information over long past content. We
identify two stream retrieval tasks, instantaneous retrieval at any particular
time and continuous retrieval over a prolonged duration, and propose means for
evaluating them. Three large scale video datasets are adapted to the challenge
of stream retrieval. We report results for our search methods on the new stream
retrieval tasks, as well as demonstrate their efficacy in a traditional,
non-streaming video task.
| [
{
"version": "v1",
"created": "Tue, 20 Dec 2016 16:59:24 GMT"
}
] | 2016-12-21T00:00:00 | [
[
"Cappallo",
"Spencer",
""
],
[
"Mensink",
"Thomas",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] | TITLE: Video Stream Retrieval of Unseen Queries using Semantic Memory
ABSTRACT: Retrieval of live, user-broadcast video streams is an under-addressed and
increasingly relevant challenge. The on-line nature of the problem requires
temporal evaluation and the unforeseeable scope of potential queries motivates
an approach which can accommodate arbitrary search queries. To account for the
breadth of possible queries, we adopt a no-example approach to query retrieval,
which uses a query's semantic relatedness to pre-trained concept classifiers.
To adapt to shifting video content, we propose memory pooling and memory
welling methods that favor recent information over long past content. We
identify two stream retrieval tasks, instantaneous retrieval at any particular
time and continuous retrieval over a prolonged duration, and propose means for
evaluating them. Three large scale video datasets are adapted to the challenge
of stream retrieval. We report results for our search methods on the new stream
retrieval tasks, as well as demonstrate their efficacy in a traditional,
non-streaming video task.
| no_new_dataset | 0.945551 |
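To make the memory idea above concrete, here is a minimal sketch of recency-biased pooling: per-frame concept scores are folded into an exponentially decayed memory, and a zero-example query is scored by its semantic relatedness to the pooled concepts. The decay rule and the dot-product scoring are simplifying assumptions, not the paper's exact memory pooling/welling definitions.

```python
import numpy as np

class DecayMemory:
    """Exponentially decayed pool of per-frame concept scores, so recent
    stream content dominates the ranking of a live stream."""
    def __init__(self, n_concepts, decay=0.9):
        self.m = np.zeros(n_concepts)
        self.decay = decay

    def update(self, frame_scores):
        # fold the newest frame's concept-classifier scores into the pool
        self.m = self.decay * self.m + (1.0 - self.decay) * frame_scores

    def score(self, query_relatedness):
        # query_relatedness[c]: semantic relatedness of the text query to
        # concept c (e.g., derived from word embeddings)
        return float(self.m @ query_relatedness)

mem = DecayMemory(n_concepts=4)
for frame in ([0.9, 0.1, 0.0, 0.0], [0.8, 0.2, 0.1, 0.0], [0.0, 0.1, 0.9, 0.2]):
    mem.update(np.array(frame))
print(mem.score(np.array([0.0, 0.0, 1.0, 0.5])))  # favors recent concepts
```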
1412.0364 | Manas Joglekar | Manas Joglekar, Hector Garcia-Molina, Aditya Parameswaran | Interactive Data Exploration with Smart Drill-Down | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present {\em smart drill-down}, an operator for interactively exploring a
relational table to discover and summarize "interesting" groups of tuples. Each
group of tuples is described by a {\em rule}. For instance, the rule $(a, b,
\star, 1000)$ tells us that there are a thousand tuples with value $a$ in the
first column and $b$ in the second column (and any value in the third column).
Smart drill-down presents an analyst with a list of rules that together
describe interesting aspects of the table. The analyst can tailor the
definition of interesting, and can interactively apply smart drill-down on an
existing rule to explore that part of the table. We demonstrate that the
underlying optimization problems are {\sc NP-Hard}, and describe an algorithm
for finding the approximately optimal list of rules to display when the user
uses a smart drill-down, and a dynamic sampling scheme for efficiently
interacting with large tables. Finally, we perform experiments on real datasets
on our experimental prototype to demonstrate the usefulness of smart drill-down
and study the performance of our algorithms.
| [
{
"version": "v1",
"created": "Mon, 1 Dec 2014 07:09:14 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Oct 2015 01:05:03 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Dec 2016 06:31:52 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Joglekar",
"Manas",
""
],
[
"Garcia-Molina",
"Hector",
""
],
[
"Parameswaran",
"Aditya",
""
]
] | TITLE: Interactive Data Exploration with Smart Drill-Down
ABSTRACT: We present {\em smart drill-down}, an operator for interactively exploring a
relational table to discover and summarize "interesting" groups of tuples. Each
group of tuples is described by a {\em rule}. For instance, the rule $(a, b,
\star, 1000)$ tells us that there are a thousand tuples with value $a$ in the
first column and $b$ in the second column (and any value in the third column).
Smart drill-down presents an analyst with a list of rules that together
describe interesting aspects of the table. The analyst can tailor the
definition of interesting, and can interactively apply smart drill-down on an
existing rule to explore that part of the table. We demonstrate that the
underlying optimization problems are {\sc NP-Hard}, and describe an algorithm
for finding the approximately optimal list of rules to display when the user
uses a smart drill-down, and a dynamic sampling scheme for efficiently
interacting with large tables. Finally, we perform experiments on real datasets
on our experimental prototype to demonstrate the usefulness of smart drill-down
and study the performance of our algorithms.
| no_new_dataset | 0.938745 |
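A toy illustration of the rule formalism above: a rule is a tuple with `*` wildcards, its count is the number of matching tuples, and drilling down expands one starred column into concrete values. The greedy most-common expansion below is only a stand-in for the paper's interestingness-optimized rule lists.

```python
from collections import Counter

STAR = "*"
table = [("a", "b", "x"), ("a", "b", "y"), ("a", "c", "x"), ("d", "b", "x")]

def matches(rule, row):
    return all(r == STAR or r == v for r, v in zip(rule, row))

def count(rule):
    return sum(matches(rule, row) for row in table)

def drill_down(rule):
    """Expand the first starred column of `rule` into its observed values,
    most frequent first (a greedy, illustrative choice)."""
    col = rule.index(STAR)
    vals = Counter(row[col] for row in table if matches(rule, row))
    return [(rule[:col] + (v,) + rule[col + 1:], c) for v, c in vals.most_common()]

root = (STAR, STAR, STAR)
print(count(root))          # 4
print(drill_down(root))     # [(('a', '*', '*'), 3), (('d', '*', '*'), 1)]
```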
1608.03866 | Shripad Gade | Shripad Gade and Nitin H. Vaidya | Distributed Optimization for Client-Server Architecture with Negative
Gradient Weights | [Submitted 12 Aug., 2016. Revised 18 Dec.,2016.] Added Section 3.1,
added additional discussion to Section 5, added references | null | null | null | cs.DC cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Availability of both massive datasets and computing resources has made
machine learning and predictive analytics extremely pervasive. In this work we
present a synchronous algorithm and architecture for distributed optimization
motivated by privacy requirements posed by applications in machine learning. We
present an algorithm for the recently proposed multi-parameter-server
architecture. We consider a group of parameter servers that learn a model based
on randomized gradients received from clients. Clients are computational
entities with private datasets (inducing a private objective function), that
evaluate and upload randomized gradients to the parameter servers. The
parameter servers perform model updates based on received gradients and share
the model parameters with other servers. We prove that the proposed algorithm
can optimize the overall objective function for a very general architecture
involving $C$ clients connected to $S$ parameter servers in an arbitrary time
varying topology and the parameter servers forming a connected network.
| [
{
"version": "v1",
"created": "Fri, 12 Aug 2016 18:34:06 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2016 15:19:25 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Gade",
"Shripad",
""
],
[
"Vaidya",
"Nitin H.",
""
]
] | TITLE: Distributed Optimization for Client-Server Architecture with Negative
Gradient Weights
ABSTRACT: Availability of both massive datasets and computing resources has made
machine learning and predictive analytics extremely pervasive. In this work we
present a synchronous algorithm and architecture for distributed optimization
motivated by privacy requirements posed by applications in machine learning. We
present an algorithm for the recently proposed multi-parameter-server
architecture. We consider a group of parameter servers that learn a model based
on randomized gradients received from clients. Clients are computational
entities with private datasets (inducing a private objective function), that
evaluate and upload randomized gradients to the parameter servers. The
parameter servers perform model updates based on received gradients and share
the model parameters with other servers. We prove that the proposed algorithm
can optimize the overall objective function for a very general architecture
involving $C$ clients connected to $S$ parameter servers in an arbitrary time
varying topology and the parameter servers forming a connected network.
| no_new_dataset | 0.943971 |
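A small simulation of the loop described above, with private quadratic client objectives standing in for real losses; the client sampling, the gradient noise level, and the full parameter averaging between the two servers are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_clients, n_servers, lr = 5, 8, 2, 0.05
# client i privately holds f_i(x) = 0.5 * ||x - t_i||^2
targets = rng.normal(size=(n_clients, dim))
x = [np.zeros(dim) for _ in range(n_servers)]     # one model copy per server

for step in range(500):
    for s in range(n_servers):
        # each server receives randomized gradients from a client subset
        chosen = rng.choice(n_clients, size=4, replace=False)
        grads = [(x[s] - targets[i]) + 0.01 * rng.normal(size=dim)
                 for i in chosen]
        x[s] -= lr * np.mean(grads, axis=0)
    # servers share parameters over their (here fully connected) network
    avg = np.mean(x, axis=0)
    x = [avg.copy() for _ in range(n_servers)]

# distance to the consensus minimizer (the mean of all targets); should be small
print(np.linalg.norm(x[0] - targets.mean(axis=0)))
```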
1611.05104 | Sabeek Pradhan | Shayne Longpre, Sabeek Pradhan, Caiming Xiong, Richard Socher | A Way out of the Odyssey: Analyzing and Combining Recent Insights for
LSTMs | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LSTMs have become a basic building block for many deep NLP models. In recent
years, many improvements and variations have been proposed for deep sequence
models in general, and LSTMs in particular. We propose and analyze a series of
augmentations and modifications to LSTM networks resulting in improved
performance for text classification datasets. We observe compounding
improvements on traditional LSTMs using Monte Carlo test-time model averaging,
average pooling, and residual connections, along with four other suggested
modifications. Our analysis provides a simple, reliable, and high quality
baseline model.
| [
{
"version": "v1",
"created": "Wed, 16 Nov 2016 00:53:01 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Dec 2016 06:47:05 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Longpre",
"Shayne",
""
],
[
"Pradhan",
"Sabeek",
""
],
[
"Xiong",
"Caiming",
""
],
[
"Socher",
"Richard",
""
]
] | TITLE: A Way out of the Odyssey: Analyzing and Combining Recent Insights for
LSTMs
ABSTRACT: LSTMs have become a basic building block for many deep NLP models. In recent
years, many improvements and variations have been proposed for deep sequence
models in general, and LSTMs in particular. We propose and analyze a series of
augmentations and modifications to LSTM networks resulting in improved
performance for text classification datasets. We observe compounding
improvements on traditional LSTMs using Monte Carlo test-time model averaging,
average pooling, and residual connections, along with four other suggested
modifications. Our analysis provides a simple, reliable, and high quality
baseline model.
| no_new_dataset | 0.947672 |
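Of the augmentations listed, Monte Carlo test-time model averaging is the easiest to isolate: leave dropout stochastic at test time and average the predicted distributions over several passes. The toy one-layer "network" below is only a stand-in for a trained LSTM classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 3))                      # stand-in trained weights

def predict_stochastic(x, p=0.5):
    """One forward pass with inverted dropout kept *on* at test time."""
    mask = rng.random(x.shape) > p
    logits = (x * mask / (1.0 - p)) @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mc_average(x, n_samples=20):
    # Monte Carlo test-time model averaging over stochastic passes
    return np.mean([predict_stochastic(x) for _ in range(n_samples)], axis=0)

x = rng.normal(size=50)
print(mc_average(x).round(3))
```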
1612.04440 | Will Grathwohl | Will Grathwohl, Aaron Wilson | Disentangling Space and Time in Video with Hierarchical Variational
Auto-encoders | fixed typo in equation 16 | null | null | null | cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are many forms of feature information present in video data. Principal
among them are object identity information which is largely static across
multiple video frames, and object pose and style information which continuously
transforms from frame to frame. Most existing models confound these two types
of representation by mapping them to a shared feature space. In this paper we
propose a probabilistic approach for learning separable representations of
object identity and pose information using unsupervised video data. Our
approach leverages a deep generative model with a factored prior distribution
that encodes properties of temporal invariances in the hidden feature set.
Learning is achieved via variational inference. We present results of learning
identity and pose information on a dataset of moving characters as well as a
dataset of rotating 3D objects. Our experimental results demonstrate our
model's success in factoring its representation, and demonstrate that the model
achieves improved performance in transfer learning tasks.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 00:20:46 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2016 17:17:26 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Grathwohl",
"Will",
""
],
[
"Wilson",
"Aaron",
""
]
] | TITLE: Disentangling Space and Time in Video with Hierarchical Variational
Auto-encoders
ABSTRACT: There are many forms of feature information present in video data. Principal
among them are object identity information which is largely static across
multiple video frames, and object pose and style information which continuously
transforms from frame to frame. Most existing models confound these two types
of representation by mapping them to a shared feature space. In this paper we
propose a probabilistic approach for learning separable representations of
object identity and pose information using unsupervised video data. Our
approach leverages a deep generative model with a factored prior distribution
that encodes properties of temporal invariances in the hidden feature set.
Learning is achieved via variational inference. We present results of learning
identity and pose information on a dataset of moving characters as well as a
dataset of rotating 3D objects. Our experimental results demonstrate our
model's success in factoring its representation, and demonstrate that the model
achieves improved performance in transfer learning tasks.
| no_new_dataset | 0.942823 |
1612.05710 | Saeed Moghaddam | Saeed Moghaddam, Ahmed Helmy | Multi-modal Mining and Modeling of Big Mobile Networks Based on Users
Behavior and Interest | null | null | null | null | cs.NI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Usage of mobile wireless Internet has grown very fast in recent years. This
radical change in the availability of the Internet has led to the communication of large
amounts of data over mobile networks and consequently to new challenges and
opportunities for modeling of mobile Internet characteristics. While the
traditional approach toward network modeling suggests finding a generic traffic
model for the whole network, in this paper, we show that this approach does not
capture all the dynamics of big mobile networks and does not provide enough
accuracy. Our case study based on a big dataset including billions of netflow
records collected from a campus-wide wireless mobile network shows that user
interests acquired based on accessed domains and visited locations as well as
user behavioral groups have a significant impact on traffic characteristics of
big mobile networks. For this purpose, we utilize a novel graph-based approach
based on KS-test as well as a novel co-clustering technique. Our study shows
that interest-based modeling of big mobile networks can significantly improve
the accuracy and reduce the KS distance by a factor of 5 compared to the generic
approach.
| [
{
"version": "v1",
"created": "Sat, 17 Dec 2016 06:21:05 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Moghaddam",
"Saeed",
""
],
[
"Helmy",
"Ahmed",
""
]
] | TITLE: Multi-modal Mining and Modeling of Big Mobile Networks Based on Users
Behavior and Interest
ABSTRACT: Usage of mobile wireless Internet has grown very fast in recent years. This
radical change in the availability of the Internet has led to the communication of large
amounts of data over mobile networks and consequently to new challenges and
opportunities for modeling of mobile Internet characteristics. While the
traditional approach toward network modeling suggests finding a generic traffic
model for the whole network, in this paper, we show that this approach does not
capture all the dynamics of big mobile networks and does not provide enough
accuracy. Our case study based on a big dataset including billions of netflow
records collected from a campus-wide wireless mobile network shows that user
interests acquired based on accessed domains and visited locations as well as
user behavioral groups have a significant impact on traffic characteristics of
big mobile networks. For this purpose, we utilize a novel graph-based approach
based on KS-test as well as a novel co-clustering technique. Our study shows
that interest-based modeling of big mobile networks can significantly improve
the accuracy and reduce the KS distance by a factor of 5 compared to the generic
approach.
| no_new_dataset | 0.925701 |
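The group-wise KS comparison mentioned above is directly available in scipy; the lognormal flow-size samples below are synthetic stand-ins for per-group netflow records, so only the mechanics (the statistic is the KS distance) carry over.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# synthetic flow sizes (bytes) for two behavioral groups of users
group_a = rng.lognormal(mean=8.0, sigma=1.5, size=5000)
group_b = rng.lognormal(mean=9.0, sigma=1.2, size=5000)

stat, pvalue = ks_2samp(group_a, group_b)   # stat is the KS distance
print(f"KS distance = {stat:.3f}, p = {pvalue:.2e}")
```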
1612.05729 | Mirko Polato | Mirko Polato and Fabio Aiolli | Exploiting sparsity to build efficient kernel based collaborative
filtering for top-N item recommendation | Under revision for Neurocomputing (Elsevier Journal) | null | null | null | cs.IR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing availability of implicit feedback datasets has raised the
interest in developing effective collaborative filtering techniques able to
deal asymmetrically with unambiguous positive feedback and ambiguous negative
feedback. In this paper, we propose a principled kernel-based collaborative
filtering method for top-N item recommendation with implicit feedback. We
present an efficient implementation using the linear kernel, and we show how to
generalize it to kernels of the dot product family preserving the efficiency.
We also investigate the elements that influence the sparsity of a standard
cosine kernel. This analysis shows that the sparsity of the kernel strongly
depends on the properties of the dataset, in particular on the long tail
distribution. We compare our method with state-of-the-art algorithms achieving
good results both in terms of efficiency and effectiveness.
| [
{
"version": "v1",
"created": "Sat, 17 Dec 2016 10:50:41 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Polato",
"Mirko",
""
],
[
"Aiolli",
"Fabio",
""
]
] | TITLE: Exploiting sparsity to build efficient kernel based collaborative
filtering for top-N item recommendation
ABSTRACT: The increasing availability of implicit feedback datasets has raised the
interest in developing effective collaborative filtering techniques able to
deal asymmetrically with unambiguous positive feedback and ambiguous negative
feedback. In this paper, we propose a principled kernel-based collaborative
filtering method for top-N item recommendation with implicit feedback. We
present an efficient implementation using the linear kernel, and we show how to
generalize it to kernels of the dot product family preserving the efficiency.
We also investigate the elements that influence the sparsity of a standard
cosine kernel. This analysis shows that the sparsity of the kernel strongly
depends on the properties of the dataset, in particular on the long tail
distribution. We compare our method with state-of-the-art algorithms achieving
good results both in terms of efficiency and effectiveness.
| no_new_dataset | 0.946745 |
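A minimal sketch of the efficient linear-kernel case for top-N recommendation from binary implicit feedback: build the item-item Gram matrix, normalize it to a cosine kernel, score, and mask already-seen items. The tiny interaction matrix and the N=2 cutoff are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix

# binary implicit-feedback matrix (users x items); 1 = observed positive
R = csr_matrix(np.array([[1, 1, 0, 0],
                         [0, 1, 1, 0],
                         [1, 0, 0, 1]], dtype=float))

G = (R.T @ R).toarray()                       # item-item linear kernel (Gram)
norms = np.sqrt(np.diag(G))
norms[norms == 0] = 1.0
S = G / np.outer(norms, norms)                # normalized (cosine) kernel
np.fill_diagonal(S, 0.0)

scores = np.asarray(R @ S)                    # user x item relevance scores
scores[R.toarray() > 0] = -np.inf             # never re-recommend seen items
top_n = np.argsort(-scores, axis=1)[:, :2]
print(top_n)
```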
1612.05858 | Afsin Akdogan | Afsin Akdogan | Partitioning, Indexing and Querying Spatial Data on Cloud | PhD Dissertation - University of Southern California | null | null | null | cs.DB cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of mobile devices (e.g., smartphones, wearable technologies) is
rapidly growing. In line with this trend, a massive amount of spatial data is
being collected since these devices allow users to geo-tag user-generated
content. Clearly, a scalable computing infrastructure is needed to manage such
large datasets. Meanwhile, Cloud Computing service providers (e.g., Amazon,
Google, and Microsoft) allow users to lease computing resources. However, most
of the existing spatial indexing techniques are designed for the centralized
paradigm, which is limited to the capabilities of a single server. To address the
scalability shortcomings of existing approaches, we provide a study that focuses
on generating a distributed spatial index structure that not only scales out to
multiple servers but also scales up since it fully exploits the multi-core CPUs
available on each server using Voronoi diagram as the partitioning and indexing
technique which we also use to process spatial queries effectively. More
specifically, since the data objects continuously move and issue position
updates to the index structure, we collect the latest positions of objects and
periodically generate a read-only index to eliminate costly distributed
updates. Our approach scales near-linearly in index construction and query
processing, and can efficiently construct an index for millions of objects
within a few seconds. In addition to scalability and efficiency, we also aim to
maximize server utilization so that the same workload can be supported with
fewer servers. Server utilization is a crucial point while using Cloud
Computing because users are charged based on the total amount of time they
reserve each server, with no consideration of utilization.
| [
{
"version": "v1",
"created": "Sun, 18 Dec 2016 06:24:06 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Akdogan",
"Afsin",
""
]
] | TITLE: Partitioning, Indexing and Querying Spatial Data on Cloud
ABSTRACT: The number of mobile devices (e.g., smartphones, wearable technologies) is
rapidly growing. In line with this trend, a massive amount of spatial data is
being collected since these devices allow users to geo-tag user-generated
content. Clearly, a scalable computing infrastructure is needed to manage such
large datasets. Meanwhile, Cloud Computing service providers (e.g., Amazon,
Google, and Microsoft) allow users to lease computing resources. However, most
of the existing spatial indexing techniques are designed for the centralized
paradigm, which is limited to the capabilities of a single server. To address the
scalability shortcomings of existing approaches, we provide a study that focuses
on generating a distributed spatial index structure that not only scales out to
multiple servers but also scales up since it fully exploits the multi-core CPUs
available on each server using Voronoi diagram as the partitioning and indexing
technique which we also use to process spatial queries effectively. More
specifically, since the data objects continuously move and issue position
updates to the index structure, we collect the latest positions of objects and
periodically generate a read-only index to eliminate costly distributed
updates. Our approach scales near-linearly in index construction and query
processing, and can efficiently construct an index for millions of objects
within a few seconds. In addition to scalability and efficiency, we also aim to
maximize server utilization so that the same workload can be supported with
fewer servers. Server utilization is a crucial point while using Cloud
Computing because users are charged based on the total amount of time they
reserve each server, with no consideration of utilization.
| no_new_dataset | 0.947284 |
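Assigning an object to its Voronoi cell reduces to a nearest-neighbor query against the generator points, which is how a periodically rebuilt read-only index could bucket the latest positions; the 64 generators and uniform 2-D points below are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
generators = rng.random((64, 2))        # one Voronoi generator per partition
tree = cKDTree(generators)

points = rng.random((100_000, 2))       # latest collected object positions
_, cell = tree.query(points)            # Voronoi cell = nearest generator

# group object ids per cell to build the per-partition read-only indexes
partitions = {c: np.flatnonzero(cell == c) for c in range(len(generators))}
print(len(partitions[0]), sum(len(v) for v in partitions.values()))
```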
1612.05859 | Afsin Akdogan | Afsin Akdogan, Hien To | Distributed Data Processing Frameworks for Big Graph Data | Survey paper that covers data processing frameworks for big graph
data | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently we create so much data (2.5 quintillion bytes every day) that 90% of
the data in the world today has been created in the last two years alone [1].
This data comes from sensors used to gather traffic or climate information,
posts to social media sites, photos, videos, emails, purchase transaction
records, call logs of cellular networks, etc. This data is big data. In this
report, we first briefly discuss what programming models are used for big data
processing, and focus on graph data and do a survey study about what
programming models/frameworks are used to solve graph problems at very
large-scale. In section 2, we introduce the programming models which are not
specifically designed to handle graph data but we include them in this survey
because we believe these are important frameworks and/or there have been
studies to customize them for more efficient graph processing. In section 3, we
discuss some techniques that yield up to 1340 times speedup for some certain
graph problems when applied to Hadoop. In section 4, we discuss vertex-based
programming model which is simply designed to process large graphs and the
frameworks adapting it. In section 5, we implement two of the fundamental graph
algorithms (PageRank and Weighted Bipartite Matching), and run them on a single
node as the baseline approach to see how fast they are for large datasets and
whether it is worth to partition them.
| [
{
"version": "v1",
"created": "Sun, 18 Dec 2016 06:32:31 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Akdogan",
"Afsin",
""
],
[
"To",
"Hien",
""
]
] | TITLE: Distributed Data Processing Frameworks for Big Graph Data
ABSTRACT: Recently we create so much data (2.5 quintillion bytes every day) that 90% of
the data in the world today has been created in the last two years alone [1].
This data comes from sensors used to gather traffic or climate information,
posts to social media sites, photos, videos, emails, purchase transaction
records, call logs of cellular networks, etc. This data is big data. In this
report, we first briefly discuss what programming models are used for big data
processing, and focus on graph data and do a survey study about what
programming models/frameworks are used to solve graph problems at very
large-scale. In section 2, we introduce the programming models which are not
specifically designed to handle graph data but we include them in this survey
because we believe these are important frameworks and/or there have been
studies to customize them for more efficient graph processing. In section 3, we
discuss some techniques that yield up to 1340 times speedup for some certain
graph problems when applied to Hadoop. In section 4, we discuss vertex-based
programming model which is simply designed to process large graphs and the
frameworks adapting it. In section 5, we implement two of the fundamental graph
algorithms (PageRank and Weighted Bipartite Matching), and run them on a single
node as the baseline approach to see how fast they are for large datasets and
whether it is worth to partition them.
| no_new_dataset | 0.944331 |
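As a single-node baseline of the kind section 5 describes, PageRank is a few lines of power iteration; the damping factor 0.85 is the conventional choice, and the dangling-node handling below is a deliberate simplification.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=100):
    n = adj.shape[0]
    out = adj.sum(axis=1)
    out[out == 0] = 1.0              # simplification: dangling rows stay zero
    P = adj / out[:, None]           # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1.0 - d) / n + d * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(pagerank(adj).round(3))
```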
1612.05932 | Franziska Meier | Franziska Meier and Stefan Schaal | A Probabilistic Representation for Dynamic Movement Primitives | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic Movement Primitives have successfully been used to realize imitation
learning, trial-and-error learning, reinforcement learning, movement
recognition and segmentation and control. Because of this they have become a
popular representation for motor primitives. In this work, we showcase how
DMPs can be reformulated as a probabilistic linear dynamical system with
control inputs. Through this probabilistic representation of DMPs, algorithms
such as Kalman filtering and smoothing are directly applicable to perform
inference on proprioceptive sensor measurements during execution. We show
that inference in this probabilistic model automatically leads to a feedback
term to online modulate the execution of a DMP. Furthermore, we show how
inference allows us to measure the likelihood that we are successfully
executing a given motion primitive. In this context, we show initial results of
using the probabilistic model to detect execution failures on a simulated
movement primitive dataset.
| [
{
"version": "v1",
"created": "Sun, 18 Dec 2016 15:32:45 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Meier",
"Franziska",
""
],
[
"Schaal",
"Stefan",
""
]
] | TITLE: A Probabilistic Representation for Dynamic Movement Primitives
ABSTRACT: Dynamic Movement Primitives have successfully been used to realize imitation
learning, trial-and-error learning, reinforcement learning, movement
recognition and segmentation and control. Because of this they have become a
popular representation for motor primitives. In this work, we showcase how
DMPs can be reformulated as a probabilistic linear dynamical system with
control inputs. Through this probabilistic representation of DMPs, algorithms
such as Kalman filtering and smoothing are directly applicable to perform
inference on proprioceptive sensor measurements during execution. We show
that inference in this probabilistic model automatically leads to a feedback
term to online modulate the execution of a DMP. Furthermore, we show how
inference allows us to measure the likelihood that we are successfully
executing a given motion primitive. In this context, we show initial results of
using the probabilistic model to detect execution failures on a simulated
movement primitive dataset.
| no_new_dataset | 0.942612 |
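Because the reformulation above is a linear dynamical system with control inputs, the textbook Kalman predict/update step applies directly; the matrices A, B, H, Q, Rn below are placeholders for whatever the DMP reformulation yields, and the innovation is the quantity one would monitor to flag execution failures.

```python
import numpy as np

def kalman_step(x, P, u, z, A, B, H, Q, Rn):
    # predict: the DMP forcing term plays the role of the control input u
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # update from a proprioceptive measurement z
    S = H @ P_pred @ H.T + Rn
    K = P_pred @ H.T @ np.linalg.inv(S)
    innov = z - H @ x_pred               # large innovations => likely failure
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new, innov

# 1-D toy: position/velocity state, position measured
A = np.array([[1.0, 0.1], [0.0, 1.0]]); B = np.array([[0.0], [0.1]])
H = np.array([[1.0, 0.0]]); Q = 1e-4 * np.eye(2); Rn = 1e-2 * np.eye(1)
x, P = np.zeros(2), np.eye(2)
x, P, innov = kalman_step(x, P, np.array([1.0]), np.array([0.05]), A, B, H, Q, Rn)
print(x.round(3), innov.round(3))
```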
1612.05968 | Wentao Zhu | Wentao Zhu, Qi Lou, Yeeleng Scott Vang, Xiaohui Xie | Deep Multi-instance Networks with Sparse Label Assignment for Whole
Mammogram Classification | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mammogram classification is directly related to computer-aided diagnosis of
breast cancer. Traditional methods require great effort to annotate the
training data by costly manual labeling and specialized computational models to
detect these annotations during test. Inspired by the success of using deep
convolutional features for natural image analysis and multi-instance learning
for labeling a set of instances/patches, we propose end-to-end trained deep
multi-instance networks for mass classification based on whole mammogram
without the aforementioned costly need to annotate the training data. We
explore three different schemes to construct deep multi-instance networks for
whole mammogram classification. Experimental results on the INbreast dataset
demonstrate the robustness of proposed deep networks compared to previous work
using segmentation and detection annotations in the training.
| [
{
"version": "v1",
"created": "Sun, 18 Dec 2016 18:31:11 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Zhu",
"Wentao",
""
],
[
"Lou",
"Qi",
""
],
[
"Vang",
"Yeeleng Scott",
""
],
[
"Xie",
"Xiaohui",
""
]
] | TITLE: Deep Multi-instance Networks with Sparse Label Assignment for Whole
Mammogram Classification
ABSTRACT: Mammogram classification is directly related to computer-aided diagnosis of
breast cancer. Traditional methods require great effort to annotate the
training data by costly manual labeling and specialized computational models to
detect these annotations during test. Inspired by the success of using deep
convolutional features for natural image analysis and multi-instance learning
for labeling a set of instances/patches, we propose end-to-end trained deep
multi-instance networks for mass classification based on whole mammogram
without the aforementioned costly need to annotate the training data. We
explore three different schemes to construct deep multi-instance networks for
whole mammogram classification. Experimental results on the INbreast dataset
demonstrate the robustness of proposed deep networks compared to previous work
using segmentation and detection annotations in the training.
| no_new_dataset | 0.951233 |
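The multi-instance step can be shown with the simplest pooling schemes: the whole-mammogram score is pooled from patch-level malignancy probabilities. Max pooling and a top-k variant are illustrative choices here, not necessarily any of the paper's three schemes.

```python
import numpy as np

def mil_max(patch_probs):
    # whole image is positive iff at least one patch is: hard max pooling
    return float(np.max(patch_probs))

def mil_topk(patch_probs, k=3):
    # average the k most suspicious patches; more tolerant of one noisy patch
    return float(np.sort(patch_probs)[-k:].mean())

patches = np.array([0.02, 0.05, 0.93, 0.10, 0.07, 0.04])  # patch malignancy
print(mil_max(patches), round(mil_topk(patches), 3))
```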
1612.06057 | Rameshwar Pratap | Raghav Kulkarni, Rameshwar Pratap | Similarity preserving compressions of high dimensional sparse data | null | null | null | null | cs.DS cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rise of the Internet has resulted in an explosion of data consisting of
millions of articles, images, songs, and videos. Most of this data is high
dimensional and sparse. The need to perform an efficient search for similar
objects in such high dimensional big datasets is becoming increasingly common.
Even with the rapid growth in computing power, the brute-force search for such
a task is impractical and at times impossible. Therefore it is quite natural to
investigate the techniques that compress the dimension of the data-set while
preserving the similarity between data objects.
In this work, we propose an efficient compression scheme mapping binary
vectors into binary vectors and simultaneously preserving Hamming distance and
Inner Product. The length of our compression depends only on the sparsity and
is independent of the dimension of the data. Moreover, our schemes provide
a one-shot solution for Hamming distance and Inner Product, and work in the
streaming setting as well. In contrast with the "local projection" strategies
used by most of the previous schemes, our scheme combines (using sparsity) the
following two strategies: $1.$ Partitioning the dimensions into several
buckets, $2.$ Then obtaining "global linear summaries" in each of these
buckets. We generalize our scheme for real-valued data and obtain compressions
for Euclidean distance, Inner Product, and $k$-way Inner Product.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2016 06:27:45 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Kulkarni",
"Raghav",
""
],
[
"Pratap",
"Rameshwar",
""
]
] | TITLE: Similarity preserving compressions of high dimensional sparse data
ABSTRACT: The rise of the Internet has resulted in an explosion of data consisting of
millions of articles, images, songs, and videos. Most of this data is high
dimensional and sparse. The need to perform an efficient search for similar
objects in such high dimensional big datasets is becoming increasingly common.
Even with the rapid growth in computing power, the brute-force search for such
a task is impractical and at times impossible. Therefore it is quite natural to
investigate the techniques that compress the dimension of the data-set while
preserving the similarity between data objects.
In this work, we propose an efficient compression scheme mapping binary
vectors into binary vectors and simultaneously preserving Hamming distance and
Inner Product. The length of our compression depends only on the sparsity and
is independent of the dimension of the data. Moreover, our schemes provide
a one-shot solution for Hamming distance and Inner Product, and work in the
streaming setting as well. In contrast with the "local projection" strategies
used by most of the previous schemes, our scheme combines (using sparsity) the
following two strategies: $1.$ Partitioning the dimensions into several
buckets, $2.$ Then obtaining "global linear summaries" in each of these
buckets. We generalize our scheme for real-valued data and obtain compressions
for Euclidean distance, Inner Product, and $k$-way Inner Product.
| no_new_dataset | 0.942029 |
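A sketch of the two-part strategy for binary data: randomly partition the dimensions into buckets, then keep one parity per bucket as the "global linear summary" over GF(2). When vectors are sparse, bucket collisions are rare and the Hamming distance of the compressions tracks the original; the bucket count is an illustrative knob.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 20_000, 1024                     # original dimension, bucket count
bucket = rng.integers(0, m, size=d)     # random partition of the dimensions

def compress(x):
    counts = np.zeros(m, dtype=np.int64)
    np.add.at(counts, bucket, x)        # per-bucket sums...
    return counts % 2                   # ...reduced to parities (XOR summaries)

def sparse_vec(nnz):
    x = np.zeros(d, dtype=np.int64)
    x[rng.choice(d, size=nnz, replace=False)] = 1
    return x

x, y = sparse_vec(50), sparse_vec(50)
# original Hamming distance vs. distance between the short compressions
print(int(np.sum(x != y)), int(np.sum(compress(x) != compress(y))))
```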
1612.06098 | Sailesh Conjeti | Sailesh Conjeti, Anees Kazi, Nassir Navab and Amin Katouzian | Cross-Modal Manifold Learning for Cross-modal Retrieval | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new scalable algorithm for cross-modal similarity
preserving retrieval in a learnt manifold space. Unlike existing approaches
that compromise between preserving global and local geometries, the proposed
technique respects both simultaneously during manifold alignment. The global
topologies are maintained by recovering underlying mapping functions in the
joint manifold space by deploying partially corresponding instances. The
inter-, and intra-modality affinity matrices are then computed to reinforce
the original data skeleton using a perturbed minimum spanning tree (pMST), and
maximizing the affinity among similar cross-modal instances, respectively. The
performance of the proposed algorithm is evaluated on two multimodal image
datasets (coronary atherosclerosis histology and brain MRI) for two
applications: classification, and regression. Our exhaustive validations and
results demonstrate the superiority of our technique over comparative methods
and its feasibility for improving computer-assisted diagnosis systems, where
disease-specific complementary information shall be aggregated and interpreted
across modalities to form the final decision.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2016 10:03:58 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Conjeti",
"Sailesh",
""
],
[
"Kazi",
"Anees",
""
],
[
"Navab",
"Nassir",
""
],
[
"Katouzian",
"Amin",
""
]
] | TITLE: Cross-Modal Manifold Learning for Cross-modal Retrieval
ABSTRACT: This paper presents a new scalable algorithm for cross-modal similarity
preserving retrieval in a learnt manifold space. Unlike existing approaches
that compromise between preserving global and local geometries, the proposed
technique respects both simultaneously during manifold alignment. The global
topologies are maintained by recovering underlying mapping functions in the
joint manifold space by deploying partially corresponding instances. The
inter-, and intra-modality affinity matrices are then computed to reinforce
the original data skeleton using a perturbed minimum spanning tree (pMST), and
maximizing the affinity among similar cross-modal instances, respectively. The
performance of the proposed algorithm is evaluated on two multimodal image
datasets (coronary atherosclerosis histology and brain MRI) for two
applications: classification, and regression. Our exhaustive validations and
results demonstrate the superiority of our technique over comparative methods
and its feasibility for improving computer-assisted diagnosis systems, where
disease-specific complementary information shall be aggregated and interpreted
across modalities to form the final decision.
| no_new_dataset | 0.946843 |
1612.06129 | Christoph K\"ading | Christoph K\"ading and Erik Rodner and Alexander Freytag and Joachim
Denzler | Active and Continuous Exploration with Deep Neural Networks and Expected
Model Output Changes | accepted contribution at NIPS 2016 Workshop on Continual Learning and
Deep Networks | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The demands on visual recognition systems do not end with the complexity
offered by current large-scale image datasets, such as ImageNet. In
consequence, we need curious and continuously learning algorithms that actively
acquire knowledge about semantic concepts which are present in available
unlabeled data. As a step towards this goal, we show how to perform continuous
active learning and exploration, where an algorithm actively selects relevant
batches of unlabeled examples for annotation. These examples could either
belong to already known or to yet undiscovered classes. Our algorithm is based
on a new generalization of the Expected Model Output Change principle for deep
architectures and is especially tailored to deep neural networks. Furthermore,
we show easy-to-implement approximations that yield efficient techniques for
active selection. Empirical experiments show that our method outperforms
currently used heuristics.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2016 11:27:33 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Käding",
"Christoph",
""
],
[
"Rodner",
"Erik",
""
],
[
"Freytag",
"Alexander",
""
],
[
"Denzler",
"Joachim",
""
]
] | TITLE: Active and Continuous Exploration with Deep Neural Networks and Expected
Model Output Changes
ABSTRACT: The demands on visual recognition systems do not end with the complexity
offered by current large-scale image datasets, such as ImageNet. In
consequence, we need curious and continuously learning algorithms that actively
acquire knowledge about semantic concepts which are present in available
unlabeled data. As a step towards this goal, we show how to perform continuous
active learning and exploration, where an algorithm actively selects relevant
batches of unlabeled examples for annotation. These examples could either
belong to already known or to yet undiscovered classes. Our algorithm is based
on a new generalization of the Expected Model Output Change principle for deep
architectures and is especially tailored to deep neural networks. Furthermore,
we show easy-to-implement approximations that yield efficient techniques for
active selection. Empirical experiments show that our method outperforms
currently used heuristics.
| no_new_dataset | 0.94256 |
1612.06152 | Zhongwen Xu | Zhongwen Xu, Linchao Zhu, Yi Yang | Few-Shot Object Recognition from Machine-Labeled Web Images | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the tremendous advances of Convolutional Neural Networks (ConvNets) on
object recognition, we can now obtain reliable enough machine-labeled
annotations easily by predictions from off-the-shelf ConvNets. In this work, we
present an abstraction memory based framework for few-shot learning, building
upon machine-labeled image annotations. Our method takes some large-scale
machine-annotated datasets (e.g., OpenImages) as an external memory bank. In
the external memory bank, the information is stored in the memory slots with
the form of key-value, where image feature is regarded as key and label
embedding serves as value. When queried by the few-shot examples, our model
selects visually similar data from the external memory bank, and writes the
useful information obtained from related external data into another memory
bank, i.e., abstraction memory. Long Short-Term Memory (LSTM) controllers and
attention mechanisms are utilized to guarantee the data written to the
abstraction memory is correlated to the query example. The abstraction memory
concentrates information from the external memory bank, so that it makes the
few-shot recognition effective. In the experiments, we firstly confirm that our
model can learn to conduct few-shot object recognition on clean human-labeled
data from ImageNet dataset. Then, we demonstrate that with our model,
machine-labeled image annotations are very effective and abundant resources to
perform object recognition on novel categories. Experimental results show that
our proposed model with machine-labeled annotations achieves great performance,
with only a 1% gap relative to the model trained with human-labeled annotations.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2016 12:25:36 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Xu",
"Zhongwen",
""
],
[
"Zhu",
"Linchao",
""
],
[
"Yang",
"Yi",
""
]
] | TITLE: Few-Shot Object Recognition from Machine-Labeled Web Images
ABSTRACT: With the tremendous advances of Convolutional Neural Networks (ConvNets) on
object recognition, we can now obtain reliable enough machine-labeled
annotations easily by predictions from off-the-shelf ConvNets. In this work, we
present an abstraction memory based framework for few-shot learning, building
upon machine-labeled image annotations. Our method takes some large-scale
machine-annotated datasets (e.g., OpenImages) as an external memory bank. In
the external memory bank, the information is stored in the memory slots with
the form of key-value, where image feature is regarded as key and label
embedding serves as value. When queried by the few-shot examples, our model
selects visually similar data from the external memory bank, and writes the
useful information obtained from related external data into another memory
bank, i.e., abstraction memory. Long Short-Term Memory (LSTM) controllers and
attention mechanisms are utilized to guarantee the data written to the
abstraction memory is correlated to the query example. The abstraction memory
concentrates information from the external memory bank, so that it makes the
few-shot recognition effective. In the experiments, we firstly confirm that our
model can learn to conduct few-shot object recognition on clean human-labeled
data from ImageNet dataset. Then, we demonstrate that with our model,
machine-labeled image annotations are very effective and abundant resources to
perform object recognition on novel categories. Experimental results show that
our proposed model with machine-labeled annotations achieves great performance,
with only a 1% gap relative to the model trained with human-labeled annotations.
| no_new_dataset | 0.950641 |
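The key-value read at the core of the memories described above is a similarity-weighted mixture of stored label embeddings; the cosine attention and softmax temperature below are generic assumptions standing in for the paper's LSTM-controlled reads and writes.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def memory_read(query, keys, values, temperature=0.1):
    """Attention read over memory slots: key = image feature,
    value = label embedding, result = convex mix of values."""
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    w = softmax((k @ q) / temperature)
    return w @ values

rng = np.random.default_rng(0)
keys = rng.normal(size=(100, 16))       # stored image features
values = rng.normal(size=(100, 8))      # stored label embeddings
print(memory_read(keys[3] + 0.05 * rng.normal(size=16), keys, values).shape)
```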
1612.06195 | Jerome Darmont | Ciprian-Octavian Truic\u{a}, J\'er\^ome Darmont (ERIC), Julien Velcin
(ERIC) | A Scalable Document-based Architecture for Text Analysis | null | 12th International Conference on Advanced Data Mining and
Applications (ADMA 2016), Dec 2016, Gold Coast, Australia. Springer, 10086,
pp.481-494, 2016, Lecture Notes in Artificial Intelligence | null | null | cs.DB cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analyzing textual data is a very challenging task because of the huge volume
of data generated daily. Fundamental issues in text analysis include the lack
of structure in document datasets, the need for various preprocessing steps
(e.g., stem or lemma extraction, part-of-speech tagging, named entities
recognition...), and performance and scaling issues. Existing text analysis
architectures partly solve these issues, providing restrictive data schemas,
addressing only one aspect of text preprocessing and focusing on one single
task when dealing with performance optimization. As a result, no definite
solution is currently available. Thus, we propose in this paper a new generic
text analysis architecture, where document structure is flexible, many
preprocessing techniques are integrated and textual datasets are indexed for
efficient access. We implement our conceptual architecture using both a
relational and a document-oriented database. Our experiments demonstrate the
feasibility of our approach and the superiority of the document-oriented
logical and physical implementation.
| [
{
"version": "v1",
"created": "Mon, 19 Dec 2016 14:24:23 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Truică",
"Ciprian-Octavian",
"",
"ERIC"
],
[
"Darmont",
"Jérôme",
"",
"ERIC"
],
[
"Velcin",
"Julien",
"",
"ERIC"
]
] | TITLE: A Scalable Document-based Architecture for Text Analysis
ABSTRACT: Analyzing textual data is a very challenging task because of the huge volume
of data generated daily. Fundamental issues in text analysis include the lack
of structure in document datasets, the need for various preprocessing steps
(e.g., stem or lemma extraction, part-of-speech tagging, named entities
recognition...), and performance and scaling issues. Existing text analysis
architectures partly solve these issues, providing restrictive data schemas,
addressing only one aspect of text preprocessing and focusing on one single
task when dealing with performance optimization. As a result, no definite
solution is currently available. Thus, we propose in this paper a new generic
text analysis architecture, where document structure is flexible, many
preprocessing techniques are integrated and textual datasets are indexed for
efficient access. We implement our conceptual architecture using both a
relational and a document-oriented database. Our experiments demonstrate the
feasibility of our approach and the superiority of the document-oriented
logical and physical implementation.
| no_new_dataset | 0.950088 |
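A document-oriented implementation of the flexible schema above takes a few lines with MongoDB: each record keeps the raw text next to whatever preprocessing outputs exist, and a text index provides the efficient access. The connection string, database, and field names are illustrative.

```python
from pymongo import MongoClient, TEXT

client = MongoClient("mongodb://localhost:27017")   # assumed local instance
docs = client.corpus.documents                      # schema-free collection

docs.insert_one({
    "raw": "The cats are sleeping.",
    "lemmas": ["the", "cat", "be", "sleep"],        # preprocessing output
    "pos": ["DT", "NNS", "VBP", "VBG"],             # optional, per document
})
docs.create_index([("lemmas", TEXT)])               # indexed for fast search

for hit in docs.find({"$text": {"$search": "cat"}}):
    print(hit["raw"])
```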
1612.06287 | Antoine Deleforge | Cl\'ement Gaultier (PANAMA), Saurabh Kataria (PANAMA, IIT Kanpur),
Antoine Deleforge (PANAMA) | VAST : The Virtual Acoustic Space Traveler Dataset | International Conference on Latent Variable Analysis and Signal
Separation (LVA/ICA), Feb 2017, Grenoble, France. International Conference on
Latent Variable Analysis and Signal Separation | null | null | null | cs.SD cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new paradigm for sound source lo-calization referred
to as virtual acoustic space traveling (VAST) and presents a first dataset
designed for this purpose. Existing sound source localization methods are
either based on an approximate physical model (physics-driven) or on a
specific-purpose calibration set (data-driven). With VAST, the idea is to learn
a mapping from audio features to desired audio properties using a massive
dataset of simulated room impulse responses. This virtual dataset is designed
to be maximally representative of the potential audio scenes that the
considered system may be evolving in, while remaining reasonably compact. We
show that virtually-learned mappings on this dataset generalize to real data,
overcoming some intrinsic limitations of traditional binaural sound
localization methods based on time differences of arrival.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 15:40:44 GMT"
}
] | 2016-12-20T00:00:00 | [
[
"Gaultier",
"Clément",
"",
"PANAMA"
],
[
"Kataria",
"Saurabh",
"",
"PANAMA, IIT Kanpur"
],
[
"Deleforge",
"Antoine",
"",
"PANAMA"
]
] | TITLE: VAST : The Virtual Acoustic Space Traveler Dataset
ABSTRACT: This paper introduces a new paradigm for sound source localization referred
to as virtual acoustic space traveling (VAST) and presents a first dataset
designed for this purpose. Existing sound source localization methods are
either based on an approximate physical model (physics-driven) or on a
specific-purpose calibration set (data-driven). With VAST, the idea is to learn
a mapping from audio features to desired audio properties using a massive
dataset of simulated room impulse responses. This virtual dataset is designed
to be maximally representative of the potential audio scenes that the
considered system may be evolving in, while remaining reasonably compact. We
show that virtually-learned mappings on this dataset generalize to real data,
overcoming some intrinsic limitations of traditional binaural sound
localization methods based on time differences of arrival.
| new_dataset | 0.959573 |
1603.02814 | Chunhua Shen | Qi Wu, Chunhua Shen, Anton van den Hengel, Peng Wang, Anthony Dick | Image Captioning and Visual Question Answering Based on Attributes and
External Knowledge | 14 pages. arXiv admin note: text overlap with arXiv:1511.06973 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much recent progress in Vision-to-Language problems has been achieved through
a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural
Networks (RNNs). This approach does not explicitly represent high-level
semantic concepts, but rather seeks to progress directly from image features to
text. In this paper we first propose a method of incorporating high-level
concepts into the successful CNN-RNN approach, and show that it achieves a
significant improvement on the state-of-the-art in both image captioning and
visual question answering. We further show that the same mechanism can be used
to incorporate external knowledge, which is critically important for answering
high level visual questions. Specifically, we design a visual question
answering model that combines an internal representation of the content of an
image with information extracted from a general knowledge base to answer a
broad range of image-based questions. It particularly allows questions to be
asked about the contents of an image, even when the image itself does not
contain a complete answer. Our final model achieves the best reported results
on both image captioning and visual question answering on several benchmark
datasets.
| [
{
"version": "v1",
"created": "Wed, 9 Mar 2016 08:56:45 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Dec 2016 11:44:34 GMT"
}
] | 2016-12-19T00:00:00 | [
[
"Wu",
"Qi",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
],
[
"Wang",
"Peng",
""
],
[
"Dick",
"Anthony",
""
]
] | TITLE: Image Captioning and Visual Question Answering Based on Attributes and
External Knowledge
ABSTRACT: Much recent progress in Vision-to-Language problems has been achieved through
a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural
Networks (RNNs). This approach does not explicitly represent high-level
semantic concepts, but rather seeks to progress directly from image features to
text. In this paper we first propose a method of incorporating high-level
concepts into the successful CNN-RNN approach, and show that it achieves a
significant improvement on the state-of-the-art in both image captioning and
visual question answering. We further show that the same mechanism can be used
to incorporate external knowledge, which is critically important for answering
high level visual questions. Specifically, we design a visual question
answering model that combines an internal representation of the content of an
image with information extracted from a general knowledge base to answer a
broad range of image-based questions. It particularly allows questions to be
asked about the contents of an image, even when the image itself does not
contain a complete answer. Our final model achieves the best reported results
on both image captioning and visual question answering on several benchmark
datasets.
| no_new_dataset | 0.947962 |
1612.03350 | Zheng Xu | Zheng Xu, Furong Huang, Louiqa Raschid, Tom Goldstein | Non-negative Factorization of the Occurrence Tensor from Financial
Contracts | NIPS tensor workshop | null | null | null | cs.CE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an algorithm for the non-negative factorization of an occurrence
tensor built from heterogeneous networks. We use the l0 norm to model sparse errors
over discrete values (occurrences), and use decomposed factors to model the
embedded groups of nodes. An efficient splitting method is developed to
optimize the nonconvex and nonsmooth objective. We study both synthetic
problems and a new dataset built from financial documents, resMBS.
| [
{
"version": "v1",
"created": "Sat, 10 Dec 2016 22:26:30 GMT"
}
] | 2016-12-19T00:00:00 | [
[
"Xu",
"Zheng",
""
],
[
"Huang",
"Furong",
""
],
[
"Raschid",
"Louiqa",
""
],
[
"Goldstein",
"Tom",
""
]
] | TITLE: Non-negative Factorization of the Occurrence Tensor from Financial
Contracts
ABSTRACT: We propose an algorithm for the non-negative factorization of an occurrence
tensor built from heterogeneous networks. We use the l0 norm to model sparse errors
over discrete values (occurrences), and use decomposed factors to model the
embedded groups of nodes. An efficient splitting method is developed to
optimize the nonconvex and nonsmooth objective. We study both synthetic
problems and a new dataset built from financial documents, resMBS.
| new_dataset | 0.958693 |
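The abstract above describes factorizing a non-negative occurrence tensor. As a rough illustration of that family of methods, the sketch below runs a plain alternating projected-gradient non-negative CP decomposition in NumPy; it is not the paper's splitting method, it omits the l0 sparse-error term, and the toy tensor, rank, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

def ncp(T, rank=4, iters=500, lr=5e-3, seed=0):
    """Non-negative CP factorization of a 3-way occurrence tensor by
    projected gradient descent. A simple stand-in for the paper's
    splitting method; the l0 sparse-error term is omitted."""
    rng = np.random.default_rng(seed)
    A = rng.random((T.shape[0], rank))
    B = rng.random((T.shape[1], rank))
    C = rng.random((T.shape[2], rank))
    for _ in range(iters):
        R = np.einsum('ir,jr,kr->ijk', A, B, C) - T   # residual tensor
        gA = np.einsum('ijk,jr,kr->ir', R, B, C)      # gradient w.r.t. A
        gB = np.einsum('ijk,ir,kr->jr', R, A, C)
        gC = np.einsum('ijk,ir,jr->kr', R, A, B)
        A = np.maximum(A - lr * gA, 0.0)              # project onto >= 0
        B = np.maximum(B - lr * gB, 0.0)
        C = np.maximum(C - lr * gC, 0.0)
    return A, B, C

# toy occurrence tensor: counts over (entity, relation, document)
T = np.random.default_rng(1).poisson(1.0, (8, 6, 10)).astype(float)
A, B, C = ncp(T)
approx = np.einsum('ir,jr,kr->ijk', A, B, C)
print('relative error:', np.linalg.norm(approx - T) / np.linalg.norm(T))
```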
1612.05348 | Ndapandula Nakashole | Ndapandula Nakashole, Tom M. Mitchell | Machine Reading with Background Knowledge | 28 pages | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intelligent systems capable of automatically understanding natural language
text are important for many artificial intelligence applications including
mobile phone voice assistants, computer vision, and robotics. Understanding
language often constitutes fitting new information into a previously acquired
view of the world. However, many machine reading systems rely on the text alone
to infer its meaning. In this paper, we pursue a different approach; machine
reading methods that make use of background knowledge to facilitate language
understanding. To this end, we have developed two methods: The first method
addresses prepositional phrase attachment ambiguity. It uses background
knowledge within a semi-supervised machine learning algorithm that learns from
both labeled and unlabeled data. This approach yields state-of-the-art results
on two datasets against strong baselines; The second method extracts
relationships from compound nouns. Our knowledge-aware method for compound noun
analysis accurately extracts relationships and significantly outperforms a
baseline that does not make use of background knowledge.
| [
{
"version": "v1",
"created": "Fri, 16 Dec 2016 03:33:07 GMT"
}
] | 2016-12-19T00:00:00 | [
[
"Nakashole",
"Ndapandula",
""
],
[
"Mitchell",
"Tom M.",
""
]
] | TITLE: Machine Reading with Background Knowledge
ABSTRACT: Intelligent systems capable of automatically understanding natural language
text are important for many artificial intelligence applications including
mobile phone voice assistants, computer vision, and robotics. Understanding
language often constitutes fitting new information into a previously acquired
view of the world. However, many machine reading systems rely on the text alone
to infer its meaning. In this paper, we pursue a different approach; machine
reading methods that make use of background knowledge to facilitate language
understanding. To this end, we have developed two methods: The first method
addresses prepositional phrase attachment ambiguity. It uses background
knowledge within a semi-supervised machine learning algorithm that learns from
both labeled and unlabeled data. This approach yields state-of-the-art results
on two datasets against strong baselines; The second method extracts
relationships from compound nouns. Our knowledge-aware method for compound noun
analysis accurately extracts relationships and significantly outperforms a
baseline that does not make use of background knowledge.
| no_new_dataset | 0.949248 |
1612.05386 | Qi Wu | Peng Wang, Qi Wu, Chunhua Shen, Anton van den Hengel | The VQA-Machine: Learning How to Use Existing Vision Algorithms to
Answer New Questions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most intriguing features of the Visual Question Answering (VQA)
challenge is the unpredictability of the questions. Extracting the information
required to answer them demands a variety of image operations from detection
and counting, to segmentation and reconstruction. To train a method to perform
even one of these operations accurately from {image,question,answer} tuples
would be challenging, but to aim to achieve them all with a limited set of such
training data seems ambitious at best. We propose here instead a more general
and scalable approach which exploits the fact that very good methods to achieve
these operations already exist, and thus do not need to be trained. Our method
thus learns how to exploit a set of external off-the-shelf algorithms to
achieve its goal, an approach that has something in common with the Neural
Turing Machine. The core of our proposed method is a new co-attention model. In
addition, the proposed approach generates human-readable reasons for its
decision, and can still be trained end-to-end without ground truth reasons
being given. We demonstrate the effectiveness on two publicly available
datasets, Visual Genome and VQA, and show that it produces the state-of-the-art
results in both cases.
| [
{
"version": "v1",
"created": "Fri, 16 Dec 2016 07:07:25 GMT"
}
] | 2016-12-19T00:00:00 | [
[
"Wang",
"Peng",
""
],
[
"Wu",
"Qi",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] | TITLE: The VQA-Machine: Learning How to Use Existing Vision Algorithms to
Answer New Questions
ABSTRACT: One of the most intriguing features of the Visual Question Answering (VQA)
challenge is the unpredictability of the questions. Extracting the information
required to answer them demands a variety of image operations from detection
and counting, to segmentation and reconstruction. To train a method to perform
even one of these operations accurately from {image,question,answer} tuples
would be challenging, but to aim to achieve them all with a limited set of such
training data seems ambitious at best. We propose here instead a more general
and scalable approach which exploits the fact that very good methods to achieve
these operations already exist, and thus do not need to be trained. Our method
thus learns how to exploit a set of external off-the-shelf algorithms to
achieve its goal, an approach that has something in common with the Neural
Turing Machine. The core of our proposed method is a new co-attention model. In
addition, the proposed approach generates human-readable reasons for its
decision, and can still be trained end-to-end without ground truth reasons
being given. We demonstrate the effectiveness on two publicly available
datasets, Visual Genome and VQA, and show that it produces the state-of-the-art
results in both cases.
| no_new_dataset | 0.943608 |
1612.05420 | Arkanath Pathak | Arkanath Pathak, Pawan Goyal and Plaban Bhowmick | A Two-Phase Approach Towards Identifying Argument Structure in Natural
Language | Presented at NLPTEA 2016, held in conjunction with COLING 2016 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We propose a new approach for extracting argument structure from natural
language texts that contain an underlying argument. Our approach comprises
two phases: Score Assignment and Structure Prediction. The Score Assignment
phase trains models to classify relations between argument units (Support,
Attack or Neutral). To that end, different training strategies have been
explored. We identify different linguistic and lexical features for training
the classifiers. Through ablation study, we observe that our novel use of
word-embedding features is most effective for this task. The Structure
Prediction phase makes use of the scores from the Score Assignment phase to
arrive at the optimal structure. We perform experiments on three argumentation
datasets, namely, AraucariaDB, Debatepedia and Wikipedia. We also propose two
baselines and observe that the proposed approach outperforms baseline systems
for the final task of Structure Prediction.
| [
{
"version": "v1",
"created": "Fri, 16 Dec 2016 10:39:53 GMT"
}
] | 2016-12-19T00:00:00 | [
[
"Pathak",
"Arkanath",
""
],
[
"Goyal",
"Pawan",
""
],
[
"Bhowmick",
"Plaban",
""
]
] | TITLE: A Two-Phase Approach Towards Identifying Argument Structure in Natural
Language
ABSTRACT: We propose a new approach for extracting argument structure from natural
language texts that contain an underlying argument. Our approach comprises
two phases: Score Assignment and Structure Prediction. The Score Assignment
phase trains models to classify relations between argument units (Support,
Attack or Neutral). To that end, different training strategies have been
explored. We identify different linguistic and lexical features for training
the classifiers. Through ablation study, we observe that our novel use of
word-embedding features is most effective for this task. The Structure
Prediction phase makes use of the scores from the Score Assignment phase to
arrive at the optimal structure. We perform experiments on three argumentation
datasets, namely, AraucariaDB, Debatepedia and Wikipedia. We also propose two
baselines and observe that the proposed approach outperforms baseline systems
for the final task of Structure Prediction.
| no_new_dataset | 0.952442 |
1612.05532 | Mario Mureddu | Mario Mureddu | Representation of the German transmission grid for Renewable Energy
Sources impact analysis | The dataset to which this paper refers can be found in: Mureddu, M.
(2016). Representation of the German transmission grid for Renewable Energy
Sources impact analysis.figshare.
http://doi.org/10.6084/m9.figshare.4233782.v2 | null | null | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing impact of fossil energy generation on the Earth's ecological
balance points to the need for a transition in power generation technology
towards cleaner and more sustainable Renewable Energy Sources (RES). This
transition is leading to new paradigms and technologies for effective energy
transmission and distribution that take into account the stochastic power
output of RES. In this scenario, up-to-date and reliable datasets on the
topological and operative parameters of power systems in the presence of RES
are needed, both for proposing and for testing new solutions. In this spirit,
I present a dataset for the German 380 kV grid which contains full DC Power
Flow operative states of the grid in the presence of various amounts of RES
share, ranging from realistic levels up to 60\%, and which can be used as a
reference dataset for both steady-state and dynamical analysis.
| [
{
"version": "v1",
"created": "Fri, 16 Dec 2016 16:15:24 GMT"
}
] | 2016-12-19T00:00:00 | [
[
"Mureddu",
"Mario",
""
]
] | TITLE: Representation of the German transmission grid for Renewable Energy
Sources impact analysis
ABSTRACT: The increasing impact of fossil energy generation on the Earth's ecological
balance points to the need for a transition in power generation technology
towards cleaner and more sustainable Renewable Energy Sources (RES). This
transition is leading to new paradigms and technologies for effective energy
transmission and distribution that take into account the stochastic power
output of RES. In this scenario, up-to-date and reliable datasets on the
topological and operative parameters of power systems in the presence of RES
are needed, both for proposing and for testing new solutions. In this spirit,
I present a dataset for the German 380 kV grid which contains full DC Power
Flow operative states of the grid in the presence of various amounts of RES
share, ranging from realistic levels up to 60\%, and which can be used as a
reference dataset for both steady-state and dynamical analysis.
| new_dataset | 0.961965 |
1612.05571 | Daniel Neil | Daniel Neil, Jun Haeng Lee, Tobi Delbruck, Shih-Chii Liu | Delta Networks for Optimized Recurrent Network Computation | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many neural networks exhibit stability in their activation patterns over time
in response to inputs from sensors operating under real-world conditions. By
capitalizing on this property of natural signals, we propose a Recurrent Neural
Network (RNN) architecture called a delta network in which each neuron
transmits its value only when the change in its activation exceeds a threshold.
The execution of RNNs as delta networks is attractive because their states must
be stored and fetched at every timestep, unlike in convolutional neural
networks (CNNs). We show that a naive run-time delta network implementation
offers modest improvements on the number of memory accesses and computes, but
optimized training techniques confer higher accuracy at higher speedup. With
these optimizations, we demonstrate a 9X reduction in cost with negligible loss
of accuracy for the TIDIGITS audio digit recognition benchmark. Similarly, on
the large Wall Street Journal speech recognition benchmark even existing
networks can be greatly accelerated as delta networks, and a 5.7x improvement
with negligible loss of accuracy can be obtained through training. Finally, on
an end-to-end CNN trained for steering angle prediction in a driving dataset,
the RNN cost can be reduced by a substantial 100X.
| [
{
"version": "v1",
"created": "Fri, 16 Dec 2016 17:57:15 GMT"
}
] | 2016-12-19T00:00:00 | [
[
"Neil",
"Daniel",
""
],
[
"Lee",
"Jun Haeng",
""
],
[
"Delbruck",
"Tobi",
""
],
[
"Liu",
"Shih-Chii",
""
]
] | TITLE: Delta Networks for Optimized Recurrent Network Computation
ABSTRACT: Many neural networks exhibit stability in their activation patterns over time
in response to inputs from sensors operating under real-world conditions. By
capitalizing on this property of natural signals, we propose a Recurrent Neural
Network (RNN) architecture called a delta network in which each neuron
transmits its value only when the change in its activation exceeds a threshold.
The execution of RNNs as delta networks is attractive because their states must
be stored and fetched at every timestep, unlike in convolutional neural
networks (CNNs). We show that a naive run-time delta network implementation
offers modest improvements on the number of memory accesses and computes, but
optimized training techniques confer higher accuracy at higher speedup. With
these optimizations, we demonstrate a 9X reduction in cost with negligible loss
of accuracy for the TIDIGITS audio digit recognition benchmark. Similarly, on
the large Wall Street Journal speech recognition benchmark even existing
networks can be greatly accelerated as delta networks, and a 5.7x improvement
with negligible loss of accuracy can be obtained through training. Finally, on
an end-to-end CNN trained for steering angle prediction in a driving dataset,
the RNN cost can be reduced by a substantial 100X.
| no_new_dataset | 0.946745 |
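The delta-network idea in the abstract above can be illustrated compactly: a recurrent cell transmits an input or state component only when its change exceeds a threshold, so the matrix-vector work scales with the number of changed components. The sketch below uses a plain tanh RNN cell with arbitrary weights and threshold; the paper's trained GRU models and hardware cost model are not reproduced.

```python
import numpy as np

class DeltaRNNCell:
    """Vanilla tanh RNN cell evaluated in 'delta' mode: multiply-accumulate
    only the weight columns whose input/state component changed by more
    than the threshold `theta` since it was last transmitted."""
    def __init__(self, n_in, n_hid, theta=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0.0, 0.3, (n_hid, n_in))
        self.Wh = rng.normal(0.0, 0.3, (n_hid, n_hid))
        self.theta = theta
        self.x_ref = np.zeros(n_in)    # last transmitted input values
        self.h_ref = np.zeros(n_hid)   # last transmitted state values
        self.z = np.zeros(n_hid)       # running pre-activation accumulator
        self.h = np.zeros(n_hid)

    def step(self, x):
        dx = x - self.x_ref
        mx = np.abs(dx) > self.theta               # inputs worth transmitting
        self.z += self.Wx[:, mx] @ dx[mx]
        self.x_ref[mx] = x[mx]
        dh = self.h - self.h_ref
        mh = np.abs(dh) > self.theta               # states worth transmitting
        self.z += self.Wh[:, mh] @ dh[mh]
        self.h_ref[mh] = self.h[mh]
        self.h = np.tanh(self.z)
        return self.h, int(mx.sum() + mh.sum())    # columns actually used

cell = DeltaRNNCell(16, 32)
xs = np.cumsum(np.random.default_rng(1).normal(0, 0.02, (100, 16)), axis=0)
used = [cell.step(x)[1] for x in xs]
print('avg transmitted components per step:', np.mean(used), 'of', 16 + 32)
```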
1612.05626 | Biplav Srivastava | Biplav Srivastava, Sandeep Sandha, Vaskar Raychoudhury, Sukanya
Randhawa, Viral Kapoor, Anmol Agrawal | An Open, Multi-Sensor, Dataset of Water Pollution of Ganga Basin and its
Application to Understand Impact of Large Religious Gathering | 7 pages | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Water is a crucial pre-requisite for all human activities. Due to growing
demand from population and shrinking supply of potable water, there is an
urgent need to use computational methods to manage available water
intelligently, and especially in developing countries like India where even
basic data to track water availability or physical infrastructure to process
water are inadequate. In this context, we present a dataset of water pollution
containing quantitative and qualitative data from a combination of modalities
- real-time sensors, lab results, and estimates from people using mobile apps.
The data on our API-accessible cloud platform covers more than 60 locations and
consists both of what we have ourselves collected from multiple locations
following a novel process, and of data from others (lab results) which were
open but hitherto difficult to access. Further, we discuss an application of the released
data to understand spatio-temporal pollution impact of a large event with
hundreds of millions of people converging on a river during a religious
gathering (Ardh Khumbh 2016) spread over months. Such unprecedented details can
help authorities manage an ongoing event or plan for future ones. The community
can use the data for any application and also contribute new data to the
platform.
| [
{
"version": "v1",
"created": "Sun, 20 Nov 2016 01:45:36 GMT"
}
] | 2016-12-19T00:00:00 | [
[
"Srivastava",
"Biplav",
""
],
[
"Sandha",
"Sandeep",
""
],
[
"Raychoudhury",
"Vaskar",
""
],
[
"Randhawa",
"Sukanya",
""
],
[
"Kapoor",
"Viral",
""
],
[
"Agrawal",
"Anmol",
""
]
] | TITLE: An Open, Multi-Sensor, Dataset of Water Pollution of Ganga Basin and its
Application to Understand Impact of Large Religious Gathering
ABSTRACT: Water is a crucial pre-requisite for all human activities. Due to growing
demand from population and shrinking supply of potable water, there is an
urgent need to use computational methods to manage available water
intelligently, and especially in developing countries like India where even
basic data to track water availability or physical infrastructure to process
water are inadequate. In this context, we present a dataset of water pollution
containing quantitative and qualitative data from a combination of modalities
- real-time sensors, lab results, and estimates from people using mobile apps.
The data on our API-accessible cloud platform covers more than 60 locations and
consists both of what we have ourselves collected from multiple locations
following a novel process, and of data from others (lab results) which were
open but hitherto difficult to access. Further, we discuss an application of the released
data to understand spatio-temporal pollution impact of a large event with
hundreds of millions of people converging on a river during a religious
gathering (Ardh Khumbh 2016) spread over months. Such unprecedented details can
help authorities manage an ongoing event or plan for future ones. The community
can use the data for any application and also contribute new data to the
platform.
| new_dataset | 0.969556 |
1612.05627 | Giulio Ruffini | Giulio Ruffini | Models, networks and algorithmic complexity | null | null | null | STARLAB TECHNICAL NOTE, TN00339 (V0.9) | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I aim to show that models, classification or generating functions,
invariances and datasets are algorithmically equivalent concepts once properly
defined, and provide some concrete examples of them. I then show that a) neural
networks (NNs) of different kinds can be seen to implement models, b) that
perturbations of inputs and nodes in NNs trained to optimally implement simple
models propagate strongly, c) that there is a framework in which recurrent,
deep and shallow networks can be seen to fall into a descriptive power
hierarchy in agreement with notions from the theory of recursive functions. The
motivation for these definitions and following analysis lies in the context of
cognitive neuroscience, and in particular in Ruffini (2016), where the concept
of model is used extensively, as is the concept of algorithmic complexity.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 00:54:03 GMT"
}
] | 2016-12-19T00:00:00 | [
[
"Ruffini",
"Giulio",
""
]
] | TITLE: Models, networks and algorithmic complexity
ABSTRACT: I aim to show that models, classification or generating functions,
invariances and datasets are algorithmically equivalent concepts once properly
defined, and provide some concrete examples of them. I then show that a) neural
networks (NNs) of different kinds can be seen to implement models, b) that
perturbations of inputs and nodes in NNs trained to optimally implement simple
models propagate strongly, c) that there is a framework in which recurrent,
deep and shallow networks can be seen to fall into a descriptive power
hierarchy in agreement with notions from the theory of recursive functions. The
motivation for these definitions and following analysis lies in the context of
cognitive neuroscience, and in particular in Ruffini (2016), where the concept
of model is used extensively, as is the concept of algorithmic complexity.
| no_new_dataset | 0.949949 |
1512.04848 | Travis Dick | Travis Dick, Mu Li, Venkata Krishna Pillutla, Colin White, Maria
Florina Balcan, Alex Smola | Data Driven Resource Allocation for Distributed Learning | null | null | null | null | cs.LG cs.DS stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In distributed machine learning, data is dispatched to multiple machines for
processing. Motivated by the fact that similar data points often belong to the
same or similar classes, and more generally, classification rules of high
accuracy tend to be "locally simple but globally complex" (Vapnik & Bottou
1993), we propose data dependent dispatching that takes advantage of such
structure. We present an in-depth analysis of this model, providing new
algorithms with provable worst-case guarantees, analysis proving existing
scalable heuristics perform well in natural non worst-case conditions, and
techniques for extending a dispatching rule from a small sample to the entire
distribution. We overcome novel technical challenges to satisfy important
conditions for accurate distributed learning, including fault tolerance and
balancedness. We empirically compare our approach with baselines based on
random partitioning, balanced partition trees, and locality sensitive hashing,
showing that we achieve significantly higher accuracy on both synthetic and
real world image and advertising datasets. We also demonstrate that our
technique strongly scales with the available computing power.
| [
{
"version": "v1",
"created": "Tue, 15 Dec 2015 16:41:42 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Dec 2016 20:45:52 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Dick",
"Travis",
""
],
[
"Li",
"Mu",
""
],
[
"Pillutla",
"Venkata Krishna",
""
],
[
"White",
"Colin",
""
],
[
"Balcan",
"Maria Florina",
""
],
[
"Smola",
"Alex",
""
]
] | TITLE: Data Driven Resource Allocation for Distributed Learning
ABSTRACT: In distributed machine learning, data is dispatched to multiple machines for
processing. Motivated by the fact that similar data points often belong to the
same or similar classes, and more generally, classification rules of high
accuracy tend to be "locally simple but globally complex" (Vapnik & Bottou
1993), we propose data dependent dispatching that takes advantage of such
structure. We present an in-depth analysis of this model, providing new
algorithms with provable worst-case guarantees, analysis proving existing
scalable heuristics perform well in natural non worst-case conditions, and
techniques for extending a dispatching rule from a small sample to the entire
distribution. We overcome novel technical challenges to satisfy important
conditions for accurate distributed learning, including fault tolerance and
balancedness. We empirically compare our approach with baselines based on
random partitioning, balanced partition trees, and locality sensitive hashing,
showing that we achieve significantly higher accuracy on both synthetic and
real world image and advertising datasets. We also demonstrate that our
technique strongly scales with the available computing power.
| no_new_dataset | 0.948251 |
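To give a feel for the data-dependent dispatching discussed above, here is a toy sketch that clusters points and then assigns each to the nearest cluster centre that still has capacity, so similar points co-locate while machine loads stay balanced. This is only a heuristic in the same spirit, not the paper's algorithm with worst-case guarantees; the slack factor and data sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def balanced_dispatch(X, n_machines, slack=1.1, seed=0):
    """Cluster the points, then send each point to the nearest cluster
    centre that still has spare capacity, so similar points co-locate
    while machine loads stay balanced."""
    km = KMeans(n_clusters=n_machines, n_init=10, random_state=seed).fit(X)
    cap = int(np.ceil(slack * len(X) / n_machines))   # per-machine capacity
    loads = np.zeros(n_machines, dtype=int)
    assign = np.empty(len(X), dtype=int)
    dist = km.transform(X)                  # distance of each point to centres
    for i in np.argsort(dist.min(axis=1)):  # place confident points first
        for m in np.argsort(dist[i]):
            if loads[m] < cap:
                assign[i] = m
                loads[m] += 1
                break
    return assign, loads

X = np.random.default_rng(0).normal(size=(1000, 8))
assign, loads = balanced_dispatch(X, n_machines=5)
print('machine loads:', loads)
```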
1601.07265 | Xi Li | Siyu Huang, Xi Li, Zhongfei Zhang, Zhouzhou He, Fei Wu, Wei Liu,
Jinhui Tang, and Yueting Zhuang | Deep Learning Driven Visual Path Prediction from a Single Image | null | IEEE Transactions on Image Processing, vol. 25, no. 12, pp.
5892-5904, Dec. 2016 | 10.1109/TIP.2016.2613686 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Capabilities of inference and prediction are significant components of visual
systems. In this paper, we address one of their important and challenging tasks:
visual path prediction. Its goal is to infer the future path for a visual
object in a static scene. This task is complicated as it needs high-level
semantic understanding of both the scenes and the motion patterns underlying video
sequences. In practice, cluttered situations have also raised higher demands on
the effectiveness and robustness of the considered models. Motivated by these
observations, we propose a deep learning framework which simultaneously
performs deep feature learning for visual representation in conjunction with
spatio-temporal context modeling. After that, we propose a unified path
planning scheme to make accurate future path prediction based on the analytic
results of the context models. The highly effective visual representation and
deep context models ensure that our framework makes a deep semantic
understanding of the scene and motion pattern, consequently improving the
performance of the visual path prediction task. In order to comprehensively
evaluate the model's performance on the visual path prediction task, we
construct two large benchmark datasets from the adaptation of video tracking
datasets. The qualitative and quantitative experimental results show that our
approach outperforms the existing approaches and has better generalization
capability.
| [
{
"version": "v1",
"created": "Wed, 27 Jan 2016 05:04:31 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Huang",
"Siyu",
""
],
[
"Li",
"Xi",
""
],
[
"Zhang",
"Zhongfei",
""
],
[
"He",
"Zhouzhou",
""
],
[
"Wu",
"Fei",
""
],
[
"Liu",
"Wei",
""
],
[
"Tang",
"Jinhui",
""
],
[
"Zhuang",
"Yueting",
""
]
] | TITLE: Deep Learning Driven Visual Path Prediction from a Single Image
ABSTRACT: Capabilities of inference and prediction are significant components of visual
systems. In this paper, we address one of their important and challenging tasks:
visual path prediction. Its goal is to infer the future path for a visual
object in a static scene. This task is complicated as it needs high-level
semantic understanding of both the scenes and the motion patterns underlying video
sequences. In practice, cluttered situations have also raised higher demands on
the effectiveness and robustness of the considered models. Motivated by these
observations, we propose a deep learning framework which simultaneously
performs deep feature learning for visual representation in conjunction with
spatio-temporal context modeling. After that, we propose a unified path
planning scheme to make accurate future path prediction based on the analytic
results of the context models. The highly effective visual representation and
deep context models ensure that our framework makes a deep semantic
understanding of the scene and motion pattern, consequently improving the
performance of the visual path prediction task. In order to comprehensively
evaluate the model's performance on the visual path prediction task, we
construct two large benchmark datasets from the adaptation of video tracking
datasets. The qualitative and quantitative experimental results show that our
approach outperforms the existing approaches and has better generalization
capability.
| no_new_dataset | 0.943556 |
1612.00390 | Andreas Savakis | Jefferson Ryan Medel, Andreas Savakis | Anomaly Detection in Video Using Predictive Convolutional Long
Short-Term Memory Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automating the detection of anomalous events within long video sequences is
challenging due to the ambiguity of how such events are defined. We approach
the problem by learning generative models that can identify anomalies in videos
using limited supervision. We propose end-to-end trainable composite
Convolutional Long Short-Term Memory (Conv-LSTM) networks that are able to
predict the evolution of a video sequence from a small number of input frames.
Regularity scores are derived from the reconstruction errors of a set of
predictions with abnormal video sequences yielding lower regularity scores as
they diverge further from the actual sequence over time. The models utilize a
composite structure and examine the effects of conditioning in learning more
meaningful representations. The best model is chosen based on the
reconstruction and prediction accuracy. The Conv-LSTM models are evaluated both
qualitatively and quantitatively, demonstrating competitive results on anomaly
detection datasets. Conv-LSTM units are shown to be an effective tool for
modeling and predicting video sequences.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 19:28:59 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Dec 2016 16:39:32 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Medel",
"Jefferson Ryan",
""
],
[
"Savakis",
"Andreas",
""
]
] | TITLE: Anomaly Detection in Video Using Predictive Convolutional Long
Short-Term Memory Networks
ABSTRACT: Automating the detection of anomalous events within long video sequences is
challenging due to the ambiguity of how such events are defined. We approach
the problem by learning generative models that can identify anomalies in videos
using limited supervision. We propose end-to-end trainable composite
Convolutional Long Short-Term Memory (Conv-LSTM) networks that are able to
predict the evolution of a video sequence from a small number of input frames.
Regularity scores are derived from the reconstruction errors of a set of
predictions with abnormal video sequences yielding lower regularity scores as
they diverge further from the actual sequence over time. The models utilize a
composite structure and examine the effects of conditioning in learning more
meaningful representations. The best model is chosen based on the
reconstruction and prediction accuracy. The Conv-LSTM models are evaluated both
qualitatively and quantitatively, demonstrating competitive results on anomaly
detection datasets. Conv-LSTM units are shown to be an effective tool for
modeling and predicting video sequences.
| no_new_dataset | 0.948058 |
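The regularity score in the abstract above is easy to illustrate: given per-frame reconstruction or prediction errors from any sequence model, the score is the min-max normalized complement of the error, and low-scoring stretches flag candidate anomalies. The error values and threshold below are synthetic placeholders rather than Conv-LSTM outputs.

```python
import numpy as np

def regularity_score(errors):
    """Map per-frame reconstruction errors to [0, 1] regularity scores:
    s(t) = 1 - (e(t) - min e) / (max e - min e)."""
    e = np.asarray(errors, dtype=float)
    return 1.0 - (e - e.min()) / (e.max() - e.min() + 1e-12)

# synthetic errors: mostly regular frames plus an anomalous burst
rng = np.random.default_rng(0)
errors = rng.normal(1.0, 0.05, 300)
errors[120:150] += 0.8           # injected anomaly
scores = regularity_score(errors)
anomalous = np.where(scores < 0.5)[0]
print('flagged frames:', anomalous.min(), '...', anomalous.max())
```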
1612.00575 | Rongpeng Li | Chao Yuan, Zhifeng Zhao, Rongpeng Li, Meng Li, Honggang Zhang | Not Call Me Cellular Any More: The Emergence of Scaling Law, Fractal
Patterns and Small-World in Wireless Networks | null | null | null | null | cs.SI cs.IT math.IT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In conventional cellular networks, for base stations (BSs) that are deployed
far away from each other, it is general to assume them to be mutually
independent. Nevertheless, after long-term evolution of cellular networks in
various generations, this assumption no longer holds. Instead, the BSs, which
seem to be gradually deployed by operators in a service-oriented manner, have
embedded many fundamentally distinctive features in their locations, coverage
and traffic loading. These features can be leveraged to analyze the intrinsic
pattern in BSs and even in human communities. In this paper, based on
large-scale measurement datasets, we build up a correlation model of BSs by
utilizing one of the most important features, i.e., spatial traffic. Coupling
this with the theory of complex networks, we further analyze the structure
and characteristics of this traffic load correlation model. Numerical results
show that the degree distribution is scale-free. The datasets also unveil the
characteristics of fractality and the small-world property. Furthermore,
we apply the collective influence (CI) algorithm to localize the influential base
stations and demonstrate that some low-degree BSs may outrank BSs with larger
degree.
| [
{
"version": "v1",
"created": "Fri, 2 Dec 2016 06:47:54 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Dec 2016 01:26:17 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Dec 2016 09:12:14 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Yuan",
"Chao",
""
],
[
"Zhao",
"Zhifeng",
""
],
[
"Li",
"Rongpeng",
""
],
[
"Li",
"Meng",
""
],
[
"Zhang",
"Honggang",
""
]
] | TITLE: Not Call Me Cellular Any More: The Emergence of Scaling Law, Fractal
Patterns and Small-World in Wireless Networks
ABSTRACT: In conventional cellular networks, for base stations (BSs) that are deployed
far away from each other, it is common to assume that they are mutually
independent. Nevertheless, after long-term evolution of cellular networks in
various generations, this assumption no longer holds. Instead, the BSs, which
seem to be gradually deployed by operators in a service-oriented manner, have
embedded many fundamentally distinctive features in their locations, coverage
and traffic loading. These features can be leveraged to analyze the intrinsic
pattern in BSs and even in human communities. In this paper, based on
large-scale measurement datasets, we build up a correlation model of BSs by
utilizing one of the most important features, i.e., spatial traffic. Coupling
this with the theory of complex networks, we further analyze the structure
and characteristics of this traffic load correlation model. Numerical results
show that the degree distribution is scale-free. The datasets also unveil the
characteristics of fractality and the small-world property. Furthermore,
we apply the collective influence (CI) algorithm to localize the influential base
stations and demonstrate that some low-degree BSs may outrank BSs with larger
degree.
| no_new_dataset | 0.948251 |
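A traffic-load correlation model of the kind described above can be sketched in a few lines: compute pairwise correlations of per-BS traffic series, connect pairs whose correlation exceeds a threshold, and inspect the degree distribution of the resulting graph. The synthetic diurnal traffic and the 0.6 threshold are assumptions for illustration only, not the paper's measurement data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bs, n_hours = 60, 24 * 7
t = np.arange(n_hours)
diurnal = np.sin(2 * np.pi * t / 24)             # shared daily cycle
amp = rng.uniform(0.2, 1.0, (n_bs, 1))           # per-BS traffic amplitude
noise_sd = rng.uniform(0.1, 1.0, (n_bs, 1))      # per-BS noise level
traffic = amp * diurnal + rng.normal(0.0, noise_sd, (n_bs, n_hours))

corr = np.corrcoef(traffic)                      # pairwise load correlations
adj = (corr > 0.6) & ~np.eye(n_bs, dtype=bool)   # threshold -> edges
degree = adj.sum(axis=1)
vals, counts = np.unique(degree, return_counts=True)
print('degree distribution:', dict(zip(vals.tolist(), counts.tolist())))
```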
1612.04853 | Hoel Le Capitaine | Hoel Le Capitaine | Constraint Selection in Metric Learning | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of machine learning algorithms use a metric, or a distance, in
order to compare individuals. The Euclidean distance is usually employed, but
it may be more efficient to learn a parametric distance such as Mahalanobis
metric. Learning such a metric has been a hot topic for more than ten years now,
and a number of methods have been proposed to efficiently learn it. However,
the nature of the problem makes it quite difficult for large scale data, as
well as data for which classes overlap. This paper presents a simple way of
improving accuracy and scalability of any iterative metric learning algorithm,
where constraints are obtained prior to the algorithm. The proposed approach
relies on a loss-dependent weighted selection of constraints that are used for
learning the metric. Using the corresponding dedicated loss function, the
method clearly allows to obtain better results than state-of-the-art methods,
both in terms of accuracy and time complexity. Experimental results on
real-world, and potentially large, datasets demonstrate the effectiveness
of our proposition.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 21:45:14 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Capitaine",
"Hoel Le",
""
]
] | TITLE: Constraint Selection in Metric Learning
ABSTRACT: A number of machine learning algorithms use a metric, or a distance, in
order to compare individuals. The Euclidean distance is usually employed, but
it may be more efficient to learn a parametric distance such as Mahalanobis
metric. Learning such a metric has been a hot topic for more than ten years now,
and a number of methods have been proposed to efficiently learn it. However,
the nature of the problem makes it quite difficult for large scale data, as
well as data for which classes overlap. This paper presents a simple way of
improving accuracy and scalability of any iterative metric learning algorithm,
where constraints are obtained prior to the algorithm. The proposed approach
relies on a loss-dependent weighted selection of constraints that are used for
learning the metric. Using the corresponding dedicated loss function, the
method clearly allows to obtain better results than state-of-the-art methods,
both in terms of accuracy and time complexity. Experimental results on
real-world, and potentially large, datasets demonstrate the effectiveness
of our proposition.
| no_new_dataset | 0.950457 |
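The loss-dependent constraint selection described above can be sketched as follows: sample candidate triplets, weight each by its current hinge loss under the working metric, and keep the highest-weighted ones for the subsequent metric-learning iterations. The margin, sampling budget, and toy data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def mahalanobis_sq(M, x, y):
    d = x - y
    return d @ M @ d

def select_constraints(X, y, M, n_keep=200, margin=1.0, seed=0):
    """Sample candidate triplets (anchor i, same-class j, other-class k),
    weight each by its hinge loss under metric M, keep the top n_keep."""
    rng = np.random.default_rng(seed)
    cands, losses = [], []
    for _ in range(5 * n_keep):
        i = rng.integers(len(X))
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        j, k = rng.choice(same), rng.choice(diff)
        loss = max(0.0, margin + mahalanobis_sq(M, X[i], X[j])
                             - mahalanobis_sq(M, X[i], X[k]))
        cands.append((i, j, k))
        losses.append(loss)
    order = np.argsort(losses)[::-1][:n_keep]    # highest-loss constraints
    return [cands[t] for t in order]

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(1.5, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
triplets = select_constraints(X, y, M=np.eye(5))
print(len(triplets), 'active constraints selected')
```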
1612.04862 | Nadir Murru | Giuseppe Air\`o Farulla, Nadir Murru, Rosaria Rossini | A fuzzy approach for segmentation of touching characters | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of correctly segmenting touching characters is a hard task to
solve and is of major relevance in pattern recognition. In recent years,
many methods and algorithms have been proposed; still, a definitive solution is
far from being found. In this paper, we propose a novel method based on fuzzy
logic. The proposed method combines in a novel way three features for
segmenting touching characters that have already been proposed in other studies
but have been exploited only individually so far. The proposed strategy is based
on a 3-input/1-output fuzzy inference system with fuzzy rules specifically
optimized for segmenting touching characters in the case of Latin printed and
handwritten characters. The system's performance is illustrated and supported
by numerical examples showing that our approach can achieve reasonably good
overall accuracy in segmenting characters even under tricky conditions of
touching characters. Moreover, numerical results suggest that the method can be applied
to many different datasets of characters by means of a convenient tuning of the
fuzzy sets and rules.
| [
{
"version": "v1",
"created": "Thu, 8 Dec 2016 14:44:31 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Farulla",
"Giuseppe Airò",
""
],
[
"Murru",
"Nadir",
""
],
[
"Rossini",
"Rosaria",
""
]
] | TITLE: A fuzzy approach for segmentation of touching characters
ABSTRACT: The problem of correctly segmenting touching characters is a hard task to
solve and is of major relevance in pattern recognition. In recent years,
many methods and algorithms have been proposed; still, a definitive solution is
far from being found. In this paper, we propose a novel method based on fuzzy
logic. The proposed method combines in a novel way three features for
segmenting touching characters that have already been proposed in other studies
but have been exploited only individually so far. The proposed strategy is based
on a 3-input/1-output fuzzy inference system with fuzzy rules specifically
optimized for segmenting touching characters in the case of Latin printed and
handwritten characters. The system's performance is illustrated and supported
by numerical examples showing that our approach can achieve reasonably good
overall accuracy in segmenting characters even under tricky conditions of
touching characters. Moreover, numerical results suggest that the method can be applied
to many different datasets of characters by means of a convenient tuning of the
fuzzy sets and rules.
| no_new_dataset | 0.948632 |
1612.04868 | I\~nigo Lopez-Gazpio | I. Lopez-Gazpio and M. Maritxalar and A. Gonzalez-Agirre and G. Rigau
and L. Uria and E. Agirre | Interpretable Semantic Textual Similarity: Finding and explaining
differences between sentences | Preprint version, Knowledge-Based Systems (ISSN: 0950-7051). (2016) | null | 10.1016/j.knosys.2016.12.013 | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | User acceptance of artificial intelligence agents might depend on their
ability to explain their reasoning, which requires adding an interpretability
layer that facilitates users' understanding of their behavior. This paper focuses
on adding an interpretable layer on top of Semantic Textual Similarity (STS),
which measures the degree of semantic equivalence between two sentences. The
interpretability layer is formalized as the alignment between pairs of segments
across the two sentences, where the relation between the segments is labeled
with a relation type and a similarity score. We present a publicly available
dataset of sentence pairs annotated following the formalization. We then
develop a system trained on this dataset which, given a sentence pair, explains
what is similar and different, in the form of graded and typed segment
alignments. When evaluated on the dataset, the system performs better than an
informed baseline, showing that the dataset and task are well-defined and
feasible. Most importantly, two user studies show how the system output can be
used to automatically produce explanations in natural language. Users performed
better when having access to the explanations, providing preliminary evidence
that our dataset and method to automatically produce explanations is useful in
real applications.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 22:22:33 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Lopez-Gazpio",
"I.",
""
],
[
"Maritxalar",
"M.",
""
],
[
"Gonzalez-Agirre",
"A.",
""
],
[
"Rigau",
"G.",
""
],
[
"Uria",
"L.",
""
],
[
"Agirre",
"E.",
""
]
] | TITLE: Interpretable Semantic Textual Similarity: Finding and explaining
differences between sentences
ABSTRACT: User acceptance of artificial intelligence agents might depend on their
ability to explain their reasoning, which requires adding an interpretability
layer that facilitates users' understanding of their behavior. This paper focuses
on adding an interpretable layer on top of Semantic Textual Similarity (STS),
which measures the degree of semantic equivalence between two sentences. The
interpretability layer is formalized as the alignment between pairs of segments
across the two sentences, where the relation between the segments is labeled
with a relation type and a similarity score. We present a publicly available
dataset of sentence pairs annotated following the formalization. We then
develop a system trained on this dataset which, given a sentence pair, explains
what is similar and different, in the form of graded and typed segment
alignments. When evaluated on the dataset, the system performs better than an
informed baseline, showing that the dataset and task are well-defined and
feasible. Most importantly, two user studies show how the system output can be
used to automatically produce explanations in natural language. Users performed
better when having access to the explanations, providing preliminary evidence
that our dataset and method to automatically produce explanations is useful in
real applications.
| new_dataset | 0.962848 |
1612.04891 | Aaron Lee | Cecilia S. Lee, Doug M. Baughman, Aaron Y. Lee | Deep learning is effective for the classification of OCT images of
normal versus Age-related Macular Degeneration | 4 Figures, 1 Table | null | null | null | stat.ML cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Objective: The advent of Electronic Medical Records (EMR) with large
electronic imaging databases along with advances in deep neural networks with
machine learning has provided a unique opportunity to achieve milestones in
automated image analysis. Optical coherence tomography (OCT) is the most
commonly obtained imaging modality in ophthalmology and represents a dense and
rich dataset when combined with labels derived from the EMR. We sought to
determine if deep learning could be utilized to distinguish normal OCT images
from images from patients with Age-related Macular Degeneration (AMD). Methods:
Automated extraction of an OCT imaging database was performed and linked to
clinical endpoints from the EMR. OCT macula scans were obtained by Heidelberg
Spectralis, and each OCT scan was linked to EMR clinical endpoints extracted
from EPIC. The central 11 images were selected from each OCT scan of two
cohorts of patients: normal and AMD. Cross-validation was performed using a
random subset of patients. Areas under receiver operator curves (auROC) were
constructed at an independent image level, macular OCT level, and patient
level. Results: Of an extraction of 2.6 million OCT images linked to clinical
datapoints from the EMR, 52,690 normal and 48,312 AMD macular OCT images were
selected. A deep neural network was trained to categorize images as either
normal or AMD. At the image level, we achieved an auROC of 92.78% with an
accuracy of 87.63%. At the macula level, we achieved an auROC of 93.83% with an
accuracy of 88.98%. At a patient level, we achieved an auROC of 97.45% with an
accuracy of 93.45%. Peak sensitivity and specificity with optimal cutoffs were
92.64% and 93.69% respectively. Conclusions: Deep learning techniques are
effective for classifying OCT images. These findings have important
implications in utilizing OCT in automated screening and computer aided
diagnosis tools.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 00:23:43 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Lee",
"Cecilia S.",
""
],
[
"Baughman",
"Doug M.",
""
],
[
"Lee",
"Aaron Y.",
""
]
] | TITLE: Deep learning is effective for the classification of OCT images of
normal versus Age-related Macular Degeneration
ABSTRACT: Objective: The advent of Electronic Medical Records (EMR) with large
electronic imaging databases along with advances in deep neural networks with
machine learning has provided a unique opportunity to achieve milestones in
automated image analysis. Optical coherence tomography (OCT) is the most
commonly obtained imaging modality in ophthalmology and represents a dense and
rich dataset when combined with labels derived from the EMR. We sought to
determine if deep learning could be utilized to distinguish normal OCT images
from images from patients with Age-related Macular Degeneration (AMD). Methods:
Automated extraction of an OCT imaging database was performed and linked to
clinical endpoints from the EMR. OCT macula scans were obtained by Heidelberg
Spectralis, and each OCT scan was linked to EMR clinical endpoints extracted
from EPIC. The central 11 images were selected from each OCT scan of two
cohorts of patients: normal and AMD. Cross-validation was performed using a
random subset of patients. Areas under receiver operator curves (auROC) were
constructed at an independent image level, macular OCT level, and patient
level. Results: Of an extraction of 2.6 million OCT images linked to clinical
datapoints from the EMR, 52,690 normal and 48,312 AMD macular OCT images were
selected. A deep neural network was trained to categorize images as either
normal or AMD. At the image level, we achieved an auROC of 92.78% with an
accuracy of 87.63%. At the macula level, we achieved an auROC of 93.83% with an
accuracy of 88.98%. At a patient level, we achieved an auROC of 97.45% with an
accuracy of 93.45%. Peak sensitivity and specificity with optimal cutoffs were
92.64% and 93.69% respectively. Conclusions: Deep learning techniques are
effective for classifying OCT images. These findings have important
implications in utilizing OCT in automated screening and computer aided
diagnosis tools.
| no_new_dataset | 0.956594 |
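The image-, macula-, and patient-level evaluation described above amounts to aggregating per-image probabilities within a group before computing auROC. A minimal sketch with synthetic probabilities follows; the grouping of 11 central images per scan matches the abstract, while the probability model and patient counts are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def level_auroc(image_probs, image_labels, group_ids):
    """Aggregate per-image probabilities by group (macular scan or
    patient) via the mean, then compute auROC at that level."""
    groups = {}
    for p, lab, g in zip(image_probs, image_labels, group_ids):
        groups.setdefault(g, ([], lab))[0].append(p)
    probs = [np.mean(v[0]) for v in groups.values()]
    labels = [v[1] for v in groups.values()]
    return roc_auc_score(labels, probs)

# synthetic stand-in: 40 patients x 11 central images per scan
rng = np.random.default_rng(0)
pat_labels = rng.integers(0, 2, 40)
img_labels = np.repeat(pat_labels, 11)
img_probs = np.clip(img_labels * 0.3 + rng.normal(0.35, 0.25, 40 * 11), 0, 1)
img_ids = np.repeat(np.arange(40), 11)
print('image-level auROC  :', roc_auc_score(img_labels, img_probs))
print('patient-level auROC:', level_auroc(img_probs, img_labels, img_ids))
```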
1612.04902 | Hanna Suominen | Hanna Suominen, Henning M\"uller, Lucila Ohno-Machado, Sanna
Salanter\"a, G\"unter Schreier, Leif Hanlen | Prerequisites for International Exchanges of Health Information:
Comparison of Australian, Austrian, Finnish, Swiss, and US Privacy Policies | null | null | null | null | cs.CY cs.DL | http://creativecommons.org/licenses/by/4.0/ | Capabilities to exchange health information are critical to accelerate
discovery and its diffusion to healthcare practice. However, the same ethical
and legal policies that protect privacy hinder these data exchanges, and the
issues accumulate if moving data across geographical or organizational borders.
This can be seen as one of the reasons why many health technologies and
research findings are limited to very narrow domains. In this paper, we compare
how using and disclosing personal data for research purposes is addressed in
Australian, Austrian, Finnish, Swiss, and US policies with a focus on text data
analytics. Our goal is to identify approaches and issues that enable or hinder
international health information exchanges. As expected, the policies within
each country are not as diverse as across countries. Most policies apply the
principles of accountability and/or adequacy and are thereby fundamentally
similar. The following requirements create complications for re-using and
re-disclosing data and even secondary data: 1) informing data subjects about
the purposes of data collection and use, before the dataset is collected; 2)
assurance that the subjects are no longer identifiable; and 3) destruction of
data when the research activities are finished. Using storage and compute cloud
services as well as other exchange technologies on the Internet without proper
permissions is technically not allowed if the data are stored in another
country. Both legislation and technologies are available as vehicles for
overcoming these barriers. The resulting richness in information variety will
contribute to the development and evaluation of new clinical hypotheses and
technologies.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 01:28:25 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Suominen",
"Hanna",
""
],
[
"Müller",
"Henning",
""
],
[
"Ohno-Machado",
"Lucila",
""
],
[
"Salanterä",
"Sanna",
""
],
[
"Schreier",
"Günter",
""
],
[
"Hanlen",
"Leif",
""
]
] | TITLE: Prerequisites for International Exchanges of Health Information:
Comparison of Australian, Austrian, Finnish, Swiss, and US Privacy Policies
ABSTRACT: Capabilities to exchange health information are critical to accelerate
discovery and its diffusion to healthcare practice. However, the same ethical
and legal policies that protect privacy hinder these data exchanges, and the
issues accumulate if moving data across geographical or organizational borders.
This can be seen as one of the reasons why many health technologies and
research findings are limited to very narrow domains. In this paper, we compare
how using and disclosing personal data for research purposes is addressed in
Australian, Austrian, Finnish, Swiss, and US policies with a focus on text data
analytics. Our goal is to identify approaches and issues that enable or hinder
international health information exchanges. As expected, the policies within
each country are not as diverse as across countries. Most policies apply the
principles of accountability and/or adequacy and are thereby fundamentally
similar. The following requirements create complications for re-using and
re-disclosing data and even secondary data: 1) informing data subjects about
the purposes of data collection and use, before the dataset is collected; 2)
assurance that the subjects are no longer identifiable; and 3) destruction of
data when the research activities are finished. Using storage and compute cloud
services as well as other exchange technologies on the Internet without proper
permissions is technically not allowed if the data are stored in another
country. Both legislation and technologies are available as vehicles for
overcoming these barriers. The resulting richness in information variety will
contribute to the development and evaluation of new clinical hypotheses and
technologies.
| no_new_dataset | 0.936518 |
1612.04910 | Yukiaki Ishida | Y. Ishida, T. Togashi, K. Yamamoto, M. Tanaka, T. Kiss, T. Otsu, Y.
Kobayashi, S. Shin | Time-resolved photoemission apparatus achieving sub-20-meV energy
resolution and high stability | null | Rev. Sci. Instrum. 85, 123904 (2014) | 10.1063/1.4903788 | null | cond-mat.mtrl-sci physics.ins-det | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper describes a time- and angle-resolved photoemission apparatus
consisting of a hemispherical analyzer and a pulsed laser source. We
demonstrate 1.48-eV pump and 5.90-eV probe measurements at the >10.5-meV and
>240-fs resolutions by use of fairly monochromatic 170-fs pulses delivered from
a regeneratively amplified Ti:sapphire laser system operating typically at 250
kHz. The apparatus is capable of resolving the optically filled superconducting
peak in the unoccupied states of a cuprate superconductor, Bi2Sr2CaCu2O8+d. A
dataset recorded on a Bi(111) surface is also presented. Technical descriptions
include the following: a simple procedure to fine-tune the spatio-temporal
overlap of the pump-and-probe beams and their diameters; achieving long-term
stability of the system that enables normalization-free dataset acquisition;
and changing the repetition rate by utilizing an acousto-optic modulator and
a frequency-division circuit.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 02:37:46 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Ishida",
"Y.",
""
],
[
"Togashi",
"T.",
""
],
[
"Yamamoto",
"K.",
""
],
[
"Tanaka",
"M.",
""
],
[
"Kiss",
"T.",
""
],
[
"Otsu",
"T.",
""
],
[
"Kobayashi",
"Y.",
""
],
[
"Shin",
"S.",
""
]
] | TITLE: Time-resolved photoemission apparatus achieving sub-20-meV energy
resolution and high stability
ABSTRACT: The paper describes a time- and angle-resolved photoemission apparatus
consisting of a hemispherical analyzer and a pulsed laser source. We
demonstrate 1.48-eV pump and 5.90-eV probe measurements at the >10.5-meV and
>240-fs resolutions by use of fairly monochromatic 170-fs pulses delivered from
a regeneratively amplified Ti:sapphire laser system operating typically at 250
kHz. The apparatus is capable of resolving the optically filled superconducting
peak in the unoccupied states of a cuprate superconductor, Bi2Sr2CaCu2O8+d. A
dataset recorded on a Bi(111) surface is also presented. Technical descriptions
include the following: a simple procedure to fine-tune the spatio-temporal
overlap of the pump-and-probe beams and their diameters; achieving long-term
stability of the system that enables normalization-free dataset acquisition;
and changing the repetition rate by utilizing an acousto-optic modulator and
a frequency-division circuit.
| no_new_dataset | 0.944125 |
1612.04949 | Yang Yang | Hao Liu, Yang Yang, Fumin Shen, Lixin Duan and Heng Tao Shen | Recurrent Image Captioner: Describing Images with Spatial-Invariant
Transformation and Attention Filtering | null | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Along with the prosperity of recurrent neural networks in modelling sequential
data and the power of attention mechanisms in automatically identifying salient
information, image captioning, a.k.a. image description, has advanced remarkably
in recent years. Nonetheless, most existing paradigms may suffer from
the deficiency of invariance to images with different scaling, rotation, etc.;
and effective integration of standalone attention to form a holistic end-to-end
system. In this paper, we propose a novel image captioning architecture, termed
Recurrent Image Captioner (\textbf{RIC}), which allows visual encoder and
language decoder to coherently cooperate in a recurrent manner. Specifically,
we first equip the CNN-based visual encoder with a differentiable layer to enable
spatially invariant transformation of visual signals. Moreover, we deploy an
attention filter module (differentiable) between encoder and decoder to
dynamically determine salient visual parts. We also employ bidirectional LSTM
to preprocess sentences for generating better textual representations. Besides,
we propose to exploit variational inference to optimize the whole architecture.
Extensive experimental results on three benchmark datasets (i.e., Flickr8k,
Flickr30k and MS COCO) demonstrate the superiority of our proposed architecture
as compared to most of the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 07:19:46 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Liu",
"Hao",
""
],
[
"Yang",
"Yang",
""
],
[
"Shen",
"Fumin",
""
],
[
"Duan",
"Lixin",
""
],
[
"Shen",
"Heng Tao",
""
]
] | TITLE: Recurrent Image Captioner: Describing Images with Spatial-Invariant
Transformation and Attention Filtering
ABSTRACT: Along with the prosperity of recurrent neural networks in modelling sequential
data and the power of attention mechanisms in automatically identifying salient
information, image captioning, a.k.a. image description, has advanced remarkably
in recent years. Nonetheless, most existing paradigms may suffer from
the deficiency of invariance to images with different scaling, rotation, etc.;
and effective integration of standalone attention to form a holistic end-to-end
system. In this paper, we propose a novel image captioning architecture, termed
Recurrent Image Captioner (\textbf{RIC}), which allows visual encoder and
language decoder to coherently cooperate in a recurrent manner. Specifically,
we first equip the CNN-based visual encoder with a differentiable layer to enable
spatially invariant transformation of visual signals. Moreover, we deploy an
attention filter module (differentiable) between encoder and decoder to
dynamically determine salient visual parts. We also employ bidirectional LSTM
to preprocess sentences for generating better textual representations. Besides,
we propose to exploit variational inference to optimize the whole architecture.
Extensive experimental results on three benchmark datasets (i.e., Flickr8k,
Flickr30k and MS COCO) demonstrate the superiority of our proposed architecture
as compared to most of the state-of-the-art methods.
| no_new_dataset | 0.948442 |
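The attention filter described in the abstract above sits between encoder and decoder and reweights visual features by their current relevance to the language state. A minimal sketch of that idea (not the paper's implementation; the dot-product scoring and tensor shapes are assumptions):

```python
import torch

def attention_filter(feat_map, query):
    # feat_map: (hw, d) flattened CNN encoder features
    # query:    (d,) current language-decoder state
    w = torch.softmax(feat_map @ query, dim=0)  # (hw,) saliency weights
    return feat_map * w.unsqueeze(1)            # reweighted, same shape
```

Being plain soft attention over spatial positions, the filter stays differentiable, so encoder and decoder can be trained end to end as the abstract requires.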
1612.04978 | EPTCS | Ladislav Peska (Charles University in Prague, Faculty of Mathematics
and Physics) | Using the Context of User Feedback in Recommender Systems | In Proceedings MEMICS 2016, arXiv:1612.04037 | EPTCS 233, 2016, pp. 1-12 | 10.4204/EPTCS.233.1 | null | cs.IR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our work is generally focused on recommending for small or medium-sized
e-commerce portals, where explicit feedback is absent and thus the usage of
implicit feedback is necessary. Nonetheless, for some implicit feedback
features, the presentation context may be of high importance. In this paper, we
present a model of relevant contextual features affecting user feedback,
propose methods leveraging those features, publish a dataset of real e-commerce
users containing multiple user feedback indicators as well as its context and
finally present results of purchase prediction and recommendation experiments.
Off-line experiments with real users of a Czech travel agency website
corroborated the importance of leveraging presentation context in both purchase
prediction and recommendation tasks.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 08:49:50 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Peska",
"Ladislav",
"",
"Charles University in Prague, Faculty of Mathematics\n and Physics"
]
] | TITLE: Using the Context of User Feedback in Recommender Systems
ABSTRACT: Our work is generally focused on recommending for small or medium-sized
e-commerce portals, where explicit feedback is absent and thus the usage of
implicit feedback is necessary. Nonetheless, for some implicit feedback
features, the presentation context may be of high importance. In this paper, we
present a model of relevant contextual features affecting user feedback,
propose methods leveraging those features, publish a dataset of real e-commerce
users containing multiple user feedback indicators as well as its context and
finally present results of purchase prediction and recommendation experiments.
Off-line experiments with real users of a Czech travel agency website
corroborated the importance of leveraging presentation context in both purchase
prediction and recommendation tasks.
| new_dataset | 0.955569 |
1612.05038 | Adrian Keith Davison | Adrian K. Davison, Cliff Lansley, Choon Ching Ng, Kevin Tan, Moi Hoon
Yap | Objective Micro-Facial Movement Detection Using FACS-Based Regions and
Baseline Evaluation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Micro-facial expressions are regarded as an important human behavioural event
that can highlight emotional deception. Spotting these movements is difficult
for humans and machines; however, research into using computer vision to detect
subtle facial expressions is growing in popularity. This paper proposes an
individualised baseline micro-movement detection method using a 3D Histogram of
Oriented Gradients (3D HOG) temporal difference approach. We define a face
template consisting of 26 regions based on the Facial Action Coding System
(FACS). We extract the temporal features of each region using 3D HOG. Then, we
use Chi-square distance to find subtle facial motion in the local regions.
Finally, an automatic peak detector is used to detect micro-movements above the
newly proposed adaptive baseline threshold. The performance is validated on two
FACS coded datasets: SAMM and CASME II. This objective method focuses on the
movement of the 26 face regions. When comparing with the ground truth, the best
result was an AUC of 0.7512 and 0.7261 on SAMM and CASME II, respectively. The
results show that, for micro-movement detection, 3D HOG outperformed the
state-of-the-art feature representations: Local Binary Patterns in Three
Orthogonal Planes and Histograms of Oriented Optical Flow.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 12:15:36 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Davison",
"Adrian K.",
""
],
[
"Lansley",
"Cliff",
""
],
[
"Ng",
"Choon Ching",
""
],
[
"Tan",
"Kevin",
""
],
[
"Yap",
"Moi Hoon",
""
]
] | TITLE: Objective Micro-Facial Movement Detection Using FACS-Based Regions and
Baseline Evaluation
ABSTRACT: Micro-facial expressions are regarded as an important human behavioural event
that can highlight emotional deception. Spotting these movements is difficult
for humans and machines; however, research into using computer vision to detect
subtle facial expressions is growing in popularity. This paper proposes an
individualised baseline micro-movement detection method using a 3D Histogram of
Oriented Gradients (3D HOG) temporal difference approach. We define a face
template consisting of 26 regions based on the Facial Action Coding System
(FACS). We extract the temporal features of each region using 3D HOG. Then, we
use Chi-square distance to find subtle facial motion in the local regions.
Finally, an automatic peak detector is used to detect micro-movements above the
newly proposed adaptive baseline threshold. The performance is validated on two
FACS coded datasets: SAMM and CASME II. This objective method focuses on the
movement of the 26 face regions. When comparing with the ground truth, the best
result was an AUC of 0.7512 and 0.7261 on SAMM and CASME II, respectively. The
results show that, for micro-movement detection, 3D HOG outperformed the
state-of-the-art feature representations: Local Binary Patterns in Three
Orthogonal Planes and Histograms of Oriented Optical Flow.
| no_new_dataset | 0.948585 |
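The per-region scoring pipeline above (3D HOG features, Chi-square distance between consecutive frames, adaptive baseline threshold, peak picking) can be sketched compactly; the threshold form mean + k*std is an assumption, not the paper's exact rule:

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histogram descriptors."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def micro_movement_peaks(region_hogs, k=3.0):
    """Flag frame transitions of one face region whose feature change
    exceeds an adaptive per-sequence baseline.

    region_hogs: (T, D) per-frame 3D HOG descriptors for one of the
    26 FACS-based regions; k scales the assumed baseline threshold.
    """
    dists = np.array([chi_square_distance(region_hogs[t], region_hogs[t + 1])
                      for t in range(len(region_hogs) - 1)])
    baseline = dists.mean() + k * dists.std()
    return np.where(dists > baseline)[0]

# toy usage: 50 frames, 64-dim descriptors, a subtle injected "movement"
rng = np.random.default_rng(0)
feats = rng.random((50, 64))
feats[20:23] += 2.0
print(micro_movement_peaks(feats))
```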
1612.05153 | Rainer Kelz | Rainer Kelz, Matthias Dorfer, Filip Korzeniowski, Sebastian B\"ock,
Andreas Arzt, Gerhard Widmer | On the Potential of Simple Framewise Approaches to Piano Transcription | Proceedings of the 17th International Society for Music Information
Retrieval Conference (ISMIR 2016), New York, NY | null | null | null | cs.SD cs.LG | http://creativecommons.org/licenses/by/4.0/ | In an attempt at exploring the limitations of simple approaches to the task
of piano transcription (as usually defined in MIR), we conduct an in-depth
analysis of neural network-based framewise transcription. We systematically
compare different popular input representations for transcription systems to
determine the ones most suitable for use with neural networks. Exploiting
recent advances in training techniques and new regularizers, and taking into
account hyper-parameter tuning, we show that it is possible, by simple
bottom-up frame-wise processing, to obtain a piano transcriber that outperforms
the current published state of the art on the publicly available MAPS dataset
-- without any complex post-processing steps. Thus, we propose this simple
approach as a new baseline for this dataset, for future transcription research
to build on and improve.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 17:32:11 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Kelz",
"Rainer",
""
],
[
"Dorfer",
"Matthias",
""
],
[
"Korzeniowski",
"Filip",
""
],
[
"Böck",
"Sebastian",
""
],
[
"Arzt",
"Andreas",
""
],
[
"Widmer",
"Gerhard",
""
]
] | TITLE: On the Potential of Simple Framewise Approaches to Piano Transcription
ABSTRACT: In an attempt at exploring the limitations of simple approaches to the task
of piano transcription (as usually defined in MIR), we conduct an in-depth
analysis of neural network-based framewise transcription. We systematically
compare different popular input representations for transcription systems to
determine the ones most suitable for use with neural networks. Exploiting
recent advances in training techniques and new regularizers, and taking into
account hyper-parameter tuning, we show that it is possible, by simple
bottom-up frame-wise processing, to obtain a piano transcriber that outperforms
the current published state of the art on the publicly available MAPS dataset
-- without any complex post-processing steps. Thus, we propose this simple
approach as a new baseline for this dataset, for future transcription research
to build on and improve.
| new_dataset | 0.958963 |
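Framewise transcription as the abstract defines it reduces to an 88-way multilabel classification per spectrogram frame. A minimal sketch of such a baseline (the layer sizes and the 229-bin input are assumptions, not the paper's configuration):

```python
import torch.nn as nn

# one logit per piano key, per frame; notes are independent on/off targets
frame_net = nn.Sequential(
    nn.Linear(229, 512), nn.ReLU(),   # 229 = assumed filterbank bins
    nn.Linear(512, 88),
)
loss_fn = nn.BCEWithLogitsLoss()
```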
1612.05236 | Shripad Gade | Shripad Gade and Nitin H. Vaidya | Private Learning on Networks | null | null | null | null | cs.DC cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Continual data collection and widespread deployment of machine learning
algorithms, particularly the distributed variants, have raised new privacy
challenges. In a distributed machine learning scenario, the dataset is stored
among several machines and they solve a distributed optimization problem to
collectively learn the underlying model. We present a secure multi-party
computation inspired privacy preserving distributed algorithm for optimizing a
convex function consisting of several possibly non-convex functions. Each
individual objective function is privately stored with an agent while the
agents communicate model parameters with neighbor machines connected in a
network. We show that our algorithm can correctly optimize the overall
objective function and learn the underlying model accurately. We further prove
that under a vertex connectivity condition on the topology, our algorithm
preserves privacy of individual objective functions. We establish limits on
what a coalition of adversaries can learn by observing the messages and states
shared over a network.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 20:44:50 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Gade",
"Shripad",
""
],
[
"Vaidya",
"Nitin H.",
""
]
] | TITLE: Private Learning on Networks
ABSTRACT: Continual data collection and widespread deployment of machine learning
algorithms, particularly the distributed variants, have raised new privacy
challenges. In a distributed machine learning scenario, the dataset is stored
among several machines and they solve a distributed optimization problem to
collectively learn the underlying model. We present a secure multi-party
computation inspired privacy preserving distributed algorithm for optimizing a
convex function consisting of several possibly non-convex functions. Each
individual objective function is privately stored with an agent while the
agents communicate model parameters with neighbor machines connected in a
network. We show that our algorithm can correctly optimize the overall
objective function and learn the underlying model accurately. We further prove
that under a vertex connectivity condition on the topology, our algorithm
preserves privacy of individual objective functions. We establish limits on
what a coalition of adversaries can learn by observing the messages and states
shared over a network.
| no_new_dataset | 0.940735 |
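A toy flavor of the masked-consensus idea behind the abstract above: agents add zero-sum masks before sharing states, so observers see perturbed values while the network-wide average, and hence the quantity the consensus dynamics preserve, is unchanged. This is one common secure-computation-inspired construction; the paper's actual scheme and privacy guarantees differ.

```python
import numpy as np

def private_consensus_step(x, W, grads, step, rng):
    """One round of masked consensus plus local gradient descent.

    x: (n, d) agent states; W: (n, n) doubly stochastic mixing matrix;
    grads: (n, d) gradients of each agent's private objective at x[i].
    Because the masks sum to zero and W is doubly stochastic, the
    average state after mixing equals the average of the true states.
    """
    noise = rng.standard_normal(x.shape)
    noise -= noise.mean(axis=0, keepdims=True)  # zero-sum masks
    return W @ (x + noise) - step * grads
```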
1612.05251 | Franck Dernoncourt | Franck Dernoncourt, Ji Young Lee, Peter Szolovits | Neural Networks for Joint Sentence Classification in Medical Paper
Abstracts | null | null | null | null | cs.CL cs.AI cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing models based on artificial neural networks (ANNs) for sentence
classification often do not incorporate the context in which sentences appear,
and classify sentences individually. However, traditional sentence
classification approaches have been shown to greatly benefit from jointly
classifying subsequent sentences, such as with conditional random fields. In
this work, we present an ANN architecture that combines the effectiveness of
typical ANN models to classify sentences in isolation, with the strength of
structured prediction. Our model achieves state-of-the-art results on two
different datasets for sequential sentence classification in medical abstracts.
| [
{
"version": "v1",
"created": "Thu, 15 Dec 2016 20:57:56 GMT"
}
] | 2016-12-16T00:00:00 | [
[
"Dernoncourt",
"Franck",
""
],
[
"Lee",
"Ji Young",
""
],
[
"Szolovits",
"Peter",
""
]
] | TITLE: Neural Networks for Joint Sentence Classification in Medical Paper
Abstracts
ABSTRACT: Existing models based on artificial neural networks (ANNs) for sentence
classification often do not incorporate the context in which sentences appear,
and classify sentences individually. However, traditional sentence
classification approaches have been shown to greatly benefit from jointly
classifying subsequent sentences, such as with conditional random fields. In
this work, we present an ANN architecture that combines the effectiveness of
typical ANN models to classify sentences in isolation, with the strength of
structured prediction. Our model achieves state-of-the-art results on two
different datasets for sequential sentence classification in medical abstracts.
| no_new_dataset | 0.954732 |
1509.01329 | Piotr Doll\'ar | Yan Zhu and Yuandong Tian and Dimitris Mexatas and Piotr Doll\'ar | Semantic Amodal Segmentation | major update including new COCO data, metrics, and baselines | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Common visual recognition tasks such as classification, object detection, and
semantic segmentation are rapidly reaching maturity, and given the recent rate
of progress, it is not unreasonable to conjecture that techniques for many of
these problems will approach human levels of performance in the next few years.
In this paper we look to the future: what is the next frontier in visual
recognition?
We offer one possible answer to this question. We propose a detailed image
annotation that captures information beyond the visible pixels and requires
complex reasoning about full scene structure. Specifically, we create an amodal
segmentation of each image: the full extent of each region is marked, not just
the visible pixels. Annotators outline and name all salient regions in the
image and specify a partial depth order. The result is a rich scene structure,
including visible and occluded portions of each region, figure-ground edge
information, semantic labels, and object overlap.
We create two datasets for semantic amodal segmentation. First, we label 500
images in the BSDS dataset with multiple annotators per image, allowing us to
study the statistics of human annotations. We show that the proposed full scene
annotation is surprisingly consistent between annotators, including for regions
and edges. Second, we annotate 5000 images from COCO. This larger dataset
allows us to explore a number of algorithmic ideas for amodal segmentation and
depth ordering. We introduce novel metrics for these tasks, and along with our
strong baselines, define concrete new challenges for the community.
| [
{
"version": "v1",
"created": "Fri, 4 Sep 2015 02:20:13 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Dec 2016 19:49:24 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Zhu",
"Yan",
""
],
[
"Tian",
"Yuandong",
""
],
[
"Mexatas",
"Dimitris",
""
],
[
"Dollár",
"Piotr",
""
]
] | TITLE: Semantic Amodal Segmentation
ABSTRACT: Common visual recognition tasks such as classification, object detection, and
semantic segmentation are rapidly reaching maturity, and given the recent rate
of progress, it is not unreasonable to conjecture that techniques for many of
these problems will approach human levels of performance in the next few years.
In this paper we look to the future: what is the next frontier in visual
recognition?
We offer one possible answer to this question. We propose a detailed image
annotation that captures information beyond the visible pixels and requires
complex reasoning about full scene structure. Specifically, we create an amodal
segmentation of each image: the full extent of each region is marked, not just
the visible pixels. Annotators outline and name all salient regions in the
image and specify a partial depth order. The result is a rich scene structure,
including visible and occluded portions of each region, figure-ground edge
information, semantic labels, and object overlap.
We create two datasets for semantic amodal segmentation. First, we label 500
images in the BSDS dataset with multiple annotators per image, allowing us to
study the statistics of human annotations. We show that the proposed full scene
annotation is surprisingly consistent between annotators, including for regions
and edges. Second, we annotate 5000 images from COCO. This larger dataset
allows us to explore a number of algorithmic ideas for amodal segmentation and
depth ordering. We introduce novel metrics for these tasks, and along with our
strong baselines, define concrete new challenges for the community.
| new_dataset | 0.970688 |
1510.07146 | Lorenzo Livi | Enrico Maiorino, Filippo Maria Bianchi, Lorenzo Livi, Antonello Rizzi,
Alireza Sadeghian | Data-driven detrending of nonstationary fractal time series with echo
state networks | Revised version | null | 10.1016/j.ins.2016.12.015 | null | physics.data-an cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel data-driven approach for removing trends
(detrending) from nonstationary, fractal and multifractal time series. We
consider real-valued time series relative to measurements of an underlying
dynamical system that evolves through time. We assume that such a dynamical
process is predictable to a certain degree by means of a class of recurrent
networks called Echo State Network (ESN), which are capable to model a generic
dynamical process. In order to isolate the superimposed (multi)fractal
component of interest, we define a data-driven filter by leveraging on the ESN
prediction capability to identify the trend component of a given input time
series. Specifically, the (estimated) trend is removed from the original time
series and the residual signal is analyzed with the multifractal detrended
fluctuation analysis procedure to verify the correctness of the detrending
procedure. In order to demonstrate the effectiveness of the proposed technique,
we consider several synthetic time series consisting of different types of
trends and fractal noise components with known characteristics. We also process
a real-world dataset, the sunspot time series, which is well-known for its
multifractal features and has recently gained attention in the complex systems
field. Results demonstrate the validity and generality of the proposed
detrending method based on ESNs.
| [
{
"version": "v1",
"created": "Sat, 24 Oct 2015 13:38:13 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Oct 2016 18:19:30 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Maiorino",
"Enrico",
""
],
[
"Bianchi",
"Filippo Maria",
""
],
[
"Livi",
"Lorenzo",
""
],
[
"Rizzi",
"Antonello",
""
],
[
"Sadeghian",
"Alireza",
""
]
] | TITLE: Data-driven detrending of nonstationary fractal time series with echo
state networks
ABSTRACT: In this paper, we propose a novel data-driven approach for removing trends
(detrending) from nonstationary, fractal and multifractal time series. We
consider real-valued time series relative to measurements of an underlying
dynamical system that evolves through time. We assume that such a dynamical
process is predictable to a certain degree by means of a class of recurrent
networks called Echo State Networks (ESNs), which are capable of modeling a generic
dynamical process. In order to isolate the superimposed (multi)fractal
component of interest, we define a data-driven filter by leveraging on the ESN
prediction capability to identify the trend component of a given input time
series. Specifically, the (estimated) trend is removed from the original time
series and the residual signal is analyzed with the multifractal detrended
fluctuation analysis procedure to verify the correctness of the detrending
procedure. In order to demonstrate the effectiveness of the proposed technique,
we consider several synthetic time series consisting of different types of
trends and fractal noise components with known characteristics. We also process
a real-world dataset, the sunspot time series, which is well-known for its
multifractal features and has recently gained attention in the complex systems
field. Results demonstrate the validity and generality of the proposed
detrending method based on ESNs.
| no_new_dataset | 0.930774 |
1610.07675 | Kamil Rocki | Kamil Rocki, Tomasz Kornuta, Tegan Maharaj | Surprisal-Driven Zoneout | Published at the Continual Learning and Deep Networks Workshop; NIPS
2016 | null | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel method of regularization for recurrent neural networks
called surprisal-driven zoneout. In this method, states zone out (maintain their
previous value rather than updating) when the surprisal (discrepancy between
the last state's prediction and target) is small. Thus regularization is
adaptive and input-driven on a per-neuron basis. We demonstrate the
effectiveness of this idea by achieving state-of-the-art bits per character of
1.31 on the Hutter Prize Wikipedia dataset, significantly reducing the gap to
the best known highly-engineered compression methods.
| [
{
"version": "v1",
"created": "Mon, 24 Oct 2016 22:38:52 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Oct 2016 19:55:16 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Oct 2016 15:18:11 GMT"
},
{
"version": "v4",
"created": "Thu, 3 Nov 2016 17:09:23 GMT"
},
{
"version": "v5",
"created": "Thu, 24 Nov 2016 06:40:26 GMT"
},
{
"version": "v6",
"created": "Tue, 13 Dec 2016 23:32:24 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Rocki",
"Kamil",
""
],
[
"Kornuta",
"Tomasz",
""
],
[
"Maharaj",
"Tegan",
""
]
] | TITLE: Surprisal-Driven Zoneout
ABSTRACT: We propose a novel method of regularization for recurrent neural networks
called surprisal-driven zoneout. In this method, states zone out (maintain their
previous value rather than updating) when the surprisal (discrepancy between
the last state's prediction and target) is small. Thus regularization is
adaptive and input-driven on a per-neuron basis. We demonstrate the
effectiveness of this idea by achieving state-of-the-art bits per character of
1.31 on the Hutter Prize Wikipedia dataset, significantly reducing the gap to
the best known highly-engineered compression methods.
| no_new_dataset | 0.949809 |
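The mechanism itself is a state-copy gate driven by the model's own surprisal. A simplified sketch (the paper applies the gating per neuron; this version masks whole samples, and the threshold tau is an assumption):

```python
import torch

def surprisal_zoneout(h_prev, h_new, surprisal, tau=0.5):
    # h_prev, h_new: (batch, hidden) previous / candidate hidden states
    # surprisal:     (batch,) e.g. negative log-prob of the last target
    keep_old = (surprisal < tau).float().unsqueeze(1)  # (batch, 1) mask
    return keep_old * h_prev + (1.0 - keep_old) * h_new
```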
1611.00196 | Wei Li | Wei Li, Brian Kan Wing Mak | Recurrent Neural Network Language Model Adaptation Derived Document
Vector | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many natural language processing (NLP) tasks, a document is commonly
modeled as a bag of words using the term frequency-inverse document frequency
(TF-IDF) vector. One major shortcoming of the frequency-based TF-IDF feature
vector is that it ignores word orders that carry syntactic and semantic
relationships among the words in a document, and they can be important in some
NLP tasks such as genre classification. This paper proposes a novel distributed
vector representation of a document: a simple recurrent-neural-network language
model (RNN-LM) or a long short-term memory RNN language model (LSTM-LM) is
first created from all documents in a task; some of the LM parameters are then
adapted to each document, and the adapted parameters are vectorized to
represent the document. The new document vectors are labeled as DV-RNN and
DV-LSTM respectively. We believe that our new document vectors can capture some
high-level sequential information in the documents, which other current
document representations fail to capture. The new document vectors were
evaluated in the genre classification of documents in three corpora: the Brown
Corpus, the BNC Baby Corpus and an artificially created Penn Treebank dataset.
Their classification performances are compared with the performance of TF-IDF
vector and the state-of-the-art distributed memory model of paragraph vector
(PV-DM). The results show that DV-LSTM significantly outperforms TF-IDF and
PV-DM in most cases, and combinations of the proposed document vectors with
TF-IDF or PV-DM may further improve performance.
| [
{
"version": "v1",
"created": "Tue, 1 Nov 2016 12:14:02 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Li",
"Wei",
""
],
[
"Mak",
"Brian Kan Wing",
""
]
] | TITLE: Recurrent Neural Network Language Model Adaptation Derived Document
Vector
ABSTRACT: In many natural language processing (NLP) tasks, a document is commonly
modeled as a bag of words using the term frequency-inverse document frequency
(TF-IDF) vector. One major shortcoming of the frequency-based TF-IDF feature
vector is that it ignores word orders that carry syntactic and semantic
relationships among the words in a document, and they can be important in some
NLP tasks such as genre classification. This paper proposes a novel distributed
vector representation of a document: a simple recurrent-neural-network language
model (RNN-LM) or a long short-term memory RNN language model (LSTM-LM) is
first created from all documents in a task; some of the LM parameters are then
adapted to each document, and the adapted parameters are vectorized to
represent the document. The new document vectors are labeled as DV-RNN and
DV-LSTM respectively. We believe that our new document vectors can capture some
high-level sequential information in the documents, which other current
document representations fail to capture. The new document vectors were
evaluated in the genre classification of documents in three corpora: the Brown
Corpus, the BNC Baby Corpus and an artificially created Penn Treebank dataset.
Their classification performances are compared with the performance of TF-IDF
vector and the state-of-the-art distributed memory model of paragraph vector
(PV-DM). The results show that DV-LSTM significantly outperforms TF-IDF and
PV-DM in most cases, and combinations of the proposed document vectors with
TF-IDF or PV-DM may further improve performance.
| new_dataset | 0.962813 |
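A toy analogue of the DV-RNN/DV-LSTM recipe, shrunk to a unigram softmax so it stays self-contained: adapt a few language-model parameters to one document and use the parameter shift as the document vector. The real method adapts RNN/LSTM parameters instead; everything below is illustrative.

```python
import numpy as np

def doc_vector_by_adaptation(doc_ids, vocab, base_bias, steps=20, lr=0.5):
    b = base_bias.copy()
    counts = np.bincount(doc_ids, minlength=vocab).astype(float)
    for _ in range(steps):
        p = np.exp(b - b.max()); p /= p.sum()
        b += lr * (counts / counts.sum() - p)  # grad of avg log-likelihood
    return b - base_bias                       # the document vector

v = doc_vector_by_adaptation(np.array([1, 1, 2, 5]), vocab=8,
                             base_bias=np.zeros(8))
```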
1612.04426 | Edouard Grave | Edouard Grave, Armand Joulin, Nicolas Usunier | Improving Neural Language Models with a Continuous Cache | Submitted to ICLR 2017 | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an extension to neural network language models to adapt their
prediction to the recent history. Our model is a simplified version of memory
augmented networks, which stores past hidden activations as memory and accesses
them through a dot product with the current hidden activation. This mechanism
is very efficient and scales to very large memory sizes. We also draw a link
between the use of external memory in neural networks and cache models used with
count-based language models. We demonstrate on several language model datasets
that our approach performs significantly better than recent memory augmented
networks.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 23:09:49 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Grave",
"Edouard",
""
],
[
"Joulin",
"Armand",
""
],
[
"Usunier",
"Nicolas",
""
]
] | TITLE: Improving Neural Language Models with a Continuous Cache
ABSTRACT: We propose an extension to neural network language models to adapt their
prediction to the recent history. Our model is a simplified version of memory
augmented networks, which stores past hidden activations as memory and accesses
them through a dot product with the current hidden activation. This mechanism
is very efficient and scales to very large memory sizes. We also draw a link
between the use of external memory in neural networks and cache models used with
count-based language models. We demonstrate on several language model datasets
that our approach performs significantly better than recent memory augmented
networks.
| no_new_dataset | 0.945851 |
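The cache distribution the abstract describes is easy to state: score each memory slot by a dot product with the current hidden state, then accumulate the softmaxed scores onto the words that followed those states. A sketch under assumed shapes (theta and the mixing weight lam are the usual cache hyper-parameters):

```python
import torch

def cache_probs(h_t, cache_h, cache_w, vocab_size, theta=1.0):
    # h_t: (d,); cache_h: (m, d) past hidden states;
    # cache_w: (m,) LongTensor of the words that followed them
    scores = torch.softmax(theta * (cache_h @ h_t), dim=0)  # (m,)
    p = torch.zeros(vocab_size)
    p.index_add_(0, cache_w, scores)  # sum slot scores per word
    return p

# the final prediction is typically a mixture:
# p = (1 - lam) * p_model + lam * cache_probs(h_t, H, W_words, V)
```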
1612.04520 | Zhichen Zhao | Zhichen Zhao and Huimin Ma and Shaodi You | Single Image Action Recognition using Semantic Body Part Actions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel single image action recognition algorithm
which is based on the idea of semantic body part actions. Unlike existing
bottom up methods, we argue that the human action is a combination of
meaningful body part actions. In detail, we divide human body into five parts:
head, torso, arms, hands and legs. And for each of the body parts, we define
several semantic body part actions, e.g., hand holding, hand waving. These
semantic body part actions are strongly related to the body actions, e.g.,
writing, and jogging. Based on the idea, we propose a deep neural network based
system: first, body parts are localized by a Semi-FCN network. Second, for each
body parts, a Part Action Res-Net is used to predict semantic body part
actions. And finally, we use SVM to fuse the body part actions and predict the
entire body action. Experiments on two datasets, PASCAL VOC 2012 and Stanford-40,
report mAP improvements over the state of the art of 3.8% and 2.6%, respectively.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 07:54:55 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Zhao",
"Zhichen",
""
],
[
"Ma",
"Huimin",
""
],
[
"You",
"Shaodi",
""
]
] | TITLE: Single Image Action Recognition using Semantic Body Part Actions
ABSTRACT: In this paper, we propose a novel single image action recognition algorithm
which is based on the idea of semantic body part actions. Unlike existing
bottom-up methods, we argue that a human action is a combination of
meaningful body part actions. In detail, we divide the human body into five parts:
head, torso, arms, hands and legs. And for each of the body parts, we define
several semantic body part actions, e.g., hand holding, hand waving. These
semantic body part actions are strongly related to the body actions, e.g.,
writing, and jogging. Based on the idea, we propose a deep neural network based
system: first, body parts are localized by a Semi-FCN network. Second, for each
body parts, a Part Action Res-Net is used to predict semantic body part
actions. And finally, we use SVM to fuse the body part actions and predict the
entire body action. Experiments on two datasets, PASCAL VOC 2012 and Stanford-40,
report mAP improvements over the state of the art of 3.8% and 2.6%, respectively.
| no_new_dataset | 0.949295 |
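The final fusion stage described above is a plain SVM over concatenated part-action predictions. A toy sketch with made-up part-action vocabularies (the real per-part probabilities come from the Part Action Res-Nets):

```python
import numpy as np
from sklearn.svm import SVC

part_dims = [4, 4, 5, 6, 5]        # head, torso, arms, hands, legs (assumed sizes)

def fuse(part_probs):              # concatenate per-part probabilities
    return np.concatenate(part_probs)

rng = np.random.default_rng(0)
X = np.stack([fuse([rng.dirichlet(np.ones(d)) for d in part_dims])
              for _ in range(200)])
y = rng.integers(0, 10, size=200)  # 10 whole-body actions (toy labels)
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X[:3]))
```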
1612.04580 | M\'arton Karsai | Yannick Leo, Eric Fleury, J. Ignacio Alvarez-Hamelin, Carlos Sarraute,
M\'arton Karsai | Socioeconomic correlations and stratification in social-communication
networks | 19 pages, 6 figures | J. Roy. Soc. Interface, 13 125 (2016) | 10.1098/rsif.2016.0598 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The uneven distribution of wealth and individual economic capacities are
among the main forces which shape modern societies and arguably bias the
emerging social structures. However, the study of correlations between the
social network and economic status of individuals is difficult due to the lack
of large-scale multimodal data disclosing both the social ties and economic
indicators of the same population. Here, we close this gap through the analysis
of coupled datasets recording the mobile phone communications and bank
transaction history of one million anonymised individuals living in a Latin
American country. We show that wealth and debt are unevenly distributed among
people in agreement with the Pareto principle; the observed social structure is
strongly stratified, with people being better connected to others of their own
socioeconomic class rather than to others of different classes; the social
network appears with assortative socioeconomic correlations and tightly
connected "rich clubs"; and that egos from the same class live closer to each
other but commute further if they are wealthier. These results are based on a
representative, society-large population, and empirically demonstrate some
long-lasting hypotheses on socioeconomic correlations which potentially lay
behind social segregation, and induce differences in human mobility.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 11:19:01 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Leo",
"Yannick",
""
],
[
"Fleury",
"Eric",
""
],
[
"Alvarez-Hamelin",
"J. Ignacio",
""
],
[
"Sarraute",
"Carlos",
""
],
[
"Karsai",
"Márton",
""
]
] | TITLE: Socioeconomic correlations and stratification in social-communication
networks
ABSTRACT: The uneven distribution of wealth and individual economic capacities are
among the main forces which shape modern societies and arguably bias the
emerging social structures. However, the study of correlations between the
social network and economic status of individuals is difficult due to the lack
of large-scale multimodal data disclosing both the social ties and economic
indicators of the same population. Here, we close this gap through the analysis
of coupled datasets recording the mobile phone communications and bank
transaction history of one million anonymised individuals living in a Latin
American country. We show that wealth and debt are unevenly distributed among
people in agreement with the Pareto principle; the observed social structure is
strongly stratified, with people being better connected to others of their own
socioeconomic class rather than to others of different classes; the social
network appears with assortative socioeconomic correlations and tightly
connected "rich clubs"; and that egos from the same class live closer to each
other but commute further if they are wealthier. These results are based on a
representative, society-large population, and empirically demonstrate some
long-lasting hypotheses on socioeconomic correlations which potentially lay
behind social segregation, and induce differences in human mobility.
| no_new_dataset | 0.929568 |
1612.04609 | Ruobing Xie | Ruobing Xie, Zhiyuan Liu, Rui Yan, Maosong Sun | Neural Emoji Recommendation in Dialogue Systems | 7 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emoji is an essential component in dialogues which has been broadly utilized
on almost all social platforms. It could express more delicate feelings beyond
plain texts and thus smooth the communications between users, making dialogue
systems more anthropomorphic and vivid. In this paper, we focus on
automatically recommending appropriate emojis given the contextual information
in multi-turn dialogue systems, where the main challenge lies in understanding
whole conversations. More specifically, we propose a hierarchical long
short-term memory model (H-LSTM) to construct dialogue representations,
followed by a softmax classifier for emoji classification. We evaluate our
models on the task of emoji classification in a real-world dataset, with some
further explorations on parameter sensitivity and case study. Experimental
results demonstrate that our method achieves the best performance on all
evaluation metrics. It indicates that our method could well capture the
contextual information and emotion flow in dialogues, which is significant for
emoji recommendation.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 12:46:18 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Xie",
"Ruobing",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Yan",
"Rui",
""
],
[
"Sun",
"Maosong",
""
]
] | TITLE: Neural Emoji Recommendation in Dialogue Systems
ABSTRACT: Emoji is an essential component in dialogues which has been broadly utilized
on almost all social platforms. It could express more delicate feelings beyond
plain texts and thus smooth the communications between users, making dialogue
systems more anthropomorphic and vivid. In this paper, we focus on
automatically recommending appropriate emojis given the contextual information
in multi-turn dialogue systems, where the main challenge lies in understanding
whole conversations. More specifically, we propose a hierarchical long
short-term memory model (H-LSTM) to construct dialogue representations,
followed by a softmax classifier for emoji classification. We evaluate our
models on the task of emoji classification in a real-world dataset, with some
further explorations on parameter sensitivity and case study. Experimental
results demonstrate that our method achieves the best performance on all
evaluation metrics. It indicates that our method could well capture the
contextual information and emotion flow in dialogues, which is significant for
emoji recommendation.
| no_new_dataset | 0.945349 |
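The H-LSTM above is the standard hierarchical encoder: a word-level LSTM per utterance, an utterance-level LSTM over the dialogue, then a softmax classifier. A minimal sketch with assumed sizes:

```python
import torch.nn as nn

class HLSTMEmoji(nn.Module):
    def __init__(self, vocab, emb=64, hid=128, n_emojis=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.word_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.utt_lstm = nn.LSTM(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, n_emojis)

    def forward(self, dialogue):            # (turns, words) token ids
        _, (h_w, _) = self.word_lstm(self.embed(dialogue))
        utt_vecs = h_w[-1].unsqueeze(0)     # (1, turns, hid)
        _, (h_u, _) = self.utt_lstm(utt_vecs)
        return self.out(h_u[-1])            # (1, n_emojis) logits
```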
1612.04770 | Spyros Gidaris | Spyros Gidaris, Nikos Komodakis | Detect, Replace, Refine: Deep Structured Prediction For Pixel Wise
Labeling | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pixel wise image labeling is an interesting and challenging problem with
great significance in the computer vision community. In order for a dense
labeling algorithm to be able to achieve accurate and precise results, it has
to consider the dependencies that exist in the joint space of both the input
and the output variables. An implicit approach for modeling those dependencies
is by training a deep neural network that, given as input an initial estimate
of the output labels and the input image, predicts a new
refined estimate for the labels. In this context, our work is concerned with
what the optimal architecture is for performing the label improvement task. We
argue that the prior approaches of either directly predicting new label
estimates or predicting residual corrections w.r.t. the initial labels with
feed-forward deep network architectures are sub-optimal. Instead, we propose a
generic architecture that decomposes the label improvement task to three steps:
1) detecting the initial label estimates that are incorrect, 2) replacing the
incorrect labels with new ones, and finally 3) refining the renewed labels by
predicting residual corrections w.r.t. them. Furthermore, we explore and
compare various other alternative architectures that consist of the
aforementioned Detection, Replace, and Refine components. We extensively
evaluate the examined architectures in the challenging task of dense disparity
estimation (stereo matching) and we report both quantitative and qualitative
results on three different datasets. Finally, our dense disparity estimation
network that implements the proposed generic architecture, achieves
state-of-the-art results in the KITTI 2015 test surpassing prior approaches by
a significant margin.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 18:54:33 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Gidaris",
"Spyros",
""
],
[
"Komodakis",
"Nikos",
""
]
] | TITLE: Detect, Replace, Refine: Deep Structured Prediction For Pixel Wise
Labeling
ABSTRACT: Pixel wise image labeling is an interesting and challenging problem with
great significance in the computer vision community. In order for a dense
labeling algorithm to be able to achieve accurate and precise results, it has
to consider the dependencies that exist in the joint space of both the input
and the output variables. An implicit approach for modeling those dependencies
is by training a deep neural network that, given as input an initial estimate
of the output labels and the input image, predicts a new
refined estimate for the labels. In this context, our work is concerned with
what the optimal architecture is for performing the label improvement task. We
argue that the prior approaches of either directly predicting new label
estimates or predicting residual corrections w.r.t. the initial labels with
feed-forward deep network architectures are sub-optimal. Instead, we propose a
generic architecture that decomposes the label improvement task to three steps:
1) detecting the initial label estimates that are incorrect, 2) replacing the
incorrect labels with new ones, and finally 3) refining the renewed labels by
predicting residual corrections w.r.t. them. Furthermore, we explore and
compare various other alternative architectures that consist of the
aforementioned Detection, Replace, and Refine components. We extensively
evaluate the examined architectures in the challenging task of dense disparity
estimation (stereo matching) and we report both quantitative and qualitative
results on three different datasets. Finally, our dense disparity estimation
network that implements the proposed generic architecture, achieves
state-of-the-art results in the KITTI 2015 test surpassing prior approaches by
a significant margin.
| no_new_dataset | 0.950088 |
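The three-step decomposition reads naturally as a functional pipeline; a sketch with the three components left abstract (their architectures are the paper's contribution and are not reproduced here):

```python
def detect_replace_refine(x, y0, detect, replace, refine):
    # detect(x, y)     -> per-pixel error mask e in [0, 1]
    # replace(x, y, e) -> new labels for the erroneous pixels
    # refine(x, y)     -> residual correction added to the labels
    e = detect(x, y0)
    y1 = (1 - e) * y0 + e * replace(x, y0, e)  # swap detected errors only
    return y1 + refine(x, y1)                  # residual refinement
```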
1612.04774 | Xu Xu | Xu Xu, Sinisa Todorovic | Beam Search for Learning a Deep Convolutional Neural Network of 3D
Shapes | ICPR 2016 | null | null | null | cs.CV cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses 3D shape recognition. Recent work typically represents a
3D shape as a set of binary variables corresponding to 3D voxels of a uniform
3D grid centered on the shape, and resorts to deep convolutional neural
networks(CNNs) for modeling these binary variables. Robust learning of such
CNNs is currently limited by the small datasets of 3D shapes available, an
order of magnitude smaller than other common datasets in computer vision.
Related work typically deals with the small training datasets using a number of
ad hoc, hand-tuning strategies. To address this issue, we formulate CNN
learning as a beam search aimed at identifying an optimal CNN architecture,
namely, the number of layers, nodes, and their connectivity in the network, as
well as estimating parameters of such an optimal CNN. Each state of the beam
search corresponds to a candidate CNN. Two types of actions are defined to add
new convolutional filters or new convolutional layers to a parent CNN, and thus
transition to children states. The utility function of each action is
efficiently computed by transferring parameter values of the parent CNN to its
children, thereby enabling an efficient beam search. Our experimental
evaluation on the 3D ModelNet dataset demonstrates that our model pursuit using
the beam search yields a CNN whose performance on 3D shape classification
surpasses the state of the art.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 19:06:05 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Xu",
"Xu",
""
],
[
"Todorovic",
"Sinisa",
""
]
] | TITLE: Beam Search for Learning a Deep Convolutional Neural Network of 3D
Shapes
ABSTRACT: This paper addresses 3D shape recognition. Recent work typically represents a
3D shape as a set of binary variables corresponding to 3D voxels of a uniform
3D grid centered on the shape, and resorts to deep convolutional neural
networks(CNNs) for modeling these binary variables. Robust learning of such
CNNs is currently limited by the small datasets of 3D shapes available, an
order of magnitude smaller than other common datasets in computer vision.
Related work typically deals with the small training datasets using a number of
ad hoc, hand-tuning strategies. To address this issue, we formulate CNN
learning as a beam search aimed at identifying an optimal CNN architecture,
namely, the number of layers, nodes, and their connectivity in the network, as
well as estimating parameters of such an optimal CNN. Each state of the beam
search corresponds to a candidate CNN. Two types of actions are defined to add
new convolutional filters or new convolutional layers to a parent CNN, and thus
transition to children states. The utility function of each action is
efficiently computed by transferring parameter values of the parent CNN to its
children, thereby enabling an efficient beam search. Our experimental
evaluation on the 3D ModelNet dataset demonstrates that our model pursuit using
the beam search yields a CNN whose performance on 3D shape classification
surpasses the state of the art.
| no_new_dataset | 0.952706 |
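Stripped of the CNN-specific details, the search itself is ordinary beam search over architecture states; a sketch where expansion and scoring are supplied as callables (warm-starting child parameters from the parent is what makes the utility cheap to evaluate):

```python
def beam_search(init_state, expand, utility, width=3, depth=4):
    # expand(state)  -> children, e.g. 'add filters' / 'add a layer'
    # utility(state) -> validation score of the candidate network
    beam = [init_state]
    for _ in range(depth):
        children = [c for s in beam for c in expand(s)]
        if not children:
            break
        beam = sorted(children, key=utility, reverse=True)[:width]
    return max(beam, key=utility)
```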
1612.04804 | Asaf Shabtai | Asaf Shabtai | Anomaly Detection Using the Knowledge-based Temporal Abstraction Method | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid growth in stored time-oriented data necessitates the development of
new methods for handling, processing, and interpreting large amounts of
temporal data. One important example of such processing is detecting anomalies
in time-oriented data. The Knowledge-Based Temporal Abstraction (KBTA) method was
previously proposed for intelligent interpretation of temporal data based on
predefined domain knowledge. In this study we propose a framework that
integrates the KBTA method with a temporal pattern mining process for anomaly
detection. According to the proposed method, a temporal pattern mining process
is applied to a database of basic temporal abstractions in order to
extract patterns representing normal behavior. These patterns are then analyzed
in order to identify abnormal time periods characterized by a significantly
small number of normal patterns. The proposed approach was demonstrated using a
dataset collected from a real server.
| [
{
"version": "v1",
"created": "Wed, 14 Dec 2016 20:50:48 GMT"
}
] | 2016-12-15T00:00:00 | [
[
"Shabtai",
"Asaf",
""
]
] | TITLE: Anomaly Detection Using the Knowledge-based Temporal Abstraction Method
ABSTRACT: The rapid growth in stored time-oriented data necessitates the development of
new methods for handling, processing, and interpreting large amounts of
temporal data. One important example of such processing is detecting anomalies
in time-oriented data. The Knowledge-Based Temporal Abstraction (KBTA) method was
previously proposed for intelligent interpretation of temporal data based on
predefined domain knowledge. In this study we propose a framework that
integrates the KBTA method with a temporal pattern mining process for anomaly
detection. According to the proposed method, a temporal pattern mining process
is applied to a database of basic temporal abstractions in order to
extract patterns representing normal behavior. These patterns are then analyzed
in order to identify abnormal time periods characterized by a significantly
small number of normal patterns. The proposed approach was demonstrated using a
dataset collected from a real server.
| no_new_dataset | 0.947527 |
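Once normal-behavior patterns have been mined, the anomaly test above amounts to flagging time windows supported by unusually few of them. One simple reading of "significantly small", with the quantile threshold as an assumption:

```python
import numpy as np

def abnormal_windows(pattern_counts, alpha=0.05):
    # pattern_counts: per-window counts of matched normal patterns
    counts = np.asarray(pattern_counts, dtype=float)
    return np.where(counts < np.quantile(counts, alpha))[0]
```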
1601.07630 | Jiangye Yuan | Jiangye Yuan and Anil M. Cheriyadat | Combining Maps and Street Level Images for Building Height and Facade
Estimation | UrbanGIS '16 Proceedings of the 2nd ACM SIGSPATIAL Workshop on Smart
Cities and Urban Analytics | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method that integrates two widely available data sources,
building footprints from 2D maps and street level images, to derive valuable
information that is generally difficult to acquire -- building heights and
building facade masks in images. Building footprints are elevated in world
coordinates and projected onto images. Building heights are estimated by
scoring projected footprints based on their alignment with building features in
images. Building footprints with estimated heights can be converted to simple
3D building models, which are projected back to images to identify buildings.
In this procedure, accurate camera projections are critical. However, camera
position errors inherited from external sensors commonly exist, which adversely
affect results. We derive a solution to precisely locate cameras on maps using
correspondence between image features and building footprints. Experiments on
real-world datasets show the promise of our method.
| [
{
"version": "v1",
"created": "Thu, 28 Jan 2016 02:58:04 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Dec 2016 18:47:44 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"Yuan",
"Jiangye",
""
],
[
"Cheriyadat",
"Anil M.",
""
]
] | TITLE: Combining Maps and Street Level Images for Building Height and Facade
Estimation
ABSTRACT: We propose a method that integrates two widely available data sources,
building footprints from 2D maps and street level images, to derive valuable
information that is generally difficult to acquire -- building heights and
building facade masks in images. Building footprints are elevated in world
coordinates and projected onto images. Building heights are estimated by
scoring projected footprints based on their alignment with building features in
images. Building footprints with estimated heights can be converted to simple
3D building models, which are projected back to images to identify buildings.
In this procedure, accurate camera projections are critical. However, camera
position errors inherited from external sensors commonly exist, which adversely
affect results. We derive a solution to precisely locate cameras on maps using
correspondence between image features and building footprints. Experiments on
real-world datasets show the promise of our method.
| no_new_dataset | 0.95561 |
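The height-scoring loop described above is a one-dimensional search: project the footprint at each candidate height and keep the height whose projection best aligns with the image evidence. A sketch with the projection and edge map assumed given:

```python
def estimate_height(footprint, image_edges, project, heights):
    # project(footprint, h) -> projected footprint edge map (assumed given)
    def score(h):
        return float((project(footprint, h) * image_edges).sum())
    return max(heights, key=score)
```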
1602.00773 | Vera Moffitt | Vera Zaychik Moffitt and Julia Stoyanovich | Querying Evolving Graphs with Portal | 12 pages plus appendix. Submitted to SIGMOD 2017 | null | null | null | cs.DB cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graphs are used to represent a plethora of phenomena, from the Web and social
networks, to biological pathways, to semantic knowledge bases. Arguably the
most interesting and important questions one can ask about graphs have to do
with their evolution. Which Web pages are showing an increasing popularity
trend? How does influence propagate in social networks? How does knowledge
evolve?
This paper proposes a logical model of an evolving graph called a TGraph,
which captures evolution of graph topology and of its vertex and edge
attributes. We present a compositional temporal graph algebra TGA, and show a
reduction of TGA to temporal relational algebra with graph-specific primitives.
We formally study the properties of TGA, and also show that it is sufficient to
concisely express a wide range of common use cases. We describe an
implementation of our model and algebra in Portal, built on top of Apache Spark
/ GraphX. We conduct extensive experiments on real datasets, and show that
Portal scales.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2016 03:10:45 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Dec 2016 04:25:11 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"Moffitt",
"Vera Zaychik",
""
],
[
"Stoyanovich",
"Julia",
""
]
] | TITLE: Querying Evolving Graphs with Portal
ABSTRACT: Graphs are used to represent a plethora of phenomena, from the Web and social
networks, to biological pathways, to semantic knowledge bases. Arguably the
most interesting and important questions one can ask about graphs have to do
with their evolution. Which Web pages are showing an increasing popularity
trend? How does influence propagate in social networks? How does knowledge
evolve?
This paper proposes a logical model of an evolving graph called a TGraph,
which captures evolution of graph topology and of its vertex and edge
attributes. We present a compositional temporal graph algebra TGA, and show a
reduction of TGA to temporal relational algebra with graph-specific primitives.
We formally study the properties of TGA, and also show that it is sufficient to
concisely express a wide range of common use cases. We describe an
implementation of our model and algebra in Portal, built on top of Apache Spark
/ GraphX. We conduct extensive experiments on real datasets, and show that
Portal scales.
| no_new_dataset | 0.945045 |
1612.00119 | Xiaojie Jin Mr. | Xiaojie Jin, Xin Li, Huaxin Xiao, Xiaohui Shen, Zhe Lin, Jimei Yang,
Yunpeng Chen, Jian Dong, Luoqi Liu, Zequn Jie, Jiashi Feng, Shuicheng Yan | Video Scene Parsing with Predictive Feature Learning | 15 pages, 7 figures, 5 tables, currently v2 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we address the challenging video scene parsing problem by
developing effective representation learning methods given limited parsing
annotations. In particular, we contribute two novel methods that constitute a
unified parsing framework. (1) \textbf{Predictive feature learning} from
nearly unlimited unlabeled video data. Different from existing methods learning
features from single frame parsing, we learn spatiotemporal discriminative
features by enforcing a parsing network to predict future frames and their
parsing maps (if available) given only historical frames. In this way, the
network can effectively learn to capture video dynamics and temporal context,
which are critical clues for video scene parsing, without requiring extra
manual annotations. (2) \textbf{Prediction steering parsing} architecture that
effectively adapts the learned spatiotemporal features to scene parsing tasks
and provides strong guidance for any off-the-shelf parsing model to achieve
better video scene parsing performance. Extensive experiments over two
challenging datasets, Cityscapes and Camvid, have demonstrated the
effectiveness of our methods by showing significant improvement over
well-established baselines.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2016 02:48:48 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Dec 2016 04:55:42 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"Jin",
"Xiaojie",
""
],
[
"Li",
"Xin",
""
],
[
"Xiao",
"Huaxin",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Lin",
"Zhe",
""
],
[
"Yang",
"Jimei",
""
],
[
"Chen",
"Yunpeng",
""
],
[
"Dong",
"Jian",
""
],
[
"Liu",
"Luoqi",
""
],
[
"Jie",
"Zequn",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Yan",
"Shuicheng",
""
]
] | TITLE: Video Scene Parsing with Predictive Feature Learning
ABSTRACT: In this work, we address the challenging video scene parsing problem by
developing effective representation learning methods given limited parsing
annotations. In particular, we contribute two novel methods that constitute a
unified parsing framework. (1) \textbf{Predictive feature learning} from
nearly unlimited unlabeled video data. Different from existing methods learning
features from single frame parsing, we learn spatiotemporal discriminative
features by enforcing a parsing network to predict future frames and their
parsing maps (if available) given only historical frames. In this way, the
network can effectively learn to capture video dynamics and temporal context,
which are critical clues for video scene parsing, without requiring extra
manual annotations. (2) \textbf{Prediction steering parsing} architecture that
effectively adapts the learned spatiotemporal features to scene parsing tasks
and provides strong guidance for any off-the-shelf parsing model to achieve
better video scene parsing performance. Extensive experiments over two
challenging datasets, Cityscapes and Camvid, have demonstrated the
effectiveness of our methods by showing significant improvement over
well-established baselines.
| no_new_dataset | 0.952926 |
1612.03211 | Xiaolin Andy Li | Rajendra Rana Bhat, Vivek Viswanath, Xiaolin Li | DeepCancer: Detecting Cancer through Gene Expressions via Deep
Generative Learning | null | null | null | null | cs.AI cs.LG q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transcriptional profiling on microarrays to obtain gene expressions has been
used to facilitate cancer diagnosis. We propose a deep generative machine
learning architecture (called DeepCancer) that learns features from unlabeled
microarray data. These models have been used in conjunction with conventional
classifiers that perform classification of the tissue samples as either being
cancerous or non-cancerous. The proposed model has been tested on two different
clinical datasets. The evaluation demonstrates that the DeepCancer model achieves a
very high precision score, while significantly controlling the false positive
and false negative scores.
| [
{
"version": "v1",
"created": "Fri, 9 Dec 2016 23:01:12 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Dec 2016 16:27:34 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"Bhat",
"Rajendra Rana",
""
],
[
"Viswanath",
"Vivek",
""
],
[
"Li",
"Xiaolin",
""
]
] | TITLE: DeepCancer: Detecting Cancer through Gene Expressions via Deep
Generative Learning
ABSTRACT: Transcriptional profiling on microarrays to obtain gene expressions has been
used to facilitate cancer diagnosis. We propose a deep generative machine
learning architecture (called DeepCancer) that learns features from unlabeled
microarray data. These models have been used in conjunction with conventional
classifiers that perform classification of the tissue samples as either being
cancerous or non-cancerous. The proposed model has been tested on two different
clinical datasets. The evaluation demonstrates that the DeepCancer model achieves a
very high precision score, while significantly controlling the false positive
and false negative scores.
| no_new_dataset | 0.951414 |
1612.03940 | Soheil Hashemi | Soheil Hashemi, Nicholas Anthony, Hokchhay Tann, R. Iris Bahar,
Sherief Reda | Understanding the Impact of Precision Quantization on the Accuracy and
Energy of Neural Networks | Accepted for conference proceedings in DATE17 | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks are gaining in popularity as they are used to generate
state-of-the-art results for a variety of computer vision and machine learning
applications. At the same time, these networks have grown in depth and
complexity in order to solve harder problems. Given the limitations in power
budgets dedicated to these networks, the importance of low-power, low-memory
solutions has been stressed in recent years. While a large number of dedicated
hardware using different precisions has recently been proposed, there exists no
comprehensive study of different bit precisions and arithmetic in both inputs
and network parameters. In this work, we address this issue and perform a study
of different bit-precisions in neural networks (from floating-point to
fixed-point, powers of two, and binary). In our evaluation, we consider and
analyze the effect of precision scaling on both network accuracy and hardware
metrics including memory footprint, power and energy consumption, and design
area. We also investigate training-time methodologies to compensate for the
reduction in accuracy due to limited bit precision and demonstrate that in most
cases, precision scaling can deliver significant benefits in design metrics at
the cost of very modest decreases in network accuracy. In addition, we propose
that a small portion of the benefits achieved when using lower precisions can
be forfeited to increase the network size and therefore the accuracy. We
evaluate our experiments, using three well-recognized networks and datasets to
show its generality. We investigate the trade-offs and highlight the benefits
of using lower precisions in terms of energy and memory footprint.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 21:36:48 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"Hashemi",
"Soheil",
""
],
[
"Anthony",
"Nicholas",
""
],
[
"Tann",
"Hokchhay",
""
],
[
"Bahar",
"R. Iris",
""
],
[
"Reda",
"Sherief",
""
]
] | TITLE: Understanding the Impact of Precision Quantization on the Accuracy and
Energy of Neural Networks
ABSTRACT: Deep neural networks are gaining in popularity as they are used to generate
state-of-the-art results for a variety of computer vision and machine learning
applications. At the same time, these networks have grown in depth and
complexity in order to solve harder problems. Given the limitations in power
budgets dedicated to these networks, the importance of low-power, low-memory
solutions has been stressed in recent years. While a large number of dedicated
hardware using different precisions has recently been proposed, there exists no
comprehensive study of different bit precisions and arithmetic in both inputs
and network parameters. In this work, we address this issue and perform a study
of different bit-precisions in neural networks (from floating-point to
fixed-point, powers of two, and binary). In our evaluation, we consider and
analyze the effect of precision scaling on both network accuracy and hardware
metrics including memory footprint, power and energy consumption, and design
area. We also investigate training-time methodologies to compensate for the
reduction in accuracy due to limited bit precision and demonstrate that in most
cases, precision scaling can deliver significant benefits in design metrics at
the cost of very modest decreases in network accuracy. In addition, we propose
that a small portion of the benefits achieved when using lower precisions can
be forfeited to increase the network size and therefore the accuracy. We
evaluate our experiments, using three well-recognized networks and datasets to
show its generality. We investigate the trade-offs and highlight the benefits
of using lower precisions in terms of energy and memory footprint.
| no_new_dataset | 0.945399 |
1612.03961 | Arjun Raj Rajanna | Arjun Raj Rajanna, Kamelia Aryafar, Rajeev Ramchandran, Christye
Sisson, Ali Shokoufandeh, Raymond Ptucha | Neural Networks with Manifold Learning for Diabetic Retinopathy
Detection | Published in Proceedings of "IEEE Western NY Image & Signal
Processing Workshop" | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Widespread outreach programs using remote retinal imaging have proven to
decrease the risk from diabetic retinopathy, the leading cause of blindness in
the US. However, this process still requires manual verification of image
quality and grading of images for level of disease by a trained human grader
and will continue to be limited by the scarcity of such resources.
Computer-aided diagnosis of retinal images has recently gained increasing
attention in the machine learning community. In this paper, we introduce a set
of neural networks for diabetic retinopathy classification of fundus retinal
images. We evaluate the efficiency of the proposed classifiers in combination
with preprocessing and augmentation steps on a sample dataset. Our experimental
results show that neural networks in combination with preprocessing on the
images can boost the classification accuracy on this dataset. Moreover the
proposed models are scalable and can be used in large scale datasets for
diabetic retinopathy detection. The models introduced in this paper can be used
to facilitate the diagnosis and speed up the detection process.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2016 22:51:17 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"Rajanna",
"Arjun Raj",
""
],
[
"Aryafar",
"Kamelia",
""
],
[
"Ramchandran",
"Rajeev",
""
],
[
"Sisson",
"Christye",
""
],
[
"Shokoufandeh",
"Ali",
""
],
[
"Ptucha",
"Raymond",
""
]
] | TITLE: Neural Networks with Manifold Learning for Diabetic Retinopathy
Detection
ABSTRACT: Widespread outreach programs using remote retinal imaging have proven to
decrease the risk from diabetic retinopathy, the leading cause of blindness in
the US. However, this process still requires manual verification of image
quality and grading of images for level of disease by a trained human grader
and will continue to be limited by the scarcity of such resources.
Computer-aided diagnosis of retinal images has recently gained increasing
attention in the machine learning community. In this paper, we introduce a set
of neural networks for diabetic retinopathy classification of fundus retinal
images. We evaluate the efficiency of the proposed classifiers in combination
with preprocessing and augmentation steps on a sample dataset. Our experimental
results show that neural networks in combination with preprocessing on the
images can boost the classification accuracy on this dataset. Moreover the
proposed models are scalable and can be used in large scale datasets for
diabetic retinopathy detection. The models introduced in this paper can be used
to facilitate the diagnosis and speed up the detection process.
| no_new_dataset | 0.945349 |
1612.03982 | Marcel Sheeny De Moraes | Marcel Sheeny de Moraes, Sankha Mukherjee, Neil M Robertson | Deep Convolutional Poses for Human Interaction Recognition in Monocular
Videos | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Human interaction recognition is a challenging problem in computer vision and
has been researched over the years due to its important applications. With the
development of deep models for the human pose estimation problem, this work
aims to verify the effectiveness of using the human pose to recognize
human interactions in monocular videos. This paper develops a method based
on five steps: detect each person in the scene, track them, retrieve the human
pose, extract features based on the pose, and finally recognize the interaction
using a classifier. The Two-Person Interaction dataset was used for the
development of this methodology. Using a whole-sequence evaluation approach, it
achieved an average accuracy of 87.56% over all interactions. Yun et al. achieved
91.10% on the same dataset; however, their methodology used a depth sensor
to recognize the interaction. The methodology developed in this paper shows
that an RGB camera can be as effective as depth cameras to recognize the
interaction between two persons using the recent development of deep models to
estimate the human pose.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2016 00:22:58 GMT"
}
] | 2016-12-14T00:00:00 | [
[
"de Moraes",
"Marcel Sheeny",
""
],
[
"Mukherjee",
"Sankha",
""
],
[
"Robertson",
"Neil M",
""
]
] | TITLE: Deep Convolutional Poses for Human Interaction Recognition in Monocular
Videos
ABSTRACT: Human interaction recognition is a challenging problem in computer vision and
has been researched over the years due to its important applications. With the
development of deep models for the human pose estimation problem, this work
aims to verify the effectiveness of using the human pose to recognize
human interactions in monocular videos. This paper develops a method based
on five steps: detect each person in the scene, track them, retrieve the human
pose, extract features based on the pose, and finally recognize the interaction
using a classifier. The Two-Person Interaction dataset was used for the
development of this methodology. Using a whole-sequence evaluation approach, it
achieved an average accuracy of 87.56% over all interactions. Yun et al. achieved
91.10% on the same dataset; however, their methodology used a depth sensor
to recognize the interaction. The methodology developed in this paper shows
that an RGB camera can be as effective as depth cameras to recognize the
interaction between two persons using the recent development of deep models to
estimate the human pose.
| no_new_dataset | 0.944536 |