Column schema (17 columns):

| Column | Type | Details |
|---|---|---|
| id | string | length 9–16 |
| submitter | string | length 3–64, nullable |
| authors | string | length 5–6.63k |
| title | string | length 7–245 |
| comments | string | length 1–482, nullable |
| journal-ref | string | length 4–382, nullable |
| doi | string | length 9–151, nullable |
| report-no | string | 984 distinct values |
| categories | string | length 5–108 |
| license | string | 9 distinct values |
| abstract | string | length 83–3.41k |
| versions | list | length 1–20 |
| update_date | timestamp[s] | 2007-05-23 to 2025-04-11 |
| authors_parsed | list | length 1–427 |
| prompt | string | length 166–3.49k |
| label | string | 2 distinct values |
| prob | float64 | range 0.5–0.98 |
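The records below follow this schema. As a quick way to work with them, here is a minimal sketch, assuming the rows have been exported to a local JSON Lines file (the filename `arxiv_labeled.jsonl` is hypothetical); it loads the records with pandas, checks the columns against the schema above, and filters by the `label` and `prob` fields.

```python
import pandas as pd

# Load the exported records (hypothetical local file; one JSON object per line).
df = pd.read_json("arxiv_labeled.jsonl", lines=True)

# Sanity-check the columns against the schema above.
expected_columns = [
    "id", "submitter", "authors", "title", "comments", "journal-ref", "doi",
    "report-no", "categories", "license", "abstract", "versions",
    "update_date", "authors_parsed", "prompt", "label", "prob",
]
missing = set(expected_columns) - set(df.columns)
if missing:
    raise ValueError(f"missing columns: {sorted(missing)}")

# `label` has two classes and `prob` ranges from 0.5 to 0.98.
print(df["label"].value_counts())

# Keep only high-confidence 'no_new_dataset' rows, e.g. prob >= 0.95.
high_conf = df[(df["label"] == "no_new_dataset") & (df["prob"] >= 0.95)]
print(f"{len(high_conf)} high-confidence rows out of {len(df)}")
```

The same filtering could be done through the Hugging Face `datasets` library if the data is hosted there; pandas is used here only to keep the sketch self-contained.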
| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1502.06344
|
Erik Rodner
|
Clemens-Alexander Brust, Sven Sickert, Marcel Simon, Erik Rodner,
Joachim Denzler
|
Convolutional Patch Networks with Spatial Prior for Road Detection and
Urban Scene Understanding
|
VISAPP 2015 paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classifying single image patches is important in many different applications,
such as road detection or scene understanding. In this paper, we present
convolutional patch networks, which are convolutional networks learned to
distinguish different image patches and which can be used for pixel-wise
labeling. We also show how to incorporate spatial information of the patch as
an input to the network, which allows for learning spatial priors for certain
categories jointly with an appearance model. In particular, we focus on road
detection and urban scene understanding, two application areas where we are
able to achieve state-of-the-art results on the KITTI as well as on the
LabelMeFacade dataset.
Furthermore, our paper offers a guideline for people working in the area and
desperately wandering through all the painstaking details that render training
CNs on image patches extremely difficult.
|
[
{
"version": "v1",
"created": "Mon, 23 Feb 2015 08:47:37 GMT"
}
] | 2015-02-24T00:00:00 |
[
[
"Brust",
"Clemens-Alexander",
""
],
[
"Sickert",
"Sven",
""
],
[
"Simon",
"Marcel",
""
],
[
"Rodner",
"Erik",
""
],
[
"Denzler",
"Joachim",
""
]
] |
TITLE: Convolutional Patch Networks with Spatial Prior for Road Detection and
Urban Scene Understanding
ABSTRACT: Classifying single image patches is important in many different applications,
such as road detection or scene understanding. In this paper, we present
convolutional patch networks, which are convolutional networks learned to
distinguish different image patches and which can be used for pixel-wise
labeling. We also show how to incorporate spatial information of the patch as
an input to the network, which allows for learning spatial priors for certain
categories jointly with an appearance model. In particular, we focus on road
detection and urban scene understanding, two application areas where we are
able to achieve state-of-the-art results on the KITTI as well as on the
LabelMeFacade dataset.
Furthermore, our paper offers a guideline for people working in the area and
desperately wandering through all the painstaking details that render training
CNs on image patches extremely difficult.
|
no_new_dataset
| 0.949342 |
1502.03913
|
Smitha M.L.
|
B.H. Shekar, Smitha M.L
|
Skeleton Matching based approach for Text Localization in Scene Images
|
10 pages, 8 figures, Eighth International Conference on Image and
Signal Processing,Elsevier Publications,pp: 145-153, held at UVCE, Bangalore
in July 2014. ISBN: 9789351072522
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a skeleton matching based approach which aids in
text localization in scene images. The input image is preprocessed and
segmented into blocks using connected component analysis. We obtain the
skeleton of the segmented block using a morphology based approach. The
skeletonized images are compared with the trained templates in the database to
categorize into text and non-text blocks. Further, the newly designed
geometrical rules and morphological operations are employed on the detected
text blocks for scene text localization. The experimental results obtained on
publicly available standard datasets illustrate that the proposed method can
detect and localize the texts of various sizes, fonts and colors.
|
[
{
"version": "v1",
"created": "Fri, 13 Feb 2015 08:42:00 GMT"
}
] | 2015-02-23T00:00:00 |
[
[
"Shekar",
"B. H.",
""
],
[
"L",
"Smitha M.",
""
]
] |
TITLE: Skeleton Matching based approach for Text Localization in Scene Images
ABSTRACT: In this paper, we propose a skeleton matching based approach which aids in
text localization in scene images. The input image is preprocessed and
segmented into blocks using connected component analysis. We obtain the
skeleton of the segmented block using a morphology based approach. The
skeletonized images are compared with the trained templates in the database to
categorize into text and non-text blocks. Further, the newly designed
geometrical rules and morphological operations are employed on the detected
text blocks for scene text localization. The experimental results obtained on
publicly available standard datasets illustrate that the proposed method can
detect and localize the texts of various sizes, fonts and colors.
|
no_new_dataset
| 0.956917 |
1502.03918
|
Smitha M.L.
|
B.H. Shekar, Smitha M.L
|
Gradient Difference based approach for Text Localization in Compressed
domain
|
11 pages, Second International Conference on Emerging Research in
Computing, Information, Communications and Applications, Elsevier
Publications, ISBN: 9789351072638, vol. III, pp: 299-308, held at NMIT,
Bangalore August 2014
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a gradient difference based approach to text
localization in videos and scene images. The input video frame/ image is first
compressed using multilevel 2-D wavelet transform. The edge information of the
reconstructed image is found which is further used for finding the maximum
gradient difference between the pixels and then the boundaries of the detected
text blocks are computed using zero crossing technique. We perform logical AND
operation of the text blocks obtained by gradient difference and the zero
crossing technique followed by connected component analysis to eliminate the
false positives. Finally, the morphological dilation operation is employed on
the detected text blocks for scene text localization. The experimental results
obtained on publicly available standard datasets illustrate that the proposed
method can detect and localize the texts of various sizes, fonts and colors.
|
[
{
"version": "v1",
"created": "Fri, 13 Feb 2015 09:08:35 GMT"
}
] | 2015-02-23T00:00:00 |
[
[
"Shekar",
"B. H.",
""
],
[
"L",
"Smitha M.",
""
]
] |
TITLE: Gradient Difference based approach for Text Localization in Compressed
domain
ABSTRACT: In this paper, we propose a gradient difference based approach to text
localization in videos and scene images. The input video frame/ image is first
compressed using multilevel 2-D wavelet transform. The edge information of the
reconstructed image is found which is further used for finding the maximum
gradient difference between the pixels and then the boundaries of the detected
text blocks are computed using zero crossing technique. We perform logical AND
operation of the text blocks obtained by gradient difference and the zero
crossing technique followed by connected component analysis to eliminate the
false positives. Finally, the morphological dilation operation is employed on
the detected text blocks for scene text localization. The experimental results
obtained on publicly available standard datasets illustrate that the proposed
method can detect and localize the texts of various sizes, fonts and colors.
|
no_new_dataset
| 0.956104 |
1502.05925
|
Feng Nan
|
Feng Nan, Joseph Wang, Venkatesh Saligrama
|
Feature-Budgeted Random Forest
| null | null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We seek decision rules for prediction-time cost reduction, where complete
data is available for training, but during prediction-time, each feature can
only be acquired for an additional cost. We propose a novel random forest
algorithm to minimize prediction error for a user-specified {\it average}
feature acquisition budget. While random forests yield strong generalization
performance, they do not explicitly account for feature costs and furthermore
require low correlation among trees, which amplifies costs. Our random forest
grows trees with low acquisition cost and high strength based on greedy minimax
cost-weighted-impurity splits. Theoretically, we establish near-optimal
acquisition cost guarantees for our algorithm. Empirically, on a number of
benchmark datasets we demonstrate superior accuracy-cost curves against
state-of-the-art prediction-time algorithms.
|
[
{
"version": "v1",
"created": "Fri, 20 Feb 2015 16:42:40 GMT"
}
] | 2015-02-23T00:00:00 |
[
[
"Nan",
"Feng",
""
],
[
"Wang",
"Joseph",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] |
TITLE: Feature-Budgeted Random Forest
ABSTRACT: We seek decision rules for prediction-time cost reduction, where complete
data is available for training, but during prediction-time, each feature can
only be acquired for an additional cost. We propose a novel random forest
algorithm to minimize prediction error for a user-specified {\it average}
feature acquisition budget. While random forests yield strong generalization
performance, they do not explicitly account for feature costs and furthermore
require low correlation among trees, which amplifies costs. Our random forest
grows trees with low acquisition cost and high strength based on greedy minimax
cost-weighted-impurity splits. Theoretically, we establish near-optimal
acquisition cost guarantees for our algorithm. Empirically, on a number of
benchmark datasets we demonstrate superior accuracy-cost curves against
state-of-the-art prediction-time algorithms.
|
no_new_dataset
| 0.945197 |
1502.05461
|
Carl Vondrick
|
Carl Vondrick, Aditya Khosla, Hamed Pirsiavash, Tomasz Malisiewicz,
Antonio Torralba
|
Visualizing Object Detection Features
|
In submission to IJCV
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce algorithms to visualize feature spaces used by object detectors.
Our method works by inverting a visual feature back to multiple natural images.
We found that these visualizations allow us to analyze object detection systems
in new ways and gain new insight into the detector's failures. For example,
when we visualize the features for high scoring false alarms, we discovered
that, although they are clearly wrong in image space, they do look deceptively
similar to true positives in feature space. This result suggests that many of
these false alarms are caused by our choice of feature space, and supports that
creating a better learning algorithm or building bigger datasets is unlikely to
correct these errors. By visualizing feature spaces, we can gain a more
intuitive understanding of recognition systems.
|
[
{
"version": "v1",
"created": "Thu, 19 Feb 2015 04:11:14 GMT"
}
] | 2015-02-20T00:00:00 |
[
[
"Vondrick",
"Carl",
""
],
[
"Khosla",
"Aditya",
""
],
[
"Pirsiavash",
"Hamed",
""
],
[
"Malisiewicz",
"Tomasz",
""
],
[
"Torralba",
"Antonio",
""
]
] |
TITLE: Visualizing Object Detection Features
ABSTRACT: We introduce algorithms to visualize feature spaces used by object detectors.
Our method works by inverting a visual feature back to multiple natural images.
We found that these visualizations allow us to analyze object detection systems
in new ways and gain new insight into the detector's failures. For example,
when we visualize the features for high scoring false alarms, we discovered
that, although they are clearly wrong in image space, they do look deceptively
similar to true positives in feature space. This result suggests that many of
these false alarms are caused by our choice of feature space, and supports that
creating a better learning algorithm or building bigger datasets is unlikely to
correct these errors. By visualizing feature spaces, we can gain a more
intuitive understanding of recognition systems.
|
no_new_dataset
| 0.951729 |
1411.3985
|
Riccardo Scatamacchia
|
L. Biferale, A. S. Lanotte, R. Scatamacchia and F. Toschi
|
Intermittency in the relative separations of tracers and of heavy
particles in turbulent flows
|
22 pages, 14 figures
|
Journal of Fluid Mechanics, Vol. 757, 2014
|
10.1017/jfm.2014.515
| null |
physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Results from Direct Numerical Simulations of particle relative dispersion in
three dimensional homogeneous and isotropic turbulence at Reynolds number
$Re_\lambda \sim 300$ are presented. We study point-like passive tracers and
heavy particles, at Stokes number St = 0, 0.6, 1 and 5. Particles are emitted
from localised sources, in bunches of thousands, periodically in time, allowing
to reach an unprecedented statistical accuracy, with a total number of events
for two-point observables of the order of $10^{11}$. The right tail of the
probability density function for tracers develops a clear deviation from
Richardson's self-similar prediction, pointing to the intermittent nature of
the dispersion process. In our numerical experiment, such deviations are
manifest once the probability to measure an event becomes of the order of -or
rarer than- one part over one million, hence the crucial importance of a large
dataset. The role of finite-Reynolds effects and the related fluctuations when
pair separations cross the boundary between viscous and inertial range scales
are discussed. An asymptotic prediction based on the multifractal theory for
inertial range intermittency and valid for large Reynolds numbers is found to
agree with the data better than the Richardson theory. The agreement is
improved when considering heavy particles, whose inertia filters out viscous
scale fluctuations. By using the exit-time statistics we also show that events
associated to pairs experiencing unusually slow inertial range separations have
a non self-similar probability distribution function.
|
[
{
"version": "v1",
"created": "Fri, 14 Nov 2014 17:38:00 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Feb 2015 17:59:59 GMT"
}
] | 2015-02-19T00:00:00 |
[
[
"Biferale",
"L.",
""
],
[
"Lanotte",
"A. S.",
""
],
[
"Scatamacchia",
"R.",
""
],
[
"Toschi",
"F.",
""
]
] |
TITLE: Intermittency in the relative separations of tracers and of heavy
particles in turbulent flows
ABSTRACT: Results from Direct Numerical Simulations of particle relative dispersion in
three dimensional homogeneous and isotropic turbulence at Reynolds number
$Re_\lambda \sim 300$ are presented. We study point-like passive tracers and
heavy particles, at Stokes number St = 0, 0.6, 1 and 5. Particles are emitted
from localised sources, in bunches of thousands, periodically in time, allowing
to reach an unprecedented statistical accuracy, with a total number of events
for two-point observables of the order of $10^{11}$. The right tail of the
probability density function for tracers develops a clear deviation from
Richardson's self-similar prediction, pointing to the intermittent nature of
the dispersion process. In our numerical experiment, such deviations are
manifest once the probability to measure an event becomes of the order of -or
rarer than- one part over one million, hence the crucial importance of a large
dataset. The role of finite-Reynolds effects and the related fluctuations when
pair separations cross the boundary between viscous and inertial range scales
are discussed. An asymptotic prediction based on the multifractal theory for
inertial range intermittency and valid for large Reynolds numbers is found to
agree with the data better than the Richardson theory. The agreement is
improved when considering heavy particles, whose inertia filters out viscous
scale fluctuations. By using the exit-time statistics we also show that events
associated to pairs experiencing unusually slow inertial range separations have
a non self-similar probability distribution function.
|
no_new_dataset
| 0.951414 |
1502.05167
|
Mansaf Alam Dr
|
Kashish Ara Shakil, Shadma Anis and Mansaf Alam
|
Dengue disease prediction using weka data mining tool
| null | null | null | null |
cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dengue is a life threatening disease prevalent in several developed as well
as developing countries like India. In this paper we discuss various algorithm
approaches of data mining that have been utilized for dengue disease
prediction. Data mining is a well known technique used by health organizations
for classification of diseases such as dengue, diabetes and cancer in
bioinformatics research. In the proposed approach we have used WEKA with 10
cross validation to evaluate data and compare results. Weka has an extensive
collection of different machine learning and data mining algorithms. In this
paper we have firstly classified the dengue data set and then compared the
different data mining techniques in weka through Explorer, knowledge flow and
Experimenter interfaces. Furthermore in order to validate our approach we have
used a dengue dataset with 108 instances but weka used 99 rows and 18
attributes to determine the prediction of disease and their accuracy using
classifications of different algorithms to find out the best performance. The
main objective of this paper is to classify data and assist the users in
extracting useful information from data and easily identify a suitable
algorithm for accurate predictive model from it. From the findings of this
paper it can be concluded that Na\"ive Bayes and J48 are the best performance
algorithms for classified accuracy because they achieved maximum accuracy= 100%
with 99 correctly classified instances, maximum ROC = 1, had least mean
absolute error and it took minimum time for building this model through
Explorer and Knowledge flow results
|
[
{
"version": "v1",
"created": "Wed, 18 Feb 2015 09:52:55 GMT"
}
] | 2015-02-19T00:00:00 |
[
[
"Shakil",
"Kashish Ara",
""
],
[
"Anis",
"Shadma",
""
],
[
"Alam",
"Mansaf",
""
]
] |
TITLE: Dengue disease prediction using weka data mining tool
ABSTRACT: Dengue is a life threatening disease prevalent in several developed as well
as developing countries like India. In this paper we discuss various algorithm
approaches of data mining that have been utilized for dengue disease
prediction. Data mining is a well known technique used by health organizations
for classification of diseases such as dengue, diabetes and cancer in
bioinformatics research. In the proposed approach we have used WEKA with 10
cross validation to evaluate data and compare results. Weka has an extensive
collection of different machine learning and data mining algorithms. In this
paper we have firstly classified the dengue data set and then compared the
different data mining techniques in weka through Explorer, knowledge flow and
Experimenter interfaces. Furthermore in order to validate our approach we have
used a dengue dataset with 108 instances but weka used 99 rows and 18
attributes to determine the prediction of disease and their accuracy using
classifications of different algorithms to find out the best performance. The
main objective of this paper is to classify data and assist the users in
extracting useful information from data and easily identify a suitable
algorithm for accurate predictive model from it. From the findings of this
paper it can be concluded that Na\"ive Bayes and J48 are the best performance
algorithms for classified accuracy because they achieved maximum accuracy= 100%
with 99 correctly classified instances, maximum ROC = 1, had least mean
absolute error and it took minimum time for building this model through
Explorer and Knowledge flow results
|
no_new_dataset
| 0.951953 |
1502.05212
|
Gianluigi Ciocca
|
Gianluigi Ciocca, Paolo Napoletano, Raimondo Schettini
|
IAT - Image Annotation Tool: Manual
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The annotation of image and video data of large datasets is a fundamental
task in multimedia information retrieval and computer vision applications. In
order to support the users during the image and video annotation process,
several software tools have been developed to provide them with a graphical
environment which helps drawing object contours, handling tracking information
and specifying object metadata. Here we introduce a preliminary version of the
image annotation tools developed at the Imaging and Vision Laboratory.
|
[
{
"version": "v1",
"created": "Wed, 18 Feb 2015 13:11:46 GMT"
}
] | 2015-02-19T00:00:00 |
[
[
"Ciocca",
"Gianluigi",
""
],
[
"Napoletano",
"Paolo",
""
],
[
"Schettini",
"Raimondo",
""
]
] |
TITLE: IAT - Image Annotation Tool: Manual
ABSTRACT: The annotation of image and video data of large datasets is a fundamental
task in multimedia information retrieval and computer vision applications. In
order to support the users during the image and video annotation process,
several software tools have been developed to provide them with a graphical
environment which helps drawing object contours, handling tracking information
and specifying object metadata. Here we introduce a preliminary version of the
image annotation tools developed at the Imaging and Vision Laboratory.
|
no_new_dataset
| 0.952442 |
1502.05241
|
Michael Dirnberger
|
Michael Dirnberger and Adrian Neumann and Tim Kehl
|
NEFI: Network Extraction From Images
| null | null | null | null |
cs.CV cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Networks and network-like structures are amongst the central building blocks
of many technological and biological systems. Given a mathematical graph
representation of a network, methods from graph theory enable a precise
investigation of its properties. Software for the analysis of graphs is widely
available and has been applied to graphs describing large scale networks such
as social networks, protein-interaction networks, etc. In these applications,
graph acquisition, i.e., the extraction of a mathematical graph from a network,
is relatively simple. However, for many network-like structures, e.g. leaf
venations, slime molds and mud cracks, data collection relies on images where
graph extraction requires domain-specific solutions or even manual extraction. Here we
introduce Network Extraction From Images, NEFI, a software tool that
automatically extracts accurate graphs from images of a wide range of networks
originating in various domains. While there is previous work on graph
extraction from images, theoretical results are fully accessible only to an
expert audience and ready-to-use implementations for non-experts are rarely
available or insufficiently documented. NEFI provides a novel platform allowing
practitioners from many disciplines to easily extract graph representations
from images by supplying flexible tools from image processing, computer vision
and graph theory bundled in a convenient package. Thus, NEFI constitutes a
scalable alternative to tedious and error-prone manual graph extraction and
special purpose tools. We anticipate NEFI to enable the collection of larger
datasets by reducing the time spent on graph extraction. The analysis of these
new datasets may open up the possibility to gain new insights into the
structure and function of various types of networks. NEFI is open source and
available at http://nefi.mpi-inf.mpg.de.
|
[
{
"version": "v1",
"created": "Wed, 18 Feb 2015 14:20:25 GMT"
}
] | 2015-02-19T00:00:00 |
[
[
"Dirnberger",
"Michael",
""
],
[
"Neumann",
"Adrian",
""
],
[
"Kehl",
"Tim",
""
]
] |
TITLE: NEFI: Network Extraction From Images
ABSTRACT: Networks and network-like structures are amongst the central building blocks
of many technological and biological systems. Given a mathematical graph
representation of a network, methods from graph theory enable a precise
investigation of its properties. Software for the analysis of graphs is widely
available and has been applied to graphs describing large scale networks such
as social networks, protein-interaction networks, etc. In these applications,
graph acquisition, i.e., the extraction of a mathematical graph from a network,
is relatively simple. However, for many network-like structures, e.g. leaf
venations, slime molds and mud cracks, data collection relies on images where
graph extraction requires domain-specific solutions or even manual extraction. Here we
introduce Network Extraction From Images, NEFI, a software tool that
automatically extracts accurate graphs from images of a wide range of networks
originating in various domains. While there is previous work on graph
extraction from images, theoretical results are fully accessible only to an
expert audience and ready-to-use implementations for non-experts are rarely
available or insufficiently documented. NEFI provides a novel platform allowing
practitioners from many disciplines to easily extract graph representations
from images by supplying flexible tools from image processing, computer vision
and graph theory bundled in a convenient package. Thus, NEFI constitutes a
scalable alternative to tedious and error-prone manual graph extraction and
special purpose tools. We anticipate NEFI to enable the collection of larger
datasets by reducing the time spent on graph extraction. The analysis of these
new datasets may open up the possibility to gain new insights into the
structure and function of various types of networks. NEFI is open source and
available at http://nefi.mpi-inf.mpg.de.
|
no_new_dataset
| 0.935051 |
1502.04967
|
Ivan Maximov
|
Ivan I. Maximov, Farida Grinberg, Irene Neuner, N. Jon Shah
|
Robust diffusion imaging framework for clinical studies
|
30 pages, 9 figures
| null | null | null |
physics.med-ph physics.bio-ph
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Clinical diffusion imaging requires short acquisition times and good image
quality to permit its use in various medical applications. In turn, these
demands require the development of a robust and efficient post-processing
framework in order to guarantee useful and reliable results. However, multiple
artefacts abound in in vivo measurements; from either subject such as cardiac
pulsation, bulk head motion, respiratory motion and involuntary tics and
tremor, or imaging hardware related problems, such as table vibrations, etc.
These artefacts can severely degrade the resulting images and render diffusion
analysis difficult or impossible. In order to overcome these problems, we
developed a robust and efficient framework enabling the use of initially
corrupted images from a clinical study. At the heart of this framework is an
improved least trimmed squares diffusion tensor estimation algorithm that works
well with severely degraded datasets with low signal-to-noise ratio. This
approach has been compared with other diffusion imaging post-processing
algorithms using simulations and in vivo experiments. Exploiting track-based
spatial statistics analysis, we demonstrate that corrupted datasets can be
restored and reused in further clinical studies rather than being discarded due
to poor quality. The developed robust framework is shown to exhibit a high
efficiency and accuracy and can, in principle, be exploited in other MR studies
where artefact/outlier suppression is needed.
|
[
{
"version": "v1",
"created": "Tue, 17 Feb 2015 17:33:15 GMT"
}
] | 2015-02-18T00:00:00 |
[
[
"Maximov",
"Ivan I.",
""
],
[
"Grinberg",
"Farida",
""
],
[
"Neuner",
"Irene",
""
],
[
"Shah",
"N. Jon",
""
]
] |
TITLE: Robust diffusion imaging framework for clinical studies
ABSTRACT: Clinical diffusion imaging requires short acquisition times and good image
quality to permit its use in various medical applications. In turn, these
demands require the development of a robust and efficient post-processing
framework in order to guarantee useful and reliable results. However, multiple
artefacts abound in in vivo measurements; from either subject such as cardiac
pulsation, bulk head motion, respiratory motion and involuntary tics and
tremor, or imaging hardware related problems, such as table vibrations, etc.
These artefacts can severely degrade the resulting images and render diffusion
analysis difficult or impossible. In order to overcome these problems, we
developed a robust and efficient framework enabling the use of initially
corrupted images from a clinical study. At the heart of this framework is an
improved least trimmed squares diffusion tensor estimation algorithm that works
well with severely degraded datasets with low signal-to-noise ratio. This
approach has been compared with other diffusion imaging post-processing
algorithms using simulations and in vivo experiments. Exploiting track-based
spatial statistics analysis, we demonstrate that corrupted datasets can be
restored and reused in further clinical studies rather than being discarded due
to poor quality. The developed robust framework is shown to exhibit a high
efficiency and accuracy and can, in principle, be exploited in other MR studies
where artefact/outlier suppression is needed.
|
no_new_dataset
| 0.951594 |
1502.04983
|
Thanapong Intharah
|
Thanapong Intharah and Gabriel J. Brostow
|
Context Tricks for Cheap Semantic Segmentation
|
Supplementary material can be found at
http://www0.cs.ucl.ac.uk/staff/T.Intharah/research.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurate semantic labeling of image pixels is difficult because intra-class
variability is often greater than inter-class variability. In turn, fast
semantic segmentation is hard because accurate models are usually too
complicated to also run quickly at test-time. Our experience with building and
running semantic segmentation systems has also shown a reasonably obvious
bottleneck on model complexity, imposed by small training datasets. We
therefore propose two simple complementary strategies that leverage context to
give better semantic segmentation, while scaling up or down to train on
different-sized datasets.
As easy modifications for existing semantic segmentation algorithms, we
introduce Decorrelated Semantic Texton Forests, and the Context Sensitive Image
Level Prior. The proposed modifications are tested using a Semantic Texton
Forest (STF) system, and the modifications are validated on two standard
benchmark datasets, MSRC-21 and PascalVOC-2010. In Python based comparisons,
our system is insignificantly slower than STF at test-time, yet produces
superior semantic segmentations overall, with just push-button training.
|
[
{
"version": "v1",
"created": "Tue, 17 Feb 2015 18:08:53 GMT"
}
] | 2015-02-18T00:00:00 |
[
[
"Intharah",
"Thanapong",
""
],
[
"Brostow",
"Gabriel J.",
""
]
] |
TITLE: Context Tricks for Cheap Semantic Segmentation
ABSTRACT: Accurate semantic labeling of image pixels is difficult because intra-class
variability is often greater than inter-class variability. In turn, fast
semantic segmentation is hard because accurate models are usually too
complicated to also run quickly at test-time. Our experience with building and
running semantic segmentation systems has also shown a reasonably obvious
bottleneck on model complexity, imposed by small training datasets. We
therefore propose two simple complementary strategies that leverage context to
give better semantic segmentation, while scaling up or down to train on
different-sized datasets.
As easy modifications for existing semantic segmentation algorithms, we
introduce Decorrelated Semantic Texton Forests, and the Context Sensitive Image
Level Prior. The proposed modifications are tested using a Semantic Texton
Forest (STF) system, and the modifications are validated on two standard
benchmark datasets, MSRC-21 and PascalVOC-2010. In Python based comparisons,
our system is insignificantly slower than STF at test-time, yet produces
superior semantic segmentations overall, with just push-button training.
|
no_new_dataset
| 0.949809 |
1212.0975
|
Arya Iranmehr
|
Hamed Masnadi-Shirazi, Nuno Vasconcelos and Arya Iranmehr
|
Cost-Sensitive Support Vector Machines
|
32 pages, 4 figures
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A new procedure for learning cost-sensitive SVM (CS-SVM) classifiers is
proposed. The SVM hinge loss is extended to the cost sensitive setting, and the
CS-SVM is derived as the minimizer of the associated risk. The extension of the
hinge loss draws on recent connections between risk minimization and
probability elicitation. These connections are generalized to cost-sensitive
classification, in a manner that guarantees consistency with the cost-sensitive
Bayes risk, and associated Bayes decision rule. This ensures that optimal
decision rules, under the new hinge loss, implement the Bayes-optimal
cost-sensitive classification boundary. Minimization of the new hinge loss is
shown to be a generalization of the classic SVM optimization problem, and can
be solved by identical procedures. The dual problem of CS-SVM is carefully
scrutinized by means of regularization theory and sensitivity analysis and the
CS-SVM algorithm is substantiated. The proposed algorithm is also extended to
cost-sensitive learning with example dependent costs. The minimum cost
sensitive risk is proposed as the performance measure and is connected to ROC
analysis through vector optimization. The resulting algorithm avoids the
shortcomings of previous approaches to cost-sensitive SVM design, and is shown
to have superior experimental performance on a large number of cost sensitive
and imbalanced datasets.
|
[
{
"version": "v1",
"created": "Wed, 5 Dec 2012 09:24:11 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Feb 2015 11:17:57 GMT"
}
] | 2015-02-17T00:00:00 |
[
[
"Masnadi-Shirazi",
"Hamed",
""
],
[
"Vasconcelos",
"Nuno",
""
],
[
"Iranmehr",
"Arya",
""
]
] |
TITLE: Cost-Sensitive Support Vector Machines
ABSTRACT: A new procedure for learning cost-sensitive SVM (CS-SVM) classifiers is
proposed. The SVM hinge loss is extended to the cost sensitive setting, and the
CS-SVM is derived as the minimizer of the associated risk. The extension of the
hinge loss draws on recent connections between risk minimization and
probability elicitation. These connections are generalized to cost-sensitive
classification, in a manner that guarantees consistency with the cost-sensitive
Bayes risk, and associated Bayes decision rule. This ensures that optimal
decision rules, under the new hinge loss, implement the Bayes-optimal
cost-sensitive classification boundary. Minimization of the new hinge loss is
shown to be a generalization of the classic SVM optimization problem, and can
be solved by identical procedures. The dual problem of CS-SVM is carefully
scrutinized by means of regularization theory and sensitivity analysis and the
CS-SVM algorithm is substantiated. The proposed algorithm is also extended to
cost-sensitive learning with example dependent costs. The minimum cost
sensitive risk is proposed as the performance measure and is connected to ROC
analysis through vector optimization. The resulting algorithm avoids the
shortcomings of previous approaches to cost-sensitive SVM design, and is shown
to have superior experimental performance on a large number of cost sensitive
and imbalanced datasets.
|
no_new_dataset
| 0.949248 |
1406.7611
|
Lutz Bornmann Dr.
|
Lutz Bornmann
|
Validity of altmetrics data for measuring societal impact: A study using
data from Altmetric and F1000Prime
| null | null | null | null |
cs.DL physics.soc-ph stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Can altmetric data be validly used for the measurement of societal impact?
The current study seeks to answer this question with a comprehensive dataset
(about 100,000 records) from very disparate sources (F1000, Altmetric, and an
in-house database based on Web of Science). In the F1000 peer review system,
experts attach particular tags to scientific papers which indicate whether a
paper could be of interest for science or rather for other segments of society.
The results show that papers with the tag "good for teaching" do achieve higher
altmetric counts than papers without this tag - if the quality of the papers is
controlled. At the same time, a higher citation count is shown especially by
papers with a tag that is specifically scientifically oriented ("new finding").
The findings indicate that papers tailored for a readership outside the area of
research should lead to societal impact. If altmetric data is to be used for
the measurement of societal impact, the question arises of its normalization.
In bibliometrics, citations are normalized for the papers' subject area and
publication year. This study has taken a second analytic step involving a
possible normalization of altmetric data. As the results show there are
particular scientific topics which are of especial interest for a wide
audience. Since these more or less interesting topics are not completely
reflected in Thomson Reuters' journal sets, a normalization of altmetric data
should not be based on the level of subject categories, but on the level of
topics.
|
[
{
"version": "v1",
"created": "Mon, 30 Jun 2014 06:19:24 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Feb 2015 08:37:42 GMT"
}
] | 2015-02-17T00:00:00 |
[
[
"Bornmann",
"Lutz",
""
]
] |
TITLE: Validity of altmetrics data for measuring societal impact: A study using
data from Altmetric and F1000Prime
ABSTRACT: Can altmetric data be validly used for the measurement of societal impact?
The current study seeks to answer this question with a comprehensive dataset
(about 100,000 records) from very disparate sources (F1000, Altmetric, and an
in-house database based on Web of Science). In the F1000 peer review system,
experts attach particular tags to scientific papers which indicate whether a
paper could be of interest for science or rather for other segments of society.
The results show that papers with the tag "good for teaching" do achieve higher
altmetric counts than papers without this tag - if the quality of the papers is
controlled. At the same time, a higher citation count is shown especially by
papers with a tag that is specifically scientifically oriented ("new finding").
The findings indicate that papers tailored for a readership outside the area of
research should lead to societal impact. If altmetric data is to be used for
the measurement of societal impact, the question arises of its normalization.
In bibliometrics, citations are normalized for the papers' subject area and
publication year. This study has taken a second analytic step involving a
possible normalization of altmetric data. As the results show there are
particular scientific topics which are of especial interest for a wide
audience. Since these more or less interesting topics are not completely
reflected in Thomson Reuters' journal sets, a normalization of altmetric data
should not be based on the level of subject categories, but on the level of
topics.
|
no_new_dataset
| 0.932944 |
1409.2863
|
Lutz Bornmann Dr.
|
Lutz Bornmann
|
Usefulness of altmetrics for measuring the broader impact of research: A
case study using data from PLOS (altmetrics) and F1000Prime (paper tags)
|
arXiv admin note: text overlap with arXiv:1406.7611
| null | null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: Whereas citation counts allow the measurement of the impact of
research on research itself, an important role in the measurement of the impact
of research on other parts of society is ascribed to altmetrics. The present
case study investigates the usefulness of altmetrics for measuring the broader
impact of research. Methods: This case study is essentially based on a dataset
with papers obtained from F1000. The dataset was augmented with altmetrics
(such as Twitter counts) which were provided by PLOS (the Public Library of
Science). In total, the case study covers a total of 1,082 papers. Findings:
The F1000 dataset contains tags on papers which were assigned intellectually by
experts and which can characterise a paper. The most interesting tag for
altmetric research is "good for teaching". This tag is assigned to papers which
could be of interest to a wider circle of readers than the peers in a
specialist area. Particularly on Facebook and Twitter, one could expect papers
with this tag to be mentioned more often than those without this tag. With
respect to the "good for teaching" tag, the results from regression models were
able to confirm these expectations: Papers with this tag show significantly
higher Facebook and Twitter counts than papers without this tag. This
association could not be seen with Mendeley or Figshare counts (that is with
counts from platforms which are chiefly of interest in a scientific context).
Conclusions: The results of the current study indicate that Facebook and
Twitter, but not Figshare or Mendeley, can provide indications of papers which
are of interest to a broader circle of readers (and not only for the peers in a
specialist area), and therefore seem to be useful for societal impact measurement.
|
[
{
"version": "v1",
"created": "Tue, 9 Sep 2014 08:05:37 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Feb 2015 08:43:56 GMT"
}
] | 2015-02-17T00:00:00 |
[
[
"Bornmann",
"Lutz",
""
]
] |
TITLE: Usefulness of altmetrics for measuring the broader impact of research: A
case study using data from PLOS (altmetrics) and F1000Prime (paper tags)
ABSTRACT: Purpose: Whereas citation counts allow the measurement of the impact of
research on research itself, an important role in the measurement of the impact
of research on other parts of society is ascribed to altmetrics. The present
case study investigates the usefulness of altmetrics for measuring the broader
impact of research. Methods: This case study is essentially based on a dataset
with papers obtained from F1000. The dataset was augmented with altmetrics
(such as Twitter counts) which were provided by PLOS (the Public Library of
Science). In total, the case study covers a total of 1,082 papers. Findings:
The F1000 dataset contains tags on papers which were assigned intellectually by
experts and which can characterise a paper. The most interesting tag for
altmetric research is "good for teaching". This tag is assigned to papers which
could be of interest to a wider circle of readers than the peers in a
specialist area. Particularly on Facebook and Twitter, one could expect papers
with this tag to be mentioned more often than those without this tag. With
respect to the "good for teaching" tag, the results from regression models were
able to confirm these expectations: Papers with this tag show significantly
higher Facebook and Twitter counts than papers without this tag. This
association could not be seen with Mendeley or Figshare counts (that is with
counts from platforms which are chiefly of interest in a scientific context).
Conclusions: The results of the current study indicate that Facebook and
Twitter, but not Figshare or Mendeley, can provide indications of papers which
are of interest to a broader circle of readers (and not only for the peers in a
specialist area), and therefore seem to be useful for societal impact measurement.
|
no_new_dataset
| 0.919643 |
1502.03536
|
Vamsi Ithapu
|
Chris Hinrichs, Vamsi K Ithapu, Qinyuan Sun, Sterling C Johnson, Vikas
Singh
|
Speeding up Permutation Testing in Neuroimaging
|
NIPS 13
|
Advances in neural information processing systems (2013), pp.
890-898
| null | null |
stat.CO cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple hypothesis testing is a significant problem in nearly all
neuroimaging studies. In order to correct for this phenomenon, we require a
reliable estimate of the Family-Wise Error Rate (FWER). The well known
Bonferroni correction method, while simple to implement, is quite conservative,
and can substantially under-power a study because it ignores dependencies
between test statistics. Permutation testing, on the other hand, is an exact,
non-parametric method of estimating the FWER for a given $\alpha$-threshold,
but for acceptably low thresholds the computational burden can be prohibitive.
In this paper, we show that permutation testing in fact amounts to populating
the columns of a very large matrix ${\bf P}$. By analyzing the spectrum of this
matrix, under certain conditions, we see that ${\bf P}$ has a low-rank plus a
low-variance residual decomposition which makes it suitable for highly
sub--sampled --- on the order of $0.5\%$ --- matrix completion methods. Based
on this observation, we propose a novel permutation testing methodology which
offers a large speedup, without sacrificing the fidelity of the estimated FWER.
Our evaluations on four different neuroimaging datasets show that a
computational speedup factor of roughly $50\times$ can be achieved while
recovering the FWER distribution up to very high accuracy. Further, we show
that the estimated $\alpha$-threshold is also recovered faithfully, and is
stable.
|
[
{
"version": "v1",
"created": "Thu, 12 Feb 2015 04:30:06 GMT"
}
] | 2015-02-17T00:00:00 |
[
[
"Hinrichs",
"Chris",
""
],
[
"Ithapu",
"Vamsi K",
""
],
[
"Sun",
"Qinyuan",
""
],
[
"Johnson",
"Sterling C",
""
],
[
"Singh",
"Vikas",
""
]
] |
TITLE: Speeding up Permutation Testing in Neuroimaging
ABSTRACT: Multiple hypothesis testing is a significant problem in nearly all
neuroimaging studies. In order to correct for this phenomenon, we require a
reliable estimate of the Family-Wise Error Rate (FWER). The well known
Bonferroni correction method, while simple to implement, is quite conservative,
and can substantially under-power a study because it ignores dependencies
between test statistics. Permutation testing, on the other hand, is an exact,
non-parametric method of estimating the FWER for a given $\alpha$-threshold,
but for acceptably low thresholds the computational burden can be prohibitive.
In this paper, we show that permutation testing in fact amounts to populating
the columns of a very large matrix ${\bf P}$. By analyzing the spectrum of this
matrix, under certain conditions, we see that ${\bf P}$ has a low-rank plus a
low-variance residual decomposition which makes it suitable for highly
sub--sampled --- on the order of $0.5\%$ --- matrix completion methods. Based
on this observation, we propose a novel permutation testing methodology which
offers a large speedup, without sacrificing the fidelity of the estimated FWER.
Our evaluations on four different neuroimaging datasets show that a
computational speedup factor of roughly $50\times$ can be achieved while
recovering the FWER distribution up to very high accuracy. Further, we show
that the estimated $\alpha$-threshold is also recovered faithfully, and is
stable.
|
no_new_dataset
| 0.941654 |
1502.04132
|
Zhenzhong Lan
|
Zhenzhong Lan, Xuanchong Li, Ming Lin, Alexander G. Hauptmann
|
Long-short Term Motion Feature for Action Classification and Retrieval
|
arXiv admin note: text overlap with arXiv:1411.6660
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method for representing motion information for video
classification and retrieval. We improve upon local descriptor based methods
that have been among the most popular and successful models for representing
videos. The desired local descriptors need to satisfy two requirements: 1) to
be representative, 2) to be discriminative. Therefore, they need to occur
frequently enough in the videos and to be able to tell the difference among
different types of motions. To generate such local descriptors, the video
blocks they are based on must contain just the right amount of motion
information. However, current state-of-the-art local descriptor methods use
video blocks with a single fixed size, which is insufficient for covering
actions with varying speeds. In this paper, we introduce a long-short term
motion feature that generates descriptors from video blocks with multiple
lengths, thus covering motions with large speed variance. Experimental results
show that, albeit simple, our model achieves state-of-the-art results on
several benchmark datasets.
|
[
{
"version": "v1",
"created": "Fri, 13 Feb 2015 21:15:57 GMT"
}
] | 2015-02-17T00:00:00 |
[
[
"Lan",
"Zhenzhong",
""
],
[
"Li",
"Xuanchong",
""
],
[
"Lin",
"Ming",
""
],
[
"Hauptmann",
"Alexander G.",
""
]
] |
TITLE: Long-short Term Motion Feature for Action Classification and Retrieval
ABSTRACT: We propose a method for representing motion information for video
classification and retrieval. We improve upon local descriptor based methods
that have been among the most popular and successful models for representing
videos. The desired local descriptors need to satisfy two requirements: 1) to
be representative, 2) to be discriminative. Therefore, they need to occur
frequently enough in the videos and to be able to tell the difference among
different types of motions. To generate such local descriptors, the video
blocks they are based on must contain just the right amount of motion
information. However, current state-of-the-art local descriptor methods use
video blocks with a single fixed size, which is insufficient for covering
actions with varying speeds. In this paper, we introduce a long-short term
motion feature that generates descriptors from video blocks with multiple
lengths, thus covering motions with large speed variance. Experimental results
show that, albeit simple, our model achieves state-of-the-art results on
several benchmark datasets.
|
no_new_dataset
| 0.956063 |
1502.04220
|
Can Lu
|
Can Lu, Jeffrey Xu Yu, Rong-Hua Li, Hao Wei
|
Exploring Hierarchies in Online Social Networks
| null | null | null | null |
cs.SI cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social hierarchy (i.e., pyramid structure of societies) is a fundamental
concept in sociology and social network analysis. The importance of social
hierarchy in a social network is that the topological structure of the social
hierarchy is essential in both shaping the nature of social interactions
between individuals and unfolding the structure of the social networks. The
social hierarchy found in a social network can be utilized to improve the
accuracy of link prediction, provide better query results, rank web pages, and
study information flow and spread in complex networks. In this paper, we model
a social network as a directed graph G, and consider the social hierarchy as
DAG (directed acyclic graph) of G, denoted as GD. By DAG, all the vertices in G
can be partitioned into different levels, the vertices at the same level
represent a disjoint group in the social hierarchy, and all the edges in DAG
follow one direction. The main issue we study in this paper is how to find DAG
GD in G. The approach we take is to find GD by removing all possible cycles
from G such that G = U(G) + GD where U(G) is a maximum Eulerian subgraph which
contains all possible cycles. We give the reasons for doing so, investigate the
properties of GD found, and discuss the applications. In addition, we develop a
novel two-phase algorithm, called Greedy-&-Refine, which greedily computes an
Eulerian subgraph and then refines this greedy solution to find the maximum
Eulerian subgraph. We give a bound between the greedy solution and the optimal.
The quality of our greedy approach is high. We conduct comprehensive
experimental studies over 14 real-world datasets. The results show that our
algorithms are at least two orders of magnitude faster than the baseline
algorithm.
|
[
{
"version": "v1",
"created": "Sat, 14 Feb 2015 16:02:00 GMT"
}
] | 2015-02-17T00:00:00 |
[
[
"Lu",
"Can",
""
],
[
"Yu",
"Jeffrey Xu",
""
],
[
"Li",
"Rong-Hua",
""
],
[
"Wei",
"Hao",
""
]
] |
TITLE: Exploring Hierarchies in Online Social Networks
ABSTRACT: Social hierarchy (i.e., pyramid structure of societies) is a fundamental
concept in sociology and social network analysis. The importance of social
hierarchy in a social network is that the topological structure of the social
hierarchy is essential in both shaping the nature of social interactions
between individuals and unfolding the structure of the social networks. The
social hierarchy found in a social network can be utilized to improve the
accuracy of link prediction, provide better query results, rank web pages, and
study information flow and spread in complex networks. In this paper, we model
a social network as a directed graph G, and consider the social hierarchy as
DAG (directed acyclic graph) of G, denoted as GD. By DAG, all the vertices in G
can be partitioned into different levels, the vertices at the same level
represent a disjoint group in the social hierarchy, and all the edges in DAG
follow one direction. The main issue we study in this paper is how to find DAG
GD in G. The approach we take is to find GD by removing all possible cycles
from G such that G = U(G) + GD where U(G) is a maximum Eulerian subgraph which
contains all possible cycles. We give the reasons for doing so, investigate the
properties of GD found, and discuss the applications. In addition, we develop a
novel two-phase algorithm, called Greedy-&-Refine, which greedily computes an
Eulerian subgraph and then refines this greedy solution to find the maximum
Eulerian subgraph. We give a bound between the greedy solution and the optimal.
The quality of our greedy approach is high. We conduct comprehensive
experimental studies over 14 real-world datasets. The results show that our
algorithms are at least two orders of magnitude faster than the baseline
algorithm.
|
no_new_dataset
| 0.949059 |
1408.3873
|
Lorenzo Livi
|
Lorenzo Livi, Antonello Rizzi, Alireza Sadeghian
|
Classifying sequences by the optimized dissimilarity space embedding
approach: a case study on the solubility analysis of the E. coli proteome
|
10 pages, 49 references
| null |
10.3233/IFS-151550
| null |
cs.CV cs.AI physics.bio-ph q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We evaluate a version of the recently-proposed classification system named
Optimized Dissimilarity Space Embedding (ODSE) that operates in the input space
of sequences of generic objects. The ODSE system has been originally presented
as a classification system for patterns represented as labeled graphs. However,
since ODSE is founded on the dissimilarity space representation of the input
data, the classifier can be easily adapted to any input domain where it is
possible to define a meaningful dissimilarity measure. Here we demonstrate the
effectiveness of the ODSE classifier for sequences by considering an
application dealing with the recognition of the solubility degree of the
Escherichia coli proteome. Solubility, or analogously aggregation propensity,
is an important property of protein molecules, which is intimately related to
the mechanisms underlying the chemico-physical process of folding. Each protein
of our dataset is initially associated with a solubility degree and it is
represented as a sequence of symbols, denoting the 20 amino acid residues. The
herein obtained computational results, which we stress have been achieved
with no context-dependent tuning of the ODSE system, confirm the validity and
generality of the ODSE-based approach for structured data classification.
|
[
{
"version": "v1",
"created": "Sun, 17 Aug 2014 23:46:55 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Jan 2015 21:20:19 GMT"
}
] | 2015-02-16T00:00:00 |
[
[
"Livi",
"Lorenzo",
""
],
[
"Rizzi",
"Antonello",
""
],
[
"Sadeghian",
"Alireza",
""
]
] |
TITLE: Classifying sequences by the optimized dissimilarity space embedding
approach: a case study on the solubility analysis of the E. coli proteome
ABSTRACT: We evaluate a version of the recently-proposed classification system named
Optimized Dissimilarity Space Embedding (ODSE) that operates in the input space
of sequences of generic objects. The ODSE system has been originally presented
as a classification system for patterns represented as labeled graphs. However,
since ODSE is founded on the dissimilarity space representation of the input
data, the classifier can be easily adapted to any input domain where it is
possible to define a meaningful dissimilarity measure. Here we demonstrate the
effectiveness of the ODSE classifier for sequences by considering an
application dealing with the recognition of the solubility degree of the
Escherichia coli proteome. Solubility, or analogously aggregation propensity,
is an important property of protein molecules, which is intimately related to
the mechanisms underlying the chemico-physical process of folding. Each protein
of our dataset is initially associated with a solubility degree and it is
represented as a sequence of symbols, denoting the 20 amino acid residues. The
herein obtained computational results, which we stress have been achieved
with no context-dependent tuning of the ODSE system, confirm the validity and
generality of the ODSE-based approach for structured data classification.
|
no_new_dataset
| 0.939192 |
1409.2802
|
William March
|
William B. March and George Biros
|
Far-Field Compression for Fast Kernel Summation Methods in High
Dimensions
|
43 pages, 21 figures
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider fast kernel summations in high dimensions: given a large set of
points in $d$ dimensions (with $d \gg 3$) and a pair-potential function (the
{\em kernel} function), we compute a weighted sum of all pairwise kernel
interactions for each point in the set. Direct summation is equivalent to a
(dense) matrix-vector multiplication and scales quadratically with the number
of points. Fast kernel summation algorithms reduce this cost to log-linear or
linear complexity.
Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by
constructing approximate representations of interactions of points that are far
from each other. In algebraic terms, these representations correspond to
low-rank approximations of blocks of the overall interaction matrix. Existing
approaches require an excessive number of kernel evaluations with increasing
$d$ and number of points in the dataset.
To address this issue, we use a randomized algebraic approach in which we
first sample the rows of a block and then construct its approximate, low-rank
interpolative decomposition. We examine the feasibility of this approach
theoretically and experimentally. We provide a new theoretical result showing a
tighter bound on the reconstruction error from uniformly sampling rows than the
existing state-of-the-art. We demonstrate that our sampling approach is
competitive with existing (but prohibitively expensive) methods from the
literature. We also construct kernel matrices for the Laplacian, Gaussian, and
polynomial kernels -- all commonly used in physics and data analysis. We
explore the numerical properties of blocks of these matrices, and show that
they are amenable to our approach. Depending on the data set, our randomized
algorithm can successfully compute low rank approximations in high dimensions.
We report results for data sets with ambient dimensions from four to 1,000.
|
[
{
"version": "v1",
"created": "Tue, 9 Sep 2014 16:28:40 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Feb 2015 00:30:28 GMT"
}
] | 2015-02-16T00:00:00 |
[
[
"March",
"William B.",
""
],
[
"Biros",
"George",
""
]
] |
TITLE: Far-Field Compression for Fast Kernel Summation Methods in High
Dimensions
ABSTRACT: We consider fast kernel summations in high dimensions: given a large set of
points in $d$ dimensions (with $d \gg 3$) and a pair-potential function (the
{\em kernel} function), we compute a weighted sum of all pairwise kernel
interactions for each point in the set. Direct summation is equivalent to a
(dense) matrix-vector multiplication and scales quadratically with the number
of points. Fast kernel summation algorithms reduce this cost to log-linear or
linear complexity.
Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by
constructing approximate representations of interactions of points that are far
from each other. In algebraic terms, these representations correspond to
low-rank approximations of blocks of the overall interaction matrix. Existing
approaches require an excessive number of kernel evaluations with increasing
$d$ and number of points in the dataset.
To address this issue, we use a randomized algebraic approach in which we
first sample the rows of a block and then construct its approximate, low-rank
interpolative decomposition. We examine the feasibility of this approach
theoretically and experimentally. We provide a new theoretical result showing a
tighter bound on the reconstruction error from uniformly sampling rows than the
existing state-of-the-art. We demonstrate that our sampling approach is
competitive with existing (but prohibitively expensive) methods from the
literature. We also construct kernel matrices for the Laplacian, Gaussian, and
polynomial kernels -- all commonly used in physics and data analysis. We
explore the numerical properties of blocks of these matrices, and show that
they are amenable to our approach. Depending on the data set, our randomized
algorithm can successfully compute low rank approximations in high dimensions.
We report results for data sets with ambient dimensions from four to 1,000.
|
no_new_dataset
| 0.941708 |
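For readers who want a concrete picture of the row-sampling idea in the Far-Field Compression record above, here is a minimal NumPy/SciPy sketch: uniformly sample rows of a Gaussian-kernel interaction block, pick skeleton columns with a pivoted QR on those rows, and express all columns through them. The kernel choice, sizes, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import qr, lstsq

def gaussian_kernel_block(targets, sources, bandwidth=1.0):
    """Dense kernel block K[i, j] = exp(-||t_i - s_j||^2 / (2 h^2))."""
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def sampled_interpolative_approx(K, rank, n_samples, rng):
    """Low-rank approx K ~= K[:, cols] @ coeffs using only sampled rows."""
    rows = rng.choice(K.shape[0], size=n_samples, replace=False)
    K_rows = K[rows, :]                      # the only rows we "evaluate"
    # Pivoted QR on the sampled rows selects `rank` skeleton columns.
    _, _, piv = qr(K_rows, pivoting=True, mode='economic')
    cols = piv[:rank]
    # Express every column of the sampled block through the skeleton columns.
    coeffs, *_ = lstsq(K_rows[:, cols], K_rows)
    return cols, coeffs                      # K ~= K[:, cols] @ coeffs

rng = np.random.default_rng(0)
targets = rng.standard_normal((500, 20))            # 20-dimensional points
sources = rng.standard_normal((400, 20)) + 3.0      # a well-separated block
K = gaussian_kernel_block(targets, sources, bandwidth=4.0)
cols, coeffs = sampled_interpolative_approx(K, rank=20, n_samples=60, rng=rng)
err = np.linalg.norm(K - K[:, cols] @ coeffs) / np.linalg.norm(K)
print(f"relative reconstruction error: {err:.2e}")
```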
1502.03845
|
Alessandro Provetti
|
Biagio Bonasera, Emilio Ferrara, Giacomo Fiumara, Francesco Pagano,
Alessandro Provetti
|
Adaptive Search over Sorted Sets
|
9 pages
|
Journal of Discrete Algorithms, Volume 30, 2015, pp. 128--133
|
10.1016/j.jda.2014.12.007
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the classical algorithms for searching over sorted sets to
introduce an algorithm refinement, called Adaptive Search, that combines the
good features of Interpolation search and those of Binary search. W.r.t.
Interpolation search, only a constant number of extra comparisons is
introduced. Yet, under diverse input data distributions our algorithm shows
costs comparable to that of Interpolation search, i.e., O(log log n) while the
worst-case cost is always in O(log n), as with Binary search. On benchmarks
drawn from large datasets, both synthetic and real-life, Adaptive search achieves
better times and fewer memory accesses than even Santoro and Sidney's
Interpolation-Binary search.
|
[
{
"version": "v1",
"created": "Thu, 12 Feb 2015 22:12:54 GMT"
}
] | 2015-02-16T00:00:00 |
[
[
"Bonasera",
"Biagio",
""
],
[
"Ferrara",
"Emilio",
""
],
[
"Fiumara",
"Giacomo",
""
],
[
"Pagano",
"Francesco",
""
],
[
"Provetti",
"Alessandro",
""
]
] |
TITLE: Adaptive Search over Sorted Sets
ABSTRACT: We revisit the classical algorithms for searching over sorted sets to
introduce an algorithm refinement, called Adaptive Search, that combines the
good features of Interpolation search and those of Binary search. W.r.t.
Interpolation search, only a constant number of extra comparisons is
introduced. Yet, under diverse input data distributions our algorithm shows
costs comparable to that of Interpolation search, i.e., O(log log n) while the
worst-case cost is always in O(log n), as with Binary search. On benchmarks
drawn from large datasets, both synthetic and real-life, Adaptive search achieves
better times and fewer memory accesses than even Santoro and Sidney's
Interpolation-Binary search.
|
no_new_dataset
| 0.950088 |
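As a rough illustration of the idea behind the Adaptive Search record above, the hypothetical sketch below alternates interpolation probes with plain bisection steps, which keeps the worst case logarithmic while exploiting interpolation on benign value distributions. It is not the paper's exact algorithm; all names and details are assumptions.

```python
def adaptive_search(arr, key):
    """Search a sorted list with alternating interpolation and bisection probes.
    Illustrative sketch only, not the paper's algorithm. Returns an index or -1."""
    lo, hi, use_interp = 0, len(arr) - 1, True
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if use_interp and arr[hi] > arr[lo]:
            # Interpolation step: probe where the key "should" sit in the range.
            probe = lo + int((key - arr[lo]) * (hi - lo) / (arr[hi] - arr[lo]))
        else:
            # Bisection step keeps the worst case O(log n).
            probe = (lo + hi) // 2
        if arr[probe] == key:
            return probe
        if arr[probe] < key:
            lo = probe + 1
        else:
            hi = probe - 1
        use_interp = not use_interp
    return -1

data = [3, 7, 7, 12, 19, 24, 31, 48, 50, 66]
print(adaptive_search(data, 24), adaptive_search(data, 25))  # index of 24, then -1
```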
1502.03851
|
Arash Vahdat
|
Mehran Khodabandeh, Arash Vahdat, Guang-Tong Zhou, Hossein
Hajimirsadeghi, Mehrsan Javan Roshtkhari, Greg Mori, Stephen Se
|
Discovering Human Interactions in Videos with Limited Data Labeling
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel approach for discovering human interactions in videos.
Activity understanding techniques usually require a large number of labeled
examples, which are not available in many practical cases. Here, we focus on
recovering semantically meaningful clusters of human-human and human-object
interaction in an unsupervised fashion. A new iterative solution is introduced
based on Maximum Margin Clustering (MMC), which also accepts user feedback to
refine clusters. This is achieved by formulating the whole process as a unified
constrained latent max-margin clustering problem. Extensive experiments have
been carried out over three challenging datasets, Collective Activity, VIRAT,
and UT-interaction. Empirical results demonstrate that the proposed algorithm
can efficiently discover perfect semantic clusters of human interactions with
only a small amount of labeling effort.
|
[
{
"version": "v1",
"created": "Thu, 12 Feb 2015 22:38:28 GMT"
}
] | 2015-02-16T00:00:00 |
[
[
"Khodabandeh",
"Mehran",
""
],
[
"Vahdat",
"Arash",
""
],
[
"Zhou",
"Guang-Tong",
""
],
[
"Hajimirsadeghi",
"Hossein",
""
],
[
"Roshtkhari",
"Mehrsan Javan",
""
],
[
"Mori",
"Greg",
""
],
[
"Se",
"Stephen",
""
]
] |
TITLE: Discovering Human Interactions in Videos with Limited Data Labeling
ABSTRACT: We present a novel approach for discovering human interactions in videos.
Activity understanding techniques usually require a large number of labeled
examples, which are not available in many practical cases. Here, we focus on
recovering semantically meaningful clusters of human-human and human-object
interaction in an unsupervised fashion. A new iterative solution is introduced
based on Maximum Margin Clustering (MMC), which also accepts user feedback to
refine clusters. This is achieved by formulating the whole process as a unified
constrained latent max-margin clustering problem. Extensive experiments have
been carried out over three challenging datasets, Collective Activity, VIRAT,
and UT-interaction. Empirical results demonstrate that the proposed algorithm
can efficiently discover perfect semantic clusters of human interactions with
only a small amount of labeling effort.
|
no_new_dataset
| 0.945751 |
1502.03879
|
Weiya Ren
|
Weiya Ren
|
Semi-supervised Data Representation via Affinity Graph Learning
|
10 pages,2 Tables. Written in Aug,2013
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the general problem of utilizing both labeled and unlabeled data
to improve data representation performance. A new semi-supervised learning
framework is proposed by combining manifold regularization and data
representation methods such as non-negative matrix factorization and sparse
coding. We adopt unsupervised data representation methods as the learning
machines because they do not depend on the labeled data, which can improve the
machine's generalization ability as much as possible. The proposed framework forms
the Laplacian regularizer through learning the affinity graph. We incorporate
the new Laplacian regularizer into the unsupervised data representation to
smooth the low dimensional representation of data and make use of label
information. Experimental results on several real benchmark datasets indicate
that our semi-supervised learning framework achieves encouraging results
compared with state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Fri, 13 Feb 2015 03:35:15 GMT"
}
] | 2015-02-16T00:00:00 |
[
[
"Ren",
"Weiya",
""
]
] |
TITLE: Semi-supervised Data Representation via Affinity Graph Learning
ABSTRACT: We consider the general problem of utilizing both labeled and unlabeled data
to improve data representation performance. A new semi-supervised learning
framework is proposed by combining manifold regularization and data
representation methods such as non-negative matrix factorization and sparse
coding. We adopt unsupervised data representation methods as the learning
machines because they do not depend on the labeled data, which can improve the
machine's generalization ability as much as possible. The proposed framework forms
the Laplacian regularizer through learning the affinity graph. We incorporate
the new Laplacian regularizer into the unsupervised data representation to
smooth the low dimensional representation of data and make use of label
information. Experimental results on several real benchmark datasets indicate
that our semi-supervised learning framework achieves encouraging results
compared with state-of-the-art methods.
|
no_new_dataset
| 0.948965 |
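The affinity-graph Laplacian regularizer mentioned in the record above is a standard construction; the sketch below shows one minimal version (a k-nearest-neighbour Gaussian affinity graph and the smoothness penalty tr(H^T L H)). Parameter choices and function names are illustrative assumptions, not the paper's framework.

```python
import numpy as np

def knn_affinity(X, k=5, sigma=1.0):
    """Symmetric k-nearest-neighbour affinity graph with Gaussian weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    for i in range(X.shape[0]):
        nn = np.argsort(d2[i])[1:k + 1]          # skip the point itself
        W[i, nn] = np.exp(-d2[i, nn] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                     # symmetrise

def laplacian_smoothness(H, W):
    """Manifold regulariser tr(H^T L H) = 1/2 * sum_ij W_ij ||h_i - h_j||^2."""
    L = np.diag(W.sum(1)) - W
    return np.trace(H.T @ L @ H)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))               # toy data
H = rng.standard_normal((100, 3))                # some low-dimensional codes
W = knn_affinity(X, k=5)
print("smoothness penalty:", laplacian_smoothness(H, W))
```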
1502.04064
|
Esa Niemi
|
Keijo Hämäläinen, Lauri Harhanen, Aki Kallonen, Antti
Kujanpää, Esa Niemi and Samuli Siltanen
|
Tomographic X-ray data of a walnut
| null | null | null | null |
physics.data-an physics.med-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This is the documentation of the tomographic X-ray data of a walnut made
available at http://www.fips.fi/dataset.php . The data can be freely used for
scientific purposes with appropriate references to the data and to this
document in arXiv. The data set consists of (1) the X-ray sinogram of a single
2D slice of the walnut with three different resolutions and (2) the
corresponding measurement matrices modeling the linear operation of the X-ray
transform. Each of these sinograms was obtained from a measured 120-projection
fan-beam sinogram by down-sampling and taking logarithms. The original
(measured) sinogram is also provided in its original form and resolution. In
addition, a larger set of 1200 projections of the same walnut was measured and
a high-resolution filtered back-projection reconstruction was computed from
this data; both the sinogram and the FBP reconstruction are included in the
data set, the latter serving as a ground truth reconstruction.
|
[
{
"version": "v1",
"created": "Wed, 11 Feb 2015 21:39:36 GMT"
}
] | 2015-02-16T00:00:00 |
[
[
"Hämäläinen",
"Keijo",
""
],
[
"Harhanen",
"Lauri",
""
],
[
"Kallonen",
"Aki",
""
],
[
"Kujanpää",
"Antti",
""
],
[
"Niemi",
"Esa",
""
],
[
"Siltanen",
"Samuli",
""
]
] |
TITLE: Tomographic X-ray data of a walnut
ABSTRACT: This is the documentation of the tomographic X-ray data of a walnut made
available at http://www.fips.fi/dataset.php . The data can be freely used for
scientific purposes with appropriate references to the data and to this
document in arXiv. The data set consists of (1) the X-ray sinogram of a single
2D slice of the walnut with three different resolutions and (2) the
corresponding measurement matrices modeling the linear operation of the X-ray
transform. Each of these sinograms was obtained from a measured 120-projection
fan-beam sinogram by down-sampling and taking logarithms. The original
(measured) sinogram is also provided in its original form and resolution. In
addition, a larger set of 1200 projections of the same walnut was measured and
a high-resolution filtered back-projection reconstruction was computed from
this data; both the sinogram and the FBP reconstruction are included in the
data set, the latter serving as a ground truth reconstruction.
|
no_new_dataset
| 0.931711 |
1408.6615
|
Shervin Minaee
|
Shervin Minaee and AmirAli Abdolrashidi
|
Multispectral Palmprint Recognition Using Textural Features
|
5 pages, Published in IEEE Signal Processing in Medicine and Biology
Symposium 2014
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to utilize identification to the best extent, we need robust and
fast algorithms and systems to process the data. Having palmprint as a reliable
and unique characteristic of every person, we extract and use its features
based on its geometry, lines and angles. There are countless ways to define
measures for the recognition task. To analyze a new point of view, we extracted
textural features and used them for palmprint recognition. Co-occurrence matrix
can be used for textural feature extraction. As classifiers, we have used the
minimum distance classifier (MDC) and the weighted majority voting system
(WMV). The proposed method is tested on a well-known multispectral palmprint
dataset of 6000 samples and an accuracy rate of 99.96-100% is obtained for most
scenarios which outperforms all previous works in multispectral palmprint
recognition.
|
[
{
"version": "v1",
"created": "Thu, 28 Aug 2014 03:20:38 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Sep 2014 04:49:30 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Feb 2015 03:03:02 GMT"
}
] | 2015-02-13T00:00:00 |
[
[
"Minaee",
"Shervin",
""
],
[
"Abdolrashidi",
"AmirAli",
""
]
] |
TITLE: Multispectral Palmprint Recognition Using Textural Features
ABSTRACT: In order to utilize identification to the best extent, we need robust and
fast algorithms and systems to process the data. Having palmprint as a reliable
and unique characteristic of every person, we extract and use its features
based on its geometry, lines and angles. There are countless ways to define
measures for the recognition task. To analyze a new point of view, we extracted
textural features and used them for palmprint recognition. Co-occurrence matrix
can be used for textural feature extraction. As classifiers, we have used the
minimum distance classifier (MDC) and the weighted majority voting system
(WMV). The proposed method is tested on a well-known multispectral palmprint
dataset of 6000 samples and an accuracy rate of 99.96-100% is obtained for most
scenarios which outperforms all previous works in multispectral palmprint
recognition.
|
no_new_dataset
| 0.945399 |
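To make the co-occurrence-based pipeline in the palmprint record above concrete, here is a hypothetical sketch: a hand-rolled gray-level co-occurrence matrix, a few classical texture statistics, and a minimum distance (nearest class-mean) classifier. The offsets, feature set, and the random stand-in images are assumptions, not the paper's configuration.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised gray-level co-occurrence matrix for one pixel offset."""
    q = (img.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    C = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            C[q[y, x], q[y + dy, x + dx]] += 1
    return C / C.sum()

def texture_features(img):
    """Contrast, energy and inverse-difference features from two offsets."""
    feats = []
    for dx, dy in [(1, 0), (0, 1)]:
        P = glcm(img, dx, dy)
        i, j = np.indices(P.shape)
        feats += [np.sum(P * (i - j) ** 2),          # contrast
                  np.sum(P ** 2),                    # energy
                  np.sum(P / (1 + np.abs(i - j)))]   # inverse difference
    return np.array(feats)

def minimum_distance_classifier(train_feats, train_labels, test_feat):
    """Assign the class whose mean feature vector is closest (MDC)."""
    classes = np.unique(train_labels)
    means = np.array([train_feats[train_labels == c].mean(0) for c in classes])
    return classes[np.argmin(np.linalg.norm(means - test_feat, axis=1))]

# Toy usage with random patches standing in for real palm ROI images.
rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(20, 64, 64))
labels = np.repeat(np.arange(4), 5)                 # 4 subjects, 5 samples each
F = np.array([texture_features(im) for im in imgs])
print("predicted subject:", minimum_distance_classifier(F[:-1], labels[:-1], F[-1]))
```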
1502.03556
|
Md. Hanif Seddiqui
|
Md. Hanif Seddiqui, Rudra Pratap Deb Nath, Masaki Aono
|
An Efficient Metric of Automatic Weight Generation for Properties in
Instance Matching Technique
|
17 pages, 5 figures, 3 tables, pp. 1-17, publication year 2015,
journal publication, vol. 6 number 1
|
Journal of Web and Semantic Technology (IJWeST), vol.6 no.1, pp.
1-17 (2015)
|
10.5121/ijwest.2015.6101
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The proliferation of heterogeneous data sources of semantic knowledge bases
intensifies the need for an automatic instance matching technique. However, the
efficiency of instance matching is often influenced by the weight of a property
associated with instances. Automatic weight generation is a non-trivial yet
important task in instance matching. Identifying an appropriate metric for
generating the weight of a property automatically is therefore a formidable
task. In this paper, we investigate an approach of
generating weights automatically by considering hypotheses: (1) the weight of a
property is directly proportional to the ratio of the number of its distinct
values to the number of instances that contain the property, and (2) the weight is
also proportional to the ratio of the number of distinct values of a property
to the number of instances in a training dataset. The basic intuition behind
the use of our approach is the classical theory of information content that
infrequent words are more informative than frequent ones. Our mathematical
model derives a metric for generating property weights automatically, which is
applied in an instance matching system to produce reconciled instances
efficiently. Our experiments and evaluations show the effectiveness of our
proposed metric of automatic weight generation for properties in an instance
matching technique.
|
[
{
"version": "v1",
"created": "Thu, 12 Feb 2015 07:51:39 GMT"
}
] | 2015-02-13T00:00:00 |
[
[
"Seddiqui",
"Md. Hanif",
""
],
[
"Nath",
"Rudra Pratap Deb",
""
],
[
"Aono",
"Masaki",
""
]
] |
TITLE: An Efficient Metric of Automatic Weight Generation for Properties in
Instance Matching Technique
ABSTRACT: The proliferation of heterogeneous data sources of semantic knowledge bases
intensifies the need for an automatic instance matching technique. However, the
efficiency of instance matching is often influenced by the weight of a property
associated with instances. Automatic weight generation is a non-trivial yet
important task in instance matching. Identifying an appropriate metric for
generating the weight of a property automatically is therefore a formidable
task. In this paper, we investigate an approach of
generating weights automatically by considering hypotheses: (1) the weight of a
property is directly proportional to the ratio of the number of its distinct
values to the number of instances that contain the property, and (2) the weight is
also proportional to the ratio of the number of distinct values of a property
to the number of instances in a training dataset. The basic intuition behind
the use of our approach is the classical theory of information content that
infrequent words are more informative than frequent ones. Our mathematical
model derives a metric for generating property weights automatically, which is
applied in an instance matching system to produce reconciled instances
efficiently. Our experiments and evaluations show the effectiveness of our
proposed metric of automatic weight generation for properties in an instance
matching technique.
|
no_new_dataset
| 0.953794 |
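A minimal sketch of the two proportionality hypotheses stated in the instance-matching record above, with their product used as a combined property weight; the combination rule and all identifiers are illustrative assumptions rather than the paper's exact metric.

```python
from collections import defaultdict

def property_weights(instances):
    """Weight each property by the two ratios stated in the abstract above:
    distinct values / #instances containing the property, and
    distinct values / #instances in the training set. Their product is used
    here as a simple combined score (an illustrative choice, not the paper's)."""
    n_total = len(instances)
    values = defaultdict(set)       # property -> set of distinct values seen
    counts = defaultdict(int)       # property -> #instances containing it
    for inst in instances:
        for prop, val in inst.items():
            values[prop].add(val)
            counts[prop] += 1
    return {p: (len(values[p]) / counts[p]) * (len(values[p]) / n_total)
            for p in values}

people = [
    {"name": "Alice Smith", "city": "Oslo",   "gender": "F"},
    {"name": "Bob Jones",   "city": "Oslo",   "gender": "M"},
    {"name": "Carol White", "city": "Bergen", "gender": "F"},
    {"name": "Dan Brown",   "city": "Oslo",   "gender": "M"},
]
for prop, w in sorted(property_weights(people).items(), key=lambda x: -x[1]):
    print(f"{prop:>8}: {w:.3f}")   # 'name' (all values distinct) scores highest
```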
1410.2922
|
Ravishankar Sundararaman
|
Ravishankar Sundararaman and William A. Goddard III
|
The charge-asymmetric nonlocally-determined local-electric (CANDLE)
solvation model
|
8 pages, 6 figures
|
J. Chem. Phys. 142, 064107 (2015)
|
10.1063/1.4907731
| null |
physics.chem-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many important applications of electronic structure methods involve molecules
or solid surfaces in a solvent medium. Since explicit treatment of the solvent
in such methods is usually not practical, calculations often employ continuum
solvation models to approximate the effect of the solvent. Previous solvation
models involve a parametrization based either on atomic radii, which limits the
class of applicable solutes, or on solute electron density, which is more
general but less accurate, especially for charged systems. We develop an
accurate and general solvation model that includes a cavity that is a nonlocal
functional of both solute electron density and potential, local dielectric
response on this nonlocally-determined cavity, and nonlocal approximations to
the cavity-formation and dispersion energies. The dependence of the cavity on
the solute potential enables an explicit treatment of the solvent charge
asymmetry. With only three parameters per solvent, this `CANDLE' model
simultaneously reproduces solvation energies of large datasets of neutral
molecules, cations and anions with a mean absolute error of 1.8 kcal/mol in
water and 3.0 kcal/mol in acetonitrile.
|
[
{
"version": "v1",
"created": "Fri, 10 Oct 2014 22:33:44 GMT"
}
] | 2015-02-12T00:00:00 |
[
[
"Sundararaman",
"Ravishankar",
""
],
[
"Goddard",
"William A.",
"III"
]
] |
TITLE: The charge-asymmetric nonlocally-determined local-electric (CANDLE)
solvation model
ABSTRACT: Many important applications of electronic structure methods involve molecules
or solid surfaces in a solvent medium. Since explicit treatment of the solvent
in such methods is usually not practical, calculations often employ continuum
solvation models to approximate the effect of the solvent. Previous solvation
models involve a parametrization based either on atomic radii, which limits the
class of applicable solutes, or on solute electron density, which is more
general but less accurate, especially for charged systems. We develop an
accurate and general solvation model that includes a cavity that is a nonlocal
functional of both solute electron density and potential, local dielectric
response on this nonlocally-determined cavity, and nonlocal approximations to
the cavity-formation and dispersion energies. The dependence of the cavity on
the solute potential enables an explicit treatment of the solvent charge
asymmetry. With only three parameters per solvent, this `CANDLE' model
simultaneously reproduces solvation energies of large datasets of neutral
molecules, cations and anions with a mean absolute error of 1.8 kcal/mol in
water and 3.0 kcal/mol in acetonitrile.
|
no_new_dataset
| 0.949248 |
1502.00996
|
Brian Thomas
|
Brian Thomas, Tim Jenness, Frossie Economou, Perry Greenfield, Paul
Hirst, David S. Berry, Erik Bray, Norman Gray, Demitri Muna, James Turner,
Miguel de Val-Borro, Juande Santander-Vela, David Shupe, John Good, G. Bruce
Berriman, Slava Kitaeff, Jonathan Fay, Omar Laurino, Anastasia Alexov, Walter
Landry, Joe Masters, Adam Brazier, Reinhold Schaaf, Kevin Edwards, Russell O.
Redman, Thomas R. Marsh, Ole Streicher, Pat Norris, Sergio Pascual, Matthew
Davie, Michael Droettboom, Thomas Robitaille, Riccardo Campana, Alex Hagen,
Paul Hartogh, Dominik Klaes, Matthew W. Craig, Derek Homeier
|
Learning from FITS: Limitations in use in modern astronomical research
| null | null |
10.1016/j.ascom.2015.01.009
| null |
astro-ph.IM cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Flexible Image Transport System (FITS) standard has been a great boon to
astronomy, allowing observatories, scientists and the public to exchange
astronomical information easily. The FITS standard, however, is showing its
age. Developed in the late 1970s, the FITS authors made a number of
implementation choices that, while common at the time, are now seen to limit
its utility with modern data. The authors of the FITS standard could not
anticipate the challenges which we are facing today in astronomical computing.
Difficulties we now face include, but are not limited to, addressing the need
to handle an expanded range of specialized data product types (data models),
being more conducive to the networked exchange and storage of data, handling
very large datasets, and capturing significantly more complex metadata and data
relationships.
There are members of the community today who find some or all of these
limitations unworkable, and have decided to move ahead with storing data in
other formats. If this fragmentation continues, we risk abandoning the
advantages of broad interoperability, and ready archivability, that the FITS
format provides for astronomy. In this paper we detail some selected important
problems which exist within the FITS standard today. These problems may provide
insight into deeper underlying issues which reside in the format and we provide
a discussion of some lessons learned. It is not our intention here to prescribe
specific remedies to these issues; rather, it is to call attention of the FITS
and greater astronomical computing communities to these problems in the hope
that it will spur action to address them.
|
[
{
"version": "v1",
"created": "Tue, 3 Feb 2015 20:27:29 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Feb 2015 21:35:57 GMT"
}
] | 2015-02-12T00:00:00 |
[
[
"Thomas",
"Brian",
""
],
[
"Jenness",
"Tim",
""
],
[
"Economou",
"Frossie",
""
],
[
"Greenfield",
"Perry",
""
],
[
"Hirst",
"Paul",
""
],
[
"Berry",
"David S.",
""
],
[
"Bray",
"Erik",
""
],
[
"Gray",
"Norman",
""
],
[
"Muna",
"Demitri",
""
],
[
"Turner",
"James",
""
],
[
"de Val-Borro",
"Miguel",
""
],
[
"Santander-Vela",
"Juande",
""
],
[
"Shupe",
"David",
""
],
[
"Good",
"John",
""
],
[
"Berriman",
"G. Bruce",
""
],
[
"Kitaeff",
"Slava",
""
],
[
"Fay",
"Jonathan",
""
],
[
"Laurino",
"Omar",
""
],
[
"Alexov",
"Anastasia",
""
],
[
"Landry",
"Walter",
""
],
[
"Masters",
"Joe",
""
],
[
"Brazier",
"Adam",
""
],
[
"Schaaf",
"Reinhold",
""
],
[
"Edwards",
"Kevin",
""
],
[
"Redman",
"Russell O.",
""
],
[
"Marsh",
"Thomas R.",
""
],
[
"Streicher",
"Ole",
""
],
[
"Norris",
"Pat",
""
],
[
"Pascual",
"Sergio",
""
],
[
"Davie",
"Matthew",
""
],
[
"Droettboom",
"Michael",
""
],
[
"Robitaille",
"Thomas",
""
],
[
"Campana",
"Riccardo",
""
],
[
"Hagen",
"Alex",
""
],
[
"Hartogh",
"Paul",
""
],
[
"Klaes",
"Dominik",
""
],
[
"Craig",
"Matthew W.",
""
],
[
"Homeier",
"Derek",
""
]
] |
TITLE: Learning from FITS: Limitations in use in modern astronomical research
ABSTRACT: The Flexible Image Transport System (FITS) standard has been a great boon to
astronomy, allowing observatories, scientists and the public to exchange
astronomical information easily. The FITS standard, however, is showing its
age. Developed in the late 1970s, the FITS authors made a number of
implementation choices that, while common at the time, are now seen to limit
its utility with modern data. The authors of the FITS standard could not
anticipate the challenges which we are facing today in astronomical computing.
Difficulties we now face include, but are not limited to, addressing the need
to handle an expanded range of specialized data product types (data models),
being more conducive to the networked exchange and storage of data, handling
very large datasets, and capturing significantly more complex metadata and data
relationships.
There are members of the community today who find some or all of these
limitations unworkable, and have decided to move ahead with storing data in
other formats. If this fragmentation continues, we risk abandoning the
advantages of broad interoperability, and ready archivability, that the FITS
format provides for astronomy. In this paper we detail some selected important
problems which exist within the FITS standard today. These problems may provide
insight into deeper underlying issues which reside in the format and we provide
a discussion of some lessons learned. It is not our intention here to prescribe
specific remedies to these issues; rather, it is to call attention of the FITS
and greater astronomical computing communities to these problems in the hope
that it will spur action to address them.
|
no_new_dataset
| 0.938801 |
1502.02772
|
Kean Lau Hong
|
Kean Hong Lau, Yong Haur Tay, Fook Loong Lo
|
A HMAX with LLC for visual recognition
|
10 pages, 3 figures, 2 tables, 23 references
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today's high-performance deep artificial neural networks (ANNs) rely heavily
on parameter optimization, which is sequential in nature and, even with a
powerful GPU, can take weeks of training for challenging
tasks [22]. HMAX [17] has demonstrated that a simple high-performing network
could be obtained without heavy optimization. In this paper, we have improved on
the existing best HMAX neural network [12] in terms of structural simplicity
and performance. Our design replaces the L1 minimization sparse coding (SC)
with a locality-constrained linear coding (LLC) [20] which has a lower
computational demand. We also put the simple orientation filter bank back into
the front layer of the network replacing PCA. Our system's performance has
improved over the existing architecture and reached 79.0% on the challenging
Caltech-101 [7] dataset, which is state-of-the-art for ANNs (without transfer
learning). From our empirical data, the main contributors to our system's
performance include an introduction of partial signal whitening, a spot
detector, and a spatial pyramid matching (SPM) [14] layer.
|
[
{
"version": "v1",
"created": "Tue, 10 Feb 2015 04:01:43 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Feb 2015 04:40:20 GMT"
}
] | 2015-02-12T00:00:00 |
[
[
"Lau",
"Kean Hong",
""
],
[
"Tay",
"Yong Haur",
""
],
[
"Lo",
"Fook Loong",
""
]
] |
TITLE: A HMAX with LLC for visual recognition
ABSTRACT: Today's high-performance deep artificial neural networks (ANNs) rely heavily
on parameter optimization, which is sequential in nature and, even with a
powerful GPU, can take weeks of training for challenging
tasks [22]. HMAX [17] has demonstrated that a simple high-performing network
could be obtained without heavy optimization. In this paper, we have improved on
the existing best HMAX neural network [12] in terms of structural simplicity
and performance. Our design replaces the L1 minimization sparse coding (SC)
with a locality-constrained linear coding (LLC) [20] which has a lower
computational demand. We also put the simple orientation filter bank back into
the front layer of the network replacing PCA. Our system's performance has
improved over the existing architecture and reached 79.0% on the challenging
Caltech-101 [7] dataset, which is state-of-the-art for ANNs (without transfer
learning). From our empirical data, the main contributors to our system's
performance include an introduction of partial signal whitening, a spot
detector, and a spatial pyramid matching (SPM) [14] layer.
|
no_new_dataset
| 0.945851 |
1502.03406
|
Adeline Decuyper
|
Vincent D. Blondel, Adeline Decuyper, Gautier Krings
|
A survey of results on mobile phone datasets analysis
| null | null | null | null |
physics.soc-ph cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we review some advances made recently in the study of mobile
phone datasets. This area of research emerged a decade ago, with the
increasing availability of large-scale anonymized datasets, and has grown into
a stand-alone topic. We will survey the contributions made so far on the social
networks that can be constructed with such data, the study of personal
mobility, geographical partitioning, urban planning, and help towards
development as well as security and privacy issues.
|
[
{
"version": "v1",
"created": "Wed, 11 Feb 2015 18:58:12 GMT"
}
] | 2015-02-12T00:00:00 |
[
[
"Blondel",
"Vincent D.",
""
],
[
"Decuyper",
"Adeline",
""
],
[
"Krings",
"Gautier",
""
]
] |
TITLE: A survey of results on mobile phone datasets analysis
ABSTRACT: In this paper, we review some advances made recently in the study of mobile
phone datasets. This area of research emerged a decade ago, with the
increasing availability of large-scale anonymized datasets, and has grown into
a stand-alone topic. We will survey the contributions made so far on the social
networks that can be constructed with such data, the study of personal
mobility, geographical partitioning, urban planning, and help towards
development as well as security and privacy issues.
|
no_new_dataset
| 0.942135 |
1502.03409
|
Karl Ni
|
Karl Ni, Roger Pearce, Kofi Boakye, Brian Van Essen, Damian Borth,
Barry Chen, Eric Wang
|
Large-Scale Deep Learning on the YFCC100M Dataset
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/publicdomain/
|
We present a work-in-progress snapshot of learning with a 15 billion
parameter deep learning network on HPC architectures applied to the largest
publicly available natural image and video dataset released to-date. Recent
advancements in unsupervised deep neural networks suggest that scaling up such
networks in both model and training dataset size can yield significant
improvements in the learning of concepts at the highest layers. We train our
three-layer deep neural network on the Yahoo! Flickr Creative Commons 100M
dataset. The dataset comprises approximately 99.2 million images and 800,000
user-created videos from Yahoo's Flickr image and video sharing platform.
Training of our network takes eight days on 98 GPU nodes at the High
Performance Computing Center at Lawrence Livermore National Laboratory.
Encouraging preliminary results and future research directions are presented
and discussed.
|
[
{
"version": "v1",
"created": "Wed, 11 Feb 2015 19:24:36 GMT"
}
] | 2015-02-12T00:00:00 |
[
[
"Ni",
"Karl",
""
],
[
"Pearce",
"Roger",
""
],
[
"Boakye",
"Kofi",
""
],
[
"Van Essen",
"Brian",
""
],
[
"Borth",
"Damian",
""
],
[
"Chen",
"Barry",
""
],
[
"Wang",
"Eric",
""
]
] |
TITLE: Large-Scale Deep Learning on the YFCC100M Dataset
ABSTRACT: We present a work-in-progress snapshot of learning with a 15 billion
parameter deep learning network on HPC architectures applied to the largest
publicly available natural image and video dataset released to-date. Recent
advancements in unsupervised deep neural networks suggest that scaling up such
networks in both model and training dataset size can yield significant
improvements in the learning of concepts at the highest layers. We train our
three-layer deep neural network on the Yahoo! Flickr Creative Commons 100M
dataset. The dataset comprises approximately 99.2 million images and 800,000
user-created videos from Yahoo's Flickr image and video sharing platform.
Training of our network takes eight days on 98 GPU nodes at the High
Performance Computing Center at Lawrence Livermore National Laboratory.
Encouraging preliminary results and future research directions are presented
and discussed.
|
no_new_dataset
| 0.928603 |
1502.02761
|
Yujia Li
|
Yujia Li, Kevin Swersky and Richard Zemel
|
Generative Moment Matching Networks
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of learning deep generative models from data. We
formulate a method that generates an independent sample via a single
feedforward pass through a multilayer perceptron, as in the recently proposed
generative adversarial networks (Goodfellow et al., 2014). Training a
generative adversarial network, however, requires careful optimization of a
difficult minimax program. Instead, we utilize a technique from statistical
hypothesis testing known as maximum mean discrepancy (MMD), which leads to a
simple objective that can be interpreted as matching all orders of statistics
between a dataset and samples from the model, and can be trained by
backpropagation. We further boost the performance of this approach by combining
our generative network with an auto-encoder network, using MMD to learn to
generate codes that can then be decoded to produce samples. We show that the
combination of these techniques yields excellent generative models compared to
baseline approaches as measured on MNIST and the Toronto Face Database.
|
[
{
"version": "v1",
"created": "Tue, 10 Feb 2015 02:54:58 GMT"
}
] | 2015-02-11T00:00:00 |
[
[
"Li",
"Yujia",
""
],
[
"Swersky",
"Kevin",
""
],
[
"Zemel",
"Richard",
""
]
] |
TITLE: Generative Moment Matching Networks
ABSTRACT: We consider the problem of learning deep generative models from data. We
formulate a method that generates an independent sample via a single
feedforward pass through a multilayer perceptron, as in the recently proposed
generative adversarial networks (Goodfellow et al., 2014). Training a
generative adversarial network, however, requires careful optimization of a
difficult minimax program. Instead, we utilize a technique from statistical
hypothesis testing known as maximum mean discrepancy (MMD), which leads to a
simple objective that can be interpreted as matching all orders of statistics
between a dataset and samples from the model, and can be trained by
backpropagation. We further boost the performance of this approach by combining
our generative network with an auto-encoder network, using MMD to learn to
generate codes that can then be decoded to produce samples. We show that the
combination of these techniques yields excellent generative models compared to
baseline approaches as measured on MNIST and the Toronto Face Database.
|
no_new_dataset
| 0.944587 |
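The training signal behind the generative moment matching record above is the maximum mean discrepancy; the sketch below computes the standard biased Gaussian-kernel MMD^2 estimate between two samples. Bandwidth, sample sizes, and variable names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def gaussian_gram(A, B, sigma):
    """Gaussian kernel Gram matrix between two sample sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y."""
    return (gaussian_gram(X, X, sigma).mean()
            - 2 * gaussian_gram(X, Y, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
data = rng.standard_normal((256, 2))             # "real" samples
good = rng.standard_normal((256, 2))             # model samples, same law
bad = rng.standard_normal((256, 2)) + 2.0        # model samples, shifted law
print(f"matched: {mmd2(data, good):.4f}")        # close to zero
print(f"shifted: {mmd2(data, bad):.4f}")         # clearly positive
```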
1407.8186
|
Xiaoting Zhao
|
Xiaoting Zhao, Peter I. Frazier
|
Exploration vs. Exploitation in the Information Filtering Problem
|
36 pages, 5 figures
| null | null | null |
math.OC cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider information filtering, in which we face a stream of items too
voluminous to process by hand (e.g., scientific articles, blog posts, emails),
and must rely on a computer system to automatically filter out irrelevant
items. Such systems face the exploration vs. exploitation tradeoff, in which it
may be beneficial to present an item despite a low probability of relevance,
just to learn about future items with similar content. We present a Bayesian
sequential decision-making model of this problem, show how it may be solved to
optimality using a decomposition to a collection of two-armed bandit problems,
and show structural results for the optimal policy. We show that the resulting
method is especially useful when facing the cold start problem, i.e., when
filtering items for new users without a long history of past interactions. We
then present an application of this information filtering method to a
historical dataset from the arXiv.org repository of scientific articles.
|
[
{
"version": "v1",
"created": "Wed, 30 Jul 2014 20:00:11 GMT"
},
{
"version": "v2",
"created": "Tue, 2 Sep 2014 20:01:00 GMT"
},
{
"version": "v3",
"created": "Sun, 8 Feb 2015 21:03:08 GMT"
}
] | 2015-02-10T00:00:00 |
[
[
"Zhao",
"Xiaoting",
""
],
[
"Frazier",
"Peter I.",
""
]
] |
TITLE: Exploration vs. Exploitation in the Information Filtering Problem
ABSTRACT: We consider information filtering, in which we face a stream of items too
voluminous to process by hand (e.g., scientific articles, blog posts, emails),
and must rely on a computer system to automatically filter out irrelevant
items. Such systems face the exploration vs. exploitation tradeoff, in which it
may be beneficial to present an item despite a low probability of relevance,
just to learn about future items with similar content. We present a Bayesian
sequential decision-making model of this problem, show how it may be solved to
optimality using a decomposition to a collection of two-armed bandit problems,
and show structural results for the optimal policy. We show that the resulting
method is especially useful when facing the cold start problem, i.e., when
filtering items for new users without a long history of past interactions. We
then present an application of this information filtering method to a
historical dataset from the arXiv.org repository of scientific articles.
|
no_new_dataset
| 0.949856 |
1501.03326
|
Heiko Strathmann
|
Heiko Strathmann, Dino Sejdinovic, Mark Girolami
|
Unbiased Bayes for Big Data: Paths of Partial Posteriors
|
18 pages, 10 figures
| null | null | null |
stat.ML cs.LG stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Key quantities of interest in Bayesian inference are expectations of
functions with respect to a posterior distribution. Markov Chain Monte Carlo is
a fundamental tool to consistently compute these expectations via averaging
samples drawn from an approximate posterior. However, its feasibility is being
challenged in the era of so called Big Data as all data needs to be processed
in every iteration. Realising that such simulation is an unnecessarily hard
problem if the goal is estimation, we construct a computationally scalable
methodology that allows unbiased estimation of the required expectations --
without explicit simulation from the full posterior. The scheme's variance is
finite by construction and straightforward to control, leading to algorithms
that are provably unbiased and naturally arrive at a desired error tolerance.
This is achieved at an average computational complexity that is sub-linear in
the size of the dataset and its free parameters are easy to tune. We
demonstrate the utility and generality of the methodology on a range of common
statistical models applied to large-scale benchmark and real-world datasets.
|
[
{
"version": "v1",
"created": "Wed, 14 Jan 2015 12:15:14 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Feb 2015 16:21:20 GMT"
}
] | 2015-02-10T00:00:00 |
[
[
"Strathmann",
"Heiko",
""
],
[
"Sejdinovic",
"Dino",
""
],
[
"Girolami",
"Mark",
""
]
] |
TITLE: Unbiased Bayes for Big Data: Paths of Partial Posteriors
ABSTRACT: Key quantities of interest in Bayesian inference are expectations of
functions with respect to a posterior distribution. Markov Chain Monte Carlo is
a fundamental tool to consistently compute these expectations via averaging
samples drawn from an approximate posterior. However, its feasibility is being
challenged in the era of so called Big Data as all data needs to be processed
in every iteration. Realising that such simulation is an unnecessarily hard
problem if the goal is estimation, we construct a computationally scalable
methodology that allows unbiased estimation of the required expectations --
without explicit simulation from the full posterior. The scheme's variance is
finite by construction and straightforward to control, leading to algorithms
that are provably unbiased and naturally arrive at a desired error tolerance.
This is achieved at an average computational complexity that is sub-linear in
the size of the dataset and its free parameters are easy to tune. We
demonstrate the utility and generality of the methodology on a range of common
statistical models applied to large-scale benchmark and real-world datasets.
|
no_new_dataset
| 0.941115 |
1502.00558
|
Marcelo Cicconet
|
Marcelo Cicconet, Davi Geiger, and Michael Werman
|
Complex-Valued Hough Transforms for Circles
|
The paper has been withdrawn since the authors concluded a more
comprehensive study on the choice of parameters needs to be performed
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper advocates the use of complex variables to represent votes in the
Hough transform for circle detection. Replacing the positive numbers
classically used in the parameter space of the Hough transforms by complex
numbers allows cancellation effects when adding up the votes. Cancellation and
the computation of shape likelihood via a complex number's magnitude square
lead to more robust solutions than the "classic" algorithms, as shown by
computational experiments on synthetic and real datasets.
|
[
{
"version": "v1",
"created": "Mon, 2 Feb 2015 17:22:26 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Feb 2015 19:38:58 GMT"
}
] | 2015-02-10T00:00:00 |
[
[
"Cicconet",
"Marcelo",
""
],
[
"Geiger",
"Davi",
""
],
[
"Werman",
"Michael",
""
]
] |
TITLE: Complex-Valued Hough Transforms for Circles
ABSTRACT: This paper advocates the use of complex variables to represent votes in the
Hough transform for circle detection. Replacing the positive numbers
classically used in the parameter space of the Hough transforms by complex
numbers allows cancellation effects when adding up the votes. Cancellation and
the computation of shape likelihood via a complex number's magnitude square
lead to more robust solutions than the "classic" algorithms, as shown by
computational experiments on synthetic and real datasets.
|
no_new_dataset
| 0.954393 |
1502.02072
|
Bharath Ramsundar
|
Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David
Konerding, Vijay Pande
|
Massively Multitask Networks for Drug Discovery
|
Preliminary work. Under review by the International Conference on
Machine Learning (ICML)
| null | null | null |
stat.ML cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process.
|
[
{
"version": "v1",
"created": "Fri, 6 Feb 2015 23:04:01 GMT"
}
] | 2015-02-10T00:00:00 |
[
[
"Ramsundar",
"Bharath",
""
],
[
"Kearnes",
"Steven",
""
],
[
"Riley",
"Patrick",
""
],
[
"Webster",
"Dale",
""
],
[
"Konerding",
"David",
""
],
[
"Pande",
"Vijay",
""
]
] |
TITLE: Massively Multitask Networks for Drug Discovery
ABSTRACT: Massively multitask neural architectures provide a learning framework for
drug discovery that synthesizes information from many distinct biological
sources. To train these architectures at scale, we gather large amounts of data
from public sources to create a dataset of nearly 40 million measurements
across more than 200 biological targets. We investigate several aspects of the
multitask framework by performing a series of empirical studies and obtain some
interesting results: (1) massively multitask networks obtain predictive
accuracies significantly better than single-task methods, (2) the predictive
power of multitask networks improves as additional tasks and data are added,
(3) the total amount of data and the total number of tasks both contribute
significantly to multitask improvement, and (4) multitask networks afford
limited transferability to tasks not in the training set. Our results
underscore the need for greater data sharing and further algorithmic innovation
to accelerate the drug discovery process.
|
no_new_dataset
| 0.951594 |
1502.02171
|
Liang Zheng
|
Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jiahao Bu, Qi Tian
|
Person Re-identification Meets Image Search
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For a long time, person re-identification and image search have been two separately
studied tasks. However, for person re-identification, the effectiveness of
local features and the "query-search" mode make it well suited to image search
techniques.
In the light of recent advances in image search, this paper proposes to treat
person re-identification as an image search problem. Specifically, this paper
claims two major contributions. 1) By designing an unsupervised Bag-of-Words
representation, we are devoted to bridging the gap between the two tasks by
integrating techniques from image search in person re-identification. We show
that our system sets up an effective yet efficient baseline that is amenable to
further supervised/unsupervised improvements. 2) We contribute a new
high-quality dataset which uses the DPM detector and includes a number of
distractor images. Our dataset comes closer to realistic settings, and new perspectives
are provided.
Compared with approaches that rely on feature-feature match, our method is
faster by over two orders of magnitude. Moreover, on three datasets, we report
competitive results compared with the state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Sat, 7 Feb 2015 18:56:35 GMT"
}
] | 2015-02-10T00:00:00 |
[
[
"Zheng",
"Liang",
""
],
[
"Shen",
"Liyue",
""
],
[
"Tian",
"Lu",
""
],
[
"Wang",
"Shengjin",
""
],
[
"Bu",
"Jiahao",
""
],
[
"Tian",
"Qi",
""
]
] |
TITLE: Person Re-identification Meets Image Search
ABSTRACT: For a long time, person re-identification and image search have been two separately
studied tasks. However, for person re-identification, the effectiveness of
local features and the "query-search" mode make it well suited to image search
techniques.
In the light of recent advances in image search, this paper proposes to treat
person re-identification as an image search problem. Specifically, this paper
claims two major contributions. 1) By designing an unsupervised Bag-of-Words
representation, we are devoted to bridging the gap between the two tasks by
integrating techniques from image search in person re-identification. We show
that our system sets up an effective yet efficient baseline that is amenable to
further supervised/unsupervised improvements. 2) We contribute a new
high-quality dataset which uses the DPM detector and includes a number of
distractor images. Our dataset comes closer to realistic settings, and new perspectives
are provided.
Compared with approaches that rely on feature-feature match, our method is
faster by over two orders of magnitude. Moreover, on three datasets, we report
competitive results compared with the state-of-the-art methods.
|
new_dataset
| 0.958731 |
1502.02215
|
Jobin Wilson
|
Jobin Wilson, Chitharanj Kachappilly, Rakesh Mohan, Prateek Kapadia,
Arun Soman, Santanu Chaudhury
|
Real World Applications of Machine Learning Techniques over Large Mobile
Subscriber Datasets
|
SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop)
https://sites.google.com/site/software4ml/accepted-papers
| null | null | null |
cs.LG cs.CY cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Communication Service Providers (CSPs) are in a unique position to utilize
their vast transactional data assets generated from interactions of subscribers
with network elements as well as with other subscribers. CSPs could leverage
their data assets for a gamut of applications such as service personalization,
predictive offer management, loyalty management, revenue forecasting, network
capacity planning, product bundle optimization and churn management to gain
significant competitive advantage. However, due to the sheer data volume,
variety, velocity and veracity of mobile subscriber datasets, sophisticated
data analytics techniques and frameworks are necessary to derive actionable
insights in a useable timeframe. In this paper, we describe our journey from a
relational database management system (RDBMS) based campaign management
solution which allowed data scientists and marketers to use hand-written rules
for service personalization and targeted promotions to a distributed Big Data
Analytics platform, capable of performing large scale machine learning and data
mining to deliver real time service personalization, predictive modelling and
product optimization. Our work involves a careful blend of technology,
processes and best practices, which facilitate man-machine collaboration and
continuous experimentation to derive measurable economic value from data. Our
platform has a reach of more than 500 million mobile subscribers worldwide,
delivering over 1 billion personalized recommendations annually, processing a
total data volume of 64 Petabytes, corresponding to 8.5 trillion events.
|
[
{
"version": "v1",
"created": "Sun, 8 Feb 2015 06:18:55 GMT"
}
] | 2015-02-10T00:00:00 |
[
[
"Wilson",
"Jobin",
""
],
[
"Kachappilly",
"Chitharanj",
""
],
[
"Mohan",
"Rakesh",
""
],
[
"Kapadia",
"Prateek",
""
],
[
"Soman",
"Arun",
""
],
[
"Chaudhury",
"Santanu",
""
]
] |
TITLE: Real World Applications of Machine Learning Techniques over Large Mobile
Subscriber Datasets
ABSTRACT: Communication Service Providers (CSPs) are in a unique position to utilize
their vast transactional data assets generated from interactions of subscribers
with network elements as well as with other subscribers. CSPs could leverage
their data assets for a gamut of applications such as service personalization,
predictive offer management, loyalty management, revenue forecasting, network
capacity planning, product bundle optimization and churn management to gain
significant competitive advantage. However, due to the sheer data volume,
variety, velocity and veracity of mobile subscriber datasets, sophisticated
data analytics techniques and frameworks are necessary to derive actionable
insights in a useable timeframe. In this paper, we describe our journey from a
relational database management system (RDBMS) based campaign management
solution which allowed data scientists and marketers to use hand-written rules
for service personalization and targeted promotions to a distributed Big Data
Analytics platform, capable of performing large scale machine learning and data
mining to deliver real time service personalization, predictive modelling and
product optimization. Our work involves a careful blend of technology,
processes and best practices, which facilitate man-machine collaboration and
continuous experimentation to derive measurable economic value from data. Our
platform has a reach of more than 500 million mobile subscribers worldwide,
delivering over 1 billion personalized recommendations annually, processing a
total data volume of 64 Petabytes, corresponding to 8.5 trillion events.
|
no_new_dataset
| 0.943034 |
1406.0167
|
Saurabh Paul
|
Saurabh Paul, Malik Magdon-Ismail and Petros Drineas
|
Feature Selection for Linear SVM with Provable Guarantees
|
Appearing in Proceedings of 18th AISTATS, JMLR W&CP, vol 38, 2015
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We give two provably accurate feature-selection techniques for the linear
SVM. The algorithms run in deterministic and randomized time respectively. Our
algorithms can be used in an unsupervised or supervised setting. The supervised
approach is based on sampling features from support vectors. We prove that the
margin in the feature space is preserved to within $\epsilon$-relative error of
the margin in the full feature space in the worst-case. In the unsupervised
setting, we also provide worst-case guarantees of the radius of the minimum
enclosing ball, thereby ensuring comparable generalization as in the full
feature space and resolving an open problem posed in Dasgupta et al. We present
extensive experiments on real-world datasets to support our theory and to
demonstrate that our method is competitive and often better than prior
state-of-the-art, for which there are no known provable guarantees.
|
[
{
"version": "v1",
"created": "Sun, 1 Jun 2014 14:37:54 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Oct 2014 14:20:00 GMT"
},
{
"version": "v3",
"created": "Fri, 6 Feb 2015 13:43:54 GMT"
}
] | 2015-02-09T00:00:00 |
[
[
"Paul",
"Saurabh",
""
],
[
"Magdon-Ismail",
"Malik",
""
],
[
"Drineas",
"Petros",
""
]
] |
TITLE: Feature Selection for Linear SVM with Provable Guarantees
ABSTRACT: We give two provably accurate feature-selection techniques for the linear
SVM. The algorithms run in deterministic and randomized time respectively. Our
algorithms can be used in an unsupervised or supervised setting. The supervised
approach is based on sampling features from support vectors. We prove that the
margin in the feature space is preserved to within $\epsilon$-relative error of
the margin in the full feature space in the worst-case. In the unsupervised
setting, we also provide worst-case guarantees of the radius of the minimum
enclosing ball, thereby ensuring comparable generalization as in the full
feature space and resolving an open problem posed in Dasgupta et al. We present
extensive experiments on real-world datasets to support our theory and to
demonstrate that our method is competitive and often better than prior
state-of-the-art, for which there are no known provable guarantees.
|
no_new_dataset
| 0.951188 |
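Aside on the record above (1406.0167): a minimal sketch of the "sample features from support vectors" idea, under the simplifying assumption that the sampling probability of a feature is its squared mass over the support vectors of a linear SVM. The paper's exact probabilities, its guarantees, and the unsupervised variant are not reproduced; function and variable names are illustrative.

```python
# Illustrative sketch (not the paper's exact algorithm): supervised feature
# selection by sampling feature indices with probabilities derived from the
# support vectors of a linear SVM. The probability definition is an assumption.
import numpy as np
from sklearn.svm import SVC

def sample_features_from_support_vectors(X, y, k, seed=0):
    rng = np.random.default_rng(seed)
    svm = SVC(kernel="linear").fit(X, y)
    S = svm.support_vectors_                    # rows of X that are support vectors
    mass = (S ** 2).sum(axis=0)                 # per-feature squared mass over SVs
    p = mass / mass.sum()                       # sampling distribution over features
    return rng.choice(X.shape[1], size=k, replace=False, p=p)

# Usage: keep the k sampled columns and retrain a cheaper SVM on them.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 50))
    y = (X[:, :5].sum(axis=1) > 0).astype(int)  # only the first 5 features matter
    cols = sample_features_from_support_vectors(X, y, k=10)
    print("selected feature indices:", sorted(cols))
```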
1409.5181
|
Zhilin Zhang
|
Zhilin Zhang, Zhouyue Pi, Benyuan Liu
|
TROIKA: A General Framework for Heart Rate Monitoring Using Wrist-Type
Photoplethysmographic Signals During Intensive Physical Exercise
|
Matlab codes and data are available at:
https://sites.google.com/site/researchbyzhang/
|
IEEE Transactions on Biomedical Engineering, vol. 62, no. 2, pp.
522-531, February 2015
|
10.1109/TBME.2014.2359372
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Heart rate monitoring using wrist-type photoplethysmographic (PPG) signals
during subjects' intensive exercise is a difficult problem, since the signals
are contaminated by extremely strong motion artifacts caused by subjects' hand
movements. So far few works have studied this problem. In this work, a general
framework, termed TROIKA, is proposed, which consists of signal decomposiTion
for denoising, sparse signal RecOnstructIon for high-resolution spectrum
estimation, and spectral peaK trAcking with verification. The TROIKA framework
has high estimation accuracy and is robust to strong motion artifacts. Many
variants can be straightforwardly derived from this framework. Experimental
results on datasets recorded from 12 subjects during fast running at the peak
speed of 15 km/hour showed that the average absolute error of heart rate
estimation was 2.34 beats per minute (BPM), and the Pearson correlation between
the estimates and the ground-truth of heart rate was 0.992. This framework is
of great value to wearable devices such as smart-watches, which use PPG signals
to monitor heart rate for fitness.
|
[
{
"version": "v1",
"created": "Thu, 18 Sep 2014 03:24:49 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jan 2015 07:47:00 GMT"
},
{
"version": "v3",
"created": "Fri, 6 Feb 2015 06:16:09 GMT"
}
] | 2015-02-09T00:00:00 |
[
[
"Zhang",
"Zhilin",
""
],
[
"Pi",
"Zhouyue",
""
],
[
"Liu",
"Benyuan",
""
]
] |
TITLE: TROIKA: A General Framework for Heart Rate Monitoring Using Wrist-Type
Photoplethysmographic Signals During Intensive Physical Exercise
ABSTRACT: Heart rate monitoring using wrist-type photoplethysmographic (PPG) signals
during subjects' intensive exercise is a difficult problem, since the signals
are contaminated by extremely strong motion artifacts caused by subjects' hand
movements. So far few works have studied this problem. In this work, a general
framework, termed TROIKA, is proposed, which consists of signal decomposiTion
for denoising, sparse signal RecOnstructIon for high-resolution spectrum
estimation, and spectral peaK trAcking with verification. The TROIKA framework
has high estimation accuracy and is robust to strong motion artifacts. Many
variants can be straightforwardly derived from this framework. Experimental
results on datasets recorded from 12 subjects during fast running at the peak
speed of 15 km/hour showed that the average absolute error of heart rate
estimation was 2.34 beats per minute (BPM), and the Pearson correlation between
the estimates and the ground-truth of heart rate was 0.992. This framework is
of great value to wearable devices such as smart-watches, which use PPG signals
to monitor heart rate for fitness.
|
no_new_dataset
| 0.931463 |
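Aside on the record above (1409.5181): a minimal sketch of only the "spectral peak tracking with verification" stage, assuming a plain periodogram and a fixed search band around the previous estimate. The signal decomposition and sparse spectrum reconstruction stages of TROIKA are not reproduced; the window length, sampling rate and band width below are made-up values.

```python
# Pick the periodogram peak closest to the previous heart-rate estimate; fall
# back to the previous estimate if no peak lies in the search band.
import numpy as np

def track_heart_rate(ppg_window, fs, prev_bpm, search_bpm=15.0):
    x = ppg_window - ppg_window.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)          # Hz
    power = np.abs(np.fft.rfft(x)) ** 2
    bpm = freqs * 60.0
    band = (bpm > prev_bpm - search_bpm) & (bpm < prev_bpm + search_bpm)
    if not band.any():
        return prev_bpm                                   # crude verification fallback
    return bpm[band][np.argmax(power[band])]

# Usage on a synthetic 8 s window at 125 Hz with a 1.6 Hz (96 BPM) pulse plus noise.
fs = 125
t = np.arange(0, 8, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.6 * t) + 0.3 * np.random.randn(t.size)
print(track_heart_rate(ppg, fs, prev_bpm=90.0))
```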
1502.01782
|
Conrad Sanderson
|
Johanna Carvajal, Conrad Sanderson, Chris McCool, Brian C. Lovell
|
Multi-Action Recognition via Stochastic Modelling of Optical Flow and
Gradients
| null |
Workshop on Machine Learning for Sensory Data Analysis (MLSDA),
pp. 19-24, 2014
|
10.1145/2689746.2689748
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose a novel approach to multi-action recognition that
performs joint segmentation and classification. This approach models each
action using a Gaussian mixture using robust low-dimensional action features.
Segmentation is achieved by performing classification on overlapping temporal
windows, which are then merged to produce the final result. This approach is
considerably less complicated than previous methods which use dynamic
programming or computationally expensive hidden Markov models (HMMs). Initial
experiments on a stitched version of the KTH dataset show that the proposed
approach achieves an accuracy of 78.3%, outperforming a recent HMM-based
approach which obtained 71.2%.
|
[
{
"version": "v1",
"created": "Fri, 6 Feb 2015 03:30:10 GMT"
}
] | 2015-02-09T00:00:00 |
[
[
"Carvajal",
"Johanna",
""
],
[
"Sanderson",
"Conrad",
""
],
[
"McCool",
"Chris",
""
],
[
"Lovell",
"Brian C.",
""
]
] |
TITLE: Multi-Action Recognition via Stochastic Modelling of Optical Flow and
Gradients
ABSTRACT: In this paper we propose a novel approach to multi-action recognition that
performs joint segmentation and classification. This approach models each
action with a Gaussian mixture built on robust low-dimensional action features.
Segmentation is achieved by performing classification on overlapping temporal
windows, which are then merged to produce the final result. This approach is
considerably less complicated than previous methods which use dynamic
programming or computationally expensive hidden Markov models (HMMs). Initial
experiments on a stitched version of the KTH dataset show that the proposed
approach achieves an accuracy of 78.3%, outperforming a recent HMM-based
approach which obtained 71.2%.
|
no_new_dataset
| 0.950869 |
1502.01812
|
Teng Li Dr.
|
Teng Li, Huan Chang, Meng Wang, Bingbing Ni, Richang Hong and
Shuicheng Yan
|
Crowded Scene Analysis: A Survey
|
20 pages in IEEE Transactions on Circuits and Systems for Video
Technology, 2015
| null |
10.1109/TCSVT.2014.2358029
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated scene analysis has been a topic of great interest in computer
vision and cognitive science. Recently, with the growth of crowd phenomena in
the real world, crowded scene analysis has attracted much attention. However,
the visual occlusions and ambiguities in crowded scenes, as well as the complex
behaviors and scene semantics, make the analysis a challenging task. In the
past few years, an increasing number of works on crowded scene analysis have
been reported, covering different aspects including crowd motion pattern
learning, crowd behavior and activity analysis, and anomaly detection in
crowds. This paper surveys the state-of-the-art techniques on this topic. We
first provide the background knowledge and the available features related to
crowded scenes. Then, existing models, popular algorithms, evaluation
protocols, as well as system performance are provided corresponding to
different aspects of crowded scene analysis. We also outline the available
datasets for performance evaluation. Finally, some research problems and
promising future directions are presented with discussions.
|
[
{
"version": "v1",
"created": "Fri, 6 Feb 2015 06:36:12 GMT"
}
] | 2015-02-09T00:00:00 |
[
[
"Li",
"Teng",
""
],
[
"Chang",
"Huan",
""
],
[
"Wang",
"Meng",
""
],
[
"Ni",
"Bingbing",
""
],
[
"Hong",
"Richang",
""
],
[
"Yan",
"Shuicheng",
""
]
] |
TITLE: Crowded Scene Analysis: A Survey
ABSTRACT: Automated scene analysis has been a topic of great interest in computer
vision and cognitive science. Recently, with the growth of crowd phenomena in
the real world, crowded scene analysis has attracted much attention. However,
the visual occlusions and ambiguities in crowded scenes, as well as the complex
behaviors and scene semantics, make the analysis a challenging task. In the
past few years, an increasing number of works on crowded scene analysis have
been reported, covering different aspects including crowd motion pattern
learning, crowd behavior and activity analysis, and anomaly detection in
crowds. This paper surveys the state-of-the-art techniques on this topic. We
first provide the background knowledge and the available features related to
crowded scenes. Then, existing models, popular algorithms, evaluation
protocols, as well as system performance are provided corresponding to
different aspects of crowded scene analysis. We also outline the available
datasets for performance evaluation. Finally, some research problems and
promising future directions are presented with discussions.
|
no_new_dataset
| 0.949949 |
1502.01827
|
Guang-Tong Zhou
|
Guang-Tong Zhou, Sung Ju Hwang, Mark Schmidt, Leonid Sigal and Greg
Mori
|
Hierarchical Maximum-Margin Clustering
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a hierarchical maximum-margin clustering method for unsupervised
data analysis. Our method extends beyond flat maximum-margin clustering, and
performs clustering recursively in a top-down manner. We propose an effective
greedy splitting criterion for selecting which cluster to split next, and employ
regularizers that enforce feature sharing/competition for capturing data
semantics. Experimental results obtained on four standard datasets show that
our method outperforms flat and hierarchical clustering baselines, while
forming clean and semantically meaningful cluster hierarchies.
|
[
{
"version": "v1",
"created": "Fri, 6 Feb 2015 08:37:55 GMT"
}
] | 2015-02-09T00:00:00 |
[
[
"Zhou",
"Guang-Tong",
""
],
[
"Hwang",
"Sung Ju",
""
],
[
"Schmidt",
"Mark",
""
],
[
"Sigal",
"Leonid",
""
],
[
"Mori",
"Greg",
""
]
] |
TITLE: Hierarchical Maximum-Margin Clustering
ABSTRACT: We present a hierarchical maximum-margin clustering method for unsupervised
data analysis. Our method extends beyond flat maximum-margin clustering, and
performs clustering recursively in a top-down manner. We propose an effective
greedy splitting criterion for selecting which cluster to split next, and employ
regularizers that enforce feature sharing/competition for capturing data
semantics. Experimental results obtained on four standard datasets show that
our method outperforms flat and hierarchical clustering baselines, while
forming clean and semantically meaningful cluster hierarchies.
|
no_new_dataset
| 0.949623 |
1502.01852
|
Kaiming He
|
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
|
Delving Deep into Rectifiers: Surpassing Human-Level Performance on
ImageNet Classification
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rectified activation units (rectifiers) are essential for state-of-the-art
neural networks. In this work, we study rectifier neural networks for image
classification from two aspects. First, we propose a Parametric Rectified
Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU
improves model fitting with nearly zero extra computational cost and little
overfitting risk. Second, we derive a robust initialization method that
particularly considers the rectifier nonlinearities. This method enables us to
train extremely deep rectified models directly from scratch and to investigate
deeper or wider network architectures. Based on our PReLU networks
(PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012
classification dataset. This is a 26% relative improvement over the ILSVRC 2014
winner (GoogLeNet, 6.66%). To our knowledge, our result is the first to surpass
human-level performance (5.1%, Russakovsky et al.) on this visual recognition
challenge.
|
[
{
"version": "v1",
"created": "Fri, 6 Feb 2015 10:44:00 GMT"
}
] | 2015-02-09T00:00:00 |
[
[
"He",
"Kaiming",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Ren",
"Shaoqing",
""
],
[
"Sun",
"Jian",
""
]
] |
TITLE: Delving Deep into Rectifiers: Surpassing Human-Level Performance on
ImageNet Classification
ABSTRACT: Rectified activation units (rectifiers) are essential for state-of-the-art
neural networks. In this work, we study rectifier neural networks for image
classification from two aspects. First, we propose a Parametric Rectified
Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU
improves model fitting with nearly zero extra computational cost and little
overfitting risk. Second, we derive a robust initialization method that
particularly considers the rectifier nonlinearities. This method enables us to
train extremely deep rectified models directly from scratch and to investigate
deeper or wider network architectures. Based on our PReLU networks
(PReLU-nets), we achieve 4.94% top-5 test error on the ImageNet 2012
classification dataset. This is a 26% relative improvement over the ILSVRC 2014
winner (GoogLeNet, 6.66%). To our knowledge, our result is the first to surpass
human-level performance (5.1%, Russakovsky et al.) on this visual recognition
challenge.
|
no_new_dataset
| 0.951594 |
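Aside on the record above (1502.01852): a minimal NumPy sketch of the two ideas named in the abstract, the PReLU activation with a learnable negative slope and the rectifier-aware weight initialization with standard deviation sqrt(2 / fan_in). The layer sizes and the initial slope value are illustrative assumptions, not the paper's training setup.

```python
import numpy as np

def prelu(x, a):
    # f(x) = x for x > 0, a * x otherwise; `a` is learned (e.g. one per channel).
    return np.where(x > 0, x, a * x)

def prelu_grad_a(x):
    # Gradient of PReLU with respect to the slope parameter, used to update `a`.
    return np.where(x > 0, 0.0, x)

def he_init(fan_in, fan_out, rng):
    # Zero-mean Gaussian with variance 2 / fan_in, derived for rectified units.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

rng = np.random.default_rng(0)
W = he_init(256, 128, rng)
x = rng.normal(size=(4, 256)) @ W        # a batch of 4 pre-activations
a = np.full(128, 0.25)                   # assumed initial slope, one per channel
print(prelu(x, a).shape)                 # (4, 128)
```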
1403.6025
|
Ruven Pillay
|
Emmanuel Bertin, Ruven Pillay, Chiara Marmo
|
Web-Based Visualization of Very Large Scientific Astronomy Imagery
|
Published in Astronomy & Computing. IIPImage server available from
http://iipimage.sourceforge.net . Visiomatic code and demos available from
http://www.visiomatic.org/
|
Astronomy and Computing, vol. 10, pp. 43-53, Apr. 2015
|
10.1016/j.ascom.2014.12.006
| null |
astro-ph.IM cs.CE cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visualizing and navigating through large astronomy images from a remote
location with current astronomy display tools can be a frustrating experience
in terms of speed and ergonomics, especially on mobile devices. In this paper,
we present a high performance, versatile and robust client-server system for
remote visualization and analysis of extremely large scientific images.
Applications of this work include survey image quality control, interactive
data query and exploration, citizen science, as well as public outreach. The
proposed software is entirely open source and is designed to be generic and
applicable to a variety of datasets. It provides access to floating point data
at terabyte scales, with the ability to precisely adjust image settings in
real-time. The proposed clients are light-weight, platform-independent web
applications built on standard HTML5 web technologies and compatible with both
touch and mouse-based devices. We put the system to the test, assess its
performance, and show that a single server can comfortably handle
more than a hundred simultaneous users accessing full precision 32 bit
astronomy data.
|
[
{
"version": "v1",
"created": "Mon, 24 Mar 2014 16:24:57 GMT"
},
{
"version": "v2",
"created": "Fri, 9 Jan 2015 14:21:56 GMT"
},
{
"version": "v3",
"created": "Sun, 1 Feb 2015 22:29:40 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Feb 2015 10:40:31 GMT"
}
] | 2015-02-06T00:00:00 |
[
[
"Bertin",
"Emmanuel",
""
],
[
"Pillay",
"Ruven",
""
],
[
"Marmo",
"Chiara",
""
]
] |
TITLE: Web-Based Visualization of Very Large Scientific Astronomy Imagery
ABSTRACT: Visualizing and navigating through large astronomy images from a remote
location with current astronomy display tools can be a frustrating experience
in terms of speed and ergonomics, especially on mobile devices. In this paper,
we present a high performance, versatile and robust client-server system for
remote visualization and analysis of extremely large scientific images.
Applications of this work include survey image quality control, interactive
data query and exploration, citizen science, as well as public outreach. The
proposed software is entirely open source and is designed to be generic and
applicable to a variety of datasets. It provides access to floating point data
at terabyte scales, with the ability to precisely adjust image settings in
real-time. The proposed clients are light-weight, platform-independent web
applications built on standard HTML5 web technologies and compatible with both
touch and mouse-based devices. We put the system to the test, assess its
performance, and show that a single server can comfortably handle
more than a hundred simultaneous users accessing full precision 32 bit
astronomy data.
|
no_new_dataset
| 0.943608 |
1410.6973
|
Anna Choromanska
|
Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Yann LeCun
|
Differentially- and non-differentially-private random decision trees
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider supervised learning with random decision trees, where the tree
construction is completely random. The method is popularly used and works well
in practice despite the simplicity of the setting, but its statistical
mechanism is not yet well-understood. In this paper we provide strong
theoretical guarantees regarding learning with random decision trees. We
analyze and compare three different variants of the algorithm that have minimal
memory requirements: majority voting, threshold averaging and probabilistic
averaging. The random structure of the tree enables us to adapt these methods
to a differentially-private setting; thus, we also propose differentially-private
versions of all three schemes. We give upper bounds on the generalization error
and mathematically explain how the accuracy depends on the number of random
decision trees. Furthermore, we prove that only a logarithmic (in the size of the
dataset) number of independently selected random decision trees suffices to
correctly classify most of the data, even when differential-privacy guarantees
must be maintained. We empirically show that majority voting and threshold
averaging give the best accuracy, also for conservative users requiring high
privacy guarantees. Furthermore, we demonstrate that a simple majority voting
rule is an especially good candidate for the differentially-private classifier
since it is much less sensitive to the choice of forest parameters than other
methods.
|
[
{
"version": "v1",
"created": "Sun, 26 Oct 2014 00:16:16 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Feb 2015 20:48:11 GMT"
}
] | 2015-02-06T00:00:00 |
[
[
"Bojarski",
"Mariusz",
""
],
[
"Choromanska",
"Anna",
""
],
[
"Choromanski",
"Krzysztof",
""
],
[
"LeCun",
"Yann",
""
]
] |
TITLE: Differentially- and non-differentially-private random decision trees
ABSTRACT: We consider supervised learning with random decision trees, where the tree
construction is completely random. The method is popularly used and works well
in practice despite the simplicity of the setting, but its statistical
mechanism is not yet well-understood. In this paper we provide strong
theoretical guarantees regarding learning with random decision trees. We
analyze and compare three different variants of the algorithm that have minimal
memory requirements: majority voting, threshold averaging and probabilistic
averaging. The random structure of the tree enables us to adapt these methods
to a differentially-private setting; thus, we also propose differentially-private
versions of all three schemes. We give upper bounds on the generalization error
and mathematically explain how the accuracy depends on the number of random
decision trees. Furthermore, we prove that only a logarithmic (in the size of the
dataset) number of independently selected random decision trees suffices to
correctly classify most of the data, even when differential-privacy guarantees
must be maintained. We empirically show that majority voting and threshold
averaging give the best accuracy, also for conservative users requiring high
privacy guarantees. Furthermore, we demonstrate that a simple majority voting
rule is an especially good candidate for the differentially-private classifier
since it is much less sensitive to the choice of forest parameters than other
methods.
|
no_new_dataset
| 0.951549 |
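Aside on the record above (1410.6973): a sketch of completely random decision trees with majority voting, where Laplace noise on the leaf label counts stands in for the differentially-private variant. The tree depth, noise scale and data are assumptions; the paper's privacy calibration and error bounds are not reproduced.

```python
import numpy as np

def build_tree(n_features, depth, rng):
    if depth == 0:
        return None                                   # leaf
    return {"feat": rng.integers(n_features), "thr": rng.uniform(0, 1),
            "left": build_tree(n_features, depth - 1, rng),
            "right": build_tree(n_features, depth - 1, rng)}

def leaf_id(tree, x):
    path = 0
    while tree is not None:                           # descend the random splits
        go_right = x[tree["feat"]] > tree["thr"]
        path = 2 * path + int(go_right)
        tree = tree["right"] if go_right else tree["left"]
    return path

def fit_counts(tree, X, y, n_classes, eps=None, rng=None):
    counts = {}
    for xi, yi in zip(X, y):
        c = counts.setdefault(leaf_id(tree, xi), np.zeros(n_classes))
        c[yi] += 1
    if eps is not None:                               # crude DP-style perturbation
        for c in counts.values():
            c += rng.laplace(scale=1.0 / eps, size=n_classes)
    return counts

def predict(trees, all_counts, x, n_classes):
    votes = np.zeros(n_classes)
    for tree, counts in zip(trees, all_counts):
        c = counts.get(leaf_id(tree, x))
        if c is not None:
            votes[int(np.argmax(c))] += 1             # majority voting across trees
    return int(np.argmax(votes))

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 10)); y = (X[:, 0] + X[:, 1] > 1).astype(int)
trees = [build_tree(10, depth=5, rng=rng) for _ in range(25)]
counts = [fit_counts(t, X, y, 2, eps=1.0, rng=rng) for t in trees]
print(sum(predict(trees, counts, xi, 2) == yi for xi, yi in zip(X, y)) / len(y))
```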
1311.1266
|
Diego Amancio
|
Diego R. Amancio and Osvaldo N. Oliveira Jr. and Luciano da F. Costa
|
Topological-collaborative approach for disambiguating authors' names in
collaborative networks
|
To appear in Scientometrics, 2014
|
Scientometrics 102 (1), 465--485, 2015
|
10.1007/s11192-014-1381-9
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concepts and methods of complex networks have been employed to uncover
patterns in a myriad of complex systems. Unfortunately, the relevance and
significance of these patterns strongly depends on the reliability of the data
sets. In the study of collaboration networks, for instance, unavoidable noise
pervading authors' collaboration datasets arises when authors share the same
name. To address this problem, we derive a hybrid approach based on authors'
collaboration patterns and on topological features of collaborative networks.
Our results show that the combination of strategies, in most cases, performs
better than the traditional approach which disregards topological features. We
also show that the main factor for improving the discriminability of homonymous
authors is the average distance between authors. Finally, we show that it is
possible to predict the weighting associated with each strategy composing the
hybrid system by examining the discrimination obtained from the traditional
analysis of collaboration patterns. Since the methodology devised here is
generic, our approach is potentially useful for classifying many other networked
systems governed by complex interactions.
|
[
{
"version": "v1",
"created": "Wed, 6 Nov 2013 01:43:51 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jun 2014 16:01:45 GMT"
}
] | 2015-02-05T00:00:00 |
[
[
"Amancio",
"Diego R.",
""
],
[
"Oliveira",
"Osvaldo N.",
"Jr."
],
[
"Costa",
"Luciano da F.",
""
]
] |
TITLE: Topological-collaborative approach for disambiguating authors' names in
collaborative networks
ABSTRACT: Concepts and methods of complex networks have been employed to uncover
patterns in a myriad of complex systems. Unfortunately, the relevance and
significance of these patterns strongly depends on the reliability of the data
sets. In the study of collaboration networks, for instance, unavoidable noise
pervading authors' collaboration datasets arises when authors share the same
name. To address this problem, we derive a hybrid approach based on authors'
collaboration patterns and on topological features of collaborative networks.
Our results show that the combination of strategies, in most cases, performs
better than the traditional approach which disregards topological features. We
also show that the main factor for improving the discriminability of homonymous
authors is the average distance between authors. Finally, we show that it is
possible to predict the weighting associated with each strategy composing the
hybrid system by examining the discrimination obtained from the traditional
analysis of collaboration patterns. Since the methodology devised here is
generic, our approach is potentially useful for classifying many other networked
systems governed by complex interactions.
|
no_new_dataset
| 0.946941 |
1501.06992
|
Andr\'e Walker-Loud
|
Thorsten Kurth, Andrew Pochinsky, Abhinav Sarje, Sergey Syritsyn,
Andre Walker-Loud
|
High-Performance I/O: HDF5 for Lattice QCD
|
Contribution to the 32nd International Symposium on Lattice Field
Theory (Lattice 2014), 23-28 June 2014, Columbia University, New York, NY,
USA
| null | null |
NT@WM-15-02, JLAB-THY-15-2007
|
hep-lat physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Practitioners of lattice QCD/QFT have been some of the primary pioneer users
of the state-of-the-art high-performance-computing systems, and contribute
towards the stress tests of such new machines as soon as they become available.
As with all aspects of high-performance-computing, I/O is becoming an
increasingly specialized component of these systems. In order to take advantage
of the latest available high-performance I/O infrastructure, to ensure
reliability and backwards compatibility of data files, and to help unify the
data structures used in lattice codes, we have incorporated parallel HDF5 I/O
into the SciDAC supported USQCD software stack. Here we present the design and
implementation of this I/O framework. Our HDF5 implementation outperforms
optimized QIO at the 10-20% level and leaves room for further improvement by
utilizing appropriate dataset chunking.
|
[
{
"version": "v1",
"created": "Wed, 28 Jan 2015 05:30:33 GMT"
}
] | 2015-02-04T00:00:00 |
[
[
"Kurth",
"Thorsten",
""
],
[
"Pochinsky",
"Andrew",
""
],
[
"Sarje",
"Abhinav",
""
],
[
"Syritsyn",
"Sergey",
""
],
[
"Walker-Loud",
"Andre",
""
]
] |
TITLE: High-Performance I/O: HDF5 for Lattice QCD
ABSTRACT: Practitioners of lattice QCD/QFT have been some of the primary pioneer users
of the state-of-the-art high-performance-computing systems, and contribute
towards the stress tests of such new machines as soon as they become available.
As with all aspects of high-performance-computing, I/O is becoming an
increasingly specialized component of these systems. In order to take advantage
of the latest available high-performance I/O infrastructure, to ensure
reliability and backwards compatibility of data files, and to help unify the
data structures used in lattice codes, we have incorporated parallel HDF5 I/O
into the SciDAC supported USQCD software stack. Here we present the design and
implementation of this I/O framework. Our HDF5 implementation outperforms
optimized QIO at the 10-20% level and leaves room for further improvement by
utilizing appropriate dataset chunking.
|
no_new_dataset
| 0.946399 |
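Aside on the record above (1501.06992): a small serial h5py sketch of the storage pattern the abstract points to, one chunked, checksummed dataset per lattice field with metadata attached as attributes. The group and dataset names, the toy lattice shape and the chunk layout are made up; the actual USQCD integration uses parallel HDF5 from C, which this does not show.

```python
import numpy as np
import h5py

lattice = (8, 8, 8, 16, 3, 3, 2)   # toy LX, LY, LZ, LT, color x color, re/im

with h5py.File("gauge_field.h5", "w") as f:
    grp = f.create_group("configurations/traj_0010")
    dset = grp.create_dataset(
        "gauge_links",
        shape=lattice,
        dtype="f8",
        chunks=(8, 8, 8, 1, 3, 3, 2),   # one chunk per time slice: tune to the I/O pattern
        fletcher32=True,                 # lightweight integrity check per chunk
    )
    dset[...] = np.random.rand(*lattice)
    dset.attrs["beta"] = 6.0             # provenance metadata travels with the data

with h5py.File("gauge_field.h5", "r") as f:
    print(f["configurations/traj_0010/gauge_links"].chunks)
```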
1502.00652
|
Lubor Ladicky
|
\v{L}ubor Ladick\'y, Christian H\"ane and Marc Pollefeys
|
Learning the Matching Function
|
rejected from ACCV 2014 and probably from CVPR 2015
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The matching function for the problem of stereo reconstruction or optical
flow has been traditionally designed as a function of the distance between the
features describing matched pixels. This approach works under the assumption that
the appearance of pixels in two stereo cameras or in two consecutive video
frames does not change dramatically. However, this might not be the case if we
try to match pixels over a large interval of time.
In this paper we propose a method which learns the matching function and
automatically finds the space of allowed changes in visual appearance, such as
those due to motion blur, chromatic distortions, different colour calibration or
seasonal changes. Furthermore, it automatically learns the importance of
matching scores of contextual features at different relative locations and
scales. The proposed classifier gives reliable estimates of pixel disparities
even without any form of regularization.
We evaluated our method on two standard problems - stereo matching on KITTI
outdoor dataset, optical flow on Sintel data set, and on newly introduced
TimeLapse change detection dataset. Our algorithm obtained very promising
results comparable to the state-of-the-art.
|
[
{
"version": "v1",
"created": "Mon, 2 Feb 2015 21:20:43 GMT"
}
] | 2015-02-04T00:00:00 |
[
[
"Ladický",
"Ľubor",
""
],
[
"Häne",
"Christian",
""
],
[
"Pollefeys",
"Marc",
""
]
] |
TITLE: Learning the Matching Function
ABSTRACT: The matching function for the problem of stereo reconstruction or optical
flow has been traditionally designed as a function of the distance between the
features describing matched pixels. This approach works under the assumption that
the appearance of pixels in two stereo cameras or in two consecutive video
frames does not change dramatically. However, this might not be the case if we
try to match pixels over a large interval of time.
In this paper we propose a method which learns the matching function and
automatically finds the space of allowed changes in visual appearance, such as
those due to motion blur, chromatic distortions, different colour calibration or
seasonal changes. Furthermore, it automatically learns the importance of
matching scores of contextual features at different relative locations and
scales. The proposed classifier gives reliable estimates of pixel disparities
even without any form of regularization.
We evaluated our method on two standard problems - stereo matching on KITTI
outdoor dataset, optical flow on Sintel data set, and on newly introduced
TimeLapse change detection dataset. Our algorithm obtained very promising
results comparable to the state-of-the-art.
|
no_new_dataset
| 0.902952 |
1502.00712
|
Liang Lin
|
Zhanglin Peng, Liang Lin, Ruimao Zhang, Jing Xu
|
Deep Boosting: Layered Feature Mining for General Image Classification
|
6 pages, 4 figures, ICME 2014
|
Multimedia and Expo (ICME), 2014 IEEE International Conference on
, vol., no., pp.1,6, 14-18 July 2014
|
10.1109/ICME.2014.6890323
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Constructing effective representations is a critical but challenging problem
in multimedia understanding. The traditional handcrafted features often rely on
domain knowledge, limiting the performance of existing methods. This paper
discusses a novel computational architecture for general image feature mining,
which assembles the primitive filters (i.e. Gabor wavelets) into compositional
features in a layer-wise manner. In each layer, we produce a number of base
classifiers (i.e. regression stumps) associated with the generated features,
and discover informative compositions by using the boosting algorithm. The
output compositional features of each layer are treated as the base components
to build up the next layer. Our framework is able to generate expressive image
representations while inducing very discriminative functions for image
classification. The experiments are conducted on several public datasets, and
we demonstrate superior performances over state-of-the-art approaches.
|
[
{
"version": "v1",
"created": "Tue, 3 Feb 2015 02:44:10 GMT"
}
] | 2015-02-04T00:00:00 |
[
[
"Peng",
"Zhanglin",
""
],
[
"Lin",
"Liang",
""
],
[
"Zhang",
"Ruimao",
""
],
[
"Xu",
"Jing",
""
]
] |
TITLE: Deep Boosting: Layered Feature Mining for General Image Classification
ABSTRACT: Constructing effective representations is a critical but challenging problem
in multimedia understanding. The traditional handcrafted features often rely on
domain knowledge, limiting the performance of existing methods. This paper
discusses a novel computational architecture for general image feature mining,
which assembles the primitive filters (i.e. Gabor wavelets) into compositional
features in a layer-wise manner. In each layer, we produce a number of base
classifiers (i.e. regression stumps) associated with the generated features,
and discover informative compositions by using the boosting algorithm. The
output compositional features of each layer are treated as the base components
to build up the next layer. Our framework is able to generate expressive image
representations while inducing very discriminative functions for image
classification. The experiments are conducted on several public datasets, and
we demonstrate superior performances over state-of-the-art approaches.
|
no_new_dataset
| 0.94868 |
1502.00717
|
Hongyuan Zhu
|
Hongyuan Zhu, Fanman Meng, Jianfei Cai, Shijian Lu
|
Beyond Pixels: A Comprehensive Survey from Bottom-up to Semantic Image
Segmentation and Cosegmentation
|
submitted to Elsevier Journal of Visual Communications and Image
Representation
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Image segmentation refers to the process of dividing an image into
nonoverlapping meaningful regions according to human perception, which has
become a classic topic since the early ages of computer vision. A lot of
research has been conducted and has resulted in many applications. However,
while many segmentation algorithms exist, there are only a few sparse and
outdated summaries available, and an overview of the recent achievements and
issues is lacking. We aim to provide a comprehensive review of the recent
progress in this field. Covering 180 publications, we give an overview of broad
areas of segmentation topics including not only the classic bottom-up
approaches, but also the recent development in superpixel, interactive methods,
object proposals, semantic image parsing and image cosegmentation. In addition,
we also review the existing influential datasets and evaluation metrics.
Finally, we suggest some design flavors and research directions for future
research in image segmentation.
|
[
{
"version": "v1",
"created": "Tue, 3 Feb 2015 03:00:52 GMT"
}
] | 2015-02-04T00:00:00 |
[
[
"Zhu",
"Hongyuan",
""
],
[
"Meng",
"Fanman",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Lu",
"Shijian",
""
]
] |
TITLE: Beyond Pixels: A Comprehensive Survey from Bottom-up to Semantic Image
Segmentation and Cosegmentation
ABSTRACT: Image segmentation refers to the process of dividing an image into
nonoverlapping meaningful regions according to human perception, which has
become a classic topic since the early ages of computer vision. A lot of
research has been conducted and has resulted in many applications. However,
while many segmentation algorithms exist, there are only a few sparse and
outdated summaries available, and an overview of the recent achievements and
issues is lacking. We aim to provide a comprehensive review of the recent
progress in this field. Covering 180 publications, we give an overview of broad
areas of segmentation topics including not only the classic bottom-up
approaches, but also the recent development in superpixel, interactive methods,
object proposals, semantic image parsing and image cosegmentation. In addition,
we also review the existing influential datasets and evaluation metrics.
Finally, we suggest some design flavors and research directions for future
research in image segmentation.
|
no_new_dataset
| 0.945701 |
1502.00725
|
Hongwei Li
|
Hongwei Li and Qiang Liu
|
Cheaper and Better: Selecting Good Workers for Crowdsourcing
| null | null | null | null |
stat.ML cs.AI cs.LG stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crowdsourcing provides a popular paradigm for data collection at scale. We
study the problem of selecting subsets of workers from a given worker pool to
maximize the accuracy under a budget constraint. One natural question is
whether we should hire as many workers as the budget allows, or restrict to a
small number of top-quality workers. By theoretically analyzing the error rate
of a typical setting in crowdsourcing, we frame the worker selection problem
into a combinatorial optimization problem and propose an algorithm to solve it
efficiently. Empirical results on both simulated and real-world datasets show
that our algorithm is able to select a small number of high-quality workers,
and performs as well as, and sometimes even better than, the much larger crowds as
the budget allows.
|
[
{
"version": "v1",
"created": "Tue, 3 Feb 2015 03:45:48 GMT"
}
] | 2015-02-04T00:00:00 |
[
[
"Li",
"Hongwei",
""
],
[
"Liu",
"Qiang",
""
]
] |
TITLE: Cheaper and Better: Selecting Good Workers for Crowdsourcing
ABSTRACT: Crowdsourcing provides a popular paradigm for data collection at scale. We
study the problem of selecting subsets of workers from a given worker pool to
maximize the accuracy under a budget constraint. One natural question is
whether we should hire as many workers as the budget allows, or restrict to a
small number of top-quality workers. By theoretically analyzing the error rate
of a typical setting in crowdsourcing, we frame the worker selection problem
into a combinatorial optimization problem and propose an algorithm to solve it
efficiently. Empirical results on both simulated and real-world datasets show
that our algorithm is able to select a small number of high-quality workers,
and performs as well as, and sometimes even better than, the much larger crowds as
the budget allows.
|
no_new_dataset
| 0.952442 |
1502.00739
|
Liang Lin
|
Wei Yang, Ping Luo, Liang Lin
|
Clothing Co-Parsing by Joint Image Segmentation and Labeling
|
8 pages, 5 figures, CVPR 2014
|
Computer Vision and Pattern Recognition (CVPR), 2014 IEEE
Conference on , vol., no., pp.3182,3189, 23-28 June 2014
|
10.1109/CVPR.2014.407
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims at developing an integrated system of clothing co-parsing, in
order to jointly parse a set of clothing images (unsegmented but annotated with
tags) into semantic configurations. We propose a data-driven framework
consisting of two phases of inference. The first phase, referred to as "image
co-segmentation", iterates to extract consistent regions on images and jointly
refines the regions over all images by employing the exemplar-SVM (E-SVM)
technique [23]. In the second phase (i.e. "region co-labeling"), we construct a
multi-image graphical model by taking the segmented regions as vertices, and
incorporate several contexts of clothing configuration (e.g., item location and
mutual interactions). The joint label assignment can be solved using the
efficient Graph Cuts algorithm. In addition to evaluating our framework on the
Fashionista dataset [30], we construct a dataset called CCP consisting of 2098
high-resolution street fashion photos to demonstrate the performance of our
system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89%
recognition rate on the Fashionista and the CCP datasets, respectively, which
are superior compared with state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Tue, 3 Feb 2015 04:59:41 GMT"
}
] | 2015-02-04T00:00:00 |
[
[
"Yang",
"Wei",
""
],
[
"Luo",
"Ping",
""
],
[
"Lin",
"Liang",
""
]
] |
TITLE: Clothing Co-Parsing by Joint Image Segmentation and Labeling
ABSTRACT: This paper aims at developing an integrated system of clothing co-parsing, in
order to jointly parse a set of clothing images (unsegmented but annotated with
tags) into semantic configurations. We propose a data-driven framework
consisting of two phases of inference. The first phase, referred to as "image
co-segmentation", iterates to extract consistent regions on images and jointly
refines the regions over all images by employing the exemplar-SVM (E-SVM)
technique [23]. In the second phase (i.e. "region co-labeling"), we construct a
multi-image graphical model by taking the segmented regions as vertices, and
incorporate several contexts of clothing configuration (e.g., item location and
mutual interactions). The joint label assignment can be solved using the
efficient Graph Cuts algorithm. In addition to evaluating our framework on the
Fashionista dataset [30], we construct a dataset called CCP consisting of 2098
high-resolution street fashion photos to demonstrate the performance of our
system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89%
recognition rate on the Fashionista and the CCP datasets, respectively, which
are superior compared with state-of-the-art methods.
|
new_dataset
| 0.956675 |
1502.00750
|
Liang Lin
|
Xiaodan Liang, Qingxing Cao, Rui Huang, Liang Lin
|
Recognizing Focal Liver Lesions in Contrast-Enhanced Ultrasound with
Discriminatively Trained Spatio-Temporal Model
|
5 pages, 1 figures
|
Biomedical Imaging (ISBI), 2014 IEEE 11th International Symposium
on , vol., no., pp.1184-1187, April 2014
|
10.1109/ISBI.2014.6868087
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of this study is to provide an automatic computational framework to
assist clinicians in diagnosing Focal Liver Lesions (FLLs) in
Contrast-Enhancement Ultrasound (CEUS). We represent FLLs in a CEUS video clip
as an ensemble of Region-of-Interests (ROIs), whose locations are modeled as
latent variables in a discriminative model. Different types of FLLs are
characterized by both spatial and temporal enhancement patterns of the ROIs.
The model is learned by iteratively inferring the optimal ROI locations and
optimizing the model parameters. To efficiently search the optimal spatial and
temporal locations of the ROIs, we propose a data-driven inference algorithm by
combining effective spatial and temporal pruning. The experiments show that our
method achieves promising results on the largest dataset in the literature (to
the best of our knowledge), which we have made publicly available.
|
[
{
"version": "v1",
"created": "Tue, 3 Feb 2015 06:14:30 GMT"
}
] | 2015-02-04T00:00:00 |
[
[
"Liang",
"Xiaodan",
""
],
[
"Cao",
"Qingxing",
""
],
[
"Huang",
"Rui",
""
],
[
"Lin",
"Liang",
""
]
] |
TITLE: Recognizing Focal Liver Lesions in Contrast-Enhanced Ultrasound with
Discriminatively Trained Spatio-Temporal Model
ABSTRACT: The aim of this study is to provide an automatic computational framework to
assist clinicians in diagnosing Focal Liver Lesions (FLLs) in
Contrast-Enhancement Ultrasound (CEUS). We represent FLLs in a CEUS video clip
as an ensemble of Region-of-Interests (ROIs), whose locations are modeled as
latent variables in a discriminative model. Different types of FLLs are
characterized by both spatial and temporal enhancement patterns of the ROIs.
The model is learned by iteratively inferring the optimal ROI locations and
optimizing the model parameters. To efficiently search the optimal spatial and
temporal locations of the ROIs, we propose a data-driven inference algorithm by
combining effective spatial and temporal pruning. The experiments show that our
method achieves promising results on the largest dataset in the literature (to
the best of our knowledge), which we have made publicly available.
|
no_new_dataset
| 0.948632 |
1403.0388
|
Mohammadzaman Zamani
|
Mohammadzaman Zamani, Hamid Beigy, and Amirreza Shaban
|
Cascading Randomized Weighted Majority: A New Online Ensemble Learning
Algorithm
|
15 pages, 3 figures
| null | null | null |
stat.ML cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increasing volume of data in the world, the best approach for
learning from this data is to exploit an online learning algorithm. Online
ensemble methods are online algorithms which take advantage of an ensemble of
classifiers to predict labels of data. Prediction with expert advice is a
well-studied problem in the online ensemble learning literature. The Weighted
Majority algorithm and the randomized weighted majority (RWM) are the most
well-known solutions to this problem, aiming to converge to the best expert.
Since, among the experts, the best one does not necessarily have the minimum
error in all regions of the data space, defining specific regions and converging to
the best expert in each of these regions will lead to a better result. In this
paper, we aim to resolve this defect of RWM algorithms by proposing a novel
online ensemble algorithm to the problem of prediction with expert advice. We
propose a cascading version of RWM to achieve not only better experimental
results but also a better error bound for sufficiently large datasets.
|
[
{
"version": "v1",
"created": "Mon, 3 Mar 2014 11:05:10 GMT"
},
{
"version": "v2",
"created": "Fri, 5 Dec 2014 17:57:03 GMT"
},
{
"version": "v3",
"created": "Sun, 4 Jan 2015 03:01:38 GMT"
},
{
"version": "v4",
"created": "Mon, 2 Feb 2015 17:18:43 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Zamani",
"Mohammadzaman",
""
],
[
"Beigy",
"Hamid",
""
],
[
"Shaban",
"Amirreza",
""
]
] |
TITLE: Cascading Randomized Weighted Majority: A New Online Ensemble Learning
Algorithm
ABSTRACT: With the increasing volume of data in the world, the best approach for
learning from this data is to exploit an online learning algorithm. Online
ensemble methods are online algorithms which take advantage of an ensemble of
classifiers to predict labels of data. Prediction with expert advice is a
well-studied problem in the online ensemble learning literature. The Weighted
Majority algorithm and the randomized weighted majority (RWM) are the most
well-known solutions to this problem, aiming to converge to the best expert.
Since, among the experts, the best one does not necessarily have the minimum
error in all regions of the data space, defining specific regions and converging to
the best expert in each of these regions will lead to a better result. In this
paper, we aim to resolve this defect of RWM algorithms by proposing a novel
online ensemble algorithm to the problem of prediction with expert advice. We
propose a cascading version of RWM to achieve not only better experimental
results but also a better error bound for sufficiently large datasets.
|
no_new_dataset
| 0.949342 |
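Aside on the record above (1403.0388): a sketch of the plain randomized weighted majority (RWM) building block the abstract starts from; the cascading, region-specific construction proposed in the paper is not reproduced. The penalty factor `beta` and the toy expert pool are assumptions.

```python
import numpy as np

class RWM:
    def __init__(self, n_experts, beta=0.8, seed=0):
        self.w = np.ones(n_experts)           # one weight per expert
        self.beta = beta
        self.rng = np.random.default_rng(seed)

    def predict(self, expert_predictions):
        # Follow one expert, chosen with probability proportional to its weight.
        i = self.rng.choice(len(self.w), p=self.w / self.w.sum())
        return expert_predictions[i]

    def update(self, expert_predictions, true_label):
        # Multiplicatively penalize every expert that was wrong on this round.
        wrong = np.asarray(expert_predictions) != true_label
        self.w[wrong] *= self.beta

# Usage: three experts, the first one is always right, the third always wrong.
rng = np.random.default_rng(1)
learner, mistakes = RWM(n_experts=3), 0
for t in range(1000):
    truth = int(rng.integers(2))
    preds = [truth, int(rng.integers(2)), 1 - truth]
    mistakes += int(learner.predict(preds) != truth)
    learner.update(preds, truth)
print("mistake rate:", mistakes / 1000)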
1408.4910
|
Kai Zhao
|
Kai Zhao, Mirco Musolesi, Pan Hui, Weixiong Rao and Sasu Tarkoma
|
Explaining the Power-law Distribution of Human Mobility Through
Transportation Modality Decomposition
| null | null | null | null |
physics.soc-ph cs.SI physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human mobility has been empirically observed to exhibit Levy flight
characteristics and behaviour with power-law distributed jump size. The
fundamental mechanisms behind this behaviour have not yet been fully explained.
In this paper, we analyze urban human mobility and we propose to explain the
Levy walk behaviour observed in human mobility patterns by decomposing them
into different classes according to the different transportation modes, such as
Walk/Run, Bicycle, Train/Subway or Car/Taxi/Bus. Our analysis is based on two
real-life GPS datasets containing approximately 10 and 20 million GPS samples
with transportation mode information. We show that human mobility can be
modelled as a mixture of different transportation modes, and that these single
movement patterns can be approximated by a lognormal distribution rather than a
power-law distribution. Then, we demonstrate that the mixture of the decomposed
lognormal flight distributions associated with each modality is a power-law
distribution, providing an explanation to the emergence of Levy Walk patterns
that characterize human mobility patterns.
|
[
{
"version": "v1",
"created": "Thu, 21 Aug 2014 08:19:19 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Sep 2014 09:23:48 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Feb 2015 08:30:34 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Zhao",
"Kai",
""
],
[
"Musolesi",
"Mirco",
""
],
[
"Hui",
"Pan",
""
],
[
"Rao",
"Weixiong",
""
],
[
"Tarkoma",
"Sasu",
""
]
] |
TITLE: Explaining the Power-law Distribution of Human Mobility Through
Transportation Modality Decomposition
ABSTRACT: Human mobility has been empirically observed to exhibit Levy flight
characteristics and behaviour with power-law distributed jump size. The
fundamental mechanisms behind this behaviour have not yet been fully explained.
In this paper, we analyze urban human mobility and we propose to explain the
Levy walk behaviour observed in human mobility patterns by decomposing them
into different classes according to the different transportation modes, such as
Walk/Run, Bicycle, Train/Subway or Car/Taxi/Bus. Our analysis is based on two
real-life GPS datasets containing approximately 10 and 20 million GPS samples
with transportation mode information. We show that human mobility can be
modelled as a mixture of different transportation modes, and that these single
movement patterns can be approximated by a lognormal distribution rather than a
power-law distribution. Then, we demonstrate that the mixture of the decomposed
lognormal flight distributions associated with each modality is a power-law
distribution, providing an explanation to the emergence of Levy Walk patterns
that characterize human mobility patterns.
|
no_new_dataset
| 0.948775 |
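Aside on the record above (1408.4910): a small simulation of the decomposition argument, per-mode jump lengths drawn from lognormal distributions and pooled to produce a heavy, roughly power-law tail. The mode list, lognormal parameters and mixing weights below are made-up values, not fits to the paper's GPS data.

```python
import numpy as np

rng = np.random.default_rng(0)
modes = {          # (mean, sigma) of the log jump length, plus mixing weight
    "walk":  (dict(mean=4.0, sigma=0.8), 0.45),
    "bike":  (dict(mean=6.0, sigma=0.7), 0.20),
    "car":   (dict(mean=7.5, sigma=0.9), 0.25),
    "train": (dict(mean=9.0, sigma=0.6), 0.10),
}
samples = np.concatenate([
    rng.lognormal(size=int(2e5 * w), **params) for params, w in modes.values()
])

# Empirical survival function P(jump > r) on a log-log scale: an approximately
# straight tail segment is the signature of power-law-like behaviour.
r = np.logspace(np.log10(np.percentile(samples, 50)), np.log10(samples.max()), 30)
ccdf = np.array([(samples > ri).mean() for ri in r])
slope = np.polyfit(np.log(r[:-5]), np.log(np.clip(ccdf[:-5], 1e-12, None)), 1)[0]
print("tail slope of log-log CCDF ~", round(slope, 2))
```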
1501.07093
|
Mallenahalli Naresh Kumar Prof. Dr.
|
V. Sree Hari Rao and M. Naresh Kumar
|
Novel Approaches for Predicting Risk Factors of Atherosclerosis
|
7 pages, 2 figures
|
Biomedical and Health Informatics, IEEE Journal of , vol.17, no.1,
pp.183,189, Jan. 2013
|
10.1109/TITB.2012.2227271
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coronary heart disease (CHD) caused by hardening of artery walls due to
cholesterol, known as atherosclerosis, is responsible for a large number of deaths
world-wide. The disease progression is slow, asymptomatic and may lead to
sudden cardiac arrest, stroke or myocardial infarction. Presently, imaging
techniques are being employed to understand the molecular and metabolic
activity of atherosclerotic plaques to estimate the risk. Though imaging
methods are able to provide some information on plaque metabolism they lack the
required resolution and sensitivity for detection. In this paper we consider
the clinical observations and habits of individuals for predicting the risk
factors of CHD. The identification of risk factors helps in stratifying
patients for further intensive tests such as nuclear imaging or coronary
angiography. We present a novel approach for predicting the risk factors of
atherosclerosis with an in-built imputation algorithm and particle swarm
optimization (PSO). We compare the performance of our methodology with other
machine learning techniques on the STULONG dataset, which is based on a longitudinal
study of middle-aged individuals lasting for twenty years. Our methodology,
powered by PSO search, has identified physical inactivity as one of the risk
factors for the onset of atherosclerosis, in addition to other already known
factors. The decision rules extracted by our methodology are able to predict
the risk factors with an accuracy of $99.73%$ which is higher than the
accuracies obtained by application of the state-of-the-art machine learning
techniques presently being employed in the identification of atherosclerosis
risk studies.
|
[
{
"version": "v1",
"created": "Wed, 28 Jan 2015 13:26:02 GMT"
},
{
"version": "v2",
"created": "Sat, 31 Jan 2015 03:22:15 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Rao",
"V. Sree Hari",
""
],
[
"Kumar",
"M. Naresh",
""
]
] |
TITLE: Novel Approaches for Predicting Risk Factors of Atherosclerosis
ABSTRACT: Coronary heart disease (CHD) caused by hardening of artery walls due to
cholesterol, known as atherosclerosis, is responsible for a large number of deaths
world-wide. The disease progression is slow, asymptomatic and may lead to
sudden cardiac arrest, stroke or myocardial infarction. Presently, imaging
techniques are being employed to understand the molecular and metabolic
activity of atherosclerotic plaques to estimate the risk. Though imaging
methods are able to provide some information on plaque metabolism they lack the
required resolution and sensitivity for detection. In this paper we consider
the clinical observations and habits of individuals for predicting the risk
factors of CHD. The identification of risk factors helps in stratifying
patients for further intensive tests such as nuclear imaging or coronary
angiography. We present a novel approach for predicting the risk factors of
atherosclerosis with an in-built imputation algorithm and particle swarm
optimization (PSO). We compare the performance of our methodology with other
machine learning techniques on the STULONG dataset, which is based on a longitudinal
study of middle-aged individuals lasting for twenty years. Our methodology,
powered by PSO search, has identified physical inactivity as one of the risk
factors for the onset of atherosclerosis, in addition to other already known
factors. The decision rules extracted by our methodology are able to predict
the risk factors with an accuracy of $99.73%$ which is higher than the
accuracies obtained by application of the state-of-the-art machine learning
techniques presently being employed in the identification of atherosclerosis
risk studies.
|
no_new_dataset
| 0.943815 |
1502.00030
|
Sravanthi Bondugula
|
Sravanthi Bondugula, Varun Manjunatha, Larry S. Davis, David Doermann
|
SHOE: Supervised Hashing with Output Embeddings
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a supervised binary encoding scheme for image retrieval that
learns projections by taking into account similarity between classes obtained
from output embeddings. Our motivation is that binary hash codes learned in
this way improve both the visual quality of retrieval results and existing
supervised hashing schemes. We employ a sequential greedy optimization that
learns relationship aware projections by minimizing the difference between
inner products of binary codes and output embedding vectors. We develop a joint
optimization framework to learn projections which improve the accuracy of
supervised hashing over the current state of the art with respect to standard
and sibling evaluation metrics. We further boost performance by applying the
supervised dimensionality reduction technique on kernelized input CNN features.
Experiments are performed on three datasets: CUB-2011, SUN-Attribute and
ImageNet ILSVRC 2010. As a by-product of our method, we show that using a
simple k-nn pooling classifier with our discriminative codes improves over the
complex classification models on fine-grained datasets like CUB and offers an
impressive compression ratio of 1024 on CNN features.
|
[
{
"version": "v1",
"created": "Fri, 30 Jan 2015 22:04:12 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Bondugula",
"Sravanthi",
""
],
[
"Manjunatha",
"Varun",
""
],
[
"Davis",
"Larry S.",
""
],
[
"Doermann",
"David",
""
]
] |
TITLE: SHOE: Supervised Hashing with Output Embeddings
ABSTRACT: We present a supervised binary encoding scheme for image retrieval that
learns projections by taking into account similarity between classes obtained
from output embeddings. Our motivation is that binary hash codes learned in
this way improve both the visual quality of retrieval results and existing
supervised hashing schemes. We employ a sequential greedy optimization that
learns relationship aware projections by minimizing the difference between
inner products of binary codes and output embedding vectors. We develop a joint
optimization framework to learn projections which improve the accuracy of
supervised hashing over the current state of the art with respect to standard
and sibling evaluation metrics. We further boost performance by applying the
supervised dimensionality reduction technique on kernelized input CNN features.
Experiments are performed on three datasets: CUB-2011, SUN-Attribute and
ImageNet ILSVRC 2010. As a by-product of our method, we show that using a
simple k-nn pooling classifier with our discriminative codes improves over the
complex classification models on fine-grained datasets like CUB and offers an
impressive compression ratio of 1024 on CNN features.
|
no_new_dataset
| 0.944434 |
1502.00046
|
Davis King
|
Davis E. King
|
Max-Margin Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most object detection methods operate by applying a binary classifier to
sub-windows of an image, followed by a non-maximum suppression step where
detections on overlapping sub-windows are removed. Since the number of possible
sub-windows in even moderately sized image datasets is extremely large, the
classifier is typically learned from only a subset of the windows. This avoids
the computational difficulty of dealing with the entire set of sub-windows,
however, as we will show in this paper, it leads to sub-optimal detector
performance.
In particular, the main contribution of this paper is the introduction of a
new method, Max-Margin Object Detection (MMOD), for learning to detect objects
in images. This method does not perform any sub-sampling, but instead optimizes
over all sub-windows. MMOD can be used to improve any object detection method
which is linear in the learned parameters, such as HOG or bag-of-visual-word
models. Using this approach we show substantial performance gains on three
publicly available datasets. Strikingly, we show that a single rigid HOG filter
can outperform a state-of-the-art deformable part model on the Face Detection
Data Set and Benchmark when the HOG filter is learned via MMOD.
|
[
{
"version": "v1",
"created": "Sat, 31 Jan 2015 00:32:34 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"King",
"Davis E.",
""
]
] |
TITLE: Max-Margin Object Detection
ABSTRACT: Most object detection methods operate by applying a binary classifier to
sub-windows of an image, followed by a non-maximum suppression step where
detections on overlapping sub-windows are removed. Since the number of possible
sub-windows in even moderately sized image datasets is extremely large, the
classifier is typically learned from only a subset of the windows. This avoids
the computational difficulty of dealing with the entire set of sub-windows,
however, as we will show in this paper, it leads to sub-optimal detector
performance.
In particular, the main contribution of this paper is the introduction of a
new method, Max-Margin Object Detection (MMOD), for learning to detect objects
in images. This method does not perform any sub-sampling, but instead optimizes
over all sub-windows. MMOD can be used to improve any object detection method
which is linear in the learned parameters, such as HOG or bag-of-visual-word
models. Using this approach we show substantial performance gains on three
publicly available datasets. Strikingly, we show that a single rigid HOG filter
can outperform a state-of-the-art deformable part model on the Face Detection
Data Set and Benchmark when the HOG filter is learned via MMOD.
|
no_new_dataset
| 0.945651 |
1502.00093
|
Sotetsu Koyamada
|
Sotetsu Koyamada and Yumi Shikauchi and Ken Nakae and Masanori Koyama
and Shin Ishii
|
Deep learning of fMRI big data: a novel approach to subject-transfer
decoding
| null | null | null | null |
stat.ML cs.LG q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a technology to read brain states from measurable brain activities, brain
decoding is widely applied in industries and medical sciences. In spite of
high demands in these applications for a universal decoder that can be applied
to all individuals simultaneously, large variation in brain activities across
individuals has limited the scope of many studies to the development of
individual-specific decoders. In this study, we used deep neural network (DNN),
a nonlinear hierarchical model, to construct a subject-transfer decoder. Our
decoder is the first successful DNN-based subject-transfer decoder. When
applied to a large-scale functional magnetic resonance imaging (fMRI) database,
our DNN-based decoder achieved higher decoding accuracy than other baseline
methods, including support vector machine (SVM). In order to analyze the
knowledge acquired by this decoder, we applied principal sensitivity analysis
(PSA) to the decoder and visualized the discriminative features that are common
to all subjects in the dataset. Our PSA successfully visualized the
subject-independent features contributing to the subject-transferability of the
trained decoder.
|
[
{
"version": "v1",
"created": "Sat, 31 Jan 2015 11:58:26 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Koyamada",
"Sotetsu",
""
],
[
"Shikauchi",
"Yumi",
""
],
[
"Nakae",
"Ken",
""
],
[
"Koyama",
"Masanori",
""
],
[
"Ishii",
"Shin",
""
]
] |
TITLE: Deep learning of fMRI big data: a novel approach to subject-transfer
decoding
ABSTRACT: As a technology to read brain states from measurable brain activities, brain
decoding is widely applied in industries and medical sciences. In spite of
high demands in these applications for a universal decoder that can be applied
to all individuals simultaneously, large variation in brain activities across
individuals has limited the scope of many studies to the development of
individual-specific decoders. In this study, we used deep neural network (DNN),
a nonlinear hierarchical model, to construct a subject-transfer decoder. Our
decoder is the first successful DNN-based subject-transfer decoder. When
applied to a large-scale functional magnetic resonance imaging (fMRI) database,
our DNN-based decoder achieved higher decoding accuracy than other baseline
methods, including support vector machine (SVM). In order to analyze the
knowledge acquired by this decoder, we applied principal sensitivity analysis
(PSA) to the decoder and visualized the discriminative features that are common
to all subjects in the dataset. Our PSA successfully visualized the
subject-independent features contributing to the subject-transferability of the
trained decoder.
|
no_new_dataset
| 0.935169 |
1502.00141
|
Mathieu Lagrange
|
Mathieu Lagrange, Gr\'egoire Lafay, Mathias Rossignol, Emmanouil
Benetos, Axel Roebel
|
An evaluation framework for event detection using a morphological model
of acoustic scenes
| null | null | null | null |
stat.ML cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a model of environmental acoustic scenes which adopts a
morphological approach by abstracting temporal structures of acoustic scenes.
To demonstrate its potential, this model is employed to evaluate the
performance of a large set of acoustic event detection systems. This model
allows us to explicitly control key morphological aspects of the acoustic scene
and isolate their impact on the performance of the system under evaluation.
Thus, more information can be gained on the behavior of evaluated systems,
providing guidance for further improvements. The proposed model is validated
using submitted systems from the IEEE DCASE Challenge; results indicate that
the proposed scheme is able to successfully build datasets useful for
evaluating some aspects of the performance of event detection systems, more
particularly their robustness to new listening conditions and the increasing
level of background sounds.
|
[
{
"version": "v1",
"created": "Sat, 31 Jan 2015 18:12:34 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Lagrange",
"Mathieu",
""
],
[
"Lafay",
"Grégoire",
""
],
[
"Rossignol",
"Mathias",
""
],
[
"Benetos",
"Emmanouil",
""
],
[
"Roebel",
"Axel",
""
]
] |
TITLE: An evaluation framework for event detection using a morphological model
of acoustic scenes
ABSTRACT: This paper introduces a model of environmental acoustic scenes which adopts a
morphological approach by abstracting temporal structures of acoustic scenes.
To demonstrate its potential, this model is employed to evaluate the
performance of a large set of acoustic event detection systems. This model
allows us to explicitly control key morphological aspects of the acoustic scene
and isolate their impact on the performance of the system under evaluation.
Thus, more information can be gained on the behavior of evaluated systems,
providing guidance for further improvements. The proposed model is validated
using submitted systems from the IEEE DCASE Challenge; results indicate that
the proposed scheme is able to successfully build datasets useful for
evaluating some aspects of the performance of event detection systems, more
particularly their robustness to new listening conditions and the increasing
level of background sounds.
|
no_new_dataset
| 0.949295 |
1502.00166
|
Marijn ten Thij
|
Marijn ten Thij, Tanneke Ouboter, Daniel Worm, Nelly Litvak, Hans van
den Berg and Sandjai Bhulai
|
Modelling of trends in Twitter using retweet graph dynamics
|
16 pages, 5 figures, presented at WAW 2014
|
Algorithms and Models for the Web Graph, 11th International
Workshop, WAW 2014, Beijing, China, December 17-18, 2014, Proceedings pp
132-147, Lecture Notes in Computer Science, Springer
|
10.1007/978-3-319-13123-8_11
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we model user behaviour in Twitter to capture the emergence of
trending topics. For this purpose, we first extensively analyse tweet datasets
of several different events. In particular, for these datasets, we construct
and investigate the retweet graphs. We find that the retweet graph for a
trending topic has a relatively dense largest connected component (LCC). Next,
based on the insights obtained from the analyses of the datasets, we design a
mathematical model that describes the evolution of a retweet graph by three
main parameters. We then quantify, analytically and by simulation, the
influence of the model parameters on the basic characteristics of the retweet
graph, such as the density of edges and the size and density of the LCC.
Finally, we put the model in practice, estimate its parameters and compare the
resulting behavior of the model to our datasets.
|
[
{
"version": "v1",
"created": "Sat, 31 Jan 2015 21:45:17 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Thij",
"Marijn ten",
""
],
[
"Ouboter",
"Tanneke",
""
],
[
"Worm",
"Daniel",
""
],
[
"Litvak",
"Nelly",
""
],
[
"Berg",
"Hans van den",
""
],
[
"Bhulai",
"Sandjai",
""
]
] |
TITLE: Modelling of trends in Twitter using retweet graph dynamics
ABSTRACT: In this paper we model user behaviour in Twitter to capture the emergence of
trending topics. For this purpose, we first extensively analyse tweet datasets
of several different events. In particular, for these datasets, we construct
and investigate the retweet graphs. We find that the retweet graph for a
trending topic has a relatively dense largest connected component (LCC). Next,
based on the insights obtained from the analyses of the datasets, we design a
mathematical model that describes the evolution of a retweet graph by three
main parameters. We then quantify, analytically and by simulation, the
influence of the model parameters on the basic characteristics of the retweet
graph, such as the density of edges and the size and density of the LCC.
Finally, we put the model in practice, estimate its parameters and compare the
resulting behavior of the model to our datasets.
|
no_new_dataset
| 0.950411 |
1502.00192
|
Menglong Zhu
|
Menglong Zhu, Xiaowei Zhou, Kostas Daniilidis
|
Pose and Shape Estimation with Discriminatively Learned Parts
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new approach for estimating the 3D pose and the 3D shape of an
object from a single image. Given a training set of view exemplars, we learn
and select appearance-based discriminative parts which are mapped onto the 3D
model from the training set through a facility location optimization. The
training set of 3D models is summarized into a sparse set of shapes from which
we can generalize by linear combination. Given a test picture, we detect
hypotheses for each part. The main challenge is to select from these hypotheses
and compute the 3D pose and shape coefficients at the same time. To achieve
this, we optimize a function that minimizes simultaneously the geometric
reprojection error as well as the appearance matching of the parts. We apply
the alternating direction method of multipliers (ADMM) to minimize the
resulting convex function. We evaluate our approach on the Fine Grained 3D Car
dataset with superior performance in shape and pose errors. Our main and novel
contribution is the simultaneous solution for part localization, 3D pose and
shape by maximizing both geometric and appearance compatibility.
|
[
{
"version": "v1",
"created": "Sun, 1 Feb 2015 04:09:23 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Zhu",
"Menglong",
""
],
[
"Zhou",
"Xiaowei",
""
],
[
"Daniilidis",
"Kostas",
""
]
] |
TITLE: Pose and Shape Estimation with Discriminatively Learned Parts
ABSTRACT: We introduce a new approach for estimating the 3D pose and the 3D shape of an
object from a single image. Given a training set of view exemplars, we learn
and select appearance-based discriminative parts which are mapped onto the 3D
model from the training set through a facility location optimization. The
training set of 3D models is summarized into a sparse set of shapes from which
we can generalize by linear combination. Given a test picture, we detect
hypotheses for each part. The main challenge is to select from these hypotheses
and compute the 3D pose and shape coefficients at the same time. To achieve
this, we optimize a function that minimizes simultaneously the geometric
reprojection error as well as the appearance matching of the parts. We apply
the alternating direction method of multipliers (ADMM) to minimize the
resulting convex function. We evaluate our approach on the Fine Grained 3D Car
dataset with superior performance in shape and pose errors. Our main and novel
contribution is the simultaneous solution for part localization, 3D pose and
shape by maximizing both geometric and appearance compatibility.
|
no_new_dataset
| 0.943919 |
1502.00231
|
Yishi Zhang
|
Zhijun Chen, Chaozhong Wu, Yishi Zhang, Zhen Huang, Bin Ran, Ming
Zhong, Nengchao Lyu
|
Feature Selection with Redundancy-complementariness Dispersion
|
28 pages, 13 figures, 7 tables
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Feature selection has attracted significant attention in data mining and
machine learning in the past decades. Many existing feature selection methods
eliminate redundancy by measuring pairwise inter-correlation of features,
whereas the complementariness of features and higher inter-correlation among
more than two features are ignored. In this study, a modification item
concerning the complementariness of features is introduced in the evaluation
criterion of features. Additionally, in order to identify the interference
effect of already-selected False Positives (FPs), the
redundancy-complementariness dispersion is also taken into account to adjust
the measurement of pairwise inter-correlation of features. To illustrate the
effectiveness of the proposed method, classification experiments are conducted with
four frequently used classifiers on ten datasets. Classification results verify
the superiority of the proposed method compared with five representative feature
selection methods.
|
[
{
"version": "v1",
"created": "Sun, 1 Feb 2015 10:44:26 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Chen",
"Zhijun",
""
],
[
"Wu",
"Chaozhong",
""
],
[
"Zhang",
"Yishi",
""
],
[
"Huang",
"Zhen",
""
],
[
"Ran",
"Bin",
""
],
[
"Zhong",
"Ming",
""
],
[
"Lyu",
"Nengchao",
""
]
] |
TITLE: Feature Selection with Redundancy-complementariness Dispersion
ABSTRACT: Feature selection has attracted significant attention in data mining and
machine learning in the past decades. Many existing feature selection methods
eliminate redundancy by measuring pairwise inter-correlation of features,
whereas the complementariness of features and higher inter-correlation among
more than two features are ignored. In this study, a modification item
concerning the complementariness of features is introduced in the evaluation
criterion of features. Additionally, in order to identify the interference
effect of already-selected False Positives (FPs), the
redundancy-complementariness dispersion is also taken into account to adjust
the measurement of pairwise inter-correlation of features. To illustrate the
effectiveness of proposed method, classification experiments are applied with
four frequently used classifiers on ten datasets. Classification results verify
the superiority of proposed method compared with five representative feature
selection methods.
|
no_new_dataset
| 0.949295 |
1502.00341
|
Liang Lin
|
Liang Lin, Xiaolong Wang, Wei Yang, Jian-Huang Lai
|
Discriminatively Trained And-Or Graph Models for Object Shape Detection
|
15 pages, 14 figures, TPAMI 2014
|
Pattern Analysis and Machine Intelligence, IEEE Transactions on ,
vol.PP, no.99, pp.1,1, 2014
|
10.1109/TPAMI.2014.2359888
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate a novel reconfigurable part-based model, namely
And-Or graph model, to recognize object shapes in images. Our proposed model
consists of four layers: leaf-nodes at the bottom are local classifiers for
detecting contour fragments; or-nodes above the leaf-nodes function as the
switches to activate their child leaf-nodes, making the model reconfigurable
during inference; and-nodes in a higher layer capture holistic shape
deformations; one root-node on the top, which is also an or-node, activates one
of its child and-nodes to deal with large global variations (e.g. different
poses and views). We propose a novel structural optimization algorithm to
discriminatively train the And-Or model from weakly annotated data. This
algorithm iteratively determines the model structures (e.g. the nodes and their
layouts) along with the parameter learning. On several challenging datasets,
our model demonstrates its effectiveness in robust shape-based object
detection against background clutter and outperforms other state-of-the-art
approaches. We also release a new shape database with annotations, which
includes more than 1500 challenging shape instances, for recognition and
detection.
|
[
{
"version": "v1",
"created": "Mon, 2 Feb 2015 02:04:01 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Lin",
"Liang",
""
],
[
"Wang",
"Xiaolong",
""
],
[
"Yang",
"Wei",
""
],
[
"Lai",
"Jian-Huang",
""
]
] |
TITLE: Discriminatively Trained And-Or Graph Models for Object Shape Detection
ABSTRACT: In this paper, we investigate a novel reconfigurable part-based model, namely
And-Or graph model, to recognize object shapes in images. Our proposed model
consists of four layers: leaf-nodes at the bottom are local classifiers for
detecting contour fragments; or-nodes above the leaf-nodes function as the
switches to activate their child leaf-nodes, making the model reconfigurable
during inference; and-nodes in a higher layer capture holistic shape
deformations; one root-node on the top, which is also an or-node, activates one
of its child and-nodes to deal with large global variations (e.g. different
poses and views). We propose a novel structural optimization algorithm to
discriminatively train the And-Or model from weakly annotated data. This
algorithm iteratively determines the model structures (e.g. the nodes and their
layouts) along with the parameter learning. On several challenging datasets,
our model demonstrates its effectiveness in robust shape-based object
detection against background clutter and outperforms other state-of-the-art
approaches. We also release a new shape database with annotations, which
includes more than 1500 challenging shape instances, for recognition and
detection.
|
no_new_dataset
| 0.904945 |
1502.00363
|
Wangmeng Zuo
|
Wangmeng Zuo, Faqiang Wang, David Zhang, Liang Lin, Yuchi Huang, Deyu
Meng, Lei Zhang
|
Iterated Support Vector Machines for Distance Metric Learning
|
14 pages, 10 figures
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distance metric learning aims to learn from the given training data a valid
distance metric, with which the similarity between data samples can be more
effectively evaluated for classification. Metric learning is often formulated
as a convex or nonconvex optimization problem, while many existing metric
learning algorithms become inefficient for large scale problems. In this paper,
we formulate metric learning as a kernel classification problem, and solve it
by iterated training of support vector machines (SVM). The new formulation is
easy to implement, efficient in training, and tractable for large-scale
problems. Two novel metric learning models, namely Positive-semidefinite
Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained
Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the
global optimality of their solutions. Experimental results on UCI dataset
classification, handwritten digit recognition, face verification and person
re-identification demonstrate that the proposed metric learning methods achieve
higher classification accuracy than state-of-the-art methods and they are
significantly more efficient in training.
|
[
{
"version": "v1",
"created": "Mon, 2 Feb 2015 05:30:44 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Zuo",
"Wangmeng",
""
],
[
"Wang",
"Faqiang",
""
],
[
"Zhang",
"David",
""
],
[
"Lin",
"Liang",
""
],
[
"Huang",
"Yuchi",
""
],
[
"Meng",
"Deyu",
""
],
[
"Zhang",
"Lei",
""
]
] |
TITLE: Iterated Support Vector Machines for Distance Metric Learning
ABSTRACT: Distance metric learning aims to learn from the given training data a valid
distance metric, with which the similarity between data samples can be more
effectively evaluated for classification. Metric learning is often formulated
as a convex or nonconvex optimization problem, while many existing metric
learning algorithms become inefficient for large scale problems. In this paper,
we formulate metric learning as a kernel classification problem, and solve it
by iterated training of support vector machines (SVM). The new formulation is
easy to implement, efficient in training, and tractable for large-scale
problems. Two novel metric learning models, namely Positive-semidefinite
Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained
Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the
global optimality of their solutions. Experimental results on UCI dataset
classification, handwritten digit recognition, face verification and person
re-identification demonstrate that the proposed metric learning methods achieve
higher classification accuracy than state-of-the-art methods and they are
significantly more efficient in training.
|
no_new_dataset
| 0.951863 |
1502.00377
|
Liang Lin
|
Liang Lin, Yongyi Lu, Yan Pan, Xiaowu Chen
|
Integrating Graph Partitioning and Matching for Trajectory Analysis in
Video Surveillance
|
10 pages, 12 figures
|
Image Processing, IEEE Transactions on , vol.21, no.12,
pp.4844-4857, 2012
|
10.1109/TIP.2012.2211373
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to track moving objects over a long range against occlusion,
interruption, and background clutter, this paper proposes a unified approach
for global trajectory analysis. Instead of the traditional frame-by-frame
tracking, our method recovers target trajectories based on a short sequence of
video frames, e.g. $15$ frames. We initially calculate a foreground map at each
frame, as obtained from a state-of-the-art background model. An attribute graph
is then extracted from the foreground map, where the graph vertices are image
primitives represented by the composite features. With this graph
representation, we pose trajectory analysis as a joint task of spatial graph
partitioning and temporal graph matching. The task can be formulated by
maximum a posteriori estimation under the Bayesian framework, in which we integrate the
spatio-temporal contexts and the appearance models. The probabilistic inference
is achieved by a data-driven Markov Chain Monte Carlo (MCMC) algorithm. Given a
period of observed frames, the algorithm simulates an ergodic and aperiodic
Markov Chain, and it visits a sequence of solution states in the joint space of
spatial graph partitioning and temporal graph matching. In the experiments, our
method is tested on several challenging videos from the public datasets of
visual surveillance, and it outperforms the state-of-the-art methods.
|
[
{
"version": "v1",
"created": "Mon, 2 Feb 2015 06:52:47 GMT"
}
] | 2015-02-03T00:00:00 |
[
[
"Lin",
"Liang",
""
],
[
"Lu",
"Yongyi",
""
],
[
"Pan",
"Yan",
""
],
[
"Chen",
"Xiaowu",
""
]
] |
TITLE: Integrating Graph Partitioning and Matching for Trajectory Analysis in
Video Surveillance
ABSTRACT: In order to track moving objects over a long range against occlusion,
interruption, and background clutter, this paper proposes a unified approach
for global trajectory analysis. Instead of the traditional frame-by-frame
tracking, our method recovers target trajectories based on a short sequence of
video frames, e.g. $15$ frames. We initially calculate a foreground map at each
frame, as obtained from a state-of-the-art background model. An attribute graph
is then extracted from the foreground map, where the graph vertices are image
primitives represented by the composite features. With this graph
representation, we pose trajectory analysis as a joint task of spatial graph
partitioning and temporal graph matching. The task can be formulated by
maximum a posteriori estimation under the Bayesian framework, in which we integrate the
spatio-temporal contexts and the appearance models. The probabilistic inference
is achieved by a data-driven Markov Chain Monte Carlo (MCMC) algorithm. Given a
period of observed frames, the algorithm simulates an ergodic and aperiodic
Markov Chain, and it visits a sequence of solution states in the joint space of
spatial graph partitioning and temporal graph matching. In the experiments, our
method is tested on several challenging videos from the public datasets of
visual surveillance, and it outperforms the state-of-the-art methods.
|
no_new_dataset
| 0.951142 |
1407.0316
|
Mahito Sugiyama
|
Mahito Sugiyama, Felipe Llinares L\'opez, Niklas Kasenburg, Karsten M.
Borgwardt
|
Significant Subgraph Mining with Multiple Testing Correction
|
18 pages, 5 figure, accepted to the 2015 SIAM International
Conference on Data Mining (SDM15)
| null | null | null |
stat.ME cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of finding itemsets that are statistically significantly enriched
in a class of transactions is complicated by the need to correct for multiple
hypothesis testing. Pruning untestable hypotheses was recently proposed as a
strategy for this task of significant itemset mining. It was shown to lead to
greater statistical power, the discovery of more truly significant itemsets,
than the standard Bonferroni correction on real-world datasets. An open
question, however, is whether this strategy of excluding untestable hypotheses
also leads to greater statistical power in subgraph mining, in which the number
of hypotheses is much larger than in itemset mining. Here we answer this
question by an empirical investigation on eight popular graph benchmark
datasets. We propose a new efficient search strategy, which always returns the
same solution as the state-of-the-art approach and is approximately two orders
of magnitude faster. Moreover, we exploit the dependence between subgraphs by
considering the effective number of tests and thereby further increase the
statistical power.
|
[
{
"version": "v1",
"created": "Tue, 1 Jul 2014 16:53:51 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Jul 2014 13:39:21 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Jan 2015 16:11:17 GMT"
}
] | 2015-02-02T00:00:00 |
[
[
"Sugiyama",
"Mahito",
""
],
[
"López",
"Felipe Llinares",
""
],
[
"Kasenburg",
"Niklas",
""
],
[
"Borgwardt",
"Karsten M.",
""
]
] |
TITLE: Significant Subgraph Mining with Multiple Testing Correction
ABSTRACT: The problem of finding itemsets that are statistically significantly enriched
in a class of transactions is complicated by the need to correct for multiple
hypothesis testing. Pruning untestable hypotheses was recently proposed as a
strategy for this task of significant itemset mining. It was shown to lead to
greater statistical power, the discovery of more truly significant itemsets,
than the standard Bonferroni correction on real-world datasets. An open
question, however, is whether this strategy of excluding untestable hypotheses
also leads to greater statistical power in subgraph mining, in which the number
of hypotheses is much larger than in itemset mining. Here we answer this
question by an empirical investigation on eight popular graph benchmark
datasets. We propose a new efficient search strategy, which always returns the
same solution as the state-of-the-art approach and is approximately two orders
of magnitude faster. Moreover, we exploit the dependence between subgraphs by
considering the effective number of tests and thereby further increase the
statistical power.
|
no_new_dataset
| 0.953057 |
1409.0575
|
Olga Russakovsky
|
Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and
Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya
Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei
|
ImageNet Large Scale Visual Recognition Challenge
|
43 pages, 16 figures. v3 includes additional comparisons with PASCAL
VOC (per-category comparisons in Table 3, distribution of localization
difficulty in Fig 16), a list of queries used for obtaining object detection
images (Appendix C), and some additional references
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ImageNet Large Scale Visual Recognition Challenge is a benchmark in
object category classification and detection on hundreds of object categories
and millions of images. The challenge has been run annually from 2010 to
present, attracting participation from more than fifty institutions.
This paper describes the creation of this benchmark dataset and the advances
in object recognition that have been possible as a result. We discuss the
challenges of collecting large-scale ground truth annotation, highlight key
breakthroughs in categorical object recognition, provide a detailed analysis of
the current state of the field of large-scale image classification and object
detection, and compare the state-of-the-art computer vision accuracy with human
accuracy. We conclude with lessons learned in the five years of the challenge,
and propose future directions and improvements.
|
[
{
"version": "v1",
"created": "Mon, 1 Sep 2014 22:29:38 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Dec 2014 01:08:31 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Jan 2015 01:23:59 GMT"
}
] | 2015-02-02T00:00:00 |
[
[
"Russakovsky",
"Olga",
""
],
[
"Deng",
"Jia",
""
],
[
"Su",
"Hao",
""
],
[
"Krause",
"Jonathan",
""
],
[
"Satheesh",
"Sanjeev",
""
],
[
"Ma",
"Sean",
""
],
[
"Huang",
"Zhiheng",
""
],
[
"Karpathy",
"Andrej",
""
],
[
"Khosla",
"Aditya",
""
],
[
"Bernstein",
"Michael",
""
],
[
"Berg",
"Alexander C.",
""
],
[
"Fei-Fei",
"Li",
""
]
] |
TITLE: ImageNet Large Scale Visual Recognition Challenge
ABSTRACT: The ImageNet Large Scale Visual Recognition Challenge is a benchmark in
object category classification and detection on hundreds of object categories
and millions of images. The challenge has been run annually from 2010 to
present, attracting participation from more than fifty institutions.
This paper describes the creation of this benchmark dataset and the advances
in object recognition that have been possible as a result. We discuss the
challenges of collecting large-scale ground truth annotation, highlight key
breakthroughs in categorical object recognition, provide a detailed analysis of
the current state of the field of large-scale image classification and object
detection, and compare the state-of-the-art computer vision accuracy with human
accuracy. We conclude with lessons learned in the five years of the challenge,
and propose future directions and improvements.
|
new_dataset
| 0.810629 |
1501.05703
|
Ning Zhang
|
Ning Zhang, Manohar Paluri, Yaniv Taigman, Rob Fergus, Lubomir Bourdev
|
Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore the task of recognizing people's identities in photo albums in an
unconstrained setting. To facilitate this, we introduce the new People In Photo
Albums (PIPA) dataset, consisting of over 60000 instances of 2000 individuals
collected from public Flickr photo albums. With only about half of the person
images containing a frontal face, the recognition task is very challenging due
to the large variations in pose, clothing, camera viewpoint, image resolution
and illumination. We propose the Pose Invariant PErson Recognition (PIPER)
method, which accumulates the cues of poselet-level person recognizers trained
by deep convolutional networks to discount for the pose variations, combined
with a face recognizer and a global recognizer. Experiments on three different
settings confirm that in our unconstrained setup PIPER significantly improves
on the performance of DeepFace, which is one of the best face recognizers as
measured on the LFW dataset.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 02:35:01 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Jan 2015 18:48:27 GMT"
}
] | 2015-02-02T00:00:00 |
[
[
"Zhang",
"Ning",
""
],
[
"Paluri",
"Manohar",
""
],
[
"Taigman",
"Yaniv",
""
],
[
"Fergus",
"Rob",
""
],
[
"Bourdev",
"Lubomir",
""
]
] |
TITLE: Beyond Frontal Faces: Improving Person Recognition Using Multiple Cues
ABSTRACT: We explore the task of recognizing people's identities in photo albums in an
unconstrained setting. To facilitate this, we introduce the new People In Photo
Albums (PIPA) dataset, consisting of over 60000 instances of 2000 individuals
collected from public Flickr photo albums. With only about half of the person
images containing a frontal face, the recognition task is very challenging due
to the large variations in pose, clothing, camera viewpoint, image resolution
and illumination. We propose the Pose Invariant PErson Recognition (PIPER)
method, which accumulates the cues of poselet-level person recognizers trained
by deep convolutional networks to discount for the pose variations, combined
with a face recognizer and a global recognizer. Experiments on three different
settings confirm that in our unconstrained setup PIPER significantly improves
on the performance of DeepFace, which is one of the best face recognizers as
measured on the LFW dataset.
|
new_dataset
| 0.958654 |
1501.07716
|
Dominik Kowald
|
Paul Seitlinger, Dominik Kowald, Simone Kopeinik, Ilire
Hasani-Mavriqi, Tobias Ley, Elisabeth Lex
|
Attention Please! A Hybrid Resource Recommender Mimicking
Attention-Interpretation Dynamics
|
Submitted to WWW'15 WebScience Track
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classic resource recommenders like Collaborative Filtering (CF) treat users
as being just another entity, neglecting non-linear user-resource dynamics
shaping attention and interpretation. In this paper, we propose a novel hybrid
recommendation strategy that refines CF by capturing these dynamics. The
evaluation results reveal that our approach substantially improves CF and,
depending on the dataset, successfully competes with a computationally much
more expensive Matrix Factorization variant.
|
[
{
"version": "v1",
"created": "Fri, 30 Jan 2015 09:55:24 GMT"
}
] | 2015-02-02T00:00:00 |
[
[
"Seitlinger",
"Paul",
""
],
[
"Kowald",
"Dominik",
""
],
[
"Kopeinik",
"Simone",
""
],
[
"Hasani-Mavriqi",
"Ilire",
""
],
[
"Ley",
"Tobias",
""
],
[
"Lex",
"Elisabeth",
""
]
] |
TITLE: Attention Please! A Hybrid Resource Recommender Mimicking
Attention-Interpretation Dynamics
ABSTRACT: Classic resource recommenders like Collaborative Filtering (CF) treat users
as being just another entity, neglecting non-linear user-resource dynamics
shaping attention and interpretation. In this paper, we propose a novel hybrid
recommendation strategy that refines CF by capturing these dynamics. The
evaluation results reveal that our approach substantially improves CF and,
depending on the dataset, successfully competes with a computationally much
more expensive Matrix Factorization variant.
|
no_new_dataset
| 0.947672 |
1501.07304
|
Miriam Redi
|
Miriam Redi, Nikhil Rasiwasia, Gaurav Aggarwal, Alejandro Jaimes
|
The Beauty of Capturing Faces: Rating the Quality of Digital Portraits
|
FG 2015, 8 pages
| null | null | null |
cs.CV cs.CY cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Digital portrait photographs are everywhere, and while the number of face
pictures keeps growing, not much work has been done to on automatic portrait
beauty assessment. In this paper, we design a specific framework to
automatically evaluate the beauty of digital portraits. To this end, we procure
a large dataset of face images annotated not only with aesthetic scores but
also with information about the traits of the subject portrayed. We design a
set of visual features based on portrait photography literature, and
extensively analyze their relation with portrait beauty, exposing interesting
findings about what makes a portrait beautiful. We find that the beauty of a
portrait is linked to its artistic value, and independent from age, race and
gender of the subject. We also show that a classifier trained with our features
to separate beautiful portraits from non-beautiful portraits outperforms
generic aesthetic classifiers.
|
[
{
"version": "v1",
"created": "Wed, 28 Jan 2015 22:51:23 GMT"
}
] | 2015-01-30T00:00:00 |
[
[
"Redi",
"Miriam",
""
],
[
"Rasiwasia",
"Nikhil",
""
],
[
"Aggarwal",
"Gaurav",
""
],
[
"Jaimes",
"Alejandro",
""
]
] |
TITLE: The Beauty of Capturing Faces: Rating the Quality of Digital Portraits
ABSTRACT: Digital portrait photographs are everywhere, and while the number of face
pictures keeps growing, not much work has been done to on automatic portrait
beauty assessment. In this paper, we design a specific framework to
automatically evaluate the beauty of digital portraits. To this end, we procure
a large dataset of face images annotated not only with aesthetic scores but
also with information about the traits of the subject portrayed. We design a
set of visual features based on portrait photography literature, and
extensively analyze their relation with portrait beauty, exposing interesting
findings about what makes a portrait beautiful. We find that the beauty of a
portrait is linked to its artistic value, and independent from age, race and
gender of the subject. We also show that a classifier trained with our features
to separate beautiful portraits from non-beautiful portraits outperforms
generic aesthetic classifiers.
|
new_dataset
| 0.956836 |
1501.07467
|
Hamed Zamani
|
Hamed Zamani, Azadeh Shakery, Pooya Moradi
|
Regression and Learning to Rank Aggregation for User Engagement
Evaluation
|
In Proceedings of the 2014 ACM Recommender Systems Challenge,
RecSysChallenge '14
| null |
10.1145/2668067.2668077
| null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
User engagement refers to the amount of interaction an instance (e.g., tweet,
news, and forum post) achieves. Ranking the items in social media websites
based on the amount of user participation in them can be used in different
applications, such as recommender systems. In this paper, we consider a tweet
containing a rating for a movie as an instance and focus on ranking the
instances of each user based on their engagement, i.e., the total number of
retweets and favorites it will gain.
For this task, we define several features which can be extracted from the
meta-data of each tweet. The features are partitioned into three categories:
user-based, movie-based, and tweet-based. We show that in order to obtain good
results, features from all categories should be considered. We exploit
regression and learning to rank methods to rank the tweets and propose to
aggregate the results of regression and learning to rank methods to achieve
better performance. We have run our experiments on an extended version of
MovieTweeting dataset provided by ACM RecSys Challenge 2014. The results show
that the learning to rank approach outperforms most of the regression models and
the combination can improve the performance significantly.
|
[
{
"version": "v1",
"created": "Thu, 29 Jan 2015 14:54:12 GMT"
}
] | 2015-01-30T00:00:00 |
[
[
"Zamani",
"Hamed",
""
],
[
"Shakery",
"Azadeh",
""
],
[
"Moradi",
"Pooya",
""
]
] |
TITLE: Regression and Learning to Rank Aggregation for User Engagement
Evaluation
ABSTRACT: User engagement refers to the amount of interaction an instance (e.g., tweet,
news, and forum post) achieves. Ranking the items in social media websites
based on the amount of user participation in them can be used in different
applications, such as recommender systems. In this paper, we consider a tweet
containing a rating for a movie as an instance and focus on ranking the
instances of each user based on their engagement, i.e., the total number of
retweets and favorites it will gain.
For this task, we define several features which can be extracted from the
meta-data of each tweet. The features are partitioned into three categories:
user-based, movie-based, and tweet-based. We show that in order to obtain good
results, features from all categories should be considered. We exploit
regression and learning to rank methods to rank the tweets and propose to
aggregate the results of regression and learning to rank methods to achieve
better performance. We have run our experiments on an extended version of
MovieTweeting dataset provided by ACM RecSys Challenge 2014. The results show
that the learning to rank approach outperforms most of the regression models and
the combination can improve the performance significantly.
|
no_new_dataset
| 0.944228 |
1501.05382
|
Wenjuan Gong
|
Wenjuan Gong and Yongzhen Huang and Jordi Gonzalez and Liang Wang
|
Enhanced Mixtures of Part Model for Human Pose Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The mixture of parts model has been successfully applied to the 2D human pose
estimation problem, either as an explicitly trained body part model or as latent
variables for the whole human body model. Mixture of parts models usually
utilize a tree structure for representing relations between body parts. Tree
structures facilitate training and inference of the model but cannot deal
with double counting problems, which hinders their application in 3D pose
estimation. While most work targeting these problems tends to modify the tree
models or the optimization target, we incorporate other cues from input
features. For example, in surveillance environments, human silhouettes can be
extracted relatively easily although not flawlessly. In this condition, we can
combine extracted human blobs with the histogram of gradient feature, which is
commonly used in mixture of parts models for training body part templates. The
method can be easily extended to other candidate features under our generalized
framework. We show 2D body part detection results on a publicly available
dataset: the HumanEva dataset. Furthermore, a 2D to 3D pose estimator is trained
with a Gaussian process regression model, and 2D body part detections from the
proposed method are fed to the estimator; thus 3D poses are predictable given
new 2D body part detections. We also show results of 3D pose estimation on the
HumanEva dataset.
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 03:54:15 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Jan 2015 08:16:55 GMT"
}
] | 2015-01-29T00:00:00 |
[
[
"Gong",
"Wenjuan",
""
],
[
"Huang",
"Yongzhen",
""
],
[
"Gonzalez",
"Jordi",
""
],
[
"Wang",
"and Liang",
""
]
] |
TITLE: Enhanced Mixtures of Part Model for Human Pose Estimation
ABSTRACT: The mixture of parts model has been successfully applied to the 2D human pose
estimation problem, either as an explicitly trained body part model or as latent
variables for the whole human body model. Mixture of parts models usually
utilize a tree structure for representing relations between body parts. Tree
structures facilitate training and inference of the model but cannot deal
with double counting problems, which hinders their application in 3D pose
estimation. While most work targeting these problems tends to modify the tree
models or the optimization target, we incorporate other cues from input
features. For example, in surveillance environments, human silhouettes can be
extracted relatively easily although not flawlessly. In this condition, we can
combine extracted human blobs with the histogram of gradient feature, which is
commonly used in mixture of parts models for training body part templates. The
method can be easily extended to other candidate features under our generalized
framework. We show 2D body part detection results on a publicly available
dataset: the HumanEva dataset. Furthermore, a 2D to 3D pose estimator is trained
with a Gaussian process regression model, and 2D body part detections from the
proposed method are fed to the estimator; thus 3D poses are predictable given
new 2D body part detections. We also show results of 3D pose estimation on the
HumanEva dataset.
|
no_new_dataset
| 0.946399 |
1501.06993
|
Youjie Zhou
|
Youjie Zhou and Hongkai Yu and Song Wang
|
Feature Sampling Strategies for Action Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although dense local spatial-temporal features with bag-of-features
representation achieve state-of-the-art performance for action recognition, the
huge feature number and feature size prevent current methods from scaling up to
real size problems. In this work, we investigate different types of feature
sampling strategies for action recognition, namely dense sampling, uniformly
random sampling and selective sampling. We propose two effective selective
sampling methods using object proposal techniques. Experiments conducted on a
large video dataset show that we are able to achieve better average recognition
accuracy using 25% fewer features, through one of the proposed selective sampling
methods, and even maintain comparable accuracy while discarding 70% of the features.
|
[
{
"version": "v1",
"created": "Wed, 28 Jan 2015 05:41:07 GMT"
}
] | 2015-01-29T00:00:00 |
[
[
"Zhou",
"Youjie",
""
],
[
"Yu",
"Hongkai",
""
],
[
"Wang",
"Song",
""
]
] |
TITLE: Feature Sampling Strategies for Action Recognition
ABSTRACT: Although dense local spatial-temporal features with bag-of-features
representation achieve state-of-the-art performance for action recognition, the
huge feature number and feature size prevent current methods from scaling up to
real size problems. In this work, we investigate different types of feature
sampling strategies for action recognition, namely dense sampling, uniformly
random sampling and selective sampling. We propose two effective selective
sampling methods using object proposal techniques. Experiments conducted on a
large video dataset show that we are able to achieve better average recognition
accuracy using 25% fewer features, through one of the proposed selective sampling
methods, and even maintain comparable accuracy while discarding 70% of the features.
|
no_new_dataset
| 0.948537 |
1501.07184
|
Zehra Meral Ozsoyoglu
|
Shi Qiao, Z. Meral Ozsoyoglu
|
One Size Does not Fit All: When to Use Signature-based Pruning to
Improve Template Matching for RDF graphs
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Signature-based pruning is broadly accepted as an effective way to improve
query performance of graph template matching on general labeled graphs. Most
existing techniques which utilize signature-based pruning claim its benefits on
all datasets and queries. However, the effectiveness of signature-based pruning
varies greatly among different RDF datasets and is highly related to their
dataset characteristics. We observe that the performance benefits from
signature-based pruning depend not only on the size of the RDF graphs, but also
the underlying graph structure and the complexity of queries. This motivates us
to propose a flexible RDF querying framework, called RDF-h, which selectively
utilizes signature-based pruning by evaluating the characteristics of RDF
datasets and query templates. Scalability and efficiency of RDF-h is
demonstrated in experimental results using both real and synthetic datasets.
Keywords: RDF, Graph Template Matching, Signature-based Pruning
|
[
{
"version": "v1",
"created": "Wed, 28 Jan 2015 16:49:18 GMT"
}
] | 2015-01-29T00:00:00 |
[
[
"Qiao",
"Shi",
""
],
[
"Ozsoyoglu",
"Z. Meral",
""
]
] |
TITLE: One Size Does not Fit All: When to Use Signature-based Pruning to
Improve Template Matching for RDF graphs
ABSTRACT: Signature-based pruning is broadly accepted as an effective way to improve
query performance of graph template matching on general labeled graphs. Most
existing techniques which utilize signature-based pruning claim its benefits on
all datasets and queries. However, the effectiveness of signature-based pruning
varies greatly among different RDF datasets and is highly related to their
dataset characteristics. We observe that the performance benefits from
signature-based pruning depend not only on the size of the RDF graphs, but also
the underlying graph structure and the complexity of queries. This motivates us
to propose a flexible RDF querying framework, called RDF-h, which selectively
utilizes signature-based pruning by evaluating the characteristics of RDF
datasets and query templates. Scalability and efficiency of RDF-h is
demonstrated in experimental results using both real and synthetic datasets.
Keywords: RDF, Graph Template Matching, Signature-based Pruning
|
no_new_dataset
| 0.94887 |
1501.07203
|
Walter Quattrociocchi
|
Michela Del Vicario, Qian Zhang, Alessandro Bessi, Fabiana Zollo,
Antonio Scala, Guido Caldarelli, Walter Quattrociocchi
|
Structural Patterns of the Occupy Movement on Facebook
| null | null | null | null |
cs.SI cs.HC physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we study a peculiar example of social organization on Facebook:
the Occupy Movement -- i.e., an international protest movement against social
and economic inequality organized online at a city level. We consider 179 US
Facebook public pages during the time period between September 2011 and
February 2013. The dataset includes 618K active users and 753K posts that
received about 5.2M likes and 1.1M comments. By labeling users according to
their interaction patterns on pages -- e.g., a user is considered to be
polarized if she has at least 95% of her likes on a specific page -- we
find that activities are not locally coordinated by geographically close pages,
but are driven by pages linked to major US cities that act as hubs within the
various groups. Such a pattern is verified even by extracting the backbone
structure -- i.e., filtering statistically relevant weight heterogeneities --
for both the pages-reshares and the pages-common users networks.
|
[
{
"version": "v1",
"created": "Wed, 28 Jan 2015 17:23:59 GMT"
}
] | 2015-01-29T00:00:00 |
[
[
"Del Vicario",
"Michela",
""
],
[
"Zhang",
"Qian",
""
],
[
"Bessi",
"Alessandro",
""
],
[
"Zollo",
"Fabiana",
""
],
[
"Scala",
"Antonio",
""
],
[
"Caldarelli",
"Guido",
""
],
[
"Quattrociocchi",
"Walter",
""
]
] |
TITLE: Structural Patterns of the Occupy Movement on Facebook
ABSTRACT: In this work we study a peculiar example of social organization on Facebook:
the Occupy Movement -- i.e., an international protest movement against social
and economic inequality organized online at a city level. We consider 179 US
Facebook public pages during the time period between September 2011 and
February 2013. The dataset includes 618K active users and 753K posts that
received about 5.2M likes and 1.1M comments. By labeling users according to
their interaction patterns on pages -- e.g., a user is considered to be
polarized if she has at least 95% of her likes on a specific page -- we
find that activities are not locally coordinated by geographically close pages,
but are driven by pages linked to major US cities that act as hubs within the
various groups. Such a pattern is verified even by extracting the backbone
structure -- i.e., filtering statistically relevant weight heterogeneities --
for both the pages-reshares and the pages-common users networks.
|
no_new_dataset
| 0.855127 |
1409.2080
|
Pierre Bellec
|
P. Bellec and Y. Benhajali and F. Carbonell and C. Dansereau and G.
Albouy and M. Pelland and C. Craddock and O. Collignon and J. Doyon and E.
Stip and P. Orban
|
Multiscale statistical testing for connectome-wide association studies
in fMRI
|
54 pages, 12 main figures, 1 main table, 10 supplementary figures, 1
supplementary table
| null | null | null |
q-bio.QM cs.CV stat.AP
|
http://creativecommons.org/licenses/by/3.0/
|
Alterations in brain connectivity have been associated with a variety of
clinical disorders using functional magnetic resonance imaging (fMRI). We
investigated empirically how the number of brain parcels (or scale) impacted
the results of a mass univariate general linear model (GLM) on connectomes. The
brain parcels used as nodes in the connectome analysis were functionally
defined by a group cluster analysis. We first validated that a classic
Benjamini-Hochberg procedure with parametric GLM tests did control
appropriately the false-discovery rate (FDR) at a given scale. We then observed
on realistic simulations that there was no substantial inflation of the FDR
across scales, as long as the FDR was controlled independently within each
scale, and the presence of true associations could be established using an
omnibus permutation test combining all scales. Second, we observed both on
simulations and on three real resting-state fMRI datasets (schizophrenia,
congenital blindness, motor practice) that the rate of discovery varied
markedly as a function of scales, and was relatively higher for low scales,
below 25. Despite the differences in discovery rate, the statistical maps
derived at different scales were generally very consistent in the three real
datasets. Some seeds still showed effects better observed around 50,
illustrating the potential benefits of multiscale analysis. On real data, the
statistical maps agreed well with the existing literature. Overall, our results
support that the multiscale GLM connectome analysis with FDR is statistically
valid and can capture biologically meaningful effects in a variety of
experimental conditions.
|
[
{
"version": "v1",
"created": "Sun, 7 Sep 2014 04:07:22 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Jan 2015 21:00:45 GMT"
}
] | 2015-01-28T00:00:00 |
[
[
"Bellec",
"P.",
""
],
[
"Benhajali",
"Y.",
""
],
[
"Carbonell",
"F.",
""
],
[
"Dansereau",
"C.",
""
],
[
"Albouy",
"G.",
""
],
[
"Pelland",
"M.",
""
],
[
"Craddock",
"C.",
""
],
[
"Collignon",
"O.",
""
],
[
"Doyon",
"J.",
""
],
[
"Stip",
"E.",
""
],
[
"Orban",
"P.",
""
]
] |
TITLE: Multiscale statistical testing for connectome-wide association studies
in fMRI
ABSTRACT: Alterations in brain connectivity have been associated with a variety of
clinical disorders using functional magnetic resonance imaging (fMRI). We
investigated empirically how the number of brain parcels (or scale) impacted
the results of a mass univariate general linear model (GLM) on connectomes. The
brain parcels used as nodes in the connectome analysis were functionally
defined by a group cluster analysis. We first validated that a classic
Benjamini-Hochberg procedure with parametric GLM tests did control
appropriately the false-discovery rate (FDR) at a given scale. We then observed
on realistic simulations that there was no substantial inflation of the FDR
across scales, as long as the FDR was controlled independently within each
scale, and the presence of true associations could be established using an
omnibus permutation test combining all scales. Second, we observed both on
simulations and on three real resting-state fMRI datasets (schizophrenia,
congenital blindness, motor practice) that the rate of discovery varied
markedly as a function of scales, and was relatively higher for low scales,
below 25. Despite the differences in discovery rate, the statistical maps
derived at different scales were generally very consistent in the three real
datasets. Some seeds still showed effects better observed around 50,
illustrating the potential benefits of multiscale analysis. On real data, the
statistical maps agreed well with the existing literature. Overall, our results
support that the multiscale GLM connectome analysis with FDR is statistically
valid and can capture biologically meaningful effects in a variety of
experimental conditions.
|
no_new_dataset
| 0.951594 |
1412.3409
|
Christopher Clark
|
Christopher Clark and Amos Storkey
|
Teaching Deep Convolutional Neural Networks to Play Go
|
9 pages, 8 figures, 5 tables. Corrected typos, minor adjustment to
table format
| null | null | null |
cs.AI cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mastering the game of Go has remained a long standing challenge to the field
of AI. Modern computer Go systems rely on processing millions of possible
future positions to play well, but intuitively a stronger and more 'humanlike'
way to play the game would be to rely on pattern recognition abilities rather
than brute force computation. Following this sentiment, we train deep
convolutional neural networks to play Go by training them to predict the moves
made by expert Go players. To solve this problem we introduce a number of novel
techniques, including a method of tying weights in the network to 'hard code'
symmetries that are expected to exist in the target function, and demonstrate in
an ablation study that they considerably improve performance. Our final networks are
able to achieve move prediction accuracies of 41.1% and 44.4% on two different
Go datasets, surpassing previous state of the art on this task by significant
margins. Additionally, while previous move prediction programs have not yielded
strong Go playing programs, we show that the networks trained in this work
acquired high levels of skill. Our convolutional neural networks can
consistently defeat the well-known Go program GNU Go, indicating they are state of
the art among programs that do not use Monte Carlo Tree Search. They are also able
to win some games against the state-of-the-art Go playing program Fuego while using
a fraction of the play time. This success at playing Go indicates high level
principles of the game were learned.
|
[
{
"version": "v1",
"created": "Wed, 10 Dec 2014 18:59:43 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Jan 2015 10:31:31 GMT"
}
] | 2015-01-28T00:00:00 |
[
[
"Clark",
"Christopher",
""
],
[
"Storkey",
"Amos",
""
]
] |
TITLE: Teaching Deep Convolutional Neural Networks to Play Go
ABSTRACT: Mastering the game of Go has remained a long standing challenge to the field
of AI. Modern computer Go systems rely on processing millions of possible
future positions to play well, but intuitively a stronger and more 'humanlike'
way to play the game would be to rely on pattern recognition abilities rather
than brute force computation. Following this sentiment, we train deep
convolutional neural networks to play Go by training them to predict the moves
made by expert Go players. To solve this problem we introduce a number of novel
techniques, including a method of tying weights in the network to 'hard code'
symmetries that are expected to exist in the target function, and demonstrate in
an ablation study that they considerably improve performance. Our final networks are
able to achieve move prediction accuracies of 41.1% and 44.4% on two different
Go datasets, surpassing previous state of the art on this task by significant
margins. Additionally, while previous move prediction programs have not yielded
strong Go playing programs, we show that the networks trained in this work
acquired high levels of skill. Our convolutional neural networks can
consistently defeat the well-known Go program GNU Go, indicating they are state of
the art among programs that do not use Monte Carlo Tree Search. They are also able
to win some games against the state-of-the-art Go playing program Fuego while using
a fraction of the play time. This success at playing Go indicates high level
principles of the game were learned.
|
no_new_dataset
| 0.942929 |
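
Illustrative note (not part of the original record): the Go abstract above ties network weights to hard-code board symmetries. The sketch below shows the underlying idea in a simpler, assumed form: averaging any move predictor over the eight dihedral symmetries of the board. The helper names and the toy uniform predictor are placeholders, and this is not the paper's weight-tying scheme.

```python
# Average a per-intersection move predictor over the 8 symmetries of the board.
import numpy as np

def dihedral_transforms(board):
    """Yield (transformed_board, inverse_fn) pairs for the 8 square symmetries."""
    for k in range(4):
        yield np.rot90(board, k), lambda x, k=k: np.rot90(x, -k)
        yield np.fliplr(np.rot90(board, k)), lambda x, k=k: np.rot90(np.fliplr(x), -k)

def symmetric_move_probs(predict, board):
    """Average the predictor's output over all board symmetries."""
    acc = np.zeros(board.shape, dtype=float)
    for transformed, inverse in dihedral_transforms(board):
        acc += inverse(predict(transformed))
    return acc / 8.0

# Toy predictor (uniform probabilities) standing in for a trained convnet.
predict = lambda b: np.full(b.shape, 1.0 / b.size)
board = np.zeros((19, 19))
print(symmetric_move_probs(predict, board).sum())   # ~1.0
```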
1501.06247
|
Peng Xia
|
Peng Xia, Benyuan Liu, Yizhou Sun and Cindy Chen
|
Reciprocal Recommendation System for Online Dating
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online dating sites have become popular platforms for people to look for
potential romantic partners. Different from traditional user-item
recommendations where the goal is to match items (e.g., books, videos, etc)
with a user's interests, a recommendation system for online dating aims to
match people who are mutually interested in and likely to communicate with each
other. We introduce similarity measures that capture the unique features and
characteristics of the online dating network, for example, the interest
similarity between two users if they send messages to same users, and
attractiveness similarity if they receive messages from same users. A
reciprocal score that measures the compatibility between a user and each
potential dating candidate is computed and the recommendation list is generated
to include users with top scores. The performance of our proposed
recommendation system is evaluated on a real-world dataset from a major online
dating site in China. The results show that our recommendation algorithms
significantly outperform previously proposed approaches, and the collaborative
filtering-based algorithms achieve much better performance than content-based
algorithms in both precision and recall. Our results also reveal interesting
behavioral differences between male and female users when it comes to looking
for potential dates. In particular, males tend to be focused on their own
interests and oblivious to their attractiveness to potential dates, while
females are more conscious of their own attractiveness to the other side of
the line.
|
[
{
"version": "v1",
"created": "Mon, 26 Jan 2015 03:22:10 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Jan 2015 13:10:25 GMT"
}
] | 2015-01-28T00:00:00 |
[
[
"Xia",
"Peng",
""
],
[
"Liu",
"Benyuan",
""
],
[
"Sun",
"Yizhou",
""
],
[
"Chen",
"Cindy",
""
]
] |
TITLE: Reciprocal Recommendation System for Online Dating
ABSTRACT: Online dating sites have become popular platforms for people to look for
potential romantic partners. Different from traditional user-item
recommendations where the goal is to match items (e.g., books, videos, etc)
with a user's interests, a recommendation system for online dating aims to
match people who are mutually interested in and likely to communicate with each
other. We introduce similarity measures that capture the unique features and
characteristics of the online dating network, for example, the interest
similarity between two users if they send messages to same users, and
attractiveness similarity if they receive messages from same users. A
reciprocal score that measures the compatibility between a user and each
potential dating candidate is computed and the recommendation list is generated
to include users with top scores. The performance of our proposed
recommendation system is evaluated on a real-world dataset from a major online
dating site in China. The results show that our recommendation algorithms
significantly outperform previously proposed approaches, and the collaborative
filtering-based algorithms achieve much better performance than content-based
algorithms in both precision and recall. Our results also reveal interesting
behavioral differences between male and female users when it comes to looking
for potential dates. In particular, males tend to be focused on their own
interests and oblivious to their attractiveness to potential dates, while
females are more conscious of their own attractiveness to the other side of
the line.
|
no_new_dataset
| 0.940243 |
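
Illustrative note (not part of the original record): a hypothetical version of the reciprocal scoring idea described above, with interest similarity measured as Jaccard overlap of message recipients and the two directions combined by a harmonic mean so that mutual interest is required. The function names, the toy message log and the exact formulas are assumptions, not the paper's algorithm.

```python
# Reciprocal compatibility score from a "who messaged whom" log.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def predicted_interest(u, v, sent):
    """Average interest similarity between u and the users who messaged v."""
    weights = [jaccard(sent[u], sent[w]) for w in sent if w != u and v in sent[w]]
    return sum(weights) / (len(weights) or 1)

def reciprocal_score(u, v, sent):
    a, b = predicted_interest(u, v, sent), predicted_interest(v, u, sent)
    return 2 * a * b / (a + b) if a + b else 0.0   # harmonic mean: both sides must be interested

sent = {  # toy message log: user -> set of users they have messaged
    "m1": {"f1", "f2"}, "m2": {"f1"}, "f1": {"m1"}, "f2": {"m1", "m2"},
}
print(round(reciprocal_score("m1", "f1", sent), 3))  # 0.5
```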
1501.06722
|
Alin Popa
|
Alin-Ionut Popa and Cristian Sminchisescu
|
Parametric Image Segmentation of Humans with Structural Shape Priors
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The figure-ground segmentation of humans in images captured in natural
environments is an outstanding open problem due to the presence of complex
backgrounds, articulation, varying body proportions, partial views and
viewpoint changes. In this work we propose class-specific segmentation models
that leverage parametric max-flow image segmentation and a large dataset of
human shapes. Our contributions are as follows: (1) formulation of a
sub-modular energy model that combines class-specific structural constraints
and data-driven shape priors, within a parametric max-flow optimization
methodology that systematically computes all breakpoints of the model in
polynomial time; (2) design of a data-driven class-specific fusion methodology,
based on matching against a large training set of exemplar human shapes
(100,000 in our experiments), that allows the shape prior to be constructed
on-the-fly, for arbitrary viewpoints and partial views. (3) demonstration of
state of the art results on two challenging datasets, H3D and MPII (where
figure-ground segmentation annotations have been added by us), where we
substantially improve on the first ranked hypothesis estimates of mid-level
segmentation methods, by 20%, with hypothesis set sizes that are up to one
order of magnitude smaller.
|
[
{
"version": "v1",
"created": "Tue, 27 Jan 2015 10:03:45 GMT"
}
] | 2015-01-28T00:00:00 |
[
[
"Popa",
"Alin-Ionut",
""
],
[
"Sminchisescu",
"Cristian",
""
]
] |
TITLE: Parametric Image Segmentation of Humans with Structural Shape Priors
ABSTRACT: The figure-ground segmentation of humans in images captured in natural
environments is an outstanding open problem due to the presence of complex
backgrounds, articulation, varying body proportions, partial views and
viewpoint changes. In this work we propose class-specific segmentation models
that leverage parametric max-flow image segmentation and a large dataset of
human shapes. Our contributions are as follows: (1) formulation of a
sub-modular energy model that combines class-specific structural constraints
and data-driven shape priors, within a parametric max-flow optimization
methodology that systematically computes all breakpoints of the model in
polynomial time; (2) design of a data-driven class-specific fusion methodology,
based on matching against a large training set of exemplar human shapes
(100,000 in our experiments), that allows the shape prior to be constructed
on-the-fly, for arbitrary viewpoints and partial views. (3) demonstration of
state of the art results on two challenging datasets, H3D and MPII (where
figure-ground segmentation annotations have been added by us), where we
substantially improve on the first ranked hypothesis estimates of mid-level
segmentation methods, by 20%, with hypothesis set sizes that are up to one
order of magnitude smaller.
|
no_new_dataset
| 0.9463 |
1411.7399
|
Lior Wolf
|
Benjamin Klein, Guy Lev, Gil Sadeh, Lior Wolf
|
Fisher Vectors Derived from Hybrid Gaussian-Laplacian Mixture Models for
Image Annotation
|
new version includes text synthesis by an RNN and experiments with
the COCO benchmark
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the traditional object recognition pipeline, descriptors are densely
sampled over an image, pooled into a high dimensional non-linear representation
and then passed to a classifier. In recent years, Fisher Vectors have proven
empirically to be the leading representation for a large variety of
applications. The Fisher Vector is typically taken as the gradients of the
log-likelihood of descriptors, with respect to the parameters of a Gaussian
Mixture Model (GMM). Motivated by the assumption that different distributions
should be applied for different datasets, we present two other Mixture Models
and derive their Expectation-Maximization and Fisher Vector expressions. The
first is a Laplacian Mixture Model (LMM), which is based on the Laplacian
distribution. The second Mixture Model presented is a Hybrid Gaussian-Laplacian
Mixture Model (HGLMM) which is based on a weighted geometric mean of the
Gaussian and Laplacian distribution. An interesting property of the
Expectation-Maximization algorithm for the latter is that in the maximization
step, each dimension in each component is chosen to be either a Gaussian or a
Laplacian. Finally, by using the new Fisher Vectors derived from HGLMMs, we
achieve state-of-the-art results for both the image annotation and the image
search by sentence tasks.
|
[
{
"version": "v1",
"created": "Wed, 26 Nov 2014 21:21:51 GMT"
},
{
"version": "v2",
"created": "Sat, 24 Jan 2015 20:03:50 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Klein",
"Benjamin",
""
],
[
"Lev",
"Guy",
""
],
[
"Sadeh",
"Gil",
""
],
[
"Wolf",
"Lior",
""
]
] |
TITLE: Fisher Vectors Derived from Hybrid Gaussian-Laplacian Mixture Models for
Image Annotation
ABSTRACT: In the traditional object recognition pipeline, descriptors are densely
sampled over an image, pooled into a high dimensional non-linear representation
and then passed to a classifier. In recent years, Fisher Vectors have proven
empirically to be the leading representation for a large variety of
applications. The Fisher Vector is typically taken as the gradients of the
log-likelihood of descriptors, with respect to the parameters of a Gaussian
Mixture Model (GMM). Motivated by the assumption that different distributions
should be applied for different datasets, we present two other Mixture Models
and derive their Expectation-Maximization and Fisher Vector expressions. The
first is a Laplacian Mixture Model (LMM), which is based on the Laplacian
distribution. The second Mixture Model presented is a Hybrid Gaussian-Laplacian
Mixture Model (HGLMM) which is based on a weighted geometric mean of the
Gaussian and Laplacian distribution. An interesting property of the
Expectation-Maximization algorithm for the latter is that in the maximization
step, each dimension in each component is chosen to be either a Gaussian or a
Laplacian. Finally, by using the new Fisher Vectors derived from HGLMMs, we
achieve state-of-the-art results for both the image annotation and the image
search by sentence tasks.
|
no_new_dataset
| 0.949669 |
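
Illustrative note (not part of the original record): the abstract above starts from Fisher Vectors taken as gradients of the GMM log-likelihood. The sketch below computes only the standard mean-gradient block of a GMM Fisher Vector with scikit-learn; the LMM and HGLMM variants proposed in the paper are not implemented, and the descriptor dimensions are placeholders.

```python
# Mean-gradient block of a GMM Fisher Vector (diagonal covariances).
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(descriptors, gmm):
    gamma = gmm.predict_proba(descriptors)                 # (N, K) soft assignments
    sigma = np.sqrt(gmm.covariances_)                      # (K, D) standard deviations
    parts = []
    for k in range(gmm.n_components):
        diff = (descriptors - gmm.means_[k]) / sigma[k]    # whitened residuals
        grad = (gamma[:, [k]] * diff).sum(axis=0)
        parts.append(grad / (len(descriptors) * np.sqrt(gmm.weights_[k])))
    return np.concatenate(parts)                           # length K * D

rng = np.random.default_rng(0)
local_descriptors = rng.normal(size=(500, 8))              # stand-in for pooled local descriptors
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(local_descriptors)
print(fisher_vector_means(local_descriptors, gmm).shape)   # (32,)
```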
1412.3705
|
Weicong Ding
|
Weicong Ding, Prakash Ishwar, Venkatesh Saligrama
|
A Topic Modeling Approach to Ranking
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a topic modeling approach to the prediction of preferences in
pairwise comparisons. We develop a new generative model for pairwise
comparisons that accounts for multiple shared latent rankings that are
prevalent in a population of users. This new model also captures inconsistent
user behavior in a natural way. We show how the estimation of latent rankings
in the new generative model can be formally reduced to the estimation of topics
in a statistically equivalent topic modeling problem. We leverage recent
advances in the topic modeling literature to develop an algorithm that can
learn shared latent rankings with provable consistency as well as sample and
computational complexity guarantees. We demonstrate that the new approach is
empirically competitive with the current state-of-the-art approaches in
predicting preferences on some semi-synthetic and real world datasets.
|
[
{
"version": "v1",
"created": "Thu, 11 Dec 2014 16:15:53 GMT"
},
{
"version": "v2",
"created": "Fri, 12 Dec 2014 22:01:20 GMT"
},
{
"version": "v3",
"created": "Sun, 25 Jan 2015 22:32:10 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Ding",
"Weicong",
""
],
[
"Ishwar",
"Prakash",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] |
TITLE: A Topic Modeling Approach to Ranking
ABSTRACT: We propose a topic modeling approach to the prediction of preferences in
pairwise comparisons. We develop a new generative model for pairwise
comparisons that accounts for multiple shared latent rankings that are
prevalent in a population of users. This new model also captures inconsistent
user behavior in a natural way. We show how the estimation of latent rankings
in the new generative model can be formally reduced to the estimation of topics
in a statistically equivalent topic modeling problem. We leverage recent
advances in the topic modeling literature to develop an algorithm that can
learn shared latent rankings with provable consistency as well as sample and
computational complexity guarantees. We demonstrate that the new approach is
empirically competitive with the current state-of-the-art approaches in
predicting preferences on some semi-synthetic and real world datasets.
|
no_new_dataset
| 0.947769 |
1501.06102
|
Terrence Adams
|
Terrence Adams
|
Development of a Big Data Framework for Connectomic Research
|
6 pages, 9 figures
| null | null | null |
cs.DC cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper outlines research and development of a new Hadoop-based
architecture for distributed processing and analysis of electron microscopy of
brains. We show development of a new C++ library for implementation of 3D image
analysis techniques, and deployment in a distributed map/reduce framework. We
demonstrate our new framework on a subset of the Kasthuri11 dataset from the
Open Connectome Project.
|
[
{
"version": "v1",
"created": "Sun, 25 Jan 2015 01:42:09 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Adams",
"Terrence",
""
]
] |
TITLE: Development of a Big Data Framework for Connectomic Research
ABSTRACT: This paper outlines research and development of a new Hadoop-based
architecture for distributed processing and analysis of electron microscopy of
brains. We show development of a new C++ library for implementation of 3D image
analysis techniques, and deployment in a distributed map/reduce framework. We
demonstrate our new framework on a subset of the Kasthuri11 dataset from the
Open Connectome Project.
|
no_new_dataset
| 0.934335 |
1501.06129
|
Swagat Kumar
|
Sourav Garg, Swagat Kumar, Rajesh Ratnakaram, Prithwijit Guha
|
An Occlusion Reasoning Scheme for Monocular Pedestrian Tracking in
Dynamic Scenes
|
8 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper looks into the problem of pedestrian tracking using a monocular,
potentially moving, uncalibrated camera. The pedestrians are located in each
frame using a standard human detector, which are then tracked in subsequent
frames. This is a challenging problem as one has to deal with complex
situations like changing background, partial or full occlusion and camera
motion. In order to carry out successful tracking, it is necessary to resolve
associations between the detected windows in the current frame and those
obtained from the previous frame. Compared to methods that use temporal windows
incorporating past as well as future information, we attempt to make decisions
on a frame-by-frame basis. An occlusion reasoning scheme is proposed to resolve
the association problem between a pair of consecutive frames by using an
affinity matrix that defines the closeness between a pair of windows and then
uses binary integer programming to obtain a unique association between them. A
second stage of verification based on SURF matching is used to deal with those
cases where the above optimization scheme might yield wrong associations. The
efficacy of the approach is demonstrated through experiments on several
standard pedestrian datasets.
|
[
{
"version": "v1",
"created": "Sun, 25 Jan 2015 08:38:48 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Garg",
"Sourav",
""
],
[
"Kumar",
"Swagat",
""
],
[
"Ratnakaram",
"Rajesh",
""
],
[
"Guha",
"Prithwijit",
""
]
] |
TITLE: An Occlusion Reasoning Scheme for Monocular Pedestrian Tracking in
Dynamic Scenes
ABSTRACT: This paper looks into the problem of pedestrian tracking using a monocular,
potentially moving, uncalibrated camera. The pedestrians are located in each
frame using a standard human detector, which are then tracked in subsequent
frames. This is a challenging problem as one has to deal with complex
situations like changing background, partial or full occlusion and camera
motion. In order to carry out successful tracking, it is necessary to resolve
associations between the detected windows in the current frame and those
obtained from the previous frame. Compared to methods that use temporal windows
incorporating past as well as future information, we attempt to make decisions
on a frame-by-frame basis. An occlusion reasoning scheme is proposed to resolve
the association problem between a pair of consecutive frames by using an
affinity matrix that defines the closeness between a pair of windows and then
uses binary integer programming to obtain a unique association between them. A
second stage of verification based on SURF matching is used to deal with those
cases where the above optimization scheme might yield wrong associations. The
efficacy of the approach is demonstrated through experiments on several
standard pedestrian datasets.
|
no_new_dataset
| 0.949153 |
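
Illustrative note (not part of the original record): the tracker above resolves frame-to-frame associations with an affinity matrix and binary integer programming. The sketch below is a simplified stand-in that uses an IoU-based affinity and the Hungarian algorithm with a gating threshold; the affinity definition, the threshold and the function names are assumptions of this example.

```python
# Association of detection windows between two consecutive frames.
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(prev_boxes, curr_boxes, min_affinity=0.3):
    affinity = np.array([[iou(p, c) for c in curr_boxes] for p in prev_boxes])
    rows, cols = linear_sum_assignment(-affinity)          # maximise total affinity
    return [(r, c) for r, c in zip(rows, cols) if affinity[r, c] >= min_affinity]

prev_boxes = [(10, 10, 50, 100), (200, 20, 240, 110)]
curr_boxes = [(205, 22, 245, 112), (12, 12, 52, 102)]
print(associate(prev_boxes, curr_boxes))                   # [(0, 1), (1, 0)]
```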
1501.06194
|
Ilan Shomorony
|
Ilan Shomorony, Thomas Courtade, and David Tse
|
Do Read Errors Matter for Genome Assembly?
|
Submitted to ISIT 2015
| null | null | null |
cs.IT math.IT q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While most current high-throughput DNA sequencing technologies generate short
reads with low error rates, emerging sequencing technologies generate long
reads with high error rates. A basic question of interest is the tradeoff
between read length and error rate in terms of the information needed for the
perfect assembly of the genome. Using an adversarial erasure error model, we
make progress on this problem by establishing a critical read length, as a
function of the genome and the error rate, above which perfect assembly is
guaranteed. For several real genomes, including those from the GAGE dataset, we
verify that this critical read length is not significantly greater than the
read length required for perfect assembly from reads without errors.
|
[
{
"version": "v1",
"created": "Sun, 25 Jan 2015 18:40:19 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Shomorony",
"Ilan",
""
],
[
"Courtade",
"Thomas",
""
],
[
"Tse",
"David",
""
]
] |
TITLE: Do Read Errors Matter for Genome Assembly?
ABSTRACT: While most current high-throughput DNA sequencing technologies generate short
reads with low error rates, emerging sequencing technologies generate long
reads with high error rates. A basic question of interest is the tradeoff
between read length and error rate in terms of the information needed for the
perfect assembly of the genome. Using an adversarial erasure error model, we
make progress on this problem by establishing a critical read length, as a
function of the genome and the error rate, above which perfect assembly is
guaranteed. For several real genomes, including those from the GAGE dataset, we
verify that this critical read length is not significantly greater than the
read length required for perfect assembly from reads without errors.
|
no_new_dataset
| 0.954816 |
1501.06396
|
Ben Teng
|
Ben Teng, Can Yang, Jiming Liu, Zhipeng Cai and Xiang Wan
|
Exploring the genetic patterns of complex diseases via the integrative
genome-wide approach
| null | null | null | null |
cs.CE q-bio.QM stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivation: Genome-wide association studies (GWASs), which assay more than a
million single nucleotide polymorphisms (SNPs) in thousands of individuals,
have been widely used to identify genetic risk variants for complex diseases.
However, most of the variants that have been identified contribute relatively
small increments of risk and only explain a small portion of the genetic
variation in complex diseases. This is the so-called missing heritability
problem. Evidence has indicated that many complex diseases are genetically
related, meaning these diseases share common genetic risk variants. Therefore,
exploring the genetic correlations across multiple related studies could be a
promising strategy for removing spurious associations and identifying
underlying genetic risk variants, and thereby uncovering the mystery of missing
heritability in complex diseases. Results: We present a general and robust
method to identify genetic patterns from multiple large-scale genomic datasets.
We treat the summary statistics as a matrix and demonstrate that genetic
patterns will form a low-rank matrix plus a sparse component. Hence, we
formulate the problem as a matrix recovery problem, where we aim to discover
risk variants shared by multiple diseases/traits and those for each individual
disease/trait. We propose a convex formulation for matrix recovery and an
efficient algorithm to solve the problem. We demonstrate the advantages of our
method using both synthesized datasets and real datasets. The experimental
results show that our method can successfully reconstruct both the shared and
the individual genetic patterns from summary statistics and achieve better
performance compared with alternative methods under a wide range of scenarios.
|
[
{
"version": "v1",
"created": "Mon, 26 Jan 2015 13:59:23 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Teng",
"Ben",
""
],
[
"Yang",
"Can",
""
],
[
"Liu",
"Jiming",
""
],
[
"Cai",
"Zhipeng",
""
],
[
"Wan",
"Xiang",
""
]
] |
TITLE: Exploring the genetic patterns of complex diseases via the integrative
genome-wide approach
ABSTRACT: Motivation: Genome-wide association studies (GWASs), which assay more than a
million single nucleotide polymorphisms (SNPs) in thousands of individuals,
have been widely used to identify genetic risk variants for complex diseases.
However, most of the variants that have been identified contribute relatively
small increments of risk and only explain a small portion of the genetic
variation in complex diseases. This is the so-called missing heritability
problem. Evidence has indicated that many complex diseases are genetically
related, meaning these diseases share common genetic risk variants. Therefore,
exploring the genetic correlations across multiple related studies could be a
promising strategy for removing spurious associations and identifying
underlying genetic risk variants, and thereby uncovering the mystery of missing
heritability in complex diseases. Results: We present a general and robust
method to identify genetic patterns from multiple large-scale genomic datasets.
We treat the summary statistics as a matrix and demonstrate that genetic
patterns will form a low-rank matrix plus a sparse component. Hence, we
formulate the problem as a matrix recovery problem, where we aim to discover
risk variants shared by multiple diseases/traits and those for each individual
disease/trait. We propose a convex formulation for matrix recovery and an
efficient algorithm to solve the problem. We demonstrate the advantages of our
method using both synthesized datasets and real datasets. The experimental
results show that our method can successfully reconstruct both the shared and
the individual genetic patterns from summary statistics and achieve better
performance compared with alternative methods under a wide range of scenarios.
|
no_new_dataset
| 0.945147 |
1501.06456
|
Sanjay Chakraborty
|
Ratul Dey, Sanjay Chakraborty, Lopamudra Dey
|
Weather forecasting using Convex hull & K-Means Techniques An Approach
|
1st International Science & Technology Congress(IEMCON-2015) Elsevier
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data mining is a popular approach for extracting the necessary data from a large set of
data. Data mining using clustering is a powerful way to analyze data and to make
predictions. In this paper, non-structural time series data are used to forecast
the daily average temperature, humidity and overall weather conditions of Kolkata
city. The air pollution data have been taken from the West Bengal Pollution Control
Board to build the original dataset on which the prediction approach of this
paper is studied and applied. This paper describes a new technique that predicts
the weather conditions using a convex hull, which gives structural data, and then
applies incremental K-means to define the appropriate clusters. It splits the
total database into four separate databases with respect to different weather
conditions. In the final step, the result is calculated on the basis of a
priority-based protocol which is defined through some mathematical deduction.
|
[
{
"version": "v1",
"created": "Mon, 26 Jan 2015 16:12:01 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Dey",
"Ratul Dey Sanjay Chakraborty Lopamudra",
""
]
] |
TITLE: Weather forecasting using Convex hull & K-Means Techniques An Approach
ABSTRACT: Data mining is a popular approach for extracting the necessary data from a large set of
data. Data mining using clustering is a powerful way to analyze data and to make
predictions. In this paper, non-structural time series data are used to forecast
the daily average temperature, humidity and overall weather conditions of Kolkata
city. The air pollution data have been taken from the West Bengal Pollution Control
Board to build the original dataset on which the prediction approach of this
paper is studied and applied. This paper describes a new technique that predicts
the weather conditions using a convex hull, which gives structural data, and then
applies incremental K-means to define the appropriate clusters. It splits the
total database into four separate databases with respect to different weather
conditions. In the final step, the result is calculated on the basis of a
priority-based protocol which is defined through some mathematical deduction.
|
new_dataset
| 0.637482 |
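
Illustrative note (not part of the original record): an assumed, minimal version of the pipeline sketched above, computing the convex hull of daily (temperature, humidity) points and clustering them into four weather-condition groups with K-means. The synthetic data and parameter values are placeholders, not the West Bengal Pollution Control Board dataset.

```python
# Convex hull of the observations plus a 4-cluster K-means partition.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
temperature = rng.normal(28, 4, size=365)           # hypothetical daily averages
humidity = rng.normal(70, 10, size=365)
points = np.column_stack([temperature, humidity])

hull = ConvexHull(points)                           # "structural" boundary of the data
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(points)

print("hull vertices:", len(hull.vertices))
print("cluster sizes:", np.bincount(kmeans.labels_))
```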
1501.06561
|
Mina Ghashami
|
Amey Desai, Mina Ghashami and Jeff M. Phillips
|
Improved Practical Matrix Sketching with Guarantees
|
27 pages
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Matrices have become essential data representations for many large-scale
problems in data analytics, and hence matrix sketching is a critical task.
Although much research has focused on improving the error/size tradeoff under
various sketching paradigms, the many forms of error bounds make these
approaches hard to compare in theory and in practice. This paper attempts to
categorize and compare most known methods under row-wise streaming updates with
provable guarantees, and then to tweak some of these methods to gain practical
improvements while retaining guarantees.
For instance, we observe that a simple heuristic iSVD, with no guarantees,
tends to outperform all known approaches in terms of size/error trade-off. We
modify FrequentDirections, the best performing method with guarantees under the
size/error trade-off, to match the performance of iSVD and retain its
guarantees. We also demonstrate some adversarial datasets where iSVD performs
quite poorly. In comparing techniques in the time/error trade-off, techniques
based on hashing or sampling tend to perform better. In this setting we modify
the most studied sampling regime to retain its error guarantee while obtaining dramatic
improvements in the time/error trade-off.
Finally, we provide easy replication of our studies on APT, a new testbed
which makes available not only code and datasets, but also a computing platform
with fixed environmental settings.
|
[
{
"version": "v1",
"created": "Mon, 26 Jan 2015 20:44:31 GMT"
}
] | 2015-01-27T00:00:00 |
[
[
"Desai",
"Amey",
""
],
[
"Ghashami",
"Mina",
""
],
[
"Phillips",
"Jeff M.",
""
]
] |
TITLE: Improved Practical Matrix Sketching with Guarantees
ABSTRACT: Matrices have become essential data representations for many large-scale
problems in data analytics, and hence matrix sketching is a critical task.
Although much research has focused on improving the error/size tradeoff under
various sketching paradigms, the many forms of error bounds make these
approaches hard to compare in theory and in practice. This paper attempts to
categorize and compare most known methods under row-wise streaming updates with
provable guarantees, and then to tweak some of these methods to gain practical
improvements while retaining guarantees.
For instance, we observe that a simple heuristic iSVD, with no guarantees,
tends to outperform all known approaches in terms of size/error trade-off. We
modify FrequentDirections, the best performing method with guarantees under the
size/error trade-off, to match the performance of iSVD and retain its
guarantees. We also demonstrate some adversarial datasets where iSVD performs
quite poorly. In comparing techniques in the time/error trade-off, techniques
based on hashing or sampling tend to perform better. In this setting we modify
the most studied sampling regime to retain its error guarantee while obtaining dramatic
improvements in the time/error trade-off.
Finally, we provide easy replication of our studies on APT, a new testbed
which makes available not only code and datasets, but also a computing platform
with fixed environmental settings.
|
no_new_dataset
| 0.940572 |
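
Illustrative note (not part of the original record): FrequentDirections, named above as the best performing sketch with guarantees, is short enough to state in full. Below is the textbook variant (shrinking by the median squared singular value, which frees half the rows), not the authors' tuned modification; the final line checks the classical error bound.

```python
# FrequentDirections: maintain an ell x d sketch B of a row-stream A.
import numpy as np

def frequent_directions(rows, ell):
    sketch = np.zeros((ell, rows.shape[1]))
    next_free = 0
    for row in rows:
        if next_free == ell:                                  # sketch full: shrink it
            _, s, vt = np.linalg.svd(sketch, full_matrices=False)
            delta = s[ell // 2] ** 2
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            sketch = s[:, None] * vt                          # bottom half becomes zero rows
            next_free = ell // 2
        sketch[next_free] = row
        next_free += 1
    return sketch

A = np.random.default_rng(0).normal(size=(1000, 50))
B = frequent_directions(A, ell=20)
error = np.linalg.norm(A.T @ A - B.T @ B, 2)
print(error <= np.linalg.norm(A, "fro") ** 2 / (20 // 2))     # classical FD guarantee: True
```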
1003.6059
|
Satadal Saha
|
Satadal Saha, Subhadip Basu, Mita Nasipuri and Dipak Kumar Basu
|
A novel scheme for binarization of vehicle images using hierarchical
histogram equalization technique
|
International Conference on Computer, Communication, Control and
Information Technology (C3IT 2009)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic License Plate Recognition is a challenging area of research
nowadays, and binarization is an integral and most important part of it. In
real-life scenarios, most existing methods fail to properly
binarize the image of a vehicle on a congested road, captured through a CCD
camera. In the current work we have applied the histogram equalization technique
over the complete image and also over different hierarchies of image
partitioning. A novel scheme is formulated for giving the membership value to
each pixel for each hierarchy of histogram equalization. Then the image is
binarized depending on the net membership value of each pixel. The technique is
exhaustively evaluated on the vehicle image dataset as well as the license
plate dataset, giving satisfactory performance.
|
[
{
"version": "v1",
"created": "Wed, 31 Mar 2010 14:00:16 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Jan 2015 21:26:41 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Saha",
"Satadal",
""
],
[
"Basu",
"Subhadip",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Basu",
"Dipak Kumar",
""
]
] |
TITLE: A novel scheme for binarization of vehicle images using hierarchical
histogram equalization technique
ABSTRACT: Automatic License Plate Recognition is a challenging area of research
nowadays, and binarization is an integral and most important part of it. In
real-life scenarios, most existing methods fail to properly
binarize the image of a vehicle on a congested road, captured through a CCD
camera. In the current work we have applied the histogram equalization technique
over the complete image and also over different hierarchies of image
partitioning. A novel scheme is formulated for giving the membership value to
each pixel for each hierarchy of histogram equalization. Then the image is
binarized depending on the net membership value of each pixel. The technique is
exhaustively evaluated on the vehicle image dataset as well as the license
plate dataset, giving satisfactory performance.
|
no_new_dataset
| 0.939304 |
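
Illustrative note (not part of the original record): an assumed two-level version of the hierarchical histogram-equalization idea above, in which the per-pixel membership value is simply the average of the globally equalized image and a 2x2 block-equalized image, followed by a fixed threshold. The paper's membership scheme and hierarchy depth are not reproduced.

```python
# Two-level hierarchical histogram equalization and membership-based binarization.
import numpy as np

def equalize(gray):
    """Plain histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum() / gray.size
    return (cdf[gray] * 255).astype(np.uint8)

def hierarchical_binarize(gray, threshold=128):
    membership = equalize(gray).astype(float)                # level 0: whole image
    h, w = gray.shape
    for i in range(2):                                       # level 1: 2x2 partition
        for j in range(2):
            rows = slice(i * h // 2, (i + 1) * h // 2)
            cols = slice(j * w // 2, (j + 1) * w // 2)
            membership[rows, cols] += equalize(gray[rows, cols])
    membership /= 2.0                                        # net membership per pixel
    return (membership >= threshold).astype(np.uint8) * 255

plate = np.random.default_rng(0).integers(0, 256, size=(60, 180), dtype=np.uint8)
print(hierarchical_binarize(plate).mean())
```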
1404.5562
|
Sai Zhang
|
Sai Zhang, Ke Xu, Xi Chen, Xue Liu
|
Characterizing Information Spreading in Online Social Networks
|
17 pages, 8 figures
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online social networks (OSNs) are changing the way in which the information
spreads throughout the Internet. A deep understanding of the information
spreading in OSNs leads to both social and commercial benefits. In this paper,
we characterize the dynamics of information spreading (e.g., how fast and widely
the information spreads against time) in OSNs by developing a general and
accurate model based on the Interactive Markov Chains (IMCs) and mean-field
theory. This model explicitly reveals the impacts of the network topology on
information spreading in OSNs. Further, we extend our model to feature the
time-varying user behaviors and the ever-changing information popularity. The
complicated dynamic patterns of information spreading are captured by our model
using six key parameters. Extensive tests based on Renren's dataset validate
the accuracy of our model, which demonstrate that it can characterize the
dynamic patterns of video sharing in Renren precisely and predict future
spreading tendency successfully.
|
[
{
"version": "v1",
"created": "Tue, 22 Apr 2014 17:17:40 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Apr 2014 05:29:52 GMT"
},
{
"version": "v3",
"created": "Fri, 23 Jan 2015 10:36:53 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Zhang",
"Sai",
""
],
[
"Xu",
"Ke",
""
],
[
"Chen",
"Xi",
""
],
[
"Liu",
"Xue",
""
]
] |
TITLE: Characterizing Information Spreading in Online Social Networks
ABSTRACT: Online social networks (OSNs) are changing the way in which the information
spreads throughout the Internet. A deep understanding of the information
spreading in OSNs leads to both social and commercial benefits. In this paper,
we characterize the dynamics of information spreading (e.g., how fast and widely
the information spreads against time) in OSNs by developing a general and
accurate model based on the Interactive Markov Chains (IMCs) and mean-field
theory. This model explicitly reveals the impacts of the network topology on
information spreading in OSNs. Further, we extend our model to feature the
time-varying user behaviors and the ever-changing information popularity. The
complicated dynamic patterns of information spreading are captured by our model
using six key parameters. Extensive tests based on Renren's dataset validate
the accuracy of our model, which demonstrate that it can characterize the
dynamic patterns of video sharing in Renren precisely and predict future
spreading tendency successfully.
|
no_new_dataset
| 0.950732 |
1501.05192
|
Umit Rusen Aktas
|
Umit Rusen Aktas, Mete Ozay, Ales Leonardis, Jeremy L. Wyatt
|
A Graph Theoretic Approach for Object Shape Representation in
Compositional Hierarchies Using a Hybrid Generative-Descriptive Model
|
Paper : 17 pages. 13th European Conference on Computer Vision (ECCV
2014), Zurich, Switzerland, September 6-12, 2014, Proceedings, Part III, pp
566-581. Supplementary material can be downloaded from
http://link.springer.com/content/esm/chp:10.1007/978-3-319-10578-9_37/file/MediaObjects/978-3-319-10578-9_37_MOESM1_ESM.pdf
| null |
10.1007/978-3-319-10578-9_37
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph theoretic approach is proposed for object shape representation in a
hierarchical compositional architecture called Compositional Hierarchy of Parts
(CHOP). In the proposed approach, vocabulary learning is performed using a
hybrid generative-descriptive model. First, statistical relationships between
parts are learned using a Minimum Conditional Entropy Clustering algorithm.
Then, selection of descriptive parts is defined as a frequent subgraph
discovery problem, and solved using a Minimum Description Length (MDL)
principle. Finally, part compositions are constructed by compressing the
internal data representation with discovered substructures. Shape
representation and computational complexity properties of the proposed approach
and algorithms are examined using six benchmark two-dimensional shape image
datasets. Experiments show that CHOP can employ part shareability and indexing
mechanisms for fast inference of part compositions using learned shape
vocabularies. Additionally, CHOP provides better shape retrieval performance
than the state-of-the-art shape retrieval methods.
|
[
{
"version": "v1",
"created": "Wed, 21 Jan 2015 15:19:09 GMT"
},
{
"version": "v2",
"created": "Fri, 23 Jan 2015 16:04:57 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Aktas",
"Umit Rusen",
""
],
[
"Ozay",
"Mete",
""
],
[
"Leonardis",
"Ales",
""
],
[
"Wyatt",
"Jeremy L.",
""
]
] |
TITLE: A Graph Theoretic Approach for Object Shape Representation in
Compositional Hierarchies Using a Hybrid Generative-Descriptive Model
ABSTRACT: A graph theoretic approach is proposed for object shape representation in a
hierarchical compositional architecture called Compositional Hierarchy of Parts
(CHOP). In the proposed approach, vocabulary learning is performed using a
hybrid generative-descriptive model. First, statistical relationships between
parts are learned using a Minimum Conditional Entropy Clustering algorithm.
Then, selection of descriptive parts is defined as a frequent subgraph
discovery problem, and solved using a Minimum Description Length (MDL)
principle. Finally, part compositions are constructed by compressing the
internal data representation with discovered substructures. Shape
representation and computational complexity properties of the proposed approach
and algorithms are examined using six benchmark two-dimensional shape image
datasets. Experiments show that CHOP can employ part shareability and indexing
mechanisms for fast inference of part compositions using learned shape
vocabularies. Additionally, CHOP provides better shape retrieval performance
than the state-of-the-art shape retrieval methods.
|
no_new_dataset
| 0.946794 |
1501.05759
|
Rodrigo Benenson
|
Shanshan Zhang and Rodrigo Benenson and Bernt Schiele
|
Filtered Channel Features for Pedestrian Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper starts from the observation that multiple top performing
pedestrian detectors can be modelled by using an intermediate layer filtering
low-level features in combination with a boosted decision forest. Based on this
observation we propose a unifying framework and experimentally explore
different filter families. We report extensive results enabling a systematic
analysis.
Using filtered channel features we obtain top performance on the challenging
Caltech and KITTI datasets, while using only HOG+LUV as low-level features.
When adding optical flow features we further improve detection quality and
report the best known results on the Caltech dataset, reaching 93% recall at 1
FPPI.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 10:19:33 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Zhang",
"Shanshan",
""
],
[
"Benenson",
"Rodrigo",
""
],
[
"Schiele",
"Bernt",
""
]
] |
TITLE: Filtered Channel Features for Pedestrian Detection
ABSTRACT: This paper starts from the observation that multiple top performing
pedestrian detectors can be modelled by using an intermediate layer filtering
low-level features in combination with a boosted decision forest. Based on this
observation we propose a unifying framework and experimentally explore
different filter families. We report extensive results enabling a systematic
analysis.
Using filtered channel features we obtain top performance on the challenging
Caltech and KITTI datasets, while using only HOG+LUV as low-level features.
When adding optical flow features we further improve detection quality and
report the best known results on the Caltech dataset, reaching 93% recall at 1
FPPI.
|
no_new_dataset
| 0.951504 |
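
Illustrative note (not part of the original record): an assumed set of "filtered channel features" in the spirit of the abstract above, built from LUV colour channels, gradient magnitude and six orientation channels, pooled with a box filter. The boosted decision forest stage is omitted, and these channel choices are this example's, not necessarily the paper's configuration. Requires SciPy and scikit-image.

```python
# HOG+LUV style channels followed by a simple box (pooling) filter.
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.color import rgb2luv

def filtered_channels(rgb, pool=4):
    luv = rgb2luv(rgb)                                     # 3 colour channels
    gy, gx = np.gradient(rgb.mean(axis=2))
    mag = np.hypot(gx, gy)                                 # gradient magnitude channel
    ori = np.arctan2(gy, gx) % np.pi
    bins = np.arange(0, np.pi, np.pi / 6)                  # 6 quantised orientations
    ori_channels = np.dstack([mag * ((ori >= b) & (ori < b + np.pi / 6)) for b in bins])
    channels = np.dstack([luv, mag[..., None], ori_channels])
    return uniform_filter(channels, size=(pool, pool, 1))  # box-filtered channels

img = np.random.default_rng(0).random((64, 32, 3))         # stand-in detection window
print(filtered_channels(img).shape)                        # (64, 32, 10)
```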
1501.05790
|
Rodrigo Benenson
|
Jan Hosang and Mohamed Omran and Rodrigo Benenson and Bernt Schiele
|
Taking a Deeper Look at Pedestrians
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we study the use of convolutional neural networks (convnets)
for the task of pedestrian detection. Despite their recent diverse successes,
convnets historically underperform compared to other pedestrian detectors. We
deliberately omit explicitly modelling the problem into the network (e.g. parts
or occlusion modelling) and show that we can reach competitive performance
without bells and whistles. In a wide range of experiments we analyse small and
big convnets, their architectural choices, parameters, and the influence of
different training data, including pre-training on surrogate tasks.
We present the best convnet detectors on the Caltech and KITTI dataset. On
Caltech our convnets reach top performance both for the Caltech1x and
Caltech10x training setup. Using additional data at training time our strongest
convnet model is competitive even to detectors that use additional data
(optical flow) at test time.
|
[
{
"version": "v1",
"created": "Fri, 23 Jan 2015 13:07:56 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Hosang",
"Jan",
""
],
[
"Omran",
"Mohamed",
""
],
[
"Benenson",
"Rodrigo",
""
],
[
"Schiele",
"Bernt",
""
]
] |
TITLE: Taking a Deeper Look at Pedestrians
ABSTRACT: In this paper we study the use of convolutional neural networks (convnets)
for the task of pedestrian detection. Despite their recent diverse successes,
convnets historically underperform compared to other pedestrian detectors. We
deliberately omit explicitly modelling the problem into the network (e.g. parts
or occlusion modelling) and show that we can reach competitive performance
without bells and whistles. In a wide range of experiments we analyse small and
big convnets, their architectural choices, parameters, and the influence of
different training data, including pre-training on surrogate tasks.
We present the best convnet detectors on the Caltech and KITTI dataset. On
Caltech our convnets reach top performance both for the Caltech1x and
Caltech10x training setup. Using additional data at training time our strongest
convnet model is competitive even to detectors that use additional data
(optical flow) at test time.
|
no_new_dataset
| 0.950041 |
1501.05916
|
Nafees Qamar
|
Nafees Qamar, Yilong Yang, Andras Nadas, Zhiming Liu, and Janos
Sztipanovits
|
Anonymously Analyzing Clinical Datasets
| null | null | null | null |
cs.SE cs.CR cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper takes on the problem of automatically identifying
clinically-relevant patterns in medical datasets without compromising patient
privacy. To achieve this goal, we treat datasets as a black box for both
internal and external users of data, which lets us handle clinical data queries
directly and far more efficiently. The novelty of the approach lies in avoiding
the data de-identification process often used as a means of preserving patient
privacy. The implemented toolkit combines software engineering technologies
such as Java EE and RESTful web services, to allow exchanging medical data in
an unidentifiable XML format as well as restricting users to the need-to-know
principle. Our technique also inhibits retrospective processing of data, such
as attacks by an adversary on a medical dataset using advanced computational
methods to reveal Protected Health Information (PHI). The approach is validated
on an endoscopic reporting application based on openEHR and MST standards. From
the usability perspective, the approach can be used to query datasets by
clinical researchers, governmental or non-governmental organizations in
monitoring health care services to improve quality of care.
|
[
{
"version": "v1",
"created": "Wed, 19 Nov 2014 07:53:34 GMT"
}
] | 2015-01-26T00:00:00 |
[
[
"Qamar",
"Nafees",
""
],
[
"Yang",
"Yilong",
""
],
[
"Nadas",
"Andras",
""
],
[
"Liu",
"Zhiming",
""
],
[
"Sztipanovits",
"Janos",
""
]
] |
TITLE: Anonymously Analyzing Clinical Datasets
ABSTRACT: This paper takes on the problem of automatically identifying
clinically-relevant patterns in medical datasets without compromising patient
privacy. To achieve this goal, we treat datasets as a black box for both
internal and external users of data, which lets us handle clinical data queries
directly and far more efficiently. The novelty of the approach lies in avoiding
the data de-identification process often used as a means of preserving patient
privacy. The implemented toolkit combines software engineering technologies
such as Java EE and RESTful web services, to allow exchanging medical data in
an unidentifiable XML format as well as restricting users to the need-to-know
principle. Our technique also inhibits retrospective processing of data, such
as attacks by an adversary on a medical dataset using advanced computational
methods to reveal Protected Health Information (PHI). The approach is validated
on an endoscopic reporting application based on openEHR and MST standards. From
the usability perspective, the approach can be used to query datasets by
clinical researchers, governmental or non-governmental organizations in
monitoring health care services to improve quality of care.
|
no_new_dataset
| 0.943191 |
1310.3911
|
Yongqing Wang
|
Yongqing Wang and Hua-Wei Shen and Shenghua Liu and Xue-Qi Cheng
|
Learning user-specific latent influence and susceptibility from
information cascades
|
from The 29th AAAI Conference on Artificial Intelligence (AAAI-2015)
| null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Predicting cascade dynamics has important implications for understanding
information propagation and launching viral marketing. Previous works mainly
adopt a pair-wise manner, modeling the propagation probability between pairs of
users using n^2 independent parameters for n users. Consequently, these models
suffer from a severe overfitting problem, especially for pairs of users without
direct interactions, limiting their prediction accuracy. Here we propose to
model the cascade dynamics by learning two low-dimensional user-specific
vectors from observed cascades, capturing their influence and susceptibility
respectively. This model requires far fewer parameters and thus can combat the
overfitting problem. Moreover, this model can naturally model
context-dependent factors like the cumulative effect in information propagation.
Extensive experiments on a synthetic dataset and a large-scale microblogging
dataset demonstrate that this model outperforms the existing pair-wise models
at predicting cascade dynamics, cascade size, and "who will be retweeted".
|
[
{
"version": "v1",
"created": "Tue, 15 Oct 2013 03:45:58 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Nov 2014 08:49:46 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Jan 2015 10:03:19 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Wang",
"Yongqing",
""
],
[
"Shen",
"Hua-Wei",
""
],
[
"Liu",
"Shenghua",
""
],
[
"Cheng",
"Xue-Qi",
""
]
] |
TITLE: Learning user-specific latent influence and susceptibility from
information cascades
ABSTRACT: Predicting cascade dynamics has important implications for understanding
information propagation and launching viral marketing. Previous works mainly
adopt a pair-wise manner, modeling the propagation probability between pairs of
users using n^2 independent parameters for n users. Consequently, these models
suffer from a severe overfitting problem, especially for pairs of users without
direct interactions, limiting their prediction accuracy. Here we propose to
model the cascade dynamics by learning two low-dimensional user-specific
vectors from observed cascades, capturing their influence and susceptibility
respectively. This model requires far fewer parameters and thus can combat the
overfitting problem. Moreover, this model can naturally model
context-dependent factors like the cumulative effect in information propagation.
Extensive experiments on a synthetic dataset and a large-scale microblogging
dataset demonstrate that this model outperforms the existing pair-wise models
at predicting cascade dynamics, cascade size, and "who will be retweeted".
|
no_new_dataset
| 0.949623 |
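
Illustrative note (not part of the original record): an assumed parameterisation in the spirit of the abstract above, in which each user gets a K-dimensional influence vector and a K-dimensional susceptibility vector, and the probability that a set of active neighbours activates a target combines the pairwise sigmoid probabilities in an independent-cascade fashion. This is not necessarily the paper's exact likelihood.

```python
# Low-dimensional influence/susceptibility model for cascade activation.
import numpy as np

rng = np.random.default_rng(0)
n_users, K = 100, 5
influence = rng.normal(scale=0.3, size=(n_users, K))       # I_u
susceptibility = rng.normal(scale=0.3, size=(n_users, K))  # S_v

def activation_probability(active_users, target):
    """P(target activates) = 1 - prod over active u of (1 - sigmoid(I_u . S_target))."""
    scores = influence[active_users] @ susceptibility[target]
    pair_probs = 1.0 / (1.0 + np.exp(-scores))
    return 1.0 - np.prod(1.0 - pair_probs)

print(round(activation_probability([3, 7, 42], target=10), 4))
```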
1408.4712
|
Wen-Ze Shao
|
Wen-Ze Shao, Hai-Bo Li, Michael Elad
|
Bi-l0-l2-Norm Regularization for Blind Motion Deblurring
|
32 pages, 16 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
In blind motion deblurring, leading methods today tend towards highly
non-convex approximations of the l0-norm, especially in the image
regularization term. In this paper, we propose a simple, effective and fast
approach for the estimation of the motion blur-kernel, through a bi-l0-l2-norm
regularization imposed on both the intermediate sharp image and the
blur-kernel. Compared with existing methods, the proposed regularization is
shown to be more effective and robust, leading to a more accurate motion
blur-kernel and a better final restored image. A fast numerical scheme is
deployed for alternatingly computing the sharp image and the blur-kernel, by
coupling the operator splitting and augmented Lagrangian methods. Experimental
results on both a benchmark image dataset and real-world motion blurred images
show that the proposed approach is highly competitive with state-of-the-art
methods in both deblurring effectiveness and computational efficiency.
|
[
{
"version": "v1",
"created": "Wed, 20 Aug 2014 16:18:13 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Jan 2015 08:06:47 GMT"
},
{
"version": "v3",
"created": "Thu, 22 Jan 2015 14:02:38 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Shao",
"Wen-Ze",
""
],
[
"Li",
"Hai-Bo",
""
],
[
"Elad",
"Michael",
""
]
] |
TITLE: Bi-l0-l2-Norm Regularization for Blind Motion Deblurring
ABSTRACT: In blind motion deblurring, leading methods today tend towards highly
non-convex approximations of the l0-norm, especially in the image
regularization term. In this paper, we propose a simple, effective and fast
approach for the estimation of the motion blur-kernel, through a bi-l0-l2-norm
regularization imposed on both the intermediate sharp image and the
blur-kernel. Compared with existing methods, the proposed regularization is
shown to be more effective and robust, leading to a more accurate motion
blur-kernel and a better final restored image. A fast numerical scheme is
deployed for alternatingly computing the sharp image and the blur-kernel, by
coupling the operator splitting and augmented Lagrangian methods. Experimental
results on both a benchmark image dataset and real-world motion blurred images
show that the proposed approach is highly competitive with state-of-the-art
methods in both deblurring effectiveness and computational efficiency.
|
no_new_dataset
| 0.947381 |
1501.05396
|
Youssef Mroueh
|
Youssef Mroueh, Etienne Marcheret, Vaibhava Goel
|
Deep Multimodal Learning for Audio-Visual Speech Recognition
|
ICASSP 2015
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present methods in deep multimodal learning for fusing
speech and visual modalities for Audio-Visual Automatic Speech Recognition
(AV-ASR). First, we study an approach where uni-modal deep networks are trained
separately and their final hidden layers fused to obtain a joint feature space
in which another deep network is built. While the audio network alone achieves
a phone error rate (PER) of $41\%$ under clean condition on the IBM large
vocabulary audio-visual studio dataset, this fusion model achieves a PER of
$35.83\%$ demonstrating the tremendous value of the visual channel in phone
classification even in audio with high signal to noise ratio. Second, we
present a new deep network architecture that uses a bilinear softmax layer to
account for class specific correlations between modalities. We show that
combining the posteriors from the bilinear networks with those from the fused
model mentioned above results in a further significant phone error rate
reduction, yielding a final PER of $34.03\%$.
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 05:25:33 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Mroueh",
"Youssef",
""
],
[
"Marcheret",
"Etienne",
""
],
[
"Goel",
"Vaibhava",
""
]
] |
TITLE: Deep Multimodal Learning for Audio-Visual Speech Recognition
ABSTRACT: In this paper, we present methods in deep multimodal learning for fusing
speech and visual modalities for Audio-Visual Automatic Speech Recognition
(AV-ASR). First, we study an approach where uni-modal deep networks are trained
separately and their final hidden layers fused to obtain a joint feature space
in which another deep network is built. While the audio network alone achieves
a phone error rate (PER) of $41\%$ under clean condition on the IBM large
vocabulary audio-visual studio dataset, this fusion model achieves a PER of
$35.83\%$ demonstrating the tremendous value of the visual channel in phone
classification even in audio with high signal to noise ratio. Second, we
present a new deep network architecture that uses a bilinear softmax layer to
account for class specific correlations between modalities. We show that
combining the posteriors from the bilinear networks with those from the fused
model mentioned above results in a further significant phone error rate
reduction, yielding a final PER of $34.03\%$.
|
no_new_dataset
| 0.951953 |
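
Illustrative note (not part of the original record): an assumed PyTorch sketch of the first fusion strategy described above, with two uni-modal encoders whose final hidden layers are concatenated and fed to a joint network over phone classes. The layer sizes, feature dimensions and class count are placeholders, not the IBM system.

```python
# Feature-fusion audio-visual phone classifier.
import torch
import torch.nn as nn

class FusionASR(nn.Module):
    def __init__(self, audio_dim=120, visual_dim=64, hidden=256, n_phones=42):
        super().__init__()
        self.audio_net = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.visual_net = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        self.joint_net = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_phones)
        )

    def forward(self, audio, visual):
        fused = torch.cat([self.audio_net(audio), self.visual_net(visual)], dim=-1)
        return self.joint_net(fused)                       # phone logits

model = FusionASR()
audio = torch.randn(8, 120)                                # e.g. stacked filterbank frames
visual = torch.randn(8, 64)                                # e.g. mouth-region features
print(model(audio, visual).shape)                          # torch.Size([8, 42])
```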
1501.05472
|
Subhadip Basu
|
Ram Sarkar, Bibhash Sen, Nibaran Das, Subhadip Basu
|
Handwritten Devanagari Script Segmentation: A non-linear Fuzzy Approach
|
In Proceedings of IEEE Conference on AI Tools and Engineering
(ICAITE-08), March 6-8, 2008, Pune
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper concentrates on improvement of segmentation accuracy by addressing
some of the key challenges of handwritten Devanagari word image segmentation
technique. In the present work, we have developed a new feature based approach
for identification of Matra pixels from a word image, design of non-linear
fuzzy membership functions for headline estimation and finally design of
non-linear fuzzy functions for identifying segmentation points on the Matra.
The segmentation accuracy achieved by the current technique is 94.8%. This
shows an improvement of performance by 1.8% over the previous technique [1] on
a 300-word dataset, used for the current experiment.
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 12:05:25 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Sarkar",
"Ram",
""
],
[
"Sen",
"Bibhash",
""
],
[
"Das",
"Nibaran",
""
],
[
"Basu",
"Subhadip",
""
]
] |
TITLE: Handwritten Devanagari Script Segmentation: A non-linear Fuzzy Approach
ABSTRACT: The paper concentrates on improvement of segmentation accuracy by addressing
some of the key challenges of handwritten Devanagari word image segmentation
technique. In the present work, we have developed a new feature based approach
for identification of Matra pixels from a word image, design of non-linear
fuzzy membership functions for headline estimation and finally design of
non-linear fuzzy functions for identifying segmentation points on the Matra.
The segmentation accuracy achieved by the current technique is 94.8%. This
shows an improvement of performance by 1.8% over the previous technique [1] on
a 300-word dataset, used for the current experiment.
|
no_new_dataset
| 0.942612 |
1501.05497
|
Subhadip Basu
|
Nibaran Das, Subhadip Basu, Ram Sarkar, Mahantapas Kundu, Mita
Nasipuri, Dipak kumar Basu
|
An Improved Feature Descriptor for Recognition of Handwritten Bangla
Alphabet
|
In proceedings of ICSIP 2009, pp. 451 to 454, August 2009, Mysore,
India. arXiv admin note: substantial text overlap with arXiv:1203.0882,
arXiv:1002.4040, arXiv:1410.0478
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An appropriate feature set for representation of pattern classes is one of
the most important aspects of handwritten character recognition. The
effectiveness of features depends on the discriminating power of the
features chosen to represent patterns of different classes. However,
discriminatory features are not easily measurable. Investigative
experimentation is necessary for identifying discriminatory features. In
the present work we have identified a new variation of the feature set
which significantly outperforms the previously used feature set on the
handwritten Bangla alphabet. A total of 132 features are used here, viz.
modified shadow features, octant and centroid features, distance-based
features, and quad-tree-based longest-run features. Using this feature
set, the recognition performance increases sharply from the 75.05%
observed in our previous work [7] to 85.40% on 50 character classes with
an MLP-based classifier on the same dataset.
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 13:50:25 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Das",
"Nibaran",
""
],
[
"Basu",
"Subhadip",
""
],
[
"Sarkar",
"Ram",
""
],
[
"Kundu",
"Mahantapas",
""
],
[
"Nasipuri",
"Mita",
""
],
[
"Basu",
"Dipak kumar",
""
]
] |
TITLE: An Improved Feature Descriptor for Recognition of Handwritten Bangla
Alphabet
ABSTRACT: An appropriate feature set for representation of pattern classes is one of
the most important aspects of handwritten character recognition. The
effectiveness of features depends on the discriminating power of the
features chosen to represent patterns of different classes. However,
discriminatory features are not easily measurable. Investigative
experimentation is necessary for identifying discriminatory features. In
the present work we have identified a new variation of the feature set
which significantly outperforms the previously used feature set on the
handwritten Bangla alphabet. A total of 132 features are used here, viz.
modified shadow features, octant and centroid features, distance-based
features, and quad-tree-based longest-run features. Using this feature
set, the recognition performance increases sharply from the 75.05%
observed in our previous work [7] to 85.40% on 50 character classes with
an MLP-based classifier on the same dataset.
|
no_new_dataset
| 0.936865 |
1501.05546
|
Ashley Conard
|
Ashley Mae Conard, Stephanie Dodson, Jeremy Kepner, Darrell Ricke
|
Using a Big Data Database to Identify Pathogens in Protein Data Space
|
2 pages, 3 figures
| null | null | null |
cs.DB q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current metagenomic analysis algorithms require significant computing
resources, can report excessive false positives (type I errors), may miss
organisms (type II errors / false negatives), or scale poorly on large
datasets. This paper explores using big data database technologies to
characterize very large metagenomic DNA sequences in protein space, with the
ultimate goal of rapid pathogen identification in patient samples. Our approach
uses the ability of big data databases to hold large sparse associative
array representations of genetic data to extract statistical patterns about the
data that can be used in a variety of ways to improve identification
algorithms.
|
[
{
"version": "v1",
"created": "Thu, 22 Jan 2015 15:58:42 GMT"
}
] | 2015-01-23T00:00:00 |
[
[
"Conard",
"Ashley Mae",
""
],
[
"Dodson",
"Stephanie",
""
],
[
"Kepner",
"Jeremy",
""
],
[
"Ricke",
"Darrell",
""
]
] |
TITLE: Using a Big Data Database to Identify Pathogens in Protein Data Space
ABSTRACT: Current metagenomic analysis algorithms require significant computing
resources, can report excessive false positives (type I errors), may miss
organisms (type II errors / false negatives), or scale poorly on large
datasets. This paper explores using big data database technologies to
characterize very large metagenomic DNA sequences in protein space, with the
ultimate goal of rapid pathogen identification in patient samples. Our approach
uses the ability of big data databases to hold large sparse associative
array representations of genetic data to extract statistical patterns about the
data that can be used in a variety of ways to improve identification
algorithms.
|
no_new_dataset
| 0.953319 |