id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prompt | label | prob
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1411.5065 | Chen Chen | Chen Chen, Yeqing Li, Wei Liu, and Junzhou Huang | SIRF: Simultaneous Image Registration and Fusion in A Unified Framework | null | null | 10.1109/TIP.2015.2456415 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel method for image fusion with a
high-resolution panchromatic image and a low-resolution multispectral image at
the same geographical location. The fusion is formulated as a convex
optimization problem which minimizes a linear combination of a least-squares
fitting term and a dynamic gradient sparsity regularizer. The former is to
preserve accurate spectral information of the multispectral image, while the
latter is to keep sharp edges of the high-resolution panchromatic image. We
further propose to simultaneously register the two images during the fusing
process, which is naturally achieved by virtue of the dynamic gradient sparsity
property. An efficient algorithm is then devised to solve the optimization
problem, accomplishing a linear computational complexity in the size of the
output image in each iteration. We compare our method against seven
state-of-the-art image fusion methods on multispectral image datasets from four
satellites. Extensive experimental results demonstrate that the proposed method
substantially outperforms the others in terms of both spatial and spectral
qualities. We also show that our method can provide high-quality products from
coarsely registered real-world datasets. Finally, a MATLAB implementation is
provided to facilitate future research.
| [
{
"version": "v1",
"created": "Tue, 18 Nov 2014 23:26:37 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Jan 2015 22:00:10 GMT"
}
] | 2015-10-28T00:00:00 | [
[
"Chen",
"Chen",
""
],
[
"Li",
"Yeqing",
""
],
[
"Liu",
"Wei",
""
],
[
"Huang",
"Junzhou",
""
]
] | TITLE: SIRF: Simultaneous Image Registration and Fusion in A Unified Framework
ABSTRACT: In this paper, we propose a novel method for image fusion with a
high-resolution panchromatic image and a low-resolution multispectral image at
the same geographical location. The fusion is formulated as a convex
optimization problem which minimizes a linear combination of a least-squares
fitting term and a dynamic gradient sparsity regularizer. The former is to
preserve accurate spectral information of the multispectral image, while the
latter is to keep sharp edges of the high-resolution panchromatic image. We
further propose to simultaneously register the two images during the fusing
process, which is naturally achieved by virtue of the dynamic gradient sparsity
property. An efficient algorithm is then devised to solve the optimization
problem, accomplishing a linear computational complexity in the size of the
output image in each iteration. We compare our method against seven
state-of-the-art image fusion methods on multispectral image datasets from four
satellites. Extensive experimental results demonstrate that the proposed method
substantially outperforms the others in terms of both spatial and spectral
qualities. We also show that our method can provide high-quality products from
coarsely registered real-world datasets. Finally, a MATLAB implementation is
provided to facilitate future research.
| no_new_dataset | 0.946646 |
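The SIRF abstract describes a least-squares fit to the multispectral input plus a dynamic gradient sparsity term that couples the fused image to the (jointly registered) panchromatic image. A plausible sketch of that objective, with notation reconstructed from the abstract rather than copied from the paper, is:

```latex
\min_{X,\ \tau}\quad
\frac{1}{2}\,\bigl\|\mathcal{D}(X) - M\bigr\|_F^2
\;+\;\lambda \sum_{i}
\Bigl\|\bigl[(\nabla X_1)_i,\ \dots,\ (\nabla X_B)_i,\ (\nabla (P \circ \tau))_i\bigr]\Bigr\|_2
```

Here $X$ is the fused $B$-band image, $\mathcal{D}$ a downsampling operator, $M$ the low-resolution multispectral input, $P$ the panchromatic image, and $\tau$ the registration transform; the group $\ell_2$ norm over per-pixel gradient stacks encourages the edges of all bands and of the registered pan image to align, which is how registration is absorbed into the fusion problem.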
1502.03436 | Felix X. Yu | Yu Cheng, Felix X. Yu, Rogerio S. Feris, Sanjiv Kumar, Alok Choudhary,
Shih-Fu Chang | An exploration of parameter redundancy in deep networks with circulant
projections | International Conference on Computer Vision (ICCV) 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the redundancy of parameters in deep neural networks by replacing
the conventional linear projection in fully-connected layers with the circulant
projection. The circulant structure substantially reduces memory footprint and
enables the use of the Fast Fourier Transform to speed up the computation.
Considering a fully-connected neural network layer with d input nodes, and d
output nodes, this method improves the time complexity from O(d^2) to O(d log d)
and space complexity from O(d^2) to O(d). The space savings are particularly
important for modern deep convolutional neural network architectures, where
fully-connected layers typically contain more than 90% of the network
parameters. We further show that the gradient computation and optimization of
the circulant projections can be performed very efficiently. Our experiments on
three standard datasets show that the proposed approach achieves this
significant gain in storage and efficiency with minimal increase in error rate
compared to neural networks with unstructured projections.
| [
{
"version": "v1",
"created": "Wed, 11 Feb 2015 20:56:02 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Oct 2015 06:45:51 GMT"
}
] | 2015-10-28T00:00:00 | [
[
"Cheng",
"Yu",
""
],
[
"Yu",
"Felix X.",
""
],
[
"Feris",
"Rogerio S.",
""
],
[
"Kumar",
"Sanjiv",
""
],
[
"Choudhary",
"Alok",
""
],
[
"Chang",
"Shih-Fu",
""
]
] | TITLE: An exploration of parameter redundancy in deep networks with circulant
projections
ABSTRACT: We explore the redundancy of parameters in deep neural networks by replacing
the conventional linear projection in fully-connected layers with the circulant
projection. The circulant structure substantially reduces memory footprint and
enables the use of the Fast Fourier Transform to speed up the computation.
Considering a fully-connected neural network layer with d input nodes, and d
output nodes, this method improves the time complexity from O(d^2) to O(d log d)
and space complexity from O(d^2) to O(d). The space savings are particularly
important for modern deep convolutional neural network architectures, where
fully-connected layers typically contain more than 90% of the network
parameters. We further show that the gradient computation and optimization of
the circulant projections can be performed very efficiently. Our experiments on
three standard datasets show that the proposed approach achieves this
significant gain in storage and efficiency with minimal increase in error rate
compared to neural networks with unstructured projections.
| no_new_dataset | 0.951459 |
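The FFT trick behind the circulant projection can be illustrated with a minimal NumPy sketch. This demonstrates the general circulant-matrix identity, not the authors' implementation; in the paper the learned parameter would be the first column `c` (d values instead of d^2), and further details of the layer are omitted here:

```python
import numpy as np

def circulant_project(c, x):
    """Multiply x by the circulant matrix whose first column is c,
    in O(d log d): circulant matrices are diagonalized by the DFT, so
    the product is a circular convolution computed with the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

# Sanity check against the explicit O(d^2) dense product.
rng = np.random.default_rng(0)
d = 8
c = rng.standard_normal(d)
x = rng.standard_normal(d)
C = np.stack([np.roll(c, j) for j in range(d)], axis=1)  # dense circulant matrix
assert np.allclose(circulant_project(c, x), C @ x)
```

The dense product costs O(d^2) time and storage, while the FFT route needs only the d-vector `c`, which is the source of the memory and speed gains claimed in the abstract.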
1502.06260 | Xin Yuan | Xin Yuan, Tsung-Han Tsai, Ruoyu Zhu, Patrick Llull, David Brady,
Lawrence Carin | Compressive Hyperspectral Imaging with Side Information | 20 pages, 21 figures. To appear in the IEEE Journal of Selected
Topics Signal Processing | null | 10.1109/JSTSP.2015.2411575 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A blind compressive sensing algorithm is proposed to reconstruct
hyperspectral images from spectrally-compressed measurements. The
wavelength-dependent data are coded and then superposed, mapping the
three-dimensional hyperspectral datacube to a two-dimensional image. The
inversion algorithm learns a dictionary {\em in situ} from the measurements via
global-local shrinkage priors. By using RGB images as side information of the
compressive sensing system, the proposed approach is extended to learn a
coupled dictionary from the joint dataset of the compressed measurements and
the corresponding RGB images, to improve reconstruction quality. A prototype
camera is built using a liquid-crystal-on-silicon modulator. Experimental
reconstructions of hyperspectral datacubes from both simulated and real
compressed measurements demonstrate the efficacy of the proposed inversion
algorithm, the feasibility of the camera and the benefit of side information.
| [
{
"version": "v1",
"created": "Sun, 22 Feb 2015 19:10:31 GMT"
}
] | 2015-10-28T00:00:00 | [
[
"Yuan",
"Xin",
""
],
[
"Tsai",
"Tsung-Han",
""
],
[
"Zhu",
"Ruoyu",
""
],
[
"Llull",
"Patrick",
""
],
[
"Brady",
"David",
""
],
[
"Carin",
"Lawrence",
""
]
] | TITLE: Compressive Hyperspectral Imaging with Side Information
ABSTRACT: A blind compressive sensing algorithm is proposed to reconstruct
hyperspectral images from spectrally-compressed measurements. The
wavelength-dependent data are coded and then superposed, mapping the
three-dimensional hyperspectral datacube to a two-dimensional image. The
inversion algorithm learns a dictionary {\em in situ} from the measurements via
global-local shrinkage priors. By using RGB images as side information of the
compressive sensing system, the proposed approach is extended to learn a
coupled dictionary from the joint dataset of the compressed measurements and
the corresponding RGB images, to improve reconstruction quality. A prototype
camera is built using a liquid-crystal-on-silicon modulator. Experimental
reconstructions of hyperspectral datacubes from both simulated and real
compressed measurements demonstrate the efficacy of the proposed inversion
algorithm, the feasibility of the camera and the benefit of side information.
| no_new_dataset | 0.951233 |
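The coding-and-superposition step described in this abstract, mapping the 3D hyperspectral datacube to a single 2D image, can be sketched in a few lines of NumPy. The binary per-band coding below is an assumption for illustration; the paper's actual optical modulation (a liquid-crystal-on-silicon modulator) is more involved:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, L = 4, 5, 6                       # spatial size, number of spectral bands
cube = rng.random((H, W, L))            # the 3D hyperspectral datacube

# Wavelength-dependent binary coding (one code pattern per band).
codes = rng.integers(0, 2, size=(H, W, L)).astype(float)

# Code each band, then superpose: the 3D cube collapses to one 2D measurement.
measurement = (codes * cube).sum(axis=2)
assert measurement.shape == (H, W)
```

Reconstruction then amounts to inverting this many-to-one map, which is where the dictionary learned in situ and the RGB side information come in.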
1503.01245 | David Morales-Jimenez | David Morales-Jimenez, Romain Couillet, Matthew R. McKay | Large Dimensional Analysis of Robust M-Estimators of Covariance with
Outliers | Submitted to IEEE Transactions on Signal Processing | null | 10.1109/TSP.2015.2460225 | null | math.ST cs.IT math.IT stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A large dimensional characterization of robust M-estimators of covariance (or
scatter) is provided under the assumption that the dataset comprises
independent (essentially Gaussian) legitimate samples as well as arbitrary
deterministic samples, referred to as outliers. Building upon recent random
matrix advances in the area of robust statistics, we specifically show that the
so-called Maronna M-estimator of scatter asymptotically behaves similarly to
well-known random matrices when the population and sample sizes grow together
to infinity. The introduction of outliers leads the robust estimator to behave
asymptotically as the weighted sum of the sample outer products, with a
constant weight for all legitimate samples and different weights for the
outliers. A fine analysis of this structure reveals importantly that the
propensity of the M-estimator to attenuate (or enhance) the impact of outliers
is mostly dictated by the alignment of the outliers with the inverse population
covariance matrix of the legitimate samples. Thus, robust M-estimators can
bring substantial benefits over more simplistic estimators such as the
per-sample normalized version of the sample covariance matrix, which is not
capable of differentiating the outlying samples. The analysis shows that,
within the class of Maronna's estimators of scatter, the Huber estimator is
most favorable for rejecting outliers. On the contrary, estimators more similar
to Tyler's scale invariant estimator (often preferred in the literature) run
the risk of inadvertently enhancing some outliers.
| [
{
"version": "v1",
"created": "Wed, 4 Mar 2015 07:28:27 GMT"
}
] | 2015-10-28T00:00:00 | [
[
"Morales-Jimenez",
"David",
""
],
[
"Couillet",
"Romain",
""
],
[
"McKay",
"Matthew R.",
""
]
] | TITLE: Large Dimensional Analysis of Robust M-Estimators of Covariance with
Outliers
ABSTRACT: A large dimensional characterization of robust M-estimators of covariance (or
scatter) is provided under the assumption that the dataset comprises
independent (essentially Gaussian) legitimate samples as well as arbitrary
deterministic samples, referred to as outliers. Building upon recent random
matrix advances in the area of robust statistics, we specifically show that the
so-called Maronna M-estimator of scatter asymptotically behaves similarly to
well-known random matrices when the population and sample sizes grow together
to infinity. The introduction of outliers leads the robust estimator to behave
asymptotically as the weighted sum of the sample outer products, with a
constant weight for all legitimate samples and different weights for the
outliers. A fine analysis of this structure reveals importantly that the
propensity of the M-estimator to attenuate (or enhance) the impact of outliers
is mostly dictated by the alignment of the outliers with the inverse population
covariance matrix of the legitimate samples. Thus, robust M-estimators can
bring substantial benefits over more simplistic estimators such as the
per-sample normalized version of the sample covariance matrix, which is not
capable of differentiating the outlying samples. The analysis shows that,
within the class of Maronna's estimators of scatter, the Huber estimator is
most favorable for rejecting outliers. On the contrary, estimators more similar
to Tyler's scale invariant estimator (often preferred in the literature) run
the risk of inadvertently enhancing some outliers.
| no_new_dataset | 0.940463 |
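The weighted-sum structure that the abstract describes, legitimate samples sharing one weight while outliers get their own, can be seen in a small numerical sketch of a Maronna-type fixed-point iteration. The Huber-type weight function below is an assumption for illustration; this is a textbook sketch, not the paper's estimator or its asymptotic analysis:

```python
import numpy as np

def huber_u(t, k=2.0):
    # Huber-type weight: full weight for small squared distances, ~k/t beyond k.
    return np.where(t <= k, 1.0, k / np.maximum(t, k))

def maronna_m_estimator(X, k=2.0, n_iter=200, tol=1e-8):
    """Fixed-point iteration for a Maronna M-estimator of scatter,
        V = (1/n) * sum_i u(x_i' V^{-1} x_i) x_i x_i',
    for zero-mean samples."""
    n, _ = X.shape
    V = np.cov(X, rowvar=False)                    # start from the sample covariance
    for _ in range(n_iter):
        d = np.einsum('ij,jk,ik->i', X, np.linalg.inv(V), X)  # x_i' V^{-1} x_i
        V_new = (X * huber_u(d, k)[:, None]).T @ X / n
        if np.linalg.norm(V_new - V) < tol * np.linalg.norm(V):
            V = V_new
            break
        V = V_new
    return V

# Clean Gaussian samples plus a few far-away deterministic-looking outliers.
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((490, 3)),
               10.0 + rng.standard_normal((10, 3))])
V = maronna_m_estimator(X)
```

The outliers inflate the sample covariance, but here they receive small weights u(d_i), so the robust estimate stays much closer to the clean scatter (up to the deterministic shrinkage induced by u).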
1509.06566 | Valerio Lattanzi | Valerio Lattanzi, Gabriele Cazzoli, Cristina Puzzarini | Rare isotopic species of sulphur monoxide: the rotational spectrum in
the THz region | 18 pages, 3 figures, to be published in ApJ | null | 10.1088/0004-637X/813/1/4 | null | astro-ph.EP physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many sulphur-bearing species have been detected in different astronomical
environments, and their observation has made it possible to derive important
information about the chemical and physical composition of interstellar
regions. In particular, these species have also been shown to trace and probe
the time evolution of hot-core environments. Among the most prominent
sulphur-bearing molecules, the sulphur monoxide radical SO is one of the most
ubiquitous and abundant, observed also in
its isotopically substituted species such as $^{34}$SO and S$^{18}$O. Due to the
importance of this simple diatomic system and to face the challenge of modern
radioastronomical facilities, the spectra of the rare isotopologues of sulphur
monoxide have been extended into the THz range. High-resolution
rotational molecular spectroscopy has been employed to extend the available
dataset of four isotopic species, SO, $^{34}$SO, S$^{17}$O, and S$^{18}$O up to
the 1.5 THz region. The frequency coverage and the spectral resolution of our
measurements allowed a better constraint of the molecular constants of the four
species considered, especially for the two oxygen-substituted
isotopologues. Our measurements were also employed in an isotopically invariant
fit including all available pure rotational and ro-vibrational transitions for
all SO isotopologues, thus enabling accurate predictions for rotational
transitions at higher frequencies. Comparison with recent works performed on
the same system are also provided, showing the quality of our experiment and
the improvement of the datasets for all the species here considered. Transition
frequencies for this system can now be used with confidence by the astronomical
community well into the THz spectral region.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2015 12:26:18 GMT"
}
] | 2015-10-28T00:00:00 | [
[
"Lattanzi",
"Valerio",
""
],
[
"Cazzoli",
"Gabriele",
""
],
[
"Puzzarini",
"Cristina",
""
]
] | TITLE: Rare isotopic species of sulphur monoxide: the rotational spectrum in
the THz region
ABSTRACT: Many sulphur-bearing species have been detected in different astronomical
environments, and their observation has made it possible to derive important
information about the chemical and physical composition of interstellar
regions. In particular, these species have also been shown to trace and probe
the time evolution of hot-core environments. Among the most prominent
sulphur-bearing molecules, the sulphur monoxide radical SO is one of the most
ubiquitous and abundant, observed also in
its isotopically substituted species such as $^{34}$SO and S$^{18}$O. Due to the
importance of this simple diatomic system and to face the challenge of modern
radioastronomical facilities, the spectra of the rare isotopologues of sulphur
monoxide have been extended into the THz range. High-resolution
rotational molecular spectroscopy has been employed to extend the available
dataset of four isotopic species, SO, $^{34}$SO, S$^{17}$O, and S$^{18}$O up to
the 1.5 THz region. The frequency coverage and the spectral resolution of our
measurements allowed a better constraint of the molecular constants of the four
species considered, especially for the two oxygen-substituted
isotopologues. Our measurements were also employed in an isotopically invariant
fit including all available pure rotational and ro-vibrational transitions for
all SO isotopologues, thus enabling accurate predictions for rotational
transitions at higher frequencies. Comparison with recent works performed on
the same system are also provided, showing the quality of our experiment and
the improvement of the datasets for all the species here considered. Transition
frequencies for this system can now be used with confidence by the astronomical
community well into the THz spectral region.
| no_new_dataset | 0.949949 |
1509.08368 | Lorenzo Coviello | Lorenzo Coviello, Massimo Franceschetti, Manuel Garcia-Herranz, Iyad
Rahwan | Limits of Friendship Networks in Predicting Epidemic Risk | 74 pages, 28 figures, 12 tables | null | null | null | physics.soc-ph cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The spread of an infection on a real-world social network is determined by
the interplay of two processes: the dynamics of the network, whose structure
changes over time according to the encounters between individuals, and the
dynamics on the network, whose nodes can infect each other after an encounter.
Physical encounter is the most common vehicle for the spread of infectious
diseases, but detailed information about encounters is often unavailable
because it is expensive, impractical to collect, or privacy sensitive. We ask whether
the friendship ties between the individuals in a social network successfully
predict who is at risk. Using a dataset from a popular online review service,
we build a time-varying network that is a proxy of physical encounter between
users and a static network based on reported friendship. Through computer
simulations, we compare infection processes on the resulting networks and show
that, whereas distance on the friendship network is correlated to epidemic
risk, friendship provides a poor identification of the individuals at risk if
the infection is driven by physical encounter. Such a limit is not due to the
randomness of the infection, but to the structural differences of the two
networks. In contrast to the macroscopic similarity between processes spreading
on different networks, the differences in local connectivity determined by the
two definitions of edges result in striking differences between the dynamics at
a microscopic level. Despite the limits highlighted, we show that periodical
and relatively infrequent monitoring of the real infection on the encounter
network makes it possible to correct the predicted infection on the friendship network and
to achieve satisfactory prediction accuracy. In addition, the friendship
network contains valuable information to effectively contain epidemic outbreaks
when a limited budget is available for immunization.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2015 15:47:13 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Sep 2015 18:27:04 GMT"
},
{
"version": "v3",
"created": "Sat, 10 Oct 2015 21:35:01 GMT"
},
{
"version": "v4",
"created": "Tue, 27 Oct 2015 15:33:24 GMT"
}
] | 2015-10-28T00:00:00 | [
[
"Coviello",
"Lorenzo",
""
],
[
"Franceschetti",
"Massimo",
""
],
[
"Garcia-Herranz",
"Manuel",
""
],
[
"Rahwan",
"Iyad",
""
]
] | TITLE: Limits of Friendship Networks in Predicting Epidemic Risk
ABSTRACT: The spread of an infection on a real-world social network is determined by
the interplay of two processes: the dynamics of the network, whose structure
changes over time according to the encounters between individuals, and the
dynamics on the network, whose nodes can infect each other after an encounter.
Physical encounter is the most common vehicle for the spread of infectious
diseases, but detailed information about encounters is often unavailable
because it is expensive, impractical to collect, or privacy sensitive. We ask whether
the friendship ties between the individuals in a social network successfully
predict who is at risk. Using a dataset from a popular online review service,
we build a time-varying network that is a proxy of physical encounter between
users and a static network based on reported friendship. Through computer
simulations, we compare infection processes on the resulting networks and show
that, whereas distance on the friendship network is correlated to epidemic
risk, friendship provides a poor identification of the individuals at risk if
the infection is driven by physical encounter. Such a limit is not due to the
randomness of the infection, but to the structural differences of the two
networks. In contrast to the macroscopic similarity between processes spreading
on different networks, the differences in local connectivity determined by the
two definitions of edges result in striking differences between the dynamics at
a microscopic level. Despite the limits highlighted, we show that periodical
and relatively infrequent monitoring of the real infection on the encounter
network makes it possible to correct the predicted infection on the friendship network and
to achieve satisfactory prediction accuracy. In addition, the friendship
network contains valuable information to effectively contain epidemic outbreaks
when a limited budget is available for immunization.
| no_new_dataset | 0.941922 |
1510.03167 | Mi Jin Lee | Mi Jin Lee, Woo Seong Jo, Il Gu Yi, Seung Ki Baek and Beom Jun Kim | Evolution of popularity in given names | 16 pages, 5 figures, 2 tables | null | 10.1016/j.physa.2015.09.076 | null | physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An individual's identity in a human society is specified by his or her name.
Differently from family names, usually inherited from fathers, a given name for
a child is often chosen at the parents' disposal. However, their decision
cannot be made in a vacuum but is affected by social conventions and trends.
Furthermore, such social pressure changes in time, as new names gain popularity
while some other names are gradually forgotten. In this paper, we investigate
how popularity of given names has evolved over the last century by using
datasets collected in Korea, the province of Quebec in Canada, and the United
States. In each of these countries, the average popularity of given names
exhibits typical patterns of rise and fall with a time scale of about one
generation. We also observe that notable changes of diversity in given names
signal major social changes.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2015 07:19:00 GMT"
}
] | 2015-10-28T00:00:00 | [
[
"Lee",
"Mi Jin",
""
],
[
"Jo",
"Woo Seong",
""
],
[
"Yi",
"Il Gu",
""
],
[
"Baek",
"Seung Ki",
""
],
[
"Kim",
"Beom Jun",
""
]
] | TITLE: Evolution of popularity in given names
ABSTRACT: An individual's identity in a human society is specified by his or her name.
Differently from family names, usually inherited from fathers, a given name for
a child is often chosen at the parents' disposal. However, their decision
cannot be made in a vacuum but is affected by social conventions and trends.
Furthermore, such social pressure changes in time, as new names gain popularity
while some other names are gradually forgotten. In this paper, we investigate
how popularity of given names has evolved over the last century by using
datasets collected in Korea, the province of Quebec in Canada, and the United
States. In each of these countries, the average popularity of given names
exhibits typical patterns of rise and fall with a time scale of about one
generation. We also observe that notable changes of diversity in given names
signal major social changes.
| no_new_dataset | 0.95018 |
1510.07623 | Muhammad Anis Uddin Nasir | Muhammad Anis Uddin Nasir, Gianmarco De Francisci Morales, David
Garcia-Soriano, Nicolas Kourtellis, and Marco Serafini | Partial Key Grouping: Load-Balanced Partitioning of Distributed Streams | 14 pages. arXiv admin note: substantial text overlap with
arXiv:1504.00788 | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of load balancing in distributed stream processing
engines, which is exacerbated in the presence of skew. We introduce Partial Key
Grouping (PKG), a new stream partitioning scheme that adapts the classical
"power of two choices" to a distributed streaming setting by leveraging two
novel techniques: key splitting and local load estimation. In so doing, it
achieves better load balancing than key grouping while being more scalable than
shuffle grouping.
We test PKG on several large datasets, both real-world and synthetic.
Compared to standard hashing, PKG reduces the load imbalance by up to several
orders of magnitude, and often achieves nearly-perfect load balance. This
result translates into an improvement of up to 175% in throughput and up to 45%
in latency when deployed on a real Storm cluster. PKG has been integrated in
Apache Storm v0.10.
| [
{
"version": "v1",
"created": "Mon, 26 Oct 2015 15:35:14 GMT"
}
] | 2015-10-28T00:00:00 | [
[
"Nasir",
"Muhammad Anis Uddin",
""
],
[
"Morales",
"Gianmarco De Francisci",
""
],
[
"Garcia-Soriano",
"David",
""
],
[
"Kourtellis",
"Nicolas",
""
],
[
"Serafini",
"Marco",
""
]
] | TITLE: Partial Key Grouping: Load-Balanced Partitioning of Distributed Streams
ABSTRACT: We study the problem of load balancing in distributed stream processing
engines, which is exacerbated in the presence of skew. We introduce Partial Key
Grouping (PKG), a new stream partitioning scheme that adapts the classical
"power of two choices" to a distributed streaming setting by leveraging two
novel techniques: key splitting and local load estimation. In so doing, it
achieves better load balancing than key grouping while being more scalable than
shuffle grouping.
We test PKG on several large datasets, both real-world and synthetic.
Compared to standard hashing, PKG reduces the load imbalance by up to several
orders of magnitude, and often achieves nearly-perfect load balance. This
result translates into an improvement of up to 175% in throughput and up to 45%
in latency when deployed on a real Storm cluster. PKG has been integrated in
Apache Storm v0.10.
| no_new_dataset | 0.948346 |
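The two techniques the PKG abstract names, key splitting and local load estimation, fit in a short toy sketch: each key hashes to two candidate workers, and each tuple goes to the less loaded of the two according to the router's local counters. This illustrates the idea, not the Apache Storm integration; the skewed stream below is a made-up example:

```python
import hashlib
import random
from collections import Counter

def h(key, salt, n_workers):
    # Deterministic hash; two salts give the "two choices" for each key.
    digest = hashlib.md5(f"{salt}:{key}".encode()).hexdigest()
    return int(digest, 16) % n_workers

def pkg_route(key, loads):
    """Partial Key Grouping: split each key over two candidate workers and
    send the tuple to the one with the smaller local load estimate."""
    a, b = h(key, 0, len(loads)), h(key, 1, len(loads))
    target = a if loads[a] <= loads[b] else b
    loads[target] += 1
    return target

random.seed(42)
n_workers = 8
# Skewed stream: one hot key carries half the traffic.
stream = ["hot"] * 5000 + [f"k{i}" for i in range(5000)]
random.shuffle(stream)

kg_loads = Counter(h(k, 0, n_workers) for k in stream)  # plain key grouping
pkg_loads = [0] * n_workers
for k in stream:
    pkg_route(k, pkg_loads)

print("key grouping max load:", max(kg_loads.values()))
print("PKG max load:         ", max(pkg_loads))
```

Under key grouping the hot key pins half the stream to a single worker; PKG spreads it over the key's two candidates, so the maximum load drops sharply while each key still reaches at most two workers (which keeps per-key aggregation cheap).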
1510.08039 | Georg Poier | Georg Poier, Konstantinos Roditakis, Samuel Schulter, Damien Michel,
Horst Bischof, Antonis A. Argyros | Hybrid One-Shot 3D Hand Pose Estimation by Exploiting Uncertainties | BMVC 2015 (oral); see also
http://lrs.icg.tugraz.at/research/hybridhape/ | null | 10.5244/C.29.182 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model-based approaches to 3D hand tracking have been shown to perform well in
a wide range of scenarios. However, they require initialisation and cannot
recover easily from tracking failures that occur due to fast hand motions.
Data-driven approaches, on the other hand, can quickly deliver a solution, but
the results often suffer from lower accuracy or missing anatomical validity
compared to those obtained from model-based approaches. In this work we propose
a hybrid approach for hand pose estimation from a single depth image. First, a
learned regressor is employed to deliver multiple initial hypotheses for the 3D
position of each hand joint. Subsequently, the kinematic parameters of a 3D
hand model are found by deliberately exploiting the inherent uncertainty of the
inferred joint proposals. This way, the method provides anatomically valid and
accurate solutions without requiring manual initialisation or suffering from
track losses. Quantitative results on several standard datasets demonstrate
that the proposed method outperforms state-of-the-art representatives of the
model-based, data-driven and hybrid paradigms.
| [
{
"version": "v1",
"created": "Tue, 27 Oct 2015 19:44:44 GMT"
}
] | 2015-10-28T00:00:00 | [
[
"Poier",
"Georg",
""
],
[
"Roditakis",
"Konstantinos",
""
],
[
"Schulter",
"Samuel",
""
],
[
"Michel",
"Damien",
""
],
[
"Bischof",
"Horst",
""
],
[
"Argyros",
"Antonis A.",
""
]
] | TITLE: Hybrid One-Shot 3D Hand Pose Estimation by Exploiting Uncertainties
ABSTRACT: Model-based approaches to 3D hand tracking have been shown to perform well in
a wide range of scenarios. However, they require initialisation and cannot
recover easily from tracking failures that occur due to fast hand motions.
Data-driven approaches, on the other hand, can quickly deliver a solution, but
the results often suffer from lower accuracy or missing anatomical validity
compared to those obtained from model-based approaches. In this work we propose
a hybrid approach for hand pose estimation from a single depth image. First, a
learned regressor is employed to deliver multiple initial hypotheses for the 3D
position of each hand joint. Subsequently, the kinematic parameters of a 3D
hand model are found by deliberately exploiting the inherent uncertainty of the
inferred joint proposals. This way, the method provides anatomically valid and
accurate solutions without requiring manual initialisation or suffering from
track losses. Quantitative results on several standard datasets demonstrate
that the proposed method outperforms state-of-the-art representatives of the
model-based, data-driven and hybrid paradigms.
| no_new_dataset | 0.948106 |
1304.1014 | Emanuele Frandi | Hector Allende, Emanuele Frandi, Ricardo Nanculef, Claudio Sartori | A Novel Frank-Wolfe Algorithm. Analysis and Applications to Large-Scale
SVM Training | REVISED VERSION (October 2013) -- Title and abstract have been
revised. Section 5 was added. Some proofs have been summarized (full-length
proofs available in the previous version) | Information Sciences 285, 66-99, 2014 | null | null | cs.CV cs.AI cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been a renewed interest in the machine learning community
for variants of a sparse greedy approximation procedure for concave
optimization known as {the Frank-Wolfe (FW) method}. In particular, this
procedure has been successfully applied to train large-scale instances of
non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has
yielded not only efficient algorithms but also important theoretical results,
including convergence analysis of training algorithms and new characterizations
of model sparsity.
In this paper, we present and analyze a novel variant of the FW method based
on a new way to perform away steps, a classic strategy used to accelerate the
convergence of the basic FW procedure. Our formulation and analysis are focused
on a general concave maximization problem on the simplex. However, the
specialization of our algorithm to quadratic forms is strongly related to some
classic methods in computational geometry, namely the Gilbert and MDM
algorithms.
On the theoretical side, we demonstrate that the method matches the
guarantees in terms of convergence rate and number of iterations obtained by
using classic away steps. In particular, the method enjoys a linear rate of
convergence, a result that has been recently proved for MDM on quadratic forms.
On the practical side, we provide experiments on several classification
datasets, and evaluate the results using statistical tests. Experiments show
that our method is faster than the FW method with classic away steps, and works
well even in the cases in which classic away steps slow down the algorithm.
Furthermore, these improvements are obtained without sacrificing the predictive
accuracy of the obtained SVM model.
| [
{
"version": "v1",
"created": "Wed, 3 Apr 2013 17:15:43 GMT"
},
{
"version": "v2",
"created": "Sun, 13 Oct 2013 09:50:26 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Allende",
"Hector",
""
],
[
"Frandi",
"Emanuele",
""
],
[
"Nanculef",
"Ricardo",
""
],
[
"Sartori",
"Claudio",
""
]
] | TITLE: A Novel Frank-Wolfe Algorithm. Analysis and Applications to Large-Scale
SVM Training
ABSTRACT: Recently, there has been a renewed interest in the machine learning community
for variants of a sparse greedy approximation procedure for concave
optimization known as {the Frank-Wolfe (FW) method}. In particular, this
procedure has been successfully applied to train large-scale instances of
non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has
yielded not only efficient algorithms but also important theoretical results,
including convergence analysis of training algorithms and new characterizations
of model sparsity.
In this paper, we present and analyze a novel variant of the FW method based
on a new way to perform away steps, a classic strategy used to accelerate the
convergence of the basic FW procedure. Our formulation and analysis is focused
on a general concave maximization problem on the simplex. However, the
specialization of our algorithm to quadratic forms is strongly related to some
classic methods in computational geometry, namely the Gilbert and MDM
algorithms.
On the theoretical side, we demonstrate that the method matches the
guarantees in terms of convergence rate and number of iterations obtained by
using classic away steps. In particular, the method enjoys a linear rate of
convergence, a result that has been recently proved for MDM on quadratic forms.
On the practical side, we provide experiments on several classification
datasets, and evaluate the results using statistical tests. Experiments show
that our method is faster than the FW method with classic away steps, and works
well even in the cases in which classic away steps slow down the algorithm.
Furthermore, these improvements are obtained without sacrificing the predictive
accuracy of the obtained SVM model.
| no_new_dataset | 0.944331 |
1410.4062 | Emanuele Frandi | Emanuele Frandi, Ricardo Nanculef, Johan Suykens | Complexity Issues and Randomization Strategies in Frank-Wolfe Algorithms
for Machine Learning | null | null | null | null | stat.ML cs.LG cs.NA math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Frank-Wolfe algorithms for convex minimization have recently gained
considerable attention from the Optimization and Machine Learning communities,
as their properties make them a suitable choice in a variety of applications.
However, as each iteration requires to optimize a linear model, a clever
implementation is crucial to make such algorithms viable on large-scale
datasets. For this purpose, approximation strategies based on a random sampling
have been proposed by several researchers. In this work, we perform an
experimental study on the effectiveness of these techniques, analyze possible
alternatives and provide some guidelines based on our results.
| [
{
"version": "v1",
"created": "Wed, 15 Oct 2014 13:50:34 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Frandi",
"Emanuele",
""
],
[
"Nanculef",
"Ricardo",
""
],
[
"Suykens",
"Johan",
""
]
] | TITLE: Complexity Issues and Randomization Strategies in Frank-Wolfe Algorithms
for Machine Learning
ABSTRACT: Frank-Wolfe algorithms for convex minimization have recently gained
considerable attention from the Optimization and Machine Learning communities,
as their properties make them a suitable choice in a variety of applications.
However, as each iteration requires to optimize a linear model, a clever
implementation is crucial to make such algorithms viable on large-scale
datasets. For this purpose, approximation strategies based on a random sampling
have been proposed by several researchers. In this work, we perform an
experimental study on the effectiveness of these techniques, analyze possible
alternatives and provide some guidelines based on our results.
| no_new_dataset | 0.951684 |
1412.6651 | Sixin Zhang Sixin Zhang | Sixin Zhang, Anna Choromanska, Yann LeCun | Deep learning with Elastic Averaging SGD | NIPS2015 camera-ready version | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of stochastic optimization for deep learning in the
parallel computing environment under communication constraints. A new algorithm
is proposed in this setting where the communication and coordination of work
among concurrent processes (local workers), is based on an elastic force which
links the parameters they compute with a center variable stored by the
parameter server (master). The algorithm enables the local workers to perform
more exploration, i.e. the algorithm allows the local variables to fluctuate
further from the center variable by reducing the amount of communication
between local workers and the master. We empirically demonstrate that in the
deep learning setting, due to the existence of many local optima, allowing more
exploration can lead to the improved performance. We propose synchronous and
asynchronous variants of the new algorithm. We provide the stability analysis
of the asynchronous variant in the round-robin scheme and compare it with the
more common parallelized method ADMM. We show that the stability of EASGD is
guaranteed when a simple stability condition is satisfied, which is not the
case for ADMM. We additionally propose the momentum-based version of our
algorithm that can be applied in both synchronous and asynchronous settings.
Asynchronous variant of the algorithm is applied to train convolutional neural
networks for image classification on the CIFAR and ImageNet datasets.
Experiments demonstrate that the new algorithm accelerates the training of deep
architectures compared to DOWNPOUR and other common baseline approaches and
furthermore is very communication efficient.
| [
{
"version": "v1",
"created": "Sat, 20 Dec 2014 13:22:23 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Dec 2014 20:50:02 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Jan 2015 01:18:40 GMT"
},
{
"version": "v4",
"created": "Wed, 25 Feb 2015 19:00:29 GMT"
},
{
"version": "v5",
"created": "Wed, 29 Apr 2015 11:56:24 GMT"
},
{
"version": "v6",
"created": "Sat, 6 Jun 2015 00:20:58 GMT"
},
{
"version": "v7",
"created": "Sat, 8 Aug 2015 02:52:48 GMT"
},
{
"version": "v8",
"created": "Sun, 25 Oct 2015 12:12:52 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Zhang",
"Sixin",
""
],
[
"Choromanska",
"Anna",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Deep learning with Elastic Averaging SGD
ABSTRACT: We study the problem of stochastic optimization for deep learning in the
parallel computing environment under communication constraints. A new algorithm
is proposed in this setting where the communication and coordination of work
among concurrent processes (local workers), is based on an elastic force which
links the parameters they compute with a center variable stored by the
parameter server (master). The algorithm enables the local workers to perform
more exploration, i.e. the algorithm allows the local variables to fluctuate
further from the center variable by reducing the amount of communication
between local workers and the master. We empirically demonstrate that in the
deep learning setting, due to the existence of many local optima, allowing more
exploration can lead to the improved performance. We propose synchronous and
asynchronous variants of the new algorithm. We provide the stability analysis
of the asynchronous variant in the round-robin scheme and compare it with the
more common parallelized method ADMM. We show that the stability of EASGD is
guaranteed when a simple stability condition is satisfied, which is not the
case for ADMM. We additionally propose the momentum-based version of our
algorithm that can be applied in both synchronous and asynchronous settings.
Asynchronous variant of the algorithm is applied to train convolutional neural
networks for image classification on the CIFAR and ImageNet datasets.
Experiments demonstrate that the new algorithm accelerates the training of deep
architectures compared to DOWNPOUR and other common baseline approaches and
furthermore is very communication efficient.
| no_new_dataset | 0.945045 |
1502.01563 | Emanuele Frandi | Emanuele Frandi, Ricardo Nanculef, Johan A. K. Suykens | A PARTAN-Accelerated Frank-Wolfe Algorithm for Large-Scale SVM
Classification | null | null | null | null | stat.ML cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Frank-Wolfe algorithms have recently regained the attention of the Machine
Learning community. Their solid theoretical properties and sparsity guarantees
make them a suitable choice for a wide range of problems in this field. In
addition, several variants of the basic procedure exist that improve its
theoretical properties and practical performance. In this paper, we investigate
the application of some of these techniques to Machine Learning, focusing in
particular on a Parallel Tangent (PARTAN) variant of the FW algorithm that has
not been previously suggested or studied for this type of problem. We provide
experiments both in a standard setting and using a stochastic speed-up
technique, showing that the considered algorithms obtain promising results on
several medium and large-scale benchmark datasets for SVM classification.
| [
{
"version": "v1",
"created": "Thu, 5 Feb 2015 14:17:55 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Frandi",
"Emanuele",
""
],
[
"Nanculef",
"Ricardo",
""
],
[
"Suykens",
"Johan A. K.",
""
]
] | TITLE: A PARTAN-Accelerated Frank-Wolfe Algorithm for Large-Scale SVM
Classification
ABSTRACT: Frank-Wolfe algorithms have recently regained the attention of the Machine
Learning community. Their solid theoretical properties and sparsity guarantees
make them a suitable choice for a wide range of problems in this field. In
addition, several variants of the basic procedure exist that improve its
theoretical properties and practical performance. In this paper, we investigate
the application of some of these techniques to Machine Learning, focusing in
particular on a Parallel Tangent (PARTAN) variant of the FW algorithm that has
not been previously suggested or studied for this type of problem. We provide
experiments both in a standard setting and using a stochastic speed-up
technique, showing that the considered algorithms obtain promising results on
several medium and large-scale benchmark datasets for SVM classification.
| no_new_dataset | 0.949342 |
1510.07104 | Qi Fan | Qi Fan, Zhengkui Wang, Chee-Yong Chan and Kian-Lee Tan | Supporting Window Analytics over Large-scale Dynamic Graphs | 14 pages, 16 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In relational DBMS, window functions have been widely used to facilitate data
analytics. Surprisingly, while similar concepts have been employed for graph
analytics, there has been no explicit notion of graph window analytic
functions. In this paper, we formally introduce window queries for graph
analytics. In such queries, for each vertex, the analysis is performed on a
window of vertices defined based on the graph structure. In particular, we
identify two instantiations, namely the k-hop window and the topological
window. We develop two novel indices, Dense Block index (DBIndex) and
Inheritance index (I-Index), to facilitate efficient processing of these two
types of windows respectively. Extensive experiments are conducted over both
real and synthetic datasets with hundreds of millions of vertices and edges.
Experimental results indicate that our proposed index-based query processing
solutions achieve four orders of magnitude of query performance gain over the
non-index algorithm and are superior to EAGR with respect to scalability and efficiency.
| [
{
"version": "v1",
"created": "Sat, 24 Oct 2015 04:09:38 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Fan",
"Qi",
""
],
[
"Wang",
"Zhengkui",
""
],
[
"Chan",
"Chee-Yong",
""
],
[
"Tan",
"Kian-Lee",
""
]
] | TITLE: Supporting Window Analytics over Large-scale Dynamic Graphs
ABSTRACT: In relational DBMS, window functions have been widely used to facilitate data
analytics. Surprisingly, while similar concepts have been employed for graph
analytics, there has been no explicit notion of graph window analytic
functions. In this paper, we formally introduce window queries for graph
analytics. In such queries, for each vertex, the analysis is performed on a
window of vertices defined based on the graph structure. In particular, we
identify two instantiations, namely the k-hop window and the topological
window. We develop two novel indices, Dense Block index (DBIndex) and
Inheritance index (I-Index), to facilitate efficient processing of these two
types of windows respectively. Extensive experiments are conducted over both
real and synthetic datasets with hundreds of millions of vertices and edges.
Experimental results indicate that our proposed index-based query processing
solutions achieve four orders of magnitude of query performance gain over the
non-index algorithm and are superior to EAGR with respect to scalability and efficiency.
| no_new_dataset | 0.94743 |
1510.07136 | Marian George | Marian George | Image Parsing with a Wide Range of Classes and Scene-Level Context | Published at CVPR 2015, Computer Vision and Pattern Recognition
(CVPR), 2015 IEEE Conference on | null | 10.1109/CVPR.2015.7298985 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a nonparametric scene parsing approach that improves the
overall accuracy, as well as the coverage of foreground classes in scene
images. We first improve the label likelihood estimates at superpixels by
merging likelihood scores from different probabilistic classifiers. This boosts
the classification performance and enriches the representation of
less-represented classes. Our second contribution consists of incorporating
semantic context in the parsing process through global label costs. Our method
does not rely on image retrieval sets but rather assigns a global likelihood
estimate to each label, which is plugged into the overall energy function. We
evaluate our system on two large-scale datasets, SIFTflow and LMSun. We achieve
state-of-the-art performance on the SIFTflow dataset and near-record results on
LMSun.
| [
{
"version": "v1",
"created": "Sat, 24 Oct 2015 12:16:27 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"George",
"Marian",
""
]
] | TITLE: Image Parsing with a Wide Range of Classes and Scene-Level Context
ABSTRACT: This paper presents a nonparametric scene parsing approach that improves the
overall accuracy, as well as the coverage of foreground classes in scene
images. We first improve the label likelihood estimates at superpixels by
merging likelihood scores from different probabilistic classifiers. This boosts
the classification performance and enriches the representation of
less-represented classes. Our second contribution consists of incorporating
semantic context in the parsing process through global label costs. Our method
does not rely on image retrieval sets but rather assigns a global likelihood
estimate to each label, which is plugged into the overall energy function. We
evaluate our system on two large-scale datasets, SIFTflow and LMSun. We achieve
state-of-the-art performance on the SIFTflow dataset and near-record results on
LMSun.
| no_new_dataset | 0.953275 |
1510.07169 | Emanuele Frandi | Emanuele Frandi, Ricardo Nanculef, Stefano Lodi, Claudio Sartori,
Johan A. K. Suykens | Fast and Scalable Lasso via Stochastic Frank-Wolfe Methods with a
Convergence Guarantee | null | null | null | Internal Report 15-93, ESAT-STADIUS, KU Leuven, 2015 | stat.ML cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Frank-Wolfe (FW) algorithms have been often proposed over the last few years
as efficient solvers for a variety of optimization problems arising in the
field of Machine Learning. The ability to work with cheap projection-free
iterations and the incremental nature of the method make FW a very effective
choice for many large-scale problems where computing a sparse model is
desirable.
In this paper, we present a high-performance implementation of the FW method
tailored to solve large-scale Lasso regression problems, based on a randomized
iteration, and prove that the convergence guarantees of the standard FW method
are preserved in the stochastic setting. We show experimentally that our
algorithm outperforms several existing state of the art methods, including the
Coordinate Descent algorithm by Friedman et al. (one of the fastest known Lasso
solvers), on several benchmark datasets with a very large number of features,
without sacrificing the accuracy of the model. Our results illustrate that the
algorithm is able to generate the complete regularization path on problems of
size up to four million variables in less than one minute.
| [
{
"version": "v1",
"created": "Sat, 24 Oct 2015 17:56:27 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Frandi",
"Emanuele",
""
],
[
"Nanculef",
"Ricardo",
""
],
[
"Lodi",
"Stefano",
""
],
[
"Sartori",
"Claudio",
""
],
[
"Suykens",
"Johan A. K.",
""
]
] | TITLE: Fast and Scalable Lasso via Stochastic Frank-Wolfe Methods with a
Convergence Guarantee
ABSTRACT: Frank-Wolfe (FW) algorithms have been often proposed over the last few years
as efficient solvers for a variety of optimization problems arising in the
field of Machine Learning. The ability to work with cheap projection-free
iterations and the incremental nature of the method make FW a very effective
choice for many large-scale problems where computing a sparse model is
desirable.
In this paper, we present a high-performance implementation of the FW method
tailored to solve large-scale Lasso regression problems, based on a randomized
iteration, and prove that the convergence guarantees of the standard FW method
are preserved in the stochastic setting. We show experimentally that our
algorithm outperforms several existing state of the art methods, including the
Coordinate Descent algorithm by Friedman et al. (one of the fastest known Lasso
solvers), on several benchmark datasets with a very large number of features,
without sacrificing the accuracy of the model. Our results illustrate that the
algorithm is able to generate the complete regularization path on problems of
size up to four million variables in less than one minute.
| no_new_dataset | 0.94545 |
1510.07211 | Lili Mou | Lili Mou, Rui Men, Ge Li, Lu Zhang, Zhi Jin | On End-to-End Program Generation from User Intention by Deep Neural
Networks | Submitted to 2016 International Conference of Software Engineering
"Vision of 2025 and Beyond" track | null | null | null | cs.SE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
character-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice.
| [
{
"version": "v1",
"created": "Sun, 25 Oct 2015 06:52:45 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Mou",
"Lili",
""
],
[
"Men",
"Rui",
""
],
[
"Li",
"Ge",
""
],
[
"Zhang",
"Lu",
""
],
[
"Jin",
"Zhi",
""
]
] | TITLE: On End-to-End Program Generation from User Intention by Deep Neural
Networks
ABSTRACT: This paper envisions an end-to-end program generation scenario using
recurrent neural networks (RNNs): Users can express their intention in natural
language; an RNN then automatically generates corresponding code in a
character-by-character fashion. We demonstrate its feasibility through a case
study and empirical analysis. To fully make such technique useful in practice,
we also point out several cross-disciplinary challenges, including modeling
user intention, providing datasets, improving model architectures, etc.
Although much long-term research shall be addressed in this new field, we
believe end-to-end program generation would become a reality in future decades,
and we are looking forward to its practice.
| no_new_dataset | 0.952042 |
1510.07299 | Shant Shahbazian | Alireza Marefat Khah and Shant Shahbazian | Revisiting the Z-dependence of the electron density at the nuclei | 5 pages, 1 figure, supporting information | null | null | null | physics.chem-ph physics.atom-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new formula that relates the electron density at the nucleus of
atoms,rho(0,Z), and the atomic number,Z, is proposed. This
formula,rho(0,Z)=a(Z-bZ^(0.5))^3, contains two unknown parameters (a,b) that
are derived using a least square regression to the ab initio derived rho(0,Z)
of Koga dataset from He (Z=2) to Lr (Z=103) atoms (Theor Chim Acta 95, 113
(1997)). In comparison to the well-known formula,rho(0,Z)=aZ^b, used for the
same purpose previously, the resulting new formula is capable of reproducing
the ab initio rho(0,Z) dataset an order of magnitude more precisely without
introducing more regression parameters. This new formula may be used to
transform the equations that relate correlation energy of atoms and rho(0,Z)
into simpler equations just containing the atomic number as a fundamental
property of atoms.
| [
{
"version": "v1",
"created": "Sun, 25 Oct 2015 20:41:48 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Khah",
"Alireza Marefat",
""
],
[
"Shahbazian",
"Shant",
""
]
] | TITLE: Revisiting the Z-dependence of the electron density at the nuclei
ABSTRACT: A new formula that relates the electron density at the nucleus of
atoms,rho(0,Z), and the atomic number,Z, is proposed. This
formula,rho(0,Z)=a(Z-bZ^(0.5))^3, contains two unknown parameters (a,b) that
are derived using a least square regression to the ab initio derived rho(0,Z)
of Koga dataset from He (Z=2) to Lr (Z=103) atoms (Theor Chim Acta 95, 113
(1997)). In comparison to the well-known formula,rho(0,Z)=aZ^b, used for the
same purpose previously, the resulting new formula is capable of reproducing
the ab initio rho(0,Z) dataset an order of magnitude more precisely without
introducing more regression parameters. This new formula may be used to
transform the equations that relate correlation energy of atoms and rho(0,Z)
into simpler equations just containing the atomic number as a fundamental
property of atoms.
| no_new_dataset | 0.943504 |
1510.07317 | S. Hussain Raza | S. Hussain Raza, Omar Javed, Aveek Das, Harpreet Sawhney, Hui Cheng,
Irfan Essa | Depth Extraction from Videos Using Geometric Context and Occlusion
Boundaries | British Machine Vision Conference (BMVC) 2014 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an algorithm to estimate depth in dynamic video scenes. We propose
to learn and infer depth in videos from appearance, motion, occlusion
boundaries, and geometric context of the scene. Using our method, depth can be
estimated from unconstrained videos with no requirement of camera pose
estimation, and with significant background/foreground motions. We start by
decomposing a video into spatio-temporal regions. For each spatio-temporal
region, we learn the relationship of depth to visual appearance, motion, and
geometric classes. Then we infer the depth information of new scenes using
piecewise planar parametrization estimated within a Markov random field (MRF)
framework by combining appearance to depth learned mappings and occlusion
boundary guided smoothness constraints. Subsequently, we perform temporal
smoothing to obtain temporally consistent depth maps. To evaluate our depth
estimation algorithm, we provide a novel dataset with ground truth depth for
outdoor video scenes. We present a thorough evaluation of our algorithm on our
new dataset and the publicly available Make3d static image dataset.
| [
{
"version": "v1",
"created": "Sun, 25 Oct 2015 22:41:24 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Raza",
"S. Hussain",
""
],
[
"Javed",
"Omar",
""
],
[
"Das",
"Aveek",
""
],
[
"Sawhney",
"Harpreet",
""
],
[
"Cheng",
"Hui",
""
],
[
"Essa",
"Irfan",
""
]
] | TITLE: Depth Extraction from Videos Using Geometric Context and Occlusion
Boundaries
ABSTRACT: We present an algorithm to estimate depth in dynamic video scenes. We propose
to learn and infer depth in videos from appearance, motion, occlusion
boundaries, and geometric context of the scene. Using our method, depth can be
estimated from unconstrained videos with no requirement of camera pose
estimation, and with significant background/foreground motions. We start by
decomposing a video into spatio-temporal regions. For each spatio-temporal
region, we learn the relationship of depth to visual appearance, motion, and
geometric classes. Then we infer the depth information of new scenes using
piecewise planar parametrization estimated within a Markov random field (MRF)
framework by combining appearance to depth learned mappings and occlusion
boundary guided smoothness constraints. Subsequently, we perform temporal
smoothing to obtain temporally consistent depth maps. To evaluate our depth
estimation algorithm, we provide a novel dataset with ground truth depth for
outdoor video scenes. We present a thorough evaluation of our algorithm on our
new dataset and the publicly available Make3d static image dataset.
| new_dataset | 0.958069 |
1510.07480 | Felipe Olmos | Felipe Olmos, Bruno Kauffmann | An Inverse Problem Approach for Content Popularity Estimation | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Internet increasingly focuses on content, as exemplified by the now
popular Information Centric Networking paradigm. This means, in particular,
that estimating content popularities becomes essential to manage and distribute
content pieces efficiently. In this paper, we show how to properly estimate
content popularities from a traffic trace.
Specifically, we consider the problem of the popularity inference in order to
tune content-level performance models, e.g. caching models. In this context,
special care must be brought on the fact that an observer measures only the
flow of requests, which differs from the model parameters, though both
quantities are related by the model assumptions. Current studies, however,
ignore this difference and use the observed data as model parameters. In this
paper, we highlight the inverse problem that consists in determining parameters
so that the flow of requests is properly predicted by the model. We then show
how such an inverse problem can be solved using Maximum Likelihood Estimation.
Based on two large traces from the Orange network and two synthetic datasets,
we eventually quantify the importance of this inversion step for the
performance evaluation accuracy.
| [
{
"version": "v1",
"created": "Mon, 26 Oct 2015 13:54:47 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Olmos",
"Felipe",
""
],
[
"Kauffmann",
"Bruno",
""
]
] | TITLE: An Inverse Problem Approach for Content Popularity Estimation
ABSTRACT: The Internet increasingly focuses on content, as exemplified by the now
popular Information Centric Networking paradigm. This means, in particular,
that estimating content popularities becomes essential to manage and distribute
content pieces efficiently. In this paper, we show how to properly estimate
content popularities from a traffic trace.
Specifically, we consider the problem of the popularity inference in order to
tune content-level performance models, e.g. caching models. In this context,
special care must be brought on the fact that an observer measures only the
flow of requests, which differs from the model parameters, though both
quantities are related by the model assumptions. Current studies, however,
ignore this difference and use the observed data as model parameters. In this
paper, we highlight the inverse problem that consists in determining parameters
so that the flow of requests is properly predicted by the model. We then show
how such an inverse problem can be solved using Maximum Likelihood Estimation.
Based on two large traces from the Orange network and two synthetic datasets,
we eventually quantify the importance of this inversion step for the
performance evaluation accuracy.
| no_new_dataset | 0.947186 |
1510.07586 | Sudha Rao | Sudha Rao, Yogarshi Vyas, Hal Daume III, Philip Resnik | Parser for Abstract Meaning Representation using Learning to Search | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a novel technique to parse English sentences into Abstract Meaning
Representation (AMR) using SEARN, a Learning to Search approach, by modeling
the concept and the relation learning in a unified framework. We evaluate our
parser on multiple datasets from varied domains and show an absolute
improvement of 2% to 6% over the state-of-the-art. Additionally we show that
using the most frequent concept gives us a baseline that is stronger than the
state-of-the-art for concept prediction. We plan to release our parser for
public use.
| [
{
"version": "v1",
"created": "Mon, 26 Oct 2015 18:34:16 GMT"
}
] | 2015-10-27T00:00:00 | [
[
"Rao",
"Sudha",
""
],
[
"Vyas",
"Yogarshi",
""
],
[
"Daume",
"Hal",
"III"
],
[
"Resnik",
"Philip",
""
]
] | TITLE: Parser for Abstract Meaning Representation using Learning to Search
ABSTRACT: We develop a novel technique to parse English sentences into Abstract Meaning
Representation (AMR) using SEARN, a Learning to Search approach, by modeling
the concept and the relation learning in a unified framework. We evaluate our
parser on multiple datasets from varied domains and show an absolute
improvement of 2% to 6% over the state-of-the-art. Additionally we show that
using the most frequent concept gives us a baseline that is stronger than the
state-of-the-art for concept prediction. We plan to release our parser for
public use.
| no_new_dataset | 0.928797 |
1510.06939 | Mihir Jain | Mihir Jain, Jan C. van Gemert, Thomas Mensink and Cees G. M. Snoek | Objects2action: Classifying and localizing actions without any video
example | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this paper is to recognize actions in video without the need for
examples. Different from traditional zero-shot approaches we do not demand the
design and specification of attribute classifiers and class-to-attribute
mappings to allow for transfer from seen classes to unseen classes. Our key
contribution is objects2action, a semantic word embedding that is spanned by a
skip-gram model of thousands of object categories. Action labels are assigned
to an object encoding of unseen video based on a convex combination of action
and object affinities. Our semantic embedding has three main characteristics to
accommodate for the specifics of actions. First, we propose a mechanism to
exploit multiple-word descriptions of actions and objects. Second, we
incorporate the automated selection of the most responsive objects per action.
And finally, we demonstrate how to extend our zero-shot approach to the
spatio-temporal localization of actions in video. Experiments on four action
datasets demonstrate the potential of our approach.
| [
{
"version": "v1",
"created": "Fri, 23 Oct 2015 14:23:44 GMT"
}
] | 2015-10-26T00:00:00 | [
[
"Jain",
"Mihir",
""
],
[
"van Gemert",
"Jan C.",
""
],
[
"Mensink",
"Thomas",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] | TITLE: Objects2action: Classifying and localizing actions without any video
example
ABSTRACT: The goal of this paper is to recognize actions in video without the need for
examples. Different from traditional zero-shot approaches we do not demand the
design and specification of attribute classifiers and class-to-attribute
mappings to allow for transfer from seen classes to unseen classes. Our key
contribution is objects2action, a semantic word embedding that is spanned by a
skip-gram model of thousands of object categories. Action labels are assigned
to an object encoding of unseen video based on a convex combination of action
and object affinities. Our semantic embedding has three main characteristics to
accommodate for the specifics of actions. First, we propose a mechanism to
exploit multiple-word descriptions of actions and objects. Second, we
incorporate the automated selection of the most responsive objects per action.
And finally, we demonstrate how to extend our zero-shot approach to the
spatio-temporal localization of actions in video. Experiments on four action
datasets demonstrate the potential of our approach.
| no_new_dataset | 0.944022 |
1411.2384 | Jean-Francois Mercure | J.-F. Mercure and A. Lam | The effectiveness of policy on consumer choices for private road
passenger transport emissions reductions in six major economies | 12 pages, 5 figures, 2 tables + 8 pages Supplementary Information
included, to appear in this final form in Environmental Research Letters | Environmental Research Letters 10 (2015) 064008 | 10.1088/1748-9326/10/6/064008 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The effectiveness of fiscal policy to influence vehicle purchases for
emissions reductions in private passenger road transport depends on its ability
to incentivise consumers to make choices oriented towards lower emissions
vehicles. However, car purchase choices are known to be strongly socially
determined, and this sector is highly diverse due to significant socio-economic
differences between consumer groups. Here, we present a comprehensive dataset
and analysis of the structure of the 2012 private passenger vehicle fleet-years
in six major economies across the World (UK, USA, China, India, Japan and
Brazil) in terms of price, engine size and emissions distributions. We argue
that choices and aggregate elasticities of substitution can be predicted using
this data, enabling to evaluate the effectiveness of potential fiscal and
technological change policies on fleet-year emissions reductions. We provide
tools to do so based on the distributive structure of prices and emissions in
segments of a diverse market, both for conventional as well as unconventional
engine technologies. We find that markets differ significantly between nations,
and that correlations between engine sizes, emissions and prices exist strongly
in some markets and not strongly in others. We furthermore find that markets
for unconventional engine technologies have patchy coverages of varying levels.
These findings are interpreted in terms of policy strategy.
| [
{
"version": "v1",
"created": "Mon, 10 Nov 2014 11:23:00 GMT"
},
{
"version": "v2",
"created": "Mon, 16 Mar 2015 17:03:02 GMT"
},
{
"version": "v3",
"created": "Tue, 5 May 2015 15:46:03 GMT"
}
] | 2015-10-23T00:00:00 | [
[
"Mercure",
"J. -F.",
""
],
[
"Lam",
"A.",
""
]
] | TITLE: The effectiveness of policy on consumer choices for private road
passenger transport emissions reductions in six major economies
ABSTRACT: The effectiveness of fiscal policy to influence vehicle purchases for
emissions reductions in private passenger road transport depends on its ability
to incentivise consumers to make choices oriented towards lower emissions
vehicles. However, car purchase choices are known to be strongly socially
determined, and this sector is highly diverse due to significant socio-economic
differences between consumer groups. Here, we present a comprehensive dataset
and analysis of the structure of the 2012 private passenger vehicle fleet-years
in six major economies across the World (UK, USA, China, India, Japan and
Brazil) in terms of price, engine size and emissions distributions. We argue
that choices and aggregate elasticities of substitution can be predicted using
this data, enabling to evaluate the effectiveness of potential fiscal and
technological change policies on fleet-year emissions reductions. We provide
tools to do so based on the distributive structure of prices and emissions in
segments of a diverse market, both for conventional as well as unconventional
engine technologies. We find that markets differ significantly between nations,
and that correlations between engine sizes, emissions and prices exist strongly
in some markets and not strongly in others. We furthermore find that markets
for unconventional engine technologies have patchy coverages of varying levels.
These findings are interpreted in terms of policy strategy.
| new_dataset | 0.963437 |
1510.06582 | Bartosz Hawelka | Bartosz Hawelka, Izabela Sitko, Pavlos Kazakopoulos and Euro Beinat | Collective Prediction of Individual Mobility Traces with Exponential
Weights | 15 pages, 8 figures | null | null | null | physics.soc-ph cs.CY cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present and test a sequential learning algorithm for the short-term
prediction of human mobility. This novel approach pairs the Exponential Weights
forecaster with a very large ensemble of experts. The experts are individual
sequence prediction algorithms constructed from the mobility traces of 10
million roaming mobile phone users in a European country. Average prediction
accuracy is significantly higher than that of individual sequence prediction
algorithms, namely constant order Markov models derived from the user's own
data, that have been shown to achieve high accuracy in previous studies of
human mobility prediction. The algorithm uses only time stamped location data,
and accuracy depends on the completeness of the expert ensemble, which should
contain redundant records of typical mobility patterns. The proposed algorithm
is applicable to the prediction of any sufficiently large dataset of sequences.
| [
{
"version": "v1",
"created": "Thu, 22 Oct 2015 11:27:03 GMT"
}
] | 2015-10-23T00:00:00 | [
[
"Hawelka",
"Bartosz",
""
],
[
"Sitko",
"Izabela",
""
],
[
"Kazakopoulos",
"Pavlos",
""
],
[
"Beinat",
"Euro",
""
]
] | TITLE: Collective Prediction of Individual Mobility Traces with Exponential
Weights
ABSTRACT: We present and test a sequential learning algorithm for the short-term
prediction of human mobility. This novel approach pairs the Exponential Weights
forecaster with a very large ensemble of experts. The experts are individual
sequence prediction algorithms constructed from the mobility traces of 10
million roaming mobile phone users in a European country. Average prediction
accuracy is significantly higher than that of individual sequence prediction
algorithms, namely constant order Markov models derived from the user's own
data, that have been shown to achieve high accuracy in previous studies of
human mobility prediction. The algorithm uses only time stamped location data,
and accuracy depends on the completeness of the expert ensemble, which should
contain redundant records of typical mobility patterns. The proposed algorithm
is applicable to the prediction of any sufficiently large dataset of sequences.
| no_new_dataset | 0.946646 |
1409.4326 | Jure \v{Z}bontar | Jure \v{Z}bontar and Yann LeCun | Computing the Stereo Matching Cost with a Convolutional Neural Network | Conference on Computer Vision and Pattern Recognition (CVPR), June
2015 | null | 10.1109/CVPR.2015.7298767 | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for extracting depth information from a rectified image
pair. We train a convolutional neural network to predict how well two image
patches match and use it to compute the stereo matching cost. The cost is
refined by cross-based cost aggregation and semiglobal matching, followed by a
left-right consistency check to eliminate errors in the occluded regions. Our
stereo method achieves an error rate of 2.61 % on the KITTI stereo dataset and
is currently (August 2014) the top performing method on this dataset.
| [
{
"version": "v1",
"created": "Mon, 15 Sep 2014 16:54:42 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Oct 2015 15:08:48 GMT"
}
] | 2015-10-21T00:00:00 | [
[
"Žbontar",
"Jure",
""
],
[
"LeCun",
"Yann",
""
]
] | TITLE: Computing the Stereo Matching Cost with a Convolutional Neural Network
ABSTRACT: We present a method for extracting depth information from a rectified image
pair. We train a convolutional neural network to predict how well two image
patches match and use it to compute the stereo matching cost. The cost is
refined by cross-based cost aggregation and semiglobal matching, followed by a
left-right consistency check to eliminate errors in the occluded regions. Our
stereo method achieves an error rate of 2.61 % on the KITTI stereo dataset and
is currently (August 2014) the top performing method on this dataset.
| no_new_dataset | 0.953579 |
1510.05763 | Won-Yong Shin | Won-Yong Shin, Jaehee Cho, and Andr\'e M. Everett | Clarifying the Role of Distance in Friendships on Twitter: Discovery of
a Double Power-Law Relationship | 7 pages, 1 figure, To be presented at the 23rd ACM SIGSPATIAL
International Conference on Advances in Geographic Information Systems (ACM
SIGSPATIAL 2015), Seattle, WA USA, November 2015 | null | null | null | cs.SI physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study analyzes friendships in online social networks involving
geographic distance with a geo-referenced Twitter dataset, which provides the
exact distance between corresponding users. We start by introducing a strong
definition of "friend" on Twitter, requiring bidirectional communication. Next,
by utilizing geo-tagged mentions delivered by users to determine their
locations, we introduce a two-stage distance estimation algorithm. As our main
contribution, our study provides the following newly-discovered friendship
degree related to the issue of space: The number of friends according to
distance follows a double power-law (i.e., a double Pareto law) distribution,
indicating that the probability of befriending a particular Twitter user is
significantly reduced beyond a certain geographic distance between users,
termed the separation point. Our analysis provides much more fine-grained
social ties in space, compared to the conventional results showing a
homogeneous power-law with distance.
| [
{
"version": "v1",
"created": "Tue, 20 Oct 2015 05:52:02 GMT"
}
] | 2015-10-21T00:00:00 | [
[
"Shin",
"Won-Yong",
""
],
[
"Cho",
"Jaehee",
""
],
[
"Everett",
"André M.",
""
]
] | TITLE: Clarifying the Role of Distance in Friendships on Twitter: Discovery of
a Double Power-Law Relationship
ABSTRACT: This study analyzes friendships in online social networks involving
geographic distance with a geo-referenced Twitter dataset, which provides the
exact distance between corresponding users. We start by introducing a strong
definition of "friend" on Twitter, requiring bidirectional communication. Next,
by utilizing geo-tagged mentions delivered by users to determine their
locations, we introduce a two-stage distance estimation algorithm. As our main
contribution, our study provides the following newly-discovered friendship
degree related to the issue of space: The number of friends according to
distance follows a double power-law (i.e., a double Pareto law) distribution,
indicating that the probability of befriending a particular Twitter user is
significantly reduced beyond a certain geographic distance between users,
termed the separation point. Our analysis provides much more fine-grained
social ties in space, compared to the conventional results showing a
homogeneous power-law with distance.
| no_new_dataset | 0.953013 |
1510.05822 | Xavier Gibert | Xavier Gibert, Vishal M. Patel, Rama Chellappa | Sequential Score Adaptation with Extreme Value Theory for Robust Railway
Track Inspection | To be presented at the 3rd Workshop on Computer Vision for Road Scene
Understanding and Autonomous Driving (CVRSUAD 2015) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Periodic inspections are necessary to keep railroad tracks in state of good
repair and prevent train accidents. Automatic track inspection using machine
vision technology has become a very effective inspection tool. Because of its
non-contact nature, this technology can be deployed on virtually any railway
vehicle to continuously survey the tracks and send exception reports to track
maintenance personnel. However, as appearance and imaging conditions vary,
false alarm rates can dramatically change, making it difficult to select a good
operating point. In this paper, we use extreme value theory (EVT) within a
Bayesian framework to optimally adjust the sensitivity of anomaly detectors. We
show that by approximating the lower tail of the probability density function
(PDF) of the scores with an Exponential distribution (a special case of the
Generalized Pareto distribution), and using the Gamma conjugate prior learned
from the training data, it is possible to reduce the variability in false alarm
rate and improve the overall performance. This method has shown an increase in
the defect detection rate of rail fasteners in the presence of clutter (at PFA
0.1%) from 95.40% to 99.26% on the 85-mile Northeast Corridor (NEC) 2012-2013
concrete tie dataset.
| [
{
"version": "v1",
"created": "Tue, 20 Oct 2015 10:16:43 GMT"
}
] | 2015-10-21T00:00:00 | [
[
"Gibert",
"Xavier",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Chellappa",
"Rama",
""
]
] | TITLE: Sequential Score Adaptation with Extreme Value Theory for Robust Railway
Track Inspection
ABSTRACT: Periodic inspections are necessary to keep railroad tracks in state of good
repair and prevent train accidents. Automatic track inspection using machine
vision technology has become a very effective inspection tool. Because of its
non-contact nature, this technology can be deployed on virtually any railway
vehicle to continuously survey the tracks and send exception reports to track
maintenance personnel. However, as appearance and imaging conditions vary,
false alarm rates can dramatically change, making it difficult to select a good
operating point. In this paper, we use extreme value theory (EVT) within a
Bayesian framework to optimally adjust the sensitivity of anomaly detectors. We
show that by approximating the lower tail of the probability density function
(PDF) of the scores with an Exponential distribution (a special case of the
Generalized Pareto distribution), and using the Gamma conjugate prior learned
from the training data, it is possible to reduce the variability in false alarm
rate and improve the overall performance. This method has shown an increase in
the defect detection rate of rail fasteners in the presence of clutter (at PFA
0.1%) from 95.40% to 99.26% on the 85-mile Northeast Corridor (NEC) 2012-2013
concrete tie dataset.
| no_new_dataset | 0.947478 |
1510.05976 | Liping Liu | Li-Ping Liu and Thomas G. Dietterich and Nan Li and Zhi-Hua Zhou | Transductive Optimization of Top k Precision | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consider a binary classification problem in which the learner is given a
labeled training set, an unlabeled test set, and is restricted to choosing
exactly $k$ test points to output as positive predictions. Problems of this
kind---{\it transductive precision@$k$}---arise in information retrieval,
digital advertising, and reserve design for endangered species. Previous
methods separate the training of the model from its use in scoring the test
points. This paper introduces a new approach, Transductive Top K (TTK), that
seeks to minimize the hinge loss over all training instances under the
constraint that exactly $k$ test instances are predicted as positive. The paper
presents two optimization methods for this challenging problem. Experiments and
analysis confirm the importance of incorporating the knowledge of $k$ into the
learning process. Experimental evaluations of the TTK approach show that the
performance of TTK matches or exceeds existing state-of-the-art methods on 7
UCI datasets and 3 reserve design problem instances.
| [
{
"version": "v1",
"created": "Tue, 20 Oct 2015 17:27:12 GMT"
}
] | 2015-10-21T00:00:00 | [
[
"Liu",
"Li-Ping",
""
],
[
"Dietterich",
"Thomas G.",
""
],
[
"Li",
"Nan",
""
],
[
"Zhou",
"Zhi-Hua",
""
]
] | TITLE: Transductive Optimization of Top k Precision
ABSTRACT: Consider a binary classification problem in which the learner is given a
labeled training set, an unlabeled test set, and is restricted to choosing
exactly $k$ test points to output as positive predictions. Problems of this
kind---{\it transductive precision@$k$}---arise in information retrieval,
digital advertising, and reserve design for endangered species. Previous
methods separate the training of the model from its use in scoring the test
points. This paper introduces a new approach, Transductive Top K (TTK), that
seeks to minimize the hinge loss over all training instances under the
constraint that exactly $k$ test instances are predicted as positive. The paper
presents two optimization methods for this challenging problem. Experiments and
analysis confirm the importance of incorporating the knowledge of $k$ into the
learning process. Experimental evaluations of the TTK approach show that the
performance of TTK matches or exceeds existing state-of-the-art methods on 7
UCI datasets and 3 reserve design problem instances.
| no_new_dataset | 0.945801 |
1505.00487 | Subhashini Venugopalan | Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney,
Trevor Darrell, Kate Saenko | Sequence to Sequence -- Video to Text | ICCV 2015 camera-ready. Includes code, project page and LSMDC
challenge results | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world videos often have complex dynamics; and methods for generating
open-domain video descriptions should be sensitive to temporal structure and
allow both input (sequence of frames) and output (sequence of words) of
variable length. To approach this problem, we propose a novel end-to-end
sequence-to-sequence model to generate captions for videos. For this we exploit
recurrent neural networks, specifically LSTMs, which have demonstrated
state-of-the-art performance in image caption generation. Our LSTM model is
trained on video-sentence pairs and learns to associate a sequence of video
frames to a sequence of words in order to generate a description of the event
in the video clip. Our model naturally is able to learn the temporal structure
of the sequence of frames as well as the sequence model of the generated
sentences, i.e. a language model. We evaluate several variants of our model
that exploit different visual features on a standard set of YouTube videos and
two movie description datasets (M-VAD and MPII-MD).
| [
{
"version": "v1",
"created": "Sun, 3 May 2015 22:32:00 GMT"
},
{
"version": "v2",
"created": "Tue, 12 May 2015 16:08:57 GMT"
},
{
"version": "v3",
"created": "Mon, 19 Oct 2015 18:01:06 GMT"
}
] | 2015-10-20T00:00:00 | [
[
"Venugopalan",
"Subhashini",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Donahue",
"Jeff",
""
],
[
"Mooney",
"Raymond",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Sequence to Sequence -- Video to Text
ABSTRACT: Real-world videos often have complex dynamics; and methods for generating
open-domain video descriptions should be sensitive to temporal structure and
allow both input (sequence of frames) and output (sequence of words) of
variable length. To approach this problem, we propose a novel end-to-end
sequence-to-sequence model to generate captions for videos. For this we exploit
recurrent neural networks, specifically LSTMs, which have demonstrated
state-of-the-art performance in image caption generation. Our LSTM model is
trained on video-sentence pairs and learns to associate a sequence of video
frames to a sequence of words in order to generate a description of the event
in the video clip. Our model naturally is able to learn the temporal structure
of the sequence of frames as well as the sequence model of the generated
sentences, i.e. a language model. We evaluate several variants of our model
that exploit different visual features on a standard set of YouTube videos and
two movie description datasets (M-VAD and MPII-MD).
| no_new_dataset | 0.949295 |
1506.01060 | Nan Zhou | Nan Zhou, Yangyang Xu, Hong Cheng, Jun Fang, Witold Pedrycz | Global and Local Structure Preserving Sparse Subspace Learning: An
Iterative Approach to Unsupervised Feature Selection | 32 page, 6 figures and 60 references | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As we aim at alleviating the curse of high-dimensionality, subspace learning
is becoming more popular. Existing approaches use either information about
global or local structure of the data, and few studies simultaneously focus on
global and local structures as the both of them contain important information.
In this paper, we propose a global and local structure preserving sparse
subspace learning (GLoSS) model for unsupervised feature selection. The model
can simultaneously realize feature selection and subspace learning. In
addition, we develop a greedy algorithm to establish a generic combinatorial
model, and an iterative strategy based on an accelerated block coordinate
descent is used to solve the GLoSS problem. We also provide whole iterate
sequence convergence analysis of the proposed iterative algorithm. Extensive
experiments are conducted on real-world datasets to show the superiority of the
proposed approach over several state-of-the-art unsupervised feature selection
approaches.
| [
{
"version": "v1",
"created": "Tue, 2 Jun 2015 21:02:16 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Oct 2015 18:13:24 GMT"
}
] | 2015-10-20T00:00:00 | [
[
"Zhou",
"Nan",
""
],
[
"Xu",
"Yangyang",
""
],
[
"Cheng",
"Hong",
""
],
[
"Fang",
"Jun",
""
],
[
"Pedrycz",
"Witold",
""
]
] | TITLE: Global and Local Structure Preserving Sparse Subspace Learning: An
Iterative Approach to Unsupervised Feature Selection
ABSTRACT: As we aim at alleviating the curse of high-dimensionality, subspace learning
is becoming more popular. Existing approaches use either information about
global or local structure of the data, and few studies simultaneously focus on
global and local structures as the both of them contain important information.
In this paper, we propose a global and local structure preserving sparse
subspace learning (GLoSS) model for unsupervised feature selection. The model
can simultaneously realize feature selection and subspace learning. In
addition, we develop a greedy algorithm to establish a generic combinatorial
model, and an iterative strategy based on an accelerated block coordinate
descent is used to solve the GLoSS problem. We also provide whole iterate
sequence convergence analysis of the proposed iterative algorithm. Extensive
experiments are conducted on real-world datasets to show the superiority of the
proposed approach over several state-of-the-art unsupervised feature selection
approaches.
| no_new_dataset | 0.945801 |
1507.07629 | Garrick Orchard | Garrick Orchard and Ajinkya Jayawant and Gregory Cohen and Nitish
Thakor | Converting Static Image Datasets to Spiking Neuromorphic Datasets Using
Saccades | 10 pages, 6 figures in Frontiers in Neuromorphic Engineering, special
topic on Benchmarks and Challenges for Neuromorphic Engineering, 2015 (under
review) | null | null | null | cs.DB q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Creating datasets for Neuromorphic Vision is a challenging task. A lack of
available recordings from Neuromorphic Vision sensors means that data must
typically be recorded specifically for dataset creation rather than collecting
and labelling existing data. The task is further complicated by a desire to
simultaneously provide traditional frame-based recordings to allow for direct
comparison with traditional Computer Vision algorithms. Here we propose a
method for converting existing Computer Vision static image datasets into
Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving
the sensor rather than the scene or image is a more biologically realistic
approach to sensing and eliminates timing artifacts introduced by monitor
updates when simulating motion on a computer monitor. We present conversion of
two popular image datasets (MNIST and Caltech101) which have played important
roles in the development of Computer Vision, and we provide performance metrics
on these datasets using spike-based recognition algorithms. This work
contributes datasets for future use in the field, as well as results from
spike-based algorithms against which future works can compare. Furthermore, by
converting datasets already popular in Computer Vision, we enable more direct
comparison with frame-based approaches.
| [
{
"version": "v1",
"created": "Tue, 28 Jul 2015 03:23:25 GMT"
}
] | 2015-10-20T00:00:00 | [
[
"Orchard",
"Garrick",
""
],
[
"Jayawant",
"Ajinkya",
""
],
[
"Cohen",
"Gregory",
""
],
[
"Thakor",
"Nitish",
""
]
] | TITLE: Converting Static Image Datasets to Spiking Neuromorphic Datasets Using
Saccades
ABSTRACT: Creating datasets for Neuromorphic Vision is a challenging task. A lack of
available recordings from Neuromorphic Vision sensors means that data must
typically be recorded specifically for dataset creation rather than collecting
and labelling existing data. The task is further complicated by a desire to
simultaneously provide traditional frame-based recordings to allow for direct
comparison with traditional Computer Vision algorithms. Here we propose a
method for converting existing Computer Vision static image datasets into
Neuromorphic Vision datasets using an actuated pan-tilt camera platform. Moving
the sensor rather than the scene or image is a more biologically realistic
approach to sensing and eliminates timing artifacts introduced by monitor
updates when simulating motion on a computer monitor. We present conversion of
two popular image datasets (MNIST and Caltech101) which have played important
roles in the development of Computer Vision, and we provide performance metrics
on these datasets using spike-based recognition algorithms. This work
contributes datasets for future use in the field, as well as results from
spike-based algorithms against which future works can compare. Furthermore, by
converting datasets already popular in Computer Vision, we enable more direct
comparison with frame-based approaches.
| no_new_dataset | 0.9462 |
1509.05360 | Jiaji Huang | Jiaji Huang, Qiang Qiu, Robert Calderbank, Guillermo Sapiro | Geometry-aware Deep Transform | to appear in ICCV2015, updated with minor revision | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many recent efforts have been devoted to designing sophisticated deep
learning structures, obtaining revolutionary results on benchmark datasets. The
success of these deep learning methods mostly relies on an enormous volume of
labeled training samples to learn a huge number of parameters in a network;
therefore, understanding the generalization ability of a learned deep network
cannot be overlooked, especially when restricted to a small training set, which
is the case for many applications. In this paper, we propose a novel deep
learning objective formulation that unifies both the classification and metric
learning criteria. We then introduce a geometry-aware deep transform to enable
a non-linear discriminative and robust feature transform, which shows
competitive performance on small training sets for both synthetic and
real-world data. We further support the proposed framework with a formal
$(K,\epsilon)$-robustness analysis.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2015 18:30:10 GMT"
},
{
"version": "v2",
"created": "Sun, 18 Oct 2015 19:28:25 GMT"
}
] | 2015-10-20T00:00:00 | [
[
"Huang",
"Jiaji",
""
],
[
"Qiu",
"Qiang",
""
],
[
"Calderbank",
"Robert",
""
],
[
"Sapiro",
"Guillermo",
""
]
] | TITLE: Geometry-aware Deep Transform
ABSTRACT: Many recent efforts have been devoted to designing sophisticated deep
learning structures, obtaining revolutionary results on benchmark datasets. The
success of these deep learning methods mostly relies on an enormous volume of
labeled training samples to learn a huge number of parameters in a network;
therefore, understanding the generalization ability of a learned deep network
cannot be overlooked, especially when restricted to a small training set, which
is the case for many applications. In this paper, we propose a novel deep
learning objective formulation that unifies both the classification and metric
learning criteria. We then introduce a geometry-aware deep transform to enable
a non-linear discriminative and robust feature transform, which shows
competitive performance on small training sets for both synthetic and
real-world data. We further support the proposed framework with a formal
$(K,\epsilon)$-robustness analysis.
| no_new_dataset | 0.948775 |
1510.05145 | Shoaib Ehsan | Shoaib Ehsan, Adrian F. Clark and Klaus D. McDonald-Maier | Rapid Online Analysis of Local Feature Detectors and Their
Complementarity | null | Sensors 2013, 13, 10876-10907 | 10.3390/s130810876 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A vision system that can assess its own performance and take appropriate
actions online to maximize its effectiveness would be a step towards achieving
the long-cherished goal of imitating humans. This paper proposes a method for
performing an online performance analysis of local feature detectors, the
primary stage of many practical vision systems. It advocates the spatial
distribution of local image features as a good performance indicator and
presents a metric that can be calculated rapidly, concurs with human visual
assessments and is complementary to existing offline measures such as
repeatability. The metric is shown to provide a measure of complementarity for
combinations of detectors, correctly reflecting the underlying principles of
individual detectors. Qualitative results on well-established datasets for
several state-of-the-art detectors are presented based on the proposed measure.
Using a hypothesis testing approach and a newly-acquired, larger image
database, statistically-significant performance differences are identified.
Different detector pairs and triplets are examined quantitatively and the
results provide a useful guideline for combining detectors in applications that
require a reasonable spatial distribution of image features. A principled
framework for combining feature detectors in these applications is also
presented. Timing results reveal the potential of the metric for online
applications.
| [
{
"version": "v1",
"created": "Sat, 17 Oct 2015 16:14:11 GMT"
}
] | 2015-10-20T00:00:00 | [
[
"Ehsan",
"Shoaib",
""
],
[
"Clark",
"Adrian F.",
""
],
[
"McDonald-Maier",
"Klaus D.",
""
]
] | TITLE: Rapid Online Analysis of Local Feature Detectors and Their
Complementarity
ABSTRACT: A vision system that can assess its own performance and take appropriate
actions online to maximize its effectiveness would be a step towards achieving
the long-cherished goal of imitating humans. This paper proposes a method for
performing an online performance analysis of local feature detectors, the
primary stage of many practical vision systems. It advocates the spatial
distribution of local image features as a good performance indicator and
presents a metric that can be calculated rapidly, concurs with human visual
assessments and is complementary to existing offline measures such as
repeatability. The metric is shown to provide a measure of complementarity for
combinations of detectors, correctly reflecting the underlying principles of
individual detectors. Qualitative results on well-established datasets for
several state-of-the-art detectors are presented based on the proposed measure.
Using a hypothesis testing approach and a newly-acquired, larger image
database, statistically-significant performance differences are identified.
Different detector pairs and triplets are examined quantitatively and the
results provide a useful guideline for combining detectors in applications that
require a reasonable spatial distribution of image features. A principled
framework for combining feature detectors in these applications is also
presented. Timing results reveal the potential of the metric for online
applications.
| no_new_dataset | 0.942718 |
1510.05214 | Or Zuk | Tom Hope, Avishai Wagner and Or Zuk | Clustering Noisy Signals with Structured Sparsity Using Time-Frequency
Representation | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a simple and efficient time-series clustering framework
particularly suited for low Signal-to-Noise Ratio (SNR), by simultaneous
smoothing and dimensionality reduction aimed at preserving clustering
information. We extend the sparse K-means algorithm by incorporating structured
sparsity, and use it to exploit the multi-scale property of wavelets and group
structure in multivariate signals. Finally, we extract features invariant to
translation and scaling with the scattering transform, which corresponds to a
convolutional network with filters given by a wavelet operator, and use the
network's structure in sparse clustering. By promoting sparsity, this transform
can yield a low-dimensional representation of signals that gives improved
clustering results on several real datasets.
| [
{
"version": "v1",
"created": "Sun, 18 Oct 2015 09:41:50 GMT"
}
] | 2015-10-20T00:00:00 | [
[
"Hope",
"Tom",
""
],
[
"Wagner",
"Avishai",
""
],
[
"Zuk",
"Or",
""
]
] | TITLE: Clustering Noisy Signals with Structured Sparsity Using Time-Frequency
Representation
ABSTRACT: We propose a simple and efficient time-series clustering framework
particularly suited for low Signal-to-Noise Ratio (SNR), by simultaneous
smoothing and dimensionality reduction aimed at preserving clustering
information. We extend the sparse K-means algorithm by incorporating structured
sparsity, and use it to exploit the multi-scale property of wavelets and group
structure in multivariate signals. Finally, we extract features invariant to
translation and scaling with the scattering transform, which corresponds to a
convolutional network with filters given by a wavelet operator, and use the
network's structure in sparse clustering. By promoting sparsity, this transform
can yield a low-dimensional representation of signals that gives improved
clustering results on several real datasets.
| no_new_dataset | 0.949716 |
1510.05263 | Yung-Yin Lo | Yung-Yin Lo, Wanjiun Liao, Cheng-Shang Chang | Temporal Matrix Factorization for Tracking Concept Drift in Individual
User Preferences | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The matrix factorization (MF) technique has been widely adopted for solving
the rating prediction problem in recommender systems. The MF technique utilizes
the latent factor model to obtain static user preferences (user latent vectors)
and item characteristics (item latent vectors) based on historical rating data.
However, in the real world, user preferences are not static but full of
dynamics. Though there are several previous works that addressed this time
varying issue of user preferences, it seems (to the best of our knowledge) that
none of them is specifically designed for tracking concept drift in individual
user preferences. Motivated by this, we develop a Temporal Matrix Factorization
approach (TMF) for tracking concept drift in each individual user latent
vector. There are two key innovative steps in our approach: (i) we develop a
modified stochastic gradient descent method to learn an individual user latent
vector at each time step, and (ii) by the Lasso regression we learn a linear
model for the transition of the individual user latent vectors. We test our
method on a synthetic dataset and several real datasets. In comparison with the
original MF, our experimental results show that our temporal method is able to
achieve lower root mean square errors (RMSE) for both the synthetic and real
datasets. One interesting finding is that the performance gain in RMSE is
mostly from those users who indeed have concept drift in their user latent
vectors at the time of prediction. In particular, for the synthetic dataset and
the Ciao dataset, there are quite a few users with that property and the
performance gains for these two datasets are roughly 20% and 5%, respectively.
| [
{
"version": "v1",
"created": "Sun, 18 Oct 2015 15:33:41 GMT"
}
] | 2015-10-20T00:00:00 | [
[
"Lo",
"Yung-Yin",
""
],
[
"Liao",
"Wanjiun",
""
],
[
"Chang",
"Cheng-Shang",
""
]
] | TITLE: Temporal Matrix Factorization for Tracking Concept Drift in Individual
User Preferences
ABSTRACT: The matrix factorization (MF) technique has been widely adopted for solving
the rating prediction problem in recommender systems. The MF technique utilizes
the latent factor model to obtain static user preferences (user latent vectors)
and item characteristics (item latent vectors) based on historical rating data.
However, in the real world, user preferences are not static but full of
dynamics. Though there are several previous works that addressed this time
varying issue of user preferences, it seems (to the best of our knowledge) that
none of them is specifically designed for tracking concept drift in individual
user preferences. Motivated by this, we develop a Temporal Matrix Factorization
approach (TMF) for tracking concept drift in each individual user latent
vector. There are two key innovative steps in our approach: (i) we develop a
modified stochastic gradient descent method to learn an individual user latent
vector at each time step, and (ii) by the Lasso regression we learn a linear
model for the transition of the individual user latent vectors. We test our
method on a synthetic dataset and several real datasets. In comparison with the
original MF, our experimental results show that our temporal method is able to
achieve lower root mean square errors (RMSE) for both the synthetic and real
datasets. One interesting finding is that the performance gain in RMSE is
mostly from those users who indeed have concept drift in their user latent
vectors at the time of prediction. In particular, for the synthetic dataset and
the Ciao dataset, there are quite a few users with that property and the
performance gains for these two datasets are roughly 20% and 5%, respectively.
| no_new_dataset | 0.948298 |
1510.05477 | Mehmet Basbug | Mehmet Emin Basbug, Koray Ozcan and Senem Velipasalar | Accelerometer based Activity Classification with Variational Inference
on Sticky HDP-SLDS | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As part of daily monitoring of human activities, wearable sensors and devices
are becoming increasingly popular sources of data. With the advent of
smartphones equipped with accelerometer, gyroscope and camera, it is now
possible to develop activity classification platforms everyone can use
conveniently. In this paper, we propose a fast inference method for an
unsupervised non-parametric time series model namely variational inference for
sticky HDP-SLDS (Hierarchical Dirichlet Process Switching Linear Dynamical
System). We show that the proposed algorithm can differentiate various indoor
activities such as sitting, walking, turning, going up/down the stairs and
taking the elevator using only the accelerometer of an Android smartphone
Samsung Galaxy S4. We used the front camera of the smartphone to annotate
activity types precisely. We compared the proposed method with Hidden Markov
Models with Gaussian emission probabilities on a dataset of 10 subjects. We
showed the efficacy of the stickiness property. We further compared the
variational inference to the Gibbs sampler on the same model and show that
variational inference is faster by one order of magnitude.
| [
{
"version": "v1",
"created": "Mon, 19 Oct 2015 13:58:37 GMT"
}
] | 2015-10-20T00:00:00 | [
[
"Basbug",
"Mehmet Emin",
""
],
[
"Ozcan",
"Koray",
""
],
[
"Velipasalar",
"Senem",
""
]
] | TITLE: Accelerometer based Activity Classification with Variational Inference
on Sticky HDP-SLDS
ABSTRACT: As part of daily monitoring of human activities, wearable sensors and devices
are becoming increasingly popular sources of data. With the advent of
smartphones equipped with accelerometer, gyroscope and camera, it is now
possible to develop activity classification platforms everyone can use
conveniently. In this paper, we propose a fast inference method for an
unsupervised non-parametric time series model namely variational inference for
sticky HDP-SLDS (Hierarchical Dirichlet Process Switching Linear Dynamical
System). We show that the proposed algorithm can differentiate various indoor
activities such as sitting, walking, turning, going up/down the stairs and
taking the elevator using only the accelerometer of an Android smartphone
Samsung Galaxy S4. We used the front camera of the smartphone to annotate
activity types precisely. We compared the proposed method with Hidden Markov
Models with Gaussian emission probabilities on a dataset of 10 subjects. We
showed the efficacy of the stickiness property. We further compared the
variational inference to the Gibbs sampler on the same model and show that
variational inference is faster by one order of magnitude.
| no_new_dataset | 0.947235 |
1510.05588 | David Weyburne | David Weyburne | Are Defect Profile Similarity Criteria Different Than Velocity Profile
Similarity Criteria for the Turbulent Boundary Layer? | 27 pages including 4 Appendices | null | null | null | physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of the defect profile instead of the experimentally observed velocity
profile for the search for similarity parameters has become firmly embedded in
the turbulent boundary layer literature. However, a search of the literature
reveals that there are no theoretical reasons for this defect profile
preference over the more traditional velocity profile. In the report herein, we
use the flow governing equation approach to develop similarity criteria for the
two profiles. Results show that the derived similarity criteria are identical.
Together with previous work that found that defect profile similarity must be
accompanied by velocity profile similarity, then one's expectation must be that
either profile can be used to search for similarity in experimental datasets.
The choice should therefore be dictated by which one works best for
experimental investigations, which in this case is the velocity profile.
| [
{
"version": "v1",
"created": "Fri, 16 Oct 2015 18:46:48 GMT"
}
] | 2015-10-20T00:00:00 | [
[
"Weyburne",
"David",
""
]
] | TITLE: Are Defect Profile Similarity Criteria Different Than Velocity Profile
Similarity Criteria for the Turbulent Boundary Layer?
ABSTRACT: The use of the defect profile instead of the experimentally observed velocity
profile for the search for similarity parameters has become firmly embedded in
the turbulent boundary layer literature. However, a search of the literature
reveals that there are no theoretical reasons for this defect profile
preference over the more traditional velocity profile. In the report herein, we
use the flow governing equation approach to develop similarity criteria for the
two profiles. Results show that the derived similarity criteria are identical.
Together with previous work that found that defect profile similarity must be
accompanied by velocity profile similarity, then one's expectation must be that
either profile can be used to search for similarity in experimental datasets.
The choice should therefore be dictated by which one works best for
experimental investigations, which in this case is the velocity profile.
| no_new_dataset | 0.952882 |
1510.04842 | David Varas | David Varas, M\'onica Alfaro and Ferran Marques | Multiresolution hierarchy co-clustering for semantic segmentation in
sequences with small variations | International Conference on Computer Vision (ICCV) 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a co-clustering technique that, given a collection of
images and their hierarchies, clusters nodes from these hierarchies to obtain a
coherent multiresolution representation of the image collection. We formalize
the co-clustering as a Quadratic Semi-Assignment Problem and solve it with a
linear programming relaxation approach that makes effective use of information
from hierarchies. Initially, we address the problem of generating an optimal,
coherent partition per image and, afterwards, we extend this method to a
multiresolution framework. Finally, we particularize this framework to an
iterative multiresolution video segmentation algorithm in sequences with small
variations. We evaluate the algorithm on the Video Occlusion/Object Boundary
Detection Dataset, showing that it produces state-of-the-art results in these
scenarios.
| [
{
"version": "v1",
"created": "Fri, 16 Oct 2015 11:25:33 GMT"
}
] | 2015-10-19T00:00:00 | [
[
"Varas",
"David",
""
],
[
"Alfaro",
"Mónica",
""
],
[
"Marques",
"Ferran",
""
]
] | TITLE: Multiresolution hierarchy co-clustering for semantic segmentation in
sequences with small variations
ABSTRACT: This paper presents a co-clustering technique that, given a collection of
images and their hierarchies, clusters nodes from these hierarchies to obtain a
coherent multiresolution representation of the image collection. We formalize
the co-clustering as a Quadratic Semi-Assignment Problem and solve it with a
linear programming relaxation approach that makes effective use of information
from hierarchies. Initially, we address the problem of generating an optimal,
coherent partition per image and, afterwards, we extend this method to a
multiresolution framework. Finally, we particularize this framework to an
iterative multiresolution video segmentation algorithm in sequences with small
variations. We evaluate the algorithm on the Video Occlusion/Object Boundary
Detection Dataset, showing that it produces state-of-the-art results in these
scenarios.
| no_new_dataset | 0.948394 |
1510.04868 | Alexander Thomasian | Alexander Thomasian and Jun Xu | Data Allocation in a Heterogeneous Disk Array - HDA with Multiple RAID
Levels for Database Applications | IEEE 2-column format | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the allocation of Virtual Arrays (VAs) in a Heterogeneous Disk
Array (HDA). Each VA holds groups of related objects and datasets such as
files and relational tables, which have similar performance and availability
characteristics. We evaluate single-pass data allocation methods for HDA using
a synthetic stream of allocation requests, where each VA is characterized by
its RAID level, disk loads and space requirements. The goal is to maximize the
number of allocated VAs and maintain high disk bandwidth and capacity
utilization, while balancing disk loads. Although only RAID1 (basic mirroring)
and RAID5 (rotated parity arrays) are considered in the experimental study, we
develop the analysis required to estimate disk loads for other RAID levels.
Since VA loads vary significantly over time, the VA allocation is carried out
at the peak load period, while ensuring that disk bandwidth is not exceeded at
other high load periods. Experimental results with a synthetic stream of
allocation requests show that allocation methods minimizing the maximum disk
bandwidth and capacity utilization or their variance across all disks yield the
maximum number of allocated VAs. HDA saves disk bandwidth, since a single RAID
level accommodating the most stringent availability requirements for a small
subset of objects would incur an unnecessarily high overhead for updating check
blocks or data replicas for all objects. The number of allocated VAs can be
increased by adopting the clustered RAID5 paradigm, which exploits the tradeoff
between redundancy and bandwidth utilization. Since rebuild can be carried out
at the level of individual VAs, prioritizing rebuild of VAs with higher access
rates can improve overall performance.
| [
{
"version": "v1",
"created": "Fri, 16 Oct 2015 12:58:15 GMT"
}
] | 2015-10-19T00:00:00 | [
[
"Thomasian",
"Alexander",
""
],
[
"Xu",
"Jun",
""
]
] | TITLE: Data Allocation in a Heterogeneous Disk Array - HDA with Multiple RAID
Levels for Database Applications
ABSTRACT: We consider the allocation of Virtual Arrays (VAs) in a Heterogeneous Disk
Array (HDA). Each VA holds groups of related objects and datasets such as
files and relational tables, which have similar performance and availability
characteristics. We evaluate single-pass data allocation methods for HDA using
a synthetic stream of allocation requests, where each VA is characterized by
its RAID level, disk loads and space requirements. The goal is to maximize the
number of allocated VAs and maintain high disk bandwidth and capacity
utilization, while balancing disk loads. Although only RAID1 (basic mirroring)
and RAID5 (rotated parity arrays) are considered in the experimental study, we
develop the analysis required to estimate disk loads for other RAID levels.
Since VA loads vary significantly over time, the VA allocation is carried out
at the peak load period, while ensuring that disk bandwidth is not exceeded at
other high load periods. Experimental results with a synthetic stream of
allocation requests show that allocation methods minimizing the maximum disk
bandwidth and capacity utilization or their variance across all disks yield the
maximum number of allocated VAs. HDA saves disk bandwidth, since a single RAID
level accommodating the most stringent availability requirements for a small
subset of objects would incur an unnecessarily high overhead for updating check
blocks or data replicas for all objects. The number of allocated VAs can be
increased by adopting the clustered RAID5 paradigm, which exploits the tradeoff
between redundancy and bandwidth utilization. Since rebuild can be carried out
at the level of individual VAs, prioritizing rebuild of VAs with higher access
rates can improve overall performance.
| no_new_dataset | 0.953449 |
1501.06265 | Martin Bergemann | Martin Bergemann and Christian Jakob and Todd P. Lane | Global detection and analysis of coastline associated rainfall using an
objective pattern recognition technique | null | Journal of Climate, 28, 18, 2015 | 10.1175/JCLI-D-15-0098.1 | null | physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coastally associated rainfall is a common feature especially in tropical and
subtropical regions. However, it has been difficult to quantify the
contribution of coastal rainfall features to the overall local rainfall. We
develop a novel technique to objectively identify precipitation associated with
land-sea interaction and apply it to satellite based rainfall estimates. The
Maritime Continent, the Bight of Panama, Madagascar and the Mediterranean are
found to be regions where land-sea interactions plays a crucial role in the
formation of precipitation. In these regions $\approx$ 40% to 60% of the total
rainfall can be related to coastline effects. Due to its importance for the
climate system, the Maritime Continent is a particular region of interest with
high overall amounts of rainfall and large fractions resulting from land-sea
interactions throughout the year. To demonstrate the utility of our
identification method we investigate the influence of several modes of
variability, such as the Madden-Julian-Oscillation and the El Ni\~no Southern
Oscillation, on coastal rainfall behavior. The results suggest that during
large scale suppressed convective conditions coastal effects tend modulate the
rainfall over the Maritime Continent leading to enhanced rainfall over land
regions compared to the surrounding oceans. We propose that the novel objective
dataset of coastally influenced precipitation can be used in a variety of ways,
such as to inform cumulus parametrization or as an additional tool for
evaluating the simulation of coastal precipitation within weather and climate
models.
| [
{
"version": "v1",
"created": "Mon, 26 Jan 2015 07:03:58 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Jan 2015 21:51:51 GMT"
},
{
"version": "v3",
"created": "Thu, 29 Jan 2015 09:41:23 GMT"
},
{
"version": "v4",
"created": "Mon, 1 Jun 2015 01:32:32 GMT"
},
{
"version": "v5",
"created": "Wed, 14 Oct 2015 21:47:29 GMT"
}
] | 2015-10-16T00:00:00 | [
[
"Bergemann",
"Martin",
""
],
[
"Jakob",
"Christian",
""
],
[
"Lane",
"Todd P.",
""
]
] | TITLE: Global detection and analysis of coastline associated rainfall using an
objective pattern recognition technique
ABSTRACT: Coastally associated rainfall is a common feature especially in tropical and
subtropical regions. However, it has been difficult to quantify the
contribution of coastal rainfall features to the overall local rainfall. We
develop a novel technique to objectively identify precipitation associated with
land-sea interaction and apply it to satellite based rainfall estimates. The
Maritime Continent, the Bight of Panama, Madagascar and the Mediterranean are
found to be regions where land-sea interactions play a crucial role in the
formation of precipitation. In these regions $\approx$ 40% to 60% of the total
rainfall can be related to coastline effects. Due to its importance for the
climate system, the Maritime Continent is a particular region of interest with
high overall amounts of rainfall and large fractions resulting from land-sea
interactions throughout the year. To demonstrate the utility of our
identification method we investigate the influence of several modes of
variability, such as the Madden-Julian-Oscillation and the El Ni\~no Southern
Oscillation, on coastal rainfall behavior. The results suggest that during
large scale suppressed convective conditions coastal effects tend to modulate the
rainfall over the Maritime Continent leading to enhanced rainfall over land
regions compared to the surrounding oceans. We propose that the novel objective
dataset of coastally influenced precipitation can be used in a variety of ways,
such as to inform cumulus parametrization or as an additional tool for
evaluating the simulation of coastal precipitation within weather and climate
models.
| no_new_dataset | 0.944842 |
1502.06648 | Marcus Rohrbach | Marcus Rohrbach and Anna Rohrbach and Michaela Regneri and Sikandar
Amin and Mykhaylo Andriluka and Manfred Pinkal and Bernt Schiele | Recognizing Fine-Grained and Composite Activities using Hand-Centric
Features and Script Data | in International Journal of Computer Vision (IJCV) 2015 | null | 10.1007/s11263-015-0851-8 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Activity recognition has shown impressive progress in recent years. However,
the challenges of detecting fine-grained activities and understanding how they
are combined into composite activities have been largely overlooked. In this
work we approach both tasks and present a dataset which provides detailed
annotations to address them. The first challenge is to detect fine-grained
activities, which are defined by low inter-class variability and are typically
characterized by fine-grained body motions. We explore how human pose and hands
can help to approach this challenge by comparing two pose-based and two
hand-centric features with state-of-the-art holistic features. To attack the
second challenge, recognizing composite activities, we leverage the fact that
these activities are compositional and that the essential components of the
activities can be obtained from textual descriptions or scripts. We show the
benefits of our hand-centric approach for fine-grained activity classification
and detection. For composite activity recognition we find that decomposition
into attributes allows sharing information across composites and is essential
to attack this hard task. Using script data we can recognize novel composites
without having training data for them.
| [
{
"version": "v1",
"created": "Mon, 23 Feb 2015 22:48:17 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Oct 2015 16:02:19 GMT"
}
] | 2015-10-16T00:00:00 | [
[
"Rohrbach",
"Marcus",
""
],
[
"Rohrbach",
"Anna",
""
],
[
"Regneri",
"Michaela",
""
],
[
"Amin",
"Sikandar",
""
],
[
"Andriluka",
"Mykhaylo",
""
],
[
"Pinkal",
"Manfred",
""
],
[
"Schiele",
"Bernt",
""
]
] | TITLE: Recognizing Fine-Grained and Composite Activities using Hand-Centric
Features and Script Data
ABSTRACT: Activity recognition has shown impressive progress in recent years. However,
the challenges of detecting fine-grained activities and understanding how they
are combined into composite activities have been largely overlooked. In this
work we approach both tasks and present a dataset which provides detailed
annotations to address them. The first challenge is to detect fine-grained
activities, which are defined by low inter-class variability and are typically
characterized by fine-grained body motions. We explore how human pose and hands
can help to approach this challenge by comparing two pose-based and two
hand-centric features with state-of-the-art holistic features. To attack the
second challenge, recognizing composite activities, we leverage the fact that
these activities are compositional and that the essential components of the
activities can be obtained from textual descriptions or scripts. We show the
benefits of our hand-centric approach for fine-grained activity classification
and detection. For composite activity recognition we find that decomposition
into attributes allows sharing information across composites and is essential
to attack this hard task. Using script data we can recognize novel composites
without having training data for them.
| new_dataset | 0.958693 |
1505.01809 | Jacob Devlin | Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong
He, Geoffrey Zweig, Margaret Mitchell | Language Models for Image Captioning: The Quirks and What Works | See http://research.microsoft.com/en-us/projects/image_captioning for
project information | null | null | null | cs.CL cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two recent approaches have achieved state-of-the-art results in image
captioning. The first uses a pipelined process where a set of candidate words
is generated by a convolutional neural network (CNN) trained on images, and
then a maximum entropy (ME) language model is used to arrange these words into
a coherent sentence. The second uses the penultimate activation layer of the
CNN as input to a recurrent neural network (RNN) that then generates the
caption sequence. In this paper, we compare the merits of these different
language modeling approaches for the first time by using the same
state-of-the-art CNN as input. We examine issues in the different approaches,
including linguistic irregularities, caption repetition, and data set overlap.
By combining key aspects of the ME and RNN methods, we achieve a new record
performance over previously published results on the benchmark COCO dataset.
However, the gains we see in BLEU do not translate to human judgments.
| [
{
"version": "v1",
"created": "Thu, 7 May 2015 18:36:14 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jul 2015 22:10:49 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Oct 2015 22:03:40 GMT"
}
] | 2015-10-16T00:00:00 | [
[
"Devlin",
"Jacob",
""
],
[
"Cheng",
"Hao",
""
],
[
"Fang",
"Hao",
""
],
[
"Gupta",
"Saurabh",
""
],
[
"Deng",
"Li",
""
],
[
"He",
"Xiaodong",
""
],
[
"Zweig",
"Geoffrey",
""
],
[
"Mitchell",
"Margaret",
""
]
] | TITLE: Language Models for Image Captioning: The Quirks and What Works
ABSTRACT: Two recent approaches have achieved state-of-the-art results in image
captioning. The first uses a pipelined process where a set of candidate words
is generated by a convolutional neural network (CNN) trained on images, and
then a maximum entropy (ME) language model is used to arrange these words into
a coherent sentence. The second uses the penultimate activation layer of the
CNN as input to a recurrent neural network (RNN) that then generates the
caption sequence. In this paper, we compare the merits of these different
language modeling approaches for the first time by using the same
state-of-the-art CNN as input. We examine issues in the different approaches,
including linguistic irregularities, caption repetition, and data set overlap.
By combining key aspects of the ME and RNN methods, we achieve a new record
performance over previously published results on the benchmark COCO dataset.
However, the gains we see in BLEU do not translate to human judgments.
| no_new_dataset | 0.953966 |
1509.00947 | Jalil Rasekhi | Jalil Rasekhi | Motion planning using shortest path | The paper has been withdrawn due to a crucial sign error in equation
3,4 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a new method for path planning to a point for robot
in environment with obstacles. The resulting algorithm is implemented as a
simple variation of Dijkstra's algorithm. By adding a constraint to the
shortest-path, the algorithm is able to exclude all the paths between two
points that violate the constraint. This algorithm provides the robot with the
possibility to move from the initial position to the final position (target)
when we have enough samples in the domain. In this case the robot follows a
smooth path that does not fall in to the obstacles. Our method is simpler than
the previous proposals in the literature and performs comparably to the best
methods, both on simulated and some real datasets.
| [
{
"version": "v1",
"created": "Thu, 3 Sep 2015 05:14:05 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Oct 2015 23:15:09 GMT"
}
] | 2015-10-16T00:00:00 | [
[
"Rasekhi",
"Jalil",
""
]
] | TITLE: Motion planning using shortest path
ABSTRACT: In this paper, we propose a new method for path planning to a point for robot
in environment with obstacles. The resulting algorithm is implemented as a
simple variation of Dijkstra's algorithm. By adding a constraint to the
shortest-path, the algorithm is able to exclude all the paths between two
points that violate the constraint. This algorithm provides the robot with the
possibility to move from the initial position to the final position (target)
when we have enough samples in the domain. In this case the robot follows a
smooth path that does not fall into the obstacles. Our method is simpler than
the previous proposals in the literature and performs comparably to the best
methods, both on simulated and some real datasets.
| no_new_dataset | 0.953837 |
1510.04396 | Manolis Tsakiris | Manolis C. Tsakiris and Rene Vidal | Filtrated Spectral Algebraic Subspace Clustering | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algebraic Subspace Clustering (ASC) is a simple and elegant method based on
polynomial fitting and differentiation for clustering noiseless data drawn from
an arbitrary union of subspaces. In practice, however, ASC is limited to
equi-dimensional subspaces because the estimation of the subspace dimension via
algebraic methods is sensitive to noise. This paper proposes a new ASC
algorithm that can handle noisy data drawn from subspaces of arbitrary
dimensions. The key ideas are (1) to construct, at each point, a decreasing
sequence of subspaces containing the subspace passing through that point; (2)
to use the distances from any other point to each subspace in the sequence to
construct a subspace clustering affinity, which is superior to alternative
affinities both in theory and in practice. Experiments on the Hopkins 155
dataset demonstrate the superiority of the proposed method with respect to
sparse and low rank subspace clustering methods.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2015 04:12:37 GMT"
}
] | 2015-10-16T00:00:00 | [
[
"Tsakiris",
"Manolis C.",
""
],
[
"Vidal",
"Rene",
""
]
] | TITLE: Filtrated Spectral Algebraic Subspace Clustering
ABSTRACT: Algebraic Subspace Clustering (ASC) is a simple and elegant method based on
polynomial fitting and differentiation for clustering noiseless data drawn from
an arbitrary union of subspaces. In practice, however, ASC is limited to
equi-dimensional subspaces because the estimation of the subspace dimension via
algebraic methods is sensitive to noise. This paper proposes a new ASC
algorithm that can handle noisy data drawn from subspaces of arbitrary
dimensions. The key ideas are (1) to construct, at each point, a decreasing
sequence of subspaces containing the subspace passing through that point; (2)
to use the distances from any other point to each subspace in the sequence to
construct a subspace clustering affinity, which is superior to alternative
affinities both in theory and in practice. Experiments on the Hopkins 155
dataset demonstrate the superiority of the proposed method with respect to
sparse and low rank subspace clustering methods.
| no_new_dataset | 0.951051 |
1510.04437 | Satyabrata Maity | Satyabrata Maity, Debotosh Bhattacharjee and Amlan Chakrabarti | A Novel Approach for Human Action Recognition from Silhouette Images | Manuscript | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel human action recognition technique from video is
presented. Any human action is a combination of several micro-action
sequences performed by one or more body parts. The proposed
approach uses spatio-temporal body parts movement (STBPM) features extracted
from the foreground silhouettes of the human objects. The newly proposed STBPM
feature estimates the movements of different body parts for any given time
segment to classify actions. We also propose a rule-based logic named rule
action classifier (RAC), which uses a series of condition-action rules based on
prior knowledge and hence does not require training to classify any action.
Since we don't require training to classify actions, the proposed approach is
view independent. The experimental results on the publicly available Wizeman and
MuHVAi datasets are compared with those of related research work in terms of
accuracy in human action detection, and the proposed technique outperforms the
others.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2015 08:10:42 GMT"
}
] | 2015-10-16T00:00:00 | [
[
"Maity",
"Satyabrata",
""
],
[
"Bhattacharjee",
"Debotosh",
""
],
[
"Chakrabarti",
"Amlan",
""
]
] | TITLE: A Novel Approach for Human Action Recognition from Silhouette Images
ABSTRACT: In this paper, a novel human action recognition technique from video is
presented. Any human action is a combination of several micro-action
sequences performed by one or more body parts. The proposed
approach uses spatio-temporal body parts movement (STBPM) features extracted
from the foreground silhouettes of the human objects. The newly proposed STBPM
feature estimates the movements of different body parts for any given time
segment to classify actions. We also propose a rule-based logic named rule
action classifier (RAC), which uses a series of condition-action rules based on
prior knowledge and hence does not require training to classify any action.
Since we don't require training to classify actions, the proposed approach is
view independent. The experimental results on the publicly available Wizeman and
MuHVAi datasets are compared with those of related research work in terms of
accuracy in human action detection, and the proposed technique outperforms the
others.
| no_new_dataset | 0.946892 |
1510.04501 | Alan Freihof Tygel | Alan Tygel, S\"oren Auer, Jeremy Debattista, Fabrizio Orlandi, Maria
Luiza Machado Campos | Towards Cleaning-up Open Data Portals: A Metadata Reconciliation
Approach | 8 pages,10 Figures - Under Revision for ICSC2016 | null | null | null | cs.IR cs.DB | http://creativecommons.org/licenses/by/4.0/ | This paper presents an approach for metadata reconciliation, curation and
linking for Open Governmental Data Portals (ODPs). ODPs have lately been the
standard solution for governments willing to make their public data available
to society. Portal managers use several types of metadata to organize the
datasets, one of the most important being the tags. However, the tagging
process is subject to many problems, such as synonyms, ambiguity and
incoherence, among others. As our empirical analysis of ODPs shows, these issues
are currently prevalent in most ODPs and effectively hinder the reuse of Open
Data. In order to address these problems, we develop and implement an approach
for tag reconciliation in Open Data Portals, encompassing local actions related
to individual portals, and global actions for adding a semantic metadata layer
above individual portals. The local part aims to enhance the quality of tags in
a single portal, and the global part is meant to interlink ODPs by establishing
relations between tags.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2015 12:29:56 GMT"
}
] | 2015-10-16T00:00:00 | [
[
"Tygel",
"Alan",
""
],
[
"Auer",
"Sören",
""
],
[
"Debattista",
"Jeremy",
""
],
[
"Orlandi",
"Fabrizio",
""
],
[
"Campos",
"Maria Luiza Machado",
""
]
] | TITLE: Towards Cleaning-up Open Data Portals: A Metadata Reconciliation
Approach
ABSTRACT: This paper presents an approach for metadata reconciliation, curation and
linking for Open Governmental Data Portals (ODPs). ODPs have lately been the
standard solution for governments willing to make their public data available
to society. Portal managers use several types of metadata to organize the
datasets, one of the most important being the tags. However, the tagging
process is subject to many problems, such as synonyms, ambiguity and
incoherence, among others. As our empirical analysis of ODPs shows, these issues
are currently prevalent in most ODPs and effectively hinder the reuse of Open
Data. In order to address these problems, we develop and implement an approach
for tag reconciliation in Open Data Portals, encompassing local actions related
to individual portals, and global actions for adding a semantic metadata layer
above individual portals. The local part aims to enhance the quality of tags in
a single portal, and the global part is meant to interlink ODPs by establishing
relations between tags.
| no_new_dataset | 0.952618 |
1510.04565 | Zhenzhong Lan | Zhenzhong Lan, Alexander G. Hauptmann | Beyond Spatial Pyramid Matching: Space-time Extended Descriptor for
Action Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of generating video features for action recognition.
The spatial pyramid and its variants have been very popular feature models due
to their success in balancing spatial location encoding and spatial invariance.
Although it seems straightforward to extend spatial pyramid to the temporal
domain (spatio-temporal pyramid), the large spatio-temporal diversity of
unconstrained videos and the resulting significantly higher dimensional
representations make it less appealing. This paper introduces the space-time
extended descriptor, a simple but efficient alternative way to include the
spatio-temporal location into the video features. Instead of only coding motion
information and leaving the spatio-temporal location to be represented at the
pooling stage, location information is used as part of the encoding step. This
method is a much more effective and efficient location encoding method as
compared to the fixed grid model because it avoids the danger of over
committing to artificial boundaries and its dimension is relatively low.
Experimental results on several benchmark datasets show that, despite its
simplicity, this method achieves comparable or better results than
spatio-temporal pyramid.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2015 14:57:37 GMT"
}
] | 2015-10-16T00:00:00 | [
[
"Lan",
"Zhenzhong",
""
],
[
"Hauptmann",
"Alexander G.",
""
]
] | TITLE: Beyond Spatial Pyramid Matching: Space-time Extended Descriptor for
Action Recognition
ABSTRACT: We address the problem of generating video features for action recognition.
The spatial pyramid and its variants have been very popular feature models due
to their success in balancing spatial location encoding and spatial invariance.
Although it seems straightforward to extend spatial pyramid to the temporal
domain (spatio-temporal pyramid), the large spatio-temporal diversity of
unconstrained videos and the resulting significantly higher dimensional
representations make it less appealing. This paper introduces the space-time
extended descriptor, a simple but efficient alternative way to include the
spatio-temporal location into the video features. Instead of only coding motion
information and leaving the spatio-temporal location to be represented at the
pooling stage, location information is used as part of the encoding step. This
method is a much more effective and efficient location encoding method as
compared to the fixed-grid model because it avoids the danger of
overcommitting to artificial boundaries, and its dimension is relatively low.
Experimental results on several benchmark datasets show that, despite its
simplicity, this method achieves comparable or better results than
spatio-temporal pyramid.
| no_new_dataset | 0.95096 |
1510.04609 | Bharat Singh | Bharat Singh, Soham De, Yangmuzi Zhang, Thomas Goldstein, and Gavin
Taylor | Layer-Specific Adaptive Learning Rates for Deep Networks | ICMLA 2015, deep learning, adaptive learning rates for training,
layer specific learning rate | null | null | null | cs.CV cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing complexity of deep learning architectures is resulting in
training time requiring weeks or even months. This slow training is due in part
to vanishing gradients, in which the gradients used by back-propagation are
extremely large for weights connecting deep layers (layers near the output
layer), and extremely small for shallow layers (near the input layer); this
results in slow learning in the shallow layers. Additionally, it has also been
shown that in highly non-convex problems, such as deep neural networks, there
is a proliferation of high-error low curvature saddle points, which slows down
learning dramatically. In this paper, we attempt to overcome the two above
problems by proposing an optimization method for training deep neural networks
which uses learning rates which are both specific to each layer in the network
and adaptive to the curvature of the function, increasing the learning rate at
low curvature points. This enables us to speed up learning in the shallow
layers of the network and quickly escape high-error low curvature saddle
points. We test our method on standard image classification datasets such as
MNIST, CIFAR10 and ImageNet, and demonstrate that our method increases accuracy
as well as reduces the required training time over standard algorithms.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2015 16:31:46 GMT"
}
] | 2015-10-16T00:00:00 | [
[
"Singh",
"Bharat",
""
],
[
"De",
"Soham",
""
],
[
"Zhang",
"Yangmuzi",
""
],
[
"Goldstein",
"Thomas",
""
],
[
"Taylor",
"Gavin",
""
]
] | TITLE: Layer-Specific Adaptive Learning Rates for Deep Networks
ABSTRACT: The increasing complexity of deep learning architectures is resulting in
training time requiring weeks or even months. This slow training is due in part
to vanishing gradients, in which the gradients used by back-propagation are
extremely large for weights connecting deep layers (layers near the output
layer), and extremely small for shallow layers (near the input layer); this
results in slow learning in the shallow layers. Additionally, it has also been
shown that in highly non-convex problems, such as deep neural networks, there
is a proliferation of high-error low curvature saddle points, which slows down
learning dramatically. In this paper, we attempt to overcome the two above
problems by proposing an optimization method for training deep neural networks
which uses learning rates which are both specific to each layer in the network
and adaptive to the curvature of the function, increasing the learning rate at
low curvature points. This enables us to speed up learning in the shallow
layers of the network and quickly escape high-error low curvature saddle
points. We test our method on standard image classification datasets such as
MNIST, CIFAR10 and ImageNet, and demonstrate that our method increases accuracy
as well as reduces the required training time over standard algorithms.
| no_new_dataset | 0.952706 |
1211.1364 | Jean-Gabriel Young | Jean-Gabriel Young, Antoine Allard, Laurent H\'ebert-Dufresne and
Louis J. Dub\'e | A shadowing problem in the detection of overlapping communities: lifting
the resolution limit through a cascading procedure | 14 pages, 12 figures + supporting information (5 pages, 6 tables, 3
figures) | PLoS ONE 10(10): e0140133 (2015) | 10.1371/journal.pone.0140133 | null | physics.soc-ph cond-mat.stat-mech cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community detection is the process of assigning nodes and links to
significant communities (e.g. clusters, function modules) and its development
has led to a better understanding of complex networks. When applied to sizable
networks, we argue that most detection algorithms correctly identify prominent
communities, but fail to do so across multiple scales. As a result, a
significant fraction of the network is left uncharted. We show that this
problem stems from larger or denser communities overshadowing smaller or
sparser ones, and that this effect accounts for most of the undetected
communities and unassigned links. We propose a generic cascading approach to
community detection that circumvents the problem. Using real and artificial
network datasets with three widely used community detection algorithms, we show
how a simple cascading procedure allows for the detection of the missing
communities. This work highlights a new detection limit of community structure,
and we hope that our approach can inspire better community detection
algorithms.
| [
{
"version": "v1",
"created": "Tue, 6 Nov 2012 20:09:09 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Dec 2012 19:00:58 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Dec 2012 16:54:53 GMT"
},
{
"version": "v4",
"created": "Wed, 30 Sep 2015 20:32:57 GMT"
}
] | 2015-10-15T00:00:00 | [
[
"Young",
"Jean-Gabriel",
""
],
[
"Allard",
"Antoine",
""
],
[
"Hébert-Dufresne",
"Laurent",
""
],
[
"Dubé",
"Louis J.",
""
]
] | TITLE: A shadowing problem in the detection of overlapping communities: lifting
the resolution limit through a cascading procedure
ABSTRACT: Community detection is the process of assigning nodes and links to
significant communities (e.g. clusters, function modules) and its development
has led to a better understanding of complex networks. When applied to sizable
networks, we argue that most detection algorithms correctly identify prominent
communities, but fail to do so across multiple scales. As a result, a
significant fraction of the network is left uncharted. We show that this
problem stems from larger or denser communities overshadowing smaller or
sparser ones, and that this effect accounts for most of the undetected
communities and unassigned links. We propose a generic cascading approach to
community detection that circumvents the problem. Using real and artificial
network datasets with three widely used community detection algorithms, we show
how a simple cascading procedure allows for the detection of the missing
communities. This work highlights a new detection limit of community structure,
and we hope that our approach can inspire better community detection
algorithms.
| no_new_dataset | 0.944587 |
1509.03001 | Byeongkeun Kang | Byeongkeun Kang, Subarna Tripathi, Truong Q. Nguyen | Real-time Sign Language Fingerspelling Recognition using Convolutional
Neural Networks from Depth map | 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sign language recognition is important for natural and convenient
communication between the deaf community and the hearing majority. We take the
highly efficient initial step of an automatic fingerspelling recognition system using
convolutional neural networks (CNNs) from depth maps. In this work, we consider
relatively larger number of classes compared with the previous literature. We
train CNNs for the classification of 31 alphabets and numbers using a subset of
collected depth data from multiple subjects. While using different learning
configurations, such as hyper-parameter selection with and without validation,
we achieve 99.99% accuracy for observed signers and 83.58% to 85.49% accuracy
for new signers. The result shows that accuracy improves as we include more
data from different subjects during training. The processing time is 3 ms for
the prediction of a single image. To the best of our knowledge, the system
achieves the highest accuracy and speed. The trained model and dataset are
available in our repository.
| [
{
"version": "v1",
"created": "Thu, 10 Sep 2015 03:58:56 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Sep 2015 17:07:56 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Oct 2015 19:15:41 GMT"
}
] | 2015-10-15T00:00:00 | [
[
"Kang",
"Byeongkeun",
""
],
[
"Tripathi",
"Subarna",
""
],
[
"Nguyen",
"Truong Q.",
""
]
] | TITLE: Real-time Sign Language Fingerspelling Recognition using Convolutional
Neural Networks from Depth map
ABSTRACT: Sign language recognition is important for natural and convenient
communication between the deaf community and the hearing majority. We take the
highly efficient initial step of an automatic fingerspelling recognition system using
convolutional neural networks (CNNs) from depth maps. In this work, we consider
relatively larger number of classes compared with the previous literature. We
train CNNs for the classification of 31 alphabets and numbers using a subset of
collected depth data from multiple subjects. While using different learning
configurations, such as hyper-parameter selection with and without validation,
we achieve 99.99% accuracy for observed signers and 83.58% to 85.49% accuracy
for new signers. The result shows that accuracy improves as we include more
data from different subjects during training. The processing time is 3 ms for
the prediction of a single image. To the best of our knowledge, the system
achieves the highest accuracy and speed. The trained model and dataset are
available in our repository.
| no_new_dataset | 0.945298 |
1510.03913 | Ubiratam de Paula Junior | Ubiratam de Paula and Daniel de Oliveira and Yuri Frota and Valmir C.
Barbosa and L\'ucia Drummond | Detecting and Handling Flash-Crowd Events on Cloud Environments | Submitted to the ACM Transactions on the Web (TWEB) | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud computing is a highly scalable computing paradigm where resources are
delivered to users on demand via the Internet. There are several areas that can
benefit from cloud computing, and one in particular is gaining much attention: the
flash-crowd handling. Flash-crowd events happen when servers are unable to
handle the volume of requests for a specific content (or a set of contents)
that actually reach it, thus causing some requests to be denied. For the
handling of flash-crowd events in Web applications, clouds can offer elastic
computing and storage capacity during these events in order to process all
requests. However, it is important that flash-crowd events are quickly detected
and the amount of resources to be instantiated during flash crowds is correctly
estimated. In this paper, a new mechanism for detection of flash crowds based
on concepts of entropy and total correlation is proposed. Moreover, the
Flash-Crowd Handling Problem (FCHP) is precisely defined and formulated as an
integer programming problem. A new algorithm for solving it, named FCHP-ILS, is
also proposed. With FCHP-ILS the Web provider is able to replicate contents in
the available resources and define the types and amount of resources to
instantiate in the cloud during a flash-crowd event. Finally we present a case
study, based on a synthetic dataset representing flash-crowd events in small
scenarios, aiming at comparing the proposed approach with the de facto
standard Amazon's Auto Scaling mechanism.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2015 22:10:06 GMT"
}
] | 2015-10-15T00:00:00 | [
[
"de Paula",
"Ubiratam",
""
],
[
"de Oliveira",
"Daniel",
""
],
[
"Frota",
"Yuri",
""
],
[
"Barbosa",
"Valmir C.",
""
],
[
"Drummond",
"Lúcia",
""
]
] | TITLE: Detecting and Handling Flash-Crowd Events on Cloud Environments
ABSTRACT: Cloud computing is a highly scalable computing paradigm where resources are
delivered to users on demand via Internet. There are several areas that can
benefit from cloud computing and one in special is gaining much attention: the
flash-crowd handling. Flash-crowd events happen when servers are unable to
handle the volume of requests for a specific content (or a set of contents)
that actually reach it, thus causing some requests to be denied. For the
handling of flash-crowd events in Web applications, clouds can offer elastic
computing and storage capacity during these events in order to process all
requests. However, it is important that flash-crowd events are quickly detected
and the amount of resources to be instantiated during flash crowds is correctly
estimated. In this paper, a new mechanism for detection of flash crowds based
on concepts of entropy and total correlation is proposed. Moreover, the
Flash-Crowd Handling Problem (FCHP) is precisely defined and formulated as an
integer programming problem. A new algorithm for solving it, named FCHP-ILS, is
also proposed. With FCHP-ILS the Web provider is able to replicate contents in
the available resources and define the types and amount of resources to
instantiate in the cloud during a flash-crowd event. Finally we present a case
study, based on a synthetic dataset representing flash-crowd events in small
scenarios, aiming at comparing the proposed approach with the de facto
standard Amazon's Auto Scaling mechanism.
| new_dataset | 0.966914 |
1510.03924 | Steffen Moritz | Steffen Moritz, Alexis Sard\'a, Thomas Bartz-Beielstein, Martin
Zaefferer, J\"org Stork | Comparison of different Methods for Univariate Time Series Imputation in
R | null | null | null | null | stat.AP cs.OH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Missing values in datasets are a well-known problem and there are quite a lot
of R packages offering imputation functions. But while imputation in general is
well covered within R, it is hard to find functions for imputation of
univariate time series. The problem is that most standard imputation techniques
cannot be applied directly. Most algorithms rely on inter-attribute correlations,
while univariate time series imputation needs to employ time dependencies. This
paper provides an overview of univariate time series imputation in general and
an in-detail insight into the respective implementations within R packages.
Furthermore, we experimentally compare the R functions on different time series
using four different ratios of missing data. Our results show that either an
interpolation with a seasonal Kalman filter from the zoo package or a linear
interpolation on seasonal loess decomposed data from the forecast package were
the most effective methods for dealing with missing data in most of the
scenarios assessed in this paper.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2015 23:16:10 GMT"
}
] | 2015-10-15T00:00:00 | [
[
"Moritz",
"Steffen",
""
],
[
"Sardá",
"Alexis",
""
],
[
"Bartz-Beielstein",
"Thomas",
""
],
[
"Zaefferer",
"Martin",
""
],
[
"Stork",
"Jörg",
""
]
] | TITLE: Comparison of different Methods for Univariate Time Series Imputation in
R
ABSTRACT: Missing values in datasets are a well-known problem and there are quite a lot
of R packages offering imputation functions. But while imputation in general is
well covered within R, it is hard to find functions for imputation of
univariate time series. The problem is that most standard imputation techniques
cannot be applied directly. Most algorithms rely on inter-attribute correlations,
while univariate time series imputation needs to employ time dependencies. This
paper provides an overview of univariate time series imputation in general and
an in-detail insight into the respective implementations within R packages.
Furthermore, we experimentally compare the R functions on different time series
using four different ratios of missing data. Our results show that either an
interpolation with a seasonal Kalman filter from the zoo package or a linear
interpolation on seasonal loess decomposed data from the forecast package were
the most effective methods for dealing with missing data in most of the
scenarios assessed in this paper.
| no_new_dataset | 0.946646 |
1510.03979 | Limin Wang | Limin Wang, Zhe Wang, Sheng Guo, Yu Qiao | Better Exploiting OS-CNNs for Better Event Recognition in Images | 8 pages. This work is following our previous work:
http://arxiv.org/abs/1505.00296 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event recognition from still images is one of the most important problems for
image understanding. However, compared with object recognition and scene
recognition, event recognition has received much less research attention in
the computer vision community. This paper addresses the problem of cultural event
recognition in still images and focuses on applying deep learning methods on
this problem. In particular, we utilize the successful architecture of
Object-Scene Convolutional Neural Networks (OS-CNNs) to perform event
recognition. OS-CNNs are composed of object nets and scene nets, which transfer
the learned representations from the pre-trained models on large-scale object
and scene recognition datasets, respectively. We propose four types of
scenarios to explore OS-CNNs for event recognition by treating them as either
"end-to-end event predictors" or "generic feature extractors". Our experimental
results demonstrate that the global and local representations of OS-CNNs are
complementary to each other. Finally, based on our investigation of OS-CNNs, we
come up with a solution for the cultural event recognition track at the ICCV
ChaLearn Looking at People (LAP) challenge 2015. Our team secures the third
place at this challenge and our result is very close to the best performance.
| [
{
"version": "v1",
"created": "Wed, 14 Oct 2015 06:56:54 GMT"
}
] | 2015-10-15T00:00:00 | [
[
"Wang",
"Limin",
""
],
[
"Wang",
"Zhe",
""
],
[
"Guo",
"Sheng",
""
],
[
"Qiao",
"Yu",
""
]
] | TITLE: Better Exploiting OS-CNNs for Better Event Recognition in Images
ABSTRACT: Event recognition from still images is one of the most important problems for
image understanding. However, compared with object recognition and scene
recognition, event recognition has received much less research attention in
the computer vision community. This paper addresses the problem of cultural event
recognition in still images and focuses on applying deep learning methods on
this problem. In particular, we utilize the successful architecture of
Object-Scene Convolutional Neural Networks (OS-CNNs) to perform event
recognition. OS-CNNs are composed of object nets and scene nets, which transfer
the learned representations from the pre-trained models on large-scale object
and scene recognition datasets, respectively. We propose four types of
scenarios to explore OS-CNNs for event recognition by treating them as either
"end-to-end event predictors" or "generic feature extractors". Our experimental
results demonstrate that the global and local representations of OS-CNNs are
complementary to each other. Finally, based on our investigation of OS-CNNs, we
come up with a solution for the cultural event recognition track at the ICCV
ChaLearn Looking at People (LAP) challenge 2015. Our team secures the third
place at this challenge and our result is very close to the best performance.
| no_new_dataset | 0.946794 |
1510.04074 | Marian George | Marian George, Dejan Mircic, G\'abor S\"or\"os, Christian
Floerkemeier, Friedemann Mattern | Fine-Grained Product Class Recognition for Assisted Shopping | Accepted at ICCV Workshop on Assistive Computer Vision and Robotics
(ICCV-ACVR) 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Assistive solutions for a better shopping experience can improve the quality
of life of people, in particular also of visually impaired shoppers. We present
a system that visually recognizes the fine-grained product classes of items on
a shopping list, in shelf images taken with a smartphone in a grocery store.
Our system consists of three components: (a) We automatically recognize useful
text on product packaging, e.g., product name and brand, and build a mapping of
words to product classes based on the large-scale GroceryProducts dataset. When
the user populates the shopping list, we automatically infer the product class
of each entered word. (b) We perform fine-grained product class recognition
when the user is facing a shelf. We discover discriminative patches on product
packaging to differentiate between visually similar product classes and to
increase the robustness against continuous changes in product design. (c) We
continuously improve the recognition accuracy through active learning. Our
experiments show the robustness of the proposed method against cross-domain
challenges, and the scalability to an increasing number of products with
minimal re-training.
| [
{
"version": "v1",
"created": "Wed, 14 Oct 2015 13:07:05 GMT"
}
] | 2015-10-15T00:00:00 | [
[
"George",
"Marian",
""
],
[
"Mircic",
"Dejan",
""
],
[
"Sörös",
"Gábor",
""
],
[
"Floerkemeier",
"Christian",
""
],
[
"Mattern",
"Friedemann",
""
]
] | TITLE: Fine-Grained Product Class Recognition for Assisted Shopping
ABSTRACT: Assistive solutions for a better shopping experience can improve the quality
of life of people, in particular also of visually impaired shoppers. We present
a system that visually recognizes the fine-grained product classes of items on
a shopping list, in shelf images taken with a smartphone in a grocery store.
Our system consists of three components: (a) We automatically recognize useful
text on product packaging, e.g., product name and brand, and build a mapping of
words to product classes based on the large-scale GroceryProducts dataset. When
the user populates the shopping list, we automatically infer the product class
of each entered word. (b) We perform fine-grained product class recognition
when the user is facing a shelf. We discover discriminative patches on product
packaging to differentiate between visually similar product classes and to
increase the robustness against continuous changes in product design. (c) We
continuously improve the recognition accuracy through active learning. Our
experiments show the robustness of the proposed method against cross-domain
challenges, and the scalability to an increasing number of products with
minimal re-training.
| no_new_dataset | 0.947721 |
1510.04104 | Nicholas H. Kirk | Simon Kaltenbacher, Nicholas H. Kirk, Dongheui Lee | A Preliminary Study on the Learning Informativeness of Data Subsets | The 8th International Workshop on Human-Friendly Robotics (HFR 2015),
Munich, Germany | null | 10.13140/RG.2.1.2213.9361 | null | cs.CL cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating the internal state of a robotic system is complex: this is
performed from multiple heterogeneous sensor inputs and knowledge sources.
Discretization of such inputs is done to capture saliences, represented as
symbolic information, which often presents structure and recurrence. As these
sequences are used to reason over complex scenarios, a more compact
representation would aid exactness of technical cognitive reasoning
capabilities, which are today constrained by computational complexity issues
and fallback to representational heuristics or human intervention. Such
problems need to be addressed to ensure timely and meaningful human-robot
interaction. Our work is towards understanding the variability of learning
informativeness when training on subsets of a given input dataset. This is in
view of reducing the training size while retaining the majority of the symbolic
learning potential. We prove the concept on human-written texts, and conjecture
this work will reduce training data size of sequential instructions, while
preserving semantic relations, when gathering information from large remote
sources.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2015 15:21:00 GMT"
}
] | 2015-10-15T00:00:00 | [
[
"Kaltenbacher",
"Simon",
""
],
[
"Kirk",
"Nicholas H.",
""
],
[
"Lee",
"Dongheui",
""
]
] | TITLE: A Preliminary Study on the Learning Informativeness of Data Subsets
ABSTRACT: Estimating the internal state of a robotic system is complex: this is
performed from multiple heterogeneous sensor inputs and knowledge sources.
Discretization of such inputs is done to capture saliences, represented as
symbolic information, which often presents structure and recurrence. As these
sequences are used to reason over complex scenarios, a more compact
representation would aid exactness of technical cognitive reasoning
capabilities, which are today constrained by computational complexity issues
and fallback to representational heuristics or human intervention. Such
problems need to be addressed to ensure timely and meaningful human-robot
interaction. Our work is towards understanding the variability of learning
informativeness when training on subsets of a given input dataset. This is in
view of reducing the training size while retaining the majority of the symbolic
learning potential. We prove the concept on human-written texts, and conjecture
this work will reduce training data size of sequential instructions, while
preserving semantic relations, when gathering information from large remote
sources.
| no_new_dataset | 0.940243 |
1508.07631 | Giulio Del Zanna | G. Del Zanna, K.P. Dere, P.R. Young, E.Landi, H.E. Mason | CHIANTI - An atomic database for Emission Lines. Version 8 | Accepted for publication in Astronomy & Astrophysics | null | 10.1051/0004-6361/201526827 | null | astro-ph.SR physics.atom-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present version 8 of the CHIANTI database. This version includes a large
amount of new data and ions, which represent a significant improvement in the
soft X-ray, EUV and UV spectral regions, which several space missions currently
cover. New data for neutrals and low charge states are also added. The data are
assessed, but to improve the modelling of low-temperature plasma the effective
collision strengths for most of the new datasets are not spline-fitted as
previously, but are retained as calculated. This required a change of the
format of the CHIANTI electron excitation files. The format of the energy files
has also been changed. Excitation rates between all the levels are retained for
most of the new datasets, so the data can in principle be used to model
high-density plasma. In addition, the method for computing the differential
emission measure used in the CHIANTI software has been changed.
| [
{
"version": "v1",
"created": "Sun, 30 Aug 2015 20:27:35 GMT"
}
] | 2015-10-14T00:00:00 | [
[
"Del Zanna",
"G.",
""
],
[
"Dere",
"K. P.",
""
],
[
"Young",
"P. R.",
""
],
[
"Landi",
"E.",
""
],
[
"Mason",
"H. E.",
""
]
] | TITLE: CHIANTI - An atomic database for Emission Lines. Version 8
ABSTRACT: We present version 8 of the CHIANTI database. This version includes a large
amount of new data and ions, which represent a significant improvement in the
soft X-ray, EUV and UV spectral regions, which several space missions currently
cover. New data for neutrals and low charge states are also added. The data are
assessed, but to improve the modelling of low-temperature plasma the effective
collision strengths for most of the new datasets are not spline-fitted as
previously, but are retained as calculated. This required a change of the
format of the CHIANTI electron excitation files. The format of the energy files
has also been changed. Excitation rates between all the levels are retained for
most of the new datasets, so the data can in principle be used to model
high-density plasma. In addition, the method for computing the differential
emission measure used in the CHIANTI software has been changed.
| no_new_dataset | 0.814201 |
1510.03715 | Iva Bojic | Iva Bojic, Emanuele Massaro, Alexander Belyi, Stanislav Sobolevsky,
Carlo Ratti | Choosing the right home location definition method for the given dataset | null | null | null | null | cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ever since the first mobile phones equipped with GPS came to the market, knowing
the exact user location has become a holy grail of almost every service that
lives in the digital world. Starting with the idea of location based services,
nowadays it is not only important to know where users are in real time, but
also to be able to predict where they will be in the future. Moreover, it is not
enough to know the user location in the form of latitude-longitude coordinates provided
by GPS devices, but also to give a place its meaning (i.e., semantically label
it), in particular detecting the most probable home location for the given
user. The aim of this paper is to provide novel insights on differences among
the ways in which different types of human digital trails represent the actual
mobility patterns and therefore the differences between the approaches
interpreting those trails for inferring said patterns. Namely, with the
emergence of different digital sources that provide information about user
mobility, it is of vital importance to fully understand that not all of them
capture exactly the same picture. With that being said, in this paper we start
from an example showing how human mobility patterns described by means of
radius of gyration are different for the Flickr social network and a dataset of bank
card transactions. Rather than capturing human movements closer to their homes,
Flickr more often reveals people's travel modes. Consequently, home location
inferring methods used in both cases cannot be the same. We consider several
methods for home location definition known from the literature and demonstrate
that although for bank card transactions they provide highly consistent
results, home location definition methods applied to the Flickr dataset
turn out to be far more sensitive to the method selected, stressing the paramount
importance of adjusting the method to the specific dataset being used.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2015 14:48:04 GMT"
}
] | 2015-10-14T00:00:00 | [
[
"Bojic",
"Iva",
""
],
[
"Massaro",
"Emanuele",
""
],
[
"Belyi",
"Alexander",
""
],
[
"Sobolevsky",
"Stanislav",
""
],
[
"Ratti",
"Carlo",
""
]
] | TITLE: Choosing the right home location definition method for the given dataset
ABSTRACT: Ever since the first mobile phones equipped with GPS came to the market, knowing
the exact user location has become a holy grail of almost every service that
lives in the digital world. Starting with the idea of location based services,
nowadays it is not only important to know where users are in real time, but
also to be able to predict where they will be in the future. Moreover, it is not
enough to know the user location in the form of latitude-longitude coordinates provided
by GPS devices, but also to give a place its meaning (i.e., semantically label
it), in particular detecting the most probable home location for the given
user. The aim of this paper is to provide novel insights on differences among
the ways in which different types of human digital trails represent the actual
mobility patterns and therefore the differences between the approaches
interpreting those trails for inferring said patterns. Namely, with the
emergence of different digital sources that provide information about user
mobility, it is of vital importance to fully understand that not all of them
capture exactly the same picture. With that being said, in this paper we start
from an example showing how human mobility patterns described by means of
radius of gyration are different for the Flickr social network and a dataset of bank
card transactions. Rather than capturing human movements closer to their homes,
Flickr more often reveals people's travel modes. Consequently, home location
inferring methods used in both cases cannot be the same. We consider several
methods for home location definition known from the literature and demonstrate
that although for bank card transactions they provide highly consistent
results, home location definition methods applied to the Flickr dataset
turn out to be far more sensitive to the method selected, stressing the paramount
importance of adjusting the method to the specific dataset being used.
| no_new_dataset | 0.905657 |
1510.03743 | Scott Workman | Scott Workman, Richard Souvenir, Nathan Jacobs | Wide-Area Image Geolocalization with Aerial Reference Imagery | International Conference on Computer Vision (ICCV) 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose to use deep convolutional neural networks to address the problem
of cross-view image geolocalization, in which the geolocation of a ground-level
query image is estimated by matching to georeferenced aerial images. We use
state-of-the-art feature representations for ground-level images and introduce
a cross-view training approach for learning a joint semantic feature
representation for aerial images. We also propose a network architecture that
fuses features extracted from aerial images at multiple spatial scales. To
support training these networks, we introduce a massive database that contains
pairs of aerial and ground-level images from across the United States. Our
methods significantly outperform the state of the art on two benchmark
datasets. We also show, qualitatively, that the proposed feature
representations are discriminative at both local and continental spatial
scales.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2015 15:33:01 GMT"
}
] | 2015-10-14T00:00:00 | [
[
"Workman",
"Scott",
""
],
[
"Souvenir",
"Richard",
""
],
[
"Jacobs",
"Nathan",
""
]
] | TITLE: Wide-Area Image Geolocalization with Aerial Reference Imagery
ABSTRACT: We propose to use deep convolutional neural networks to address the problem
of cross-view image geolocalization, in which the geolocation of a ground-level
query image is estimated by matching to georeferenced aerial images. We use
state-of-the-art feature representations for ground-level images and introduce
a cross-view training approach for learning a joint semantic feature
representation for aerial images. We also propose a network architecture that
fuses features extracted from aerial images at multiple spatial scales. To
support training these networks, we introduce a massive database that contains
pairs of aerial and ground-level images from across the United States. Our
methods significantly outperform the state of the art on two benchmark
datasets. We also show, qualitatively, that the proposed feature
representations are discriminative at both local and continental spatial
scales.
| new_dataset | 0.928797 |
1412.7122 | Xingchao Peng | Xingchao Peng, Baochen Sun, Karim Ali, and Kate Saenko | Learning Deep Object Detectors from 3D Models | null | null | null | null | cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowdsourced 3D CAD models are becoming easily accessible online, and can
potentially generate an infinite number of training images for almost any
object category.We show that augmenting the training data of contemporary Deep
Convolutional Neural Net (DCNN) models with such synthetic data can be
effective, especially when real training data is limited or not well matched to
the target domain. Most freely available CAD models capture 3D shape but are
often missing other low level cues, such as realistic object texture, pose, or
background. In a detailed analysis, we use synthetic CAD-rendered images to
probe the ability of DCNN to learn without these cues, with surprising
findings. In particular, we show that when the DCNN is fine-tuned on the target
detection task, it exhibits a large degree of invariance to missing low-level
cues, but, when pretrained on generic ImageNet classification, it learns better
when the low-level cues are simulated. We show that our synthetic DCNN training
approach significantly outperforms previous methods on the PASCAL VOC2007
dataset when learning in the few-shot scenario and improves performance in a
domain shift scenario on the Office benchmark.
| [
{
"version": "v1",
"created": "Mon, 22 Dec 2014 20:10:31 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Jan 2015 23:44:24 GMT"
},
{
"version": "v3",
"created": "Tue, 19 May 2015 17:56:07 GMT"
},
{
"version": "v4",
"created": "Mon, 12 Oct 2015 01:01:39 GMT"
}
] | 2015-10-13T00:00:00 | [
[
"Peng",
"Xingchao",
""
],
[
"Sun",
"Baochen",
""
],
[
"Ali",
"Karim",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Learning Deep Object Detectors from 3D Models
ABSTRACT: Crowdsourced 3D CAD models are becoming easily accessible online, and can
potentially generate an infinite number of training images for almost any
object category. We show that augmenting the training data of contemporary Deep
Convolutional Neural Net (DCNN) models with such synthetic data can be
effective, especially when real training data is limited or not well matched to
the target domain. Most freely available CAD models capture 3D shape but are
often missing other low level cues, such as realistic object texture, pose, or
background. In a detailed analysis, we use synthetic CAD-rendered images to
probe the ability of DCNN to learn without these cues, with surprising
findings. In particular, we show that when the DCNN is fine-tuned on the target
detection task, it exhibits a large degree of invariance to missing low-level
cues, but, when pretrained on generic ImageNet classification, it learns better
when the low-level cues are simulated. We show that our synthetic DCNN training
approach significantly outperforms previous methods on the PASCAL VOC2007
dataset when learning in the few-shot scenario and improves performance in a
domain shift scenario on the Office benchmark.
| no_new_dataset | 0.948394 |
1509.08985 | Chen-Yu Lee | Chen-Yu Lee, Patrick W. Gallagher, Zhuowen Tu | Generalizing Pooling Functions in Convolutional Neural Networks: Mixed,
Gated, and Tree | Patent disclosure, UCSD Docket No. SD2015-184, "Forest Convolutional
Neural Network", filed on March 4, 2015. UCSD Docket No. SD2016-053,
"Generalizing Pooling Functions in Convolutional Neural Network", filed on
Sept 23, 2015 | null | null | null | stat.ML cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We seek to improve deep neural networks by generalizing the pooling
operations that play a central role in current architectures. We pursue a
careful exploration of approaches to allow pooling to learn and to adapt to
complex and variable patterns. The two primary directions lie in (1) learning a
pooling function via (two strategies of) combining of max and average pooling,
and (2) learning a pooling function in the form of a tree-structured fusion of
pooling filters that are themselves learned. In our experiments every
generalized pooling operation we explore improves performance when used in
place of average or max pooling. We experimentally demonstrate that the
proposed pooling operations provide a boost in invariance properties relative
to conventional pooling and set the state of the art on several widely adopted
benchmark datasets; they are also easy to implement, and can be applied within
various deep neural network architectures. These benefits come with only a
light increase in computational overhead during training and a very modest
increase in the number of model parameters.
| [
{
"version": "v1",
"created": "Wed, 30 Sep 2015 01:06:36 GMT"
},
{
"version": "v2",
"created": "Sat, 10 Oct 2015 03:18:45 GMT"
}
] | 2015-10-13T00:00:00 | [
[
"Lee",
"Chen-Yu",
""
],
[
"Gallagher",
"Patrick W.",
""
],
[
"Tu",
"Zhuowen",
""
]
] | TITLE: Generalizing Pooling Functions in Convolutional Neural Networks: Mixed,
Gated, and Tree
ABSTRACT: We seek to improve deep neural networks by generalizing the pooling
operations that play a central role in current architectures. We pursue a
careful exploration of approaches to allow pooling to learn and to adapt to
complex and variable patterns. The two primary directions lie in (1) learning a
pooling function via (two strategies of) combining of max and average pooling,
and (2) learning a pooling function in the form of a tree-structured fusion of
pooling filters that are themselves learned. In our experiments every
generalized pooling operation we explore improves performance when used in
place of average or max pooling. We experimentally demonstrate that the
proposed pooling operations provide a boost in invariance properties relative
to conventional pooling and set the state of the art on several widely adopted
benchmark datasets; they are also easy to implement, and can be applied within
various deep neural network architectures. These benefits come with only a
light increase in computational overhead during training and a very modest
increase in the number of model parameters.
| no_new_dataset | 0.950319 |
1510.02927 | Srinivas S S Kruthiventi | Srinivas S. S. Kruthiventi, Kumar Ayush and R. Venkatesh Babu | DeepFix: A Fully Convolutional Neural Network for predicting Human Eye
Fixations | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding and predicting the human visual attentional mechanism is an
active area of research in the fields of neuroscience and computer vision. In
this work, we propose DeepFix, a first-of-its-kind fully convolutional neural
network for accurate saliency prediction. Unlike classical works which
characterize the saliency map using various hand-crafted features, our model
automatically learns features in a hierarchical fashion and predicts saliency
map in an end-to-end manner. DeepFix is designed to capture semantics at
multiple scales while taking global context into account using network layers
with very large receptive fields. Generally, fully convolutional nets are
spatially invariant which prevents them from modeling location dependent
patterns (e.g. centre-bias). Our network overcomes this limitation by
incorporating a novel Location Biased Convolutional layer. We evaluate our
model on two challenging eye fixation datasets, MIT300 and CAT2000, and show that
it outperforms other recent approaches by a significant margin.
| [
{
"version": "v1",
"created": "Sat, 10 Oct 2015 13:36:31 GMT"
}
] | 2015-10-13T00:00:00 | [
[
"Kruthiventi",
"Srinivas S. S.",
""
],
[
"Ayush",
"Kumar",
""
],
[
"Babu",
"R. Venkatesh",
""
]
] | TITLE: DeepFix: A Fully Convolutional Neural Network for predicting Human Eye
Fixations
ABSTRACT: Understanding and predicting the human visual attentional mechanism is an
active area of research in the fields of neuroscience and computer vision. In
this work, we propose DeepFix, a first-of-its-kind fully convolutional neural
network for accurate saliency prediction. Unlike classical works which
characterize the saliency map using various hand-crafted features, our model
automatically learns features in a hierarchical fashion and predicts saliency
map in an end-to-end manner. DeepFix is designed to capture semantics at
multiple scales while taking global context into account using network layers
with very large receptive fields. Generally, fully convolutional nets are
spatially invariant which prevents them from modeling location dependent
patterns (e.g. centre-bias). Our network overcomes this limitation by
incorporating a novel Location Biased Convolutional layer. We evaluate our
model on two challenging eye fixation datasets, MIT300 and CAT2000, and show that
it outperforms other recent approaches by a significant margin.
| no_new_dataset | 0.950365 |
1510.02942 | Baris Gecer | Baris Gecer, Ozge Yalcinkaya, Onur Tasar and Selim Aksoy | Evaluation of Joint Multi-Instance Multi-Label Learning For Breast
Cancer Diagnosis | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-instance multi-label (MIML) learning is a challenging problem in many
aspects. Such learning approaches might be useful for many medical diagnosis
applications including breast cancer detection and classification. In this
study, a subset of the digiPATH dataset (whole slide digital breast cancer
histopathology images) is used for training and evaluation of six
state-of-the-art MIML methods.
At the end, a performance comparison of these approaches is given by means of
effective evaluation metrics. It is shown that MIML-kNN achieves the best
performance, with 65.3% average precision, while most of the other methods attain
acceptable results as well.
| [
{
"version": "v1",
"created": "Sat, 10 Oct 2015 14:30:25 GMT"
}
] | 2015-10-13T00:00:00 | [
[
"Gecer",
"Baris",
""
],
[
"Yalcinkaya",
"Ozge",
""
],
[
"Tasar",
"Onur",
""
],
[
"Aksoy",
"Selim",
""
]
] | TITLE: Evaluation of Joint Multi-Instance Multi-Label Learning For Breast
Cancer Diagnosis
ABSTRACT: Multi-instance multi-label (MIML) learning is a challenging problem in many
aspects. Such learning approaches might be useful for many medical diagnosis
applications including breast cancer detection and classification. In this
study, a subset of the digiPATH dataset (whole slide digital breast cancer
histopathology images) is used for training and evaluation of six
state-of-the-art MIML methods.
At the end, a performance comparison of these approaches is given by means of
effective evaluation metrics. It is shown that MIML-kNN achieves the best
performance, with 65.3% average precision, while most of the other methods attain
acceptable results as well.
| no_new_dataset | 0.942295 |
1510.02949 | Marcus Rohrbach | Damian Mrowca, Marcus Rohrbach, Judy Hoffman, Ronghang Hu, Kate
Saenko, Trevor Darrell | Spatial Semantic Regularisation for Large Scale Object Detection | accepted at ICCV 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large scale object detection with thousands of classes introduces the problem
of many contradicting false positive detections, which have to be suppressed.
Class-independent non-maximum suppression has traditionally been used for this
step, but it does not scale well as the number of classes grows. Traditional
non-maximum suppression does not consider label- and instance-level
relationships nor does it allow an exploitation of the spatial layout of
detection proposals. We propose a new multi-class spatial semantic
regularisation method based on affinity propagation clustering, which
simultaneously optimises across all categories and all proposed locations in
the image, to improve both the localisation and categorisation of selected
detection proposals. Constraints are shared across the labels through the
semantic WordNet hierarchy. Our approach proves to be especially useful in
large scale settings with thousands of classes, where spatial and semantic
interactions are very frequent and only weakly supervised detectors can be
built due to a lack of bounding box annotations. Detection experiments are
conducted on the ImageNet and COCO dataset, and in settings with thousands of
detected categories. Our method provides a significant precision improvement by
reducing false positives, while simultaneously improving the recall.
| [
{
"version": "v1",
"created": "Sat, 10 Oct 2015 15:15:45 GMT"
}
] | 2015-10-13T00:00:00 | [
[
"Mrowca",
"Damian",
""
],
[
"Rohrbach",
"Marcus",
""
],
[
"Hoffman",
"Judy",
""
],
[
"Hu",
"Ronghang",
""
],
[
"Saenko",
"Kate",
""
],
[
"Darrell",
"Trevor",
""
]
] | TITLE: Spatial Semantic Regularisation for Large Scale Object Detection
ABSTRACT: Large scale object detection with thousands of classes introduces the problem
of many contradicting false positive detections, which have to be suppressed.
Class-independent non-maximum suppression has traditionally been used for this
step, but it does not scale well as the number of classes grows. Traditional
non-maximum suppression does not consider label- and instance-level
relationships nor does it allow an exploitation of the spatial layout of
detection proposals. We propose a new multi-class spatial semantic
regularisation method based on affinity propagation clustering, which
simultaneously optimises across all categories and all proposed locations in
the image, to improve both the localisation and categorisation of selected
detection proposals. Constraints are shared across the labels through the
semantic WordNet hierarchy. Our approach proves to be especially useful in
large scale settings with thousands of classes, where spatial and semantic
interactions are very frequent and only weakly supervised detectors can be
built due to a lack of bounding box annotations. Detection experiments are
conducted on the ImageNet and COCO dataset, and in settings with thousands of
detected categories. Our method provides a significant precision improvement by
reducing false positives, while simultaneously improving the recall.
| no_new_dataset | 0.950041 |
1510.03042 | Thuc Le Ph.D | Thuc Duy Le, Tao Hoang, Jiuyong Li, Lin Liu, Shu Hu | ParallelPC: an R package for efficient constraint based causal
exploration | null | null | null | null | cs.AI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discovering causal relationships from data is the ultimate goal of many
research areas. Constraint based causal exploration algorithms, such as PC,
FCI, RFCI, PC-simple, IDA and Joint-IDA have achieved significant progress and
have many applications. A common problem with these methods is the high
computational complexity, which hinders their applications in real world high
dimensional datasets, e.g. gene expression datasets. In this paper, we present
an R package, ParallelPC, that includes the parallelised versions of these
causal exploration algorithms. The parallelised algorithms help speed up the
procedure of experimenting big datasets and reduce the memory used when running
the algorithms. The package is not only suitable for super-computers or
clusters, but also convenient for researchers using personal computers with
multi-core CPUs. Our experimental results on real-world datasets show that using
the parallelised algorithms it is now practical to explore causal relationships
in high dimensional datasets with thousands of variables in a single multicore
computer. ParallelPC is available in CRAN repository at
https://cran.r-project.org/web/packages/ParallelPC/index.html.
| [
{
"version": "v1",
"created": "Sun, 11 Oct 2015 11:55:39 GMT"
}
] | 2015-10-13T00:00:00 | [
[
"Le",
"Thuc Duy",
""
],
[
"Hoang",
"Tao",
""
],
[
"Li",
"Jiuyong",
""
],
[
"Liu",
"Lin",
""
],
[
"Hu",
"Shu",
""
]
] | TITLE: ParallelPC: an R package for efficient constraint based causal
exploration
ABSTRACT: Discovering causal relationships from data is the ultimate goal of many
research areas. Constraint based causal exploration algorithms, such as PC,
FCI, RFCI, PC-simple, IDA and Joint-IDA have achieved significant progress and
have many applications. A common problem with these methods is the high
computational complexity, which hinders their applications in real world high
dimensional datasets, e.g. gene expression datasets. In this paper, we present
an R package, ParallelPC, that includes the parallelised versions of these
causal exploration algorithms. The parallelised algorithms help speed up the
procedure of experimenting big datasets and reduce the memory used when running
the algorithms. The package is not only suitable for super-computers or
clusters, but also convenient for researchers using personal computers with
multi-core CPUs. Our experimental results on real-world datasets show that using
the parallelised algorithms it is now practical to explore causal relationships
in high dimensional datasets with thousands of variables in a single multicore
computer. ParallelPC is available in CRAN repository at
https://cran.r-project.org/web/packages/ParallelPC/index.html.
| no_new_dataset | 0.940517 |
1510.03375 | Irfan Ahmed | Irshad Ahmed, Irfan Ahmed, Waseem Shahzad | Scaling up for high dimensional and high speed data streams: HSDStream | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel high speed clustering scheme for high dimensional
data streams. Data stream clustering has gained importance in different
applications; for example, network monitoring, intrusion detection, and
real-time sensing are a few of those. High dimensional stream data is inherently
more complex when used for clustering because the evolving nature of the stream
data and high dimensionality make it non-trivial. In order to tackle this
problem, projected subspace within the high dimensions and limited window sized
data per unit of time are used for clustering purposes. We propose a High Speed
and Dimensions data stream clustering scheme (HSDStream) which employs
exponential moving averages to reduce the size of the memory and speed up the
processing of projected subspace data stream. The proposed algorithm has been
tested against HDDStream for cluster purity, memory usage, and the cluster
sensitivity. Experimental results have been obtained for corrected KDD
intrusion detection dataset. These results show that HSDStream outperforms the
HDDStream in all performance metrics, especially the memory usage and the
processing speed.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2015 17:47:18 GMT"
}
] | 2015-10-13T00:00:00 | [
[
"Ahmed",
"Irshad",
""
],
[
"Ahmed",
"Irfan",
""
],
[
"Shahzad",
"Waseem",
""
]
] | TITLE: Scaling up for high dimensional and high speed data streams: HSDStream
ABSTRACT: This paper presents a novel high speed clustering scheme for high dimensional
data streams. Data stream clustering has gained importance in different
applications, for example, in network monitoring, intrusion detection, and
real-time sensing are few of those. High dimensional stream data is inherently
more complex when used for clustering because the evolving nature of the stream
data and high dimensionality make it non-trivial. In order to tackle this
problem, projected subspace within the high dimensions and limited window sized
data per unit of time are used for clustering purposes. We propose a High Speed
and Dimensions data stream clustering scheme (HSDStream) which employs
exponential moving averages to reduce the size of the memory and speed up the
processing of projected subspace data stream. The proposed algorithm has been
tested against HDDStream for cluster purity, memory usage, and cluster
sensitivity. Experimental results have been obtained for the corrected KDD
intrusion detection dataset. These results show that HSDStream outperforms the
HDDStream in all performance metrics, especially the memory usage and the
processing speed.
| no_new_dataset | 0.949902 |
1510.03409 | Olivier Cur\'e | Olivier Cur\'e, Hubert Naacke, Tendry Randriamalala, Bernd Amann | LiteMat: a scalable, cost-efficient inference encoding scheme for large
RDF graphs | 8 pages, 1 figure | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of linked data sources and the size of the linked open data graph
keep growing every day. As a consequence, semantic RDF services are more and
more confronted with various "big data" problems. Query processing in the
presence of inferences is one of them. For instance, to complete the answer set of
SPARQL queries, RDF database systems evaluate semantic RDFS relationships
(subPropertyOf, subClassOf) through time-consuming query rewriting algorithms
or space-consuming data materialization solutions. To reduce the memory
footprint and ease the exchange of large datasets, these systems generally
apply a dictionary approach for compressing triple data sizes by replacing
resource identifiers (IRIs), blank nodes and literals with integer values. In
this article, we present a structured resource identification scheme using a
clever encoding of concepts and property hierarchies for efficiently evaluating
the main common RDFS entailment rules while minimizing triple materialization
and query rewriting. We will show how this encoding can be computed by a
scalable parallel algorithm and directly be implemented over the Apache Spark
framework. The efficiency of our encoding scheme is emphasized by an evaluation
conducted over both synthetic and real world datasets.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2015 19:45:51 GMT"
}
] | 2015-10-13T00:00:00 | [
[
"Curé",
"Olivier",
""
],
[
"Naacke",
"Hubert",
""
],
[
"Randriamalala",
"Tendry",
""
],
[
"Amann",
"Bernd",
""
]
] | TITLE: LiteMat: a scalable, cost-efficient inference encoding scheme for large
RDF graphs
ABSTRACT: The number of linked data sources and the size of the linked open data graph
keep growing every day. As a consequence, semantic RDF services are more and
more confronted with various "big data" problems. Query processing in the
presence of inferences is one of them. For instance, to complete the answer set of
SPARQL queries, RDF database systems evaluate semantic RDFS relationships
(subPropertyOf, subClassOf) through time-consuming query rewriting algorithms
or space-consuming data materialization solutions. To reduce the memory
footprint and ease the exchange of large datasets, these systems generally
apply a dictionary approach for compressing triple data sizes by replacing
resource identifiers (IRIs), blank nodes and literals with integer values. In
this article, we present a structured resource identification scheme using a
clever encoding of concepts and property hierarchies for efficiently evaluating
the main common RDFS entailment rules while minimizing triple materialization
and query rewriting. We will show how this encoding can be computed by a
scalable parallel algorithm and directly be implemented over the Apache Spark
framework. The efficiency of our encoding scheme is emphasized by an evaluation
conducted over both synthetic and real world datasets.
| no_new_dataset | 0.944842 |
1506.07656 | Team Lear | Jerome Revaud (LEAR), Philippe Weinzaepfel (LEAR), Zaid Harchaoui
(LEAR), Cordelia Schmid (LEAR) | DeepMatching: Hierarchical Deformable Dense Matching | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel matching algorithm, called DeepMatching, to compute
dense correspondences between images. DeepMatching relies on a hierarchical,
multi-layer, correlational architecture designed for matching images and was
inspired by deep convolutional approaches. The proposed matching algorithm can
handle non-rigid deformations and repetitive textures and efficiently
determines dense correspondences in the presence of significant changes between
images. We evaluate the performance of DeepMatching, in comparison with
state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al
2005), the MPI-Sintel (Butler et al 2012) and the Kitti (Geiger et al 2013)
datasets. DeepMatching outperforms the state-of-the-art algorithms and shows
excellent results in particular for repetitive textures. We also propose a
method for estimating optical flow, called DeepFlow, by integrating
DeepMatching in the large displacement optical flow (LDOF) approach of Brox and
Malik (2011). Compared to existing matching algorithms, additional robustness
to large displacements and complex motion is obtained thanks to our matching
approach. DeepFlow obtains competitive performance on public benchmarks for
optical flow estimation.
| [
{
"version": "v1",
"created": "Thu, 25 Jun 2015 08:12:02 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Oct 2015 11:37:28 GMT"
}
] | 2015-10-12T00:00:00 | [
[
"Revaud",
"Jerome",
"",
"LEAR"
],
[
"Weinzaepfel",
"Philippe",
"",
"LEAR"
],
[
"Harchaoui",
"Zaid",
"",
"LEAR"
],
[
"Schmid",
"Cordelia",
"",
"LEAR"
]
] | TITLE: DeepMatching: Hierarchical Deformable Dense Matching
ABSTRACT: We introduce a novel matching algorithm, called DeepMatching, to compute
dense correspondences between images. DeepMatching relies on a hierarchical,
multi-layer, correlational architecture designed for matching images and was
inspired by deep convolutional approaches. The proposed matching algorithm can
handle non-rigid deformations and repetitive textures and efficiently
determines dense correspondences in the presence of significant changes between
images. We evaluate the performance of DeepMatching, in comparison with
state-of-the-art matching algorithms, on the Mikolajczyk (Mikolajczyk et al
2005), the MPI-Sintel (Butler et al 2012) and the Kitti (Geiger et al 2013)
datasets. DeepMatching outperforms the state-of-the-art algorithms and shows
excellent results in particular for repetitive textures. We also propose a
method for estimating optical flow, called DeepFlow, by integrating
DeepMatching in the large displacement optical flow (LDOF) approach of Brox and
Malik (2011). Compared to existing matching algorithms, additional robustness
to large displacements and complex motion is obtained thanks to our matching
approach. DeepFlow obtains competitive performance on public benchmarks for
optical flow estimation.
| no_new_dataset | 0.948489 |
1507.01484 | Arkadiusz Stopczynski Dr. | Arkadiusz Stopczynski, Piotr Sapiezynski, Alex 'Sandy' Pentland, Sune
Lehmann | Temporal Fidelity in Dynamic Social Networks | null | null | 10.1140/epjb/e2015-60549-7 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has recently become possible to record detailed social interactions in
large social systems with high resolution. As we study these datasets, human
social interactions display patterns that emerge at multiple time scales, from
minutes to months. On a fundamental level, understanding of the network
dynamics can be used to inform the process of measuring social networks. The
details of measurement are of particular importance when considering dynamic
processes where minute-to-minute details are important, because collection of
physical proximity interactions with high temporal resolution is difficult and
expensive. Here, we consider the dynamic network of proximity-interactions
between approximately 500 individuals participating in the Copenhagen Networks
Study. We show that in order to accurately model spreading processes in the
network, the dynamic processes that occur on the order of minutes are essential
and must be included in the analysis.
| [
{
"version": "v1",
"created": "Mon, 6 Jul 2015 14:48:05 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Aug 2015 10:54:05 GMT"
}
] | 2015-10-12T00:00:00 | [
[
"Stopczynski",
"Arkadiusz",
""
],
[
"Sapiezynski",
"Piotr",
""
],
[
"Pentland",
"Alex 'Sandy'",
""
],
[
"Lehmann",
"Sune",
""
]
] | TITLE: Temporal Fidelity in Dynamic Social Networks
ABSTRACT: It has recently become possible to record detailed social interactions in
large social systems with high resolution. As we study these datasets, human
social interactions display patterns that emerge at multiple time scales, from
minutes to months. On a fundamental level, understanding of the network
dynamics can be used to inform the process of measuring social networks. The
details of measurement are of particular importance when considering dynamic
processes where minute-to-minute details are important, because collection of
physical proximity interactions with high temporal resolution is difficult and
expensive. Here, we consider the dynamic network of proximity-interactions
between approximately 500 individuals participating in the Copenhagen Networks
Study. We show that in order to accurately model spreading processes in the
network, the dynamic processes that occur on the order of minutes are essential
and must be included in the analysis.
| no_new_dataset | 0.941223 |
1509.06658 | Nikita Prabhu | Nikita Prabhu and R. Venkatesh Babu | Attribute-Graph: A Graph based approach to Image Ranking | In IEEE International Conference on Computer Vision (ICCV) 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel image representation, termed Attribute-Graph, to rank
images by their semantic similarity to a given query image. An Attribute-Graph
is an undirected fully connected graph, incorporating both local and global
image characteristics. The graph nodes characterise objects as well as the
overall scene context using mid-level semantic attributes, while the edges
capture the object topology. We demonstrate the effectiveness of
Attribute-Graphs by applying them to the problem of image ranking. We benchmark
the performance of our algorithm on the 'rPascal' and 'rImageNet' datasets,
which we have created in order to evaluate the ranking performance on complex
queries containing multiple objects. Our experimental evaluation shows that
modelling images as Attribute-Graphs results in improved ranking performance
over existing techniques.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2015 16:01:02 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Oct 2015 04:38:36 GMT"
}
] | 2015-10-09T00:00:00 | [
[
"Prabhu",
"Nikita",
""
],
[
"Babu",
"R. Venkatesh",
""
]
] | TITLE: Attribute-Graph: A Graph based approach to Image Ranking
ABSTRACT: We propose a novel image representation, termed Attribute-Graph, to rank
images by their semantic similarity to a given query image. An Attribute-Graph
is an undirected fully connected graph, incorporating both local and global
image characteristics. The graph nodes characterise objects as well as the
overall scene context using mid-level semantic attributes, while the edges
capture the object topology. We demonstrate the effectiveness of
Attribute-Graphs by applying them to the problem of image ranking. We benchmark
the performance of our algorithm on the 'rPascal' and 'rImageNet' datasets,
which we have created in order to evaluate the ranking performance on complex
queries containing multiple objects. Our experimental evaluation shows that
modelling images as Attribute-Graphs results in improved ranking performance
over existing techniques.
| no_new_dataset | 0.922132 |
1510.02131 | Forrest Iandola | Forrest N. Iandola, Anting Shen, Peter Gao, Kurt Keutzer | DeepLogo: Hitting Logo Recognition with the Deep Neural Network Hammer | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been a flurry of industrial activity around logo
recognition, such as Ditto's service for marketers to track their brands in
user-generated images, and LogoGrab's mobile app platform for logo recognition.
However, relatively little academic or open-source logo recognition progress
has been made in the last four years. Meanwhile, deep convolutional neural
networks (DCNNs) have revolutionized a broad range of object recognition
applications. In this work, we apply DCNNs to logo recognition. We propose
several DCNN architectures, with which we surpass published state-of-the-art
accuracy on a popular logo recognition dataset.
| [
{
"version": "v1",
"created": "Wed, 7 Oct 2015 21:01:34 GMT"
}
] | 2015-10-09T00:00:00 | [
[
"Iandola",
"Forrest N.",
""
],
[
"Shen",
"Anting",
""
],
[
"Gao",
"Peter",
""
],
[
"Keutzer",
"Kurt",
""
]
] | TITLE: DeepLogo: Hitting Logo Recognition with the Deep Neural Network Hammer
ABSTRACT: Recently, there has been a flurry of industrial activity around logo
recognition, such as Ditto's service for marketers to track their brands in
user-generated images, and LogoGrab's mobile app platform for logo recognition.
However, relatively little academic or open-source logo recognition progress
has been made in the last four years. Meanwhile, deep convolutional neural
networks (DCNNs) have revolutionized a broad range of object recognition
applications. In this work, we apply DCNNs to logo recognition. We propose
several DCNN architectures, with which we surpass published state-of-the-art
accuracy on a popular logo recognition dataset.
| no_new_dataset | 0.942929 |
1510.02188 | Zhi-Hong Deng | Zhi-Hong Deng, Shulei Ma, He Liu | An Efficient Data Structure for Fast Mining High Utility Itemsets | 25 pages,9 figures | null | null | null | cs.DB cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a novel data structure called PUN-list, which
maintains both the utility information about an itemset and utility upper bound
for facilitating the processing of mining high utility itemsets. Based on
PUN-lists, we present a method, called MIP (Mining high utility Itemset using
PUN-Lists), for fast mining high utility itemsets. The efficiency of MIP is
achieved with three techniques. First, itemsets are represented by a highly
condensed data structure, PUN-list, which avoids costly, repeated utility
computation. Second, the utility of an itemset can be efficiently calculated by
scanning the PUN-list of the itemset and the PUN-lists of long itemsets can be
fast constructed by the PUN-lists of short itemsets. Third, by employing the
utility upper bound lying in the PUN-lists as the pruning strategy, MIP
directly discovers high utility itemsets from the search space, called
set-enumeration tree, without generating numerous candidates. Extensive
experiments on various synthetic and real datasets show that PUN-list is very
effective since MIP is at least an order of magnitude faster than recently
reported algorithms on average.
| [
{
"version": "v1",
"created": "Thu, 8 Oct 2015 03:04:12 GMT"
}
] | 2015-10-09T00:00:00 | [
[
"Deng",
"Zhi-Hong",
""
],
[
"Ma",
"Shulei",
""
],
[
"Liu",
"He",
""
]
] | TITLE: An Efficient Data Structure for Fast Mining High Utility Itemsets
ABSTRACT: In this paper, we propose a novel data structure called PUN-list, which
maintains both the utility information about an itemset and utility upper bound
for facilitating the processing of mining high utility itemsets. Based on
PUN-lists, we present a method, called MIP (Mining high utility Itemset using
PUN-Lists), for fast mining high utility itemsets. The efficiency of MIP is
achieved with three techniques. First, itemsets are represented by a highly
condensed data structure, PUN-list, which avoids costly, repeated utility
computation. Second, the utility of an itemset can be efficiently calculated by
scanning the PUN-list of the itemset and the PUN-lists of long itemsets can be
fast constructed by the PUN-lists of short itemsets. Third, by employing the
utility upper bound lying in the PUN-lists as the pruning strategy, MIP
directly discovers high utility itemsets from the search space, called
set-enumeration tree, without generating numerous candidates. Extensive
experiments on various synthetic and real datasets show that PUN-list is very
effective since MIP is at least an order of magnitude faster than recently
reported algorithms on average.
| no_new_dataset | 0.944587 |
1510.02192 | Eric Tzeng | Eric Tzeng, Judy Hoffman, Trevor Darrell, Kate Saenko | Simultaneous Deep Transfer Across Domains and Tasks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent reports suggest that a generic supervised deep CNN model trained on a
large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning
deep models in a new domain can require a significant amount of labeled data,
which for many applications is simply not available. We propose a new CNN
architecture to exploit unlabeled and sparsely labeled target domain data. Our
approach simultaneously optimizes for domain invariance to facilitate domain
transfer and uses a soft label distribution matching loss to transfer
information between tasks. Our proposed adaptation method offers empirical
performance which exceeds previously published results on two standard
benchmark visual domain adaptation tasks, evaluated across supervised and
semi-supervised adaptation settings.
| [
{
"version": "v1",
"created": "Thu, 8 Oct 2015 03:42:45 GMT"
}
] | 2015-10-09T00:00:00 | [
[
"Tzeng",
"Eric",
""
],
[
"Hoffman",
"Judy",
""
],
[
"Darrell",
"Trevor",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: Simultaneous Deep Transfer Across Domains and Tasks
ABSTRACT: Recent reports suggest that a generic supervised deep CNN model trained on a
large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning
deep models in a new domain can require a significant amount of labeled data,
which for many applications is simply not available. We propose a new CNN
architecture to exploit unlabeled and sparsely labeled target domain data. Our
approach simultaneously optimizes for domain invariance to facilitate domain
transfer and uses a soft label distribution matching loss to transfer
information between tasks. Our proposed adaptation method offers empirical
performance which exceeds previously published results on two standard
benchmark visual domain adaptation tasks, evaluated across supervised and
semi-supervised adaptation settings.
| no_new_dataset | 0.948585 |
1510.02342 | Stella Lee Miss | Stella Lee, Martin Walda, Delimpasi Vasiliki | Born In Bradford Mobile Application | 4 pages | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Born In Bradford mobile application is an Android mobile application and
a working prototype that enables interaction with a sample cohort of the Born
in Bradford study. It provides an interface and visualization for several
surveys participated in by mothers and their children. This data is stored in
the Born In Bradford database. A subset of this data is provided for mothers
and children. The mobile application provides a way to engage the mothers and
promote their consistency in participating in subsequent surveys. It has been
designed to allow selected mothers to participate in the visualization of their
babies' data. Samsung mobile phones have been provided with the application
loaded on the phone to limit and control its use and access to data. Mothers
log in to interact with the data. This includes the ability to compare
children's data through infographics and graphs and to compare their
children's data with the average child. This comparison is done at different
stages of the children's ages as provided in the dataset.
| [
{
"version": "v1",
"created": "Thu, 8 Oct 2015 14:42:42 GMT"
}
] | 2015-10-09T00:00:00 | [
[
"Lee",
"Stella",
""
],
[
"Walda",
"Martin",
""
],
[
"Vasiliki",
"Delimpasi",
""
]
] | TITLE: Born In Bradford Mobile Application
ABSTRACT: The Born In Bradford mobile application is an Android mobile application and
a working prototype that enables interaction with a sample cohort of the Born
in Bradford study. It provides an interface and visualization for several
surveys participated in by mothers and their children. This data is stored in
the Born In Bradford database. A subset of this data is provided for mothers
and children. The mobile application provides a way to engage the mothers and
promote their consistency in participating in subsequent surveys. It has been
designed to allow selected mothers to participate in the visualization of their
babies' data. Samsung mobile phones have been provided with the application
loaded on the phone to limit and control its use and access to data. Mothers
log in to interact with the data. This includes the ability to compare
children's data through infographics and graphs and to compare their
children's data with the average child. This comparison is done at different
stages of the children's ages as provided in the dataset.
| no_new_dataset | 0.908537 |
1510.02343 | Haruna Isah | Haruna Isah, Daniel Neagu, Paul Trundle | Bipartite Network Model for Inferring Hidden Ties in Crime Data | 8 pages | null | 10.1145/2808797.2808842 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Certain crimes are hardly committed by individuals but carefully organised by
a group of associates and affiliates loosely connected to each other with a
single or small group of individuals coordinating the overall actions. A common
starting point in understanding the structural organisation of criminal groups
is to identify the criminals and their associates. Situations arise in many
criminal datasets where there is no direct connection among the criminals. In
this paper, we investigate ties and community structure in crime data in order
to understand the operations of both traditional and cyber criminals, as well
as to predict the existence of organised criminal networks. Our contributions
are twofold: we propose a bipartite network model for inferring hidden ties
between actors who initiated an illegal interaction and objects affected by the
interaction, we then validate the method in two case studies on pharmaceutical
crime and underground forum data using standard network algorithms for
structural and community analysis. The vertex level metrics and community
analysis results obtained indicate the significance of our work in
understanding the operations and structure of organised criminal networks which
were not immediately obvious in the data. Identifying these groups and mapping
their relationship to one another is essential in making more effective
disruption strategies in the future.
| [
{
"version": "v1",
"created": "Thu, 8 Oct 2015 14:43:12 GMT"
}
] | 2015-10-09T00:00:00 | [
[
"Isah",
"Haruna",
""
],
[
"Neagu",
"Daniel",
""
],
[
"Trundle",
"Paul",
""
]
] | TITLE: Bipartite Network Model for Inferring Hidden Ties in Crime Data
ABSTRACT: Certain crimes are hardly committed by individuals but carefully organised by
a group of associates and affiliates loosely connected to each other with a
single or small group of individuals coordinating the overall actions. A common
starting point in understanding the structural organisation of criminal groups
is to identify the criminals and their associates. Situations arise in many
criminal datasets where there is no direct connection among the criminals. In
this paper, we investigate ties and community structure in crime data in order
to understand the operations of both traditional and cyber criminals, as well
as to predict the existence of organised criminal networks. Our contributions
are twofold: we propose a bipartite network model for inferring hidden ties
between actors who initiated an illegal interaction and objects affected by the
interaction, we then validate the method in two case studies on pharmaceutical
crime and underground forum data using standard network algorithms for
structural and community analysis. The vertex level metrics and community
analysis results obtained indicate the significance of our work in
understanding the operations and structure of organised criminal networks which
were not immediately obvious in the data. Identifying these groups and mapping
their relationship to one another is essential in making more effective
disruption strategies in the future.
| no_new_dataset | 0.950549 |
1412.0767 | Du Tran | Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, Manohar
Paluri | Learning Spatiotemporal Features with 3D Convolutional Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a simple, yet effective approach for spatiotemporal feature
learning using deep 3-dimensional convolutional networks (3D ConvNets) trained
on a large scale supervised video dataset. Our findings are three-fold: 1) 3D
ConvNets are more suitable for spatiotemporal feature learning compared to 2D
ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in
all layers is among the best performing architectures for 3D ConvNets; and 3)
Our learned features, namely C3D (Convolutional 3D), with a simple linear
classifier outperform state-of-the-art methods on 4 different benchmarks and
are comparable with current best methods on the other 2 benchmarks. In
addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset
with only 10 dimensions and also very efficient to compute due to the fast
inference of ConvNets. Finally, they are conceptually very simple and easy to
train and use.
| [
{
"version": "v1",
"created": "Tue, 2 Dec 2014 03:05:54 GMT"
},
{
"version": "v2",
"created": "Sat, 7 Feb 2015 01:59:04 GMT"
},
{
"version": "v3",
"created": "Fri, 8 May 2015 03:24:33 GMT"
},
{
"version": "v4",
"created": "Wed, 7 Oct 2015 01:29:12 GMT"
}
] | 2015-10-08T00:00:00 | [
[
"Tran",
"Du",
""
],
[
"Bourdev",
"Lubomir",
""
],
[
"Fergus",
"Rob",
""
],
[
"Torresani",
"Lorenzo",
""
],
[
"Paluri",
"Manohar",
""
]
] | TITLE: Learning Spatiotemporal Features with 3D Convolutional Networks
ABSTRACT: We propose a simple, yet effective approach for spatiotemporal feature
learning using deep 3-dimensional convolutional networks (3D ConvNets) trained
on a large scale supervised video dataset. Our findings are three-fold: 1) 3D
ConvNets are more suitable for spatiotemporal feature learning compared to 2D
ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in
all layers is among the best performing architectures for 3D ConvNets; and 3)
Our learned features, namely C3D (Convolutional 3D), with a simple linear
classifier outperform state-of-the-art methods on 4 different benchmarks and
are comparable with current best methods on the other 2 benchmarks. In
addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset
with only 10 dimensions and also very efficient to compute due to the fast
inference of ConvNets. Finally, they are conceptually very simple and easy to
train and use.
| no_new_dataset | 0.946745 |
1503.01521 | Liwen Zhang | Liwen Zhang, Subhransu Maji, Ryota Tomioka | Jointly Learning Multiple Measures of Similarities from Triplet
Comparisons | null | null | null | null | stat.ML cs.AI cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity between objects is multi-faceted and it can be easier for human
annotators to measure it when the focus is on a specific aspect. We consider
the problem of mapping objects into view-specific embeddings where the distance
between them is consistent with the similarity comparisons of the form "from
the t-th view, object A is more similar to B than to C". Our framework jointly
learns view-specific embeddings exploiting correlations between views.
Experiments on a number of datasets, including one of multi-view crowdsourced
comparison on bird images, show the proposed method achieves lower triplet
generalization error when compared to both learning embeddings independently
for each view and all views pooled into one view. Our method can also be used
to learn multiple measures of similarity over input features taking class
labels into account and compares favorably to existing approaches for
multi-task metric learning on the ISOLET dataset.
| [
{
"version": "v1",
"created": "Thu, 5 Mar 2015 02:57:19 GMT"
},
{
"version": "v2",
"created": "Fri, 6 Mar 2015 20:09:09 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Oct 2015 21:42:55 GMT"
}
] | 2015-10-08T00:00:00 | [
[
"Zhang",
"Liwen",
""
],
[
"Maji",
"Subhransu",
""
],
[
"Tomioka",
"Ryota",
""
]
] | TITLE: Jointly Learning Multiple Measures of Similarities from Triplet
Comparisons
ABSTRACT: Similarity between objects is multi-faceted and it can be easier for human
annotators to measure it when the focus is on a specific aspect. We consider
the problem of mapping objects into view-specific embeddings where the distance
between them is consistent with the similarity comparisons of the form "from
the t-th view, object A is more similar to B than to C". Our framework jointly
learns view-specific embeddings exploiting correlations between views.
Experiments on a number of datasets, including one of multi-view crowdsourced
comparison on bird images, show the proposed method achieves lower triplet
generalization error when compared to both learning embeddings independently
for each view and all views pooled into one view. Our method can also be used
to learn multiple measures of similarity over input features taking class
labels into account and compares favorably to existing approaches for
multi-task metric learning on the ISOLET dataset.
| no_new_dataset | 0.944331 |
1508.03868 | Brendan Jou | Brendan Jou, Tao Chen, Nikolaos Pappas, Miriam Redi, Mercan Topkara,
Shih-Fu Chang | Visual Affect Around the World: A Large-scale Multilingual Visual
Sentiment Ontology | 11 pages, to appear at ACM MM'15 | null | 10.1145/2733373.2806246 | null | cs.MM cs.CL cs.CV cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Every culture and language is unique. Our work expressly focuses on the
uniqueness of culture and language in relation to human affect, specifically
sentiment and emotion semantics, and how they manifest in social multimedia. We
develop sets of sentiment- and emotion-polarized visual concepts by adapting
semantic structures called adjective-noun pairs, originally introduced by Borth
et al. (2013), but in a multilingual context. We propose a new
language-dependent method for automatic discovery of these adjective-noun
constructs. We show how this pipeline can be applied on a social multimedia
platform for the creation of a large-scale multilingual visual sentiment
concept ontology (MVSO). Unlike the flat structure in Borth et al. (2013), our
unified ontology is organized hierarchically by multilingual clusters of
visually detectable nouns and subclusters of emotionally biased versions of
these nouns. In addition, we present an image-based prediction task to show how
generalizable language-specific models are in a multilingual context. A new,
publicly available dataset of >15.6K sentiment-biased visual concepts across 12
languages with language-specific detector banks, >7.36M images and their
metadata is also released.
| [
{
"version": "v1",
"created": "Sun, 16 Aug 2015 21:43:59 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Aug 2015 16:33:13 GMT"
},
{
"version": "v3",
"created": "Wed, 7 Oct 2015 19:07:14 GMT"
}
] | 2015-10-08T00:00:00 | [
[
"Jou",
"Brendan",
""
],
[
"Chen",
"Tao",
""
],
[
"Pappas",
"Nikolaos",
""
],
[
"Redi",
"Miriam",
""
],
[
"Topkara",
"Mercan",
""
],
[
"Chang",
"Shih-Fu",
""
]
] | TITLE: Visual Affect Around the World: A Large-scale Multilingual Visual
Sentiment Ontology
ABSTRACT: Every culture and language is unique. Our work expressly focuses on the
uniqueness of culture and language in relation to human affect, specifically
sentiment and emotion semantics, and how they manifest in social multimedia. We
develop sets of sentiment- and emotion-polarized visual concepts by adapting
semantic structures called adjective-noun pairs, originally introduced by Borth
et al. (2013), but in a multilingual context. We propose a new
language-dependent method for automatic discovery of these adjective-noun
constructs. We show how this pipeline can be applied on a social multimedia
platform for the creation of a large-scale multilingual visual sentiment
concept ontology (MVSO). Unlike the flat structure in Borth et al. (2013), our
unified ontology is organized hierarchically by multilingual clusters of
visually detectable nouns and subclusters of emotionally biased versions of
these nouns. In addition, we present an image-based prediction task to show how
generalizable language-specific models are in a multilingual context. A new,
publicly available dataset of >15.6K sentiment-biased visual concepts across 12
languages with language-specific detector banks, >7.36M images and their
metadata is also released.
| new_dataset | 0.947817 |
1510.02055 | Akshaya Mishra Dr | Justin A. Eichel, Akshaya Mishra, Nicholas Miller, Nicholas Jankovic,
Mohan A. Thomas, Tyler Abbott, Douglas Swanson, Joel Keller | Diverse Large-Scale ITS Dataset Created from Continuous Learning for
Real-Time Vehicle Detection | 13 pages, 11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In traffic engineering, vehicle detectors are trained on limited datasets
resulting in poor accuracy when deployed in real-world applications. Annotating
large-scale, high-quality datasets is challenging. Typically, these datasets
have limited diversity; they do not reflect the real-world operating
environment. There is a need for a large-scale, cloud-based positive and
negative mining (PNM) process and a large-scale learning and evaluation system
for the application of traffic event detection. The proposed positive and
negative mining process addresses the quality of crowd-sourced ground truth
data through machine learning review and human feedback mechanisms. The
proposed learning and evaluation system uses a distributed cloud computing
framework to handle data-scaling issues associated with large numbers of
samples and a high-dimensional feature space. The system is trained using
AdaBoost on $1,000,000$ Haar-like features extracted from $70,000$ annotated
video frames. The trained real-time vehicle detector achieves an accuracy of at
least $95\%$ for $1/2$ and about $78\%$ for $19/20$ of the time when tested on
approximately $7,500,000$ video frames. At the end of 2015, the dataset is
expected to have over one billion annotated video frames.
| [
{
"version": "v1",
"created": "Wed, 7 Oct 2015 18:34:36 GMT"
}
] | 2015-10-08T00:00:00 | [
[
"Eichel",
"Justin A.",
""
],
[
"Mishra",
"Akshaya",
""
],
[
"Miller",
"Nicholas",
""
],
[
"Jankovic",
"Nicholas",
""
],
[
"Thomas",
"Mohan A.",
""
],
[
"Abbott",
"Tyler",
""
],
[
"Swanson",
"Douglas",
""
],
[
"Keller",
"Joel",
""
]
] | TITLE: Diverse Large-Scale ITS Dataset Created from Continuous Learning for
Real-Time Vehicle Detection
ABSTRACT: In traffic engineering, vehicle detectors are trained on limited datasets
resulting in poor accuracy when deployed in real-world applications. Annotating
large-scale, high-quality datasets is challenging. Typically, these datasets
have limited diversity; they do not reflect the real-world operating
environment. There is a need for a large-scale, cloud-based positive and
negative mining (PNM) process and a large-scale learning and evaluation system
for the application of traffic event detection. The proposed positive and
negative mining process addresses the quality of crowd-sourced ground truth
data through machine learning review and human feedback mechanisms. The
proposed learning and evaluation system uses a distributed cloud computing
framework to handle data-scaling issues associated with large numbers of
samples and a high-dimensional feature space. The system is trained using
AdaBoost on $1,000,000$ Haar-like features extracted from $70,000$ annotated
video frames. The trained real-time vehicle detector achieves an accuracy of at
least $95\%$ for $1/2$ and about $78\%$ for $19/20$ of the time when tested on
approximately $7,500,000$ video frames. At the end of 2015, the dataset is
expected to have over one billion annotated video frames.
| no_new_dataset | 0.952086 |
1411.6241 | Xiaojun Chang | Xiaojun Chang, Feiping Nie, Yi Yang and Heng Huang | Improved Spectral Clustering via Embedded Label Propagation | Withdraw for a wrong formulation | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spectral clustering is a key research topic in the field of machine learning
and data mining. Most of the existing spectral clustering algorithms are built
upon Gaussian Laplacian matrices, which are sensitive to parameters. We propose
a novel parameter-free, distance-consistent Locally Linear Embedding (LLE). The
proposed distance-consistent LLE promises that edges between closer data points
have greater weight. Furthermore, we propose a novel improved spectral
clustering via embedded label propagation. Our algorithm is built upon two
advancements of the state of the art: 1) label propagation, which propagates a
node's labels to neighboring nodes according to their proximity; and 2)
manifold learning, which has been widely used in its capacity to leverage the
manifold structure of data points. First we perform standard spectral
clustering on original data and assign each cluster to k nearest data points.
Next, we propagate labels through dense, unlabeled data regions. Extensive
experiments with various datasets validate the superiority of the proposed
algorithm compared to current state-of-the-art spectral algorithms.
| [
{
"version": "v1",
"created": "Sun, 23 Nov 2014 13:35:29 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Oct 2015 16:49:39 GMT"
}
] | 2015-10-07T00:00:00 | [
[
"Chang",
"Xiaojun",
""
],
[
"Nie",
"Feiping",
""
],
[
"Yang",
"Yi",
""
],
[
"Huang",
"Heng",
""
]
] | TITLE: Improved Spectral Clustering via Embedded Label Propagation
ABSTRACT: Spectral clustering is a key research topic in the field of machine learning
and data mining. Most of the existing spectral clustering algorithms are built
upon Gaussian Laplacian matrices, which are sensitive to parameters. We propose
a novel parameter-free, distance-consistent Locally Linear Embedding (LLE). The
proposed distance-consistent LLE promises that edges between closer data points
have greater weight. Furthermore, we propose a novel improved spectral
clustering via embedded label propagation. Our algorithm is built upon two
advancements of the state of the art: 1) label propagation, which propagates a
node's labels to neighboring nodes according to their proximity; and 2)
manifold learning, which has been widely used in its capacity to leverage the
manifold structure of data points. First we perform standard spectral
clustering on original data and assign each cluster to k nearest data points.
Next, we propagate labels through dense, unlabeled data regions. Extensive
experiments with various datasets validate the superiority of the proposed
algorithm compared to current state-of-the-art spectral algorithms.
| no_new_dataset | 0.954223 |
1502.02734 | Liang-Chieh Chen | George Papandreou and Liang-Chieh Chen and Kevin Murphy and Alan L.
Yuille | Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image
Segmentation | Accepted to ICCV 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep convolutional neural networks (DCNNs) trained on a large number of
images with strong pixel-level annotations have recently significantly pushed
the state-of-art in semantic image segmentation. We study the more challenging
problem of learning DCNNs for semantic image segmentation from either (1)
weakly annotated training data such as bounding boxes or image-level labels or
(2) a combination of few strongly labeled and many weakly labeled images,
sourced from one or multiple datasets. We develop Expectation-Maximization (EM)
methods for semantic image segmentation model training under these weakly
supervised and semi-supervised settings. Extensive experimental evaluation
shows that the proposed techniques can learn models delivering competitive
results on the challenging PASCAL VOC 2012 image segmentation benchmark, while
requiring significantly less annotation effort. We share source code
implementing the proposed system at
https://bitbucket.org/deeplab/deeplab-public.
| [
{
"version": "v1",
"created": "Mon, 9 Feb 2015 23:38:45 GMT"
},
{
"version": "v2",
"created": "Fri, 8 May 2015 17:49:00 GMT"
},
{
"version": "v3",
"created": "Mon, 5 Oct 2015 23:29:28 GMT"
}
] | 2015-10-07T00:00:00 | [
[
"Papandreou",
"George",
""
],
[
"Chen",
"Liang-Chieh",
""
],
[
"Murphy",
"Kevin",
""
],
[
"Yuille",
"Alan L.",
""
]
] | TITLE: Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image
Segmentation
ABSTRACT: Deep convolutional neural networks (DCNNs) trained on a large number of
images with strong pixel-level annotations have recently significantly pushed
the state-of-art in semantic image segmentation. We study the more challenging
problem of learning DCNNs for semantic image segmentation from either (1)
weakly annotated training data such as bounding boxes or image-level labels or
(2) a combination of few strongly labeled and many weakly labeled images,
sourced from one or multiple datasets. We develop Expectation-Maximization (EM)
methods for semantic image segmentation model training under these weakly
supervised and semi-supervised settings. Extensive experimental evaluation
shows that the proposed techniques can learn models delivering competitive
results on the challenging PASCAL VOC 2012 image segmentation benchmark, while
requiring significantly less annotation effort. We share source code
implementing the proposed system at
https://bitbucket.org/deeplab/deeplab-public.
| no_new_dataset | 0.951278 |
1503.03061 | Michele Borassi | Michele Borassi, Alessandro Chessa, Guido Caldarelli | Hyperbolicity Measures "Democracy" in Real-World Networks | null | Phys. Rev. E 92, 032812 (2015) | 10.1103/PhysRevE.92.032812 | null | physics.soc-ph cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the hyperbolicity of real-world networks, a geometric quantity
that measures if a space is negatively curved. In our interpretation, a network
with small hyperbolicity is "aristocratic", because it contains a small set of
vertices involved in many shortest paths, so that few elements "connect" the
systems, while a network with large hyperbolicity has a more "democratic"
structure with a larger number of crucial elements.
We prove mathematically the soundness of this interpretation, and we derive
its consequences by analyzing a large dataset of real-world networks. We
confirm and improve previous results on hyperbolicity, and we analyze them in
the light of our interpretation.
Moreover, we study (for the first time, to our knowledge) the hyperbolicity of
the neighborhood of a given vertex. This allows us to define an "influence area"
for the vertices in the graph. We show that the influence area of the highest
degree vertex is small in what we define as "local" networks, like most social or
peer-to-peer networks. On the other hand, if the network is built in order to
reach a "global" goal, as in metabolic networks or autonomous system networks,
the influence area is much larger, and it can contain up to half the vertices
in the graph. In conclusion, our newly introduced approach allows us to
distinguish the topology and the structure of various complex networks.
| [
{
"version": "v1",
"created": "Tue, 10 Mar 2015 11:08:20 GMT"
}
] | 2015-10-07T00:00:00 | [
[
"Borassi",
"Michele",
""
],
[
"Chessa",
"Alessandro",
""
],
[
"Caldarelli",
"Guido",
""
]
] | TITLE: Hyperbolicity Measures "Democracy" in Real-World Networks
ABSTRACT: We analyze the hyperbolicity of real-world networks, a geometric quantity
that measures if a space is negatively curved. In our interpretation, a network
with small hyperbolicity is "aristocratic", because it contains a small set of
vertices involved in many shortest paths, so that few elements "connect" the
systems, while a network with large hyperbolicity has a more "democratic"
structure with a larger number of crucial elements.
We prove mathematically the soundness of this interpretation, and we derive
its consequences by analyzing a large dataset of real-world networks. We
confirm and improve previous results on hyperbolicity, and we analyze them in
the light of our interpretation.
Moreover, we study (for the first time, to our knowledge) the hyperbolicity of
the neighborhood of a given vertex. This allows us to define an "influence area"
for the vertices in the graph. We show that the influence area of the highest
degree vertex is small in what we define as "local" networks, like most social or
peer-to-peer networks. On the other hand, if the network is built in order to
reach a "global" goal, as in metabolic networks or autonomous system networks,
the influence area is much larger, and it can contain up to half the vertices
in the graph. In conclusion, our newly introduced approach allows us to
distinguish the topology and the structure of various complex networks.
| no_new_dataset | 0.949153 |
1505.00687 | Xiaolong Wang | Xiaolong Wang, Abhinav Gupta | Unsupervised Learning of Visual Representations using Videos | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Is strong supervision necessary for learning a good visual representation? Do
we really need millions of semantically-labeled images to train a Convolutional
Neural Network (CNN)? In this paper, we present a simple yet surprisingly
powerful approach for unsupervised learning of CNN. Specifically, we use
hundreds of thousands of unlabeled videos from the web to learn visual
representations. Our key idea is that visual tracking provides the supervision.
That is, two patches connected by a track should have similar visual
representation in deep feature space since they probably belong to the same
object or object part. We design a Siamese-triplet network with a ranking loss
function to train this CNN representation. Without using a single image from
ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train
an ensemble of unsupervised networks that achieves 52% mAP (no bounding box
regression). This performance comes tantalizingly close to its
ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We
also show that our unsupervised network can perform competitively in other
tasks such as surface-normal estimation.
| [
{
"version": "v1",
"created": "Mon, 4 May 2015 15:50:53 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Oct 2015 17:05:49 GMT"
}
] | 2015-10-07T00:00:00 | [
[
"Wang",
"Xiaolong",
""
],
[
"Gupta",
"Abhinav",
""
]
] | TITLE: Unsupervised Learning of Visual Representations using Videos
ABSTRACT: Is strong supervision necessary for learning a good visual representation? Do
we really need millions of semantically-labeled images to train a Convolutional
Neural Network (CNN)? In this paper, we present a simple yet surprisingly
powerful approach for unsupervised learning of CNN. Specifically, we use
hundreds of thousands of unlabeled videos from the web to learn visual
representations. Our key idea is that visual tracking provides the supervision.
That is, two patches connected by a track should have similar visual
representation in deep feature space since they probably belong to the same
object or object part. We design a Siamese-triplet network with a ranking loss
function to train this CNN representation. Without using a single image from
ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train
an ensemble of unsupervised networks that achieves 52% mAP (no bounding box
regression). This performance comes tantalizingly close to its
ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We
also show that our unsupervised network can perform competitively in other
tasks such as surface-normal estimation.
| no_new_dataset | 0.941761 |
1510.01440 | Baoyuan Wang | Ruobing Wu and Baoyuan Wang and Wenping Wang and Yizhou Yu | Harvesting Discriminative Meta Objects with Deep CNN Features for Scene
Classification | To Appear in ICCV 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work on scene classification still makes use of generic CNN features
in a rudimentary manner. In this ICCV 2015 paper, we present a novel pipeline
built upon deep CNN features to harvest discriminative visual objects and parts
for scene classification. We first use a region proposal technique to generate
a set of high-quality patches potentially containing objects, and apply a
pre-trained CNN to extract generic deep features from these patches. Then we
perform both unsupervised and weakly supervised learning to screen these
patches and discover discriminative ones representing category-specific objects
and parts. We further apply discriminative clustering enhanced with local CNN
fine-tuning to aggregate similar objects and parts into groups, called meta
objects. A scene image representation is constructed by pooling the feature
response maps of all the learned meta objects at multiple spatial scales. We
have confirmed that the scene image representation obtained using this new
pipeline is capable of delivering state-of-the-art performance on two popular
scene benchmark datasets, MIT Indoor 67~\cite{MITIndoor67} and
Sun397~\cite{Sun397}.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2015 05:59:11 GMT"
}
] | 2015-10-07T00:00:00 | [
[
"Wu",
"Ruobing",
""
],
[
"Wang",
"Baoyuan",
""
],
[
"Wang",
"Wenping",
""
],
[
"Yu",
"Yizhou",
""
]
] | TITLE: Harvesting Discriminative Meta Objects with Deep CNN Features for Scene
Classification
ABSTRACT: Recent work on scene classification still makes use of generic CNN features
in a rudimentary manner. In this ICCV 2015 paper, we present a novel pipeline
built upon deep CNN features to harvest discriminative visual objects and parts
for scene classification. We first use a region proposal technique to generate
a set of high-quality patches potentially containing objects, and apply a
pre-trained CNN to extract generic deep features from these patches. Then we
perform both unsupervised and weakly supervised learning to screen these
patches and discover discriminative ones representing category-specific objects
and parts. We further apply discriminative clustering enhanced with local CNN
fine-tuning to aggregate similar objects and parts into groups, called meta
objects. A scene image representation is constructed by pooling the feature
response maps of all the learned meta objects at multiple spatial scales. We
have confirmed that the scene image representation obtained using this new
pipeline is capable of delivering state-of-the-art performance on two popular
scene benchmark datasets, MIT Indoor 67~\cite{MITIndoor67} and
Sun397~\cite{Sun397}.
| no_new_dataset | 0.947672 |
1510.01544 | Efstratios Gavves Dr. | Efstratios Gavves and Thomas Mensink and Tatiana Tommasi and Cees G.M.
Snoek and Tinne Tuytelaars | Active Transfer Learning with Zero-Shot Priors: Reusing Past Datasets
for Future Tasks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we reuse existing knowledge, in the form of available datasets, when
solving a new and apparently unrelated target task from a set of unlabeled
data? In this work we make a first contribution to answer this question in the
context of image classification. We frame this quest as an active learning
problem and use zero-shot classifiers to guide the learning process by linking
the new task to the existing classifiers. By revisiting the dual formulation of
adaptive SVM, we reveal two basic conditions to choose greedily only the most
relevant samples to be annotated. On this basis we propose an effective active
learning algorithm which learns the best possible target classification model
with minimum human labeling effort. Extensive experiments on two challenging
datasets show the value of our approach compared to the state-of-the-art active
learning methodologies, as well as its potential to reuse past datasets with
minimal effort for future tasks.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2015 12:06:19 GMT"
}
] | 2015-10-07T00:00:00 | [
[
"Gavves",
"Efstratios",
""
],
[
"Mensink",
"Thomas",
""
],
[
"Tommasi",
"Tatiana",
""
],
[
"Snoek",
"Cees G. M.",
""
],
[
"Tuytelaars",
"Tinne",
""
]
] | TITLE: Active Transfer Learning with Zero-Shot Priors: Reusing Past Datasets
for Future Tasks
ABSTRACT: How can we reuse existing knowledge, in the form of available datasets, when
solving a new and apparently unrelated target task from a set of unlabeled
data? In this work we make a first contribution to answer this question in the
context of image classification. We frame this quest as an active learning
problem and use zero-shot classifiers to guide the learning process by linking
the new task to the existing classifiers. By revisiting the dual formulation of
adaptive SVM, we reveal two basic conditions to choose greedily only the most
relevant samples to be annotated. On this basis we propose an effective active
learning algorithm which learns the best possible target classification model
with minimum human labeling effort. Extensive experiments on two challenging
datasets show the value of our approach compared to the state-of-the-art active
learning methodologies, as well as its potential to reuse past datasets with
minimal effort for future tasks.
| no_new_dataset | 0.94699 |
1510.01553 | Dan Xu | Dan Xu, Elisa Ricci, Yan Yan, Jingkuan Song, Nicu Sebe | Learning Deep Representations of Appearance and Motion for Anomalous
Event Detection | Oral paper in BMVC 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel unsupervised deep learning framework for anomalous event
detection in complex video scenes. While most existing works merely use
hand-crafted appearance and motion features, we propose Appearance and Motion
DeepNet (AMDN) which utilizes deep neural networks to automatically learn
feature representations. To exploit the complementary information of both
appearance and motion patterns, we introduce a novel double fusion framework,
combining both the benefits of traditional early fusion and late fusion
strategies. Specifically, stacked denoising autoencoders are proposed to
separately learn both appearance and motion features as well as a joint
representation (early fusion). Based on the learned representations, multiple
one-class SVM models are used to predict the anomaly scores of each input,
which are then integrated with a late fusion strategy for final anomaly
detection. We evaluate the proposed method on two publicly available video
surveillance datasets, showing competitive performance with respect to state of
the art approaches.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2015 12:42:55 GMT"
}
] | 2015-10-07T00:00:00 | [
[
"Xu",
"Dan",
""
],
[
"Ricci",
"Elisa",
""
],
[
"Yan",
"Yan",
""
],
[
"Song",
"Jingkuan",
""
],
[
"Sebe",
"Nicu",
""
]
] | TITLE: Learning Deep Representations of Appearance and Motion for Anomalous
Event Detection
ABSTRACT: We present a novel unsupervised deep learning framework for anomalous event
detection in complex video scenes. While most existing works merely use
hand-crafted appearance and motion features, we propose Appearance and Motion
DeepNet (AMDN) which utilizes deep neural networks to automatically learn
feature representations. To exploit the complementary information of both
appearance and motion patterns, we introduce a novel double fusion framework,
combining both the benefits of traditional early fusion and late fusion
strategies. Specifically, stacked denoising autoencoders are proposed to
separately learn both appearance and motion features as well as a joint
representation (early fusion). Based on the learned representations, multiple
one-class SVM models are used to predict the anomaly scores of each input,
which are then integrated with a late fusion strategy for final anomaly
detection. We evaluate the proposed method on two publicly available video
surveillance datasets, showing competitive performance with respect to state of
the art approaches.
| no_new_dataset | 0.947672 |
1510.01562 | Benjamin Piwowarski | Benjamin Piwowarski and Sylvain Lamprier and Nicolas Despres | Parameterized Neural Network Language Models for Information Retrieval | null | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information Retrieval (IR) models need to deal with two difficult issues,
vocabulary mismatch and term dependencies. Vocabulary mismatch corresponds to
the difficulty of retrieving relevant documents that do not contain exact query
terms but semantically related terms. Term dependencies refers to the need of
considering the relationship between the words of the query when estimating the
relevance of a document. A multitude of solutions has been proposed to solve
each of these two problems, but no principled model solve both. In parallel, in
the last few years, language models based on neural networks have been used to
cope with complex natural language processing tasks like emotion and paraphrase
detection. Although they present good abilities to cope with both term
dependencies and vocabulary mismatch problems, thanks to the distributed
representation of words they are based upon, such models could not be used
readily in IR, where the estimation of one language model per document (or
query) is required. This is both computationally unfeasible and prone to
over-fitting. Based on a recent work that proposed to learn a generic language
model that can be modified through a set of document-specific parameters, we
explore use of new neural network models that are adapted to ad-hoc IR tasks.
Within the language model IR framework, we propose and study the use of a
generic language model as well as a document-specific language model. Both can
be used as a smoothing component, but the latter is more adapted to the
document at hand and has the potential of being used as a full document
language model. We experiment with such models and analyze their results on
TREC-1 to 8 datasets.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2015 13:07:31 GMT"
}
] | 2015-10-07T00:00:00 | [
[
"Piwowarski",
"Benjamin",
""
],
[
"Lamprier",
"Sylvain",
""
],
[
"Despres",
"Nicolas",
""
]
] | TITLE: Parameterized Neural Network Language Models for Information Retrieval
ABSTRACT: Information Retrieval (IR) models need to deal with two difficult issues,
vocabulary mismatch and term dependencies. Vocabulary mismatch corresponds to
the difficulty of retrieving relevant documents that do not contain exact query
terms but semantically related terms. Term dependencies refer to the need to
consider the relationship between the words of the query when estimating the
relevance of a document. A multitude of solutions has been proposed to solve
each of these two problems, but no principled model solves both. In parallel, in
the last few years, language models based on neural networks have been used to
cope with complex natural language processing tasks like emotion and paraphrase
detection. Although they present good abilities to cope with both term
dependencies and vocabulary mismatch problems, thanks to the distributed
representation of words they are based upon, such models could not be used
readily in IR, where the estimation of one language model per document (or
query) is required. This is both computationally unfeasible and prone to
over-fitting. Based on a recent work that proposed to learn a generic language
model that can be modified through a set of document-specific parameters, we
explore use of new neural network models that are adapted to ad-hoc IR tasks.
Within the language model IR framework, we propose and study the use of a
generic language model as well as a document-specific language model. Both can
be used as a smoothing component, but the latter is more adapted to the
document at hand and has the potential of being used as a full document
language model. We experiment with such models and analyze their results on
TREC-1 to 8 datasets.
| no_new_dataset | 0.952131 |
1510.01576 | Daniel Castro Chin | Daniel Castro, Steven Hickson, Vinay Bettadapura, Edison Thomaz,
Gregory Abowd, Henrik Christensen, Irfan Essa | Predicting Daily Activities From Egocentric Images Using Deep Learning | 8 pages | ISWC '15 Proceedings of the 2015 ACM International Symposium on
Wearable Computers - Pages 75-82 | 10.1145/2802083.2808398 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method to analyze images taken from a passive egocentric
wearable camera along with the contextual information, such as time and day of
week, to learn and predict everyday activities of an individual. We collected a
dataset of 40,103 egocentric images over a 6 month period with 19 activity
classes and demonstrate the benefit of state-of-the-art deep learning
techniques for learning and predicting daily activities. Classification is
conducted using a Convolutional Neural Network (CNN) with a classification
method we introduce called a late fusion ensemble. This late fusion ensemble
incorporates relevant contextual information and increases our classification
accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a
person's activity across the 19 activity classes. We also demonstrate some
promising results from two additional users by fine-tuning the classifier with
one day of training data.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2015 13:56:50 GMT"
}
] | 2015-10-07T00:00:00 | [
[
"Castro",
"Daniel",
""
],
[
"Hickson",
"Steven",
""
],
[
"Bettadapura",
"Vinay",
""
],
[
"Thomaz",
"Edison",
""
],
[
"Abowd",
"Gregory",
""
],
[
"Christensen",
"Henrik",
""
],
[
"Essa",
"Irfan",
""
]
] | TITLE: Predicting Daily Activities From Egocentric Images Using Deep Learning
ABSTRACT: We present a method to analyze images taken from a passive egocentric
wearable camera along with the contextual information, such as time and day of
week, to learn and predict everyday activities of an individual. We collected a
dataset of 40,103 egocentric images over a 6 month period with 19 activity
classes and demonstrate the benefit of state-of-the-art deep learning
techniques for learning and predicting daily activities. Classification is
conducted using a Convolutional Neural Network (CNN) with a classification
method we introduce called a late fusion ensemble. This late fusion ensemble
incorporates relevant contextual information and increases our classification
accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a
person's activity across the 19 activity classes. We also demonstrate some
promising results from two additional users by fine-tuning the classifier with
one day of training data.
| no_new_dataset | 0.911574 |
1309.0787 | Furong Huang | Furong Huang, U. N. Niranjan, Mohammad Umar Hakeem, Animashree
Anandkumar | Online Tensor Methods for Learning Latent Variable Models | JMLR 2014 | null | null | null | cs.LG cs.DC cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an online tensor decomposition based approach for two latent
variable modeling problems namely, (1) community detection, in which we learn
the latent communities that the social actors in social networks belong to, and
(2) topic modeling, in which we infer hidden topics of text articles. We
consider decomposition of moment tensors using stochastic gradient descent. We
conduct optimization of multilinear operations in SGD and avoid directly
forming the tensors, to save computational and storage costs. We present
an optimized algorithm on two platforms. Our GPU-based implementation exploits the
parallelism of SIMD architectures to allow for maximum speed-up by a careful
optimization of storage and data transfer, whereas our CPU-based implementation
uses efficient sparse matrix computations and is suitable for large sparse
datasets. For the community detection problem, we demonstrate accuracy and
computational efficiency on Facebook, Yelp and DBLP datasets, and for the topic
modeling problem, we also demonstrate good performance on the New York Times
dataset. We compare our results to the state-of-the-art algorithms such as the
variational method, and report a gain in accuracy and a speed-up of several orders
of magnitude in the execution time.
| [
{
"version": "v1",
"created": "Tue, 3 Sep 2013 19:30:55 GMT"
},
{
"version": "v2",
"created": "Tue, 10 Sep 2013 20:56:08 GMT"
},
{
"version": "v3",
"created": "Wed, 16 Oct 2013 01:58:14 GMT"
},
{
"version": "v4",
"created": "Mon, 31 Mar 2014 17:24:07 GMT"
},
{
"version": "v5",
"created": "Sat, 3 Oct 2015 04:26:19 GMT"
}
] | 2015-10-06T00:00:00 | [
[
"Huang",
"Furong",
""
],
[
"Niranjan",
"U. N.",
""
],
[
"Hakeem",
"Mohammad Umar",
""
],
[
"Anandkumar",
"Animashree",
""
]
] | TITLE: Online Tensor Methods for Learning Latent Variable Models
ABSTRACT: We introduce an online tensor decomposition based approach for two latent
variable modeling problems, namely (1) community detection, in which we learn
the latent communities that the social actors in social networks belong to, and
(2) topic modeling, in which we infer hidden topics of text articles. We
consider decomposition of moment tensors using stochastic gradient descent. We
conduct optimization of multilinear operations in SGD and avoid directly
forming the tensors, to save computational and storage costs. We present
an optimized algorithm on two platforms. Our GPU-based implementation exploits the
parallelism of SIMD architectures to allow for maximum speed-up by a careful
optimization of storage and data transfer, whereas our CPU-based implementation
uses efficient sparse matrix computations and is suitable for large sparse
datasets. For the community detection problem, we demonstrate accuracy and
computational efficiency on Facebook, Yelp and DBLP datasets, and for the topic
modeling problem, we also demonstrate good performance on the New York Times
dataset. We compare our results to the state-of-the-art algorithms such as the
variational method, and report a gain in accuracy and a speed-up of several orders
of magnitude in the execution time.
| no_new_dataset | 0.947914 |
1504.06375 | Saining Xie | Saining Xie and Zhuowen Tu | Holistically-Nested Edge Detection | v2 Add appendix A for updated results (ODS=0.790) on BSDS-500 in a
new experiment setting. Fix typos and reorganize formulations. Add Table 2 to
discuss the role of deep supervision. Add links to publicly available
repository for code, models and data | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a new edge detection algorithm that tackles two important issues
in this long-standing vision problem: (1) holistic image training and
prediction; and (2) multi-scale and multi-level feature learning. Our proposed
method, holistically-nested edge detection (HED), performs image-to-image
prediction by means of a deep learning model that leverages fully convolutional
neural networks and deeply-supervised nets. HED automatically learns rich
hierarchical representations (guided by deep supervision on side responses)
that are important in order to approach the human ability to resolve the
challenging ambiguity in edge and object boundary detection. We significantly
advance the state-of-the-art on the BSD500 dataset (ODS F-score of .782) and
the NYU Depth dataset (ODS F-score of .746), and do so with an improved speed
(0.4 second per image) that is orders of magnitude faster than some recent
CNN-based edge detection algorithms.
| [
{
"version": "v1",
"created": "Fri, 24 Apr 2015 02:12:15 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Oct 2015 02:15:38 GMT"
}
] | 2015-10-06T00:00:00 | [
[
"Xie",
"Saining",
""
],
[
"Tu",
"Zhuowen",
""
]
] | TITLE: Holistically-Nested Edge Detection
ABSTRACT: We develop a new edge detection algorithm that tackles two important issues
in this long-standing vision problem: (1) holistic image training and
prediction; and (2) multi-scale and multi-level feature learning. Our proposed
method, holistically-nested edge detection (HED), performs image-to-image
prediction by means of a deep learning model that leverages fully convolutional
neural networks and deeply-supervised nets. HED automatically learns rich
hierarchical representations (guided by deep supervision on side responses)
that are important in order to approach the human ability to resolve the
challenging ambiguity in edge and object boundary detection. We significantly
advance the state-of-the-art on the BSD500 dataset (ODS F-score of .782) and
the NYU Depth dataset (ODS F-score of .746), and do so with an improved speed
(0.4 second per image) that is orders of magnitude faster than some recent
CNN-based edge detection algorithms.
| no_new_dataset | 0.947769 |
1509.07074 | Soumi Chaki | Akhilesh K Verma, Soumi Chaki, Aurobinda Routray, William K Mohanty,
Mamata Jenamani | Quantification of sand fraction from seismic attributes using
Neuro-Fuzzy approach | Journal of Applied Geophysics, volume 111, page 141-155 | null | 10.1016/j.jappgeo.2014.10.005 | null | cs.CE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we illustrate the modeling of a reservoir property (sand
fraction) from seismic attributes, namely seismic impedance, seismic amplitude,
and instantaneous frequency, using a Neuro-Fuzzy (NF) approach. The input dataset
includes 3D post-stacked seismic attributes and six well logs acquired from a
hydrocarbon field located on the western coast of India. The presence of thin
sand and shale layers in the basin area makes the modeling of reservoir
characteristics a challenging task. Though seismic data is helpful in
extrapolation of reservoir properties away from boreholes; yet, it could be
challenging to delineate thin sand and shale reservoirs using seismic data due
to its limited resolvability. Therefore, it is important to develop
state-of-the-art intelligent methods for calibrating a nonlinear mapping between
seismic data and target reservoir variables. Neural networks have shown their
potential to model such nonlinear mappings; however, uncertainties associated
with the model and datasets are still a concern. Hence, the introduction of Fuzzy
Logic (FL) is beneficial for handling these uncertainties. More specifically,
hybrid variants of Artificial Neural Network (ANN) and fuzzy logic, i.e., NF
methods, are capable of modeling reservoir characteristics by integrating
the explicit knowledge representation power of FL with the learning ability of
neural networks. The documented results in this study demonstrate acceptable
resemblance between target and predicted variables, and hence, encourage the
application of integrated machine learning approaches such as Neuro-Fuzzy in
reservoir characterization domain. Furthermore, visualization of the variation
of sand probability in the study area would assist in identifying placement of
potential wells for future drilling operations.
| [
{
"version": "v1",
"created": "Wed, 23 Sep 2015 17:48:24 GMT"
}
] | 2015-10-06T00:00:00 | [
[
"Verma",
"Akhilesh K",
""
],
[
"Chaki",
"Soumi",
""
],
[
"Routray",
"Aurobinda",
""
],
[
"Mohanty",
"William K",
""
],
[
"Jenamani",
"Mamata",
""
]
] | TITLE: Quantification of sand fraction from seismic attributes using
Neuro-Fuzzy approach
ABSTRACT: In this paper, we illustrate the modeling of a reservoir property (sand
fraction) from seismic attributes, namely seismic impedance, seismic amplitude,
and instantaneous frequency, using a Neuro-Fuzzy (NF) approach. The input dataset
includes 3D post-stacked seismic attributes and six well logs acquired from a
hydrocarbon field located on the western coast of India. The presence of thin
sand and shale layers in the basin area makes the modeling of reservoir
characteristics a challenging task. Though seismic data is helpful in
extrapolation of reservoir properties away from boreholes; yet, it could be
challenging to delineate thin sand and shale reservoirs using seismic data due
to its limited resolvability. Therefore, it is important to develop
state-of-the-art intelligent methods for calibrating a nonlinear mapping between
seismic data and target reservoir variables. Neural networks have shown their
potential to model such nonlinear mappings; however, uncertainties associated
with the model and datasets are still a concern. Hence, the introduction of Fuzzy
Logic (FL) is beneficial for handling these uncertainties. More specifically,
hybrid variants of Artificial Neural Network (ANN) and fuzzy logic, i.e., NF
methods, are capable of modeling reservoir characteristics by integrating
the explicit knowledge representation power of FL with the learning ability of
neural networks. The documented results in this study demonstrate acceptable
resemblance between target and predicted variables, and hence, encourage the
application of integrated machine learning approaches such as Neuro-Fuzzy in
reservoir characterization domain. Furthermore, visualization of the variation
of sand probability in the study area would assist in identifying placement of
potential wells for future drilling operations.
| no_new_dataset | 0.944434 |
1510.00745 | Eric Orenstein | Eric C. Orenstein, Oscar Beijbom, Emily E. Peacock and Heidi M. Sosik | WHOI-Plankton- A Large Scale Fine Grained Visual Recognition Benchmark
Dataset for Plankton Classification | 2 pages, 1 figure, presented at the Third Workshop on Fine-Grained
Visual Categorization at CVPR 2015 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planktonic organisms are of fundamental importance to marine ecosystems: they
form the basis of the food web, provide the link between the atmosphere and the
deep ocean, and influence global-scale biogeochemical cycles. Scientists are
increasingly using imaging-based technologies to study these creatures in their
natural habitat. Images from such systems provide a unique opportunity to model
and understand plankton ecosystems, but the collected datasets can be enormous.
The Imaging FlowCytobot (IFCB) at Woods Hole Oceanographic Institution, for
example, is an \emph{in situ} system that has been continuously imaging
plankton since 2006. To date, it has generated more than 700 million samples.
Manual classification of such a vast image collection is impractical due to the
size of the data set. In addition, the annotation task is challenging due to
the large space of relevant classes, intra-class variability, and inter-class
similarity. Methods for automated classification exist, but the accuracy is
often below that of human experts. Here we introduce WHOI-Plankton: a large
scale, fine-grained visual recognition dataset for plankton classification,
which comprises over 3.4 million expert-labeled images across 70 classes. The
labeled image set is compiled from over 8 years of near-continuous data
collection with the IFCB at the Martha's Vineyard Coastal Observatory (MVCO).
We discuss relevant metrics for evaluation of classification performance and
provide results for a traditional method based on hand-engineered features and
two methods based on convolutional neural networks.
| [
{
"version": "v1",
"created": "Fri, 2 Oct 2015 22:06:52 GMT"
}
] | 2015-10-06T00:00:00 | [
[
"Orenstein",
"Eric C.",
""
],
[
"Beijbom",
"Oscar",
""
],
[
"Peacock",
"Emily E.",
""
],
[
"Sosik",
"Heidi M.",
""
]
] | TITLE: WHOI-Plankton- A Large Scale Fine Grained Visual Recognition Benchmark
Dataset for Plankton Classification
ABSTRACT: Planktonic organisms are of fundamental importance to marine ecosystems: they
form the basis of the food web, provide the link between the atmosphere and the
deep ocean, and influence global-scale biogeochemical cycles. Scientists are
increasingly using imaging-based technologies to study these creatures in their
natural habitat. Images from such systems provide a unique opportunity to model
and understand plankton ecosystems, but the collected datasets can be enormous.
The Imaging FlowCytobot (IFCB) at Woods Hole Oceanographic Institution, for
example, is an \emph{in situ} system that has been continuously imaging
plankton since 2006. To date, it has generated more than 700 million samples.
Manual classification of such a vast image collection is impractical due to the
size of the data set. In addition, the annotation task is challenging due to
the large space of relevant classes, intra-class variability, and inter-class
similarity. Methods for automated classification exist, but the accuracy is
often below that of human experts. Here we introduce WHOI-Plankton: a large
scale, fine-grained visual recognition dataset for plankton classification,
which comprises over 3.4 million expert-labeled images across 70 classes. The
labeled image set is compiled from over 8 years of near-continuous data
collection with the IFCB at the Martha's Vineyard Coastal Observatory (MVCO).
We discuss relevant metrics for evaluation of classification performance and
provide results for a traditional method based on hand-engineered features and
two methods based on convolutional neural networks.
| new_dataset | 0.964489 |
1510.00755 | Taylor Arnold | Taylor Arnold | Sparse Density Representations for Simultaneous Inference on Large
Spatial Datasets | 9 pages, 3 figures, 5 tables | null | null | null | stat.CO cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large spatial datasets often represent a number of spatial point processes
generated by distinct entities or classes of events. When crossed with
covariates, such as discrete time buckets, this can quickly result in a data
set with millions of individual density estimates. Applications that require
simultaneous access to a substantial subset of these estimates become resource
constrained when densities are stored in complex and incompatible formats. We
present a method for representing spatial densities along the nodes of sparsely
populated trees. Fast algorithms are provided for performing set operations and
queries on the resulting compact tree structures. The speed and simplicity of
the approach is demonstrated on both real and simulated spatial data.
| [
{
"version": "v1",
"created": "Fri, 2 Oct 2015 23:05:48 GMT"
}
] | 2015-10-06T00:00:00 | [
[
"Arnold",
"Taylor",
""
]
] | TITLE: Sparse Density Representations for Simultaneous Inference on Large
Spatial Datasets
ABSTRACT: Large spatial datasets often represent a number of spatial point processes
generated by distinct entities or classes of events. When crossed with
covariates, such as discrete time buckets, this can quickly result in a data
set with millions of individual density estimates. Applications that require
simultaneous access to a substantial subset of these estimates become resource
constrained when densities are stored in complex and incompatible formats. We
present a method for representing spatial densities along the nodes of sparsely
populated trees. Fast algorithms are provided for performing set operations and
queries on the resulting compact tree structures. The speed and simplicity of
the approach is demonstrated on both real and simulated spatial data.
| no_new_dataset | 0.945801 |
1510.00902 | Luis M. A. Bettencourt | Luis M. A. Bettencourt, Jose Lobo | Urban Scaling in Europe | 35 pages, 7 Figures, 1 Table | null | null | null | physics.soc-ph nlin.AO physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the last decades, in disciplines as diverse as economics, geography, and
complex systems, a perspective has arisen proposing that many properties of
cities are quantitatively predictable due to agglomeration or scaling effects.
Using new harmonized definitions for functional urban areas, we examine to what
extent these ideas apply to European cities. We show that while most large
urban systems in Western Europe (France, Germany, Italy, Spain, UK)
approximately agree with theoretical expectations, the small number of cities
in each nation and their natural variability preclude drawing strong
conclusions. We demonstrate how this problem can be overcome so that cities
from different urban systems can be pooled together to construct larger
datasets. This leads to a simple statistical procedure to identify urban
scaling relations, which then clearly emerge as a property of European cities.
We compare the predictions of urban scaling to Zipf's law for the size
distribution of cities and show that while the former holds well, the latter is
a poor descriptor of European cities. We conclude with scenarios for the size
and properties of future pan-European megacities and their implications for the
economic productivity, technological sophistication and regional inequalities
of an integrated European urban system.
| [
{
"version": "v1",
"created": "Sun, 4 Oct 2015 03:31:34 GMT"
}
] | 2015-10-06T00:00:00 | [
[
"Bettencourt",
"Luis M. A.",
""
],
[
"Lobo",
"Jose",
""
]
] | TITLE: Urban Scaling in Europe
ABSTRACT: Over the last decades, in disciplines as diverse as economics, geography, and
complex systems, a perspective has arisen proposing that many properties of
cities are quantitatively predictable due to agglomeration or scaling effects.
Using new harmonized definitions for functional urban areas, we examine to what
extent these ideas apply to European cities. We show that while most large
urban systems in Western Europe (France, Germany, Italy, Spain, UK)
approximately agree with theoretical expectations, the small number of cities
in each nation and their natural variability preclude drawing strong
conclusions. We demonstrate how this problem can be overcome so that cities
from different urban systems can be pooled together to construct larger
datasets. This leads to a simple statistical procedure to identify urban
scaling relations, which then clearly emerge as a property of European cities.
We compare the predictions of urban scaling to Zipf's law for the size
distribution of cities and show that while the former holds well, the latter is
a poor descriptor of European cities. We conclude with scenarios for the size
and properties of future pan-European megacities and their implications for the
economic productivity, technological sophistication and regional inequalities
of an integrated European urban system.
| no_new_dataset | 0.946101 |
1510.01027 | Xinggang Wang | Xinggang Wang, Zhuotun Zhu, Cong Yao, Xiang Bai | Relaxed Multiple-Instance SVM with Application to Object Discovery | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple-instance learning (MIL) has served as an important tool for a wide
range of vision applications, for instance, image classification, object
detection, and visual tracking. In this paper, we propose a novel method to
solve the classical MIL problem, named relaxed multiple-instance SVM (RMI-SVM).
We treat the positiveness of an instance as a continuous variable, use a
Noisy-OR model to enforce the MIL constraints, and jointly optimize the bag
label and instance label in a unified framework. The optimization problem can be
efficiently solved using stochastic gradient descent. Extensive experiments
demonstrate that RMI-SVM consistently achieves superior performance on various
benchmarks for MIL. Moreover, we simply applied RMI-SVM to a challenging vision
task, common object discovery. The state-of-the-art results of object discovery
on Pascal VOC datasets further confirm the advantages of the proposed method.
| [
{
"version": "v1",
"created": "Mon, 5 Oct 2015 04:18:18 GMT"
}
] | 2015-10-06T00:00:00 | [
[
"Wang",
"Xinggang",
""
],
[
"Zhu",
"Zhuotun",
""
],
[
"Yao",
"Cong",
""
],
[
"Bai",
"Xiang",
""
]
] | TITLE: Relaxed Multiple-Instance SVM with Application to Object Discovery
ABSTRACT: Multiple-instance learning (MIL) has served as an important tool for a wide
range of vision applications, for instance, image classification, object
detection, and visual tracking. In this paper, we propose a novel method to
solve the classical MIL problem, named relaxed multiple-instance SVM (RMI-SVM).
We treat the positiveness of an instance as a continuous variable, use a
Noisy-OR model to enforce the MIL constraints, and jointly optimize the bag
label and instance label in a unified framework. The optimization problem can be
efficiently solved using stochastic gradient descent. Extensive experiments
demonstrate that RMI-SVM consistently achieves superior performance on various
benchmarks for MIL. Moreover, we simply applied RMI-SVM to a challenging vision
task, common object discovery. The state-of-the-art results of object discovery
on Pascal VOC datasets further confirm the advantages of the proposed method.
| no_new_dataset | 0.951278 |
1510.01218 | Malgorzata Peszynska | Malgorzata Peszynska and Anna Trykozko and Gabriel Iltis and Steffen
Schlueter and Dorthe Wildenschild | Biofilm growth in porous media: experiments, computational modeling at
the porescale, and upscaling | 34 pages, 8 figures | null | 10.1016/j.advwatres.2015.07.008 | null | physics.flu-dyn math.NA nlin.AO q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Biofilm growth changes many physical properties of porous media such as
porosity, permeability and mass transport parameters. The growth depends on
various environmental conditions, and in particular, on flow rates. Modeling
the evolution of such properties is difficult both at the porescale where the
phase morphology can be distinguished, as well as during upscaling to the
corescale effective properties. Experimental data on biofilm growth is also
limited because its collection can interfere with the growth, while imaging
itself presents challenges.
In this paper we combine insight from imaging, experiments, and numerical
simulations and visualization. The experimental dataset is based on a glass-bead
domain inoculated with biomass, which is subjected to various flow conditions
promoting the growth of biomass and the appearance of a biofilm phase. The
domain is imaged and the imaging data is used directly by a computational model
for flow and transport. The results of the computational flow model are
upscaled to produce conductivities which compare well with the experimentally
obtained hydraulic properties of the medium. The flow model is also coupled to
a newly developed biomass--nutrient growth model, and the model reproduces
morphologies qualitatively similar to those observed in the experiment.
| [
{
"version": "v1",
"created": "Thu, 24 Sep 2015 17:06:43 GMT"
}
] | 2015-10-06T00:00:00 | [
[
"Peszynska",
"Malgorzata",
""
],
[
"Trykozko",
"Anna",
""
],
[
"Iltis",
"Gabriel",
""
],
[
"Schlueter",
"Steffen",
""
],
[
"Wildenschild",
"Dorthe",
""
]
] | TITLE: Biofilm growth in porous media: experiments, computational modeling at
the porescale, and upscaling
ABSTRACT: Biofilm growth changes many physical properties of porous media such as
porosity, permeability and mass transport parameters. The growth depends on
various environmental conditions, and in particular, on flow rates. Modeling
the evolution of such properties is difficult both at the porescale where the
phase morphology can be distinguished, as well as during upscaling to the
corescale effective properties. Experimental data on biofilm growth is also
limited because its collection can interfere with the growth, while imaging
itself presents challenges.
In this paper we combine insight from imaging, experiments, and numerical
simulations and visualization. The experimental dataset is based on a glass-bead
domain inoculated with biomass, which is subjected to various flow conditions
promoting the growth of biomass and the appearance of a biofilm phase. The
domain is imaged and the imaging data is used directly by a computational model
for flow and transport. The results of the computational flow model are
upscaled to produce conductivities which compare well with the experimentally
obtained hydraulic properties of the medium. The flow model is also coupled to
a newly developed biomass--nutrient growth model, and the model reproduces
morphologies qualitatively similar to those observed in the experiment.
| no_new_dataset | 0.956227 |
1504.06692 | Junhua Mao | Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille | Learning like a Child: Fast Novel Visual Concept Learning from Sentence
Descriptions of Images | ICCV 2015 camera ready version. We add much more novel visual
concepts in the NVC dataset and have released it, see
http://www.stat.ucla.edu/~junhua.mao/projects/child_learning.html | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the task of learning novel visual concepts, and
their interactions with other concepts, from a few images with sentence
descriptions. Using linguistic context and visual features, our method is able
to efficiently hypothesize the semantic meaning of new words and add them to
its word dictionary so that they can be used to describe images which contain
these novel concepts. Our method has an image captioning module based on m-RNN
with several improvements. In particular, we propose a transposed weight
sharing scheme, which not only improves performance on image captioning, but
also makes the model more suitable for the novel concept learning task. We
propose methods to prevent overfitting the new concepts. In addition, three
novel concept datasets are constructed for this new task. In the experiments,
we show that our method effectively learns novel visual concepts from a few
examples without disturbing the previously learned concepts. The project page
is http://www.stat.ucla.edu/~junhua.mao/projects/child_learning.html
| [
{
"version": "v1",
"created": "Sat, 25 Apr 2015 06:45:35 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Oct 2015 02:36:05 GMT"
}
] | 2015-10-05T00:00:00 | [
[
"Mao",
"Junhua",
""
],
[
"Xu",
"Wei",
""
],
[
"Yang",
"Yi",
""
],
[
"Wang",
"Jiang",
""
],
[
"Huang",
"Zhiheng",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Learning like a Child: Fast Novel Visual Concept Learning from Sentence
Descriptions of Images
ABSTRACT: In this paper, we address the task of learning novel visual concepts, and
their interactions with other concepts, from a few images with sentence
descriptions. Using linguistic context and visual features, our method is able
to efficiently hypothesize the semantic meaning of new words and add them to
its word dictionary so that they can be used to describe images which contain
these novel concepts. Our method has an image captioning module based on m-RNN
with several improvements. In particular, we propose a transposed weight
sharing scheme, which not only improves performance on image captioning, but
also makes the model more suitable for the novel concept learning task. We
propose methods to prevent overfitting the new concepts. In addition, three
novel concept datasets are constructed for this new task. In the experiments,
we show that our method effectively learns novel visual concepts from a few
examples without disturbing the previously learned concepts. The project page
is http://www.stat.ucla.edu/~junhua.mao/projects/child_learning.html
| no_new_dataset | 0.918845 |
1510.00542 | Gaurav Sharma | Gaurav Sharma and Frederic Jurie | Local Higher-Order Statistics (LHS) describing images with statistics of
local non-binarized pixel patterns | CVIU preprint | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new image representation for texture categorization and facial
analysis, relying on the use of higher-order local differential statistics as
features. It has been recently shown that small local pixel pattern
distributions can be highly discriminative while being extremely efficient to
compute, which is in contrast to the models based on the global structure of
images. Motivated by such works, we propose to use higher-order statistics of
local non-binarized pixel patterns for the image description. The proposed
model does not require either (i) user specified quantization of the space (of
pixel patterns) or (ii) any heuristics for discarding low occupancy volumes of
the space. We propose to use a data driven soft quantization of the space, with
parametric mixture models, combined with higher-order statistics, based on
Fisher scores. We demonstrate that this leads to a more expressive
representation which, when combined with discriminatively learned classifiers
and metrics, achieves state-of-the-art performance on challenging texture and
facial analysis datasets, in a low-complexity setup. Further, it is complementary
to higher complexity features and when combined with them improves performance.
| [
{
"version": "v1",
"created": "Fri, 2 Oct 2015 09:41:39 GMT"
}
] | 2015-10-05T00:00:00 | [
[
"Sharma",
"Gaurav",
""
],
[
"Jurie",
"Frederic",
""
]
] | TITLE: Local Higher-Order Statistics (LHS) describing images with statistics of
local non-binarized pixel patterns
ABSTRACT: We propose a new image representation for texture categorization and facial
analysis, relying on the use of higher-order local differential statistics as
features. It has been recently shown that small local pixel pattern
distributions can be highly discriminative while being extremely efficient to
compute, which is in contrast to the models based on the global structure of
images. Motivated by such works, we propose to use higher-order statistics of
local non-binarized pixel patterns for the image description. The proposed
model does not require either (i) user specified quantization of the space (of
pixel patterns) or (ii) any heuristics for discarding low occupancy volumes of
the space. We propose to use a data driven soft quantization of the space, with
parametric mixture models, combined with higher-order statistics, based on
Fisher scores. We demonstrate that this leads to a more expressive
representation which, when combined with discriminatively learned classifiers
and metrics, achieves state-of-the-art performance on challenging texture and
facial analysis datasets, in a low-complexity setup. Further, it is complementary
to higher complexity features and when combined with them improves performance.
| no_new_dataset | 0.951459 |
1510.00562 | Lin Sun | Lin Sun, Kui Jia, Dit-Yan Yeung, Bertram E. Shi | Human Action Recognition using Factorized Spatio-Temporal Convolutional
Networks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human actions in video sequences are three-dimensional (3D) spatio-temporal
signals characterizing both the visual appearance and motion dynamics of the
involved humans and objects. Inspired by the success of convolutional neural
networks (CNN) for image classification, recent attempts have been made to
learn 3D CNNs for recognizing human actions in videos. However, partly due to
the high complexity of training 3D convolution kernels and the need for large
quantities of training videos, only limited success has been reported. This has
motivated us to investigate in this paper a new deep architecture which can
handle 3D signals more effectively. Specifically, we propose factorized
spatio-temporal convolutional networks (FstCN) that factorize the original 3D
convolution kernel learning as a sequential process of learning 2D spatial
kernels in the lower layers (called spatial convolutional layers), followed by
learning 1D temporal kernels in the upper layers (called temporal convolutional
layers). We introduce a novel transformation and permutation operator to make
factorization in FstCN possible. Moreover, to address the issue of sequence
alignment, we propose an effective training and inference strategy based on
sampling multiple video clips from a given action video sequence. We have
tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51).
Without using auxiliary training videos to boost the performance, FstCN
outperforms existing CNN based methods and achieves comparable performance with
a recent method that benefits from using auxiliary training videos.
| [
{
"version": "v1",
"created": "Fri, 2 Oct 2015 11:24:04 GMT"
}
] | 2015-10-05T00:00:00 | [
[
"Sun",
"Lin",
""
],
[
"Jia",
"Kui",
""
],
[
"Yeung",
"Dit-Yan",
""
],
[
"Shi",
"Bertram E.",
""
]
] | TITLE: Human Action Recognition using Factorized Spatio-Temporal Convolutional
Networks
ABSTRACT: Human actions in video sequences are three-dimensional (3D) spatio-temporal
signals characterizing both the visual appearance and motion dynamics of the
involved humans and objects. Inspired by the success of convolutional neural
networks (CNN) for image classification, recent attempts have been made to
learn 3D CNNs for recognizing human actions in videos. However, partly due to
the high complexity of training 3D convolution kernels and the need for large
quantities of training videos, only limited success has been reported. This has
triggered us to investigate in this paper a new deep architecture which can
handle 3D signals more effectively. Specifically, we propose factorized
spatio-temporal convolutional networks (FstCN) that factorize the original 3D
convolution kernel learning as a sequential process of learning 2D spatial
kernels in the lower layers (called spatial convolutional layers), followed by
learning 1D temporal kernels in the upper layers (called temporal convolutional
layers). We introduce a novel transformation and permutation operator to make
factorization in FstCN possible. Moreover, to address the issue of sequence
alignment, we propose an effective training and inference strategy based on
sampling multiple video clips from a given action video sequence. We have
tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51).
Without using auxiliary training videos to boost the performance, FstCN
outperforms existing CNN based methods and achieves comparable performance with
a recent method that benefits from using auxiliary training videos.
| no_new_dataset | 0.95253 |
1510.00585 | Bidyut Kr. Patra | Ranveer Singh, Bidyut Kr. Patra and Bibhas Adhikari | A Complex Network Approach for Collaborative Recommendation | 22 Pages | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collaborative filtering (CF) is the most widely used and successful approach
for personalized service recommendations. Among the collaborative
recommendation approaches, neighborhood based approaches enjoy a huge amount of
popularity, due to their simplicity, justifiability, efficiency and stability.
Neighborhood based collaborative filtering approach finds K nearest neighbors
to an active user or K most similar rated items to the target item for
recommendation. Traditional similarity measures use ratings of co-rated items
to find similarity between a pair of users. Therefore, traditional similarity
measures cannot compute effective neighbors in sparse dataset. In this paper,
we propose a two-phase approach, which generates user-user and item-item
networks using traditional similarity measures in the first phase. In the
second phase, two hybrid approaches HB1, HB2, which utilize structural
similarity of both the network for finding K nearest neighbors and K most
similar items to a target items are introduced. To show effectiveness of the
measures, we compared performances of neighborhood based CFs using
state-of-the-art similarity measures with our proposed structural similarity
measures based CFs. Recommendation results on a set of real data show that
proposed measures based CFs outperform existing measures based CFs in various
evaluation metrics.
| [
{
"version": "v1",
"created": "Fri, 2 Oct 2015 13:05:42 GMT"
}
] | 2015-10-05T00:00:00 | [
[
"Singh",
"Ranveer",
""
],
[
"Patra",
"Bidyut Kr.",
""
],
[
"Adhikari",
"Bibhas",
""
]
] | TITLE: A Complex Network Approach for Collaborative Recommendation
ABSTRACT: Collaborative filtering (CF) is the most widely used and successful approach
for personalized service recommendations. Among the collaborative
recommendation approaches, neighborhood based approaches enjoy a huge amount of
popularity, due to their simplicity, justifiability, efficiency and stability.
Neighborhood based collaborative filtering approach finds K nearest neighbors
to an active user or K most similar rated items to the target item for
recommendation. Traditional similarity measures use ratings of co-rated items
to find similarity between a pair of users. Therefore, traditional similarity
measures cannot compute effective neighbors in sparse dataset. In this paper,
we propose a two-phase approach, which generates user-user and item-item
networks using traditional similarity measures in the first phase. In the
second phase, two hybrid approaches HB1, HB2, which utilize structural
similarity of both the network for finding K nearest neighbors and K most
similar items to a target items are introduced. To show effectiveness of the
measures, we compared performances of neighborhood based CFs using
state-of-the-art similarity measures with our proposed structural similarity
measures based CFs. Recommendation results on a set of real data show that
proposed measures based CFs outperform existing measures based CFs in various
evaluation metrics.
| no_new_dataset | 0.951953 |
1409.6780 | Jouni Sir\'en | Travis Gagie, Aleksi Hartikainen, Juha K\"arkk\"ainen, Gonzalo
Navarro, Simon J. Puglisi, Jouni Sir\'en | Document Counting in Practice | This is a slightly extended version of the paper that was presented
at DCC 2015. The implementations are available at
http://jltsiren.kapsi.fi/rlcsa and https://github.com/ahartik/succinct | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of counting the number of strings in a collection
where a given pattern appears, which has applications in information retrieval
and data mining. Existing solutions are in a theoretical stage. We implement
these solutions and develop some new variants, comparing them experimentally on
various datasets. Our results not only show which are the best options for each
situation and help discard practically unappealing solutions, but also uncover
some unexpected compressibility properties of the best data structures. By
taking advantage of these properties, we can reduce the size of the structures
by a factor of 5--400, depending on the dataset.
| [
{
"version": "v1",
"created": "Wed, 24 Sep 2014 00:27:17 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Oct 2015 10:57:53 GMT"
}
] | 2015-10-02T00:00:00 | [
[
"Gagie",
"Travis",
""
],
[
"Hartikainen",
"Aleksi",
""
],
[
"Kärkkäinen",
"Juha",
""
],
[
"Navarro",
"Gonzalo",
""
],
[
"Puglisi",
"Simon J.",
""
],
[
"Sirén",
"Jouni",
""
]
] | TITLE: Document Counting in Practice
ABSTRACT: We address the problem of counting the number of strings in a collection
where a given pattern appears, which has applications in information retrieval
and data mining. Existing solutions are in a theoretical stage. We implement
these solutions and develop some new variants, comparing them experimentally on
various datasets. Our results not only show which are the best options for each
situation and help discard practically unappealing solutions, but also uncover
some unexpected compressibility properties of the best data structures. By
taking advantage of these properties, we can reduce the size of the structures
by a factor of 5--400, depending on the dataset.
| no_new_dataset | 0.943086 |